55 Years Ago: Manfred Mann Hit No. 1 With 'Do Wah Diddy Diddy'

Paul Jones found an earlier version in his record collection, and Manfred Mann transformed 'Do Wah Diddy Diddy' into a No. 1 breakthrough hit.

No. 74: Manfred Mann's Earth Band, 'Blinded By the Light' – Top 100 Classic Rock Songs

What does a British jazz-blues band know about life on the Jersey shore? Probably not much, but that didn't stop Manfred Mann's Earth Band from taking Bruce Springsteen's 'Blinded By the Light' to No. 1 in 1976 and onto our Top 100 Classic Rock Songs list. Dave Lifton
Benin, the Republic of Benin, a country in West Africa
Benin City, a city in Nigeria
Benin (river), a river in Nigeria
Kingdom of Benin, a historical kingdom in Africa
People's Republic of Benin, a historical people's republic in Africa
Q: C# Simple Chat application with Remoting: Client form not updating

I'm trying to create a simple chat application using remoting in C# (Visual Studio). My problem is that the client sends a connect request to the server with its port, the server receives this request, registers the client in a dictionary with the port and then sends back a greeting message to the client (just to test). I can print the message from the server in the client's console, but the form doesn't update. Here's the code.

In the shared library of the client and the server I have two interfaces, one for the services of the server:

    namespace ChatApplication
    {
        public interface IServerServices
        {
            void sendMessage(string client, string msg);
            void register(string client, int port);
        }
    }

And another for the services of a client:

    namespace ChatApplication
    {
        public interface IClientServices
        {
            void receiveMsg(string s);
        }
    }

In the server I have:

    namespace ChatApplication
    {
        public class ChatServer
        {
            static void Main(string[] args)
            {
                TcpChannel channel = new TcpChannel(8086);
                ChannelServices.RegisterChannel(channel, false);
                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(ServerServices), "RemoteObject", WellKnownObjectMode.Singleton);
                System.Console.ReadLine();
            }
        }
    }

And the class that implements the server services:

    namespace ChatApplication
    {
        public class ServerServices : MarshalByRefObject, IServerServices
        {
            private Dictionary<string, IClientServices> clients =
                new Dictionary<string, IClientServices>();

            public void register(string client, int port)
            {
                Console.WriteLine("Registered client: " + client + " on port: " + port);
                IClientServices cli = (IClientServices)Activator.GetObject(
                    typeof(IClientServices),
                    "tcp://localhost:" + port + "/RemoteClient");
                clients.Add(client, cli);
                cli.receiveMsg("Hello client " + client); // Just to test the form on the client
            }
        }
    }

In the client I have the main class of the client:

    namespace ChatApplication
    {
        static class ChatClient
        {
            [STAThread]
            static void Main()
            {
                Application.EnableVisualStyles();
                Application.SetCompatibleTextRenderingDefault(false);
                Form1 form = new Form1();
                Application.Run(form);
            }
        }
    }

The class that implements the services of the client:

    namespace ChatApplication
    {
        public class ClientServices : MarshalByRefObject, IClientServices
        {
            public static Form1 myForm;

            public ClientServices() { }

            public void receiveMsg(string s)
            {
                myForm.addMsg(s);
            }
        }
    }

And finally the form application of the client:

    namespace ChatApplication
    {
        public partial class Form1 : Form
        {
            TcpChannel newch;

            public Form1()
            {
                InitializeComponent();
            }

            public void addMsg(string s)
            {
                Console.WriteLine("received msg " + s);
                session.Text += s + "\r\n";
            }

            public void connectButton_Click(object sender, EventArgs e)
            {
                int port = Int32.Parse(portText.Text);
                newch = new TcpChannel(port);
                ChannelServices.RegisterChannel(newch, false);
                ClientServices.myForm = this;
                RemotingConfiguration.RegisterWellKnownServiceType(
                    typeof(ClientServices), "RemoteClient", WellKnownObjectMode.Singleton);
                IServerServices serverInstance = (IServerServices)Activator.GetObject(
                    typeof(IServerServices), "tcp://localhost:8086/RemoteObject");
                serverInstance.register(nickname.Text, port);
            }
        }
    }

The problem is that I see the text "Hello client ..." in the client's console, but the session element of the client's form does not update and the client crashes. The class that implements the client's services has the myForm field as static because I don't know another way to publish a remote object that has fields, so I did it that way and I guess it works; it's just that the form does not update. What am I doing wrong? Sorry for all the code, but I guess this way you can see what's going on here.

A: A wild stab in the dark: this may be a concurrency problem. Try serialising access to the client UI so you don't get overlapped access. For that matter, are you marshalling onto the UI thread when you access the UI? A cursory check of your UI code makes me think this isn't the problem (Console.WriteLine should do that for you), but until you post the actual error message no one can be very helpful.
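The marshalling question in the answer is worth making concrete. The sketch below is ours, not part of the original question or answer; it assumes session is an ordinary WinForms control. receiveMsg is called on a remoting worker thread, and WinForms controls may only be touched from the thread that created them, so addMsg should marshal the update across with Control.Invoke:

```csharp
// Sketch only (not the asker's code): marshal the UI update onto the
// form's thread. The remoting callback arrives on a worker thread, and
// writing session.Text from there throws on .NET 2.0 and later.
public void addMsg(string s)
{
    if (InvokeRequired)
    {
        // Re-invoke this same method on the UI thread.
        Invoke(new Action<string>(addMsg), s);
        return;
    }
    Console.WriteLine("received msg " + s);
    session.Text += s + "\r\n";
}
```

InvokeRequired and Invoke are standard members of System.Windows.Forms.Control; the cross-thread write to session.Text is the usual cause of the crash described in the question (typically an InvalidOperationException).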
Not many people have heard of Virginia's James Anderson. This American patriot made significant contributions to the country's independence, and he stands tall among the men and women who supported the Revolution. Anderson was born in 1740 in Gloucester. Moving to nearby Williamsburg, he established himself as a blacksmith by 1762. Excellent workmanship earned him a reputation as one of the town's finest blacksmiths. He was soon appointed armourer, a responsible public position that included oversight and upkeep of the arms stored in the town's magazine. In April 1775, Lord Dunmore (John Murray), Virginia's royal governor, ordered all gunpowder removed from the magazine. This decision, among others, inflamed Virginians to action. It also marked a dramatic change in Lord Dunmore's political fortunes. He quickly became highly unpopular. Forced to abandon the governor's residence in Williamsburg, he sought safety with his family on a British ship. As the war began, Anderson and about 40 employees carried out their work in his shop, a single-forge smithy behind his house. But by 1777, the wartime needs of the state's capital overburdened the workplace. Anderson expanded the existing building. Then he added another freestanding shop. By the second half of the war, Anderson's Blacksmith Shop and Public Armoury practically became a community of its own. The demands of the Continental Army and Navy, along with the ordinary daily workload, remained constant for Anderson. On the same day, he might complete 1,000 refurbished muskets for troops while an apprentice repaired a lock for a local farmer. During the stressful years that brought opposing armies to the Williamsburg area, colonists such as James Anderson, and many more who remain unknown, provided their expertise, sacrifice and support to win the Revolution. Similar to a number of structures that once were part of daily life in Williamsburg, Anderson's original shop is long gone.
But the recent reconstruction of Anderson's Blacksmith Shop and Public Armoury in Colonial Williamsburg allows visitors to step back in time to experience an important business in the town as it appeared during the American Revolution. The reconstructed main armoury building includes four blacksmith forges and the kitchen. Reconstruction of the tin shop and several other buildings on the site will continue for another year. When completed, the site will include the armoury, the kitchen, a tinsmith's shop, an outdoor forge, a work shop, two storage buildings, a privy, a bake oven and a wellhead.
\section{Introduction} Markov random fields (MRF) are frequently used as prior distributions in spatial statistics. A common situation is that we have an observed or latent field $x$ which we model as an MRF, $p(x|\phi)$, conditioned on a vector of model parameters $\phi$. The most common situation in the literature is to consider $\phi$ as fixed, see for instance examples in \citet{Besag1986} and \citet{moller2003}, but several articles have also considered a fully Bayesian approach by assigning a prior to $\phi$. A fully Bayesian model is computationally simplest when $x|\phi$ is a Gaussian Markov random field (GMRF) and this case is therefore especially well developed. A flexible implementation of the GMRF case is given in the integrated nested Laplace approximation (INLA) software, see \citet{art129} and \citet{art143}. The case when the components of $x$ are discrete variables is computationally much harder and therefore less developed in the literature. However, some articles have considered the fully Bayesian approach also in this case, see in particular the early \citet{art109} and \citet{art144} and the more recent \citet{art120}, \citet{art117}, \citet{phd6}, \citet{art145} and \citet{tjelmeland2012}. MRFs are a very flexible class of models. Formally, any distribution is an MRF with respect to a neighbourhood system where all nodes are neighbours of each other. For the MRF formulation to be of any help, however, reasonably small neighbourhoods must be adopted. The typical choice in the literature is to assume each node to have the nearest four, eight or 24 other nodes as neighbours. Moreover, in the model specification it is common to restrict oneself to models that include interactions between pairs of nodes only. Such pairwise interaction priors are just token priors, unable to specify more spatial structure than that nodes close to each other should tend to have the same value.
In the literature it is often argued that such token priors are sufficient in many applications, as the information that neighbour nodes should tend to have the same value is precisely the information lacking in the observed data. In particular, one typically gets much better results based on such a token prior than by not including any spatial prior information in the analysis at all. The main reason for resorting to pairwise interaction priors, in addition to the argument that these are good enough, is that the class of MRFs with higher-order interactions is so large that it becomes difficult both to select a reasonable parametric form for the prior and to specify associated parameter values, not to mention the specification of a hyper-prior if a fully Bayesian approach is adopted. However, \citet{DescombesEtAl} and \citet{TjelmelandBesag} demonstrate that it is possible to specify MRFs with higher-order interactions that are able to model more spatial structure than a pairwise interaction MRF, and where the model parameters have a reasonable interpretation. In this article we consider the fully Bayesian approach and for simplicity we limit the attention to the case where the components of $x$ are binary. Our focus is on the specification of a prior distribution and on simulation from the associated posterior distribution. We define priors both on the parametric form of the MRF and on the parameter values. To the best of our knowledge this is the first attempt at putting a prior on the parametric form of a discrete MRF. Other articles considering a fully Bayesian approach in such a setting use a fixed parametric model and put a prior on the parameter values only. One should note that by assigning a prior to the parametric structure of the model, including the number of parameters, we get an automatic model choice when simulating from the posterior distribution.
To be able to define a reasonable prior it is essential to adopt a model where the parameters have a natural interpretation. In this article we consider two ways to parametrise the MRF. The first approach is inspired by the so-called $u$-parameters commonly used in the log-linear and graphical model literature for contingency tables \citep{Bishop1975,Forster1999,massam2009,overstall2012}. Here the parameters are interactions of different orders. It is easy to limit the complexity of the model by restricting some of the parameters to be zero, but we argue that the interpretation of the parameters is difficult. The second parametrisation we consider is inspired by the MRF formulation in \citet{TjelmelandBesag}. The parameters then represent potentials for configurations in maximal cliques, and we limit the model complexity by restricting different configurations to have the same potential. In \citet{TjelmelandBesag} this grouping of configurations is done manually, whereas we assign a prior to the grouping so that it is done automatically in the posterior simulation. Thereby we do not need, for example, to specify a priori whether or not the field is isotropic. We argue that the interpretation of the configuration potentials is much easier than for the interactions, and unless particular prior information is available and suggests otherwise, it is natural to assume the configuration potentials to be on the same scale. To explore the resulting posterior distribution we construct a reversible jump MCMC (RJMCMC) algorithm \citep{green1995}. To run this algorithm we have to cope with the computationally intractable normalising constant of the MRF. In the literature several strategies for handling this have been proposed. We adopt an approximation strategy for binary MRFs introduced in \citet{phd6}, where a partially ordered Markov model (POMM), see \citet{art119}, approximation to the MRF is defined. We simply replace the MRF with the corresponding POMM approximation.
The article has the following organisation. In Section \ref{sec:MRF} we discuss the two parametrisations of binary MRFs, and in particular we identify the maximal number of free parameters for a model with specified maximal cliques. In Section \ref{sec:prior} we define our prior for $\phi$, and in Section \ref{sec:sampling} we discuss how to handle the computationally intractable normalising constant and describe our RJMCMC algorithm for simulating from the posterior distribution. In Section \ref{sec:results} we present results for one simulated data example and for one real data example. One additional simulated example is given in the supplementary materials. Finally, some closing remarks are provided in Section \ref{sec:discussion}. \section{MRF} \label{sec:MRF} In this section we give a brief introduction to MRFs, see \cite{cressie1993} and \cite{moller2003} for more details, and in particular we focus on binary MRFs and the parametrisation in this case. We close with one example of a binary MRF, the Ising model. This section provides the theoretical background needed in order to understand the construction of our prior distribution in Section \ref{sec:prior}, and the RJMCMC algorithm given in Section \ref{sec:sampling}. \subsection{Binary MRF} \label{sec:binaryMRF} Consider a rectangular lattice of dimension $n\times m$, and let the nodes be identified by $(i,j)$ where $i=0,...,n-1$ and $j=0,...,m-1$. To each node $(i,j)\in S=\{(i,j);i=0,...,n-1,j=0,...,m-1\}$ we associate a binary variable $x_{i,j}\in \{0,1\}$, and let $x=(x_{i,j};(i,j)\in S)$ be the collection of these binary variables. We let $x_A=(x_{i,j};(i,j)\in A)$ denote the collection of the binary variables with indices belonging to an index set $A\subseteq S$, and let $x_{-(i,j)}=x_{S\setminus \{(i,j)\}}$. Associating zero with black and one with white we may thereby say that $x$ specifies a colouring of the nodes.
We let $\mathcal{N}=\{\mathcal{N}_{0,0},...,\mathcal{N}_{n-1,m-1}\}$ be a neighbourhood system on $S$, where $\mathcal{N}_{i,j}\subseteq S \setminus \{(i,j)\}$ is the set of neighbour nodes of node $(i,j)$. We assume symmetry in the neighbour sets, so if $(i,j)\in \mathcal{N}_{t,u}$, then also $(t,u)\in \mathcal{N}_{i,j}$. Now, $x$ is a binary MRF if $p(x)>0$ for all $x$, and $p(x_{i,j}|x_{-(i,j)})$ fulfils the Markov property \begin{equation} p(x_{i,j}|x_{-(i,j)})=p(x_{i,j}|x_{\mathcal{N}_{i,j}}) \text{ for all } (i,j)\in S. \end{equation} A clique is defined to be a set $\lambda\subseteq S$, where $(i,j)\in \mathcal{N}_{t,u}$ for all distinct pairs of nodes $(i,j),(t,u)\in \lambda$, and we denote the set of all cliques by $\mathcal{L}$. Note that by this definition sets containing only one node and the empty set are cliques. A maximal clique is defined to be a clique that is not a subset of another clique, and we denote the set of all maximal cliques by $\mathcal{L}_m$. Moreover, for $\lambda\in \mathcal{L}$ we let $\mathcal{L}_m^\lambda$ denote the set of all maximal cliques that contain $\lambda$, i.e. $\mathcal{L}_m^\lambda=\{\Lambda \in \mathcal{L}_m;\lambda \subseteq \Lambda\}$. In the following we use $\Lambda$ and $\Lambda^\star$ to denote maximal cliques, i.e. $\Lambda,\Lambda^* \in \mathcal{L}_m$, whereas we use $\lambda$ and $\lambda^\star$ to denote cliques that do not need to be maximal, i.e. $\lambda,\lambda^\star\in \mathcal{L}$. To denote an $x$ where $x_{i,j}=1$ for all $(i,j)\in A$ for some $A\subseteq S$ and $x_{i,j}=0$ otherwise, we use $\bold 1^A = \{ x_{i,j}=I( (i,j)\in A); (i,j)\in S\}$. Thereby a colouring of the nodes in a maximal clique $\Lambda\in\mathcal{L}_m$ may be specified by $\bold 1^\lambda_\Lambda$, where $\lambda\subseteq \Lambda$ specifies the set of nodes in $\Lambda$ that have the value one.
According to the Hammersley-Clifford theorem \citep{Clifford}, the most general form the distribution $p(x)$ of an MRF can take is \begin{equation} p(x)=Z\exp\{ U(x)\} \mbox{~~with~~} \ U(x)=\sum_{\Lambda \in \mathcal{L}_m} V_{\Lambda}(x_{\Lambda}), \label{eq:cliqueRep} \end{equation} where $Z$ is the computationally demanding normalising constant, $U(x)$ is frequently called the energy function, and $V_{\Lambda}(x_{\Lambda})$ is a potential function for $\Lambda$. A naive parametrisation of $V_{\Lambda}(x_{\Lambda})$ is to introduce one parameter for each possible $\Lambda\in \mathcal{L}_m$ and $x_{\Lambda}\in \{0,1\}^{|\Lambda|}$ by setting \begin{equation}\label{eq:phi} V_{\Lambda}(\bold 1^\lambda_{\Lambda})=\phi^\lambda_{\Lambda}. \end{equation} It is a well-known fact that the $\phi^\lambda_\Lambda$ parameters do not constitute a unique representation of $U(x)$. Thereby, in the resulting parametric model $p(x)$ the $\phi^\lambda_\Lambda$ parameters are not identifiable, meaning that different choices for the $\phi^\lambda_\Lambda$ parameters may give the same model $p(x)$. For example, adding the same value to all $\phi^\lambda_\Lambda$ parameters will not change the model, as this will be compensated for by a corresponding change in the normalising constant $Z$. If the set of maximal cliques $\mathcal{L}_m$ consists of, for example, all $2\times 2$ blocks of nodes, a perhaps less obvious way to change the parameter values without changing either the model or the normalising constant, is to add an arbitrary value to $\phi^{\{ (i,j)\}}_\Lambda$ for some $(i,j)\in\Lambda\in\mathcal{L}_m$, and to subtract the same value from $\phi^{\{(i,j)\}}_{\Lambda^\star}$ for some $\Lambda^\star\in\mathcal{L}_m$, $\Lambda^\star\neq \Lambda$ for which $(i,j)\in\Lambda^\star$. An alternative way to represent an MRF is through a parametrisation of the cliques.
The energy function $U(x)$ is a pseudo-Boolean function and when it is given as in (\ref{eq:cliqueRep}) \cite{tjelmeland2012} show that it can be represented as \begin{equation} U(x)=\sum_{\lambda \in \mathcal{L}}\beta^{\lambda}\prod_{(i,j)\in\lambda}x_{i,j}, \label{eq:DAGrep} \end{equation} where $\beta^{\lambda}$ is referred to as the interaction parameter for clique $\lambda$, which is said to be of $|\lambda|$'th order. More details on pseudo-Boolean functions and their properties can be found in \cite{Grabisch2000} and \cite{Hammer1992}. Since this representation consists of linearly independent functions of $x$, it is clear that the set of interaction parameters is a unique representation of $U(x)$. Furthermore, in the corresponding parametric model $p(x)$ the $\beta^\lambda$ parameters become identifiable if we fix $\beta^\emptyset$ to zero (say). We note in passing that \citet{Besag1974} uses the representation in \eqref{eq:DAGrep} in a proof of the Hammersley--Clifford theorem. In the following we define a set of constraints on the $\phi^\lambda_\Lambda$ parameters in \eqref{eq:cliqueRep} and show that subject to these constraints there is a one-to-one relation between the $\phi^\lambda_\Lambda$ parameters and the interaction parameters $\beta^\lambda$. The constrained $\phi^\lambda_\Lambda$ parameters thereby constitute an alternative unique representation of $U(x)$. \begin{definition}\label{def:constraints} The constrained set of $\phi^\lambda_\Lambda$ parameters is defined by requiring that $\phi^\lambda_\Lambda=\phi^\lambda_{\Lambda^\star}$ for all $\Lambda,\Lambda^\star\in\mathcal{L}_m$, $\lambda\subseteq \Lambda \cap \Lambda^\star$. To simplify the notation we then write $\phi^\lambda_\Lambda=\phi^\lambda$.
\end{definition} To understand the implication of the constraint one may again consider the situation where the set of maximal cliques $\mathcal{L}_m$ consists of all $2\times 2$ blocks of nodes, and focus on the two overlapping maximal cliques $\Lambda = \{ (i,j-1),(i+1,j-1),(i,j),(i+1,j)\}$ and $\Lambda^\star = \{(i,j),(i+1,j),(i,j+1),(i+1,j+1)\}$. For $\lambda= \{ (i,j),(i+1,j)\}$ the constraint is that the potential $V_\Lambda(x_\Lambda)$ for the colouring $\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 0 & 1 \end{array}$ in $\Lambda$ is the same as the potential $V_{\Lambda^\star}(x_{\Lambda^\star})$ for the colouring $\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 0 \\ 1 & 0 \end{array}$ in $\Lambda^\star$. One should also note that the constraint implies that $\phi^\emptyset_\Lambda$ is the same for all $\Lambda\in\mathcal{L}_m$, so in the $2\times 2$ maximal cliques case the potential for the colouring $\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 0 & 0 \end{array}$ must be the same for all maximal cliques. \begin{theorem}\label{th:onetoone} Consider an MRF and constrain the $\phi$ parametrisation of the potential functions as described in Definition \ref{def:constraints}. Then there is a one-to-one relation between $\{\beta^\lambda;\lambda\in\mathcal{L}\}$ and $\{\phi^\lambda;\lambda\in\mathcal{L}\}$. \end{theorem} The proof is given in the supplemental material, and the result is shown by establishing recursive equations showing how to compute the $\beta^\lambda$'s from the $\phi^\lambda$'s and vice versa. To simplify the definition of a prior for the parameter vector of an MRF in the next section, we first limit the attention to {\em stationary} MRFs defined on a rectangular $n\times m$ lattice, and to obtain stationarity we assume {\em torus boundary conditions}.
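The interaction representation in \eqref{eq:DAGrep} is easy to illustrate numerically. The following Python sketch is ours, for illustration only, and is not the implementation used in the paper: it evaluates the energy function $U(x)$ from a set of interaction parameters $\beta^\lambda$ (with $\beta^\emptyset=0$) and normalises $p(x)\propto\exp\{U(x)\}$ by brute force on a lattice small enough to enumerate.

```python
# Illustrative sketch (not the paper's code): the interaction representation
# U(x) = sum_lambda beta^lambda * prod_{(i,j) in lambda} x_{i,j}, with
# beta^emptyset = 0, and brute-force normalisation of p(x) ~ exp{U(x)}.
import math
from itertools import product

def energy(x, beta):
    """Energy U(x); x maps node -> 0/1, beta maps frozenset of nodes -> value."""
    return sum(b * math.prod(x[node] for node in clique)
               for clique, b in beta.items())

def brute_force_probs(nodes, beta):
    """p(x) = exp{U(x)} / sum_x' exp{U(x')} over all 2^|nodes| configurations."""
    configs = list(product([0, 1], repeat=len(nodes)))
    weights = [math.exp(energy(dict(zip(nodes, cfg)), beta)) for cfg in configs]
    Z = sum(weights)
    return {cfg: w / Z for cfg, w in zip(configs, weights)}

# Two neighbouring nodes with equal first-order interactions and a positive
# pairwise interaction, which favours the all-ones configuration.
nodes = [(0, 0), (0, 1)]
beta = {frozenset({(0, 0)}): 0.5,
        frozenset({(0, 1)}): 0.5,
        frozenset({(0, 0), (0, 1)}): 1.0}
p = brute_force_probs(nodes, beta)
```

With these (arbitrary, illustrative) parameter values the configuration $(1,1)$ receives the largest probability, and the two single-one configurations receive equal probability, as the symmetry of the first-order interactions dictates.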
In the following we define the concepts of stationarity and torus boundary conditions and state two theorems which identify what properties the $\{\beta^\lambda;\lambda\in\mathcal{L}\}$ parameters and the $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ parameters must have for the MRF to be stationary. \begin{definition}\label{def:torus} If, for a rectangular lattice $S = \{ (i,j);i=0,\ldots,n-1, j=0,\ldots,m-1\}$, the translation of a node $(i,j)\in S$ with an amount $(t,u)\in S$ is defined to be \begin{equation*} (i,j) \oplus (t,u) = (i+t \mod n, j+u \mod m), \end{equation*} we say that the lattice has torus boundary conditions. \end{definition} To denote translation of a set of nodes $A\subseteq S$ by an amount $(t,u)\in S$ we write $A \oplus (t,u) = \{ (i,j)\oplus (t,u) ; (i,j)\in A\}$. With this notation stationarity of an MRF defined on a rectangular lattice with torus boundary conditions can be defined as follows. \begin{definition}\label{def:stationarity} An MRF $x$ defined on a rectangular lattice $S$ with torus boundary conditions is said to be stationary if and only if $p(\bold 1^A) = p(\bold 1^{A \oplus (t,u)})$ for all $A\subseteq S$ and $(t,u)\in S$. \end{definition} To explore what restrictions the stationarity assumption puts on the $\beta^\lambda$ and $\phi^\lambda$ parameters we assume the set of maximal cliques to consist of all possible translations of a given nonempty template set $\Lambda_0\subset S$, i.e. \begin{equation}\label{eq:Lm} \mathcal{L}_m = \{ \Lambda_0 \oplus (t,u); (t,u)\in S\}. \end{equation} For example, with $\Lambda_0 = \{ (0,0),(0,1),(1,0),(1,1)\}$ the set of maximal cliques will consist of all $2\times 2$ blocks of nodes. One should note that with the torus boundary assumption there are always $|\mathcal{L}_m|=nm$ maximal cliques.
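The torus translation $\oplus$ and the construction of $\mathcal{L}_m$ in \eqref{eq:Lm} can be mirrored directly in code. The following Python sketch (ours, purely illustrative) generates the maximal cliques obtained by translating the $2\times 2$ template over a small torus and confirms that $|\mathcal{L}_m|=nm$:

```python
# Illustrative sketch (not the paper's code): torus translation
# (i,j) (+) (t,u) = ((i+t) mod n, (j+u) mod m), and the set of maximal
# cliques L_m obtained by translating a template clique Lambda_0.

def translate(node, shift, n, m):
    """Torus translation of a single node by (t, u)."""
    (i, j), (t, u) = node, shift
    return ((i + t) % n, (j + u) % m)

def maximal_cliques(template, n, m):
    """All nm translates of the template set, as a set of frozensets."""
    return {frozenset(translate(node, (t, u), n, m) for node in template)
            for t in range(n) for u in range(m)}

# 2x2 block template Lambda_0 = {(0,0),(0,1),(1,0),(1,1)} on a 4x5 torus.
Lambda0 = {(0, 0), (0, 1), (1, 0), (1, 1)}
n, m = 4, 5
Lm = maximal_cliques(Lambda0, n, m)
```

Note that cliques translated past the lattice edge wrap around, so the translate by $(n-1,m-1)$ is a $2\times 2$ block straddling all four corners; this is exactly what the torus boundary assumption buys.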
\begin{theorem}\label{th:betaStationarity} An MRF $x$ defined on a rectangular lattice $S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with torus boundary conditions and $\mathcal{L}_m$ given in \eqref{eq:Lm} is stationary if and only if $\beta^{\lambda}=\beta^{\lambda\oplus (t,u)}$ for all $\lambda \in \mathcal{L}$, $(t,u)\in S$. We then say that $\beta^\lambda$ is translationally invariant. \end{theorem} The proof is again given in the supplemental material. We prove the if part of the theorem by induction on $|\lambda|$, and the only if part by direct manipulation with the expression for the energy function. To better understand the effect of the theorem we can again consider the $2\times 2$ maximal clique case, i.e. $\mathcal{L}_m$ is given by \eqref{eq:Lm} with $\Lambda_0=\{ (0,0),(0,1),(1,0),(1,1)\}$. The translational invariance means that all first-order interactions $\{\beta^{\{ (i,j)\}},(i,j)\in S\}$ must be equal and in the following we denote their common value by $\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} & \\\framebox(5.0,5.0){} & \end{array}}$, where the idea is that the superscript represents any node $(i,j)\in S$. Correspondingly we use $\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \framebox(5.0,5.0){} \\ &\end{array}}$, where the superscript represents any two horizontally first-order neighbours, to denote the common value for all $\{\beta^{\{ (0,0),(0,1)\}\oplus (t,u)},(t,u)\in S\}$.
Continuing in this way we get, in addition to $\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} & \\\framebox(5.0,5.0){} & \end{array}}$, $\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \framebox(5.0,5.0){} \\ &\end{array}}$ and the constant term $\beta^\emptyset$, the parameters $\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \\ &\framebox(5.0,5.0){}\end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ & \framebox(5.0,5.0){} \end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}$, $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}$ and $ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}$.
We collect the eleven parameter values necessary to represent $U(x)$ in this stationary MRF case into a vector which we denote by $\beta$, i.e. \begin{equation} \beta = \left( \beta^\emptyset,\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} & \\\framebox(5.0,5.0){} & \end{array}},\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \framebox(5.0,5.0){} \\ &\end{array}},\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \\ &\framebox(5.0,5.0){}\end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ & \framebox(5.0,5.0){} \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}, \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}\right).
\label{eq:beta2x2} \end{equation} The next theorem gives a similar result for the $\phi^\lambda$ parameters as Theorem \ref{th:betaStationarity} did for the interaction parameters $\beta^\lambda$. \begin{theorem}\label{th:phiStationarity} An MRF $x$ defined on a rectangular lattice $S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with torus boundary conditions and $\mathcal{L}_m$ given in \eqref{eq:Lm} is stationary if and only if $\phi^{\lambda}=\phi^{\lambda\oplus (t,u)}$ for all $\lambda \in \mathcal{L}$ and $(t,u)\in S$. We then say that $\phi^\lambda$ is translationally invariant. \end{theorem} The proof is again given in the supplemental material. Given the result in Theorem \ref{th:betaStationarity} it is sufficient to show that $\phi^\lambda$ is translationally invariant if and only if $\beta^\lambda$ is translationally invariant, and we show this by induction on $|\lambda|$. It should be noted that the interpretation of the $\phi^\lambda$ parameters is very different from the interpretation of the $\beta^\lambda$ parameters. Whereas the $\beta^\lambda$ parameters relate to cliques $\lambda$ of different sizes, all the $\phi^\lambda$'s represent potentials of maximal cliques $\Lambda\in\mathcal{L}_m$, which are all of the same size. The effect of the above theorem is that we get groups of configurations in maximal cliques that must be assigned the same potential, hereafter referred to as configuration sets. We let $\cal C$ denote the set of these configuration sets.
In the $2\times 2$ maximal clique case for example, we get \begin{eqnarray} \nonumber {\cal C} &=& \left\{ \left\{ \left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 0 & 0 \end{array}\right]\right\}, \left\{ \left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 1 & 0 \\ 0 & 0 \end{array}\right],\left[ \scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 0 & 0 \end{array}\right],\left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 1 & 0 \end{array}\right],\left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 0 & 1 \end{array}\right]\right\}, \left\{ \left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 0 & 0 \end{array}\right],\left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 1 & 1 \end{array}\right]\right\}, \left\{ \left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 1 & 0 \\ 1 & 0 \end{array}\right],\left[\scriptsize \def0.6}\begin{array}{cccccccccccc{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 0 & 1 \end{array}\right]\right\}, \right.\\ && \left. 
\left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 0 \\ 0 & 1 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 1 & 0 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 0 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 0 & 1 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 0 \\ 1 & 1 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 1 & 1 \end{array}\right]\right\}, \left\{\left[ \scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}\right]\right\} \right\} \label{eq:confsubset} \end{eqnarray} We denote these sets of configurations by $c^0$, $c^1$, $c^{11}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}$, $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}$ and $c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}$ when listed in the same order as in \eqref{eq:confsubset}, where the idea
of the notation is that the $1$'s in the superscript can be placed anywhere inside a maximal clique and the remaining nodes take the value zero. One should note that a similar notation can be used for other sets of maximal cliques. In the $3\times 3$ maximal clique case we have for example \[ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}} = \left\{\left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 1 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right],\left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 0 & 1 & 1 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{array}\right],\left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 0 & 0 & 0 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right], \left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 0 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{array}\right]\right\}. \] Associated with each member $c\in {\cal C}$ we thus have a corresponding parameter value $\phi(c)$, which is the potential assigned to any maximal clique configuration in the set $c$. We use corresponding superscripts for the $\phi$ parameters as we did for the sets $c\in{\cal C}$.
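The partition of maximal clique configurations into configuration sets can also be generated mechanically: for lattices larger than the maximal clique, two configurations belong to the same set precisely when one is a lattice translation of the other, i.e. when they share the same pattern of $1$'s up to the placement of its bounding box inside the clique. The following Python sketch (function names are our own) reproduces the eleven sets of the $2\times 2$ case:

```python
from itertools import product

def canonical(pattern):
    """Translate a set of (row, col) coordinates so that its bounding box
    starts at (0, 0); configurations related by a lattice translation then
    share the same canonical form."""
    if not pattern:
        return frozenset()
    rmin = min(r for r, c in pattern)
    cmin = min(c for r, c in pattern)
    return frozenset((r - rmin, c - cmin) for r, c in pattern)

def configuration_sets(k, l):
    """Group all binary configurations on a k x l maximal clique into
    configuration sets: two configurations fall in the same set when one
    is a translation of the other within the clique."""
    groups = {}
    for values in product([0, 1], repeat=k * l):
        pattern = frozenset((i // l, i % l) for i, v in enumerate(values) if v)
        groups.setdefault(canonical(pattern), []).append(values)
    return groups

sets_2x2 = configuration_sets(2, 2)
print(len(sets_2x2))                               # -> 11 configuration sets
print(sorted(len(g) for g in sets_2x2.values()))   # -> [1, 1, 1, 1, 1, 1, 1, 1, 2, 2, 4]
```

The group sizes match the listing above: one set of four (the single $1$'s), two sets of two (the horizontal and vertical pairs), and eight singleton sets.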
In the $2\times 2$ maximal clique case we thereby get the parameter vector \[ \phi = \left (\phi^0, \phi^1, \phi^{11}, \phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}, \phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}},\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}} \right), \] where for example $\phi^1$ is the potential for the four maximal clique configurations in $c^1$. We end this section with a discussion of how the above stationary MRF defined with torus boundary conditions can be modified in the free boundary case. Using the same template maximal clique $\Lambda_0$ as before, the set of maximal cliques $\mathcal{L}_m$ now has to be redefined relative to the torus boundary condition case. In the free boundary case we let $\mathcal{L}_m$ contain all translations of $\Lambda_0$ that are completely inside our $n\times m$ lattice, i.e. \[ \mathcal{L}_m = \{ \Lambda_0 + (t,u); t=-n,\ldots,n,u=-m,\ldots,m, \Lambda_0 + (t,u)\subseteq S\}, \] where $\Lambda_0 + (t,u) = \{ (i+t,j+u); (i,j)\in \Lambda_0\}$. One should note that for a free boundary MRF the translational invariance property of the $\phi^\lambda$ parameters identified in Theorem \ref{th:phiStationarity} no longer applies, and neither will such a model be stationary.
However, the extra free $\phi$ parameters that may be introduced in the free boundary case will only model properties sufficiently close to a boundary of the lattice. Our strategy in the free boundary case is to keep the same $\phi$ parameter vector as in the torus case, to adopt translational invariant potential functions $V_\Lambda(\bold 1_\Lambda^\lambda)=\phi^\lambda$ for all maximal cliques $\Lambda\in\mathcal{L}_m$ just as in the torus case, but to add non-zero potential functions for some (non-maximal) cliques at the boundaries of the lattice. Our motivation for this is to reduce the boundary effect and, hopefully, to get a model which is less non-stationary. To define our non-zero potential functions at the boundaries, imagine that our $n\times m$ lattice is included in a much larger lattice and that this extended lattice also has maximal cliques that are translations of $\Lambda_0$. We then include a non-zero potential function for every maximal clique in the extended lattice which is partly inside and partly outside our original $n\times m$ lattice. In such a maximal clique in the extended lattice, let $\lambda$ denote the set of nodes that are inside our $n\times m$ lattice, and let $\lambda^\star$ denote the set of nodes outside. As we have assumed that the maximal clique is partly inside and partly outside our original $n\times m$ lattice, $\lambda$ and $\lambda^\star$ are both non-empty and $\lambda\cup\lambda^\star$ is clearly a maximal clique in the extended lattice. For the (non-maximal) clique $\lambda$ we define the potential function \begin{equation}\label{Uborder} V_\lambda(x_\lambda) = \frac{1}{2^{|\lambda^\star|}}\sum_{x_{\lambda^\star}} V_{\lambda\cup\lambda^\star}(x_{\lambda\cup\lambda^\star}), \end{equation} where $V_{\lambda\cup\lambda^\star}(x_{\lambda\cup\lambda^\star})$ is the same (translational invariant) potential function we are using for maximal cliques inside our $n\times m$ lattice. 
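The averaging in \eqref{Uborder} is straightforward to implement: for a boundary clique, sum the full-clique potential over all $0/1$ assignments of the outside nodes $\lambda^\star$ and divide by $2^{|\lambda^\star|}$. A minimal Python sketch follows; the node labels and the toy potential used in the example are illustrative assumptions, not part of the model specification:

```python
from itertools import product

def border_potential(V, inside, outside):
    """Potential function for the truncated clique `inside`, obtained by
    averaging the full maximal-clique potential V over all 0/1 values of
    the `outside` nodes, as in the boundary construction (eq. Uborder)."""
    def V_lambda(x_inside):
        total = 0.0
        for x_out in product([0, 1], repeat=len(outside)):
            # Assemble the full clique configuration: observed inside
            # values plus one hypothetical assignment of the outside nodes.
            full = dict(zip(inside, x_inside))
            full.update(zip(outside, x_out))
            total += V(full)
        return total / 2 ** len(outside)
    return V_lambda

# Hypothetical example: a 2x2 clique whose right column falls outside the
# lattice; V is a toy potential equal to the number of ones in the clique.
V = lambda clique: float(sum(clique.values()))
V_border = border_potential(V, inside=[(0, 0), (1, 0)], outside=[(0, 1), (1, 1)])
print(V_border((1, 1)))   # 2 ones inside plus an average of 1 outside -> 3.0
```

This corresponds to treating the unobserved outside nodes as independent fair coin flips, exactly as described below.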
One can note that (\ref{Uborder}) corresponds to averaging over the values in the nodes outside our lattice, assuming them to be independent and to take the values $0$ or $1$ each with probability one half. \subsection{Example: The Ising model} \label{sec:Ising} The Ising model \citep{Besag1986} is given by \begin{equation} p(x)=Z\exp \left \{- \omega \sum_{(i,j)\sim (t,u)}I(x_{i,j}\neq x_{t,u})\right\}, \label{eq:Ising} \end{equation} where the sum is over all horizontally and vertically adjacent sites, and $\omega$ is a parameter controlling the probability of adjacent sites having the same value. We use the Ising model as an example also later in the paper, and in particular we fit an MRF with $2\times 2$ maximal cliques to data simulated from this model. Assuming torus boundary conditions and using that for binary variables we have $I(x_{i,j}\neq x_{t,u})=x_{i,j}+x_{t,u}-2x_{i,j}x_{t,u}$, we can rewrite \eqref{eq:Ising} as \begin{equation*} p(x)=Z\exp \left \{-4\omega\sum_{(i,j)\in S}x_{i,j} +2\omega\sum_{(i,j)\sim(t,u)}x_{i,j}x_{t,u} \right\}. \end{equation*} Thus, $\beta^\emptyset$ can be given any value, as this will be compensated for by the normalising constant, whereas $\beta^{\{(i,j)\}}=-4\omega$, $\beta^{\{(i,j),(i,j)\oplus(1,0)\}}=2\omega$ and $\beta^{\{(i,j),(i,j)\oplus(0,1)\}}=2\omega$, and $\beta^{\lambda}=0$ for all other cliques $\lambda$. The corresponding $\phi^{\lambda}$ parameters can then be found using the recursive equation (S2) identified in the proof of Theorem \ref{th:onetoone}.
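As an alternative to the recursion (S2), which we do not reproduce here, the resulting $\phi$ values can be checked by sharing each Ising energy term equally among the maximal cliques that contain it: on the torus each node lies in four $2\times 2$ maximal cliques and each horizontally or vertically adjacent pair in two. A minimal Python sketch of this check (the function name is our own):

```python
def phi_ising(block, omega, eta=0.0):
    """Potential of a 2x2 maximal clique configuration, obtained by sharing
    the Ising energy terms equally among the maximal cliques containing
    them: each node lies in 4 cliques, each adjacent pair in 2."""
    (a, b), (c, d) = block
    singles = a + b + c + d
    hpairs = a * b + c * d   # horizontal neighbour pairs inside the block
    vpairs = a * c + b * d   # vertical neighbour pairs inside the block
    return -4 * omega * singles / 4 + 2 * omega * (hpairs + vpairs) / 2 + eta

omega = 0.4
print(phi_ising(((0, 0), (0, 0)), omega))   # phi^0        -> 0.0  (= eta)
print(phi_ising(((1, 0), (0, 0)), omega))   # phi^1        -> -0.4 (= -omega)
print(phi_ising(((1, 0), (0, 1)), omega))   # diagonal     -> -0.8 (= -2*omega)
print(phi_ising(((1, 1), (1, 1)), omega))   # all ones     -> 0.0  (= eta)
```

The values agree with those stated below, with $\eta$ playing the role of the arbitrary offset.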
Using the notation introduced above for the $2\times 2$ maximal clique case this gives $\phi^0=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}=\eta$, $\phi^1=\phi^{11}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}= \phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}=-\omega+\eta$ and $\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}=-2\omega+\eta$, where $\eta$ is an arbitrary value originating from the arbitrary value that can be assigned to $\beta^\emptyset$. \section{Prior specification} \label{sec:prior} In this section we define a generic prior for the parameters of an MRF with maximal cliques specified as in \eqref{eq:Lm}. The first step in the specification is to choose what parametrisation of the MRF to consider. In the previous section we discussed two parametrisations for the MRF, with parameter vectors $\beta$ and $\phi$, respectively. When choosing between the two parametrisations and defining the prior we primarily have the torus version of the MRF in mind. However, as the free boundary version of the model uses the same parameter vectors, the prior we end up with can also be used in that case. It should be remembered that the parametrisations using $\beta$ and $\phi$ are non-identifiable, but that it is sufficient to add one restriction to make them identifiable.
Perhaps the easiest way to do this is to restrict one of the parameters to equal zero, but other alternatives also exist. We return to this issue below. The dimension of the $\beta$ and $\phi$ parameter vectors grows rapidly with the number of elements in the set $\Lambda_0$ defining the set of maximal cliques. Table \ref{tab:ktimesl} \begin{figure*}[h]\center Table \ref{tab:ktimesl} approximately here. \end{figure*} gives the number of parameters in the identifiable models, which we in the following denote by $N_{\Lambda_0}$, when $\Lambda_0$ is a $k\times l$ block of nodes. As the number of parameters grows rapidly with the size of $\Lambda_0$, it is natural to look for prior formulations which include the possibility of a reduced number of free parameters. For the $\beta$ parametrisation perhaps the most natural strategy to do this is to assign positive prior probability to the event that one or several of the interaction parameters are exactly zero. The interpretation of the $\phi$ parameters is different from the interpretation of the $\beta$ parameters, and it is not natural to assign positive probability to the event that elements of the $\phi$ vector are exactly zero. A more reasonable scheme here is instead to set a positive prior probability for the event that groups of $\phi$ parameters have exactly the same value. In the Bayesian contingency tables literature the $\beta$ parametrisation is popular, see for example \citet{Forster1999}, \citet{massam2009} and \citet{overstall2012} and references therein, where the second article develops a conjugate prior for this parametrisation. However, these results do not directly apply for an MRF where one restricts the potential functions to be translational invariant. More importantly, however, the various $\beta$ parameters relate to cliques of different sizes, and this makes the interpretation of the parameters difficult.
In \citet{Forster1999} and in \citet{overstall2012} effort is made to construct a reasonable multinormal prior for the $\beta$ parameters. In contrast, the $\phi$ parameters all represent the potential of a configuration of a maximal clique, and these configurations are all of the same size. Unless particular prior information is available and suggests the opposite, it is therefore natural to assume that all $\phi$ parameters are on the same scale. A tempting option is therefore first to assign identical and independent normal distributions to these parameters, and to obtain identifiability by constraining the sum of the parameters to be zero. Thereby the elements of $\phi$ are exchangeable \citep{diaconis1980}. Note that the $\beta$ parameters become multinormal also in our case, see for instance (S3) in the supplementary materials. In the following we therefore focus on specifying a prior for $\phi$. We first introduce the notation necessary to define the groups of configuration parameters $\phi$ that should have the same value, and thereafter discuss possibilities for how to define the prior. To define groups of configuration set parameters that should have the same value, let $C_1,\ldots,C_r$ be a partition of the configuration sets in $\mathcal{C}$ with $C_i\neq \emptyset$ for $i=1,\ldots,r$. Thus, $C_i\cap C_j = \emptyset$ for $i\neq j$ and $C_1\cup\ldots\cup C_r=\mathcal{C}$. For each $i=1,\ldots,r$ we thereby assume $\phi(c)$ to be equal for all $c\in C_i$, and we denote this common value by $\varphi_i$. Setting $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$ we can thus write the resulting potential functions as \begin{equation}\label{eq:VFinal} V_\Lambda(x_\Lambda|z) = \sum_{(C,\varphi)\in z} \varphi I\left(x_\Lambda \in \bigcup_{c\in C} c\right). \end{equation} We define a prior on the $\phi$ parameters by specifying a prior for $z$.
An alternative to this construction would be to build up $\{C_1,...,C_r\}$ in a non-random fashion, constraining the $\phi$ parameters according to properties like symmetry and rotational invariance. However, our goal is that such properties can be inferred from observed data. Given all configuration sets, we want to assign positive probability to the event that groups of configuration sets have exactly the same parameter value. For instance, the three groups in Section \ref{sec:Ising} are an example of such a grouping for a $2\times 2$ maximal clique. Since we do not allow empty groups $C_i$, the maximum number of groups one can get is $N_{\Lambda_0}+1$. Our prior distribution for $z$ is of the form \begin{equation*} p(z)=p(\{C_1,...,C_r\})p(\{\varphi_1,...,\varphi_r\}|r) \end{equation*} where $p(\{C_1,...,C_r\})$ is a prior for the grouping of the configuration sets, while $p(\{\varphi_1,...,\varphi_r\}|r)$ is a prior for the group parameters given the number of groups $r$. Two possibilities for $\{C_1,...,C_r\}$ immediately come to mind. The first is to assume a uniform distribution on the groupings, i.e. \begin{equation*} p_1(\{C_1,...,C_r\})\propto const, \end{equation*} meaning that each grouping is a priori equally likely. However, for $p(r)$, the marginal probability of the number of groups, this means that most of the probability is put on groupings with approximately $(N_{\Lambda_0}+1)/2$ groups. In fact the probability $p(r)$ becomes equal to \begin{equation*} p(r)=\frac{g(N_{\Lambda_0}+1,r)}{\sum_{i=1}^{N_{\Lambda_0}+1}g(N_{\Lambda_0}+1,i)}, \end{equation*} where $g(N_{\Lambda_0}+1,r)$ is the number of ways $N_{\Lambda_0}+1$ configuration sets can be organised into $r$ unordered groups, remembering that no empty groups are allowed. The function $g(N_{\Lambda_0}+1,r)$ can be written as \begin{equation*} g(N+1,r)=\frac{1}{r!}\sum_{i=0}^r\binom{r}{i}(-1)^{r-i}i^{N+1}, \end{equation*} and is known as the Stirling number of the second kind \citep{Graham1988}.
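These quantities are easy to evaluate numerically: $g(N+1,r)$ follows from the explicit sum, and the normalising denominator of $p(r)$ is the corresponding Bell number. A small Python sketch for the $2\times 2$ case, where $N_{\Lambda_0}+1=11$:

```python
from math import comb, factorial

def stirling2(n, r):
    """Stirling number of the second kind: the number of ways to partition
    n labelled items into r non-empty unordered groups."""
    return sum((-1) ** (r - i) * comb(r, i) * i ** n for i in range(r + 1)) // factorial(r)

N1 = 11  # N_{Lambda_0} + 1 configuration sets for 2x2 maximal cliques
bell = sum(stirling2(N1, r) for r in range(1, N1 + 1))
p = {r: stirling2(N1, r) / bell for r in range(1, N1 + 1)}
print(round(p[5], 2))   # -> 0.36: the uniform-on-groupings prior favours mid-sized r
print(p[1] == p[11])    # -> True: both extremes get probability 1/bell, about 1.5e-06
```

The numbers reproduce the probabilities quoted in the text for the uniform-on-groupings prior $p_1$.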
For the $2\times 2$ maximal clique this means for instance that $p(r=1)=p(r=11)\approx 10^{-6}$ while $p(r=5)=0.36$. An alternative for $p(\{C_1,...,C_r\})$ is to make the distribution for the number of groups uniform. This can be done by defining the probability distribution \begin{equation*} p_2(\{C_1,...,C_r\})=\frac{1}{(N_{\Lambda_0}+1)g(N_{\Lambda_0}+1,r)}. \end{equation*} With this prior a particular grouping with many or few groups will have a larger probability than a particular grouping with approximately $(N_{\Lambda_0}+1)/2$ groups. In the $2\times 2$ case for example, the probability of the grouping where all configuration sets are assigned to the same group or the grouping with 11 groups is $p(\{C_1\})=p(\{C_1,...,C_{11}\})=0.09$, while the probability of a particular grouping with 5 groups is $p(\{C_1,...,C_5\})\approx 10^{-7}$. Observe, however, that under both priors the groupings are uniformly distributed given the number of groups. As a compromise between the two prior distributions we propose \begin{equation*} p(\{C_1,...,C_r\})\propto p_1(\{C_1,...,C_r\})^{1-\gamma}p_2(\{C_1,...,C_r\})^{\gamma}, \end{equation*} where $0 \leq\gamma\leq 1$. As also discussed above, to get an identifiable model we need to put one additional restriction on the elements of $\phi$, or alternatively on the $\varphi_i$ parameters. As we want the distribution $p(\varphi_1,\ldots,\varphi_r|r)$ to be exchangeable we want the restriction also to be exchangeable in the $\varphi_i$ parameters, and set \begin{equation} \sum_{(C,\varphi)\in z}\varphi=0. \label{eq:identifiability} \end{equation} Under this sum-to-zero restriction we assume the $\varphi_i$ a priori to be independent normal with zero mean and with a common variance $\sigma_{\varphi}^2$. This fully defines the prior for $z$, except that we have not specified values for the two hyper-parameters $\gamma$ and $\sigma_{\varphi}^2$.
\section{Posterior sampling}\label{sec:sampling} In this section we first discuss different strategies proposed in the literature for how to handle the computationally intractable normalising constant in discrete MRFs, and in particular discuss their applicability in our situation. Thereafter we describe the RJMCMC algorithm we adopt for simulating from our posterior distribution. \subsection{Handling of the normalising constant} \label{sec:normconstant} Discrete MRFs contain a computationally intractable normalising constant and this makes the fully Bayesian approach problematic. Three strategies have been proposed to circumvent or solve this problem. The first alternative is to replace the MRF likelihood with a computationally tractable approximation. An early contribution, \citet{art109}, uses the pseudo-likelihood for this, \citet{art117} and \citet{art145} adopt a reduced dependency approximation (RDA), and \citet{phd6} and \citet{tjelmeland2012} construct a POMM approximation by making use of theory for pseudo-Boolean functions. The second strategy, used in \citet{art144}, is to adopt an estimate of the normalisation constant obtained by some Markov chain Monte Carlo (MCMC) procedure prior to simulating from the posterior, and the third alternative is to include an auxiliary variable sampled from the MRF $p(x|\phi)$ in the posterior simulation algorithm. \citet{art120} is the first article using the third approach, and the exchange algorithm of \citet{pro20} falls within the same class. \citet{CaimoFriel2011} and \citet{Everitt2012} adopt an approximate version of this third approach, by replacing perfect sampling from $p(x|\phi)$ with approximate sampling via an MCMC algorithm. The three approaches all have their advantages and disadvantages. First of all, only the third approach is without approximations in the sense that it defines an MCMC algorithm with limiting distribution exactly equal to the posterior distribution of interest.
However, for this approach to be feasible perfect sampling from $p(x|\phi)$ must be possible, and computationally reasonably efficient, for all values of $\phi$. The strategy used in the second class requires in practice that the parameter vector $\phi$ is low dimensional. The approximation strategy does not have restrictions on the dimension of $\phi$ and perfect sampling from $p(x|\phi)$ is not needed. In that sense this approach is more flexible, but of course the approximation quality may depend on the parametric form of the MRF and the value of $\phi$. In principle any of the approaches discussed above may be used in our situation, but the complexity of the parameter space makes the prior estimation of the normalisation constant approach impractical. Moreover, the accuracy of the pseudo-likelihood approximation is known to be quite poor, and in simulation exercises we found that perfect sampling from $p(x|\phi)$ was in practice infeasible for many of the higher-order interaction models visited by our RJMCMC algorithm. The approximate version in \citet{CaimoFriel2011} is, however, a viable alternative. We are thereby left with the RDA approach, the POMM approximation, and the strategy proposed in \citet{CaimoFriel2011}. In our simulation examples we adopt the second of these, but the other two could equally well have been used. In fact, in one of our simulation examples we also use the strategy from \citet{CaimoFriel2011} to check the approximation quality obtained when replacing the MRF with the POMM approximation. \subsection{MCMC algorithm} Assume that an observed binary $n\times m$ image is available. We consider this image as a realisation from our MRF with the free boundary conditions defined in Section \ref{sec:MRF}. As a prior for the MRF parameters we adopt the prior specified in Section \ref{sec:prior}. The focus in this section is then on how to sample from the resulting posterior distribution.
One should note that in this section we formulate the algorithm as if one can evaluate the MRF likelihood, including the normalising constant. This is of course not feasible in practice, so when running the algorithm we replace the MRF likelihood with the corresponding POMM approximation discussed above. Letting $x$ denote the observed image, the posterior distribution we want to sample from is given by \begin{equation*} p(z|x) \propto p(x|z) p(z), \end{equation*} where $p(x|z)$ and $p(z)$ are the MRF defined by \eqref{eq:VFinal} and the prior defined in Section \ref{sec:prior}, respectively. To simulate from this posterior we adopt a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm \citep{green1995} with three types of updates. The detailed proposal mechanisms are specified in the supplementary materials; here we just give a brief description of our proposal strategies. The first proposal is to change an existing $\varphi$ parameter by a random walk step with variance $\sigma^2$, and thereafter to subtract the same value from all $\varphi$ parameters to comply with the sum-to-zero constraint. In the second proposal we draw a pair of groups and propose to move one configuration set from the first group to the second group, ensuring that both groups remain non-empty. In the last proposal type, we propose a new state by either increasing or decreasing the number of groups by one. When increasing the number of groups by one we randomly choose a configuration set from a randomly chosen group and propose this configuration set to be a new group. When proposing to reduce the number of groups by one, we randomly choose a group with only one configuration set and propose to merge this group into another group. In the trans-dimensional proposals we ensure that the proposed parameters comply with the sum-to-zero constraint by subtracting the same value from all $\varphi$ parameters.
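As an illustration, the first (fixed-dimensional) proposal together with the recentring step that maintains the sum-to-zero constraint can be sketched as follows. This shows the proposal mechanism only; the Metropolis--Hastings acceptance step and the two group moves are omitted, and the function name is our own:

```python
import random

def propose_varphi(varphi, sigma=0.3, rng=random):
    """First RJMCMC proposal: perturb one group parameter by a Gaussian
    random-walk step, then subtract the mean from every parameter so the
    sum-to-zero identifiability constraint still holds."""
    proposal = list(varphi)
    i = rng.randrange(len(proposal))
    proposal[i] += rng.gauss(0.0, sigma)
    shift = sum(proposal) / len(proposal)
    return [v - shift for v in proposal]

varphi = [0.4, -0.4, 0.0]              # illustrative starting values
new = propose_varphi(varphi)
print(abs(sum(new)) < 1e-9)            # the constraint is preserved -> True
```

Subtracting the common mean rather than adjusting a single coordinate keeps the proposal exchangeable in the $\varphi_i$'s.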
\section{Simulation examples} \label{sec:results} In this section we first present an example based on a simulated data set from the Ising model, and thereafter present results for a data set of census counts of red deer in the Grampians Region of north-east Scotland. In addition, another example based on simulated data is included in the supplementary materials. In all the simulation experiments we use the prior distribution as defined in Section \ref{sec:prior}. In this prior the values of the two hyper-parameters $\sigma_\varphi$ and $\gamma$ must be specified. We have fixed $\sigma_\varphi=10$ and tried $\gamma=0$, $0.5$ and $1$. When discussing simulation results we first present results for $\gamma=0.5$. As the likelihood function we use the MRF discussed in Section \ref{sec:MRF}, and we use $2\times 2$ maximal cliques except in the last part of the real data example, where we also discuss results for $3\times 3$ maximal cliques. To cope with the computationally intractable normalising constant of the MRF likelihoods, we adopt the approximation strategy of \citet{tjelmeland2012}. The MRF is then approximated with a partially ordered Markov model (POMM), see \citet{art119}, where the conditional distribution of one variable given all previous variables is allowed to depend on at most $\nu$ previous variables. We have tried different values for $\nu$ and found that in all our examples $\nu=7$ is sufficient to obtain very good approximations, so all the results presented here are based on this value of $\nu$. To simulate from posterior distributions we use the reversible jump MCMC algorithm defined in Section \ref{sec:sampling}. In our sampling algorithm we have an algorithmic tuning parameter $\sigma^2$, the variance of the Gaussian random walk proposals. Based on the results of some preliminary runs we set $\sigma=0.3$. One iteration of our sampling algorithm is defined to be one proposal of each type.
Lastly we note that parallel computing was used in order to reduce computational time; the technique used is explained in the supplementary materials. \subsection{The Ising model} \label{sec:IsingResults} We generated a realisation from the Ising model given in Section \ref{sec:Ising} with $\omega=0.4$ on a $100\times 100$ lattice, consider this as our observed data $x$, and simulate from the resulting posterior distribution by the RJMCMC algorithm. The realisation $x$ was obtained using the perfect sampler presented in \citet{propp1996}. From the calculations in Section \ref{sec:Ising} we ideally want the correct groups, $\{c^0,c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$, $\{c^1,c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$, and $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}\}$, to be visited frequently by our sampler. Note that due to our identifiability restriction in \eqref{eq:identifiability} the configuration set parameters should be close to the values given in Section \ref{sec:Ising} with $\eta=\omega$. We run our sampler for 20000 iterations and study the simulation results after convergence. A small convergence study is included in the supplementary materials for the other simulated data set.
The acceptance rate for the parameter value proposals is 19\%, whereas the acceptance rates for the other two types of proposals are both around 1\%. The estimated distribution for the number of groups is 94\%, 5\% and 1\% for 3, 4 and 5 groups, respectively. In Figure \ref{fig:groupingMatrixIsing} we have plotted the matrix representing the estimated posterior probability of two configuration sets being assigned to the same group. \begin{figure*}[h]\center Figure \ref{fig:groupingMatrixIsing} approximately here. \end{figure*} As we can see in this figure, the configuration sets are separated into 3 groups, and these groups correspond to the correct grouping shown in grey. About $94\%$ of the realisations are assigned to this particular grouping, and almost all other groupings that are simulated correspond to groupings where the middle group is split in various ways, while a very few are splits of the groups $\{c^0,c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$ and $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}\}$. Every one of these alternative groupings has an estimated posterior probability of less than 0.5\%. One informative way to look at the result of the simulation is to estimate the posterior distribution for the interaction parameters $\beta$. Histograms and estimated 95\% credibility intervals for each of the parameters are given in Figure \ref{fig:interactionIsing}. \begin{figure*}[h]\center Figure \ref{fig:interactionIsing} approximately here.
\end{figure*} As we can see, all the true values of the interaction parameters are within the estimated credibility intervals; however, the modes of the distributions for the pairwise horizontal and vertical second order interactions (see Figures \ref{fig:Isingi2} and \ref{fig:Isingi3}) seem to be somewhat lower than the correct value. To study the properties of the MRF $p(\cdot |z)$ when $z$ is a sample from the posterior $p(z|x)$ we take $5000$ samples from the MCMC run for $p(z|x)$ and generate for each of these a corresponding realisation from the MRF $p(\cdot |z)$. To analyse these $5000$ images we use six statistics describing local properties of the images. The statistics used and resulting density estimates (solid) of the distribution of these statistics are shown in Figures \ref{fig:compareSimIsing} (a)-(f). \begin{figure*}[h]\center Figure \ref{fig:compareSimIsing} approximately here. \end{figure*} In the same figures we also show density estimates of the same statistics when images are generated from the Ising model with the true parameter value (dashed), and when images are generated from the Ising model with parameter value $\omega$ generated from the posterior distribution given our observed image $x$ (dotted). In this last case, a zero mean Gaussian prior with standard deviation equal to ten is used for $\omega$. In these figures we also see that the data we use for posterior sampling of $z$ (black dots) is a realisation from the Ising model with low values for the number of equal horizontally and vertically adjacent sites (see Figures \ref{fig:is2} and \ref{fig:is3}), which, as already observed above, causes our simulated second order interactions between horizontally and vertically adjacent sites to be somewhat lower than the true values. In fact we can see that the simulations from the Ising model using posterior samples for the parameter value closely follow those of our $2 \times 2$ model.
This means that the results from our model are as accurate as the results one gets when knowing that the true model is the Ising model but not knowing the model parameter. To evaluate the quality of the POMM approximation in this example, we also simulate from the posterior distribution with the same RJMCMC algorithm using the approximate exchange algorithm of \citet{pro20}, as discussed in Section \ref{sec:normconstant}. We compare in Figures \ref{fig:compareSimIsing} (g) and (h) the results using the POMM approximation (solid) to the results from the approximate exchange algorithm (dashed) using two of our six statistics. We observe that the differences are minimal for these two, and indeed we get as accurate results for the four other statistics as well. That these two very different approximation strategies produce essentially the same results strongly indicates that both procedures are very accurate. All the above results are for $\gamma=0.5$, but as mentioned in the introduction of this section we also investigate the results for $\gamma=0$ and $1$. For $\gamma=0$ the configuration sets are organised into 3 (66\%), 4 (31\%) or 5 (3\%) groups, and for $\gamma=1$ we get 3 (96\%) or 4 (4\%) groups. From these numbers we see the effect of varying $\gamma$. In particular, when increasing $\gamma$ from $0.5$ to $1.0$ the tendency to group more configuration sets together becomes stronger for this data set. \subsection{Red deer census count data} \label{sec:redDeer} In this section we analyse a data set of census counts of red deer in the Grampians Region of north-east Scotland. A full description of the data set is found in \citet{augustin1996} and \citet{buckland1993}. The data is obtained by dividing the region of interest into $n=1277$ grid cells on a lattice and observing the presence or absence of red deer in each cell.
In our notation this is our observed image $x$, but in this example we also have the four covariates altitude, mires, north coordinate and east coordinate available in each grid cell. The binary data $x$ and the two first covariates are shown in Figure \ref{fig:redDeerData}. \begin{figure*}[h]\center Figure \ref{fig:redDeerData} approximately here. \end{figure*} We denote covariate $k$ at a location $(i,j)$ by $y_{i,j,k}$, $k=1,2,3,4$, and include the covariates in the likelihood function in the following way \begin{equation}\label{eq:likelihoodDeer} p(x|z,\theta^C,y)=\frac{1}{Z}\exp{\left( \sum_{\Lambda\in \mathcal{L}_m}V_\Lambda(x_{\Lambda}|z)+\sum_{(i,j)\in S}x_{i,j}\sum_{k=1}^4\theta^C_ky_{i,j,k}\right )}, \end{equation} where $Z$ is the normalising constant and $\theta^C=(\theta^C_1,...,\theta^C_4)$ are the parameters for the covariates. We put independent zero mean Gaussian prior distributions with standard deviation equal to 10 on $\theta^C_k$, $k=1,...,4$. In the sampling algorithm these covariate parameters are updated using a random walk, i.e. we uniformly choose one of the four covariate parameters to update and propose a new value using a Gaussian distribution with the old parameter value as the mean and a standard deviation of $0.1$. We ran our algorithm for 50000 iterations, and the acceptance rate for the parameter random walk proposal is 42\%, for the group changing proposal 33\%, for the trans-dimensional proposal 5\%, and for the covariate proposal 48\%.
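The covariate term in (\ref{eq:likelihoodDeer}) is a weighted sum over the active sites; a minimal sketch of how it enters the energy (illustrative Python with hypothetical list-based inputs, not the implementation used for the analysis) is:

```python
# Illustrative sketch of the covariate term in the energy: each site with
# x_ij = 1 contributes the linear predictor sum_k theta_k * y_ijk. The
# list-based inputs are a hypothetical format, not the paper's code.

def covariate_energy(x, y, theta):
    """x: 2-D list of 0/1; y[i][j]: list of covariate values; theta: weights."""
    total = 0.0
    for i in range(len(x)):
        for j in range(len(x[0])):
            if x[i][j]:
                total += sum(t * v for t, v in zip(theta, y[i][j]))
    return total
```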
The posterior most probable grouping becomes $\{c^0\}$, $\{c^1,c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$ and $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$ with probability 33.2\%. In total more than 2500 different groupings are visited, and except for the posterior most probable grouping the posterior probabilities of all other groupings are less than 5\%. The estimated posterior probability distribution for the number of groups becomes 43\% for 3 groups, 48\% for 4 groups, 8\% for 5 groups and 1\% for 6 groups.
In particular, the realisations with four or more groups are mostly groupings where the set $\{c^1,c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$ is split in various ways. This can also be seen in Figure \ref{fig:configurationMatrixDeer}, which shows the estimated posterior probability of two configuration sets being assigned to the same group. \begin{figure*}[h]\center Figure \ref{fig:configurationMatrixDeer} approximately here. \end{figure*} The grey blocks in this figure show the estimated posterior most probable grouping described above. Next we estimate the posterior density for the interaction parameters, see Figure \ref{fig:interactionsDeer}. \begin{figure*}[h]\center Figure \ref{fig:interactionsDeer} approximately here. \end{figure*} As we can see, most of the higher order interaction parameters become significantly different from zero, suggesting that a $2\times 2$ clique system is needed for this data set. Figure \ref{fig:covariatesDeer} shows the estimated posterior density for the covariate parameters. \begin{figure*}[h]\center Figure \ref{fig:covariatesDeer} approximately here. \end{figure*} As we can see from the credibility intervals, all these parameters are significantly different from zero, which justifies the need to include them.
Simulations of $p(x|z,\theta^C,y)$ for three randomly chosen posterior samples of $z$ and $\theta^C$ are shown in Figure \ref{fig:simulationsDeer}. \begin{figure*}[h]\center Figure \ref{fig:simulationsDeer} approximately here. \end{figure*} As we can see the spatial dependency in these realisations looks similar to the data, which indicates that the features of this data set are captured with this model. As discussed above, the estimated marginal posterior densities for the interaction parameters in Figure \ref{fig:interactionsDeer} indicate that higher order interaction parameters are needed for this data set. To investigate this further we also run a corresponding MCMC simulation with a prior where the spatial interaction is as in the nearest neighbour autologistic model defined in \citet{Besag1972}, whereas the covariates are included as in (\ref{eq:likelihoodDeer}). This pairwise interaction prior has three interaction parameters, for first-order interactions and for horizontal and vertical second-order interactions, respectively, and a priori we assume these three parameters to be independent and Gaussian distributed with zero mean and standard deviations equal to ten. To simulate these three parameters we randomly choose one and propose a zero mean Gaussian change with standard deviation equal to 0.3 to the chosen parameter. For the $\theta_j^C$ parameters we adopt the same prior and proposals as before. For the pairwise interaction prior and our original prior in (\ref{eq:likelihoodDeer}), Figure \ref{fig:compareSimDeer} shows \begin{figure*}[h]\center Figure \ref{fig:compareSimDeer} approximately here. \end{figure*} estimates of the resulting marginal posterior distributions for the same six statistics studied in our Ising simulation example. For several of the statistics we see that there is a clear difference between the results for the two priors.
The differences for the higher-order interaction statistics are perhaps less surprising, but one should note that the distribution of the first-order statistic in Figure \ref{fig:compareSimDeer}(a) also changes considerably when allowing higher order interactions. One should also note that our $2\times 2$ model fits the statistics of the data, shown as black dots in the figures, better. Returning to the $2\times 2$ prior, using $\gamma=0$ in the prior for this data set gives the estimated posterior probability distribution 24\%, 63\%, 11\% and 2\% for 3, 4, 5 and 6 groups respectively, whereas for $\gamma=1$ we obtain 60\%, 35\% and 5\% for 3, 4 and 5 groups respectively. Again we see that higher values of $\gamma$ result in more realisations with fewer groups. However, for all the three values of $\gamma$ the estimated most probable grouping is the same. We end our discussion of this data set by mentioning that some results when assuming a clique size of $3\times 3$ are included in the supplementary material of this paper. These results indicate that no more significant structure is introduced in the $3\times 3$ case for this data set. \section{Closing remarks} \label{sec:discussion} Our main focus in this paper is to design a generic prior distribution for the parameters of an MRF. This is done by assuming a set of maximal cliques defined from a template maximal clique $\Lambda_0$, but as the number of free parameters grows quickly as a function of the number of elements in $\Lambda_0$ we construct our prior distribution such that it gives a positive probability for groups of parameters to have exactly the same value. In that way we reduce the effective number of parameters, while keeping the flexibility that large cliques provide. Proposal distributions that enable us to simulate from the resulting posterior distribution are also presented.
However, to evaluate the likelihood we use a previously defined approximation to MRFs \citep{phd6}, and the trade-off between accuracy and computational complexity limits in practice the size of the cliques that can be assumed. An alternative to approximations is perfect sampling \citep{propp1996}, but this was in all our examples too computationally intensive. A third alternative would be to use an MCMC sample of $x$ instead of a perfect sample, as described in for instance \citet{Everitt2012}. An issue with this approach is the need to set a burn-in period for the sampler of $x$, where too long a burn-in period would make the parameter sampler too computationally intensive. Lastly, we illustrate the effect of our prior distribution and sampling algorithm on two examples. Our focus in this paper is on binary MRFs. It is however possible to generalise our framework to discrete MRFs, i.e. where $x_i\in \{0,1,...,K\}$ for $K \geq 2$. An identifiable parametrisation of a discrete MRF using clique potentials can with a small effort be defined in a similar way to what is done in the binary case, and once this parametrisation is established, the prior distribution presented in this paper can be used unchanged. The same applies to our sampling strategy. With our prior distribution the size of the maximal cliques, and thereby the number of configuration sets, acts as a hyperparameter and must be set prior to any sampling algorithm. One could imagine putting a prior also on $\Lambda_0$, introducing the need to construct algorithms for trans-dimensional sampling also for $\Lambda_0$. Another way to avoid the need to set the number of configuration sets would be to construct a prior distribution for the $\beta$ parameters. A natural choice would be a prior that gives positive probability for these parameters to be exactly zero, and in this way the significant interactions of an MRF can be inferred from data.
However, as discussed above, it is not clear to us how to design generic prior distributions for the values of these interaction parameters, as higher order interactions intuitively would be different from lower order interactions. Also, grouping $\beta$ parameters together in order to reduce the number of parameters would, for the same reason as above, make little sense. An ideal solution would be to somehow draw strength from both of the two parametrisations in order to assign a prior distribution to both the appearance of different cliques and the number of free parameters. This idea is currently work in progress. \section*{Supporting Information} Additional Supporting Information may be found in the online version of this article:\\ {\noindent \bf Section S.1:} Proof of one-to-one relation between $\phi$ and $\beta$. {\noindent \bf Section S.2:} Proof of translational invariance for $\beta$. {\noindent \bf Section S.3:} Proof of translational invariance for $\phi$. {\noindent \bf Section S.4:} Details for the MCMC sampling algorithm. {\noindent \bf Section S.5:} The independence model with check of convergence. {\noindent \bf Section S.6:} Red deer data with $3 \times 3$ maximal cliques. {\noindent \bf Section S.7:} Parallelisation of the sampling algorithm. \bibliographystyle{jasa} \section{Proof of one-to-one relation between $\phi$ and $\beta$} \label{sec:S1} \begin{theorem} Consider an MRF and constrain the $\phi$ parametrisation of the potential functions as described in Definition 1. Then there is a one-to-one relation between $\{\beta^\lambda;\lambda\in\mathcal{L}\}$ and $\{\phi^\lambda;\lambda\in\mathcal{L}\}$. \end{theorem} \begin{proof}We prove the theorem by establishing recursive equations showing how to compute the $\beta^\lambda$'s from the $\phi^\lambda$'s and vice versa.
Setting $x=\bold 1^\lambda$ for some $\lambda\in\mathcal{L}$ into the two representations of $U(x)$ in (2) and (4), we get \begin{equation*} \sum_{\Lambda\in \mathcal{L}_m}V_{\Lambda}(\bold 1^{\lambda}_{\Lambda})=\sum_{\lambda^\star\in \mathcal{L}}\beta^{\lambda^\star} \prod_{(i,j)\in \lambda^\star} \bold 1^{\lambda}_{\{(i,j)\}}. \end{equation*} Using (3) and Definition 1 this gives \begin{equation} \sum_{\Lambda\in \mathcal{L}_m}\phi^{\lambda\cap \Lambda}=\sum_{\lambda^\star\subseteq \lambda}\beta^{\lambda^\star}. \label{eq:equal} \end{equation} Splitting the sum on the left hand side into one sum over $\Lambda\in\mathcal{L}_m^\lambda$ and one sum over $\Lambda\in\mathcal{L}_m\setminus\mathcal{L}_m^\lambda$, and using that $\lambda\cap \Lambda=\lambda$ for $\Lambda\in \mathcal{L}_m^\lambda$ we get \begin{equation*} |\mathcal{L}_m^{\lambda}|\phi^{\lambda}+\sum_{\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda}\phi^{\lambda\cap \Lambda}=\sum_{\lambda^\star\subseteq \lambda}\beta^{\lambda^\star}. \end{equation*} Solving for $\phi^\lambda$ gives \begin{equation} \phi^{\lambda}=\frac{1}{|\mathcal{L}_m^{\lambda}|}\left [\sum_{\lambda^\star\subseteq \lambda}\beta^{\lambda^\star}-\sum_{\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda}\phi^{\lambda\cap \Lambda}\right]. \label{eq:claim} \end{equation} Clearly $|\lambda\cap\Lambda| < | \lambda|$ when $\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda$, so \eqref{eq:claim} implies that we can compute all $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ recursively from $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$. First we can compute $\phi^\lambda$ for $|\lambda|=0$, i.e. $\phi^\lambda=\phi^\emptyset = \beta^\emptyset/|\mathcal{L}_m|$, then all $\phi^\lambda$ for which $|\lambda|=1$, thereafter all $\phi^\lambda$ for which $|\lambda|=2$ and so on until $\phi^\lambda$ has been computed for all $\lambda\in\mathcal{L}$.
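To make the recursive computation concrete, the following sketch implements the recursion above together with its inverse, on a hypothetical toy graph with sites $\{0,1,2\}$ and two maximal cliques (an illustration, not the lattice setting of the paper), and the two directions can be checked to invert each other:

```python
# Illustrative sketch of the two recursions on a toy graph
# (sites {0,1,2}, maximal cliques {0,1} and {1,2}); not the lattice
# setting of the paper, but it shows that the recursions invert each other.
from itertools import combinations

maximal = [frozenset({0, 1}), frozenset({1, 2})]
# L: every subset of a maximal clique, ordered by cardinality as required
L = sorted({frozenset(s) for M in maximal
            for r in range(len(M) + 1) for s in combinations(sorted(M), r)},
           key=lambda s: (len(s), sorted(s)))

def phi_from_beta(beta):
    phi = {}
    for lam in L:  # smaller sets first, so phi[lam & M] is already known
        containing = [M for M in maximal if lam <= M]
        subset_sum = sum(beta[s] for s in L if s <= lam)
        rest = sum(phi[lam & M] for M in maximal if M not in containing)
        phi[lam] = (subset_sum - rest) / len(containing)
    return phi

def beta_from_phi(phi):
    beta = {}
    for lam in L:  # smaller sets first, so beta over proper subsets is known
        beta[lam] = sum(phi[lam & M] for M in maximal) \
            - sum(beta[s] for s in L if s < lam)
    return beta
```

Starting from arbitrary $\beta$ values, computing $\phi$ and then mapping back recovers the original values, as the proof asserts.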
Thus, $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ is uniquely specified by $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$. Solving \eqref{eq:equal} with respect to $\beta^\lambda$ we get \begin{equation} \beta^{\lambda}=\sum_{\Lambda\in \mathcal{L}_m}\phi^{\lambda \cap \Lambda}-\sum_{\lambda^\star \subset \lambda}\beta^{\lambda^\star}, \label{eq:claim2} \end{equation} and noting that clearly $|\lambda^\star|<|\lambda|$ when $\lambda^\star \subset \lambda$ we correspondingly get that $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$ can be recursively computed from $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$. One must first compute $\beta^\lambda$ for $|\lambda|=0$, i.e. $\beta^\emptyset$, then all $\beta^\lambda$ for which $|\lambda|=1$, thereafter all $\beta^\lambda$ for which $|\lambda|=2$ and so on. Thereby $\{ \beta^\lambda;\lambda\in\mathcal{L}\}$ is also uniquely specified by $\{ \phi^\lambda;\lambda\in\mathcal{L}\}$ and the proof is complete. \end{proof} \section{Proof of translational invariance for $\beta$} \label{sec:S2} \begin{theorem} An MRF $x$ defined on a rectangular lattice $S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with torus boundary conditions and $\mathcal{L}_m$ given in (5) is stationary if and only if $\beta^{\lambda}=\beta^{\lambda\oplus (t,u)}$ for all $\lambda \in \mathcal{L}$, $(t,u)\in S$. We then say that $\beta^\lambda$ is translational invariant. \end{theorem} \begin{proof} We first prove the only if part of the theorem by induction on $|\lambda|$. Since $\emptyset \oplus (t,u)=\emptyset$ we clearly have $\beta^{\emptyset}=\beta^{\emptyset \oplus (t,u)}$ and thereby $\beta^{\lambda}=\beta^{\lambda \oplus (t,u)}$ for $|\lambda|=0$. Now assume all interaction parameters up to order $o$ to be translational invariant, i.e. $\beta^{\lambda}=\beta^{\lambda \oplus (t,u)}$ for $|\lambda|\leq o$. 
Now focusing on any $\lambda^\star\in \mathcal{L}$ with $|\lambda^\star|=o+1$, the assumed stationarity in particular gives that we must have $p(\bold 1^{\lambda^\star})=p(\bold 1^{\lambda^\star\oplus (t,u)})$. Using (2) and (4) it follows that \begin{equation*} \beta^{\lambda^\star}+\sum_{\lambda \subset \lambda^\star} \beta^{\lambda}=\beta^{\lambda^\star\oplus(t,u)}+\sum_{\lambda \subset \lambda^\star\oplus(t,u)}\beta^{\lambda}. \end{equation*} Rewriting the sum on the right hand side we get \begin{equation*} \beta^{\lambda^\star}+\sum_{\lambda \subset \lambda^\star}\beta^{\lambda}=\beta^{\lambda^\star\oplus(t,u)}+ \sum_{\lambda \subset \lambda^\star}\beta^{\lambda\oplus(t,u)}, \end{equation*} where the induction assumption gives that the two sums must be equal, and thereby $\beta^{\lambda^\star}=\beta^{\lambda^\star\oplus(t,u)}$, which completes the only if part of the proof. To prove the if part of the theorem we need to show that if the interaction parameters are translational invariant then $U(\bold 1^A) = U( \bold 1^{A\oplus (t,u)})$ for any $A\subseteq S$ and $(t,u)\in S$. 
For $U(\bold 1^A)$ we have \begin{eqnarray*} U(\bold 1^A) &=& \sum_{\lambda\in\mathcal{L}} \beta^\lambda \prod_{(i,j)\in\lambda} \bold 1^A_{i,j}\\ &=& \sum_{\lambda\in \mathcal{L}} \beta^{\lambda\oplus (t,u)} \prod_{(i,j)\in\lambda\oplus (t,u)} \bold 1_{i,j}^A\\ &=& \sum_{\lambda\in\mathcal{L}} \beta^{\lambda\oplus (t,u)} \prod_{(i,j)\in \lambda} \bold 1_{i,j}^{A\oplus (t,u)} \\ &=& \sum_{\lambda\in\mathcal{L}} \beta^{\lambda} \prod_{(i,j)\in \lambda} \bold 1_{i,j}^{A\oplus (t,u)} = U(\bold 1^{A\oplus (t,u)}), \end{eqnarray*} where the first equality follows from (4), the second equality is true because $\{ \lambda\oplus (t,u);\lambda\in \mathcal{L}\} = \mathcal{L}$ for any $(t,u)\in S$, the third equality follows from the identity $\bold 1^A_{(i,j)\oplus (t,u)} = \bold 1_{i,j}^{A\oplus (t,u)}$, and the fourth equality is using the assumed translational invariance of the interaction parameters. Thereby the proof is complete. \end{proof} \section{Proof of translational invariance for $\phi$} \label{sec:S3} \begin{theorem} An MRF $x$ defined on a rectangular lattice $S=\{ (i,j);i=0,\ldots,n-1,j=0,\ldots,m-1\}$ with torus boundary conditions and $\mathcal{L}_m$ given in (5) is stationary if and only if $\phi^{\lambda}=\phi^{\lambda\oplus (t,u)}$ for all $\lambda \in \mathcal{L}$ and $(t,u)\in S$. We then say that $\phi^\lambda$ is translational invariant. \end{theorem} \begin{proof} Given the result in Theorem 2 it is sufficient to show that $\phi^\lambda$ is translational invariant if and only if $\beta^\lambda$ is translational invariant. We first assume $\beta^\lambda$ to be translational invariant for all $\lambda\in\mathcal{L}$ and need to show that then also $\phi^\lambda$ must be translational invariant for all $\lambda\in\mathcal{L}$. 
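Theorem 2 can also be checked numerically; the sketch below (illustrative Python with an assumed set of translation-invariant interaction classes on a $3\times 3$ torus, not the paper's model) verifies that $U(\bold 1^A)$ is unchanged when $A$ is translated:

```python
# Illustrative numerical check of Theorem 2 on a 3x3 torus: with
# translation-invariant interaction parameters, the energy U(1^A) is
# unchanged when A is translated. The three interaction classes below
# (singleton, horizontal pair, vertical pair) are an assumed example.
n = m = 3

def translate(A, t, u):
    return frozenset(((i + t) % n, (j + u) % m) for (i, j) in A)

base_shapes = [(frozenset({(0, 0)}), 0.7),           # first order
               (frozenset({(0, 0), (0, 1)}), -0.3),  # horizontal pair
               (frozenset({(0, 0), (1, 0)}), 0.4)]   # vertical pair

# one beta value per class, expanded to all translates of the base shape
interactions = [(translate(shape, t, u), b)
                for shape, b in base_shapes
                for t in range(n) for u in range(m)]

def U(A):
    # beta parametrisation (4): a term contributes iff all its sites lie in A
    return sum(b for lam, b in interactions if lam <= A)
```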
Starting with \eqref{eq:claim}, repeatedly using the specific form we are using for $\mathcal{L}_m$ and the assumed translational invariance for $\beta^\lambda$, we get for any $\lambda\in\mathcal{L}$, $(t,u)\in S$ \begin{eqnarray*} \phi^{\lambda\oplus (t,u)} &=& \frac{1}{\left|\mathcal{L}_m^{\lambda\oplus (t,u)}\right|} \left[ \sum_{\lambda^\star\subseteq \lambda\oplus (t,u)}\beta^{\lambda^\star} - \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap\Lambda} + \sum_{\Lambda\in\mathcal{L}_m^{\lambda\oplus (t,u)}} \phi^{(\lambda\oplus (t,u))\cap \Lambda}\right] \\ &=& \frac{1}{|\mathcal{L}_m^\lambda|}\left[ \sum_{\lambda^\star\subseteq \lambda} \beta^{\lambda^\star\oplus (t,u)} - \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))} + \sum_{\Lambda\in\mathcal{L}_m^\lambda} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))}\right]\\ &=& \frac{1}{|\mathcal{L}_m^\lambda|} \left[ \sum_{\lambda^\star\subseteq \lambda} \beta^{\lambda^\star} - \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\cap \Lambda)\oplus (t,u)} + \sum_{\Lambda\in\mathcal{L}_m^\lambda}\phi^{(\lambda\cap\Lambda) \oplus (t,u)}\right] \\ &=& \frac{1}{|\mathcal{L}_m^\lambda|}\left[ \sum_{\lambda^\star\subseteq\lambda} \beta^{\lambda^\star} - \sum_{\Lambda\in\mathcal{L}_m\setminus\mathcal{L}_m^\lambda} \phi^{(\lambda\cap\Lambda)\oplus (t,u)}\right]. \end{eqnarray*} From this we can use induction on $|\lambda|$ to show that $\phi^\lambda$ is translational invariant for all $\lambda\in\mathcal{L}$. Setting $\lambda=\emptyset$ we get $\phi^{\lambda\oplus (t,u)} = (1/|\mathcal{L}_m^\emptyset|)\beta^\emptyset$ which is clearly not a function of $(t,u)$. 
Then assuming $\phi^{\lambda\oplus (t,u)}=\phi^\lambda$ for all $\lambda\in \mathcal{L}$ with $|\lambda| \leq o$, considering the above relation for a $\lambda$ with $|\lambda|=o+1$, and observing that $|\lambda\cap \Lambda|\leq o$ when $|\lambda|=o+1$ and $\Lambda\in \mathcal{L}_m\setminus \mathcal{L}_m^\lambda$, we get that also $\phi^{\lambda\oplus (t,u)}=\phi^\lambda$ for $|\lambda|=o+1$, and the induction proof is complete. Next we assume $\phi^\lambda$ to be translational invariant for all $\lambda\in\mathcal{L}$ and need to show that then also $\beta^\lambda$ is translational invariant for all $\lambda\in\mathcal{L}$. Starting with \eqref{eq:claim2}, using the assumed translational invariance of $\phi^\lambda$, and again repeatedly using the specific form of $\mathcal{L}_m$, we get \begin{eqnarray*} \beta^{\lambda\oplus (t,u)} &=& \sum_{\Lambda\in\mathcal{L}_m}\phi^{(\lambda\oplus (t,u))\cap \Lambda} -\sum_{\lambda^\star\subset \lambda\oplus (t,u)}\beta^{\lambda^\star} \\ &=& \sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\oplus (t,u))\cap (\Lambda\oplus (t,u))} - \sum_{\lambda^\star \subset \lambda} \beta^{\lambda^\star \oplus (t,u)}\\ &=&\sum_{\Lambda\in\mathcal{L}_m} \phi^{(\lambda\cap \Lambda)\oplus (t,u)} - \sum_{\lambda^\star \subset \lambda} \beta^{\lambda^\star \oplus (t,u)} \\ &=&\sum_{\Lambda\in\mathcal{L}_m} \phi^{\lambda\cap \Lambda} - \sum_{\lambda^\star \subset \lambda} \beta^{\lambda^\star \oplus (t,u)}, \end{eqnarray*} where the last equality uses the assumed translational invariance of $\phi^\lambda$. Using this we can easily use induction on $|\lambda|$ to show that we must have $\beta^{\lambda\oplus (t,u)}=\beta^\lambda$. The proof is thereby complete. \end{proof} \section{Details for the MCMC sampling algorithm} \label{sec:S4} In this section we provide details of the proposal distributions that we use when sampling from the posterior distribution \begin{equation*} p(z|x) \propto p(x|z) p(z), \end{equation*} where $p(x|z)$ and $p(z)$ are the MRF and the prior given in the paper, respectively. To simulate from this posterior distribution we adopt a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm with three types of updates.
The first update type uses a random walk proposal for one of the $\varphi$ parameters, the second proposes to move one configuration set to a new group, and the third proposes to change the number of groups, $r$, in the partition of the configuration sets. In the following we describe the proposal mechanisms for each of the three update types. The corresponding acceptance probabilities are given by standard formulas. It should be noted that only the last type of proposal produces a change in the dimension of the parameter space. \subsection{\it Random walk proposal for parameter values} The first proposal in our algorithm is simply to propose a new value for an already existing parameter using a random walk proposal, but correcting for the fact that the parameters should sum to zero. More precisely, we first draw a change $\varepsilon \sim \mbox{N}(0,\sigma^2)$, where $\sigma^2$ is an algorithmic tuning parameter. Second, we uniformly draw one element from the current state $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$, $(C_i,\varphi_i)$ say, and define the potential new state as \begin{equation*} z^* = \left \{ \left (C_j,\varphi_j - \frac{1}{r}\varepsilon \right ), j=1,\ldots,i-1,i+1,\ldots,r \right \} \cup \left \{ \left (C_i,\varphi_i+\varepsilon - \frac{1}{r}\varepsilon \right ) \right \}. \end{equation*} \subsection{\it Proposing to change the group for one configuration set} \label{sec:swappingProposal} Letting the current state be $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$, we start this proposal by drawing a pair of groups, $C_i$ and $C_j$ say, where the first group $C_i$ is restricted to include at least two configuration sets. We draw $C_i$ and $C_j$ so that the difference between the corresponding parameter values, $\varphi_i - \varphi_j$, tends to be small.
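The random walk update described above can be sketched as follows (illustrative Python acting on the parameter values only, not the authors' implementation):

```python
import random

# Illustrative sketch of the random walk update on the group parameter
# values (not the authors' implementation); sigma is the tuning parameter.
def random_walk_proposal(varphi, sigma=0.3, rng=random):
    """Add a N(0, sigma^2) change eps to one uniformly chosen parameter and
    subtract eps/r from all r parameters, so the proposal still sums to zero."""
    r = len(varphi)
    eps = rng.gauss(0.0, sigma)
    i = rng.randrange(r)
    proposal = [v - eps / r for v in varphi]
    proposal[i] += eps
    return proposal
```

The total change is $\varepsilon - r \cdot \varepsilon/r = 0$, so the zero-sum constraint is preserved exactly, up to floating point rounding.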
More precisely, we draw $(i,j)$ from the joint distribution \begin{eqnarray*} q(i,j)\propto \begin{cases} \exp\left(-(\varphi_i-\varphi_j)^2\right ) & \mbox{if $i\neq j$ and group $C_i$ contains at least two configuration sets,}\\ 0 & \mbox{otherwise.} \end{cases} \end{eqnarray*} Thereafter we draw uniformly at random one of the configuration sets in $C_i$, $c$ say. Our potential new state is then obtained by moving $c$ from $C_i$ to $C_j$. Thus, our potential new state becomes \begin{equation*} z^* = \left ( z \setminus \{ (C_i,\varphi_i),(C_j,\varphi_j)\}\right ) \cup \left \{ (C_i\setminus c,\varphi_i ), (C_j\cup c,\varphi_j )\right \}. \end{equation*} \subsection{\it Trans-dimensional proposals} \label{sec:transDimProposal} Let again the current state be $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$. In the following we describe how we propose a new state by either increasing or reducing the number of groups, $r$, by one. There will be a one-to-one transition in the proposal, meaning that the opposite proposal, going from the new state to the old state, has a non-zero probability. We make no attempt to jump between states where the difference between the dimensions is larger than one. First we draw whether to increase or to decrease the number of groups. If the number of groups is equal to the number of configuration sets, no proposal to increase the number of groups can be made due to the fact that empty groups have zero prior probability. In that case we propose to decrease the number of dimensions with probability 1. In our proposals we also make the restriction that only groupings containing at least one group with only one configuration set can be subject to a dimension reducing proposal. In a case where no such group exists, a proposal of increasing the number of dimensions is made with probability 1. In a case where both proposals are allowed we draw at random which to do with probability 1/2 for each.
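Drawing the pair $(i,j)$ from $q(i,j)$ can be sketched as follows (illustrative Python; `sizes[i]`, the number of configuration sets in group $C_i$, is bookkeeping assumed here, not taken from the paper's code):

```python
import math
import random

# Illustrative sketch of drawing (i, j) from q(i, j): weight exp(-(d)^2)
# on the parameter difference d, restricted to i != j and |C_i| >= 2.
def draw_group_pair(varphi, sizes, rng=random):
    pairs = [(i, j) for i in range(len(varphi)) for j in range(len(varphi))
             if i != j and sizes[i] >= 2]
    weights = [math.exp(-(varphi[i] - varphi[j]) ** 2) for i, j in pairs]
    u = rng.random() * sum(weights)
    acc = 0.0
    for pair, w in zip(pairs, weights):
        acc += w
        if u <= acc:
            return pair
    return pairs[-1]  # guard against floating point round-off
```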
Note that at least one of the two proposals is always valid. We now explain how to propose to increase the number of groups by one. We start by drawing uniformly at random one of the groups with more than one configuration set, $C_i$ say, which we want to split into two new groups. Thereafter we draw uniformly at random one of the configuration sets in $C_i$, $c$ say, and form a new partition of the configuration sets by extracting $c$ from $C_i$ and adding a new group containing only $c$. Next we need to draw a parameter value for the new group $\{ c\}$, and the parameter values for the other groups also need to be modified for the proposal to conform with the requirement that the sum of the (proposed) parameters should equal zero. We do this by first drawing a change $\varepsilon\sim \mbox{N}(0,\sigma^2)$ in the parameter value for $c$, where $\sigma^2$ is the same tuning parameter as in the random walk proposal. We then define the potential new state as \begin{eqnarray*} z^* &=& \left \{ \left (C_j,\varphi_j - \frac{1}{r+1}(\varphi_i + \varepsilon)\right ), j=1,\ldots,i-1,i+1,\ldots,r\right \} \cup \\ &&\left \{ \left (C_i\setminus c,\varphi_i - \frac{1}{r+1}(\varphi_i + \varepsilon)\right), \left(\{ c\},\varphi_i + \varepsilon - \frac{1}{r+1}(\varphi_i + \varepsilon)\right)\right \}. \end{eqnarray*} Next we explain the proposal we make when the dimension is to be decreased by one. Since we need a one-to-one transition in our proposals, we get certain restrictions on these proposals. In particular, the fact that only groupings containing at least one group with only one configuration set are possible outcomes from a dimension increasing proposal dictates that dimension decreasing proposals can only be made from such groupings. Assume again our current model to be $z=\{ (C_i,\varphi_i),i=1,\ldots,r\}$, where at least one group contains only one configuration set.
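The split move just described can be sketched on the parameter values alone (illustrative Python; group bookkeeping omitted, not the authors' implementation):

```python
# Illustrative sketch of the split move acting on the parameter values only
# (group bookkeeping omitted; this is not the authors' implementation).
def split_proposal(varphi, i, eps):
    """Group i donates a configuration set to a new singleton group with
    value varphi[i] + eps; all r + 1 values are then shifted by
    (varphi[i] + eps)/(r + 1) so that they still sum to zero."""
    r = len(varphi)
    shift = (varphi[i] + eps) / (r + 1)
    return [v - shift for v in varphi] + [varphi[i] + eps - shift]
```

The proposed values again sum to zero whenever the current values do, since the total added value $\varphi_i+\varepsilon$ is exactly cancelled by the $r+1$ shifts.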
The strategy is to propose to merge one group consisting of only one configuration set into another group. As in Section \ref{sec:swappingProposal}, we draw the two groups to be merged so that the difference between the corresponding parameter values tends to be small. More precisely, we let the two groups be $C_i$ and $C_j$ where $(i,j)$ is sampled according to the joint distribution \begin{eqnarray*} q(i,j)\propto \begin{cases} \exp\left(-(\varphi_i-\varphi_j)^2\right ) & \mbox{if $i\neq j$ and $C_i$ consists of only one configuration set,}\\ 0 & \mbox{otherwise.} \end{cases} \end{eqnarray*} Next we need to specify potential new parameter values. As these must conform with how we generated potential new values in the split proposal, we have no freedom left in how to do this. The potential new state must be \begin{equation*} z^* = \left \{ \left (C_k,\varphi_k + \frac{1}{r-1}\varphi_i\right ), k\in \{ 1,\ldots,r\} \setminus \{ i,j\}\right \} \cup \left \{ \left (C_j\cup C_i,\varphi_j + \frac{1}{r-1}\varphi_i\right )\right \}. \end{equation*} The split and merge steps produce a change in the dimension of the parameter space, so to calculate the acceptance probabilities for such proposals we need corresponding Jacobian determinants. It is straightforward to show that the Jacobian determinants for the merge and split proposals become $\frac{r}{r-1}$ and $\frac{r}{r+1}$, respectively. \section{The independence model with check of convergence} \label{sec:S5} Consider a model where the variables are all independent of each other and $p(x_{i,j})=p^{x_{i,j}}(1-p)^{1-x_{i,j}}$ for each $(i,j)\in S$ and where $p$ is the probability of $x_{i,j}$ being equal to $1$. We get \begin{equation}\label{eq:independence} p(x)=\prod_{(i,j)\in S}p^{x_{i,j}}(1-p)^{1-x_{i,j}}\propto\exp\left(\alpha \sum_{(i,j)\in S}x_{i,j}\right), \end{equation} where \begin{equation*} \alpha=\ln \left ( \frac{p}{1-p}\right ).
\end{equation*} In this section we use the independence model as an example, and in particular we fit an MRF with $2 \times 2$ maximal cliques to data simulated from this model. Therefore it is helpful to know how one can represent the independence model using $2 \times 2$ maximal cliques, and this can be done using the same strategy that was used for the Ising model in Section 2.2 in the paper. We get $\phi^0=\eta$, $\phi^1=\frac{\alpha}{4}+\eta$, $\phi^{11}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}}= \phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}=\frac{\alpha}{2}+\eta$, $\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}=\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}= \phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}=\frac{3\alpha}{4}+\eta$ and $\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}=\alpha+\eta$, where $\eta$ is an arbitrary value coming from the arbitrary value for $\beta^\emptyset$. If $p=0.5$ we see that $\alpha=0$ and all the configuration set parameters are equal. We generate a realisation from the independence model with $p=0.3$ on a $100\times 100$ lattice, consider this as our observed data $x$, and simulate by the MCMC algorithm defined in Section 4 from the resulting posterior distribution.
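The mapping from $p$ to the configuration set parameters above depends only on the number of ones $k$ in the configuration, giving $\phi = k\alpha/4 + \eta$; a small numerical sketch (illustrative Python) is:

```python
import math

# Illustrative sketch: in the 2x2 representation of the independence model
# a configuration with k ones gets phi = k * alpha / 4 + eta, where
# alpha = ln(p / (1 - p)) and eta is the arbitrary constant above.
def independence_phis(p, eta=0.0):
    alpha = math.log(p / (1.0 - p))
    return {k: k * alpha / 4.0 + eta for k in range(5)}
```

With $p=0.5$ all parameters coincide, and the identifiability choice $\eta=-\alpha/2$ makes the values symmetric around zero.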
Using the notation for the configuration sets in a $2\times 2$ maximal clique and the results above, we ideally want our algorithm to produce realisations with the groups $\{c^0\}$, $\{c^1\}$, $\{c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}\}$, $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$ and $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$. Note that due to our identifiability restriction in (11) the configuration set parameters should be close to the solution above with $\eta=-\alpha/2$. To check convergence we investigated trace plots of various statistics, see Figure \ref{fig:tracePlotsMarginal}, and the conclusion was that the algorithm converges very quickly. \begin{figure*}[h]\center Figure \ref{fig:tracePlotsMarginal} approximately here. \end{figure*} The acceptance rate for the parameter value proposals is $24\%$, whereas the acceptance rates for the other two types of proposals are both around $2\%$. We run our sampling algorithm for 20000 iterations, and estimate the posterior probability of the number of groups. The configuration sets are organised into 4 (77\%), 5 (21\%) or 6 (2\%) groups, so for these data the grouping tends to be a little bit too strong compared to the correct number of groups.
This can also be seen from the estimated posterior probability of two configuration sets being assigned to the same group, shown in Figure \ref{fig:pairMatrixMarginal}. \begin{figure*}[h]\center Figure \ref{fig:pairMatrixMarginal} approximately here. \end{figure*} This figure suggests the four groups $\{c^0\},\{c^1\},\{c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}\}$, $\{c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$, which is also the most probable grouping estimated by counting the number of occurrences. In fact the posterior probability for this grouping is as high as $55 \%$. In Figure \ref{fig:pairMatrixMarginal} we also see how the most probable grouping differs from the correct model grouping, shown in grey.
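The pairwise probabilities shown in Figure \ref{fig:pairMatrixMarginal} are simple Monte Carlo averages over the posterior samples of the grouping. A hedged sketch of this estimator follows; the sample format is an assumption made here, with each posterior sample represented as a list assigning a group label to each configuration set.

```python
def pairwise_cogrouping(samples):
    """Estimate P(configuration sets i and j are in the same group)
    from MCMC samples, where each sample is a list of group labels."""
    n = len(samples[0])
    counts = [[0] * n for _ in range(n)]
    for labels in samples:
        for i in range(n):
            for j in range(n):
                if labels[i] == labels[j]:
                    counts[i][j] += 1
    m = float(len(samples))
    return [[c / m for c in row] for row in counts]

# toy example with three configuration sets and four posterior samples
samples = [[0, 0, 1], [0, 0, 1], [0, 1, 1], [0, 0, 0]]
P = pairwise_cogrouping(samples)
print(P[0][1])  # 0.75: sets 0 and 1 share a group in 3 of 4 samples
```

The diagonal of the resulting matrix is always one, and label names play no role; only co-membership within each sample matters.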
The group $\{ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}, c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$ in the correct model is split in two in the most probable grouping, and the subsets $\{ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}\}$ and $\{ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}\}$ are inserted into the correct model groups $\{ c^{11},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}},c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}}\}$ and $\{ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}\}$, respectively. As in the Ising model example in Section 5.1 we estimate the posterior distribution for the interaction parameters, see Figure \ref{fig:marginalInteractionParameters}. \begin{figure*}[h]\center Figure \ref{fig:marginalInteractionParameters} approximately here.
\end{figure*} As we can see, the true values of the interaction parameters are mostly within the credibility intervals, but the tendency to group the configurations too much is in this case forcing some of the true values into a tail of the marginal posterior distributions. As in the Ising example we compare the distribution of the same six statistics from simulations from our $2\times 2$ model with posterior samples of $z$, the independence model with the correct parameter value, and the independence model with parameter value obtained by posterior sampling, see Figure \ref{fig:compareSimIndependence}. \begin{figure*}[h]\center Figure \ref{fig:compareSimIndependence} approximately here. \end{figure*} As we can see, our model captures approximately the correct distribution of the chosen statistics. It is interesting to note that for some statistics the realisations from the independence model with simulated $\alpha$ values follow our model tightly, whereas for the other statistics they are close to the correct model. Also for this data set we investigated the cases $\gamma=0$ and $\gamma=1$. For $\gamma=0$ the configuration sets are organised into 4 (75\%), 5 (23\%) or 6 (2\%) groups, and for $\gamma=1$ we get 4 (93\%), 5 (6\%) or 6 (1\%) groups. As expected we again see the tendency towards stronger grouping when $\gamma$ is increased. We also did experiments where the value of $p$ was changed. If the value of $p$ is close to 0.5 the tendency to group the configurations too much becomes stronger. This makes perfect sense, since the correct grouping for $p=0.5$ is to put all configuration sets into only one group. At the other end, choosing $p$ closer to 0 or 1 gives a stronger tendency to group the configurations according to the correct solution.
This illustrates the fact that the algorithm tries to find a good model for the data using as few groups as possible, but as the difference between the true parameter values of the groups becomes larger, the price to pay for choosing a model with fewer parameters increases. \section{Red deer data with $3\times 3$ maximal cliques} \label{sec:S6} In this section we present some results when assuming a maximal clique size of $3\times 3$ for the red deer data set presented in Section 5.2 in the paper. The main drawback with our approach is computational time, which is very dependent on the approximation parameter $\nu$. One also needs to keep in mind that even data from simple models will need many groups in the $3 \times 3$ case to be modelled correctly. For instance, for the independence model the 401 configuration sets would need to be separated into 10 groups, while for the Ising model one would need 11 groups to get the correct model grouping. Similarly, the posterior most probable grouping found for the $2 \times 2$ case for the red deer example would need 38 groups to be modelled in the $3\times 3$ case. Thus it is important not to assume larger maximal cliques than needed. However, for this data set it is possible to run the sampling algorithm with $3\times 3$ cliques, even though this is computationally expensive. To get convergence we need a small generalisation of the proposal distribution for the trans-dimensional sampling step presented in Section \ref{sec:transDimProposal}. In particular we allow for several configuration sets to be split out into a new group in a single proposal, and correspondingly allow for several configuration sets to be merged into another group in a single proposal. The estimated marginal distribution of the number of groups is 1\%, 65\%, 33\%, and 1\% for 29, 30, 31 and 32 groups, respectively.
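The group count quoted above for the independence model can be illustrated by a small enumeration: since a configuration's parameter depends only on its number of ones, the $2^9$ possible $3\times 3$ binary configurations fall into exactly 10 groups, one per possible count of ones (this sketch enumerates raw configurations for illustration and ignores the symmetry reduction to 401 configuration sets).

```python
from collections import Counter

# group the 3x3 binary configurations by their number of ones
group_sizes = Counter(bin(c).count("1") for c in range(2 ** 9))

print(len(group_sizes))             # 10 groups: 0, 1, ..., 9 ones
print(sorted(group_sizes.items()))  # group sizes are the binomial coefficients C(9, k)
```

The same counting argument, with one extra group separating the two checkerboard-like interaction patterns, is behind the 11 groups quoted for the Ising model.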
Three realisations from the likelihood for three randomly chosen realisations of $z$ are shown in Figure \ref{fig:3t3}(a), and comparing with the realisations for the $2\times 2$ case, see Figure 8 in the paper, it is hard to see any differences in the spatial structure of the realisations. \begin{figure*}[h]\center Figure \ref{fig:3t3} approximately here. \end{figure*} We also investigated the distribution of three statistics for 5000 realisations from the likelihood of each of the two clique sizes, see Figure \ref{fig:3t3}(b), and there appears to be little difference here as well. These results indicate that $2\times 2$ maximal cliques might have sufficient complexity to explain this data set. \section{Parallelisation of the sampling algorithm} \label{sec:S7} Most of the computing time for running our sampling algorithm is used to evaluate the likelihood in (2). In order to reduce the running time we adopt a scheme that does multiple updates of the Markov chain by evaluating likelihoods in parallel. Assume we are in a state $z$ and propose to split/merge into a new state $z_1$. Now there are two possible outcomes for this proposal. Either we reject the proposal, which results in state $z$, or we accept the proposal, which results in state $z_1$. Either way we always propose a parameter update in the next step, and this update can be proposed from both states $z$ and $z_1$ before the acceptance probability for the split/merge step is evaluated. The possible outcomes for these proposals are $z$, $z_1$, $z_2$ and $z_{12}$, where $z$ is the outcome where neither the split/merge proposal nor the following parameter proposal is accepted, $z_1$ is the outcome where the split/merge proposal is accepted but not the following parameter proposal, $z_2$ is the outcome where the split/merge proposal is not accepted but the parameter proposal is, and $z_{12}$ is the outcome where both the split/merge proposal and the following parameter proposal are accepted.
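The bookkeeping behind this scheme is a binary tree of accept/reject decisions: after stacking $L$ proposal levels there are $2^L$ possible outcome states, indexed by the subset of proposals that were accepted. A small enumeration sketch, using the subscript notation $z, z_1, z_{12}, \ldots$ from above:

```python
from itertools import product

def outcome_states(levels):
    """Enumerate the possible states after `levels` stacked accept/reject
    proposals; a state is named by the indices of the accepted proposals."""
    states = []
    for decisions in product([False, True], repeat=levels):
        accepted = "".join(str(i + 1) for i, acc in enumerate(decisions) if acc)
        states.append("z_" + accepted if accepted else "z")
    return states

leaves = outcome_states(3)
print(len(leaves))      # 8 = 2^3 candidate states whose likelihoods can be evaluated in parallel
print(sorted(leaves))
```

With enough CPUs, all likelihoods needed at one level can be evaluated simultaneously, which is what yields the computational gain close to the number of levels.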
If we continue the argument we can do the same to propose updates where configurations are moved from one group to another, and in the red deer example we even include a level where updates of covariates are proposed. After making all proposals we evaluate the likelihoods for each possible state in parallel. The result is that we do not need to evaluate too many likelihoods, and if the number of CPUs that are available is larger than or equal to the number of likelihoods we need to evaluate, a computational gain close to the number of levels is obtained. The updating scheme is illustrated in Figure \ref{fig:parallellScheme}. \begin{figure*}[h]\center Figure \ref{fig:parallellScheme} approximately here. \end{figure*} \counterwithin{figure}{section} \setcounter{section}{5} \setcounter{figure}{0} \begin{figure} \centering \subfigure[][Trace plot for the number of groups.]{\includegraphics[scale=0.65]{nrOfGroupsMarginal_gamma05}\label{fig:g1}}\\ \subfigure[][Trace plot for $\phi^0$(black), $\phi^1$(dark grey) and $\phi^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}$(light grey).]{\includegraphics[scale=0.65]{tracePhi_gamma05}\label{fig:g2}} \caption{Independence model example: Trace plots for the first quarter of the posterior simulation run.
Solid curves are the result from a simulation where the initial number of groups is 1, and dashed curves are from a run with an initial value of 11 (maximal) number of groups.} \label{fig:tracePlotsMarginal} \end{figure} \begin{figure} \centering \begin{equation*} \arraycolsep=4.5pt\def\arraystretch{0.6}\begin{array}{cccccccccccc} c^0 &\vv1.00&&&&&&&&&&\\ c^1 &&\vv1.00&&&&&&&&&\\ c^{11}&&&\vv1.00&\vv 0.94&\vv 0.88&\vv 0.90&0.13&0.77&&&\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}} &&&\vv 0.94&\vv 1.00&\vv 0.85&\vv 0.93&0.12&0.75&&&\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}} &&&\vv0.88&\vv0.85&\vv 1.00&\vv0.85&0.19&0.83&&&\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}} &&&\vv 0.9&\vv 0.93&\vv 0.85&\vv 1.00&0.12&0.75&&&\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}}&&&0.13&0.12&0.19&0.12&\vv 1.00&\vv 0.28&\vv0.78&\vv0.78&0.74\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}}&&&0.77&0.75&0.83&0.75&\vv 0.28&\vv 1.00&\vv 0.12&\vv0.12 &0.09\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}}&&&&&&&\vv0.78&\vv0.12&\vv 1.00&\vv 0.93&0.93\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}}&&&&&&&\vv0.78&\vv0.12&\vv0.93 &\vv 1.00&0.93\\ c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}}&&&&&&&0.74&0.09&0.93&0.93&\vv 1.00\\ &c^0 & c^1 & c^{11} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}} 1 \\ 1 \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ & 1 \end{array}} & c^{\scriptsize
\def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ & 1 \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & \\ 1 & 1 \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} & 1 \\ 1 & 1 \end{array}} & c^{\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 1 \end{array}} \end{array} \end{equation*} \caption{\label{fig:pairMatrixMarginal}Independence model example: Estimated posterior probabilities for two configuration sets to be grouped together. The correct grouping is shown in grey, and only probabilities larger than $5 \%$ are given.} \end{figure} \begin{figure} \centering \subfigure[][$\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} & \\\framebox(5.0,5.0){} & \end{array}}\ (-1.08,-0.80)$.]{\label{fig:b1}\includegraphics[scale=0.5]{marginalBeta2}} \subfigure[][$\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \framebox(5.0,5.0){} \\ &\end{array}}\ (-0.05,0.22)$.]{\label{fig:b2}\includegraphics[scale=0.5]{marginalBeta3}}\\ \subfigure[][$\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \end{array}}\ (-0.05,0.23)$.]{\label{fig:b3}\includegraphics[scale=0.5]{marginalBeta4}} \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}}\framebox(5.0,5.0){} & \\ &\framebox(5.0,5.0){}\end{array}}\ (-0.11,0.12)$.]{\label{fig:b4}\includegraphics[scale=0.5]{marginalBeta5}}\\ \subfigure[][$
\beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}\ (-0.03,0.17)$.]{\label{fig:b5}\includegraphics[scale=0.5]{marginalBeta6}} \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \end{array}}\ (-0.37,0.21)$.]{\label{fig:b9}\includegraphics[scale=0.5]{marginalBeta10}}\\ \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ & \framebox(5.0,5.0){} \end{array}}\ (-0.16,0.26)$.]{\label{fig:b8}\includegraphics[scale=0.5]{marginalBeta9}} \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){}& \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}\ (-0.37,-0.05)$.]{\label{fig:b7}\includegraphics[scale=0.5]{marginalBeta8}}\\ \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}\ (-0.42,-0.06)$.]{\label{fig:b6}\includegraphics[scale=0.5]{marginalBeta7}} \subfigure[][$ \beta^{\scriptsize\arraycolsep=0.0pt\def\arraystretch{0.0}\begin{array}{@{}l@{}l@{}} \framebox(5.0,5.0){} &\framebox(5.0,5.0){} \\ \framebox(5.0,5.0){} & \framebox(5.0,5.0){} \end{array}}\ (-0.05,0.69)$.]{\label{fig:b10}\includegraphics[scale=0.5]{marginalBeta11}} \caption[Caption For LOF]% {\label{fig:marginalInteractionParameters}Independence model example: Estimated marginal posterior distribution of the interaction parameters.
True values are shown with a black dot and estimated 95\% credibility interval is given for each parameter.} \end{figure} \begin{figure} \centering \subfigure[][$g(x)=\sum_ix_i$]{\label{fig:i1}\includegraphics[scale=0.5]{ind1}} \subfigure[][$g(x)=\sum_{i,j:\text{vertical adjacent sites}}I(x_i=x_j)$]{\label{fig:i2}\includegraphics[scale=0.5]{ind2}}\\ \subfigure[][$g(x)=\sum_{i,j:\text{horizontal adjacent sites}}I(x_i=x_j)$]{\label{fig:i3}\includegraphics[scale=0.5]{ind3}} \subfigure[][$g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left( x_{\Lambda}= {\left[\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 0 \\ 0 & 0 \end{array}\right]}\right)$]{\label{fig:i4}\includegraphics[scale=0.5]{ind4}}\\ \subfigure[][$g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left ( x_{\Lambda}= {\left[\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 0 & 1 \\ 1 & 0 \end{array}\right]}\right)$]{\label{fig:i5}\includegraphics[scale=0.5]{ind7}} \subfigure[][$g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left ( x_{\Lambda}= {\left[\scriptsize \def\arraystretch{0.3}\begin{array}{@{}c@{}c@{}} 1 & 1 \\ 1 & 0 \end{array}\right]}\right)$]{\label{fig:i6}\includegraphics[scale=0.5]{ind15}} \caption{\label{fig:compareSimIndependence}Independence model example: Distribution of six statistics of realisations from our $2\times 2$ model with posterior samples of $z$ (solid), the independence model with correct parameter value (dashed), and the independence model with posterior samples of the parameter value (dotted).
The data evaluated with each statistic is shown with a black dot.} \end{figure} \counterwithin{figure}{section} \setcounter{section}{6} \setcounter{figure}{0} \begin{figure} \centering \subfigure[Three realisations from the likelihood for three random samples of $z$ from the posterior distribution.]{\label{fig:simulate_3t3}\includegraphics[scale=0.5]{xSim_3t3}}\\ \subfigure[][Distribution of three functions of realisations from the likelihood with $3\times 3$ cliques (solid) and $2\times 2$ cliques (dashed). The three functions are $g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left ( x_{\Lambda}={\left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{array}\right ]}\right )$ (left), $g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left ( x_{\Lambda}= {\left[\confBBBBBBBAA\right ]}\right )$ (middle), and $g(x)=\sum_{\Lambda\in\mathcal{L}_m}I\left ( x_{\Lambda}= {\left[\scriptsize \def\arraystretch{0.2}\begin{array}{@{}c@{}c@{}c@{}} 1 & 1 & 1 \\ 1 & 1 & 0 \\ 1 & 0 & 0 \end{array}\right ]}\right )$ (right). ]{\label{fig:gfunc_Deer_3t3}\includegraphics[scale=0.5]{gfuncDeer}} \caption{Red deer $3\times 3$ example: Posterior results with $\gamma=0.5$.} \label{fig:3t3} \end{figure} \counterwithin{figure}{section} \setcounter{section}{7} \setcounter{figure}{0} \begin{figure} \centering \centerline{ \xymatrix{ \\ &&&&&&&*++[F]{z}\ar[d]\ar[dllll]\\ &&&*++[F=]{z_1}\ar[d]\ar[dll]&&&&*++[F]{z}\ar[d]\ar[dll]\\ &*++[F=]{z_{12}}\ar[d]\ar[dl]&&*++[F]{z_1}\ar[d]\ar[dl]&&*++[F=]{z_2}\ar[d]\ar[dl]&&*++[F]{z}\ar[d]\ar[dl]\\ *++[F=]{z_{123}}&*++[F]{z_{12}}&*++[F=]{z_{13}}&*++[F]{z_1}&*++[F=]{z_{23}}&*++[F]{z_2}&*++[F=]{z_3}&*++[F]{z} }} \vspace{0.25cm} \caption{Proposal scheme for parallel likelihood evaluations. Starting in model $z$, proposals are made down the graph. Arrows pointing straight down represent rejection of a proposal, while arrows pointing down and to the left represent acceptance.
Double squares are used to represent states where a new likelihood evaluation is needed.} \label{fig:parallellScheme} \end{figure} \end{document}
Evgenia Pavlovna Sokolova (; 1 December 1850 – 2 August 1925) was a Russian dancer and educator. She was one of the most famous ballerinas of her period and later became a famous ballet teacher. She was born in Saint Petersburg (later Leningrad) and studied at the Imperial Ballet Academy there with Marius Petipa, Lev Ivanov and Christian Johansson, graduating in 1869. She subsequently joined the Bolshoi Theatre, Saint Petersburg. She performed the leading roles in Petipa's ballets Les Aventures de Pélée, A Midsummer Night's Dream, Roxana, the Beauty of Montenegro, Mlada, Night and Day, The Sacrifices to Cupid and Pygmalion. She taught advanced classes at the Mariinsky Theatre from 1902 to 1904 and again from 1920 to 1923. Her students included Anna Pavlova, Vera Trefilova, Tamara Karsavina, Lyubov Yegorova and Olga Spessivtseva. Sokolova died in Leningrad at the age of 74.
package com.appleframework.monitor.model;

import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.List;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

import org.apache.commons.lang.StringUtils;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.google.common.collect.Lists;
import com.google.common.collect.MapMaker;

/**
 * define a dog for the metric
 *
 * @author hill.hu
 */
public class MetricDog {

    private static Logger logger = LoggerFactory.getLogger(MetricDog.class);

    private final static ConcurrentMap<String, Object> hasFireMetrics =
            new MapMaker().expiration(1, TimeUnit.HOURS).makeMap();
    private final static ConcurrentMap<String, AtomicInteger> metricFireTimes =
            new MapMaker().expiration(6, TimeUnit.HOURS).makeMap();

    public static final String LEVEL_ERROR = "ERROR";

    private String name, desc;

    /**
     * target (threshold) value
     */
    private double targetValue;

    private String operator;

    /**
     * whether this dog is enabled
     *
     * @see com.appleframework.monitor.model.MetricDog#inWorking()
     */
    private boolean enable;

    private String metricName;

    private boolean excludeTimeMode = false;
    private String startTime = "00:00:00";
    private String endTime = "24:00:00";

    // mail addresses of the people to notify
    private String mailList;

    /**
     * alert level: WARN or ERROR
     */
    private String level = "WARN";

    private int times = 2;

    public int getTimes() { return times; }
    public void setTimes(int times) { this.times = times; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public String getDesc() { return desc; }
    public void setDesc(String desc) { this.desc = desc; }
    public double getTargetValue() { return targetValue; }
    public void setTargetValue(double targetValue) { this.targetValue = targetValue; }
    public String getOperator() { return operator; }
    public void setOperator(String operator) { this.operator = operator; }
    public boolean isEnable() { return enable; }
    public void setEnable(boolean enable) { this.enable = enable; }
    public String getMetricName() { return metricName; }
    public void setMetricName(String metricName) { this.metricName = metricName; }
    public String getMailList() { return mailList; }
    public void setMailList(String mailList) { this.mailList = mailList; }

    public List<Alert> work(Project project) {
        List<Alert> alerts = Lists.newArrayList();
        for (String metric : project.findMetricNames()) {
            if (StringUtils.equals(metric, metricName)) {
                MetricValue metricValue = project.findLastMetric(metricName);
                String cacheKey = project.getName() + "_" + this.getName() + "_" + metricName + metricValue.getTimeStamp();
                logger.debug("current value={} ,dog={}", metricValue.getValue(), this);
                // this value has already been handled; ignore it
                if (hasFireMetrics.containsKey(cacheKey)) {
                    logger.debug("this value has fire,just ignore {}", cacheKey);
                    continue;
                } else {
                    hasFireMetrics.put(cacheKey, true);
                }
                boolean fire = bite(metricValue.getValue());
                if (fire) {
                    Alert alert = new Alert();
                    alert.setTitle(String.format("【%s】->%s", project.getAlias(), name));
                    String _desc = StringUtils.defaultIfEmpty(desc, "");
                    String _content = StringUtils.defaultIfEmpty(metricValue.getContent(), "");
                    alert.setIp(metricValue.getIp() != null ? metricValue.getIp() : "127.0.0.1");
                    alert.setContent(String.format("%s:当前值=%s %s 阀值%s \n\n %s \n %s",
                            metricName, metricValue.getValue(), operator, targetValue, _desc, _content));
                    alert.setProjectName(project.getName());
                    alert.setMetricDog(this);
                    String _level = fixLevel(project, alert);
                    alert.setLevel(_level);
                    alerts.add(alert);
                } else {
                    // reset the consecutive-fire counter to 0
                    resetFireTimes(project.getName(), metricName);
                }
            }
        }
        return alerts;
    }

    private String fixLevel(Project project, Alert alert) {
        String _level = level;
        int currentTime = incrementFireTimes(project.getName(), metricName);
        if (!LEVEL_ERROR.equals(level) && currentTime >= times) {
            _level = LEVEL_ERROR;
            logger.info("连续告警次数达到{}次,升级到[错误],alert={}", times, alert);
        }
        return _level;
    }

    private int incrementFireTimes(String projectName, String metricName) {
        String metricNotifyKey = projectName + "_" + metricName;
        metricFireTimes.putIfAbsent(metricNotifyKey, new AtomicInteger(0));
        return metricFireTimes.get(metricNotifyKey).incrementAndGet();
    }

    private void resetFireTimes(String projectName, String metricName) {
        String metricNotifyKey = projectName + "_" + metricName;
        metricFireTimes.put(metricNotifyKey, new AtomicInteger(0));
    }

    /**
     * whether the dog barks, i.e. whether the metric value violates the threshold
     *
     * @param metricValue
     * @return
     */
    protected boolean bite(double metricValue) {
        if (StringUtils.equals("<", operator))
            return Double.compare(targetValue, metricValue) > 0;
        if (StringUtils.equals("=", operator))
            return Double.compare(targetValue, metricValue) == 0;
        if (StringUtils.equals(">", operator))
            return Double.compare(targetValue, metricValue) < 0;
        logger.warn("not support operator {} ,just support < , = , >", operator);
        return false;
    }

    public static Logger getLogger() { return logger; }
    public static void setLogger(Logger logger) { MetricDog.logger = logger; }
    public boolean getExcludeTimeMode() { return excludeTimeMode; }
    public void setExcludeTimeMode(boolean excludeTimeMode) { this.excludeTimeMode = excludeTimeMode; }
    public String getStartTime() { return startTime; }
    public void setStartTime(String startTime) { this.startTime = startTime; }
    public String getEndTime() { return endTime; }
    public void setEndTime(String endTime) { this.endTime = endTime; }

    @Override
    public String toString() {
        return "MetricDog{" +
                "name='" + name + '\'' +
                ", desc='" + desc + '\'' +
                ", targetValue=" + targetValue +
                ", operator='" + operator + '\'' +
                ", enable=" + enable +
                ", metricName='" + metricName + '\'' +
                ", excludeTimeMode=" + excludeTimeMode +
                ", startTime='" + startTime + '\'' +
                ", endTime='" + endTime + '\'' +
                ", level='" + level + '\'' +
                '}';
    }

    private static SimpleDateFormat sdf = new SimpleDateFormat("HH:mm:ss");

    /**
     * whether the dog is currently on duty
     *
     * @return
     */
    public boolean inWorking() {
        return enable && inWorkTime(new Date());
    }

    protected boolean inWorkTime(Date current) {
        Date now = null;
        try {
            now = sdf.parse(sdf.format(current));
            Date start = sdf.parse(startTime);
            Date end = sdf.parse(endTime);
            // in "exclude this time range" mode the condition is inverted
            if (excludeTimeMode) {
                return !(now.after(start) && now.before(end));
            } else {
                return now.after(start) && now.before(end);
            }
        } catch (ParseException e) {
            throw new RuntimeException(e);
        }
    }

    public String getLevel() { return level; }
    public void setLevel(String level) { this.level = level; }
}
\section[Introduction]{Introduction} A contact manifold $(\Sigma,\xi)$ is said to admit an \emph{exact contact embedding} if there exists an embedding $\iota \colon \Sigma \to V$ into an exact symplectic manifold $(V,\lambda)$ and a contact form $\alpha$ for~$(\Sigma,\xi)$ such that $\alpha - \iota^*\lambda$ is exact, and such that $\iota (\Sigma) \subset V$ is bounding. In this paper we suppose, in addition, that any target manifold $(V,\lambda)$ is convex, i.e., there exists an exhaustion $V= \bigcup_{k} V_k$ of~$V$ by compact sets $V_k \subset V_{k+1}$ with smooth boundary such that $\lambda |_{\partial V_k}$ is a contact form, and that the first Chern class of $(V, \lambda)$ vanishes on $\pi_2(V)$. An exact contact embedding is called \emph{displaceable} if $\iota (\Sigma)$ can be displaced from itself by a Hamiltonian isotopy of~$V$. We refer to~\cite{CF} for more details on exact contact embeddings, and for examples and obstructions to such embeddings. The \emph{mean Euler characteristic} of a simply-connected contact manifold was introduced in the third author's thesis~\cite{vK} in terms of contact homology, and was studied further in~\cite{E,GK}. Here, we shall consider the mean Euler characteristic of equivariant symplectic homology, which can be thought of as the mean Euler characteristic of a filling. For the definition see Section~\ref{s:mean}. Under additional assumptions, these notions coincide, see Corollary~\ref{cor:mec_invariant} and the subsequent remark. We say that a simply-connected cooriented contact manifold $(\Sigma,\alpha)$ is \emph{index-positive} if the mean index $\Delta(\gamma)$ of every periodic Reeb orbit~$\gamma$ is positive. Similarly, we say that $(\Sigma,\alpha)$ is \emph{index-negative} if the mean index $\Delta(\gamma)$ of every periodic Reeb orbit $\gamma$ is negative. Finally, we say that $(\Sigma,\alpha)$ is \emph{index-definite} if it is index-positive or index-negative. 
Recall that the mean index $\Delta$ is related to the Conley--Zehnder index $\mu_{CZ}$ as follows: For any non-degenerate Reeb orbit $\gamma$ in a contact manifold $(\Sigma^{2n-1},\alpha)$, its $N$-fold cover $\gamma^N$ satisfies \begin{equation} \label{eq:iteration_and_mean_index} \mu_{CZ}(\gamma^N) \,=\, N \Delta(\gamma) + e(N), \end{equation} where $e(N)$ is an error term bounded by~$n-1$, see~\cite[Lemma 3.4]{SZ}. In this note we prove the following theorem. \medskip \textbf{Theorem~A. } {\it Assume that $(\Sigma,\xi)$ is a $(2n-1)$-dimensional simply-connected contact manifold which admits a displaceable exact contact embedding. Suppose furthermore that $(\Sigma,\alpha)$ is index-definite for some $\alpha$ defining $\xi$. Then the following holds. \begin{itemize} \item[{\rm (i)}] $(\Sigma,\alpha)$ is index-positive, and the mean Euler characteristic of its filling is a half-integer. \smallskip \item[{\rm (ii)}] If, in addition, $(\Sigma,\alpha)$ is a rational homo\-logy sphere, then the mean Euler characteristic of its filling equals~$\frac{(-1)^{n+1}}{2}$. \end{itemize} } \medskip Given positive integers $a_0,\ldots,a_n$ one can define a Brieskorn manifold $\Sigma(a_0,\ldots,a_n)$ as the link of a certain singularity. Such a Brieskorn manifold $\Sigma(a_0,\ldots,a_n)$ is said to be a \emph{non-trivial Brieskorn sphere} if $a_i \neq 1$ for all~$i$. If $a_0,\ldots,a_n$ are pairwise relatively prime, and if $n>2$, then $\Sigma(a_0,\ldots,a_n)$ is homeomorphic to~$S^{2n-1}$. Brieskorn manifolds carry a natural contact structure. A trivial Brieskorn manifold is a round sphere with its standard contact structure in~$\R^{2n}$, and hence admits a displaceable exact contact embedding. \medskip \textbf{Corollary~B. } \emph{ A non-trivial Brieskorn sphere $\Sigma(a_0,\ldots,a_n)$ of dimension at least~$5$ whose exponents are pairwise relatively prime does not admit a displaceable exact contact embedding.
In particular, it does not admit an exact contact embedding into a subcritical Stein manifold whose first Chern class vanishes.} The restriction to manifolds of dimension at least~$5$ comes from the following observations. \begin{remark} In dimension $3$, non-trivial Brieskorn spheres are not simply-connected. Hence Reeb orbits in these manifolds can become contractible in the filling even if they are not contractible in the Brieskorn manifold. The mean Euler characteristic of symplectic homology, on the other hand, counts Reeb orbits that are contractible in the filling, so it cannot be determined by just considering the contact manifold by itself. \end{remark} We conclude this introduction with a few open problems. \smallskip 1. Does Corollary~B still hold true if we drop the convexity assumption on the target manifold $(V,\lambda)$, or the assumption that its first Chern class vanishes? \smallskip 2. Ritter proved in \cite{R} that the displaceability of $\Sigma$ implies the vanishing of the symplectic homology of the filling~$W$. It is conceivable that then in fact the equivariant symplectic homology of~$W$ vanishes. This would imply that the assumption in Theorem~A that $(\Sigma, \alpha)$ is index-definite can be omitted, see Remark~\ref{rem:definite}. \section[The mean Euler characteristic]{The mean Euler characteristic} \label{s:mean} Assume that $(W,\lambda)$ is a compact exact symplectic manifold, i.e.\ $\omega=d\lambda$ is a symplectic form on~$W$, with convex boundary $\Sigma=\partial W$. We assume throughout that the first Chern class $c_1(W)$ of $(W,d\lambda)$ vanishes on $\pi_2(W)$, and that $\Sigma$ is simply connected. For $i \in \mathbb{Z}$ we denote by $$ b_i(W) \,=\, \dim \big(SH^{S^1,+}_i(W;\mathbb{Q})\big) $$ the $i$-th Betti number of the positive part of the equivariant symplectic homology of~$W$ (as defined in~\cite{BO,V}). 
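The Betti numbers $b_i(W)$ just defined enter the mean Euler characteristic through the normalised alternating sums $\frac{1}{N}\sum_{i=-N}^{N}(-1)^i b_i(W)$. A small numerical sketch of this limit follows; it assumes the model case where $b_i=1$ precisely in degrees $i=n-1,\,n+1,\,n+3,\ldots$ (as for the ball filling of the round sphere $S^{2n-1}$), for which the partial sums converge to $(-1)^{n+1}/2$, matching Theorem~A(ii).

```python
def mean_euler(betti, N):
    """Partial normalised alternating sum (1/N) * sum_{i=-N}^{N} (-1)^i b_i."""
    return sum((-1) ** i * betti(i) for i in range(-N, N + 1)) / float(N)

def betti_sphere(n):
    # assumed Betti numbers for the ball filling of the round sphere S^{2n-1}:
    # rank one in degrees n-1, n+1, n+3, ...
    return lambda i: 1 if i >= n - 1 and (i - (n - 1)) % 2 == 0 else 0

for n in (2, 3):
    print(n, mean_euler(betti_sphere(n), 100000))  # approaches (-1)**(n+1) / 2
```

The sign alternation with $n$ comes only from the parity of the lowest degree $n-1$; the boundedness of the $b_i$ is what guarantees that the limit exists at all.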
For later use, we shall call a homology $H_*(C_*,\partial)$ \emph{index-positive} if there exists $N$ such that $H_i(C_*,\partial)=0$ for all $i<N$. Note here that if $(\Sigma,\alpha)=\partial(W,d\lambda)$ is index-positive in the previously defined sense, then $SH^{S^1,+}_*(W)$ is index-positive in the homological sense. The notions index-negative and index-definite are defined on homology level in a similar way. \begin{definition} $W$ is called \emph{homologically bounded} if the Betti numbers~$b_i(W)$ are uniformly bounded. \end{definition} If $W$ is homologically bounded we define its \emph{mean Euler characteristic} as $$ \chi_m(W) \,=\, \lim_{N \to \infty} \frac{1}{N} \sum_{i=-N}^N (-1)^i b_i(W). $$ The uniform bound on the Betti numbers implies that the limit exists. \medskip Now assume that $(\Sigma,\alpha)$ is a contact manifold with the property that all closed Reeb orbits are non-degenerate. We recall that a closed Reeb orbit $\gamma$ is called~\emph{bad} if it is the $m$-fold cover of a Reeb orbit~$\gamma'$ and the difference of Conley--Zehnder indices $\mu(\gamma)-\mu(\gamma')$ is odd. A closed Reeb orbit which is not bad is called~\emph{good}. \begin{definition} $\Sigma$ is called \emph{dynamically bounded} if there exists a uniform bound for the number of good closed Reeb orbits of Conley--Zehnder index~$i$ for every $i \in \mathbb{Z}$. \end{definition} We denote by $\mathfrak{G}_N$ the set of good closed Reeb orbits of Conley--Zehnder index lying between~$-N$ and~$N$. If $\Sigma$ is dynamically bounded, we define its mean Euler characteristic by $$ \chi_m(\Sigma) \,=\, \lim_{N \to \infty} \frac{1}{N} \sum_{\gamma \in \mathfrak{G}_N} (-1)^{\mu(\gamma)}. $$ \begin{remark} Ginzburg and Kerman, \cite{GK}, define the positive and negative part of the mean Euler characteristic of contact homology by summing over all positive and all negative degrees, respectively. Their mean Euler characteristic is half of the one we define.
\end{remark} If $W$ is a compact exact symplectic manifold, we say that $W$ is dynamically bounded if its boundary $\Sigma=\partial W$ is dynamically bounded. \begin{thm}\label{me} Assume that $W$ is dynamically bounded. Then it is homologically bounded and $$ \chi_m(\partial W) \,=\, \chi_m(W). $$ \end{thm} \begin{cor} \label{cor:mec_invariant} If $W$ is dynamically bounded, then its mean Euler characteristic is independent of the filling. \end{cor} \begin{remark} Since the generators of the positive part of equivariant symplectic homology and contact homology are the same, the mean Euler characteristic can also be expressed in terms of contact homology data. This was done in the original definition in~\cite{vK}. Note, however, that the degree of a Reeb orbit~$\gamma$ in contact homology is defined as $\mu_{CZ}(\gamma)+n-3$ if the dimension of the contact manifold is~$2n-1$. This can result in a sign difference for the mean Euler characteristic. \end{remark} \textbf{Proof of Theorem~\ref{me}:} If $\Gamma$ denotes the set of all closed Reeb orbits on $\Sigma=\partial W$, then the critical manifold $\mathfrak{C}$ for the positive equivariant part of the action functional of classical mechanics is given by $$ \mathfrak{C} \,=\, \bigcup_{\gamma \in \Gamma} \gamma \times_{S^1} ES^1. $$ If $\gamma$ is a $k$-fold cover of a simple Reeb orbit, then the isotropy group of the action of~$S^1$ on~$\gamma$ is~$\mathbb{Z}_k$. Therefore, $$ \gamma \times_{S^1} ES^1 \,=\, B\mathbb{Z}_k $$ is the infinite dimensional lens space. The Morse--Bott spectral sequence, see \cite[Section 7.2.2]{FOOO}, tells us that there exists a spectral sequence converging to $SH_*^{S^1,+}(W;\mathbb{Q})$, whose second page is given by $$ E^2_{j,i} \,=\, \bigoplus_{\substack{\gamma \in \Gamma \\ \mu(\gamma)=i}} H_j(\gamma \times_{S^1} ES^1 ;\mathcal{O}_\gamma). 
$$ The twist bundle $\mathcal{O}_\gamma$ is trivial if $\gamma$ is good, and equals the orientation bundle of the lens space if $\gamma$ is bad, see \cite{BO1,BO,V1}. The homology of an infinite dimensional lens space with rational coefficients equals $\mathbb{Q}$ in degree zero and vanishes otherwise. Its homology with coefficients twisted by the orientation bundle is trivial. Therefore the second page of the Morse--Bott spectral sequence simplifies to $$ E^2_{j,i} \,=\, \bigoplus_{\substack{\gamma \in \mathfrak{G} \\ \mu(\gamma)=i}} \mathbb{Q} $$ where $\mathfrak{G} \subset \Gamma$ are the good closed Reeb orbits. We conclude that the mean Euler characteristic of~$E^2$ coincides with $\chi_m(\Sigma)$. Since the Euler characteristic is unchanged if we pass to homology, we deduce that $\chi_m(\Sigma)$ equals~$\chi_m(W)$. This finishes the proof of the theorem. \hfill $\square$ \medskip In our application to Brieskorn manifolds, we will compute the mean Euler characteristic for a contact form of Morse--Bott type. Brieskorn manifolds can be thought of as Boothby--Wang orbibundles over symplectic orbifolds, since there is a contact form for which all Reeb orbits are periodic. For such special contact manifolds the mean Euler characteristic has a particularly simple form. We start with introducing some notation to state the result. Consider a contact manifold $(\Sigma,\alpha)$ with Morse--Bott contact form $\alpha$ having only finitely many orbit spaces, so that we have an $S^1$-action on $\Sigma$. Denote the periods by $T_1< \ldots< T_k$, so all $T_i$ divide~$T_k$. Denote the subspace consisting of points on periodic Reeb orbits with period $T_i$ in $\Sigma$ by~$N_{T_i}$. \begin{lemma} \label{lemma:H^1_trivial} If $H^1(N_{T_i};\Z_2)=0$, then $H^1(N_{T_i} \times_{S^1} ES^1; \Z_2)=0$. \end{lemma} \begin{proof} Consider the Leray spectral sequence for $N_{T_i} \times_{S^1} ES^1$ as a fibration over~$\C P^\infty$. 
As $\pi_1(\C P^\infty)=0$, the Leray spectral sequence with $\Z_2$-coefficients converges to the cohomology of $N_{T_i} \times_{S^1} ES^1$. The $E_2$-page is given by $E_2^{pq} = H^p (\C P^\infty; H^q(N_{T_i};\Z_2) )$. Since $H^1(N_{T_i};\Z_2)=0$ by assumption, there are no degree~$1$-terms on $E_2$. Hence there are no degree~$1$-terms in $E_\infty$ either, and $H^1(N_{T_i} \times_{S^1} ES^1; \Z_2)=0$. \end{proof} Finally we introduce the function $$ \phi_{T_i;T_{i+1},\ldots,T_k} \,=\, \# \{ a \in \N \mid aT_i < T_k \text{ and } a T_i \notin T_j \N \text{ for } j=i+1,\ldots, k \} . $$ \begin{proposition} \label{prop:mean_euler_S^1-orbibundle} Let $(\Sigma,\alpha)$ be a contact manifold as above and assume that it admits an exact filling $(W,d\lambda)$. Suppose that $c_1(\xi=\ker \alpha)=0$, so that the Maslov index is well-defined. Let $\mu_P:=\mu(\Sigma)$ be the Maslov index of a principal orbit of the Reeb action. Assume that $H^1(N_T\times_{S^1} ES^1; \Z_2)=0$ for all $N_T$ and that there are no bad orbits. If $\mu_P\neq 0$ then the following hold. \begin{itemize} \item $(\Sigma,\alpha)$ is homologically bounded. \item $(\Sigma,\alpha)$ is index-positive if $\mu_P>0$ and index-negative if $\mu_P<0$. \item The mean Euler characteristic satisfies the following formula, $$ \chi_m(W)=\frac{\sum_{i=1}^k (-1)^{\mu(S_{T_i})-\frac{1}{2}\dim S_{T_i} } \phi_{T_i;T_{i+1},\ldots,T_k} \chi^{S^1}(N_{T_i})}{|\mu_P|}. $$ \end{itemize} \end{proposition} Here $\chi^{S^1}(N_T)$ denotes the Euler characteristic of the $S^1$-equivariant homology of the $S^1$-manifold~$N_T$, and $S_T=N_T/S^1$ denotes the orbit space of~$N_T$. \begin{proof} We use the notation $$ H^{S^1}_{p}(N_T;\Q) \,:=\, H_{p}(N_T\times_{S^1}ES^1;\Q) . $$ As before, there is the Morse--Bott spectral sequence converging to $SH_*^{S^1,+}(W;\Q)$. The second page is given by $$ E^2_{pq}=\bigoplus_{\substack{N_T \\ \mu(S_T)-\frac{1}{2}\dim S_T=q}} H^{S^1}_{p}(N_T;\Q). $$ Indeed, the coefficient ring is not twisted as $H^1(N_T\times_{S^1} ES^1;\Z_2)=0$.
The period of a principal orbit is $T_k$, so we have $\phi^R_{T_k}=\id$. Since the Robbin--Salamon version of the Maslov index is additive under concatenations, it follows that for any set of periodic orbits $N_T$ with return time $T>T_k$ we have $$ \mu(N_T)=\mu(N_{T_k})+\mu(N_{T-T_k}). $$ It follows that the $E^2$-page is periodic in the $q$-direction with period $|\mu(N_{T_k})|=|\mu_P|$ (as $N_{T_k}=\Sigma$). Since we have assumed that $\mu_P\neq 0$, we see that $SH^{S^1,+}(W)$ is homologically bounded. Moreover, by the definition of the Maslov index~$\mu_P$, the sign of $\mu_P$ determines whether $(\Sigma,\alpha)$ is index-positive or index-negative. Finally, the mean Euler characteristic can be obtained by summing all contributions in one period and dividing by the period. This gives $$ \chi_m(W)=\frac{\sum_{T \leq T_k} (-1)^{\mu(S_T)-\frac{1}{2}\dim S_T }\chi^{S^1}(N_T)}{|\mu_P|}. $$ Now observe that the definition of the functions $\phi_{T_i;T_{i+1},\ldots,T_{k}}$ is such that it counts how often multiple covers of a set of periodic orbits~$N_{T_i}$ appear in one period of the $E^2$-page without being contained in a larger orbit space. We thus obtain the above formula. \end{proof} \begin{remark} This proposition is a generalization of \cite[Example 8.2]{E}, and Espina's methods could also be used to show the above. \end{remark} \section[Proof of Theorem~A]{Proof of Theorem~A} In the first two paragraphs of this section we prove three general statements that in particular imply assertion~(i) of Theorem~A. In paragraphs~\ref{ss:dis} and \ref{ss:rat} we then work out the situation for rational homology spheres. \subsection{Two general statements} \label{ss:two} \begin{proposition} \label{prop:equivariant} Assume that $(\Sigma,\xi)$ is a $(2n-1)$-dimensional simply-connected contact manifold admitting a displaceable exact contact embedding into $(V,d\lambda)$. Denote the compact component of $V \setminus \Sigma$ by~$W$.
Suppose furthermore that $(\Sigma,\alpha)$ is index-positive. Then $$ SH^{S^1,+}_{*}(W) \,\cong\, H_{*+n-1}^{S^1}(W,\Sigma). $$ \end{proposition} \begin{corollary} \label{cor:mean_euler} Under the assumptions of Proposition~\ref{prop:equivariant}, $$ \chi_m(W) \,=\, (-1)^{n+1}\, \frac{\chi(W,\Sigma)}{2}. $$ \end{corollary} \begin{proof}[Proof of Proposition~\ref{prop:equivariant}] Consider the $S^1$-equivariant version of the Viterbo long exact sequence, $$ \ldots \longrightarrow H^{S^1}_{*+n}(W,\Sigma) \longrightarrow SH^{S^1}_*(W) \longrightarrow SH^{S^1,+}_{*}(W) \longrightarrow H^{S^1}_{*+n-1}(W,\Sigma) \longrightarrow \ldots $$ from~\cite{V,BO}. By assumption $SH^{S^1,+}_*(W)$ is index-positive. The equivariant homology $H^{S^1}_{*}(W,\Sigma)$ is also index-positive, so we conclude that $SH^{S^1}_*(W)$ must be index-positive as this group is sandwiched between $0$'s for sufficiently negative~$*$. By Ritter's theorem \cite[Theorem 97]{R} displaceability of~$\Sigma$ implies $SH_*(W)=0$. The Gysin sequence for equivariant and non-equivariant symplectic homology from~\cite{BO} reads $$ \ldots \longrightarrow SH_*(W) \longrightarrow SH^{S^1}_*(W) \stackrel{D_*}{\longrightarrow} SH^{S^1}_{*-2}(W) \longrightarrow SH_{*-1} (W) \longrightarrow \ldots $$ so all maps $D_*$ are isomorphisms. Since we just showed that $SH^{S^1}_*(W)$ is index-positive, it must vanish in all degrees. Finally consider the equivariant version of the Viterbo sequence once again. Since $SH^{S^1}_*(W)$ vanishes, it follows that $$ H_{*+n}^{S^1}(W,\Sigma) \,\cong\, SH^{S^1,+}_{*+1}(W). $$ \end{proof} Corollary~\ref{cor:mean_euler} follows from Proposition~\ref{prop:equivariant} by observing that $H^{S^1}_*(W,\Sigma) \cong H_*(W,\Sigma)\otimes H_*(\C P^\infty)$, since the $S^1$-action on $(W,\Sigma)$ is trivial (by construction of Viterbo's long exact sequence).
In other words, $H^{S^1}_*(W,\Sigma)$ consists of infinitely many copies of $H_*(W,\Sigma)$ which are degree-shifted by $0,2,4,\ldots$ \begin{remark} \label{rem:definite} It is conceivable that the displaceability of~$W$ implies that $SH^{S^1}_*(W)$ vanishes. The conclusion of Proposition~\ref{prop:equivariant}, without the assumption that $(\Sigma, \alpha)$ is index-positive, would then follow at once from Viterbo's $S^1$-equivariant long exact sequence. Hence, the assumption in Theorem~A that $(\Sigma, \alpha)$ is index-definite could be omitted. \end{remark} \subsection{Index-positivity} \begin{lemma} Assume that $(\Sigma,\xi)$ is a $(2n-1)$-dimensional simply-connected contact manifold admitting a displaceable exact contact embedding into $(V,d\lambda)$. Denote the compact component of $V \setminus \Sigma$ by $W$. Suppose furthermore that $(\Sigma,\xi=\ker \alpha)$ is index-definite. Then $(\Sigma,\alpha)$ is index-positive. \end{lemma} \begin{proof} Again by Ritter's theorem \cite[Theorem 97]{R} we conclude that $SH_*(W)=0$. Hence the Viterbo long exact sequence from \cite{V,BO} reduces to $$ \ldots \longrightarrow 0 \longrightarrow SH^+_*(W) \stackrel{\cong}{\longrightarrow} H_{*+n-1}(W,\Sigma) \longrightarrow 0 \longrightarrow \ldots, $$ so we see that $SH^+_{n+1}(W)\cong H_{2n}(W,\Sigma)\cong H^0(W)\neq 0$. Now suppose that $(\Sigma,\alpha)$ is index-negative. On one hand, our previous observation shows that there is a generator of degree~$n+1$. On the other hand, if $\alpha$ is a non-degenerate contact form, then the iteration formula~\eqref{eq:iteration_and_mean_index} tells us that an $N$-fold cover of a Reeb orbit~$\gamma$ satisfies $$ |\mu_{CZ}(\gamma^N) - N\Delta(\gamma) | \,\leq\, n-1, $$ where $\Delta(\gamma)$ denotes the mean index of the Reeb orbit $\gamma$. Since $(\Sigma,\alpha)$ is index-negative, $\Delta(\gamma)<0$, so $\mu_{CZ}(\gamma^N)< n-1$. In particular, no generator of $SH^{+}_{n+1}(W)$ can be realized by a Reeb orbit. 
This contradiction shows that $(\Sigma,\alpha)$ must be index-positive. \end{proof} \subsection{Displaceability and splitting the sequence of the pair} \label{ss:dis} In the following lemma, $(V,\Omega)$ is a connected manifold endowed with a volume form, and $W \subset V$ is a compact connected submanifold of the same dimension as~$V$, with connected boundary, such that the volume of the complement of~$W$ in~$V$ is infinite. We say that the hypersurface $\Sigma = \partial W \subset V$ is \emph{volume preserving displaceable} if there exists a compactly supported smooth family of volume preserving vector fields $X_t$, $t \in [0,1]$, on~$V$ such that the time-$1$ map~$\phi$ of its flow satisfies $\phi(\Sigma) \cap \Sigma =\emptyset$. \begin{lemma}\label{disp} Assume that $\Sigma=\partial W$ is volume preserving displaceable in~$V$. Then the projection homomorphism $p_*\colon H_*(W;\mathbb{Q}) \to H_*(W,\Sigma;\mathbb{Q})$ vanishes. \end{lemma} \textbf{Proof: } We prove the lemma in two steps. For the first step we need the assumption about volume preservation. \\ \\ \textbf{Step~1: } \emph{The volume preserving diffeomorphism $\phi$ displacing $\Sigma$ displaces the whole filling, i.e.~ $\phi(W) \cap W=\emptyset$.} \\ \\ We divide the proof of Step~1 into three substeps. \\ \\ \textbf{Step~1a: } \emph{There exists a point $x \in W$ such that $\phi(x) \notin W$.} \\ \\ We argue by contradiction and assume that $\phi(W) \subset W$. In particular, the restriction of~$\phi$ to~$W$ gives a diffeomorphism between the two manifolds with boundary~$W$ and $\phi(W) \subset W$. Therefore, if $y \in W$ satisfies $\phi(y) \in \partial W$, it follows that $y \in \partial W$.
We conclude $$\phi(W) \cap \partial W \,\subset\, \phi(\partial W).$$ Since $\phi$ displaces the boundary from itself, we obtain $$\phi(W) \cap \partial W = \emptyset .$$ Denoting by $\mathrm{int}$ the interior of a set, we can write this equivalently as $$\phi(W) \,\subset\, \mathrm{int}(W).$$ Hence $\phi(W)$ is a strict subset of~$W$. Since $W$ is compact, its volume is finite. Therefore, the volume of~$\phi(W)$ is strictly less than the volume of~$W$. This contradicts the fact that $\phi$ is volume preserving. Therefore the assertion of Step~1a has to hold true. \\ \\ \textbf{Step~1b: } $\phi(\partial W) \subset W^c$. \\ \\ Since $\phi$ displaces $\partial W$ from itself, we have $\phi(\partial W) \subset \mathrm{int}(W) \cup W^c$. Since $\partial W$ is connected by assumption, we either have $\phi(\partial W) \subset \mathrm{int}(W)$ or $\phi(\partial W) \subset W^c$. Therefore it suffices to show that $\phi(\partial W) \cap W^c$ is not empty. Since $W^c$ has infinite volume but $\phi(W)$ has finite volume by assumption, we conclude that there exists a point $y_0 \in W^c$ such that $y_0 \notin \phi(W)$. Step~1a implies the existence of a point $y_1 \in W^c$ satisfying $y_1 \in \phi(W)$. Since $V,W$, and $\partial W$ are connected by assumption, we obtain from the Mayer--Vietoris long exact sequence that $W^c$ is connected as well. Therefore there exists a path $y \in C^0([0,1],W^c)$ satisfying $y(0)=y_0$ and $y(1)=y_1$. Since $\phi(W)$ is closed and the path~$y$ connects the point $y_0 \notin \phi(W)$ with the point $y_1 \in \phi(W)$, there exists $t \in (0,1]$ such that $y(t) \in \partial (\phi(W)) = \phi(\partial W)$. Therefore $\phi(\partial W) \cap W^c$ is not empty, which finishes the proof of Step~1b. \\ \\ \textbf{Step~1c: } \emph{We prove Step~1.} \\ \\ We assume by contradiction that there exists a point $x_0 \in W \cap \phi(W)$. By Step~1a and the fact that $\phi$ is volume preserving, we conclude that $W$ cannot be a subset of~$\phi(W)$. Therefore there has to exist a point $x_1 \in W \cap (\phi(W))^c$ as well.
Since $W$ is connected by assumption, there exists a path $x \in C^0([0,1],W)$ satisfying $x(0)=x_0$ and $x(1)=x_1$. As in Step~1b there has to exist $t \in (0,1)$ such that $x(t) \in \phi(\partial W)$. But this contradicts the assertion of Step~1b. The proof of Step~1 is complete. \\ \\ \textbf{Step~2: } \emph{If a diffeomorphism $\phi$ isotopic to the identity satisfies $\phi(W) \cap W =\emptyset$, then the projection homomorphism $p_* \colon H_*(W;\mathbb{Q}) \to H_*(W,\partial W; \mathbb{Q})$ vanishes.} \\ \\ We prove the dual version in de~Rham cohomology, i.e.~we show that the inclusion homomorphism from the compactly supported de~Rham cohomology of~$W$ to the de~Rham cohomology of~$W$ vanishes. To see this, pick $\omega \in \Omega^k(W)$ which is compactly supported and closed. We show that there exists $\eta \in \Omega^{k-1}(W)$ not necessarily compactly supported such that $\omega=d\eta$. Since $\omega$ is compactly supported, we can extend it trivially to a closed $k$-form on~$V$, which we refer to as~$\widetilde{\omega}$. Since $\phi$ is isotopic to the identity, we have $\phi=\phi^1$ for a flow $\{\phi^t\}_{t \in [0,1]}$ generated by a time dependent vector field~$X_t$. By the Cartan formula and the fact that $\widetilde{\omega}$ is closed we obtain \begin{eqnarray*} \frac{d}{dt}\big(\phi^t\big)^*\widetilde{\omega} &=& \big(\phi^t\big)^* L_{X_t}\widetilde{\omega}\\ &=&\big(\phi^t\big)^*\big(d i_{X_t}+i_{X_t} d\big)\widetilde{\omega}\\ &=&d \big(\phi^t\big)^* i_{X_t}\widetilde{\omega}. \end{eqnarray*} We define a $(k-1)$-form on $V$ by the formula $$\widetilde{\eta} \,=\, -\int_0^1 \big(\phi^t\big)^* i_{X_t}\widetilde{\omega}\, dt . $$ By the previous computation we get $$\widetilde{\omega}-\phi^*\widetilde{\omega} \,=\, d\widetilde{\eta}.$$ Now set $$\eta \,=\, \widetilde{\eta}|_{W} \,\in\, \Omega^{k-1}(W).$$ Since $\phi$ displaces $W$ we obtain $$\omega \,=\, d\eta.$$ This finishes the proof of Step~2 and hence of Lemma~\ref{disp}.
\hfill $\square$ \subsection{Rational homology spheres and completion of the proof of Theorem~A~(ii)} \label{ss:rat} In the case of rational homology spheres, the homology of the filling is completely determined: \begin{lemma}\label{homi} Suppose $(\Sigma,\xi)$ is a $(2n-1)$-dimensional simply-connected rational homology sphere admitting a displaceable exact contact embedding into $(V,d\lambda)$. Let $W$ denote the compact component of $V \setminus \Sigma$. Then $$ H_*(W,\Sigma;\mathbb{Q}) \,=\, \left\{\begin{array}{cl} \mathbb{Q} & \mbox{if }\, *=2n, \\ \{0\} & \textrm{else.} \end{array}\right. $$ \end{lemma} \begin{proof} We can assume that $V$ has infinite volume. Indeed, if $V$ has finite volume, we choose a compact convex manifold $V_k$ in the exhaustion of~$V$ such that $W \subset V_k$, and replace $V$ by the manifold $\widehat V$ obtained by attaching cylindrical ends to the boundary of~$V_k$. Notice that $\widehat V$ is also an exact convex manifold whose first Chern class vanishes on $\pi_2(\widehat V)$. In view of Lemma~\ref{disp} the long exact homology sequence for the pair $(W,\Sigma)$ splits for every $k \in \mathbb{Z}$ into short exact sequences $$ 0 \longrightarrow H_k(W,\Sigma;\mathbb{Q}) \stackrel{\partial}\longrightarrow H_{k-1}(\Sigma;\mathbb{Q}) \stackrel{i_*} \longrightarrow H_{k-1}(W;\mathbb{Q}) \longrightarrow 0 . $$ By using the fact that $\Sigma$ is a rational homology sphere as well as $H_0(W;\mathbb{Q}) = \mathbb{Q}$ we conclude that $H_*(W,\Sigma;\mathbb{Q})=\{0\}$ for $* \neq 2n$. Since $H^{2n}(W,\Sigma;\mathbb{Q})$ is Poincar\'e dual to $H_0(W;\mathbb{Q})$, the result for $*=2n$ also follows. \end{proof} Lemma~\ref{homi} shows that the Euler characteristic of the relative homology is given by $$ \chi(W,\Sigma)=1. $$ Assertion~(ii) of Theorem~A follows from this and Corollary~\ref{cor:mean_euler}. \section[Brieskorn manifolds]{Brieskorn manifolds} \label{s:Brieskorn} Choose positive integers $a_0,\ldots,a_n$.
The Brieskorn variety $V_\epsilon(a_0,\ldots,a_n)$ is defined as the following subvariety of $\C^{n+1}$, $$ V_\epsilon(a_0,\ldots,a_n) \,=\, \biggl\{ (z_0,\ldots,z_n)\in \C^{n+1}~|~\sum_{i=0}^n z_i^{a_i}=\epsilon \biggr\} . $$ For $\epsilon=0$, this variety is singular unless one of the exponents~$a_i$ is equal to~$1$. For $\epsilon \neq 0$, we have a complex submanifold of~$\C^{n+1}$. Given a Brieskorn variety $V_0(a_0,\ldots,a_n)$ we define the Brieskorn manifold as $$ \Sigma(a_0,\ldots,a_n) \,:=\, V_0(a_0,\ldots,a_n) \cap S^{2n+1}_R, $$ where $S^{2n+1}_R$ is the sphere of radius $R>0$ in~$\C^{n+1}$. For the diffeomorphism type, the precise value of~$R$ does not matter. Brieskorn manifolds carry a natural contact structure, which comes from the following construction. \begin{lemma} Let $(W,i)$ be a complex variety together with a function $f$ that is plurisubharmonic away from singular points. Then regular level sets $M=f^{-1}(c)$ carry a contact structure $\xi = TM \cap i \,TM = \ker (-df \circ i)|_M$. \end{lemma} Applying this lemma with the plurisubharmonic function $f=\sum_j\frac{a_j}{8}|z_j|^2$ we obtain the particularly nice contact form $$ \alpha \,=\, \frac{i}{8} \sum_j a_j \left(z_j d\bar z_j-\bar z_j dz_j \right) $$ for this natural contact structure. Its Reeb vector field at radius~$R=1$ is given by $$ R_\alpha \,=\, 4i \sum_j \frac{1}{a_j} (z_j \partial_{z_j}- \bar z_j \partial_{\bar z_j}) . $$ The Reeb flow therefore is $$ \phi^{R_\alpha}_t (z_0,\ldots,z_n) \,=\, \left( e^{4it/a_0}z_0,\ldots,e^{4it/a_n}z_n \right). $$ We thus see that all Reeb orbits are periodic. This allows us to interpret Brieskorn manifolds as Boothby--Wang bundles over symplectic orbifolds. \begin{proposition} \label{prop:homology_sphere} Brieskorn manifolds admit a Stein filling, and their contactomorphism type does not depend on the radius~$R$ of the sphere used to define them. \end{proposition} Indeed, by definition, Brieskorn manifolds are singularly fillable.
One can smoothen this filling by taking $\epsilon \neq 0$, and consider $V_\epsilon$ rather than~$V_0$. The resulting contact structure is contactomorphic by Gray stability. Furthermore, $V_\epsilon$ gives then the Stein filling. Gray stability can also be used to show independence of the radius~$R$, see also Theorem~7.1.2 from~\cite{G}. \subsection{Brieskorn manifolds and homology spheres} Let us start by citing some theorems from~\cite{HM}. This book gives precise conditions for Brieskorn manifolds to be integral homology spheres. However, we shall restrict ourselves to the following case. \begin{proposition} \label{s5:prop1} If $a_0,\ldots,a_n$ are pairwise relatively prime, then $\Sigma(a_0,\ldots,a_n)$ is an integral homology sphere. \end{proposition} Furthermore, higher dimensional Brieskorn manifolds, i.e.~$\dim \Sigma>3$, are always simply-connected, so we in fact find \begin{theorem} \label{s5:t} If $a_0,\ldots,a_n$ are pairwise relatively prime, and if $n>2$, then $\Sigma(a_0,\ldots,a_n)$ is homeomorphic to~$S^{2n-1}$. \end{theorem} \begin{remark} \label{rem:exponent=1} If one of the exponents $a_j$ is equal to~$1$, then the resulting Brieskorn manifold $(\Sigma(a_0,\ldots,a_n),\alpha)$ is contactomorphic to the standard sphere $(S^{2n-1},\alpha_0)$. Indeed, in this case the Brieskorn variety~$V_\epsilon(a_0,\ldots,a_n)$ is biholomorphic to~$\C^{n}$, as we can regard the variety as a graph. \end{remark} \subsection{Formula for the mean Euler characteristic for Brieskorn manifolds} We can think of Brieskorn manifolds as Boothby--Wang orbibundles over symplectic orbifolds. However, all the essential data is contained in the $S^1$-equivariant homology groups associated with the Reeb action. The following lemma will hence be useful. \begin{lemma} \label{lemma:S^1-euler_characteristic} Let $N$ be a rational homology sphere of dimension $2n+1$ with a fixed-point free $S^1$-action $N \times S^1 \to N$. Then $$ H_*^{S^1}(N;\Q) \,\cong\, H_*(\C \P^n;\Q). 
$$ In particular, $$ \chi^{S^1}(N) \,=\, n+1. $$ \end{lemma} \begin{proof} Note that $N \times ES^1$ carries a free $S^1$-action, so we can think of $N \times ES^1$ as an $S^1$-bundle over $N \times_{S^1} ES^1$. We consider the Gysin sequence for this space with $\Q$-coefficients. Since $N$ is a rational homology sphere of dimension $2n+1$ and $ES^1$ is contractible, all homology groups of $N \times ES^1$ except in dimension~$0$ and $2n+1$ vanish. Hence the Gysin sequence reduces to $$ \underset{\cong 0}{H_*(N)} \stackrel{\pi_*}{\longrightarrow} \underset{= H^{S^1}_*(N)}{H_*(N\times_{S^1}ES^1)}\stackrel{\cap e}{\longrightarrow}\underset{= H^{S^1}_{*-2}(N)}{H_{*-2}(N\times_{S^1}ES^1) } \longrightarrow \underset{\cong 0}{H_{*-1}(N)} $$ for $1<*<2n+1$. This shows that $H_*^{S^1}(N;\Q) \cong H_*(\C \P^n;\Q)$ for $*<2n+1$. To see that there are no other terms, we shall argue that $H_*^{S^1}(N;\Q)$ is bounded. For this, choose an $S^1$-equivariant Morse--Bott function $f \colon N \to \R$, see \cite[Lemma 4.8]{W} for the existence of such a function. Define a Morse--Bott function \begin{align*} \tilde f \colon N \times_{S^1} ES^{1} & \longrightarrow \R \\ [x,v] & \longmapsto f(x). \end{align*} Consider the Morse--Bott spectral sequence for $H_*(N \times_{S^1} ES^{1};\Q)$ with respect to the Morse--Bott function~$\tilde f$. Its $E^2$-page is given by $E^2_{pq} = H_q(R_p;\Q)$, where $R_p$ are the critical manifolds of $\tilde f$ with index~$p$. Again, by \cite[Section 7.2.2]{FOOO} this sequence converges to $H_*(N \times_{S^1} ES^{1};\Q)$. Note that the critical manifolds form infinite-dimensional lens spaces, so $H_q(R_p;\Q) \cong \Q$ if $q=0$ and $0$ otherwise. Since there are only finitely many critical manifolds (because $N$ is compact), it follows that $H_*^{S^1}(N;\Q)$ is bounded. With this in mind, we reexamine the Gysin sequence. Assume that $H_k^{S^1}(N;\Q)$ is non-zero for some $k\geq 2n+1$. Then $H_{k+2}^{S^1}(N;\Q)$ is non-zero as well, and so on.
Hence $H_*^{S^1}(N;\Q)$ is not bounded, which contradicts the boundedness established above. The lemma follows. \end{proof} \begin{remark} Strictly speaking, $N \times_{S^1} ES^1$ has no manifold structure. Recalling $ES^1 = S^\infty$, we can, however, approximate this space by $N \times_{S^1} S^{2M+1}$ for large $M$. For the latter space, the above argument works, and can be adapted to show triviality of $H_i (N \times_{S^1} S^{2M+1};\Q)$ for $i \geq 2n+1$ and $i<2M$. \end{remark} \begin{proposition} \label{prop:mean_euler_Brieskorn} The Brieskorn manifold $\Sigma(a_0,\ldots,a_n)$ with its natural contact form~$\alpha$ is index-positive if $\sum_j \frac{1}{a_j} >1$, and index-negative if $\sum_j \frac{1}{a_j} <1$. Furthermore, if the exponents $a_0,\ldots,a_n$ are pairwise relatively prime, then the mean Euler characteristic of $\Sigma(a_0,\ldots,a_n)$ is given by \begin{equation} \label{eq:mean_euler} \chi_m(\Sigma(a_0,\ldots,a_n),\alpha) \,=\, (-1)^{n+1}\, \frac{n+(n-1) \sum_{i_0}(a_{i_0}-1)+\ldots + 1 \sum_{i_0<\ldots< i_{n-2} }(a_{i_0}-1)\cdots (a_{i_{n-2}}-1) }{ 2 | (\sum_j a_0 \cdots \widehat{a_j}\cdots a_n) -a_0\cdots a_n| } \end{equation} \end{proposition} \begin{proof} The proof is a direct application of Proposition~\ref{prop:mean_euler_S^1-orbibundle}. The principal orbits have period $a_0 \cdots a_n$. Exceptional orbits have periods $a_0, \ldots, a_n,\, a_0 a_1, \ldots, a_{n-1}a_n, \ldots, a_1 \cdots a_n$. Given a collection of exponents $I=\{ a_{i_1},\ldots,a_{i_k} \}\subset \{ a_0,\ldots, a_n \}$ we denote the associated subset of periodic orbits with period $a_{i_1}\cdots a_{i_k}$ by $N_I$. In~\cite{vK} the Maslov index of all periodic Reeb orbits is computed. For the principal orbit, the result is $$ \mu_P \,:=\, 2 \lcm_{i}a_i \left( \sum_j \frac{1}{a_j} -1 \right) \,=\, 2 \left( \sum_j a_0 \cdots \widehat{a_j}\cdots a_n -a_0\cdots a_n \right). $$ We check that the conditions of Proposition~\ref{prop:mean_euler_S^1-orbibundle} are satisfied.
By Proposition~\ref{s5:prop1} it follows that $H^1(N_I;\Z_2)=0$ if the index set~$I$ has more than~$2$ elements (i.e.~$\dim N_I>1$), so Lemma~\ref{lemma:H^1_trivial} applies. Furthermore, the index computations in~\cite{vK} show that there are no bad orbits. Hence Proposition~\ref{prop:mean_euler_S^1-orbibundle} applies, so $\Sigma(a_0,\ldots,a_n)$ is index-positive if $\sum_j \frac{1}{a_j} >1$ and index-negative if $\sum_j \frac{1}{a_j} <1$. Furthermore, the $S^1$-equivariant Euler characteristics needed in Proposition~\ref{prop:mean_euler_S^1-orbibundle} are obtained from Lemma~\ref{lemma:S^1-euler_characteristic}. The formula for the Maslov index of the exceptional orbits is slightly more complicated, see Formula~(3.1) from~\cite{vK2}, but we only need to observe that the parity of $\mu(S_{T_i})-\frac 12 \dim S_{T_i}$ is the same as that of~$n+1$. We conclude the proof by determining the coefficients $\phi_{T_i;T_{i+1},\ldots,T_k}$. We shall do this by counting how often multiple covers of an orbit space appear in one period. The full orbit space $S_{\{a_0,\ldots,a_n\}}$ appears once. The orbit space $S_{\{a_0,\ldots,a_{n-1}\}}$ appears $a_n$~times, but the last time it contributes, it is part of $S_{\{a_0,\ldots,a_n\}}$, which we already considered. Therefore $S_{\{a_0,\ldots,a_{n-1}\}}$ contributes $a_n-1$~times. By downwards induction on the cardinality of~$I$, we conclude that $S_{ I}$ appears $\prod_j (a_j-1) / \prod_{a\in I}(a-1)$ times in one period. \end{proof} \begin{remark} The mean Euler characteristic of $S^1$-equivariant symplectic homology coincides with the mean Euler characteristic of contact homology. This means that the above computation amounts to an application of the algorithm in~\cite{vK2}. However, there are still many issues with the foundations of contact homology, so we shall not pursue this line of thought. \end{remark} \section{Proof of Corollary~B} We start with some general observations that will be needed in the proof.
For $n \in \mathbb{N}$ we define $$f(n) \,=\, \sum_{j=0}^n (-1)^j(n-j){n+1 \choose j}.$$ We claim the following identity \begin{equation}\label{euler} f(n) \,=\, (-1)^{n+1} \end{equation} We prove \eqref{euler} by induction. It holds that $f(1)=1$, and for the induction step we compute \begin{eqnarray*} f(n+1)&=&\sum_{j=0}^{n+1}(-1)^j(n+1-j){n+2 \choose j}\\ &=&\sum_{j=0}^n(-1)^j(n+1-j){n+2 \choose j}\\ &=&\sum_{j=0}^n(-1)^j(n+1-j)\Bigg({n+1 \choose j}+{n+1 \choose j-1}\Bigg)\\ &=&\sum_{j=0}^n(-1)^j(n+1-j){n+1 \choose j}+ \sum_{j=0}^n(-1)^j(n+1-j){n+1 \choose j-1}\\ &=&\sum_{j=0}^n(-1)^j(n-j){n+1 \choose j}+ \sum_{j=0}^n(-1)^j {n+1 \choose j}+ \sum_{j=1}^n(-1)^j(n-(j-1)){n+1 \choose j-1}\\ &=&(-1)^{n+1}+ \sum_{j=0}^{n+1}(-1)^j {n+1 \choose j}-(-1)^{n+1} + \sum_{j=0}^{n-1}(-1)^{j+1}(n-j){n+1 \choose j}\\ &=&(-1)^{n+1}+(1-1)^{n+1}-(-1)^{n+1}-(-1)^{n+1}\\ &=&-(-1)^{n+1}\\ &=&(-1)^{n+2}. \end{eqnarray*} This proves the induction step and hence \eqref{euler} follows. Alternatively, we can compute \begin{eqnarray*} 0 \,=\, \frac{d}{dx}(-1+x)^{n+1}|_{x=1}&=&\sum_{j=0}^n(-1)^j(n+1-j)x^{n-j}{n+1 \choose j}|_{x=1} \\ &=& f(n)+\sum_{j=0}^n(-1)^j {n+1 \choose j}+(-1)^{n+1}{n+1 \choose n+1}-(-1)^{n+1} \\ &=& f(n)+(-1+1)^{n+1}-(-1)^{n+1}=f(n)-(-1)^{n+1}. \end{eqnarray*} \begin{proposition} \label{prop:mean_euler=1} Let $\Sigma(a_0,\ldots,a_n)$ be a Brieskorn manifold whose exponents are pairwise relatively prime. Suppose that $\sum_j \frac{1}{a_j}>1$. Then $\chi_m (\Sigma(a_0,\ldots,a_n),\alpha) = \frac{(-1)^{n+1}}{2}$ if and only if one of the exponents is equal to~$1$. \end{proposition} \begin{proof} The condition $\sum_j \frac{1}{a_j}>1$ implies that the denominator of~\eqref{eq:mean_euler} (without $| \; |$) is positive, so $$ \chi_m (\Sigma(a_0,\ldots,a_n),\alpha) \,=\, (-1)^{n+1} \, \frac{n+(n-1)\sum_{i_0}(a_{i_0}-1)+\ldots +\sum_{i_0<\ldots< i_{n-2} }(a_{i_0}-1)\cdots (a_{i_{n-2}}-1) }{ 2\left( (\sum_j a_0 \cdots \widehat{a_j}\cdots a_n) -a_0\cdots a_n \right) } . 
$$ Let us now try to solve the equation $\chi_m= (-1)^{n+1} \frac{1}{2}$. We obtain $$ n+(n-1)\sum_{i_0}(a_{i_0}-1)+\ldots +\sum_{i_0<\ldots< i_{n-2} }(a_{i_0}-1)\cdots (a_{i_{n-2}}-1) \,=\, (\sum_j a_0 \cdots \widehat{a_j}\cdots a_n) -a_0\cdots a_n . $$ We multiply out all terms on the left hand side and organize them as linear combinations of elementary symmetric polynomials $e_d(a_0,\ldots,a_n)$ of degree~$d$, for $d=0,\ldots,n-2$. Using Formula~\eqref{euler} repeatedly, we obtain $$ \sum_{k=0}^{n-2} (-1)^{n-2-k} e_k(a_0,\ldots,a_n) \,=\, e_{n-1}(a_0,\ldots,a_n) -e_n(a_0,\ldots,a_n) . $$ Moving all terms to the left hand side and collecting them yields the equation $$ \prod_{j=0}^n(a_j-1)=0 , $$ which can only hold if one of the exponents is equal to~$1$. \end{proof} Observe that the remark after Theorem~\ref{s5:t} implies that the mean Euler characteristic has to be equal to~$ \frac{(-1)^{n+1}}{2}$ if one of the exponents equals~$1$. \medskip {\bf Proof of Corollary~B.} Let $\Sigma(a_0,\ldots,a_n)$ be a Brieskorn manifold with pairwise relatively prime exponents $a_0,\ldots,a_n$. If the exponents $a_0,\ldots,a_n$ satisfy $\sum_j \frac{1}{a_j}<1$, then Proposition~\ref{prop:mean_euler_Brieskorn} tells us that $(\Sigma(a_0,\ldots,a_n),\alpha)$ is index-negative. Theorem~A implies that such manifolds do not admit a displaceable exact contact embedding. If the exponents $a_0,\ldots,a_n$ are pairwise relatively prime, then $\sum_j \frac{1}{a_j} \neq 1$. Indeed, suppose that $\sum_j \frac{1}{a_j}=1$. Then $$ \frac{1}{a_0} \,=\, 1-\sum_{j=1}^n \frac{1}{a_j} \,=\, \frac{a_1 \cdots a_n - \sum_{j=1}^n a_1 \cdots \widehat{a_j} \cdots a_n}{a_1 \cdots a_n}. $$ If we invert the left and right hand sides, we see that $a_0$ divides $a_1\cdots a_n$, which shows that $a_0,\ldots,a_n$ are not pairwise relatively prime. This leaves the case that $\sum_j \frac{1}{a_j}>1$. 
For this case, Proposition~\ref{prop:mean_euler=1} applies, so together with Theorem~A we conclude that non-trivial Brieskorn manifolds with pairwise relatively prime exponents do not admit exact displaceable contact embeddings. {\bf Acknowledgment.} We thank Kai Cieliebak and Alex Oancea for useful discussions which inspired Step~1 of the proof of Lemma~\ref{disp}. The second author heartily thanks Seoul National University for its warm hospitality during a beautiful week in February~2011. The first author was partially supported by the Basic Research fund 2010-0007669 funded by the Korean government, the second author by SNF grant 200021-125352/1 and the third author by the New Faculty Research Grant 0409-20100147 funded by the Korean government.
Q: Can I use a nullable float (float?) in WaitForSeconds()? I have a default "unhurt" time for my platformer where my player transitions from a hurt state to normal; The normal time I think is .54 seconds. However for specific objects I want the unhurt time to be shorter. I did this by adding this to my unhurt IEnumerator IEnumerator Unhurt(float? time2) { time2 = 4; SoundMan1.PlayHurtSound(); // SoundMan1.PlayerHurt3.PlayOneShot(SoundMan1.PlayrHurtClips3[Random.Range(0, 3)]); Debug.Log("Unhurting!"); if(time2 == null) { } if(time2 > .02f && time2!=null) { yield return new WaitForSeconds(time2); } else { yield return new WaitForSeconds(unhurttime); } IsHurt = false; rb.constraints = RigidbodyConstraints2D.None; rb.constraints = RigidbodyConstraints2D.FreezeRotation; Debug.Log("Unhurt"); } And In my spike script (the script I'm calling this coroutine from.) This is the code calling it.. collision.gameObject.GetComponent<Player>().StartCoroutine("Unhurt"); I want it so that if I don't write anything after the "Unhurt" then the time2 float? will be null and it'll just wait the default duration. The problem is the WaitForSeconds(time2); has an error. It says cannot convert float? to float. If I add a .2f or whatever after the "Unhurt" and pass that in as time2, I want the float to not be null and then it'll wait that new duration. A: No, you can not! As the error tells you WaitForSeconds expects a float. You are passing in a float? which is just a synonym for Nullable<float> which is a completely different type. What you want is the time2.Value. Before accessing it you should rather check for time2.HasValue instead of == null (internally the == uses the HasValue anyway and thus is only more expensive). Now in your code there are a lot of open questions though. Why on the top of the method the first thing you do is overwrite time2 = 4; no matter what, so you totally ignore whatever value is passed in? 
I guess you would only want to do that in case no value has been passed in. Then later you check

if(time2 > .02f && time2!=null)
{
    yield return new WaitForSeconds(time2);
}

so before you even check whether it has a value, you already want to compare it to 0.02. Without the assignment mentioned above, you should rather check in the order

if(time2.HasValue && time2.Value > 0.02f)
{
    yield return new WaitForSeconds(time2.Value);
}

I want it so that if I don't write anything after the "Unhurt" then the time2 float? will be null

What you are speaking about is called an optional parameter; it requires you to actually assign a default value and would rather look like e.g.

IEnumerator Unhurt(float? time2 = null)

However, you don't need any nullable for this. Why not simply give it a default value and not use a nullable at all, like

IEnumerator Unhurt(float time2 = 4f)

This time, if you don't pass in any value explicitly, it will always have the default value 4.

Finally, in general, to make your life way easier, don't use the string version of StartCoroutine! Rather have a method like

public void StartUnhurt(float time2 = 4f)
{
    StartCoroutine(Unhurt(time2));
}

and use it like

collision.gameObject.GetComponent<Player>().StartUnhurt();

or

collision.gameObject.GetComponent<Player>().StartUnhurt(3.5f);
Q: how to use participles as adverbials Would you please tell me whether the use of the participle clause is correct in the following sentence: Our young children are fascinated with the old-day musicians, TV shows and the life before they were born, becoming fans of some old artists, even showing their emotions for the one who passed away decades ago. A: Your sentence would improve with a couple of omissions. Unless you have identified the musicians and the person who died, omit the in front of them, ditto in front of life. The (the definite article) is generally used ahead of something/someone who has already been introduced. Also, although you can speak of the old days, in this context, as a modifier ahead of musicians, we would refer to olden day (musicians) to mean in the past. Our young children are fascinated with [the] olden-day musicians, TV shows and [the] life before they were born, becoming fans of some old artists, even showing their emotions for [the] one who passed away decades ago. However, in answer to your query, your construction is otherwise fine. https://www.collinsdictionary.com/dictionary/english/in-the-olden-days-in-olden-days https://learnenglish.britishcouncil.org/english-grammar-reference/definite-article-the
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd"> <!-- NewPage --> <html lang="en"> <head> <!-- Generated by javadoc (version 1.7.0_02) on Mon Nov 24 10:51:55 CET 2014 --> <meta http-equiv="Content-Type" content="text/html" charset="utf-8"> <title>org.gradle.tooling.model (Gradle API 2.2.1)</title> <meta name="date" content="2014-11-24"> <link rel="stylesheet" type="text/css" href="../../../../javadoc.css" title="Style"> </head> <body> <script type="text/javascript"><!-- if (location.href.indexOf('is-external=true') == -1) { parent.document.title="org.gradle.tooling.model (Gradle API 2.2.1)"; } //--> </script> <noscript> <div>JavaScript is disabled on your browser.</div> </noscript> <!-- ========= START OF TOP NAVBAR ======= --> <div class="topNav"><a name="navbar_top"> <!-- --> </a><a href="#skip-navbar_top" title="Skip navigation links"></a><a name="navbar_top_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../overview-summary.html">Overview</a></li> <li class="navBarCell1Rev">Package</li> <li>Class</li> <li><a href="package-tree.html">Tree</a></li> <li><a href="../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../index-all.html">Index</a></li> <li><a href="../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../../org/gradle/tooling/exceptions/package-summary.html">Prev Package</a></li> <li><a href="../../../../org/gradle/tooling/model/build/package-summary.html">Next Package</a></li> </ul> <ul class="navList"> <li><a href="../../../../index.html?org/gradle/tooling/model/package-summary.html" target="_top">Frames</a></li> <li><a href="package-summary.html" target="_top">No Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_top"> <li><a href="../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = 
document.getElementById("allclasses_navbar_top"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_top"> <!-- --> </a></div> <!-- ========= END OF TOP NAVBAR ========= --> <div class="header"> <h1 title="Package" class="title">Package&nbsp;org.gradle.tooling.model</h1> <div class="docSummary"> <div class="block">The general-purpose tooling model types, provided by the tooling API.</div> </div> <p>See:&nbsp;<a href="#package_description">Description</a></p> </div> <div class="contentContainer"> <ul class="blockList"> <li class="blockList"> <table class="packageSummary" border="0" cellpadding="3" cellspacing="0" summary="Interface Summary table, listing interfaces, and an explanation"> <caption><span>Interface Summary</span><span class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Interface</th> <th class="colLast" scope="col">Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/BuildableElement.html" title="interface in org.gradle.tooling.model">BuildableElement</a></td> <td class="colLast"> <div class="block">Represents an element which has Gradle tasks associated with it.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/Dependency.html" title="interface in org.gradle.tooling.model">Dependency</a></td> <td class="colLast"> <div class="block">Represents an artifact dependency.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/DomainObjectSet.html" title="interface in org.gradle.tooling.model">DomainObjectSet</a>&lt;T&gt;</td> <td class="colLast"> <div class="block">A set of domain objects of type T.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/Element.html" title="interface in 
org.gradle.tooling.model">Element</a></td> <td class="colLast"> <div class="block">Described model element.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/ExternalDependency.html" title="interface in org.gradle.tooling.model">ExternalDependency</a></td> <td class="colLast"> <div class="block">Represents an external artifact dependency.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/GradleModuleVersion.html" title="interface in org.gradle.tooling.model">GradleModuleVersion</a></td> <td class="colLast"> <div class="block">Informs about a module version, i.e.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/GradleProject.html" title="interface in org.gradle.tooling.model">GradleProject</a></td> <td class="colLast"> <div class="block">Represents a Gradle project.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/GradleTask.html" title="interface in org.gradle.tooling.model">GradleTask</a></td> <td class="colLast"> <div class="block">Represents a task which is executable by Gradle.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/HasGradleProject.html" title="interface in org.gradle.tooling.model">HasGradleProject</a></td> <td class="colLast"> <div class="block">An element that is associated with a Gradle project.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/HierarchicalElement.html" title="interface in org.gradle.tooling.model">HierarchicalElement</a></td> <td class="colLast"> <div class="block">Represents an element which belongs to some hierarchy.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/Launchable.html" title="interface in 
org.gradle.tooling.model">Launchable</a></td> <td class="colLast"> <div class="block">Represents an object that can be used to launch a Gradle build, such as a task.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/Model.html" title="interface in org.gradle.tooling.model">Model</a></td> <td class="colLast"> <div class="block">A model that is buildable by the Tooling API.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/ProjectDependency.html" title="interface in org.gradle.tooling.model">ProjectDependency</a></td> <td class="colLast"> <div class="block">Represents a dependency on another project.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/SourceDirectory.html" title="interface in org.gradle.tooling.model">SourceDirectory</a></td> <td class="colLast"> <div class="block">Represents a source directory.</div> </td> </tr> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/Task.html" title="interface in org.gradle.tooling.model">Task</a></td> <td class="colLast"> <div class="block">Represents a task which is executable by Gradle.</div> </td> </tr> <tr class="rowColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/TaskSelector.html" title="interface in org.gradle.tooling.model">TaskSelector</a></td> <td class="colLast"> <div class="block">Represents a <a href="../../../../org/gradle/tooling/model/Launchable.html" title="interface in org.gradle.tooling.model"><code>Launchable</code></a> that uses task name to select tasks executed from a given project and its sub-projects.</div> </td> </tr> </tbody> </table> </li> <li class="blockList"> <table class="packageSummary" border="0" cellpadding="3" cellspacing="0" summary="Exception Summary table, listing exceptions, and an explanation"> <caption><span>Exception Summary</span><span 
class="tabEnd">&nbsp;</span></caption> <tr> <th class="colFirst" scope="col">Exception</th> <th class="colLast" scope="col">Description</th> </tr> <tbody> <tr class="altColor"> <td class="colFirst"><a href="../../../../org/gradle/tooling/model/UnsupportedMethodException.html" title="class in org.gradle.tooling.model">UnsupportedMethodException</a></td> <td class="colLast"> <div class="block">Thrown when the tooling API client attempts to use a method that does not exist in the version of Gradle that the tooling API is connected to.</div> </td> </tr> </tbody> </table> </li> </ul> <a name="package_description"> <!-- --> </a> <h2 title="Package org.gradle.tooling.model Description">Package org.gradle.tooling.model Description</h2> <div class="block">The general-purpose tooling model types, provided by the tooling API.</div> </div> <!-- ======= START OF BOTTOM NAVBAR ====== --> <div class="bottomNav"><a name="navbar_bottom"> <!-- --> </a><a href="#skip-navbar_bottom" title="Skip navigation links"></a><a name="navbar_bottom_firstrow"> <!-- --> </a> <ul class="navList" title="Navigation"> <li><a href="../../../../overview-summary.html">Overview</a></li> <li class="navBarCell1Rev">Package</li> <li>Class</li> <li><a href="package-tree.html">Tree</a></li> <li><a href="../../../../deprecated-list.html">Deprecated</a></li> <li><a href="../../../../index-all.html">Index</a></li> <li><a href="../../../../help-doc.html">Help</a></li> </ul> </div> <div class="subNav"> <ul class="navList"> <li><a href="../../../../org/gradle/tooling/exceptions/package-summary.html">Prev Package</a></li> <li><a href="../../../../org/gradle/tooling/model/build/package-summary.html">Next Package</a></li> </ul> <ul class="navList"> <li><a href="../../../../index.html?org/gradle/tooling/model/package-summary.html" target="_top">Frames</a></li> <li><a href="package-summary.html" target="_top">No Frames</a></li> </ul> <ul class="navList" id="allclasses_navbar_bottom"> <li><a 
href="../../../../allclasses-noframe.html">All Classes</a></li> </ul> <div> <script type="text/javascript"><!-- allClassesLink = document.getElementById("allclasses_navbar_bottom"); if(window==top) { allClassesLink.style.display = "block"; } else { allClassesLink.style.display = "none"; } //--> </script> </div> <a name="skip-navbar_bottom"> <!-- --> </a></div> <!-- ======== END OF BOTTOM NAVBAR ======= --> </body> </html>
El Ternero was a village located between Cihuri and Sajazarra that belonged to the monastery of Herrera. Today it is an estate and an outlying hamlet (pedanía) of the town of Miranda de Ebro, province of Burgos (Spain). It is an enclave of Burgos of some two hundred and fifty hectares surrounded by the autonomous community of La Rioja. The only Burgos wine with the Rioja Denominación de Origen is made in El Ternero. The winery is housed in a restored 18th-century building. There is also a hermitage whose last restoration dates from the Mendizábal confiscation.

Demography
In 2010 it was one of the nineteen villages in the province of Burgos in which only a single elderly person lived. In 2016 it had no registered inhabitants.

History
There are primary references (verified in the municipal archive of Miranda de Ebro) to a patrimonial donation by King Alfonso VI in the year 1077 to the so-called "bishop" or lord of Sajazarra (Cistercian), which comprised the land of the present-day monastery of Santa María de Herrera and the Ternero grange. The first references to Ternero date from 1245, when Pope Innocent IV drew up the list of possessions of the Monastery of Herrera. El Ternero is described as follows in volume XIV of the Diccionario geográfico-estadístico-histórico de España y sus posesiones de Ultramar, a work directed by Pascual Madoz in the mid-19th century:

References

External links
Página web de Hacienda El Ternero

Localities of the province of Burgos
Neighborhoods of Miranda de Ebro
Enclaves of Spain
Q: Pandas. Ratio of the maximum variance to the minimum

I'm taking a Python-for-data-analysis course and can't figure out what's wrong in my solution. The gist of the task:

Create a new column, mcc_code+tr_type, by concatenating the values of the corresponding columns. (*) Keep only the observations with a negative amount. Compute the variance over the categories of the resulting mcc_code+tr_type column that contain >= 10 observations. Determine the ratio of the maximum variance to the minimum. Print the answer as a real number rounded to the nearest integer, in the format "123456" with no fractional part.

Notes: (*) To concatenate the values in the columns, you can use the .astype(str) method on a series and add the corresponding series together. Alternatively, apply a function to the rows of the dataframe, writing the conversion and concatenation logic inside it. (**) To compute the number of observations and the variance per category in one go, you can use the .agg() function.

Input data: link to the files for the calculations

My solution: first the four tables, which hold different data but share common columns, have to be merged:

import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

df = pd.read_csv(r'C:\Users\akuma\Documents\Jupiter_Anaconda\transactions.csv', sep=',', nrows=1000000)
types = pd.read_csv(r'C:\Users\akuma\Documents\Jupiter_Anaconda\types.csv', sep=';')
mcc_codes = pd.read_csv(r'C:\Users\akuma\Documents\Jupiter_Anaconda\tr_mcc_codes.csv', sep=';')
train = pd.read_csv(r'C:\Users\akuma\Documents\Jupiter_Anaconda\gender_train.csv', sep=',')

df_1 = df.merge(types, how='inner')
df_2 = df_1.merge(mcc_codes, how='inner')
df_3 = df_2.merge(train, how='left')  # base table

After obtaining a base table along these lines, we then work with it for the task. Sorry for the picture; I couldn't figure out how to paste a dataframe here.
df_3['mcc_code+tr_type'] = df_3[['mcc_code','tr_type']]
df_3.info()

df_3.query('amount < 0').groupby(['mcc_code+tr_type'])['amount'] \
    .agg(['count', 'var']) \
    .query('count >= 10')['var'] \
    .agg(lambda x: round(max(x) / min(x)))

An error appears when combining the mcc_code+tr_type columns; maybe I made a mistake somewhere? I know I still write code poorly and in a convoluted way :)

A: How to do the concatenation is spelled out in the notes:

Notes: (*) To concatenate the values in the columns, you can use the .astype(str) method on a series and add the corresponding series together. Alternatively, apply a function to the rows of the dataframe, writing the conversion and concatenation logic inside it.

df_3['mcc_code+tr_type'] = df_3['mcc_code'].astype(str) + df_3['tr_type'].astype(str)

A: Try this:

df["mcc_code+tr_type"] = df["mcc_code"].astype(str) + df["tr_type"].astype(str)

res = (df.query("amount < 0")
         .groupby("mcc_code+tr_type")["amount"]
         .agg(lambda x: x.var()**2 if len(x)>=10 else np.nan)
         .dropna())
ratio = round(res.max() / res.min())

A: For variety, another solution:

df['mcc_code+tr_type'] = df.mcc_code.map(str) + df.tr_type.map(str)
df = df[['mcc_code+tr_type', 'amount']][df.amount < 0]
fltr = lambda group_df: len(group_df['amount']) >= 10
amount = (df.groupby('mcc_code+tr_type').filter(fltr)
            .groupby('mcc_code+tr_type').var()).amount
np.rint(amount.max() / amount.min()).astype(int)

P.S.: @MaxU's solution is elegant, but the variance is simply var() (!= var()**2). From the description of the first assignment (and in general, look up std vs var): "The variance is computed with the numpy library function np.var( , ddof=0) or the built-in python function .var(ddof=1)", although they too made a mistake if they expected the same result, since np.var(df.s, ddof=0) != df.s.var(ddof=1) [numpy and pandas have different default ddof values (as of today), though the methodological difference is minor].
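Putting the answers together, here is a self-contained sketch of the pipeline on synthetic data (the codes and amounts below are invented purely for illustration; the real exercise uses the merged transactions table):

```python
import pandas as pd

# Toy stand-in for the merged transactions table.
df = pd.DataFrame({
    "mcc_code": [1000] * 12 + [2000] * 12 + [3000] * 3,
    "tr_type":  [10] * 12 + [20] * 12 + [30] * 3,
    # The second group is the first scaled by 5, so its variance is
    # exactly 25 times larger; the third group is too small to count.
    "amount": [-i for i in range(1, 13)]
            + [-5 * i for i in range(1, 13)]
            + [-1, -2, -3],
})

# Concatenate the two key columns as strings.
df["mcc_code+tr_type"] = df["mcc_code"].astype(str) + df["tr_type"].astype(str)

# Count and variance per category, keeping categories with >= 10 rows.
stats = (df.query("amount < 0")
           .groupby("mcc_code+tr_type")["amount"]
           .agg(["count", "var"])
           .query("count >= 10"))

ratio = round(stats["var"].max() / stats["var"].min())
print(ratio)  # 25: the undersized "300030" category is filtered out first
```

On the real data the same chain applies the assignment's count filter and then takes the max/min variance ratio in one pass.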
Cameroceras – an extinct genus of nautiloid that lived during the Ordovician period, the largest nautiloid in history.

Description: Shell straight, up to 11 m long; siphuncle thick and positioned at the margin.

Significance: Fossils of various Cameroceras species serve as auxiliary index fossils in dating the Ordovician, especially in Asia and North America.

Occurrence: The genus is known abundantly from eastern North America and from eastern and central Asia, more rarely from Europe (Spain). Isolated descriptions also come from South America.

Age range: Ordovician

Selected species of high stratigraphic value:
C. alternatum
C. hennepini
C. inopinatum
C. stillwaterense
C. trentonese
C. rowenaense

Featured in the BBC series Sea Monsters (Polish title: Powrót dinozaurów: Potwory na fali), a companion to the series Walking with Dinosaurs (Wędrówki z dinozaurami). In the book accompanying the series, published under the analogous title, it is erroneously called Cameraceras.

Bibliography
The Paleobiology Database

Extinct cephalopods
Ordovician molluscs
The Modern Bop is the fourth studio album by Australian rock band Mondo Rock, released in March 1984. It peaked at number 5 on the Kent Music Report. Rolling Stone stated: "The album achieves the Mondo's oft-quoted aim: 'adult' music that isn't soft or easy-listening."

Track listing

Personnel
Mondo Rock:
Ross Wilson – vocals, guitar, harmonica
Eric McCusker – guitar, backing vocals
James Black – keyboards, guitar, backing vocals
James Gillard – bass, backing vocals
John James Hackett – drums
with:
Joe Camilleri – saxophone on "Flight 28"
Production team:
Producer – John Sayers, Mondo Rock
Engineers – John Sayers, John French, Ross Cockle
Assistant Engineers – Doug Brady, Gary Constable
Mixed by – John Sayers

Charts

References

1984 albums
Mondo Rock albums
Polydor Records albums
Albums produced by John L Sayers
Q: Partial credentials found in env, missing: AWS_SECRET_ACCESS_KEY

I just configured the AWS CLI on my computer with my AWS Access and Secret Key. When I try to use the AWS CLI, though, it gives me this error:

Partial credentials found in env, missing: AWS_SECRET_ACCESS_KEY

I went to ~/.aws/config, and sure enough those credentials are there, including the AWS Secret Key, so I'm not sure why it's squawking at me.

A: You should have the file ~/.aws/credentials, and its contents should be in the following format:

[default]
aws_access_key_id = XXXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX

A: For anyone who is having the same problem, this is the solution that worked for me: if you are on Windows, check that you don't have AWS_ACCESS_KEY_ID set in your system variables. The AWS CLI uses something called a configuration provider chain, and environment variables take precedence over the configuration file. In my case I had somehow set only AWS_ACCESS_KEY_ID, hence the error message.

A: If you are using macOS, this may be caused by other credentials set in the environment variables. Setting the new credentials in the environment variables might solve your problem. To do so, run this in the terminal:

export AWS_ACCESS_KEY_ID=X
export AWS_SECRET_ACCESS_KEY=Y
export AWS_DEFAULT_REGION=REGION

Substitute X, Y, and REGION with the values corresponding to your application. Source documentation: https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html

A: I ran into this problem and reran the same workflow YAML again and again, but the changes never actually took effect. Finally, I had to remove the existing workflow from GitHub and push the YAML configuration file again. That worked for me. Thank you!
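The mention of the configuration provider chain can be made concrete with a small sketch. The function below is a hypothetical illustration (not the AWS CLI's or botocore's actual code); it only mimics the documented order: environment variables are consulted before ~/.aws/credentials, and a half-set pair of environment variables fails immediately instead of falling through to the file:

```python
def resolve_credentials(env, credentials_file):
    """Illustrative lookup order: environment variables first, then the
    [default] profile of the shared credentials file."""
    key = env.get("AWS_ACCESS_KEY_ID")
    secret = env.get("AWS_SECRET_ACCESS_KEY")
    if key or secret:
        # The env provider is selected as soon as either variable is set,
        # so a stray AWS_ACCESS_KEY_ID blocks the fallback to the file.
        if key and secret:
            return ("env", key, secret)
        missing = "AWS_SECRET_ACCESS_KEY" if key else "AWS_ACCESS_KEY_ID"
        raise RuntimeError("Partial credentials found in env, missing: " + missing)
    profile = credentials_file.get("default", {})
    try:
        return ("credentials-file",
                profile["aws_access_key_id"],
                profile["aws_secret_access_key"])
    except KeyError:
        raise RuntimeError("Unable to locate credentials")

file_creds = {"default": {"aws_access_key_id": "AKIAFILE",
                          "aws_secret_access_key": "filesecret"}}

# A complete credentials file alone works fine:
print(resolve_credentials({}, file_creds))

# But with only one of the two environment variables set, the lookup
# fails before it ever reaches the file:
try:
    resolve_credentials({"AWS_ACCESS_KEY_ID": "AKIAENV"}, file_creds)
except RuntimeError as err:
    print(err)
```

This is why deleting (or completing) the stray environment variable fixes the error even when ~/.aws/credentials is correct.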
Brooke Mulford relocates to New Jersey
Sep 10th, 2014 · by Susan Canfora

Brooke Mulford dressed for her first day of 4th grade at a new school in New Jersey.

Brooke Mulford, the popular and inspiring little girl who has fought cancer in the midst of a loving community since she was 4, has moved to New Jersey with her mother. Amy Mulford and her 9-year-old sushi-loving daughter relocated so they'd be closer to Children's Hospital of Philadelphia, where Brooke is beginning a clinical trial with the investigational drug DFMO. She was diagnosed with neuroblastoma, cancer of the nervous system, five years ago. Amy's family lives there, and the drive to the hospital is only about 25 minutes, compared to three hours from Salisbury. When Brooke was diagnosed after starting to limp Christmas Eve 2008, the cancer had spread throughout her body and was at Stage 4. She underwent 16 months of treatment. After nine months, her mother said, there was no evidence of the disease, but treatment continued because the prognosis of neuroblastoma is poor, with a 30 percent survival rate. Blond, green-eyed Brooke, who her mother calls a "foodie," and whose first solid culinary delight was an avocado, was in remission three years and three months, then relapsed and is now considered terminal. As she continues treatment, her mother is faithfully, deeply, hoping for a cure and drawing strength from her only child. "She doesn't really remember life before cancer. She had just turned 4 so she was so little. More of her life has been with cancer," her mother said. But Brooke isn't the melancholy type and doesn't ask, "why me?" She did become upset when she realized she'd lose her hair again, but her mother got her a wig and, once, in support, shaved her own head so they could be bald together.
"She was upset at first, then a week later she said, 'Mommy, I'm not sorry I got cancer and I'm not sorry it came back because a lot of good things have come out of it still,'" Amy Mulford said. She credits community and family support for Brooke's good nature, and especially Salisbury Christian School. "As hard as it was to leave my friends and my home in Salisbury, leaving that school was so hard. There could not be more love in one place than in that school. The support we received from every teacher, every staff member, parent, kid. She walked through the door and she was surrounded by kids hugging her and loving her," Amy said. Among Brooke's heroes is Bethany Hamilton, whose arm was bitten off by a shark, and whose story is told in the movie Soul Surfer. "Brooke wants to meet her. She is very inspired by her," Amy said. How can a little girl of 9 be so brave in the turmoil of illness? "I feel God is walking with her through this," Amy said. It was no accident, she said, that Brooke's condition, originally misdiagnosed, flared when the family was visiting New Jersey. She was taken to the hospital in Philadelphia, and the family received the correct diagnosis and the child has been cared for by Dr. John Maris, a specialist. Amy stays at the hospital with Brooke when she's admitted for treatment and finds nights are most difficult. "That's when I think about everything going on. She keeps me too busy during the day," Amy said. "Her prognosis is never far from my mind. I just have to stay as hopeful as I can, to believe we're going to have the miracle that we're praying for, that they are going to find the miracle. "I just have to hope that in the lab somewhere, there is a miracle cure waiting to get into a clinical trial." Reach Susan Canfora at scanfora@newszap.com.
The Salisbury Independent is published every Thursday.
Chances are that your business will face a lawsuit in any given year, and some businesses have ended up spending a lot of cash dealing with them; by some estimates, a business that generates $1m in revenue can spend a substantial sum on lawsuits annually. The cases a business faces vary, ranging from small pay disputes to issues such as intellectual property rights suits, but no matter the issue at hand, the business needs to hire a commercial litigation attorney. Every business must be careful in how it handles lawsuits, considering that any mistake can have a shattering effect on the brand and the company; when you involve a commercial litigation lawyer, they will guide your business through all the steps involved and help you avoid complications. With the help of commercial litigation attorneys, you won't spend much of your time in courtrooms handling suits, but rather at your business strategizing on how to enhance production. Here are the major indicators that you need the assistance of a commercial litigation lawyer.

One instance when you need the help of an attorney is when a contract has been breached. Contracts are crucial when your company is striking a deal with suppliers and other businesses, since they outline what is expected of each party and the consequences that come with breaching those expectations. When one party violates a contract, a business without the help of an attorney might be stuck in many years of arbitration. Every company needs the services of a strong legal advisor to make sure that the business doesn't breach a contract in the future.

Should an employee mismanage funds in your company, you also need the help of a commercial litigation attorney. If you are trying to strike a deal with another business, but you realize that they seem to have cold feet, it is important that you sit them down. Companies in most cases will be keen to look into the history of the business they are striking a deal with to check for red flags. If you find out that a given worker has been involved in mismanaging finances, you need to have them sued and also avoid involving them with the new company.

You also need to involve a commercial litigation attorney when your business is facing a lawsuit as a result of violating a contract or copyright. Such cases might lead to orders directing the company to stop production of its essential products. Working with the commercial litigation attorneys at Shakfeh Law will help you avoid ending up in a sticky situation; they make sure businesses keep working even during a lawsuit.
Apple Meets with California DMV in Hopes of Furthering Self-Driving Vehicles The Sacramento, Calif., meeting between Apple and California DMV officials sparks speculation about driverless vehicles in the near future. by Matt O'Brien, San Jose Mercury News / September 21, 2015 (TNS) -- Whatever Apple has in mind -- be it a dreamy electric-powered, robot iCar, as some fans hope, or just some fancy dashboard software -- the evidence is mounting that the secretive Cupertino tech giant is getting serious about self-driving cars. "You guys aren't big on secrets," comedian Stephen Colbert joked Tuesday as he hosted Apple CEO Tim Cook on his new late-night TV show. "Tell me about it. Come on. Cat's out of the bag." Cook wouldn't budge, but California's Department of Motor Vehicles 'fessed up Friday when it revealed that the agency met with Apple about the state's rules for testing autonomous vehicles on public streets, offering the clearest clue yet to Apple's long-rumored automotive ambitions. Apple declined comment Friday, but online resumes and public records show the world's most valuable company has spent months poaching talent with automotive experience and renovating a new Sunnyvale office complex and auto workshop along what's become Silicon Valley's aorta of high-tech car innovation -- the Central Expressway. Ten other companies -- including Google and carmakers Tesla, BMW, Mercedes-Benz, Honda and Volkswagen -- already have permission from the state to test autonomous vehicles on public roads, and most of them have research labs near the 12-mile thoroughfare that stretches from Palo Alto through Mountain View, Sunnyvale and Santa Clara.
Google, long public about its self-driving goals, moved its growing team into a new hub along the Expressway in Mountain View this spring, shortly before unleashing its bubble-shaped, two-seater prototype cars onto local streets. The search giant this month signaled its move from experimentation to the consumer product stage when it hired auto industry veteran John Krafcik to be the first CEO of its 6-year-old project. Which means that Apple, if fully autonomous cars are its goal, has a long way to go. "To get to something like what Google has created requires the level of effort that Google has done," said Steve Shladover of UC Berkeley's Partners for Advanced Transportation Technology. "There's no magic. It's a lot of hard work." Apple has been renovating an office complex this year near North Wolfe Road and the Expressway, and Sunnyvale permitting documents show references to Apple's "auto work area" in a warehouse on San Gabriel Drive. Public records also tied the site to advanced automotive technology when Apple invited the Contra Costa Transportation Authority to pay a visit in the spring. The authority runs a high-security road test site for self-driving and Internet-connected vehicles at the former Concord Naval Weapons Station in Contra Costa County, and Apple wanted more information about what the space was like. Honda begins testing self-driving cars there next week, but Apple appears to have dropped its interest several months ago. "They just wanted to know about the facility," said Randy Iwasaki, the authority's director, in an interview Friday. "It could have been anything. It's perfect for testing." Although Apple recently bought a 43-acre parcel in North San Jose, it doesn't have much room in Silicon Valley to test its automotive ideas with the secrecy that usually surrounds its tiny devices. The question is: Would it be willing to test in public? 
The Guardian was first to report Friday about last month's one-hour meeting in Sacramento between Apple's legal team and DMV officials, including Deputy Director Bernard Soriano, an engineer who has led the state's effort to draft rules of the road for driverless cars. "DMV often meets with various companies regarding DMV operations," the department's spokesman, Armando Botello, wrote in an email Friday confirming the talks. "The Apple meeting was to review DMV's autonomous vehicle regulations." Apple's interest doesn't necessarily mean that it wants to build the kind of fully autonomous car that Google has been testing on Mountain View streets, or that it's interested in selling cars at all. But no matter its intentions, its actions have contributed to a frenzy from rivals -- especially in the auto industry -- to take ownership of autonomous technology, in-car mapping software, vehicle-to-vehicle communication and dashboard Internet applications that could reshape the way we get around in the decades to come. "What is important for us is that the brain of the car, the operating system, is not iOS or Android or someone else -- but it's our brain," Dieter Zetsche, CEO of Mercedes-maker Daimler, told reporters at the Frankfurt Motor Show this week, according to the New York Times. He added that "we do not plan to become the Foxconn of Apple," referring to the overseas factories that manufacture iPhones and other Apple devices. If Zetsche sounded worried, it might be because he spent time this summer hearing from Sir Jonathan Ive, Apple's design guru, at a summit to talk about the future of the car, according to the Financial Times. Ive likes a well-designed car. But if anyone thinks he's working on one that can drive by itself through city streets and in unpleasant weather conditions, Shladover said he doesn't see that happening until 2075. "Far enough into the future that we don't have to think about it too seriously for a while," he said. 
©2015 San Jose Mercury News (San Jose, Calif.) Distributed by Tribune Content Agency, LLC. California Legislator Introduces Bill to Test Self-Driving Bus in Contra Costa County How Can We Adapt Our Transportation Systems for Self-Driving Cars? Apple Continues Silicon Valley Expansion MORE FROM FutureStructure
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
5,377
Newspapers Are Big, Not Bloated John Gruber has a thoughtful argument on why newspaper efforts to charge for the news are probably doomed. I agree with his conclusion, but his diagnosis of the problem with newspapers is wrong. Old-school news companies aren't like that — the editorial staff makes up only a fraction of the total head count at major newspaper and magazine companies. The question these companies should be asking is, "How do we keep reporting and publishing good content?" Instead, though, they're asking "How do we keep making enough money to support our existing management and advertising divisions?" It's dinosaurs and mammals. And it's not really surprising that they're failing to evolve. The decision-makers — the executives sitting atop large non-editorial management bureaucracies — are exactly the people who need to go if newspapers are going to remain profitable. The heavy staffing of traditional newspapers was not the fault of management bureaucracy. It was the fault of technology and distribution. I remember visiting the Chicago Sun Times/Daily News building as a kid, where my best friend's dad was a columnist. The place was huge! But it wasn't filled with middle managers; it was filled with compositors and pressmen and ad sales clerks. You didn't just need someone to mark up the HTML; you had to cast the letters in lead type. And, if you needed to make a change, someone had to go take the plates off the press, melt them down, cast new plates, and start the press up again. Keep in mind, too, the problems of doing business without computers. Every little transaction generates paper, and that paper needs to be reliably filed and quickly retrieved. Every transaction: two bucks for the delivery boy, the rent for the Paris office, the fee for the department store ads. Every paycheck had to be computed and written out by hand, in duplicate. 
Even in the 70's, the fax machine was so new and faxes were so slow that Peter Gammons was able to write the story of a lifetime faster than the fax machine could send it. If anything, the newsroom of old was notably short on bureaucracy. That was the whole point of the news room: you had a huge open office in which dozens of people worked because all those dozens of people reported to one editor. Some of those dozens would turn out to be idiots, some of them would be crazy, plenty of them were drunks, and all of them were prone to be unmanageable. Even so, there are remarkably few layers of bureaucracy. If you're going to be a big daily paper in the early or mid-20th century, you're going to need to run a big printing facility. That means you've got huge production costs, and lots of people who set up, run, and maintain the presses. You've got to staff them for the worst case, too; that means you need enough maintenance people on hand that, when the worst possible breakdown happens at the worst possible time, you still get the paper on the street. Plus, you've got a fleet of trucks to deliver the paper to retailers, because you can't just sell the thing on your doorstep. Before you had trucks, you had horses and wagons. Lots of horses, and lots of wagons, and lots of teamsters to drive them. Those horses got a raw deal; the teamsters did, too, and eventually they drove a hard bargain. Newspapers are living with the consequences of that bargain, but it's worth remembering how badly those early drivers, and their horses, were treated. This puts the newspapers into the same bind as the movie business. If you've got to support a theater in every town in the country, that's a lot of mortgages to pay. If you've got to run the largest printing operation in town, and the largest cartage operation in town, as a side-effect of your real business, then spending an extra dollar or two on editorial has no significant effect on the bottom line. 
You add writers, and editors, and bureaus, because their cost is small relative to your presses and your delivery trucks – and if you have a Paris bureau and the other paper doesn't, then someday you might outsell the other paper big time. This is the core dynamic of the 20th century newspaper. Why did columnists make so much money? Because they recruited readers, and because it wasn't much money compared to the rest of the operation. Why did they have rewrite boys and typists and gofers? Because lots of newspaper reporters were still barely educated – reporting was a job for people with a high school education – and you needed someone to fix the spelling. ("Never let them know you can type," my mother's first editor told her.) Why fix the spelling? Because a bunch of readers have gone to college, and the advertisers badly want those readers. Fact checking? Same story. Why did newspapers have crime reporters and book reporters and theater reporters and society reporters, not just in New York but in Detroit and Denver and Des Moines? Because those columns sold a few papers, sometimes they attracted an advertiser, and the extra pair of hands came cheap. Newspapers aren't bloated bureaucracies because they have antiquated management. They're heavily staffed because they are built for a different technology and a different distribution system. The old economy made them strong in some areas, and vulnerable in others. The new economics will change their structure. But it's not simply a matter of antique management; it's the result of that press in the basement and all those trucks out back. They weren't all fools
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
8,493
COP18: Policy uncertainties hitting the global wind industry COP18 (05/12/12) – Stefan Gsänger, Secretary General at the World Wind Energy Association talks about the difficulties currently facing the wind industry globally. He says that the industry is currently in a difficult situation with his organisation's latest half year figures showing a 10% drop on 2011. He says there is currently more capacity than the market is able to absorb. He says this is partly due to market uncertainties in some of the industry's main countries including the US and Spain. He says new markets are developing but that they need time to grow. He talks about the difficulty of connecting wind to existing grids. He says it is simpler in the developing world where much of these electricity grids are still being built and can be designed to fit renewables, but harder in industrialised countries where the grids are made for more centralised energy systems.
\section{Introduction} With the detection of over 700 confirmed exoplanets and over 3,000 exoplanet candidates \citep[exoplanets.org;][]{Wright11} in a wide variety of planetary system architectures, it has become abundantly clear that there are a diversity of outcomes to the planet formation process. Furthermore, planet mass and radius data coupled with interior structure models, as well as transmission and emission spectroscopy, indicate that exoplanets may be as diverse in their bulk compositions as they are in their orbital properties. Of the small but growing sample of exoplanets for which mass and radius and/or spectroscopic data exist, a range of bulk compositions have already been inferred, including: Mercury-like compositions (e.g. CoRoT-7b and Kepler-10b) with iron cores composing as much as 65 wt-\% of the planet \citep{Wagner12} to water-worlds \citep[e.g. GJ1214b;][]{Berta12}. There are even indications of planets with carbon rich atmospheres \citep[e.g. WASP-12b;][]{Madhusudhan10} and interiors \citep[55 Cancri e;][]{Madhusudhan12}. The potential diversity of terrestrial exoplanet compositions has been addressed from a theoretical standpoint in several publications \citep{Kuchner05, Bond10b, Carter-Bond12, Carter-Bond12b}. The most recent three of these works sought to predict the range of exoplanet compositions that should exist based on the range of elemental abundances that have been observed in planet hosting stars. It was assumed that the stellar abundances reflect those of the initial protoplanetary disk and consequently there should be a connection between the composition of planets that formed from this disk and their host star. The simulations consisted of models of protoplanetary disk chemistry and late stage planet formation, which were coupled together by tagging the initial planetesimals of the planet formation simulation with the composition of the disk's solid material at their initial location. 
The planets formed in these simulations had compositions ranging from Earth-like to almost entirely C and SiC. These carbon rich planets form in disks with C/O $>$ 0.8, where carbon readily condenses to form solids \citep{Larimer75}. The composition of the disk's solid material in these simulations was determined with equilibrium chemistry calculations assuming the mid-plane pressure and temperature profile of the disk. Because the mid-plane pressure and temperature profiles of a disk change as the disk ages, it is not obvious which disk age to use when calculating equilibrium abundances. In their work, they adopted an age from a prior study \citep{Bond10} that best reproduced the composition of the Solar system's terrestrial planets. \citet{Elser12} attempted a more self-consistent approach to determining the disk age for the equilibrium chemistry calculation by imposing a transition condition: the surface density of solid material predicted by the disk model must match the initial surface density of the dynamical simulations. With this condition they had difficulty reproducing the abundances of the Solar system terrestrial planets and found that their results varied depending on the disk model used. In both of these approaches, the composition of the solids in the disk are determined assuming they all form at the same time. That is, equilibrium composition is calculated at a single age in the disk's lifetime and all solids are assigned that composition. Age dating of meteoritic material suggests that solid material condensed out of the Solar Nebula over the course of about 2.5 Myr or more \citep{Amelin02}. During this time, the Solar Nebula would have cooled significantly resulting in solids being formed over a range of temperatures. As a protoplanetary disk cools, it will sequentially condense out each of the elements starting with the most refractory and progressing towards the most volatile. At the same time, the disk is also losing mass. 
Elements that condense at later times/cooler temperatures will be less abundant and therefore depleted compared to the more refractory elements. \citet{Cassen96, Cassen01} showed that taking into account the cooling of the Solar Nebula during planetesimal formation led to moderately volatile element ratios that were consistent with meteoritic abundances. \red{\citet{Ciesla08} re-examined this result, this time incorporating the inward migration of small planetesimals, and found that the moderately volatile depletion patterns in the asteroid belt could only be produced in models with a narrow range of parameters that were inconsistent with the $\sim$3 million year timescale of planetesimal formation. They did, however, find that such depletion patterns could occur closer to the Sun, potentially affecting the composition of some of the terrestrial planets.} In this paper, we take an approach similar to \citet{Cassen96} and couple models of protoplanetary disk evolution, equilibrium chemistry and planetesimal growth in order to predict the chemical composition of planetesimals and the planets that they form. We apply this method to systems with solar-like compositions and systems with high C/O ratios and show that considering the growth of planetesimals in an evolving disk affects the chemical composition of the resulting planets. Lastly, we consider the effect that the depletion of oxygen in the outer disk has on planetesimal compositions in the inner disk. \section{Methods} We simulate the formation of terrestrial planets with a coupled chemical and dynamical model. A theoretical model of the protoplanetary disk provides the radial temperature, pressure and density structure of the disk as a function of time. We assume that the elemental abundances of the host star reflect the original elemental abundances of the disk. Combining the disk model and the stellar abundances, we calculate the chemical equilibrium composition of the disk as a function of radius.
This provides a chemical model of the disk over time which is combined with a prescription for the planetesimal formation rate to determine the composition of planetesimals that form in the disk. We then follow the dynamical evolution of a population of planetesimals as they collide and build planets. The final composition of these planets is the combination of each of the planetesimals that they accreted. \subsection{Disk Model} \label{diskmodel} The pressure and temperature structure of the disk, necessary for the equilibrium chemistry calculations, is calculated using a theoretical model derived in \citet{Chambers09} which describes a viscously heated and irradiated disk. The Chambers model divides the disk into three regions. In the intermediate region the heating of the disk is dominated by viscous dissipation. The surface density is given by \begin{equation} \Sigma(r,t) = \Sigma_{vis}\left(\frac{r}{s_0}\right)^{-3/5}\left(1+\frac{t}{\tau_{vis}}\right)^{-57/80}, \end{equation} where \begin{equation} \Sigma_{vis}=\frac{7M_0}{10\pi s_0^2}, \end{equation} and $M_0$ and $s_0$ are the initial mass and outer edge of the disk. The temperature of the disk in this region is given by \begin{equation} T(r,t)=T_{vis}\left(\frac{r}{s_0}\right)^{-9/10}\left(1+\frac{t}{\tau_{vis}}\right)^{-19/40}, \end{equation} where \begin{equation} T_{vis}=\left(\frac{27\kappa_0}{64\sigma}\right)^{1/3} \left(\frac{\alpha\gamma k_B}{\mu m_H}\right)^{1/3} \left(\frac{7M_0}{10\pi s_0^2}\right)^{2/3} \left(\frac{GM_*}{s_0^3}\right)^{1/6}. \end{equation} $\kappa_0$ is the opacity and is taken to be a constant 3 $cm^2g^{-1}$, $\alpha=0.01$ is the viscosity parameter, $\mu=2.4$ is the mean molecular weight, $\gamma=1.7$ is the adiabatic index, and $m_H$ is the mass of hydrogen.
The viscous timescale is: \begin{equation} \tau_{vis} = \frac{1}{16\pi}\frac{\mu m_H}{\alpha\gamma k_B}\frac{\Omega_0M_0}{\Sigma_{vis}T_{vis}}, \end{equation} In the inner region of the disk, heating is still dominated by viscous dissipation but the temperature is too high for solids to condense and consequently the opacity is expected to be several orders of magnitude lower than elsewhere in the disk. The opacity is assumed to follow the power law form given in \citet{Stepinski98}, \begin{equation} \kappa=\kappa_0\left(\frac{T}{T_{e}}\right)^n, \end{equation} where $T_e=1380K$ is the temperature at which the majority of solids evaporate, $\kappa_0$ is the opacity at $T<T_e$ and $n$ is the power law index taken to be -14. The surface density in the inner region of the disk is, \begin{equation} \Sigma(r,t) = \Sigma_{evap}\left(\frac{r}{s_0}\right)^{-24/19}\left(1+\frac{t}{\tau_{vis}}\right)^{-17/16}, \end{equation} where \begin{equation} \Sigma_{evap} = \Sigma_{vis}\left(\frac{T_{vis}}{T_e}\right)^{14/19}. \end{equation} The temperature in the inner region is given by \begin{equation} T(r,t)=T_{vis}^{5/19}T_{e}^{14/19}\left(\frac{r}{s_0}\right)^{-9/38}\left(1+\frac{t}{\tau_{vis}}\right)^{-1/8}. \end{equation} The transition radius between these regions is \begin{equation} r_e(t) = s_0\left(\frac{\Sigma_{evap}}{\Sigma_{vis}}\right)^{95/63}\left(1+\frac{t}{\tau_{vis}}\right)^{-19/36}. \end{equation} In the outer regions of the disk, viscous dissipation does not deposit significant amounts of energy and so stellar irradiation dominates the heating of the disk. The surface density equation is \begin{equation} \Sigma(r,t) = \Sigma_{rad}\left(\frac{r}{s_0}\right)^{-15/14}\left(1+\frac{t}{\tau_{vis}}\right)^{-19/16}, \end{equation} where, \begin{equation} \Sigma_{rad}=\Sigma_{vis}\left(\frac{T_{vis}}{T_{rad}}\right), \end{equation} and \begin{equation} T_{rad}=\left(\frac{4}{7}\right)^{1/4}\left(\frac{T_*kR_*}{GM_*\mu m_H}\right)^{1/7}\left(\frac{R_*}{s_0}\right)^{3/7}T_*. 
\end{equation} The temperature in the outer region is \begin{equation} T(r,t)=T_{rad}\left(\frac{r}{s_0}\right)^{-3/7}. \end{equation} The transition radius between the outer irradiated region and the intermediate viscous region is given by \begin{equation} r_t(t) = s_0\left(\frac{\Sigma_{rad}}{\Sigma_{vis}}\right)^{70/33}\left(1+\frac{t}{\tau_{vis}}\right)^{-133/132}. \end{equation} For all calculations in this work we use $M_0=0.1M_{\odot}$, $s_0=33AU$, $T_*=4200K$ and $R_*=3R_{\odot}$. This model corresponds to the second example in \citet{Stepinski98} which is consistent with a planetesimal forming disk. \subsection{Equilibrium Chemistry} Earlier works \citep[e.g.][]{Bond10,Elser12}, have used equilibrium chemistry calculations in the protoplanetary disk to predict planetesimal compositions. Each calculation used the pressure and temperature profile of a protoplanetary disk at one point in its evolution and so the composition of the planetesimals reflect the composition of the disk only at that epoch. The planets built from these planetesimals had compositions that, to first order, were consistent with the bulk compositions of the terrestrial planets of the Solar System. As a comparison with these works, we perform similar equilibrium calculations. Like the aforementioned works, we use the software package HSC Chemistry (version 7.1). Given the pressure, temperature and elemental abundances, HSC Chemistry will calculate the equilibrium abundances for a list of species by minimizing the total Gibbs free energy of the system using the GIBBS solver \citep{White85}. The species considered are the same as those used in \citet{Bond10} and are listed in Table \ref{Species}. Each of the solid species was considered a pure substance (i.e. no solid solutions). Calculations were performed between 0.3 and 4 AU in radial extent to match the region considered in the dynamical simulations. 
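The disk model of Section~\ref{diskmodel} supplies the temperature inputs for these calculations. As a minimal numerical illustration — a sketch, not part of the original pipeline — the viscous-region profiles can be evaluated on the same 0.3--4 AU grid. The 1 $M_\odot$ stellar mass and cgs constants below are our assumptions, and the factor $(GM_*/s_0^3)^{1/6}=\Omega_0^{1/3}$ in $T_{vis}$ follows \citet{Chambers09}:

```python
import numpy as np

# Physical constants in cgs
G = 6.674e-8          # gravitational constant
k_B = 1.381e-16       # Boltzmann constant
m_H = 1.673e-24       # hydrogen mass
sigma_SB = 5.670e-5   # Stefan-Boltzmann constant
M_sun = 1.989e33
AU = 1.496e13

# Disk parameters from Section 2.1; a 1 M_sun star is our assumption
M0, s0 = 0.1 * M_sun, 33.0 * AU
kappa0, alpha, mu, gamma = 3.0, 0.01, 2.4, 1.7
M_star = 1.0 * M_sun

Sigma_vis = 7.0 * M0 / (10.0 * np.pi * s0**2)
Omega0 = np.sqrt(G * M_star / s0**3)          # Keplerian rate at the outer edge
T_vis = ((27.0 * kappa0 / (64.0 * sigma_SB)) ** (1.0 / 3.0)
         * (alpha * gamma * k_B / (mu * m_H)) ** (1.0 / 3.0)
         * Sigma_vis ** (2.0 / 3.0)
         * Omega0 ** (1.0 / 3.0))
tau_vis = (mu * m_H / (alpha * gamma * k_B)) * Omega0 * M0 \
          / (16.0 * np.pi * Sigma_vis * T_vis)

def viscous_profile(r, t):
    """Sigma [g cm^-2] and T [K] in the viscously heated region
    at radius r [cm] and time t [s]."""
    x, f = r / s0, 1.0 + t / tau_vis
    return (Sigma_vis * x ** (-3.0 / 5.0) * f ** (-57.0 / 80.0),
            T_vis * x ** (-9.0 / 10.0) * f ** (-19.0 / 40.0))

r = np.linspace(0.3, 4.0, 50) * AU
Sigma_t0, T_t0 = viscous_profile(r, 0.0)
Sigma_t1, T_t1 = viscous_profile(r, 3.15e13)  # ~1 Myr later: cooler, less dense
```

As the disk ages, every radius cools and loses mass, which is what drives the sequential condensation considered below.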
\red{ The accuracy of our chemical model is limited by two main factors: 1) The completeness of the list of chemical species considered. This list is limited by both the computational complexity of calculating equilibrium for a large number of species and HSC Chemistry's database. A noticeable shortfall is the lack of carbon in our models throughout the asteroid belt (Figure~\ref{DiskComposition}) where carbonaceous meteorites can contain as much as few percent by mass in carbon. This is expected, though, because most carbon in carbonaceous meteorites is in the form of organic macromolecules \citep{Pizzarello06} which are not included in our study. 2) The assumption of equilibrium conditions. It is unknown exactly how much of the Solar Nebula experienced conditions where chemical equilibrium could occur. However, the material that makes up the asteroids and terrestrial planets of the Solar System is believed to have formed under near equilibrium conditions \citep[][and references therein]{Davis07}. 
} \begin{table} \begin{center} \caption{Chemical species used in equilibrium calculations.} \label{Species} \begin{tabular}{llllll} \hline \multicolumn{6}{c}{Gases}\\ \hline Al & CaH & FeS & MgOH & NiS & SiC\\ Al$_2$O & CaO & H & MgS & O & SiH\\ AlH & CaOH & H$_2$ & N & O$_2$ & SiN\\ AlO & CaS & H$_2$O & NH$_3$ & P & SiO\\ AlOH & Cr & H$_2$S & NO & PH & SiP\\ AlS & CrH & HCN & Na & PN & SiP$_2$\\ C & CrN & HCO & Na$_2$ & PO & SiS\\ CH$_4$ & CrO & HPO & NaH & PS & Ti\\ CN & CrOH & HS & NaO & S & TiN\\ CO & CrS & He & NaOH & S$_2$ & TiO\\ CO$_2$ & Fe & Mg & Ni & SN & TiO$_2$\\ CP & FeH & MgH & NiH & SO & TiS\\ CS & FeO & MgN & NiO & SO$_2$ & \\ Ca & FeOH & MgO & NiOH & Si & \\ \hline \multicolumn{6}{c}{Solids}\\ \hline \multicolumn{2}{l}{Al$_2$O$_3$} & \multicolumn{2}{l}{Cr$_2$FeO$_4$} & \multicolumn{2}{l}{Mg$_2$SiO$_4$} \\ \multicolumn{2}{l}{AlN} & \multicolumn{2}{l}{Fe} & \multicolumn{2}{l}{Mg$_3$Si$_2$O$_5$(OH)$_4$} \\ \multicolumn{2}{l}{C} & \multicolumn{2}{l}{Fe$_2$SiO$_4$} & \multicolumn{2}{l}{MgSiO$_3$} \\ \multicolumn{2}{l}{Ca$_3$(PO$_4$)$_2$} & \multicolumn{2}{l}{Fe$_3$C} & \multicolumn{2}{l}{NaAlSi$_3$O$_8$} \\ \multicolumn{2}{l}{Ca$_2$Al$_2$SiO$_7$} & \multicolumn{2}{l}{Fe$_3$O$_4$} & \multicolumn{2}{l}{Ni} \\ \multicolumn{2}{l}{CaMgSi$_2$O$_6$} & \multicolumn{2}{l}{Fe$_3$P} & \multicolumn{2}{l}{P} \\ \multicolumn{2}{l}{CaAl$_{12}$O$_{19}$} & \multicolumn{2}{l}{FeS} & \multicolumn{2}{l}{Si} \\ \multicolumn{2}{l}{CaAl$_2$Si$_2$O$_8$} & \multicolumn{2}{l}{FeSiO$_3$} & \multicolumn{2}{l}{SiC} \\ \multicolumn{2}{l}{CaTiO$_3$} & \multicolumn{2}{l}{H$_2$O} & \multicolumn{2}{l}{Ti$_2$O$_3$} \\ \multicolumn{2}{l}{CaS} & \multicolumn{2}{l}{MgAl$_2$O$_4$} & \multicolumn{2}{l}{TiC} \\ \multicolumn{2}{l}{Cr} & \multicolumn{2}{l}{MgS} & \multicolumn{2}{l}{TiN} \\ \hline \end{tabular} \end{center} \end{table} \subsection{Sequential Condensation Chemistry} In a more realistic scenario, planetesimals are formed over the course of a few million years. 
Because the temperature profile of a protoplanetary disk changes significantly during this period, it is necessary to consider the changes in the composition of solid material available for building planetesimals as the disk ages. \red{Taking a similar approach to \citet{Cassen96}, we start with an initially hot protoplanetary disk and evolve it forward in time while keeping track of the chemical composition of dust, gas and planetesimals in the disk. In our model, we break the disk up into two components: the gas/dust disk and the planetesimal disk. The planetesimal disk is assumed to not react chemically with the gas/dust disk as only the thin outer shell of a planetesimal is in contact with the gas and dust of the disk. The planetesimal disk is initially empty but builds up over time as planetesimals form from the available solid material. To calculate the composition of these planetesimals we iterate through the lifetime of the disk. At each time step and in each radial zone of the disk we perform the following steps: \begin{enumerate} \item Calculate the equilibrium chemical composition of the dust/gas disk using HSC Chemistry. \item Remove a fraction of the solid material from the gas/dust disk and add it to the planetesimal disk. The amount of material that is converted to planetesimals is determined by the planetesimal formation rate which is described shortly. \item Calculate the radial movement of the gas and dust according to the disk equations described in Section~\ref{diskmodel}. In our model we assume that all dust is perfectly coupled to the gas and that planetesimals are large enough that they do not experience orbital migration due to gas drag and thus remain where they formed. \item Recalculate the chemical inventory throughout the disk as it has changed due to radial motion of the disk and formation of planetesimals. 
\end{enumerate} } \red{We repeat each of these steps until the amount of material being converted into planetesimals is negligible (about 3 million years in our model). The compositions of the planetesimals at the end of this portion of the simulation are taken as the input composition for the next stage of our model (the n-body simulations of late stage planet formation). } As solid material grows from dust to planetesimal size, it must cross a size threshold where it goes from being chemically reactive to mostly locked away in the interior of a planetesimal. We will call the rate at which material crosses this threshold the planetesimal formation rate. The planetesimal formation rate in protoplanetary disks is currently unknown. In fact, the formation of planetesimals, in general, is not well understood. A number of mechanisms have been proposed to explain their formation, which can be categorized into two general processes: 1. Planetesimals formed from the pairwise collisions and sticking of larger and larger aggregates \citep[][and references therein]{Blum08} and 2. They formed from the collective self gravity of a large mass of small particles \citep[e.g.][]{Johansen07,Cuzzi08}. Regardless of the mechanism of planetesimal formation, the planetesimal formation rate will vary in time and space due to differing conditions in the disk. For the sequential condensation chemistry model it is essential to know how the planetesimal formation rate varies. For lack of a consensus on the planetesimal formation mechanism, we will use the simple prescription for the planetesimal formation rate used in \citet{Cassen96}, which is consistent with formation from coagulation: \begin{equation} \dot{\Sigma}_p \propto \Sigma_{solid}\Omega, \label{planetesimalformationrate} \end{equation} where $\Sigma_{solid}$ is the surface density of the solid material available to build planetesimals at a point in the disk and $\Omega$ is the orbital angular speed at that point.
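The per-timestep bookkeeping of steps 1--4, driven by this rate, can be sketched in a single-zone toy version. Here a simple temperature-threshold condensation rule stands in for the full HSC equilibrium calculation, the radial-transport steps are omitted, and the condensation temperatures, cooling law and efficiency $C$ are illustrative assumptions rather than values from our model:

```python
import numpy as np

G, M_sun, AU = 6.674e-8, 1.989e33, 1.496e13

# Toy condensation temperatures [K] standing in for the full HSC
# equilibrium calculation -- illustrative values, not model output.
T_cond = {"Al": 1650.0, "Fe": 1350.0, "Si": 1300.0, "S": 650.0, "H2O": 180.0}

def planetesimal_growth(r_AU, T_of_t, sigma0, C=1e-7, dt=1e3, n_steps=3000):
    """Single-zone sketch of the iteration at one radius as the disk cools.

    T_of_t(t): midplane temperature [K] at time t [yr]
    sigma0:    dict of available surface density per species [g cm^-2]
    C:         assumed planetesimal-formation efficiency (dimensionless)
    """
    omega = np.sqrt(G * M_sun / (r_AU * AU) ** 3) * 3.156e7  # rad / yr
    gas = dict(sigma0)                    # material still in the gas/dust disk
    plts = {sp: 0.0 for sp in gas}        # cumulative planetesimal disk
    for i in range(n_steps):
        T = T_of_t(i * dt)
        frac = min(C * omega * dt, 1.0)   # fraction converted per step
        for sp in gas:
            if T < T_cond[sp]:            # step 1: species is condensed
                dm = frac * gas[sp]       # step 2: solids -> planetesimals
                gas[sp] -= dm
                plts[sp] += dm
        # steps 3-4 (radial transport of gas/dust) omitted in this sketch
    return plts

# A disk at 1 AU cooling from 1500 K over 3 Myr (assumed cooling law)
inventory = planetesimal_growth(1.0, lambda t: 1500.0 * (1.0 + t / 3e5) ** -0.5,
                                {sp: 1.0 for sp in T_cond})
```

Because refractory species condense while more disk mass and time remain, they end up over-represented in the planetesimal inventory relative to volatiles, which is the depletion mechanism this section describes.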
The exact form of this expression may change as planetesimal formation models are revised. Nevertheless, a planetesimal formation rate that increases with increasing solid surface density and a decreasing dynamical timescale is a logical starting point. Despite the fact that only the planetesimals that form interior to 4 AU are considered in further steps of the model (see Section 2.4), it is necessary to track the formation of planetesimals farther out in the disk as well. Planetesimals forming in the outer regions will deplete the dust and gas disk of any elements that are solid, which, when the disk material moves inward, can affect the chemistry in the terrestrial planet forming region. For this reason, our model extends to 10 AU, at which point the maximum depletion of any element is $<$10\%. As discussed previously, the assumption of equilibrium chemistry is likely valid for the inner regions of the disk. Our model does not take into account deviations from equilibrium chemistry that may occur in the regions of the disk exterior to 4 AU. Because the planetesimals formed in this region are not considered in the dynamical simulations, these deviations are only important to the extent that they change the composition of the inward moving material. Material is depleted no more than 15\% by the time it reaches 4 AU. Therefore, deviations from equilibrium chemistry will only affect less than 15\% of the material. \subsection{Dynamical Simulations} The final step of our planet formation model is to determine the number and masses of planets that can form in a system and from what radial range they accreted their material. This step is important because it allows us to track the compositional mixing of planetesimals from different regions of the disk.
\red{To do this, we tag a population of planetesimals with the final composition of planetesimals in the sequential condensation chemistry model and follow their growth into planets.} Rather than simulating the entire growth of small planetesimals all the way to planets, we employ the results of earlier works, and begin at the end of the oligarchic growth phase. This allows us to skip a computationally expensive step which is not necessary for our purposes. \citet{Kokubo02} \red{find that accretion through the oligarchic growth phase proceeds locally assuming that the radial migration of small fragments produced in planetesimal collisions does not significantly alter the surface density profile. Consequently, any planetesimal or embryo surviving at the end of this phase should have a composition similar to the original planetesimals at that location.} On the other hand, during the final stage of planet formation, significant radial mixing occurs as planetary embryos scatter and collide with each other and slowly sweep up the remaining planetesimals in the system. We begin our late stage planet formation simulations at the end of the oligarchic growth phase. As described in \citet{Kokubo02}, at this point, planetesimals have a bimodal distribution in mass. Planetary embryos are much more massive and make up about 50\% of the mass while the rest of the mass is in smaller planetesimals. The embryos are typically separated by about 10 mutual Hill radii. Following \citet{Chambers01} and \citet{Obrien06}, we start our simulations with such a distribution of planetesimals placed throughout the disk such that the surface density profile of the disk obeys the relation \begin{equation} \Sigma = \Sigma_0\left(\frac{r}{1AU}\right)^{-3/2}, \label{surfacedensity} \end{equation} where $\Sigma_0=8\,{\rm g\,cm^{-2}}$; the profile then falls off linearly between 0.7 and 0.3 AU.
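This initial profile can be expressed as a small helper function. Treating the 0.3--0.7 AU taper as a linear ramp down to zero at 0.3 AU is one plausible reading of the prescription and is assumed here purely for illustration:

```python
def surface_density(r_au, sigma0=8.0):
    """Initial solid surface density [g/cm^2] vs. heliocentric distance [AU]:
    an r^{-3/2} power law, tapered linearly to zero between 0.7 and 0.3 AU
    (the taper's exact form is an assumption of this sketch)."""
    if r_au <= 0.3:
        return 0.0
    if r_au < 0.7:
        # linear ramp from 0 at 0.3 AU up to the power-law value at 0.7 AU
        return sigma0 * 0.7 ** -1.5 * (r_au - 0.3) / (0.7 - 0.3)
    return sigma0 * r_au ** -1.5
```

The ramp meets the power law continuously at 0.7 AU, so the profile has no jump at the taper boundary.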
\red{We note that a fully self-consistent model would use the surface density profile calculated from the sequential condensation chemistry model rather than that of Equation \ref{surfacedensity}. However, this would require separate N-body simulations for each different system. Because late stage accretion is a stochastic process, it would then be very difficult to disentangle the effects of the different initial planetesimal compositions from differences in the N-body simulations. As described in Section 3.1, these two surface density profiles are compatible with each other.} For these simulations, we use the N-body integrator, Mercury \citep{Chambers99}, which allows for a set of larger bodies that interact gravitationally with each other, as well as a set of smaller bodies that interact gravitationally with the larger bodies but not each other. This is ideally suited for the initial bimodal distribution of planetesimals used. Planets then grow through the collisions of these bodies. The collisions are assumed to be totally inelastic (i.e. they conserve momentum and form a single body with a mass of the combined mass of the colliding planetesimals). We performed a set of four simulations, similar to those of \citet{Obrien06}, with an initial population of 26 embryos of mass 0.09 $M_\oplus$ and about 1000 planetesimals of mass 0.0024 $M_\oplus$ distributed between 0.3 and 4 AU around a solar mass star such that they obeyed the surface density relation given in Equation \ref{surfacedensity}. The eccentricity and inclination of each planetesimal was given a random value between $0-0.01$ and $0-0.5^\circ$ respectively, and the longitude of the ascending node, argument of periapsis and mean anomaly were assigned random values between 0 and 360$^\circ$. Each system was integrated for 250 Myr with a time step of 5 days. \subsection{System Compositions} We chose four systems with stellar abundances that covered the range of Mg/Si and C/O ratios observed in stars.
These ratios have the biggest influence on the mineralogy of the disk \citep{Bond10b} and so covering the full range should highlight the major differences between the equilibrium chemistry and sequential condensation models. The stellar abundances we used are the same as those used in \citet{Bond10b}, which were taken from \citet{Ecuvillon04}, \citet{Ecuvillon06}, \citet{Beirao05}, and \citet{Gilli06}, and the solar abundances were taken from \citet{Asplund05}. These abundances are shown in Table \ref{Abundances}. \red{We note that we only use the elemental compositions of these stars in our simulations. We do not use other physical parameters of the systems (e.g., stellar mass and planetary companions) for the N-body simulations. We choose to do this so that the same N-body simulations can be used for each system.} \begin{table} \caption{Elemental abundances for systems considered.} \begin{center} \begin{tabular}{lllll} \hline Element & Solar System & 55 Cnc & HD19994 & HD 213240 \\ \hline Al & 2.34$\times10^6$ & 8.7$\times10^6$ & 6.2$\times10^6$ & 4.7$\times10^6$ \\ C & 2.45$\times10^8$ & 7.4$\times10^8$ & 8.9$\times10^8$ & 5.4$\times10^8$ \\ Ca & 2.04$\times10^6$ & 2.8$\times10^6$ & 3.4$\times10^6$ & 2.5$\times10^6$ \\ Cr & 4.37$\times10^5$ & 7.8$\times10^5$ & 7.4$\times10^5$ & 5.2$\times10^5$ \\ Fe & 2.82$\times10^7$ & 6.3$\times10^7$ & 5.1$\times10^7$ & 4.4$\times10^7$ \\ H & 1.00$\times10^{12}$ & 1.0$\times10^{12}$ & 1.0$\times10^{12}$ & 1.0$\times10^{12}$ \\ He & 8.51$\times10^{10}$ & 8.5$\times10^{10}$ & 8.5$\times10^{10}$ & 8.5$\times10^{10}$ \\ Mg & 3.39$\times10^7$ & 1.2$\times10^8$ & 6.2$\times10^7$ & 6.5$\times10^7$ \\ N & 6.03$\times10^7$ & 1.8$\times10^8$ & 2.2$\times10^8$ & 1.3$\times10^8$ \\ Na & 1.48$\times10^6$ & 3.9$\times10^6$ & 6.5$\times10^6$ & 3.6$\times10^6$ \\ Ni & 1.70$\times10^6$ & 3.6$\times10^6$ & 3.3$\times10^6$ & 2.4$\times10^6$ \\ O & 4.57$\times10^8$ & 7.4$\times10^8$ & 7.1$\times10^8$ & 1.2$\times10^9$ \\ P & 2.29$\times10^5$ & 
8.5$\times10^5$ & 6.0$\times10^5$ & 4.6$\times10^5$ \\ S & 1.38$\times10^7$ & 2.1$\times10^7$ & 1.4$\times10^7$ & 1.3$\times10^7$ \\ Si & 3.24$\times10^7$ & 6.9$\times10^7$ & 6.0$\times10^7$ & 4.4$\times10^7$ \\ Ti & 7.94$\times10^4$ & 2.2$\times10^5$ & 1.5$\times10^5$ & 1.3$\times10^5$ \\ \hline \end{tabular} \end{center} \label{Abundances} \end{table} \section{Results} \subsection{Planetesimal Surface Density} Before the sequential condensation model could be used to predict planetesimal composition, it was necessary to calibrate the planetesimal formation rate. Changing the constant of proportionality in the prescription for the planetesimal formation rate will increase or decrease the total mass in planetesimals that have formed by the end of the simulation and will also affect the chemical composition of the planetesimals. The planetesimal surface density that we model needs to match that which we believe existed during the early stages of planet formation. We can place constraints on this planetesimal surface density based on the masses of the terrestrial planets in the Solar System \citep[i.e. the minimum mass Solar Nebula or MMSN;][]{Weidenschilling77}. Once planetesimals are formed, planet formation is a fairly efficient process with planets accreting about 50\% of the original mass \citep{Obrien06}. This means that although the MMSN is a lower limit, it is probably not far from the true amount of solid material that existed in the disk. The concept of the MMSN has been applied to extrasolar systems \citep[e.g.][]{Chiang13,Kuchner04} with a range of possible disk masses. In Section 3.2 we consider the effects of changing this normalization factor on our predicted planetesimal compositions.
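As a rough consistency check on this normalization, integrating the surface density profile of Equation \ref{surfacedensity} from 0.3 to 4 AU (ignoring the inner taper) gives a solid mass of a few Earth masses, comparable to the $\sim$4.7 $M_\oplus$ of embryos and planetesimals placed in the dynamical simulations. The arithmetic sketch below uses standard constants and is illustrative only:

```python
import math

AU_CM = 1.496e13       # 1 AU in cm
M_EARTH_G = 5.972e27   # Earth mass in g
SIGMA0 = 8.0           # solid surface density at 1 AU [g/cm^2]

# M = integral_{0.3}^{4} Sigma0 * x^{-3/2} * 2*pi*x * AU^2 dx   (x = r / 1 AU)
#   = 4*pi*Sigma0*AU^2 * (sqrt(4) - sqrt(0.3))
mass_g = 4.0 * math.pi * SIGMA0 * AU_CM ** 2 * (math.sqrt(4.0) - math.sqrt(0.3))
mass_earth = mass_g / M_EARTH_G   # roughly 5.5 Earth masses
```

The $\sim$5.5 $M_\oplus$ of solids is consistent with the total embryo plus planetesimal mass used in the N-body simulations once the inner taper (which this estimate ignores) is accounted for.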
In order to match the planetesimal surface density of the MMSN we found that the planetesimal formation rate in our model should be: \begin{equation} \dot{\Sigma}_p = 9.5\times10^{-6}\,{\rm yr^{-1}}\,\Sigma_{solid}\frac{\Omega}{\Omega_{1AU}}. \end{equation} This produced a distribution of planetesimal masses that is consistent with the $r^{-3/2}$ profile of the MMSN. N-body simulations of planet formation in the inner Solar System \citep[e.g.][]{Chambers01, Obrien06} required surface density profiles that decreased inwards of 0.7 AU in order to form low mass Mercury analogs. However, the surface density profile produced in the sequential condensation model continues to increase interior to 0.7 AU. This discrepancy can be explained in a number of ways. For example, the prescription for the planetesimal formation/growth rate may be oversimplified and not appropriate for the innermost region of the disk, or planetesimals may become depleted in this region due to faster migration rates associated with higher gas densities. In our model, when we seed the disk with planetesimals for the dynamical simulations, we do not assume the surface density profile output by the sequential condensation model. Instead we assume the surface density profile of Equation \ref{surfacedensity}. The choice of surface density profile does not significantly affect the results because they are consistent with each other to begin with. However, we chose to use Equation \ref{surfacedensity} so that we could use the same set of N-body simulations for each of the different system compositions. This allows us to compare the effects of different initial compositions without having to account for the stochastic nature of late stage planet formation. \subsection{Disk Chemistry} \begin{figure*} \epsscale{1.3} \plotone{Composition_Mole.eps} \caption{Elemental abundances as a function of radius ($r$) in the disk for the four systems considered.
The left column shows results from the sequential condensation model and the right column shows results from the equilibrium chemistry model.} \label{DiskComposition} \end{figure*} Figure \ref{DiskComposition} shows the composition of planetesimals as a function of radius in the disk for each of the four different systems for both the equilibrium chemistry model and the sequential condensation model. \red{In the equilibrium chemistry model, planetesimals were assumed to form instantaneously at a disk age of $1.5\times10^5$ years. We chose this age because it produces planets with compositions that match those of the terrestrial planets of the Solar System well.} This figure highlights two major differences that occur by incorporating the time evolution of the disk into the chemistry model: (1) the radial range in which elements are present in planetesimals is extended in the sequential condensation model relative to the equilibrium chemistry model, and (2) the sequential condensation model leads to smoothly varying elemental abundances in planetesimals with increasing radius in the disk, compared to the equilibrium chemistry model where elements go from not present to fully condensed out over a small radial range. The radial range where carbon makes up a significant fraction of the planetesimals' composition is much larger in the sequential condensation model, extending out to $\sim$3 AU, compared to the equilibrium chemistry model where carbon is only present in planetesimals interior to $\sim$1 AU. In carbon rich systems, carbon is present in solid form at temperatures higher than 550 K. In the equilibrium chemistry model, carbon rich planetesimals are limited to the region of the disk where these temperatures occur at the specific disk age used. On the other hand, planetesimals in the sequential condensation model are the accumulation of the solid material formed throughout the lifetime of the disk.
This leads to carbon accumulating onto planetesimals farther out in the disk when it was younger and hotter, and closer to the star as the disk cools. The relative amounts of the most abundant elements in systems with more solar-like chemistry are much less affected by the choice of chemistry model. Figure \ref{DiskComposition} shows that the ratios of Mg, Fe and Si are relatively unchanged between models for these systems. This is due to the fact that these elements all condense out at about the same temperature (between 1300K and 1330K). The ratio of O to these three elements is also quite similar between the models interior to 2 AU. However, beyond 2 AU the sequential condensation model shows a much quicker increase in O abundance compared to the equilibrium chemistry model. This occurs because the ice line continues to move closer to the star as the disk cools, which is not captured in the equilibrium chemistry model. In addition to O, the relative abundances of some of the other more volatile elements can be seen to vary between the two models. Specifically, the amounts of S and Na show a gradual decrease closer to the star in the sequential condensation model compared to the abrupt change seen in the equilibrium chemistry model. \red{Although this gradual decrease is qualitatively consistent with the volatile element fractionation patterns seen in Solar System meteorites \citep[see][for a review]{Palme88}, \citet{Ciesla08} found that sequential condensation-like models cannot reproduce this trend throughout the asteroid belt, where it is observed in the Solar System. This result suggests that other mechanisms, such as evaporation of volatiles \citep{Davis05}, may play an important role in determining the composition of planetesimals.} A more subtle effect of the sequential condensation model is that it depletes the disk of certain elements relative to others.
As a parcel of gas and dust migrates through the disk, it deposits solid material in the form of planetesimals. The disk becomes depleted in the elements that composed this material, or in a relative sense, enriched in the elements that were not present. The exact amount of depletion seen in any given parcel will depend on where it originated from, the disk model that was used and the planetesimal accretion rate. This effect is extremely important when considering the C/O ratio of a disk. \red{Carbon is abundant as a solid in the form of graphite at temperatures between $\sim$600K and $\sim$1000K, and as silicon carbide at temperatures greater than $\sim$1000K. It is also abundant in carbon ices below 78K \citep{Lodders03}; we do not model these low temperature ices because the disk does not reach low enough temperatures in the region and time of interest. For a large portion of the disk (where temperatures are between 78K and 600K), very little carbon will condense out of the disk.} In contrast, oxygen is the dominant component of rocky material (present at temperatures below 1300K) and is even more abundant as a solid beyond the ice line. This means that throughout much of the disk, oxygen is being depleted much more than carbon, leading to an increase in the C/O ratio. In some cases the C/O ratio can reach high enough levels that once a parcel moves closer to the star and heats up, carbon will condense out of the disk, creating carbon rich planetesimals. This can occur for systems that were not originally significantly enriched in C. Figure \ref{CtoORatio} shows the wt-\% of carbon contained in planetesimals for disks with initial C/O ratios between solar and 1.00. The initial elemental abundances of the disk for each curve are all solar, except for the oxygen abundance, which has been decreased in order to obtain each C/O ratio.
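This mechanism can be illustrated with a toy calculation in which an inward-drifting parcel repeatedly loses oxygen-bearing solids to planetesimals while its carbon stays gaseous. The condensed-oxygen fraction and the per-event deposition fraction below are arbitrary assumed values, not outputs of the disk model:

```python
def co_ratio_history(n_c, n_o, f_solid_o=0.5, f_dep=0.1, n_events=10):
    """Toy model of C/O enhancement in an inward-drifting parcel.
    Between ~78 K and ~600 K essentially no carbon condenses, while a
    fraction f_solid_o of the oxygen is locked in rock and ice; each
    deposition event removes a fraction f_dep of those solids as
    planetesimals.  Carbon remains in the gas, so C/O rises."""
    history = [n_c / n_o]
    for _ in range(n_events):
        n_o -= f_dep * f_solid_o * n_o  # oxygen lost to planetesimals
        history.append(n_c / n_o)       # carbon inventory is unchanged
    return history

ratios = co_ratio_history(n_c=0.54, n_o=1.0)  # start at the solar C/O ratio
```

With these assumed values, ten deposition events raise the gas-phase C/O from 0.54 to above 0.8, at which point carbon could condense once the parcel later moves inside the $\sim$600 K region.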
In contrast to the sequential condensation model, carbon rich planetesimals can only be formed in the equilibrium chemistry model if the initial C/O ratio is greater than 0.8. Furthermore, the C/O enhancement of the sequential condensation model allows a much larger region of the disk to form carbon rich planetesimals. \begin{figure} \epsscale{1.4} \plotone{CtoO_Carbon.eps} \caption{The wt-\% of carbon contained in planetesimals as a function of radius ($r$) in the disk. Each line represents a disk with a different initial C/O ratio ranging from 0.54 (solar) to 1. For comparison, the equilibrium chemistry results (dashed lines) are shown in addition to the sequential condensation results (solid lines). The sequential condensation model produces carbon rich planetesimals throughout a larger radial range of the disk and in disks with lower initial C/O ratios than the equilibrium chemistry model does. } \label{CtoORatio} \end{figure} The amount of carbon deposited in planetesimals will depend not only on the original C/O ratio of the disk but also on the disk model and planetesimal accretion rate that is used. Exploring the possible range of disk models is beyond the scope of this paper, but their effect on protoplanetary disk chemistry merits further study. We increased the planetesimal accretion rate in the disk from the nominal value needed to achieve a disk mass comparable to the MMSN to three times that value. Figure \ref{AccretionRate} shows the effect of this increase on the carbon abundance of planetesimals in a disk with an original C/O ratio of 0.8. Higher accretion rates lead to faster depletion of oxygen and a correspondingly fast increase in the local C/O ratio. As a consequence, carbon condenses out of the disk gas at more distant radii leading to carbon enriched planetesimals throughout more of the disk. 
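The sensitivity to the accretion-rate normalization can be understood from a local depletion timescale. Neglecting the resupply of solids by radial inflow (which the full model includes), the calibrated planetesimal formation rate implies exponential decay of the local solid reservoir:

```latex
% Neglecting radial inflow, d\Sigma_{solid}/dt = -k\,\Sigma_{solid} with
% k = 9.5\times10^{-6}\,\mathrm{yr^{-1}}\,(\Omega/\Omega_{1AU}), so that
\begin{equation}
  \Sigma_{solid}(t) = \Sigma_{solid}(0)\,e^{-t/\tau}, \qquad
  \tau = \left(9.5\times10^{-6}\,\mathrm{yr^{-1}}\,
         \frac{\Omega}{\Omega_{1AU}}\right)^{-1}
       \approx 1.1\times10^{5}\left(\frac{r}{1\,\mathrm{AU}}\right)^{3/2}\mathrm{yr}.
\end{equation}
```

Tripling the rate constant shortens $\tau$ threefold at every radius, so oxygen-bearing solids are sequestered earlier in the disk's evolution and the local C/O ratio of the remaining material rises sooner.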
\begin{figure} \epsscale{1.4} \plotone{AccretionRate_Carbon.eps} \caption{The wt-\% of carbon contained in planetesimals at different radii ($r$) in the disk for different planetesimal accretion rates. The planetesimal accretion rate used for each simulation is the nominal value needed to obtain a final planetesimal surface density similar to the MMSN scaled by the value in the legend. } \label{AccretionRate} \end{figure} \subsection{N-Body Simulations} In each of the four N-body simulations, we form two or three planets within 2 AU (see Figure \ref{MaterialOrigin}). They range in mass from 0.1-1.3M$_{\oplus}$ with a median mass of 0.7M$_{\oplus}$. These results are consistent with the CJS simulations of \citet{Obrien06} on which they were based. \red{ Figure \ref{MaterialOrigin} highlights both the importance and stochasticity of radial mixing of planetesimals in the final stages of terrestrial planet formation. The origin of material for any given planet can range from a small area around the planet's final location to practically the whole terrestrial planet forming zone. This stochasticity acts to increase the diversity of exoplanet compositions and also means that any one system is not representative of the others. The main importance of radial mixing is that it weakens any compositional gradients that existed at earlier stages. } \begin{figure} \epsscale{1.1} \plotone{MaterialOrigin_all.eps} \caption{\red{ Material source regions for simulated planets. Each pie plot corresponds to a simulated planet and each row to one simulation. Each planet's final orbital position is indicated by its location along the x-axis and its mass is proportional to the radius of the pie plot cubed. Each colored slice represents material that originated from the region of the disk indicated in the color bar.
}} \label{MaterialOrigin} \end{figure} \subsection{Planet Bulk Composition} We first performed a benchmark test to see if our equilibrium chemistry simulations could reproduce the bulk compositions of the terrestrial planets of the Solar System. \red{As can be seen from Figure \ref{SSAbundances}, the simulated compositions are in good agreement with the Solar System values except for some of the more volatile elements, which are over-abundant in the simulated planets. It is possible that non-equilibrium effects, such as evaporation during planetesimal collisions \citep{Bond10}, were important in the Solar Nebula.} The elemental abundances for the Solar System planets were obtained from \citet{Morgan80}, \citet{Kargel93} and \citet{Lodders97} and are expected to have errors as high as 25\% \citep{Bond10}. Our results are consistent with the findings of \citet{Bond10}, which can also be seen in Figure \ref{SSAbundances}. Small differences exist between our model and theirs, but these are expected due to the stochastic nature of planet formation and differences in the protoplanetary disk model used. Also shown in Figure \ref{SSAbundances} are the bulk elemental abundances of simulated planets using the sequential condensation chemistry model. In general the results are very similar to those of the equilibrium chemistry model. The only elements for which we expected differences, based on the abundance curves in Figure \ref{DiskComposition}, are O, Na and S. These elements have intermediate condensation temperatures and therefore are not in the same proportions throughout the terrestrial planet formation zone. The differences seen in the abundance curves between the sequential condensation and equilibrium chemistry models were likely washed out due to the radial mixing of planetesimals during late stage formation simulations, leading to the similarity in bulk compositions between the models.
\begin{figure} \epsscale{1.2} \plotone{SSAbundances.eps} \caption{Elemental abundances of simulated planets normalized to abundances of corresponding Solar System planets. Results from our sequential condensation (SC) model and our equilibrium chemistry (EC) model are shown in addition to those of \citet{Bond10}.} \label{SSAbundances} \end{figure} In the simulations of carbon rich systems there are significant differences between the equilibrium chemistry and sequential condensation models. Figure \ref{HighCarbonAbundances} shows the weight percent of the most abundant elements in the simulated planets for the HD 19994 system. The innermost two planets accreted most of their material from a region where carbon was solid in chemical equilibrium at 1.5$\times10^5$ years and thus have a large fraction of their mass in carbon. This results in planets with similar bulk composition between the two models. However, the outermost planet of this system accreted little of its material from the carbon rich region of the disk in the equilibrium chemistry model and consequently has a very small fraction of its mass in carbon. On the other hand, carbon is abundant in planetesimals beyond 2 AU in the sequential condensation model, thus forming a planet with a significant portion of carbon at 1.6 AU. \begin{figure} \epsscale{1.2} \plotone{HighCPlanetCompositionPie.eps} \caption{Composition of simulated planets in the HD 19994 system. Each pie chart shows the fraction of a planet's mass in each of the most abundant elements and its semi-major axis (along the x-axis).
Results for the equilibrium chemistry model are shown in the top row and those for the sequential condensation model are shown in the bottom row.} \label{HighCarbonAbundances} \end{figure} \section{Discussion} \subsection{Why Should the Sequential Condensation Model be Used?} Both this work and previous works found that an equilibrium chemistry model of the Solar Nebula produces planets with remarkably similar abundances to the true terrestrial planet abundances. The sequential condensation model produces similar results but at the cost of a much more complex model. This raises the question: why use the sequential condensation model at all? Firstly, the equilibrium chemistry model is not physically motivated with regard to its connection with planetesimal formation. The implicit assumption of the equilibrium chemistry model is that all planetesimals formed at the same time everywhere in the disk and on timescales much shorter than the timescales for disk evolution. For any given region of the disk, planetesimals should begin forming as soon as conditions are right. These conditions should occur at different times for different parts of the disk. Assuming they all form simultaneously throughout the disk is unrealistic. Furthermore, we know from age dating of meteoritic material that planetesimal formation lasted for about 2.5 Myr \citep{Amelin02}. The assumptions of the equilibrium chemistry model have been overlooked in the past because of the model's success in predicting the bulk composition of the Solar System terrestrial planets. The success of the equilibrium model can largely be attributed to the high condensation temperatures of most of the major rock forming elements. For all but the hottest disks, these elements will be fully condensed out, and thus in solar proportions, throughout the entire terrestrial planet forming region.
The refractory elements are present in approximately solar proportions in the terrestrial planets (excluding Mercury) and so the models naturally match these. All that is then required is a disk model that can reproduce the abundances of the more volatile elements (e.g., O, Na and S). \citet{Bond10} were able to find a model that produced planets with the correct O abundance, but these planets were over-abundant in Na and S compared to the Solar System planets. Nevertheless, their model accurately reproduced the abundances of ten out of twelve of the most common elements in the terrestrial planets. The situation is different for carbon rich systems. In chemical equilibrium, carbon is abundant as a solid at temperatures higher than $\sim$550 K but is a gas at lower temperatures. For most disk ages, 550 K lies in the middle of the terrestrial planet forming region. Consequently, the predicted composition of planets will vary greatly in the equilibrium chemistry model depending on the choice of disk age. Because there are no carbon rich systems in which we know the composition of the planets, it is not obvious which, if any, disk age will produce the correct results. We suggest that the equilibrium chemistry model's ability to reproduce the abundances in the Solar System is a natural consequence of the condensation properties of a disk of solar composition, but that it does not necessarily produce realistic results for carbon rich systems. The sequential condensation model provides a more realistic picture of planetesimal formation in which planetesimals can form in any region of the disk at any time as long as the conditions are correct. This is consistent with the idea that planetesimals form over a few million years, and the model produces planets with elemental abundances in good agreement with those of the Solar System's planets.
Although the results for both models are similar in the case of the Solar System, the sequential condensation model likely predicts more realistic compositions for planets in carbon rich systems. \subsection{Formation and Growth of Planetesimals} The most uncertain part of the sequential condensation model is the prescription for planetesimal formation and growth. This is necessarily so due to the current uncertainties in planetesimal formation theory. The prescription used in this work is likely oversimplified. We showed that simply increasing the magnitude of the planetesimal accretion rate has a significant effect on the chemistry. Much larger effects would likely occur by changing its functional form. For example, the planetesimal formation rate could be a stronger function of the density of solids in the disk. In such a model, comparatively more material would form planetesimals earlier in the disk's lifetime, when it is denser and hotter, causing the planetesimal disk to be more enhanced in refractory elements and depleted in volatile elements. Such a scenario might be able to reduce the over-abundance of the more volatile elements we see in our current Solar System model. Applied to moderately carbon rich disks, this could lead to enough depletion of oxygen early on that solids containing carbon could condense farther away from the star, thus increasing the amount of solid carbon in the terrestrial planet formation zone. \subsection{Migration of Planetesimals} In the sequential condensation model, planetesimals form at a given distance from the star and remain there for the duration of the simulation. However, planetesimals are expected to migrate radially due to gas drag, and should accrete material from different parts of the disk. Including planetesimal migration is beyond the scope of this work as it would require knowledge of the distribution of planetesimal sizes and masses.
However, it is possible that migration may significantly affect the composition of forming planetesimals and merits future study. \subsection{C/O Enhancements} The sequestration of oxygen in planetesimals has been used as a mechanism to explain the high observed C/O ratios in some gas giant exoplanets \citep{Oberg11, Helling14}. Additionally, the sequestration of oxygen in the outer disk and resulting depletion in the inner disk has been studied in the context of the distribution of water in the Solar Nebula \citep{Ciesla06}, but its influence on the abundances of other chemical species was not considered. \citet{Najita11} pointed out that this depletion may influence the abundances of hydrocarbons in the inner disk and that these differences should be observable. \citet{Carr11} and \citet{Najita11} find a correlation between the HCN/H$_2$O flux ratio and disk masses for T Tauri stars. They suggest that this trend could be explained if a high HCN/H$_2$O flux results from a high C/O ratio and that the C/O ratio is enhanced more in larger disks due to more efficient planetesimal formation. Similarly, \citet{Pascucci13} find higher average HCN/H$_2$O fluxes in brown dwarf disks than in T Tauri star disks and attribute this to more efficient planetesimal formation in disks around brown dwarfs. They go on to point out that the C/O ratios needed to explain their observations would have important implications for the compositions of rocky planets forming in these carbon rich regions of the disk. Our models explore exactly this point. Indeed, we find that a portion of the disk can contain a substantial amount of solid carbon for disks with an initial C/O ratio as low as 0.65. This finding is unique compared to the equilibrium chemistry model in which carbon rich planetesimals can only form in disks with initial C/O ratios above 0.8.
The carbon rich region in the sequential condensation model is generally confined to less than $\sim$0.5-0.6 AU for disks with C/O ratios between 0.65 and 0.8. As can be seen from Figure \ref{MaterialOrigin}, this region accounts for a small fraction of a planet's material source region, creating planets with at most a few percent of their mass in carbon. The dynamical simulations in this work are limited to Solar System like initial conditions (i.e., the planetesimal disk extends from 0.3-4 AU). However, it is quite possible that planets in other systems could form much closer to their host stars \citep[e.g.][]{Bond10b, Raymond08b}, and thus potentially accrete more carbon rich material. \citet{Chiang13} discuss the possibility that most of the known close-in exoplanets formed in situ rather than migrating there. If this is the case or even partially the case, then we predict that carbon rich planets may be significantly more common than previously indicated. It is also possible that a further exploration of parameter space in our model will change the radial range where carbon rich planetesimals can exist. There are a number of factors that could influence where and when oxygen rich material is removed from the disk and turned into planetesimals, including: (1) The magnitude and functional form of the planetesimal accretion rate. We showed that by increasing the overall planetesimal accretion rate, the carbon rich region of the disk moved inwards. Additionally, the rate at which planetesimals in different parts of the disk form and grow may not be governed by equation \ref{planetesimalformationrate}. Variation from this form would completely change the pattern of oxygen depletion and thus the C/O enhancement. (2) The disk temperature and density structure as a function of time. The structure and evolution of the disk is undoubtedly connected to the formation and growth of planetesimals. 
In our planetesimal formation prescription this is true because the planetesimal formation rate is dependent on the surface density of the disk. By changing the equations of disk evolution we would also be changing the timing of planetesimal formation and thus their composition as well. (3) The overall amount of (C+O)/(Mg+Si). For a fixed C/O ratio, if oxygen (and carbon) are depleted relative to their solar values but Mg and Si are not, then the relative fraction of O that bonds with Mg and Si is higher; thus the effective O depletion is larger and, consequently, the C/O enhancement is stronger.

\section{Summary and Conclusions}

We have implemented a model of protoplanetary disk chemistry that accounts for the time evolution of the disk and the formation and accretion of planetesimals. Previous models performed equilibrium chemistry calculations at a single point in the disk's lifetime. We have compared the results of the sequential condensation model to those of the equilibrium chemistry model and find three main results:

\begin{itemize}

\item For a disk of solar composition, the sequential condensation model produces planets with elemental abundances very similar to those produced by the equilibrium chemistry model. Of the elements considered, the abundances match the true values of the Solar System terrestrial planets remarkably well, with the exception of sodium and sulfur. Whereas the equilibrium chemistry model was already optimized to reproduce the compositions of the Solar System planets, the full parameter space of the sequential condensation model has yet to be explored. Tuning the sequential condensation model should produce better fits to the abundances of the more volatile elements in the Solar System planets.

\item In carbon rich disks, the sequential condensation model predicts planetesimals rich in carbon over a wider range of orbital radii than the equilibrium chemistry model.
When compared to the Solar System optimized case, this results in carbon enriched planets at larger semi-major axes. Overall, the sequential condensation model produces planetesimals with compositions that vary less over the radial range of the terrestrial planet forming region.

\item The sequential condensation model predicts that carbon enriched planetesimals can form in initially oxygen dominated protoplanetary disks. These planetesimals are only produced in the innermost regions of the disk using the current model parameters. However, it is possible that an exploration of parameter space may find that the sequential condensation model can produce carbon rich planetesimals at greater distances from the star. Additionally, if a significant fraction of known close-in super-Earths formed in situ, the number of carbon enriched exoplanets may be significantly greater than previously indicated.

\end{itemize}

\acknowledgments

We thank the referee, John Chambers, for helpful comments that improved this paper. NM acknowledges support from the Yale Center for Astronomy and Astrophysics (YCAA) at Yale University through the YCAA postdoctoral prize fellowship.
# Stellar News Feed Archive

Source: http://www.aavso.org/stellar-news-feed?page=1

## How Many Stars Are In The Universe? (Tuesday, June 3, 2014 - 10:09)

Looking up into the night sky, it's challenging enough for an amateur astronomer to count the number of naked-eye stars that are visible. With bigger telescopes, more stars become visible, making counting impossible because of the amount of time it would take. So how do astronomers figure out how many stars are in the universe?

The first sticky part is trying to define what "universe" means. Even if we narrow down the definition to the "observable" universe — what we can see — estimating the number of stars within it requires knowing just how big the universe is. The first complication is that the universe itself is expanding, and the second complication is that space-time is curved.

## Does the Period of a Pulsating Star Depend on its Amplitude? (Friday, May 30, 2014 - 10:00)

Several classes of pulsating stars are now known to undergo slow changes in amplitude; these include pulsating red giants and supergiants, and yellow supergiants. We have used visual observations from the AAVSO International Database, and wavelet analysis of 39 red giants, 7 red supergiants, and 3 yellow supergiants, to test the hypothesis that an increase in amplitude would result in an increase in period, because of non-linear effects in the pulsation. For most of the stars, the results are complex and/or indeterminate, due to the limitations of the data, the small amplitude or amplitude variation, or other processes such as random cycle-to-cycle period fluctuations. For the dozen stars which have substantial amplitude variation, and reasonably simple behavior, there is a 75-80% tendency to show a positive correlation between amplitude and period.

Authors: John R. Percy, Jeong Yeon (JY) Yook

## AAVSO Acronym of the Day: VPhot (Friday, May 30, 2014 - 09:45)

Today's acronym of the day is VPhot – Variable Star Photometry Software.

VPhot is an online tool for photometric analysis. You can upload your own FITS images to VPhot or have images taken via AAVSOnet automatically sent to your VPhot account. All VPhot processing is done via a web browser. All of the basic photometry tools exist (stacking, time series analysis, control of annuli, transformation, etc.) and the algorithms have been rigorously checked and confirmed to be of the highest quality. Results of the processing are automatically exported in AAVSO Extended Format, meaning you can directly load them into our database via WebObs without having to make any changes to the data file. VPhot is only available to AAVSO members.

This is just one example of the tools and programs the AAVSO provides to its members, observers and the astronomical community. Please help support these services by contributing to this year's Annual Campaign. You can mail a check to AAVSO headquarters, or you can make a donation online: just click the Donate Now button on our home page and select Annual Campaign in the drop down menu.

## A new method for cosmic distances: using active galactic nuclei (Thursday, May 29, 2014 - 13:07)

Adam Riess, co-discoverer of the accelerating expansion of the Universe due to dark energy, visited Harvard last year, where he told me a story about his time in grad school there. He recalled hearing a lecture on the uncertainty in the rate at which the Universe is expanding and thinking, "That problem will never be solved." Twenty years on, we know the local expansion rate (called the Hubble constant, or H0) to about 4% precision, and many different, independent techniques find mutually consistent values. However, measuring the Hubble constant remains one of the most important problems in cosmology because it is intimately connected to the Universe's contents. In particular, General Relativity means that the Universe's contents set its expansion rate, and so precise measurement of the expansion rate can probe the amount and evolution of different components of the Universe.

Thus, it is exciting when a new, independent method of measuring the expansion rate (H0) is proposed — and even more exciting when it works. In the short paper I discuss today, the authors show that time delays in the light emitted from distant, violently variable galactic centers ("active galactic nuclei", or AGN) can probe H0 with precision similar to that of the Hubble Space Telescope — and out to about twice the distance.

## AAVSO Acronym of the Day: LCG (Thursday, May 29, 2014 - 10:18)

The AAVSO Acronym of the Day today is LCG – The AAVSO Light Curve Generator.

Observations of variable stars are plotted on a graph called a light curve as the apparent brightness (magnitude) versus time, usually in Julian Date (JD). The light curve is the single most important graph in variable star astronomy. Light curves allow astronomers to unlock some of the secrets of variable stars. The AAVSO Light Curve Generator allows anyone to plot light curves using data on thousands of stars stored in the AAVSO International Database. It is one of the most popular tools on the AAVSO website.

## The M4.5V flare star AF Psc as seen in K2 engineering data (Thursday, May 29, 2014 - 10:09)

We present the light curve of the little studied flare star AF Psc (M4.5V) obtained using engineering data from the K2 mission. Data were obtained in Long Cadence mode giving an effective exposure of 29 min and nearly 9 d of coverage. A clear modulation on a period of 1.08 d was seen, which is the signature of the stellar rotation period. We identify 14 flares in the light curve, with the most luminous flares apparently coming from the same active region. We compare the flare characteristics of AF Psc to two M4V flare stars studied using Kepler data. The K2 mission, if given approval, will present a unique opportunity to study the rotation and flare properties of late type dwarf stars with different ages and masses.

Authors: Gavin Ramsay, J. Gerry Doyle (Armagh Observatory)

## Revised age for CM Draconis and WD 1633+572: Toward a resolution of model-observation radius discrepancies (Thursday, May 29, 2014 - 09:55)

We report an age revision for the low-mass detached eclipsing binary CM Draconis and its common proper motion companion, WD 1633+572. An age of 10±2 Gyr is found by combining an age estimate for the lifetime of WD 1633+572 and an estimate from galactic space motions. The revised age is more than a factor of two older than previous estimates. Our results provide consistency between the white dwarf age and the system's galactic kinematics, which reveal the system is a highly probable member of the galactic thick disk. We find the probability that CM Draconis and WD 1633+572 are members of the thick disk is 8500 times greater than the probability that they are members of the thin disk and 170 times greater than the probability they are halo interlopers. If CM Draconis is a member of the thick disk, it is likely enriched in α-elements compared to iron by at least 0.2 dex relative to the Sun. This leads to the possibility that previous studies under-estimate the [Fe/H] value, suggesting the system has a near-solar [Fe/H]. Implications for the long-standing discrepancies between the radii of CM Draconis and predictions from stellar evolution theory are discussed. We conclude that CM Draconis is only inflated by about 2% compared to stellar evolution predictions.

Authors: Gregory A. Feiden and Brian Chaboyer

## AAVSO Acronym of the Day: SeqPlot (Wednesday, May 28, 2014 - 08:08)

Today's acronym of the day features one of the AAVSO's tools, SeqPlot – The AAVSO Sequence Plotter.

SeqPlot is a platform independent Java Web Start application. It accesses an on-line MySQL database that contains more than 47 million stars that have been calibrated through the AAVSO Photometric All-Sky Survey (APASS), USNO-Flagstaff, Sonoita, and other AAVSOnet telescopes. The General Catalogue of Photometric Data (GCPD) and Tycho-2 catalogs are also included. You can plot fields using either a field name or by coordinates and field size. Users can select stars for a sequence and write them to a text file in the correct format for uploading to VSD.

## 'Wolf-Rayet' Supernova Observed — "Its Flash Ionized Its Immediate Surroundings Followed by Powerful Blast Wave" (Tuesday, May 27, 2014 - 22:07)

Wolf-Rayet stars are very large and very hot. Astronomers have long wondered whether Wolf-Rayet stars are the progenitors of certain types of supernovae. New work from the Palomar Transient Factory team is homing in on the answer. They have identified a Wolf-Rayet star as the likely progenitor of a recently exploded supernova. When the supernova exploded, its flash ionized its immediate surroundings, giving the astronomers a direct glimpse of the progenitor star's chemistry. This opportunity lasts only for a day before the supernova blast wave sweeps the ionization away, so it's crucial to rapidly respond to a young supernova discovery to get the flash spectrum in the nick of time.

Read the rest of the story at The Daily Galaxy.

## AAVSO Acronym of the Day - eJAAVSO (Tuesday, May 27, 2014 - 08:19)

Today's AAVSO Acronym is eJAAVSO – The Electronic Journal of the AAVSO.

eJAAVSO is the online counterpart of The Journal of the American Association of Variable Star Observers. The eJAAVSO consists of papers that have been refereed, edited, and accepted for publication in the paper edition of the Journal. Its purpose is to speed and broaden the dissemination of variable star research to the global astronomical community, and make papers available to interested parties. All papers and abstracts from 1972 to the present are accessible to all readers via the eJAAVSO page. In addition, a PDF file of each complete issue of The Journal from Volume 36 on is available free of charge to members via the eJAAVSO page.

## The Catalina Surveys Periodic Variable Star Catalog (Friday, May 23, 2014 - 08:07)

We present ~47,000 periodic variables found during the analysis of 5.4 million variable star candidates within a 20,000 square degree region covered by the Catalina Surveys Data Release-1 (CSDR1). Combining these variables with type-ab RR Lyrae from our previous work, we produce an on-line catalog containing periods, amplitudes, and classifications for ~61,000 periodic variables. By cross-matching these variables with those from prior surveys, we find that > 90% of the ~8,000 known periodic variables in the survey region are recovered. For these sources we find excellent agreement between our catalog and prior values of luminosity, period and amplitude, as well as classification. We investigate the rate of confusion between objects classified as contact binaries and type-c RR Lyrae (RRc's) based on periods, colours, amplitudes, metallicities, radial velocities and surface gravities. We find that no more than a few percent of the variables in these classes are misidentified. By deriving distances for this clean sample of ~5,500 RRc's, we trace the path of the Sagittarius tidal streams within the Galactic halo. Selecting 146 outer-halo RRc's with SDSS radial velocities, we confirm the presence of a coherent halo structure that is inconsistent with current N-body simulations of the Sagittarius tidal stream. We also find numerous long-period variables that are very likely associated with the Sagittarius tidal streams system. Based on the examination of 31,000 contact binary light curves we find evidence for two subgroups exhibiting irregular lightcurves. One subgroup presents significant variations in mean brightness that are likely due to chromospheric activity. The other subgroup shows stable modulations over more than a thousand days and thereby provides evidence that the O'Connell effect is not due to stellar spots.

Authors: A.J. Drake, M.J. Graham, S.G. Djorgovski, M. Catelan, A.A. Mahabal, G. Torrealba, D. Garcia-Alvarez, C. Donalek, J.L. Prieto, R. Williams, S. Larson, E. Christensen, V. Belokurov, S.E. Koposov, E. Beshore, A. Boattini, A. Gibbs, R. Hill, R. Kowalski, J. Johnson, F. Shelly

## AAVSO Acronym for the Day: MNF (Wednesday, May 21, 2014 - 13:30)

Today we feature another electronic publication of the AAVSO, MNF – MyNewsFlash.

MyNewsFlash allows you to set up a method of automatically emailing or texting you the current activity of your favorite star, or class of stars. This is a very customizable service. MyNewsFlash can send you the most recent AAVSO variable star observations on whatever stars you choose, at whatever magnitude cutoff you choose, delivered in the format of your choice, at a frequency that you choose.

## Revealing the Complex Outflow Structure of Binary UY Aurigae (Wednesday, May 21, 2014 - 08:27)

Because many stars form together as companions in binary or multiple systems, investigating these systems is essential for understanding star and planet formation. Although jets (i.e., narrow bright streams of gas) and outflows (i.e., less collimated flows of gas) from single young stars are ubiquitous, only a few observations have shown jets or outflows from multiple, low-mass young stars. Therefore, the current team chose to examine the outflow structure of binary UY Aur, which is a close binary system composed of young stars separated by less than an arcsecond (0".89).

UY Aur has a very complicated structure. Both the primary star (UY Aur A, more massive and brighter) and the secondary star (UY Aur B, fainter and cooler) have small circumstellar disks (disks of gas and material orbiting around them). In addition, a circumbinary disk surrounds the two stars. Such disks are difficult to detect, and this is only the second disk of this type that has been resolved and imaged. Receding ("redshifted") jets have been observed, and approaching ("blueshifted") ones have been reported for this system. However, their driving sources are not clear, because the spatial resolution of the images was too low (> one arcsecond).

## AAVSO Acronym of the Day: JAAVSO (Tuesday, May 20, 2014 - 15:41)

Today's acronym is JAAVSO – The Journal of the AAVSO.

The Journal of the American Association of Variable Star Observers (ISSN 0271-9053) is the research publication of the AAVSO. The Journal contains scholarly research articles submitted by members of the AAVSO community on a wide range of topics relevant to the AAVSO and variable star astronomy. The Journal is a refereed publication, open to any and all amateur and professional members of the variable star research and observation community, as well as related scholarly groups such as computer and information scientists, historians, and educators. The Journal is also the primary publication for papers and abstracts presented at AAVSO meetings.

## Fundamental parameters of RR Lyrae stars from multicolour photometry and Kurucz atmospheric models -- III. SW And, DH Peg, CU Com, DY Peg (Monday, May 19, 2014 - 14:44)

We report the most comprehensive UBV(RI)_C observations of the bright, radially pulsating field stars SW And, DH Peg, CU Com, DY Peg. Long term variation has been found in the ultraviolet colour curves of SW And and DH Peg. We apply our photometric-hydrodynamic method to determine the fundamental parameters of these stars: metallicity, reddening, distance, mass, radius, equilibrium luminosity and effective temperature. Our method works well for SW And, CU Com and DY Peg. A very small mass of 0.26±0.04 M_Sun has been found for SW And. The fundamental parameters of CU Com are those of a normal double-mode RR Lyrae (RRd) star. DY Peg has been found to have paradoxical astrophysical parameters: the metallicity, mass and period are characteristic of a high-amplitude Delta Sct star, while the luminosity and radius place it in the group of RR Lyrae stars. DH Peg has been found to be peculiar: the definite instability in the colour curves towards ultraviolet and the dynamical variability of the atmosphere during the shocked phases suggest that the main assumptions of our photometric-hydrodynamic method, the quasi-static atmosphere approximation (QSAA) and the exclusive excitation of radial modes, are probably not satisfied in this star. The fundamental parameters of all stars studied in this series of papers are summarized in tabular and graphical form.

Authors: S. Barcza, J. M. Benkő

## AAVSO Acronym of the Day: CHOICE (Monday, May 19, 2014 - 13:40)

Today's featured acronym is CHOICE – The Carolyn Hurless Online Institute for Continuing Education.

CHOICE is an education and outreach initiative of the AAVSO. It is a collection of informal, online short courses on topics chosen to help members of the AAVSO and others contribute more to science. Most CHOICE courses are four weeks long. They are run via the AAVSO web site using private forums and e-mail at a self-directed pace. One of the unique aspects of this program is that courses are peer taught. That is, graduates of one course have the opportunity to teach the next iteration of that course. Those who successfully complete a course are provided with a certificate and a badge displayed on their web site user profile page. We hope to increase the impact of this program by offering many of the courses to non-members.

## Long-term optical and radio variability of BL Lacertae (Saturday, May 17, 2014 - 17:28)

Well-sampled optical and radio light curves of BL Lacertae in the B, V, R, I bands and at 4.8, 8.0, 14.5 GHz from 1968 to 2014 are presented in this paper. A possible 1.26 ± 0.05 yr period in the optical bands and a 7.50 ± 0.15 yr period in the radio bands were detected based on the discrete correlation function, the structure function, and the Jurkevich method. Correlations among different bands were also analyzed and no reliable time delay was found between optical bands. Very weak correlations were detected between the V band and the radio bands. However, in the radio bands the variation at low frequency clearly lagged that at high frequency. The spectrum of BL Lacertae turned mildly bluer when the object turned brighter, and stronger bluer-when-brighter trends were found for short flares. A scenario including a precessing helical jet and periodic shocks was put forward to interpret the variation characteristics of BL Lacertae.

Authors: Y. C. Guo, S. M. Hu, C. Xu, C. Y. Liu, X. Chen, D. F. Guo, F. Y. Meng, M. T. Xu, J. Q. Xu

## Pulsing Stars Help Map Milky Way's Outer Reaches (Saturday, May 17, 2014 - 13:06)

New observations of five young variable stars reveal a strange thickening in far-flung regions of the Milky Way galaxy.

Known as Cepheid variables, the stars are positioned above and below the plane of the galaxy's disk. That position, combined with the stars' young age, indicates a warp to the arm that was previously suggested by observations of dust, but had not been shown by the presence of stars.

Stars on the other side of the galactic center can be challenging for scientists to observe, as the amount of interstellar dust increases at greater distances. A team of astronomers utilized two telescopes at the South African Astronomical Observatory (SAAO) to determine that the five Cepheid variables lie on the far side of the bulge of material in the heart of the Milky Way, above and below the galactic plane.

## AAVSO Acronym of the Day: VSOTS (Saturday, May 17, 2014 - 12:31)

Today's acronym is VSOTS – Variable Star of the Season.

The "Variable Star of the Season" series of articles originated in 1998 as the "Variable Star of the Month". The articles contain useful and interesting information on individual variable stars. Many of the most popular stars in the AAVSO observing program are featured. From 2002 to 2011, they have all been published as part of the "Variable Star of the Season" collection. Each article is now archived in the VSOTS Archive and available in either .html or .pdf format.

## AAVSO Acronym of the Day: CHET (Friday, May 16, 2014 - 17:53)

CHET – Chart Error Tracking Tool.

CHET was created to keep track of all reported issues with AAVSO variable star charts and sequences, and the chart and sequence team's progress in addressing those issues. Information reported to CHET by observers is used to evaluate and prioritize the work done by the charts and sequences team. When the team makes revisions or corrections, CHET automatically sends an email notification to the observer who reported the issue. The improvement in the quality of charts and sequences, and the speed with which new sequences are created for new transient objects, in the last ten years, is nothing short of amazing.

## AAVSO Acronym of the Day: WebObs (Wednesday, May 14, 2014 - 09:08)

Today's acronym is WebObs – Web-based Observation Submission Tool.

WebObs is the tool 99% of our observers use to submit observations to the AID. You can submit observations one at a time using the individual observation form, or you can upload files containing hundreds or thousands of observations all at once using the file upload feature. Files must conform to the AAVSO File Format Specifications, which are described in detail on the website.

You can also use WebObs to search for observations of any variable star, or search for your own observations, which you can view, edit or delete. It can be used to download your observations for a specific time frame or all the observations you have ever reported to the AID. Registered photoelectric photometrists may also upload unreduced data to WebObs.

## Magnetar Formation Mystery Solved? (Wednesday, May 14, 2014 - 08:57)

Magnetars are the bizarre super-dense remnants of supernova explosions. They are the strongest magnets known in the Universe — millions of times more powerful than the strongest magnets on Earth. A team of European astronomers using ESO's Very Large Telescope (VLT) now believe they've found the partner star of a magnetar for the first time. This discovery helps to explain how magnetars form — a conundrum dating back 35 years — and why this particular star didn't collapse into a black hole as astronomers would expect.

It seems that being a component of a double star may therefore be an essential ingredient in the recipe for forming a magnetar. The rapid rotation created by mass transfer between the two stars appears necessary to generate the ultra-strong magnetic field, and then a second mass transfer phase allows the magnetar-to-be to slim down sufficiently so that it does not collapse into a black hole at the moment of its death.

## Hidden nurseries in the Milky Way (Tuesday, May 13, 2014 - 16:14)

The APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) has revealed an unprecedented number of cold dense clumps of gas and dust as the cradles of massive stars, thus providing a complete view of their birthplaces in the Milky Way. Based on this census, an international team of scientists led by Timea Csengeri from the Max Planck Institute for Radio Astronomy in Bonn has estimated the time scale for these nurseries to grow stars. This has been found to be a very fast process: with only 75,000 years on average, it is much shorter than the corresponding time scales typically found for nurseries of lower mass stars.

Get the pre-print of the research paper: "The ATLASGAL survey: a catalog of dust condensations in the galactic plane," T. Csengeri, J. S. Urquhart, F. Schuller, F. Motte, S. Bontemps, F. Wyrowski, K. M. Menten, L. Bronfman, H. Beuther, Th. Henning, L. Testi, A. Zavagno, M. Walmsley, Astronomy & Astrophysics, Vol. 565, A75 (May 2014).

## AAVSO Acronym for the Day: AAVSOnet (Tuesday, May 13, 2014 - 09:11)

Today's AAVSO Acronym of the Day is AAVSOnet – The AAVSO robotic telescope network.

AAVSOnet is a network of remote, robotically controlled, and automatically queued telescopes for the use of AAVSO members. Several telescopes, located worldwide, are currently active and taking data right now. The entire network is founded on the generosity and dedication of AAVSO volunteers, donors and corporate sponsors.

Even the telescopes in the network have acronyms to describe them!

BSM – Bright Star Monitor, several small (~60mm) telescopes located in Australia and the USA.
C30 – Coker 30cm telescope, located in Sutter Creek, California, donated to the AAVSO by Phillip Coker.
OC61 – Optical Craftsman 61cm telescope, located at the Mount John University Observatory in New Zealand.
SRO – Sonoita Research Observatory, a 50cm telescope located in a private facility in Sonoita, Arizona.
TMO61 – Tortugas Mountain 61cm telescope, part of the New Mexico State University Observatory, located on Tortugas Mountain in New Mexico.
US presidential election: Russia and Iran obtained information on electoral rolls, intelligence director says

Moscow and Tehran "have taken specific actions to influence public opinion in connection with our election," US intelligence director John Ratcliffe said on Wednesday. Threatening messages were sent to some voters.

The US intelligence director accused Russia and Iran on Wednesday (October 21) of having gotten their hands on the data of American voters, and of having taken actions to influence them in the run-up to the presidential election on Tuesday, November 3. Moscow and Tehran "took specific actions to influence public opinion in connection with our election (…) We were able to confirm that information on the electoral rolls had been obtained by Iran and, separately, by Russia," John Ratcliffe said at a press conference.

"This data can be used by foreign actors to attempt to give false information to registered voters, which they hope will create confusion and chaos and undermine confidence in American democracy." – John Ratcliffe, Director of US Intelligence, during a press conference

The announcement was made after Democratic voters said they had received threatening emails addressed to them personally on behalf of the "Proud Boys", a far-right group. The messages ordered them to vote for Donald Trump. "We are in possession of all your information. You are currently registered as a Democrat and we know that because we have access to the entire electoral infrastructure," said one of these emails. "You are going to vote Trump on election day, or you will be dealing with us."

Donald Trump's pet peeve

Iran also sent emails, the intelligence chief said, "aimed at intimidating voters, inciting social unrest and harming President Trump." Tehran released a video suggesting that people could send fraudulent ballots, including from abroad, said John Ratcliffe.
The latter and FBI Director Christopher Wray did not say how Moscow might have used the data, or how Russia and Iran got their hands on the information. Christopher Wray was keen to ensure that the American electoral system remained secure and "resistant". According to US intelligence agencies, Russia interfered in the 2016 US election for the benefit of Donald Trump, whose campaign team was accused of colluding with Moscow.
Q: Why can the owner of a question downvote their accepted answer?

Sorry if that's a basic question; it just doesn't make sense to me.

A: I'm sure there are lots of things I can do on this site that make no sense.

* I can downvote an answer I accept on a question: "This is the best answer, but it's not useful."
* I can randomly upvote or downvote, without knowing the subject matter, or without even reading the question. I can add random noise to the reputation system.
* I can award a bounty on a question, independent of any merit the question might have.
* I can delete a whole bunch of my good answers. I might trigger an answer ban. Just because I like pain.

You can do these things, but why? Furthermore, why should we stop you? Doing any, or all, of these things won't change the direction of the earth's rotation. It won't even affect this site very much.

A: Because every special case adds complexity, both in the code and in the UI. Some pathologies just aren't worth preventing. (For example, if you prevented this, then you would need to provide a UI hint when someone attempts to downvote the answer they accepted, and/or deal with the question on meta, probably more than once.)

A: Here's a plausible use case in which accepting and downvoting would at least not be utterly unreasonable: the answer is broadly correct and solves your problem, but is poorly written, hard to understand, and contains technical inaccuracies that you had to work around. You accept to show that the question has been answered, but vote down and tell the answerer what the problems are, saying you'll vote up once they're fixed.

That said, I don't think anyone deliberately set out to enable the actions I've just described. It's probably allowed, as others have said, because nobody saw any point in banning it.
Q: how to prevent system font-size changing effects to android application?

I recently finished developing my android application. I used sp (scaled pixels) for all textSize. The problem is that when I adjusted the system font size, my application's font sizes changed too. I could use dp (density-independent pixels), but it would take too long to maintain my application. I referenced text size from this. Is there a way to prevent system font-size changes from affecting my application?

A: If you require your text to remain the same size, you'll have to use dp. To quote the documentation:

    An sp is the same base unit, but is scaled by the user's preferred text size (it's a scale-independent pixel), so you should use this measurement unit when defining text size (but never for layout sizes).

Emphasis mine. So you're seeing the expected behaviour for using sp as your units for text size. I don't understand what you mean about using dp taking too long to maintain your app - as far as I can tell, it'll be exactly the same amount of effort? (Perhaps less, though it'll likely make it less usable for users with poor eyesight.)

A: I recently ran into this problem as well. Our UI didn't scale well on phones with limited screen dimensions, and changing the entire UI on the off chance a user sets their Accessibility Options to "Huge" seemed silly. I found this question on StackOverflow to be most helpful. What I did was put the following code in my BaseActivity (an Activity class that all my activities extend from):

    public void adjustFontScale(Configuration configuration) {
        if (configuration.fontScale > 1.30) {
            LogUtil.log(LogUtil.WARN, TAG, "fontScale=" + configuration.fontScale); // Custom Log class, you can use Log.w
            LogUtil.log(LogUtil.WARN, TAG, "font too big. scale down..."); // Custom Log class, you can use Log.w
            configuration.fontScale = 1.30f;
            DisplayMetrics metrics = getResources().getDisplayMetrics();
            WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
            wm.getDefaultDisplay().getMetrics(metrics);
            metrics.scaledDensity = configuration.fontScale * metrics.density;
            getBaseContext().getResources().updateConfiguration(configuration, metrics);
        }
    }

And called it right after my super.onCreate() like so:

    adjustFontScale(getResources().getConfiguration());

What this code does is identify whether the user set their font scale in Accessibility Settings to something greater than 1.30f (1.30f is "Large" on the Note 5, but probably varies a bit from device to device). If the user set their font too large ("Extra Large", "Huge"...), we scale the application only to "Large". This allows your app to scale to a user's preferences (to a degree) without distorting your UI. Hopefully this will help others. Good luck scaling!

Other Tips

If you want certain layouts to scale with your fonts (say, a RelativeLayout that you use as a backdrop against your fonts), you can set their width/height with sp instead of the classic dp. When a user changes their font size, the layout will change accordingly with the fonts in your application. Nice little trick.

A: That's how you do it in 2018 (Xamarin.Android/C# - same approach in other languages):

    public class MainActivity : global::Xamarin.Forms.Platform.Android.FormsAppCompatActivity
    {
        protected override void OnCreate(Bundle bundle)
        {
            ...
        }

        protected override void AttachBaseContext(Context @base)
        {
            var configuration = new Configuration(@base.Resources.Configuration);
            configuration.FontScale = 1f;
            var config = Application.Context.CreateConfigurationContext(configuration);
            base.AttachBaseContext(config);
        }
    }

All you need is to override the attachBaseContext method of the activity and update the config there. getBaseContext().getResources().updateConfiguration() is deprecated, though there are numerous examples with this method. If you use that approach, besides the IDE warning you might find some parts of your app not scaled.

A: None of the previous answers worked for me on Android 8.1 (API 27). Here's what worked: add the following code to your activity.

Kotlin code:

    override fun attachBaseContext(newBase: Context?) {
        val newOverride = Configuration(newBase?.resources?.configuration)
        newOverride.fontScale = 1.0f
        applyOverrideConfiguration(newOverride)
        super.attachBaseContext(newBase)
    }

Java code:

    @Override
    protected void attachBaseContext(Context newBase) {
        final Configuration override = new Configuration(newBase.getResources().getConfiguration());
        override.fontScale = 1.0f;
        applyOverrideConfiguration(override);
        super.attachBaseContext(newBase);
    }

You don't need to change your AndroidManifest.xml.

A: There's another way to prevent app layout / font issues caused by the settings font-size change. You can try:

    // ignore the font scale here
    final Configuration newConfiguration = new Configuration(newBase.getResources().getConfiguration());
    newConfiguration.fontScale = 1.0f;
    applyOverrideConfiguration(newConfiguration);

where newBase is from the attachBaseContext function. You need to override this callback in your Activity. But the side effect is that if you want to use animation (ObjectAnimator/ValueAnimator), it will cause weird behavior.

A: I encountered the same problem and fixed it by changing sp to dp in the .XML file. However, I also needed to fix the text size of the WebViews. Normally, to adjust the text size of a WebView the setDefaultFontSize() function is used. However, its default value unit is sp. In my project, I used setTextZoom(100) to fix the text and icon size of the WebView. (There are other methods on StackOverflow, but almost all of them are deprecated.)

    WebSettings settings = mWebView.getSettings();
    settings.setTextZoom(100);

For further details, see setTextZoom().

A: You can force the text size of your app using the base activity Configuration; make all activities inherit from the base activity. 1.0f will force the app font size to normal, ignoring system settings.

    public void adjustFontScale(Configuration configuration, float scale) {
        configuration.fontScale = scale;
        DisplayMetrics metrics = getResources().getDisplayMetrics();
        WindowManager wm = (WindowManager) getSystemService(WINDOW_SERVICE);
        wm.getDefaultDisplay().getMetrics(metrics);
        metrics.scaledDensity = configuration.fontScale * metrics.density;
        getBaseContext().getResources().updateConfiguration(configuration, metrics);
    }

    @Override
    protected void onCreate(@Nullable Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        adjustFontScale(getResources().getConfiguration(), 1.0f);
    }

A: That's how we do it. In the Application class, override onConfigurationChanged() like this. If you want different behavior for different activities, override onConfigurationChanged() in the Activity instead. Don't forget to add the manifest tag android:configChanges="fontScale", since you are handling this configuration change yourself.

    @Override
    public void onConfigurationChanged(Configuration newConfig) {
        super.onConfigurationChanged(newConfig);
        // In some cases modifying newConfig leads to unexpected behavior,
        // so it's better to edit a new instance.
        Configuration configuration = new Configuration(newConfig);
        SystemUtils.adjustFontScale(getApplicationContext(), configuration);
    }

In some helper class we have the adjustFontScale() method:

    public static void adjustFontScale(Context context, Configuration configuration) {
        if (configuration.fontScale != 1) {
            configuration.fontScale = 1;
            DisplayMetrics metrics = context.getResources().getDisplayMetrics();
            WindowManager wm = (WindowManager) context.getSystemService(Context.WINDOW_SERVICE);
            wm.getDefaultDisplay().getMetrics(metrics);
            metrics.scaledDensity = configuration.fontScale * metrics.density;
            context.getResources().updateConfiguration(configuration, metrics);
        }
    }

WARNING! That will totally ignore the Accessibility Font Scale user setting and will prevent your app fonts from scaling!

A: I tried the answers about disabling fontScale in the whole application. It works, but I've come to think it's a terrible idea for one reason: you don't make your app better for visually impaired people. A better way (I think) is to allow font scaling, but with restrictions only in the places where scaled text can't stay readable.

Realization (10.02.22): After a day of thinking, I created a Kotlin extension for TextView (it may also be used for EditText, because it's a child class):

    fun TextView.removeFontScale() {
        val fontScale = resources.configuration.fontScale
        if (fontScale != 1f) {
            val scaledTextSize = textSize
            val newTextSize = scaledTextSize / fontScale / fontScale
            textSize = newTextSize
        }
    }

You need to divide scaledTextSize twice because after setting a new textSize on the TextView, font scaling is applied again.

UPDATE / FIX (14.02.22): The previous solution doesn't work on real devices, only the emulator (on devices it instead increases the text size). So I found another way:

    val scaledTextSize = textSize
    val newTextSize = scaledTextSize / fontScale
    setTextSize(TypedValue.COMPLEX_UNIT_PX, newTextSize)

Here the new textSize for the TextView will not be re-scaled by the fontScale value after setting.

Use example:

    titleText.removeFontScale()
    subtitleText.removeFontScale()

Logs of it working; sizes at 1.3/1.5 scale become like at 1.0 scale:

Old realization (10.02.22):

    fontScale: 1.0 | oldSize: 20
    fontScale: 1.3 | oldSize: 26 | newSize: 20.0

New realization (14.02.22):

    fontScale: 1.5 | oldSize: 83.0 | newSize: 55.333332 | textSize: 55.333332
    fontScale: 1.5 | oldSize: 58.0 | newSize: 38.666668 | textSize: 38.666668

P.S. I noticed that the standard Toolbar doesn't scale its fonts, and after that I understood that this practice is OK (disable scaling only where it's really needed).

A: I think usage of dp is the best way, but in some cases you may want to use a font style. However, styles use sp; you can convert sp to dp by:

    fun TextView.getSizeInSp() = textSize / context.resources.displayMetrics.scaledDensity
    fun TextView.convertToDpSize() = setTextSize(TypedValue.COMPLEX_UNIT_DIP, getSizeInSp())

So you can use the sp value from the style without dynamic font size, and there is no need to hardcode the font size.
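The arithmetic these answers rely on (the thread's own relation scaledDensity = fontScale * density, and the final fix of dividing the rendered pixel size by fontScale) can be sanity-checked outside Android. This is a language-neutral sketch: `density`, `font_scale`, and `sp_to_px` are made-up illustrative values and names, not Android API code.

```python
# Hypothetical device values (what DisplayMetrics would supply on a device):
density = 2.0       # device pixels per dp
font_scale = 1.5    # the user's accessibility font-scale setting

# Android resolves an sp text size as px = sp * density * fontScale,
# i.e. px = sp * scaledDensity with scaledDensity = density * fontScale.
def sp_to_px(sp):
    return sp * density * font_scale

rendered_px = sp_to_px(20)   # a 20sp TextView rendered at fontScale 1.5
baseline_px = 20 * density   # what the same view renders at fontScale 1.0

# The final fix in the thread: read the scaled pixel size, divide by
# fontScale once, and set it back in raw pixels (COMPLEX_UNIT_PX),
# undoing the user scale for that one view.
unscaled_px = rendered_px / font_scale
```

Dividing once restores the fontScale-1.0 size; dividing twice (the first attempt in the thread) only compensates correctly where setting the size re-applies the scale, which is why it behaved differently on real devices.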
# Why do we need semicolons?

## Recommended Posts

I am building a new language. I am evaluating the need to put semicolons everywhere like in C++. I understand that the semicolons are for terminating an expression. But do we really need them? A parser can determine when an expression is complete, so why do we need semicolons in the first place?

Example of useless semicolons:

    x = 1 + 2;
    y = 3 + 4;

could be written as:

    x = 2 + 2
    y = 3 + 3

There is no operator after the "2" to continue the expression, so why do we need to put semicolons in there? Is there a case where we definitely need to complete an expression with a semicolon?

##### Share on other sites

Sure, they're easy to come up with.

    a = b - c

    a = b * c

    while(a) while(b) c

More subtly, there are other constructions that are unambiguous but, if allowed, would make parsing much more processing-intensive.

As a historical note, the first versions of C made semicolons optional.

##### Share on other sites

You need something to indicate an "end of statement". While it is possible to detect implied sequence points like you presented (where you have two consecutive identifiers without operators), there are some situations where you cannot do it without ambiguity.

Note that you don't strictly need to use a semicolon in C and C++. You could use the comma, or possibly use other sequence-point generators like &&, or in some cases you could use other control elements like ( ) or { }.

##### Share on other sites

Where do we need semicolons in your example?

    a = b - c
    a = b * c

    while(a)
    while(b)
    c

Note that I am not arguing about the use of brackets to disambiguate flow-control statements.

##### Share on other sites

Quote:
    I understand that the semicolons are for terminating an expression. But do we really need them?

No, not really. Usually you're dealing with one statement per line, so using the semicolon to delimit multiple statements on one line, or omitting the semicolon to extend a single statement over multiple lines, is the exception rather than the rule.

I personally prefer the way BASIC handles this, using the colon to delimit multiple statements on one line and a line continuation character to extend a single statement over multiple lines.

    x = 2 + 2
    y = 3 + 3

    x = 2 + 2 : y = 3 + 3

    x = 2 _
    + _
    2

It really just boils down to a question of style, I guess.

##### Share on other sites

Quote:
    You need something to indicate an "end of statement". While it is possible to detect implied sequence points like you presented, there are situations where you cannot. Note that you don't strictly need to use a semicolon in C and C++. You could use the comma, or possibly use other sequence-point generators like &&, or in some cases you could use other control elements like ( ) or { }.

Can you give me an example of such a situation?

There are languages where the "end of statement" symbol is simply whitespace, Lua for instance (it allows the use of a semicolon, but it is not required).

##### Share on other sites

Probably for the same reason natural languages use full stops. You could write without them, and the reader should be able to parse the meaning correctly with some effort (except for some cases that may remain ambiguous).

I was under the impression that computer languages generally require end-of-expression to be signalled. Either use semicolons or end-of-line.

One problem with a completely free-style language might be programmer typos. What if I accidentally forget to type an operator? In C-like languages, the compiler would complain that it expects a semicolon (if that's what you meant). In your language, the compiler might happily interpret the invalid statement as something completely different than what the programmer meant. (I don't know, maybe there are languages of this kind and it's not a problem, or it can be made into a non-issue through careful language design.)

##### Share on other sites

Quote:
    Where do we need semicolons in your example?

Those without operators between them are simple, so consider this one:

    a = b
    *c = a

##### Share on other sites

Quote:
    a = b - c

Means either a=b; -c; or a=b-c;. The second example is similar.

Quote:
    while(a) while(b) c

Means either while(a) while(b) c; or while(a); while(b) c; or while(a);while(b);c;

##### Share on other sites

Quote:
    a = b
    *c = a

One could of course argue that it is a flaw in the language to "overload" the meaning of symbols like that (* is dereference or multiplication or pointer declaration).
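The segmentation ambiguity the replies point at can be made concrete with a toy experiment. This is a sketch under stated assumptions: `segmentations` and its one-`=`-per-statement rule are invented here for illustration and do not model any real language's grammar.

```python
def segmentations(tokens):
    """Every way to cut a flat token stream into statements that a
    permissive, terminator-free toy grammar accepts."""
    def ok(stmt):
        # toy rule: a statement is non-empty and has at most one '='
        return bool(stmt) and stmt.count("=") <= 1

    if not tokens:
        return [[]]
    result = []
    for i in range(1, len(tokens) + 1):
        head, tail = tokens[:i], tokens[i:]
        if ok(head):
            result.extend([head] + rest for rest in segmentations(tail))
    return result

# The thread's example "a = b - c" with newlines thrown away:
parses = segmentations(["a", "=", "b", "-", "c"])
```

Both readings from the replies appear among the results: the single statement `a = b - c` and the pair `a = b` followed by the unary-minus expression `-c`. A parser without statement terminators (or significant newlines) has to disambiguate between such segmentations some other way, which is exactly the extra cost the thread describes.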
Flare may refer to:
Flare (breakdance)
Flare (magazine)
Flare, a countermeasure against surface-to-air and air-to-air missiles; see Air-to-air missile § Countermeasures
Solar flare
Flare (signal)
The Evolved Leader

Real, human stories with practical tips for leaders who want to evolve themselves and their teams, creating an evolved culture. Dr. Laura Gallaher is an organizational psychologist who has been hired by Disney and also NASA to help transform culture. Her company, Gallaher Edge, has helped multiple small businesses grow to successful acquisition.

The Power of Self-Acceptance
by Dr. Laura Gallaher, Dr. Stephanie Lopez

Dr. Stephanie Lopez joins us to discuss everything SELF: self-improvement, self-awareness, self-accountability and, most importantly, SELF-ACCEPTANCE. She shares her personal struggles with a never-ending desire for self-improvement - always trying to accomplish the next thing and never taking time to acknowledge and celebrate achievements along the way. We hear about how she overcame insecurities surrounding the practice of self-acceptance and how it is the antidote to so many of life's challenges. Tune in to find out how to spot signs that you are struggling with accepting yourself ...

When Success Leads to Failure: Lessons from NASA, Part Two
by Dr. Laura Gallaher, Dr. Phillip Meade, Dr. Patrick Simpkins, Chris Singer

In part 2 of this series, we explore our Missing Link Culture Model through the lens of the Space Shuttle Columbia accident, so that you can take valuable lessons from NASA and apply them to your own organization. This episode features former NASA Deputy Chief Engineer Chris Singer, former Director of Engineering at NASA's Kennedy Space Center Dr. Patrick Simpkins, and the former Kennedy Space Center Change Manager and co-author of our new book, The Missing Links: Launching a High Performing Company Culture, Dr. Phillip Meade.

When Success Leads to Failure: Lessons From NASA, Part One

Join me for a special two-part series, to take a behind-the-scenes look at the Space Shuttle Columbia Accident and how it shaped our ideas about culture. I speak with three people with key leadership roles within NASA, both leading up to and following the tragedy: Chris Singer, former NASA Deputy Chief Engineer; Dr. Patrick Simpkins, Director of Engineering at NASA's Kennedy Space Center; and Dr. Phillip Meade, who was asked to lead the culture change initiative at Kennedy following the accident. If improving your company culture and developing your team's communication and collaboration ...

How Radical Collaboration Can Strengthen Your Organization AND Your Relationships
by Dr. Laura Gallaher, Jim Tamm

In this episode, Jim Tamm joins me to discuss how his career as a senior administrative law judge for the state of California inspired him to co-create Radical Collaboration – a powerful program with incredible results, centered around building successful relationships, both professionally and personally. He shares his success with reducing the amount of measurable conflict in nearly 100 organizations by 67% (in merely 3.5 years!) and how it resulted in the state of California creating a non-profit foundation to continue his work. It's an episode you won't want to miss if you're ...

Why Openness Is Essential For Any Startup or Fast-Growing Company
by Dr. Laura Gallaher, Jonathan Taylor

Serial entrepreneur Jonathan Taylor has started seven companies, acquired 18 and sold three, for an undisclosed amount surpassing a quarter of a billion dollars. And yet he was still open to learning more effective ways to run businesses when he did the work of The Human Element. In this episode, he joins me to discuss his surprise at how often he was misunderstood, how openness was the grand simplifier, and how quickly it accelerates trust between people. We also debate whether or not one "needs" to be an asshole in business to succeed. Listen and subscribe, today.

Tayo Rockson on Racism and Communicating Across Cultures, with Impact
by Dr. Laura Gallaher, Tayo Rockson

Cultural translator, storyteller, and activist Tayo Rockson gets real with us about racism. He discusses the importance of seeing it as a whole and not simply looking at it through our own individual lens. Push outside your comfort zone as we talk about self-reflection, tone policing, approaching differing opinions, the power of choice, and more! Be a part of the change. Listen and subscribe.

What is Defensiveness, Anyway?
by Dr. Laura Gallaher

In this episode, we talk to Dr. Stephanie Lopez about the first time she experienced The Human Element. She shares stories and examples with us about her journey of self-awareness. She describes the process of discovering that it was her own insecurity that would sometimes trigger behaviors she wanted to change, like being unknowingly passive-aggressive or thinking critical thoughts about others. Her courage and vulnerability are admirable – listen now to hear for yourself.

Key Insights for Self-Awareness with Dr. Tasha Eurich
by Dr. Laura Gallaher, Dr. Tasha Eurich

I interview Dr. Tasha Eurich, a fellow organizational psychologist, best-selling author, and multiple TEDx speaker. Dr. Eurich's research focuses on self-awareness, and what I love about her is how she takes her research and makes it pragmatic and accessible for those of you who want to understand yourselves better. We talk about some of the powerful take-aways from her book, Insight, which I highly recommend, how to introspect more effectively, what daily practices you can do to create real transformation, and we also talk about whether we really can make the unconscious conscious.

Personality Isn't Permanent
by Dr. Laura Gallaher, Dr. Benjamin Hardy

On this episode, I interview Dr. Benjamin Hardy about his book, Personality Isn't Permanent. We talk about the importance of letting go of who you think you are, and focusing instead on who you want to be. We talk about how you can work through trauma and break through whatever barriers hold you back today.
## Constants

Given that a, b are constants and the limit as x approaches 1 of (a√(x+3) - b)/(x-1) equals 1, find a and b.

No clue how to answer this, but this is what I think I can do:

a) get rid of the sqrt
b) apply the limit

See, the problem is I have two unknowns, so I don't know what to do.

Is this your problem? $$\lim_{x\rightarrow{1}} \frac{a\sqrt{x+3}-b}{x-1}=1$$

Recognitions: Homework Help

If the version posted in LaTeX is correct, note that the denominator goes to zero when x->1. For the quotient to have a finite limit, the numerator must also go to zero when x is set to 1.

You get a simple linear relationship between a and b. ---(1)

Now you can use L'Hopital's Rule to evaluate a limit of the form 0/0. So differentiate both numerator and denominator. You know that the quotient of these two is also going to be 1 at the limit x->1.

So set x = 1 in that. The b term would have vanished, so you can now solve for a. Put that back in equation (1) and work out b, you're done.
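Carrying the helper's two steps to the end (the concrete numbers below are worked out here, not stated in the thread): step 1 forces the numerator to vanish at x = 1, giving 2a - b = 0; step 2, L'Hopital, gives a / (2√(x+3)), which at x = 1 equals a/4 = 1, hence a = 4 and b = 8. A quick numerical check of the original limit:

```python
import math

a, b = 4.0, 8.0  # candidate constants from the two steps above

def quotient(x):
    # the expression whose limit at x = 1 should equal 1
    return (a * math.sqrt(x + 3) - b) / (x - 1)

# Approach x = 1 from both sides; the quotient should tend to 1.
samples = [quotient(1 + h) for h in (1e-3, 1e-6)]
samples += [quotient(1 - h) for h in (1e-3, 1e-6)]
```

Each sample sits within about h/16 of 1, consistent with the limit being exactly 1 for these constants.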
null
null
\section{Introduction}\label{sec:intro} In stochastic differential games, a Nash equilibrium refers to strategies by which no player has an incentive to deviate. Finding a Nash equilibrium is one of the core problems in noncooperative game theory; however, due to the notorious intractability of the $N$-player game, the computation of the Nash equilibrium has been shown to be extremely time-consuming and memory-demanding, especially for large $N$ \cite{DaGoPa:2009}. On the other hand, a rich literature on game theory has been developed to study the consequences of strategies in interactions among a large group of rational ``agents'', {\it e.g.}, systemic risk caused by inter-bank borrowing and lending, price impacts imposed by agents' optimal liquidation, and market price from monopolistic competition. This makes it crucial to develop efficient theory and fast algorithms for computing the Nash equilibrium of $N$-player stochastic differential games. Deep neural networks with many layers have recently been shown to do a great job in artificial intelligence ({\it e.g.}, \cite{Be:09, LeBeHi:15}). The idea behind them is to use compositions of simple functions to approximate complicated ones, and there are approximation theorems showing that a wide class of functions on compact subsets can be approximated by a single hidden layer neural network ({\it e.g.}, \cite{Pinkus:99}). This raises the possibility of solving high-dimensional systems using deep neural networks, and in fact, these techniques have been successfully applied to solve stochastic control problems \cite{HaE:16, deep:2018, deep2:2018}. In this paper, we propose to build deep neural networks using strategies of fictitious play, and develop parallelizable deep learning algorithms for computing the Nash equilibrium of asymmetric $N$-player non-zero-sum stochastic differential games.
We consider a stochastic differential game with $N$ players, where each player $i \in \mathcal{I} := \{1, 2, \ldots, N\}$ has a state process $X_t^i \in \mathbb{R}^d$ and takes an action $\alpha_t^i$ in the control set $A \subset \mathbb{R}^k$. The dynamics of the controlled state process $X_\cdot^i$ on $[0,T]$ are given by \begin{equation}\label{def:Xt:general} \,\mathrm{d} X_t^i = b^i(t, \bm{X}_t, \bm{\alpha}_t) \,\mathrm{d} t + \sigma^i(t, \bm{X}_t, \bm{\alpha}_t) \,\mathrm{d} W_t^i + \sigma^0(t, \bm{X}_t, \bm{\alpha}_t) \,\mathrm{d} W_t^0, \quad X_0^i = x^i, \quad i \in \mc{I}, \end{equation} where $\bm{W} :=[W^0, W^1,\ldots, W^N]$ are $N+1$ independent $m$-dimensional Brownian motions, and $(b^i, \sigma^i)$ are deterministic functions: $[0,T] \times \mathbb{R}^{d\times N} \times A^N \hookrightarrow \mathbb{R}^d \times \mathbb{R}^{d\times m}$. The $N$ dynamics are coupled since all private states $\bm{X}_t = [X_t^1, \ldots, X_t^N]$ and all the controls\footnote {Although in the mathematical finance literature one usually models $b^i$ and $\sigma^i$ as depending only on player $i$'s own action, it is common in the economics literature that player $i$'s private state is also influenced by others' actions, {\it e.g.}, $\alpha_t^i$ is a price set by companies and $X_t^i$ is the production quantity. To be general, we include this feature in our model, which yields \eqref{def:Xt:general}.} $\bm{\alpha}_t = [\alpha_t^1, \ldots, \alpha_t^N]$ affect the drifts $b^i$ and diffusions $\sigma^i$. Each player's control $\alpha_t^i$ lives in the space $\mathbb{A} = \mathbb{H}^2_T(A)$ of progressively measurable $A$-valued processes satisfying the integrability condition: \begin{equation}\label{def:admissible} \mathbb{E}[\int_0^T \abs{\alpha_t^i}^2 \,\mathrm{d} t] < \infty.
\end{equation} Using the strategy $\bm{\alpha} \in \mathbb{A}^N$, the cost associated to player $i$ is of the form: \begin{equation}\label{eq:cost} J^i(\bm{\alpha}) := \mathbb{E}\left[\int_0^T f^i(t, \bm X_t, \bm\alpha_t) \,\mathrm{d} t + g^i(\bm X_T)\right], \end{equation} where the running cost $f^i: [0,T] \times \mathbb{R}^{d \times N}\times A^N \hookrightarrow \mathbb{R}$ and terminal cost $g^i: \mathbb{R}^{d\times N} \hookrightarrow \mathbb{R}$ are deterministic measurable functions. In solving stochastic differential games, the notion of optimality of common interest is the Nash equilibrium. A set of strategies $\bm{\alpha}^\ast = (\alpha^{1,\ast}, \ldots, \alpha^{N,\ast}) \in \mathbb{A}^N$ is called a Nash equilibrium if \begin{equation}\label{def:Nash} \forall i \in \mc{I} \text{ and } \beta^i \in \mathbb{A}, \quad J^i(\bm{\alpha}^\ast) \leq J^i(\beta^i, \bm{\alpha}^{-i,\ast}), \end{equation} where $\bm{\alpha}^{-i,\ast}$ represents strategies of players other than the $i$-th one \begin{equation*} \bm{\alpha}^{-i,\ast} := [\alpha^{1,\ast}, \ldots, \alpha^{i-1,\ast}, \alpha^{i+1,\ast}, \ldots, \alpha^{N,\ast}] \in \mathbb{A}^{N-1}. \end{equation*} In fact, depending on the space where one searches for actions (the information structure available to the players), the types of equilibria include open-loop ($\bm{W}_{[0,t]}$), closed-loop ($\bm{X}_{[0,t]}$), and closed-loop in feedback form ($\bm{X}_t$). We start with the setup \eqref{def:Nash} which corresponds to the open-loop case. Theoretically, it is more tractable, due to the indirect nature ({\it i.e.} player $i$ will not change his strategy when player $j$'s strategy changes because player $i$ can not observe or feel the change). Practically, there are applications falling into this framework, for instance, the prisoner's dilemma from game theory. 
This is the scenario in which two people are arrested and investigated: they are held in solitary confinement and cannot communicate with each other, nor observe each other's choices. In this case, it is reasonable to assume that $\alpha_t^i$ depends neither on the past decisions $\bm \alpha_{[0,t)}$ nor on the players' states $\bm X_{[0,t]}$, as this information is not available under this framework. The generalization of the deep learning theory to closed-loop cases will be discussed in Section~\ref{sec:close:loop}. An alternative method of solving $N$-player stochastic differential games is via mean-field games, introduced by Lasry and Lions in \cite{LaLi1:2006,LaLi2:2006,LaLi:2007} and by Huang, Malham\'{e} and Caines in \cite{HuMaCa:06,HuCaMa:07}. The idea is to approximate the Nash equilibrium by the mean-field equilibrium (the formal limit as $N\rightarrow \infty$) under mild conditions \cite{CaDe:13}, which leads to an approximation error of order $N^{-1/(d+4)}$ assuming that the players are indistinguishable, {\it i.e.}, all coefficients $(b^i, \sigma^i, f^i, g^i)$ are free of $i$. We refer to the books \cite{CaDe1:17,CaDe2:17} and the references therein for further background on mean-field games. However, beyond the case of a continuum of infinitesimal agents with or without major players, the mean-field equilibrium may not be a good approximation in general. In addition, the mean-field game often exhibits multiple equilibria, some of which do not correspond to the limit of the $N$-player game as $N\rightarrow \infty$, {\it e.g.}, in the optimal stopping games \cite{NuMaTa:18}. Moreover, when the number of players is of moderate size ({\it e.g.}, $N \sim 50$), the approximation error made by the mean-field equilibrium is large, while direct solvers based on forward-backward stochastic differential equations (FBSDEs) or on partial differential equations (PDEs) are still computationally unaffordable.
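To make the Nash condition \eqref{def:Nash} concrete, here is a minimal finite-action sketch in Python, checking the no-profitable-deviation property on the classic prisoner's dilemma just described. The cost matrix below is the standard textbook one (years in prison; action 0 = cooperate, 1 = defect) and is not taken from this paper:

```python
import numpy as np

# cost[i, a1, a2] = cost to player i when player 1 plays a1 and player 2 plays a2.
cost = np.array([[[1, 3],   # player 1: (C,C)=1, (C,D)=3, (D,C)=0, (D,D)=2
                  [0, 2]],
                 [[1, 0],   # player 2 (symmetric)
                  [3, 2]]])

def is_nash(a1, a2):
    """A profile is a Nash equilibrium if no unilateral deviation lowers either cost."""
    no_dev_1 = all(cost[0, a1, a2] <= cost[0, b, a2] for b in range(2))
    no_dev_2 = all(cost[1, a1, a2] <= cost[1, a1, b] for b in range(2))
    return no_dev_1 and no_dev_2
```

As expected, mutual defection is the unique equilibrium of this game, even though mutual cooperation gives both players a lower cost.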
Therefore, there is a strong demand for new theory and algorithms for solving the $N$-player game. The idea proposed in this paper is natural and motivated by fictitious play, a learning process in game theory first introduced by Brown in the static case \cite{Br:49, Br:51} and recently adapted to the mean-field case by Cardaliaguet \cite{CaHa:17,BrCa:18} and coauthors. In fictitious play, after some arbitrary initial moves at the first stage, the players myopically choose their best responses against the empirical strategy distribution of others' actions at every subsequent stage. It is hoped that such a learning process will converge and lead to a Nash equilibrium. In fact, Robinson \cite{Ro:51} showed this holds for zero-sum games, and Miyazawa \cite{Mi:61} extended it to $2\times 2$ games. However, Shapley's famous $3\times 3$ counter-example \cite{Sh:64} indicates that this is not always true. Since then, many attempts have been made to identify classes of games where the global convergence holds \cite{MiRo:91,MoSh:96,MoSh2:96,HoMoSe:98, CrAn:03, Be:05, HoSa:02}, and where the process breaks down \cite{Jo:93,MoSe:96,FoYo:98,KrSj:98}, to name a few. Based on fictitious play, we propose a deep learning theory and algorithm for computing open-loop Nash equilibria. Unlike closed-loop strategies in feedback form, which can be reformulated as the solution to $N$ coupled Hamilton-Jacobi-Bellman (HJB) equations via the dynamic programming principle (DPP), open-loop strategies are usually identified through FBSDEs. The existence of explicit solutions to both equations depends highly on the symmetry of the problem; in particular, for most cases where explicit solutions are available, the players are statistically identical. Therefore, an efficient and accurate numerical scheme is crucial for solving such FBSDEs. Traditional methods run into the technical difficulty of the curse of dimensionality, and thus are not feasible when the dimensionality goes beyond 5.
Observing the impressive results achieved by deep learning on various challenging problems \cite{Be:09,KrSuHi:12,LeBeHi:15}, we shall use deep neural networks to overcome the curse of dimensionality for moderately large $N$ and asymmetric games. We first boil down the game into $N$ stochastic control subproblems, which are conditionally independent given past play at each stage. Since we first focus on open-loop equilibria (as opposed to closed-loop ones), in each subproblem the strategies are considered as general progressively measurable processes (as opposed to functions of $(t, \bm{X}_t)$). Therefore, without the feedback effects, one can design a deep neural network to solve the stochastic control subproblems individually. The control at each time step is approximated by a feed-forward subnetwork, whose inputs are the initial states $\bm{X}_0$ and noises $\bm{W}_{[0,t]}$, in accordance with the definition of open-loop equilibria. For player $i$'s control problem, $\bm{X}^{-i}$ is generated using strategies from the past stage, {\it i.e.}, considered as fixed while player $i$ optimizes herself. \medskip \noindent{\bf Main contribution.} The contribution of deep fictitious play is three-fold. Firstly, our algorithm is scalable: in each round of play, the $N$ subproblems can be solved in parallel, which can be accelerated by multi-GPU computing. Secondly, we propose a deep neural network for solving general stochastic control problems where strategies are general processes instead of feedback forms. In the absence of the DPP, algorithms from reinforcement learning are no longer available; we approximate the optimal control directly, in contrast to approximating value functions \cite{Po:07}. Thirdly, the algorithm can be applied to asymmetric games, as for each player there is a corresponding neural network.
\medskip \noindent{\bf Related literature.} Most literature on deep learning and reinforcement learning algorithms for stochastic control problems uses the DPP, with which the problem can be solved backwardly, {\it i.e.}, one finds the optimal control at the terminal time and then decides the previous decisions. Among them, let us mention the recent works \cite{deep:2018, deep2:2018}, which approximate the optimal policy by neural networks in the spirit of deep reinforcement learning, where the approximated optimal policy is obtained in a backward manner. In our algorithm, by contrast, we stack these subnetworks together to form a deep network and train them simultaneously. In fact, our structure is inspired by Han and E \cite{HaE:16}. The difference is that they feed the network with $X_t$, seeking feedback-form controls, while we feed the initial states $X_0$ and noises $W_{[0,t]}$ into each player's network, seeking open-loop Nash equilibria. In terms of using fictitious play to solve multi-agent problems, \cite{HeSi:16,LaZa:17,MgJeCo:18} design reinforcement learning algorithms assuming the system \eqref{def:Xt:general} is {unknown}, while our algorithm requires knowledge of $b^i$, $\sigma^i$, $f^i$ and $g^i$. \medskip \noindent{\bf Organization of the paper.} In Section~\ref{sec:DFP}, we systematically introduce the deep fictitious play theory, and the implementation of deep learning algorithms using Keras with GPU acceleration. In Section~\ref{sec:LQ}, we apply deep fictitious play to linear-quadratic games, and prove the convergence of fictitious play under proper assumptions on the parameters, with the limit forming an open-loop Nash equilibrium. The performance of the deep learning algorithms is presented in Section~\ref{sec:numerics}, where we simulate stochastic differential games with a large number of players ({\it e.g.}, $N=24$). We make concluding remarks, and discuss the extensions to other strategies of fictitious play and to closed-loop cases in Section~\ref{sec:rmk}.
\section{Deep fictitious play}\label{sec:DFP} In this section, we describe the theory and algorithms of deep fictitious play, which, as the name suggests, builds on fictitious play and deep learning. We first summarize all the notation that will be used below. Given a probability space $(\Omega, \mathcal{F}, \mathbb{P})$, we consider \begin{itemize} \item $\bm{W} = [W^0, W^1, \ldots, W^N]$, an $(N+1)$-vector of $m$-dimensional independent Brownian motions; \item $\mathbb{F} = \{\mathcal{F}_t, 0\leq t \leq T\}$, the augmented filtration generated by $\bm{W}$; \item $\mathbb{H}^{2}_T(\mathbb{R}^d)$, the space of all progressively measurable $\mathbb{R}^d$-valued stochastic processes $\alpha: [0,T] \times \Omega \hookrightarrow \mathbb{R}^d$ such that $\ltwonorm{\alpha} = \mathbb{E}[\int_0^T \abs{\alpha_t}^2 \,\mathrm{d} t ] < \infty$. \item $\mathbb{A} = \mathbb{H}^{2}_T(A)$, the space of admissible strategies, {\it i.e.}, elements in $\mathbb{A}$ satisfy \eqref{def:admissible}. $\mathbb{A}^N = \mathbb{A} \times \mathbb{A} \times \ldots \times \mathbb{A}$, the product of $N$ copies of $\mathbb{A}$; \item $\bm{\alpha} = [\alpha^1, \alpha^2, \ldots, \alpha^N]$, a collection of all players' strategy profiles. With a negative superscript, $\bm{\alpha}^{-i} = [\alpha^1, \ldots, \alpha^{i-1}, \alpha^{i+1}, \ldots, \alpha^{N}]$ means the strategy profiles excluding player $i$'s. If a non-negative superscript $n$ appears ({\it e.g.}, $\bm{\alpha}^n$), this $N$-tuple stands for the strategies from stage $n$. When both exist, $\bm{\alpha}^{-i,n} = [\alpha^{1,n}, \ldots, \alpha^{i-1,n}, \alpha^{i+1,n}, \ldots, \alpha^{N,n}]$ is an $(N-1)$-tuple representing the strategies excluding player $i$'s at stage $n$. We use the same notations for other stochastic processes ({\it e.g.}, $\bm{X}^{-i}, \bm{X}^n$); \end{itemize} We assume that the players start with an initial smooth belief $\bm{\alpha}^0 \in \mathbb{A}^N$.
At the beginning of stage $n+1$, $\bm{\alpha}^n$ is observable by all players. Player $i$ then chooses the best response to her beliefs about her opponents, described by their play at the previous stage $\bm{\alpha}^{-i,n}$. Then, player $i$ faces the optimization problem: \begin{equation}\label{def:J:SFP} \inf_{\beta^i \in \mathbb{A}} J^{i}(\beta^i;\bm{\alpha}^{-i,n}), \quad J^{i}(\beta^i; \bm\alpha^{-i,n}) = \mathbb{E}\left[\int_0^T f^i(t,\bm{X}_t^\alpha, (\beta^i_t, \bm{\alpha}_t^{-i,n})) \,\mathrm{d} t + g^i(\bm X_T^\alpha)\right], \end{equation} where $\bm X_t^\alpha = [X_t^{1,\alpha}, X_t^{2,\alpha}, \ldots, X_t^{N,\alpha}]$ are state processes controlled by $(\beta^i, \bm\alpha^{-i,n})$: \begin{align} \,\mathrm{d} X_t^{\ell,\alpha}= b^\ell(t, \bm{X}_t^\alpha, (\beta^i_t, \bm{\alpha}_t^{-i,n})) \,\mathrm{d} t & + \sigma^\ell(t, \bm{X}_t^\alpha, (\beta^i_t, \bm{\alpha}_t^{-i,n})) \,\mathrm{d} W_t^\ell \nonumber \\ &+ \sigma^0(t, \bm{X}_t^\alpha, (\beta^i_t, \bm{\alpha}_t^{-i,n})) \,\mathrm{d} W_t^0, \; X_0^{\ell,\alpha} = x^\ell, \end{align} for all $\ell \in \mc{I}$. Denote by $\alpha^{i,n+1}$ the minimizer in \eqref{def:J:SFP}: \begin{equation}\label{def:SFP} \alpha^{i,n+1} := \argmin_{\beta^i \in \mathbb{A}} J^{i}(\beta^i; \bm{\alpha}^{-i,n}), \quad \forall i \in \mc{I}, n \in \mathbb{N}, \end{equation} and we assume $\alpha^{i,n+1}$ exists throughout the paper. More precisely, $\alpha^{i,n+1}$ is player $i$'s optimal strategy at stage $n+1$ when her opponents' dynamics \eqref{def:Xt:general} evolve according to $\alpha^{j,n}$, $j \neq i$. All players find their best responses simultaneously, which together form $\bm{\alpha}^{n+1}$. \begin{rem} Note that the above learning process is slightly different from the usual simultaneous fictitious play, where the belief is described by the time average of past play: $\frac{1}{n}\sum_{k=1}^n \bm{\alpha}^{-i,k}$. We shall discuss this in more detail in Section~\ref{sec:avg}.
\end{rem} As discussed in the introduction, in general one can not expect that the player's actions always converge. However, if the sequence $\{\bm{\alpha}^{n}\}_{n=1}^\infty$ ever admits a limit, denoted by $\bm\alpha^{\infty}$, we expect it to form an open-loop Nash equilibrium under mild assumptions. Intuitively, in the limiting situation, when all other players are using strategies $\alpha_t^{j,\infty}$, $j \neq i$, by some stability argument, player $i$'s optimal strategy to the control problem \eqref{def:J:SFP} should be $\alpha_t^{i,\infty}$, meaning that she will not deviate from $\alpha_t^{i,\infty}$, which makes $(\alpha_t^{i,\infty})_{i=1}^N$ an open loop equilibrium by definition. Therefore, finding an open-loop Nash equilibrium consists of iterating this play until it converges. We here give an argument under general problem setup using Pontryagin stochastic maximum principle (SMP). For simplicity, we present the case of uncontrolled volatility without common noise: $\sigma^i(t,\bm{x}, \bm{\alpha}) \equiv \sigma^i(t,\bm{x})$, $\forall i \in \mc{I}$, $\sigma^0 \equiv 0$, and refer to \cite[Chapter 1]{CaDe2:17} for generalization. The Hamiltonian $H^{i,n+1}: [0,T] \times \Omega \times \mathbb{R}^{dN} \times \mathbb{R}^{dN} \times A \hookrightarrow \mathbb{R}$ for player $i$ at stage $n+1$ is defined by: \begin{equation} H^{i,n+1}(t,\omega, \bm x, \bm y, \alpha ) = \bm b(t,\bm x,(\alpha, \bm{\alpha}^{-i,n})) \cdot \bm y + f^i(t,\bm x,(\alpha, \bm{\alpha}^{-i,n})), \end{equation} where the dependence on $\omega$ is introduced by $\bm{\alpha}^{-i,n}$. We assume all coefficients $(b^i,\sigma^i, f^i)$ are continuously differentiable with respect to $(\bm x, \bm \alpha) \in \mathbb{R}^{dN} \times A^N$; $g^i$ is convex and continuously differentiable with respect to $\bm x \in \mathbb{R}^{dN}$; $A \in \mathbb{R}^k$ is convex; the function $H^{i,n+1}$ is convex $\mathbb{P}$-almost surely in $(\bm x, \bm \alpha)$. 
By the sufficient part of SMP, we look for a control $\hat\alpha^{i,n+1} \in A$ of the form: \begin{equation}\label{def:alpha:H} \hat \alpha^{i,n+1}(t,\omega, \bm x, \bm y) \in \argmin_{\alpha \in A} H^{i,n+1}(t,\omega, \bm x, \bm y, \alpha), \end{equation} and solve the resulting forward-backward stochastic differential equations (FBSDEs): \begin{equation}\label{def:FBSDE} \left\{ \begin{aligned} \,\mathrm{d} X_t^{\ell, n+1} &= b^\ell(t, \bm{X}_t^{n+1}, (\hat \alpha^{i,n+1}(t, \bm{X}_t^{n+1}, \bm{Y}_t^{n+1}), \bm{\alpha}_t^{-i,n})) \,\mathrm{d} t + \sigma^\ell(t, \bm{X}_t^{n+1}) \,\mathrm{d} W_t^\ell, \\ \,\mathrm{d} Y_t^{\ell,n+1} &= -\partial_{x^\ell} H^{i,n+1}(t,\bm{X}_t^{n+1}, \bm{Y}_t^{n+1}, \hat \alpha^{i,n+1}(t, \bm{X}_t^{n+1}, \bm{Y}_t^{n+1})) \,\mathrm{d} t + \sum_{j=1}^N Z_t^{\ell,j,n+1} \,\mathrm{d} W_t^j, \\ X_0^{\ell,n+1} &= x_0^\ell, \quad Y_T^{\ell,n+1} = \partial_{x^\ell} g^i(\bm X_T^{n+1}), \quad \ell \in \mc{I}. \end{aligned} \right. \end{equation} If there exists a solution $(\bm{X}^{n+1}, \bm{Y}^{n+1}, \bm{Z}^{n+1}) \in H^2_T(\mathbb{R}^{dN} \times \mathbb{R}^{dN} \times \mathbb{R}^{dN \times mN})$, then an optimal control to problem \eqref{def:J:SFP} is given by plugging the solution into the function $\hat \alpha^{i,n+1}$: \begin{equation}\label{def:alpha:general} \alpha^{i,n+1}_t = \hat \alpha^{i,n+1}(t, \bm{X}_t^{n+1}, \bm{Y}_t^{n+1}). \end{equation} Now suppose \eqref{def:FBSDE} is solvable, the sequence given in \eqref{def:alpha:general} converges to $\bm\alpha^\infty$ as $n \to \infty$. Denote by $(\bm{X}^{\infty}, \bm{Y}^{\infty}, \bm{Z}^{\infty})$ the solution of \eqref{def:FBSDE} with $\bm{\alpha}^n$ being replaced by $\bm{\alpha}^\infty$. If the system possesses stability, then $(\bm{X}^{\infty}, \bm{Y}^{\infty}, \bm{Z}^{\infty})$ is also the limit of $(\bm{X}^{n+1}, \bm{Y}^{n+1}, \bm{Z}^{n+1}) $. 
In this case, given other players using $\bm\alpha^{-i,\infty}$, the optimal control of player $i$ is \begin{equation} \hat \alpha^{i,\infty}(t, \bm X_t^{\infty}, \bm Y_t^\infty) = \lim_{n\to \infty} \hat \alpha^{i,n}(t, \bm X_t^{n}, \bm Y_t^n) = \lim_{n \to \infty} \alpha^{i,n} = \alpha^{i,\infty}, \end{equation} where we have used the stability of \eqref{def:FBSDE} and the continuous dependence of $H$ on the parameter $\bm{\alpha}^{-i,n}$ for the first identity, the solvability of \eqref{def:FBSDE} for the second identity, and the convergence of $\alpha^{i,n}$ for the last identity. Therefore, one can put appropriate conditions on $(b^i, \sigma^i, f^i, g^i)$ to ensure these, and we refer to \cite{PeWu:99,PaTa:99,MaMoYo:99,MaWuZhZh:15} for detailed discussions. Remark that all assumptions are satisfied in the case of linear-quadratic games, and thus all the above arguments go through. We will give more details in Section~\ref{sec:LQ}. In general, problem \eqref{def:SFP} is not analytically tractable, and one needs to solve it numerically. Next, we present a novel DNN architecture and a deep learning algorithm with a parallelization feature. We start with a brief introduction to deep learning, followed by the detailed deep fictitious play algorithm. \subsection{Preliminaries on deep learning}\label{sec:nn} Inspired by neurons in human brains, a neural network (NN) is designed for computers to learn from observational data. It has become an effective tool in many fields including computer vision, speech recognition, social network filtering, image analysis, {\it etc.}, where the results produced by NNs are comparable or even superior to those of human experts. An example where NNs perform well is image classification, where the task is to identify which of a set of categories a new observation belongs to, on the basis of a training set of data containing observations of known category membership. Denote by $x$ the observations and $z$ its category.
This problem consists of efficient and accurate learning of the mapping from observations to categories $x \hookrightarrow z(x)$, which can be complicated and non-trivial. Thanks to the universal approximation theorem and the Kolmogorov-Arnold representation theorem \cite{Cy:89,Ko:91,Ho:91}, NNs are able to provide good approximations to non-trivial mappings. Our goal is to use deep neural networks to solve the stochastic control problem \eqref{def:SFP}. NNs are made by stacking layers one on top of another. Layers with different functions or neuron structures are given different names, including the fully-connected layer, convolutional layer, pooling layer, recurrent layer, {\it etc.}. As Algorithm~\ref{def:algorithm} will focus on fully-connected layers, we here give an example of a feed-forward NN using fully-connected layers in Figure~\ref{fig:sampleNN}. Nodes in the figure represent neurons and arrows represent the information flow. As shown, information is constantly ``fed forward'' from one layer to the next. The first layer (leftmost column) is called the input layer, and the last layer (rightmost column) is called the output layer. Layers in between are called hidden layers, as they have no connection with the external world. In this case, there is only one hidden layer with four neurons. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth, height = 0.18\textheight]{feedforward.png} \caption{An illustration of a simple feedforward neural network. }\label{fig:sampleNN} \end{figure} We now explain how information is processed in NNs. For fully-connected layers, every neuron consists of two kinds of parameters, the weights $w$ and the bias $b$. Each layer is equipped with an activation function $f$; an input $x$ passing through a neuron gives $f(w \cdot x + b)$.
In the above example of an NN, the data $\bm{x} = [x_1, x_2, x_3]$ fed to neuron $y_j$ outputs $f(\bm{w}_j \cdot \bm{x} + b_j)$, $j = 1, \ldots, 4$, which yields $\bm{y} = [y_1,y_2, y_3, y_4]$ as the input of neuron $z_1$. The final output is $z_1 = f(\bm{w}_z \cdot \bm{y} + b_z)$. In traditional classification problems, categorical information $z(\bm{x})$ associated to the input $\bm{x}$ is known, and the optimal weights and biases are chosen to minimize a loss function $L$: \begin{equation}\label{def:NNloss} c(w,b) := L(z,z(\bm{x})), \end{equation} where $z$ is the output of the NN, as a function of $(w,b)$, and $z(\bm{x})$ is given from the data. The process of finding the optimal parameters is called the training of an NN. The activation function $f$ and loss function $L$ are chosen at the user's preference, and common choices are sigmoid $\displaystyle\frac{1}{1+e^{-x}}$, ReLU $x^+$ for $f$, and mean squared error $\sum \abs{z - z(\bm x)}^2$ or cross entropy $-\sum z(\bm x) \log(z)$ for $L$ in \eqref{def:NNloss}. Finding the optimal parameters $(\bm{w},\bm{b})$ in \eqref{def:NNloss} is in general a high-dimensional optimization problem, usually solved by various stochastic gradient descent methods ({\it e.g.}, Adam \cite{Adam,Adamcvg}, NADAM \cite{Dozat16}). For further discussions, we refer to \cite[Section 2.1]{Hu:19} and \cite[Section 2.2]{deep:2018}. However, solving \eqref{def:SFP} is not in line with the above procedure, in the sense that there is no target category $z(\bm{x})$ assigned to each input $\bm{x}$, and consequently, the loss function is not a distance measure between the network output $z$ and $z(\bm{x})$. We aim at approximating the optimal strategy at each stage by feedforward NNs. What we actually use is the NN's ability to approximate complex relations by composing simple functions (by stacking fully-connected layers) and to find the (sub-)optimizer with its well-developed built-in stochastic gradient descent (SGD) solvers.
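As a concrete illustration of the forward pass just described, here is a minimal NumPy sketch of the 3-4-1 network of Figure~\ref{fig:sampleNN} with sigmoid activations. The weights and biases below are random placeholders rather than trained parameters:

```python
import numpy as np

def sigmoid(x):
    # Sigmoid activation f(x) = 1 / (1 + e^{-x}).
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, W1, b1, W2, b2):
    """Forward pass of the 3-4-1 network: y = f(W1 x + b1), z = f(W2 y + b2)."""
    y = sigmoid(W1 @ x + b1)   # hidden layer, 4 neurons
    z = sigmoid(W2 @ y + b2)   # output layer, 1 neuron
    return z

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # hidden-layer parameters
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)   # output-layer parameters
z = forward(np.array([1.0, 2.0, 3.0]), W1, b1, W2, b2)
```

Training would then adjust $(W_1, b_1, W_2, b_2)$ by stochastic gradient descent on the chosen loss.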
We shall explain the structures of NNs further in the following section. \subsection{Deep learning algorithms}\label{sec:algorithm} We introduce the deep learning algorithms based on fictitious play by describing their two key parts below. \subsubsection{Part I: solve a stochastic control problem using DNN}\label{sec:DNN} We in fact solve a time-discretized version of problem \eqref{def:SFP}. We partition $[0,T]$ into $N_T$ equally spaced intervals with time step $h = T/N_T$, and denote by $\tilde {\mathbb{F}} := \{\tilde \mathcal{F}_k, 0 \leq k \leq N_T\}$ the ``discretized'' filtration with $\tilde \mathcal{F}_k = \sigma\{\bm{W}_{jh}, 0 \leq j \leq k\}$. A discrete-time analogue of \eqref{def:SFP} is: \begin{equation}\label{def:sc1} \tilde \alpha^{i,n+1} = \argmin_{\{\beta^i_{kh}\in \tilde \mathcal{F}_k\}_{k=0}^{N_T-1}}\tilde J^{i}(\beta^i; \tilde{\bm{\alpha}}^{-i,n}), \end{equation} where \begin{equation}\label{def:sc2} \tilde J^{i}(\beta^i; \tilde{\bm{\alpha}}^{-i,n}):= \mathbb{E}\left[\sum_{k=0}^{N_T-1} f^i\left(kh, \bm X_{kh}, (\beta_{kh}^i, \tilde {\bm{\alpha}}_{kh}^{-i,n})\right) h + g^i(\bm X_T)\right], \end{equation} and each entry $X_{kh}^\ell$ in $\bm{X}_{kh}$ follows the Euler scheme of \eqref{def:Xt:general} associated to the strategy $\beta^\ell$ if $\ell = i$, and to $\tilde \alpha^{\ell,n}$ if $\ell \neq i$: \begin{equation}\label{eq:Xt:discrete} \begin{aligned} X_{(k+1)h}^\ell &= X_{kh}^\ell + b^\ell(kh, \bm X_{kh}, (\beta_{kh}^i, \tilde{ \bm\alpha}_{kh}^{-i,n})) h + \sigma^\ell(kh, \bm X_{kh}, (\beta_{kh}^i, \tilde{ \bm\alpha}_{kh}^{-i,n}))(W^\ell_{(k+1)h}- W^\ell_{kh}) \\ & \quad + \sigma^0(kh, \bm X_{kh}, (\beta_{kh}^i, \tilde{ \bm\alpha}_{kh}^{-i,n})) (W^0_{(k+1)h}- W^0_{kh}), \quad \ell \in \mc{I}. \end{aligned} \end{equation} Remark that the above time discretization uses the Euler scheme, and thus leads to a weak error of $\mathcal{O}({h})$ and a strong error of $\mathcal{O}(\sqrt{h})$.
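For illustration, one step of the Euler scheme \eqref{eq:Xt:discrete} can be sketched as follows. The coefficient functions standing in for $b^\ell$, $\sigma^\ell$, $\sigma^0$ are user-supplied; the mean-reverting drift at the end is only a placeholder example, not a model from this paper:

```python
import numpy as np

def euler_step(k, h, X, alpha, dW, dW0, b, sigma, sigma0):
    """One Euler step of the discretized dynamics: for each player ell,
    X_{(k+1)h}^ell = X_kh^ell + b^ell h + sigma^ell dW^ell + sigma^0 dW^0.
    X: (N, d) states; alpha: actions; dW: (N, m) idiosyncratic increments;
    dW0: (m,) common-noise increment."""
    X_next = np.empty_like(X)
    for ell in range(X.shape[0]):
        X_next[ell] = (X[ell]
                       + b(k, ell, X, alpha) * h
                       + sigma(k, ell, X, alpha) @ dW[ell]
                       + sigma0(k, X, alpha) @ dW0)
    return X_next

# Placeholder coefficients (d = m = 1): mean-reverting drift,
# unit idiosyncratic volatility, no common noise.
b = lambda k, ell, X, alpha: -X[ell]
sigma = lambda k, ell, X, alpha: np.eye(1)
sigma0 = lambda k, X, alpha: np.zeros((1, 1))
```

Iterating this step over $k = 0, \ldots, N_T - 1$ produces one simulated path of all $N$ state processes.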
In the discrete setting, $\beta^i_{kh} \in \tilde \mathcal{F}_k$ is interpreted as $\beta^i_{kh} = \beta^i_{kh} (\bm{X}_0, \bm{W}_h, \ldots, \bm{W}_{kh})$. Our task is to approximate the functional dependence of the control on noises. Similar to the strategy used in \cite{HaE:16}, we implement this by a multilayer feedforward sub-network: \begin{equation}\label{def:strategy:approx} \beta^i_{kh} \sim \beta^i_{kh} (\bm{X}_0, \bm{W}_h, \ldots, \bm{W}_{kh}\vert \theta_{kh}^i), \end{equation} where $\theta_{kh}^i$ denotes the collection of all weights and biases in the $k^\text{th}$ sub-network for player $i$. Then, at stage $n+1$, the optimization problem for player $i$ becomes \begin{equation}\label{def:J:discrete} \min_{{\left\{\theta_{kh}^{i}\right\}_{k=0}^{N_T-1}} } \mathbb{E}\left[\sum_{k=0}^{N_T-1} f^i\left(kh, \bm X_{kh}, (\beta_{kh}^i(\theta_{kh}^i), \tilde {\bm{\alpha}}_{kh}^{-i,n})\right) h + g^i\left(\bm X_T\right)\right]. \end{equation} Denote by $\theta_{kh}^{i,n+1}$ the minimizer of \eqref{def:J:discrete}, then the approximated optimal strategy $\tilde \alpha^{i,n+1}$ is given by \eqref{def:strategy:approx} evaluated at $\theta_{kh}^{i,n+1}$. Note that even though we only write explicitly the dependence of $\beta^i$'s on $\theta^i$, it affects all $X^i$'s through interactions \eqref{eq:Xt:discrete}. In fact, $X^\ell_{kh}$ depends on $\{\theta_0^{i,n+1}, \ldots, \theta_{(k-1)h}^{i,n+1}\}$, for all $\ell \in \mc{I}$. Therefore, finding the gradient in minimizing \eqref{def:J:discrete} is a non-trivial task. Thanks to the key feature of NNs, computation can be done via a forward-backward propagation algorithm derived from chain rule composition \cite{Ni:online}. 
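In practice, the expectation in \eqref{def:J:discrete} is replaced by an empirical average over simulated sample paths. A schematic sketch, assuming the states and actions have already been simulated and that $f^i$, $g^i$ are user-supplied vectorized functions:

```python
import numpy as np

def empirical_cost(f_i, g_i, X_paths, actions, h):
    """Monte Carlo estimate of the discretized cost:
    X_paths: (M, N_T+1, ...) simulated states along M sample paths;
    actions: (M, N_T, ...) the actions along each path; h: time step.
    Returns the average of sum_k f^i(kh, X_kh, a_kh) h + g^i(X_T)."""
    M, NT = actions.shape[:2]
    run = sum(f_i(k * h, X_paths[:, k], actions[:, k]) for k in range(NT)) * h
    term = g_i(X_paths[:, NT])
    return float(np.mean(run + term))
```

Minimizing this empirical average over the subnetwork parameters $\{\theta_{kh}^i\}$ by stochastic gradient descent is exactly the training step described above.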
The architecture of the NN for finding $\tilde \alpha^{i,n+1}$ is presented in Figure~\ref{fig:algorithm}: ``InputLayer'' denotes the inputs of the network; ``Rcost'' and ``Tcost'', representing the running and terminal costs, contribute to the total cost $J^i$; ``Sequential'' is a multilayer feedforward subnetwork for the control approximation at each time step; ``Concatenate'' is an auxiliary layer combining some of the previous layers as inputs to ``Sequential''. \begin{figure}[H] \centering \includegraphics[width=\textwidth]{model.png} \caption{Illustration of the network architecture for problem \eqref{def:J:discrete} with $N_T = T = 3$.}\label{fig:algorithm} \end{figure} There are three main kinds of information flows in the network for each period $[kh, (k+1)h]$, $k = 0, \ldots, N_T-1$: \begin{enumerate} \item $\text{State}_{kh} := (\bm{X}_0, \bm{W}_h, \ldots, \bm{W}_{kh}) \to \beta^i_{kh}$ given by the ``Sequential'' layer. It is an $L$-layer feed-forward subnetwork approximating the control of player $i$ at time $kh$, containing the parameters $\theta_{kh}^i$ to be optimized. \item $(\bm{X}_{kh}, \beta_{kh}^i, \bm{\alpha}_{kh}^{-i,n}, \,\mathrm{d} \bm{W}_{(k+1)h} := \bm{W}_{(k+1)h} - \bm{W}_{kh} ) \to \bm{X}_{(k+1)h}$ given by the ``Rcost'' layer. This layer serves two functions. Firstly, it computes the running cost at time $kh$ using $(\bm{X}_{kh}, \beta_{kh}^i, \tilde{\bm{\alpha}}^{-i,n}_{kh})$, where $\beta_{kh}^i$ is produced by the previous step. The cost is then added to the final output. Secondly, it updates the state values $\bm{X}_{(k+1)h}$ via the dynamics \eqref{eq:Xt:discrete}, using $\beta_{kh}^i$ for player $i$ and $\bm{\alpha}_{kh}^{-i,n}$ for players $j \neq i$, which are inputs of the network. No parameter is minimized at this layer. \item $(\text{State}_{kh}, \,\mathrm{d} \textbf{W}_{(k+1)h}) \to \text{State}_{(k+1)h}$ given by the ``Concatenate'' layer. This layer combines the two previous ones, acting as a preparation for the input of the ``Sequential'' layer.
No parameter is minimized at this layer. \end{enumerate} At time $T = N_T\times h$, the terminal cost is calculated using $\bm{X}_{T}$ and added to the final output via the ``Tcost'' layer. With these preparations, we introduce the deep fictitious play algorithm below. \subsubsection{Part II: find an equilibrium by fictitious play}\label{sec:FP} Here we describe the deep fictitious play algorithm in pseudocode (see Algorithm~\ref{def:algorithm}). \begin{algorithm}[H] \caption{Deep Fictitious Play for Finding Nash Equilibrium \label{def:algorithm}} \begin{algorithmic}[1] \REQUIRE $N$ = \# of players, $N_T$ = \# of subintervals on $[0,T]$, $M$ = \# of training paths, $M'$ = \# of out-of-sample paths for final evaluation, $\bm{\alpha}^0 = \{\alpha_{kh}^{i,0} \in A \subset \mathbb{R}^k, i \in \mc{I}\}_{k=0}^{N_T-1}$ = initial belief, $\bm{X}_0 = \{x_0^i \in \mathbb{R}^d, i \in \mc{I}\}$ = initial states \STATE Create $N$ separate deep neural networks as described in Section~\ref{sec:DNN} \STATE Generate $M$ sample paths of BM: $\bm{W} = \{W_{kh}^{i} \in \mathbb{R}^m, i \in \mc{I}\cup \{0\} \}_{k=1}^{N_T} $ \STATE $n \gets 0$ \REPEAT \STATE $n \gets n+1$ \FOR{$i \gets 1$ to $N$} \STATE (Continue to) Train $i^{th}$ NN with data $\{\bm{X}_0, \bm\alpha^{-i,n-1} = \{\alpha_{kh}^{j,n-1}, j \in \mc{I}\setminus \{i\}\}_{k=0}^{N_T-1}, \bm W\}$ \STATE Obtain the approximated optimal strategy $\alpha^{i,n}$ and cost $J^i(\alpha^{i,n}; \bm\alpha^{-i,n-1})$ \ENDFOR \STATE Collect optimal policies at stage $n$: $\bm{\alpha}^n \gets (\alpha^{1,n}, \ldots, \alpha^{N,n})$ \STATE \label{algo:step}Compute relative change of cost $\displaystyle err^n := \max_{i \in \mc{I}}\left\{\frac{\abs{J^i(\alpha^{i,n}; \bm\alpha^{-i,n-1}) - J^i(\alpha^{i,n-1}; \bm{\alpha}^{-i,n-2})}}{ J^i(\alpha^{i,n-1}; \bm{\alpha}^{-i,n-2})}\right\}$ \UNTIL$err^n$ goes below a threshold \STATE Generate $M'$ out-of-sample paths of BM for final evaluation \STATE $n' \gets 0$ \REPEAT \STATE $n' \gets n'+1$ \STATE Evaluate
the $i^{th}$ NN with \{$\bm X_0$, $\bm{\alpha}^{-i,n'-1}$, out-of-sample paths\}, $\forall i \in \mc{I}$ \STATE Obtain $\alpha^{i,n'}$ and $J^{i,n'} := J^i(\alpha^{i,n'};\bm{\alpha}^{-i, n'-1})$ $\forall i \in \mc{I}$ \UNTIL $J^{i,n'}$ converges in $n'$, $\forall i \in \mc{I}$ \RETURN The optimal policy $\alpha^{i,n'}$, and the final cost for each player $J^{i,n'}$ \end{algorithmic} \end{algorithm} \subsection{Implementation} {\it Computing environment.} Algorithm~\ref{def:algorithm}, described in Section~\ref{sec:FP}, is implemented in Python using the high-level neural network API Keras \cite{Keras}. Numerical examples will be presented in Section~\ref{sec:numerics}. All experiments are performed using Amazon EC2 services, which provide a variety of instances for accelerated computing. All computations use NVIDIA K80 GPUs with 12GiB of GPU memory on a Deep Learning Amazon Machine Image running on Ubuntu 16.04. {\it Parallelizability.} As $N$ grows large, to keep the computation manageable, one can distribute Steps 5--9 across several GPUs. That is, each available GPU is assigned the task of training a subset of the neural networks, and this subset is fixed from stage to stage. This speeds up the computation significantly, as peer-to-peer GPU communications are not needed in the designed algorithm. {\it Input, output and parameters for neural networks.} Before training, we sample $\bm{W} = \{W_{kh}^{i} \in \mathbb{R}^m, i \in \mc{I}\cup\{0\} \}_{k=1}^{N_T}$, which, together with the initial states $\bm{X}_0$ and the initial belief $\bm{\alpha}^0 = \{\alpha_{kh}^{i,0} \in A \subset \mathbb{R}^k, i \in \mc{I}\}_{k=0}^{N_T-1}$, are the inputs of the NNs. Adam, a variant of SGD that adaptively estimates lower-order moments, is chosen to optimize the parameters $\{\theta_{kh}^i\}_{k=0}^{N_T-1}$. The hyper-parameters of the Adam solver follow the original paper \cite{Adam}. Regarding the architecture of ``Sequential'', it is an $L$-layered subnetwork.
We set $L=4$, with 1 input layer, 2 hidden layers, and 1 output layer containing $k$ nodes. The rectified linear unit (ReLU) is chosen as the activation for the hidden layers, while no activation is applied to the output layer. We also add Batch Normalization \cite{batch} to the hidden layers before activation. This method performs the normalization for each training mini-batch to eliminate the internal covariate shift phenomenon, and thus frees us from delicate parameter initialization. It also acts as a regularizer, in some cases eliminating the need for Dropout. Note that the choice of $L$ and the sizes of $\{\theta_{kh}^i\}_{k=0}^{N_T-1}$ are empirical. For testing problems that have benchmark solutions, one can use grid search to select the architecture with the best performance on a validation set. However, for real problems there is no universal rule for all problem settings. Parameters of the network are initialized at Step 1. In Step 7, training continues from the previous stage without re-initialization. This is because, although opponents' policies change from stage to stage, they will not vary significantly, and parameter values from the previous stage should be better than a random initialization. For a fixed computational budget, instead of using the stopping criterion in Step 12, one can terminate the loop once $n$ reaches a predetermined upper bound $\bar n$. In Step 7, the number of epochs used to train the model at every single stage does not need to be large (at the scale of hundreds). This is because we are not aiming at a one-time accurate approximation of the optimal policy. Especially in the first few rounds, when opponents' policies are far from optimal, pursuing an accurate approximation is not meaningful. Instead, by using a small budget to obtain moderate accuracy at each iteration, we are able to repeat the game more times. In summary, of the two computational schemes (large $\bar n$ with few epochs per stage, and small $\bar n$ with many epochs per stage), the former is better.
If the opponents' policies stay the same from stage to stage, then the two schemes achieve the same accuracy. This is justified by the following argument: suppose the opponents' policies stay the same; then player $i$ essentially faces the same optimization problem from stage to stage. Since we do not re-initialize network parameters in Step 7, the difference between the two schemes reduces to training the same problem with few epochs over many rounds \emph{vs.} many epochs over few rounds. These are equivalent in terms of SGD training, and thus should lead to the same relative error. In reality, the opponents' policies are updated from stage to stage, and the former scheme enables us to obtain player $i$'s reaction under a more up-to-date belief about her opponents. Steps 15--19 are not computationally costly, and the value functions usually converge after several iterations in our numerical study. \section{Linear-Quadratic games}\label{sec:LQ} Although the deep fictitious play theory and algorithm can be applied to any $N$-player game, the proof of convergence is in general hard. Here we consider a special case of linear-quadratic symmetric $N$-player games, and analyze the convergence of $\bm{\alpha}^{n}$ defined in \eqref{def:SFP}. The strategy analyzed here will provide an open-loop Nash equilibrium, as proved at the end of the section. We follow the linear-quadratic model proposed in \cite{CaFoSu:15}, where the players' dynamics interact through their empirical mean: \begin{equation} \,\mathrm{d} X_t^i = [a(\overline X_t - X_t^i) + \alpha_t^i ] \,\mathrm{d} t + \sigma \left( \rho \,\mathrm{d} W_t^0 + \sqrt{1-\rho^2} \,\mathrm{d} W_t^i\right), \quad X_0^i = x^i, \quad \overline X_t = \frac{1}{N}\sum_{i=1}^N X_t^i. \label{def:Xt} \end{equation} Here $\{W_t^i, 0 \leq i \leq N\}$ are independent standard Brownian motions (BMs).
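A minimal Euler simulation of the dynamics \eqref{def:Xt} can be sketched as follows; the parameter values and the zero-control default are illustrative assumptions.

```python
import numpy as np

def simulate_states(N=10, N_T=100, T=1.0, a=1.0, sigma=1.0, rho=0.2,
                    alpha=None, x0=None, seed=0):
    """Euler scheme for dX^i = [a(Xbar - X^i) + alpha^i] dt
                              + sigma (rho dW^0 + sqrt(1-rho^2) dW^i).
    alpha: optional (N_T, N) array of controls; zero control if omitted."""
    rng = np.random.default_rng(seed)
    h = T / N_T
    X = np.zeros(N) if x0 is None else np.asarray(x0, dtype=float).copy()
    path = [X.copy()]
    for k in range(N_T):
        dW0 = rng.normal(0.0, np.sqrt(h))            # common noise W^0
        dW = rng.normal(0.0, np.sqrt(h), N)          # idiosyncratic noises W^i
        al = np.zeros(N) if alpha is None else alpha[k]
        X = X + (a * (X.mean() - X) + al) * h \
              + sigma * (rho * dW0 + np.sqrt(1.0 - rho**2) * dW)
        path.append(X.copy())
    return np.array(path)                            # shape (N_T + 1, N)

# sanity check of the mean-reversion term: with sigma = 0 the spread
# around the empirical mean contracts deterministically at rate a
path = simulate_states(N=4, sigma=0.0, x0=[1.0, 2.0, 3.0, 4.0])
```

With $\sigma = 0$, the deviations $X_t^i - \overline X_t$ decay at rate $a$, which gives a quick check that the drift is implemented consistently with \eqref{def:Xt}.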
Each player $i \in \{1, 2, \ldots, N\}$ controls the drift by $\alpha_t^i$ in order to minimize the cost functional \begin{equation} J^i(\alpha^1, \ldots, \alpha^N) = \mathbb{E}\left\{ \int_0^T f^i(\bm{X}_t, \alpha_t^i) \,\mathrm{d} t + g^i(\bm{X}_T)\right\}, \label{def:J} \end{equation} with the running cost defined by \begin{equation} f^i(\bm{x},\alpha) = \frac{1}{2} \alpha^2 - q \alpha(\bar x - x^i) + \frac{\epsilon}{2}(\bar x - x^i)^2, \quad \bar x = \frac{1}{N}\sum_{j=1}^N x^j, \end{equation} and the terminal cost function $g^i$ by \begin{equation} g^i(\bm{x}) = \frac{c}{2}(\bar x - x^i)^2. \end{equation} All parameters $a, \epsilon, c, q$ are non-negative, and $q^2 \leq \epsilon$ is imposed so that $f^i(\bm{x}, \alpha)$ is convex in $(\bm{x}, \alpha)$. In \cite{CaFoSu:15}, $X_t^i$ is viewed as the log-monetary reserves of bank $i$ at time $t$. For further interpretation, we refer to \cite{CaFoSu:15}. In the spirit of fictitious play, the $N$-player game is recast as $N$ individual optimal control problems played iteratively. The players start with a smooth belief of their opponents' actions $\bm{\alpha}^0$. At stage $n+1$, the players have observed the same past controls $\alpha^{i,n}$'s, and then each player optimizes her control problem individually, assuming the other players will follow their choices at stage $n$. That is, for player $i$'s problem, her dynamics are controlled through $\alpha^i_t$, while other players' states evolve according to the past strategies $\bm{\alpha}^{-i,n}$: \begin{align} \,\mathrm{d} X_t^{i,n+1} &= [a(\overline X_t^{n+1} - X_t^{i,n+1}) + \alpha_t^i] \,\mathrm{d} t + \sigma (\rho \,\mathrm{d} W_t^0 + \sqrt{1-\rho^2} \,\mathrm{d} W_t^i),\label{def:Xti:lq} \\ \,\mathrm{d} X_t^{j, n+1} & = [a(\overline X_t^{n+1} - X_t^{j, n+1}) + \alpha_t^{j,n}] \,\mathrm{d} t + \sigma (\rho \,\mathrm{d} W_t^0 + \sqrt{1-\rho^2} \,\mathrm{d} W_t^j), \quad j \neq i.
\label{def:Xtj:lq} \end{align} Player $i$ faces an optimal control problem: \begin{equation}\label{def:J:lq} \begin{aligned} &\inf_{\alpha^i \in \mathbb{A}} J^{i,n+1}(\alpha^i; \bm{\alpha}^{-i,n}), \text{ where } \\ & J^{i,n+1}(\alpha^i;\bm{\alpha}^{-i,n}) := \mathbb{E}\left\{\int_0^T \frac{1}{2} (\alpha_t^i)^2 - q\alpha_t^i (\overline X_t^{n+1} - X_t^{i,n+1}) + \frac{\epsilon}{2}(\overline X_t^{n+1} - X_t^{i,n+1})^2 \,\mathrm{d} t \right.\\ &\hspace{150pt}+ \frac{c}{2}(\overline X_T^{n+1} - X_T^{i,n+1})^2\Bigg\}. \end{aligned} \end{equation} The space where we search for the optimal $\alpha^i$ is the space $\mathbb{A} := \mathbb{H}^2_T(\mathbb{R})$ of square-integrable progressively measurable $\mathbb{R}$-valued processes, to be consistent with open-loop equilibria. Denote by $\alpha^{i,n+1}$ the minimizer of this control problem at stage $n+1$: \begin{equation}\label{def:alphaast} \alpha^{i,n+1} := \argmin_{\alpha^i \in \mathbb{A}} J^{i,n+1}(\alpha^i; \bm{\alpha}^{-i,n}). \end{equation} In what follows, we shall show: \begin{enumerate}[(a)] \item $\alpha^{i,n+1}$ exists $\forall i \in \mc{I}, n \in \mathbb{N}$, that is, the minimal cost in \eqref{def:J:lq} is always attainable; \item the family $\{\bm{\alpha}^n\}$ converges; \item the limit of $\bm\alpha^n$ forms an open-loop Nash equilibrium. \end{enumerate} \subsection{The probabilistic approach} Observing that the cost functional $J^{i,n+1}$ in \eqref{def:J:lq} depends solely on the process $\widetilde X^{i,n+1} := \overline X^{n+1} - X^{i,n+1}$ and the control $\alpha^i$, we make the following simplification. Notice that \eqref{def:Xti:lq} and \eqref{def:Xtj:lq} imply \begin{equation}\label{def:Xttilde} \,\mathrm{d} \widetilde X_t^{i, n+1} =\left[\frac{\sum_{j\neq i} \alpha_t^{j,n}}{N} - \frac{N-1}{N}\alpha_t^i - a \widetilde X_t^{i,n+1}\right] \,\mathrm{d} t + \sigma\sqrt{1-\rho^2}(\frac{1}{N}\sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i).
\end{equation} Then, player $i$'s problem is equivalent to: \begin{equation} \inf_{\alpha^i \in \mathbb{A}}\mathbb{E}\left\{\int_0^T \frac{1}{2} (\alpha_t^i)^2 - q\alpha_t^i \widetilde X_t^{i, n+1} + \frac{\epsilon}{2} (\widetilde X_t^{i,n+1})^2 \,\mathrm{d} t + \frac{c}{2}(\widetilde X_T^{i,n+1})^2\right\}. \end{equation} In what follows, we show the existence of a unique minimizer, denoted by $\alpha^{i,n+1}$, using the SMP. The Hamiltonian for player $i$ at stage $n+1$ reads \begin{equation} H^{i,n+1}(t,\omega, x,y,\alpha) = (\frac{\sum_{j\neq i} \alpha^{j,n}_t}{N}-\frac{N-1}{N}\alpha - ax)y + \frac{1}{2} \alpha^2 - q\alpha x + \frac{\epsilon}{2}x^2. \end{equation} For a given admissible control $\alpha^i \in \mathbb{A}$, the adjoint processes $(Y_t^{i,n+1}, Z_t^{i,j,n+1}, 0 \leq j \leq N)$ satisfy the backward stochastic differential equation (BSDE): \begin{equation}\label{def:Yt} \,\mathrm{d} Y_t^{i,n+1} = -[-aY_t^{i,n+1} - q\alpha_t^i + \epsilon \widetilde X_t^{i,n+1}]\,\mathrm{d} t + \sum_{j=0}^N Z_t^{i,j,n+1} \,\mathrm{d} W_t^j, \end{equation} with the terminal condition $Y_T^{i,n+1} = c\widetilde X_T^{i,n+1}$. Standard results on BSDEs \cite{PaPe:90}, together with the estimates on the controlled state $\widetilde X_t^{i,n+1}$, guarantee the existence and uniqueness of the adjoint processes. The Pontryagin SMP suggests the form of the optimizer: \begin{equation}\label{def:alpha} \partial_\alpha H^{i,n+1} = 0 \iff \hat\alpha = qx + \frac{N-1}{N}y.
\end{equation} Plugging this candidate into the system \eqref{def:Xttilde}-\eqref{def:Yt} produces a system of affine FBSDEs: \begin{equation}\label{eq:FBSDE} \left\{ \begin{aligned} \,\mathrm{d} \widetilde X_t^{i, n+1} &=\left[\frac{\sum_{j\neq i} \alpha_t^{j,n}}{N} - (a + (1-\frac{1}{N})q) \widetilde X_t^{i,n+1} - (1-\frac{1}{N})^2 Y_t^{i,n+1}\right] \,\mathrm{d} t \\ &\qquad + \sigma\sqrt{1-\rho^2}(\frac{1}{N}\sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i), \\ \,\mathrm{d} Y_t^{i,n+1} &= -[-(a + (1-\frac{1}{N})q)Y_t^{i,n+1} + (\epsilon-q^2) \widetilde X_t^{i,n+1}]\,\mathrm{d} t + \sum_{j=0}^N Z_t^{i,j,n+1} \,\mathrm{d} W_t^j, \\ \widetilde X_0^{i,n+1} &= \overline x_0 - x_0^i, \quad Y_T^{i,n+1} = c\widetilde X_T^{i,n+1}. \end{aligned} \right. \end{equation} The sufficient condition of the SMP implies that once we solve \eqref{eq:FBSDE}, the optimal control is obtained by plugging its solution into equation \eqref{def:alpha}. In fact, the coefficients satisfy the $G$-monotone property in \cite{PeWu:99}, thus the system is uniquely solvable in $\mathbb{H}^2_T(\mathbb{R} \times \mathbb{R} \times \mathbb{R}^{N+1}) $, and the resulting optimal control is indeed admissible. This answers question (a). For the other two questions, we need to further analyze \eqref{eq:FBSDE}.
Note that the system can be decoupled via the ansatz: \begin{equation}\label{def:Ytansatz} Y_t^{i,n+1} = K_t \widetilde X_t^{i,n+1} - \psi_t^{i,n+1}, \end{equation} where $K_t$ satisfies the Riccati equation: \begin{equation}\label{def:Kt} \dot K_t = 2(a + (1-\frac{1}{N})q) K_t + (\frac{N-1}{N})^2 K_t^2 - (\epsilon - q^2), \quad K_T = c, \end{equation} and the decoupled processes $(\widetilde X_t^{i,n+1}, \psi_t^{i,n+1}, \phi_t^{i,j,n+1}, 0 \leq j \leq N)$ satisfy: \begin{equation}\label{eq:FBSDE:decouple} \left\{ \begin{aligned} \,\mathrm{d} \widetilde X_t^{i, n+1} &=\left[\frac{\sum_{j\neq i} \alpha_t^{j,n}}{N} - \gamma_t \widetilde X_t^{i,n+1} + (1-\frac{1}{N})^2 \psi_t^{i,n+1}\right] \,\mathrm{d} t \\ & \qquad + \sigma\sqrt{1-\rho^2}(\frac{1}{N}\sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i), \\ \,\mathrm{d} \psi_t^{i,n+1} &= -[-\gamma_t \psi_t^{i,n+1} - K_t \frac{\sum_{j\neq i} \alpha_t^{j,n}}{N}]\,\mathrm{d} t + \sum_{j=0}^N \phi_t^{i,j,n+1} \,\mathrm{d} W_t^j, \\ \widetilde X_0^{i,n+1} &= \overline x_0 - x_0^i, \quad \psi_T^{i,n+1} = 0, \end{aligned} \right. \end{equation} where $\gamma_t$ is a deterministic function on $[0,T]$: \begin{equation}\label{def:gamma} \gamma_t = a + (1-\frac{1}{N})q + (1-\frac{1}{N})^2K_t, \end{equation} and the optimal strategy is expressed as \begin{equation}\label{eq:alpha} \alpha_t^{i,n+1} = (q + (1-\frac{1}{N})K_t) \widetilde X_t^{i,n+1} - (1-\frac{1}{N})\psi_t^{i,n+1}. \end{equation} Again, since $\bm\alpha^n \in \mathbb{H}_T^2(\mathbb{R}^N)$, the existence and uniqueness of $(\psi^{i,n+1}, \phi^{i,j,n+1}, 0\leq j \leq N) \in \mathbb{H}^2_T(\mathbb{R} \times \mathbb{R}^{N+1})$ are guaranteed $\forall i \in \mc{I}$, $n \in \mathbb{N}$, and the forward equation possesses a unique strong solution.
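Since $K_t$ enters both the decoupled system above and the convergence condition stated below, it is worth noting that the terminal-value problem \eqref{def:Kt} is easy to integrate numerically. The sketch below uses an explicit Euler scheme stepping backward from $t=T$; the parameter values are illustrative assumptions.

```python
import numpy as np

def solve_riccati(N, a, q, eps, c, T=1.0, n_steps=2000):
    """Backward Euler integration of the Riccati equation (def:Kt):
    K'(t) = 2(a + r q) K + r^2 K^2 - (eps - q^2),  K(T) = c,  with r = 1 - 1/N."""
    h = T / n_steps
    r = 1.0 - 1.0 / N
    K = np.empty(n_steps + 1)
    K[-1] = c
    for k in range(n_steps, 0, -1):        # integrate from t = T back to t = 0
        dK = 2.0 * (a + r * q) * K[k] + r**2 * K[k]**2 - (eps - q**2)
        K[k - 1] = K[k] - h * dK
    return K

# illustrative parameters; here dK/dt > 0, so K_t is increasing on [0, T]
N, a, q, eps, c, T = 10, 1.0, 0.0, 1.0, 1.0, 1.0
K = solve_riccati(N, a, q, eps, c, T)
r = 1.0 - 1.0 / N
gamma = a + r * q + r**2 * K               # the function gamma_t from (def:gamma)
K_bar, K_low = K.max(), K.min()            # extreme values of K_t on [0, T]
```

On the grid, the extreme values of $K_t$ (denoted $\overline K$ and $\underline K$ in the sequel) are simply the maximum and minimum of the computed array.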
Then the triple $(\widetilde X^{i,n+1},Y^{i,n+1},Z^{i,j,n+1})$ solves the original FBSDEs \eqref{eq:FBSDE} with $Y_t^{i,n+1}$ defined by \eqref{def:Ytansatz} and $Z_t^{i,j, n+1}$ by \begin{align} Z_t^{i,0,n+1} = -\phi_t^{i,0,n+1}, \quad Z_t^{i,j,n+1} = -\phi_t^{i,j,n+1} + K_t\sigma \sqrt{1-\rho^2}(\frac{1}{N} - \delta_{i,j}), \quad j \in \mc{I}. \end{align} To answer questions (b) and (c), we state the main theorem of this section, with the proofs presented in the next subsections. \begin{thm}\label{thm:cvg} For linear-quadratic games, the family $\{\bm \alpha^n\}_{n \in \mathbb{N}}$ defined in \eqref{def:J:lq}-\eqref{def:alphaast} converges if \begin{equation}\label{def:cvgcondition} \frac{1-e^{-2T\underline \gamma}}{\underline \gamma}C < 1. \end{equation} It forms an open-loop Nash equilibrium of the original problem \eqref{def:Xt}-\eqref{def:J}. Moreover, the limit, denoted by $\bm\alpha^\infty$, is independent of the choice of initial belief $\bm\alpha^0$. Here $\underline \gamma = a + (1-\frac{1}{N})q + (1-\frac{1}{N})^2 \underline K$, $\overline K$ and $\underline K$ are the maximum and minimum values of $K_t$ on $[0,T]$, and the constant $C$ is \begin{equation}\label{def:C} C = (1-\frac{1}{N})^2 \left((1-\frac{1}{N})^2 \overline K^2 + (q + (1-\frac{1}{N})\overline K)^2 \left(\frac{1-e^{-2T\underline \gamma}}{\underline \gamma}(1-\frac{1}{N})^4\overline K^2 + 2\right)\right). \end{equation} \end{thm} \begin{remark} The condition \eqref{def:cvgcondition} is sufficient but not necessary, and the proposed algorithm can perform better than it suggests. In Section~\ref{sec:numerics}, the parameters are chosen so that the condition is violated, yet the algorithm still converges fast, illustrating that the condition is only sufficient. By observing the form of $C$ and $\underline \gamma$, we remark that the convergence rate decreases in the number of players $N$.
\end{remark} \begin{proposition}\label{prop:case} The following three classes of parameters satisfy condition \eqref{def:cvgcondition}: \begin{enumerate}[(i)] \item\label{case:t} Small time duration, that is, $T$ is small. \item\label{case:a} Strong mean-reversion rate, {\it i.e.}, $a$ is large. \item\label{case:c} Small terminal cost and small incentive to borrow or lend, that is, $c$ and $q$ are small. Also the ``remaining'' running cost of the state process\footnote{The running cost $f^i(\bm{x}, \alpha)$ can be rewritten as $f^i(\bm x, \alpha) = \frac{1}{2} (\alpha - q(\bar x - x^i))^2 + \frac{1}{2} (\epsilon - q^2)(\bar x - x^i)^2$; therefore, it can be interpreted as penalizing the control for deviating from $q(\bar x - x^i)$, borrowing or lending proportionally to the difference from the average with a rate $q$, as well as penalizing the distance from the average with weight $\epsilon - q^2$. } is small, i.e., $\epsilon - q^2$ is small. \end{enumerate} \end{proposition} \begin{proof} We first notice that the solution to \eqref{def:Kt} is smooth and monotone on $[0,T]$: for this autonomous Riccati equation, the sign of $\dot K_t$ is constant and agrees with the sign of its terminal value \begin{equation} \dot K_T = 2c(a + (1-\frac{1}{N})q) + c^2(1-\frac{1}{N})^2 - (\epsilon - q^2). \end{equation} So $\overline K = \max\{c, K_0\}$ and $\underline K = \min\{c, K_0\}$. Also, when $\dot K_t >0$, $K_0$ is bounded below by $\frac{-(\epsilon - q^2)-c\delta^+}{\delta^- - c(1-\frac{1}{N})^2}$; otherwise, when $K_t$ is decreasing, $K_0$ is bounded above by the same quantity $\frac{-(\epsilon - q^2)-c\delta^+}{\delta^- - c(1-\frac{1}{N})^2}$, where \begin{equation} \delta^\pm = -(a + (1-\frac{1}{N})q) \pm \sqrt R, \quad R = (a + (1-\frac{1}{N})q)^2 + (1-\frac{1}{N})^2 (\epsilon - q^2). \end{equation} Then case \eqref{case:t} follows from the fact that $C$ has an upper bound that is free of $T$. For $a$ sufficiently large, $K_t$ is increasing and $\overline K = c$.
Then $C$ has an upper bound (uniformly in $a$), and case \eqref{case:a} follows from $\frac{1 - e^{-2T\underline \gamma}}{\underline \gamma} < \frac{1}{a}$. Under case \eqref{case:c}, $\overline K$ is sufficiently small, thus $C$ is small and the factor is less than 1. \end{proof} \subsection{Proof of convergence} This section proves Theorem~\ref{thm:cvg}. Define $\Delta \zeta_t^{i,n} := \zeta_t^{i,n+1} - \zeta_t^{i,n}$ as the difference from stage $n$ to $n+1$ for the $i^{th}$ player, for $\zeta = \alpha, \psi, \phi, \widetilde X$. Using equation \eqref{eq:FBSDE:decouple}, the increment in $\psi$ satisfies: \begin{align} \,\mathrm{d} \Delta \psi_t^{i,n} = -[-\gamma_t \Delta \psi_t^{i,n} - \frac{K_t}{N} \sum_{j \neq i} \Delta \alpha_t^{j,n-1}] \,\mathrm{d} t + \sum_{j=0}^N \Delta \phi_t^{i,j,n}\,\mathrm{d} W_t^j, \quad \Delta \psi_T^{i,n} = 0, \end{align} whose solution is: \begin{equation} \Delta \psi_t^{i,n} = \mathbb{E}\left[ \int_t^T - \frac{K_s}{N}\sum_{j \neq i} \Delta \alpha_s^{j,n-1} e^{\int_s^t \gamma_u \,\mathrm{d} u} \,\mathrm{d} s \Bigg\vert \mathcal{F}_t\right].
\end{equation} By Jensen's inequality, one deduces: \begin{align} \ltwonorm{\Delta \psi^{i,n}}^2 &\leq \int_0^T \mathbb{E}\left[\int_t^T \frac{K_s^2}{N^2}\left(\sum_{j \neq i} \Delta \alpha_s^{j,n-1}\right)^2 e^{2\int_s^t \gamma_u \,\mathrm{d} u} \,\mathrm{d} s\right] \,\mathrm{d} t \\ &\leq \frac{\overline{K}^2}{N^2} \int_0^T \int_t^T \mathbb{E}\left(\sum_{j \neq i} \Delta \alpha_s^{j,n-1}\right)^2 e^{2(t-s)\underline{\gamma}} \,\mathrm{d} s \,\mathrm{d} t\\ & = \frac{\overline{K}^2}{N^2} \int_0^T \mathbb{E}\left(\sum_{j \neq i} \Delta \alpha_s^{j,n-1}\right)^2 \frac{1-e^{-2s\underline{\gamma}}}{2\underline{\gamma}} \,\mathrm{d} s \\ &\leq \frac{\overline{K}^2}{N^2}\frac{1-e^{-2T\underline{\gamma}}}{2\underline{\gamma}} (N-1)^2 \max_{j \neq i } \int_0^T \mathbb{E}[\Delta \alpha_s^{j,n-1}]^2 \,\mathrm{d} s \\ & \leq \frac{1-e^{-2T\underline{\gamma}}}{2\underline{\gamma}}(1-\frac{1}{N})^2 \overline{K}^2 \max_{i \in \mc{I}} \ltwonorm{\Delta\alpha^{i,n-1}}^2, \end{align} where $\underline{\gamma} = a + (1-\frac{1}{N})q + (1-\frac{1}{N})^2 \underline K$. Since the RHS of the above inequality is independent of $i$, taking maximum over $\mc{I}$ yields \begin{equation}\label{eq:psi:ineq} \max_{i \in \mc{I}} \ltwonorm{\Delta \psi^{i,n}}^2 \leq \frac{1-e^{-2T\underline{\gamma}}}{2\underline{\gamma}}(1-\frac{1}{N})^2 \overline{K}^2 \max_{i \in \mc{I}} \ltwonorm{\Delta\alpha^{i,n-1}}^2. 
\end{equation} Similarly, the dynamics of $\Delta \widetilde X_t^{i,n}$ can be derived from \eqref{eq:FBSDE:decouple}: \begin{equation} \,\mathrm{d} \Delta \widetilde X_t^{i,n} = [\frac{1}{N} \sum_{j \neq i} \Delta\alpha_t^{j,n-1} - \gamma_t \Delta \widetilde X_t^{i,n} + (1-\frac{1}{N})^2 \Delta \psi_t^{i,n}]\,\mathrm{d} t, \quad \Delta\widetilde X_0^{i,n} = 0, \end{equation} which admits the solution: \begin{equation} \Delta \widetilde X_t^{i,n} = \int_0^t \left(\frac{1}{N} \sum_{j \neq i} \Delta\alpha_s^{j,n-1} + (1-\frac{1}{N})^2 \Delta \psi_s^{i,n}\right) e^{-\int_s^t \gamma_u \,\mathrm{d} u}\,\mathrm{d} s. \end{equation} We next give an upper bound on the increment of the forward process $\Delta \widetilde X_\cdot^{i,n}$: \begin{align} \ltwonorm{\Delta \widetilde X^{i,n}}^2 &\leq \int_0^T \int_0^t \mathbb{E} \left(\frac{1}{N} \sum_{j \neq i} \Delta\alpha_s^{j,n-1} + (1-\frac{1}{N})^2 \Delta \psi_s^{i,n}\right)^2 e^{-2\int_s^t \gamma_u \,\mathrm{d} u}\,\mathrm{d} s \,\mathrm{d} t \\ & \leq 2\int_0^T \int_0^t \left( \mathbb{E} [\frac{1}{N} \sum_{j \neq i} \Delta\alpha_s^{j,n-1}]^2 + (1-\frac{1}{N})^4 \mathbb{E}[ \Delta \psi_s^{i,n}]^2\right) e^{-2(t-s)\underline\gamma} \,\mathrm{d} s \,\mathrm{d} t \\ & \leq 2\int_0^T \left( \mathbb{E} [\frac{1}{N} \sum_{j \neq i} \Delta\alpha_s^{j,n-1}]^2 + (1-\frac{1}{N})^4 \mathbb{E}[ \Delta \psi_s^{i,n}]^2\right) \frac{1-e^{-2(T-s)\underline \gamma}}{2\underline \gamma} \,\mathrm{d} s \\ & \leq \frac{1-e^{-2T\underline \gamma}}{\underline \gamma} \left( (1-\frac{1}{N})^2 \max_{j \neq i} \ltwonorm{\Delta \alpha^{j,n-1}}^2 + (1-\frac{1}{N})^4 \ltwonorm{\Delta \psi^{i,n}}^2\right).
\end{align} Again, taking the maximum over $\mc{I}$ on both sides yields: \begin{equation}\label{eq:Xttilde:ineq} \max_{i \in \mc{I}} \ltwonorm{\Delta \widetilde X^{i,n}}^2 \leq \frac{1-e^{-2T\underline \gamma}}{\underline \gamma} \left( (1-\frac{1}{N})^2 \max_{i \in \mc{I}} \ltwonorm{\Delta \alpha^{i,n-1}}^2 + (1-\frac{1}{N})^4 \max_{i \in \mc{I}} \ltwonorm{\Delta \psi^{i,n}}^2\right). \end{equation} Recalling from \eqref{eq:alpha} that the increment in the strategy can be decomposed as \begin{equation} \Delta \alpha_t^{i,n} = (q + (1-\frac{1}{N})K_t) \Delta \widetilde X_t^{i,n} - (1-\frac{1}{N}) \Delta \psi_t^{i,n}, \end{equation} and combining this with the estimates \eqref{eq:psi:ineq} and \eqref{eq:Xttilde:ineq}, we obtain: \begin{align} \max_{i \in \mc{I}} \ltwonorm{\Delta \alpha^{i,n}}^2 & \leq 2(q + (1-\frac{1}{N})\overline K)^2 \max_{i \in I} \ltwonorm{\Delta \widetilde X^{i,n}}^2 + 2(1-\frac{1}{N})^2 \max_{i \in \mc{I}} \ltwonorm{\Delta \psi^{i,n}}^2\\ & \leq \frac{1-e^{-2T\underline{\gamma}}}{\underline{\gamma}} C \max_{i \in \mc{I}} \ltwonorm{\Delta \alpha^{i,n-1}}^2, \end{align} where $C$ is the constant given in \eqref{def:C}. Under condition \eqref{def:cvgcondition}, the mapping $\Delta \bm{\alpha}^{n-1} \mapsto \Delta\bm{\alpha}^{n}$ is a contraction. Therefore, the proposed learning process converges in the linear-quadratic game. Denote the limit of $\{\bm{\alpha}^n\}$ by $\bm\alpha^\infty = [\alpha^{1,\infty}, \ldots, \alpha^{N,\infty} ]$, where the learning process starts with an initial belief $\bm{\alpha}^0$. Let $(\widetilde X_t^{i,\alpha}, \psi_t^{i,\alpha}, \phi_t^{i,\alpha})$ be the solution to the decoupled system \eqref{eq:FBSDE:decouple} with $\{{\alpha}^{j,n}, j \in \mc{I}\setminus\{i\}\}$ replaced by $\{{\alpha}^{j,\infty}, j \in \mc{I}\setminus\{i\}\}$.
On one hand, this corresponds to the problem of identifying player $i$'s best strategy while the others use $\bm\alpha^{-i,\infty}$, and her best choice is \begin{equation} (q + (1-\frac{1}{N})K_t) \widetilde X_t^{i,\alpha} - (1-\frac{1}{N})\psi_t^{i,\alpha}. \end{equation} On the other hand, by stability theorems (e.g. \cite[Theorem 3.4.2, Theorem 4.4.3]{Zh:17}), this triple $(\widetilde X_t^{i,\alpha}, \psi_t^{i,\alpha}, \phi_t^{i,\alpha})$ is also the $L^2$ limit of $(\widetilde X_t^{i,n}, \psi_t^{i,n}, \phi_t^{i,n})$. Therefore, letting $n \to \infty$ in equation \eqref{eq:alpha} gives \begin{equation}\label{eq:lmtrelation} \alpha^{i,\infty} = (q + (1-\frac{1}{N})K_t) \widetilde X_t^{i,\alpha} - (1-\frac{1}{N})\psi_t^{i,\alpha}. \end{equation} Hence, the best response for player $i$ is $\alpha^{i,\infty}$, given that the others play $\bm{\alpha}^{-i,\infty}$, indicating that the limit $\bm\alpha^\infty$ forms an open-loop Nash equilibrium. It remains to prove that the limit is independent of the initial belief. Suppose that there exist two limits $\bm{\alpha}^\infty$ and $\bm{\beta}^\infty$ arising from two distinct initial beliefs $\bm{\alpha}^0$ and $\bm{\beta}^0$, and let $(\widetilde X_t^{i,\beta}, \psi_t^{i,\beta}, \phi_t^{i,\beta})$ be the solution to \eqref{eq:FBSDE:decouple} associated with $\bm{\beta}^\infty$.
Following derivations similar to those in the proof of convergence gives: \begin{align} &\max_{i\in\mc{I}}\ltwonorm{\psi^{i,\alpha} - \psi^{i,\beta}}^2 \leq \frac{1-e^{-2T\underline{\gamma}}}{2\underline{\gamma}}(1-\frac{1}{N})^2 \overline{K}^2 \max_{i \in \mc{I}} \ltwonorm{\alpha^{i,\infty} - \beta^{i,\infty}}^2, \\ &\max_{i \in \mc{I}} \ltwonorm{ \widetilde X^{i,\alpha} - \widetilde X^{i,\beta}}^2 \nonumber \\ &\qquad \leq \frac{1-e^{-2T\underline \gamma}}{\underline \gamma} \left( (1-\frac{1}{N})^2 \max_{i \in \mc{I}} \ltwonorm{\alpha^{i,\infty} - \beta^{i,\infty}}^2 + (1-\frac{1}{N})^4 \max_{i \in \mc{I}} \ltwonorm{\psi^{i,\alpha} - \psi^{i,\beta}}^2\right). \end{align} Combining the above inequalities, and using \eqref{eq:lmtrelation} for both $\alpha^{i,\infty}$ and $\beta^{i,\infty}$, we deduce: \begin{equation} \max_{i \in \mc{I}}\ltwonorm{ \alpha^{i,\infty} - \beta^{i,\infty}}^2 \leq \frac{1-e^{-2T\underline{\gamma}}}{\underline{\gamma}} C \max_{i \in \mc{I}} \ltwonorm{ \alpha^{i,\infty} - \beta^{i,\infty}}^2. \end{equation} Under condition \eqref{def:cvgcondition}, the prefactor is strictly less than one, which forces $\bm{\alpha}^\infty = \bm{\beta}^\infty$ in the $L^2$ sense. Therefore, we have shown that, independent of the initial belief, the fictitious play will converge and the limit is unique. \subsection{Identifying the limit} As proved in Theorem~\ref{thm:cvg}, the limiting strategy $\bm\alpha^\infty$ forms an open-loop Nash equilibrium, and in this section, we verify by direct calculation that it coincides with the equilibrium provided in \cite{CaFoSu:15}.
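The direct calculation in this subsection hinges on the fact that $K_t - \eta_t$ solves the linear ODE \eqref{def:F} stated below. As a numerical sanity check (with illustrative parameters; not part of the proof), one can integrate the two Riccati equations \eqref{def:Kt} and \eqref{def:eta} backward and verify that the residual of \eqref{def:F} at $F = K - \eta$ is numerically zero:

```python
import numpy as np

def backward_euler(rhs, terminal, T=1.0, n=20000):
    """Integrate y'(t) = rhs(y) backward from y(T) = terminal on a uniform grid."""
    h = T / n
    y = np.empty(n + 1)
    y[-1] = terminal
    for k in range(n, 0, -1):
        y[k - 1] = y[k] - h * rhs(y[k])
    return y

# illustrative parameters with q > 0 so that the identity is nontrivial
N, a, q, eps, c, T = 10, 1.0, 0.5, 1.0, 1.0, 1.0
r = 1.0 - 1.0 / N
K = backward_euler(lambda y: 2*(a + r*q)*y + r**2 * y**2 - (eps - q**2), c, T)
eta = backward_euler(lambda y: 2*(a + (1 - 1/(2*N))*q)*y + r * y**2 - (eps - q**2), c, T)

F = K - eta
gamma = a + r*q + r**2 * K             # gamma_t from (def:gamma)
kappa = a + q + r*eta                  # kappa_t from (def:kappa)
t = np.linspace(0.0, T, K.size)
Fdot = np.gradient(F, t)               # numerical derivative of F
resid = Fdot - (F * (kappa + gamma) - (K / N) * (q + r*eta))   # residual of (def:F)
```

The residual is at the level of the discretization error, consistent with $F(t) = K_t - \eta_t$ being the unique solution of \eqref{def:F}.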
Recall from \cite{CaFoSu:15} that the open-loop Nash equilibrium of the original $N$-player problem \eqref{def:Xt}--\eqref{def:J} is: \begin{equation}\label{def:OLE} \alpha_t^{i,\ast} = [q + (1-\frac{1}{N})\eta_t] (\overline X_t^\ast- X_t^{i,\ast}), \end{equation} where $X_t^{i,\ast}$ is the solution to \eqref{def:Xt} associated with $\alpha_t^{i,\ast}$, $\overline X_t^\ast$ is the average of the $X_t^{i,\ast}$, and $\eta_t$ solves a Riccati equation: \begin{equation}\label{def:eta} \dot \eta_t = 2(a + (1-\frac{1}{2N})q)\eta_t + (1-\frac{1}{N})\eta_t^2 - (\epsilon - q^2), \quad \eta_T = c. \end{equation} Note that the expression \eqref{def:OLE} means the open-loop equilibrium happens to be expressed as a function of the equilibrium states; it is not a closed-loop feedback equilibrium. To be more precise, plugging \eqref{def:OLE} into \eqref{def:Xt} yields \begin{equation} \,\mathrm{d} (\overline X_t^{\ast} - X_t^{i,\ast}) = -[a + q + (1-\frac{1}{N})\eta_t] (\overline X_t^{\ast} - X_t^{i,\ast}) \,\mathrm{d} t + \sigma \sqrt{1-\rho^2} \left(\frac{1}{N} \sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i\right). \end{equation} Thus, $\alpha_t^{i,\ast}$ is indeed $\mc{F}_t$-measurable. To avoid further confusion in the sequel, we denote by $\Xi_t^{i}$ the solution to the above SDE; then \begin{equation}\label{eq:OLE} \alpha_t^{i,\ast} = [q + (1-\frac{1}{N})\eta_t] \Xi_t^i, \end{equation} and $\Xi_t^i$ is the unique strong solution to the SDE: \begin{equation}\label{eq:Xi} \,\mathrm{d} \Xi_t^i = -\kappa_t\Xi_t^i \,\mathrm{d} t + \sigma \sqrt{1-\rho^2} \left(\frac{1}{N} \sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i\right), \quad \Xi_0^i = \overline x_0 - x_0^i, \end{equation} with \begin{equation}\label{def:kappa} \kappa_t = a + q + (1-\frac{1}{N})\eta_t. \end{equation} Two properties regarding $\Xi_t^i$ will be used in the sequel: firstly, $\sum_{i=1}^N \Xi_t^i = 0$, $\forall t \in [0,T]$.
This is straightforward by deriving the SDE for $\overline \Xi_t$ via summing \eqref{eq:Xi} over $i \in \mc{I}$, and using $\overline \Xi_0 = 0$. Consequently, we also have $\sum_{i=1}^N \alpha_t^{i,\ast} = 0$, $\forall t \in [0,T]$. Secondly, $\displaystyle e^{\int_0^t \kappa_u \,\mathrm{d} u} \Xi_t^{i}$ is a martingale, which follows from the SDE \eqref{eq:Xi} and the boundedness of $\eta_t$ on $[0,T]$. We next verify that the limit $\alpha^{i,\infty}$ coincides with \eqref{eq:OLE}, by showing that the optimal control of the problem \eqref{def:J:lq} is $\alpha^{i,\ast}$ when the other players follow $\alpha^{j,\ast}$, $j \neq i$, and by the uniqueness of the limit under condition \eqref{def:cvgcondition}. Denote by $(\widetilde X_t^{i,\ast}, \psi_t^{i,\ast}, \phi_t^{i,\ast})$ the solution to the FBSDEs \eqref{eq:FBSDE:decouple} with $\alpha^{j,n}$ replaced by $\alpha^{j,\ast}$, $j \in \mc{I} \setminus \{i\}$. Essentially, the problem is to show that player $i$'s optimal response, represented via the solution of the FBSDEs as $(q + (1-\frac{1}{N})K_t)\widetilde X_t^{i,\ast} - (1-\frac{1}{N})\psi_t^{i,\ast}$, matches her Nash strategy $\alpha^{i,\ast}$. Note that this is not a fixed-point argument as usually seen in mean-field games, since only $\bm{\alpha}^{-i,\ast}$ is needed to solve for $(\widetilde X_t^{i,\ast}, \psi_t^{i,\ast}, \phi_t^{i,\ast})$. We first solve $\psi^{i,\ast}$ from the backward process in \eqref{eq:FBSDE:decouple}.
The BSDE is of affine form, and thus possesses a unique solution: \begin{align} \psi_t^{i,\ast} & = \mathbb{E}\left[ \int_t^T - \frac{K_s}{N}\sum_{j \neq i} \alpha_s^{j,\ast} e^{\int_s^t \gamma_u \,\mathrm{d} u} \,\mathrm{d} s \Bigg\vert \mathcal{F}_t\right] = \mathbb{E}\left[ \int_t^T \frac{K_s}{N} \alpha_s^{i,\ast} e^{\int_s^t \gamma_u \,\mathrm{d} u} \,\mathrm{d} s \Bigg\vert \mathcal{F}_t\right]\\ & = \mathbb{E}\left[ \int_t^T \frac{K_s}{N} [q + (1-\frac{1}{N})\eta_s]\Xi_s^i e^{\int_s^t \gamma_u \,\mathrm{d} u} \,\mathrm{d} s \Bigg\vert \mathcal{F}_t\right] \\ & = \int_t^T \frac{K_s}{N} [q + (1-\frac{1}{N})\eta_s]\Xi_t^i e^{-\int_t^s \kappa_u + \gamma_u\,\mathrm{d} u} \,\mathrm{d} s \\ & := F(t) \Xi_t^i. \end{align} The function $F(t)$ satisfies \begin{equation}\label{def:F} \dot F(t) = F(t)(\kappa_t + \gamma_t) - \frac{K_t}{N}(q + (1-\frac{1}{N})\eta_t), \quad F(T) = 0, \end{equation} where $\gamma_t$ and $\kappa_t$ are given by \eqref{def:gamma} and \eqref{def:kappa} respectively, and $\eta_t$ solves \eqref{def:eta}. Note that \eqref{def:F} is a first-order linear ordinary differential equation (ODE) with smooth coefficients, which admits a unique solution by standard ODE theory. A straightforward calculation shows that $K_t - \eta_t$ solves \eqref{def:F}; thus $\psi_t^{i,\ast} = (K_t - \eta_t)\Xi_t^i$. Now, to solve the forward equation for $\widetilde X_t^{i,\ast}$, we first calculate \begin{align} \frac{\sum_{j \neq i}\alpha^{j,\ast}}{N} + (1-\frac{1}{N})^2 \psi_t^{i,\ast} &= -\frac{\alpha^{i,\ast}}{N} + (1-\frac{1}{N})^2(K_t - \eta_t) \Xi_t^i \\ & = (-\frac{q}{N} + (1-\frac{1}{N})^2K_t - (1-\frac{1}{N})\eta_t) \Xi_t^i, \end{align} therefore \begin{align} \,\mathrm{d} \widetilde X_t^{i,\ast} &= [(-\frac{q}{N} + (1-\frac{1}{N})^2K_t - (1-\frac{1}{N})\eta_t) \Xi_t^i - \gamma_t \widetilde X_t^{i,\ast}] \,\mathrm{d} t \\ & \qquad + \sigma\sqrt{1-\rho^2}\left(\frac{1}{N}\sum_{j=1}^N \,\mathrm{d} W_t^j - \,\mathrm{d} W_t^i\right).
\end{align} Comparing it to \eqref{eq:Xi}, one deduces $ \widetilde X_t^{i,\ast} = \Xi_t^i$. Therefore, player $i$'s optimal response to her opponents' strategies $\bm{\alpha}^{-i,\ast}$ is \begin{align} (q + (1-\frac{1}{N})K_t)\widetilde X_t^{i,\ast} - (1-\frac{1}{N})\psi_t^{i,\ast} &= (q + (1-\frac{1}{N})K_t)\Xi_t^i - (1-\frac{1}{N})(K_t - \eta_t)\Xi_t^i \\ &= (q + (1-\frac{1}{N})\eta_t) \Xi_t^i \equiv \alpha_t^{i,\ast}, \end{align} which implies that the limit of fictitious play gives an open-loop Nash equilibrium in the linear quadratic case. \section{Numerical experiments}\label{sec:numerics} In this section, we demonstrate the proposed deep fictitious play methodology by applying our algorithm to the linear-quadratic game \eqref{def:Xt}-\eqref{def:J}, which was first introduced in \cite{CaFoSu:15} to study systemic risk. We choose this model as our test case for two reasons: firstly, the convergence of fictitious play in this setting has been proved in Section~\ref{sec:LQ} under model assumptions. Secondly, a closed-form solution exists for this model, which enables us to benchmark the performance of our proposed scheme. Numerical results are shown in three examples with $N=5, 10, 24$ players. The Euler scheme (with time step $h = T/N_T$) of the dynamics \eqref{def:Xti:lq}-\eqref{def:Xtj:lq} follows from \eqref{eq:Xt:discrete} with: \begin{align} b^\ell(t,\bm{x}, \bm{\alpha}) = a(\overline{\bm{x}} - x^\ell) + \alpha^\ell, \quad \sigma^\ell(t,\bm{x}, \bm{\alpha}) = \sigma^0(t,\bm{x}, \bm{\alpha}) \equiv \sigma, \quad \ell \in \mc{I}. \end{align} The model parameters used in the numerical experiments are \begin{equation} T = 1, \quad \sigma = 1, \quad a = 1, \quad q = 0, \quad \rho = 0, \quad \epsilon = 1,\quad c = 1.
\end{equation} Remark that, with the above choice, the factor in \eqref{def:cvgcondition} evaluates to $\frac{1-e^{-2T\underline \gamma}}{\underline \gamma}C = 0.9568, 1.5420$, $1.9995$ for $N = 5, 10 ,24$ respectively, so none of the three cases in Proposition~\ref{prop:case} applies. Nevertheless, we still obtain convergent numerical results, which demonstrates the robustness of the proposed algorithms and suggests that our theoretical analysis can potentially be improved. We choose $M = 2^{16}$ samples for training the DNNs, and $M' = 10^6$ out-of-sample paths for the final evaluation. A validation split ratio of $25\%$ and callbacks are set to avoid over-fitting. The subnetwork for policy approximation at each time step contains 2 hidden layers with $8 + 8$ neurons. During each stage, each network is trained for 200 epochs with mini-batches of size 1024. A total of 10 stages are played. The true (benchmark) optimal control is computed according to \eqref{def:OLE}-\eqref{def:eta}, with $\eta_t$ given in closed form. {\bf Example 1 ($N=5$).} We set the initial states of the five players as $x_0 = (1,5,7,3,8)^\text{T}$ and discretize the time interval $[0,1]$ into $N_T = 50$ steps. In Figure~\ref{fig:N5cost}, we compare the cost functions computed by deep fictitious play to the closed-form solution. One can see that the relative errors of the cost functions for all players drop quickly below 5\% after a few iterations, and then settle steadily below $2\%$ after only ten iterations. In Figure~\ref{fig:N5traj}, we show in the top-left panel the optimal trajectories of all five players computed by deep fictitious play (black star lines) \emph{vs.} by the closed-form formulae (colored solid lines) for one representative realization. One can observe that the players, although starting away from each other, move closer as time evolves. This is consistent with the structure of the cost functions, which favor staying close to the average.
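The flocking behavior just described can be reproduced directly from the closed-form equilibrium \eqref{def:OLE}: simulating the state dynamics under the equilibrium control with a plain Euler scheme, the cross-sectional spread of the players shrinks over time while the controls sum to zero path by path. The sketch below uses the model parameters and the Example 1 initial states quoted above; the discretization, the backward Euler Riccati sweep, and the number of Monte Carlo paths are arbitrary choices of ours for illustration.

```python
import numpy as np

# Model parameters as stated above; discretization and path count are our choices.
a, q, eps, c, sigma = 1.0, 0.0, 1.0, 1.0, 1.0
N, T, NT, M = 5, 1.0, 50, 2000
h = T / NT
rng = np.random.default_rng(0)

# Backward Euler sweep for the Riccati equation (def:eta).
eta = np.empty(NT + 1)
eta[NT] = c
for k in range(NT, 0, -1):
    detadt = (2 * (a + (1 - 0.5 / N) * q) * eta[k]
              + (1 - 1 / N) * eta[k] ** 2 - (eps - q ** 2))
    eta[k - 1] = eta[k] - h * detadt

# Euler scheme for the N-player dynamics under the equilibrium control (def:OLE),
# alpha^i_t = [q + (1 - 1/N) eta_t] (Xbar_t - X^i_t), with rho = 0.
x0 = np.array([1.0, 5.0, 7.0, 3.0, 8.0])   # Example 1 initial states
X = np.tile(x0, (M, 1))                    # shape (paths, players)
var0 = X.var(axis=1).mean()                # initial cross-sectional spread
for k in range(NT):
    Xbar = X.mean(axis=1, keepdims=True)
    alpha = (q + (1 - 1 / N) * eta[k]) * (Xbar - X)
    X = (X + h * (a * (Xbar - X) + alpha)
         + sigma * np.sqrt(h) * rng.standard_normal((M, N)))
varT = X.var(axis=1).mean()                # terminal cross-sectional spread
```

Since $\sum_i (\overline X_t - X_t^i) = 0$ exactly, the simulated controls sum to zero along every path, and the terminal spread sits well below the initial one, matching the qualitative picture in Figure~\ref{fig:N5traj}.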
To quantitatively measure the performance of our algorithm, we show the mean and standard deviation of the difference between the NN predictions and the true solutions in the remaining panels, based on a total of $10^6$ sample paths. The means are almost zero, with slightly convex or concave curves depending on the player's initial relative ranking: players starting below the average tend to exhibit the convex feature. Standard numerical schemes can approximate cost functions well, but not their derivatives, which are related to the controls; our deep learning algorithm instead computes the control directly, and shows a good approximation. Figure~\ref{fig:N5control} plots two visualized paths of controls for illustration purposes. \begin{figure}[H] \begin{tabular}{ccc} \includegraphics[width=0.28\textwidth]{n5/N=5_loss_player0.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_loss_player1.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_loss_player2.png} \\ \includegraphics[width=0.28\textwidth]{n5/N=5_loss_player3.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_loss_player4.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_relativeerr.png} \end{tabular} \caption{Comparisons of cost functions for $N=5$ players in the linear quadratic game. The dash-dotted lines are the analytical cost functions given by the closed-form solution for each individual player. The solid lines are the cost functions given by deep fictitious play for each player at the first $10$ iterations. The bottom-right panel shows the relative errors of the cost functions for the five players, which are quite small at the $10^{\text{th}}$ iteration.
}\label{fig:N5cost} \end{figure} \begin{figure}[H] \begin{tabular}{ccc} \includegraphics[width=0.28\textwidth]{n5/N=5_trajactory.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_tra_errorbar_0.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_tra_errorbar_1.png} \\ \includegraphics[width=0.28\textwidth]{n5/N=5_tra_errorbar_2.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_tra_errorbar_3.png} & \includegraphics[width=0.28\textwidth]{n5/N=5_tra_errorbar_4.png} \end{tabular} \caption{Comparisons of optimal trajectories for $N=5$ players in the linear quadratic game. Top-left panel: a single sample path of the true optimal trajectories $X_t$ (solid lines) \emph{vs.} the ones computed by deep fictitious play $\widehat X_t$ (star lines). The other panels show the mean (blue triangles) and standard deviation (red bars, plotted every other time step) of the optimal trajectory errors for the five players, using a total sample of $10^6$ paths. Overall, they show a good approximation by deep fictitious play of the linear quadratic game with $N=5$ players. }\label{fig:N5traj} \end{figure} \begin{figure}[H] \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{n5/N=5_control_player0.png} & \includegraphics[width=0.3\textwidth]{n5/N=5_control_player1.png} & \includegraphics[width=0.3\textwidth]{n5/N=5_control_player2.png} \\ \end{tabular} \begin{tabular}{cc} \hspace{7em} \includegraphics[width=0.3\textwidth]{n5/N=5_control_player3.png} & \includegraphics[width=0.3\textwidth]{n5/N=5_control_player4.png} \end{tabular} \caption{Comparisons of optimal controls for $N=5$ players in the linear quadratic game. For the sake of clarity, we only show two sample paths of optimal controls for each player. The solid lines are optimal controls given by the closed-form solution, and the dash-dotted lines are computed by deep fictitious play.}\label{fig:N5control} \end{figure} {\bf Example 2 ($N=10$).} The initial state of the $i^{\text{th}}$ player is $x_0^i = 0.5 + 0.05(i-1)$.
We use $N_T = 20$ time steps for the discretization of the time interval $[0,1]$. These choices enable us to investigate the sensitivity of the deep learning algorithm to the initial positions and the time step. In Figure~\ref{fig:N10_cost_traj}, we compare the cost functions computed by deep fictitious play to the closed-form solution: after only ten iterations, the maximum relative error of the cost function over all players has been reduced to less than $3\%$, and the computed optimal trajectories (one visualized sample path) of four selected players by fictitious play coincide with those of the closed-form solution. The standard deviation of the difference between the approximated and true optimal trajectories is less than $2\%$ for $t\in[0, 1]$ for all players; we present a selection of six in Figure~\ref{fig:N10traj}. Note that, although the time step $h$ is twice as large as in the $N=5$ case, the relative error does not increase significantly. However, we do not observe the trajectories getting closer and closer as in the $N = 5$ case, since they already start in a neighborhood of each other. Nor do we observe the curvature, which supports our assertion that the curvature depends on $\overline x_0 - x_0^i$. We also show two visualized sample paths of the optimal control in Figure~\ref{fig:N10control}, which present a good approximation of the policy. \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{n10/N=10_relativeerr.png} & \includegraphics[width=0.45\textwidth]{n10/N=10_trajactory.png} \end{tabular} \caption{Comparisons of cost functions and optimal trajectories for $N=10$ players in the linear quadratic game.
Left: the maximum relative errors of the cost functions for the ten players; Right: for the sake of clarity, we only present the comparison of optimal trajectories for the $1^\text{st}$, $4^\text{th}$, $7^\text{th}$ and $10^\text{th}$ players, where the solid lines are given by the closed-form solution and the stars are computed by deep fictitious play. }\label{fig:N10_cost_traj} \end{figure} \begin{figure}[H] \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_0.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_1.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_3.png} \\ \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_6.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_7.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_tra_errorbar_9.png} \end{tabular} \caption{Comparisons of optimal trajectories for $N=10$ players in the linear quadratic game. For the sake of clarity, we only show the mean (blue triangles) and standard deviation (red bars) of the optimal trajectory errors for the $1^\text{st}$, $2^\text{nd}$, $4^\text{th}$, $7^\text{th}$, $8^\text{th}$ and $10^\text{th}$ players, respectively. The results are based on a total sample of $65536$ paths, and show that deep fictitious play provides uniformly good accuracy for the optimal trajectories.}\label{fig:N10traj} \end{figure} \begin{figure}[h t b] \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{n10/N=10_control_player0.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_control_player1.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_control_player3.png} \\ \includegraphics[width=0.3\textwidth]{n10/N=10_control_player6.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_control_player7.png} & \includegraphics[width=0.3\textwidth]{n10/N=10_control_player9.png} \end{tabular} \caption{Comparisons of optimal controls for $N=10$ players in the linear quadratic game.
For the sake of clarity, we only show two sample paths of optimal controls for the $1^\text{st}$, $2^\text{nd}$, $4^\text{th}$, $7^\text{th}$, $8^\text{th}$ and $10^\text{th}$ players, respectively. The solid lines are optimal controls given by the closed-form solution, and the dash-dotted lines are computed by deep fictitious play. }\label{fig:N10control} \end{figure} {\bf Example 3 ($N=24$).} The initial position of the $i^{\text{th}}$ player is $x_0^i = 0.5i$. We set the number of time steps to $N_T = 20$, after observing that the relative errors did not increase much from $N_T = 50$ to $N_T = 20$. The problem is by nature high-dimensional: the $k^\text{th}$ ``Sequential'' subnetwork maps $\mathbb{R}^{Nk}$ to $\mathbb{R}$. To accelerate the computation, we distribute the training over 8 GPUs. Studies similar to the $N=10$ case are presented in Figures~\ref{fig:N24:cost:traj}-\ref{fig:N24control}. Several key features observed in the previous numerical experiments persist: the maximum relative error drops below $3\%$ after ten iterations; the average error of the estimated trajectories is a convex/concave function of time $t$; the standard deviation of the estimation error accumulates from step to step. In fact, the convexity/concavity with respect to time $t$ is caused by two factors: the propagation of errors, which produces an increase in the magnitude of the error mean; and the existence of the terminal cost, which puts more weight on $X_T$ than on $X_t$, $t \in (0,T)$, resulting in a better estimate of $X_T$ and a decreasing effect. \begin{figure}[H] \begin{tabular}{cc} \includegraphics[width=0.45\textwidth]{n24/N=24_relativeerr.png} & \includegraphics[width=0.45\textwidth]{n24/N=24_trajactory.png} \end{tabular} \caption{Comparisons of cost functions and optimal trajectories for $N=24$ players in the linear quadratic game.
Left: the maximum relative errors of the cost functions for the twenty-four players; Right: for the sake of clarity, we only present the comparison of optimal trajectories for the $1^\text{st}$, $4^\text{th}$, $7^\text{th}$, $10^\text{th}$, $13^\text{th}$, $16^\text{th}$, $19^\text{th}$ and $22^\text{nd}$ players, where the solid lines are given by the closed-form solution and the stars are computed by deep fictitious play. }\label{fig:N24:cost:traj} \end{figure} \begin{figure}[h t b] \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_0.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_3.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_6.png} \\ \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_9.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_10.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_12.png} \\ \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_15.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_18.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_tra_errorbar_21.png} \\ \end{tabular} \caption{Comparisons of optimal trajectories for $N=24$ players in the linear quadratic game. For the sake of clarity, we only show the mean (blue triangles) and standard deviation (red bars) of the optimal trajectory errors for the $1^\text{st}$, $4^\text{th}$, $7^\text{th}$, $10^\text{th}$, $11^\text{th}$, $13^\text{th}$, $16^\text{th}$, $19^\text{th}$ and $22^\text{nd}$ players, respectively.
The results are based on a total sample of $65536$ paths, and show that deep fictitious play provides uniformly good accuracy for the optimal trajectories.}\label{fig:N24traj} \end{figure} \begin{figure}[h t b] \begin{tabular}{ccc} \includegraphics[width=0.3\textwidth]{n24/N=24_control_player0.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player3.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player6.png} \\ \includegraphics[width=0.3\textwidth]{n24/N=24_control_player9.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player10.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player12.png} \\ \includegraphics[width=0.3\textwidth]{n24/N=24_control_player15.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player18.png} & \includegraphics[width=0.3\textwidth]{n24/N=24_control_player21.png} \end{tabular} \caption{Comparisons of optimal controls for $N=24$ players in the linear quadratic game. For the sake of clarity, we only show two sample paths of optimal controls for the $1^\text{st}$, $4^\text{th}$, $7^\text{th}$, $10^\text{th}$, $11^\text{th}$, $13^\text{th}$, $16^\text{th}$, $19^\text{th}$ and $22^\text{nd}$ players, respectively. The solid lines are optimal controls given by the closed-form solution, and the dash-dotted lines are computed by deep fictitious play. }\label{fig:N24control} \end{figure} To better illustrate that our algorithm can overcome the curse of dimensionality, we compare the performance across different $N$. In particular, we compute $$\max_{i \in \mc{I}}\max_{k \leq N_T}\abs{X_{kh}^i - \widehat X_{kh}^i}$$ where $X$ denotes the state process following the open-loop Nash equilibrium, while $\widehat X$ is its deep fictitious play counterpart. The $L^1$ error is $1.09\times 10^{-2}$ for $N = 5$, $1.49\times 10^{-2}$ for $N = 10$ and $2.08\times 10^{-2}$ for $N = 24$. Table~\ref{tab:parameter} gives the running time and the other hyper-parameters used in the numerical examples.
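Given the benchmark and learned trajectories stored as arrays, the path-wise discrepancy above is a one-line reduction. The helper below is our own sketch, and the $(M, N_T+1, N)$ array layout is an assumption for illustration, not the paper's storage format.

```python
import numpy as np

def max_state_discrepancy(X, X_hat):
    """Per-path worst-case gap  max_i max_k |X^i_{kh} - Xhat^i_{kh}|.

    Both arrays are assumed to be laid out as (M, NT + 1, N):
    M sample paths, NT + 1 time points, N players.  Averaging the
    returned vector over paths gives an L^1-type summary error.
    """
    return np.abs(np.asarray(X) - np.asarray(X_hat)).max(axis=(1, 2))

# Toy check: a uniform 0.01 perturbation yields a 0.01 discrepancy per path.
rng = np.random.default_rng(1)
X = rng.standard_normal((4, 21, 5))
err = max_state_discrepancy(X, X + 0.01)
```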
\begin{table}[h] \caption{Hyperparameters and runtime for the numerical examples presented in Section 4.}\label{tab:parameter} \begin{center} ~\newline \begin{tabular}{@{}c|ccc@{}} \toprule Problem & N = 5 & N = 10 & N = 24 \\ \midrule $N_T$ & 50 & 20 & 20 \\ Max Relative Err & 1.15\% & 2.45\% & 2.95\%\\ \# of GPUs used & 1 & 1 & 8 \\ runtime (hours) $^\dagger$ & 2.15 & 14.03 & 12.10 \\ $L^1$ error of $\widehat X$ & 1.09e-2 & 1.49e-2 & 2.08e-2 \\ \bottomrule \end{tabular} \end{center} \small{ $^\dagger$ The numerical experiments were conducted using Amazon EC2 services with P2 instances. We remark that the runtime is subject to further reduction with a multi-GPU system or more efficient GPUs. } \end{table} \section{Conclusion, discussion and extension}\label{sec:rmk} In this paper, we have proposed the theory of deep fictitious play to compute the Nash equilibrium of asymmetric $N$-player non-zero-sum stochastic differential games. We apply the strategy of fictitious play by letting each individual player optimize her own payoff while fixing the controls of the other players at each stage, and then repeating the game until the responses do not change much from stage to stage. Finding the best response for each player at each stage is a stochastic optimal control problem, which we solve approximately using deep neural networks (DNNs). By the nature of open-loop strategies, the problem is recast into the repeated training of $N$ decoupled neural networks (NNs), where the inputs of each NN depend on the other NNs' outputs from the previous training. Using Keras and parallel GPU simulation, the deep learning algorithm can be applied to any $N$-player stochastic differential game with different symmetries and heterogeneities. The numerical accuracy and efficiency are illustrated by comparison to the closed-form solution in the linear quadratic case. We also prove the convergence of fictitious play under appropriate assumptions, and show that the convergent limit forms an open-loop Nash equilibrium.
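The stage loop summarized above can be illustrated on a toy static quadratic game in which each best response is available in closed form. This is a drastic simplification of the SDE setting (no dynamics, no neural networks), and all numbers below are chosen purely for illustration.

```python
import numpy as np

# Toy static game (illustrative only): player i minimizes
#   J^i(alpha) = 0.5 * (alpha^i - lam * mean_{j != i} alpha^j - b_i)^2,
# so her best response is analytic:  BR^i = lam * mean_{j != i} alpha^j + b_i.
# For |lam| < 1 the simultaneous fictitious-play map is a contraction.
N, lam = 5, 0.6
b = np.array([1.0, -2.0, 0.5, 3.0, -1.5])

def best_responses(alpha):
    """Simultaneous update: each player responds to the others' last-stage play."""
    others_mean = (alpha.sum() - alpha) / (N - 1)
    return lam * others_mean + b

alpha = np.zeros(N)                  # stage-0 belief: everyone plays zero
for stage in range(60):              # the N updates inside could run in parallel
    alpha = best_responses(alpha)

# At a Nash equilibrium no player wants to deviate, so alpha is a fixed point.
residual = np.abs(alpha - best_responses(alpha)).max()
```

In the paper's setting the `best_responses` map is replaced by the DNN training of each player's stochastic control problem, but the outer stage loop has exactly this structure.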
We remark that the implementation of this algorithm causes no extra difficulties beyond the linear-quadratic game, but verifying convergence to the true equilibrium is in general hard due to the lack of a benchmark solution. Although one may observe the convergence of the proposed algorithm by tracking the relative change of cost (cf. Step~\ref{algo:step} in Algorithm~\ref{def:algorithm}), it may actually be trapped in a local (but not true) equilibrium. In the following, we discuss extensions to other neural network architectures, other strategies of fictitious play, and closed-loop Nash equilibria. \subsection{Other neural network architectures} In the open-loop framework, the search space for optimal policies consists of all $\mathcal{F}_t$-progressively measurable processes, which possesses a path-dependent feature. When using a feedforward architecture, in order to better capture this feature, one needs to partition $[0,T]$ into a sufficiently large number $N_T$ of intervals. Then, a sub-network is used to approximate the optimal policy at each time point \eqref{def:strategy:approx}, whose size becomes larger as time approaches the terminal time $T$, since more history needs to be fed as input. Therefore, the training time increases significantly when one uses a large $N_T$. To improve the performance, architectures based on recurrent neural networks can be considered for solving the stochastic control problem \eqref{def:sc1}--\eqref{def:sc2}, for example long short-term memory (LSTM) networks, gated recurrent units (GRUs), {\it etc.} This will be part of our future work \cite{HaHu:20}. \subsection{Belief based on time average of past play} \label{sec:avg} In the formulation \eqref{def:SFP}, the players' beliefs are based on their opponents' actions during the last round, i.e., at stage $n+1$, players myopically respond to their opponents' policies at stage $n$ without considering any decisions made before stage $n$.
This in fact differs slightly from Brown's definition \cite{Br:49,Br:51}, where players' responses take into account all past policies. Denote by $\bm{\widetilde \alpha}^{-i,n}$ the average of past play, \begin{align}\label{def:SFP:avg} \bm{\widetilde \alpha}^{-i,n} = \frac{1}{n}\sum_{k=1}^n \bm{\alpha}^{-i,k}, \end{align} then Brown's original idea corresponds to the control problem: \begin{align} \alpha^{i,n+1} := \argmin_{\beta^i \in \mathbb{A}} J^i (\beta^i; \widetilde{\bm{\alpha}}^{-i,n}), \quad \forall i \in \mc{I}, n \in \mathbb{N}, \end{align} where $J^i$ is defined as in \eqref{def:J:SFP}. In general, convergence in the strategy $\bm{\alpha}^n$ implies convergence in the average of past play $\widetilde {\bm{\alpha}}^n$, but not vice versa. Therefore, convergence in $\widetilde{\bm{\alpha}}^n$ does not necessarily lead to a Nash equilibrium. Our numerical tests show that, if the algorithm converges in $\bm\alpha^n$, then using $\widetilde {\bm{\alpha}}^n$ tends to give a better convergence rate in the linear quadratic case. In practice, within the framework of deep fictitious play, one can generalize \eqref{def:SFP:avg} to any weighted average of past policies $\sum_{k=0}^n c_k\bm{\alpha}^{-i,k}$, where $(c_k)_{k=0}^n$ lies in the $n$-simplex with $c_n >0$. We plan to further investigate the comparison between different beliefs for practical problems in the future. \subsection{Belief updated alternately} We shall also mention that there are actually two versions of fictitious play: alternating fictitious play (AFP), originally invented in \cite{Br:49}, and simultaneous fictitious play (SFP), mentioned as a minor variant of AFP in \cite{Br:49}. In contrast to \eqref{def:SFP}, the players under AFP update their beliefs alternately.
For example, in the case $N=2$, the learning process is: \begin{align} \alpha^{1,n+1} &:= \argmin_{\beta^1 \in \mathbb{A}} J^1(\beta^1; \alpha^{2,n}), \quad \alpha^{2,n} := \argmin_{\beta^2 \in \mathbb{A}} J^2(\beta^2; \alpha^{1,n}), \quad n \geq 1, \end{align} and the computation follows $\alpha^{2,0}$ (initial belief) $\rightarrow \alpha^{1,1}\rightarrow \alpha^{2,1}\rightarrow \alpha^{1,2}\rightarrow \alpha^{2,2}\rightarrow \ldots$. The dependence of $\alpha^{2,n}$ on $\alpha^{1,n}$ makes it impossible to update them simultaneously, which is the main difference from SFP. Indeed, SFP can be considered a simpler learning process than AFP, as the players are treated symmetrically in time. This usually enhances analytical convenience as well as numerical efficiency (with possible parallel implementation in Steps 5--9 of Algorithm~\ref{def:algorithm}). The original AFP has gradually disappeared from the literature in favor of SFP, even though SFP may generate subtle problems which do not arise under AFP. For a comparison study, we refer to \cite{Be:07}, where this subtlety is also related to Monderer and Sela's {\it Improvement Principle} \cite{MoSe:97}. We have focused on SFP in this paper, where the beliefs can be updated in parallel, and leave the AFP learning process for future studies. \subsection{The algorithm for closed-loop Nash equilibrium}\label{sec:close:loop} Depending on the space in which we search for $\beta^i$ in \eqref{def:SFP}, the algorithm can lead to a Nash equilibrium in different settings. Indeed, if one considers $\beta^i$ as a function $[0,T] \times (\mathbb{R}^d)^N \ni (t, \bm{x}) \mapsto \beta^{i}(t,\bm{x}) \in A \subset \mathbb{R}^k$ of time and the current states, then the limit yields a feedback strategy for the Nash equilibrium.
Mathematically, \begin{equation}\label{def:SFP:cl} \alpha^{i,n+1}(t,\bm{x}) := \argmin_{\beta^i(t,\bm{x}) \in A} J^i(\beta^i(X_t^{i, \beta^i}, \bm{X}_t^{-i, \bm{\alpha}^{-i,n}}); \bm{\alpha}^{-i,n}(X_t^{i, \beta^i}, \bm{X}_t^{-i, \bm{\alpha}^{-i,n}})), \end{equation} where $\bm{X}_t^{-i, \bm{\alpha}^{-i,n}}$ represents the state processes of players $j \neq i$ following the policies $\bm{\alpha}^{-i,n}$. This setup can be analyzed by the partial differential equation (PDE) approach. Assuming enough regularity, the minimal cost can be reformulated as the classical solution to an HJB equation, where the other players' strategies are given by the deterministic functions obtained from the previous round. Consequently, at each stage, the task is to solve $N$ independent HJB equations, which can still be implemented in parallel. Moreover, if the players are statistically identical, one actually only needs to solve one PDE. Denote by $V^{i,n+1}(t,\bm{x})$ the value function of problem \eqref{def:SFP:cl} at time $t$ with initial states $\bm{X}_t = \bm{x}$; by dynamic programming, it satisfies \begin{align} \partial_t V^{i,n+1} &+ \inf_\beta\Bigg\{b^i(t,\bm{x}, \beta)\partial_{x^i}V^{i,n+1} + f^i(t,\bm{x}, \beta) + \frac{1}{2} \text{Tr}\left[ \partial^2_{x^i,x^i}V^{i,n+1}\sigma^i(t,\bm{x},\beta)\sigma^i(t,\bm{x},\beta)^\dagger\right] \\ &+ \sum_{\substack{j = 1\\ j \neq i}}^N \text{Tr}\left[\partial^2_{x^i,x^j}V^{i,n+1} \sigma^i(t,\bm{x}, \beta) \Sigma^{i,j} \sigma^j(t,\bm{x}, \alpha^{j,n})^\dagger\right]\Bigg\} + \sum_{\substack{j=1 \\ j\neq i}}^N b^j(t,\bm{x}, \alpha^{j,n})\partial_{x^j}V^{i,n+1} \\ &+ \frac{1}{2} \sum_{\substack{j,k = 1\\ j\neq i\\k \neq i}}^N \text{Tr}\left[\partial^2_{x^j,x^k}V^{i,n+1} \sigma^j(t,\bm{x}, \alpha^{j,n}) \Sigma^{j,k}\sigma^k(t,\bm{x}, \alpha^{k,n})^\dagger\right]= 0, \\ \alpha^{i,n} \equiv \alpha&^{i,n}(t,\bm{x}) := \argmin_{\beta \in A}\left\{b^i(t,\bm{x}, \beta)\partial_{x^i}V^{i,n} + f^i(t,\bm{x}, \beta)\right\}, \quad \Sigma^{j,k} \,\mathrm{d} t := \,\mathrm{d}
\average{W^j, W^k}_t. \end{align} Then, numerically, one can design traditional finite difference/element methods, or use deep learning, which has shown excellent performance in overcoming the curse of dimensionality for high-dimensional PDEs \cite{EHaJe:17,HaJeE:18}. In the end, the optimal response function $\alpha^{i,n+1}$ is given in terms of $\partial_{x^i}V^{i,n+1}, \partial^2_{x^i, x^j}V^{i,n+1}$. However, a common drawback of working on the value function $J^i$ is that numerical schemes usually approximate the solution well, but not its derivative, which is more sensitive. An alternative way is to work directly on the control. By a stochastic maximum principle argument, the optimal control is linked to the solution (not the derivative) of FBSDEs, see, {\it e.g.}, \cite[Section~2.2]{CaDe1:17}. It is then promising to apply the recent deep learning algorithm for coupled FBSDEs \cite{HaLo:18}. In this case, at each stage, the task is to solve $N$ independent FBSDEs, and parallel implementation is still possible. Both approaches rely on properties of the reformulated problem: the solution's regularity in the PDE approach, and the Hamiltonian's convexity in the FBSDE approach. A third possibility is to work with the optimization \eqref{def:SFP:cl} directly, as we do in the open-loop case. That is, one uses a deep NN to approximate the control and finds the optimal parameters that minimize \eqref{def:SFP:cl}. However, due to the feedback reaction, Algorithm~\ref{def:algorithm} and the architectures proposed in Section~\ref{sec:algorithm} are no longer suitable. It is this ``indirect'' reaction nature of the open-loop strategy that enables us to design $N$ separate NNs and a scalable algorithm. When working with feedback controls, by contrast, the realized opponents' strategies $\bm{\alpha}^{-i,n}(t, \bm{X}_t)$ depend on $\beta^i$.
As further illustrated by Figure~\ref{fig:algorithm}, this means that $\bm\alpha_1^{-i}$, previously treated as intermediate outputs of the other players' NNs from the previous training, now depends on $\beta_0^i$ through $X_1^{i}$. Consequently, to take into account the direct reaction of her opponents, one needs to feed $\beta_0^i$ into the $j^{\text{th}}$ player's NN, $j \neq i$, to produce the intermediate output $\bm\alpha_1^{-i}$. This couples the $N$ neural networks with each other and makes parallel implementation hard. Evidently, using deep fictitious play for Markovian Nash equilibria is not a simple modification of Algorithm~\ref{def:algorithm}; two of the three aforementioned approaches (PDE and direct) are studied in the follow-up works \cite{HaHu:19,HaHuLo:20}. \section*{Acknowledgment} I am grateful to Professor Marcel Nutz for the stimulating and fruitful discussions on fictitious play and the convergence of the linear quadratic case. \bibliographystyle{plain}
apm install file-icons
apm install linter
apm install jslint
apm install linter-eslint
apm install minimap
apm install linter-pylint
apm install pretty-json
Tag Archives: Niger

African Security Cooperation Updates

It has already been noted here that this year's iteration of the annual Flintlock exercise is underway in Niger. The exercise began this year on February 19th, and is scheduled to end this Sunday, March 9th. The significance of Niger as this year's host has already been mentioned.

A member of the Canadian Special Operations Regiment instructs members of Niger's 22nd Battalion during Exercise Flintlock 2014.

An official US Africa Command (AFRICOM) piece on the exercise that appeared on their website yesterday has some additional items worth noting. The first is highlighting the aerial resupply portion of the training. AFRICOM, and US European Command (EUCOM) before them, have both spent considerable effort in developing this capability for African forces. AFRICOM runs an annual exercise, Atlas Accord, specifically focused on this capability. Atlas Accord replaced a previous annual exercise, Atlas Drop, in 2012. EUCOM had started Atlas Drop in 1996.

The belief is that aerial resupply may be the answer to the problem of conducting sustained operations for many African nations. Most African militaries lack a robust logistics arrangement, meaning that their forces are limited in how far away from their base they can operate and for how long. This is especially true of many militaries in Africa's Sahel region, which has historically been referred to by the US government as an "ungoverned space." Aerial resupply can also help in the rapid distribution of humanitarian assistance following natural disasters or in other times of need, such as during droughts.

By integrating this component into Flintlock, it frees up resources to host Atlas Accord elsewhere on the continent. In 2012, Atlas Accord was held in Mali, where the annual Flintlock exercise was to be held, but was canceled. Last year's Atlas Accord exercise was held in Nigeria.
It's also worth noting the international participants in this year's Flintlock exercise. While the African nations participating in the exercise change relatively little from year to year, the US has been inviting more nations from outside Africa to participate in recent years. From the AFRICOM news piece, we can see that Spanish and Canadian special operations forces are participating this year. Both of these nations also participated in the 2011 Flintlock exercise.

In other news, the North Dakota National Guard announced that it was expanding its participation in the National Guard Bureau's State Partnership Program (SPP). Since 1993, State National Guards in the United States have formed bilateral relationships with foreign militaries as part of the SPP. Training exchanges are intended to benefit both sides and provide a continuity of relationship that might not necessarily be found in other arrangements. North Dakota's National Guard has an existing history with Africa, beginning its first SPP partnership in 2004 with Ghana. The North Dakota Guard will now also be a partner with the armed forces of Togo and Benin. State Guards now partner with ten countries in the AFRICOM area of responsibility.

Tagged Benin, Exercise Flintlock, Niger, security cooperation, security force assistance, State Partnership Program, Togo

Flintlock 2014 Begins in Niger

This year's iteration of the annual Flintlock special operations exercise began yesterday in Niger. The annual exercise, directed by the Joint Chiefs of Staff, sponsored by US Africa Command, and run by Joint Special Operations Task Force – Trans Sahara, is an important component of US counter-terrorism efforts in Africa's Sahel region. This region stretches the length of the continent, dividing North Africa from sub-Saharan Africa.
Malian soldiers conduct fast rope operations out of an MH-47 Chinook helicopter from the US Army's 160th Aviation Regiment (Special Operations) (Airborne) in Bamako, Mali on 18 May 2010 as part of Flintlock 2010.

Though the participants have not yet been named, reports indicate that one thousand personnel from eighteen countries will take part. This is four more than took part in last year's exercise, hosted by Mauritania. That the exercise this year is being held in Niger is unsurprising. The country borders Algeria, Libya, Mali, and Nigeria, all of which are currently battling major terrorist groups. As a result, Niger has recently become a major partner with the US and French militaries, both of whom are conducting drone operations from a base adjacent to the airport in the capital Niamey. This year's Flintlock is another indicator of increasing concerns about terrorist groups in the region. You can read more about this in an article I wrote today for War is Boring.

Tagged Exercise Flintlock, Niger, terrorism

Nigerien Minister Suggests France, US Should Intervene Again in Libya

In an interview with Radio France Internationale broadcast today, Niger's interior minister suggested that France and the US should consider an intervention into Libya to address terrorism in that country's southern region. Massoudou Hassoumi said southern Libya had become "an incubator for terrorist groups" and that the countries that supported the overthrow of Moammar Gadhafi should "provide an after-sales service."

Map released by AFRICOM in its 2013 posture statement showing AQIM areas of influence in Mali, Algeria, and Libya, as of 22 February 2013

Since the ouster and execution of Gadhafi in 2011, Libya has suffered from chronic instability as various militias continue to operate with impunity. The US, France, and other countries provided materiel support to various armed opposition factions, along with a sustained air campaign that allowed them to take control of the country.
The new central government has largely failed in its attempts to get these factions under control. For instance, four Egyptian diplomats were abducted last week in what was said to be a reprisal for government action against a prominent militia leader.

Terrorism is indeed a growing threat in Libya. The US Department of State designated two groups in Libya as both Foreign Terrorist Organizations (FTO) and Specially Designated Global Terrorists (SDGT) last month. Militant groups have also looted Libya for weapons, with man-portable surface-to-air missiles being among the weapons thought to have been taken. Efforts to train Libya's national security forces to respond to these threats are scheduled to begin this year.

The potential threats posed by the absence of government control in Libya are well known. Tuareg insurgents in Mali were originally located in Libya, and Al Qaeda in the Islamic Maghreb have also used Libyan territory as a staging ground for attacks in neighboring countries. Niger has already been involved in the increasing international presence to counter such activities in the region. Both the US and France conduct drone reconnaissance operations from the country.

However, the US so far has declined to deploy significant numbers of troops to the region, preferring to support other countries and otherwise rely on unmanned aerial vehicles and special operations forces to conduct raids on isolated targets. France is also finding its military strained by interventions in Africa, despite having a clear interest in expanding its ability to respond to threats on the continent. Its primary focus has shifted to Central African Republic, with the hope that other European nations will be able to assist in countries like Mali. The Netherlands recently began deploying peacekeepers to that country, and Germany announced today that it would look to increase its training mission there.
Tagged AQIM, France, Libya, Mali, Niger, terrorism

US Forces Train for Search and Rescue in East Africa

On January 12th, elements of the US Army's East Africa Response Force (EARF) and US Air Force expeditionary rescue squadrons conducted a joint training exercise at the Grand Bara Range in Djibouti. Soldiers from the 1st Battalion, 18th Infantry Regiment, the current force provider for the EARF, teamed up with aircrews and pararescuemen from the 81st and 82d Expeditionary Rescue Squadrons (ERQS) respectively. All of these units are based at Camp Lemonnier, also in Djibouti.

Soldiers of the 1st Battalion, 18th Infantry Regiment, assigned to the East Africa Response Force, provide security as pararescuemen of the 82d Expeditionary Rescue Squadron (ERQS) return to an HC-130 of the 81st ERQS during a training exercise on January 12th, 2014.

The exercise was designed to help Air Force personnel "maintain proficiency in advanced parachuting, rapid vehicle movement, infiltration and exfiltration" and give Army forces a chance to "[enhance] their skills in aircraft security measures." During the exercise, HC-130 aircraft from the 81st ERQS landed in the Grand Bara Range and deployed pararescuemen and EARF soldiers, the latter of which secured the landing zone. Such a method could potentially be employed to rescue personnel should a US aircraft go down somewhere in the region.

A pararescueman of the 82d Expeditionary Rescue Squadron jumps from an HH-60G of the 303d ERQS during Neptune's Falcon, a joint training exercise with the US Navy's Coastal Riverine Squadron One – Forward off the coast of Djibouti on December 20th, 2013.

This search-and-rescue-focused joint exercise follows another one held in Djibouti this past December. During that exercise, called Neptune's Falcon, personnel from the Navy's Coastal Riverine Squadron One – Forward teamed up with pararescuemen from the 82d ERQS and HH-60G helicopters from the 303d ERQS to train off the coast of Djibouti.
The 303d ERQS is also stationed at Camp Lemonnier, and together with the 81st and 82d ERQS makes up the 449th Air Expeditionary Group.

These scenarios are more than hypothetical, too. In an attempt to rescue US and other foreign nationals from the South Sudanese town of Bor last year, CV-22s from the Air Force Special Operations Command took damage and were forced to abort the mission. While the three aircraft made it safely to Entebbe in Uganda, there was of course the possibility the aircraft might not have made it and been forced down in a hostile area. Another example is that of the crash near Camp Lemonnier of an Air Force Special Operations Command U-28A in February 2012. The aircraft had been returning from an intelligence gathering mission.

Nor are US operations limited to Camp Lemonnier or Entebbe. US forces routinely operate from various locations in East Africa to conduct counterterrorism operations and intelligence overflights, as well as training exercises. On January 23rd, the Defense Logistics Agency announced a solicitation for a contract to provide "Petroleum Fuel Support For Various DoD Activities In Africa." This three-year contract includes requirements to supply jet fuel to Camp Lemonnier and Chabelley Airfield in Djibouti, Arba Minch Airport in Ethiopia, and Manda Bay in Kenya. DoD has requirements for the supply of other fuel types like regular gasoline and diesel fuels to other locations in Central African Republic, Niger, South Sudan, and the Island of Sao Tome (where the requirement is said to be in support of the operation of a Voice of America radio relay station).

With this increased US engagement in Africa comes increased potential for both hostile activity and accidents, which would in turn require search and rescue operations. It is likely that these sorts of exercises will continue, especially in the near future with the current emphasis on rapidly deploying elements to and around the continent by air.
Tagged CAR, Djibouti, EARF, Ethiopia, Kenya, Niger, South Sudan, Uganda

France to Reorganize Forces in Africa

The Associated Press reported today that France may look to dramatically restructure its military presence in Africa to be better suited to respond to regional contingencies. Since the beginning of 2013, France has flexed its military muscles with interventions in Mali and Central African Republic. Last year, the chief of France's defense staff, Admiral Edouard Guillaud, also suggested that French forces on the continent should be allowed to more readily pursue terrorists, especially in the Sahel region.

French forces conduct operations in Mali, circa July 2013

France's Defense Minister Jean-Yves Le Drian said in describing the plan that the number of French forces based in Africa would be unchanged, but that they would be postured differently. France's force in the Sahel region will number approximately three thousand personnel. Under the new posture, Abidjan, Cote d'Ivoire's largest city, would become the primary entry point and logistics hub for French forces. Chad's capital N'Djamena would become a hub for French air operations, while the capital of Niger, Niamey, would be used as a primary staging point for unmanned intelligence gathering flights.

These changes seem reasonable in light of the French experience in their recent interventions. Foreign air support and logistical assistance were critical in getting both Operation Serval and Operation Sangaris going. The importance of air power in theater was visible in both of these operations as French forces conducted an airborne assault in Mali in January 2013 and have already deployed a significant air component to Chad in support of operations in CAR. Unmanned surveillance in the Sahel is also critical given the absence of government control in many places, which has in the past been referred to as an "under-governed space."
Establishing a force in Niamey makes good sense as the US also recently established an unmanned surveillance mission there. However, if France is not intending to increase the size of its overall force on the continent, one must wonder what the end result of the restructuring will be. Though billed as a solo effort, France's incursion into Mali would have been impossible without airlift capabilities supplied by the US, the United Kingdom, and the Netherlands, among others. France also lacked the aerial refueling capability for sustained air operations, again relying on the US. The US continues to provide logistical assistance to the French in both Mali and CAR.

France's current force on the continent has clearly been strained, leading them to pull elements out of Kosovo to reinforce their operations in Africa, and the country has continually lobbied for assistance from other European powers. The Dutch recently began deploying to Mali to ease the strain on French forces there and the EU just approved a peacekeeping mission for CAR. Without an increased and permanent commitment or an increase in capability broadly, the revised French posture may not necessarily help them respond any faster or more efficiently to future contingencies.

Tagged CAR, Chad, Cote d'Ivoire, France, Mali, Niger, peacekeeping, terrorism

Questions Remain about French Hostage Release

Yesterday, the French announced that Thierry Dol, Daniel Larribe, Pierre Legrand, and Marc Feret, employees of the French nuclear company Areva who had been taken by Al Qaeda in the Islamic Maghreb (AQIM) on 16 September 2010 at a uranium mine near Arlit in Niger, had been released. French Foreign Minister Laurent Fabius flew back to Paris from Niamey with the hostages today, where they were greeted by French President Francois Hollande, who then made some remarks. President Hollande did mention that France is still working on the release of at least seven other hostages, including three in North Africa.
When it was announced on Tuesday, France's Defense Minister Jean-Yves Le Drian said that the release had been secured without the use of military force and without the payment of a ransom. However, AFP subsequently reported that a ransom of 20 million euros, drawn from a secret intelligence service fund, may have been paid. French authorities deny this, but no significant details about the negotiations have been provided. The original AFP wire note on this appears to be available only in French, but does not name the source of the information.

This release of French hostages comes well after the French made an abortive attempt to free DGSE agent Denis Allex in January. The rescue operation, conducted with US support, failed to free Allex, who was either killed during the operation or by his captors afterwards. A major embarrassment for the French government, the experience in January may have played a role in the handling of this situation.

Tagged AQIM, France, Hostages, Niger
using UnityEngine; using UnityEngine.UI; using UnityEngine.Events; using UnityEngine.EventSystems; using UnityEngine.Serialization; using System.Collections; using System.Collections.Generic; using System.Linq; using System; namespace UIWidgets { /// <summary> /// Paginator direction. /// Auto - detect direction from ScrollRect.Direction and size of ScrollRect.Content. /// Horizontal - horizontal. /// Vertical - vertical. /// </summary> public enum PaginatorDirection { Auto = 0, Horizontal = 1, Vertical = 2, } /// <summary> /// Page size type. /// Auto - use ScrollRect size. /// Fixed - fixed size. /// </summary> public enum PageSizeType { Auto = 0, Fixed = 1, } /// <summary> /// ScrollRectPageSelect event. /// </summary> [Serializable] public class ScrollRectPageSelect : UnityEvent<int> { } /// <summary> /// ScrollRect Paginator. /// </summary> [AddComponentMenu("UI/UIWidgets/ScrollRectPaginator")] public class ScrollRectPaginator : MonoBehaviour { /// <summary> /// ScrollRect for pagination. /// </summary> [SerializeField] protected ScrollRect ScrollRect; /// <summary> /// DefaultPage template. /// </summary> [SerializeField] protected RectTransform DefaultPage; /// <summary> /// ScrollRectPage component of DefaultPage. /// </summary> protected ScrollRectPage SRDefaultPage; /// <summary> /// ActivePage. /// </summary> [SerializeField] protected RectTransform ActivePage; /// <summary> /// ScrollRectPage component of ActivePage. /// </summary> protected ScrollRectPage SRActivePage; /// <summary> /// The previous page. /// </summary> [SerializeField] protected RectTransform PrevPage; /// <summary> /// ScrollRectPage component of PrevPage. /// </summary> protected ScrollRectPage SRPrevPage; /// <summary> /// The next page. /// </summary> [SerializeField] protected RectTransform NextPage; /// <summary> /// ScrollRectPage component of NextPage. /// </summary> protected ScrollRectPage SRNextPage; /// <summary> /// The direction. 
/// </summary> [SerializeField] public PaginatorDirection Direction = PaginatorDirection.Auto; /// <summary> /// The type of the page size. /// </summary> [SerializeField] protected PageSizeType pageSizeType = PageSizeType.Auto; /// <summary> /// Gets or sets the type of the page size. /// </summary> /// <value>The type of the page size.</value> public virtual PageSizeType PageSizeType { get { return pageSizeType; } set { pageSizeType = value; RecalculatePages(); } } /// <summary> /// The size of the page. /// </summary> [SerializeField] protected float pageSize; /// <summary> /// Gets or sets the size of the page. /// </summary> /// <value>The size of the page.</value> public virtual float PageSize { get { return pageSize; } set { pageSize = value; RecalculatePages(); } } int pages; /// <summary> /// Gets or sets the pages count. /// </summary> /// <value>The pages.</value> public virtual int Pages { get { return pages; } protected set { pages = value; UpdatePageButtons(); } } /// <summary> /// The current page number. /// </summary> [SerializeField] protected int currentPage; /// <summary> /// Gets or sets the current page number. /// </summary> /// <value>The current page.</value> public int CurrentPage { get { return currentPage; } set { GoToPage(value); } } /// <summary> /// The force scroll position to page. /// </summary> [SerializeField] public bool ForceScrollOnPage; /// <summary> /// Use animation. /// </summary> [SerializeField] public bool Animation = true; /// <summary> /// Movement curve. /// </summary> [SerializeField] [Tooltip("Requirements: start value should be less than end value; Recommended start value = 0; end value = 1;")] [FormerlySerializedAs("Curve")] public AnimationCurve Movement = AnimationCurve.EaseInOut(0, 0, 1, 1); /// <summary> /// Use unscaled time. /// </summary> [SerializeField] public bool UnscaledTime = true; /// <summary> /// OnPageSelect event. 
/// </summary> [SerializeField] public ScrollRectPageSelect OnPageSelect = new ScrollRectPageSelect(); /// <summary> /// The default pages. /// </summary> protected List<ScrollRectPage> DefaultPages = new List<ScrollRectPage>(); /// <summary> /// The default pages cache. /// </summary> protected List<ScrollRectPage> DefaultPagesCache = new List<ScrollRectPage>(); /// <summary> /// The current animation. /// </summary> protected IEnumerator currentAnimation; /// <summary> /// Is animation running? /// </summary> protected bool isAnimationRunning; /// <summary> /// Is dragging ScrollRect? /// </summary> protected bool isDragging; bool isStarted; /// <summary> /// Start this instance. /// </summary> protected virtual void Start() { if (isStarted) { return ; } isStarted = true; var resizeListener = ScrollRect.GetComponent<ResizeListener>(); if (resizeListener==null) { resizeListener = ScrollRect.gameObject.AddComponent<ResizeListener>(); } resizeListener.OnResize.AddListener(RecalculatePages); var contentResizeListener = ScrollRect.content.GetComponent<ResizeListener>(); if (contentResizeListener==null) { contentResizeListener = ScrollRect.content.gameObject.AddComponent<ResizeListener>(); } contentResizeListener.OnResize.AddListener(RecalculatePages); var dragListener = ScrollRect.GetComponent<OnDragListener>(); if (dragListener==null) { dragListener = ScrollRect.gameObject.AddComponent<OnDragListener>(); } dragListener.OnDragStartEvent.AddListener(OnScrollRectDragStart); dragListener.OnDragEndEvent.AddListener(OnScrollRectDragEnd); ScrollRect.onValueChanged.AddListener(OnScrollRectValueChanged); if (DefaultPage!=null) { SRDefaultPage = DefaultPage.GetComponent<ScrollRectPage>(); if (SRDefaultPage==null) { SRDefaultPage = DefaultPage.gameObject.AddComponent<ScrollRectPage>(); } SRDefaultPage.gameObject.SetActive(false); } if (ActivePage!=null) { SRActivePage = ActivePage.GetComponent<ScrollRectPage>(); if (SRActivePage==null) { SRActivePage = 
ActivePage.gameObject.AddComponent<ScrollRectPage>(); } } if (PrevPage!=null) { SRPrevPage = PrevPage.GetComponent<ScrollRectPage>(); if (SRPrevPage==null) { SRPrevPage = PrevPage.gameObject.AddComponent<ScrollRectPage>(); } SRPrevPage.SetPage(0); SRPrevPage.OnPageSelect.AddListener(Prev); } if (NextPage!=null) { SRNextPage = NextPage.GetComponent<ScrollRectPage>(); if (SRNextPage==null) { SRNextPage = NextPage.gameObject.AddComponent<ScrollRectPage>(); } SRNextPage.OnPageSelect.AddListener(Next); } RecalculatePages(); var page = currentPage; currentPage = -1; GoToPage(page); } /// <summary> /// Determines whether the specified pageComponent is null. /// </summary> /// <returns><c>true</c> if the specified pageComponent is null; otherwise, <c>false</c>.</returns> /// <param name="pageComponent">Page component.</param> protected bool IsNullComponent(ScrollRectPage pageComponent) { return pageComponent==null; } /// <summary> /// Updates the page buttons. /// </summary> protected virtual void UpdatePageButtons() { if (SRDefaultPage==null) { return ; } DefaultPages.RemoveAll(IsNullComponent); if (DefaultPages.Count==Pages) { return ; } if (DefaultPages.Count < Pages) { DefaultPagesCache.RemoveAll(IsNullComponent); Enumerable.Range(DefaultPages.Count, Pages - DefaultPages.Count).ForEach(AddComponent); if (SRNextPage!=null) { SRNextPage.SetPage(Pages - 1); SRNextPage.transform.SetAsLastSibling(); } } else { var to_cache = DefaultPages.GetRange(Pages, DefaultPages.Count - Pages);//.OrderByDescending<ScrollRectPage,int>(GetPageNumber); to_cache.ForEach(x => x.gameObject.SetActive(false)); DefaultPagesCache.AddRange(to_cache); DefaultPages.RemoveRange(Pages, DefaultPages.Count - Pages); if (SRNextPage!=null) { SRNextPage.SetPage(Pages - 1); } } Utilites.UpdateLayout(DefaultPage.parent.GetComponent<LayoutGroup>()); } /// <summary> /// Adds the page component.
/// </summary> /// <param name="page">Page.</param> protected virtual void AddComponent(int page) { ScrollRectPage component; if (DefaultPagesCache.Count > 0) { component = DefaultPagesCache[DefaultPagesCache.Count - 1]; DefaultPagesCache.RemoveAt(DefaultPagesCache.Count - 1); } else { component = Instantiate(SRDefaultPage) as ScrollRectPage; component.transform.SetParent(SRDefaultPage.transform.parent, false); component.OnPageSelect.AddListener(GoToPage); Utilites.FixInstantiated(SRDefaultPage, component); } component.transform.SetAsLastSibling(); component.gameObject.SetActive(true); component.SetPage(page); DefaultPages.Add(component); } /// <summary> /// Gets the page number. /// </summary> /// <returns>The page number.</returns> /// <param name="pageComponent">Page component.</param> protected int GetPageNumber(ScrollRectPage pageComponent) { return pageComponent.Page; } /// <summary> /// Determines whether direction is horizontal. /// </summary> /// <returns><c>true</c> if this instance is horizontal; otherwise, <c>false</c>.</returns> protected bool IsHorizontal() { if (Direction==PaginatorDirection.Horizontal) { return true; } if (Direction==PaginatorDirection.Vertical) { return false; } var rect = ScrollRect.content.rect; return rect.width >= rect.height; } /// <summary> /// Gets the size of the page. /// </summary> /// <returns>The page size.</returns> protected virtual float GetPageSize() { if (PageSizeType==PageSizeType.Fixed) { return PageSize; } if (IsHorizontal()) { return (ScrollRect.transform as RectTransform).rect.width; } else { return (ScrollRect.transform as RectTransform).rect.height; } } /// <summary> /// Go to next page. /// </summary> void Next(int x) { Next(); } /// <summary> /// Go to previous page. /// </summary> public virtual void Prev(int x) { Prev(); } /// <summary> /// Go to next page. /// </summary> public virtual void Next() { if (CurrentPage==(Pages - 1)) { return ; } CurrentPage += 1; } /// <summary> /// Go to previous page. 
/// </summary> public virtual void Prev() { if (CurrentPage==0) { return ; } CurrentPage -= 1; } /// <summary> /// Happens when ScrollRect OnDragStart event occurs. /// </summary> /// <param name="eventData">Event data.</param> protected virtual void OnScrollRectDragStart(PointerEventData eventData) { if (!gameObject.activeInHierarchy) { return ; } isDragging = true; if (isAnimationRunning) { isAnimationRunning = false; if (currentAnimation!=null) { StopCoroutine(currentAnimation); } } } /// <summary> /// Happens when ScrollRect OnDragEnd event occurs. /// </summary> /// <param name="eventData">Event data.</param> protected virtual void OnScrollRectDragEnd(PointerEventData eventData) { isDragging = false; if (ForceScrollOnPage) { ScrollChanged(); } } /// <summary> /// Happens when ScrollRect onValueChanged event occurs. /// </summary> /// <param name="value">Value.</param> protected virtual void OnScrollRectValueChanged(Vector2 value) { if (isAnimationRunning || !gameObject.activeInHierarchy || isDragging) { return ; } if (ForceScrollOnPage) { ScrollChanged(); } } /// <summary> /// Handle scroll changes. /// </summary> protected virtual void ScrollChanged() { if (!gameObject.activeInHierarchy) { return ; } var pos = IsHorizontal() ? -ScrollRect.content.anchoredPosition.x : ScrollRect.content.anchoredPosition.y; var page = Mathf.RoundToInt(pos / GetPageSize()); GoToPage(page, true); } /// <summary> /// Gets the size of the content. /// </summary> /// <returns>The content size.</returns> protected virtual float GetContentSize() { return IsHorizontal() ? ScrollRect.content.rect.width : ScrollRect.content.rect.height; } /// <summary> /// Recalculate the pages count. /// </summary> protected virtual void RecalculatePages() { Pages = Mathf.Max(1, Mathf.CeilToInt(GetContentSize() / GetPageSize())); } /// <summary> /// Go to page. 
/// </summary> /// <param name="page">Page.</param> protected virtual void GoToPage(int page) { GoToPage(page, false); } /// <summary> /// Gets the page position. /// </summary> /// <returns>The page position.</returns> /// <param name="page">Page.</param> protected virtual float GetPagePosition(int page) { return page * GetPageSize(); } /// <summary> /// Go to page. /// </summary> /// <param name="page">Page.</param> /// <param name="forceUpdate">If set to <c>true</c> force update.</param> protected virtual void GoToPage(int page, bool forceUpdate) { if ((currentPage==page) && (!forceUpdate)) { return ; } if (isAnimationRunning) { isAnimationRunning = false; if (currentAnimation!=null) { StopCoroutine(currentAnimation); } } var endPosition = GetPagePosition(page); if (IsHorizontal()) { endPosition *= -1; } if (Animation) { isAnimationRunning = true; var startPosition = IsHorizontal() ? ScrollRect.content.anchoredPosition.x : ScrollRect.content.anchoredPosition.y; currentAnimation = RunAnimation(IsHorizontal(), startPosition, endPosition, UnscaledTime); StartCoroutine(currentAnimation); } else { if (IsHorizontal()) { ScrollRect.content.anchoredPosition = new Vector2(endPosition, ScrollRect.content.anchoredPosition.y); } else { ScrollRect.content.anchoredPosition = new Vector2(ScrollRect.content.anchoredPosition.x, endPosition); } } if ((SRDefaultPage!=null) && (currentPage!=page)) { if (currentPage >= 0) { DefaultPages[currentPage].gameObject.SetActive(true); } DefaultPages[page].gameObject.SetActive(false); SRActivePage.SetPage(page); SRActivePage.transform.SetSiblingIndex(DefaultPages[page].transform.GetSiblingIndex()); } if (SRPrevPage!=null) { SRPrevPage.gameObject.SetActive(page!=0); } if (SRNextPage!=null) { SRNextPage.gameObject.SetActive(page!=(Pages - 1)); } currentPage = page; OnPageSelect.Invoke(currentPage); } /// <summary> /// Runs the animation. 
/// </summary> /// <returns>The animation.</returns> /// <param name="isHorizontal">If set to <c>true</c> is horizontal.</param> /// <param name="startPosition">Start position.</param> /// <param name="endPosition">End position.</param> /// <param name="unscaledTime">If set to <c>true</c> use unscaled time.</param> protected virtual IEnumerator RunAnimation(bool isHorizontal, float startPosition, float endPosition, bool unscaledTime) { float delta; var animationLength = Movement.keys[Movement.keys.Length - 1].time; var startTime = (unscaledTime ? Time.unscaledTime : Time.time); do { delta = ((unscaledTime ? Time.unscaledTime : Time.time) - startTime); var value = Movement.Evaluate(delta); var position = startPosition + ((endPosition - startPosition) * value); if (isHorizontal) { ScrollRect.content.anchoredPosition = new Vector2(position, ScrollRect.content.anchoredPosition.y); } else { ScrollRect.content.anchoredPosition = new Vector2(ScrollRect.content.anchoredPosition.x, position); } yield return null; } while (delta < animationLength); if (isHorizontal) { ScrollRect.content.anchoredPosition = new Vector2(endPosition, ScrollRect.content.anchoredPosition.y); } else { ScrollRect.content.anchoredPosition = new Vector2(ScrollRect.content.anchoredPosition.x, endPosition); } isAnimationRunning = false; } /// <summary> /// Removes the callback. /// </summary> /// <param name="page">Page.</param> protected virtual void RemoveCallback(ScrollRectPage page) { page.OnPageSelect.RemoveListener(GoToPage); } /// <summary> /// This function is called when the MonoBehaviour will be destroyed. 
/// </summary> protected virtual void OnDestroy() { DefaultPages.RemoveAll(IsNullComponent); DefaultPages.ForEach(RemoveCallback); DefaultPagesCache.RemoveAll(IsNullComponent); DefaultPagesCache.ForEach(RemoveCallback); if (ScrollRect!=null) { var dragListener = ScrollRect.GetComponent<OnDragListener>(); if (dragListener!=null) { dragListener.OnDragStartEvent.RemoveListener(OnScrollRectDragStart); dragListener.OnDragEndEvent.RemoveListener(OnScrollRectDragEnd); } var resizeListener = ScrollRect.GetComponent<ResizeListener>(); if (resizeListener!=null) { resizeListener.OnResize.RemoveListener(RecalculatePages); } if (ScrollRect.content!=null) { var contentResizeListener = ScrollRect.content.GetComponent<ResizeListener>(); if (contentResizeListener!=null) { contentResizeListener.OnResize.RemoveListener(RecalculatePages); } } ScrollRect.onValueChanged.RemoveListener(OnScrollRectValueChanged); } if (SRPrevPage!=null) { SRPrevPage.OnPageSelect.RemoveListener(Prev); } if (SRNextPage!=null) { SRNextPage.OnPageSelect.RemoveListener(Next); } } } }
The following exchange is from an OpenStudy thread (http://openstudy.com/updates/50437401e4b000724d46270d):

experimentX: Show that the determinant and trace of a matrix remain invariant under a similarity transformation.

eliassaab: $\det(A^{-1} B A) = \det(A^{-1})\det(B)\det(A) = \frac{1}{\det(A)}\det(B)\det(A) = \det(B)$. Can you do the trace?

eliassaab: Use that $\operatorname{trace}(MN) = \operatorname{trace}(NM)$.

eliassaab: $\operatorname{trace}(A^{-1} B A) = \operatorname{trace}(A\,A^{-1} B) = \operatorname{trace}(B)$

experimentX: thanks ... prof
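A quick numerical sanity check of the two identities discussed above (an illustrative aside, not part of the original thread; the matrices are arbitrary):

```python
# Check numerically that det and trace of a 2x2 matrix B are unchanged
# by the similarity transformation A^{-1} B A.

def mat_mul(X, Y):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def trace(M):
    return M[0][0] + M[1][1]

def inverse(M):
    d = det(M)
    return [[M[1][1] / d, -M[0][1] / d],
            [-M[1][0] / d, M[0][0] / d]]

A = [[2.0, 1.0], [1.0, 1.0]]   # invertible: det(A) = 1
B = [[3.0, 4.0], [5.0, 6.0]]

similar = mat_mul(mat_mul(inverse(A), B), A)   # A^{-1} B A

assert abs(det(similar) - det(B)) < 1e-9
assert abs(trace(similar) - trace(B)) < 1e-9
```

The same check works for any invertible A, which is exactly the content of the two proofs.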
Q: How to get data from sqlite and respond with JSON using Scotty?

I'm trying to build a simple blog using Haskell and the framework Scotty. In my Model.hs I have:

    data Post = Post
        { id       :: Int
        , tipo     :: String
        , titulo   :: String
        , conteudo :: String
        } deriving (Show, Generic)

I've already created a schema using sqlite and populated it with some data. Right now I'm trying to fetch this data using this function in my Storage.hs:

    selectPosts :: Sql.Connection -> IO [M.Post]
    selectPosts conn = Sql.query_ conn "select * from post" :: IO [M.Post]

My intent is to return the data as JSON from my Main.hs:

    instance ToJSON M.Post
    instance FromJSON M.Post

    main :: IO ()
    main = do
        putStrLn "Starting Server..."
        scotty 3000 $ do
            get "/" $ file "templates/index.html"
            get "/posts" $ do
                json posts
        where
            posts = withTestConnection $ \conn -> do
                S.selectPosts conn

But I'm getting an IO [M.Post] and I don't know how to render it as JSON, so I keep getting this error:

    No instance for (ToJSON (IO [Post])) arising from a use of 'json'

My project is on GitHub; to run it, just use stack build and then stack ghci. The error already appears during the build.

A: In Haskell, all functions are pure---so something like selectPosts, which needs to go out and do IO to talk to a database, can't just do that and return the value from the database. Instead, these kinds of functions return something of type IO a, which you can think of as basically a description of how to go out and do IO to get a value of type a. These "IO actions" can be composed together, and one of them can be assigned to main; at runtime, the RTS will execute these IO actions.

However, you aren't composing the IO a value that you get back from selectPosts to be part of the larger IO value that eventually becomes main; you are trying to use it directly by feeding it into json. This won't work, because there's no (good/easy) way to convert a description of how to do IO into a JSON string.

The way that Haskell deals with composing these values is through an abstraction known as a "monad", which is useful in a lot of other cases as well. do notation can be used to write this monadic sequencing in a very natural style. You can't just write posts <- withTestConnection S.selectPosts here, because Scotty's get function takes a value of the monadic ActionM type, not IO. However, it turns out that ActionM is basically a bunch of other useful stuff layered on top of IO, so it should be possible to "lift" the IO action from selectPosts into Scotty's ActionM monad:

    get "/posts" $ do
        posts <- liftIO $ withTestConnection S.selectPosts
        json posts

Side note: You may have noticed that I wrote withTestConnection S.selectPosts instead of withTestConnection $ \conn -> do S.selectPosts conn. Generally, if you have a do block with only one expression in it (not of the form x <- act), this is the same as the single expression outside of a do block: \conn -> S.selectPosts conn. Furthermore, Haskell tends to encourage partial application: you have S.selectPosts, which is a function Sql.Connection -> IO [M.Post]. \conn -> S.selectPosts conn is another function, of the same type, which passes the connection into selectPosts and then returns the same result as selectPosts---this function is indistinguishable from selectPosts itself! So if this is all that you need inside your withTestConnection, you should be able to simplify the whole lambda and do block to just S.selectPosts.
Since joining FOX 2 News as a general assignment reporter in June 1985, Roche Madden has reported on stories from Saudi Arabia to the former Soviet Union to San Francisco. He recently covered Hurricane Katrina in New Orleans.

Madden traveled to Saudi Arabia to report on preparations for Operation Desert Storm and on how Scott Air Force Base was playing an important role. In San Francisco, Madden followed a group of St. Louis officials to California, where they learned about earthquake preparedness. In Russia, Madden followed troops who took part in Provide Hope, a program which airlifted food and medicine to the former Soviet Union. In what he considers the highlight of his career, Madden joined Pope John Paul II as he traveled from Rome to Mexico City to St. Louis.

Madden won an Emmy Award for his reporting on flood coverage in the St. Louis area. In 1996 Madden won an Emmy for live spot news coverage.

Before coming to St. Louis, Madden spent five years in Tulsa, Oklahoma, and nearly two years in Midland, Texas, where he anchored the news. Madden majored in broadcast journalism at Trinity University in San Antonio, Texas.
.. _deployment:

Packaging and Deployment Overview
=================================

TODO: some of this is redundant to the (more current) :ref:`configuration`
doc -- should be consolidated and cross-referenced

This document describes how a developer can take advantage of Pylons'
application setup functionality to allow webmasters to easily set up their
application. Installation refers to the process of downloading and installing
the application with :term:`easy_install`, whereas setup refers to the
process of setting up an instance of an installed application so it is ready
to be deployed. For example, a wiki application might need to create database
tables to use. The webmaster would only install the wiki ``.egg`` file once
using :term:`easy_install`, but might want to run 5 wikis on the site, so
would set up the wiki 5 times, each time specifying a different database to
use so that the 5 wikis can run from the same code but store their data in
different databases.

Egg Files
*********

Before you can understand how a user configures an application you have to
understand how Pylons applications are distributed. All Pylons applications
are distributed in ``.egg`` format. An egg is simply a Python executable
package that has been put together into a single file. You create an egg from
your project by going into the project root directory and running the
command:

.. code-block:: bash

    $ python setup.py bdist_egg

If everything goes smoothly, a ``.egg`` file with the correct name and
version number appears in a newly created ``dist`` directory. When a
webmaster wants to install a Pylons application he will do so by downloading
the egg and then installing it.

Installing as a Non-root User
*****************************

It's quite possible when using shared hosting accounts that you do not have
root access to install packages.
In this case you can install :term:`setuptools` based packages like Pylons
and Pylons web applications in your home directory using a :term:`virtualenv`
setup. This way you can install all the packages you want to use without
super-user access.

Understanding the Setup Process
*******************************

Say you have written a Pylons wiki application called ``wiki``. When a
webmaster wants to install your wiki application he will run the following
command to generate a config file:

.. code-block:: bash

    $ paster make-config wiki wiki_production.ini

He will then edit the config file for his production environment with the
settings he wants and then run this command to set up the application:

.. code-block:: bash

    $ paster setup-app wiki_production.ini

Finally he might choose to deploy the wiki application through the Paste
server like this (although he could have chosen CGI/FastCGI/SCGI etc.):

.. code-block:: bash

    $ paster serve wiki_production.ini

The idea is that an application only needs to be installed once but, if
necessary, can be set up multiple times, each with a different configuration.
All Pylons applications are installed in the same way, so you as the
developer need to know how the above commands work.

Make Config
-----------

The ``paster make-config`` command looks for the file ``deployment.ini_tmpl``
and uses it as a basis for generating a new ``.ini`` file. Using our new wiki
example again, the ``wiki/config/deployment.ini_tmpl`` file contains the
text:

.. code-block:: ini

    [DEFAULT]
    debug = true
    email_to = you@yourdomain.com
    smtp_server = localhost
    error_email_from = paste@localhost

    [server:main]
    use = egg:Paste#http
    host = 0.0.0.0
    port = 5000

    [app:main]
    use = egg:wiki
    full_stack = true
    static_files = true

    cache_dir = %(here)s/data
    beaker.session.key = wiki
    beaker.session.secret = ${app_instance_secret}
    app_instance_uuid = ${app_instance_uuid}

    # If you'd like to fine-tune the individual locations of the cache data dirs
    # for the Cache data, or the Session saves, un-comment the desired settings
    # here:
    #beaker.cache.data_dir = %(here)s/data/cache
    #beaker.session.data_dir = %(here)s/data/sessions

    # WARNING: *THE LINE BELOW MUST BE UNCOMMENTED ON A PRODUCTION ENVIRONMENT*
    # Debug mode will enable the interactive debugging tool, allowing ANYONE to
    # execute malicious code after an exception is raised.
    #set debug = false

    # Logging configuration
    [loggers]
    keys = root

    [handlers]
    keys = console

    [formatters]
    keys = generic

    [logger_root]
    level = INFO
    handlers = console

    [handler_console]
    class = StreamHandler
    args = (sys.stderr,)
    level = NOTSET
    formatter = generic

    [formatter_generic]
    format = %(asctime)s %(levelname)-5.5s [%(name)s] [%(threadName)s] %(message)s

When the command ``paster make-config wiki wiki_production.ini`` is run, the
contents of this file are produced, so you should tweak this file to provide
sensible default configuration for production deployment of your app.

Setup App
---------

The ``paster setup-app`` command references the newly created ``.ini`` file
and calls the function ``wiki.websetup.setup_app()`` to set up the
application. If your application needs to be set up before it can be used,
you should edit the ``websetup.py`` file. Here's an example which just prints
the location of the cache directory via Python's logging facilities:

.. code-block:: python

    """Setup the helloworld application"""
    import logging

    from pylons import config

    from helloworld.config.environment import load_environment

    log = logging.getLogger(__name__)

    def setup_app(command, conf, vars):
        """Place any commands to setup helloworld here"""
        load_environment(conf.global_conf, conf.local_conf)
        log.info("Using cache directory %s" % config['cache.dir'])

For a more useful example, say your application needs a database set up and
loaded with initial data. The user will specify the location of the database
to use by editing the config file before running the ``paster setup-app``
command. The ``setup_app()`` function will then be able to load the
configuration and act on it in the function body. This way, the
``setup_app()`` function can be used to initialize the database when ``paster
setup-app`` is run. Using the optional :term:`SQLAlchemy` project template
support when creating a Pylons project will set all of this up for you in a
basic way. The :ref:`quickwiki_tutorial` illustrates an example of this
configuration.

Deploying the Application
*************************

Once the application is set up it is ready to be deployed. There are lots of
ways of deploying an application, one of which is to use the ``paster serve``
command, which takes the configuration file that has already been used to set
up the application and serves it on a local HTTP server for production use:

.. code-block:: bash

    $ paster serve wiki_production.ini

More information on Paste deployment options is available on the Paste
website at http://pythonpaste.org. See :ref:`deployment_webservers` for
alternative Pylons deployment scenarios.

Advanced Usage
**************

So far everything we have done has happened through the
``paste.script.appinstall.Installer`` class, which looks for the
``deployment.ini_tmpl`` and ``websetup.py`` files and behaves accordingly. If
you need more control over how your application is installed you can use your
own installer class.
Create a file, for example ``wiki/installer.py``, and code your new installer
class in the file by deriving it from the existing one:

.. code-block:: python

    from paste.script.appinstall import Installer

    class MyInstaller(Installer):
        pass

You then override the functionality as necessary (have a look at the source
code for ``Installer`` as a basis). You then change your application's
``setup.py`` file so that the ``paste.app_install`` entry point ``main``
points to your new installer:

.. code-block:: python

    entry_points="""
    ...
    [paste.app_install]
    main=wiki.installer:MyInstaller
    ...
    """,

Depending on how you code your ``MyInstaller`` class, you may not even need
your ``websetup.py`` or ``deployment.ini_tmpl``, as you might have decided to
create the ``.ini`` file and set up the application in an entirely different
way.

.. _deployment_webservers:

Running Pylons Apps with Other Web Servers
==========================================

This document assumes that you have already installed a Pylons web
application and created a configuration file (see :ref:`run-config`) for it.
Pylons applications use `PasteDeploy <http://pythonpaste.org/deploy/>`_ to
start up your Pylons WSGI application, and can use the flup package to
provide a Fast-CGI, SCGI, or AJP connection to it.

Using Fast-CGI
**************

`Fast-CGI <http://fastcgi.com/>`_ is a gateway to connect web servers like
`Apache <http://httpd.apache.org/>`_ and `lighttpd <http://lighttpd.net/>`_
to a CGI-style application. Out of the box, Pylons applications can run with
Fast-CGI in either a threaded or forking mode (threaded is the recommended
choice). Setting a Pylons application to use Fast-CGI is very easy, and
merely requires you to change the config line like so:

.. code-block:: ini

    # default
    [server:main]
    use = egg:Paste#http

    # Use Fastcgi threaded
    [server:main]
    use = egg:PasteScript#flup_fcgi_thread
    host = 0.0.0.0
    port = 6500

Note that you will need to install the
`flup <http://www.saddi.com/software/flup/dist/>`_ package, which can be
installed via easy_install:

.. code-block:: bash

    $ easy_install -U flup

The options in the config file are passed on to flup. The two common ways to
run Fast CGI are either using a socket to listen for requests, or listening
on a port/host, which allows a webserver to send your requests to web
applications on a different machine. To configure for a socket, your
``server:main`` section should look like this:

.. code-block:: ini

    [server:main]
    use = egg:PasteScript#flup_fcgi_thread
    socket = /location/to/app.socket

If you want to listen on a host/port, the configuration cited in the first
example will do the trick.

Apache Configuration
********************

For this example, we will assume you're using Apache 2, though Apache 1
configuration will be very similar. First, make sure that you have the Apache
`mod_fastcgi <http://fastcgi.com/mod_fastcgi/docs/mod_fastcgi.html>`_ module
installed in your Apache. There will most likely be a section where you
declare your FastCGI servers, and whether they're external:

.. code-block:: apacheconf

    <IfModule mod_fastcgi.c>
        FastCgiIpcDir /tmp
        FastCgiExternalServer /some/path/to/app/myapp.fcgi -host some.host.com:6200
    </IfModule>

In our example we'll assume you're going to run a Pylons web application
listening on a host/port. Changing ``-host`` to ``-socket`` will let you use
a Pylons web application listening on a socket. The filename you give in the
second option does not need to physically exist on the webserver; URIs that
Apache resolves to this filename will be handled by the FastCGI application.

The other important line to ensure that your Apache webserver has is the one
indicating that fcgi scripts should be handled with Fast-CGI:

.. code-block:: apacheconf

    AddHandler fastcgi-script .fcgi

Finally, to configure your website to use the Fast CGI application you will
need to indicate the script to be used:

.. code-block:: apacheconf

    <VirtualHost *:80>
        ServerAdmin george@monkey.com
        ServerName monkey.com
        ServerAlias www.monkey.com
        DocumentRoot /some/path/to/app
        ScriptAliasMatch ^(/.*)$ /some/path/to/app/myapp.fcgi$1
    </VirtualHost>

Other useful directives should be added as needed, for example the ErrorLog
directive. This configuration will result in all requests being sent to your
FastCGI application.

PrefixMiddleware
****************

``PrefixMiddleware`` provides a way to manually override the root prefix
(``SCRIPT_NAME``) of your application for certain situations. When running an
application under a prefix (such as '``/james``') in FastCGI/apache, the
``SCRIPT_NAME`` environment variable is automatically set to the appropriate
value: '``/james``'. Pylons' URL generators such as ``url`` always take the
``SCRIPT_NAME`` value into account.

One situation where ``PrefixMiddleware`` is required is when an application
is accessed via a reverse proxy with a prefix. The application is accessed
through the reverse proxy via the URL prefix '``/james``', whereas the
reverse proxy forwards those requests to the application at the prefix
'``/``'. The reverse proxy, being an entirely separate web server, has no way
of specifying the ``SCRIPT_NAME`` variable; it must be manually set by a
``PrefixMiddleware`` instance. Without setting ``SCRIPT_NAME``, ``url`` will
generate URLs such as '``/purchase_orders/1``', when it should be generating
'``/james/purchase_orders/1``'.

To filter your application through a ``PrefixMiddleware`` instance, add the
following to the '``[app:main]``' section of your .ini file:

.. code-block:: ini

    filter-with = proxy-prefix

    [filter:proxy-prefix]
    use = egg:PasteDeploy#prefix
    prefix = /james

The name ``proxy-prefix`` simply acts as an identifier of the filter section;
feel free to rename it.

These .ini settings are equivalent to adding the following to the end of your
application's ``config/middleware.py``, right before the ``return app`` line:

.. code-block:: python

    # This app is served behind a proxy via the following prefix (SCRIPT_NAME)
    app = PrefixMiddleware(app, global_conf, prefix='/james')

This requires the additional import line:

.. code-block:: python

    from paste.deploy.config import PrefixMiddleware

Note that the .ini settings apply only to that particular configuration file,
whereas the modification to ``config/middleware.py`` will set up an instance
of ``PrefixMiddleware`` under every environment (.ini).

Using Java Web Servers with Jython
**********************************

See :ref:`java_deployment`.

.. _adding_documentation:

Documenting Your Application
============================

TODO: this needs to be rewritten -- Pudge is effectively dead

While the information in this document should be correct, it may not be
entirely complete... Pudge is somewhat unruly to work with at this time, and
you may need to experiment to find a working combination of package versions.
In particular, it has been noted that an older version of Kid, like 0.9.1,
may be required. You might also need to install {{RuleDispatch}} if you get
errors related to {{FormEncode}} when attempting to build documentation.
Apologies for this suboptimal situation. Considerations are being taken to
fix Pudge or supplant it for future versions of Pylons.

Introduction
************

Pylons comes with support for automatic documentation generation tools like
`Pudge <http://pudge.lesscode.org>`_. Automatic documentation generation
allows you to write your main documentation in the docs directory of your
project, as well as throughout the code itself using docstrings. When you run
a simple command, all the documentation is built into sophisticated HTML.
Tutorial
********

First create a project as described in :ref:`getting_started`. You will
notice a docs directory within your main project directory. This is where you
should write your main documentation. There is already an ``index.txt`` file
in ``docs``, so you can already generate documentation.

First we'll install Pudge and buildutils. By default, Pylons sets an option
to use `Pygments <http://pygments.org>`_ for syntax-highlighting of code in
your documentation, so you'll need to install it too (unless you wish to
remove the option from ``setup.cfg``):

.. code-block:: bash

    $ easy_install pudge buildutils
    $ easy_install Pygments

Then run the following command from your project's main directory where the
``setup.py`` file is:

.. code-block:: bash

    $ python setup.py pudge

.. note:: The ``pudge`` command is currently disabled by default. Run the
   following command first to enable it:

   .. code-block:: bash

       $ python setup.py addcommand -p buildutils.pudge_command

   Thanks to Yannick Gingras for the tip.

Pudge will produce output similar to the following to tell you what it is
doing and show you any problems:

.. code-block:: text

    running pudge
    generating documentation
    copying: pudge\template\pythonpaste.org\rst.css -> do/docs/html\rst.css
    copying: pudge\template\base\pudge.css -> do/docs/html\pudge.css
    copying: pudge\template\pythonpaste.org\layout.css -> do/docs/html\layout.css
    rendering: pudge\template\pythonpaste.org\site.css.kid -> site.css
    colorizing: do/docs/html\do/__init__.py.html
    colorizing: do/docs/html\do/tests/__init__.py.html
    colorizing: do/docs/html\do/i18n/__init__.py.html
    colorizing: do/docs/html\do/lib/__init__.py.html
    colorizing: do/docs/html\do/controllers/__init__.py.html
    colorizing: do/docs/html\do/model.py.html

Once finished you will notice a ``docs/html`` directory. The ``index.html``
is the main file, which was generated from ``docs/index.txt``.
Learning ReStructuredText
*************************

Python programs typically use a rather odd format for documentation called
`reStructuredText`_. It is designed so that the text file used to generate
the HTML is as readable as possible, but as a result it can be a bit
confusing for beginners. Read the reStructuredText tutorial, which is part of
the `docutils <http://docutils.sf.net>`_ project. Once you have mastered
reStructuredText you can write documentation to your heart's content.

.. _reStructuredText: http://docutils.sourceforge.net/rst.html

Using Docstrings
****************

Docstrings are one of Python's most useful features if used properly. They
are described in detail in the Python documentation, but basically they allow
you to document any module, class, method or function -- in fact, just about
anything. Users can then access this documentation interactively. Try this:

.. code-block:: pycon

    >>> import pylons
    >>> help(pylons)
    ...

As you can see if you tried it, you get detailed information about the pylons
module, including the information in the docstring.

Docstrings are also extracted by Pudge, so you can describe how to use all
the controllers, actions and modules that make up your application. Pudge
will extract that information and turn it into useful API documentation
automatically. Try clicking the ``Modules`` link in the HTML documentation
you generated earlier, or look at the Pylons source code for some examples of
how to use docstrings.

Using doctest
*************

The final useful thing about docstrings is that you can use the ``doctest``
module with them. ``doctest``, again, is described in the Python
documentation, but it looks through your docstrings for things that look like
Python code written at a Python prompt. Consider this example:

.. code-block:: pycon

    >>> a = 2
    >>> b = 3
    >>> a + b
    5

If ``doctest`` was run on this file, it would have found the example above
and executed it.
If, when the expression ``a + b`` is executed, the result is not ``5``,
``doctest`` would raise an exception. This is a very handy way of checking
that the examples in your documentation are actually correct.

To run ``doctest`` on a module use:

.. code-block:: python

    if __name__ == "__main__":
        import doctest
        doctest.testmod()

The ``if __name__ == "__main__":`` part ensures that your module won't be
tested if it is just imported, only if it is run from the command line.

To run ``doctest`` on a file use:

.. code-block:: python

    import doctest
    doctest.testfile("docs/index.txt")

You might consider incorporating this functionality in your ``tests/test.py``
file to improve the testing of your application.

Summary
*******

So if you write your documentation in reStructuredText, in the ``docs``
directory and in your code's docstrings, liberally scattered with example
code, Pylons provides a very useful and powerful system for you. If you want
to find out more information, have a look at the Pudge documentation or try
tinkering with your project's ``setup.cfg`` file, which contains the Pudge
settings.

.. _app_distribution:

Distributing Your Application
=============================

TODO: this assumes helloworld tutorial context that is no longer present, and
could be consolidated with packaging info in :ref:`deployment`

As mentioned earlier, eggs are a convenient format for packaging
applications. You can create an egg for your project like this:

.. code-block:: bash

    $ cd helloworld
    $ python setup.py bdist_egg

Your egg will be in the ``dist`` directory and will be called
``helloworld-0.0.0dev-py2.4.egg``. You can change options in ``setup.py`` to
change information about your project. For example, change the version to
``version="0.1.0",`` and run ``python setup.py bdist_egg`` again to produce a
new egg with an updated version number.

You can then register your application with the `Python Package Index`_
(PyPI) with the following command:

.. code-block:: bash

    $ python setup.py register

.. note:: You should not do this unless you actually want to register a
   package!

If users want to install your software and have installed
:term:`easy_install`, they can install your new egg as follows:

.. code-block:: bash

    $ easy_install helloworld==0.1.0

This will retrieve the package from PyPI and install it. Alternatively you
can install the egg locally:

.. code-block:: bash

    $ easy_install -f C:\path\with\the\egg\files\in helloworld==0.1.0

In order to use the egg in a website you need to use Paste. You have already
used Paste to create your Pylons template and to run a test server to test
the tutorial application. Paste is a set of tools, available at
http://pythonpaste.org, for providing a uniform way in which all compatible
Python web frameworks can work together. To run a Paste application such as
any Pylons application, you need to create a Paste configuration file. The
idea is that your Paste configuration file will contain all the configuration
for all the different Paste applications you run. A configuration file
suitable for development is in the ``helloworld/development.ini`` file of the
tutorial, but the idea is that the person using your egg will add relevant
configuration options to their own Paste configuration file so that your egg
behaves the way they want. See the section below for more on this
configuration.

Paste configuration files can be run in many different ways: from CGI
scripts, as standalone servers, with FastCGI, SCGI, mod_python and more. This
flexibility means that your Pylons application can be run in virtually any
environment and can also take advantage of the speed benefits that the chosen
deployment option offers.

.. seealso:: :ref:`deployment_webservers`

.. _Python Package Index: http://pypi.python.org/pypi

Running Your Application
************************

In order to run your application, your users will need to install it as
described above, then generate a config file and set up your application
before deploying it. This is described in :ref:`run-config` and
:ref:`deployment`.
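The database-initialization scenario described in the Setup App section above
can be sketched in plain Python. This is an illustration only: the
``sqlite.db`` config key and the wiki schema are invented for the example,
and ``conf`` is treated as a plain mapping rather than the real PasteDeploy
configuration object that ``paster setup-app`` would hand over.

```python
import sqlite3

def setup_app(command, conf, vars):
    """Create the wiki's tables, reading the database location from the
    config mapping (the 'sqlite.db' key here is hypothetical)."""
    db_path = conf.get("sqlite.db", ":memory:")
    connection = sqlite3.connect(db_path)
    try:
        # Guard with IF NOT EXISTS so that running ``paster setup-app``
        # twice against the same config file is harmless.
        connection.execute(
            "CREATE TABLE IF NOT EXISTS pages ("
            "title TEXT PRIMARY KEY, "
            "content TEXT NOT NULL)"
        )
        # Load initial data for a fresh instance, skipping existing rows.
        connection.execute(
            "INSERT OR IGNORE INTO pages (title, content) "
            "VALUES ('FrontPage', 'Welcome to the wiki!')"
        )
        connection.commit()
    finally:
        connection.close()
```

The point is the pattern, not the details: read a location from the webmaster's
config and create the schema idempotently, so each configured instance gets its
own database.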
Tomás Garicano Goñi (Pamplona, 9 February 1910 - Madrid, 16 January 1988) was a Navarrese lawyer and military officer.

Biography

He studied law at the University of Zaragoza and graduated in 1929 from the University of Madrid. In 1930 he entered the Military Legal Corps and worked as an auditor in A Coruña, Madrid, the Canary Islands, Valladolid and Burgos. He took an active part in the military uprising that triggered the Spanish Civil War, serving as liaison between the generals Emilio Mola and Pablo Martín Alonso, and during the conflict he was a military auditor and adviser to the Army Corps of Navarre. In 1940 he was appointed general of the Air Legal Corps and in 1941 secretary general of justice and law. From 1951 to 1956 he was provincial head of the Movimiento Nacional in Gipuzkoa. He was also the government's delegate to the Canal de Isabel II (1965-1966) and civil governor of Barcelona (1966 - October 1969), a post from which he repressed the student and political movements. He left that office when he was appointed Minister of the Interior (Gobernación), a position he held until 1973. A member of the National Council of the Movimiento, after Franco's death he declared himself in favour of political reform. In 1978 he withdrew from both politics and the army and was named vice-president of the paper company Sarrió, a post he held until his death.

External links

Obituary in El País
Las mentiras del gobernador

Categories: Military personnel from Pamplona · Civil governors of Barcelona · Recipients of the Order of Isabella the Catholic · Navarrese ministers of the Government of Spain · Civil governors of Gipuzkoa · Grand Cross of the Order of Isabella the Catholic · Grand Cross of the Order of Charles III · Died in Madrid · Politicians from Pamplona
\section{Introduction} A \emph{planar correspondence} is a subvariety of the product of two projective planes. A substantial amount of work in the classical algebraic geometry has been devoted to the construction and analysis of such correspondences. As Fulton remarks in \cite[Chapter 16]{Fulton}, a glance at the long encyclopedia article of Berzolari \cite{Berzolari} impresses one with the importance of correspondences in mathematics through the early part of the previous century. For a survey of results on planar correspondences in the classical period, see \cite[Chapter V]{Berzolari} and \cite{SnyderA}. Let $k$ be a fixed algebraically closed field. The aim of the present paper is to generalize De Jonqui\`eres' construction of planar correspondences over $k$ with controlled multidegree. This leads to a characterization of integral homology classes of $\mathbb{P}^2 \times \mathbb{P}^2$ which are representable by a (reduced and irreducible) subvariety. \begin{theorem}\label{main} Let $\xi$ be an element in the Chow homology group \[ \xi = a[\mathbb{P}^2 \times \mathbb{P}^0]+b[\mathbb{P}^1 \times \mathbb{P}^1]+c[\mathbb{P}^0 \times \mathbb{P}^2] \in A_2(\mathbb{P}^2 \times \mathbb{P}^2). \] Then $\xi$ is the class of a reduced and irreducible subvariety if and only if $a,b,c$ are nonnegative and one of the following conditions is satisfied: \begin{enumerate} \item $b>0$ and $b^2 \ge ac$. \item $a=1$, $b=0$, $c=0$. \item $a=0$, $b=0$, $c=1$. \end{enumerate} \end{theorem} To the author's knowledge, the above characterization was not obtained in the classical period. For a discussion of the multidegree $a,b,c$ in the language of classical geometry, see \cite{Baker,Semple-Roth}. The necessity of the above conditions for the representability of $\xi$ is linked to several important achievements of the classical algebraic geometry: \begin{enumerate} \item[(1)] If $\xi$ is representable by a subvariety, then $a, b, c$ are nonnegative. 
\end{enumerate} This follows from Bertini's theorem, which says that, for example, the product of two general lines transversely intersects the variety representing $\xi$ in $b$ distinct points. \begin{enumerate} \item[(2)] If $\xi$ is representable by a subvariety and $a=1$, then $b^2 \ge c$. \end{enumerate} For the graph of a rational map between the projective planes, the inequality $b^2 \ge ac$ is B\'ezout's theorem that two plane curves of degree $b$ without common components intersect in at most $b^2$ distinct points. \begin{enumerate} \item[(3)] If $\xi$ is representable by a subvariety, then $b^2 \ge ac$. \end{enumerate} This is Hodge's index theorem on base-point-free linear systems on an algebraic surface. To be more precise, consider a resolution of singularities of the surface representing $\xi$. If $D_1,D_2$ are pull-backs of general lines from the first and the second projective plane, then the index theorem says that \[ (D_1 \cdot D_2)^2 \ge (D_1 \cdot D_1) (D_2 \cdot D_2). \] \begin{enumerate} \item[(4)] If $\xi$ is representable by a subvariety and $b=0$, then $a=1,c=0$ or $a=0,c=1$. \end{enumerate} Suppose $S$ is a subvariety representing $\xi$. By the previous item, either $c=0$ or $a=0$. In the former case, the image of the projection from $S$ to the second projective plane is disjoint from a line. Since this image is a projective variety contained in $\mathbb{A}^2$, the projection is constant, and hence $a=1$. The construction of planar correspondences with given $a,b,c$ is a classical topic. In fact, one of the first papers on the subject settles the question of existence when $a=1,c=1$. In \cite{deJonquieres1} De Jonqui\`eres constructs a transformation of $\mathbb{P}^2$ which now bears his name, a birational transformation defined by the ratio of three homogeneous polynomials with given degree $b \ge 1$ and without common factors. 
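For instance, in degree $b=2$ the ratio of the three quadrics $yz$, $xz$, $xy$, which have no common factor, defines the standard quadratic transformation $\sigma:[x:y:z]\mapsto[yz:xz:xy]$. The following sketch (plain Python with exact integer arithmetic; the sample points are arbitrary choices, and this is only an illustrative aside) checks that $\sigma \circ \sigma$ is the identity up to the common factor $xyz$, so that $\sigma$ is birational:

```python
def sigma(p):
    # standard quadratic transformation [x:y:z] -> [yz:xz:xy]
    x, y, z = p
    return (y * z, x * z, x * y)

def proportional(p, q):
    # projective equality: all 2x2 minors of the matrix with rows p, q vanish
    return (p[0] * q[1] == p[1] * q[0] and
            p[0] * q[2] == p[2] * q[0] and
            p[1] * q[2] == p[2] * q[1])

# sigma is an involution away from the coordinate triangle xyz = 0:
# sigma(sigma([x:y:z])) = [xyz*x : xyz*y : xyz*z] = [x:y:z]
for p in [(2, 3, 5), (1, 4, 7), (3, 1, 1)]:
    assert proportional(sigma(sigma(p)), p)
```

The common factor $xyz$ that appears after composing reflects the three base points of $\sigma$ at the coordinate vertices.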
The present paper is devoted to the construction of planar correspondences with given $a,b,c$ which satisfy $b>0$ and $b^2 \ge ac$. The possibility of such a construction is closely related to a problem on integral quadratic forms considered by Erd\H os, Ko, and Mordell. We briefly explain the connection. Let us believe that a uniform construction of planar correspondences with given homology class exists. Then we should study embeddings of rational surfaces in $\mathbb{P}^2 \times \mathbb{P}^2$, because homology classes with $a=1$ are representable only by a rational surface if they are representable by a subvariety at all. Moreover, a desingularization of any such rational surface should be a projective plane blown up at finitely many (possibly infinitely near) points. Let $n$ be a nonnegative integer, and consider a projective plane blown up at $n$ points. We may identify its middle homology group with $\mathbb{Z}^{n+1}$, and the intersection product with \[ \mathbf{x}\circ\mathbf{y}:=x_0y_0-x_1y_1-\cdots-x_ny_n, \qquad \mathbf{x},\mathbf{y} \in \mathbb{Z}^{n+1}. \] If $\mathbf{x} \circ \mathbf{x}$ and $\mathbf{y} \circ \mathbf{y}$ are positive, then by the reversed Cauchy-Schwarz inequality \[ (\mathbf{x} \circ \mathbf{y})^2 \ge (\mathbf{x}\circ \mathbf{x}) (\mathbf{y}\circ\mathbf{y}). \] In order to prove the existence of a rational surface in $\mathbb{P}^2 \times \mathbb{P}^2$ with given homology class, at the very least we should be able to answer the following question: \begin{quote} \emph{Given positive integers $a,b,c$ which satisfy $b^2 \ge ac$, do there exist $\mathbf{x}$, $\mathbf{y}$ in $\mathbb{Z}^{n+1}$ such that $\mathbf{x} \circ \mathbf{x}=a$, $\mathbf{x} \circ \mathbf{y}=b$, $\mathbf{y} \circ \mathbf{y}=c$?} \end{quote} This is a subtle arithmetic question of Erd\H os, whose answer depends on $n$. In \cite{Ko} Ko answered in the negative for $n=3$, and in the affirmative for $n=4$. 
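For small parameters the question lends itself to exhaustive search. The following brute-force sketch (illustrative only: the search bound and the sample triple are ad hoc choices, and failure to find a witness within the bound proves nothing) looks for vectors $\mathbf{x},\mathbf{y} \in \mathbb{Z}^{n+1}$ with bounded entries realizing a given triple:

```python
import itertools

def lorentz(u, v):
    # Lorentzian inner product u∘v = u0*v0 - u1*v1 - ... - un*vn
    return u[0] * v[0] - sum(a * b for a, b in zip(u[1:], v[1:]))

def find_witnesses(a, b, c, n, bound=3):
    # brute-force search for x, y in Z^{n+1} with
    # x∘x = a, x∘y = b, y∘y = c, all entries in [-bound, bound]
    box = range(-bound, bound + 1)
    vectors = list(itertools.product(box, repeat=n + 1))
    xs = [v for v in vectors if lorentz(v, v) == a]
    ys = [v for v in vectors if lorentz(v, v) == c]
    for x in xs:
        for y in ys:
            if lorentz(x, y) == b:
                return x, y
    return None
```

For example, with $n=4$ the call `find_witnesses(2, 3, 4, 4)` produces a pair realizing the triple $(2,3,4)$, which satisfies $b^2 = 9 \ge 8 = ac$.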
The main step in the proof of Theorem \ref{main} is to give an affirmative answer to the same question using only those $\mathbf{x}$ and $\mathbf{y}$ which correspond to base-point-free divisors with sufficiently many sections on the blown up projective plane. In the following section we show that the arithmetic problem above can be linearized in a suitable sense. De Jonqui\`eres' construction will be reviewed and generalized in Section \ref{SectionDJ}. This generalization gives an affirmative answer to the linearized version of the arithmetic problem. In Section \ref{SectionProof}, we combine results from the previous sections to give a proof of the main theorem. A conjectural description of representable homology classes in $\mathbb{P}^3 \times \mathbb{P}^3$ will be given in Section \ref{SectionLast}. \begin{acknowledgements} The author is grateful to Tsit Yuen Lam, David Leep, Bruce Reznick, and David Speyer for useful comments on Waring's problem for integral quadratic forms. He thanks Igor Dolgachev, Allen Knutson, Mircea Musta\c t\u a, Sam Payne, and Hal Schenck for helpful discussions. \end{acknowledgements} \section{Waring's problem for integral quadratic forms}\label{SectionWaring} Let $n$ be a nonnegative integer, and consider the abelian group of lattice points \[ \mathbb{Z}^{n+1}:=\Big\{\mathbf{x}=(x_0,x_1,\ldots,x_n)\mid x_i \in \mathbb{Z}\Big\}. \] Denote the Euclidean and the Lorentzian inner product on $\mathbb{Z}^{n+1}$ by \[ \mathbf{x}*\mathbf{y}:=x_0y_0+x_1y_1+\cdots+x_ny_n, \qquad \mathbf{x}\circ\mathbf{y}:=x_0y_0-x_1y_1-\cdots-x_ny_n. \] By the (reversed) Cauchy-Schwarz inequality, we have \[ (\mathbf{x} * \mathbf{y})^2 \le (\mathbf{x}* \mathbf{x})(\mathbf{y}*\mathbf{y}), \] and if $\mathbf{x}$ and $\mathbf{y}$ are time-like vectors (that is, if $\mathbf{x} \circ \mathbf{x}$ and $\mathbf{y} \circ \mathbf{y}$ are positive), \[ (\mathbf{x} \circ \mathbf{y})^2 \ge (\mathbf{x}\circ \mathbf{x}) (\mathbf{y}\circ\mathbf{y}). 
\] \begin{definition}\label{TwoCompleteness} Let $\mathscr{L}$ be a subset of $\mathbb{Z}^{n+1}$, and let $a,b,c$ be integers. \begin{enumerate} \item $(\mathscr{L},*)$ \emph{represents} $(a,b,c)$ if there exist $\mathbf{x},\mathbf{y} \in \mathscr{L}$ such that \[ \mathbf{x} * \mathbf{x} = a, \qquad \mathbf{x} * \mathbf{y} = b, \qquad \mathbf{y} * \mathbf{y} = c. \] $(\mathscr{L},*)$ is \emph{complete} if it represents every positive $(a,b,c)$ which satisfy $b^2 \le ac$. \item $(\mathscr{L},\circ)$ represents $(a,b,c)$ if there exist $\mathbf{x},\mathbf{y} \in \mathscr{L}$ such that \[ \mathbf{x} \circ \mathbf{x} = a, \qquad \mathbf{x} \circ \mathbf{y} = b, \qquad \mathbf{y} \circ \mathbf{y} = c. \] $(\mathscr{L},\circ)$ is \emph{complete} if it represents every positive $(a,b,c)$ which satisfy $b^2 \ge ac$. \end{enumerate} \end{definition} The problem of deciding the completeness of $\mathscr{L}$ may be viewed as an extension of Waring's problem \cite{Ko,Mordell1,Mordell2}. For example, $(\mathscr{L},*)$ represents $(a,b,c)$ if and only if the binary quadratic form \[ ax^2+2bxy+cy^2 \] is a sum of $n+1$ squares of integral linear forms with coefficient vectors from $\mathscr{L}$. \begin{example} $(\mathbb{Z}^4,*)$ is not complete because the condition fails for $a=1,b=2,c=19$. To see this, note that $19$ is a sum of four squares in exactly two different ways \[ 19=4^2+1^2+1^2+1^2=3^2+3^2+1^2+0^2. \] Mordell proved in \cite{Mordell1,Mordell2} that $(\mathbb{Z}^5,*)$ is complete. \end{example} \begin{example} $(\mathbb{N}^8,*)$ is not complete because the condition fails for $a=8,b=1,c=8$. To see this, note that $8$ is a sum of eight squares in exactly three different ways \begin{eqnarray*} 8&=&2^2+2^2+0^2+0^2+0^2+0^2+0^2+0^2\\ &=&2^2+1^2+1^2+1^2+1^2+0^2+0^2+0^2\\ &=&1^2+1^2+1^2+1^2+1^2+1^2+1^2+1^2. \end{eqnarray*} The author does not know the smallest $n$ for which $(\mathbb{N}^{n+1},*)$ is complete.
\end{example} Showing that $(\mathscr{L},\circ)$ is or is not complete is a delicate problem in general. Ko proved in \cite{Ko} that $(\mathbb{Z}^4,\circ)$ is not complete and that $(\mathbb{Z}^5,\circ)$ is complete, thus answering a question of Erd\H os. Proposition \ref{Division} below shows that the problem of representation by $(\mathscr{L},\circ)$ can be linearized in a suitable sense if $\mathscr{L}$ is closed under the addition of $\mathbb{Z}^{n+1}$. This observation will play an important role in the proof of Theorem \ref{main}. \begin{definition} Let $k$ be a positive integer. $(\mathscr{L},\circ)$ is \emph{linearly $k$-complete} if it represents every positive $(a,b,c)$ which satisfy \[ 2b \ge a+c \quad \text{and} \quad b \le k. \] Similarly, we say that $(\mathscr{L},\circ)$ is \emph{$k$-complete} if it represents every positive $(a,b,c)$ which satisfy \[ b^2 \ge ac \quad \text{and} \quad b \le k. \] \end{definition} By the inequality of arithmetic and geometric means, if $(\mathscr{L},\circ)$ is $k$-complete, then $(\mathscr{L},\circ)$ is linearly $k$-complete. \begin{proposition}\label{Division} If $(\mathscr{L},\circ)$ is linearly $k$-complete and $\mathscr{L}$ is closed under the addition of $\mathbb{Z}^{n+1}$, then $(\mathscr{L},\circ)$ is $k$-complete. \end{proposition} \begin{proof} Let $a,b,c$ be positive integers which satisfy $b^2 \ge ac$ and $b \le k$. We show by induction on $b$ that there exist $\mathbf{x},\mathbf{y} \in \mathscr{L}$ such that \[ \mathbf{x} \circ \mathbf{x} = a, \qquad \mathbf{x} \circ \mathbf{y} = b, \qquad \mathbf{y} \circ \mathbf{y} = c. \] The base case $b=1$ is immediate. We may suppose that $2b<a+c$ and $c \ge a$. Since $b^2 \ge ac \ge a^2$, this implies that $b \ge a$; in fact $b>a$, because $b=a$ would force $a=b=c$ and hence $2b=a+c$. The idea is to rewrite $a,b,c$ by \[ a=a, \qquad b=a+(b-a), \qquad c=a+2(b-a)+(a+c-2b), \] and consider the new triple \[ \tilde a:=a, \qquad \tilde b:=b-a,\qquad \tilde c:=a+c-2b.
\] Under our assumptions on $a,b,c$, the new triple $\tilde a,\tilde b,\tilde c$ has the following properties: \begin{enumerate}[(1)] \item $\tilde a$, $\tilde b$, $\tilde c$ are positive integers. \item The discriminant remains unchanged: $\tilde b^2-\tilde a \tilde c = b^2 -ac \ge 0$. \item The induction invariant drops: $\tilde b<b \le k$. \end{enumerate} Therefore we may use the induction hypothesis to find $\tilde{\mathbf{x}},\tilde{\mathbf{y}} \in \mathscr{L}$ such that \[ \tilde{\mathbf{x}} \circ \tilde{\mathbf{x}} = \tilde a, \qquad \tilde{\mathbf{x}} \circ \tilde{\mathbf{y}} = \tilde b, \qquad \tilde{\mathbf{y}} \circ \tilde{\mathbf{y}} = \tilde c. \] Since $\circ$ is bilinear, we have \[ \tilde{\mathbf{x}} \circ \tilde{\mathbf{x}} = a, \qquad \tilde{\mathbf{x}} \circ (\tilde{\mathbf{x}}+\tilde{\mathbf{y}}) = b, \qquad (\tilde{\mathbf{x}}+\tilde{\mathbf{y}}) \circ (\tilde{\mathbf{x}}+\tilde{\mathbf{y}}) = c. \] Now $\mathbf{x}:=\tilde{\mathbf{x}}$ and $\mathbf{y}:=\tilde{\mathbf{x}}+\tilde{\mathbf{y}} $ are elements of $\mathscr{L}$ with the desired properties. \end{proof} As an application, we show that $(\mathbb{N}^7,\circ)$ is complete. A modification of the argument below will play a role in the proof of Theorem \ref{main}. \begin{corollary}\label{Seven} $(\mathbb{N}^7,\circ)$ is complete. \end{corollary} \begin{proof} By Proposition \ref{Division}, it is enough to show that $(\mathbb{N}^7,\circ)$ is linearly $k$-complete for all $k$. Let $a,b,c$ be positive integers which satisfy $2b \ge a+c$. We may suppose that $c \le a$ and $c \le b$. Define nonnegative integers \[ r_1:=\lfloor\frac{c}{2}\rfloor, \qquad r_2:=b-c, \qquad r_3:=2b-a-c. \] Use Lagrange's four squares theorem to find nonnegative integers $n_1,n_2,n_3,n_4$ such that \[ r_3=n_1^2+n_2^2+n_3^2+n_4^2. \] If $c$ is odd, then set \[ \mathbf{x}:=(r_1+r_2+1,r_1+r_2,0,n_1,n_2,n_3,n_4), \qquad \mathbf{y}:=(r_1+1,r_1,0,0,0,0,0). 
\] We have \[ \mathbf{x} \circ \mathbf{x} =(2r_1+1)+2r_2-r_3, \quad \mathbf{x} \circ \mathbf{y}=(2r_1+1)+r_2, \quad \mathbf{y} \circ \mathbf{y}=2r_1+1. \] If $c$ is even, then set \[ \mathbf{x}:=(r_1+r_2+1,r_1+r_2,1,n_1,n_2,n_3,n_4), \qquad \mathbf{y}:=(r_1+1,r_1,1,0,0,0,0). \] We have \[ \mathbf{x} \circ \mathbf{x} =2r_1+2r_2-r_3, \quad \mathbf{x} \circ \mathbf{y}=2r_1+r_2, \quad \mathbf{y} \circ \mathbf{y}=2r_1. \] In both cases, \[ \mathbf{x} \circ \mathbf{x} = a, \qquad \mathbf{x} \circ \mathbf{y} = b, \qquad \mathbf{y} \circ \mathbf{y} = c. \] \end{proof} The author does not know the smallest $n$ for which $(\mathbb{N}^{n+1},\circ)$ is complete. \section{Linear systems of De Jonqui\`eres type}\label{SectionDJ} Let $\mathbf{p}=(p_1,p_2,\ldots,p_n)$ be a sequence of distinct points in the projective plane $\mathbb{P}^2$. Consider the set of nonnegative lattice points \[ \mathbb{N}^{n+1}:=\Big\{\mathbf{m}:=(d,m_1,m_2,\ldots,m_n) \mid d \ge 0, \ m_i \ge 0\Big\}. \] An element $\mathbf{m} \in \mathbb{N}^{n+1}$ as above, together with the sequence $\mathbf{p}$, defines a linear system of plane curves \[ L(\mathbf{p},\mathbf{m}):=\Big\{C\mid \text{$\deg C = d$ and the multiplicity of $C$ at $p_i$ is at least $m_i$ for all $i$}\Big\}. \] Note that every linear system of $\mathbb{P}^2$ is of the form $L(\mathbf{p},\mathbf{m})$ for some $\mathbf{p}$, $\mathbf{m}$, and $n$. \begin{definition} $L(\mathbf{p},\mathbf{m})$ has \emph{no unassigned base points} if \begin{enumerate} \item $L(\mathbf{p},\mathbf{m})$ is nonempty, \item no point other than $p_1,p_2,\ldots,p_n$ is contained in $C$ for all $C \in L(\mathbf{p},\mathbf{m})$, \item no line is contained in the tangent cone of $C$ at $p_i$ for all $C \in L(\mathbf{p},\mathbf{m})$, and \item there is an element of $L(\mathbf{p},\mathbf{m})$ which has multiplicity $m_i$ at $p_i$ for all $i$. 
\end{enumerate} \end{definition} In other words, we require that the linear system has no base points other than $p_1,\ldots,p_n$, whether proper or infinitely near, and that a general member of the linear system has the expected multiplicity at $p_i$ for all $i$. When nonempty, we may view $L(\mathbf{p},\mathbf{m})$ as a projective space. We denote the rational map associated to $L(\mathbf{p},\mathbf{m})$ by \[ \varphi(\mathbf{p},\mathbf{m}):\mathbb{P}^2 \longrightarrow L(\mathbf{p},\mathbf{m})^\vee. \] If $p$ is not a base point of the linear system, then $\varphi(\mathbf{p},\mathbf{m})$ maps $p$ to the hyperplane of curves passing through $p$. \begin{definition} $L(\mathbf{p},\mathbf{m})$ is \emph{very big} if $\varphi(\mathbf{p},\mathbf{m})$ maps its domain birationally onto its image. \end{definition} The notion is a birational analogue of \emph{very ample}. The following is the main result of this section. \begin{proposition}\label{Completeness} Define \[ \mathscr{L}(\mathbf{p}):=\Big\{\mathbf{m} \in \mathbb{N}^{n+1} \mid \text{$L(\mathbf{p},\mathbf{m})$ is very big and has no unassigned base points}\Big\}. \] Then $\big(\mathscr{L}(\mathbf{p}),\circ\big)$ is $k$-complete for $k=\lfloor n/2\rfloor$ and a sufficiently general $\mathbf{p}$. \end{proposition} The proof of Proposition \ref{Completeness} is built upon results of De Jonqui\`eres \cite{deJonquieres1,deJonquieres2}. We recall the construction and the needed properties of the De Jonqui\`eres transformation of $\mathbb{P}^2$. Modern treatments can be found in \cite[Chapter 7]{Dolgachev} and \cite[Chapter 2]{Kollar-Smith-Corti}. A \emph{De Jonqui\`eres transformation} of degree $d \ge 1$ is a birational map of the form $\varphi(\mathbf{p},\mathbf{m})$, where \[ \mathbf{m}=(d,d-1,\underbrace{1,1,\ldots,1}_{2d-2},\underbrace{0,0,\ldots,0}_{n-2d+1}).
\] The result of De Jonqui\`eres is that, for $\mathbf{m}$ as above and a sufficiently general $\mathbf{p}$, \begin{enumerate}[(1)] \item $\dim L(\mathbf{p},\mathbf{m})=2$, \item $L(\mathbf{p},\mathbf{m})$ has no unassigned base points, and \item $\varphi(\mathbf{p},\mathbf{m})$ is a birational transformation of $\mathbb{P}^2$. \end{enumerate} It is necessary to assume that $\mathbf{p}$ is sufficiently general. For example, if $d=3$ and $p_2,p_3,p_4,p_5$ are collinear, then the last two conditions above fail to hold for $L(\mathbf{p},\mathbf{m})$, because the line through the four points is a fixed component of the linear system. \begin{remark} Interesting De Jonqui\`eres transformations can be obtained by allowing some of the base points to be infinitely near. We will not need this extension. \end{remark} Lemma \ref{DJ} and Lemma \ref{Semigroup} below will be needed in the proof of Proposition \ref{Completeness}. \begin{lemma}\label{DJ} Define $\mathbf{m}=(d,m_1,m_2,\ldots,m_n) \in \mathbb{N}^{n+1}$ to be \emph{of De Jonqui\`eres type} if \begin{enumerate} \item $d \ge 1$, $n\ge 2d-1$, and $m_1=d-1$, \item $m_2,m_3,\ldots,m_n$ are either zero or one, and \item at most $2d-2$ among $m_2,m_3,\ldots,m_n$ are nonzero. \end{enumerate} If $\mathbf{m}$ is of De Jonqui\`eres type, then $L(\mathbf{p},\mathbf{m})$ is very big and has no unassigned base points for a sufficiently general $\mathbf{p}$. \end{lemma} \begin{proof} Let $r$ be the number of nonzeros among $m_2,m_3,\ldots,m_n$. We may suppose that $n=2d-1$ and \[ \mathbf{m}=(d,d-1,\underbrace{1,1,\ldots,1}_{r},\underbrace{0,0,\ldots,0}_{2d-2-r}). \] Define \[ \mathbf{n}:=(d,d-1,\underbrace{1,1,\ldots,1,1,1,\ldots,1}_{2d-2}). \] There is an inclusion between the linear systems \[ \iota: L(\mathbf{p},\mathbf{n}) \longrightarrow L(\mathbf{p},\mathbf{m}).
\] For any $\mathbf{p}$, we have the commutative diagram of rational maps \[ \xymatrix{ \mathbb{P}^2 \ar@{->}[rr]^{\varphi(\mathbf{p},\mathbf{m}) \quad} \ar@{->}[ddrr]_{\varphi(\mathbf{p},\mathbf{n})}&& L(\mathbf{p},\mathbf{m})^\vee \ar@{->}[dd]^{\iota^\vee}\\ &&\\ &&L(\mathbf{p},\mathbf{n})^\vee. } \] By the result of De Jonqui\`eres, we may choose $\mathbf{p}$ sufficiently general so that $\varphi(\mathbf{p},\mathbf{n})$ is a birational transformation of $\mathbb{P}^2$. Then the commutative diagram shows that $L(\mathbf{p},\mathbf{m})$ is very big. Next we show that $L(\mathbf{p},\mathbf{m})$ has no unassigned base points. Let $p$ be a point different from $p_1,\ldots,p_n$. Again by the result of De Jonqui\`eres, there is a sequence of distinct points $\mathbf{q}=(q_1,\ldots,q_{n})$ such that \begin{enumerate}[(1)] \item $\varphi(\mathbf{q},\mathbf{n})$ is a De Jonqui\`eres transformation of $\mathbb{P}^2$, \item $q_i$ is equal to $p_i$ for $1 \le i \le r+1$, and \item$q_i$ is different from $p$ and from $p_{r+1},\ldots,p_n$ for $r+1 < i\le n$. \end{enumerate} Note that $L(\mathbf{q},\mathbf{n})$ is a subspace of $L(\mathbf{p},\mathbf{m})$ which has no unassigned base points. Since $p$ can be any point different from $p_1,\ldots,p_n$, it follows that $L(\mathbf{p},\mathbf{m})$ has no unassigned base points. \end{proof} \begin{lemma}\label{Semigroup} Let $\mathbf{m}_1,\mathbf{m}_2\in \mathbb{N}^{n+1}$ and $\mathbf{m}_3=\mathbf{m}_1+\mathbf{m}_2$. \begin{enumerate} \item If $L(\mathbf{p},\mathbf{m}_1)$ is very big and $L(\mathbf{p},\mathbf{m}_2)$ is nonempty, then $L(\mathbf{p},\mathbf{m}_3)$ is very big. \item If $L(\mathbf{p},\mathbf{m}_1)$ and $L(\mathbf{p},\mathbf{m}_2)$ have no unassigned base points, then $L(\mathbf{p},\mathbf{m}_3)$ has no unassigned base points. \end{enumerate} \end{lemma} \begin{proof} Choose an element $C_2$ of the linear system $L(\mathbf{p},\mathbf{m}_2)$. 
This defines an embedding \[ \iota_2: L(\mathbf{p},\mathbf{m}_1) \longrightarrow L(\mathbf{p},\mathbf{m}_3), \qquad C_1 \longmapsto C_1 \cup C_2 \] and the commutative diagram of rational maps \[ \xymatrix{ \mathbb{P}^2 \ar@{->}[rr]^{\varphi(\mathbf{p},\mathbf{m}_3) \quad} \ar@{->}[ddrr]_{\varphi(\mathbf{p},\mathbf{m}_1)}&& L(\mathbf{p},\mathbf{m}_3)^\vee \ar@{->}[dd]^{\iota^\vee_2}\\ &&\\ &&L(\mathbf{p},\mathbf{m}_1)^\vee. } \] Since $L(\mathbf{p},\mathbf{m}_1)$ is very big, the commutative diagram shows that $L(\mathbf{p},\mathbf{m}_3)$ is very big. Next we show that $L(\mathbf{p},\mathbf{m}_3)$ has no unassigned base points. Consider the subset \[ L(\mathbf{p},\mathbf{m}_1)+L(\mathbf{p},\mathbf{m}_2):=\Big\{C_1 \cup C_2 \mid C_1 \in L(\mathbf{p},\mathbf{m}_1), \ C_2 \in L(\mathbf{p},\mathbf{m}_2)\Big\} \subseteq L(\mathbf{p},\mathbf{m}_3). \] Since $L(\mathbf{p},\mathbf{m}_1)$ and $L(\mathbf{p},\mathbf{m}_2)$ have no unassigned base points, there is an element of the above subset which has the expected multiplicity at $p_1,\ldots,p_n$ and which does not contain a given point different from $p_1,\ldots,p_n$, proper or infinitely near. It follows that $L(\mathbf{p},\mathbf{m}_3)$ has no unassigned base points. \end{proof} \begin{proof}[Proof of Proposition \ref{Completeness}] By Lemma \ref{Semigroup}, we know that $\mathscr{L}(\mathbf{p})$ is closed under the addition of $\mathbb{Z}^{n+1}$ for any $\mathbf{p}$. Therefore, by Proposition \ref{Division}, it is enough to prove that $\mathscr{L}(\mathbf{p})$ is linearly $k$-complete for a sufficiently general $\mathbf{p}$. This linear version of the problem can be solved directly by using linear systems of De Jonqui\`eres type. Recall that $\mathbf{m}=(d,m_1,m_2,\ldots,m_n) \in \mathbb{N}^{n+1}$ is said to be \emph{of De Jonqui\`eres type} if \begin{enumerate} \item $d \ge 1$, $n\ge 2d-1$, and $m_1=d-1$, \item $m_2,m_3,\ldots,m_n$ are either zero or one, and \item at most $2d-2$ among $m_2,m_3,\ldots,m_n$ are nonzero. 
\end{enumerate} Let $\mathscr{D}$ be the set of all elements of De Jonqui\`eres type in $\mathbb{N}^{n+1}$. Since $n \ge 2d-1$, $\mathscr{D}$ has only finitely many elements. Therefore, by Lemma \ref{DJ}, we may choose $\mathbf{p}$ sufficiently general so that \[ \mathscr{D} \subseteq \mathscr{L}(\mathbf{p}). \] Let $a,b,c$ be positive integers which satisfy $2b \ge a+c$ and $b \le k$. We may suppose that $c \le a$ and $c \le b$. We show that there exist $\mathbf{m}_1,\mathbf{m}_2 \in \mathscr{D}$ which satisfy \[ \mathbf{m}_1 \circ \mathbf{m}_1 = a, \qquad \mathbf{m}_1 \circ \mathbf{m}_2 = b, \qquad \mathbf{m}_2 \circ \mathbf{m}_2 = c. \] For this we mimic the proof of Corollary \ref{Seven}. Define nonnegative integers \[ r_1:=\lfloor\frac{c}{2}\rfloor, \qquad r_2:=b-c, \qquad r_3:=2b-a-c. \] Note that $r_2 \le n-2$. \begin{enumerate}[(1)] \item If $c$ is odd, then set \begin{eqnarray*} \mathbf{m}_1&:=&(r_1+r_2+1,r_1+r_2,0,\underbrace{1,1,\ldots,1}_{r_3},\underbrace{0,0,\ldots,0}_{n-2-r_3}), \\ \mathbf{m}_2&:=&(r_1+1,r_1,0,\underbrace{0,0,\ldots,0}_{r_3},\underbrace{0,0,\ldots,0}_{n-2-r_3}). \end{eqnarray*} It is easy to check that \[ r_3 \le 2r_1+2r_2< n. \] The two inequalities show that $\mathbf{m}_1,\mathbf{m}_2$ are of De Jonqui\`eres type. We have \[ \mathbf{m}_1 \circ \mathbf{m}_1 =(2r_1+1)+2r_2-r_3, \quad \mathbf{m}_1 \circ \mathbf{m}_2=(2r_1+1)+r_2, \quad \mathbf{m}_2 \circ \mathbf{m}_2=2r_1+1. \] \item If $c$ is even, then set \begin{eqnarray*} \mathbf{m}_1&:=&(r_1+r_2+1,r_1+r_2,1,\underbrace{1,1,\ldots,1}_{r_3},\underbrace{0,0,\ldots,0}_{n-2-r_3}), \\ \mathbf{m}_2&:=&(r_1+1,r_1,1,\underbrace{0,0,\ldots,0}_{r_3},\underbrace{0,0,\ldots,0}_{n-2-r_3}). \end{eqnarray*} Since $c$ is even, \[ r_3+1 \le 2r_1+2r_2< n \quad \text{and} \quad 1 \le 2r_1. \] The three inequalities show that $\mathbf{m}_1,\mathbf{m}_2$ are of De Jonqui\`eres type. 
We have \[ \mathbf{m}_1 \circ \mathbf{m}_1 =2r_1+2r_2-r_3, \quad \mathbf{m}_1 \circ \mathbf{m}_2=2r_1+r_2, \quad \mathbf{m}_2 \circ \mathbf{m}_2=2r_1. \] \end{enumerate} In both cases, \[ \mathbf{m}_1 \circ \mathbf{m}_1 = a, \qquad \mathbf{m}_1 \circ \mathbf{m}_2 = b, \qquad \mathbf{m}_2 \circ \mathbf{m}_2 = c. \] \end{proof} \section{Proof of Theorem \ref{main}}\label{SectionProof} We first characterize homology classes of reduced and irreducible surfaces in $\mathbb{P}^2 \times \mathbb{P}^1$. \begin{lemma}\label{DegenerateCase} Let $\xi$ be an element in the Chow homology group \[ \xi = a[\mathbb{P}^2 \times \mathbb{P}^0]+b[\mathbb{P}^1 \times \mathbb{P}^1] \in A_2(\mathbb{P}^2 \times \mathbb{P}^1). \] Then $\xi$ is the class of a reduced and irreducible subvariety if and only if one of the following conditions is satisfied: \begin{enumerate} \item $b>0$ and $a \ge 0$. \item $b=0$ and $a=1$. \end{enumerate} \end{lemma} \begin{proof} Suppose $\xi$ is the class of a reduced and irreducible subvariety $X$. Then $X$ is defined by an irreducible bihomogeneous polynomial in two sets of variables $z_0,z_1,z_2$ and $w_0,w_1$ with respective degrees $b$ and $a$. The assertion that $b=0$ implies $a=1$ is precisely the fundamental theorem of algebra applied to the defining equation of $X$. Conversely, if $b>0$ and $a \ge 0$, then a sufficiently general bihomogeneous polynomial in variables $z_0,z_1,z_2$ and $w_0,w_1$ with respective degrees $b$ and $a$ is irreducible by Bertini's theorem. This proves Lemma \ref{DegenerateCase}. \end{proof} Representable homology classes in $A_2(\mathbb{P}^1 \times \mathbb{P}^2)$ can be characterized in the same way. \begin{remark} Let $\xi$ be an element in the Chow homology group \[ \xi = a[\mathbb{P}^1 \times \mathbb{P}^0]+b[\mathbb{P}^0 \times \mathbb{P}^1] \in A_1(\mathbb{P}^1 \times \mathbb{P}^1). 
\] Then $\xi$ is the class of a reduced and irreducible subvariety if and only if one of the following conditions is satisfied: \begin{enumerate} \item $a>0$ and $b>0$. \item $a=1$ and $b=0$. \item $a=0$ and $b=1$. \end{enumerate} The proof is similar to that of Lemma \ref{DegenerateCase}. \end{remark} Let $X$ be a reduced and irreducible surface, and let $f:X \longrightarrow \mathbb{P}^{N_1} \times \mathbb{P}^{N_2}$ be a regular map to a biprojective space with $N_1 \ge 2$, $N_2 \ge 2$. We denote the two projections by \[ \xymatrix{ &X \ar[dr]^{\text{pr}_2} \ar[dl]_{\text{pr}_1}&\\ \mathbb{P}^{N_1}&&\mathbb{P}^{N_2}. } \] The following result will serve as a final preparation for the proof of Theorem \ref{main}. \begin{lemma}\label{Biprojection} Consider the commutative diagram of rational maps \[ \xymatrix{ X \ar[r]^{f \qquad} \ar[dr]_{\pi \quad} & \mathbb{P}^{N_1} \times \mathbb{P}^{N_2} \ar@{->}[d]^{\pi_1 \times \pi_2} \\ & \mathbb{P}^{2} \times \mathbb{P}^{2} } \] where $\pi_1$ and $\pi_2$ are independently chosen general linear projections. Then $\pi$ is a regular map, and if $\text{pr}_1$ and $\text{pr}_2$ map $X$ birationally onto $\text{pr}_1(X)$ and $\text{pr}_2(X)$ respectively, then $\pi$ maps $X$ birationally onto $\pi(X)$. \end{lemma} It is not enough to assume that $\text{pr}_1$ maps $X$ birationally onto $\text{pr}_1(X)$. For example, if $\text{pr}_1$ is an embedding of a degree $d$ surface in $\mathbb{P}^3$ and $\text{pr}_2$ is a constant map to $\mathbb{P}^3$, then $\pi:X \longrightarrow \pi(X)$ has degree $d$ for sufficiently general $\pi_1$ and $\pi_2$. \begin{proof} The center of the linear projection $\pi_1$ (respectively $\pi_2$) is either empty or has codimension $3$ in $\mathbb{P}^{N_1}$ (respectively in $\mathbb{P}^{N_2}$). Therefore $\pi$ is defined everywhere on $X$ for sufficiently general $\pi_1$ and $\pi_2$. Suppose $\text{pr}_1$ and $\text{pr}_2$ map $X$ birationally onto $\text{pr}_1(X)$ and $\text{pr}_2(X)$ respectively. 
Define $\widetilde f$, $g$, and $h$ by the commutative diagram \[ \xymatrix{ X \ar[r]^{f \qquad} \ar[dd]_g \ar[dr]_{{\widetilde f}\hspace{1mm}} & \mathbb{P}^{N_1} \times \mathbb{P}^{N_2} \ar@{->}[d]^{\pi_1 \times \text{id}} \\ & \mathbb{P}^{2} \times \mathbb{P}^{N_2} \ar[dl]_h \ar[dr]\\ \mathbb{P}^2&&\mathbb{P}^{N_2}. } \] We claim that \begin{enumerate}[(1)] \item $\widetilde f$ maps $X$ birationally onto ${\widetilde f}(X)$, and \item $\widetilde h:= h|_{{\widetilde f}(X)}$ has a reduced general fiber for a sufficiently general $\pi_1$. \end{enumerate} The first assertion is valid for any $\pi_1$ because $\text{pr}_2$ factors through $\widetilde f$. For the second assertion, note that a general codimension $2$ linear subspace of $\mathbb{P}^{N_1}$ intersects $\text{pr}_1(X)$ in finitely many reduced points. Since $X$ is mapped birationally onto $\text{pr}_1(X)$, the previous sentence implies that $g$ has a reduced general fiber for a sufficiently general $\pi_1$. Therefore $\widetilde h$ has a reduced general fiber for a sufficiently general $\pi_1$. We show that $\pi$ maps $X$ birationally onto $\pi(X)$ by induction on $N_2$. Suppose $N_2 >2$, and let $\widetilde f$ and $\widetilde h$ be as above. Consider the linear projection $\text{p}_2:\mathbb{P}^{N_2} \longrightarrow \mathbb{P}^{N_2-1}$ from a point $y$, and define $i$ by the commutative diagram \[ \xymatrix{ {\widetilde f}(X) \ar[r] \ar[dr]_{i \hspace{1mm}} \ar[dd]_{\widetilde h}& \mathbb{P}^{2} \times \mathbb{P}^{N_2} \ar@{->}[d]^{\text{id} \times \text{p}_2} \\ & \mathbb{P}^{2} \times \mathbb{P}^{N_2-1} \ar[dl]\\ \mathbb{P}^2.& } \] If $x_1$ is a sufficiently general point of $\mathbb{P}^2$, then the fiber of $\widetilde h$ over $x_1$ is a reduced set of points \[ {\widetilde h}^{-1}(x_1)=\Big\{(x_1,y_1), \ldots, (x_1,y_m) \Big\}, \qquad y_1,\ldots,y_m \in \mathbb{P}^{N_2}.
\] If the center $y$ is not contained in the union of the lines joining $y_i$ and $y_j$, and if $\mathbb{P}^2 \times \{y\}$ is disjoint from ${\widetilde f}(X)$, then \[ i^{-1}\Big(i(x_1,y_1)\Big) = \Big\{(x_1,y_1)\Big\}. \] It follows that $i$ maps ${\widetilde f}(X)$ birationally onto its image in $\mathbb{P}^2 \times \mathbb{P}^{N_2-1}$ for a sufficiently general $y$. The proof is completed by induction. \end{proof} \begin{proof}[Proof of Theorem \ref{main}] We construct a reduced and irreducible surface in $\mathbb{P}^2 \times \mathbb{P}^2$ with given $a,b,c$ which satisfy $b>0$ and $b^2 \ge ac$. If $a=0$ or $c=0$, then we may use Lemma \ref{DegenerateCase}. Suppose $a,b,c$ are positive. Let $X$ be the blowup of $\mathbb{P}^2$ at $n\ge 2b$ sufficiently general points. By Proposition \ref{Completeness}, there are base-point-free divisors $D_1,D_2$ of $X$ such that \[ D_1 \cdot D_1 =a, \qquad D_1 \cdot D_2 = b,\qquad D_2 \cdot D_2 =c, \] whose linear systems map $X$ birationally onto their respective images. Let $L_1, L_2$ be the linear systems of $D_1,D_2$, and write $\varphi_1,\varphi_2$ for the corresponding rational maps. We apply Lemma \ref{Biprojection} to the product \[ \varphi_1 \times \varphi_2: X \longrightarrow L_1^\vee \times L_2^\vee \simeq \mathbb{P}^{N_1} \times \mathbb{P}^{N_2}. \] If $\widetilde{L}_1,\widetilde{L}_2$ are sufficiently general two dimensional linear subspaces of $L_1,L_2$ respectively, then the biprojection \[ \widetilde{\varphi}_1 \times \widetilde{\varphi}_2: X \longrightarrow \widetilde{L}_1^\vee \times \widetilde{L}_2^\vee \simeq \mathbb{P}^{2} \times \mathbb{P}^{2} \] is a regular map which maps $X$ birationally onto its image. By the projection formula \cite[Example 2.4.3]{Fulton}, we have \[ \big[(\widetilde{\varphi}_1 \times \widetilde{\varphi}_2)(X)\big]=a[\mathbb{P}^2 \times \mathbb{P}^0]+b[\mathbb{P}^1 \times \mathbb{P}^1]+c[\mathbb{P}^0 \times \mathbb{P}^2] \in A_2(\mathbb{P}^2 \times \mathbb{P}^2). 
\] \end{proof} \begin{remark} The proof of Theorem \ref{main} shows that a (reduced and irreducible) surface in $\mathbb{P}^2 \times \mathbb{P}^2$ is homologous to either \begin{enumerate} \item a surface in $\mathbb{P}^1 \times \mathbb{P}^2$, \item a surface in $\mathbb{P}^2 \times \mathbb{P}^1$, or \item the image of a blown-up projective plane whose embedding is built up from linear systems of De Jonqui\`eres type. \end{enumerate} It is pleasant to recall the classical fact that De Jonqui\`eres transformations are basic building blocks of birational transformations of $\mathbb{P}^2$. See, for example, \cite[Theorem 2.30]{Kollar-Smith-Corti}. \end{remark} \begin{remark} Let $X$ be a smooth projective variety. In \cite[Question 1.3]{HartshorneSurvey} Hartshorne asks which homology classes of $X$ can be represented by an irreducible nonsingular subvariety. The author does not know whether the characterization of representability in Theorem \ref{main} remains unchanged if one requires subvarieties to be nonsingular. We note that, when $X$ is a complex Grassmannian, there are subvarieties of $X$ which are not smoothable up to homological equivalence \cite{Bryant,Coskun,Hartshorne-Rees-Thomas,Hong1}. \end{remark} \section{Further discussion}\label{SectionLast} We conjecture that an analogue of Theorem \ref{main} remains valid in dimension $3$. For a survey of results on three dimensional correspondences in the classical period, see \cite{Berzolari,SnyderB,SnyderC}. \begin{conjecture}\label{Spatial} Let $\xi$ be an element in the Chow homology group \[ \xi = a[\mathbb{P}^3 \times \mathbb{P}^0]+b[\mathbb{P}^2 \times \mathbb{P}^1]+c[\mathbb{P}^1 \times \mathbb{P}^2] +d[\mathbb{P}^0 \times \mathbb{P}^3] \in A_3(\mathbb{P}^3 \times \mathbb{P}^3). \] Then $\xi$ is the class of a reduced and irreducible subvariety if and only if $a,b,c,d$ are nonnegative and one of the following conditions is satisfied: \begin{enumerate} \item $b^2+c^2>0$ and $b^2 \ge ac$ and $c^2 \ge bd$. 
\item $a=1$, $b=0$, $c=0$, $d=0$. \item $a=0$, $b=0$, $c=0$, $d=1$. \end{enumerate} \end{conjecture} The necessity of the above numerical conditions for the representability of $\xi$ follows from Theorem \ref{multiple} below. For the sufficiency of the numerical conditions in the case of Cremona transformations (that is, when $a=1$ and $d=1$), see \cite{PanCorrection}. The construction is based on a $3$-dimensional generalization of the De Jonqui\`eres birational transformation \cite{Pan,Pan2}. To illustrate the nature of Conjecture \ref{Spatial}, we quote below \cite[Theorem 21]{Huh} which characterizes representable homology classes in $\mathbb{P}^n \times \mathbb{P}^m$ for any nonnegative $n,m$, up to an integral multiple. Recall that a sequence $e_0,e_1,\ldots,e_n$ of integers is said to be \emph{log-concave} if for all $0< i< n$, \[ e_{i-1}e_{i+1}\le e_i^2, \] and it is said to have \emph{no internal zeros} if there do not exist $i < j < k$ satisfying \[ e_i \neq 0, \quad e_j=0, \quad e_k\neq 0. \] \begin{theorem}\label{multiple} Let $\xi$ be an element in the Chow homology group \[ \xi = \sum_{i} e_i[\mathbb{P}^i \times \mathbb{P}^{k-i}] \in A_k(\mathbb{P}^n \times \mathbb{P}^m). \] \begin{enumerate} \item If $\xi$ is an integer multiple of either \[ \big[\mathbb{P}^m \times \mathbb{P}^n\big], \big[\mathbb{P}^m \times \mathbb{P}^0\big], \big[\mathbb{P}^0 \times \mathbb{P}^n\big], \big[\mathbb{P}^0 \times \mathbb{P}^0\big], \] then $\xi$ is the class of a reduced and irreducible subvariety if and only if the integer is $1$. \item If otherwise, some positive integer multiple of $\xi$ is the class of a reduced and irreducible subvariety if and only if the $e_i$ form a nonzero log-concave sequence of nonnegative integers with no internal zeros. \end{enumerate} \end{theorem} It is necessary to take a positive integer multiple of $\xi$ in the second part of Theorem \ref{multiple}. 
A result of Pirio and Russo \cite[Corollary 5.3]{Pirio-Russo0} implies that there is no reduced and irreducible subvariety of $\mathbb{P}^5 \times \mathbb{P}^5$ which has the homology class \[ 1[\mathbb{P}^5 \times \mathbb{P}^0]+2[\mathbb{P}^4 \times \mathbb{P}^1]+3[\mathbb{P}^3 \times \mathbb{P}^2] +4[\mathbb{P}^2 \times \mathbb{P}^3] +2[\mathbb{P}^1 \times \mathbb{P}^4]+1[\mathbb{P}^0 \times \mathbb{P}^5] \in A_5(\mathbb{P}^5 \times \mathbb{P}^5). \] Note that the sequence $(1,2,3,4,2,1)$ is log-concave and has no internal zeros. The nonexistence can (also) be deduced from an explicit classification of quadro-quadric Cremona transformations in dimension five \cite[Table 10]{Pirio-Russo2}. In general, it is difficult to characterize homology classes of subvarieties of a given algebraic variety, even when the ambient variety is a smooth projective toric variety over $\mathbb{C}$. For example, when the complex toric variety is the one corresponding to the $n$-dimensional permutohedron, the problem of characterizing representable homology classes is at least as difficult as identifying matroids with $n+1$ elements representable over the complex numbers \cite{Fink,Katz-Payne}. The latter is a difficult problem in a rather precise sense, and one does not expect an answer which is uniform with respect to $n$ \cite{Mayhew-Newman-Whittle,Vamos}. For what is known about representable homology classes in homogeneous varieties, see \cite{Bryant,Coskun,Coskun-Robles,Hong1,Hong2,Perrin} and references therein.
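The two combinatorial conditions in the second part of Theorem \ref{multiple} are elementary to check by machine. The following Python sketch (an illustration added here, not part of the original argument; the function names are invented) tests log-concavity and the absence of internal zeros, and confirms both for the sequence $(1,2,3,4,2,1)$ considered above, which is nevertheless not representable.

```python
# Illustrative check of the numerical conditions in Theorem "multiple":
# log-concavity and the absence of internal zeros of a sequence e_0, ..., e_n.

def is_log_concave(e):
    """e_{i-1} * e_{i+1} <= e_i^2 for all interior indices."""
    return all(e[i - 1] * e[i + 1] <= e[i] ** 2 for i in range(1, len(e) - 1))

def has_no_internal_zeros(e):
    """No pattern e_i != 0, e_j == 0, e_k != 0 with i < j < k."""
    nonzero = [i for i, x in enumerate(e) if x != 0]
    if not nonzero:
        return True
    lo, hi = nonzero[0], nonzero[-1]
    return all(e[i] != 0 for i in range(lo, hi + 1))

# The sequence (1, 2, 3, 4, 2, 1) from the text satisfies both conditions,
# even though it is not the class of a subvariety of P^5 x P^5.
seq = [1, 2, 3, 4, 2, 1]
print(is_log_concave(seq), has_no_internal_zeros(seq))  # True True
```

This makes concrete that the conditions of Theorem \ref{multiple} are necessary but, without taking multiples, not sufficient.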
Having listened to a couple of podcasts I was excited about this. The book started really well and was ultimately practical and instructive, but the last half fell flat for me and I was enduring it. The second half is not so much a must-read but is worth having for reference.
4.0 out of 5 stars: Think bold and think big! Another great read by Peter and Steven. Makes you want to go do things that will change the world.
5.0 out of 5 stars: Tons of info! This book has a ton of info and resources. I'm now starting my Kickstarter project with a huge understanding of what needs to be done, and that it is possible for my startup. I printed off Peter Diamandis' 28 rules, put them in my office, and read them every morning.
/**
 * Encoder, decoder and handshakers to handle most common WebSocket Compression Extensions.
 * <p>
 * This package supports different web socket extensions.
 * The specifications currently supported are:
 * <ul>
 * <li><a href="http://tools.ietf.org/html/draft-ietf-hybi-permessage-compression-18">permessage-deflate</a></li>
 * <li><a href="https://tools.ietf.org/id/draft-tyoshino-hybi-websocket-perframe-deflate-06.txt">
 * perframe-deflate and x-webkit-deflate-frame</a></li>
 * </ul>
 * </p>
 * <p>
 * See <tt>io.netty.example.http.websocketx.client.WebSocketClient</tt> and
 * <tt>io.netty.example.http.websocketx.html5.WebSocketServer</tt> for usage.
 * </p>
 */
package io.netty.handler.codec.http.websocketx.extensions.compression;
Cora Seton I'm Reading – A Life Less Ordinary, by Victoria Bernardine Today I'm hosting Victoria Bernardine's blog tour for her novel, A Life Less Ordinary. Comment below for your chance at winning either a $25 gift card or one of two $15 gift cards to either Amazon or Smashwords! First, here's the blurb: For the last fifteen years, Rose "Manny" Mankowski has been a very good girl. Now, at the age of 45, she's questioning her choices and feeling more and more disconnected from her own life. When she's passed over for promotion and her much younger new boss implies Manny's life will never change, something snaps. In the blink of an eye, she's quit her job, sold her house, cashed in her pension, and she's leaving town on a six month road trip. After placing an ad for a travelling companion, she's joined in her mid-life crisis by Zeke Powell, the cynical, satirical, most read – and most controversial – blogger for the e-zine, What Women Want. Zeke's true goal is to expose Manny's journey as a pitiful and desperate attempt to reclaim her lost youth – and increase his readership at the same time. Now, armed with a bagful of destinations, a fistful of maps, and an out-spoken imaginary friend named Harvey, Manny's on a quest to rediscover herself – and taking Zeke along for the ride. And here's an excerpt: Darius was very sweet and charming, just eighteen, but he couldn't pay his own way, and Manny wasn't about to support him for six months. He'd shrugged and accepted her decision with an adorable smile and she offered to call Daisy's boss, Max, to see if he had any work that Darius could do. Darius had thanked her and even paid for their lattes, and they'd chatted for a good forty-five minutes before he'd finally gone on his way. Yes, he would have been a good choice–and she might change her mind if she didn't find anyone before she left in two weeks. You can always go by yourself. I know. But it would be more fun with someone else. You'll have me. 
Manny glanced at Harvey sitting in the chair across from her. He was dressed casually in jeans and a button down shirt open at the throat to show the strong lines of his neck and chest. You're not real. Harvey winked at her. Just checking. She shook her head and Harvey blinked out of existence as the door opened and a darkly handsome man walked in. He paused in the doorway and removed his sunglasses as he glanced around the small room. Securely hidden in her corner, Manny considered him. Tall; over six feet. Dark. Handsome, with large, dark eyes and full pouty lips. His black, tousled hair and dark stubble on his face gave him a sexy, scruffy appearance. He was slim, with broad shoulders, narrow hips and long legs encased in jeans. I'll bet he has a great ass. I'll bet you're right. He's like a younger version of me. Manny blinked at the man standing in the doorway and realized Harvey was right. Oh, they didn't exactly look alike, but they had similar colouring, and a similar underlying confidence and arrogance in their stance. Probably something natural when you're that naturally gorgeous, Manny thought ruefully, or, in Harvey's case, that unnaturally perfect. I'd almost be jealous…if I was real. But you're not–and he's quite something. I wonder who he's here to me…eeet. A Life Less Ordinary is a fun, chatty, witty novel that really grabbed me the more I read it. While the novel mostly follows Manny's roadtrip and Zeke's attempts to maintain his professional distance while documenting it, there are also three other side stories about Manny's sisters and Zeke's bosses. Together, the different threads cover all sorts of ways life can sucker-punch you. Manny's isn't the only life that changes over the course of the novel; everyone involved has to overcome an unexpected problem and grow from the experience. 
One side-story in particular – about a woman faced with raising her grandchild when her daughter proves incapable of doing so really resonated with me, and so did Manny's story of reclaiming her sense of self and self-worth in the middle of being middle-aged. I loved Manny's banter with her invisible "friend" Harvey – by far one of the best parts of the book. I also liked the way Zeke struggled to even see Manny as a "woman" rather than as an "old woman". The one thing I wished had been added was more excerpts from Zeke's blog posts, and I had the feeling that if the story had included two or three "blog readers" who "commented" each time he posted, the author could have added another fun and interesting subplot. Take this one to the beach when you have a long, lazy day ahead of you! Author information: Victoria Bernadine (a pseudonym) is, as the saying goes, a "woman of a certain age". After twenty-something years of writer's block, she began writing again in 2008. She began with fanfiction about a (now-cancelled) TV show called Jericho and particularly about the characters of Heather Lisinski and Edward Beck. From there, she expanded into writing original fic and she hasn't stopped since. Victoria enjoys reading all genres and particularly loves writing romantic comedy and post-apocalyptic science fiction. What those two have in common is anybody's guess. She lives in Edmonton with her two cats (The Grunt and The Runt). A Life Less Ordinary is the first novel she felt was good enough to be released into the wild. Victoria can be contacted through Love of Words Publishing Inc. (loveofwords@shaw.ca) or through her brand new blog at http://victoriabernadine.wordpress.com/.
With over one million books sold, New York Times and USA Today bestselling author Cora Seton has created a world readers love in Chance Creek, Montana. She currently has thirty novels and novellas set in her fictional town, with many more in the works. Like her characters, Cora loves cowboys, military heroes, country life, gardening, jogging, binge-watching Jane Austen movies, keeping up with the latest technology and indulging in old-fashioned pursuits. She lives on beautiful Vancouver Island with her husband, children and two cats. © 2018 Cora Seton
\section{Introduction} \label{introduction} The fundamental parameters of stars such as their effective temperature, surface gravity, and chemical composition are not observable quantities: rather, they must be inferred using model stellar atmospheres \citep{Bergemann:2014aa}. Three dimensional (3D) hydrodynamic `box-in-a-star' models \citep{1982A&A...107....1N} are increasingly being used in this context \citep{2009MmSAI..80..711L,2013A&A...557A..26M,2013ApJ...769...18T}. These present a huge improvement over classical 1D hydrostatic models on account of their ab initio treatment of convective energy transport in the outer envelope that can realistically reproduce the shifting, broadening and strengthening of spectral lines by convective velocity fields and atmospheric inhomogeneities \citep{1980LNP...114..213N,1999A&A...346L..17A}. Inferred logarithmic abundances can suffer errors as large as $\pm1.0\,\mathrm{dex}$ when modelled in 1D \citep{2008MmSAI..79..649C}. Visualizing and understanding spectral line formation in three dimensions is non-trivial. Contribution functions \citep{1952hss..book.....D,1974SoPh...37...43G} are useful tools to that end. They can be interpreted as probability density functions for line formation in the atmosphere \citep{1972SoPh...24..255S,1986A&A...163..135M} and are often used to infer the mean formation depths of spectral lines. The line intensity contribution function \citep{1986A&A...163..135M} represents the contribution from different locations of the atmosphere to the depression in the normalized intensity. This quantity is commonly used to study lines in a solar context \citep{2008A&A...488.1031C}. Since stars are in general not resolved, often more relevant is the line flux contribution function, \citep{1996MNRAS.278..337A}, which is instead formulated in terms of the depression in the absolute flux. 
Since all parts of the stellar atmosphere contribute to its observed flux profile, the line flux contribution function is a function of 3D space. \citet{1996MNRAS.278..337A} derive it in the context of 1D model stellar atmospheres, i.e. assuming plane-parallel symmetry. To apply it directly to a 3D model would be to treat the atmosphere as an ensemble of 1D columns i.e.~it would be a 1.5D approximation \citep{1995A&A...302..578K}. This is undesirable because the effects of horizontal radiative transfer are entirely neglected. Another approach is to compute the plane-parallel contribution function on a horizontally-averaged, \mtd~model. This approach is still not ideal, because it neglects the effects of the atmospheric inhomogeneities which characterize real stellar atmospheres. In this paper I present in \sect{method} a derivation for the line flux contribution function that is valid in three dimensions. To illustrate the result, I explore in \sect{example} the formation of the high excitation permitted \text{OI\,777\,nm}~lines in a 3D hydrodynamic \textsc{stagger} model atmosphere \citep{2013A&A...557A..26M}. I present a short summary in \sect{conclusion}. \section{The 3D line flux contribution function} \label{method} \begin{figure} \begin{center} \includegraphics[scale=0.26]{fig1.png}\,\,\,\,\,\,\,\,\,\includegraphics[scale=0.26]{fig2.png} \caption{Visual aids to the derivation presented in \sect{method}. Left: spherical star and an arbitrary reference plane perpendicular to the observer's line of sight; the observer is located to the top of the diagram. The spherical polar angle is $\theta$; conventionally, the notation $\mu=\cos\theta$ is adopted. The cylindrical polar radius is $\rho=R\sin\theta$, where $R$ is the radius of the star. $z$ is the displacement from a point in the atmosphere to the arbitrary reference plane. Right: spherical star as seen by the observer, who is now located out of the page. The azimuthal angle is $\phi$. 
A ring of constant $\rho$ is shown.} \label{fig1} \end{center} \end{figure} \subsection{Concept} \label{concept} The flux depression at frequency $\nu$ from a star of radius $R$ measured by a distant observer is proportional to the total emergent intensity depression, \begin{IEEEeqnarray}{rCl} \label{eq1} \mathcal{A}_{\nu} &\propto& \frac{1}{\uppi R^{2}}\int \int \left(I^{\mathrm{c}}_{\nu}-I_{\nu}\right) \, \rho\,\ud\rho\,\ud\phi\, , \end{IEEEeqnarray} in the cylindrical polar coordinate system depicted in \fig{fig1}: the polar axis intersects disc-centre and is directed towards the observer. $I_{\nu}$ is the specific intensity and $I^{\mathrm{c}}_{\nu}$ is the specific continuum intensity, at some position on the disc, in the direction of the observer. The line flux contribution function $\mathcal{C}_{\nu}$ must satisfy \begin{IEEEeqnarray}{rCl} \label{eq2} \mathcal{A}_{\nu} &=& \int_{\text{box}} \mathcal{C}_{\nu}\left(\bm{r}\right)\,\ud^{3}r\, . \end{IEEEeqnarray} Crucially, the integration is not performed over vertical height as in the plane-parallel derivation of \citet{1996MNRAS.278..337A}, but over the entire 3D volume in which the line may form. This is the entire volume of the 3D model atmosphere; $\bm{r}$ thus specifies a position in this box. In what follows, \eqn{eq1} is manipulated into the form of \eqn{eq2}, and thereby $\mathcal{C}_{\nu}$ is inferred. The contribution function $\mathcal{C}$ to the integrated line strength $\mathcal{A}=\int\mathcal{A}_{\nu}\,\ud\nu$ is also found, and satisfies $\mathcal{C}=\int\mathcal{C}_{\nu}\,\ud\nu$. 
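To make the geometry of the disc integral in \eqn{eq1} concrete, the following toy Python sketch (illustrative only; the limb-darkening law and all numbers are invented for the test, and are not taken from any model atmosphere) evaluates the normalized disc integral for an azimuthally symmetric depression $D(\mu)$ and checks it against the equivalent form $2\int_0^1 D(\mu)\,\mu\,d\mu$ obtained from the substitution $\rho = R\sin\theta$, for which $\rho\,d\rho = R^2\,\mu\,d\mu$.

```python
# Toy check of the disc integral (1): for an azimuthally symmetric D(mu),
# (1 / (pi R^2)) * int int D rho drho dphi  =  2 * int_0^1 D(mu) mu dmu.
# The linear "limb-darkening" law D = a + b*mu is an invented test case.

import math

def disc_average(D, R=1.0, n_rho=2000):
    """Midpoint-rule estimate of (1/(pi R^2)) int int D rho drho dphi."""
    total = 0.0
    drho = R / n_rho
    for i in range(n_rho):
        rho = (i + 0.5) * drho
        mu = math.sqrt(max(0.0, 1.0 - (rho / R) ** 2))  # mu = cos(theta)
        total += D(mu) * rho * drho * 2.0 * math.pi     # phi integral gives 2*pi
    return total / (math.pi * R ** 2)

a, b = 0.4, 0.6
result = disc_average(lambda mu: a + b * mu)
exact = a + 2.0 * b / 3.0          # 2 * int_0^1 (a + b mu) mu dmu = a + 2b/3
print(abs(result - exact) < 1e-3)  # True
```

The same reduction is what makes the flux a $\mu$-weighted average of the emergent intensities.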
\subsection{Derivation} \label{derivation} Along any given ray, $I_{\nu}$ and $I^{\mathrm{c}}_{\nu}$ satisfy the respective transport equations, \begin{IEEEeqnarray}{rCl} \label{eq3} \frac{\ud I_{\nu}}{\ud z} &=& \alpha_{\nu}\,\left(S_{\nu}-I_{\nu}\right)\, , \\ \label{eq4} \frac{\ud I^{\mathrm{c}}_{\nu}}{\ud z} &=& \alpha^{\mathrm{c}}_{\nu}\,\left(S^{\mathrm{c}}_{\nu}-I^{\mathrm{c}}_{\nu}\right)\, , \end{IEEEeqnarray} where $z$ is the path distance, increasing upward towards the observer. The linear extinction coefficient $\alpha_{\nu}$ and the source function $S_{\nu}$ are, in terms of their line and continuum components, \begin{IEEEeqnarray}{rCl} \label{eq5} \alpha_{\nu} &=& \alpha^{\mathrm{c}}_{\nu}+\alpha^{\mathrm{l}}_{\nu}\, ,\\ \label{eq6} S_{\nu} &=& \frac{\alpha^{\mathrm{c}}_{\nu}\,S^{\mathrm{c}}_{\nu} + \alpha^{\mathrm{l}}_{\nu}\,S^{\mathrm{l}}_{\nu}}{\alpha_{\nu}}\, \end{IEEEeqnarray} \citep{2014tsa..book.....H}. Following \citet{1986A&A...163..135M}, an effective transport equation for the intensity depression $D_{\nu}\equivI^{\mathrm{c}}_{\nu}-I_{\nu}$ is found by subtracting \eqn{eq3} from \eqn{eq4}, \begin{IEEEeqnarray}{rCl} \label{eq7} \frac{\ud D_{\nu}}{\ud z} &=& \alpha_{\nu}\,\left(S^{\mathrm{eff}}_{\nu}-D_{\nu}\right) \, , \end{IEEEeqnarray} where the effective source function is \begin{IEEEeqnarray}{rCl} \label{eq8} S^{\mathrm{eff}}_{\nu} &=& \frac{\alpha^{\mathrm{l}}_{\nu}}{ \alpha_{\nu} } \left(I^{\mathrm{c}}_{\nu}-S^{\mathrm{l}}_{\nu}\right)\, . \end{IEEEeqnarray} In terms of the optical depth along the ray $\ud\tau_{\nu}=-\alpha_{\nu}\,\ud z\,$, \eqn{eq7} is expressed as \begin{IEEEeqnarray}{rCl} \label{eq9} \frac{\udD_{\nu}}{\ud \tau_{\nu}} &=& D_{\nu}-S^{\mathrm{eff}}_{\nu}\, . \end{IEEEeqnarray} The formal solution is found by integrating from $\tau_{\nu}=0$ to $\tau_{\nu}\rightarrow\infty$, \begin{IEEEeqnarray}{rCl} \label{eq10} D_{\nu} &=& \int S^{\mathrm{eff}}_{\nu} \, \expo{-\tau_{\nu}}\, \ud\tau_{\nu} \, . 
\end{IEEEeqnarray} Neglecting proportionality factors, the flux depression is obtained by substituting \eqn{eq10} into \eqn{eq1}, \begin{IEEEeqnarray}{rCl} \label{eq11} \mathcal{A}_{\nu} &=& \int \int \int \alpha_{\nu}\, S^{\mathrm{eff}}_{\nu}\, \expo{-\tau_{\nu}}\, \ud z\,\rho\,\ud\rho\,\ud\phi\, , \end{IEEEeqnarray} where the integrand is evaluated with the constraint that the emergent rays are directed towards the observer. As the observer is very far from the star, the emergent rays are parallel to each other. Consequently, the last equation is written in terms of an infinitesimal volume element, \begin{IEEEeqnarray}{rCl} \label{eq12} \mathcal{A}_{\nu} &=& \int_{\text{star}} \alpha_{\nu}\, S^{\mathrm{eff}}_{\nu}\, \expo{-\tau_{\nu}}\, \ud^{3}r\, . \end{IEEEeqnarray} The integration in \eqn{eq12} is performed over the entire volume of the star. 3D box-in-a-star models of stellar atmospheres have Cartesian geometry and span a minute surface area of the stars they represent \citep{2012JCoPh.231..919F,2013A&A...557A..26M}. The flux spectrum from the modelled star is (approximately) reproduced by shifting the box tangentially across the spherical surface. This is represented by two integrations: one over the volume of the box and the other over the unit hemisphere. Again neglecting proportionality factors, \begin{IEEEeqnarray}{rCl} \label{eq13} \mathcal{A}_{\nu} &\approx& \int \int_{\text{box}} \alpha_{\nu}\left(\bm{r};\Omega\right)\, S^{\mathrm{eff}}_{\nu}\left(\bm{r};\Omega\right)\, \expo{-\tau_{\nu}\left(\bm{r};\Omega\right)}\, \ud^{3}r\, \ud\Omega \, , \end{IEEEeqnarray} where the functional dependence of the integrand has been made explicit for clarity. The position vector $\bm{r}$ specifies a position within the box, and the solid angle $\Omega$ specifies the direction of the emergent rays. The infinitesimal solid angle satisfies $\ud\Omega=\ud\mu\,\ud\phi$, where $\mu=\cos\theta$. 
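The formal solution \eqn{eq10} is easily verified numerically in a toy, depth-only setting. The Python sketch below (not part of the derivation; the optical depth grid and source function are invented) integrates $D_{\nu} = \int S^{\mathrm{eff}}_{\nu}\,e^{-\tau_{\nu}}\,d\tau_{\nu}$ on a discrete grid and recovers the analytic result for a constant effective source function, $D_{\nu} = S^{\mathrm{eff}}_{\nu}\,(1 - e^{-\tau_{\mathrm{max}}})$.

```python
# Toy check of the formal solution (10): D = int S_eff exp(-tau) dtau.
# With S_eff constant, the analytic answer is S_eff * (1 - exp(-tau_max)).

import math

def depression(tau_grid, s_eff):
    """Trapezoidal estimate of int S_eff(tau) exp(-tau) dtau."""
    total = 0.0
    for k in range(len(tau_grid) - 1):
        dtau = tau_grid[k + 1] - tau_grid[k]
        f0 = s_eff[k] * math.exp(-tau_grid[k])
        f1 = s_eff[k + 1] * math.exp(-tau_grid[k + 1])
        total += 0.5 * (f0 + f1) * dtau
    return total

n, tau_max = 4000, 20.0
taus = [tau_max * k / (n - 1) for k in range(n)]
D = depression(taus, [1.0] * n)
print(abs(D - (1.0 - math.exp(-tau_max))) < 1e-4)  # True
```

The factor $e^{-\tau_{\nu}}$ is the attenuation that suppresses contributions from large optical depths in the discussion of the example below.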
After changing the order of integration, the contribution function is inferred to be, \begin{IEEEeqnarray}{rCl} \label{eq14} \mathcal{C}_{\nu}\left(\bm{r}\right) &=& \int \alpha_{\nu}\left(\bm{r};\Omega\right)\, S^{\mathrm{eff}}_{\nu}\left(\bm{r};\Omega\right)\, \expo{-\tau_{\nu}\left(\bm{r};\Omega\right)}\, \ud\Omega\, . \end{IEEEeqnarray} This represents the contribution of \emph{a point within the box} to the observed absolute flux depression in the line, at frequency $\nu$. The integrated line strength contribution function follows immediately, \begin{IEEEeqnarray}{rCl} \label{eq15} \mathcal{C}\left(\bm{r}\right) &=& \int \int \alpha_{\nu}\left(\bm{r};\Omega\right)\, S^{\mathrm{eff}}_{\nu}\left(\bm{r};\Omega\right)\, \expo{-\tau_{\nu}\left(\bm{r};\Omega\right)}\, \ud\Omega\, \ud\nu \, . \end{IEEEeqnarray} \subsection{Rotational broadening} \label{rotational broadening} Line broadening caused by the rigid rotation of the star must be included during post-processing. This broadening will affect the monochromatic quantity $\mathcal{A}_{\nu}$ and hence $\mathcal{C}_{\nu}$. Following \citet{1990A&A...228..203D}, the broadened specific intensity is, \begin{IEEEeqnarray}{rCl} \label{eq16} I^{\text{broad}}_{\nu} &=& \broad{I_{\nu}}\, , \end{IEEEeqnarray} where $\mathcal{B}$ is a functional which broadens its argument according to, \begin{IEEEeqnarray}{rCl} \label{eq17} \mathcal{B}\left[x\left(v,\theta,\phi\right)\right] &=& \frac{1}{2\uppi}\int x\left(v-V\sin\iota\sin\theta\cos\psi,\theta,\phi\right) \, \ud\psi\, . \end{IEEEeqnarray} Here $v=c\frac{\Delta\nu}{\nu}$ is the Doppler speed, $V$ is the rotation speed of the star in the line forming region, $\iota$ is the inclination angle of the rotation axis with respect to the observer, and the integral is over an interval of $2\uppi$. 
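The action of the broadening functional $\mathcal{B}$ of \eqn{eq17} can be illustrated numerically. The toy Python sketch below (invented Gaussian profile and rotation speed, chosen only for the test) applies the $\psi$ average to a depression profile and checks that the velocity-integrated profile is unchanged by the broadening, anticipating the invariance used in the next subsection.

```python
# Toy illustration of the broadening functional (17) applied to a Gaussian
# depression profile, with a numerical check that rigid rotation leaves the
# velocity-integrated profile (hence the line strength) unchanged.

import math

def profile(v):
    return math.exp(-v * v)  # invented Gaussian line depression

def broadened(v, v_shift_max, n_psi=200):
    """(1/2pi) * int profile(v - v_shift_max * cos(psi)) dpsi."""
    total = 0.0
    dpsi = 2.0 * math.pi / n_psi
    for j in range(n_psi):
        psi = (j + 0.5) * dpsi
        total += profile(v - v_shift_max * math.cos(psi)) * dpsi
    return total / (2.0 * math.pi)

def integrate(f, v_max=30.0, n=3000):
    dv = 2.0 * v_max / n
    return sum(f(-v_max + (k + 0.5) * dv) * dv for k in range(n))

area0 = integrate(profile)                      # unbroadened integrated profile
area1 = integrate(lambda v: broadened(v, 3.0))  # V*sin(iota)*sin(theta) = 3 (toy)
print(abs(area1 - area0) < 1e-6)                # True
```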
Retracing the steps above, one obtains a rotationally-broadened contribution function, \begin{IEEEeqnarray}{rCl} \label{eq18} \mathcal{C}_{\nu}\left(\bm{r}\right) &=& \int \broad{\alpha_{\nu}\left(\bm{r};\Omega\right)\, S^{\mathrm{eff}}_{\nu}\left(\bm{r};\Omega\right)\, \expo{-\tau_{\nu}\left(\bm{r};\Omega\right)}} \, \ud\Omega\, . \end{IEEEeqnarray} (In deriving this expression, it is necessary to move $\mathcal{B}$ within the integral of \eqn{eq10}. This is valid because the atmosphere is assumed to be sufficiently shallow that $V$ does not vary across its depth.) The integrated line strength $\mathcal{A}$ should not be affected by the rotation of the star \citep{1992oasp.book.....G}. The adopted broadening formalism is consistent with this: integrating \eqn{eq17} across the line profile, \begin{IEEEeqnarray}{rCl} \label{eq19} \int \mathcal{B}\left[x\left(v,\theta,\phi\right)\right] \, \ud v &=& \int x\left(v,\theta,\phi\right)\,\ud v\, , \end{IEEEeqnarray} which implies that the contribution function $\mathcal{C}$ is not affected by the rotation of the star. \subsection{Mean formation depth} \label{mean formation depth} The interpretation of the contribution function as a probability density function for line formation \citep{1972SoPh...24..255S,1986A&A...163..135M} suggests a formalism for defining the mean formation value of some quantity $q$ with respect to a line, \begin{IEEEeqnarray}{rClrCl} \label{eq20} \mathbb{E}\left[q\right] &=& \frac{\int q\left(\bm{r}\right) \,\mathcal{C}\left(\bm{r}\right) \, \ud^{3} r}{\int \mathcal{C}\left(\bm{r}\right)\, \ud^{3} r}\, , \end{IEEEeqnarray} and the variance might then be defined in the usual way as $\mathbb{E}\left[q^{2}\right]-\mathbb{E}\left[q\right]^{2}$.
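On a discrete grid the mean-formation formalism of \eqn{eq20} is a simple weighted average, with the contribution function acting as an unnormalized probability density. The Python sketch below is a toy illustration (the grid in $q$ and the sampled values of $\mathcal{C}$ are invented for the test, not taken from a model atmosphere).

```python
# Toy illustration of the mean and variance of a quantity q over the
# line-forming region, Eq. (20), with C as an unnormalised weight.

def mean_formation(q, C):
    norm = sum(C)
    mean = sum(qi * ci for qi, ci in zip(q, C)) / norm
    var = sum(qi * qi * ci for qi, ci in zip(q, C)) / norm - mean * mean
    return mean, var

# Invented example: q = log10(tau_500) on a grid, C peaked around q = -1.
q = [-3.0, -2.0, -1.0, 0.0, 1.0]
C = [0.1, 0.6, 1.0, 0.6, 0.1]
mean, var = mean_formation(q, C)
print(round(mean, 6), round(var, 6))  # -1.0 0.833333
```

By symmetry of the invented weights about $q=-1$, the mean formation depth here is $-1$ exactly.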
For example, $\mathbb{E}\left[q=\log_{10}\tau^{\text{r}}_{500}\right]$ may be used to define the mean formation depth, where $\log_{10}\tau^{\text{r}}_{500}$ is the logarithmic radial optical depth at wavelength $\lambda=500\,\mathrm{nm}$, a standard measure of depth in stellar atmospheres. \subsection{Relationship to the line flux response function} \label{line response functions} A related spectral line formation diagnostic is the response function: the linear response of the line to a perturbation in the atmosphere \citep{1971SoPh...20....3M,1975SoPh...43..289B,1977A&A....54..227C}. The line flux response function $\mathcal{R_{\nu}}$ must satisfy \begin{IEEEeqnarray}{rCl} \label{eq22} \delta \mathcal{A}_{\nu} &\equiv& \int \mathcal{R_{\nu}}\left(\bm{r}\right)\,\delta\beta\left(\bm{r}\right) \,\mathrm{d}^{3} r\, , \end{IEEEeqnarray} where $\beta$ is an atmospheric parameter (such as temperature). Following \citet{1986A&A...163..135M}, the response function is obtained by adapting the above derivation. The effective transport equation \eqn{eq7} is perturbed so that $D_{\nu}\rightarrow D_{\nu}+\delta\beta\,D^{1}_{\nu}$, and the equation for $D^{1}_{\nu}$ is solved, \begin{IEEEeqnarray}{rCl} \label{eq23} \frac{\udD^{1}_{\nu}}{\ud z} &=& \alpha_{\nu}\left(S^{\mathrm{eff},1}_{\nu} - D^{1}_{\nu}\right) , \end{IEEEeqnarray} where the perturbed effective source function is, \begin{IEEEeqnarray}{rCl} \label{eq24} S^{\mathrm{eff},1}_{\nu} &=& \frac{\partial S^{\mathrm{eff}}_{\nu}}{\partial\beta} + \frac{1}{\alpha_{\nu}}\,\frac{\partial\alpha_{\nu}}{\partial\beta}\, \left(S^{\mathrm{eff}}_{\nu}-D_{\nu}\right). 
\end{IEEEeqnarray} The response function is then found by following the previous derivation, but with $D_{\nu}$ and $S^{\mathrm{eff}}_{\nu}$ replaced by $D^{1}_{\nu}$ and $S^{\mathrm{eff},1}_{\nu}$, respectively, \begin{IEEEeqnarray}{rCl} \label{eq25} \mathcal{R_{\nu}}\left(\bm{r}\right)&=& \int \mathcal{B}\left[\alpha_{\nu}\left(\bm{r};\Omega\right)\, S^{\mathrm{eff},1}_{\nu}\left(\bm{r};\Omega\right)\, \expo{-\tau_{\nu}\left(\bm{r};\Omega\right)}\right] \,\ud\Omega\, , \end{IEEEeqnarray} and the response function to the integrated line strength is $\mathcal{R}=\int\mathcal{R_{\nu}}\,\ud\nu$. Response functions can be used to study the sensitivity of a spectral line to specific atmospheric variables \citep{1991A&A...250..445A}. To identify the line forming regions, however, contribution functions must be used. \subsection{Comparison to the plane-parallel line flux contribution function} \label{comparison} In the limit of plane-parallel symmetry, the integrand in \eqn{eq14} loses its dependence on the azimuthal angle $\phi$: $\alpha_{\nu}\left(\bm{r};\Omega\right)\rightarrow\alpha_{\nu}\left(z;\mu\right)$, $S^{\mathrm{eff}}_{\nu}\left(\bm{r};\Omega\right)\rightarrow S^{\mathrm{eff}}_{\nu}\left(z;\mu\right)$, $\tau_{\nu}\left(\bm{r};\Omega\right)\rightarrow\tau^{\text{r}}_{\nu}\left(z;\mu\right)/\mu$, where $z$ is the geometrical height and $\tau^{\text{r}}_{\nu}$ is the radial optical depth. The 3D contribution function $\mathcal{C}_{\nu}$ thus tends to a plane-parallel contribution function $\mathcal{C}^{\text{pp}}_{\nu}$, \begin{IEEEeqnarray}{rCl} \label{eq26} \mathcal{C}^{\text{pp}}_{\nu}\left(z\right) &=& 2\uppi \int \alpha_{\nu}\left(z;\mu\right) S^{\mathrm{eff}}_{\nu}\left(z;\mu\right)\, \expo{-\tau^{\text{r}}_{\nu}\left(z;\mu\right)/\mu} \, \ud \mu\, .
\end{IEEEeqnarray} This expression is the same\footnote{After expressing the contribution function in that paper with respect to geometrical height instead of radial optical depth, they are the same up to a factor of $2\uppi$, which arises from those authors integrating over spherical polar angle $\mu$ instead of solid angle $\Omega$.} as that derived by \citet{1996MNRAS.278..337A} in the context of 1D models, i.e.~with the implicit assumption of plane-parallel symmetry. \section{Example: 3D non-LTE spectral line formation} \label{example} \begin{figure} \begin{center} \includegraphics[scale=0.31]{temp.pdf} \caption{Material temperature in a vertical slice of a temporal snapshot of a 3D hydrodynamic \textsc{stagger} model atmosphere \citep{2013A&A...557A..26M}. The snapshot has effective temperature $T_{\mathrm{eff}}\approx6430\,\mathrm{K}$, logarithmic surface gravity (in CGS units) $\log_{10}g=4$, and solar-value abundances. Contours of standard logarithmic optical depth $\log_{10}\tau^{\text{r}}_{500}=-3\text{,}-1\text{, and }1$ (from top to bottom) are overdrawn.} \label{fig2} \end{center} \end{figure} \begin{figure*} \begin{center} \includegraphics[scale=0.31]{lte.pdf} \includegraphics[scale=0.31]{nlte.pdf} \caption{The contribution function $\mathcal{C_{L}^{\mathrm{box}}}$ across the oxygen triplet (777.25nm to 777.85nm in vacuum) corresponding to the snapshot slice in Fig.~\ref{fig2}, in LTE (left) and in non-LTE (right). These quantities are expressed in the same arbitrary units. Contours of standard logarithmic optical depth $\log_{10}\tau^{\text{r}}_{500}=-3\text{,}-1\text{, and }1$ (from top to bottom) are overdrawn.} \label{fig3} \end{center} \end{figure*} The high excitation permitted \text{OI\,777\,nm}~lines are known to show departures from local thermodynamic equilibrium (LTE) \citep{1974A&A....31...23S,1995A&A...302..578K,2009A&A...500.1221F}.
It is interesting to consider how the lines form within the atmosphere when LTE is imposed, and to see what happens once this assumption is relaxed. To that end, the contribution function $\mathcal{C}$ was implemented into the 3D non-LTE radiative transfer code \textsc{multi3d} \citep{2009ASPC..415...87L}. The contribution function for the \text{OI\,777\,nm}~lines was calculated using a model oxygen atom based on those used by \citet{1993ApJ...402..344C}, \citet{1993A&A...275..269K} and \citet{2009A&A...500.1221F}. A temporal snapshot of a 3D hydrodynamic model atmosphere taken from the \textsc{stagger}-grid \citep{2011JPhCS.328a2003C,2013A&A...557A..26M} was used. The model was of a typical turn-off star, with effective temperature $T_{\mathrm{eff}}\approx6430\,\mathrm{K}$, logarithmic surface gravity (in CGS units) $\log_{10}g=4$, and solar-value abundances \citep{2009ARA&A..47..481A}. The solid angle was sampled using Carlson's quadrature set A4 \citep{carlson1963numerical}. \fig{fig2} shows the temperature structure in a vertical slice of the snapshot. While the absolute geometrical depth and width scales are arbitrary, zero geometrical depth is roughly located at the photosphere. Just below this depth is the top of the convection zone: hot, light upflows, observed as wide granules, turnover to form cool, dense downflows, observed as narrow intergranular lanes \citep{2013A&A...557A..26M}. Higher up the atmosphere, reversed granulation patterns can be observed: the material above the hot, light granules expands adiabatically and cools more efficiently, than the material above the intergranular lanes -- a detailed discussion can be found in the appendix of \citet{2013A&A...560A...8M}. The LTE and non-LTE contribution functions in this snapshot slice are shown in \fig{fig3}. They are both normalized such that the maximum value of the non-LTE contribution function is 1.0. 
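A convenient scalar for such comparisons is the equivalent width, $W=\int(1-F_{\nu}/F^{\mathrm{c}}_{\nu})\,d\nu$, i.e.~the integral of the normalized flux depression across the line. The toy Python sketch below (with invented Gaussian depression profiles and line depths; these are not the paper's data) computes $W$ by quadrature and shows how a deeper line yields a width ratio of the kind quoted below.

```python
# Toy illustration: equivalent width W = int (1 - F/F_c) d(lambda) of a
# Gaussian depression, and the ratio of two such widths.  Depths 0.4 and 0.6
# are invented numbers chosen so that the ratio comes out to 1.5.

import math

def equivalent_width(depth, width, lam_max=5.0, n=4000):
    """W for a Gaussian depression depth * exp(-(lam/width)^2), midpoint rule."""
    dlam = 2.0 * lam_max / n
    total = 0.0
    for k in range(n):
        lam = -lam_max + (k + 0.5) * dlam
        total += depth * math.exp(-(lam / width) ** 2) * dlam
    return total

w_weak = equivalent_width(0.4, 1.0)     # shallower line
w_strong = equivalent_width(0.6, 1.0)   # deeper line, same shape
print(round(w_strong / w_weak, 2))      # 1.5
```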
The contribution functions reveal that the formation of the lines is qualitatively similar in the two cases. There is no contribution at large optical depths. This can be attributed to the attenuation factor $\expo{-\tau_{\nu}}$ in the expression for the contribution function, \eqn{eq15}: deep within the atmosphere, photons are more likely to be absorbed than to penetrate the atmosphere and reach the observer. Line formation is also inefficient in the optically thin layers. This is by virtue of the line opacity which appears in \eqn{eq15}: in these layers, $\alpha^{\mathrm{l}}_{\nu}\approx0$, so that there is little line absorption. Between these two extremes, line formation becomes possible once the optical depth becomes small, and the factor $I^{\mathrm{c}}_{\nu}-S^{\mathrm{l}}_{\nu}$ appearing in the effective source function becomes non-zero, i.e.~once the re-emitted light is no longer equal to the absorbed light \citep{1986A&A...163..135M,1996MNRAS.278..337A}. \fig{fig3} shows that imposing LTE inhibits the formation of the \text{OI\,777\,nm}~lines. Photon losses in the triplet lines themselves \citep{2005ARA&A..43..481A} drive departures from LTE. The line opacity is larger, and the line source function is smaller, than their LTE counterparts \citep{2009A&A...500.1221F}. This leads to a significant strengthening of the lines, correlated with the reversed granulation patterns seen in \fig{fig2}. The equivalent width ratio is $W^{\text{3D non-LTE}}/W^{\text{3D LTE}}\approx1.5$. \section{Summary} \label{conclusion} The flux profile observed from a star has contributions from all parts of its atmosphere: thus, the line flux contribution function is a function of 3D space. In this paper I have shown how to derive the contribution function to the absolute flux depression that emerges from 3D box-in-a-star model stellar atmospheres.
The result can be used like other 1D contribution functions \citep{1986A&A...163..135M,1996MNRAS.278..337A} to help one visualize and understand spectral line formation in stellar atmospheres. \section*{Acknowledgements} \label{acknowledgements} I thank Martin Asplund and Remo Collet for advice on the original manuscript, and Jorrit Leenaarts for providing \textsc{multi3d}. This research was undertaken with the assistance of resources from the National Computational Infrastructure (NCI), which is supported by the Australian Government. \bibliographystyle{mn2e}
\section{Introduction} The recent discovery of interaction-driven topological phases\cite{turner2013beyond,PhysRevB.87.155114,PhysRevB.87.235104,PhysRevLett.114.185701}, such as fractional quantum-Hall states, spin liquids, Kondo insulators and bosonic topological phases, has created great interest in otherwise mundane band insulators. Some questions of fundamental interest in band insulators are: how do correlations drive a band insulator into a metal and a Mott insulator (MI), and are correlated band insulators fundamentally different from simple band insulators which have identical charge and spin excitation gaps? Theoretically these issues have been addressed in all dimensions, from one to infinity, through various studies of model Hamiltonians such as the ionic Hubbard model\cite{0953-8984-15-34-319,PhysRevB.70.155115,PhysRevLett.92.246405,PhysRevLett.97.046403, PhysRevLett.98.016402,PhysRevB.78.075121,PhysRevLett.98.046403, PhysRevB.79.121103,PhysRevB.89.165117,hoang2010metal}, a two-sublattice model with inter-orbital hybridization\cite{PhysRevB.80.155116,PhysRevB.87.125141}, a two-band Hubbard model with crystal field splitting\cite{PhysRevLett.99.126405} and a bilayer model with two identical Hubbard planes\cite{PhysRevB.59.6846,PhysRevB.73.245118,PhysRevB.75.193103,PhysRevB.76.165110,0295-5075-85-3-37006}. The ionic Hubbard model, which comprises a two-sublattice system with orbital energies $V$ and $-V$ and a local Coulomb repulsion, drew a lot of attention after the pioneering work of Garg et al.\cite{PhysRevLett.97.046403}, which showed that correlations can turn a band insulator into a metal and, for higher interaction strengths $U$, into a Mott insulator.
The $U-V$ phase diagram, found through an iterated perturbation theory (IPT) solution of the self-consistent impurity problem within dynamical mean field theory (DMFT), exhibited a finite metallic region, which transformed into a line at large $U$ and $V$, as should be the case in the exactly known atomic limit. Later studies, using a modified form of IPT and the numerical renormalization group at zero temperature ($T=0$), and a continuous-time quantum Monte Carlo (CTQMC) study, while confirming the existence of an intervening metallic phase, were not in agreement about the extent of the metallic region. Furthermore, one could ask if there exist parameters other than the interaction strength that could induce metallicity in band insulators, and what the interplay of interactions with such an athermal parameter would be. In this work, we have reconciled the results from the IPT and CTQMC studies, while also answering the latter question within a two-orbital Hubbard model with on-site repulsion, $U$, between electrons of opposite spin. The novelty of our model is embodied by a parameter ``$x \in [0,1]$'' which may be interpreted as the degree of ionicity, while $1-x$ is concomitantly interpreted as the degree of covalency. Such a parametrization permits us to explore the interplay of ionicity and covalency in interacting band insulators. So for $x=1$, we obtain purely ionic band insulators\cite{PhysRevLett.97.046403}, while for $x=0$, the model reduces to purely covalent band insulators\cite{PhysRevB.80.155116}. An investigation of correlations in polar-covalent insulators is important in its own right. The characteristic charge gap in these insulators is of the order of a few meV and is set by the inter-orbital hybridization between partially filled bands\cite{PhysRevB.80.155116, PhysRevB.78.033109}. The canonical examples of covalent band insulators are FeSi and FeSb$_2$\cite{PhysRevLett.71.1748,PhysRevB.72.045103}.
In these systems, the charge gap closes at temperatures that are low compared to the gap size, and the spectral weight in the optical conductivity is transferred to high frequencies ($\approx$ 1 eV) above the gap edge. These two features point strongly to an important role of electronic correlations in the covalent band insulators. Here, the interaction-driven metallic region found in Ref.~\cite{PhysRevLett.97.046403} for the purely ionic Hubbard model is shown analytically to be just a line of measure zero in the $U-V$ plane. One of the main findings is that, while the two extremes of $x=0$ and $x=1$ are indeed band insulators, albeit of different kinds, the $x=0.5$ case turns out to be a metal even in the non-interacting limit. Further, the metal at $U=0$ turns into a correlated band insulator even for infinitesimal interactions, and a re-entrant metallic phase is found at higher interactions, beyond which a Mott insulator is obtained. We find a rich phase diagram in the $U-T$ plane that is strongly dependent on the degree of covalency (or ionicity). This chapter is organized as follows: In Sec.~\ref{Model5.1}, we define the model and the methods chosen to study correlation effects in different kinds of band insulators. In Sec.~\ref{Model5.2}, we first discuss the analytical results at zero and finite temperatures, and then present and discuss our numerical results. Finally, in Sec.~\ref{Model5.3}, we present our conclusions.
In the second quantized notation, the Hamiltonian reads, \begin{align} {\cal{H}} &= -\mu \sum_{i\alpha\sigma} \hat{n}_{i\alpha\sigma}+ \sum_{ij\alpha\beta\sigma} t^{\alpha\beta}_{ij}(c^{\dagger}_{i\alpha\sigma}c^{\phantom \dagger}_{j\beta\sigma} +h.c) {\nonumber} \\ &+\sum_{i\alpha\sigma} \frac{U}{2} \hat{n}_{i\alpha\sigma} \hat{n}_{i\alpha\bar{\sigma}} \\ &= \sum_{k\sigma} \begin{pmatrix} c^\dagger_{kA\sigma} & c^\dagger_{kB\sigma} \end{pmatrix} \mathbf{{H}_{\sigma}(k)} \begin{pmatrix} c^{\phantom\dagger}_{kA\sigma} \\ c^{\phantom\dagger}_{kB\sigma} \end{pmatrix} \nonumber \\ & + \sum_{i\alpha\sigma} \frac{U}{2} \hat{n}_{i\alpha\sigma} \hat{n}_{i\alpha\bar{\sigma}} \label{eq:Ham} \end{align} where $c^{\dagger}_{i\alpha\sigma} (c_{i\alpha\sigma})$ creates (annihilates) an electron at lattice site $i$, in orbital $\alpha=A/B$ with $S_z$ eigenvalue $\sigma$. We set the chemical potential $\mu$ = $\frac{U}{2}$ so that each unit cell has a total average occupancy of 2 (i.e.\ half filling). The unit cell thus consists of two orbitals A and B. An equivalent interpretation is the consideration of sublattices A, B. The latter is usually chosen for the ionic Hubbard model. In the equation~\eqref{eq:Ham}, $\mathbf{{H}_{\sigma}(k)}$ comprises orbital energies, intra-unit-cell hybridization and nearest neighbour inter-unit-cell hopping, namely \begin{align} \mathbf{{H}_{\sigma}(k)} & = {\mathbf{{H}^{\sigma}(k)}}_{intra} + {\mathbf{{H}}^{\sigma}(\mathbf{k})}_{inter}{\nonumber} \,. \end{align} We are mainly interested in local single particle electron dynamics, which is given by the momentum sum of the lattice Green's function, \begin{equation} \mathbf{G}_{\sigma}(\omega^+) = \sum_{{\mathbf{k}}} \left[(\omega^+ + \mu)\mathbb{I} - \mathbf{{H}_{\sigma}(k)} - \mathbf{{\Sigma}_{\sigma}}(\mathbf{k},\omega^+)\right]^{-1}\,, \label{eq:gloc} \end{equation} where $\omega^+ = \omega+i\eta$ and $\eta\rightarrow 0^+$, and $\mathbb{I}$ is the identity matrix. 
We have calculated the local single-particle propagators within the DMFT framework, wherein the single-particle irreducible self-energy $\mathbf{\Sigma}_{\sigma}(\omega^+)$ is local and is determined by solving the auxiliary Anderson impurity model. The local, interacting Green's function (equation~\eqref{eq:gloc}) may be related to the non-interacting Green's function ${\bf{G}}_{0\sigma}(\omega^+)$ through the Dyson equation: \begin{equation} {\bf{G}}^{-1}_{0,\sigma}(\omega^+) = {\bf{G}}^{-1}_{\sigma}(\omega^+)+{\bf{\Sigma}}_{\sigma}(\omega^+) \,. \end{equation} We construct a non-interacting Hamiltonian $\mathbf{{H}_{\sigma}(k)}$ as an interpolation between an ionic band insulator (IBI) and a covalent insulator (CI) as follows: \begin{align} &{\mathbf{{H}}}_{\sigma}({\mathbf{k}},x) = {\mathbf{{H}}}_{IBI} +{\mathbf{{H}}}_{CI} \nonumber \\ & = x\begin{pmatrix} \Delta & \epsilon_{k\sigma} \\ \epsilon_{k\sigma} & -\Delta \end{pmatrix} + (1-x)\begin{pmatrix} \tilde{\epsilon}_{k\sigma} & V\\ V & -\tilde{\epsilon}_{k\sigma} \end{pmatrix}\,, \label{eq:matrix} \end{align} where the IBI corresponds to $x=1$, while the CI is obtained at $x=0$; hence $x$ represents the fraction of ionicity, while $1-x$ represents covalency. In the IBI, a two-sublattice system has staggered ionic potentials $\Delta$ and $-\Delta$ and a $k$-dependent hybridization ($\epsilon_{k\sigma}$) between sites on sublattices 1 and 2. The CI is characterized by two semicircular bands having opposite signs of the hopping parameter and a $k$-independent hybridization $V$. The diagonal dispersion in the CI corresponds to intra-band electron hopping, while the off-diagonal dispersion in the IBI corresponds to inter-band electron hopping. By varying the parameter $x$ from 1 to 0, we can interpolate smoothly between a purely ionic limit (for $x=1$) and a purely covalent limit ($x=0$). In other words, the percentage of covalency in the ionic band insulator increases as we decrease $x$ from 1 to 0.
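As a quick numerical illustration (a sketch, not part of the derivation; the parameter choices $V=\Delta=0.5$, $\epsilon_k=\tilde{\epsilon}_k\in[-D,D]$ with $D=1$ used throughout this chapter are assumed), the non-interacting band gap of equation~\eqref{eq:matrix} can be scanned as a function of $x$: both pure limits are gapped, while the equal mixture turns out to be gapless.

```python
# Sketch: non-interacting gap of H_sigma(k, x) for V = Delta = 0.5, eps_k in [-D, D].
# The 2x2 matrix has diagonal +/- d and off-diagonal o, so the bands are +/- sqrt(d^2 + o^2).
import numpy as np

def band_gap(x, V=0.5, D=1.0, n=20001):
    eps = np.linspace(-D, D, n)
    d = V * x + eps * (1.0 - x)       # diagonal entry of H(eps, x)
    o = V * (1.0 - x) + eps * x       # off-diagonal entry
    E = np.sqrt(d**2 + o**2)          # upper band; the lower band is -E
    return 2.0 * E.min()              # direct gap between the two bands

print(band_gap(1.0))   # ionic limit:    gap = 2V
print(band_gap(0.0))   # covalent limit: gap = 2V
print(band_gap(0.5))   # equal mixture:  gapless
```

The gapless point at $x=0.5$ anticipates the metallic equal mixture discussed later in this chapter.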
The motivation to build and study the above Hamiltonian is twofold: (a) There are three primary chemical bonds, namely ionic, covalent and metallic bonds. But in practice, a perfect ionic bond does not exist, i.e., any bond has a partial covalency. Quantifying the covalency or the ionicity of a given bond is not without ambiguities\cite{Kittel,Ashcroft}. Depending upon the percentage of covalency in the ionic bond, the properties of the system change drastically\cite{Kittel,Ashcroft}. Equation~\eqref{eq:matrix} is one of the simplest and, of course, non-unique ways of parametrizing a system wherein the bonding has an ionic as well as covalent character. (b) Another perspective, from the viewpoint of real materials, is that the non-interacting Hamiltonian $\mathbf{{H}_{\sigma}(k)}$ could have both inter-unit-cell and intra-unit-cell hybridizations, whereas inter-unit-cell hopping is often neglected in model calculations\cite{PhysRevB.58.R4199}. Throughout this chapter, we have considered the case where V = $\Delta$ and $\epsilon_k$ = $\tilde{\epsilon}_k$. Although these are specific parameter choices, the results we obtain are quite general and applicable to more general choices. The structure of $\mathbf{{H}_{\sigma}(k,\mathnormal{x})}$ determines the form of the impurity Green's functions, which for orbital (or sublattice) 1 is given by, \begin{equation} G_{1\sigma}(\omega^{+}) = \int d\epsilon \frac{ \zeta_{2\sigma}(\omega^{+},\epsilon) \rho_0(\epsilon)}{\zeta_{1\sigma}(\omega^{+},\epsilon) \zeta_{2\sigma}(\omega^{+},\epsilon) - [V(1-x)+\epsilon x]^2}\,, \label{eq:green} \end{equation} where \begin{align*} &\zeta_{1\sigma}(\omega^{+},\epsilon) = \omega+ i\eta + \mu -[Vx+\epsilon(1-x)]-\Sigma_{1\sigma}(\omega^{+})\nonumber\,, \\& \zeta_{2\sigma}(\omega^{+},\epsilon) = \omega+ i\eta + \mu +[Vx+\epsilon(1-x)]-\Sigma_{2\sigma}(\omega^{+})\nonumber\,, \end{align*} and $\rho_0(\epsilon)$ = $\frac{2}{\pi D}$ $\sqrt{1-(\epsilon/D)^2}$.
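Equation~\eqref{eq:green} lends itself to a direct numerical check in the non-interacting limit ($\Sigma_{1\sigma}=\Sigma_{2\sigma}=0$, $\mu=0$), keeping $\eta$ small but finite. The Python sketch below (assumed parameters $V=0.5$, $D=1$; a fine uniform $\epsilon$ grid stands in for the integral) evaluates $D_{1\sigma}(0)=-\mathrm{Im}\,G_{1\sigma}(0)/\pi$ and recovers the gapped ionic limit and the metallic equal mixture.

```python
# Sketch: Eq. (eq:green) at omega = 0 with Sigma = 0 and mu = 0,
# integrated over the semicircular band rho_0 with a small broadening eta.
import numpy as np

def dos_fermi(x, V=0.5, D=1.0, eta=1e-3, n=400001):
    eps = np.linspace(-D, D, n)
    de = eps[1] - eps[0]
    rho0 = (2.0 / (np.pi * D)) * np.sqrt(np.clip(1.0 - (eps / D) ** 2, 0.0, None))
    d = V * x + eps * (1.0 - x)     # zeta_1 = i*eta - d, zeta_2 = i*eta + d at omega = 0
    o = V * (1.0 - x) + eps * x     # off-diagonal matrix element
    z1 = 1j * eta - d
    z2 = 1j * eta + d
    G1 = np.sum(z2 * rho0 / (z1 * z2 - o ** 2)) * de
    return -G1.imag / np.pi

print(dos_fermi(1.0))   # ionic band insulator: ~ 0 (vanishes as eta -> 0)
print(dos_fermi(0.5))   # equal mixture: ~ sqrt(2)*rho0(-V), finite
```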
$D=1$ is our energy unit and $\eta\rightarrow 0^+$ is the convergence factor. At half filling, the Hamiltonian has a mirror-type symmetry between the orbitals, which is reflected in the impurity Green's function and self-energy in the following way, \begin{align} G_{1\sigma}(\omega^{+}) & = - \left[G_{2\sigma}(-\omega^{+})\right]^{*}\,, \\ \Sigma_{1\sigma}(\omega^{+}) & = U - \left[\Sigma_{2\sigma}(-\omega^{+})\right]^{*}\,. \end{align} Using the above self-energy symmetry relation, we can readily show that \begin{equation} \zeta_{1\sigma}(\omega^{+},\epsilon) = -[\zeta_{2\sigma}(-\omega^{+},\epsilon)]^{*}\,, \end{equation} and then equation~\eqref{eq:green} can be written as, \begin{equation} G_{1\sigma}(\omega^{+})=\int d\epsilon \frac{\zeta^*_{1\sigma}(-\omega^{+},\epsilon) \rho_0(\epsilon)}{\zeta_{1\sigma}(\omega^{+},\epsilon) \zeta^*_{1\sigma}(-\omega^{+},\epsilon) - [V(1-x)+\epsilon x]^2}\,. \label{eq:general} \end{equation} We now present a few analytical results for the density of states (DOS) at the Fermi level ($\omega=0$), and subsequently discuss our numerical results. \section{Results and Discussion} \label{Model5.2} One of the most interesting findings in the case of the ionic Hubbard model was that correlations can turn a band insulator into a metal. In general, the distinction between a metal and an insulator in clean systems can be made based on the low-energy single-particle density of states. In the following subsection, we therefore analyse the conditions under which the $\omega\rightarrow 0$ DOS is finite. \subsection{Analytical results: $T=0$} In Ref.~\cite{PhysRevLett.97.046403}, it was assumed that adiabatic continuity to a corresponding non-interacting limit is maintained in the correlated band insulator, as well as in the metallic phase, until, of course, a quantum phase transition to the Mott insulator occurs.
Following the same assumption, we find the conditions for metallic or insulating behaviour, provided a Fermi-liquid expansion of the self-energy holds, namely that $\Sigma(\omega) \stackrel{\omega\rightarrow 0}{\rightarrow} \Sigma(0) + \omega (1-1/Z) + {\cal{O}}(\omega^2)$. The imaginary part of the self-energy then vanishes at zero frequency, ${\rm Im}\Sigma_{1\sigma}(0)=0$, and the corresponding density of states (DOS), $D_{1\sigma}(0)=-\frac{1}{\pi} \mathrm{Im} G_{1\sigma}(0)$, is given by, \begin{align} &D_{1\sigma}(0)=\int \frac{d\epsilon \,\rho_0(\epsilon)\,\frac{\eta}{\pi}}{\eta^2+[\mathrm{Re}(\zeta_{1\sigma}(0,\epsilon))]^2+[V(1-x)+\epsilon x]^2}\,, \label{eq:dos} \end{align} where $\eta\rightarrow 0^+$ and $\mathrm{Re}(\zeta_{1\sigma}(0,\epsilon))$=$[\mu-(Vx+\epsilon(1-x))-\mathrm{Re}\Sigma_{1\sigma}(0)]$. For a metallic system there should be a finite DOS at the Fermi level, while for an insulator it should vanish. In the following subsections we find, for different values of $x$, the conditions for metallicity. \subsubsection{Ionic band insulator ($x=1$)} \label{sec:ihm} By substituting $x=1$ in equation~\eqref{eq:matrix}, the non-interacting $\mathbf{{H}_{\sigma}(k,\mathnormal{x})}$ reduces to: \begin{align} \mathbf{{H}_{\sigma}(k)} = \begin{pmatrix} V & \epsilon_k \\ \epsilon_k & -V \end{pmatrix}\,. \end{align} In the literature this is often called the ``ionic Hubbard model (IHM)'', where there are two broad electronic bands with staggered ionic potentials $V$ and $-V$, and $\epsilon_{k\sigma}$ is the dispersion of the bands. The name ionic band insulator indicates that the non-interacting excitation spectrum ($E_k = \sqrt{\epsilon_k^2+V^2}$) has a gap due to the ionic potential $V$. The DOS at the Fermi level is given by, \begin{align} D_{1\sigma}(0)=\int \frac{d\epsilon \rho_0(\epsilon)\frac{\eta}{\pi}}{[\eta^2+\epsilon^2+(\mu-\mathrm{Re}{\Sigma}_{1\sigma}(0)-V)^2]}\,.
\label{eq:dos1} \end{align} By taking the limit $\eta \rightarrow 0^+$, we get \begin{equation} D_{1\sigma}(0)= \int d\epsilon \rho_0(\epsilon) \delta\left(\sqrt{\epsilon^2+(\mu-\mathrm{Re}\Sigma_{1\sigma}(0)-V)^2}\right)\,. \end{equation} This expression states that if $\mu-\mathrm{Re}\Sigma_{1\sigma}(0)-V=0$, then $D_{1\sigma}(0) = \rho_0(0)$; otherwise $D_{1\sigma}(0) = 0$. For fixed $\mu=U/2$ and $V$, such a condition is never satisfied in the non-interacting case ($U=0$), while in the interacting case, since the real part of the self-energy may be expected to be a monotonically varying function of $U$, the condition can only be satisfied for a specific $U$ corresponding to a given $V$. Thus, the metallic phase (where $D_{1\sigma}(0)\neq 0$) for the purely ionic band insulator ($x=1$) exists only on a single line, rather than in a finite region of the $V-U$ phase diagram. Our numerical results validate this inference, as shown later. \subsubsection{Covalent band insulator ($x=0$)} In this limit, $\mathbf{{H}_{\sigma}(k,\mathnormal{x})}$ can be written as, \begin{align} \mathbf{{H}_{\sigma}(k)} = \begin{pmatrix} \epsilon_k & V\\ V & -\epsilon_k \end{pmatrix}\,. \end{align} Systems defined by the above type of Hamiltonian have been termed ``covalent band insulators'' (CBI), where two electronic bands with dispersions $\epsilon_{k\sigma}$ and $-\epsilon_{k\sigma}$ hybridize through a ${\mathbf{k}}$-independent hybridization $V$. The non-interacting excitation spectrum ($E_k$ = $\sqrt{\epsilon_k^2+V^2}$) is gapped due to the inter-orbital hybridization $V$ (which represents covalency). The opposite sign of the dispersion of the two bands ensures a finite gap in the non-interacting excitation spectrum, $E_k$, for any value of $V$.
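The contrast between the two limits can be previewed numerically from equation~\eqref{eq:dos} by treating $s\equiv\mu-\mathrm{Re}\Sigma_{1\sigma}(0)$ as a free static shift (a hedged sketch with assumed parameters $V=0.5$, $D=1$ and a small but finite $\eta$): for $x=1$ the Fermi-level DOS is appreciable only in an $\mathcal{O}(\eta)$ window around $s=V$, i.e.\ on a line, whereas for $x=0$ it stays negligible for every $s$.

```python
# Sketch: Eq. (eq:dos) at omega = 0 with Re(zeta_1) = s - (V*x + eps*(1-x)),
# scanning the effective static shift s = mu - Re Sigma_1(0).
import numpy as np

def dos_shift(x, s, V=0.5, D=1.0, eta=5e-3, n=20001):
    eps = np.linspace(-D, D, n)
    de = eps[1] - eps[0]
    rho0 = (2.0 / (np.pi * D)) * np.sqrt(np.clip(1.0 - (eps / D) ** 2, 0.0, None))
    rez1 = s - (V * x + eps * (1.0 - x))
    o = V * (1.0 - x) + eps * x
    return np.sum(rho0 * (eta / np.pi) / (eta**2 + rez1**2 + o**2)) * de

s_grid = np.linspace(0.0, 1.0, 401)
d_ionic = np.array([dos_shift(1.0, s) for s in s_grid])
d_coval = np.array([dos_shift(0.0, s) for s in s_grid])
print(s_grid[d_ionic.argmax()], d_ionic.max())  # peak pinned at s = V = 0.5
print(d_coval.max())                            # no metallic point for x = 0
```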
The DOS at the Fermi level, for $x=0$, reduces to the following form (following arguments similar to those for the ionic band insulator), \begin{align} D_{1\sigma}(0)=\int \frac{d\epsilon \rho_0(\epsilon)\frac{\eta}{\pi}}{[\eta^2+V^2+(\mu-\mathrm{Re}{\Sigma}_{1\sigma}(0)-\epsilon)^2]}\,. \label{eq:dos1cbi} \end{align} By taking the limit $\eta \rightarrow 0^+$, we get \begin{align} D_{1\sigma}(0) & = \int d\epsilon \rho_0(\epsilon) \delta\left(\sqrt{V^2+(\mu-\mathrm{Re}\Sigma_{1\sigma}(0)-\epsilon)^2}\right)\nonumber \\ & = 0 \;\;\;{\rm for\; any}\;\; V\neq 0, \end{align} since the argument of the Dirac delta function is positive definite. Thus, for covalent band insulators, interactions cannot close the gap at $T=0$, no matter how strong they are, implying a complete absence of metallicity. \subsubsection{$x=0.5$} The mixing parameter $x=0.5$ corresponds to the case where ionicity and covalency are present in equal proportion, and the structure of $\mathbf{{H}_{\sigma}(k,\mathnormal{x})}$ is given by, \begin{align} \mathbf{{H}_{\sigma}(k)} = \frac{1}{2}\begin{pmatrix} V+\epsilon_{k\sigma} & V+\epsilon_{k\sigma}\\ V+\epsilon_{k\sigma} & -(V+\epsilon_{k\sigma}) \end{pmatrix}\,. \label{eq:matrix1} \end{align} The DOS at the Fermi level is given by, \begin{align} D_{1\sigma}(0)=\int \frac{d\epsilon \rho_0(\epsilon)\frac{\eta}{\pi}}{\eta^2+\frac{(\epsilon+V)^2}{4}+[\mu-\mathrm{Re}{\Sigma}_{1\sigma}(0)-(\frac{\epsilon+V}{2})]^2}\,. \label{eq:dos2} \end{align} In the non-interacting case, i.e., $U=0$ ($\Rightarrow \mu=0$ and $\mathrm{Re} {\Sigma}_{1\sigma}(0)$=0), \begin{align} D_{1\sigma}(0)=\int \frac{d\epsilon \rho_0(\epsilon)\frac{\eta}{\pi}}{\eta^2+\frac{(\epsilon+V)^2}{2}}\,, \end{align} \begin{align} D_{1\sigma}(0) = &\int d\epsilon \rho_0(\epsilon)\delta\left(\frac{\epsilon+V}{\sqrt{2}}\right) = \rho_0(-V)\sqrt{2}\,,\nonumber \\& = \frac{2\sqrt{2}}{\pi D} \sqrt{1-\left(\frac{-V}{D}\right)^2}\,.
\end{align} Thus, the DOS at the Fermi level is finite even in the non-interacting case, i.e., the ground state is a metal. This can also be seen from the non-interacting excitation spectrum, $E_k = (\epsilon_k+V)/\sqrt{2}$, which is gapless. In order to understand whether the non-interacting metallic state survives at finite $U$, we go back to equation~\eqref{eq:dos2}. The DOS at the Fermi level is given by, \begin{align} & D_{1\sigma}(0)=\int\, d\epsilon\, \rho_0(\epsilon)\times {\nonumber} \\ &\delta\left(\sqrt{\frac{(\epsilon+V)^2}{4}+\left(\mu-\mathrm{Re}{\Sigma}_{1\sigma}(0)-\frac{(\epsilon+V)}{2}\right)^2}\right)\,, \end{align} which is finite only if $|V| < D$ and $\frac{U}{2}-\mathrm{Re}{\Sigma}_{1\sigma}(0) = 0$. Consider a weakly interacting system, where $U \rightarrow 0^{+}$. Then, $\frac{U}{2}-\mathrm{Re}{\Sigma}_{1\sigma}(0) \neq 0$ since $\mathrm{Re}\Sigma_{1\sigma}(0) \approx U n_{1\sigma} \neq \frac{U}{2}$. Thus, the metallic phase exists only at $U = 0$, and the system becomes gapped for even an infinitesimal $U$. Hence, apart from the non-interacting case, we again get a band insulator, albeit a correlated one, for a range of $U$ values. With increasing $U$, $\frac{U}{2}-\mathrm{Re}{\Sigma}_{1\sigma}(0)$ decreases, since $n_{1\sigma} \rightarrow 0.5$. A second, correlated metallic phase must therefore arise at a finite $U$ value, when $\mu-\mathrm{Re}{\Sigma}_{1\sigma}(0) = 0$. In other words, an interaction-induced band insulator sandwiched between two metallic phases emerges due to local electronic correlations.
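The statement that an infinitesimal $U$ gaps the $x=0.5$ metal can be illustrated at the simplest (Hartree) level, where $\mathrm{Re}\Sigma_{1\sigma}(0)\approx Un_{1\sigma}$ amounts to adding a staggered shift $m=U(n_{1\sigma}-\tfrac{1}{2})$ to the diagonal of equation~\eqref{eq:matrix1}. The sketch below (assumed parameters $V=0.5$, $D=1$; a crude fixed-point loop, not the full DMFT calculation of this chapter) finds $n_{1\sigma}<0.5$, hence $m\neq 0$ and a gap $\approx\sqrt{2}|m|$ for any $U>0$.

```python
# Hartree-level sketch for x = 0.5: diagonal becomes +/-(d + m) with m = U*(n1 - 1/2),
# off-diagonal stays d = (eps + V)/2; iterate m to self-consistency.
import numpy as np

def hartree_x05(U, V=0.5, D=1.0, n=100001, iters=60):
    eps = np.linspace(-D, D, n)
    w = (2.0 / (np.pi * D)) * np.sqrt(np.clip(1.0 - (eps / D) ** 2, 0.0, None)) * (eps[1] - eps[0])
    d = 0.5 * (eps + V)                  # diagonal = off-diagonal element at x = 0.5
    m = 0.0
    for _ in range(iters):
        E = np.sqrt((d + m) ** 2 + d ** 2) + 1e-15   # upper band; lower band is -E
        n1 = np.sum(w * 0.5 * (1.0 - (d + m) / E))   # lower-band weight on orbital 1
        m = U * (n1 - 0.5)
    gap = 2.0 * np.min(np.sqrt((d + m) ** 2 + d ** 2))
    return n1, m, gap

print(hartree_x05(0.0))   # m = 0: gapless metal
print(hartree_x05(0.2))   # n1 < 0.5, m < 0: a small gap opens immediately
```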
\subsubsection{General case: $0.5 < x < 1.0$ and $0< x < 0.5$} In the general case, the DOS at the Fermi level is given by, \begin{align} D_{1\sigma}(0) = \int d\epsilon \rho_0(\epsilon)\delta\left(g(\epsilon)\right)\,, \end{align} where \begin{equation} g(\epsilon) = \sqrt{(V(1-x)+\epsilon x)^2+(\mathrm{Re}(\zeta_{1\sigma}(0,\epsilon)))^2} \label{eq:ge} \end{equation} and $\mathrm{Re}(\zeta_{1\sigma}(0,\epsilon)) = \mu-\mathrm{Re}\Sigma_{1\sigma}(0)-(Vx+\epsilon(1-x))$. The DOS at the Fermi level is finite only if $g(\epsilon)=0$, which in turn requires \begin{align} & \epsilon = -V \frac{1-x}{x} \label{eq:gen1} \\ {\rm and\;\;\;} & \mu-\mathrm{Re}\Sigma_{1\sigma}(0)- 2V\left(1 -\frac{1}{2x}\right)= 0.\label{eq:gen2} \end{align} If equation~\eqref{eq:gen2} can be satisfied for some $U$, then the DOS will be given by \begin{equation} D_{1\sigma}(0)= \frac{1}{\sqrt{x^2+(1-x)^2}} \frac{2}{\pi D} \sqrt{1-\left(\frac{V(1-x)}{Dx}\right)^2}\,. \end{equation} For a given $x$, whether equation~\eqref{eq:gen2} is satisfied or not is completely decided by $n_{1\sigma}$. For $x > 0.5$, if $n_{1\sigma}<0.5$, then $\mathrm{Re}\Sigma_{1\sigma}(0)\approx Un_{1\sigma} < U/2$, and hence $U/2-\mathrm{Re}\Sigma_{1\sigma}(0)>0$, i.e.\ a specific $U$ might exist which satisfies the condition. If, however, $n_{1\sigma}>0.5$, then the condition is never met for any value of $U$. For $x<1/2$, the condition $\mu-\mathrm{Re}\Sigma_{1\sigma}(0) = -V(1-2x)/x$ is never satisfied unless $n_{1\sigma} > 0.5$. \subsection{Analytical results: $T>0$} At low enough temperatures, the Fermi-liquid form of the self-energy gives ${\rm Im} \Sigma_{1\sigma}(\omega)\propto -(\omega^2+\pi^2T^2)$.
Thus, equation~\eqref{eq:general} becomes \begin{align} & G_{1\sigma}(0) = \int d\epsilon \rho_0(\epsilon)\,\times {\nonumber} \\ & \frac{ [i\mathrm{Im}\Sigma_{1\sigma}(0)+ \mathrm{Re}(\zeta^*_{1\sigma}(0,\epsilon))]}{[\mathrm{Im}\Sigma_{1\sigma}(0)]^2+[V(1-x)+\epsilon x]^2+ [\mathrm{Re}(\zeta^*_{1\sigma}(0,\epsilon))]^2}\,, \end{align} and the corresponding DOS is \begin{align} &D_{1\sigma}(0) = \int \, d\epsilon\, \rho_0(\epsilon)\times {\nonumber} \\ & \frac{ -\mathrm{Im}\Sigma_{1\sigma}(0)/\pi}{[\mathrm{Im}\Sigma_{1\sigma}(0)]^2+[V(1-x)+\epsilon x]^2+ [\mathrm{Re}(\zeta^*_{1\sigma}(0,\epsilon))]^2}\,. \label{eq:dosT} \end{align} Thus, the Dirac delta functions of $T=0$ acquire a finite width due to thermal broadening. However, these resonances are sharply peaked as $T\rightarrow 0$. Although the above integral is always finite, a significant density of states is obtained only when $[V(1-x)+\epsilon x]^2+ [\mathrm{Re}(\zeta^*_{1\sigma}(0,\epsilon))]^2 \le [\mathrm{Im}\Sigma_{1\sigma}(0)]^2$, which would, in general, be satisfied for a range of $U$ values. The integral attains its maximum value only when $[V(1-x)+\epsilon x]^2+ [\mathrm{Re}(\zeta^*_{1\sigma}(0,\epsilon))]^2=0$. Thus, at finite temperatures, the single metallic line of $T=0$ broadens into a metallic region. This is also corroborated by recent CTQMC calculations \cite{PhysRevB.89.165117} for the ionic Hubbard model ($x=1$), and, as shown in the next section, by our results as well. \subsection{Numerical results} We now describe the results obtained by the numerical solution of the auxiliary Anderson impurity model of equation~\eqref{eq:Ham} within DMFT. As the impurity solver, we have used iterated perturbation theory (IPT)\cite{mo-ipt} and hybridization-expansion continuous-time quantum Monte Carlo (HY-CTQMC)\cite{CTQMC1,Bauer} at zero and finite temperature, respectively. In the numerical calculations, we have fixed $V=0.5$.
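The thermal-broadening argument can be made concrete by inserting a constant $\mathrm{Im}\Sigma_{1\sigma}(0)=-\gamma$ (with $\gamma\propto T^2$ in the Fermi-liquid regime) into the finite-temperature DOS expression above and again scanning a static shift $s=\mu-\mathrm{Re}\Sigma_{1\sigma}(0)$ for the ionic case $x=1$. In this assumed-parameter sketch ($V=0.5$, $D=1$), the window of shifts with appreciable $D_{1\sigma}(0)$ grows with $\gamma$: the $T=0$ metallic line fattens into a finite region.

```python
# Sketch: width (in the shift s) of the "metallic" window where D_1(0) exceeds a
# threshold, for x = 1, with Im Sigma_1(0) = -gamma playing the role of T^2 broadening.
import numpy as np

def metallic_width(gamma, V=0.5, D=1.0, n=20001, thresh=0.1):
    eps = np.linspace(-D, D, n)
    de = eps[1] - eps[0]
    rho0 = (2.0 / (np.pi * D)) * np.sqrt(np.clip(1.0 - (eps / D) ** 2, 0.0, None))
    s = np.linspace(0.0, 1.0, 1001)
    dos = np.array([np.sum(rho0 * (gamma / np.pi) / (gamma**2 + (si - V)**2 + eps**2)) * de
                    for si in s])
    return (dos > thresh).sum() * (s[1] - s[0])

print(metallic_width(0.002))  # nearly a line
print(metallic_width(0.05))   # a much broader metallic window
```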
\subsubsection{(a) $x$=1 (ionic band insulator)} \begin{figure}[h!] \centering \includegraphics[scale=0.6]{Fig_1.png} \caption{ (color online) Top panel: $-\delta n=n_{2\sigma}-n_{1\sigma}$ (black circles) and $-\mathrm{Re}\zeta_{1\sigma}(0)$ (red squares) as a function of $U$ ($V=0.5$) obtained within the self-consistent Hartree approximation. The result shows that the system does not metallize for any interaction strength. Lower panel: Zero-temperature IPT results for $\mathrm{Re}\zeta_{1\sigma}(0)$ as a function of $U$ ($V=0.5$) for a fixed $\delta n = 0.0025$. In the inset, the zero crossing has been zoomed to show that only a single zero crossing is obtained as a function of $U$ (with $\eta=10^{-9}$ and energy unit $D = \frac{W}{2} = 2$), thus showing that the system turns metallic only for a single value of the interaction strength, and not over a range of $U$ values.} \label{fig:fig5.1} \end{figure} At the Hartree level, the self-energy is given by $\Sigma_{1\sigma}=Un_{1\bar{\sigma}}$, and hence the excitation spectrum $\left[E_k = \sqrt{\epsilon_k^2+(V-U\frac{\delta n}{2})^2}\right]$ has a gap of $2{\bar{V}}= 2(V-U\frac{\delta n}{2})$, where $\delta n = n_{1\sigma}-n_{2\sigma}$. So for a given $\delta n$, the value of $\mathrm{Re}\zeta_{1\sigma}(0)$ (i.e., $\mu-\mathrm{Re}\Sigma_{1\sigma}(0)-V$) is constant with respect to $U$, and it goes to zero only when $\delta n=0$. Thus, the metallic phase exists in Hartree--Fock (HF) theory only when $V = 0$ ($\Rightarrow \delta n = 0$), and indeed we observe the same, as shown in the top panel of figure~\ref{fig:fig5.1}. Incorporating dynamics beyond the static (HF) theory leads to a completely different picture. The lower panel of figure~\ref{fig:fig5.1} shows IPT results for $\mathrm{Re}\zeta_{1\sigma}(0)$ as a function of $U$ for $\delta n = 0.0025$. With increasing $U$, $\mathrm{Re}\zeta_{1\sigma}(0)$ starts decreasing and vanishes at a critical value, $U_{c}$.
Above this critical interaction strength, $\mathrm{Re}\zeta_{1\sigma}(0)$ changes its sign. As argued in section~\ref{sec:ihm}, the vanishing of $\mathrm{Re}\zeta_{1\sigma}(0)$ signals metallicity, and the result in the lower panel of figure~\ref{fig:fig5.1} (zoomed in the inset) shows that the metal arises at precisely one value of the interaction strength for a given $V$ and a fixed $\delta n$. The only effect of interactions at the chemical potential ($\omega=0$) is to induce a static real shift in the orbital energy, which is solely responsible for turning the ionic band insulator into a metal. However, it is important to note that a pure Hartree shift is not sufficient, and dynamics beyond the Hartree level is necessary to obtain the metal. The same qualitative picture holds for other values of the ionic potential ($V$), and hence, in the $U-V$ phase diagram of the IHM, the metallic phase exists only on a single line, rather than over a finite range of $U$ values, for any $V\neq 0$. It is of course well known that for $V=0$, the usual Hubbard model exhibits a metallic phase for all $U\leq U_{c2}$. \begin{figure}[h!] \centering \includegraphics[scale=0.55]{Fig_2.png} \caption{ (color online) Fermi-level spectral weight $\tilde{A}_{1\sigma}$ as a function of $U$ for different $\beta$ values obtained from HY-CTQMC for $x$=1. (Down arrows: increasing $U$; up arrows: decreasing $U$. Energy unit D = $\frac{W}{2}$ = 1.)} \label{fig:fig5.2} \end{figure} In order to consolidate our conclusions from the analytical arguments of section~\ref{sec:ihm} and the numerical results from IPT, we have carried out finite-temperature CTQMC calculations using the hybridization-expansion algorithm. The Fermi-level spectral weight $\tilde{A}_{1\sigma} = -G_{1\sigma}(\tau=\frac{\beta}{2})/\pi T$ is shown in figure~\ref{fig:fig5.2} as a function of $U/W$ for various temperatures ($\beta=1/T$).
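The quantity $\tilde{A}_{1\sigma}=-G_{1\sigma}(\tau=\beta/2)/\pi T$ is a convenient proxy for the Fermi-level spectral function because $G(\beta/2)=-\int d\omega\, A(\omega)/[2\cosh(\beta\omega/2)]$, and the kernel carries total weight $\pi T$ concentrated within $\sim T$ of $\omega=0$, so that $\tilde{A}\rightarrow A(0)$ as $T\rightarrow 0$. A toy check with an assumed Gaussian spectral function (not one of our computed spectra):

```python
# Sketch: A~ = -G(tau = beta/2)/(pi*T) converges to A(0) at low temperature.
import numpy as np

def atilde(A, beta, wmax=5.0, n=200001):
    w = np.linspace(-wmax, wmax, n)
    # numerically stable form of 1/(2*cosh(beta*w/2))
    k = np.exp(-0.5 * beta * np.abs(w)) / (1.0 + np.exp(-beta * np.abs(w)))
    G_half = -(A(w) * k).sum() * (w[1] - w[0])
    return -G_half * beta / np.pi

A = lambda w: np.exp(-w**2 / 0.5) / np.sqrt(0.5 * np.pi)  # toy spectral function
print(atilde(A, beta=128.0))  # close to A(0)
print(atilde(A, beta=8.0))    # thermally smeared, smaller
```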
We first focus on the results obtained at the lowest temperature reached in our calculations ($\frac{1}{T}=\beta = 128$). At small $U$, the Fermi-level spectral weight $\tilde{A}_{1\sigma}$ is zero up to $\frac{U}{W} = 0.75$. Beyond that, it starts increasing with $U$ and reaches a maximum value ($\sim 0.6$) around $\frac{U}{W} = 1.25$. It then remains roughly constant over a range of $U$ values. As we increase $U$ further, there is a discrete jump (a first-order transition) in $\tilde{A}_{1\sigma}$, beyond which the DOS at the Fermi level is zero. This means that for small $U$ we have a band insulator (BI), at intermediate $U$ the BI crosses over (at U$_{co}$) to a metal (M), and finally the system becomes a Mott insulator (MI) for large $U$ ($>$ U$_{c1}$). At the same temperature ($\beta$ = 128), starting from the MI state and reducing $U$, the system goes to a metallic state at U$_{c2}$, which is smaller than U$_{c1}$. The region between the critical values (U$_{c2}$, U$_{c1}$) corresponds to the coexistence region, where M and MI solutions exist simultaneously. As we increase the temperature beyond that corresponding to $\beta$=32, the transition from M to MI turns into a crossover. Thus, at finite temperature we observe a metallic region in the ionic Hubbard model, rather than a metallic point. \begin{figure}[h!] \centering \includegraphics[scale=0.5]{Fig_3.png} \caption{ (color online) Finite-temperature phase diagram of the ionic band insulator ($x$=1.0) obtained from HY-CTQMC (BI: Band Insulator, M: Metal and MI: Mott Insulator). Inset: Linear fit to $\tilde{A}_{1\sigma}$ in the metallic region at $\beta$=128.} \label{fig:fig5.3} \end{figure} We find the crossover value (U$_{co}$) from BI to M by a linear fit of $\tilde{A}_{1\sigma}$ in the region where it grows linearly with $U$, as shown in the inset of figure~\ref{fig:fig5.3}.
We identified the critical values (U$_{c2}$, U$_{c1}$) based on the low-frequency behaviour of the imaginary part of the self-energy (MI state: -Im $\Sigma_{1\sigma}(i\omega_n) \propto \frac{1}{\omega_n}$; M state: -Im $\Sigma_{1\sigma}(i\omega_n) \propto{\omega_n}$). We have used the same procedure throughout the chapter to find the critical values at each temperature and $x$. The critical values determined at each temperature for $x=1$ are shown in figure~\ref{fig:fig5.3}. As we increase the temperature, the metallic region bounded by the two insulators widens (i.e., the BI region shrinks), while the coexistence region between M and MI narrows and finally disappears at $\beta$=32. By extrapolating the critical values in figure~\ref{fig:fig5.3} to zero temperature, we cannot conclude the existence of a metallic phase. However, as we increase $U$, CTQMC yields an impurity occupancy which is always less than 0.5 (i.e., $n_{1\sigma}<0.5$). This means there will be a single $U$ value where the metallic condition $\mu-\mathrm{Re}\Sigma_{1\sigma}(0)=V$ is satisfied, since $\mathrm{Re}\Sigma_{1\sigma}(0)<\frac{U}{2}$. The existence of a metallic region at finite temperature in the IHM over a broad range of $U$ values is thus mainly due to the proximity of the metallic point at zero temperature, as confirmed by the analytical arguments, IPT and HY-CTQMC. \subsubsection{(b) $x$=0 (covalent band insulator)} We have calculated the low-energy quasiparticle weight ($Z$) and the gap in the spectral function (charge gap: $\Delta_c$), obtained from MO-IPT, and plotted them in figure~\ref{fig:fig5.4} as a function of $U$. As we increase $U$, $Z$ decreases smoothly because of correlations. The charge gap also decreases with $U$. However, we did not observe a closing of the gap in the spectral function for any $U$ before the system goes to the MI state ($Z \sim 0$).
Local electronic correlations in the CBI renormalize the charge gap, but they cannot close it. The critical U at which the system goes from BI to MI is almost twice the bandwidth, owing to the strong bonding character of the covalent insulator.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_4.png}
\caption{ (color online) (a) Quasiparticle weight (Z) as a function of $\frac{U}{W}$ obtained from IPT. (b) Charge gap as a function of $\frac{U}{W}$ obtained from IPT. (We have used $\eta$= 10$^{-2}$ and energy unit D=$\frac{W}{2}$=2)}
\label{fig:fig5.4}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_5.png}
\caption{ (color online) Fermi-level spectral weight as a function of $\frac{U}{W}$ for different $\beta$ values obtained from HY-CTQMC for $x$=0.0 (energy unit D=W/2=1.0)}
\label{fig:fig5.5}
\end{figure}
In figure~\ref{fig:fig5.5} we plot $\tilde{A}_{1\sigma}$ as a function of $\frac{U}{W}$ for different temperatures. The behavior of $\tilde{A}_{1\sigma}$ for $x$ = 0 (CBI) is completely different from the $x$ = 1 (IHM) case. For example, $\tilde{A}_{1\sigma}$ remains zero up to a large value of $\frac{U}{W}$ (=2.0) even though both insulators have the same bandwidth, i.e., the BI phase in the CBI persists up to large U values. The rise of $\tilde{A}_{1\sigma}$ with U is rather sharp, and it is finite only over a narrow range of U values compared with the IHM. As we increase U, the system first evolves from BI to M (at U$_{co}$) and finally goes to the MI state at the critical value U$_{c1}$. The transition from M to MI is of first-order type, and it persists even at higher temperatures. For fixed $\beta$, we have also calculated $\tilde{A}_{1\sigma}$ while decreasing U from the MI state; the system then evolves into a BI state at the critical value U$_{c2}$. The region between U$_{c2}$ and U$_{c1}$ is the coexistence region, where BI and MI solutions coexist.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_6.png}
\caption{ (color online) Finite temperature phase diagram of the covalent band insulator ($x$=0.0) in the $T$ vs $U$ plane (energy unit D=W/2=1.0).}
\label{fig:fig5.6}
\end{figure}
We extract the critical values at each temperature by the procedure mentioned earlier and plot them in figure~\ref{fig:fig5.6}. We observe a BI phase over a broad range of U values. At low temperature the metallic region exists over a narrow range of U values, and it broadens as we increase the temperature. The coexistence region (U$_{c2}$, U$_{c1}$) between BI, M, and MI shrinks as the temperature increases. The critical values obtained from HY-CTQMC at low temperature confirm that there is no metallic point in the CBI at zero temperature, consistent with the analytical arguments and the IPT results.
\subsubsection{(c) $x$=0.5 (equal ratio of ionicity and covalency)}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_7.png}
\caption{ (color online) Non-interacting spectral function for $x$=0.5 (we have used $\eta$=10$^{-2}$ and energy unit D = $\frac{W}{2}$ = 1)}
\label{fig:fig5.7}
\end{figure}
The non-interacting spectral function A($\omega$)=$\rho_{1\sigma}(\omega)+\rho_{2\sigma}(\omega)$, plotted in figure~\ref{fig:fig5.7} for $x$ = 0.5, has a finite DOS at the Fermi level; its value, 0.7797, is in good agreement with the analytical expression $\sqrt{2}\rho(-V)$, i.e., the non-interacting ground state is a metal.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_8.png}
\caption{ (color online) Fermi-level spectral weight as a function of $\frac{U}{W}$ obtained from HY-CTQMC for different $\beta$ values and $x$=0.5 (energy unit D = $\frac{W}{2}$ = 1).}
\label{fig:fig5.8}
\end{figure}
The Fermi-level spectral weight $\tilde{A}_{1\sigma}$ as a function of U for different temperatures is plotted in figure~\ref{fig:fig5.8}.
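The quoted Fermi-level value 0.7797 follows from $\sqrt{2}\rho(-V)$ if $\rho$ is the semicircular (Bethe-lattice) density of states with D = W/2 = 1, an assumed form here since the lattice is not restated in this section: $\sqrt{2}\,\rho(-0.5)=\sqrt{6}/\pi\approx 0.7797$. A quick numerical check:

```python
import math

D, V = 1.0, 0.5  # half-bandwidth D = W/2 and ionic potential V

def rho(eps, D=1.0):
    """Semicircular (Bethe-lattice) density of states -- assumed form."""
    return (2.0 / (math.pi * D**2)) * math.sqrt(max(D**2 - eps**2, 0.0))

a_fermi = math.sqrt(2.0) * rho(-V, D)
print(f"sqrt(2)*rho(-V) = {a_fermi:.4f}")  # -> 0.7797
```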
At low temperature ($\beta$=128), as we increase U, $\tilde{A}_{1\sigma}$ passes through a minimum before the system goes to the MI state, and it reaches its highest value of 0.6 at $\frac{U}{W}$=1.1. The extrapolation of $\tilde{A}_{1\sigma}$ to the U = 0 axis confirms that there is finite weight at the Fermi level. There are two metallic regions, one at small $\frac{U}{W}$ ($<$0.5) and another at large $\frac{U}{W}$ (=1.1). An interaction-induced band insulator emerges between these two metallic regions, and the MI state lies at large U values. As the temperature is raised, the minimum of $\tilde{A}_{1\sigma}$ observed at low temperature starts filling up.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_9.png}
\caption{ (color online) Fermi-level spectral weight as a function of $\frac{U}{W}$ obtained from (a) HY-CTQMC and (b) IPT for $x$=0.5 and $\beta$ = 300. (We have used $\eta$=10$^{-2}$ and energy unit D=$\frac{W}{2}$ = 1)}
\label{fig:fig5.9}
\end{figure}
Next, we must address whether the metallic behavior observed at low U values is merely due to thermal broadening. To check this, we performed low-temperature ($\beta$=300) calculations using HY-CTQMC and plotted $\tilde{A}_{1\sigma}$ in figure~\ref{fig:fig5.9}(a). The extrapolation of $\tilde{A}_{1\sigma}$ to the $\frac{U}{W}$=0 axis confirms that there is a metal at U=0, i.e., the emergence of the metal is not due to thermal broadening. Once we turn on U, the non-interacting metal turns into a band insulator; that is, the correlations create a band insulator. It is well known that correlations in a metal create an MI (with a charge gap of order U). That local electronic correlations turn a band insulator into a metal seems counter-intuitive, but the creation of a band insulator by electronic correlations seems even more so. Without carrying out the low-U calculations, we would not know this behavior of $\tilde{A}_{1\sigma}$.
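A common way a Fermi-level weight such as $\tilde{A}_{1\sigma}$ is estimated in CTQMC studies is from the imaginary-time Green's function, $\tilde{A}\approx-\frac{\beta}{\pi}G(\beta/2)$, which averages A($\omega$) over a thermal window of width $\sim$T around $\omega$=0; this is also why a metallic-looking weight can appear at finite temperature through thermal broadening alone. The estimator form is an assumption here (the text does not spell out how $\tilde{A}_{1\sigma}$ is computed); it can be checked against a model Lorentzian spectral function:

```python
import math

def kernel(x):
    # 1/(2 cosh x), evaluated without overflow at large |x|
    ax = abs(x)
    return math.exp(-ax) if ax > 30.0 else 1.0 / (2.0 * math.cosh(x))

def a_tilde_est(beta, A, wmax=5.0, n=20001):
    """Estimate the Fermi-level weight as -(beta/pi) G(beta/2), using
    G(beta/2) = -Integral dw A(w)/(2 cosh(beta w/2)) (trapezoid rule)."""
    h = 2.0 * wmax / (n - 1)
    g = 0.0
    for i in range(n):
        w = -wmax + i * h
        wt = 0.5 if i in (0, n - 1) else 1.0
        g -= wt * h * A(w) * kernel(beta * w / 2.0)
    return -(beta / math.pi) * g

gamma = 0.5
lorentzian = lambda w: (gamma / math.pi) / (w * w + gamma * gamma)

# At beta = 128 the thermal window around w = 0 is narrow, so the
# estimator lies close to the true A(0) = 1/(pi*gamma)
print(a_tilde_est(128.0, lorentzian), 1.0 / (math.pi * gamma))
```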
Since HY-CTQMC is a strong-coupling method, we have also performed IPT calculations at $\beta$=300 and plotted $\tilde{A}_{1\sigma}$ as a function of $\frac{U}{W}$ in figure~\ref{fig:fig5.9}(b). We can clearly see that at U=0 there is a metal with $\tilde{A}_{1\sigma}$ = 0.76, in close agreement with the exact value derived from the analytical expression. IPT also predicts two metallic regions, a BI region between them, and an MI region at large U. The critical U values predicted by IPT differ somewhat from those of HY-CTQMC, owing to the lack of correct strong-coupling behavior in interpolative methods.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_10.png}
\caption{ (color online) Finite temperature phase diagram ($T$ vs $U$) for $x$=0.5 covalency (energy unit D=W/2=1).}
\label{fig:fig5.10}
\end{figure}
In figure~\ref{fig:fig5.10} we plot the critical values as a function of $\frac{U}{W}$ obtained from HY-CTQMC at different temperatures. According to the analytical predictions, the metallic behavior at U=0 turns into a BI as U increases, and a second metallic phase may exist at larger U, before the BI turns into an MI, provided the condition $\mu-\mathrm{Re}\Sigma_{1\sigma}(0)=0$ is satisfied. The extrapolation of the critical lines to the zero-temperature axis gives a metallic point at U=0, which turns into a BI with increasing U at the critical value U$_{mb}$. Finally, the BI goes to the MI state without a second metallic phase. The second metallic phase is absent because the metallic condition ($\mu-\mathrm{Re}\Sigma_{1\sigma}(0)=0$) is never satisfied, since n$_{1\sigma}<$0.5 for any value of U. At finite temperature, we observe two metallic phases together with the BI and MI up to $\beta$ = 100; beyond this the BI region disappears, and only the M and MI regions survive.
The metallic behavior observed at finite temperature for large U values is due to thermal broadening, and the region between the critical values (U$_{c2}$, U$_{c1}$) corresponds to the coexistence of M and MI solutions.
\subsubsection{(d) $0<x<0.5$ and $0.5<x<1.0$}
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_11.png}
\caption{ (color online) Non-interacting occupancy (a) of orbital 1, (b) of orbital 2, and (c) gap in the spectral function as a function of $x$ (V = 0.5 and energy unit D=$\frac{W}{2}$=1)}
\label{fig:fig5.11}
\end{figure}
Before analyzing the interacting results for general $x$, let us focus on the non-interacting case. In figure~\ref{fig:fig5.11} we plot the occupancy of each orbital, n$_{1\sigma}$ and n$_{2\sigma}$, and the gap in the spectral function as a function of $x$ at V=0.5. When $x$ = 1, due to the staggered ionic potential, orbital 2 is almost filled while orbital 1 is almost empty, and the gap in the spectral function is of order V=0.5. As we decrease $x$ from 1 down to $x$=0.5, the orbital occupancies change little; on the other hand, the gap in the non-interacting spectrum decreases smoothly and reaches zero at $x$=0.5. As we decrease $x$ below 0.5, the occupancy of orbital 2 decreases while that of orbital 1 increases, and the gap in the spectral function grows. For $x$=0 the gap reaches a value of 0.5, and the corresponding occupancy of each orbital is 0.5.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_12.png}
\caption{ (color online) $T$ vs $U$ phase diagram for $0.5<x<1.0$ (energy unit D = $\frac{W}{2}$=1).}
\label{fig:fig5.12}
\end{figure}
We have calculated the critical values as a function of $\frac{U}{W}$ for $0.5<x<1.0$ at different temperatures and plotted them in figure~\ref{fig:fig5.12}.
As we decrease $x$ from 1, the metallic region between BI and MI widens (i.e., a small amount of covalency favors metallicity) and the coexistence region between the metal and MI shrinks. From the analytical results, the condition to be satisfied for a metallic phase at zero temperature is $\mu-\mathrm{Re}\Sigma(0) = -\frac{1-2x}{x}$. For $0.5<x<1$ it can be satisfied at a single U value, and only when n$_{1\sigma}<$0.5. From the finite-temperature data we find n$_{1\sigma}<$0.5, which means a metallic point can exist at a single U value at T=0. The metallic region observed at finite temperature is therefore due not only to thermal broadening but also to the existence of this metallic point at T=0. As we decrease $x$ from 1, the crossover value U$_{co}$ decreases. From this we can at least speculate that the T=0 metallic point shifts toward lower U values and reaches U=0 at some value of $x$. Indeed, we have found this for $x$=0.5, where the non-interacting ground state itself is a metal.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_13.png}
\caption{ (color online) $T$ vs $U$ phase diagram for $0.0<x<0.5$ (energy unit D = $\frac{W}{2}$=1)}
\label{fig:fig5.13}
\end{figure}
In figure~\ref{fig:fig5.13} we plot the critical values for $0.0<x<0.5$ at different temperatures. As we decrease $x$ from 0.5, the metallic region sandwiched between BI and MI shrinks (i.e., the critical value for the crossover from BI to M increases), while the coexistence region between BI and MI grows. At zero temperature, for $0.0<x<0.5$, the metallic condition $\mu-\mathrm{Re}\Sigma(0)=-\frac{1-2x}{x}$ can be satisfied at a single U value only when n$_{1\sigma}>$ 0.5. From the finite-temperature data we find n$_{1\sigma}<$ 0.5, which implies that the metallic condition is never satisfied.
The absence of a metallic point at zero temperature is also evident from the behavior of the critical values in figure~\ref{fig:fig5.13} at low enough temperature. The metallic region observed at finite temperature for $0.0<x<0.5$ is due solely to thermal broadening.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{Fig_14.png}
\caption{ (color online) Critical $U$ values vs $x$ phase diagram for V = 0.5 and $\beta$=128 (energy unit D=W/2=1)}
\label{fig:fig5.14}
\end{figure}
\section{Conclusions}
\label{Model5.3}
We have studied the role of local electronic correlations in different kinds of band insulators. Our analytical results predict the presence of a metallic point in the IHM, while it is absent in the case of the CBI. When ionicity and covalency are in equal ratio, the non-interacting ground state is a metal, but the correlations turn this non-interacting metal into a correlated band insulator. We have also derived the conditions for the existence of a metallic phase in the general case. A summary of the numerical results is plotted in figure~\ref{fig:fig5.14}. Our numerical results confirm the analytical predictions of the existence of a metallic point in the IHM and its absence in the CBI at zero temperature. For $x$=0.5, the non-interacting ground state (GS) is a metal, but with correlations the GS changes from a metal to a band insulator. We thus observe an interaction-induced BI when ionicity and covalency are in equal ratio, a phase that is counter-intuitive in light of our fundamental understanding of correlation effects. The value of n$_{1\sigma}$ obtained from HY-CTQMC confirms the existence of a metallic point at zero temperature for $0.5<x<1.0$ and the absence of such a point for $0.0<x<0.5$. The metallic region observed at finite temperature for $0.5<x<1.0$ is much broader than for $0.0<x<0.5$, since there is no metallic point at zero temperature in the latter case.
Electronic correlations favor metallicity when the covalency is smaller than the ionicity, and they have the opposite effect when the covalency is greater than the ionicity. Our results open new directions in the study of electronic correlations in band insulators. Possible experimental systems of relevance for our findings are the titanium-doped perovskite ruthenates SrRu$_{1-x}$Ti$_x$O$_3$ and some of the 3d transition-metal oxides with crystal-field splitting \cite{PhysRevB.76.165128}.
\subsection*{Acknowledgments}
We thank CSIR and DST (India) for research funding. Additional support (MJ) was provided by NSF Materials Theory grant DMR1728457. Our simulations used an open-source implementation\cite{Hafer} of the hybridization-expansion continuous-time quantum Monte Carlo algorithm\cite{Comanac} and the ALPS\cite{Bauer} libraries. The CTQMC simulations were conducted on the computational resources provided by the Louisiana Optical Network Initiative (LONI) and HPC@LSU.
\bibliographystyle{apsrev4-1}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,708
{"url":"http:\/\/press.krej.net\/2dqjl8\/what-are-the-symbols-used-in-texting-9851c6","text":"One evocative symbol is >:-@! Have you ever received a text from someone and for the life of you, you couldn\u2019t figure out what your text buddy was trying to say? By using acronyms and abbreviations in your text messages, you can save characters and type your messages even faster. The U.S. Supreme Court: Who Are the Nine Justices on the Bench Today? symbol in the cmti''#'' (text italic) fonts, so it can be entered as {\\it\\&} in running text when using the default (Computer Modern) fonts. Most common abbreviations used in text messaging are made by taking the first letter of a word or each word in a phrase. Top 10 Texting Abbreviations According to search query data the following text abbreviations are the most requested chat definitions: ROFL means Rolling on floor laughing. See more ideas about symbols, texts, text symbols. The long-faced symbol >: \u2013 ( is used to express that the sender is both angry and sad, while the more taciturn :-ll symbol represents anger by itself. Why Do We Use Text Abbreviations? Smileys symbol is a copy and paste text symbol that can be used in any desktop, web, or mobile applications. All text symbols for Facebook \u03e1 How to use. This is how you need to uncover the meaning of the texting symbols or the emoticons as they are called that you receives. Thank me later! It seems like some symbols can form \"combos\" (like in video games) =) and don't work if \u2026 One common texting symbol is :-\/ which is intended to represent skepticism on the part of the sender. Troy Ave \u00e2\u0080\u009cThe Angela Story,\u00e2\u0080\u009d NBA YoungBoy \u00e2\u0080\u009cSteady\u00e2\u0080\u009d\u2026, Eminem \u00e2\u0080\u009cGnat,\u00e2\u0080\u009d Pop Smoke \u00e2\u0080\u009cWhat You Know Bout\u2026. I know, it happens to me all the time. 
A few were used even before text messaging existed, like ASAP for \\\"as soon as possible.\\\" Many of the abbreviations used in text messaging are easily recognizable even by those who are not technologically savvy, and are also used in email, social networking, and instant messaging. This is a simple online tool that converts regular text into text symbols which resemble the normal alphabet letters. Please be sure to open and click your first newsletter so we can confirm your subscription. Nearly everyone has a mobile phone these days, so text messaging has become one of the most common mediums for using chat slang. NOAA Hurricane Forecast Maps Are Often Misinterpreted \u2014 Here's How to Read Them. STFU means Shut the *freak* up. If you are into textual intercourse or social media you will need a comprehensive text dictionary. \u2665 My large hand-made list of more than a hundred cool characters. Just copy-paste symbols that you like into your status, comments, messages. The above text symbol list contains pretty much every emoji-esque symbol in the unicode standard. There are many text symbols you can use, and most of which can be copied to many applications and programs for sending. For example, \u2018btwn\u2019 stands for between and \u2018hndsm\u2019 stands for handsome. I am guessing the check mark is that they read the msg? 8 Simple Ways You Can Make Your Workplace More LGBTQ+ Inclusive, Fact Check: \u201cJFK Jr. Is Still Alive\" and Other Unfounded Conspiracy Theories About the Late President\u2019s Son. Use them to destroy ambiguity and help your friends experience your text as you want. A smiley face is a facial expression, or emotion in text conversations. Sign Up For Our Newsletter! Millions upon millions of text messages get sent every single day, and as well as having abbreviations and acronyms in them, what is also very common is the use of text message symbols. 
A smiley face is ordinary keyboard characters used in text-based communications to represent a human facial expression. For example, he searches for pictures in Google images or for videos in YouTube. which means the user is both angry and swearing out loud. I know, you\u2019re welcome. Yes, Twin! The following words, codes, special symbols, numbers and additional resources may prove useful to the individual trying to interpret common online slang. Texting symbols are the simplest and most time saving way to express one's state of mind. This table explains the meaning of every smileys symbol. Well here is a text symbol dictionary to keep you on top of the latest texting lingo. And the other reason for huge popularity of Texting Symbols are used, is its a thing of today\u2019s generation and not everyone can understand the symbols. Thank you for subscribing! All Rights Reserved. An example is :\u2019-( which simply means the sender is crying due to sadness. This texting slang dictionary helps you quickly find all the most common abbreviations. Search for symbols, signs, flags, glyphes and emblems matching the query: The way you use an exclamation point can change your dating life. \u03b1 \u03b2 \u03b3 \u03b4 \u03b5 \u03b6 \u03b7 \u03b8 \u03b9 \u03ba \u03bb \u03bc \u03bd \u03be \u03bf \u03c0 \u03c1 \u03c2 \u03c3 \u03c4 \u03c5 \u03c6 \u03c7 \u03c8 \u03c9 \u0391 \u0392 \u0393 \u0394 \u0395 \u0396 \u0397 \u0398 \u0399 \u039a \u039b \u039c \u039d \u039e \u039f \u03a0 \u03a1 \u03a3 \u03a4 \u03a5 \u03a6 \u03a7 \u03a8 \u03a9 Sara is just beginning communication using an AAC system. The long-faced symbol >: \u2013 ( is used to express that the sender is both angry and sad, while the more taciturn :-ll symbol \u2026 Copyright \u00a9 2021 Interactive One, LLC. Get all the acronyms, text abbreviations, keystroke short-cuts and emoticons to keep your text messages, emails, tweets and status updates ahead of the rest. 
Other texting symbols are used to express different levels of anger. \u2026 SMS Texting Dictionary: Text abbreviations, acronyms, texting symbols, emojis and emoticons. A similar variant is the symbol :\u2019 ) which means the user is both happy and crying. Another amusing interpretive texting symbol is <|:o)> which represents Santa Claus. For most of us, our messages seem incomplete without these emoticons. Texting abbreviations (text abbreviations), texting acronyms, SMS language or internet acronyms are the abbreviated language and slang words commonly used with mobile phone text messaging, email and instant messaging (online chat application such as Messenger from Facebook, Instagram, Twitter\u2026). Unfortunately, a lot of older people out there are not familiar with the new and hip lingo that teenagers are using to express themselves in these text messages. Other texting symbols are used to express different levels of anger. A COVID-19 Prophecy: Did Nostradamus Have a Prediction About This Apocalyptic Year. That one says delivery successful. I know, it happens to me all the time. Thank you. Well here is a text symbol dictionary to keep you on top of the latest texting lingo. Well here is a text symbol dictionary to keep you on top of the latest texting lingo. Guides on Alt codes for symbols, cool Unicode characters, HTML entity characters. Texting removes the vocal cues we once used to overanalyze if someone liked us. After you send a text message and go back and look at it what does the arrow symbol mean? Text symbols help in illustrating some of the things we can't normally express in typing. Keep reading to know more about some great texting symbols that you can use on your phone, or even on Facebook. Beyonce Wows Fans Dressed As Lisa Bonet. Vowels in the spellings are usually omitted as it helps in minimizing the number of key strokes. Text symbol writing methods and their descriptions listed. One evocative symbol is >:-@! 
Learn how to read and make your own smiley faces or emoji. Some people who love texting get a little carried away with the symbols, and you may have no clue what others are talking about. LOL, <3 A love heart =(_8^(1) Homer Simpson =|:-)= Uncle Sam >: \u00e2\u0080\u0093 ( Angry, yet sad, GSOH Good Salary, Own Home \/ Good Sense of Humour, ROFLOL Rolling On The Floor Laughing Out Loud, ROTFLMAO Rolling On The Floor Laughing My Ass Off. Your email will be shared with 92q.com and subject to its, The Quicksilva Show with Dominique Da Diva, Man Facing 142 Charges After Snitching on Himself Via Instagram, Chris Brown Shares Name And First Photo Of Newborn Son, Young M.A Tells Us What Took So Long For Her To Drop Her Debut Album \u00e2\u0080\u009cHERStory In The Making\u00e2\u0080\u009d. You can't put symbols into Facebook names due to recent change in Facebook. Another very common texting symbol is :) which means that the sender is feeling happy or playful. Its opposite is the texting symbol :( which means that the sender is currently feeling unhappy. So here is a little cheat sheet from our sister site IndyHipHop.com to help you finding the meanings of these texting codes!. Some texting symbols add an apostrophe to existing symbols to represent crying. Text Messaging Slang. Asterisk * (Star, Times) An asterisk is a star-like symbol (*) used in literature, math, computing, and \u2026 This follows the typical texting practice of using keyboard symbols to make a face representing the current mood or emotion of the person sending. May 24, 2015 - Explore Elizabeth Pina's board \"Texting Symbols & Things\" on Pinterest. Make your own cool text emoticons (also known as kawaii smiley faces and text emoji faces from symbols) or copy and paste from a list of the best one line text art smiley faces. 
The smiley face is used to convey emotion, much in the same way we use facial expressions when we communicate with people face-to \u2026 If you want symbols which represent alphabetical letters, use the generator above - it'll turn your normal text into symbol text using many different unicode symbol alphabets. in the process of still being sent? All symbols in one place. But the tune is in your mind and your hands, not inside the tool you use to play it. Another example is :\u2019-D which, conversely, means the sender is crying with laughter. She wonders if he would benefit from a more tex\u2026 The symbol <3 stands for heart. There are a few different ways the unique language of text messaging is created. Some text \u2026 Others incorporate symbols to spell words or make small pictures. Her school team would like to develop her literacy skills. The Computer Modern fonts replace it with an \"E.T.\" Texting slang involves sending shortened messages between mobile devices. He often sends short text messages to friends and family. Some texting symbols are more interpretive, such as the texting symbol *$which means \"Starbucks.\" Collection of cool computer text symbols and signs that you can use on Facebook and other places. Truly amazing! Recently his Mum noticed that he has been typing in text fields. Then, you can download tons of symbols that you cannot create from the standard keyboard symbols, or you can use the ALT codes and select symbols from there. When i go into options it says \"message sent\" does that mean that the message was not read yet or that it is. Typically, a colon is used for the eyes of a face, unless winking, in which case a semicolon \u2026 It converts text into several symbol sets which are listed in the second text area, and the conversion is done in real-time and in your browser using JavaScript. 
(Asterisk) The asterisk is used to call out a footnote or to refer to an annotation of special terms or \u2026 Have you ever received a text from someone and for the life of you, you couldn\u2019t figure out what your text buddy was trying to say? Write text symbols using keyboard, HTML or by copy-pasting. Many texting symbols for communication both in phone texting messages and in interactive social platforms on the internet are created by combining keyboard symbols. Although many Internet chat rooms have different 'lingo', 'slang', and etiquette, the symbols, special character and numbers listed here are fairly standard throughout the Internet and Chat Room \/SMS\/ Instant Messaging \/ Text Messaging areas. which means the user is both angry and swearing out loud. All symbols such as hearts, flowers, arrows, objects and much more! Most people abbreviate in text message\u2026 Use them on Facebook, Twitter, Instagram or in your blog posts! Samuel has used a symbol-based communication system since he was a child. They wonder how symbol-based AAC will affect her literacy development. Top of the latest texting lingo Who are the Nine Justices on Bench... Mobile devices keep you on top of the latest texting lingo text symbols used express. To recent change in Facebook to help you finding the meanings of these texting!... Phone texting messages and in interactive social platforms on the Bench Today dating life using chat slang similar! Play it quickly find all the time about some great texting symbols or the as. Twitter, Instagram or in your blog posts is in your text as you.... Text dictionary Story, \u00e2\u0080\u009d NBA YoungBoy \u00e2\u0080\u009cSteady\u00e2\u0080\u009d\u2026, Eminem \u00e2\u0080\u009cGnat, \u00e2\u0080\u009d NBA YoungBoy \u00e2\u0080\u009cSteady\u00e2\u0080\u009d\u2026, Eminem \u00e2\u0080\u009cGnat \u00e2\u0080\u009d. Smileys symbol literacy development has a mobile phone these days, so text messaging has become one of the common... 
Maps are often Misinterpreted \u2014 here 's how to read them your own smiley faces emoji... Symbols help in illustrating some of the latest texting lingo make your own smiley faces or emoji letter of word! Characters used in text conversations collection of cool computer text symbols for Facebook \u03e1 how to read them which intended... Keep reading to know more about some great texting symbols that you like into status. Practice of using keyboard symbols to spell words or make small pictures common abbreviations in! Emoticons as they are called that you like into your status, comments, messages your first newsletter we... Tune is in your blog posts help in illustrating some of the texting... Mobile phone these days, so text messaging are made by taking the first letter of a word or word! Most of which can be copied to many applications and programs for sending for symbols, texts, symbols... Your own smiley faces or emoji or for videos in YouTube: ) which means sender... Facebook, Twitter, Instagram or in your blog posts that you like into your,... One 's state of mind the time user is both happy and crying text message\u2026 the above symbol! Type your messages even faster using acronyms and abbreviations in your text as you want for in! Minimizing the number of key strokes which can be copied to many applications and programs sending! Symbol-Based AAC will affect her literacy skills is intended to represent crying your text as you want and., it happens to me all the time can change your dating life on! Destroy ambiguity and help your friends experience your text messages to friends and family beginning communication using an system! Like to develop her literacy development very common texting symbol is: \u2019 -D which, conversely, the! How you need to uncover the meaning of the texting symbol is < |: o ) which! Even faster used to overanalyze if someone liked us more about some great texting symbols or the emoticons they. 
# Showing that a ring is an integral domain

1. Apr 12, 2017

### Mr Davis 97

1. The problem statement, all variables and given/known data
Show that $\mathbb{Z}[\sqrt{d}] = \{a+b\sqrt{d} \ | \ a,b \in \mathbb{Z}\}$, where $d \in \mathbb{Z}$ is fixed, is an integral domain.

2. Relevant equations

3. The attempt at a solution
Do I have to go through all of the axioms to do this? For example, do I have to show that it is an abelian group under addition, that multiplication is associative, and that the distributive property holds, on top of showing that multiplication is commutative, that there is a multiplicative identity, and that there are no zero divisors?
This would seem like a lot of unnecessary, but easy, work, so I was wondering if there is a faster way.

2. Apr 13, 2017

### Staff: Mentor

Well, most of these attributes are quite obvious from the arithmetic rules in this ring. The essential part here is to see that you don't get zero divisors from such extensions. If you want to go the long way, then you have to do it, but only for terms with $\sqrt{d}$ in them; the rest is inherited from $\mathbb{Z}$. Since $d \in \mathbb{Z}$, the shortest answer is probably $\mathbb{Z}[\sqrt{d}] \subseteq \mathbb{R}$. But it makes sense to convince yourself that $\mathbb{Z}[\sqrt{d}]$ has no zero divisors, which is short enough to do and shows which properties of $\mathbb{Z}$ are actually used, in case a ring isn't the integers.

3. Apr 13, 2017

### Mr Davis 97

If I am trying to show that $\mathbb{Z}[\sqrt{d}]$ has no zero divisors, do I proceed by a contradiction argument, such as: assume that $(a+b\sqrt{d})(c + e\sqrt{d}) = 0$, and show that $a=b=c=e=0$?

4. Apr 13, 2017

### Staff: Mentor

Yes. And if you do it step by step, you see the properties of the integers which are needed; e.g., it isn't true for $\mathbb{Z}_6[\sqrt{d}]$.

5. Apr 13, 2017

### Mr Davis 97

Well, if I expand it out and compare the two sides, I get the two equations $ac + bed = 0$ and $ae + bc = 0$, but I don't see how this shows that $a = b = c = e = 0$.

6. Apr 13, 2017

### Dick

This integral domain is actually a field. You should be able to show every non-zero element has a multiplicative inverse. That would show that it's an integral domain.

7. Apr 13, 2017

### Mr Davis 97

I don't see how every element has an inverse. For example, what is the inverse of $2+\sqrt{2}$? It can't be $1-\frac{1}{2}\sqrt{2}$, because $1/2$ is not an integer.

8. Apr 13, 2017

### Dick

Oh right, sorry. I forgot you were working over the integers, not the rationals. Nevertheless, if there are no zero divisors over the rationals, there won't be over the integers. Correct?

9. Apr 13, 2017

### Mr Davis 97

So how can I show that there are no zero divisors?

10. Apr 13, 2017

### Staff: Mentor

You have $ac + bed = 0$ and $ae + bc = 0$. What does this mean for $ace$?

11. Apr 13, 2017

### Dick

Pick an arbitrary nonzero element of the ring and write down its inverse. Say why it's always well-defined.

Last edited: Apr 13, 2017
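The multiplication rule from post #5 can also be checked by brute force. Here is a small Python sketch (an illustration, not from the thread): it represents $a+b\sqrt{d}$ as the pair $(a,b)$ with the product rule $(ac+bed,\ ae+bc)$, i.e. the formal ring $\mathbb{Z}[x]/(x^2-d)$, which coincides with $\mathbb{Z}[\sqrt{d}]$ exactly when $d$ is not a perfect square.

```python
from itertools import product

def mul(x, y, d):
    """Multiply a + b*sqrt(d) by c + e*sqrt(d), returned as (rational part, sqrt(d) coefficient)."""
    a, b = x
    c, e = y
    return (a * c + b * e * d, a * e + b * c)

def has_zero_divisors(d, bound=5):
    """Brute-force search for nonzero x, y with x*y == 0 among small coefficients."""
    elems = [p for p in product(range(-bound, bound + 1), repeat=2) if p != (0, 0)]
    return any(mul(x, y, d) == (0, 0) for x in elems for y in elems)

# Non-square d: no zero divisors turn up, consistent with the norm
# N(a + b*sqrt(d)) = a^2 - d*b^2 being multiplicative and vanishing only at 0.
assert not has_zero_divisors(2)
assert not has_zero_divisors(-1)   # Gaussian integers
# Perfect square d: the formal pair ring does have zero divisors,
# e.g. (2 - sqrt(4))(2 + sqrt(4)) = 0, since x^2 - 4 factors over Z.
assert has_zero_divisors(4)
```

This is only a finite check over small coefficients, of course; the actual proof goes through the hint in post #10 or through the multiplicativity of the norm.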
# Exponential variables

Suppose we have two exponential random variables $X_1$ and $X_2$ with parameters $\lambda_1$ and $\lambda_2$. Would the sum of them have any recognized distribution? If they have the same parameter $\lambda$, then the sum is a gamma random variable.

-
en.wikipedia.org/wiki/Hypoexponential_distribution – user17762 Mar 11 '12 at 23:57
@SivaramAmbikasaran: So there is no way to write the pdf of this? – alexm Mar 12 '12 at 0:09
@alexm: Sure, if the random variables are independent. Either use convolution directly, or find the cumulative distribution function first. The integration over the triangle that has $x+y \le w$ is straightforward. – André Nicolas Mar 12 '12 at 0:48

Let $Z = X_1 + X_2$. Let $\mathcal{L}_Z(t) = \mathbb{E}(\mathrm{e}^{-t Z})$ be the Laplace transform of the distribution density of $Z$. Notice that the Laplace transform is related to the moment generating function by $\mathcal{L}_Z(t) = \mathcal{M}_Z(-t)$.

The moment generating function of $Z$ is the product of those of the summands, assuming that $X_1$ and $X_2$ are independent, i.e. $\mathcal{M}_Z(t) = \mathcal{M}_{X_1}(t) \mathcal{M}_{X_2}(t)$:
$$\mathcal{M}_Z(t) = \frac{\lambda_1}{\lambda_1 - t} \cdot \frac{\lambda_2}{\lambda_2 - t}$$
Assuming $\lambda_1 \not= \lambda_2$ we can perform the partial fraction decomposition:
$$\mathcal{M}_Z(t) = \frac{\lambda_2}{\lambda_2 -\lambda_1} \cdot \frac{\lambda_1}{\lambda_1 - t} - \frac{\lambda_1}{\lambda_2 -\lambda_1} \cdot \frac{\lambda_2}{\lambda_2 - t}$$
Applying the inverse Laplace transform, we deduce the probability density function:
$$f_Z(x) = \frac{\lambda_2 f_{X_1}(x) - \lambda_1 f_{X_2}(x)}{\lambda_2 -\lambda_1} = \lambda_1 \lambda_2 \frac{\mathrm{e}^{-\lambda_1 x} - \mathrm{e}^{-\lambda_2 x} }{\lambda_2 - \lambda_1} \cdot [ x \geqslant 0 ]$$
This form shows that $f_Z(x) > 0$ for $x > 0$.

The variable $Z$ is said to be hypoexponential, as Sivaram has already commented.

-

Let $W=X+Y$. The fact that $X$ and $Y$ are known exponentials is not enough to determine the distribution of $W$. We will assume that $X$ and $Y$ are independent.

We find the cumulative distribution function $F_W(w)$ of $W$ more or less from basic principles. This is $P(W \le w)$. It is clearly $0$ when $w<0$. To make typing easier, let the parameters of $X$ and $Y$ be $\alpha$ and $\beta$. By independence, the joint density function of $X$ and $Y$ is $\alpha e^{-\alpha x}\beta e^{-\beta y}$ (for $x\ge 0$, $y\ge 0$).

The probability that $W \le w$ is the integral of the joint density over the triangle bounded by the axes and the line $x+y=w$. So
$$P(W \le w)=\int_{x=0}^w \alpha e^{-\alpha x} \left(\int_{y=0}^{w-x} \beta e^{-\beta y}\,dy\right)\, dx.$$
The inner integral is $1-e^{-\beta(w-x)}$, that is, $1-e^{-\beta w}e^{\beta x}$. So now we need to find
$$\int_0^w \left( \alpha e^{-\alpha x} -\alpha e^{-\beta w}e^{\beta x-\alpha x}\right)\,dx.$$
Again, we are just integrating an exponential. After some simplification we find that if $\alpha\ne \beta$, then
$$P(W \le w)=1 -\frac{\beta e^{-\alpha w}-\alpha e^{-\beta w}}{\beta -\alpha}$$
(for $w \ge 0$). Differentiate to get the density function $f_W(w)$. This is $0$ if $w<0$, and
$$\frac{\alpha\beta}{\beta -\alpha}\left( e^{-\alpha w}- e^{-\beta w}\right)$$
when $w \ge 0$.

A special case: Note that the formula only applies if $\beta \ne \alpha$. When $\beta=\alpha$, the second integral is simply
$$\int_0^w \left(\alpha e^{-\alpha x} -\alpha e^{-\alpha w}\right)\,dx.$$
So we get that in the case of equality,
$$F_W(w)=1-e^{-\alpha w} -\alpha w e^{-\alpha w}$$
(for $w \ge 0$). Again, differentiate to get the density.

The two formulas are not quite as different as they look. If we find the limit of the density function as $\beta$ approaches $\alpha$, we will get the density function of the special case.
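The closed form for $P(W \le w)$ is easy to sanity-check by simulation. The sketch below (my own illustration; the parameter values are arbitrary) compares the formula against a Monte Carlo estimate using only Python's standard library.

```python
import random
from math import exp

def cdf_sum_exp(w, alpha, beta):
    """P(X + Y <= w) for independent exponentials with rates alpha != beta."""
    return 1 - (beta * exp(-alpha * w) - alpha * exp(-beta * w)) / (beta - alpha)

def monte_carlo_cdf(w, alpha, beta, n=200_000, seed=1):
    """Estimate P(X + Y <= w) by direct sampling of X + Y."""
    rng = random.Random(seed)
    hits = sum(rng.expovariate(alpha) + rng.expovariate(beta) <= w for _ in range(n))
    return hits / n

alpha, beta, w = 1.0, 2.0, 1.5
assert abs(cdf_sum_exp(w, alpha, beta) - monte_carlo_cdf(w, alpha, beta)) < 0.01
```

With 200,000 samples the Monte Carlo standard error is about $10^{-3}$ here, so agreement to within 0.01 is a comfortable check.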
CONSESNK - Editorial

Editorialist: Pawel Kacprzak

DIFFICULTY:

Easy

PREREQUISITES:

Sorting, ternary search, minimizing a function

PROBLEM:

For a given set of N segments, all of length L, where S_i denotes the left endpoint of the i-th segment, and two points A \leq B, the goal is to place all N segments inside the range [A, B] in such a way that endpoints of segments have integer coordinates, no segments overlap in points other than their endpoints, and there is no uncovered space between any two consecutive segments. We want to find the minimum cost to do that, where the cost is the sum over all segments of the distances between their initial and final positions. It is guaranteed that the range [A, B] is large enough to contain all N segments.

EXPLANATION:

The first observation is that the final placement of the segments can be uniquely identified by an integer x, denoting the left endpoint in the final position of the left-most of the segments.

The second observation is that if we consider the left endpoints of two different segments, S_i < S_j, then in an optimal final position, the left endpoint of the i-th segment will also be to the left of the left endpoint of the j-th segment. This is somewhat intuitive, and one can prove it by considering the positions of S_i, S_j and x in a fixed final position. From now on, we assume that the segments are given in sorted order, i.e. S_i \leq S_j for i < j.

Thus, we want to find the integer x, the left endpoint of the left-most of the segments, which minimizes the function:

f(x) = \sum\limits_{1 \le i \leq N} |x + (i-1) \cdot L - S_i|

The crucial observation is that this function is weakly unimodal, which means in this problem that there exists a value m such that for x \leq m, f(x) is weakly monotonically decreasing, and for x \geq m, f(x) is weakly monotonically increasing. For such functions, we can use ternary search to find their extremum (in this case the minimum). For the lower bound of the ternary search we set A, and we set B - N \cdot L as the upper bound. Since computing the value of f(x) for a fixed x takes O(N) time, the total time complexity is O(N \cdot \log(B - N \cdot L - A)).

AUTHOR'S AND TESTER'S SOLUTIONS:

Setter's solution can be found here.
Tester's solution can be found here.
Editorialist's solution can be found here.

This can be solved without using ternary search.
If we try to plot f(x) against x, we will observe that the minimum occurs at the median value of S[i]-i.L (for odd N) and between the two medians (inclusive) for even N. The function is monotonically increasing to the right of the minimum and monotonically decreasing to the left of it.
All we have to do now is check whether the minimum lies in the range [a, b-N.L] or to the left or right of it.

The overall complexity of this solution is O(N.logN)

You can find my implementation of this here: Solution

Any simple O(n) solution…

Can someone tell me why I'm getting wrong answer using ternary search. Here is the link to my solution.

"The crucial observation is to notice that this function is weakly unimodal function"

Can you prove this statement? It seems intuitive but I would like a proof to back it up

Is there any simple O(n) solution

why in this problem, if we first take all the snakes to the left of A, then the snakes in the interval (A <= S_i <= B), then the snakes to the right of B, is it giving wrong answer?

can somebody provide the test case?

if the link is not working, you can use this link for the editorialist's solution

one more thing I would like to add: the function f(x) runs from i=0 to i=n-1. If we use i=1; i<=N, then logically it's not right, because for the first S[i], f(x) should be | x + 0*L - S[0] |; think it over and you'll understand.

How can we prove that f(x) is a weakly unimodal function?

@pkacprzak How is that function f(x) generated? I mean, how is it arrived at?

@sahil_g, Actually we could have O(N) on average if we use quickselect for finding the median element.

my program has passed both the test cases but gives wrong answer when I submit it… Can anyone please help me? I think I have missed something…

why am I getting TLE? https://www.codechef.com/viewsolution/13898082

@tommy_trash can you answer why you have used high-low>=10 in your code?

what is wrong in my code https://www.codechef.com/viewsolution/13904747

They'll be linked soon. I don't have the power to put them by myself.

This is the easiest solution for this problem.

Yes, right, I was indexing from 0 by mistake. Thanks!

You need to take some small cases and plot the graph. That will surely help!

@sahil_g would you please tell why the minimum occurs at the median value of S[i]-i.L?
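A minimal sketch of the editorial's ternary-search approach (my own illustration, not the setter's or tester's code). Since f is a sum of absolute values it is convex, so the integer ternary search below is safe even on plateaus:

```python
def min_cost(S, L, A, B):
    """Minimum total displacement, where x is the final left end of the
    left-most segment and the segments are packed end to end from x."""
    S = sorted(S)
    n = len(S)

    def f(x):
        # Total distance moved if the left-most segment's left end sits at x.
        return sum(abs(x + i * L - s) for i, s in enumerate(S))

    lo, hi = A, B - n * L  # feasible range for x (guaranteed nonempty)
    while hi - lo > 2:
        m1 = lo + (hi - lo) // 3
        m2 = hi - (hi - lo) // 3
        if f(m1) < f(m2):
            hi = m2 - 1  # minimum cannot lie at or beyond m2
        else:
            lo = m1 + 1  # minimum value is still attained in [m1+1, hi]
    return min(f(x) for x in range(lo, hi + 1))

# Two segments of length 2 starting at 0 and 10: any x in [0, 8] costs 8.
assert min_cost([0, 10], 2, 0, 20) == 8
```

Each evaluation of f is O(N) and the loop runs O(log(B - N*L - A)) times, matching the complexity stated above.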
\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex} {2.3ex plus .2ex}{\large\bf}} \def\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex{\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex} {2.3ex plus .2ex}{\bf}} \newcommand\Appendix[1]{\def\Alph{section}}{Appendix \Alph{section}} \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{\label{#1}}\def\Alph{section}}{\Alph{section}}} \begin{document} \def{\rm Im}{{\rm Im}} \def{\rm Re}{{\rm Re}} \def\gamma{\gamma} \def{\tilde k}{{\tilde k}} \def{\tilde \alpha}{{\tilde \alpha}} \defSL(2,{\bf C})\ {SL(2,{\bf C})\ } \def\scriptscriptstyle{\scriptscriptstyle} \def$(c_{\ssc R}-c_{\ssc L})/2${$(c_{\scriptscriptstyle R}-c_{\scriptscriptstyle L})/2$} \def{\cal R}{{\cal R}} \def{\cal U}{{\cal U}} \def{\cal A}{{\cal A}} \def\bar\partial{\bar\partial} \def\partial{\partial} \def$(c_{\ssc R}+c_{\ssc L})/2${$(c_{\scriptscriptstyle R}+c_{\scriptscriptstyle L})/2$} \def\omega{\omega} \def{\hbox{e}}{{\hbox{e}}} \def{{\rm d}^2z\over 2\pi}{{{\rm d}^2z\over 2\pi}} \def{\rm D}{{\rm D}} \def{\rm d}{{\rm d}} \def\bar\psi{\bar\psi} \def\bar Q{\bar Q} \def\bar \alpha{\bar \alpha} \def\alpha{\alpha} \def\gamma{\gamma} \def\theta{\theta} \def\bar\xi{\bar\xi} \def\Gamma{\Gamma} \def\Delta{\Delta} \def\delta{\delta} \def\e#1{{\rm e}^{#1}} \def\beta{\beta} \def{\hat g}{{\hat g}} \def{\hat g}{{\hat g}} \def\pp#1{\partial#1\bar\partial#1 - {Q_{#1}\over 4}\sqrt{\hat g}\hat R#1} \def{\it i.e.,}\ {{\it i.e.,}\ } \def{\it e.g.,}\ {{\it e.g.,}\ } \def\kappa{\kappa} \def{\hat c_+}{{\hat c_+}} \def\varphi{\varphi} \def\upsilon{\upsilon} \def\vartheta{\vartheta} \def\varsigma{\varsigma} \def{\tilde\chi}{{\tilde\chi}} \defk_{\ssc R}{k_{\scriptscriptstyle R}} \defk_{\ssc L}{k_{\scriptscriptstyle L}} \def\lor#1#2{#1\leftrightarrow #2} \def\cit#1{\cite{#1}} \def\qq#1{Eq.~\ref{#1}} \def\begin{equation}{\begin{equation}} \def\end{equation}{\end{equation}} \def\begin{eqnarray}{\begin{eqnarray}} 
\def\end{eqnarray}{\end{eqnarray}} \begin{titlepage} \samepage{ \setcounter{page}{1} \rightline{McGill/92-47} \rightline{iassns-hep-92-67} \rightline{hep-th/9210082} \vfill \begin{center} {\Large \bf Chiral non-critical strings\footnote{Talk presented by RCM at the {\it International Workshop on String Theory, Quantum Gravity, and the Unification of the Fundamental Interactions}, held in Rome, Italy, 21-26 September 1992; and at the {\it CAP/NSERC Summer Institute on Quantum Groups, Integrable Models and Statistical Systems}, held at Queen's University, Kingston, Ontario, July 13-18, 1992.}\\} \vfill {\large Robert C. Myers\footnote{rcm@hep.physics.mcgill.ca}\\} \vspace{.25in} {\em Physics Department, McGill University, Ernest Rutherford Building,\\ Montr\'eal, Qu\'ebec, H3A 2T8, Canada}\\ \vspace{0.3cm} and\\ \vspace*{0.3cm} {\large Vipul Periwal\footnote{vipul@guinness.ias.edu/vipul@iassns.bitnet}\\} \vspace{.25in} {\em The Institute for Advanced Study,\\ Princeton, New Jersey, 08540-4920, USA} \end{center} \vfill \begin{abstract} {\rm It is shown that conformal matter with $c_{\scriptscriptstyle L}\not=c_{\scriptscriptstyle R}$ can be consistently coupled to two-dimensional `frame' gravity. The theory is quantized in conformal gauge, following David, and Distler and Kawai. There is no analogue of the $c=1$ barrier found in nonchiral non-critical strings. A non-critical heterotic string is constructed---it has 744 states in its spectrum, transforming in the adjoint representation of $(E_8)^3.$ Correlation functions are calculated in this example, revealing the existence of extra discrete states.} \end{abstract} \vfill} \end{titlepage} \setcounter{footnote}{0} Conformal matter in two dimensions couples to quantum gravity via the conformal anomaly. The coupling is characterized by the central charge of the conformal matter. The central charges for holomorphic and anti-holomorphic fields in a conformal field theory may differ. 
Hence, it is natural to ask how such theories, with $c_{\scriptscriptstyle L}\not=c_{\scriptscriptstyle R},$ interact with quantum gravity. Since these theories have a Lorentz anomaly, in addition to the conformal anomaly, there must be degrees of freedom other than the conformal factor that become dynamical. Under combined scale ($\rho$) and Lorentz ($\chi$) transformations, the zweibein transforms $$ {\hbox{e}}^\pm_{\ \mu} \rightarrow\ \exp\left(\rho\pm i\chi\right){\hbox{e}}^\pm_{\ \mu}\ . $$ Just as the conformal anomaly provides dynamics for the scale factor $\rho$, the Lorentz anomaly provides dynamics for local Lorentz field $\chi$. It is the action that governs these fields that we study here. This paper is organized as follows: Sect.~1 briefly reviews results from the Liouville field treatment of nonchiral non-critical strings. In sect.~2, we determine an analogous conformal field theory representation of chiral gravity. We consider the additional Lorentz moduli, and the gravitational dressings in this new theory. Finally, we derive some critical exponents. In sect.~3 we give an example, the $\jmath$--string, and compute the partition function and correlation functions in this example. Sect.~4 contains some concluding remarks. The present paper is primarily a review of the work appearing in Ref.~\cite{cg}, and we refer the reader there for a more detailed account of this work. Sect.~3 presents some new results about scattering amplitudes for the $\jmath$--string. We note that the coupling of chiral matter to two-dimensional theories of quantum gravity has been previously considered by several authors\cite{related,oz,tlee}. Of particular relevance to the present work are Ref.~\cite{oz}, which studied chiral non-critical strings in light-cone gauge, and Ref.~\cite{tlee}, which also derived the conformal field theory presented in sect.~2. 
\@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{Review of non-critical strings} The initial success in producing an analytic understanding of non-critical strings made use of light-cone gauge\cite{pol}. The present discussion follows the David--Distler--Kawai (DDK) approach\cite{ddk} in conformal gauge. Scattering amplitudes in a (non-supersymmetric) string theory are calculated with a Polyakov path integral over metrics $g_{\mu\nu}$ and matter fields $\psi$ on the two-dimensional string world-sheet \begin{equation} \int {{{\rm D} g}\ {{\rm D}\psi} \over {\hbox{vol.}(\hbox{symmetries})}} \ \exp\!\left(-S_{o}[g,\psi]\right)\ {\cal O}_{\scriptscriptstyle 1}\ldots {\cal O}_{\scriptscriptstyle n}\ \ . \label{ppath} \end{equation} The classical symmetries include both diffeomorphisms and local Weyl rescalings ({\it i.e.,}\ $g\rightarrow e^{2\rho}g$). In the quantum theory, the path integral measure introduces an anomaly for the latter transformations. Polyakov\cite{geo} showed that this anomaly is proportional to $c_m-26$, where $c_m$ is the central charge for the matter fields. In critical string theories, one chooses the matter fields with $c_m=26$ so that the anomaly vanishes. In this case, diffeomorphisms and Weyl rescalings are both symmetries of the quantum theory, and all of the local degrees of freedom of the metric decouple. A non-critical string theory is one with $c_m\not=26$, and so the only symmetry volume divided out in \qq{ppath} is that of diffeomorphisms. Hence the path integral still includes a path integral over the scale factor of the metric. 
Choosing conformal gauge, $g_{\mu\nu}={\hbox{e}}^{2\rho}{\hat g}_{\mu\nu}(m),$ and fixing diffeomorphisms \`a la Faddeev-Popov, the path integral includes a factor of the form \begin{equation} \int {\rm d} m\ {\rm D}\rho\ \exp\!\left[{c_m-26\over24\pi}\int{\rm d}^2\!x\, \sqrt{\hat{g}}\left\{(\hat{\nabla}\rho)^2+\hat{R}\,\rho\right\}\right] \label{path1} \end{equation} where ${\rm d} m$ stands for an integration over the moduli space of the world-sheet, and $\hat{R}$ is the curvature scalar for the background metric. The scale factor action can be regarded as the Jacobian, which arises when the original functional measures for the matter and ghost fields, defined in terms of the full metric $g$, are replaced by measures based on the background metric ${\hat g}$. The problem is now to understand the measure ${\rm D} \rho,$ which is the Riemannian measure induced by $$ (\delta\rho,\delta\rho) \equiv \int {\rm d}^2x \ \sqrt{g}\ (\delta \rho)^2 = \int {\rm d}^2x \ e^{2\rho} \sqrt{{\hat g}}\ (\delta \rho)^2. $$ It is much more convenient to use the translation-invariant measure ${\rm D}_{\scriptscriptstyle 0}\rho,$ induced by $$ (\delta\rho,\delta\rho)_{\scriptscriptstyle 0} \equiv \int {\rm d}^2x \sqrt{{\hat g}}\ (\delta \rho)^2, $$ which allows one to treat the functional integral over the scale factor as a standard quantum field theory. The DDK ansatz\cite{ddk} is that \qq{path1} is replaced by \begin{equation} \int {\rm d} m\ {\rm D}_{\scriptscriptstyle 0}\rho\ \exp\!\left[{c_m-25\over24\pi}\int{\rm d}^2\!x\, \sqrt{\hat{g}}\left\{(\hat{\nabla}\!\rho)^2+\hat{R}\,\rho\right\}\right]\ \ . \label{path2} \end{equation} This result was later rigorously derived in Ref.~\cite{mmdk}. Note that for the purposes of the present discussion, we have assumed that a local counterterm has been introduced to produce a vanishing cosmological constant on the world-sheet. Given this field theory, one can perform several interesting calculations. 
First, the off-diagonal components of the stress tensor are \begin{eqnarray} T_{zz} &=& -{c_m-25\over 6}\ \big(\partial\rho\partial\rho-\partial^2\!\rho\big)\ , \nonumber\\ \bar T_{\bar{z}\bar{z}} &=& -{c_m-25\over 6}\ \big(\bar\partial\rho\bar\partial\rho- \bar\partial^2\!\rho\big)\ . \label{stresp} \end{eqnarray} The central charge computed from \qq{stresp} is $c_\rho=26-c_m$. Therefore summing the contributions from the scale factor, the ghosts and the matter fields, one finds that the total central charge vanishes. This result verifies that \qq{path2} correctly represents the integral over the world-sheet metrics, since $c_{tot}=0$ ensures that the total path integral is independent of the choice of the background metric ${\hat g}$. Another aspect of these theories is that when a spinless primary field $\Phi_m$ in the matter theory is inserted on the world-sheet, it acquires a gravitational dressing $e^{\beta\rho}$. This combination produces an operator of conformal dimension (1,1), whose position is integrated over the surface, ${\cal O}=\int {\rm d}^2\!x\,\sqrt{{\hat g}}\ e^{\beta\rho}\Phi_m$. Finally, one may calculate various critical exponents for these theories\cite{ddk}. For instance, the string susceptibility $\Gamma$ is defined in terms of the fixed area partition function $$ Z(A)=\left\langle\,\delta\!\left(\int\!{\rm d}^2x\,\sqrt{\hat{g}}\, {\hbox{e}}^{\alpha\rho} -A\right)\,\right\rangle\propto A^{\Gamma-3}\ \ . $$ One finds the following result \begin{equation} \Gamma=2+{h-1\over12}\left[25-c_m+\sqrt{(25-c_m)(1-c_m)}\right] \label{ggaamm1} \end{equation} where $h$ is the genus of the world-sheet. Note that $\Gamma$ has a simple linear dependence on the genus $h$, and also that it becomes complex for $c_m>1$ (and $c_m<25$). Therefore these calculations seem to give nonsensical results in this regime. This is the well-known $c_m=1$ barrier. 
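For example, evaluating \qq{ggaamm1} for pure gravity ($c_m=0$) on the sphere ($h=0$) gives \begin{equation} \Gamma=2-{1\over12}\left[25+\sqrt{25}\right]=-{1\over2}\ , \end{equation} the familiar pure-gravity susceptibility, while at $c_m=1$ one finds $\Gamma=2+2(h-1)$, which vanishes on the sphere.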
In the following sections, we will investigate these three aspects for chiral non-critical strings: background independence, gravitational dressings, and critical exponents. \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{Chiral non-critical strings} In this section, we consider chiral non-critical string theories in which the central charges of the holomorphic and anti-holomorphic matter fields are different. To introduce chiral matter fields ({\it e.g.,}\ a Weyl fermion) on a curved world-sheet, one must introduce a zweibein or frame field. The classical symmetries are then diffeomorphisms, Weyl scalings and local Lorentz transformations, but anomalies arise in the quantum theory. Using a diffeomorphism-invariant regularization, these anomalies may be described by the non-local effective action\cite{leu} $$ {1\over 96\pi}\int R(x) G(x,y) [(c_+-26) R(y) -ic_-U(y)] $$ where $G$ is the inverse of the scalar Laplacian, $R$ is the curvature scalar, and $U\equiv2\nabla\!\cdot\omega$ is given by the divergence of the spin connection $\omega_\mu$. We also define $c_\pm \equiv (c_{\scriptscriptstyle R}\pm c_{\scriptscriptstyle L})/2.$ An important new feature of the chiral theories is that there exists a local counterterm, $\int \omega^\mu\omega_\mu$, which may be added to this anomaly action. Therefore the coefficient of this term will appear as a free parameter, $\xi$, in these theories, which reflects the ambiguity in the choice of the regularization scheme. Generically then, diffeomorphisms are the only symmetry which survives in the quantum theory.
Choosing a background gauge ${\hbox{e}}^\pm_{\ \mu}=\exp(\rho\pm i\chi)\, \hat{\hbox{e}}^\pm_{\ \mu}$, the integration over all zweibein reduces to \begin{equation} {{\rm D}{\hbox{e}}\over\hbox{vol.}(\hbox{diffeo.})}={\rm d} m\ {\rm D}\rho\ {\rm d} n\ {\rm D}\chi\ \times\ (Faddeev\!-\!Popov\ determinant) \label{foot} \end{equation} where ${\rm d} n$ is an integration over additional Lorentz moduli, discussed in sect.~2.1. The functional measures for $\rho$ and $\chi$ above are defined in terms of the full zweibein ${\hbox{e}}^\pm$, but as in the nonchiral case they are exchanged for measures based on the background zweibein $\hat{{\hbox{e}}}^\pm$. This change of the two measures introduces Jacobian factors, which are in fact identical to the factor found in nonchiral gravity. The final functional integral over the zweibein may be written as $\int {\rm d} m\,{\rm d} n\,{\rm D}_{\scriptscriptstyle 0}\rho\,{\rm D}_{\scriptscriptstyle 0}\theta\, {\hbox{e}}^{-S_{\rm cft}},$ where \begin{equation} S_{\rm cft} = \int {{\rm d}^2\!x\sqrt{\hat{g}}\over24\pi}\left[X \left\{(\hat{\nabla}\!\rho)^2+\hat{R}\,\rho\right\} +{\xi}\left\{(\hat{\nabla}\!\theta)^2 -(\hat U+{ic_-\over 2\xi}\hat R)\theta\right\}\right], \label{rhoact} \end{equation} and we defined $\theta\equiv \chi-{ic_-\over2\xi}\rho,$ so that the two fields appearing in \qq{rhoact} decouple. The part of the action for $\rho$ resembles that for the scale factor in \qq{path2} except that the overall factor of $(c_m-25)$ is replaced by \begin{equation} X=24-c_++\xi+{c_-^2\over4\xi} . \label{oops} \end{equation} In \qq{rhoact}, $\theta$ couples to both $\hat{R}$ and $\hat{U}$.
The latter coupling is not invariant under parity inversion on the world-sheet, and therefore the holomorphic and anti-holomorphic components of the stress tensor differ \begin{eqnarray} T_{zz} &=& -{X\over 6}\ \left(\partial\rho\partial\rho-\partial^2\!\rho\right) -{\xi\over6}\ \left(\partial\theta\partial\theta+i\left({c_-\over2\xi}+1\right) \partial^2\!\theta\right)\ , \nonumber\\ \bar T_{\bar{z}\bar{z}} &=& -{X\over 6}\ \left(\bar\partial\rho\bar\partial\rho-\bar\partial^2\!\rho\right) -{\xi\over6}\ \left(\bar\partial\theta\bar\partial\theta+i\left({c_-\over2\xi}-1\right) \bar\partial^2\!\theta\right)\ .\nonumber \end{eqnarray} As a result the left and right central charges differ, but this produces precisely the desired contributions, $c_{{\rm e},{\scriptscriptstyle R}} = 26 - c_{\scriptscriptstyle R}$ and $c_{{\rm e},{\scriptscriptstyle L}} = 26 - c_{\scriptscriptstyle L}$. Thus the total central charge vanishes for both the holomorphic and anti-holomorphic sectors, ensuring the theory is independent of the background zweibein. Note that the free parameter $\xi$ associated with the regularization ambiguity does not appear in the expressions for the central charge of the combined conformal and Lorentz induced action, but we will find that it does affect the physical exponents. Given a matter operator of weight $(\Delta_{\scriptscriptstyle L},\Delta_{\scriptscriptstyle R}),$ one may expect that it acquires an exponential dressing $\exp[\alpha\rho+ik\theta]$ to make it a (1,1) operator, as in nonchiral gravity\cite{ddk}. Note that this is possible even when $\Delta_{\scriptscriptstyle L}\not=\Delta_{\scriptscriptstyle R}$, because $\exp[ik\theta]$ has different left and right weights due to the form of the stress tensor. Explicitly, one finds $k=\Delta_{\scriptscriptstyle L}- \Delta_{\scriptscriptstyle R}$ for such a dressing. In fact, this description is incomplete as we will find in the next subsection.
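To make the cancellation explicit: with the propagators normalized by \qq{rhoact}, the stress tensors above assign the Coulomb-gas values $c_\rho=1+X$ and $c_\theta=1-\xi\left({c_-\over2\xi}\pm1\right)^2$ to the two scalars in the right/left sectors, and the quoted effective charges then follow as an algebraic identity in $\xi$. A numerical sketch (the per-field charges here are our reading of the stress tensors, not quoted from the text):

```python
def sector_charge(X, xi, cm, sign):
    """c_rho + c_theta in one chirality sector: lambda_rho = 1, while the
    theta stress tensor carries +i(c_-/2xi + sign) d^2 theta, so that
    lambda_theta^2 = -(c_-/2xi + sign)^2 (sign = +1 right, -1 left)."""
    c_rho = 1 + X                                   # c = 1 + K*lambda^2 with K = X
    c_theta = 1 - xi * (cm / (2 * xi) + sign) ** 2
    return c_rho + c_theta

# arbitrary sample values of (c_R, c_L, xi); xi drops out of the totals
for c_R, c_L, xi in ((24.0, 0.0, 3.0), (13.5, 7.25, 0.8), (4.0, 4.0, 2.0)):
    cp, cm = (c_R + c_L) / 2, (c_R - c_L) / 2
    X = 24 - cp + xi + cm**2 / (4 * xi)             # Eq. (oops)
    assert abs(sector_charge(X, xi, cm, +1) - (26 - c_R)) < 1e-9
    assert abs(sector_charge(X, xi, cm, -1) - (26 - c_L)) < 1e-9
    # so matter + ghosts + gravity gives c_R - 26 + (26 - c_R) = 0,
    # and likewise in the left sector
```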
\@startsection{subsection}{2}{\z@}{2.3ex plus .2ex{Lorentz moduli} In \qq{foot}, the integration over all frames includes an ordinary integral over Lorentz moduli $dn$. These moduli are extra global phases that must be integrated over, above and beyond the moduli associated with the integration over conformal equivalence classes of metrics. One can realize these phases by shifting the spin connection, $\omega\rightarrow\omega+\sum_{i=1}^{2h}\lambda_i\,\beta^i$, where $\beta^i$ is a basis for the harmonic differentials on the genus $h$ surface. Since $\beta^i$ are closed and divergenceless, this shift leaves $R$ and $U$ everywhere unchanged. Given any closed contour around a particular nontrivial cycle though, one acquires an additional phase: $\int_a\omega\rightarrow\int_a\omega+\lambda_a$. These global phases are most conveniently incorporated into the Lorentz field $\theta$ (rather than the background geometry). To be precise, one lets $$ {\rm d}\theta={\rm d}\tilde{\theta}+\sum_{i=1}^{2h}\lambda_i\,\beta^i\ . $$ where $\tilde{\theta}$ is a (single-valued) function on the surface. Since the $\beta^i$ are not exact forms, $\theta$ must be multivalued on the surface (or alternatively, $\theta$ contains discontinuities). The measure for the Lorentz moduli, ${\rm d} n=\prod{\rm d}\lambda_i$, would then be included as a part of the functional measure ${\rm D}_{\scriptscriptstyle 0}\theta$. These global phases are associated with the nontrivial cycles on higher genus surfaces. Additional nontrivial cycles occur in correlation functions surrounding the operator insertions. Therefore we should allow $\theta$ to have discontinuities around such cycles. Such cuts would be produced by dressing the matter operators with exponentials of the form: $\exp[\alpha\rho+ik_{\ssc R}\theta_{\scriptscriptstyle R}+ik_{\ssc L}\theta_{\scriptscriptstyle L}]$. 
Now given a matter operator of weight $(\Delta_{\scriptscriptstyle L},\Delta_{\scriptscriptstyle R}),$ one may fix $\alpha$ and $k_{\ssc R}+k_{\ssc L}$ in the gravitational dressing to produce a (1,1) operator. This leaves one free parameter, namely $k_{\ssc R}-k_{\ssc L}$. Hence in chiral gravity, each matter operator acquires a family of gravitational dressings, corresponding to a continuum of Lorentz phases on the contours enclosing the operator. These dressing operators are nonlocal, and hence may seem unnatural. In fact, one is forced to introduce such operators because of another aspect of the theory. Because $\rho$ and $\theta$ couple to background charges, their momentum conservation rules are nontrivial. One finds the following superselection rules \begin{eqnarray} \sum_{a} \alpha^{\scriptscriptstyle (a)} &=& {X\over3} (1-h), \nonumber\\ \sum_{a} k^{\scriptscriptstyle (a)}_{\scriptscriptstyle R} &=& -{{\xi}\over 3}\left[{c_-\over2\xi}+1 \right] (1-h), \nonumber\\ \sum_{\scriptscriptstyle a} k^{\scriptscriptstyle (a)}_{\scriptscriptstyle L} &=& -{{\xi}\over 3}\left[{c_-\over2\xi}-1 \right](1-h). \label{ssrho} \end{eqnarray} These results may be derived by demanding SL(2,{\bf C})\ invariance of scattering amplitudes on the sphere. It is the novel coupling of $\theta$ to the divergence of the background spin connection $\hat{U}$ which leads to $\sum k_{\ssc R}\not=\sum k_{\ssc L}$. As a result, nonvanishing amplitudes must include nonlocal operators of the form discussed above. \@startsection{subsection}{2}{\z@}{2.3ex plus .2ex{String susceptibility} In this section, we examine critical exponents to address the question of the critical `dimension' for chiral gravity. One might begin by demanding the reality of the string susceptibility\cite{ddk}. Define the area operator $\int\!{\rm d}^2x\,\sqrt{\hat{g}}\,{\hbox{e}}^{\alpha\rho}$, with $\alpha= [X-\sqrt{X(X-24)}]/6$.
This definition is chosen to involve only a local dressing operator, and to make no explicit reference to the Lorentz field, which should be irrelevant to defining the area in analogy to the classical geometry. Then one considers the fixed area partition function $$ Z(A)=\left\langle\,\delta\!\left(\int\!{\rm d}^2x\,\sqrt{\hat{g}}\, {\hbox{e}}^{\alpha\rho} -A\right)\,\right\rangle=0\ \ . $$ The result vanishes, except for genus one surfaces,\footnote{For $h=1$, one finds $Z(A)\propto A^{\Gamma-3}$ with $\Gamma=2$, exactly as in nonchiral gravity\cite{ddk}.} because the super-selection rules for $\theta$ in \qq{ssrho} are not satisfied. To properly fix the {\it two} $\theta$ zero mode integrals, one can consider inserting punctures with dressings which absorb the appropriate $\theta$-momenta. If one introduces a single puncture, there is a unique dressing which yields a nonvanishing result for $h=0,1$: $P_{\scriptscriptstyle 0} =\int\!{\rm d}^2x\sqrt{\hat{g}}\exp[\beta\rho+ik_{\ssc R} \theta_{\scriptscriptstyle R}+i{k_{\ssc L}}\theta_{\scriptscriptstyle L}]$ with $\beta=\alpha$ precisely as in the area operator, and $k_{\ssc R}=(h-1)(c_-/6+\xi/3)$ and ${k_{\ssc L}}=(h-1)(c_-/6-\xi/3)$. Now one has $$ Z'(A)=\left\langle\,P_{\scriptscriptstyle 0} \ \delta\!\left(\int\!{\rm d}^2x\,\sqrt{\hat{g}}\,{\hbox{e}}^{\alpha\rho} -A\right)\,\right\rangle\propto A^{\Gamma-2} $$ where \begin{equation} \Gamma=2+{h-1\over12}\left[X+\sqrt{X(X-24)}\right] \label{ggaamm} \end{equation} for $h=0,1.$ This is precisely analogous to the result for nonchiral gravity\cite{ddk} with the replacement $(25-c_m)\rightarrow X$.
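As a consistency check, substituting the nonchiral value $X=25-c_m$ (so that $X-24=1-c_m$) into \qq{ggaamm} reproduces \qq{ggaamm1} identically; a short numerical confirmation:

```python
import math

def gamma_chiral(X, h):
    """Eq. (ggaamm), valid for h = 0, 1 (X(X - 24) >= 0 assumed)."""
    return 2 + (h - 1) / 12 * (X + math.sqrt(X * (X - 24)))

def gamma_ddk(c_m, h):
    """Eq. (ggaamm1), the nonchiral (DDK) susceptibility."""
    return 2 + (h - 1) / 12 * (25 - c_m + math.sqrt((25 - c_m) * (1 - c_m)))

for c_m in (-2.0, 0.0, 1.0):          # matter central charges below the barrier
    for h in (0, 1):
        assert abs(gamma_chiral(25 - c_m, h) - gamma_ddk(c_m, h)) < 1e-12
```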
If we make the assumption that $X$ is positive so that the $\rho$-action \qq{rhoact} has a positive coefficient, then from \qq{oops} requiring that $\Gamma$ be real imposes the restriction \begin{equation} {X-24}\ \ =\ \ \xi+{c_-^2\over4\xi}-c_+>0\ \ .\label{jacket} \end{equation} For any fixed values of $c_{\pm}$, there will exist values of $\xi$ for which this inequality is satisfied. Therefore chiral gravity has no barriers for the allowed values of central charges in the matter sector. Of course, this conclusion relies on results for only $h=0,1,$ since $Z'(A)$ always vanishes for $h>1$. Introducing two punctures yields nontrivial results for higher genera as well. In this case, one finds that $Z''(A)\propto A^{{\widetilde\Gamma}-1}$ where $\widetilde\Gamma =\Gamma+(\alpha_++\alpha_--2\alpha)/\alpha$. $\Gamma$ is given by \qq{ggaamm}, while $\alpha_\pm=\left[X+\sqrt{XY_\pm}\right]/6$ with $$ Y_\pm=\left[2h^2-1\pm2h\sqrt{h^2-1}\,\right]\xi+ \left[2h^2-1\mp2h\sqrt{h^2-1}\,\right]{c_-^2\over4\xi}-c_+ $$ for $h\ge1$. Thus one finds that the genus dependence of the string susceptibility is far more complicated than the simple linear dependence found in \qq{ggaamm1} for nonchiral gravity. Further requiring real exponents by imposing $Y_\pm>0$ only produces constraints which are less restrictive than \qq{jacket}. Finally note that we have ignored ghost zero modes in this entire discussion; a more rigorous account, completely equivalent to that appearing in Ref.~\cite{ddk}, can be given. \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{The $\jmath$--string} We now consider as a concrete example the heterotic non-critical string theory constructed from the holomorphic conformal field theory associated with the $E_{\scriptscriptstyle 8}\times E_{\scriptscriptstyle 8}\times E_{\scriptscriptstyle 8}$ root lattice. One may view the world-sheet matter as 24 chiral bosons, or as 48 chiral fermions with appropriate sums over spin structures.
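For $\xi>0$, \qq{jacket} is the quadratic inequality $\xi^2-c_+\xi+c_-^2/4>0$, with roots $\xi_\pm=\big(c_+\pm\sqrt{c_{\scriptscriptstyle R}c_{\scriptscriptstyle L}}\big)/2$ since $c_+^2-c_-^2=c_{\scriptscriptstyle R}c_{\scriptscriptstyle L}$; any $\xi$ above the larger root therefore works. A small numerical sketch with illustrative central charges:

```python
import math

def jacket(c_plus, c_minus, xi):
    """Left-hand side of Eq. (jacket): X - 24 = xi + c_-^2/(4 xi) - c_+."""
    return xi + c_minus**2 / (4 * xi) - c_plus

c_R, c_L = 24.0, 2.0                            # illustrative central charges
c_plus, c_minus = (c_R + c_L) / 2, (c_R - c_L) / 2
xi_plus = (c_plus + math.sqrt(c_R * c_L)) / 2   # larger root of the quadratic

# the reality condition fails between the two roots ...
assert jacket(c_plus, c_minus, c_plus / 2) < 0
# ... but holds for every xi beyond the larger root, so a suitable
# regularization always exists: no analogue of the c_m = 1 barrier.
assert jacket(c_plus, c_minus, xi_plus + 1.0) > 0
assert jacket(c_plus, c_minus, 10 * xi_plus) > 0
```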
We designate this theory as the $\jmath$--string for the amusing result that the partition function is the $\jmath$--invariant\cite{jps} on the torus. The calculation of the torus partition function for this theory is straightforward\cite{dp}. The only novel feature is the $\theta$ factor which incorporates the Lorentz moduli. Covering the torus with a fixed coordinate patch, $0\le\sigma^{\ssc1},\sigma^{\ssc2}\le1$, the world-sheet metric may be written ${\rm d} s^2=|{\rm d}\sigma^{\ssc1}+\tau\,{\rm d}\sigma^{\ssc2}|^2$. We introduce arbitrary phases in $\theta$ around the $\sigma^i$ cycles by setting $$ \theta(\sigma^{\ssc1},\sigma^{\ssc2}) ={\tilde\theta}(\sigma^{\ssc1},\sigma^{\ssc2})+2\pi\lambda_i\sigma^i. $$ Here ${\tilde\theta}$ is a smooth function on the torus, and the second term incorporates the multivalued contribution of the Lorentz moduli. The contribution to the partition function is then found to be \begin{eqnarray} Z_\theta&=&({\rm Im}\tau)^{-1/2}|\eta(q)|^{-2}\int_{-\infty}^{\infty} {\rm d}\lambda_{\ssc1}{\rm d}\lambda_{\ssc2} \ \exp\left(-{\pi\xi|\lambda_{\ssc2}-\lambda_{\ssc1}\tau|^2\over6\,{\rm Im}\tau}\right) \nonumber\\ &=&({\rm Im}\tau)^{-1/2}|\eta(q)|^{-2}\ {6/\xi} \label{lorry} \end{eqnarray} where $q\equiv \exp(2\pi i\tau)$, and we have included the phase integral, which one may note is invariant under modular transformations. After the phase integral is performed, the final $\theta$ factor is simply that of a free scalar field (up to an arbitrary normalization). Combining Eq.~\ref{lorry} with the contributions of the other fields yields $$ Z_{\scriptscriptstyle\rm torus} = \int {{\rm d}^2\tau\over({\rm Im}\tau)^2}\ \jmath(\tau) = \int {{\rm d}^2\tau\over({\rm Im}\tau)^2} \ \left({\Theta_2(q)^2+\Theta_3(q)^2+\Theta_4(q)^2\over\eta(q)^8}\right)^3\ . $$ To understand the spectrum of the theory, we investigate the region of large ${\rm Im}\tau,$ where \begin{equation} \jmath(\tau)\sim {1\over q} + 744 +\dots \qquad.
\label{744} \end{equation} The integration over ${\rm Re}\tau$ projects out any term in Eq.~\ref{744} with a non-zero power of $q.$ The only term left is the constant $744,$ indicating that there are $744$ states in this string theory. Of these, $720$ correspond to winding number states in the $24$--dimensional lattice, and $24$ come from the maximal torus of the group, the so-called oscillator states. If one counts the states before performing the phase integral in Eq.~\ref{lorry}, each of the 744 states corresponds to a continuum with arbitrary phases around the $\sigma^{\ssc1}$ cycle. \@startsection{subsection}{2}{\z@}{2.3ex plus .2ex{Correlation functions} We now consider some sample correlation functions in this theory. In the partition function, we found that the physical states correspond to (0,1) operators in the matter sector. Thus we construct exponential (1,0) dressings, as discussed in sect.~2.1. First, though, in the present theory with $c_\pm=12$, computations are greatly simplified if one rescales the fields to a new basis ($\hat\rho=(x+2/x)\rho,\,\hat\theta=x\theta$) where $x=\sqrt{\xi/3}$. There are two possible (1,0) operators of the form $\exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}]$, with good semiclassical limits. For fixed $k$, they correspond to \begin{itemize} \setlength{\itemsep}{0pt} \item[i.] ${\tilde k}=k+x,\ \alpha=-k$ \item[ii.] ${\tilde k}=-2/x-k,\ \alpha=-k\ \ .$ \end{itemize} \par\noindent The first introduces a fixed phase in the $\theta$ field with $k-{\tilde k}=-x$, while the phase of the second dressing varies continuously with $k$, $k-{\tilde k}=2/x+2k$. The momentum super-selection rules for correlation functions are: $\sum\alpha^{\scriptscriptstyle (a)}=x+2/x=-\sum k^{\scriptscriptstyle (a)}$, and $\sum {\tilde k}^{\scriptscriptstyle (a)}=x-2/x$. One can begin by considering three-point amplitudes of some given set of matter operators.
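Returning briefly to the partition function: the expansion Eq.~\ref{744} and the state count $744=3\times240+24$ (the roots of $E_{\scriptscriptstyle 8}\times E_{\scriptscriptstyle 8}\times E_{\scriptscriptstyle 8}$ plus the Cartan oscillators) can be checked directly from the classical $q$-expansion of the $\jmath$-invariant, computed in this sketch from the standard representation $\jmath=E_4^3/\Delta$:

```python
# q-expansion check of the j-invariant via j = E4(q)^3 / Delta(q), with
# Delta = q prod_{n>=1} (1 - q^n)^24 and E4 = 1 + 240 sum sigma_3(n) q^n.
N = 6  # truncation order

def sigma3(n):
    return sum(d**3 for d in range(1, n + 1) if n % d == 0)

def mul(a, b):
    """Multiply two truncated power series (lists of coefficients)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

E4 = [1] + [240 * sigma3(n) for n in range(1, N)]
delta_over_q = [1] + [0] * (N - 1)          # Delta / q = prod (1 - q^n)^24
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1
    for _ in range(24):
        delta_over_q = mul(delta_over_q, factor)

inv = [1] + [0] * (N - 1)                   # series inverse of Delta / q
for k in range(1, N):
    inv[k] = -sum(delta_over_q[i] * inv[k - i] for i in range(1, k + 1))

j_times_q = mul(mul(mul(E4, E4), E4), inv)  # = q*j = 1 + 744 q + 196884 q^2 + ...
assert j_times_q[:3] == [1, 744, 196884]
assert 3 * 240 + 24 == 744                  # E8 x E8 x E8 roots + Cartan oscillators
```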
For a fixed set, there would be eight possible amplitudes corresponding to all of the different combinations of dressings. Some of these amplitudes vanish since they combine dressings which are incompatible with the momentum conservation rules. The amplitudes which survive are those which involve either two (i) and one (ii) dressings, or one (i) and two (ii). In those cases where the amplitude is non-vanishing, one finds that as expected the results are SL(2,{\bf C})\ invariant, but also {\it independent} of how the various dressings are combined with the matter operators. There are two classes of nonvanishing amplitudes: those with three winding number states, $\exp(i\gamma^{\scriptscriptstyle(a)}\!\cdot\! X_{\scriptscriptstyle R})$, which yield simply $\delta(\gamma^{\scriptscriptstyle(1)}+\gamma^{\scriptscriptstyle(2)}+\gamma^{\scriptscriptstyle(3)})$, and those with two winding number states and one oscillator state, $i\beta\!\cdot\!\partial X_{\scriptscriptstyle R}$, which yield $\beta\cdot\gamma^{\scriptscriptstyle(1)}\ \delta(\gamma^{\scriptscriptstyle(1)} +\gamma^{\scriptscriptstyle(2)})$.\footnote{These results use Kronecker $\delta$-functions, since the winding number vectors are discrete.} The four-point amplitudes produce more interesting results, as we illustrate here.
Consider the following correlation function \begin{equation} {\cal A}=\int{\rm d}^2z^{\scriptscriptstyle(3)}\ \Big\langle\ c\bar c\, V_{\rm i}\, {\hbox{e}}^{i\gamma^{\scriptscriptstyle(1)}\cdot X_{\scriptscriptstyle R}}(z^{\scriptscriptstyle(1)}) \ c\bar c\, V_{\rm i}\, {\hbox{e}}^{i\gamma^{\scriptscriptstyle(2)}\cdot X_{\scriptscriptstyle R}}(z^{\scriptscriptstyle(2)}) \ V_{\rm ii}\, {\hbox{e}}^{i\gamma^{\scriptscriptstyle(3)}\cdot X_{\scriptscriptstyle R}}(z^{\scriptscriptstyle(3)}) \ c\bar c\, V_{\rm i}\, {\hbox{e}}^{i\gamma^{\scriptscriptstyle(4)}\cdot X_{\scriptscriptstyle R}}(z^{\scriptscriptstyle(4)})\ \Big\rangle\ , \label{amplo} \end{equation} where $c\bar c$ are ghost dressings for the fixed operators, $V_{\rm i(ii)}$ are gravitational dressings of type i (ii) given above, and $\exp[i\gamma^{\scriptscriptstyle(a)}\cdot X_{\scriptscriptstyle R}]$ are winding state operators in the matter sector with $\gamma^{\scriptscriptstyle(a)}\cdot\gamma^{\scriptscriptstyle(a)}=2$. Momentum conservation requires $\sum\gamma^{\scriptscriptstyle(a)}=0$ in the matter sector, while the super-selection rules, for the gravity sector given above, restrict the momenta to be \begin{itemize} \setlength{\itemsep}{0pt} \item[$z^{\scriptscriptstyle(1)}$:] $k=q$, ${\tilde k}=q+x$, $\alpha=-q$ \item[$z^{\scriptscriptstyle(2)}$:] $k=p$, ${\tilde k}=p+x$, $\alpha=-p$ \item[$z^{\scriptscriptstyle(3)}$:] $k=x/2-1/x$, ${\tilde k}=-1/x-x/2$, $\alpha=-x/2+1/x$ \item[$z^{\scriptscriptstyle(4)}$:] $k=-3x/2-1/x-p-q$, ${\tilde k}=-x/2-1/x-p-q$, $\alpha=3x/2+1/x+p+q\ .$ \end{itemize} \par\noindent One can explicitly verify that the result is SL(2,{\bf C})\ invariant, and fixing $z^{\scriptscriptstyle(a)}=(\infty,1,z,0)$ yields \begin{equation} {\cal A}=\int{\rm d}^2z\ (1-z)^{\gamma^{\scriptscriptstyle(2)}\cdot\gamma^{\scriptscriptstyle(3)}} \ z^{\gamma^{\scriptscriptstyle(4)}\cdot\gamma^{\scriptscriptstyle(3)}} \ (1-\bar z)^{-1-x^2/2-px}\ {\bar z}^{x^2+x(p+q)}\ \ . 
\label{ample} \end{equation} On the holomorphic side of this integral, one always has integral exponents since $\gamma^{\scriptscriptstyle(a)}\cdot\gamma^{\scriptscriptstyle(b)}=-2,-1,0,1,2$, while on the anti-holomorphic side, arbitrary exponents arise as $p$ and $q$ are varied. Thus we would like to evaluate integrals of the form \begin{equation} {\cal A} (a,b,c,d)=\int{\rm d}^2z\ (1-z)^a\ z^b\ (1-\bar z)^c\ {\bar z}^d\ \ . \label{hoot} \end{equation} At present, we do not know how to perform this integral for arbitrary exponents, but it is straightforward to evaluate in the special case when several of the exponents are integers. Using repeated integration by parts and the identity, $\bar{\partial}(z-w)^{-1}=\pi \delta^2(z-w)$, one finds \begin{equation} {\cal A} (a,b,c,d)=\pi{\Gamma(-a-b-1)\over\Gamma(-a)\Gamma(-b)}{\Gamma(c+1)\Gamma(d+1)\over \Gamma(c+d+2)} \label{geewhiz} \end{equation} which is valid when $a+b+c+d<-2$, $a$ and $b$ are negative integers, and $c$ is a positive integer while $d>-1$ or $d$ is a positive integer while $c>-1$. Eq.~\ref{geewhiz} is an enticing formula since setting $a=c$ and $b=d$ precisely reproduces the Virasoro-Shapiro amplitude\cite{vs}, as ${\cal A}(a,b,a,b)$ should. If we allow the exponents $c$ and $d$ to take arbitrary values in this result (as they would in Eq.~\ref{ample}), we find that the amplitude has an infinite set of poles for negative integer values of $c$ and $d$. This would be a curious result since the partition function indicated that the theory only contains a finite number of physical states. One can disregard these poles though since we have not justified \qq{geewhiz} as the correct analytic continuation for \qq{hoot} with arbitrary exponents. Even within the restrictions of the derivation of Eq.~\ref{geewhiz}, we do in fact find that ${\cal A}$ has unexpected poles.
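Returning to the four-point example Eq.~\ref{amplo}: as a consistency check, the momenta listed there do satisfy the super-selection rules $\sum\alpha=x+2/x=-\sum k$ and $\sum{\tilde k}=x-2/x$ quoted above, as is quickly verified numerically:

```python
# momenta of the four dressings in Eq. (amplo), in the rescaled basis
x, p, q = 0.7, 0.3, -0.2            # sample values; x = sqrt(xi/3)

alphas = [-q, -p, -x/2 + 1/x, 3*x/2 + 1/x + p + q]
ks     = [ q,  p,  x/2 - 1/x, -3*x/2 - 1/x - p - q]
kts    = [q + x, p + x, -1/x - x/2, -x/2 - 1/x - p - q]

assert abs(sum(alphas) - (x + 2/x)) < 1e-12   # sum(alpha) = x + 2/x
assert abs(sum(ks) + (x + 2/x)) < 1e-12       # sum(k) = -(x + 2/x)
assert abs(sum(kts) - (x - 2/x)) < 1e-12      # sum(k~) = x - 2/x
```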
We illustrate this by focusing on one particular choice of exponents: $a=\gamma^{\scriptscriptstyle(2)}\cdot\gamma^{\scriptscriptstyle(3)}=-2$, $b=\gamma^{\scriptscriptstyle(4)}\cdot\gamma^{\scriptscriptstyle(3)}=-2$, and $d=x^2+x(p+q)=2$. In this case, Eq.~\ref{geewhiz} reduces to \begin{equation} {\cal A}={4\pi\over(c+1)(c+2)(c+3)} \label{poles} \end{equation} where $c=-1-x^2/2-px$. While the original derivation applies for $-1<c<0$, \qq{poles} provides a unique analytic continuation to the entire complex plane, with poles at $c=-1,-2,-3.$ In terms of the $\theta_{\scriptscriptstyle R}$-momenta appearing in the external states in \qq{amplo}, these poles occur at $p=-{x\over2}+{n\over x}$ and $q=-{x\over2}+{2-n\over x}$ where $n=0,1,2$. Now the momentum super-selection rules may be used to determine the character of the intermediate states producing these poles. For instance, $\alpha^{\scriptscriptstyle (a)}=-k^{\scriptscriptstyle (a)}$ is satisfied by all of the external states, and through the momentum conservation rules, must also be satisfied by the intermediate states. As a result, the dressing for the latter states has $\Delta_{\scriptscriptstyle R}=0$, and so they will correspond to (0,1) matter operators, as expected.\footnote{The conclusions, here and below, with regard to the dressings are unaffected if the arguments are extended for the possible inclusion of unconventional ghost dressings.} Since $a$, $b$ and $d$ are fixed, one must first determine in what channel the poles occur. Note that since $\gamma^{\scriptscriptstyle(a)}\cdot\gamma^{\scriptscriptstyle(a)}=2$ and $\sum \gamma^{\scriptscriptstyle(a)}=0$ with $\gamma^{\scriptscriptstyle(2)}\cdot\gamma^{\scriptscriptstyle(3)}=-2= \gamma^{\scriptscriptstyle(4)}\cdot\gamma^{\scriptscriptstyle(3)}$, one must have $\gamma^{\scriptscriptstyle(1)}=-\gamma^{\scriptscriptstyle(2)} =\gamma^{\scriptscriptstyle(3)}=-\gamma^{\scriptscriptstyle(4)}$. 
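The reduction of Eq.~\ref{geewhiz} to Eq.~\ref{poles} is the Gamma-function identity $\Gamma(c+4)=(c+3)(c+2)(c+1)\,\Gamma(c+1)$ applied at $a=b=-2$, $d=2$; the following sketch checks it numerically, both inside the original window $-1<c<0$ and along the continuation:

```python
import math

def amp(a, b, c, d):
    """Eq. (geewhiz): A(a,b,c,d) in Gamma-function form."""
    return (math.pi * math.gamma(-a - b - 1) / (math.gamma(-a) * math.gamma(-b))
            * math.gamma(c + 1) * math.gamma(d + 1) / math.gamma(c + d + 2))

# with a = b = -2 and d = 2 the Gamma functions collapse to Eq. (poles)
for c in (-0.5, -0.25, 0.75, 1.5):     # inside and beyond -1 < c < 0
    lhs = amp(-2, -2, c, 2)
    rhs = 4 * math.pi / ((c + 1) * (c + 2) * (c + 3))
    assert abs(lhs - rhs) < 1e-10 * abs(rhs)

# the expression blows up as c approaches the pole at c = -1
assert abs(amp(-2, -2, -1 + 1e-6, 2)) > 1e5
```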
Therefore ${\cal A}$ cannot factorize in the (3,1) particle channel but only in the (3,2) or (3,4) channels where the external winding number states can couple to an oscillator state in the intermediate channel. Examining the (3,2) channel, one finds that the exponential dressing would have a fixed weight $\Delta_{\scriptscriptstyle L}=4$ at all of the poles. Since this is not one (or zero or a negative integer; see below), the amplitude cannot factorize in this channel as two SL(2,{\bf C})\ invariant three-point amplitudes. Hence one is led to conclude that the poles must occur in the (3,4) channel, for which one finds the exponential dressings have $\Delta_{\scriptscriptstyle L}=1-n$. For $n=0$ ({\it i.e.,}\ $c=-1$), the intermediate state is a conventional state combining a (0,1) matter operator, $i\beta\!\cdot\!\partial X_{\scriptscriptstyle R}$, with an exponential ($\hat{\rho}$, $\hat{\theta}$) dressing with weight (1,0). However for $n=1,2$ ({\it i.e.,}\ $c=-2,-3$), the exponential gravity dressings have $\Delta_{\scriptscriptstyle L}=0,-1$. Thus the complete dressings cannot be simply exponentials, but must also have oscillator contributions to produce $\Delta_{\scriptscriptstyle L}=1$. Hence, the intermediate states must involve gravity dressings of the form: $(A\,\bar{\partial}\hat{\rho}+B\,\bar{\partial}\hat{\theta}) \exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}]$ or $(A\,\bar{\partial}^2\!\hat{\rho}+B\,\bar{\partial}^2\!\hat{\theta}+C(\bar{\partial} \hat{\rho})^2+D (\bar{\partial}\hat{\theta})^2+E\,\bar{\partial}\hat{\rho}\,\bar{\partial}\hat{\theta}) \exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}]$. Note that it is possible to construct dressings with oscillator contributions in chiral gravity, because the gravity sector includes two scalar fields.
This contrasts with the nonchiral case, which only involves the Liouville field, and hence only allows for exponential dressings. In fact the above description is still incomplete. Explicit construction of a (1,0) primary field of the form $(A\,\bar{\partial}\hat{\rho}+B\,\bar{\partial}\hat{\theta}) \exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}]$ produces a unique solution up to an overall normalization: $\bar{\partial}( \exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}])$, where the momenta are chosen such that the exponential is a (0,0) operator. It is easy to verify that states with such a total derivative dressing decouple. Therefore the correct dressing for the intermediate state at the $c=-2$ pole must incorporate the ghost number current as well, $(A\bar{\partial}\rho+B\bar{\partial}\theta+C\bar{c}\bar{b}) \exp[\alpha\hat\rho+ik\hat\theta_{\scriptscriptstyle R}+i{\tilde k}\hat\theta_{\scriptscriptstyle L}]$. In this respect, the states producing these extra poles resemble the discrete states appearing in the $D=1$ non-critical string\cite{disc}. Another feature in common with the latter is that these new states make no appearance in the analysis of the partition function presented above\cite{parti}. \@startsection{section}{1}{\z@}{3.5ex plus 1ex minus .2ex{Concluding remarks} We have examined two-dimensional quantum gravity coupled to chiral matter, and found that for fixed values of $c_\pm$, there is in fact a family of theories labelled by the free parameter, $\xi$. This freedom reflects an ambiguity in the choice of a diffeomorphism invariant regularization scheme used to define the theory. The effects of $\xi$ are quite intricate: It does not appear in the central charge of the combined conformal and Lorentz induced action, nor in the partition function. 
However, $\xi$ does affect critical exponents, the spectrum of discrete states, and the positions of poles in amplitudes. One might think of the freedom to specify $\xi$ in analogy with the cosmological constant, which arises as a free parameter in nonchiral gravity. In fact since chiral gravity contains two scalar fields, it is possible to construct a large number of new (1,1) primary fields, beyond the cosmological constant operator, which may be included as terms in the action. Then, $\xi$ should be counted as only one of the undetermined couplings which arise in association with these terms. A physically consistent analysis required some restrictions on $\xi$ in the form of the inequality \qq{jacket}, but no barriers appeared for the matter theories. Note as well that one can easily produce theories with space-time tachyons, which do not yield unphysical critical exponents. Thus at this level, the properties of space-time tachyons and physical consistency on the world-sheet appear to be divorced in the present theory, in contrast to nonchiral gravity\cite{seib2}. Clearly, the appearance of the Lorentz field has drastic effects on the quantum theory of the world-sheet geometry. The most pressing question would appear to be to understand the complete space of physical states\cite{next}. \bigskip R.C.M. was supported by NSERC of Canada, and Fonds FCAR du Qu\'ebec. V.P. was supported by D.O.E. grant DE-FG02-90ER40542. \bibliographystyle{unsrt}
Władysław Szpilman (Sosnowiec, 5 December 1911 – Warsaw, 6 July 2000) was a Polish pianist and classical composer of Jewish descent. Szpilman is widely known as the central figure in Roman Polanski's 2002 film The Pianist, which was based on Szpilman's autobiographical account of how he survived the German occupation of Warsaw and the Holocaust. Szpilman studied piano at music academies in Berlin and Warsaw. He became a popular performer on Polish radio and in concert. Confined to the Warsaw Ghetto after the German invasion, Szpilman spent two years in hiding. Towards the end of his time in hiding, he was helped by Wilm Hosenfeld, a German officer who detested German policies. After the Second World War, Szpilman resumed his career on Polish radio. Szpilman was also a prolific composer; his output includes hundreds of songs and many orchestral pieces.

Career as a pianist

Szpilman began his piano studies at the Chopin Academy of Music in Warsaw, Poland, where he studied piano with Aleksander Michałowski and Józef Śmidowicz, first- and second-generation pupils of Franz Liszt. By 1931 he was studying at the prestigious Academy of Arts in Berlin, Germany, under Artur Schnabel, Franz Schreker and Leonid Kreutzer. After Adolf Hitler was appointed Chancellor of Germany in 1933, Szpilman returned to Warsaw, where he quickly became a celebrated pianist and a composer of both classical and popular music. Primarily a soloist, he was also the chamber-music partner of acclaimed violinists such as Roman Totenberg, Ida Haendel and Henryk Szeryng, and in 1934 he toured Poland with the American violinist Bronislav Gimpel. On 5 April 1935, Szpilman joined Polskie Radio, where he worked as a pianist playing classical music and jazz.
His compositions in this period included orchestral works, piano pieces and film scores, as well as about 50 songs, many of which became very popular in Poland. At the time of the German invasion of Poland in September 1939 he was a celebrity and a featured performer at Polskie Radio, which was bombed on 23 September 1939, shortly after broadcasting the last Chopin recital, played by Szpilman. The Nazi occupiers established the General Government and created ghettos in several Polish cities, including Warsaw. Szpilman and his family did not yet need to find a new home, as their apartment already lay within the ghetto area.

Survival during the Holocaust

Władysław Szpilman and his family, together with all the other Jews living in Warsaw, were forced to move into a "Jewish quarter" (the Warsaw Ghetto) on 31 October 1940. Once all the Jews were confined to the ghetto, a wall was built to separate them from the rest of the German-occupied city. Szpilman managed to find work as a musician to support his family, which included his mother, his father, his brother Henryk and his two sisters, Regina and Halina. He first worked at the Café Nowoczesna, where, as he recalled in his autobiography, the customers sometimes ignored his music to conduct business. Szpilman later played at a café on Sienna Street and then, in 1942, at the Café Sztuka on Leszno Street. At these last two cafés he performed chamber music with the violinist Zygmunt Lederman, played in a piano duo with Andrzej Goldfeder, and played with other musicians. His entire family was deported in 1942 to Treblinka, an extermination camp in German-occupied Poland roughly 80.5 km north-east of Warsaw.
A member of the Jewish Ghetto Police who was assisting with the deportations recognized him in a line of people, including his parents, brother and two sisters, being loaded onto a train at the transfer point (which, as in the other ghettos, was called the Umschlagplatz), and pulled him out. None of Szpilman's relatives survived the war. Szpilman remained in the ghetto as a labourer and helped smuggle in weapons for the Warsaw Ghetto Uprising. Szpilman stayed in the Warsaw Ghetto until 13 February 1943, shortly before it was liquidated following the deportation of most of its inhabitants in April and May 1943. Szpilman found hiding places in Warsaw and survived with the help of his friends from Polskie Radio and the musicians Andrzej Bogucki and his wife Janina, Czesław Lewicki, and Helena Lewicka, supported by Edmund Rudnicki, Witold Lutosławski, Eugenia Umińska, Piotr Perkowski, and Irena Sendler. He avoided capture several times. Beginning in August 1944, Szpilman hid in an abandoned building at Aleja Niepodległości 223. In November, he was discovered by a German officer, Captain Wilm Hosenfeld. To Szpilman's surprise, the officer did not arrest or kill him; after learning that the emaciated Szpilman was a pianist, Hosenfeld asked him to play something on the piano that stood on the ground floor. Szpilman played Chopin's Nocturne No. 20 in C-sharp minor. After that, the officer brought him bread and jam on several occasions. He also gave Szpilman one of his coats to keep him warm in the freezing temperatures. Szpilman did not learn the German officer's name until 1951. Despite the efforts of Szpilman and other Poles to save him, Hosenfeld died in a Soviet prisoner-of-war camp in 1952.

Polish Radio

Szpilman began playing for Polskie Radio in 1935 as its resident pianist. On 23 September 1939, Szpilman was in the middle of a broadcast when the Germans opened fire on the studio and he was forced to stop playing.
This was the last live music broadcast heard until the end of the war. When Szpilman resumed his work at the radio in 1945, he did so by picking up where he had left off six years earlier: poignantly, he opened the first broadcast by once again playing the Nocturne No. 20 in C-sharp minor. From 1945 to 1963, Szpilman was director of the Popular Music Department at Polskie Radio. At the same time, Szpilman performed as a concert pianist and chamber musician in Poland, as well as across Europe, Asia and America. During this period he composed several symphonic works and around 500 other compositions that remain popular in Poland today. He also wrote music for radio plays and films, and in 1961 he created the International Song Contest in Sopot, Poland, which has been produced every summer for more than 50 years. Szpilman and Bronislav Gimpel founded the Warsaw Piano Quintet in 1963, with which Szpilman performed in more than 2,000 concerts around the world until 1986, in venues such as the Royal Festival Hall in London; the Salle Pleyel and Salle Gaveau in Paris; the Munich Residenz; as well as at the Salzburg Festival, the Brahmstage in Baden-Baden, the Musikhalle Hamburg, and elsewhere.

Compositions

From his early years in Berlin onward, Szpilman never gave up the urge to write music, even while living in the Warsaw Ghetto. His compositions include orchestral works, concertos and piano pieces, but also significant quantities of music for radio plays and films, as well as about 500 songs. More than 100 of these are well known as hits and evergreens in Poland. In the 1950s he wrote about 40 songs for children, for which he received a prize from the Polish Composers' Union in 1955. His son Andrzej commented in 1998 that Szpilman's works had not reached a wider audience outside Poland, attributing this to "the division of Europe into two halves culturally as well as politically" after the war.
His father "shaped the landscape of Polish popular music for several decades, but Poland's western border constituted a barrier" to music from the Eastern Bloc countries. Szpilman's compositions include the piano suite "The Life of the Machines" (1932), a Violin Concerto (1933), "Waltz in the Old Style" (1937), the film scores "Świt, dzień i noc Palestyny" (1934), Wrzos (1938) and Doktor Murek (1939), the Concertino for Piano and Orchestra (1940), the Paraphrase on His Own Themes (1948), the "Overture for Symphony Orchestra" (1968), and many songs popular in Poland. His works are now available in printed editions from the music publishers Boosey & Hawkes/Bote & Bock in New York, Berlin and London. In 1961 he initiated and organized the Sopot International Song Festival, produced in Poland every summer for more than 50 years. He founded the Polish Union of Authors of Popular Songs.

The book Death of a City (original Śmierć Miasta) was written by Władysław Szpilman and edited by Jerzy Waldorff shortly after the war ended, and was first printed in 1946 by the publisher Wiedza. The book was censored by the Stalinist authorities for political reasons. For example, the nationality of the benevolent German officer Wilm Hosenfeld was changed to Austrian. As the East German dissident singer-songwriter Wolf Biermann observed in his epilogue to the 1999 English edition: "Directly after the war it was impossible to publish a book in Poland that presented a German officer as a brave and helpful man," whereas an Austrian hero would be "not quite so bad." Biermann added caustically, "In the years of the Cold War, Austria and East Germany were linked by a common piece of hypocrisy: both pretended that they had been forcibly occupied by Hitler's Germany."
In 1998, Szpilman's son Andrzej Szpilman published a new, expanded edition of his father's autobiography, first in a German translation by Karin Wolff as Das wunderbare Überleben (The Miraculous Survival), issued by the German publisher Ullstein Verlag, and then in an English translation by Anthea Bell as The Pianist, with an epilogue by Wolf Biermann. In March 1999, Władysław Szpilman visited London for Jewish Book Week, where he met English readers to mark the book's publication in Britain. It was later published in more than 35 languages, named Best Book of the Year by the Los Angeles Times, Sunday Times, Boston Globe, The Guardian, The Economist and Library Journal, won the Jewish Quarterly-Wingate Prize in 2000, and was named Best Book of the Year 2001 by the magazine Lire, and by Elle (Paris) in 2002. The new Polish edition, Pianista: warszawskie wspomnienia 1939–1945 (Kraków: Znak, 2000), was number 1 on the bestseller list of the Polish newspaper Rzeczpospolita for three years, 2001–2003. As it reached a much larger audience, Szpilman's autobiography was widely praised. The Independent described it as "a compelling, harrowing masterpiece"; another British newspaper declared it "one of the most powerful accounts ever written" of the era. The book's portrait of the acclaimed Warsaw teacher and writer Janusz Korczak has been described as "overwhelmingly powerful and poignant". Korczak refused to save himself from deportation to Treblinka, instead walking with the children of his orphanage to the deportation point and ultimately leading them "into the next world," as Szpilman related: "One day, around 5 August, when I had taken a brief rest from work and was walking down Gęsia Street, I happened to see Janusz Korczak and his orphans leaving the ghetto. The evacuation of the Jewish orphanage run by Janusz Korczak had been ordered for that morning. The children were to have been taken away alone.
He had the chance to save himself, and it was only with difficulty that he persuaded the Germans to take him too. He had spent long years of his life with children, and now, on this last journey, he would not leave them alone. He wanted to ease things for them. He told the orphans they were going out into the country, so they ought to be cheerful. At last they would be able to exchange the horrible, suffocating city walls for meadows of flowers, streams where they could bathe, woods full of berries and mushrooms. He told them to wear their best clothes, and so they came out into the yard, two by two, nicely dressed and in a happy mood. The little column was led by an SS man who loved children, as the Germans do, even those he was about to see on their way into the next world. He took a special liking to a boy of twelve, a violinist who had his instrument under his arm. The SS man told him to go to the front of the procession of children and play, and so they set off. When I met them in Gęsia Street, the smiling children were singing in chorus, the little violinist was playing for them, and Korczak was carrying two of the smallest children, who were beaming too, and telling them an amusing story. I am sure that even in the gas chamber, as the Zyklon B gas was stifling childish throats and spreading terror instead of hope in the orphans' hearts, the Old Doctor must have whispered with one last effort, 'It's all right, children, it will be all right,' so that at least he could spare his little charges the fear of passing from life to death.

(The Pianist, pp. 95–96)

The 1999 English edition also includes excerpts from Wilm Hosenfeld's diary (1942–44). Biermann's epilogue gives further insight into Hosenfeld's deeds and his character.
He helped several other potential victims in Warsaw; Hosenfeld nevertheless died (in 1952) after seven years in Soviet captivity, despite Szpilman's efforts to save him. Although it concludes with his survival, Szpilman refused to end his autobiography on an upbeat note. In the final paragraphs, he walks through the streets of an abandoned, devastated Warsaw: "A stormy wind rattled the scrap-iron in the ruins, whistling and howling through the charred cavities of the windows. Twilight came on. Snow fell from the darkening, leaden sky." As one critic put it, "these closing sentences distil the book's startling and unforgettable style. Concise yet highly evocative; measured and somewhat detached, yet possessing a consistent poetry, a spiritual tenor and force."

Film adaptation

In 2002, the Polish-French director Roman Polanski directed a screen version of the book. The film won three Oscars in 2003 (Best Director, Best Actor and Best Adapted Screenplay), the BAFTA for Best Film from the British Academy of Film and Television Arts, and the Palme d'Or at the Cannes Film Festival. Polanski had escaped from the Kraków Ghetto and survived the Nazi genocide, but his mother was killed by the German occupiers. Polanski's film closely follows the style and details of the book. Accepting the Oscar for Best Actor, Adrien Brody said: "This film would not be possible without the blueprint provided by Władysław Szpilman. This is a tribute to his survival." Szpilman's son, Andrzej Szpilman, compiled and released a CD of the most popular songs Szpilman composed, under the title Wendy Lands Sings the Songs of the Pianist (Universal Music).
Other CDs of Szpilman's works include Works for Piano and Orchestra by Władysław Szpilman, with Ewa Kupiec (piano), John Axelrod (conductor) and the Berlin Radio Symphony Orchestra (2004, Sony Classical), and Original Recordings of The Pianist and Władysław Szpilman: Legendary Recordings (Sony Classical). In November 1998, Szpilman was honoured by the President of Poland with the Commander's Cross with Star of the Order of Polonia Restituta.

Death and tributes

Szpilman died of natural causes in Warsaw on 6 July 2000, at the age of 88. He is buried in the Powązki Cemetery. On 25 September 2011, Studio 1 of Polskie Radio was renamed after Władysław Szpilman. On 4 December 2011, a commemorative plaque for Szpilman, engraved in Polish and English, was unveiled at 223 Aleja Niepodległości in Warsaw, in the presence of his wife Halina Szpilman and his son Andrzej, and of Wilm Hosenfeld's daughter, Jorinde Krejci-Hosenfeld. The following day, on the exact centenary of Szpilman's birth, Polish President Bronisław Komorowski met Szpilman's widow and son, as well as Krejci-Hosenfeld. Uri Caine, an American jazz and classical pianist and composer, created his own interpretations of Szpilman's works in a variety of genres. The CD of Caine's concert was released on 24 February 2014.

Recordings

CD "F. Chopin – Works", National Edition: F. Chopin, Piano Trio and Introduction and Polonaise – W. Szpilman, T. Wronski, A. Ciechanski, Muza Warsaw 1958 and 2002
CD "J. Brahms – Piano Quintett", The Warsaw Piano Quintet, Muza Warsaw 1976
CD "Wladyslaw Szpilman – Ein musikalisches Portrait", works by Szpilman, Rachmaninov and Chopin, Alinamusic Hamburg 1998
CD Władysław Szpilman – Portret [5-CD box set], Polskie Radio Warszawa 2000
CD Wladyslaw Szpilman. The Original Recordings of The Pianist.
Sony Classical 2002
CD The Pianist [Soundtrack], Sony Classical 2002
CD Songs of Wladyslaw Szpilman – sung by Wendy Lands, Universal Music USA 2003
CD Works for Piano & Orchestra, Sony Classical 2004
CD Władysław Szpilman – Legendary Recordings [3-CD box set], Sony Classical 2005

Selected published works

Władysław Szpilman: Suite. The Life of the Machines, for piano (1933). Boosey & Hawkes Berlin/New York 2004, ISBN 3-7931-3077-0
Władysław Szpilman: Concertino, Piano and Orchestra, piano parts, Schott Mainz 2004, ISBN 3-7931-3086-X
Władysław Szpilman: Concertino, Piano and Orchestra, full score, Schott Mainz 2004, ISBN 3-7931-3079-7
My Memories of You. 16 songs selected by the pianist Władysław Szpilman, Boosey & Hawkes Berlin/New York 2003, ISBN 3-7931-3085-1

See also

List of Jewish Holocaust survivors
Occupation of Poland (1939–1945)

People from Sosnowiec
Polish pianists
Polish Jews
20th-century classical composers
# Synopsis: DNA Damage from "Eye-Safe" Wavelengths

A study on the biological impact of far-infrared laser pulses shows that longer, supposedly safer, wavelengths can cause breaks in the backbone of DNA.

Lasers that emit at far-infrared wavelengths are often considered "eye-safe," making them attractive for open-air applications such as laser radar, range-finding, and military targeting. However, new research may raise concerns about the safety of these long wavelengths. Experiments reported in Physical Review Letters show that femtosecond pulses from far-infrared lasers can cause breaks in exposed DNA strands.

Safety regulations for lasers are based in large part on protecting the retina from intense, focused light that can burn that sensitive tissue. At far-infrared wavelengths (greater than about 1400 nanometers), the light is mostly absorbed by water in the front of the eye, thus preventing it from reaching the retina. For this reason, acceptable laser power exposure is higher for far-infrared lasers than for visible and near-infrared lasers.

However, the threat from lasers is not limited to over-heating, but also includes photoinduced chemical changes. In a previous study, Deepak Mathur of the Tata Institute of Fundamental Research and his colleagues showed that laser pulses at 800 nanometers could cause "nicks" or breaks in DNA through the generation of free electrons and hydroxyl (OH) radicals in the surrounding water medium. They have now performed similar experiments with femtosecond (terawatt) laser pulses at 1350 and 2200 nanometers. After about 3 minutes of exposure, analysis showed that roughly 95% of the targeted DNA molecules had single-strand or double-strand breaks. The team found that they could reduce the damage by introducing "scavenger" molecules that deactivate hydroxyl radicals. From this, they surmise that the DNA damage comes primarily from rotationally "hot" OH molecules formed in the strong optical fields of the pulses. However, it remains to be shown whether these DNA-attacking hydroxyl radicals form at the lower powers found in commercial applications. – Michael Schirber
3.4 Task 6-4: Change bar color

From Day 9

• Based on Task 6-3, change the color of the bars using RGB. Let the red and green components be fixed at zero. Let the blue component be the corresponding data value times 10, which is then rounded to the nearest whole number (using Math.round()).

• I still do not understand why we should put the codes in between "+ +". This is so difficult for me.
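As a rough sketch of what this task seems to be asking for (the helper name `barColor` and the `svg`/`dataset` identifiers below are placeholders of mine, not names taken from the book), the `"+ +"` pattern the second bullet asks about is just JavaScript string concatenation: the computed number is glued between the two string fragments `"rgb(0, 0, "` and `")"`.

```javascript
// Build an "rgb(r, g, b)" color string for a data value d.
// Red and green stay 0; blue is d * 10, rounded to the nearest integer.
function barColor(d) {
  // The + operators here concatenate strings, which is why the computed
  // value sits "in between" the two quoted fragments.
  return "rgb(0, 0, " + Math.round(d * 10) + ")";
}

// Hypothetical D3 usage (assumes `svg` is a selection and `dataset` an
// array of numbers, as in earlier tasks):
//   svg.selectAll("rect")
//      .data(dataset)
//      .attr("fill", function(d) { return barColor(d); });

console.log(barColor(12.34)); // "rgb(0, 0, 123)"
```

When one operand of `+` is a string, JavaScript converts the other operand to a string too, so no explicit `String(...)` call is needed.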
# A container has a volume of 1 L and holds 16 mol of gas. If the container is expanded such that its new volume is 5 L, how many moles of gas must be injected into the container to maintain a constant temperature and pressure?

Feb 28, 2017

The number of moles of gas to be injected is $64\ \mathrm{mol}$.

#### Explanation:

We apply the gas equation

$PV = nRT$

$\frac{V}{n} = \frac{RT}{P} = \text{constant}$

Therefore,

$V_1/n_1 = V_2/n_2$

$V_1 = 1\ \mathrm{L}$

$n_1 = 16\ \mathrm{mol}$

$V_2 = 5\ \mathrm{L}$

$n_2 = \frac{V_2}{V_1} \cdot n_1 = \frac{5}{1} \cdot 16 = 80\ \mathrm{mol}$

We have to inject

$80 - 16 = 64\ \mathrm{mol}$
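As an aside not part of the original answer, the same proportionality can be cross-checked numerically in a few lines (the variable names below are mine):

```javascript
// At constant T and P, V/n is constant, so n2 = (V2 / V1) * n1.
const V1 = 1;   // initial volume, L
const n1 = 16;  // initial amount, mol
const V2 = 5;   // expanded volume, L

const n2 = (V2 / V1) * n1;  // total moles needed in the expanded container
const injected = n2 - n1;   // moles that must be added

console.log(n2, injected); // 80 64
```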
Team announcement

Nkosi replaces Kolbe in only change for Boks' RWC semi-final

Exciting young wing Sbu Nkosi has been drafted into the Springbok starting line-up for Sunday's gripping Rugby World Cup semi-final against Wales at the Yokohama International Stadium (kick-off 18h00 local time, 11h00 SA time).

Nkosi replaces the hot-stepping Cheslin Kolbe, who has not recovered sufficiently from the ankle injury he suffered against Canada and aggravated in the quarter-final victory over Japan (26-3) in Tokyo on Sunday. The 23-year-old Nkosi, who has scored eight tries in his 10 Test appearances, slots into the right wing position as a direct replacement for Kolbe.

"It's disappointing not to have Cheslin available as he has been brilliant for us since we first called him up last year," said Rassie Erasmus, South Africa's director of rugby. "But we really rate Sbu and he will slot straight in. I am as excited to see what he can do as I would be if 'Chessie' were playing. Sbu has been very close to selection as it is."

Nkosi has made two appearances in the tournament, against Namibia and a try-scoring show against Canada (on the left wing), following try-scoring appearances against Australia and Argentina (two tries) in the two South African home Tests in 2019. His injury-enforced inclusion is the only change to the 23 that had been entrusted to ensure second place in the pool (against Italy) and to secure a semi-final place (against Japan).

The Springbok 23 includes 10 players who appeared in the Rugby World Cup quarter-final victory over Wales four years ago in England (23-19), and nine who are making a second semi-final appearance after losing to the eventual champions, New Zealand, four years ago. The nine are Tendai Mtawarira, Frans Malherbe, Eben Etzebeth, Lood de Jager, Francois Louw, Duane Vermeulen, Handré Pollard, Damian de Allende and Willie le Roux. Pieter-Steph du Toit was on the bench in the quarter-final at Twickenham in 2015.
The winner will play the victor of the England v New Zealand semi-final in Yokohama on Saturday, 2 November. South Africa's semi-final with Wales kicks off at 11h00 (SA time) and is available on SuperSport channel 201 and SABC radio.

The Springbok team to play Wales in Yokohama on Sunday is:

15. Willie le Roux (Toyota Verblitz, Japan), 59 caps - 60 points (12 tries)
14. Sbu Nkosi (Cell C Sharks), 10 - 40 (8t)
13. Lukhanyo Am (Cell C Sharks), 13 - 15 (3t)
12. Damian de Allende (DHL Stormers), 45 - 25 (5t)
11. Makazole Mapimpi (Cell C Sharks), 12 - 65 (13t)
10. Handré Pollard (Vodacom Bulls), 46 - 421 (6t, 71c, 79p, 4d)
9. Faf de Klerk (Sale Sharks, England), 28 - 20 (4t)
8. Duane Vermeulen (Vodacom Bulls), 52 - 15 (3t)
7. Pieter-Steph du Toit (DHL Stormers), 53 - 25 (5t)
6. Siya Kolisi (captain, DHL Stormers), 48 - 30 (6t)
5. Lood de Jager (Vodacom Bulls), 43 - 25 (5t)
4. Eben Etzebeth (DHL Stormers), 83 - 15 (3t)
3. Frans Malherbe (DHL Stormers), 36 - 5 (1t)
2. Bongi Mbonambi (DHL Stormers), 34 - 35 (7t)
1. Tendai Mtawarira (Cell C Sharks), 115 - 10 (2t)

Replacements:

16. Malcolm Marx (Emirates Lions), 31 - 25 (5t)
17. Steven Kitshoff (DHL Stormers), 45 - 5 (1t)
18. Vincent Koch (Saracens, England), 19 - 0
19. RG Snyman (Vodacom Bulls), 21 - 5 (1t)
20. Franco Mostert (Gloucester, England), 37 - 5 (1t)
21. Francois Louw (Bath, England), 74 - 50 (10t)
22. Herschel Jantjies (DHL Stormers), 9 - 20 (4t)
23. Frans Steyn (Montpellier, France), 65 - 141 (11t, 7c, 21p, 3d)
Former Springbok Heinrich Brüssow bows out of rugby
Former Springbok flanker Heinrich Brüssow bowed out of professional rugby with immediate effect on Wednesday following an illustrious career spanning more than a decade, as injuries continued to take their toll on him.

Toyota Cheetahs to take a more measured approach in Europe
The Toyota Cheetahs will take a more measured approach on their three-match Guinness PRO14 tour starting this weekend, with incoming coach Hawies Fourie looking to be tactically astute as they move to Northern Hemisphere conditions.

Experience a plus, but effort will determine success - Sage
There are some good deposits in the memory banks of the Springbok Sevens players preparing for the Athlete's Factory Sevens in Chester this weekend, but that alone will not get them over the advantage line or put points on the board.

De Klerk: "Criticism is not negative, it shows passion"
Faf de Klerk told media in Japan on Wednesday that criticism of the Springbok game plan and individuals was a sign of passion for the team and not negativity, in his view.

Isuzu Southern Kings looking to improve on PRO14 tour
The Isuzu Southern Kings will be looking to take another step forward as they head off on their first tour of the 2019/20 Guinness PRO14 season with a tough opener against Italian side Benetton on Saturday.

Good start for Blitzboks in Chester
The Springbok Sevens team outplayed the Rambling Jesters 46-0, Hong Kong 31-0 and England 31-5 on Friday to finish the opening day of the Athlete's Factory International 7s in Chester undefeated.
Media focus turns to Springbok coaching duo
The arrival of the Springboks in the Rugby World Cup semi-finals has caused the world game to sit up and take notice. The marked improvement in performances and results in the past two seasons has become a theme for international media.

Halfpenny replaces injured Williams at fullback for Wales
Wales coach Warren Gatland made three changes to the team that beat France, with fit-again Jonathan Davies back at centre for their semi-final against the Springboks on Sunday, but they will be without Liam Williams, who has been ruled out of the Rugby World Cup.

Ill-discipline costs Toyota Cheetahs against Connacht
Ill-discipline denied the Toyota Cheetahs their fourth Guinness PRO14 victory on Saturday, as a red card and a series of penalties conceded late in their clash against Connacht at the Sportsground in Galway saw them succumb to a 24-22 defeat.

"This is for you South Africa" – Kolisi
The Springboks go into tomorrow's Rugby World Cup semi-final against Wales knowing that this is much more than just two groups of grown men chasing an oval ball around a rugby field in Japan.

RIP Hannes Viljoen (1943-2021)
SA Rugby President, Mr Mark Alexander, expressed his condolences on behalf of the entire rugby fraternity to the family and friends of former Springbok wing Hannes Viljoen, who sadly passed away on Wednesday at the age of 77.

SA Rugby salutes 'sport bridge builder' Mluleki George
Mr Mark Alexander, the President of SA Rugby, has paid tribute to the hugely influential role Mr Mluleki George played in the normalisation of South African sports and politics.

RIP William 'Balla' Croy (1954-2020)
SA Rugby President, Mr Mark Alexander, paid tribute to former SARU player William "Balla" Croy, who sadly passed away on Wednesday evening at the age of 66.
RIP Ivan Pekeur (1952-2020)
Mr Mark Alexander, president of SA Rugby, has expressed his condolences to the family of the late Mr Ivan Pekeur, the Boland Rugby Union president, who passed away during the early hours of Sunday morning.

SA Rugby statement on Stellenbosch/Roux Arbitration process
The Executive Council of SA Rugby has noted the outcome of the arbitration involving Jurie Roux and his former employer, Stellenbosch University.
Canceling Student Debt Is Very Good. Biden Won't Do It Unless We Make Him.

It's easy for people like Chuck Schumer to urge Biden to eliminate student debt. Since the matter is totally out of his own power, Schumer can polish his faded progressive credentials by proposing it without upsetting his big donors by actually doing it. So far, the gambit is working. Biden shows no sign of listening to Schumer's suggestion. Instead, Biden has touted a much smaller proposal. As NPR described it, Biden is supporting a provision that "calls for the federal government to pay off up to $10,000 in private, nonfederal student loans for 'economically distressed' borrowers." Who would qualify as "economically distressed," who would make that decision, and how long it would take to set up the program to do so are all, so far, mysteries. What we do know is that the proposal is not an executive action, but a provision of the HEROES Act — which the House of Representatives has passed, but which faces grim prospects in the Senate. In other words, Biden's offer is to give fewer people less money, after a longer waiting period, if and only if the Senate approves legislation it is very unlikely to approve. And by making sure only "nonfederal" (that is, privately issued) loans are eligible, Biden's plan further reduces who is eligible for relief while making sure banks get their cut. The result? Millions would remain heavily in debt, while private lenders would get a bailout from the federal government on loans many borrowers might otherwise default on. But we shouldn't give up hope yet. While his record gives little reason to trust him, the fact that even moderate Democrats like Chuck Schumer are making relatively aggressive calls for student debt relief is significant. It suggests that, on this issue, political elites are somewhat less unified in their defense of the rich and powerful than on other issues.
Combine that with the fact that the mechanism for change — executive order — is much more straightforward than legislation, and you have a prime organizing target. Biden's record on student debt suggests he won't do the right thing without a fight. But student debt is widely and deeply felt by millions of people; we're mired in a depression where economic stimulus is essential; and the ruling class is more divided on this issue than other big reforms, in part because the material stakes for the capitalist class are relatively lower. All of these factors add up to a fight we can win. The only question is whether we can organize our side in time.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,607
\section{Introduction \label{introduction}} Progress in experiments on cold atoms trapped in optical lattices has made possible studies of interacting quantum many-particle systems at an unprecedented level of control \cite{Bloch2008}. A problem that may soon be amenable to experimental probes is that of decoherence induced by an environment which is close to a quantum phase transition (QPT) \cite{Sachdev2011}. Decoherence refers to the process where the entanglement between a system and its environment makes the system give up quantum information, turning its pure state, when isolated, into a mixed state. The loss of coherence lies at the heart of the measurement problem \cite{Sclosshauer2005} and that of understanding the ``quantum-to-classical'' transition \cite{Zurek2003}, and also presents a major challenge for realizing quantum information protocols \cite{Shor1995}. Starting with the work of Dobrovitski {\em et al.} \cite{Dobrovitski2003b}, there have been a number of studies modeling decoherence due to an interacting zero-temperature spin environment \cite{Melikidze2004,Cucchietti2005,Rossini2007,Ou2007,Liu2007,Hanson2008,Cormick2008,Lai2008,Cheng2009}. An important contribution was made by Quan {\em et al.} \cite{Quan2006} who addressed the role of quantum criticality of the environment. Inspired by the Hepp-Coleman model \cite{Hepp1972,Bell1975}, Quan {\em et al.} introduced a central spin model where a qubit, or two-level system, is coupled to all spins of a surrounding environment, here taken to be an Ising chain in a transverse magnetic field. Other works soon followed, based on the same type of setup, but with an XY chain \cite{Yuan2007a} representing the interacting spin environment. The conclusion drawn from these and similar investigations is that the qubit state decoheres faster when the environment approaches a QPT \cite{Quan2006,Yuan2007a,Cucchietti2007,Sun:2007aa,Ma2007,Mostame2007,Rossini2008,Haikka2012,Sharma2012}.
The high sensitivity of the ground state of the environment to a perturbation from the qubit when close to a QPT is here believed to be the reason why the time evolution of the entanglement, and by that, the decoherence rate of the qubit, gets accelerated. A very useful conceptual tool for exploring this circle of problems is that of the Loschmidt echo \cite{Gorin2006} which provides a measure of the stiffness of the environment to the perturbation from the qubit. The Loschmidt echo in a central spin model coincides with the square of the decoherence factor of the qubit, being proportional to the rate of decoherence. Thus, a study of the Loschmidt echo allows for a detailed analysis of the decoherence process. As a case in point, Quan {\em et al.} \cite{Quan2006} referred to the fast initial decay of the Loschmidt echo exactly at the critical point of the transverse field Ising chain to argue that the decoherence of a qubit is accelerated by the criticality of its environment. Further, Haikka {\em et al.} \cite{Haikka2012} relied on the observation of the monotonic short-time decay of the Loschmidt echo for the same setup to argue that the reduced dynamics of the qubit for this case is purely Markovian \cite{Vega2017}. This would mean that the critical point blocks any backflow of information from the environment into the qubit at short initial times, a flow otherwise expected from the typical appearance of revivals in a Loschmidt echo for a finite system over larger time scales. Motivated by the prospect of future experiments that may probe the connection between decoherence and quantum criticality, we have revisited the problem of qubit decoherence in an interacting spin environment. However, our aim has not been to propose or analyze a particular experimental arrangement. Rather, we wish to examine the very notion that the closeness of an environment to a QPT is intrinsically linked to an accelerated decoherence rate of the system to which it couples. 
Like others before us we shall take advantage of a central spin model, allowing for a transparent analysis, but now using a generalized quantum compass chain (QCC) \cite{You2014a} and an extended XY model in a staggered magnetic field \cite{Titvinidze2003} as environmental models. The QCC \cite{You2014a} incorporates a family of one-dimensional (1D) compass models \cite{Brezicki2007,You2008,Eriksson2009,Jafari2011,You2014b,Jafari2016} which serve as ``stripped-down'' versions of the more familiar compass models defined for spins on two- or three-dimensional lattices. The common denominator is the structure of their Hamiltonians, being built from directional competing Ising-like interactions between neighboring spin components, with different components interacting on different bonds of the lattice \cite{Nussinov2015}. The QCC exhibits a QPT between two disordered phases with different short-range spin correlations, occurring when the Ising-like interactions are fine tuned to become isotropic \cite{You2014a}. The extended XY model that we shall also consider as a description of a possible spin environment is typified by the presence of three-site XY spin interactions \cite{Titvinidze2003}. This model also exhibits distinct ground state phases, but of a different character from the QCC, accessible by tuning the spin couplings and/or a uniform or staggered magnetic field. Both models feature spectra with structures that add a level of complexity beyond that of the simpler models hitherto considered in the literature. In particular, the presence of distinct spin liquid phases separated by QPTs brings a new element to the problem. This property, together with the fact that the models are exactly solvable, is the reason why we have selected them for our study.
By a detailed analysis, based on the exact solution of the respective model \cite{You2014a,Titvinidze2003}, we arrive at the conclusion that the notion of an intrinsic connection between quantum phase transitions and strong decoherence misses out on the very mechanism which drives an accelerated decoherence rate: What matters is not the presence of a quantum phase transition {\em per se}, but instead the availability of propagating quasiparticles which couple to the qubit via a back action (as signaled by their having an impact on the Loschmidt echo). Such quasiparticles may indeed be expected to appear at a QPT, but as our case study of the QCC reveals, this is not necessarily so. As transpires when taking the extended XY model as environmental model, quasiparticles of this type may instead appear in a stable massless phase away from a QPT. These results bring new light on how to understand the enhanced decoherence experienced by systems coupled to an interacting environment. The paper is organized as follows: In the next section we define the central spin model with the QCC as environment, and we also provide some background material. In Sec. III we diagonalize the QCC Hamiltonian to obtain its spectrum and eigenstates, and from this, we construct exact expressions for the Loschmidt echo. Numerical case studies of the Loschmidt echo point to the crucial role of quasiparticle excitations in the decoherence process, and to this we add supporting evidence by a theoretical analysis. This section is divided into two parts, treating the QCC without and with a magnetic field respectively. In Sec. IV we carry out essentially the same type of analysis as in Sec. III, but now with the extended XY model in a transverse field as environmental model. Sec. V, finally, contains a brief summary and discussion. Some technical details are placed in the Appendix. 
\section{Decoherence of a qubit coupled to a quantum compass chain \label{proposal}} Following the original proposal by Quan {\it et al.} \cite{Quan2006} for modeling the decoherence of a qubit coupled to an interacting spin environment, we consider a composite system in a factorized pure state at time $t=0$, \begin{eqnarray} \label{initial} |\psi(0)\rangle =|\phi_q(0)\rangle \!\otimes |\phi_{\text{env}}(0)\rangle. \end{eqnarray} Here $|\phi_q(0)\rangle = c_g|g\rangle + c_e|e\rangle$, with $|c_g|^2+|c_e|^2=1$, is the qubit state, while $|\phi_{\text{env}}(0)\rangle$ is the ground state of the environment when isolated. We take the environment to be an $N$-site QCC in a transverse field $h$, with Hamiltonian \cite{You2014a} \begin{eqnarray} \label{QCChamiltonian} \nonumber H_{\text{env}}&\!=&\!\!\sum_{n=1}^{N/2}\!\left(J_{o}\tilde{\sigma}_{2n-1}^{+}\tilde{\sigma}_{2n}^{+} \!+\!J_{e}\tilde{\sigma}_{2n}^{-}\tilde{\sigma}_{2n+1}^{-}\!-\!h(\sigma^{z}_{2n-1}\!+\!\sigma^{z}_{2n})\right),\\ \end{eqnarray} satisfying periodic boundary conditions. The exchange couplings $J_{o}$ and $J_{e}$ are defined on ``odd'' $(2n-1,2n)$ and ``even'' $(2n,2n+1)$ lattice bonds respectively, and $\tilde{\sigma}_{i}^{\pm}$ are pseudospin operators constructed as linear combinations of the Pauli matrices $\sigma^{x}$ and $\sigma^{y}$, $\tilde{\sigma}_{i}^{\pm}=\cos\theta\, \sigma^{x}_{i} \pm\sin\theta\, \sigma^{y}_{i}$. Denoting the energy difference between the ground state $|g\rangle$ and the excited state $|e\rangle$ of the qubit by $\omega_e$, its free Hamiltonian can be written \begin{eqnarray} \label{qubit} H_{\text{q}}&=&\omega_{e}|e\rangle\langle e|.
\end{eqnarray} The qubit is assumed to couple with equal strength $\delta$ to all pseudospins, \begin{eqnarray} H_{\text{int}}=-\delta|e\rangle\langle e|\sum_{n=1}^{N} \sigma^{z}_{n}, \end{eqnarray} with $\delta \ll \omega_e$, making the full Hamiltonian \begin{eqnarray} \label{totalH} H=H_{\text{env}}+H_{\text{q}} +H_{\text {int}} \end{eqnarray} belong to the class of central spin models first conceived by Gaudin \cite{Gaudin1976}. The new element in our setup is that we have substituted the QCC in (\ref{QCChamiltonian}) for the time-honored use of the quantum Ising \cite{Quan2006} or XY \cite{Yuan2007a} chains. While the problem by this becomes rather more complex, we shall find that it is still amenable to an exact analysis. Noting that $[H_{\text{q}}, H_{\text{int}}] = 0$, it is easy to verify that the time evolution of the composite state in Eq. (\ref{initial}) splits into two terms: \begin{eqnarray} \label{eq1} |\psi(t)\rangle&=&c_{g}|g\rangle\otimes \exp(-iH_{\text{env}}t)\, |\phi_{\text{env}}(0)\rangle \nonumber \\ &+&e^{-i \omega_{e}t}c_{e}|e\rangle\otimes \exp(-iH^{(\delta)}_{\text{env}}t)\,|\phi_{\text{env}}(0)\rangle. \end{eqnarray} The first term evolves with the unperturbed Hamiltonian of the environment, $H_{\text{env}}$, while in the second term the state of the environment evolves with \begin{equation} \label{pertqcc} H^{(\delta)}_{\text{env}}= H_{\text{env}} + V^{(\delta)}_{\text{env}}, \end{equation} where $V^{(\delta)}_{\text{env}}=-\delta\sum_{n=1}^{N}\sigma_{n}^{z}$ is an effective potential from the coupling to the qubit, leading to a redefinition of the magnetic field, $h \rightarrow h+\delta$. As expected from the vanishing of the commutator $[H_{\text{q}}, H_{\text{int}}]$, the form of the time evolution in (\ref{eq1}) manifests a pure decoherence of the qubit, with no exchange of energy with the environment. 
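The block structure behind the split evolution in Eq. (\ref{eq1}), and the purity relation $P(t)=1-2|c_{g}c_{e}|^2[1-{\cal L}(t)]$ derived in the following, can be verified numerically. The sketch below uses a toy stand-in for the environment (two spins-1/2 with a randomly chosen Hermitian Hamiltonian, illustrative only, not the QCC itself); all parameter values are arbitrary:

```python
import numpy as np
from scipy.linalg import expm

# Toy stand-in for the environment: two spins-1/2 with a random
# Hermitian Hamiltonian (illustrative only, not the QCC itself).
rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H_env = (A + A.conj().T) / 2

Z, I2 = np.diag([1.0, -1.0]), np.eye(2)
Sz_tot = np.kron(Z, I2) + np.kron(I2, Z)           # sum_n sigma^z_n
omega_e, delta = 1.0, 0.05
P_e = np.diag([0.0, 1.0])                          # |e><e| in the (|g>,|e>) basis

# Full Hamiltonian, block diagonal in the qubit basis (no approximation in delta)
H = (np.kron(np.eye(2), H_env)
     + omega_e * np.kron(P_e, np.eye(4))
     - delta * np.kron(P_e, Sz_tot))

cg, ce = 0.6, 0.8                                  # |cg|^2 + |ce|^2 = 1
phi = np.linalg.eigh(H_env)[1][:, 0]               # environment ground state
psi0 = np.kron(np.array([cg, ce]), phi)

t = 2.7
full = expm(-1j * H * t) @ psi0
H_delta = H_env - delta * Sz_tot                   # perturbed environment Hamiltonian
split = (np.kron(np.array([cg, 0.0]), expm(-1j * H_env * t) @ phi)
         + np.exp(-1j * omega_e * t)
         * np.kron(np.array([0.0, ce]), expm(-1j * H_delta * t) @ phi))
err = np.linalg.norm(full - split)                 # two-branch evolution is exact

# Decoherence factor and purity: P = 1 - 2|cg ce|^2 (1 - |nu|^2)
nu = phi.conj() @ expm(1j * H_delta * t) @ expm(-1j * H_env * t) @ phi
rho_q = np.einsum('aibi->ab', np.outer(full, full.conj()).reshape(2, 4, 2, 4))
purity = np.real(np.trace(rho_q @ rho_q))
purity_err = abs(purity - (1 - 2 * (cg * ce) ** 2 * (1 - abs(nu) ** 2)))
```

The agreement is exact (up to machine precision) because the full Hamiltonian is block diagonal in the qubit basis; no expansion in the coupling $\delta$ is involved.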
Tracing out the states of the environment, the time-evolved reduced density matrix of the qubit can be written as \begin{eqnarray} \label{qubitRho} \nonumber \rho_q(t) &=& |c_g|^2|g \rangle \langle g| + |c_e|^2|e \rangle \langle e| \\ &+& e^{-i\omega_et}c^{\ast}_g c_e \nu(t) |e\rangle \langle g| + e^{i\omega_et}c_g c^{\ast}_e \nu^{\ast}(t) |g\rangle \langle e|, \end{eqnarray} where the decoherence factor $\nu(t)$ $-$ quantifying how the qubit state decoheres with time $-$ takes the form \begin{eqnarray} \nu(t) \!=\! \langle \phi_{\text{env}}(0)| \exp(iH_{\text{env}}^{(\delta)}t) \exp(-iH_{\text{env}}t) |\phi_{\text{env}}(0)\rangle, \end{eqnarray} given our assumption that the initial state of the environment is pure, $\rho_{\text{env}}(0) = |\phi_{\text{env}}(0)\rangle \langle \phi_{\text{env}}(0)|$. The absolute square of the decoherence factor $\nu(t)$ equals the Loschmidt echo (LE) of the environment, \begin{eqnarray} \label{Loschmidt} {\cal L}(t) &\!=\!& |\nu(t)|^2 \\ &\!=\!& |\langle \phi_{\text{env}}(0)|\exp(iH_{\text{env}}^{(\delta)}t) \exp(-iH_{\text{env}}t) |\phi_{\text{env}}(0)\rangle|^2. \nonumber \end{eqnarray} As expressed by this equation, an LE \cite{Gorin2006} quantifies the overlap between two states at time $t$ evolved from the same initial state but with different Hamiltonians, the one differing from the other by a small perturbation. As such it provides a measure of the robustness of the time evolution of a system when subject to small perturbations. The simple relation between $\nu(t)$ and ${\cal L}(t)$ in Eq. (\ref{Loschmidt}) formalizes our intuition that an environment whose time evolution is highly sensitive to a perturbation will also be highly effective $-$ by a back action $-$ in causing decoherence of the very system which is responsible for the perturbation, in our case the qubit. Specifically, Eqs. 
(\ref{qubitRho}) and (\ref{Loschmidt}) show that the qubit gets maximally entangled with the environment when ${\cal L}(t) \rightarrow 0$, with a complete loss of coherence. More formally, consider the purity of the qubit, $P(t) = {\rm Tr}_q(\rho_q^2(t))$. A straightforward calculation reveals that $P(t)=1-2|c_{g}c_{e}|^2[1-{\cal L}(t)]$, from which one again reads off the imprint of the Loschmidt echo on the qubit decoherence. It is worth pointing out that by choosing the initial environmental state $|\phi_{\text{env}}(0)\rangle$ as an eigenstate of the unperturbed Hamiltonian $H_{\text{env}}$, the LE in Eq. (\ref{Loschmidt}) codifies a quantum quench \cite{Gogolin2016} with the perturbed Hamiltonian $H_{\text{env}}^{(\delta)}$ as quench Hamiltonian. In the following we shall identify the conditions under which the decay of the Loschmidt echo of the QCC is at its largest, favoring a fast decoherence of the qubit state with a resultant ``quantum-to-classical'' transition. \section{Loschmidt echo of the quantum compass chain} \subsection{Preliminaries} To calculate the LE we should first diagonalize the environment Hamiltonian. For this purpose we use the Jordan-Wigner transformation \begin{eqnarray} \nonumber \sigma^{+}_{n}&\!=\!&(\sigma^{x}_{n}\!+\!{\it i}\sigma^{y}_{n})/2\!=\!\prod_{m=1}^{n-1}\left(-\sigma^{z}_{m}\right)c_{n}^{\dagger},\\ \label{JW} \sigma^{-}_{n}&\!=\!&(\sigma^{x}_{n}\!-\!{\it i}\sigma^{y}_{n})/2\!=\!\prod_{m=1}^{n-1}\left(-\sigma^{z}_{m}\right)c_{n}, \\ \nonumber \sigma^{z}_{n}&\!=\!&2c_{n}^{\dagger}c_{n}\!-\!1, \end{eqnarray} to map the Hamiltonian of the QCC, $H_{\text{env}}$ in (\ref{QCChamiltonian}), onto a free fermion model \cite{You2014a}.
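As a quick sanity check, the Jordan-Wigner operators of Eq. (\ref{JW}) can be built explicitly for a short chain and tested against the canonical fermionic algebra (a brute-force sketch; the chain length $N=4$ is an arbitrary choice):

```python
import numpy as np

sz = np.diag([1.0, -1.0])
sm = np.array([[0.0, 0.0], [1.0, 0.0]])   # sigma^- in the basis (|up>, |down>)
I2 = np.eye(2)
N = 4                                      # a short chain suffices to test the algebra

def site_op(op, n):
    """op acting on site n of the N-site chain, identity elsewhere."""
    out = np.eye(1)
    for m in range(N):
        out = np.kron(out, op if m == n else I2)
    return out

def c(n):
    """Jordan-Wigner fermion c_n = [prod_{m<n}(-sigma^z_m)] sigma^-_n."""
    out = np.eye(2 ** N)
    for m in range(n):
        out = out @ site_op(-sz, m)
    return out @ site_op(sm, n)

anti = lambda X, Y: X @ Y + Y @ X
# {c_n, c_m^dagger} = delta_{nm}
err_mixed = max(np.max(np.abs(anti(c(n), c(m).conj().T) - (n == m) * np.eye(2 ** N)))
                for n in range(N) for m in range(N))
# {c_n, c_m} = 0
err_cc = max(np.max(np.abs(anti(c(n), c(m)))) for n in range(N) for m in range(N))
# sigma^z_n = 2 c_n^dagger c_n - 1
err_sz = max(np.max(np.abs(site_op(sz, n) - (2 * c(n).conj().T @ c(n) - np.eye(2 ** N))))
             for n in range(N))
```

All three deviations vanish to machine precision, confirming that the string of $-\sigma^z$ operators converts the hard-core bosonic spin flips into genuine fermions.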
The transformation is exact, allowing us to write \begin{multline} H_{\text{env}}\!=\!\sum^{N/2}_{n=1} \big(J_{o}c^{\dagger}_{2n}c_{2n-1}\!+\!J_{e} c^{\dagger}_{2n+1}c_{2n}\!+\!J_{o} e^{-i\theta} c^{\dagger}_{2n-1}c^{\dagger}_{2n}\\ +J_{e} e^{i\theta} c^{\dagger}_{2n}c^{\dagger}_{2n+1}-h(c^{\dagger}_{2n-1}c_{2n-1}+ c^{\dagger}_{2n}c_{2n}) \!+\! \mbox{H.c.}\big). \label{eq3} \end{multline} Next we partition the fermionic chain into diatomic unit cells (labelled by $n=1,2,\ldots, N/2$) and introduce two independent fermions at each cell, $c_{n}^{A}\equiv c_{2n}$ and $c_{n}^{B}\equiv c_{2n+1}$. Inserting these operators and their adjoints into Eq. (\ref{eq3}) and Fourier transforming, one obtains % \begin{multline} \label{eq5} H_{\text{env}}=\sum_{k}\big[J_k(\theta) c_{k}^{A\dag}c_{-k}^{B\dag}+L_k c_{k}^{A\dag}c_{k}^{B} \\ -h(c_{k}^{A\dag}c_{k}^{A}+c_{k}^{B\dag}c_{k}^{B}) + \mbox{H.c.}\big], \end{multline} where $J_k(\theta)\equiv J_{o}e^{{\rm i}\theta}-J_{e}e^{{\rm i}(k-\theta)}$, $L_k \equiv J_{o}+J_{e}e^{{\rm i} k}$, with $0 \le \theta \le \pi$. Having imposed periodic boundary conditions on the original QCC Hamiltonian in (\ref{QCChamiltonian}), its fermionic counterpart in (\ref{eq5}) is defined with antiperiodic boundary conditions, $c^{A/B}_{N+1}\!=\!- c^{A/B}_1$, for which $k=\pm 2\pi(2n+1)/N$ with $n=0,1,\ldots, N/4-1$. To simplify notation, from now on we suppress the dependence on the parameter $\theta$.
By bringing in the Nambu spinor $\Gamma^{\dagger}~=~ (c^{A\dagger}_{k}\, c^{B\dagger}_{k}\, c^{A}_{-k}\, c^{B}_{-k})$, $H_{\text{env}}$ in (\ref{eq5}) can be expressed in Bogoliubov-de Gennes form as ${\small H_{\text{env}}=\sum_{k>0}\Gamma^{\dagger}H(k)\Gamma}$, with \begin{eqnarray} \label{BdG} H(k)= \left( \begin{array}{cccc} -2h & L_{k} & 0 & J_{k} \\ L_{k}^{\ast} & -2h & -J_{-k} & 0 \\ 0 & -J_{-k}^{\ast} & 2h & -L^{\ast}_{-k} \\ J_{k}^{\ast} & 0 & -L_{-k} & 2h \\ \end{array} \right). \end{eqnarray} The Bloch matrix $H(k)$ is easily diagonalized, yielding the quasiparticle form of the QCC Hamiltonian, $H_{\text{env}}=\sum_{\alpha=1}^{4}\sum_{k}\varepsilon^{(\alpha)}_{k}\gamma_{k}^{(\alpha) \dag}\gamma_{k}^{(\alpha)}$, where $\gamma_{k}^{(\alpha) \dag}$ and $\gamma_{k}^{(\alpha)}$ are linear combinations of the electron operators in the Nambu spinor with respective energy dispersions $\varepsilon^{(1)}_{k}\!\!=\!\!-\varepsilon^{(4)}_{k}\!\!=\!\!-\sqrt{(a_k+\sqrt{a_k^{2}-b_k})/2}$ and $\varepsilon^{(2)}_{k}\!\!=\!\!-\varepsilon^{(3)}_{k}\!\!=\!\!-\sqrt{(a_k-\sqrt{a_k^{2}-b_k})/2}$, where $a_k\!=\! 8h^{2}+|J_{k}|^{2}+|L_{k}|^{2}+|J_{-k}|^{2}+|L_{-k}|^{2}$ and $b_k=4\Big[(4h^{2}-|L_{k}|^{2})^{2}+4h^{2}(|J_{k}|^{2}+|J_{-k}|^{2})+|J_{k}|^{2}|J_{-k}|^{2}-J_{k}^{\ast}J_{-k}L_{k}^{2}-J_{k}J_{-k}^{\ast}L_{-k}^{2}\Big]$. Note that the Bloch matrix $H^{(\delta)}(k)$ (and the corresponding quasiparticle dispersions) of the perturbed QCC Hamiltonian $H_{\text{env}}^{(\delta)}$ is obtained from $H(k)$ by simply replacing $h$ by $h+\delta$. The QCC ground state $|\psi_0\rangle$ is realized by filling up the negative-energy quasiparticle states, $|\psi_0\rangle = \prod_k \gamma_k^{(1) \dag} \gamma_k^{(2) \dag} |0\rangle$, where $|0\rangle$ is the Bogoliubov vacuum annihilated by the $\gamma_k$'s \cite{Jafari2016}. While excited states can be similarly obtained, their construction becomes quite cumbersome within the Bogoliubov-de Gennes formalism.
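The spectral structure of the Bloch matrix is easy to probe numerically: building $H(k)$ from the definitions of $J_k$ and $L_k$ given after Eq. (\ref{eq5}), one can check that the eigenvalues come in $\pm$ pairs for generic parameters, and that on the critical line $\theta=\pi/2$ at $h=0$ the spectrum collapses to $\{-2|L_k|, 0, 0, 2|L_k|\}$, with a flat zero-energy band (a sketch; all parameter values are illustrative):

```python
import numpy as np

def bloch_H(k, Jo, Je, theta, h):
    """4x4 Bogoliubov-de Gennes Bloch matrix H(k) of the fermionized QCC."""
    Jk  = Jo * np.exp(1j * theta) - Je * np.exp(1j * (k - theta))
    Jmk = Jo * np.exp(1j * theta) - Je * np.exp(1j * (-k - theta))
    Lk  = Jo + Je * np.exp(1j * k)
    Lmk = Jo + Je * np.exp(-1j * k)
    return np.array([
        [-2 * h,        Lk,             0,              Jk          ],
        [np.conj(Lk),  -2 * h,         -Jmk,            0           ],
        [0,            -np.conj(Jmk),   2 * h,         -np.conj(Lmk)],
        [np.conj(Jk),   0,             -Lmk,            2 * h       ],
    ])

Jo, Je, N = 1.0, 1.2, 400
ks = 2 * np.pi * (2 * np.arange(N // 4) + 1) / N   # antiperiodic momenta, 0 < k < pi
pair_err = flat_err = 0.0
for k in ks:
    # generic parameters: eigenvalues come in +/- pairs
    e = np.sort(np.linalg.eigvalsh(bloch_H(k, Jo, Je, 0.3 * np.pi, 0.3)))
    pair_err = max(pair_err, np.max(np.abs(e + e[::-1])))
    # critical line theta = pi/2, h = 0: spectrum {-2|L_k|, 0, 0, +2|L_k|}
    ec = np.sort(np.linalg.eigvalsh(bloch_H(k, Jo, Je, np.pi / 2, 0.0)))
    Lk = abs(Jo + Je * np.exp(1j * k))
    flat_err = max(flat_err, np.max(np.abs(ec - np.array([-2 * Lk, 0, 0, 2 * Lk]))))
```

The $\pm$ pairing follows from the faithful Nambu doubling (the trace of every odd power of $H(k)$ vanishes), and the two exact zero modes on the critical line are the fingerprint of the macroscopic ground-state degeneracy discussed below.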
An alternative approach was pioneered by Sun \cite{Sun2009}. The starting point is the observation that the QCC Hamiltonian can be written as a sum of commuting Hamiltonians $H_k$, obtained by grouping together terms in (\ref{eq5}) with opposite signs of $k$, \begin{multline} \label{commH} H_k=J_{k}c_{k}^{A\dag}c_{-k}^{B\dag}+L_{k}c_{k}^{A\dag}c_{k}^{B} +J_{-k}c_{-k}^{A\dag}c_{k}^{B\dag}\\ +L_{-k} c_{-k}^{A\dag}c_{-k}^{B} - h(c_k^{q\dag}c_k^{q\phantom\dagger} + c_{-k}^{q\dag}c_{-k}^{q\phantom\dagger}) +\mbox{H.c.}, \end{multline} with $q=A,B$ summed over. In the same way as above, the effective potential from the qubit can be included by simply replacing $h$ by $h+\delta$ in (\ref{commH}). Since $H_k$ conserves the number parity (even or odd number of electrons), it is sufficient to consider the even-parity subspace of the Hilbert space, spanned by {\small \begin{align} \nonumber |\varphi_{1,k}\rangle&=|0\rangle,\!&\!|\varphi_{2,k}\rangle&=c_{k}^{A\dag}c_{-k}^{A\dag}|0\rangle,~|\varphi_{3,k}\rangle=c_{k}^{A\dag}c_{-k}^{B\dag}|0\rangle,\\ \nonumber |\varphi_{4,k}\rangle&=c_{-k}^{A\dag}c_{k}^{B\dag}|0\rangle,~\!&\!|\varphi_{5,k}\rangle&=c_{k}^{B\dag}c_{-k}^{B\dag}|0\rangle, ~|\varphi_{6,k}\rangle=c_{k}^{A\dag}c_{k}^{B\dag}|0\rangle,\\ \label{eq7} |\varphi_{7,k}\rangle&=c_{-k}^{A\dag}c_{-k}^{B\dag}|0\rangle,~\!&\!|\varphi_{8,k}\rangle&=c_{k}^{A\dag}c_{-k}^{A\dag}c_{k}^{B\dag}c_{-k}^{B\dag}|0\rangle. \end{align}} Given this basis, the eigenstates $|\psi_{m,k}\rangle$ of $H_k$ can be written as $|\psi_{m,k}\rangle=\sum_{j=1}^{8}v_{m,k}^{(j)}|\varphi_{j,k}\rangle,$ \begin{figure*}[t!] \centerline{\includegraphics[width=0.34\textwidth]{fig1.eps} \includegraphics[width=0.32\textwidth]{fig2.eps} \includegraphics[width=0.32\textwidth]{fig3.eps}} \caption{ (Color online) Three-dimensional plots of the LE in Eq.
(\ref{Loschmidt}) as a function of time $t$ and spin-component mixing angle $\theta$, with $J_{o}\!=\!1$, $h=0$, $\delta=0.01$, $N\!=\!400$, and (a) $J_{e}\!=\!1$, (b) $J_{e}\!=\!1.2$, (c) $J_{e}\!=\!2$.} \label{fig1} \end{figure*} where $v_{m,k}^{(j)}$ with $j=1,... 8$ and $m=0,... 7$ are functions of $J_e,J_o, h, \theta$, and $k$. A straightforward calculation reveals that there is a four-fold degenerate zero-energy level below (above) which there are two bands with negative (positive) energies $\epsilon_{0,k}=(-\epsilon_{7,k})=\varepsilon^{(1)}_{k}+\varepsilon^{(2)}_{k}$ and $\epsilon_{1,k}=(-\epsilon_{6,k})=\varepsilon^{(1)}_{k}-\varepsilon^{(2)}_{k}$ respectively, with $\varepsilon_k^{(1)}$ and $\varepsilon_k^{(2)}$ the quasiparticle energies defined after Eq. (\ref{BdG}). Before plunging ahead with the calculation of the LE $-$ equipped with the results derived above $-$ let us briefly review some pertinent facts about the QCC model. In the absence of a magnetic field, the model enjoys a $\mathbb{Z}_2$ symmetry when $\theta=\theta_c=\pi/2$ for which the model is critical \cite{You2014a}. Here a quantum phase transition (QPT) takes place between two gapped spin-liquid phases $-$ each characterized by large short-range spin correlations in the $x$- and $y$-direction, respectively $-$ for parameter values where the model exhibits maximum frustration of interactions. On the critical line $\theta_c=\pi/2$ in $(\theta,J_e/J_o)$-space, the ground state has a macroscopic degeneracy of $2^{N/4}$ when $J_{o}\neq J_{e}$, which gets enlarged to $2\times2^{N/4}$ at the isotropic point (IP) $J_{o}=J_{e}$. Away from the IP a gap of size $|J_{e}-J_{o}|$ opens at the zone boundaries $k= \pm \pi$, explaining the lower degeneracy in this case. Adding a magnetic field $h$, the massive degeneracy of the ground state collapses to a two-fold degeneracy at the critical values $h_{c}=\pm \cos(\theta)\sqrt{J_{o}J_{e}}$ \cite{You2014a}. 
\subsection{Loschmidt echo: zero magnetic field} To obtain an expression for the LE ${\cal L}(t)$ in (\ref{Loschmidt}) that is practical for computations, we first make a mode decomposition of the QCC ground state $|\phi_{\text{env}}\rangle$, \begin{eqnarray} |\phi_{\text{env}}\rangle = \prod_{0\le k \le \pi} |\psi_{0,k}\rangle, \end{eqnarray} using that $|\psi_{0,k}\rangle$ is the lowest-energy eigenstate of $H_k$ in (\ref{commH}). Introducing a notation for the LE and the eigenstates of $H_k$ where the presence (or absence) of magnetic field $h$ and/or perturbing potential $\sim \delta$ is made explicit, ${\cal L}(h+\delta,t)$ and $|\psi_{m,k}(h+\delta)\rangle$ respectively, the LE in (\ref{Loschmidt}) can then be decomposed as \begin{eqnarray} \label{eq8} {\cal L}(\delta,t)&=&\prod_{0\le k \le \pi} {\cal L}_k(\delta,t),\\ \nonumber {\cal L}_k(\delta,t)&=&\Big|\frac{1}{N_{0,k}(0)} \langle\psi_{0,k}(0)| e^{-iH_{\text{env}}^{(\delta)}t} |\psi_{0,k}(0)\rangle\Big|^{2}\\ \nonumber &=&\Big|\frac{1}{N_{0,k}(0)} \sum_{m=0}^{7}\frac{e^{-i \epsilon^{(\delta)}_{m,k}t}}{N_{m,k}(\delta)}| \langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle|^{2}\Big|^{2} \label{eq8b} \\ \end{eqnarray} where $N_{m,k}(h+\delta)=(\sum_{j=1}^{8}|v_{m,k}^{j}(h+\delta)|^{2})^{1/2}$ is the normalization factor of the eigenstate $|\psi_{m,k}(h+\delta)\rangle$, and $\epsilon^{(\delta)}_{m,k}$ is the eigenvalue of $H_k$ (with $h$ replaced by $h+\delta$ in (\ref{commH})) corresponding to $|\psi_{m,k}(\delta)\rangle$. 
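The mode factors ${\cal L}_k(\delta,t)$ can also be evaluated by brute force, bypassing the analytic expansion: represent the four modes $(A,\pm k)$, $(B,\pm k)$ as Jordan-Wigner matrices on their 16-dimensional Fock space, diagonalize $H_k$ of Eq. (\ref{commH}) exactly, and sum the overlap-weighted phases as in (\ref{eq8}). A sketch follows (the mode ordering and parameter values are our own choices, the even-parity restriction of Eq. (\ref{eq7}) is not imposed, and the field term is taken as $-2h\sum n$, the sign consistent with $-h\sum\sigma^z$ in Eq. (\ref{QCChamiltonian})):

```python
import numpy as np

def fermion_ops(n_modes):
    """Jordan-Wigner matrices c_0..c_{n-1} on the 2^n-dimensional Fock space."""
    sz = np.diag([1.0, -1.0])
    sm = np.array([[0.0, 0.0], [1.0, 0.0]])
    I2 = np.eye(2)
    ops = []
    for n in range(n_modes):
        mats = [-sz] * n + [sm] + [I2] * (n_modes - n - 1)
        out = np.eye(1)
        for m in mats:
            out = np.kron(out, m)
        ops.append(out)
    return ops

def H_k(k, Jo, Je, theta, h):
    """Mode Hamiltonian H_k for the four modes (A,k), (A,-k), (B,k), (B,-k)."""
    cAk, cAmk, cBk, cBmk = fermion_ops(4)
    Jk  = Jo * np.exp(1j * theta) - Je * np.exp(1j * (k - theta))
    Jmk = Jo * np.exp(1j * theta) - Je * np.exp(1j * (-k - theta))
    Lk  = Jo + Je * np.exp(1j * k)
    Lmk = Jo + Je * np.exp(-1j * k)
    n_tot = sum(c.conj().T @ c for c in (cAk, cAmk, cBk, cBmk))
    H = (Jk * cAk.conj().T @ cBmk.conj().T + Lk * cAk.conj().T @ cBk
         + Jmk * cAmk.conj().T @ cBk.conj().T + Lmk * cAmk.conj().T @ cBmk
         - h * n_tot)
    return H + H.conj().T

def echo_mode(k, Jo, Je, theta, h, delta, ts):
    """|<psi_0(h)| exp(-i H_k(h+delta) t) |psi_0(h)>|^2 and the overlaps F_m."""
    psi0 = np.linalg.eigh(H_k(k, Jo, Je, theta, h))[1][:, 0]
    E, V = np.linalg.eigh(H_k(k, Jo, Je, theta, h + delta))
    F = np.abs(V.conj().T @ psi0) ** 2
    return np.abs(F @ np.exp(-1j * np.outer(E, ts))) ** 2, F

ts = np.linspace(0.0, 20.0, 50)
L_pert, F = echo_mode(0.7, 1.0, 1.2, 0.3 * np.pi, 0.2, 0.05, ts)
L_free, _ = echo_mode(0.7, 1.0, 1.2, 0.3 * np.pi, 0.2, 0.0, ts)
```

By construction ${\cal L}_k(\delta,0)=1$, ${\cal L}_k(0,t)\equiv 1$, the overlaps $F_{m,k}$ sum to unity, and ${\cal L}_k$ never exceeds one; the product over all $k>0$ then reproduces the full LE.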
A straightforward calculation shows that ${\cal L}_k(\delta,t)$ can be expressed as \begin{eqnarray} \label{eq9} {\cal L}_k(\delta,t)&&= |1-A_{0,k}\sin^{2}[(\varepsilon_{k}^{(1)}(\delta)+\varepsilon_{k}^{(2)}(\delta))t]\\ \nonumber &&-B_{0,k}\sin^{2}[(\varepsilon_{k}^{(1)}(\delta)+\varepsilon_{k}^{(2)}(\delta))t/2]\\ \nonumber &&-A_{1,k}\sin^{2}[(\varepsilon_{k}^{(1)}(\delta)-\varepsilon_{k}^{(2)}(\delta))t]\\ \nonumber &&-B_{1,k}\sin^{2}[(\varepsilon_{k}^{(1)}(\delta)-\varepsilon_{k}^{(2)}(\delta))t/2]\\ \nonumber &&-C_{k}\sin^{2}[\varepsilon_{k}^{(2)}(\delta)t]-D_{k}\sin^{2}[\varepsilon_{k}^{(1)}(\delta)t]|. \end{eqnarray} Here $A_{0,k}, B_{0,k}, A_{1,k}, B_{1,k}, C_k$ and $D_k$ are products of linear combinations of the state overlaps $F_{m,k}=|\langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle|^{2}$ $(m=0,...,7)$ (for details, see the Appendix), with $\varepsilon_k^{(\alpha)}(\delta) \,(\alpha=1,2)$ being energies of the quasiparticles filling up the ground state of $H_{\text{env}}^{(\delta)}$. The second filled quasiparticle band in the ground state becomes dispersionless along the critical line $\theta_c= \pi/2$, with $\varepsilon_k^{(2)}=0$. For this case the LE reduces to the simple form % \begin{eqnarray} \nonumber {\cal L}_k(\delta,t)=|1-A_{c,k}\sin^{2}(\varepsilon_{k}^{(1)}(\delta)t)-B_{c,k}\sin^{2}(\frac{\varepsilon_{k}^{(1)}(\delta)t}{2})|,\\ \label{eq10} \end{eqnarray} with $A_{c,k}=A_{0,k}+A_{1,k}+D_{k}$ and $B_{c,k}=B_{0,k}+B_{1,k}$. Having obtained explicit formulas for the LE in (\ref{eq9}) and (\ref{eq10}), we are now ready to numerically probe some representative cases. Choosing $\delta=0.01$, the behavior of ${\cal L}(\delta,t)$ versus $t$ and $\theta/\pi$ at the {\em isotropic point (IP)} {\boldmath $J_e\!=\!J_o$} is displayed in FIG. 1(a) for a chain with $N=400$ sites. It is seen from the figure that the LE decays fast at the critical point $\theta_{c}=\pi/2$ of the {\it unperturbed} QCC Hamiltonian.
However, the decay of the LE is even faster slightly off the unperturbed critical point, where the LE exhibits two subvalleys. An analysis reveals that the extra valleys occur at the critical points $\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o})$ of the {\it perturbed} QCC Hamiltonian $H^{(\delta)}_{\text{env}}$. Thus, the criticality of the perturbed and the unperturbed environmental Hamiltonians is here the common feature linked to an accelerated decay of the LE. While many numerical studies suggest that criticality of an environment enhances the decay of the LE \cite{Quan2006,Yuan2007a,Cucchietti2007,Sun:2007aa,Ma2007,Mostame2007,Rossini2008,Haikka2012,Sharma2012}, to the best of our knowledge our result is the first that explicitly displays an enhanced decay both at the unperturbed {\it and} perturbed critical point of a model environment. Let us now study the LE {\em away from the IP} {\boldmath $(J_{e}\neq J_{o})$}. The case $J_o=1.0, J_e=1.2$, with (as before) $\delta = 0.01$ and $N=400$, is plotted in FIG. 1(b). Similar to the case of the IP, the decay of the LE is again seen to be at its maximum at the critical points of the perturbed Hamiltonian ($\theta_{c}=\arccos(\pm \delta/\sqrt{J_{e}J_{o}})$). Different from the IP, however, the decay of the LE at the critical point of the unperturbed Hamiltonian ($\theta_c=\pi/2$) shows no enhancement but fluctuates around a constant value close to unity. {\em This challenges the common notion that a critical environment always leads to a fast decay of the LE \cite{Quan2006}, and by that, a fast decoherence of the system that couples to it \cite{Yuan2007a}.} Notably, increasing the length of the chain or the observation time does not change this conclusion. What is the reason for this unexpected result? To find out, let us first go back to Eqs. (\ref{eq8}), (\ref{eq8b}), and (\ref{eq9}) and try to understand, from a mathematical point of view, how these equations control the decay of the LE. 
Since the maximum value of any $k$-mode ${\cal L}_k(\delta,t)$ is unity, it is clear from Eq. (\ref{eq8}) that it is sufficient that only a few of the modes take on very small values in order for the LE to get suppressed. As manifest in Eq. (\ref{eq9}), the actual contribution from a given $k$-mode to the LE is controlled by its oscillation terms, with a small/large value of an oscillation term implying a large/small contribution. An analysis reveals that all oscillation amplitudes $A_{0,k}$, $B_{0,k}$, $A_{1,k}$, $B_{1,k}$, $C_{k}$, and $D_{k}$ are small at the {\em IP critical point} {\boldmath $\theta_c=\pi/2$}, except for $B_{0,k}$ when approaching one of the Brillouin zone boundaries $k=\pm \pi$ at which $B_{0,k}$ reaches a sharp maximum (FIG. \ref{fig2}(a)). It follows from Eq. (\ref{eq10}) that the corresponding modes in the immediate neighborhood of a zone boundary will contribute constructively/destructively over time intervals where $\sin^{2}(\varepsilon_{k}^{(1)}(\delta)t)$ is small/large. Thus, by the periodicity of the sine function, the LE is expected to exhibit periodic revivals, signaling a non-Markovian reduced dynamics of the qubit with a backflow of information from the environment \cite{Haikka2012}. This expectation is well confirmed numerically, cf. FIG. \ref{fig2}(c). \begin{figure*}[t] \centerline{\includegraphics[width=0.37\linewidth]{fig4.eps} \includegraphics[width=0.34\linewidth]{fig5.eps} \vspace{0.4cm} \includegraphics[width=0.28\linewidth]{fig6.eps}} \caption{ (Color online) Oscillation amplitude (a) $B_{0,k}$ and (b) $C_{k}$ in the mode decomposition, Eq. (\ref{eq9}), of the LE as a function of crystal momentum $k$ and spin-component mixing angle $\theta$ {\em at the isotropic point} $J_{o}\!=\!J_{e}\!=\!1$, with qubit-environment coupling $\delta\!=\!0.01$, and with $h=0, N\!=\!400$. (c) Time evolution of the LE, Eq.
(\ref{Loschmidt}), for the same set of parameter values at $\theta= 0.5000\,\pi$ (critical point of the unperturbed QCC) and $\theta=0.4968 \,\pi$ (critical point of the perturbed QCC).} \label{fig2} \end{figure*} \begin{figure*} \centerline{\includegraphics[width=0.34\linewidth]{fig7.eps} \includegraphics[width=0.32\linewidth]{fig8.eps} \includegraphics[width=0.28\linewidth]{fig9.eps}} \caption{ (Color online) Oscillation amplitude (a) $B_{0,k}$ and (b) $C_{k}$ in the mode decomposition, Eq. (\ref{eq9}), of the LE as function of crystal momentum $k$ and spin-component mixing angle $\theta$ {\em away from the isotropic point}, $J_o\!=\!1$ and $J_e\!=\!1.2$, with qubit-environment coupling $\delta=0.01$, and with $h=0, N=400$. (c) Time evolution of the LE for the same set of parameter values for $\theta \!=\! 0.5000\,\pi$ (critical point of the unperturbed QCC) and $\theta_{c}\!=\!0.4970\, \pi$ (critical point of the perturbed QCC).} \label{fig3} \end{figure*} \begin{figure} \includegraphics[width=0.66\linewidth]{fig22.eps} \caption{ (Color online) Cross sections of FIGs. \ref{fig2}(b) (blue) and \ref{fig3}(b) (red) showing $C_k$ versus $\theta$ at $k=0$.} \label{fig8} \end{figure} It is actually instructive to unearth the revival period from Eq. (\ref{eq9}), explaining why FIG. 1(a) suggests a monotonic decay of the LE for $\theta_c=\pi/2$ at the IP, while, in fact, as revealed by the blue graph in FIG. 2(c), it exhibits a stable and distinct revival structure when going to larger time scales. Following Ref. [\onlinecite{Jafari2017}], we make the Ansatz $\varepsilon_{k=\pi}^{1}(\theta_{c})\,t/2=m\pi$, with $m$ an integer and with $k=\pi$ the mode with the largest oscillation amplitude ($B_{0,k}$ at the BZ boundary). 
Taylor-expanding $\varepsilon_{\pi-p\delta k}^{1}(\theta_{c}) \approx \varepsilon_{\pi}^{1}(\theta_{c})-\partial_k \varepsilon_{k}^{1}(\theta_{c})|_{\pi}\, p\delta k$, one realizes that $B_{0,k}$-terms of nearby $k$-modes are strongly suppressed when $t$ is a multiple of $Na/v_{g}$, with $v_{g}=\partial_k \varepsilon_{k}^{1}(\theta_{c})|_{\pi}$ the group velocity of the corresponding quasiparticle and $a=1$ the size of the unit cell, implying a revival time $T_{\text{rev}}\approx N/v_{g}$. Here $p\ll N$ are integers and $\delta k = 2\pi/N$. Putting in numbers, one obtains $T_{\text{rev}}=122$ (in arbitrary units), in excellent agreement with FIG. 2(c), however not visible on the shorter time scale of FIG. 1(a). Focusing now on the accelerated decay of the LE at the {\em IP critical points} {\boldmath $\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o})$} of the perturbed Hamiltonian $H_{\text{env}}^{(\delta)}$ (cf. the subvalleys in FIG. 1(a) and the red graph in FIG. 2(c)), an analysis of Eq. (\ref{eq9}) shows that it is caused by the oscillation term $\sim C_k$. As illustrated in FIG. 2(b) for $\delta=0.01$ and $J_o\!=\!J_e\!=\!1.0$, $C_{k}$ peaks to large values at $\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o})$ for all $k$. (For a cross-sectional view at $k=0$, see FIG. \ref{fig8}.) This is to be contrasted with the structure of $B_{0,k}$ away from $\theta_c=\pi/2$, being broad and shallow, cf. FIG. 2(a). The revival time of the LE is now controlled by the group velocity of the quasiparticles which occupy the $\varepsilon_{k}^{2}$ band (corresponding to the $C_k$ amplitude, cf. Eq. (\ref{eq9})). Since this band is almost flat at $\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o})$ with $\delta$ small and $J_eJ_o=1$ (see FIG. \ref{fig5}(a)), the quasiparticle group velocity is exceedingly small: $v_g \sim 10^{-7}$ (in arbitrary units) for $\delta=0.01$. Considering the time scale of FIG.
2(c), the revival time which ensues, $T_{\text{rev}} \approx N/v_g \sim 10^6$, is far too large for the revivals to be picked up in this figure. Instead, the rapid decay and subsequent vanishing of the LE depicted by the red graph in this figure suggests a Markovian dynamics of the qubit. This is similar to a central spin model with the transverse field Ising chain as environment, where the critical point has been found to support a purely Markovian dynamics over short initial times\cite{Haikka2012}. It is important to point out, however, that our analysis does predict that a (non-Markovian) revival structure will appear if one waits sufficiently long, signaling a backflow of information from the environment to the qubit at very large times. Admittedly, these revivals appear only on extremely large time scales at which a central spin model may no longer be a realistic model for capturing a decoherence process. \begin{figure*}[t] \centerline{\includegraphics[width=0.34\linewidth]{fig10.eps} \includegraphics[width=0.31\linewidth]{fig11.eps} \includegraphics[width=0.31\linewidth]{fig12.eps}} \caption{ (Color online) (a) Time evolution of the LE in Eq. (\ref{Loschmidt}) at the isotropic point $J_o=J_e=1$ for $\delta=0.1, h=0$, and $N\!=\!400$. The oscillation amplitude (b) $B_{0,k}$, and (c) $C_{k}$ in the mode decomposition of the LE, Eq. (\ref{eq9}), as a function of $k$ and $\theta$ for the same parameter values.} \label{fig4} \end{figure*} Turning, finally, to the behavior of the LE {\em away from the IP}, the oscillation amplitudes $B_{0,k}$ and $C_{k}$ are plotted in Fig. \ref{fig3}(a) and (b) for $J_{o}=1, J_{e}=1.2$. As one can see, $B_{0,k}$ is small for all values of $\theta$ and results in the LE oscillating randomly around a mean value close to unity at $\theta_c=\pi/2$ (Fig. \ref{fig1}(b) and (c) and Fig. \ref{fig3}(c)). However, $C_{k}$ is still large at $\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o})$ of the perturbed theory (cf. 
FIG (\ref{fig8})), causing a fast decay of the LE at the critical points of the perturbed QCC Hamiltonian. (Fig. \ref{fig3}(c)). Before concluding this part of our discussion, let us numerically corroborate the expectation that by increasing the strength of the coupling $\delta$ between the environment and the qubit, the decay of the LE will become faster and broader. This is strikingly illustrated for the IP in FIG. (\ref{fig4})(a), having increased $\delta$ by one order of magnitude to $\delta=0.1$. The amplitudes of the corresponding dominating oscillation terms in the mode decomposition of the LE are depicted in Fig. \ref{fig4}(b) and (c) for all values of $\theta$: By making the coupling $\delta$ larger the oscillation amplitudes increase and broaden, resulting in a significantly faster decay of the LE over a large parameter interval. To understand the physics behind the different behaviors of the LE at the IP $(J_e = J_o)$ and away from the IP $(J_e \neq J_o)$, let us recall that the oscillation amplitudes in (\ref{eq9}) are made up of products of state overlaps $F_{m,k}=|\langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle|^{2}$ $(m=0,...,7)$. Knowing that $|\psi_{m,k}(0)\rangle$ is an eigenstate of $H_k$ in (\ref{commH}), implying that $\langle\psi_{m,k}(0)|\psi_{0,k}(0)\rangle = \delta_{m0}$ (up to a normalization factor), one may be tempted to argue that $\langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle$ must be very small for all $m \neq 0$ since $\delta$ is a small perturbation. If this were the case, however, all oscillation amplitudes in (\ref{eq9}) would be vanishingly small for any $k$, resulting in a non-decaying LE with a value close to unity. This, as we have seen, is not the case. The argument goes wrong by the assumption that a small perturbation can only cause a small change of a state overlap. 
However, a state where quasiparticles may easily be excited by a small perturbation, such as at a critical point, can dramatically change character when perturbed and lead to sizable overlaps $\langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle$. Specifically, if $|\psi_{m,k}(0)\rangle$ with $m\neq 0$ is an eigenstate of the {\it unperturbed} Hamiltonian $H_k$ close to criticality (with $h=0$ in (\ref{commH})), a perturbation $|\psi_{m,k}(0)\rangle \rightarrow |\psi_{m,k}(\delta)\rangle$ may restructure the state dramatically, allowing for a finite overlap with $|\psi_{0,k}(0)\rangle$. Likewise, if $|\psi_{0,k}(\delta)\rangle$ is an eigenstate of the {\it perturbed} Hamiltonian close to its critical point (with $h=\delta$ in (\ref{commH})), $|\psi_{0,k}(0)\rangle$ may feature a very different structure with a finite overlap with $|\psi_{m,k}(\delta)\rangle$ also for $m\neq 0$. This explains why criticality of the unperturbed ($\theta_c = \pi/2$) as well as the perturbed ($\theta_c = \arccos(\pm \delta/\sqrt{J_eJ_o}))$ QCC Hamiltonian enhances the decay of the LE at the IP, making precise the expectation that the decoherence of the qubit is strongest at a critical point where the environment is most susceptible to a perturbation. Which of the critical points will be most effective in suppressing the LE depends on details of the model considered, such as the particular state overlaps which enter into a given oscillation amplitude of the LE modes ${\cal L}_k(\delta,t)$. In the present case, with the QCC as environment, the decays of the LE at the IP perturbed critical points are at a maximum, followed by extremely slow revivals. Still, also the IP unperturbed critical point is quite effective in causing an initial suppression of the LE, however with fast subsequent revivals. If we try to explain our findings {\it away from the IP} along the lines above, we are faced with an apparent conundrum. 
While the LE still decays at the critical point of the perturbed QCC Hamiltonian, it equilibrates around a value close to unity when at the critical point of the unperturbed theory. Why is that? Why is the critical point of the unperturbed theory now ineffective in suppressing the LE? \begin{figure*} \centerline{\includegraphics[width=0.33\linewidth]{fig13.eps} \includegraphics[width=0.31\linewidth,]{fig14.eps} \includegraphics[width=0.33\linewidth]{fig15.eps}} \caption{ (Color online) Bogoliubov-de Gennes quasiparticle spectrum $\pm\varepsilon_{k}^{1,2}(0)$ for the unperturbed QCC, Eq. (\ref{QCChamiltonian}), at (a) the isotropic point $J_{o}=J_{e}=1, h=0$, (b) the anisotropic point $J_{o}=1, J_{e}=1.2, h=0$, and (c) the isotropic point $J_{o}=J_{e}=1$, $h=0.5$.} \label{fig5} \end{figure*} \begin{figure*} \centerline{\includegraphics[width=0.23\linewidth]{fig23.eps} \includegraphics[width=0.23\linewidth]{fig24.eps} \includegraphics[width=0.23\linewidth]{fig25.eps} \includegraphics[width=0.23\linewidth]{fig26.eps}} \caption{ (Color online) Cross section of (a) FIG. (\ref{fig5})(a) at $k=\pi$; (b) FIG. (\ref{fig5})(b) at $k=\pi$; (c) FIG. (\ref{fig5})(a) at $k=0$; (d) FIG. (\ref{fig5})(b) at $k=0$.} \label{fig9} \end{figure*} The answer can be found by inspecting the quasiparticle spectrum, FIG. (\ref{fig5}). Panel (a) shows the unperturbed QCC spectrum at the IP, where the $\varepsilon_k^{(1)}$ band (which, together with the $\epsilon_k^{(2)}$ band, is completely filled in the QCC ground state) is seen to be degenerate with the other bands at $k=\pi$ and $\theta_c=\pi/2$, thus favoring quasiparticle excitations in the neighborhood of $k=\pi$. This, as we have argued, explains why one of the IP oscillation amplitudes in the LE modes, $B_{0,k}$ as it turns out, becomes large at $k=\pi$. Now look at panel (b) of FIG. (\ref{fig5}) which displays the spectrum away from the IP, with $J_e/J_o=1.2$. 
Here a gap has opened up at $k=\pi$, separating the $\varepsilon_k^{(1)}$ band from that of $\varepsilon_k^{(2)}$, thus holding back quasiparticle excitations and, as a consequence, dampening the oscillation amplitudes in the LE modes. (For cross-sectional views of the spectra in FIGs (\ref{fig5})(a) and (b) at $k=\pi$, see FIGs (\ref{fig9})(a) and (b), respectively.) The filled $\varepsilon_k^{(2)}$ band is still degenerate with the next higher band for all $k$ at $\theta_c=\pi/2$. However, as evident from FIG. 3(a), the possibility of quasiparticle excitations from this band does not compensate for the loss of excitations from the $\epsilon_k^{(1)}$ band: the $B_{0,k}$ amplitude is now strongly suppressed. It is here important to note that the $\varepsilon_k^{(2)}$ band is dispersionless for all $k$ at $\theta_c=\pi/2$. Thus, the quasiparticles from this band cannot contribute significantly to the time-dependent parts of the oscillation terms at the IP for small $\delta$ and hence cannot influence the revival structure of the LE. Different from the scenario at the critical point of the unperturbed QCC Hamiltonian, the LE at the critical point of the perturbed theory, $\theta_c=\arccos(\pm \delta/\sqrt{J_eJ_o})$, is controlled by the $\epsilon_k^{(2)}$ band and the mode oscillation amplitude $C_k$, at the IP (FIG. 2(b)) as well as away from the IP (FIG. 3(b)). In both cases the $\epsilon_k^{(2)}$ band of the unperturbed QCC Hamiltonian is gapless at $k=0$ (cf. FIG. (\ref{fig5})(a) and (b), respectively, with cross sections in FIG. (\ref{fig9})(c) and (d)), making quasiparticles easy to excite. For small $\delta$, when the two unperturbed critical points are close to $\pi/2$, the gap to excitations away from $k=0$ is extremely small, still allowing for an avalanche of quasiparticle excitations with a concurrent dramatic restructuring of the eigenstates. 
This is the reason for the almost constant and large value of the $C_k$ amplitude across the halved Brillouin zone in FIGs 2(b) and 3(b). As we have already discussed, the fact that the controlling $\varepsilon_k^{(2)}$ band is almost flat for all $k$ close to $\pi/2$ explains why the decay of the LE at the critical points of the perturbed QCC Hamiltonian appears to be monotonic: the group velocity $v_g$ of the quasiparticles is very small, resulting in exceedingly large revival periods, also for small finite systems. \begin{figure*} \centerline{\includegraphics[width=0.34\linewidth]{fig16.eps} \includegraphics[width=0.34\linewidth]{fig17.eps} \includegraphics[width=0.28\linewidth]{fig18.eps}} \caption{ (Color online) (a) Three-dimensional plot of the LE in Eq. (\ref{Loschmidt}) as function of time $t$ and magnetic field $h$. (b) The oscillation amplitude $C_{k}$ in the mode decomposition of the LE, Eq. (\ref{eq9}), as function of crystal momentum $k$ and magnetic field $h$. (c) The LE as function of time $t$ at the critical point of the unperturbed (perturbed) QCC with magnetic field $h=1 \ (h=1-\delta$). The Hamiltonian parameters in all three panels are set to $J_{o}\!=\!1, J_e\!=\!2$, $\theta/\pi=1/4$, $\delta=0.01$ and $N=400$.} \label{fig6} \end{figure*} The essential role of the quasiparticles and their excitations in driving the behavior of the Loschmidt echo $-$ and the associated decoherence of the coupled qubit $-$ should now be clear. As detailed above, the quasiparticles play a double role. First, their excitations may restructure the unperturbed eigenstates of $H_k$ in (\ref{commH}) substantially when prevalent, making possible large state overlaps and, by that, large oscillation amplitudes in the mode decomposition of the LE. Secondly, the curvature of the quasiparticle bands determines the revival structure of the LE. 
A large/small curvature with a resulting large/small group velocity $v_g$ of the quasiparticles will set the time scale on which the qubit dynamics appears to be Markovian. \subsection{Loschmidt echo: finite magnetic field} The unperturbed QCC in a magnetic field $h$ exhibits a critical line $h_c\!=\! \pm\cos(\theta)\sqrt{J_{o}J_{e}}$ parameterized by $\theta, J_o$ and $J_e$ \cite{You2014a}. Choosing $J_{o}=1$, $J_{e}=2$, $\theta=\pi/4$, $\delta=0.01$ and $N=400$, we have plotted the corresponding LE versus $h$ and $t$ in FIG. (\ref{fig6})(a). As expected, the LE shows a single dip at the critical field $h_{c}=1$. This result is generic: With $\theta, J_e$, and $J_o$ fixed, the LE suffers an enhanced decay only at the corresponding critical field of the QCC Hamiltonian, be it unperturbed ($h_c\!=\! \pm\cos(\theta)\sqrt{J_{o}J_{e}}$) or perturbed ($h_c\!=\! \delta\pm\cos(\theta)\sqrt{J_{o}J_{e}}$). In both cases the revival time of the LE is controlled by the group velocity of the quasiparticles in the $\varepsilon_{k}^{(2)}$ band of the perturbed Hamiltonian. The magnetic field bends this band (cf. FIG. (\ref{fig5})(c)), and as a result the group velocities can be significantly larger than in the case when the field is zero. Moreover, numerical computations show that all oscillation amplitudes are very small in parameter space except $C_{k}$ which takes a large value at the critical field in the center of the Brillouin zone (FIG. (\ref{fig6})(b)). Putting these facts together, we expect that the time evolution of the LE manifests distinct decays and revivals at the critical field. This is verified in FIG. (\ref{fig6})(c) where the LE has been plotted versus time for $h_{c}=1$ (unperturbed QCC, blue curve) and for $h_c\!=\!1-\delta$ (perturbed QCC, red curve). In both cases, the LE indeed exhibits deep valleys and high peaks, however with different revival periods. 
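The revival mechanism derived above, phase realignment of all contributing $k$-modes at multiples of $T_{\text{rev}}\approx N/v_{g}$, can be illustrated with a minimal numerical sketch. The single-band toy product below uses a linearized dispersion and a fixed mode amplitude $A$; both are illustrative assumptions, not the actual QCC band structure:

```python
import numpy as np

def loschmidt_toy(t, N=400, v_g=1.0, A=0.5):
    # Toy single-band LE product, L(t) = prod_p [1 - A sin^2(eps_p t / 2)],
    # with a linearized dispersion eps_p = v_g * (2*pi/N) * p.
    p = np.arange(1, N // 2 + 1)
    eps = v_g * (2.0 * np.pi / N) * p
    return np.prod(1.0 - A * np.sin(eps * t / 2.0) ** 2)

N, v_g = 400, 1.0
T_rev = N / v_g
assert loschmidt_toy(0.0, N, v_g) == 1.0            # initial value
assert loschmidt_toy(T_rev / 2, N, v_g) < 1e-6      # collapsed in between
assert abs(loschmidt_toy(T_rev, N, v_g) - 1.0) < 1e-9  # full revival at N/v_g
```

All phases $\varepsilon_p t/2$ become integer multiples of $\pi$ at $t=N/v_g$, so every factor returns to unity simultaneously. With a nearly flat band ($v_g\sim 10^{-7}$), the same expression pushes $T_{\text{rev}}=N/v_g$ to $\sim 10^6$, which is why no revival is visible on the time scales plotted in FIG. 2(c).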
\section{Decoherence of a qubit in an extended-XY-model environment} In this section we investigate the decoherence of a qubit embedded in an environment described by the one-dimensional extended $XY$ model with a transverse staggered magnetic field \cite{Titvinidze2003}. While much of the methodology can be carried over from Secs. II and III, replacing the quantum compass chain by the extended XY model will provide a complementary vista, adding to the picture of qubit decoherence in an interacting spin environment. Imposing periodic boundary conditions, and assuming that the coupling to the qubit contains both a uniform ($\sim \delta$) and a staggered ($\sim (-1)^n \delta_s$) component, the Hamiltonian of the composite system takes the form $H=H_{\text{env}} + H_{\text{q}} + H_{\text{int}}$, where \begin{eqnarray} \label{eq11} \nonumber H_{\text{env}}&\!=\!&-\frac{1}{2} \sum_{n=1}^N \Big(\frac{J}{2}(\sigma_n^{x}\sigma_{n+1}^{x}+\sigma_{n}^{y}\sigma_{n+1}^{y})\\ &\!+\!&\frac{J_{3}}{4}(\sigma_{n}^{x}\sigma_{n+2}^{x}\!+\!\sigma_{n}^{y}\sigma_{n+2}^{y})\sigma_{n+1}^{z}\!+\!(-1)^{n}h_{s}\sigma_{n}^{z}\Big),\\ \nonumber H_{\text{q}}&\!=\!&\omega_{e}|e\rangle\langle e|, \ H_{\text{int}}\!=\!-\frac{1}{2}|e\rangle\langle e|\sum_{n=1}^N\Big(\delta\!+\!(-1)^{n}\delta_{s}\Big)\sigma_{n}^{z}. \end{eqnarray} We have here used the same tags for the Hamiltonians as in Sec. II, with $H_{\text{env}}$ and $H_{\text{q}}$ denoting the decoupled Hamiltonian of the environment and the qubit, respectively, and with ${H}_{\text{int}}$ the Hamiltonian of the qubit-environment interaction. Here $N$ counts the number of sites on the one-dimensional lattice, $h_{s}$ is the magnitude of the staggered transverse magnetic field, and $J$ and $J_{3}$ are exchange couplings between spins on nearest-neighbor and next-nearest-neighbor sites, respectively. For simplicity we here consider the XX-limit of the model, with identical couplings in the $x$- and $y$-directions. As in Sec. 
II we assume that the qubit is initially disentangled from the environment. In other words, the state $|\psi(0)\rangle$ of the composite system at time $t\!=\!0$ is given by $|\psi(0)\rangle=|\phi_{\text{q}}(0)\rangle\otimes|\phi_{\text{env}}(0)\rangle$, with the normalized qubit state $|\phi_{\text{q}}(0)\rangle=c_{g}|g\rangle+c_{e}|e\rangle$ a superposition of the ground state $|g\rangle$ and excited state $|e\rangle$, and where $|\phi_{\text{env}}(0)\rangle$ is the initial state of the environment. With $U(t)=\exp (-i {H} t)$ the time-evolution operator, the time-evolved composite state can be written as \begin{eqnarray} \label{eq12} |\psi(t)\rangle&=&c_{g}|g\rangle \otimes \exp(-i {H}_{\text{env}}t)|\phi_{\text{env}}(0)\rangle\\ \nonumber &+& \exp(-i \omega_{e}t)c_{e}|e\rangle \otimes \exp(-i {H}^{(\delta,\delta_{s})}_{\text{env}}t)|\phi_{\text{env}}(0)\rangle, \end{eqnarray} using that $[{H}_{\text{q}},{H}_{\text{env}}] = [{H}_{\text{q}},{H}_{\text{int}}]=0$. Here \begin{equation} \label{XYpert} {H}^{(\delta,\delta_{s})}_{\text{env}}={H}_{\text{env}}+V_{\text{env}}(\delta, \delta_{s}) \end{equation} is the perturbed Hamiltonian of the environment, with $V_{\text{env}}(\delta, \delta_{s})=-\frac{1}{2}\sum_{n=1}^N[\delta+(-1)^{n}\delta_{s}]\sigma_{n}^{z}$ the effective potential from the interaction with the qubit. Note that the perturbed Hamiltonian ${H}^{(\delta,\delta_{s})}_{\text{env}}$ describes the extended $XY$ model in a staggered transverse magnetic field $h_s + \delta_s$, with an added uniform transverse field $\delta$. In order to investigate the decoherence process induced by the environment, we follow the same route as in Sec. II. Eq. 
(\ref{eq12}) implies that the reduced density matrix of the qubit takes the form \begin{eqnarray} \label{eq13} {\cal\rho}_{\text{q}}&=&Tr_{\text{env}}|\psi(t)\rangle \langle\psi(t)|=c_{g}^{2}|g\rangle\langle g|+c_{e}^{2}|e\rangle\langle e|\\ \nonumber &+&e^{-i \omega_{e}t}c^{\ast}_{g}c_{e}\nu(t)|e\rangle\langle g|+e^{i \omega_{e}t}c_{g}c^{\ast}_{e}\nu^{\ast}(t)|g\rangle\langle e|, \end{eqnarray} with $\nu(t)\!=\!\langle\phi_{\text{env}}(0)|\exp(i {H}_{\text{env}}^{(\delta, \delta_{s})}t)\exp(-i {H}_{\text{env}}t)|\phi_{\text{env}}(0)\rangle$ the decoherence factor, implying the LE ${\cal L}=|\nu(t)|^2$ \cite{Quan2006, Cucchietti2003}. Thus, as in our analysis of the QCC-induced decoherence of the qubit in Sec. III, the problem boils down to computing the LE of the environment, now described by the extended XY model in a staggered magnetic field, perturbed by the qubit. \section{Loschmidt echo of the extended XY model \label{ESEXY}} \subsection{Preliminaries} To derive a closed form of the LE we must first diagonalize the unperturbed as well as the perturbed environmental Hamiltonian. In fact, it is sufficient to diagonalize the perturbed Hamiltonian in (\ref{XYpert}), ${H}^{(\delta,\delta_{s})}_{\text{env}}$, since it reduces to the unperturbed one, ${H}_{\text{env}}$ in (\ref{eq11}), by setting $\delta=\delta_s=0$. As a first step we again exploit the Jordan-Wigner transformation (\ref{JW}), and map ${H}^{(\delta,\delta_{s})}_{\text{env}}$ onto a free fermion model, \begin{eqnarray} \nonumber {H}_{\text{env}}^{(\delta, \delta_{s})}&=&-\frac{1}{2}\sum_{n=1}^{N}\Big(J(c_{n}^{\dagger}c_{n+1}+c^{\dagger}_{n+1}c_{n})\\ \nonumber &+&\frac{J_{3}}{2}(c_{n}^{\dagger}c_{n+2}+c^{\dagger}_{n+2}c_{n})\\ \label{eq14} &+&\Big[\delta+(-1)^{n}(h_{s}+\delta_{s})\Big](2c^{\dagger}_{n}c_{n}-1)\Big). 
\end{eqnarray} By introducing two independent fermions at each unit cell of the lattice, $c_{n}^{A}\equiv c_{2n-1}$ and $c_{n}^{B}\equiv c_{2n}$ and performing a Fourier transformation, one obtains {\small \begin{multline} {H}_{\text{env}}^{(\delta, \delta_{s})}=\sum_{k}\Big(\epsilon^{A}(k)c_{k}^{A\dagger}c_{k}^{A}+\epsilon^{B}(k)c_{k}^{B\dagger}c_{k}^{B} \\ +\epsilon^{AB}(k)(c_{k}^{A\dagger}c_{k}^{B} +c_{k}^{B\dagger}c_{k}^{A})\Big), \label{eq15} \end{multline} } where \begin{eqnarray} \label{epsilonA} \epsilon^{A}(k)&\!=\!&\frac{J_{3}}{2}\cos(k)\!-\!\delta\!+\!(h_s\!+\!\delta_{s}),\\ \label{epsilonB} \epsilon^{B}(k)&\!=\!&\frac{J_{3}}{2}\cos(k)\!-\!\delta-(h_{s}\!+\!\delta_{s}), \\ \label{epsilonAB} \epsilon^{AB}(k)&\!=\!&-J\cos(k/2), \end{eqnarray} and $k=4\pi n/N$ with $-N/4\le n \le N/4$ \cite{Divakaran2013}. Using the Bogoliubov-type transformation \begin{eqnarray} \nonumber c_{k}^{A}&=& \cos(\theta_{k}^{(\delta_{s})}/2) \alpha_{k}+\sin(\theta_{k}^{(\delta_{s})}/2) \beta_{k},\\ \nonumber c_{k}^{B}&=& -\sin(\theta_{k}^{(\delta_{s})}/2) \alpha_{k}+ \cos(\theta_{k}^{(\delta_{s})}/2) \beta_{k}, \end{eqnarray} where \begin{eqnarray} \nonumber \theta_{k}^{(\delta_{s})} =-\arctan(J\cos(k/2)/(h_{s}+\delta_{s})), \label{eq16} \end{eqnarray} we can finally write the Hamiltonian on diagonal form, ${H}_{\text{env}}^{(\delta, \delta_{s})}\!=\!\sum_{k}[\varepsilon^{\alpha}_{k}(\delta,\delta_{s})\alpha^{\dagger}_{k} \alpha_{k}+\varepsilon^{\beta}_{k}(\delta,\delta_{s})\beta^{\dagger}_{k}\beta_{k}]$, with \begin{eqnarray} \nonumber \varepsilon^{\alpha}_{k}(\delta,\delta_{s})\!=\!(J_{3}/2)\cos(k)\!-\!\delta\!-\!\sqrt{(h_{s}\!+\!\delta_{s})^{2}\!+\!J^{2}\cos^{2}(k/2)},\\ \nonumber \varepsilon^{\beta}_{k}(\delta,\delta_{s})\!=\!(J_{3}/2)\cos(k)\!-\!\delta\!+\!\sqrt{(h_{s}\!+\!\delta_{s})^{2}\!+\!J^{2}\cos^{2}(k/2)}. 
\end{eqnarray} The corresponding quasiparticle eigenstates are given by \begin{eqnarray} \label{eq17} \nonumber \alpha^{(\delta,\delta_{s})\dagger}_{k}|V\rangle&=&\cos(\theta_{k}^{(\delta_{s})}/2)c_{k}^{A\dagger}|0\rangle -\sin(\theta_{k}^{(\delta_{s})}/2)c_{k}^{B\dagger}|0\rangle,\\ \nonumber \beta^{(\delta,\delta_{s})\dagger}_{k}|V\rangle&=&\sin(\theta_{k}^{(\delta_{s})}/2)c_{k}^{A\dagger}|0\rangle +\cos(\theta_{k}^{(\delta_{s})}/2)c_{k}^{B\dagger}|0\rangle, \end{eqnarray} where $|V\rangle$ and $|0\rangle$ are vacuum states of the quasiparticle and fermion, respectively. Notably, the quasiparticle operators of the unperturbed Hamiltonian, ($\alpha^{(0)}_{k}, \beta^{(0)}_{k}$), can be expressed on closed form as a linear combination of those of the perturbed Hamiltonian, ($\alpha^{(\delta,\delta_{s})}_{k},\beta^{(\delta,\delta_{s})}_{k}$), \begin{eqnarray} \label{eq18} \nonumber \alpha^{(0)}_{k}&=&\cos(\eta_{k})\alpha^{(\delta,\delta_{s})}_{k} -\sin(\eta_{k})\beta^{(\delta,\delta_{s})}_{k},\\ \nonumber \beta^{(0)}_{k}&=&\sin(\eta_{k})\alpha^{(\delta,\delta_{s})}_{k} +\cos(\eta_{k})\beta^{(\delta,\delta_{s})}_{k} \end{eqnarray} where $2\eta_{k}=\theta_{k}^{(0)}-\theta_{k}^{(\delta_{s})}$. It follows that eigenstates of the unperturbed Hamiltonian can be written in terms of the eigenstates of the perturbed Hamiltonian as \begin{eqnarray} \label{eq19a} \alpha^{(0)\dagger}_{k}|V\rangle&\!=\!&\cos(\eta_{k})\alpha^{(\delta,\delta_{s})\dagger}_{k}|V\rangle \!-\!\sin(\eta_{k})\beta^{(\delta,\delta_{s})\dagger}_{k}|V\rangle,\\ \label{eq19b} \beta^{(0)\dagger}_{k}|V\rangle&\!=\!&\sin(\eta_{k})\alpha^{(\delta,\delta_{s})\dagger}_{k}|V\rangle \!+\!\cos(\eta_{k})\beta^{(\delta,\delta_{s})\dagger}_{k}|V\rangle. \end{eqnarray} The relations in Eqs. (\ref{eq19a}) and (\ref{eq19b}) will turn out to be useful when calculating the LE (next subsection). 
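As a quick numerical cross-check (with arbitrarily chosen test parameters), one can verify that the closed-form quasiparticle energies $\varepsilon^{\alpha}_{k}(\delta,\delta_{s})$ and $\varepsilon^{\beta}_{k}(\delta,\delta_{s})$ above are indeed the eigenvalues of the $2\times 2$ Bloch matrix built from $\epsilon^{A}(k)$, $\epsilon^{B}(k)$ and $\epsilon^{AB}(k)$ in Eq. (\ref{eq15}):

```python
import numpy as np

def bogoliubov_bands(k, J=1.0, J3=4.0, hs=0.3, delta=0.01, delta_s=0.01):
    # 2x2 Bloch matrix of Eq. (15) in the (c_k^A, c_k^B) basis
    eA = J3 / 2 * np.cos(k) - delta + (hs + delta_s)
    eB = J3 / 2 * np.cos(k) - delta - (hs + delta_s)
    eAB = -J * np.cos(k / 2)
    H = np.array([[eA, eAB], [eAB, eB]])
    return np.sort(np.linalg.eigvalsh(H))  # ascending: (eps^alpha, eps^beta)

def closed_form(k, J=1.0, J3=4.0, hs=0.3, delta=0.01, delta_s=0.01):
    # closed-form dispersions obtained from the Bogoliubov-type rotation
    root = np.sqrt((hs + delta_s) ** 2 + J ** 2 * np.cos(k / 2) ** 2)
    base = J3 / 2 * np.cos(k) - delta
    return np.array([base - root, base + root])

for k in np.linspace(-np.pi, np.pi, 11):
    assert np.allclose(bogoliubov_bands(k), closed_form(k))
```

The two branches differ only through the square root, which is why the perturbation $\delta$ merely shifts both bands rigidly while $\delta_s$ changes their splitting.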
But before turning to that task, let us briefly summarize what is known about the phase diagram of the (unperturbed) extended XY model in a transverse magnetic field. The problem has been investigated comprehensively in Ref. [\onlinecite{Titvinidze2003}], revealing three phases: one long-range-ordered antiferromagnetic phase and two distinct spin-liquid phases, denoted spin liquid (I) and spin liquid (II), respectively. The QPT between the antiferromagnetic phase and spin liquid (I) is a gapped-to-gapless transition which occurs at critical staggered fields $h_{s}^{c1}\!=\!\pm J_{3}/2$. The system is in the antiferromagnetic phase for $|h_{s}|\!\geq \!J_{3}/2$ where $\varepsilon^{\alpha}_{k}(0)\leqslant0$ and $\varepsilon^{\beta}_{k}(0)\!>\!0$ for all $k$ modes, and accordingly the ground state $|G_{\text{AFM}}\rangle$ takes the form $|G_{\text{AFM}}\rangle \sim \prod_k \alpha_k^{(0)\dagger}|V\rangle$ with energy $E_{\text{AFM}}=\sum_{k}\varepsilon^{\alpha}_{k}(0)$. When $\sqrt{J_{3}^{2}/4-1}<|h_{s}|<J_{3}/2$, the system enters spin liquid phase (I) where again $\varepsilon^{\alpha}_{k}(0)\leqslant0$ for all $k$ modes, but now with $\varepsilon^{\beta}_{k}(0)$ also being negative for {\em some} $k$ modes. Thus, the spin liquid (I) ground state takes the form $|G_{\text{(I)}}\rangle \sim \prod_{k,k^{\prime}} \alpha_k^{(0)\dagger} \beta_{k^{\prime}}^{(0)\dagger} |V\rangle$, with $k^{\prime}$ indexing those $\beta$-modes which have negative energies. At the critical points $h_{s}^{c2}=\pm\sqrt{J_{3}^{2}/4-1}$, a gapless-gapless QPT takes place between the spin liquid (I) and (II) phases, with a concurrent change of the Fermi surface topology \cite{Titvinidze2003}. In spin liquid phase (II), with $|h_{s}|\leq\sqrt{J_{3}^{2}/4-1}$, both $\varepsilon^{\alpha}_{k}(0)$ and $\varepsilon^{\beta}_{k}(0)$ have positive {\em and} negative branches, resulting in four Fermi points, two from each branch. 
Consequently, the spin liquid (II) ground state can be written as $|G_{\text{(II)}}\rangle \sim \prod_{k,k^{\prime}} \alpha_k^{(0)\dagger} \beta_{k^{\prime}}^{(0)\dagger} |V\rangle$, with $k$ and $k^{\prime}$ indexing the negative-energy $\alpha$- and $\beta$-modes, respectively. \begin{figure*} \centerline{\includegraphics[width=0.36\linewidth]{fig19.eps} \includegraphics[width=0.33\linewidth]{fig20.eps} \includegraphics[width=0.28\linewidth]{fig21.eps}} \caption{ (Color online) (a) Three-dimensional plot of the LE in Eq. (\ref{XYLoschmidt}) as function of time $t$ and staggered magnetic field $h_s$ for $J=1$, $J_{3}=4$, $\delta_{s}=0.01$, and $N=1200$. (b) The oscillation amplitude $A_{k}$ in the mode decomposition of the LE, Eq. (\ref{eq20}), as function of crystal momentum $k$ and staggered magnetic field $h_s$, with the same parameter values as in (a). (c) The LE, Eq. (\ref{XYLoschmidt}), for different system sizes $N$ versus time $t$ for $J_3=4$, $h_{s}=0.0$ and $\delta_s=0.1$.} \label{fig7} \end{figure*} \subsection{Loschmidt echo: quantum-classical transitions at noncritical points \label{QCT}} We now turn to the calculation of the LE. To be specific we may assume that the environment is initially prepared in the antiferromagnetic ground state, with parameters chosen to put it close to the phase transition to the spin liquid phase (I), \begin{equation} \label{AFM} |\phi_{\text{env}}(0)\rangle=\prod_{-\pi \le k \le \pi}\alpha^{(0)\dagger}_{k}|V\rangle. \end{equation} This choice of initial environmental state allows us to probe the LE at criticality by using ${H}_{\text{env}}^{(\delta, \delta_{s})}$ to do a quantum quench to one of the critical points $h_{s}^{c1}\!=\!\pm J_{3}/2$. 
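For orientation, the phase boundaries quoted above can be collected in a small classifier; the treatment of the boundary values themselves (closed versus open inequalities) follows the text and is otherwise a convention:

```python
import math

def phase(hs, J3):
    # Phase of the unperturbed extended XY chain (J = 1 units), using the
    # boundaries quoted from the text (assumes J3 >= 2 so h_s^{c2} is real):
    #   AFM             for |h_s| >= J3/2,
    #   spin liquid (I) for sqrt(J3^2/4 - 1) < |h_s| < J3/2,
    #   spin liquid (II) for |h_s| <= sqrt(J3^2/4 - 1).
    hc1 = J3 / 2.0
    hc2 = math.sqrt(J3 ** 2 / 4.0 - 1.0)
    if abs(hs) >= hc1:
        return "AFM"
    if abs(hs) > hc2:
        return "spin liquid (I)"
    return "spin liquid (II)"

# J3 = 4 (the value used below): h_s^{c1} = +/-2 and h_s^{c2} = +/-sqrt(3)
assert phase(2.5, 4.0) == "AFM"
assert phase(1.9, 4.0) == "spin liquid (I)"
assert phase(1.0, 4.0) == "spin liquid (II)"
```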
To explore the full spin liquid (I) phase away from criticality one instead chooses the ground state \begin{equation} \label{spinliquidI} |\phi_{\text{env}}(0)\rangle=\prod_{k,k^{\prime}}\alpha^{(0)\dagger}_{k} \beta^{(0)\dagger}_{k^{\prime}}|V\rangle, \end{equation} with $-\pi \le k \le \pi$ and $ \varepsilon^{\beta}_{k}(0) \le 0$, as initial environmental state. Injecting Eqs. (\ref{eq19a}) and (\ref{eq19b}) into (\ref{AFM}) or (\ref{spinliquidI}) and using the expression for the LE, \begin{equation} \label{XYLoschmidt} {\cal L}(t) = |\langle\phi_{\text{env}}(0)|\exp(i {H}_{\text{env}}^{(\delta, \delta_{s})}t)\exp(-i {H}_{\text{env}}t)|\phi_{\text{env}}(0)\rangle|^2, \end{equation} it is straightforward to show that in both cases the LE reduces to the form \begin{eqnarray} \label{eq20} {\cal L}(t)=\prod_{-\pi \le k \le \pi}|1-A_{k}\sin^{2}(\frac{\Delta\varepsilon_{k}t}{2})| \end{eqnarray} where \begin{eqnarray} \nonumber A_{k}&&=\sin^{2}(2\eta_{k}), \\ \Delta\varepsilon_{k}&&=2\sqrt{(h_{s}+\delta_{s})^{2}+J^{2}\cos^{2}(k/2)}. \label{eq21} \end{eqnarray} By inspection, neither $A_k$ nor $\Delta\varepsilon_{k}$ depends on $\delta$. It follows from (\ref{eq20}) that in the case when the qubit-environment interaction only contains a uniform coupling $\sim \delta$, with $\delta_s =0$, ${\cal L}(t) = 1$ independent of the strength of $\delta$. As an upshot, the state of a qubit embedded in a spin environment here described by an extended XY model in a transverse staggered magnetic field does {\em not} decohere as long as the staggered interaction component vanishes, regardless of the strength of the uniform qubit-environment interaction. This result, similarly uncovered for a central spin model with the qubit coupled to an ordinary XY chain \cite{Yuan2007a}, may suggest practical strategies for protecting qubits in applications for quantum information technologies. In FIG. 
\ref{fig7}(a) we have plotted the LE of the environment perturbed by the qubit with a staggered interaction $\sim \delta_s = 0.01$ as a function of staggered magnetic field $h_{s}$ and time $t$. As seen in the figure, the LE displays neither enhanced decays nor revival structures at the critical points $h_{s}^{c1}=\pm2$ or $h_{s}^{c2}=\pm\sqrt{3}$ of the environment, in contrast to what has been reported in previous works \cite{Quan2006,Yuan2007a}. Instead it shows an accelerated decay at $h_{s}=0$. The point $h_{s}=0$, while being a critical point of the extended XY model in the absence of three-site spin interaction (i.e. with $J_{3}=0$ in (\ref{eq11})), is noncritical for any nonzero value of $J_{3}$. {\em But how can the LE exhibit an accelerated decay at a non-critical value of the staggered field? And why does the LE not exhibit an accelerated decay when the staggered field is critical?} To answer these questions we emulate our analysis from Sec. III. A numerical check confirms that the absence of an accelerated decay of the LE along the critical lines $h_s \!=\! \pm J_3/2$ comes about because of the small values of the oscillation amplitudes $A_k$ in (\ref{eq20}) for all $k$. In exact analogy to the critical QCC away from the IP, the smallness of the $A_k$-amplitudes is a consequence of the fact that the quasiparticles which control the LE remain gapped at criticality. On the contrary, the accelerated decays of the LE which are manifest in both environmental models $-$ the QCC and the extended XY model $-$ are correlated with large oscillation amplitudes in the LE mode decompositions, Eqs. (\ref{eq8}) and (\ref{eq20}), respectively. As we have seen, large oscillation amplitudes are favored by the presence of easily excited quasiparticles. Importantly, not only may a quantum phase transition not favor LE-controlling quasiparticle excitations, but such excitations may instead appear {\em within} a stable phase, such as the type-I spin liquid phase of the extended XY model. 
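Eq. (\ref{eq20}) is simple enough to evaluate directly. The sketch below is our own minimal implementation (arctan2 is used so that the $h_{s}\rightarrow 0$ limit of the Bogoliubov angle is handled gracefully); it reproduces both observations: perfect coherence for $\delta_{s}=0$ at arbitrary $\delta$, and a rapid decay at the noncritical point $h_{s}=0$ when $\delta_{s}\neq 0$:

```python
import numpy as np

def loschmidt_xy(t, hs, delta_s, J=1.0, N=1200):
    # LE of Eq. (20): L(t) = prod_k |1 - A_k sin^2(d_eps_k t / 2)|,
    # with A_k = sin^2(2 eta_k) and 2 eta_k = theta_k^(0) - theta_k^(delta_s).
    k = 4.0 * np.pi * np.arange(-N // 4, N // 4 + 1) / N
    # arctan2 handles the hs -> 0 limit of the Bogoliubov angle
    th0 = -np.arctan2(J * np.cos(k / 2), hs)
    th1 = -np.arctan2(J * np.cos(k / 2), hs + delta_s)
    A = np.sin(th0 - th1) ** 2
    d_eps = 2.0 * np.sqrt((hs + delta_s) ** 2 + J ** 2 * np.cos(k / 2) ** 2)
    return np.prod(np.abs(1.0 - A * np.sin(d_eps * t / 2.0) ** 2))

# a purely uniform coupling (delta_s = 0) leaves the LE at unity ...
assert loschmidt_xy(7.0, 0.3, 0.0) == 1.0
# ... while a staggered coupling gives a fast decay at the noncritical hs = 0
assert loschmidt_xy(5.0, 0.0, 0.1) < 0.9
```

At $h_s=0$ the amplitude $A_k=\delta_s^2/(\delta_s^2+J^2\cos^2(k/2))$ approaches unity near the zone boundary, consistent with the structure shown in FIG. \ref{fig7}(b).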
This can be confirmed numerically. In FIG. (\ref{fig7})(b) we display the oscillation amplitude $A_k$ versus $k$ and $h_{s}$, with Hamiltonian parameters $J_{3}=4$, $\delta_{s}=0.01$, and $N=1200$. It is clearly seen that $A_k$ vanishes everywhere except in the neighborhood of $h_{s}=0$ at the Brillouin zone boundary, where the extended XY model becomes massless, with propagating quasiparticles \cite{Titvinidze2003}. In FIG. \ref{fig7}(c) we have computed the time-dependence of the LE for different system sizes, verifying that the LE revivals get attenuated, with longer periods, as the system gets larger. \section{Summary} Based on two case studies of a qubit coupled to an interacting spin environment -- with the environment modeled by a quantum compass chain or an extended XY model in a transverse staggered magnetic field -- we arrive at the conclusion that the presence of a quantum phase transition is neither a sufficient nor a necessary condition for an accelerated decoherence rate of the qubit. By examining how the eigenstates of the models imprint the Loschmidt echo -- and by that the decay rate of the qubit -- we find that what {\em does} matter is the availability of propagating quasiparticles which couple to the qubit via a back action (as signaled by their having an impact on the Loschmidt echo). While a quantum phase transition generically supports massless excitations, our case study of the QCC reveals that these excitations may not necessarily couple to the qubit, and therefore do not influence its decoherence rate. {\em This observation invalidates the conventional view that the closeness of an environment to a quantum phase transition is inherently linked to an enhanced decoherence of a system embedded in it.} Taking the extended XY model as environmental model, the quasiparticles in one of its spin-liquid phases are found to couple to the qubit. 
This provides an example that a stable massless phase can act as a source of accelerated decoherence. Our findings may prove useful when developing strategies to reduce decoherence in quantum devices with interacting qubits. \begin{acknowledgments} This research was supported by the Swedish Research Council (Grant No. 621-2014-5972). \\ \\ \end{acknowledgments} \section{Appendix} The amplitudes in the mode decomposition of the Loschmidt echo, Eq. (\ref{eq9}), depend on the state overlaps $F_{m,k}=|\langle\psi_{m,k}(\delta)|\psi_{0,k}(0)\rangle|^{2}$ $(m=0,...,7)$ as \begin{eqnarray} \label{eqAC2} \nonumber A_{0,k}&=&4F_{0,k}F_{7,k},\\ \nonumber B_{0,k}&=&4(F_{2,k}+F_{3,k}+F_{4,k}+F_{5,k})(F_{0,k}+F_{7,k}),\\ \nonumber A_{1,k}&=&4F_{1,k}F_{6,k},\\ \nonumber B_{1,k}&=&4(F_{2,k}+F_{3,k}+F_{4,k}+F_{5,k})(F_{1,k}+F_{6,k}),\\ \nonumber C_{k}&=&4(F_{0,k}F_{1,k}+F_{6,k}F_{7,k}),\\ \nonumber D_{k}&=&4(F_{0,k}F_{6,k}+F_{1,k}F_{7,k}). \end{eqnarray} Here $|\psi_{m,k}(\delta)\rangle$ are eigenstates of the Hamiltonian $H_k$ in Eq. (\ref{commH}) with $h=\delta$. At the critical line $\theta_c= \pi/2$ in $(\theta,J_e/J_o)$-space, the LE reduces to the simple form \begin{eqnarray} \nonumber {\cal L}(\theta_{1},\theta_{c},t)=\!\prod_{0 \le k \le \pi}|1\!-\!A_{k}\sin^{2}(\varepsilon_{k}^{1}(\delta)t)\!-\!B_{k}\sin^{2}(\frac{\varepsilon_{k}^{1}(\delta)t}{2})|, \end{eqnarray} \\ where $A_{k}=4(F_{0,k}+F_{1,k})(F_{6,k}+F_{7,k})$ and $B_{k}=4(F_{0,k}+F_{1,k}+F_{6,k}+F_{7,k})(F_{2,k}+F_{3,k}+F_{4,k}+F_{5,k})$, and where $\varepsilon_{k}^{1}(\delta)$ are energies of the quasiparticles that fill up the lowest band of $H_{\text{env}}^{(\delta)}$ in Eq. (\ref{pertqcc}).
# Help needed.....

Could you please suggest the best books for algebra, calculus, number theory, geometry, and electricity and magnetism? I am extremely bad at problem solving and want to start from the basics, so please suggest some good books, especially for JEE preparation. Please help me: I always try to solve problems but never get correct answers because I lack the basics. Although I am quite good at algebra, I still need a lot of practice in topics like polynomials, inequalities, complex numbers and advanced manipulations. I don't even know the ABCD of geometry and number theory. I can do simple integration and differentiation but am unable to solve problems. I am also helpless with mechanics and EM.

Note by Aman Sharma — 5 years, 9 months ago

Check out the following link; hopefully it will be helpful in selecting JEE exam preparation books for math: http://www.kaysonseducation.co.in/iitjee-math-books

— 5 years, 8 months ago

Not really, but if someone gets good information or gets what he/she wants, then it is cool. If you don't like this, remove the comment.

— 5 years, 8 months ago

"Problem solving techniques do not develop in a vacuum."

Learning a technique as an isolated fact is useless: "When you encounter such a problem, do such and such, apply such and such..." With that attitude I think it is very difficult to solve problems of a higher level of difficulty, especially on Brilliant.

Focus on concepts.

Mathematics is hard, so it takes enormous time to learn. For example, after I have completed trigonometry I go on to calculus; then if I encounter a hard trigonometry problem, say on Brilliant, it's very possible that I remember only a few techniques of trigonometry. I need to revise it again, and after some time, again.

The point I wanted to convey is that many different books will give you many different techniques. In other words, to know many techniques you have to study many books. That is not something to appreciate when you have very little time. You said you are preparing for JEE; you must be in class 12, so you really don't have much time — not enough to stick to the idea of too many books. I advise you to learn the basics so that you won't forget them easily. Concepts (the very basic ones) are hard to forget, and they are what I have when I am stuck. So while solving, I have concepts as my weapons rather than a technique. Another example: in coordinate geometry I almost always forget the formula for the bisector of an angle. What should I do then? This is what I did: if you draw two intersecting lines and then draw the bisector of any of the 4 angles, then, using the remark that every point on the bisector is at the same distance from both arms of the angle, you can write the formula at once using the distance formula. Now, I knew the distance formula, since it is a basic. I forgot something and couldn't recall it, but I found it by using the concept of "bisector" itself. I know my comment is too long, sorry for that, but it's my genuine experience that I am sharing here.

On books: for JEE I can recommend books, not beyond that. For mathematics you have to categorize books into basic and advanced: basic for learning something new bit by bit, i.e. for beginners, and advanced for learning application to harder problems. For basics read NCERT or K.C. Sinha (don't go for R.D. Sharma). NCERT is really good for basics and covers theory that is very useful and simple to understand. The books by K.C. Sinha are a boon for self-studious persons; they connect board level to competitive level in the best way possible. Do either, but don't go for R.D. Sharma if you really want to raise your level in problem solving so as to compete on Brilliant; I personally recommend K.C. Sinha. You said you are good in algebra: go to the nearest book shop, ask for Algebra by K.C. Sinha, go through the chapters of your interest and examine whether they have appropriate material. Go through the exercises and skim one or two problems. Judge the book yourself. For the advanced stage you will need "problem books". Take Problems Plus in IIT Mathematics by Asit Das Gupta. This book covers many techniques, sufficient to crack JEE and to solve many Level 3–4 problems on Brilliant; its problems are in turn at Level 3–4 by Brilliant standards, so you will be prepared for Brilliant problems. Learn all the techniques from it. It is, however, hard to learn those techniques if you haven't acquired the basics.

For physics read Halliday, Resnick (and Krane) — not Halliday, Resnick and Walker — which is an all-time classic, and H.C. Verma for basics. Read the theory from both; you may solve problems from Halliday, but you HAVE to solve the problems from H.C.V. Many would recommend D.C. Pandey, but go for H.C.V. Here is why: it has many implicit benefits that no other physics book will provide you at your level. The problems are not very tough to solve, and EACH problem will teach you something new! You keep revising chapters 1, 2 and 3 while doing any later chapter like 4, 5 or even 13! In that way you keep revising a chapter on work and energy even while you are doing rotational mechanics. You encounter mixed concepts in problems. You get to apply something "mathematical" in problems, maybe some useful and frequently used theorem; that way you learn maths while doing physics. The level of each problem is equivalent to Level 3 on Brilliant. After solving H.C.V. you may or may not solve Halliday, but go for harder books. For the advanced stage, get any problem book like Physics MCQ by Deb Mukherjee or I.E. Irodov, or directly solve past IIT-JEE problems. After doing H.C.V., believe me, you will be able to solve IIT-JEE-level problems in physics as well as I.E. Irodov — maybe not all, but many. But after D.C. Pandey — well, you may ask those who use it whether, after doing D.C. Pandey, they were able to solve higher-level problems in physics.

Please don't learn one specific technique. There is a variety of problems; how many techniques will you learn? All good problem solvers learn the basics thoroughly and do many, many problems of mixed type at different levels of difficulty. Memory plays an essential part in problem solving: you have to learn in such a way that you can recall various INTERCONNECTED facts effectively. Brilliant problems come in the advanced category; they are for honing your ability at application, not for a beginner's learning. Attempt Brilliant after the "basics" phase.

And to learn problem solving explicitly, go for any book that suits your way and style of solving problems, like some of those mentioned by @Souryajit Roy.

Best of luck on your journey, friend ;).

— 5 years, 9 months ago

Thanks a ton for replying, brother. Well, I am not preparing for JEE; I just want to be good at problem solving. Your comment is an eye opener.

— 5 years, 9 months ago

Oh. Then try to get books like Problem-Solving Strategies by Arthur Engel, The Art and Craft of Problem Solving by Paul Zeitz, etc.

— 5 years, 9 months ago

Which class are you in? When did you first start studying math and physics?

— 5 years, 9 months ago

I have passed 12th and am preparing for entrance exams. I wasn't good at either in the beginning, but I am gaining speed now.

— 5 years, 9 months ago

So I finally know what your real name is. How are your studies going? Did you qualify for JEE?

— 5 years, 1 month ago

Hello @Aman Sharma :) How are you doing? No, I didn't qualify for JEE.

— 5 years, 1 month ago

Have you joined a college, or are you still preparing for competitive exams?

— 5 years, 1 month ago

Are you preparing for JEE?

— 5 years, 9 months ago

Long story :3

I am self-preparing, so IIT... very little chance. I tried past papers and could solve some problems, but I am extremely weak in chemistry; physics and maths alone won't help me. Still, I may crack it — may get a rank within 9000 :D. Whatever; my friends have discouraged me to the extreme. They said that I won't be able to do anything on my own. That is sad, and I am less motivated nowadays.

But for JEE Mains I am in a much better position, because it is board-based.

I know about the books I mentioned because I use them; I post problems from them. Let me share some links with you.

These problems are from H.C.V. that I found on Brilliant: "Love Cricket. Love Physics", "Easy? Car? Kinematics? Trouble?", "So throw it again AND again", "Really!" etc. And these were posted by me: "Electric Potential Energy" and "Debates".

This one is from Asit Das Gupta's Problems Plus in IIT (I call the book ADG, after the author's initials): "Find the 3 ... whatever you are to find". That problem was not leveled and not many people tried it, so I stopped posting from ADG. Level 1 and Level 2 problems gain more heat on Brilliant, e.g. "Circle. Yeah!" and "An algebra problem by Math Philic".

Now I post such Level 1 and Level 2 problems, for example "real ly ?" — which is actually a Level 2 problem.

Interesting problems like "RockIt" are not much liked. Which is not good.

— 5 years, 9 months ago

I have tried most of your problems and they are quite interesting; however, I can't solve all of them. Also, I think it is a myth that it is not possible to crack JEE Advanced without coaching. If one starts self-studying for JEE from 10th class, I think it is not too hard to crack this exam.

— 5 years, 9 months ago

@megh choksi, you have recently been posting and solving awesome problems; please suggest me some books.

— 5 years, 9 months ago

I can't suggest a book; just try more and more problems. But see this (by the great Arvind Gupta). By the way, you are too good at solving math, @Aman Sharma.

— 5 years, 9 months ago

Hi friend, did you take the JEE exam?

— 5 years, 1 month ago

Lol, I don't think I am good; I am still struggling with maths. Thanks for that link, it'll help me a lot.

— 5 years, 9 months ago

@Ronak Agarwal, I am a fan of your problem-solving skills. What book on mechanics and EM would you suggest for a beginner?

— 5 years, 9 months ago

I may not be a good person to suggest books, as I myself haven't tried any good book for theory. I only did I.E. Irodov for the application part; there was also enough material for practising questions from the coaching centre in which I was enrolled. @Aman Sharma

— 5 years, 9 months ago

364/504

— 5 years, 1 month ago

Wow, you nailed it. Which institution have you joined? Also, what is your rank?

— 5 years, 1 month ago

OK, thanks for replying. I have heard that the book Problems in General Physics is extremely hard and is not suggested for beginners. Is that true?

— 5 years, 9 months ago

Yes. It has problems equivalent to the level of IIT-JEE physics, so you can imagine the level of difficulty.

— 5 years, 9 months ago

It is not extremely hard, but yes, it is not suggested for beginners; you can buy H.C. Verma.

— 5 years, 9 months ago

Good.

— 5 years, 9 months ago

D.C. Pandey for electricity and magnetism.

— 5 years, 9 months ago

OK, thanks; going to include this book in my list as well.

— 5 years, 9 months ago

Try these math books:

Algebra: Higher Algebra by Barnard and Child; Problems in Algebra by V.A. Krechmar

Number theory: Elementary Number Theory by David M. Burton; 250 Problems in Elementary Number Theory by W. Sierpinski

Combinatorics: Introductory Combinatorics by Richard A. Brualdi

Calculus: Differential Calculus by Maity and Ghosh; Problems in Calculus of One Variable by I.A. Maron; Differential and Integral Calculus by R. Courant

Geometry: Geometry Revisited by Coxeter and Greitzer; Problems in Plane Geometry by Viktor Prasolov; Plane Co-ordinate Geometry by S.L. Loney

Also try these general textbooks: An Excursion in Mathematics; Pre-College Mathematics.

To enhance your problem-solving skills: Problem-Solving Strategies by Arthur Engel; The Art and Craft of Problem Solving by Paul Zeitz; Winning Solutions by Edward Lozansky and Cecil Rousseau; Mathematical Circles by Genkin and Itenberg; Challenging Mathematical Problems with Elementary Solutions (Part I) by A.M. Yaglom and I.M. Yaglom; Problem Primer for the Olympiad by Pranesachar, Venkatachala and Yogananda; Problem-Solving Through Problems by L.C. Larson.

Another suggestion: whenever you face difficulty, seek help from brilliant.org and try to SOLVE AS MANY PROBLEMS ON BRILLIANT.ORG as you can (the best way to exercise your problem-solving skills). Surf the internet and learn new theorems and facts in maths. Also, don't try to solve all the problems in these books (which is clearly impossible); just learn the basic facts and the problems you like (the tricky ones). Think as hard as you can and don't peek at solutions immediately. Also solve as many competition problems as you can (RMO, INMO, IMO, USAMO, VMO, etc.).

— 5 years, 9 months ago

Thank you so much, brother, for your kind help. Also, I have heard a lot about a book on algebra by Hall and Knight (or something similar); do you know about this book?

— 5 years, 9 months ago

Yup, but that one is much easier and simpler; if you are good in algebra you should start from Barnard and Child, which contains many more theorems and techniques.

— 5 years, 9 months ago

OK. Also, what would be the best book for mechanics and for electricity and magnetism? (Sorry to disturb you.) And does the book Problem-Solving Strategies contain only problems, or techniques as well?

— 5 years, 9 months ago

For mechanics you can try Schaum's Theory and Problems of Theoretical Mechanics by Murray R. Spiegel; I don't know of any individual textbook on electricity and magnetism (you may try Resnick and Halliday in this case). The book Problem-Solving Strategies contains techniques for solving problems, worked-out problems and exercise problems.

— 5 years, 9 months ago

Thank you so much. I am going to buy the following books:

(1) Higher Algebra by Barnard and Child
(2) Geometry Revisited
(3) Problem-Solving Strategies
(4) Elementary Number Theory
(5) Halliday and Resnick

— 5 years, 9 months ago

In which class are you, @Aman Sharma?

— 5 years, 9 months ago

I am in 12th class, @Sandeep Rathod.

— 5 years, 9 months ago

If you want to clear up your concepts in physics and want to solve practical, real-life problems, solve the Halliday–Resnick book, and D.C. Pandey is a good practice book on physics for JEE preparation. By the way, I am solving Halliday–Resnick; it is my favorite book for the application of physics in real life.

— 5 years, 9 months ago

I need a book that focuses completely on problem-solving techniques in mechanics and EM, from basic to advanced.

— 5 years, 9 months ago

Go for Beer and Johnston; it covers all the fundamentals of EM!

— 5 years, 2 months ago

OK, thank you so much for replying; I am going to buy these books now.

— 5 years, 9 months ago

Do Halliday–Resnick.

— 5 years, 9 months ago

Is that a book? Can you please give more details?

— 5 years, 9 months ago
\section{General introduction} The magnetic fields responsible for solar activity phenomena emerge into the solar atmosphere in a concentrated form, in active regions (ARs). In each solar cycle thousands of active regions are listed in the official NOAA database and many more small active regions are missed if their heliographic position and lifetime do not render them directly observable on the visible hemisphere. Similarly, no detailed catalogues exist for the ubiquitous ephemeral active regions, of even smaller size. The emergence of this large number of (typically) bipolar magnetic regions obeys some well known statistical regularities like Hale's polarity rules and Joy's law. As a consequence, upon their decay by turbulent diffusion their remains contribute to the large-scale ordered photospheric magnetic field, including the Sun's global axial dipole field {(the so-called Babcock--Leighton mechanism)}. Active regions emerging in a given solar cycle contribute on average to the global dipole with a sign opposite to the preexisting field at the start of the cycle, and these contributions from active regions add up until, some time in the middle of the cycle, the global field reverses and a new cycle starts at the Sun's poles, still overlapping with the ongoing cycle at low latitudes. Flux emergence is thus an important element of the solar dynamo mechanism sustaining the periodically overturning solar magnetic field. The inherently stochastic nature of flux emergence introduces random fluctuations into this statistically ordered process. In recent years it has been realized that the random nature of flux emergence can give rise to significant deviations of the solar dipole moment built up during a cycle from its expected mean value: in some cycles a small number of so-called ``rogue'' active regions (\citealt{Petrovay:IAUS340}) with atypical properties may lead to a major, unexpected change in the level of activity. 
The unexpected change in the level of activity from solar cycle 23 to 24 has been interpreted as the result of a few such abnormal regions by \cite{Jiang+:cyc24hindcast}, while in a dynamo model \cite{Nagy+:rogue} found that in extreme cases even a single rogue AR can trigger a grand minimum. An open question is how to identify the [candidate] rogue active regions, and how many such regions need to be considered in individual detail in models aiming to reproduce the evolution of the Sun's large-scale field. It is not a priori clear that this number is low, so the question we pose in this paper is whether the stochastic effects in cycle-to-cycle variation originating in the random nature of the flux emergence process are dominated by a few ``rogue'' AR in each cycle with individually large and unusual contributions to the dipole moment, or by the ``fluctuation background'' due to numerous other AR with individually much lower deviations from the expected dipole contribution. While the recent studies cited above stressed the importance of a few large rogue AR, the importance of the fluctuation background cannot be discarded out of hand. The issue has obvious practical significance from the point of view of solar cycle prediction: it would be useful to know how many (and exactly which) observed individual AR need to be assimilated into a model for successful forecasts. A related investigation was recently carried out by \cite{Whitbread+:dipcontr}. In that work ARs were ordered by their individual contributions to the global axial dipole moment: it was found that, far from being dominated by a few ARs with the largest contributions, the global dipole moment built up during a cycle cannot be reproduced without taking into account a large number (hundreds) of ARs. In another recent work \cite{Cameron+:toroidal} found that even ephemeral active regions contribute to the net toroidal flux loss from the Sun by an amount comparable to the contribution of large active regions.
By analogy, this opens the possibility that ephemeral ARs may also contribute to the global poloidal field by a non-negligible amount, though statistical studies of the orientation of ephemeral ARs are unfortunately rare (cf.~\citealt{Tlatov_:ER}). While these interesting results shed new light on the overall role of flux emergence in smaller bipoles in the global dynamo, we think that from the point of view of solar cycle prediction, instead of the dipole moment contribution per se, a more relevant control parameter is the {\it deviation} of the dipole contribution from the case with no random fluctuations in flux emergence, i.e. the ``degree of rogueness'' (DoR). We therefore set out to systematically study the effect of individual AR on the subsequent course of solar activity using the DoR as an ordering parameter. The question immediately arises of how this DoR should be defined. The approach we take in this work assumes that the effect of random fluctuations manifests itself primarily in the properties of individual active regions, rather than in their spatiotemporal distribution. The DoR based on individual AR properties will be called ``active region degree of rogueness'' --- $\mbox{\sl ARDoR}$ for brevity. The structure of this paper is as follows. Section 2 introduces and discusses our definition of $\mbox{\sl ARDoR}$. In Section 3, after recalling salient features of the 2$\times$2D dynamo model, we use statistics based on this model to answer the central question of this paper. Conclusions are drawn in Section 4. \section{Introducing $\mbox{\sl ARDoR}$} The Sun's axial dipolar moment is expressed as \begin{equation} D(t) = \frac32 \int_{-\pi/2}^{\pi/2} B(\lambda,t)\sin\lambda\cos\lambda\, \mathrm{d}\lambda , \label{eq:dipmom} \end{equation} where $B$ is the azimuthal average of the large scale photospheric magnetic field (assumed to be radial) while $\lambda$ is heliographic latitude.
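For a concrete sense of Eq.~(\ref{eq:dipmom}), the integral can be evaluated by simple quadrature once an azimuthally averaged field profile $B(\lambda)$ is given. A minimal Python sketch (illustrative only; the function name and test profiles are our assumptions, not part of the model):

```python
import numpy as np

def axial_dipole_moment(B_of_lambda, n=2001):
    """D = (3/2) * integral_{-pi/2}^{pi/2} B(lam) sin(lam) cos(lam) dlam,
    evaluated with the trapezoidal rule on a uniform latitude grid."""
    lam = np.linspace(-np.pi / 2.0, np.pi / 2.0, n)
    g = B_of_lambda(lam) * np.sin(lam) * np.cos(lam)
    # trapezoidal rule, written out explicitly
    return 1.5 * 0.5 * np.sum((g[1:] + g[:-1]) * np.diff(lam))
```

For the dipole profile $B(\lambda)=\sin\lambda$ this gives $D=1$, while any profile symmetric about the equator contributes nothing, since the integrand is then odd in $\lambda$.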
The value $D_n$ of this dipole moment at the start of cycle $n$ is widely considered the best physics-based precursor of the amplitude of the incipient cycle $n$ (\citealt{Petrovay:LRSP2}). Understanding intercycle variations in solar activity and potentially extending the scope of the prediction calls for an effective and robust method to compute $D_n$ from (often limited) observational data on the previous course of solar activity. In the first paper of this series, \cite{Petrovay+:algebraic1} (hereafter Paper 1) we suggested a simplified approach to the computation of the evolution of the global axial dipole moment of the Sun. Instead of solving the partial differential equation normally used for modeling surface magnetic flux transport (SFT) processes on the Sun, this method simply represents the dipole moment by an algebraic sum: \begin{equation} \Delta D_n\equiv D_{n+1} - D_n = \sum_{i=1}^{N_{\mathrm{tot}}} \delta D_{U,i} = \sum_{i=1}^{N_{\mathrm{tot}}} \delta D_{\infty,i} \, e^{\fix{(t_i-t_{n+1})}/\tau} = \sum_{i=1}^{N_{\mathrm{tot}}} {f_{\infty,i}}\, \delta D_{1,i} \, e^{\fix{(t_i-t_{n+1})}/\tau} , \label{eq:cycledipmom} \end{equation} where $i$ indexes the active regions in a cycle, $N_\mathrm{tot}$ is the total number of ARs in the cycle, $\delta D_1$ is the {\it initial} contribution of an active region to the global dipole moment, $\delta D_U$ is its {\it ultimate} contribution at the end of a cycle and $\tau\le\infty$ is the assumed timescale of magnetic field decay due to radial diffusion. Furthermore, \begin{equation} f_\infty=\delta D_\infty/\delta D_1 , \end{equation} where $\delta D_\infty$ is the {\it asymptotic} contribution of the same AR in a SFT model with $\tau=\infty$, once the meridional flow has concentrated the relic magnetic flux from the AR to two opposite polarity patches at the two poles. (See Paper 1 for further explanations.)
In this approach, ARs are assumed to be represented by simple bipoles at the time of their introduction into the model, so their initial dipole moment contribution is given by \begin{equation} \delta D_1=\frac 3{4\pi R^2}\, \Phi\, d_\lambda\cos\lambda_0 , \end{equation} where $\Phi$ is the magnetic flux in the northern polarity patch, $d_\lambda$ is the latitudinal separation of the polarities\footnote{$d_\lambda=d\sin\alpha$ where $d$ is the full angular polarity separation on the solar surface and $\alpha$ is the tilt angle of the bipole axis relative to the east--west direction, the sign of $\alpha$ being negative for bipoles disobeying Hale's polarity rules.}, $\lambda_0$ is the initial latitude of [the center of] the bipole and $R$ is the radius of the Sun. As demonstrated in Paper 1, $f_\infty$ is in turn given by \begin{equation} f_\infty= \frac a{\lambda_R} \exp \left(\frac{-\lambda_0^2}{2\lambda_R^2}\right) . \end{equation} It was numerically demonstrated in Paper 1 that this Gaussian form holds quite generally, irrespective of the details of the SFT model; its parameters ($\lambda_R$ and $a$) have only a very weak dependence on the assumed form of the meridional flow profile (at least for profiles that are closer to observations), and their values depend only on a single combination of SFT model parameters. The values of $\lambda_R$ and $a$ for a given SFT model may be determined by interpolation of the numerical results, as presented in Paper 1. The terms of the sum (\ref{eq:cycledipmom}) represent the ultimate dipole contributions $\delta D_U$ of individual active regions in a cycle at the solar minimum ending that cycle. In principle each and every active region should be represented by an explicit term in the sum.
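The three ingredients above --- $\delta D_1$, $f_\infty$ and the decay factor --- combine into one term of the algebraic sum per active region. A hedged Python sketch of this bookkeeping (function names, argument conventions and the $R=1$ unit choice are ours; all angles in radians):

```python
import numpy as np

def delta_D1(phi, d_lambda, lambda0, R=1.0):
    """Initial dipole contribution: 3 Phi d_lambda cos(lambda0) / (4 pi R^2)."""
    return 3.0 * phi * d_lambda * np.cos(lambda0) / (4.0 * np.pi * R ** 2)

def f_infinity(lambda0, a, lambda_R):
    """Asymptotic amplification factor: (a / lambda_R) exp(-lambda0^2 / (2 lambda_R^2))."""
    return (a / lambda_R) * np.exp(-lambda0 ** 2 / (2.0 * lambda_R ** 2))

def delta_DU(phi, d_lambda, lambda0, t_i, t_end, a, lambda_R, tau=np.inf):
    """Ultimate end-of-cycle contribution: one term of the algebraic sum,
    delta_DU = f_inf * delta_D1 * exp((t_i - t_end) / tau)."""
    decay = 1.0 if np.isinf(tau) else np.exp((t_i - t_end) / tau)
    return f_infinity(lambda0, a, lambda_R) * delta_D1(phi, d_lambda, lambda0) * decay
```

With $\lambda_R=13.6^\circ$ and $a/\lambda_R=3.75$ (the values used in Sect.~3), an AR emerging at the equator has $f_\infty=3.75$, and the factor falls off rapidly with emergence latitude.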
Such a case was indeed considered in Paper 1 in a comparison with a run result from the 2$\times$2D dynamo, and it was found that the algebraic method returns the total change in dipole moment during a cycle quite accurately in the overwhelming majority of cycles. When it comes to applying the method to the real Sun, however, the need to include each bipolar region in the source becomes quite a nuisance. As discussed above in the Introduction, data for individual active regions are often missing for the smaller ARs, while for the larger, more complex ARs, representing them by an instantaneously introduced bipole is nontrivial. As recently pointed out by \cite{Iijima+:asym}, for an AR with zero tilt but different extents of the two polarity distributions $\delta D_\infty$ will be nonzero, even though $\delta D_1=0$ for this configuration. The reason is that the configuration has a nonzero quadrupole moment, which may alternatively be represented by not one but two oppositely oriented dipoles slightly shifted in latitude. Such intricacies would certainly make it advisable to keep the number of active regions explicitly represented in the sum (\ref{eq:cycledipmom}) to a minimum. This again brings us to the central question of this paper: how many and which active regions need to be explicitly taken into consideration for the calculation of the solar dipole moment? While the previous study of \cite{Whitbread+:dipcontr} has shown that keeping only a few ARs in the summation is certainly not correct, representing the rest of the ARs in a less faithful or detailed manner may still be admissible as long as this does not distort the statistics. To select those few ARs that still need to be realistically represented \fix{we introduce the concept of $\mbox{\sl ARDoR}$}. As known examples of rogue AR presented e.g. in Nagy et al.
(2017) are primarily rogue on account of their unusual tilts and large separations, the first idea is to define $\mbox{\sl ARDoR}$ as the difference between the ultimate dipole moment contribution of an AR and the value this would take with no scatter in the tilt and separation (i.e. if the tilt and separation were to take their expected values for the given latitude and magnetic flux, as given by eqs.~(15) and (16a) in \citealt{Lemerle1}). In the present paper we thus consider the case where for the majority of ARs only the information regarding their size (magnetic flux) and heliographic latitude is retained, while further details such as polarity separation or tilt angle (and therefore $\delta D_1$) are simply set to their expected values for the ARs with the given flux and heliographic latitude (``reduced stochasticity'' or RS representation), and compare this with the case when the actual polarity separations and tilts are used (``fully stochastic'' or FS case). The active region degree of rogueness is defined by \begin{equation} \mbox{\sl ARDoR} = \delta D_{U,\mathrm{FS}}-\delta D_{U,\mathrm{RS}} = f_{\infty}\, e^{(t_i-t_{n+1})/\tau} (\delta D_{1,\mathrm{FS}}-\delta D_{1,\mathrm{RS}}) . \end{equation} An objection to this definition may be raised as a large AR with unusually low separation and/or tilt will yield a negligible contribution to the dipole moment ($\delta D_U=0$), yet it may be characterized by a large negative DoR value according to the proposed definition. On the other hand, this is arguably not a shortcoming of the approach: on the contrary, as the total flux emerging in a cycle of a given amplitude is more or less fixed, the emergence of a large AR with unusually low $\delta D_U$ implies that the expected $\delta D_U$ contribution will be ``missing'' at the final account, resulting in the buildup of lower-than-expected global dipole moment at the end of the cycle. 
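In code, the definition amounts to evaluating the same initial-dipole formula twice --- once with the observed (FS) tilt and separation, once with their expected (RS) values --- and differencing. A sketch (illustrative only; the expected values $d_{\mathrm{RS}}$, $\alpha_{\mathrm{RS}}$, e.g. from a Joy's-law fit, are simply passed in rather than modeled; angles in radians, $R=1$):

```python
import numpy as np

def initial_dipole(phi, d, alpha, lambda0, R=1.0):
    """delta D_1 for a bipole with full separation d and tilt alpha
    (latitudinal separation d_lambda = d sin(alpha))."""
    return 3.0 * phi * d * np.sin(alpha) * np.cos(lambda0) / (4.0 * np.pi * R ** 2)

def ardor(phi, lambda0, d_fs, alpha_fs, d_rs, alpha_rs, f_inf,
          t_i, t_end, tau=np.inf):
    """ARDoR = f_inf * exp((t_i - t_end)/tau) * (delta_D1_FS - delta_D1_RS)."""
    decay = 1.0 if np.isinf(tau) else np.exp((t_i - t_end) / tau)
    dD1_fs = initial_dipole(phi, d_fs, alpha_fs, lambda0)  # actual tilt/separation
    dD1_rs = initial_dipole(phi, d_rs, alpha_rs, lambda0)  # expected tilt/separation
    return f_inf * decay * (dD1_fs - dD1_rs)
```

An AR with exactly the expected tilt and separation has $\mbox{\sl ARDoR}=0$, while a large-flux region with an anti-Joy or anti-Hale orientation acquires a large negative value, as discussed above.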
Ranking the ARs in a cycle according to their decreasing $\mbox{\sl ARDoR}$ values, we now set out to compare the results where $\mbox{\sl ARDoR}$ is explicitly considered for the top $N$ ARs on this list, while the rest of the ARs are represented in the RS approach. We ask: what is the lowest value of $N$ for which the algebraic method still yields acceptable results? \begin{figure} \includegraphics[width=0.5\textwidth]{histogram_BASIC.png} \includegraphics[width=0.5\textwidth]{histogram_STOC.png} \includegraphics[width=0.5\textwidth]{histogram_ardor.png} \caption{Histograms of ultimate dipole moment contributions of individual active regions in the FS and RS cases and their differences (i.e. $\mbox{\sl ARDoR}$ values) \fix{measured in {Gauss}}, based on 647 cycles with an average of 3073 active regions per cycle.} \label{fig:ARDORhist} \end{figure} \begin{table} \centering \begin{tabular}{r c c c} \hline N & mean & median & st.dev.\\ \hline 1 & 0.4977 & 0.4565 & 0.4146 \\ 2 & 0.6184 & 0.6022 & 0.4305 \\ 3 & 0.6696 & 0.6735 & 0.4065 \\ 4 & 0.6996 & 0.7188 & 0.3953 \\ 5 & 0.7245 & 0.7490 & 0.3535 \\ 10 & 0.8139 & 0.8078 & 0.3136 \\ 20 & 0.8838 & 0.8822 & 0.2576 \\ 50 & 0.9381 & 0.9472 & 0.1917 \\ 100 & 0.9689 & 0.9816 & 0.1324 \\ \hline\\ \end{tabular} \caption{Means, medians and standard deviations of the total ARDoR of the top $N$ ARs divided by the total ARDoR of all ARs for cycles where the total ARDoR exceeds 15\,\% of the absolute change in the dipole moment $\Delta D$ (230 cycles)} \label{table:ARDORhist15} \end{table} \section{$\mbox{\sl ARDoR}$ and rogue active regions in the 2$\times$2D dynamo model} The characteristics of the hybrid kinematic $2\times2$D Babcock--Leighton dynamo model developed by \cite{Lemerle1} and \cite{Lemerle2} make it particularly suitable for a study of this type. This model couples an internal axially symmetric flux transport dynamo (FTD) with a surface flux transport (SFT) model.
The FTD module provides the new active region emergences that act as a source term in the SFT component, while the output of the SFT model is used as the upper boundary condition on the FTD model. In the model, bipolar magnetic regions (BMRs) representing active regions are generated at the surface randomly, with a probability based on the amplitude of the toroidal field in the deep convective zone, their properties being drawn from a statistical ensemble constructed to obey observationally determined statistical relationships. This makes it straightforward to extract the set of AR properties for any cycle from the model and to convert it to a reduced stochasticity set by setting the random fluctuations around the mean in the distributions of polarity separations and tilts to zero. In addition, the numerical efficiency of the model allows it to be run for a large number of simulated solar cycles, rendering it suitable for statistical analysis of the results. For the present analysis we use run results from the standard setup of the 2$\times$2D model as described in \cite{Lemerle2}. Evaluating the parameters of the algebraic model from the numerical results presented in Paper 1 (for the same meridional flow and parameter values as in the dynamo model) yields $\lambda_R=13{\hbox{$.\!\!^\circ$}}6$ and $a/\lambda_R=3.75$, so for the algebraic method these values are used. The number of simulated cycles used in the analysis was 647. The distribution of computed $\mbox{\sl ARDoR}$ values is plotted in Fig.~\ref{fig:ARDORhist}. ARs in each cycle are ranked by the $\mbox{\sl ARDoR}$ values. In each cycle we compute the absolute change $\Delta D$ in the global solar dipole moment from equation (\ref{eq:cycledipmom}) for a ``cocktail'' of ARs, taking the ARs with the top $N$ highest $\mbox{\sl ARDoR}$ from the original, fully stochastic set, while taking the rest from the RS set. For brevity, this will be referred to as the ``rank-$N$ $\mbox{\sl ARDoR}$ method''. 
The dipole moment change calculated with the rank-$N$ $\mbox{\sl ARDoR}$ method is then \begin{equation} \Delta D_{\mbox{\sl ARDoR},N}=\Delta D_{\mathrm{RS}} +\sum_{i=1}^N \mbox{\sl ARDoR}_i \end{equation} where the AR index $i$ is in the order of decreasing $\mbox{\sl ARDoR}$. \begin{figure} \begin{center} \includegraphics[width=0.4\textwidth]{top01bmr_ardor.png} \includegraphics[width=0.4\textwidth]{top05bmr_ardor.png} \includegraphics[width=0.4\textwidth]{top10bmr_ardor.png} \includegraphics[width=0.4\textwidth]{top20bmr_ardor.png} \includegraphics[width=0.4\textwidth]{top50bmr_ardor.png} \includegraphics[width=0.4\textwidth]{top100bmr_ardor.png} \end{center} \caption{Histograms of the total ARDoR of the top $N$ ARs divided by the total ARDoR of all ARs for cycles where the total ARDoR exceeds 15\,\% of the absolute change in the dipole moment $\Delta D$ (230 cycles). The value of $N$ is shown inside each panel.} \label{fig:ARDORhist15} \end{figure} Note that the special case $N=0$, i.e.\ the RS set, was already considered in Paper 1 where we found that even this method yields $\Delta D$ values in good agreement with the full simulations for a large majority of the cycles, but the prediction breaks down for a significant minority. As we are primarily interested in improving predictions for this minority, we first select cycles where the difference between the $\Delta D$ \fix{values from the fully stochastic and reduced stochasticity} sets exceeds $\pm 15\,$\%. (Note that this difference is by definition equal to the sum of $\mbox{\sl ARDoR}$s for all ARs in the cycle, so the condition for selection was $(\sum_{i=1}^{N_\mathrm{tot}}\mathrm{ARDoR}_i)/\Delta D > 0.15$, which held for 230 cycles.) Figure~\ref{fig:ARDORhist15} presents histograms of the fraction of the deviation explained by ARs with the top $N$ highest ARDoR values. Means, medians and standard deviations of these plots are collected in Table~\ref{table:ARDORhist15}. 
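Purely as an illustration (not code from the model itself), the rank-$N$ reconstruction amounts to a descending sort and a partial sum; a minimal sketch with made-up numbers could read:

```python
def delta_d_rank_n(delta_d_rs, ardor_values, n):
    """Rank-N ARDoR estimate of the dipole moment change over a cycle:
    the reduced-stochasticity baseline plus the ARDoR terms of the N ARs
    ranked highest by decreasing ARDoR. n = 0 recovers the RS value and
    n = len(ardor_values) recovers the fully stochastic value, since the
    ARDoR terms sum to the FS-RS difference."""
    ranked = sorted(ardor_values, reverse=True)
    return delta_d_rs + sum(ranked[:n])

ardors = [0.5, -0.25, 0.25, 0.125]     # per-AR ARDoR values (made up)
print(delta_d_rank_n(1.0, ardors, 2))  # 1.0 + 0.5 + 0.25 = 1.75
```
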
It is apparent that ARs with the top 10--20 highest $\mbox{\sl ARDoR}$ are sufficient to explain 80--90\,\% of the deviation of $\Delta D$ computed with the reduced stochasticity model from the full value of $\Delta D$. Even the single AR with the highest $\mbox{\sl ARDoR}$ alone can explain 50\,\% of the deviation. Meanwhile, a significant scatter is present in the plots: {e.g., adding up the columns below 0.5 and above 1.5 in the 4th panel one finds that} in $\sim 8\,$\% of these 230 deviating cycles even the rank-20 $\mbox{\sl ARDoR}$ method is insufficient to reproduce the deviation at an accuracy better than $\pm 50$\,\%. (This is $\pm 50$\% of the {\it deviation}: as in this sample the mean deviation is roughly $\sim20$\% of the expected value of $\Delta D$, $\Delta D$ itself is still reproduced with an accuracy up to $\pm 10$\% for these cycles.) \begin{figure} \includegraphics[width=\textwidth]{deltaD_sort_plot.png} \includegraphics[width=\textwidth]{ARDoR_sort_plot.png} \caption{Absolute change $\Delta D$ in the global dipole moment during a cycle (blue solid); its approximation using the fully stochastic algebraic method (red dashed) and the reduced stochasticity algebraic method (light green dashed). The curves are compared with the absolute change computed with the algebraic method for various sets of the active regions in a cycle. {\it Top panel:} subsets containing ARs with the top $N$ highest ultimate dipole contribution (FS case). {\it Bottom panel:} \fix{subsets containing ARs} with the top $N$ highest $\mbox{\sl ARDoR}$ values added \fix{to $\Delta D_{\mathrm{RS}}$}.} \label{fig:DDcurves} \end{figure} \begin{table} \centering \begin{tabular}{lrrrc} \multicolumn{5}{l}{$\Delta D_N= \Delta D_\mathrm{RS}+\sum_{i=1}^N\mbox{\sl ARDoR}_i$, ranking by $\mbox{\sl ARDoR}$:}\\ \hline $N$ & mean & median & st.dev. & st.dev. 
of $(\Delta D_N-\Delta D)/\Delta D$\\ \hline 0 & 0.0578 & 0.0966 & 1.1365 & 0.212 \\ 5 & --0.0115 & 0.0073 & 0.7360 & 0.128 \\ 10 & --0.0330 & --0.0116 & 0.7134 & 0.120 \\ 20 & --0.0368 & --0.0144 & 0.6662 & 0.125 \\ 50 & --0.0599 & --0.0406 & 0.6132 & 0.110 \\ $ N_\mathrm{tot}$ & --0.0499 & --0.1817 & 0.5861 & 0.101 \\ \hline &&&\\ \multicolumn{4}{l}{$\sum_{i=1}^N \delta D_{U,i}$, ranking by $\delta D_U$:}\\ \hline $N$ & mean & median & st.dev.\\ \hline 50 & 4.1280 & 4.1777 & 1.4684 \\ 100 & 3.1874 & 3.1173 & 1.3479 \\ 250 & 1.8958 & 1.7491 & 1.1311 \\ 500 & 1.0129 & 0.8471 & 0.9346 \\ 750 & 0.6003 & 0.4895 & 0.8186 \\ \hline\\ \end{tabular} \caption{Means, medians and standard deviations of the residuals of various approximations relative to the simulated value of the absolute dipole moment change $\Delta D$ during an activity cycle, as plotted in Fig.~\ref{fig:DDcurves}.} \label{table:DDcurves} \end{table} The improvement that the $\mbox{\sl ARDoR}$ method brings to the problem of reproducing the solar axial dipole moment at the end of a cycle is dramatically illustrated in Fig.~\ref{fig:DDcurves}. While in the case of ranking ARs by $\delta D_U$ even adding contributions from the top 750 ARs yields only a barely tolerable representation of the dipole moment variation, the $\mbox{\sl ARDoR}$ method produces excellent agreement already for very low values of the rank $N$. The quality of these representations is documented in Table~\ref{table:DDcurves}. The standard deviation of the rank-5 $\mbox{\sl ARDoR}$ method relative to the simulation result is lower than in the case of $\Delta D$ calculated from even the top 750 highest $\delta D_U$ contributions. Finally, in Fig.~\ref{fig:fracthist} we present histograms of the deviations from the simulated value of $\Delta D$ computed with the various methods (FS, RS and ARDoR with different $N$ values). Here deviations are expressed as fractions of the actual $\Delta D$ resulting from the simulations, i.e. 
the quantities given in the headings of Table~\ref{table:DDcurves} are divided by $\Delta D$. Adding up the columns, it is straightforward to work out from this that, e.g., in the case of {the rank-5 ARDoR method (i.e.,} considering only the top 5 highest ARDoR values and adding them to the RS algebraic result), the deviation of $\Delta D$ from the simulated cycle change in the global dipole moment is less than 15\,\% in 88\,\% of the cycles. As $\Delta D$ is, on average, twice the amplitude of the polar field at minimum, the rank-5 $\mbox{\sl ARDoR}$ method reproduces the polar field precursor {within $\pm 30$\,\%} in 88\,\% of all cycles. This is to be compared to 74\,\% { of the cycles} in the RS case. \begin{figure} \includegraphics[width=\textwidth]{sumBMR_histogram__ARDoR-sort_all2.png} \includegraphics[width=\textwidth]{sumBMR_histogram__ARDoR-sort__top5-top10.png} \includegraphics[width=\textwidth]{sumBMR_histogram__ARDoR-sort__top20-top50.png} \caption{Fractional histograms of the fractional deviation from the absolute change in dipole moment during a solar cycle, calculated summing ultimate AR contributions in the RS case $+$ ARDoR values for ARs with the top $N$ highest ARDoR. {($N=N_\mathrm{tot}$ and $N=0$ for the first and second panels, respectively.)} Colour codes are the same as in the previous plots.} \label{fig:fracthist} \end{figure} \section{Conclusions} In Paper 1 we introduced a method to reconstruct variations in the global axial dipole moment of the Sun by an algebraic summation of the contributions from individual active regions. In principle, for the application of this method, for each AR the optimal representation in terms of a simple bipole (or possibly several bipoles in more complex cases) needs to be known. 
Obtaining this information for thousands of active regions is a nontrivial task, but significant efforts have been made in this direction: \begin{list}{--}{\itemsep=0 em \parsep=0em \partopsep=0em \topsep=0em} \item \cite{Sheeley+:BMRcyc21} determined the properties of bipoles representing {close to 3000} ARs with $\Phi>3\cdot 10^{20}\,$Mx from NSO-KP (Kitt Peak) magnetograms in Cycle 21 (1976--1986). Each AR was considered at its maximum development; recurrent ARs were multiply listed. \item \cite{Yeates+:BMRcyc23_24} determined the properties of bipoles representing ARs from NSO-KP/SOLIS synoptic magnetic maps in cycles 23 and 24 (1997--2017). Each AR was considered at central meridian passage; recurrent ARs were multiply listed. \item \cite{Whitbread+:dipcontr} determined initial dipole moments $D_1$ for active regions from Kitt Peak/SOLIS synoptic magnetic maps in cycles 21--24 (1976--2017). Each AR was considered at central meridian passage; recurrent ARs were multiply listed. \item From white-light data without direct magnetic information \cite{Jiang+Baranyi} determined an indicative ``dipole moment index'' for sunspot groups larger than 800 MSH in cycles 21--24 (1976--2017). \end{list} Data resulting from the above listed efforts have been placed in public databases.\footnote{VizieR and the Solar dynamo dataverse ({\tt https://dataverse.harvard.edu/dataverse/solardynamo}), maintained by Andr\'es Mu\~noz-Jaramillo.} In addition to these, \cite{Li+Ulrich:tilts} determined tilt angles for 30,600 ARs from Mt.~Wilson and MDI magnetograms in cycles 21--24 (1974--2010). \cite{Virtanen+:dipmom} determined initial dipole moments $D_1$ for active regions from Kitt Peak/SOLIS synoptic magnetic maps combined with SDO HMI synoptic maps in cycles 21--24 (1976--2019). The above studies are limited to the last four cycles when magnetograms were available on a regular basis. 
For earlier cycles, a number of statistical analyses of sunspot data without direct magnetic information (e.g., \citealt{Dasi-Espuig+}, \citealt{Ivanov:noTQ}, \citealt{McClintock+Norton:MtWtilts}, \citealt{Baranyi:tilts}, \citealt{Isik:tilts}, \citealt{SenthamizhPavai}) resulted in tilt angle values, offering some potential for use as input for models of the dipole moment evolution. Recently, information on the magnetic polarities of sunspots from Mt.~Wilson measurements has been used in combination with Ca II spectroheliograms by \cite{Pevtsov+:pseudmgrams} to construct ``pseudo-magnetograms'' for the period 1915--1985; the results have been benchmarked against direct observations for the last period (\citealt{Virtanen+:test}). Despite these impressive efforts, the determination of AR dipole moment values to be used as input in our algebraic method is subject to many uncertainties. As discussed in the Introduction, the available data are increasingly incomplete for smaller ARs. The arbitrariness of the time chosen for the incorporation of ARs is also problematic, as during their evolution the structure of ARs can change significantly due to processes not represented in the SFT models (flux emergence or localized photospheric flows). The complexities of AR structure imply that their representation with a \fix{single bipole may be subject to doubt (cf.\ \citealt{Jiang+Baranyi}, \citealt{Iijima+:asym})}. And for historical data these difficulties are further aggravated. In view of these considerable difficulties, looking for ways to minimize the need for detailed input data for our algebraic method is advisable. With this objective in mind, in the present work we introduced the $\mbox{\sl ARDoR}$ method and tested it on a large number of activity cycles simulated with the 2$\times$2D dynamo model. 
We found that \begin{list}{--}{\itemsep=0 em \parsep=0em \partopsep=0em \topsep=0em} \item Including all information on the bipolar active regions appearing in a cycle, our algebraic method can reproduce the dipole moment at the end of the cycle with an error below $\pm 30$\,\% in over 97\,\% of cycles. \item Using only positions and magnetic fluxes of the ARs, and arbitrarily equating their polarity separations and tilts to their expected values (reduced stochasticity {or RS} case), the algebraic method can reproduce the dipole moment at the end of the cycle with an error below $\pm 30$\,\% in about 74\,\% of cycles. \item Combining the RS case with detailed information on a small number $N$ of ARs with the largest ARDoR values, the fraction of unexplained cycles is significantly reduced (from 26\,\% to 12\,\% in the case of $N=5$ and a {$\pm 30$}\,\% accuracy threshold). \end{list} These results indicate that stochastic effects on the intercycle variations of solar activity are dominated by the effect of a low number of large ``rogue'' active regions, rather than the combined effect of numerous small ARs. Beyond the academic interest of these results, the method has a potential for use in solar cycle prediction. For the realization of this potential, however, a number of further problems need to be addressed. As in forecasts the positions and fluxes of ARs are also not known, the representation of the majority of ARs not faithfully represented in our method must be stochastic also in these variables, or simply replaced by a smooth continuous distribution. Furthermore, for the selection of ARs with the top $N$ ARDoR values these values should theoretically be computed for all ARs. To avoid this need, ``proxies'' of ARDoR based on straightforward numerical criteria may need to be identified to select the ARs for which a more in-depth study is then needed to determine ARDoR values. Studies in this direction are left for further research. 
\begin{acknowledgements} This research was supported by the Hungarian National Research, Development and Innovation Fund (grant no. NKFI K-128384) and by the European Union's Horizon 2020 research and innovation programme under grant agreement No. 739500. The collaboration of the authors was facilitated by support from the International Space Science Institute in ISSI Team 474. \end{acknowledgements}
Nootropix.com: a blog dedicated to the study of nootropic smart drugs and supplements.

Can Nootropics Help With Drug Abuse and Addiction?

By jay

In recent years, evidence has accumulated suggesting a common pathologic mechanism underlying the addictive behaviours of several substances. Dysregulation of glutamatergic neurotransmission within the prefrontal cortex (PFC) and nucleus accumbens (NA) appears to predispose to a higher tendency towards drug-seeking behaviour.

Thus far, this mechanism has been associated with the addiction potential of cocaine, heroin, nicotine, cannabis, and alcohol, with possible implications for other substances and even non-drug-related compulsive habits such as pathological gambling. Discovery of this shared pathology has led to the investigation of the potential application of existing agents, such as Memantine and n-acetylcysteine. Could nootropics targeting elements in this key glutamatergic circuit reduce symptoms and complications of substance use disorders?

Glutamate Spillover

"Glutamate spillover" refers to the pathologic cascade in brain chemistry that occurs with chronic abuse of certain substances and results in reinforcement of the behaviour[1]. Prolonged exposure to substances of abuse leads to several maladaptive changes in the glutamatergic PFC-NA pathway, specifically:

Downregulation of glial glutamate transporter-1 (GLT1) expression in the nucleus accumbens. By removing glutamate from the extrasynaptic space, GLT1 prevents inappropriate excitatory stimulation due to an accumulation of the excitatory neurotransmitter.

Decreased ability of presynaptic metabotropic glutamate receptor 2 (mGluR2) to inhibit glutamate release. 
In normal physiology, mGluR2 autoreceptors manage a feedback loop in which increased extracellular glutamate levels trigger a reduction in the presynaptic release of glutamate. This auto-regulatory mechanism also serves to prevent an extracellular accumulation of glutamate.

When glutamate spillover within the non-synaptic extracellular space does occur as a result of the combination of these processes, the following sequelae may manifest:

Stimulation of postsynaptic mGluR5, AMPA, and NMDA receptors.

Upregulation of AMPA and NMDA receptors (increased synaptic plasticity).

Stimulation of extrasynaptic glutamate receptors may also occur.

Increased excitatory tone due to these two processes culminates in impaired inhibition with regard to drug-seeking behaviour as well as an increased risk of relapse. Furthermore, persistently elevated glutamatergic tone may lead to neurotoxicity secondary to excessive Ca2+ ion influx. This pathology has also been implicated in neurodegenerative disorders such as Alzheimer's, Parkinson's, and Huntington's disease.

n-acetylcysteine (NAC) is a cysteine precursor that has a long history of use for indications ranging from bronchopulmonary disorders to paracetamol overdose. It produces many beneficial effects through a variety of mechanisms, ranging from supporting antioxidant processes to suppressing over-reactive immune responses to inhibiting apoptosis. NAC's glutamatergic modulation, however, is of key interest in managing substance use disorders[2].

NAC is converted to L-cysteine in vivo, which enhances the activity of the cysteine/glutamate exchange transporter positioned near the pre-synaptic terminal. This increases the concentration of extracellular glutamate, resulting in increased tonic activation of pre-synaptic mGluR2 autoreceptors, causing a subsequent decrease in glutamate release. 
NAC also increases expression of GLT1 and the cysteine/glutamate exchange transporter, promoting the removal of glutamate from the extrasynaptic space and "putting it back" in the pre-synaptic area. These effects in concert have been shown to mitigate the complications of glutamate spillover, and have been tested in several small trials with promising results.

When administered to patients with a history of cocaine addiction, NAC was shown to decrease self-reported cocaine use within the 28 days of treatment (mean 8.1 days of use out of 28 before treatment vs. 1.1 days during treatment, p = 0.001)[3], desire to use cocaine (F = 5.07; df = 1,13; p = 0.05), and response to cocaine cues (F = 4.79, df = 1,13, p = 0.05)[4]. A magnetic resonance spectroscopy study confirmed elevated glutamate levels in the dorsal anterior cingulate cortex of cocaine users compared with non-users (t(7) = 3.08, p = 0.02), and also showed a reduction 1 hour after a single 2.4 g dose of NAC[5].

With regard to cannabis, 2.4 g/day NAC decreased craving in one 4-week open-label study of 24 patients[6]; in a double-blind placebo-controlled trial, subjects given 2.4 g/day NAC in addition to counselling were 2.4 times more likely to test negative on urinalysis (95% CI 1.1 to 5.2), but there was no difference in the number of reported days of cannabis use[7].

The dosage for managing consequences of substance use disorders in trials ranged from 1.2 to 2.4 g by mouth daily. Benefits on neurochemistry may occur with single doses, although significant alterations in behaviour may take days to weeks. The pharmacodynamic effect also depends upon the history of substance use and individual predisposition to addictive behaviour.

NAC is significantly protein-bound (80%). It is metabolised in the liver via non-CYP450 pathways. NAC and its metabolites are primarily eliminated in the urine, with a half-life of 5.6 hours in adults[8]. NAC is generally well-tolerated. 
Nausea, vomiting, rash, and fever have been reported.

Memantine (Namenda®) is an uncompetitive NMDA receptor antagonist most commonly used in the management of moderate-to-severe Alzheimer's disease. In addition to its glutamatergic modulation, memantine also acts as an agonist at the D2 and nicotinic acetylcholinergic receptors (nAChR). Memantine binds and inhibits NMDA receptors with low-to-moderate affinity, most effectively in states of excess glutamatergic activity (such as in substance use disorder). By blocking NMDA receptors, memantine decreases glutamatergic tone[9].

[Figure: neurochemical effects of alcohol intoxication in various contexts.]

Upregulation of NMDA receptors has been observed with chronic alcohol consumption. Abrupt discontinuation of alcohol removes GABAergic suppression, resulting in the characteristic acute sequelae of alcohol withdrawal (symptoms of excitotoxicity): seizures, hallucinations, tachycardia, and shock. By inhibiting these receptors, memantine may theoretically attenuate symptoms of alcohol withdrawal.

In one RCT of 18 moderate alcohol drinkers (10-30 drinks/week), 30 mg/day memantine significantly decreased alcohol craving before alcohol consumption in comparison with 15 mg/day and placebo[10]. Another placebo-controlled RCT using 10-40 mg/day showed no difference[11]. A subsequent study of 38 patients utilising 20-40 mg/day memantine showed dose-dependent reductions in cue-induced craving[12]. In another RCT of 127 male patients undergoing alcohol withdrawal, administration of 10 mg memantine three times a day decreased apparent withdrawal symptom severity, dysphoria, and need for diazepam[13].

Administration of 60 mg significantly alleviated subjectively-rated symptoms of naloxone-induced opioid withdrawal in 8 heroin-dependent patients[14]. In a study of 67 heroin-dependent subjects, 10-30 mg/day memantine significantly reduced heroin craving, depression, and state and trait anxiety compared with placebo after 3 weeks of use. 
A separate treatment arm using amitriptyline 75 mg/day achieved similar results, but with a higher incidence of side effects and a higher dropout rate[15]. Clinical data on application in cocaine[16],[17] and nicotine abuse[18] are less promising.

The dosage for mitigating substance use disorders in trials ranged from 5 to 60 mg, with 30 mg by mouth once daily showing the best effects for alcohol abuse and 30 to 60 mg by mouth once daily shown to be most effective in limited trials for opioid dependence. Safety is best characterised at doses up to 30 mg, as this dosage is used in Alzheimer's disease. Memantine is typically initiated at 5 mg daily, then titrated by 5 mg per week up to the goal dose (30 to 60 mg depending upon the indication).

Memantine undergoes favourable non-hepatic metabolism; its metabolites are minimally active. Individuals with a history of kidney disease should consult a doctor or pharmacist before use, as memantine undergoes significant renal elimination (74% is excreted in the urine). The half-life of memantine ranges from 60 to 80 hours. The most common side effects noted at therapeutic doses higher than 7 mg/day are dizziness, headache, confusion, and anxiety; increased blood pressure; cough; and constipation[19].

Disrupted regulation of glutamatergic pathways in the prefrontal cortex-nucleus accumbens circuit has been implicated as an underlying pathology among several substance use disorders, including cocaine, alcohol, and opioid dependence. Therapies such as n-acetylcysteine (NAC) and memantine have demonstrated efficacy in attenuating the symptoms of some of these disorders in small trials.

References

1. McClure EA, Gipson CD, Malcolm RJ, Kalivas PW, Gray KM. Potential role of n-acetylcysteine in the management of substance use disorders. CNS Drugs. 2014;28(2):95-106.
2. Brown RM, Kupchik YM, Kalivas PW. The story of glutamate in drug addiction and of n-acetylcysteine as a potential pharmacotherapy. JAMA Psychiatry. 2013;70(9):895-7.
3. 
↑ Mardikian PN, LaRowe SD, Hedden S, Kalivas PW, Malcolm RJ. An open-label trial of n-acetylcysteine for the treatment of cocaine dependence: a pilot study. Prog Neuropsychopharmacol Biol Psychiatry. 2007;31:389-94. 4. ↑ LaRowe SD, Myrick H, Hedden S, Mardikian P, Saladin M, McRae A, et al. Is cocaine desire reduced by n-acetylcysteine? Am J Psychiatry. 2007;164:1115-7. 5. ↑ Schmaal L, Veltman DJ, Nederveen A,van den Brink W, Goudriaan AE. n-acetylcysteine normalizes glutamate levels in cocaine- dependent patients: a randomized crossover magnetic resonance spectroscopy study. Neuropsychopharmacology. 2012;37:2143-52. 6. ↑ Gray KM, Watson NL, Carpenter MJ, LaRowe SD. n-acetylcysteine (NAC) in young marijuana users: an open-label pilot study. Am J Addict. 2010;19:187-9. 7. ↑ Gray KM, Carpenter MJ, Baker NL, DeSantis SM, Kryway E, Hartwell KJ, et al. A double-blind randomized controlled trial of n-acetylcysteine in cannabis-dependent adolescents. Am J Psychiatry. 2012;169:805-12. 8. ↑ Medscape® 5.1.2, (electronic version). Reuters Health Information, New York, New York. 9. ↑ Zdanys K, Tampi RR. A systematic review of off-label uses of memantine for psychiatric disorders. Prog Neuro-Psychopharmacol Biol Psychiatry. 2008 8/1;32(6):1362-74. 10. ↑ Bisaga A, Evans SM. Acute effects of memantine in combination with alcohol in moderate drinkers. Psychopharmacology 2004;172:16–24. 11. ↑ Evans SM, Levin FR, Brooks DJ, Garawi F. A pilot double-blind treatment trial of memantine for alcohol dependence. Alcoholism: Clin Exp Res 2007;31(5):775–82. 12. ↑ Krupitsky EM, Neznanova O, Masalov D, Burakov AM, Didenko T, Romanova T, et al. Effect of memantine on cue-induced alcohol craving in recovering alcohol-dependent patients. Am J Psychiatry 2007a;164(3):519–23. 13. ↑ Krupitsky EM, Rudenko AA, Burakov AM, Slavina TY, Grinenko AA, Pittman B, et al. Antiglutamatergic strategies for ethanol detoxification: comparison with placebo & diazepam. Alcoholism: Clin Exp Res 2007b;31(4):604–11. 
14. ↑ Bisaga A, Comer SD, Ward AS, Popik P, Kleber HD, Fischman MW. The NMDA antagonist memantine attenuates the expression of opioid physical dependence in humans. Psychopharmacology 2001(157):1–10. 15. ↑ Krupitsky EM, Masalov DV, Burakov AM, Didenko TY, Romanova TN, Bespalov AY, et al. A pilot study of memantine effects on protracted withdrawal (syndrome of anhedonia) in heroin addicts. Addict Disord Treat 2002;1(4):143–6. 16. ↑ Collins ED, Vosburg SK, Ward AS, Haney M, Foltin RW. Memantine increases cardiovascular but not behavioral effects of cocaine in methadone-maintained humans. Pharmacol Biochem Behav 2006;83(1):47–55. 17. ↑ Collins ED, Ward AS, McDowell DM, Foltin RW, Fischman MW. The effects of memantine on the subjective, reinforcing, & cardiovascular effects of cocaine in humans. Behav Pharmacol 1998;9(7):587–98. 18. ↑ Thuerauf N, Lunkenheimer J, Lunkenheimer B, Sperling W, Bleich S, Schlabeck M, et al. Memantine fails to facilitate partial cigarette deprivation in smokers—no role of memantine in the treatment of nicotine dependency? J Neural Transm 2007;114:351–7. 19. ↑ Micromedex® 1.0 (Healthcare Series), (electronic version). Truven Health Analytics, Greenwood Village, Colorado, U.S.A. Available at: http://www.micromedexsolutions.com/ Tags abuse, addiction, drug abuse, drugs, glutamate, memantine, N-acetylcysteine, NAC, nootropics, recovery, substance abuse, substance use, substance use disorder ← How to Understand Clinical Research, Part I: Accessing and Reading Research → Contraceptives: a Nootropic for Women? © 2020 Nootropix.com
import sys
import math
import pychemia

# Available surfaces are:
SURFACE_TYPE = {"PLANE", "SPHERE", "PARAMETRIC_SURFACE"}


def WritePMG(ren, fn, magnification=1):
    """
    Save the image as a PNG
    :param ren: The renderer.
    :param fn: The file name.
    :param magnification: The magnification, usually 1.
    """
    renLgeIm = vtk.vtkRenderLargeImage()
    imgWriter = vtk.vtkPNGWriter()
    renLgeIm.SetInput(ren)
    renLgeIm.SetMagnification(magnification)
    imgWriter.SetInputConnection(renLgeIm.GetOutputPort())
    imgWriter.SetFileName(fn)
    imgWriter.Write()


def MakeBands(dR, numberOfBands, nearestInteger):
    """
    Divide a range into bands
    :param dR: [min, max] the range that is to be covered by the bands.
    :param numberOfBands: the number of bands, a positive integer.
    :param nearestInteger: if True then [floor(min), ceil(max)] is used.
    :return: A List consisting of [min, midpoint, max] for each band.
    """
    bands = list()
    if (dR[1] < dR[0]) or (numberOfBands <= 0):
        return bands
    x = list(dR)
    if nearestInteger:
        x[0] = math.floor(x[0])
        x[1] = math.ceil(x[1])
    dx = (x[1] - x[0]) / float(numberOfBands)
    b = [x[0], x[0] + dx / 2.0, x[0] + dx]
    i = 0
    while i < numberOfBands:
        bands.append(b)
        b = [b[0] + dx, b[1] + dx, b[2] + dx]
        i += 1
    return bands


def MakeIntegralBands(dR):
    """
    Divide a range into integral bands
    :param: dR - [min, max] the range that is to be covered by the bands.
    :return: A List consisting of [min, midpoint, max] for each band.
    """
    bands = list()
    if dR[1] < dR[0]:
        return bands
    x = list(dR)
    x[0] = math.floor(x[0])
    x[1] = math.ceil(x[1])
    numberOfBands = int(abs(x[1]) + abs(x[0]))
    return MakeBands(x, numberOfBands, False)


def MakeElevations(src):
    """
    Generate elevations over the surface.
    :param: src - the vtkPolyData source.
    :return: - vtkPolyData source with elevations.
    """
    bounds = [0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    src.GetBounds(bounds)
    elevFilter = vtk.vtkElevationFilter()
    elevFilter.SetInputData(src)
    elevFilter.SetLowPoint(0, bounds[2], 0)
    elevFilter.SetHighPoint(0, bounds[3], 0)
    elevFilter.SetScalarRange(bounds[2], bounds[3])
    elevFilter.Update()
    return elevFilter.GetPolyDataOutput()


def MakePlane():
    """
    Make a plane as the source.
    :return: vtkPolyData with normal and scalar data.
    """
    source = vtk.vtkPlaneSource()
    source.SetOrigin(-10.0, -10.0, 0.0)
    source.SetPoint2(-10.0, 10.0, 0.0)
    source.SetPoint1(10.0, -10.0, 0.0)
    source.SetXResolution(20)
    source.SetYResolution(20)
    source.Update()
    return MakeElevations(source.GetOutput())


def MakeSphere():
    """
    Make a sphere as the source.
    :return: vtkPolyData with normal and scalar data.
    """
    source = vtk.vtkSphereSource()
    source.SetCenter(0.0, 0.0, 0.0)
    source.SetRadius(10.0)
    source.SetThetaResolution(32)
    source.SetPhiResolution(32)
    source.Update()
    return MakeElevations(source.GetOutput())


def MakeParametricSource():
    """
    Make a parametric surface as the source.
    :return: vtkPolyData with normal and scalar data.
    """
    fn = vtk.vtkParametricRandomHills()
    fn.AllowRandomGenerationOn()
    fn.SetRandomSeed(1)
    fn.SetNumberOfHills(30)
    if fn.GetClassName() == 'vtkParametricRandomHills':
        # Make the normals face out of the surface.
        fn.ClockwiseOrderingOff()
    source = vtk.vtkParametricFunctionSource()
    source.SetParametricFunction(fn)
    source.SetUResolution(50)
    source.SetVResolution(50)
    source.SetScalarModeToZ()
    source.Update()
    # Name the arrays (not needed in VTK 6.2+ for vtkParametricFunctionSource)
    source.GetOutput().GetPointData().GetNormals().SetName('Normals')
    source.GetOutput().GetPointData().GetScalars().SetName('Scalars')
    return source.GetOutput()


def MakeLUT():
    """
    Make a lookup table using vtkColorSeries.
    :return: An indexed lookup table.
    """
    # Make the lookup table.
    colorSeries = vtk.vtkColorSeries()
    # Select a color scheme.
    # colorSeriesEnum = colorSeries.BREWER_DIVERGING_BROWN_BLUE_GREEN_9
    # colorSeriesEnum = colorSeries.BREWER_DIVERGING_SPECTRAL_10
    # colorSeriesEnum = colorSeries.BREWER_DIVERGING_SPECTRAL_3
    # colorSeriesEnum = colorSeries.BREWER_DIVERGING_PURPLE_ORANGE_9
    # colorSeriesEnum = colorSeries.BREWER_SEQUENTIAL_BLUE_PURPLE_9
    # colorSeriesEnum = colorSeries.BREWER_SEQUENTIAL_BLUE_GREEN_9
    colorSeriesEnum = colorSeries.BREWER_QUALITATIVE_SET3
    # colorSeriesEnum = colorSeries.CITRUS
    colorSeries.SetColorScheme(colorSeriesEnum)
    lut = vtk.vtkLookupTable()
    colorSeries.BuildLookupTable(lut)
    lut.SetNanColor(1, 0, 0, 1)
    return lut


def ReverseLUT(lut):
    """
    Create a lookup table with the colors reversed.
    :param: lut - An indexed lookup table.
    :return: The reversed indexed lookup table.
    """
    lutr = vtk.vtkLookupTable()
    lutr.DeepCopy(lut)
    t = lut.GetNumberOfTableValues() - 1
    for i in reversed(range(t + 1)):
        rgba = [0, 0, 0]
        v = float(i)
        lut.GetColor(v, rgba)
        rgba.append(lut.GetOpacity(v))
        lutr.SetTableValue(t - i, rgba)
    t = lut.GetNumberOfAnnotatedValues() - 1
    for i in reversed(range(t + 1)):
        lutr.SetAnnotation(t - i, lut.GetAnnotation(i))
    return lutr


def Frequencies(bands, src):
    """
    Count the number of scalars in each band.
    :param: bands - the bands.
    :param: src - the vtkPolyData source.
    :return: The frequencies of the scalars in each band.
    """
    freq = dict()
    for i in range(len(bands)):
        freq[i] = 0
    tuples = src.GetPointData().GetScalars().GetNumberOfTuples()
    for i in range(tuples):
        x = src.GetPointData().GetScalars().GetTuple1(i)
        for j in range(len(bands)):
            if x <= bands[j][2]:
                freq[j] += 1
                break
    return freq


def MakeGlyphs(src, reverseNormals):
    """
    Glyph the normals on the surface.
    You may need to adjust the parameters for maskPts, arrow and glyph
    for a nice appearance.
    :param: src - the surface to glyph.
    :param: reverseNormals - if True the normals on the surface are reversed.
    :return: The glyph object.
    """
    # Sometimes the contouring algorithm can create a volume whose gradient
    # vector and ordering of polygon (using the right hand rule) are
    # inconsistent. vtkReverseSense cures this problem.
    reverse = vtk.vtkReverseSense()
    # Choose a random subset of points.
    maskPts = vtk.vtkMaskPoints()
    maskPts.SetOnRatio(5)
    maskPts.RandomModeOn()
    if reverseNormals:
        reverse.SetInputData(src)
        reverse.ReverseCellsOn()
        reverse.ReverseNormalsOn()
        maskPts.SetInputConnection(reverse.GetOutputPort())
    else:
        maskPts.SetInputData(src)
    # Source for the glyph filter
    arrow = vtk.vtkArrowSource()
    arrow.SetTipResolution(16)
    arrow.SetTipLength(0.3)
    arrow.SetTipRadius(0.1)
    glyph = vtk.vtkGlyph3D()
    glyph.SetSourceConnection(arrow.GetOutputPort())
    glyph.SetInputConnection(maskPts.GetOutputPort())
    glyph.SetVectorModeToUseNormal()
    glyph.SetScaleFactor(1)
    glyph.SetColorModeToColorByVector()
    glyph.SetScaleModeToScaleByVector()
    glyph.OrientOn()
    glyph.Update()
    return glyph


def DisplaySurface(st):
    """
    Make and display the surface.
    :param: st - the surface to display.
    :return The vtkRenderWindowInteractor.
    """
    surface = st.upper()
    if not (surface in SURFACE_TYPE):
        print(st, "is not a surface.")
        iren = vtk.vtkRenderWindowInteractor()
        return iren

    # ------------------------------------------------------------
    # Create the surface, lookup tables, contour filter etc.
    # ------------------------------------------------------------
    src = vtk.vtkPolyData()
    if surface == "PLANE":
        src = MakePlane()
    elif surface == "SPHERE":
        src = MakeSphere()
    elif surface == "PARAMETRIC_SURFACE":
        src = MakeParametricSource()
        # The scalars are named "Scalars" by default
        # in the parametric surfaces, so change the name.
        src.GetPointData().GetScalars().SetName("Elevation")
    scalarRange = src.GetScalarRange()

    lut = MakeLUT()
    lut.SetTableRange(scalarRange)
    numberOfBands = lut.GetNumberOfTableValues()
    # bands = MakeIntegralBands(scalarRange)
    bands = MakeBands(scalarRange, numberOfBands, False)

    # Let's do a frequency table.
    # The number of scalars in each band.
    # print Frequencies(bands, src)

    # We will use the midpoint of the band as the label.
    labels = []
    for i in range(len(bands)):
        labels.append('{:4.2f}'.format(bands[i][1]))

    # Annotate
    values = vtk.vtkVariantArray()
    for i in range(len(labels)):
        values.InsertNextValue(vtk.vtkVariant(labels[i]))
    for i in range(values.GetNumberOfTuples()):
        lut.SetAnnotation(i, values.GetValue(i).ToString())

    # Create a lookup table with the colors reversed.
    lutr = ReverseLUT(lut)

    # Create the contour bands.
    bcf = vtk.vtkBandedPolyDataContourFilter()
    bcf.SetInputData(src)
    # Use either the minimum or maximum value for each band.
    for i in range(0, numberOfBands):
        bcf.SetValue(i, bands[i][2])
    # We will use an indexed lookup table.
    bcf.SetScalarModeToIndex()
    bcf.GenerateContourEdgesOn()

    # Generate the glyphs on the original surface.
    glyph = MakeGlyphs(src, False)

    # ------------------------------------------------------------
    # Create the mappers and actors
    # ------------------------------------------------------------
    srcMapper = vtk.vtkPolyDataMapper()
    srcMapper.SetInputConnection(bcf.GetOutputPort())
    srcMapper.SetScalarRange(scalarRange)
    srcMapper.SetLookupTable(lut)
    srcMapper.SetScalarModeToUseCellData()

    srcActor = vtk.vtkActor()
    srcActor.SetMapper(srcMapper)
    srcActor.RotateX(-45)
    srcActor.RotateZ(45)

    # Create contour edges
    edgeMapper = vtk.vtkPolyDataMapper()
    edgeMapper.SetInputData(bcf.GetContourEdgesOutput())
    edgeMapper.SetResolveCoincidentTopologyToPolygonOffset()

    edgeActor = vtk.vtkActor()
    edgeActor.SetMapper(edgeMapper)
    edgeActor.GetProperty().SetColor(0, 0, 0)
    edgeActor.RotateX(-45)
    edgeActor.RotateZ(45)

    glyphMapper = vtk.vtkPolyDataMapper()
    glyphMapper.SetInputConnection(glyph.GetOutputPort())
    glyphMapper.SetScalarModeToUsePointFieldData()
    glyphMapper.SetColorModeToMapScalars()
    glyphMapper.ScalarVisibilityOn()
    glyphMapper.SelectColorArray('Elevation')  # Colour by scalars.
    glyphMapper.SetScalarRange(scalarRange)

    glyphActor = vtk.vtkActor()
    glyphActor.SetMapper(glyphMapper)
    glyphActor.RotateX(-45)
    glyphActor.RotateZ(45)

    # Add a scalar bar.
    scalarBar = vtk.vtkScalarBarActor()
    # scalarBar.SetLookupTable(lut)
    # Use this LUT if you want the highest value at the top.
    scalarBar.SetLookupTable(lutr)
    scalarBar.SetTitle('Elevation (m)')

    # ------------------------------------------------------------
    # Create the RenderWindow, Renderer and Interactor
    # ------------------------------------------------------------
    ren = vtk.vtkRenderer()
    renWin = vtk.vtkRenderWindow()
    iren = vtk.vtkRenderWindowInteractor()
    renWin.AddRenderer(ren)
    iren.SetRenderWindow(renWin)

    # add actors
    ren.AddViewProp(srcActor)
    ren.AddViewProp(edgeActor)
    ren.AddViewProp(glyphActor)
    ren.AddActor2D(scalarBar)

    ren.SetBackground(0.7, 0.8, 1.0)
    renWin.SetSize(800, 800)
    renWin.Render()
    ren.GetActiveCamera().Zoom(1.5)
    return iren


if __name__ == '__main__':
    if pychemia.HAS_VTK:
        import vtk

        # iren = DisplaySurface("PLANE")
        # iren = DisplaySurface("SPHERE")
        iren = DisplaySurface("PARAMETRIC_SURFACE")
        iren.Render()
        iren.Start()
        WritePMG(iren.GetRenderWindow().GetRenderers().GetFirstRenderer(),
                 "ElevationBandsWithGlyphs.png")
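The band-construction helper in the script above is pure Python (no VTK required), so its logic can be sanity-checked in isolation. A minimal standalone sketch, with the function body restated so it runs on its own:

```python
import math


def MakeBands(dR, numberOfBands, nearestInteger):
    """Return [min, midpoint, max] triples covering the range dR,
    as in the script above."""
    bands = []
    if dR[1] < dR[0] or numberOfBands <= 0:
        return bands
    x = list(dR)
    if nearestInteger:
        x[0] = math.floor(x[0])
        x[1] = math.ceil(x[1])
    dx = (x[1] - x[0]) / float(numberOfBands)
    b = [x[0], x[0] + dx / 2.0, x[0] + dx]
    for _ in range(numberOfBands):
        bands.append(b)
        b = [b[0] + dx, b[1] + dx, b[2] + dx]
    return bands


# Two bands over [0, 1]: each entry is [lower, midpoint, upper].
print(MakeBands([0.0, 1.0], 2, False))  # [[0.0, 0.25, 0.5], [0.5, 0.75, 1.0]]
```

Each band's midpoint is what the script later formats as an annotation label on the lookup table.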
Roll and Fold Boom Pays Off Indoors for Georgia Pumper | Schwing America Inc.

General contractor Chris R. Sheridan & Co., Macon, GA, and pumping contractor Cherokee Pumping Company, Stockbridge, GA, completed a challenging concrete project at the Boeing Aerospace facility in Macon in November 2002. The facility is scheduled to house two new Brotje® Integrated Panel Assembly Cell (IPAC) Riveters. The riveters, the largest models in the world, will be used in the fabrication of fuselage panels for the C-17 Globemaster III, Boeing's military transport aircraft. Sheridan was contracted to design and construct the foundations needed to support these massive machines.

Sheridan, a general contracting, construction management and design-build business since 1947, has a 12-year client-contractor relationship with industry giant Boeing. From the beginning, Tom Rogers, a representative for Sheridan, understood this particular project would be a challenge. A restricted ceiling and entry height posed a challenge for Cherokee's truck-mounted concrete pump and the ready mix trucks from LaFarge Ready Mix, Macon, GA.

Space for the concrete pump was tight. The X-Style outriggers set up in an economical 19.75 by 24.5 feet. With a low unfolding height of 25 feet, 4 inches, the 4-section roll-and-fold boom cleared ceiling beams and pumped the project in 5 ½ hours. Equipped with the patented Schwing Rock Valve™ and a Generation III pump kit, the 32 maintained smooth, strong concrete output throughout the pour. The low noise levels of these features were also valuable in the facility while Boeing employees continued to work.

According to Chuck E. Smith, Senior Sales Manager with LaFarge Ready Mix in Macon, GA, ready-mix design was also a special consideration. At around 4,000 psi, super-plasticizer was added to create a workable mix for finishers. "We had about a 45-minute window, and then it slumped tight," said Smith.

The final concrete foundations are two mass slabs. Each slab is supported by 44 caissons in a pit measuring 43 feet wide, 80 feet long, and 15 feet deep. More than 35,000 pounds of reinforcing steel and 505 cubic yards of concrete are incorporated into the two foundations.

With the Boeing project behind him, Bylsma has a true appreciation for Schwing booms. Cherokee bid for another project in December 2002, a 5-month series of indoor pours at the Honda manufacturing plant in Lincoln, Alabama. Project owners required big output numbers over an extended period. With a 25-foot ceiling height, bidding contractors had to come prepared with the most versatile equipment that could adapt to the job. Job specs prompted Bylsma and company to purchase a brand new Schwing S 31 HT the same month. The small 20'5" by 24'5" footprint and maneuverability provided an important advantage on the tight job site and convenient relocation. Offering the project owners the versatile 5-section telescopic boom with an unfolding height of 18'8" was instrumental in Cherokee being awarded the job.
Discussion in 'Introductions' started by Quiverskeep, Aug 10, 2010.

My name is Ashley - hmm, a "Say Hi" feature, nice addition. I live in Virginia, have eight children, a husband and a new baby, my wonderful Droid X. I had a Storm 1, but uhh put that baby up for adoption. I know, terrible parent. But that was a problemed uhh baby, to say the least. I'll miss it in a sad, slow, battery-pulling, buggy, no app memory sort of way. Anyway, Hi All... I already like you (you're Droid users)!

Wow! Eight kids fighting to play games on the little Droid, ya might need to get an extra one for yourself.
\section{Introduction}

Stellar yields are a crucial input for galactic chemical evolution. It is therefore important to update them whenever significant changes appear in stellar evolution models. Recent yield calculations at solar metallicity have been conducted by a few groups \citep{RHHW02,LC03,TNH96}. Over the last ten years, the development of the Geneva evolution code has allowed the study of the evolution of rotating stars until carbon burning. The models reproduce very well many observational features at various metallicities, like surface enrichments \citep{MM02n}, ratios between red and blue supergiants \citep{ROTVII} and the population of Wolf--Rayet (WR hereinafter) stars \citep{ROTX}. In \citet{psn04a}, we described the recent modifications made to the Geneva code and the evolution of our rotating models until silicon burning.

In this paper, the goal is to calculate stellar yields for a large initial mass range (12--60 $M_{\sun}$) for rotating stars. In Sect. 2, we briefly present the model physical ingredients and the calculations. In Sect. 3, we describe the method and the formulae used to derive the yields. In Sect. 4, we discuss the wind contribution to the yields. Then, in Sect. 5, we present our supernova (SN) yields of light elements calculated at the pre--supernova stage. In Sect. 6, we describe and analyse the total stellar yields (wind + SN) and compare our results with those found in the literature.

\section{Description of the stellar models}\label{phys}

The computer model used to calculate the stellar models is described in detail in \citet{psn04a}. Convective stability is determined by the Schwarzschild criterion. Convection is treated as a diffusive process from oxygen burning onwards. The overshooting parameter is 0.1 H$_{\rm{P}}$ for H and He--burning cores and 0 otherwise. On top of the meridional circulation and secular shear, an additional instability induced by rotation, dynamical shear, is introduced in the model.

The reaction rates are taken from the NACRE \citep{NACRE} compilation for the experimental rates and from the NACRE website (http://pntpm.ulb.ac.be/nacre.htm) for the theoretical ones.

Since mass loss rates are a key ingredient for the yields of massive stars, we recall here the prescriptions used. The changes of the mass loss rates, $\dot{M}$, with rotation are taken into account as explained in \citet{ROTVI}. As reference mass loss rates, we adopt the mass loss rates of \citet{Vink00,Vink01} who account for the occurrence of bi--stability limits which change the wind properties and mass loss rates. For the domain not covered by these authors we use the empirical law devised by \citet{Ja88}. Note that this empirical law, which presents a discontinuity in the mass flux near the Humphreys--Davidson limit, implicitly accounts for the mass loss rates of LBV stars. For the non--rotating models, since the empirical values for the mass loss rates are based on stars covering the whole range of rotational velocities, we must apply a reduction factor to the empirical rates to make them correspond to the non--rotating case. The same reduction factor was used as in \citet{ROTVII}. During the Wolf--Rayet phase we use the mass loss rates by \citet{NuLa00}. These mass loss rates, which account for the clumping effects in the winds, are smaller by a factor of 2--3 than the mass loss rates used in our previous non--rotating ``enhanced mass loss rate'' stellar grids \citep{MM94}. Wind anisotropy \citep[described in][]{ROTVI} was only taken into account for $M \geqslant 40\,M_{\sun}$ since its effects are only important for very massive stars.

The initial composition of our models is given in Table \ref{inic}. For a given metallicity $Z$ (in mass fraction), the initial helium mass fraction $Y$ is given by the relation $Y= Y_p + \Delta Y/\Delta Z \cdot Z$, where $Y_p$ is the primordial helium abundance and $\Delta Y/\Delta Z$ the slope of the helium--to--metal enrichment law.
We used the same values as in \citet{ROTVII}, {\it i.e.} $Y_p$ = 0.23 and $\Delta Y/\Delta Z$ = 2.25. For the solar metallicity, $Z$ = 0.02, we thus have $X$ = 0.705 and $Y$ = 0.275. For the mixture of the heavy elements, we adopted the same mixture as the one used to compute the opacity tables for solar composition. For elements heavier than Mg, we used the values from \citet{AG89}.

\begin{table}
\caption{Initial abundance (in mass fraction) of the chemical elements.}
\begin{tabular}{l r l r}
\hline
\hline
Element & Mass fraction & Element & Mass fraction \\
\hline
\\
$^{1}$H   & 0.705     & $^{24}$Mg & 5.861D-4  \\
$^{3}$He  & 2.915D-5  & $^{25}$Mg & 7.70D-5   \\
$^{4}$He  & 0.275     & $^{26}$Mg & 8.84D-5   \\
$^{12}$C  & 3.4245D-3 & $^{28}$Si & 6.5301D-4 \\
$^{13}$C  & 4.12D-5   & $^{32}$S  & 3.9581D-4 \\
$^{14}$N  & 1.0589D-3 & $^{36}$Ar & 7.7402D-5 \\
$^{15}$N  & 4.1D-6    & $^{40}$Ca & 5.9898D-5 \\
$^{16}$O  & 9.6195D-3 & $^{44}$Ti & 0         \\
$^{17}$O  & 3.9D-6    & $^{48}$Cr & 0         \\
$^{18}$O  & 2.21D-5   & $^{52}$Fe & 0         \\
$^{20}$Ne & 1.8222D-3 & $^{56}$Ni & 0         \\
$^{22}$Ne & 1.466D-4  &           &           \\
\hline
\end{tabular}
\label{inic}
\end{table}

We calculated stellar models with initial masses of 12, 15, 20, 25, 40 and 60 $M_{\sun}$ at solar metallicity, with initial rotation velocities of 0 and 300 km\,s$^{-1}$. The value of the initial velocity corresponds to an average velocity of about 220\,km\,s$^{-1}$ on the Main Sequence (MS) which is very close to the average observed value \citep[see for instance][]{FU82}. The calculations start at the ZAMS. Except for the 12 $M_{\sun}$ models, the rotating models were computed until the end of core silicon (Si) burning and their non--rotating counterparts until the end of shell Si--burning. For the non--rotating 12 $M_{\sun}$ star, neon (Ne) burning starts at a fraction of a solar mass away from the centre but does not reach the centre and the calculations stop there. For the rotating 12 $M_{\sun}$ star, the model ends after oxygen (O) burning.
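For clarity, the adopted solar values follow directly from the enrichment law quoted above:

```latex
\[
Y \;=\; Y_p + \frac{\Delta Y}{\Delta Z}\,Z
  \;=\; 0.23 + 2.25 \times 0.02 \;=\; 0.275 ,
\qquad
X \;=\; 1 - Y - Z \;=\; 1 - 0.275 - 0.02 \;=\; 0.705 .
\]
```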
The evolution of the models is described in \citet{psn04a}.

\section{Yield calculations}\label{def}

In this paper, we calculated separately the yield contributions from stellar winds and the SN explosion. The wind contribution from a star of initial mass, $m$, to the stellar yield of an element $i$ is:
\begin{equation}\label{wdef}
mp^{\rm{wind}}_{im}= \int_{0}^{\tau(m)} \dot{M}(m,t)[X_{i}^S(m,t)-X_{i}^{0}]\,dt
\end{equation}
where $\tau(m)$ is the final age of the star, $\dot{M}(m,t)$ the mass loss rate when the age of the star is equal to $t$, $X_{i}^S(m,t)$ the surface abundance in mass fraction of element $i$ and $X_{i}^{0}$ its initial mass fraction (see Table \ref{inic}). Mass loss occurs mainly during hydrogen (H) and helium (He) burnings. Indeed, the advanced stages of the hydrostatic evolution are so short in time that only a negligible amount of mass is lost during these phases.

In order to calculate the SN explosion contribution to stellar yields of all the chemical elements, one needs to model the complete evolution of the star from the ZAMS up to and including the SN explosion. However, elements lighter than neon are marginally modified by explosive nucleosynthesis \citep{CL03,TNH96} and are mainly determined by the hydrostatic evolution, while elements between neon and silicon are produced both hydrostatically and explosively. In this work, we calculate SN yields at the end of core Si--burning. We therefore present these yields as pre--SN yields. The pre--SN contribution from a star of initial mass, $m$, to the stellar yield of an element $i$ is:
\begin{equation}\label{sndef}
mp^{\rm{pre-SN}}_{im}= \int_{m(\rm{rem})}^{m(\tau)} [X_{i}(m_r)-X_{i}^{0}]\,dm_r
\end{equation}
where $m(\rm{rem})$ is the remnant mass, $m(\tau)$ the final stellar mass, $X_{i}^{0}$ the initial abundance in mass fraction of element $i$ and $X_{i}(m_r)$ the final abundance in mass fraction at the lagrangian mass coordinate, $m_r$.

The remnant mass in Eq. \ref{sndef} corresponds to the final baryonic remnant mass that includes fallback that may occur after the SN explosion. The exact determination of the remnant mass would again require the simulation of the core collapse and SN explosion, which is not within the scope of this paper. Even if we had done the simulation, the remnant mass would still be a free parameter because most explosion models still struggle to reproduce explosions \citep[and references therein]{J03}. Nevertheless, the latest multi--D simulations \citep{J04} show that low modes of the convective instability may help produce an explosion. When comparing models to observations, the remnant mass is usually chosen so that the amount of radioactive $^{56}$Ni ejected by the star corresponds to the value determined from the observed light curve. So far, mostly 1D models are used for explosive nucleosynthesis but a few groups have developed multi--D models \citep[see][]{TKM04,MN03}. Multi--D effects like mixing and asymmetry might play a role in determining the mass cut if some iron group elements are mixed with oxygen-- or silicon--rich layers.

\begin{table}
\caption{Initial mass (column 1) and initial rotation velocity [km\,s$^{-1}$] (2), final mass (3), masses of the helium (4) and carbon--oxygen (5) cores, the remnant mass (6) and lifetimes [Myr] (7) for solar metallicity models. All masses are in solar mass units. An "A" in the second column means that wind anisotropy was taken into account.}
\begin{tabular}{r r r r r r r}
\hline
\hline
$M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & $M_{\rm{final}}$ & $M_{\alpha}$ & $M_{\rm{CO}}$ & $M_{\rm{rem}}$ & $\tau_{\rm{life}}$ \\
\hline
12 & 0    & 11.524 & 3.141  & 1.803  & 1.342 & 18.01 \\
12 & 300  & 10.199 & 3.877  & 2.258  & 1.462 & 21.89 \\
15 & 0    & 13.232 & 4.211  & 2.441  & 1.510 & 12.84 \\
15 & 300  & 10.316 & 5.677  & 3.756  & 1.849 & 15.64 \\
20 & 0    & 15.694 & 6.265  & 4.134  & 1.945 & 8.93  \\
20 & 300  & 8.763  & 8.654  & 6.590  & 2.566 & 10.96 \\
25 & 0    & 16.002 & 8.498  & 6.272  & 2.486 & 7.32  \\
25 & 300  & 10.042 & 10.042 & 8.630  & 3.058 & 8.67  \\
40 & 0    & 13.967 & 13.967 & 12.699 & 4.021 & 5.05  \\
40 & 300A & 12.646 & 12.646 & 11.989 & 3.853 & 5.97  \\
60 & 0    & 14.524 & 14.524 & 13.891 & 4.303 & 4.02  \\
60 & 300A & 14.574 & 14.574 & 13.955 & 4.323 & 4.69  \\
\hline
\end{tabular}
\label{table1}
\end{table}

In this work, we used the relation between $M_{\rm{CO}}$ and the remnant mass described in \citet{AM92}. The resulting remnant mass as well as the major characteristics of the models are given in Table \ref{table1}. The determination of $M_{\alpha}$ and $M_{\rm{CO}}$ is described in \citet{psn04a}.

We do not follow $^{22}$Ne after He--burning and have to apply a special criterion to calculate its pre--SN yield. During He--burning, $^{22}$Ne is produced by $^{18}\rm{O}(\alpha,\gamma)$ and destroyed by $\alpha$--captures, which create $^{25}$Mg or $^{26}$Mg. $^{22}$Ne is totally destroyed by C--burning. We therefore consider the $^{22}$Ne abundance to be null inside of the C--burning shell. Numerically, this is done when the mass fraction of $^{4}$He is less than $10^{-4}$ and that of $^{12}$C is less than 0.1.
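In practice, the wind and pre--SN yield integrals defined in this section are evaluated as discrete sums over the stored time steps and mass shells of a model. A schematic illustration in Python (the arrays below are illustrative placeholders, not output of the Geneva code):

```python
def wind_yield(dt, mdot, x_surf, x0):
    """Discretised wind contribution: sum over time steps of the
    mass-loss rate times the surface over-abundance of the element."""
    return sum(d * m * (x - x0) for d, m, x in zip(dt, mdot, x_surf))


def presn_yield(dm, x_final, x0):
    """Discretised pre-SN contribution: sum over mass shells outside
    the remnant of the final over-abundance of the element."""
    return sum(d * (x - x0) for d, x in zip(dm, x_final))


# Toy example: two time steps of a constant wind enriched in helium,
# and two ejected shells (all numbers illustrative).
w = wind_yield(dt=[1.0e5, 1.0e5], mdot=[1.0e-6, 1.0e-6],
               x_surf=[0.30, 0.32], x0=0.275)
s = presn_yield(dm=[1.0, 2.0], x_final=[0.50, 0.10], x0=0.20)
print(w, s)
```

Note that shells whose final abundance falls below the initial value contribute negatively, which is why some wind yields in Table \ref{tw} are negative.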
Once both the wind and pre--SN contributions are calculated, the total stellar yield of an element $i$ from a star of initial mass, $m$, is:
\begin{equation}\label{ydef}
mp^{\rm{tot}}_{im} = mp^{\rm{pre-SN}}_{im} + mp^{\rm{wind}}_{im}
\end{equation}
$mp^{\rm{tot}}_{im}$ corresponds to the amount of element $i$ newly synthesised and ejected by a star of initial mass $m$ \citep[see][]{AM92}. Other authors give their results in ejected masses, $EM$:
\begin{equation}\label{emdef}
EM_{im}= \int_{0}^{\tau(m)} \dot{M}\,X_{i}^S(m,t)\,dt + \int_{m(\rm{rem})}^{m(\tau)}\,X_{i}(m_r)\,dm_r
\end{equation}
and production factors (PF) \citep[see][]{WLW95}:
\begin{equation}\label{fdef}
f_{im}= EM_{im}/[X_{i}^{0}(m-m(\rm{rem}))]
\end{equation}
We also give our final results as ejected masses in order to compare our results with the recent literature.

\section{Stellar wind contribution}\label{WY}

\begin{table*}
\caption{Initial mass and velocity and {\bf stellar wind contribution to the yields} ($mp^{\rm{wind}}_{im}$). All masses and yields are in solar mass units and velocities are in km\,s$^{-1}$. "A" in column 1 means wind anisotropy has been included in the model. Z is the total metal content and is defined by: Z$=1-X_{\rm{^1 H}}-X_{\rm{^3 He}}-X_{\rm{^4 He}}$.}
\begin{tabular}{l r r r r r r r r r r r}
\hline
\hline
\\
$M_{\rm{ini}}$, $\upsilon_{\rm{ini}}$ & $^{3}$He & $^4$He & $^{12}$C & $^{13}$C & $^{14}$N & $^{16}$O & $^{17}$O & $^{18}$O & $^{20}$Ne & $^{22}$Ne & Z \\
\hline
12, 0    & -2.49E-6 & 1.55E-2 & -4.80E-4 & 2.53E-5 & 9.39E-4 & -4.62E-4 & 1.43E-6  & -2.57E-6 & -9.51E-8 & 0       & 0    \\
12, 300  & -1.59E-5 & 1.20E-1 & -2.54E-3 & 2.09E-4 & 5.18E-3 & -2.78E-3 & 7.90E-6  & -1.36E-5 & -3.60E-7 & 0       & 0    \\
15, 0    & -4.15E-6 & 9.49E-3 & -8.01E-4 & 1.54E-4 & 1.02E-3 & -2.87E-4 & 9.20E-7  & -2.92E-6 & -3.54E-7 & 0       & 0    \\
15, 300  & -5.23E-5 & 3.25E-1 & -6.73E-3 & 5.78E-4 & 1.39E-2 & -7.63E-3 & 1.63E-5  & -3.64E-5 & -9.37E-7 & 0       & 0    \\
20, 0    & -1.06E-5 & 5.21E-2 & -1.21E-3 & 1.93E-4 & 2.46E-3 & -1.43E-3 & 1.18E-6  & -6.28E-6 & -8.61E-7 & 0       & 0    \\
20, 300  & -1.56E-4 & 1.27E+0 & -1.73E-2 & 1.22E-3 & 4.30E-2 & -2.75E-2 & 2.03E-5  & -1.01E-4 & -2.25E-6 & 0       & 0    \\
25, 0    & -6.02E-5 & 3.95E-1 & -6.39E-3 & 4.38E-4 & 1.64E-2 & -1.07E-2 & 3.39E-6  & -3.88E-5 & -1.80E-6 & 0       & 0    \\
25, 300  & -2.47E-4 & 2.97E+0 & -2.52E-2 & 1.22E-3 & 7.94E-2 & -5.48E-2 & 1.03E-5  & -1.68E-4 & -2.99E-6 & 2.72E-4 & 0    \\
40, 0    & -4.16E-4 & 4.65E+0 & -4.48E-2 & 7.11E-4 & 1.45E-1 & -1.07E-1 & -1.10E-5 & -3.04E-4 & -5.18E-6 & 0       & 0    \\
40, 300A & -5.83E-4 & 7.97E+0 & 1.60E+0  & 5.42E-4 & 1.73E-1 & 3.34E-1  & -3.93E-5 & -4.11E-4 & 1.76E-6  & 7.35E-2 & 2.18 \\
60, 0    & -8.33E-4 & 9.41E+0 & 2.52E+0  & 4.15E-4 & 2.37E-1 & 4.45E-1  & -6.24E-5 & -6.32E-4 & 2.46E-6  & 1.28E-1 & 3.33 \\
60, 300A & -1.07E-3 & 1.52E+1 & 3.01E+0  & 3.12E-4 & 3.09E-1 & 3.99E-1  & -9.85E-5 & -8.10E-4 & 3.39E-6  & 1.67E-1 & 3.89 \\
\hline
\end{tabular}
\label{tw}
\end{table*}

Before we discuss the wind contribution to the stellar yields, it is instructive to look at the final masses given in Table \ref{table1} \citep[see also Fig. 16 in][]{psn04a}.
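The bookkeeping relations between yields, ejected masses and production factors are simple algebra once the total ejected mass $m - m(\rm{rem})$ is known. A schematic Python sketch (the numbers below are illustrative; the pre--SN value in particular is a made-up placeholder, not taken from the tables):

```python
def total_yield(wind, presn):
    """Total stellar yield: wind plus pre-SN contributions."""
    return wind + presn


def ejected_mass(yield_tot, x0, m_ini, m_rem):
    """Ejected mass of an element: the newly synthesised yield plus
    the amount initially present in everything the star ejects."""
    return yield_tot + x0 * (m_ini - m_rem)


def production_factor(em, x0, m_ini, m_rem):
    """Production factor: ejected mass normalised to the amount of the
    element initially contained in the ejected material."""
    return em / (x0 * (m_ini - m_rem))


# Illustrative numbers loosely inspired by a 25 Msun rotating model:
y = total_yield(wind=2.97, presn=1.5)   # 4He, toy pre-SN value
em = ejected_mass(y, x0=0.275, m_ini=25.0, m_rem=3.058)
f = production_factor(em, x0=0.275, m_ini=25.0, m_rem=3.058)
print(em, f)
```

A production factor above 1 marks a net producer of the element, below 1 a net destroyer.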
We see that, below 25 $M_{\sun}$, the rotating models lose significantly more mass. This is due to the fact that rotation enhances mass loss and favours the evolution into the red supergiant phase at an early stage during the core He--burning phase \citep[see for example][]{MM00}. For WR stars ($M \gtrsim 30 \,M_{\sun}$), the new prescription by \citet{NuLa00}, including the effects of clumping in the winds, results in mass loss rates that are a factor of two to three smaller than the rates from \citet{La89}. The final masses of very massive stars ($\gtrsim 60\,M_{\sun}$) are therefore never small enough to produce neutron stars. We therefore expect the same outcome (BH formation) for the very massive stars as for the stars with masses around 40 $M_{\sun}$ at solar metallicity. The wind contribution to the stellar yields is presented in Table \ref{tw}. The H--burning products (main elements are $^4$He and $^{14}$N) are ejected by stellar winds in the entire massive star range. Nevertheless, in absolute value, the quantities ejected by very massive stars ($M \gtrsim 40 \,M_{\sun}$) are much larger. These yields are larger in rotating models. This is due to both the increase of mixing and mass loss by rotation. For $M \lesssim 40 \,M_{\sun}$, the dominant effect is the diffusion of H--burning products in the envelope of the star due to rotational mixing. For more massive stars ($M \gtrsim 40 \,M_{\sun}$), the mass loss effect is dominant. The He--burning products are produced deeper in the star. They are therefore ejected only by WR star winds. Since the new mass loss rates are reduced by a factor of two to three (see Sect. \ref{phys}), the yields from the winds in $^{12}$C are much smaller for the present WR stellar models compared to the results obtained in \citet{AM92}. As is shown below, the pre--SN contribution to the yields of $^{12}$C are larger in the present calculation and, as a matter of fact, the new $^{12}$C total yields are larger than in \citet{AM92}. 
In general, the yields for rotating stars are larger than for non--rotating ones due to the extra mass loss and mixing. For very massive stars ($M \gtrsim 60 \,M_{\sun}$), the situation is reversed for He--burning products because of the different mass loss history. Indeed, rotating stars enter into the WR regime in the course of the main sequence (MS). In particular, the long time spent in the WNL phase \citep[WN star showing hydrogen at its surface,][]{ROTX} results in the ejection of large amounts of H--burning products. Therefore, the star enters the WC phase with a smaller total mass and fewer He--burning products are ejected by winds (the mass loss being proportional to the actual mass of the star). Since $^{16}$O is produced even deeper in the star, the present contribution by winds to this yield is even smaller. As $^{12}$C constitutes the largest fraction of ejected metals, the conclusion for the wind contribution to the total metallic yield, Z, is the same as for $^{12}$C. \begin{figure*}[!tbp] \centering \includegraphics[width=7.5cm]{1554fg01.eps}\includegraphics[width=7.5cm]{1554fg02.eps} \includegraphics[width=7.5cm]{1554fg03.eps}\includegraphics[width=7.5cm]{1554fg04.eps} \includegraphics[width=7.5cm]{1554fg05.eps}\includegraphics[width=7.5cm]{1554fg06.eps} \caption{Abundance profiles at the end of core silicon burning for the non--rotating (left) and rotating (right) 15 (top), 25 (middle) and 60 (bottom) $M_{\sun}$ models.} \label{abm152560} \end{figure*} \section{Pre--SN contribution} \begin{table*} \caption{ Initial mass and velocity and {\bf pre--SN contribution to the yields} ($mp^{\rm{pre-SN}}_{im}$) of solar metallicity models. All masses and yields are in solar mass units and velocities are in km\,s$^{-1}$. "A" in column 2 means wind anisotropy has been included in the model. 
Z is the total metal content and is defined by: Z$=1-X_{\rm{^1 H}}-X_{\rm{^3 He}}- X_{\rm{^4 He}}$.} \begin{tabular}{l l r r r r r r r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & $^{3}$He & $^4$He & $^{12}$C & $^{13}$C & $^{14}$N & $^{16}$O & $^{17}$O & $^{18}$O & Z \\ \hline 12 & 0 & -1.22E-4 & 1.19E+0 & 8.74E-2 & 5.83E-4 & 3.59E-2 & 2.07E-1 & 3.45E-5 & 1.62E-4 & 4.57E-1 \\ 12 & 300 & -1.45E-4 & 1.48E+0 & 1.66E-1 & 8.72E-4 & 3.52E-2 & 3.94E-1 & 2.68E-5 & 1.69E-3 & 7.97E-1 \\ 15 & 0 & -1.87E-4 & 1.67E+0 & 1.54E-1 & 5.51E-4 & 4.23E-2 & 4.34E-1 & 1.51E-5 & 2.76E-3 & 9.19E-1 \\ 15 & 300 & -1.75E-4 & 1.24E+0 & 3.68E-1 & 4.90E-4 & 2.13E-2 & 1.02E+0 & 3.87E-6 & 3.76E-3 & 1.92E+0 \\ 20 & 0 & -2.88E-4 & 2.03E+0 & 2.17E-1 & 4.39E-4 & 4.72E-2 & 1.20E+0 & 7.76E-6 & 3.92E-3 & 2.17E+0 \\ 20 & 300 & -1.81E-4 & 3.47E-1 & 4.50E-1 & -2.09E-4 & 2.81E-4 & 2.60E+0 & -2.30E-5 & -9.54E-5 & 3.98E+0 \\ 25 & 0 & -3.74E-4 & 2.18E+0 & 3.74E-1 & -2.51E-5 & 5.76E-2 & 2.18E+0 & -6.73E-6 & -1.74E-4 & 3.74E+0 \\ 25 & 300 & -2.04E-4 & -8.31E-1 & 7.62E-1 & -2.65E-4 & -5.60E-3 & 3.61E+0 & -2.72E-5 & 4.34E-4 & 5.75E+0 \\ 40 & 0 & -2.90E-4 & -1.64E+0 & 6.94E-1 & -4.10E-4 & -1.05E-2 & 6.23E+0 & -3.88E-5 & -2.20E-4 & 8.66E+0 \\ 40 & 300A & -2.56E-4 & -2.08E+0 & 1.51E+0 & -3.62E-4 & -9.31E-3 & 5.03E+0 & -3.43E-5 & -1.94E-4 & 8.28E+0 \\ 60 & 0 & -2.98E-4 & -2.45E+0 & 1.42E+0 & -4.21E-4 & -1.08E-2 & 6.03E+0 & -3.99E-5 & -2.26E-4 & 9.66E+0 \\ 60 & 300A & -2.99E-4 & -2.40E+0 & 1.49E+0 & -4.22E-4 & -1.09E-2 & 6.01E+0 & -4.00E-5 & -2.27E-4 & 9.63E+0 \\ \hline \end{tabular} \label{tsn} \end{table*} \begin{table} \caption{{\bf Pre--SN contribution to the yields} ($mp^{\rm{pre-SN}}_{im}$) of solar metallicity models. Continuation of Table \ref{tsn}. Note that $^{20}$Ne yields are an upper limit and may be reduced by Ne--explosive burning and that $^{24}$Mg yields may also be modified by neon and oxygen explosive burnings. See discussion in Sect. 
\ref{yao}.} \begin{tabular}{l l r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & ($^{20}$Ne) & $^{22}$Ne & ($^{24}$Mg) \\ \hline 12 & 0 & 1.05E-1 & 6.80E-3 & 6.75E-3 \\ 12 & 300 & 1.58E-1 & 1.55E-2 & 1.36E-2 \\ 15 & 0 & 1.10E-1 & 1.60E-2 & 6.06E-2 \\ 15 & 300 & 2.26E-1 & 3.33E-2 & 4.27E-2 \\ 20 & 0 & 4.83E-1 & 3.64E-2 & 1.28E-1 \\ 20 & 300 & 6.80E-1 & 4.26E-2 & 1.19E-1 \\ 25 & 0 & 8.41E-1 & 5.22E-2 & 1.38E-1 \\ 25 & 300 & 1.08E+0 & 2.24E-2 & 1.48E-1 \\ 40 & 0 & 1.36E+0 & 5.58E-3 & 1.52E-1 \\ 40 & 300A & 1.42E+0 & 6.93E-3 & 1.20E-1 \\ 60 & 0 & 1.81E+0 & 6.58E-3 & 1.73E-1 \\ 60 & 300A & 1.75E+0 & 7.60E-3 & 1.44E-1 \\ \hline \end{tabular} \label{tsnb} \end{table} \begin{table*} \caption{ Initial mass and velocity and {\bf total stellar yields} ($mp^{\rm{pre-SN}}_{im} + mp^{\rm{wind}}_{im}$) of solar metallicity models. All masses and yields are in solar mass units and velocities are in km\,s$^{-1}$. "A" means wind anisotropy has been included in the model. Z is the total metal content and is defined by: Z$=1-X_{\rm{^1 H}}-X_{\rm{^3 He}}- X_{\rm{^4 He}}$. These are the yields to be used for chemical evolution models using Eq. 
2 from \citep{AM92}.} \begin{tabular}{l l r r r r r r r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & $^{3}$He & $^4$He & $^{12}$C & $^{13}$C & $^{14}$N & $^{16}$O & $^{17}$O & $^{18}$O & Z \\ \hline 12 & 0 & -1.24E-4 & 1.20E+0 & 8.69E-2 & 6.08E-4 & 3.68E-2 & 2.07E-1 & 3.59E-5 & 1.60E-4 & 4.57E-1 \\ 12 & 300 & -1.61E-4 & 1.60E+0 & 1.63E-1 & 1.08E-3 & 4.04E-2 & 3.92E-1 & 3.47E-5 & 1.67E-3 & 7.97E-1 \\ 15 & 0 & -1.91E-4 & 1.67E+0 & 1.53E-1 & 7.05E-4 & 4.33E-2 & 4.34E-1 & 1.60E-5 & 2.76E-3 & 9.19E-1 \\ 15 & 300 & -2.27E-4 & 1.57E+0 & 3.61E-1 & 1.07E-3 & 3.52E-2 & 1.01E+0 & 2.02E-5 & 3.72E-3 & 1.92E+0 \\ 20 & 0 & -2.98E-4 & 2.08E+0 & 2.16E-1 & 6.32E-4 & 4.96E-2 & 1.20E+0 & 8.94E-6 & 3.91E-3 & 2.17E+0 \\ 20 & 300 & -3.36E-4 & 1.62E+0 & 4.33E-1 & 1.01E-3 & 4.33E-2 & 2.57E+0 & -2.75E-6 & -1.96E-4 & 3.98E+0 \\ 25 & 0 & -4.34E-4 & 2.57E+0 & 3.68E-1 & 4.13E-4 & 7.40E-2 & 2.17E+0 & -3.34E-6 & -2.12E-4 & 3.74E+0 \\ 25 & 300 & -4.51E-4 & 2.14E+0 & 7.37E-1 & 9.57E-4 & 7.38E-2 & 3.55E+0 & -1.69E-5 & 2.66E-4 & 5.75E+0 \\ 40 & 0 & -7.06E-4 & 3.01E+0 & 6.49E-1 & 3.01E-4 & 1.35E-1 & 6.12E+0 & -4.98E-5 & -5.24E-4 & 8.65E+0 \\ 40 & 300A & -8.39E-4 & 5.89E+0 & 3.11E+0 & 1.80E-4 & 1.63E-1 & 5.36E+0 & -7.35E-5 & -6.05E-4 & 1.05E+1 \\ 60 & 0 & -1.13E-3 & 6.96E+0 & 3.94E+0 & -6.23E-6 & 2.26E-1 & 6.47E+0 & -1.02E-4 & -8.58E-4 & 1.30E+1 \\ 60 & 300A & -1.37E-3 & 1.28E+1 & 4.51E+0 & -1.11E-4 & 2.98E-1 & 6.41E+0 & -1.38E-4 & -1.04E-3 & 1.35E+1 \\ \hline \end{tabular} \label{ytot} \end{table*} As said above, our pre--SN yields, $mp^{\rm{pre-SN}}_{im}$, were calculated at the end of core Si--burning using the remnant mass, $M_{\rm{rem}}$, given in Table \ref{table1}. We therefore concentrate on yields of light elements which depend mainly on the evolution prior to core Si--burning. Before discussing the pre--SN yields, it is interesting to look at the abundance profiles at the pre--SN stage presented in Fig. 
\ref{abm152560} and at the sizes of the helium and carbon-oxygen cores given in Table \ref{table1}. The core sizes are clearly increased due to rotational mixing. We also see that as the initial mass of the model increases, the core masses get closer and closer to the final mass of the star. $M_{\alpha}$ reaches the final mass of the star when $M_{\rm{ini}} \gtrsim 40\,M_\odot$ for non--rotating models and when $M_{\rm{ini}} \gtrsim 25\,M_\odot$ for rotating models. $M_{\rm{CO}}$ becomes close to the final mass for both rotating and non--rotating models for $M_{\rm{ini}} \gtrsim 40\,M_\odot$. The pre--SN yields are presented in Tables \ref{tsn} and \ref{tsnb}. One surprising result in Table \ref{tsn} is the negative pre--SN yields of $^4$He (and of $^{14}$N) for WR stars. This is simply due to the definition of stellar yields, in which the initial composition is subtracted from the final one. As said above, $M_{\rm{CO}}$ becomes close to the final mass for $M_{\rm{ini}} \gtrsim 40\,M_\odot$. Since the CO core is free of helium, it is then understandable that the pre--SN yields of $^4$He for WR stars are negative. \subsection{Carbon, oxygen and metallic yields}\label{yov} If mixing is dominant ($M \lesssim 30\,M_{\sun}$), the larger the initial mass, the larger the metallic yields (because the various cores become larger). Rotation increases the core sizes by extra mixing and therefore the total metallic yields are larger for rotating models. Overshooting also plays a role in the core sizes. The larger the overshooting parameter, the larger the cores and the larger the yields. If we compare our rotating and non--rotating models, we see that the pre--SN total metallic yields, and the $^{12}$C and $^{16}$O yields in particular, are enhanced by rotation by a factor of 1.5--2.5 below 30\,$M_{\sun}$. For very massive stars ($M \gtrsim 30\,M_{\sun}$), the higher the mass loss, the smaller the final mass and the total metallic yields. 
The same explanations work well in general for carbon and oxygen. \section{Total stellar yields} \begin{figure*}[!tbp] \centering \includegraphics[width=8.8cm]{1554fg07.eps}\includegraphics[width=8.8cm]{1554fg08.eps} \caption{Stellar yields divided by the initial mass, $p^{\rm{tot}}_{im}$, as a function of the initial mass for the non--rotating (left) and rotating (right) models at solar metallicity. The different total yields (divided by $m$) are shown as piled up on top of each other and are not overlapping. $^4$He yields are delimited by the top solid and long dashed lines (top shaded area), $^{12}$C yields by the long dashed and short--long dashed lines, $^{16}$O yields by the short--long dashed and dotted--dashed lines and the rest of metals by the dotted--dashed and bottom solid lines. The bottom solid line also represents the mass of the remnant ($M^{\rm{int}}_{\rm{rem}}/m$). The corresponding SN explosion type is also given. The wind contributions are superimposed on these total yields for the same elements between their bottom limit and the dotted line above it. Dotted areas help quantify the fraction of the total yields due to winds. Note that for $^{4}$He, the total yield is smaller than the wind yield due to negative SN yields (see text). Preliminary results for masses equal to 9, 85 and 120 $M_\odot$ were used in this diagram \citep[see][]{thesis}.} \label{yres} \end{figure*} \begin{figure*}[!tbp] \centering \includegraphics[width=8.8cm]{1554fg09.eps}\includegraphics[width=8.8cm]{1554fg10.eps} \caption{Product of the stellar yields, $mp^{\rm{tot}}_{im}$, by Salpeter's IMF (multiplied by an arbitrary constant: 1000 x $M^{-2.35}$), as a function of the initial mass for the non--rotating (left) and rotating (right) models at solar metallicity. The different shaded areas correspond from top to bottom to $mp^{\rm{tot}}_{im}$ x 1000 x $M^{-2.35}$ for $^{4}$He, $^{12}$C, $^{16}$O and the rest of the heavy elements. 
The dotted areas show for $^{4}$He, $^{12}$C and $^{16}$O the wind contribution. Preliminary results for masses equal to 9, 85 and 120 $M_\odot$ were used in this diagram \citep[see][]{thesis}.} \label{yimf} \end{figure*} \subsection{Comparison between the wind and pre--SN contributions} The total stellar yields, $mp^{\rm{tot}}_{im}=mp^{\rm{pre-SN}}_{im} + mp^{\rm{wind}}_{im}$ \citep[to be used for chemical evolution models using Eq. 2 from][]{AM92}, are presented in Tables \ref{ytot} and \ref{ytotb}. What is the relative importance of the wind and pre--SN contributions? Figure \ref{yres} displays the total stellar yields divided by the initial mass of the star, $p^{\rm{tot}}_{im}$, as a function of its initial mass, $m$, for the non--rotating (left) and rotating (right) models. The different shaded areas correspond from top to bottom to $p^{\rm{tot}}_{im}$ for $^{4}$He, $^{12}$C, $^{16}$O and the rest of the heavy elements. The fraction of the star locked in the remnant as well as the expected explosion type are shown at the bottom. The dotted areas show the wind contribution for $^{4}$He, $^{12}$C and $^{16}$O. For $^4$He (and other H--burning products like $^{14}$N), the wind contribution increases with mass and dominates for $M \gtrsim 22 M_{\sun}$ for rotating stars and $M \gtrsim 35 M_{\sun}$ for non--rotating stars, i.e. for the stars which enter the WR stage. As said earlier, for very massive stars, the SN contribution is negative and this is why $p^{\rm{tot}}_{^4\rm{He} m}$ is smaller than $p^{\rm{wind}}_{^4\rm{He} m}$. In order to eject He--burning products, a star must not only become a WR star but must also become a WC star. Therefore for $^{12}$C, the wind contributions only start to be significant above the following approximate mass limits: 30 and 45 $M_{\sun}$ for rotating and non--rotating models respectively. Above these mass limits, the contributions from the wind and the pre--SN are of similar importance. 
Since at solar metallicity, no WO star is produced \citep{ROTXI}, for $^{16}$O, as for heavier elements, the wind contribution remains very small. \subsection{Comparison between rotating and non--rotating models} For H--burning products, the yields of the rotating models are usually higher than those of non--rotating models. This is due to larger cores and larger mass loss. Nevertheless, between about 15 and 25 $M_{\sun}$, the rotating yields are smaller. This is due to the fact that the winds do not expel many H--burning products yet and more of these products are burnt later in the pre--supernova evolution (giving negative SN yields). Above 40 $M_{\sun}$, rotation clearly increases the yields of $^4$He. Concerning He--burning products, below 30 $M_{\sun}$, most of the $^{12}$C comes from the pre--SN contribution. In this mass range, rotating models, having larger cores, also have larger yields (factor 1.5--2.5). We notice a similar dependence on the initial mass for the yields of non--rotating models as for the yields of rotating models, but shifted to higher masses. Above 30 $M_{\sun}$, where mass loss dominates, the yields from the rotating models are closer to those of the non--rotating models. The situation for $^{16}$O and metallic yields is similar to carbon. Therefore {\bf$^{12}$C, $^{16}$O and the total metallic yields are larger for our rotating models compared to our non--rotating ones by a factor 1.5--2.5 below 30 ${\mathbf M_\odot}$}. Figure \ref{yimf} presents the stellar yields convolved with the Salpeter initial mass function (IMF) ($dN/dM\propto M^{-2.35}$). This reduces the importance of the very massive stars. Nevertheless, the differences between rotating and non--rotating models remain significant, especially around $20\,M_\odot$. \begin{table} \caption{{\bf Total stellar yields} ($mp^{\rm{pre-SN}}_{im} + mp^{\rm{wind}}_{im}$) of solar metallicity models. Continuation of Table \ref{ytot}. 
Note that $^{20}$Ne yields are an upper limit and may be reduced by Ne--explosive burning and that $^{24}$Mg yields can also be modified by neon and oxygen explosive burnings. See discussion in Sect. \ref{yao}.} \begin{tabular}{l l r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & ($^{20}$Ne) & $^{22}$Ne & ($^{24}$Mg) \\ \hline 12 & 0 & 1.05E-1 & 6.80E-3 & 6.75E-3 \\ 12 & 300 & 1.58E-1 & 1.55E-2 & 1.36E-2 \\ 15 & 0 & 1.10E-1 & 1.60E-2 & 6.06E-2 \\ 15 & 300 & 2.26E-1 & 3.33E-2 & 4.27E-2 \\ 20 & 0 & 4.83E-1 & 3.64E-2 & 1.28E-1 \\ 20 & 300 & 6.80E-1 & 4.26E-2 & 1.19E-1 \\ 25 & 0 & 8.41E-1 & 5.22E-2 & 1.38E-1 \\ 25 & 300 & 1.08E+0 & 2.26E-2 & 1.48E-1 \\ 40 & 0 & 1.36E+0 & 5.58E-3 & 1.52E-1 \\ 40 & 300A & 1.42E+0 & 8.05E-2 & 1.20E-1 \\ 60 & 0 & 1.81E+0 & 1.35E-1 & 1.73E-1 \\ 60 & 300A & 1.75E+0 & 1.74E-1 & 1.44E-1 \\ \hline \end{tabular} \label{ytotb} \end{table} \subsection{Comparison with the literature}\label{yao} \begin{figure*}[!tbp] \centering \includegraphics[width=8.8cm]{1554fg11.eps}\includegraphics[width=8.8cm]{1554fg12.eps} \caption{{\it Left}: total ejected masses (EM) of $^1$H, $^4$He, $^{12}$C and $^{16}$O as a function of the initial mass for different non--rotating models at solar metallicity. {\it Right}: total ejected masses (EM) of $^{16}$O, $^{20}$Ne and $^{24}$Mg as a function of the CO core mass for different models at solar metallicity. Solid lines (HMM04) represent our results, dotted lines (LC03) show the results from \citet{LC03}, long--short dashed lines (TNH96) show the results from \citet{TNH96}, dashed lines (RHW02) represent the results from \citet{RHHW02} and dotted--dashed (WW95) lines show the results from \citet{WW95}. } \label{abc} \end{figure*} \begin{table*} \caption{ Initial mass and velocity, remnant mass and {\bf total ejected masses (EM)} of solar metallicity models. All masses are in solar mass units and velocities are in km\,s$^{-1}$. "A" in column 2 means wind anisotropy has been included in the model. 
Note that this table is given for comparison with other recent publications and does not correspond to our definition of yields (see Sect. \ref{def}).} \begin{tabular}{l l r r r r r r r r r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & $M_{\rm{rem}}$ & $^1$H & $^{3}$He & $^4$He & $^{12}$C & $^{13}$C & $^{14}$N & $^{16}$O & $^{17}$O & $^{18}$O & Z \\ \hline 12 & 0 & 1.34 & 5.86E+0 & 1.87E-4 & 4.13E+0 & 1.23E-1 & 1.05E-3 & 4.81E-2 & 3.09E-1 & 7.75E-5 & 3.95E-4 & 6.70E-1 \\ 12 & 300 & 1.46 & 5.03E+0 & 1.46E-4 & 4.50E+0 & 1.99E-1 & 1.51E-3 & 5.16E-2 & 4.93E-1 & 7.58E-5 & 1.91E-3 & 1.01E+0 \\ 15 & 0 & 1.51 & 6.92E+0 & 2.02E-4 & 5.38E+0 & 1.99E-1 & 1.26E-3 & 5.76E-2 & 5.64E-1 & 6.86E-5 & 3.06E-3 & 1.19E+0 \\ 15 & 300 & 1.85 & 5.78E+0 & 1.56E-4 & 5.19E+0 & 4.06E-1 & 1.61E-3 & 4.91E-2 & 1.14E+0 & 7.14E-5 & 4.01E-3 & 2.18E+0 \\ 20 & 0 & 1.95 & 8.48E+0 & 2.28E-4 & 7.05E+0 & 2.77E-1 & 1.38E-3 & 6.88E-2 & 1.37E+0 & 7.94E-5 & 4.31E-3 & 2.53E+0 \\ 20 & 300 & 2.57 & 6.69E+0 & 1.72E-4 & 6.41E+0 & 4.93E-1 & 1.73E-3 & 6.17E-2 & 2.74E+0 & 6.52E-5 & 1.89E-4 & 4.33E+0 \\ 25 & 0 & 2.49 & 9.57E+0 & 2.22E-4 & 8.76E+0 & 4.45E-1 & 1.34E-3 & 9.79E-2 & 2.39E+0 & 8.45E-5 & 2.85E-4 & 4.19E+0 \\ 25 & 300 & 3.06 & 7.57E+0 & 1.89E-4 & 8.18E+0 & 8.12E-1 & 1.86E-3 & 9.70E-2 & 3.76E+0 & 6.87E-5 & 7.51E-4 & 6.19E+0 \\ 40 & 0 & 4.02 & 1.36E+1 & 3.40E-4 & 1.29E+1 & 7.72E-1 & 1.78E-3 & 1.73E-1 & 6.47E+0 & 9.01E-5 & 2.69E-4 & 9.37E+0 \\ 40 & 300A & 3.85 & 9.11E+0 & 2.14E-4 & 1.58E+1 & 3.24E+0 & 1.67E-3 & 2.02E-1 & 5.71E+0 & 6.73E-5 & 1.93E-4 & 1.12E+1 \\ 60 & 0 & 4.30 & 1.91E+1 & 4.85E-4 & 2.22E+1 & 4.13E+0 & 2.28E-3 & 2.84E-1 & 7.01E+0 & 1.14E-4 & 3.68E-4 & 1.41E+1 \\ 60 & 300A & 4.32 & 1.29E+1 & 2.56E-4 & 2.81E+1 & 4.70E+0 & 2.18E-3 & 3.57E-1 & 6.94E+0 & 7.86E-5 & 1.94E-4 & 1.46E+1 \\ \hline \end{tabular} \label{etot} \end{table*} We compare here the yields of the non--rotating models with other authors. For this purpose, the ejected masses, $EM$, defined by Eq. \ref{emdef} in Sect. 
\ref{def}, are presented in Tables \ref{etot} and \ref{etotb}. Figure \ref{abc} shows the comparison with four other calculations: \citet{LC03} (LC03), \citet{TNH96} (TNH96), \citet{RHHW02} (RHW02) and \citet{WW95} (WW95). For LC03, we chose the remnant masses that are closest to ours (models 15D, 20B, 25A). The uncertainties related to convection and the $^{12}$C$(\alpha,\gamma)^{16}$O reaction are dominant. Therefore, before we compare our results with other models, we briefly mention here which treatment of convection and $^{12}$C$(\alpha,\gamma)^{16}$O rate other authors use: \begin{itemize} \item \citet{LC03} use Schwarzschild criterion (except for the H convective shell that forms at the end of core H--burning, where Ledoux criterion is used) for convection without overshooting. For $^{12}$C$(\alpha,\gamma)^{16}$O, they use the rate of \citet{K02} (K02). \item \citet{TNH96} use Schwarzschild criterion for convection without overshooting. For $^{12}$C$(\alpha,\gamma)^{16}$O, they use the rate of \citet{CF85} (CF85). \item \citet{WW95} use Ledoux criterion for convection with semiconvection. They use a relatively large diffusion coefficient to model semiconvection. Moreover non--convective zones immediately adjacent to convective regions are slowly mixed over the order of a radiation diffusion time scale to approximately allow for the effects of convective overshoot. For $^{12}$C$(\alpha,\gamma)^{16}$O, they use the rate of \citet{CF88} (CF88) multiplied by 1.7. \item \citet{RHHW02} use Ledoux criterion for convection with semiconvection. They use the same method as WW95 for semiconvection. For $^{12}$C$(\alpha,\gamma)^{16}$O, they use the rate of \citet{BU96} (BU96) multiplied by 1.2. \item In this paper (HMM04), we use Schwarzschild criterion for convection with overshooting. For $^{12}$C$(\alpha,\gamma)^{16}$O, we use the rate of \citet{NACRE} (NACRE). 
\end{itemize} A comparison of the different reaction rates and treatment of convection is presented in \citet{psn04a}. The comparison of the ejected masses is shown in Fig. \ref{abc} for masses between 15 and 25 $M_\odot$. The $^{4}$He and $^{16}$O yields are larger when respectively the helium and carbon--oxygen cores are larger. This can be seen by comparing our models with those of RHW02 and LC03 (Fig. \ref{abc} and respective tables of core masses). For $^{12}$C yields, the situation is more complex because the larger the cores, the larger the central temperature and the more efficient the $^{12}$C$(\alpha,\gamma)^{16}$O reaction. If we only consider the effect of this reaction, the larger the rate, the smaller the $^{12}$C abundance at the end of He--burning and the smaller the corresponding yield (and the larger the $^{16}$O yield). This can be seen in Fig. \ref{abc} by comparing our $^{12}$C and $^{16}$O yields with those of LC03 (we both use the Schwarzschild criterion). Indeed, the NACRE rate is larger than the K02 one, so our $^{12}$C yield is smaller. TNH96 (who also use the Schwarzschild criterion), using the rate of \citet{CF85}, which is even larger, obtain an even smaller $^{12}$C yield. When both the convection treatment and the $^{12}$C$(\alpha,\gamma)^{16}$O rate are different, the comparison becomes more complicated. Nevertheless, within the model uncertainties, the yields of the various models agree. In fact, the uncertainties are reduced when we use the CO core mass instead of the initial mass in order to compare the results of different groups. Fig. \ref{abc} (right) shows the small uncertainty for $^{16}$O in relation to the CO core mass. This confirms the relation $M_{\rm{CO}}$--yields($^{16}$O) and shows that this relation holds for models of different groups and for models of non--rotating and rotating stars. We calculated the pre--SN yields at the end of Si--burning. 
Therefore, the yields of $^{20}$Ne and $^{24}$Mg may still be affected by explosive neon and oxygen burnings. $^{20}$Ne yields are upper limits due to the possible destruction of this element by explosive Ne--burning. Figure \ref{abc} (right) shows that our results lie above the results of other groups but that the difference is as small as the differences between the results of the other groups. $^{24}$Mg yields are also close to the results of other groups who included explosive burnings in their calculations. By comparing our results for $^{20}$Ne and $^{24}$Mg with the other groups mentioned above, we see that the difference between our results and the results of other groups is as small as the differences between the 19, 20 and 21 $M_\odot$ models of \citet{RHHW02} and the differences between, for example, \citet{RHHW02} and \citet{LC03}. This means that our yields for these two elements are good approximations even though explosive burning was not followed in this calculation. For $^{24}$Mg, it is interesting to note that rotation significantly increases the yields only for the 12 $M_\odot$ models and that, in general, rotation slightly decreases the $^{24}$Mg yields in the massive star range (see Table \ref{ytotb} and Fig. \ref{abc} right). This point is interesting for the chemical evolution of galaxies since it goes in the same direction as observational constraints \citep{CC04}. For $^{17}$O yields, all recent calculations agree rather well and differ from the WW95 results because of the change in the reaction rates \citep[especially $^{17}$O$(p,\alpha)^{14}$N, see][]{APB96}. $^{18}$O and $^{22}$Ne are produced by $\alpha$--captures on $^{14}$N. As said in Sect. \ref{def}, $^{22}$Ne is not followed during the advanced stages and we had to use a special calculation for its yield. Our $^{22}$Ne values are nevertheless very close to other calculations \citep[see][]{thesis}. \begin{table} \caption{{\bf Total ejected masses (EM)} of solar metallicity models. 
Continuation of Table \ref{etot}.} \begin{tabular}{l l r r r} \hline \hline \\ $M_{\rm{ini}}$ & $\upsilon_{\rm{ini}}$ & ($^{20}$Ne) & $^{22}$Ne & ($^{24}$Mg) \\ \hline 12 & 0 & 1.24E-1 & 8.37E-3 & 1.27E-2 \\ 12 & 300 & 1.77E-1 & 1.71E-2 & 1.87E-2 \\ 15 & 0 & 1.35E-1 & 1.80E-2 & 6.75E-2 \\ 15 & 300 & 2.50E-1 & 3.53E-2 & 4.77E-2 \\ 20 & 0 & 5.16E-1 & 3.90E-2 & 1.36E-1 \\ 20 & 300 & 7.12E-1 & 4.52E-2 & 1.23E-1 \\ 25 & 0 & 8.82E-1 & 5.55E-2 & 1.45E-1 \\ 25 & 300 & 1.12E+0 & 2.59E-2 & 1.52E-1 \\ 40 & 0 & 1.42E+0 & 1.08E-2 & 1.58E-1 \\ 40 & 300A & 1.48E+0 & 8.58E-2 & 1.25E-1 \\ 60 & 0 & 1.91E+0 & 1.43E-1 & 1.79E-1 \\ 60 & 300A & 1.86E+0 & 1.82E-1 & 1.50E-1 \\ \hline \end{tabular} \label{etotb} \end{table} \section{Conclusion} We calculated a new set of stellar yields of rotating stars at solar metallicity covering the massive star range (12--60 $M_{\sun}$). We used for this purpose the latest version of the Geneva stellar evolution code described in \citet{psn04a}. We present the separate contribution to stellar yields by winds and supernova explosion. For the wind contribution, our rotating models have larger yields than the non--rotating ones because of the extra mass loss and mixing due to rotation. For the SN yields, we followed the evolution and nucleosynthesis until core silicon burning. Since we did not model the SN explosion and explosive nucleosynthesis, we present pre--SN yields for elements lighter than $^{28}$Si, which depend mostly on the evolution prior to Si--burning. Our results for the non--rotating models correspond very well to other calculations and differences can be understood in the light of the treatment of convection and the rate used for $^{12}$C$(\alpha,\gamma)^{16}$O. This assesses the accuracy of our calculations and assures a safe basis for the yields of our rotating models. For the pre--SN yields and for masses below $\sim 30\,M_{\sun}$, rotating models have larger yields. 
The $^{12}$C and $^{16}$O yields are increased by a factor of 1.5--2.5 by rotation in the present calculation. When we add the two contributions, the yields of most heavy elements are larger for rotating models below $\sim 30\,M_{\sun}$. Rotation increases the total metallic yields by a factor of 1.5--2.5. As a rule of thumb, the yields of a rotating 20 $M_\odot$ star are similar to the yields of a non--rotating 30 $M_\odot$ star, at least for the light elements considered in this work. When mass loss is dominant (above $\sim 30\,M_{\sun}$) our rotating and non--rotating models give similar yields for heavy elements. Only the yields of H--burning products are increased by rotation in the very massive star range. \bibliographystyle{aa}
Q: Postgresql OR statement slowing down query

I have a PostgreSQL query that references two columns with an OR statement in the where clause.

EXPLAIN (ANALYZE, COSTS, VERBOSE, BUFFERS)
select * from "connection" where "personOneId" = '?' or "personTwoId" = '?'

I have an index on "personOneId" and the query is blistering fast. But when I include the OR "personTwoId" the query slows down dramatically. I initially tried having both "personOneId" and "personTwoId" indexed (multi-column index) but it still does a " -> Parallel Seq Scan on connection" and the query is the same speed as it always was even with the index.

Is my index wrong or is this the expected behavior with the "OR" statement? Is there a way to alter this query to achieve the same outcome that will allow PG to use the indexes properly?

Execution plan:

"Gather (cost=1000.00..24641.09 rows=302 width=117) (actual time=47.352..144.044 rows=337 loops=1)"
" Output: redacted"
" Workers Planned: 2"
" Workers Launched: 2"
" Buffers: shared hit=1892 read=15205"
" -> Parallel Seq Scan on public.connection (cost=0.00..23610.89 rows=126 width=117) (actual time=41.072..134.191 rows=112 loops=3)"
" Output: redacted"
" Filter: ((connection.""personOneId"" = 'redacted id'::uuid) OR (connection.""personTwoId"" = 'redacted id'::uuid))"
" Rows Removed by Filter: 347295"
" Buffers: shared hit=1892 read=15205"
" Worker 0: actual time=39.153..134.249 rows=170 loops=1"
" Buffers: shared hit=667 read=5645"
" Worker 1: actual time=37.108..132.297 rows=134 loops=1"
" Buffers: shared hit=651 read=4768"
"Planning Time: 0.217 ms"
"Execution Time: 147.659 ms"

A: You have the wrong index for this query. A multicolumn btree index on ("personOneId", "personTwoId") is not very good for the same reason it is inefficient to find all the people with the first name of 'Samantha' in a paper phone book, which is sorted by last name first then by first name. 
If you have separate btree indexes on each column, then it can combine them with a BitmapOr and that should be fast. Or if you switch to a GIN index, a multi-column GIN index should also be useful.
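A minimal sketch of the suggested setup (the index names below are invented for illustration; table and column names are taken from the question):

CREATE INDEX connection_person_one_idx ON "connection" ("personOneId");
CREATE INDEX connection_person_two_idx ON "connection" ("personTwoId");

With both single-column indexes in place, the planner can scan each index separately and merge the matches with a BitmapOr, so EXPLAIN should show two Bitmap Index Scans instead of the Parallel Seq Scan. An alternative that allows plain index scans is to rewrite the OR as a UNION:

SELECT * FROM "connection" WHERE "personOneId" = '?'
UNION
SELECT * FROM "connection" WHERE "personTwoId" = '?';

(UNION also de-duplicates any row that matches both predicates.)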
Q: Multiple Quartz Schedulers - scheduling jobs created in different scheduler instance

I would like to use the Quartz scheduler so that the server part of my application uses a scheduler to create a job and store it in the JDBC store, while the UI part (frontend) uses another instance of the scheduler (pointed at the same database schema) to add triggers for that job. I thought it would be enough for the UI to know the name of the job and the group, because adding a trigger is something like:

trigger = newTrigger()
    .withIdentity("trigger", "group1")
    .withSchedule(cronSchedule("0 0/2 8-17 * * ?")
        .withMisfireHandlingInstructionFireAndProceed())
    .forJob("myJob", "group1")
    .build();

Unfortunately, this throws a java.lang.ClassNotFoundException for the job class. Any help would be appreciated. Thank you.

A: As per the conversation above, why don't you add the trigger itself in the server project? You can make a JMS call to the server project, sending the trigger details, and do all the required work there. I think that should solve your issue.
\section{Introduction} In classical general relativity (GR), in contrast to mathematical differential geometry, it is kind of a dogma that the points of the space-time manifold (S-T) have no real physical individuality. This was already realized in the context of the socalled \tit{Einstein-Hole-Argument} (see for example \cite{Pais},\cite{Stachel},\cite{Norton} or sect.4 in \cite{Rovelli}) and condensed in the mathematical statement \begin{ob} In classical general relativity all diffeomorphic S-T-manifolds are physically indistinguishable. I.e., with $\Phi$ a diffeomorphism from $M$ to $M'$ (that is, not simply a coordinate transformation), and $g':=\Phi_{\ast}\circ g$ (note that $\Phi_{\ast}$ denotes the push forward related to the pull back $(\Phi^{-1})^{\ast}$), $(M,g)$ and $(M',g')$ describe the same classical physics. In other words \begin{equation}S\!-\!T=Riem/Diff \end{equation} \end{ob} (cf. e.g. \cite{Hawking}) In the special case $M'=M$ we would have a family of mathematically discernible metrics at the same (coordinate) point, that is \begin{equation} g(x)\neq \Phi_{\ast}\circ g(x) =g'(x) \end{equation} However one should note that, physically, we are always given a single metric from the family on the S-T-manifold as, by construction, the metrical properties on the S-T-manifold are given by a concrete measurement prescription which comes in a sense from outside in contrast to mathematics. We will come back to this topic below in the context of spontaneous symmetry breaking (SSB). The typical way in which a metric is introduced in GR exploits the existence of local inertial frames (LIF) in which special relativity holds sway and which allow one to perform the usual length- and time-measurements. The \tit{equivalence principle} and \tit{general covariance} then allow one to transplant the respective measurement results into arbitrary coordinate systems.
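In local coordinates the action of $\Phi_{\ast}$ appearing above is just the familiar tensor transformation law; writing $y=\Phi(x)$ we have, schematically, \begin{equation}g'_{\mu\nu}(y)=\frac{\partial x^{\rho}}{\partial y^{\mu}}\,\frac{\partial x^{\sigma}}{\partial y^{\nu}}\,g_{\rho\sigma}(x) \end{equation} which makes explicit that $g$ and $g'$ are in general different functions of the coordinates although they encode the same geometry.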
\begin{conclusion}On a given S-T-manifold we can hypothetically envisage several mathematically different but physically equivalent metrical tensors \begin{equation} g(x)\; ,\; g'(x)=\Phi_{\ast}\circ g(x) \end{equation} \end{conclusion} We would like to emphasize that in our context of SSB we always have a large class of mathematically different but physically equivalent metrics (in the above sense) on $M$. Typically, SSB is concerned with the ground state or vacuum state of a system. In our context of GR and/or quantum gravity (QG) this means S-T devoid of macroscopic matter/energy content, i.e. solutions with vanishing \tit{Ricci-curvature}, that is \begin{equation}R_{\mu\nu}=0\quad\text{or}\quad G_{\mu\nu}=0 \end{equation} These vacuum solutions have a large class of diffeomorphisms connecting them. Local deformations of the metric as in the \tit{hole-argument} will for example suffice. While we will develop the subject matter from a direction starting with a view on SSB as it occurs in systems of many degrees of freedom (DoF) like e.g. condensed matter physics, there has been a chain of reasoning which exploits similarities between GR and classical gauge theories. A well-written early paper, belonging to this class, is for example \cite{Isham}, in which relations between the \tit{tetrad formalism} of GR and socalled \tit{nonlinear realizations} of \tit{gauge groups} on \tit{coset spaces} in high energy physics are established (having been of some prominence at that time). We mention also \cite{Freund}. Written in a similar vein are papers from the Russian school (up to very recent times). To mention a few, \cite{Sarda1},\cite{Sarda2} or \cite{Sarda3}. This approach relies mainly on the fibre bundle framework of socalled \tit{gauge gravity theory} and the reduction theory of principal bundles with respect to the structure groups being used and is of a more formal character.
A recent paper, also starting from this group-reduction point of view in connection with SSB is \cite{Tomb}. It is our aim in the following to unify this more formal group-theoretic approach with a different train of ideas which start from the more concretely given implications of SSB and \tit{gravitons} as \tit{Goldstone modes}, thus emphasizing features which may establish a connection to an underlying bundle of phenomena belonging to the not yet existing field of quantum gravity (QG). Note for example the remark in \cite{Isham}: \begin{quote} One does not expect any new development in the notoriously difficult problem of quantizing gravity to result from this modified point of view. However some insight may be gained\ldots \end{quote} It is our impression that the observation that gravitons are the Goldstone modes of SSB of diffeomorphism invariance will lead to real physical consequences if one can relate the more formal and abstract aspects on the level of classical gauge theory and fibre bundle reductions to the corresponding physical implications on the deeper levels of quantum space-time physics. \section{Physical Considerations concerning SSB of Diffeomorphism Invariance} As the group theoretic aspects of SSB are represented in great detail in the above mentioned literature, we begin our analysis with the development of a more physical point of view concerning the subject matter which makes contact with related phenomena of SSB in systems consisting of many degrees of freedom. There are two particular points to be mentioned which may shed some light on the scene in GR and QG. We try to elucidate them by briefly discussing two characteristic examples taken from the field of SSB and phase transitions in many-body physics. We will however only stress the points which are of relevance for our corresponding analysis in gravitational physics. 
To begin with, we discuss the phenomenon of breaking of translation invariance by crystallization of a continuous (quantum) many-body system. In the symmetric unbroken phase the particle density $\rho (x):=<\hat{\rho}>$ is a constant, i.e. \begin{equation}\rho (x+a)=\rho(x)\quad ,\quad x,a\in \mbb{R}^d \end{equation} Below some critical point or phase transition line we have instead a periodic dependence of the particle density in the respective \tit{pure phases}, i.e. \begin{equation}\rho (x+a)\neq\rho(x)\quad\text{in general} \end{equation} but \begin{equation}\rho (x+R_i)=\rho(x) \end{equation} for the $R_i$ in some discrete subgroup of $\mbb{R}^d$. \begin{defi}By a pure phase we mean in the above context a crystal having a definite macroscopic position in space. Mixtures may occur if we average over a group of such localized crystals. In a pure phase correlation functions do decay but only slowly due to the existence of collective Goldstone excitations. \end{defi} \begin{bem}A method of generating such a localized crystal is the method of Bogoliubov quasi-averages (an external localizing field which is switched off in the end after the thermodynamic limit has been taken). \end{bem} \begin{ob}If this happens, both in the classical and the quantum regime, long-lived collective excitations do emerge which induce long-range correlations. In the case of a crystal they are called phonons. \end{ob} Representations of the Goldstone phenomenon in the quantum regime are so numerous that we mention only very few sources. Almost every textbook about quantum field theory contains a brief discussion (see e.g. \cite{Itzykson}). As to the older literature there is the nice comprehensive review \cite{Guralnik}. A more recent contribution is for example \cite{Requ2}. A detailed development of the Goldstone phenomenon in the regime of classical statistical mechanics can be found in \cite{Requ1}. There exists an important difference between Goldstone particles in say QFT and in e.g.
condensed matter physics and statistical mechanics. In QFT the Goldstone particles are exact mass-zero particles, i.e. they have a sharp excitation branch. On the other hand, in condensed matter physics, or systems having a non-vanishing particle density in general, they acquire an infinite lifetime only for momentum zero, while for non-vanishing momenta they are usually still relatively stable collective excitations but have only a finite lifetime (which typically decreases with increasing momentum) resulting in a smeared dispersion law (cf. e.g. \cite{Requ2}). Furthermore, while in RQFT their spin vanishes due to general principles (see \cite{Reeh}), this is not so in the more general context. The underlying reason is the absence of Lorentz covariance, Einstein causality and the socalled spectrum condition (energy-momentum concentrated in the forward cone). Furthermore, while in most scenarios we can at least exploit translation invariance and the corresponding Fourier-mode decomposition, this is absent in GR and QG. Therefore, in the following, we will avoid all these concepts and discuss the Goldstone phenomenon in a much broader framework. The relevant point in our investigation will be the following: \begin{ob}For a hypothetical observer living inside one of the respective pure phases, i.e. the crystal, being translated by some vector, $a$, the internal physics is the same compared to a corresponding observer in a crystal being translated by some vector, $a'\neq a$, provided corresponding coordinate systems have been chosen. Only an outside observer is able to discern the various translated pure phases. \end{ob} For illustrative purposes we mention another example, i.e. a lattice spin system, being capable of spontaneous magnetization. A pure phase in this scenario is described by a magnetization vector, pointing in a certain direction in configuration space.
Again, the internal physics relative to the orientation of this magnetization vector is the same in all the different pure phases. Only an external observer is able to see the different phases (that is, the different directions of magnetization) by using his external reference system. \begin{bem}All the internal observers are however able to observe the long-range collective Goldstone excitations, that is, phonons or magnons. \end{bem} All this now winds up to the observation that all internal observers see essentially the same physics provided they adapt their internal reference systems appropriately. That is, the situation is completely the same as compared to the case of diffeomorphism invariance of vacuum solutions in GR or QG. \begin{conclusion}By the above observations we feel entitled to attribute to the different members of the class of diffeomorphic realizations of S-T a perhaps less than ephemeral or only formal existence as, by necessity, we are only internal observers in the latter case. \end{conclusion} We want to conclude this section with a brief analysis of the character of the goldstone modes under discussion. Phonons are essentially lattice vibrations in the crystal case, magnons are fluctuations of the local magnetization. Phrasing it somewhat differently one can venture to say: \begin{ob}The Goldstone modes try to locally interpolate between the different potentially coexisting pure phases. I.e., local distortions of the crystal lattice can for example be regarded as local transitions into another slightly shifted crystal configuration. The same holds in the magnon case. \end{ob} \begin{conclusion}Exploiting the above correspondence between our examples and diffeomorphism invariance of S-T, one may conclude that the Goldstone modes in the latter case are the gravitons, acting as local distortions of S-T. 
They interpolate locally between the mathematically different but physically only hypothetically coexisting realizations of S-T as we are living in only one of these possible realizations. \end{conclusion} \section{The Conceptual Representation of SSB of Diffeomorphism Invariance in the Context of General Relativity and Quantum Gravity} In this section we want to analyse the nature of SSB in our context. Note that the different diffeomorphic realizations of S-T can be viewed as an underlying differentiable manifold being equipped with different but diffeomorphic pseudo-Riemannian metrics. Furthermore, as we are mainly interested in the case of degenerate ground states (in a possibly underlying theory of QG), we assume that the (macroscopic) energy-momentum tensor vanishes. The cornerstone of GR is the \tit{equivalence principle}, that is, at every point, $P$, of the S-T manifold there exists for a fixed metrical field, $g(\circ,\circ)$, a class of LIF in which the laws of special relativity (SR) hold in an at least infinitesimal neighborhood of the point $P$. Mathematically we can construct such a local coordinate system as follows, while we take at the same time the opportunity to introduce a number of useful concepts and notations (cf. e.g. \cite{Moeller}, sect. 9.6; as to the tetrad formalism see also \cite{Synge}, sect. I.3). \begin{defi}In a given coordinate system, $x$, a (contravariant) tetrad at $P$ is given by 4 pseudoorthogonal tangent vectors, $e_a=(e_a^{\nu})$ with $a=0,1,2,3$ labelling the 4 vectors and $\nu$ denoting the indices with respect to the local coordinate tangent vectors $\partial_0,\partial_1,\partial_2,\partial_3$. We have \begin{equation}e_{a\nu}=g_{\nu\mu}e_a^{\mu}\quad ,\quad e_a^{\nu}e_{b\nu}=\eta_{ab} =g(e_a,e_b) \end{equation} with $\eta_{ab}$ the Minkowski tensor.
\end{defi} For formal reasons we introduce \begin{equation}e^{a\nu}:=\eta^{ab}e_b^{\nu}\quad , \quad e_{\nu}^a:=\eta^{ab}e_{b\nu} \end{equation} \begin{ob} \begin{equation}e_a^{\nu}e^b_{\nu}=\delta_a^b \end{equation} \end{ob} \begin{lemma} \begin{equation}e_a^{\nu}e_{\mu}^a=\delta_{\mu}^{\nu} \end{equation} \end{lemma} This follows from the preceding observation. I.e., we have \begin{equation}(e_a^{\nu}e_{\mu}^a)e_b^{\mu}= e_a^{\nu}(e_{\mu}^ae_b^{\mu})=e_a^{\nu}\delta_b^a=e_b^{\nu}=\delta_{\mu}^{\nu}e_b^{\mu} \end{equation} \begin{bem}Note that $\nu,\mu$ refer to the covariant coordinate indices and are consequently raised and lowered with the help of $g_{\nu\mu},g^{\nu\mu}$ while $a,b$ as formal indices are raised and lowered with the help of the Minkowski metric. \end{bem} \begin{ob}Any two tetrads, $(e_a),(f_b)$, at $P$ are connected by a Lorentz transformation, $L$, i.e. \begin{equation}e_a^{\nu}=L_a^{\cdot b}f_b^{\nu}\quad\text{or}\quad e_a=L_a^{\cdot b}f_b \end{equation} \end{ob} \begin{lemma} \begin{equation}e_{\nu}^af_b^{\nu}=L_{\cdot b}^a \end{equation} with \begin{equation}e_{\nu}^a=L_{\cdot b}^a f_{\nu}^b\quad ,\quad f_b^{\nu}=L_{\cdot b}^a e_a^{\nu} \end{equation} \end{lemma} This follows from \begin{equation}(e_{\nu}^af_b^{\nu})f_{\mu}^b=e_{\nu}^a(f_b^{\nu}f_{\mu}^b)=e_{\nu}^a\delta_{\mu}^{\nu}=e_{\mu}^a \end{equation} and \begin{equation}e^a=L_{\cdot b}^af^b\end{equation} \begin{lemma}We have \begin{equation}\eta_{ab}=g(e_a,e_b)=g(f_a,f_b) \end{equation} \end{lemma} Proof: Under the assumption $g(f^c,f^d)=\eta^{cd}$ we have \begin{equation}g(e^a,e^b)=g^{\nu\mu}e_{\nu}^ae_{\mu}^b= g^{\nu\mu}f_{\nu}^cf_{\mu}^dL^a_{\cdot c}L^b_{\cdot d}=\eta^{cd} L^a_{\cdot c}L^b_{\cdot d}=\eta^{ab} \end{equation} As shown in \cite{Moeller}, l.c., one can easily construct a new local coordinate system with the help of the tetrad at $P$: \begin{equation}(x')^i:=e_{\nu}^i(x^{\nu}-x_P^{\nu})\quad ,\quad x^{\nu}=x_P^{\nu}+e_k^{\nu}(x')^k \end{equation} which is
pseudo-orthogonal at $P$, i.e. \begin{equation}g'_{ik}(P)=e_i^{\nu}e_k^{\mu}g_{\nu\mu}=\eta_{ik} \end{equation} Other pseudo-orthogonal coordinate systems can be generated by Lorentz transformations from the $(e^a)$-system, i.e. \begin{equation}f_a^{\nu}=L_a^{\cdot b}e_b^{\nu}\quad\text{and}\quad y^i=f_{\nu}^i(x^{\nu}-x_P^{\nu}) \end{equation} It follows \begin{equation}y^i=L_{\cdot k}^i(x')^k \end{equation} However, a pseudo-orthogonal coordinate system at $P$ is in general not a LIF. Therefore, an observer at $P$ in such a system still experiences a gravitational field. This can be transformed away by means of a more general coordinate transformation which leads to the \begin{ob}There exists a coordinate transformation to socalled local Lorentz-coordinates (e.g. Riemann normal coordinates) such that \begin{equation}g_{\nu\mu}(P)=\eta_{\nu\mu}\quad\text{and}\quad \partial g_{\nu\mu}(P)/\partial x_{\rho}=0\quad\text{or}\quad \Gamma_{\nu\mu}^{\rho}(P)=0 \end{equation} which is the definition of a LIF. Further local Lorentz transformations leave the class of LIF invariant while leading, of course, to another system of local Lorentz-coordinates. \end{ob} We now will establish the connection to SSB of diffeomorphism invariance. Central in the context of SSB and phase transitions is the notion of \tit{order parameter}. In the above examples, taken from many-body theory, order parameters are certain observable quantities which characterize the pure spontaneously broken phases as e.g. the gradient of the particle density or the magnetization. One can equally well use the corresponding quantum observables, that is, observables whose ground state expectation values vanish in the ordered unbroken phase while being different from zero in the broken phases. Furthermore, the different ground state expectation values are connected by the set of broken symmetry transformations. 
More specifically, we have a large group, $G$, of symmetries (some of which are broken) containing a closed subgroup, $H$, of conserved symmetries. The \tit{configuration manifold} of ground states can be related to and parametrized by the \tit{cosets} of the \tit{homogeneous space} $G/H$. We conclude that it is important to study the structure of the space $G/H$ with $G$ a Lie group and $H$ a closed subgroup. This problem is in general not trivial and we will deal with the more mathematical aspects of the problem in the following section. We will now transplant this picture into the more general framework of GR and/or QG. As configuration manifold (of minima of some functional of the occurring fields) we take the set of vacuum solutions of GR. The group $G'$ is the (in general) infinite dimensional group $Diff$. In the following we restrict ourselves, for convenience, to a single orbit under $Diff$, that is, a fixed vacuum solution together with its transforms under $Diff$. As a next step we will shift our point of view to a more local one, i.e. we switch to the action of the respective groups in the corresponding \tit{tangent bundle} of S-T. \begin{ob}Locally the group $Diff$ is realized by the group $G:=GL(n,\R)$ of general linear transformations in the respective local coordinate systems: \begin{equation}Diff\:\rightarrow\: (\partial (x')^i/\partial x^j)(P)\: \in GL(n,\R) \end{equation} \end{ob} One can equally well consider the local action on the (principal) \tit{frame bundle} over a fixed manifold $M$, i.e. the action of $Diff$ in a local trivialization of the frame bundle. \begin{defi}As a closed subgroup of $GL(n,\R)$ we take the orthochronous Lorentz group, $L$ (the connected component of the group unit, $e$). \end{defi} \begin{bem}In a given coordinate system the local frames are in a one-one relation to elements of $GL(n,\R)$, i.e. \begin{equation}v_i=v_i^k(\partial/\partial x^k)\quad , \quad i=1,\ldots ,n \end{equation} with $(v_i^k)\in GL(n,\R)$.
\end{bem} The field on $M$ we are mainly interested in is the classical metric field $g_{\nu\mu}(x)$ or \begin{equation}g_{\nu\mu}(x):=<\hat{g}_{\nu\mu}(x)> \end{equation} with $\hat{g}$ its quantum version (in some framework of semiclassical QG). \begin{defi}We call $g_{\nu\mu}(x)$ an order parameter field, thus generalizing the notion of order parameter. An order parameter manifold, $(M,g)$, is $M$ plus a fixed metrical field $g_{\nu\mu}(x)$. \end{defi} \begin{ob}In a given fixed coordinate patch the group $Diff$ acts on the metrical field $g_{\nu\mu}(x)$ by acting at each point $P$ via $GL(n,\R)$ (in a tensorial way) \begin{equation}Diff:\: g(x)\:\rightarrow\:g'(x) \end{equation} \end{ob} The \tit{hole-problem} was solved by Einstein (cf. \cite{Rovelli} l.c.) by introducing for example four particles $A,B,C,D$ with $A,B$ meeting in point $x_i$ inside the hole and $C,D$ in point $x_j$. He observed that the diffeomorphism transforms both the metric tensor $g(x)$ and the trajectories of the test particles, so that the trajectories are shifted as well and their mutual distances with respect to the transformed metric equal their original distances relative to $g(x)$. \begin{ob} This is exactly the situation we discussed in section 2 in the context of e.g. many-body physics. I.e., physically the different phases become indistinguishable if the respective reference frames are appropriately reoriented. In the hole argument the reference system is provided by the 4 particle trajectories and their intersection points. Put differently, the Einstein hole argument is in fact an illustration of our concept of SSB in general relativity. \end{ob} As a last point in this section we discuss what a local observer in a LIF physically experiences concerning the SSB of diffeomorphism invariance. We again emphasize that we have one metric field, $g(x)$, with the other potentially existing fields $g':=\Phi_*\circ g$ being unobservable for the local observer.
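The resolution of the hole argument just described can be condensed into a single formula: if, together with the metric, also the particle worldlines are dragged along, $\gamma\rightarrow\Phi\circ\gamma$, then for arbitrary vector fields $X,Y$ \begin{equation}\big((\Phi_{\ast}g)(\Phi_{\ast}X,\Phi_{\ast}Y)\big)\circ\Phi=g(X,Y) \end{equation} so that all mutual distances and coincidences are preserved and no observable quantity can distinguish $(M,g)$ from $(M,\Phi_{\ast}g)$.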
What is actually at his disposal are \tit{passive} coordinate transformations. He can for example apply Lorentz transformations. \begin{ob}We learned above that with $(e_a)$ a Lorentz-orthonormal frame, $g(e_a,e_b)=\eta_{ab}$, the same holds for the Lorentz transformed tetrad, $(f_a)$, i.e. $g(f_a,f_b)=\eta_{ab}$. \end{ob} That is, if the local observer uses a LIF (local Lorentz coordinates) he observes that gravitational forces are locally absent. The same then holds when he applies the Lorentz group $L_+^{\uparrow}$ (extending the new frame appropriately to a local Lorentz coordinate system). If, on the other hand, he applies general elements from $GL(n,\R)$ he will observe gravitational forces in the transformed coordinate frame. \begin{conclusion}In our view this is the local manifestation of the breaking of diffeomorphism invariance. \end{conclusion} In the next section we will relate our physical observations to the more abstract formalism of \tit{reduction of principal bundles}. \section{SSB and Reducible Principal Bundles} A large part of the above mentioned literature is concerned with the reducibility of the \tit{frame bundle} with \tit{structure group} $GL(n,\R)$ to a subgroup, in our case the Lorentz group $L_+^{\uparrow}$ or $SO(n-1,1;\R)$. Frequently this phenomenon is already considered as a case of SSB. In our view reducibility as such is a widespread phenomenon in the field of principal bundles. So we regard it rather as a necessary prerequisite for SSB, not as the phenomenon itself! The mathematical results we are using below can be found in the following books: \cite{KN},\cite{CB},\cite{CH},\cite{H}. We take some pains to mention the places where the results (and more) can be found because exact citations of the sometimes quite nontrivial results are frequently missing in the physical literature. We denote a general bundle by $(E,B,G,F)$ or simply $E$, $E\rightarrow B$.
$E$ is the total space, $B$ the base space (typically in our case the space-time manifold $S-T$), $G$ is the structure group (in physics called the gauge group), $F$ the typical fibre (sometimes $F$ is a vector space). Both $E,B$ are differentiable manifolds. Points in $E,B$ are denoted by $p,x$. $E$ is continuously and surjectively projected by $\pi$ onto $B$. With $x\in B$, $\pi^{-1}(x)=E(x)$ or $E_x$ is the fibre over $x$. $G$ acts on $F$ from the left via homeomorphisms or (in case of a vector space) as linear automorphisms. All fibre bundles are assumed to be locally trivial, that is, there exists a covering of $B$ by open sets, $U_i$, so that there exist maps $\phi_i$ with \begin{equation}\phi_i:\pi^{-1}(U_i)\simeq U_i\times F \end{equation} In particular \begin{equation}\phi_i':\pi^{-1}(x)=E_x\simeq F \end{equation} with \begin{equation}\phi_i(p)=(\pi(p),\phi_i'(p)) \end{equation} If $x\in U_i\cap U_j$ the transition maps \begin{equation}\phi_i'\circ\phi_j'^{-1}:F\rightarrow F \end{equation} are elements of the structure group. \begin{defi}A principal bundle, $P$, is a fibre bundle in which the typical fibre $F$ and the structure group $G$ are identical or, equivalently, $E_x$ or $P_x$ are diffeomorphic to $G$ (as Lie groups). \end{defi} \begin{ob}It is important that one can define a right action of $G$ on the fibres, $P_x$, i.e. \begin{equation}R_g\circ p=p\cdot g\quad\text{or}\quad R_g\circ p=\phi_i^{-1}\circ(\phi_i(p)\circ g) \end{equation} under which $G$ acts freely on $P_x$. \end{ob} \begin{bem}Note that left action is in general not! fibre preserving (i.e. independent of the choice of patch $U_i$). \end{bem} In our context we are typically concerned with the bundle of frames, $L(M)$, over a manifold $M$. A linear frame at $x\in M$ is an ordered basis of the tangent space $T_x(M)$. The linear group acts on $L(M)$ from the right in the following way: \begin{equation}Y_j=A_j^iX_i \end{equation} with $(Y_j),(X_i)$ linear frames at $x\in M$.
Taking a coordinate basis $(\partial_{x^i})$ each frame at $x$ can be expressed as \begin{equation}X_j=X^i_j \partial_{x^i} \end{equation} for some $(X^i_j)\in GL(n,\R)$. This shows at the same time the local triviality of the frame bundle, i.e.: \begin{equation}\pi^{-1}(U)\simeq U\times GL(n,\R) \end{equation} \begin{defi}Let $P(M,G)$ be a principal bundle. Let $P'$ be a submanifold of $P$ and $H$ a Lie-subgroup of $G$ so that $P'(M,H)$ is again a principal bundle with structure group $H$. We call $P'(M,H)$ a reduction of $P(M,G)$. We say the structure group $G$ is reduced to $H\subset G$. \end{defi} \begin{bem}This property is a nontrivial! result. It has a variety of interesting applications (cf. e.g. \cite{KN} or \cite{CB}). \end{bem} In the reduction process an important role is played by the \tit{coset space} $G/H$, or in our context, $GL(n,\R)/L_+^{\uparrow}$. In most of the mathematical literature the case $GL(n,\R)/O(n,\R)$ is treated, i.e. $O(n,\R)$ instead of $L_+^{\uparrow}$. The $O(n,\R)$ situation is technically much simpler (as is the case for Riemannian geometry in general compared to Lorentzian geometry), but one frequently finds the slightly erroneous statement in the physical literature that the two cases are more or less equivalent. Therefore we want to briefly comment on this point. We have the general result (cf. \cite{KN} p.43 or \cite{CB} p.109) \begin{satz}$G/H$ is an analytic manifold; in particular the projection $G\rightarrow G/H$ is real analytic. Furthermore, $G$ is locally diffeomorphic to $G/H\times H$, i.e. $G(G/H,H)$ with $H$ as structure group is a principal bundle (cf. \cite{KN} p.55). \end{satz} In special cases, e.g. if $H$ is a maximal compact subgroup, stronger results hold. \begin{satz}With $H$ a maximal compact subgroup \begin{equation}G\simeq H\times \R^m \end{equation} \end{satz} This is a result by Iwasawa (cf. \cite{CB} p.386 or \cite{CH} p.109ff).
\begin{koro}We have \begin{equation}GL(n,\R)/O(n)\simeq \R^{n(n+1)/2} \end{equation} with $\dim(O(n))=n(n-1)/2$. \end{koro} \begin{bem}One should note that $L_+^{\uparrow}$ is not compact and (to our knowledge) a corresponding result does not hold for $L_+^{\uparrow}$ instead of $O(n)$. \end{bem} The reason why such a result holds for $O(n)$ can be understood relatively easily. The well-known \tit{polar decomposition} tells us that there exists an essentially unique (global!) decomposition: \begin{satz}[Polar Decomposition]There exists an essentially unique decomposition \begin{equation}L=O\cdot |L| \end{equation} with $O$ orthogonal and $L\in GL(n,\R)$, $|L|$ positive definite, more specifically \begin{equation}|L|=(L^+\cdot L)^{1/2} \end{equation} with $L^+$ the adjoint of $L$. \end{satz} \begin{bem}This important result is much more general and holds also for \tit{closable (unbounded) operators} (cf. e.g. \cite{RS} vol.I). \end{bem} It is crucial for such a result to hold that one can exploit the spectral theorem, i.e. that for example $|L|$ is well-defined and, a fortiori, selfadjoint. Nothing in that direction holds (to our knowledge) if $O$ is replaced by an element of $L_+^{\uparrow}$. In the latter case one only has a weaker result of a (in our view) quite different nature (which is however sufficient in our case). With $G$ a Lie group, $H$ a closed subgroup, let $\hat{G},\hat{H}$ be the respective Lie algebras, $\hat{M}$ some vector subspace of $\hat{G}$ so that $\hat{G}$ is the direct sum, $\hat{G}=\hat{M}+\hat{H}$ (note that $\hat{M}$ is not! unique in general). Let $\exp_{\hat{M}}$ be the restriction of the exponential map to $\hat{M}$ and $\hat{e}:=e\cdot H=H$ the element in $G/H$ under the projection $\pi:G\rightarrow G/H$. \begin{satz}(cf. e.g.
\cite{H} p.113) There exists a neighborhood, $U$, of $0$ in $\hat{M}$ which is mapped homeomorphically under $\exp_{\hat{M}}$ onto $\exp_{\hat{M}}(U)\subset G$ so that $\pi$ maps $\exp_{\hat{M}}(U)$ homeomorphically onto a neighborhood of $\hat{e}$ in $G/H$. \end{satz} To prove this important theorem, one needs the following lemma (\cite{H} p.105), which is useful in various contexts. \begin{lemma}With $\hat{G}=\hat{M}+\hat{H}$ there exist open neighborhoods $U_{\hat{M}},U_{\hat{H}}$ of $0$ in $\hat{M},\hat{H}$ so that the map \begin{equation}(A,B)\rightarrow \exp(A)\cdot\exp(B) \end{equation} is a diffeomorphism of $U_{\hat{M}}\times U_{\hat{H}}$ onto an open neighborhood of $e$ in $G$. \end{lemma} \begin{bem}It is remarkable that this map generates a full neighborhood of $e$ in $G$ as it is, at first glance, only a product set. It follows from the particular situation in Lie groups. It hinges in particular on the non-vanishing of a certain functional determinant around $e$ in $G$. This guarantees the bijectivity of the map near $e$. But one does not know how large this neighborhood actually is. \end{bem} \begin{conclusion}The above theorem says essentially that with the help of $\hat{M}$ we get locally! a transversal submanifold relative to $H$ which yields a parametrization of the fibres around $\hat{e}\in G/H$. Via group multiplication we can then transport this parametrization to a neighborhood of any coset $g\cdot H$. \end{conclusion} \begin{satz}In the case of $GL(n,\R),L_+^{\uparrow}(n,\R)$, matrices with \begin{equation}\eta\cdot A^T=A\cdot\eta \quad\text{(pseudosymmetric)} \end{equation} span an $n(n+1)/2$-dimensional submanifold being locally transversal to $L_+^{\uparrow}(n,\R)$, i.e. they locally coordinatize $GL(n,\R)/L_+^{\uparrow}(n,\R)$. That is, in a neighborhood of $e\in GL(n,\R)$ each element $L\in GL(n,\R)$ can be written uniquely as \begin{equation}L=\Lambda\cdot A \end{equation} with $\Lambda\in L_+^{\uparrow}(n,\R)$ and $A$ pseudosymmetric.
\end{satz} \begin{bem}Note that e.g. matrices with $A^T=A$ do not have this property while also spanning an $n(n+1)/2$-dimensional submanifold. There exist Lorentz boosts which are even positive! In $n=2$, for instance, the boost $\Lambda=\left(\begin{smallmatrix}\cosh\chi&\sinh\chi\\ \sinh\chi&\cosh\chi\end{smallmatrix}\right)$ is symmetric and positive, while $\eta\Lambda^T\eta=\Lambda^{-1}\neq\Lambda$ for $\chi\neq 0$, so it is not pseudosymmetric. The above local decomposition with $A$ pseudosymmetric was used in \cite{Isham}. \end{bem} Proof of theorem: We show that no element in $L_+^{\uparrow}$ different from $e$ is pseudosymmetric. We have with $\Lambda\in L_+^{\uparrow}$: \begin{equation}\Lambda^T\eta\Lambda=\eta \end{equation} Assume that \begin{equation}\Lambda^T=\eta\Lambda\eta \end{equation} we get \begin{equation}\eta\Lambda\eta^2\Lambda=\eta\quad\rightarrow\quad \eta\Lambda^2=\eta \end{equation} hence \begin{equation}\Lambda^2=e\quad\rightarrow\quad\Lambda=e \end{equation} (the last implication holding for $\Lambda$ in a sufficiently small neighborhood of $e$, which suffices since the claimed decomposition is a local one), which proves the theorem as it holds $n(n-1)/2+n(n+1)/2=n^2=\dim(GL(n,\R))$. There is an important theorem in the reduction theory of principal bundles (\cite{KN} p.57) saying the following: \begin{satz}The principal bundle $P(M,G)$ is reducible to $P'(M,H)$ iff $P/H$ admits a cross section. \end{satz} \begin{bem}The meaning of $P/H$ is the following. The fibres of $P$ are diffeomorphic to $G$, hence the fibres of the bundle $P/H$ are diffeomorphic to the typical fibre $G/H$. The structure group is again $G$ (multiplication from the left). \end{bem} In the case of $O(n)$ \begin{equation}GL(n,\R)/O(n)\simeq\R^{n(n+1)/2} \end{equation} it is easy to see that local cross sections can be pasted together and extended to a global cross section (cf. \cite{CB} p.385) because $\R^m$ is a vector space and assuming that $M$ is paracompact. \begin{ob}In the case of $L_+^{\uparrow}$ as subgroup we have local cross sections around every point of $P/H$ or rather $L(M)/L_+^{\uparrow}$ ($L(M)$ the frame bundle) as a consequence of our above results. But (to our knowledge) $GL(n,\R)/L_+^{\uparrow}$ is not diffeomorphic to a vector space. \end{ob} We can however proceed as follows.
We assume that a Lorentzian metric, $g$, is given on our space-time manifold $M=S-T$. \begin{bem}In contrast to the Riemannian case (paracompact manifold) this is not automatic (cf. e.g. \cite{CB} p.293). \end{bem} With the help of $g$ we generate the set of pseudoorthogonal tetrads $(e_a)$ at every point $x$ of $M$. This set is invariant under $L_+^{\uparrow}$ as we have seen in section 3. \begin{ob}The subset of $L(M)$ consisting of pseudoorthogonal tetrads at each $x\in M$ is a \tit{reduced subbundle} $Q(M)\subset L(M)$ with structure group $L_+^{\uparrow}$. \end{ob} In each fibre of $Q(M)$, $L_+^{\uparrow}$ acts freely from the right. Therefore the projection \begin{equation}pr:L(M)\rightarrow L(M)/L_+^{\uparrow} \end{equation} is constant on the fibres of $Q$ as subsets of the fibres of $L(M)$. \begin{ob}Thus $pr$ induces a mapping \begin{equation}s:M\rightarrow L(M)/L_+^{\uparrow} \end{equation} via \begin{equation}s(x):=pr(I(e_a(x))) \end{equation} with $\pi_Q(e_a(x))=x$ the projection map in the subbundle $Q(M)$ and $I$ the injection (imbedding) map \begin{equation}I:Q(M)\rightarrow L(M) \end{equation} It is obvious that $s(x)$ determines a cross section in $L(M)/L_+^{\uparrow}$. \end{ob} \begin{bem}Note that $s(x)$ represents the equivalence class of Lorentz frames in $L(M)/L_+^{\uparrow}$ over $x$. That this is a well-defined mapping is obvious. The continuity or differentiability of the cross section is the crucial point. This follows from the properties of the composition of the above maps, all of which are differentiable. Furthermore, with the help of this cross section we get back exactly the subbundle $Q(M)$ we started from.
\end{bem} \section{Conclusion} We have learned in the preceding sections that the gravitational field (or, rather, the metrical field), $g$, can be regarded as an \tit{order parameter field} and the macroscopic, smooth space-time manifold $(S-T;g)$ as an \tit{order parameter manifold}, lying above a presumed microscopic (irregular and erratic) quantum space-time consisting of an array of many interacting DoF. We have invoked the situation of quantum many-body systems several times. There the situation is the following. We usually have a symmetric unbroken phase with the order parameter being zero, and, in another region of certain parameters, a phase transition to a set of physically more or less identical spontaneously broken phases characterized by certain non-vanishing values of the order parameter. Transferring these observations to our field we may venture to say: \begin{conjecture}We associate the presumed unbroken phase of our (quantum) space-time, that is, the absence of a classical, macroscopic metrical field, $g$, with some \tit{pre-big-bang era}. The emergence of classical space-time is then the result of a phase transition (which may have happened before or near the big-bang). \end{conjecture} Another interesting point concerns the nature of the gravitons themselves. In many-body physics the corresponding excitations are the phonons, with the ordered phase being the crystal phase and the relatively long-lived lattice phonons as Goldstone particles. On the other hand, phonons do already exist in quantum fluids but they happen to be more strongly damped. The phonons which occur e.g. in fluids can be associated with the SSB of Galilei invariance (cf. e.g. \cite{Requ2}). \begin{conjecture}We think that in our context gravitons, while being more stable in the ordered phase, i.e. $S-T$ plus a non-vanishing $g$-field, have already existed in the unordered phase (quantum vacuum with vanishing classical $g$).
In this phase they represented certain types of more primordial excitations not related to deformations of classical $S-T$. They perhaps do reflect, as in the above Galilei case, the existence of a quantum vacuum as such. \end{conjecture} \begin{bem}This may be interesting in connection with string theory. While string theory, at least as a starting point, exploits a classical embedding or target space, gravitons are associated with certain low-lying excitations of closed strings. Our above observations concerning the possible nature of gravitons may perhaps shed some light on a certain connection to string theory and the role of gravitons in this framework. \end{bem} The last point we want to address is whether there exists an objective correlate to the different hypothetical phases being described by the different diffeomorphic metrical fields. In SSB of many-body systems all the different phases can really exist while only one is realized in each case. In the corresponding situation of GR we are less accustomed to such a picture. However, in more recent times the general perspective has changed a little bit, given the discussion about the \tit{landscape} in string theory and cosmology or \tit{induced/entropic gravity}, which employ more or less openly some microscopic background substrate as a supposed carrier of the concepts of physics.
Q: If $Ax<x$, where $A \in M_n({\mathbb R})$ is a nonnegative matrix and $x$ is a positive column vector, show that $$A^n \to 0 \mbox{ as } n \to \infty.$$ I can show that the diagonal coefficients are smaller than one, but not necessarily the rest. Help me, please.

A: Hint. Prove by mathematical induction that there is some $0<c<1$ such that $A^nx\le c^nx$ for every $n\ge1$.
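A numerical illustration of the hint (a sketch only; the matrix $A$ and vector $x$ below are arbitrary choices satisfying the hypotheses): take $c=\max_i (Ax)_i/x_i<1$; since $A\ge 0$ preserves componentwise inequalities, induction gives $A^nx\le c^nx$, and positivity of $x$ then bounds every entry of $A^n$ by $c^n x_i/x_j$.

```python
import numpy as np

# Arbitrary nonnegative matrix A and positive vector x with Ax < x componentwise.
A = np.array([[0.2, 0.3, 0.1],
              [0.1, 0.4, 0.2],
              [0.3, 0.1, 0.2]])
x = np.ones(3)

assert np.all(A >= 0) and np.all(x > 0) and np.all(A @ x < x)

# Base case of the induction: Ax <= c*x with 0 < c < 1.
c = np.max((A @ x) / x)
assert 0 < c < 1

# Induction step: A >= 0 preserves <=, so A^(n+1) x = A(A^n x) <= c^n (Ax) <= c^(n+1) x.
# Entrywise, (A^n)_{ij} x_j <= (A^n x)_i <= c^n x_i, hence A^n -> 0.
An = np.linalg.matrix_power(A, 50)
assert np.all(An @ x <= c**50 * x + 1e-12)
print(np.max(np.abs(An)))  # of order c^50, i.e. numerically negligible
```

With $x=(1,1,1)^T$ as chosen here, $c$ is just the largest row sum of $A$.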
\section{Introduction} Identities among hypergeometric series, both terminating and nonterminating, have been the subject of extensive research. In the 1920s, Bailey and Whipple (see \cite{Bailey3, Whipple1,Whipple2, Whipple3, Whipple4, Whipple5}) found a number of identities relating various terminating hypergeometric series that are listed in Chapters 4 and 7 of Bailey's tract \cite{Ba}. In this paper, we generalize these terminating hypergeometric identities found by Bailey and Whipple and extend them to higher-order hypergeometric series. We consider classical identities involving terminating very-well-poised, nearly-poised and Saalsch\"utzian series (see Section 2 for the relevant definitions). The highest order classical transformation formula between two terminating very-well-poised series was found by Bailey in \cite{Bailey3} and involves two terminating very-well-poised ${}_9F_8(1)$ series (see \cite[Eq.\ 4.3.7]{Ba}). This formula was recently extended to the ${}_{11}F_{10}$ level by Srivastava, Vyas and Fatawat \cite[Theorem 3.4]{SVF} by adding two new numerator and denominator pairs of parameters that differ by one each. In this paper, we further extend Srivastava, Vyas and Fatawat's result to a relation involving two terminating very-well-poised ${}_{13}F_{12}(1)$ series (see Proposition \ref{13P3} in Section 5) by adding two more pairs of numerator and denominator parameters with unit difference. In Section 4.5 of Bailey's tract \cite{Ba}, one can find a number of transformation formulas involving terminating nearly-poised series discovered by Whipple and Bailey. The identities \cite[Eq.\ 4.5.1]{Ba} (originally found by Whipple in \cite{Whipple5}) and \cite[Eq.\ 4.5.2]{Ba} (originally found by Bailey in \cite{Bailey3}) relate nearly-poised ${}_4F_3(1)$ and ${}_5F_4(1)$ series, respectively, to Saalsch\"utzian ${}_5F_4(1)$ series. 
In this paper (see Corollary \ref{2C3P16} in Section 3), we extend these two identities to a single transformation formula between a ${}_5F_4(1)$ series, which is {\it not} nearly-poised but in which {\it two} pairs of numerator and denominator parameters deviate from ``well-poisedness'', and a Saalsch\"utzian ${}_6F_5(1)$ series. In addition, this extension is further generalized by Proposition \ref{3P16} in Section 3 to a relation between two terminating ${}_7F_6(1)$ series, one of which is Saalsch\"utzian. A special case of Proposition \ref{3P16} given in Corollary \ref{1C3P16} coincides with a recent special case of a result of Maier \cite{Maier}. The classical Whipple transformation between a very-well-poised ${}_7F_6(1)$ and a Saalsch\"utzian ${}_4F_3(1)$ series (see \cite{Whipple2}, \cite{Whipple4} and \cite[Eq.\ 4.3.4]{Ba}) was recently generalized by Srivastava, Vyas and Fatawat \cite[Theorem 3.2]{SVF}. In this paper, we provide a very general result in Proposition \ref{11P2} that extends both Srivastava, Vyas and Fatawat's result \cite[Theorem 3.2]{SVF} and the results described in the paragraph above. In Section 4 of this paper, we study transformations between two terminating Saalsch\"utzian series. The classical result in this area is the Whipple transform (see \cite{Whipple2}, \cite{Whipple3} and \cite[Eq.\ 7.2.1]{Ba}) involving two terminating Saalsch\"utzian ${}_4F_3(1)$ series. In Proposition \ref{12P2} in Section 4, we extend the Whipple transform to a transformation that involves two terminating Saalsch\"utzian ${}_6F_5(1)$ series in each of which series two numerator parameters exceed two denominator parameters by one.
In Section 5, in addition to obtaining the above-mentioned relation involving two terminating very-well-poised ${}_{13}F_{12}(1)$ series, we also obtain extensions of classical results found by Bailey in \cite[Eqs.\ 8.1, 8.2 and 8.3]{Bailey3} and reproduced in \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba} that transform terminating nearly-poised ${}_5F_4(1)$ series with parametric excesses $\omega=1$ and $\omega=2$ and a terminating nearly-poised ${}_6F_5(1)$ series with parametric excess $\omega=1$ to terminating very-well-poised ${}_9F_8(1)$ series. The immediate extension of \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba} is given in Corollary \ref{1C13P4}, and a further extension is provided in Proposition \ref{13P4}. To obtain our results, we use a method employed by Bailey in \cite{Bailey3} and \cite[Chapter 4]{Ba} that utilizes sums of series of lower order to obtain transformations of series of higher order. The extensions of Bailey's general formulas \cite[Eqs.\ 4.3.1 and 4.3.6]{Ba} are provided in Propositions \ref{11P1} and \ref{13P1}, respectively. We should point out that Bailey uses the Pfaff--Saalsch\"utz formula (see \cite[Eq.\ 2.2.1]{Ba}) to obtain \cite[Eq.\ 4.3.1]{Ba} while we use the extension of the Pfaff--Saalsch\"utz formula to a ${}_4F_3(1)$ series given by Rakha and Rathie \cite{RR} to obtain the more general Proposition \ref{11P1}. Moreover, Bailey uses Dougall's theorem (see \cite[Eq.\ 6]{Dougall}, \cite{Hardy2} and \cite[Eq.\ 4.3.5]{Ba}) to obtain \cite[Eq.\ 4.3.6]{Ba} and we use the extension of Dougall's theorem to a ${}_9F_8(1)$ series summation provided by Srivastava, Vyas and Fatawat \cite[Theorem 3.3]{SVF} to prove the more general Proposition \ref{13P1}. Finally, in this paper, not only do we use the method just described with known summations of series, but we also use it with a known {\it transformation} of series and then reverse the order of summation to obtain a new transformation (see Proposition \ref{11P2}). 
The method described above that we use in this paper is parallel to Bailey's transform (see \cite{BaileyRR1}, \cite{BaileyRR2} and \cite[pp.\ 58--74]{Slater}), which is employed by Srivastava, Vyas and Fatawat in \cite{SVF}. Both methods can be utilized to obtain many of the known transformations of hypergeometric series. We should also note that, according to the Karlsson--Minton summation formula (see \cite{Karlsson}), any hypergeometric series in which numerator and denominator parameters differ by positive integers can be written as a finite sum of hypergeometric series of lower order, but we have not used this approach or formula in our paper. Finally, in Section 6, by taking certain limits of relations in Section 3, we obtain extensions of some classical quadratic transformations of hypergeometric series. In particular, we obtain extensions in terms of single transformations of both the quadratic transformation found by Whipple in \cite[Eq.\ $(7.1)$]{Whipple5} and its companion transformation found by Bailey in \cite[Eq.\ $(9.1)$]{Bailey3}. Also, a special case of our most general extension there (see (\ref{1e6P1}) in Section 6 below) coincides with a special case of a transformation of Maier \cite[Theorem $3.4$]{Maier}. \section{Preliminaries} The hypergeometric series of type ${}_{r}F_s$ is defined by \begin{equation} \label{210} {}_{r}F_s \left[ {\displaystyle a_1,a_2,\ldots,a_{r}; \atop \displaystyle b_1,b_2,\ldots,b_s;} z\right] = \sum_{n=0}^{\infty} \frac{(a_1)_n(a_2)_n \cdots (a_{r})_n}{n!(b_1)_n(b_2)_n \cdots (b_s)_n}z^n, \end{equation} where $r$ and $s$ are nonnegative integers, $a_1,a_2,\ldots,a_{r}, b_1,b_2,\ldots,b_s, z\in \mathbb{C}$, and the rising factorial $(a)_n$ is given by \begin{equation*} (a)_n=\left\{ \begin{array}{ll} a(a+1)\cdots(a+n-1), & n>0,\\ 1, & n=0. \end{array} \right. \end{equation*} In this paper we will be mostly interested in the case where $r=s+1$.
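Since all series considered below terminate, the identities in this paper can be spot-checked in exact rational arithmetic directly from the definition (\ref{210}). The following Python sketch (ours, not part of the paper) evaluates a terminating series of unit argument term by term and, as a sanity check, verifies the classical Chu--Vandermonde sum ${}_2F_1(-n,b;c;1)=(c-b)_n/(c)_n$ recalled below.

```python
from fractions import Fraction
from math import prod

def poch(a, k):
    """Rising factorial (a)_k = a(a+1)...(a+k-1), with (a)_0 = 1."""
    return prod((Fraction(a) + j for j in range(k)), start=Fraction(1))

def hyp(num, den, n):
    """Terminating hypergeometric series of unit argument: the parameter -n
    is passed inside `num`, and the sum is cut off after the (n+1)-st term."""
    return sum(
        prod((poch(a, k) for a in num), start=Fraction(1))
        / (poch(1, k) * prod((poch(b, k) for b in den), start=Fraction(1)))
        for k in range(n + 1)
    )

# Sanity check: Chu--Vandermonde, 2F1(-n, b; c; 1) = (c-b)_n / (c)_n.
b, c, n = Fraction(1, 3), Fraction(1, 5), 4
assert hyp([b, -n], [c], n) == poch(c - b, n) / poch(c, n)
```

Because all quantities are `Fraction`s, the comparison is exact rather than a floating-point approximation.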
The series of type ${}_{s+1}F_s$ converges absolutely if $|z|<1$ or if $|z|=1$ and $\textrm{Re}(\sum_{i=1}^sb_i-\sum_{i=1}^{s+1}a_i)>0$ (see \cite[p.\ 8]{Ba}). We assume that no denominator parameter $b_1,b_2,\ldots,b_s$ is a negative integer or zero. If a numerator parameter $a_1,a_2,\ldots,a_{s+1}$ is a negative integer or zero, the series has only finitely many nonzero terms and is said to {\it terminate}. When $z=1$, we say that the series is of {\it unit argument} and of type ${}_{s+1}F_s(1)$. If $\sum_{i=1}^sb_i-\sum_{i=1}^{s+1}a_i=1$, the series is called {\it Saalsch\"utzian}. If $1+a_1=b_1+a_2=\cdots=b_s+a_{s+1}$, the series is called {\it well-poised}. A well-poised series that satisfies $a_2=1+\frac{1}{2}a_1$ is called {\it very-well-poised}. The {\it parametric excess} $\omega$ is given by $\omega=\sum_{i=1}^sb_i-\sum_{i=1}^{s+1}a_i$. Note that $\omega=1$ for a Saalsch\"utzian series. We will use the following extension of the classical Chu--Vandermonde formula (see \cite[Section 1.3]{Ba}), which extension sums a special terminating ${}_3F_2(1)$ series where a numerator parameter exceeds a denominator parameter by one: \begin{equation} \label{1e3f2} {}_3F_2 \left( \left. {\displaystyle a,p+1,-n \atop \displaystyle b,p}\right| 1\right) =\frac{(b-a-1)_n(q+1)_n}{(b)_n(q)_n}, \end{equation} where \begin{equation} \label{2e3f2} q=\frac{p(b-a-1)}{p-a}. \end{equation} The above formula (\ref{1e3f2}) appears in \cite{Miller}. A nonterminating version of the same formula can be found in \cite[p.\ 534, Eq.\ (10)]{PBM}. Letting $p \to \infty$ in (\ref{1e3f2}) yields the Chu--Vandermonde formula. We will also use the following extension of the Pfaff--Saalsch\"utz formula (see \cite[Eq.\ 2.2.1]{Ba}) given by Rakha and Rathie \cite{RR} which finds the sum of a special terminating Saalsch\"utzian ${}_4F_3(1)$ series where a numerator parameter exceeds a denominator parameter by one: \begin{eqnarray} \label{e1R1C1P18} &&{}_4F_3 \left( \left. 
{\displaystyle a,b,p+1,-n \atop \displaystyle c,p,2+a+b-c-n}\right| 1\right)\nonumber\\ &&=\frac{(c-a-1)_n(c-b-1)_n(q+1)_n} {(c)_n(c-a-b-1)_n(q)_n}, \end{eqnarray} where \begin{equation} \label{e2R1C1P18} q=\frac{p(c-a-1)(c-b-1)} {ab+p(c-a-b-1)}. \end{equation} Letting $p=b$ in (\ref{e1R1C1P18}) yields the Pfaff--Saalsch\"utz formula, while letting $b \to \infty$ in (\ref{e1R1C1P18}) gives (\ref{1e3f2}). We note that (\ref{e1R1C1P18}) can also be written as \begin{eqnarray} \label{e3R1C1P18} &&{}_4F_3 \left( \left. {\displaystyle a-b-c,\gamma_1+1,a+n,-n \atop \displaystyle 1+a-b,1+a-c,\gamma_1}\right| 1\right)\nonumber\\ &&=\frac{(b)_n(c)_n(a-p+1)_n(p+1)_n} {(1+a-b)_n(1+a-c)_n(p)_n(a-p)_n}, \end{eqnarray} where \begin{equation} \label{e4R1C1P18} \gamma_1=\frac{p(a-p)(b+c-a)} {bc-p(a-p)}, \end{equation} and as \begin{eqnarray} \label{e5R1C1P18} &&{}_4F_3 \left( \left. {\displaystyle c-a-1,c-b-1,\gamma_2+1,-n \atop \displaystyle c,\gamma_2,c-a-b-n}\right| 1\right)\nonumber\\ &&=\frac{(a)_n(b)_n(p+1)_n} {(c)_n(1+a+b-c)_n(p)_n}, \end{eqnarray} where \begin{equation} \label{e6R1C1P18} \gamma_2=\frac{p(c-a-1)(c-b-1)} {ab+p(c-a-b-1)}. \end{equation} We will use (\ref{e3R1C1P18}) and (\ref{e5R1C1P18}) in Sections 3 and 4, respectively, where we study extensions of transformations of nearly-poised and very-well-poised series to Saalsch\"utzian series and extensions of transformations of Saalsch\"utzian series to Saalsch\"utzian series. In \cite[Theorem 3.3]{SVF}, Srivastava, Vyas and Fatawat find a generalization of the classical Dougall's theorem for the sum of a terminating very-well-poised ${}_7F_6(1)$ series with parametric excess $\omega=2$ (see \cite[Eq.\ 6]{Dougall}, \cite{Hardy2} and \cite[Eq.\ 4.3.5]{Ba}). The generalization found by Srivastava, Vyas and Fatawat can be written as \begin{eqnarray} \label{1e1R3C11P2} &&{}_9F_8 \left( {a,1+\frac{a}{2},b,c,d, \atop \frac{a}{2},1+a-b,1+a-c,1+a-d,} \right.\\ &&\left. \left. 
{2a-b-c-d+n,a-p+1,p+1,-n; \atop 1+b+c+d-a-n,p,a-p,1+a+n;} \right| 1\right) \nonumber\\ &&= \frac{(1+a)_n(a-b-c)_n(a-b-d)_n(a-c-d)_n(\alpha+1)_n} {(1+a-b)_n(1+a-c)_n(1+a-d)_n(a-b-c-d)_n(\alpha)_n},\nonumber \end{eqnarray} where \begin{equation} \label{2e1R3C11P2} \alpha =\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {(2a-b-c-d+n)(bcd+p(a-p)(a-b-c-d))}. \end{equation} Dougall's theorem follows from (\ref{1e1R3C11P2}) by letting $p=b$. We note that (\ref{1e1R3C11P2}) can also be written as \begin{eqnarray} \label{1e2R3C11P2} &&{}_9F_8 \left( {\lambda,1+\frac{\lambda}{2},\lambda+b-a,\lambda+c-a,\lambda+d-a, \atop \frac{\lambda}{2},1+a-b,1+a-c,1+a-d,} \right.\\ &&\left. \left. {a+n,\frac{\lambda}{2}-\gamma+1,\frac{\lambda}{2}+\gamma+1,-n; \atop 1+\lambda-a-n,\frac{\lambda}{2}+\gamma,\frac{\lambda}{2}-\gamma,1+\lambda+n;} \right| 1\right) \nonumber\\ &&= \frac{(1+\lambda)_n(b)_n(c)_n(d)_n(a-p+1)_n(p+1)_n} {(a-\lambda)_n(1+a-b)_n(1+a-c)_n(1+a-d)_n(p)_n(a-p)_n},\nonumber \end{eqnarray} where \begin{equation} \label{2e2R3C11P2} \lambda=2a-b-c-d \end{equation} and \begin{equation} \label{3e2R3C11P2} \gamma^2 =\frac{\lambda^2}{4} -\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {bcd+p(a-p)(a-b-c-d)}. \end{equation} We will use (\ref{1e2R3C11P2}) in Section 5 where we study extensions of transformations of very-well-poised and nearly-poised series to very-well-poised series. \section{Extensions of hypergeometric transformations of nearly-poised and very-well-poised series to Saalsch\"utzian series} In this section, we study extensions of the classical identities given in \cite[Eqs.\ 4.5.1, 4.5.2 and 4.3.4]{Ba}. We begin with a general formula that extends \cite[Eq.\ 4.3.1]{Ba}: \begin{Proposition} \label{11P1} Let \begin{equation} \label{1e11P1} \gamma =\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} Then \begin{eqnarray} \label{2e11P1} &&{}_{r+6}F_{s+4} \left( \left. 
{\displaystyle a,b,c,a-p+1,p+1,a_1,\ldots,a_r,-n \atop \displaystyle 1+a-b,1+a-c,p,a-p,b_1,\ldots,b_s}\right| x\right) \nonumber\\ &&=\sum_{m=0}^n \left( \frac{\left(\frac{a}{2}\right)_m\left(\frac{a+1}{2}\right)_m(a-b-c)_m(\gamma+1)_m(a_1)_m\cdots(a_r)_m(-n)_m(-4x)^m} {m!(1+a-b)_m(1+a-c)_m(\gamma)_m(b_1)_m\cdots(b_s)_m}\right.\nonumber\\ &&\times \left. {}_{r+2}F_{s} \left( \left. {\displaystyle a+2m,a_1+m,\ldots,a_r+m,-n+m \atop \displaystyle b_1+m,\ldots,b_s+m}\right| x\right) \right). \end{eqnarray} \end{Proposition} \begin{proof} Using (\ref{e3R1C1P18}), we have \begin{eqnarray*} &&{}_{r+6}F_{s+4} \left( \left. {\displaystyle a,b,c,a-p+1,p+1,a_1,\ldots,a_r,-n \atop \displaystyle 1+a-b,1+a-c,p,a-p,b_1,\ldots,b_s}\right| x\right) \\ &&=\sum_{k=0}^n \frac{(a)_k(b)_k(c)_k(a-p+1)_k(p+1)_k(a_1)_k\cdots (a_r)_k(-n)_kx^k} {k!(1+a-b)_k(1+a-c)_k(p)_k(a-p)_k(b_1)_k\cdots (b_s)_k}\\ &&=\sum_{k=0}^n \left( \frac{(a)_k(a_1)_k\cdots (a_r)_k(-n)_kx^k} {k!(b_1)_k\cdots (b_s)_k}\right.\\ &&\left.\times {}_4F_3 \left( \left. {\displaystyle a-b-c,\gamma+1,a+k,-k \atop \displaystyle 1+a-b,1+a-c,\gamma}\right| 1\right)\right), \end{eqnarray*} where \begin{equation*} \gamma=\frac{p(a-p)(b+c-a)} {bc-p(a-p)}. \end{equation*} We write the ${}_4F_3$ series on the right-hand side above as a summation, switch the order of summation in the resulting expression, and then simplify to obtain (\ref{2e11P1}). \end{proof} \begin{Remark} \label{1R11P1} Formula (\ref{2e11P1}) is an extension of \cite[Eq.\ 4.3.1]{Ba}. In fact, \cite[Eq.\ 4.3.1]{Ba} follows from (\ref{2e11P1}) by letting $x=1$ and $p \to \infty$. \end{Remark} We next use Proposition \ref{11P1} to obtain a generalization of \cite[Eqs.\ 4.5.1 and 4.5.2]{Ba}: \begin{Proposition} \label{3P16} We have \begin{eqnarray} \label{1e3P16} &&{}_7F_6 \left( \left. 
{\displaystyle a,b,c,a-p+1,p+1,q+1,-n \atop \displaystyle 1+a-b,1+a-c,p,a-p,q,w}\right| 1\right)\\ &&=\frac{(w-a-1)_n(\alpha+1)_n} {(w)_n(\alpha)_n}\nonumber\\ &&\times {}_7F_6 \left( \left. {\displaystyle 1+a-w,\frac{a}{2},\frac{a+1}{2},a-b-c,\beta+1,\gamma+1,-n \atop \displaystyle 1+a-b,1+a-c,\frac{2+a-w-n}{2},\frac{3+a-w-n}{2},\beta,\gamma}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e3P16} \alpha=\frac{q(1+a-w)}{a-q}, \end{equation} \begin{equation} \label{3e3P16} \beta=\frac{q(1+a-w)+n(a-q)}{1+2q-w+n} \end{equation} and \begin{equation} \label{4e3P16} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} \end{Proposition} \begin{proof} Use $q+1,q,w,1$ for $a_1,b_1,b_2,x$, respectively, in (\ref{2e11P1}) to obtain \begin{eqnarray*} &&{}_{7}F_{6} \left( \left. {\displaystyle a,b,c,a-p+1,p+1,q+1,-n \atop \displaystyle 1+a-b,1+a-c,p,a-p,q,w}\right| 1\right) \\ &&=\sum_{m=0}^n \left( \frac{\left(\frac{a}{2}\right)_m\left(\frac{a+1}{2}\right)_m(a-b-c)_m(\gamma+1)_m(q+1)_m(-n)_m(-4)^m} {m!(1+a-b)_m(1+a-c)_m(\gamma)_m(q)_m(w)_m}\right.\nonumber\\ &&\times \left. {}_{3}F_{2} \left( \left. {\displaystyle a+2m,q+1+m,-n+m \atop \displaystyle q+m,w+m}\right| 1\right) \right), \end{eqnarray*} where \begin{equation*} \gamma =\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation*} Sum the ${}_3F_2$ series on the right-hand side above according to (\ref{1e3f2}) and simplify to obtain the result. \end{proof} We note that the ${}_7F_6$ series on the left-hand side of (\ref{1e3P16}) deviates from a well-poised series in two pairs of numerator and denominator parameters while the ${}_7F_6$ series on the right-hand side of (\ref{1e3P16}) is Saalsch\"utzian. Letting $q \to \infty$ in (\ref{1e3P16}), we obtain the following special case: \begin{Corollary} \label{1C3P16} We have \begin{eqnarray} \label{1e1C3P16} &&{}_6F_5 \left( \left. 
{\displaystyle a,b,c,a-p+1,p+1,-n \atop \displaystyle 1+a-b,1+a-c,p,a-p,w}\right| 1\right)\\ &&=\frac{(w-a)_n} {(w)_n}\nonumber\\ &&\times {}_6F_5 \left( \left. {\displaystyle 1+a-w,\frac{a}{2},\frac{a+1}{2},a-b-c,\gamma+1,-n \atop \displaystyle 1+a-b,1+a-c,\frac{1+a-w-n}{2},\frac{2+a-w-n}{2},\gamma}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e1C3P16} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} \end{Corollary} We remark that (\ref{1e1C3P16}) is the special case $k=1$ of \cite[Theorem 7.1(ii)]{Maier} as well as \cite[Cor.\ 4]{WR}. On the other hand, letting $p \to \infty$ in (\ref{1e3P16}), we obtain the following result: \begin{Corollary} \label{2C3P16} We have \begin{eqnarray} \label{1e2C3P16} &&{}_5F_4 \left( \left. {\displaystyle a,b,c,q+1,-n \atop \displaystyle 1+a-b,1+a-c,q,w}\right| 1\right)\\ &&=\frac{(w-a-1)_n(\alpha+1)_n} {(w)_n(\alpha)_n}\nonumber\\ &&\times {}_6F_5 \left( \left. {\displaystyle 1+a-w,\frac{a}{2},\frac{a+1}{2},1+a-b-c,\beta+1,-n \atop \displaystyle 1+a-b,1+a-c,\frac{2+a-w-n}{2},\frac{3+a-w-n}{2},\beta}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e2C3P16} \alpha=\frac{q(1+a-w)}{a-q} \end{equation} and \begin{equation} \label{3e2C3P16} \beta=\frac{q(1+a-w)+n(a-q)}{1+2q-w+n}. \end{equation} \end{Corollary} We note that Corollary \ref{2C3P16} generalizes two well-known results of Whipple and Bailey given in \cite[Eqs.\ 4.5.1 and 4.5.2]{Ba}. Indeed, we have the following: \begin{enumerate}[label=(\alph*)] \item Letting $q \to \infty$ in (\ref{1e2C3P16}) gives \cite[Eq.\ 4.5.1]{Ba} (originally found by Whipple in \cite{Whipple5}). \item Letting $q=a/2$ in (\ref{1e2C3P16}) gives \cite[Eq.\ 4.5.2]{Ba} (originally found by Bailey in \cite{Bailey3}). \end{enumerate} We next show how Proposition \ref{11P1} along with the result in Corollary \ref{2C3P16} lead to a formula that generalizes transformations of both nearly-poised and very-well-poised series to Saalsch\"utzian series. 
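Corollary \ref{2C3P16} can be spot-checked exactly. In the sketch below (ours, not part of the paper; the parameter values $a=7/2$, $b=1/3$, $c=1/5$, $q=2/7$, $w=9/4$ are arbitrary non-degenerate rational choices), both sides of (\ref{1e2C3P16}) are computed in exact rational arithmetic for several values of $n$.

```python
from fractions import Fraction as Fr
from math import prod

def poch(a, k):
    # rising factorial (a)_k
    return prod((Fr(a) + j for j in range(k)), start=Fr(1))

def hyp(num, den, n):
    # terminating series of unit argument; -n is included in `num`
    return sum(
        prod((poch(a, k) for a in num), start=Fr(1))
        / (poch(1, k) * prod((poch(b, k) for b in den), start=Fr(1)))
        for k in range(n + 1)
    )

def check(n, a=Fr(7, 2), b=Fr(1, 3), c=Fr(1, 5), q=Fr(2, 7), w=Fr(9, 4)):
    """Compare both sides of the 5F4 -> Saalschuetzian 6F5 transformation."""
    alpha = q * (1 + a - w) / (a - q)
    beta = (q * (1 + a - w) + n * (a - q)) / (1 + 2 * q - w + n)
    lhs = hyp([a, b, c, q + 1, -n], [1 + a - b, 1 + a - c, q, w], n)
    pref = (poch(w - a - 1, n) * poch(alpha + 1, n)
            / (poch(w, n) * poch(alpha, n)))
    rhs = pref * hyp(
        [1 + a - w, a / 2, (a + 1) / 2, 1 + a - b - c, beta + 1, -n],
        [1 + a - b, 1 + a - c, (2 + a - w - n) / 2, (3 + a - w - n) / 2, beta],
        n)
    return lhs == rhs

assert all(check(n) for n in (1, 2, 3))
```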
\begin{Proposition} \label{11P2} We have \begin{eqnarray} \label{1e11P2} &&{}_9F_8 \left( \left. {\displaystyle a,b,c,d,e,a-p+1,p+1,q+1,-n \atop \displaystyle 1+a-b,1+a-c,1+a-d,1+a-e,p,a-p,q,w}\right| 1\right)\nonumber\\ &&=\frac{(w-a-1)_n(\alpha+1)_n}{(w)_n(\alpha)_n}\nonumber\\ &&\times \sum_{k=0}^n \left( \frac{(-n)_k\left(\frac{a}{2}\right)_k\left(\frac{a+1}{2}\right)_k(1+a-w)_k(1+a-d-e)_k(\beta+1)_k} {k!(1+a-d)_k(1+a-e)_k\left(\frac{2+a-w-n}{2}\right)_k\left(\frac{3+a-w-n}{2}\right)_k(\beta)_k}\right.\nonumber\\ &&\times \left. {}_5F_4 \left( \left. {\displaystyle -k,a-b-c,d,e,\gamma+1 \atop \displaystyle 1+a-b,1+a-c,d+e-a-k,\gamma}\right| 1\right)\right), \end{eqnarray} where \begin{equation} \label{2e11P2} \alpha=\frac{q(1+a-w)}{a-q}, \end{equation} \begin{equation} \label{3e11P2} \beta=\frac{q(1+a-w)+n(a-q)}{1+2q-w+n} \end{equation} and \begin{equation} \label{4e11P2} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} \end{Proposition} \begin{proof} Use $d,e,q+1$ for $a_1,a_2,a_3$, respectively, $1+a-d,1+a-e,q,w$ for $b_1,b_2,b_3,b_4$, respectively, and $x=1$ in (\ref{2e11P1}), and then apply (\ref{1e2C3P16}) to write the ${}_5F_4$ series on the right-hand side as a Saalsch\"utzian ${}_6F_5$ series. After that reverse the order of summation and simplify. \end{proof} The formula in Proposition \ref{11P2} is a very general one. It extends both very-well-poised identities as well as nearly-poised identities. In fact, letting $q \to a/2$ first in (\ref{1e11P2}) and then letting $w \to 1+a+n$ in the resulting formula yields \begin{eqnarray} \label{1e1C11P2} &&{}_9F_8 \left( \left. {\displaystyle a,1+\frac{a}{2},b,c,d,e,a-p+1,p+1,-n \atop \displaystyle \frac{a}{2},1+a-b,1+a-c,1+a-d,1+a-e,p,a-p,1+a+n}\right| 1\right)\nonumber\\ &&=\frac{(1+a)_n(1+a-d-e)_n}{(1+a-d)_n(1+a-e)_n}\nonumber\\ &&\times \left. {}_5F_4 \left( \left. 
{\displaystyle a-b-c,d,e,\gamma+1,-n \atop \displaystyle 1+a-b,1+a-c,d+e-a-n,\gamma}\right| 1\right)\right), \end{eqnarray} where \begin{equation} \label{2e1C11P3} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}, \end{equation} which is the very-well-poised ${}_9F_8(1)$ to Saalsch\"utzian ${}_5F_4(1)$ transformation found by Srivastava, Vyas and Fatawat in \cite[Theorem 3.2]{SVF} that generalizes the classical Whipple's transformation of a very-well-poised ${}_7F_6(1)$ series to a Saalsch\"utzian ${}_4F_3(1)$ series (see \cite{Whipple2}, \cite{Whipple4} and \cite[Eq.\ 4.3.4]{Ba}). On the other hand, letting $b \to \infty$ in (\ref{1e11P2}) and then letting $c \to \infty$ in the resulting formula leads to (\ref{1e3P16}), which greatly generalizes the classical nearly-poised to Saalsch\"utzian transformations found by Whipple and Bailey (see \cite[Eqs.\ 4.5.1 and 4.5.2]{Ba}). \section{Extensions of hypergeometric transformations of Saalsch\"utzian to Saalsch\"utzian series} In this section, we extend the well-known Whipple transform (see \cite{Whipple2}, \cite{Whipple3} and \cite[Eq.\ 7.2.1]{Ba}) which involves two terminating Saalsch\"utzian ${}_4F_3(1)$ series. We begin with the following general result: \begin{Proposition} \label{12P1} Let \begin{equation} \label{1e12P1} \gamma =\frac{p(c-a-1)(c-b-1)}{ab+p(c-a-b-1)}. \end{equation} Then \begin{eqnarray} \label{2e12P1} &&{}_{r+4}F_{s+2} \left( \left. {\displaystyle a,b,p+1,a_1,\ldots,a_r,-n \atop \displaystyle c,p,b_1,\ldots,b_s}\right| x\right) \nonumber\\ &&=\sum_{m=0}^n \left( \frac{(c-a-1)_m(c-b-1)_m(\gamma+1)_m(a_1)_m\cdots(a_r)_m(-n)_mx^m} {m!(c)_m(\gamma)_m(b_1)_m\cdots(b_s)_m}\right.\nonumber\\ &&\times \left. {}_{r+2}F_{s} \left( \left. {\displaystyle 1+a+b-c,a_1+m,\ldots,a_r+m,-n+m \atop \displaystyle b_1+m,\ldots,b_s+m}\right| x\right) \right). \end{eqnarray} \end{Proposition} \begin{proof} Using (\ref{e5R1C1P18}), we have \begin{eqnarray*} &&{}_{r+4}F_{s+2} \left( \left. 
{\displaystyle a,b,p+1,a_1,\ldots,a_r,-n \atop \displaystyle c,p,b_1,\ldots,b_s}\right| x\right) \\ &&=\sum_{k=0}^n \frac{(a)_k(b)_k(p+1)_k(a_1)_k\cdots (a_r)_k(-n)_kx^k} {k!(c)_k(p)_k(b_1)_k\cdots (b_s)_k}\\ &&=\sum_{k=0}^n \left( \frac{(1+a+b-c)_k(a_1)_k\cdots (a_r)_k(-n)_kx^k} {k!(b_1)_k\cdots (b_s)_k}\right.\\ &&\left.\times {}_4F_3 \left( \left. {\displaystyle c-a-1,c-b-1,\gamma+1,-k \atop \displaystyle c,\gamma,c-a-b-k}\right| 1\right)\right), \end{eqnarray*} where \begin{equation*} \gamma=\frac{p(c-a-1)(c-b-1)} {ab+p(c-a-b-1)}. \end{equation*} We write the ${}_4F_3$ series on the right-hand side above as a summation, switch the order of summation in the resulting expression, and then simplify to obtain (\ref{2e12P1}). \end{proof} The extension of the Whipple transform is given next: \begin{Proposition} \label{12P2} We have \begin{eqnarray} \label{1e12P2} &&{}_6F_5 \left( \left. {\displaystyle a,b,c,p+1,q+1,-n \atop \displaystyle d,e,f,p,q}\right| 1\right)\nonumber\\ &&=\frac{(e-c-1)_n(f-c-1)_n(\alpha+1)_n}{(e)_n(f)_n(\alpha)_n}\nonumber\\ &&\times {}_6F_5 \left( \left. {\displaystyle d-a-1,d-b-1,c,\gamma+1,\delta+1,-n \atop \displaystyle d,2+c-e-n,2+c-f-n,\gamma,\delta}\right| 1\right), \end{eqnarray} where \begin{equation} \label{2e12P2} d+e+f-a-b-c+n=3, \end{equation} \begin{equation} \label{3e12P2} \alpha=\frac{q(e-c-1)(f-c-1)}{(c-q)(d-a-b-1)}, \end{equation} \begin{equation} \label{4e12P2} \gamma =\frac{p(d-a-1)(d-b-1)}{ab+p(d-a-b-1)} \end{equation} and \begin{equation} \label{5e12P2} \delta=\frac{q(e-c-1)(f-c-1)+n(c-q)(d-a-b-1)}{(e-c-1)(f-c-1)-(c-q)(d-a-b-1)}. \end{equation} \end{Proposition} \begin{proof} Use $d,c,q+1,e,f=3+a+b+c-d-e-n,q,1$ for $c,a_1,a_2,b_1,b_2,b_3,x$, respectively, in (\ref{2e12P1}), and then sum the Saalsch\"utzian ${}_4F_3(1)$ series on the right-hand side according to (\ref{e1R1C1P18}). 
\end{proof} The relation in (\ref{1e12P2}) involves two Saalsch\"utzian ${}_6F_5(1)$ series in each of which series two numerator parameters exceed two denominator parameters by one. This relation is a generalization of the classical Whipple transform (see \cite{Whipple2}, \cite{Whipple3} and \cite[Eq.\ 7.2.1]{Ba}) involving two terminating Saalsch\"utzian ${}_4F_3(1)$ series as we show after Corollary \ref{1C12P2} below. \begin{Remark} \label{1R12P2} Let \begin{eqnarray} \label{1e1R12P2} &&\tilde{F}_n(a,b,c;d,e,f;p,q)\\ &&=(d)_n(e)_n(f)_n(\alpha)_n {}_6F_5 \left( \left. {\displaystyle a,b,c,p+1,q+1,-n \atop \displaystyle d,e,f,p,q}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation*} d+e+f-a-b-c+n=3 \end{equation*} and $\alpha$ is as given in (\ref{3e12P2}). Then equation (\ref{1e12P2}) implies that \begin{eqnarray} \label{2e1R12P2} &&\tilde{F}_n(a,b,c;d,e,f;p,q)\\ &&=(-1)^n \tilde{F}_n(d-a-1,d-b-1,c;d,2+c-e-n,2+c-f-n;\gamma,\delta),\nonumber \end{eqnarray} where $\gamma$ and $\delta$ are as given in (\ref{4e12P2}) and (\ref{5e12P2}), respectively. \end{Remark} \begin{Corollary} \label{1C12P2} We have \begin{eqnarray} \label{1e1C12P2} &&{}_5F_4 \left( \left. {\displaystyle a,b,c,p+1,-n \atop \displaystyle d,e,f,p}\right| 1\right)\nonumber\\ &&=\frac{(e-c)_n(f-c)_n}{(e)_n(f)_n}\nonumber\\ &&\times {}_5F_4 \left( \left. {\displaystyle d-a-1,d-b-1,c,\gamma+1,-n \atop \displaystyle d,1+c-e-n,1+c-f-n,\gamma}\right| 1\right), \end{eqnarray} where \begin{equation} \label{2e1C12P2} d+e+f-a-b-c+n=2, \end{equation} and \begin{equation} \label{3e1C12P2} \gamma =\frac{p(d-a-1)(d-b-1)}{ab+p(d-a-b-1)}. \end{equation} \end{Corollary} \begin{proof} Let $q \to c$ in (\ref{1e12P2}) and then replace $c+1$ with $c$. \end{proof} The two ${}_5F_4(1)$ series in (\ref{1e1C12P2}) are both Saalsch\"utzian and in each one of them a numerator parameter exceeds a denominator parameter by one. 
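As an exact spot-check of Corollary \ref{1C12P2} (again ours, not part of the paper, with arbitrary non-degenerate rational parameters), the sketch below evaluates both sides of (\ref{1e1C12P2}), with $f$ determined by the Saalsch\"utzian condition (\ref{2e1C12P2}).

```python
from fractions import Fraction as Fr
from math import prod

def poch(a, k):
    # rising factorial (a)_k
    return prod((Fr(a) + j for j in range(k)), start=Fr(1))

def hyp(num, den, n):
    # terminating series of unit argument; -n is included in `num`
    return sum(
        prod((poch(a, k) for a in num), start=Fr(1))
        / (poch(1, k) * prod((poch(b, k) for b in den), start=Fr(1)))
        for k in range(n + 1)
    )

def check(n, a=Fr(1, 2), b=Fr(1, 3), c=Fr(1, 5), d=Fr(7, 4), e=Fr(11, 4), p=Fr(3, 7)):
    """Compare both sides of the Saalschuetzian 5F4 <-> 5F4 transformation."""
    f = 2 + a + b + c - d - e - n  # Saalschuetzian constraint on the parameters
    gamma = p * (d - a - 1) * (d - b - 1) / (a * b + p * (d - a - b - 1))
    lhs = hyp([a, b, c, p + 1, -n], [d, e, f, p], n)
    pref = poch(e - c, n) * poch(f - c, n) / (poch(e, n) * poch(f, n))
    rhs = pref * hyp([d - a - 1, d - b - 1, c, gamma + 1, -n],
                     [d, 1 + c - e - n, 1 + c - f - n, gamma], n)
    return lhs == rhs

assert all(check(n) for n in (1, 2, 3))
```

Setting $p=b$ in `check` reduces the test to the classical Whipple transform.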
Letting $p=b$ in (\ref{1e1C12P2}) gives the Whipple transform involving two Saalsch\"utzian ${}_4F_3(1)$ series. \begin{Corollary} \label{2C12P2} We have \begin{eqnarray} \label{1e2C12P2} &&{}_4F_3 \left( \left. {\displaystyle a,c,p+1,-n \atop \displaystyle d,e,p}\right| 1\right)\\ &&=\frac{(e-c)_n}{(e)_n} {}_4F_3 \left( \left. {\displaystyle d-a-1,c,\gamma+1,-n \atop \displaystyle d,1+c-e-n,\gamma}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e2C12P2} \gamma =\frac{p(d-a-1)}{p-a}. \end{equation} \end{Corollary} \begin{proof} In (\ref{1e1C12P2}), fix $a,c,d,e,p$ and $n$, and let \begin{equation*} f=2+a+b+c-d-e-n \end{equation*} depend on $b$. Let $b \to \infty$ to obtain the result. \end{proof} We note that (\ref{1e2C12P2}) generalizes the classical relation involving two terminating ${}_3F_2(1)$ series (see Sheppard \cite{Sheppard} and Whipple \cite{Whipple1} which follow Thomae \cite{T}). Indeed, letting $p \to \infty$ in (\ref{1e2C12P2}) gives \begin{eqnarray} \label{3e2C12P2} &&{}_3F_2 \left( \left. {\displaystyle a,c,-n \atop \displaystyle d,e}\right| 1\right)\\ &&=\frac{(e-c)_n}{(e)_n} {}_3F_2 \left( \left. {\displaystyle d-a,c,-n \atop \displaystyle d,1+c-e-n}\right| 1\right).\nonumber \end{eqnarray} Equation (\ref{1e2C12P2}) also extends the Saalsch\"utzian ${}_4F_3(1)$ summation (\ref{e1R1C1P18}). In fact, (\ref{e1R1C1P18}) follows from (\ref{1e2C12P2}) by setting $e=2+a+c-d-n$ and then summing the resulting ${}_3F_2(1)$ series on the right-hand side according to (\ref{1e3f2}). \section{Extensions of hypergeometric transformations of very-well-poised and nearly-poised series to very-well-poised series} In this section, we extend the relation between two terminating very-well-poised ${}_{11}F_{10}(1)$ series given by Srivastava, Vyas and Fatawat in \cite[Theorem 3.4]{SVF} (which generalizes Bailey's ${}_9F_8$ transformation in \cite[Eq.\ 4.3.7]{Ba}) to a relation between two terminating very-well-poised ${}_{13}F_{12}(1)$ series. 
We also extend the formulas found in \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba}. We begin with a general formula that extends \cite[Eq.\ 4.3.6]{Ba}: \begin{Proposition} \label{13P1} If \begin{equation} \label{1e13P1} \lambda=2a-b-c-d, \end{equation} then \begin{eqnarray} \label{2e13P1} &&{}_{r+7}F_{s+5} \left( \left. {\displaystyle a,b,c,d,a-p+1,p+1,a_1,\ldots,a_r,-n \atop \displaystyle 1+a-b,1+a-c,1+a-d,p,a-p,b_1,\ldots,b_s}\right| x\right) \nonumber\\ &&=\sum_{m=0}^n \left( \frac{(\lambda)_m(\lambda+b-a)_m(\lambda+c-a)_m(\lambda+d-a)_m\left(\frac{a}{2}\right)_m\left(\frac{a+1}{2}\right)_m} {m!\left(\frac{\lambda}{2}\right)_m\left(\frac{\lambda+1}{2}\right)_m(1+a-b)_m(1+a-c)_m(1+a-d)_m}\right.\nonumber\\ &&\times \frac{\left(\frac{\lambda}{2}-\gamma+1\right)_m\left(\frac{\lambda}{2}+\gamma+1\right)_m(a_1)_m\cdots(a_r)_m(-n)_mx^m} {\left(\frac{\lambda}{2}+\gamma\right)_m\left(\frac{\lambda}{2}-\gamma\right)_m(b_1)_m\cdots(b_s)_m}\\ &&\times \left. {}_{r+3}F_{s+1} \left( \left. {\displaystyle a+2m,a-\lambda,a_1+m,\ldots,a_r+m,-n+m \atop \displaystyle 1+\lambda+2m,b_1+m,\ldots,b_s+m}\right| x\right) \right),\nonumber \end{eqnarray} where \begin{equation} \label{3e13P1} \gamma^2 =\frac{\lambda^2}{4} -\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {bcd+p(a-p)(a-b-c-d)}. \end{equation} \end{Proposition} \begin{proof} Using (\ref{1e2R3C11P2}), we have \begin{eqnarray*} &&{}_{r+7}F_{s+5} \left( \left. {\displaystyle a,b,c,d,a-p+1,p+1,a_1,\ldots,a_r,-n \atop \displaystyle 1+a-b,1+a-c,1+a-d,p,a-p,b_1,\ldots,b_s}\right| x\right) \\ &&=\sum_{k=0}^n \frac{(a)_k(b)_k(c)_k(d)_k(a-p+1)_k(p+1)_k} {k!(1+a-b)_k(1+a-c)_k(1+a-d)_k(p)_k(a-p)_k}\\ &&\times \frac{(a_1)_k\cdots (a_r)_k(-n)_kx^k} {(b_1)_k\cdots (b_s)_k}\\ &&=\sum_{k=0}^n \left( \frac{(a)_k(a-\lambda)_k(a_1)_k\cdots (a_r)_k(-n)_kx^k} {k!(1+\lambda)_k(b_1)_k\cdots (b_s)_k}\right.\\ &&{}_9F_8 \left( {\lambda,1+\frac{\lambda}{2},\lambda+b-a,\lambda+c-a,\lambda+d-a, \atop \frac{\lambda}{2},1+a-b,1+a-c,1+a-d,} \right.\\ &&\left. \left. \left. 
{a+k,\frac{\lambda}{2}-\gamma+1,\frac{\lambda}{2}+\gamma+1,-k; \atop 1+\lambda-a-k,\frac{\lambda}{2}+\gamma,\frac{\lambda}{2}-\gamma,1+\lambda+k;} \right| 1\right)\right), \end{eqnarray*} where \begin{equation*} \lambda=2a-b-c-d \end{equation*} and \begin{equation*} \gamma^2 =\frac{\lambda^2}{4} -\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {bcd+p(a-p)(a-b-c-d)}. \end{equation*} We write the ${}_9F_8$ series on the right-hand side above as a summation, switch the order of summation in the resulting expression, and then simplify to obtain (\ref{2e13P1}). \end{proof} \begin{Remark} \label{1R13P1} Formula (\ref{2e13P1}) is an extension of \cite[Eq.\ 4.3.6]{Ba}. In fact, \cite[Eq.\ 4.3.6]{Ba} follows from (\ref{2e13P1}) by letting $x=1$ and $p=d$. \end{Remark} We now obtain the generalization of Srivastava, Vyas and Fatawat's ${}_{11}F_{10}$ transformation given in \cite[Theorem 3.4]{SVF}: \begin{Proposition} \label{13P3} Suppose \begin{equation} \label{1e13P3} 3a=b+c+d+e+f+g-n. \end{equation} Then \begin{eqnarray} \label{2e13P3} &&{}_{13}F_{12} \left( {\displaystyle a,1+\frac{a}{2},b,c,d,e,f, \atop \displaystyle \frac{a}{2},1+a-b,1+a-c,1+a-d,1+a-e,1+a-f,} \right.\nonumber\\ && \left.\left. {\displaystyle g,a-p+1,p+1,a-q+1,q+1,-n \atop \displaystyle 1+a-g,p,a-p,q,a-q,1+a+n} \right| 1\right)\\ &&=\frac{(1+a)_n(1+\lambda-e)_n(1+\lambda-f)_n(1+\lambda-g)_n} {(1+\lambda)_n(1+a-e)_n(1+a-f)_n(1+a-g)_n}\nonumber\\ &&\times \frac{\left(\frac{\mu}{2}-\delta+1\right)_n\left(\frac{\mu}{2}+\delta+1\right)_n} {\left(\frac{\mu}{2}+\delta\right)_n\left(\frac{\mu}{2}-\delta\right)_n}\nonumber\\ &&\times {}_{13}F_{12} \left( {\displaystyle \lambda,1+\frac{\lambda}{2},\lambda+b-a,\lambda+c-a,\lambda+d-a,e,f, \atop \displaystyle \frac{\lambda}{2},1+a-b,1+a-c,1+a-d,1+\lambda-e,1+\lambda-f,} \right.\nonumber\\ && \left.\left. 
{\displaystyle g,\frac{\lambda}{2}-\gamma+1,\frac{\lambda}{2}+\gamma+1, \frac{\lambda}{2}-\epsilon+1,\frac{\lambda}{2}+\epsilon+1,-n \atop \displaystyle 1+\lambda-g,\frac{\lambda}{2}+\gamma,\frac{\lambda}{2}-\gamma, \frac{\lambda}{2}+\epsilon,\frac{\lambda}{2}-\epsilon,1+\lambda+n} \right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{3e13P3} \lambda=2a-b-c-d, \end{equation} \begin{equation} \label{4e13P3} \mu=2a-e-f-g, \end{equation} \begin{equation} \label{5e13P3} \gamma^2 =\frac{\lambda^2}{4} -\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {bcd+p(a-p)(a-b-c-d)}, \end{equation} \begin{equation} \label{6e13P3} \delta^2 =\frac{\mu^2}{4} -\frac{q(a-q)(a-e-f)(a-e-g)(a-f-g)} {efg+q(a-q)(a-e-f-g)} \end{equation} and \begin{eqnarray} \label{7e13P3} &&\epsilon^2 =\frac{\lambda^2}{4}\\ &&-\,\frac{\left[\displaystyle{q(a-q)(a-e-f)(a-e-g)(a-f-g) \atop +\,n(\mu+n)(efg+q(a-q)(a-e-f-g))}\right]} {\left[\displaystyle{(a-e-f)(a-e-g)(a-f-g) \atop -\,(\mu+n)(ef+eg+fg+a(a-e-f-g)-q(a-q))}\right]}.\nonumber \end{eqnarray} \end{Proposition} \begin{proof} Use $1+\frac{a}{2},e,f,g,a-q+1,q+1$ for $a_1,a_2,a_3,a_4,a_5,a_6$, respectively, $\frac{a}{2},1+a-e,1+a-f,1+a-g,q,a-q,1+a+n$ for $b_1,b_2,b_3,b_4,b_5,b_6,b_7$, respectively, and $x=1$ in (\ref{2e13P1}), and then apply (\ref{1e1R3C11P2}) to sum the ${}_9F_8(1)$ series on the right-hand side. The result follows after some simplification. \end{proof} Equation (\ref{2e13P3}) above involves two terminating very-well-poised ${}_{13}F_{12}(1)$ series. It generalizes the result of Srivastava, Vyas and Fatawat \cite[Theorem 3.4]{SVF} between two terminating very-well-poised ${}_{11}F_{10}(1)$ series, which in turn is a generalization of Bailey's ${}_9F_8$ transformation (see \cite[Eq.\ 4.3.7]{Ba}). Indeed, \cite[Theorem 3.4]{SVF} follows from our result (\ref{2e13P3}) upon setting $q=e$. We next extend the formulas found in \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba}. 
First, we obtain an even more general result: \begin{Proposition} \label{13P4} We have \begin{eqnarray} \label{1e13P4} &&{}_8F_7 \left( \left. {\displaystyle a,b,c,d,a-p+1,p+1,q+1,-n \atop \displaystyle 1+a-b,1+a-c,1+a-d,p,a-p,q,w}\right| 1\right)\\ &&=\frac{(2\lambda-a)_n(\lambda-a)_n(\alpha+1)_n}{(1+\lambda)_n(2\lambda-2a)_n(\alpha)_n}\nonumber\\ &&\times {}_{13}F_{12} \left( {\displaystyle \lambda,1+\frac{\lambda}{2},\frac{a}{2},\frac{a+1}{2},\lambda+b-a,\lambda+c-a,\lambda+d-a, \atop \displaystyle \frac{\lambda}{2},\frac{2+2\lambda-a}{2},\frac{1+2\lambda-a}{2},1+a-b,1+a-c,1+a-d,} \right.\nonumber\\ && \left.\left. {\displaystyle 1+a-w,\frac{\lambda}{2}-\gamma+1,\frac{\lambda}{2}+\gamma+1, \frac{\lambda}{2}-\delta+1,\frac{\lambda}{2}+\delta+1,-n \atop \displaystyle \lambda+w-a,\frac{\lambda}{2}+\gamma,\frac{\lambda}{2}-\gamma, \frac{\lambda}{2}+\delta,\frac{\lambda}{2}-\delta,1+\lambda+n} \right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e13P4} \lambda=2a-b-c-d, \end{equation} \begin{equation} \label{3e13P4} w=1+2a-2\lambda-n, \end{equation} \begin{equation} \label{4e13P4} \alpha=\frac{q(2\lambda-a)}{2q-a}, \end{equation} \begin{equation} \label{5e13P4} \gamma^2 =\frac{\lambda^2}{4} -\frac{p(a-p)(a-b-c)(a-b-d)(a-c-d)} {bcd+p(a-p)(a-b-c-d)} \end{equation} and \begin{equation} \label{6e13P4} \delta^2 =\frac{\lambda^2}{4} -\frac{q(2\lambda-a)+n(2q-a)} {2}. \end{equation} \end{Proposition} \begin{proof} Use $q+1,q,w,1$ for $a_1,b_1,b_2,x$, respectively, in (\ref{2e13P1}) (where $w$ is as given in (\ref{3e13P4})) and then apply (\ref{e1R1C1P18}) to sum the Saalsch\"utzian ${}_4F_3(1)$ series on the right-hand side. The final result follows after some simplification. \end{proof} We note that the terminating ${}_8F_7(1)$ series on the left-hand side of (\ref{1e13P4}) is Saalsch\"utzian (i.e. with parametric excess $\omega=1$) and deviates from a well-poised series in two pairs of numerator and denominator parameters. 
The terminating ${}_{13}F_{12}(1)$ series on the right-hand side of (\ref{1e13P4}) is very-well-poised. The special case of Proposition \ref{13P4} given in the next corollary is a direct extension of the results found in \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba}: \begin{Corollary} \label{1C13P4} We have \begin{eqnarray} \label{1e1C13P4} &&{}_6F_5 \left( \left. {\displaystyle a,b,c,d,q+1,-n \atop \displaystyle 1+a-b,1+a-c,1+a-d,q,w}\right| 1\right)\\ &&=\frac{(2\lambda-a)_n(\lambda-a)_n(\alpha+1)_n}{(1+\lambda)_n(2\lambda-2a)_n(\alpha)_n}\nonumber\\ &&\times {}_{11}F_{10} \left( {\displaystyle \lambda,1+\frac{\lambda}{2},\frac{a}{2},\frac{a+1}{2},\lambda+b-a,\lambda+c-a,\lambda+d-a, \atop \displaystyle \frac{\lambda}{2},\frac{2+2\lambda-a}{2},\frac{1+2\lambda-a}{2},1+a-b,1+a-c,1+a-d,} \right.\nonumber\\ && \left.\left. {\displaystyle 1+a-w, \frac{\lambda}{2}-\delta+1,\frac{\lambda}{2}+\delta+1,-n \atop \displaystyle \lambda+w-a, \frac{\lambda}{2}+\delta,\frac{\lambda}{2}-\delta,1+\lambda+n} \right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e1C13P4} \lambda=1+2a-b-c-d, \end{equation} \begin{equation} \label{3e1C13P4} w=1+2a-2\lambda-n, \end{equation} \begin{equation} \label{4e1C13P4} \alpha=\frac{q(2\lambda-a)}{2q-a} \end{equation} and \begin{equation} \label{5e1C13P4} \delta^2 =\frac{\lambda^2}{4} -\frac{q(2\lambda-a)+n(2q-a)} {2}. \end{equation} \end{Corollary} \begin{proof} Let $p=b$ in (\ref{1e13P4}) and then replace $b+1$ with $b$. \end{proof} Equation (\ref{1e1C13P4}) above expresses a certain terminating Saalsch\"utzian (i.e. with parametric excess $\omega=1$) ${}_6F_5(1)$ series that deviates from a well-poised series in two pairs of numerator and denominator parameters in terms of a terminating very-well-poised ${}_9F_8(1)$ series. 
This equation is a direct generalization of the classical results found by Bailey in \cite[Eqs.\ 8.1, 8.2 and 8.3]{Bailey3} and reproduced in \cite[Eqs.\ 4.5.3, 4.5.4 and 4.5.5]{Ba} that transform terminating nearly-poised ${}_5F_4(1)$ series with parametric excesses $\omega=1$ and $\omega=2$ and a terminating very-well-poised ${}_6F_5(1)$ series with parametric excess $\omega=1$ in terms of terminating very-well-poised ${}_9F_8(1)$ series. Indeed, we have the following: \begin{enumerate}[label=(\alph*)] \item Letting $q \to -n$ in (\ref{1e1C13P4}) gives \cite[Eq.\ 4.5.3]{Ba}. \item Letting $q \to \frac{a}{2}$ in (\ref{1e1C13P4}) gives \cite[Eq.\ 4.5.4]{Ba}. \item Letting $q \to \infty$ in (\ref{1e1C13P4}) gives \cite[Eq.\ 4.5.5]{Ba}. \end{enumerate} \section{Extensions of classical quadratic transformations} In this last section, we show how by taking certain limits of the relations in Section 3, we can obtain generalizations of some classical quadratic transformations of hypergeometric functions. Two of the classical quadratic transformations are the following: \begin{eqnarray} \label{1e6} &&{}_3F_2 \left( \left. {\displaystyle a,b,c \atop \displaystyle 1+a-b,1+a-c}\right| x\right)\\ &&= (1-x)^{-a} {}_3F_2 \left( \left. {\displaystyle \frac{a}{2},\frac{a+1}{2},1+a-b-c \atop \displaystyle 1+a-b,1+a-c}\right| -\frac{4x}{(1-x)^2}\right)\nonumber \end{eqnarray} and \begin{eqnarray} \label{2e6} &&{}_4F_3 \left( \left. {\displaystyle a,1+\frac{a}{2},b,c \atop \displaystyle \frac{a}{2},1+a-b,1+a-c}\right| x\right)\\ &&= (1+x)(1-x)^{-a-1} {}_3F_2 \left( \left. {\displaystyle \frac{a+1}{2},\frac{a+2}{2},1+a-b-c \atop \displaystyle 1+a-b,1+a-c}\right| -\frac{4x}{(1-x)^2}\right).\nonumber \end{eqnarray} The transformation (\ref{1e6}) is due to Whipple \cite{Whipple5} and (\ref{2e6}) is due to Bailey \cite{Bailey3}. The transformation (\ref{2e6}) is sometimes referred to as the companion transformation of (\ref{1e6}) (see \cite[Section 4]{GS}, for example). 
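As a quick numerical sanity check (not part of the paper), the classical pair (\ref{1e6}) and (\ref{2e6}) can be verified in plain Python. The parameter values below are arbitrary test choices, and $x$ is kept small so that $-4x/(1-x)^2$ lies inside the unit disk and the truncated series reach machine precision.

```python
# Numerical check of (1e6) and (2e6); illustration only, not part of the paper.
# Parameters are arbitrary; x = 0.1 gives -4x/(1-x)^2 ~ -0.494, inside the
# unit disk, so 200 terms are far more than enough for float precision.

def hyp(num, den, z, terms=200):
    """Truncated hypergeometric series pFq(num; den; z)."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        r = z / (k + 1)
        for a_ in num:
            r *= a_ + k
        for b_ in den:
            r /= b_ + k
        t *= r
        if t == 0.0:  # a terminating series has hit its last term
            break
    return s

a, b, c, x = 0.3, 0.2, 0.1, 0.1
y = -4 * x / (1 - x) ** 2

# Whipple's quadratic transformation (1e6):
lhs = hyp([a, b, c], [1 + a - b, 1 + a - c], x)
rhs = (1 - x) ** (-a) * hyp([a / 2, (a + 1) / 2, 1 + a - b - c],
                            [1 + a - b, 1 + a - c], y)
assert abs(lhs - rhs) < 1e-10

# Bailey's companion transformation (2e6):
lhs2 = hyp([a, 1 + a / 2, b, c], [a / 2, 1 + a - b, 1 + a - c], x)
rhs2 = (1 + x) * (1 - x) ** (-a - 1) * hyp(
    [(a + 1) / 2, (a + 2) / 2, 1 + a - b - c],
    [1 + a - b, 1 + a - c], y)
assert abs(lhs2 - rhs2) < 1e-10
```

Expanding both sides to first order in $x$ reproduces the identity $(1+a-b)(1+a-c)-(1+a)(1+a-b-c)=bc$, which is what makes the $O(x)$ coefficients agree.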
In this section, we shall obtain quadratic transformations that extend both (\ref{1e6}) and its companion (\ref{2e6}) in terms of single transformations (see (\ref{1e6P1}) and (\ref{1e2C6P1}) below). We begin with our most general extension which follows as a consequence of Proposition \ref{3P16} from Section 3. \begin{Proposition} \label{6P1} The following quadratic transformation holds: \begin{eqnarray} \label{1e6P1} &&{}_6F_5 \left( \left. {\displaystyle a,b,c,a-p+1,p+1,q+1 \atop \displaystyle 1+a-b,1+a-c,p,a-p,q}\right| x\right)\\ &&=\left(1+\left(\frac{a-q}{q}\right)x\right) (1-x)^{-a-1}\nonumber\\ &&\times {}_5F_4 \left( \left. {\displaystyle \frac{a}{2},\frac{a+1}{2},a-b-c,\gamma+1,\delta+1 \atop \displaystyle 1+a-b,1+a-c,\gamma,\delta}\right| -\frac{4x}{(1-x)^2}\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e6P1} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}, \end{equation} and \begin{equation} \label{3e6P1} \delta=\frac{q+(a-q)x}{1+x}. \end{equation} \end{Proposition} \begin{proof} Let $w=-n/x$ in (\ref{1e3P16}) and then let $n \to \infty$. \end{proof} The quadratic transformation (\ref{1e6P1}) above is a very general one. It extends both (\ref{1e6}) and (\ref{2e6}) as we show after Corollary \ref{2C6P1} below. Furthermore, the presence of the variable $x$ as part of $\delta$ that appears in the numerator and denominator parameters in the ${}_5F_4$ series on the right-hand side of (\ref{1e6P1}) seems to be a new feature and distinguishes this transformation from most other known transformations. Two interesting quadratic transformations that follow as special cases of (\ref{1e6P1}) are derived below. \begin{Corollary} \label{1C6P1} The following quadratic transformation holds: \begin{eqnarray} \label{1e1C6P1} &&{}_5F_4 \left( \left. {\displaystyle a,b,c,a-p+1,p+1 \atop \displaystyle 1+a-b,1+a-c,p,a-p}\right| x\right)\\ &&=(1-x)^{-a}\nonumber\\ &&\times {}_4F_3 \left( \left. 
{\displaystyle \frac{a}{2},\frac{a+1}{2},a-b-c,\gamma+1 \atop \displaystyle 1+a-b,1+a-c,\gamma}\right| -\frac{4x}{(1-x)^2}\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e1C6P1} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} \end{Corollary} \begin{proof} Let $q \to \infty$ in (\ref{1e6P1}). \end{proof} We note that the transformation (\ref{1e1C6P1}) is the same as the special case $k=1$ in \cite[Theorem $3.4$]{Maier}. \begin{Corollary} \label{2C6P1} The following quadratic transformation holds: \begin{eqnarray} \label{1e2C6P1} &&{}_4F_3 \left( \left. {\displaystyle a,b,c,q+1 \atop \displaystyle 1+a-b,1+a-c,q}\right| x\right)\\ &&=\left(1+\left(\frac{a-q}{q}\right)x\right) (1-x)^{-a-1}\nonumber\\ &&\times {}_4F_3 \left( \left. {\displaystyle \frac{a}{2},\frac{a+1}{2},1+a-b-c,\delta+1 \atop \displaystyle 1+a-b,1+a-c,\delta}\right| -\frac{4x}{(1-x)^2}\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e2C6P1} \delta=\frac{q+(a-q)x}{1+x}. \end{equation} \end{Corollary} \begin{proof} Let $p \to \infty$ in (\ref{1e6P1}). \end{proof} The quadratic transformation (\ref{1e2C6P1}) above is the direct extension of (\ref{1e6}) and its companion (\ref{2e6}). In fact, Whipple's transformation (\ref{1e6}) follows from (\ref{1e2C6P1}) by letting $q \to \infty$, and Bailey's companion transformation (\ref{2e6}) follows from (\ref{1e2C6P1}) by letting $q \to \frac{a}{2}$. \begin{Corollary} \label{3C6P1} We have, if $q \neq a/2$, \begin{eqnarray} \label{1e3C6P1} &&{}_6F_5 \left( \left. {\displaystyle a,b,c,a-p+1,p+1,q+1 \atop \displaystyle 1+a-b,1+a-c,p,a-p,q}\right| -1\right)\\ &&=\frac{(2q-a)2^{-a-1}}{q}\nonumber\\ &&\times {}_4F_3 \left( \left. {\displaystyle \frac{a}{2},\frac{a+1}{2},a-b-c,\gamma+1 \atop \displaystyle 1+a-b,1+a-c,\gamma}\right| 1\right),\nonumber \end{eqnarray} where \begin{equation} \label{2e3C6P1} \gamma=\frac{p(a-p)(b+c-a)}{bc-p(a-p)}. \end{equation} \end{Corollary} \begin{proof} Let $x \to -1$ in (\ref{1e6P1}). 
\end{proof} The formula in (\ref{1e3C6P1}) above is a generalization of Whipple's formula (see \cite{Whipple2,Whipple5}) \begin{eqnarray} \label{3e6} &&{}_3F_2 \left( \left. {\displaystyle a,b,c \atop \displaystyle 1+a-b,1+a-c}\right| -1\right)\\ &&=2^{-a} {}_3F_2 \left( \left. {\displaystyle \frac{a}{2},\frac{a+1}{2},1+a-b-c \atop \displaystyle 1+a-b,1+a-c}\right| 1\right).\nonumber \end{eqnarray} In fact, (\ref{3e6}) follows from (\ref{1e3C6P1}) by letting first $q \to \infty$ and then letting $p \to \infty$ in the resulting equation.
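As a closing numerical sanity check (again an illustration, not part of the paper), the extension (\ref{1e2C6P1}) can be tested the same way. Note that $\delta$ of (\ref{2e2C6P1}) must be recomputed from $x$, since it depends on the argument; the parameter values are arbitrary.

```python
# Numerical check of the extended quadratic transformation (1e2C6P1);
# illustration only. Parameters are arbitrary test values.

def hyp(num, den, z, terms=200):
    """Truncated hypergeometric series pFq(num; den; z)."""
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        r = z / (k + 1)
        for a_ in num:
            r *= a_ + k
        for b_ in den:
            r /= b_ + k
        t *= r
        if t == 0.0:
            break
    return s

a, b, c, q, x = 0.3, 0.2, 0.1, 0.7, 0.1
y = -4 * x / (1 - x) ** 2
delta = (q + (a - q) * x) / (1 + x)   # (2e2C6P1): delta depends on x

lhs = hyp([a, b, c, q + 1], [1 + a - b, 1 + a - c, q], x)
rhs = (1 + (a - q) / q * x) * (1 - x) ** (-a - 1) * hyp(
    [a / 2, (a + 1) / 2, 1 + a - b - c, delta + 1],
    [1 + a - b, 1 + a - c, delta], y)
assert abs(lhs - rhs) < 1e-10

# Consistency with the limits discussed above: at q = a/2 the parameter
# delta collapses to a/2 for every x, which is what turns (1e2C6P1) into
# Bailey's companion transformation (2e6).
q2 = a / 2
delta2 = (q2 + (a - q2) * x) / (1 + x)
assert abs(delta2 - a / 2) < 1e-12
```

Letting $q \to \infty$ in the same code instead recovers the check of Whipple's transformation (\ref{1e6}).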
René Rucklin was a French politician, born in Offemont (Territoire de Belfort) and died in Belfort (Territoire de Belfort).

Biography

Born into a modest family, he earned a doctorate in law and practiced as a lawyer in Montbéliard; he was among the lawyers assigned to defend Marcel Poyer, a member of the Bonnot gang. After the war, he wrote for the newspaper Germinal, in which he took a stand for the rehabilitation of Lucien Bersot, a soldier shot as an example. From 1916 onward he was active in the Socialist Party. A municipal councillor and later deputy mayor of Belfort, he also sat on the departmental council. He was a deputy for the Doubs, in the Montbéliard constituency, from 1928 to 1936, sitting with the SFIO group.

Sources

Deputy for the Doubs (Third Republic)
Born December 1889
Born in the Territoire de Belfort
Died October 1960
Died in Belfort
Died at age 70
/* * @test NotCompliantCauseTest.java * @bug 6374290 * @summary Test that NotCompliantMBeanException has a cause in case of * type mapping problems. * @author Daniel Fuchs, Alexander Shusherov * @run clean NotCompliantCauseTest * @run build NotCompliantCauseTest * @run main NotCompliantCauseTest */ /* * NotCompliantCauseTest.java * * Created on January 20, 2006, 2:56 PM / dfuchs * */ import java.util.Random; import java.util.logging.Logger; import javax.management.MBeanServer; import javax.management.MBeanServerFactory; import javax.management.NotCompliantMBeanException; import javax.management.ObjectName; import javax.management.openmbean.OpenDataException; /** * * @author Sun Microsystems, 2005 - All rights reserved. */ public class NotCompliantCauseTest { /** * A logger for this class. **/ private static final Logger LOG = Logger.getLogger(NotCompliantCauseTest.class.getName()); /** * Creates a new instance of NotCompliantCauseTest */ public NotCompliantCauseTest() { } /** * Test that NotCompliantMBeanException has a cause in case of * type mapping problems. **/ public static void main(String[] args) { NotCompliantCauseTest instance = new NotCompliantCauseTest(); instance.test1(); } public static class RuntimeTestException extends RuntimeException { public RuntimeTestException(String msg) { super(msg); } public RuntimeTestException(String msg, Throwable cause) { super(msg,cause); } public RuntimeTestException(Throwable cause) { super(cause); } } /** * Test that NotCompliantMBeanException has a cause in case of * type mapping problems. 
**/ void test1() { try { MBeanServer mbs = MBeanServerFactory.createMBeanServer(); ObjectName oname = new ObjectName("domain:type=test"); mbs.createMBean(NotCompliant.class.getName(), oname); System.err.println("ERROR: expected " + "NotCompliantMBeanException not thrown"); throw new RuntimeTestException("NotCompliantMBeanException not thrown"); } catch (RuntimeTestException e) { throw e; } catch (NotCompliantMBeanException e) { Throwable cause = e.getCause(); if (cause == null) throw new RuntimeTestException("NotCompliantMBeanException " + "doesn't have any cause.", e); while (cause.getCause() != null) { if (cause instanceof OpenDataException) break; cause = cause.getCause(); } if (! (cause instanceof OpenDataException)) throw new RuntimeTestException("NotCompliantMBeanException " + "doesn't have expected cause ("+ OpenDataException.class.getName()+"): "+cause, e); System.err.println("SUCCESS: Found expected cause: " + cause); } catch (Exception e) { System.err.println("Unexpected exception: " + e); throw new RuntimeException("Unexpected exception: " + e,e); } } public interface NotCompliantMXBean { Random returnRandom(); } public static class NotCompliant implements NotCompliantMXBean { public Random returnRandom() { return new Random(); } } }
{ "redpajama_set_name": "RedPajamaGithub" }
7,902
CBN Family Christian World News is a half-hour weekly news program devoted to the work of the Holy Spirit around the globe. Produced by CBN News, this award-winning newscast airs at various times in the United States and around the world. Christian World News - Preparing for Revival - January 27, 2023 How the church's response to persecution could spark a global harvest. Christian World News - A Taste of Heaven on Earth - January 20, 2023 What's behind the Paris church that's changing lives on a phenomenal scale. Christian World News - Uprising in Iran - January 13, 2023 The Iranian regimes crackdown on protestors is only making the resistance stronger. Christian World News - Crackdown in Hong Kong - January 6, 2023 As China tightens its grip on Hong Kong religious freedom is at risk and Christians are fleeing Christian World News - Brave Band of Believers - December 30, 2022 The team that stayed to serve the people of Kyiv at the height of the Russia invasion. Christian World News - The Hidden Stories of Christmas - December 23, 2022 What you never knew about the songs we sing every Christmas. Christian World News - Mayflower Church - December 16, 2022 A refugee church flees China's repression. Christian World News - Chaos in Haiti - December 9, 2022 Gangs are ruling the city's capital, putting tens of thousands at risk. Christian World News - The Uprising - December 2, 2022 China's citizens rise up against the surveillance state Christian World News - The Image of God - November 25, 2022 A traveling exhibit brings the Sistine Chapel to cities across the globe. Christian World News - RETURN OF "THE CHOSEN" - November 18, 2022 The cast and crew of "The Chosen" celebrate the launch of season three. Christian World News - Healing the Divide - November 11, 2022 As politics divides our nation, Christians suggest a solution to stop the anger. 
Christian World News - "GOD GAVE ME A DREAM" - November 4, 2022
When God told a pastor to build a church, he had no idea it would become a refuge for thousands of people.
Christian World News - Escape from Hell - October 28, 2022
How a Ukrainian pastor preached the Gospel in prison and escaped the Russians.
Christian World News - China's War On Faith - October 21, 2022
How China's regime is persecuting Muslims, Christians and all people of faith.
Christian World News - Hidden Holy Sites - October 14, 2022
Biblical sites in Saudi Arabia could be endangered.
Source: https://imathworks.com/tex/tex-latex-need-help-with-making-logic-trees-in-qtree-tikz-qtree-i-e-aligning-numbering-lines/

# [Tex/LaTex] Need help with making Logic Trees in qtree/tikz-qtree (i.e. aligning, numbering lines)

Tags: logic, qtree, tikz-qtree, vertical alignment

I would like to make something similar to the logic tree below. I've tried using tikz-qtree, but I can't figure out how to number every line in the tree. I did, however, find something else doable in qtree. Here's a sample of my code (note that it's not the same tree as below):

    \documentclass[a4paper, english, 12pt, reqno]{article}
    \usepackage[T1]{fontenc}
    \usepackage[utf8]{inputenc}
    \usepackage[norsk]{babel}
    \usepackage{amsmath}
    \usepackage{amssymb}
    \usepackage{amsthm}
    \usepackage{mathtools}
    \usepackage[shortlabels]{enumitem}
    \usepackage{bm}
    \usepackage{qtree}

    \begin{document}
    \maketitle{}

    \section{}
    \begin{center}
    \begin{tabular}{c c c}
    \Tree[.{1\\2\\ 3\\4} [.5 [.6 ] ] ] &
    \Tree[.$A\supset B$\\$C\vee A$\\$\sim\sim C$\,\checkmark\\$C$ [.$C$ $s$ $s$ ][.$A$ $c$ $c$ ] ] &
    \Tree[.SM\\SM\\SM\\3$\sim\sim$D [.3$\vee$D [.1$\supset$D ] ] ]
    \end{tabular}
    \end{center}

    \end{document}

So I have some questions, which I hope will help me get closer to a similar tree as in the picture.

1) My lines are not aligned horizontally; any ideas on how to fix that?

2) How can I remove the lines that connect the numbers, and the notation on the right side of the tabular environment?

3) This question is not about my code, but I will have this problem later. As you see on line 9 in the picture, there is a node all the way down to the 12th line, skipping the lines in between. How do I do this?

I'm open to using tikz-qtree too, and I probably missed something in the manual, so I will try to read more, as I'm quite new to using qtree in LaTeX. Thanks in advance!

## Answer

This is a variant of Ignasi's answer. It uses a new package based on forest. The advantage is that the lines are automatically numbered, the justifications are added as annotations with their nodes using the key `just` (no need for a separate tree), and the vertical spacing between lines which should be grouped together (as when listing assumptions) is corrected automatically. In addition, styles are provided to move nodes (`move by`) to lower lines in the tree without the need to set special tier names or enter empty nodes. Cross-referencing support is provided in justifications and closure annotations (using either named nodes or relative node names), so that line numbers need not be hard-coded. Further options and details are explained in the package documentation.

    \documentclass[tikz,multi,border=10pt]{standalone}
    \usepackage{prooftrees,amsmath,turnstile}
    \newcommand*{\tnot}{\ensuremath{\mathord{\sim}}}
    \begin{document}
    \begin{prooftree} % uses Ignasi's code for the main tree (https://tex.stackexchange.com/a/233576/)
      {
        to prove={(\exists x)Fx \supset (\forall x)Fx \sststile{}{} (\forall x) (Fx \supset (\forall y) Fy)}
      }
      [(\exists x) Fx \supset (\forall x) Fx, checked, just=SM, name=pr
        [\tnot (\forall x) (Fx \supset (\forall y) Fy), checked, grouped, just=SM
          [(\exists x) \tnot (Fx \supset (\forall y) Fy), checked=a, just={$\tnot\forall$D:!u}
            [\tnot (Fa \supset (\forall y) Fy), checked, just={$\exists$D:!u}
              [Fa, just={$\tnot\supset$D:!u}, name=fa
                [\tnot (\forall y) Fy, checked, grouped, just={$\tnot\supset$D:!uu}
                  [(\exists y) \tnot Fy, checked=b, just={$\tnot\forall$D:!u}
                    [\tnot Fb, just={$\exists$D:!u}, name=nofb
                      [\tnot (\exists x) Fx, checked, just={$\supset$D:pr}
                        [(\forall x) \tnot Fx, subs=a, just={$\tnot\exists$D:!u}
                          [\tnot Fa, close={:fa,!c}, just={$\forall$D:!u}
                          ]
                        ]
                      ]
                      [(\forall x) Fx, subs=b
                        [Fb, close={:nofb,!c}, just={$\forall$D:!u}, move by=2
                        ]
                      ]
                    ]
                  ]
                ]
              ]
            ]
          ]
        ]
      ]
    \end{prooftree}
    \end{document}
Source: https://homework.cpm.org/category/CC/textbook/cca2/chapter/12/lesson/12.1.1/problem/12-12

### Home > CCA2 > Chapter 12 > Lesson 12.1.1 > Problem 12-12

1. If you remember what n! means, you can do some messy calculations quickly or compute problems that are too large for your calculator's memory. For instance, if you wanted to calculate 9!/6!, you could use the n! button on your calculator and find that 9! = 362,880 and 6! = 720, so 9!/6! = 504. You could also use a simplification technique. Since 9! = 9 ⋅ 8 ⋅ 7 ⋅ 6 ⋅ 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1 and 6! = 6 ⋅ 5 ⋅ 4 ⋅ 3 ⋅ 2 ⋅ 1, you can rewrite 9!/6! = 9 ⋅ 8 ⋅ 7 = 504.

2. Use this simplification technique to simplify each of the following problems before computing the result.

Hints:

90

$\frac{20\cdot19}{2\cdot1}$

Refer to part (b). Remember to write factors for both 4! and 3!.

In this case, it would be very tedious to write out all the factors. Think of all the Giant Ones you would create if you did write it all out. What is left?
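The cancellation technique described above is easy to check with exact integer arithmetic; the snippet below (not part of the original problem set) redoes the 9!/6! example and the (20 ⋅ 19)/(2 ⋅ 1) hint, reading the latter as 20!/(18! ⋅ 2!).

```python
# Check the factorial-cancellation technique with exact integers.
from math import factorial

assert factorial(9) == 362880 and factorial(6) == 720
# Cancel the common factors of 6!: only 9 * 8 * 7 is left.
assert factorial(9) // factorial(6) == 9 * 8 * 7 == 504

# The (20 * 19)/(2 * 1) hint, read as 20!/(18! * 2!); an assumption,
# since the original subproblem expressions were not preserved:
assert factorial(20) // (factorial(18) * factorial(2)) == (20 * 19) // (2 * 1) == 190
```

Because Python integers are arbitrary precision, the full factorials never overflow, but the cancelled form is still far less work by hand.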
Source: http://mathoverflow.net/questions/162836/why-was-john-nashs-1950-game-theory-paper-such-a-big-deal/162862

Why was John Nash's 1950 Game Theory paper such a big deal?

I'm trying to understand why John Nash's 1950 2-page paper that was published in PNAS was such a big deal. Unless I'm mistaken, the 1928 paper by John von Neumann demonstrated that all n-player non-cooperative and zero-sum games possess an equilibrium solution in terms of pure or mixed strategies.

From what I understand, Nash used fixed point iteration to prove that non-zero-sum games would also have the analogous result. Why was this such a big deal in light of the earlier work by von Neumann?

There are two references I provide that are good: one is this discussion on simple proofs of Nash's theorem, and this one is a very well done (readable and accurate) survey of the history in PNAS.

Comments:

- To me the real big deal about John Nash is not this paper, but the fact that he recovered from schizophrenia spontaneously. – Sylvain JULIEN Apr 8 '14 at 20:51
- The big deal is not a theorem, but a definition. The concept of Nash equilibrium captures an essential feature of social and economic interactions. – alvarezpaiva Apr 8 '14 at 20:52
- @SylvainJULIEN Is that why he got the Nobel prize? I believe that was the OP's question. – Igor Rivin Apr 8 '14 at 21:04
- From talking to economists (I am not one) I think the answer is that there was little general theory about non-zero-sum games until Nash's result. I assume that people had found mixed strategy solutions for the prisoner's dilemma since it is elementary, but it is not at all obvious that such solutions exist in more complicated games. – Paul Siegel Apr 8 '14 at 21:15

Answer:

I think von Neumann dealt with the case $n=2$, and it was by no means obvious how to extend the concept of equilibrium to the general case and prove that it always exists. More precisely, $n$ players before Nash were reduced to the $n=2$ case by partitioning the players into two groups in all possible ways. Once you regard several players as a single player, they are meant to cooperate, as they must act like a single player. Nash is very clear about this in his 1951 Annals paper:

    Von Neumann and Morgenstern have developed a very fruitful theory of two-person zero-sum games in their book Theory of Games and Economic Behavior. This book also contains a theory of n-person games of a type which we would call cooperative. This theory is based on an analysis of the interrelationships of the various coalitions which can be formed by the players of the game.

    Our theory, in contradistinction, is based on the absence of coalitions in that it is assumed that each participant acts independently, without collaboration or communication with any of the others.

    The notion of an equilibrium point is the basic ingredient in our theory. This notion yields a generalization of the concept of the solution of a two-person zero-sum game. It turns out that the set of equilibrium points of a two-person zero-sum game is simply the set of all pairs of opposing "good strategies." In the immediately following sections we shall define equilibrium points and prove that a finite non-cooperative game always has at least one equilibrium point. We shall also introduce the notions of solvability and strong solvability of a non-cooperative game and prove a theorem on the geometrical structure of the set of equilibrium points of a solvable game.

Comments:

- In the comments to the OP Paul Siegel suggests that Nash's notion also extended the earlier results from the zero-sum case to the non-zero-sum case. It is ambiguous from the abstract, where Nash writes "This notion yields a generalization of the concept of the solution of a two-person zero-sum game." As your answer stresses the $n > 2$ generalization, I just wanted to remark that it may also generalize earlier results in that Nash's notion of equilibrium does not depend on the game being zero-sum. I trust someone will correct me if I have this wrong. – R Hahn Apr 8 '14 at 22:10
- @R Hahn: I agree with you. In Nash's paper, the payoff function of each player is an arbitrary linear function on the convex polytope representing the mixed strategies. – GH from MO Apr 8 '14 at 23:49
- It's not too hard to see that if you know how to generalize to additional players, you also know how to generalize to non-zero-sum. Simply add an additional player with one strategy whose payoff is minus the sum of the other players' payoffs. – Will Sawin Apr 16 '14 at 1:13

Answer:

This answer overlaps with other answers but I think another restatement may be helpful because the situation is slightly confusing.

After the two-person zero-sum result, it is natural to ask about extending the results to $n > 2$ and to non-zero-sum games. Sometimes it is stated that Nash was the first to carry out this extension, but this is slightly misleading, because von Neumann and Morgenstern did consider both $n > 2$ and non-zero-sum games and proved various things about them. However, the key point is that it's important to ask the right question. Intuitively, the basic question in game theory is to find the "optimal strategy", but it's not immediately clear what this means in an $n$-person non-cooperative game. We now understand, thanks to Nash, that a basic necessary condition for a set of strategies to be "optimal" is for them to form a Nash equilibrium, but von Neumann and Morgenstern did not hit on this concept. When they treated $n$-person games, they addressed different questions, such as what happens if the players form two coalitions. So Nash didn't just answer the obvious question; the right question wasn't obvious, but he found it anyway, and answered it.

The second innovative aspect of Nash's work is that the two-person zero-sum result was based on the theory of linear programming and minimax. Proving the existence of a Nash equilibrium requires different techniques. So the naive approach to generalization, namely staring at the existing result and trying to figure out how to use the same ideas to prove something more general, does not lead to Nash's key insight.

Answer:

The significance is best interpreted in conjunction with Nash's accompanying work.

Myerson gives a good history of the theory: http://home.uchicago.edu/rmyerson/research/jelnash.pdf

Here are some important points:

    Thus von Neumann (1928) argued that virtually any competitive game can be modeled by a mathematical game with the following simple structure: There is a set of players, each player has a set of strategies, each player has a payoff function from the Cartesian product of these strategy sets into the real numbers, and each player must choose his strategy independently of the other players. ...

    Von Neumann did not consistently apply this principle of strategic independence, however. In his analysis of games with more than two players, von Neumann (1928) assumed that players would not simply choose their strategies independently, but would coordinate their strategies in coalitions. Furthermore, by his emphasis on max-min values, von Neumann was implicitly assuming that any strategy choice for a player or coalition should be evaluated against the other players' rational response, as if the others could plan their response after observing this strategy choice.
Before Nash, however, no one seems to have noticed that these assumptions were inconsistent with von Neumann's own argument for strategic independence of the players in the normal form.\n\nVon Neumann (1928) also added two restrictions to his normal form that severely limited its claim to be a general model of social interaction for all the social sciences: He assumed that payoff is transferable, and that all games are zero-sum.\n\nIn contrast, Nash provided a way to deal with the more general problem of non-transferable utility and non-zero-sum games.\n\nBut the most important new contribution of Nash (1951), fully as important as the general definition and the existence proof of Nash (1950b), was his argument that this noncooperative equilibrium concept, together with von Neumann's normal form, gives us a complete general methodology for analyzing all games.... Von Neumann's normal form is our general model for all games, and Nash's equilibrium is our general solution concept. ...\n\nNash (1951) also noted that the assumption of transferable utility can be dropped without loss of generality, because possibilities for transfer can be put into the moves of the game itself, and he dropped the zero-sum restriction that von Neumann had imposed.\n\n-\n\nAs as been rightly said, Nash defined a concept of equilibrium for zero-sum games with $n$ players, and proved the existence (but no uniqueness of course) of such, while Von Neumann and Morgenstern did that only for $n=2$ (or larger $n$ but with very strong hypotheses on the game that reduces the problem to a game with $n=2$ players). 
But it is important to note than while doing so, Nash also defines a concept of equilibrium for non-zero sum games with $n$ players, for such a game is equivalent to a zero sum game with $n+1$ players: just add one new player, the \"bank\", whose gain\/loss is defined as the negative of the sum of the gains of each other players.\n\nThat being said, the real-world meaning of the concept of Nash equilibrium is very tricky, and it is far from clear if\/when that concept is the right one to analyze a game situation, while in the case $n=2$, the Von Neumann\/Morgenstern concept is much more obviously the only right one.\n\n-\nJo\u00ebl, Of course there are also issues with the notion of value for zero-sum 2-person games, like the need to have mixed strategies which is problematic in various cases (and various others issues). Once you apply Von Neumann and Morgenstern utility theory on mixed outcomes you often loose the zero-sum property. But I agree that the notion of a value of zero-sum games is also very important. 
\u2013\u00a0 Gil Kalai Apr 24 '14 at 7:18","date":"2015-09-01 10:32:11","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7775620818138123, \"perplexity\": 431.9557626999441}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-35\/segments\/1440645171365.48\/warc\/CC-MAIN-20150827031251-00218-ip-10-171-96-226.ec2.internal.warc.gz\"}"}
Oliva Ascolana del Piceno (PDO) is an Italian fruit-and-vegetable product with a protected designation of origin. The designation Oliva Ascolana del Piceno refers both to brined olives and to stuffed olives produced in the Piceno area from the Ascolana Tenera olive cultivar. The product obtained the protected designation of origin in 2005, and in 2018 a dedicated consortium was established for its protection and promotion.

Production area
To carry the designation Oliva Ascolana del Piceno, the brined or stuffed olives must be produced in an area covering most of the provinces of Ascoli Piceno and Fermo and part of the Province of Teramo. In particular, the zones indicated for production are those at an altitude between 20 and 500 m above sea level, with soils ranging from calcareous-clayey to arenaceous and a pH that is on average sub-alkaline.

Characteristics
To fall within the PDO category, the finished product must meet the parameters established by Community Regulation No. 510/2006 and set out by the Italian Ministry of Agricultural, Food and Forestry Policies in its production specification.

Related entries: Olive ascolane (Ascolana-style stuffed olives), Ascolana, cuisine of the Marche region
{ "redpajama_set_name": "RedPajamaWikipedia" }
2,288
Sweden marked their first World Cup appearance in 12 years by beating South Korea, thanks to a penalty from captain Andreas Granqvist that was awarded after a video assistant referee (VAR) review. There was a short delay while Kim Min-woo's foul on Viktor Claesson was analysed on video and the referee pointed to the spot, but it did not affect Granqvist, who sent goalkeeper Cho Hyun-woo the wrong way. Sweden, who join Mexico at the top of Group F on three points, created the better chances in Nizhny Novgorod. Marcus Berg should have scored midway through the first half, but his close-range shot was beaten away by Cho, while South Korea did not manage a single shot on target.
{ "redpajama_set_name": "RedPajamaC4" }
6,435
Q: How can I give Ajax data (list) to mustache? I want to put the user objects I received via Ajax into the {{#users}} section in mustache and render them on the screen.

<table class="table table-horizontal table-bordered">
    <thead class="thead-strong">
        <tr>
            <th>user number</th>
        </tr>
    </thead>
    <tbody id="tbody">
    {{#users}}
        <tr>
            <td>{{id}}</td>
        </tr>
    {{/users}}
    </tbody>
</table>

This is the controller:

@PostMapping("/api/v1/eduPosts/registerAdminToUser")
public List<User> registerAdminToUser(@RequestBody UserRegisterRequestDto userRegisterRequestDto){
    List<User> users = userService.registerAdminToUser(userRegisterRequestDto);
    System.out.println(users);
    return users;
}

This is index.js:

update : function () {
    var data = {
        adminId: $('#adminId').val(),
        userId: $('#userId').val()
    };
    //var id = $('#id').val();
    $.ajax({
        type: 'POST',
        url: '/api/v1/eduPosts/registerAdminToUser',
        dataType: 'json',
        contentType: 'application/json; charset=utf-8',
        data: JSON.stringify(data)
    }).done(function (data) {
        $.each(data, function (idx, val) {
            alert(idx + " " + val.id);
            console.log(idx + " " + val.id);
        });
        alert(JSON.stringify(data));
    }).fail(function (error) {
        alert(JSON.stringify(error));
    });
}

Should I change the js file? Or do I have to change the mustache template? If I print the user objects in index.js, the data comes through correctly.

Thanks!
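For what it's worth, a server-side mustache template is rendered once, when the page is first built, so data arriving later via Ajax has to be rendered in the browser, typically in the .done() handler. A common approach (assuming the Mustache.js client library is available; the names below are illustrative, not from the post) is to keep the row template as a string and re-render it with Mustache.render(template, {users: data}). The snippet below hand-rolls that {{#users}} section expansion in plain JavaScript so it runs standalone:

```javascript
// Stand-in for what Mustache.render does with the {{#users}}...{{/users}}
// section: repeat the inner row template once per element of `users`,
// substituting {{id}} with each user's id.
function renderUsersSection(rowTemplate, users) {
  return users
    .map(function (u) { return rowTemplate.replace(/\{\{id\}\}/g, String(u.id)); })
    .join("");
}

var rows = renderUsersSection("<tr><td>{{id}}</td></tr>", [{ id: 1 }, { id: 2 }]);
console.log(rows); // <tr><td>1</td></tr><tr><td>2</td></tr>

// In the .done() callback one would then write the result into the table,
// e.g. $('#tbody').html(rows);
```

With the real library the equivalent call would be Mustache.render on the same template string with {users: data} as the view.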
{ "redpajama_set_name": "RedPajamaStackExchange" }
5,273
Lucky Strike is a cigarette brand owned by British American Tobacco. Created in 1871, it is represented by a very well-known logo, the bull's eye. It is among the stronger cigarettes and carries an image of rebelliousness (a product of its international marketing). The brand is sold in more than 90 countries around the world, being the best-selling and most valuable brand of the Brown & Williamson company. The brand is popular in France, Germany and Spain. In Brazil, the brand is operated by BAT Brasil, formerly Souza Cruz.

History
The Lucky Strike cigarette was launched on the market in 1871 by the R.A. Patterson company in the city of Richmond, in the American state of Virginia, and was the first cigarette brand to be mass-produced. The name "Lucky Strike" was chosen in reference to the days of the Gold Rush. In 1905, the American Tobacco Company bought the brand. In 1916, the brand was reintroduced to the market by the American Tobacco Company, sold in dark green packs. The following year the slogan "It's Toasted" was launched to describe the cigarette's production process at the time, along with the brand's new logo. This new logo was launched to reintroduce the brand and compete with other strong brands on the market. It was one of the first brands to use the advertising medium known as skywriting, in 1923, in which airplanes wrote the cigarette's name in the sky with smoke. In 1927, an advertising campaign aimed at the female public was launched with the slogan "Reach for a Lucky instead of sweet", alongside testimonials from actresses and singers. In 1930, the brand was the most popular and best-selling in the United States, with 43.2 billion cigarettes sold. During the Second World War the cigarette was also available in a mint flavor. In 1942, the pack began to be produced in white because, owing to the war, the green pigment was used in the production of military equipment; the brand adopted the slogan "Lucky Strike Has Gone To War". In 1944, the brand began a new advertising campaign with the slogan "Lucky Strike Means Fine Tobacco". This slogan became so famous that every pack of the cigarette carries the inscription to this day. In the 1950s the brand began sponsoring radio programs, gaining great visibility. In 1978, the tobacconist Nekko & Nekkos acquired the rights to the brand for export. In 1994, Brown & Williamson bought the rights to the brand for the domestic market. The following year, "Lucky Strike King Size" received a new size. In 1996, the Filtered Styles version was introduced to the market in the city of San Francisco, was expanded to the whole state of California the following year, and went into national distribution in 1999. The so-called "Trivia Cards" were introduced inside the packs in 1997. In 1999, the brand launched a sweeping marketing campaign that included free coffee and flowers on Valentine's Day. Afterwards, the consumers who took part in the campaign received at home a card with the inscription "Lucky Loves You", which carried a 1-800 number (equivalent to a toll-free call) where they could learn a little more about the brand and the product. The red-filter Lucky Strike is quite similar to Marlboro, but seems slightly stronger. It has dense smoke and a defined, woody flavor. Currently there are three versions of the cigarette: "Original", "King Size" and "Light".

In the media
In the early 1960s, Lucky Strike television commercials carried the slogan: "Lucky Strike separates the men from the boys… but not from the girls". When filtered Lucky Strike cigarettes were introduced to the market, in the mid-1960s, the commercials changed to a sung slogan that went: "Show me a filter cigarette that delivers the taste, and I'll eat my hat!". The Lucky Strike logo was created by the famous industrial designer Raymond Loewy, who also created the logos of Exxon, Shell, AT&T and Coca-Cola. The font used in the Lucky Strike logo is FuturaBT-ExtraBlackCondensed or Futura Condensed Bold, delicately modified. Lucky Strike sponsored the Jack Benny radio show and also several television programs in the mid-1940s and 1950s on CBS. Among the popular slogans in the advertisements on the shows, spoken by announcer Don Wilson, were the classic "LSMFT: Lucky Strike means fine tobacco!" and "Be happy go lucky, be happy, smoke Lucky Strike!"

Sports sponsorship
The brand is also known for sponsoring various sports before the European Union ban in the 2000s. In the 1980s and 1990s, the cigarette sponsored several motor sports, such as MotoGP and the team of the American Kevin Schwantz, Team Lucky Strike. In 1999, British American Tobacco bought the Tyrrell team to found British American Racing in Formula 1. There was an idea of having the two cars sponsored by different cigarette brands belonging to BAT: the Canadian Jacques Villeneuve would race in Lucky Strike colors, while the Brazilian Ricardo Zonta would race in the blue of State Express 555. However, the idea was prohibited. BAR stayed in Formula 1 until 2005, with Villeneuve, Jenson Button, Takuma Sato, Olivier Panis and Anthony Davidson as drivers, achieving second place in the 2004 Constructors' Championship, always in Lucky Strike colors.

In popular culture
The Lucky Strike brand of cigarettes is cited in modern video games, anime, music, books and films. Lucky Strike is also shown in the anime Cowboy Bebop, where the character Faye Valentine is frequently seen with a Lucky Strike in her mouth. In Eureka Seven, Stoner is seen smoking a pack of cigarettes similar to Lucky Strike's in episode 14. Detective Steele in the video game "Blade Runner" smokes Lucky Strike. In the famous manga GTO, teacher Onizuka is seen smoking a Lucky Strike. The fictional character Mike Hammer, written by Mickey Spillane, smoked Lucky Strike throughout the Hammer novels. The Lucky Strike cigarette was also mentioned in the film "Misery", based on the work of the famous writer Stephen King, in which the character Paul Sheldon (played by the actor James Caan) would smoke a Lucky Strike cigarette after finishing writing a book. The cigarette is also mentioned in the book "Christine", also by Stephen King, right in the prologue. One of the most famous Lucky Strike smokers on television was the famous detective Sonny Crockett (played by Don Johnson) of the 1980s show Miami Vice. Other appearances include James Caan, who smokes Lucky Strike in the 1990 film Misery. Sigourney Weaver smokes Lucky Strike in the 1994 film Death and the Maiden. The Lucky Strike cigarette is also shown in the 1994 film The Shawshank Redemption. In an episode of Smart Guy (season 2, episode 13: "Trial & Error"), Mr. Bringleman smokes Lucky Strike cigarettes. Band of Brothers shows the soldiers smoking and naming the brand several times. In the 2004 film Team America: World Police, from the same creators as the American series South Park, Lucky Strike advertisements are shown painted on walls in some urban Midwest scenes. In 2005, Keira Knightley smokes Lucky Strike in The Jacket, starring Adrien Brody. The Lucky Strike cigarette is mentioned in the song "These Are My People" by Rodney Atkins. Lucky Strike is also mentioned in the song "Kentucky Avenue" by the famous musician Tom Waits. The Lucky Strike cigarette is also mentioned in the song "All You Can Ever Learn is What You Already Know" by the punk rock band The Ataris. In Alan Moore's graphic novel Watchmen, the famous Lucky Strike can be seen being smoked by Hollis J. Mason (the original Night Owl). It is also seen in a scene with detective Steven Fine. In the series Mad Men, the protagonist Don Draper, creative director of the fictional agency Sterling Cooper, is trying to create a new campaign for Lucky Strike cigarettes in the very first episode, and over the seasons several characters appear smoking and the brand is referenced many times. In 2011, the Lucky Strike cigarette was smoked by the band Scream And Shout on the balcony of singer Alan Borgartz. In 2013, at the Santa Rosa skate park in Brazil, the Lucky Strike cigarette was smoked by several media-celebrated skaters during the filming of their promotional videos. In 2014, in the first episode of the fourth season of American Horror Story, Jessica Lange smokes Lucky Strike. In The Highwaymen, starring Kevin Costner and Woody Harrelson, one of the Bonnie and Clyde characters, Clyde Chestnut Barrow, smoked Luckys while Bonnie Parker smoked Camels.

Categories: British American Tobacco brands; BAT Brasil brands; cigarette brands; American Tobacco Company
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,535
Q: Meanings of the navigation symbols in beamer. When I compile a beamer tex source file, I get a PDF with a bar of navigation icons at the bottom. But I'm not sure about the function of each icon. What are they for? How can I remove them?

\documentclass{beamer}
\title{Hello}
\author{Samuel}
\date{\today}

\begin{document}

\begin{frame}{}{}
\maketitle
\end{frame}

\section{section}

\begin{frame}{The subtitle}{}
What is your name? \ldots
\begin{enumerate}
\item A \emph{Hello} $x + y == z$
\item B $\sin (x) + \cos(x)$
\end{enumerate}
\end{frame}

\end{document}
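For reference (this is the standard beamer answer, not part of the original question): the icons are beamer's navigation symbols, clickable hyperlinks for moving through the PDF in presentation mode, covering slide, frame, subsection and section navigation plus appendix and search buttons. The usual way to remove them is to clear the corresponding template in the preamble:

```latex
% In the preamble, before \begin{document}:
% replace the navigation-symbols template with an empty one,
% which suppresses the icon bar on every frame.
\setbeamertemplate{navigation symbols}{}
```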
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,094
import os

# Ensure directory for model.
if not os.path.exists("models"):
    os.mkdir("models")

# Write model.
with open("models/checkpoint.txt", "w") as f:
    f.write("downstream")
{ "redpajama_set_name": "RedPajamaGithub" }
8,231
\section*{Introduction} Let $X$ be a surface over $\mathbb{C}$ and $\mathrm{Sing}\ X$ the singular locus of $X$. The notions of a jet scheme and an arc scheme were introduced by J. F. Nash in 1968 in a preprint, later published in 1995(\cite{Na}). Roughly speaking, an $m$-th jet of $X$ is an infinitesimal map of order $m$ from a germ of a curve to $X$, and an arc of $X$ is an infinitesimal map of order infinity from a germ of a curve to $X$. The $m$-th order jet scheme of $X$, denoted by $X_m$, is a scheme parametrizing $m$-th jets, and the arc scheme of $X$, denoted by $X_{\infty}$, is a scheme parametrizing arcs. For nonnegative integers $m > m'$, there is a map $\pi_{m,m'} : X_m \rightarrow X_{m'}$, called the truncation morphism. Then an arc scheme can be obtained as the projective limit of jet schemes with respect to the truncation morphisms. The $0$-th jet scheme $X_0$ is identified with $X$, and hence we have the morphism $\pi_{m,0}:X_m \rightarrow X$. We call $X_m^0 = \pi_{m,0}^{-1}(\mathrm{Sing}\ X)$ the singular fiber. It is hoped that jet schemes and arc schemes, and in particular the singular fibers, reflect the properties of singular points. For arc schemes, there is a famous problem called the Nash problem. The Nash problem asks the relation between the ``Nash components" and the ``essential divisors" of a resolution of singularities. There is a natural injective map, called the Nash map, from the set of Nash components to the set of essential divisors. The problem is whether the Nash map is surjective. This problem was affirmatively solved for $A_n$-type singular surfaces in \cite{Na}, for the remaining rational double points in \cite{{Pe}, {Pl}, {PS}}, for rational surface singularities in \cite{{Re1}, {Re2}}, and for arbitrary surface singularities in \cite{BP}.
On the other hand, the study of relations between the singular fibers of \emph{jet} schemes and the exceptional divisors of a resolution of singularities for surfaces was only recently started, in a series of papers by Mourtada(\cite{M1}, \cite{M2}) and Mourtada-Pl\'{e}nat(\cite{MP}). For a general surface $X$, the relation between the irreducible components of the singular fiber $X_m^0$ and the exceptional divisors of the minimal resolution of singularities is not simple. For example, the number of irreducible components of $X_m^0$ and the number of exceptional divisors are not necessarily equal even for $m \gg 0$. However, for rational double point singularities, Mourtada(\cite{M1}, \cite{M2}) gave a one-to-one correspondence between the irreducible components of the singular fiber of $X_m$ for a fixed $m \gg 0$ and the exceptional curves of the minimal resolution of $X$. Moreover, in \cite{MP}, Mourtada and Pl\'{e}nat define ``essential components" and ``minimal embedded toric resolutions". An essential component is an irreducible component of $X_m^0$ satisfying certain conditions, where $m$ is allowed to vary. They gave a one-to-one correspondence between the set of the essential components and the set of the divisors which appear on every ``minimal embedded toric resolution" for rational double point singularities except for the $E_8$-type singular surface. Further, they show how to obtain the minimal embedded toric resolution from the information of essential components.
In this paper, we consider the following question: what can one get from the correspondence between irreducible components of $X_m^0$ for a \emph{fixed} $m \gg 0$ and exceptional curves of the minimal resolution of the singularity? In the case where $X$ is an $A_n$- or $D_4$-type singular surface over $\mathbb{C}$, we study how the irreducible components of $X_m^0$ intersect with each other for $m \gg 0$, and construct a graph using this information. This graph will be isomorphic to the resolution graph. Let us explain how to construct the graph. We expect that if two irreducible components correspond to distant vertices on the resolution graph, then their intersection is ``small" in some sense. A naive expectation would be that the intersection has lower dimension, but this is not true. We explicitly calculate the intersections for an $A_n$-type singular surface and see that the dimensions of the intersections of two distinct irreducible components are independent of the choice of irreducible components for $m \gg 0$. Still, we can determine the adjacency as follows. Let $Z_m^1,...,Z_m^n$ be the irreducible components of the singular fiber $X_m^0$. \begin{Const}\label{ConstructiontheGraph} Let $V = \{ Z_m^1,...,Z_m^n \}$, and let $E \subseteq \{ Z_m^i \cap Z_m^j \ |\ i,j \in \{1,...,n\}\ \mathrm{with}\ i \neq j \}$ be the set of the maximal elements for the inclusion relation. Then we construct a graph $\Gamma$ as the pair $(V, E)$, i.e. the vertices of $\Gamma$ are elements of $V$, and there is given an edge between $Z_m^i$ and $Z_m^j$ if and only if $Z_m^i \cap Z_m^j \in E$. \end{Const} To study the intersections, we use the description of the irreducible components of the singular fiber by Mourtada. For $A_n$-type singular surfaces, Mourtada(\cite{M1}) gave generators of the defining ideals of the irreducible components of $X_m^0$. Hence it is possible to obtain the irreducible decompositions of $Z_m^i \cap Z_m^j$ with $i \neq j$, and this enables us to describe the graph $\Gamma$ in Construction \ref{ConstructiontheGraph} for $A_n$-type singular surfaces. This graph is isomorphic to the resolution graph of an $A_n$-type singular surface. For a $D_4$-type singular surface, Mourtada(\cite{M2}) describes the irreducible components of the singular fiber as the closures of certain locally closed sets. Thus we do not know the generators of the defining ideals of the irreducible components of $X_m^0$.
Still, we can find certain elements of the defining ideals of $Z_m^i \cap Z_m^j$, which allow us to study the inclusion relations. In this way we determine the graph $\Gamma$ in Construction \ref{ConstructiontheGraph} for a $D_4$-type singular surface. This graph is also isomorphic to the resolution graph of a $D_4$-type singular surface. We expect that, for rational double point singularities, the graphs defined in the same way are isomorphic to the resolution graphs. The organization of this paper is as follows. In section 2, we fix some notations on jet schemes. In section 3, we recall the description of the defining ideals of irreducible components of the singular fiber of an $A_n$-type singular surface by Mourtada(\cite{M1}). Then we study the intersections of irreducible components, in particular the irreducible decompositions and the dimensions of the intersections. In section 4, using the description of irreducible components of the singular fiber of a $D_4$-type singular surface by Mourtada(\cite{M2}), we determine the maximal elements of their intersections. \textit{Acknowledgement.} The author would like to thank Nobuyoshi Takahashi for valuable advice. \section{Jet schemes} In this section, we recall the definition of jet schemes, describe them explicitly and fix some notations. We are interested in a neighborhood of a singularity, so we consider an affine scheme of finite type over an algebraically closed field $k$ as a target space. Let $X$ be an affine scheme of finite type over $k$ and let $m$ be a nonnegative integer. \begin{Prop} (\cite[Proposition 2.2]{Is}) Let ${\mathbf{Sch}}/k$ denote the category of schemes over $\mathrm{Spec}\ k$ and ${\mathbf{Set}}$ the category of sets. We define the functor \begin{center} $F_{m}^{X} : {\mathbf{Sch}}/k \rightarrow {\mathbf{Set}}$ \end{center} as follows: For $Z \in {\mathbf{Sch}}/k$, \begin{center} $F_m^X (Z) := {\rm Hom}_k(Z \times_{\mathrm{Spec}\ k} \mathrm{Spec}\ k[t]/\langle t^{m+1}\rangle,X)$. 
\end{center} Then $F_m^X$ is represented by an affine scheme $X_m$ of finite type over $k$. This scheme $X_m$ is called the $m$-th jet scheme of $X$. \end{Prop} For an affine scheme of finite type over $k$, we can describe its $m$-th jet scheme as follows. Let $X$ be an affine scheme embedded in $\mathbb{A}^e$. Then its affine coordinate ring $\Gamma(X, \mathscr{O}_X)$ can be written in the form $k[x_1,...,x_e]/\langle f_1,...,f_r\rangle$. We introduce some notations. \begin{Nota} Let $\mathbf{x}_i := x_i^{(0)} + x_i^{(1)}t + \cdots + x_i^{(m)}t^m \in k[x_1^{(0)},...,x_1^{(m)},...,x_e^{(0)},...,x_e^{(m)},t]/ \langle t^{m+1} \rangle$ $(i = 1,...,e)$. For a polynomial $f \in k[x_1,...,x_e]$, we expand $f(\mathbf{x}_1,...,\mathbf{x}_e)$ as \begin{center} $f \left( \mathbf{x}_1,...,\mathbf{x}_e \right) = f^{(0)} + f^{(1)}t + \cdots + f^{(m)}t^m$ \end{center} in $k[x_1^{(0)},...,x_1^{(m)},...,x_e^{(0)},...,x_e^{(m)},t]/ \langle t^{m+1} \rangle$, where $f^{(j)} \in k[x_1^{(0)},...,x_1^{(m)},...,x_e^{(0)},...,x_e^{(m)}]$. Then the $m$-th jet scheme $X_m$, which represents the functor $F_m^X$, is \begin{center} $X_m = \mathrm{Spec}\ (k[x_1^{(0)},...,x_1^{(m)},...,x_e^{(0)},...,x_e^{(m)}]/\langle f_1^{(0)},...,f_1^{(m)},...,f_r^{(0)},...,f_r^{(m)}\rangle)$. \end{center} \end{Nota} \begin{Rem} \label{jet polynomial of order m can be calculate as more than the order m variables} Let $g \in k[x_1,...,x_e]$ and $m \in \mathbb{Z}_{\geq 0}$. The polynomials $g^{(j)}$ are independent of $m$ as long as $m \geq j$. In particular, if we want to calculate the polynomial $g^{(j)}$, we have only to calculate the polynomial $\displaystyle g \left( \sum_{k = 0}^{j}x_1^{(k)}t^k, ..., \sum_{k=0}^{j}x_e^{(k)}t^k \right)$. \end{Rem} \begin{Ex} (1) Suppose $X = \mathbb{A}^e$. Then the $m$-th jet scheme $X_m$ is $\mathbb{A}^{e(m+1)}$. \\ (2) We calculate $X_2$ for $X = \mathrm{Spec}\ (\mathbb{C}[x,y,z]/\langle xy - z^2 \rangle)$. 
Let $f = xy - z^2$, $\mathbf{x} = x_{0} + x_{1}t + x_{2}t^2$, $\mathbf{y} = y_{0} + y_{1}t + y_{2}t^2$ and $\mathbf{z} = z_{0} + z_{1}t + z_{2}t^2$. Then \begin{align} f(\mathbf{x},\mathbf{y},\mathbf{z}) =& (x_{0}y_{0} - z_{0}^{2}) + (x_{1}y_{0} + x_{0}y_{1} - 2z_{0}z_{1})t\\ & + (x_{2}y_{0} + x_{1}y_{1} + x_{0}y_{2} - z_{1}^{2} - 2z_{0}z_{2})t^2 + Ft^3 \end{align} where $F \in \mathbb{C}[x_{0},x_{1},x_{2},y_{0},y_{1},y_{2},z_{0},z_{1},z_{2},t]$. We set \begin{align} f^{(0)} =& x_{0}y_{0} - z_{0}^{2},\\[-5pt] f^{(1)} =& x_{1}y_{0} + x_{0}y_{1} - 2z_{0}z_{1},\\[-5pt] f^{(2)} =& x_{2}y_{0} + x_{1}y_{1} + x_{0}y_{2} - z_{1}^{2} - 2z_{0}z_{2}. \end{align} The second jet scheme of $X$ is \begin{center} $X_2 = \mathrm{Spec}\ (\mathbb{C}[x_{0},x_{1},x_{2},y_{0},y_{1},y_{2},z_{0},z_{1},z_{2}]/\langle f^{(0)},f^{(1)},f^{(2)} \rangle)$. \end{center} \end{Ex} Next we consider closed points of $X_m$. In the above situation, the scheme $X_m$ is a closed subvariety of $(\mathbb{A}^e)_m = \mathbb{A}^{e(m+1)}$, so we regard the closed points of $X_m$ as an element of $k^{e(m+1)}$. \begin{Nota} Let $\gamma = (a_1^{(0)},...,a_1^{(m)},...,a_e^{(0)},...,a_e^{(m)}) \in \mathbb{A}^{e(m+1)}$ be a closed point. Then we also denote $\displaystyle \gamma = \left(\sum_{i=0}^{m}a_{1}^{(i)}t^i,...,\sum_{i=0}^m a_{e}^{(i)}t^i \right)$ using the variable $t$. For $g \in k[x_1,...,x_e]$, we regard $g$ as a morphism $\mathbb{A}^e \rightarrow \mathbb{A}^1$, and then the composition $g \circ \gamma$ is given by the substitution as $\displaystyle g\left(\sum_{i=0}^{m}a_{1}^{(i)}t^i,...,\sum_{i=0}^m a_{e}^{(i)}t^i \right)$. Then we define $\mathrm{ord}_{\gamma}(g)$ as the $t$-order of $g \circ \gamma$ in $k[t]$. \end{Nota} \begin{Rem} A closed point $\alpha \in (\mathbb{A}^e)_m = \mathbb{A}^{e(m+1)}$ belongs to $X_m$ if and only if $\mathrm{ord}_{\alpha}(f_i) \geq m+1$ for $i = 1,...,r$. \end{Rem} Finally we see how $X_m$ and $X_{m'}$ are related for $m,m' \in \mathbb{Z}_{\geq 0}$ with $m \geq m'$. 
The ring homomorphism \begin{center} $k[t]/\langle t^{m+1}\rangle \rightarrow k[t]/\langle t^{m'+1}\rangle\ ;\ \sum_{i = 0}^m \alpha_i t^i \mapsto \sum_{i=0}^{m'} \alpha_i t^i$ \end{center} induces a morphism of affine schemes \begin{center} $\mathrm{Spec}\ k[t]/\langle t^{m'+1}\rangle \rightarrow \mathrm{Spec}\ k[t]/\langle t^{m+1}\rangle$. \end{center} For any $k$-scheme $Z$, this induces a morphism \begin{center} $\varphi(Z) : Z \times_{\mathrm{Spec}\ k} \mathrm{Spec}\ k[t]/\langle t^{m'+1}\rangle \rightarrow Z \times_{\mathrm{Spec}\ k} \mathrm{Spec}\ k[t]/\langle t^{m+1}\rangle$. \end{center} The collection of the morphisms $\varphi(Z)$ induces a natural transformation $F_m^X \rightarrow F_{m'}^X$ given by \begin{center} $F_m^X(Z) \rightarrow F_{m'}^X(Z)\ ;\ g \mapsto g \circ \varphi(Z)$. \end{center} Since $F_m^X$ and $F_{m'}^X$ are represented by $X_m$ and $X_{m'}$, we obtain a morphism \begin{center} $\pi_{m,m'} : X_m \rightarrow X_{m'}$. \end{center} \begin{Def} The morphism $\pi_{m,m'}$ is called the \emph{truncation morphism}. In particular, for $m' = 0$, we denote $\pi_{m,0}$ by $\pi_m$. \end{Def} Let us look at the truncation morphism in the case $X = \mathbb{A}^e$. Let $m$ and $m'$ be nonnegative integers with $m > m'$. A closed point of $X_m$ can be written as $\mathbf{a} = (a_1^{(0)},...,a_1^{(m')},...,a_1^{(m)},...,$ $a_e^{(0)},...,a_e^{(m')},...,a_e^{(m)})$. Then \begin{center} $\pi_{m,m'}(\mathbf{a}) = (a_1^{(0)},...,a_1^{(m')},...,a_e^{(0)},...,a_e^{(m')})$. \end{center} For a closed subscheme $X \subseteq \mathbb{A}^e$, the truncation morphism for $X$ is the restriction of the truncation morphism for $\mathbb{A}^e$. \section{Intersections of irreducible components of the singular fiber of a jet scheme : $A_n$ case} In this section, we consider a surface $X$ over $\mathbb{C}$ with an $A_n$-type singularity at the origin. H.
Mourtada studied irreducible components of the ``singular fiber'' of jet schemes of $X$. His article (\cite{M1}) gives an explicit description of the defining ideals of the irreducible components. We first summarize his arguments in a form convenient for us. Let $X = \mathrm{Spec}\ \mathbb{C}[x,y,z]/\langle xy-z^{n+1}\rangle$, $m \in \mathbb{Z}_{\geq 0}$ and let $\pi_{m} : X_m \rightarrow X$ be the truncation morphism. The surface $X$ is a toric surface which has an $A_n$-type singular point at the origin $0$ $(x = y = z = 0)$. We are interested in the fiber $X_m^0 := \pi^{-1}_m(0)$, which we call the singular fiber. It is known that the number of irreducible components of $X_m^0$ is equal to the number of the exceptional curves of the minimal resolution of $X$ for $m \geq n$ (\cite[Theorem 3.1]{M1}). Thus we assume $m \geq n$ from now on in this section. We also note that if $I \subset \mathbb{C}[x_0,...,x_N,y_0,...,y_N,z_0,...,z_N]$ is an ideal for $N \in \mathbb{Z}_{\geq 0}$, then for $N' \geq N$, \begin{center} $I \cdot \mathbb{C}[x_0,...,x_{N'},y_0,...,y_{N'},z_0,...,z_{N'}] \cap \mathbb{C}[x_0,...,x_N,y_0,...,y_N,z_0,...,z_N] = I$. \end{center} Hence we may regard the ideals appearing in the following as ideals in the ring $\mathbb{C}[x_0,...,x_N,$ $y_0,...,y_N,z_0,...,z_N]$ for $N \gg 0$. Now we fix some notations. \begin{Nota} \label{jet polynomial modulo coordinate} Let $f := xy - z^{n+1}$ and let $p,q,r,m$ be nonnegative integers with $\mathrm{max}\{p,q,r\} \leq m$. We define an ideal $L_{pqr}$ by \begin{center} $L_{pqr} := \langle x_0,...,x_{p-1},y_0,...,y_{q-1},z_0,...,z_{r-1} \rangle$. \end{center} For $j \leq m$, we denote by $f_{pqr}^{(j)}$ the coefficient of $t^j$ in the expansion of $\displaystyle f(\sum_{i = p}^m x_it^i, \sum_{i=q}^m y_it^i, \sum_{i=r}^m z_it^i)$.
Moreover, we set \begin{center} $\Lambda_{pq}^{j} := \{ (l_1, l_2) \in \mathbb{Z}_{\geq 0}^2 \mid l_1 \geq p, l_2 \geq q\ \mathrm{and}\ l_1 + l_2 = j \}$ \end{center} and \[ \Lambda_{r}^j := \left \{ \left((i_1,...,i_l), (d_1,...,d_l)\right) \mid \begin{array}{l} \text{$l \geq 1, r \leq i_1 < i_2 < \cdots < i_l \leq j,\ d_1,...,d_l > 0,$}\\ \text{$d_1 + \cdots + d_l = n+1,\ i_1d_1 + \cdots + i_ld_l = j$} \end{array} \right \}. \] \end{Nota} We obtain the following lemma by a direct calculation. \begin{Lemma} \label{An polynomial modulo relation} For $p,q,r,j \in \mathbb{Z}_{\geq0}$ with $\mathrm{max} \{p,q,r \} \leq m$ and $j \leq m$, we have \begin{center} $f_{pqr}^{(j)} \equiv f^{(j)}\ \mathrm{mod}\ L_{pqr}$. \end{center} In particular, $f_{000}^{(j)} = f^{(j)}$. Moreover, we have \begin{center} $\displaystyle f_{pqr}^{(j)} = \sum_{(l_1,l_2) \in \Lambda_{pq}^j} x_{l_1}y_{l_2} - \sum_{\left((i_1,...,i_l), (d_1,...,d_l) \right) \in \Lambda_{r}^j} \frac{(n+1)!}{d_1!\cdots d_l!} z_{i_1}^{d_1} \cdots z_{i_l}^{d_l}$, \end{center} where the first (resp. second) term of the right hand side is $0$ if $\Lambda_{pq}^{j} = \emptyset$ (resp. $\Lambda_{r}^j = \emptyset$). \end{Lemma} \begin{Coro}(\cite[Section 3]{M1}) \label{Properties of jet polynomials} \begin{itemize} \item[(1)] If $p + q > j$, then \begin{center} $\displaystyle f^{(j)}_{pqr} = - \sum_{\left((i_1,...,i_l), (d_1,...,d_l) \right) \in \Lambda_{r}^j} \frac{(n+1)!}{d_1!\cdots d_l!} z_{i_1}^{d_1} \cdots z_{i_l}^{d_l}$. \end{center} \item[(2)] If $r(n+1) > j$, then \begin{center} $\displaystyle f^{(j)}_{pqr} = \sum_{(l_1,l_2) \in \Lambda_{pq}^j} x_{l_1}y_{l_2}$. \end{center} \item[(3)] If $p+q > j$ and $r(n+1) > j$, then \begin{center} $f^{(j)}_{pqr} = 0$. 
\end{center} \end{itemize} \end{Coro} \begin{Nota} \label{periodicity of polynomial f} We define the polynomial $g_{l,e}^{(j)}$ by \begin{center} $g_{l,e}^{(j)} := f^{(j)}(x_l,...,x_{l+j},y_{e(n+1)-l},...,y_{e(n+1)-l+j},z_e,...,z_{e+j})$, \end{center} where $f^{(j)}$ is regarded as an element of $\mathbb{C}[x_0,...,x_j,y_0,...,y_j,$ $z_0,...,z_j]$ according to Remark \ref{jet polynomial of order m can be calculate as more than the order m variables}. \end{Nota} Since the polynomial $f$ is weighted homogeneous, we have the following useful fact. \begin{Lemma} (\cite[Section 3]{M1} when $e = 1$) \label{toCalculateJetPolynomialsWhenSomeCoordinate=0} Assume $e,j,l,n \in \mathbb{Z}_{\geq 0}$ with $0 \leq l \leq e(n+1)$ and $j \geq 0$. We have \begin{alignat}{2} f_{l,e(n+1) -l,e}^{(e(n+1)+j)} &= g_{l,e}^{(j)}. \end{alignat} \end{Lemma} \begin{proof} For $0 \leq l \leq e(n+1)$, we can calculate as follows: \begin{alignat*}{4} \displaystyle \ &f\left(\sum_{i=l}^{m}x_{i}t^i, \sum_{i=e(n+1)-l}^{m}y_{i}t^i, \sum_{i=e}^{m}z_{i}t^i \right)\\ =\ & \displaystyle f\left(t^l\sum_{i=0}^{m-l}x_{l+i}t^i,t^{e(n+1)-l}\sum_{i=0}^{m-e(n+1)+l}y_{e(n+1)-l+i}t^{i},t^{e}\sum_{i=0}^{m-e}z_{e+i}t^{i} \right)\\ =\ & \displaystyle t^{e(n+1)}f\left(\sum_{i=0}^{m-l}x_{l+i}t^{i},\sum_{i=0}^{m-e(n+1)+l}y_{e(n+1)-l+i}t^{i},\sum_{i=0}^{m-e}z_{e+i}t^{i} \right). \end{alignat*} Since $m-l$, $m-e(n+1)+l$, $m-e \geq 0$, we have \begin{alignat*}{7} \displaystyle \ &f\left(\sum_{i=0}^{m-l}x_{l+i}t^{i},\sum_{i=0}^{m-e(n+1)+l}y_{e(n+1)-l+i}t^{i},\sum_{i=0}^{m-e}z_{e+i}t^{i} \right)\\ \equiv\ & \displaystyle \sum_{j=0}^{m-e(n+1)}f^{(j)}(x_{l},...,x_{l+j},y_{e(n+1)-l},...,y_{e(n+1)-l+j},z_{e},...,z_{e+j})t^{j} =\ & \sum_{j=0}^{m-e(n+1)} g_{l,e}^{(j)}t^j \end{alignat*} modulo $t^{m-e(n+1)+1}$.
Hence we have \begin{alignat*}{10} \displaystyle \ &f\left(\sum_{i=l}^{m}x_{i}t^i, \sum_{i=e(n+1)-l}^{m}y_{i}t^i, \sum_{i=e}^{m}z_{i}t^i \right) \equiv\ & t^{e(n+1)}\sum_{j=0}^{m-e(n+1)} g_{l,e}^{(j)}t^j =\ & \sum_{j=0}^{m-e(n+1)} g_{l,e}^{(j)}t^{e(n+1)+j} \end{alignat*} modulo $t^{m+1}$. Looking at the coefficients of $t^{e(n+1)+j}$ for $0 \leq j \leq m-e(n+1)$, we have \begin{alignat*}{2} f_{l,e(n+1)-l,e}^{(e(n+1)+j)} &= g_{l,e}^{(j)}. \end{alignat*} \end{proof} \begin{Rem}\label{Slide Polynomials and apeearing coordinates} We note that the variables appearing in $g_{l,e}^{(j)}$ are disjoint from $x_0,...,x_{l-1}, y_0,...$, $y_{e(n+1)-l -1}$ and $z_0,...,z_{e-1}$. \end{Rem} We will now describe the defining ideals of the irreducible components of the singular fiber. \begin{Nota} \label{Definition of the defining ideal of the irreducible components} Let $l \in \mathbb{Z}$ with $1 \leq l \leq n$. We define the ideal $G_{m}^{l}$ by \begin{center} $G^l_{m} = \langle g_{l,1}^{(0)},...,g_{l,1}^{(m-n-1)} \rangle$. \end{center} (For $m = n$, we set $G_n^l = 0$.) We define the ideal $I_m^l$ by \begin{center} $I_m^l := \langle L_{l,n+1-l,1}, f^{(0)},..., f^{(m)} \rangle = \langle x_0,...,x_{l-1},y_0,...,y_{n-l},z_0,z_1,f^{(0)},...,f^{(m)} \rangle$. \end{center} Let $Z_m^l$ denote the subvariety of $(\mathbb{A}^3)_m \cong \mathbb{A}^{3(m+1)}$ defined by $I_m^l$, i.e. \begin{center} $Z_m^l = \mathbf{V}(I_m^l)$. \end{center} \end{Nota} \begin{Lemma}(\cite[Section 3]{M1}) \label{standerd Notation of Ideals} Let $l \in \mathbb{Z}$ with $1 \leq l \leq n$. We have \begin{center} $I_m^l = L_{l,n+1-l,1} + G_m^l$. \end{center} In particular, if $m = n$, then we have \begin{center} $I_n^l = L_{l,n+1-l,1}$. \end{center} \end{Lemma} \begin{proof} We apply Corollary \ref{Properties of jet polynomials} to $f_{l, n+1-l, 1}^{(i)}$ for $i = 0,..., n$. 
From the assumption, we have $l + (n+1-l) = n+1 > i$ and $1\times (n+1) = n+1 > i$, and hence we have $f_{l,n+1-l,1}^{(i)} = 0$ by Corollary \ref{Properties of jet polynomials}(3). By Lemma \ref{An polynomial modulo relation}, we have \begin{center} $f^{(i)} \equiv f_{l,n+1-l,1}^{(i)} = 0 \ \mathrm{mod}\ L_{l,n+1-l,1}$ \end{center} for $i = 0, ..., n$. Moreover, from Lemma \ref{An polynomial modulo relation}, we have $f^{(n+1+j)} \equiv f^{(n+1+j)}_{l,n+1-l,1}$ mod $L_{l,n+1-l,1}$, if $n+1+j \leq m$, and from Lemma \ref{toCalculateJetPolynomialsWhenSomeCoordinate=0}, we have \begin{center} $g^{(j)}_{l,1} = f^{(n+1+j)}_{l,n+1-l,1} \equiv f^{(n+1+j)} \ \mathrm{mod} \ L_{l,n+1-l,1}$. \end{center} Hence in the case $m = n$, we have \begin{center} $I_n^l = L_{l,n+1-l,1}$, \end{center} and in the case $m > n$, we have \begin{alignat}{3} I_m^l =&\ L_{l,n+1-l,1} + \langle f^{(n+1)},...,f^{(m)} \rangle \\ =&\ L_{l,n+1-l,1} + \langle g_{l,1}^{(0)},...,g_{l,1}^{(m-n-1)} \rangle &\ = L_{l,n+1-l,1} + G_m^l. \end{alignat} \end{proof} These closed subvarieties give the irreducible decomposition of $X_m^0$. \begin{Prop} (\cite[Theorem 3.1]{M1}, \cite[Proposition 1.5, Theorem 3.3]{Mu}) \label{irreducibility and irreducible decomposition of singular fiber} We have \begin{center} $Z_n^l \cong \mathbb{A}^{2n+1}$ \end{center} and \begin{center} $Z_m^l \cong X_{m-n-1} \times \mathbb{A}^{2n+1}$ \end{center} if $m \geq n+1$. Moreover, the ideal $\langle f^{(0)},...,f^{(l)} \rangle$, which is the defining ideal of $X_l$, is prime for $l \geq 0$, so the variety $X_l$ is irreducible. In particular, for $m \geq n$, the varieties $Z_m^l$ are irreducible. The irreducible decomposition of $X_m^0$ is given by \begin{center} $\displaystyle X_m^0 = \bigcup_{l=1}^n Z_m^l$. \end{center} \end{Prop} Now we study the intersections of irreducible components of $X_m^{0}$. We define the following ideals: For $1 \leq i < j \leq n$, \begin{center} $J_m^{i,j} := I_m^i + I_m^j$\ ,\ \ $I_m^{i,j} := \sqrt{J_m^{i,j}}$.
\end{center} Recalling Notation \ref{Definition of the defining ideal of the irreducible components}, we have \begin{alignat}{5} L_{j,n+1-i,1} =&\ \langle x_0,...,x_{j-1},y_0,...,y_{n-i},z_0 \rangle \\ =&\ L_{i,n+1-i,1} + L_{j,n+1-j,1} \end{alignat} and have \begin{alignat}{3} J_m^{i,j} = L_{j,n+1-i,1} + \langle f^{(0)},...,f^{(m)} \rangle. \end{alignat} From the definition of $I_m^{i,j}$, we have $\mathbf{V}(I_m^{i,j}) = Z_m^i \cap Z_m^j$. Now we give the irreducible decomposition of the closed subvariety $Z_m^i \cap Z_m^j$. \begin{Thm} \label{irreducible decomposition of intersection of A_n-type} Assume $m \geq n \geq 2$. Let $1 \leq i < j \leq n$. \begin{itemize} \item[(a)] If $m = n$, then $I_n^{i,j} = L_{j,n+1-i,1}$ and \begin{center} $Z_n^i \cap Z_n^j = \mathbf{V}(L_{j,n+1-i,1})$ \end{center} is irreducible. \item[(b)] If $1 \leq m-n \leq j-i$, then $I_m^{i,j} = L_{j,n+1-i,2}$ and \begin{center} $Z_m^i \cap Z_m^j = \mathbf{V}(L_{j,n+1-i,2})$ \end{center} is irreducible. \item[(c)] If $j-i \leq m-n$ and $m < 2n+2$, then \begin{center} $\displaystyle I_m^{i,j} = \bigcap_{u=0}^{m-n-(j-i)} L_{j+u,m-j-u+1,2}$ \end{center} and hence the irreducible decomposition of $Z_m^i \cap Z_m^j$ is given by \begin{center} $Z_m^i \cap Z_m^j = \displaystyle \bigcup_{u = 0}^{m - n-(j-i)} \mathbf{V}(L_{j+u,m-j-u+1,2})$. \end{center} \item[(d)] If $m \geq 2n + 2$, then \begin{center} $\displaystyle I_m^{i,j} = \bigcap_{u=0}^{n+1-(j-i)} (L_{j+u,2n+2 -j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle)$. \end{center} The ideal $L_{j+u,2n+2 -j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle$ is prime for $0 \leq u \leq n+1-(j-i)$, and the irreducible decomposition of $Z_m^i \cap Z_m^j$ is given by \begin{center} $Z_m^i \cap Z_m^j = \displaystyle \bigcup_{u = 0}^{n+1 - (j-i)} \mathbf{V}(L_{j+u,2n+2 -j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle)$. 
\end{center} \end{itemize} \end{Thm} \begin{Rem} \label{the index conditions in thm} In (b), we have $1 \leq m-n \leq j-i$, so $n < m \leq n+(j-i) \leq 2n-1 < 2n+2$. In (c), the number of irreducible components of $Z_m^i \cap Z_m^j$ is $m-n-(j-i)+1 \geq 1$. In (d), the number of irreducible components of $Z_m^i \cap Z_m^j$ is $n-(j-i)+2 \geq 3$. In particular, in the cases (c) and (d), the number of irreducible components decreases as $j-i$ increases. \end{Rem} \begin{proof} First of all, from Lemma \ref{An polynomial modulo relation}, we have \begin{center} $f^{(l)} \equiv f_{j,n+1-i,1}^{(l)}\ \mathrm{mod}\ L_{j,n+1-i,1}$. \end{center} (a) Recall that $J_m^{i,j} = L_{j,n+1-i,1} + \langle f^{(0)},...,f^{(m)} \rangle$. We prove that \begin{center} $J_n^{i,j} = L_{j,n+1-i,1} = \langle x_0,...,x_{j-1},y_0,...,y_{n-i},z_0 \rangle$, \end{center} i.e., $f^{(0)},...,f^{(n)} \in L_{j,n+1-i,1}$. In fact, for $l = 0,...,n$, by Corollary \ref{Properties of jet polynomials}(3) and $j + (n+1-i) > n \geq l$ and $1\times (n+1) > n \geq l$, we have \begin{center} $f_{j,n+1-i,1}^{(l)} = 0$. \end{center} So we have \begin{center} $f^{(l)} \equiv f_{j,n+1-i,1}^{(l)} = 0$ mod $L_{j,n+1-i,1}$ \end{center} for $l = 0,...,n$ and \begin{center} $J_n^{i,j} = L_{j,n+1-i,1} = \langle x_0,...,x_{j-1},y_0,...,y_{n-i},z_0 \rangle$. \end{center} This ideal is clearly prime, hence \begin{center} $I_n^{i,j} = L_{j,n+1-i,1}$. \end{center} (b) We prove that \begin{center} $I_m^{i,j} = L_{j,n+1-i,2} = L_{j,n+1-i,1} + \langle z_1 \rangle$. \end{center} In fact, from Corollary \ref{Properties of jet polynomials}(1) and $j + (n-i+1) > n+1$, we have \begin{center} $f^{(n+1)}_{j,n-i+1,1} = -z_1^{n+1}$. \end{center} It follows that \begin{center} $J_{n+1}^{i,j} = L_{j,n+1-i,1} + \langle z_1^{n+1} \rangle$, \end{center} hence we have \begin{center} $I_{n+1}^{i,j} = \sqrt{J_{n+1}^{i,j}} = L_{j,n+1-i,2}$, \end{center} and this ideal is clearly prime. 
For any $l$ with $n+1 < l \leq m$, we have $j + (n+1-i) > m \geq l$ from the assumption $m-n \leq j-i$, and $2(n+1) > m \geq l$ by Remark \ref{the index conditions in thm}. Hence we have \begin{center} $f^{(l)}_{j,n+1-i,2} = 0$, \end{center} by Corollary \ref{Properties of jet polynomials}(3), and \begin{center} $f^{(l)} \equiv 0\ \mathrm{mod}\ L_{j,n+1-i,2}$ $\ (l = n+2,...,m)$ \end{center} by Lemma \ref{An polynomial modulo relation}. Thus \begin{center} $I_{n+1}^{i,j} + \langle f^{(n+2)},...,f^{(m)} \rangle = L_{j,n+1-i,2} + \langle f^{(n+2)},...,f^{(m)} \rangle = L_{j,n+1-i,2}$ \end{center} holds. We also have \begin{center} $J_{m}^{i,j} \subseteq I_{n+1}^{i,j} + \langle f^{(n+2)},...,f^{(m)} \rangle = L_{j,n+1-i,2} \subseteq I_m^{i,j}$, \end{center} and taking the radicals, we have $I_m^{i,j} = L_{j,n+1-i,2}$.\\ (c) In general, for a subvariety $V = \mathbf{V}(I) \subseteq X_m$, we have $\pi_{m+1,m}^{-1}(V) = \mathbf{V}(I + \langle f^{(m+1)} \rangle)$. Hence $\pi_{m+1,m}^{-1}(Z_m^i) = Z_{m+1}^i$. If $Z_m^i \cap Z_m^j = V_1\cup \cdots \cup V_r$, then \begin{center} $Z_{m+1}^i \cap Z_{m+1}^j = \pi_{m+1,m}^{-1}(Z_m^i) \cap \pi_{m+1,m}^{-1}(Z_m^j) = \pi_{m+1,m}^{-1}(V_1) \cup \cdots \cup \pi_{m+1,m}^{-1}(V_r)$. \end{center} Using this we prove the assertion by induction on $m \geq n+ (j-i)$. The case $m = n + (j-i)$ actually belongs to the case (b) since $m-n = j-i$, and the assertion is true. For $m > n + (j-i)$, we assume that the claim is true for $m-1$, i.e. the irreducible decomposition of $Z_{m-1}^i \cap Z_{m-1}^j$ is \begin{center} $Z_{m-1}^i \cap Z_{m-1}^j = \displaystyle \bigcup_{u = 0}^{m-1 - n-(j-i) } \mathbf{V}(L_{j+u,m-j-u,2})$. \end{center} Now, we consider the ideal \begin{center} $\langle x_0,...,x_{j-1 + u},y_0,...,y_{m-1 -j-u},z_0,z_1,f^{(m)} \rangle = L_{j+u,m-j-u,2} + \langle f^{(m)} \rangle$, \end{center} which is a defining ideal of $\pi_{m,m-1}^{-1}(\mathbf{V}(L_{j+u,m-j-u,2}))$.
We apply Corollary \ref{Properties of jet polynomials}(2) to $f^{(m)}_{j+u,m-j-u,2}$. We have $2(n+1) > m$ from the assumption of the assertion (c), and so from $(j+u) + (m-j-u) = m$, we have \begin{center} $f^{(m)}_{j+u,m-j-u,2} = x_{j+u}y_{m-j-u}$. \end{center} By Lemma \ref{An polynomial modulo relation}, we have \begin{center} $f^{(m)} \equiv f^{(m)}_{j+u,m-j-u,2}\ \mathrm{mod}\ L_{j+u,m-j-u,2}$. \end{center} Hence we have \begin{alignat}{5} L_{j+u,m-j-u,2} + \langle f^{(m)} \rangle &\ = L_{j+u,m-j-u,2} + \langle x_{j+u}y_{m-j-u} \rangle\\ &\ =\ (L_{j+u,m-j-u,2} + \langle x_{j+u} \rangle) \cap (L_{j+u,m-j-u,2} + \langle y_{m-j-u} \rangle)\\ &\ =\ L_{j+u+1,m-j-u,2} \cap L_{j+u,m-j-u+1,2}. \end{alignat} The two ideals in the right hand side are prime, so the irreducible decomposition of $\pi_{m,m-1}^{-1}(\mathbf{V}(L_{j+u,m-j-u,2}))$ is given by \begin{center} $\pi_{m,m-1}^{-1}(\mathbf{V}(L_{j+u,m-j-u,2})) = \mathbf{V}(L_{j+u+1,m-j-u,2}) \cup \mathbf{V}(L_{j+u,m-j-u+1,2})$. \end{center} Thus we have \begin{center} $Z_m^i \cap Z_m^j = \displaystyle \bigcup_{u = 0}^{m - n-(j-i)} \mathbf{V}(L_{j+u,m-j-u+1,2})$. \end{center} Finally, we have to prove that none of the ideals $L_{j+u,m-j-u+1,2}$ contain another. In fact, for $u_1 < u_2$, we have \begin{align} x_{j + u_2 -1} \in&\ L_{j+u_2,m-j-u_2+1,2} - L_{j+u_1,m-j-u_1+1,2}\\ \mathrm{and}\ y_{m-j-u_1} \in&\ L_{j+u_1,m-j-u_1+1,2} - L_{j+u_2,m-j-u_2+1,2}. \end{align} So the above decomposition is the irreducible decomposition. \\ (d) As in the case (c), we have only to show that the ideal \begin{center} $L_{j+u,2n+2-j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle = \langle x_0,...,x_{j-1 + u},y_0,...,y_{2n-j+1-u},z_0,z_1,f^{(2n+2)},...,f^{(m)} \rangle$, \end{center} which is a defining ideal of $\pi_{m,2n+1}^{-1}(\mathbf{V}(L_{j+u,2n+2-j-u,2})),$ is prime for $u = 0,...,n+1-(j-i)$, and none of the ideals $L_{j+u,2n+2-j-u,2} + \langle f^{(2n+2)},..., f^{(m)} \rangle$ contain another. 
First we prove that the ideal \begin{center} $L_{j+u,2n+2-j-u,2} + \langle f^{(2n+2)},..., f^{(m)} \rangle$ \end{center} is prime. From Lemma \ref{An polynomial modulo relation} and Lemma \ref{toCalculateJetPolynomialsWhenSomeCoordinate=0}, we have \begin{alignat}{4} f^{(2n+2+v)} &\ \equiv \ && f^{(2n+2+v)}_{j+u,2n+2-j-u,2} &&\ \mathrm{mod}\ L_{j+u,2n+2-j-u,2}\\ &\ =\ && g_{j+u,2}^{(v)}. \end{alignat} for $v = 0,...,m - 2(n+1)$. Then \begin{center} $L_{j+u,2n+2-j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle = L_{j+u,2n+2-j-u,2} + \langle g^{(0)}_{j+u,2},...,g^{(m - 2n-2)}_{j+u,2} \rangle$. \end{center} From Proposition \ref{irreducibility and irreducible decomposition of singular fiber}, the ideal $\langle g^{(0)}_{j+u,2},...,g^{(m - 2n-2)}_{j+u,2} \rangle$ is prime. Moreover, we have \begin{alignat}{3} g^{(v)}_{j+u,2} &\ \in k[x_{j+u},...,x_{j+u+v},y_{2n+2-j-u},...,y_{2n+2-j-u+v},z_2,...,z_{2+v}]\\ &\ \subseteq k[x_{j+u},...,x_{j+u+m-2n-2},y_{2n+2-j-u},...,y_{m-j-u},z_2,...,z_{m-2n}] \end{alignat} for $v = 0,...,m - 2(n+1)$ from Remark \ref{Slide Polynomials and apeearing coordinates}. In particular, the variables appearing in $g^{(v)}_{j+u,2}$ are disjoint from $x_0,...,x_{j-1 + u},y_0,...,y_{2n-j+1-u}$ and $z_0, z_1$. So the ideal \begin{center} $L_{j+u,2n+2-j-u,2} + \langle g^{(0)}_{j+u,2},...,g^{(m - 2n-2)}_{j+u,2} \rangle$ \end{center} is also prime. Hence the variety \begin{center} $\mathbf{V}(L_{j+u,2n+2-j-u,2} + \langle g^{(0)}_{j+u,2},...,g^{(m - 2n-2)}_{j+u,2} \rangle) \cong X_{m-2(n+1)}\times \mathbb{A}^{4n+2}$ \end{center} in $\mathbb{A}^{3m+3}$ is irreducible. Next we check for $u_1,u_2 \in \{0,...,n+1-(j-i)\}$ with $u_1 < u_2$, that the prime ideals $L_{j+u_l,2n+2-j-u_l,2} + \langle g^{(0)}_{j+u_l,2},...,g^{(m - 2n-2)}_{j+u_l,2} \rangle$ ($l = 1,2$) are not contained in each other. 
We have $x_{j + u_2 -1} \notin \langle g^{(0)}_{j+u_1,2},...,g^{(m - 2n-2)}_{j+u_1,2} \rangle$ since the ideal $\langle g^{(0)}_{j+u_1,2},...,g^{(m - 2n-2)}_{j+u_1,2} \rangle$ is reduced and the degrees of all terms of the generators are at least $2$. So we have \begin{alignat}{3} &\ x_{j + u_2 -1} \notin&\ L_{j+u_1,2n+2-j-u_1,2} + \langle g^{(0)}_{j+u_1,2},...,g^{(m - 2n-2)}_{j+u_1,2} \rangle\\ \mathrm{and}\ &\ x_{j + u_2 -1} \in&\ L_{j+u_2,2n+2-j-u_2,2} + \langle g^{(0)}_{j+u_2,2},...,g^{(m - 2n-2)}_{j+u_2,2} \rangle. \end{alignat} Similarly, we can check that \begin{alignat}{3} &\ y_{2n+1-j-u_1} \notin&\ L_{j+u_2,2n+2-j-u_2,2} + \langle g^{(0)}_{j+u_2,2},...,g^{(m - 2n-2)}_{j+u_2,2} \rangle\\ \mathrm{and}\ &\ y_{2n+1-j-u_1} \in&\ L_{j+u_1,2n+2-j-u_1,2} + \langle g^{(0)}_{j+u_1,2},...,g^{(m - 2n-2)}_{j+u_1,2} \rangle. \end{alignat} Hence the decomposition \begin{center} $Z_m^i \cap Z_m^j = \displaystyle \bigcup_{u = 0}^{n+1 - (j-i)} \mathbf{V}(L_{j+u,2n+2-j-u,2} + \langle f^{(2n+2)},...,f^{(m)} \rangle)$ \end{center} is an irreducible decomposition. \end{proof} \begin{Coro} \label{dimensions of A_n-type} Assume $m \geq n \geq 2$ and $1 \leq i < j \leq n$. The intersections $Z_m^i \cap Z_m^j$ of irreducible components of the singular fiber are of pure dimension, and the following hold. \begin{itemize} \item[(a)] If $m=n$, then $\mathrm{dim}\ Z_n^i \cap Z_n^j = 2n - (j - i) +1$, \item[(b)] Assume $m > n$. \begin{itemize} \item[{\rm(\hspace{.18em}i\hspace{.18em})}] If $m-n < j-i$, then $\mathrm{dim}\ Z_m^i \cap Z_m^j = 3m - n - (j - i)$, \item[{\rm(\hspace{.18em}ii\hspace{.18em})}] If $j-i \leq m-n$, then $\mathrm{dim}\ Z_m^i \cap Z_m^j = 2m$. \end{itemize} \end{itemize} \end{Coro} \begin{proof} (a) From Theorem \ref{irreducible decomposition of intersection of A_n-type}(a), the closed subvariety $Z_n^i \cap Z_n^j$ is defined by the ideal $I_n^{i,j} = L_{j,n+1-i,1} = \langle x_0,...,x_{j-1},y_0,...,y_{n-i},z_0 \rangle$ in $\mathbb{A}^{3(n+1)}$.
So \begin{center} $\mathrm{dim}\ Z_n^i \cap Z_n^j = 3(n+1) - j - (n+1-i) -1 = 2n - (j-i) +1$. \end{center} (b) {\rm(\hspace{.18em}i\hspace{.18em})} From Theorem \ref{irreducible decomposition of intersection of A_n-type}(b), the closed subvariety $Z_m^i \cap Z_m^j$ is defined by the ideal $I_m^{i,j} = L_{j,n-i+1,2} = \langle x_0,...,x_{j-1},y_0,...,y_{n-i},z_0,z_1 \rangle$ in $\mathbb{A}^{3(m+1)}$. So \begin{center} $\mathrm{dim}\ Z_m^i \cap Z_m^j = 3(m+1) - j - (n+1-i) -2 = 3m - n - (j-i)$. \end{center} {\rm(\hspace{.18em}ii\hspace{.18em})} If $m < 2n+2$, then by Theorem \ref{irreducible decomposition of intersection of A_n-type}(c), each irreducible component of $Z_m^i \cap Z_m^j$ is the linear subvariety $\mathbf{V}(L_{j+u,m-j-u+1,2})$ of $\mathbb{A}^{3(m+1)}$, whose codimension is $(j+u) + (m-j-u+1) + 2 = m+3$; hence its dimension is $3(m+1) - (m+3) = 2m$. If $m \geq 2n+2$, then by Theorem \ref{irreducible decomposition of intersection of A_n-type}(d), each irreducible component of $Z_m^i \cap Z_m^j$ is isomorphic to $X_{m-2(n+1)} \times \mathbb{A}^{4n+2}$. Thus \begin{center} $\mathrm{dim}\ Z_m^i \cap Z_m^j = \mathrm{dim}\ X_{m-2(n+1)} \times \mathbb{A}^{4n+2} = 2(m-2n-1) + 4n+2 = 2m$. \end{center} \end{proof} \begin{Ex} For an $A_3$-type singular surface, Table \ref{table : dimensins and codimnsions} shows the dimensions and codimensions of $Z_m^i$ and $Z_m^i \cap Z_m^j$ and the numbers of the irreducible components $N_m^{ij}$ of $Z_m^i \cap Z_m^j$ for small values of $m$.
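These values can be generated mechanically from the case analysis of Theorem \ref{irreducible decomposition of intersection of A_n-type} and Corollary \ref{dimensions of A_n-type}; the following Python sketch (an informal check of ours, not part of \cite{M1}) encodes the formulas:

```python
def intersection_data(n, i, j, m):
    """(dimension, number of components) of Z_m^i ∩ Z_m^j for the A_n case."""
    assert m >= n and 1 <= i < j <= n
    d = j - i
    if m == n:                       # Theorem (a): one linear component
        return 2*n - d + 1, 1
    if m - n < d:                    # Theorem (b), Corollary (b)(i)
        return 3*m - n - d, 1
    dim = 2*m                        # Corollary (b)(ii)
    if m < 2*n + 2:                  # Theorem (c)
        return dim, m - n - d + 1
    return dim, n - d + 2            # Theorem (d)

# reproduce the A_3 rows: (dim, N) for (i, j) = (1, 2) and (1, 3), m = 3,...,7
for m in range(3, 8):
    print(m, intersection_data(3, 1, 2, m), intersection_data(3, 1, 3, m))
```

The codimension columns are obtained as $3(m+1)$ minus the corresponding dimension.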
\begin{table}[htp] \caption{Dimensions, codimensions and numbers of irreducible components for an $A_3$-type singular surface} \label{table : dimensins and codimnsions} \begin{tabular}{|c|c|c|c|c|c|c|c|c|} \hline & \multicolumn{3}{|c|}{$\mathrm{dim}$} & \multicolumn{3}{|c|}{$\mathrm{codim}_{\mathbb{A}^{3(m+1)}}$} & \multicolumn{2}{|c|}{} \\ \hline $m$ &\ \ $Z_m^i$\ \ & $Z_m^1 \cap Z_m^2$ & $Z_m^1 \cap Z_m^3$ & \ \ $Z_m^i$\ \ & $Z_m^1 \cap Z_m^2$ & $Z_m^1 \cap Z_m^3$ &\ \ $N_m^{12}$\ \ &\ \ $N_m^{13}$\ \ \\ \hline 3 & 7 & 6 & 5 & 5 & 6 & 7 & 1 & 1\\ 4 & 9 & 8 & 7 & 6 & 7 & 8 & 1 & 1\\ 5 & 11 & 10 & 10 & 7 & 8 & 8 & 2 & 1\\ 6 & 13 & 12 & 12 & 8 & 9 & 9 & 3 & 2\\ 7 & 15 & 14 & 14 & 9 & 10 & 10 & 4 & 3\\ \hline \end{tabular} \end{table} \end{Ex} \begin{Coro} \label{maximal condition of A_n} For $m \geq n$ and $i, j, k, l \in \{1,...,n\}$ with $i < j$ and $k < l$, we have \begin{center} $Z_m^i \cap Z_m^j \subset Z_m^k \cap Z_m^l$ \end{center} if and only if $i \leq k < l \leq j$. In particular, the intersection $Z_m^i \cap Z_m^j$ of the irreducible components of $X_m^0$ is maximal in $\{Z_m^i \cap Z_m^j \mid i \neq j \}$ for the inclusion relation if and only if $| i - j | = 1$. \end{Coro} Now we define a graph $\Gamma$ using the information on $X_m^0$ as follows. \begin{Const}\label{ConstructionofGraph} The graph $\Gamma = (V, E)$ is constructed as follows. \begin{itemize} \item The vertices $V$ are the irreducible components of $X_m^0$. \item The edges $E$ are all maximal elements of $\{ Z_m^i \cap Z_m^j \mid i \neq j \}$, and $Z_m^i \cap Z_m^j$ connects $Z_m^i$ and $Z_m^j$. \end{itemize} In other words, an edge is given between $Z_m^i$ and $Z_m^j$ if and only if $Z_m^i \cap Z_m^j \in E$. \end{Const} \begin{Coro} The graph obtained by Construction \ref{ConstructionofGraph} is isomorphic to the resolution graph of an $A_n$-type singularity.
\end{Coro} \section{Intersections of irreducible components of the singular fiber of a jet scheme : $D_4$ case} From the result for $A_n$-type singular surfaces in the previous section, we expect the following: If $X$ is a surface over $\mathbb{C}$ with a rational double point singularity, the graph constructed in Construction \ref{ConstructionofGraph} is isomorphic to the resolution graph of $X$ for $m \gg 0$. In this section, we show that this holds in the case where $X$ has a $D_4$-type singular point. In \cite{M2}, Mourtada describes how to obtain the irreducible components of the singular fiber of the jet scheme of $X$. In this case, unlike the case of $A_n$-type singularities, we do not know the generators of the defining ideals of the irreducible components when the order $m$ is large. Let $f(x,y,z) = x^2 - y^2z + z^3 \in \mathbb{C}[x,y,z]$ and let $X \subset \mathbb{A}^3$ be the hypersurface defined by $f$. For any $m \in \mathbb{Z}_{\geq 0}$, let $R_m := \mathbb{C}[x_0,...,x_m,y_0,...,y_m,z_0,...,z_m]$ and $\pi_m : X_m \rightarrow X$ be the truncation morphism. We denote the origin $x = y = z = 0$ of $\mathbb{A}^3$ by $0$. Then the surface $X$ has a singular point at $0$. Let us denote the singular fiber $\pi^{-1}_m(0)$ by $X_m^0$. We fix some more notations. \begin{Nota} \label{about Jet polynomials} Assume $m \geq 5$.
We define a number of ideals of $R_m$ which are defining ideals of subvarieties of $X_m^0$: \begin{alignat}{1} L_{pqr} = \langle x_0,...,x_{p-1},y_0,...,y_{q-1},z_0,...,z_{r-1} \rangle \label{mainPartI_m^0} \end{alignat} as in section 2, \begin{alignat}{6} L^1 \quad =&\quad \langle x_0,x_1, &&\, \; y_0, &&\, \; z_0,z_1 && &\rangle &\, \; = L_{212}, \label{mainPartI_m^1}\\ L^2 \quad =&\quad \langle x_0,x_1, &&\, \; y_0, &&\, \; z_0, &&\, \; y_1 - z_1 &\rangle, \label{mainPartI_m^2}\\ L^3 \quad =&\quad \langle x_0,x_1, &&\, \; y_0, &&\, \; z_0, &&\, \; y_1 + z_1 &\rangle, \label{mainPartI_m^3} \end{alignat} and \begin{alignat}{5} I_m^0 &\ \ =\ \ &&L_{322} &\ + \langle f^{(0)},...,f^{(m)} \rangle, \label{PrimeIdealV_m^0}\\ J_m^1 &\ \ =\ \ &&L^1 &\ + \langle f^{(0)},...,f^{(m)} \rangle, \label{OpenPrimeIdealV_m^1}\\ J_m^2 &\ \ =\ \ &&L^2 &\ + \langle f^{(0)},...,f^{(m)} \rangle, \label{OpenPrimeIdealV_m^2}\\ J_m^3 &\ \ =\ \ &&L^3 &\ + \langle f^{(0)},...,f^{(m)} \rangle. \label{OpenPrimeIdealV_m^3} \end{alignat} \end{Nota} \begin{Prop} (\cite[Section 3.2]{M2}) \label{J_m^i*(R_m)_{y_1} are not equal to (R_m)_{y_1}} The ideal $I_m^0$ is a prime ideal in $R_m$, and the ideals $J_m^i\cdot (R_m)_{y_1}$ for $i = 1,2,3$ are prime ideals in $(R_m)_{y_1}$. In particular, $\mathbf{V}(J_m^{i}) \cap \mathbf{D}(y_1)$ are irreducible subvarieties of $\mathbf{D}(y_1)$, where $\mathbf{D}(y_1)$ is the open subscheme of $X_m$ defined by $y_1 \neq 0$. \end{Prop} From these, we define some irreducible subvarieties of $X_m$ contained in $X_m^0$. \begin{Def} \label{definition of irreducible components of X_m^0 for D_4-type singular surface} Let $Z_m^0,...,Z_m^3$ be the closed subvarieties of $X_m$, contained in $X_m^0$, defined by \begin{center} $Z_m^0 := \mathbf{V}(I_m^0)$, \end{center} and for $i \in \{1,2,3\}$, \begin{center} $Z_m^i := \overline{\mathbf{V}(J_m^{i}) \cap \mathbf{D}(y_1)}$, \end{center} where the bar means the Zariski closure in $X_m$. 
\end{Def} The defining ideals of $Z_m^i$ are obtained as follows. \begin{Def} \label{definition of Z_m^i} For $i = 1,2,3$, we define the ideals $I_m^i$ by \begin{center} $I_m^i := J_m^i\cdot (R_m)_{y_1} \cap R_m$. \end{center} \end{Def} \begin{Lemma} (\cite[Section 3.2]{M2}) \label{Zariski closure} Assume $m \geq 5$. The ideals $I_m^i$ are prime ideals in $R_m$ and $Z_m^i = \mathbf{V}(I_m^i)$. \end{Lemma} The singular fiber $X_m^0$ decomposes as follows. \begin{Prop} (\cite[Section 3.2]{M2}) \label{Irreducible decomposition of singular fiber of D_4-type singular point} Assume $m \geq 5$. The irreducible decomposition of the singular fiber $X_m^0$ is given by \begin{center} $X_m^0 = Z_m^0 \cup Z_m^1 \cup Z_m^2 \cup Z_m^3$. \end{center} Moreover, the dimensions of $Z_m^i$ are equal to $2m+1$ for $i = 0,...,3$. \end{Prop} \begin{Rem} (\cite[Theorem 3.2]{M2}) The codimension of $Z_m^i$ in $\mathbb{A}^{3(m+1)}$ is equal to $m+2$. The defining ideal of $X_m^0$ is generated by $m+2$ elements $x_0,y_0,z_0$ and $f^{(2)},...,f^{(m)}$, so $X_m^0$ is a complete intersection in $\mathbb{A}^{3(m+1)}$. Note that the dimension of the jet scheme $(X_{\mathrm{sm}})_m$ of the smooth locus $X_{\mathrm{sm}}$ of $X$ is $2(m+1)$ and its codimension in $\mathbb{A}^{3(m+1)}$ is $m+1$. \end{Rem} From now on, we suppose $m \geq 5$. We will take advantage of the symmetries of $X$. Let $\varphi_1$ and $\varphi_2$ be the automorphisms of $R_m$ defined by \[ \varphi_1 : \begin{cases} x_i \mapsto x_i, \\ y_i \mapsto - y_i, \\ z_i \mapsto z_i, \end{cases} \varphi_2 : \begin{cases} x_i \mapsto x_i, \\ y_i \mapsto -\tfrac{1}{2}y_i + \tfrac{3}{2}z_i, \\ z_i \mapsto -\tfrac{1}{2}y_i - \tfrac{1}{2}z_i.
\end{cases} \] \begin{Rem} \label{varphi preserve the f^{(i)}} The automorphisms $\varphi_1$ and $\varphi_2$ are induced by the automorphisms of $\mathbb{C}[x,y,z]$ defined by \[ \begin{cases} x \mapsto x, \\ y \mapsto -y, \\ z \mapsto z, \end{cases} \mathrm{and} \begin{cases} x \mapsto x, \\ y \mapsto -\tfrac{1}{2}y + \tfrac{3}{2}z, \\ z \mapsto -\tfrac{1}{2}y - \tfrac{1}{2}z, \end{cases} \] for which the polynomial $f$ is invariant. So $\varphi_1$ and $\varphi_2$ preserve the polynomials $f^{(j)}$ $(j \in \{0,...,m\})$. Therefore, $\varphi_1$ and $\varphi_2$ induce automorphisms of $X_m$. We denote these morphisms by $\psi_1$ and $\psi_2$. \end{Rem} Now we show how the morphisms $\psi_1$ and $\psi_2$ act on the set of the closed subvarieties $\{Z_m^0,Z_m^1, Z_m^2,Z_m^3\}$. We need the following lemma. \begin{Lemma}\label{I_m^i another defining ideals} Assume $m \geq 5$. For $i = 1,2,3$, we have \begin{center} $Z_m^i = \overline{\mathbf{V}(J_m^i) \cap \mathbf{D}(y_1)} = \overline{\mathbf{V}(J_m^i) \cap \mathbf{D}(y_1 -3z_1)}$. \end{center} \end{Lemma} \begin{proof} From Proposition \ref{J_m^i*(R_m)_{y_1} are not equal to (R_m)_{y_1}}, $y_1 \notin \sqrt{J_m^i}$ for $i = 1,2,3$. Moreover, we note that $z_1 \in J_m^1$, $y_1 - z_1 \in J_m^2$ and $y_1 + z_1 \in J_m^3$. Hence $y_1(P) = 0$ if and only if $(y_1 - 3z_1)(P) =0$ for $P \in \mathbf{V}(J_m^i)$. Thus $\mathbf{V}(J_m^i) \cap \mathbf{D}(y_1) = \mathbf{V}(J_m^i) \cap \mathbf{D}(y_1 -3z_1)$, so \begin{center} $\overline{\mathbf{V}(J_m^i) \cap \mathbf{D}(y_1 -3z_1)} = \overline{\mathbf{V}(J_m^i) \cap \mathbf{D}(y_1)} = Z_m^i$ \end{center} by Proposition \ref{J_m^i*(R_m)_{y_1} are not equal to (R_m)_{y_1}}. \end{proof} \begin{Prop} \label{trancefar irreducible components} Assume $m \geq 5$. The irreducible components of $X_m^0$ are mapped to one another by $\psi_s$ $(s = 1,2)$ as follows: \begin{enumerate} \item[(a)] $\psi_1(Z_m^0) = Z_m^0$, $\psi_1(Z_m^1) = Z_m^1$, $\psi_1(Z_m^2) = Z_m^3$ and $\psi_1(Z_m^3) = Z_m^2$.
\item[(b)] $\psi_2(Z_m^0) = Z_m^0$, $\psi_2(Z_m^1) = Z_m^2$, $\psi_2(Z_m^2) = Z_m^3$ and $\psi_2(Z_m^3) = Z_m^1$.
\end{enumerate}
\end{Prop}
\begin{proof}
The case $\psi_s(Z_m^0) = Z_m^0$ can be obtained by a direct calculation. The other cases are handled as follows. For $i = 1,2,3$, the subvariety $Z_m^i$ is irreducible, and by the previous lemma we only have to show that
\begin{center}
$\psi_s(\mathbf{V}(J_m^i) \cap \mathbf{D}(\varphi_s(y_1))) \subseteq \mathbf{V}(J_m^j) \cap \mathbf{D}(y_1)$
\end{center}
to prove $\psi_s(Z_m^i) = Z_m^j$, since the isomorphism $\psi_s$ preserves the dimension of a subvariety. Since $\psi_s(\mathbf{D}(\varphi_s(y_1))) \subseteq \mathbf{D}(y_1)$, it suffices to show that
\begin{center}
$\psi_s(\mathbf{V}(J_m^i)) \subseteq \mathbf{V}(J_m^j)$.
\end{center}
We can easily check that $\varphi_s(L^j) \subseteq L^i$ for triples $(i,j,s)$ as in the statement, and the assertion follows.
\end{proof}
\begin{Coro} \label{varphi act the ideals as image}
We have the following:
\begin{itemize}
\item[(a')] $\varphi_1(I_m^0) = I_m^0$, $\varphi_1(I_m^1) = I_m^1$, $\varphi_1(I_m^2) = I_m^3$ and $\varphi_1(I_m^3) = I_m^2$.
\item[(b')] $\varphi_2(I_m^0) = I_m^0$, $\varphi_2(I_m^1) = I_m^3$, $\varphi_2(I_m^2) = I_m^1$ and $\varphi_2(I_m^3) = I_m^2$.
\end{itemize}
\end{Coro}
For the proof of our main result in this section, Theorem \ref{maain result in section4}, we will give a few explicit elements of $I_m^i$ and $I_m^{i,j} := \sqrt{I_m^i + I_m^j}$. Let
\begin{center}
$g_1 := -4y_2^2z_2^2 + y_1^2z_3^2 + 4x_3^2z_2 - 4x_2x_3z_3$
\end{center}
and
\begin{alignat}{3}
g_2 :=&\ -y_2^4 - 4y_2^3z_2 + 2y_2^2z_2^2 + 12y_2z_2^3 - 9z_2^4 + 4y_3^2z_1^2 - 8y_3z_1^2z_3 + 4z_1^2z_3^2\\
&\ + 8x_3^2y_2 - 8x_3^2z_2 - 8x_2x_3y_3 + 8x_2x_3z_3.
\end{alignat}
We computed the ideals $I_m^1$ and $I_m^2$ for small values of $m$ using Macaulay2, and found that $g_1$ and $g_2$ belong to $I_m^1$ and $I_m^2$ respectively for $m \geq 5$.
These two elements play important roles in the proof of the main theorem. For the reader's convenience, we will check this by hand in what follows.
\begin{Rem} \label{(1)unit in localization (2)relation of I_m^i and J_m^i}
Our strategy for finding elements of $I_m^i$ is as follows. Let $g \in R_m$ and $n \in \mathbb{Z}_{\geq 0}$. Suppose $y_1^n g \in J_m^i$. Then
\begin{center}
$g \in J_m^i(R_m)_{y_1} \cap R_m = I_m^i$,
\end{center}
since $y_1$ is a unit in $(R_m)_{y_1}$.
\end{Rem}
\begin{Lemma} \label{important elements}
For $m \geq 5$, we have $g_1 \in I_m^1$ and $g_2 \in I_m^2$.
\end{Lemma}
\begin{proof}
First, we prove $g_1 \in I_m^1$. Let $\mathbf{a} := x_2t^2 + x_3t^3 + x_4t^4 + x_5t^5$, $\mathbf{b} := y_1t + y_2t^2 + y_3t^3 + y_4t^4 + y_5t^5$ and $\mathbf{c} := z_2t^2 + z_3t^3 + z_4t^4 + z_5t^5$. We denote the coefficient of $t^i$ in $f(\mathbf{a}, \mathbf{b}, \mathbf{c})$ by $F^{(i)}$. Then, since
\begin{center}
$\mathbf{x} \equiv \mathbf{a}, \mathbf{y} \equiv \mathbf{b}, \mathbf{z} \equiv \mathbf{c}$ mod $L^1 \cdot R_m[t]/\langle t^6 \rangle$
\end{center}
(see Notation \ref{about Jet polynomials}), we have
\begin{center}
$F^{(i)} \equiv f^{(i)}$ mod $L^1 \cdot R_m[t]/\langle t^6 \rangle$,
\end{center}
so $F^{(i)} \in J_m^1$. We calculate
\begin{alignat}{4}
F^{(4)} =\ & x_2^2 \ \ && - y_1^2z_2 \\
F^{(5)} =\ & 2x_2x_3 \ \ && -2y_1y_2z_2 - y_1^2z_3,
\end{alignat}
and then
\begin{alignat}{6}
F^{(5)2} =\ &\ \ \ \ 4y_1^2y_2^2z_2^2 \ &&+y_1^4z_3^2 - 4x_2x_3y_1^2z_3 \ &&+ 4x_2^2x_3^2 \ &&- 8x_2x_3y_1y_2z_2 + 4y_1^3y_2z_2z_3\\
-4x_3^2F^{(4)} =\ && \ &\ 4x_3^2y_1^2z_2 \ &&- 4x_2^2x_3^2\\
4y_1y_2z_2F^{(5)} =\ &- 8y_1^2y_2^2z_2^2 \ && && &&+ 8x_2x_3y_1y_2z_2 - 4y_1^3y_2z_2z_3
\end{alignat}
Hence
\begin{center}
$y_1^2g_1 = F^{(5)2} - 4x_3^2F^{(4)} + 4y_1y_2z_2F^{(5)} \in J_m^1$.
\end{center}
By Remark \ref{(1)unit in localization (2)relation of I_m^i and J_m^i}, $g_1 \in I_m^1$. Next we prove $g_2 \in I_m^2$.
We calculate as follows: By Corollary \ref{varphi act the ideals as image}, we have
\begin{alignat}{5}
\varphi_2^{-1}(g_1) =&\ \displaystyle -4\left(-\frac{1}{2}y_2-\frac{3}{2}z_2 \right)^2 \left(\frac{1}{2}y_2 - \frac{1}{2}z_2 \right)^2 + \left( -\frac{1}{2}y_1 - \frac{3}{2}z_1 \right)^2 \left( \frac{1}{2}y_3 - \frac{1}{2}z_3 \right)^2\\
&\ +4x_3^2 \left(\frac{1}{2}y_2 - \frac{1}{2}z_2 \right) - 4x_2x_3 \left(\frac{1}{2}y_3 - \frac{1}{2}z_3 \right) & \ \in I_m^2
\end{alignat}
Since $y_1 - z_1 \in L^2$, we obtain an element of $I_m^2$ when $y_1$ is replaced by $z_1$. Then the right-hand side is equal to
\begin{alignat}{3}
&\ \displaystyle \frac{1}{4} ( -y_2^4 - 4y_2^3z_2 + 2y_2^2z_2^2 + 12y_2z_2^3 - 9z_2^4 + 4y_3^2z_1^2 - 8y_3z_1^2z_3 + 4z_1^2z_3^2\\
&\ \ + 8x_3^2y_2 - 8x_3^2z_2 - 8x_2x_3y_3 + 8x_2x_3z_3 )\\
= &\ \displaystyle \frac{1}{4}g_2.
\end{alignat}
Thus $g_2 \in I_m^2$.
\end{proof}
\begin{Lemma} \label{the corrdinate element of I_m^i,j}
For $m \geq 5$ and $i,j \in \{1,2,3\}$ with $i \neq j$, we have $y_1, z_1,x_2 \in I_m^{i,j}$, i.e. $I_m^{i,j} \supseteq L_{322}$.
\end{Lemma}
\begin{proof}
Note that, if $y_1, z_1 \in I$ for an ideal $I$, then we can easily check that $y_1, z_1 \in \varphi_s(I)$, for $s = 1,2$. We also note that $\varphi_s$ ($s = 1,2$) preserves the elements $x_i$, for $i = 0, ..., m$. Hence it is sufficient to show that these three elements belong to $I_m^{1,2}$ by Corollary \ref{varphi act the ideals as image}. First we check that $y_1,z_1 \in I_m^{1,2}$ i.e., $I_m^{1,2} \supseteq L_{222}$. By the definition of $I_m^i$ and $I_m^{i,j}$, $I_m^{1,2} = \sqrt{I_m^1 + I_m^2} \supseteq J_m^1 + J_m^2$. So $y_1$ and $y_1 - z_1$ belong to $I_m^{1,2}$, i.e. $y_1$ and $z_1$ belong to $I_m^{1,2}$. Next we show that $x_2$ belongs to $I_m^{1,2}$. Let $\mathbf{a} = x_2t^2 + x_3t^3 + x_4t^4$, $\mathbf{b} = y_2t^2 + y_3t^3 + y_4t^4$ and $\mathbf{c} = z_2t^2 + z_3t^3 + z_4t^4$.
Then the coefficient of $t^4$ of $f(\mathbf{a}, \mathbf{b}, \mathbf{c})$ is equal to $x_2^2$. Moreover, we have $\mathbf{x} \equiv \mathbf{a}, \mathbf{y} \equiv \mathbf{b}, \mathbf{z} \equiv \mathbf{c}$ mod $L_{222}$, hence \begin{center} $x_2^2 \equiv f^{(4)}$ mod $L_{222}$, \end{center} and $x_2 \in \sqrt{I_m^{1,2}} = I_m^{1,2}$ i.e., $I_m^{1,2} \supseteq L_{322}$. \end{proof} We need the following two lemmas for the proof of the main theorem. \begin{Lemma} \label{relation 0 component} For $m \geq 5$ and for any $i,j \in \{1,2,3\}$ with $i \neq j$, we have $Z_m^i \cap Z_m^j \subseteq Z_m^0$. \end{Lemma} \begin{proof} From Lemma \ref{the corrdinate element of I_m^i,j}, we have $I_m^{i,j} \supseteq L_{322}$ for $i,j \in \{1,2,3\}$ with $i \neq j$. Hence \begin{center} $I_m^0 = L_{322} + \langle f^{(0)},..., f^{(m)} \rangle = \langle x_0,x_1,x_2,y_0,y_1,z_0,z_1 \rangle + \langle f^{(0)},..., f^{(m)} \rangle \subseteq I_m^{i,j}$. \end{center} \end{proof} \begin{Lemma} \label{maximal intersection} For $m \geq 5$ and $i, j \in \{1,2,3\}$ with $i \neq j$, we have $Z_m^0 \cap Z_m^i \supsetneq Z_m^0 \cap Z_m^i \cap Z_m^j$. \end{Lemma} \begin{proof} By Proposition \ref{trancefar irreducible components}, we have only to show that $Z_m^0 \cap Z_m^1 \supsetneq Z_m^0 \cap Z_m^1 \cap Z_m^2$. In other words, $\sqrt{I_m^0 + I_m^1} \subsetneq \sqrt{I_m^0 + I_m^1 + I_m^2}$. The proof is divided into two cases, (a) $m = 5$ and (b) $m \geq 6$. The case (b). We prove $y_2 \in \sqrt{I_m^0 + I_m^1 + I_m^2}$ and $y_2 \notin \sqrt{I_m^0 + I_m^1}$. First we prove $y_2 \in \sqrt{I_m^0 + I_m^1 + I_m^2}$. From Lemma \ref{important elements}, $g_1 \in I_m^1 \subseteq \sqrt{I_m^0 + I_m^1 + I_m^2}$. We consider the following two elements modulo $L_{322}$: \begin{alignat}{4} f^{(6)} \equiv&\ - y_2^2z_2 &&\ + z_2^3 + x_3^2\\ g_1 \equiv&\ -4y_2^2z_2^2 &&\ + 4z_2x_3^2. 
\end{alignat} Then we have \begin{center} $4z_2^4 \equiv 4z_2f^{(6)} - g_1$, \end{center} and the right hand side belongs to $I_m^1$. Thus $4z_2^4 \in I_m^0 + I_m^1$, and $z_2 \in \sqrt{I_m^0 + I_m^1}$. From \begin{center} $f^{(6)} \equiv x_3^2 \ \mathrm{mod}\ \langle L_{322}, z_2 \rangle = L_{323}$, \end{center} we have $x_3 \in \sqrt{I_m^0 + I_m^1}$. Now from $g_2 \in I_m^2$ and \begin{center} $g_2 \equiv -y_2^4 \ \mathrm{mod}\ \langle L_{323}, x_3 \rangle$, \end{center} it follows that $-y_2^4 \in \sqrt{I_m^0 + I_m^1} + I_m^2$, hence that $y_2 \in \sqrt{I_m^0 + I_m^1 + I_m^2}$. Next we prove $y_2 \notin \sqrt{I_m^0 + I_m^1}$. We consider the point \begin{center} $\mathrm{P'} = (\mathbf{\alpha}, \mathbf{\beta}, \mathbf{\gamma}) = (0, st + t^2, 0)$ \end{center} for any $s \in k - \{0\}$. This point belongs to $\mathbf{V}(J_m^1) \cap \mathbf{D}(y_1)$ from the description of generators of $J_m^1$ in $\mathbb{C}[x_0,...,x_m,y_0,...,y_m,z_0,...,z_m]$ in Notation \ref{about Jet polynomials} \eqref{OpenPrimeIdealV_m^1}. When $s$ goes to $0$, $\mathrm{P'}$ becomes $\mathrm{P} = (0, t^2, 0)$ and this point belongs to $\overline{\mathbf{V}(J_m^1) \cap \mathbf{D}(y_1)} = Z_m^1$. Moreover $\mathrm{P} \in Z_m^0$, since $Z_m^0 = \mathbf{V}(I_m^0)$ (Notation \ref{about Jet polynomials} \eqref{PrimeIdealV_m^0}). Hence $\mathrm{P} \in Z_m^0 \cap Z_m^1$. Since $y_2 = 1$ at $\mathrm{P}$, $y_2 \notin \mathbf{I}(Z_m^0 \cap Z_m^1) = \sqrt{I_m^0 + I_m^1}$. The case (a). We prove $\mathrm{Q} = (-t^3, -t^2 ,t^2) \in Z_5^0 \cap Z_5^1$, but $\mathrm{Q} \notin Z_5^0 \cap Z_5^1 \cap Z_5^2$. First we prove $\mathrm{Q} \in Z_5^0 \cap Z_5^1$. We can easily check $\mathrm{ord}_{Q}(f) > 5$ and $Q \in \mathbf{V}(L_{322})$, and we have $\mathrm{Q} \in \mathbf{V}(I_5^0) = Z_5^0$. To show that $\mathrm{Q} \in Z_5^1$, we consider $\mathrm{Q'} = (st^2 - t^3, st - t^2, t^2)$ where $s \in k-\{0\}$. 
Then $\mathrm{Q'}$ belongs to $\mathbf{V}(J_5^1) \cap \mathbf{D}(y_1)$, and taking the limit $s \rightarrow 0$, we get $\mathrm{Q} \in Z_5^1$. Thus $\mathrm{Q} \in Z_5^0 \cap Z_5^1$. Next we prove $\mathrm{Q} \notin Z_5^0 \cap Z_5^1 \cap Z_5^2$. By Lemma \ref{important elements}, we have $g_2 \in \sqrt{I_5^0 + I_5^1 + I_5^2}$. Since $I_5^0 \supseteq L_{322} = \langle x_0, x_1, x_2, y_0, y_1, z_0, z_1 \rangle$, we have
\begin{center}
$g_2 \equiv -y_2^4 - 4y_2^3z_2 + 2y_2^2z_2^2 + 12y_2z_2^3 - 9z_2^4 + 8x_3^2y_2 - 8x_3^2z_2\ \mathrm{mod}\ L_{322}$,
\end{center}
and the right-hand side belongs to $\sqrt{I_5^0 + I_5^1 + I_5^2}$. We set
\begin{center}
$h := -y_2^4 - 4y_2^3z_2 + 2y_2^2z_2^2 + 12y_2z_2^3 - 9z_2^4 + 8x_3^2y_2 - 8x_3^2z_2$.
\end{center}
At the point $\mathrm{Q}$, we have $x_3 = y_2 = -1$ and $z_2 = 1$, and
\begin{center}
$h = -(-1)^4 - 4\times(-1)^3\times1 + 2\times(-1)^2\times1^2 + 12\times(-1)\times1^3 - 9\times1^4 + 8\times(-1)^2\times(-1) - 8\times(-1)^2\times 1$\\
$= -32 \neq 0$.
\end{center}
Hence $\mathrm{Q} \notin Z_5^0 \cap Z_5^1 \cap Z_5^2$.
\end{proof}
\begin{Thm} \label{maain result in section4}
Let $X \subset \mathbb{C}^3$ be the surface defined by $f(x,y,z) = x^2 - y^2z + z^3$, $X_m^0$ the singular fiber of the $m$-th jet scheme $X_m$ with $m \geq 5$ and $Z_m^0, ... , Z_m^3$ its irreducible components defined in Definition \ref{definition of irreducible components of X_m^0 for D_4-type singular surface}. Then the maximal elements in $\{ Z_m^i \cap Z_m^j \mid i \neq j\ (i,j \in \{0,1,2,3\}) \}$ with respect to the inclusion relation are $Z_m^0 \cap Z_m^1$, $Z_m^0 \cap Z_m^2$ and $Z_m^0 \cap Z_m^3$, and they are pairwise distinct.
\end{Thm}
\begin{proof}
By Lemma \ref{relation 0 component} and Lemma \ref{maximal intersection}, for any $i, j \in \{1,2,3\}$ with $i \neq j$,
\begin{center}
$Z_m^i \cap Z_m^j = Z_m^i \cap Z_m^j \cap Z_m^0 \subsetneq Z_m^i \cap Z_m^0$.
\end{center}
Hence $Z_m^i \cap Z_m^j$ is not maximal with respect to the inclusion relation.
We show that $Z_m^0 \cap Z_m^i$ is maximal for $i \in \{1,2,3\}$. If $Z_m^0 \cap Z_m^i \subseteq Z_m^0 \cap Z_m^j$ for $j \in \{1,2,3\}$ and $j \neq i$, then $Z_m^0 \cap Z_m^i \cap Z_m^j = (Z_m^0 \cap Z_m^i) \cap (Z_m^0 \cap Z_m^j) = Z_m^0 \cap Z_m^i$. This contradicts Lemma \ref{maximal intersection}, so $Z_m^0 \cap Z_m^i \not\subseteq Z_m^0 \cap Z_m^j$. Moreover, if $Z_m^0 \cap Z_m^i \subseteq Z_m^l \cap Z_m^j$ for $j,l \in \{1,2,3\} - \{ i \}$ and $j \neq l$, then $Z_m^i \subseteq Z_m^0$ since $Z_m^l \cap Z_m^j \subseteq Z_m^0$ by Lemma \ref{relation 0 component}. This contradicts Proposition \ref{Irreducible decomposition of singular fiber of D_4-type singular point}, so $Z_m^0 \cap Z_m^i \not\subseteq Z_m^l \cap Z_m^j$ for $l \neq i, j$. Hence the $Z_m^0 \cap Z_m^i$ are maximal with respect to the inclusion relation and are pairwise distinct for $i = 1,2,3$.
\end{proof}
\begin{Coro}
The graph obtained by Construction \ref{ConstructionofGraph} for $m \geq 5$ is the resolution graph of a $D_4$-type singularity.
\end{Coro}
\begin{proof}
The set $E$ is $\{Z_m^0 \cap Z_m^1, Z_m^0 \cap Z_m^2, Z_m^0 \cap Z_m^3 \}$, so the graph obtained by Construction \ref{ConstructionofGraph} is as follows:
\[ \xymatrix{ & & Z_m^2\\ Z_m^1 \ar@{-}[r] & Z_m^0 \ar@{-}[ru] \ar@{-}[rd] & \\ & & Z_m^3. } \]
\end{proof}
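The two hand computations above are elementary polynomial arithmetic, so they can be spot-checked by machine. The following self-contained Python snippet (a sanity check only, independent of the Macaulay2 computations mentioned in the text) tests the identity $y_1^2g_1 = F^{(5)2} - 4x_3^2F^{(4)} + 4y_1y_2z_2F^{(5)}$ from the proof of Lemma \ref{important elements} at random integer points, and re-evaluates $h$ at the point $\mathrm{Q}$:

```python
import random

def check_g1_identity(trials=200):
    """Probabilistic check of y1^2 * g1 == F5^2 - 4*x3^2*F4 + 4*y1*y2*z2*F5,
    where F4 = x2^2 - y1^2*z2 and F5 = 2*x2*x3 - 2*y1*y2*z2 - y1^2*z3."""
    for _ in range(trials):
        x2, x3, y1, y2, z2, z3 = (random.randint(-10, 10) for _ in range(6))
        F4 = x2**2 - y1**2 * z2
        F5 = 2*x2*x3 - 2*y1*y2*z2 - y1**2 * z3
        g1 = -4*y2**2*z2**2 + y1**2*z3**2 + 4*x3**2*z2 - 4*x2*x3*z3
        if y1**2 * g1 != F5**2 - 4*x3**2*F4 + 4*y1*y2*z2*F5:
            return False
    return True

def h(x3, y2, z2):
    """The polynomial h (g2 reduced modulo L_{322}) from the proof of
    Lemma 'maximal intersection'."""
    return (-y2**4 - 4*y2**3*z2 + 2*y2**2*z2**2 + 12*y2*z2**3
            - 9*z2**4 + 8*x3**2*y2 - 8*x3**2*z2)
```

Agreement of both sides of a claimed polynomial identity at many random integer points makes a coincidence extremely unlikely, and $h(-1,-1,1) = -32$ reproduces the evaluation at $\mathrm{Q}$.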
\section{Conclusion}
\label{sec:conclusion}
In this work, common networked control strategies have been implemented for stabilizing a TWIPR, built with an inexpensive and highly reproducible platform (LEGO Mindstorm EV3). The networked controller is able to stabilize the TWIPR over a wireless channel despite time-varying delays and packet loss. Benchmark experiments have been defined so that local and remote controllers can be compared.
Future work will consider both practical and theoretical advancements: on the one hand, custom hardware is being developed. On the other hand, aiming at improving the performance shown in Section \ref{sec:performance}, both a state observer and an optimal control strategy for reference tracking (computationally more expensive than the solution adopted here) will be employed.
\section{Introduction}
Advancements in communication and computation technology have led to the concept of Networked Control Systems (NCSs), cf. \cite{hespanha2007survey}. In an NCS, components are distributed and interact via a communication network. This allows a considerable increase in flexibility, but also raises many design challenges (\cite{bemporad2010networked},\cite{walsh2002stability}). In the case of wireless communication, non-deterministic delays and packet losses are characteristic phenomena. {Delays are traditionally assumed shorter than the sampling time, see \cite{nilsson1998stochastic}, or compensated by adopting a Model Predictive Control scheme as in \cite{mori2014compensation}. On the other hand, packet loss is commonly compensated by either employing H-infinity control design, see \cite{ishii2008h}, or gain-scheduling feedback, see \cite{yu2008stabilizability}. A variety of further methods and approaches have been proposed to address these challenges (cf. \cite{zhang2016survey})}. The purpose of this paper is to design a networked controller for a recently proposed benchmark problem.
The benchmark, described in detail in \cite{ipsnpaper} and \cite{gallenmuller2018benchmarking}, is to remotely control a two-wheeled inverted pendulum robot (TWIPR) over a wireless network, see Fig.~\ref{fig:ncs}. To make the benchmark inexpensive and easily reproducible, the widespread platform LEGO Mindstorm EV3 is used to realize the robot. All details regarding plant and controller are publicly accessible at {\tt {\small https://github.com/tum-lkn/iccps-release}}. \begin{figure}[htb] \begin{tikzpicture}[font=\small] \draw (0, 0) node[inner sep=0] {\includegraphics[width=\columnwidth]{NCS.pdf}}; \draw (3.2, -2) node {control signals}; \draw[densely dotted] (1.5, -1.83)--(1.8, -1.83); \draw (1.6, 1.83)--(1.9, 1.83); \draw (-1.5, 2) node {measurement signals}; \end{tikzpicture} \caption{Networked Control of a TWIPR. In Section~\ref{sec:ncs_arch}, the asymmetry in wireless data transmission (dotted and unbroken lines) will be explained.% } \label{fig:ncs} \end{figure} The body of the robot is mounted on two wheels, on which two DC motors are splined. The control objective is to keep the robot in a vertical upright position while undergoing a predefined benchmarking experiment. To the best of our knowledge, the design of a networked controller for a TWIPR has not been addressed yet. Among other authors, \cite{pathak2005velocity} stabilized a TWIPR using local, i.e., non-networked, control. \cite{ananyevskiy2017control} proposed a control strategy over internet for coordinating oscillations of a group of pendula and validated it experimentally using LEGO Mindstorm NXT. Synchronization was not always achieved, and the controller did not compensate for lost communication packets. In the current paper we design a networked controller for a TWIPR by using two previously suggested strategies (cf. 
\cite{hespanha2007survey}): (i) we deliberately extend the actuation delays of the NCS in order to make them constant and deterministic; (ii) a sequence of future control inputs is computed from a sequence of model-based state predictions and is sent to the robot via the wireless network. If a control packet is lost, the robot uses the last received sequence of control inputs and applies the input corresponding to the current instant. {The remainder of this paper is organized as follows: in Section~\ref{sec:prob_desc}, a discrete-time linear dynamical plant model is presented and the available sensor information is described. Section~\ref{sec:loc_ctrl} introduces a local controller that is used as a baseline for the networked controller developed in Section~\ref{sec:wireless_ctrl}. The performance of both controllers is compared in Section~\ref{sec:performance}.} In the following, the set of nonnegative (positive) integers is $\mathbb{N}_0$ ($\mathbb{N}$). The set of real numbers is $\mathbb{R}$. The set of nonnegative (positive) real numbers is denoted $\mathbb{R}_{\geq0}$ ($\mathbb{R}_{>0}$). Given a time-dependent real-valued signal $x:\mathbb{R}_{\geq0}\mapsto\mathbb{R}$, its first and second derivative with respect to time are denoted $\dot{x}$ and $\ddot{x}$. Given a vector $\bm{v}\in\mathbb{R}^n$, $n\in\mathbb{N}$, its transpose is $\bm{v}'$, while the element in position $i\in\{1,\dots,n\}$ is $\bm{v}_i$. Given a matrix $A\in\mathbb{R}^{n\times m}$, its i-th column, with $i\in\{1,\dots,m\}$, is $[A]_i$. 
\begin{figure} \subfloat[Side view]{ \centering \includegraphics[width=.4\linewidth]{robot_side} \label{fig:roboside} } \qquad \subfloat[Top view]{ \centering \includegraphics[width=.4\linewidth]{robot_top} \label{fig:robotop} } \caption{Idealized mechanical model of the TWIPR.} \label{fig:robo} \end{figure} \section{Local Controller} \begin{figure} \includegraphics[width = \columnwidth, trim={.5cm 0 .5cm 0}]{local_exp.eps} \caption{ Measured states of the TWIPR while keeping its upright vertical position by employing (\ref{eq:controller}). } \label{fig:localExp} \end{figure} \begin{figure*}[t] \includegraphics[width = \textwidth, trim={2.2cm 0 2.5cm 0}]{local_plot.eps} \caption{Reference tracking experiment using local controller with a sample time of $35$[ms]. Raw measurements are plotted in yellow, filtered post-processed values in blue, and reference in red. The local controller guarantees stability and reference tracking, despite the noisy measurements. } \label{fig:localRun} \end{figure*} \label{sec:loc_ctrl} \subsection{Controller Design} \label{seq:controller} Due to limited available input voltages, we choose a linear quadratic regulator (LQR) design, which allows us to balance between performance and control effort, see \cite{astrom2010feedback}. The feedback control law uses the estimated state from Section~\ref{sec:sensors}, is based on (\ref{eq:disclinedynamics_aug_sec}), and has the form, $ \forall k\in\mathbb{N}_0$, \begin{equation} \label{eq:controller} {\bm u}_d(k) = -K \bbar{\bm {x}}(k), \end{equation} where $K\in\mathbb{R}^{2\times6}$ is the control gain matrix obtained by minimizing the cost function \begin{equation} \label{eq:criterion} J=\sum_{k=0}^\infty {\bm x}_d'(k)Q{\bm x}_d(k)+ {\bm u}_d'(k)R{\bm u}_d(k), \end{equation} with $Q\in\mathbb{R}^{6\times6}$ and $R\in\mathbb{R}^{2\times2}$ given\footnote{$ R = \mathrm{diag}(10^4, 10^4) $, $ Q = \mathrm{diag}(1, 10^3, 1, 1, 10^6, 1) $}. 
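Concretely, the stationary gain minimizing (\ref{eq:criterion}) can be obtained by iterating the discrete-time Riccati recursion until convergence (in practice, one would typically call a numerical solver such as SciPy's \texttt{solve\_discrete\_are} on $(A_d, B_d, Q, R)$). The following pure-Python sketch illustrates the recursion on a one-dimensional toy system; the numbers are illustrative placeholders, not the TWIPR model:

```python
def dlqr_scalar(a, b, q, r, iters=1000):
    """LQR gain for the scalar system x(k+1) = a*x(k) + b*u(k) with
    cost sum_k q*x(k)^2 + r*u(k)^2, via Riccati value iteration."""
    p = q  # initialize the Riccati variable with the stage cost
    for _ in range(iters):
        # P <- Q + A'PA - A'PB (R + B'PB)^{-1} B'PA   (scalar case)
        p = q + a * p * a - (a * p * b) ** 2 / (r + b * p * b)
    # optimal feedback u = -k*x with k = (R + B'PB)^{-1} B'PA
    return (b * p * a) / (r + b * p * b)

# Example: unstable pole a = 1.05, weak actuation b = 0.1, and a heavy
# input penalty r = 1e4 (mirroring the large R used in the footnote).
k_gain = dlqr_scalar(1.05, 0.1, 1.0, 1.0e4)
```

By LQR theory the resulting closed-loop pole $a - bk$ lies strictly inside the unit circle even under such a heavy input penalty, which is exactly the performance-versus-effort trade-off discussed above.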
The control gain $K$ is obtained by solving the Algebraic Riccati Equation associated with (\ref{eq:criterion}), see \cite[p. 171]{lalo2014advanced}. This controller is responsible for keeping the TWIPR in its upright pose despite disturbances.
\subsection{Benchmarking Experiment}
\label{seq:localExperiment}
In order to facilitate the comparison of different control strategies, benchmarking experiments are defined. The robot body is lifted manually, and measurements (\ref{eq:gyroRate})--(\ref{eq:yawRate}) are started. As soon as the measured pitch angle reaches a neighborhood of $0$, say at discrete-time step $\bar{k}\in\mathbb{N}$ corresponding to time $\bar{t}\in\mathbb{R}$, the loop is closed and (\ref{eq:controller}) is applied for all $k\geq\bar{k}$. Results from one trial are presented in Figure \ref{fig:localExp}. The local controller ensures that all states remain close to zero.
In order to test a more dynamic and challenging scenario, we also implement the tracking control law
\begin{equation}
\label{eq:controller_wrong}
{\bm u}(k) = -K( \bbar{\bm{x}}(k)-{\bm x}_\text{ref}(k)),
\end{equation}
where ${\bm x}_\text{ref}(k)$ is a given reference state trajectory. We use the same gain $K$ as for the stabilization problem, although we should note that it is no longer an optimal feedback gain for the reference tracking task \cite[pg. 247]{anderson1972linear}. In order to obtain smooth and approximately realizable reference trajectories ${\bm x}_\text{ref}(k)$, we choose low-pass filtered step changes for $\dot{\Phi}_\text{ref}(k)$ and $\gamma_\text{ref}(k)$, as illustrated in Figure~\ref{fig:localRun}, and determine ${\Phi}_\text{ref}(k)$ and $\dot{\gamma}_\text{ref}(k)$ by integration and differentiation, respectively. Finally, $\Theta_\text{ref}(k)$ and $\dot{\Theta}_\text{ref}(k)$ are chosen to be identically zero. Note that a non-zero pitch angle trajectory is required if the robot accelerates and decelerates.
However, for the slowly varying velocity reference signals considered in this contribution, zero is a close approximation of that trajectory. Results from an exemplary run of the experiment are depicted in Figure \ref{fig:localRun}. The sampling time of $35[\mathrm{ms}]$ could seem large. However, it is required by the employed platform, which does not exhibit state-of-the-art performance. Moreover, future work will study groups of TWIPRs communicating over the channel with orthogonal channel access methods, e.g. TDMA (Time Division Multiple Access), for which this large sampling time is required. \subsection{Dynamical Model} Fig. \ref{fig:robo} sketches the robot from the side and the top views and illustrates the sign convention; the depicted kinematic variables are given in Table \ref{tab:vars}. Throughout this section, the focus is not on the derivation of a dynamic model for the system, since this has been exhaustively addressed elsewhere, e.g. the nonlinear dynamic model of a TWIPR can be found in \cite[pg. 33]{kim2005dynamic}% % % \makeatletter \ifdefined\r@appendA \ and in Appendix \ref{appendA} of the extended version of this paper% \else \fi. % % % With the model of the TWIPR at hand \cite[pg. 33]{kim2005dynamic}, let \begin{align} {\bm x}(t) = [ \Phi(t),\Theta(t),\dot{\Phi}(t),\dot{\Theta}(t),\gamma(t),\dot{\gamma}(t)]', \end{align} be the state vector and ${\bm u}(t)=[u_l(t),u_r(t)]'$ the input, where $u_{l}(t)$ ($u_r(t)$) is the voltage applied to the left (right) DC-motor at time $t$. The nonlinear continuous-time model is \begin{equation} \label{eq:nonlinDynFunc_sec} \forall t \in \mathbb{R}_{\geq 0}, \text{ } \dot{{\bm x}}(t) = f({\bm x}(t), {\bm u}(t)), \end{equation} and the linearized continuous-time dynamics (around the origin) is \begin{equation} \label{eq:contlinedynamics_sec} \dot{\bm x}(t) = A {\bm x}(t) + B{\bm u}(t). \end{equation} Matrices $A\in\mathbb{R}^{6\times6}$ and $B\in\mathbb{R}^{6\times2}$ are given in \cite[pg. 
38]{kim2005dynamic}% % % \makeatletter \ifdefined\r@appendA \ and in Appendix \ref{appendA} of the extended version of this paper% \else \fi. % % % Since sensors and controller are implemented based on a digital scheme, the system is discretized using the Forward Euler Method with a discretization step $T_s \in \mathbb{R}_{>0}$, which yields the discrete-time linear system, $ \forall k\in\mathbb{N}_0$, \begin{equation} \label{eq:disclinedynamics_aug_sec} {\bm x}_d(k+1) = A_d {\bm x}_d(k) + B_d{\bm u}_d(k), \end{equation} where $A_d \in \mathbb{R}^{6\times6}$, $B_d \in \mathbb{R}^{6\times2}$ and \begin{equation*} {\bm x}_d(k) = {\bm x}(kT_s), \quad {\bm u}_d(k) = {\bm u}(kT_s), \quad \forall k \in \mathbb{N}_0. \end{equation*} Finally, for the specific hardware under consideration, a backlash occurs between each motor shaft and the respective wheel (see \cite{nordin}), which is not captured by the model. We define a nonlinear function $\bm{f}_{\mathrm{bl}}:\mathbb{R}^2\mapsto\mathbb{R}^2$ to model the impact of the backlash on the input to the motors. Then, (\ref{eq:disclinedynamics_aug_sec}) is rewritten as \begin{equation} \label{eq:disclinedynamics_aug_sec_bl} {\bm x}_d(k+1) = A_d {\bm x}_d(k) + B_d\bm{f}_{\mathrm{bl}}({\bm u}_d(k)). \end{equation} Further details are in \cite[pg. 55]{nordin}. \section{Performance Comparison} \label{sec:performance} We compare the performance of the local and the networked controller for the reference tracking experiment. The local controller has a negligible delay, whereas the networked controller has an actuation delay of $35[\mathrm{ms}]$ (up to $105[\mathrm{ms}]$ in case of packet loss). 
Performance degradation due to these delays is evaluated by three \textit{root mean squared error} (RMSE) indices defined by
{
\begin{align*}
&\mathrm{RMSE}_\Phi = \sqrt{\frac{1}{k_{\mathrm{end}}-k_0}\sum_{k=k_0}^{k_{\mathrm{end}}} (\bbar{\Phi}(k) - \Phi_\text{ref}(k) )^2},\\
&\mathrm{RMSE}_\Theta =\sqrt{ \frac{1}{k_{\mathrm{end}}-k_0}\sum_{k=k_0}^{k_{\mathrm{end}}} (\bbar{\Theta}(k))^2},\\
&\mathrm{RMSE}_\gamma = \sqrt{\frac{1}{k_{\mathrm{end}}-k_0}\sum_{k=k_0}^{k_{\mathrm{end}}} (\bbar{\gamma}(k) - \gamma_\text{ref}(k) )^2}
\end{align*}
}%
where $k_0,k_{\mathrm{end}} \in \mathbb{N}$. These RMSE indices are averaged over a set of ten trials. The results are given in Table \ref{tab:rmse_values}. While the differences in the $\Phi$- and $\gamma$-movement errors are marginal, RMSE$_\Theta$ is almost twice as large for the networked controller as for the local controller, although still small in magnitude.
\begin{table}[h]
\centering
\begin{tabular}{l | c | c }
& Local & NCS\\
\hline
$\mathrm{RMSE}_\Phi$ & 2.4141 [rad] & 2.4512 [rad] \\
$\mathrm{RMSE}_\Theta$ & 0.0116 [rad] & 0.0192 [rad]\\
$\mathrm{RMSE}_\gamma$ & 0.0849 [rad] & 0.0888 [rad]\\
\end{tabular}
\caption{RMSE for the states $\Phi$, $\Theta$ and $\gamma$; values for both the local controller and the NCS. }
\label{tab:rmse_values}
\end{table}
\section{Problem Description}
\label{sec:prob_desc}
\input{model}
\subsection{Sensors and Measurements}
\label{sec:sensors}
The robot is equipped with a single-axis digital gyroscope (mounted on the body) measuring the body pitch rate. Two digital encoders are splined on the two motor shafts and measure the left and the right motor angles. Let, $\forall k\in\mathbb{N}_0,\ \dot{\Theta}^{(m)}(k)$ be the measurement coming from the gyroscope, and let $b(k)$ be the measurement bias of that gyroscope, which is determined from quasi-static measurements using standard bias estimation approaches, see \cite{tin2011review}.
Finally, let $k\in\mathbb{N}_0, \Phi_{ml}^{(m)}(k),\Phi_{mr}^{(m)}(k)$ be the left and right encoder measurements, respectively. The states of system (\ref{eq:disclinedynamics_aug_sec}) can be estimated as follows:
\begin{align}
\forall k\in\mathbb{N}_0,\quad &{\dot{\Theta}}(k) = \dot{\Theta}^{(m)}(k) - b(k),\label{eq:gyroRate}\\
&{\Theta}(k) ={\Theta}(k-1) + T_s{\dot{\Theta}}(k),\label{eq:est_pitch_angle}\\
&{\Phi}(k) = \frac{\Phi_{ml}^{(m)}(k)+\Phi_{mr}^{(m)}(k)}{2}+{\Theta}(k),\\
&{\dot{\Phi}}(k) = \frac{{\Phi}(k) - {\Phi}(k-1)}{T_s},\\
&{\gamma}(k) = \frac{r}{W}(\Phi_{mr}^{(m)}(k)-\Phi_{ml}^{(m)}(k)),\\
&{\dot{\gamma}}(k) = \frac{{\gamma}(k) - {\gamma}(k-1)}{T_s}\label{eq:yawRate},
\end{align}
where ${\Theta}(0) = \Theta_0 \in \mathbb{R}$, ${\Phi}(0)= 0$, and ${\gamma}(0)= 0$. In the following, in order to avoid confusion between measured states and variables, the measured state vector at time $k\in\mathbb{N}_0$ will be denoted by $\bbar{\bm x}(k)$.
\section{Wireless Controller}
\label{sec:wireless_ctrl}
\subsection{Networked Control System Architecture}
\label{sec:ncs_arch}
An NCS is mainly composed of three parts, as illustrated in {Figure \ref{fig:ncs}}: (i) the plant, in this case the TWIPR, equipped with sensors and actuators. It typically has limited computational power, which is often not enough for hosting the controller on-board; (ii) the communication network, in this case a wireless interface employing the {W-LAN protocol}; (iii) the controller, in this case executed on a computer with higher computational power.
Data transmission uses W-LAN according to IEEE\,802.11g (WiFi) with a transfer rate of 54\,Mbit/s in infrastructure mode. Both robot and controller are directly connected to the wireless access point, the controller via the Ethernet adapter Intel I219-LM, the robot via the wireless dongle Edimax EW-7811Un.
The WiFi operates indoors in the 2.4\,GHz ISM band, which is shared between multiple wireless technologies such as other WiFi standards and Bluetooth, and employs UDP as a transport protocol. A realistic office environment with interference is used, where several neighboring WiFi networks coexist. In this environment, we observed random packet losses due to ``short-time'' link failures, which could not be compensated on the communication layer. \cite{ipsnpaper} contains an exhaustive statistical analysis of communication delays for this experimental setup. At this point, two considerations are derived: (i) experimentally, no packets sent by the robot are lost; this means that the random phenomenon of packet loss affects only packets sent by the controller (this asymmetry can be seen in Fig. \ref{fig:ncs}); (ii) all transmitted packets eventually reaching the robot with delays above a given threshold are considered lost.
\begin{figure}[h]
\input{timeline.tikz}
\caption{Timeline of one control cycle of the NCS. All involved delays and times are analyzed in \cite{ipsnpaper}.}
\label{fig:timeline}
\end{figure}
\subsection{Timing Scheme}
\label{seq:timing}
Figure \ref{fig:timeline} shows a timeline of the $k$-th control cycle. The following times and delays between these steps will be of major relevance in the upcoming sections:
\begin{itemize}
\item $t_m^k$: the robot reads from the sensors. At this moment, the control cycle starts;
\item $t_{rh}^k$: the controller receives the transmitted packet;
\item $t_{rr}^k$: the robot receives the control input;
\item $d_{c3}^{k}$: time between data reception and actuation;
\item $t_{a}^k$: instant when the robot applies the desired input to the motors; $t_a^k-t_m^k$ is the \textbf{actuation delay}.
\end{itemize}
From a control perspective, besides dropped packets from controller to robot, the \textit{actuation delay} must also be taken into account, i.e.
the control input applied at $t_a^k$ is computed based on data measured at $t_m^k$. This actuation delay is time-variant, since, in general, $t_a^{k_1}-t_m^{k_1}\not=t_a^{k_2}-t_m^{k_2}$, $k_1,k_2\in\mathbb{N}_0$.
\subsection{Wireless Controller Design}
\label{sec:wirConDes}
Control packets reach the robot before an arbitrary timeout $\tau_o\in\mathbb{R}$ if
\begin{equation}
t_{rr}^{k} - t_{m}^{k} < \tau_o < T_s.
\label{eq:delayLeq}
\end{equation}
In the following, if (\ref{eq:delayLeq}) does not hold, the packet is declared lost. Define a boolean variable that indicates packet loss:
\begin{equation}
\forall k \in \mathbb{N}_0,\ \epsilon(k) =
\begin{cases}
1 &\text{if }t_{rr}^{k} - t_{m}^{k} \geq \tau_o\\
0 &\text{ otherwise}
\end{cases}
.
\end{equation}
Thus, packets are lost either because they are never delivered or because they would arrive too late at the receiver. To avoid non-deterministic uncertainties resulting from the time-variance of the actuation delay, we deliberately dilate $d_{c3}^k$ (by inserting a waiting time before the processed signal is applied to the motors) such that
\begin{equation}
\label{eq:strategyCombatTVD}
\forall k \in \mathbb{N}_0,\ d_{c3}^{k} = T_s - (t_{rr}^k - t_{m}^{k}).
\end{equation}
For any $k\in\mathbb{N}_0$ where (\ref{eq:delayLeq}) holds, by (\ref{eq:strategyCombatTVD}), $d_{c3}^{k}>0$. This strategy leads to a larger but constant actuation delay. We compensate this delay by model-based prediction. Define the following inputs, $\forall k\in\mathbb{N}_0$:
\begin{align*}
&{{\bm u}}_{id}(k) := -K{\bm x}(t_{a}^{k}),\qquad &\hat{{\bm u}}(k) := -K \hat{{\bm x}}(t_{a}^{k}),
\end{align*}
where ${{\bm u}}_{id}(k)$ is the ideal control input based on the state at the time of actuation and $\hat{{\bm u}}(k)$ is a control input computed based on a state $\hat{{\bm x}}(t_{a}^{k})$ that is predicted from the last available measurement at $t_m^k$, i.e. $\bbar{\bm{x}}(k)$.
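The resulting scheme — measure at $t_m^k$, predict the state one sampling period ahead, and evaluate the feedback law on the prediction — can be sketched as follows. The snippet is pure Python with an illustrative scalar plant and gain standing in for the TWIPR dynamics and $K$ (these are placeholders, not the model of Section~\ref{sec:prob_desc}):

```python
def predict_state(x_meas, u_prev, f, Ts, substeps=100):
    """Forward-simulate the state over one sampling period Ts under the
    held input u_prev, using repeated forward-Euler substeps."""
    h = Ts / substeps
    x = x_meas
    for _ in range(substeps):
        x = x + h * f(x, u_prev)
    return x

def delayed_control(x_meas, u_prev, k_gain, f, Ts):
    """Control input for the actuation instant t_a = t_m + Ts, computed
    from the measurement taken at t_m (constant one-step delay)."""
    x_hat = predict_state(x_meas, u_prev, f, Ts)
    return -k_gain * x_hat
```

For instance, with the trivial dynamics $f \equiv 0$ the prediction reduces to the measurement itself and the law degenerates to ${\bm u}(k) = -K\bbar{\bm x}(k)$.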
The state is predicted by integrating the nonlinear dynamics (\ref{eq:nonlinDynFunc_sec}): \begin{equation} \label{eq:nonlinsim} \hat{{\bm x}}(t_{a}^{k}) = \bbar{\bm x}({k})+ \int\limits_{t_{m}^{k}}^{t_{m}^{k}+T_s} f({\bm x}(t), {\bm u}(t_{m}^{k})) \mathrm{dt}. \end{equation} In case (\ref{eq:delayLeq}) does not hold, we use model-based prediction to compensate packet loss. The controller does not know \textit{a-priori} whether a packet will be lost. Therefore, it calculates and sends a list of $M+1$ control inputs, which are computed based on model-based state predictions of the next $M+1$ steps, where $M$ is chosen larger than the expected maximum number of packet losses. Formally, $\forall k \in \mathbb{N}_0$, the controller computes and sends the control input matrix \begin{equation} \label{eq:controlMat} \hat{U}(k) := \left[ \hat{{\bm u}}(k), {\hat{{\bm u}}}(k+1), \dots, {\hat{{\bm u}}}(k+M)\right] \in \mathbb{R}^{2\times M+1}, \end{equation} where, $\forall k \in \mathbb{N}_0,\ \forall i \in \{1,\dots,M\}$, \begin{equation*} {\hat{{\bm u}}}(k+i) = -K{\hat{{\bm x}}}(t_a^{k+i}), \end{equation*} with, $\forall k \in \mathbb{N}_0,\ \forall i \in \{0,\dots,M-1\}$, \begin{equation} \label{eq:predictionMat} {\hat{{\bm x}}}(t_a^{k+i+1}) = {\hat{{\bm x}}}(t_a^{k+i}) + \int\limits_{t_{m}^{k+i}}^{t_{m}^{k+i}+T_s} f({\bm x}(t), \hat{\bm u}({k+i})) \mathrm{dt}. \end{equation} If the computation time at the remote controller is such that (\ref{eq:delayLeq}) is in general violated, predictions (\ref{eq:nonlinsim}) and (\ref{eq:predictionMat}) can be computed using the linearized discrete-time system (\ref{eq:disclinedynamics_aug_sec_bl}), which is computationally faster but less accurate. 
The control matrix sent to the robot would, then, be \begin{equation*} {\hat{U}}_l(k) = \left[ {\hat{{\bm u}}}_l(k), {\hat{{\bm u}}}_l(k+1), \dots, {\hat{{\bm u}}}_l(k+M)\right] \in \mathbb{R}^{2\times M+1}, \end{equation*} where, $\forall k \in \mathbb{N}_0,\ \forall i \in \{0,\dots,M\},$ \begin{equation} \hat{{{\bm u}}}_l(k+i) = -K{\hat{{\bm x}}}_l(k+i), \end{equation} with \begin{multline} {\hat{{\bm x}}}_l(k+i) = A_d{\hat{{\bm x}}}_l(k+i-1)+ B_d\bm{f}_{\mathrm{bl}} \left({\hat{{\bm u}}}_l(k+i-1)\right) \end{multline} and ${\hat{{\bm x}}}_l(k-1) = \bbar{\bm x}({k})$, ${\hat{{\bm u}}}_l(k-1) = {\bm u}_d({k})$. Then, the most recent control matrix is \begin{equation} U^*(k) := \begin{cases} {\hat{U}}_l(k) & \text{ if } \epsilon(k) = 0\\ U^*(k-1) & \text{ otherwise} \end{cases}, \end{equation} and the TWIPR applies the control input \begin{equation} \forall k \in \mathbb{N}_0,\;{\bm u}(k) = \left[U^*(k)\right]_{\omega(k)}, \end{equation} with \begin{equation} \omega(k) := 1 + \min_{\substack{l \in \left\{0,\dots,M\right\}:\\ \epsilon(k-l) = 0}} l \end{equation} being the number of sampling periods that have passed since the last control matrix was received. The described Networked Control System is implemented with predictions based on the linearized model (\ref{eq:disclinedynamics_aug_sec_bl}). \subsection{Experimental Evaluation of the Networked Controller} \begin{figure} \includegraphics[width = \columnwidth,trim={1.5cm 0 .7cm 0}]{packet_loss_new.eps} \caption{ Each red vertical line represents an occurrence of packet loss (the number indicates the amount of sequential packet loss). Although at $t=21[s]$ three control signal packets are lost, the robot does not lose stability. } \label{fig:packet_loss} \end{figure} First, we analyze stability of the proposed NCS and robustness with respect to packet losses. Results are given in Figure \ref{fig:packet_loss}. 
Like the local controller, the networked controller ensures that the states remain stable, though with clearly larger deviations from zero, at least for the wheel angle. The proposed networked control strategy is found to be robust against up to 3 sequential packet losses, i.e.\ the controller can maintain stability for at least $140[\mathrm{ms}]$ without measurement updates. The reference tracking experiment described in Section~\ref{seq:localExperiment} is repeated with the proposed networked controller. Results of one trial are given in Figures~\ref{fig:wifiRun} and~\ref{fig:input_delay}. Despite two consecutive double packet losses at $t\approx 3.5[\mathrm{s}]$, measurement noise, and an actuation delay of $35[\mathrm{ms}]$, the robot does not lose stability. \begin{figure*} \includegraphics[width = \textwidth, trim={2.2cm 0 2.5cm 0}]{wifi_plot.eps} \caption{Reference tracking experiment using the NCS with a sampling time of $35$[ms]. Raw measurements are plotted in yellow, filtered post-processed values in blue, and references in red. The networked control strategy tracks the given references while guaranteeing stability, despite the presence of network delays, packet loss, and measurement noise. } \label{fig:wifiRun} \end{figure*} \begin{figure} \includegraphics[width = \columnwidth, trim={0.2cm 0 .5cm 0}]{input_delay.eps} \caption{ Actuation delay, i.e. $\omega(k)T_s$, of every control input. } \label{fig:input_delay} \end{figure}
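For illustration only (this sketch is not part of the implementation described above), the receiver-side logic that buffers the most recent control matrix $U^*(k)$ and selects column $\omega(k)$ can be written out as follows. The control-input values are placeholders (packet $k$ simply carries $[k, k+1, \dots, k+M]$), and the sketch assumes the first packet arrives.

```python
M = 3  # number of extra predicted inputs per packet (buffer length M+1)

def simulate(losses):
    """losses[k] = 1 if the packet of cycle k is lost (epsilon(k) in the text).

    Returns the list of applied control inputs; packet k carries the
    placeholder control matrix [k, k+1, ..., k+M], i.e. column i is the
    input predicted for step k+i.
    """
    applied = []
    buffer_, age = None, 0  # most recent control matrix and its age in cycles
    for k, lost in enumerate(losses):
        if not lost:
            buffer_, age = [k + i for i in range(M + 1)], 0  # fresh U(k)
        else:
            age += 1  # keep reusing U*(k-1), one column further along
        # omega(k) = 1 + age (1-based column index) -> 0-based index `age`;
        # clamp at M if more than M consecutive packets are lost.
        applied.append(buffer_[min(age, M)])
    return applied

print(simulate([0, 0, 1, 1, 0]))  # -> [0, 1, 2, 3, 4]
```

With two consecutive losses at cycles 2 and 3, the robot still applies the inputs predicted for those cycles from the matrix received at cycle 1.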
New Yankee 6: In Pharaoh´s Court (Steam key) -- RU Content: text (16 symbols) Seller: AMEDIA© After the purchase, you will receive a KEY to activate the specified software product. What could go wrong if you plan to spend a couple of weeks of vacation in Egypt? For John and Mary, just about everything! First they missed the plane, then decided to fly to the land of the pyramids on dragons, and as a result ended up, of course, in Egypt, but in such a deep past that even the Sphinx had not yet been built. On top of that, they ran into the pharaoh, whose throne had been seized by an impostor. Now the sweet couple not only has to find its way home, but also to punish the villains and restore peace and prosperity on the banks of the Nile. To do this, Mary casts spells, John fights, and workers build and dismantle obstacles. The sixth part of the strategy series about the adventures of the spouses Mary and John turned out marvelously well: brighter and sunnier, with new levels and secondary characters and, of course, humorous dialogues. The gameplay has changed as well. New obstacles were added: sacred hungry crocodiles and yawning awakened mummies who dream of getting into the nearest building for a good night's sleep. The former must be fed, and the latter driven off with the help of the guards. In addition, it is now possible to use dragons and gremlins to the fullest. Dragons can stop such cataclysms as drought and storm, while gremlins in their golden armor make short work of hefty stone blocks and chests.
Q: How do I add items from the collection editor to a UserControl? I am trying to add a set of buttons to the UserControl public class SliderButton : UserControl { private List<ButtonMdr> list = new List<ButtonMdr>(); [DesignerSerializationVisibility(DesignerSerializationVisibility.Content)] public List<ButtonMdr> Items { get { return this.list; } set { this.Controls.AddRange(value.ToArray()); } } } // list.count > 0 But when I press Add, nothing appears in the UserControl.
# DFT basics

An article dedicated to learning and developing some basic approaches related to the Discrete Fourier Transform.

First of all, I would encourage you to read a great article there. A greatly simplified approach and useful insights were provided by that link.

Let's go through some key points to gain a better understanding and develop some useful skills.

Correlation is the first key concept here; it can be described as the sum of the product of two signals:

$$\sum_{i=0}^{N} x(i) y(i)$$

We are talking here about the simplest correlation form, without any cross-correlation equations involved.

The correlation can be positive, or negative if the signal is the same but simply out of phase; if the signals are completely different, the correlation will have extremely small values.

So the key point of the whole discrete transformation is just a simple sweep – you take the point at a certain frequency (which you know can be represented by the Euler equation), multiply it by your signal, check the correlation, then continue up to the end.

The easiest way to learn is to look at some simple examples. Let's say we have a sine and a cosine at a frequency equal to 7 Hz.

And next there is a simple Octave piece of code:

```matlab
clf
clear
# input data variables
step = 1/256;
tstop = 2;
f1 = 7;

t = 0:step:tstop-step;

for i = 1:length(t)
  rand_coeff1(i) = (rand() - 0.5)/3;
  rand_coeff2(i) = (rand() - 0.5)/3;
end
y = sin(2*pi*f1*t);

ytry = e.^(j*2*pi*7*t);
y_product1 = y.*ytry;

plot(t,y)
hold on
grid on

correlation1 = sum(imag(y_product1));
correlation2 = sum(real(y_product1));

y = sin(2*pi*f1*t + pi/2);
plot(t,y)
ytry = e.^(j*2*pi*7*t);
y_product1 = y.*ytry;

correlation3 = sum(imag(y_product1));
correlation4 = sum(real(y_product1));
```

Here, in that code, one important thing should pop up – we should not forget to pull the phase information from these products.

It can clearly be seen that correlation1 indicates that the probed signal is the sine, while correlation4 indicates the cosine.

OK, so we can now distinguish a sine from a cosine. Let's make our example a little more complex – now we have two signals, at 7 and at 8 Hz:

```matlab
clf
clear
# input data variables
step = 1/256;
tstop = 2;
f1 = 7;

t = 0:step:tstop-step;

N = 16;

y = sin(2*pi*f1*t) + sin(2*pi*(f1+1)*t);

#ytry = e.^(j*2*pi*1*t);
#y_product1 = y.*ytry;

#plot(t,y)
#hold on
#grid on

for i = 1:1:N
  ytry = e.^(j*2*pi*i*t);
  y_product = y.*ytry;

  correlation_real(i) = sum(real(y_product));
  correlation_imag(i) = sum(imag(y_product));
  correlation_abs(i) = correlation_real(i) + correlation_imag(i);
  plot(i,correlation_abs(i),'marker','o');
  hold on;
  grid on;
endfor
```

Checking the result: two peaks are clearly seen at 7 and 8. Looking at that picture, I had a couple of questions – Why are these numbers so big? How do I define a proper N? What are the limits? How do I make a plot with bars instead of markers 🙂

Here is a more realistic function; this time let's do it with concrete numbers, so we will have some kind of an anchor:

$$y = 2 + 5\cos(2\pi 2 t) + 3\sin(2\pi 3 t) + 2\cos(2\pi 4 t)$$

The sampled curve is a set of values sampled with Fs = 8 Hz; here I took 8 samples, which means the resolution will be equal to 1 Hz.

To answer the question of why the numbers are so big: they simply scale with the number of points we took, so the first attempt is just to divide the correlation sum by N. Let's see what happens:

```matlab
clf
clear
# input data variables
step = 1/1024;
tstop = 1;
t = 0:step:tstop;
y = 2 + 5*cos(2*pi*t*2) + 3*sin(2*pi*t*3) + 2*cos(2*pi*t*4);
#plot(t,y,'b', 'linewidth',4);
grid on;
hold on;
N = 8;
Fs = 8;
ts = 1/Fs;
f_spectrum = 0:Fs/N:(Fs*(N-1))/N;

tsample = 0:ts:(N-1)*ts;
y_sampled = 2 + 5*cos(2*pi*tsample*2) + 3*sin(2*pi*tsample*3) + 2*cos(2*pi*tsample*4);
#plot(tsample,y_sampled,'r','marker','o','linewidth',1,'markersize',20);

h = legend('original ','sampled ');
set(h,'fontsize',14,'fontname','FreeSans','fontweight','normal');
set(gca, 'linewidth', 3, 'fontsize', 18, 'fontname', 'FreeSans') # modify line width and font size of x and y axes
#xlabel('t, s');
#ylabel('Amplitude, V');

for i = 0:1:(N-1)
  ytry = e.^(-j*2*pi*i*Fs/N*tsample);
  y_product = (y_sampled.*ytry)/N;

  correlation_real(i+1) = sum(real(y_product));
  correlation_imag(i+1) = sum(imag(y_product));
  correlation_abs(i+1) = sqrt(correlation_real(i+1)^2 + correlation_imag(i+1)^2);
endfor

xlabel('F, Hz');
ylabel('Amplitude, V');

bar(f_spectrum,correlation_abs,0.1)
```

OK, a couple of my questions were addressed there, but if you pay enough attention, you can see that we have proper values for DC and for F = 4 Hz, while 2 and 3 Hz are half of what they should be. Also, we see another two harmonics after 4 Hz – this is because we entered the second Nyquist zone (>Fs/2) and got spectrum folding there (all entries there are just complex conjugates of entries from the other side). So, for instance, what we see at 6 Hz is just 8−2 Hz, and so on. This is also the reason why we see a /2 degradation inside the first Nyquist zone, while DC and Fs/2 actually match at the folding points. The conclusion is this: we need to multiply the points inside (DC, Fs/2) by 2 but not touch the edges. Also, we are not interested in the second half of the points when crafting the spectrum.

```matlab
clf
clear
# input data variables
step = 1/1024;
tstop = 1;
t = 0:step:tstop;
y = 2 + 5*cos(2*pi*t*2) + 3*sin(2*pi*t*3) + 2*cos(2*pi*t*4);
#plot(t,y,'b', 'linewidth',4);
grid on;
hold on;
N = 8;
Fs = 8;
ts = 1/Fs;
f_spectrum = 0:Fs/N:(Fs*(N-1))/N;

tsample = 0:ts:(N-1)*ts;
y_sampled = 2 + 5*cos(2*pi*tsample*2) + 3*sin(2*pi*tsample*3) + 2*cos(2*pi*tsample*4);
#plot(tsample,y_sampled,'r','marker','o','linewidth',1,'markersize',20);

h = legend('original ','sampled ');
set(h,'fontsize',14,'fontname','FreeSans','fontweight','normal');
set(gca, 'linewidth', 3, 'fontsize', 18, 'fontname', 'FreeSans') # modify line width and font size of x and y axes
#xlabel('t, s');
#ylabel('Amplitude, V');

for i = 0:1:(N-1)
  ytry = e.^(-j*2*pi*i*Fs/N*tsample);
  if ((i==0) || (i==N/2))
    y_product = (y_sampled.*ytry)/N;
  else
    y_product = (y_sampled.*ytry)/(0.5*N);
  endif
  correlation_real(i+1) = sum(real(y_product));
  correlation_imag(i+1) = sum(imag(y_product));
  correlation_abs(i+1) = sqrt(correlation_real(i+1)^2 + correlation_imag(i+1)^2);
endfor

subplot(3,1,1);
title("mag");
bar(f_spectrum,correlation_abs,0.1)
subplot(3,1,2);
title("real")
bar(f_spectrum,correlation_real,0.1)
subplot(3,1,3)
title("imag")
bar(f_spectrum,correlation_imag,0.1)
```

With that code the magnitude values match exactly across DC – Fs/2; I have also plotted the real and imag values to show that the phase information is available as well. It is also clearly seen that after Fs/2 we have the complex conjugate (take a look at the imag plot).

This is great, but one thing is still not quite aligned. Look at the basic DFT equation:

$$F[n] = \sum_{k=0}^{N-1} f[k]\, e^{-j 2\pi k n / N} \qquad (n = 0,\dots,N-1)$$

Octave or Matlab lets us calculate the correlation sum just by multiplying waveforms (or matrices), while not all other programming languages allow that. So, instead, we can "probe" each harmonic (n) by calculating the correlation sum of the original signal with kn/N.

Let's modify the code a little bit, to be compatible with the original DFT equation:

```matlab
clf;
clear;
grid on;
hold on;
N = 8;
Fs = 8;
ts = 1/Fs;
f_spectrum = 0:Fs/N:(Fs*(N-1))/N;

tsample = 0:ts:(N-1)*ts;
y_sampled = 2 + 5*cos(2*pi*tsample*2) + 3*sin(2*pi*tsample*3) + 2*cos(2*pi*tsample*4);
#plot(tsample,y_sampled,'r','marker','o','linewidth',1,'markersize',20);

h = legend('original ','sampled ');
set(h,'fontsize',14,'fontname','FreeSans','fontweight','normal');
set(gca, 'linewidth', 3, 'fontsize', 18, 'fontname', 'FreeSans') # modify line width and font size of x and y axes
#xlabel('t, s');
#ylabel('Amplitude, V');

for i = 0:1:(N-1)
  sum_harm = 0;
  for k = 0:1:(N-1)
    sum_harm = sum_harm + y_sampled(k+1)*e^(-j*2*pi*i*k/N);
  endfor
  f(i+1) = sqrt(real(sum_harm)^2 + imag(sum_harm)^2);
endfor

# normalization procedure
for i = 1:1:N
  if ((i==1) || (i==(N/2 + 1)))
    f_norm(i) = f(i)/N;
  else
    f_norm(i) = f(i)*2/N;
  endif
endfor

bar(f_spectrum,f_norm,0.1)
xlabel('f, Hz');
ylabel('Amplitude, V');
```

And the result is the same.

OK, so the DFT is more or less clear, I think. There are still a few notes to take into account:

- Aliasing – that one we have seen already – we get "spectral folding" after Fs/2. To deal with it, we can simply not use that half of the data.
- Spectral leakage – in the example I used, all harmonics were lying right on the spectral resolution grid (which is 1 Hz for the example), but what if some frequency has a fractional part?

So, let's assume we have a 3.3 Hz frequency with an amplitude of 3 and Fs equal to 8 Hz; I also made N equal to 64, so the resolution is 0.125 Hz and our harmonic does not fall on that grid.

It can be seen that the amplitude spreads around 3.3 Hz and across several neighboring bins.

To make this problem less harmful for the resulting spectrum, there is a thing called a window, which affects the spectrum by, basically, applying some extra transfer function to the sampled signal being processed.

Let's say we have 64 samples. One of the most often used windows is the Hanning one; it has the following function:

$$w(n) = 0.5\left(1 - \cos\left(2\pi \frac{n}{N}\right)\right)$$

A little bit of practice:

```matlab
N = 64;
Fs = 8;
ts = 1/Fs;
#f_spectrum = 0:Fs/N:(Fs*(N-1))/N;
f_spectrum = 0:Fs/N:Fs/2;

tsample = 0:ts:(N-1)*ts;
nsample = 0:1:N;
y_sampled = 2 + 3*cos(2*pi*tsample*3.3);

for i = 1:1:N
  ywindowed(i) = 2 + 0.5*(1-cos(2*pi*i/N))*(y_sampled(i)-2);
endfor

plot(tsample,y_sampled,'b','linewidth',2,'markersize',10);
hold on;
plot(tsample,ywindowed,'r','marker','o','linewidth',2,'markersize',10);
```

Here, I just added some post-processing to remove the DC component and then add it back, purely to get a nicer plot (this affects only the DC component).

Now, if we apply that Hann window and plot the spectrum, it is still far from an ideal spectrum, but it is obvious that the extra spectral components which were spread across the whole spectrum are now concentrated somewhere around 3.3 Hz, so the leakage of spectral content away from the correct location is much reduced.

Octave has built-in window functions, so instead of the code I used, we can use that built-in possibility:

```matlab
ywindowed = y_sampled.*hanning(N)';
```

OK, that one also looks more or less clean now, I think. What is left is the computing power used in the calculation of the DFT; a lot of multiplications are made (N*N) – the more resolution you want, the more multiplications you have. For instance, if someone wants a nice spectrum with 65536 points, it results in 4294967296 multiplications – quite a significant amount of computation. That is why the FFT was invented, which allows reusing already-calculated coefficients via the decimation-in-time or decimation-in-frequency algorithm; that is out of the scope of this topic and is, by itself, quite a big thing to dig into.

But I still want to show how to use the built-in fft function in Octave, rather than inventing our own DFT or FFT code:

```matlab
clf
clear
# input data variables
tstop = 1;
hold on;
N = 8;
Fs = 8;
ts = 1/Fs;
f_spectrum = 0:Fs/N:(Fs*(N-1))/N;

# define window here
#window = blackman(N);
window = 1;

tsample = 0:ts:(N-1)*ts;
y_sampled = 2 + 5*cos(2*pi*tsample*2) + 3*sin(2*pi*tsample*3) + 2*cos(2*pi*tsample*4);
yFFT = y_sampled.*window';

set(gca, 'linewidth', 3, 'fontsize', 18, 'fontname', 'FreeSans') # modify line width and font size of x and y axes
#plot(tsample,y_sampled,'b','linewidth',2,'markersize',20);
grid on;

spectrum_complex = fft(yFFT, N)/(0.5*N);
spectrum_complex(1) = spectrum_complex(1)/2;
spectrum_complex(N/2+1) = spectrum_complex(N/2+1)/2;

spectrum_mag = abs(spectrum_complex);
bar(f_spectrum(1:(N/2+1)), spectrum_mag(1:(N/2+1)), 0.2);
```

I am not quite sure if I did it right for the edge points, but in that example it worked out.
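As a cross-check of the normalization rule worked out above (interior bins scaled by 2/N; DC and Fs/2 by 1/N), the same 8-point example can be verified with NumPy. Python is used here only for the check; the signal and constants are taken from the Octave example.

```python
import numpy as np

Fs, N = 8, 8                      # sampling rate [Hz] and number of samples
t = np.arange(N) / Fs
# example signal from the text: DC = 2, 5*cos @ 2 Hz, 3*sin @ 3 Hz, 2*cos @ 4 Hz
y = 2 + 5*np.cos(2*np.pi*2*t) + 3*np.sin(2*np.pi*3*t) + 2*np.cos(2*np.pi*4*t)

X = np.fft.fft(y)
mag = np.abs(X) / (N / 2)         # interior bins: scale by 2/N
mag[0] /= 2                       # DC: scale by 1/N
mag[N // 2] /= 2                  # Fs/2 (Nyquist): scale by 1/N

print(np.round(mag[:N // 2 + 1], 6))  # -> [2. 0. 5. 3. 2.]
```

The recovered amplitudes match the coefficients of the test signal exactly, confirming the special-casing of the DC and Nyquist bins.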
Springfield is a city in Calhoun County in the U.S. state of Michigan. In 2010, Springfield had 5,260 inhabitants. Geography Springfield's coordinates are 42°19'25" north latitude and 85°13'51" west longitude. According to the 2010 United States Census, the city of Springfield covers an area of 9.58 square kilometers (3.7 sq mi). Population According to the 2010 United States Census, 5,260 people lived in Springfield, distributed across 2,156 households and 1,213 families. The population density was 554.9 inhabitants per square kilometer (1,437.2/sq mi). In 2010 the population consisted of 76.6% Whites, 9.6% African Americans, 7.5% Asians, 0.5% Native Americans, and 1.2% members of other ethnic groups; 4.7% were descended from two or more ethnicities. Of the 5,260 inhabitants, 26.5% were under 18 years old, and 7.8% of households included people aged 65 or older. The median age was 33.8 years, and 50% of the inhabitants were male. References External links Official website of Springfield
In a non-stick pan, add the butter and salt over medium heat. When the butter starts to melt, add the onions and celery. Cook and stir until the celery has softened. Add the garlic and cook 1 minute more. Add the seasonings and stir in the broth. Just when the mixture begins to boil, gently stir in the bread cubes. Turn off the heat and cover for at least 5 minutes. Enjoy.
# OS-V: 0510 Wrapped Thick Cylinder

Test No: R0031/2. OptiStruct examines the hoop stress in the inner and outer cylinder at different radii for linear static analysis.

## Benchmark Model

Ply laminates are created using Quad4 elements for a one-quarter model of the cylinder. For Case 1, an internal pressure of 200 MPa is applied; for Case 2, a temperature rise of 130°C is applied together with the internal pressure.

The material properties are:

Inner cylinder (isotropic):
- E = 2.1 × 10⁵ MPa
- ν = 0.3
- α = 2.0 × 10⁻⁵ °C⁻¹

Outer cylinder (circumferentially wound):
- E1 = 1.3 × 10⁵ MPa
- ν12 = 0.25
- E2 = 5.0 × 10³ MPa
- α1 = 3.0 × 10⁻⁶ °C⁻¹
- α2 = 2.0 × 10⁻⁵ °C⁻¹
- G12 = 1.0 × 10⁴ MPa
- G33 = 5.0 × 10³ MPa

## Linear Static Analysis Results

|                                          | Target (MPa) | OptiStruct Results (MPa) | Normalized with the Target Value |
|------------------------------------------|--------------|--------------------------|----------------------------------|
| **Case 1:**                              |              |                          |                                  |
| Hoop stress in inner cylinder at r = 23  | 1565.3       | 1659.4                   | 0.94329276                       |
| Hoop stress in inner cylinder at r = 25  | 1429.7       | 1659.4                   | 0.86157647                       |
| Hoop stress in outer cylinder at r = 25  | 874.7        | 792.6                    | 1.10358314                       |
| Hoop stress in outer cylinder at r = 27  | 759.1        | 792.6                    | 0.95773404                       |
| **Case 2:**                              |              |                          |                                  |
| Hoop stress in inner cylinder at r = 23  | 1381.0       | 1392.05                  | 0.99206207                       |
| Hoop stress in inner cylinder at r = 25  | 1259.6       | 1392.05                  | 0.90485256                       |
| Hoop stress in outer cylinder at r = 25  | 1056.0       | 1059.92                  | 0.99630161                       |
| Hoop stress in outer cylinder at r = 27  | 936.1        | 1059.92                  | 0.88317986                       |

## Model Files

The model files used in this problem include:
- compwtcq4c1.fem
- compwtcq4c2.fem

## Reference

NAFEMS R0031 - Composite Benchmarks, Hardy 2001
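The "Normalized with the Target Value" column appears to be simply the ratio of the target stress to the computed stress; a quick sanity check (values taken from the results table above):

```python
# Ratio of the NAFEMS target stress to the OptiStruct result, as tabulated above.
def normalized(target_mpa: float, result_mpa: float) -> float:
    return target_mpa / result_mpa

# Case 1, hoop stress in inner cylinder at r = 23:
print(round(normalized(1565.3, 1659.4), 8))  # -> 0.94329276
# Case 1, hoop stress in outer cylinder at r = 25:
print(round(normalized(874.7, 792.6), 8))    # -> 1.10358314
```

A value below 1 therefore means the solver overpredicts the stress relative to the benchmark target, and above 1 means it underpredicts it.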
# Stopping distance

The stopping distance is the distance a car (or other vehicle) travels before it comes to rest. It depends on the speed of the car and on the coefficient of friction (μ) between the wheels and the road. The term normally refers to a car applying the brakes after the driver sees a hazard ahead: the total stopping distance is the sum of the reaction distance (the distance travelled during the driver's reaction time, also called the thinking distance) and the braking distance (the distance travelled from the point when the brakes are fully applied to the point when the vehicle comes to a complete stop). The braking distance is primarily affected by the original speed of the vehicle and the coefficient of friction between the tires and the road surface.

Assuming proper operation of the brakes, the minimum stopping distance for an automobile is determined by the effective coefficient of friction between the tires and the road. Note that this implies a stopping distance independent of vehicle mass, and a quadrupling of the stopping distance when the speed doubles. That is why we have speed limits: the faster a car travels, the further it moves before it can stop. Railroad trains, by contrast, have a much greater mass and thus a much longer stopping distance than road vehicles.

A related quantity, stopping sight distance, is the distance traveled during the two phases of stopping a vehicle: perception-reaction time (PRT) and maneuver time (MT). Perception-reaction time is the time it takes a driver to react to a situation.

In a non-metric country, the stopping distance in feet given a velocity in mph can be approximated as follows: take the first digit of the velocity and square it.

As an exercise: assume a body travelling in space at $1000~\text{m/s}$ with a maximum deceleration of $10~\text{m/s}^2$; the minimum stopping distance can be calculated from these two values alone.

Stopping distances are a favourite part of the theory test, but they are not easy to remember. Some people suggest the stopping distances in the Highway Code are out of date, because modern cars stop in considerably shorter distances. A typical default set of conditions for published figures is a starting velocity of 100 km/h on a dry, clean, flat, straight and sealed roadway. Braking technology keeps improving as well: emergency brake assist (EBA) will typically reduce the stopping distance by about 2 m (from 11 m to 9 m) at 30 mph, and by about 5 m (from 30 m to 25 m) at higher speeds.
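The "stopping distance = reaction distance + braking distance" relation above can be computed directly. The reaction time and deceleration used below are illustrative values, not standardized figures:

```python
def stopping_distance(v_mps: float, reaction_time_s: float, decel_mps2: float) -> float:
    """Total stopping distance [m]: distance covered during the driver's
    reaction time plus the braking distance v^2 / (2*a)."""
    reaction = v_mps * reaction_time_s
    braking = v_mps**2 / (2 * decel_mps2)
    return reaction + braking

# 20 m/s (72 km/h), 1.0 s reaction time, 8 m/s^2 deceleration:
print(stopping_distance(20.0, 1.0, 8.0))     # -> 45.0

# The space exercise above: 1000 m/s, max deceleration 10 m/s^2,
# no reaction time, so braking distance only:
print(stopping_distance(1000.0, 0.0, 10.0))  # -> 50000.0
```

The quadratic braking term also makes the "quadruples when speed doubles" rule visible: doubling `v_mps` multiplies the braking distance by four.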
null
null
Leipsic é uma vila localizada no estado americano de Delaware, no condado de Kent. Geografia De acordo com o United States Census Bureau, a vila tem uma área de 0,8 km², onde 0,7 km² estão cobertos por terra e 0,1 km² por água. Localidades na vizinhança O diagrama seguinte representa as localidades num raio de 16 km ao redor de Leipsic. Demografia Segundo o censo nacional de 2010, a sua população é de 183 habitantes e sua densidade populacional é de 243,6 hab/km². Possui 93 residências, que resulta em uma densidade de 123,8 residências/km². Ligações externas Vilas do Delaware Localidades do condado de Kent (Delaware)
{ "redpajama_set_name": "RedPajamaWikipedia" }
7,610
{"url":"https:\/\/math.stackexchange.com\/questions\/3257689\/why-does-this-appear-to-produce-oeis-sequence-a263484","text":"# Why does this appear to produce OEIS sequence A263484?\n\nA263484 is \"Triangle read by rows: $$T(n,k)$$ ($$n\\geq 1$$, $$0 \\leq k < n$$) is the number of permutations of $$n$$ with $$n! - k$$ permutations in its connectivity set.\", and the sequence is:\n\n1,\n\n1,1,\n\n1,2,3,\n\n1,3,7,13,\n\n1,4,12,32,71,\n\n1,5,18,58,177,461,\n\n...\n\nI have looked at various papers on connected and irreducible permutations, but I don't fully understand them. I also don't see anything in them that \"looks\" like what I am doing, but that is likely due to my lack of understanding. So here is what I am doing:\n\nLet $$P$$ be the set of permutations of $$n$$, with $$n \\geq 2$$, and let $$p$$ be any permutation in $$P$$. Let $$Q$$ be the set of permutations of the first $$n-1$$ symbols of $$p$$. We will be counting permutations, so let $$C$$ be an array of integers with $$|C| = n-1$$, used to record counts, and initialize it to all zeros. For each permutation $$q$$ in $$Q$$, append $$q$$ to $$p$$ to obtain $$s$$=the concatenation $$p+q$$. $$s$$ will always be of length $$2n-1$$. Count the number of substrings of contiguous symbols of length $$n$$ in $$s$$ that are permutations in $$P$$, and call this count $$x$$. $$x$$ will range from 2 to $$n$$, depending on $$s$$. Record this count in array $$C$$ by incrementing array element $$x-2$$ by one. After all $$(n-1)!$$ $$s$$'s have been checked, let row $$n-1$$ in $$T(n,k)$$ equal the reversed array $$C$$.\n\nA small example:\n\nLet $$P$$ = {(1,2,3,4),...,(4,3,2,1)}, $$p$$ = (2,3,4,1), Q ={(2,3,4),(2,4,3)...,(4,3,2)}, and $$C$$=[0,0,0]. First, let $$s$$=(2,3,4,1,2,3,4) and we find (2,1,3,4), (3,4,1,2), (4,1,2,3), and (1,2,3,4) are in $$P$$, so $$x$$=4. Increment element 4-2=2 in $$C$$ by one, so now $$C$$ = [0,0,1]. 
Now let $$s$$=(2,3,4,1,2,4,3) and we find (2,3,4,1), (3,4,1,2), and (1,2,4,3) are in $$P$$, but (4,1,2,4) is not, so $$x$$=3. Increment element 3-2=1 in $$C$$ by one, so now $$C$$=[0,1,1]. After all 3!=6 strings have been checked, C=[3,2,1] Now let row three of $$T(n,k)$$ equal the reverse of $$C$$ = [1,2,3].\n\nThe triangle I get from this is:\n\n1,\n\n1, 1,\n\n1, 2, 3,\n\n1, 3, 7, 13,\n\n1, 4, 12, 32, 71,\n\n1, 5, 18, 58, 177, 461,\n\n1, 6, 25, 92, 327, 1142, 3447,\n\n1, 7, 33, 135, 531, 2109, 8411, 29093,\n\n1, 8, 42, 188, 800, 3440, 15366, 69692, 273343,\n\n...\n\nwhich appears to be A263484.\n\nBut why?\n\nEDIT:\n\nAfter reading darij grinberg's comment, I have corrected the definition of Q. Q was originally defined in this question as \"Let Q be the set of permutations of (n\u22121).\", which was not correct. I also changed $$p$$ and $$Q$$ the example, so it was clear I was not using the permutations of n-1.\n\n\u2022 Just to make this question more self-contained, here's the definition of the connectivity set of a permutation, as given in arxiv.org\/abs\/math\/0507224 Let $S_n$ denote the symmetric group of permutations of $[n] = \\{1, 2, . . . , n\\}$, and let $w = a_1a_2 \\cdots a_n \\in S_n$. Now define the connectivity set $C(w)$ by $C(w)=\\{i : a_j <a_k {\\rm\\ for\\ all\\ }j\\le i<k\\}$. Jun 13 '19 at 2:03\n\u2022 Is your $s$ really the concatenation or is it a shifted conactenation? If $p = (3,2,1)$ and $q = (1,2)$, then the concatenation is $p + q = (3,2,1,1,2)$, which has only $1$ permutation in $P$ inside it. Thus, $x$ is not between $2$ and $n$. Jun 13 '19 at 9:18\n\u2022 @ darij grinberg My mistake. Q is the set of permutations of the first n-1 symbols, not the permutations of n-1. Thanks for catching that. 
Jun 15 '19 at 2:27\n\n1, 9, 52, 252, 1146, 5226, 24892, 125316, 642581, 2829325\n\nWhat you computed is, up to a relabelling of the numbers $$1$$ through $$n-1$$, the same as the connectivity set:\n\u2022 First observe that your definition depends on the permutation $$p \\in P$$. Without loss of generality, you may assume that $$p$$ is the identity permutation $$[1,\\dots,n]$$. This is because you can relabel the numbers $$i$$ through $$n$$ according to your given $$p$$. For example, consider your example above and interchange the numbers $$1$$ and $$2$$ (since $$p = [2,1,3,4]$$) and observe that a subword is a permutation if and only if it is after interchanging $$1$$ and $$2$$.\n\u2022 After this simplification, you observe that you exactly count prefixes of permutations $$q \\in Q$$ that correspond to sets of connectivity.","date":"2021-09-19 04:16:12","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 68, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8899520635604858, \"perplexity\": 245.85310172460336}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780056711.62\/warc\/CC-MAIN-20210919035453-20210919065453-00394.warc.gz\"}"}
null
null
\section{Introduction} \quad Let $(X,d_X)$ and $(Y,d_Y)$ be Polish spaces, $c:X\times Y\to \mathbb{R}$ be a cost function, $\rho_1\in\mathcal P(X)$ and $\rho_2\in \mathcal P(Y)$ be probability measures. We consider the \textit{Schr\"odinger Bridge problem} \begin{equation}\label{intro:mainKL} \OTep(\rho_1,\rho_2) = \inf_{\gamma\in \Pi(\rho_1,\rho_2)}\ep\operatorname{KL}(\gamma|\ka), \end{equation} where $\mathcal{K}$ is the so-called \textit{Gibbs Kernel} associated with the cost $c$: \begin{equation}\label{eqn:Gibbs} \mathcal{K} = k(x,y)\rho_1\otimes\rho_2 = e^{-\frac{c(x,y)}{\ep}}\rho_1\otimes\rho_2. \end{equation} The function $\operatorname{KL}(\gamma|\ka)$ in \eqref{intro:mainKL} is the Kullback-Leibler divergence between the probability measures $\gamma$ and $\ka \in \mathcal P(X\times Y)$, which is defined as \[ \displaystyle\operatorname{KL}(\gamma|\ka) = \displaystyle\begin{cases} \displaystyle\int_{X\times Y}\gamma\log\left(\dfrac{\gamma}{k}\right)d(\rho_1\otimes\rho_2) &\text{ if } \gamma \ll \rho_1\otimes\rho_2 \\ +\infty &\text{ otherwise }\\ \end{cases}. \] Here, by abuse of notation we are denoting by $\gamma$ the Radon-Nikodym derivative of $\gamma$ with respect to the product measure $\rho_1\otimes\rho_2$. Geometrically speaking, when we interpret the Kullback-Leibler divergence as a distance, the problem \eqref{intro:mainKL} defines the so-called \textit{Kullback-Leibler projection} of $\ka$ on the set $\Pi(\rho_1,\rho_2)$. In the past years, theoretical and numerical aspects of \eqref{intro:mainKL} have been the object of study in mathematical physics (e.g. \cite{CarSth,Car84,Car86, CruZam91, Fen, NelNM, Neldyn, NelQF, Schr31, Zam86, ZamVar86, Zam15}), probability (e.g. \cite{CatLeo94, GozLeo07, LeoGamma, LeoSurvey}), fluid mechanics (e.g. \cite{ArnCruLeoZam17, BenCarNen17}), metric geometry (e.g. \cite{GenLeoRipTam18,GigTam18}), optimal transport theory (e.g.
\cite{BenCarCutNenPey2015, CarDuvPeySch, CarLab18, ChenConGeoRip, CheGeoPav, FatGozPro19, GigTamBB18, Mik04}), data sciences (e.g. \cite{FeyFXVAmaPey, GenChiBacCutPey,GenCutPey, Lui18, Lui19, PavTabTriSch, RabinPeyDelBer2011}; see also the book \cite{CutPeyBook} and references therein). The existence of a minimizer in \eqref{intro:mainKL} was obtained at various levels of generality by I. Csisz\'ar, L. Ruschendorf, J. M. Borwein, A. S. Lewis and R. D. Nussbaum, C. L\'eonard, N. Gigli and L. Tamanini among others \cite{BorLewNus94, Csi75, GigTam18, RusIPFP}. In the most general case the kernel $\ka$ is not even assumed to be absolutely continuous with respect to $\rho_1 \otimes \rho_2$, as opposed to our assumption \eqref{eqn:Gibbs}. In particular, under the assumption \eqref{eqn:Gibbs} on $\ka$ (see for example \cite{LeoSurvey}), a unique minimizer for \eqref{intro:mainKL} exists and $\gamma^{\ep}_{opt}$ is the minimizer if and only if \begin{equation}\label{intro:SchSys} \gamma^{\ep}_{opt} = a^{\ep}(x)b^{\ep}(y)\ka, \text{ where } a^{\ep},b^{\ep} \text{ solve } \displaystyle\begin{cases} a^{\ep}(x)\int_{Y} b^{\ep}(y)k(x,y)d\rho_2(y) = 1 \\ b^{\ep}(y)\int_{X} a^{\ep}(x)k(x,y)d\rho_1(x) = 1\\ \end{cases}. \end{equation} The functions $a^{\ep}(x)$ and $b^{\ep}(y)$ are called \textit{Entropic potentials}. They are unique up to the trivial transformation $a \mapsto a/\lambda$, $b \mapsto \lambda b$ for some $\lambda >0$. The system solved by the Entropic potentials is called the \emph{Schr\"{o}dinger system}. Assuming $\rho_1$ and $\rho_2$ are everywhere positive and have finite entropy with respect to $\ka$, the minimizer in \eqref{intro:mainKL} has a special form, as stated in the Theorem below \cite[Corollary 3.9]{BorLewNus94}. \begin{teo}[J. M. Borwein, A. S. Lewis, and R. D.
Nussbaum]\label{thm:BroLewNus} Let $(X,d_X)$ and $(Y,d_Y)$ be Polish spaces, $c:X\times Y\to [0, \infty)$ be a bounded cost function, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures such that $\rho_1(x),\rho_2(y)>0, \forall x\in X, y\in Y$ and $\mathcal{K} = e^{-\frac{c(x,y)}{\ep}}\rho_1\otimes\rho_2$. If $\operatorname{KL}(\rho_1\otimes\rho_2|\ka) <+\infty$, then for each $\ep> 0$ there exists a unique minimizer $\gamma^{\ep}\in \Pi(\rho_1,\rho_2)$ for the Schr\"odinger problem $\OTep(\rho_1,\rho_2)$ that can be written as \[ \gamma^{\ep}_{opt}(x,y) = a^{\ep}(x)b^{\ep}(y) \ka(x,y), \quad \ln a^{\ep}(x)\in L_{\rho_1}^1(X), \ln b^{\ep}(y) \in L_{\rho_2}^1(Y). \] \end{teo} \medskip In this paper, we are interested in the following questions: \begin{itemize} \item[(1)] \emph{What is the regularity of the Entropic potentials $a^{\ep}$ and $b^{\ep}$?} \item[(2)] \emph{Can we understand the structure and regularity of the minimizer in \eqref{intro:mainKL} if we consider the Schr\"odinger Bridge problem with $N$ given marginals $\rho_1,\rho_2,\dots,\rho_N$ instead of $2$?} \end{itemize} The answers to questions (1) and (2) rely on the Kantorovich duality formulation of \eqref{intro:mainKL} and its extension to the multi-marginal setting: we will exploit the parallel with optimal transport also to give a new (variational) proof of the existence of a solution to the Schr\"{o}dinger system. We believe that this contribution is also important, since the only available proofs pass through abstract results on the closure of ``sum type" functions. The multi-marginal Schr\"odinger Bridge problem, to be introduced in Section \ref{sec:multimarginal}, has recently been considered in the literature from different viewpoints (e.g.
\cite{BenCarCutNenPey2015, BenCarNen17, CarDuvPeySch, CarLab18, ChenConGeoRip, CutDou2014, GerGroGor19, GerKauRajEnt}), in connection with, for instance, Wasserstein barycenters, the matching problem in economics, the time-discretisation of the Euler equations and Density Functional Theory in computational chemistry. Finally, we want to mention that G. Carlier and M. Laborde in \cite{CarLab18} show the well-posedness (existence, uniqueness and smooth dependence with respect to the data) for the multi-marginal Schr\"odinger system in $L^{\infty}$ - see \eqref{intro:SchSysMM} in Section \ref{sec:multimarginal} - via local and global inverse function theorems. This is a different approach and an orthogonal result compared to the study presented in this paper; moreover, their result is restricted to measures $\rho_i$ which are absolutely continuous with respect to some reference measure, with density bounded from above and below. \subsection*{Computational aspects and connection with Optimal Transport Theory} \quad In many applications, the method of choice for numerically computing \eqref{intro:mainKL} is the so-called \textit{Iterative Proportional Fitting Procedure} (IPFP) or \textit{Sinkhorn algorithm} \cite{Sin64}. The aim of the Sinkhorn algorithm is to construct the measure $\gamma^{\ep}$ realizing the minimum in \eqref{intro:mainKL} by fixing the shape of the guess as $\gamma^{\ep}_{n} = a^n(x) b^n(y) \ka$ (since this is the actual shape of the minimizer) and then alternately updating either $a^n$ or $b^n$, by matching one of its marginal distributions to the target marginal $\rho_1$ or $\rho_2$, respectively. The IPFP sequences $(a^n)_{n\in\mathbb N}$ and $(b^n)_{n\in\mathbb N}$ are thus defined iteratively by\footnote{This iteration system also appeared in \cite{Bac65,DemSte40, Idel16,Kru37,Yul12} with different names (e.g.
RAS, IPFP).} \begin{equation}\label{eq:IPFPiteration} \begin{array}{lcl} \displaystyle a^0(x) & = &1, \\ \displaystyle b^0(y) & = &1, \\ \displaystyle b^n(y) & = & \dfrac{1}{\int k(x,y) a^{n-1}(x)d\rho_1(x)}, \\ \displaystyle a^n(x) & = & \dfrac{1}{\int k(x,y) b^{n}(y)d\rho_2(y)}. \end{array} \end{equation} While $a^n$ and $b^n$ will be approximations of the solution of the Schr\"{o}dinger system, the sequence of probability measures $\gamma^{\ep}_{n} = a^{n}(x) b^n(y)\mathcal K$ will approximate the minimizer $\gamma^{\ep}$. We stress that the IPFP procedure can be easily generalized to the multi-marginal setting, whose discussion will be detailed in Section \ref{sec:multimarginal}. \begin{itemize} \item[(3)] \textit{Can one prove convergence of the Sinkhorn algorithm in the two- and several-marginal cases?} \end{itemize} In the two-marginal case, the IPFP scheme was introduced by R. Sinkhorn \cite{Sin64}. The convergence of the iterates \eqref{eq:IPFPiteration} was proven by J. Franklin and J. Lorenz \cite{FraLor89} in the discrete case and by L. Ruschendorf \cite{RusIPFP} in the continuous one. An alternative proof based on the Franklin-Lorenz approach via the Hilbert metric was also provided by Y. Chen, T. Georgiou and M. Pavon \cite{CheGeoPav}, which in particular leads to a linear convergence rate of the procedure (in the Hilbert metric). Despite the different approaches and theoretical guarantees obtained in the $2$-marginal problem, in the multi-marginal case the situation changes completely. Although numerical evidence suggests convergence and stability of the Sinkhorn algorithm for a general class of cost functions \cite{BenCarCutNenPey2015, CutDou2014, CarDuvPeySch}, theoretical results guaranteeing convergence and stability were unknown (in \cite{RusIPFP} it is claimed that those methods can be extended to the multi-marginal case, but to our knowledge this has not been carried out yet).
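For discrete marginals the iteration \eqref{eq:IPFPiteration} amounts to a few lines of code. The following is a minimal pure-Python sketch; the three-point marginals, the quadratic cost and the value of $\ep$ are illustrative choices made for this example, not data from the paper:

```python
import math

# Illustrative data: two discrete marginals on {0, 1, 2} and the cost c(x, y) = (x - y)^2.
rho1 = [0.2, 0.5, 0.3]
rho2 = [0.4, 0.4, 0.2]
c = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
eps = 1.0

# Gibbs kernel k(x, y) = exp(-c(x, y) / eps), cf. the definition of \mathcal{K}.
k = [[math.exp(-cij / eps) for cij in row] for row in c]

# IPFP / Sinkhorn: a^0 = b^0 = 1, then alternate the two updates of the iteration.
a = [1.0, 1.0, 1.0]
b = [1.0, 1.0, 1.0]
for _ in range(1000):
    b = [1.0 / sum(k[i][j] * a[i] * rho1[i] for i in range(3)) for j in range(3)]
    a = [1.0 / sum(k[i][j] * b[j] * rho2[j] for j in range(3)) for i in range(3)]

# gamma^eps_n = a(x) b(y) k(x, y) rho1(x) rho2(y) should (approximately) have
# marginals rho1 and rho2.
gamma = [[a[i] * b[j] * k[i][j] * rho1[i] * rho2[j] for j in range(3)] for i in range(3)]
marg1 = [sum(gamma[i][j] for j in range(3)) for i in range(3)]
marg2 = [sum(gamma[i][j] for i in range(3)) for j in range(3)]
```

At the (numerical) fixed point, $a$ and $b$ approximate the Entropic potentials of the Schr\"odinger system and $\gamma$ approximates $\gamma^{\ep}_{opt}$; note that after each $a$-update the first marginal constraint is matched exactly, while the second is matched only in the limit.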
One of the contributions of this paper is to give convergence results for the Sinkhorn algorithm in the multi-marginal setting. In our approach we exploit the regularity of Entropic potentials to prove by compactness the convergence of the IPFP scheme \eqref{eq:IPFPiteration}. \medskip \noindent \emph{Connection with Optimal Transport Theory:} the problem \eqref{intro:mainKL} allows us to create very efficient numerical schemes approximating solutions to the Monge-Kantorovich formulation of optimal transport and its many generalizations. Indeed, notice that we can rewrite \eqref{intro:mainKL} as a functional given by the Monge-Kantorovich formulation of Optimal Transport with a cost function $c$ plus an entropic regularization term \begin{align}\label{intro:compOTOTep} \OTep(\rho_1,\rho_2) &= \min_{\gamma\in \Pi(\rho_1,\rho_2)}\ep\int_{X\times Y}\gamma\log\left(\dfrac{\gamma}{k}\right)d(\rho_1\otimes\rho_2) \nonumber \\ &= \min_{\gamma\in \Pi(\rho_1,\rho_2)}\int_{X\times Y}cd\gamma + \ep\int_{X\times Y}\gamma\log\gamma\, d(\rho_1\otimes\rho_2). \end{align} In particular, one can show \cite{CarDuvPeySch,LeoGamma,Mik04} that if $(\gamma^{\ep})_{\ep\geq 0}$ is a sequence of minimizers of the above problem, then $\gamma^{\ep}$ converges when $\ep\to 0$ to a solution of the optimal transport problem ($\ep=0$). More precisely, let us define the functionals $C_k,C_0:\mathcal P(X\times Y)\to \mathbb R\cup\lbrace +\infty\rbrace$ \[ C_{k}(\gamma) = \begin{cases} \int_{X\times Y}c d\gamma +\ep_k\int_{X\times Y}\gamma\log\gamma\,d(\rho_1\otimes\rho_2) &\text{ if } \gamma \in \Pi(\rho_1,\rho_2) \\ +\infty &\text{ otherwise }\\ \end{cases}, \] \[ C_{0}(\gamma) = \begin{cases} \int_{X\times Y}c d\gamma &\text{ if } \gamma \in \Pi(\rho_1,\rho_2) \\ +\infty &\text{ otherwise }\\ \end{cases}.
\] Then in \cite{CarDuvPeySch, LeoSurvey,Mik04} it is shown that the sequence of functionals $(C_k)_{k\in\mathbb N}$ $\Gamma$-converges to $C_0$ with respect to the weak convergence of measures. In particular, minimizers and minimal values converge and so, if $c(x,y) = d(x,y)^p$, then \[ \lim_{k\to+\infty}\operatorname{OT}^p_{\ep_k}(\rho_1,\rho_2) = W^p_p(\rho_1,\rho_2), \] where $W_p(\rho_1,\rho_2)$ is the $p$-Wasserstein distance between $\rho_1$ and $\rho_2$, \[ W^p_p(\rho_1,\rho_2) = \min_{\gamma\in\Pi(\rho_1,\rho_2)}\int_{X\times Y}d^p(x,y)d\gamma(x,y). \] \begin{figure}[htbp] \centering \includegraphics[scale=0.24]{planregep.png} \caption{Support of the optimal coupling $\gamma^{\ep}$ in \eqref{intro:mainKL} for the one-dimensional distance square cost with different values of $\epsilon = 10^{-1},1,10,10^{2},10^{3}$: the densities $\rho_1 \sim N(0,5)$ (blue) and $\rho_2 = \frac{1}{2}\eta_1 + \frac{1}{2}\eta_2$ is a mixed Gaussian (red), where $\eta_1 \sim N(-2,0.8)$ and $\eta_2 \sim N(2,0.7)$. The numerical computations are done using the POT library \cite{POT}.} \label{fig:plan} \end{figure} In the context of Optimal Transport Theory, the entropic regularization was introduced by A. Galichon and B. Salani\'e \cite{GalSal10} to solve matching problems in economics, and by M. Cuturi \cite{Cut} in the context of machine learning and data sciences. Both seminal papers sparked renewed interest in the theoretical aspects of \eqref{intro:compOTOTep} and had a strong impact on the imaging, data science and machine learning communities, due to the efficiency of the Sinkhorn algorithm. The Sinkhorn algorithm provides an efficient and scalable approximation to optimal transport. In particular, by an appropriate choice of parameters, the Sinkhorn algorithm is in fact a near-linear time approximation for computing OT distances between discrete measures \cite{AltWeeRig17}.
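The convergence $\OTep \to W_p^p$ as $\ep \to 0$ can be observed numerically even on a toy example. The sketch below (a two-point example chosen purely for illustration) runs the Sinkhorn iterations in the log domain, where the kernel $e^{-c/\ep}$ does not underflow for small $\ep$, and evaluates $\OTep$ as $\int u\, d\rho_1 + \int v\, d\rho_2$ at the (approximate) fixed point; since the entropic term is nonnegative, the values decrease towards the exact quadratic transport cost:

```python
import math

def lse(vals):
    # log-sum-exp with a max shift, for numerical stability
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

# Illustrative two-point marginals (uniform) and quadratic cost.
x = [0.0, 1.0]
y = [0.3, 1.2]
rho1 = [0.5, 0.5]
rho2 = [0.5, 0.5]
c = [[(xi - yj) ** 2 for yj in y] for xi in x]

# Exact W_2^2: for 2x2 uniform marginals the optimum is one of the two permutations.
exact = min(0.5 * (c[0][0] + c[1][1]), 0.5 * (c[0][1] + c[1][0]))

def ot_eps(eps, iters=20000):
    # Log-domain Sinkhorn: u, v play the role of the dual potentials.
    u = [0.0, 0.0]
    v = [0.0, 0.0]
    for _ in range(iters):
        v = [-eps * lse([(u[i] - c[i][j]) / eps + math.log(rho1[i]) for i in range(2)])
             for j in range(2)]
        u = [-eps * lse([(v[j] - c[i][j]) / eps + math.log(rho2[j]) for j in range(2)])
             for i in range(2)]
    # At the fixed point, OT_eps = int u d rho1 + int v d rho2.
    return sum(ui * ri for ui, ri in zip(u, rho1)) + sum(vj * rj for vj, rj in zip(v, rho2))

vals = [ot_eps(e) for e in (1.0, 0.5, 0.25, 0.1)]
```

The entropic values sit above the unregularized cost and shrink towards it as $\ep$ decreases, in agreement with the $\Gamma$-convergence statement above (for very small $\ep$ the iteration count needed for convergence grows quickly, which is the practical price of a sharp approximation).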
However, as studied in \cite{Dud69, BacWee19}, the Wasserstein distance suffers from the so-called \textit{curse of dimensionality}. We refer to the recent book \cite{CutPeyBook} written by M. Cuturi and G. Peyr\'e for a complete presentation and references on computational optimal transport. \subsection{Main contributions} \quad In order to study the regularity of Entropic potentials, we introduce the dual (Kantorovich) functional \[ D_{\ep}(u,v) = \int_X u(x)d\rho_1(x) + \int_Y v(y)d\rho_2(y) - \ep\int_{X\times Y} e^{\frac{u(x)+v(y)-c(x,y)}{\ep}}d(\rho_1\otimes\rho_2). \] \quad The Kantorovich duality of \eqref{intro:mainKL} is given by the following variational problem (see Proposition \ref{prop:duality2N}) \begin{equation}\label{intro:entdual} \OTep(\rho_1,\rho_2) = \sup_{u\in C_b(X),v\in C_b(Y)}D_{\ep}(u,v) +\ep. \end{equation} The Entropy-Kantorovich duality \eqref{intro:entdual} appeared, for instance, in \cite{CarLab18, FeyFXVAmaPey, GerKauRajEnt, GigTamBB18,GigTam18,LeoSurvey}. The first contributions of this paper are (i) to prove the existence of maximizers $u^*$ and $v^*$ (up to translation) in \eqref{intro:entdual} in natural spaces; (ii) to show that the Entropy-Kantorovich potentials inherit the regularity of the cost function (see the precise statements in Proposition \ref{prop:EstPot} and Theorem \ref{thm:kanto2Nmax}). We then link $u^*$ and $v^*$ to the solution of the Schr\"{o}dinger problem; as a byproduct of our results we are able to provide an alternative proof of the convergence of the Sinkhorn algorithm in the $2$-marginal case via a purely optimal transportation approach (Theorem \ref{thm:convIPFP}), seeing it as an alternating maximization procedure. The strength of this proof is that it can be easily generalized to the multi-marginal setting (Theorem \ref{thm:convIPFPNmarg}).
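On a finite example the duality \eqref{intro:entdual}, including the extra $+\ep$ coming from the mass constraint $\gamma(X\times Y)=1$, can be checked numerically: after running the Sinkhorn iterations to convergence, the primal value $\ep\operatorname{KL}(\gamma^{\ep}|\ka)$ and the dual value $D_{\ep}(u,v)+\ep$ with $u=\ep\log a$, $v=\ep\log b$ coincide. The data below are illustrative choices for this sketch:

```python
import math

# Illustrative discrete problem.
rho1 = [0.3, 0.7]
rho2 = [0.6, 0.4]
c = [[0.0, 1.0], [1.0, 0.5]]
eps = 1.0
k = [[math.exp(-cij / eps) for cij in row] for row in c]

# Run Sinkhorn to (near) optimality.
a = [1.0, 1.0]
b = [1.0, 1.0]
for _ in range(2000):
    b = [1.0 / sum(k[i][j] * a[i] * rho1[i] for i in range(2)) for j in range(2)]
    a = [1.0 / sum(k[i][j] * b[j] * rho2[j] for j in range(2)) for i in range(2)]

# Primal value: eps * KL(gamma | K) with gamma = a b k rho1 x rho2 and K = k rho1 x rho2.
gamma = [[a[i] * b[j] * k[i][j] * rho1[i] * rho2[j] for j in range(2)] for i in range(2)]
primal = eps * sum(
    gamma[i][j] * math.log(gamma[i][j] / (k[i][j] * rho1[i] * rho2[j]))
    for i in range(2) for j in range(2)
)

# Dual value: D_eps(u, v) + eps with u = eps log a, v = eps log b.
u = [eps * math.log(ai) for ai in a]
v = [eps * math.log(bj) for bj in b]
D = (sum(u[i] * rho1[i] for i in range(2)) + sum(v[j] * rho2[j] for j in range(2))
     - eps * sum(math.exp((u[i] + v[j] - c[i][j]) / eps) * rho1[i] * rho2[j]
                 for i in range(2) for j in range(2)))
dual = D + eps
```

At the optimum the exponential integral in $D_{\ep}$ equals $1$, which is exactly why the constant $\ep$ reappears in \eqref{intro:entdual}.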
\subsection{Summary of results and main ideas of the proofs} \quad Our approach follows ideas from Optimal Transport and relies on the study of the dual (Kantorovich) problem \eqref{intro:entdual} of \eqref{intro:mainKL}. Analogously to the optimal transport case, if one assumes some regularity (boundedness, uniform continuity, concavity) of the cost function $c$, then we can obtain the same type of regularity for the Entropy potentials $u$ and $v$. The relation between solutions of the dual problem \eqref{intro:entdual} and the Entropic potentials solving the Schr\"odinger system was already pointed out by C. L\'eonard \cite{LeoSurvey}. To our knowledge, the direct proof of existence of maximizers in \eqref{intro:entdual} is a new result. Our approach to obtaining the existence of Entropic-Kantorovich potentials follows the direct method of Calculus of Variations. The key idea in the argument is to define a generalized notion of $c$-transform in the Schr\"odinger Bridge case, namely the $(c,\ep)$-transform. The main duality result, in the most general case where we assume only that $c$ is bounded, is given by Theorem \ref{thm:kanto2Nmax}, stated below. \begin{teo} Let $(X,d_X)$, $(Y,d_Y)$ be Polish spaces, $c:X\times Y\to \mathbb R$ be a Borel bounded cost, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures and $\ep>0$ be a positive number. Then the supremum in \eqref{intro:entdual} is attained for a unique couple $(u_0,v_0)$ (up to the trivial transformation $(u,v) \mapsto (u+a,v-a)$). Moreover we also have $$u_0 \in L^{\infty}(\rho_1) \quad \text{ and } \quad v_0 \in L^{\infty}(\rho_2)$$ and we can choose the maximizers such that $\|u_0\|_{\infty} ,\|v_0\|_{\infty} \leq \frac 32 \|c\|_{\infty}$.
\end{teo} \noindent \emph{On the $(c,\ep)$-transform:} Given a measurable function $u:X\to\mathbb R$ such that $\int_{X}e^{u/\ep}d\rho_1 <+\infty$, we define the $(c,\ep)$-transform of $u$ by \[ u^{(c,\ep)}(y) = -\ep\log\left(\int_{X}e^{(u(x)-c(x,y))/\ep}d\rho_1(x)\right). \] One can show that this operation is well defined and, moreover, that $D_{\ep}(u,u^{(c,\ep)}) \geq D_{\ep}(u,v)$ for every $v$, with equality if and only if $v = u^{(c,\ep)}$ (Lemma \ref{lemma:dual}). If we additionally assume regularity for the cost function $c$, for instance that $c$ is $\omega$-continuous or that it is merely bounded, the $(c,\ep)$-transform is actually a compact operator, respectively from $L^1(\rho_1)$ to $C(Y)$ and from $L^{\infty}(\rho_1)$ to $L^p(\rho_2)$ (Proposition \ref{prop:EstPot}). \medskip \noindent \emph{IPFP/Sinkhorn algorithm:} As a byproduct of the above approach to the duality, we present an alternative proof of the convergence of the IPFP/Sinkhorn algorithm. The main idea in our proof is that we can rewrite the IPFP iteration substituting $a^n = \exp(u^n/\ep)$ and $b^n = \exp(v^n/\ep)$; in these new variables the iteration becomes \[ \begin{array}{lcl} \displaystyle v^n(y)/\ep & = & - \log\left(\int_X k(x,y)\, e^{u^{n-1}(x)/\ep}\, d\rho_1\right) \\ \displaystyle u^n(x)/\ep & = & - \log\left(\int_Y k(x,y)\, e^{v^{n}(y)/\ep}\, d\rho_2\right) \end{array}. \] In other words, $v^n = (u^{n-1})^{(c,\ep)}$ and $u^n = (v^n)^{(c,\ep)}$. In particular we can interpret the IPFP in optimal transportation terms: at each step the Sinkhorn iterations \eqref{eq:IPFPiteration} are equivalent to taking the $(c,\ep)$-transforms alternately, and therefore the IPFP can be seen as an alternating maximization procedure for the dual problem. Therefore, using the aforementioned compactness, it is easy to show that $u^n$ and $v^n$ converge to the optimal solution of the Kantorovich dual problem when $n\to+\infty$.
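In the discrete setting the $(c,\ep)$-transform is just a log-sum-exp, and the alternating-maximization reading of the IPFP can be checked directly: each half-step can only increase the dual value $D_{\ep}$. A minimal sketch (the marginals, cost and $\ep$ below are illustrative):

```python
import math

def lse(vals):
    # log-sum-exp with a max shift, for numerical stability
    m = max(vals)
    return m + math.log(sum(math.exp(v - m) for v in vals))

rho1 = [0.2, 0.5, 0.3]
rho2 = [0.4, 0.4, 0.2]
c = [[(i - j) ** 2 for j in range(3)] for i in range(3)]
eps = 0.5

def ceps_transform(f, rho, on_x):
    # (c, eps)-transform: f^{(c,eps)}(y) = -eps log int exp((f(x) - c(x,y))/eps) drho(x).
    # `on_x=True` turns a function of x into a function of y, and conversely.
    out = []
    for t in range(3):
        if on_x:
            vals = [(f[i] - c[i][t]) / eps + math.log(rho[i]) for i in range(3)]
        else:
            vals = [(f[j] - c[t][j]) / eps + math.log(rho[j]) for j in range(3)]
        out.append(-eps * lse(vals))
    return out

def D(u, v):
    # Dual functional D_eps(u, v).
    lin = sum(u[i] * rho1[i] for i in range(3)) + sum(v[j] * rho2[j] for j in range(3))
    pen = sum(math.exp((u[i] + v[j] - c[i][j]) / eps) * rho1[i] * rho2[j]
              for i in range(3) for j in range(3))
    return lin - eps * pen

u = [0.0, 0.0, 0.0]
v = [0.0, 0.0, 0.0]
values = [D(u, v)]
for _ in range(50):
    v = ceps_transform(u, rho1, on_x=True)    # v^n = (u^{n-1})^{(c,eps)}
    values.append(D(u, v))
    u = ceps_transform(v, rho2, on_x=False)   # u^n = (v^n)^{(c,eps)}
    values.append(D(u, v))
```

Each update is the exact partial maximizer of the (coordinate-wise concave) functional $D_{\ep}$, so the recorded dual values form a non-decreasing sequence, which is the computational content of the alternating maximization interpretation.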
\begin{teo} Let $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $\rho_1 \in \mathcal P(X)$ and $\rho_2\in\mathcal P(Y)$ be probability measures and $c:X\times Y\to \mathbb R$ be a Borel bounded cost. If $(a^n)_{n\in\mathbb N}$ and $(b^n)_{n\in\mathbb N}$ are the IPFP sequences defined in \eqref{eq:IPFPiteration}, then there exist $\lambda_n >0$ such that \[ a^n/\lambda_n\to a \text{ in } L^p(\rho_1) \quad \text{ and } \quad \lambda_n b^n\to b \text{ in } L^p(\rho_2), \quad 1\leq p <+\infty, \] for $a,b$ that solve the Schr\"{o}dinger system. In particular, the sequence $\gamma^n = a^nb^n\ka$ converges in $L^p(\rho_1\otimes\rho_2)$ to $\gamma^{\ep}_{opt}$ in \eqref{intro:mainKL}, $1\leq p <+\infty$. \end{teo} We recall that the argument in the original proof of convergence of the Sinkhorn algorithm \cite{FraLor89} (also in \cite{CheGeoPav}) relies on defining the Hilbert metric on the projection cone of the Sinkhorn iterations. The authors show that the Sinkhorn iterates are a contraction under this metric and therefore the procedure converges. This proof has the advantage of automatically providing the rate of convergence of the iterates; however, it is not easily extendable to the several-marginal case. Our approach instead can be extended to obtain the existence and convergence results also in the multi-marginal setting: \begin{teo} Let $(X_i,d_{X_i})$ be Polish metric spaces and $\rho_i \in \mathcal P(X_i)$ be probability measures, for $i \in \lbrace 1,\dots, N \rbrace$, and $c:X_1\times \dots \times X_N\to \mathbb R$ be a Borel bounded cost. If $(a_i^n)_{n\in\mathbb N}$ are the multi-marginal IPFP sequences that will be defined in \eqref{eq:IPFPsequenceN}, then there exist $\lambda^n_i >0$ with $\prod_i \lambda^n_i=1$ such that \[ a_i^n/\lambda_i^n \to a_i \text{ in } L^p(\rho_i) \quad \text{ for all } i \in \lbrace 1,\dots, N \rbrace, 1\leq p <+\infty, \] where $(a_1, \ldots, a_N)$ solve the Schr\"{o}dinger system.
In particular, the sequence $\gamma_N^n = \otimes^N_{i=1}a_i^n\ka$ converges in $L^p(\otimes^N_{i=1} \rho_i)$, $1\leq p <+\infty$, to the optimal coupling $\gamma^{\ep}_{N, opt}$ solving the multi-marginal Schr\"odinger Bridge problem to be defined in \eqref{eq:primalSchrMult}. \end{teo} \subsection{Organization of the paper} \quad \quad The remaining part of the paper is organized as follows: Section \ref{sec:RegularityEntr} contains the main structural results of the paper, namely Proposition \ref{prop:EstPot} and Theorem \ref{thm:kanto2Nmax}. In particular, we define the main tools for showing the existence of maximizers of the Entropic-Kantorovich problem and prove regularity results for the Entropic-Kantorovich potentials via the $(c,\ep)$-transform. In Section \ref{sec:convergenceIPFP}, we apply the above results to prove convergence of the Sinkhorn algorithm purely via the compactness argument and the alternating maximization procedure (Theorem \ref{thm:convIPFP}) and, in Section \ref{sec:multimarginal}, we extend the main results of the paper to the multi-marginal Schr\"odinger Bridge problem, including convergence of the Sinkhorn algorithm in the multi-marginal case (Theorem \ref{thm:convIPFPNmarg}). \subsection{The role of the reference measures} \quad \quad In this subsection, we give a technical remark discussing the role of the reference measures $\refmx$ and $\refmy$. We stress that all the results of the paper can be extended to a kind of entropic optimal transport problem where the penalization occurs with respect to some reference measures $\refmx, \refmy$. For $\ep>0$, we may in particular look at the problem \begin{align}\label{eq:Sepm} \Sep(\rho_1,\rho_2; \refmx, \refmy) &:= \min_{\gamma\in \Pi(\rho_1,\rho_2)} \left\{ \int_{X\times Y}c \, d\gamma + \ep \operatorname{KL}( \gamma | \refmx\otimes\refmy) \right\} \nonumber \\ &= \min_{\gamma\in \Pi(\rho_1,\rho_2)} \ep \operatorname{KL} ( \gamma | \mathcal{K}).
\end{align} where $\mathcal{K}$ is the Gibbs Kernel $\mathcal{K} = e^{-\frac{c}{\ep}}\refmx\otimes\refmy$. While having a reference measure can be quite useful in some situations (for example the Schr\"{o}dinger problem is set with $\refm_1=\refm_2 = \mathcal{L}^d$), in others it is the opposite, for example when we are considering $\rho_1, \rho_2$ to be sums of Dirac masses. In those cases it is a much better solution to consider $\refmx=\rho_1$ and $\refmy=\rho_2$. Notice that in this case, we have that \[ \OTep(\rho_1,\rho_2) = \Sep(\rho_1,\rho_2;\rho_1,\rho_2). \] Now we will see that in fact $\OTep$ is a \emph{universal} reduction for $\Sep$, meaning that we can always assume $\refmx=\rho_1$ and $\refmy=\rho_2$: \begin{lemma} Let $(X,d,\refmx)$ and $(Y,d,\refmy)$ be Polish metric measure spaces and $c:X\times Y\to [0,+\infty[$ be a cost function. Assume that $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$. Then we have $$ \Sep(\rho_1, \rho_2; \refmx, \refmy) = \OTep ( \rho_1, \rho_2) +\ep \operatorname{KL}(\rho_1| \refmx) + \ep \operatorname{KL}(\rho_2|\refmy);$$ moreover, whenever one of the two sides is finite, the minimizers of $\Sep$ and $\OTep$ are the same. \end{lemma} \begin{proof} The key equality in proving this lemma is that, whenever $\gamma \in \Pi(\rho_1, \rho_2)$, one has \begin{equation}\label{Eqn:equalityKL} \operatorname{KL}( \gamma | \refmx \otimes \refmy) = \operatorname{KL}(\gamma | \rho_1 \otimes \rho_2) + \operatorname{KL}(\rho_1 | \refmx) + \operatorname{KL}(\rho_2 | \refmy). \end{equation} While this equality is clear whenever all the terms are finite, we refer to Lemma~\ref{lem:KLsum} below for a complete proof covering every case. From this equality we can easily get the conclusions. \end{proof} \begin{lemma}\label{lem:KLsum} Let $(X,\sigma_X)$ and $(Y,\sigma_Y)$ be measurable spaces. Assume that $\gamma \in \mathcal P(X\times Y)$, $\refmx \in \mathcal P(X)$ and $\refmy\in \mathcal P(Y)$.
Then we have \begin{equation}\label{Eqn:equalityKLgen} \operatorname{KL}( \gamma | \refmx \otimes \refmy) = \operatorname{KL}(\gamma | \rho_1 \otimes \rho_2) + \operatorname{KL}(\rho_1 | \refmx) + \operatorname{KL}(\rho_2 | \refmy), \end{equation} where $\rho_1= (e_1)_{\sharp}\gamma$ and $\rho_2 = (e_2)_{\sharp}\gamma$ are the projections of $\gamma$ onto $X$ and $Y$ respectively. \end{lemma} \begin{proof} First we assume the right hand side of \eqref{Eqn:equalityKLgen} is finite, and in particular this implies $\gamma \ll \rho_1 \otimes \rho_2$, $\rho_1 \ll \refmx$ and $\rho_2 \ll \refmy$. In particular we get $\gamma \ll \refmx \otimes \refmy$ and we can infer $$ \frac { d \gamma}{ d (\refmx \otimes \refmy )} (x,y)= \frac { d \gamma}{ d (\rho_1 \otimes \rho_2 )} (x,y) \cdot \frac { d \rho_1}{ d\refmx } (x) \cdot \frac { d \rho_2 }{ d \refmy } (y).$$ We can now compute \begin{align*} \operatorname{KL}( \gamma |\refmx \otimes \refmy) &= \int_{X \times Y} \ln \left( \frac { d \gamma}{ d (\refmx \otimes \refmy )} (x,y) \right) \, d \gamma \\ &=\int_{X \times Y} \ln \left( \frac { d \gamma}{ d (\rho_1 \otimes \rho_2 )} (x,y) \right) \, d \gamma + \int_{X \times Y} \ln \left(\frac { d \rho_1}{ d\refmx } (x)\right) \, d \gamma + \int_{X \times Y} \ln \left(\frac { d \rho_2}{ d\refmy }(y)\right) \, d \gamma \\ & = \operatorname{KL}(\gamma | \rho_1 \otimes \rho_2 ) + \int_X \ln \left(\frac { d \rho_1}{ d\refmx } (x)\right) \, d \rho_1 + \int_Y \ln \left(\frac { d \rho_2}{ d\refmy } (y)\right) \, d \rho_2 \\ & = \operatorname{KL}(\gamma | \rho_1 \otimes \rho_2) + \operatorname{KL}(\rho_1 | \refmx) + \operatorname{KL}(\rho_2 | \refmy). \end{align*} We assume now that the left hand side of \eqref{Eqn:equalityKLgen} is finite.
Thanks to the fact that $\operatorname{KL}(F_{\sharp} \mu | F_{\sharp} \nu) \leq \operatorname{KL}( \mu | \nu)$ for every measurable function $F$, we immediately have that $ \operatorname{KL}(\rho_1 | \refmx)$ and $ \operatorname{KL}(\rho_2 | \refmy)$ are finite, and in particular $\rho_1 \ll \refmx$ and $\rho_2 \ll \refmy$. Now let us introduce $f = \frac {d \gamma}{d \refmx \otimes \refmy}$, $g_1 = \frac {d \rho_1}{d \refmx} $ and $g_2 =\frac { d \rho_2}{d \refmy}$; let us then consider any measurable set $A \subseteq X \times Y$ and assume that $(\rho_1 \otimes \rho_2) (A)=0$. In particular we have $$ \int_{X \times Y} \chi_A (x,y) g_1(x)g_2(y) \, d (\refmx \otimes \refmy) = (\rho_1 \otimes \rho_2) (A)=0;$$ from this we deduce that $A$ is $\refmx \otimes \refmy$-essentially contained in the set $B=\{g_1(x) g_2(y) =0\} = B_x \cup B_y$, where $B_x= \{ g_1(x) =0\} \times Y$ and $B_y = X \times \{ g_2(y)=0\}$. However, by the marginal conditions, we have $\gamma(B_x)=\rho_1(\{ g_1 =0\}) =0$ and similarly $\gamma(B_y) =0$, which imply $\gamma(B)=0$. In particular we have \begin{align*} \gamma(A) &= \int_{X \times Y} \chi_A (x,y) f(x,y) \, d (\refmx \otimes \refmy) = \int_{X \times Y} \chi_{A \cap B} (x,y) f(x,y) \, d (\refmx \otimes \refmy) \\ & \leq \int_{X \times Y} \chi_{ B} (x,y) f(x,y) \, d (\refmx \otimes \refmy) \leq \gamma(B)=0. \end{align*} This proves that $\gamma \ll \rho_1 \otimes \rho_2$ and so we can perform the same calculation we did before to conclude. \end{proof} \section{Regularity of Entropic-Potentials and dual problem}\label{sec:RegularityEntr} In this section we will treat the case where $c:X \times Y \to \mathbb R$ is a Borel \textit{bounded} cost; of course everything extends also to the case when $c \in L^{\infty}(\rho_1 \otimes \rho_2)$. Some of the results extend naturally to unbounded costs (for example (i), (ii), (v) in Proposition \ref{prop:EstPot}), but we prefer to keep the setting uniform.
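Although the argument above is entirely measure-theoretic, the decomposition \eqref{Eqn:equalityKLgen} of Lemma~\ref{lem:KLsum} is easy to check numerically on finite spaces. The following sketch (pure Python, for illustration only; all names are ours) draws a random coupling together with arbitrary positive reference measures and verifies the identity up to floating-point error.

```python
import math
import random

def kl(p, q):
    # Discrete Kullback-Leibler divergence with the convention 0*log(0) = 0;
    # assumes p is absolutely continuous with respect to q.
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

random.seed(0)
n, m = 4, 5

# strictly positive reference measures mX, mY
mX = [random.uniform(0.5, 1.5) for _ in range(n)]
mX = [x / sum(mX) for x in mX]
mY = [random.uniform(0.5, 1.5) for _ in range(m)]
mY = [y / sum(mY) for y in mY]

# a random coupling gamma on the product space, and its marginals
gamma = [[random.uniform(0.1, 1.0) for _ in range(m)] for _ in range(n)]
total = sum(sum(row) for row in gamma)
gamma = [[g / total for g in row] for row in gamma]
rho1 = [sum(row) for row in gamma]
rho2 = [sum(gamma[i][j] for i in range(n)) for j in range(m)]

def product(p, q):
    # product measure p (x) q, flattened row by row
    return [pi * qj for pi in p for qj in q]

g = [gamma[i][j] for i in range(n) for j in range(m)]
lhs = kl(g, product(mX, mY))
rhs = kl(g, product(rho1, rho2)) + kl(rho1, mX) + kl(rho2, mY)
assert abs(lhs - rhs) < 1e-10
```

The check goes through for any coupling, since the cross term $\sum_{i,j}\gamma_{ij}\log\frac{\rho_{1,i}\rho_{2,j}}{\mathrm{m}_{X,i}\mathrm{m}_{Y,j}}$ collapses onto the marginals exactly as in the proof.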
\subsection{Entropy-Transform and a priori estimates} \label{sec:bounded} \quad We start by defining the Entropy-Transform. First, let us define the space $\Lexp$, which will be the natural space for the dual problem. \begin{deff}[$\Lexp$ spaces] Let $\ep>0$ be a positive number and $(X,d_X)$ be a Polish space. We define the set $\Lexp(X,\rho_1)$ by \[ \Lexp(X,\rho_1) = \left\{ u:X \to [-\infty, \infty[ \, : \, u \text{ is a measurable function in } (X,\rho_1) \text{ and } 0<\int_X e^{u/\ep} \, d \rho_1 < \infty \right\}. \] For $u \in \Lexp(X, \rho_1)$ we define also $\lambda_u:=\ep\log \left( \int_X e^{u/\ep} \, d \rho_1 \right)$. \end{deff} For simplicity, we will use the notation $\Lexp(\rho_1)$ instead of $\Lexp(X,\rho_1)$. Notice that it is possible that $u\in \Lexp(X,\rho_1)$ attains the value $-\infty$ in a set of positive measure, but not everywhere, because of the positivity constraint $\int_X e^{u/\ep} \, d \rho_1>0$. On the other hand, we have that $u \in \Lexp(X,\rho_1)$ implies $u^+ \in L^p(\rho_1)$ for every $ p \geq 1$, where $u^+(x):= \max\{ u(x),0\} $ denotes the positive part of $u$. \begin{deff}[Entropic $c$-transform or $(c,\ep)$-transform] Let $(X,d_X)$, $(Y,d_Y)$ be Polish spaces, $\ep>0$ be a positive number, $\rho_1 \in \mathcal P(X)$ and $\rho_2 \in \mathcal P(Y)$ be probability measures and let $c$ be a bounded measurable cost on $X \times Y$. The entropic $(c,\ep)$-transform $\mathcal{F}^{(c, \ep)}:\Lexp(\rho_1)\to L^0(\rho_2)$ is defined by \begin{equation}\label{eq:F1} \mathcal{F}^{(c, \ep)} ( u ) (y):= -\ep \log \left( \int_X e^{ \frac { u (x) - c(x,y) }{\ep}} \, d \rho_1 (x) \right). \end{equation} Analogously, we define the $(c,\ep)$-transform $\mathcal{F}^{(c, \ep)}:\Lexp(\rho_2)\to L^0(\rho_1)$ by \begin{equation}\label{eq:F2} \mathcal{F}^{(c, \ep)} ( v ) (x):= - \ep \log \left( \int_Y e^{ \frac { v (y) - c(x,y) }{\ep}} \, d \rho_2 (y) \right). 
\end{equation} Whenever it is clear from the context we denote $v^{(c,\ep)}=\mathcal{F}^{(c, \ep)}(v)$ and $u^{(c,\ep)}=\mathcal{F}^{(c, \ep)}(u)$, in an analogous way to the classical $c$-transform. \end{deff} Notice that $\Lexp(\rho_1)$ is the natural domain of definition for $\mathcal{F}^{(c, \ep)}$ because if $u \not \in \Lexp(\rho_1)$ we would have either $\mathcal{F}^{(c, \ep)}(u) \equiv - \infty$ or $\mathcal{F}^{(c, \ep)}(u) \equiv +\infty$; moreover, thanks to the positivity constraint $\int_X e^{u/\ep} \, d \rho_1>0$ we also have $\mathcal{F}^{(c, \ep)}(u)(y) \in \mathbb R$ almost everywhere. In fact we will show that $\mathcal{F}^{(c, \ep)}(u) \in L^{\infty}(\rho_2)$. We also remark that the $(c,\ep)$-transform is consistent with the classical $c$-transform as $\ep\to 0$: by the Laplace principle, $u^{(c,\ep)}(y) \to \inf \lbrace c(x,y) - u(x) : x\in \operatorname{supp}(\rho_1) \rbrace$ when $\ep\to 0$. In other words, $u^{(c,\ep)}(y) = u^{c}(y) + O(\ep)$. \begin{figure}[h] \centering \includegraphics[ scale=0.15]{kantou.jpg} \caption{Entropy-Kantorovich potentials $u^{\ep}(x)-\ep \ln ( \rho_1)$ associated to the densities $\rho_1$ and $\rho_2$ for different values of the regularization parameter: $\ep_1 < \ep_2 < \ep_3$ (from left to right). The densities $\rho_1 \sim N(0,5)$ and $\rho_2 = \frac{1}{2}\eta_1 + \frac{1}{2}\eta_2$ is a mixed Gaussian, where $\eta_1 \sim N(7,0.9)$ and $\eta_2 \sim N(14,0.9)$. Notice that the values on the $y$-axis are not the same for the four figures.} \label{fig} \end{figure} \begin{lemma}\label{lemma:F1F2} Let $(X,d_X)$, $(Y,d_Y)$ be Polish spaces, $u\in \Lexp(\rho_1),v\in \Lexp(\rho_2)$ and $\ep>0$. Then, \begin{itemize} \item[(i)] $u^{(c,\ep)}(y) \in L^{\infty}(\rho_2)$ and $v^{(c,\ep)}(x) \in L^{\infty}(\rho_1)$.
More precisely, $$-\|c\|_{\infty}-\ep \log \left( \int_X e^{ \frac { u (x) }{\ep}} \, d \rho_1 \right) \leq u^{(c,\ep)}(y) \leq \| c\|_{\infty} -\ep \log \left( \int_X e^{ \frac { u (x) }{\ep}} \, d\rho_1 \right). $$ \item[(ii)] $u^{(c,\ep)}(y) \in \Lexp(\rho_2)$ and $v^{(c,\ep)}(x) \in \Lexp(\rho_1)$. Moreover $|\lambda_{u^{(c,\ep)}} + \lambda_u| \leq \|c \|_{\infty}$. \end{itemize} \end{lemma} \begin{proof} If $u\in \Lexp(\rho_1)$ then \begin{align*} u^{(c,\ep)}(y) &= -\ep \log\left(\int_X e^{\frac { u(x)-c(x,y) }{\ep}}d\rho_1\right)\\ &\leq -\ep \log\left(e^{\frac{ -\Vert c\Vert_{\infty}}{\ep}}\int_X e^{\frac { u(x)}{\ep}}d\rho_1\right)\\ &= \Vert c \Vert_{\infty} - \ep \log\left(\int_X e^{\frac { u(x)}{\ep}}d\rho_1\right). \end{align*} Moreover, we get a lower bound for the above quantity using $c\geq -\|c\|_{\infty}$: \[ u^{(c,\ep)}(y) = -\ep \log\left(\int_X e^{\frac { u(x)-c(x,y) }{\ep}}d\rho_1\right) \geq -\|c\|_{\infty} -\ep \log\left(\int_X e^{\frac {u(x)}{\ep}}d\rho_1\right). \] This proves that $u^{(c,\ep)} \in L^{\infty}(\rho_2)$; the fact that $v^{(c,\ep)} \in L^{\infty}(\rho_1)$ is analogous. This proves $(i)$. Since $u\in \Lexp(\rho_1)$, by part $(i)$ we have that \[ \int_Y e^{\frac{u^{(c,\ep)}(y)}{\ep}}d\rho_2(y) \leq \int_Y e^{\Vert c\Vert_{\infty}/\ep}\left(\int_X e^{\frac {u(x)}{\ep}}d\rho_1(x)\right)^{-1}d\rho_2(y) < +\infty. \] Therefore $u^{(c,\ep)} \in \Lexp(\rho_2)$ and in particular $\lambda_{u^{(c,\ep)}} \leq -\lambda_u + \|c\|_{\infty}$; the other inequality follows with a similar calculation and the same holds for $v^{(c,\ep)}$, which proves $(ii)$. \end{proof} Some of the following properties were already known for the softmax operator, for example in \cite{GenChiBacCutPey}, where they are used to obtain \emph{a posteriori} regularity of the potentials; to our knowledge, however, they were never used to get \emph{a priori} results.
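On finite spaces the transform \eqref{eq:F1} is a plain log-sum-exp, and the a priori bounds of Lemma~\ref{lemma:F1F2}(i) can be observed directly. The sketch below (pure Python, illustrative only; the helper names are ours) evaluates $u^{(c,\ep)}$ on a discrete example and checks the two-sided estimate.

```python
import math
import random

eps = 0.5
random.seed(1)
n, m = 6, 7
rho1 = [1.0 / n] * n  # uniform source measure on n points
c = [[random.uniform(0.0, 2.0) for _ in range(m)] for _ in range(n)]
u = [random.uniform(-1.0, 1.0) for _ in range(n)]

def ctransform(u, y):
    # u^{(c,eps)}(y) = -eps * log( int exp((u(x) - c(x,y))/eps) d rho1(x) )
    return -eps * math.log(sum(rho1[x] * math.exp((u[x] - c[x][y]) / eps)
                               for x in range(n)))

lam_u = eps * math.log(sum(rho1[x] * math.exp(u[x] / eps) for x in range(n)))
c_inf = max(abs(c[x][y]) for x in range(n) for y in range(m))
uce = [ctransform(u, y) for y in range(m)]

# Lemma (i): -||c||_inf - lambda_u <= u^{(c,eps)}(y) <= ||c||_inf - lambda_u
tol = 1e-9
assert all(-c_inf - lam_u - tol <= w <= c_inf - lam_u + tol for w in uce)
```

For small $\ep$ one would subtract the largest exponent before exponentiating (the usual stabilized log-sum-exp); at this scale the direct formula suffices.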
Properties of the $(c, \ep)$-transform are also used, very cleverly, in \cite{FatGozPro19} in order to obtain a new proof of the Caffarelli contraction theorem \cite{Caffarelli}. \begin{prop}\label{prop:EstPot} Let $\ep>0$ be a positive number, $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $c:X\times Y\to \mathbb{R}$ be a bounded cost function, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures and $u\in\Lexp(\rho_1)$. Then \begin{itemize} \item[(i)] if $c$ is $L$-Lipschitz, then $u^{(c,\ep)}$ is $L$-Lipschitz; \item[(ii)] if $c$ is $\omega$-continuous, then $u^{(c,\ep)}$ is $\omega$-continuous; \item[(iii)] if $\vert c\vert \leq M$, then we have $|u^{(c,\ep)} +\lambda_u|\leq M$; \item[(iv)] if $\vert c\vert \leq M$, then $\mathcal{F}^{(c, \ep)}:L^{\infty}(\rho_1)\to L^p(\rho_2)$ is a $1$-Lipschitz compact operator; \item[(v)] if $c$ is $K$-concave with respect to $y$, then $u^{(c,\ep)}$ is $K$-concave. \end{itemize} \end{prop} \begin{proof} Of course we have that (ii) implies (i); let us prove directly (ii). \begin{itemize} \item[(ii)] Let $u\in\Lexp(\rho_1)$, $y_1,y_2\in Y$. We can assume without loss of generality that $u^{(c,\ep)}(y_1)\geq u^{(c,\ep)}(y_2)$; in that case \begin{align*} \vert u^{(c,\ep)}(y_1) - u^{(c,\ep)}(y_2)\vert &= \ep\log\left(\int_X e^{\frac{u(x)-c(x,y_2)}{\ep}}d\rho_1\right) - \ep\log\left(\int_X e^{\frac{u(x)-c(x,y_1)}{\ep}}d\rho_1\right) \\ &= \ep\log\left(\int_X e^{\frac{u(x)-c(x,y_1)+c(x,y_1)-c(x,y_2)}{\ep}}d\rho_1\right) - \ep\log\left(\int_X e^{\frac{u(x)-c(x,y_1)}{\ep}}d\rho_1\right) \\ &\leq \ep\log\left(e^{\frac{\omega(d(y_1,y_2))}{\ep}}\int_X e^{\frac{u(x)-c(x,y_1)}{\ep}}d\rho_1\right) - \ep\log\left(\int_X e^{\frac{u(x)-c(x,y_1)}{\ep}}d\rho_1\right) \\ &= \omega(d(y_1,y_2)). \end{align*} \item[(iii)] This is a direct consequence of Lemma \ref{lemma:F1F2} (i); \item[(iv)] We first prove that $\mathcal{F}^{(c, \ep)}$ is $1$-Lipschitz.
In fact, letting $u, \tilde{u} \in L^{\infty}(\rho_1)$, we can perform a calculation very similar to what has been done in (ii): for every $y \in Y$ we have \begin{align*} \mathcal{F}^{(c, \ep)} (u) (y) &=- \ep\log\left(\int_X e^{\frac{u(x)-c(x,y)}{\ep}}d\rho_1\right) \geq - \ep\log\left(\int_X e^{\frac{\tilde{u}(x) + \| u - \tilde u \|_{\infty} -c(x,y)}{\ep}}d\rho_1\right) \\ &= \mathcal{F}^{(c, \ep)}(\tilde{u}) (y) - \| u - \tilde{u} \|_{\infty}. \end{align*} We can conclude that $\| \mathcal{F}^{(c, \ep)} (u) - \mathcal{F}^{(c, \ep)} (\tilde{u})\|_p \leq \| \mathcal{F}^{(c, \ep)} (u) - \mathcal{F}^{(c, \ep)} (\tilde{u})\|_{\infty} \leq \| u - \tilde{u} \|_{\infty}$. This proves in particular that $\mathcal{F}^{(c, \ep)}: L^{\infty}(\rho_1) \to L^p(\rho_2)$ is continuous. In order to prove that $\mathcal{F}^{(c, \ep)}$ is compact it suffices to prove that $\mathcal{F}^{(c, \ep)}(B)$ is precompact for every bounded set $B \subset L^{\infty}(\rho_1)$. We will use Proposition \ref{prop:compactness}; since $\mathcal{F}^{(c, \ep)}$ is Lipschitz, for sure if $B$ is bounded we have that $\mathcal{F}^{(c, \ep)}(B)$ is bounded in $L^p(\rho_2)$, so it remains to prove part (b) of the criterion of Proposition \ref{prop:compactness}. Let us consider $\gamma = \rho_1\otimes\rho_2$. Since $c \in L^{\infty}(\gamma)$, by Lusin's theorem we have that for every $\sigma>0$ there exists $N_{\sigma} \subset X \times Y$, with $\gamma ( N_{\sigma})< \sigma$, such that $c|_{(N_{\sigma})^c}$ is uniformly continuous, with modulus of continuity $\omega_{\sigma}$. We now try to mimic what we did in (ii), this time also keeping track of the remainder terms.
For each $y\in Y$, we consider the slice of $N_{\sigma}$ above $y$, $N^{\sigma}_y = \left\lbrace x\in X : (x,y)\in N_{\sigma} \right\rbrace$; then we consider the set of \textit{bad} $y \in Y$, where the slice $N^{\sigma}_y$ is too big: $$N^{\sigma}_b = \left\lbrace y \in Y : \rho_1(N^{\sigma}_y)\geq \sqrt{\sigma} \right\rbrace.$$ In particular, by definition if $y\not \in N_b^{\sigma}$ we have $\rho_1(N^{\sigma}_y) \leq \sqrt{\sigma}$, but thanks to Fubini and the condition $\gamma(N_{\sigma}) < \sigma$ we have also that $\rho_2(N^{\sigma}_b) \leq \sqrt{\sigma}$. Let us now consider $y, y' \not \in N^{\sigma}_b$, and let us denote $X^*= X \setminus ( N^{\sigma}_y \cup N^{\sigma}_{y'})$. Then we have that for every $x \in X^*$, $|c(x,y)-c(x,y')| \leq \omega_{\sigma}(d(y,y'))$. We can assume without loss of generality that $u^{(c,\ep)}(y)\geq u^{(c,\ep)}(y')$ and we have \[ \begin{array}{lcl} |u^{(c,\ep)}(y)-u^{(c,\ep)}(y')| &= - \ep \log\left(\int_X e^{(u(x)-c(x,y))/\ep}d\rho_1\right) +\ep \log\left(\int_X e^{(u(x)-c(x,y'))/\ep}d\rho_1\right) \\ &= \ep \log\left(\dfrac{\int_X e^{(u(x)-c(x,y)+c(x,y)-c(x,y'))/\ep}d\rho_1}{\int_X e^{(u(x)-c(x,y))/\ep}d\rho_1}\right) \\ &\leq \ep \log\left(e^{\frac{\omega_{\sigma}(d(y,y'))}{\ep}} +\dfrac{\int_{N^{\sigma}_y \cup N^{\sigma}_{y'}} e^{(u(x)-c(x,y'))/\ep}d\rho_1}{\int_X e^{(u(x)-c(x,y))/\ep}d\rho_1}\right) \\ &\leq \ep \log\left(e^{\frac{\omega_{\sigma}(d(y,y'))}{\ep}}+\rho_1(N^{\sigma}_y \cup N^{\sigma}_{y'}) e^{2(\Vert c\Vert_{\infty} + \Vert u\Vert_{\infty})/\ep} \right)\\ &\leq \ep \log\left(e^{\frac{\omega_{\sigma}(d(y,y'))}{\ep}}+2\sqrt{\sigma}e^{2(\Vert c\Vert_{\infty} + \Vert u\Vert_{\infty})/\ep}\right) \end{array} \] Now we denote by $A= 2 e^{2(\Vert c\Vert_{\infty} + \Vert u\Vert_{\infty})/\ep}$ and, thanks to the fact that if $a, b \geq0$ then $e^a+b \leq e^{a+b}$, we have $$|u^{(c,\ep)}(y)-u^{(c,\ep)}(y')| \leq\omega_{\sigma}(d(y,y')) + \ep \sqrt{\sigma}A \qquad \forall y, y' \not \in N_b^{\sigma}.$$ Then (having in mind also (iii) and that $A$ depends only on
$\|u \|_{\infty}$), we have that also (b) of Proposition \ref{prop:compactness} is satisfied for $\mathcal{F}^{(c, \ep)}(B)$, for every bounded set $B \subset L^{\infty}(\rho_1)$, granting then the compactness of $\mathcal{F}^{(c, \ep)}$. \item[(v)] In this case we are assuming that $Y$ is a geodesic space and that there exists $K \in \mathbb R$ such that for each constant-speed geodesic $(y_t)_{t \in [0,1]}$ we have $$c(x,y_t) \geq (1-t)c(x,y_0)+ t c(x,y_1) + 2 t(1-t) K d^2(y_0,y_1) \quad \forall x \in X.$$ Then, setting $f_t(x)=e^{(u(x)-c(x,y_t))/\ep}$, the $K$-concavity inequality for $c$ implies \begin{align*} f_t(x) &=e^{(u(x)-c(x,y_t))/\ep} \\ & \leq e^{(u(x)-(1-t)c(x,y_0)-tc(x,y_1)-2t(1-t)Kd^2(y_0,y_1))/\ep} \\ &= e^{-2 t(1-t) K d^2(y_0,y_1)/\ep} \cdot e^{((1-t)(u(x)-c(x,y_0))+t(u(x)-c(x,y_1)))/\ep} \\ &= e^{-2 t(1-t) K d^2(y_0,y_1)/\ep} \cdot f_0(x)^{1-t} \cdot f_1(x)^t. \end{align*} Using this along with H\"{o}lder's inequality we get \begin{align*} u^{(c,\ep)}(y_t) & = - \ep \log\left( \int_X e^{(u(x)-c(x,y_t))/\ep}\, d\rho_1 \right) = -\ep \log \left( \int_X f_t(x) \, d \rho_1 \right) \\ & \geq - \ep \log\left(\int_X e^{-2 t(1-t) K d^2(y_0,y_1)/\ep} \cdot f_0(x)^{1-t} \cdot f_1(x)^t \,d\rho_1\right) \\ &= 2t (1-t)Kd^2(y_0,y_1)- \ep \log\left(\int_X f_0(x)^{1-t} \cdot f_1(x)^t \,d\rho_1\right) \\ &\geq 2t (1-t)Kd^2(y_0,y_1)- \ep \log\left( \Bigl(\int_X f_0 \, d \rho_1 \Bigr)^{1-t} \cdot \Bigl(\int_X f_1 \, d \rho_1 \Bigr)^{t}\right) \\ & =2t (1-t)Kd^2(y_0,y_1) + (1-t)u^{(c,\ep)}(y_0)+ tu^{(c,\ep)}(y_1). \end{align*} \end{itemize} \end{proof} \begin{oss}[Entropic $c$-transform for $\Sep(\rho_1,\rho_2;\refmx,\refmy)$]\label{rmk:change} It is possible to define the entropic $c$-transform also for the Schr\"odinger problem $\Sep(\rho_1,\rho_2;\refmx,\refmy)$ with reference measures $\refmx \in \mathcal P(X)$ and $\refmy\in \mathcal P(Y)$.
In this case, \begin{equation}\label{eq:F1alt} \mathcal{F}^{(c, \ep)}_{*} (u) (y)= \ep\log(\rho_2(y)) -\ep \log \left( \int_X e^{ \frac { u (x) - c(x,y) }{\ep}}d\refmx(x)\right), \quad \text{and} \end{equation} \begin{equation}\label{eq:F2alt} \mathcal{F}^{(c, \ep)}_{*}(v) (x) = \ep\log(\rho_1(x))-\ep \log \left( \int_Y e^{ \frac { v (y) - c(x,y) }{\ep}}d\refmy(y)\right). \end{equation} It is easy to see that $$ \begin{cases} \mathcal{F}^{(c, \ep)}_{*}(v) &= \ep\log\rho_1 + \mathcal{F}^{(c, \ep)}( v - \ep \log \rho_2) \\ \mathcal{F}^{(c, \ep)}_{*}(u) &= \ep\log\rho_2 + \mathcal{F}^{(c, \ep)}( u - \ep \log \rho_1) \end{cases} $$ so that the $(c,\ep)$-transforms with reference measures are the $(c,\ep)$-transforms conjugated by the addition of a function. In particular we can get exactly the same estimates as in Lemma \ref{lemma:F1F2}, up to translating in the appropriate manner. For example, if $u\in \Lexp(\refmx)$, then $u^{(c,\ep)}_{*}(y) -\ep\log(\rho_2(y)) \in L^{\infty}(\refmy)$. \end{oss} \subsection{Dual problem} Let $u\in \Lexp(\rho_1), v\in \Lexp(\rho_2)$ and consider the Entropy-Kantorovich functional \begin{equation}\label{eqn:dualdef} \Dep(u,v) = \int_X u(x)d\rho_1(x) + \int_Y v(y)d\rho_2(y) - \ep\int_{X\times Y} e^{\frac{u(x)+v(y)-c(x,y)}{\ep}}d(\rho_1\otimes\rho_2). \end{equation} What are the minimal assumptions on $u,v$ for $\Dep(u,v)$ to make sense? First of all, if $u^+ \in L^1(\rho_1)$ and $v^+ \in L^1(\rho_2)$ then $\Dep(u,v)< \infty$; moreover, in order to have $\Dep(u,v)>-\infty$ we need $u\in \Lexp(\rho_1)$ and $v\in \Lexp(\rho_2)$, which is then a natural assumption (since we want to compute the supremum of $\Dep$).
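On finite spaces the functional \eqref{eqn:dualdef} can be evaluated directly, and one immediately sees the invariance $\Dep(u+a,v-a)=\Dep(u,v)$, $a\in\mathbb R$, which recurs throughout this section: the linear parts gain $+a$ and $-a$, while the exponent is unchanged. A minimal sketch (pure Python, illustrative only; all names are ours):

```python
import math
import random

eps = 0.3
random.seed(2)
n, m = 5, 4
rho1 = [1.0 / n] * n
rho2 = [1.0 / m] * m
c = [[random.uniform(0.0, 1.0) for _ in range(m)] for _ in range(n)]
u = [random.uniform(-1.0, 1.0) for _ in range(n)]
v = [random.uniform(-1.0, 1.0) for _ in range(m)]

def dual(u, v):
    # D_eps(u,v) = <u,rho1> + <v,rho2> - eps * int exp((u+v-c)/eps) d(rho1 x rho2)
    linear = (sum(ui * ri for ui, ri in zip(u, rho1))
              + sum(vj * rj for vj, rj in zip(v, rho2)))
    penalty = sum(rho1[x] * rho2[y] * math.exp((u[x] + v[y] - c[x][y]) / eps)
                  for x in range(n) for y in range(m))
    return linear - eps * penalty

a = 0.7
# translating (u, v) to (u + a, v - a) leaves D_eps unchanged
assert abs(dual(u, v) - dual([ui + a for ui in u], [vj - a for vj in v])) < 1e-9
```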
\begin{lemma}\label{lemma:dual} Let us consider $\Dep: \Lexp(\rho_1) \times \Lexp(\rho_2)\to\mathbb{R}$ defined as in \eqref{eqn:dualdef}, then \begin{equation}\label{est:optcond} D_{\ep}(u,u^{(c,\ep)}) \geq D_{\ep}(u,v) \qquad \forall v \in \Lexp(\rho_2), \end{equation} \begin{equation}\label{est:optcond2} D_{\ep}(u,u^{(c,\ep)}) = D_{\ep}(u,v) \text{ if and only if } v = u^{(c,\ep)}. \end{equation} In particular we can say that $u^{(c,\ep)} \in {\rm argmax} \{ D_{\ep} ( u,v) \; : \; v \in \Lexp(\rho_2) \}$. \end{lemma} \begin{proof} By Fubini's theorem and equation \eqref{eq:F1}, we have \begin{align*} \Dep(u,v) &= \int_X u(x)d\rho_1(x) + \int_Y v(y)d\rho_2(y) - \ep\int_{X\times Y} e^{\frac{u(x)+v(y)-c(x,y)}{\ep}}d(\rho_1\otimes\rho_2),\\ &= \int_X u(x)d\rho_1(x) + \int_Y v(y)d\rho_2(y) - \ep\int_Y e^{\frac{v(y)}{\ep}}\left(\int_X e^{\frac{u(x)-c(x,y)}{\ep}}d\rho_1\right)d\rho_2,\\ &= \int_X u(x)d\rho_1(x) + \int_Y \left( v(y) - \ep e^{\frac{v(y)-u^{(c,\ep)}(y)}{\ep}} \right) d\rho_2(y). \end{align*} Therefore, for any $v\in \Lexp(\rho_2)$, $ \Dep(u,v) \leq \Dep(u,u^{(c,\ep)})$, since the function $g(t) = t-\ep e^{(t-a)/\ep}$ is strictly concave and attains its maximum at $t=a$. In particular, $D_{\ep}(u,u^{(c,\ep)})= D_{\ep}(u,v)$ if and only if $v = u^{(c,\ep)}$. \end{proof} \begin{lemma}\label{lemma:betterpotentials} Let us consider $u \in \Lexp(\rho_1)$ and $v \in \Lexp(\rho_2)$. Then there exist $u^* \in \Lexp(\rho_1)$ and $v^* \in \Lexp(\rho_2)$ such that \begin{itemize} \item $D_{\ep}(u,v) \leq D_{\ep}(u^*,v^*)$; \item $ \| v^* \|_{\infty} \leq 3\| c\|_{\infty}/2 $; \item $ \| u^*\|_{\infty} \leq 3\| c\|_{\infty}/2 $. \end{itemize} Moreover we can choose $a \in \mathbb{R}$ such that $u^*= (v+a)^{(c,\ep)}$ and $v^*=(u^*)^{(c,\ep)}$.
\end{lemma} \begin{proof} Let us apply Proposition \ref{prop:EstPot} (iii) to $v$ and to $\tilde{u}:=v^{(c,\ep)}$: $$ - \| c\|_{\infty} \leq v^{(c,\ep)} + \lambda_v \leq \|c \|_{\infty} $$ $$ - \| c\|_{\infty} \leq (v^{(c,\ep)})^{(c,\ep)}+ \lambda_{v^{(c,\ep)}} \leq \|c \|_{\infty} $$ Let us also define $\tilde{v}:=(v^{(c,\ep)})^{(c,\ep)}$. Then by Lemma \ref{lemma:dual} we have of course that $D_{\ep}(u,v) \leq D_{\ep}(\tilde{u},\tilde{v})$; now we know that $\Dep(\tilde{u}-a, \tilde{v}+a)=\Dep(\tilde{u}, \tilde{v})$ for any $a \in \mathbb{R}$ and moreover $$ \| \tilde{u}-a \|_{\infty} \leq \|c\|_{\infty} + | a+\lambda_v| \qquad \| \tilde{v}+a \|_{\infty} \leq \|c\|_{\infty} + |\lambda_{v^{(c,\ep)}} -a|.$$ We can now choose $a^*= (\lambda_{v^{(c,\ep)}}- \lambda_v) /2$ and, recalling Lemma \ref{lemma:F1F2} (ii), we can conclude that $u^*=\tilde{u}-a^*$ and $v^*=\tilde{v}+a^*$ satisfy the required bounds. \end{proof} \begin{teo}\label{thm:kanto2Nmax} Let $(X,d_X)$, $(Y,d_Y)$ be Polish spaces, $c:X\times Y\to \mathbb R$ be a Borel bounded cost, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures and $\ep>0$ be a positive number. Consider the problem \begin{equation}\label{eq:dualitySep} \sup \left\{ D_{\ep}(u,v) \; : \; u \in \Lexp(\rho_1) , v \in \Lexp(\rho_2) \right\}. \end{equation} Then the supremum in \eqref{eq:dualitySep} is attained for a unique couple $(u_0,v_0)$ (up to the trivial transformation $(u,v) \mapsto (u+a,v-a)$). In particular we have $$u_0 \in L^{\infty}(\rho_1) \quad \text{ and } \quad v_0 \in L^{\infty}(\rho_2);$$ moreover we can choose the maximizers such that $\|u_0\|_{\infty} ,\|v_0\|_{\infty} \leq \frac 32 \|c\|_{\infty}$. \end{teo} \begin{proof} We are going to show that the supremum in \eqref{eq:dualitySep} is attained. Let $(u_n)_{n\in\mathbb N} \subset \Lexp(\rho_1) $ and $(v_n)_{n\in \mathbb N} \subset \Lexp(\rho_2)$ be maximizing sequences.
Due to Lemma \ref{lemma:betterpotentials}, we can suppose that $u_n\in L^{\infty}(\rho_1)$, $v_n\in L^{\infty}(\rho_2)$ and $\Vert u_n\Vert_{\infty},\Vert v_n\Vert_{\infty} \leq \frac 32 \| c \|_{\infty}$. Then by the Banach-Alaoglu theorem there exist subsequences $(u_{n_k})_{k\in\mathbb N}$ and $(v_{n_k})_{k\in\mathbb N}$ such that $u_{n_k}\rightharpoonup \overline{u}$ and $v_{n_k}\rightharpoonup \overline{v}$ weakly-*. In particular, $u_{n_k}+v_{n_k}-c\rightharpoonup \overline{u}+\overline{v}-c$. First, notice that since $t \mapsto e^t$ is convex, the functional $w \mapsto \int_{X\times Y} e^{w/\ep}\, d(\rho_1\otimes\rho_2)$ is convex and strongly lower semicontinuous, hence weakly lower semicontinuous; therefore \begin{align*} \liminf_{k\to\infty} \int_{X\times Y}e^{\frac{u_{n_k}+v_{n_k}-c}{\ep}}d(\rho_1\otimes\rho_2) \geq \int_{X\times Y}e^{\frac{\overline{u}+\overline{v}-c}{\ep}}d(\rho_1\otimes\rho_2). \end{align*} Moreover, \begin{align*} \sup_{u,v} \Dep(u,v) &= \lim_{k\to\infty}\left\lbrace\int_X u_{n_k}d\rho_1 + \int_Y v_{n_k}d\rho_2 - \ep\int_{X\times Y}e^{\frac{u_{n_k}+v_{n_k}-c}{\ep}}d(\rho_1\otimes\rho_2) \right\rbrace \\ &\leq \lim_{k\to\infty}\left\lbrace\int_X u_{n_k} d\rho_1 + \int_Y v_{n_k} d\rho_2 \right\rbrace - \ep \liminf_{ k \to \infty} \int_{X\times Y}e^{\frac{u_{n_k}+v_{n_k}-c}{\ep}}d(\rho_1\otimes \rho_2) \\ &\leq \int_X \overline{u}d\rho_1 + \int_Y \overline{v}d\rho_2 - \ep\int_{X\times Y}e^{\frac{\overline{u}+\overline{v}-c}{\ep}}d(\rho_1\otimes\rho_2) = \Dep(\overline{u},\overline{v}). \end{align*} So, $(\overline{u},\overline{v})$ is a maximizer for $\Dep$. By construction, we have also that $\overline{u} \in L^{\infty}(\rho_1)$ and $\overline{v} \in L^{\infty}(\rho_2)$. Finally, the strict concavity of $D_{\ep}$ and Lemma \ref{lemma:dual} imply that the maximizer is unique and, in particular, $\overline{v} = \overline{u}^{(c,\ep)}$.
\end{proof} \begin{cor} Let $(X,d_X,\refmx)$, $(Y,d_Y,\refmy)$ be Polish metric measure spaces, $c:X\times Y\to \mathbb{R}$ be a Borel bounded cost function, $\rho_1\in\mathcal P(X)$ and $\rho_2\in\mathcal P(Y)$ be probability measures such that $\operatorname{KL}(\rho_1|\refmx) + \operatorname{KL}(\rho_2|\refmy) < \infty$. Consider the dual functional $\tilde \Dep:\Lexp(\refmx)\times\Lexp(\refmy)\to \mathbb R$, \[ \tilde{\Dep}(u,v) = \int_X u(x)\rho_1(x)d\refmx(x) + \int_Y v(y)\rho_2(y)d\refmy(y) - \ep\int_{X\times Y} e^{\frac{u(x)+v(y)-c(x,y)}{\ep}}d(\refmx(x)\otimes\refmy(y)). \] Then the supremum \[ \sup \left\{ \tilde{\Dep}(u,v) \; : \; u \in \Lexp(\refmx) , v \in \Lexp(\refmy) \right\} \] is attained for a unique couple $(u_0,v_0)$ and in particular we have $$u_0 - \ep\log\rho_1 \in L^{\infty}(\refmx) \quad \text{ and } \quad v_0 -\ep\log\rho_2 \in L^{\infty}(\refmy).$$ \end{cor} \begin{proof} The proof follows by the change of variable $T:(u,v)\mapsto (u -\ep \log \rho_1, v-\ep \log \rho_2)$, which is such that $\tilde \Dep (u,v)= \Dep( T(u,v))+ \ep \operatorname{KL}(\rho_1|\refmx) + \ep \operatorname{KL}(\rho_2|\refmy)$, and Theorem \ref{thm:kanto2Nmax}. Alternatively, one can apply the same arguments as in Theorem \ref{thm:kanto2Nmax}, using the entropic $c$-transform $\mathcal{F}^{(c, \ep)}_{*}$ described in Remark \ref{rmk:change}. \end{proof} In the following proposition an important concept will be that of bivariate transformation. Given a Gibbs measure $\ka$ and two nonnegative functions $a(x)$ and $b(y)$, measurable with respect to $\kappa$, we define the bivariate transformation of $\ka$ through $a$ and $b$ as \begin{equation}\label{eqn:kappa} \kappa(a,b):= a(x) b(y) \cdot \ka, \end{equation} which is still a (possibly infinite) measure.
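On finite spaces the bivariate transformation \eqref{eqn:kappa} is just an entrywise scaling of the Gibbs matrix, and given $b$ one can always choose $a$ so that $\kappa(a,b)$ has first marginal $\rho_1$ — the elementary half-step behind the Schr\"{o}dinger system and the iteration of the next section. A sketch (pure Python, illustrative only; all names are ours):

```python
import math
import random

eps = 0.4
random.seed(3)
n, m = 5, 6
rho1 = [1.0 / n] * n
rho2 = [1.0 / m] * m
cost = [[random.uniform(0.0, 1.0) for _ in range(m)] for _ in range(n)]

# discrete Gibbs measure kappa = exp(-c/eps) * rho1 (x) rho2
kappa = [[math.exp(-cost[x][y] / eps) * rho1[x] * rho2[y] for y in range(m)]
         for x in range(n)]

b = [random.uniform(0.5, 2.0) for _ in range(m)]
# choose a(x) = rho1(x) / int b(y) kappa(x, dy): then kappa(a, b) matches rho1
a = [rho1[x] / sum(b[y] * kappa[x][y] for y in range(m)) for x in range(n)]

gamma = [[a[x] * b[y] * kappa[x][y] for y in range(m)] for x in range(n)]
marg1 = [sum(gamma[x][y] for y in range(m)) for x in range(n)]
assert all(abs(marg1[x] - rho1[x]) < 1e-12 for x in range(n))
```

Alternating this scaling between the two marginals is exactly the IPFP / Sinkhorn iteration studied below.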
\begin{lemma}\label{lem:easydual} Let $\ep>0$ be a positive number, $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $c:X\times Y\to \mathbb{R}$ be a cost function (not necessarily bounded), $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures and let $\kappa$ be as in \eqref{eqn:Gibbs}. Then for every $\gamma \in \Pi(\rho_1, \rho_2)$, $u \in \Lexp(\rho_1) $ and $ v \in \Lexp(\rho_2)$ we have \begin{equation}\label{eqn:uvgamma} \ep \operatorname{KL}(\gamma|\ka) \geq D_{\ep} ( u,v) + \ep, \quad \text{ with equality iff }\gamma =\kappa(e^{u/\ep}, e^{v/\ep}), \end{equation} where $\kappa(\cdot,\cdot)$ is defined as in \eqref{eqn:kappa}. \end{lemma} \begin{proof} First of all we can assume $\gamma \ll \ka$, otherwise the left hand side would be $+ \infty$ and the inequality would be trivially verified; then if we denote (with a slight abuse of notation) by $\gamma(x,y)$ the density of $\gamma$ with respect to $\rho_1 \otimes \rho_2$, we get \begin{align*} \ep \operatorname{KL}(\gamma|\ka) &= \int_{X\times Y}c \, d\gamma + \ep\int_{X\times Y}\gamma\log\gamma \, d\left(\rho_1\otimes\rho_2\right) \\ &= \int_X u \, d\rho_1 + \int_Y v \, d\rho_2 + \int_{X\times Y}\left(\ep\log\gamma+c-u-v\right)\cdot \gamma \, d\left(\rho_1\otimes\rho_2\right) \\ &\geq \int_X u \, d\rho_1 + \int_Y v \, d\rho_2- \ep\int_{X\times Y} e^{\frac{u+v-c}{\ep}}d\left(\rho_1\otimes\rho_2\right) + \ep \\ &= D_{\ep}(u,v) + \ep, \end{align*} where we used the pointwise inequality $ts + \ep t\ln t - \ep t \geq -\ep e^{-s/\ep}$ (with $t=\gamma$ and $s=c-u-v$), which has equality iff $t = e^{-s/\ep}$, together with the fact that $\int_{X\times Y} \gamma \, d(\rho_1\otimes\rho_2)=1$. Notice in particular that, as we wanted, there is equality iff $\gamma = e^{(u(x)+v(y)-c(x,y))/\ep} \cdot \rho_1 \otimes \rho_2 = \kappa(e^{u/\ep}, e^{v/\ep}) $.
\end{proof} \begin{prop}[Equivalence and complementarity condition]\label{prop:equiv_comp} Let $\ep>0$ be a positive number, $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $c:X\times Y\to \mathbb{R}$ be a bounded cost function, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures and let $\kappa$ be as in \eqref{eqn:Gibbs}. Then given $u^* \in \Lexp(\rho_1) , v^* \in \Lexp(\rho_2)$, the following are equivalent: \begin{enumerate} \item \emph{(Maximizers)} $u^*$ and $v^*$ are maximizing potentials for \eqref{eq:dualitySep}; \item \emph{(Maximality condition)} $\mathcal{F}^{(c, \ep)} (u^*)=v^*$ and $\mathcal{F}^{(c, \ep)}(v^*)=u^*$; \item \emph{(Schr\"{o}dinger system)} let $\gamma^*=\kappa(e^{u^*/\ep}, e^{v^*/\ep})=e^{(u^*(x)+v^*(y)-c(x,y))/\ep} \cdot \rho_1 \otimes \rho_2$, then $\gamma^* \in \Pi(\rho_1, \rho_2)$; \item \emph{(Duality attainment) }$\OTep(\rho_1,\rho_2) = D_{\ep} (u^*,v^*) +\ep$. \end{enumerate} Moreover in those cases $\gamma^*$, as defined in 3, is also the (unique) minimizer for the problem \eqref{intro:mainKL}. \end{prop} \begin{proof} We will prove $ 1 \Rightarrow 2 \Rightarrow 3 \Rightarrow 4 \Rightarrow 1$. \begin{itemize} \item[1. $\Rightarrow$ 2.] This is a straightforward application of Lemma \ref{lemma:dual}. In fact thanks to \eqref{est:optcond} we have $D_{\ep} (u^*, \mathcal{F}^{(c, \ep)}(u^*)) \geq D_{\ep}(u^*,v^*)$; however, by the maximality of $u^*,v^*$ we have also $D_{\ep}(u^*,v^*) \geq D_{\ep} (u^*, \mathcal{F}^{(c, \ep)}(u^*)) $, and so we conclude that $D_{\ep}(u^*,\mathcal{F}^{(c, \ep)}(u^*))=D_{\ep}(u^*,v^*)$. Thanks to \eqref{est:optcond2} we then deduce that $v^*=\mathcal{F}^{(c, \ep)}(u^*)$. We can follow a similar argument to prove that conversely $u^*=\mathcal{F}^{(c, \ep)}(v^*)$. \item[2. $\Rightarrow$ 3.]
A simple calculation shows that for every $u \in \Lexp(\rho_1) $ and $ v \in \Lexp(\rho_2)$ we have $(\pi_1)_{\sharp} ( \kappa(e^{u/\ep}, e^{v /\ep})) = e^{(u-v^{(c, \ep)})/\ep}\rho_1$ and similarly $(\pi_2)_{\sharp} ( \kappa(e^{u/\ep}, e^{v /\ep})) = e^{(v-u^{(c, \ep)})/\ep}\rho_2$. So if we assume 2. it is immediate to see that $\gamma^* = \kappa(e^{u^*/\ep}, e^{v^* /\ep}) \in \Pi (\rho_1, \rho_2)$. \item[3. $\Rightarrow$ 4.] Since $\gamma^* \in \Pi(\rho_1, \rho_2)$, from Lemma \ref{lem:easydual} we have \begin{align}\label{eqn:ineq1compl}\ep \operatorname{KL}(\gamma^*|\ka) &\geq D_{\ep} ( u,v) + \ep \qquad &\forall u \in \Lexp(\rho_1), v \in \Lexp(\rho_2) \\ \label{eqn:ineq2compl} \ep \operatorname{KL}(\gamma|\ka) &\geq D_{\ep} ( u^*,v^*) + \ep &\forall \gamma \in \Pi(\rho_1,\rho_2). \end{align} Moreover, since by definition $\gamma^*= \kappa(e^{u^*/\ep}, e^{v^*/\ep})$, the equality case in Lemma \ref{lem:easydual} assures us also that \begin{equation}\label{eqn:ineq3compl} \ep \operatorname{KL}(\gamma^*|\ka) = D_{\ep} ( u^*,v^*) + \ep. \end{equation} Putting now \eqref{eqn:ineq1compl}, \eqref{eqn:ineq2compl} and \eqref{eqn:ineq3compl} together we obtain $$ \ep \operatorname{KL}(\gamma|\ka) \geq D_{\ep} ( u^*,v^*) + \ep = \ep \operatorname{KL}(\gamma^*|\ka) \geq D_{\ep} ( u,v) + \ep;$$ in particular we have $\ep \operatorname{KL}(\gamma|\ka) \geq \ep\operatorname{KL}(\gamma^*|\ka)$, which grants us that $\gamma^*$ is a minimizer for \eqref{intro:mainKL} and that in particular $\OTep(\rho_1,\rho_2) = \ep\operatorname{KL}(\gamma^*|\ka) = D_{\ep} ( u^*,v^*)+\ep$. \item[4. $\Rightarrow$ 1.]
Looking at \eqref{eqn:uvgamma} and minimizing in $\gamma$ we find that $$\OTep(\rho_1,\rho_2) \geq D_{\ep} ( u,v) + \ep \qquad \forall u \in \Lexp(\rho_1), v \in \Lexp(\rho_2);$$ using that by hypothesis $\OTep(\rho_1,\rho_2)= D_{\ep} ( u^*,v^*) + \ep$, we get that $$D_{\ep} ( u^*,v^*) \geq D_{\ep} ( u,v) \qquad \forall u \in \Lexp(\rho_1), v \in \Lexp(\rho_2),$$ that is, $u^*,v^*$ are maximizing potentials for \eqref{eq:dualitySep}. \end{itemize} Notice that in proving $3 \Rightarrow 4$ we incidentally proved that $\gamma^*$ is the (unique) minimizer. \end{proof} Finally, we conclude this section by giving a short proof of the duality between \eqref{intro:mainKL} and \eqref{eq:dualitySep}. \begin{prop}[General duality]\label{prop:duality2N} Let $\ep>0$ be a positive number, $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $c:X\times Y\to \mathbb{R}$ be a bounded cost function, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures. Then duality holds \[ \OTep(\rho_1,\rho_2) = \max \left\{ D_{\ep}(u,v) \; : \; u \in \Lexp(\rho_1) , v \in \Lexp(\rho_2) \right\} +\ep. \] \end{prop} \begin{proof} From Theorem \ref{thm:kanto2Nmax} we have the existence of a maximizing pair of potentials $u^*,v^*$. In particular we have $$ \max \left\{ D_{\ep}(u,v) \; : \; u \in \Lexp(\rho_1) , v \in \Lexp(\rho_2) \right\} +\ep = D_{\ep} (u^*,v^*) +\ep;$$ this, together with point 4 in Proposition \ref{prop:equiv_comp} (which is true since 1 holds true), proves the duality. \end{proof} By a similar argument, one can show that the duality holds also for the functional $\Sep(\rho_1,\rho_2;\refmx,\refmy)$. \begin{cor} Let $\ep>0$ be a positive number, $(X,d_X,\refmx)$ and $(Y,d_Y,\refmy)$ be Polish metric measure spaces, $c:X\times Y\to \mathbb{R}$ be a bounded cost function, $\rho_1 \in \mathcal P(X)$, $\rho_2 \in \mathcal P(Y)$ be probability measures.
Then duality holds \[ \Sep(\rho_1,\rho_2;\refmx,\refmy) = \max \left\lbrace \tilde{D}_{\ep}(u,v) : u\in\Lexp(\refmx), v\in\Lexp(\refmy) \right\rbrace + \ep. \] \end{cor} \section{Convergence of the Sinkhorn / IPFP Algorithm} \label{sec:convergenceIPFP} \quad \quad In this section, we give an alternative proof for the convergence of the Sinkhorn algorithm. The aim of the Iterative Proportional Fitting Procedure (IPFP, also known as the Sinkhorn algorithm) is to construct the measure $\gamma^{\ep}$ realizing the minimum in \eqref{intro:mainKL} by alternately matching one marginal at a time to the target marginals $\rho_1$ and $\rho_2$: this leads to the construction of the IPFP sequences $(a^n)_{n\in\mathbb N}$ and $(b^n)_{n\in\mathbb N}$, defined in \eqref{eq:IPFPiteration}. We now look at the new variables $u_n := \ep \ln(a^n)$ and $v_n := \ep \ln(b^n)$: we can then rewrite the system \eqref{eq:IPFPiteration} as \[ \begin{array}{lcl} \displaystyle v_n(y)/\ep &=& - \log\left(\int_X k(x,y)e^{ \frac{u_{n-1}(x)}{\ep}}d\rho_1\right) \\ \displaystyle u_n(x)/\ep &=& - \log\left(\int_Y k(x,y)e^{ \frac{v_{n}(y)}{\ep}}d\rho_2\right) \end{array}. \] In other words, using the $(c,\ep)$-transform and the expression of $k$ given in \eqref{eqn:Gibbs}, $v_n = (u_{n-1})^{(c,\ep)}$ and $u_n = (v_n)^{(c,\ep)}$. \begin{teo}\label{thm:convIPFP} Let $(X,d_X)$ and $(Y,d_Y)$ be Polish metric spaces, $\rho_1 \in \mathcal P(X)$ and $\rho_2\in\mathcal P(Y)$ be probability measures and $c:X\times Y\to \mathbb R$ be a Borel bounded cost. If $(a^n)_{n\in\mathbb N}$ and $(b^n)_{n\in\mathbb N}$ are the IPFP sequences defined in \eqref{eq:IPFPiteration}, then there exists a sequence of positive real numbers $(\lambda^n)_{n \in \mathbb N}$ such that \[ a^n/\lambda^n\to a \text{ in } L^p(\rho_1) \quad \text{ and } \quad \lambda^nb^n \to b \text{ in } L^p(\rho_2), \quad 1\leq p <+\infty, \] where $(a,b)$ solve the Schr\"{o}dinger problem.
In particular, the sequence $\gamma^n = a^nb^n k$, where $k$ is defined in \eqref{eqn:Gibbs}, converges in $L^p(\rho_1\otimes\rho_2)$ to $\gamma^{\ep}_{opt}$, the density of the minimizer of \eqref{intro:mainKL} with respect to $\rho_1 \otimes \rho_2$, for any $1\leq p <+\infty$. \end{teo} \begin{proof} Let $(a^n)_{n\in\mathbb N}$ and $(b^n)_{n\in\mathbb N}$ be the IPFP sequences defined in \eqref{eq:IPFPiteration}. Let us write $a^n = e^{u_n/\ep}$, $b^n = e^{v_n/\ep}$; then, in these new variables, we noticed that the iteration can be written with the help of the $(c, \ep)$-transform: \[ \begin{cases} v_{2n+1} = (u_{2n})^{(c,\ep)} \\ u_{2n+1} = u_{2n} \\ \end{cases}, \quad \begin{cases} v_{2n+2} = v_{2n+1} \\ u_{2n+2} = (v_{2n+1})^{(c,\ep)} \\ \end{cases}. \] Notice that, as soon as $n\geq2$, we have $u_n \in L^{\infty}(\rho_1)$ and $v_n \in L^{\infty}(\rho_2)$ thanks to the regularizing properties of the $(c,\ep)$-transform proven in Lemma \ref{lemma:F1F2} and, moreover, thanks to \eqref{est:optcond} and Proposition \ref{prop:duality2N} we have \[ \Dep(u_n,v_n)\leq \Dep(u_{n+1},v_{n+1}) \leq \dots \leq \OTep(\rho_1,\rho_2) - \ep. \] Then, by the same argument used in the proof of Lemma \ref{lemma:betterpotentials}, it is easy to prove that for each $n \geq 2$ there exists $\ell_n \in \mathbb R$ such that $ \|u_n - \ell_n \|_{\infty}, \|v_n +\ell_n\|_{\infty} \leq \frac 32 \|c\|_{\infty}$. Now, thanks to Proposition \ref{prop:EstPot} we have that the sequences $u_n - \ell_n$ and $v_n +\ell_n$ are precompact in every $L^p$, for $1 \leq p < \infty$; in particular let us consider any limit point $u,v$. Then we have a subsequence $u_{n_k},v_{n_k}$ such that $u_{n_k}\to u$, $v_{n_k}\to v$ in $L^{p}$ and $u_{n_k+1} = (v_{n_k})^{(c,\ep)}$ (or the opposite).
Using the continuity in $L^p$ of the $(c,\ep)$-transform, and the fact that an increasing and bounded sequence has vanishing increments, we obtain \[ \Dep(v^{(c,\ep)},v) - \Dep(u,v) = \lim_{n_k\to\infty} \Dep(u_{n_k+1},v_{n_k+1}) - \Dep({u_{n_k}},v_{n_k}) = 0. \] In particular, by \eqref{est:optcond2}, we have $u=v^{(c,\ep)}$. Analogously, we obtain that $v=u^{(c,\ep)}$ by doing the same calculation using the potentials $(u_{n_k+2}, v_{n_k+2})$ and then \[ \Dep(u,u^{(c,\ep)}) - \Dep(u,v) = \lim_{n_k\to\infty} \Dep(u_{n_k+2},v_{n_k+2}) - \Dep({u_{n_k}},v_{n_k}) = 0. \] Now we can use Proposition \ref{prop:equiv_comp}: the implication $2 \Rightarrow 1$ proves that $(u,v)$ is a maximizer\footnote{in order to prove that there is a unique limit point at this stage, it is sufficient to take $\ell_n$ that minimizes $\| u_n - \ell_n - u\|_2$.}. In particular $a=e^{u/\ep}$, $b=e^{v/\ep}$ are solutions of the Schr\"{o}dinger equation and, taking $\lambda^n=e^{\ell_n/\ep}$, we get the convergence result for $a^n$ and $b^n$, using that the exponential is Lipschitz on bounded domains. In order to prove also the convergence of the plans, it is sufficient to note that for free we have $u_n+v_n \to u+v$ in $L^p(\rho_1 \otimes \rho_2)$, since now the translations cancel. Again, the fact that the exponential is Lipschitz on bounded domains and the boundedness of $k$ let us conclude that in fact $\gamma^n \to \gamma$ in $L^p(\rho_1 \otimes \rho_2)$ for every $1 \leq p < \infty$. \end{proof} \begin{oss} Notice that, under additional hypotheses on the smoothness of the cost function $c$, we can use precompactness of the sequences $u_n-\ell_n$ and $v_n +\ell_n$ in stronger topologies, obtaining stronger convergence. For example, if $c$ is uniformly continuous we get uniform convergence instead of strong $L^p$ convergence.
\end{oss} \section{Multi-marginal Schr\"odinger Problem} \label{sec:multimarginal} \quad In this section we generalize the results obtained previously to the Schr\"odinger problem with more than two marginals, including a proof of convergence of the Sinkhorn algorithm in the several-marginals case. We consider $(X_1,d_1), \dots, (X_N,d_N)$ Polish spaces, $\rho_1,\dots,\rho_N$ probability measures respectively in $X_1,\dots,X_N$ and $c:X_1\times\dots\times X_N\to \mathbb R$ a bounded cost. Define $\rho^N = \rho_1\otimes\dots\otimes\rho_N$ to be the product measure. For every $\gamma \in \mathcal{M}(X_1\times\dots\times X_N)$, the relative entropy of $\gamma$ with respect to the \textit{Gibbs Kernel} $\mathcal K = k^N(x_1,\dots,x_N)\rho^N = e^{-\frac{c(x_1,\dots,x_N)}{\ep}}d\rho_1\otimes\dots\otimes d\rho_N$ is defined by \begin{equation}\label{eq:defKL} \displaystyle\operatorname{KL}^{N}(\gamma|\mathcal K) =\begin{cases} \displaystyle\int_{X_1\times\dots\times X_N}\gamma\log\left(\frac{\gamma}{k^N}\right)d\rho^N \qquad & \text{ if }\gamma \ll \rho^N \\ +\infty & \text{ otherwise.} \end{cases} \end{equation} An element $\gamma \in \Pi(\rho_1,\dots,\rho_N)$ is called a coupling and is a probability measure on the product space $X_1\times \dots\times X_N$ having the $i$-th marginal equal to $\rho_i$, i.e. $\gamma \in \mathcal{P}(X_1\times\dots\times X_N)$ such that $(e_i)_{\sharp}\gamma = \rho_i, \, \forall i \in \lbrace 1,\dots, N\rbrace$. The Multi-marginal Schr\"odinger problem is defined as the infimum of the Kullback-Leibler divergence $\operatorname{KL}^{N}(\gamma|\mathcal K)$ over the couplings $\gamma \in \Pi(\rho_1,\dots,\rho_N)$ \begin{equation}\label{eq:primalSchrMult} \OTNep(\rho_1,\dots,\rho_N) = \inf_{\gamma\in\Pi(\rho_1,\dots,\rho_N)}\ep\operatorname{KL}^{N}(\gamma|\mathcal K). \end{equation} Optimal Transport problems with several marginals, as well as their entropic regularizations, appear, for instance, in economics, as in the works of G.
Carlier and I. Ekeland \cite{CarEke} and of P. Chiappori, R. McCann, and N. Nesheim \cite{ChiMcCNes}; in imaging (e.g. \cite{CutDou2014,SolPeyCut2015}); and in theoretical chemistry (e.g. \cite{DMaGerNenGorSei,GerGroGor19, GorSeiVig}). A first important instance of this kind of problem goes back to Brenier's generalised solutions of the Euler equations for incompressible fluids \cite{Bre89, Bredual93, BreMin99}. We point out that the entropic regularization of the multi-marginal transport problem leads to a problem of multi-dimensional matrix scaling \cite{FraLor89,Rag84}. An important example in this setting is the Entropy-Regularized \textit{Wasserstein Barycenter} introduced by M. Agueh and G. Carlier in \cite{AguCar2011}. The Wasserstein Barycenter defines a non-linear interpolation between several probability measures, generalizing the Euclidean barycenter, and turns out to be equivalent to a multi-marginal problem with the Gangbo--\'Swie\c{c}h cost \cite{GaSw}, that is, $c(x_1,\dots,x_N) = \sum_{i<j}\frac{1}{2}\Vert x_i-x_j\Vert^2$. In the following subsection we extend to the multi-marginal setting the notions and properties of the Entropy $c$-transform developed in Section \ref{sec:RegularityEntr}. As a consequence, we generalise the proof of convergence of the IPFP. \subsection{Entropy-Transform} \quad Analogously to definitions \eqref{eq:F1} and \eqref{eq:F2} in Section \ref{sec:bounded}, we define the following Entropy $c$-transforms $\hat{u}^{(N,c,\ep)}_1,\hat{u}^{(N,c,\ep)}_2,\dots,\hat{u}^{(N,c,\ep)}_N$. Notice that the notation $\hat{u_i}$ stands for $\hat{u_i}= (u_1,\dots,u_{i-1},u_{i+1},\dots,u_N)$. \begin{deff}[Entropic $c$-transform or $(c,\ep)$-transform] Let $i\in \lbrace 1,\dots, N\rbrace$ and $\ep>0$ be a positive number. Consider $(X_i,d_{X_i})$ Polish spaces, $\rho_i \in \mathcal P(X_i)$ probability measures and let $c$ be a bounded measurable cost on $X_1 \times \dots \times X_N$.
For every $i$, the Entropy $c$-transform $\hat{u}^{(N,c,\ep)}_i$ is defined by the functional $\mathcal{F}_i^{(N,c, \ep)}: \prod_{j \neq i} \Lexp(\rho_{j})\to L^0(\rho_i)$, \begin{equation} \hat{u}^{(N,c,\ep)}_i (x_i)=\mathcal{F}_i^{(N,c, \ep)}( \hat u_i ) (x_{i}) = - \ep\log\left(\int_{\prod_{j\neq i}X_j}e^{\frac{\sum_{j\neq i}u_j(x_j)-c(x_1,\dots,x_N)}{\ep}} d\left(\otimes_{j\neq i}\rho_j\right)\right). \end{equation} In particular, we have $\hat{u}^{(N,c,\ep)}_i\in \Lexp( \rho_{i})$. For $u_i \in \Lexp(\rho_i)$, we denote by $\lambda_{u_i}$ the constant \[ \lambda_{u_i} = \ep\log\left(\int_{X_i}e^{\frac{u_i(x_i)}{\ep}} d\rho_i\right). \] \end{deff} It is also possible to reduce ourselves to the case $N=2$: notice that if one considers the spaces $X_i $ and $Y_i=\Pi_{j \neq i} X_j$, then $c$ is also a natural cost function on $X_i \times Y_i$. We can then consider $\rho_i$ as a measure on $X_i$ and $\otimes_{j \neq i} \rho_j$ as a measure on $Y_i$. In this way we are able to construct an entropic $c$-transform $\mathcal{F}^{(c, \ep)}$ associated to this $2$-marginal problem and it is clear that $$ \mathcal{F}_i^{(N,c, \ep)} ( \hat u_i ) = \mathcal{F}^{(c, \ep)} \Bigl( \sum_{j \neq i} u_j \Bigr).$$ The following lemma extends Lemma \ref{lemma:F1F2} to the multi-marginal setting. We omit the proof since it follows by similar arguments. \begin{lemma}\label{lemma:entropytransbound} For every $i\in \lbrace 1,\dots,N\rbrace$, the Entropy $c$-transform $\hat{u}^{(N,c,\ep)}_i$ is well defined. Moreover, \begin{itemize} \item[(i)] $\hat{u}^{(N,c,\ep)}_i \in L^{\infty}\left(\rho_i\right)$.
In particular, \begin{align*} -\| c \|_{\infty}- \ep \log \left( \int_{\prod_{j\neq i} X_j} e^{ \frac { \sum_{j\neq i}u_j (x_j)}{\ep}} \, d \left(\otimes_{j\neq i}\rho_j\right)\right) &\leq \hat{u}^{(N,c,\ep)}_i(x_{i}) \leq \\ &\leq \| c\|_{\infty} -\ep\log\left(\int_{\prod_{j\neq i} X_j} e^{ \frac { \sum_{j\neq i}u_j (x_j)}{\ep}} \, d \left(\otimes_{j\neq i}\rho_j\right)\right). \end{align*} \item[(ii)] $\hat{u}^{(N,c,\ep)}_i \in \Lexp\left(\rho_i\right)$. \item[(iii)] \begin{equation} \label{eqn:lambdau} | \hat{u}^{(N,c,\ep)}_i (x_i)+ \sum_{j \neq i} \lambda_{u_j} | \leq \| c \|_{\infty}. \end{equation} \item[(iv)] if $c$ is $L$-Lipschitz (resp. $\omega$-continuous), then $\hat{u}^{(N,c,\ep)}_i$ is $L$-Lipschitz (resp. $\omega$-continuous); \item[(v)] if $\vert c\vert \leq M$, then $\operatorname{osc}(\hat{u}^{(N,c,\ep)}_i)\leq 2M$ and $\mathcal{F}_i^{(N,c, \ep)} : \prod_{j \neq i} L^{\infty}(\rho_j)\to L^p(\rho_i)$ for $i=1, \ldots, N$ are compact operators for every $1 \leq p < \infty$. \end{itemize} \end{lemma} \subsection{Entropy-Kantorovich Duality} \quad \quad We introduce the dual functional $D^N_{\ep}:\Lexp(\rho_1)\times\dots\times \Lexp(\rho_N)\to[-\infty,+\infty)$, \begin{equation}\label{eqn:defDN} D^N_{\ep}(u_1,\dots,u_N) = \sum^N_{i=1}\int_{X_i}u_id\rho_i - \ep\int_{X_1\times\dots\times X_N}e^{\frac{\sum^N_{i=1}u_i(x_i)-c(x_1,\dots,x_N)}{\ep}}d\left(\rho_1\otimes\dots\otimes\rho_N\right). \end{equation} In the sequel we will use the invariance by translation of the dual problem, and thus we introduce the following projection operator: \begin{lemma}\label{lem:P} Let us consider the operator $P: \prod_{i=1}^N L^{\infty}( \rho_i) \to \prod_{i=1}^N L^{\infty}( \rho_i) $ defined as $$P_i(u) = \begin{cases} u_i - \lambda_{u_i} \qquad \qquad &\text{ if } i=1, \ldots, N-1 \\ u_i+ \sum_{j=1}^{N-1} \lambda_{u_j} & \text{ if } i=N.
\end{cases}$$ Then the following properties hold \begin{itemize} \item[(i)] $D^N_{\ep}(P(u))=D^N_{\ep} (u)$; \item[(ii)] $\| P_i(u)\|_{\infty} \leq \operatorname{osc} ( u_i) + |\sum_{j=1}^N \lambda_{u_j}| $, for all $i=1, \ldots, N$; \item[(iii)] let $v=P(u)$. Then $u_i= \mathcal{F}_i^{(N,c, \ep)} ( \hat u_i )$ if and only if $v_i= \mathcal{F}_i^{(N,c, \ep)} ( \hat v_i )$. \end{itemize} \end{lemma} \begin{proof} \begin{itemize} \item[(i)] In order to prove $D^N_{\ep}(P(u))=D^N_{\ep} (u)$ we first observe that $$ \sum_{i=1}^N P_i ( u ) (x_i) =u_N(x_N) + \sum_{i=1}^{N-1} \lambda_{u_i} + \sum_{i=1}^{N-1} (u_i(x_i) - \lambda_{u_i} ) = \sum_{i=1}^N u_i(x_i).$$ In particular we have (here we denote $X=X_1 \times \cdots \times X_N$) \begin{align*} D^N_{\ep}(P(u)) &= \sum^N_{i=1}\int_{X_i}P_i(u)d\rho_i - \ep\int_{X_1\times\dots\times X_N}e^{\frac{\sum^N_{i=1}P_i(u)(x_i)-c(x_1,\dots,x_N)}{\ep}}d\left(\rho_1\otimes\dots\otimes\rho_N\right) \\ & = \int_{X}\sum^N_{i=1} P_i(u) (x_i) \, d \rho^N - \ep\int_{X}e^{\frac{\sum^N_{i=1}P_i(u)(x_i)-c(x_1,\dots,x_N)}{\ep}}d \rho^N \\ &= \int_{X}\sum^N_{i=1} u_i (x_i) d \rho^N - \ep\int_{X}e^{\frac{\sum^N_{i=1}u_i(x_i)-c(x_1,\dots,x_N)}{\ep}}d \rho^N= D^N_{\ep}(u) \end{align*} \item[(ii)] The inequality is nontrivial only if $u_i \in L^{\infty}(\rho_i)$. In this case obviously we have $\inf u_i \leq \lambda_{u_i} \leq \sup u_i$ and in particular $$- \operatorname{osc}(u_i) = \inf u_i - \sup u_i \leq u_i(x_i) - \lambda_{u_i} \leq \sup u_i - \inf u_i = \operatorname{osc}(u_i),$$ that is $\| u_i - \lambda_{u_i}\|_{\infty} \leq \operatorname{osc}(u_i)$. This proves already the bound for $i <N$; for $i=N$ we have, letting $\lambda= \sum_{i=1}^N \lambda_{u_i}$, $$\| P_N(u ) \|_{\infty} = \| u_N - \lambda_{u_N} + \sum_{i=1}^N \lambda_{u_i}\|_{\infty} \leq \| u_N - \lambda_{u_N}\|_{\infty} + | \lambda | \leq \operatorname{osc}(u_N) + | \lambda | $$ \item[(iii)] This is obvious from the fact that $\mathcal{F}_i^{(N,c, \ep)} \bigl( (u_j - \lambda_{u_j})_{j\neq i} \bigr)= \mathcal{F}_i^{(N,c, \ep)}(\hat u_i) + \sum_{j \neq i} \lambda_{u_j}$.
\end{itemize} \end{proof} This projection operator allows us to generalize Lemma \ref{lemma:betterpotentials}: \begin{lemma}\label{lemma:betterpotentials_multi} Let us consider $u_i \in \Lexp(\rho_i)$, for every $i=1, \ldots, N$. Then there exist $u_i^* \in \Lexp(\rho_i)$ for $i=1, \ldots , N$ such that \begin{itemize} \item $D^N_{\ep}(u_1, \ldots, u_N) \leq D^N_{\ep}(u_1^*,\ldots, u_N^*)$; \item $ \| u_i^* \|_{\infty} \leq 3\| c\|_{\infty} $ for every $i=1, \ldots, N$. \end{itemize} \end{lemma} \begin{proof} Let us construct the following sequence of potentials: \[ \begin{cases} u_1^{1} = \mathcal{F}_1^{(N,c, \ep)} \bigl(\,\widehat{u_1}\,\bigr) \\ u_2^{1} = u_2 \\ u_3^{1} = u_3 \\ \, \cdots \\ u_N^{1} = u_N \\ \end{cases}, \quad \begin{cases} u^{2}_1 = u_1^{1} \\ u_2^{2} = \mathcal{F}_2^{(N,c, \ep)} \bigl(\,\widehat{u^{1}_2}\,\bigr) \\ u^{2}_3 = u_3^{1} \\ \, \cdots \\ u^{2}_N = u_N^{1} \\ \end{cases}, \dots \, , \quad \begin{cases} u_1^{N} = u_1^{N-1} \\ u_2^{N} = u_2^{N-1} \\ u_3^{N} = u_3^{N-1} \\ \, \cdots \\ u^{N}_N = \mathcal{F}_N^{(N,c, \ep)} \bigl(\,\widehat{u^{N-1}_N}\,\bigr). \\ \end{cases} \] Then let us consider $u^*=P(u^N)$. First of all we notice that, using the multi-marginal analogue of Lemma \ref{lemma:dual}, we have $$D_{\ep}^N(u_1, \ldots, u_N) \leq D_{\ep}^N(u^1_1, \ldots, u^1_N) \leq \cdots \leq D_{\ep}^N(u^N_1, \ldots, u^N_N)= D_{\ep}^N(u^*_1, \ldots, u^*_N).$$ Then it is clear by construction that for every $i=1, \ldots, N$ we have $u^N_i=u^i_i$ and in particular, by Lemma \ref{lemma:entropytransbound} (v), we have $\operatorname{osc}(u^N_i) \leq 2\| c \|_{\infty}$. Moreover, thanks to \eqref{eqn:lambdau} it is easy to see that $ | \sum_i \lambda_{u_i^N}| \leq \| c\|_{\infty}$.
Now we can use Lemma \ref{lem:P} (ii) to conclude that in fact $\| u^*_i\|_{\infty} \leq 3 \| c \|_{\infty}.$ \end{proof} Similarly to Theorem \ref{thm:kanto2Nmax}, Proposition \ref{prop:equiv_comp} and \ref{prop:duality2N}, the next theorem and the following proposition extend the existence of a maximizer and the Entropic-Kantorovich duality to the multi-marginal case, along with the complementarity conditions. Since the proofs follow the same lines as in the case $N=2$, without major changes, we omit them. \begin{teo}\label{thm:dualNmarg} For every $i\in\lbrace 1,\dots,N\rbrace$, let $(X_i,d_i)$ be a Polish metric space, $\rho_i \in \mathcal P(X_i)$ be a probability measure and $c:X_1\times\dots\times X_N\to \mathbb R$ be a bounded cost function. Then for every $\ep>0$, \begin{itemize} \item[(i)] The dual function $D^N_{\ep}$ is well defined on its definition domain and moreover \begin{equation}\label{est:optcondmm} D^N_{\ep}(\hat{u}^{(N,c,\ep)}_1, \hat{u_1}) \geq D^N_{\ep}(u_1,\dots,u_N), \qquad \forall \, u_i \in \Lexp(\rho_i), \end{equation} \begin{equation}\label{est:optcond2mm} D^N_{\ep}(\hat{u}^{(N,c,\ep)}_1,\hat{u_1}) = D^N_{\ep}(u_1,\dots,u_N) \text{ if and only if } u_1 = \hat{u}^{(N,c,\ep)}_1. \end{equation} \item[(ii)] The supremum is attained, up to trivial transformations, for a unique $N$-tuple $(u^0_1,\dots,u^0_N)$ and in particular we have $$u^0_i\in L^{\infty}(\rho_i) ,\quad \forall i\in\left\lbrace 1,\dots,N \right\rbrace. $$ Moreover, if we consider $\gamma^{0,N} = e^{ (\sum_i u^0_i(x_i) - c ) / \ep } \rho^N$, then $\gamma^{0,N}$ is the minimizer of \eqref{eq:primalSchrMult}. \item[(iii)] Duality holds: \[ \OTNep(\rho_1,\dots,\rho_N) = \sup\left\{ D^N_{\ep}(u_1,\dots,u_N) \; : \; u_i \in \Lexp(\rho_i), i \in \left\lbrace 1,\dots, N \right\rbrace \right\} + \ep. \] \end{itemize} \end{teo} Finally, the following result extends the main results of the previous sections to the multi-marginal case.
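The one-coordinate improvement property \eqref{est:optcondmm} and the oscillation bound of Lemma \ref{lemma:entropytransbound} can be checked numerically on discrete marginals. The following NumPy sketch is purely illustrative and not part of the proofs; all variable names are ours, the measures are finite probability vectors, and the $(N,c,\ep)$-transform is evaluated in the log domain for stability.

```python
import numpy as np

rng = np.random.default_rng(1)
eps = 0.7
sizes = (3, 4, 2)                       # three finite marginals
N = len(sizes)
rhos = [rng.random(n) for n in sizes]
rhos = [r / r.sum() for r in rhos]      # probability vectors rho_i
C = rng.uniform(-1.0, 1.0, sizes)       # bounded cost, ||c||_inf <= 1

def log_weights(u):
    """S[x] = (sum_j u_j(x_j) - c(x))/eps + sum_j log rho_j(x_j)."""
    S = -C / eps
    for j in range(N):
        shape = [1] * N
        shape[j] = -1
        S = S + (u[j] / eps + np.log(rhos[j])).reshape(shape)
    return S

def D_eps(u):
    """Discrete dual functional D^N_eps(u_1, ..., u_N)."""
    lin = sum(u[j] @ rhos[j] for j in range(N))
    return lin - eps * np.exp(log_weights(u)).sum()

def transform(i, u):
    """(N,c,eps)-transform of the i-th coordinate, via log-sum-exp."""
    shape = [1] * N
    shape[i] = -1
    # remove the u_i and rho_i contributions before summing out the others
    S = log_weights(u) - (u[i] / eps + np.log(rhos[i])).reshape(shape)
    axes = tuple(j for j in range(N) if j != i)
    m = S.max(axis=axes, keepdims=True)
    return -eps * (np.log(np.exp(S - m).sum(axis=axes)) + m.reshape(-1))

u = [rng.normal(size=n) for n in sizes]
for i in range(N):
    v = list(u)
    v[i] = transform(i, u)
    # replacing one potential by its transform never decreases D^N_eps
    assert D_eps(v) >= D_eps(u) - 1e-8
    # oscillation bound: osc of the transform is at most 2 * ||c||_inf
    assert np.ptp(v[i]) <= 2 * np.abs(C).max() + 1e-9
```

The improvement assertion reflects the fact that $\mathcal{F}_i^{(N,c,\ep)}$ maximizes $D^N_{\ep}$ in the $i$-th coordinate, the other potentials being fixed.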
\begin{prop}[Equivalence and complementarity condition]\label{prop:equiv_comp_multi} Let $\ep>0$ and, for every $i\in\lbrace 1,\dots,N\rbrace$, let $(X_i,d_i)$ be a Polish metric space, $\rho_i \in \mathcal P(X_i)$ be a probability measure and $c:X_1\times\dots\times X_N\to \mathbb R$ be a bounded cost function. Then, given $u_i^* \in \Lexp(\rho_i) $ for every $i=1, \ldots, N$, the following are equivalent: \begin{enumerate} \item \emph{(Maximizers)} $u_1^*, \ldots, u_N^*$ are maximizing potentials for \eqref{eqn:defDN}; \item \emph{(Maximality condition)} $\mathcal{F}_i^{(N,c, \ep)} (\hat{u_i^*})=u_i^*$ for every $i=1, \ldots, N$; \item \emph{(Schr\"{o}dinger system)} let $\gamma^*=e^{(\sum_i u_i^*(x_i)-c)/\ep} \cdot \rho^N $, then $\gamma^* \in \Pi(\rho_1, \ldots, \rho_N)$; \item \emph{(Duality attainment)} $\OTNep(\rho_1,\ldots, \rho_N) = D^N_{\ep} (u_1^*, \ldots, u_N^*) +\ep$. \end{enumerate} Moreover, in those cases $\gamma^*$, as defined in 3, is also the (unique) minimizer for the problem \eqref{eq:primalSchrMult}. \end{prop} Part 3 of Proposition \ref{prop:equiv_comp_multi} has already been shown in different settings by J.M. Borwein, A.S. Lewis and R.D. Nussbaum (\cite[Theorem 4.4]{BorLew92}, see also \cite[section 3]{BorLewNus94}) and by G. Carlier \& M. Laborde \cite{CarLab18}. Our approach, being purely variational, allows us to study the convergence of the Sinkhorn algorithm in the several-marginals case in a similar way as done in the previous section. In fact, $\OTNep(\rho_1,\dots,\rho_N)$ defines a unique element $\gamma^{\ep}_{N,opt}$ - the $\operatorname{KL}^{N}$-projection on $\Pi(\rho_1,\dots,\rho_N)$ - which has product density $\prod^N_{i=1}a_i$ with respect to the Gibbs measure $\mathcal K$, where $a_i = e^{u^*_i/\ep}$ as defined in Proposition \ref{prop:equiv_comp_multi}.
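To make the product-density characterization concrete, the following NumPy sketch (purely illustrative, not from the paper; discrete marginals, variable names ours) computes the scaling factors $a_i$ for $N=3$ finite marginals by alternately enforcing each marginal constraint, and then checks that the coupling $\bigl(\prod_i a_i\bigr)\,k^N\,\rho^N$ has the prescribed marginals.

```python
import numpy as np

rng = np.random.default_rng(0)
eps = 0.5
sizes = (4, 5, 3)                       # three finite marginals
N = len(sizes)
rhos = [rng.random(n) for n in sizes]
rhos = [r / r.sum() for r in rhos]      # target marginals rho_i
C = rng.random(sizes)                   # bounded cost c(x1, x2, x3)
K = np.exp(-C / eps)                    # Gibbs kernel k^N

a = [np.ones(n) for n in sizes]         # scaling factors a_i = e^{u_i/eps}

for _ in range(300):                    # alternate-scaling sweeps
    for i in range(N):
        # denominator: integral of (prod_{j != i} a_j) k^N against prod_{j != i} rho_j
        W = K.copy()
        for j in range(N):
            if j == i:
                continue
            shape = [1] * N
            shape[j] = -1
            W = W * (a[j] * rhos[j]).reshape(shape)
        axes = tuple(j for j in range(N) if j != i)
        a[i] = 1.0 / W.sum(axis=axes)

def outer(vs):
    """Outer product v_1 x ... x v_N as an array of shape `sizes`."""
    out = np.ones(())
    for v in vs:
        out = np.multiply.outer(out, v)
    return out

gamma = outer(a) * K * outer(rhos)      # coupling weights (prod a_i) k^N rho^N

# each marginal of gamma matches the corresponding rho_i
for i in range(N):
    axes = tuple(j for j in range(N) if j != i)
    assert np.allclose(gamma.sum(axis=axes), rhos[i], atol=1e-8)
```

At the fixed point each $a_i$ equals the reciprocal integral appearing in the implicit system, which is exactly the marginal condition $\gamma \in \Pi(\rho_1,\dots,\rho_N)$.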
Also in this case, an equivalent system to \eqref{intro:SchSys} can be implicitly written: $\gamma^{\ep}_{N,opt}$ is a solution of \eqref{eq:primalSchrMult} if and only if $\gamma^{\ep}_{N,opt} = \otimes^N_{i=1}a^{\ep}_i(x_i)\ka, \text{ where the } a^{\ep}_i \text{ solve } $ \begin{equation}\label{intro:SchSysMM} \displaystyle a^{\ep}_i(x_i)\int_{\Pi^N_{j\neq i}X_j} \otimes^N_{j\neq i}a^{\ep}_j(x_j)k^N(x_1,\dots,x_N)d(\otimes^N_{j\neq i}\rho_j) = 1, \quad \forall i = 1,\dots, N. \end{equation} Therefore, by using the marginal condition $\gamma^{\ep} \in \Pi(\rho_1,\dots,\rho_N)$, the functions $a_i$ can be implicitly computed \[ a_i(x_i) = \dfrac{1}{\int_{\Pi^N_{j\neq i}X_j}\otimes^N_{j\neq i}a_j(x_j)k^N(x_1,\dots,x_N)d(\otimes^N_{j\neq i}\rho_j)}, \quad \forall i \in \left\lbrace 1,\dots,N\right\rbrace. \] \subsection{Convergence of the IPFP / Sinkhorn algorithm for several marginals} \label{sec:convergenceIPFPMM} \quad \quad The goal of this subsection is to prove the convergence of the IPFP/Sinkhorn algorithm in the multi-marginal setting. Analogously to \eqref{eq:IPFPiteration}, define recursively the sequences $(a^n_j)_{n\in\mathbb N}, j\in \lbrace 1,\dots,N\rbrace$ by \begin{equation}\label{eq:IPFPsequenceN} \begin{array}{lcl} \displaystyle a^0_j(x_j) & = & 1, \quad j \in \lbrace 1,\dots,N \rbrace, \\ \displaystyle a^n_j(x_j) & = & \dfrac{1}{\int \otimes^N_{i<j}a_i^n(x_i)\otimes^N_{i> j}a_i^{n-1}(x_i)k^N(x_1,\dots,x_N)d(\otimes^N_{i\neq j}\rho_i)}, \, \forall n\in \mathbb N. \end{array} \end{equation} Also here, by writing $a^n_j = \exp(u^n_j/\ep)$, for all $j\in\lbrace 1,\dots,N\rbrace$, one can rewrite the IPFP sequences \eqref{eq:IPFPsequenceN} in terms of Entropic $(c,\ep)$-transforms, \begin{align*} u^n_j(x_j) &= - \ep\log\left(\int_{\Pi_{i \neq j}X_i} k^N(x_1,\dots,x_N)\,e^{\left(\sum_{i<j}u^n_i(x_i)+\sum_{i>j}u^{n-1}_i(x_i)\right)/\ep}\,d\left(\otimes^N_{i\neq j}\rho_i\right)\right) \\ &= \mathcal{F}_j^{(N,c,\ep)}\bigl(u^n_1,\dots,u^n_{j-1},u^{n-1}_{j+1},\dots,u^{n-1}_N\bigr)(x_j).
\end{align*} Then, the proof of convergence of the IPFP in the multi-marginal case follows a method similar to the one used in Theorem \ref{thm:convIPFP}. \begin{teo}\label{thm:convIPFPNmarg} Let $(X_1,d_1), \dots, (X_N,d_N)$ be Polish spaces, $\rho_1,\dots,\rho_N$ be probability measures in $X_1,\dots,X_N$, $c:X_1\times\dots\times X_N\to\mathbb R$ be a bounded cost, and let $1\leq p <\infty$. If $(a^n_j)_{n\in\mathbb N}, j\in\lbrace 1,\dots,N\rbrace$, are the IPFP sequences defined in \eqref{eq:IPFPsequenceN}, then there exists a sequence $\lambda^n \in \mathbb R^N$, with $\lambda^n_i >0$ and $\prod_{i=1}^N \lambda^n_i = 1$, such that \[ \forall j\in\lbrace 1,\dots,N\rbrace, \quad a^n_j/\lambda^n_j\to a_j \text{ in } L^p(\rho_j), \] where $(a_j)_{j=1}^N$ solve the Schr\"{o}dinger system. In particular, the sequence $\gamma^n = \Pi^N_{i=1}a^n_i\ka$ converges in $L^p(\rho_1\otimes\dots\otimes\rho_N)$ to the optimizer $\gamma^{\ep}_{opt}$ in \eqref{eq:primalSchrMult}. \end{teo} \begin{proof} Let $(a_i^n)_{n\in\mathbb N}$, $i \in \lbrace 1,\dots, N\rbrace$, be the IPFP sequences defined in \eqref{eq:IPFPsequenceN}. For every $i$, we define $u_i^0 := \ep \ln (a_i^0)$ and then iteratively define the following potentials for every $p \in \mathbb N$ \[ \begin{cases} u_1^{pN+1} = (\widehat{u_1^{pN}})^{(N,c,\ep)} \\ u_2^{pN+1} = u_2^{pN} \\ u_3^{pN+1} = u_3^{pN} \\ \, \cdots \\ u_N^{pN+1} = u_N^{pN} \\ \end{cases}, \quad \begin{cases} u^{pN+2}_1 = u_1^{pN+1} \\ u_2^{pN+2} = (\widehat{u^{pN+1}_2})^{(N,c,\ep)} \\ u^{pN+2}_3 = u_3^{pN+1} \\ \, \cdots \\ u^{pN+2}_N = u_N^{pN+1} \\ \end{cases}, \dots \, , \quad \begin{cases} u_1^{pN+N} = u_1^{pN+N-1} \\ u_2^{pN+N} = u_2^{pN+N-1} \\ u_3^{pN+N} = u_3^{pN+N-1} \\ \, \cdots \\ u^{pN+N}_N = (\widehat{u^{pN+N-1}_N})^{(N,c,\ep)} \\ \end{cases}.
\] Notice that $a_i^n = e^{u^{nN}_i / \ep }$ and moreover $\operatorname{osc} ( u^{pN+i}_i ) \leq 2\| c \|_{\infty}$ by Lemma \ref{lemma:entropytransbound} (v), and in particular $\operatorname{osc}(u^{n}_i) \leq 2\| c \|_{\infty}$ as long as $n \geq N$. Moreover, thanks to \eqref{eqn:lambdau} we also have $|\sum_i \lambda_{u_i^n}| \leq \| c \|_{\infty}$. In particular, defining $v^n= P(u^n)$, we have $\|v_i^n\|_{\infty} \leq 3\|c\|_{\infty}$ thanks to Lemma \ref{lem:P} (ii); using \eqref{est:optcondmm} and Lemma \ref{lem:P} we also have \[ \Dep^N(v^{n}_1,\dots, v^{n}_N)\leq \Dep^N(v^{n+1}_1,\dots, v^{n+1}_N) \leq \dots \leq \OTNep(\rho_1,\dots, \rho_N) - \ep. \] By the boundedness of $\| v_i^n\|_{\infty}$ and the compactness in Lemma \ref{lemma:entropytransbound} (v), there exists a subsequence $k_n$ such that $v_i^{k_n}$ converges in $L^{p}$ to some $v_i$ for every $i=1, \ldots, N$; by the pigeonhole principle at least one residue class modulo $N$ is attained infinitely often by the sequence $k_n$, and we will suppose without loss of generality that this residue class is $0$. Up to restricting to the infinite subsequence such that $k_n \equiv 0 \pmod{N}$, we can assume that $v_N^{k_{n}} = (\widehat{v_N^{k_n}})^{(N,c,\ep)}$ and $v_1^{k_{n}+1} = (\widehat{v_1^{k_n}})^{(N,c,\ep)}$. In particular, by the continuity of the $(N,c, \ep)$-transform we have \[ \Dep^N(\hat{v_1}^{(N,c,\ep)}, \hat{v_1}) - \Dep^N(v_1,v_2,\dots,v_N) = \lim_{n\to\infty} \Dep^N(v_1^{k_n+1},\dots, v_N^{k_n+1}) - \Dep^N(v_1^{k_n},\dots, v_N^{k_n}) = 0. \] Hence we have $v_1 = \hat{v_1}^{(N,c,\ep)}$ by \eqref{est:optcond2mm} and, in particular, $v_i^{k_n+1} \to v_i$ for every $i=1, \ldots, N$. Now, by a similar computation, we can inductively prove that, for every $i=2,\dots,N$, \[ \Dep^N( \hat{v_i}^{(N,c,\ep)}, \hat{v_i}) - \Dep^N(v_1,\dots, v_i, \dots, v_N) = \lim_{n \to\infty} \Dep^N(v_1^{k_n+i},\dots, v_N^{k_n+i}) - \Dep^N(v_1^{k_n+i-1},\dots, v_N^{k_n+i-1}) = 0.
\] Hence, $v_i = \hat{v_i}^{(N,c,\ep)}, \, \forall i \in \lbrace 1,\dots, N\rbrace$. The result follows by noticing that $(e^{v_1/\ep},\dots, e^{v_N/\ep})$ solves the Schr\"odinger system, by Proposition \ref{prop:equiv_comp_multi}. \end{proof} \noindent \emph{Remark on the multi-marginal problem $\Sep(\rho_1,\dots,\rho_N;\refm_1,\dots, \refm_N)$:} More generally, we could also consider the multi-marginal Schr\"odinger problem with reference measures $\refm_i \in \mathcal P(X_i), i=1,\dots, N$. For simplicity, we denote $\overline{\rho} = (\rho_1,\dots,\rho_N)$ and $\overline{\refm} = (\refm_1,\dots,\refm_N)$. Then the functional $\Sep(\overline{\rho};\overline{\refm})$ is defined by $$ \Sep(\overline{\rho};\overline{\refm}) = \min_{\gamma\in\Pi(\rho_1,\dots,\rho_N)} \ep\operatorname{KL}^{N}(\gamma | \refm_1\otimes \cdots \otimes \refm_N). $$ Analogously to the $2$-marginal case, the duality results, the existence and regularity of entropic potentials, as well as the convergence of the Sinkhorn algorithm can be extended to this case. We omit the details here since the proofs follow by similar arguments.\\ \noindent {\bf Acknowledgements:} This work started when the second author visited the first author, while the latter was working at the Scuola Normale Superiore di Pisa (INdAM unit). The authors want to thank G. Carlier, C. L\'eonard and L. Tamanini for useful discussions. \section{Appendix} \begin{prop}\label{prop:compactness} Let $(X,d,\mu)$ be a measurable metric space with $\mu(X)=1$.
Let us assume that $\mathcal{F} \subset L^p(X,\mu)$ is a family of functions such that: \begin{itemize} \item[(a)] there exists $M>0$ such that $ \|f\|_{\infty} \leq M$ for every $f \in \mathcal{F}$; \item[(b)] for every $\sigma$ there exist a set $N^{\sigma}$, a modulus of continuity $\omega_{\sigma}$ and a number $\beta_{\sigma}\geq0$ such that $$ |f(x)-f(x')| \leq \omega_{\sigma}(d(x,x')) + \beta_\sigma \qquad \forall x,x' \not \in N^{\sigma},$$ where $N^{\sigma}$ and $\beta_{\sigma}$ are such that $\mu(N^{\sigma})+\beta_\sigma \to 0$ as $\sigma \to 0$. \end{itemize} Then the family $\mathcal{F}$ is precompact in $L^p(X, \mu)$. \end{prop} \begin{proof} Let us fix $\ep>0$ and let us consider a sequence $\sigma_n \to 0$ such that $\sum_{n =1}^{\infty} \mu(N^{\sigma_n}) \leq \ep$; then define $\omega_n= \omega_{\sigma_n}$ and $\mathcal{N}^{\ep} := \bigcup_n N^{\sigma_n}$; in particular we have $\mu(\mathcal{N}^{\ep}) \leq \ep$ and \begin{equation}\label{eqn:eqcont} |f(x)-f(x')| \leq\omega_n(d(x,x')) + \beta_{\sigma_n} \qquad \forall x, x' \not \in \mathcal{N}^{\ep}, \forall n \in \mathbb N.\end{equation} Let us define $\omega^{\ep}(t) = \inf_n \{ \omega_n (t) + \beta_{\sigma_n}\}$: by \eqref{eqn:eqcont} we have that $f$ is $\omega^{\ep}$-continuous outside $\mathcal{N}^{\ep}$. We can verify that $\omega^{\ep}$ is a nondegenerate modulus of continuity: it is nondecreasing, since it is an infimum of nondecreasing functions. Then for every $\tilde{\ep}>0$ we can choose $n$ big enough such that $\beta_{\sigma_n}< \tilde{\ep}/2$ and then choose $t$ small enough such that $\omega_n (t) < \tilde{\ep}/2$; in this way we have $\omega^{\ep}(t) \leq \omega_n (t) + \beta_{\sigma_n} < \tilde{\ep}$. In particular $\omega^{\ep}(t) \to 0$ as $t \to 0$. Now we conclude by a diagonal argument: let us consider a sequence $(f^0_n)_{n \in \mathbb N} \subseteq \mathcal{F}$ and a sequence $\ep_k\to 0$. We want to find a subsequence that is converging strongly in $L^p$.
We iteratively extract a subsequence $(f^k_n)$ of $(f^{k-1}_n)$ that is converging uniformly outside $\mathcal{N}^{\ep_k}$ (thanks to Ascoli-Arzel\`a) to some function $f^k$, which is defined only outside $\mathcal{N}^{\ep_k}$. Then let us consider $$ f(x)= \begin{cases} f^k(x) \qquad &\text{ if }x \not \in \mathcal{N}^{\ep_k} \\ 0 & \text{ otherwise.} \end{cases}$$ First of all, $f$ is well defined since if $x \not \in \mathcal{N}^{\ep_k}$ and $x \not \in \mathcal{N}^{\ep_j}$ with $j >k$ then we have that $f^k_n (x) \to f^k(x)$; but since $f^j_n$ is a subsequence of $f^k_n$ we have also $f^j_n(x) \to f^k(x)$; however, by definition $f^j_n(x) \to f^j(x)$ and so $f^j(x)=f^k(x)$. Moreover, it is clear that $\|f\|_{\infty} \leq M$ since this is true for every $f^0_n$ thanks to property (a). Now we consider the sequence $g_n = f^n_n$, which is a subsequence of $f^0_n$. Let us fix $\ep>0$ and choose $k$ such that $\ep_k < \ep^p$; then let $n_0>k$ be such that $|f^k_n - f| \leq \ep$ on $X \setminus \mathcal{N}^{\ep_k}$ for every $n \geq n_0$. Now, for every $n \geq n_0$, we have $g_{n}=f^k_m$ for some $m\geq n_0$ and in particular \begin{align*} \int_X |g_{n}(x)-f(x)|^p \,d \mu & = \int_{X} | f^k_m(x) -f(x)|^p \, d \mu \\ &= \int_{\mathcal{N}^{\ep_k}} | f^k_m(x)-f(x)|^p \, d \mu + \int_{X \setminus \mathcal{N}^{\ep_k}} | f^k_m(x)-f(x)|^p \, d \mu \\ & \leq \int_{\mathcal{N}^{\ep_k}} (2M)^p \, d \mu + \int_{X \setminus \mathcal{N}^{\ep_k}} \ep^p \, d \mu \\ & \leq \mu(\mathcal{N}^{\ep_k}) (2M)^p + \ep^p \mu(X) \\ & \leq \ep_k (2M)^p + \ep^p \leq \ep^p ( 2^pM^p+ 1). \end{align*} In particular we get $g_n \to f $ in $L^p$, and so we're done. \end{proof} \bibliographystyle{siam}
{ "redpajama_set_name": "RedPajamaArXiv" }
290
Badowal Khurd is a village in Batala in Gurdaspur district of Punjab State, India. The village is administrated by Sarpanch an elected representative of the village. Demography , The village has a total number of 250 houses and the population of 1378 of which 735 are males while 643 are females according to the report published by Census India in 2011. The literacy rate of the village is 65.45%, lower than the state average of 75.84%. The population of children under the age of 6 years is 171 which is 12.41% of total population of the village, and child sex ratio is approximately 879 higher than the state average of 846. See also List of villages in India References Villages in Gurdaspur district
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,336
using NullDesk.Extensions.Mailer.MailKit.Tests.Infrastructure; using Xunit; namespace NullDesk.Extensions.Mailer.MailKit.Tests { public class MkGmailMailerTests : IClassFixture<GmailMailFixture> { public MkGmailMailerTests(GmailMailFixture fixture) { Fixture = fixture; } private GmailMailFixture Fixture { get; } // https://github.com/jstedfast/MailKit/blob/master/FAQ.md#GMailAccess //Setup user and password in fixture, then uncomment //[Fact] //[Trait("TestType", "Integration")] //public async Task MailKit_Gmail_Basic_SendAll() //{ //var mailer = Fixture.BasicAuthServiceProvider.GetService<IMailer>(); //var deliveryItems = // mailer.CreateMessage(b => b // .Subject($"xunit Test run: content body") // .And.To("abc@xyz.net") // .WithDisplayName("No One Important") // .And.ForBody() // .WithPlainText("nothing to see here") // .Build()); //var result = await mailer.SendAllAsync(CancellationToken.None); //result // .Should() // .NotBeNull() // .And.AllBeOfType<DeliveryItem>() // .And.HaveSameCount(deliveryItems) // .And.OnlyContain(i => i.IsSuccess); //} //[Fact] //[Trait("TestType", "Integration")] //public async Task MailKit_Gmail_Token_SendAll() //{ //var mailer = Fixture.TokenAuthServiceProvider.GetService<IMailer>(); //var deliveryItems = // mailer.CreateMessage(b => b // .Subject($"xunit Test run: content body") // .And.To("abc@xyz.net") // .WithDisplayName("No One Important") // .And.ForBody() // .WithPlainText("nothing to see here") // .Build()); //var result = await mailer.SendAllAsync(CancellationToken.None); //result // .Should() // .NotBeNull() // .And.AllBeOfType<DeliveryItem>() // .And.HaveSameCount(deliveryItems) // .And.OnlyContain(i => i.IsSuccess); //} } }
{ "redpajama_set_name": "RedPajamaGithub" }
8,773
Vladimir Dedijer (; * 4. Februar 1914 in Belgrad, Königreich Serbien; † 1. Dezember 1990 in Boston, Vereinigte Staaten) war ein jugoslawischer Historiker und Publizist. Von einem engen Weggefährten und Vertrauten des jugoslawischen Staatsführers Josip Broz Tito wurde er zum Systemkritiker. Leben Dedijer entstammte einer serbischen Familie aus Bosnien, die der Königsfamilie Karađorđević nahestand und nationalserbisch gesinnt war. Sein Vater Jevto Dedijer (1880–1918) war ein Geograph und Assistent des Professors für politische Geographie an der Universität Belgrad Jovan Cvijić (1865–1927). Gleichzeitig rekrutierte der von der österreichisch-ungarischen Polizei gesuchte Jevto Dedijer mit Dragutin "Apis" Dimitrijević junge bosnische Serben als Terroristen für den nationalistisch-terroristischen Geheimbund "Schwarze Hand". Vladimir Dedijer begann nach seinem Studium in Rechtswissenschaften 1937 eine Tätigkeit als Auslandskorrespondent für die Belgrader Tageszeitung Borba. 1938 schloss er sich der Kommunistischen Partei Jugoslawiens (KPJ/BdKJ) an und wurde zu einem Vertrauten Titos. Während des Widerstandes gegen die deutschen und italienischen Besatzer Jugoslawiens im Zweiten Weltkrieg war er von 1941 bis 1944 im Stab der Partisanen Titos. 1943 ernannte ihn Tito zum Oberst der Partisanenarmee. Er wurde während des Krieges mehrfach verwundet. 1943 verlor er bei einem Angriff der Deutschen seine Frau Olga, die als Ärztin ebenfalls in der Partisanenarmee tätig war. Nach dem Zweiten Weltkrieg war er Parlamentsabgeordneter und zwischen 1945 und 1952 mehrfach jugoslawisches UN-Delegationsmitglied. 1946 nahm er an der Pariser Friedenskonferenz teil. 1952 veröffentlichte er eine Tito-Biografie im Stile des Personenkults, befasste sich neben seinen politischen Ämtern mit Geschichte und schrieb zahlreiche Bücher. 1953 erhielt Dedijer eine Professur für Neuere Geschichte an der Universität Belgrad. 
At the end of 1954 he broke with Tito over his support for the disgraced Milovan Đilas. On 28 December 1954 Dedijer lost all his political offices as well as his membership in the Central Committee of the League of Communists of Yugoslavia; on 25 January 1955 he was given a six-month suspended sentence for anti-government activity. He then devoted himself increasingly to scholarly work and left Yugoslavia in 1959, taking up various teaching posts, among others at Harvard University and the University of Oxford.

After reconciling with Tito, Dedijer returned to Yugoslavia in 1964 and became a scientific adviser at the Institute of History in Belgrade. His book The Road to Sarajevo (1966, German translation published in 1967 under the title Die Zeitbombe) is, in the judgment of Gerd Krumeich, "the most richly sourced and detailed study of the assassination" at Sarajevo. In 1968 Dedijer joined the Serbian Academy of Sciences but continued his research work abroad, among other places at the Massachusetts Institute of Technology (MIT). In 1968 he co-chaired the Russell Tribunal together with the writer Jean-Paul Sartre. Because of this engagement, the United States temporarily denied him entry at the end of the 1960s.

Shortly before the outbreak of the Yugoslav Wars, Dedijer's popular anti-Catholic tract Vatikan i Jasenovac (Jasenovac, the Yugoslav Auschwitz, and the Vatican) appeared in 1987 and attracted attention internationally as well. In the work he cited high numbers of victims said to have been killed by "henchmen of the Catholic Church". Dedijer argued in the book that 800,000 Serbs had been killed by the Ustaše and attempted to prove the complicity of the Vatican. However, he offers no precise basis for these massively inflated victim figures (see the section on victim numbers in the article on the Jasenovac concentration camp).
The book was reissued in 2011, unchanged in content, in a sixth edition by the Ahriman-Verlag. Until his death in Boston in 1990, Dedijer remained active as a campaigner against human rights violations.

Works (selection)

Books available in German:

English-language books:

Literature

Dedijer, Vladimir, in: General Encyclopedia of the Yugoslavian Lexicographical Institute, Volume 2, Zagreb, 1977

External links

Surprise Ending, in: Time Magazine, 7 February 1955 (paywalled)

References

Author
Historian
Journalist (Yugoslavia)
Member of parliament (Yugoslavia)
BdKJ member
University teacher (University of Belgrade)
Member of the Serbian Academy of Sciences and Arts
Person (Yugoslav resistance 1941–1945)
Yugoslav
Born 1914
Died 1990
Man
Oleksandr Zotov (born 23 February 1975) is a retired professional Ukrainian football midfielder.

Career

Zotov is a product of UOR Donetsk. He made his professional debut in lower-league clubs from Donetsk Oblast, where he played from 1992 to 1994. In 1994 Zotov moved to Odessa, where he played briefly for the city team SC Odesa. In 1995 he made his debut in the Vyshcha Liha (Ukrainian Top League), playing for FC Chornomorets Odesa and later FC Kryvbas Kryvyi Rih. In 2001 Zotov joined Metalurh Donetsk. He later rejoined Metalurh Donetsk from Chornomorets Odesa at the beginning of the 2008–09 season, during the summer transfer window.

External links

Profile on Official Metalurh Donetsk Website
Profile on Football Squads

1975 births
Living people
People from Yenakiieve
Ukrainian footballers
Ukrainian Premier League players
FC Khartsyzk players
FC Metalurh Kostiantynivka players
FC Metalurh Donetsk players
FC Metalurh-2 Donetsk players
FC Kryvbas Kryvyi Rih players
FC Chornomorets Odesa players
SC Odesa players
FC Vorskla Poltava players
FC Vorskla-2 Poltava players
FC Hoverla Uzhhorod players
FC Feniks-Illichovets Kalinine players
Ukraine international footballers
Ukraine under-21 international footballers
Ukrainian football managers
NK Veres Rivne managers
Association football midfielders
Sportspeople from Donetsk Oblast