Nestlé S.A. is a Swiss multinational food company. It is the world's largest food company, with annual revenue of roughly 90 billion Swiss francs. The company was founded in 1866 by Henri Nestlé and is headquartered in Vevey, Switzerland.
Nestlé owns a large number of brands in areas such as baby food, coffee, bottled water, cat and dog food, infant formula, breakfast cereals, chocolate, confectionery, ice cream, and milk cereal drinks. The company mainly promotes the individual product names, while the Nestlé brand itself is given no prominent place on the packaging.
History
Nestlé was founded in 1866 by the pharmacist Henri Nestlé (Heinrich Nestle) as Farine Lactée Henri Nestlé. In 1867, Henri Nestlé began manufacturing milk powder sold as a breast-milk substitute. Nestlé adopted the family coat of arms, birds in a nest, as its logo; the name Nestlé means "little nest" in the Swabian dialect of southern Germany. In 1874 the company expanded into Germany, and the same year its founder left the business.
Mergers and international expansion
In 1898, production abroad began with the opening of a factory in Norway. In 1905, the company merged with the Anglo-Swiss Condensed Milk Company. In 1929, the chocolate producers Peter, Cailler, Kohler and Nestlé merged, with Nestlé retained as the company name. In 1938, the company achieved great success with instant coffee, Nescafé. In 1947, Nestlé merged with Maggi, and acquisitions continued with the Swedish company Findus (1962) and Ursina-Franck AG (1971). The company has been named Nestlé S.A. ever since.
Nespresso was launched in Switzerland in 1986. In 1988, Nestlé acquired the British company Rowntree's, bringing brands such as Smarties, After Eight and Kit Kat into its portfolio. The same year, Nestlé bought the Italian pasta maker Buitoni. The Swedish Findus operations, acquired from Marabou, became Svenska Nestlé AB in 1989. Nestlé sold the Findus business to EQT in 2001 and formed the new company Nestlé Sverige AB, headquartered in Helsingborg.
During the 1990s, mineral water became a new segment following the acquisitions of Perrier and Sanpellegrino.
Price cartel
In 2007, Nestlé was accused by the Canadian competition authority of participating in a price cartel. Nestlé paid a penalty of nine million dollars following a court settlement. A similar case is ongoing in the United States.
Selected brands
Confectionery
After Eight
Lion
Kit Kat
Aero
Galak
Smarties
PowerBar
Coffee
Nescafé
Nespresso
Zoégas
Breakfast cereals
Cheerios (Cereal Partners Worldwide) - breakfast cereal
Nestlé Fitness
Ice cream
Hemglass - ice cream (sold to Varsego in January 2014)
Mövenpick - ice cream
Cooking
Juicy Juice
Maggi
Buitoni
Hälsans Kök
Water
Perrier
Pure Life
San Pellegrino
Vittel
Poland Spring Water
Pet food
Nestlé Purina Pet Care
Friskies - cat food
Purina ONE - cat and dog food
Criticism
Nestlé has been the subject of extensive criticism for many years.
Infant formula, toxic milk and illegal milk sales
A boycott of the company has been ongoing since 1977 over its marketing of breast-milk substitutes, which violates the World Health Organization's rules. Save the Children, among others, says that this marketing leads to children dying or suffering health problems. The International Baby Food Action Network runs a global campaign against Nestlé's marketing.
In the 2008 Chinese melamine scandal, six children died and 860 had to be treated in hospital. Melamine makes the protein content of milk formula appear higher than it actually is. In 2008, Chinese authorities found that Nestlé was selling milk containing low levels of melamine. The authorities in Taiwan also halted sales.
In 2009, it emerged that Nestlé had bought milk from illegally seized farms in Zimbabwe. This happened even though these farms were controlled by Mugabe, in violation of the EU's sanctions rules. Nestlé has since stopped this practice.
The conflict with Ethiopia (2002)
In the early 2000s, Ethiopia was struck by a widespread famine. In the midst of it, in 2002, Nestlé demanded that the Ethiopian state repay a debt of six million dollars to the company. Only after receiving 8,500 critical e-mails did the company back down from the demand.
Child slavery
In 2001, Nestlé signed the Cocoa Protocol against child labour in the cocoa industry. In 2005, Nestlé was taken to court over children who had been kidnapped and forced into slave labour. Nestlé was acquitted by a Californian court, but the decision has been appealed to a higher court.
Horse meat
Nestlé is among the companies that in 2013 sold beef products containing horse meat.
Notes
External links
Official website
Official Swedish website
Companies founded in 1866
Companies listed on the SIX Swiss Exchange
\section{Introduction}
For many combinatorial optimization problems, their set of feasible solutions is a certain set of subsets of a finite
ground set $ E $ and there exists a vector $ c \in \mathbb{R}^E $ such that the objective value of each feasible set $ F
\subseteq E $ is equal to $ \sum_{e \in F} c_e $.
A central paradigm in combinatorial optimization is to identify each feasible subset $ F $ with its
\emph{characteristic vector} $ \chi(F) \in \{0,1\}^E $, where $ \chi(F)_e = 1 \iff e \in F $, and to consider the
(equivalent) problem of maximizing or minimizing the linear function $ x \mapsto \langle c,x \rangle $ over these
vectors.
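This identification can be made concrete in a small Python sketch; the ground set, weights, and feasible set below are invented purely for illustration:

```python
E = ["e1", "e2", "e3", "e4"]                       # finite ground set
c = {"e1": 2.0, "e2": -1.0, "e3": 4.0, "e4": 0.5}  # objective vector c in R^E

def chi(F):
    """Characteristic vector chi(F): chi(F)_e = 1 iff e in F."""
    return {e: 1 if e in F else 0 for e in E}

F = {"e1", "e3"}                        # some (hypothetical) feasible subset
set_objective = sum(c[e] for e in F)    # sum_{e in F} c_e
x = chi(F)
linear_objective = sum(c[e] * x[e] for e in E)     # <c, chi(F)>
assert set_objective == linear_objective == 6.0
```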
In order to treat this problem algorithmically, one needs an algebraic description of the set $ X := \{\chi(F) : F \subseteq E \text{ feasible}\} $. The standard approach that has been followed extremely successfully for many problems is to use a system of linear
inequalities $ Ax \leq b $ with
\begin{equation}
\label{eq:description_relaxation}
X = \setdef{ x \in \mathbb{Z}^E }[ Ax \leq b ]\,,
\end{equation}
i.e., an \emph{integer linear programming} (\emph{ILP}) formulation.
A most prominent example of this approach is provided by the traveling salesman problem.
Let $ K_n = (V_n,E_n) $ be the undirected complete graph on $ n $ nodes and $ \mathrm{STSP}_n $ the set of characteristic
vectors of hamiltonian cycles in $ K_n $.
One ILP-formulation for $ X = \mathrm{STSP}_n $ of type~\eqref{eq:description_relaxation}, which has established itself as
\emph{the} starting point for solving the traveling salesman problem, is the so-called \emph{subtour elimination}
formulation
\begin{align}
\nonumber
\mathrm{STSP}_n = \Big\{ x \in \{0,1\}^{E_n} :
x(\delta(S)) & \geq 2 \quad \forall \, \emptyset \neq S \subsetneq V_n \\
\label{eq:subtour-relaxation}
x(\delta(v)) & = 2 \quad \forall \, v \in V_n \quad \quad \Big\}.
\end{align}
This description uses exponentially many (in $ n $) linear inequalities. Nevertheless, it can be used efficiently in computations (with respect to both theoretical and practical aspects), because the separation problem associated with these inequalities can be solved efficiently. Though the mere size of the description thus does not impede its algorithmic use, one may wonder whether an ILP-formulation of the traveling salesman problem in the above setup necessarily needs to be of exponential size, while for other combinatorial optimization problems (both NP-hard and polynomial time solvable ones) such as the maximum clique, the maximum cut, or the maximum matching problem, polynomial size ILP-formulations are readily at hand. The work reported on in this paper had its origins in this question, which, to our initial slight surprise, apparently has not been treated before.
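For very small instances, one can check by brute force that the degree and subtour elimination constraints describe exactly the set of tours. The following Python sketch does this for $ K_4 $; it is merely a sanity check, not part of the paper's development:

```python
from itertools import combinations, product

n = 4
V = range(n)
E = list(combinations(V, 2))  # the 6 edges of K_4

def delta(S):
    """Edges of K_4 with exactly one endpoint in S."""
    return [e for e in E if (e[0] in S) != (e[1] in S)]

def feasible(x):
    """Degree and subtour elimination constraints on a 0/1 edge vector x."""
    # x(delta(v)) = 2 for all nodes v
    if any(sum(x[e] for e in delta({v})) != 2 for v in V):
        return False
    # x(delta(S)) >= 2 for all proper nonempty S
    for k in range(1, n):
        for S in combinations(V, k):
            if sum(x[e] for e in delta(set(S))) < 2:
                return False
    return True

tours = [x for x in (dict(zip(E, bits)) for bits in product((0, 1), repeat=len(E)))
         if feasible(x)]
assert len(tours) == 3                           # (4-1)!/2 Hamiltonian cycles
assert all(sum(x.values()) == 4 for x in tours)  # each tour uses 4 edges
```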
Besides pure mathematical curiosity, the question of small (rather than merely efficiently separable) ILP-formulations of combinatorial optimization problems also has some practical relevance. If simplicity of implementation matters more than efficiency of the solution process, a small ILP-formulation that can be fed directly into a black-box ILP-solver is likely to be preferred over one for which a separation procedure has to be implemented and linked to the solver. Therefore, also from a practical point of view, there is some interest in better understanding the general possibilities and limits of formulating combinatorial optimization problems via ILPs.
Throughout this paper, given a set $ X \subseteq \mathbb{Z}^d $, let us call a polyhedron $ R \subseteq \mathbb{R}^d $ a
\emph{relaxation} for $ X $ if $ R \cap \mathbb{Z}^d = \conv(X) \cap \mathbb{Z}^d $ holds.
Furthermore, the smallest number $ \rc(X) $ of facets of any relaxation for $ X $ will be called the \emph{relaxation complexity} of $
X $.
With this notation, the initial question asks for the asymptotic behavior of $ \rc(\mathrm{STSP}_n) $.
Except for a paper by Jeroslow \cite{Jeroslow75}, the authors are not aware of any reference that deals with a similar
quantity.
In his paper, for a set $ X \subseteq \{0,1\}^d $ of binary vectors, Jeroslow introduces the term \emph{index} of $ X $
(short: $ \ind(X) $), which is defined as the smallest number of inequalities needed to separate $ X $ from the
remaining points in $ \{0,1\}^d $.
Thus, the notion of relaxation complexity can be seen as a natural extension of the index with respect to general
subsets of $ \mathbb{Z}^d $.
Clearly, we have that $ \ind(X) \leq \rc(X) $ holds for all sets $ X \subseteq \{0,1\}^d $.
On the other hand, as we will briefly discuss in Section~\ref{sec:cube}, both quantities differ at most by an additive
term of $ d + 1 $.
As the main result in his paper, Jeroslow shows that $ 2^{d-1} $ is an upper bound on $ \ind(X) $, which is attained by the set of
binary vectors of length $ d $ that contain an even number of ones, see Section~\ref{sec:parity}.
We generalize his idea of bounding the index of a set $ X \subseteq \{0,1\}^d $ from below to provide lower bounds on
the relaxation complexity of general $ X $, see Section~\ref{sec:lower-bounds}.
This allows us to provide exponential lower bounds on the relaxation complexities of sets associated to problems that
are variants of the traveling salesman, the spanning tree, or the $T$-join problem.
In particular, we show that the asymptotic growth of $ \rc(\mathrm{STSP}_n) $ is $ 2^{\Theta(n)} $. In this sense, the exponentially large subtour
elimination formulation is asymptotically smallest possible.
Of course, exponential lower bounds on the relaxation complexity only imply that it is impossible to come up with polynomial size ILP-formulations in the space of the variables~$x$ that are naturally associated with the respective problem in the way described above. They do not refer to ILP-formulations of the form
\begin{equation}
\label{eq:intro-general-ips}
\min \setdef{ \langle c,x \rangle }[Ax + By \leq b, \, x \in \mathbb{Z}^E, \, y \in \mathbb{Z}^m],
\end{equation}
where the vector $ y $ consists of additional variables and $ X = \{ x \in \mathbb{Z}^E : \exists y \in \mathbb{Z}^m: \ Ax + By \leq b
\} $.
Indeed, there exist classical formulations for $ X = \mathrm{STSP}_n $ for which the system $ Ax + By \leq b $ in~\eqref{eq:intro-general-ips} consists of
polynomially many linear inequalities, see, e.g. \cite{MillerTZ60} or \cite{GavishG78}.
As we briefly discuss in Section~\ref{sec:basics}, it turns out that in fact every combinatorial
optimization problem that is reasonable in a certain sense admits polynomial size descriptions of type~\eqref{eq:intro-general-ips}.
In contrast to this, recent results show that for many problems the associated polytope $\conv(X)$ has exponential extension complexity, i.e., every representation $ \conv(X) = \{ x \in \mathbb{R}^E : \exists y \in \mathbb{R}^m: \ Ax + By \leq b
\} $ has exponential size.
This refers to both NP-hard problems like the traveling salesman problem and others~\cite{FioriniMPTW12} (see also~\cite{AvisT13,PokuttaV13}) as well as even to the polynomial time solvable matching problem~\cite{Rothvoss14}. Thus, our exponential lower bounds on $ \rc(\mathrm{STSP}_n) $ in particular show that in polynomial size formulations of the traveling salesman problem of type~\eqref{eq:intro-general-ips} both the use of additional variables and imposing integrality constraints are necessary.
Our paper is organized as follows:
In Section \ref{sec:basics}, we discuss basic facts about existence and properties of certain types of integer programs
describing sets that are mainly relevant in the field of combinatorial optimization.
This part provides further motivation for our notion of relaxations.
Section \ref{sec:cube} addresses the relaxation complexity of the hypercube's vertices and serves as a simple
starting point to get familiar with the questions asked (and, in essence, answered) in this paper.
Most of our main results are contained in Section~\ref{sec:lower-bounds}.
There, we introduce the concept of \emph{hiding sets}, which turns out to be a powerful technique to provide lower
bounds on $ \rc(X) $.
Finally, we use this technique to give exponential lower bounds on the sizes of relaxations for concrete structures that
occur in many practical IP-formulations.
In the last section of the paper, we briefly discuss some open questions regarding the rationality of minimum size
relaxations.
\makeatletter{}\section{Basic Observations}
\label{sec:basics}
General sets of integer points do not have to admit any relaxation.
Therefore, we will focus on \emph{polyhedral} sets $ X $, i.e., sets whose convex hull is a polyhedron.
By definition, we have that $ \rc(X) $ is finite for such sets.
Further, in this setting, it is easy to see that any relaxation corresponds to a valid IP-formulation and vice versa:
\begin{proposition}
Let $ X \subseteq \mathbb{Z}^d $ be polyhedral and $ P \subseteq \mathbb{R}^d $ a polyhedron. Then, $ P $ is a relaxation for $ X $
if and only if $ \sup \{ \langle c,x \rangle : x \in P \cap \mathbb{Z}^d \} = \sup \{ \langle c,x \rangle : x \in X \} $
holds for all $ c \in \mathbb{R}^d $.
\end{proposition}
\subsection{Projection is a Powerful Tool}
Following Schrijver's proof \cite[Thm. 18.1]{Schrijver86} of the fact that integer programming is $ \mathrm{NP} $-hard,
one finds that for any language $ \mathcal{L} \subseteq \{0,1\}^* $ that is in $ \mathrm{NP} $, there is a polynomial $
p $ such
that for any $ k > 0 $ there is a system $ Ax + By \leq b $ of at most $ p(k) $ linear inequalities and $ m \leq p(k) $
auxiliary variables with
\[
\setdef{x \in \{0,1\}^k}[x \in \mathcal{L}] = \setdef{x \in \{0,1\}^k}[\exists \, y \in \{0,1\}^m \ Ax+By \leq b].
\]
Further, suppose we are given a \emph{boolean circuit} and let $ X \subseteq \{0,1\}^d $ be the set of inputs that
evaluate to true. It is straightforward to model the outputs of all intermediate gates in terms of additional variables
and linear inequalities: For inputs $ y_1, y_2 \in \{0,1\} $, the resulting output $ y_3 $ of a, say, OR-gate is the
unique solution $ y_3 \in [0,1] $ of the system $ y_1 \leq y_3, \, y_2 \leq y_3, \, y_3 \leq y_1+y_2 $. A crucial
property of these constraints is that if the inputs have $ 0/1 $-values, then $ y_3 $ is also implicitly forced to take
its value in $ \{0,1\} $. It is straightforward to make analogous observations for AND-gates and NOT-gates, see, e.g.,
\cite[p. 445]{Yannakakis91}. Since for every language $ \mathcal{L} \in \mathrm{P} $ there exists a polynomial time
algorithm to construct (polynomial size) boolean circuits that decide $ \mathcal{L} $, see, e.g.,
\cite[pp.~109--110]{AroraB09}, we conclude:
\begin{proposition}
\label{prop:polynomial-size-integer-programs}
Let $ X_d \subseteq \{0,1\}^d $ be a family of sets such that the membership problem ``Given $ x \in
\{0,1\}^d $, is $ x $ in $ X_d $?'' is in $ \mathrm{NP} $. Then there exists a polynomial $ p $ such that for any $
d $ there is a system $ Ax + By \leq b $ of at most $ p(d) $ linear inequalities and $ m \leq p(d) $ auxiliary
variables with
\[
X_d = \setdef{x \in \{0,1\}^d}[Ax + By \leq b, \, y \in \mathbb{Z}^m].
\]
If the membership problem is even in $ \mathrm{P} $, then there exist such systems without integrality
constraints on the auxiliary variables $ y $. \qed
\end{proposition}
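The OR-gate constraints from the discussion above can be checked exhaustively. The following Python sketch (an illustration of the argument; variable names are chosen here) confirms that for binary inputs the gate output is forced to the correct $ 0/1 $-value without an explicit integrality constraint:

```python
from itertools import product

# For binary inputs y1, y2 the OR-gate constraints
#   y1 <= y3,  y2 <= y3,  y3 <= y1 + y2,  0 <= y3 <= 1
# pin y3 to the interval [max(y1, y2), min(1, y1 + y2)].
# forced[(y1, y2)] records this interval's single point: it always
# collapses to y1 OR y2.
forced = {}
for y1, y2 in product((0, 1), repeat=2):
    lower = max(y1, y2)          # from y1 <= y3 and y2 <= y3
    upper = min(1, y1 + y2)      # from y3 <= y1 + y2 and y3 <= 1
    assert lower == upper        # the interval is a single point
    forced[(y1, y2)] = lower

assert forced == {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 1}
```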
Coming back to the exponential size of the subtour elimination relaxation, it might be argued that the number of
inequalities is not the right measure of complexity since it is still possible to optimize linear functions over the
subtour elimination relaxation in polynomial time.
However, the mere existence of a relaxation over which optimization can be performed in polynomial time is nothing
special, as the following result shows.
\begin{proposition}
Let $ X_d \subseteq \{0,1\}^d $ be a family of sets such that the membership problem ``Given $ x \in
\{0,1\}^d $, is $ x $ in $ X_d $?'' is in $ \mathrm{P} $. Then there exists a family of relaxations $ R_d $ for $
X_d $ such that linear programming over $ R_d $ can be done in polynomial time.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:polynomial-size-integer-programs}, we know that for each $ d $ there exists a system $ Ax +
By \leq b $ of polynomially many linear inequalities such that $ X_d = \{ x \in \{0,1\}^d : Ax + By \leq b, \, y \in
\mathbb{R}^m \} $. As argued above, such systems can even be constructed by a
polynomial time algorithm. Thus, setting
\begin{align*}
R'_d &:= \{ (x,y) : Ax + By \leq b, \, x \in [0,1]^d, \, y \in \mathbb{R}^m \} \\
R_d & := \{ x \in \mathbb{R}^d : \exists y \in \mathbb{R}^m \, (x,y) \in R'_d \}
\end{align*}
gives us the desired relaxations $ R_d $. Indeed, given $ c \in \mathbb{Q}^d $ we have that
\[
\max \, \{ \langle c,x \rangle : x \in R_d \} = \max \, \{ \langle c,x \rangle : (x,y) \in R'_d \},
\]
where the latter problem can be solved in time polynomially bounded in $ d $ and the encoding length of $ c $. \qed
\end{proof}
\subsection{Encoding Lengths of Coefficients}
Our notion of relaxation complexity does not involve sizes of coefficients in (minimum size) relaxations.
In fact, we are not aware of any bounds on the coefficients' sizes of minimum size relaxations for general polyhedral
sets. We even do not know if they can be chosen to be rational in every case, see Section \ref{sec:rationality} for a
discussion. However, let us consider the following observation concerning binary vectors:
\begin{proposition}
\label{prop:encoding-lengths-coefficients}
There exists a polynomial $ p $ such that for any real vector $ a \in \mathbb{R}^d $ and any real number $ \gamma \in \mathbb{R} $
there is a rational vector $ a' \in \mathbb{Q}^d $ and a rational number $ \gamma' \in \mathbb{Q} $ satisfying
\begin{itemize}
\item $ \{ x \in \{0,1\}^d : \langle a,x \rangle \leq \gamma \} = \{ x \in \{0,1\}^d : \langle a',x \rangle \leq
\gamma' \} $ and
\item the encoding lengths of $ a' $ and $ \gamma' $ are both bounded by $ p(d) $.
\end{itemize}
\end{proposition}
\begin{proof}
We may assume that $ X := \{ x \in \{0,1\}^d : \langle a,x \rangle \leq \gamma \} $ is not empty. By setting
$ \bar{x} := \arg \max \{ \langle a,x \rangle : x \in X \} $, let us define the affine isomorphism $ \varphi \colon
\mathbb{R}^d \to \mathbb{R}^d $ via $ \varphi(x) := x - \bar{x} $. Clearly, we now have that
\[
\langle a,x \rangle \leq \gamma \iff \langle a,\varphi(x) \rangle \leq 0
\]
holds for all $ x \in \{0,1\}^d $. Thus, the polyhedron
\[
P := \Big\{ \tilde{a} \in \mathbb{R}^d : \langle y,\tilde{a} \rangle \leq 0 \ \forall \, y \in \varphi(X), \,
\langle y,\tilde{a} \rangle \geq 1 \ \forall \, y \in \varphi(\{0,1\}^d \setminus X) \Big\}
\]
is not empty. Since $ P $ can be described by a system of linear inequalities whose coefficients are in $ \{-1,0,1\}
$, we know that there exists a rational point $ a' \in P $ whose encoding length is polynomially bounded in $ d
$, see \cite{Schrijver86}.
For any point $ x \in \{0,1\}^d $, by the definition of $ a' $, we now have that
\begin{align*}
\langle a,x \rangle \leq \gamma
\iff \langle a,\varphi(x) \rangle \leq 0
& \iff \langle a',\varphi(x) \rangle \leq 0 \\
& \iff \langle a',x-\bar{x} \rangle \leq 0
\iff \langle a',x \rangle \leq \langle a',\bar{x} \rangle.
\end{align*}
Hence, setting $ \gamma' := \langle a',\bar{x} \rangle $ completes the proof. \qed
\end{proof}
Proposition \ref{prop:encoding-lengths-coefficients} tells us that if we are given a set $ X \subseteq \{0,1\}^d $, then we
can always find a rational relaxation $ \tilde{R} $ for $ X $ whose number of facets is close to $ \rc(X) $ and the
encoding lengths of coefficients in a suitable outer description of $ \tilde{R} $ can be polynomially bounded in $ d $.
Indeed, let $ R $ be a minimal relaxation for $ X $, perturb its facet-defining inequalities according to Proposition
\ref{prop:encoding-lengths-coefficients} and call the obtained polyhedron $ R' $. If we now choose $ C \subseteq \mathbb{R}^d $
to be any relaxation for $ \{0,1\}^d $, we obtain that $ \tilde{R} := R' \cap C $ is a relaxation for $ X $. In Section
\ref{sec:cube}, we will see that the number of inequalities we thus have to add, i.e., the number of facets of $ C
$, can be assumed to be at most $ d+1 $.
\subsection{Restricting to Facet-Defining Inequalities}
At the end of this section, let us briefly discuss a further requirement on relaxations:
Many known relaxations for sets $ X \subseteq \mathbb{Z}^d $ that arise as the feasible points of combinatorial problems
are defined by linear inequalities of which, preferably, most are facet-defining for $ \conv(X) $. Clearly, this
has important practical reasons, since such formulations are tightest possible in some sense. However, if one is
interested in a relaxation with as few facets as possible, one cannot use only
facet-defining inequalities of $ \conv(X) $: In fact, in the next section we will see that $ \rc(\{0,1\}^d) = d + 1 $,
whereas removing any of the cube's $ 2d $ facet-defining inequalities leaves the remaining polyhedron unbounded.
Nevertheless, this restriction turns out to be not too severe:
\begin{proposition}
Let $ X \subseteq \mathbb{Z}^d $ be polyhedral and $ \rc_F(X) $ the smallest number of facets of any relaxation for $ X $
whose facet-defining inequalities are also facet-defining for $ \conv(X) $. Then, $ \rc_F(X) \leq \dim(X) \cdot
\rc(X) $.
\end{proposition}
\begin{proof}
By Carath\'eodory's Theorem, any facet-defining inequality of a relaxation $ R $ for $
X $ can be replaced by $ \dim(X) $ many facet-defining inequalities of $ \conv(X) $. The resulting polyhedron is
still a relaxation for $ X $. \qed
\end{proof}
\makeatletter{}\section{Warm-Up: The Cube}
\label{sec:cube}
\noindent
As mentioned in the introduction, Jeroslow \cite{Jeroslow75} showed that for any set $ X \subseteq \{0,1\}^d $, one
needs at most $ 2^{d-1} $ linear inequalities in order to separate $ X $ from $ \{0,1\}^d \setminus X $. If $ P
\subseteq \mathbb{R}^d $ is a polyhedron such that $ P \cap \{0,1\}^d = X $, then, in order to construct a relaxation for $ X $,
we need to additionally separate all points $ \mathbb{Z}^d \setminus \{0,1\}^d $ from $ X $. This can be done by intersecting $
P $ with a relaxation for $ \{0,1\}^d $. We conclude:
\begin{proposition}
\label{prop:binary}
Let $ X \subseteq \{0,1\}^d $. Then $ \rc(X) \leq 2^{d-1} + \rc(\{0,1\}^d) $. \qed
\end{proposition}
\noindent
Motivated by Proposition \ref{prop:binary}, we are interested in the relaxation complexity of $ \{0,1\}^d $. Since $
[0,1]^d = \conv(\{0,1\}^d) $, we obviously have that $ \rc(\{0,1\}^d) \leq 2d $. However, it turns out that one can
construct a relaxation of only $ d+1 $ facets:
\begin{lemma}
\label{lem:rc-cube}
For $ d \geq 1 $, we have
\[
\{0,1\}^d = \setdef{x \in \mathbb{Z}^d}[x_k \leq 1 + \sum_{i=k+1}^d 2^{-i} x_i \ \forall \, k \in [d], \,
x_1 + \sum_{i=2}^d 2^{-i} x_i \geq 0].
\]
\end{lemma}
\begin{proof}
Obviously, any point $ x \in \{0,1\}^d $ satisfies
\begin{equation}
\label{eq:rc-cube-1}
x_k \leq 1 + \sum_{i=k+1}^d 2^{-i} x_i
\end{equation}
for all $ k \in [d] $ and
\begin{equation}
\label{eq:rc-cube-2}
x_1 + \sum_{i=2}^d 2^{-i} x_i \geq 0.
\end{equation}
Let $ x \in \mathbb{Z}^d $ be any integer point that satisfies \eqref{eq:rc-cube-1} for all $ k \in [d] $ as well as
\eqref{eq:rc-cube-2}. First, we claim that $ x_i \leq 1 $ for all $ i \in [d] $: Suppose that $ x_k > 1 $ for some $
k \in [d] $. W.l.o.g. we may assume that $ x_i \leq 1 $ for all $ i > k $ and obtain
\[
x_k \stackrel{\text{\eqref{eq:rc-cube-1}}}{\leq}
1 + \sum_{i=k+1}^d 2^{-i} x_i \leq 1 + \sum_{i=k+1}^d 2^{-i} < 2,
\]
a contradiction. Further, we see that $ x_1 \geq 0 $ since (due to $ x_i \leq 1 $ for all $ i $)
\[
x_1 \stackrel{\text{\eqref{eq:rc-cube-2}}}{\geq}
- \sum_{i=2}^d 2^{-i} x_i \geq - \sum_{i=2}^d 2^{-i} > -1.
\]
It remains to show that $ x_i \geq 0 $ for all $ i \in [d] \setminus \{1\} $. Suppose that $ x_j \leq -1 $ for some
$ j \in [d] \setminus \{1\} $ and $ x_i \geq 0 $ for all $ i < j $. In this case, we claim that $ x_i = 0 $ for all
$ i < j $: Otherwise, let $ k < j $ be the largest index such that $ x_k > 0 $ (and hence $ x_k = 1 $). By inequality
\eqref{eq:rc-cube-1}, we would obtain
\begin{align*}
1 = x_k \leq 1 + \sum_{i=k+1}^d 2^{-i} x_i & = 1 + 2^{-j} x_j + \sum_{i=j+1}^d 2^{-i} x_i \\
& \leq 1 + 2^{-j} \cdot (-1) + \sum_{i=j+1}^d 2^{-i} < 1.
\end{align*}
Thus, we have $ x_i \geq 0 $ for all $ i < j $, and hence, by inequality \eqref{eq:rc-cube-2}, we deduce
\[
0 \leq x_1 + \sum_{i=2}^d 2^{-i} x_i = 2^{-j} x_j + \sum_{i=j+1}^d 2^{-i} x_i \leq
2^{-j} \cdot (-1) + \sum_{i=j+1}^d 2^{-i} < 0,
\]
a contradiction. \qed
\end{proof}
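The system of this lemma can be sanity-checked by brute force on a bounded window of $ \mathbb{Z}^d $ (the lemma itself, of course, covers all of $ \mathbb{Z}^d $). A small Python sketch, using exact rational arithmetic for the coefficients $ 2^{-i} $:

```python
from fractions import Fraction
from itertools import product

def in_relaxation(x):
    """Check the d+1 inequalities of the lemma for an integer point x
    (0-based tuple, so x[i] corresponds to x_{i+1} in the paper)."""
    d = len(x)
    # x_k <= 1 + sum_{i=k+1}^d 2^{-i} x_i  for all k in [d]
    for k in range(d):
        rhs = 1 + sum(Fraction(x[i], 2 ** (i + 1)) for i in range(k + 1, d))
        if x[k] > rhs:
            return False
    # x_1 + sum_{i=2}^d 2^{-i} x_i >= 0
    return x[0] + sum(Fraction(x[i], 2 ** (i + 1)) for i in range(1, d)) >= 0

# Brute-force check on the window {-3,...,3}^d for small d: the integer
# points satisfying the system are exactly the 0/1-points.
for d in (1, 2, 3, 4):
    for x in product(range(-3, 4), repeat=d):
        assert in_relaxation(x) == all(v in (0, 1) for v in x)
```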
\noindent
To show that this construction is best possible, note that if a polyhedron that contains $ \{0,1\}^d $ (and hence is $ d
$-dimensional) has fewer than $ d+1 $ facets, it must be unbounded. In order to show that such a (possibly irrational)
polyhedron must contain infinitely many integer points (and hence cannot be a relaxation of $ \{0,1\}^d $), we make use
of Minkowski's theorem:
\begin{theorem}[\textsc{Minkowski} \cite{Minkowski96}]
\label{thm:minkowski}
Any convex set in $ \mathbb{R}^d $ which is symmetric with respect to the origin and whose volume is greater than $ 2^d $ contains a
non-zero integer point.
\end{theorem}
\noindent
For $ \varepsilon > 0 $ let $ B_{\varepsilon} := \setdef{x \in \mathbb{R}^d}[\|x\|_2 < \varepsilon] $ be the open ball with radius
$ \varepsilon $. As a direct consequence of Minkowski's theorem, the following corollary is useful for our
argumentation.
\begin{corollary}
\label{cor:minkowski}
Let $ c \in \mathbb{R}^d \setminus \{ \mathbb{O} \} $, $ \lambda_0 \in \mathbb{R} $ and $ \varepsilon > 0 $. Then
\[
L(c,\lambda_0,\varepsilon) := \setdef{\lambda c \in \mathbb{R}^d}[\lambda \geq \lambda_0] + B_{\varepsilon}
\]
contains infinitely many integer points.
\end{corollary}
\begin{proof}
Let us define $ L(c, \varepsilon) := \setdef{\lambda c \in \mathbb{R}^d}[\lambda \in \mathbb{R}] + B_{\varepsilon} $. Clearly,
$ L(c, \varepsilon) $ is symmetric with respect to the origin.
Since $ L(c,\varepsilon) $ is symmetric and $ \mathbb{Z}^d $ is invariant under $ x \mapsto -x $, infinitely many integer
points in $ L(c,\varepsilon) $ yield infinitely many in $ L(c,0,\varepsilon) $; moreover, $ L(c,0,\varepsilon)
\setminus L(c,\lambda_0,\varepsilon) $ is bounded. Hence,
it suffices to show that $ L(c,\varepsilon) $ contains infinitely many integer points.
Since the latter statement is obviously true if
\begin{equation}
\label{eq:nointegerpoints}
\setdef{ \lambda c }[\lambda \in \mathbb{R}] \cap \mathbb{Z}^d \neq \{ \mathbb{O} \},
\end{equation}
we assume that \eqref{eq:nointegerpoints} does not hold. Setting $ \varepsilon_1 := \varepsilon $, by Theorem
\ref{thm:minkowski}, $ L(c,\varepsilon_1) $ contains a point $ p_1 \in \mathbb{Z}^d \setminus \{ \mathbb{O} \} $. Since
\eqref{eq:nointegerpoints} does not hold, there exists some $ \varepsilon_2 > 0 $ such that $ L(c,\varepsilon_2)
\subseteq L(c,\varepsilon_1) $ and $ p_1 \notin L(c,\varepsilon_2) $. Again, by Theorem \ref{thm:minkowski}, $
L(c,\varepsilon_2) $ also contains a point $ p_2 \in \mathbb{Z}^d \setminus \{ \mathbb{O} \} $. Further, there is also some $
\varepsilon_3 > 0 $ such that $ L(c,\varepsilon_3) \subseteq L(c,\varepsilon_2) $ and $ p_2 \notin
L(c,\varepsilon_3) $. By iterating these arguments, we obtain an infinite sequence $ (\varepsilon_i, p_i) $ such
that $ p_i \in L(c,\varepsilon_i) \cap \mathbb{Z}^d \subseteq L(c,\varepsilon) \cap \mathbb{Z}^d $ and $ p_i \notin
L(c,\varepsilon_{i+1}) $ for all $ i > 0 $. In particular, all $ p_i $ are distinct. \qed
\end{proof}
\begin{theorem}
For $ d \geq 1 $, we have that $ \rc(\{0,1\}^d) = d + 1 $.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:rc-cube}, we already know that $ \rc(\{0,1\}^d) \leq d + 1 $. Suppose there is a relaxation $ R
\subseteq \mathbb{R}^d $ for $ \{0,1\}^d $ with less than $ d + 1 $ facets. As mentioned above, since $ \dim(R) \geq
\dim(\{0,1\}^d) = d $, $ R $ has to be unbounded.
By induction over $ d \geq 1 $, we will show that any unbounded polyhedron $ R \subseteq \mathbb{R}^d $ with $ \{0,1\}^d
\subseteq R $ contains infinitely many integer points. Hence, it cannot be a relaxation of $ \{0,1\}^d $. Clearly,
our claim is true for $ d = 1 $. For $ d \geq 1 $, let $ c \in \mathbb{R}^d \setminus \{ \mathbb{O} \} $ be a direction such
that $ x + \lambda c \in R $ for any $ x \in R $ and $ \lambda \geq 0 $. Since $ \{0,1\}^d $ is invariant under
affine maps that map a subset of coordinates $ x_i $ to $ 1-x_i $, we may assume that $ c \geq \mathbb{O} $.
If $ c > \mathbb{O} $, then there is some $ \lambda_0 > 0 $ such that $ \lambda_0 c $ is in the interior of $ [0,1]^d
$. Thus,
there is some $ \varepsilon > 0 $ such that $ \lambda_0 c + B_{\varepsilon} \subseteq [0,1]^d \subseteq R $. By the
definition of $ c $ and $ \varepsilon $, we thus obtain that $ L(c,\lambda_0,\varepsilon) \subseteq R $. By
Corollary \ref{cor:minkowski}, it follows that $ L(c,\lambda_0,\varepsilon) $ contains infinitely many integer points, and so
does $ R $.
Otherwise, we may assume that $ c_d = 0 $. Let $ \mathcal{H}_d := \{x \in \mathbb{R}^d : x_d = 0 \} $ and $ p \colon
\mathcal{H}_d \to \mathbb{R}^{d-1} $ be the projection onto the first $ d-1 $ coordinates. Then, the polyhedron $ \tilde{R}
= p(R \cap \mathcal{H}_d) $ is still unbounded and contains $ \{0,1\}^{d-1} = p(\{0,1\}^d) $. By induction, $
\tilde{R} $ contains
infinitely many integer points and so does $ R $. \qed
\end{proof}
\noindent
With Proposition \ref{prop:binary} we thus obtain:
\begin{corollary}
\label{cor:binary}
Let $ X \subseteq \{0,1\}^d $. Then $ \rc(X) \leq 2^{d-1} + d + 1 $. \qed
\end{corollary}
\makeatletter{}\section{Lower Bounds}
\label{sec:lower-bounds}
\noindent
The technique used to obtain a lower bound on the relaxation complexity of the cube is apparently useless to prove
exponential lower bounds for $ \rc(\mathrm{STSP}_n) $ since it only provides a lower bound of at most $ d + 1$.
Therefore, let us introduce another simple framework to provide lower
bounds on the relaxation complexity for polyhedral sets $ X \subseteq \mathbb{Z}^d $.
\begin{definition}
Let $ X \subseteq \mathbb{Z}^d $. A set $ H \subseteq (\aff(X) \cap \mathbb{Z}^d) \setminus \conv (X) $ is called a \emph{hiding set} for $ X
$ if for any two distinct points $ a,b \in H $ we have that $ \conv \{a,b\} \cap \conv (X) \neq \emptyset $.
\end{definition}
\begin{figure}
\begin{center}
\makeatletter{}\def0.15{0.15}
\begin{tikzpicture}
\tikzstyle{simp} = [fill=black];
\tikzstyle{hid} = [fill=black!50];
\clip (-1.25,-1.25) rectangle (2.25,2.25);
\draw[dotted] (-2,-2) grid (3,3);
\draw (0,0) -- (1,0) -- (0,1) -- cycle;
\fill[simp] (0,0) circle (0.15);
\fill[simp] (1,0) circle (0.15);
\fill[simp] (0,1) circle (0.15);
\fill[hid] (1,1) circle (0.15);
\fill[hid] (-1,1) circle (0.15);
\fill[hid] (1,-1) circle (0.15);
\end{tikzpicture}
\end{center}
\caption{Hiding set (gray) for the vertices of the standard 2-simplex (black).}
\label{fig:hiding-set-simplex}
\end{figure}
\begin{proposition}
\label{prop:hiding-set}
Let $ X \subseteq \mathbb{Z}^d $ be polyhedral and $ H \subseteq (\aff(X) \cap \mathbb{Z}^d) \setminus \conv(X) $ a hiding set for $ X $.
Then, $ \rc(X) \geq |H| $.
\end{proposition}
\begin{proof}
Let $ R \subseteq \mathbb{R}^d $ be a relaxation for $ X $. Since $ H \subseteq \aff(X) \subseteq \aff(R) $, any point in $
H $ must be separated from $ X $ by a facet-defining inequality of $ R $.
Suppose that a facet-defining inequality $ \langle \alpha,x \rangle \leq \beta $ of $ R $ is
violated by two distinct points $ a,b \in H $. Since $ H $ is a hiding set, there exists a point $ x \in \conv
\{a,b\} \cap \conv (X) $. Clearly, $ x $ also violates $ \langle \alpha,x \rangle \leq \beta $, which is a
contradiction since $ \langle \alpha,x \rangle \leq \beta $ is valid for $ R \supseteq \conv (X) $.
Thus, any facet-defining inequality of $ R $ is violated by at most one point in $ H $. Hence, $ R $ has at least $
|H| $ facets. \qed
\end{proof}
\noindent
Let $ \Delta_d := \{ \mathbb{O}, \mathbbm{e}_1, \dotsc, \mathbbm{e}_d \} \subseteq \{0,1\}^d $ be the vertices of the standard $
d $-simplex. As an example, see Fig. \ref{fig:hiding-set-simplex} illustrating a hiding set for $ \Delta_2 $, which
yields a simple proof of the fact that any relaxation for these points must have at least three facets. Unfortunately,
it turns out that we cannot construct larger hiding sets for any $ \Delta_d $.
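The hiding-set property in Fig.~\ref{fig:hiding-set-simplex} can also be checked mechanically. The following is an illustrative Python sketch (the point coordinates are read off the figure): it confirms that no gray point lies in $ \conv(\Delta_2) $, while the midpoint of any two gray points does, so every connecting segment meets $ \conv(\Delta_2) $.

```python
from fractions import Fraction

# The three gray candidate hiding-set points from the figure.
hiding = [(1, 1), (-1, 1), (1, -1)]

def in_simplex(p):
    # conv(Delta_2) = { x : x1 >= 0, x2 >= 0, x1 + x2 <= 1 }
    return all(c >= 0 for c in p) and sum(p) <= 1

def midpoint(a, b):
    return tuple(Fraction(ai + bi, 2) for ai, bi in zip(a, b))

# No gray point lies in conv(Delta_2) ...
outside = not any(in_simplex(p) for p in hiding)
# ... yet every midpoint of two gray points does, so each connecting
# segment meets conv(Delta_2): the three points form a hiding set.
meets = all(in_simplex(midpoint(a, b))
            for i, a in enumerate(hiding) for b in hiding[i + 1:])
print(outside, meets)  # True True
```

Here the three midpoints are $(0,1)$, $(1,0)$ and $(0,0)$, i.e., they are even vertices of $ \Delta_2 $.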
\begin{proposition}
Any hiding set for $ \Delta_d $ has cardinality at most $ 3 $.
\end{proposition}
\begin{proof}
Let $ H $ be any hiding set for $ \Delta_d $. Then, for any of the inequalities $ x_i \geq 0 $, $ i \in [d] $ as
well as $ \sum_{i=1}^d x_i \leq 1 $, there exists at most one point in $ H $ violating it. In particular, at most
one of the points in $ H $ lies in the nonnegative orthant.
Let us assume that $ |H| \geq 4 $. Then there are distinct points $ a, b, p, q \in H $ with $ a_i < 0 $ and $ b_j <
0 $ for some $ i, j \in [d] $ with $ i \neq j $. Since $ \lambda a + (1 - \lambda) p \in \conv(\Delta_d) \subseteq \mathbb{R}^d_+ $ for
some $ \lambda \in (0,1) $, we must have $ p_i > 0 $. Since $ p_i \in \mathbb{Z} $, it follows that $ p_i \geq 1 $.
Analogously, we obtain $ p_j, q_i, q_j \geq 1 $.
Consider now any point $ y = \lambda p + (1-\lambda) q \in \conv \{p,q\} $ with $ \lambda \in [0,1] $. Note that $
x_i + x_j \leq 1 $ is a valid inequality for all points $ x \in \conv \Delta_d $. But since $ p_i, p_j, q_i, q_j
\geq 1 $, it is easy to see that $ y_i + y_j \geq 2 $ and hence $ y \notin \conv \Delta_d $. Thus, we obtain that $
\conv \{p,q\} \cap \conv \Delta_d = \emptyset $, a contradiction to $ H $ being a hiding set for $ \Delta_d $. \qed
\end{proof}
This observation shows that the hiding set bound has its limitations. (As a consequence of Proposition
\ref{prop:lower-bound-simplex}, we will see that $ \rc(\Delta_d) $ indeed grows in $ d $.)
Nevertheless, in what follows, we will demonstrate that this bound is a powerful tool to provide exponential lower
bounds on the relaxation complexities of numerous interesting sets $ X $. By dividing these sets into three
classes, we try to identify general structures that are hard to model in the context of relaxations.
\subsection{Connectivity and Acyclicity}
\noindent
In many IP-formulations for practical applications, the feasible solutions are subgraphs that are required to be
connected or
acyclic. Quite often in these cases,
there are polynomial size IP-formulations that use auxiliary variables. For instance, for the \emph{spanning tree polytope} there are even
polynomial size extended formulations \cite{Martin91} that can be adapted to also
work for the \emph{connector polytope} $ \mathrm{CONN}_n $ (see below).
In contrast, we give exponential lower bounds on the relaxation complexities of
some important representatives of this structural class.
\subsubsection{STSP \& ATSP}
As a first application of the hiding set bound, we will show that the subtour relaxation for $ \mathrm{STSP}_n $ is indeed
of asymptotically smallest size (in the exponential sense), i.e., that $ \rc(\mathrm{STSP}_n) = 2^{\Theta(n)} $ holds.
In fact, we will also give an exponential lower
bound for the directed version $ \mathrm{ATSP}_n \subseteq \{0,1\}^{A_n} $, which is the set of characteristic vectors of
directed hamiltonian cycles in the complete directed graph on $ n $ nodes whose arcs we denote by $ A_n $.
We will first construct a large hiding set for $ \mathrm{ATSP}_n $. Towards this end,
let $ n = 2(N+1) $ for some integer $ N \ge 0 $ and let us consider the complete directed graph on the node set
\[
V := \setdef{ v_1, \dotsc, v_{N+1}, w_1, \dotsc, w_{N+1} }
\]
of cardinality~$n$.
For a binary vector $ b \in \{0,1\}^N $ let us further define the arc set
\begin{align*}
\mathcal{E}_b := & \big\{ (v_{N+1},v_1), \, (w_{N+1},w_1) \big\} \\
& \cup \bigcup_{i:b_i = 0} \big\{ ( v_i,v_{i+1} ), \, ( w_i,w_{i+1} ) \big\} \ \cup \
\bigcup_{i:b_i = 1} \big\{ ( v_i,w_{i+1} ), \, ( w_i,v_{i+1} ) \big\},
\end{align*}
see Fig. \ref{fig:hamburger} for an example.
Note that $ \mathcal{E}_b $ is a directed hamiltonian cycle on the node set $ V $ if and only if $ \sum_{i=1}^N b_i $ is
odd.
Thus, the set
\[
H_N := \setdef{ \chi(\mathcal{E}_b) }[b \in \{0,1\}^N, \, \sum_{i=1}^N b_i \text{ is even}]
\]
is clearly disjoint from $ \mathrm{ATSP}_{2(N+1)} $.
In this section, we will only consider graphs on these $ 2(N+1) $ nodes.
It is easy to transfer the following observations to complete graphs with an odd number of nodes
by replacing arc $ ( v_{N+1}, v_1 ) $ in $ \mathcal{E}_b $ by a directed path including one additional node.
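The parity criterion for $ \mathcal{E}_b $ can be verified exhaustively for small $ N $. The following Python sketch (node indices are 0-based, so $ v_1, \dotsc, v_{N+1} $ become $ ('v',0), \dotsc, ('v',N) $) builds $ \mathcal{E}_b $ and checks, for all $ b \in \{0,1\}^5 $, that it is a directed hamiltonian cycle exactly when $ \sum_i b_i $ is odd:

```python
from itertools import product

def arcs(b):
    # \mathcal{E}_b on nodes ('v', i), ('w', i), i = 0..N
    # (0-based version of v_1..v_{N+1}, w_1..w_{N+1})
    N = len(b)
    E = [(('v', N), ('v', 0)), (('w', N), ('w', 0))]
    for i, bi in enumerate(b):
        if bi == 0:
            E += [(('v', i), ('v', i + 1)), (('w', i), ('w', i + 1))]
        else:
            E += [(('v', i), ('w', i + 1)), (('w', i), ('v', i + 1))]
    return E

def is_hamiltonian_cycle(E, n):
    # every node has out-degree one here, so just follow successors
    succ = dict(E)
    node, seen = next(iter(succ)), set()
    while node not in seen:
        seen.add(node)
        node = succ[node]
    return len(seen) == n

N = 5
parity_criterion = all(
    is_hamiltonian_cycle(arcs(b), 2 * (N + 1)) == (sum(b) % 2 == 1)
    for b in product((0, 1), repeat=N))
print(parity_criterion)  # True
```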
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=1.25]
\tikzstyle{vertex} = [fill,circle,inner sep=0pt,minimum size=5pt];
\tikzstyle{arc} = [color=gray,arrows=->];
\tikzstyle{usedarc} = [very thick,arrows=->];
\foreach \i in {1,...,6}{
\node[vertex] (v\i) at (\i,1) {};
\node[vertex] (w\i) at (\i,0) {};
}
\foreach \i in {1,...,5}{
\pgfmathtruncatemacro{\iplusone}{\i + 1}
\draw[arc] (v\i) -- (v\iplusone);
\draw[arc] (w\i) -- (w\iplusone);
\draw[arc] (v\i) -- (w\iplusone);
\draw[arc] (w\i) -- (v\iplusone);
}
\draw[usedarc] (v6) to[out=30,in=150] (v1);
\draw[usedarc] (w6) to[out=-30,in=-150] (w1);
\draw[usedarc] (v1) -- (v2);
\draw[usedarc] (w1) -- (w2);
\draw[usedarc] (v2) -- (w3);
\draw[usedarc] (w2) -- (v3);
\draw[usedarc] (v3) -- (v4);
\draw[usedarc] (w3) -- (w4);
\draw[usedarc] (v4) -- (v5);
\draw[usedarc] (w4) -- (w5);
\draw[usedarc] (v5) -- (w6);
\draw[usedarc] (w5) -- (v6);
\foreach \i in {1,...,6}{
\node at (\i,1.3) {$v_\i$};
\node at (\i,-0.3) {$w_\i$};
}
\end{tikzpicture}
\end{center}
\caption{Construction of the set $ \mathcal{E}_b $ for $ b = (0,1,0,0,1) $.}
\label{fig:hamburger}
\end{figure}
\begin{lemma}
\label{lem:hiding-set-atsp}
$ H_N $ is a hiding set for $ \mathrm{ATSP}_{2(N+1)} $.
\end{lemma}
\begin{proof}
First, note that
\[
H_N \subseteq \aff(\mathrm{ATSP}_{2(N+1)}) = \setdef{x \in \mathbb{R}^A}[x(\delta^{\mathrm{in}}(v)) = x(\delta^{\mathrm{out}}(v))
= 1, \ \forall \, v \in V ]
\]
holds,
where $ A $ is the set of arcs in the complete directed graph on~$V$. Let $ b, b' \in
\{0,1\}^N $ be distinct with $ \sum_{i=1}^N b_i $ even and $ \sum_{i=1}^N b'_i $ even.
Let $ j \in [N] $ be an index with $ b_j \neq b'_j $.
Consider the binary vectors $ c := b + (1 - 2 b_j) \mathbbm{e}_j $ and $ c' := b' + (1 - 2 b'_j) \mathbbm{e}_j $,
obtained by flipping the $ j $-th coordinate of $ b $ and $ b' $, respectively.
Since $ b $ and $ b' $ are even, both $ \sum_{i=1}^N c_i $ and $ \sum_{i=1}^N c'_i $ are odd and hence $ \chi(\mathcal{E}_c) $
and $ \chi(\mathcal{E}_{c'}) $ are both contained in $ \mathrm{ATSP}_{2(N+1)} $.
Finally, it is easy to check that
\[
\chi(\mathcal{E}_b) + \chi(\mathcal{E}_{b'}) = \chi(\mathcal{E}_c) + \chi(\mathcal{E}_{c'})
\]
holds and hence $ \conv(\{ \chi(\mathcal{E}_b), \chi(\mathcal{E}_{b'})\} ) \cap \conv(\mathrm{ATSP}_{2(N+1)}) \neq \emptyset $.
\end{proof}
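The exchange identity in this proof can be confirmed exhaustively for small $ N $. The Python sketch below rebuilds $ \mathcal{E}_b $ (0-based indices) and, for every pair of distinct even vectors $ b, b' $, flips the $ j $-th coordinate of both at some index $ j $ where they differ and compares the resulting arc multisets:

```python
from collections import Counter
from itertools import product

def arcs(b):
    # \mathcal{E}_b with 0-based node indices, as in the definition above
    N = len(b)
    E = [(('v', N), ('v', 0)), (('w', N), ('w', 0))]
    for i, bi in enumerate(b):
        if bi == 0:
            E += [(('v', i), ('v', i + 1)), (('w', i), ('w', i + 1))]
        else:
            E += [(('v', i), ('w', i + 1)), (('w', i), ('v', i + 1))]
    return E

def flip(b, j):
    return tuple(1 - x if i == j else x for i, x in enumerate(b))

N = 4
evens = [b for b in product((0, 1), repeat=N) if sum(b) % 2 == 0]
identity_holds = True
for b1 in evens:
    for b2 in evens:
        if b1 == b2:
            continue
        j = next(i for i in range(N) if b1[i] != b2[i])
        c1, c2 = flip(b1, j), flip(b2, j)
        assert sum(c1) % 2 == 1 and sum(c2) % 2 == 1  # both odd
        # chi(E_b) + chi(E_{b'}) = chi(E_c) + chi(E_{c'}) as multisets
        identity_holds &= (Counter(arcs(b1)) + Counter(arcs(b2))
                           == Counter(arcs(c1)) + Counter(arcs(c2)))
print(identity_holds)  # True
```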
\begin{theorem}
The asymptotic growth of $ \rc(\mathrm{ATSP}_n) $ and $ \rc(\mathrm{STSP}_n) $ is $ 2^{\Theta(n)} $.
\end{theorem}
\begin{proof}
Lemma \ref{lem:hiding-set-atsp} shows that $ H_N $ is a hiding set for $ \mathrm{ATSP}_n $. By replacing all directed arcs
with their undirected versions, the set $ H_N $ yields a hiding set for $ \mathrm{STSP}_n $. By Proposition
\ref{prop:hiding-set}, we obtain a lower bound of $ |H_N| = 2^{\Omega(n)} $ for $ \rc(\mathrm{ATSP}_n) $ and $ \rc(\mathrm{STSP}_n)
$. To complete the argument, note that both $ \mathrm{ATSP}_n $ and $ \mathrm{STSP}_n $ have relaxations of size $
2^{\Theta(n)} $, which are variants of the formulation in \eqref{eq:subtour-relaxation}. \qed
\end{proof}
\subsubsection{Connected Sets}
Let $ \mathrm{CONN}_n $ be the set of all characteristic vectors of edge sets that form a connected spanning subgraph in the complete
graph on $ n $ nodes. The polytope
\[
\setdef{ x \in [0,1]^{E_n} }[ x(\delta(S)) \geq 1 \ \forall \, \emptyset \neq S \subsetneq V_n ]
\]
is a relaxation for $ \mathrm{CONN}_n $. Thus, we have that $ \rc(\mathrm{CONN}_n) \leq \O(2^n) $.
For a lower bound, consider again the undirected version of our set $ H_N $. Since each point in $ H_N $ is the
characteristic vector of a node-disjoint union of two cycles, we have that $ H_N \cap \mathrm{CONN}_n = \emptyset $. Further, we know that for any $ a, b
\in H_N $ we have that
\[
\emptyset \neq \conv \{a,b\} \cap \conv (\mathrm{STSP}_n) \subseteq \conv \{a,b\} \cap \conv (\mathrm{CONN}_n)
\]
and since $ H_N \subseteq \aff(\mathrm{CONN}_n) = \mathbb{R}^{E_n} $, we see that $ H_N $ is also a hiding set for $ \mathrm{CONN}_n $. We
obtain:
\begin{corollary}
The asymptotic growth of $ \rc(\mathrm{CONN}_n) $ is $ 2^{\Theta(n)} $.
\end{corollary}
\subsubsection{Branchings and Forests}
\noindent
Besides connectivity, we show that, in general, it is also hard to force acyclicity in the context of relaxation. Let
therefore $ \mathrm{ARB}_n $ ($ \mathrm{SPT}_n $) be the set of characteristic vectors of arborescences (spanning trees) in the complete
directed (undirected) graph.
\begin{theorem}
\label{thm:arb-spt}
The asymptotic growth of $ \rc(\mathrm{ARB}_n) $ and $ \rc(\mathrm{SPT}_n) $ is $ 2^{\Theta(n)} $.
\end{theorem}
\begin{proof}
First, note that both the \emph{arborescence polytope} and the spanning tree polytope (i.e., $ \conv(\mathrm{ARB}_n) $ and $
\conv(\mathrm{SPT}_n) $) have $ \O(2^n) $ facets \cite{Schrijver03} and hence we have an upper bound of $ \O(2^n) $ for both
$ \rc(\mathrm{ARB}_n) $ and $ \rc(\mathrm{SPT}_n) $.
For a lower bound, let us modify the definition of $ \mathcal{E}_b $ by removing arc $ (w_{N+1}, w_1) $.
Then, if $ b \in \{0,1\}^N $ with $ \sum_{i=1}^N b_i $ even, we have that $ \mathcal{E}_b $ is a
node-disjoint union of a cycle and a path and hence not an arborescence. By following the proof of
Lemma \ref{lem:hiding-set-atsp},
we still have
\[
\chi(\mathcal{E}_b) + \chi(\mathcal{E}_{b'}) = \chi(\mathcal{E}_c) + \chi(\mathcal{E}_{c'}),
\]
where $ \mathcal{E}_c $ and $ \mathcal{E}_{c'} $ are spanning arborescences (in fact, they are directed paths
visiting each node). Since $ \aff(\mathrm{ARB}_n) = \mathbb{R}^{A_n} $, we therefore obtain that the modified set $ H_N $ is a
hiding set for $ \mathrm{ARB}_n $. By undirecting all arcs, $ H_N $ also yields a hiding set for $ \mathrm{SPT}_n $.
Again, by Proposition \ref{prop:hiding-set}, we deduce a lower bound of $ |H_N| = 2^{\Omega(n)} $ for both $
\rc(\mathrm{ARB}_n) $ and $ \rc(\mathrm{SPT}_n) $. \enspace \qed
\end{proof}
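The modified construction can likewise be checked mechanically for small $ N $. The Python sketch below (0-based indices, my own encoding) drops the arc $ (w_{N+1}, w_1) $ and tests the in-degree/reachability characterization of spanning arborescences, confirming that the modified $ \mathcal{E}_b $ is a spanning arborescence exactly when $ \sum_i b_i $ is odd:

```python
from itertools import product

def arcs_arb(b):
    # modified \mathcal{E}_b: the arc (w_{N+1}, w_1) is removed
    N = len(b)
    E = [(('v', N), ('v', 0))]
    for i, bi in enumerate(b):
        if bi == 0:
            E += [(('v', i), ('v', i + 1)), (('w', i), ('w', i + 1))]
        else:
            E += [(('v', i), ('w', i + 1)), (('w', i), ('v', i + 1))]
    return E

def is_spanning_arborescence(E, nodes):
    indeg = {u: 0 for u in nodes}
    out = {u: [] for u in nodes}
    for t, h in E:
        indeg[h] += 1
        out[t].append(h)
    roots = [u for u in nodes if indeg[u] == 0]
    if len(roots) != 1 or any(indeg[u] != 1 for u in nodes if u != roots[0]):
        return False
    seen, stack = set(), [roots[0]]   # every node must be reachable
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(out[u])
    return len(seen) == len(nodes)

N = 4
nodes = [(s, i) for s in 'vw' for i in range(N + 1)]
arb_iff_odd = all(
    is_spanning_arborescence(arcs_arb(b), nodes) == (sum(b) % 2 == 1)
    for b in product((0, 1), repeat=N))
print(arb_iff_odd)  # True
```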
\begin{remark}
Since in the proof of Theorem \ref{thm:arb-spt} the arborescences $ \mathcal{E}_c $ and $ \mathcal{E}_{c'} $ are
both rooted at node $ w_1 $, the statements
even hold if the sets $ \mathrm{ARB}_n $ and $ \mathrm{SPT}_n $ are restricted to characteristic vectors of arborescences/trees
rooted at a fixed node.
\end{remark}
\noindent
Let $ \mathrm{BRANCH}_n $ ($ \mathrm{FORESTS}_n $) be the set of characteristic vectors of branchings (forests) in the complete directed
(undirected) graph.
\begin{corollary}
The asymptotic growth of $ \rc(\mathrm{BRANCH}_n) $ and $ \rc(\mathrm{FORESTS}_n) $ is $ 2^{\Theta(n)} $.
\end{corollary}
\begin{proof}
The claim follows from Theorem \ref{thm:arb-spt} and the facts that
\begin{align*}
\mathrm{ARB}_n &= \mathrm{BRANCH}_n \cap \Big\{ x \in \mathbb{R}^{A_n} : \sum_{a \in A_n} x_a = n-1 \Big\} \\
\mathrm{SPT}_n &= \mathrm{FORESTS}_n \cap \Big\{ x \in \mathbb{R}^{E_n} : \sum_{e \in E_n} x_e = n-1 \Big\}. \enspace \qed
\end{align*}
\end{proof}
\subsection{Distinctness}
\noindent
Another common component of practical IP-formulations is the requirement of distinctness of a certain set of vectors or
variables. Here, we consider two general cases in which we can also show that the benefit of auxiliary variables is
essential.
\subsubsection{Binary All-Different}
\noindent
In the case of the \emph{binary all-different} constraint, one requires the distinctness of rows of a binary matrix with
$ m $ rows and $ n $ columns. The set of feasible points is therefore defined by
\[
\mathrm{DIFF}_{m,n} := \setdef{ x \in \{0,1\}^{m \times n} }[ x \text{ has pairwise distinct rows}].
\]
\noindent
As an example, \cite{LeeM07} give IP-formulations to solve the coloring problem in which they binary
encode the color classes assigned to each node. As a consequence, certain sets of encoding vectors have to be distinct.
By separating each possible pair of equal rows by one inequality, it is further easy to give a relaxation for $ \mathrm{DIFF}_{m,n} $
that has at most $ \binom{m}{2} 2^n + 2mn $ facets. In the case of $ m = 2 $, for instance, this bound turns out to be
almost tight:
\begin{theorem}
For all $ n \geq 1 $, we have that $ \rc(\mathrm{DIFF}_{2,n}) \geq 2^n $.
\end{theorem}
\begin{proof}
Let us consider the set
\[
H_{2,n} := \setdef{ (x,x)^T \in \{0,1\}^{2 \times n}}[x \in \{0,1\}^n].
\]
For $ x, y \in \{0,1\}^n $ distinct, we obviously have that
\[
\frac{1}{2} \left( (x,x)^T + (y,y)^T \right) = \frac{1}{2} \left( (x,y)^T + (y,x)^T \right) \in \conv(\mathrm{DIFF}_{2,n}).
\]
Since $ H_{2,n} \cap \mathrm{DIFF}_{2,n} = \emptyset $ and $ H_{2,n} \subseteq \aff(\mathrm{DIFF}_{2,n}) = \mathbb{R}^{2 \times n} $, $
H_{2,n} $ is a hiding set for $ \mathrm{DIFF}_{2,n} $ and by Proposition \ref{prop:hiding-set} we obtain that $
\rc(\mathrm{DIFF}_{2,n}) \geq |H_{2,n}| = 2^n $. \enspace \qed
\end{proof}
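The midpoint identity underlying this proof is easy to confirm computationally; the following Python sketch checks it entrywise for all pairs of distinct $ x, y \in \{0,1\}^3 $ (note that $ (x,y)^T $ and $ (y,x)^T $ have distinct rows by construction, so both lie in $ \mathrm{DIFF}_{2,n} $):

```python
from itertools import product

def add_rows(A, B):
    # entrywise sum of two 2 x n matrices given as row tuples
    return tuple(tuple(p + q for p, q in zip(ra, rb)) for ra, rb in zip(A, B))

n = 3
# (x,x) + (y,y) = (x,y) + (y,x) entrywise, hence the midpoints agree
midpoint_identity = all(
    add_rows((x, x), (y, y)) == add_rows((x, y), (y, x))
    for x in product((0, 1), repeat=n)
    for y in product((0, 1), repeat=n) if x != y)
print(midpoint_identity)  # True
```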
\subsubsection{Permutahedron}
\noindent
As a case in which one does not require the distinctness of binary vectors but of a set of numbers let us consider the
set
\[
\mathrm{PERM}_n := \setdef{ (\pi(1),\dotsc,\pi(n)) \in \mathbb{Z}^n}[\pi \in \mathcal{S}_n],
\]
which is the vertex set of the \emph{permutahedron} $ \conv(\mathrm{PERM}_n) $. Rado \cite{Rado52} showed that the permutahedron
can be described via
\begin{align}
\nonumber
\conv(\mathrm{PERM}_n) = \bigg\{ x \in \mathbb{R}^n : \sum_{i=1}^n x_i &= \frac{n(n+1)}{2} \\
\nonumber
\sum_{i \in S} x_i & \geq \frac{|S| (|S|+1)}{2} \ \text{for all } \emptyset \neq S \subset [n] \\
\label{eq:perm-ineq}
x &\geq \mathbb{O} \bigg\}
\end{align}
\noindent
and hence has $ \O(2^n) $ facets. Apart from that, it is a good example of a polytope having many different, very
compact extended formulations, see, e.g., \cite{Goemans14}. In contrast, we show that the relaxation complexity of $
\mathrm{PERM}_n $ has exponential growth in $ n $:
\begin{theorem}
The asymptotic growth of $ \rc(\mathrm{PERM}_n) $ is $ 2^{\Theta(n)} $.
\end{theorem}
\begin{proof}
Let $ m := \lfloor \frac{n}{2} \rfloor $. For any set $ S \subseteq [n] $ with $ |S| = m $ select an integer vector
$ x^S \in \mathbb{Z}^n $
with $ \{ x^S_i : i \in S\} = \setdef{1,\dotsc,m-1} $ and $m-1$
occurring twice among the $x^S_i$ ($i\in S$) and $ \{ x^S_i : i \in [n] \setminus S\} =
\setdef{m+2,\dotsc,n} $ and $m+2$ occurring twice among the $x^S_i$ ($i\in [n] \setminus S$).
Such a vector is not contained in $ \conv(\mathrm{PERM}_n) $ as
\[
\sum_{i \in S} x_i^S = 1 + 2 + \dotsc + (|S| - 1) + (|S| - 1) < \frac{|S|(|S|+1)}{2}.
\]
On the other hand, note that this is the only constraint from \eqref{eq:perm-ineq} that is violated by $ x^S $. In
particular, $ x^S \in \aff(\mathrm{PERM}_n) $ holds.
Let $ S_1, S_2 \subseteq [n] $ with $ |S_1| = |S_2| = m $ be distinct. We will show that $ x := \frac{1}{2} \cdot
(x^{S_1} + x^{S_2} ) \in \conv(\mathrm{PERM}_n) $ holds. Since $ x $ satisfies all constraints that are satisfied by both
$ x^{S_1} $ and $ x^{S_2} $, it suffices to show that $ \sum_{i \in T} x_i \geq \frac{|T|(|T|+1)}{2} $ holds for $ T
\in \{ S_1, S_2 \} $. W.l.o.g. we may assume that $ T = S_1 $ and obtain
\begin{align*}
\sum_{i \in S_1} x_i & = \frac{1}{2} \sum_{i \in S_1} x_i^{S_1} + \frac{1}{2} \sum_{i \in S_1} x_i^{S_2} \\
& = \frac{1}{2} \left( \frac{m(m+1)}{2} - 1 \right) + \frac{1}{2} \sum_{i \in S_1} x_i^{S_2} \\
& \geq \frac{1}{2} \left( \frac{m(m+1)}{2} - 1 \right) + \frac{1}{2} \left( \frac{m(m+1)}{2} + 2 \right) \\
& = \frac{m(m+1)}{2} + \frac{1}{2} \geq \frac{|T|(|T|+1)}{2}.
\end{align*}
Thus, the set $ H := \setdef{x^S}[S \subseteq [n], \, |S| = m] $ is a hiding set for $ \mathrm{PERM}_n $. Our claim follows
from Proposition \ref{prop:hiding-set} and the fact that $ |H| = \binom{n}{\lfloor \frac{n}{2} \rfloor} =
2^{\Theta(n)} $. \qed
\end{proof}
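This construction can be verified exhaustively for small $ n $ via Rado's description of the permutahedron. The Python sketch below fixes one concrete choice of $ x^S $ (which value is assigned to which index is arbitrary in the proof; here they are assigned in sorted order) and checks, for $ n = 6 $, that each $ x^S $ lies outside $ \conv(\mathrm{PERM}_n) $ while every midpoint of two of them lies inside:

```python
from fractions import Fraction
from itertools import combinations

n = 6
m = n // 2          # |S| = m

def x_S(S):
    inside = [m - 1] + list(range(1, m))           # 1,..,m-1 with m-1 twice
    outside = [m + 2] + list(range(m + 2, n + 1))  # m+2,..,n with m+2 twice
    x = [0] * n
    for i, val in zip(sorted(S), inside):
        x[i] = val
    for i, val in zip([i for i in range(n) if i not in S], outside):
        x[i] = val
    return x

def in_permutahedron(x):
    # Rado's description: total sum fixed, all subset sums bounded below
    if sum(x) != Fraction(n * (n + 1), 2):
        return False
    return all(sum(x[i] for i in T) >= Fraction(t * (t + 1), 2)
               for t in range(1, n)
               for T in combinations(range(n), t))

subsets = list(combinations(range(n), m))
all_outside = all(not in_permutahedron(x_S(S)) for S in subsets)
all_midpoints_inside = all(
    in_permutahedron([Fraction(a + b, 2) for a, b in zip(x_S(S1), x_S(S2))])
    for S1 in subsets for S2 in subsets if S1 != S2)
print(all_outside, all_midpoints_inside)  # True True
```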
\subsection{Parity}
\label{sec:parity}
\noindent
The final structural class we consider deals with the restriction that the number of selected elements of a given set
has a certain parity. Let us call a binary vector $ a \in \{0,1\}^d $ \emph{even} (\emph{odd}) if the sum of its entries
is even (odd). In \cite{Jeroslow75} it is shown that the number of inequalities needed to separate
\[
\mathrm{EVEN}_n := \setdef{ x \in \{0,1\}^n }[ x \text{ is even}]
\]
from all other points in $ \{0,1\}^n $ is exactly $ 2^{n-1} $. This is done by showing that
\[
\mathrm{ODD}_n := \setdef{ x \in \{0,1\}^n }[ x \text{ is odd}]
\]
is a hiding set for $ \mathrm{EVEN}_n $ (although the notion is different from ours). Hence, with Corollary \ref{cor:binary}, we
obtain:
\begin{theorem}
\label{thm:parity}
The asymptotic growth of $ \rc(\mathrm{EVEN}_n) $ is $ \Theta(2^n) $. \qed
\end{theorem}
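The hiding-set argument behind Theorem \ref{thm:parity} rests on a simple bit-flip exchange: for distinct odd $ a, b $, flipping a coordinate in which they differ yields even points $ c, c' $ with $ a + b = c + c' $. A short Python sketch confirming this for $ n = 4 $:

```python
from itertools import product

n = 4
odd_pts = [x for x in product((0, 1), repeat=n) if sum(x) % 2 == 1]

def flip(x, j):
    return tuple(1 - v if i == j else v for i, v in enumerate(x))

hiding_property = True
for a in odd_pts:
    for b in odd_pts:
        if a == b:
            continue
        j = next(i for i in range(n) if a[i] != b[i])
        c, d = flip(a, j), flip(b, j)   # both even, with a + b = c + d
        hiding_property &= sum(c) % 2 == 0 and sum(d) % 2 == 0
        hiding_property &= all(p + q == r + s
                               for p, q, r, s in zip(a, b, c, d))
print(hiding_property)  # True
```

Hence the midpoint of any two distinct odd points is a convex combination of two even points, so $ \mathrm{ODD}_n $ is indeed a hiding set for $ \mathrm{EVEN}_n $.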
\subsubsection{$ T $-joins}
\noindent
As a well-known representative of this structural class let us consider $ T\text{-}\mathrm{JOINS}_n $, which is, for given $ T \subseteq
V_n $, defined as the set of characteristic vectors of $ T $-joins in the complete graph on $ n $ nodes. Let us recall
that a $ T $-join is a set $ J \subseteq E_n $ of edges such that $ T $ is equal to the set of nodes of odd degree in
the graph $ (V_n, J) $. Note that if a $ T $-join exists, then $ |T| $ is even.
\begin{theorem}
Let $ n $ be even and $ T \subseteq V_n $ with $ |T| $ even. Then, $ \rc(T\text{-}\mathrm{JOINS}_n) \geq 2^{\frac{n}{4}-1} $.
\end{theorem}
\begin{proof}
Since $ n $ is even and $ |T| $ is even, we may partition $ V_n $ into pairwise disjoint sets $ T_1, \, T_2, \, U_1,
\, U_2 $ with $ T = T_1 \cup T_2 $, $ k = |T_1| = |T_2| $ and $ \ell = |U_1| = |U_2| $. Let $ M_1, \dotsc, M_k $ be
pairwise edge-disjoint matchings of cardinality $ k $ that connect nodes from $ T_1 $ with nodes from $ T_2 $.
Analogously, let $ N_1, \dotsc, N_\ell $ be pairwise edge-disjoint matchings of cardinality $ \ell $ that connect
nodes from $ U_1 $ with nodes from $ U_2 $. For $ b \in \{0,1\}^k $ and $ c \in \{0,1\}^\ell $ let
\[
J(b,c) := \bigg( \bigcup_{i : b_i = 1} M_i \bigg) \cup \bigg( \bigcup_{j : c_j = 1} N_j \bigg) \subseteq E_n.
\]
By definition, $ J(b,c) $ is a $ T $-join if and only if $ b $ is odd and $ c $ is even. Let $ b^* \in \{0,1\}^k $
be odd and $ c^* \in \{0,1\}^\ell $ be even, both arbitrarily chosen but fixed. Since $ \mathrm{ODD}_n $ is a hiding set for $
\mathrm{EVEN}_n $ and vice versa, it is now easy to see that both sets
\begin{align*}
H_1 & := \setdef{ \chi(J(b,c^*)) }[b \in \{0,1\}^k \text{ even}] \\
H_2 & := \setdef{ \chi(J(b^*,c)) }[c \in \{0,1\}^\ell \text{ odd}],
\end{align*}
are hiding sets for $ T\text{-}\mathrm{JOINS}_n $. Our claim follows from Proposition \ref{prop:hiding-set} and the fact that
\begin{align*}
\max \setdef{|H_1|, |H_2|} = \max \setdef{2^{k-1}, 2^{\ell-1}} & = \max \setdef{2^{\frac{1}{2}|T|-1},
2^{\frac{1}{2}(n-|T|)-1}} \\
& \geq 2^{\frac{1}{2} \cdot \frac{n}{2} - 1}. \enspace \qed
\end{align*}
\end{proof}
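The parity criterion for $ J(b,c) $ can be confirmed on a small instance. The Python sketch below (the concrete partition and the cyclic-shift matchings are my own choices, consistent with the requirements in the proof) takes $ n = 8 $, $ k = \ell = 2 $, and checks that $ J(b,c) $ is a $ T $-join exactly when $ b $ is odd and $ c $ is even:

```python
from itertools import product

k, ell = 2, 2
T1, T2, U1, U2 = [0, 1], [2, 3], [4, 5], [6, 7]
T = set(T1 + T2)
n = 8

# pairwise edge-disjoint perfect matchings between T1 and T2
# (resp. U1 and U2) obtained from cyclic shifts
M = [[(T1[a], T2[(a + i) % k]) for a in range(k)] for i in range(k)]
N = [[(U1[a], U2[(a + j) % ell]) for a in range(ell)] for j in range(ell)]

def odd_degree_nodes(edges):
    deg = {u: 0 for u in range(n)}
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return {u for u, d in deg.items() if d % 2 == 1}

tjoin_criterion = all(
    (odd_degree_nodes([e for i in range(k) if b[i] for e in M[i]]
                      + [e for j in range(ell) if c[j] for e in N[j]]) == T)
    == (sum(b) % 2 == 1 and sum(c) % 2 == 0)
    for b in product((0, 1), repeat=k)
    for c in product((0, 1), repeat=ell))
print(tjoin_criterion)  # True
```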
\section{The Role of Rationality}
\label{sec:rationality}
\noindent
An interesting question is whether it may help, in order to construct a
relaxation having few facets, to use irrational coordinates in the description of a relaxation. First, let us show that
one does not lose too much when restricting to rational relaxations only:
\begin{proposition}
\label{thm:rc-irrational}
Let $ X \subseteq \mathbb{Z}^d $ be finite and $ \rc_{\mathbb{Q}}(X) $ be the smallest number of facets of any rational relaxation
for $ X $. Then, $ \rc_{\mathbb{Q}}(X) \leq \rc(X) + \dim(X) + 1 $.
\end{proposition}
\begin{proof}
Since $ X $ is finite, there exists a rational simplex $ \Delta \subseteq \mathbb{R}^d $ of dimension $ \dim(X) $ such that
$ X \subseteq \Delta $. Let $ R $ be any relaxation of $ X $ having $ f $ facets and set $ B := (\mathbb{Z}^d \setminus X)
\cap \Delta $. Since $ B \cap R = \emptyset $ and $ |B| < \infty $, we are able to slightly perturb the
facet-defining inequalities of $ R $ in order to obtain a polyhedron $ \tilde{R} $ such that $ B \cap \tilde{R} =
\emptyset $ and $ \tilde{R} $ is rational. Now $ \tilde{R} \cap \Delta $ is still a relaxation for $ X $, which is
rational and has at most $ f + (\dim(\Delta) + 1) = f + \dim(X) + 1 $ facets. \qed
\end{proof}
\noindent
However, we are not aware of any polyhedral set $ X $ where $ \rc(X) < \rc_{\mathbb{Q}}(X) $. In fact, we do not even know if $
\rc(\Delta_d) < d + 1 $ holds. Note that any relaxation $ R $ for $ \Delta_d $ that has fewer than $
d+1 $ facets has to be unbounded. Hence, if $ R $ were rational, it would contain a rational ray and hence infinitely
many integer points, which shows $ \rc_{\mathbb{Q}}(\Delta_d) = d+1 $.
In order to show that any relaxation for $ \{0,1\}^d $ has at least $ d+1 $ facets, we basically used the fact that for
any line $ L(c) := \{ \lambda c \in \mathbb{R}^d : \lambda \in \mathbb{R} \} $ with $ c \in \mathbb{R}^d \setminus \{ \mathbb{O} \} $, the set $
[0,1]^d + L(c) $ contains infinitely many integer points. Unfortunately, such a statement is not true for the general
simplex:
\begin{customexample}
Consider the $ 5 $-dimensional simplex
\[
S := \conv \setdef{ \mathbb{O}, \mathbbm{e}_1, \mathbbm{e}_2, \mathbbm{e}_3, \mathbbm{e}_1 + \mathbbm{e}_3 + \mathbbm{e}_4,
\mathbbm{e}_2 + \mathbbm{e}_3 + \mathbbm{e}_5 } \subseteq \mathbb{R}^5.
\]
The polyhedron $ S + L((0,0,0,1,\sqrt{2})^T) $ contains no integer points other than those in $ S $.
Indeed, let $ p + \lambda \cdot (0,0,0,1,\sqrt{2})^T $ be integral for some $ p \in S $ and some $ \lambda \in \mathbb{R} $.
It is easy to see that $ p $ has to be one of the vertices of $ S $ and hence $ p $ is integral.
Thus, $ \lambda $ and $ \lambda \sqrt{2} $ are both integral, which implies $ \lambda = 0 $.
\end{customexample}
Since $ S $ does not contain other integer points than its vertices, we can apply a unimodular transformation and
obtain a direction $ c' \in \mathbb{R}^5 $ such that $ R = \conv(\Delta_5) + L(c') $ is indeed an unbounded relaxation for $
\Delta_5 $. However, it can be verified that $ R $ has more than $ 6 $ facets in this case.
These simple observations lead to the following questions, whose answers are not known to the authors:
\begin{enumerate}
\item Is it true that $ \rc(\Delta_d) = d+1 $ holds for all $ d $?
\item Is it true that $ \rc(X) \geq \dim(X)+1 $ holds for all polyhedral (or at least finite) sets $ X \subseteq
\mathbb{R}^d $?
\item Is there any polyhedral (or even finite) set $ X \subset \mathbb{Z}^d $ such that $ \rc(X) < \rc_{\mathbb{Q}}(X) $?
\end{enumerate}
We close our paper by giving at least some non-constant lower bound on the relaxation complexity of $ \Delta_d $:
\begin{proposition}
\label{prop:lower-bound-simplex}
For all $ k \geq 1 $, we have $ \rc(\Delta_{k!}) \geq k $.
\end{proposition}
\begin{proof}
Clearly, the claim is true for $ k = 1 $. Further, it is easy to see that $ \rc(\Delta_m) \leq \rc(\Delta_n) $ holds
for all $ m \leq n $.
Thus, by setting $ d(k) := k!-1 $, it suffices to show that even $ \rc(\Delta_{d(k)}) \geq k $ holds for all $ k
\geq 2 $, which we will show by induction over $ k $. Note that the latter statement is true for $ k = 2 $. Let us
assume that it is wrong for some $ k \geq 3 $, i.e., there exists a relaxation $ R \subseteq \mathbb{R}^{d(k)} $ of $
\Delta_{d(k)} $ that has $ \ell < k $ facets. Since $ \ell < k \leq d(k) = \dim(R) $, $ R $ is unbounded.
We claim that every integer point $ p $ of $ \Delta_{d(k)} $ must lie in at least one facet of $ R $. Otherwise,
since $ R $ is unbounded, there exist some $ \varepsilon > 0 $ and $ c \in \mathbb{R}^d \setminus \{\mathbb{O}\} $ such that $
p + L(c,0,\varepsilon) $ is contained in $ R $. By Corollary \ref{cor:minkowski}, $ L(c,0,\varepsilon) $ contains
infinitely many integer points and so does $ R $, a contradiction.
Hence, there must be a facet of $ R $ that contains $ t \geq \frac{d(k)+1}{\ell} $ vertices $ v_1,\dotsc,v_t $
of $ \Delta_{d(k)} $. Let $ \mathcal{H} $ be the affine subspace spanned by $ v_1,\dotsc,v_t $ and let $ \varphi \colon
\mathcal{H} \cap \mathbb{Z}^{d(k)} \to \mathbb{Z}^{t-1} $ be an affine isomorphism mapping $ \{ v_1,\dotsc,v_t \} $ to $
\Delta_{t-1} $. Extending $ \varphi $ to an affine map from $ \mathcal{H} $ to $ \mathbb{R}^{t-1} $ yields that $ R' :=
\varphi(R \cap \mathcal{H}) $ is a relaxation of $ \Delta_{t-1} $. Since
\begin{align*}
t-1 \geq \frac{d(k)+1}{\ell} - 1 \geq \frac{d(k)+1}{k} - 1 = \frac{k!-1+1}{k}-1 = (k-1)! - 1 = d(k-1),
\end{align*}
by induction, we must have that $ R' $ has at least $ k - 1 $ facets. On the other hand, note that $ R' $ has at
most $ \ell-1 $ facets. This implies $ k \leq \ell $, a contradiction to our assumption. \qed
\end{proof}
\subsubsection*{Acknowledgements}
We would like to thank Gennadiy Averkov for valuable comments on this work.
\bibliographystyle{spmpsci}
Islands – an album by Mike Oldfield released in 1987. It contains a set of songs and one long instrumental piece, typical of Oldfield's output in the 1980s. Mike Oldfield composed and wrote the lyrics of all the tracks. The multi-thematic suite The Wind Chimes contains many musical motifs that are developed in the songs on the album. The main vocalist was Anita Hegerland, Mike Oldfield's partner at the time. Bonnie Tyler, Kevin Ayers, Max Bacon and Jim Price also sang on the songs. The album was released in two different versions: American and British.
Track listing
UK version
"The Wind Chimes Part One And Part Two" - 21:49
"Islands" - 4:19 (vocals - Bonnie Tyler)
"Flying Start" - 3:36 (vocals - Kevin Ayers)
"North Point" - 3:33 (vocals - Anita Hegerland)
"Magic Touch" - 4:14 (vocals - Jim Price)
"The Time Has Come" - 3:51 (vocals - Anita Hegerland)
"When The Nights On Fire" - 6:41 (vocals - Anita Hegerland)
US version
"The Wind Chimes Part 1" - 2:31
"The Wind Chimes Part 2" - 19:15
"Magic Touch" - 4:15 (vocals - Max Bacon)
"The Time Has Come" - 3:53 (vocals - Anita Hegerland)
"North Point" - 3:33 (vocals - Anita Hegerland)
"Flying Start" - 3:38 (vocals - Kevin Ayers)
"Islands" - 4:18 (vocals - Bonnie Tyler)
Video album
In 1988 the VHS tape The Wind Chimes was released, containing music videos for five of the songs from Islands and for the instrumental title piece, among others. The production of the latter used the best computer-animation techniques available at the time. Mike Oldfield directed all of the videos for the tracks from Islands. The video for the song Magic Touch was co-directed by Alex Proyas, who later became a well-known film director. The video album was also released on Laserdisc. In 2004 it was included with the DVD Elements, which contains music videos of Mike Oldfield's greatest hits.
References
External links
Album cover
Albums released in 1987
Mike Oldfield albums
Virgin Records albums
Odomar of Sankt Gallen, or Otmar (690 – the island of Werd near Eschenz, 16 November 759), is a Swiss saint. He was educated in Chur, was ordained a priest, and for a time led the church of St. Florinus in Rhaetia. This is presumably the same church as the church of St. Peter in Remüs, where St. Florinus had worked and was buried.
In 720 Waltram of Thurgau appointed Odomar head of the abbey of Sankt Gallen. He united in a single monastery the monks who had been living in a cell at Sankt Gallen according to the rule of St. Columbanus, and became their first abbot. He added a hospital and a school, and he replaced the rule of Columbanus with the rule of Benedict.
When Carloman renounced the throne in 747, he visited Odomar in Sankt Gallen and gave him a letter for his brother Pepin the Short, asking that Odomar and his abbey be allowed to enjoy the royal exemptions. When the counts Warinus and Ruodhart tried to seize possessions of the abbey unlawfully, Odomar fearlessly refused to comply. The counts thereupon captured Odomar while he was on his way to Konstanz and held him prisoner, first in the castle of Bodmann and then on the island of Werd in Lake Constance, near Eschenz, where he died. His body was transferred to the abbey of Sankt Gallen in 769.
Today he is, together with St. Maurice and St. Gall, the most popular saint of Switzerland. In art he is usually depicted as a Benedictine abbot holding a small cask in his hand. This refers to the miracle that St. Odomar's cask never ran empty, however much he tapped from it for the poor.
His feast day is 16 November.
Abbot of Sankt Gallen
8th-century abbot
Saint in Christianity
Swiss saint or blessed
\section{Introduction}
In one of the classic papers of quantum electrodynamics, Feynman \cite{Feyn}
suggested that relativistic electron propagation could be understood in
terms of a sum over electron worldlines running both forwards and backwards
in time. The evolution parameter was a path parameter,
associated with the proper
time of the electron worldlines, rather than the ``clock time'' of the
laboratory. Related ideas were discussed by Stueckelberg, Fock,
Nambu, and Schwinger \cite{Nambu}. In this article we would like to
extend Feynman's worldline quantization of electron paths in
spacetime to the quantization of a closed Universe propagating in
superspace.

The elements of the proper-time approach for relativistic particles
are, of course, very well-known. Consider for simplicity a spinless
particle of mass $m$, propagating freely on a background spacetime
with metric $g_{\mu \nu}$. The classical motion of the particle (i.e.
the geodesic equation) is derived from variation of the worldline
action
\begin{eqnarray}
S_p &=& - m \int ds
\nonumber \\
&=& - m \int d\t \sqrt{-g_{\mu \nu} {dx^\mu \over d\t}
{dx^\nu \over d\t}}
\label{Sp}
\end{eqnarray}
Removing the square-root by introduction of a Lagrange multiplier
(lapse) $N$, we can write $S_p$ in the form
\begin{equation}
S_p' = m \int d\t \; \left[ {1\over 2N} g_{\mu \nu} {dx^\mu \over d\t}
{dx^\nu \over d\t} - \frac{1}{2} N \right]
\label{Sp'}
\end{equation}
Applying the usual Legendre transform, one obtains the first-order form
\begin{eqnarray}
S_p'' &=& \int d\t \; \left[ p_\mu {dx^\mu \over d\t} - NH \right]
\nonumber \\
H &=& {1\over 2m}(g^{\mu \nu}p_\mu p_\nu + m^2)
\label{Sp''}
\end{eqnarray}
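
Since nothing in the passage from \rf{Sp'} to \rf{Sp''} depends essentially on the number of dimensions, the Legendre transform can be checked in a one-dimensional sketch (an illustration only, not part of the derivation; the metric $g_{\mu\nu}$ is replaced by a single positive component $g$):

```python
import sympy as sp

# One-dimensional sketch of the Legendre transform from S_p' to S_p'':
# with a single coordinate and metric component g, the Lagrangian
#   L = (m/(2N)) g xdot^2 - (1/2) m N
# should yield  p*xdot - L = N*H  with  H = (1/(2m)) (p^2/g + m^2).
m, N, g, xdot, p = sp.symbols('m N g xdot p', positive=True)

L = m/(2*N)*g*xdot**2 - sp.Rational(1, 2)*m*N
p_def = sp.diff(L, xdot)                       # canonical momentum p = (m/N) g xdot
xdot_sol = sp.solve(sp.Eq(p, p_def), xdot)[0]  # invert: xdot = N p / (m g)
NH = sp.simplify((p*xdot - L).subs(xdot, xdot_sol))

H_expected = (p**2/g + m**2)/(2*m)             # eq. (Sp'') with g^{11} = 1/g
assert sp.simplify(NH - N*H_expected) == 0
```
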
Now go to the gauge $N=1$. In this gauge, $\t=s$ is the proper-time
parameter of the classical equations of motion. Adopting $s$ as the evolution
parameter for the quantized theory,
the amplitude for a relativistic
spinless particle to propagate from point $x'^\mu$ to point $x^\mu$ in an
interval $s$ can be expressed as a path-integral
\begin{eqnarray}
G(x,x';s) &=& \int^x_{x'} Dx(s') \; \exp[{i\over \hbar}
\int_0^s L_p' ds']
\nonumber \\
&=& <x|e^{-i\H_s s/\hbar}|x'>
\nonumber \\
L_p' &=& {m\over 2} \left( g_{\mu \nu} {dx^\mu \over ds}
{dx^\nu \over ds} - 1 \right)
\end{eqnarray}
Up to an operator-ordering, the corresponding Hamiltonian operator $\H_s$
describing state evolution in the evolution parameter $s$
\begin{equation}
i\hbar {\partial \psi \over \partial s} = \H_s \psi
\label{Seq}
\end{equation}
is obtained from the classical Hamiltonian (with $N=1$) via the usual
replacement of c-number momenta by the corresponding operators, i.e.
\begin{equation}
<x| \H_s |x'> = {1\over 2m}\left( -\hbar^2 \Box_x + m^2 -i\epsilon +
\xi R \right) \d^D(x-x') [-g(x')]^{-1/2}
\label{H_S}
\end{equation}
The Feynman propagator is proportional to the inverse of $\H_s$,
and the one-loop contribution to the gravitational effective action
\begin{equation}
{\cal S}_{eff}[g_{\mu \nu}] = \frac{1}{2} i\hbar \mbox{Trln}[\H_s]
\label{Seff}
\end{equation}
is the trace logarithm of $\H_s$. The proper-time formalism itself
has various uses, e.g. in calculating the Feynman propagator exactly
in certain, especially simple, background electromagnetic fields,
as well as in the evaluation of certain loop diagrams. We note
that eigenvalues of the proper time Hamiltonian $\H_s$, such as those
used in the evaluation of the effective action, can take
on any value. Classically, however, the mass-shell condition $H=0$
is to be respected (this follows from variation of \rf{Sp''} by $N$),
and for free particles this condition is imposed, in Dirac
quantization, as a constraint
on physical states $\H_s \psi = 0$. For spinless particles, this
operator constraint is just the Klein-Gordon equation in curved spacetime.
In an interacting theory the mass-shell condition is relaxed somewhat;
it is required only of asymptotic states.
The 4-momentum of a virtual particle is allowed to violate the mass-shell
condition.
\section{The Worldline Action of a Closed Universe}
We would now like to generalize the proper-time approach to the case
of gravity in combination with any number of interacting bosonic fields;
this calls for rewriting the gravitational action in the form
\begin{equation}
S_g = - \l \int ds
\end{equation}
where $s$ is an invariant length parameter in the space of all fields
modulo spatial diffeomorphisms, i.e. superspace, and $\l$ is an arbitrary
dimensionless parameter. The only reasonable
candidate for $s$ is the usual action of general relativity, so the problem
is to reformulate that action as a proper time in superspace. Such a
formulation was developed recently in ref. \cite{Me}; closely
related ideas were put forward long ago in ref. \cite{DeWitt}. The
identification of action with proper time goes as follows:
Let $\{q^a(x),p_a(x)\}$ represent a
set of gravitational and other bosonic fields, and their conjugate
momenta, with the fields scaled by an appropriate power of $\kappa^2=16\pi G_N$
so as to be dimensionless.\footnote{Only the metric formulation will be
considered here; hence the restriction to bosonic fields.} In a condensed
notation, the standard ADM action has the form
\begin{eqnarray}
S_{ADM} &=& \int d^4x \; [p_a \partial_0 q^a - N\H_x - N_i\H^i_x]
\nonumber \\
\H_x &=& \kappa^2 G^{ab} p_a p_b + \sqrt{g} U(q)
\nonumber \\
\H^i_x &=& O^{ia}[q,\partial] p_a
\label{ADM}
\end{eqnarray}
where $G^{ab}$ and $U$ are, respectively, the metric and potential
in superspace, and the operator $O^{ia}$ is linear in the 3-d spatial
covariant derivative.
Go to the ``shift gauge'' $N_i=0$. The supermomentum constraints
$\H^i=0$ are not lost by this choice, since they are still required
for consistency of the Hamiltonian constraint with the equations of
motion. Solving Hamilton's equation for the momenta in terms of
velocities, then solving the Hamiltonian constraint for the
lapse function $N$ in terms of velocities, and inserting both
expressions into $S_{ADM}$, one obtains the Baierlein-Sharp-Wheeler
(BSW) form of the gravitational action \cite{BSW}
\begin{equation}
S_{BSW} = - \int d^4x \; \sqrt{-{1\over \kappa^2} \sqrt{g} U G_{ab}
\partial_{0} q^a \partial_{0} q^b}
\label{BSW}
\end{equation}
in shift gauge $N_i=0$. The BSW action is to serve as a proper-time
parameter. It is also useful to introduce an arbitrary mass-scale $\sigma$
in order to define an evolution parameter $t$ with dimensions of time,
so that
\begin{equation}
S_{BSW} = - \int ds = -\sigma \int dt
\label{t}
\end{equation}
Choose $x^0=t$. Comparing \rf{t} with \rf{BSW}, we have
\begin{equation}
dt = {1\over \sigma} \int d^3x \; \sqrt{-{1\over \kappa^2} \sqrt{g} U
G_{ab} dq^a dq^b }
\label{dt}
\end{equation}
Let $\tilde{N}$ denote the lapse function (derived by solving the Hamiltonian
constraint) associated with the time parameter $t$
\begin{equation}
\tilde{N} = \sqrt{-{1\over 4\kappa^2 \sqrt{g} U} G_{ab} \partial_t q^a \partial_t q^b }
\label{tN}
\end{equation}
Then we have
\begin{equation}
1 = - {1\over \sigma} \int d^3x \; {1 \over 2 \tilde{N} \kappa^2}
G_{ab} \partial_t q^a \partial_t q^b
\label{1}
\end{equation}
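
In more detail, squaring \rf{tN} gives
\begin{equation}
G_{ab} \partial_t q^a \partial_t q^b = - 4 \kappa^2 \sqrt{g} U \tilde{N}^2
\end{equation}
Substituted into \rf{dt} (with $dq^a = \partial_t q^a dt$), this yields
$dt = (2/\sigma)\left[\int d^3x \; \tilde{N} \sqrt{g} U \right] dt$, i.e.
$(2/\sigma)\int d^3x \; \tilde{N} \sqrt{g} U = 1$; dividing the squared relation
by $-2\sigma \tilde{N} \kappa^2$ and integrating over $x$ then reproduces eq. \rf{1}.
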
Now $t$ denotes a ``many-fingered'' time variable, with the different
possibilities distinguished by a choice of $\tilde{N}$. Equation \rf{dt}
imposes only one global restriction on the choice of $\tilde{N}$. From eq. \rf{tN}
we have
\begin{equation}
\int d^3x \; \tilde{N} \sqrt{g} U = \int d^3x \; \sqrt{-{1\over 4\kappa^2} \sqrt{g} U
G_{ab} \partial_t q^a \partial_t q^b }
\end{equation}
which implies, from the definition \rf{dt}, the condition
\begin{equation}
\int d^3x \; \tilde{N} \sqrt{g} U = \frac{1}{2} \sigma
\label{conN}
\end{equation}
For a given $\tilde{N}$ satisfying this condition, there corresponds a time variable
proportional to $S_{BSW}$. The condition is solved trivially by
\begin{equation}
\tilde{N} = {\frac{1}{2} \sigma {\cal N} \over \int d^3x \; {\cal N} \sqrt{g} U}
\label{calN}
\end{equation}
where ${\cal N}$ is unrestricted. Inserting this form for $\tilde{N}$ back into
\rf{1}, we find
\begin{equation}
1 = - {1\over \sigma^2} \int d^3x \; \left[ \int d^3x' \;
{\cal N} \sqrt{g} U \right] {1\over {\cal N} \kappa^2} G_{ab} \partial_t q^a \partial_t q^b
\end{equation}
or
\begin{equation}
ds^2 = - \int d^3x \; \left[ \int d^3x' \;
{\cal N} \sqrt{g} U \right] {1\over {\cal N} \kappa^2} G_{ab} dq^a dq^b
\label{ds2}
\end{equation}
Now introduce a mixed discrete/continuous ``coordinate index'' $(\a,x)$
in superspace:
\begin{eqnarray}
q^{(\a x)} \equiv q^{\a}(x) = \left\{ \begin{array}{cl}
{\cal N}(x) & \a=0 \\
q^a(x) & \a=a \ne 0 \end{array} \right.
\end{eqnarray}
Apart from notation we are extending the definition of superspace slightly
to include the non-dynamical field ${\cal N}(x)$, related via eq. \rf{calN}
to the lapse parameter. Define a degenerate metric for
this extended superspace
\begin{eqnarray}
{\cal G}_{(a x)(b y)} &=& \left[\int d^3x' \; {\cal N} \sqrt{g} U \right]
{1 \over {\cal N}(x) \kappa^2} G_{ab}(x) \d^3(x-y)
\nonumber \\
{\cal G}_{(0 x)(0 y)} &=& {\cal G}_{(a x)(0 y)} = {\cal G}_{(0 x)(b y)} = 0
\label{metric}
\end{eqnarray}
With these definitions, and an obvious summation convention,
eq. \rf{ds2} becomes
\begin{equation}
ds^2 = - {\cal G}_{(\a x)(\b y)} dq^{(\a x)} dq^{(\b y)}
\end{equation}
The gravitational action then has the desired form
\begin{eqnarray}
S_g &=& - \l \int ds
\nonumber \\
&=& - \l \int d\t \; \sqrt{ - {\cal G}_{(\a x)(\b y)}
{dq^{(\a x)}\over d\t} {dq^{(\b y)} \over d\t} }
\label{Sg}
\end{eqnarray}
Variation of the action $S_g$ w.r.t.\ $q^{(\a x)}$ leads, in the usual
way, to a geodesic equation
\begin{equation}
{\cal G}_{(\a x)(\b y)} {d^2 q^{(\b y)} \over d s^2} + \frac{1}{2} \left(
{\d {\cal G}_{(\a x)(\b y)} \over \d q^{(\gamma z)} } +
{\d {\cal G}_{(\a x)(\gamma z)} \over \d q^{(\b y)} } -
{\d {\cal G}_{(\b y)(\gamma z)} \over \d q^{(\a x)} } \right)
{dq^{(\b y)} \over ds}{dq^{(\gamma z)} \over ds}
= 0
\label{geo}
\end{equation}
Identifying $ds=\sigma dt$, it is straightforward to verify that
the $\a \ne 0$ components of \rf{geo} are the equations
of motion
\begin{equation}
{\partial \over \partial t}\left[ {1\over 2\tilde{N} \kappa^2} G_{ab} \partial_t q^b \right]
- {1 \over 4\tilde{N} \kappa^2}{\partial G_{cd} \over \partial q^a}\partial_t q^c \partial_t q^d
+ \int d^3x' \; \tilde{N} {\d \over \d q^a(x)}(\sqrt{g} U) = 0
\label{motion}
\end{equation}
while the $\a=0$ component is the Hamiltonian constraint
\begin{equation}
{1 \over 4\tilde{N}^2 \kappa^2} G_{ab}\partial_t q^a \partial_t q^b + \sqrt{g} U = 0
\label{constraint}
\end{equation}
These equations are identical to those obtained
from the ADM action \rf{ADM}, with the gauge choice $N_i=0$ and
$N=\tilde{N}$. We have therefore interpreted the classical field equations
of general relativity as describing the free fall of a point particle
in superspace.\footnote{Eq. \rf{Sg} can also be motivated from Jacobi's
principle in mechanics, c.f. ref. \cite{Me}.}

Some further comments are in order. First, the choice $N=\tilde{N}$
imposes only one global condition, eq. \rf{conN}, on the lapse function.
This does not restrict the choice of foliation, but only the
time-label $t$ associated with each
hypersurface of a given foliation. A second point is that
the degeneracy of the supermetric ${\cal G}_{(\a x)(\b y)}$ in
eq. \rf{metric} implies an infinite set of solutions for the geodesic
between any two points in superspace. It is not hard to show that
these solutions are related by (ordinary D=4) time-reparametrizations, and
have the same ``proper time'' interval in superspace (proportional to $S_g$)
between those two points. Finally, let us note that the parameter $\l$
in $S_g$, which is analogous to the mass parameter $m$ in the relativistic
particle action $S_p$, drops out of the classical configuration-space
equations of motion.

Having recognized that the worldline action \rf{Sg} leads to the
same classical motion as the ADM action, we can proceed as in the
relativistic particle case to derive the proper-time Hamiltonian.
Again introducing a Lagrange multiplier $n$ to remove the square-root
\begin{equation}
S_g' = \l \int d\t \; \left[ {1\over 2n}{\cal G}_{(ax)(by)}
{dq^{(ax)} \over d\t} {dq^{(by)} \over d\t} - \frac{1}{2} n \right]
\end{equation}
the first-order form is
\begin{eqnarray}
S_g''&=& \int d\t \; \left[ p_{(ax)} {dq^{(ax)} \over d\t} - nH \right]
\nonumber \\
H &=& {1\over 2\l}[{\cal G}^{(ax)(by)} p_{(ax)} p_{(by)} + \l^2 ]
\nonumber \\
&=& {1\over 2\l}[\mbox{\AE} + \l^2]
\label{1st_order}
\end{eqnarray}
where the supermetric
\begin{equation}
{\cal G}^{(ax)(by)}\dot ={{\cal N} \kappa^2 G^{ab} \delta^3(x-y) \over
\int d^3x' \; {\cal N} \sqrt{g} U }
\end{equation}
and the expression
\begin{eqnarray}
\mbox{\AE} &=& {\cal G}^{(ax)(by)} p_{(ax)} p_{(by)}
\nonumber \\
&=& { \int d^3x \; {\cal N} \kappa^2 G^{ab} p_a p_b \over
\int d^3x' \; {\cal N} \sqrt{g} U }
\end{eqnarray}
were introduced in ref. \cite{Us1}. Variation of \rf{1st_order} with
respect to $q^a(x,\t),~p_a(x,\t)$ and ${\cal N}(x,\t),~n(\t)$ gives us,
respectively, the set of Hamiltonian equations and constraints
\begin{equation}
\partial_\t q^a(x) = {n\over 2\l}{\d \mbox{\AE} \over \d p_a(x)} ~~,~~
\partial_\t p_a(x) = -{n\over 2\l}{\d \mbox{\AE} \over \d q^a(x)} ~~,~~
{\d \mbox{\AE} \over \d {\cal N}(x)} = 0 ~~,~~
\mbox{\AE} = -\l^2
\end{equation}
Setting $n=1$, so that $\t=s=\sigma t$, these equations are
equivalent to the usual Hamiltonian equations of
motion and Hamiltonian constraint
\begin{eqnarray}
\partial_t q^a(x) &=& \int d^3x' \; \tilde{N} {\d \over \d p_a(x)} \H
\nonumber \\
\partial_t p_a(x) &=& - \int d^3x' \tilde{N} {\d \over \d q^a(x)} \H
\nonumber \\
\H &=& {\kappa^2 \over \l} G^{ab} p_a p_b + \l \sqrt{g} U = 0
\label{H2}
\end{eqnarray}
in the $N=\tilde{N},~N_i=0$ gauge. These equations of motion and (Hamiltonian)
constraint imply the remaining
supermomentum constraint as a consistency condition.

The constant $\l$ is implicitly set to $\l=1$
in the usual Hamiltonian formulation of general relativity,
but we note at this point that there is no overwhelming reason
to make this choice. The constant $\l$ appears as a
constant multiplicative factor in the worldline action \rf{Sg},
as does the mass $m$ in the worldline action \rf{Sp}. Both of
these constants drop out of the corresponding geodesic equations.
Just as there is no way of determining the mass of a particle from
its trajectory in free fall, there is also no way of determining
the value of $\l$ from a given solution of the configuration-space
field equations. In the context of the first-order formulation,
the condition $\mbox{\AE} = -\l^2$ is in every sense analogous to the
particle mass-shell condition $g^{\mu \nu}p_\mu p_\nu = -m^2$. It
is therefore reasonable to identify $\l$ as a kind of (dimensionless)
mass-shell parameter, and to dignify the constraint $\mbox{\AE} = -\l^2$
with the title ``mass-shell of the Universe''.
\section{Quantization}
We now consider canonical quantization, in
the ``proper-time'' gauge $n=1$. The corresponding Schr\"odinger equation is
\begin{eqnarray}
i\hbar {\partial \Psi \over \partial s} &=& \H_s \Psi
\nonumber \\
&=& {1\over 2\l} (\mbox{\AE} + \l^2) \Psi
\label{Seq1}
\end{eqnarray}
which has the general $s$-dependent solution
\begin{eqnarray}
\Psi[q,s] &=& \sum_{{\cal E} \b} a_{{\cal E} \b} \Phi_{{\cal E}\b}[q]
e^{i({\cal E}-\l^2)s/(2\l \hbar)}
\nonumber \\
\mbox{\AE} \Phi_{{\cal E}\b}[q] &=& -{\cal E} \Phi_{{\cal E} \b}[q]
\label{expand}
\end{eqnarray}
where the label $\b$ distinguishes among a linearly independent set
of eigenstates of $\mbox{\AE}$ with eigenvalue $-{\cal E}$.
The classical constraint $\d \mbox{\AE} /\d {\cal N} = 0$ becomes an operator
constraint ${\d \mbox{\AE} \over \d {\cal N}} \Psi = 0$. Inserting the eigenstate
expansion \rf{expand}, we find that each eigenstate $\Phi_{\cal E}$ satisfies
a Wheeler-DeWitt equation
\begin{equation}
\left[ -{\hbar^2 \over {\cal E}} \kappa^2 ``G^{ab} {\d^2 \over \d q^a \d q^b}''
+ \sqrt{g} U \right] \Phi_{\cal E}[q] = 0
\label{WD}
\end{equation}
associated with the parameter ${\cal E}$ (quotation marks indicate the
ordering ambiguity). Finally, if we also impose the mass-shell constraint
\begin{equation}
\mbox{\AE} \Psi = -\l^2 \Psi
\label{ms}
\end{equation}
then the only physical states are those with ${\cal E} = \l^2$, and the
(classically indeterminate) constant $\l$ can be absorbed, via
\begin{equation}
\hbar_{eff} = {\hbar \over \l}
\end{equation}
into a rescaling of Planck's constant.
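
For completeness, the origin of \rf{WD} can be spelled out: from the definition
of $\mbox{\AE}$, the functional derivative with respect to ${\cal N}$ is
\begin{equation}
{\d \mbox{\AE} \over \d {\cal N}(x)} = {\kappa^2 G^{ab} p_a p_b(x) - \mbox{\AE} \, \sqrt{g} U(x)
\over \int d^3x' \; {\cal N} \sqrt{g} U}
\end{equation}
Acting on an eigenstate $\Phi_{\cal E}$, with $\mbox{\AE} \rightarrow -{\cal E}$ and
$p_a \rightarrow -i\hbar \d/\d q^a$ (up to operator ordering), the vanishing of the
numerator gives
\begin{equation}
\left[ -\hbar^2 \kappa^2 ``G^{ab} {\d^2 \over \d q^a \d q^b}'' + {\cal E} \sqrt{g} U \right]
\Phi_{\cal E} = 0
\end{equation}
which is just ${\cal E}$ times eq. \rf{WD}.
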

There are two ways in which the off mass-shell states, with
${\cal E} \ne \l^2$, may be physically relevant. First, the
mass-shell constraint \rf{ms} may not really {\it be} a constraint,
at the quantum level. The mass-shell
condition is derived by trading the square-root form of the action for an
expression involving a Lagrange multiplier. What if one avoids this
step, and quantizes the square-root action $S_{BSW}$ directly? This approach
has been advocated in ref. \cite{Us1,Us2}, and it leads to a formulation
in which the dynamical equation \rf{Seq1} is supplemented by the
constraints $(\d \mbox{\AE} / \d {\cal N} )\Psi = 0$, but without the mass-shell
constraint $\mbox{\AE} = -\l^2$. It should be noted, once again, that there is no way
to determine $\l$ classically, or to verify the mass-shell
condition $\mbox{\AE}=-\l^2$, since the configuration-space equations are
independent of $\l^2$. Determination of $\l^2$ would require
a violation of the Einstein field equations; it is analogous to
trying to determine the mass of a particle from its trajectory
in free fall. Moreover, the freedom to choose arbitrary foliations of
4-space is already reflected in the constraint $(\d \mbox{\AE} / \d {\cal N}) \Psi = 0$.
In the formulation of \cite{Us1,Us2}, the physical Hilbert space is spanned
by the solutions of a family of Wheeler-DeWitt equations \rf{WD}, one
equation for each eigenvalue $-{\cal E}$ of $\mbox{\AE}$.

The second way in which off mass-shell states could become relevant
is suggested by the phenomenon of black hole evaporation.
Although it is known that black holes must lose mass via Hawking radiation,
it is not known what the final state of the radiative process
will be. It is possible that the black hole disappears entirely, and
this might be considered a case of topology change involving the production of
a ``baby universe'', analogous to similar processes in string theory.
It is also possible that the evaporation is not complete, and the
black hole leaves a remnant. Let us suppose that the first alternative,
namely, complete evaporation accompanied by baby universe production, is
the correct one. In that case the Universe is {\it not} really in
free fall; there will be interactions associated with topology changing
processes (emission and absorption of baby universes).
A satisfactory description of topology-changing processes awaits
development of a ``third-quantized'' theory of gravity \cite{3q}; unfortunately,
at present, we do not even have a satisfactory understanding of
second-quantized gravity. Still, it may be possible to obtain some
insight into ``multi-versal'' effects via the worldline formulation. For
example, by direct analogy to eq. \rf{Seff}, the 1-loop contribution of
virtual universe loops to the effective action would be
\begin{equation}
S_{eff}[{\cal G}_{ab}] = {i\hbar \over 2} \mbox{Trln}[\mbox{\AE} + \l^2]
\end{equation}
where the trace runs over a basis of states $\Phi_{\cal E}$ satisfying the
one-parameter family of Wheeler-DeWitt equations \rf{WD}. Of course,
the supermetric ${\cal G}_{ab}$, unlike the ordinary spacetime metric
$g_{\mu \nu}$, is not arbitrary; it is constrained to have the form
\rf{metric}. Therefore $S_{eff}$ may be regarded as a functional of
the potential term $U(q)$. But the form of $U(q)$ is also tightly
constrained: it is the sum of all possible potential terms that could
appear in an ADM Hamiltonian. With this restriction, $S_{eff}$ is
just a function of the coupling constants of each possible interaction
term, i.e.
\begin{equation}
S_{eff}[{\cal G}_{ab}] = S_{eff}[\lambda,e^2,g^2,...]
\end{equation}
and the couplings are now viewed as dynamical variables.
Variation of $S_{eff}$ with respect to the couplings could,
in principle, determine their phenomenological
values, very much in the spirit of Coleman's
``Big Fix'' \cite{Coleman}.

Let us illustrate this possibility with a minisuperspace toy model,
in which the supermetric ${\cal G}_{ab}$ depends on one parameter only, namely,
the cosmological constant $\lambda$. The starting point is the minisuperspace
action representing a closed, homogeneous and isotropic
Friedmann-Robertson-Walker (FRW) universe filled with a three-component,
minimally coupled scalar field $\vec{\phi}\dot =(\phi_1, \phi_2, \phi_3)$, i.e.
\begin{equation}
S = \frac{1}{2} \int dt \left[ -{a\dot{a}^2\over N}
+ {a^3\dot{\vec{\phi}} \cdot \dot{\vec{\phi}}\over N}
+ N(a - \lambda a^3) \right]
\label{adm1}
\end{equation}
where the 4-d invariant length is
\begin{equation}
ds^2={\hat\sigma}^2(-N^2dt^2+a^2d\Omega_3^2)
\end{equation}
and with ${\hat\sigma}^2\dot = 2G_N/3\pi$.
With the choice of coordinates $q^0=a,~q^i=\phi^i$, the
supermetric for the corresponding worldline action
\begin{equation}
S_g = - \l \int d\t \sqrt{-{\cal G}_{ab} \dot{q}^a \dot{q}^b}
\label{miniwld}
\end{equation}
reads
\begin{equation}
{\cal G}_{ii}=-a^2{\cal G}_{00} =a^4(\lambda a^2 -1)~~~~;~~~~i=1, 2, 3
\label{metric1}
\end{equation}

Now, on general grounds of diffeomorphism invariance in minisuperspace,
the effective action for a generic FRW universe will have a weak-curvature
(adiabatic) expansion of the form
\begin{equation}
S_{eff}[{\cal G}_{ab}] = \int da\; \int d\vec{\phi}~\sqrt{|{\cal G}|} \left[
\L_S + \kappa_S {\cal R} + O({\cal R}^2) \right]
\label{adiabat1}
\end{equation}
where $\L_S$ and $\kappa_S$ are the
(dimensionless) ``supercosmological constant''
and ``super Newton's constant'', respectively, and ${\cal R}$ is the scalar
``supercurvature'' of ${\cal G}_{ab}$.
In general, since $\L_S,~\kappa_S$ are divergent at one loop,
even in simple minisuperspace models, it must be assumed that either
these constants are renormalized (and there exists a bare
action $S_0[{\cal G}_{ab}]$), or else that there is a fundamental cutoff of some
kind in the theory.

Let us now temporarily compactify the ranges of integration in
\rf{adiabat1} so that the scale factor runs from $a=0$ to
$a=\bar a$ and the scalar fields run from $\phi_i=-\phi_{i M}$ to
$\phi_i=\phi_{i M}$, and keep only the leading term in the adiabatic expansion
\rf{adiabat1}.
Then the effective action \rf{adiabat1} reads
\begin{eqnarray}
S_{eff}&\simeq &\L_S \int_0^{\bar a} da\int_{-\phi_{1M}}^{\phi_{1M}} d\phi_1
\int_{-\phi_{2M}}^{\phi_{2M}} d\phi_2 \int_{-\phi_{3M}}^{\phi_{3M}} d\phi_3
~a^7(\lambda a^2-1)^2
\nonumber \\
&=&\left( \int d^3\phi \right)\L_S\bar a^8\left (
{\lambda^2 \bar a^4\over 12}-{\lambda \bar a^2\over 5}+{1\over 8}\right)
\label{r22b}
\end{eqnarray}
It is easy to check that $S_{eff}$ is stationary at
\begin{equation}
{d S_{eff} \over d\lambda}= 0 ~~~ \Longrightarrow ~~
\lambda\simeq {6\over 5\bar a^2}
\label{r29b}
\end{equation}
with the result that $\lambda \rightarrow 0^+$ as $\bar a\rightarrow \infty$.
It is also straightforward to show that this stationary point is actually a
minimum for $S_{eff}$ provided that $\L_{S}>0$.
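
Both the integral \rf{r22b} and the stationary point \rf{r29b} are elementary, and can be checked symbolically; the following sketch drops the overall constants $\L_S$ and $\int d^3\phi$, so it applies to the $\L_S>0$ case:

```python
import sympy as sp

# Symbolic check of eqs. (r22b) and (r29b); the prefactor Lambda_S (int d^3 phi)
# is dropped, so S below is S_eff up to a positive constant (Lambda_S > 0 case).
a, abar, lam = sp.symbols('a abar lam', positive=True)

S = sp.integrate(a**7*(lam*a**2 - 1)**2, (a, 0, abar))
S_paper = abar**8*(lam**2*abar**4/12 - lam*abar**2/5 + sp.Rational(1, 8))
assert sp.simplify(S - S_paper) == 0

# Stationarity in lam: dS/dlam = 0  at  lam = 6/(5 abar^2)
crit = sp.solve(sp.diff(S, lam), lam)
assert len(crit) == 1 and sp.simplify(crit[0] - sp.Rational(6, 5)/abar**2) == 0

# ... and the stationary point is a minimum (second derivative positive)
assert sp.diff(S, lam, 2).is_positive
```
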
\section{Inclusion of Mass Terms and Supercurvature}
Any minisuperspace model is a toy, and
only illustrates effects which might (or might not) be present
in the full theory. Still, even within the category of toy models, it is
interesting to study whether the vanishing of the cosmological
constant survives some modest complications of the minisuperspace
action, and/or improvements in the approximations for \rf{adiabat1},
e.g., the inclusion of contributions from the supercurvature terms in the
adiabatic expansion of the effective action.

We consider the action for a FRW universe
filled with $N_s$ scalar fields $(\phi_1, .., \phi_{N_s})\dot = \vec{\phi}$
with potential $V(\phi)$, i.e.
\begin{equation}
S={1\over 2}\int dt\left \{ -{a {\dot a}^2\over N}+{a^3{\dot{\vec{\phi}}}\cdot
{\dot{\vec{\phi}}}\over N}+Na[1-(\lambda +V(\phi))a^2]\right \}
\label{f9a}
\end{equation}
Again choosing coordinates $q^0=a, ~q^i=\phi^i$, then, from eq. \rf{miniwld},
the diagonal, $N_s+1$-dimensional worldline supermetric ${\cal G}_{ab}$ is just
\begin{equation}
{\cal G}_{ii}=-a^2{\cal G}_{00}=a^4[(\lambda +V(\phi))a^2-1]~~~;~~~i=1, ... N_s
\label{f9b}
\end{equation}
and the effective action is given by eq. \rf{adiabat1}.
The question is whether the stationary point of the effective action
\rf{adiabat1}, with supermetric \rf{f9b}, is still at $\lambda = 0^+$.
We will now consider some cases for various numbers of scalar fields, with
and without mass-term potentials.
\subsection{$N_s$ massless scalar fields}
As a first example, we consider the model of a FRW universe filled with
$N_s$ massless, minimally coupled scalar fields, i.e. the case with potential
\begin{equation}
V(\phi)=0
\label{f0}
\end{equation}
For this scalar potential, inserting the supermetric \rf{f9b} into eq.
\rf{adiabat1}, we can easily write down the effective action
(neglecting the `supercurvature' contributions) as
\begin{eqnarray}
S_{eff}&\simeq &\L_S\int_0^{\bar a} da\int_{-\phi_{1M}}^{\phi_{1M}} d\phi_1 ~...~
\int_{-\phi_{N_sM}}^{\phi_{N_sM}} d\phi_{N_s}~a^{2N_s+1}|\lambda a^2-1|^{(N_s+1)/2}
\nonumber \\
&=&{N_s!\L_{S} \left (\prod_{i=1}^{N_s}I_{\phi_i}\right )\bar a^{2(N_s+1)}
\over 3(3N_s+1)!!~x^{N_s+1}}\biggl \{ 2^{N_s}(N_s-1)!! + [\Theta(x-1)
\nonumber \\
&-&\Theta(1-x)]x^{N_s} |x-1|^{(N_s+3)/2}
\sum_{k=0}^{N_s} 2^k{(3N_s-2k+1)!!\over (N_s-k)!}x^{-k}\biggr \}
\label{f10}
\end{eqnarray}
where $\Theta(x)$ is the Heaviside step function and the quantities
$x$ and $I_{\phi_i}$ are given by
\begin{equation}
x\dot =\lambda\bar a^2
\label{g5}
\end{equation}
and
\begin{equation}
I_{\phi_i}\dot =\int_{-\phi_{iM}}^{\phi_{iM}}d\phi_{i}
\label{f10a}
\end{equation}
Taking the derivative of the effective action \rf{f10} with respect to
$\lambda$ we get
\begin{eqnarray}
{dS_{eff}\over d\lambda}&=&
{2^{N_s}(N_s+1)!\L_{S}\left(\prod_{i=1}^{N_s} I_{\phi_i}\right )
\bar a^{2(N_s+2)}\over 3(3N_s+1)!!~x^{N_s+2}}\biggl \{-(N_s-1)!!
\nonumber \\
&+&|x-1|^{(N_s+1)/2}\sum_{k=0}^{N_s+1} 2^{-k}{(N_s+2k-1)!!\over k!}
x^{k}\biggr \}
\label{f11}
\end{eqnarray}

Unfortunately, the stationarity condition coming from eq. \rf{f11},
i.e. the condition $dS_{eff}/d\lambda=0$, cannot be solved easily for arbitrary $N_s$.
However, one can still prove the existence of a finite number
(at least one) of stationary points of $S_{eff}$ which are all at $x >0$
and at a finite distance from the origin $x=0$.
In fact, studying the behaviour of $dS_{eff}/d\lambda$ in the range
$x\geq 1$, we get (for $\L_S>0$)
\begin{eqnarray}
{dS_{eff}\over d\lambda}(x=1)&=&-{2^{N_s}(N_s+1)!!\L_{S}\left(\prod_{i=1}^{N_s}
I_{\phi_i}\right )\bar a^{2(N_s+2)}\over 3(3N_s+1)!!}~<0
\nonumber \\
{dS_{eff}\over d\lambda}(x\rightarrow +\infty)&\sim &
{\L_{S}\left(\prod_{i=1}^{N_s} I_{\phi_i}\right )
\bar a^{2(N_s+2)}x^{(N_s-1)/2}\over 6}~>0
\label{f15}
\end{eqnarray}
(with the inequality signs reversed in the case $\L_S<0$).
On the other hand, it is possible to check (e.g., by using Mathematica) that
\begin{equation}
{dS_{eff}\over d\lambda}~<0 ~~~;~~~ \forall ~x \leq 0
\label{f21}
\end{equation}
(again with the inequality sign reversed in the case $\L_S<0$).
In other words, eqs. \rf{f15} and \rf{f21} imply that $dS_{eff}/d\lambda$
will have at least one finite zero at $x\dot =x_{1}>1$, and at most a
finite number of extra zeros at finite points $x\dot =x_n >0$.
Therefore, the effective action $S_{eff}$ will be stationary at
\begin{equation}
{d S_{eff} \over d\lambda}\biggl\vert_{x_{1}} = 0 ~~~ \Longrightarrow ~~
\lambda={x_{1}\over \bar a^2}
\label{f14}
\end{equation}
(or, at any other of the points $x_{n}=c_n x_{1}$,
with $c_n=constant>0$).
Removing the cutoff, $\bar a\rightarrow \infty$, this leads again to the result
that $\lambda=0^+$.
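
The boundary behaviour \rf{f15}, \rf{f21} and the resulting zero $x_1>1$ can be illustrated numerically by evaluating the $a$-integral directly, without the closed form \rf{f10}; the sketch below takes $N_s=2$ as a sample case, in units $\bar a = 1$, and drops all positive prefactors (the value of $x_1$ is not needed, only its existence):

```python
from mpmath import mp, mpf, quad, sqrt

mp.dps = 30

def S(x, Ns=2):
    # S_eff(x) up to positive constants, for Ns massless scalars: the a-integral
    # of sqrt|G| with a = abar*t, x = lam*abar^2 (integrand as in eq. (f10)).
    f = lambda t: t**(2*Ns + 1) * abs(x*t**2 - 1)**(mpf(Ns + 1)/2)
    pts = [0, 1/sqrt(x), 1] if x > 1 else [0, 1]   # split at the kink when x > 1
    return quad(f, pts)

def dS(x, Ns=2, h=mpf('1e-10')):
    # central-difference lam-derivative (again up to positive constants)
    return (S(x + h, Ns) - S(x - h, Ns)) / (2*h)

# Boundary behaviour claimed in eqs. (f15) and (f21), Lambda_S > 0 case:
assert dS(mpf(1)) < 0     # negative at x = 1
assert dS(mpf(50)) > 0    # positive for large x
assert dS(mpf(-1)) < 0    # negative for x <= 0

# ... hence a zero x_1 > 1 exists (eq. (f14)); locate it by bisection
lo, hi = mpf(1), mpf(50)
for _ in range(40):
    mid = (lo + hi)/2
    lo, hi = (mid, hi) if dS(mid) < 0 else (lo, mid)
x1 = (lo + hi)/2
assert x1 > 1
```
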
\paragraph{$N_s=0$ scalar fields}
In this case the minisuperspace is one-dimensional, with the single
metric component
\begin{equation}
{\cal G}_{00}=-a^2(\lambda a^2-1)
\label{metric2}
\end{equation}
The supercurvature ${\cal R}$ is, of course, identically zero.
One can then immediately write the effective action from eq. \rf{adiabat1} as
\begin{eqnarray}
S_{eff}[\lambda]
&=& \L_S \int_0^{\bar a} da \; a |\lambda a^2 -1|^{1/2}
\nonumber \\
&=& {\L_{S}\bar a^2\over 3x}\{1+[\Theta(x-1)-\Theta(1-x)]|x-1|^{3/2}\}
\label{g4}
\end{eqnarray}
Taking the derivative of \rf{g4} with respect to $\lambda$, one obtains
\begin{equation}
{d S_{eff} \over d\lambda} = 0 ~~~ \Longrightarrow ~~
\lambda={[(3+2\sqrt{2})^{1/3}+(3-2\sqrt{2})^{1/3}-1]\over \bar a^2}
\label{g8}
\end{equation}
with the result that $\lambda\rightarrow 0^+$ as the regulator $\bar a$ is removed.
It is also straightforward to check that this stationary point is a minimum for
the effective action if $\L_S>0$.
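
The closed-form stationary point \rf{g8} can be cross-checked with sympy; the root turns out to lie at $x\simeq 1.355 > 1$, so it suffices to work on the $x>1$ branch of \rf{g4} (overall constants again dropped):

```python
import sympy as sp

# Check of eq. (g8): on the branch x > 1, eq. (g4) gives
# S_eff(x) proportional to (1 + (x-1)^(3/2))/x.
x = sp.symbols('x', positive=True)
f = (1 + (x - 1)**sp.Rational(3, 2))/x

x_star = sp.nsolve(sp.diff(f, x), x, 1.4)       # numerical zero of f'(x)
closed = (3 + 2*sp.sqrt(2))**sp.Rational(1, 3) \
       + (3 - 2*sp.sqrt(2))**sp.Rational(1, 3) - 1
assert abs(x_star - closed.evalf()) < 1e-10     # agrees with eq. (g8)
assert sp.diff(f, x, 2).subs(x, x_star) > 0     # a minimum, for Lambda_S > 0
```
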
\paragraph{$N_s=1$ massless scalar field (with supercurvature contribution)}
Next we consider the case of a single massless scalar field.
In this case the supermetric will have two independent diagonal entries,
${\cal G}_{00}$ and ${\cal G}_{11}$, which can again be read from eq. \rf{f9b},
and we can also improve the evaluation of the effective action by including
the contribution of the supercurvature term given by
\begin{equation}
{\cal R}=-{4\lambda\over a^2(\lambda a^2-1)^3}
\label{r2}
\end{equation}
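
The supercurvature \rf{r2} can be verified by a short symbolic computation (a sketch in sympy; conventions such that $R>0$ for a sphere):

```python
import sympy as sp

# Scalar curvature of the 2-d supermetric of eq. (f9b) with V = 0, one scalar:
#   G_00 = -a^2 (lam a^2 - 1),  G_11 = a^4 (lam a^2 - 1),  coords (a, phi).
a, lam, phi = sp.symbols('a lam phi')
q = [a, phi]
g = sp.diag(-a**2*(lam*a**2 - 1), a**4*(lam*a**2 - 1))
ginv = g.inv()
dim = len(q)

# Christoffel symbols of the second kind, Gamma[k][i][j] = Gamma^k_{ij}
Gamma = [[[sp.simplify(sum(ginv[k, l]*(sp.diff(g[l, i], q[j]) + sp.diff(g[l, j], q[i])
                                       - sp.diff(g[i, j], q[l]))/2 for l in range(dim)))
           for j in range(dim)] for i in range(dim)] for k in range(dim)]

# Ricci tensor: R_ij = d_k Gamma^k_ij - d_j Gamma^k_ki
#               + Gamma^k_kl Gamma^l_ij - Gamma^k_jl Gamma^l_ki
def ricci(i, j):
    r = sp.S.Zero
    for k in range(dim):
        r += sp.diff(Gamma[k][i][j], q[k]) - sp.diff(Gamma[k][k][i], q[j])
        for l in range(dim):
            r += Gamma[k][k][l]*Gamma[l][i][j] - Gamma[k][j][l]*Gamma[l][k][i]
    return r

R = sp.simplify(sum(ginv[i, i]*ricci(i, i) for i in range(dim)))
assert sp.simplify(R + 4*lam/(a**2*(lam*a**2 - 1)**3)) == 0   # matches eq. (r2)
```
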
The effective action, up to first order contributions from the
adiabatic expansion in ${\cal R}$, reads
\begin{eqnarray}
S_{eff}&\simeq &\int_0^{\bar a} da\int_{-\phi_{M}}^{\phi_{M}} d\phi \left [
\L_S~a^3|\lambda a^2-1|-4\kappa_S~{\lambda a |\lambda a^2-1|\over (\lambda a^2-1)^3}\right ]
\nonumber \\
&=&I_{\phi}\biggl \{\L_{S}{\bar a^4\over 6}\biggl[\left (x-
{3\over 2}+{1\over x^2}\right)\Theta(x-1)
+\left ({3\over 2}-x\right)\Theta(1-x)\biggr ]
\nonumber \\
&-&{\kappa_{S}\over (x-1)}[(x-3)\Theta(x-1)
+2x\Theta(1-x)]\biggr\}
\label{r3}
\end{eqnarray}
Evaluating the derivative of \rf{r3} with respect to $\lambda$ we get
\begin{eqnarray}
{dS_{eff}\over d\lambda}
&\simeq &{I_{\phi }\L_{S}\bar a^6\over 6}
\biggl \{\left [{(x^3-2)\over x^3}-{\a\over (x-1)^2}\right]\Theta(x-1)
\nonumber \\
&-&\left [ 1-{\a\over (x-1)^2}\right ]\Theta(1-x)\biggr\}
\label{r8}
\end{eqnarray}
where we have introduced the quantity
\begin{equation}
\a\dot ={12\kappa_{S}\over \L_{S}\bar a^4}
\label{r11}
\end{equation}
Now, provided that $\L_{S}\not =0$,\footnote{When $\L_S=0$
the analysis of the stationary points of $S_{eff}$ critically depends
on the relative scaling between $\lambda$ and the cutoff $\bar a$, and
is not conclusive.} imposing the stationarity
condition with $dS_{eff}/d\lambda$ given by eq. \rf{r8}, and noting from
eq. \rf{r11} that the contribution coming from the supercurvature term can be
neglected in the limit when the regulator for the scale factor is removed
($\bar a\rightarrow \infty$), it is straightforward to get the result
\begin{equation}
{d S_{eff} \over d\lambda}= 0 ~~~ \Longrightarrow ~~
\lambda\simeq {2^{1/3}\over \bar a^2}
\label{r15}
\end{equation}
In other words, the effective action is stationary at $\lambda=0^+$
as the regulator is removed, $\bar a\rightarrow \infty$.
Moreover, evaluating the second order derivative of $S_{eff}$ with
respect to $\lambda$ it is easy to
see that the stationary point is a minimum for $S_{eff}$ if $\L_S>0$.
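
The $\L_S$ part of this analysis is simple enough to check symbolically: recomputing the $a$-integral on the branch $x>1$ reproduces the bracket of \rf{r3} and the stationary point \rf{r15} (a sketch in units $\bar a=1$, constants dropped):

```python
import sympy as sp

# Lambda_S part of eq. (r3) for one massless scalar, abar = 1, x = lam*abar^2:
# sqrt|G| ~ a^3 |x a^2 - 1|; for x > 1 split the integral at a = 1/sqrt(x).
a, x = sp.symbols('a x', positive=True)

S = sp.integrate(a**3*(1 - x*a**2), (a, 0, 1/sp.sqrt(x))) \
  + sp.integrate(a**3*(x*a**2 - 1), (a, 1/sp.sqrt(x), 1))
S = sp.simplify(S)

# Compare with the Lambda_S bracket of eq. (r3) for x > 1: (x - 3/2 + 1/x^2)/6
assert sp.simplify(S - (x - sp.Rational(3, 2) + 1/x**2)/6) == 0

# Stationarity: dS/dx = 0  =>  x^3 = 2, i.e. eq. (r15)
crit = sp.solve(sp.diff(S, x), x)
assert any(sp.simplify(c - 2**sp.Rational(1, 3)) == 0 for c in crit)
```
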
\paragraph{$N_s=3$ massless scalar fields (with supercurvature contribution)}
The analysis of the model of a FRW universe filled with three massless
scalar fields essentially proceeds along the same lines
as in the previous paragraph.
In particular, the supermetric now has two extra diagonal elements
(i.e., ${\cal G}_{22}={\cal G}_{33}\equiv{\cal G}_{11}$,
plus ${\cal G}_{00}$, all readable from eq. \rf{f9b}), and the supercurvature
turns out to be
\begin{equation}
{\cal R}={6[9(\lambda a^2)^2-14\lambda a^2+4]\over a^4(\lambda a^2-1)^3}
\label{r21}
\end{equation}
so that the effective action, up to first order contributions from
${\cal R}$, reads
\begin{eqnarray}
S_{eff}&\simeq &\int_0^{\bar a} da\int_{-\phi_{1M}}^{\phi_{1M}} d\phi_1
\int_{-\phi_{2M}}^{\phi_{2M}} d\phi_2 \int_{-\phi_{3M}}^{\phi_{3M}} d\phi_3
\biggl \{\L_S~a^7(\lambda a^2-1)^2
\nonumber \\
&+&6\kappa_S~{a^3 [9(\lambda a^2)^2-14\lambda a^2+4]
\over (\lambda a^2-1)}\biggr \}
\nonumber \\
&=&\left(\prod_{i=1}^3 I_{\phi_i}\right)\bar a^4\biggl [\L_{S}\bar a^4\left (
{x^2\over 12}-{x\over 5}+{1\over 8}\right)
\nonumber \\
&+&3{\kappa_{S}\over x^2}
\left (3x^3-{5\over 2}x^2-x-\ln|x-1| \right )\biggr ]
\label{r22}
\end{eqnarray}
Finally,
taking the derivative with respect to $\lambda$ we get
\begin{eqnarray}
{dS_{eff}\over d\lambda}
&\simeq& \left(\prod_{i=1}^3I_{\phi_i}\right )\L_{S}\bar a^{10}
\biggl \{{x\over 6}-{1\over 5}
\nonumber \\
&+&{3\over 4}\a \left [1+{(x-2)\over 3x^2(x
-1)}+{2\over 3x^3}\ln|x-1|\right]\biggr \}
\label{r24}
\end{eqnarray}
where $\a$ is again defined according to eq. \rf{r11}.
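As a cross-check, the passage from eq. \rf{r22} to eq. \rf{r24} can be verified symbolically. The sketch below (Python/SymPy, our own illustration) differentiates the bracket of \rf{r22}; since eq. \rf{r11} is not restated here, the value $\a = 12\kappa_S/(\L_S\bar a^4)$ used in the code is the one implied by a term-by-term comparison of the $\kappa_S$ contributions, and should be read as an assumption.

```python
import sympy as sp

x = sp.symbols('x', positive=True)
L, k, a = sp.symbols('Lambda_S kappa_S abar', positive=True)

# bracket of eq. (r22): S_eff / (prod_i I_phi_i * abar^4)
bracket = (L*a**4*(x**2/12 - x/5 + sp.Rational(1, 8))
           + 3*k/x**2*(3*x**3 - sp.Rational(5, 2)*x**2 - x - sp.log(x - 1)))

# value of alpha implied by matching the kappa_S terms (eq. (r11) assumed)
alpha = 12*k/(L*a**4)

# brace of eq. (r24): dS_eff/dlambda / (prod_i I_phi_i * Lambda_S * abar^10)
brace = (x/6 - sp.Rational(1, 5)
         + sp.Rational(3, 4)*alpha*(1 + (x - 2)/(3*x**2*(x - 1))
                                    + 2*sp.log(x - 1)/(3*x**3)))

# since x = lambda*abar^2, d/dlambda = abar^2 d/dx; the identity to check is
# d(bracket)/dx = Lambda_S * abar^4 * brace
print(sp.simplify(sp.diff(bracket, x) - L*a**4*brace))  # -> 0
```

The vanishing difference confirms that the two expressions agree term by term, including the logarithmic contributions.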
Let us consider the case $\L_{S} \not =0$ first.
For this choice, using arguments similar to those of the previous section, one
can easily see that the contribution coming from the supercurvature term
(the $\a$ term in eq. \rf{r24}) can be neglected when removing the cutoff
$\bar a$,
and therefore the stationarity condition for $S_{eff}$ implied by eq.
\rf{r24} becomes
\begin{equation}
{d S_{eff} \over d\lambda}= 0 ~~~ \Longrightarrow ~~
\lambda\simeq {6\over 5\bar a^2}
\label{r29}
\end{equation}
Eq. \rf{r29} again predicts the value $\lambda =0^+$ as $\bar a \rightarrow \infty$.
In contrast to the previous single-scalar-field model, in the three
massless scalar field case we can also consider the ansatz $\L_{S}=0$.
In this case, in fact, it is easy to check from eq. \rf{r24} that the
stationarity condition for $S_{eff}$ has a solution at a finite point
$x\dot =x_{2}>0$
\begin{equation}
{d S_{eff} \over d\lambda}\biggl\vert_{x_{2}} = 0 ~~~ \Longrightarrow ~~
\lambda\simeq {x_{2}\over\bar a^2}
\label{r31}
\end{equation}
In other words, also in this case $\lambda=0^+$ is a stationary
point for the effective action.
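Numerically, the finite root $x_2$ can be located by bracketing the sign change of the $\a$-bracket in eq. \rf{r24} (a sketch in Python/SciPy; the bracketing interval is our own choice, based on the observation that the bracket tends to $-\infty$ just above $x=1$ and to $1$ at large $x$):

```python
import numpy as np
from scipy.optimize import brentq

def g(x):
    # alpha-bracket of eq. (r24); for Lambda_S = 0 the stationarity
    # condition dS_eff/dlambda = 0 reduces to g(x) = 0
    return 1 + (x - 2)/(3*x**2*(x - 1)) + 2*np.log(abs(x - 1))/(3*x**3)

# g -> -infinity as x -> 1+ while g -> 1 as x -> infinity,
# so a sign change exists above x = 1
x2 = brentq(g, 1.05, 3.0)
print(x2)  # a finite positive root, x2 ~ 1.27
```

Any such root translates, via $x=\lambda\bar a^2$, into $\lambda\simeq x_2/\bar a^2\rightarrow 0^+$ as the cutoff is removed, in agreement with eq. \rf{r31}.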
Finally, evaluating
the second order derivative of $S_{eff}$ with respect to $\lambda$,
at the stationary points \rf{r29} or \rf{r31}, it is
easy to check that these are minima for $S_{eff}$ either if $\L_{S}>0$
(for any $\kappa_{S}$) or if $\kappa_{S}>0$ (when $\L_{S}=0$).
\subsection{$N_s=4r-1$ massive scalar fields}
The next complication of the FRW universe toy model is to consider the case
of an odd number $N_s=4r-1$ ($r=1, 2, ..$) of massive, minimally coupled
scalar fields (the case of one single massive scalar field
is separately treated in the Appendix) with potential
\begin{equation}
V(\phi)=\sum_{i=1}^{4r-1}m_i^2\phi_i^2
\label{f1}
\end{equation}
The supermetric is once again diagonal, with $4r-1$ identical entries
${\cal G}_{ii}$ plus ${\cal G}_{00}$ (which can be easily read off eq.
\rf{f9b}), and there is no ambiguity in the sign of its determinant when
evaluating $S_{eff}$.
In particular, making use of the binomial expansion theorem three times,
the effective action can be written (neglecting the `supercurvature'
contributions) as
\begin{eqnarray}
S_{eff}&\simeq &\L_S\int_0^{\bar a} da\int_{-\phi_{1M}}^{\phi_{1M}} d\phi_1 ~..
\int_{-\phi_{N_sM}}^{\phi_{N_sM}} d\phi_{N_s}
~a^{2N_s+1}\left [\left(\sum_{i=1}^{N_s}m_i^2\phi_i^2 +\lambda\right )a^2-1\right
]^{2r}
\nonumber \\
&=&{1\over 2}({N_s+1\over 2})!\L_{S}
\left (\prod_{i=1}^{N_s}I_{\phi_i} \right )\bar a^{2(N_s+1)}
\nonumber \\
&\times&\sum_{k=0}^{2r} \sum_{j=0}^k \sum_{s_1 .. s_{N_s}=0}^{k-j}
{\delta\left(\sum_{i=1}^{N_s}s_i-k+j\right )(-1)^kx^j y_{1}^{s_1}...
y_{N_s}^{s_{N_s}}\over (2s_1+1)s_1!...(2s_{N_s}+1)s_{N_s}!(2r-k)!j!(4r+k)}
\label{f3}
\end{eqnarray}
where we have used the `cosmological constant-variable' defined by eq.
\rf{g5} and introduced the new `mass-variable' $y_{i}$ according to
\begin{equation}
y_{i}\dot =m_{i}^2\phi_{i M}^2\bar a^2
\label{f5}
\end{equation}
Now, in order to find the stationary points of $S_{eff}$, we can simplify
the whole analysis by taking partial derivatives with respect to $\lambda$
and $m_{i}^2$ and evaluating them at $y_{i}=0$ (i.e., at zero masses for
the scalar fields $\phi_i$).
Proceeding in this way and noting that the only relevant terms
surviving at $y_{i}=0$ from the sums in eq. \rf{f3} are, for the
derivative with respect to $m_{i}^2$, those with $j=k-1$, and,
for the derivative with respect to $\lambda$ and the effective action itself,
those with $j=k$, we obtain the formulas
\begin{eqnarray}
{\partial S_{eff}\over \partial (m_{i}^2)}\biggl\vert_{y_{i}=0}&=
&{\phi_{i M}^2\over 3}{\partial S_{eff}\over \partial \lambda}\biggl
\vert_{y_{i}=0}={\phi_{i M}^2\over 3}
{\partial \over \partial \lambda} \left [S_{eff}
\vert_{y_{i}=0}\right ]
\nonumber \\
&\simeq &{\L_{S}
\bar a^{2(4r+1)}(I_{\phi_i})^2\left (\prod_{j=1}^{4r-1}I_{\phi_j}\right )
\over 24}
\nonumber \\
&\times&
\sum_{k=0}^{2r}(-1)^k \left (\begin{array}{c} 2r \\ k \end{array}
\right){k\over (4r+k)}x^{k-1}
\label{f23}
\end{eqnarray}
The effective action \rf{f3} evaluated at $y_{i}=0$ is, of course,
the same as that considered in section 4.1 for the case of $N_s$ massless
scalar fields (with the restriction that $N_s=4r-1$), and as a consequence
also the stationarity conditions derived from eqs. \rf{f23} are equivalent
to the massless model condition coming from eq. \rf{f11}.
Then the result is that also for the massive model considered here there
is at least one (trivial) stationary point at $m_{i}=0$ ($i=1, 2, .. 4r-1$)
and $\lambda$ given by eq. \rf{f14}.
Moreover, since the general stationarity conditions which one would derive by
equating the partial derivatives of $S_{eff}$, eq. \rf{f3}, with respect to
$\lambda$ and $m_{i}^2$ for {\it arbitrary} $m_{i}$ are still polynomial
equations of finite order in $x$ and $y_{i}$, it is easy to see that any
other possible stationary point of the effective action would still occur
at finite values of $|x_{n}|$ and $|y_{i, n}|$.
Therefore, we can again conclude that the stationary point for the effective
action representing a FRW universe filled with $N_s=4r-1$ massive scalar fields
is, after removal of the cutoffs, unique, i.e. at $|\lambda|=0$ and $m_{i}=
0$ ($i=1, 2, .. 4r-1$).\footnote{The modulus in the value for $\lambda$
reflects our ignorance of the signs of the other possible
stationary points $x_n$ and $y_{i, n}$.}
\paragraph{$N_s=3$ massive scalar fields}
For $N_s=3$ (i.e., $r=1$) the algebra is especially simple, and the
effective action \rf{f3} simplifies to
\begin{eqnarray}
S_{eff}&=&\L_{S}\bar a^8\left (\prod_{i=1}^{3}I_{\phi_i} \right )
\biggl [ {1\over 12}
\biggl ({1\over 5}\sum_{i=1}^3y_{i}^2+{2\over 9}\sum_{i>j=1}^3y_{i} y_{j}
\nonumber \\
&+&{2\over 3}x\sum_{i=1}^3y_{i}+x^2\biggr )-{1\over 5}\left ({1\over 3}
\sum_{i=1}^3y_{i}+x\right )+{1\over 8}\biggr ]
\label{r48}
\end{eqnarray}
In particular, the partial derivatives with respect to $\lambda$ and $m_{i}^2$
turn out as
\begin{equation}
{\partial S_{eff}\over \partial (m_{i}^2)}=
{\L_{S}\bar a^{10}\left (\prod_{j=1}^{3}I_{\phi_j}
\right )(I_{\phi_i})^2\over 120}\left [y_{i}+
{5\over 9}\sum_{j\not =i}y_j +5{x\over 3}-2\right ]
\label{r49}
\end{equation}
\begin{equation}
{\partial S_{eff}\over \partial \lambda}=
{\L_{S}\bar a^{10}\left (\prod_{i=1}^{3}I_{\phi_i}
\right )\over 6}\left [{1\over 3}\sum_{i=1}^3y_{i}
+x -{6\over 5}\right ]
\label{r50}
\end{equation}
from which it is straightforward to find that the unique stationary point
of $S_{eff}$ is at
\begin{eqnarray}
{\partial S_{eff} \over \partial\lambda}= 0 ~~~ \Longrightarrow ~~
\lambda&=&{6\over 5\bar a^2}
\nonumber \\
{\partial S_{eff} \over \partial(m_{i}^2)}= 0 ~~~ \Longrightarrow ~~
m_{i}^2&=&0~~~;~~~i=1, 2, 3
\label{r51}
\end{eqnarray}
On removing the cutoffs we find, as anticipated in the last section,
that the stationary point is at $\lambda=0^+$ and $m_{i}=0$ ($i=1, 2, 3$).
We can also check the nature of this stationary point by evaluating the
eigenvalues of the Hessian of $S_{eff}$, finding
that the stationary point \rf{r51} is a minimum for $S_{eff}$ provided
$\L_{S}>0$.
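The algebra of eqs. \rf{r48}--\rf{r51} is simple enough to verify symbolically. The sketch below (Python/SymPy, our own illustration) works with the bracket of \rf{r48}, i.e. $S_{eff}$ stripped of the overall factor $\L_{S}\bar a^8\prod_i I_{\phi_i}$, so the derivatives are checked only up to the prefactors stated in \rf{r49} and \rf{r50}:

```python
import sympy as sp

x, y1, y2, y3 = sp.symbols('x y1 y2 y3')
ys = (y1, y2, y3)

# bracket of eq. (r48): S_eff / (Lambda_S abar^8 prod_i I_phi_i)
bracket = (sp.Rational(1, 12)*(sp.Rational(1, 5)*(y1**2 + y2**2 + y3**2)
                               + sp.Rational(2, 9)*(y1*y2 + y1*y3 + y2*y3)
                               + sp.Rational(2, 3)*x*(y1 + y2 + y3) + x**2)
           - sp.Rational(1, 5)*(sp.Rational(1, 3)*(y1 + y2 + y3) + x)
           + sp.Rational(1, 8))

# eq. (r50), up to its prefactor:
# d(bracket)/dx = (1/6)[(1/3) sum_i y_i + x - 6/5]
assert sp.simplify(sp.diff(bracket, x)
                   - sp.Rational(1, 6)*((y1 + y2 + y3)/3 + x
                                        - sp.Rational(6, 5))) == 0

# eq. (r49), up to its prefactor:
# d(bracket)/dy1 = (1/30)[y1 + (5/9)(y2 + y3) + (5/3)x - 2]
assert sp.simplify(sp.diff(bracket, y1)
                   - sp.Rational(1, 30)*(y1 + sp.Rational(5, 9)*(y2 + y3)
                                         + sp.Rational(5, 3)*x - 2)) == 0

# eq. (r51): the linear system has the unique solution x = 6/5, y_i = 0
sol = sp.solve([sp.diff(bracket, v) for v in (x,) + ys], (x,) + ys, dict=True)
print(sol)
```

The unique solution returned, $x=6/5$ and $y_i=0$, reproduces the stationary point \rf{r51} before removal of the cutoffs.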
\section{A New Source of Decoherence?}
We have speculated that the dynamics of the Universe is
{\it not} precisely free fall, possibly due to topology-changing
absorption/emission processes. If so, then in the interval
between such interactions the Universe propagates as a virtual particle in
superspace. Alternatively, as we have suggested in some previous articles,
the mass-shell constraint may not really {\it be} a constraint at
the quantum level. In either case, the Universe would be propagating
somewhat off-shell. It is interesting to imagine how this off-shell character
might manifest itself, if the effect were large enough to be
observable.
Consider a solution of the evolution equation \rf{Seq1} and constraints
\hfil\break
$(\d \mbox{\AE} / \d {\cal N} )\Psi = 0$, which is a superposition of two WKB states
\begin{equation}
\Psi(q,\t) = \Psi_A(q,\t) + \Psi_B(q,\t)
\end{equation}
of the form
\begin{eqnarray}
\Psi_A(q,\t) &=& \int d{\cal E} D\a \; F_A({\cal E},\a) \exp\left[ {i\over \hbar}
\left\{ ({\cal E} - \l^2)\t - \sqrt{\cal E} S[Q,\a] \right\} \right] \phi_A(q)
\nonumber \\
\Psi_B(q,\t) &=& \int d{\cal E} D\a \; F_B({\cal E},\a) \exp\left[ {i\over \hbar}
\left\{ ({\cal E} - \l^2)\t - \sqrt{\cal E} S[Q,\a] \right\} \right] \phi_B(q)
\nonumber \\
\label{AB}
\end{eqnarray}
where $\t = s/2\l$ is the rescaled evolution (proper-time) parameter
and $F_{A,B}$ are
distributions concentrated at ${\cal E}=\l^2$ (with a rms uncertainty
$\Delta {\cal E}$) and at parameter values $\{\a\}=\{\a_{A,B}\}$ respectively.
The functional $S[Q,\a]$ is a solution, invariant under
3-space diffeomorphisms, of the Hamilton-Jacobi equation
\begin{equation}
\kappa^2 G^{ij} {\d S \over \d Q^i(x)}{\d S \over \d Q^j(x)}
+ \sqrt{g} {\cal U}[Q(x)] = 0
\end{equation}
with $\{ \a \}$ a set of integration constants.
In these equations $Q$ represents the set of degrees of freedom to be
treated semiclassically, and $\sqrt{g} {\cal U}[Q]$ is the part of the
superpotential involving only those degrees of freedom. Note that in the case
of on-shell propagation, i.e. ${\cal E}=\l^2$, the $\t$-dependence drops out
of the wavefunction, and the expressions in \rf{AB} are just WKB solutions
of the Wheeler-DeWitt equation.
Let us imagine that in some region of superspace where the amplitudes
$\Psi_{A,B}$ are non-negligible, the phase difference
\begin{equation}
\d S[Q'] = \left| S[Q,\a_A] - S[Q,\a_B] \right|
\end{equation}
depends mainly on a small subset $Q'$ of the $Q$ degrees of freedom.
For example, $Q'$ might refer to the location of a particle recorded
on a photographic plate, and $\d S$ refers to the difference in
action, associated with two well separated particle paths in an
interferometer, leading to the same final location.
We now ask whether the two components $\Psi_A$ and $\Psi_B$ will interfere
coherently, in the sense that the term is used in optics, in a measurement
of $Q' \subset Q$. If $\Delta {\cal E} \ne 0$, then we must consider
stationarity with respect to variation
in ${\cal E}$, as well as stationarity with respect to variations in the
parameters $\a$. The stationary phase condition tells us
that the components $\Psi_{A}$ and $\Psi_{B}$ are peaked at a
given configuration $Q$ at parameter times
\begin{equation}
\t_A = {S[Q,\a_A] \over 2 \sqrt{\cal E} } ~~~~, ~~~~
\t_B = {S[Q,\a_B] \over 2 \sqrt{\cal E} }
\end{equation}
respectively, with ${\cal E}$ evaluated at ${\cal E}=\l^2$.
Interference of wavefunctions $\Psi_A$ and $\Psi_B$
is coherent, in the sense of physical optics, if the relative phase
between the two wavefunctions is constant in the $\t$-interval
$[\t_A,\t_B]$. In standard terminology the ``linewidth'' of
the wavefunction is $\Delta {\cal E} /\hbar$, and the ``coherence time'' is
$\Delta \t = \hbar / \Delta {\cal E}$. If the linewidth has a stochastic origin, then
the phase of the wavefunction at $\t + \Delta \t$ is not related in a simple
way to the phase at parameter time $\t$.
The coherence criterion is then
\begin{equation}
\d \t < \Delta \t = {\hbar \over \Delta {\cal E}}
\end{equation}
where
\begin{equation}
\d \t \equiv |\t_A - \t_B| \approx {1\over 2\sqrt{\cal E}} \d S[Q']
\end{equation}
which means
\begin{equation}
{1\over 2\sqrt{\cal E}} \d S < {\hbar \over \Delta {\cal E}}
\end{equation}
Defining $\hbar_{eff}({\cal E}) = \hbar /\sqrt{\cal E}$ and the dispersion
\begin{equation}
\d \hbar = \left| {d \over d {\cal E}} \hbar_{eff} \right|_{{\cal E}=\l^2} \Delta {\cal E}
= \frac{1}{2} \hbar_{eff} {\Delta {\cal E} \over {\cal E}}
\end{equation}
the condition for coherent interference becomes
\begin{equation}
{\d S \over \hbar_{eff} } < {\hbar_{eff} \over \d \hbar}
\label{dS}
\end{equation}
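As an illustration only, the criterion \rf{dS} can be evaluated numerically; all numbers below are hypothetical placeholders (no lower bound on $\d\hbar$ is known, as discussed later in this section):

```python
import numpy as np

hbar = 1.0545718e-34        # J s
E = 1.0                     # cal E on shell, E = lambda^2 (arbitrary units)
dE = 1e-12 * E              # hypothetical rms spread Delta E
dS_over_hbar = 1e9          # hypothetical WKB phase difference dS / hbar_eff

hbar_eff = hbar / np.sqrt(E)
d_hbar = 0.5 * hbar_eff * dE / E          # dispersion of the effective hbar
# coherence criterion of eq. (dS): dS / hbar_eff < hbar_eff / d_hbar
print(dS_over_hbar < hbar_eff / d_hbar)   # True: interference stays coherent
```

With a fractional spread $\Delta{\cal E}/{\cal E}=10^{-12}$, the right-hand side of \rf{dS} is $2\times 10^{12}$, so even a phase difference of $10^9$ radians would interfere coherently in this toy example.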
The argument above is quite general, and applies
to any WKB treatment of the evolution equation \rf{Seq1}.
In fact, if one is prepared to accept that there may be
a stochastic uncertainty $\d \hbar$ (of whatever origin)
in the phenomenological value of Planck's
constant, then a condition of the form \rf{dS} can be
easily deduced from the standard Feynman path integral in fixed background
spacetime. If there are two or more semiclassical paths which contribute
to a given transition amplitude at leading order in $\hbar$, i.e.
\begin{equation}
G[q_f,q_0] \approx \sum_{i} \mu_i e^{iS_i[q_f,q_0]/\hbar}
\end{equation}
and if $\hbar$ itself has some dispersion $\d \hbar$, then the relative
phase between path $i$ and path $j$ becomes indeterminate
if the inequality \rf{dS} is violated, where
$\d S = |S_i-S_j|$ is the difference in action of the
two paths, and $\hbar_{eff}\equiv \hbar$.
A signature of finite dispersion $\d \hbar$ in the effective
value of Planck's constant could be, e.g., an observed decoherence of
particle beams in an ultra-sensitive particle interferometer, in a
situation where standard time-energy considerations would imply that the
beams should interfere coherently. In this case, the wavefunction $\Psi_A$
($\Psi_B$) represents the
contribution to the full ``wavefunction of the Universe'' $\Psi$ in which
the particle travels through path A (B) of the interferometer,
respectively, while $\d S / \hbar_{eff}$ is a WKB phase difference associated
with this difference in path. If the Universe
propagates slightly off-shell, as has been suggested here, then
the interference will be incoherent if the inequality \rf{dS}
is violated. To our knowledge no such decoherence has ever
been observed, and, in the absence of any theoretical lower bound
on $\d \hbar$, a more detailed discussion
of particle interferometry in this context would be premature.
Of course, any finite dispersion in Planck's constant would also feed into
finite uncertainties in every other physical quantity, and some of these
quantities have been measured quite accurately. In particular,
$\hbar / e^2$ can be
deduced, by combining $g-2$ measurements with high-order QED calculations,
to one part in $10^{12}$. However, an ultra-high accuracy measurement
of some physical constant, such as $\hbar / e^2$, does not necessarily
project the Universe into an eigenstate of ${\cal E}$ (or $\hbar_{eff}$).
Planck's constant
is not determined from a single measurement (although $g-2$ {\it can}
be determined from observations of a single electron), and the
reported value would be, in our formalism, an average value
for $\hbar_{eff}$, at the average value ${\cal E}=\l^2$. For example, in the
$g-2$ experiments, one adjusts an rf frequency to maximize the number
of spin flips of a trapped electron \cite{g-2}. Naturally, the peak in
spin-flips versus frequency has a certain width. The dispersion $\d \hbar$,
if indeed there is such a dispersion, would be a contribution
(perhaps negligible, compared to other sources) to that
width, while the center of the peak would locate, in the quantity
$\hbar / e^2$, only the average value of the effective Planck's constant.
\section{Conclusions}
We have seen that the classical dynamics of bosonic fields (including
gravity) in a closed Universe can be re-expressed as describing the free fall
of a point particle in superspace. The Hamiltonian operator describing this
``particle'' contains a (classically unobservable) parameter $\l$ analogous
to mass, and the usual Hamiltonian constraint of general relativity can
be viewed, in terms of this parameter, as a mass-shell condition.
This ``free-fall'' description of general relativity is, of course,
a formal result. Conceivably it also has physical content, and we have
suggested two possibilities: First, quantum effects (virtual universe
loops) could induce an effective action for the (non-standard) supermetric,
and this action is essentially a function of the coupling constants
of the bosonic field theory. In various minisuperspace models, we have seen
that the effective action (or at least, the first terms in its adiabatic
expansion) is stationary for vanishing cosmological constant. We do not know
whether this desirable feature survives in the full theory.
Secondly, one may speculate that the universe, propagating like a
particle, may propagate slightly off-shell. In principle this could lead
to some very interesting effects, as suggested in the last section, but
unfortunately we have no estimate to offer of their magnitude.
\vspace{33pt}
\noindent {\Large \bf Acknowledgements}
\bigskip
J.G.'s research is supported in part by the U.S. Dept. of Energy, under
Grant No. DE-FG03-92ER40711. A.C.'s research is supported by
a JSPS postdoctoral fellowship, under contract No. P95196; he would like
to thank the cosmology group at Tokyo Institute of Technology for the
kind hospitality during this work. Support was also provided by the
Director, Office of Energy Research, Office of Basic Energy Sciences,
of the U.S. Department of Energy under Contract DE-AC03-76SF00098.
\newpage
\section*{Introduction}
Engagement and attention are important in situations of learning, but most methods for measuring attention or engagement are intrusive and unrealistic in everyday situations \citep{robinson1997,cohen1990,radwan2005}.
Recently, inter-subject correlation (ISC) of electroencephalography (EEG) has been proposed as a marker of attentional engagement \citep{Dmochowski2012,Dmochowski2014,ki2016attention} and we ask in this work whether it can be recorded robustly with commercial-grade wireless EEG devices in a classroom setting. Furthermore, we address two other issues related to the robustness of the signal: The potential neurophysiological origin of the measure and the robustness of the detection scheme to inter-subject variability in spatial alignment.
User engagement has been defined as `... the emotional, cognitive and behavioural connection that exists, at any point in time and possibly over time, between a user and a resource' \citep{attfield2011towards}. Traditional approaches to measuring engagement are based on capturing user behaviour via user interfaces, self-report, or manual annotation \citep{o2013examining}. However, tools from cognitive neuroscience are increasingly being employed \citep{szafir2013artful}.
Recent efforts in neuroscience aim to elucidate perceptual and cognitive processes in a more realistic setting and using naturalistic stimuli \citep{Dmochowski2012,ringach2002receptive,hasson2004intersubject,Lahnakoski2014,lankinen2014,chang2015}.
From an educational perspective such quantitative measures may help identify mechanisms that make learning more efficient \citep{szafir2013artful}, align services better with students' needs \citep{attfield2011towards}, or monitor critical task performance \citep{lin2013can}. The potential uses of engagement detection in the classroom are numerous, e.g., real-time and summary feedback for the teacher, motivational strategies for increased student engagement, and screening for impact of teaching materials.
Before the findings of tracking attentional responses with neural activity \citep{Dmochowski2012,Dmochowski2014,ki2016attention} can be employed in a real-time classroom scenario, several issues must be addressed first, including: 1) Is it possible to reproduce the ISCs to naturalistic stimuli under the adverse conditions of a classroom? 2) Are the ISCs robust to inter-student variability of the spatial information processing networks? And 3) can ISCs be recorded with equipment that is both comfortable and affordable enough to make it a realistic technology for schools?
Here we investigate the feasibility of recording such neural responses from students who are viewing videos. We use an approach developed by Dmochowski et al.\ (2012) that uses inter-subject correlation (ISC) of EEG evoked responses.
The basic premise is that subjects who are engaged with the content exhibit reliable neural responses that are correlated across subjects and repetitions within the same subject. In contrast, a lack of engagement manifests in generally unreliable neural responses \citep{ki2016attention}.
ISC of neural activity while watching films has been shown to predict the popularity and viewership of TV-series and commercials \citep{Dmochowski2014},
and shows clinical promise as a measure of consciousness levels in non-responsive patients \citep{Naci2015} (an fMRI study). We argue here that the neural reliability of students may indeed be quantified on a second-by-second basis in groups and in a classroom setting, and we seek to investigate the robustness of measuring it with electroencephalography (EEG) responses during exposure to media stimuli.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Multipanel1}%
\caption{\revone{Experimental setup for joint viewings. \textbf{(Left):} 9 subjects were seated in a row to induce a cinema-like experience. \textbf{(Right):} Subjects seen from the back, watching films projected onto a screen.
Tablets recording EEG are resting on the tables behind the subjects. The signal is transmitted wirelessly from each subject.}}
\label{Fig.setup_joint}
\end{figure}
To enable correlations between multi-dimensional EEG, correlated component analysis (CorrCA) was introduced \citep{Dmochowski2012}. CorrCA finds multiple spatial projections that are shared amongst subjects, such that their components are maximally correlated across time.
Here we are interested in the reproducibility of using CorrCA as a measure of inter-subject correlation, and will focus predominantly
on the first component, which captures most of the neural responses shared across students.
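A minimal sketch of CorrCA in Python/NumPy may clarify the computation (this is our own illustrative re-implementation, not the authors' code; the generalized-eigenvalue formulation follows the standard description of the method):

```python
import numpy as np
from scipy.linalg import eigh

def corrca(X):
    """Correlated component analysis (illustrative sketch).

    X: array of shape (subjects, channels, samples).
    Returns spatial projections W (channels x components), shared by all
    subjects and ordered by decreasing inter-subject correlation."""
    N, D, T = X.shape
    Xc = X - X.mean(axis=2, keepdims=True)
    # R[k, l] is the (channels x channels) cross-covariance of subjects k, l
    R = np.einsum('kit,ljt->klij', Xc, Xc) / (T - 1)
    Rw = sum(R[k, k] for k in range(N))        # pooled within-subject
    Rb = R.sum(axis=(0, 1)) - Rw               # pooled between-subject
    # maximize w' Rb w / w' Rw w  ->  generalized eigenvalue problem
    evals, W = eigh(Rb, Rw)
    order = np.argsort(evals)[::-1]
    return W[:, order], evals[order]
```

The first column of $W$ plays the role of the first correlated component: projecting each subject's data onto it yields the component time series from which ISC is computed.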
The main goal of the present work is to determine \revtwo{whether student neural reliability} can be quantified in a real-time manner \revtwo{based on} recordings of brain activity in a classroom setting using a low-cost, portable EEG system -- the Smartphone Brain Scanner
\citep{stopczynski2014smartphone}. \revtwo{With regard to} the robustness of the detection scheme, we report on both theoretical and experimental investigations.
\revone{First, we show that ISC evoked by rich naturalistic stimuli is robust enough to be reproduced with commercial-grade equipment, and to be recorded simultaneously from multiple subjects in a classroom setting. \revtwo{This opens up} the possibility of real-time estimation of student \revtwo{attentional} engagement.
Secondly, we show mathematically that the CorrCA algorithm is surprisingly robust to variations in the spatial patterns of brain activity across subjects.
Finally, we demonstrate that the level of ISC
is related to a very basic visual response that is modulated by
narrative coherence of the video stimulus.}
\section*{Results}
To monitor \revtwo{neural reliability} we used video stimuli as they provide a balance between realism and reproducibility \citep{hasson2004intersubject}. We recorded EEG activity using the
Smartphone Brain Scanner while subjects watched short video clips of approximately 6 minutes duration, either individually or in a group setting (Fig. \ref{Fig.setup_joint}).
To measure reliability of EEG responses, we used correlated components analysis (CorrCA, see Methods) to extract maximally correlated time series with shared spatial projection across repeated views within the same subject (inter-viewing correlation, IVC), or between subjects (inter-subject correlation, ISC).
One of our main points of interest is to investigate the robustness of ISC from EEG recorded in a classroom through comparisons with results previously measured in a laboratory setting \citep{Dmochowski2012}. We therefore employed similar methods of analysis and calculated ISCs and IVCs in 5-second windows with 80\% overlap to investigate their temporal development at a 1-second resolution. We chose to analyse the EEG with CorrCA in a broad frequency band (0.5--45 Hz), instead of investigating specific frequency bands, to keep the analysis methods comparable with the prior lab-based study.
Moreover, CorrCA measures ISC robustly at low computational cost, making it a good candidate for long-term real-time analyses on small devices in a classroom setting.
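The sliding-window computation described above can be sketched as follows (our own illustrative code; `fs` denotes the sampling rate, and the defaults reproduce the 5 s window / 80% overlap choice, i.e. one ISC value per second):

```python
import numpy as np

def windowed_isc(Y, fs, win_s=5.0, overlap=0.8):
    """ISC time course from component time series.

    Y: array (subjects, samples) of first-component time series.
    Returns the mean pairwise Pearson correlation across subjects in
    sliding windows (5 s windows, 80% overlap -> one value per second)."""
    win = int(round(win_s * fs))
    hop = int(round(win * (1 - overlap)))
    N = Y.shape[0]
    iu = np.triu_indices(N, k=1)          # all subject pairs
    isc = []
    for start in range(0, Y.shape[1] - win + 1, hop):
        C = np.corrcoef(Y[:, start:start + win])
        isc.append(C[iu].mean())
    return np.array(isc)
```

In practice `Y` would hold each subject's data projected onto the first CorrCA component; significance of each windowed value can then be assessed against time-shuffled surrogate data, as done for the grey chance-level band in Fig. \ref{Fig.iscscalp}a.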
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/Multipanel2}%
\caption{\revone{ISC of neural responses to naturalistic stimuli are robust across different groups of subjects and reproducible in a classroom setting.} \textbf{(a)} Comparison between the ISC obtained by Dmochowski et al. 2012 and the present study \revone{for the first CorrCA component and the first viewing of \textit{Bang! You're Dead}. The ISC is calculated with a 1-second resolution (5 s windows, 80\% overlap).
The grey area indicates chance levels for ISC ($p > 0.01$ estimated with time-shuffled surrogate data, uncorrected for multiple comparisons). } \textbf{(b)} \revone{The corresponding scalp projections of the first three components obtained from the correlated component analysis (CorrCA) of each of the four subject groups watching \textit{Bang! You're Dead} the first time. For each component, CorrCA finds one shared set of weights for all subjects in the group.} Four distinct groups of subjects watched videos in different scenarios: individually on a tablet computer (\textit{Individual}), individually with order of scenes scrambled in time (\textit{Scrambled}), \revone{and jointly in a classroom as seen in Fig. \ref{Fig.setup_joint} (\textit{Joint 1} and \textit{Joint 2}).} For each projection, the polarity was normalized so the value at the Cz electrode is positive.}
\label{Fig.iscscalp}
\end{figure}
\revtwo{The subjects watched three video clips, which were presented twice in random order. The first video was}
a suspenseful excerpt from the short film, \textit{Bang! You're Dead}, directed by Alfred Hitchcock. It was selected because it is known to effectively synchronize brain responses across viewers \citep{hasson2008neurocinematics,Dmochowski2012}.
The second video was an excerpt from \textit{Sophie's Choice}, directed by Alan J. Pakula (1982), and the third was an uneventful baseline video of people silently descending an escalator.
For both the joint and individual recording scenarios, the time course of the ISC, based on the first CorrCA component from subjects watching the film, closely reproduces results obtained previously in a laboratory setting (Fig. \ref{Fig.iscscalp}a and Table \ref{Tab.isccorr_both}).
\revtwo{An indication of the stability of the technique is provided by the spatial patterns of the neural activity that drives these reproducible responses.
Similar to other component extraction techniques, such as independent component analysis or common spatial patterns \citep{parra2003,koles1990}, CorrCA reduces the signal of multiple electrodes to a few components.
The ISC is then computed for the first few components,} \revone{which capture most of the correlation between recordings.}
The strongest \revone{three} correlated components show a stable pattern of activity across the different groups and recording
\revone{conditions (Fig. \ref{Fig.iscscalp}b), all three obtaining significant \revtwo{spatial} correlations between groups ($r_{\textit{comp1}} = 0.97$, $r_{\textit{comp2}} = 0.91$, $r_{\textit{comp3}} = 0.79$, all with $p < 0.002$ for uncorrected permutation test), for \textit{Bang! You're Dead}.
The robustness to recording conditions is also apparent for the second film clip from \textit{Sophie's Choice} ($r_{\textit{comp1}} = 0.51$, $p < 0.002$; $r_{\textit{comp2}} = 0.48$, $p = 0.008$; $r_{\textit{comp3}} = 0.36$, $p = 0.033$), albeit with a lower average correlation, which for the first two components may be due to noisy scalp maps for the \textit{Joint 1} group and \textit{Individual} group, respectively (see supplementary Fig. S1). For the baseline video, only the first component achieved significant average correlation between groups ($r_{\textit{comp1}} = 0.46$, $p = 0.014$).}
\revone{The lower stability in the scalp maps obtained for \textit{Sophie's Choice} and the baseline video could be explained by the lower ALD of these stimuli (see below), since these films obtain lower average IVC compared to \textit{Bang! You're Dead} for all groups (Fig. \ref{Fig.barintraplot}).}
\begin{table}
\caption{Correlation coefficients between the \revone{ISC time courses} obtained in a laboratory setting \citep{Dmochowski2012} and those obtained in the present study (groups \textit{Individual, Joint 1} and \textit{Joint 2}). Inter-subject correlation (ISC) measures similarity of responses between subjects for the first and second viewings (v1,v2), and the inter-viewing correlation (IVC) measures similarity within-subject between the two viewings. \revone{Coefficients are calculated for the first CorrCA component recorded while watching \textit{Bang! You're Dead}. **: $p<0.01$.}}
\label{Tab.isccorr_both}
\centering
\begin{tabular}{lccc}
& ISC v1 & ISC v2 & IVC \\
\toprule
Individual & 0.64** & 0.33** & 0.49** \\
\hline
Joint group 1 & 0.51** & 0.15** & 0.44** \\
\hline
Joint group 2 & 0.61** & 0.28** & 0.54** \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
{\small
\caption{\revone{\revtwo{Scenes described by the subjects as making the strongest impression on them.} Based on the 30 subjects who saw \textit{Bang! You're Dead} with uninterrupted narrative. \revtwo{In a post-experiment questionnaire, subjects were asked to describe the scenes that made the strongest impression on them. Their answers were collected} into eight groups. The subjects each mentioned 1.77 scenes on average (0.77 std.). 29 subjects (97\%) mentioned either \revtwo{scenes where the boy points the gun} at his mother or at other people.}}
\label{Tab.questbang}
\centering
\begin{tabular}{llr}
\textbf{Scene} & \textbf{Approx.\ times} & \textbf{No of times mentioned (\%) }\\
\toprule
The boy shoots (or points gun at) mother & 2:25 and 3:00 & 16 (53 \%) \\
\hline
The boy shoots (or points gun at) people & 2:10, 3:30 and 5:30 & 15 (50 \%) \\
\hline
The boy loads another bullet into gun & 6:10 & 8 (27 \%) \\
\hline
The uncle discovers his gun is gone & 4:35 & 4 (13 \%) \\
\hline
The boy finds and loads gun & 0:25 and 1:40 & 4 (13 \%) \\
\hline
The boy points at mirror or shoots towards camera & 0:40, 1:50 and 5:25 & 4 (13 \%) \\
\hline
When the father did not run after the boy & 3:00 & 1 (3 \%) \\
\hline
The abrupt ending & 6:14 & 1 (3 \%) \\
\bottomrule
\end{tabular}
}
\end{table}
\revone{\revtwo{Previous research has indicated the potential of ISC as a marker of engagement and conscious processing \citep{Dmochowski2012,Dmochowski2014,Naci2015,Lahnakoski2014,ki2016attention}. To further investigate this, we asked subjects post-experiment to describe the film segments (or "scenes") that made the biggest impact on them. We quantified their answers by assigning each answer to one of eight general scene descriptions.} Table \ref{Tab.questbang} shows that the scenes most frequently mentioned are "Boy pointing gun at mother" or "Boy pointing gun at people", and 29 out of 30 subjects mentioned one or both of these scenes as \revtwo{having had high} impact on them. The most frequently mentioned scene occurs around 2:25, where a peak in the ISC can be seen (Fig. \ref{Fig.iscscalp}a). The high impact of this particular scene was confirmed by the suspense ratings presented in Naci et al. (2015).}
\revtwo{See Dmochowski et al. (2012) for additional descriptions and examples of scenes eliciting high ISC in \textit{Bang! You're Dead}.}
To determine if the portable equipment, which uses only 14 \revone{channels}, can detect varying levels of \revtwo{neural reliability}, a second group of subjects watched \revone{the same two film clips} individually, but now with scenes scrambled in time. \revone{This intervention is a widely used tool to create a baseline with similar low-level stimuli, yet reduced engagement \citep{miller1950verbal,anderson2006cortical,hasson2008neurocinematics,Dmochowski2012}}.
\revone{See Methods for more information on the definition and time scales of the scrambled scenes.}
Despite using consumer-grade EEG, we find that IVC is significantly above chance for a large fraction of the original engaging clip, but drops dramatically when the scenes are scrambled in time (mean IVC, Fig. \ref{Fig.barintraplot}, $p<0.01$, for \textit{Bang! You're Dead}). The baseline video, \revtwo{which subjects reported did not engage them at all, obtained significant ISC ($p < 0.01$, uncorrected) in only 2.3 \% of the 354 tested time windows, compared to 54.1 \% of the windows for \textit{Bang! You're Dead}.}
\begin{figure}
\centering
\includegraphics[width=.85\columnwidth]{figures/ViolinIntraMovie_ISC_pval_png}%
\caption{\revone{Distribution and mean of IVC calculated from the first CorrCA component for subject groups and films. Violin plots show distributions of IVC estimated using a squared exponential (normal) kernel with bandwidth of 0.005 \citep{hoffmann2015}. Horizontal black bars denote distribution means. For visualisation purposes, the extreme 2.5\% values at either end of the distributions were left out of the violin plots (but were kept for estimating mean and p-values).}
A block permutation test (block size $B = 25\ s$) \revtwo{was employed} to estimate statistically significant differences \revone{in the mean IVC} between viewing conditions \revone{(uncorrected for multiple comparisons).} \revone{For both films the mean IVC differed between the groups with normal narrative and the \textit{Scrambled} group (\textit{Bang! You're Dead}: $p_{\textit{Individual}} = 0.006$, $p_{\textit{Joint 1}} = 0.033$, $p_{\textit{Joint 2}} = 0.004$; \textit{Sophie's Choice}: $p_{\textit{Individual}} = 0.059$, $p_{\textit{Joint 1}} = 0.37$, $p_{\textit{Joint 2}} = 0.012$). In contrast, there were no significant differences between the groups that watched the original, unscrambled narrative.}
Note that the \textit{Scrambled} group did not watch the baseline video.}
\label{Fig.barintraplot}
\end{figure}
For experiments conducted in \revone{less controlled, everyday} settings as in \revtwo{this study}, it is important to assess across-session reproducibility. To test this, we recorded a second group of subjects in a classroom setting \revtwo{who watched} the material together (\textit{Joint 1} and \textit{2}). These two groups obtained mean IVCs comparable to the individual recordings (Fig. \ref{Fig.barintraplot}, \textit{Bang! You're Dead}: $p>0.49$, \textit{Sophie's Choice}: $p>0.26$), and also showed reproducibility between the groups of simultaneous recordings (Fig. \ref{Fig.barintraplot}, \textit{Bang! You're Dead}: $p>0.49$, \textit{Sophie's Choice}: $p>0.08$).
Robustness to inter-subject variations in the spatial brain structure is a basic question when applying CorrCA to classroom data.
\revtwo{CorrCA is derived under the assumption that the spatial networks of subjects are identical. This assumption} could be challenged by inter-individual differences, however, it turns out to be surprisingly robust to such variability \citep{kamronn2015}.
To demonstrate this, we briefly analyse a 'worst case' scenario in which the true mixing weights of two subjects form a pair of \textit{orthogonal} vectors. The observations are assumed to consist of a single true signal, \textbf{z}, mixed into $D$ dimensions with additive Gaussian noise; $\textbf{X}_1 = \textbf{a}_1\textbf{z}^\intercal + \boldsymbol{\epsilon}\ $, $\textbf{X}_2 = \textbf{a}_2\textbf{z}^\intercal + \boldsymbol{\epsilon}\ $. Given a large sample, the covariance matrices are given as $\textbf{R}_{11} = P\cdot\textbf{a}_1\textbf{a}_1^\intercal + \sigma^2 \textbf{I}\ $, $\quad \textbf{R}_{12} = P\cdot\textbf{a}_1\textbf{a}_2^\intercal\ $, where $P$ is the variance of \textbf{z} and $\sigma^2$ signifies the noise variance. For simplicity the weight vectors are assumed to be unit length. The two matrices in Eq.\ (\ref{eq.correlated component analysis}) can then be written as
\begin{align}
(\textbf{R}_{11} + \textbf{R}_{22})^{-1} = \frac{1}{P}\left([\textbf{a}_1\ \textbf{a}_2] \wvec + \frac{2\sigma^2}{P} \textbf{I}\right)^{-1}; \ \ \ \ \textbf{R}_{12} + \textbf{R}_{21} = P\cdot [\textbf{a}_1\ \textbf{a}_2] \wvectwo , \label{eq.r12r21}
\end{align}
using block matrix notation. With $\textbf{a}_1^\intercal\textbf{a}_2 = 0$, $\|\textbf{a}_1\|^2 = \|\textbf{a}_2\|^2 = 1$ and the Woodbury identity, the product of the two matrices in Eq.\ (\ref{eq.r12r21}) can be expressed as
\begin{align}
\left(\textbf{R}_{11} + \textbf{R}_{22}\right)^{-1} \left(\textbf{R}_{12} + \textbf{R}_{21}\right) = \frac{P}{2\sigma^2 + P}(\textbf{a}_1\textbf{a}_2^\intercal + \textbf{a}_2\textbf{a}_1^\intercal ). \label{eq.cocamatrix}
\end{align}
An eigenvector of matrix (\ref{eq.cocamatrix}) takes the form $\alpha\textbf{a}_1 + \beta\textbf{a}_2$,
with $\alpha = \pm\beta$ and $\pm\frac{P}{2\sigma^2 + P}$ as eigenvalues.
\revtwo{By applying this eigenvector to observations, $\textbf{X}_1$ and $\textbf{X}_2$, we see that CorrCA still identifies the relevant time series, \textbf{z}.}
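As a sanity check (not part of the original analysis), the worst-case result above can be verified numerically: simulating two subjects with orthogonal unit-norm mixing vectors and solving the CorrCA eigenproblem of Eq.\ (\ref{eq.correlated component analysis}) recovers the predicted top eigenvalue $P/(2\sigma^2 + P)$ and the shared time series \textbf{z}. All parameter values below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
D, N, P, sigma2 = 8, 20000, 1.0, 0.5

# Orthogonal, unit-norm mixing vectors (the 'worst case') and a shared source z.
a1 = np.eye(D)[0]
a2 = np.eye(D)[1]
z = rng.normal(0.0, np.sqrt(P), N)

X1 = np.outer(a1, z) + rng.normal(0.0, np.sqrt(sigma2), (D, N))
X2 = np.outer(a2, z) + rng.normal(0.0, np.sqrt(sigma2), (D, N))

R11 = X1 @ X1.T / N
R22 = X2 @ X2.T / N
R12 = X1 @ X2.T / N
R21 = X2 @ X1.T / N

# CorrCA eigenproblem: (R11 + R22)^{-1} (R12 + R21) w = rho w
vals, vecs = np.linalg.eig(np.linalg.solve(R11 + R22, R12 + R21))
idx = np.argmax(vals.real)
rho = vals.real[idx]          # theory predicts P / (2*sigma^2 + P) = 0.5 here
w = vecs[:, idx].real

# Applying the eigenvector to the observations recovers z (up to sign/scale).
corr = abs(np.corrcoef(w @ X1, z)[0, 1])
```

With these parameters the theoretical correlation between the projected data and \textbf{z} is $\sqrt{P/2}/\sqrt{P/2+\sigma^2} \approx 0.71$.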
\begin{figure}
\centering
\includegraphics[width=.75\columnwidth]{figures/ISCscene}%
\caption{The ISC of \revone{the first CorrCA component is temporally correlated with the average luminance differences (ALD) of the film stimulus. ALD is calculated as the frame-to-frame difference in pixel intensity, smoothed to match the 5 s window of ISC, and mainly reflects the frequency of changes in camera position.} Data computed from the neural responses \revone{of subjects watching }\textit{Bang! You're Dead}.}
\label{Fig.ISCscene}
\end{figure}
\begin{table}
\caption{Correlation coefficients between the ALD and the ISC for the two viewings (v1, v2), as well as the IVC for the first correlated component. The correlation is presented for \textit{Bang! You're Dead} and \textit{Sophie's Choice} for the \textit{Individual} and \textit{Scrambled (Scr)} groups. **: $p<0.01$.}
\label{Tab.scenecorr}
\centering
\begin{tabular}{lccc}
& ISC v1 & ISC v2 & IVC \\
\toprule
Bang! You're Dead & 0.71** & 0.61** & 0.56** \\
\hline
Sophie's Choice & 0.50** & 0.24** & 0.23** \\
\hline
Bang! You're Dead (Scr) & 0.54** & 0.45** & 0.35** \\
\hline
Sophie's Choice (Scr) & 0.42** & 0.01 & -0.22** \\
\bottomrule
\end{tabular}
\end{table}
For the first CorrCA component, the channels weighted most heavily are the ones positioned over the occipital lobe (see Fig. \ref{Fig.iscscalp}b).
To estimate how much of the ISC was driven by basic low-level visual processing, we analysed the relation between ISC and a measure of frame-to-frame luminance fluctuations (average luminance difference, ALD; see methods). Note that to avoid synchronised eye artefacts and to ensure that only signals of neural origin contributed to the measured correlations, we removed independent components related to eye artefacts from the EEG (see methods).
Figure \ref{Fig.ISCscene} and Table \ref{Tab.scenecorr} show that there is a significant correlation between the ISC and the ALD for both \textit{Bang! You're Dead} and \textit{Sophie's Choice} \revone{for the first CorrCA component}. This suggests that this portion of the correlated activity may indeed be driven by low-level visual evoked responses. However, \revtwo{the degree of engagement, here represented by narrative coherence, appears to modulate the \emph{amplitude} of the ISC time course: the response to the scrambled stimulus was still driven by the visual input, but to a lesser extent.}
Previous research has shown that visual evoked potentials (VEP) are modulated by spatial attention \citep{Johannes1995} and that even feature-specific attention enhances steady-state VEPs \citep{muller2006}.
We \revtwo{quantify the effect of scrambling the narrative} by comparing the sensitivity (slope) of ISC to ALD in both the normal and scrambled conditions \revone{by fitting a simple linear model} (Fig. \ref{Fig.GAINS}). For both films we found significant reductions of the ISC/ALD slope in the scrambled version ($p<0.01$; block permutation test, with block size $B = 25\ s$).
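The through-origin fit used here reduces to a one-line estimator: constraining the line to pass through $(0,0)$, the least-squares slope of $\mathrm{ISC} \approx s \cdot \mathrm{ALD}$ is $s = \sum_t x_t y_t / \sum_t x_t^2$. The sketch below is our own illustrative code, with hypothetical names; it is not the authors' implementation.

```python
import numpy as np

def origin_slope(ald, isc):
    """Least-squares slope of ISC ~ slope * ALD, line constrained through (0, 0)."""
    ald = np.asarray(ald, dtype=float)
    isc = np.asarray(isc, dtype=float)
    return float((ald * isc).sum() / (ald * ald).sum())
```

The slopes obtained for the normal and scrambled conditions would then be compared with the block permutation test described in Methods.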
\begin{figure}
\centering
\includegraphics[width=.95\columnwidth]{figures/gain_analysis_nonrm}%
\caption{Relation between the ISC and the ALD for different conditions. Each point indicates
\revone{a point in the ISC time course as seen in Fig. \ref{Fig.iscscalp}a (5 s windows, 80\% overlap) and the corresponding} ALD calculated from the visual stimulus.
It is evident \revone{that time points} with higher luminance fluctuations (high ALD) result in higher correlation of brain activity across subjects (high ISC).
\revone{The indicated "slope" is a least squares fit of the slope of lines passing through (0,0).}
The slope indicates the strength of ISC for a given ALD value. \revone{For both films} there is a significant drop in the slope ($p<0.01$; block permutation test with block size $B = 25\ s$); thus the original narrative (blue) elicits higher ISC than the less engaging scrambled version of the films (red). \revone{Note that the brightness of the scenes in \textit{Sophie's Choice} is much lower than in \textit{Bang! You're Dead}, resulting in an ALD that is lower by almost a factor of 10.}}
\label{Fig.GAINS}
\end{figure}
\section*{Discussion}
We have demonstrated that students' \revtwo{neural reliability to} media stimuli may be quantified using EEG in a classroom setting.
For educational technology, cost and robustness are key features; hence, we aimed to establish a realistic scenario based on low-cost consumer-grade equipment, the Smartphone Brain Scanner, focusing on several potential sources of degraded robustness.
We have provided evidence that salient aspects of the \revtwo{neural reliability previously} detected with laboratory grade equipment can be reproduced in a realistic setting. We recorded fully-synchronized EEG with nine subjects in a real classroom and found that the level of neural response reliability matched prior laboratory results.
\revone{The robustness of CorrCA and ISC is supported by the reproducibility between recording conditions, both of the ISC time courses throughout the film clips and of the spatial topographies of the first three CorrCA components. For the \revtwo{film clip from} \textit{Bang! You're Dead} we saw that seven subjects were enough to obtain stable topographies for all three components, whereas for \textit{Sophie's Choice} and the baseline video the results were noisier, \revtwo{suggesting} that more subjects are needed to obtain stable results.
Previous research shows that ten subjects provided stable results in a laboratory setting for non-narrative baseline videos and films with lower ISC and IVC \citep{Dmochowski2012}.}
Mathematically, we have shown that our detection scheme, CorrCA, is robust to inter-subject variability in the spatial configuration of brain networks, \revtwo{whether intrinsic or induced by cap misalignment.}
\revone{In the calculations, we assumed two subjects \revtwo{in a worst-case scenario where the subjects' spatial projections are orthogonal.} This result conforms well with simulations \revtwo{showing} that, even for multiple subjects with randomly drawn spatial projections, CorrCA was able to find the relevant time series \citep{kamronn2015}. The simulations also showed that increasing the number of subjects decreased the signal-to-noise ratio, presumably because a single estimated common projection cannot fit the different projections of each subject.}
\revone{We have presented results that further \revtwo{indicate a relationship between changes in ISC and viewer engagement.} Through a basic analysis of questionnaires on scenes of high impact, we found that high ISC is indeed associated with high impact. We have also shown a relationship between neural responses to luminance fluctuations and the coherence of the stimulus narrative. For both films presented, we saw a significant drop in the average IVC for subjects watching the \revtwo{film sequences in which the narrative had been temporally scrambled.}
At the same time, no significant difference was found between the groups watching \revtwo{the film sequences that had not been scrambled,} which further underlines the robustness of the measure.}
It may appear surprising that there exists a significant correlation between the \emph{raw EEG signals} of various students in the classroom. However, it is well-known that eye scan patterns in a film audience follow a specific pattern after a scene change, activating the dorsal pathway \citep{unema2005time}.
\revone{A valid assumption could therefore be that the correlation is due to synchronised artefacts from eye movements, but this has recently been shown not to affect attentional modulation of ISC \citep{ki2016attention}.
Also, it is known} that stimuli in the form of flashing images elicit VEPs, which are modulated in amplitude by the luminance \citep{Armington1968}. When recorded with EEG, the spatial distribution of the early VEP at 100ms (P100) is similar to the scalp maps of the first correlated component (C1 in Fig. \ref{Fig.iscscalp}b) \citep{Johannes1995,Sandmann2012}.
\revone{We investigated whether low-level visual processes could be a driving force behind the measured ISCs by correlating \revtwo{the ISC} with changes in luminance in the video stimuli, as measured by the ALD. We found that luminance fluctuations drive a significant portion of the ISC.
In all four groups of subjects, \textit{Sophie's Choice} obtained lower IVC than \textit{Bang! You're Dead}. \revtwo{This difference could be} explained by the fact that this film clip also had a much lower ALD. Also, Fig.\ \ref{Fig.ISCscene} indicates that the passage in \textit{Bang! You're Dead} with the highest and most sustained ISC (around 1:20 to 1:50) coincides \revtwo{with the interval containing the most scene changes.} This relationship could, however, also be due to more complex processes, as fast-paced cutting is a known cinematographic tool used by Hitchcock to induce suspense and thereby increase the attention of the viewer \citep{Bordwell2002}.
The strong link between ISC and luminance fluctuations due to scene cuts has also recently been reported in an fMRI study \citep{herbec2015}. This would be worth taking into account in future studies investigating the applicability of ISC. Baseline videos could be created to achieve ALD features similar to those of the target stimuli. The baseline video created for this study consisted of one continuous scene of people entering and exiting an escalator in a relaxed manner, and did not produce any significant correlation. Future studies might use a baseline video containing scene cuts of faces and body parts, to also take the effect of editing into account.
To investigate the possibility that higher-level processes are also at play, we analysed the linear relationship between ISC and luminance fluctuations at a given time in the video stimulus. The scrambling operation aimed to test for a change in attentional engagement while controlling for low-level features. The premise was that subjects would be less attentive to the stimulus, i.e. less "engaged", if they could not follow the narrative arc of the story. With that in mind, Figs. \ref{Fig.ISCscene} and \ref{Fig.GAINS} suggest that ISC is driven by stimulus-evoked responses that are modulated by attentional engagement with the stimulus.
\revtwo{We have demonstrated the feasibility of tracking inter-subject correlation in a classroom setting; a measure that has been related to attentional modulation \citep{ki2016attention}.
We have shown that ISC is robust to recording equipment and conditions, and we have presented evidence that the amplification of ISC in films with a strong and coherent narrative is due to attentional modulation of visual evoked responses. Thus ISC may be used as an} indirect electrophysiological measure of engagement through attentional top-down modulation of low-level neural processes. Recent research has shown that attentional modulation of neural responses takes place in speech perception \citep{mesgarani2012,mirkovic2015}, which lends credibility to a similar process occurring in the visual system. The evidence that such a basic and well-defined mechanism could be at play further adds to the robustness of the approach in real everyday scenarios.}
\section*{Methods}
{\bf Protocol.} Four groups of subjects watched the video stimuli in different scenarios. The first group ($N=12$, \textit{Individual}) watched videos individually in an office environment on a tablet computer (Google Nexus 7 tablet, with a 7" (17.8 cm) screen) with earphones. The second group ($N=12$) saw the videos in the same manner, but the scenes of the film stimulus \revtwo{were scrambled in time so that the narrative was lost (\textit{Scrambled}). The objective of this condition was} to demonstrate that the similarity of responses across subjects is not simply the result of low-level stimulus features (which are identical in the \textit{Individual} and \textit{Scrambled} conditions), but instead is modulated by narrative coherence, which presumably engages viewers. Two additional groups ($N=9,\ N=9$) watched the original videos on a screen in a classroom (Figure \ref{Fig.setup_joint}, \textit{Joint 1} and \textit{Joint 2}), with sound projected through loudspeakers. An attempt was made to \revtwo{create viewing conditions for the subjects in the \textit{joint} groups that were similar to those for the \textit{individual} group,} i.e., lights were dampened and the projected image produced approximately the same field-of-view (see supplementary materials).
The central question was whether the viewing condition (i.e., in a group versus individually) \revone{influences the level of ISC across subjects.}
{\bf Stimuli.} The first video clip was a suspenseful excerpt from the short film \textit{Bang! You're Dead} (1961) directed by Alfred Hitchcock. It was selected because it is known to elicit highly reliable brain activity across subjects in fMRI \citep{hasson2004intersubject} as well as EEG \citep{Dmochowski2012}.
Our second stimulus was a clip from \textit{Sophie's Choice}, \revone{directed by Alan J. Pakula (1982), which has been used earlier} to study fMRI activity in the context of emotionally salient naturalistic stimuli \citep{Raz2012}.
A third, non-narrative control video, recorded in a Danish metro station, showed \revtwo{several people being transported} quietly on an escalator.
Each video clip had a length of approximately six minutes and was shown twice to each subject. For each viewing the order of the clips was randomized, while the same random order was used the second time the clips were shown. A combined video was created for each of the six possible permutations of the order of the clips, starting with a 10 second 43 Hz tone for use in post processing synchronization, and 20 seconds black screen between each film clip. The total length of the video amounted to 39 minutes.
An additional control stimulus (\textit{Scrambled}) was created by scrambling the order of the scenes in \textit{Bang! You're Dead} and \textit{Sophie's Choice} \revtwo{in accordance with previous research} \citep{hasson2008neurocinematics,Dmochowski2012}.
\revone{In these studies, scene segments were defined in varying temporal scales (36 s, 12 s, and 4 s) \revtwo{that consisted} of multiple camera positions, "shots". For this study we defined a scene as a single shot (i.e. the segment between two scene cuts) with the added rule \revtwo{that a scene must not exceed 250 frames ($\sim$ 10 s) to reduce subjects' ability to infer the narrative from long scenes.}
This procedure resulted in 73 scenes lasting between 0.5 and 10 seconds and corresponded to the intermediate to short time-scales employed in previous studies \citep{hasson2008neurocinematics}.}
{\bf Subjects.} A total of 42 female subjects (mean age: 22.4y, age range: 18-32y), who gave written informed consent prior to the
experiment, \revtwo{were recruited for this study.} Non-invasive experiments on healthy subjects are exempt from ethical committee processing by Danish law \citep{DenNationaleVidenskabsetiskeKomite2014}.
Among the 42 recordings, \revone{nine} were excluded due to unstable wireless communication that precluded proper synchronization of the data across subjects \revone{(five from the \textit{Individual} group, \revtwo{one from the \textit{Scrambled} group} and three from the two \textit{Joint} groups)}. \revtwo{The difference in the number of recordings in the different groups} \revone{could give unfair advantages with respect to noise when using CorrCA or calculating ISC. We therefore decided to randomly choose four subjects from the \textit{Scrambled} group and one from} \revtwo{the \textit{Joint 2} group and excluded these from the analyses, ensuring that each group had seven fully synchronized recordings.}
{\bf Portable EEG -- Smartphone Brain Scanner.}
Research grade EEG equipment is costly, time-consuming to set up, and immobile. However, consumer grade EEG equipment \revtwo{that is more affordable and more comfortable has recently appeared.} Here we use the modified 14-channel system, 'Emocap', based on the Emotiv EPOC EEG headset. For details and validation, see \citep{stopczynski2014smartphone,stopczynski2014a}. \revtwo{In this study it was implemented} on Asus Nexus 7 tablets. An electrical trigger and associated sound was used to synchronize EEG and video signals in the individual viewing condition, while a split audio signal (simultaneously feeding into microphone and EEG amplifiers) was used to synchronize the nine subjects' EEG recordings and the video in the joint viewing condition
\revone{(see supplementary materials for further information on synchronisation).}
The resulting timing uncertainty was measured to be less than 16 ms.
\revone{The EEG was recorded at 128 Hz and subsequently bandpass filtered digitally}
using a linear phase windowed sinc FIR filter between 0.5 and 45 Hz and shifted to adjust for group delay. Eye artefacts were reduced with a conservative pre-processing procedure using independent component analysis (ICA), removing up to 3 of the 14 available components (Corrmap plug-in for EEGLAB \citep{Delorme2004,Viola2009}).
{\bf Correlated component analysis to measure ISC and IVC.}
CorrCA was presented by Dmochowski et al. (2012) as a constrained version of Canonical Correlation Analysis (CCA).
\revone{CorrCA seeks sets of weights that maximise the correlation between the neural activity of subjects experiencing the same stimuli. For each neural component, CorrCA finds one shared set of weights for all subjects in the group.}
Given two multivariate spatio-temporal time series \revtwo{(termed "view" in CorrCA)}, $\{\textbf{X}_{1},\textbf{X}_{2}\} \in \mathbb{R}^{D\times N}$, with $D$ being the number of measured features (EEG channels) in the two views and $N$ the number of time samples, CCA estimates weights, $\{\textbf{w}_{1},\textbf{w}_{2}\}$, which maximize the correlation between the components, $\textbf{y}_1 = \textbf{X}_1^\intercal\textbf{w}_1$ and $\textbf{y}_2 = \textbf{X}_2^\intercal\textbf{w}_2$. The weights are calculated using two eigenvalue equations, with the constraint that the components \revone{belonging to each multivariate time series} are uncorrelated \citep{Hardoon2004}. CorrCA is relevant for the case where the views are homogeneous, e.g., using the same EEG channel positions, and imposes the additional constraint of shared weights $\textbf{w} = \textbf{w}_1 = \textbf{w}_2$. This assumption can potentially increase sensitivity, since it involves fewer parameters. In CorrCA the weights are thus estimated through a single eigenvalue problem;
\vspace{-10pt}
\begin{align}
\left(\textbf{R}_{11} + \textbf{R}_{22}\right)&^{-1} \left(\textbf{R}_{12} + \textbf{R}_{21}\right)\textbf{w} = \rho \textbf{w},\label{eq.correlated component analysis}
\end{align}
where, $\textbf{R}_{ij}=\frac{1}{N}\textbf{X}_i\textbf{X}_j^\intercal$, is the sample covariance matrix \citep{Dmochowski2012}. \revone{To illustrate the spatial distribution of the underlying physiological activity of the components, we use the estimated forward models ("patterns") as discussed in \citep{Parra2005,haufe2014}.}
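A minimal multi-subject sketch of this estimator can be written as follows; this is our own illustrative code under the standard generalisation of Eq.\ (\ref{eq.correlated component analysis}) (summing within- and between-subject covariances over all subject pairs), not the authors' implementation.

```python
import numpy as np

def corrca(X):
    """Correlated component analysis for K subjects.

    X : array of shape (K, D, N) -- subjects x channels x samples.
    Returns (W, rho): projection weights and correlations, sorted by
    decreasing correlation.
    """
    K, D, N = X.shape
    Rw = np.zeros((D, D))            # sum of within-subject covariances
    Rb = np.zeros((D, D))            # sum of between-subject covariances
    for i in range(K):
        Rw += X[i] @ X[i].T / N
        for j in range(K):
            if j != i:
                Rb += X[i] @ X[j].T / N
    vals, vecs = np.linalg.eig(np.linalg.solve(Rw, Rb))
    order = np.argsort(-vals.real)
    return vecs[:, order].real, vals.real[order]
```

For $K=2$ this reduces exactly to the eigenvalue problem above.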
{\bf Average luminance difference (ALD).} Video clips were converted to grey scale (0-255) by averaging over the three colour channels. We then calculated the squared difference in pixel intensity from one frame to the next and took the average across pixels. These signals were non-linearly re-sampled at 1 Hz by selecting the maximum ALD for each 1 $s$ interval \revone{to emphasise the large differences during changes in camera position} (see figure \revone{S2} in supplementary materials for a comparison between the frame-to-frame and smoothed differences). These values were then smoothed in time by convolving with a Gaussian kernel with a "variance" parameter of 2.5 $s^2$. This downsampling and smoothing was aimed at matching the temporal resolution of the ALD to that of the time-resolved ISC computation (5 $s$ sliding window with 1 $s$ intervals).
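The ALD recipe above can be sketched as follows; this is illustrative code (frame rate, kernel truncation, and function names are our own choices), not the authors' pipeline.

```python
import numpy as np

def ald_per_second(frames, fps):
    """Average luminance difference per second of video.

    frames : uint8 array (T, H, W, 3) of RGB frames; fps : frames per second.
    """
    gray = frames.astype(float).mean(axis=3)                  # 0-255 grey scale
    diff = ((gray[1:] - gray[:-1]) ** 2).mean(axis=(1, 2))    # per-frame ALD
    # Non-linear 1 Hz resampling: keep the maximum within each 1 s interval,
    # emphasising the large differences at changes of camera position.
    n_sec = len(diff) // fps
    per_sec = diff[:n_sec * fps].reshape(n_sec, fps).max(axis=1)
    # Gaussian smoothing with "variance" 2.5 s^2 (sigma ~ 1.58 s), truncated kernel.
    sigma = np.sqrt(2.5)
    t = np.arange(-4, 5)
    kernel = np.exp(-t**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(per_sec, kernel, mode="same")
```

A synthetic scene cut (a sudden jump in brightness) produces a single ALD peak in the second containing the cut.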
{\bf Statistical testing.}
In order to evaluate the statistical relevance of the correlations, we employed a simple permutation test ($P=5000$ permutations) \citep{Dmochowski2012}. \revtwo{To test the robustness of the obtained weights for the spatial projections, we calculated the average correlation of all possible pairings of the four conditions groups for a given component. Again, we employed a permutation test ($P=5000$ permutations) to evaluate statistical relevance by randomly permuting the channel order for each group and recalculating the average correlation.} When testing differences in average IVC between conditions, \revtwo{we used a block permutation test (block size $B = 25\ s$, $P=5000$ permutations) to account for temporal dependencies.}
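The block permutation test for a difference in mean IVC can be sketched as below; this is our own minimal implementation under the stated block-shuffling idea, with block length given in samples, and is not the authors' code.

```python
import numpy as np

def block_perm_test(x, y, block=25, n_perm=5000, seed=0):
    """Two-sided block permutation test for a difference in means.

    x, y : 1-D time series (e.g., IVC traces under two conditions).
    Blocks of `block` consecutive samples are shuffled between the two
    series, preserving short-range temporal dependence within blocks.
    """
    rng = np.random.default_rng(seed)
    obs = abs(np.mean(x) - np.mean(y))
    bx = [x[i:i + block] for i in range(0, len(x), block)]
    by = [y[i:i + block] for i in range(0, len(y), block)]
    blocks = bx + by
    n_x = len(bx)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(blocks))
        px = np.concatenate([blocks[i] for i in perm[:n_x]])
        py = np.concatenate([blocks[i] for i in perm[n_x:]])
        if abs(np.mean(px) - np.mean(py)) >= obs:
            count += 1
    return (count + 1) / (n_perm + 1)       # permutation p-value
```

Shuffling whole blocks rather than individual samples keeps the null distribution honest when neighbouring samples are correlated, as they are for overlapping ISC windows.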
\bibliographystyle{apa}
package com.randspy.tictactoe.console;

public class ConsoleOutput {
    public void setOutput(String output) {
        System.out.print(output);
    }
}
\section{Introduction}
\noindent Protoplanetary disks are rotating accretion disks surrounding young newly formed stars (e.g., T Tauri stars, Herbig Ae/Be stars).
They are composed of dust grains and gas, and contain all the material which will form
planetary systems orbiting main-sequence stars (e.g., \citealt{Armitage2011}).
They are active environments for the creation of simple and complex molecules, including organic matter and $\mathrm{H_2O}$ (e.g., \citealt{CaselliCeccarelli2012, Henning2013, Pontoppidan2014PPIV}).
The physical and chemical environments of protoplanetary disks determine the properties of various planets, including mass and chemical composition (e.g., \citealt{Oberg2011, Pontoppidan2014PPIV}).
Among all molecules in disks, $\mathrm{H_2O}$ is one of the most important in determining physical and chemical properties.
\\
\\
$\mathrm{H_2O}$ gas and ice likely carry most of the available oxygen; the only competitors are CO and possibly $\mathrm{CO_2}$ \citep{Pontoppidan2014, Walsh2015}.
In the hot inner regions of protoplanetary disks, $\mathrm{H_2O}$ ice evaporates from the dust-grain surfaces into the gas phase.
On the other hand, it is frozen out on the dust-grain surfaces in the cold outer parts of the disk.
The $\mathrm{H_2O}$ snowline is the surface that divides the disk into these two different regions \citep{Hayashi1981}. $\mathrm{H_2O}$ ice enhances the amount of solid material in the cold region beyond the $\mathrm{H_2O}$ snowline, and $\mathrm{H_2O}$ ice mantles on dust grains there allow the grains to stick at higher collisional velocities, promoting more efficient coagulation than collisions of refractory grains (e.g., \citealt{Wada2013}).
As a result, the formation of the cores of gaseous planets is promoted in such regions.
In the disk midplane, we can thus regard the $\mathrm{H_2O}$ snowline as the line that divides the
regions of rocky planet and gas-giant planet formation (e.g., \citealt{Hayashi1981}, \citeyear{Hayashi1985}, \citealt{Oberg2011}).
In the upper layers of the disk, the surface separating water vapor from water ice is determined by
the photodesorption of water ice by the stellar UV radiation field in competition with freeze-out of water onto dust grains (e.g., \citealt{Blevins2015}, see
also Section 3.1).
\\
\\
Icy planetesimals, comets, and/or icy pebbles coming from outside the $\mathrm{H_2O}$ snowline may bring water to rocky planets including the Earth (e.g., \citealt{Morbidelli2000, Morbidelli2012, CaselliCeccarelli2012, vanDishoeck2014, Sato2016, Matsumura2016}).
In the case of disks around solar-mass T Tauri stars, the $\mathrm{H_2O}$ snowline is calculated to exist at a few AU from the central T Tauri star (e.g., \citealt{Hayashi1981}).
However, if we change the physical conditions such as the luminosity of the central star, the mass accretion rate, and the dust-grain size distribution in the disk, the location of the $\mathrm{H_2O}$ snowline will change (e.g., \citealt{Du2014}, \citealt{Piso2015}).
Recent studies \citep{Davis2005, Garaud2007, Min2011, Oka2011, Harsono2015, Mulders2015, Piso2015} calculate the evolution of the $\mathrm{H_2O}$ snowline in optically thick disks, and show that it migrates as the mass accretion rate in the disk decreases and as the dust grains grow in size. In some cases the line may lie within the current location of Earth's orbit (1AU), meaning that the formation of water-devoid planetesimals in the terrestrial planet region becomes more difficult as the disk evolves.
\citet{Sato2016} estimated the amount of icy pebbles accreted by terrestrial embryos after the $\mathrm{H_2O}$ snowline has migrated inwards to the distance of Earth's orbit (1AU). They argued that the fractional water content of the embryos is not kept below the Earth's current water content (0.023 wt$\%$) unless the total disk size is relatively small ($<$ 100 AU) and the disk turbulence is stronger than suggested by recent work, so that the pebble flow decays at early times.
In contrast, other studies \citep{Martin2012, Martin2013} model the evolution of the $\mathrm{H_2O}$ snowline in a time-dependent disk with a dead zone and self-gravitational heating, and suggest that there is sufficient time and mass in the disk for the Earth to form from water-devoid planetesimals at the current location of Earth's orbit (1AU).
\\ \\
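As a rough, back-of-the-envelope illustration (not taken from this paper), the classic optically thin estimate of \citet{Hayashi1981}, $T \simeq 280\,(r/1\,\mathrm{AU})^{-1/2}(L_{*}/L_{\odot})^{1/4}$ K, inverted at an assumed $\mathrm{H_2O}$ sublimation temperature of $\sim$170 K, places the snowline at a few AU for a solar-luminosity star. Both the temperature profile and the sublimation temperature are simplifying assumptions.

```python
def snowline_au(lum_lsun=1.0, t_sub=170.0):
    """Optically thin snowline estimate: invert T = 280 K (r/AU)^-1/2 (L/Lsun)^1/4.

    t_sub is an assumed H2O sublimation temperature (illustrative, ~170 K).
    """
    return (280.0 / t_sub) ** 2 * lum_lsun ** 0.5

# A few AU for a solar-luminosity star; the radius scales as L^(1/2).
```

This simple scaling also shows why lower stellar luminosity, changing accretion rates, or different dust opacities move the snowline, as discussed above.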
\citet{Ros2013} showed that around the $\mathrm{H_2O}$ snowline, dust-grain growth due to the condensation from millimeter to at least decimeter sized pebbles is possible on a timescale of only 1000 years.
The resulting particles are large enough to be sensitive to concentration by streaming instabilities, pressure bumps and vortices, which can cause further growth into planetesimals, even in young disks ($<$1 Myr, e.g., \citealt{Zhang2015}).
Moreover, \citet{Banzatti2015} recently showed that the presence of the $\mathrm{H_2O}$ snowline leads to a sharp discontinuity
in the radial profile of the dust emission spectral index, due to replenishment of small grains through fragmentation because of the change in fragmentation velocities
across the $\mathrm{H_2O}$ snowline.
Furthermore, \citet{Okuzumi2016} argued that dust aggregates collisionally disrupt and pile up at the region slightly outside the snowlines due to the effects of sintering.
These mechanisms of condensation \citep{Ros2013, Zhang2015}, fragmentation \citep{Banzatti2015}, and sintering \citep{Okuzumi2016} of dust grains have been invoked to explain the multiple bright and dark ring patterns in the dust spectral index of the young disk HL Tau \citep{ALMA2015}.
\\ \\
Therefore, observationally measuring the location of the $\mathrm{H_2O}$ snowline is vital because it will provide information on the physical and chemical conditions of protoplanetary disks, such as the temperature and water vapor distribution in the disk midplane, and will give constraints on current formation theories of planetesimals and planets.
It will help clarify the origin of water on rocky planets including the Earth, since icy planetesimals, comets, and/or icy pebbles coming from outside the $\mathrm{H_2O}$ snowline may bring water to rocky planets including the Earth (e.g., \citealt{Morbidelli2000, Morbidelli2012, CaselliCeccarelli2012, vanDishoeck2014, Sato2016, Matsumura2016}).
\\
\\
Recent high spatial resolution direct imaging of protoplanetary disks at infrared wavelengths (e.g., Subaru/HiCIAO, VLT/SPHERE, Gemini South/GPI) and sub-millimeter wavelengths (e.g., the Atacama Large Millimeter/sub-millimeter Array (ALMA), the Sub-Millimeter Array (SMA)) has revealed detailed structures in disks, such as the CO snowline (e.g., \citealt{Mathews2013, Qi2013ApJ, Qi2013, Qi2015, Oberg2015}), spiral structures (e.g., \citealt{Muto2012, Benisty2015}), strong azimuthal asymmetries in the dust continuum (e.g., \citealt{Fukagawa2013, vanderMarel2013, vanderMarel2016}), gap structures (e.g., \citealt{Walsh2014b, Akiyama2015, Nomura2016, Rapson2015, Andrews2016, Schwarz2016}), and multiple axisymmetric bright and dark rings in the disk of HL Tau \citep{ALMA2015}.
$\mathrm{H_2O}$ ice in disks has been detected by conducting low dispersion spectroscopic observations including the 3 $\mu$m $\mathrm{H_2O}$ ice
absorption band \citep{Pontoppidan2005, Terada2007}, and crystalline and amorphous $\mathrm{H_2O}$ ice features at 63 $\mu$m (e.g., \citealt{McClure2012, McClure2015}).
Multi-wavelength imaging including the 3 $\mu$m $\mathrm{H_2O}$ ice absorption band \citep{Inoue2008} detected $\mathrm{H_2O}$ ice grains in the surface of the disk around the Herbig Ae/Be star HD142527 \citep{Honda2009}.
More recently, \citet{Honda2016} reported the detection of $\mathrm{H_2O}$ ice in the HD~100546 disk,
and postulated that photodesorption of water ice from dust grains in the disk surface can help explain the radial absorption strength at 3 $\mu$m.
As we described previously, the $\mathrm{H_2O}$ snowline around a solar-mass T Tauri star is thought to exist at only a few AU from the central star. Therefore, the required spatial resolution to directly locate the $\mathrm{H_2O}$ snowline is on the order of 10 mas (milliarcsecond) around nearby disks ($\sim$100-200pc), which remains challenging for current facilities.
\\
\\
In contrast, $\mathrm{H_2O}$ vapor has been detected through recent space spectroscopic observations of infrared rovibrational and pure rotational lines from protoplanetary disks around T Tauri stars and Herbig Ae stars using $Spitzer$/IRS (e.g., \citealt{CarrNajita2008, CarrNajita2011, Salyk2008, Salyk2011, Pontoppidan2010a, Najita2013}), $Herschel$/PACS (e.g., \citealt{Fedele2012, Fedele2013, Dent2013, Meeus2012, Riviere-Marichalar2012, Kamp2013, Blevins2015}), and $Herschel$/HIFI \citep{Hogerheijde2011, vanDishoeck2014}.
\citet{vanDishoeck2013, vanDishoeck2014} reviewed the results of these recent space spectroscopic observations.
The observations using $Spitzer$/IRS and $Herschel$/PACS are spatially unresolved; hence, large uncertainties
remain on the spatial distribution of $\mathrm{H_2O}$ gas in the protoplanetary disk.
Although the observations using $Herschel$/HIFI \citep{Hogerheijde2011, vanDishoeck2014} have high spectral resolution (allowing some constraints on the radial location), they mainly probe the cold water vapor in the outer disk (beyond 100 AU).
The lines they detected correspond to the ground state rotational transitions which have low upper state energies (see Section 3.2.3).
\citet{Zhang2013} estimated the position of the $\mathrm{H_2O}$ snowline in the transitional disk around TW Hya by using the intensity ratio of $\mathrm{H_2O}$ lines with various wavelengths and upper state energies.
They used archival spectra obtained by $Spitzer$/IRS, $Herschel$/PACS, and $Herschel$/HIFI.
\citet{Blevins2015} investigated the surface water vapor distribution in four disks
using data obtained by $Spitzer$/IRS and $Herschel$/PACS, and found that they have critical radii of
$3-11$~AU, beyond which the surface gas-phase water abundance decreases by at least 5 orders of magnitude.
The measured values for the critical radius are consistently smaller than the location of the
surface $\mathrm{H_2O}$ snowline, as predicted by temperature profiles derived from the observed spectral
energy distribution.
\\
\\
Studies investigating the structure of the inner disk through analyses of the velocity profiles of emission lines have been conducted using lines such as the 4.7 $\mu$m rovibrational lines
of CO gas (e.g., \citealt{Goto2006, Pontoppidan2008, Pontoppidan2011}).
Profiles of emission lines from protoplanetary disks are usually affected by Doppler shift due to Keplerian rotation, and thermal broadening.
Therefore, the velocity profiles of lines are sensitive to the radial distributions of molecular tracers in disks.
Follow-up ground-based near- and mid-infrared (L, N band) spectroscopic observations of $\mathrm{H_2O}$ emission lines for some of the known brightest targets have been conducted using VLT/VISIR, VLT/CRIRES, Keck/NIRSPEC, and TEXES (a visitor instrument on Gemini North), and the velocity profiles of the lines have been obtained (e.g., \citealt{Salyk2008, Salyk2015, Pontoppidan2010b, Fedele2011, Mandell2012}).
These observations suggested that the water vapor resides in the inner disk, but the spatial and spectroscopic resolution is not sufficient to investigate the detailed structure, such as the position of the $\mathrm{H_2O}$ snowline. In addition, the lines they observed are sensitive to the disk surface temperature and are potentially polluted by slow disk winds, and they do not probe the midplane where planet formation occurs (e.g., \citealt{Salyk2008, Pontoppidan2010b, Mandell2012, vanDishoeck2014}).
This is because these lines have large Einstein $A$ coefficients and very high upper state energies ($>$ 3000K), and exist in the near- to mid-infrared wavelength region where dust emission becomes optically thick in the surface regions (see also Section 3.2).
\\
\\
In this work,
we seek candidate $\mathrm{H_2O}$ lines for locating the $\mathrm{H_2O}$ snowline through future high-dispersion spectroscopic observations.
The outline of the paper is as follows.
First, we calculate the chemical composition of a protoplanetary disk using a self-consistently derived physical model of a T Tauri disk to investigate the abundance and distribution of $\mathrm{H_2O}$ gas and ice, as opposed to assuming the position of the $\mathrm{H_2O}$ snowline.
Second, we use the model results to calculate the velocity profiles of $\mathrm{H_2O}$ emission lines ranging in wavelength from near-infrared to sub-millimeter, and investigate the properties of
$\mathrm{H_2O}$ lines which trace the emission from the hot water vapor within the $\mathrm{H_2O}$ snowline and are promising for locating the $\mathrm{H_2O}$ snowline.
These calculations are explained in Section 2. The results and discussion are described in Section 3 and conclusions listed in Section 4.
\section{Methods}
\subsection{The physical model of the protoplanetary disk}
\noindent
The physical structure of a protoplanetary disk model is calculated using the methods outlined in
\citet{NomuraMillar2005} including X-ray heating as described in \citet{Nomura2007}.
In this subsection, we provide a brief overview of the physical model we adopt.
A more detailed description of the background theory and computation of this physical model
is described in the original papers \citep{NomuraMillar2005, Nomura2007}.
\citet{Walsh2010, Walsh2012, Walsh2014a, Walsh2015}, \citet{Heinzeller2011}, and \citet{Furuya2013}
used the same physical model to study various chemical and physical effects,
and they explain the treatment of the physical structure in detail.
\\
\\
We adopt the physical model of a steady, axisymmetric Keplerian disk surrounding a
T Tauri star with mass $M_{\mathrm{*}}$=0.5$M_{\bigodot}$, radius $R_{\mathrm{*}}$=2.0$R_{\bigodot}$,
and effective temperature $T_{\mathrm{*}}$=4000K \citep{KenyonHartmann1995}.
The $\alpha$-disk model \citep{ShakuraSunyaev1973} is adopted to obtain the radial surface density,
assuming a viscous parameter $\alpha$=$10^{-2}$ and an accretion rate
$\dot{M}$=$10^{-8}M_{\bigodot}$ yr$^{-1}$.
The steady gas temperature and density distributions of the disk are computed self-consistently
by solving the equations of hydrostatic equilibrium in the vertical direction and the local
thermal balance between gas heating and cooling.
The heating sources of the gas are grain photoelectric heating by UV photons and heating
due to hydrogen ionization by X-rays.
The cooling mechanisms are gas-grain collisions and line transitions
(for details, see \citealt{NomuraMillar2005} and \citealt{Nomura2007}).
The dust temperature distribution is obtained by assuming radiative equilibrium between absorption
and reemission of radiation by dust grains.
The dust heating sources adopted are the radiative flux produced by viscous dissipation
($\alpha$-disk model) at the midplane of the disk, and the irradiation from the central star.
The radial range for which the calculations are conducted is $r\sim$0.04~AU to 305~AU.
\\
\\
The dust properties are important because they affect the physical and chemical structure of protoplanetary disks in several ways (for details, see, e.g., \citealt{NomuraMillar2005}).
Since dust grains are the dominant opacity source, they determine the dust temperature profile and the UV radiation field throughout the disk.
Photodesorption, photodissociation, and photoionization processes are affected by the UV radiation field.
The dust properties affect the gas temperature distribution, because photoelectric heating by UV photons is the dominant source of gas heating at the disk surface.
The total surface area of dust grains has an influence on the chemical abundances of molecules through determining the gas and ice balance.
In Appendix A, we describe a brief overview of the X-ray and UV radiation fields and the dust-grain models we adopt.
\\
\\
In Figure \ref{Figure1_original}, we display the gas number density in $\mathrm{cm}^{-3}$ (top left), the gas temperature in K (top right, $T_{g}$), the dust-grain temperature in K (bottom left, $T_{d}$),
and the wavelength-integrated UV flux in erg $\mathrm{cm}^{-2}$ s$^{-1}$ (bottom right),
as a function of disk radius in AU and height (scaled by the radius, $z/r$).
The density decreases as a function of disk radius and disk height with the densest region of the disk found in the disk midplane
close to the star ($\sim10^{14}$ $\mathrm{cm}^{-3}$) and the most diffuse, in the disk surface at large radii ($\sim10^{5}$ $\mathrm{cm}^{-3}$), so that the density range in
our adopted disk model covers almost 10 orders of magnitude. The gas temperature increases as a function of disk height and decreases as a function of disk radius with the hottest region found in the disk surface ($> 10^{3}$K), and the coldest region found in the outer disk ($\sim$10K).
In addition, due to the influence of viscous heating at the disk midplane, the temperature increases within several AU from the central T Tauri star.
In the disk surface, the dust-grain temperature is lower than the gas temperature by more than a factor of ten.
At low densities, gas-grain collisions become ineffective, so the gas cools mainly via radiative line transitions.
In contrast, the gas and dust-grain temperatures are similar in the midplane region with high densities.
Moreover, the disk surface closest to the parent star is subjected to the largest flux of both UV and
X-ray photons, although the disk midplane is effectively shielded from UV and X-ray photons over the radial extent of our disk model.
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.6]{Figure1a_paper1_rev-submitted.eps}
\includegraphics[scale=0.6]{Figure1b_paper1_rev-submitted.eps}
\includegraphics[scale=0.6]{Figure1c_paper1_rev-submitted.eps}
\includegraphics[scale=0.6]{Figure1d_paper1_rev-submitted.eps}
\end{center}
\caption{The total gas number density in $\mathrm{cm}^{-3}$ (top left), the gas temperature in K (top right), the dust temperature in K (bottom left), and the UV flux in erg $\mathrm{cm}^{-2}$ s$^{-1}$ (bottom right) of a disk around a T Tauri star as a function of the disk radius in AU and height (scaled by the radius, $z/r$) up to maximum radii of $r=$100AU.}\label{Figure1_original}
\end{figure*}
\subsection{Chemical structure of the protoplanetary disk}
\noindent
In order to investigate the chemical structure of the protoplanetary disk, we use a large chemical network which includes gas-phase reactions and gas-grain
interactions (freezeout of gas molecules on dust grains, and thermal and non-thermal desorption from dust grains). The non-thermal desorption mechanisms we adopt include cosmic-ray-induced desorption and photodesorption by UV photons. \citet{Walsh2010, Walsh2012, Walsh2014a, Walsh2015}, \citet{Heinzeller2011}, \citet{Furuya2013}, \citet{Furuya2014}, \citet{Ishimoto2013}, and \citet{Du2014} used similar chemical networks to calculate the chemical structure of protoplanetary disks, and they, and the reviews of \citet{Henning2013} and \citet{Dutrey2014}, explain the background theory and procedures in detail. In this subsection, we outline the key points of the chemical network we use.
\\
\\
The addition of grain-surface chemistry (e.g., \citealt{Hasegawa1992}) is expected to aid the synthesis of complex organic molecules in the outer disk where significant freezeout has occurred.
Some previous works on chemical modeling of disks (e.g., \citealt{Willacy2007, Semenov2011, Walsh2012, Walsh2014a, Walsh2015, Furuya2013, Furuya2014, Drozdovskaya2014}) have contained grain-surface reactions.
However, the chemical network we adopt in this work does not contain such grain-surface reactions, and is equivalent to one of the models in \citet{Walsh2010}, which includes the same freezeout and desorption processes as considered here.
This is because we are primarily interested in the hot inner disk region where molecular line emission originates from the thermally desorbed gas reservoir.
\subsubsection{Gas-phase reactions}
\noindent
Our gas-phase chemistry is extracted from the UMIST Database for Astrochemistry (UDfA),
henceforth referred to as ``{\sc Rate}06" \citep{Woodall2007}.
\citet{Walsh2010, Walsh2012}, and \citet{Heinzeller2011} used {\sc Rate}06 to calculate the
chemical structure of a protoplanetary disk.
We include almost the entire {\sc Rate}06 gas-phase network removing only those species
(and thus reactions) which contain fluorine, F, and phosphorus, P, in order to reduce computation time.
We have confirmed that the loss of F- and P-containing species has a minimal impact on the
remaining chemistry \citep{Walsh2010, Heinzeller2011}.
Our gas-phase network thus contains 375 atomic, molecular, and ionic species composed of the elements
H, He, C, N, O, Na, Mg, Si, S, Cl, and Fe.
Table~1 in the online material from \citet{Woodall2007} shows the list of these 375 species.
The initial elemental fractional abundances (relative to total hydrogen nuclei density) we
use are the set of oxygen-rich low-metallicity abundances from \citet{Graedel1982},
listed in Table 8 of \citet{Woodall2007}.
The chemical evolution is run for $10^{6}$ years.
By this time, the chemistry in the inner regions of the disk midplane inside the $\mathrm{H_2O}$ snowline is close to
steady state, at which time the chemistry has forgotten its origins, justifying our use of initial
elemental abundances, instead of ambient cloud abundances.
\\ \\
Our adopted reaction network consists of 4336 reactions including 3957 two-body reactions, 214 photoreactions, 154 X-ray/cosmic-ray-induced photoreactions, and 11 reactions of direct
X-ray/cosmic-ray ionization. The adopted equations that give the reaction rates of two-body reactions, X-ray/cosmic-ray-induced photoreactions, and reactions of X-ray/cosmic-ray ionization are described in Section 2.1 of \citet{Woodall2007}.
\\ \\
Here we note that we use the previous version of UDfA, {\sc Rate}06 \citep{Woodall2007},
instead of the latest version, ``{\sc Rate}12" \citep{McElroy2013}.
{\sc Rate}12 contains some updates, such as reactions involving more complex molecules,
and \citet{McElroy2013} note that the major difference between {\sc Rate}12 and
{\sc Rate}06 is the inclusion of anion reactions.
Although this has an influence on the abundances of carbon-chain molecules \citep{Walsh2009, McElroy2013},
it has little effect on the chemistry of main simple molecules, such as $\mathrm{H_2O}$.
\\
\\
In our calculations of the chemistry, we have approximated our photoreaction rates at each point in the disk, $k^{\mathrm{ph}}(r, z)$, by scaling the rates of {\sc Rate}06 which assume the interstellar UV field, $k_{0}$, using the wavelength integrated UV flux calculated at each point (see also Figure \ref{Figure1_original}),
\begin{equation}
G_{\mathrm{FUV}}(r, z)=\int_{912\mathrm{\mathring{A}}(13.6\mathrm{eV})}^{2068\mathrm{\mathring{A}}(6\mathrm{eV})} G_{\mathrm{FUV}}(\lambda, r, z) \mathrm{d}\lambda.
\end{equation}
Using this value of $G_{\mathrm{FUV}}(r, z)$, the rate for a particular photoreaction at each $(r, z)$ is given by
\begin{equation}
k^{\mathrm{ph}}(r, z) = \frac{G_{\mathrm{FUV}}(r, z)}{G_{0}}k_{0} \ \mathrm{s}^{-1},
\end{equation}
where $G_{0}$ is the interstellar UV flux (2.67$\times 10^{-3}$ erg $\mathrm{cm}^{-2}$ s$^{-1}$, \citealt{vanDishoeck2006}).
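For concreteness, the scaling of Eq.~(2) can be sketched in a few lines of Python. This is a minimal illustration, not part of our chemical code; the unattenuated rate and the local FUV flux used in the example are hypothetical values, not values from {\sc Rate}06 or from our disk model.

```python
# Sketch of the photorate scaling of Eqs. (1)-(2): a photorate k0, tabulated
# for the interstellar UV field G0, is rescaled by the local
# wavelength-integrated FUV flux G_FUV(r, z).

G0 = 2.67e-3  # interstellar UV flux, erg cm^-2 s^-1 (van Dishoeck et al. 2006)

def photorate(k0, G_fuv):
    """Local photoreaction rate k^ph(r, z) in s^-1 (Eq. 2)."""
    return (G_fuv / G0) * k0

# Example with a hypothetical unattenuated rate and a hypothetical local flux
k0_example = 5.9e-10   # s^-1, illustrative only
G_fuv = 1.0e-1         # erg cm^-2 s^-1, illustrative point in the disk
k_ph = photorate(k0_example, G_fuv)
```

The rate is linear in the local flux, so a point irradiated by twice the FUV flux has twice the photorate.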
\subsubsection{Gas-grain Interactions}
\noindent
In our calculations, we consider the freezeout of gas-phase molecules on dust grains, and the thermal and non-thermal desorption of molecules from dust grains.
For the thermal desorption of a molecule to occur, the dust-grain temperature must exceed the freezeout (sublimation) temperature of each molecule.
Non-thermal desorption requires an input of energy from an external source and is thus independent of dust-grain temperature.
The non-thermal desorption mechanisms we investigate are cosmic-ray-induced desorption \citep{Leger1985, Hasegawa1993} and photodesorption by UV photons \citep{Westley1995, Willacy2000, Oberg2007}, as adopted in some previous studies (e.g., \citealt{Walsh2010, Walsh2012}).
In this subsection, we explain the mechanisms of freezeout and thermal desorption we use in detail.
In Appendix C, we introduce the detailed mechanisms of the non-thermal desorption processes we adopt (cosmic-ray-induced desorption and photodesorption by UV photons).
\\
\\
The freezeout (accretion) rate, $k_{i}^{a}$ [s$^{-1}$], of species $i$ onto the dust-grain surface is treated using the standard equation (e.g., \citealt{Hasegawa1992, Woitke2009a, Walsh2010}),
\begin{equation}
k_{i}^{a} = \alpha \sigma_{d} \langle v_{i}^{th}\rangle n_{d} \ \mathrm{s}^{-1},
\end{equation}
where $\alpha$ is the sticking coefficient, here assumed to be 0.4 for all species, a value within the range reported for high gas temperatures ($T_{g} \sim 100-200$K) by \citet{Veeraghattam2014}.
Previous theoretical and experimental studies suggested that the sticking coefficient tends to be lower as the gas and dust-grain temperature become higher (e.g., \citealt{Masuda1998, Veeraghattam2014}).
$\sigma_{d}=\pi a^{2}$ is the geometrical cross section of a dust grain with radius, $a$,
$\langle v_{i}^{th}\rangle$ is the thermal velocity of species $i$ with mass $m_{i}$ at gas temperature $T_{g}$, $k_{B}$ is the Boltzmann constant, and $n_{d}$ is the number density of dust grains.
We adopt $\langle v_{i}^{th}\rangle=(k_{B}T_{g}/m_{i})^{1/2}$, following \citet{Walsh2010}.
In this work, for our gas-grain interactions, we assume a constant grain radius $a=0.1$ $\mu$m and a fixed dust-grain fractional abundance ($x_{d}=n_{d}/n_{\mathrm{H}}$\footnote[1]{$n_{\mathrm{H}}$ is the total gas atomic hydrogen number density.}) of 2.2$\times 10^{-12}$, following previous studies (e.g., \citealt{Walsh2012}).
In terms of dust-grain surface area per unit volume, the adopted constant grain radius $a$ is consistent with the dust-grain size distributions in the disk physical model adopted in this work (see also Appendix A). The adopted value of $x_{d}$ corresponds to a gas-to-dust mass ratio of 100.
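As an illustrative sketch (not part of our chemical code), Eq.~(3) with the adopted parameter choices can be evaluated as follows; the gas density and temperature in the example are hypothetical midplane-like values.

```python
# Sketch of the freezeout rate of Eq. (3), with the parameter choices adopted
# in this work: sticking coefficient alpha = 0.4, grain radius a = 0.1 micron,
# dust-grain fractional abundance x_d = 2.2e-12, and thermal velocity
# (k_B T_g / m_i)^(1/2).

import math

K_B = 1.380649e-16   # Boltzmann constant, erg K^-1
M_H = 1.6726e-24     # hydrogen mass, g

def freezeout_rate(T_gas, n_H, mass_amu, alpha=0.4, a_cm=1.0e-5, x_d=2.2e-12):
    """Accretion rate k_i^a in s^-1 (Eq. 3)."""
    sigma_d = math.pi * a_cm**2                        # geometrical cross section
    v_th = math.sqrt(K_B * T_gas / (mass_amu * M_H))   # thermal velocity, cm s^-1
    n_d = x_d * n_H                                    # dust-grain number density
    return alpha * sigma_d * v_th * n_d

# Example: H2O (18 amu) at T_g = 150 K in gas with n_H = 1e12 cm^-3 (hypothetical)
k_a = freezeout_rate(150.0, 1.0e12, 18.0)
```

The rate scales linearly with the gas (and hence grain) density and as the square root of the gas temperature.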
\\
\\
The thermal desorption rate, $k_{i}^{d}$ [s$^{-1}$], of species $i$ from the dust-grain surface is given by (e.g., \citealt{Hasegawa1992, Woitke2009a, Walsh2010}),
\begin{equation}
k_{i}^{d}=\nu_{0}(i) \exp\left(\frac{-E_{d}^{\mathrm{K}}(i)}{T_{d}} \right) \ \mathrm{s}^{-1},
\end{equation}
where $E_{d}^{\mathrm{K}}(i)$ is the binding energy of species $i$ to the dust-grain surface in units of K.
The values of $E_{d}^{\mathrm{K}}(i)$ for several important molecules are listed in Table 2 of Appendix B. Most of these values are adopted in \citet{Walsh2010} or \citet{Walsh2012}.
$T_{d}$ is the dust-grain temperature in units of K.
The characteristic vibrational frequency of each adsorbed species $i$ in its surface potential well, $\nu_{0}(i)$, is represented by a
harmonic oscillator relation \citep{Hasegawa1992},
\begin{equation}
\nu_{0}(i) = \sqrt{\frac{2n_{surf} E_{d}^{\mathrm{erg}}(i)}{\pi^{2} m_{i}}} \ \mathrm{s}^{-1},
\end{equation}
where $E_{d}^{\mathrm{erg}}(i)$ is in units of erg here, $m_{i}$ is the mass of each adsorbed species $i$, and $n_{surf}=1.5\times 10^{15}$ $\mathrm{cm}^{-2}$ is the surface density of adsorption sites on each dust grain.
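Eqs.~(4)--(5) can be sketched numerically as below. This is an illustration only; the H$_2$O binding energy used in the example (4800~K) is an assumed round value and should be checked against Table 2 of Appendix B.

```python
# Sketch of Eqs. (4)-(5): the harmonic-oscillator vibrational frequency nu_0
# and the thermal desorption rate k_i^d.

import math

K_B = 1.380649e-16   # Boltzmann constant, erg K^-1
M_H = 1.6726e-24     # hydrogen mass, g
N_SURF = 1.5e15      # surface density of adsorption sites, cm^-2

def nu0(E_d_K, mass_amu):
    """Characteristic vibrational frequency (Eq. 5), in s^-1."""
    E_erg = E_d_K * K_B  # binding energy converted from K to erg
    return math.sqrt(2.0 * N_SURF * E_erg / (math.pi**2 * mass_amu * M_H))

def thermal_desorption_rate(E_d_K, mass_amu, T_dust):
    """Thermal desorption rate k_i^d (Eq. 4), in s^-1."""
    return nu0(E_d_K, mass_amu) * math.exp(-E_d_K / T_dust)

# With an assumed binding energy of 4800 K, the rate switches on steeply
# around the ~150-160 K sublimation temperature quoted in the text.
slow = thermal_desorption_rate(4800.0, 18.0, 100.0)
fast = thermal_desorption_rate(4800.0, 18.0, 160.0)
```

The exponential factor makes the rate rise by many orders of magnitude over a few tens of Kelvin, which is why the snowline is a sharp transition.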
\\
\\
Considering these processes of freezeout, thermal desorption, cosmic-ray-induced desorption, and photodesorption, the total formation rate of ice species $i$ is
\begin{equation}
\dot n_{i, \mathrm{ice}}=n_{i}k_{i}^{a}-n_{i, \mathrm{ice}}^{\mathrm{desorb}}(k_{i}^{d}+k_{i}^{\mathrm{crd}}+k_{i}^{\mathrm{pd}}),
\end{equation}
where $k_{i}^{\mathrm{crd}}$ is the cosmic-ray-induced thermal desorption rate for each species $i$, $k_{i}^{\mathrm{pd}}$ is the photodesorption rate for a specific species $i$,
$n_{i, \mathrm{ice}}$ denotes the number density of ice species $i$, and $n_{i, \mathrm{ice}}^{\mathrm{desorb}}$ is the fraction of $n_{i, \mathrm{ice}}$
located in the uppermost active surface layers of the ice mantles. The value of $n_{i, \mathrm{ice}}^{\mathrm{desorb}}$ is given by \citep{Aikawa1996, Woitke2009a}
\begin{equation}
n_{i, \mathrm{ice}}^{\mathrm{desorb}}= \begin{cases}
n_{i, \mathrm{ice}} & (n_{\mathrm{ice}}< n_{act}), \\
n_{act}\frac{n_{i, \mathrm{ice}}}{n_{\mathrm{ice}}} & (n_{\mathrm{ice}} \geq n_{act}),
\end{cases}
\end{equation}
where $n_{\mathrm{ice}}$ is the total number density of all ice species,
and $n_{act} = 4\pi a^{2}n_{d}n_{surf}N_{Lay}$ is the number of active surface sites in the ice mantle per unit volume. $N_{Lay}$ is the number of
surface layers considered as ``active", and we adopt the value from \citet{Aikawa1996}, $N_{Lay}=2$.
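A minimal sketch of the two-layer prescription of Eq.~(7) follows; the gas density and ice densities used in the example are hypothetical values chosen only to exercise both branches of the expression.

```python
# Sketch of Eq. (7) and the definition of n_act: only ice in the uppermost
# N_Lay = 2 surface layers is available for desorption.

import math

def n_active_sites(n_d, a_cm=1.0e-5, n_surf=1.5e15, N_lay=2):
    """Active surface sites per unit volume, n_act = 4 pi a^2 n_d n_surf N_Lay."""
    return 4.0 * math.pi * a_cm**2 * n_d * n_surf * N_lay

def n_desorb(n_i_ice, n_ice_total, n_act):
    """Desorbable number density of ice species i (Eq. 7)."""
    if n_ice_total < n_act:               # mantle thinner than the active layers
        return n_i_ice
    return n_act * n_i_ice / n_ice_total  # only the surface fraction is active

n_act = n_active_sites(n_d=2.2e-12 * 1.0e12)  # grains for a hypothetical n_H = 1e12 cm^-3
# Thick mantle: only a small fraction of the H2O ice can desorb
frac = n_desorb(1.0e7, 2.0e7, n_act)
```

For thin mantles the full ice reservoir is desorbable, while for thick mantles only the surface layers contribute, in proportion to the species' share of the total ice.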
\subsection{Profiles of $\mathrm{H_2O}$ emission lines from protoplanetary disks}
\noindent Using the $\mathrm{H_2O}$ gas abundance distribution obtained from our chemical calculations described in Section 2.2, we
calculate the profiles of $\mathrm{H_2O}$ emission lines ranging from near-infrared to sub-millimeter wavelengths,
and investigate which lines are the best candidates for probing emission from the inner thermally desorbed water reservoir, i.e., within the $\mathrm{H_2O}$ snow line.
We also study how the line flux and profile shape depends upon the location of the $\mathrm{H_2O}$ snowline.
In the following paragraphs, we outline the calculation methods used to determine the $\mathrm{H_2O}$ emission line profiles (based on \citealt{Rybicki1986}, \citealt{Hogerheijde2000}, and \citealt{NomuraMillar2005}).
\\ \\
Here we define the transition frequency of each line as $\nu_{ul}$, where the subscript $ul$ means the transition from the upper level ($u$) to the lower level ($l$).
The intensity of each line profile at the frequency $\nu$, $I_{ul}(\nu)$, is obtained by solving the radiative transfer equation in the line-of-sight direction of the disk,
\begin{equation}
\frac{\mathrm{d}I_{ul}(\nu)}{\mathrm{d}s}=-\chi_{ul}(\nu)(I_{ul}(\nu)-S_{ul}(\nu)).
\end{equation}
The source function, $S_{ul}(\nu)$, and the total extinction coefficient, $\chi_{ul}(\nu)$, are given by
\begin{equation}
S_{ul}(\nu)=\frac{1}{\chi_{ul}(\nu)}n_{u}A_{ul}\Phi_{ul}(\nu)\frac{h{\nu}_{ul}}{4\pi},
\end{equation}
and
\begin{equation}
\begin{split}
\chi_{ul}(\nu)=&\rho_{d}\kappa_{ul} \\
&+(n_{l}B_{lu}-n_{u}B_{ul})\Phi_{ul}(\nu)\frac{h{\nu}_{ul}}{4\pi},
\end{split}
\end{equation}
where the symbols $A_{ul}$ and $B_{ul}$ are the Einstein $A$ and $B$ coefficients for
the transition $u \rightarrow l$, the symbol $B_{lu}$ is the Einstein $B$ coefficient for
the transition $l \rightarrow u$, $h$ is the Planck constant, and $n_{u}$ and $n_{l}$ are the number densities of
the upper and lower levels, respectively. The energy difference between the levels $u$ and $l$ corresponds to $h{\nu}_{ul}$.
$\rho_{d}$ is the mass density of dust grains, which we calculate from the total gas mass density $\rho_{g}$ and the
gas-to-dust mass ratio ($\rho_{g}/\rho_{d}=100$). $\kappa_{ul}$ is the dust absorption coefficient at the frequency $\nu_{ul}$, as described in Section 2.1.
\\ \\
The symbol $\Phi_{ul}(\nu)$ is the line profile function at the frequency $\nu$,
and we consider the Doppler shift due to Keplerian rotation,
and thermal broadening, in calculating the emission line profiles.
This function is given by,
\begin{equation}
\Phi_{ul}(\nu)=\frac{1}{\Delta\nu_D\sqrt{\pi}}\exp\biggl[-\frac{(\nu+\nu_K-\nu_{ul})^2}{\Delta\nu_D^2}\biggr],
\end{equation}
where $\Delta\nu_D=(\nu_{ul}/c)(\sqrt{2kT_g/m})$ is the Doppler width,
$c$ is the speed of light, $T_{g}$ is the gas temperature,
$k$ is the Boltzmann constant, $m$ is the mass of a water molecule, and
$\nu_{K}$ is the Doppler-shift due to projected Keplerian velocity for
the line-of-sight direction and is given by,
\begin{equation}
\nu_{K}=\frac{\nu_{ul}}{c}\sqrt{\frac{GM_{\mathrm{*}}}{r}}\sin\phi\sin i,
\end{equation}
where $G$ is the gravitational constant, $M_{\mathrm{*}}$ is the mass of the central star,
$r$ is the distance from the central star, $i$ is the inclination angle of the disk,
and $\phi$ is the azimuthal angle between the semimajor axis of the projected disk and the line
connecting the center of the disk with the point in the disk along the line-of-sight.
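As an illustrative sketch of Eqs.~(11)--(12) (not our radiative transfer code), the profile function and Keplerian shift can be written as below. The stellar mass follows the adopted model ($M_{*}=0.5M_{\bigodot}$); the line frequency, radius, azimuthal angle, and inclination in the example are hypothetical.

```python
# Sketch of Eqs. (11)-(12): a Gaussian line profile with thermal Doppler
# width, shifted by the projected Keplerian velocity.

import math

C = 2.99792458e10          # speed of light, cm s^-1
K_B = 1.380649e-16         # Boltzmann constant, erg K^-1
GRAV = 6.674e-8            # gravitational constant, cm^3 g^-1 s^-2
M_SUN = 1.989e33           # solar mass, g
M_H2O = 18.0 * 1.6726e-24  # water molecule mass, g
AU = 1.496e13              # cm

def doppler_width(nu_ul, T_gas, m=M_H2O):
    """Thermal Doppler width, (nu_ul / c) sqrt(2 k T_g / m)."""
    return (nu_ul / C) * math.sqrt(2.0 * K_B * T_gas / m)

def kepler_shift(nu_ul, r_au, phi, incl, M_star=0.5 * M_SUN):
    """Projected Keplerian Doppler shift nu_K (Eq. 12)."""
    v_kep = math.sqrt(GRAV * M_star / (r_au * AU))
    return (nu_ul / C) * v_kep * math.sin(phi) * math.sin(incl)

def profile(nu, nu_ul, T_gas, nu_K):
    """Normalized line profile Phi_ul(nu) (Eq. 11)."""
    dnu = doppler_width(nu_ul, T_gas)
    return math.exp(-((nu + nu_K - nu_ul) / dnu) ** 2) / (dnu * math.sqrt(math.pi))

# Example: a hypothetical 500 GHz line at r = 1.6 AU, T_g = 150 K, i = 30 deg
nu_line = 5.0e11
nu_K = kepler_shift(nu_line, 1.6, math.pi / 2.0, math.radians(30.0))
```

The profile peaks where $\nu + \nu_{K} = \nu_{ul}$, so gas on the approaching and receding sides of the disk contributes to opposite wings of the double-peaked line.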
\\
\\
The observable profiles of flux density are obtained by integrating Eq.~(8) in the
line-of-sight direction and summing up the integrals in the
plane of the projected disk, $(x,y)$, as,
\begin{equation}
\begin{split}
F_{ul}(\nu) =&\frac{1}{4\pi d^2}\int d\Omega \int\int \mathrm{d}x \mathrm{d}y \\
&\times \int_{-s_{\mathrm{\infty}}}^{s_{\mathrm{\infty}}} j_{ul}(s,x,y,\nu) \mathrm{d}s,
\end{split}
\end{equation}
where $d$ is the distance of the observed disk from the Earth.
$j_{ul}(s,x,y,\nu)$ is the emissivity at $(s,x,y)$ and frequency $\nu$, including the effect of absorption in the overlying disk layers, and is given by the following equation,
\begin{equation}
\begin{split}
j_{ul}(s,x,y,\nu)=&n_{u}(s,x,y)A_{ul}\frac{h\nu_{ul}}{4\pi}\Phi_{ul}(s,x,y,\nu)\\
&\times \mathrm{exp}(-\tau_{ul}(s,x,y,\nu)),
\end{split}
\end{equation}
and $\tau_{ul}(s,x,y,\nu)$ is the optical depth from $s$ to the disk surface $s_{\mathrm{\infty}}$ at the frequency $\nu$ given by,
\begin{equation}
\tau_{ul}(s,x,y,\nu)=\int_{s}^{s_{\mathrm{\infty}}} \chi_{ul}(s',x,y,\nu) \mathrm{d}s'.
\end{equation}
Hence, the observable total flux of each line, $F_{ul}$, is given by the following equation,
\begin{equation}
F_{ul}=\int F_{ul}(\nu) \mathrm{d}\nu.
\end{equation}
Here, we use a distance $d=140$ pc for calculating the line profiles, since this is the distance to the Taurus molecular cloud, one of the nearest star-forming regions with observable protoplanetary disks.
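A heavily simplified, discretized sketch of Eqs.~(14)--(16) follows: the emission from each cell along a ray is attenuated by the optical depth from that cell to the disk surface and then summed. The per-cell emissivity and extinction values are hypothetical, purely for illustration.

```python
# Discretized sketch of Eqs. (14)-(16): sum j * ds * exp(-tau) over cells,
# with tau measured from each cell to the surface facing the observer.

import math

def ray_flux(emissivity, chi, ds):
    """Cells are ordered from the far side of the disk (index 0) to the
    surface facing the observer (last index). tau for each cell is
    approximated using the extinction of the overlying cells only."""
    total = 0.0
    for k in range(len(emissivity)):
        tau = sum(chi[k + 1:]) * ds          # Eq. (16), discretized
        total += emissivity[k] * ds * math.exp(-tau)
    return total

# A ray through three cells: an optically thick overlying layer (chi = 5)
# strongly suppresses the contribution of the deepest cell.
flux = ray_flux([1.0, 1.0, 1.0], [0.0, 5.0, 0.0], ds=1.0)
```

This attenuation is why dust opacity hides line emission from deep layers at short wavelengths, as discussed in Section 3.2.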
\\ \\
The code for ray tracing which we have built for calculating emission line profiles from the protoplanetary disk
is a modification of the original 1D code called RATRAN\footnote[2]{\url{http://home.strw.leidenuniv.nl/~michiel/ratran/}} \citep{Hogerheijde2000}.
We adopt the data for the ortho- and para-$\mathrm{H_2O}$ energy levels from \citet{Tennyson2001}, the radiative rates (Einstein $A$ coefficients $A_{ul}$) from the BT2 water line list \citep{Barber2006}, and the collisional rates, $\langle \sigma v \rangle$, for the excitation of $\mathrm{H_2O}$ by H$_{\mathrm{2}}$ and by electrons from \citet{Faure2008}.
We use the collisional rates to determine the critical densities of transitions of interest.
These data are part of the Leiden Atomic and Molecular Database (LAMDA\footnote[3]{\url{http://home.strw.leidenuniv.nl/~moldata/}}) \citep{Schoier2005}.
The level populations of the water molecule ($n_{u}$ and $n_{l}$) are calculated under the assumption of local thermal equilibrium (LTE).
In Section 3.2.5, we discuss the validity of the assumption of LTE in our work.
We do not include dust-grain emission nor emission from disk winds and jet components in calculating the emission line profiles.
However, we do include the effects of the absorption of line emission by dust grains (as described above).
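The LTE assumption for the level populations can be sketched as below: under LTE, the fractional populations follow a Boltzmann distribution at the local gas temperature. The level energies (expressed in K) and statistical weights in the example are hypothetical placeholders, not values from the BT2 line list.

```python
# Sketch of the LTE level populations assumed for n_u and n_l:
# fractional population of level u is g_u exp(-E_u / T) / Q(T).

import math

def lte_populations(energies_K, weights, T):
    """Fractional LTE populations over the included levels."""
    boltz = [g * math.exp(-E / T) for E, g in zip(energies_K, weights)]
    Q = sum(boltz)  # partition sum over the included levels
    return [b / Q for b in boltz]

# Three hypothetical levels: population shifts toward excited levels
# as the gas temperature rises.
cold = lte_populations([0.0, 100.0, 500.0], [1, 3, 5], T=30.0)
warm = lte_populations([0.0, 100.0, 500.0], [1, 3, 5], T=300.0)
```

This temperature dependence is what allows lines with high upper state energies to selectively trace the hot gas inside the $\mathrm{H_2O}$ snowline.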
\\ \\
The nuclear spins of the two hydrogen atoms in each water molecule can be either parallel or anti-parallel,
and this results in a grouping of the $\mathrm{H_2O}$ energy levels into ortho ($K_{a}+K_{c}=$odd) and para ($K_{a}+K_{c}=$even) ladders.
The ortho to para ratio (OPR) of water in the gas gives information on the conditions, formation, and thermal history of water in specific regions, such as comets and protoplanetary disks (e.g., \citealt{Mumma2011, vanDishoeck2013, vanDishoeck2014}).
An alternative way to describe the OPR is through the ``spin temperature", defined as the temperature that characterizes the observed OPR if it is in thermal equilibrium.
The OPR becomes zero in the limit of low temperature and 3 in the limit of high temperature ($\gtrsim$60K). The original definition of OPR of water vapor in thermal equilibrium is described in \citet{Mumma1987}. In this paper, we set the OPR$=$3 throughout the disk to calculate values of $n_{u}$ and $n_{l}$ from the $\mathrm{H_2O}$ gas abundance distribution. The lines we calculate in order to locate the position of the $\mathrm{H_2O}$ snowline mainly trace the hot water vapor for which the temperature is higher than the water sublimation temperature ($\sim 150-160$K).
The disk physical structure of our adopted model is steady, and thermal and chemical equilibrium is mostly achieved throughout the disk. In addition, previous observational data on warm water detected at mid-infrared wavelengths in the inner regions of protoplanetary disks are consistent with OPR$=3$ (e.g., \citealt{Pontoppidan2010a}).
\\ \\
Here we also mention that \citet{Hama2016} reported from their experiments that water desorbed from the icy dust-grain surface at 10K shows the OPR$=$3, which invalidates the assumed relation between OPR and the formation temperature of water. They argue that the role of gas-phase processes which convert the OPR to a lower value in low temperature regions is important, although the detailed mechanism is not yet understood.
\section{Results and Discussion}
\subsection{The distributions of $\mathrm{H_2O}$ gas and ice}
\noindent
Figure \ref{Figure2_original} shows the fractional abundances (relative to total gas hydrogen nuclei density, $n_{\mathrm{H}}$) of $\mathrm{H_2O}$ gas and $\mathrm{H_2O}$ ice in a disk around a T Tauri star as a function of disk radius $r$ and height scaled by the radius ($z/r$).
The chemistry is computed over the radial range $r\sim$0.5$-$100AU in order to reduce computation time.
Here we mention that at small radii, due to the high densities found in the midplane, there is a significant
column density of material shielding this region from the intense
UV and X-ray fields of the star. Therefore, molecules are expected to survive in the midplane at radii within $\sim$0.1AU, unless there are cavities in the dust and gas.
Thus the actual total amount of molecular gas in the inner disk may be larger than that of our chemical calculation results.
\\ \\
According to this figure, the fractional abundance of $\mathrm{H_2O}$ gas is high ($\sim10^{-4}$) in the midplane region inside the $\mathrm{H_2O}$ snowline,
and in contrast, it is low ($\lesssim10^{-12}$) in the midplane outside the $\mathrm{H_2O}$ snowline.
The fractional abundance of $\mathrm{H_2O}$ ice has the opposite distribution. It is low ($\lesssim 10^{-9}$) in the midplane region inside the $\mathrm{H_2O}$ snowline, and in contrast, it is high ($\sim10^{-5}$) in the midplane outside the $\mathrm{H_2O}$ snowline.
The $\mathrm{H_2O}$ snowline in the T Tauri disk that we adopt in this work exists at a radius of $\sim$ 1.6 AU in the midplane ($T_{g} \sim 150-160$K), consistent with the binding energy we adopt.
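The correspondence between the adopted sublimation temperature and the snowline radius can be sketched by inverting a power-law midplane temperature profile; the index $q$ and normalization below are illustrative assumptions chosen to reproduce $T\sim155$K at $r\sim1.6$AU, not parameters of our model.

```python
# Illustrative estimate of the midplane H2O snowline radius, assuming a
# power-law midplane temperature T(r) = T_1AU * (r / 1 AU)^(-q).  The
# values of T_1AU and q are hypothetical, chosen so that T ~ 155 K at
# r ~ 1.6 AU, roughly matching the model described in the text.
T_1AU = 196.0   # midplane temperature at 1 AU [K] (assumed)
q = 0.5         # power-law index (assumed)
T_SUBL = 155.0  # adopted H2O sublimation temperature [K]

def t_mid(r_au):
    """Midplane temperature [K] at radius r_au [AU]."""
    return T_1AU * r_au ** (-q)

def r_snowline():
    """Invert T(r) = T_subl  ->  r = (T_1AU / T_subl)^(1/q)."""
    return (T_1AU / T_SUBL) ** (1.0 / q)

print(f"snowline at r ~ {r_snowline():.2f} AU, T ~ {t_mid(r_snowline()):.0f} K")
```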
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.6]{Figure2a_paper1_rev-submitted.eps}
\includegraphics[scale=0.6]{Figure2b_paper1_rev-submitted.eps}
\end{center}
\caption{\noindent The fractional abundance (relative to total hydrogen nuclei density) distributions of $\mathrm{H_2O}$ gas (top) and $\mathrm{H_2O}$ ice (bottom) of a disk around a T Tauri star as a function of disk radius and height (scaled by the radius, $z/r$) up to maximum radii of $r=$100AU.}\label{Figure2_original}
\vspace{0.2cm}
\end{figure}
\\ \\
Inside the $\mathrm{H_2O}$ snowline, the temperature exceeds the sublimation temperature under the pressure conditions of the midplane ($T_{g} \sim 150-160$K) and most of
the $\mathrm{H_2O}$ is released into the gas-phase through thermal desorption.
In addition, this region is almost completely shielded from the intense UV and X-ray fields of the star and interstellar medium (\citealt{NomuraMillar2005, Nomura2007}, see also Figure 2 of \citealt{Walsh2012}), has a high temperature ($> 150$K) and a large total gas particle number density ($> 10^{11}$ $\mathrm{cm}^{-3}$), and thermal equilibrium between the gas and dust is achieved ($T_{g} \sim T_{d}$).
Under these conditions, the gas-phase chemistry is close to thermochemical equilibrium and most of the oxygen atoms will be locked up into $\mathrm{H_2O}$ (and CO) molecules (e.g., \citealt{Glassgold2009, Woitke2009a, Woitke2009b, Walsh2010, Walsh2012, Walsh2015, vanDishoeck2013, vanDishoeck2014, Du2014, Antonellini2015}).
Therefore, the $\mathrm{H_2O}$ gas abundance of this region is approximately given by the elemental abundance of oxygen ($1.76\times10^{-4}$, \citealt{Woodall2007}) minus
the fraction bound in CO.
\\ \\
In addition, the fractional abundance of $\mathrm{H_2O}$ gas is relatively high in the hot surface layer of the outer disk.
First, at $z/r$ of $0.1-0.3$ between $r\sim$ 0.5$-$100 AU, the fractional abundance of $\mathrm{H_2O}$ gas is $\sim10^{-8}-10^{-7}$.
This region can be considered as the sublimation (photodesorption) front of $\mathrm{H_2O}$ molecules, driven by the relatively strong stellar UV radiation. This so-called photodesorbed layer \citep{Dominik2005} allows $\mathrm{H_2O}$ to survive in the gas phase where it would otherwise be frozen out on the dust-grain surfaces.
The abundance and extent of gas-phase $\mathrm{H_2O}$ in this layer are mediated by re-adsorption onto the dust grains, destruction by stellar UV photons, and chemical reactions with other species.
\\ \\
Second, at $z/r$ of 0.15-0.7 between $r\sim$0.5$-$100 AU, the $\mathrm{H_2O}$ abundance is relatively high ($\sim10^{-7}$) compared with the cold midplane region of the outer disk ($\lesssim10^{-12}-10^{-10}$).
Since the gas temperature is significantly higher than the dust temperature (typically $T_{g} \sim 200-2000$K) and the gas density is low compared to the disk midplane, the water chemistry is controlled by chemical kinetics as opposed to thermodynamic (or chemical) equilibrium.
Due to the very high gas temperature ($>$200K), the energy barriers for the dominant neutral-neutral reactions of O$+$H$_{2}$$\rightarrow$OH$+$H and OH$+$H$_{2}$$\rightarrow$$\mathrm{H_2O}$$+$H are readily surpassed and gaseous $\mathrm{H_2O}$ is produced rapidly.
This route will drive all the available gas-phase oxygen into $\mathrm{H_2O}$, unless strong UV or a high atomic hydrogen abundance is able to convert some water back to OH and O
(e.g., \citealt{Glassgold2009, Woitke2009b, Meijerink2012, vanDishoeck2013, vanDishoeck2014, Walsh2015}).
In the uppermost surface layers, $\mathrm{H_2O}$ is even more rapidly destroyed by photodissociation and reactions with atomic hydrogen than it is produced, so there is little water at the very top of the disk.
The OH gas abundance in our calculations and in others (e.g., \citealt{Walsh2012}) is high in this hot surface region.
This is consistent with the above discussion: neutral-neutral reactions involving OH and $\mathrm{H_2O}$ are dominant, and strong UV or a high atomic hydrogen abundance converts some water back to OH and O \citep{Walsh2012, Walsh2015}.
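The temperature sensitivity of the neutral-neutral formation route discussed above can be sketched with modified-Arrhenius rate coefficients; the $(\alpha, \beta, \gamma)$ values below are representative literature values quoted as assumptions, not the exact rates in our chemical network.

```python
import math

# Modified-Arrhenius sketch of the two neutral-neutral reactions that
# drive gas-phase H2O formation in the hot surface layer.  The rate
# parameters (alpha, beta, gamma) are representative literature values
# quoted here as assumptions:
#   k(T) = alpha * (T / 300 K)^beta * exp(-gamma / T)   [cm^3 s^-1]
REACTIONS = {
    "O + H2 -> OH + H":   (3.14e-13, 2.70, 3150.0),
    "OH + H2 -> H2O + H": (2.05e-12, 1.52, 1736.0),
}

def rate(alpha, beta, gamma, T):
    """Two-body rate coefficient [cm^3 s^-1] at gas temperature T [K]."""
    return alpha * (T / 300.0) ** beta * math.exp(-gamma / T)

for name, (a, b, g) in REACTIONS.items():
    k_cold, k_hot = rate(a, b, g, 100.0), rate(a, b, g, 1000.0)
    print(f"{name}:  k(100 K) = {k_cold:.2e},  k(1000 K) = {k_hot:.2e}")
```

The many-orders-of-magnitude increase between 100K and 1000K is why this route operates only in the hot ($>$200K) layer.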
\\ \\
Figure \ref{Figure3_original} shows the radial column density profile of $\mathrm{H_2O}$ gas ({\it red solid line}) and ice ({\it blue dashed line}).
The column density of $\mathrm{H_2O}$ gas and ice in the disk midplane flips across the $\mathrm{H_2O}$ snowline ($\sim$ 1.6AU).
The column density of $\mathrm{H_2O}$ gas is high ($\sim10^{21}$ $\mathrm{cm}^{-2}$) inside the $\mathrm{H_2O}$ snowline,
and, in contrast, is low outside the $\mathrm{H_2O}$ snowline ($\sim 10^{14}-10^{15}$ $\mathrm{cm}^{-2}$).
The column density profile of $\mathrm{H_2O}$ ice is roughly opposite. The column density of $\mathrm{H_2O}$ ice in the outer disk is $\sim10^{20}-10^{21}$ $\mathrm{cm}^{-2}$.
Previous chemical modeling calculations (e.g., \citealt{Walsh2012, Walsh2015, Du2014}) gave a column density of $\mathrm{H_2O}$ gas inside the $\mathrm{H_2O}$ snowline of around $10^{21}-10^{22}$ $\mathrm{cm}^{-2}$.
This value is slightly higher than in our calculations, possibly due to their inclusion of grain-surface reactions.
However, since gas-phase $\mathrm{H_2O}$ in the disk midplane is likely obscured by dust grains at near- to mid-infrared wavelengths \citep{Walsh2015}, the ``visible" $\mathrm{H_2O}$ gas column density at these wavelengths is much smaller than the actual amount. For example, in \citet{Walsh2015}, the visible value is on the order of a few times $10^{19}$ $\mathrm{cm}^{-2}$ within the $\mathrm{H_2O}$ snowline.
Previous infrared low-dispersion spectroscopic observations with $Spitzer$/IRS of classical
T Tauri stars derived $\mathrm{H_2O}$ gas column densities ranging from 4$\times10^{17}$ to 7.9$\times10^{20}$ $\mathrm{cm}^{-2}$ \citep{CarrNajita2011, Salyk2011}.
Despite the model T Tauri disk being generic and not representative of any particular source, there is significant overlap between the calculated
``visible" column densities and these observed values, although it should be acknowledged that the observed values span three orders of magnitude.
\\ \\
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Figure3_paper1_rev-submitted.eps}
\end{center}
\vspace{0.3cm}
\caption{\noindent The radial profile of the vertically integrated column density [in $\mathrm{cm}^{-2}$] of $\mathrm{H_2O}$ gas ({\it red solid line}) and ice ({\it blue dashed line}).
}\label{Figure3_original}
\vspace{0.5cm}
\end{figure}
Previous analytical models and numerical simulations derived the position of the $\mathrm{H_2O}$ snowline of an optically thick disk for given parameters, such as the mass ($M_{*}$) and temperature ($T_{*}$) of the central star, a viscous parameter $\alpha$, an accretion rate $\dot{M}$, a gas-to-dust mass ratio $g/d$, and the average dust-grain size $a$ and opacity (e.g., \citealt{Davis2005, Garaud2007, Min2011, Oka2011, Du2014, Harsono2015, Mulders2015, Piso2015}), and suggested that the position of the $\mathrm{H_2O}$ snowline changes as these parameters change.
In the case of T Tauri disks with $M_{\mathrm{*}} \sim 0.5-1M_{\odot}$, $\dot{M} \sim 10^{-8} M_{\odot}$ yr$^{-1}$, and $a \sim 0.1 \mu$m, the position of the $\mathrm{H_2O}$ snowline is $\sim 1.5-2$ AU.
In our calculations, we use similar parameters for $M_{\mathrm{*}}$,
$\dot{M}$, and $a$, and the $\mathrm{H_2O}$ snowline appears at a radius
of around 1.6AU in the midplane ($T_g\sim 150-160$K), which is
within the range found in previous works.
\\ \\
\citet{Heinzeller2011} investigated the effects of physical mass transport phenomena in the radial direction by viscous accretion and in the vertical direction by diffusive turbulent mixing and disk winds.
They showed that the gas-phase $\mathrm{H_2O}$ abundance is enhanced in the warm surface layer due to the effects of vertical mixing.
In contrast, they mentioned that the gas-phase $\mathrm{H_2O}$ abundance in the midplane inside the $\mathrm{H_2O}$ snowline is not affected by the accretion flow, since the chemical reactions are considered to be fast enough in this region to compensate for the effects of the accretion flow.
\subsection{$\mathrm{H_2O}$ emission lines from protoplanetary disks}
\noindent We perform ray-tracing calculations and investigate the profiles of $\mathrm{H_2O}$ emission lines for a protoplanetary disk in Keplerian rotation, using the methods described in Section 2.3 and in the next paragraph. We include rovibrational and pure rotational ortho- and para-$\mathrm{H_2O}$ lines at near-, mid-, and far-infrared and sub-millimeter wavelengths,
and find that $\mathrm{H_2O}$ lines which have small Einstein $A$ coefficients ($A_{ul}\sim 10^{-3}-10^{-6} \mathrm{s}^{-1}$) and relatively high upper energy levels ($E_{up} \sim$1000K) are most promising for tracing emission from the innermost hot water reservoir within the $\mathrm{H_2O}$ snowline.
\\ \\
Here we describe how we find 50 candidate lines which are selected from the LAMDA database of $\mathrm{H_2O}$ transition lines.
First, we selected about 20 $\mathrm{H_2O}$ lines from the LAMDA database spanning various wavelengths (from near-infrared to sub-millimeter), Einstein $A$ coefficients ($A_{ul}\sim 10^{-1}-10^{-7} \mathrm{s}^{-1}$), and upper state energies ($E_{up} <$ 3000K).
In making this initial selection we ignored lines with very small Einstein $A$ coefficients and very high upper state energies, since the emission fluxes of these lines are likely too weak to detect.
When we calculated the profiles of these lines,
we noticed that
$\mathrm{H_2O}$ lines with small Einstein $A$ coefficients ($A_{ul}\sim 10^{-3}-10^{-6} \mathrm{s}^{-1}$) and relatively large upper state energies ($E_{up}\sim 700-2100$K) are the best candidates to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline.
Ten of the originally selected 20 lines satisfy these conditions.
Then we searched for all other ortho-$\mathrm{H_2O}$ transitions which satisfy these conditions, and found an additional 40 candidate ortho-$\mathrm{H_2O}$ lines.
\begin{table*}
\caption{{Calculated ortho-$\mathrm{H_2O}$ line parameters and total line fluxes}}\label{tab:T1}
\begin{center}
\begin{tabular}{lllllll}
\hline
\vspace{-0.2cm}
$J_{K_{a}K_{c}}$& \ \ $\lambda$&\ Freq.&\ $A_{ul}$&$E_{up}$&$n_{\mathrm{cr}}$& total flux$^{1}$\\
&&&&&& \\
&[$\mu$m]&[GHz]&[s$^{-1}$]&[K]&[$\mathrm{cm}^{-3}$]&[W $\mathrm{m}^{-2}$]\\
\hline
6$_{43}$-5$_{50}$ & 682.926 & 439.286 & 2.816$\times 10^{-5}$ &1088.7 & $1.0\times10^{6}$ &$3.12\times10^{-22}$ \\
8$_{18}$-7$_{07}$ & 63.371 & 4733.995 & 1.772 & 1070.6 & $1.5\times10^{10}$ & $5.66\times10^{-18}$ \\
1$_{10}$-1$_{01}$ & 538.664 & 556.933 & 3.497$\times 10^{-3}$& 61.0 & $2.9\times10^{7}$ & $1.13\times10^{-20}$ \\
\hline
\multicolumn{3}{l}{\hbox to 0pt{\parbox{120mm}{
\footnotesize
\footnotemark[1] In calculating the total fluxes of these $\mathrm{H_2O}$ lines, we use a distance $d=140$pc and a disk inclination angle $i=$30 deg.}}}
\end{tabular}
\end{center}
\end{table*}
\\ \\
In the remaining part of this Section, we describe the detailed properties of three characteristic pure rotational ortho-$\mathrm{H_2O}$ lines ($\lambda$=682.93, 63.37, 538.66$\mu$m). These three lines have different values of $A_{ul}$ and $E_{up}$.
We find that the $\mathrm{H_2O}$ 682.93$\mu$m line, which falls in ALMA band 8 (see Section 3.2.6),
is a candidate for tracing emission from the innermost hot water reservoir within the $\mathrm{H_2O}$ snowline.
The 63.37 and 538.66$\mu$m lines are examples of lines which are less suited to trace emission from water vapor within the $\mathrm{H_2O}$ snowline.
We consider these two particular lines to test the validity of our model calculations, since the fluxes of these two lines from protoplanetary disks have been observed with $Herschel$ (see Sections 3.2.2 and 3.2.3).
The list of suitable lines from mid-infrared (Q band) to sub-millimeter, and their properties, especially the variation in line fluxes with wavelength, are described in detail in our companion paper (paper II, \citealt{Notsu2016b}).
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.42]{Figure4a_682.926um_paper1_rev-submitted.eps}
\includegraphics[scale=0.42]{Figure4b_63.3714um_paper1_rev-submitted.eps}
\includegraphics[scale=0.42]{Figure4c_538.664um_paper1_rev-submitted.eps}
\includegraphics[scale=0.42]{Figure4d_63.3714um_paper1_rev-submitted.eps}
\includegraphics[scale=0.42]{Figure4e_538.664um_paper1_rev-submitted.eps}
\end{center}
\caption{\noindent Top: the velocity profiles of three characteristic pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m ($J_{K_{a}K_{c}}$=6$_{43}$-5$_{50}$, top left), 63.37$\mu$m ($J_{K_{a}K_{c}}$=8$_{18}$-7$_{07}$, top middle), and 538.66$\mu$m ($J_{K_{a}K_{c}}$=1$_{10}$-1$_{01}$, top right), which have various Einstein $A$ coefficients $A_{ul}$ and upper state energies $E_{up}$.
Bottom: the velocity profiles of the ortho-$\mathrm{H_2O}$ 63.37$\mu$m line (bottom left) and 538.66$\mu$m line (bottom right), enlarged to show the inner components.
{\it Red solid lines} are the emission line profiles from inside 2AU ($\sim$inside the $\mathrm{H_2O}$ snowline), {\it black dashed lines} are those from 2-30AU ($\sim$outside the $\mathrm{H_2O}$ snowline), and {\it blue dotted lines} are those from the total area inside 30AU.
In calculating these profiles, we assume that the distance to the object $d$ is 140pc ($\sim$ the distance of Taurus molecular cloud), and the inclination angle of the disk $i$ is 30 deg.}\label{Figure4_original}
\end{figure*}
\subsubsection{The case of a suitable $\mathrm{H_2O}$ emission line to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline}
\noindent
The top panels in Figure \ref{Figure4_original} show the emission profiles of three pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m ($J_{K_{a}K_{c}}$=6$_{43}$-5$_{50}$, top left), 63.37$\mu$m ($J_{K_{a}K_{c}}$=8$_{18}$-7$_{07}$, top middle), and 538.66$\mu$m ($J_{K_{a}K_{c}}$=1$_{10}$-1$_{01}$, top right),
which have various Einstein $A$ coefficients ($A_{ul}$) and upper state energies ($E_{up}$).
The detailed parameters, such as transitions ($J_{K_{a}K_{c}}$), wavelength, frequency, $A_{ul}$, $E_{up}$, critical density $n_{\mathrm{cr}}$, and total line fluxes of these three $\mathrm{H_2O}$ lines are listed in Table \ref{tab:T1}.
In calculating these profiles, we assume that the distance $d$ to the object is 140pc ($\sim$ the distance of the Taurus molecular cloud), and the inclination angle $i$ of the disk is 30 deg.
The total fluxes of these three lines ($\lambda$=682.93, 63.37, 538.66$\mu$m) are $3.12\times 10^{-22}$, $5.66\times 10^{-18}$, $1.13\times 10^{-20}$ W $\mathrm{m}^{-2}$, respectively.
The bottom panels in Figure \ref{Figure4_original} show the velocity profiles of the $\mathrm{H_2O}$ 63.37$\mu$m line (bottom left) and the 538.66$\mu$m line (bottom right), enlarged to show the inner components.
\\ \\
Since the $\mathrm{H_2O}$ lines at $\lambda$=682.93 and 63.37$\mu$m have large upper state energies ($E_{up}$=1088.7K and 1070.6K), these lines trace the hot water vapor ($T_{g} \gtrsim$ a few hundred K).
On the basis of the results of our chemical calculations, the abundance of $\mathrm{H_2O}$ gas is high in the optically thick hot inner region within the $\mathrm{H_2O}$ snowline near the equatorial plane ($T_{g}>$ 150K) and in the hot optically thin surface layer of the outer disk.
\\ \\
In the top left panel of Figure \ref{Figure4_original}, we show the $\mathrm{H_2O}$ line emission at 682.93$\mu$m. The contribution from the optically thin surface layer of the outer disk ({\it black dashed line}, 2-30AU, ``out" component) is very small compared with that from the optically thick region near the midplane of the inner disk ({\it red solid line}, 0-2AU, ``in" component).
This is because this $\mathrm{H_2O}$ 682.93$\mu$m line has a small $A_{ul}$ (=$2.816\times10^{-5}$ s$^{-1}$).
On the basis of Eqs. (13)-(16) in Section 2.3, the observable flux density is calculated by summing up the emissivity at each point ($j_{ul}(s,x,y,\nu)$) along the line-of-sight direction. In the optically thin ($\tau_{ul}\ll 1$) region (e.g., the disk surface layer), the flux density is roughly characterized by integrating the values of $n_{u}(s,x,y)A_{ul}$ at each point. On the other hand, in the optically thick ($\tau_{ul}\geq 1$) region (e.g., the midplane of the inner disk), the flux density is independent of $n_{u}(s,x,y)$ and $A_{ul}$ at each point, and approaches the value of the Planck function at $T_{g}$ around the region where $\tau_{ul}\sim1$. Therefore, the emission profile of the $\mathrm{H_2O}$ 682.93$\mu$m line, which has a small $A_{ul}$ and a relatively high $E_{up}$, mainly traces the hot $\mathrm{H_2O}$ gas inside the $\mathrm{H_2O}$ snowline, and shows the characteristic double-peaked profile due to Keplerian rotation.
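The two limiting regimes described above follow from the emergent intensity $I_{\nu} = B_{\nu}(T)\,(1-e^{-\tau})$; the sketch below, with illustrative temperature and optical depths, shows $I_{\nu}/B_{\nu}\rightarrow\tau$ when $\tau\ll1$ and $I_{\nu}/B_{\nu}\rightarrow1$ when $\tau\gg1$.

```python
import math

# Limiting behaviour of the emergent intensity I_nu = B_nu(T)(1 - e^-tau),
# illustrating the optically thin and thick regimes discussed in the text.
# The frequency, temperature, and optical depths are illustrative only.
H = 6.626e-34   # Planck constant [J s]
KB = 1.381e-23  # Boltzmann constant [J K^-1]
C = 2.998e8     # speed of light [m s^-1]

def planck(nu, T):
    """Planck function B_nu(T) [W m^-2 Hz^-1 sr^-1]."""
    return (2.0 * H * nu**3 / C**2) / math.expm1(H * nu / (KB * T))

def intensity(nu, T, tau):
    """Emergent intensity for a slab of optical depth tau (no background)."""
    return planck(nu, T) * (1.0 - math.exp(-tau))

nu = 439.286e9  # frequency of the 682.93 um line [Hz]
T = 150.0       # gas temperature near the snowline [K]
for tau in (1e-3, 1.0, 10.0):
    print(f"tau = {tau:6.3f}:  I/B = {intensity(nu, T, tau) / planck(nu, T):.4f}")
```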
In this profile, the position of the two peaks and the rapid drop in flux density between the peaks gives information on the distribution of hot $\mathrm{H_2O}$ gas
within the $\mathrm{H_2O}$ snowline.
This profile potentially contains information which can be used to determine the position of the $\mathrm{H_2O}$ snowline.
The spread in the wings of the emission profile (high velocity regions) represents the inner edge of the $\mathrm{H_2O}$ gas distribution in the disk.
This is because emission from each radial region in the disk is Doppler-shifted due to the Keplerian rotation. Because the area near the outer emitting region is larger than that of the inner region ($\propto r^{2}$), the contribution to the emission from the region near the outer edge is larger if the emissivity at each radial point is similar.
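The origin of the double-peaked shape, and of the relation between the line wings and the inner edge, can be reproduced with a toy model that histograms the projected Keplerian velocity over annuli of uniform emissivity; the stellar mass, inclination, and radial range below are illustrative assumptions, not our model parameters.

```python
import math

# Toy Keplerian line profile: emission from annuli between r_in and r_out,
# Doppler-shifted by the projected orbital velocity.  Stellar mass,
# inclination, and radial range are illustrative assumptions.
GM = 6.674e-11 * 0.5 * 1.989e30  # G * M_star for a 0.5 M_sun star [SI]
AU = 1.496e11                    # astronomical unit [m]
INC = math.radians(30.0)         # disk inclination angle

def profile(r_in_au, r_out_au, n_r=400, n_phi=720, n_v=81, v_max=4.0e4):
    """Area-weighted histogram of line-of-sight velocities [-v_max, v_max]."""
    hist = [0.0] * n_v
    for i in range(n_r):
        r = (r_in_au + (i + 0.5) * (r_out_au - r_in_au) / n_r) * AU
        v_kep = math.sqrt(GM / r)
        for j in range(n_phi):
            phi = 2.0 * math.pi * (j + 0.5) / n_phi
            v_los = v_kep * math.sin(INC) * math.cos(phi)
            k = int((v_los + v_max) / (2.0 * v_max) * n_v)
            if 0 <= k < n_v:
                hist[k] += r  # annulus area weight ~ r dr dphi
    return hist

p = profile(0.1, 2.0)  # "in" component: roughly inside the H2O snowline
# double peak: the line centre is weaker than the peaks, which sit near
# the projected Keplerian velocity of the outer emitting radius
print("centre/peak flux ratio:", p[len(p) // 2] / max(p))
```

The highest-velocity wings come from the innermost annuli, while the peaks are set by the outer emitting radius, as described in the text.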
\\ \\
Figure \ref{Figure5_original} shows the line-of-sight emissivity distributions of these three pure rotational ortho-$\mathrm{H_2O}$ lines.
Figure \ref{Figure6_original} shows the total optical depth (gas emission and dust) distributions for the same transitions.
We assume that the inclination angle, $i$, of the disk is 0 deg in making these figures, and thus the line-of-sight direction is from $z=+\infty$ to $-\infty$ at each disk radius.
According to the top panels of Figures \ref{Figure5_original} and \ref{Figure6_original}, the values of the emissivity at $r<$1.6AU (= the position of the $\mathrm{H_2O}$ snowline) and $z/r \sim 0.1$ are stronger than those of the other regions, including the optically thin hot surface layer of the outer disk and the photodesorbed layer.
Although we cannot detect the emission from $z \sim 0$ because of the high optical depth of the inner disk midplane due to the absorption by dust grains and excited $\mathrm{H_2O}$ molecules, we can get information about the distribution of hot $\mathrm{H_2O}$ gas within the $\mathrm{H_2O}$ snowline.
This is because the $\mathrm{H_2O}$ gas fractional abundance is close to constant within $r<$1.6AU (= the position of the $\mathrm{H_2O}$ snowline) and $z/r \sim$ 0-0.1 (see also Section 3.1 and Figure \ref{Figure2_original}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.65]{Figure5a_682.926um_paper1_rev-submitted.eps}
\includegraphics[scale=0.65]{Figure5b_63.3714um_paper1_rev-submitted.eps}
\includegraphics[scale=0.65]{Figure5c_538.664um_paper1_rev-submitted.eps}
\end{center}
\caption{\noindent The line-of-sight emissivity distributions of the three characteristic pure rotational ortho-$\mathrm{H_2O}$ lines with $\lambda$=682.93$\mu$m (top), 63.37$\mu$m (middle), and 538.66$\mu$m (bottom).
The units are W $\mathrm{m}^{-2}$ $\mathrm{Hz}^{-1}$ ${\mathrm{sr}}^{-1}$.
We assume that the inclination angle of the disk $i$ is 0 deg in making these figures, and thus the line-of-sight direction is from $z=+\infty$ to $-\infty$ at each disk radius.}\label{Figure5_original}\end{figure}
\subsubsection{The case of a $\mathrm{H_2O}$ emission line that traces the hot surface layer}
\noindent In the top middle panel of Figure \ref{Figure4_original}, where we show the line profile for the $\mathrm{H_2O}$ 63.37$\mu$m line, the contribution from the optically thin surface layer of the outer disk ({\it black dashed line}, 2-30AU, ``out" component) is large compared with that of the optically thick region near the midplane of the inner disk ({\it red solid line}, 0-2AU, ``in" component), and the line profile is a much narrower double-peaked profile.
This is because this $\mathrm{H_2O}$ 63.37$\mu$m line has a large $A_{ul}$ (=1.772 s$^{-1}$), although $E_{up}$ (=1070.6K) is similar to that of the $\mathrm{H_2O}$ 682.93$\mu$m line (=1088.7K), and thus the flux density from the hot surface layer of the outer disk becomes strong.
Here we note that since the peak velocities of the ``in" and ``out" components are different, water lines with large $A_{ul}$ at infrared wavelengths, such as the $\mathrm{H_2O}$ 63.37$\mu$m line, can in principle trace the hot $\mathrm{H_2O}$ gas within the $\mathrm{H_2O}$ snowline.
However, no current or planned instrument has sufficient sensitivity and spectral resolution to distinguish the peaks of the ``in" component from those of the ``out" component in these lines.
For example, SPICA/SAFARI is a planned far-infrared spectrograph, but its spectral resolution ($R\sim$3000) is not high enough to resolve the peaks of these line profiles.
Moreover, the difference in peak flux density between the two components is very large ($\gtrsim$ several tens) and the wings of both components are blended (see also the bottom left panel of Figure \ref{Figure4_original}).
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.65]{Figure6a_682.926um_paper1_rev-submitted.eps}
\includegraphics[scale=0.65]{Figure6b_63.3714um_paper1_rev-submitted.eps}
\includegraphics[scale=0.65]{Figure6c_538.664um_paper1_rev-submitted.eps}
\end{center}
\caption{\noindent The line-of-sight optical depth $\tau_{ul}(s,x,y,\nu)$ distributions of the three characteristic pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m (top), 63.37$\mu$m (middle), and 538.66$\mu$m (bottom).
We assume that the inclination angle of the disk $i$ is 0 deg in making these figures, and thus the line-of-sight direction is from $z=+\infty$ to $-\infty$ at each disk radius.}\label{Figure6_original}\end{figure}
\\ \\
According to the middle panels of Figures \ref{Figure5_original} and \ref{Figure6_original}, the values of the emissivity at each $(r,z)$ point in the optically thin hot surface layer of the outer disk and the photodesorbed layer
are as strong as that of the optically thick region inside the $\mathrm{H_2O}$ snowline.
By the same arguments as presented for the 682.93$\mu$m line, the large $A_{ul}$ of this line causes the emission from the outer disk to dominate.
In addition, the outer disk midplane opacity of this line is larger than that of the $\mathrm{H_2O}$ 682.93$\mu$m line, because the dust opacity becomes large at shorter wavelengths (e.g., \citealt{NomuraMillar2005}).
\\ \\
We mention that previous space far-infrared low dispersion spectroscopic observations with $Herschel$/PACS ($R\sim$1500) detected this line from some T Tauri disks and Herbig Ae disks (e.g., \citealt{Fedele2012, Fedele2013, Dent2013, Meeus2012, Riviere-Marichalar2012}).
Although the profiles of these lines are unresolved, comparison with models indicates that the observed emission originates in the hot surface layer (e.g., \citealt{Fedele2012, Riviere-Marichalar2012}).
In addition, the total integrated line fluxes of classical T Tauri objects in the Taurus molecular cloud are observed to be $\sim 6\times 10^{-18} - 3\times 10^{-16}$ W $\mathrm{m}^{-2}$ (e.g., \citealt{Riviere-Marichalar2012}), a spread of a factor of $\sim$50. \citet{Riviere-Marichalar2012} suggested that the objects with higher line fluxes have extended emission from outflows, in contrast to those with lower values, which show no extended emission (e.g., AA Tau, DL Tau, and RY Tau). The latter lower values are of the same order as the value we calculate here assuming a T Tauri disk model with no outflow or envelope.
\subsubsection{The case of a $\mathrm{H_2O}$ emission line that traces the cold water}
\noindent In the top right panel of Figure \ref{Figure4_original}, where we show the line profile for the $\mathrm{H_2O}$ 538.66$\mu$m line, the contribution from the outer disk ({\it black dashed line}, 2-30AU, ``out" component) is large compared with that of the optically thick region near the midplane of the inner disk ({\it red solid line}, 0-2AU, ``in" component), and the profile is a much narrower double-peaked profile (closer to a single-peaked profile), although the $A_{ul}$ is not so high (=3.497$\times10^{-3}$s$^{-1}$). This is because the $\mathrm{H_2O}$ 538.66$\mu$m line is the ground-state rotational transition and has a low $E_{up}$ (=61.0K) compared with the other lines.
The flux of this line comes mainly from the outer cold water reservoir in the photodesorbed layer (see also Section 3.1).
We propose that this line is not optimal to detect emission from the innermost water reservoir within the $\mathrm{H_2O}$ snowline
for the same reasons explained in Section 3.2.2 for the 63.37$\mu$m line (see also the bottom right panel of Figure \ref{Figure4_original}).
\\ \\
According to the bottom panels of Figure \ref{Figure5_original} and \ref{Figure6_original}, the value of the emissivity at each $(r,z)$ point in the photodesorbed layer is comparable to that of the optically thick region inside the $\mathrm{H_2O}$ snowline.
The larger surface area of the outer disk, however, means that most disk-integrated emission arises from this region.
In addition, the outer disk midplane opacity of this line is larger than that of the $\mathrm{H_2O}$ 682.93$\mu$m line, although the wavelength and thus the dust opacity is similar.
This is because the abundance of cold $\mathrm{H_2O}$ is relatively high, and because this line has low $E_{up}$.
\\ \\
We mention that previous space high dispersion spectroscopic observations with $Herschel$/HIFI detected the profiles of this line from disks around one Herbig Ae star (HD100546) and TW Hya (e.g., \citealt{Hogerheijde2011, vanDishoeck2014}). The number of detections is small since the line flux is low compared with the sensitivity of that instrument \citep{Antonellini2015}.
The detected line profiles and other line modeling work (e.g., \citealt{Meijerink2008, Woitke2009b, Antonellini2015, Du2015}) suggest that the emission arises in the cold outer disk, consistent with the results of our model calculations.
In addition, the total integrated line flux of TW Hya is observed to be $(1.7\pm1.1)\times 10^{-19}$ W $\mathrm{m}^{-2}$ \citep{Hogerheijde2011, Du2015}.
Considering the difference in distance between TW Hya ($\sim$51pc, e.g., \citealt{Zhang2013, Du2015})
and our assumed value of 140pc, the observed flux is within a factor of $\approx 2$ of our calculated value (see also Table 1).
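The factor-of-two agreement quoted above follows from the inverse-square distance scaling of the flux; as a quick arithmetic check:

```python
# Consistency check of the inverse-square distance scaling: scale the
# observed TW Hya 538.66 um flux from d ~ 51 pc to the d = 140 pc assumed
# in our calculations and compare with the calculated flux (Table 1).
F_OBS = 1.7e-19     # observed flux at ~51 pc [W m^-2]
F_MODEL = 1.13e-20  # calculated flux at 140 pc [W m^-2]

f_obs_at_140pc = F_OBS * (51.0 / 140.0) ** 2  # flux scales as d^-2
print(f"observed flux scaled to 140 pc: {f_obs_at_140pc:.2e} W m^-2")
print(f"ratio to calculated flux: {f_obs_at_140pc / F_MODEL:.1f}")
```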
\\ \\
We note that previous observations suggested an OPR of 0.77 for the emitting region of TW Hya \citep{Hogerheijde2011}, derived using the observed para-$\mathrm{H_2O}$ ground-state 1$_{11}$-$0_{00}$ 269.47$\mu$m line ($A_{ul}$=1.86$\times 10^{-2}$ s$^{-1}$ and $E_{up}$=53.4K) and the observed ortho-$\mathrm{H_2O}$ ground-state 538.66$\mu$m line.
Since we set OPR$=$3 (the value in the high-temperature limit) throughout the disk (see also Section 2.3), we likely overestimate the line flux of the ortho-$\mathrm{H_2O}$ 538.66$\mu$m line.
In addition, since the flux of this line is controlled by the outer cold $\mathrm{H_2O}$ gas which is desorbed from the cold dust-grain surfaces, it is necessary to include grain-surface reactions (e.g., \citealt{Hasegawa1992}) to calculate the $\mathrm{H_2O}$ gas and ice abundance to more accurately model this region.
\begin{figure}[htbp]
\begin{center}
\includegraphics[scale=0.53]{Figure7a_cum2_paper1_rev-submitted.eps}
\includegraphics[scale=0.53]{Figure7b_cum2_paper1_rev-submitted.eps}
\end{center}
\vspace{0.3cm}
\caption{\noindent The radial distributions of the normalized cumulative flux for three pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m ({\it red solid line}), 63.37$\mu$m ({\it black dotted line}), and 538.66$\mu$m ({\it blue dashed line}). We normalized the cumulative flux of each line using the values at $r=30$AU (top panel) and at $r=300$AU (bottom panel).
We assume that the inclination angle of the disk $i$ is 0 deg in making these figures.}\label{FigureA2_add}
\end{figure}
\subsubsection{Influence of model assumptions}
\noindent Figure \ref{FigureA2_add} shows the radial distributions of normalized cumulative fluxes for these three pure rotational ortho-$\mathrm{H_2O}$ lines.
We normalized the values of cumulative fluxes of these lines using the values at $r=30$AU (top panel) and at $r=300$AU (bottom panel).
According to these panels, the 682.93$\mu$m line is emitted mostly from the region inside the $\mathrm{H_2O}$ snowline.
In contrast, the 63.37$\mu$m line and the 538.66$\mu$m line are emitted mostly from the region outside the $\mathrm{H_2O}$ snowline.
In addition, although the 63.37$\mu$m line is mainly emitted from the region between $r\sim10-100$AU, the 538.66$\mu$m line is mainly emitted from a region much further out ($r\sim 50-300$AU).
This is because the 682.93$\mu$m line has a small $A_{ul}$ and a relatively high $E_{up}$, and thus it mainly traces the hot $\mathrm{H_2O}$ gas inside the $\mathrm{H_2O}$ snowline. In contrast, the 63.37$\mu$m line has a large $A_{ul}$, although its $E_{up}$ is similar to that of the 682.93$\mu$m line, and thus the flux density from the hot surface layer of the outer disk is strong (see also Sections 3.1 and 3.2.2).
Moreover, the flux density of the 538.66$\mu$m line from the outer cold water reservoir in the photodesorbed layer is strong, since this line is the ground-state rotational transition and has low $E_{up}$ compared with the other lines (see also Section 3.1 and 3.2.3).
These results suggest that the total fluxes of the 538.66$\mu$m line (and partly the 63.37$\mu$m line) will be influenced by the size of the disk which is included in the calculation of the line profiles, although the 682.93$\mu$m line does not have this problem because the line emitting region is sufficiently small.
\\ \\
\noindent Although we adopt a dust-grain size distribution with a maximum radius of $a_{\mathrm{max}}\sim$10$\mu$m throughout the disk (see Appendix A), dust grains are expected to grow in size due to settling and coagulation as the disk evolves and planet formation proceeds.
\citet{Aikawa2006} calculated disk physical structures with various dust-grain size distributions.
In addition, \citet{Vasyunin2011} and \citet{Akimkin2013} calculated the chemical structure of the outer disk ($\gtrsim$10AU) with grain evolution and discuss its features.
They showed that dust-grain settling and growth reduce the total
dust-grain surface area and lead to higher UV irradiation rates in the upper disk. Therefore, the hot surface layer of the outer disk, which contains abundant gas-phase molecules including $\mathrm{H_2O}$, becomes wider and shifts closer to the disk midplane; thus the abundances and column densities of these species are enhanced.
However, they did not discuss the midplane structure of the inner disk including the position of the $\mathrm{H_2O}$ snowline, since they restricted their calculations to the outer disk ($\gtrsim$10AU).
Here, we note that the position of the $\mathrm{H_2O}$ snowline in such an evolved disk is expected to be closer to the central star, since the total dust-grain surface area and thus dust opacity decreases as the size of dust grains becomes large, leading to a decrease in dust-grain and gas temperatures in the midplane of the inner disk \citep{Oka2011}.
Moreover, \citet{Ros2013}, \citet{Zhang2015}, and \citet{Banzatti2015} discussed the effects of rapid dust-grain growth that leads to pebble-sized particles near the $\mathrm{H_2O}$ snowline.
\\ \\
As we explained in Section 2.1, the dominant dust heating source in the disk midplane of the inner disk is the radiative flux produced by viscous dissipation ($\alpha$-disk model) which determines the dust and
gas temperature of the region.
Recent studies (e.g., \citealt{Davis2005, Garaud2007, Min2011, Oka2011, Harsono2015, Piso2015}) calculated the evolution of the position of the $\mathrm{H_2O}$ snowline in optically thick disks, and showed that it migrates as the disk evolves and as the mass accretion rate in the disk decreases, since the radiative flux produced by viscous dissipation becomes larger as the mass accretion rate increases.
Younger protoplanetary disks like HL Tau \citep{ALMA2015} are expected to have a larger mass accretion rate than our reference T Tauri disk model, and thus the position of the $\mathrm{H_2O}$ snowline will reside further out in the disk midplane.
\citet{Zhang2015} argue that the center of the prominent innermost gap at 13 AU is coincident with the expected midplane condensation front of water ice.
Here we note that \citet{Banzatti2015} and \citet{Okuzumi2016} report the position of the $\mathrm{H_2O}$ snowline in HL Tau as $\lesssim$ 10 AU.
The difference arises because the midplane radial temperature profile of \citet{Zhang2015} is higher than those of \citet{Banzatti2015} and \citet{Okuzumi2016}.
\\ \\
As we described in Section 2.2 and Appendix C, we adopt the wavelength integrated UV
flux calculated at each point by Eqs.~(1) and (2) to approximate the photoreaction rates $k^{\mathrm{ph}}(r, z)$
and photodesorption rate $k_{i}^{\mathrm{pd}}$.
This UV flux is estimated by summing up the fluxes of three components: photospheric blackbody radiation,
optically thin hydrogenic bremsstrahlung radiation, and strong Ly$\alpha$ line (see also Appendix A).
\citet{Walsh2012} pointed out that using Eqs.~(1) and (2), we may overestimate the strength of the UV field
at wavelengths other than the Ly$\alpha$ ($\sim 1216\mathrm{\mathring{A}}$).
On the basis of their calculations, if we adopt the wavelength-dependent UV flux to calculate photochemical reaction
rates, the fractional abundance of $\mathrm{H_2O}$ vapor in the outer disk surface becomes larger
because of the combination of increased gas-phase production and decreased photodestruction.
In contrast, the fractional abundance of $\mathrm{H_2O}$ vapor in the inner disk midplane is not expected to
change, since the UV flux plays a minor role in determining physical and chemical structures around the $\mathrm{H_2O}$ snowline (see Figure \ref{Figure1_original}).
\citet{Walsh2012} suggested that the column density of $\mathrm{H_2O}$ vapor in the outer disk can be
enhanced by an order of magnitude depending on the method used to calculate the photodissociation rates.
\\ \\
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Figure8a_H2Oabund_io1.0_snowline1.eps}
\includegraphics[scale=0.5]{Figure8b_682.93um_io1.0_snowline1.eps}
\includegraphics[scale=0.5]{Figure8c_H2Oabund_io4.0_snowline2.eps}
\includegraphics[scale=0.5]{Figure8d_682.93um_io4.0_snowline2.eps}
\includegraphics[scale=0.5]{Figure8e_H2Oabund_io8.0_snowline3.eps}
\includegraphics[scale=0.5]{Figure8f_682.93um_io8.0_snowline3.eps}
\end{center}
\vspace{0.3cm}
\caption{\noindent The left three panels: the fractional abundance (relative to total hydrogen nuclei density) distributions of $\mathrm{H_2O}$ gas of a disk around a T Tauri star as a function of disk radius and height (scaled by the radius, $z/r$) up to maximum radii of $r=$100AU. We change the positions of the $\mathrm{H_2O}$ snowline to 1 AU (top left), 4 AU (middle left), and 8 AU (bottom left) by hand, in order to test the sensitivity to the position of the $\mathrm{H_2O}$ snowline. The right three panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m ($J_{K_{a}K_{c}}$=6$_{43}$-5$_{50}$). The three panels correspond to the cases in which the $\mathrm{H_2O}$ snowline is assumed to be 1 AU (top right), 4AU (middle right), and 8AU (bottom right). {\it Red solid lines} are the emission line profiles from inside 1, 4, 8AU ($\sim$inside the $\mathrm{H_2O}$ snowline), {\it black dashed lines} are those from 1-30, 4-30, 8-30AU ($\sim$outside the $\mathrm{H_2O}$ snowline), and {\it blue dotted lines} are those from the total area inside 30AU, respectively. In calculating these profiles, we assume that the distance to the object $d$ is 140pc ($\sim$ the distance of the Taurus molecular cloud), and the inclination angle of the disk $i$ is 30 deg.}\label{FigureA3_add}
\end{figure*}
\setcounter{figure}{7}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Figure8g_63.37um_io1.0_snowline1.eps}
\includegraphics[scale=0.5]{Figure8h_538.66um_io1.0_snowline1.eps}
\includegraphics[scale=0.5]{Figure8i_63.37um_io4.0_snowline2.eps}
\includegraphics[scale=0.5]{Figure8j_538.66um_io4.0_snowline2.eps}
\includegraphics[scale=0.5]{Figure8k_63.37um_io8.0_snowline3.eps}
\includegraphics[scale=0.5]{Figure8l_538.66um_io8.0_snowline3.eps}
\end{center}
\vspace{0.3cm}
\caption{
\noindent
(Continued.) The left three panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=63.37$\mu$m ($J_{K_{a}K_{c}}$=8$_{18}$-7$_{07}$).
The right three panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=538.66$\mu$m ($J_{K_{a}K_{c}}$=1$_{10}$-1$_{01}$).
These panels correspond to the cases in which the $\mathrm{H_2O}$ snowline is assumed to be 1 AU (top left and right), 4AU (middle left and right), and 8AU (bottom left and right).
}\label{Figure9_add}
\end{figure*}
In the remaining part of this subsection, we discuss the behavior of the $\mathrm{H_2O}$ lines for some cases in which we artificially change the distribution of $\mathrm{H_2O}$ vapor, the position of the $\mathrm{H_2O}$ snowline and the fractional abundance of $\mathrm{H_2O}$ gas in the outer disk surface, and test the validity of our model predictions.
We explore different values of the $\mathrm{H_2O}$ snowline radius to simulate the effects of viscous heating and of different dust opacities due to dust evolution, and of the water abundance to simulate the effects of the strength of photo-reactions, as outlined above.
\\ \\
In Figure \ref{FigureA3_add}, we show the distributions of $\mathrm{H_2O}$ gas and profiles of the 682.93$\mu$m, 63.37$\mu$m, and 538.66 $\mu$m lines when we change the positions of the $\mathrm{H_2O}$ snowline ($r_{\mathrm{snowline}}$) to 1 AU (top panels), 4 AU (middle panels), 8 AU (bottom panels) by hand.
In the case of $r_{\mathrm{snowline}}=$1 AU, we change the fractional abundance of $\mathrm{H_2O}$ gas by hand to 10$^{-12}$ in the regions of $r=1-1.6$AU and $z/r\sim 0-1.5$.
In the cases of $r_{\mathrm{snowline}}=$4 AU and 8AU, we change the fractional abundance to $5\times 10^{-5}$ in the regions of $r=1.6-4$AU and $z/r\sim 0-1.5$, and $r=1.6-8$AU and $z/r\sim 0-1.7$, respectively.
In calculating these line profiles, we assume that the distance to the object $d$ is 140pc ($\sim$ the distance of Taurus molecular cloud), and the inclination angle of the disk $i$ is 30 deg.
Here we note that the disk physical structure is the same as the original reference model (see Figure \ref{Figure1_original}). As the position of the $\mathrm{H_2O}$ snowline moves outward, the flux of these three lines from the inner disk becomes larger, that from the outer disk becomes weaker, and the line width, especially the separation between the two peaks, becomes narrower.
In the case of the 682.93$\mu$m line, the emission flux inside the $\mathrm{H_2O}$ snowline is still larger than that outside the $\mathrm{H_2O}$ snowline, even when the $\mathrm{H_2O}$ snowline is artificially set at 1 AU. In addition, the position of the $\mathrm{H_2O}$ snowline can be distinguished using the difference in the peak separations, although the sensitivity to its position will depend on the spectral resolution of the observations and the uncertainty of other parameters (e.g., inclination $i$).
In the cases of the 63.37$\mu$m and 538.66 $\mu$m lines, the emission fluxes inside the $\mathrm{H_2O}$ snowline are still much smaller than those outside the $\mathrm{H_2O}$ snowline, even when the $\mathrm{H_2O}$ snowline is at 8 AU.
However, if we calculate the line fluxes using self-consistent physical models, the emission flux of the 63.37$\mu$m line inside the $\mathrm{H_2O}$ snowline is around ten times larger in the case of $r_{\mathrm{snowline}}=$ 8 AU, and its emission flux could be similar to that outside the $\mathrm{H_2O}$ snowline (see below).
\\ \\
We use the same disk physical structure as the original reference model, because calculating several different disk physical structures
and chemical structures self-consistently using our method (see Section 2.1 and 2.2)
is computationally demanding and beyond the scope of this work.
Even if we adopt self-consistent models, we expect that the line widths will not be affected; however, we do expect that the line fluxes will be affected
since the temperature of line emitting regions will be different.
In our original reference model, the gas and dust temperatures around the $\mathrm{H_2O}$ snowline are about 150$-$160K.
In contrast, the temperatures of the line emitting regions
around the $\mathrm{H_2O}$ snowline for the models with a snowline radius ($r_{\mathrm{snowline}}$) of 1 AU, 4 AU, and 8 AU are 180$-$300K, 85$-$90K, and $\sim$65K, respectively.
Therefore, estimation of blackbody intensities at $\lambda \sim$63$-$683$\mu$m suggests that, if we calculate the line fluxes using self-consistent physical models, the line peak flux densities could be a factor of $\sim$0.3$-$0.85 smaller for the model with $r_{\mathrm{snowline}}=$1AU,
and $\sim$$2-4$ times and $\sim$$2.5-10$ times larger for the models with $r_{\mathrm{snowline}}$=4AU and 8AU, respectively. These differences in the peak flux densities are larger for the lines at shorter wavelengths.
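The order of magnitude of these blackbody scalings can be checked with a short script that takes the ratio of Planck functions at the quoted temperatures. This is only a back-of-the-envelope sketch: the representative temperatures (155\,K for a self-consistent snowline region versus 180$-$300\,K in the reference structure) are illustrative values taken from the ranges above, not a radiative-transfer calculation.

```python
import math

# Physical constants (cgs units)
H = 6.62607015e-27   # Planck constant [erg s]
C = 2.99792458e10    # speed of light [cm/s]
K_B = 1.380649e-16   # Boltzmann constant [erg/K]

def planck_lambda(wavelength_cm, temperature):
    """Planck function B_lambda(T) [erg s^-1 cm^-2 cm^-1 sr^-1]."""
    x = H * C / (wavelength_cm * K_B * temperature)
    return (2.0 * H * C**2 / wavelength_cm**5) / math.expm1(x)

def flux_ratio(wavelength_um, t_selfconsistent, t_reference):
    """Ratio of blackbody intensities at two temperatures."""
    wl = wavelength_um * 1.0e-4  # micron -> cm
    return planck_lambda(wl, t_selfconsistent) / planck_lambda(wl, t_reference)

# r_snowline = 1 AU case: ~155 K self-consistent snowline temperature
# versus the 180-300 K of the reference structure at 1 AU.
print(flux_ratio(63.37, 155.0, 300.0))   # roughly 0.34, cf. ~0.3 in the text
print(flux_ratio(682.93, 155.0, 180.0))  # roughly 0.85, cf. ~0.85 in the text
```

At the longest wavelengths the Rayleigh-Jeans limit applies and the ratio reduces to $T_1/T_2$; at 63$\mu$m the Wien correction pushes the ratio lower, which is why the difference is larger for the shorter-wavelength lines.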
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Figure9a_H2Oabund_hyomen1.eps}
\includegraphics[scale=0.5]{Figure9b_682.93um_hyomen1.eps}
\includegraphics[scale=0.5]{Figure9c_H2Oabund_hyomen2.eps}
\includegraphics[scale=0.5]{Figure9d_682.93um_hyomen2.eps}
\end{center}
\vspace{0.3cm}
\caption{\noindent The left two panels: the fractional abundance (relative to total hydrogen nuclei density) distributions of $\mathrm{H_2O}$ gas of a disk around a T Tauri star as a function of disk radius and height (scaled by the radius, $z/r$) up to maximum radii of $r=$100AU. We change the fractional abundance of $\mathrm{H_2O}$ gas in the hot disk surface of the outer disk to a larger value (10$^{-5}$, top left), and to a smaller value (10$^{-8}$, bottom left) compared with the original self-consistently calculated value (see also Figure 2).
The right two panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=682.93$\mu$m ($J_{K_{a}K_{c}}$=6$_{43}$-5$_{50}$). The two panels correspond to the case of the larger value (10$^{-5}$, top right), and to the case of the smaller value (10$^{-8}$, bottom right).
{\it Red solid lines} are the emission line profiles from inside 2AU ($\sim$inside the $\mathrm{H_2O}$ snowline), {\it black dashed lines} are those from 2-30AU ($\sim$outside the $\mathrm{H_2O}$ snowline), and {\it blue dotted lines} are those from the total area inside 30AU, respectively.
In calculating these profiles, we assume that the distance to the object $d$ is 140pc ($\sim$ the distance of Taurus molecular cloud), and the inclination angle of the disk $i$ is 30 deg.}\label{FigureA4_add}
\end{figure*}
\setcounter{figure}{8}
\begin{figure*}[htbp]
\begin{center}
\includegraphics[scale=0.5]{Figure9e_63.37um_hyomen1.eps}
\includegraphics[scale=0.5]{Figure9f_538.66um_hyomen1.eps}
\includegraphics[scale=0.5]{Figure9g_63.37um_hyomen2.eps}
\includegraphics[scale=0.5]{Figure9h_538.66um_hyomen2.eps}
\end{center}
\vspace{0.3cm}
\caption{
\noindent
(Continued.) The left two panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=63.37$\mu$m ($J_{K_{a}K_{c}}$=8$_{18}$-7$_{07}$).
The right two panels: The velocity profiles of the pure rotational ortho-$\mathrm{H_2O}$ lines at $\lambda$=538.66$\mu$m ($J_{K_{a}K_{c}}$=1$_{10}$-1$_{01}$).
These panels correspond to the case of the larger value (10$^{-5}$, top left and right), and to the case of the smaller value (10$^{-8}$, bottom left and right).
}\label{Figure11_add}
\end{figure*}
\\ \\
In Figure \ref{FigureA4_add}, we show the distributions of $\mathrm{H_2O}$ gas and profiles of the 682.93$\mu$m, 63.37$\mu$m, and 538.66 $\mu$m lines when we change the fractional abundance of $\mathrm{H_2O}$ gas by hand in the hot disk surface of the outer disk to a larger value (10$^{-5}$, top panels), and to a smaller value (10$^{-8}$, bottom panels) compared to the original self-consistently calculated value (see also Figure 2), to test the sensitivity of the predictions to the disk surface abundance.
If the fractional abundance of $\mathrm{H_2O}$ gas in the hot disk surface of the outer disk is larger, the flux of the 682.93$\mu$m line from the outer disk becomes larger. Here we note that since the peak velocities of the ``in" and ``out" components are different, we can separate both components with very high sensitivity and high-dispersion spectroscopic observations, especially in the very high abundance case (top panels), although the wings of both components are blended.
As the abundance in the hot surface of the outer disk becomes small, the fluxes of the 63.37$\mu$m and 538.66 $\mu$m lines from the outer disk become smaller.
This effect is stronger in the case of the 63.37$\mu$m line, since this line has a large Einstein $A$ coefficient and a high upper state energy compared to those of the 538.66$\mu$m line.
However, the contributions of the fluxes of these two lines from the outer disk are still larger than that from the inner disk even when the abundance in the hot surface of the outer disk is small.
\subsubsection{Critical density and the assumption of LTE}
\noindent As described in Section 2.3, the level populations of the water molecule ($n_{u}$ and $n_{l}$) are calculated under the assumption of local thermodynamic equilibrium (LTE).
In this subsection, we discuss the validity of the assumption of LTE within our work.
\\ \\
We calculate the critical density $n_{\mathrm{cr}}=A_{ul}{\langle \sigma v \rangle}^{-1}$ of the three characteristic lines discussed here (ortho-$\mathrm{H_2O}$ 682.93, 63.37, 538.66$\mu$m lines, see Table 1).
$\langle \sigma v \rangle$ is the collisional rate for the excitation of $\mathrm{H_2O}$ by H$_{\mathrm{2}}$ and electrons for an adopted collisional temperature of
200K from \citet{Faure2008}.
The critical densities $n_{\mathrm{cr}}$ of these three lines are $1.0\times 10^{6}$, $1.5\times 10^{10}$, and $2.9\times 10^{7}$ $\mathrm{cm}^{-3}$, respectively.
LTE is only realized when collisions dominate the molecular excitation/deexcitation, that is, when the total gas density is larger than $n_{\mathrm{cr}}$.
In contrast, non-LTE allows for the fact that the levels may be sub-thermally excited, when $n_{\mathrm{cr}}$ is higher than the total gas density or when emission (deexcitation) dominates collisions, as well as super-thermally excited, when radiative excitation dominates collisions.
When a level is sub-thermally populated in a particular region of the disk, it has a smaller population than in LTE, thus the line flux in non-LTE is smaller than that for LTE (e.g., \citealt{Meijerink2009, Woitke2009b}).
According to \citet{Meijerink2009}, lines with small $A_{ul}$ ($<$10$^{-2}$ $\mathrm{s}^{-1}$) and low $E_{up}$ ($<$2000K at $r=1$AU) are close to LTE, since collisions dominate the radiative excitation/deexcitation in those lines.
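The critical-density expression above is straightforward to evaluate. The sketch below is for illustration only: the $A_{ul}$ values and the single collisional rate coefficient are hypothetical round numbers chosen to be of the order implied by the quoted critical densities, not the exact entries of Table 1 or the \citet{Faure2008} rates.

```python
def critical_density(a_ul, sigma_v):
    """Critical density n_cr = A_ul / <sigma v> [cm^-3],
    with A_ul in s^-1 and <sigma v> in cm^3 s^-1."""
    return a_ul / sigma_v

# Hypothetical inputs: Einstein A coefficients of the order of those in
# Table 1, and a collisional rate coefficient of order 1e-10 cm^3 s^-1
# (the rough magnitude of H2O-H2 rates at ~200 K).
SIGMA_V = 1.2e-10
for name, a_ul in [("682.93um", 1.2e-4),
                   ("63.37um", 1.8),
                   ("538.66um", 3.5e-3)]:
    print(name, critical_density(a_ul, SIGMA_V))  # ~1e6, ~1.5e10, ~3e7 cm^-3
```

The spread of four orders of magnitude between the lines comes almost entirely from $A_{ul}$, which is why the small-$A_{ul}$ 682.93$\mu$m line thermalizes at much lower densities than the 63.37$\mu$m line.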
\\ \\
As described in Section 2.1 (see also Figure \ref{Figure1_original}), the total gas density decreases as a function of disk radius and disk height.
We found that the densest region of the disk is in the hot disk midplane inside the $\mathrm{H_2O}$ snowline ($\sim10^{12}-10^{14}$ $\mathrm{cm}^{-3}$), where $n_{\mathrm{cr}}$ of the three characteristic lines are much smaller than the total gas density.
In contrast, the total gas density in the hot surface layer of the outer disk is $\sim10^{7}-10^{8}$ $\mathrm{cm}^{-3}$, and that in the photodesorbed layer of water molecules is $\sim10^{8}-10^{10}$ $\mathrm{cm}^{-3}$. Therefore in these regions, the critical densities of the 63.37 and 538.66$\mu$m lines are similar to or larger than the total gas density, while that of the 682.93$\mu$m line is smaller.
In Section 3.2.1, we showed that the emission flux of the 682.93$\mu$m line which traces the $\mathrm{H_2O}$ snowline mainly comes from the hot disk midplane inside the $\mathrm{H_2O}$ snowline. Since the value of $n_{\mathrm{cr}}$ for this line is much smaller than the total gas density in the line emitting region, the LTE assumption is valid for this region.
\\ \\
On the other hand, in our LTE calculations it remains possible that we have overestimated the emission flux of strong $\mathrm{H_2O}$ lines with large $A_{ul}$ which trace the hot surface layer of the outer disk (e.g., the $\mathrm{H_2O}$ 63.37 $\mu$m line) and lines which trace cold water vapor in the photodesorbed layer (e.g., the $\mathrm{H_2O}$ 538.66 $\mu$m line).
Previous works which model such $\mathrm{H_2O}$ lines (e.g., \citealt{Meijerink2009, Woitke2009b, Banzatti2012, Antonellini2015}) showed that non-LTE calculations are important for these lines. They suggest that non-LTE effects may, however, alter line fluxes by factors of only a few for moderate excitation lines. Moreover, current non-LTE calculations are likely to be inaccurate, due to the incompleteness and uncertainty of collisional rates (e.g., \citealt{Meijerink2009, Banzatti2012, Kamp2013, Zhang2013, Antonellini2015}).
\subsubsection{Requirement for the observations}
\noindent Since the velocity width between the emission peaks is $\sim$20 km s$^{-1}$, high dispersion spectroscopic observations (R=$\lambda$/$\delta \lambda$$>$ tens of thousands) of the identified $\mathrm{H_2O}$ lines are needed to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline.
Their profiles potentially contain information which can be used to determine the $\mathrm{H_2O}$ snowline position.
Moreover, the lines that are suitable to trace emission from the hot water gas within the $\mathrm{H_2O}$ snowline (e.g., 682.93$\mu$m) tend to have a much smaller $A_{ul}$ than those detected by previous observations (e.g., 63.37$\mu$m, 538.66$\mu$m). Since the area of the emitting region is small (radii $<$ 2AU for a T Tauri disk) compared with the total disk size, the total flux of each line is very small ($3.12\times 10^{-22}$ W $\mathrm{m}^{-2}$ for the 682.93$\mu$m line).
In addition, the sensitivity and spectral resolution (of some instruments) used for previous mid-infrared, far-infrared, and sub-millimeter observations (e.g., $Spitzer$/IRS, $Herschel$/PACS, $Herschel$/HIFI) were not sufficient to detect and resolve weak lines.
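The resolving-power requirement follows directly from $R=\lambda/\delta\lambda=c/\delta v$. A minimal sketch, where the choice of $\sim$5 resolution elements across the $\sim$20 km s$^{-1}$ peak separation is an illustrative sampling assumption rather than a value from the text:

```python
C_KMS = 2.99792458e5  # speed of light [km/s]

def required_resolving_power(delta_v_kms, n_elements=5):
    """Resolving power R = lambda/dlambda needed to place n_elements
    spectral resolution elements across a velocity width delta_v."""
    return n_elements * C_KMS / delta_v_kms

# One resolution element per 20 km/s peak separation:
print(required_resolving_power(20.0, n_elements=1))  # ~1.5e4
# Sampling the double-peaked profile with ~5 elements:
print(required_resolving_power(20.0))                # ~7.5e4
```

Both numbers fall in the "tens of thousands" regime quoted above; actually characterizing the profile shape pushes the requirement toward the upper end, which is why R $>$ 100,000 instruments are attractive.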
\\ \\
Among the various $\mathrm{H_2O}$ lines in ALMA band 8, the $\mathrm{H_2O}$ 682.93$\mu$m line is the most suitable to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline. Several suitable sub-millimeter $\mathrm{H_2O}$ lines exist in ALMA bands 7, 9, and 10 ($\sim$ $300-1000$ $\mu$m), some of which have the same order-of-magnitude fluxes compared with that of the 682.93$\mu$m line.
With ALMA, we can now conduct high sensitivity ($\sim 10^{-21}-10^{-20}$ W $\mathrm{m}^{-2}$ (5$\sigma$, 1 hour)), high dispersion (R$>$ 100,000), and even high spatial resolution ($<$ 100 mas) spectroscopic observations.
Since the total fluxes of the candidate sub-millimeter lines to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline are small in T Tauri disks, they are challenging to detect with current ALMA sensitivity.
However, in hotter Herbig Ae disks and in younger T Tauri disks (e.g., HL Tau), the $\mathrm{H_2O}$ snowline exists at a larger radius, and the fluxes of these lines will be stronger than in our fiducial T Tauri disk ($\sim$1.6AU). Thus the possibility of a successful detection is expected to increase in these sources and could be achieved with current ALMA capabilities.
\\ \\
In addition, suitable lines for detection exist over a wide wavelength range, from mid-infrared (Q band) to sub-millimeter, and there are future mid-infrared instruments including the Q band which will enable high sensitivity and high-dispersion spectroscopic observations: Mid-Infrared Camera High-disperser \& IFU spectrograph on the Thirty Meter Telescope (TMT/MICHI, e.g., \citealt{Packham2012}), and HRS of SPICA\footnote[4]{\url{http://www.ir.isas.jaxa.jp/SPICA/SPICA_HP/research-en.html}} Mid-Infrared Instrument (SPICA/SMI).
Moreover, since SPICA/SMI has an especially high sensitivity, successful detection is expected even for a T Tauri disk with several hours of observation.
In our companion paper (paper II, \citealt{Notsu2016b}),
we will discuss in detail the difference in flux between T Tauri and Herbig Ae disks in lines ranging from the mid-infrared to sub-millimeter wavelengths, and their possible detection with future instruments (e.g., ALMA, TMT/MICHI, SPICA/SMI-HRS).
\section{Conclusion}
\noindent In this paper, we identify candidate $\mathrm{H_2O}$ lines to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline through high-dispersion spectroscopic observations in the near future.
First, we calculated the chemical composition of a protoplanetary disk using a self-consistent physical model of a T Tauri disk, and investigated the abundance distributions of $\mathrm{H_2O}$ gas and ice.
We found that the abundance of $\mathrm{H_2O}$ is high ($\sim 10^{-4}$) in the hot inner region within the $\mathrm{H_2O}$ snowline ($\sim$1.6AU) near the equatorial plane, and relatively high ($\sim 10^{-7}$) in the hot surface layer of the outer disk, compared to its value in the regions outside the $\mathrm{H_2O}$ snowline near the equatorial plane ($\sim 10^{-12}$).
Second, we calculated the velocity profiles of $\mathrm{H_2O}$ emission lines, and showed that lines (e.g., the ortho-$\mathrm{H_2O}$ 682.93$\mu$m line) with small Einstein $A$ coefficients ($A_{ul}\sim10^{-3}-10^{-6}$ s$^{-1}$) and relatively high upper state energies (E$_{\mathrm{up}}\sim$1000K) are dominated by emission from the disk region inside the $\mathrm{H_2O}$ snowline, and therefore their profiles potentially contain information which can be used to determine the $\mathrm{H_2O}$ snowline position.
This is because the water gas column density of the region inside the $\mathrm{H_2O}$ snowline is sufficiently high that all lines emitting from this region are optically thick
as long as $A_{ul} > 10^{-6}$ s$^{-1}$.
In contrast, the region outside the $\mathrm{H_2O}$ snowline has a lower water gas column density, and
lines with larger Einstein $A$ coefficients have a more significant contribution to their fluxes since the lines are expected to be optically thin there.
Therefore, we argue that the $\mathrm{H_2O}$ lines with small Einstein $A$ coefficients and relatively high upper state energies
are the most suitable to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline in disks through high-dispersion spectroscopic observations in the near future.
The wavelengths of those lines suitable to trace emission from the hot water reservoir within the $\mathrm{H_2O}$ snowline range from mid-infrared (Q band) to sub-millimeter, and they overlap with the capabilities of ALMA and future mid-infrared high dispersion spectrographs (e.g., TMT/MICHI, SPICA/SMI-HRS).
In addition, we calculate the behavior of water lines which have been detected by previous spectroscopic observations
(e.g., the ortho-$\mathrm{H_2O}$ 63.37$\mu$m line, the ortho-$\mathrm{H_2O}$ 538.66$\mu$m line).
The fluxes calculated for these lines are consistent with those of previous observations and models.
These lines are less suited to trace emission from water vapor within the $\mathrm{H_2O}$ snowline because they are mainly emitted from the region outside the snowline.
In a future paper (paper II, \citealt{Notsu2016b}), we will discuss the differences of fluxes in the suitable lines ranging from mid-infrared (Q band) to sub-millimeter, and the possibility of future observations (e.g., ALMA, TMT/MICHI, SPICA) to locate the position of the $\mathrm{H_2O}$ snowline.
\\
\acknowledgments
\noindent We are grateful to Dr. Itsuki Sakon, Dr. Chris Packham, Dr. Hiroshi Shibai, Dr. Takao Nakagawa, Dr. Satoshi Okuzumi, and Dr. Inga Kamp for their useful comments.
We thank the referee for many important suggestions and comments.
The numerical calculations in this study were carried out on SR16000 at Yukawa Institute for Theoretical Physics (YITP) and computer systems at Kwasan and Hida Observatory (KIPS) in Kyoto University. This work is supported by Grants-in-Aid for Scientific Research, 23103005, 25108004, 25400229, and 16J06887.
S. N. is grateful for the support from the educational program organized by Unit of Synergetic Studies for Space, Kyoto University.
C. W. acknowledges support from the Netherlands Organization for Scientific Research (NWO, program number 639.041.335).
Astrophysics at Queen's University Belfast is supported by a grant from the STFC.
\\ \\
\begin{table}
\caption{{Molecular Binding Energies}}\label{tab:T2}
\begin{center}
\begin{tabular}{lll}
\hline
\vspace{-0.2cm}
Species & Binding Energy & References \\
&&\\
& \ \ \ $E_{d}^{\mathrm{K}}(i)$ [K] & \\
\hline
CO & 855 & 1 \\
CO$_{2}$ & 2990 & 2 \\
$\mathrm{H_2O}$ & 4820 & 3 \\
CH$_{4}$ & 1080 & 4 \\
N$_{2}$ & 790 & 1 \\
NH$_{3}$ & 2790 & 5 \\
HCN & 4170 & 4 \\
H$_{2}$CO & 1760 & 6 \\
C$_{2}$H$_{2}$ & 2400 & 4 \\
\hline
\multicolumn{3}{l}{\hbox to 0pt{\parbox{70mm}{
\footnotesize
\footnotemark[1] \citet{Oberg2005};
\footnotemark[2] \citet{Edridge2010};
\footnotemark[3] \citet{Sandford1993};
\footnotemark[4] \citet{Yamamoto1983};
\footnotemark[5] \citet{Brown2007};
\footnotemark[6] \citet{Hasegawa1993} }}}
\end{tabular}
\end{center}
\end{table}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 3,942
|
@interface ViewController : UIViewController
@property (nonatomic, retain) IBOutlet AnimatedLabel* animLabel;
@end
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,470
|
{"url":"https:\/\/theteche.com\/defects-in-welding-weldability-and-testing-of-weldments\/","text":"# Defects in Welding \u2013 Weldability and Testing of Weldments\n\n## Defects in Welding Process\n\nA welding defect is any flaw that compromises, the usefulness of a weldment. The improper welding parameters, base metal and selection of method introduce defects in the weld metal. So, the defective weld causes failure in service conditions and damages to the properties the defects in weld depending on thickness, load, environment and size of the weld. Defects welding the major defects which are causing in the weld are:\n\n1. Lack of fusion\n2. Lack of root penetration\n3. Cracks\n4. Cavity\n5. Porosity\n6. Undercut\n7. Distortion\n8. Slag inclusion\n9. Lamellar tearing\n10. Overlapping\n11. Imperfect shape or unacceptable contour\n12. Miscellaneous defects.\n\n### Lack of fusion\n\nLack of fusion is the poor adhesion of the weld bead to the base metal. The parameter mainly affects the welding current. If the current is very low, it is not sufficient to heat the metal all over the place. The wrong design of the weld also causes defects.\n\n### Lack of root penetration\n\nLack of fusion is a weld bead in which fusion has not occurred throughout the cross section of joint due to improper penetration of the joint. Incomplete penetration forms channels and crevices in the root of the weld which can cause serious issues in pipes because corrosive substances can settle in these areas. This defect occurs due to too small root gap, too large size electrode, high travel speed and incorrect use of electrode.\n\n### Cracks \u2013 Defects in Welding\n\nFracture-type interruptions are either in weld or base metal adjacent to weld. It is a serious defect because it is a discontinuity in the metal that significantly reduces strength. It is duo to embrittlement or low ductility of weld and base metal combined with high restraint during contraction. 
In general, this defect must be repaired.\n\nThe cracks are mainly classified into the following two types:\n\n\u2666Hot cracking.\nCold cracking.\n\nImage shows different types of cracks in the weldment. Hot cracking also known as solidification cracking can occur with all metals and happens in the fusion zone of a weld. To diminish the probability of this type of cracking, excess material restraint should be avoided and a proper filler material should be utilized.\n\nA heat-affected zone. (HAZ) is a crack that forms a short distance away from the fusion line. It occurs in low alloy and high alloy steel. The exact causes of this type of crack are not completely understood but the dissolved hydrogen must be present.\n\nCrater cracks occur in the crater when the welding arc is terminated prematurely. Crater cracks are normally shallow, hot cracks usually forming single or star cracks. These cracks usually start at a crater pipe and extend longitudinal in the crater.\n\nHot cracking occurs at high temperature and cold cracking occurs at room temperature.\n\nThe main causes of crack formation are:\n\n1. Arc speed\n2. Ductility\n3. Solidification rate\n4. Temperature.\n\nResidual stresses can reduce the strength of the base material and it can lead to catastrophic failure through cold cracking. Cold cracking is limited to steels and it is associated with the formation of martensite as the weld cools. The cracking occurs in the heat-affected zone of the base material.\n\n### Cavity\n\nThere are two cavity type defects that may present in the weldment.\n(i) Porosity\n(ii) Shrinkage voids.\n\nPorosity : It is small voids in weld metal formed by gases entrapped during solidification as shown in image. It is caused by inclusion of atmospheric gases, sulfur in weld metal or surface contaminants. It is due to the presence of gases in the solidifying metal which are producing porosity. The gases are: oxygen, nitrogen and hydrogen. 
The parameters which are causing porosity are:\n\n1. Arc speed\n2. Coating of the electrode\n3. Incorrect welding technique\n4. Base metal composition.\n\nThe sources of hydrogen formed on the weld pool are electrode coatings. Then oxygen becomes as oxide form in the pool. Nitrogen enters in the form of atmospheric nitrogen.\n\nShrinkage voids : Cavities are formed by shrinkage during solidification.\n\n### Defects in Welding \u2013 Undercut\n\nUndercut is a groove gets formed in the parent metal along the sides of the weld as shown in image. The main causes of the undercut are:\n\n1. High current\n2. Arc length\n3. Electrode diameter\n4. Inclination of electrode.\n\n### Distortion\n\nDistortion is defined as the change in shape and difference between positions of two plates during the welding. The base metal under the arc melts and already welded base metal starts cooling. It will create a temperature difference in the weld and will cause distortion.\n\n\u2022 Arc speed\n\u2022 Number of passes\n\u2022 Stresses in plates\n\u2022 Joint type\n\u2022 Order of welding.\n\n### Slag inclusions\n\nDuring solidification of weld, any foreign materials present in the molten metal will not float. It will be entrapped inside the metal. So, it will lower the strength of the joint. Most common form is slag inclusions generated during arc welding processes that use flux instead of floating to top of weld pool and globules of slag become encased during solidification. Other forms are metallic oxides that form during welding of certain metals such as aluminum which normally has a surface coating of\n\n$A{1}_{2}{O}_{3}$\n\n### Lamellar tearing\n\nIt is mainly a problem with low quality steels. 
It occurs in plate that has low ductility in the through-thickness direction, caused by non-metallic inclusions such as sulphides and oxides that have been elongated during the rolling process.

These inclusions mean that the plate cannot tolerate the contraction stresses in the short transverse direction. It is seen in large structures. Lamellar tearing can occur in both fillet and butt welds, but the most vulnerable joints are T and corner joints, where the fusion boundary is parallel to the rolling plane.

### Overlap – Defects in Welding

Overlap is the protrusion of weld metal beyond the weld toe or weld root. It may occur because of a fusion problem. The main causes are:

1. Arc length
2. Arc speed
3. Joint type
4. Current.

### Spatter

Spatter consists of small droplets of electrode material ejected from the arc, which may or may not have fused to the parent plate. The main causes of spatter are high welding current, excessive arc length, damp electrodes, arc blow, incorrect electrode angle, incorrect polarity, and poor gas shielding.
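The cold (HAZ) cracking susceptibility discussed earlier is, in practice, often screened with a carbon-equivalent number. The formula below is my addition, not part of the text above; it is the common IIW carbon-equivalent expression, with all alloy contents in weight percent:

```python
# IIW carbon equivalent (an assumption of this sketch; the text above does not
# name a screening formula): CE = C + Mn/6 + (Cr + Mo + V)/5 + (Ni + Cu)/15.
# Higher CE means more hardenability, hence more martensite in the HAZ and a
# higher cold-cracking risk.
def carbon_equivalent(c, mn=0.0, cr=0.0, mo=0.0, v=0.0, ni=0.0, cu=0.0):
    """Return the IIW carbon equivalent from weight-percent alloy contents."""
    return c + mn / 6 + (cr + mo + v) / 5 + (ni + cu) / 15

# A typical mild steel: 0.18 %C, 1.2 %Mn
ce = carbon_equivalent(0.18, mn=1.2)
assert abs(ce - 0.38) < 1e-9   # 0.18 + 1.2/6 = 0.38
```

Thresholds for when preheat is required vary by code and plate thickness, so this number is only a first-pass screen.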
The second stage of the erosion cycle in the topographic development of a landscape or region characterized by numerous and closely spaced mature streams, reduction of level surfaces to slopes, large well-defined drainage systems, and the absence of swamps or lakes on the uplands. Also known as topographic maturity.
A stage in the development of a shore or coast that begins with the attainment of a profile of equilibrium.
The extent to which the texture and composition of a clastic sediment approach the ultimate end product.
The stage of stream development at which maximum vigor and efficiency have been reached.
A measure of the development of strength in concrete; it combines the effects of curing temperature and time of hydration.
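The concrete-maturity definition above does not name a method; one standard choice (my assumption here, not stated in the entry) is the Nurse–Saul degree-hour index, which sums temperature above a datum over the curing time:

```python
# Nurse-Saul maturity index sketch (method is my choice; the glossary entry
# only says maturity combines curing temperature and time of hydration):
#   M = sum over intervals of (T - T0) * dt, clipped at zero,
# where T0 is the datum temperature below which hydration is assumed to stop.
def nurse_saul_maturity(temps_c, dt_hours, datum_c=-10.0):
    """Maturity in degree-hours from temperature readings taken every dt_hours."""
    return sum(max(t - datum_c, 0.0) * dt_hours for t in temps_c)

# 24 hourly readings at a constant 20 degrees C, with the common -10 C datum:
m = nurse_saul_maturity([20.0] * 24, dt_hours=1.0)
assert m == 720.0   # (20 - (-10)) * 24
```

Two concrete samples with equal maturity are expected to have roughly equal strength, regardless of the particular time-temperature history.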
The Hochmoorgelbling, or Moorland Clouded Yellow (Colias palaeno), is a butterfly of the family Pieridae (whites), subfamily Coliadinae (clouded yellows). It occurs in the temperate and subarctic zones of Europe, Asia, and North America. Palaeno is the name of a nymph who dances and plays gracefully in bogs and meadows.
Characteristics
Adults
The Moorland Clouded Yellow has a wingspan of 50–56 millimetres. The upper sides of the males' wings are whitish with a faint yellow tinge, with a dark, sharply delimited border that is not dusted with scales, and red wing fringes. The dark border is narrower on the hindwings. A small dark spot sits at the edge of the discoidal cross-vein of the cell on the forewing. The underside of the forewings is yellowish; that of the hindwings is dusted grey-green, becoming yellowish towards the margin. The hindwings have a small, dark-rimmed white spot in the cell. The wings do not reflect ultraviolet light.
The female is white, sometimes also yellow, and the dark border on the upper side of the wings is less sharply delimited; otherwise it resembles the male. The butterflies become whiter the further north they occur.
In Scandinavia the Moorland Clouded Yellow is more variable than in Central Europe, in places more golden yellow than pale yellow, in places distinctly paler, and smaller in the far north.
Immature stages
The eggs are initially yellow; they later turn red and, shortly before the caterpillars hatch, dark blue-grey. The young caterpillar is brownish with a dark head. In later instars it is green with a strong yellow lateral stripe and short black hairs. The caterpillar normally pupates on a twig on the food plant as a green girdled pupa.
Similar species
Pale Clouded Yellow (Colias hyale (, 1758))
Berger's Clouded Yellow (Colias alfacariensis , 1905)
Lesser Clouded Yellow (Colias chrysotheme (, 1781))
Colias erate (, 1805)
Colias interior , 1862 (North America)
Distribution
The Moorland Clouded Yellow is found in raised bogs and other wet habitats that support the caterpillar's food plant.
In Europe this means the Jura, the Vosges, the Black Forest, Upper Swabia, the Westallgäu hill country, the northern Alpine foothills, and the Bavarian Forest with the adjoining Bohemian Forest; in Switzerland the central and southern Alps; and Austria, the Czech Republic, Poland except central Poland, Slovakia, the Carpathians in Romania, Belarus, the Baltic states, Scandinavia, Denmark, and Russia through Siberia to the Amur and Sakhalin, as well as North Korea, north-eastern China, and central Japan.
In North America the Moorland Clouded Yellow occurs from western Alaska across the Canadian territories of Yukon, the Northwest Territories, and Nunavut to Hudson Bay. To the north its range reaches Victoria Island. East of Hudson Bay it occurs in the south of Baffin Island (part of Nunavut) and on the opposite mainland in northern Québec and Newfoundland. To the south the range extends into the northern to central parts of British Columbia, Alberta, Saskatchewan, Manitoba, and Ontario, with the southernmost records in western Alberta and in Ontario.
Habitat
The caterpillar of the Moorland Clouded Yellow feeds only on the bog bilberry (Vaccinium uliginosum) and is therefore tied to habitats where that plant grows. In North America it may also feed on Vaccinium cespitosum. The adults need a great deal of nectar and therefore require flower-rich biotopes nearby. They rarely stray much more than a kilometre from their habitat, although distances of over six kilometres have been recorded.
Suitable habitats in Central Europe are transitional mires, bog margins, raised bogs with bog pines (a subspecies of the mountain pine), and open bog-pine woodland with flower-rich mountain or orchard meadows nearby, since the bogs themselves offer almost no flowers. In central Scandinavia, in Linnaeus's time the butterflies also occurred in forests where the bog bilberry grows.
Behaviour
In sunshine the males patrol the bogs and their surroundings almost continuously in search of females, which emerge about a week after the males.
They often follow landscape features and avoid obstacles such as tall groups of trees. When a male meets a female, the pair circle each other and climb up to 30 metres. Towards the end they return to the ground, and by repeated nudging the male presses the female ever lower until copulation takes place in the vegetation.
The females lay their eggs singly on the upper sides of leaves at damp, sunny spots, preferring open stands of bog bilberry. The caterpillars usually hatch after about one to two weeks, or, depending on the weather, only after four weeks. After hatching, the caterpillar spins anchoring threads onto the leaf and begins window feeding near the leaf tip, gnawing away only the epidermis. The young caterpillars overwinter on the plant after the second moult and resume feeding the following year when the plants sprout; they then eat the leaf buds and afterwards the whole leaves, no longer just the epidermis. In Central Europe pupation takes place from late May to early June.
Flight and caterpillar periods
The butterfly flies in one generation, from June to July in Central Europe and from late June to late August in North America.
Systematics
Linnaeus described the Moorland Clouded Yellow in 1760 as Papilio [Helicunius] Palaeno from two males and one female from the Uppsala region and from southern Finland, where the butterflies were very rare and very common respectively. The Moorland Clouded Yellow is the only species of the genus in the Old World that feeds on Vaccinium. In North America the caterpillars of C. behrii, C. pelidne, C. skinneri, and C. interior also feed on Vaccinium, which suggests that the group originated in North America.
Together with C. aias, C. pelidne, and C. skinneri, the Moorland Clouded Yellow forms the Colias palaeno species complex. C. aias is regarded by some authors as a subspecies.
Subspecies
palaeno is the nominate form. It is often paler than the Central European subspecies europome and occurs in Scandinavia (Norway, Sweden, Finland), the Baltic states, Belarus, and Russia across the Urals to the surroundings of Novosibirsk; records exist from Novosibirsk, Omsk, and Chelyabinsk. The form lapponica occurs in northern Scandinavia and the polar Urals and is on average smaller and somewhat darker on the hindwing underside, probably owing to the climatic conditions. In Belarus, the Baltic states, and north-eastern Poland yellow butterflies are also frequent.
arctica , 1927 is provisionally treated as a subspecies and occurs in Arctic, northern, and north-eastern Siberia. It could, however, also be a synonym of the North American subspecies chippewa. orientalis immigrates into the southern parts of the range of arctica, where transitional forms occur. North of Bilibino there are no more transitional forms and only the on-average paler arctica flies. The butterflies fly in July and differ from orientalis in their smaller size (38–40 mm). The marginal border on the upper side is very black in the male and greyish in the female. The underside of the hindwings is strongly dusted green in the male and brown in the female. The round silvery spot is very large and inconspicuously rimmed with black; it is smaller than in orientalis, in which the marginal band is narrower and very black in the male and grey in the female. The name arctica was used by Ruggero Verity in 1908, but at the fourth (infrasubspecific) level, contrary to the International Code of Zoological Nomenclature. Nordström then used it in 1927 for the subspecies, which is why he must be regarded as its author today.
europome (, [1778]) is the yellow subspecies of western and Central Europe. Butterflies from the Alps are somewhat smaller than lowland ones. Some authors treat it only as a form. The form illgneri in the central Alps has lemon-yellow females. In his work Esper described C. palaeno again, but the description and the figure show the Pale Clouded Yellow (C. hyale), and his description and figure of C. hyale are in fact the Clouded Yellow (C. croceus). Europome was one of the Danaïdes, the 50 daughters of Danaus.
orientalis , 1892 is distributed east of the Yenisei and the Altai Mountains to the Russian Pacific coast. In the southern part of its range it occurs only in high mountains.
poktussani , 1935 is very similar to orientalis and sugitanii and occurs in the Changbai Mountains on the border between China and North Korea. The butterflies have a wingspan of 40–42 mm in the male and 42–46 mm in the female. The marginal band of the forewings is deep black and broader than in orientalis but narrower than in Colias aias; in the males the inner margin of the forewing is black along half its length. They fly in July. The name derives from Poktussan, the old name of Mount Paektusan in the Changbai Mountains, where the butterflies that Bang-Haas used for his description were found at 2,500 metres.
sugitanii , 1929 is very similar to poktussani and orientalis and flies in the Hida Mountains in the northern Japanese Alps on the border of the prefectures of Nagano and Toyama, among other places at Mida-ga-hara, Chou-ga-take, Gaki, Higashizawa, Johnen, the Johnen-nokkoshi Pass, Taro-dake (Taro-daira Plateau), Tateyama, Tsubakuro, Yari-ga-take, Yakusi-dake, and Taroudaira. The other species of the C. palaeno complex occurring in Japan, Colias aias, flies near the Asama volcano on the border of the prefectures of Nagano and Gunma. sugitanii flies over steep grassy slopes and marshy plateaus. Its range is small, but the subspecies is not endangered, because the terrain is inaccessible. It is named after Iwahiko Sugitani, who caught the holotype, a male, on 20 July 1922 on Mount Johnen-dake near the town of Shinano. A paratype was caught by Tadao Kano on Mount Tsubakuro-dake in Nagano Prefecture.
chippewa , 1870 occurs in North America as far as Baffin Island and is scarcely distinguishable from northern Siberian specimens of the subspecies arctica and orientalis. The males are pale yellow with a broad black border. The central spot on the upper side of the hindwing is usually pale yellow, rarely orange, and only very rarely absent. On the greenish hindwing underside the central spot lacks a rim. The white female form alba is frequent. chippewa is sometimes regarded as a species in its own right.
baffinensis , 1977 resembles chippewa but is darker and occurs only on Baffin Island.
Synonyms
Synonyms of C. palaeno palaeno:
cretacea , 1888. The name cretacea, from Latin creta (chalk), was used by Schilde in 1884 for pale males and females. Aurivillius raised it to subspecies rank in 1888, although Schilde had only intended to describe aberrations. The type locality is in northern Finland near the Russian border.
lapponica , 1861 is a pale and smaller form from Lapland.
philomene , [1805] was probably described from a single specimen, which was later destroyed in a fire. The type locality is unknown; the specimen probably came from central or northern Sweden.
pruefferi , 1967 is a somewhat yellower form from north-eastern Poland.
synonyma , 1923 occurs in the same area as the nominate form. When Bryk described it, the type locality of the nominate form had erroneously been shifted 1,000 km northwards; only for that reason could a new subspecies be described in this area.
valeria , 1860 was described from a specimen, now lost, from the surroundings of Saint Petersburg.
Synonyms of C. palaeno arctica , 1927
gomojunovae , 1996 is regarded as a synonym by Grieshuber, Worthy, and Lamas, since the butterflies fall within the variation of arctica. They were collected in Magadan Oblast, in the north-west of the Chukchi Peninsula near Bilibino. The dedication name honours the Russian entomologist and parasitologist Nina Petrovna Gomojunova (1933–1973), who worked at the Biological Institute of the Siberian Branch of the Russian Academy of Sciences.
Synonyms of C. palaeno chippewa , 1870
helena , 1863. The name is a junior homonym of helena , 1844 and was replaced by chippewa , 1870.
Synonyms of C. palaeno europome [1778]
alpina , 1901 denotes populations at high elevations in the Alps, somewhat smaller (as with lapponica, probably for climatic reasons).
caflischi , 1893 is an ecological variant that is somewhat smaller and more greenish yellow than the other ecological variant, europomene. The specimens came from the surroundings of the Fex Glacier in the Swiss Fex Valley.
deprunneri , 1944 is named after Leonardo de Prunner. The specimens were collected at 2,100 metres in the Cottian Alps near the French border. Rocca did not state its status clearly; he described it as a form under the heading Colias palaeno europomene O.[chsenheimer].
europomene , 1816 is an alpine ecological form occurring from about 1,600 to at least 2,500 metres. It is unclear from which specimens it was described; Ochsenheimer wrote that it "is found in some collections under the name P. Europomene". The butterflies are smaller, with stronger colours and a darker underside. Yellow females are markedly more frequent relative to white ones than in populations from lower elevations. The name may be a diminutive of europome. It was described only in the fourth supplement of 1816 to Ochsenheimer's Schmetterlinge von Europa of 1808, so the correct date is 1816, not 1808.
Synonyms of C. palaeno orientalis , 1892
sachalinensis , 1919 was described from the island of Sakhalin. It does not differ from C. palaeno orientalis and is not geographically isolated either.
Synonyms of C. palaeno poktussani , 1935
nekkana , 1939 is provisionally regarded as a synonym by Grieshuber, Worthy, and Lamas. The type locality is not known precisely and lies in the border area between the Chinese provinces of Hebei and Inner Mongolia.
Population, threats, and conservation
Population
In the Ardennes the Moorland Clouded Yellow has been extinct since the 1950s. Reintroduction attempts failed there, as they did in the Jura, where it still occurs at a few sites. In the Vosges it is probably extinct. Esper still found the butterflies in numbers in the Fichtel Mountains, but they have long been extinct there, as in Lower Lusatia. In the Ore Mountains, too, many populations have disappeared, and only a few remain in the central and western Ore Mountains. There are still recent records from Polish Lower Silesia. In Baden-Württemberg it now occurs only in the central and south-eastern Black Forest and High Black Forest, in Upper Swabia, and in the Westallgäu hill country; the populations in the northern Black Forest and on the Baar have died out. In Bavaria there are strongly declining trends in the raised and transitional bogs of the pre-alpine hill and moor country below 800 metres. In the Bavarian Forest and the Passau depression the populations are still fairly stable, as in the adjoining Bohemian Forest. In the Austrian Alpine foreland, too, the species has become very rare; larger refuges remain, for example, in the Hohe Tauern and in the Styrian Enns Valley (Pürgschachen Moor).
The causes of the sharp declines in Germany since the 1990s cannot be attributed solely to habitat destruction and habitat change, since about 50% of the populations in the Alpine foreland have disappeared since then, including in intact bogs.
Threats
Even in intact raised bogs the Moorland Clouded Yellow is never common, because the bog bilberry can grow only in marginal areas, not in the wet, waterlogged core zones. Through the destruction of bogs and adjacent habitats it is severely endangered in Central Europe; populations are declining sharply and have already vanished in many places. Peat cutting and drainage destroy the caterpillars' food base, and scrub encroachment eliminates the bog bilberry. Afforestation of bog margins with spruce monocultures and changed use of hay meadows (mowing before the end of July) destroy the adults' nectar sources. The microclimate at the food plant strongly influences the mortality of the young caterpillars, of which over 90% die before overwintering. They survive more often when they do not have to share the plants with other herbivores, which is more frequently the case at damp sites. Light shading improves survival, while heavy shading through succession is massively harmful, as is too much dryness, caused for instance by drainage, climate warming, or reduced precipitation. Dryness does promote the spread of the bog bilberry and can produce high numbers of individuals, but these can later collapse again because of the drought. In winter, snow cover is advantageous, as it prevents the caterpillars from desiccating, whereas rain can lead to rot. The caterpillars easily survive temperatures down to minus 26 °C.
Red List of Germany: 2
Red List of Baden-Württemberg: 2
Red List of Bavaria: 2
Red List (Liste rouge) of France: 1
Red List of Japan (2020): two subspecies potentially endangered (C. p. aias and C. p. sugitanii)
In Canada the Moorland Clouded Yellow is ranked Imperiled S2 and Critically Imperiled S1 in Alberta; in the other provinces and in Alaska it is not at risk.
Conservation
To protect the Moorland Clouded Yellow in Central Europe, the remaining bogs must be protected on a large scale. A buffer zone of at least 150 metres around the bogs is required to provide nectar for the adults and to reduce the input of nutrients, which would alter the vegetation. A minimum area of 10 ha is necessary for a stable population. Bogs that are already damaged can be rewetted by closing the drainage ditches, and they must be cleared of scrub to help the caterpillars survive. Access routes to nectar habitats must be kept open; obstacles such as trees or belts of bushes should be removed. Meadows around the bogs must not be mown until after the end of the adults' flight period.
Literature
References
Notes
External links
Lepiforum e. V. – taxonomy and photos
www.schmetterling-raupe.de
Markus Schwibinger: Die Tagfalter Oberbayerns – Weißlinge
European butterflies by Christopher Jonko – Colias palaeno (Linnaeus, 1761)
Butterflies of America – Colias palaeno chippewa W. H. Edwards, 1870 (Palaeno Sulphur)
Butterflies of America – Colias palaeno baffinensis Ebner & Ferris, [1978] (Palaeno Sulphur)
Gelblinge
|
# Showing that the roots of a polynomial with descending positive coefficients lie in the unit disc.

Let $P(z)=a_nz^n+\cdots+a_0$ be a polynomial whose coefficients satisfy $$0<a_0<a_1<\cdots<a_n.$$

I want to show that the roots of $P$ live in the unit disc. The obvious idea is to use Rouché's theorem, but that doesn't quite work here, at least with the choice $f(z)=a_nz^n$, $g(z)=$ (the rest).

Any ideas?

- I think this is related to the Schur-Cohn criterion – Cocopuffs Aug 28 '12 at 19:28
- It's known as the Eneström–Kakeya theorem. See this question: math.stackexchange.com/questions/185818/enestrom-kakeya-theorem – Hans Lundmark Aug 28 '12 at 19:40

**Answer.** The thing to do is to look instead at the polynomial
$$Q(z) = (1-z)P(z) = (1-z)\left(\sum_{i=0}^n a_iz^i \right) = a_0 - a_n z^{n+1} + \sum_{i=1}^n (a_i-a_{i-1})z^i.$$
Now, let $|z|>1$ be a root of $P(z)$, and hence a root of $Q(z)$. Therefore, we have $a_0 + \sum_{i=1}^n (a_i-a_{i-1})z^i = a_n z^{n+1}$. Then we have
$$\begin{aligned} |a_n z^{n+1}| &= \Big|a_0 + \sum_{i=1}^n (a_i-a_{i-1})z^i\Big| \\ & \le a_0 + \sum_{i=1}^n (a_i-a_{i-1})|z|^i \\ & < a_0|z|^n + \sum_{i=1}^n (a_i-a_{i-1})|z|^n \\ & = a_n|z|^n,\end{aligned}$$
a contradiction. (The last equality is the telescoping sum $a_0 + \sum_{i=1}^n (a_i - a_{i-1}) = a_n$; the strict inequality uses $a_0 < a_0|z|^n$ for $|z|>1$.)

- The proof looks short enough that you could present it as an answer. External links might be broken some time in the future... – Fabian Aug 28 '12 at 19:37
- How did you get the idea to construct $Q(z)$? – MJD Aug 28 '12 at 20:17
- +1 Very nice proof. – DonAntonio Aug 29 '12 at 2:52
- Yes, thanks; the construction of $Q(z)$ puts the condition that the $a_i$ are monotonic into practice. – van abel Sep 6 '12 at 8:59
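The statement is easy to sanity-check numerically. The following sketch is my addition, not part of the original thread; it draws random strictly increasing positive coefficient sequences and checks that every root lies inside the unit disc:

```python
import numpy as np

# Numerical check of the Eneström-Kakeya statement: for coefficients
# 0 < a_0 < a_1 < ... < a_n, all roots of a_n z^n + ... + a_0 should lie
# strictly inside the unit disc.
rng = np.random.default_rng(0)
for _ in range(100):
    n = int(rng.integers(2, 8))
    a = np.sort(rng.random(n + 1) + 0.01)   # ascending: a_0 < ... < a_n
    roots = np.roots(a[::-1])               # np.roots wants a_n first
    assert np.all(np.abs(roots) < 1.0)

# A hand-checkable case: 3z^2 + 2z + 1 has roots (-1 +/- i*sqrt(2))/3,
# with modulus sqrt(1/3), comfortably inside the disc.
assert np.all(np.abs(np.roots([3.0, 2.0, 1.0])) < 1.0)
```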
# Manifolds are the Manifolds

In the stuff I've been writing about topology so far, I've been talking about topologies mostly algebraically. I'm not sure why, but for me, algebraic topology is both the most interesting and easiest-to-understand bit of topology. Most people seem to think I'm crazy; in general, people seem to think that algebraic topology is hard. It must say something about me that I find other things much harder, but I'll leave it to someone else to figure out what.

Despite the fact that I love the algebraic side, there's a lot of interesting stuff in topology that you need to talk about, which isn't purely algebraic. Today I'm going to talk about one of the most important ones: manifolds.

The definition we use for topological spaces is really abstract. A topological space is a set of points with a structural relation. You can define that relation either in terms of neighborhoods, or in terms of open sets – the two end up being equivalent. (You can define the open sets in terms of neighborhoods, or the neighborhoods in terms of open sets; they define the same structure and imply each other.)

That abstract definition is wonderful. It lets you talk about lots of different structures using the language and mechanics of topology. But it is very abstract. When we think about a topological space, we're usually thinking of something much more specific than what's implied by that definition. The word space has an intuitive meaning for us. We hear it, and we think of shapes and surfaces. Those are properties of many, but not all, topological spaces. For example, there are topological spaces where you can have multiple distinct points with identical neighborhoods.
That's definitely not part of what we expect!

The things that we think of as spaces are really a special class of topological spaces, called manifolds.

Informally, a manifold is a topological space whose neighborhoods form a surface that appears to be euclidean if you look at small sections. All euclidean surfaces are manifolds – a two-dimensional plane, defined as a topological space, is a manifold. But there are also manifolds that aren't really euclidean, like a torus, or the surface of a sphere – and they're the things that make manifolds interesting.

The formal definition is very much like the informal one, with a few additions. But before we get there, we need to brush up on some definitions.

- A set ${\mathbf S}$ is countable if and only if there is a total, onto, one-to-one function from ${\mathbf S}$ to the natural numbers.
- Given a topological space $(T, \tau)$, a basis $\beta$ for $\tau$ is a collection of open sets from which any open set in $\tau$ can be generated by a finite sequence of unions and intersections of sets in $\beta$. What this really means is that the structure of the space is regular – it's got nothing strange like an infinitary union in its open sets.
- A topological space $(T, \tau)$ is called a Hausdorff space if and only if for any two distinct points $p, q \in T$, there is at least one open set $o_p$ where $p \in o_p \land q \not\in o_p$, and at least one open set $o_q$ where $q \in o_q \land p \not\in o_q$. (That is, there is at least one open set that includes $p$ but not $q$, and one that includes $q$ but not $p$.)
A Hausdorff space just defines another kind of regularity: disjoint points have disjoint neighborhoods.

A topological space $(T, \tau)$ is an n-manifold if/f:

- $\tau$ has a countable basis.
- $(T, \tau)$ is a Hausdorff space.
- Every point in $T$ has a neighborhood homeomorphic to an open euclidean $n$-ball.

Basically, what this really means is pretty much what I said in the informal definition. In a euclidean $n$-space, every point has a neighborhood which is shaped like an $n$-ball, and can be separated from any other point using an $n$-ball-shaped neighborhood of the appropriate size. In a manifold, the neighborhoods around a point look like the euclidean neighborhoods.

If you think of a large enough torus, you can easily imagine that the smaller open 2-balls (disks) around a particular point will look very much like flat disks. In fact, as the torus gets larger, they'll become virtually indistinguishable from flat euclidean disks. But as you move away from the individual point, and look at the properties of the entire surface, you see that the euclidean properties fail.

Another interesting way of thinking about manifolds is in terms of a construction called charts, and charts will end up being important later.

A chart for a manifold is an invertible map from some euclidean manifold to part of the manifold which preserves the topological structure. If a manifold isn't euclidean, then there isn't a single chart for the entire manifold. But we can find a set of overlapping charts so that every point in the manifold is part of at least one chart, and the edges of all of the charts overlap. A set of overlapping charts like that is called an atlas for the manifold, and we will sometimes say that the atlas defines the manifold. For any given manifold, there are many different atlases that can define it.
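As a concrete illustration of a chart (my example, not from the post): stereographic projection from the north pole maps the sphere minus that one point invertibly onto the plane, so the sphere can be covered by an atlas of two such charts, one from each pole.

```python
import numpy as np

# Stereographic projection: a chart on S^2 minus the north pole (0, 0, 1).
def stereographic(p):
    """Map a point on the unit sphere (z != 1) to the plane."""
    x, y, z = p
    return np.array([x / (1 - z), y / (1 - z)])

def stereographic_inv(q):
    """Inverse chart: map a plane point back onto the unit sphere."""
    u, v = q
    d = 1 + u * u + v * v
    return np.array([2 * u / d, 2 * v / d, (d - 2) / d])

# Round-tripping through the chart returns the original point:
p = np.array([0.0, 0.6, -0.8])          # a point on the unit sphere
assert np.allclose(stereographic_inv(stereographic(p)), p)
```

The map and its inverse are both continuous on their domains, which is exactly the "invertible, structure-preserving" requirement above.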
The union of all possible atlases for a manifold – the set of all charts that can be mapped onto parts of the manifold – is called the maximal atlas for the manifold. The maximal atlas for a manifold is, obviously, unique.

For some manifolds, we can define an atlas consisting of charts with coordinate systems. If we can do that, then we have something wonderful: a topology on which we can do angles, distances, and most importantly, calculus.

Topologists draw a lot of distinctions between different kinds of manifolds; a few interesting examples are:

• A Riemann manifold is a manifold on which you can meaningfully define angles and distance. (The mechanics of that are complicated and interesting, and I'll talk about them in a future post.)
• A differentiable manifold is one on which you can do calculus. (It's basically a manifold where the atlas has measures, and the measures are compatible in the overlaps.) I probably won't say much more about them, because the interesting thing about them is analysis, and I stink at analysis.
• A Lie group is a differentiable manifold with a valid closed product operator between points in the manifold, which is compatible with the smooth structure of the manifold. It's basically what happens when a differentiable manifold and a group fall in love and have a baby.

We'll see more about manifolds in future posts!

# Squishy Equivalence with Homotopy

In topology, we always talk about the idea of continuous deformation. For example, we say that two spaces are equivalent if you can squish one into the other – if your space were made of clay, you could reshape it into the other just by squishing and molding, without ever tearing or gluing edges.

That's a really nice intuition. But it's a very informal intuition. And it suffers from the usual problem with informal intuition: it's imprecise.

There's a reason why math is formal: because it needs to be! Intuition is great, as far as it goes, but if you really want to understand what a concept means, you need to go beyond intuition. That's what math is all about!

We already talked about what topological equivalence really is, using homeomorphism. But homeomorphism is not the easiest idea, and it's really hard to see just how it connects back to the idea of continuous deformation.

What we're going to do in this post is look at a related concept, called homotopy. Homotopy captures the idea of continuous deformation in a formal way, and using it, we can define a form of homotopic equivalence. It's not quite equivalent to homeomorphism: if two spaces are homeomorphic, they're always homotopy equivalent; but there are homotopy equivalent spaces that aren't homeomorphic.

How can we capture the idea of continuous transformation? We'll start by looking at it in functions: suppose I've got two functions, $f$ and $g$. Both $f$ and $g$ map from points in a topological space $A$ to a topological space $B$. What does it mean to say that the function $f$ can be continuously transformed to $g$?

We can do it using a really neat trick. We'll take the unit interval space – the topological space using the difference metric over the interval from 0 to 1. Call it $U = [0, 1]$.

$f$ can be continuously deformed into $g$ if, and only if, there is a continuous function $t: A \times U \rightarrow B$, where $\forall a \in A: t(a, 0) = f(a) \land t(a, 1) = g(a)$.

If that's true, then we say $t$ is a homotopy between $f$ and $g$, and that $f$ and $g$ are homotopic.

That's just the first step. Homotopy, the way we just defined it, doesn't say anything about topological spaces.
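Here's a small concrete sketch (my own toy example, not from the post): in the plane, any two continuous maps from the same space are homotopic, because the "straight-line" interpolation between them is itself continuous. The particular $f$ and $g$ below are illustrative choices.

```python
# Straight-line homotopy between two maps from the unit interval into
# the plane: t(a, s) = (1 - s) * f(a) + s * g(a).  Since the plane is
# convex, every intermediate stage of the deformation stays inside it.

def f(a):
    return (a, 0.0)          # a flat segment along the x-axis

def g(a):
    return (a, 1.0 - a)      # a slanted segment

def t(a, s):
    fx, fy = f(a)
    gx, gy = g(a)
    return ((1 - s) * fx + s * gx, (1 - s) * fy + s * gy)

# The two defining conditions of a homotopy: t(-, 0) is f, t(-, 1) is g.
for a in [0.0, 0.25, 0.5, 1.0]:
    assert t(a, 0) == f(a)
    assert t(a, 1) == g(a)
```

At intermediate values of $s$, `t` traces out the in-between stages of the squishing.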
We've got two spaces, but we're not looking at how to transform one space into the other; we're just looking at functions that map between the spaces. Homotopy says when two functions between two spaces are loosely equivalent, because one can be continuously deformed into the other.

To get from there to the idea of transformability of spaces, we need to think about what we're trying to say. We want to say that a space $A$ can be transformed into a space $B$. What does that really mean?

One way to say it would be that if I've got $A$, I can mush it into a shape $B$, and then mush it back to $A$, without ever tearing or gluing anything. Putting that in terms of functions instead of squishies, that means that there's a continuous function $f$ from $A$ to $B$, and then a continuous function $g$ back from $B$ to $A$. It's not enough just to have that pair of functions: if you apply $f$ to map $A$ to $B$, and then apply $g$ to map back, you need to get back something that's indistinguishable from what you started with.

Formally, if $A$ and $B$ are topological spaces, and $f: A \rightarrow B$ and $g: B \rightarrow A$ are continuous functions, then the spaces $A$ and $B$ are homotopically equivalent – equivalent over squishing and remolding, but not tearing or gluing – if $g \circ f$ is homotopic with the identity function on $A$, and $f \circ g$ is homotopic with the identity function on $B$.

That captures exactly the notion of continuous transformation that we tried to get with the intuition at the start. Only now it's complete and precise – we've gotten rid of the fuzziness of intuition.

# Multiplying Spaces

When people talk informally about topology, we always say that the basic idea of equivalence is that two spaces are equivalent if they can be bent, stretched, smushed, or twisted into each other, without tearing or gluing.

A mug is the same shape as a donut, because you can make a donut out of clay, and then shape that donut into a mug without tearing, punching holes, or gluing pieces together. A sphere is the same shape as a cube, because if you've got a clay sphere, you can easily reshape it into a cube, and vice-versa.

Homeomorphism is the actual formal definition of that sense of equivalence. The intuition is fantastic – it's one of the best informal descriptions of a difficult formal concept that I know of in math! But it's not ideal. When you take a formal idea and make it informal, you always lose some details.

What we're going to do here is try to work our way gradually through the idea of transformability and topological equivalence, so that we can really understand it. Before we can do that, we need to be able to talk about what a continuous transformation is. To talk about continuous transformations, we need to be able to talk about some topological ideas called homotopy and isotopy. And to be able to define those, we need to be able to use topological products. (Whew! Nothing is ever easy, is it?) So today's post is really about topological products!

The easiest way that I can think of to explain the product of two topological spaces is to say that it's a way of combining the structures of the spaces by adding dimensions. For example, if you start with two spaces, each of which is a line segment, the product of those two spaces is a square (or a circle, or an octagon, or …). You started with two one-dimensional spaces, and used them to create a new two-dimensional space. If you start with a circle and a line, the product is a cylinder.

In more formal terms, topological products are a direct extension of cartesian set products.

As the mantra goes, topological spaces are just sets with structure, which means that the product of two topological spaces is just the cartesian product of their point-sets, plus a combined structure that preserves attributes of the original structure of the spaces.

Let's start with a reminder of what the cartesian product of two sets is. Given a set $A$ and a set $B$, the cartesian product $A \times B$ is defined as the set of all possible pairs $(a, b)$, where $a \in A$ and $b \in B$. If $A = \{1, 2, 3\}$ and $B = \{4, 5\}$, then $A \times B = \{(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)\}$.

In category theory, we take the basic idea of the cartesian product and extend it to a general product of different mathematical objects, using the idea of projections. In this model, instead of saying that the product of sets $A$ and $B$ is a set of pairs $(a, b)$, we say that the product is a set $S$ of objects, together with two functions $P_A : S \rightarrow A$ and $P_B : S \rightarrow B$. (To be complete, we'd need to add some conditions, but the idea should be clear from this much.) Given any object in the product set $S$, $P_A(S)$ will give us the projection of that object onto $A$. This becomes more interesting when we consider sets of objects. The A-projection of a collection of points from the product set $S$ is the shadow that those points cast onto the set $A$.

A topological product is easiest to understand with that categorical approach. The set of points in a product space $A \times B$ is the cartesian product of the set of points in $A$ and the set of points in $B$. The trick, with topologies, is that you need to describe the topological structure of the product set: you need to be able to say what the neighborhoods are. There are lots of ways that you could define the neighborhoods of the product, but we define it as the topological space with the smallest collection of open sets.
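On finite spaces, the whole construction fits in a few lines. This sketch (the two toy topologies and helper names are my own illustrative choices) builds the basis of the product as the "rectangles" $U \times V$, and checks that the projection preimage of every open set is one of them:

```python
from itertools import product

# Two tiny topological spaces, each given by its collection of open sets.
A = {1, 2}
open_A = [frozenset(), frozenset({1}), frozenset(A)]

B = {'a', 'b'}
open_B = [frozenset(), frozenset({'a'}), frozenset(B)]

# Basis of the product topology: all rectangles U x V with U, V open.
basis = [frozenset(product(U, V)) for U in open_A for V in open_B]

def preimage_PA(U):
    """Inverse image of a set U of A under the projection P_A."""
    return frozenset((x, y) for (x, y) in product(A, B) if x in U)

# Continuity of the projection: the preimage of every open set of A
# is open in the product -- here it is literally the rectangle U x B.
for U in open_A:
    assert preimage_PA(U) in basis
```

Taking only these rectangles (and their unions) is what "the smallest collection of open sets" means in practice.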
To understand how we get that, the projections of the category-theoretical approach make it much easier.

Informally, the neighborhoods in the product $A \times B$ are the things that cast shadows into the topological spaces $A$ and $B$ which are neighborhoods in $A$ and $B$.

Suppose we have topological spaces $A$ and $B$. If $S$ is the product topology $A \times B$, then it has projection functions $P_A: S \rightarrow A$ and $P_B: S \rightarrow B$.

The projection functions from the product need to maintain the topological structure of the original topologies. That means that the projection functions must be continuous. And that, in turn, means that the inverse image of an open set under a projection function is an open set. So: for each open set $O$ in $A$, $P_A^{-1}(O)$ is an open set in $S$.

Let's look at an example. We'll start with two simple topological spaces – a cartesian plane (2d), and a line (1d). In the plane, the neighborhoods are open circles; in the line, the neighborhoods are open intervals. I've illustrated those below.

The product of those two is a three-dimensional space. The neighborhoods in this space are cylinders. If you use the projection from the product to the plane, you get open circles – the neighborhood structure of the plane. If you use the projection from the product to the line, you get open intervals – the neighborhood structure of the line.

One interesting side-point here. One thing that we come across constantly in this kind of formal math is the axiom of choice. The AoC is an annoying bugger, because it varies from seeming obviously true to seeming ridiculously false. Topological products are one of the places where it's obviously true. The axiom of choice is equivalent to the statement that given a collection of non-empty topological spaces, the product space is not empty. Obvious, right?
But then look at the Banach-Tarski paradox.

# Topological Spaces: Defining Shapes by Closeness

When people talk about what a topological space is, you'll constantly hear one refrain: it's just a set with structure!

I'm not a fan of that saying. It's true, but it just doesn't feel right to me. What makes a set into a topological space is a relationship between its members. That relationship – closeness – is defined by a structure in the set, but the structure isn't the point; it's just a building block that allows us to define the closeness relations.

The way that you define a topological space formally is:

A topological space is a triple $(X, T, N)$, where $X$ is a set of objects, called points; $T$ is a set of subsets of $X$; and $N$ is a function from elements of $X$ to elements of $T$ (called the neighborhoods of $X$), where the following conditions hold:

1. Neighborhood basis: $\forall A \in N(p): p \in A$: every neighborhood of a point must include that point.
2. Neighborhood supersets: $\forall A \in N(p): \forall B \subseteq X: B \supset A \Rightarrow B \in N(p)$. If $B$ is a superset of a neighborhood of a point, then $B$ must also be a neighborhood of that point.
3. Neighborhood intersections: $\forall A, B \in N(p): A \cap B \in N(p)$: the intersection of any two neighborhoods of a point is a neighborhood of that point.
4. Neighborhood relations: $\forall A \in N(p): \exists B \in N(p): \forall b \in B: A \in N(b)$. If $A$ is a neighborhood of a point $p$, then there's another neighborhood $B$ of $p$, where $A$ is also a neighborhood of every point in $B$.

The collection of sets $T$ is called a topology on $X$, and the neighborhood relation is called a neighborhood topology of $X$.

Like many formal definitions, this is both very precise and not particularly informative. What the heck does it mean?

In the previous topology post, I talked about metric spaces.

Every metric space is a topological space (but not vice-versa), and we can use that to help explain how the set-of-sets $T$ defines a meaningful notion of closeness for a topological space.

In a metric space, we define open balls around each point in the space. Each one forms an open set around the point. For any point $p$ in the metric space, there is a sequence of ever-larger open balls of points around $p$.

That sequence of open balls defines the closeness relation in the metric space:

• a point $q$ is closer to $p$ than $r$ is if $q$ is in one of the open balls around $p$ which $r$ isn't in. (In a metric space, that's equivalent to saying that $d(q, p) < d(r, p)$.)
• two points $q$ and $r$ are equally close to $p$ if there is no open ball around $p$ where $q$ is included but $r$ isn't, or where $r$ is included but $q$ isn't. (In a metric space, that's equivalent to saying that $d(q, p) = d(r, p)$.)

In a topological space, we don't necessarily have a distance metric to define open balls. But the neighborhoods of each point $p$ define the closeness relation in the same way as the open balls in a metric space:

• The neighborhoods $N(p)$ of a point are equivalent to the open balls around $p$ in a metric space.
• The open sets of the topology (the members of $T$) are equivalent to the open sets of the metric space.
• The complements of the members of $T$ are equivalent to the closed sets of the metric space.

One of the most important ideas in topology is the notion of continuity. Some people would say that it's the fundamental abstraction of topology, because the whole idea of the equivalence between two shapes is that there is a continuous transformation between them.
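The open-ball closeness relation can be played with concretely. In this sketch (the points, radii, and helper names are my own illustrative choices), closeness is decided purely by ball membership, exactly as described above:

```python
import math

def d(p, q):
    """Euclidean distance in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def open_ball(p, radius, points):
    """The members of `points` inside the open ball B(p, radius)."""
    return {x for x in points if d(p, x) < radius}

def closer(p, q, r, points, radii):
    """q is closer to p than r is if some ball around p holds q but not r."""
    return any(
        q in open_ball(p, rad, points) and r not in open_ball(p, rad, points)
        for rad in radii
    )

pts = {(0, 0), (1, 0), (3, 0)}
radii = [0.5, 1.5, 2.5, 3.5]

assert closer((0, 0), (1, 0), (3, 0), pts, radii)      # (1,0) beats (3,0)
assert not closer((0, 0), (3, 0), (1, 0), pts, radii)  # and not vice-versa
```

Nothing in `closer` uses the numeric distances directly, only ball membership, which is why the same relation survives in spaces with no metric at all.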
And now that we know what a topological space is, we can define continuity.

Continuity isn't a property of a topological space, but rather a property of a function between two topological spaces: if $(T, X_T, N_T)$ and $(U, X_U, N_U)$ are both topological spaces, then a function $f: T \rightarrow U$ is continuous if and only if for every open set $C \in X_U$, the inverse image of $f$ on $C$ is an open set in $X_T$. (The inverse image of $f$ on $C$ is the set of points $\{x \in T : f(x) \in C\}$.)

Once again, we're stuck with a very precise definition that's really hard to make sense of. I mean really, the inverse image of the function on an open set is an open set? What the heck does that mean?

What it's really capturing is that there are no gaps in the mapping from one space to the other. If there were a gap, it would create a boundary – there would be a hard edge in the mapping, and the inverse image would show that as a closed set. Think of the metric-space idea of open sets. Imagine an open set with a cube cut out of the middle. It's definitely not continuous. If you took a function on that open set, and its inverse image was the set with the cube cut out, then the function is not smoothly mapping from the open set to the other topological space. It's only mapping part of the open set, leaving an ugly, hard-edged gap.

In topology, we say that two shapes are equivalent if and only if they can be continuously transformed into each other. In intuitive terms, that continuous transformation means that you can do the transformation without tearing holes or gluing edges. That gives us a clue about how to understand this definition. What the definition really says is pretty much that there's no gluing or tearing: if a set in the target is an open set, the set of everything that mapped to it is also an open set.

That, in turn, means that if $f(x)$ and $f(y)$ are close together in $U$, then $x$ and $y$ must have been close together in $T$: the structure of neighborhood relations is preserved by the function's mapping.

One continuous map from a topological space isn't enough for equivalence. It's possible to create a continuous mapping from one topological space to another when they're not the same – for example, you could map part of the topology $T$ onto $U$. As long as, for that part, it's got the continuity properties, that's fine. For two topologies to be equivalent, there must be a homeomorphism between the spaces. That is, a function $f$ such that:

• $f$ is one-to-one, total, and onto
• Both $f$ and $f^{-1}$ are continuous.

As a quick aside: here's one of the places where you can see the roots of category theory in algebraic topology. There's a very natural category of topological spaces. The objects in the category are, obviously, the topological spaces. The arrows are continuous functions between the spaces. And an isomorphism in that category is exactly a homeomorphism between the objects.

# Closeness without distance

In my introduction, I said that topology is fundamentally built on the notion of closeness. Someone very quickly responded on Twitter, because they thought that was wrong. It wasn't wrong, but it's easy to see where the confusion came from. Like so much math, topology is built on a very precise logical and set-theoretic formalism. Mathematicians build those formalisms not because they're annoying people who want to be mysterious and incomprehensible, but because the precision of those formalisms is critically important.

When you hear a statement like "point A is close to point B in a space S", you have an intuitive idea of what the word "close" means.

But when you try to expand that to math, it could potentially mean several different things. The easiest meaning would be: the distance between A and B is small.

Mathematicians have used that definition for a lot of interesting work. It's got one limitation though: for it to work, you need to be able to define "distance" in the space. How do you do that? In conventional Euclidean space, we have an easy definition. Describe the position of the two points using Cartesian coordinates: $A = (x_1, y_1)$, $B = (x_2, y_2)$. The distance between A and B is:

$d(A, B) = \sqrt{(x_2-x_1)^2 + (y_2-y_1)^2}$

But we're moving towards the world of topology. We can't count on our spaces to be Euclidean. In fact, the whole point of topology is, in some sense, to figure out what happens when you have different spatial structures – that is, structures other than the familiar Euclidean one! We need to be able to talk about distances in some more general way. To do that, we'll create a new kind of space – a space with an associated distance metric. This new space is called a metric space.

A distance metric is conceptually simple. It's just a special kind of function, from pairs of points in a space to a real number. To be a distance metric, it needs a couple of properties. Suppose that the set of points in the space is $S$. Then a function $d: S \times S \rightarrow \mathbf{R}$ is a distance metric if it satisfies the following requirements:

1. Identity: $\forall s_i, s_j \in S: d(s_i, s_j) = 0 \Leftrightarrow s_i = s_j$
2. Symmetry: $\forall s_i, s_j \in S: d(s_i, s_j) = d(s_j, s_i)$
3. Triangle inequality: $\forall s_i, s_j, s_k \in S: d(s_i, s_k) \le d(s_i, s_j) + d(s_j, s_k)$
4. Non-negativity: $\forall s_i, s_j \in S: d(s_i, s_j) \ge 0$

A metric space is just the pair $(S, d)$ of a set $S$ and a metric function $d$ over the set. For example:

1. A cartesian plane is a metric space whose metric function is the euclidean distance: $d((a_x, a_y), (b_x, b_y)) = \sqrt{(a_x-b_x)^2 + (a_y-b_y)^2}$.
2. A checkerboard is a metric space with the number of king's moves as the metric function.
3. The Manhattan street grid is a metric space where the distance function between two intersections is the sum of the number of horizontal blocks and the number of vertical blocks between them.

All of this is the mathematical work necessary to take one intuitive notion of closeness – the idea that "two points are close if there's a small distance between them" – and turn it into something formal, general, and unambiguous. But we still haven't gotten to what closeness means in topology! It's not based on any idea of distance. There are many topological spaces which aren't metric spaces – that is, spaces where there's no way to define a metric function!

Fortunately, metric spaces give us a good starting point. In topological spaces, closeness is defined in terms of neighborhoods and open balls.

Take a metric space $(S, d)$, and a point $p \in S$. An open ball $B(p, r)$ (that is, a ball of radius $r$ around the point $p$) is the set of points $\{x \in S \mid d(p, x) < r\}$.

Given a large enough set of points, you can create an infinite series of concentric open balls: $B(p, \epsilon), B(p, 2\epsilon), B(p, 3\epsilon)$, and so on. Once you've got that series of ever-smaller and ever-larger open balls around a point $p$, you've got another notion of closeness. $A$ is closer to $p$ than $B$ is if $A$ is in a smaller open ball around $p$.

This is the heart of topology. You can define something like an open ball on a set without a metric. As long as you can create a consistent sequence of open balls, where each larger ball is a strict superset of all of the smaller ones, you can define closeness without any notion of measurable distance!

In the next post, we'll use this distance-free sense of closeness to define what a topology actually is.

# Another pass at Topology!

A long time ago – in 2006! – I wrote a ton of blog posts about topology. In the course of trying to fix up some of the import glitches from migrating this blog to its new home, I ended up looking at a bunch of them. And… well… those were written in my early days of blogging, and looking back at those posts now… well, let's just say that my writing has come a long way with 8 years of practice! I was thinking, "I could do a much better job of writing about that now!"

So that's what I'm going to do. These aren't going to be reposts, but rather complete rewrites.

Topology is typical of one of the methods of math that I love: abstraction. What mathematicians do is pick some fundamental concept, focus tightly on it, and discard everything else. In topology, you want to understand shapes. Starting with the basic idea of shape, topology lets us understand shapes, distortions, dimensions, continuity, and more.

The starting point in topology is closeness. You can define what a shape is by describing which points are close to which other points. Two shapes are equivalent if they can be built using the same closeness relationships.
That means that if you can take one shape, and pull, squash, and twist it into another shape – as long as you don't have to either break closeness relations (by tearing or punching holes in it), or add new closeness relations (by gluing edges together) – the two shapes are really the same thing.
Our T-Light shoes are something sort of special and are ideal for those in Aviation and Ground Handling. Why? For a few simple reasons.
Firstly, they are completely metal free, which makes them fantastic for airports and security. If you work in an airport you don't want to have to take your shoes on and off numerous times throughout the day, and that's just me getting started.
The T-light is metal free because it has a fibreglass toe-cap. Not only does this make the T-light lightweight, it still protects your toes against the same impact as a steel toe cap: 200 joules. Another benefit of the T-light's fibreglass toe-cap is that its lightness inevitably means more comfort for the wearer.
Everybody wants to be comfortable at work, especially if you're walking around airports or airplanes all day. That's why the T-light was crafted with a "plasmafeel" lining. This lining adds high levels of comfort for the wearer as it is exceptionally soft. It also adds softness at the toecap as well as breathability to the shoe. We know comfort here at integra.boot, trust us.
Our T-light shoes come in sizes 2-13, which makes them ideal for both female and male workforces. The smaller sizes (2-6) are manufactured on a female last. As women's feet are shaped differently from men's, this means that the fit is more comfortably shaped to their feet. Don't worry if you're a man with small feet though; I can assure you that you won't be able to tell the difference.
The T-light is water repellent, which means that if you're caught in a light rain shower, you can be comfortable knowing that your feet have some protection against the elements. The shoes are also highly slip resistant, with an SRC-rated sole, the highest slip resistance classification available.
On top of its SRC classification (which you can read about here), the T-light was manufactured with a double density PU sole (you can read about sole units in our blog here). This means that the T-light has a softer, more comfortable layer that lasts longer, contributing to the durability of this safety shoe.
The T-light is made with Full Grain Leather, which not only makes it a nice safety shoe to look at, but also a high-quality one. It also boasts a Texon pierce-resistant midsole, protecting your feet while adding yet again to the lightness and comfort of the T-light, as this is another completely metal free feature of the safety shoes.
Due to the T-Light's high levels of comfort, durability and lightweight design, it is a versatile safety shoe and can be worn as a dress or work shoe. This means that the T-Light is suitable for employees in a range of different roles in any one company.
If you need any more convincing, read below what Oguz Gelhasan, an Aircraft Engineer for KLM and Air France Industries had to say about our T-light shoes, and how they lead to him enjoying his work again.
So what are you waiting for? Contact our safety footwear specialists and start your journey with integra.boot today!
Q: Formatting scraped content

I am trying to scrape the title, date and article content from a BBC news article when given a URL.
The article content is inside multiple divs with the ssrcss-1q0x1qg-Paragraph css class. The article content goes into the content variable. Without the for loop, content is only assigned the content of the first div.
import requests
from bs4 import BeautifulSoup
def scraper():
link = 'https://www.bbc.co.uk/news/uk-52255054'
page = requests.get(link)
soup = BeautifulSoup(page.content, 'html.parser')
results = soup.find(class_='ssrcss-pv1rh6-ArticleWrapper')
body = results.find_all(attrs={'class': 'ssrcss-1q0x1qg-Paragraph'})
content = []
for div in body:
paras = div.text
content.append(paras)
return content
If I just print the scraped content to the console the formatting is perfect, but when I assign and append it to the content variable it ends up with stray \ characters and extra ', ' separators, since it is a list. See the excerpt below.
"\"Coronavirus will not overcome us,\" the Queen has said, in an Easter message to the nation.",
"While celebrations would be different for many this year, she said: \"We need Easter as much as ever.\"",
"Referencing the tradition of lighting candles to mark the occasion, she said: \"As dark as death can be - particularly for those suffering with grief - light and life are greater.\"",
"It comes as the number of coronavirus deaths in UK hospitals reached 9,875.",
"Speaking from Windsor Castle, the Queen said many religions had festivals celebrating light overcoming darkness, which often featured the lighting of candles.",
"She said: \"They seem to speak to every culture, and appeal to people of all faiths, and of none.",
"\"They are lit on birthday cakes and to mark family anniversaries, when we gather happily around a source of light. It unites us.\"",
Is there a way to remove these to just make the list one block of text?
Thanks
A:

* The \ is just escaping the quote characters; it appears in the list's printed repr, not in the strings themselves.
* The ', ' separating items in a list is part of how the list is displayed, not part of the items (in this case strings).
If you want to get the full text, with each string in content separated by a whitespace, add these lines at the end of your code:
full_text = ' '.join(scraper())
print(full_text)
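If you'd rather keep one paragraph per line instead of a single space-separated block, join with a newline. (The two sample strings below are illustrative stand-ins for what `scraper()` returns, not re-scraped data.) Printing the result shows the quotes plainly; the backslashes only ever appear in the list's repr:

```python
# Sample of what scraper() returns: plain strings whose repr escapes
# the embedded double quotes.
content = [
    '"Coronavirus will not overcome us," the Queen has said.',
    'It comes as the number of coronavirus deaths in UK hospitals reached 9,875.',
]

full_text = '\n'.join(content)   # one paragraph per line
print(full_text)                 # no stray backslashes, no ', ' separators
```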
He plays doubles exclusively. He has won two titles on the top-level circuit, at the 2018 Delray Beach International Tennis Championships and the 2022 San Diego Open, and also holds several titles on the lower-level circuits. In the Grand Slam tournaments he has reached the quarterfinals at the 2019 US Open. His career-high ATP ranking is No. 46, reached in January 2023.
Statistics
Updated to 20 March 2023.
Doubles
Titles (2)
Runner-ups (4)
Lower-level tournaments
Doubles
Titles (17)
Runner-ups (7)
Performance timeline
Doubles
Mixed doubles
Other projects
External links
package command
import (
"strings"
"github.com/mitchellh/cli"
)
// DebugCommand is a Command implementation that just shows help for
// the subcommands nested below it.
type DebugCommand struct {
Meta
}
func (c *DebugCommand) Run(args []string) int {
return cli.RunResultHelp
}
func (c *DebugCommand) Help() string {
helpText := `
Usage: terraform debug <subcommand> [options] [args]
This command has subcommands for debug output management
`
return strings.TrimSpace(helpText)
}
func (c *DebugCommand) Synopsis() string {
return "Debug output management (experimental)"
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,793
|
Hind limb somatosensory evoked potentials in the monkey: The effects of distal axonopathy
Joseph C. Arezzo, H. H. Schaumburg, H. G. Vaughan, P. S. Spencer, J. Barna
Computer-averaged short-latency somatosensory evoked potentials (SLSEP) to unilateral stimulation of the peroneal nerve were recorded from surface electrodes overlying the peripheral nerve, cauda equina, spinal cord, brainstem, and contralateral sensorimotor region. Seven monkeys were studied under normal conditions and at various stages of distal axonopathy induced by systematic acrylamide intoxication. With the use of the noncephalic reference, a series of five small-amplitude positive components were identified that precede the initial cortical activity. On the basis of timing and topography of the components, the source of the first one, SLSEP1, was localized to the lumbar dorsal root fibers and that of the second, SLSEP2, to the ascending spinal tracts, principally the gracile fasciculus. Bipolar recordings of the SLSEP2 overlying the rostral extreme of the cervical spinal cord provided a sensitive measure of the onset of distal axonopathy. Changes in the timing of this component antedated both abnormalities of spinal or peripheral nerve conduction and behavioural signs of intoxication. The initial alteration was specific to stimulation of the hind limb and was associated with early morphological change limited to the terminal preterminal portions of the long axons in the gracile fasciculus.
Annals of Neurology
https://doi.org/10.1002/ana.410120105
Somatosensory Evoked Potentials
Peripheral Nerves
Haplorhini
Spinal Nerves
Peroneal Nerve
Spinal Nerve Roots
Neural Conduction
Cervical Cord
Arezzo, J. C., Schaumburg, H. H., Vaughan, H. G., Spencer, P. S., & Barna, J. (1982). Hind limb somatosensory evoked potentials in the monkey: The effects of distal axonopathy. Annals of Neurology, 12(1), 24-32. https://doi.org/10.1002/ana.410120105
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 6,270
|
{"url":"https:\/\/transfer-learning.ai\/page\/140\/","text":"## Unsupervised Learning of Monocular Depth and Ego Motion Using Multiple Masks\n\nA new unsupervised learning method of depth and ego-motion using multiplemasks from monocular video is proposed in this paper . The method is to use a geometricrelationship to filter the mismatched pixels for training . The experiments on KITTI dataset show our method achieves good performance in terms of depth .\u2026\n\n## Design and development of an Aerial Surveillance Security System\n\nAerial security means performing security-aimed monitoring and surveillanceoperations with the help of airborne vehicles . Human officers (security organizations, law enforcement, police etc.) would be able to remotely monitor and view video and data acquired from Drones while planning and evaluating their operations .\u2026\n\n## Using multidimensional speckle dynamics for high speed large scale parallel photonic computing\n\nThe recent rapid increase in demand for data processing has resulted in the need for novel machine learning concepts and hardware . Physical reservoircomputing and an extreme learning machine are novel computing paradigms basedon physical systems themselves . The speckle-based mapping of the input information is high-dimensional and nonlinear and can berealized at the speed of light; thus, nonlinear time-dependent informationprocessing can successfully be achieved at fast rates when applying areservoir-computing-like-approach .\u2026\n\n## Cooperative UWB Based Localization for Outdoors Positioning and Navigation of UAVs aided by Ground Robots\n\nUnmanned aerial vehicles (UAVs) are becoming largely ubiquitous with anincreasing demand for aerial data . Accurate navigation and localization often relies on RTK GNSS . Inexpensive ultra-wideband (UWB) transceivers enable centimeter-level relative positioning . 
With fast deployment and wide setup flexibility, the proposed system is able to accommodate different environments and can also beutilized in GNSS-denied environments .\u2026\n\n## Classically Verifiable Quantum Advantage from a Computational Bell Test\n\nWe propose and analyze a novel interactive protocol for demonstrating quantumcomputational advantage . Ourprotocol relies upon the cryptographic hardness of trapdoor claw-free functions . Through a surprising connection to Bell\u2019s inequality, our protocolavoids the need for an adaptive hardcore bit, with essentially no increase inthe quantum circuit complexity and no extra cryptographic assumptions .\u2026\n\n## Cortical Morphometry Analysis based on Worst Transportation Theory\n\nBiomarkers play an important role in early detection and intervention in Alzheimer\u2019s disease (AD) However, obtaining effective biomarkers for AD is still a big challenge . The worst transportation (WT) aims to find the least economical way to transport one measure to the other, which contrasts to the optimal (OT) The WT map is the gradient of a concave function satisfying the Monge-Ampere equation .\u2026\n\n## Towards Evaluating and Training Verifiably Robust Neural Networks\n\nCROWN, a bounding method based ontight linear relaxation, often gives very loose bounds on these networks . We also design a new activation function, parameterized ramp function (ParamRamp) which has more diversity of neuron status than ReLU . We conduct extensive experiments onMNIST, CIFAR-10 and Tiny-ImageNet with ParamRamp activation and achievestate-of-the-art verified robustness.\u2026\n\n## Positive Sample Propagation along the Audio Visual Event Line\n\nVisual and audio signals often coexist in natural environments, forming audio-visual events (AVEs) Given a video, we aim to localize video segments containing an AVE and identify its category . 
In order to learn discriminativefeatures for a classifier, it is pivotal to identify the helpful (or positive)audio-visual segment pairs while filtering out the irrelevant ones .\u2026\n\n## Replicate or Relocate Non Uniform Access in Parameter Servers\n\nParameter servers (PSs) facilitate the implementation of distributed trainingfor large machine learning tasks . Parameter access is non-uniform in many real-world machine-learning tasks . Skew and nondeterminism are two major sources for non-Uniformity . Lapse2 outperformed existing, single-technique PSs by up to one order of magnitude .\u2026\n\n## Neural Video Portrait Relighting in Real time via Consistency Modeling\n\nVideo portraits relighting is critical in user-facing human photography, especially for immersive VR\/AR experience . Recent advances still fail to cover consistent relit result under dynamic illuminations from monocular RGBstream, suffering from the lack of video consistency supervision . In thispaper, we propose a neural approach for real-time, high-quality and coherentvideo portrait relighting, which jointly models the semantic, temporal andlighting consistency .\u2026\n\n## Multi rate attention architecture for fast streamable Text to speech spectrum modeling\n\nHigh-quality spectrum models usually incorporate the encoder-decoder architecture with self-attention orbi-directional long short-term (BLSTM) units . While these models can produce high quality speech, they often incur O($L$) increase in both latency and RTF with respect to input length $L$. Long input leads to longer delay and slower synthesis speed, limiting its use in real-time applications .\u2026\n\n## An Energy Efficient Quad Camera Visual System for Autonomous Machines on FPGA Platform\n\nThe visual frontend is a major performance and energy consumption bottleneck in autonomous machine applications . 
Compared to Nvidia TX1 and Intel i7, ourFPGA-based implementation achieves 5.6x and 3.4x speedup, as well as 3.0x and 34.6X power reduction, respectively. Compared to the Nvidia TX-1, Intel i.7, the implementation achieves 3.5x power reduction .\u2026\n\n## Optimizer Fusion Efficient Training with Better Locality and Parallelism\n\nMachine learning frameworks adopt iterative optimizers to train neuralnetworks . By reordering the forward computation, gradientcalculation, and parameter updating, our proposed method improves theefficiency of iterativeoptimizers . Experimental results demonstrate that we achieve an up to 20% training time reduction on various configurations .\u2026\n\n## E Commerce in Turkey and SAP Integrated E Commerce System\n\nE-commerce is becoming an indispensable method with the increase of internet usage . SAP is a pioneer and leader in the company resource planning software sector . The SAP is very important forlarge-scale companies. They manage all their processes on SAP and itsintegration is important with other related software.\u2026\n\n## Sub GMN The Subgraph Matching Network Model\n\nSubgraph matching is acrucial task in many fields, ranging from information retrieval, computervision, biology, chemistry and natural language processing . Yet subgraphmatching problem remains to be an NP-complete problem . Study proposes anend-to-end learning-based approximate method for subgraph matching task, calledsubgraph matching network (Sub-GMN) The proposed Sub-GMn firstly uses graphrepresentation learning to map nodes to node-level embedding .\u2026\n\n## Hereditary rigidity separation and density In memory of Professor I G Rosenberg\n\nWe observe that on aset $V$ with $m$ elements, there is a hereditarily rigid set made of $n$ tournaments . We ask if the sameinequality holds when the tournaments are replaced by linear orders . 
We show that $h_{\\rm Lin}(m)$ is the least cardinal $n such that$m(m-1) and $d(V) is the topological density of the set of linear orders on$V) We do not know whether these equalities hold without any set theoretical hypothesis .\u2026\n\n## Efficient Set Based Approaches for the Reliable Computation of Robot Capabilities\n\nTo reliably model real robot characteristics, interval linear systems ofequations allow to describe families of problems that consider sets of values . This allows to easily account for typical complexities such as sets of jointstates and design parameters uncertainties . For eachclass, reliable and efficient polytope, n-cube, and n-ball inner approximations are presented .\u2026\n\n## The best laid plans or lack thereof Security decision making of different stakeholder groups\n\nCyber security requirements are influenced by the priorities and decisions of a range of stakeholders . No group of experts makes significantly better gamedecisions than anyone else, and that their biases lead them to not fullycomprehend what they are defending or how the defenses work .\u2026\n\n## DVMark A Deep Multiscale Framework for Video Watermarking\n\nVideo watermarking embeds a message into a cover video in an imperceptible manner . The message can be retrieved even if the video undergoes certain modifications or distortions . The new model consists of a novel multiscale design where the watermarks are distributed across multiple spatial-temporal scales .\u2026\n\n## Hetero functional Network Minimum Cost Flow Optimization A Hydrogen Natural Gas Network Example\n\nThis work aims to develop an optimization program for a dynamic, hetero-functional graphtheory-based model of an engineering system . The optimization program is demonstrated through the application of the program to a hydrogen-naturalgas infrastructure test case . 
Four distinct scenarios are optimized todemonstrate potential synergies or cascading network effects of policy acrossinfrastructures .\u2026\n\n## Intuitive Tasks Planning Using Visuo Tactile Perception for Human Robot Cooperation\n\nDesigning robotic tasks for co-manipulation necessitates to exploit not onlypriprioceptive but also exteroceptive information for improved safety andautonomy . Research proposes to formulateintuitive robotic tasks following human viewpoint by incorporatingvisuo-tactile perception . The visual data using depth cameras surveils and determines the object dimensions and human intentions while the tactile sensing ensures to maintain the desired contact to avoid slippage .\u2026\n\n## The k Colorable Unit Disk Cover Problem\n\nIn this article, we consider colorable variations of the Unit Disk Cover (CDUDC) problem . We propose a 4-approximation algorithm in $O(m^{7k)n\\log k) time for this problem, where$k$is a positive integer . We also extend our algorithm to solve the .it$k\\$-Colorable\u2026\n\n## O 1 Steiner Point Removal in Series Parallel Graphs\n\nWe study how to vertical-sparsify a graph while preserving both the graph\u2019s metric and structure . The main engine of our approach is a newmetric decomposition for series-parallel graphs . Roughly, a hammock decomposition is a forest-like structure thatpreserves certain critical parts of the metric induced by a series parallelgraph .\u2026\n\n## A Survey on Natural Language Video Localization\n\nNatural language video localization (NLVL) aims to locate a targetmoment from a video that semantically corresponds to a text query . In this paper, we present acomprehensive survey of the NLVL algorithms . 
We categorize them into supervised andweakly-supervised methods, following by the analysis of the strengths andweaknesses of each kind of methods .\u2026\n\n## Sample efficient Gear ratio Optimization for Biomechanical Energy Harvester\n\nThe biomechanical energy harvester is expected to harvest the electricenergies from human motions . A tradeoff between harvesting energy and keeping the user\u2019s natural movements should be balanced via optimization techniques . CVT could continuously adjust its gear ratio to balance the tradeoff foreach task .\u2026\n\n## AdaPool A Diurnal Adaptive Fleet Management Framework using Model Free Deep Reinforcement Learning and Change Point Detection\n\nDeep Reinforcement Learning (RL) suffers from catastrophicforgetting due to being agnostic to the timescale of changes in the distribution of experiences . This paper introduces an adaptive model-free deep reinforcement approach that can recognize and adapt to the diurnal patterns in the ride-sharing environment with car-pooling .\u2026\n\n## Optimization Algorithm for Feedback and Feedforward Policies towards Robot Control Robust to Sensing Failures\n\nModel-free or learning-based control, in particular, reinforcement learning(RL), is expected to be applied for complex robotic tasks . Traditional RL requires a policy to be optimized is state-dependent, that means, the policy is a kind of feedback (FB) controllers . To be improved, RL can be improvedby dealing with the FB\/FF policies, but to the best of our knowledge, amethodology for learning them in a unified manner has not been developed .\u2026\n\n## A Joint Network for Grasp Detection Conditioned on Natural Language Commands\n\nCommand Grasping Network(CGNet) proposes a model to directly output command satisficinggrasps from RGB image and textual command inputs . CGNet outperforms a cascaded object-retrieval and grasp detection baseline by alarge margin . 
Three physical experiments demonstrate the functionality andperformance of CGNet .\u2026\n\n## Touch based Curiosity for Sparse Reward Tasks\n\nTouch-based Curiosity (ToC) learns what visibleobjects interactions are supposed to \u201cfeel\u201d like . We encourage exploration by rewarding interactions where the expectation and the experience don\u2019t match . We compare our cross-modal approach to single-modality (touch- or vision-only) approaches as well as othercuriosity-based methods and find that our method performs better and is moresample-efficient .\u2026\n\n## Residual Model Learning for Microrobot Control\n\nA majority of microrobots are constructed using compliant materials that are difficult to model analytically, limiting the utility of traditional model-based controllers . We propose anovel framework residual model learning (RML) that leverages approximate modelsto substantially reduce the sample complexity associated with learning anaccurate robot model .\u2026\n\n## Qualitative Planning in Imperfect Information Games with Active Sensing and Reactive Sensor Attacks Cost of Unawareness\n\nWe consider the probabilistic planning problem where the agent (called Player1, or P1) can jointly plan the control actions and sensor queries in a sensornetwork . We model such an adversarial interaction using a formal model \u2014 areachability game with partially controllable observation functions .\u2026\n\n## Putting NeRF on a Diet Semantically Consistent Few Shot View Synthesis\n\nWe present DietNeRF, a 3D neural scene representation estimated from a fewimages . NeRF learns a continuous volumetric representation of a scene through multi-view consistency . 
We introduce an auxiliary semantic consistency loss that encourages realistic renderings at novel poses .\u2026\n\n## Trajectory Tracking of Underactuated Sea Vessels With Uncertain Dynamics An Integral Reinforcement Learning Approach\n\nUnderactuated systems like sea vessels have degrees of motion that are insufficiently matched by a set of independent actuation forces . An online machine learning mechanism based on integral reinforcement learning is proposed to find a solution for a class of nonlinear tracking problems with partial prior knowledge of the system dynamics .\u2026\n\n## Seeing through a Black Box Toward High Quality Terahertz TomographicImaging via Multi Scale Spatio Spectral Image Fusion\n\nStrong water absorption nature and low noise tolerance lead to undesiredblurring and distortion of reconstructed terahertz images . MS3-Unet uses multi-scale branches to extract spatio-spectral features then processed by element-wise adaptive filters, and then fused to achieve high-quality image restoration .\u2026\n\n## Modeling High order Interactions across Multi interests for Micro video Reommendation\n\nSelf-over-CoAttention module uses co-attention to model correlation patterns across different levels . We propose a Self-Over-Coattention module to enhance user\u2019s interest representation . Experimental results on filtered public datasets verify that our module is useful . We use self-attraction to model correlations patterns within a specificlevel of interest in micro-videos .\u2026\n\n## Drug Discovery Approaches using Quantum Machine Learning\n\nTraditional drug discovery pipeline takes several years and cost billions of dollars . Classical machines cannot efficiently produce atypical patterns of quantum computers which might improve the training quality of learning tasks . 
We propose a suite of quantum machine learning techniques e.g.,generative\u2026\n\n## Distributed Video Adaptive Block Compressive Sensing\n\nVideo block compressive sensing has been studied for use in resource-strstrained scenarios, such as wireless sensor networks, but the approach still suffers from low performance and long reconstruction time . We propose two algorithms that leverage convolutional neuralnetwork components to reconstruct video with greatly reduced reconstructiontime .\u2026\n\n## PhySG Inverse Rendering with Spherical Gaussians for Physics based Material Editing and Relighting\n\nPhySG is an end-to-end inverse rendering pipeline that includes afully differentiable renderer and can reconstruct geometry, materials, andillumination from scratch . Our frameworkrepresents specular BRDFs and environmental illumination using mixtures ofspherical Gaussians . We demonstrate, with both synthetic and real data, that our reconstructions not only enable rendering of novel viewpoints, but also physics-based appearance editing of materials and illumination .\u2026\n\n## Fusing RGBD Tracking and Segmentation Tree Sampling for Multi Hypothesis Volumetric Segmentation\n\nThe key challenge is estimating the segment boundaries of (partially) occluded objects, which areinherently ambiguous when considering only a single frame . We propose Multihypothesis Segmentation Tracking (MST), a novel method forvolumetric segmentation in changing scenes . MST outperforms baselines in all tested scenes, showing it outperforms\u00a0baselines\u00a0in all tests .\u2026\n\n## TL DR Out of Context Adversarial Text Summarization and Hashtag Recommendation\n\nThis paper presents Out-of-Context Summarizer, a tool that takes arbitrarypublic news articles out of context by summarizing them to coherently fiteither a liberal- or conservative-leaning agenda . 
The tool suggests hashtag keywords to bolster the polarization of the summary, incase one is inclined to take it to Twitter, Parler or other platforms fortrolling .\u2026\n\n## Topic Scaling A Joint Document Scaling Topic Model Approach To Learn Time Specific Topics\n\nThis paper proposes a new methodology to study sequential corpora by implementing a two-stage algorithm that learns time-based topics with respect to a scale of document positions and introduces the concept of Topic Scaling . The first stageranks documents using Wordfish, a Poisson-based document scaling method, toestimate document positions that serve, in the second stage, as a dependent variable to learn relevant topics via a supervised Latent Dirichlet Allocation .\u2026\n\n## Ultra Reliable Indoor Millimeter Wave Communications using Multiple Artificial Intelligence Powered Intelligent Surfaces\n\nA novel framework for guaranteeing ultra-reliable millimeterwave (mmW) communications using multiple artificial intelligence (AI)-enabledreconfigurable intelligent surfaces (RISs) is proposed . The use of multipleAI-powered RISs allows changing the propagation direction of the signalstransmitted from a mmW access point (AP) thereby improving coverage for non-line-of-sight (NLoS) areas .\u2026\n\n## Real Time Global Illumination Using OpenGL And Voxel Cone Tracing\n\nVoxel Cone Tracing, as proposed by Cyril Crassinet al. in 2011, makes use of mipmapped 3D textures containing a voxelizedrepresentation of an environments direct light component to trace diffuse,specular and occlusion cones in linear time to extrapolate a surface fragmentsindirect light emitted towards a given photo-receptor .\u2026\n\n## High Dimensional Differentially Private EM Algorithm Methods and Near Optimal Statistical Guarantees\n\nIn this paper, we develop a framework to design differentiallyprivate expectation-maximization (EM) algorithms in high-dimensional latent variable models . 
We propose a near rate-optimal EM algorithm for low-dimensional latent variable models in this setting. Simulation studies and real data analysis are conducted to support our results.

## Enriched Music Representations with Multiple Cross-Modal Contrastive Learning

Deep learning is commonly used to obtain representations using various sources of information, such as the audio, interactions between users and songs, or associated genre metadata. In this paper, we present a novel approach that combines multiple types of information related to music using cross-modal contrastive learning.

## Quantum Case-Based Reasoning (qCBR)

Case-Based Reasoning (CBR) is an artificial intelligence approach to problem-solving with a good record of success. This article proposes using quantum computing to improve some of the key processes of CBR, defining a Quantum Case-Based Reasoning paradigm. The focus is set on designing and implementing a qCBR based on the variational principle that improves its classical counterpart in terms of average accuracy, scalability, and tolerance to overlapping.

## GDPR-Compliant Blockchains: A Systematic Literature Review

Multiple paradoxes between blockchains and GDPR have been highlighted in the recent literature. This article aims to conduct a systematic literature review on GDPR-compliant blockchains. The findings synthesized that the blockchain-relevant GDPR articles can be categorized into six major groups, including data deletion and modification.

## Two Truths and a Lie: Exploring Soft Moderation of COVID-19 Misinformation with Amazon Alexa

In this paper, we analyzed the perceived accuracy of COVID-19 vaccine Tweets when they were spoken back by a third-party Amazon Alexa skill. We mimicked the soft moderation that Twitter applies to misinformation content in both forms of warning covers and warning tags to investigate whether the third-party skill could affect how and when users heed these warnings.

## Perspective, Survey and Trends: Public Driving Datasets and Toolsets for Autonomous Driving Virtual Test

Autonomous driving virtual testing has recently gained increasing attention compared with closed-loop testing in real scenarios. The availability and quality of autonomous driving datasets and toolsets are the premise to diagnose the autonomous driving system bottlenecks and improve the system performance.

## Integrated optimization of heterogeneous network management and the elusive role of macrocells

We consider heterogeneous wireless networks in the physical interference model and introduce a new formulation of the optimization problem underlying their management. This formulation targets the minimization of power consumption by integrating base-station activation and many-to-many associations into the same mixed-integer nonlinear programming (MINLP) problem.
package schema

import (
	"bytes"
	"encoding/binary"

	"go.uber.org/zap"

	"go.etcd.io/etcd/server/v3/auth"
	"go.etcd.io/etcd/server/v3/storage/backend"
)

const (
	// revBytesLen is the byte length of a big-endian encoded auth revision.
	revBytesLen = 8
)

var (
	authEnabled  = []byte{1}
	authDisabled = []byte{0}
)

type authBackend struct {
	be backend.Backend
	lg *zap.Logger
}

var _ auth.AuthBackend = (*authBackend)(nil)

// NewAuthBackend creates an auth.AuthBackend backed by the given storage backend.
func NewAuthBackend(lg *zap.Logger, be backend.Backend) *authBackend {
	return &authBackend{
		be: be,
		lg: lg,
	}
}

// CreateAuthBuckets creates the buckets holding the auth state, users, and roles.
func (abe *authBackend) CreateAuthBuckets() {
	tx := abe.be.BatchTx()
	tx.LockOutsideApply()
	defer tx.Unlock()
	tx.UnsafeCreateBucket(Auth)
	tx.UnsafeCreateBucket(AuthUsers)
	tx.UnsafeCreateBucket(AuthRoles)
}

func (abe *authBackend) ForceCommit() {
	abe.be.ForceCommit()
}

func (abe *authBackend) ReadTx() auth.AuthReadTx {
	return &authReadTx{tx: abe.be.ReadTx(), lg: abe.lg}
}

func (abe *authBackend) BatchTx() auth.AuthBatchTx {
	return &authBatchTx{tx: abe.be.BatchTx(), lg: abe.lg}
}

type authReadTx struct {
	tx backend.ReadTx
	lg *zap.Logger
}

type authBatchTx struct {
	tx backend.BatchTx
	lg *zap.Logger
}

var _ auth.AuthReadTx = (*authReadTx)(nil)
var _ auth.AuthBatchTx = (*authBatchTx)(nil)

func (atx *authBatchTx) UnsafeSaveAuthEnabled(enabled bool) {
	if enabled {
		atx.tx.UnsafePut(Auth, AuthEnabledKeyName, authEnabled)
	} else {
		atx.tx.UnsafePut(Auth, AuthEnabledKeyName, authDisabled)
	}
}

// UnsafeSaveAuthRevision stores the auth revision as a big-endian uint64.
func (atx *authBatchTx) UnsafeSaveAuthRevision(rev uint64) {
	revBytes := make([]byte, revBytesLen)
	binary.BigEndian.PutUint64(revBytes, rev)
	atx.tx.UnsafePut(Auth, AuthRevisionKeyName, revBytes)
}

func (atx *authBatchTx) UnsafeReadAuthEnabled() bool {
	arx := &authReadTx{tx: atx.tx, lg: atx.lg}
	return arx.UnsafeReadAuthEnabled()
}

func (atx *authBatchTx) UnsafeReadAuthRevision() uint64 {
	arx := &authReadTx{tx: atx.tx, lg: atx.lg}
	return arx.UnsafeReadAuthRevision()
}

func (atx *authBatchTx) Lock() {
	atx.tx.LockInsideApply()
}

func (atx *authBatchTx) Unlock() {
	atx.tx.Unlock()
}

func (atx *authReadTx) UnsafeReadAuthEnabled() bool {
	_, vs := atx.tx.UnsafeRange(Auth, AuthEnabledKeyName, nil, 0)
	if len(vs) == 1 {
		if bytes.Equal(vs[0], authEnabled) {
			return true
		}
	}
	return false
}

func (atx *authReadTx) UnsafeReadAuthRevision() uint64 {
	_, vs := atx.tx.UnsafeRange(Auth, AuthRevisionKeyName, nil, 0)
	if len(vs) != 1 {
		// this can happen in the initialization phase
		return 0
	}
	return binary.BigEndian.Uint64(vs[0])
}

func (atx *authReadTx) Lock() {
	atx.tx.RLock()
}

func (atx *authReadTx) Unlock() {
	atx.tx.RUnlock()
}
Gyula Kosice, born Fernando Fallik (26 April 1924, Košice – 25 May 2016, Buenos Aires), was an Argentine sculptor, plastic artist, theorist, and poet, a forerunner of kinetic and light art. He was born into a Hungarian family in Košice. At the age of four he emigrated with his parents to Argentina. Gyula adopted the name of his native city as his artistic name. He was one of the founders of abstract non-figurative art in Latin America. He held 40 solo and 500 group exhibitions around the world. He is also the author of 15 books of essays and poems. In 2005 he turned his workshop into a museum.
References
External links
Slovak sculptors
Born in 1924
Born on 26 April
Born in Košice
Died on 25 May
Died in 2016
Men
URL: https://www.imrpress.com/journal/jin/19/3/10.31083/j.jin.2020.03.196

## [18F] FDOPA PET may confirm the clinical diagnosis of Parkinson's disease by imaging the nigro-striatal pathway and the sympathetic cardiac innervation: Proof-of-concept study

Open Access Short Communication. J. Integr. Neurosci. 2020, 19(3), 489–494; https://doi.org/10.31083/j.jin.2020.03.196
Submitted: 18 June 2020 | Revised: 14 July 2020 | Accepted: 24 July 2020 | Published: 30 September 2020
This is an open access article under the CC BY 4.0 license (https://creativecommons.org/licenses/by/4.0/).

1 Department of Nuclear Medicine, Tel-Aviv Sourasky Medical Center, Tel-Aviv, 6423906, Israel
2 Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv, 6997801, Israel
3 Movement Disorders Unit, Neurological Institute, Tel-Aviv Sourasky Medical Center, Tel-Aviv, 6423906, Israel
4 Sagol School of Neurosciences, Tel-Aviv University, Tel-Aviv, 6997801, Israel
*Correspondence: jonathanku@tlvmc.gov.il (Jonathan Kuten)
Current Address: Department of Neurology, Meir Medical Center, Kfar Saba, 4428164, Israel

Abstract

Autonomic involvement, including cardiac denervation, may precede the motor symptoms of Parkinson's disease by several years. L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine is a positron emitter and a true analog of L-dopa, used in clinical practice to assess striatal dopaminergic integrity. The present study aimed to assess the feasibility of evaluating cardiac sympathetic denervation in Parkinson's disease patients using L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine positron emission tomography/computed tomography. Patients referred for an L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine positron emission tomography/computed tomography between July 2015 and May 2017 to evaluate striatal presynaptic dopaminergic integrity underwent a heart positron emission tomography scan following a brain positron emission tomography scan. L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine uptake in the left ventricle was quantified using Carimas™ software and compared between patients with and without Parkinson's disease. The area under the receiver operating characteristic curve was used to evaluate the ability of the left ventricular mean standardized uptake value to discriminate between patients with Parkinson's disease and those with other extrapyramidal syndromes. Seventy-six patients were included, of whom 52 were diagnosed with Parkinson's disease. The mean L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine left ventricular mean standardized uptake value was lower in the Parkinson's disease patients than in the non-Parkinson's disease patients (1.08 ± 0.21 vs. 1.24 ± 0.32, P = 0.015). The left ventricular mean standardized uptake value was able to discriminate between Parkinson's disease and non-Parkinson's disease patients (area under the receiver operating characteristic curve = 0.641, P = 0.049). In conclusion, quantification of cardiac L-3,4-dihydroxy-6-[18F]fluoro-phenylalanine uptake may be able to differentiate between patients with and without Parkinson's disease. Validation of this finding in more substantial, prospective trials is warranted.

Keywords: Parkinson's disease; positron emission tomography; myocardium; neuroimaging
Figures: Fig. 1.
In her work Sherin uses various combinations of collage, collagraph, relief printing, drypoint, carborundum, chine collé, and monoprint techniques. She has evolved from her traditional roots in lithography and intaglio to the exclusive use of plastic materials and direct, non-chemically-mediated methods. Her works on paper engage in a dynamic dialogue among painting, printmaking, and collage. Working on an etching press, she approaches printmaking with the directness of a painter and a collage artist's eye for contrasts. Her rich and complex prints evolve from the multiple layering of collagraphic elements. Some prints offer further sharp spatial or clashing contrasts through her method of slicing prints apart and reassembling them into new and contrasting wholes.
Here is a link to the making of a carborundum print.
\section{Lower Bound on the Rate-Distortion Function} \label{sec:slb}
Based on the parametric representation of $R(D)$ in \cite[Theorem 2.3]{cs74},
a Shannon lower bound for rectifiable measures \cite[Definition 2.59]{amfupa00} as reference measures was reported recently in \cite[Theorem 55]{kopirihl16}.
We now extend this bound to general (not necessarily rectifiable) reference measures $\mu$.
\begin{lem}\label{thm.ko17}
Consider a random variable $X$ distributed on the measure space $(\setX,\colX,\mu)$, a measurable space $(\setY,\colY)$, and a distortion function $\rho\colon \setX\times\setY\to[0,\infty]$ satisfying
\begin{enumerate}
\renewcommand{\theenumi}{\roman{enumi})}
\renewcommand{\labelenumi}{\roman{enumi})}
\item $\inf_{y\in\setY}\rho(x,y)=0$ for all $x\in\setX$, and \label{cond:infzero}
\item there exists a finite set $\setB\subseteq\setY$ such that
$
\opE\mleft[\min_{y\in\setB}\rho(X,y)\mright]<\infty
$. \label{cond:refpoint}
\end{enumerate}
Suppose that $\mu$ is a reference measure for $X$ and
let $D_0:=\inf\{D\geq 0:R(D)<\infty\}$.
Then, $R(D)\geq R_{\text{SLB}}(D)$ for all $D\in(D_0,\infty)$, where
\begin{align}
R_{\text{SLB}}(D)= h_\mu(X)-\inf_{s\geq 0}\mleft(sD +\log \nu(s)\mright)\label{eq:SLB}
\end{align}
with
\begin{equation}
\nu(s)=\sup_{y\in\setY}\int e^{-s\rho(x,y)} \mathrm d \mu(x). \label{eq:defns1}
\end{equation}
\end{lem}
For discrete $X$ of finite entropy, $\mu$ the counting measure, and $\sum_{x\in\setX}e^{-s\rho(x,y)}$ independent of $y$ for all $s>0$, Lemma \ref{thm.ko17} recovers the Shannon lower bound for discrete random variables reported in \cite[Lemma 4.3.1]{gr90}.
For $X$ continuous, $\mu$ the Lebesgue measure, $\setX=\setY=\reals^d$, and $\rho(x,y)=\rho(x-y)$, Lemma \ref{thm.ko17} recovers the Shannon lower bound for continuous random variables \cite[Equation 4.6.1]{gr90}, which
can be evaluated explicitly for $\rho(x,y)=\lVert x-y\rVert^k_\mathrm s$ with $k>0$, leading to the classical form of the Shannon lower bound \cite[Section VI]{yatagr80}
\begin{equation} \label{eq:shlbclassic}
R_{\text{SLB}}(D) =
h(X) +\log\mleft(\frac{\mleft(\frac{d}{kD}\mright)^\frac{d}{k}}{V_d\, \Gamma\mleft(\frac{d}{k}+1\mright)}\mright)-\frac{d}{k}.
\end{equation}
Here, $V_d$ is the Lebesgue measure of the unit ball with respect to the semi-norm $\lVert\,\cdot\,\rVert_\text{s}$.
What makes the explicit expression \eqref{eq:shlbclassic} possible is the following
simplification of $\nu(s)$ in \eqref{eq:defns1} for difference distortion functions $\rho(x,y)=\rho(x-y)$ and translation invariant reference measures $\mu$, namely
\begin{align}
\nu(s)
&= \sup_{y\in\setY}\int e^{-s\rho(x-y)} \,\mathrm d \mu(x)\\
&= \int e^{-s\rho(x)} \,\mathrm d \mu(x),\label{eq:defns1b}
\end{align}
which can be evaluated explicitly for $\rho(x,y)=\lVert x-y\rVert^k_\mathrm s$ with $k>0$ by changing variables to polar coordinates.
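For completeness, we sketch this polar-coordinate computation. Since the set $\{x\in\reals^d:\lVert x\rVert_\text{s}\leq r\}$ has Lebesgue measure $V_d\, r^d$ by homogeneity of the semi-norm, we obtain
\begin{align}
\nu(s)
&=\int_{\reals^d} e^{-s\lVert x\rVert_\text{s}^k}\,\mathrm d x
= d\,V_d\int_0^\infty r^{d-1}e^{-sr^k}\,\mathrm d r\\
&=V_d\,\Gamma\mleft(\frac{d}{k}+1\mright)s^{-\frac{d}{k}},
\end{align}
where the last step follows from the change of variables $t=sr^k$. The infimum of $sD+\log\nu(s)$ over $s\geq 0$ is then attained at $s^\ast=\frac{d}{kD}$, and inserting $s^\ast$ into \eqref{eq:SLB} yields \eqref{eq:shlbclassic}.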
Unfortunately, for $X$ of general distribution and for general distortion functions, $\nu(s)$ in \eqref{eq:defns1} cannot be further simplified, which precludes an explicit expression for $R_{\text{SLB}}(D)$.
However, if the reference measure $\mu$ is $\rho^{1/k}$-subregular, then we can
upper-bound $\nu(s)$. This leads to a lower bound on $R(D)$ that is explicit
up to a parameter obtained by solving a
convex optimization problem in a nonnegative real variable.
The corresponding formal statement is as follows.
\begin{thm}\label{thm:new}
Consider a random variable $X$ distributed on the measure space $(\setX,\colX,\mu)$, a measurable space $(\setY,\colY)$, and a distortion function $\rho\colon \setX\times\setY\to[0,\infty]$ satisfying
Properties \ref{cond:infzero} and \ref{cond:refpoint} stated in Lemma \ref{thm.ko17}.
Suppose that $\mu$ is a $\rho^{1/k}$-subregular reference measure for $X$ of dimension $m$ satisfying \eqref{eq:subregularity} with
$\delta_0\in(0,\infty]$ and $c>0$,
and
let $D_0:=\inf\{D\geq 0:R(D)<\infty\}$.
Suppose further that either $\delta_0=\infty$ or $\mu(\setX)<\infty$.
Then,
\begin{align}\label{eq:toshow1}
R_{\text{SLB}}(D)&\geq R_{\text{L}}(D)\quad\text{for all $D>D_0$},
\end{align}
where $R_{\text{L}}(D)$ is given by
\begin{align}
&R_{\text{L}}(D) =\\
&\begin{cases}
h_\mu(X) +\log\mleft( \frac{\mleft(\frac{m}{kD} \mright)^\frac{m}{k}}{c\,\Gamma\mleft(\frac{m}{k}+1\mright)}\mright) -\frac{m}{k}
&\quad\text{if}\ c\geq \mu(\setX)\delta_0^{-m}\\
h_\mu(X) -\min_{s\geq 0}q(s,D)&\quad\text{else},
\end{cases}\label{eq:SLB1}
\end{align}
where
\begin{equation}
q(s,D)= s\delta_0^{-k}D + p(s)\label{eq:qs}
\end{equation}
with
\begin{equation}
p(s)=\log\mleft(\frac{\mu(\setX)\Gamma\mleft(\frac{m}{k}+1\mright)-\mleft(\mu(\setX)-\delta_0^m c\mright)\gamma\mleft(\frac{m}{k}+1,s\mright)}{s^\frac{m}{k}}\mright).\label{eq:ks}
\end{equation}
For every $D>0$, the function $q(\cdot,D)$ is strictly convex on $\reals_+$ and attains its unique minimum at $s_0$ defined (implicitly) through $\delta_0^{k}\,p^\prime(s_0)=-D$.
\end{thm}
The lower bound $R_{\text{L}}(D)$ in \eqref{eq:SLB1} is explicit in the regime $c\geq \mu(\setX)\delta_0^{-m}$; for $c<\mu(\setX)\delta_0^{-m}$, it is explicit up to a parameter obtained by solving a convex optimization problem in a nonnegative real variable.
As the lower bound $R_{\text{L}}(D)$ is obtained from $R_{\text{SLB}}(D)$ in \eqref{eq:SLB} by upper-bounding $\nu(s)$ in \eqref{eq:defns1} using subregularity of the reference measure $\mu$, it follows that $R_{\text{L}}(D)=R_{\text{SLB}}(D)$ whenever the reference measure satisfies the subregularity condition with equality and $\delta_0=\infty$.
Specifically, we have equality in the following special case.
\begin{cor}
Consider a continuous random variable $X$ distributed on $\reals^d$ and of finite differential entropy.
Suppose that $\rho(x,y)=\lVert x-y\rVert_\text{s}^k$ with $k>0$. Then, $R_{\text{L}}(D)=R_{\text{SLB}}(D)$ for all $D\geq D_0$.
\end{cor}
\section{Examples}
To illustrate the generality of Theorem \ref{thm:new}, we consider two specific examples of random variables,
namely a random variable distributed uniformly on a manifold, specifically the unit circle, and a random variable distributed uniformly on a self-similar set, specifically the middle third Cantor set.
\begin{exa}(Uniform distribution on the unit circle)\label{ex:S1A}
Let $\setX=\setY=\reals^2$ be equipped with the Borel $\sigma$-algebra and the distortion function $\rho(x,y)=\lVert x-y\rVert^2_2$, and take $X$
distributed uniformly on the unit circle $\setS_1\subseteq\reals^2$,
i.e., $\mu_X=\colH^m|_{\setS_1}/\colH^m(\setS_1)$.
We first establish the subregularity condition \eqref{eq:subregularity} for $\mu=\mu_X$, $k=2$, and $m=1$.
It turns out that (cf. Figure \ref{fig:S1a})
\begin{figure}[tb]
\begin{center}
\begin{tikzpicture}[scale=2]
\newcommand*{\rechterWinkel}[3]{
\draw[shift={(#2:#3)}] (#1) arc[start angle=#2, delta angle=90, radius = #3];
\fill[shift={(#2+45:#3/2)}] (#1) circle[radius=1.25\pgflinewidth];
}
\draw (0,0) circle(1);
\draw ({sqrt(1-0.5^2)},0) circle(0.5);
\draw[arrows=->](0,-1.3)--(0,1.3);
\draw[arrows=->](-1.5,0)--(2.1,0);
\draw[thick,blue,dashed]({sqrt(1-0.5^2)},0)--({sqrt(1-0.5^2)},0.5);
\draw[thick,blue,dashed](0,0)--({sqrt(1-0.5^2)},0.5);
\draw[arrows=->]({sqrt(1-0.5^2)},0)--({sqrt(1-0.5^2)+0.35},{sqrt(0.5^2-0.35^2)});
\draw[ultra thick, red] (1,0) arc (0:30:1);
\draw[ultra thick, red] (1,0) arc (0:-30:1);
\filldraw[color=black] ({sqrt(1-0.5^2)},0) circle(0.05);
\put (42,-8){$x$};
\put (72,22){$\delta$};
\put (58,-15){\color{red}$\alpha$};
\put (-48,48){$\setS_1$};
\color{blue}
\rechterWinkel{{sqrt(1-0.5^2)},0}{90}{.2}
\end{tikzpicture}
\caption{For fixed $\delta<1$, the maximum Hausdorff measure of the arc $\alpha(x,\delta)=\setS_1\cap\setB_{\lVert\,\cdot\,\rVert_2}(x,\delta)$ is $\colH^1(\alpha(x,\delta))=2\arcsin(\delta)$, which is achieved for any $x\in \reals^2$ satisfying $\lVert x\rVert_2=\sqrt{1-\delta^2}$.
\label{fig:S1a}}
\end{center}
\end{figure}
\begin{align}
\mu_X\mleft(\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta \big)\mright)\label{eq:subpre1}
&=\mu_X(\{y\in\reals^2:\lVert y-x\rVert_2\leq \delta\})\\
&=\frac{\colH^1(\{y\in\setS_1:\lVert y-x\rVert_2\leq \delta\})}{2\pi}\\
&\leq \frac{\arcsin(\delta)}{\pi}\label{eq:subpre2}
\end{align}
for all $\delta\in(0,1]$ and $x\in\reals^2$.
Since $\arcsin(x)/x$ is monotonically increasing on $(0,1)$,
we can upper-bound
$\arcsin(\delta)\leq \delta \frac{\arcsin(\hat{\delta})}{\hat{\delta}}$ for all $\delta\in(0,\hat{\delta})$ and $\hat{\delta}\in (0,1]$.
Therefore, \eqref{eq:subpre1}--\eqref{eq:subpre2} leads to the family of subregularity conditions
\begin{align}\label{eq:measupball}
\mu_X\mleft(\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta\big)\mright)
&\leq \frac{\arcsin(\hat{\delta})}{\pi\hat{\delta}} \delta
\end{align}
for all $x\in\reals^2$ and $\delta\in(0,\hat\delta)$, parametrized by $\hat{\delta}\in (0,1]$.
For $\mu=\mu_X$, $m=1$, $k=2$, $\delta_0=\hat \delta\in(0,1]$, and $c=\arcsin(\hat \delta)/(\pi\hat\delta)$
and hence $c< \mu_X(\setX)/\delta_0=1/\delta_0$, the lower bound in \eqref{eq:SLB1} is given by
\begin{align}
&R^{(\hat\delta)}_{\text{L}}(D):=\\
&-\frac{s_0}{\hat\delta^{2}}D
-\log\mleft(\Gamma\mleft(\frac{3}{2}\mright)-\mleft(1-\frac{\arcsin(\hat \delta)}{\pi}\mright)\gamma\mleft(\frac{3}{2},s_0\mright)\mright)\\
&+\frac{1}{2}\log s_0\quad\text{for all $D>0$},
\end{align}
where $s_0$ is the unique solution of
\begin{align}
\frac{\hat\delta^2}{2s_0}+\frac{\hat \delta^2s_0^\frac{1}{2}e^{-s_0}}{\frac{\Gamma\mleft(\frac{3}{2}\mright)}{1-\frac{\arcsin(\hat \delta)}{\pi}}-\gamma\mleft(\frac{3}{2},s_0\mright)}=D.
\end{align}
Finally, we set
\begin{equation}\label{eq:SLB1S1}
R_{\text{L}}(D)=\max_{\hat\delta\in(0,1]}R^{(\hat\delta)}_{\text{L}}(D)\quad\text{for all $D>0$}.
\end{equation}
The result of the maximization in \eqref{eq:SLB1S1} carried out numerically is depicted in Figure~\ref{fig.bounds} along with the numerically evaluated Shannon lower bound $R_{\text{SLB}}(D)$ in \eqref{eq:SLB} from \cite[Section X.C]{kopirihl16}. It can be seen that
$R_{\text{L}}(D)$ approaches $R_{\text{SLB}}(D)$ as $D\to 0$.
\begin{figure}[tb]
\centering
\resizebox{1.0\linewidth}{!}{
\begin{tikzpicture}
\begin{axis}[
xmode=log,
ymin=1, ymax=5,
xmin=0.0001, xmax=1/3,
tick label style={font=\small},
ylabel style={rotate=-90},
grid=major,
width=10cm, height=10cm,
grid = major,
grid style={gray!30},
axis background/.style={fill=white},
ylabel={$R$},
xlabel={$D$},
tick align=outside,
legend entries={
$R_\text{SLB}(D)$,
$R_\text{L}(D)$},
legend style={legend pos=north east, font=\small}
]
\addplot+[darkgreen, no markers, dashed, line width=1pt] table[x expr=\thisrowno{0},y expr=\thisrowno{1}] {newshlb.dat};
\addplot+[blue, no markers, line width=1pt] table[x expr=\thisrowno{0},y expr=(\thisrowno{1})] {shanlower2red.dat};
\end{axis}
\end{tikzpicture}
}
\caption{
The Shannon lower bound $R_\text{SLB}(D)$ evaluated numerically in \cite[Section X.C]{kopirihl16} and the lower bound ${R_\text{L}}(D)$ in \eqref{eq:SLB1S1} for
$X$ distributed uniformly on the unit circle.\label{fig.bounds}}
\end{figure}
\end{exa}
To prepare the ground for the second example, we need some preliminaries on contracting similarities; we follow the exposition in \cite{frheolro15}.
A mapping $s\colon \reals^d\to\reals^d$ is called a contracting similarity if there exists a $\kappa \in (0,1)$, referred to as contraction parameter, such that
\begin{align}
\lVert s(\vecu)-s(\vecv)\rVert_2= \kappa\lVert\vecu-\vecv\rVert_2\quad \text{for all}\ \vecu,\vecv\in\reals^d. \label{eq:contractions}
\end{align}
For $i\in\setI:=\{1,\dots,|\setI|\}$, consider contracting similarities $s_i\colon \reals^d\to\reals^d$ with corresponding contraction parameters $\kappa_i\in(0,1)$.
By \cite[Theorem 9.1]{fa90}, there exists a unique self-similar set
\begin{equation}
\setK=\bigcup_{i\in\setI} s_i(\setK)\,\subseteq\reals^d.
\end{equation}
Let $\setI^\ast=\bigcup_{j\in\naturals}\setI^j$.
For every $\alpha=(i_1,\dots,i_j)\in\setI^\ast$, we set $\bar\alpha=(i_1,\dots,i_{j-1})\in\setI^\ast\cup\{\omega\}$
with $\omega$ denoting the empty sequence of length zero.
We designate the identity mapping on $\reals^d$ by $s_\omega$, set $\kappa_\omega=1$,
and define
\begin{align}
s_\alpha&=s_{i_1}\circ s_{i_2}\circ\dots\circ s_{i_j}\\
\kappa_\alpha&=\kappa_{i_1}\kappa_{i_2}\dots \kappa_{i_j}\,
\end{align}
for all $\alpha\in\setI^\ast$.
It follows directly that $s_\alpha$ is a contracting similarity with contraction parameter $\kappa_\alpha$ for all $\alpha\in\setI^\ast$.
Finally, for every $\delta>0$ and $x\in\setX$, let
\begin{align}
\setJ_\delta&=\{\alpha\in\setI^\ast:\kappa_\alpha\leq \delta<\kappa_{\bar\alpha}\}\\
\setJ_\delta(x)&=\mleft\{\alpha\in\setJ_\delta:\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta \big)\cap s_\alpha(\setK)\neq\emptyset\mright\}. \label{eq:setJd}
\end{align}
The following result will allow us to establish subregularity for random variables distributed uniformly on self-similar sets.
\begin{lem}\cite[Theorem 2.1]{frheolro15}\label{lem:frheolro15}
For $i\in\setI:=\{1,\dots,|\setI|\}$, consider contracting similarities $s_i\colon \reals^d\to\reals^d$ with contraction parameters $\kappa_i\in(0,1)$. Let
\begin{equation}
\setK=\bigcup_{i\in\setI} s_i(\setK)
\end{equation}
be the corresponding self-similar set and let $m$ be the similarity dimension given by the unique solution of
\begin{equation}
\sum_{i\in\setI} \kappa_i^m=1.
\end{equation}
Then,
\begin{align}
\colH^m\mleft(\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta\big)\mright)
&\leq \colH^m(\setK)|\setJ_\delta(x)|\delta^m\label{eq:cantorsub2a}
\end{align}
for all $x\in\reals^d$ and $\delta\in (0,\infty)$.
If, in addition, the contracting similarities satisfy the weak separation property \cite[Definition on p. 3533]{ze96} and $\setK$
is not contained in any hyperplane of dimension $d-1$, then $0<\colH^m(\setK)<\infty$
and
\begin{equation}
\colH^m\mleft(\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta\big)\mright)
\leq c \delta^m\quad \text{for all $x\in\reals^d$ and $\delta\in (0,\infty)$} \label{eq:cantorsub2}
\end{equation}
with $c>1$ and independent of $x$ and $\delta$.
\end{lem}
We are now ready to present our second example, namely, a random variable distributed uniformly on the middle third Cantor set.
\begin{exa}(Uniform distribution on the middle third Cantor set)\label{ex:CA}
Let $\setX=\setY=\reals$ be equipped with the Borel $\sigma$-algebra and the distortion function $\rho(x,y)=\lVert x-y\rVert_2^2$.
Consider the middle third Cantor set $\setC\subseteq [0,1]$, i.e., the self-similar set corresponding to $\setI=\{1,2\}$, $\kappa_1=\kappa_2=1/3$, $s_1(x)=x/3$, $s_2(x)=x/3+2/3$, and $m=\log 2/\log 3$.
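The value of the similarity dimension follows from the equation in Lemma \ref{lem:frheolro15}:
\begin{equation}
\sum_{i\in\setI}\kappa_i^m=2\mleft(\frac{1}{3}\mright)^{m}=1
\quad\Longleftrightarrow\quad
3^{m}=2
\quad\Longleftrightarrow\quad
m=\frac{\log 2}{\log 3}.
\end{equation}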
Since $0<\mathscr{H}^{\log 2/\log 3}(\setC)<\infty$ \cite[Example 4.5]{fa90}, we can take
$X$ distributed uniformly on $\setC$, i.e., $\mu_X=\colH^m|_\setC/\colH^m(\setC)$.
Next, we use \eqref{eq:cantorsub2a} in Lemma \ref{lem:frheolro15} to obtain a subregularity condition for $\mu=\mu_X$. To this end, it is first shown that $|\setJ_{\delta}(x)|\leq 3$ for all $\delta \in (0,1)$ and $x\in \reals$.
Note that
$\kappa_\alpha=3^{-j}$ for all $\alpha=(i_1,\dots,i_j)$ and $j\in\naturals_0$. Thus,
\begin{align}
\setJ_{\delta}
&=\{\alpha\in\setI^\ast:\kappa_\alpha\leq \delta <\kappa_{\bar\alpha}\}\\
&=\{\alpha:|\alpha|=j\}\quad\text{for all $\delta\in\big[3^{-j},3^{-j+1}\big)$ and $j\in\naturals$},
\end{align}
which implies $|\setJ_{\delta}(x)|\leq 3$ for all $\delta\in (0,1)$ and $x\in\reals$ (cf. Figure \ref{fig:cantor}).
\begin{figure}[tb]
\resizebox{0.77\linewidth}{!}{
\begin{tikzpicture}[scale=2]
\foreach \order in {0,...,4}
\draw[line width=0.5mm, yshift=-\order*10pt] l-system[l-system={cantor set, axiom=F, order=\order, step=100pt/(3^\order)}];
\put (220,-22){$|\alpha|=1$};
\put (220,-42){$|\alpha|=2$};
\put (220,-62){$|\alpha|=3$};
\put (220,-82){$|\alpha|=4$};
\put (10,0){\draw[{Arc Barb[]}-{Arc Barb[]}, ultra thick, red, dashed] (0,-20pt) -- (66pt,-20pt);};
\put (0,0){
\put (4,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
\put (18,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
};
\put (45,0){
\put (4,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
\put (18,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
};
\put (134,0){
\put (4,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
\put (18,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
};
\put (178,0){
\put (4,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
\put (18,-90){\draw [dotted, line width=0.3mm] (0pt,0pt) -- (0pt,-7pt);};
};
\end{tikzpicture}
}
\vspace*{10truemm}
\caption{Sets $s_\alpha([0,1])$ with $|\alpha|=j$ have length $3^{-j}$. At most three different sets $s_\alpha([0,1])$ with $|\alpha|=j$ intersect with an open interval of length $2(3^{-j+1})$.
\label{fig:cantor}}
\end{figure}
Therefore,
\eqref{eq:cantorsub2a} together with $m=\log 2/\log 3$ yields the subregularity condition
\begin{align}
\mu_X\mleft(\setB_{\lVert\,\cdot\,\rVert_2}\big(x,\delta\big)\mright)&\leq 3 \delta^\frac{\log 2}{\log 3}\quad \text{for all $x\in\reals$ and $\delta\in (0,\infty)$}. \label{eq:Cantorsub}
\end{align}
With \eqref{eq:Cantorsub}
the lower bound $R_\text{L}(D)$ in
\eqref{eq:SLB1} for $\mu=\mu_X$, $m=\log2/\log3$, $k=2$, $\delta_0=\infty$, and $c=3$
and hence $c\geq \mu_X(\setX)\delta_0^{-m}=0$ is given by
\begin{equation}\label{eq:SLB1aC}
R_{\text{L}}(D) =
\sigma \log \mleft(\frac{\sigma}{D} \mright)-\sigma
-\log
\mleft(3
\Gamma\mleft(\sigma+1\mright)
\mright)\quad\text{for all $D>0$},
\end{equation}
where $\sigma:=\log2/\log9$.
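To verify \eqref{eq:SLB1aC}, note that $h_{\mu_X}(X)=0$ since $\mathrm d\mu_X/\mathrm d\mu_X\equiv 1$, and that $\frac{m}{k}=\frac{\log 2}{2\log 3}=\frac{\log 2}{\log 9}=\sigma$. The first case of \eqref{eq:SLB1} therefore reduces to
\begin{align}
R_{\text{L}}(D)
&=\log\mleft(\frac{\mleft(\frac{\sigma}{D}\mright)^{\sigma}}{3\,\Gamma\mleft(\sigma+1\mright)}\mright)-\sigma\\
&=\sigma\log\mleft(\frac{\sigma}{D}\mright)-\sigma-\log\mleft(3\,\Gamma\mleft(\sigma+1\mright)\mright).
\end{align}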
\end{exa}
\bibliographystyle{IEEEtran}
\section{Introduction and Mathematical Setup}
This paper is concerned with a rate-distortion (R-D) theory for sequences of i.i.d. random variables with general distribution supported on general sets including manifolds and fractal sets.
Manifold structures are prevalent in data science, e.g., in compressed sensing \cite{bawa09,care09,capl11,albdekori18,ristbo15}, machine learning \cite{lz12}, image processing \cite{lufahe98,soze98}, and handwritten digit recognition \cite{hidare97}.
Fractal sets find application in image compression and in modeling of Ethernet traffic \cite{letawi94}.
R-D theory \cite{sh59,be71,gr90,grne98} is concerned with the characterization of ultimate limits on the discretization of sequences of random variables.
Specifically, let $(\setX,\colX)$ and $(\setY,\colY)$ be measurable spaces equipped with a measurable function $\rho\colon \setX\times\setY\to[0,\infty]$, henceforth called distortion function, and let $(X_i)_{i\in\naturals}$ be a sequence of random variables with the $X_i$ distributed on $\setX$.
For every $l\in\naturals$, one considers all measurable mappings $g_l\colon \setX^l \to \setY^l$ with $\lvert g_l(\setX^l) \rvert <\infty$, referred to as source codes of length $l$.
A pair $(R,D)$ of nonnegative real numbers is said to be {achievable} if, for sufficiently large $l\in\naturals$,
there exists a source code $g_l$ of length $l$ with $|g_l(\setX^l)| \leq \lfloor e^{lR}\rfloor$ and expected average distortion
\begin{align}\label{eq:leqd}
\opE\mleft[\frac{1}{l}\sum_{i=1}^l\rho(X_i,(g_l(X_1,\dots,X_l))_i)\mright] \leq D .
\end{align}
Suppose that $(\setX,\colX)$ and $(\setY,\colY)$ are standard spaces
(cf. \cite[Section 1.4]{gr11}) and consider a sequence $(X_i)_{i\in\naturals}$ of i.i.d. random variables that are distributed on $\setX$. The
(single-letter) R-D function is defined as
\begin{align}\label{eq:RD}
R(D):=
\inf_{Y:\,\opE[\rho(X,Y)]\,\leq\, D} I(X,Y),
\end{align}
where $Y$ is distributed on $(\setY,\colY)$, $X=X_1$, and $I(\cdot,\cdot)$ denotes mutual information.
If there exists a $y^*\in\setY$ with
$\opE\mleft[\rho(X,y^*)\mright]<\infty$,
then the R-D theorem \cite[Theorems 7.2.4 \& 7.2.5]{be71} states that
\begin{enumerate}
\renewcommand{\theenumi}{\roman{enumi})}
\renewcommand{\labelenumi}{\roman{enumi})}
\item
for every $D\geq 0$ with $R(D)<\infty$, $(R,D)$ is achievable for all $R>R(D)$, and
\item $(R,D)$ is not achievable for all $R<R(D)$.
\end{enumerate}
The function $ R(D)$ is difficult to characterize analytically in general, but
asymptotic results in terms of the R-D dimension of order $k>0$, defined as
$-(1/k)\lim_{D\to 0} R(D)/\log D$ if the limit exists, are available \cite{kade94}.
For discrete-continuous mixtures, the function $ R(D)$ is known explicitly up to a term that vanishes as $D\to 0$ \cite{ro88}.
For general distributions, only bounds on $ R(D)$ are available.
While upper bounds on $ R(D)$ can be obtained by evaluating $I(X,Y)$ for a specific $Y$ with $\opE[\rho(X,Y)]\leq D$,
lower bounds are notoriously hard to obtain.
The best-known lower bound is the Shannon lower bound for discrete random variables of finite entropy and with $\sum_{x\in\setX}e^{-s\rho(x,y)}$ independent of $y$ for all $s>0$ \cite[Section 4.3]{gr90}, and for continuous random variables of finite differential entropy and with difference distortion function $\rho(x-y)$ \cite[Section 4.6]{gr90}.
For continuous $X$ of finite differential entropy and distortion function $\rho(x-y)=\lVert x-y\rVert_\text{s}^k$, where $\lVert\,\cdot\,\rVert_\text{s}$ is a semi-norm and $k>0$, the Shannon lower bound is known explicitly \cite[Section VI]{yatagr80} and,
provided that $X$ satisfies a certain moment constraint, tight as $D\to 0$ \cite{liza94,ko16}.
Using Csisz\'ar's parametric representation of $ R(D)$ \cite{cs74}, a Shannon lower bound was reported recently in \cite[Theorem 55]{kopirihl16} for the class of $m$-rectifiable random variables \cite[Definition 11]{kopirihl16}, and
for general random variables in \cite[Theorem 2]{ko17}.
The bounds in \cite{kopirihl16,ko17} are, however, not explicit.
\emph{Contributions.}
We derive a lower bound $R_{\text{L}}(D)$ on the {R-D} function $ R(D)$ in \eqref{eq:RD}
for random variables $X$ of general distribution supported on general sets including manifolds and fractal sets.
The expression for $R_{\text{L}}(D)$ we get is explicit up to a parameter obtained by solving a convex optimization problem in a nonnegative real variable and, for continuous $X$ of finite differential entropy and distortion function $\rho(x-y)=\lVert x-y\rVert_\text{s}^k$, reduces to the classical Shannon lower bound reported in \cite{yatagr80}.
The only requirement for our lower bound to apply is the existence of a $\sigma$-finite reference measure $\mu$ for $X$ (i.e., a measure $\mu$ with $\mu_X\ll\mu$ and such that the generalized entropy $h_\mu(X)$ is finite) satisfying a certain subregularity condition.
This subregularity condition guarantees the existence of a $\delta_0>0$ such that the reference measure $\mu$ is not highly concentrated on balls of radii $\delta\in(0,\delta_0]$; it is satisfied, e.g., by uniform distributions
on regular sets of dimension $m$ in $\reals^d$ (cf. \cite[Section 12]{grlu00}).
Specific examples of regular sets of dimension $m$ are compact convex sets $\setK\subseteq\reals^m$ with $\operatorname{span}(\setK)=\reals^m$ \cite[Example 12.7]{grlu00}, surfaces of compact convex sets $\setK\subseteq\reals^{m+1}$ with $\operatorname{span}(\setK)=\reals^{m+1}$ \cite[Example 12.8]{grlu00},
$m$-dimensional compact $C^1$-submanifolds of $\reals^d$ \cite[Example 12.9]{grlu00},
self-similar sets of similarity dimension $m$ satisfying the weak separation property \cite[Theorem 2.1]{frheolro15}, and finite unions
of regular sets of dimension $m$ \cite[Lemma 12.4]{grlu00}.
To illustrate the wide applicability of our result, we evaluate the lower bound $R_{\text{L}}(X)$ for a random variable distributed uniformly on a manifold, namely, the unit circle, and for a random variable distributed uniformly on a self-similar set, namely, the middle third Cantor set.
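As a point of reference for the second example (a standard fact, not part of this paper's development), the middle third Cantor set is generated by two similarity maps of contraction ratio $1/3$, so its similarity dimension $m$ is the unique solution of $2\cdot(1/3)^m=1$, namely
\begin{align}
m=\frac{\log 2}{\log 3}\approx 0.6309.
\end{align}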
Proofs are omitted throughout due to space constraints.
\emph{Notation.}
Sets are designated by calligraphic letters, e.g., $\setA$, with $|\setA|$ denoting cardinality and
$\overline{\setA}$ closure.
$\sigma$-algebras are indicated by script letters, e.g., $\colX$,
and will throughout be assumed to contain all singleton sets.
For a measure space $(\setX,\colX,\mu)$ and a measurable set $\setA \in \colX$, we write $\mu|_{\setA}$ for the restriction of $\mu$ to $\setA$.
For a Borel measure $\mu$, the support $\operatorname{supp}(\mu)$ is the smallest closed set such that $\mu(\setX\mysetminus \operatorname{supp}(\mu))=0$.
We denote the $m$-dimensional Hausdorff measure by $\colH^m$ \cite[Definition 2.46]{amfupa00}.
For $\mu$ and $\nu$ defined on the same measurable space with
$\mu$ absolutely continuous with respect to $\nu$, expressed by $\mu\ll\nu$, we write $\mathrm d\mu/\mathrm d \nu$ for the Radon-Nikodym derivative of $\mu$ with respect to $\nu$.
The product measure of $\mu$ and $\nu$ is designated by $\mu\otimes\nu$.
Random variables distributed on general measurable spaces $(\setX,\colX)$ are denoted by capital letters, e.g., $X$, and $\mu_X$ is the distribution of $X$.
$\opE[\cdot]$ stands for the expectation operator.
If $X$ is distributed on the $\sigma$-finite measure space $(\setX,\colX,\mu)$ and of finite generalized entropy
\begin{align}
h_\mu(X)
&:=-\opE\mleft[\log \frac{\mathrm d\mu_X}{\mathrm d\mu}(X)\mright]
\end{align}
with $\mu_X\ll\mu$, then we call $\mu$ a reference measure for $X$.
For $X$ distributed on $(\setX,\colX)$ and $Y$ distributed on $(\setY,\colY)$, the mutual information between $X$ and $Y$ is
\begin{align}
I(X,Y):=\opE\mleft[\log \frac{\mathrm d\mu_{X,Y}}{\mathrm d(\mu_X\otimes\mu_Y)}(X,Y)\mright]
\end{align}
if $\mu_{X,Y}\ll\mu_X\otimes\mu_Y$, and $I(X,Y):=\infty$ else.
For $a>0$, the gamma function is defined by $\Gamma(a)=\int_0^\infty t^{a-1}e^{-t}\,\mathrm d t$.
For $a>0$ and $s\geq 0$, the lower incomplete gamma function is $\gamma(a,s)=\int_0^s t^{a-1}e^{-t}\,\mathrm d t$.
Norms on $\reals^d$ are denoted as $\lVert\,\cdot\,\rVert$, $\lVert\,\cdot\,\rVert_2$ stands for the Euclidean norm, and $\lVert\,\cdot\,\rVert_\text{s}$ refers to a general semi-norm.
For $a\in \reals$, we let $\lfloor a \rfloor$ be the greatest integer less than or equal to $a$.
For $a>0$, $\log a$ denotes the logarithm of $a$ taken to the base $e$.
We use the convention $0\cdot\infty=0$.
\section{The Subregularity Condition}
Our lower bound on the R-D function is valid for reference measures $\mu$ satisfying the following subregularity condition, which prevents $\mu$ from being highly concentrated on balls of small radii.
\begin{dfn}\label{def:subreg}
Let $(\setX,\colX,\mu)$ be a
measure space, $(\setY,\colY)$ a measurable space, $\rho\colon \setX\times\setY\to[0,\infty]$
a distortion function, $k>0$, and set $\setB_{\rho^{1/k}}\mleft(y,\delta\mright):=\{x\in\setX:\rho^{1/k}(x,y)<\delta\}$.
The measure $\mu$ is {$\rho^{1/k}$-subregular of dimension $m$} if there exist
constants $\delta_0\in(0,\infty]$ and $c>0$ such that
\begin{align}\label{eq:subregularity}
\mu\mleft(\setB_{\rho^{1/k}}\mleft(y,\delta\mright)\mright)\leq c\delta^m\quad\text{for all $y\in\setY$ and $\delta\in (0,\delta_0)$}.
\end{align}
The measure $\mu$ is {$\rho^{1/k}$-regular of dimension $m$} if there exist
constants $\delta_0\in(0,\infty]$ and $c^\prime,c>0$ such that
\begin{align}\label{eq:regularity}
c^\prime\delta^m
\leq \mu\mleft(\setB_{\rho^{1/k}}\mleft(y,\delta\mright)\mright)
\leq c\delta^m\quad\!\text{for all $y\in\setY$ and $\delta\in (0,\delta_0)$}.
\end{align}
\end{dfn}
Lebesgue measure on $\setX=\setY=\reals^d$ together with $\rho(x,y)=\lVert x-y\rVert_\mathrm{s}^k$ satisfies \eqref{eq:regularity} with $c^\prime=c$. Discrete measures do not satisfy \eqref{eq:subregularity}.
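When the semi-norm $\lVert\,\cdot\,\rVert_\text{s}$ is in fact a norm, the constant in this statement can be made explicit (a short verification for that special case): by translation invariance and the scaling property of Lebesgue measure $\lambda^d$,
\begin{align}
\lambda^d\mleft(\setB_{\rho^{1/k}}\mleft(y,\delta\mright)\mright)
=\lambda^d\mleft(\{x\in\reals^d:\lVert x-y\rVert_\text{s}<\delta\}\mright)
=\lambda^d\mleft(\setB_{\rho^{1/k}}\mleft(0,1\mright)\mright)\delta^d,
\end{align}
so \eqref{eq:regularity} holds of dimension $m=d$ with $c^\prime=c=\lambda^d(\setB_{\rho^{1/k}}(0,1))$ and $\delta_0=\infty$.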
For the particular choices $\setX=\reals^d$, $\lVert\,\cdot\,\rVert$ a norm on $\reals^d$, $\mu$ a Borel measure, and $\setY=\operatorname{supp}(\mu)$, $\lVert\,\cdot\,\rVert$-regularity of dimension $m$ agrees with regularity of dimension $m$ as introduced in \cite[Definition 12.1]{grlu00}.
A compact set $\setK\subseteq \reals^d$ with $0<\colH^m(\setK)<\infty$ is called regular of dimension $m$
if the measure $\colH^m|_\setK$ is $\lVert\,\cdot\,\rVert$-regular (and hence also $\lVert\,\cdot\,\rVert$-subregular) of dimension $m$ \cite[Definition 12.1]{grlu00}.
Specific examples of regular sets of dimension $m$ are
compact convex sets $\setK\subseteq\reals^m$ with $\operatorname{span}(\setK)=\reals^m$ \cite[Example 12.7]{grlu00}, surfaces of compact convex sets $\setK\subseteq\reals^{m+1}$ with $\operatorname{span}(\setK)=\reals^{m+1}$ \cite[Example 12.8]{grlu00},
$m$-dimensional compact $C^1$-submanifolds of $\reals^d$ \cite[Example 12.9]{grlu00},
self-similar sets of similarity dimension $m$ satisfying the weak separation property \cite[Theorem 2.1]{frheolro15}, and finite unions
of regular sets of dimension $m$ \cite[Lemma 12.4]{grlu00}.
If $\mu(\setX)<\infty$ and the subregularity condition \eqref{eq:subregularity} holds for some $c,\delta_0>0$, then $c$ can be modified to make \eqref{eq:subregularity} hold for $\delta_0=\infty$.
The formal statement is as follows.
\begin{lem}\label{lem:rho}
Let $(\setX,\colX,\mu)$ be a measure space with $\mu(\setX)<\infty$, $(\setY,\colY)$ a measurable space,
$\rho\colon \setX\times\setY\to[0,\infty]$
a distortion function, and $k>0$.
If there exist constants $c,\delta_0>0$ such that $\mu$ satisfies the subregularity condition \eqref{eq:subregularity}, then
\begin{align}\label{eq:global}
\mu\mleft(\setB_{\rho^{1/k}}\mleft(y,\delta\mright)\mright)\leq
\max(c,\mu(\setX)\delta_0^{-m})\delta^m
\end{align}
for all $y\in\setY$ and $\delta>0$.
\end{lem}
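While proofs are omitted throughout, this lemma follows from a two-case check, sketched here for completeness: for $\delta\in(0,\delta_0)$, the subregularity condition \eqref{eq:subregularity} yields the bound directly, and for $\delta\geq\delta_0$,
\begin{align}
\mu\mleft(\setB_{\rho^{1/k}}\mleft(y,\delta\mright)\mright)
\leq\mu(\setX)
=\mu(\setX)\delta_0^{-m}\delta_0^{m}
\leq\mu(\setX)\delta_0^{-m}\delta^{m}.
\end{align}
Taking the maximum of the constants $c$ and $\mu(\setX)\delta_0^{-m}$ gives \eqref{eq:global}.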
Day 1 (17 June 2018) Singapore – Shizuoka
Shinagawa Train Station
Sunpu Castle Park
Shizuoka Futaba
Various streets in Shizuoka
We landed at Haneda International Airport at about 9.50 am local time. We then had to take the Tokyo monorail to Shinagawa Station and Shinkansen (bullet train) to Shizuoka. After a long flight it was quite an experience, taking the public transport from one point to another with our luggage in tow. It was only when we got to our hotel in Shizuoka at around 2.45 pm that we were finally able to explore the city without our cumbersome luggage.
My first impression of Japan, based on my observations at Haneda Airport, is that Japan is extremely clean and efficient. At Shinagawa station, I noticed that most of the Japanese commuters were walking very quickly as though they were going to be late, even though it was a Sunday. I also noticed the precision the Japanese pride themselves on, as they ensure that trains arrive exactly on time.
~ Liana
Japan is a very clean country and its people are very polite. Although the country is very advanced, the people are very humble and kind to everyone. It was very easy and convenient travelling around as the train stations were close to each other. Sunpu Castle Park in Shizuoka was very pretty, with lots of flowers, making it very pleasing to the eyes. There was also a wide variety of food available to us and the people were very approachable. To top it off, the weather was a very cool 19 degrees Celsius.
~ Rovina
When I first arrived and got off the plane, I could already sense how quiet and simple the Japanese were. Nobody pushed past us forcefully, even in the busy train stations. Furthermore, on the trains, the Japanese would either be using their mobile devices, reading a book or catching up on sleep. Also, I hardly (if ever) saw any litter on the streets. Walking around Sunpu Castle Park, I felt the most relaxed I've been in a long time, with the trees, grass, perfect weather and simply watching the children play and interact with their family members and loved ones. The fact that it was litter-free contributed immensely to my positive experience.
~ Isabella
My first impression of Japan and Shizuoka is that they were very clean cities. The people in Shizuoka were very friendly and tried their best to explain things to us when they realised that we did not understand their initial statements. Shizuoka also struck me as a very neat and orderly city. For example, on the streets, the people would keep to one side, usually the left to facilitate the smooth flow of traffic along their walkways.
~ Wen Hui
Day 2 (18 June 2018) Shizuoka
Shizuoka Futaba Junior/Senior High School (Twinning Programme Day 1)
I think the school inculcates in its students a lot of good values; for example, I observed how the junior students always bow and greet their seniors. I also observed that the students are very respectful to their teachers, and I was told that if students knew that a member of a teacher's family was unwell, they would often drop in at the chapel to pray for that person. The students were very hospitable and warmly welcomed us when we went to our assigned classes. Almost every student opened up to speak to us and I felt very comfortable. It was as though I was in my own class in Singapore.
~ Eri Eliana
It was a really interesting experience this first day at the school as I met a lot of friendly staff and students, especially my buddies. They treated me very patiently and nicely, knowing that I did not understand what they were talking about in Japanese. My buddies would translate their conversations or class lessons to help me to understand what they or the teachers were talking about.
~ Joey Tang
One interesting thing that I learnt in Shizuoka was how they graciously put your needs before their own. For example, while I was queueing up to make waffles at dinner, my 2 buddies who were before me, offered their places to me. Although I declined the offer, my buddies waited patiently until my waffles were ready so that we could return to our dining table together.
~ Aryanee
I thought that everyone was sweet and nice. I waved to people I did not know and they all waved back. I also thought that the students were very accommodating, going out of their way to help me during lessons even when it would disrupt their own work.
~ Natalie
Day 3 (19 June 2018) Shizuoka - Yokohama
Shinkansen to Yokohama
The students are super nice and I made so many more friends over and above my assigned buddies. The other students in the class really tried their best to talk to us; for example, there was a classmate who really wanted to speak to me about BTS but was not very proficient in English, so she asked her friend to translate for her. She even gave me a 'post-it' with her email and Instagram account and told me that she would like to speak with me again. My buddies were also super nice as they bought me lots of gifts and really tried hard to talk to me; sometimes they had to look up a word in their Japanese-English dictionary just to explain something to me. Their determination and perseverance left a deep impression on me.
~ Lavanya
Having spent two days at Shizuoka Futaba High School, I have found the students to be very gracious hosts. They welcomed us so warmly that when it was time to bid them farewell, we found it a most difficult thing to do. I had bonded very well not only with my assigned buddies, but with their friends as well. One example of how gracious and genuine they were could be seen in their farewell for us. My buddies got us to stand in front of the whole class so that they could present us with their farewell gifts, which were a laminated group photo of the class taken the previous day and a thank-you card. This made me reflect on how we had hosted them when they visited us in March of this year. There is much we can learn from our gracious hosts in terms of welcoming visitors to our school. Their kindness has left a very deep impression on me and I will miss them a lot.
~ Mandy
Today was our last day with the students in Shizuoka Futaba High School, I was very touched by the actions of the teachers and students as they presented us with a memento of our visit to their school, a group photo with the class covered with hand written farewell messages. Quite a number of students shed tears as we parted ways. I was very grateful to have had these students as my buddies as we had formed strong bonds and connections very quickly over the last two days. After leaving Shizuoka, we headed to Yokohama where we spent the evening enjoying each other's company over dinner while enjoying the soft cool evening breeze and the amazing view of the sky over the city.
Day 4 (20 June 2018) Yokohama - Tokyo
Yokohama Futaba High School (Twinning Programme Day 3)
Mother Mathilde's Cemetery / Foreigners' Cemetery
Yamate Museum
I thought the Yokohama Futaba students were really enthusiastic and warm when we arrived. As we were having a school tour, we walked past a few classes; some of the students turned, waved at us and smiled brightly. They were really enthusiastic when I stepped into the class and most of them approached to talk to me. They were full of energy and were running around. It was a really different environment compared to Shizuoka. However, due to the lack of time, I did not get to know my buddy as much as I would have liked to. The teachers at the school were very kind and welcoming. The English teacher whose class we attended made the effort to mark our worksheets and give them back to us during lunch. I also learned a lot from the teachers.
~ Eri
The students were really welcoming; my buddies for example called my name when I came forward after the introduction. They linked our arms together and even called me cute. Yokohama Futaba looked really new and I was told that it was actually built many years ago. The teachers in the staffroom were also really nice.
~ Kristen
The teachers and students were very welcoming and friendly to me when they met me. The school is like an old convent compound. We walked through many interesting corridors. The classroom doors also look very different from ours as theirs is a sliding door. The teachers did their best to engage us in their lessons.
We went to the Yokohama Foreigners' Cemetery, where Mother Mathilde's resting place is. I'm glad we got to go, as this was my purpose for going on this trip. I was really interested in how Mother Mathilde set up the various schools in Singapore and even Japan. She put in great effort to build the schools and we really wanted to thank her for that. As I was praying at her grave, I thanked her from the bottom of my heart for all she had done.
~ Joey
Day 5 (21 June 2018) Tokyo (Cultural Immersion)
Harajuku district
Asakusa Shrine
Meiji Jingu was very traditional and it seemed like I had travelled back in time. There were many traditional rituals that are not so commonly practised in modern times. Being there was quite a spiritual experience. In contrast, Harajuku district, which was just across from the shrine, was noisy and filled with shops and people displaying the latest modern fashion trends. I felt that Harajuku appealed more to the younger generation. While Meiji Jingu and Harajuku were very different, the Asakusa Shrine area seemed to be a combination of the old and new. There were traditional crafts and food on offer in the shops leading up to the temple, but there were also trendy clothes and toys from popular children's TV shows for sale. It was an interesting day for me.
Walking through Meiji Jingu shrine was a wonderful experience as it was very different from modern, developed Tokyo. We were completely surrounded by nature, which was very peaceful and magical for me. It was interesting to enter the shrine, as we had to perform a cleansing ritual which included washing our hands and rinsing our mouths before proceeding into the grounds of the shrine. We also got to purchase charms for various purposes like warding off evil and passing exams. There was also an area where we could write our wishes on a piece of paper and place them in a box at the base of a sacred tree. We then visited Harajuku, which was chaotic with a large number of people walking along a narrow street. It was fun to explore the various shops along the street and also to try the famous crepes and other food in the area. Asakusa was equally if not more crowded than Harajuku, but we still enjoyed ourselves and sampled the traditional snacks on offer there.
I enjoyed the shrines most. Every time we went to a shrine, I felt very much at peace. I felt that it was a place everyone should visit at least once in their lifetime. I was very happy to see that within the shrines, there were places for visitors to connect with a higher power; for example, the Meiji Jingu shrine has a place for visitors to write their intentions and prayers, which would be deposited in front of a sacred tree. I found that really cool. While exploring the streets of Harajuku, I noticed a group of boys wearing cool outfits and giving out flyers. They were promoting their debut concert and were keen to invite anyone to their performance. Many local people took their flyers and listened to what they had to say. I noticed that they were really nice and respectful to each other. I hope Singaporeans can learn to support one another just like the Japanese people here in Harajuku.
Day 6 (22 June 2018) Tokyo
Denenchofu Futaba High School (Twinning Programme Day 4)
The school was very big and clean. The students were able to converse in English, which facilitated easier communication. I learnt more about Japanese school culture in Tokyo from this visit. The teachers whom I met were very nice, as they made the effort to engage us. The only drawback was the inability to spend more time with our buddies due to the time constraint. Nonetheless, I thoroughly enjoyed the English language classes, which were very fun and engaging. There was a lot of energy and enthusiasm in the classes I attended.
The students in Denenchofu had good general knowledge and seemed to have similar interests to us. They were very eager to speak to us the moment we stepped into the class. The teachers too, especially the native English teachers, were very lively and fun-loving, making their lessons very engaging. The behaviour of the students in my class was quite similar to our friends back in IJ. They were very energetic and the noise volume was similar to what we experience in Singapore. The school had very strict handphone rules, which stated that students could only use their phones after they had left the school.
I found the students in my class to be very humble and approachable. Although it took a while for them to warm up to me, when they did, we got along very well. During their Japanese class, the teacher asked me to introduce myself, and when I mentioned that I loved ramen, the class applauded for a good 2 minutes! To top it off, the same teacher ended her lesson 15 minutes early to allow a question and answer session which she dubbed, "ask Ary-san questions time." Denenchofu really reminds me of IJ with its loud, warm and friendly atmosphere. Although we did not have much time to bond with our buddies, I felt quite attached to them, and when the time came to say goodbye, my buddy presented me with a card and we exchanged many hugs.
The students in Denenchofu were very warm and inclusive. When I first stepped into their class, many of my classmates immediately came up to me and began talking to me. During the PE lesson, my classmates made the effort to include me in all their games and they would encourage me to try harder whenever I hit the ball badly or missed the ball completely. When I did manage to hit the ball, there were cheers and 'high fives'.
Day 7 (23 June 2018) Tokyo - Singapore
We left early for Haneda International Airport this morning to do some last minute shopping for gifts and souvenirs of our trip to Japan. Our flight departed for Singapore at 11.30 am local time and we reached Singapore at around 5.30 pm to be reunited with our families.
All in all, not only was this twinning programme to Japan a fun experience, it was also a fruitful and enriching one. Although we were all on the same trip, I believe that all our experiences were different. I will definitely hold all the memories and friendships forged close to my heart. I really enjoyed gaining a better insight into the Japanese culture, especially the school life in the Futaba schools. It was really interesting to see the differences between the 3 schools as well as our own. Something I find truly amazing is, even though we may have been exhausted, we continued to encourage each other and lift each other's spirits. In the course of this trip, I believe that I have truly felt the IJ Spirit and I hope that we have managed to allow the Japanese students to experience it as well.
It was a very fruitful trip as I was able to learn more about my friends and myself. I experienced the different cultures in Japan and saw people from all walks of life. I was able to improve my socialising skills as I made the effort to speak to people I barely knew. It was heartwarming to see how sincere our Japanese hosts were in everything they did and the effort everyone put in to make the trip a success. It was also a great opportunity to make new Japanese friends, growing so close that it felt like we had known each other for months. Being on this trip has really opened my eyes to life outside Singapore and made me realise just how small Singapore is. It was quite a hectic trip, but I felt that this would be one of the most memorable experiences of my school life.
I think this twinning programme was a good experience because it has helped me improve myself as a person. Through this trip I was able to learn to be more independent and how to be a leader. It has also taught me to appreciate everything that I have, from the time I have on this earth to the family and the friends who love and care for me. I learnt to treasure every single second because we were all given only 1 to 2 days with our Japanese buddies before we had to say goodbye. This trip has also enabled me to build new friendships which I treasure with all my heart. Before this trip, I did not have many friends from other classes but now, not only did I return with souvenirs and memories from Japan, I also have new and wonderful close friends. I never expected all of us to end up getting so close but I am glad we did.
The twinning programme has really taught me many things. Firstly, I have gained many new friendships on this trip, friends from IJ and the Futaba schools in Japan. The students were all very nice and welcoming. As we had to wake up early on most days, I also learnt the importance of being punctual. The trip passed by very quickly and happily and I realised that time passes quickly especially when you are enjoying yourself, so we need to treasure and appreciate every moment we have. One thing that left an impression on me was the sewing class I attended. I felt that this was an important life skill and something we do not learn in IJ so it was very interesting and new to me. I am very thankful for the opportunity to have gone on this trip, for learning so many things and forging so many new friendships.
I was hoping someone could direct me on how to fix my cut levels dialog box. Seems that the option to select the ZC in the top range has been turned off somehow. Was wondering if anyone knew how to turn this option back on.
ZC is normal to the tool axis and parallel to the cut levels.
Either rotate the WCS or select a face to set the top of the ranges.
Hint - if you always want your WCS to be aligned with the current MCS, turn on the preference "Orient WCS to MCS", and the WCS will move around as you edit different operations.
Thank you Mark, we use multiple axis machine and our MCS is different than the WCS.
If the cut levels are not aligned with the ZC axis, then the ZC value is hidden, since it wouldn't make any sense.
Oriental Ornithopods – Enter the Dragons
By Mike| 2014-03-03T09:22:45+00:00 January 17th, 2012|Dinosaur and Prehistoric Animal News Stories, Dinosaur Fans|1 Comment
New Species of Plant-Eating Dinosaur From China
The Chinese New Year, the Year of the Dragon, is rapidly approaching and, true to form, a new dinosaur species has been discovered in China. This part of Asia could lay claim to being the most prolific location on the planet for new dinosaur discoveries at the moment. Over the last twenty or so years, more new types of dinosaur have been discovered and named than in the preceding two hundred years. This new dinosaur, described as a member of the Ornithopoda (bird-hipped dinosaurs – Ornithischians), adds greatly to the current knowledge of this type of Chinese dinosaur, as up until now only a handful of Chinese Ornithopods have been scientifically described.
The dinosaur jointly researched by a team of Chinese and Japanese scientists has been named Yueosaurus tiantaiensis. A paper detailing the research work has been published in the scientific journal "Cretaceous Research".
Known from just a single, well-preserved but incomplete specimen, Y. tiantaiensis is believed to have been less than a metre tall and little more than 1.5 metres long. It has been described as a basal Ornithopod. The fossils of this type of dinosaur have been found on all continents, but fossil specimens from China are rare. These animals possessed beaks and were probably mainly vegetarian, although some species may have eaten insects and small vertebrates. They had hind legs longer than their front legs, and were probably facultative bipeds (running on hind legs usually, but moving around on all fours if required). Most basal Ornithopods were gracile and small, although their descendants were to become the most widespread and common large plant-eating dinosaurs by the Late Cretaceous.
An Illustration of a Typical Basal Ornithopod
Fast running, new species of Chinese Dinosaur
The original fossil material was found in 1998 when construction workers uncovered the remains of this small dinosaur during a road building project in Tiantai county, in the eastern province of Zhejiang. The location where this prehistoric herbivore's remains were found was the inspiration behind the specific name of this dinosaur. The fossils were intensively studied at the Zhejiang Museum of Natural History and it was from this analysis that the joint Sino/Japanese team were able to ascribe the remains to an entirely new species.
Yueosaurus tiantaiensis lived during the Cretaceous geological period. The fossils associated with this dinosaur were removed from strata approximately 100 million years old (Albian faunal stage). The full name of this new dinosaur, one of half a dozen new species described from Zhejiang province in the last twelve months, means "Tiantai Yue Dinosaur" in Chinese, as it was discovered in present-day Tiantai county and the region used to be the territory of the ancient state of Yue. So the name reflects both modern and ancient China.
The new species represents the southernmost basal Ornithopod dinosaur discovered to date on the continent of Asia. It is surprising how few dinosaurs of this type are known from China, especially given the extensive fossil record of related animals such as the Hypsilophodontids from North America and Europe. It could be that small Ornithopods were rare in Asia compared to the northern hemisphere, or there could be a bias in the fossil record, with these small dinosaurs perhaps being under-represented.
The closest living analogs to dinosaurs such as Y. tiantaiensis are wallabies, deer and small antelope. Fossilised burrows found in North America and what would have been the polar regions of Australia suggest that some types of small Ornithopods may have lived underground.
General notation for one of the d-orbitals

What is the general notation to represent the d-orbital with $l=2$, $m_l=0$, i.e. the orbital normally referred to as $\mathrm{d}_{z^2}$? To elaborate more, this orbital can be ordered in various ways in a crystal environment and can be written as $\mathrm{d}_{3z^2-r^2}$, $\mathrm{d}_{3x^2-r^2}$ or $\mathrm{d}_{3y^2-r^2}$. So is there a general notation to represent all three, like $\mathrm{d}_{3A^2-r^2}$?

• The $\mathrm{d}_{z^2}$ orbital points along the $z$-axis. I believe you choose the $z$-axis to point wherever you want that orbital to point (not the other way round). – orthocresol Nov 17 '15 at 9:00
• Yes, but in some crystal structures there is an orbital ordering along different local axes, like in LaMnO$_3$. So there will be a $\mathrm{d}_{3z^2}$, $\mathrm{d}_{3y^2}$ and $\mathrm{d}_{3x^2}$. I just wanted to know how to refer to all three types collectively. – hat Nov 17 '15 at 10:24

Answer:

Generally you are free in your choice of coordinate system to describe a certain problem, and calling the d-orbital with $l=2$, $m_l=0$ either $\mathrm{d}_{3z^2-r^2}$, $\mathrm{d}_{3x^2-r^2}$ or $\mathrm{d}_{3y^2-r^2}$ depends on this choice, but by convention the $z$-direction is usually chosen to be the system's preferred direction, i.e. it points along the main symmetry axis of the system.

Let's move to $d$ orbitals in crystals. Usually, metal $d$ orbitals are not "properties" of the crystal itself but "belong" to the local metal centers. Thus, the $z$-direction along which a $\mathrm{d}_{3z^2}$ orbital points refers to the preferred direction of the metal ion in its local environment; e.g. in a Jahn-Teller distorted perovskite $\ce{ABO3}$, the $\mathrm{d}_{3z^2}$ orbital of metal ion $\ce{B}$ always points along the direction of the elongated/compressed $\ce{B-O}$ bonds within the local $\ce{BO6}$ octahedron or quadratic bipyramid. So, even though in $\ce{LaMnO3}$ you have differently tilted $\ce{MnO6}$ octahedra that have different preferred directions and thus differently oriented $\mathrm{d}_{3z^2}$ orbitals, you still refer to all of them as "$\mathrm{d}_{3z^2}$", and everyone you talk to about this will know which orbitals you mean and where they will point in the different local octahedra in the crystal. That is pretty much the status quo.

Now, what you want to do is talk about the $\mathrm{d}$ orbitals not in the context of their local coordinate system but in the context of a more global coordinate system related to the whole crystal, e.g. declare a certain crystal plane to be the $x$,$y$-plane and name the $\mathrm{d}$ orbitals according to this choice to let their name reflect their orientation within the plane.

Of course, you can do this, and it has been done in the literature pretty much as you suggested, where the authors chose notations like "$\mathrm{d}_{3\sigma^2 - r^{2}}$ orbital ($\sigma = x, y, z$)" or "$\mathrm{d}_{3(x, y)^2 - r^{2}}$ orbitals". But be aware that this is a kind of notation that needs to be explained beforehand when you use it, because it refers to an unconventional choice of coordinate system as a frame of reference for the $\mathrm{d}$ orbital labels.
When logging in to a Plesk server via a web browser (e.g. https://pleskxx.hyve.com:8443/), a message is displayed warning that the certificate in place is not trusted because it is 'self-signed'. The specific warning message differs from browser to browser.
An SSL certificate is in place so that data sent between a client browser and the server is encrypted.
On our Plesk servers we use 'self-signed' certificates, which means that although we use 128-bit encryption on the data, the certificate is not vouched for by a certification authority. Instead we sign the certificates directly on the servers themselves, so that we are not unnecessarily paying for a yearly certificate subscription from a third party such as VeriSign. Technically there is no difference between the two types of certificate: self-signed certificates offer identical encryption to those issued by an authority.
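The trade-off described above (encryption without third-party identity verification) can be illustrated with Python's standard `ssl` module. This is a minimal sketch, not part of the Plesk product, and the certificate filename is a placeholder:

```python
import ssl

# A default context verifies the certificate chain against trusted CAs,
# exactly the check a browser performs before showing the warning.
default_ctx = ssl.create_default_context()
assert default_ctx.verify_mode == ssl.CERT_REQUIRED

# To talk to a server with a self-signed certificate, the identity check
# can be skipped. The TLS handshake still negotiates encryption; only
# the "who signed this?" verification is disabled.
selfsigned_ctx = ssl.create_default_context()
selfsigned_ctx.check_hostname = False       # must be disabled first
selfsigned_ctx.verify_mode = ssl.CERT_NONE

# A stricter option is to pin the server's own certificate, restoring
# the identity check without a third-party authority:
#   pinned_ctx = ssl.create_default_context(cafile="plesk-selfsigned.pem")
```

A client that has downloaded the server's certificate once can pin it via `cafile`, getting identity verification back without paying a certification authority.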
The only real difference is that your browser recognises certain certification authorities (such as VeriSign); because a self-signed certificate is not issued by one of them, the browser cannot confirm that the certificate is valid or that the site warranties certain levels of financial transactions (usually up to £10,000 per transaction). However, since we do not accept credit card data, nor do we sell anything via the Plesk control panel, there is no point in offering this transaction warranty, and therefore no point in having a third party issue the certificate.
Any data entered into the Plesk control panel is guaranteed to be 128-bit encrypted and secure, even though the browser displays a message indicating that the certificate is not trusted. The reason this message is displayed is that the browser does not, by default, recognise the issuing body (the server itself). All of our Plesk servers are hosted behind Cisco firewalls, using port 8443 via 128-bit SSL encryption, so it would be extremely difficult to intercept the data stream between the browser and our Plesk servers.
Continue past the warning message into the control panel. This is achieved in different ways depending on the browser: in Internet Explorer there is a link to continue, whereas in Firefox an exception must be added. Simply follow the on-screen instructions to continue past the warning into the control panel.
It is strongly recommended that you DO NOT use any website that displays a warning message of this type if you intend to enter any sensitive data such as card details.
Q: Django annotation + filtering I've loaded some data from SQLite into PostgreSQL using pgloader. It works more or less, but some types came out a bit clunky, which is why my dates are strings (I might be wrong, but I don't think that's the issue).
Consider the following models:
class User(models.Model):
hash_id = models.TextField(primary_key=True)
first_name = models.TextField(blank=True, null=True)
last_name = models.TextField(blank=True, null=True)
class Education(models.Model):
hash_code = models.ForeignKey('User', models.CASCADE)
startdate = models.TextField(blank=True, null=True)
enddate = models.TextField(blank=True, null=True)
class Job(models.Model):
hash_code = models.ForeignKey('User', models.CASCADE)
jobends = models.TextField(blank=True, null=True)
jobstarts = models.TextField(blank=True, null=True)
I'm trying to get all the jobs after X years from the user's first Education.
So far I've got a Subquery, to get the first education's start date for each user:
# This should return a date eg: '2000-01-01'
mba_start_subq = (Education.objects.filter(hash_code=OuterRef('hash_code'))
.order_by('startdate')
.values('startdate')[:1]
)
Then I append this to each Job via:
jobs = (Job.objects.all()
# .distinct('hash_code')
.order_by('hash_code')
.annotate(mba_start = Subquery(mba_start_subq))
)
So far so good, the issue is when I try to add a .filter() afterwards it takes ages to get the response (basically an infinite loop kind of thing)
# Filtering with date strings works
jobs.filter(
Q(jobstarts__lt = '2000-01-01'),
Q(jobends__gt = '2002-01-01')
)
# this is the desired functionality, that doesn't work
jobs.filter(
Q(jobstarts__lt = Subquery(mba_start_subq)),
Q(jobends__gt = Subquery(mba_start_subq))
)
I've also tried to use F(annotated_value) in the .filter() after I've annotated it to the queryset, but no luck, I don't get any response from the server at all, takes ages.
What did I miss? the way I'm getting the education start date is wrong? Is there a more efficient way?
UPDATE: Here is a working SQL query that I would like to achieve in Django
SELECT
Organizations.id,
Organizations.hash_code,
educations.EducationDegree,
Organizations.Role,
Organizations.Industry,
Organizations.JObStarts,
Educations.StartDate,
Organizations.JobEnds
FROM educations
INNER JOIN Organizations ON
Organizations.hash_code=Educations.hash_code
WHERE (date(Educations.StartDate,'+1 year')
BETWEEN Organizations.JObStarts AND Organizations.JobEnds )
GROUP BY Organizations.hash_code
A: Firstly, I have modified your Education and Job models slightly to use the Django User rather than the extra model you are adding. The new model definitions are:
class Education(models.Model):
hash_code = models.ForeignKey(settings.AUTH_USER_MODEL, models.CASCADE)
startdate = models.TextField(blank=True, null=True)
enddate = models.TextField(blank=True, null=True)
class Job(models.Model):
hash_code = models.ForeignKey(settings.AUTH_USER_MODEL, models.CASCADE)
jobends = models.TextField(blank=True, null=True)
jobstarts = models.TextField(blank=True, null=True)
Then, in order to match the SQL query you have, I would use the following:
subq = (Education.objects.filter(hash_code__id=OuterRef('hash_code__id'))
.order_by('startdate'))
This uses the ID of the User object rather than the entire object itself for filtering. The values can also be added as part of the sub-query after.
The final jobs query can then be run as follows:
jobs = (Job.objects.filter(jobstarts__lt=Subquery(subq.values('startdate')[:1]),
jobends__gt=Subquery(subq.values('startdate')[:1]))
.order_by('hash_code__id'))
This removes the need to order and annotate all Jobs objects and applies a simple, more efficient filter instead. I hope this helps. Let me know if it isn't quite what you are after.
Freedom for All
Freedom4All Home
Case Links
Updates Feed
Richard Glossip~controversial case, controversial drug, controversial execution
Richard Glossip's execution was stopped today and what a relief.. But, it was stayed for 2 weeks and that time will be up before we know it.
If there's anything I've learned in the last couple of years, writing about and trying to spread awareness on wrongful convictions, it's that a huge majority, I would even go as far as saying, most of us, don't seem to care. No matter how big or blatant the injustice, if it doesn't affect people in their daily lives, it doesn't exist. And to the contrast, believing the worst in people is real-life drama which is compelling and just all out more interesting to most. This was an extremely hard pill for me to swallow but then decided to spend more time and share more information because ignoring it isn't an option for me. People who many, including myself, believe to have been innocent were executed but we didn't even know their names. However, in this day and time, with the power to share knowledge to the masses, it's unacceptable.
I'm against the death penalty because it's a proven fact that innocent men and women sat on death row, in unthinkable conditions, many in solitary confinement for 10, 20, 30, even 40 years. Those even less fortunate were executed. I don't believe anyone has the right to take another human life, but I also know there are evil people who do horrible things, and trying to save their lives just isn't my cause.
A few points in the Richard Glossip case, I hope everyone will at least take a moment to read.
The admitted murderer, Justin Sneed was given a lesser sentence to testify against Glossip. This isn't uncommon in death penalty cases and for me, that's mind blowing. Someone so morally bankrupt, he bludgeoned a man to death was offered a deal to save his own life. And we're to trust his word enough to take another man's? Not only was Sneed's testimony evidence used during trial, it was pretty much the only evidence and reason for Glossip's conviction and sentence of death.
Sneed gave quite a few contradictory accounts of what took place that night, even during the initial taped, police interview he changed his story several times. For whatever reason, the tape wasn't used during trial or shown to the jury. The tape also shows detectives telling Sneed they didn't believe he acted alone, then offering that it probably wasn't even his idea.
The jurors were never told that Sneed admitted to being on a two-day meth run leading up to the murder or shown evidence that would have at minimum brought reasonable doubt. Receipts and other evidence were "lost in a flood". How convenient. There was a man who left the hotel so quickly the following morning, he didn't even take his luggage but was never treated as a suspect or introduced to the jury.
Sneed's daughter who has remained in contact with him throughout his sentence is a Glossip supporter. She even made attempts to reach out to the Oklahoma pardon and parole board. In a letter she wrote, "For a couple of years now, my father has been talking to me about recanting his original testimony; I feel his conscious is getting to him." Now ask yourself, for what reason or benefit would she do this other than she believes Glossip to be innocent?
Lastly, the first cases of wrongful conviction I learned of, ten years ago, took place in Oklahoma. In fact, of the 155 death row exonerees, 10 were from tiny little Oklahoma. Glossip was young, naive and poor at the time of arrest, a common thread among wrongful convictions everywhere and definitely in Oklahoma. If interested, look up Ron Williamson and Dennis Fritz, who have been exonerated ("The Innocent Man"), and Tommy Ward and Karl Fontenot, who remain behind bars today ("Dreams of Ada").
Side note~ In 99% of cases of wrongful conviction, the family of the victim whole heartedly believes the defendant to be guilty; they want and deserve justice for their loved one and have been told, by the 'trusted' state or their government that the defendant is guilty. But their need for justice doesn't make the defendant any more or less guilty. That being said, I have great sympathy for the victim, Van Treese's family and any other family who's faced with such tragedy.
for more details on this case or about wrongful convictions, plenty of good links provided below...
http://www.cnn.com/…/oklahoma-richard-glossip-ex…/index.html
http://www.richardeglossip.com/
https://www.facebook.com/RichardGlossipIsInnocent
http://innocence.okcu.edu/
http://www.innocenceproject.org/news-events-exonerations/oklahoma-innocence-project-files-motion-to-speed-up-client2019s-hearing_
mom, music enthusiast, writer and injustice fighter
Those who deny freedom to others deserve it not for themselves~ Abraham Lincoln
Freedom And Justice For All
Alpensia () is a ski resort located in Daegwallyeong, in Pyeongchang county, Gangwon province, South Korea.

History

The decision to build the Alpensia resort was taken in 2003, as part of Gangwon province's ambition to host the Winter Olympic Games. Construction was completed in 2011. In 2013, the resort hosted the 2013 Special Olympics World Winter Games.

In 2018, it hosted the biathlon, Nordic combined, ski jumping, bobsleigh, luge and skeleton events of the XXIII Olympic Winter Games, as well as the biathlon and cross-country skiing events of the XII Paralympic Winter Games.

Etymology

"Alpensia" (알펜시아) is a portmanteau combining the words Alps (알펜), Asia (아시아) and fantasia (판타지아), so that its literal meaning is "the fantastic Alps of Asia".

Characteristics

The resort has six slopes for skiing and snowboarding, each over 1.4 km long, and 9 lifts.

The resort also contains four of the venues used for the 2018 Winter Olympics: the ski jumping stadium, the biathlon stadium, the cross-country stadium and the track for bobsleigh, luge and skeleton, as well as one of the two Olympic villages and the media centre.

Notes

Other projects

External links

South Korean ski resorts
Venues of the XXIII Olympic Winter Games
Daegwallyeong
package ipam
import (
"bytes"
"encoding/json"
"fmt"
"net"
"os"
"os/exec"
"path/filepath"
"strings"
"github.com/coreos/rocket/Godeps/_workspace/src/github.com/vishvananda/netlink"
"github.com/coreos/rocket/networking/util"
)
// L3 config value for interface
type IPConfig struct {
IP *net.IPNet
Gateway net.IP
Routes []net.IPNet
}
type ipConfig struct {
IP string `json:"ip"`
Gateway string `json:"gateway,omitempty"`
Routes []string `json:"routes,omitempty"`
}
func (c *IPConfig) UnmarshalJSON(data []byte) error {
ipc := ipConfig{}
if err := json.Unmarshal(data, &ipc); err != nil {
return err
}
ip, err := util.ParseCIDR(ipc.IP)
if err != nil {
return err
}
var gw net.IP
if ipc.Gateway != "" {
if gw = net.ParseIP(ipc.Gateway); gw == nil {
return fmt.Errorf("error parsing Gateway")
}
}
routes := []net.IPNet{}
for _, r := range ipc.Routes {
dst, err := util.ParseCIDR(r)
if err != nil {
return err
}
routes = append(routes, *dst)
}
c.IP = ip
c.Gateway = gw
c.Routes = routes
return nil
}
func (c *IPConfig) MarshalJSON() ([]byte, error) {
ipc := ipConfig{
IP: c.IP.String(),
}
if c.Gateway != nil {
ipc.Gateway = c.Gateway.String()
}
for _, dst := range c.Routes {
ipc.Routes = append(ipc.Routes, dst.String())
}
return json.Marshal(ipc)
}
func findIPAMPlugin(plugin string) string {
// try 3rd-party path first
paths := strings.Split(os.Getenv("RKT_NETPLUGIN_IPAMPATH"), ":")
for _, p := range paths {
fullname := filepath.Join(p, plugin)
if fi, err := os.Stat(fullname); err == nil && fi.Mode().IsRegular() {
return fullname
}
}
return ""
}
// Executes IPAM plugin, assuming RKT_NETPLUGIN_COMMAND == ADD.
// Parses and returns resulting IPConfig
func ExecPluginAdd(plugin string) (*IPConfig, error) {
if os.Getenv("RKT_NETPLUGIN_COMMAND") != "ADD" {
return nil, fmt.Errorf("RKT_NETPLUGIN_COMMAND is not ADD")
}
pluginPath := findIPAMPlugin(plugin)
if pluginPath == "" {
return nil, fmt.Errorf("could not find %q plugin", plugin)
}
stdout := &bytes.Buffer{}
c := exec.Cmd{
Path: pluginPath,
Args: []string{pluginPath},
Stdout: stdout,
Stderr: os.Stderr,
}
if err := c.Run(); err != nil {
return nil, err
}
ipConf := &IPConfig{}
err := json.Unmarshal(stdout.Bytes(), ipConf)
return ipConf, err
}
// Executes IPAM plugin, assuming RKT_NETPLUGIN_COMMAND == DEL.
func ExecPluginDel(plugin string) error {
if os.Getenv("RKT_NETPLUGIN_COMMAND") != "DEL" {
return fmt.Errorf("RKT_NETPLUGIN_COMMAND is not DEL")
}
pluginPath := findIPAMPlugin(plugin)
if pluginPath == "" {
return fmt.Errorf("could not find %q plugin", plugin)
}
c := exec.Cmd{
Path: pluginPath,
Args: []string{pluginPath},
Stderr: os.Stderr,
}
return c.Run()
}
// ApplyIPConfig brings the named interface up and applies the IP address,
// gateway and routes from ipConf to it.
func ApplyIPConfig(ifName string, ipConf *IPConfig) error {
link, err := netlink.LinkByName(ifName)
if err != nil {
return fmt.Errorf("failed to lookup %q: %v", ifName, err)
}
if err := netlink.LinkSetUp(link); err != nil {
return fmt.Errorf("failed to set %q UP: %v", ifName, err)
}
addr := &netlink.Addr{IPNet: ipConf.IP, Label: ""}
if err = netlink.AddrAdd(link, addr); err != nil {
return fmt.Errorf("failed to add IP addr to %q: %v", ifName, err)
}
for _, dst := range ipConf.Routes {
if err = util.AddRoute(&dst, ipConf.Gateway, link); err != nil {
// we skip over duplicate routes as we assume the first one wins
if !os.IsExist(err) {
return fmt.Errorf("failed to add route '%v via %v dev %v': %v", dst.String(), ipConf.Gateway, ifName, err)
}
}
}
return nil
}
https://math.stackexchange.com/questions/133548/radius-around-point

I have a question: I am trying to calculate the distance between two latitude and longitude points on the Google map. I understand the coding part, but I do not understand the mathematical side to it; I would appreciate some guidance. Here is the code; maybe it gives some understanding of what I am trying to achieve. Cheers!

• You need to clarify your question. What do you mean by 'calculate a radius between two latitude and longitude points'? – copper.hat Apr 18 '12 at 17:44
• I answered a similar question. I think you're looking for the great arc distance between two points on the sphere, in miles. – bgins Apr 18 '12 at 18:06

Latitude and longitude values are basically vectors in spherical coordinates, usually given without the radius of the Earth.

Let's say you have a 2D circle and two vectors to the edge of that circle. The distance along the edge of the circle between the heads of those two vectors is called the arc length. In order to find it, you need the angle $\beta$ between the two vectors and the radius $r$ of the circle:

$$d=r\beta$$

An easy way to find the angle between two vectors is to use the dot product:

$$\cos \beta = \frac{\vec a \cdot \vec b}{|\vec a| |\vec b|}$$

This formula holds for vectors in 2-space as well as vectors in 3-space. The goal now is to figure out what the two vectors should be in Cartesian coordinates. bgins' answer has this formula:

$$X(\theta,\phi) = \left[\matrix{x\\y\\z}\right] = r\left[\matrix{\cos\phi\cos\theta\\ \cos\phi\sin\theta\\ \sin\phi}\right]$$

Now you just take two of these vectors:

$$\vec a = r\left[\matrix{\cos\phi_a\cos\theta_a\\ \cos\phi_a\sin\theta_a\\ \sin\phi_a}\right], \qquad \vec b = r\left[\matrix{\cos\phi_b\cos\theta_b\\ \cos\phi_b\sin\theta_b\\ \sin\phi_b}\right]$$

and put them into the equation for the angle:

$$\vec a \cdot \vec b = r^2(\cos\phi_a\cos\phi_b[\cos\theta_a\cos\theta_b + \sin\theta_a\sin\theta_b] + \sin\phi_a\sin\phi_b) = r^2(\cos\phi_a\cos\phi_b\cos(\theta_a-\theta_b)+\sin\phi_a\sin\phi_b)$$

It is easy to see that the magnitude of each vector is $r$, so the final formula is:

$$\beta = \cos^{-1}(\cos\phi_a\cos\phi_b\cos(\theta_a-\theta_b)+\sin\phi_a\sin\phi_b)$$

$$d = r\beta = r\cos^{-1}(\cos\phi_a\cos\phi_b\cos(\theta_a-\theta_b)+\sin\phi_a\sin\phi_b)$$
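The final formula above translates directly into Python. This is a hypothetical helper, not from the question's code; the default Earth radius of 6371 km is an illustrative assumption:

```python
import math

def great_circle_distance(lat_a, lon_a, lat_b, lon_b, radius=6371.0):
    """Arc length between two (latitude, longitude) points, given in
    degrees, on a sphere of the given radius, via
        d = r * arccos(cos(phi_a) cos(phi_b) cos(theta_a - theta_b)
                       + sin(phi_a) sin(phi_b))
    """
    phi_a, phi_b = math.radians(lat_a), math.radians(lat_b)
    dtheta = math.radians(lon_a - lon_b)
    c = (math.cos(phi_a) * math.cos(phi_b) * math.cos(dtheta)
         + math.sin(phi_a) * math.sin(phi_b))
    # Clamp to [-1, 1] to guard against floating-point rounding
    # before taking the inverse cosine.
    return radius * math.acos(max(-1.0, min(1.0, c)))
```

For example, on a unit sphere the distance from a point on the equator to the pole is a quarter circumference, $\pi/2$. Note that for nearly coincident points this formula is numerically less stable than the haversine form.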
AQUA (GERMANY) - AQUA
Here is something quite special, not known to everyone: Aqua were founded in Kassel in 1972. They played progressive rock as was the fashion at the time, thus with a lot of organ. Also in 1972 they recorded four self-written songs to show to promoters and record companies. Though they couldn't get a record company interested, they got a lot of gigs. Thus, in 1978 they released a 7" single themselves, already more smoothed out. In 1981 they did an LP which, as was then up to date, was already mainstream. The number of copies
released was 1000 for each record. The CD available here contains the hitherto unreleased four songs of 1972, both tracks of their single, and, at the artists' request, two songs from the LP which still show some progressive
influences.
TYBURN TALL - TYBURN TALL
Reissue of ULTRA RARE PRIVATE PRESSING of German PROGRESSIVE ROCK band from the mid 70's. Long tracks with dark HEAVY HAMMOND ORGANS and FUZZY GUITARS!
OS MUNDI - STURMFLUT
The Berlin band is well-known because of their LPs "Latin mass" (1970, on Metronome) and "43 Minuten" (1972, on Brain). Here are some so far unreleased recordings from 1973 and 1975: two tracks from RIAS Berlin radio station and five studio tracks. ... read more »
SUN - S.U.N.
Sun from Hettenleidelheim near Eisenberg in the Palatinate were founded under the name of Punished Sun in 1969 and shortened their name to Sun in 1974. They played progressive rock with slight Zappa and jazz influences; the singing, however, is in parts a bit silly. In that ... read more »
FLORIAN GEYER - BEGGAR'S PRIDE
REISSUE ON CD of ULTRA-RARE PRIVATE PRESSING from '76! EXCELLENT PSYCHEDELIC PROGRESSIVE ROCK with touches of FOLK-INFLUENCES! long tracks up to 10 minutes! also includes here are 8 UNRELEASED BONUS TRACKS!
POSEIDON (GERMANY) - FOUND MY WAY
Reissue of obscure LP by German progressive kraut band POSEIDON, originally released in 1975. here with 8 bonus tracks. (Garden Of Delights)
EMMA MYLDENBERGER - EMMA MYLDENBERGER +5
A nice folk album from Germany, never before re-released on LP or CD. From the master tapes and with five bonus tracks in top sound quality. With male/female vocals, flute, glockenspiel, autoharp, mandolin etc. Booklet with long band story, detailed discography, many ... read more »
\section{Introduction}
Doppler searches for Jupiter-mass planets are nearly complete for
semimajor axes $0.03 \leq a \leq 3$~AU. The distribution of exoplanet
semimajor axes within this range reveals a paucity of planets orbiting
less than $0.5$~AU from their host stars (Butler \mbox{\rm et al.~} 2006). Instead,
many giant planets are being found on terrestrial planet-like orbits, in
or near the habitable zone: 25\% of 212 known exoplanets within 100 pc
have semimajor axes between 1.0 and 2.0 AU\footnote{See catalog at
www.exoplanets.org.}. These observations suggest that planetary systems
often have habitable zones dominated by gas giants. In this paper, we
announce the discovery of Jupiter-mass planets orbiting HD 5319 and HD
75898. Both planets have nearly circular orbits with semimajor axes
between 1 and 2 AU.
HD 5319 and HD 75898 were selected for the Keck planet search after
being flagged as metal-rich by the N2K consortium, on the basis of
photometry and low-resolution spectroscopy (Ammons \mbox{\rm et al.~} 2006, Robinson
\mbox{\rm et al.~} 2007). The N2K project's primary goal was to identify metal-rich
stars likely to host hot Jupiters, which have high transit probabilities
(Fischer \mbox{\rm et al.~} 2005). So far, one transiting hot Saturn (Sato \mbox{\rm et al.~}
2005) and six planets with periods $P < 15$ days (Wright \mbox{\rm et al.~} 2007,
Johnson \mbox{\rm et al.~} 2006, Fischer \mbox{\rm et al.~} 2006, and Fischer \mbox{\rm et al.~} 2005) have
been discovered among the N2K targets. However, the planet-metallicity
correlation holds for all orbital periods, making the N2K target list a
good source for discoveries of longer-period planets as well. The new
discoveries reported in this paper, HD 5319~b and HD 75898~b, are two of
the seven intermediate-period planets so far found orbiting N2K target
stars (see also Wright \mbox{\rm et al.~} 2007, Fischer \mbox{\rm et al.~} 2007).
In \S 2, we report our observations and Keplerian fit to HD 5319. We
discuss the HD 75898 system in \S 3. In \S 4, we discuss the implied
presence of long-period stellar or substellar companions orbiting each
star. We present discussion and conclusions in \S 5.
\section{HD 5319}
\subsection{Stellar Characteristics}
HD 5319 is a subgiant with $M_V=3.05$, $V=8.05$, $B-V=0.985$, and
Hipparcos parallax (ESA 1997) of $0.010 ''$, corresponding to a distance
of 100 parsecs. High-resolution spectroscopic analysis (Valenti \&
Fischer 2005) yields $T_{\rm eff}~$ = 5052 $\pm$ 50K, ${\rm \log g}~$ = 3.57 $\pm$ 0.15,
$v \sin i~$ = 3.31 $\pm$ 0.50 \mbox{km s$^{-1}~$}, and {\rm [Fe/H]} = 0.15 $\pm$ 0.05 dex. HD 5319's
spectral type is listed as K0 III in the SIMBAD database and as G5 IV in
the Hipparcos catalog. The star's $M_V$ and ${\rm \log g}~$ values are most
consistent with the G5 IV designation.
The luminosity is 4.6 $L_{\odot}$, including a bolometric correction of
$-0.259$ (VandenBerg \& Clem 2003). The luminosity and effective
temperature imply a stellar radius of 2.8 $R_{\sun}$. We estimate
stellar masses using theoretical stellar models based on the Yale
Stellar Evolution Code as described in Takeda \mbox{\rm et al.~} (2007). The fine
grid of evolutionary tracks have been tuned to the uniform spectroscopic
analysis of Valenti \& Fischer (2005) and provide posterior
distributions for stellar mass, radius, gravity and age. Based on this
analysis, we derive a stellar mass of $1.56 \: M_{\odot}$, a radius of
$3.26 \: R_{\odot}$, higher than implied by the bolometric luminosity,
and an age of 2.4~Gyr for this subgiant. As a measure of uncertainty,
the lower and upper 95\% confidence intervals are provided in
parentheses in Table 1 for the stellar mass, age and radius.
Measurement of the core of the Ca H\&K lines (Figure 1) shows
that the star is chromospherically inactive. From 30 observations, we
measure mean values of the Ca H\&K indices of $S_{HK}$ = 0.12 and $\log R^\prime_{HK}$ =
-5.34. Based on the values of $S_{HK}$~and $\log R^\prime_{HK}$, we derive a rotational
period of $P_{ROT}~$ = 19.0 days (Noyes \mbox{\rm et al.~} 1984). However, we caution
that the interpretation of $S_{HK}$~and $\log R^\prime_{HK}$ and their correlation with
$P_{ROT}~$ may be subject to systematic errors for evolved stars, since the
$P_{ROT}~$ calibration was created for main-sequence stars.
We also monitored the star's brightness with the T10 0.8~m automatic
photometric telescope (APT) at Fairborn Observatory (Henry 1999, Eaton
\mbox{\rm et al.~} 2003). The T10 APT measures the brightness of program stars
relative to nearby constant comparison stars with a typical precision of
0.0015--0.0020 mag for a single measurement. We obtained 89 Str\"omgren
$b$ and $y$ brightness measurements spanning 438 days between 2004
October and 2006 January. The standard deviation of a single
observation from the mean was 0.0017 mag, comparable to the measurement
precision, which provides an upper limit to photometric variability in
HD~5319. A periodogram analysis found no significant periodicity
between 1 and 220 days. Thus, our photometry confirms the star's
low level of chromospheric activity.
\subsection{Doppler Observations and Keplerian Fit}
Doppler observations were made at the Keck telescope using HIRES (Vogt
et al. 1994) with an iodine cell to model the instrumental profile and
to provide the wavelength scale (Butler \mbox{\rm et al.~} 1996). An exposure meter
maintains a constant signal-to-noise ratio of about 200 in our spectra,
yielding relatively uniform radial velocity precision. We obtained 30
observations of HD 5319. The observation dates, radial velocities and
measurement uncertainties are listed in Table 2 and plotted in Figure 2.
The periodogram of the radial velocities (Figure 3) shows a strong,
broad peak in the power spectrum, spanning 600-900 days. This peak is
wide because of modest phase sampling for HD 5319~b. To estimate the
False Alarm Probability (FAP), the probability that the power in the
highest peak is an artifact of noisy data or timing of observations, we
use the bootstrap Monte Carlo method of Cumming (2004). We generated
10,000 data sets of noise using the measured stellar velocities,
selected with replacement from residuals about the mean velocity, and
calculated the periodogram for each synthetic RV data set. The fraction
of trials with maximum periodogram power that exceeds the observed value
gives the FAP (Cumming 2004). Figure 4 shows a histogram of the tallest
peak height in each trial periodogram. Only 13 of 10,000 synthetic data
sets yielded any peak with higher power than in the true periodogram,
for ${\rm FAP} = 0.0013$ (Table 3). The probability that the 600-900
day periodogram peak arises from a true physical source is therefore
$99.87\%$, suggesting this period range should be searched for a
Keplerian orbital fit.
The final task before determining the orbit of HD 5319~b is to assess
the astrophysical sources of error in radial velocity measurements. In
addition to velocity errors arising from our measurement uncertainties
(including photon shot noise), the star itself can have cool spots,
granular convective flows, or $p$-mode oscillations that contribute
non-dynamical velocity noise. These noise sources are collectively
termed ``jitter''. For purposes of fitting a Keplerian model, the
stellar jitter is added in quadrature to the formal instrumental errors.
Jitter is not included in the tabulated measurement uncertainties for
the radial velocity sets.
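For clarity, the effective uncertainty adopted for each velocity in the
Keplerian fits is then the standard quadrature sum
\begin{equation}
\sigma_{\rm eff} = \sqrt{\sigma_{\rm inst}^2 + \sigma_{\rm jitter}^2},
\end{equation}
where $\sigma_{\rm inst}$ is the tabulated measurement uncertainty and
$\sigma_{\rm jitter}$ is the adopted jitter estimate.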
We empirically estimate stellar jitter based on the chromospheric
activity of the star and spectral type, following Wright (2005). The
20$^{th}$ percentile, median, and 80$^{th}$ percentile jitter amplitudes
of stars at the chromospheric activity level and evolutionary stage of
HD 5319 are 4.6~\mbox{m s$^{-1}$}, 5.7~\mbox{m s$^{-1}$}, and 9.5~\mbox{m s$^{-1}$}, respectively. We adopt
the 20$^{th}$ percentile value as a conservative jitter estimate (Table
1). The $p$-mode oscillation component of the jitter is $\sim
0.9$~\mbox{m s$^{-1}$}, according to the solar scaling relation of Kjeldsen \&
Bedding (1995).
A Levenberg-Marquardt (LM) fitting algorithm was used to model the
radial velocities of HD 5319. The best-fit Keplerian model gives an
orbital period of 674.6 $\pm$ 16.9 d, with velocity semiamplitude $K =
33.6 \pm$ 4.3 \mbox{m s$^{-1}$}, and orbital eccentricity $e = 0.12 \pm 0.08$. We
include a center of mass acceleration $dv/dt = 9.11$~\mbox{m s$^{-1}~$}~yr$^{-1}$,
corresponding to a linear trend in the residual radial velocities. The
best fit has RMS~=~6.08~\mbox{m s$^{-1}~$} and $\sqrt{\chi_{\nu}^2}~$~=~1.22, including 4.6~\mbox{m s$^{-1}~$} for
astrophysical jitter. Adopting a stellar mass of 1.56~$M_{\odot}$, we
derive $M \sin i~$~=~1.94 M$_{\rm JUP}~$ and semimajor axis $a = 1.75$~AU (angular
separation, $\alpha = 0.'' 0175$). The orbital solution is listed in
Table 3 and the RV data are plotted with the best-fit Keplerian model
(solid line) in Figure 2.
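As a consistency check, the quoted $M \sin i$ and $a$ follow from the fitted parameters and the adopted stellar mass. A sketch with rounded physical constants, neglecting the planet mass next to the stellar mass inside the mass function:

```python
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
M_SUN, M_JUP = 1.989e30, 1.898e27    # kg
AU, DAY = 1.496e11, 86400.0

# Best-fit parameters for HD 5319 b quoted in the text
K, P, e, M_star = 33.6, 674.6 * DAY, 0.12, 1.56 * M_SUN

# Minimum mass from the mass function, neglecting m next to M_star
m_sini = K * math.sqrt(1.0 - e**2) * (P * M_star**2 / (2.0 * math.pi * G)) ** (1.0 / 3.0)

# Semimajor axis from Kepler's third law
a = (G * M_star * P**2 / (4.0 * math.pi**2)) ** (1.0 / 3.0)

print(m_sini / M_JUP)   # ≈ 1.94
print(a / AU)           # ≈ 1.75
```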
Uncertainties in the orbital parameters are first estimated with a
model-based bootstrap Monte Carlo analysis. First, we find the best fit
Keplerian model. Then, for each of 250 trials, that theoretical best
fit is subtracted from the observed radial velocities. The residual
velocities are then scrambled (with replacement) and added back to the
theoretical best fit velocities and a new trial Keplerian fit is
obtained. We adopt the standard deviation of each orbital parameter for
the 250 Monte Carlo trials as the parameter uncertainty. The
uncertainties of the Keplerian parameters of HD 5319~b are listed in
Table 3.
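The mechanics of this residual-scrambling bootstrap can be sketched with a circular-orbit stand-in for the full Keplerian model (synthetic data; a linear least-squares solve over a grid of trial periods replaces the LM fitter):

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_circular(t, v, periods):
    """Grid over trial periods; for each, solve linearly for
    v = A cos(wt) + B sin(wt) + gamma and keep the lowest chi^2."""
    best = None
    for P in periods:
        w = 2.0 * np.pi / P
        X = np.column_stack([np.cos(w * t), np.sin(w * t), np.ones_like(t)])
        coef, *_ = np.linalg.lstsq(X, v, rcond=None)
        model = X @ coef
        chi2 = float(np.sum((v - model) ** 2))
        if best is None or chi2 < best[0]:
            best = (chi2, float(P), float(np.hypot(coef[0], coef[1])), model)
    return best[1], best[2], best[3]

# Synthetic 30-epoch data with a 674.6-day circular signal (illustrative)
t = np.sort(rng.uniform(0.0, 1100.0, 30))
v_obs = 33.6 * np.sin(2.0 * np.pi * t / 674.6 + 0.3) + rng.normal(0.0, 6.0, 30)
periods = np.linspace(550.0, 850.0, 201)

P_best, K_best, model = fit_circular(t, v_obs, periods)
resid = v_obs - model

# Model-based bootstrap: scramble the residuals (with replacement), add
# them back to the best-fit model, refit, and adopt the scatter of each
# parameter over the trials as its uncertainty.
Ps, Ks = [], []
for _ in range(150):
    fake = model + rng.choice(resid, size=resid.size, replace=True)
    P_i, K_i, _ = fit_circular(t, fake, periods)
    Ps.append(P_i)
    Ks.append(K_i)
sigma_P, sigma_K = float(np.std(Ps)), float(np.std(Ks))
```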
In order to confirm the orbital parameters of HD 5319~b, a Markov Chain
Monte Carlo (MCMC) simulation was carried out for the HD 5319
velocities. This analysis, which gives posterior probability
distribution for the orbital parameters, can be a useful check of the
convergence of the Levenberg-Marquardt fitting algorithm, particularly
when the modeled $\sqrt{\chi_{\nu}^2}~$ space contains several local minima.
For example, poor phase coverage might result in an aliased value of the
period. A bimodal MCMC posterior distribution for the period would
indicate the need for more observations to break the degeneracy.
Posterior probability distributions of $P$, $e$ and $K$ are shown for HD
5319~b in Figure 5. Because the orbit is nearly circular, the time of
periastron passage and longitude of periastron are not well constrained,
and the MCMC histograms are nearly flat. This ambiguity is also
reflected in the large uncertainties ($\sim 1/8$~orbit) for ${\rm
T}_{\rm p}$ and $\omega$ inferred from the orbit-based bootstrap
simulations. The eccentricity distribution has mean $e = 0.09$, with
our reported value of $e = 0.12$ lying within $1 \sigma$ of the mean.
The mean of the period distribution, 686~days, is consistent with the
period determined by the LM analysis, 675~days. The MCMC simulations
settle on a somewhat larger value of velocity semiamplitude, $K =
39$~\mbox{m s$^{-1}~$}, than the LM analysis ($K = 33.6$~\mbox{m s$^{-1}$}), a difference of
$1.4 \, \sigma$.
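A minimal Metropolis MCMC of this kind can be sketched on a synthetic circular-orbit stand-in for the Keplerian model (flat priors inside simple bounds; step sizes chosen by hand, so this is an illustration of the machinery, not the paper's sampler):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic circular-orbit data standing in for the HD 5319 velocities
t = np.sort(rng.uniform(0.0, 1100.0, 30))
sigma = 6.0
v = 33.6 * np.sin(2.0 * np.pi * t / 674.6 + 0.3) + rng.normal(0.0, sigma, 30)

def loglike(theta):
    """Gaussian log-likelihood; -inf outside the flat prior bounds."""
    P, K, phi, gamma = theta
    if not (300.0 < P < 1200.0 and 0.0 < K < 200.0):
        return -np.inf
    model = K * np.sin(2.0 * np.pi * t / P + phi) + gamma
    return -0.5 * np.sum(((v - model) / sigma) ** 2)

# Metropolis sampler with Gaussian proposals
theta = np.array([700.0, 30.0, 0.0, 0.0])
step = np.array([5.0, 1.5, 0.05, 1.0])
ll = loglike(theta)
chain = []
for _ in range(20000):
    prop = theta + step * rng.normal(size=4)
    ll_prop = loglike(prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        theta, ll = prop, ll_prop
    chain.append(theta.copy())
chain = np.array(chain[5000:])          # discard burn-in

P_mean, K_mean = chain[:, 0].mean(), chain[:, 1].mean()
```

Histograms of the chain columns play the role of Figure 5's posterior distributions; a bimodal period histogram would signal the aliasing problem described above.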
To assess the wisdom of adding the extra term $dv/dt$ to our fit, we
perform an $F$-test for an additional term (Bevington \& Robinson 1992).
We define $\Delta \chi^2$ as the difference in unreduced $\chi^2$
between the best fits obtained with and without the $dv/dt$ term, and
$\chi^2_{\nu}$ as the reduced $\chi^2$ of the published fit, including
$dv/dt$. The quantity
\begin{equation}
F = {\Delta \chi^2 \over \chi^2_{\nu}}
\label{fstat}
\end{equation}
follows an $F$-distribution with one numerator degree of freedom and
$\nu = N_{\rm obs} - 7$ denominator degrees of freedom, where $N_{\rm
obs}$ is the number of observations (30 for HD 5319). The best fit
obtained without the $dv/dt$ term has $\chi^2 = 61.4$ ($\sqrt{\chi_{\nu}^2}~$ = 1.60),
giving $F = 18.2$. The probability $P(F;1,23)$ that a randomly selected
F exceeds this value is 0.00029, for a less than 1 in 1,000 chance that
the fit improvement provided by the $dv/dt$ term is spurious.
Therefore, there is strong evidence that a long-period companion is
accelerating the center of mass of the HD 5319~a-b system.
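The arithmetic of this test can be reproduced from the quoted numbers alone; the tail probability of $F(1,23)$ is evaluated below via the regularized incomplete beta function, integrated numerically so that only the standard library is needed:

```python
import math

def simpson(f, lo, hi, n=2000):
    """Composite Simpson's rule (n must be even)."""
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3.0

# Quoted numbers for HD 5319
n_obs, n_par = 30, 7
nu = n_obs - n_par                       # 23 denominator degrees of freedom
chi2_without = 61.4                      # best fit lacking the dv/dt term
chi2_nu_with = 1.22 ** 2                 # reduced chi^2 of the published fit
F = (chi2_without - chi2_nu_with * nu) / chi2_nu_with   # ~18.2

# P(F' > F; 1, nu) = I_{nu/(nu+F)}(nu/2, 1/2), a regularized incomplete beta
u0 = nu / (nu + F)
integrand = lambda u: u ** (nu / 2.0 - 1.0) * (1.0 - u) ** (-0.5)
beta = math.gamma(nu / 2.0) * math.gamma(0.5) / math.gamma(nu / 2.0 + 0.5)
p_value = simpson(integrand, 0.0, u0) / beta            # ~2.9e-4
```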
Noticing a smooth variation in the residuals of a one-planet fit,
one might be tempted to fit a second Keplerian with a longer period.
However, in the case of HD 5319, this is premature: the linear
correlation coefficient of the one-planet residuals is 0.85, indicating
that a linear model describes the variation in these residuals well. We
do not yet detect any curvature in the radial velocity signature of the
second companion, so we refrain from fitting a full Keplerian or a
circular orbit. We allow the period of this long-period companion to
remain undetermined, and approximate its effects on the system with a
constant acceleration $dv/dt$.
\section{HD 75898}
\subsection{Stellar Characteristics}
HD 75898 has $V = 8.03$, $B-V = 0.626$, and Hipparcos parallax (ESA
1997) of $0.0124 ''$, corresponding to a distance of 80.6 parsecs.
Spectroscopic analysis yields $T_{\rm eff}~$ = 6021 $\pm$ 50K, ${\rm \log g}~$ = 4.16 $\pm$
0.15, $v \sin i~$ = 4.54 $\pm$ 0.50 \mbox{km s$^{-1}~$}, and {\rm [Fe/H]} = 0.27 $\pm$ 0.05 dex. The
absolute visual magnitude is $M_V = 3.49$, and the luminosity is
3.0~L$_{\odot}~$ (with bolometric correction of -0.039). Although the SIMBAD
spectral type designation is G0V, the luminosity, temperature and
surface gravity are more consistent with a metal-rich F8V star. The
value of $M_V$ indicates that the star is just beginning to evolve onto
the subgiant branch. From the stellar luminosity and surface gravity,
we derive a stellar radius of $1.6 R_{\odot}$, identical to the radius
derived from evolutionary tracks. A stellar mass of 1.28~M$_{\odot}~$ and an
age of 3.8~Gyr are derived from evolutionary tracks (Takeda \mbox{\rm et al.~} 2007).
The physical parameters of HD 75898 are listed in Table 1.
HD 75898 was selected for the Keck planet search after being observed by
the N2K low-resolution spectroscopic survey (Robinson et al. 2007),
carried out at the 2.1m telescope at KPNO from August 2004 to April
2005. The atmospheric parameters measured from N2K spectra were $T_{\rm
eff} = 5983 \pm 82$~K, ${\rm [Fe/H]} = 0.22 \pm 0.07$~dex, and $\log \;
g = 4.22 \pm 0.13$~dex. These values agree with the Keck measurements
within uncertainties.
Figure 1 shows that the star is chromospherically inactive, with no
observed emission in Ca II H\&K. We derive mean $S_{HK}$ = 0.15 and $\log R^\prime_{HK}$ =
-5.02, with a corresponding rotational period $P_{ROT}~$ = 12.6 d. The
caution that the $P_{ROT}~$ measurement may be affected by systematic errors
for evolved stars applies to HD 75898 as well, since this star is
beginning to move off the main sequence. Wright (2005) reports
20$^{th}$ percentile, median, and 80$^{th}$ percentile jitter amplitudes
of 2.6~\mbox{m s$^{-1}$}, 4.0~\mbox{m s$^{-1}$}, and 6.2~\mbox{m s$^{-1}$}, for stars with similar activity
level and evolutionary stage to HD 75898. Again, we adopt a
conservative, 20$^{th}$ percentile jitter estimate of 2.6~\mbox{m s$^{-1}~$} (Table 1).
The $p$-mode oscillation component of the jitter is $\sim 0.5$ \mbox{m s$^{-1}~$}
(Kjeldsen \& Bedding 1995). The stellar characteristics are summarized
in Table 1.
\subsection{Doppler Observations and Keplerian Fit}
We obtained 20 observations of HD 75898. Observation dates, radial
velocities and instrumental uncertainties in the radial velocities (not
including stellar jitter) are listed in Table 4. The periodogram for
this data set (Figure 7) shows a strong peak at 446 days. Once again,
we calculate the FAP by sampling the observed radial velocities with
replacement, keeping the original observation times, and calculating the
maximum periodogram power for the scrambled velocities. In 10,000
synthetic data sets, no periodogram had higher power than the original
446-day peak. The FAP for this peak is $< 0.0001$ (Table 3), indicating
a better than 0.9999 probability that the periodicity in radial
velocities has an astrophysical source, and is not caused by noise. The
histogram of periodogram power in the tallest periodogram peak in each
of 10,000 trials is plotted in Figure 8.
There is also a peak in the periodogram at 200 days, which may be an
alias of the true $\sim 400$-day period, arising from the $1/2$-year
observing season of HD 75898, which lies near the ecliptic. Two other
possible explanations for the 200-day peak are that it arises from the
modest eccentricity ($e \approx 0.1$) of the best-fit 418-day orbit, or
that there is a second planet in the system with a period near 200 days.
The observations between 2004 January and 2006 May, which do not include
the minimum of the radial velocity curve, can be modeled credibly with
Keplerian orbits of either $\sim 200$ or $\sim 400$ days. However, when
the four most recent observations, which do cover the radial-velocity
minimum, are included, the degeneracy is broken and single planets with
200-day orbits do not fit the data.
The best-fit Keplerian model gives a period of 418.2 $\pm$ 5.7 days,
with velocity semiamplitude $K = 58.2 \pm 3.1$ \mbox{m s$^{-1}$}, and orbital
eccentricity $e = 0.10 \pm 0.05$. The RMS to the fit is 5.48 \mbox{m s$^{-1}~$} with
$\sqrt{\chi_{\nu}^2}~$ = 1.77, including the estimated astrophysical jitter of 2.6 \mbox{m s$^{-1}$}.
Adopting a stellar mass of 1.28 M$_{\odot}~$, we derive $M \sin i~$ = 2.51 $M_{\rm
JUP}$. The corresponding semimajor axis is $a = 1.19$~AU, and the
angular separation is $\alpha = 0.'' 0148$. The residual velocities
show a strong trend, $dv/dt = -14.6$ \mbox{m s$^{-1}~$} yr$^{-1}$, suggesting that an
additional companion orbits the star. The Keplerian orbital parameters
are listed in Table 3 and plotted with the best-fit Keplerian model
(solid line) in Figure 6.
To assess whether the constant acceleration $dv/dt$ should be included
in the fit, we again perform the $F$-test for an additional term given
in Equation \ref{fstat}. The best-fit Keplerian without the $dv/dt$
term has $\chi^2 = 142.5$ ($\sqrt{\chi_{\nu}^2}~$ = 3.19). The $F$-statistic comparing
the best fits with and without $dv/dt$ is 32.5. There are 20
observations of this star, giving the $F$ distribution 13 denominator
degrees of freedom. The probability $P(F;1,13)$ that the fit
improvement from including $dv/dt$ is spurious is only $7.3 \times
10^{-5}$. The detected acceleration of the HD 75898~a-b center of mass
is therefore almost certainly real, and not an artifact of noise.
Further evidence for a long-period companion to HD 75898 is provided by
the periodogram, which rises toward a 2000-day period (almost twice the
length of our observational baseline). The correlation coefficient for
a linear fit to the single-planet residuals is $r = -0.96$, indicating
that variation in RV residuals is well described by a constant
acceleration. The relatively sparse sampling precludes detection of
curvature from any additional planets at this time.
We carried out a Markov Chain Monte Carlo simulation for the radial
velocity residuals of HD 75898. The resulting posterior distributions
for period, radial velocity semi-amplitude, and eccentricity are shown
in Figure 9. For this low-eccentricity orbit, time of
periastron passage and longitude of periastron are not well constrained.
The fact that the MCMC eccentricity distribution peaks at zero suggests
that the orbit of HD 75898~b could in fact be circular. The mean
eccentricity in the MCMC posterior distribution is $0.1$, in agreement
with the Levenberg-Marquardt value of $0.10 \pm 0.05$. The mean of the
MCMC period distribution is 417~days, which is well matched with the
results of the LM analysis ($P = 418$~days). For velocity
semi-amplitude $K$, the MCMC results also reproduce the LM results, with
mean $K = 58$~\mbox{m s$^{-1}~$}.
Being near the ecliptic ($\delta = 33\degr$), with a period near one
year (418 days), HD 75898 presents a special hazard for planet
detection. The observing season for HD 75898 is just 7 months, so only
half of the orbital phase is visible during one year. At the same time,
the visible phase of HD 75898~b's orbit advances only $12\%$ per year.
Although our observational baseline covers 3 years and 2/3 of the orbit,
it would take 5 years to obtain full phase coverage. We expect the
orbital solution to be revised as more observations of HD 75898 are
obtained.
Observations near periastron passage contain the most information about
the orbit, particularly eccentricity (e.g. Endl \mbox{\rm et al.~} 2006). If our
best-fit orbit is correct, we have observed the periastron passage to
within 4 days (JD 2453747). This fact, combined with the results of the
MCMC simulation for eccentricity, leads us to believe that our basic
discovery is correct, that HD 75898 has a planet with a minimum mass
of 2~M$_{\rm JUP}~$ in a nearly circular orbit near 1 AU.
\section{Long-Period Companions}
The radial velocity residuals of HD 5319 and HD 75898 have significant
linear trends, $|dv/dt| \geq 9$~\mbox{m s$^{-1}~$} yr$^{-1}$. This indicates that the
center of mass of each two-body system is accelerating, which cannot
happen unless there is a third component in each system. Both stars,
then, show evidence of long-period companions with incomplete phase
coverage during our observational baseline. The possibility of finding
brown dwarfs orbiting sunlike stars is a tantalizing one, warranting
further analysis of the one-planet residuals of HD 5319 and HD 75898.
Brown dwarf companions might reside in the brown dwarf desert (McCarthy
\& Zuckerman 2004, Grether \& Lineweaver 2006), the dearth of substellar
companions to main-sequence stars with $a < 1200$~AU. Another
possibility is that the third components are giant planets with $P \ga
2000$ days. Even the presence of stellar companions would make HD 5319
and HD 75898 unusual planet hosts, as new evidence indicates planet
occurrence is infrequent in binaries closer than 120~AU (Eggenberger \&
Udry 2007). In this section, we analyze possible configurations of the
HD 5319 and HD 75898 systems.
The possible companion types---planets, brown dwarfs, and stars---are
restricted to a particular semimajor axis range by the measured $dv/dt$
and the long-term dynamical stability of each system. Although these
ranges overlap substantially when all potential variations in time of
periastron passage, line of apsides and eccentricity are taken into
account, the general pattern is $a_* \ga a_{\rm bd} \ga a_{\rm planet}$.
This pattern can be illustrated by the simple example of a circular
orbit: the stellar reflex velocity reaches the semiamplitude $K$ once the
planet has moved 1/4 orbit from its ephemeris position, so we can
estimate the semiamplitude as
\begin{equation}
K \approx \left ( {P \over 4} \right ) {dv \over dt}.
\label{perk}
\end{equation}
Equation \ref{perk} shows that the longer a star maintains the measured
constant $dv/dt$, the higher the mass of the companion. For eccentric
orbits, the proportionality constant relating $K$ and $P$ changes, but
the pattern $a_* \ga a_{\rm bd} \ga a_{\rm planet}$ holds.
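Combining Equation \ref{perk} with the mass function shows how quickly the implied companion mass grows with assumed period. A rough sketch for the HD 5319 trend (circular orbits, $\sin i = 1$; this is a scaling argument, not the paper's actual slope-fitting procedure):

```python
import math

G = 6.674e-11                        # m^3 kg^-1 s^-2
M_SUN, M_JUP = 1.989e30, 1.898e27    # kg
YR = 3.156e7                         # seconds

dvdt = 9.11                          # m/s/yr, HD 5319 residual trend
M_star = 1.56 * M_SUN

# For each assumed period, Eq. (perk) gives K, and the mass function
# (circular orbit, m << M_star) converts K to a companion mass.
masses = {}
for P_yr in (5.0, 10.0, 20.0):
    K = (P_yr / 4.0) * dvdt
    P = P_yr * YR
    masses[P_yr] = K * (P * M_star**2 / (2.0 * math.pi * G)) ** (1.0 / 3.0) / M_JUP

for P_yr, m in masses.items():
    print(f"P = {P_yr:4.1f} yr   m sin i ≈ {m:.1f} M_Jup")
```

Doubling the assumed period roughly doubles $K$ and more than doubles the implied mass, which is why long-lived linear trends point toward massive companions.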
The smallest possible semimajor axis for component c, $a_{\rm min}$, is
determined by the requirement that the two companions in each system do
not experience close encounters, which could lead to large perturbations
of both orbits. Absent any dynamical considerations, $a_{\rm min}$
would correspond to a highly eccentric orbit with the apoastron passage
near the midpoint of our observations, and with a period only slightly
longer than our time baseline. However, the more eccentric the outer
component's orbit, the nearer its approach to the inner planet during
periastron passage.
Assuming nonresonant systems, we can set a lower limit to the distance
of closest approach between components b and c. David \mbox{\rm et al.~} (2003)
examine the stability of a two-planet system: an intermediate-mass
companion exterior to an Earth-mass planet on a circular orbit at 1~AU.
They find that an outer planet with mass $1 M_{\rm JUP}$ and $R_{\rm
peri} \sim 2.5$~AU gives a mean ejection time of 1~Gyr for the
terrestrial planet. Although HD 5319 and HD 75898 have far more massive
inner planets than the theoretical system of David et al., we apply
their analysis because of the similar orbits of the inner planets in all
three systems. Adopting the 1~Gyr stability criterion, we set the
minimum periastron distance of the c component as $R_{\rm min} \geq 2.5
\, a_{\rm inner}$. For each component type (planet, brown dwarf, star),
the orbit corresponding to $a_{\rm min}$ must obey this stability
criterion and reproduce the observed $dv/dt$ within uncertainties.
In this section, we refer to HD 5319~c, HD 75898~c and ``the c
components.'' We are using this nomenclature as shorthand for ``implied
long-period companion,'' and are not claiming actual detections of
these objects.
\subsection{Planet Orbits}
\label{porbs}
If HD 5319~c or HD 75898~c is a planet, $a_{\rm min}$ is simply the
stability limit $R = 2.5 \, a_{\rm inner}$, and the orbit associated
with $a_{\rm min}$ is circular. We note that giant planets on circular
orbits can have semimajor axis ratios less than 2.5, as Jupiter and
Saturn do, but $a_{\rm outer} / a_{\rm inner} < 2$ is rare among
exoplanets (Butler et al. 2006). To find the minimum planet
mass for HD 5319~c and HD 75898~c, we substitute test values of $M \,
\sin i$ into the equations
\begin{equation}
\left ( {a \over {\rm AU}} \right )^3 = \left ({M_{\star} + M \, \sin i
\over M_{\odot}} \right ) \left ({P \over {\rm yr}} \right )^2,
\label{kepler}
\end{equation}
\begin{equation}
M \, \sin i = K \sqrt{1-e^2} \left [ {P (M_{\star} + M \, \sin i)^2
\over 2 \pi G} \right ]^{1/3},
\label{msini}
\end{equation}
and use the resulting value of $K$ to calculate a radial velocity curve.
$M_{\rm min,planet}$ is the lowest value of $M \, \sin i$ for which the
radial velocity slope, determined from a linear fit, matches the
observed $dv/dt$ within the uncertainties reported in Table 3:
$|dv/dt_{\rm calc} - dv/dt_{\rm obs}| \leq \sigma(dv/dt)$.
The minimum mass of HD 5319~c is $1.0 \; M_{\rm JUP}$. This planet
would reside at $a = 4.4$ AU, and have a period of 2675 days (7.3 yr).
Figure 10 shows the radial velocity curve corresponding to this orbit,
together with the observed trend in the fit residuals of HD 5319~b. HD
75898~c also has a minimum mass $M_{\rm min} = 1.0 \: M_{\rm JUP}$, in a
circular orbit with semimajor axis $a = 3.0$ AU and $P = 1656$ days (4.5
yr). The resulting radial velocity curve, plus the measured trend in
the HD 75898~b residuals, are shown in Figure 11. These orbital
solutions show that HD 5319~c and HD 75898~c could be similar, in
mass, semimajor axis and perhaps equilibrium temperature, to Jupiter.
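The minimum-mass estimate can be reconstructed to good accuracy with a shortcut: a circular orbit's steepest RV slope is $K \cdot 2\pi/P$, so the smallest mass whose steepest slope reaches the observed $dv/dt$ approximates $M_{\rm min}$. This is not the paper's full procedure (which fits linear slopes to generated RV curves), but for HD 5319 it reproduces the quoted $a = 4.4$~AU, $P \approx 2675$~d, and $M_{\rm min} \approx 1 \, M_{\rm JUP}$:

```python
import math

G = 6.674e-11                              # m^3 kg^-1 s^-2
M_SUN, M_JUP, AU = 1.989e30, 1.898e27, 1.496e11
YR = 3.156e7

# HD 5319: inner planet at 1.75 AU, residual trend 9.11 m/s/yr
M_star = 1.56 * M_SUN
a = 2.5 * 1.75 * AU                        # stability limit, circular orbit
dvdt_obs = 9.11 / YR                       # observed trend, m/s per second

def max_slope(m):
    """Steepest RV slope (at the zero crossing) produced by mass m (kg)
    on a circular orbit of semimajor axis a: slope_max = K * 2*pi / P."""
    P = 2.0 * math.pi * math.sqrt(a**3 / (G * (M_star + m)))
    K = (2.0 * math.pi * G / P) ** (1.0 / 3.0) * m / (M_star + m) ** (2.0 / 3.0)
    return K * 2.0 * math.pi / P

# Bisect for the smallest mass whose steepest slope matches the trend
lo, hi = 0.1 * M_JUP, 20.0 * M_JUP
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if max_slope(mid) < dvdt_obs:
        lo = mid
    else:
        hi = mid
m_min = hi

P_min = 2.0 * math.pi * math.sqrt(a**3 / (G * (M_star + m_min))) / 86400.0  # days
```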
For the maximum possible planet mass of HD 5319~c or HD 75898~c, we
adopt the IAU criterion that a planet does not burn deuterium, and so
$M_{\rm max} = 13 \: M_{\rm JUP}$ (Boss et al. 2007). We can calculate
$a_{\rm max}$ and $P_{\rm max}$ for this borderline planet by examining
the limiting case where the periastron passage, which coincides with
ephemeris, occurs at the midpoint of our observations. This is the part
of the radial velocity curve that varies most rapidly. In principle,
$a_{\rm max}$ and $P_{\rm max}$ could become arbitrarily large as $e
\rightarrow 1$. We adopt the convention of Patel et al. (2007) and
define $e_{\rm max} = 0.8$, since $90\%$ of spectroscopic binaries with
$P > 10$ yr have $e < 0.8$ (Pourbaix et al. 2004).
To calculate $a_{\rm max}$ for planet orbits, we substitute test values
of $a$ into equations \ref{kepler} and \ref{msini}, and use the
resulting value of $K$ to calculate a radial velocity curve. We then
find the maximum $a$ for which $|dv/dt_{\rm calc} - dv/dt_{\rm obs}|
\leq \sigma(dv/dt)$. For HD 5319~c, $a_{\rm max} = 85$ AU,
corresponding to a period of 625 yr. For HD 75898~c, $a_{\rm max} = 65$
AU and $P_{\rm max} = 460$ yr. In practice, it is extremely unlikely
that either object is a planet on this type of orbit: the probability of
catching these long-period, eccentric orbits exactly at periastron
passage is quite low. The ranges of possible planet orbits for HD
5319~c and HD 75898~c are summarized in Table 5.
\subsection{Brown Dwarf Orbits}
\label{bdorbs}
To find $a_{\rm min}$ for brown dwarf orbits, we examine the limiting
case where apoastron coincides with ephemeris at the midpoint of our
observational baseline. This configuration gives the RV curve that most
nearly approximates a straight line. Recalling that high mass implies
a large semimajor axis (cf. Equation \ref{perk}), we set
$M \sin i~$~=~13~M$_{\rm JUP}~$, the minimum possible brown dwarf mass. Substituting
test values of $a$ and $e$ into Equations \ref{kepler} and \ref{msini},
we find the semiamplitude $K$ for each $a$, $e$ pair that meets the
stability criterion $a \, (1-e) \geq 2.5 \, a_{\rm inner}$. The linear
slopes of the resulting radial velocity curves are examined to find the
minimum $a$ where $\Delta(dv/dt) \leq \sigma(dv/dt)$ (Table 3). For HD
5319~c, $a_{\rm min} = 9.8$ AU for a brown dwarf. This orbit has $e =
0.55$ and $P = 24$ yr. If HD 75898~c is a brown dwarf, $a_{\rm min} =
7.5$ AU and $P_{\rm min} = 18$ yr, with eccentricity $e = 0.60$.
We determine $a_{\rm max}$ for brown dwarf orbits by setting
$M \sin i~$~=~83.8 M$_{\rm JUP}~$, the minimum mass for hydrogen fusion. We follow
the same method outlined in \S \ref{porbs} for finding $a_{\rm max}$,
once again assuming $e_{\rm max} = 0.8$. For HD 5319~c, $a_{\rm max} =
190$ AU and $P_{\rm max} = 2045$ yr for brown dwarf orbits. For HD
75898~c, $a_{\rm max} = 170$ AU and $P_{\rm max} = 1900$ yr.
We note that the brown dwarf semimajor axis ranges implied by our
measured $dv/dt$, $9.8 \la a \la 190$ AU for HD 5319~c and $7.5 \la a
\la 170$ AU for HD 75898~c, fall directly in the brown dwarf desert
(McCarthy \& Zuckerman 2004). The ranges of possible brown dwarf
periods and semimajor axes for HD 5319~c and HD 75898~c are summarized
in Table 5.
\subsection{Star Orbits}
\label{storbs}
To find $a_{\rm min}$ for stellar companions to HD 5319 and HD 75898, we
set $M \sin i~$~=~83.8~M$_{\rm JUP}~$, the minimum mass for a main-sequence star.
Once again, we set the apoastron passage to coincide with the ephemeris
and place it at the midpoint of our observations, finding the best
approximation to a linear RV curve. We follow the procedure outlined in
\S \ref{bdorbs}, testing a grid of $a$ and $e$ values which meet the
stability criterion and finding the minimum $a$ for which the linear RV
slope matches our measured $dv/dt$ within uncertainties (Table 3). The
minimum semimajor axis for HD 5319~c, if it is a star, is $a_{\rm min} =
22$ AU, with $P_{\rm min} = 81$ yr and $e = 0.8$. For HD 75898~c,
$a_{\rm min,\star} = 17$ AU, $P_{\rm min,\star} = 58$ yr and $e = 0.8$.
We determine the maximum masses of stellar companions to HD 5319 and HD
75898 by noting that neither star was identified as a double-lined
spectroscopic binary (SB2) in the Keck spectra. The minimum flux ratio
for detecting SB2s with HIRES is $\sim 0.01$. This limit gives $M_V >
8.05$ for HD 5319~c and $M_V > 8.49$ for HD 75898~c. The corresponding masses
are $M \sin i~$~=~0.65 M$_{\odot}~$ and $M \sin i~$~=~0.6 M$_{\odot}~$, respectively (Yi et al.
2001). Assuming periastron passage and ephemeris fall at the midpoint
of our observations and $e_{\rm max} = 0.8$, HD 5319~c has $a_{\rm max}
= 630$ AU and $P_{\rm max} = 10600$ yr. HD 75898~c has $a_{\rm max} =
470$ AU and $P_{\rm max} = 7400$ yr. The possible stellar orbits for HD
5319~c and HD 75898~c are summarized in Table 5. Note that these orbit
determinations are extremely uncertain, as we are using a 3-yr
observational baseline to characterize orbits in the $10^2 - 10^4$-year
range.
\section{Discussion and Conclusions}
We have discovered two Jovian-mass planets in Earthlike orbits, $1 < a <
2$~AU, orbiting the stars HD 5319 and HD 75898. Target selection of
both stars was performed by the N2K Consortium (Fischer et al. 2005).
For HD 75898, which was observed as part of the N2K low-resolution
spectroscopic survey (Robinson et al. 2007), we find good agreement
between the N2K and Keck atmospheric parameter estimates.
At 1.56~M$_{\odot}~$, HD 5319 is on the verge of being a ``retired'' A-star
($M_{\star} > 1.6$~M$_{\odot}~$) of the type discussed by Johnson et al.
(2007). In all 9 previously known former A-dwarf planetary systems, the
planets orbit at semimajor axes $a \geq 0.78$~AU. HD 5319~b fits this
pattern well, with $a = 1.75$~AU. Although the total number of known
planet hosts with $M > 1.6$~M$_{\odot}~$ is small, Johnson et al. concluded
that the dearth of short-period planets around these stars is real, and
the semimajor axis distributions of planets orbiting intermediate-mass
and low-mass stars are different. Furthermore, engulfment by the
expanding subgiant can only explain the disappearance of planets
orbiting at $a < 30 R_{\odot}$. Burkert \& Ida (2007) point out that
the lack of short-period planets orbiting intermediate-mass stars can be
explained if these stars' protostellar disks have a shorter depletion
timescale than their low-mass counterparts.
Among orbits larger than the tidal circularization cutoff of 0.1 AU,
circular orbits, while not rare, are certainly not preferred. Butler et
al. (2006) report that the distribution of eccentricities is nearly
uniform beyond 0.3 AU. However, Meschiari et al. (2007, submitted)
performed a blind experiment where they presented users of the
Systemic\footnote{http://oklo.org} radial velocity-fitting console with
synthetic radial velocity data sets drawn from circular orbits. The
recovered eccentricity distributions had median values between 0.1 and
0.2, indicating a bias toward finding eccentric orbits. If the median
exoplanet eccentricity has been skewed higher than its true value by the
planet discovery process, solar system-like orbits, which seem
noteworthy in the context of so many eccentric exoplanets, may be quite
common. With $1 < a < 2$ AU and $e \approx 0.1$, HD 5319~b and HD
75898~b have orbits quite similar to our own terrestrial planets.
HD 5319 and HD 75898 have radial velocity residuals that imply
additional companions in the system. To account for our measured
center-of-mass accelerations, HD 5319~c and HD 75898~c must both be at
least 1~M$_{\rm JUP}~$. If the periods and masses of these objects are near the
minimum values recorded in Table 5, further radial velocity observations
might add these stars to the known list of multiple-planet systems
within a few years. However, it is likely that these objects have
periods too long for radial-velocity follow-up. In that case, HD 5319
and HD 75898 are good candidates for high-resolution imaging. The
NIRC-2 coronagraph spot size is $0.'' 5$, which would restrict the
detection space to $a > 50$~AU for HD 5319 and $a > 40$~AU for HD 75898.
With the NIRC-2+AO limiting contrast ratio of $0.1\%$, this detection
space includes massive brown dwarfs and low-mass stars. The analytical
work of Matzner \& Levin (2005) supports the hypothesis that
protostellar disk fragmentation is not a viable formation mechanism for
star-brown dwarf binary pairs. HD 5319 and HD 75898 could therefore
serve as laboratories for investigating the presumably rare phenomenon
of brown dwarf formation in protostellar disks.
\acknowledgements
SER thanks Eugenio Rivera and Peter Bodenheimer for helpful input on
this work. We gratefully acknowledge the dedication and support of the
Keck Observatory staff, in particular Grant Hill for support with HIRES.
We thank the NASA and UC Telescope assignment committees for generous
allocations of telescope time. The authors extend thanks to those of
Hawaiian ancestry on whose sacred mountain of Mauna Kea we are
privileged to be guests. Without their kind hospitality, the Keck
observations presented here would not have been possible. The authors
have made use of the SIMBAD database, the Vienna Atomic Line Database,
and NASA's Astrophysics Data System.
This research is made possible by the generous support of Sun
Microsystems, NASA, and the NSF. SER was supported by the National
Science Foundation Graduate Research Fellowship. GL received support
from the NSF Career grant (No. 0449986). SSV's work was supported by
the NSF grant AST-0307493. DAF was supported by Research Corporation's
Cottrell Science Scholar program and by NASA grant NNG05G164G. We thank
the Michelson Science Center for travel support through the KDPA
program.
{\it Facilities:} \facility{Keck I (HIRES)}, \facility{APT}
Q: How to insert a JPEG image into an Excel sheet

I'm able to generate tables with all the required columns and with data from an Excel sheet. Now I want to insert an image that is present in my bin. How could I achieve this?
A: Look here:
c# excel image
Paste the image through the clipboard.
Students of all races, cultural backgrounds and religions united for dinner and dancing at the Thanksgiving Gala Friday night at the J.W. Marriott, a hotel in Northwest D.C.
The event, sponsored by the Program Board, the International Student Organization, the International Student Society and the Student Activities Center, was the culmination of Religion Week.
"It's a great way to get off campus and still be with students," senior Melissa Peterson said.
The theme of the night was "The Spirit of Faith," and the guests were given power beads, which symbolize things such as trust and love.
The event drew about 150 guests, who grooved on the dance floor to the sounds of Steve, a disc jockey who plays at local clubs such as Polly Esthers.
"It's a completely secular party," said Ni-cheng Liang, co-chair of Cultural Affairs for PB. "We wanted to unite the international students with the American students."
"With international students, they are not intermingled with us very often," said Liang's co-chair Andrea Bautista. "This is a good excuse to get together. They don't go away for Thanksgiving. This is a way to intermingle with U.S. kids and learn a little bit."
"I think it gives you a different perception of our University," senior Edith Valenzuela said.
Guests said they enjoyed the event, but some said they wished that it was not so American-centric.
"It's hard to complain about," said senior Nick Krupa. "It's nice to have a cultural platform. They are playing all American music."
Dunk Yabe and Koruko Tsunasachima, graduate students who attended the event, said they believed the event was a good idea but said there still has to be more done to bring together international and American students.
Senior Jennifer Anderson praised the event for getting students together.
"You're not in your little bubble," Anderson said.
Christian Julius Wilhelm Schiede (February 3, 1798 – December 1836) was a German physician and botanist born in Kassel.
He studied natural sciences and medicine in Berlin and Göttingen, where he earned his doctorate in 1825. Afterwards he practiced medicine in Kassel.
In 1828 Schiede emigrated to Mexico, accompanied by Ferdinand Deppe (1794-1861), a German naturalist with previous experience in the country. The two scientists planned to collect zoological and botanical specimens, which would then be sold to museums and dealers in Europe. In July 1828 they settled in Jalapa, and performed scientific excursions throughout the state of Veracruz. Although they were able to sell their collections to museums in Berlin and Vienna, the money earned was insufficient to continue operations, causing Deppe and Schiede to abandon their enterprise in late 1830. Christian Schiede died in Mexico in 1836 at the age of 38.
The botanical genera Schiedeella and Schiedea are named after him, as is a species of lizard, Anolis schiedei.
Publications
Über Bastarde im Pflanzenreich, 1824 – Hybrids in the plant kingdom.
Schiede, C. J. W. De plantis hybridis sponte natis (1825) on BioLib
Befruchtung der pflanzen, 1825 – Fertilization of plants.
De plantis Mexicanis a G. Schiede (1830-1844); with Diederich Franz Leonhard von Schlechtendal, Ferdinand Deppe and Adelbert von Chamisso.
Bibliography on WorldCat
References
External links
UNI Goettingen Department of Systematics, Biodiversity and Evolution of Plants (with Herbarium).
Repository Naturalis.
IPNI Plants named for Schiede.
19th-century German botanists
1798 births
1836 deaths
Scientists from Kassel
German emigrants to Mexico
Source: http://math.stackexchange.com/questions/54572/conjecture-forall-x-exists-m-n-xmn-and-make-pip-mm-m-p

# Conjecture: $\forall x, \exists m,n$, $x<m<n$ and make $\pi(p_{m}+m) - m > \pi(p_{n}+n) - n$

$p_i$ is the $i^{\rm th}$ prime. $\pi(x)$ is the prime counting function.

Firstly, I think that the prime gap inequality $p_{i+1} - p_{i} \leq i$ holds true for any $i>0$.

Very often, $\pi(p_{m}+m) - m \leq \pi(p_{n}+n) - n$ if $m<n$. However, there exist counterexamples, e.g. $\pi(17+7)-7 > \pi(19+8)-8$. I conjecture that there exist infinitely many counterexamples of this sort. In mathematical terms:

Conjecture: $\forall x, \exists m,n$ satisfying $x<m<n$ and $\pi(p_{m}+m) - m > \pi(p_{n}+n) - n$.

Comments:

- Srivatsan: I have edited the typesetting a bit; most significantly, I replaced the $p[i]$ notation with the more standard subscript notation $p_i$.
- t.b.: How much and what kind of evidence do you have supporting your conjecture? Did you run some computer experiments (this should be rather easy to do)?
- a boy (OP): `p = Prime; pi = PrimePi; Table[pi[p[n] + n] - n, {n, 400, 500}]`
- t.b.: Thanks. I was intending to suggest something that would be a serious test, not something that takes only a few microseconds; $\sim 35{,}000$ isn't very big.
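The OP's Mathematica one-liner can be mirrored in plain Python. The sketch below (a simple sieve-based $\pi(x)$, written for this note and not taken from the thread) verifies the cited counterexample $m=7$, $n=8$ and scans for further adjacent-index counterexamples:

```python
# Numeric check for the conjecture: build primes and the prime-counting
# function pi(x) with a sieve, then look for m < n with
# pi(p_m + m) - m > pi(p_n + n) - n.
LIMIT = 10_000

is_prime = [True] * (LIMIT + 1)
is_prime[0] = is_prime[1] = False
for i in range(2, int(LIMIT ** 0.5) + 1):
    if is_prime[i]:
        for j in range(i * i, LIMIT + 1, i):
            is_prime[j] = False

primes = [i for i in range(LIMIT + 1) if is_prime[i]]   # primes[0] == 2
pi = [0] * (LIMIT + 1)                                  # pi[x] = number of primes <= x
count = 0
for x in range(LIMIT + 1):
    if is_prime[x]:
        count += 1
    pi[x] = count

def f(i):
    """The quantity pi(p_i + i) - i from the conjecture (1-based index i)."""
    return pi[primes[i - 1] + i] - i

# The counterexample cited in the question: m = 7, n = 8
print(f(7), f(8))            # pi(17+7)-7 = 2  >  pi(19+8)-8 = 1

# Scan for counterexamples with consecutive indices m = n-1 below 500
hits = [n - 1 for n in range(2, 500) if f(n - 1) > f(n)]
print(len(hits), hits[:5])
```

A counterexample with consecutive indices occurs exactly when the interval $(p_{n-1}+n-1,\, p_n+n]$ contains no prime, which is why small prime gaps make them plausible.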
Preview AYP Data Tables Available Now!
June 30th, 2012
DISCOUNTS AND FREE MATERIALS* NOW AVAILABLE!!!
June 20th, 2012
Austin school board members Monday night approved continuing to give owners of historic properties tax breaks.
In a 6-2 vote, the board approved for another year its participation in the city's program to give property tax exemptions to historic properties. Trustees Vince Torres and Robert Schneider voted against; Trustee Tamala Barksdale was not present.
The district would have netted about $350,000 if trustees had not passed the measure. The district could have collected more than $1.8 million in property taxes but would have sent nearly $1.5 million to the state under school finance laws.
The board Monday night also approved the district's $724.2 million expenditure budget, confirmed the appointments of two principals and filled several executive positions, naming Mel Waxler, the district's attorney since 2000, as chief of staff.
\section{Introduction}
{\bf The physics of Mott systems is characterized by an interaction-driven metal-to-insulator transition (MIT) in a partially filled band. In the resulting insulating state, magnetic order of the local moments (often antiferromagnetic (AFM)) typically develops, but in rare situations no long-range magnetic order appears, even at zero temperature, rendering the system a ``quantum spin liquid". In Mott insulating oxides, intriguing charge, spin and/or orbital orderings are often found in the presence of localized carriers, e.g. in manganites, while mobile carriers may experience strong quantum fluctuations resulting in non-Fermi liquid (NFL) behavior, such as the ``strange metal regime" (a linear-$T$ resistivity) in the cuprates.
Despite this diversity, the underlying energetic landscape in these materials is derived from three essential parameters: the on-site electron-electron repulsion $U$, the energy difference $\Delta$ from the empty local $d$-orbitals to the band-like oxygen $p$-states, and their hybridization strength, which is encapsulated in the bandwidth, $W$.
A fundamental and technologically critical question is whether one can tune these parameters to control both MITs and Neel transitions, and even stabilize latent metastable phases, ideally on a platform suitable for applications.
Here we demonstrate how to achieve control of all these features in ultrathin films of NdNiO$_{3}$ grown on substrates of various degrees of lattice mismatch. In particular, upon the decay of the AFM Mott insulating state into a stable NFL phase distinctly different from that in the cuprates, we find evidence of a quantum MIT that spans a non-magnetic insulating phase (possibly a quantum spin liquid). These quantum critical behaviors are not observed in the bulk phase diagram of NdNiO$_{3}$.}
With recent advances in the atomic layering of correlated oxides \cite{Chakhalian2012,Mannhart}, a new route has been established for manipulating the low-energy electronic structure at the nanoscale. Although the quantum criticality of the MIT and the associated magnetic transition has been a key issue in correlated electron systems for decades, it has not been investigated under the versatile controls of oxide heteroepitaxy due to the formidable challenges of probing AFM order in ultrathin layers. Leveraging this experimental approach with theoretical advances, however, could bring us closer to ultimately establishing general ``design" rules for engineering desired phases in complex oxide heterostructures (Fig.~\ref{phase-control}(a)).
With this goal in mind, we performed a detailed study on ultrathin $\sim\!5.7$ nm (15 unit cells) films of fully strained NdNiO$_{3}$ (oriented along the pseudocubic [001]-direction) synthesized by laser Molecular Beam Epitaxy (MBE) in a layer-by-layer fashion as described in Ref.[\onlinecite{Jian}]. A series of high-quality perovskite-based single crystal substrates is used to attain a wide range of lattice mismatch $\varepsilon$ from $-2.9$\% to $+4$\% (more details in \cite{Supplemental}). The results reveal full control of epitaxy on the MIT and the magnetic ordering. Specifically, tuning the amount of epitaxial strain from the tensile to compressive side first merges the MIT and AFM transition, followed by a rapid decay of the magnetic ordering into a spin-disordered phase (possible quantum spin liquid) before stabilizing a conducting NFL phase. In this work, we show the underlying electronic reconstruction is associated with simultaneous modulation of the bandwidth, $W$, and the self-doping, determined by $\Delta$.
In the bulk, NdNiO$_{3}$ belongs to the charge-transfer nickelate family RENiO$_{3}$, where RE=Nd{\dots}Lu (except for La) are paramagnetic metals (PM) at high temperatures, but become insulating with charge-ordering and antiferromagnetically ordered at $T_{\textrm{MI}}$ and $T_\textrm{N}$, respectively \cite{Catalan0,Medarde1}. An important characteristic of the AFM ordering is the presence of thermal hysteresis in transport properties around the transition, due to the coupling between the spin and charge degrees of freedom when $T_{\textrm{N}}$ and $T_{\textrm{MI}}$ approach each other, as for RE=Pr and Nd in the bulk; as the temperature is lowered well below $T_{\rm N}$ the hysteretic transport behavior is strongly suppressed and eventually disappears \cite{Catalan0,Medarde1}. Within the Sawatzky-Allen-Zaanen (SAZ) scheme, RENiO$_{3}$ belongs to the class of charge-transfer-type materials where the charge gap corresponds to the excitation of an oxygen 2$p$-electron into the unoccupied upper nickel $d$-band \cite{Zaanen,Mizokawa0,Sarma} as shown in Fig.~\ref{K-edge}(a)-(c). The unusually high 3+ oxidation state of Ni and the presence of a small excitation energy ($\Delta\lesssim 1$ eV), as schematically illustrated in Fig.~\ref{K-edge}(b) and (c), naturally facilitate the transfer of oxygen $p$-electrons into the unoccupied nickel $d$-electron states (or, alternatively, a transfer of a correlated hole onto oxygen) \cite{Khomskii}.
This `self-doping' phenomenon results from the coupling of a band-like continuum of oxygen-derived states and localized correlated $d$-states, and is believed to be responsible for the unusual AFM spin ordering ($E^{\prime}-$type) \cite{Mizokawa,Catalan0,Medarde1,Alonso,Staub, Goodenough}, sometimes described as an ``$\uparrow-\uparrow-\downarrow-\downarrow$" stacking sequence of ferromagnetically (FM) ordered planes along the pseudo-cubic (111) direction which is characterized by the magnetic vector $\bf{k}=$(1/4,1/4,1/4) in cubic notation (see Fig.~\ref{phase-diagram}(d)).
Fig.~\ref{phase-diagram}(a) summarizes the evolution of the electronic and magnetic states in the lattice mismatch-temperature phase space. As seen, despite the ultrathin form and the large lattice mismatch on some of the substrates, the metallicity is well preserved for all samples at room temperature. A direct inspection of the temperature-dependent resistivity curves from 5 to 300 K for different lattice mismatch, $\varepsilon$ in Fig.~\ref{phase-diagram}(b) indicates the presence of well controlled and diverse electronic phase behaviors in the ultrathin films that are absent in bulk NdNiO$_3$. Specifically, for samples in the positive $\varepsilon$ range (under tensile strain), the insulating ground state continuously develops with increasing magnitude of $\varepsilon$, whereas an unusual metallic NFL phase emerges and persists throughout the whole range of negative $\varepsilon$ (compressive strain).
The absence of the bulk-like first-order MIT near $\varepsilon \approx 0$ manifests the unique role of heteroepitaxy in destabilizing the Mott insulating state with charge and spin orderings; due to the interface-imposed lattice boundary condition, collective long-range order that strongly couples to the lattice degrees of freedom may be frustrated. In addition, even for $\varepsilon=0$, a substrate can still strongly distort the film structure via internal structural mismatches such as octahedral rotations, distortions and differences in crystal symmetry.
Meanwhile, the strain-induced MIT signals that the heteroepitaxial NdNiO$_3$ is in close proximity to quantum criticality near $\varepsilon \approx 0$. Although the discrete values of $\varepsilon$ limit the ability to precisely pinpoint the location of the critical end point of the ``E$^{\prime}$-AFI" region, magnetism is rapidly suppressed on the insulating side upon approach to $\varepsilon \approx 0$ and appears to join the low-temperature NFL region for $\varepsilon<0$ by an intervening ground state without long-range magnetic ordering (a possible quantum spin liquid) as shown in Fig.~\ref{phase-diagram}(a). The intervening ground state is inferred at $\varepsilon=+0.3\%$ from the lack of thermal hysteresis in resistivity (which is present at all larger $\varepsilon$ values--see supplemental), and also the absence of the magnetic ordering peak in resonant X-ray diffraction (see below).
On the insulating side ($\varepsilon>0$), Fig.~\ref{phase-diagram}(a) shows the evolution of the characteristic transition temperatures, from the high-temperature metallic phase to the intermediate-temperature paramagnetic insulating (PI) phase, $T^{**}$ (the resistivity minimum temperature), and to the low-temperature $E'$-AFI phase transition temperature, $T_\textrm{N}$ (the hysteretic inflection point of the resistivity \cite{Supplemental}). In particular, for large values of $\varepsilon$, $T^{**}$ is significantly higher than $T_\textrm{N}$. This behavior is sharply distinct from bulk NdNiO$_3$, where $T^{**}(=T_{\rm MI})$ coincides with $T_\textrm{N}$. The resulting intermediate-temperature PI phase thus implies the opening of a gap which is decoupled from the spin ordering. As $\varepsilon$ is reduced, $T^{**}$ and $T_\textrm{N}$ merge, albeit with different slopes, i.e.\ while $T^{**}$ quickly decreases, $T_\textrm{N}$ steadily rises until $\varepsilon \approx +1.8\%$. This convergence of critical temperatures is further evidenced by the enhanced thermal hysteresis around $\varepsilon\approx +1.8\%$ [see Fig.~\ref{phase-diagram}(b) and Supplemental]. It is important to note that this PI phase is unattainable in bulk NdNiO$_3$ -- an example of a latent electronic phase in this system stabilized by the heterointerface.
As shown in Fig.~\ref{phase-diagram}(a), for $\varepsilon\lesssim+1.8\%$, the evolution of $T_\textrm{N}$ qualitatively changes so that $T_\textrm{N}$ and $T^{**}$ exhibit approximately a ``parallel dive" in response to reducing $\varepsilon$. Upon further lowering $\varepsilon$ toward zero, $T_\textrm{N}$ rapidly vanishes accompanied by a drastically weakened thermal hysteresis in resistivity \cite{Supplemental}, e.g. at $\varepsilon\approx +1.1\%$ (see Fig.~\ref{phase-diagram}(b)); the thermal hysteresis completely recedes from the system at around $\varepsilon\approx +0.3\%$ (see Supplemental for more detailed plots). On the other hand, $T^{**}$ remains finite with the tendency of suppression toward zero. This is in sharp contrast with the bulk where increasing hydrostatic pressure always favors magnetic ordering \cite{Zhou0}.
The difference in the behavior of $T^{**}$ and $T_\textrm{N}$ thus strongly suggests the emergence of an unusual weakly insulating ground state with completely quenched long-range AFM order in the vicinity of $\varepsilon=0\%$ .
To further corroborate this magnetic behavior, resonant magnetic X-ray diffraction \cite{Doering} at the Ni $L_3$-edge is utilized to directly track the $E^{\prime}-$type AFM ordering \cite{Bodenthin}. This measurement is done by monitoring the emerging intensity of the magnetic Bragg reflection at the magnetic vector $\bf{k}=$(1/4,1/4,1/4) as a function of temperature, such as that at $\varepsilon=+1.8\%$ shown in Fig.~\ref{phase-diagram}(c). While the appearance of the magnetic peak at low temperatures is consistent with the $T_\textrm{N}$ extracted from the resistivity, no $E^{\prime}-$type AFM reflection is observed at $\varepsilon=+0.3\%$ down to 12 K (see the inset of Fig.~\ref{phase-diagram}(c)). The stabilization of this emergent spin-disordered state implies enhanced frustration in proximity to the zero temperature MIT, which presents an intriguing candidate for a quantum spin liquid (SL) \cite{Suter,Balents}.
Upon crossing the zero temperature MIT towards negative values of $\varepsilon$, the temperature-driven MIT is completely quenched and a new exotic metallic ground state emerges across the entire range of $\varepsilon <0$. To stress the peculiarity of the phase we point out that at ``high" temperatures but still well below the Debye temperature $\sim$ 420 K\cite{Debye}, the resistivity exhibits extended unconventional linear $T$-dependence (see the bottom right inset of Fig.~\ref{phase-diagram}(d)) commonly seen in the ``strange metal" regime of the high-$T_{\rm c}$ cuprates \cite{Imada}, while Fermi liquids have a $T^{2}$-dependence. Upon crossing the intermediate temperature scale ($\sim$150 K) marked as T$^{\prime}$ in Fig.~\ref{phase-diagram}(a), however, another characteristic temperature dependence clearly appears. A fitting-free resistivity data analysis (see Fig.~\ref{phase-diagram}(d)) reveals a $T^{4/3}$ power-law behavior lingering over a 100 K temperature range. The 4/3 power law behavior is characteristic of a NFL in the vicinity of a two-dimensional quantum critical point with dynamical exponent $z=3$ \cite{Maslov}. For the large negative values of $\varepsilon\lesssim -2.9\%$, the power of the NFL exponent below T$^\prime$ switches to 5/3 with increasing compressive strain \cite{Supplemental}. The 5/3 exponent is characteristic of a three-dimensional critical point with dynamical exponent $z=3$ \cite{Maslov}. While we do not detect any sizable structural transition which might cause a change in the effective dimensionality of our system (from two-dimensional to three-dimensional) near $\varepsilon \approx -3\%$, theoretically, large bi-axial compression could drive such a transition in NdNiO$_3$\cite{Angel}.
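As an illustrative aside (not part of the paper's analysis; all numbers below are invented for the demonstration), the idea behind a fitting-free power-law check can be sketched in a few lines: for data of the form $\rho = \rho_0 + AT^{\alpha}$, the logarithmic derivative $d\ln(\rho-\rho_0)/d\ln T$ returns the local exponent directly, so a $T^{4/3}$ regime appears as a plateau at $4/3$ without any global fit.

```python
import numpy as np

# Illustrative sketch only: a fitting-free power-law check on synthetic
# resistivity rho(T) = rho0 + A*T**alpha (hypothetical parameters).
# The logarithmic derivative d ln(rho - rho0) / d ln T recovers alpha locally.
rho0, A, alpha = 10.0, 0.05, 4.0 / 3.0        # invented parameters
T = np.linspace(20.0, 120.0, 500)             # temperature window (K)
rho = rho0 + A * T ** alpha                   # synthetic NFL resistivity

local_exponent = np.gradient(np.log(rho - rho0), np.log(T))
print(local_exponent.mean())                  # plateau at alpha = 4/3
```

For real data the residual resistivity $\rho_0$ is not known a priori, so in practice one checks that the plateau is insensitive to the subtracted offset.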
The observed NFL features in transport imply the presence of strong quantum fluctuations stabilized by the hetero-epitaxial boundary. Indeed, as discussed above, the simultaneous rapid collapse of the AFM order and the emerging spin-disordered phase also highlights the important difference of this new quantum melting regime from that of the bulk where a $T_{\textrm{MI}}=T_\textrm{N}$ phase boundary is driven to a single critical end point. To the best of our knowledge bulk NdNiO$_3$ does not exhibit the NFL behavior reported here for $\varepsilon <0$ under either hydrostatic or chemical pressure \cite{Catalan0,Medarde1}, nor are the 4/3 and 5/3 exponents commonly found across the RENiO$_3$ series.
It is remarkable that the NFL exponent is stabilized over such a wide range of temperatures and negative $\varepsilon$ in our experiments. This behavior is qualitatively and even semi-quantitatively consistent with a Boltzmann-type transport theory \cite{Supplemental} involving multiple bands of different effective masses and zero-momentum critical fluctuations in the heaviest of the bands \cite{Maslov}. Our density functional theory calculations support the multiple-band picture of different masses, and the structure of the transport theory \cite{Supplemental} provides a natural mechanism for the crossover from a fractional exponent at lower temperatures (4/3 or 5/3) to the linear-$T$ behavior observed above T$^\prime$ but still below the Debye temperature. The precise character of the zero-momentum quantum fluctuations remains unclear at present, and further experimental and theoretical work is needed.
In order to elucidate the electronic energy scales involved in controlling these emergent phases (PI, NFL, and possible quantum spin liquid), we have performed extensive resonant soft X-ray measurements (XAS) on the oxygen K-edge, which $directly$ probes the hole state in the unoccupied $2p$-projected density of states \cite{Sarma02}. By utilizing the 1$s\rightarrow 2p$ transition on the oxygen K-edge (see Fig.~\ref{K-edge}(c)), i.e.\ $3d^{8}\underline{L}\rightarrow\underline{1s}3d^{8}$ ($\underline{L}$ denotes the ligand hole state), we evaluate the connection between the insulating phase behavior and `self-doping' behavior. In addition, a set of high-quality bulk ceramic samples of LaNiO$_3$, NdNiO$_3$, and GdNiO$_3$ have been measured to provide a benchmark comparison for resolving the underlying physics of heterointerface-control.
Figure~\ref{K-edge}(d) shows the resulting X-ray absorption spectra obtained at the threshold energy around 528.5 eV, where the absorption pre-peak is exclusively due to Ni $3d$ states hybridized with O $2p$ states \cite{Sarma02} (also see \cite{Supplemental} for representative spectra in a wider energy range).
As clearly seen, the pre-peak around 529 eV exhibits a remarkably large and asymmetric (with the sign of $\varepsilon$) energy shift, indicative of an evolution in the charge excitation energy. Figure~\ref{K-edge}(e) quantifies the finding as follows: the oxygen-derived band edge moves downwards by as much as 270 meV at $\varepsilon = +4\%$ (or $\sim$80(13) meV/\%) and upwards by $\sim$150 meV at $\varepsilon = -2.9\%$ (or $\sim$34(13) meV/\%). In sharp contrast to this result, the shift is completely absent in the bulk data when varying chemical pressure and/or temperature across the MIT into the charge-ordered AFM insulating state [shown as grey shaded curves in Fig.~\ref{K-edge}(d)]. These results point to the pivotal role of the epitaxial substrate lattice mismatch as the driving force for the observed shift. Although the decrease of the absorption threshold is strikingly similar to that seen upon the introduction of holes and in-gap states through conventional chemical doping \cite{Merz}, the `hole doping' response here is achieved by shifting the entire pre-peak by virtue of heterointerface strain, in the absence of explicit chemical doping \cite{footnote}.
The large observed shift of the excitation energy to the unoccupied states is connected to the shift of the O $1s$ core-level states with respect to the Ni $3d$-hybridized state. This manifests itself through an altered relative Madelung site potential between Ni and O, which is the primary effect that defines the magnitude of the charge excitation energy $\Delta$ \cite{Ohta}. This finding lends strong support to the notion of a modulation of the fundamental energy scale $\Delta$ with $\varepsilon$ \cite{Imada,Zaanen1}. Additionally, the pre-peak width, a measure of the degree of $p$-$d$ hybridization or covalency $W$, scales almost linearly with $\varepsilon$ (see Fig.~\ref{K-edge}(d) and (e)), in accordance with the induced MIT. The combined modulations in both $\Delta$ and $W$ reflect the unique control achieved in ultrathin NdNiO$_3$, a `self-doped' material. In particular, the simultaneous regulation of the self-doped oxygen hole density via both $\Delta$ and $W$ is expected to tune the balance between the ferromagnetic and AFM exchange channels of the Ni-O-Ni bond in the $E^{\prime}-$type spin ordering \cite{Mizokawa}. Thus, a deviation in the degree of self-doping would cause strong frustration and act to suppress the spin order, especially near the MIT, resulting in the collapse of the AFM ordering and a possible quantum spin liquid state.
In summary, we have demonstrated the consequences of heterointerface constraints from substrate lattice mismatch and used it to drive emergent phase behavior and induce quantum critical behavior not accessible in the bulk series of RENiO$_3$. This control is achieved through the modulation of the covalency, $W$, and charge transfer energy, $\Delta$ with $\varepsilon$. We have demonstrated that a specific ground state can be selected by the fine balance of the self-doped hole density on the oxygen atoms. We expect the physics uncovered for NdNiO$_3$ is rather general and should open the door to the rational design of new classes of correlated electron materials with a wider range of applications through an enriched phase diagram.
The authors acknowledge numerous insightful discussions with D. I. Khomskii, A. J. Millis, S. Okamoto and G. A. Sawatzky. J.C. was supported by DOD-ARO under the grant No. 0402-17291 and NSF grant No. DMR-0747808, M.K. and G.A.F. by DOD-ARO grant No. W911NF-09-1-0527, W911NF-12-1-0573, and NSF Grant No. DMR-0955778. J.M.R. supported by DARPA under award no. N66001-12-1-4224. The Advanced Light Source is supported by the Director, Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. Work at the APS, Argonne is supported by the U.S. Department of Energy, Office of Science under grant No. DEAC02-06CH11357.
\pagebreak
\newpage
Aggrieved Father Suspected of Leaving Son's Attacker to Die in Forest
Komi republic police have detained a man suspected of abducting a youth involved in the murder of his son and leaving him to die in a local forest more than a year after judges handed the youth a suspended sentence.
According to investigators, aggrieved father Sergei Asayonok abducted Konstantin Kotov on Sept. 13 and tied him to a tree in a forest outside Syktyvkar.
Two days later, Kotov managed to escape from the handcuffs used to detain him and crawled to a nearby police station, where he told officers about his ordeal. Asayonok and an accomplice in the abduction were promptly detained by police, investigators said in a statement on their official website.
A criminal case has been opened on abduction charges.
Local media linked Kotov's abduction to the murder of Asayonok's 18-year-old son Roman, who was savagely beaten to death while walking home in Syktyvkar by Kotov and five others in November 2010.
Kotov and four others were let off with suspended sentences at a court hearing in May 2011, and only the youth who dealt the fatal blow to Asayonok's son received prison time.
\section{Introduction}
Nuclear resonances and excited states can be very complicated
many-body structures with a number of different decay modes. The
simplest decay, perhaps beside $\gamma$-emission, is breakup into two
particles as exemplified by nucleon- and $\alpha$-emission, and binary
fission \cite{sie87}. The deceivingly simple breakup into three
particles is much less studied and far from understood. This is in
contrast to bound three-body cluster structures where a variety of
techniques are available and able to predict the properties, even of
the exotic quantum halo states \cite{jen04}. The three-body continuum
properties are less established although rather well studied over many
years \cite{glo96}.
Experimental information is obtained by measuring the properties of
the particles in the final state. The experimental techniques are now
advanced to a level where accurate and kinematically complete
measurements are available on a number of different systems
\cite{dan87,boc89,kry95,bai96,gom01,gio02,pfu02,chr02,fyn03,bla03,zer04,bla05},
and many more are expected to follow. Reliable theoretical
descriptions are needed to interpret existing data, to predict
unknown decay results and to help in the design of new interesting
experiments. Both the structure of the initial state and the decay
mechanism are essential and both must therefore be properly described
simultaneously. Clearly for a genuine many-body state only the
intermediate and large-distance structure is decisive. The small
distance behavior is artificial and only serving to provide the proper
continuous boundary conditions.
For two-body decay, like $\alpha$-emission, the relative potential
determines all properties. In the example of $\alpha$-emission, the
two-body potential can be divided into short-, intermediate- and
long-distance. The short-distance part is artificially adjusted to
allow the correct resonance energy and the barrier region then
determines the width. At large distances where the potential has
vanished the energy of the $\alpha$-particle is determined by energy
conservation. In two-body decay the energy distribution then only
reflects the width of the initial resonance. These properties are
very different for decays with more than two particles in the final
state.
For three-body bound states and resonances ``large distance'' is less
well defined. However, the corresponding structure can efficiently be
computed by use of the hyperspherical adiabatic expansion method
\cite{nie01,fed02}. The wave functions are in this technique expanded
on basis states related to adiabatic potentials calculated as
functions of the hyperradius $\rho$. Then $\rho$ provides a measure
of distances for the three-body problem. The wave function is usually
dominated by the component related to the lowest adiabatic potential.
The small-distance part ($\rho$ small) of both wave function and
potential are only directly meaningful if the particles appearing in
the final state form a genuine three-body system. Otherwise this part
of the effective potential is constructed to produce the correct
resonance energy and provide an appropriate boundary condition for the
wave function. At intermediate distances the potential has a barrier
which is decisive for the width of the resonance. At large distances
the resonance wave function is characterized by outgoing waves which
contain information about distributions of relative energies and
possibly other quantities like spin distributions.
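For concreteness, one common convention (added here for orientation; the
normalization mass $m$ is arbitrary and the definition is not spelled out
in the text above) relates the hyperradius $\rho$ for three particles with
masses $m_{i}$ and coordinates $\mathbf{r}_{i}$ to the pairwise distances
through
\begin{equation}
 m \rho^{2} = \frac{1}{M} \sum_{i<k} m_{i} m_{k}
 \left( \mathbf{r}_{i}-\mathbf{r}_{k} \right)^{2} \;, \qquad
 M = \sum_{i} m_{i} \;,
\end{equation}
so that $\rho$ grows whenever any pair of particles separates, which is why
it can serve as the single overall distance coordinate of the three-body
problem.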
For three-body decay the two most obvious decay mechanisms, direct and
sequential, were recently studied in a schematic model in
\cite{gar05a}. Another schematic model is also formulated in the limit
where only the Coulomb interaction is important at intermediate and
large distances \cite{kar04}. The detailed resonance
structure at small and intermediate distances were investigated in
realistic models in \cite{gar05b}. The large-distance properties are
much more difficult to calculate accurately, because either the
correct continuum three-body Coulomb wave functions are unknown, or the
short-range potentials may produce an almost long-range (inverse
square) effective potential at large distance. In the latter case the
origin is precisely as for the Efimov effect \cite{nie01,gar05c}. The
corresponding potential is most likely the lowest at large distance
but not necessarily also at small distances. Thus, the resonance
wave function may change structure from small to large distance. The
relative energy distribution arises as the result appearing at large
distance. The numerical computations are then sometimes rather
tricky.
The purpose of the present paper is to establish a general method to
compute relative energy distributions after decay into three
particles. The short- and intermediate-distance resonance structure
from \cite{gar05a,gar05b} is a good starting point but we need in
addition to calculate the asymptotic behavior in momentum space. The
asymptotics vary for the different decay mechanisms, and the related
numerical treatment is difficult when all the possibilities
simultaneously have to be accounted for.
We assume that formation and decay of the resonances are independent
processes. The resonance could be formed by beta-decay from a
neighbouring nucleus, or a window with the relevant energies can be
selected in an experimental setup where contributions from other
processes also are eliminated. In section 2 we develop the theoretical
formalism for resonance decay. This was previously sketched by use of
the saddle point approximation \cite{fed04}, while we here shall
instead use the Zeldovich regularization of the divergent Fourier
integrals \cite{zel60}. In section 3 we discuss some of the important
features arising from calculation of resonance wave functions by use of
the hyperspherical adiabatic expansion method combined with the
complex scaling method. In section 4 we illustrate in detail with
realistic computations for the $2^+$-resonance in $^{6}$He. Finally,
section 5 contains a brief summary and the conclusions.
\section{Theoretical formulation}
We assume that the system of particles has been generated in a meta-stable
quantum state (a resonance) that is a generalized eigen-state of the
Hamiltonian with complex energy. This is a decaying state -- it describes a
constant flux of particles towards infinity. Suppose we have a system of
detectors at large distances which measure the momenta of the particles
emerging from this decaying state. Clearly these detectors will measure the
probability distribution of particle momenta in the meta-stable state,
that is the absolute square of the momentum space wave-function.
\subsection{Two-body resonances}
The theoretical derivation is most easily understood if we first
explain the idea for simple resonance decay into a two-body system.
We need resonance wave functions in coordinate- and momentum-space and
transformations between these non-square-integrable functions.
\subsubsection{Resonance wave functions}
The momentum space wave-function of a resonant state with the complex energy
$E_{r}=\frac{\hbar ^{2}}{2m}k_{0}^{2}=E_{0}-i\frac{\Gamma }{2}$ has the form
\cite{new?}
\begin{equation}
\psi _{k_{0}}({\mathbf{k}}) = \frac{g(k,\Omega _{k})}{k^{2}-k_{0}^{2}} \; ,
\label{psi_k}
\end{equation}
where $\mathbf{k}$ is the relative momentum and $\Omega _{k}$
indicates the two angles defining the direction of the vector
$\mathbf{k}$. We assume that the wave function $\psi
_{k_{0}}(\mathbf{k})$ only has the pole at $k = k_{0}$ and $g(k,\Omega
_{k})$ is then a continuous function of the momentum $\mathbf{k}$ with
no poles.
The distribution $P({\mathbf{k}})$ of the relative momentum $\mathbf{k}$ of
the two particles in the resonant state $\psi _{k_{0}}$ is given by the
absolute square of the momentum-space wave function, i.e.
\begin{equation}
P({\mathbf{k}})=|\psi _{k_{0}}({\mathbf{k}})|^{2} \propto\frac{|g(k,\Omega
_{k})|^{2}}{(E-E_{0})^{2}+\frac{\Gamma ^{2}}{4}} \; ,
\end{equation}
where the real observable energy $E$ is $E=\hbar ^{2}k^{2}/(2m)$. The
approximation that the system is generated in a pure resonant state
$\psi _{k_{0}}$ is most likely only valid in the neighborhood of the
resonant energy, i.e. $E \simeq E_{0}$. Furthermore, the function
$g(k,\Omega _{k})$ is smooth and varies by definition much less than
the denominator in eq.(\ref{psi_k}). In any case we shall only
consider energies where $|E-E_0|$ is less than a few times $\Gamma$.
We can then confidently substitute the momentum $k$ with the
resonant momentum $k_{0}$ in $g(k,\Omega _{k})$ and thus arrive at the
expression of the famous Breit-Wigner type
\begin{equation}
P({\mathbf{k}})\propto \frac{|g(k_{0},\Omega _{k})|^{2}}{(E-E_{0})^{2}+\frac{
\Gamma ^{2}}{4}} \; , \label{p_k}
\end{equation}
where the energy dependence is given by the factor $\left[
(E-E_{0})^{2}+\frac{\Gamma ^{2}}{4}\right] ^{-1}$ while the angular
dependence is given by the (absolute square of the) function
$g(k_0,\Omega _{k})$. Thus, the momentum-space wave-function
eq.(\ref{psi_k}) of the resonance allows direct calculation of the
momentum distributions of the decay fragments through eq.(\ref{p_k}).
Clearly improvements are possible by use of different approximations
of $g(k,\Omega _{k})$.
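The width parameter in the Breit-Wigner factor of eq.(\ref{p_k}) is easily verified numerically: the distribution falls to half its maximum at $E = E_{0}\pm \Gamma/2$, so the full width at half maximum equals $\Gamma$. A minimal sketch (the resonance parameters below are illustrative, not taken from any calculation in this paper):

```python
import numpy as np

# Breit-Wigner factor from eq.(p_k): P(E) ~ 1/((E-E0)^2 + Gamma^2/4).
E0, Gamma = 1.8, 0.11   # illustrative resonance energy and width (MeV)
E = np.linspace(E0 - 5*Gamma, E0 + 5*Gamma, 100001)
P = 1.0 / ((E - E0)**2 + Gamma**2 / 4.0)

# the interval where P >= P_max/2 has length Gamma (the FWHM)
above = E[P >= P.max() / 2.0]
fwhm = above[-1] - above[0]
print(fwhm)             # ~0.11, i.e. equal to Gamma
```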
Instead of momentum-space, the wave-function of the resonance may be
available in coordinate-space where the large-distance asymptotic form
is given by
\begin{equation} \label{e60}
\psi _{k_{0}}({\mathbf{r}})\stackrel{r\rightarrow \infty }{\longrightarrow }
\frac{e^{+ik_{0}r}}{r}f(\Omega _{r}) \; ,
\end{equation}
where $\Omega _{r}$ denotes the two angles defining the direction of
the relative coordinate ${\mathbf{r}}$. The structure is an outgoing
spherical wave potentially modified by an angular dependence contained
in $f(\Omega _{r})$. Generally the resonance wave function can be
written as a partial-wave expansion in the spherical harmonics
$Y_{lm}$, i.e.
\begin{equation}
\psi _{k_{0}}({\mathbf{r}})=\sum_{lm}C_{lm}\chi _{l}(r)Y_{lm}(\Omega _{r}) \; ,
\end{equation}
where $C_{lm}$ are constants depending on angular momentum and
projection quantum numbers $l$ and $m$. The radial functions $\chi
_{l}(r)$ are those solutions of the radial Schr\"{o}dinger equation
that asymptotically approach the outgoing spherical wave in
eq.(\ref{e60}), i.e.
\begin{equation} \label{chi-}
\chi _{l}(r)\stackrel{r\rightarrow \infty }{\longrightarrow }\frac{
e^{+ik_{0}r}}{r} \;\; ,\;\,
f(\Omega _{r})=\sum_{lm}C_{lm}Y_{lm}(\Omega _{r}) \; .
\end{equation}
This defines the asymptotic behavior of the decaying resonance
wave function which in turn determines the energy distribution in the
observable final state.
\subsubsection{Transformation from coordinate- to momentum-space}
The coordinate-, $\psi _{k_{0}}(\mathbf{r})$, and momentum-space, $
\psi _{k_{0}}(\mathbf{k})$, wave-functions are connected via a Fourier
transform
\begin{equation} \label{e70}
\psi _{k_{0}}({\mathbf{k}})=\int e^{-i{\mathbf{kr}}}\psi _{k_{0}}
({\mathbf{r}})d^{3}r.
\end{equation}
Expansion of the plane-wave in terms of spherical harmonics
\begin{equation}
e^{i{\mathbf{kr}}}=\sum_{lm}4\pi i^{l}j_{l}(kr)Y_{lm}(\Omega _{r})
Y_{lm}^{*}(\Omega _{k})
\end{equation}
reduces the Fourier integral in eq.(\ref{e70}) to a one-dimensional
radial integral
\begin{equation} \label{e75}
\psi _{k_{0}}({\mathbf{k}})=4\pi \sum_{lm}C_{lm}(-i)^{l}Y_{lm}(\Omega _{k})
\int_{0}^{\infty }r^{2}dr\chi _{l}(r)j_{l}(kr) \; .
\end{equation}
Because of the asymptotics in eq.(\ref{chi-}) the radial integral is
seen to diverge. The large-distance behavior is responsible for the
divergence. The physics content, expressed by a finite value, then
has to be extracted by a suitable regularization. We use the
prescription proposed by Zeldovich \cite{zel60}, i.e. multiplication
of the integrand by a gaussian where the range after integration is
increased to infinity. For an exponential this gives
\begin{equation} \label{e77}
\int_{0}^{\infty }e^{iqr}dr\rightarrow \lim_{\alpha \rightarrow
0}\int_{0}^{\infty }e^{iqr-\alpha ^{2}r^{2}}dr=\lim_{\alpha \rightarrow
0}e^{-\frac{q^{2}}{4\alpha ^{2}}}\frac{\sqrt{\pi }}{2\alpha }
{\mathrm{erfc}}(- \frac{iq}{2\alpha })=\frac{i}{q} \; ,
\end{equation}
where $q$ can be complex and ${\mathrm{erfc}}$ is the complementary error function of complex
argument. In the present context the radial integral in
eq.(\ref{e75}) is first rewritten by adding and subtracting the
asymptotic expression of the diverging integrand. The difference
between the true and the asymptotic expression then remains finite
even without multiplication by the gaussian function. Only the
asymptotic expression then diverges when the gaussian smoothly
converges to an overall factor of one.
The physics content is extracted by dividing by a similarly
diverging normalization integral of the square of the wave function
$\chi _{l}(r)$. Also this integral, now in the denominator, is
rewritten by addition and subtraction of its asymptotic
expression. Again only the asymptotic expression diverges. The
Zeldovich prescription now leaves the ratio of these two diverging
integrals of the asymptotic expressions. However, this ratio does not
diverge but converges towards the physically meaningful result. Apart
from a normalization we therefore have to regularize only the
asymptotic expression obtained by use of eq.(\ref{chi-}) and the
asymptotic approximation of $j_{l}(kr)$, i.e.
\begin{equation} \label{e85}
\psi _{k_{0}}({\mathbf{k}})=4\pi \sum_{lm}C_{lm}(-i)^{l}Y_{lm}(\Omega_{k})
\int_{0}^{\infty }r^{2}dr\frac{e^{+ik_{0}r}}{r}\frac{\sin (kr-\frac{l\pi }{2
})}{kr} \; .
\end{equation}
The radial integral is then by use of eq.(\ref{e77}) evaluated to be
\begin{eqnarray}
\int_{0}^{\infty }e^{+ik_{0}r}\sin (kr-\frac{l\pi }{2})dr = \frac{
i^{l}}{2}\left[ \frac{1}{k-k_{0}}+\frac{(-1)^{l}}{k+k_{0}}\right]
\nonumber\\
=\frac{i^{l}}{2}\frac{k+k_{0}+(-1)^{l}(k-k_{0})}{k^{2}-k_{0}^{2}} =
i^{l} \frac{k}{k^{2}-k_{0}^{2}} \;\; \;\; {\rm or } \;\;\;\;
i^{l} \frac{k_{0}}{k^{2}-k_{0}^{2}} \;
\end{eqnarray}
for even or odd $l$, respectively. The summation in eq.(\ref{e85}) is
proportional to the angular amplitude $f$ from eq.(\ref{chi-}), but
now as a function of the momentum $\mathbf{k}$. In any case we
assumed earlier that $k\approx k_{0}$ in the smooth functions. We
therefore arrive at the final expression for the Fourier-transform of
the resonance wave-function, i.e.
\begin{equation}
\psi _{k_{0}}({\mathbf{k}})=\frac{4\pi }{k^{2}-k_{0}^{2}}
\sum_{lm}C_{lm}Y_{lm}(\Omega _{k})=\frac{4\pi }{k^{2}-k_{0}^{2}}
f(\Omega _{k}) \; .
\end{equation}
Thus the function $g$ from eq.(\ref{psi_k}) is then related to $f$
from eq.(\ref{e60}) by
\begin{equation}
g(k_{0},\Omega _{k})=4\pi f(\Omega _{k}) \; .
\end{equation}
The convenient fact that only the asymptotic limit of the resonance
wave function enters after the regularization procedure is perhaps more
surprising in mathematics than in physics where the observable energy
distributions always are obtained from the properties at large
distances.
The observable distribution in momentum-space is determined by the
angular wave function in coordinate-space evaluated for angles
describing the direction of the momentum. This peculiar fact can
intuitively be understood by the geometry of particles moving towards
the detectors at infinitely large distances. Coordinates and momenta
then must point in the same direction. A mathematical formulation is
available from ionization cross sections calculated for atomic physics
processes \cite{ovc04}.
\subsection{Three-body resonances}
The generalization to three particles first requires a convenient set
of coordinates. We choose the scaled Jacobi coordinates \cite{nie01}
\begin{eqnarray}
{\mathbf{x}} &=&\sqrt{\frac{m_{2}m_{3}}{m(m_{2}+m_{3})}}({\mathbf{r}}_{2}-
{\mathbf{r}}_{3}), \\
{\mathbf{y}} &=&\sqrt{\frac{m_{1}(m_{2}+m_{3})}{m(m_{1}+m_{2}+m_{3})}}\left(
{\mathbf{r}}_{1}-\frac{m_{2}{\mathbf{r}}_{2}+m_{3}{\mathbf{r}}_{3}}
{m_{2}+m_{3}} \right) , \nonumber
\end{eqnarray}
where $m$ is a mass scale, and ${\mathbf{r}}_{i}$ and $m_{i}$ are the
coordinate and mass of the particle number $i$. The hyper-spherical
coordinates are then the hyper-radius $\rho $, the hyper-angle $\alpha
$, and the directional angles $
\Omega _{x}$ and $\Omega _{y}$ of the vectors $\mathbf{x}$ and $\mathbf{y}$
defined by
\begin{equation}
\rho =\sqrt{x^{2}+y^{2}} \;\; , \;\;\alpha =\arctan (x/y) \;\;,
\;\;\Omega _{\rho }=\{\alpha ,\Omega _{x},\Omega _{y}\} \; .
\end{equation}
The Jacobi coordinates depend on the sequence chosen for the
particles, and the three different pairs of $\mathbf{x}$ and
$\mathbf{y}$ could be labeled to distinguish. We omit these labels
when the meaning is clear.
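A convenient property of the scaled Jacobi coordinates is that the hyper-radius does not depend on which of the three Jacobi sets is used, since $mM\rho^{2}=\sum_{i<k}m_{i}m_{k}r_{ik}^{2}$ with $M=m_{1}+m_{2}+m_{3}$. A small numerical sketch (particle masses and positions are arbitrary illustrative choices):

```python
import numpy as np

def jacobi(r1, m1, r2, m2, r3, m3, m=1.0):
    # scaled Jacobi coordinates as defined in the text, pair = (2,3)
    x = np.sqrt(m2*m3 / (m*(m2 + m3))) * (r2 - r3)
    y = np.sqrt(m1*(m2 + m3) / (m*(m1 + m2 + m3))) \
        * (r1 - (m2*r2 + m3*r3) / (m2 + m3))
    return x, y

# arbitrary configuration: a core (mass 4) and two light particles
r1, r2, r3 = np.array([0., 0., 1.]), np.array([1., 0., 0.]), np.array([0., 2., 0.])
m1, m2, m3 = 4.0, 1.0, 1.0

x, y = jacobi(r1, m1, r2, m2, r3, m3)      # one Jacobi set
xp, yp = jacobi(r2, m2, r3, m3, r1, m1)    # a different ordering
rho = np.sqrt(x @ x + y @ y)
rhop = np.sqrt(xp @ xp + yp @ yp)
print(rho, rhop)   # identical: rho does not depend on the Jacobi set
```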
The corresponding momentum-space variables are
\begin{equation}
\kappa =\sqrt{p^{2}+q^{2}}\;\; , \;\;\alpha _{\kappa }=\arctan (q/p)
\;\; , \;\; \Omega _{\kappa}=\{\alpha _{\kappa },\Omega _{p},\Omega _{q}\} \; ,
\end{equation}
where $\mathbf{p}$ and $\mathbf{q}$ are the conjugate momenta related
to the coordinates $\mathbf{x}$ and $\mathbf{y}$.
\subsubsection{No bound two-body subsystems}
The generalization of the two-body spherical harmonics is given by the so-called
hyper-spherical harmonics \cite{nie01}
\begin{eqnarray}
{{\mathcal{Y}}}_{{{\mathcal{K}}}}(\Omega _{\rho
})=N_{n}^{(l_{x},l_{y})} \sin^{l_{x}} \alpha \cos^{l_{y}} \alpha
P_{n}^{(l_{x}+\frac{1}{2},\;l_{y}+\frac{1}{2})}(\cos 2\alpha)
\nonumber\\
\times
Y_{l_{x}m_{x}}(\Omega _{x})Y_{l_{y}m_{y}}(\Omega _{y})
\end{eqnarray}
where ${{\mathcal{K}}\equiv \{Kl_{x}m_{x}l_{y}m_{y}\}}$,
$K=2n+l_{x}+l_{y}$, and $(l_{x},m_{x},l_{y},m_{y})$ are the angular
quantum numbers related to coordinates $\bf{x}$ and $\bf{y}$, and
$N_{n}^{(l_{x},l_{y})}$ is a normalization factor. These functions
are the eigen-functions of the angular part $\Lambda ^{2}$ of the
three-body kinetic energy operator $T$
\begin{equation}
T=\frac{\hbar ^{2}}{2m}(\nabla _{x}^{2}+\nabla _{y}^{2})=\frac{\hbar ^{2}}{2m
}\left[ -\frac{\partial ^{2}}{\partial \rho ^{2}}-\frac{5}{\rho }\frac{
\partial }{\partial \rho }+\frac{\Lambda ^{2}}{\rho ^{2}}\right]
\end{equation}
with the eigenvalues $K(K+4)$, i.e.
\begin{equation}
\Lambda ^{2}{\mathcal{Y}}_{{\mathcal{K}}}(\Omega _{\rho })=K(K+4)
{\mathcal{Y}}_{{\mathcal{K}}}(\Omega _{\rho }) \; ,
\end{equation}
where $K$ is a non-negative integer. Without Coulomb and without
bound two-body subsystems the three-body resonance wave-function $\Psi
_{\kappa _{0}}(\rho ,\Omega _{\rho })$ with the complex energy
$E_r=\hbar ^{2}\kappa _{0}^{2}/(2m) = E_0 -i \Gamma_0/2$ can be
expanded in terms of the hyper-spherical harmonics
\begin{equation}
\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })=\sum_{{\mathcal{K}}}C_{{\mathcal{K}}
}\chi _{{\mathcal{K}}}(\rho ){\mathcal{Y}}_{{\mathcal{K}}}(\Omega
_{\rho }),
\label{eq21}
\end{equation}
where the hyper-radial functions $\chi _{{\mathcal{K}}}(\rho )$ have
the usual resonance asymptotic behavior of an out-going
hyper-spherical wave
\begin{equation} \label{e115}
\chi _{{\mathcal{K}}}(\rho )\stackrel{\rho \rightarrow \infty }{
\longrightarrow }\frac{e^{+i\kappa _{0}\rho }}{\rho ^{5/2}}.
\label{eq22}
\end{equation}
The three-body wave-function asymptotically has the form of the
hyper-spherical wave with an angular amplitude $F(\Omega _{\rho })$
determined by the expansion coefficients $C_{{\mathcal{K}}}$, i.e.
\begin{equation}\label{asy}
\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })\stackrel{\rho \rightarrow \infty
}{\longrightarrow }\frac{e^{+i\kappa _{0}\rho }}{\rho ^{5/2}}\sum_{{\mathcal{K
}}}C_{{\mathcal{K}}}{\mathcal{Y}}_{{\mathcal{K}}}(\Omega _{\rho })\equiv \frac{
e^{+i\kappa _{0}\rho }}{\rho ^{5/2}}F(\Omega _{\rho }).
\end{equation}
The momentum-space wave-function is the Fourier transform
\begin{equation} \label{e90}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa })=\int e^{-i{\mathbf{px}}-i
{\mathbf{qy}}}\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })\rho ^{5}d\rho d\Omega
_{\rho }.
\end{equation}
The three-body plane-wave can be expanded in hyper-spherical harmonics as
\begin{equation}
e^{i{\mathbf{px}}+i{\mathbf{qy}}}=\frac{(2\pi )^{3}}{(\kappa \rho )^{2}}\sum_{
{\mathcal{K}}}i^{K}J_{K+2}(\kappa \rho ){\mathcal{Y}}_{{\mathcal{K}}}(\Omega
_{\rho }){\mathcal{Y}}_{{\mathcal{K}}}^{*}(\Omega _{\kappa }).
\end{equation}
Due to orthogonality of the hyper-spherical harmonics the angular part
of the integral in eq.(\ref{e90}) is trivial and we are only left with
the hyper-radial integral
\begin{equation}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa })=\frac{(2\pi )^{3}}{\kappa ^{2}
}\sum_{{\mathcal{K}}}(-i)^{K}C_{{\mathcal{K}}}
{\mathcal{Y}}_{{{\mathcal{K}}}}(\Omega
_{\kappa })\int \rho ^{3}d\rho \chi _{{\mathcal{K}}}(\rho )J_{K+2}(\kappa \rho
).
\end{equation}
Precisely as in the two-body case, in the vicinity of the resonance
the integrand can be made ready for regularization by substitution of
eq.(\ref{e115}) and the asymptotic form
\begin{equation}
J_{K+2}(\kappa \rho )\stackrel{\rho \rightarrow \infty }{\longrightarrow }-
\sqrt{\frac{2}{\pi \kappa \rho }}\sin (\kappa \rho -\frac{\pi K}{2}) \;.
\end{equation}
This results in a diverging integral similar to that of the two-body
case
\begin{equation}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa })=-\frac{(2\pi )^{3}}{\kappa
^{5/2}}\sqrt{\frac{2}{\pi }}\sum_{{\mathcal{K}}}(-i)^{K}C_{{\mathcal{K}}}
{\mathcal{Y}}_{{\mathcal{K}}}(\Omega _{\kappa })\int d\rho e^{+i\kappa _{0}\rho
}\sin (\kappa \rho -\frac{\pi K}{2}).
\end{equation}
Using the Zeldovich regularization leads to
\begin{eqnarray}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa })=-\frac{(2\pi )^{3}}{\kappa
_{0}^{5/2}}\sqrt{\frac{2}{\pi }}\sum_{{\mathcal{K}}}(-i)^{K}C_{{\mathcal{K}}}
{\mathcal{Y}}_{{\mathcal{K}}}(\Omega _{\kappa })\frac{i^{K}}{2}\frac{2\kappa _{0}
}{\kappa ^{2}-\kappa _{0}^{2}}
\nonumber\\
=-\frac{2^{7/2}\pi ^{5/2}}{\kappa _{0}^{3/2}}
\frac{1}{\kappa ^{2}-\kappa _{0}^{2}}F(\Omega _{\kappa }),
\end{eqnarray}
that is, in the vicinity of the resonance, the angular wave function in
momentum-space is proportional to that in coordinate-space but
evaluated for the momentum variables. The energy distribution is
determined by the Breit-Wigner factor where the width is obtained from
the three-body resonance. The function $F(\Omega _{\kappa })$ now
contains information about the non-trivial energy distribution between
the three particles. This is in contrast to the two-body decay where all
the energy is in the only existing relative degree of freedom.
\subsubsection{One bound two-body subsystem}
Sometimes a bound two-body subsystem can be emitted from a three-body
resonance. Such a final state configuration cannot be described by
hyper-spherical harmonics. However, if this is the only open channel,
the description of such a decay reduces to the two-body case. Indeed
the asymptotics of the resonance wave-function is then
\begin{equation}
\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })\stackrel{\rho \rightarrow \infty
}{\longrightarrow }\phi _{23}({\bf{x}})\frac{e^{iq_{0}y}}{y}f(\Omega _{y}),
\end{equation}
where $\phi _{23}(\bf{x})$ describes a bound system of particles 2 and
3 with binding energy $B_{23}$, $q_{0}=\sqrt{2m(E_r-B_{23})/\hbar
^{2}}$, $f(\Omega _{y})$ is the angular amplitude and $E_r$ is the
complex three-body energy. If both three-body and two-body
decays are possible the wave-function contains asymptotics of both
two- and three-body types,
\begin{equation}
\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })\stackrel{\rho \rightarrow \infty
}{\longrightarrow }A\frac{e^{+i\kappa _{0}\rho }}{\rho ^{5/2}}F(\Omega
_{\rho })+B\phi_{23}({\bf{x}})\frac{e^{iq_0y}}{y}f(\Omega _{y}),
\end{equation}
where $A$ and $B$ are the asymptotic coefficients determining the
relative weights of the two decay channels. Both $F$ and $f$ are
dimensionless and the dimension (length to $-3/2$) of $\phi_{23}$
compensates for the one length in the denominator of the last term.
The Fourier transform and the corresponding regularization then give
the momentum-space wave function, i.e.
\begin{equation}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa }) =
-\frac{2^{7/2}\pi ^{5/2}}{\kappa _{0}^{3/2}}
\frac{A}{\kappa ^{2}-\kappa _{0}^{2}}F(\Omega _{\kappa }) +
B \phi_{23}({\bf{p}}) \frac{4\pi }{q^{2}-q_{0}^{2}} f_{y}(\Omega _{q}) \;,
\end{equation}
where $q^{2} = \kappa ^{2} - 2 m B_{23}/\hbar^2$ and $\phi_{23}({\bf{p}})$ is the
momentum-space wave function of the two-body bound state
$\phi_{23}$. These two channels correspond, respectively, to two close-lying
particles in a bound state far away from the third one, and three particles
all far away from each other. Thus, in the limit of large distances they do not
interfere, and the resulting
momentum distribution is simply a weighted sum of the corresponding
distributions. The relative contributions of the two channels are
given by $|A|^{2}$ and $|B|^{2}$, respectively.
Generalization to describe decays into more than one two-body bound
state is formally straightforward, i.e. the corresponding
non-interfering asymptotic terms should simply be added. This holds
for more than one bound state in the same two-body system as well as
for bound states in different two-body systems.
\subsubsection{One resonant two-body subsystem}
Instead of a bound state the decay via a two-body resonance is often
considered in interpretation and analysis of experiments. Clearly the
narrower the resonance the more similarity to the case of two-body
bound states. In any case the hyper-spherical expansion must
eventually converge, although the convergence can be too slow for a
reliable extraction of the asymptotic coefficients in eq.(\ref{asy})
from a numerical solution of the three-body problem.
However, in this case the (slowly convergent) two-body resonance
configuration can then be explicitly included into the asymptotics
while only the remaining (hopefully fast convergent) part is expanded,
i.e.
\begin{equation}
\Psi _{\kappa _{0}}(\rho ,\Omega _{\rho })\stackrel{\rho \rightarrow \infty
}{\longrightarrow }A\frac{e^{+i\kappa _{0}\rho }}{\rho ^{5/2}}F(\Omega
_{\rho })+B\frac{e^{ip_{0}x}}{x}f_{x}(\Omega _{x})\frac{e^{iq_{0}y}}{y}
f_{y}(\Omega _{y}),
\end{equation}
where $\hbar ^{2}\kappa_{0}^{2}/(2m) = E_{0} -i \Gamma_{0} /2 $ is the
complex three-body energy, $\hbar ^{2}p_{0}^{2}/(2m)= E_{23}^{(0)} -i
\Gamma_{23} /2$ is the (complex) energy of the two-body resonance and
the remaining part is described by the complex momentum
$q_{0}^{2}=\kappa_{0}^{2} - p^{2}$. The precise definition of
$q_{0}^{2}$ arises from a constraint to be seen below.
The corresponding momentum-space wave-function is again given by the
regularized Fourier transform, i.e.
\begin{equation} \label{e95}
\Psi _{\kappa _{0}}(\kappa ,\Omega _{\kappa })=-A\frac{2^{7/2}\pi ^{5/2}}{
\kappa _{0}^{3/2}}\frac{F(\Omega _{\kappa })}{\kappa ^{2}-\kappa _{0}^{2}}+B
\frac{4\pi }{p^{2}-p_{0}^{2}}f_{x}(\Omega _{p})\frac{4\pi }{q^{2}-q_{0}^{2}}
f_{y}(\Omega _{q}).
\end{equation}
The momentum distribution is given by the absolute square of this
momentum-space wave-function. In the center of mass system we can
directly find the distribution of particle 1 arising from the
sequential emission via the two-body resonance.
Taking the absolute square of the last term in eq.(\ref{e95}) and using the
energy conservation $\kappa^{2} = q^{2} + p^{2}$ (or $E = E_{1} +
E_{23}$) immediately gives the energy distribution for particle $1$
\begin{eqnarray}
P(E_1) \propto \int d E_{23}
\frac{1}{[(E_{23}-E_{23}^{(0)})^2 + \Gamma_{23}^2/4]}
\frac{1}{[(E_{23}+E_1 - E_{0})^2 + \Gamma_{0}^2/4]} \nonumber \\ \propto
\frac{1}{(E_1 - (E_{0}-E_{23}^{(0)}))^2 + (\Gamma_{0}+\Gamma_{23})^2/4} \; ,
\label{e100}
\end{eqnarray}
which states that the most probable energy of particle $1$ is $E_1 =
E_0-E_{23}^{(0)}$ and the width of the distribution is the sum of the
two and the three-body widths. Precisely this Breit-Wigner
distribution only arises when the $|q^{2}-q_{0}^{2}|^2$ in
eq.(\ref{e95}) is proportional to $(E-E_{0})^2 + \Gamma_{0}^2/4$,
i.e. given by the probability distribution in the initial three-body
state, which also is of Breit-Wigner form. Thus the definition of
$q_{0}^{2}$ must involve $p^2$ and not $p_0^2$.
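The statement contained in eq.(\ref{e100}), that the convolution of the two Breit-Wigner factors is again of Breit-Wigner form, centered at $E_{0}-E_{23}^{(0)}$ with width $\Gamma_{0}+\Gamma_{23}$, can be checked by direct numerical integration. A sketch with arbitrary illustrative parameters:

```python
import numpy as np

def bw(x, x0, gamma):
    # Breit-Wigner (Lorentzian) factor with full width gamma
    return 1.0 / ((x - x0)**2 + gamma**2 / 4.0)

E0, G0 = 1.0, 0.10        # three-body energy and width (illustrative)
E23_0, G23 = 0.6, 0.05    # two-body resonance energy and width (illustrative)

E23 = np.linspace(-20.0, 20.0, 200001)
dE = E23[1] - E23[0]
E1 = np.linspace(0.2, 0.6, 201)

# P(E1) = int dE23 BW(E23; E23_0,G23) * BW(E23 + E1; E0,G0), as in eq.(e100)
P = np.array([np.sum(bw(E23, E23_0, G23) * bw(E23 + e1, E0, G0)) * dE
              for e1 in E1])
ref = bw(E1, E0 - E23_0, G0 + G23)   # Breit-Wigner with summed widths

P, ref = P / P.max(), ref / ref.max()
print(np.max(np.abs(P - ref)))       # small: the shapes agree
```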
The same integration could of course be performed on the energy of
particle 1, but this would only give the two-body Breit-Wigner
distribution for $E_{23}$ whereas the measurements provide individual
energies, $E_2$ and $E_3$, for particles 2 and 3 in the center of mass
system. Obtaining the distributions of $E_2$ and $E_3$ involves trivial
but tedious kinematical transformations where also energies and
directions of particle 1 are required.
\subsubsection{Alternative real-coordinate procedure}
The relative energy distributions can be obtained from the angular
resonance wave function calculated without complex scaling. We assume
that the system of particles is produced in an initial state for
example by a beta-decay process. We can then imagine the subsequent
decay as the time evolution of the initial non-stationary state. This
can be formulated as a time dependent coupled channels problem. It can
also be viewed intuitively as a particle described by time dependent
coordinates determined by classical equations of motion. This should
be done with the appropriate initial amplitudes for all parts of the
initial wave function. The hyperradius must vary from being very small
to infinitely large. We increase $\rho$ until all particles are
outside the interaction ranges of all other particles. From then on
all distances scale as $\rho$ and all other coordinates remain
unchanged with time until $\rho = \infty$.
Energy conservation is maintained at large distances by converting the
potential energy into kinetic energy in the scaling degree of freedom,
i.e. by increasing the velocity $\dot{\rho}$ of the $\rho$-coordinate.
The wave function then evolves with all angular degrees of freedom
frozen eventually reaching the detectors placed far away. The
absolute square of the angular wave function as function of
$\cos^2\alpha$ then provides the energy distributions simply because
the kinetic energy of particle 1 is given by the velocity of $y$,
i.e. $\dot{y} = \dot{\rho} \cos\alpha$. Then the energy distribution
as function of the kinetic energy, proportional to $\dot{y}^2 \propto
\cos^2\alpha$, is the probability coordinate-space distribution as
function of $\cos^2\alpha$, apart from the phase space conversion from
$\alpha$ to energy, i.e. division by a factor proportional to
$dE/d\alpha \propto \sin(2\alpha)$.
This procedure is tempting since we only need to increase the maximum
value of $\rho$ in all the numerical implementations and plot the
wave function at that large distance. For this to be accurate the
asymptotics has to be well described by the hyperspherical expansion
or the basis has to be very large and able to describe the necessary
large $\rho$-behavior. However, when the basis functions have
asymptotics different from one of the intermediate structures
the size of the basis needed to reach convergence can easily become huge,
making the procedure impractical. This is not necessarily
easy to see in the numerical results where an increase of basis size
usually is rather expensive while the convergence could be extremely
slow. The procedure is probably only directly useful for direct decay
or for sequential decay via broad resonances. Otherwise the different
intermediate structure should be computed somehow and extrapolations
to large distances applied to each component individually.
An example to illustrate the present alternative formulation is
available in the schematic model discussed in detail in
\cite{gar05a} where the widths but not the energy distribution were
computed. Assume that only the Coulomb potential is important and the
most probable path (ridge in the wave function) from small to large
distances can be described by scaling the hyperradius. The
corresponding optimum path is defined by minimizing the WKB-tunneling
expression as function of different relative scaling parameters
$s_{ik} = r_{ik}/\rho$, where $r_{ik}$ is the distance between
particles $i$ and $k$. The path is given by $s_{ik}^{3} m_i Z_j =
s_{jk}^{3} m_j Z_i$ (see also \cite{kar04}), where $Z_i e$ is the
charge of particle $i$. We then arrive at the most probable value for
the energy division, i.e.
\begin{eqnarray} \label{e110}
E_k = E_{total} \bigg(1 + \big(\frac{m_k Z_k^2}{m_i Z_i^2}\big)^{1/3}
+ \big(\frac{m_k Z_k^2}{m_j Z_j^2}\big)^{1/3} \bigg)^{-1} \; ,
\end{eqnarray}
where $E_{total}$ is the total energy distributed among all the three
particles. This expression is simple but not very accurate. It also
only provides an estimate of the peak value. To compute the
distribution other paths must also be considered. This is possible
but we shall leave this for a later discussion in connection with a
detailed treatment of the Coulomb interaction.
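A short rearrangement of eq.(\ref{e110}) with $a_{k}=(m_{k}Z_{k}^{2})^{1/3}$ gives $E_{k}=E_{total}\,a_{k}^{-1}/(a_{1}^{-1}+a_{2}^{-1}+a_{3}^{-1})$, so the three estimates always sum to $E_{total}$. A small sketch (the masses and charges below are illustrative):

```python
def energy_fraction(k, m, Z):
    # E_k / E_total from eq.(e110); i, j label the two other particles
    i, j = [n for n in range(3) if n != k]
    ri = (m[k] * Z[k]**2 / (m[i] * Z[i]**2)) ** (1.0 / 3.0)
    rj = (m[k] * Z[k]**2 / (m[j] * Z[j]**2)) ** (1.0 / 3.0)
    return 1.0 / (1.0 + ri + rj)

# three identical particles (e.g. three alphas) share the energy equally
m, Z = [4.0, 4.0, 4.0], [2.0, 2.0, 2.0]
fracs = [energy_fraction(k, m, Z) for k in range(3)]
print(fracs)                 # each fraction is 1/3; they sum to one
```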
\section{Resonance wave functions}
The resonance wave function contains all information including that of
the relative energy distribution after the decay. The calculations
must then first provide the corresponding three-body resonance states.
Second the large-distance behavior must be accurately extracted. Due
to the different structures, these steps are not trivially completed
by use of only one method. We briefly describe first the main
ingredients in our computations and the features of the wave function.
Second we explain how the asymptotic behavior is obtained in practice.
\subsection{Method}
We use the hyperspherical adiabatic expansion method combined with
complex scaling to obtain resonance wave functions. The coordinates are
defined in section 2 along with our basis functions in angular space,
i.e. the hyperspherical harmonics in each of the three Jacobi systems.
We solve the complex scaled Faddeev equations as function of
hyperradius \cite{nie01,fed03}. The complex scaled coupled set of
radial equations are subsequently solved with the appropriate boundary
conditions, i.e. exponentially vanishing with increasing $\rho$ for
both bound states and resonances. Thus here we assume that we do not
need to treat the Coulomb interaction explicitly at asymptotic large
distances. As shown in \cite{fed03}, the results obtained with this method
agree well with some of the most common procedures, as for instance the
complex energy method.
The energies are usually accurately determined in this method. The
same applies to the wave functions at small and intermediate distances
where the exponential fall-off still is not too restrictive. However,
we need the information at distances where the asymptotic limit is
reached, i.e. possibly at very large $\rho$ where the complex scaled
wave functions are very small. Furthermore, more than one geometric
structure can be important at the same time, e.g. two different
spatially confined two-body configurations with the (different) third
particle far away. This happens frequently with two identical
particles like neutrons and protons as constituent particles, because
the nucleon-core interaction must be sufficiently attractive to
produce a bound or resonating three-body system, and this implies that
such two-body configurations are favored. To account simultaneously
for different two-body substructures, it is essential to use three
components as in the Faddeev decomposition adopted by us. The same
efficiency can be achieved in a variational approach by allowing
Faddeev-like components in the trial wave function \cite{kam89}. It is
much more difficult, if not impossible, to reach convergence with only
one component as in the hyperharmonic expansion method \cite{fed97}.
Even with three Faddeev components a large basis has to be
employed. To describe substructures inside one of the two-body
potentials (range $R_{eff}$) for large $\rho$ we need values of the
hyperspherical quantum number $K$ up to a few times $\rho\sqrt{m} /
(R_{eff}\sqrt{\mu})$ where $\mu$ is the reduced mass of the two
particles. This is because $K/2$ is the number of nodes in the basis,
and details can only be described if a few nodes can be placed inside
the structure in question. Thus, for nuclear systems, where $R_{eff}
\simeq 4$~fm, we need $K_{max}$ of at least $50$ to describe such
structures for $\rho \simeq 100$~fm. Employing complex scaling
transforms resonances into states obeying the numerically easier bound
state boundary conditions. The required basis size is larger,
essentially because the exponential fall-off moves to larger distances
with increasing scaling angle. These estimates provide necessary
conditions for a reasonable description of two-body substructures
which in turn are necessary to describe the sequential decay mechanism.
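The basis-size estimate above amounts to simple arithmetic. The helper below is hypothetical (not part of the paper's formalism) and packages the quoted rule of thumb; the prefactor ("a few" $=2$) and the mass ratio $m/\mu=1$ are assumptions chosen to reproduce the numbers in the text:

```python
import math

def kmax_estimate(rho_fm, Reff_fm, m_over_mu=1.0, few=2.0):
    # K_max ~ (a few) * rho * sqrt(m/mu) / R_eff: the text's necessary
    # condition for resolving two-body substructure of size R_eff at
    # hyperradius rho; 'few' and m_over_mu are illustrative assumptions
    return few * (rho_fm / Reff_fm) * math.sqrt(m_over_mu)

print(kmax_estimate(100.0, 4.0))   # 50.0, matching the K_max >= 50 quoted
```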
The requirement of large $K$ to describe substructures must be
reconciled with the fact that an increase of $\rho$ towards infinity
results in convergence of the angular eigenvalues to the hyperharmonic
spectrum for free particles. We show an example in Fig.~\ref{fig1}
where the imaginary parts are omitted as they both oscillate around
zero and approach zero at large $\rho$. The real parts of the angular
eigenvalues approach $K(K+4)$ as $\rho$ increases while the
corresponding potentials all approach zero faster than $\rho^{-2}$.
The attractive pockets in the eigenvalues at short distance disappear
in the potentials except for the two lowest where the negative values
remain as a prominent feature. The approach to the asymptotic values
is very fast except for the levels where $s$-waves contribute
significantly. The low energies favor these levels at large distances.
The related adiabatic wave functions approach the hyperspherical
harmonics. The reason is that the regions in space, where the
short-range interactions are significant, are shrinking in size with
increasing $\rho$ relative to the total space available. The radial
extension of these regions, responsible for two-body correlations,
decreases relative to $\rho$ as $1/\rho$. The interactions are non-vanishing
in smaller and smaller regions. Consequently they become less and
less important for both energies and wave functions. Thus, the basis
size has to increase with $\rho$ in order to allow a description of
the two-body substructures or equivalently of sequential decay, but
the lowest adiabatic potentials approach the free solutions. The
basis size in practice always has to remain finite and the
substructures eventually become impossible to describe in this way.
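The free limit described in this paragraph can be written out explicitly. Below is a minimal sketch; the $15/4$ hypercentrifugal constant for three particles and the $\hbar^2/2m \approx 20.7$~MeV\,fm$^2$ nucleon-mass normalization are standard choices but are stated here as our assumptions, not values taken from the text:

```python
def lambda_free(K):
    """Free hyperspherical eigenvalue approached at large rho."""
    return K * (K + 4)

def u_eff(K, rho_fm, hbar2_2m=20.736):
    """Effective hyperradial potential in the free limit (MeV),
    assuming the standard (lambda + 15/4)/rho^2 form for three
    particles with a nucleon-mass normalization hbar^2/2m."""
    return hbar2_2m * (lambda_free(K) + 15.0 / 4.0) / rho_fm**2

print([lambda_free(K) for K in (0, 2, 4, 6)])   # [0, 12, 32, 60]
print(u_eff(2, 100.0))   # small, and falling off as rho**-2
```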
\begin{figure}
\begin{center}
\vspace*{-1.1cm}
\epsfig{file=Fig1.ps,scale=0.5, angle=270}
\end{center}
\caption{ The real parts of the lowest 8 angular eigenvalues (left) and
corresponding adiabatic potentials (right) as functions of $\rho$ for
the 2$^+$ states in $^{6}$He ($^{4}$He + n + n). The scaling angle is
$\theta = 0.10$ rads. }
\label{fig1}
\end{figure}
The interactions have to be chosen to reproduce the pairwise
low-energy scattering properties of the three particles appearing in
the final state. Clearly we must accurately include all the partial
waves necessary to describe the quantum numbers of the decaying
resonance. However, even with a sufficiently large basis the
three-body system does not necessarily appear with the correct energy
and width. In fact, there may not even be an attractive region at
small distances as required to produce a resonance of finite width.
This could occur when we are dealing with a many-body resonance
without traces of any three-body cluster structure. Nevertheless, a
meaningful computation of the energy distributions emerging after a
three-body decay can still be carried out.
The philosophy is the same as for $\alpha$-emission where the inner
part of the effective potential is replaced by an attractive square
well with a depth adjusted to reproduce the resonance energy. The
resulting barrier is then used to derive the width, usually in the WKB
approximation. We generalize this concept to the adiabatic potentials,
i.e. we add a three-body potential of short range in the
hyperradius. It is intended to describe interactions beyond the
two-body level such that the three-body system has a resonance at the
desired energy. By doing this we have substituted the possibly
complicated many-body structure at small distance with the three-body
cluster structure resulting from an effective potential, which in turn
also provides the correct boundary conditions for a three-body
decaying resonance. This principle was introduced for fine-tuning in
the first calculation with the correct boundary conditions of the
three-$\alpha$ decay of the second $0^+$-state in $^{12}$C
\cite{fed96}. It has later become the standard procedure to adjust
three-body energies without significant changes of the underlying
substructure \cite{nie01}.
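The $\alpha$-emission analogy invoked above can be made concrete with a minimal WKB sketch: an inner square well of adjustable depth fixes the resonance energy, and the penetrability of the outer Coulomb barrier then controls the width. All numbers below (charges, channel radius, energy) are illustrative assumptions of ours, not values from the text:

```python
import numpy as np

hbarc = 197.327          # MeV fm
mu    = 3727.0           # MeV/c^2, roughly the alpha-particle mass (assumed)
E     = 8.8              # assumed resonance energy, MeV
Z1Z2  = 2 * 82           # assumed charge product of alpha and daughter
e2    = 1.44             # MeV fm
R     = 9.0              # channel radius where the inner square well ends, fm

b = Z1Z2 * e2 / E                       # outer classical turning point
r = np.linspace(R, b, 20001)[1:-1]      # open interval, avoids V = E endpoint
kappa = np.sqrt(2.0 * mu * (Z1Z2 * e2 / r - E)) / hbarc   # fm^-1
G = np.sum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(r))   # trapezoid rule
P = np.exp(-2.0 * G)                    # WKB barrier penetrability

print(f"penetrability ~ {P:.2e}")
```

Only the depth of the inner well enters through the resonance energy $E$; this is the sense in which the interior many-body structure is replaced by an effective potential.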
\subsection{Important features}
The radial solution is often strongly dominated by one or two of the
lowest adiabatic components at small distance where the relative
probability is large. This is because all three short-range two-body
interactions contribute simultaneously and the result is the
energetically most favored three-body resonance structure consistent
with the boundary conditions. As $\rho$ increases, at least one
particle has to move away from the others, leaving at most one
non-vanishing two-body interaction. Coherent contributions from
different such configurations are possible and sometimes even
favored. At large distance, where the energy distribution is
determined, several more adiabatic potentials are often needed. The
couplings due to the Coulomb interaction would normally increase the
necessary number of potentials.
It is established \cite{nie01} that the adiabatic potentials of lowest
energy at large $\rho$ are related to configurations with relative
$s$-waves between the two closest particles. This is the basis for
the Efimov effect \cite{efi70}. If these large-distance
configurations differ from the resonance structure at small $\rho$,
possibly with higher partial waves, the lowest angular wave function
must change its structure accordingly as $\rho$ increases. In
\cite{gar05b} we showed the structure for $^{6}$He(2$^+$) for the
lowest eigenvalue, which approaches the $K=2$ value at large $\rho$. In
Fig.~\ref{fig2} we show the results for the similar eigenvalue
approaching the $K=4$ level for large $\rho$. The pronounced and
rapidly changing structure is qualitatively similar to the lower-lying
$K=2$ level, i.e. dominated by $p_{3/2}-p_{3/2}$ neutron-core
structure at small $\rho$ and by $s$-waves between the two neutrons at
large $\rho$. Essentially all other allowed components contribute
with equally small amounts.
\begin{figure}
\begin{center}
\vspace*{-1.1cm}
\epsfig{file=Fig2.ps,scale=0.5, angle=270}
\end{center}
\caption{The fraction of different components in the fifth adiabatic
potential for $\theta=0.10$ rads as function of $\rho$ for $^{6}$He(2$^+$).
The angular eigenvalue corresponds to $K=4$ at large $\rho$, see
Fig.~\ref{fig1}. The angular momenta are specified by $\ell_x$,
$j_x$, $\ell_y$, $j_y$, and $L$. Left: $x$ refers to the two-neutron
system and $y$ to its center of mass motion relative to the
$\alpha$-particle. Right: $x$ refers to the neutron-$\alpha$ system
and $y$ to its center of mass motion relative to the other neutron.
We give the $(x,y)$ components on the figure as $\ell_{j}$. }
\label{fig2}
\end{figure}
These rather dramatic changes imply that it is crucial to include all
adiabatic potentials with significant couplings to those dominating
the structure at small hyperradii. This is simply because the
couplings are responsible for changing the radial weights of the
different adiabatic components as functions of $\rho$; with no
couplings the occupations would remain independent of $\rho$. On the
other hand, each of the angular wave functions related to the adiabatic
potentials is itself a function of $\rho$, sometimes rapidly
varying as seen in Fig.~\ref{fig2}. In principle the non-diagonal
couplings could be vanishingly small and all change of structure would
be described by the lowest adiabatic wave function. However, this is
rather unlikely because the couplings are defined as matrix elements
of first and second radial derivatives of the angular
wave functions. Thus, radial couplings between rapidly changing angular
structures are inevitable.
In Fig.~\ref{fig3} we show how the strongly varying coupling
potentials can be related to the changing angular structure seen in
Fig.~\ref{fig2}. The peaks are most pronounced when a crossing is
avoided and the two levels switch characteristics \cite{nie01}. The
rather confusing coupling picture is crucial for the asymptotic
behavior of the wave function at large distance where the energy
distribution is determined. The second order terms are substantially
larger than the first order couplings but all vanish at large $\rho$.
Thus the numerical computations in the present case must extend at
least beyond $\rho \approx 40$~fm where the couplings have reached
very small values. This is about two times the largest scattering
length which defines the distance of convergence towards asymptotic
values.
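The $\rho \approx 40$~fm scale quoted above can be tied to the two-body scattering lengths. Assuming the largest of them is the neutron-neutron singlet value of roughly 18.7~fm (our assumed input; the text does not quote the numbers), twice that magnitude reproduces the quoted scale:

```python
# Assumed low-energy scattering lengths (fm); only magnitudes matter here.
a_nn     = 18.7      # neutron-neutron singlet (assumed value)
a_nalpha = 2.4       # s-wave neutron-alpha (assumed value)

rho_min = 2.0 * max(abs(a_nn), abs(a_nalpha))
print(rho_min)       # 37.4, consistent with the rho ~ 40 fm quoted above
```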
\begin{figure}
\begin{center}
\vspace*{-1.1cm}
\epsfig{file=Fig3.ps,scale=0.5, angle=270}
\end{center}
\caption{The coupling potentials between the four dominating adiabatic
levels for $\theta=0.10$ rads shown as functions of $\rho$ for
$^{6}$He(2$^+$). The
first and the fourth levels have similar quantum numbers but approach
the $K=2$ and $4$ levels, respectively. To show the first ($P$) and
second ($Q$) order coupling potentials in the same units (fm$^{-1}$)
we multiply $Q$ by $\rho$. (The energy unit is restored in the
coupling potentials by including the omitted factor, i.e. $\hbar^2
Q/(2m)$, $\hbar^2 P/(2m)\partial/\partial \rho$). }
\label{fig3}
\end{figure}
We have now established two important but competing effects, which
determine the three-body resonance structure from small to large
values of $\rho$. In the extreme, the structure can either remain
unchanged by climbing correspondingly up on the adiabatic potentials,
or the structure can change to follow that of the lowest-lying
adiabatic wave function. A compromise between following the
energetically most favored configuration and the resistance to a
change of structure therefore must be reached. The combination of
these effects determines the relative population of the different
components in the radial solution, which in turn determines the
observable energy distribution. The couplings are more important here
than for widths, energies and small-distance wave functions. They
must be accurately computed to provide the energy distribution.
The structure of the resonance wave function at large $\rho$ could
remain unchanged and only exhibit a simple scaling behavior
proportional to $\rho$. This is typical of direct decay. The
wave function could also have large probability for finding two
close-lying particles where the hyperradius mainly changes by moving
the third particle as $\rho$ increases. This is typical of sequential
decay via a more or less stable two-body configuration, e.g. sequential
decay through two-body resonances. The intermediate configuration
does not necessarily need a confining barrier; it can instead be
supported by low-lying two-body virtual $s$-states \cite{gar05c}. Mixtures of
all types can occur giving rise to the description of decay properties
as fractions proceeding via individual two-body configurations. All
these structures can be computed by use of our method, although
convergence for the Coulomb interaction is more difficult.
The best strategy to get reliable results is not obvious, because the
brute force method of increasing basis size and hyperradius until
convergence is reached may be beyond any reasonable computer effort.
The indecision is related to the requirement of an increasing basis
with increasing $\rho$, which means that a smaller $\rho$ and a
smaller basis could provide a better description with much less
effort. In other words a convergence may be reached in a region of
$\rho$-values for a moderate basis size. This convergence would be
destroyed as $\rho$ is allowed to increase because the basis size
cannot follow. The convergence can possibly be reached faster by
extrapolation of the observable distribution by use of a known or
anticipated dependence of $\rho$ and basis size \cite{res99}.
Different parts of the wave function may extrapolate differently. The
most efficient choice depends on the (mixture of) decay mechanisms,
which therefore has to be determined first. Thus the first step
is to compute the structure of the resonance wave functions as
discussed in \cite{gar05b}.
\section{Realistic numerical illustration: $^{6}$He($2^+$)}
Nuclear three-body decay without complications of the Coulomb
interaction must involve emission of two neutrons. The decaying
states do not have to be three-body structures although such
two-neutron halos are available and rather well studied. The most
obvious case is the established $2^+$ resonance in $^{6}$He which is
formed by the same neutron-core components as in the ground state.
Without Coulomb interactions the computations should quickly lead to
the desired energy distributions. However, even short-range
interactions can present difficulties as highlighted by the intricate
description needed for the Efimov effect \cite{nie01}. Both the
$\alpha$-neutron and the neutron-neutron interactions were previously
employed in ground-state computations \cite{nie01}.
We follow the
procedure outlined in the preceding sections. Different prescriptions
are possible to implement the Pauli principle \cite{gar99}, all of
them providing indistinguishable angular wave functions at large distances.
We adjust the three-body potential to give the correct resonance energy.
This only requires marginal fine-tuning. In total 1132 hyper-spherical
harmonics are used in the expansion (\ref{eq21}), and the maximum value of
$K$ is 200 for the most relevant partial wave components and never smaller
than 40. The resonance wave function is then
available as a function of the hyperspherical coordinates. A complex scaling
angle of 0.10 rads is enough to produce a resonance wave function that
vanishes exponentially with increasing $\rho$. Any other scaling
angle for which the numerical calculations have converged produces the
same results.
\subsection{Resonance structure}
We already showed the angular eigenvalues and the adiabatic potentials
in Fig.~\ref{fig1}. The probability distribution arising from only
the lowest potential was shown in \cite{gar05b}. The structure changes
from peaks at small $\rho$ corresponding to $\alpha$-neutron
$p_{3/2}$-structure to a probability with one broad peak corresponding
to comparable distances between all three particles. This reflects the
change of structure of this angular wave function from small to large
$\rho$ as seen in detail in Fig.~\ref{fig2}. Eventually the lowest
hyperharmonic function with $K=2$ is approached. This indicates in
itself a direct decay mechanism. However, in this case the lowest
potential provides rather misleading results.
\begin{figure}
\begin{center}
\vspace*{-1.1cm}
\epsfig{file=Fig4.ps,scale=0.5, angle=270}
\end{center}
\caption{The radial wave functions (left) and the absolute values and
real parts of their relative sizes (right) corresponding to the four
dominating adiabatic potentials for $\theta=0.10$ rads as functions of
$\rho$ for the $^{6}$He(2$^+$) resonance.}
\label{fig4}
\end{figure}
The rapidly changing structure seen in Fig.~\ref{fig2} at around $\rho
\approx 20$~fm could easily lead to occupation of higher-lying
levels. These occupation probabilities are functions of $\rho$ and are
simply found as the squares of the radial wave functions obtained by
solving the coupled set of radial equations. The results are shown in
Fig.~\ref{fig4} for the lowest adiabatic components. At small $\rho$
the lowest potential is totally dominating but as $\rho$ increases the
lowest three components contribute with comparable amplitudes. All
radial wave functions vanish by oscillating around zero with decreasing
amplitudes. The relative sizes are more clearly seen in the right
hand side of Fig.~\ref{fig4}. After the transition around 20~fm the
individually very small radial amplitudes stabilize on relatively
constant finite ratios. The squares of these ratios give the relative
weights, i.e. reduced compared to the first component by about 0.6,
0.25, 0.01 for the second, third and fourth potential, respectively.
The transition to stable ratios is consistent with the disappearance
of the coupling terms shown in Fig.~\ref{fig3}.
\begin{figure}
\begin{center}
\vspace*{0.1cm}
\epsfig{file=Fig5.eps,scale=0.8,angle=0}
\end{center}
\caption{The probability distribution for $^{6}$He(2$^+$) including
the lowest 8 adiabatic potentials as function of hyperradius $\rho$
and hyperangle $\alpha$, related to the distances by $r_{ik} \propto
\rho \sin \alpha$, i.e. either the distance between one neutron and the
core, $r_{nc}$ (left), or between the two neutrons, $r_{nn}$ (right). }
\label{fig5}
\end{figure}
The total probability distribution in Fig.~\ref{fig5} is quite
different from that of the lowest eigenvalue. At large $\rho$ the
probability now peaks at a smaller distance between the two neutrons
and correspondingly the $\alpha$-neutron distance is increased. Still
fairly broad distributions remain. The decay mechanism indicated by
this structure is thus not direct, but rather a mixture of the
preferred sequential decay via a neutron-neutron intermediate
configuration and a smaller direct component.
\subsection{Energy distributions}
Reliable computation of the energy distribution requires numerically
converged results in an appropriate region of $\rho$-values. The
energy distributions are shown in Fig.~\ref{fig6} as functions of
$\rho$ for a sufficiently large number of adiabatic potentials. The
resemblance with the probability distribution is not surprising since
only the volume element has been changed. The observable distribution
is the cut for constant, and sufficiently large, $\rho$ where
convergence has been reached as function of basis size. The neutron
energy distribution has two peaks for small $\rho$ corresponding to
the geometric configurations of one neutron close to the
$\alpha$-particle and the other neutron further away. This is
reflected in the peak in the $\alpha$-spectrum at intermediate
energies corresponding to the same geometric configurations.
\begin{figure}
\begin{center}
\vspace*{-1.1cm}
\epsfig{file=Fig6.eps,scale=0.8,angle=0}
\end{center}
\caption{The energy distributions of neutrons (right) and
$\alpha$-particles (left) after decay of $^{6}$He(2$^+$) for $\theta=0.10$
rads. The three-dimensional plots show the dependence on $\rho$ with
inclusion of 8 adiabatic wave functions. The maximum energies are
$(m_{\alpha}+m_n)/(m_{\alpha}+2m_n) E_0$ and $2m_n /(m_{\alpha}+2m_n)
E_0$ for the neutron and the $\alpha$-particle, respectively. Here
$E_0$ is the energy of the decaying resonance. }
\label{fig6}
\end{figure}
The structure changes with $\rho$ into a broad peak at intermediate
energies for the neutron spectrum, and one peak very close to the
maximum energy for the $\alpha$-spectrum. This is the fingerprint of
sequential decay via emission of the $\alpha$-particle followed by
decay of an intermediate two-neutron structure. This is easily
visualized as the two-body decay process where the $\alpha$-particle
receives maximum energy when the two neutrons move together in the
opposite direction. In the subsequent decay each neutron then must
share the remaining energy which leads to an intermediate energy
between zero and the maximum value.
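The kinematic limits quoted in the caption of Fig.~\ref{fig6} follow from momentum conservation in the three-body final state: particle $i$ can carry at most $E_{i,max} = (M - m_i)/M \cdot E_0$, where $M$ is the total mass. A quick check with standard masses (the numerical mass values are our input, not the paper's):

```python
m_alpha = 3727.38    # MeV/c^2 (assumed standard value)
m_n     = 939.565    # MeV/c^2 (assumed standard value)
M = m_alpha + 2.0 * m_n

frac_n     = (m_alpha + m_n) / M   # neutron maximum fraction of E0
frac_alpha = 2.0 * m_n / M         # alpha maximum fraction of E0
print(round(frac_n, 3), round(frac_alpha, 3))   # 0.832 0.335
```

The neutron can thus carry up to about 83\% of the resonance energy, the $\alpha$-particle only about 34\%, which is why the $\alpha$-spectrum peak "very close to the maximum energy" is such a clean signature.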
This inferred decay mechanism is perhaps counter-intuitive, because
stable intermediate configurations of two neutrons exist neither
as bound states nor as resonances. It would be much more acceptable
with the $\alpha$-neutron $p_{3/2}$-resonance as the intermediate
structure. However, one characteristic feature of the neutron-neutron
interaction is the low-lying virtual $s$-state which simply means that
there is a substantial $s$-wave attraction. Apparently this is
decisive for the decay process where the two neutrons end up by moving
essentially in the same direction, and then necessarily guided by the
attraction. The interesting point is maybe that this is not the way
they started out at small distance in the spatially confined part of
the wave function. This change of structure with hyperradius is in a
sense reflecting the dynamic character of the decay process. At
distances larger than the scattering length the short-range
interactions are negligibly small, the wave function changes are
completed and the asymptotics are established.
\begin{figure}
\begin{center}
\vspace*{0.0cm}
\centerline{\psfig{figure=Fig7.ps,height=10.0cm,width=8.0cm,%
bbllx=2.8cm,bblly=0.6cm,bburx=20.4cm,bbury=24.5cm,angle=270}}
\end{center}
\caption{ The energy distribution of the $\alpha$-particle after decay
of the $2^+$-resonance in $^{6}$He. The scaling angle is $\theta
=0.10$ rads and $\rho=75$~fm where convergence is reached. The points are
extracted from the measurements in
\cite{dan87}. Contributions from the 4 dominating adiabatic potentials are
shown individually. }
\label{fig7}
\end{figure}
The microscopic structure of the energy distributions can be studied
by dividing them into contributions from the individual adiabatic
potentials, as seen in Fig.~\ref{fig7} for the emitted
$\alpha$-particle. The total distribution remains essentially
unchanged if more than the four dominating potentials are included.
Each contribution has its own characteristic feature. The first has a
peak close to the maximum energy, i.e. resembling $\alpha$-emission
from a neutron-neutron $0^+$-state. The second has a peak at
intermediate energy, i.e. resembling sequential decay by the
$\alpha$-neutron resonance. The third has a peak at small energy,
i.e. resembling $\alpha$-emission from an excited neutron-neutron
$2^+$-state. In addition the fourth potential also gives a small
contribution with maxima at intermediate and maximum energy. The size
of about 1\% cannot be seen in the total distribution. However, this
eigenvalue has the same angular momentum quantum numbers as the first
level. Therefore the non-diagonal interference term would be about
10\% of the total contribution. It turns out that the interference
is essentially destructive and responsible for the almost flat region
at intermediate energies.
The decay mechanism is then not simple although understandable in
terms of our formulation. The main contribution is decay via the
virtual $s$-state and the second is from direct decay. The third
mechanism is produced by the coupling to the higher-lying state taking
place at relatively small $\rho$. This populates the level eventually
approaching the $K=4$ hyperspherical level at large $\rho$. The
interference with the dominating contribution then leads to the total
distribution. The division into these different contributions of
direct and sequential is to some extent artificial but perhaps useful
in connection with the experimental analysis.
\begin{figure}
\begin{center}
\vspace*{0.0cm}
\centerline{\psfig{figure=Fig8.ps,height=10.0cm,width=8.0cm,%
bbllx=2.8cm,bblly=0.6cm,bburx=20.4cm,bbury=24.5cm,angle=270}}
\end{center}
\caption{ The same as Fig.~\ref{fig7} for the neutrons emerging after
decay of the $2^+$-resonance in $^{6}$He. }
\label{fig8}
\end{figure}
The mixture of all these contributions leads to the total distribution
which has the right features but without precise reproduction of the
high-energy peak, see Fig.~\ref{fig7}. Accounting for the experimental
resolution would not improve the agreement very much, because the peak
either gets broader and lower, or higher and narrower. The
discrepancies can originate from
the presence of the target and the reaction mechanism itself as well as
from contributions to the experimental points from other than resonance
decays. The experiment selects the window of energies around the
$2^+$ resonance position in the reaction $^7$Li($^2$H,$^3$He)$^6$He$^*$.
This necessarily includes some background which perhaps has a
different energy distribution than the $2^+$ resonance we investigated
in the present calculations. We find a distribution where the
two-neutron virtual $s$-state dominates whereas the measurements are
broader as expected from non-resonance decays. In this work we focus on
the decay of ``populated'' resonances, and an appropriate description of
this reaction goes beyond the scope of the paper.
An attempt to understand the distribution was published soon after the
experiment in \cite{dan87}. The measured distribution was fitted by a
linear combination of the lowest hyperharmonic functions of $K=2$ and
4. The conclusion was that a substantial $K=4$ component is needed to
reproduce the experiment. The decay mechanism dominated by the
neutron-neutron virtual $s$-state was abandoned in favor of the $K=4$
component. This phenomenological analysis provides a good fit even for
energies above the maximum allowed by the resonance energy. The
decaying resonance wave function does not enter anywhere. The
significance is not easy to interpret in terms of decay mechanisms as
attempted in the present work.
The neutron energy distribution has not been measured, but for future
comparison we show our prediction in Fig.~\ref{fig8}. The division
into different adiabatic components shows that the broad total
distribution centered around an intermediate energy is obtained by
adding several qualitatively similar contributions. The different
mechanisms would all produce most likely energies around half the
maximum value. To distinguish between the mechanisms it is therefore
necessary to measure both $\alpha$-particles and neutrons after the decay.
\subsection{Dependence on scaling angle}
It is instructive to investigate the dependence of the distributions
on the choices of $\rho$, $\theta$ and basis size. The
$\rho$-dependence is already indicated in Fig.~\ref{fig6} where the
distributions are very stable as soon as $\rho$ is larger than about
$50$~fm. However, this stability does require a sufficiently large
basis, which at least up to $100$~fm can still be handled on
modest-size computers. It is also clear that a finite basis cannot
accurately describe the solutions when $\rho$ increases towards
$\infty$. Then the angular solutions approach the hyperharmonics but a
large basis is still required to reproduce the structures at small
distances between pairs of particles. Eventually this becomes
impossible. The many basis functions cancel each other at larger
distances.
At intermediate distances, where the basis is sufficiently large, the
resonance wave functions are independent of $\rho$. For the radial
solution this is seen in the right part of Fig.~\ref{fig4}, where for $\rho$
larger than about 50~fm, the ratio between the different radial components
is approximately constant. The energy distributions are mainly dominated
by the absolute squares of these ratios, although when different adiabatic
components interfere also the real parts of these complex ratios may
contribute individually. This behavior of the radial ratios is responsible
for the stable behavior of the energy distributions for sufficiently large
values of $\rho$. The constant behavior of the radial ratios also implies that
the radial wave functions have already reached asymptotics as given in
eq.(\ref{eq22}) for all the channels, and therefore the distributions are
independent of the scaling angle. However, the latter conclusion is based on
an assumption of analyticity of the angular solutions as function of
$\theta$. When this scaling angle is changed corresponding to a
rotation across a singularity like a two- or three-body resonance the
continuity is broken and the solutions change as well.
This is especially clear when we compare two solutions with $\theta$
smaller and larger than the angle corresponding to a two-body
resonance. For the large $\theta$ one angular eigenvalue changes
character and increases towards infinity as $\rho^2$, see
\cite{fed03}. This qualitative change of behavior necessarily causes
a change of the angular wave functions because the upgoing eigenvalue
at large distances fully describes the properties of the two-body
resonance. These features were distributed over several wave functions
for the small $\theta$-value.
In between singularities the individual angular solutions are
independent of both $\rho$ and $\theta$. This may not be an apparent
feature of the numerical solutions because the basis has to be
sufficiently large for a complete description. As $\theta$ increases
the effective ranges of the two-body interactions also increase and
the stable region is pushed to larger $\rho$-values. This means that
the minimum basis size has to increase with $\theta$.
\section{Summary and conclusions}
We formulate a method to compute the energy distribution of three
particles emerging after three-body decay of a many-body resonance.
The complex energy of a resonance corresponds to a pole in the
momentum-space wave function, whose absolute square has the form of a
Breit-Wigner shape multiplied by a smoothly varying function. In
coordinate-space this form corresponds to a large-distance asymptotic
wave function consisting of only outgoing waves. We show formally by
Fourier transformation that the coordinate-space asymptotic angular
dependence determines the energy distribution by substituting momentum
directions for the conjugate coordinate directions. For this the
divergent Fourier integral is regularized by the Zeldovich
prescription.
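The Zeldovich prescription mentioned above can be illustrated numerically: a divergent integral over a pure outgoing wave, $\int_0^\infty e^{ikr}\,dr$, is damped by a Gaussian $e^{-\epsilon r^2}$, and the limit $\epsilon \to 0^+$ reproduces the analytic regularized value $i/k$. The toy integrand and parameter values below are our own illustration:

```python
import numpy as np

def zeldovich(k, eps, r_max=400.0, n=400001):
    """Gaussian-regularized integral of exp(i k r) over r >= 0."""
    r = np.linspace(0.0, r_max, n)
    f = np.exp(1j * k * r - eps * r**2)
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))  # trapezoid rule

I = zeldovich(k=1.0, eps=1e-4)
print(abs(I - 1j) < 1e-2)   # True: regularized value approaches i/k = i
```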
For two-body decay the energy distribution is trivially given by the
Breit-Wigner distribution of the initial resonance; energy
conservation takes care of everything else. For three-body decay
the total energy can be distributed continuously among the three
particles. We show that the resonance decay results in distributions
obtained from the large-distance angular behavior of the coordinate
wave function. The asymptotic behavior can correspond to either
genuine three-body structures or two-body substructures, for example
two-body resonances or configurations favored by substantial
attraction, as for virtual states. Also virtual population
of two-body intermediate substructures is allowed as an appropriate
asymptotic behavior with a resulting special energy distribution. The
different asymptotics characterize the different decay modes used in
analyses of experimental data. Different modes can co-exist.
We illustrate by the decay of the $2^+$-state in $^{6}$He. The
practical computations employ the hyperspherical adiabatic expansion
combined with the complex scaling method. We discuss how a large
hyperradius necessarily must be accompanied by a large basis.
Convergent results may then be obtained with less effort at moderate
hyperradii and moderate basis sizes. For convergence it is crucial to
have all three Faddeev components, and especially if all decay
mechanisms simultaneously should be included in the theoretical
formulation. The wave function undergoes dramatic changes between small
distances, where the resonance properties usually are determined, and
large distances, from which the energy distributions emerge. The
reason for these structural changes is that the small-distance
behavior is determined by the two-body resonance substructures,
whereas the large-distance behavior is determined by a competition
between two effects: the energetically favored configuration of
smallest two-body angular momentum with attractive two-body potentials,
and the tendency to maintain the same structure as at small distances
by occupying higher-lying levels.
In conclusion, theoretical interpretation of the simplest nuclear
three-body decay without Coulomb interactions is already rather
complicated. It is then advisable to test any given method on these
systems. The accuracy of computations of the more complicated
decaying charged systems can then be judged. This is important since
almost all nuclear three-body resonance decays involve charged
systems. The goal is to interpret the soon-to-come accurate
experimental correlation data for three-body decays of charged
systems.
What is the probability of getting four consecutive twos from tossing a fair number cube?

The probability of a two on each toss is 1/6, and the tosses are independent.
The probability of all four is therefore (1/6)^4 = 0.0007716 (rounded) = 0.077 %.
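As a quick sanity check of the arithmetic:

```python
p = (1.0 / 6.0) ** 4    # four independent tosses, each with probability 1/6
print(round(p, 7))      # 0.0007716
```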
using System;

// An enum's underlying type can be narrowed to byte; every member value must fit in 0-255.
internal enum MyColor : byte
{
    Red = 1, Blue = 2, Yellow = 3
};

// The variant below does not compile: 300 is outside the range of byte.
//internal enum MyColor : byte
//{
//    Red = 100, Blue = 200, Yellow = 300
//};

class Program
{
    static void Main(string[] args)
    {
        var col = MyColor.Blue;
        Console.WriteLine(col); // prints "Blue"
    }
}
This statistic displays the doctors' awareness of the benefits associated with animal-assisted interventions (AAI) in Italy in 2015. As of the survey period, most doctors agreed that AAI helped patients to relax.
Would you be interested if a pet sitting service also offered animal training courses?
var expect = require('chai').expect;
var rethinkOdm = require('../');

describe('Client', function () {
  describe('#run', function () {
    it('should run a command without waiting for connection', function (done) {
      var ro = rethinkOdm();
      ro.run(ro.r.now(), function (err, res) {
        if (err) return done(err);
        expect(res).to.be.instanceOf(Date);
        done();
      });
    });

    it('should be possible to use promise', function (done) {
      var ro = rethinkOdm();
      ro.run(ro.r.now()).then(function (res) {
        expect(res).to.be.instanceOf(Date);
      })
      .nodeify(done);
    });
  });

  describe('#close', function () {
    it('should be possible to close connection', function (done) {
      var ro = rethinkOdm();
      ro.close(function (err) {
        if (err) return done(err);
        expect(ro.conn).to.be.null;
        done();
      });
    });

    it('should be possible to listen event', function (done) {
      var ro = rethinkOdm();
      ro.on('close', function () {
        done();
      });
      ro.close();
    });
  });
});
Kris Gunnars presents Top 7 Unhealthy Foods to Avoid Like The Plague posted at Authority Nutrition, saying, "A list of the top 7 unhealthy foods you should avoid if you want to lose weight, feel better and lower your risk of chronic diseases."
Nell Stephenson presents No Research Behind Paleo? Excuse Me? posted at Paleoista, by Nell Stephenson, saying, "Paleo expert Nell Stephenson expresses disgust at the premise that there is no research behind Paleo."
Victoria Prince presents Ship of the desert, shaper of human evolution… posted at Principle into Practice, saying, "When you think about camels, you probably don't think about how they shaped human evolution. While it may not be 'paleo', camel milk has been drunk by people in the Middle East for millennia, and camels have helped shape the human genome. Now, new research is showing camel milk has benefits for those with diabetes and alcoholic liver disease... It's kind of tasty too!"
Wendy Schwarts presents Food News and Reviews: KeVita, Jif and Special K posted at Go Paleo!, saying, "A Paleo rundown of what's new in the food world: the good, the bad and the nutritiously ugly."
Neely Quinn presents Toadally Primal Wellness Bundle: 33 eBooks for $39 posted at Paleo Plan, saying, "Lots of info for not a lot of money."
Peggy Emch presents Fatigued? Maybe It's Iron Deficiency - Raw Meat Can Help posted at The Primal Parent, saying, "Eating cooked iron-rich meats is great too, but cooking decreases iron's solubility. When you need a quick boost of iron, raw liver is the way to go."
Jamie Peterson presents Lunch Box Ideas posted at Groks Big Family , saying, "Lunch box ideas for kids and Sugar Detox."
Holly Woodcock presents 5 Things To Do With: CAULIFLOWER posted at Holly Would If She Could, saying, "Trying to find creative ways to expand your veggie rotation? Cauliflower is on your side! Healthy doesn't have to mean plain and repetitive."
Melissa Joulwan presents The Enchanted Broccoli Forest posted at The Clothes Make The Girl, saying, "Epic kitchen fail! A tale of when paleo-izing a beloved recipe turns disastrous..."
Angie presents A New Year and a Better You...and Yoga posted at Angie's Suburban Oasis, saying, "I've discovered a great new free Yoga video by YogaYak. I've been using this video for my morning yoga routine and highly recommend it."
The Cavegirls presents Swedish Cream Cookies posted at Northwest Cavegirls, saying, "Although I originally made this as a Christmas cookie, you could make this yummy cookie anytime of the year, whenever it suits your fancy. The cookie part of this recipe is a pretty basic cracker/cookie wafer. You can top it with chocolate frosting to make a sandwich cookie, or you could even spread it with something savory and serve it as an appetizer at a dinner party. The possibilities are endless! Enjoy!"
Tarah presents Paleo Pregnancy - First Trimester Recap posted at What I Gather, saying, "A recap of my first trimester and how it threw my diet, sleep and exercise for a major loop!"
Paul Jaminet presents A Tale of Recovery from Panic Disorder and OCD posted at Perfect Health Diet, saying, "A reader discusses her infection induced mental health problems, culminating in a devastating panic disorder, which was ultimately cured by an ancestral diet and antibiotics."
Suz Crawt presents Are We Too Developed? posted at The Paleo Network, saying, "Perhaps third world countries haven't got it so wrong after all?"
Brittanie presents 92% off 33 e-book Paleo Wellness Bundle posted at Three Diets. One Dinner, saying, "My friend Todd Dosenberry has come up with a brilliant package of Paleo wisdom that is perfect for all you New Years Paleo Newbies. If you have any confusion or concerns about the paleo lifestyle, here is your answer. Check out this fabulous bundle on sale for just a few days. I just downloaded my bundle and it really is an incredible education package that will help you on your paleo journey. I hope my book will be part of his next promotion when it launches in the fall! Happy Paleo reading!"
Michelle Norris presents Eclectic Kitchen Evolved™: Fancy Eggs – No Fuss posted at Eclectic Kitchen Evolved, saying, "Love a fabulous weekend brunch breakfast? But hate all the muss and fuss of it, don'tcha? Me, too! That's why I love this breakfast it is so flavorful and fancy looking but definitely not a lot of work. Easily adapts for a Whole 30!"
\section{Introduction}
Over the past several years, deep learning has outperformed many conventional computer vision techniques in areas such as image classification, segmentation, and tracking \cite{r1}, \cite{r2}, \cite{r3}, \cite{r4}, \cite{r5}, \cite{r6}. The Convolutional Neural Network (CNN) is one of the most famous deep learning architectures. It was designed in 1989 \cite{r7}, but its true effectiveness came to the surface when it was trained on more powerful machines with GPUs, leveraging large amounts of training data. Krizhevsky et al. \cite{r1} trained a large CNN architecture containing 8 layers and millions of parameters using the huge ImageNet dataset with 1 million training images. Over the past several years many modified and deeper CNN architectures have been proposed, which are used not only in the medical imaging domain but have been widely applied to other applications as well.
Computer vision based medical image segmentation methods can be divided into two categories, i.e., conventional medical image segmentation techniques and deep learning based methods. Some widely used conventional medical image segmentation methods include thresholding based methods \cite{r13}, \cite{r14}, \cite{r15}, region growing methods \cite{r16}, \cite{r17}, and clustering based methods \cite{r18}, \cite{r19}. Deep CNN models are mostly used for the task of image classification; however, in medical image analysis, image segmentation has its own significance. For instance, image segmentation is widely used in the localization of cancerous and defected regions in MRI, CT scan and ultrasound images. In medical image segmentation, CNN models are used along with cross entropy loss as a pixel-wise measure \cite{r8}. However, the most popular deep CNN architectures for medical image segmentation are based on an encoder-decoder design. The widely used models in this domain are the U-Net \cite{r6} and V-Net \cite{r9} architectures. U-Net is employed for the segmentation of biological microscopy images, and since in the medical domain training sets are not as large as in other computer vision areas, Ronneberger et al. \cite{r6} trained the U-Net model using a data augmentation strategy to leverage the available annotated images. The architecture of U-Net consists of two main parts, i.e., a contracting sub-net that encodes the semantics and context information, and an expanding sub-net that decodes the encoded information for the generation of segmentation maps. The contracting sub-net is based on down-sampling CNN blocks that extract features with $3 \times 3$ convolutions. The expanding sub-net is based on up-sampling CNN blocks which use deconvolution to increase the spatial dimensions of the image while reducing the number of channels. To leverage the context information encoded by the intermediate layers of the contracting sub-net, the encoded feature maps are concatenated with the feature maps from the intermediate layers of the deconvolutional CNN blocks of the expanding sub-net. Afterwards, a $1 \times 1$ convolution is applied on the feature maps obtained from the intermediate layers of the expanding sub-net in order to produce a segmentation map in which each pixel is classified according to the corresponding semantic class of the input image. The entire U-Net architecture was trained on a dataset containing 30 transmitted light microscopy images, and due to the efficient architectural design of this model, it won the ISBI cell tracking challenge 2015 by a significant margin.
Similarly, V-Net \cite{r9} is another widely used image segmentation network in medical image analysis; the main difference is that it is used for 3D medical image segmentation. Its authors proposed a loss function based on the Dice coefficient to overcome the problem of voxel imbalance between the foreground and background during network training. V-Net is trained end-to-end on MRI voxels containing prostate information, employing the Dice coefficient to infer the segmentation for the whole volume at once. Imran et al. \cite{r10} proposed a fast segmentation method known as Progressive Dense V-Net (PDV-Net) for the segmentation of pulmonary lobes from chest CT images. The PDV-Net architecture contains three dense feature blocks, which process the entire CT volume in order to generate the segmentation information in an automatic manner. As opposed to existing medical image segmentation methods which require prior information, PDV-Net eliminates the need for any user interaction in the form of providing prior information. Similarly, \cite{r11} implements a 3D-CNN encoder for lesion segmentation which combines the advantages of U-Net and CEN \cite{r12}. The 3D-CNN network consists of two branches, a conventional convolutional branch and a deconvolutional branch. The convolutional branch is based on convolutional and pooling layers, and the deconvolutional branch contains deconvolutional and unpooling layers.
In this paper, we present an architecture quite similar to the aforementioned networks. The main difference between our proposed method and the existing medical image segmentation techniques discussed above is that we combine the advantages of supervised learning with the self-supervised training strategy of a typical U-Net architecture. We argue that by explicitly providing a supervisory signal at the bottleneck layer of the encoder part of U-Net, the encoder (contracting branch) can encode more effective features than with a purely self-supervised training approach.
\begin{figure}[t]
\centering
\includegraphics[width=10.5cm, height=5cm]{medseg_idea_image.pdf}
\vspace{-0.58cm}
\caption{The main idea: our CNN network takes an input medical image, passes it through its intermediate layers, and produces a segmentation map using its decoder part.}
\label{fig:1}
\vspace{-0.5cm}
\end{figure}
\section{Network Architecture}
The overall framework of our proposed technique is shown in Figure \ref{fig:2}. The network consists of three parts, i.e., 1) an encoder part, 2) a bottleneck training part, and 3) a decoder part. The encoder part is based on the typical design of a convolutional neural network, containing convolutional blocks with $3 \times 3$ filters. Each convolutional block is followed by a rectified linear unit (ReLU) and a down-sampling layer performing $2 \times 2$ max pooling with stride 2. The down-sampling layer reduces the spatial size of the input, while the number of channels of the feature maps increases to encode more useful information. The bottleneck training part consists of two fully connected layers that predict the ground-truth segmentation map through a linear transformation; the input image and the predicted segmentation maps are registered. The decoder part of the network is designed with up-sampling deconvolutional blocks. We use $2 \times 2$ up-convolutions to increase the size of the feature maps in the intermediate deconvolutional layers. Following the skip-connection architecture of U-Net, we concatenate feature maps from the encoder layers with the corresponding layers in the decoder network. We then use $3 \times 3$ convolutional filters followed by a ReLU to incorporate non-linearity in this branch of the network.
\section{Results and Discussion}
The proposed CNN based method is evaluated using the criteria of sensitivity, specificity and accuracy, defined by the following formulas:
\begin{align}
Sensitivity = {}&N_{tp}/N_p\\
Specificity = {}&N_{tn}/N_n\\
Accuracy = {}&(N_{tp}+N_{tn})/(N_{tp}+N_{tn}+N_{fn}+N_{fp})
\label{eq:2}
\end{align}
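As a quick illustrative check of these formulas (the numbers below are invented for this example and are not taken from our experiments), suppose $N_{tp}=90$, $N_{fn}=10$, $N_{tn}=95$ and $N_{fp}=5$, so that $N_p=100$ and $N_n=100$. Then:
\begin{align*}
Sensitivity &= 90/100 = 0.90,\\
Specificity &= 95/100 = 0.95,\\
Accuracy &= (90+95)/(90+95+10+5) = 0.925.
\end{align*}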
Table \ref{table:1} shows the specificity, sensitivity and accuracy obtained by training and validating our proposed model on MRI and CT-scan images.
\begin{table}[t!]
\begin{center}
\begin{tabular}{|l| c| c|c|}
\hline
Data&Specificity&Sensitivity&Accuracy\\
\hline
\hline
MRI& 0.926 & 0.939 & 0.913 \\
\hline
CT Scan& 0.961 & 0.972& 0.976\\
\hline
\end{tabular}
\end{center}
\vspace{-0.3cm}
\caption{Specificity, sensitivity and accuracy of the proposed model on MRI and CT-scan images.}
\vspace{-0.1cm}
\label{table:1}
\vspace{-0.55 cm}
\end{table}
\begin{figure*}[t]
\centering
\includegraphics[width=12cm, height=9cm]{med_overall.pdf}
\vspace{-0.25cm}
\caption{The overall architecture of our proposed network: an input image, such as an MRI or CT-scan image, is fed to the CNN based network, which extracts context information in its intermediate layers; this encoding of the context information is enhanced by a bottleneck training layer. The decoder part of our network then uses the encoded information to generate the segmentation map, using skip connections from the encoder layers to the intermediate layers of the decoder.}
\vspace{-0.35cm}
\label{fig:2}
\vspace{-0.25cm}
\end{figure*}
\section{Conclusion}
In this paper we have presented a U-Net type architecture, based on convolutional neural networks, for medical image segmentation. Our proposed network has three parts, i.e., 1) an encoder part, 2) a bottleneck learning layer, and 3) a decoder part. The encoder part encodes the context information from the input image in its intermediate layers using CNN filters followed by the non-linearity of ReLU. The bottleneck layer is used to enhance the feature extraction capability of the encoder part through a fully supervised linear transformation based on fully connected layers. The FC layers in the bottleneck part of the network are used to predict the ground truth segmentation map using a linear transformation. The decoder part of our network is based on deconvolutional blocks, which increase the spatial dimensions of the feature maps and reduce their number of channels in the intermediate layers. To take full advantage of the information encoded in the intermediate layers of the encoder and to prevent the loss of information, we add skip connections, connecting the intermediate layers of the encoder with the intermediate layers of the decoder. Experimental results show that the proposed technique produces promising results on MRI and CT scan images.
\bibliographystyle{splncs04}
Shipping: We only ship to US addresses at this time. If we find that we have overcharged shipping on your order, you will be refunded the difference so that you only pay for actual shipping. If you are picking up locally, send a message either during checkout through PayPal in the comment/instruction section, or right after checkout on the order confirmation screen, and we will refund the cost of shipping. Only candles may be picked up locally.
Tax: Illinois customers will be charged sales tax at a rate of 6.25%.
Return Policy: We accept returns for defective merchandise only. Please contact us if you are unsatisfied with your order for any reason.
class ListStyleAdminMixin(object):
def get_row_css(self, obj, index):
return ''
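As a minimal, self-contained sketch of how such a mixin hook might be used (the subclass and attribute names below are hypothetical, and the admin machinery that would normally call `get_row_css` per row is omitted):

```python
class ListStyleAdminMixin(object):
    # Base hook: by default no extra CSS class is applied to a row.
    def get_row_css(self, obj, index):
        return ''


class HighlightErrorsMixin(ListStyleAdminMixin):
    # Hypothetical override: flag rows whose object reports a failure.
    def get_row_css(self, obj, index):
        if getattr(obj, 'failed', False):
            return 'row-error'
        return super(HighlightErrorsMixin, self).get_row_css(obj, index)


class FakeObj(object):
    # Stand-in for a model instance, for demonstration only.
    def __init__(self, failed):
        self.failed = failed


mixin = HighlightErrorsMixin()
print(mixin.get_row_css(FakeObj(True), 0))   # row-error
print(mixin.get_row_css(FakeObj(False), 1))  # empty string
```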
Q: When I run my program it keeps throwing these errors:
Warning: Missing argument 1 for MysqlDB::__construct(), called in C:\xampp\htdocs\ripplezsolution\index.php on line 9 and defined in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 10
Warning: Missing argument 2 for MysqlDB::__construct(), called in C:\xampp\htdocs\ripplezsolution\index.php on line 9 and defined in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 10
Warning: Missing argument 3 for MysqlDB::__construct(), called in C:\xampp\htdocs\ripplezsolution\index.php on line 9 and defined in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 10
Warning: Missing argument 4 for MysqlDB::__construct(), called in C:\xampp\htdocs\ripplezsolution\index.php on line 9 and defined in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 10
Notice: Undefined variable: host in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 11
Notice: Undefined variable: username in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 11
Notice: Undefined variable: password in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 11
Notice: Undefined variable: db in C:\xampp\htdocs\ripplezsolution\phpinclude\include\MySqlDb.php on line 11
This is my MysqlDB.php code
<?php
class MysqlDB {

    protected $_mysql;
    protected $_where = array();
    protected $_query;
    protected $_paramTypeList;

    public function __construct($host, $username, $password, $db) {
        $this->_mysql = new mysqli($host, $username, $password, $db)
            or die('There was a problem connecting to the database');
    }

    public function query($query)
    {
        $this->_query = filter_var($query, FILTER_SANITIZE_STRING);
        $stmt = $this->_prepareQuery();
        $stmt->execute();
        $results = $this->_dynamicBindResults($stmt);
        return $results;
    }

    /**
     * A convenient SELECT * function.
     *
     * @param string $tableName The name of the database table to work with.
     * @param int $numRows The number of rows total to return.
     * @return array Contains the returned rows from the select query.
     */
    public function get($tableName, $numRows = NULL)
    {
        $this->_query = "SELECT * FROM $tableName";
        $stmt = $this->_buildQuery($numRows);
        $stmt->execute();
        $results = $this->_dynamicBindResults($stmt);
        return $results;
    }

    /**
     * @param string $tableName The name of the table.
     * @param array $insertData Data containing information for inserting into the DB.
     * @return boolean Boolean indicating whether the insert query was completed successfully.
     */
    public function insert($tableName, $insertData)
    {
        $this->_query = "INSERT into $tableName";
        $stmt = $this->_buildQuery(NULL, $insertData);
        $stmt->execute();
        if ($stmt->affected_rows)
            return true;
    }

    public function update($tableName, $tableData)
    {
        $this->_query = "UPDATE $tableName SET ";
        $stmt = $this->_buildQuery(NULL, $tableData);
        $stmt->execute();
        if ($stmt->affected_rows)
            return true;
    }

    public function delete($tableName) {
        $this->_query = "DELETE FROM $tableName";
        $stmt = $this->_buildQuery();
        $stmt->execute();
        if ($stmt->affected_rows)
            return true;
    }

    public function where($whereProp, $whereValue)
    {
        $this->_where[$whereProp] = $whereValue;
    }

    protected function _determineType($item)
    {
        switch (gettype($item)) {
            case 'string':
                return 's';
            case 'integer':
                return 'i';
            case 'blob':
                return 'b';
            case 'double':
                return 'd';
        }
    }

    protected function _buildQuery($numRows = NULL, $tableData = false)
    {
        $hasTableData = null;
        if (gettype($tableData) === 'array') {
            $hasTableData = true;
        }

        // Did the user call the "where" method?
        if (!empty($this->_where)) {
            $keys = array_keys($this->_where);
            $where_prop = $keys[0];
            $where_value = $this->_where[$where_prop];

            // If update data was passed, filter through
            // and create the SQL query, accordingly.
            if ($hasTableData) {
                $i = 1;
                $pos = strpos($this->_query, 'UPDATE');
                if ($pos !== false) {
                    foreach ($tableData as $prop => $value) {
                        // Determines what data type the item is, for binding purposes.
                        $this->_paramTypeList .= $this->_determineType($value);

                        // Prepares the rest of the SQL query.
                        if ($i === count($tableData)) {
                            $this->_query .= $prop . " = ? WHERE " . $where_prop . "= " . $where_value;
                        } else {
                            $this->_query .= $prop . ' = ?, ';
                        }
                        $i++;
                    }
                }
            } else {
                $this->_paramTypeList = $this->_determineType($where_value);
                $this->_query .= " WHERE " . $where_prop . "= ?";
            }
        }

        if ($hasTableData) {
            $pos = strpos($this->_query, 'INSERT');
            if ($pos !== false) {
                $keys = array_keys($tableData);
                $values = array_values($tableData);
                $num = count($keys);

                foreach ($values as $key => $val) {
                    $values[$key] = "'{$val}'";
                    $this->_paramTypeList .= $this->_determineType($val);
                }

                $this->_query .= '(' . implode(', ', $keys) . ')';
                $this->_query .= ' VALUES(';
                while ($num !== 0) {
                    ($num !== 1) ? $this->_query .= '?, ' : $this->_query .= '?)';
                    $num--;
                }
            }
        }

        if (isset($numRows)) {
            $this->_query .= " LIMIT " . (int) $numRows;
        }

        $stmt = $this->_prepareQuery();

        if ($hasTableData) {
            $args = array();
            $args[] = $this->_paramTypeList;
            foreach ($tableData as $prop => $val) {
                $args[] = &$tableData[$prop];
            }
            call_user_func_array(array($stmt, 'bind_param'), $args);
        } else {
            if ($this->_where)
                $stmt->bind_param($this->_paramTypeList, $where_value);
        }

        return $stmt;
    }

    protected function _dynamicBindResults($stmt)
    {
        $parameters = array();
        $results = array();
        $row = array();
        $meta = $stmt->result_metadata();

        while ($field = $meta->fetch_field()) {
            $parameters[] = &$row[$field->name];
        }

        call_user_func_array(array($stmt, 'bind_result'), $parameters);

        while ($stmt->fetch()) {
            $x = array();
            foreach ($row as $key => $val) {
                $x[$key] = $val;
            }
            $results[] = $x;
        }
        return $results;
    }

    protected function _prepareQuery()
    {
        if (!$stmt = $this->_mysql->prepare($this->_query)) {
            trigger_error("Problem preparing query", E_USER_ERROR);
        }
        return $stmt;
    }

    public function __destruct()
    {
        $this->_mysql->close();
    }
}
?>
and I'm calling the insert() function from index.php
<?php
ob_start();
session_start();
require_once("phpinclude/include/membersite_config.php");
require_once("phpinclude/include/MySqlDB.php");
$DB = new MysqlDB('172.90.13.97','king','mi*****hhh','kxxxx_database');
if (isset($_GET['action'])){$action = htmlentities($_GET['action']);}
else{$action = NULL;}
$mysqldb = new MysqlDB();
?>
<?php if($action=='add_cart'){?>
<?php $data=array($arrival, $departure, $result, $roomID, $category_price); $table='tb_cart';?>
<?php $this->mysqldb->insert($table, $data); ?>
<?php }?>
A: Problem is in this line
$mysqldb = new MysqlDB();
The constructor requires arguments which are not passed. You need to pass $host, $username, $password, $db to the constructor.
Your code actually makes no sense. You could use $DB instead of creating a new object. You also use $this->mysqldb outside of any object context. There are plenty of errors in your code.
To fix:

1. Remove this line: $mysqldb = new MysqlDB();
2. Change <?php $this->mysqldb->insert($table, $data); ?> to $DB->insert($table, $data);
The script should look roughly like:
<?php
ob_start();
session_start();

require_once("phpinclude/include/membersite_config.php");
require_once("phpinclude/include/MySqlDB.php");

$DB = new MysqlDB('172.90.13.97','king','mi*****hhh','kxxxx_database');

$action = !empty($_GET['action']) ? htmlentities($_GET['action']) : null;

if ($action == 'add_cart') {
    $data = array(
        'arrival' => $arrival,
        'departure' => $departure,
        'result' => $result,
        'roomID' => $roomID,
        'category_price' => $category_price
    );
    $DB->insert('tb_cart', $data);
}
Monday Movies ~ 15th June 2020
fragglerocking June 15, 2020 June 15, 2020 Mondays, Movie Reviews, movies
Our first offering this week was Phil's choice. Due to his work shifts we couldn't do the Thursday movie so I gave over my Saturday slot, and he chose a movie he'd seen advertised on Netflix starring Robert De Niro and Al Pacino. Advertised as a 'buddy cop crime thriller', Phil thought it would be a good fun light-relief kind of movie. Righteous Kill (2008) was directed by Jon Avnet, whom I'd not heard of (and now I know why). Given two great actors and a generic twisty plot, Avnet manages to make a car crash of a movie.
Pacino and De Niro are always good to watch, but the novelty of that soon wears off. I'd guessed the 'twist' in the plot in the first 20 minutes, but what was going on and when was all over the place, with poor continuity and flow, and the ending is really naff too. I won't do spoilers in case anyone is daft enough to want to see this. It has some good actors in it, Carla Gugino as De Niro's love interest, Brian Dennehy as the two cops' boss, and 50 Cent as a nightclub owner/drug dealer; I can only assume they all did it for the money. My favourite critic review for this ~ "The entire movie is one big build-up to a twist that, while not exactly cheating, plays an awfully cheap trick. To get there, writer Russel Gewirtz and director John Avnet sacrifice mystery, suspense, sensible editing and everything else one expects to find in a police thriller just to keep the audience off-guard. It's not worth it, and the first real pairing of De Niro and Pacino is utterly wasted". – Ken Fox of TV Guide
Anyways Phil apologised 🤣 and we'll never speak of it again.
So Phil's at work today, and it's Sunday which is ironing day so I picked a movie to watch whilst doing it. I'd wanted to see Tom Hardy in 'Locke' (2013) for a while but Phil didn't seem overly bothered (one man in a car doesn't sound exciting really!) and at 1 hour 24 mins long, it's a great fit for a pile of ironing!
The film is written and directed by Steven Knight (who I have heard of! 🙂 ) and takes place in a BMW X5, driven by Hardy from Birmingham to London. Hardy is the only person on screen for the whole movie. He plays Ivan Locke, a construction supervisor for a company that is building a huge building, with a concrete pour due at 5am. At the same time, a lady, Bethan, with whom Locke had a one-night stand seven months before, has gone into premature labour, and in spite of his responsibilities at home and work, he decides to drive to London to be with her for the birth, as his own father abandoned him as a child.
He has 36 phone calls during the journey, with his boss Gareth (Ben Daniels) (who comes up as 'Bastard' on his screen when he rings 🙂 ), with his backup colleague, cider drinking Donal (Andrew Scott) his distressed wife Katrina (Ruth Wilson) sons Eddie and Sean (Tom Holland & Bill Milner) and the highly strung Bethan (Olivia Coleman). During the course of the journey he loses his job, his marriage, and his home, and has to coach Donal regarding the concrete pour in between. I'll leave it there so as not to do the spoiler thing.
Tom Hardy is far removed from his gangster/action man/bad guy roles; here he is a man whose life is going tits up and he's trying to juggle all the pieces and hold it all together, and he does it so well. It's a wonderful, nuanced performance, and it was easy to forget Hardy and feel for Ivan. I was in tears at one point.
The movie only took 8 nights to shoot, the car being pulled down the M6 & M1 on a low flatbed trailer, with the phone calls being done in real time, the road and car noise included, and the other actors calling in from a conference room that served as the multiple "locations" of the various characters.
I can't find a single bad review of this movie, and my favourite one is "There are films to see on huge screens, but this is one that almost cries out for a small cinema, surrounded by total blackness. It's a daring experiment brilliantly executed, with Tom Hardy giving one of the best performances of his career".– Ollie Richards from Empire magazine.
DeNiro, Hardy, Locke, Monday, Monday Movies, Movie review, movies, Pacino, Righteous Kill
Previous Throwback Thursday
Next Monday Movies – 22/06/2020
beetleypete says:
I'm with you on Locke. I would also give it five stars if I gave stars. Hardy manages to pull off an idea that could have gone so badly wrong. And the 'phone voices' are an example of superb off-screen acting too. I reviewed it in 2016, and you commented.
https://beetleypete.com/2016/08/28/just-been-watching-18/
I watched 'Righteous Kill' ages ago. I thought it was simply an excuse to get the two Hollywood 'big guns' together, and didn't really work. But I didn't hate it, and liked it a bit more than you did, but not much more. 🙂
Cheers, Pete.
Cheers Pete!
I'm not familiar with either movie, Fraggle. Thanks for the warning about the first one. 😉
Locke sounds interesting as a concept.
Ugh… ironing… I used to make sure everything was perfectly pressed. Now I figure the wrinkles in my clothes will distract people from the ones on my face. LOL. Hugs on the wing.
Cheers Teagan!
Locke sounds like something I might enjoy, especially if Olivia Coleman's in it. Did none of the reviews make jokes about the phoned-in performances? It seems like a bit of a gift to me.
No funnily enough. Although you don't see the people he's talking to they do a cracking job acting on the phone.
By Hook Or By Book ~ Book Reviews, News, & Other Stuff says:
Both of these are new to me and while I think I'll skip the first one, you've twisted my arm with Locke.😁
Cool, hope you enjoy it!
The Association of Professional Photographers of the Czech Republic (Czech: Asociace profesionálních fotografů České republiky, abbreviated APF ČR) is an organization bringing together professional photographers from the Czech Republic.
APF ČR is a member of the Federation of European Photographers.
History and activities of the association
The association was founded on 22 January 1990. Its founders included former members of the Fund of Fine Artists (Fond výtvarných umělců) and the Union of Fine Artists (Svaz výtvarných umělců), so that it could represent Czech professional photographers under the new conditions. The organizational structure of the association consists of a five-member board of directors and a three-member supervisory committee. The association also creates auxiliary bodies, such as an admissions committee or the jury awarding the Osobnost české fotografie (Personality of Czech Photography) prize. The association convenes a General Assembly of its members once a year. Its main income comes from members' contributions. In 2017 the association had more than 100 members. The association defends the interests of its members, obtains grants for joint projects and exhibitions, issues recommendations on the fee policy of professional photographers, and maintains communication and cooperation with similar international organizations around the world, the most important of which are the Federation of European Photographers (FEP) and the World Council of Professional Photographers (WCPP).
The association organizes exhibitions of its members' work, which are usually monothematic and focused on genres such as landscape, nude, architecture, still life, portrait and the like. In 2018 the exhibition Tělo jako znak (The Body as a Sign) took place; in previous years the exhibitions Detail, Výběr I, Výběr II, Individuality v dokumentu, Praha, Reklama and others were held. Catalogues were published for all of the exhibitions. The association also publishes or sponsors books about Czech photographers and photographic practice.
Jednou z nových činností asociace je vydávání Almanachů tj. uměleckých signovaných tisků v limitované sérii. V úvodním textu pro první Almanach napsal významný český pedagog, prof. Miroslav Vojtěchovský: "Almanach prací členů Asociace profesionálních fotografů České republiky nevnímáme jako mezník, ale spíše pokus navázat na někdejší znamenitou tradici, na léta dynamického rozvoje fotografického hnutí v naší zemi, jako snahu prokázat, že fotografové soustředění v "Asociaci" ani přes jejich momentální komplikovanou profesní situaci nezapomínají na to, že je k fotografii přivedla touha po sebevyjádření, touha přistupovat k fotografii jako k modernímu vizuálnímu prostředku mezilidské komunikace."
Almanach I byl vydán v návaznosti na výstavu asociace Výběr I v roce 2013. Na samostatných listech je v něm zastoupeno svými pracemi 25 fotografů, vybraných Petrem Zhořem a Rudolfem Jungem. Almanach II navazuje na výstavu asociace Detail z roku 2015. Jsou zde práce 20 předních českých fotografů, vybraných Zdeňkem Lhotákem a Janem Neubertem. Poslední Almanach III je vydán v roce 2018 a navazuje na asociační výstavu Tělo jako znak. Dvacet šest autorů pod výběrem Vladimíra Kozlíka a Zdeňka Lhotáka je zastoupeno originálními, signovanými tisky svých prací. Koordinátor projektu Marian Beneš, grafická úprava Jakub Konupka, tisk FOMEI Collection Baryta MONO 290. Jednotlivé fotografie jsou adjustovány ve speciálně vyrobených deskách.
Zastoupení autoři: Marian Beneš, Vladimír Birgus, Magdalena Bláhová, Dorothea Bylica, Karel Došek, Bohumil Eichler, Miloš Fic, Jaroslav Fišer, Hana Hamplová, Jiří Hanke, Jaroslav Hejzlar, František Heusler, Hana Hrnčířová, Dominika Hrubá, Bořivoj Hořínek, Josef Husák, Ondřej Chmel, Blanka Chocholová, František Chrástek, Jiří Jírů, Rudolf Jung, Aleš Jungmann, Daniel Kaifer, Olga Kalašová, Jiří Kovanic, Vladimír Kozlík, Ivan Král, Dominika Kubišová, Mária Kudasová, Zdeněk Lhoták, Vít Mádr, Pavel Mára, Petr Moško, Jan Neubert, Dagmar Pavlíková, Jan Pohribný, Stanislav Pokorný, Rudo Prekop, Pavel Rydl, Pavel Rychtařík, Roman Sejkot, Petra Skoupilová, Hana Major Sládková, Vasil Stanko, Milan Šusta, Jiří Tondl, Milena Valušková, Miroslav Vojtěchovský, Jiří Všetečka, Petr Zhoř.
Membership
Any photographer for whom photography is a source of income, or who has completed higher professional or university education in photography, may become a member of the association. Apart from graduates of specialized photography schools, an applicant for membership must present the admissions jury with a coherent set of photographs that, as a whole, has a clear expressive value. The photographs in the set must be interconnected in a certain way, and the set should build towards a climax. The images must be numbered and arranged so that the author's intent and intended theme are apparent. Not only the photographic quality is evaluated, but also the mounting of the photographs, the overall formal quality of the work, and the professional craftsmanship and artistic impact of the whole set. Professionalism lies not only in taking the pictures; their final processing also plays an important role. The committee evaluates the quality of the work and, above all, the author's distinctive signature, the clear added value that the author has personally invested in the work.
Personality of Czech Photography award
Since 2003 the APF has awarded the "Personality of Czech Photography" prize, intended for a photographer, photography teacher, curator, photography theorist, gallerist or publisher who, during the calendar year, has fundamentally contributed to the quality, development or promotion of Czech creative photography at home or abroad. It also awards the prize "Personality of Czech Photography – for long-term contribution to photography", intended for a person who has contributed fundamentally to the quality, development or promotion of Czech creative photography at home or abroad over the long term. In 2017 the award was extended with the categories Publication of the Year and Calendar of the Year.
The first Personality of Czech Photography award was won in 2003 by Josef Koudelka; for 2004 the jury chose the curator Jaroslav Anděl, and for 2005 the curatorial duo Vladimír Birgus and Jan Mlčoch. In 2006 the prize was divided for the first time into two categories that better cover the specific areas of photography: the Personality of Czech Photography award, won by the photographer Jiří Stach, and the Personality of Czech Photography – for long-term contribution to photography award, won by the photography historian Anna Fárová. In 2007 Jindřich Štreit and Miroslav Vojtěchovský were honoured, in 2008 Dana Kyndrová and Pavel Dias, in 2009 Jan Reich and Jiří Všetečka, and in 2010 Viktor Kolář and Antonín Dufek. In 2011 the awards went to Jan Pohribný and Jiří Hanke, in 2012 to Dita Pepe and Ivan Pinkava, in 2013 to Eva Fuková and Karel Kerlický, and in 2014 to Jovan Dezort, with Antonín Kratochvíl honoured for long-term contribution. For 2015 the winners were doc. Pavel Mára and, for long-term contribution, Zbyněk Illek, long-time director of the G4 gallery in Cheb. For 2016 the prizes went to Jaroslav Kučera and, for long-term contribution, Jiří Havel; in the Publication category the jury chose a monograph of Jiří Bartoš, and in the Calendar of the Year category Jan Pohribný's calendar for the Panflex company. For 2017 the awards went to Libuše Jarcovjaková and Markéta Luskačová; in the Publication of the Year category the book Válka za studena by Josef Moucha was chosen, and the Calendar of the Year prize went to the calendar "3 x 4", created by students of the University of Creative Communication (VŠKK) and the Michael secondary school of advertising and art.
Qualified European Photographer (QEP) title
At the European level, the Federation of European Photographers (FEP) works to unify the qualifications of professional photographers by category. It has created a system of qualification certificates at three levels: European Photographer (EP), Qualified European Photographer (QEP) and Master Qualified European Photographer (MQEP). The individual national European associations may enter their members' work in the selection procedure for one of the above titles. The selection takes place twice a year. It requires a monothematic set of 12 photographs, 50 x 40 cm, that comprehensively represents the professional and creative abilities of the applicant. The application is preceded by a recommending assessment of the set at the national association level. According to the official FEP website, by 2017 Czech photographers had obtained two MQEP titles and 17 QEP titles, and 10 members of the association have their sets of photographs published on the FEP website.
World Photographic Cup
The Federation of European Photographers (FEP), Professional Photographers of America (PPA), United Asian Professional Photography (UAPP) and the Australian Institute of Professional Photography (AIPP) annually announce the prestigious worldwide World Photographic Cup competition for the best national team of professional photographers.
The competition categories are: Portrait, Wedding, Commercial photography (including advertising, architecture, industry and fashion), Illustrative/digital art, Reportage/photojournalism, and Nature/landscape and wildlife.
In the historically first edition of the competition (WPC 2014), announced in 2013, Miloš Fic became a finalist and subsequently a silver medallist for the Czech Republic in the Reportage category. In WPC 2015, Otakar Metlička became a finalist and finished 4th in the Landscape/Wildlife category, and Miloš Fic became a finalist and finished 7th in the Reportage category; the Czech team received a Top 10 Team Certificate for 9th place overall. In WPC 2016, Otakar Metlička became a finalist and then bronze medallist in the Landscape/Wildlife category, and Václav Sojka became a finalist and finished 6th in the same category; the Czech team received a Top 10 Team Certificate for 10th place overall. In 2017 Oldřich Bubák and Miloš Fic both became finalists in the Reportage/Photojournalism category; Oldřich Bubák subsequently won a bronze medal in Yokohama, Japan. In 2018 a photograph by Ladislav Kamarád was included among the 10 best photographs in the Nature category, and the author also received a national prize (Nation Awards / Best of Team).
Czech representation on the international jury: Jan Pohribný (2017), Zdeněk Lhoták (2018).
Marian Beneš has been the captain of the Czech team since the competition was founded in 2013.
FEP European Professional Photographer of the Year Awards
The international competition for the best professional photographer in Europe consists of 9 categories: Commercial, Fashion, Sports, Reportage/Photojournalism, Illustration/Digital Art/Fine Art, Wedding, Portrait, Landscape, and Wildlife. Student and Young Photographers is a special category.
Awarded authors from the Czech Republic
2010 Karel Beneš, Jiří Stránský
2011 Ondřej Prosický, Jiří Stránský
2012 Marian Beneš, Eliška Fischerová, Jiří Jiroutek, Roman Slavík, Václav Sojka, Jiří Stránský, Radek Štandera, Romana Wyllie
2014 Marian Beneš, Václav Sojka, Jan Škop
2015 Roman Slavík
2016 Miloš Fic, Marek Musil
2017 Patrik Bartuška, Hiep Duong Chi, Rastislav Marguš
2018 Patrik Bartuška, Michal Dobeš, Ladislav Kamarád, Rastislav Marguš, Ondřej Prosický, Roman Slavík, Jan Šmíd
Czech representation on the international jury: Marian Beneš (2009, 2017), Miloš Fic (2018).
FEP Emerging Talent Award
Since 2013 the Federation of European Photographers has organized the FEP Emerging Talent Award (FETA) competition for young photographers. The competition is intended for final-year students of photography schools and aims to highlight the exceptional talents who are entering professional life and becoming professionals.
Czech students have received the following awards: 2016 Martin Vočadlo, honourable mention; 2015 Lenka Bukačová, honourable mention; 2014 Marek Štim, overall winner; 2013 Anna Rasmussen, overall winner, Patricie Behenská, honourable mention, and Pavlína Soukupová, honourable mention.
The awarded students come from the studios of prof. Mgr. Miroslav Vojtěchovský, QEP, and MgA. Marian Beneš, Ph.D., QEP.
Links
References
Literature
VARIOUS AUTHORS. Česká fotografie 1989–1994. 1st edition. Prague: KUKLIK for APF ČR, 1994. 75 pp.
VOJTĚCHOVSKÝ Miroslav, MATĚJŮ Věra. Výběr. APF ČR, 2007. 74 pp.
NEUBERT Jan, LHOTÁK Zdeněk. Detail 27 x. APF ČR, 2015. 25 pp.
VARIOUS AUTHORS. 103 osobností české fotografie. Joyra, 2017.
KOZLÍK Vladimír, LHOTÁK Zdeněk. Tělo jako znak. APF ČR, 2018. 48 pp.
External links
Public register and collection of documents
Asociace profesionálních fotografů České republiky
europeanphotographers.eu
\section{Introduction}
Possessing the capability of reading text from scene images is indispensable to artificial intelligence~\cite{long2020scene,wan2020vocabulary}. To this end, early attempts regard characters as meaningless symbols and recognize the symbols by classification models~\cite{wang2011end, jaderberg2016reading}. However, when confronted with challenging environments such as occlusion, blur and noise, these models falter because visual discrimination alone is insufficient. Fortunately, as text carries rich linguistic information, characters can be reasoned about from their context. Therefore, a number of methods~\cite{jaderberg2014deep,jaderberg2015deep,qiao2020seed} turn their attention to language modeling and achieve clear improvement.
However, how to effectively model the linguistic behavior in human reading is still an open problem. From observations in psychology, we can make three assumptions about human reading, namely that language modeling is autonomous, bidirectional and iterative: 1) as both deaf-mute and blind people can have fully functional vision and language separately, we use the term \emph{autonomous} to denote the independence of learning between vision and language. The term \emph{autonomous} also implies a good interaction between vision and language: independently learned language knowledge can contribute to the recognition of characters in vision. 2) The act of reasoning about character context behaves like a cloze task, since illegible characters can be viewed as blanks. Thus, a prediction can be made using the cues of legible characters on both the left and right sides of the illegible characters simultaneously, which corresponds to the \emph{bidirectional} principle. 3) The \emph{iterative} principle describes that, under challenging environments, humans adopt a progressive strategy to improve prediction confidence by iteratively correcting the recognized results.
Firstly, applying the \textbf{autonomous} principle to scene text recognition~(STR) means that recognition models should be decoupled into a vision model~(VM) and a language model~(LM), and the sub-models could serve as functional units independently and be learned separately. Recent attention-based methods typically design LMs based on RNNs or Transformer~\cite{vaswani2017attention}, where the linguistic rules are learned \emph{implicitly} within a coupled model~\cite{lee2016recursive, shi2018aster, sheng2019nrtr} (Fig.~\ref{fig:overall}a). Nevertheless, whether and how well such LMs learn character relationships is unknowable. Besides, this kind of method cannot capture rich prior knowledge by directly pre-training the LM on large-scale unlabeled text.
Secondly, compared with unidirectional LMs~\cite{sundermeyer2012lstm}, LMs following the \textbf{bidirectional} principle capture twice the amount of information. A straightforward way to construct a bidirectional model is to merge a left-to-right model and a right-to-left model~\cite{peters2018deep, devlin2018bert}, either at probability level~\cite{wang2020decoupled,shi2018aster} or at feature level~\cite{yu2020towards} (Fig.~\ref{fig:overall}e). However, such ensembles are strictly less powerful, as their language features are in fact unidirectional \emph{representations}. Also, the ensemble models are twice as expensive in both computation and parameters. A recent striking work in NLP is BERT~\cite{devlin2018bert}, which introduces a deep bidirectional representation learned by masking text tokens. Directly applying BERT to STR would require masking all the characters within a text instance, which is extremely expensive since only one character can be masked at a time.
Thirdly, LMs executed with the \textbf{iterative} principle can refine predictions from visual and linguistic cues, which is not explored in current methods. The canonical way to execute an LM is auto-regression~\cite{wang2020decoupled,cheng2017focusing,wojna2017attention} (Fig.~\ref{fig:overall}d), in which recognition errors accumulate as noise and are taken as input for the following predictions. To adapt Transformer architectures, ~\cite{lyu20192d,yu2020towards} give up auto-regression and adopt parallel-prediction (Fig.~\ref{fig:overall}e) to improve efficiency. However, noise input still exists in parallel-prediction, where errors in the VM output directly harm LM accuracy. In addition, parallel-prediction in SRN~\cite{yu2020towards} suffers from the unaligned-length problem: SRN struggles to infer correct characters if the text length is wrongly predicted by the VM.
Considering the deficiencies of current methods from the aspects of internal interaction, feature representation and execution manner, we propose ABINet guided by the principles of \emph{Autonomous}, \emph{Bidirectional} and \emph{Iterative}. Firstly, we explore a decoupled method~(Fig.~\ref{fig:overall}b) by blocking gradient flow (BGF) between VM and LM, which enforces LM to learn linguistic rules explicitly. Besides, both VM and LM are autonomous units and could be pre-trained from images and text separately. Secondly, we design a novel bidirectional cloze network (BCN) as the LM, which eliminates the dilemma of combining two unidirectional models~(Fig.~\ref{fig:overall}c). The BCN is jointly conditioned on both left and right context, by specifying attention masks to control the accessing of both side characters. Also, accessing across steps is not allowed to prevent leaking information. Thirdly, we propose an execution manner of iterative correction for LM~(Fig.~\ref{fig:overall}b). By feeding the outputs of ABINet into LM repeatedly, predictions can be refined progressively and the unaligned-length problem could be alleviated to a certain extent. Additionally, treating the iterative predictions as an ensemble, a semi-supervised method is explored based on self-training, which exploits a new solution toward human-level recognition.
Contributions of this paper mainly include: 1) we propose autonomous, bidirectional and iterative principles to guide the design of LM in STR. Under these principles the LM is a functional unit, which is required to extract bidirectional representation and correct prediction iteratively. 2) A novel BCN is introduced, which estimates the probability distribution of characters like cloze tasks using bidirectional representation. 3) The proposed ABINet achieves state-of-the-art (SOTA) performance on mainstream benchmarks, and the ABINet trained with ensemble self-training shows promising improvement in realizing human-level recognition.
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{figs/overall-new}
\caption{(a) Coupled language model. (b) Our autonomous language model with iterative correction. (c) Our bidirectional structure. (d) Unidirectional RNN in auto-regression. (e) Ensemble of two unidirectional Transformers in parallel-prediction.}
\label{fig:overall}
\end{center}
\vspace{-2.5em}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=1.0\textwidth]{figs/framework-new}
\caption{A schematic overview of ABINet.}
\label{fig:framework}
\end{center}
\vspace{-2em}
\end{figure*}
\section{Related Work}
\subsection{Language-free Methods}
Language-free methods generally utilize visual features without considering the relationship between characters, such as CTC-based~\cite{graves2006connectionist} and segmentation-based~\cite{li2017fully} methods. The CTC-based methods employ a CNN to extract visual features and an RNN to model the feature sequence. The CNN and RNN are then trained end-to-end using the CTC loss~\cite{shi2016end,he2016reading,su2017accurate,hu2020gtc}. The segmentation-based methods apply an FCN to segment characters at pixel level. Liao~\etal recognize characters by grouping the segmented pixels into text regions. Wan~\etal~\cite{wan2019textscanner} propose an additional order segmentation map which transcribes characters in the correct order. Lacking linguistic information, the language-free methods cannot satisfactorily handle recognition in low-quality images.
\subsection{Language-based Methods}
\vspace{-0.5em}
\paragraph{Internal interaction between vision and language.} In some early works, bags of $N$-grams of text string are predicted by a CNN which acts as an explicit LM~\cite{jaderberg2015deep,jaderberg2014deep,jaderberg2014synthetic}. After that the attention-based methods become popular, which implicitly models language using more powerful RNN~\cite{lee2016recursive,shi2018aster} or Transformer~\cite{wang2019simple,sheng2019nrtr}. The attention-based methods follow encoder-decoder architecture, where the encoder processes images and the decoder generates characters by focusing on relevant information from 1D image features~\cite{lee2016recursive,shi2016robust,shi2018aster,cheng2017focusing,cheng2018aon} or 2D image features~\cite{yang2017learning,wojna2017attention,liao2019scene, li2019show}. For example, R$^2$AM~\cite{lee2016recursive} employs recursive CNN as a feature extractor and LSTM as a learned LM implicitly modeling language in character-level, which avoids the use of $N$-grams. Further, this kind of methods is usually boosted by integrating a rectification module~\cite{shi2018aster,zhan2019esir,yang2019symmetry} for irregular images before feeding the images into networks. Different from the methods above, our method strives to build a more powerful LM by explicitly language modeling. In attempting to improve the language expression, some works introduce multiple losses where an additional loss comes from semantics~\cite{qiao2020seed, lyu20192d, yu2020towards, fang2018attention}. Among them, SEED~\cite{qiao2020seed} proposes to use pre-trained FastText model to guide the training of RNN, which brings extra semantic information. We deviate from this as our method directly pre-trains LM in unlabeled text, which is more feasible in practice.
\vspace{-1.3em}
\paragraph{Representation of language features.} The character sequences in attention-based methods are generally modeled in a left-to-right way~\cite{lee2016recursive, shi2016robust, cheng2017focusing, wan2019textscanner}. For instance, Textscanner~\cite{wan2019textscanner} inherits the unidirectional model of attention-based methods. Differently, it employs an additional position branch to enhance positional information and mitigate misrecognition in contextless scenarios. To utilize bidirectional information, methods like~\cite{graves2008novel, shi2018aster, wang2020decoupled, yu2020towards} use an ensemble of two unidirectional models. Specifically, to capture global semantic context, SRN~\cite{yu2020towards} combines features from a left-to-right and a right-to-left Transformer for further prediction. We emphasize that the ensemble bidirectional model is intrinsically a unidirectional feature representation.
\vspace{-1.3em}
\paragraph{Execution manner of language models.} Currently, the network architectures of LMs are mainly based on RNN and Transformer~\cite{vaswani2017attention}. The RNN-based LM is usually executed in auto-regression~\cite{wang2020decoupled,cheng2017focusing,wojna2017attention}, which takes the prediction of last character as input. Typical work such as DAN~\cite{wang2020decoupled} obtains the visual features of each character firstly using proposed convolutional alignment module. After that GRU predicts each character by taking the prediction embedding of the last time step and the character feature of the current time step as input. The Transformer-based methods have superiority in parallel execution, where the inputs of each time step are either visual features~\cite{lyu20192d} or character embedding from the prediction of visual feature~\cite{yu2020towards}. Our method falls into parallel execution, but we try to alleviate the issue of noise input existing in parallel language model.
\section{Proposed Method}
\subsection{Vision Model}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figs/vision}
\caption{Architecture of vision model.}
\label{fig:vision}
\end{center}
\vspace{-2em}
\end{figure}
The vision model consists of a backbone network and a position attention module (Fig.~\ref{fig:vision}). Following previous methods, ResNet\footnote{There are 5 residual blocks in total and down-sampling is performed after the 1st and 3rd blocks.}~\cite{shi2018aster, wang2020decoupled} and Transformer units~\cite{yu2020towards, lyu20192d} are employed as the feature extraction network and the sequence modeling network. For an image $\bm{x}$ we have:
\begin{equation}
\mathbf{F}_b = \mathcal{T}(\mathcal{R}(\bm{x})) \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C},
\end{equation}
where $H,W$ are the size of $\bm{x}$ and $C$ is feature dimension.
The module of position attention transcribes visual features into character probabilities in parallel, which is based on the query paradigm~\cite{vaswani2017attention}:
\begin{align}
\mathbf{F}_v = \text{softmax}(\frac{\mathbf{Q}\mathbf{K}^\mathsf{T}}{\sqrt{C}})\mathbf{V}.
\end{align}
Concretely, $\mathbf{Q} \in \mathbb{R}^{T \times C}$ is positional encodings~\cite{vaswani2017attention} of character orders and $T$ is the length of character sequence. $\mathbf{K} = \mathcal{G}(\mathbf{F}_b) \in \mathbb{R}^{\frac{HW}{16} \times C}$, where $\mathcal{G}(\cdot)$ is implemented by a mini U-Net\footnote{A network with 4-layer encoder, 64 channels, $add$ fusion and interpolation upsample.}~\cite{ronneberger2015u}. $\mathbf{V} = \mathcal{H}(\mathbf{F}_b) \in \mathbb{R}^{\frac{HW}{16} \times C}$, where $\mathcal{H}(\cdot)$ is identity mapping.
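As a concrete illustration, the position attention above can be sketched in a few lines of NumPy; the sizes are toy values, and the mini U-Net $\mathcal{G}$ is replaced by an identity stand-in purely for brevity (an assumption of this sketch, not the actual network):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, C = 5, 8    # sequence length and feature dimension (toy values)
HW16 = 12      # number of spatial positions, i.e. H*W/16 (toy value)

rng = np.random.default_rng(0)
Q = rng.standard_normal((T, C))      # positional encodings of character orders
Fb = rng.standard_normal((HW16, C))  # backbone features, flattened spatially

# In the paper K = G(Fb) via a mini U-Net and V = H(Fb) is identity;
# here G is also replaced by identity, purely for illustration.
K, V = Fb, Fb

A = softmax(Q @ K.T / np.sqrt(C))    # (T, HW16): attention over spatial positions
Fv = A @ V                           # (T, C): per-character visual features

assert Fv.shape == (T, C)
assert np.allclose(A.sum(axis=1), 1.0)  # each character's weights sum to 1
```

Each of the $T$ query positions thus pools the spatial feature map into one character-level feature vector.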
\subsection{Language Model}
\subsubsection{Autonomous Strategy}
\label{sec:autonomous}
As shown in Fig.~\ref{fig:framework}, the autonomous strategy includes following characteristics: 1) the LM is regarded as an independent model of spelling correction which takes probability vectors of characters as input and outputs probability distributions of expected characters. 2) The flow of training gradient is blocked (BGF) at input vectors. 3) The LM could be trained separately from unlabeled text data.
Following the autonomous strategy, ABINet can be divided into interpretable units. By taking probabilities as input, the LM is replaceable (\ie, it can be directly replaced with a more powerful model) and flexible (\eg, it can be executed iteratively, as in Section~\ref{sec:iterative}). Besides, an important point is that BGF forces the model to learn linguistic knowledge inevitably, which is radically different from implicit modeling, where what the models actually learn is unknowable. Furthermore, the autonomous strategy allows us to directly share in advanced progress from the NLP community. For instance, pre-training the LM can be an effective way to boost performance.
\subsubsection{Bidirectional Representation}
\begin{figure}
\begin{center}
\includegraphics[width=0.4\textwidth]{figs/language}
\caption{Architecture of language model (BCN).}
\label{fig:label}
\end{center}
\vspace{-2em}
\end{figure}
Given a text string $\bm{y}=(y_1, \ldots, y_n)$ with text length $n$ and class number $c$, the conditional probability of $y_i$ for bidirectional and unidirectional models are $P(y_i|y_n,\dots,y_{i+1},y_{i-1},\dots,y_1)$ and $P(y_i|y_{i-1},\dots,y_1)$, respectively. From the perspective of information theory, available entropy of a bidirectional representation can be quantified as $H_{\bm{y}} = (n-1)\log{c}$. However, for a unidirectional representation the information is $\frac{1}{n}\sum^n_{i=1}{(i-1)\log{c}}=\frac{1}{2}H_{\bm{y}}$. Our insight is that previous methods typically use an ensemble model of two unidirectional models, which essentially are unidirectional representations. The unidirectional representation basically captures $\frac{1}{2}H_{\bm{y}}$ information, resulting in limited capability of feature abstraction compared with bidirectional counterpart.
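The factor of two can be checked numerically; the short sketch below (with arbitrary toy values for $n$ and $c$) simply evaluates the two expressions above:

```python
import numpy as np

n, c = 10, 37  # text length and class number (toy values)

# Bidirectional representation: (n - 1) * log(c)
H_bidir = (n - 1) * np.log(c)

# Unidirectional representation: average of (i - 1) * log(c) over i = 1..n
H_unidir = np.mean([(i - 1) * np.log(c) for i in range(1, n + 1)])

# The unidirectional average is exactly half the bidirectional entropy.
assert np.isclose(H_unidir, 0.5 * H_bidir)
```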
Benefitting from the autonomous design in Section~\ref{sec:autonomous}, off-the-shelf NLP models with the ability of spelling correction can be transferred. A plausible way is to utilize the masked language model (MLM) in BERT~\cite{devlin2018bert} by replacing $y_i$ with the token {\tt{[MASK]}}. However, this is unacceptable, as the MLM would have to be called separately $n$ times for each text instance, causing extremely low efficiency. Instead of masking the input characters, we propose BCN by specifying the attention masks.
Overall, the BCN is a variant of an $L$-layer transformer decoder. Each layer of BCN is a series of multi-head attention and feed-forward network~\cite{vaswani2017attention} followed by residual connection~\cite{he2016deep} and layer normalization~\cite{ba2016layer}, as shown in Fig.~\ref{fig:label}. Different from the vanilla Transformer, character vectors are fed into the multi-head attention blocks rather than into the first layer of the network. In addition, the attention masks in multi-head attention are designed to prevent each position from ``seeing itself". Besides, no self-attention is applied in BCN, to avoid leaking information across time steps. The attention operation inside the multi-head blocks can be formalized as:
\begin{align}
\mathbf{M}_{ij} &= \begin{cases} 0, & i \neq j \\ -\infty, & i = j \end{cases}, \label{eq:att:mask} \\
\mathbf{K}_i &= \mathbf{V}_i = P(y_i) \mathbf{W}_l, \\
\mathbf{F}_{mha} &= \text{softmax}(\frac{\mathbf{Q}\mathbf{K}^\mathsf{T}}{\sqrt{C}} + \mathbf{M})\mathbf{V},
\end{align}
where $\mathbf{Q} \in \mathbb{R}^{T \times C}$ is the positional encodings of character orders in the first layer and the outputs of the last layer otherwise. $\mathbf{K}, \mathbf{V} \in \mathbb{R}^{T \times C}$ are obtained from character probability $P(y_i) \in \mathbb{R}^{c}$, and $\mathbf{W}_l \in \mathbb{R}^{c \times C}$ is linear mapping matrix. $\mathbf{M} \in \mathbb{R}^{T \times T}$ is the matrix of attention masks which prevents from attending current character. After stacking BCN layers into deep architecture, the bidirectional representation $\mathbf{F}_{l}$ for text $\bm{y}$ is determined.
By specifying the attention masks in cloze fashion, BCN is able to learn a more powerful bidirectional representation more elegantly than an ensemble of unidirectional representations. Besides, benefitting from its Transformer-like architecture, BCN can perform computation independently and in parallel. It is also more efficient than the ensemble models, as only half of the computation and parameters are needed.
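A minimal NumPy sketch of the masked attention above (toy sizes; random matrices standing in for the learned projection $\mathbf{W}_l$ and for the VM output) shows that the diagonal mask indeed zeroes out each step's attention to itself:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

T, C, c = 6, 8, 37  # time steps, feature dim, class number (toy values)
rng = np.random.default_rng(1)

Q = rng.standard_normal((T, C))           # positional encodings (first layer)
P = softmax(rng.standard_normal((T, c)))  # character probabilities from the VM
Wl = rng.standard_normal((c, C))          # stand-in for the linear mapping W_l
K = V = P @ Wl

# Cloze-style mask: -inf on the diagonal so step i never sees character i.
M = np.where(np.eye(T, dtype=bool), -np.inf, 0.0)

A = softmax(Q @ K.T / np.sqrt(C) + M)     # (T, T) masked attention weights
F_mha = A @ V                             # (T, C) bidirectional features

assert np.allclose(np.diag(A), 0.0)       # no attention to itself
assert np.allclose(A.sum(axis=1), 1.0)    # weights spread over the other T-1 steps
```

In effect, every step performs its own cloze prediction from the other steps in a single parallel pass.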
\subsubsection{Iterative Correction}
\label{sec:iterative}
The parallel-prediction of the Transformer takes noise inputs, which are typically approximations from the visual prediction~\cite{yu2020towards} or the visual feature~\cite{lyu20192d}. Concretely, in the example shown in Fig.~\ref{fig:framework} under bidirectional representation, the desired condition for $P(\text{``O"})$ is ``SH-WING". However, due to the blurred and occluded environment, the actual condition obtained from the VM is ``SH-VING", in which ``V" becomes noise and harms the confidence of the prediction. The situation becomes even worse for the LM as error predictions from the VM increase.
To cope with the problem of noise inputs, we propose an iterative LM (illustrated in Fig.~\ref{fig:framework}). The LM is executed $M$ times repeatedly with different assignments of $\bm{y}$. For the first iteration, $\bm{y}_{i=1}$ is the probability prediction from the VM. For the subsequent iterations, $\bm{y}_{i \ge 2}$ is the probability prediction from the fusion model (Section~\ref{sec:fusion}) in the last iteration. In this way the LM is able to correct the vision prediction iteratively.
Another observation is that Transformer-based methods generally suffer from the unaligned-length problem~\cite{yu2020towards}, meaning that the Transformer struggles to correct the vision prediction if the predicted character count is unaligned with the ground truth. The unaligned-length problem is caused by the inevitable implementation of a padding mask, which is fixed for filtering context outside the text length. Our iterative LM can alleviate this problem, as the visual feature and linguistic feature are fused several times and thus the predicted text length is refined gradually.
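The iterative execution reduces to a plain loop. In the sketch below, `vision`, `language` and `fuse` are hypothetical stand-ins for the three sub-models (they only produce well-formed probability tensors, not meaningful predictions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(2)
T, c = 5, 37  # text length and class number (toy values)

def vision(image):        # stand-in VM: per-character probabilities
    return softmax(rng.standard_normal((T, c)))

def language(probs):      # stand-in LM: "spelling correction" on probabilities
    return softmax(np.log(probs + 1e-9))

def fuse(pv, pl):         # stand-in fusion producing the final probabilities
    return 0.5 * pv + 0.5 * pl

M_iter = 3
p_vision = vision(None)   # VM runs once
y = p_vision              # first iteration takes the VM prediction
for _ in range(M_iter):
    p_lang = language(y)  # LM refines the current prediction
    y = fuse(p_vision, p_lang)  # fused output is fed back next iteration

assert y.shape == (T, c)
assert np.allclose(y.sum(axis=1), 1.0)  # still a valid distribution per character
```

Note that only the LM and the fusion run inside the loop; the visual features are computed once and reused.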
\subsection{Fusion}
\label{sec:fusion}
Conceptually, the vision model trained on images and the language model trained on text come from different modalities. To align visual features and linguistic features, we simply use a gated mechanism \cite{yu2020towards, yue2020robustscanner} for the final decision:
\begin{align}
\mathbf{G} &= \sigma([\mathbf{F}_{v}, \mathbf{F}_{l}] \mathbf{W}_f), \\
\mathbf{F}_{f} &= \mathbf{G} \odot \mathbf{F}_{v} + (1 - \mathbf{G}) \odot \mathbf{F}_{l},
\end{align}
where $\mathbf{W}_f \in \mathbb{R}^{2C \times C}$ and $\mathbf{G} \in \mathbb{R}^{T \times C}$.
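The gated fusion follows directly from the two equations above; a NumPy sketch with toy sizes and a random matrix standing in for the learned $\mathbf{W}_f$:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

T, C = 5, 8  # sequence length and feature dimension (toy values)
rng = np.random.default_rng(3)
Fv = rng.standard_normal((T, C))      # visual features
Fl = rng.standard_normal((T, C))      # linguistic features
Wf = rng.standard_normal((2 * C, C))  # stand-in for the learned W_f

# G in (0, 1) decides, per element, how much to trust vision vs. language.
G = sigmoid(np.concatenate([Fv, Fl], axis=1) @ Wf)
Ff = G * Fv + (1 - G) * Fl            # element-wise gated fusion

assert Ff.shape == (T, C)
assert ((G > 0) & (G < 1)).all()
```

The gate is computed per position and per channel, so the model can lean on vision for some characters and on language for others.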
\subsection{Supervised Training}
ABINet is trained end-to-end using the following multi-task objectives:
\begin{align}
\mathcal{L} &= \lambda_v \mathcal{L}_v + \frac{\lambda_l}{M} \sum^M_{i=1}{\mathcal{L}^i_l} + \frac{1}{M} \sum^M_{i=1}{\mathcal{L}^i_f},
\label{eq:loss}
\end{align}
where $\mathcal{L}_v$, $\mathcal{L}_l$ and $\mathcal{L}_f$ are the cross entropy losses from $\mathbf{F}_{v}$, $\mathbf{F}_{l}$ and $\mathbf{F}_{f}$, respectively. Specifically, $\mathcal{L}^i_{l}$ and $\mathcal{L}^i_{f}$ are the losses at $i$-th iteration. $\lambda_v$ and $\lambda_l$ are balanced factors.
\subsection{Semi-supervised Ensemble Self-training}
\label{sec:semi-supervised}
To further explore the superiority of our iterative model, we propose a semi-supervised learning method based on self-training~\cite{xie2020self} with the ensemble of iterative predictions. The basic idea of self-training is first to generate pseudo labels by model itself, and then re-train the model using additional pseudo labels. Therefore, the key problem lies in constructing high-quality pseudo labels.
To filter out noisy pseudo labels, we propose the following methods: 1) the minimum confidence of the characters within a text instance is chosen as the text certainty; 2) the iterative predictions of each character are viewed as an ensemble to smooth the impact of noisy labels. Therefore, we define the filtering function as follows:
\begin{align}
\begin{cases}
\mathcal{C} &= \min\limits_{1 \le t \le T} e^{\mathbb{E}[\log{P(y_t)}]} \\
P(y_t) &= \max\limits_{1 \le m \le M} P_m(y_t)
\end{cases},
\label{eq:filter}
\end{align}
where $\mathcal{C}$ is the minimum \emph{certainty} of a text instance and $P_m(y_t)$ is the probability distribution of the $t$-th character at the $m$-th iteration. The training procedure is depicted in Algorithm~\ref{alg:self-training}, where $Q$ is the threshold, $B_l$ and $B_u$ are training batches from the labeled and unlabeled data, $N_{max}$ is the maximum number of training steps and $N_{upl}$ is the step interval for updating pseudo labels.
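One plausible reading of the filtering function (a geometric mean of top-class probabilities over the $M$ iterations, followed by a minimum over characters) can be sketched as follows; the helper names and toy tensors are assumptions for illustration:

```python
import numpy as np

def text_certainty(probs):
    """probs: (M, T, K) per-iteration class distributions for T characters.
    Top-class probabilities over the M iterations are combined by a
    geometric mean (the ensemble smoothing), then the minimum over
    characters is taken as the instance certainty."""
    top = probs.max(axis=2)                       # (M, T) top-class prob
    per_char = np.exp(np.log(top).mean(axis=0))   # (T,) geometric mean
    return per_char.min()

M, T, K = 3, 4, 5
confident = np.full((M, T, K), 0.03 / (K - 1))
confident[..., 0] = 0.97                          # top class always 0.97
noisy = confident.copy()
noisy[:, 2, 0] = 0.5                              # one uncertain character
c_hi, c_lo = text_certainty(confident), text_certainty(noisy)
```

With a threshold $Q=0.9$, only the confident instance would be kept as a pseudo label.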
\begin{algorithm}[t]
\scriptsize
\caption{Ensemble Self-training}
\begin{algorithmic}[1]
\Require Labeled images $\mathcal{X}$ with labels $\mathcal{Y}$ and unlabeled images $\mathcal{U}$
\State Train parameters $\theta_0$ of ABINet with $(\mathcal{X}$, $\mathcal{Y})$ using Equation~\ref{eq:loss}.
\State Use $\theta_0$ to generate soft pseudo labels $\mathcal{V}$ for $\mathcal{U}$
\State Get $(\mathcal{U}'$, $\mathcal{V}')$ by filtering $(\mathcal{U}$, $\mathcal{V})$ with $\mathcal{C}<Q$ (Equation~\ref{eq:filter})
\For{$i = 1,\ldots, N_{max}$}
\If{$i \bmod N_{upl} == 0$}
\State Update $\mathcal{V}$ using $\theta_i$
\State Get $(\mathcal{U}'$, $\mathcal{V}')$ by filtering $(\mathcal{U}$, $\mathcal{V})$ with $\mathcal{C}<Q$ (Equation~\ref{eq:filter})
\EndIf
\State Sample $B_l=(\mathcal{X}_{b}$, $\mathcal{Y}_{b}) \subsetneqq (\mathcal{X}$, $\mathcal{Y})$, $B_u=(\mathcal{U}'_{b}$, $\mathcal{V}'_{b}) \subsetneqq (\mathcal{U}'$, $\mathcal{V}')$
\State Update $\theta_i$ with $B_l$, $B_u$ using Equation~\ref{eq:loss}.
\EndFor
\end{algorithmic}
\label{alg:self-training}
\end{algorithm}
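A minimal Python skeleton of Algorithm~\ref{alg:self-training} is given below; `train_step`, `predict` and `certainty` are stand-ins for the real ABINet routines (assumptions made for illustration), and the toy lambdas at the end exist only so the loop can run:

```python
import random

def ensemble_self_train(X, Y, U, train_step, predict, certainty,
                        Q=0.9, N_max=10, N_upl=3, batch=2, seed=0):
    """Skeleton of ensemble self-training: supervised warm-up, pseudo-label
    generation and filtering, then mixed labeled/unlabeled updates with
    periodic pseudo-label refreshes."""
    rng = random.Random(seed)
    theta = train_step(None, X, Y)                 # supervised warm-up

    def filtered(theta):
        pairs = [(u, predict(theta, u)) for u in U]
        return [(u, v) for u, v in pairs if certainty(v) >= Q]

    pool = filtered(theta)                         # (U', V')
    for i in range(1, N_max + 1):
        if i % N_upl == 0:                         # refresh pseudo labels
            pool = filtered(theta)
        B_l = rng.sample(list(zip(X, Y)), min(batch, len(X)))
        B_u = rng.sample(pool, min(batch, len(pool)))
        xs, ys = zip(*(B_l + B_u))
        theta = train_step(theta, xs, ys)
    return theta

# Toy stand-ins: theta counts gradient steps; pseudo labels are the
# items themselves and certainty is their value.
steps = ensemble_self_train(
    X=[1, 2, 3], Y=['a', 'b', 'c'], U=[0.95, 0.5, 0.99],
    train_step=lambda th, xs, ys: (th or 0) + 1,
    predict=lambda th, u: u,
    certainty=lambda v: v)
```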
\section{Experiment}
\subsection{Datasets and Implementation Details}
Experiments are conducted following the setup of~\cite{yu2020towards} for the purpose of fair comparison. Concretely, the training datasets are the two synthetic datasets MJSynth (MJ)~\cite{jaderberg2014synthetic,jaderberg2016reading} and SynthText (ST)~\cite{gupta2016synthetic}. Six standard benchmarks, including ICDAR 2013 (IC13)~\cite{karatzas2013icdar}, ICDAR 2015 (IC15)~\cite{karatzas2015icdar}, IIIT 5K-Words (IIIT)~\cite{mishra2012scene}, Street View Text (SVT)~\cite{wang2011end}, Street View Text-Perspective (SVTP)~\cite{quy2013recognizing} and CUTE80 (CUTE)~\cite{risnumawan2014robust}, serve as the testing datasets. Details of these datasets can be found in previous work~\cite{yu2020towards}. In addition, Uber-Text~\cite{Ying2017UberText} with its labels removed is used as the unlabeled dataset to evaluate the semi-supervised method.
The model dimension $C$ is set to 512 throughout. BCN has 4 layers with 8 attention heads per layer. The balancing factors $\lambda_v$ and $\lambda_l$ are both set to 1. Images are directly resized to $32 \times 128$ with data augmentation such as geometry transformation (\ie, rotation, affine and perspective), image quality deterioration and color jitter, \etc. We use 4 NVIDIA 1080Ti GPUs to train our models with batch size 384. The ADAM optimizer is adopted with an initial learning rate of $1e^{-4}$, which is decayed to $1e^{-5}$ after 6 epochs.
\subsection{Ablation Study}
\subsubsection{Vision Model}
\begin{table}
\begin{center}
\caption{Ablation study of VM. Attn is the attention method and Trm Layer is the layer number of Transformer. SV, MV$_1$, MV$_2$ and LV are four VMs in different configurations.}
\label{tab:vision}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Model & \multirow{2}{*}{Attn} & Trm & IC13 & SVT & IIIT & \multirow{2}{*}{Avg} & Params & Time\tablefootnote{Inference time is estimated using NVIDIA Tesla V100 by averaging 3 different trials.} \\
Name & & Layer & IC15 & SVTP & CUTE & & ($\times10^6$) & (ms) \\
\hline
SV & \multirow{2}{*}{parallel} & \multirow{2}{*}{2} &94.2& 89.6 & 93.7& \multirow{2}{*}{88.8} & \multirow{2}{*}{19.6} & \multirow{2}{*}{12.5} \\
(small) & & & 80.6& 82.3& 85.1& & & \\
\hline
MV$_1$ & \multirow{2}{*}{position} & \multirow{2}{*}{2} &93.6& 89.3 & 94.2& \multirow{2}{*}{89.0} & \multirow{2}{*}{20.4} & \multirow{2}{*}{14.9} \\
(middle) & & & 80.8& 83.1& 85.4& & & \\
\hline
MV$_2$ & \multirow{2}{*}{parallel} & \multirow{2}{*}{3} &94.5& 89.5 & 94.3& \multirow{2}{*}{89.4} & \multirow{2}{*}{22.8} & \multirow{2}{*}{14.8} \\
(middle) & & & 81.1& 83.7& \bf{86.8}& & & \\
\hline
LV & \multirow{2}{*}{position} & \multirow{2}{*}{3} &\bf{94.9}& \bf{90.4} & \bf{94.6}& \multirow{2}{*}{\bf{89.8}} & \multirow{2}{*}{23.5} & \multirow{2}{*}{16.7} \\
(large) & & & \bf{81.7} & \bf{84.2} & 86.5& & & \\
\hline
\end{tabular}}
\end{center}
\vspace{-1em}
\end{table}
Firstly, we discuss the performance of the VM from two aspects: feature extraction and sequence modeling. Experiment results are recorded in Tab.~\ref{tab:vision}. The \emph{parallel} attention is a popular attention method~\cite{lyu20192d,yu2020towards}, while the proposed \emph{position} attention has a more powerful representation of key/value vectors. From the statistics we can conclude: 1) simply upgrading the VM results in large gains in accuracy, but at the cost of parameters and speed; 2) to upgrade the VM, we can use the position attention in feature extraction and a deeper Transformer in sequence modeling.
\subsubsection{Language Model}
\begin{table}
\begin{center}
\caption{Ablation study of autonomous strategy. PVM is pre-training VM on MJ and ST in supervised way. PLM$_{in}$ is pre-training LM using text on MJ and ST in self-supervised way. PLM$_{out}$ is pre-training LM on WikiText-103~\cite{merity2016pointer} in self-supervised way. AGF means allowing gradient flow between VM and LM.}
\label{tab:autonomous}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{PVM} & \multirow{2}{*}{PLM$_{in}$} & \multirow{2}{*}{PLM$_{out}$} & \multirow{2}{*}{AGF} & IC13 & SVT & IIIT & \multirow{2}{*}{Avg} \\
& & & & IC15 & SVTP & CUTE & \\
\hline
\multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & 96.7 & 93.4 & 95.7 & \multirow{2}{*}{91.7} \\
& & & & 84.5 & 86.8 & 86.8& \\
\hline
\multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & 97.0 & 93.0 & 96.3 & \multirow{2}{*}{92.3} \\
& & & & 85.0 & 88.5 & 89.2& \\
\hline
\multirow{2}{*}{-} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & 97.1 & \bf{93.8} & 95.5 & \multirow{2}{*}{91.6} \\
& & & & 83.6 & 88.1 & 86.8& \\
\hline
\multirow{2}{*}{\ding{51}} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \bf{97.2} & 93.5 & 96.3 & \multirow{2}{*}{92.3} \\
& & & & 84.9 & \bf{89.0} & 88.5& \\
\hline
\multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & \multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & 97.0 & 93.7 & \bf{96.5} & \multirow{2}{*}{\bf{92.5}} \\
& & & & \bf{85.3} & 88.5 & \bf{89.6}& \\
\hline
\multirow{2}{*}{\ding{51}} & \multirow{2}{*}{-} & \multirow{2}{*}{-} & \multirow{2}{*}{\ding{51}} & 96.7 & 92.6 & 95.7& \multirow{2}{*}{91.4} \\
& & & & 83.3 & 86.5 & 88.5& \\
\hline
\end{tabular}}
\end{center}
\vspace{-2em}
\end{table}
\paragraph{Autonomous Strategy.} To analyze the autonomous models, we adopt LV and BCN as the VM and LM respectively. From the results in Tab.~\ref{tab:autonomous} we can observe: 1) pre-training the VM is useful, boosting the accuracy by about $0.6\%$-$0.7\%$ on average; 2) the benefit of pre-training the LM on the training datasets (\ie, MJ and ST) is negligible; 3) pre-training the LM on an additional unlabeled dataset (\eg, WikiText-103) is helpful even when the base model already achieves high accuracy. The above observations suggest that it is useful for STR to pre-train both the VM and LM. Pre-training the LM on additional unlabeled datasets is more effective than on the training datasets, since the limited text diversity and biased data distribution of the latter are unable to facilitate the learning of a well-performing LM. Also, pre-training the LM on unlabeled datasets is cheap since additional data is easily available.
Besides, by allowing gradient flow (AGF) between the VM and LM, the performance decreases by $0.9\%$ on average (Tab.~\ref{tab:autonomous}). We also notice that the training loss under AGF drops sharply to a lower value. This indicates that overfitting occurs in the LM as the VM helps it cheat in training, which might also happen in implicit language modeling. Therefore it is crucial to force the LM to learn independently by blocking gradient flow (BGF). We note that SRN~\cite{yu2020towards} uses an \emph{argmax} operation after the VM, which is intrinsically a special case of BGF since \emph{argmax} is non-differentiable. Another advantage is that the autonomous strategy gives the models better interpretability, since we can gain deep insight into the performance of the LM (\eg, Tab.~\ref{tab:spelling_correction}), which is infeasible in implicit language modeling.
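A toy finite-difference check (not from the paper) illustrates why the \emph{argmax} operation acts as BGF: its output is piecewise constant, so a small perturbation of the logits leaves it unchanged and no gradient can flow through it.

```python
import numpy as np

def argmax_onehot(logits):
    """Hard one-hot decision, as used between VM and LM in SRN."""
    onehot = np.zeros_like(logits)
    onehot[np.argmax(logits)] = 1.0
    return onehot

# Finite-difference "gradient" of the argmax output w.r.t. each logit;
# the margin (2.5 vs. 1.0) dwarfs the perturbation, so nothing changes.
logits = np.array([1.0, 2.5, 0.3])
eps = 1e-4
base = argmax_onehot(logits)
grads = [(argmax_onehot(logits + eps * np.eye(3)[k]) - base) / eps
         for k in range(3)]
```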
\begin{table}
\begin{center}
\caption{Ablation study of bidirectional representation.}
\label{tab:bidirectional}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Vision} & \multirow{2}{*}{Language} & IC13 & SVT & IIIT & \multirow{2}{*}{Avg} & Params & Time \\
& & IC15 & SVTP & CUTE & & ($\times10^6$) & (ms) \\
\hline
& \multirow{2}{*}{SRN-U} & 96.0 & 90.3 & 94.9& \multirow{2}{*}{90.2} & \multirow{2}{*}{32.8} & \multirow{2}{*}{19.1} \\
& & 81.9 & 86.0 & 85.4& & & \\
\cline{2-8}
\multirow{2}{*}{SV} & \multirow{2}{*}{SRN} & 96.3 & 90.9 & 95.0 & \multirow{2}{*}{90.6} & \multirow{2}{*}{45.4} & \multirow{2}{*}{24.2} \\
& & 82.6 & \bf{86.4} & 87.5& & & \\
\cline{2-8}
& \multirow{2}{*}{BCN} & \bf{96.7} & \bf{91.7} & \bf{95.3} & \multirow{2}{*}{\bf{91.0}} & \multirow{2}{*}{32.8} & \multirow{2}{*}{19.5} \\
& & \bf{83.1} & 86.2 & \bf{88.9}& & & \\
\hline
\hline
& \multirow{2}{*}{SRN-U} & 96.0 & 91.2 & 96.2 & \multirow{2}{*}{91.5} & \multirow{2}{*}{36.7} & \multirow{2}{*}{22.1} \\
& & 84.0 & 86.8 & 87.8& & & \\
\cline{2-8}
\multirow{2}{*}{LV} & \multirow{2}{*}{SRN} & 96.8 & 92.3 & \bf{96.3} & \multirow{2}{*}{91.9} & \multirow{2}{*}{49.3} & \multirow{2}{*}{26.9} \\
& & 84.2 & 87.9 & 88.2& & & \\
\cline{2-8}
& \multirow{2}{*}{BCN} & \bf{97.0} & \bf{93.0} & \bf{96.3} & \multirow{2}{*}{\textbf{92.3}} & \multirow{2}{*}{36.7} & \multirow{2}{*}{22} \\
& & \bf{85.0} & \bf{88.5} & \bf{89.2}& & & \\
\hline
\end{tabular}}
\end{center}
\vspace{-1em}
\end{table}
\begin{table}
\begin{center}
\caption{Top-5 accuracy of LMs in text spelling correction.}
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{|c|c|c|}
\hline
Language Model & Character Accuracy & Word Accuracy \\
\hline
SRN & 78.3 & 27.6 \\
\hline
BCN & \bf{82.8} & \bf{41.9} \\
\hline
\end{tabular}}
\label{tab:spelling_correction}
\end{center}
\vspace{-2em}
\end{table}
\vspace{-1em}
\paragraph{Bidirectional Representation.} As BCN is a variant of the Transformer, we compare it with its counterpart SRN. The Transformer-based SRN~\cite{yu2020towards}, which is an ensemble of unidirectional representations, shows superior performance. For a fair comparison, experiments are conducted under the same conditions except for the networks. We use SV and LV as the VMs to validate the effectiveness at different accuracy levels. As depicted in Tab.~\ref{tab:bidirectional}, though BCN has similar parameters and inference speed to the unidirectional version of SRN (SRN-U), it achieves a competitive advantage in accuracy under different VMs. Besides, compared with the bidirectional SRN in ensemble, BCN shows better performance especially on challenging datasets such as IC15 and CUTE. Also, ABINet equipped with BCN is about $20\%$-$25\%$ faster than SRN, which is practical for large-scale tasks.
Section~\ref{sec:autonomous} argued that the LMs can be viewed as independent units that estimate the probability distribution for spelling correction, and thus we conduct experiments from this view. The training set is the text from MJ and ST. To simulate spelling errors, the testing set consists of 20,000 randomly chosen items, where we add or remove a character for $20\%$ of the text, replace a character for $60\%$ of the text and keep the rest unchanged. From the results in Tab.~\ref{tab:spelling_correction}, we can see that BCN outperforms SRN by $4.5\%$ in character accuracy and $14.3\%$ in word accuracy, which indicates that BCN has a more powerful ability in character-level language modeling.
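The corruption procedure for the test set can be sketched as follows; the function name `corrupt` and the alphabet are illustrative assumptions matching the stated 20\%/60\%/20\% split:

```python
import random

def corrupt(text, rng, alphabet='abcdefghijklmnopqrstuvwxyz'):
    """Simulate spelling errors: 20% of strings get a character added or
    removed, 60% get one replaced, and 20% are kept unchanged."""
    r, i = rng.random(), rng.randrange(len(text))
    if r < 0.2:
        if rng.random() < 0.5 and len(text) > 1:
            return text[:i] + text[i + 1:]                      # remove
        return text[:i] + rng.choice(alphabet) + text[i:]       # add
    if r < 0.8:
        return text[:i] + rng.choice(alphabet) + text[i + 1:]   # replace
    return text                                                 # keep

rng = random.Random(0)
samples = [corrupt('today', rng) for _ in range(1000)]
```

Note that the add/remove branch changes the string length, which also exercises the unaligned-length behaviour discussed earlier.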
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{figs/visual_topk.png}
\caption{Visualization of top-5 probability in BCN.}
\label{fig:visual_topk}
\end{center}
\vspace{-1.0em}
\end{figure}
To better understand how BCN works inside ABINet, we visualize the top-5 probabilities in Fig.~\ref{fig:visual_topk}, taking ``today" as an example. On the one hand, as ``today" is a string with semantic information, when taking ``-oday" and ``tod-y" as inputs, BCN can predict ``t" and ``a" with high confidence and contribute to the final fusion predictions. On the other hand, as the erroneous characters ``l" and ``o" are noise for the remaining predictions, BCN becomes less confident and has little impact on the final predictions. Besides, if there are multiple erroneous characters, it is hard for BCN to restore the correct text due to the lack of sufficient context.
\begin{table}
\begin{center}
\caption{Ablation study of iterative correction.}
\label{tab:iterative}
\resizebox{0.9\linewidth}{!}{
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{Model} & Iteration & IC13 & SVT & IIIT & \multirow{2}{*}{Avg} & Params & Time \\
& Number & IC15 & SVTP & CUTE & & ($\times10^6$) & (ms) \\
\hline
\multirow{2}{*}{SV} & \multirow{2}{*}{1} & 96.7 & 91.7 & 95.3 & \multirow{2}{*}{91.0} & \multirow{2}{*}{32.8} & \multirow{2}{*}{19.5} \\
& & 83.1 & 86.2 & 88.9 & & & \\
\cline{2-8}
\multirow{2}{*}{+} & \multirow{2}{*}{2} & \bf{97.2} & 91.8 & \bf{95.4} & \multirow{2}{*}{91.2} & \multirow{2}{*}{32.8} & \multirow{2}{*}{24.5} \\
& & 83.3 & 86.4 & 89.2 & & & \\
\cline{2-8}
\multirow{2}{*}{BCN} & \multirow{2}{*}{3} & 97.1 & \bf{93.0} & \bf{95.4} & \multirow{2}{*}{\bf{91.4}} & \multirow{2}{*}{32.8} & \multirow{2}{*}{31.6} \\
& & \bf{83.4} & \bf{86.7} & \bf{89.6}& & & \\
\hline
\hline
\multirow{2}{*}{LV} & \multirow{2}{*}{1} & 97.0 & 93.0 & 96.3 & \multirow{2}{*}{92.3} & \multirow{2}{*}{36.7} & \multirow{2}{*}{22} \\
& & 85.0 & 88.5 & 89.2& & & \\
\cline{2-8}
\multirow{2}{*}{+} & \multirow{2}{*}{2} & 97.1 & 93.4 & 96.3 & \multirow{2}{*}{92.4} & \multirow{2}{*}{36.7} & \multirow{2}{*}{27.3} \\
& & 85.2 & 88.7 & \bf{89.6}& & & \\
\cline{2-8}
\multirow{2}{*}{BCN} & \multirow{2}{*}{3} & \bf{97.3} & \bf{94.0} & \bf{96.4} & \multirow{2}{*}{\bf{92.6}} & \multirow{2}{*}{36.7} & \multirow{2}{*}{33.9} \\
& & \bf{85.5} & \bf{89.1} & 89.2 & & & \\
\hline
\end{tabular}}
\end{center}
\vspace{-2.0em}
\end{table}
\begin{table*}[htp]
\vspace{0em}
\begin{center}
\caption{Accuracy comparison with other methods.}
\label{tab:benchmark}
\resizebox{0.85\linewidth}{!}{
\begin{tabular}{|c|l|c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multirow{2}{*}{Methods} & Labeled & Unlabeled & \multicolumn{3}{|c|}{Regular Text} & \multicolumn{3}{|c|}{Irregular Text} \\
\cline{4-10}
& & Datasets & Datasets & IC13 & SVT & IIIT & IC15 & SVTP & CUTE \\
\hline
\multirow{7}{*}{\rotatebox{90}{SOTA methods}} & 2019~Lyu~\etal~\cite{lyu20192d}~(Parallel) & MJ+ST & - & 92.7 & 90.1 & 94.0 & 76.3 & 82.3& 86.8 \\
& 2019~Liao~\etal~\cite{liao2019mask}~(SAM) & MJ+ST & - & 95.3 & 90.6 & 93.9 & 77.3 & 82.2 & 87.8 \\
& 2020~Qiao~\etal~\cite{qiao2020seed}~(SE-ASTER) & MJ+ST & - & 92.8 & 89.6 & 93.8 & 80.0 & 81.4 & 83.6 \\
& 2020~Wan~\etal~\cite{wan2019textscanner}~(Textscanner) & MJ+ST & - & 92.9 & 90.1 & 93.9 & 79.4 & 84.3 & 83.3 \\
& 2020~Wang~\etal~\cite{wang2020decoupled}~(DAN) & MJ+ST & - & 93.9 & 89.2 & 94.3 & 74.5 & 80.0 & 84.4 \\
& 2020~Yue~\etal~\cite{yue2020robustscanner}~(RobustScanner) & MJ+ST & - & 94.8 & 88.1 & 95.3 & 77.1 & 79.5 & \bf{90.3}\\
& 2020~Yu~\etal~\cite{yu2020towards} (SRN) & MJ+ST & - & 95.5 & 91.5 & 94.8 & 82.7 &85.1 &87.8\\
\hline
\multirow{6}{*}{\rotatebox{90}{Ours}} & SRN-SV (Reproduced) & MJ+ST & - & 96.3 & 90.9 & 95.0 & 82.6 & 86.4 & 87.5 \\
& ABINet-SV & MJ+ST & - & \bf{96.8} & \bf{93.2} & \bf{95.4} & \bf{84.0} & \bf{87.0} & 88.9 \\
\cline{2-10}
& SRN-LV (Reproduced) & MJ+ST & - & 96.8 & 92.3 & \bf{96.3} & 84.2 & 87.9 & 88.2 \\
& ABINet-LV & MJ+ST & - & \bf{97.4} & \bf{93.5} & 96.2 & \bf{86.0} & \bf{89.3} & \bf{89.2} \\
\cline{2-10}
& ABINet-LV${_{st}}$ & MJ+ST & Uber-Text & 97.3 & 94.9 & 96.8 & \bf{87.4} & \bf{90.1} & 93.4 \\
& ABINet-LV${_{est}}$ & MJ+ST & Uber-Text & \bf{97.7} & \bf{95.5} & \bf{97.2} & 86.9 & 89.9 & \bf{94.1} \\
\hline
\end{tabular}}
\end{center}
\vspace{-2em}
\end{table*}
\vspace{-1em}
\paragraph{Iterative Correction.} We apply SV and LV again with BCN to demonstrate the performance of iterative correction at different accuracy levels. Experiment results are given in Tab.~\ref{tab:iterative}, where the iteration number is set to 1, 2 and 3 both in training and testing. As can be seen from the results, iterating BCN 3 times boosts the accuracy by $0.4\%$ and $0.3\%$ on average respectively. Specifically, there are only small gains on IIIT, which is a relatively easy dataset with clear character appearance. However, on harder datasets such as IC15, SVT and SVTP, iterative correction steadily increases the accuracy and achieves up to $1.3\%$ and $1.0\%$ improvement on SVT for SV and LV respectively. It is also noted that the inference time increases linearly with the iteration number.
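The iterative pipeline can be sketched as below; `language_model` and `fuse` are placeholders for BCN and the gated fusion (assumptions, not the paper's code), and the toy components at the end merely demonstrate that repeated fusion refines the initial prediction:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def iterative_correction(vision_logits, language_model, fuse, M=3):
    """The fused prediction is fed back into the LM M times; the vision
    branch is computed once and reused at every iteration."""
    y = softmax(vision_logits)          # initial vision prediction
    history = [y]
    for _ in range(M):
        y = fuse(softmax(vision_logits), language_model(y))
        history.append(y)
    return y, history

# Toy components: the "LM" pulls predictions toward a fixed target
# distribution and fusion is a plain average.
target = np.array([[0.0, 1.0], [1.0, 0.0]])
y_final, history = iterative_correction(
    np.zeros((2, 2)),
    language_model=lambda y: 0.5 * y + 0.5 * target,
    fuse=lambda a, b: 0.5 * (a + b),
    M=3)
```

Each iteration reuses the cached vision features, so the cost grows linearly with the iteration number, matching the timing column in the ablation.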
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{figs/iteration.png}
\caption{Accuracy of iterating BCN in training and testing.}
\label{fig:iteration}
\end{center}
\vspace{-0em}
\end{figure}
We further explore the difference between iterating in training and in testing. The fluctuation of average accuracy in Fig.~\ref{fig:iteration} suggests that: 1) directly applying iterative correction only in testing also works well; 2) iterating in training is nevertheless beneficial since it provides additional training samples for the LM; 3) the accuracy reaches a saturated state when iterating the model more than 3 times, and therefore a large iteration number is unnecessary.
\begin{figure}
\vspace{-1.5em}
\begin{center}
\includegraphics[width=0.5\textwidth]{figs/iteration_img}
\caption{Successful examples using iterative correction. Text strings are ground truth, vision prediction, fusion prediction without iterative correction and with iterative correction respectively from left to right and top to bottom.}
\label{fig:iteration_img}
\end{center}
\vspace{-2.5em}
\end{figure}
To gain a comprehensive understanding of iterative correction, we visualize the intermediate predictions in Fig.~\ref{fig:iteration_img}. Typically, the vision predictions can be revised to approach the ground truth, though errors remain in some cases. After multiple iterations, the predictions are finally corrected. Besides, we also observe that iterative correction is able to alleviate the unaligned-length problem, as shown in the last column of Fig.~\ref{fig:iteration_img}.
From the ablation study we can conclude: 1) the bidirectional BCN is a powerful LM which effectively improves performance both in accuracy and speed; 2) by further equipping BCN with iterative correction, the noisy input problem can be alleviated, which is recommended for dealing with challenging examples such as low-quality images at the expense of additional computation.
\subsection{Comparisons with State-of-the-Arts}
Generally, it is not easy to fairly compare with other methods directly using the reported statistics~\cite{baek2019wrong}, as differences might exist in the backbone (\ie, CNN structure and parameters), data processing (\ie, image rectification and data augmentation), training tricks, \etc. To perform a strictly fair comparison, we reproduce the SOTA algorithm SRN under the same experiment configuration as ABINet, as presented in Tab.~\ref{tab:benchmark}. The two reimplemented models SRN-SV and SRN-LV differ slightly from the reported model by replacing the VMs, removing the side-effect of multi-scale training, applying a decayed learning rate, etc. Note that SRN-SV performs somewhat better than SRN due to the above tricks. As can be seen from the comparison, our ABINet-SV outperforms SRN-SV by $0.5\%$, $2.3\%$, $0.4\%$, $1.4\%$, $0.6\%$ and $1.4\%$ on the IC13, SVT, IIIT, IC15, SVTP and CUTE datasets respectively. Also, ABINet-LV with a stronger VM achieves an improvement of $0.6\%$, $1.2\%$, $1.8\%$, $1.4\%$ and $1.0\%$ on the IC13, SVT, IC15, SVTP and CUTE benchmarks over its counterpart.
Compared with recent SOTA works trained on MJ and ST, ABINet also shows impressive performance~(Tab.~\ref{tab:benchmark}). In particular, ABINet has a prominent superiority on SVT, SVTP and IC15, as these datasets contain a large number of low-quality images, such as noisy and blurred images, which the VM cannot confidently recognize. Besides, we also find that images with unusual fonts and irregular text can be successfully recognized, as the linguistic information acts as an important complement to the visual feature. Therefore ABINet obtains the second best result on CUTE even without image rectification.
\subsection{Semi-Supervised Training}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\textwidth]{figs/semi_img}
\caption{\textls[-1]{Hard examples successfully recognized by ABINet-LV${_{est}}$.}}
\label{fig:semi_img}
\end{center}
\vspace{-2.5em}
\end{figure}
To further push the boundary of accurate reading, we explore a semi-supervised method which utilizes MJ and ST as the labeled datasets and Uber-Text as the unlabeled dataset. The threshold $Q$ in Section~\ref{sec:semi-supervised} is set to 0.9, and the batch sizes of $B_l$ and $B_u$ are 256 and 128 respectively. Experiment results in Tab.~\ref{tab:benchmark} show that the proposed self-training method ABINet-LV${_{st}}$ easily outperforms ABINet-LV on all benchmark datasets. Besides, the ensemble self-training ABINet-LV${_{est}}$ shows a more stable performance by improving the efficiency of data utilization. Observing the boosted results, we find that hard examples with scarce fonts and blurred appearance are also frequently recognized~(Fig.~\ref{fig:semi_img}), which suggests that exploring semi-/unsupervised learning methods is a promising direction for scene text recognition.
\vspace{-1em}
\section{Conclusion}
In this paper, we propose ABINet, which explores effective approaches for utilizing linguistic knowledge in scene text recognition. ABINet is 1) autonomous, improving the ability of the language model by enforcing explicit learning; 2) bidirectional, learning text representations by jointly conditioning on character context from both sides; and 3) iterative, correcting predictions progressively to alleviate the impact of noisy input. Based on ABINet we further propose an ensemble self-training method for semi-supervised learning. Experimental results on standard benchmarks demonstrate the superiority of ABINet, especially on low-quality images. In addition, we claim that exploiting unlabeled data is a possible and promising route toward human-level recognition.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
Breakage, also known as fragmentation, is a basic process that describes the dissociation of particles and occurs in a variety of scientific and technical fields, including chemical process engineering, astrophysics, atmospheric science, and cellular biology. Depending on the particle breakage behaviour, the breakage process may be categorised into two kinds: the first is \emph{linear breakage}, which can happen spontaneously or as a result of external forces, and the second is \emph{collision-induced nonlinear breakage}, which takes place when two particles collide. One of the most effective approaches to characterising the kinetics of such phenomena is a rate equation which captures the evolution of the distribution of interacting clusters with respect to their sizes (or masses). In this article, we are interested in studying a mathematical model that governs collision-induced breakage, which is often exploited to depict raindrop breakup, cloud formation, and planet formation, see, for instance, \cite{LG 1976, SV 1972, SRC 1978}. The model under consideration here is known as the \emph{collision-induced breakage equation}, sometimes also termed the \emph{nonlinear fragmentation equation}. The continuous collision-induced breakage equation has recently been studied in detail in \cite{AKG 2021I}. In the continuous case, the size (or mass) of each particle is denoted by a positive real number, whereas in the discrete case, the ratio of the mass of a typical cluster to the mass of the basic building block (monomer) is a positive integer, so the size of a cluster is a finite multiple of the monomer's mass, i.e., a positive integer.
The collision-induced breakage equation describes the dynamics of a large population of particles breaking apart as a result of binary collisions and is used to model cloud drop production and planet formation (see \cite{LG 1976, SV 1972, SRC 1978}). Rate equations of this type were first derived in the work of Smoluchowski to describe pure coagulation in the discrete setting.
Over a very short period of time the coagulation process is binary in nature, whereas the breakage process may be linear (spontaneous) or non-linear. The linear breakage process is governed only by cluster attributes (and, if applicable, external forces), whereas the non-linear breakage process happens when two or more clusters collide and matter may be transferred between them. As a result, in a non-linear breakage process with mass transfer, an emerging cluster may be larger than either of the colliding clusters.
Denoting by $w_i(t)$, $i \in \mathbb{N}$, the number of clusters made of $i$ monomers ($i$-clusters) per unit volume at time $t \geq 0$, the discrete collision-induced breakage equations read
\begin{align}
\frac{dw_i}{dt} =&\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k -\sum_{j=1}^{\infty} a_{i,j} w_i w_j \label{NLDCBE}\\
w_i(0) &= w_i^0, \hspace{.5cm} i \in \mathbb{N}.\label{NLDCBEIC}
\end{align}
Here $a_{i,j}$ denotes the rate of collisions of $i$-clusters with $j$-clusters satisfying
\begin{align}
a_{i,j}=a_{j,i}, \label{ASYMM}
\end{align}
while $\{B_{i,j}^s, s=1,2,...,i+j-1\}$ is the distribution function of the resulting fragments and satisfies
\begin{align}
B_{i,j}^s = B_{j,i}^s \geq 0 \hspace{.7cm} \text{and} \hspace{.7cm} \sum_{s=1}^{i+j-1} s B_{i,j}^s = i+j, \label{LMC}
\end{align}
where the second condition in \eqref{LMC} ensures that mass is conserved during each collisional breakage event. The first term in \eqref{NLDCBE} takes into account collisions in which a $j$-mer and a $k$-mer collide and produce $i$-mers ($i < j+k$) at a rate determined by the breakup kernel $B_{j,k}^i$, whereas the second term accounts for the depletion of $i$-mers due to collisions with the other clusters in the system, which occur at a rate determined by the collision kernel $a_{i,j}$.
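As a simple illustration of a distribution function satisfying \eqref{ASYMM} and \eqref{LMC} (used here only as an example, not in the analysis below), one may take complete shattering into monomers:

```latex
% Complete shattering: every collision reduces both clusters to monomers.
B_{i,j}^s = (i+j)\,\delta_{s,1},
\qquad\text{so that}\qquad
\sum_{s=1}^{i+j-1} s\, B_{i,j}^s = 1\cdot(i+j) = i+j .
```

The symmetry $B_{i,j}^s = B_{j,i}^s$ is immediate. This extreme choice conserves mass but produces an unbounded number of fragments; the analysis below will impose further restrictions on $B$.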
The linear (spontaneous) fragmentation equation with coagulation, first explored by Filippov \cite{FAF 61}, Kapur \cite{KAPUR 72}, and McGrady and Ziff \cite{MCG 87, ZRM 85}, has gained a lot of attention in recent decades. Banasiak and Lamb used the semigroup technique to investigate the existence and uniqueness of classical solutions to linear fragmentation equations with coagulation under appropriate assumptions on the coagulation and fragmentation kernels (see \cite{BAN 2019, BLL 2019} and the references therein). Here we adopt the weak compactness technique, which consists of considering a family of truncated problems, establishing the weak compactness of their solutions, and passing to the limit, thereby establishing the existence of weak solutions to \eqref{SNLBE}--\eqref{SNLBEIC}. Uniqueness, however, requires additional assumptions and other techniques. The existence and uniqueness of weak solutions to the classical coagulation-fragmentation equation have been studied using this approach in \cite{BALL 90, CARR 94, COSTA 2015, Laurencot 1999, Laurencot 2001, Laurencot 2002}.
On the other hand, the non-linear breakage equation has not been investigated as thoroughly. Cheng and Redner studied the dynamics of continuous, linear, and collision-induced non-linear fragmentation events in \cite{CHNG 90}. They studied the asymptotic behaviour of a class of models in which a two-particle collision causes both particles to split into two equal halves, only the larger particle to split in two, or only the smaller particle to split in two. Later, Krapivsky and Ben-Naim \cite{Krapivsky 2003} investigated the dynamics of collision-induced non-linear fragmentation, calculating the fragment mass distribution analytically using the travelling-wave behaviour of the nonlinear collision equation. Analytical solutions to the non-linear breakage problem and their asymptotic behaviour were also investigated by Kostoglou and Karabelas \cite{Kostoglou 2000}. To discuss self-similar solutions, they exploited various simple homogeneous collision and break-up kernels to convert the nonlinear breakage equation into a linear one.
In recent years, continuous versions of the coagulation equation with non-linear collisional breakage have been discussed using the weak $L^1$ compactness technique in \cite{PKB 2020, PKB 2020I, PKB 2021, AKG 2021}, where the existence and uniqueness of weak solutions for different classes of collision kernels have been established. Existence, uniqueness, and mass-conserving solutions were discussed in \cite{AKG 2021I} for kernels of the form $a(x,y)= x^{\alpha} y^{\beta} + x^{\beta} y^{\alpha}$ with $ \lambda := \alpha + \beta \in [1,2] $, and finite-time existence of solutions was also discussed for $\lambda \in [0,1]$ under the condition of no mass transfer during collisions. Using the same technique, a discrete version of the coagulation equation with collisional breakage was explored by Lauren\c{c}ot and Wrzosek \cite{Laurencot 2001}, who studied the existence, uniqueness, mass conservation, and large time behaviour of weak solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} under reasonable restrictions on the collision kernel and probability function. Cheng and Redner also considered a specific case of \eqref{NLDCBE} in which, when two clusters collide, they fragment into smaller pieces and no matter is exchanged between them. They studied the continuous class of these models, in which clusters are described by a continuous variable. For clusters described by a discrete variable, the model reads
\begin{align}
\frac{dw_i}{dt} =& \sum_{j=i+1}^{\infty} \sum_{k=1}^{\infty} a_{j,k} b_{i,j;k} w_j w_k -\sum_{j=1}^{\infty} a_{i,j} w_i w_j,\label{SNLBE}\\
w_i(0) &=w_{0i}, \label{SNLBEIC}
\end{align}
for $i\geq 1$, where $\{b_{i,j;k}, 1\leq i \leq j-1\}$ denotes the distribution function of the fragments of a $j$-cluster after a collision with a $k$-cluster, and satisfies
\begin{align}
\sum_{i=1}^{j-1} i b_{i,j;k} = j, \hspace{.5cm} j\geq 2,~~~ k\geq 1. \label{LMC1}
\end{align}
To obtain \eqref{SNLBE} from \eqref{NLDCBE}, we put
\begin{align}
B_{i,j}^s = \textbf{1}_{[s, +\infty)} (i) b_{s,i;j} + \textbf{1}_{[s, +\infty)} (j) b_{s,j;i} \label{NMT}
\end{align}
for $i,j\geq 1$ and $s\in \{1,2,\cdots,i+j-1\},$ where $\textbf{1}_{[s, +\infty)}$ denotes the characteristic function of the interval $[s,+\infty)$. As each cluster splits into smaller pieces after a collision, it is expected that, in the long-time limit, only 1-clusters remain.\\
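A direct computation confirms that the choice \eqref{NMT} is compatible with the mass condition \eqref{LMC}: for $i,j\geq 2$, since $b_{s,i;j}$ vanishes for $s \geq i$, summing \eqref{NMT} against $s$ and using \eqref{LMC1} gives

```latex
\sum_{s=1}^{i+j-1} s\, B_{i,j}^s
 = \sum_{s=1}^{i-1} s\, b_{s,i;j} + \sum_{s=1}^{j-1} s\, b_{s,j;i}
 = i + j .
```

This identity is used again in Section \ref{EOS} when the mass conservation property of solutions is established.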
In this article, we look for the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} for the class of collisional kernel
having quadratic growth, i.e.
\begin{align}
a_{i,j} \leq A ij \hspace{.2cm} \text{for some} \hspace{.2cm} A>0 \hspace{.2cm}\text{and}\hspace{.2cm} i,j\geq 1.\label{QUADGROWTH}
\end{align}
In addition to \eqref{NMT}, we assume that there is a constant $\beta$ such that
\begin{align}
B_{i,j}^s \leq \beta, \hspace{.2cm} 1\leq s\leq i+j-1, \hspace{.2cm} i,j\geq 1. \label{FNP}
\end{align}
We expect the density $\rho =\sum_{i=1}^{\infty} i w_i(t)$ to be conserved because particles are neither generated nor destroyed in the interactions represented by \eqref{NLDCBE}--\eqref{NLDCBEIC}. This is mathematically equivalent to
\begin{align}
\sum_{i=1}^{\infty} iw_i(t) = \sum_{i=1}^{\infty} iw_i^0.\label{MCC}
\end{align}
In other words, the density of the solution $w$ remains constant over time.
The paper is organized as follows: The next section is devoted to a precise statement of our results, including definitions, the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC}, and the mass conservation property of solutions. In Section \ref{PMCDID}, propagation of moments, uniqueness, and continuous dependence of solutions on initial data are explored, whereas in Section \ref{IPOS} some invariance properties of solutions are shown. Finally, in Section \ref{LTBOS}, the large-time behaviour of solutions is discussed.
\section{Existence of Solutions} \label{EOS}
For $\gamma \geq 0$, let $Y_{\gamma}$ be the Banach space defined by
\begin{align*}
Y_{\gamma} = \Big\{ y =(y_i)_{i\in\mathbb{N}}: y_i \in \mathbb{R}, \sum_{i=1}^{\infty} i^{\gamma} |y_i| < \infty \Big\}
\end{align*}
with the norm
\begin{align*}
\|y\|_{\gamma} =\sum_{i=1}^{\infty} i^{\gamma} |y_i|.
\end{align*}
We will use the positive cone $Y_{\gamma}^+$ of $Y_{\gamma}$, that is,
\begin{align*}
Y_{\gamma}^+ =\{y\in Y_{\gamma}: ~~y_i \geq 0~~\text{for each } i\geq 1\}.
\end{align*}
It is worth noting that the norm $\|w\|_0$ of a particular cluster distribution $w$ represents the total number of clusters present, and the norm $\|w\|_1$ estimates the overall density or mass of the cluster distribution $w$ as in classical coagulation or coagulation fragmentation equations.
As in previous works on similar equations, the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} follows by taking a limit of solutions to finite-dimensional systems of ordinary differential equations
obtained by truncation of these equations. More precisely, given $l \geq 3$, we consider the following
system of $l$ ordinary differential equations
\begin{align}
\frac{dw_i^l}{dt}&= \frac{1}{2} \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}^l w_k^l -\sum_{j=1}^{l-i} a_{i,j} w_i^l w_j^l,\hspace{.2cm} \label{FDNLBE} \\
w_i^l(0) &= w_{0i}, \label{FDNLBEIC}
\end{align}
for $i \in\{1,2, \cdots, l\}$, where the first term on the right-hand side is zero when $i=l$.
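For readers who wish to experiment, the truncated system \eqref{FDNLBE}--\eqref{FDNLBEIC} is straightforward to integrate numerically. The following sketch is not part of the analysis; the constant collision kernel $a_{j,k}=1$ and the shattering distribution $B_{j,k}^s=(j+k)\delta_{s,1}$ are illustrative assumptions only, chosen because the latter satisfies \eqref{LMC}. It checks that an explicit Euler discretisation preserves the truncated mass $\sum_{i=1}^{l} i\, w_i^l$, in agreement with Lemma \ref{LER} below.

```python
# Minimal numerical sketch of the truncated system with l = 5.
# Kernel and breakup distribution are illustrative assumptions only.

L = 5  # truncation level l

def a(j, k):
    return 1.0  # constant collision kernel (assumption)

def B(j, k, s):
    # "shattering": each collision reduces both clusters to monomers,
    # B_{j,k}^s = (j+k) for s = 1 and 0 otherwise, so sum_s s B = j+k.
    return float(j + k) if s == 1 else 0.0

def rhs(w):
    """Right-hand side of the truncated equations; w[i-1] stands for w_i^l."""
    dw = [0.0] * L
    for i in range(1, L + 1):
        # gain: fragments of size i produced by (j-k, k)-collisions, j <= l
        gain = sum(B(j - k, k, i) * a(j - k, k) * w[j - k - 1] * w[k - 1]
                   for j in range(i + 1, L + 1) for k in range(1, j))
        # loss: collisions of i-clusters with j-clusters, j <= l - i
        loss = sum(a(i, j) * w[i - 1] * w[j - 1] for j in range(1, L - i + 1))
        dw[i - 1] = 0.5 * gain - loss
    return dw

def mass(w):
    return sum(i * wi for i, wi in enumerate(w, start=1))

w = [1.0, 0.5, 0.3, 0.2, 0.1]  # arbitrary non-negative initial data
m0 = mass(w)
dt = 1.0e-3
for _ in range(1000):  # explicit Euler up to t = 1
    dw = rhs(w)
    w = [wi + dt * dwi for wi, dwi in zip(w, dw)]

assert abs(mass(w) - m0) < 1e-9    # truncated mass is conserved
assert all(wi >= 0.0 for wi in w)  # non-negativity is preserved
assert w[0] > 1.0                  # monomers are produced by breakage
```

Because the identity behind the conservation of the truncated mass holds pointwise in the state $w$, even this crude Euler scheme conserves $\sum_{i\leq l} i\, w_i^l$ up to round-off error.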
Let us now define what we mean by a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\begin{definition} \label{DEF1}
Let $T\in(0,+\infty]$ and let $w^0= (w_{0i})_{i \geq 1}\in Y_1^+$ be a sequence of non-negative real numbers. A solution $w=(w_i)_{i \geq 1} $ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$ is a sequence of non-negative continuous functions satisfying, for each $i\geq 1$ and $t\in(0,T)$,
\begin{enumerate}
\item $w_i\in \mathcal{C}([0,T))$, $\sum_{j=1}^{\infty} a_{i,j} w_j \in L^1(0,t)$, $ \sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1}B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k \in L^1(0,t)$,
\item and there holds
\begin{align}
w_i(t) = w_{0i} + \int_0^t \Big( \frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(\tau) w_k(\tau) -\sum_{j=1}^{\infty} a_{i,j} w_i(\tau) w_j(\tau) \Big) d\tau. \label{IVOE}
\end{align}
\end{enumerate}
\end{definition}
In the following lemmas, we collect basic results about solutions to this finite-dimensional system, which were proved in \cite{BALL 90} for discrete coagulation fragmentation equations, and we follow their proof closely.
\begin{lemma} \label{LEMMAREG}
Let $w^l$ be a solution of \eqref{FDNLBE}--\eqref{FDNLBEIC} and let $(\mu_i)$ be a sequence of real numbers. Then for $1\leq r\leq l$,
\begin{align}
\sum_{i=r}^l \mu_i \dot{w}_i^l =& \frac{1}{2}\sum_{R_1}\Big( \sum_{i=r}^{j+k-1} \mu_i B_{j,k}^i -\mu_j-\mu_k\Big)a_{j,k} w_j^l w_k^l +\frac{1}{2}\sum_{R_2}\Big(\sum_{i=r}^{j+k-1} \mu_iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l \nonumber\\
&+ \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} \mu_i B_{j,k}^i -\mu_k \Big) a_{j,k} w_j^l w_k^l \label{GME}
\end{align}
where
\begin{align*}
R_1 &= \{(j,k):~~~j\geq r,~~k\geq r,~~~j+k\leq l\}\\
R_2 &= \{(j,k):~~~j,k< r,~~~r\leq j+k \leq l\}\\
R_3 &= \{ (j,k):~~~1 \leq j \leq r-1,k\geq r, j+k\leq l\}
\end{align*}
with the sums equal to zero if the associated region is empty.
\end{lemma}
\begin{proof}
Using the symmetry of collision kernel and distribution function and \eqref{FDNLBE}, we have
\begin{align*}
\sum_{i=r}^l\mu_i \dot{w}_i^l =\frac{1}{2} \sum_{i=r}^l \sum_{i+1\leq j+k \leq l} \mu_i B_{j,k}^i a_{j,k}w_j^l w_k^l -\frac{1}{2}\sum_{j=r}^{l-1} \mu_j \sum_{k=r}^{l-j} a_{j,k}w_j^l w_k^l-\frac{1}{2}\sum_{k=r}^{l-1} \mu_k \sum_{j=r}^{l-k} a_{j,k}w_j^l w_k^l.
\end{align*}
The result is obtained by grouping the above terms into common regions in $j-k$ space.
\end{proof}
\begin{lemma}\label{LER}
The system \eqref{FDNLBE}--\eqref{FDNLBEIC} has a unique solution for $t \geq 0$ with $w_i^l(t)\geq 0$, $1\leq i \leq l$, and
$$ \sum_{i=1}^l i w_i^l(t) = \sum_{i=1}^l i w_i^l(0).$$
\end{lemma}
\begin{proof}
Since \eqref{FDNLBE} is a finite-dimensional system with a polynomial right-hand side, the existence of local solutions follows from the Cauchy–Lipschitz theorem, while the non-negativity of each $w_i^l(t)$ may be proved similarly to the corresponding result in \cite{BALL 86}. The fact that $\sum_{i=1}^l i w_i^l(t) $ is constant is obtained by putting $\mu_i= i$ and $r=1$ in Lemma \ref{LEMMAREG}, and global existence follows from the bounds $0\leq w_i^l(t) \leq \sum_{i=1}^l iw_i^l(0)$.
\end{proof}
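For completeness, the computation behind the conservation of the truncated mass is worth recording: taking $\mu_i = i$ and $r=1$ in \eqref{GME}, the regions $R_2$ and $R_3$ are empty, and the mass condition \eqref{LMC} makes the remaining bracket vanish:

```latex
\frac{d}{dt}\sum_{i=1}^{l} i\, w_i^l
 = \frac{1}{2}\sum_{\substack{j,k \geq 1 \\ j+k \leq l}}
   \Big(\sum_{i=1}^{j+k-1} i\, B_{j,k}^i - j - k\Big) a_{j,k}\, w_j^l w_k^l
 = 0 .
```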
\begin{lemma}\label{LEMMALEQ}
Assume that $ a_{i,j} \leq Aij$ for all $i,j \geq 1$ where $A$ is a constant. Let $w^l$ be a solution of \eqref{FDNLBE}--\eqref{FDNLBEIC} and let $ \varrho^l(0) =\sum_{i=1}^l i w_i^l(0).$ Then
\begin{align*}
\frac{d}{dt}\Bigg\{e^{-t} \Big[\sum_{i=r}^l i w_i^l(t) + 2r A\varrho^l(0)^2\Big]\Bigg\}\leq 0.
\end{align*}
\end{lemma}
\begin{proof}
From Lemma \ref{LEMMAREG},
\begin{align*}
\frac{d}{dt}\sum_{i=r}^l i w_i^l(t) \leq \frac{1}{2}\sum_{R_2} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l + \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} i B_{j,k}^i -k \Big) a_{j,k} w_j^l w_k^l.
\end{align*}
Hence
\begin{align*}
\frac{d}{dt}\Bigg\{e^{-t} \Big[\sum_{i=r}^l i w_i^l(t) &+ 2r A\varrho^l(0)^2\Big]\Bigg\}= e^{-t} \Bigg[ \frac{1}{2}\sum_{R_2} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i \Big) a_{j,k} w_j^l w_k^l \\
&+ \sum_{R_3} \Big(\sum_{i=r}^{j+k-1} i B_{j,k}^i -k \Big) a_{j,k} w_j^l w_k^l-\sum_{i=r}^l i w_i^l(t) - 2r A\varrho^l(0)^2\Bigg]\\
&\leq A e^{-t}\Bigg[\frac{1}{2}\sum_{R_2} (j+k) jk w_j^l w_k^l +\sum_{R_3} j^2k w_j^l w_k^l - 2r \varrho^l(0)^2\Bigg]\\
& \leq 0.
\end{align*}
The result follows.
\end{proof}
Let $T \in (0,\infty]$ be given, assume that \eqref{NMT} holds, let $w$ be a solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$, and let $(\psi_i)$ be a sequence of real numbers. Then for $1 \leq l<\infty$ and $0\leq t_1<t_2 <T$, the following moment equation holds
\begin{align}
\sum_{i=1}^l \psi_i (w_i(t_2)- w_i(t_1))=& \frac{1}{2} \int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=1}^l \Big( \sum_{i=1}^{l} \psi_iB_{j,k}^i - \psi_j -\psi_k \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=l+1}^{\infty} \Big( \sum_{i=1}^{l} \psi_iB_{j,k}^i - \psi_j \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=l+1}^{\infty} \sum_{i=1}^l \psi_i B_{j,k}^ia_{j,k} w_j(s) w_k(s) ds. \label{MCE}
\end{align}
Next, we shall prove the existence of solutions to \eqref{NLDCBE}--\eqref{NLDCBEIC} in $Y_1^+$ under some mild conditions on the collision kernel. Our existence result is as follows:
\begin{theorem}\label{MAINTHEOREM}
Assume $a_{i,j}\leq Aij$ for some positive constant $A$ and all positive integers $i$ and $j$. Let $w_0\in Y_1^+$ and let the distribution function satisfy \eqref{NMT} and \eqref{FNP}. Then there is at least one solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} with initial condition $w(0) = w_0$, defined on $[0, T)$ for some $T \in (0,+\infty]$. If, in addition, for all $0\leq t_1 <t_2 \leq T$, the condition
\begin{align}
\int_{t_1}^{t_2} \sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j(s) w_k(s) ds <+\infty\label{IP}
\end{align}
holds, then $w$ is mass-conserving, i.e.,
\begin{align}
\|w(t)\|_{1} = \|w_0\|_{1}, \hspace{.3cm} t \in [0,T). \label{GMC}
\end{align}
\end{theorem}
\begin{proof}
Under the condition \eqref{NMT}, Lemma \ref{LEMMAREG} gives
\begin{align*}
\frac{d}{dt}\sum_{i=r}^l \mu_i w_i^l(t) =& \sum_{j=r}^l \sum_{k=1}^{l-j}\Big( \sum_{i=r}^{j-1} \mu_i b_{i,j;k} -\mu_j\Big)a_{j,k} w_j^l w_k^l + \sum_{j=1}^{r-1}\sum_{k=r}^{l-j} \Big(\sum_{i=r}^{k-1} \mu_i b_{i,k;j} -\mu_k \Big) a_{j,k} w_j^l w_k^l.
\end{align*}
By putting $\mu_i = i$, and using \eqref{LMC1}, we immediately conclude that,
\begin{align*}
\frac{d}{dt} \sum_{i=r}^l i w_i^l \leq 0.
\end{align*}
Thus, we have
\begin{align}\label{PMOMNT}
\sum_{i=r}^{l} i w_i^l \leq \sum_{i=r}^l i w_{0i} \leq \sum_{i=r}^{\infty} i w_{0i} \leq \sum_{i=1}^{\infty} iw_{0i}=\|w_0\|_1.
\end{align}
Fix $T \in(0,\infty)$. Consider now $i\geq 1$ and $l\geq i$. It follows from Lemma \ref{LER},
\eqref{ASYMM}, \eqref{QUADGROWTH} and \eqref{PMOMNT} that the $i$-th component $w_i^l$ of the solution to \eqref{FDNLBE}--\eqref{FDNLBEIC} satisfies
\begin{align}\label{DERVBOUND}
\Big|\frac{dw_i^l}{dt}\Big| \leq& \frac{A \beta}{2} \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} (j-k)k w_{j-k}^l w_k^l +A \sum_{j=1}^{l-i} i jw_i^l w_j^l \nonumber \\
&\leq A(\beta+1) \|w_0\|_1^2.
\end{align}
As a result of \eqref{PMOMNT} and \eqref{DERVBOUND}, for each fixed $i$ the sequence $(w_i^l)_{l\geq i}$ is bounded in $\mathcal{C}^1([0,T])$ and thus relatively compact in $\mathcal{C}([0,T])$. Therefore, by the Arzel\`a--Ascoli theorem and a diagonal extraction, there exists a subsequence (not relabelled) such that, for each fixed $i$, $w_i^l(\cdot)$ converges to a continuous function $w_i(\cdot)$ on $[0,T]$,
\begin{align}
w_i^l(t) \longrightarrow w_i(t), \hspace{.3cm} \text{as} \hspace{.3cm} l \to \infty ,~~~\forall t\in [0,T],~~~ \forall i \in \mathbb{N}. \label{LIMITw}
\end{align}
But then, for each $q \in \mathbb{N}$, and for each $t \in [0, T]$,
\begin{align*}
\sum_{i=1}^q iw_i^l(t) \longrightarrow \sum_{i=1}^q i w_i(t), ~~~~\text{as}~~l \to \infty.
\end{align*}
and therefore, by \eqref{PMOMNT}, for any such $q$,
\begin{align}
\sum_{i=1}^q iw_i(t) \leq \|w_0\|_1. \label{QBOUND}
\end{align}
By letting $q \to \infty$, we obtain
\begin{align}\label{LIMITBOUND}
\sum_{i=1}^{\infty} iw_i(t) \leq \|w_0\|_1.
\end{align}
Since Lemma \ref{LER} implies $w_i(t) \geq 0$, this proves not only that $w(t) \in Y_1^+$ for each $ t\in [0,T]$, but also that the first condition of Definition \ref{DEF1} is satisfied.
We shall show that the limit function $w_i$ solves the system \eqref{NLDCBE}--\eqref{NLDCBEIC}. To achieve this result, we shall pass to the limit as $l\to \infty$ in the equation for $w^l_i$
\begin{align*}
w_i^l(t) = w_{0i} + \frac{1}{2}\int_0^t \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds-\int_0^t \sum_{j=1}^{l}a_{i,j}w_i^l(s) w_j^l(s) ds.
\end{align*}
Hence, we need to prove that for all $t\in [0,T]$,
\begin{align}\label{LIMIT1}
\int_0^t \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds \xrightarrow[]{l \to \infty}\int_0^t \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds,
\end{align}
and
\begin{align}\label{LIMIT2}
\int_0^t \sum_{j=1}^{l-i}a_{i,j}w_i^l(s) w_j^l(s) ds \xrightarrow[]{l \to \infty} \int_0^t \sum_{j=1}^{\infty}a_{i,j}w_i(s) w_j(s) ds.
\end{align}
To begin, we prove that the right-hand side of \eqref{LIMIT1} is well defined. Let $p\geq i+1$ be a fixed positive integer. Recalling the definition of $(w_i)_{i\in \mathbb{N}}$ as the pointwise limit \eqref{LIMITw}, we know that
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l \xrightarrow[]{l \to \infty} \sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k
\end{align*}
and from \eqref{GME}, for all positive integers $l$ and $p$, we get
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l =& \sum_{i+1\leq j+k \leq p} a_{j,k} B_{j,k}^i w_j^l w_k^l \\
& \leq A \beta \sum_{j=1}^p \sum_{k=1}^{p+1-j} jk w_j^l w_k^l \leq A\beta \|w_0\|_1^2
\end{align*}
and thus also
\begin{align*}
\sum_{j=i+1}^p \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq A \beta \|w_0\|_1^2.
\end{align*}
Owing to the fact that the right-hand side is independent of $p$, and all the terms are non-negative, we have
\begin{align*}
\sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq A \beta \|w_0\|_1^2.
\end{align*}
Invoking the dominated convergence theorem, we can easily deduce that the right-hand side of \eqref{LIMIT1} is well defined for all $t \in (0, T)$, with $T <\infty$. The next step is to show that the limit in \eqref{LIMIT1} holds. Let $r$ be a fixed positive integer such that $i+1 \leq r < l < \infty$, then
\begin{align}
\Bigg| \int_0^t& \sum_{j=i+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds -\int_0^t \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds \Bigg|\leq \nonumber\\
&\leq \int_0^t \sum_{j=i+1}^{r} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i \big| w_{j-k}^l(s) w_k^l(s)-w_{j-k}(s) w_k(s)\big| ds +\label{FT} \\
&+\int_0^t \sum_{j=r+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds+ \int_0^t \sum_{j=r+1}^{\infty}\sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}(s) w_k(s) ds. \label{ST}
\end{align}
Our goal now is to demonstrate that the right-hand side of this inequality can be arbitrarily small when $l\to \infty$ by choosing a sufficiently large $r$.
Since each term in the sum converges pointwise to zero, the sum has a finite fixed number of terms, and its absolute value is bounded above by $2A\beta \|w_0\|_1^2$, thus it follows from the dominated convergence theorem that \eqref{FT} converges to zero as $l\to\infty$.
Define $\kappa_r= \|w_0\|_1 \sum_{i=r}^{\infty} i w_{0i}$. Clearly $\kappa_r \to 0$ as $ r\to \infty$. Now let us look at the integrals in \eqref{ST}. From \eqref{PMOMNT}, we infer that
\begin{align*}
\int_0^t \sum_{j=r+1}^{l} \sum_{k=1}^{j-1} a_{j-k,k}& B_{j-k,k}^i w_{j-k}^l(s) w_k^l(s) ds= \int_0^t \sum_{r+1\leq j+k\leq l} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds \\
& =\int_0^t \sum_{k=1}^{l-1} \sum_{j=r+1-k}^{l-k} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds +\int_0^t \sum_{j=r}^{l-1} \sum_{k=1}^{l-j} a_{j,k} B_{j,k}^i w_j^l(s) w_k^l(s) ds \\
&\leq 2A \beta \int_0^t \Big(\sum_{k=1}^{l-1} kw_k^l(s) \Big) \Big(\sum_{j=r}^{l-1} j w_{j}^l(s)\Big) ds \\
& \leq 2A \beta \int_0^t \kappa_r ds.
\end{align*}
Therefore, the first integral in \eqref{ST} can be made arbitrarily small by choosing $r$ large enough. Analogously, we prove the result for the second integral. For all $i+1 \leq r <q$ we have
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k}^l w_k^l \xrightarrow[]{l\to \infty}\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k.
\end{align*}
By the preceding computation, the sum on the left-hand side is bounded by $2A\beta \kappa_r$, and so we also get
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq 2A \beta \kappa_r
\end{align*}
for all $q$. Since this bound is uniform in $q$, we have
\begin{align*}
\sum_{j=r+1}^{q} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \xrightarrow[]{q \to \infty} \sum_{j=r+1}^{\infty} \sum_{k=1}^{j-1} a_{j-k,k} B_{j-k,k}^i w_{j-k} w_k \leq 2A \beta\kappa_r.
\end{align*}
As a consequence of the dominated convergence theorem, the second integral in \eqref{ST} can be made arbitrarily small by choosing sufficiently large $r$ and $l$. This completes the proof of \eqref{LIMIT1}.
Our next step is to show that \eqref{LIMIT2} holds. Let $r$ be a fixed positive integer such that $1 \leq r+i+1< l < \infty$. Then
\begin{align}
\Bigg|\int_0^t &\sum_{j=1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s)ds - \int_0^t \sum_{j=1}^{\infty} a_{i,j} w_i(s) w_j(s) ds\Bigg| \leq \nonumber\\
&\leq \int_0^t \sum_{j=1}^r a_{i,j} \big|w_i^l(s) w_j^l(s)- w_i(s) w_j(s)\big| ds+\label{FFT}\\
&+ \int_0^t \sum_{j=r+1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s) ds +\int_0^t \sum_{j=r+1}^{\infty} a_{i,j} w_i(s) w_j(s) ds \label{SST}
\end{align}
and our goal is to show that, given a sufficiently large value of $r$, the right-hand side of the above inequality can be made arbitrarily small when $l\to \infty$.
We next infer from \eqref{PMOMNT} that
\begin{align}
\int_0^t \sum_{j=r+1}^{l-i} a_{i,j} w_i^l(s) w_j^l(s) ds \leq&~~~~ A \int_0^t \sum_{j=r+1}^l i w_i^l(s) j w_j^l(s) ds \nonumber \\
\leq &~~~~ A \int_0^t \|w_0\|_1 \sum_{j=r+1}^l j w_j^l(s) ds \leq A\int_0^t \kappa_r ds \nonumber \\
\leq & ~~~~AT\kappa_r
\end{align}
and so the first integral in \eqref{SST} can be made arbitrarily small by choosing $r$ sufficiently large. For the second integral, the result is proved in an analogous way. For all $1 \leq r < q$, we have
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i^l w_j^l \xrightarrow[]{l \to \infty} \sum_{j=r+1}^{\infty} a_{i,j} w_i w_j.
\end{align*}
Using \eqref{PMOMNT}, we notice that the sum in the left-hand side is bounded by $A \kappa_r$, which implies that
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i w_j \leq A \kappa_r
\end{align*}
for all $q$. Since this bound is uniform in $q$, we have
\begin{align*}
\sum_{j=r+1}^q a_{i,j} w_i w_j \xrightarrow[]{q \to \infty} \sum_{j=r+1}^{\infty} a_{i,j} w_i w_j.
\end{align*}
Therefore, using the dominated convergence theorem, the second integral in \eqref{SST} can be made arbitrarily small by choosing $r$ and $l$ large enough. We have thus shown that $w=(w_i)$ is a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\par
To complete the proof of Theorem \ref{MAINTHEOREM}, it remains to prove that $w$ is mass-conserving. Taking $\psi_i=i$ in \eqref{MCE}, we obtain
\begin{align}
\sum_{i=1}^l i (w_i(t_2)- w_i(t_1))=& \frac{1}{2} \int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=1}^l \Big( \sum_{i=1}^{l} iB_{j,k}^i - j -k \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^l \sum_{k=l+1}^{\infty} \Big( \sum_{i=1}^{l} iB_{j,k}^i - j \Big) a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i B_{j,k}^ia_{j,k} w_j(s) w_k(s) ds. \label{FSFE}
\end{align}
On the one hand, \eqref{LMC1} and \eqref{NMT} entail that
\begin{align}
\sum_{i=1}^l i B_{j,k}^i = \sum_{i=1}^j ib_{i,j;k} + \sum_{i=1}^k ib_{i,k;j}=j+k \label{NMT1}
\end{align}
for $j,k \in\{1,2,\cdots,l\}$, and hence the first term on the right-hand side of \eqref{FSFE} is equal to zero. On the other hand, using again \eqref{LMC1} and \eqref{NMT}, we obtain
\begin{align}
\sum_{i=1}^l i B_{j,k}^i = \sum_{i=1}^j ib_{i,j;k} + \sum_{i=1}^l ib_{i,k;j}\label{NMT2}
\end{align}
for $j \in\{1,2,\cdots,l\}$ and $k\geq l+1$, which provides a non-negative lower bound for the second term on the right-hand side of \eqref{FSFE}. Therefore \eqref{FSFE} yields
\begin{align*}
\sum_{i=1}^l i (w_i(t_2)- w_i(t_1))\geq& \frac{1}{2}\int_{t_1}^{t_2} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds \\
&+ \frac{1}{2}\int_{t_1}^{t_2} \sum_{k=1}^{l} \sum_{j=l+1}^{\infty} \sum_{i=1}^l i b_{i,j;k}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
In particular,
\begin{align*}
\sum_{i=1}^l i w_i(t_2)\geq \sum_{i=1}^l i w_i(t_1)+ \frac{1}{2}\int_{t_1}^{t_2} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
Taking $t_1=0$ and $t_2=t$, we have
\begin{align*}
\sum_{i=1}^l i w_i(t) \geq \sum_{i=1}^l i w_{0i}+ \frac{1}{2}\int_{0}^{t} \sum_{j=1}^{l} \sum_{k=l+1}^{\infty} \sum_{i=1}^l i b_{i,k;j}a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
The second term on the right-hand side of the above inequality tends to zero as $l\to\infty$ as a consequence of \eqref{IP} and the dominated convergence theorem. Hence, letting $l \to \infty$, we have
\begin{align}
\sum_{i=1}^{\infty} i w_i(t) \geq \sum_{i=1}^{\infty} i w_{0i}. \label{UBMC}
\end{align}
Combining \eqref{LIMITBOUND} and \eqref{UBMC}, we get the mass conservation property of the solution.
\end{proof}
Next, we prove that the sequence $w^l$ of solutions to the truncated system that converges to the solution $w$ of \eqref{NLDCBE}--\eqref{NLDCBEIC} indeed does so in the strong topology of $Y_1$, uniformly for $t$ in compact subsets of $[0,\infty)$.
\begin{corollary}\label{COR1}
Let $w^l$ be the pointwise convergent subsequence of solutions to \eqref{FDNLBE}--\eqref{FDNLBEIC}. Then $w^l \longrightarrow w$ in $Y_1$ uniformly on compact subsets of $[0,\infty)$.
\end{corollary}
\begin{proof}
First we prove that, for each $i$, $w_i^l(t) \to w_i(t)$ uniformly on compact subsets of $[0,+\infty)$. For this it is clearly sufficient to show that for each $r > 1$,
\begin{align*}
\Delta_r^l(t):= e^{-t} \Big[\varrho^l(0) -\sum_{i=1}^{r-1} iw_i^l(t) + 4r A\varrho^l(0)^2\Big]
\end{align*}
converges to
\begin{align*}
\Delta_r(t):= e^{-t} \Big[\varrho(0) -\sum_{i=1}^{r-1} iw_i(t) + 4r A\varrho(0)^2\Big]
\end{align*}
uniformly on compact subsets of $[0, \infty)$, where $\varrho(0)=\sum_{i=1}^{\infty} i w_i(0)$. But this follows from the pointwise convergence of $\Delta_r^l(t)$ to the continuous function $\Delta_r(t)$ and the fact that by Lemmas \ref{LER}, \ref{LEMMALEQ},
\begin{align*}
\frac{d}{dt}\Delta_r^l(t) \leq 0, \hspace{.3cm}t\in[0,T),~~~l\geq r.
\end{align*}
Let $I\subset [0,\infty)$ be compact and $t_l\to t$ in $I$, then
\begin{align*}
\lim_{l\to \infty} \|w^l(t_l) \|_1= \lim_{l\to \infty} \|w(t_l) \|_1= \|w(t) \|_1
\end{align*}
which implies that $w^l \to w$ in $\mathcal{C}(I,Y_1)$, as required.
\end{proof}
In the next section, the issue we consider is whether, given $w^0 \in Y_1^+$ such that $ \sum_{i=1}^{\infty} i^{\alpha}w_i^0 <\infty $ for some $\alpha >1 $, the solution $w$ to \eqref{SNLBE}--\eqref{SNLBEIC} constructed in Theorem \ref{MAINTHEOREM} enjoys the same property throughout the time evolution, that is, $ \sum_{i=1}^{\infty} i^{\alpha}w_i(t) <\infty $ for $t\in (0,\infty)$.
\section{Propagation of moments, Uniqueness and Continuous Dependence on Initial Data}\label{PMCDID}
\begin{prop}\label{PMPROP}
Let $T\in (0, \infty)$ and assume that the assumptions \eqref{ASYMM}--\eqref{QUADGROWTH} and \eqref{NMT} are fulfilled. Further, assume that $ w^0 \in Y_1^+$ satisfies
\begin{align}
\sum_{i=1}^{\infty} i^{\alpha} w_i^0 < \infty \label{ALPHAMOMNTIN}
\end{align}
for some $\alpha >1 $. Then the solution $w$ to \eqref{SNLBE}--\eqref{SNLBEIC} constructed in Theorem \ref{MAINTHEOREM} on $[0,T)$ satisfies
\begin{align}
\sup_{t\in[0,T]} \sum_{i=1}^{\infty} i^{\alpha} w_i(t) <\infty\label{ALPHAMOMNT}
\end{align}
for each $T>0$.
\end{prop}
\begin{proof}
We know from \eqref{LIMITw} that
\begin{align}
\lim_{l\to \infty } w_i^l(t) = w_i(t)
\end{align}
for each $t \in [0, + \infty)$ and $i \geq 1$, where $w^l$ denotes the solution to \eqref{FDNLBE}--\eqref{FDNLBEIC} given by Lemma \ref{LER}. On taking $\mu_i= i^{\alpha}$ and $r=1$ in \eqref{GME}, we get
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=1}^{l-i} \Big( \sum_{s=1}^{i+j-1} s^{\alpha} B_{i,j}^s - i^{\alpha} - j^{\alpha} \Big) a_{i,j} w_i^l w_j^l.
\end{align*}
Using \eqref{NMT}, above equation reduces to
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=1}^{l-i} \Big( \sum_{s=1}^{i-1} s^{\alpha} b_{s,i;j} +\sum_{s=1}^{j-1} s^{\alpha} b_{s,j;i} - i^{\alpha} - j^{\alpha} \Big) a_{i,j} w_i^l w_j^l.
\end{align*}
Since $s^{\alpha} \leq s\, i^{\alpha-1}$ for $1\leq s \leq i$ and $\alpha>1$, it follows from \eqref{LMC1} that
$$ \sum_{s=1}^{i-1} s^{\alpha} b_{s,i;j} \leq i^{\alpha-1}\sum_{s=1}^{i-1} s\, b_{s,i;j} = i^{\alpha},~~~ i\geq 2,~j\geq 1 \hspace{.5cm}\text{and similarly} \hspace{.5cm}\sum_{s=1}^{j-1} s^{\alpha} b_{s,j;i} \leq j^{\alpha},~~~ j\geq 2,~i\geq 1.$$
Hence
\begin{align*}
\frac{d}{dt} \sum_{i=1}^l i^{\alpha} w_i^l \leq 0,
\end{align*}
which implies
\begin{align*}
\sum_{i=1}^l i^{\alpha} w_i^l \leq \sum_{i=1}^l i^{\alpha} w_{0i} \leq \sum_{i=1}^{\infty} i^{\alpha} w_{0i}.
\end{align*}
With the help of \eqref{ALPHAMOMNTIN} we may pass to the limit as $l \to \infty$ in the above inequality and obtain
\begin{align*}
\sum_{i=1}^{\infty} i^{\alpha} w_i(t) \leq \sum_{i=1}^{\infty} i^{\alpha} w_i^0.
\end{align*}
This concludes the proof of Proposition \ref{PMPROP}.
\end{proof}
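As a quick numerical sanity check of the moment estimate, the sketch below simulates a small finite system in the spirit of \eqref{FDNLBE} with an illustrative kernel of our own choosing (not part of the model above): every collision of an $i$- and a $j$-cluster with $i,j\geq 2$ shatters both into $i+j$ monomers, which satisfies local mass conservation and the no-mass-transfer condition \eqref{NMT}.

```python
def breakage_rhs(w, a):
    # Finite nonlinear breakage system (illustrative kernel, our assumption):
    # a collision of an i-cluster and a j-cluster (i, j >= 2) shatters both
    # into i + j monomers; w[k] is the density of (k+1)-clusters.
    n = len(w)
    dw = [0.0] * n
    for i in range(2, n + 1):  # loss of clusters of size >= 2 by collision
        dw[i - 1] -= w[i - 1] * sum(a(i, j) * w[j - 1] for j in range(2, n + 1))
    gain = 0.0                 # every fragment is a monomer
    for p in range(2, n + 1):
        for q in range(2, n + 1):
            gain += 0.5 * (p + q) * a(p, q) * w[p - 1] * w[q - 1]
    dw[0] += gain
    return dw


def simulate(w0, a, dt=1e-3, steps=2000):
    # Plain explicit Euler; adequate for a sanity check on a tiny system.
    w = list(w0)
    for _ in range(steps):
        rhs = breakage_rhs(w, a)
        w = [wi + dt * ri for wi, ri in zip(w, rhs)]
    return w
```

Starting from pure $4$-clusters with $a_{i,j}\equiv 1$, the total mass $\sum_i i w_i$ stays constant while the second moment $\sum_i i^2 w_i$ decreases, in line with \eqref{ALPHAMOMNT}.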
Next, we impose a stronger assumption on the collision kernel, i.e.,
\begin{align}
a_{i,j} \leq A_{\gamma} (ij)^{\gamma}, \gamma\in [0,1]. \label{AGAMMA}
\end{align}
Now we establish the uniqueness result for \eqref{NLDCBE}--\eqref{NLDCBEIC}. This is achieved by assuming that the initial value problem has two solutions and demonstrating that they are equal. As in the usual coagulation fragmentation equations, this will be accomplished with the help of Gronwall's inequality. The proof involves slightly more restrictive constraints on the collision kernel and the initial condition than those used in the existence result.
\begin{prop}\label{UNIQPROP}
Assume that the assumptions \eqref{ASYMM}, \eqref{NMT} and \eqref{AGAMMA} are fulfilled.
Consider $w^0 \in Y_1^+$ such that
\begin{align}
\sum_{i=1}^{\infty} i^{1+\gamma} w_i^0 <\infty. \label{GAMINIT}
\end{align}
Then there is a unique solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,+\infty)$ satisfying
\begin{align}
\sup_{t\in [0,T]} \sum_{i=1}^{\infty} i^{1+\gamma} w_i(t) <\infty \label{GAMMMNT}
\end{align}
for each $T\in(0,+\infty)$.
\end{prop}
\begin{proof}
Since $\gamma \in [0, 1]$, it follows from \eqref{AGAMMA} that $a_{i,j}$ satisfies \eqref{QUADGROWTH},
and the existence of a solution to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,+\infty)$ with the properties
stated in Proposition \ref{UNIQPROP} is a consequence of Theorem \ref{MAINTHEOREM} and Proposition \ref{PMPROP}.
Suppose the initial value problem for \eqref{NLDCBE}--\eqref{NLDCBEIC} with the initial condition $w(0) = w_0 \in Y_1^+$ satisfying \eqref{GAMINIT} has two solutions, $w$ and $\hat{w}$, on $[0,+\infty)$ satisfying the property \eqref{GAMMMNT}. We shall prove that $w \equiv \hat{w}$ by showing that the sum $\sum_{i=1}^{\infty} i |\eta_i|$ is identically zero, where for $i \geq 1$ we put
\begin{align*}
\eta(t) = w(t)- \hat{w}(t) \hspace{.4cm} \text{and} \hspace{.4cm}\theta_i = \sgn(\eta_i),
\end{align*}
where $\sgn(h) = h /|h| $ if $ h\in \mathbb{R} \setminus \{0\}$ and $\sgn(0) =0$.
Now, we infer from \eqref{MCE} that
\begin{align}
\sum_{i=1}^l i |\eta_i(t)| = \int_0^t \sum_{m=1}^3 \Delta_m^l(s) ds, \label{DEL}
\end{align}
where
\begin{align}
\Delta_1^l = \frac{1}{2} \sum_{i=1}^l \sum_{j=l+1-i}^{l} \Big( \sum_{s=1}^{l} s\theta_s B_{i,j}^s - i\theta_i - j\theta_j\Big) a_{i,j} (w_i w_j -\hat{w}_i\hat{w}_j),
\end{align}
\begin{align}
\Delta_2^l = \sum_{i=1}^l \sum_{j= l+1}^{\infty}\Big(\sum_{s=1}^l s\theta_s B_{i,j}^s -i\theta_i\Big) a_{i,j} (w_iw_j - \hat{w}_i\hat{w}_j),
\end{align}
\begin{align}
\Delta_3^l = \frac{1}{2} \sum_{i=l+1}^{\infty}\sum_{j=l+1}^{\infty} \sum_{s=1}^l s\theta_s B_{i,j}^s a_{i,j} (w_{i} w_j - \hat{w}_{i} \hat{w}_j).
\end{align}
From \eqref{NMT1}, it follows that
\begin{align*}
\Bigg( \sum_{s=1}^{i+j-1} s \theta_s B_{i,j}^s - i\theta_i - j\theta_j\Bigg) \eta_i&= \Bigg( \sum_{s=1}^{i-1} s\theta_s \theta_i b_{s;i,j} +\sum_{s=1}^{j-1} s\theta_s \theta_i b_{s;j,i} - i -j \theta_i \theta_j\Bigg) |\eta_i| \\
& \leq 2j |\eta_i|,
\end{align*}
since $\theta_i \eta_i = |\eta_i|$, while $\sum_{s=1}^{i-1} s b_{s;i,j} \leq i$ and $\sum_{s=1}^{j-1} s b_{s;j,i} \leq j$ by \eqref{LMC}.
The first term $\Delta_1^l$ can thus be estimated as follows:
\begin{align*}
\Delta_1^l& \leq \sum_{i=1}^{l} \sum_{j=1}^{l} a_{i,j} (jw_j|\eta_i| + i \hat{w}_i|\eta_j|),
\end{align*}
hence by \eqref{ASYMM} and \eqref{AGAMMA}, we have
\begin{align}
\Delta_1^l \leq A_{\gamma} \Big( \sum_{i=1}^l i^{1+\gamma} (w_i + \hat{w}_i)\Big) \sum_{j=1}^l j |\eta_j|. \label{DEL1}
\end{align}
Next, we deduce from \eqref{NMT2} and \eqref{AGAMMA} that
\begin{align*}
\int_0^t \Bigg| \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i+j) a_{i,j} w_i w_j \Bigg| ds \leq A_{\gamma} \int_0^t \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i^{1+\gamma} j^{\gamma}+j^{1+\gamma} i^{\gamma}) w_i w_j ds,
\end{align*}
and using \eqref{GAMMMNT}, we obtain
\begin{align*}
\lim_{l \to +\infty} \int_0^t \Bigg | \sum_{i=1}^l \sum_{j=l+1}^{\infty} (i+j) a_{i,j} w_i w_j \Bigg| ds =0,
\end{align*}
from which we conclude that
\begin{align}
\lim_{l\to \infty} \Delta_2^l =0. \label{DEL2}
\end{align}
In a similar vein, we can show that
\begin{align}
\lim_{l\to \infty} \Delta_3^l =0. \label{DEL3}
\end{align}
On substituting \eqref{DEL1}, \eqref{DEL2} and \eqref{DEL3} into \eqref{DEL}, we arrive at
\begin{align*}
\sum_{i=1}^{l} i |\eta_i(t)| \leq & A_{\gamma} \int_0^t \Big( \sum_{i=1}^{l} i |\eta_i(s)|\Big)\Big(\sum_{j=1}^{l} j^{1+\gamma} \big(w_j(s)+\hat{w}_j(s)\big)\Big) ds.
\end{align*}
Finally, we use Gronwall's lemma to complete the proof of Proposition \ref{UNIQPROP}.
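For completeness, the Gronwall step can be spelled out as follows (a sketch; $u_l$ and $C_T$ are our own shorthand). Letting $u_l(t) := \sum_{i=1}^{l} i |\eta_i(t)|$, the last inequality gives
\begin{align*}
u_l(t) \leq C_T \int_0^t u_l(s)\, ds, \qquad C_T := 2A_{\gamma} \sup_{s\in[0,T]} \sum_{j=1}^{\infty} j^{1+\gamma} \big(w_j(s)+\hat{w}_j(s)\big),
\end{align*}
where $C_T$ is finite by \eqref{GAMMMNT}. Since there is no additive constant ($\eta(0)=0$), Gronwall's inequality yields $u_l \equiv 0$ on $[0,T]$; letting $l \to \infty$ and then $T \to \infty$ gives $w \equiv \hat{w}$ on $[0,+\infty)$.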
\end{proof}
Next, we prove the following continuous dependence result with respect to the initial conditions:
\begin{prop}
Assume that the assumptions of Proposition \ref{UNIQPROP} hold and let $w$ and $\hat{w}$ be solutions of \eqref{NLDCBE}--\eqref{NLDCBEIC} with initial conditions $w(0)= w_0$ and $\hat{w}(0) = \hat{w}_0$ satisfying \eqref{GAMINIT}. Then, for each $t\geq 0$, there is a positive constant $\kappa(t,\|w_0\|_{1+\gamma},\|\hat{w}_0\|_{1+\gamma})$ such that
\begin{align}
\|w(t)- \hat{w}(t) \|_1 \leq \kappa(t,\|w_0\|_{1+\gamma},\|\hat{w}_0\|_{1+\gamma} )\|w_0- \hat{w}_0 \|_1.\label{CD1}
\end{align}
\end{prop}
\begin{proof}
Since $w$ and $\hat{w}$ are solutions of \eqref{NLDCBE}--\eqref{NLDCBEIC} having initial conditions $w_0$ and $\hat{w}_0$ respectively, we can write
\begin{align*}
w_i(t) = w_{0i} + \int_0^t \Big[ \frac{1}{2}\sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(s) w_k(s) -\sum_{j=1}^{\infty} a_{i,j} w_i(s)w_j(s) \Big] ds,
\end{align*}
\begin{align*}
\hat{w}_i(t) = \hat{w}_{0i} + \int_0^t \Big[\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} \hat{w}_{j-k}(s) \hat{w}_k(s) -\sum_{j=1}^{\infty} a_{i,j} \hat{w}_i(s)\hat{w}_j(s) \Big] ds,
\end{align*}
and defining $\zeta(t) =w(t)-\hat{w}(t) $ and $\psi_i= i\sgn(\zeta_i)$, we perform the same estimates as in the proof of Proposition \ref{UNIQPROP} to obtain,
\begin{align*}
\sum_{i=1}^{l} i |\zeta_i(t)| \leq & \sum_{i=1}^{l} i |\zeta_i(0)| + A_{\gamma} \int_0^t\Big( \sum_{i=1}^{l} i |\zeta_i(s)|\Big) \Big(\sum_{j=1}^{l} j^{1+\gamma} \big(w_j(s)+\hat{w}_j(s)\big)\Big) ds.
\end{align*}
By using Gronwall's lemma, we obtain the estimate \eqref{CD1}.
\end{proof}
In the following section, we will demonstrate that the solution to the non-linear breakage model is first-order differentiable.
\section{Differentiability of the solutions}\label{DOS}
The next theorem is the main result needed to prove the differentiability of the solution.
\begin{theorem}
Let \eqref{IP} and \eqref{AGAMMA} hold and let $w$ be a solution of \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,T)$, where $0<T\leq \infty$. Then, for every $r\in \mathbb{N}$,
\begin{align}
\sum_{i=r}^{\infty} i w_i(t_2)-\sum_{i=r}^{\infty} i w_i(t_1)=&\frac{1}{2}\int_{t_1}^{t_2} \sum_{j=r}^{\infty} \sum_{k=r}^{\infty} \Bigg( \sum_{i=r}^{j+k-1} iB_{j,k}^i - j -k \Bigg)a_{j,k} w_j(s) w_k(s) ds \nonumber \\
&+ \frac{1}{2}\int_{t_1}^{t_2}\sum_{j=1}^{r-1}\sum_{k=1}^{r-1} \sum_{i=r}^{j+k-1} iB_{j,k}^i a_{j,k} w_j(s) w_k(s) ds\nonumber \\
&+\int_{t_1}^{t_2}\sum_{j=1}^{r-1}\sum_{k=r}^{\infty} \Big(\sum_{i=r}^{j+k-1} iB_{j,k}^i -k\Big) a_{j,k} w_j(s) w_k(s) ds. \label{TAILEQ}
\end{align}
\end{theorem}
\begin{proof}
Let $1 \leq r \leq l$. On multiplying each equation in \eqref{IVOE} by $\psi_i$ and taking summation over $i$ from $r$ to $l$, we obtain
\begin{align}
\sum_{i=r}^l \psi_i &(w_i(t_2)- w_i(t_1))= \int_{t_1}^{t_2}\Bigg[ \frac{1}{2}\sum_{S_1} \Bigg( \sum_{i=r}^{j+k-1} \psi_iB_{j,k}^i - \psi_j -\psi_k \Bigg)+\frac{1}{2}\sum_{S_2} \sum_{i=r}^{j+k-1} \psi_iB_{j,k}^i \nonumber \\
&+ \sum_{S_3} \Bigg(\sum_{i=r}^{j+k-1} \psi_i B_{j,k}^i - \psi_k\Bigg)+ \sum_{S_4}\Bigg(\frac{1}{2}\sum_{i=r}^l \psi_i B_{j,k}^i-\psi_j\Bigg) + \frac{1}{2}\sum_{S_5}\sum_{i=r}^l \psi_i B_{j,k}^i \nonumber\\
&+\frac{1}{2}\sum_{S_6}\sum_{i=r}^l \psi_i B_{j,k}^i\Bigg] a_{j,k} w_j(s) w_k(s) ds\label{GFSFE}
\end{align}
where
\begin{align*}
S_1 &= \{(j,k):~~~j,k\geq r,~~~j+k\leq l\}\\
S_2 &= \{(j,k):~~~j,k< r,~~~r\leq j+k \leq l\}\\
S_3 &= \{ (j,k):~~~1 \leq j \leq r-1,k\geq r, j+k\leq l\} \\
S_4 &= \{ (j,k):~~~r \leq j \leq l, j+k> l\}\\
S_5 &= \{ (j,k):~~~1\leq j\leq r-1, k \geq l-j+1\}\\
S_6 &= \{ (j,k):~~~j\geq l+1, k \geq 1\}
\end{align*}
with the sums equal to zero if the associated region is empty. (Note that $S_2$, $S_3$ and $S_5$ are empty if $r = 1$.)
On taking $\psi_i=i$ in \eqref{GFSFE}, we obtain
\begin{align*}
\sum_{i=r}^l i &(w_i(t_2)- w_i(t_1))= \int_{t_1}^{t_2}\Bigg[ \frac{1}{2}\sum_{S_1} \Bigg( \sum_{i=r}^{j+k-1} iB_{j,k}^i - j -k \Bigg)+\frac{1}{2}\sum_{S_2} \sum_{i=r}^{j+k-1} iB_{j,k}^i \nonumber \\
&+ \sum_{S_3} \Bigg(\sum_{i=r}^{j+k-1} i B_{j,k}^i - k\Bigg)+ \sum_{S_4}\Bigg(\frac{1}{2}\sum_{i=r}^l i B_{j,k}^i-j\Bigg) + \frac{1}{2}\sum_{S_5}\sum_{i=r}^l i B_{j,k}^i \nonumber\\
&+\frac{1}{2}\sum_{S_6}\sum_{i=r}^l i B_{j,k}^i\Bigg] a_{j,k} w_j(s) w_k(s) ds.
\end{align*}
Under the condition \eqref{IP}, the integrals whose regions of summation are $S_4$, $S_5$ and $S_6$ converge to zero as $l\to \infty$, and \eqref{TAILEQ} follows.
\end{proof}
In the following proposition, we will address the issue of the differentiability of solutions.
\begin{prop}\label{DIFFPROP}
Let $ a_{i,j}$ satisfy \eqref{ASYMM} and \eqref{AGAMMA}, and let the condition \eqref{IP} hold. Let $w=(w_i)$ be a solution of \eqref{NLDCBE} on some interval $[0,T]$, where $0<T\leq \infty$, with an initial condition whose $(1+\gamma)$-th moment is finite. Then the series $\frac{1}{2} \sum_{j=i+1}^{\infty} \sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k}(t) w_k(t)$ and $\sum_{j=1}^{\infty} a_{i,j} w_i(t) w_j(t)$ are absolutely continuous on the compact sub-intervals of $[0, T]$.
\end{prop}
\begin{proof}
It is enough to show the boundedness of $\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j w_k $ for \eqref{IP} to hold. Since
\begin{align*}
\sum_{j=1}^{\infty} \sum_{k=1}^{\infty} (j+k) a_{j,k} w_j w_k\leq& 2A_{\gamma}\sum_{j=1}^{\infty} j^{1+\gamma} w_j \sum_{k=1}^{\infty} k w_k\\
&\leq 2A_{\gamma} \|w_0\|_{1+\gamma} \|w_0\|_1,
\end{align*}
the condition \eqref{IP} holds for any $t_1, t_2 \in [0, T)$. Therefore, for $t\in [0,T]$, equation \eqref{TAILEQ} implies the uniform convergence of the series $\sum_{i=1}^{\infty} i w_i(t)$. Since the series $ \sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k $ is dominated by this series, we conclude its uniform convergence as well. Now the boundedness of $w_i(t)$ ensures the absolute continuity of $\sum_{j=i+1}^{\infty}\sum_{k=1}^{j-1} B_{j-k,k}^i a_{j-k,k} w_{j-k} w_k $. Also, the series $\sum_{j=1}^{\infty} a_{i,j} w_j(t)$ is dominated by $\sum_{i=1}^{\infty} i w_i(t)$, which yields its uniform convergence. Finally, we obtain the desired result using the boundedness of $w_i(t)$.
\end{proof}
Definition \ref{DEF1}(1), \eqref{IP} and Proposition \ref{DIFFPROP} ensure that the solution $w$ is differentiable in the classical sense on $[0, T)$.
\section{Some Invariance properties of solutions}\label{IPOS}
It is natural to predict that, under the no-mass-transfer condition \eqref{NMT}, if there are no clusters larger than $m$ at the beginning of the physical process, then none will be generated afterwards. This will be established in the next proposition.
\begin{prop}
Assume that \eqref{NMT} holds and that the Cauchy problem for \eqref{NLDCBE}--\eqref{NLDCBEIC} has a unique solution. Then, for every $m\in \mathbb{N}$, the sets
\begin{align*}
Y^{\sharp m} := \{w \in Y_1^+ | w_i=0,~~~ \forall i>m \}
\end{align*}
are positively invariant for \eqref{NLDCBE}--\eqref{NLDCBEIC}.
\end{prop}
\begin{proof}
Let $w$ be a solution to \eqref{NLDCBE} such that $w(\tau) = w_0 \in Y^{\sharp m}$, for some $\tau \geq 0$. We know that \eqref{NLDCBE}--\eqref{NLDCBEIC} reduces to \eqref{SNLBE}--\eqref{SNLBEIC} when the condition \eqref{NMT} holds. Hence, let $w^m(\cdot)$ be the unique solution of the $m$-dimensional Cauchy problem
\begin{align*}
\dot{w}_i^m =& \sum_{j=i+1}^{m} \sum_{k=1}^{m} b_{i,j,k} a_{j,k} w_j^m w_k^m -\sum_{k=1}^{m} a_{i,k} w_i^m w_k^m,\\
w_i^m(\tau) &= w_{0i},
\end{align*}
for $i=1, \cdots, m$ (with the first sum defined to be zero if $i=m$.) Then the function $(w_1^m, w_2^m, \cdots, w_m^m, 0,0, \cdots)$ is a solution of the infinite dimensional system \eqref{SNLBE}--\eqref{SNLBEIC} and, by uniqueness, it must be the solution $w$. As a result, for all $t\geq \tau$, we have $w_i(t)=0 $ when $i=m+1, m+2, \cdots,$ that is, $w(t) \in Y^{\sharp m}$ for all $t\geq \tau$, proving the result.
\end{proof}
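This invariance can be illustrated numerically with a kernel of our own choosing (an assumption, not the general $B_{i,j}^s$): every collision of clusters of sizes $i,j\geq 2$ yields $i+j$ monomers, so the sketch below, started in $Y^{\sharp 3}$, never populates the sizes $i > 3$.

```python
def step(w, a, dt):
    # One explicit Euler step of a finite breakage system in which each
    # collision of an i- and a j-cluster (i, j >= 2) yields i + j monomers
    # (an illustrative kernel; w[k] is the density of (k+1)-clusters).
    n = len(w)
    dw = [0.0] * n
    for i in range(2, n + 1):
        dw[i - 1] -= w[i - 1] * sum(a(i, j) * w[j - 1] for j in range(2, n + 1))
    dw[0] += sum(0.5 * (p + q) * a(p, q) * w[p - 1] * w[q - 1]
                 for p in range(2, n + 1) for q in range(2, n + 1))
    return [wi + dt * di for wi, di in zip(w, dw)]


def run(w0, a, dt=1e-3, steps=1000):
    w = list(w0)
    for _ in range(steps):
        w = step(w, a, dt)
    return w
```

Starting with no clusters larger than $3$ in a system of maximal size $6$, the components $w_4, w_5, w_6$ remain identically zero while the total mass is conserved, as the proposition predicts.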
This invariance condition also appears in linear fragmentation equations: if the original cluster distribution contains no clusters larger than $m$, then they cannot be formed by fragmentation of the (smaller) ones that are already there.
\par
In the upcoming section, we will discuss the large-time behaviour of the solution; our result follows the proof of \cite[Proposition 4.1]{Laurencot 2001}, where the result has been proved for collision kernels with linear growth.
\section{On the large-time behaviour of solutions} \label{LTBOS}
The investigation of the large-time behaviour of solutions is carried out in this section. In this model, as previously stated, a cluster only forms smaller fragments after colliding. As a result, we anticipate that only $1$-clusters will be left in the long-time limit.
\begin{prop}
Let $a_{i,j} \leq A ij$ and let \eqref{LMC}, \eqref{NMT}, and \eqref{GMC} be satisfied. For $ w^0 \in Y^+$, there is a solution $w$ to \eqref{NLDCBE}--\eqref{NLDCBEIC} on $[0,\infty)$ and there is $w^{\infty} = (w_i^{\infty})\in Y^+$ such that
\begin{align}
\lim_{t \to \infty} \|w(t) - w^{\infty}\|_{Y} = 0 . \label{WINFTYLIM}
\end{align}
Moreover, if $i \geq 2$ is such that $a_{i,i}\neq 0 $ we have
\begin{align}
w_i^{\infty} = 0. \label{WINFZERO}
\end{align}
\end{prop}
\begin{remark}
In particular, if $a_{i,i} >0$ for each $i \geq 2$, then $w_i^{\infty} = 0 $ for every $i\geq 2$, and mass conservation together with \eqref{WINFTYLIM} entails that $w_1^{\infty} = \|w^0 \|_Y$.
\end{remark}
\begin{proof}
Considering the identity \eqref{MCE} with $\psi_i=i$, we obtain
\begin{align}
\sum_{i=1}^{l} i w_i(t_2) - \sum_{i=1}^l i w_i(t_1) =& \int_{t_1}^{t_2} \sum_{j=l+1}^{\infty} \sum_{k=1}^{\infty} \sum_{i=1}^l i b_{i,j,k} a_{j,k} w_j(s) w_k(s) ds \geq 0. \label{LT1}
\end{align}
The first consequence of \eqref{LT1} is that the function
\begin{align}
S_l: t \mapsto \sum_{i=1}^l i w_i(t) \hspace{.3cm} \text{is a non decreasing function on}\hspace{.2cm} [0,+\infty). \label{LT2}
\end{align}
Owing to \eqref{QBOUND}, the function $S_l$ is also bounded from above; hence it must converge to some constant $q_l^{\infty}\geq 0$. Since $\sum_{i=1}^{l-1} i w_i(t) \leq \sum_{i=1}^{l} i w_i(t)$ we have $q_{l}^{\infty}\geq q_{l-1}^{\infty}$. Then for all $l \in \mathbb{N}$ we have
\begin{align}
w_l(t) = \frac{1}{l}(S_l(t) -S_{l-1}(t)) \xrightarrow[]{t \to \infty} \frac{1}{l}\big(q_{l}^{\infty}- q_{l-1}^{\infty}\big)=:w_l^{\infty}. \label{LT3}
\end{align}
Furthermore, as $w(t)\in Y^+$ for each $t \geq 0$ the convergence \eqref{LT3} ensures that $w^{\infty}:=(w_l^{\infty})$ belongs to $Y^+$.
Also, \eqref{LT2} and \eqref{GMC} entail that
\begin{align*}
\sum_{i=l}^{\infty} i w_i(t) \leq \sum_{i=l}^{\infty} i w_i^0, \hspace{.2cm} l\geq 1, \hspace{.2cm} t\geq 0.
\end{align*}
This fact and \eqref{LT3} yield \eqref{WINFTYLIM}.\\
Finally, another consequence of \eqref{WINFTYLIM} and \eqref{LT1} is that
\begin{align*}
\int_0^{\infty} \sum_{j=l+1}^{\infty} \sum_{k=1}^{\infty} \sum_{i=1}^l i b_{i,j,k} a_{j,k} w_j(s) w_k(s) ds < \infty.
\end{align*}
Let $ i\geq 2$ such that $a_{i,i}>0$. Then the above estimate with $l=i-1$ and $j=k=i$ asserts that
\begin{align*}
\sum_{s=1}^{i-1} s b_{s,i,i} a_{i,i} w_i^2 = ia_{i,i} w_i^2 \in L^1(0,+\infty).
\end{align*}
Using \eqref{LT3}, we can deduce that $a_{i,i} (w_i^{\infty})^2=0$, resulting in \eqref{WINFZERO}.
\end{proof}
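The predicted long-time behaviour can also be observed numerically. The sketch below is our own illustration (a multiplicative kernel $a_{i,j}=ij$ and total shattering into monomers, integrated by explicit Euler), not the general model: the $4$-clusters decay while the monomer density approaches the total mass $\|w^0\|_Y$.

```python
def shatter_step(w, dt):
    # One Euler step with a(i, j) = i * j and total shattering into monomers
    # (illustrative choices; w[k] is the density of (k+1)-clusters).
    n = len(w)
    dw = [0.0] * n
    for i in range(2, n + 1):
        dw[i - 1] -= w[i - 1] * sum(i * j * w[j - 1] for j in range(2, n + 1))
    dw[0] += sum(0.5 * (p + q) * p * q * w[p - 1] * w[q - 1]
                 for p in range(2, n + 1) for q in range(2, n + 1))
    return [wi + dt * di for wi, di in zip(w, dw)]


w = [0.0, 0.0, 0.0, 1.0]   # pure 4-clusters, total mass 4
for _ in range(10000):     # integrate up to t = 10
    w = shatter_step(w, 1e-3)
```

At $t=10$ only a small residue of $4$-clusters is left (for this kernel $w_4(t)=1/(1+16t)$ in closed form), and $w_1$ is close to the conserved mass $4$, matching \eqref{WINFTYLIM}--\eqref{WINFZERO}.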
Q: android - inflate fragment view only when choosing its tab I have a PagerAdapter which creates 3 fragments.
In the MainActivity I set the ViewPager like this:
ViewPager pager = (ViewPager) findViewById(R.id.pager);
pager.setOffscreenPageLimit(2);
pager.setAdapter(new PagerAdapter(getSupportFragmentManager()));
The pager.setOffscreenPageLimit(2) call is from here https://stackoverflow.com/a/11852707/1662033, to make sure onViewCreated is called once for each fragment.
Here is my PagerAdapter class:
public class PagerAdapter extends FragmentPagerAdapter {
public PagerAdapter(FragmentManager fm) {
super(fm);
}
@Override
public CharSequence getPageTitle(int position) {
switch (position) {
case 0:
return "Home";
case 1:
return "Live";
case 2:
return "Gallery";
default:
return null;
}
}
@Override
public int getCount() {
return 3;
}
@Override
public Fragment getItem(int position) {
switch (position) {
case 0:
return new HomeFragment();
case 1:
return new LiveFragment();
case 2:
return new GalleryFragment();
default:
return null;
}
}
}
In the current code, each fragment's onCreateView, onActivityCreated etc. are called once, at the beginning, and that's it.
The issue I am having is - in one of the fragments (LiveFragment) I have a custom view which connects to a camera and shows the live stream.
What I want is - to inflate the view of LiveFragment only when the user navigates to the fragment, instead of how its now - its inflated at the beginning with the other fragments.
Is there a way to call onCreateView only when fragment is chosen?
A: FragmentPagerAdapter creates all the Fragments and keeps all of them in memory at all times. i.e. All your Fragments are created only once and you can navigate around them.
FragmentStatePagerAdapter creates and keeps only 3 Fragments (the current Fragment and the Fragments to its left and right) in memory at any given time, by default. You cannot reduce that number. However, you can increase the number of Fragments kept in memory by using viewpager.setOffscreenPageLimit().
Since you have only 3 Fragments, all your 3 fragments are created when the Viewpager is initialised. You can track which Fragment is currently visible on the screen using viewpager.addOnPageChangeListener(). Using this you can change the View of your LiveFragment from dummy one to actual View only when the Fragment is currently visible.
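As a pseudocode-level sketch in Java syntax (the `liveFragment` reference and the `startStream()`/`stopStream()` methods are hypothetical names for whatever attaches and releases your camera view; they are not part of the Android API), the listener-based approach could look like:

```java
pager.addOnPageChangeListener(new ViewPager.SimpleOnPageChangeListener() {
    @Override
    public void onPageSelected(int position) {
        // LiveFragment sits at index 1 in this adapter; only touch the
        // camera when the user actually lands on that page.
        if (position == 1) {
            liveFragment.startStream(); // hypothetical: attach the live view
        } else {
            liveFragment.stopStream();  // hypothetical: release the camera
        }
    }
});
```

Until its page is selected, LiveFragment can inflate a cheap placeholder layout in onCreateView, so nothing camera-related runs at startup.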
To tackle the rising issues of e-waste and improper disposal, the Government of India has set up new rules and regulations. Extended Producer Responsibility requires all manufacturers to take responsibility for the recycling and disposal of their e-waste. We at Virogreen India Pvt Ltd provide complete e-waste solutions to every manufacturer, irrespective of the genre of their business line.
We collect e-waste from retailers, pick-up points, warehouses, sellers and all other possible sources. We also buy e-waste from different organizations; for example, Virogreen is known among the top e-waste buyers in Chennai.
We guarantee the safe transportation of all the e-waste from the pickup zone until it reaches our operation center.
We ensure the full safety of the environment and workers by following all government rules and regulations while storing e-waste, by safely handling the components and checking for leakage or spillage of products.
We segregate all the elements by their type, chemical composition, end-of-life products and parts that can be salvaged.
We extract all the precious metals with the help of our skilled workers. We successfully extract gold, aluminium, silver and other metals from old circuits, motherboards, and other electronic and electrical items.
We take all the salvaged parts and recycle them for further use. Those recycled parts can be resold or used as raw materials for remanufacturing.
The rest of the items, such as non-recyclable materials, are then safely disposed of to minimize the environmental impact as much as possible.
Plastic is a product that has no end-of-life cycle, so we at Virogreen try our best to recycle plastic products by turning them into reusable products. Extending the life cycle of plastic is the best way to avoid environmental pollution. Most of the time, e-scrap buyers in Chennai buy plastics from other sources for recycling.
Sometimes data can still be extracted from a hard drive even after it has been erased, and there are illegal parties who collect valuable and classified information from drives after retrieving them from the garbage. In this regard, we at Virogreen provide the best data destruction services, complying with government norms; with our state-of-the-art data erasure technology, accessing a non-recoverable data deletion service will be very cost-effective for you.
If you have any doubt or queries, feel free to call us or mail us. We will be happy to assist you.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
namespace CLK.Scheduling
{
public sealed class YearlyTrigger : ITaskTrigger
{
// Fields
private readonly IEnumerable<YearlyTime> _yearlyTimeCollection = null;
// Constructors
public YearlyTrigger(YearlyTime yearlyTime)
{
#region Contracts
if (yearlyTime == null) throw new ArgumentNullException();
#endregion
// Arguments
_yearlyTimeCollection = new YearlyTime[] { yearlyTime };
}
public YearlyTrigger(IEnumerable<YearlyTime> yearlyTimeCollection)
{
#region Contracts
if (yearlyTimeCollection == null) throw new ArgumentNullException();
#endregion
// Arguments
_yearlyTimeCollection = yearlyTimeCollection;
}
// Methods
public bool Approve(DateTime executeTime, DateTime lastExecuteTime)
{
// Approve
for (int offsetYear = 0; offsetYear <= 1; offsetYear++)
{
foreach (var yearlyTime in _yearlyTimeCollection)
{
// Next
var nextExecuteTime = yearlyTime.Next(lastExecuteTime, offsetYear);
if (nextExecuteTime.HasValue == false) continue;
// Check
if (nextExecuteTime.Value > lastExecuteTime)
{
if (nextExecuteTime.Value <= executeTime)
{
return true;
}
}
}
}
// Return
return false;
}
}
public sealed class YearlyTime
{
// Constructors
public YearlyTime(int month, int day)
{
// Require
if (month < 1 || month > 12) throw new ArgumentException();
if (day < 1 || day > 31) throw new ArgumentException();
// Arguments
this.Month = month;
this.Day = day;
}
// Properties
public int Month { get; private set; }
public int Day { get; private set; }
// Methods
internal DateTime? Next(DateTime lastExecuteTime, int offsetYear)
{
// Next
var nextExecuteTime = new DateTime(lastExecuteTime.Year, 1, 1, 0, 0, 0);
nextExecuteTime = nextExecuteTime.AddYears(offsetYear);
nextExecuteTime = nextExecuteTime.AddMonths(this.Month - 1);
if (this.Day > DateTime.DaysInMonth(nextExecuteTime.Year, nextExecuteTime.Month)) return null;
nextExecuteTime = nextExecuteTime.AddDays(this.Day - 1);
// Return
return nextExecuteTime;
}
}
}
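The date arithmetic in Next and the window check in Approve can be cross-checked with a short Python transcription (a sketch under our own function names, not part of this library):

```python
from datetime import datetime
import calendar

def next_yearly(last, month, day, offset_year):
    # Mirror of YearlyTime.Next: the (month, day) occurrence in the year
    # last.year + offset_year, or None when that year has no such date
    # (e.g. Feb 29 outside leap years).
    year = last.year + offset_year
    if day > calendar.monthrange(year, month)[1]:
        return None
    return datetime(year, month, day)

def approve(execute_time, last_execute_time, times):
    # Mirror of YearlyTrigger.Approve: fire when some scheduled yearly time
    # falls strictly after the last run and not after execute_time.
    for offset in (0, 1):
        for month, day in times:
            nxt = next_yearly(last_execute_time, month, day, offset)
            if nxt is not None and last_execute_time < nxt <= execute_time:
                return True
    return False
```

Checking the current year and the next one (offsets 0 and 1) is what lets a trigger whose date has already passed this year still fire on its next anniversary.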
Lots of these pictures are from months ago but I was just sorting through our pictures and I love these!!!
That couch- it was an eye sore for me for months. It came with our house and it was broken, but until we had something to put in it's spot, it stayed. The day the garbage man took it away, I cheered! Slowly we are making great changes.
When we pick up Taryn from preschool, Haddie always goes in for a huge hug! I love that she is so excited to see her sister after only a few hours! Recently they came up with their own special greeting- a way to make each other giggle. They say to each other, "look at my funny face!" then they do a funny little dance where they jiggle around making silly noises. It makes both of them laugh and I love it!
She is our Daddy's girl. This past Sunday, when Tyson got home from meetings, Haddie followed him around all upstairs and downstairs. It was the cutest thing. She loves her Daddy!
These sweet girls and their lunch break. Scootering and running around at the park is serious work.
I love the selfies I find on my phone.
Or the crazy faces this one loves to make!
I was making dinner one Saturday and Haddie couldn't wait. She grabbed a yogurt from the fridge (besides milk, that's what she loves to take out) and plopped down in-between my legs to eat it. It didn't faze me until Tyson came in the kitchen and started laughing and then snapped the picture.
I believe this was Thanksgiving day eve. Our girls discovered the great fun of having Daddy do "under dogs" while they swing.
Taryn constantly lines up her animals for a show, or just to put them in order, and asks me to take a picture of them.
"Mama, I wah mor bra-kee peas!" I was happy to meet this little princess's request!
I think we were getting ready to leave and Taryn must have let Haddie outside. Haddie sat on our step and started singing to herself. So cute!
Taryn's preschool took a field trip to the fire station and she enjoyed the coloring book and fire hat they gave out after. They dressed up for the kids in full fire gear and kept reminding the kids not to be afraid if a person dressed like them came to help them from a fire. I never realized that a child would be terrified of this "monster" looking creature coming for them while the house was on fire. Good to know.
My sister gave me some nursing pads that she didn't need so I could give them to my friends who just had babies. Well, the girls were super quiet while I was upstairs, then they started laughing really hard. I came downstairs to this. I let them play for a few more minutes, then we made a game out of sorting them out again.
Taryn loved it. Haddie hated it.
{"url":"https:\/\/nalinkpithwa.com\/category\/pure-mathematics\/page\/2\/","text":"# Analysis \u2014 Chapter 1: continued \u2014 Real Variables part 9\n\n9. Relations of magnitude between real numbers.\n\nIt is plain, that, now that we have extended our conception of number, we are bound to make corresponding extensions of our conceptions of equality, inequality, addition, multiplication, and so on. We have to show that these ideas can be applied to the \u00a0new numbers, and that, when this extension of them is made, all the ordinary laws of algebra retain their validity, so that we can operate with real numbers in general in exactly the same way as with the rational numbers of Chapter 1, part 1 blog. To do all this systematically would occupy considerable space\/time, and we shall be content to indicate summarily how a more systematic discussion would proceed.\n\nWe denote a real number by a Greek letter such as $\\alpha$, $\\beta$, $\\gamma\\ldots$; the rational numbers of its lower and upper classes by the corresponding English letters a, A; b, B; c, C; \u2026We denote the classes themselves by (a), (A),\u2026\n\nIf $\\alpha$ and $\\beta$ are two real numbers, there are three possibilities:\n\ni) every $\\alpha$ is a b and every A a B; in this case, (a) is identical with (b) and (A) with (B);\n\nii) every a in a b, but not all A\u2019s are B\u2019s; in this case (a) is a proper part of $(b)^{*}$, and (B) a proper part of (A);\n\niii) every A is a B, but not all a\u2019s are b\u2019s.\n\n(These three cases may be indicated graphically on a number line).\n\nIn case (i) we write $\\alpha=\\beta$, in case (ii) $\\alpha=\\beta$, and in case (iii) $\\alpha>\\beta$. 
It is clear that, when $\\alpha$ and $\\beta$ are both rational, these definitions agree with the ideas of equality and inequality between rational numbers which we began by taking for granted; and that any positive number is greater than any negative number.\n\nIt will be convenient to define at this stage the negative $-\\alpha$ of a positive number $\\alpha$. If\n\n$(\\alpha)$, (A) are the classes, which consitute $\\alpha$, we can define another section of the rational numbers by putting all numbers $-A$ in the lower class and all numbers $-\\alpha$ in \u00a0the upper. The real number thus defined, which is clearly negative, we denote by $-\\alpha$. Similarly, we can define\n\n$-\\alpha$ when $\\alpha$ is negative or zero; if $\\alpha$ is negative, $-\\alpha$ is positive, It is plain also \u00a0that $-(-\\alpha)=\\alpha$. Of the two numbers $\\alpha$ and $-\\alpha$ one is always positive (unless $\\alpha=0$). The one which is positive we denote by $|\\alpha|$ and call the modulus of $\\alpha$.\n\nMore later,\n\nNalin Pithwa\n\n# Analysis \u2014 Chapter 1 \u2014 Real Variables \u2014 part 8\n\n8. Real numbers.\u00a0We have confined ourselves so far to certain sections of the positive rational numbers, which we have agreed provisionally to call \u201cpositive real numbers.\u201d Before we frame our final definitions, we must alter our point of view a little. We shall consider sections, or divisions into two classes, not merely of the positive rational numbers, but of all rational numbers, including zero. 
We may then repeat all that we have said about sections of the positive rational numbers in part 6 and 7 merely omitting the word positive occasionally.\n\nDefinitions.\u00a0A section of the rational numbers, in which both classes exist and the lower class has no greatest member, is called a real number, or simply a number.\n\nA number which does not correspond to a rational number is called an irrational number.\n\nIf the real number does correspond to a rational number, we shall use the term \u201crational\u201d as applying to the real number line.\n\nThe term \u201crational number\u201d will, as a result of our definitions, be ambiguous, it may mean the rational number of part 1, or the, corresponding real number. If we say that $1\/2 > 1\/3$, we may \u00a0be asserting either of the two different propositions, one a proposition of elementary arithmetic, the other a proposition concerning sections of the rational numbers. Ambiguities of this kind are common in mathematics, and are perfectly harmless, since the relations between different propositions are exactly the same whichever interpretation is attached to the propositions themselves. From $1\/2>1\/3$ and $1\/3>1\/4$ we can infer $1\/2>1\/4$; the inference is in no way affected by any doubt as to whether $1\/2$, $1\/3$ and $1\/4$ are arithmetic fractions or real numbers. Sometimes, of course, the context in which (example) \u2018$1\/2$\u2018 occurs is sufficient to fix its interpretation. When we say (next blog part 9) that $1\/2 < \\sqrt{1\/3}$we must\u00a0mean by \u2018$1\/2$\u2018 the real number $1\/2$.\n\nThe reader should observe, moreover, that no particular logical importance is to be attached to the precise form of definition of a \u2018real number\u2019 that we have adopted. We defined \u2018a real number\u2019 as being a section, that is, a pair of classes. 
We might equally well have defined it to being the lower, or the upper class; indeed it would be easy to define an infinity of classes of entities of each of which would possess the properties of the class of real numbers. What is essential in mathematics is that its symbols should be capable of some interpretation; generally they are capable of many, and then so far as mathematics is concerned, it does not matter which we adopt. Mr. Bertrand Russell has said that \u201cmathematics is the science in which we do not know what we are talking about, and do not care what we say about it is true\u201d, a remark which is expressed in the form of paradox but which in reality embodies a number of important truths. It would take too long to analyze the meaning of Mr Russell\u2019s epigram in detail, but one at any rate of the implications is this, that the symbols of mathematics are capable of varying interpretations, and that we are in general at liberty to adopt whatever we prefer.\n\nThere are now three cases to distinguish. It may happen that all negative rational numbers belong to the lower class and zero and all positive rational numbers to the upper. We describe this section as the real number zero.\u00a0Or, again it may happen that the lower class includes some positive numbers. Such a section we as a positive real number.\u00a0Finally, it may happen that some negative numbers belong to the upper class. Such a section we describe as a negative real number.\n\nNote: The difference between our presentation of a positive real number here and that or part 7 of the blogs amounts to the addition to the lower class of zero and all the negative rational numbers. An example of a negative real number is given by taking the property P of part 6 of the blogs to be $x+1<0$ and Q to be $x+1 \\geq 0$\/ This section plainly corresponds to the negative rational number $-1$. 
If we took P to be $x^{3}<-2$ and Q to be $x^{3}>-2$, we should obtain a negative real number which is not rational.

More later,

Nalin Pithwa

# Analysis — Chapter 1 Real Variables — part 7 — continued

Part 7. Irrational numbers (continued).

In the first two cases, we say that the section corresponds to a positive rational number a, which is l in the one case and r in the other. Conversely, it is clear that to any such number a corresponds a section which we shall denote by $\alpha^{*}$. For we might take P and Q to be the properties expressed by

$x \leq a$, $x > a$

respectively, or by $x < a$ and $x \geq a$. In the first case, a would be the greatest member of L, and in the second case the least member of R. There are in fact just two sections corresponding to any positive rational number. In order to avoid ambiguity we select one of them; let us select that in which the number itself belongs to the upper class. In other words, let us agree that we will consider only sections in which the lower class L has no greatest member.

There being this correspondence between the positive rational numbers and the sections defined by means of them, it would be perfectly legitimate, for mathematical purposes, to replace the numbers by the sections, and to regard the symbols which occur in our formulae as standing for the sections instead of for the numbers. Thus, for example, $\alpha > \alpha^{'}$ would mean the same as $a > a^{'}$, if $\alpha$ and $\alpha^{'}$ are the sections which correspond to a and $a^{'}$.

But, when we have in this way substituted sections of rational numbers for the rational numbers themselves, we are almost forced to a generalization of our number system. For there are sections (such as that of the blog on Chapter 1 — part 4) which do not correspond to any rational number.
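As an illustrative sketch (mine, not Hardy's), a section can be modelled in code by its lower-class membership test on the positive rationals; all names below are made up for the illustration. The section defined by $x^{2}<2$ then sits, in the ordering of sections, exactly where a number between its two classes would sit:

```python
from fractions import Fraction

def lower_sqrt2(x: Fraction) -> bool:
    """Lower-class test for the section of positive rationals with x^2 < 2."""
    return x > 0 and x * x < 2

def lower_rational(a: Fraction):
    """Lower-class test for the section corresponding to a rational a.
    a itself is placed in the upper class, so L has no greatest member."""
    return lambda x: x < a

# Members of the lower and upper classes of the sqrt(2) section:
assert lower_sqrt2(Fraction(7, 5))        # (7/5)^2 = 49/25 < 2, so 7/5 is in L
assert not lower_sqrt2(Fraction(3, 2))    # (3/2)^2 = 9/4 > 2, so 3/2 is in R

# The section for 3/2 admits 7/5 into its lower class, reflecting 7/5 < 3/2:
assert lower_rational(Fraction(3, 2))(Fraction(7, 5))
```

The sketch makes visible why no rational corresponds to this section: every rational lands cleanly in L or in R, and no candidate a makes `lower_rational(a)` agree with `lower_sqrt2` everywhere.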
The aggregate of sections is a larger aggregate than that of the positive rational numbers; it includes sections corresponding to all those numbers, and more besides. It is this fact which we make the basis of our generalization of the idea of a number. We accordingly frame the following definitions, which will, however, be modified in the next blog, and must therefore be regarded as temporary and provisional.

A section of the positive rational numbers, in which both classes exist and the lower class has no greatest member, is called a positive real number.

A positive real number which does not correspond to a positive rational number is called a positive irrational number.

More later,

Nalin Pithwa

# The Universal Appeal of Mathematics — Geetha S. Rao

I am reproducing an article, "The Universal Appeal of Mathematics — Geetha S. Rao", from "The Mathematics Student", Volume 83, Numbers 1 to 4, (2014), 01-04.

The purpose is just to share this beautiful article with the wider student community and math enthusiasts.

The Universal Appeal of Mathematics: Geetha S. Rao:

Mathematics is the Queen of all Sciences, the King of all Arts and the Master of all that is being surveyed. Such is the immaculate and immense potential of this all-pervasive, fascinating subject that it transcends all geographical barriers, territorial domains and racial prejudices.

The four pillars that support the growth, development, flowering and fruition of this evergreen subject are analytic thinking, logical reasoning, critical reviewing and decision making.

Every situation in real life can be modelled and simulated in mathematical language. So much so that every human must be empowered with at least a smattering of mathematical knowledge.
Indeed, the field of Artificial Intelligence is one where these concepts are implemented and imparted to the digital computers of today.

From times immemorial, people have known how to count and could trade using the barter system. Those who could join primary schools learnt the fundamental arithmetic and algebraic rules. Upon entry into high school and higher secondary classes, acquaintance with the various branches of this exciting subject commences. It is at this point that the effective communication skills of the teacher impact the comprehension and conceptual understanding of the students.

Unfortunately, if the teacher is unsure of the methods and rules involved, then begins a dislike of the subject by the students being taught. To prevent a carcinogenic spread of this dislike, the teacher ought to be suitably oriented and know precisely how to captivate the imagination of the students. If this is the case, the students enjoy the learning process and even start loving the subject, making them eagerly await Mathematics classes, with bated breath!

Acquiring the necessary knowledge of algebraic operations, permutations and combinations, rudiments of probabilistic methods, persuasive ideas from differential and integral calculus, and modern set theory will strengthen the bonds of mathematical wisdom.

From that stage, when one enters the portals of university education, general or technical, the opportunity to expand one's horizon of mathematical initiation is stupendous. Besides, the effective use of Mathematics in Aeronautical, Agricultural, Biological, Chemical, Geographical and Physical Sciences, Engineering, Medicine, Meteorology, Robotics, Social Sciences and other branches of knowledge is indeed mind-boggling.

Armed with this mathematical arsenal, the choice of a suitable career becomes very diverse. No two humans need to see eye to eye as far as such a choice is concerned, as the variety is staggering!
So, it is crystal clear that studying Mathematics, at every level, is not only meaningful and worthwhile but absolutely essential.

A natural mathematical genius like Srinivasa Ramanujan was and continues to be an enigma and a Swayambhu, who could dream of extraordinary mathematical formulae without any formal training.

A formally trained mathematician is capable of achieving laudable goals and imminent success in everything that he chooses to learn and, if possible, discover for himself the eternal truths of mathematics, provided he pursues the subject with imagination, passion, vigour and zeal.

Nothing can be so overwhelming as a long-standing problem affording a unique solution, but the creation of new tools provides immense pleasure, a sense of reward and tremendous excitement in the voyage of discovery.

These flights of imagination and intuition form the core of research activities. With the advent of computers, numerical algorithms gained in currency and greater precision, enabling mathematical techniques to grow by leaps and bounds!

Until the enunciation of the Uncertainty Principle by Werner Heisenberg, in 1927, mathematics meant definite rules of certainty. One may venture to say that this is the origin of Fuzziness. Lotfi Zadeh wrote a seminal paper, entitled "Fuzzy Sets", Information and Control, 8, 1965, 338-353. He must be considered a remarkable pioneer who invented the subject of Fuzzy mathematics, which is the amalgam of mathematical rules and methods of probability put together to define domains of fuzziness.

Fuzzy means frayed, fluffy, blurred or indistinct. On a cold wintry day, haziness is seen all around at dawn, and a person or an object at a distance, viewed through the mist, will appear hazy. This is a visual representation of fuzziness. The input variables in a fuzzy control system are mapped into sets of membership functions known as fuzzy sets.
The process of converting a crisp input value to a fuzzy value is called fuzzification.

A control system may also have various types of switches or on-off inputs along with its analog inputs, and such switch inputs will have a truth value equal to either 0 or 1.

Given mappings of input variables into membership functions and truth values, the microcontroller makes decisions concerning what action should be taken, based on a set of rules. Fuzzy concepts are those that cannot be expressed as true or false, but rather as partially true!

Fuzzy logic is involved in approximating rather than precisely determining a value. Traditional control systems are based on mathematical models in which one or more differential equations that define the system's response to its inputs will be used. In many cases, the mathematical model of the control process may not exist, or may be too expensive in terms of computer processing power and memory, and a system based on empirical rules may be more effective.

Furthermore, fuzzy logic is well suited to low-cost implementation based on inexpensive sensors, low-resolution analog-to-digital converters and 4-bit or 8-bit microcontroller chips. Such systems can be easily upgraded by adding new rules or novel features to improve performance. In many cases, fuzzy control can be used to enhance the power of existing systems by adding an extra layer of intelligence to the current control system. In practice, there are several different ways to define a rule, but the simplest one employed is the max-min inference method, in which the output membership function is given the truth value generated by the underlying premise. It is important to note that rules implemented in hardware are parallel, while in software they are sequential.

In 1985, interest in fuzzy systems was sparked by the Hitachi company in Japan, whose experts demonstrated the superiority of fuzzy control systems for trains.
These ideas were quickly adopted, and fuzzy systems were used to control acceleration, braking, and stopping of electric trains, which led to the historic introduction, in 1987, of the bullet train, with a speed of 200 miles per hour, between Tokyo and Sendai.

During an international conference of fuzzy researchers in Tokyo, in 1987, T. Yamakawa explained the use of fuzzy control, through a set of simple dedicated fuzzy logic chips, in an inverted pendulum experiment. The Japanese soon became infatuated with fuzzy systems and implemented these methods in a wide range of astonishing commercial and industrial applications.

In 1988, the vacuum cleaners of Matsushita used microcontrollers running fuzzy algorithms to interrogate dust sensors and adjust suction power accordingly. The Hitachi washing machines used fuzzy controllers with load-weight, fabric-mix and dirt sensors to automatically set the wash cycle for the optimum use of power, water and detergent.

The renowned Canon camera company developed an auto-focusing camera that used a charge-coupled device to measure the clarity of the image in six regions of its field of view and used the information provided to determine if the image was in focus. It also tracked the rate of change of lens movement during focusing and controlled its speed to prevent overshoot.

Work on fuzzy systems is also being done in the USA, Europe, China and India. NASA in the USA has studied fuzzy control for automated space docking, as simulations showed that a fuzzy control system can greatly reduce fuel consumption.
Firms such as Boeing, General Motors, Allen-Bradley, Chrysler, Eaton and Whirlpool have used fuzzy logic to improve automotive transmissions, energy-efficient electric meters, low-power refrigerators, etc.

Researchers are concentrating on many applications of fuzzy control systems, and have integrated fuzzy logic, neural networks and adaptive genetic software systems, with the ultimate goal of building self-learning fuzzy control systems.

This, in my opinion, is sufficient reason to induce you to start learning mathematics!

Geetha S. Rao,

Ex Professor, Ramanujan Institute for Advanced Study in Mathematics, University of Madras,

Chepauk, Chennai 600005.

Email: geetha_srao@yahoo.com

**********************************************************************************************

More later,

Nalin Pithwa

# Analysis — Chapter 1 — Real Variables: part 6: Irrational numbers continued

6. Irrational numbers (continued).

In Part 4, we discussed a special mode of division of the positive rational numbers x into two classes, such that $x^{2}<2$ for the numbers of one class and $x^{2}>2$ for those of the other. Such a mode of division is called a section of the numbers in question. It is plain that we could equally well construct a section in which the numbers of the two classes were characterized by the inequalities $x^{3}<2$ and $x^{3}>2$, or $x^{4}<7$ and $x^{4}>7$. Let us now attempt to state the principles of the construction of such "sections" of the positive rational numbers in quite general terms.

Suppose that P and Q stand for two properties which are mutually exclusive and one of which must be possessed by every positive rational number. Further, suppose that every such number which possesses P is less than any such number which possesses Q. Thus, P might be the property "$x^{2}<2$" and Q the property "$x^{2}>2$".
Then, we call the numbers which possess P the lower or left-hand class L, and those which possess Q the upper or right-hand class R. In general, both classes will exist; but it may happen in special cases that one is non-existent and every number belongs to the other. This would obviously happen, for example, if P (or Q) were the property of being rational, or of being positive. For the present, however, we shall confine ourselves to cases in which both classes do exist; and then it follows, as in Part 4, that we can find a member of L and a member of R whose difference is as small as we please.

In the particular case which we considered in Part 4, L had no greatest member and R no least. This question of the existence of greatest or least members of the classes is of the utmost importance. We observe first that it is impossible in any case that L should have a greatest member and R a least. For, if l were the greatest member of L, and r the least of R, so that $l < r$, then $(1/2)(l+r)$ would be a positive rational number lying between l and r, and so could belong neither to L nor to R, and this contradicts our assumption that every such number belongs to one class or to the other. This being so, there are but three possibilities, which are mutually exclusive. Either

(i) L has a greatest member l, or (ii) R has a least member r, or (iii) L has no greatest member and R no least.

(In Part 4, there is an example of the last possibility.)

More later,

Nalin Pithwa

# Analysis — Chapter I — Real Variables — Part 5 — Irrational numbers continued

We have thus divided the positive rational numbers into two classes, L and R, such that (i) every member of R is greater than every member of L, (ii) we can find a member of L and a member of R whose difference is as small as we please, and (iii) L has no greatest and R no least member.
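Property (ii) can be watched numerically. The sketch below (my illustration, not part of the original text) uses exact rational arithmetic and repeated bisection to produce a member of L and a member of R, for the section defined by $x^{2}<2$ and $x^{2}>2$, whose difference is as small as we please:

```python
from fractions import Fraction

def narrow(n):
    """Return (l, r) with l in L (l^2 < 2), r in R (r^2 > 2), and r - l = 1/2^n."""
    lo, hi = Fraction(1), Fraction(2)   # 1^2 < 2 and 2^2 > 2
    for _ in range(n):
        mid = (lo + hi) / 2
        if mid * mid < 2:               # mid belongs to L
            lo = mid
        else:                           # mid belongs to R (mid^2 = 2 is impossible)
            hi = mid
    return lo, hi

l, r = narrow(20)
assert l * l < 2 < r * r                # one member from each class
assert r - l == Fraction(1, 2**20)      # difference as small as we please
```

Each bisection step halves the gap, so after n steps the gap is exactly $1/2^{n}$; the same loop, run with decimal steps of shrinking size, reproduces the approximating sequences 1, 1.4, 1.41, … discussed in Part 4.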
Our common-sense notion of the attributes of a straight line, the requirements of our elementary geometry and our elementary algebra, alike demand the existence of a number x greater than all the members of L and less than all the members of R, and of a corresponding point P on $\Lambda$ such that P divides the points which correspond to members of L from those which correspond to members of R.

Let us suppose for a moment that there is such a number x and that it may be operated upon in accordance with the laws of algebra, so that, for example, $x^{2}$ has a definite meaning. Then $x^{2}$ cannot be either less than or greater than 2. For suppose, for example, that $x^{2}$ is less than 2. Then, it follows from what precedes that we can find a positive rational number $\xi$ such that $\xi^{2}$ lies between $x^{2}$ and 2. That is to say, we can find a member of L greater than x; and this contradicts the supposition that x divides the members of L from those of R. Thus, $x^{2}$ cannot be less than 2, and similarly, it cannot be greater than 2. We are therefore driven to the conclusion that $x^{2}=2$, and that x is the number which in algebra we denote by $\sqrt{2}$. And, of course, this number $\sqrt{2}$ is not rational, for no rational number has its square equal to 2. It is the simplest example of what is called an irrational number.

But the preceding argument may be applied to equations other than $x^{2}=2$, almost word for word; for example, to $x^{2}=N$, where N is an integer which is not a perfect square, or to

$x^{3}=3$, $x^{2}=7$ and $x^{4}=23$,

or, as we shall see later on, to $x^{3}=3x+8$. We are thus led to believe in the existence of irrational numbers x and points P on $\Lambda$ such that x satisfies equations such as these, even when these lengths cannot (as $\sqrt{2}$ can) be constructed by means of elementary geometrical methods.

The reader may now follow one or other of two alternative courses.
He may, if he pleases, be content to assume that "irrational numbers" such as $\sqrt{2}$ and $\sqrt[5]{3}$ exist and are amenable to the usual algebraic laws. If he does this, he will be able to avoid the more abstract discussions of the next few blogs.

If, on the other hand, he is not disposed to adopt so naive an attitude, he will be well advised to pay careful attention to the blogs which follow, in which these questions receive further consideration.

More later,

Nalin Pithwa

# What are worthwhile problems as per Richard Feynman, American, Physics Nobel Laureate

The letter below is from Perfectly Reasonable Deviations From The Beaten Track, a book of letters of Richard Feynman. It is one of the most moving letters that I have read. Tomonaga, mentioned below, shared the 1965 Nobel Prize in Physics along with Feynman and Schwinger.

A former student, who was also once a student of Tomonaga's, wrote to extend his congratulations. Feynman responded, asking Mr. Mano what he was now doing. The response: "studying the Coherence theory with some applications to the propagation of electromagnetic waves through turbulent atmosphere… a humble and down-to-earth type of problem."

Dear Koichi,

I was very happy to hear from you, and that you have such a position in the Research Laboratories. Unfortunately your letter made me unhappy for you seem to be truly sad. It seems that the influence of your teacher has been to give you a false idea of what are worthwhile problems. The worthwhile problems are the ones you can really solve or help solve, the ones you can really contribute something to. A problem is grand in science if it lies before us unsolved and we see some way for us to make some headway into it. I would advise you to take even simpler, or as you say, humbler, problems until you find some you can really solve easily, no matter how trivial.
You will get the pleasure of success, and of helping your fellow man, even if it is only to answer a question in the mind of a colleague less able than you. You must not take away from yourself these pleasures because you have some erroneous idea of what is worthwhile.

You met me at the peak of my career when I seemed to you to be concerned with problems close to the gods. But at the same time I had another Ph.D. student (Albert Hibbs) whose thesis was on how it is that the winds build up waves blowing over water in the sea. I accepted him as a student because he came to me with the problem he wanted to solve. With you I made a mistake, I gave you the problem instead of letting you find your own; and left you with a wrong idea of what is interesting or pleasant or important to work on (namely those problems you see you may do something about). I am sorry, excuse me. I hope by this letter to correct it a little.

I have worked on innumerable problems that you would call humble, but which I enjoyed and felt very good about because I sometimes could partially succeed. For example, experiments on the coefficient of friction on highly polished surfaces, to try to learn something about how friction worked (failure). Or, how elastic properties of crystals depend on the forces between the atoms in them, or how to make electroplated metal stick to plastic objects (like radio knobs). Or, how neutrons diffuse out of uranium. Or, the reflection of electromagnetic waves from films coating glass. The development of shock waves in explosions. The design of a neutron counter. Why some elements capture electrons from the L-orbits, but not the K-orbits. General theory of how to fold paper to make a certain type of child's toy (called flexagons). The energy levels in the light nuclei. The theory of turbulence (I have spent several years on it without success).
Plus all the "grander" problems of quantum theory.

No problem is too small or too trivial if we can really do something about it.

You say you are a nameless man. You are not to your wife and to your child. You will not long remain so to your immediate colleagues if you can answer their simple questions when they come into your office. You are not nameless to me. Do not remain nameless to yourself – it is too sad a way to be. Know your place in the world and evaluate yourself fairly, not in terms of your naïve ideals of your own youth, nor in terms of what you erroneously imagine your teacher's ideals are.

Best of luck and happiness.
Sincerely,
Richard P. Feynman.

An accomplished father giving heartfelt advice to a son struggling to find his way; a teacher who immediately senses from a few gestures what a pupil is going through and reaches out, due to his love for his student and his own humility; a man who recognizes his greatness and his defects in equal measure.

# Analysis — Chapter 1 — Real Variables — Part 4 Irrational numbers continued

Part 4. Irrational numbers (continued).

The result of our geometrical interpretation of the rational numbers is therefore to suggest the desirability of enlarging our conception of "number" by the introduction of further numbers of a new kind.

The same conclusion might have been reached without the use of geometrical language. One of the central problems of algebra is that of the solution of equations, such as

$x^{2}=1$, $x^{2}=2$.

The first equation has the two rational roots 1 and -1. But, if our conception of number is to be limited to the rational numbers, we can only say that the second equation has no roots; and the same is the case with such equations as $x^{3}=2$, $x^{4}=7$.
These facts are plainly sufficient to make some generalization of our idea of number desirable, if it should prove to be possible.

Let us consider more closely the equation $x^{2}=2$.

We have already seen that there is no rational number x which satisfies this equation. The square of any rational number is either less than or greater than 2. We can therefore divide the rational numbers into two classes, one containing the numbers whose squares are less than 2, and the other those whose squares are greater than 2. We shall confine our attention to the positive rational numbers, and we shall call these two classes the class L, or the lower class, or the left-hand class, and the class R, or the upper class, or the right-hand class. It is obvious that every member of R is greater than all the members of class L. Moreover, it is easy to convince ourselves that we can find a member of the class L whose square, though less than 2, differs from 2 by as little as we please, and a member of R whose square, though greater than 2, also differs from 2 by as little as we please. In fact, if we carry out the ordinary arithmetical process for the extraction of the square root of 2, we obtain a series of rational numbers, viz.,

1, 1.4, 1.41, 1.414, 1.4142, $\ldots$

whose squares

1, 1.96, 1.9881, 1.999396, 1.99996164, $\ldots$

are all less than 2, but approach nearer and nearer to it; and by taking a sufficient number of the figures given by the process we can obtain as close an approximation as we want. And if we increase the last figure, in each of the approximations given above, by unity, we obtain a series of rational numbers

2, 1.5, 1.42, 1.415, 1.4143, $\ldots$

whose squares

4, 2.25, 2.0164, 2.002225, 2.00024449, $\ldots$

are all greater than 2, but approximate to 2 as closely as we please.

It follows also that there can be no largest member of L or smallest member of R. For if x is any member of L, then

$x^{2} < 2$.
Suppose that $x^{2}=2-\delta$. Then we can find a member $x_{1}$ of L such that ${x_{1}}^{2}$ differs from 2 by less than $\delta$, so that ${x_{1}}^{2}>x^{2}$ and $x_{1}>x$. Thus there are larger members of L than x; and, as x is any member of L, it follows that no member of L can be larger than all the rest. Hence, L has no largest member, and similarly, R has no smallest.

Note: A rigorous analysis of the above can be easily carried out. If you need help, please let me know and I will post it in the next blog.

More later,

Nalin Pithwa

# Analysis — Real Variables — Chapter 1 — Examples II

Examples II.

1) Show that no rational number can have its cube equal to 2.

Proof #1.

Let, if possible, $(p/q)^{3}=2$, where p and q are integers, $q \neq 0$, and p and q have no common factor. Then $p^{3}=2q^{3}$, so $p^{3}$ is even, and hence p contains a factor of 2. Let $p=2k$. Then $8k^{3}=2q^{3}$, that is, $q^{3}=4k^{3}$; so $q^{3}$ is even, and q also contains a factor of 2. Hence, both p and q have the common factor 2, contrary to our assumption. Hence, the proof.

2) Prove generally that a rational fraction $p/q$ in its lowest terms cannot be the cube of a rational number unless p and q are both perfect cubes.

Proof #2.

Let, if possible, $p/q = (m/n)^{3}$, where m, n, p, q are integers, with n and q non-zero, $p/q$ in its lowest terms, and $m/n$ in its lowest terms, so that m and n have no common factor. Then $m^{3}$ and $n^{3}$ also have no common factor, so $m^{3}/n^{3}$ is in its lowest terms; and two equal fractions, each in its lowest terms, have equal numerators and equal denominators. Hence, $p=m^{3}$, $q=n^{3}$.

3) A more general proposition, which is due to Gauss and includes those which precede as particular cases, is the following: an algebraical equation

$z^{n}+p_{1}z^{n-1}+p_{2}z^{n-2}+ \ldots + p_{n}=0$,

with integral coefficients, cannot have a rational but non-integral root.

Proof #3.

For suppose that the equation has a root $a/b$, where a and b are integers without a common factor, and b is positive.
Writing $a/b$ for z, and multiplying by $b^{n-1}$, we obtain

$-(a^{n}/b)=p_{1}a^{n-1}+p_{2}a^{n-2}b+ \ldots + p_{n}b^{n-1}$,

a fraction in its lowest terms equal to an integer, which is absurd. Thus, $b=1$, and the root is a. It is evident that a must be a divisor of $p_{n}$.

4) Show that if $p_{n}=1$ and neither of

$1+p_{1}+p_{2}+p_{3}+\ldots$ and $1-p_{1}+p_{2}-p_{3}+\ldots$

is zero, then the equation cannot have a rational root.

Proof #4.

By #3, any rational root must be an integer dividing $p_{n}=1$, and so can only be 1 or -1. Substituting $z=1$ in the left-hand side gives $1+p_{1}+p_{2}+p_{3}+\ldots$, and substituting $z=-1$ gives, up to sign, $1-p_{1}+p_{2}-p_{3}+\ldots$; by hypothesis neither is zero, so neither 1 nor -1 is a root. Hence, the equation can have no rational root.

5) Find the rational roots, if any, of $x^{4}-4x^{3}-8x^{2}+13x+10=0$.

Solution #5.

Use problem #3. The roots can only be integral, and must divide 10; so find the roots by trial among $\pm 1$, $\pm 2$, $\pm 5$, $\pm 10$. (They turn out to be $-2$ and 5.) It is clear that we can in this way determine the rational roots of any such equation.

More later,

Nalin Pithwa

# Analysis — Chapter I — part 3 — Real Variables — Irrational numbers

Part 3. Irrational numbers.

If the reader will mark off on the line all the points corresponding to the rational numbers whose denominators are 1, 2, 3, … in succession, he will readily convince himself that he can cover the line with rational points as closely as he likes. We can state this more precisely as follows: if we take any segment BC on $\Lambda$, we can find as many rational points as we please on BC.

Suppose, for example, that BC falls within the segment $A_{1}A_{2}$. It is evident that if we choose a positive integer k such that

$k \cdot BC>1$  (Equation I)

(the assumption that this is possible is equivalent to the assumption of what is known as the Axiom of Archimedes), and divide $A_{1}A_{2}$ into k equal parts, then at least one of the points of division (say P) must fall inside BC, without coinciding with either B or C. For if this were not so, BC would be entirely included in one of the k parts into which $A_{1}A_{2}$ has been divided, which contradicts the supposition I.
But P obviously corresponds to a rational number whose denominator is k. Thus at least one rational point P lies between B and C. But then we can find another such point Q between B and P, another between B and Q, and so on indefinitely; that is, as we asserted above, we can find as many as we please. We may express this by saying that BC includes infinitely many rational points. (We will investigate the meaning of infinite more closely later.)

From these considerations, the reader might be tempted to infer that an adequate view of the nature of the line could be obtained by imagining it to be formed simply by the rational points which lie on it. And it is certainly the case that if we imagine the line to be made up solely of the rational points, and all other points (if there are any such) to be eliminated, the figure would possess most of the properties which common sense attributes to the straight line, and would, to put the matter roughly, look and behave very much like a line.

A little further consideration, however, shows that this view would involve us in serious difficulties.

Let us look at the matter for a moment with the eye of common sense, and consider some of the properties which we may reasonably expect a straight line to possess if it is to satisfy the idea which we have formed of it in elementary geometry.

The straight line must be composed of points, and any segment of it by all the points which lie between its end points. With any such segment must be associated a certain entity called its length, which must be a quantity capable of numerical measurement in terms of any standard or unit length; and these lengths must be capable of combination with one another, according to the ordinary rules of algebra, by means of addition or multiplication. Again, it must be possible to construct a line whose length is the sum or product of any two given lengths.
If the length PQ along a given line is a, and the length QR, along the same straight line, is b, the length PR must be $a+b$.

Moreover, if the lengths OP and OQ, along one straight line, are 1 and a, and the length OR along another straight line is b, and if we determine the length OS by Euclid's construction for a fourth proportional to the lines OP, OQ, OR, this length must be ab, the algebraic fourth proportional to 1, a and b. And it is hardly necessary to remark that the sums and products thus defined must obey the ordinary laws of algebra; viz.,

$a+b=b+a$

$a+(b+c)=(a+b)+c$

$ab=ba$

$a(bc)=(ab)c$

$a(b+c)=ab+ac$

The lengths of our lines must also obey a number of obvious laws concerning inequalities as well as equalities: thus, if A, B, C are three points lying along $\Lambda$ from left to right, we must have $AB < AC$, and so on. Moreover, it must be possible, on our fundamental line $\Lambda$, to find a point P such that $A_{0}P$ is equal to any segment whatever taken along $\Lambda$ or along any other straight line. All these properties of a line, and more, are involved in the presuppositions of our elementary geometry.

Now, it is very easy to see that the idea of a straight line as composed of a series of points, each corresponding to a rational number, cannot possibly satisfy all these requirements. There are various elementary geometrical constructions, for example, which purport to construct a length x such that $x^{2}=2$. For instance, we may construct an isosceles right-angled triangle ABC such that $AB=AC=1$. Then, if $BC=x$, $x^{2}=2$. Or we may determine the length x by means of Euclid's construction for a mean proportional to 1 and 2, as indicated in the figure. Our requirements therefore involve the existence of a length measured by a number x, and a point P on $\Lambda$ such that $A_{0}P=x$, $x^{2}=2$.

But it is easy to see that there is no rational number such that its square is 2.
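The claim just made can be checked mechanically for small denominators. The sketch below (my illustration, not part of the text) searches every fraction $p/q$ with $q$ up to a bound for one whose square is exactly 2; only the integers nearest $\sqrt{2}\,q$ need to be tried for each q:

```python
from fractions import Fraction
from math import isqrt

def fractions_squaring_to_two(max_q):
    """Search all fractions p/q with 1 <= q <= max_q for (p/q)^2 == 2."""
    hits = []
    for q in range(1, max_q + 1):
        p = isqrt(2 * q * q)            # floor(sqrt(2) * q): the only candidates
        for cand in (p, p + 1):         # are this integer and the next one
            if Fraction(cand, q) ** 2 == 2:
                hits.append(Fraction(cand, q))
    return hits

assert fractions_squaring_to_two(1000) == []   # no denominator up to 1000 works
```

Exact `Fraction` arithmetic matters here: with floating point, near-misses such as $1393/985$ would compare equal to 2 and give a false positive. Of course the search only illustrates the proposition; the proof above rules out every denominator at once.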
In fact, we may go further and say that there is no rational number whose square is $m/n$, where $m/n$ is any positive fraction in its lowest terms, unless $m$ and $n$ are both perfect squares.

For suppose, if possible, that $\frac{p^{2}}{q^{2}}=m/n$, $p$ having no common factor with $q$, and $m$ no common factor with $n$. Then $np^{2}=mq^{2}$. Every factor of $q^{2}$ must divide $np^{2}$, and as $p$ and $q$ have no common factor, every factor of $q^{2}$ must divide $n$. Hence $n={\lambda}q^{2}$, where $\lambda$ is an integer. But this involves $m={\lambda}p^{2}$; and as $m$ and $n$ have no common factor, $\lambda$ must be unity. Thus $m=p^{2}$ and $n=q^{2}$, as was to be proved. In particular, it follows, by taking $n=1$, that an integer cannot be the square of a rational number, unless that rational number is itself integral.

It appears, then, that our requirements involve the existence of a number $x$ and a point P, not one of the rational points already constructed, such that $A_{0}P=x$ and $x^{2}=2$; and (as the reader will remember from elementary algebra) we write $x = \sqrt{2}$.

Alternate proof.

The following alternate proof that no rational number can have its square equal to 2 is interesting.

Suppose, if possible, that $p/q$ is a positive fraction, in its lowest terms, such that $(p/q)^{2}=2$. It is easy to see that this involves $(2q-p)^{2}=2(p-q)^{2}$, and so $\frac{2q-p}{p-q}$ is another fraction having the same property. But clearly $q < p < 2q$, and so $p-q < q$.
Hence, there is another fraction equal to $p/q$ and having a smaller denominator, which contradicts the assumption that $p/q$ is in its lowest terms.

In the next blog, we shall look at examples.

More later,

Nalin Pithwa
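PS: For readers who like to experiment, the algebra behind the descent step above can be checked mechanically. The little Python sketch below (mine, not part of the classical argument) verifies the identity $(2q-p)^{2}-2(p-q)^{2}=-(p^{2}-2q^{2})$, which is why a solution of $p^{2}=2q^{2}$ would yield a smaller one, and confirms by brute force that no small fraction squares to 2.

```python
# Check the identity behind the descent: for all integers p, q,
# (2q - p)^2 - 2*(p - q)^2 == -(p^2 - 2*q^2),
# so p^2 = 2*q^2 would force (2q - p)^2 = 2*(p - q)^2,
# giving an equal fraction with a smaller denominator.
def descent_identity_holds(p, q):
    return (2*q - p)**2 - 2*(p - q)**2 == -(p**2 - 2*q**2)

def no_rational_sqrt_two(limit):
    # Brute force: no p/q with 1 <= p, q < limit satisfies (p/q)^2 = 2.
    return all(p*p != 2*q*q for p in range(1, limit) for q in range(1, limit))
```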
Matlock should really have brought all three points back from the Weaver Stadium on Saturday after dominating the opening half - and although Nantwich fought back with spirit after the interval, it was a missed opportunity to complete a league double over the Dabbers.
On another day, Matlock's Man of the Match, Danny Holland, (left), back from suspension, could easily have finished with a hat-trick. He put Matlock into a 12th minute lead but unluckily struck the woodwork twice before being unable to convert a good second half chance.
Holland was one of two changes made by boss Mark Atkins after the win against Whitby which also saw skipper Liam Nedham recalled after injury in midfield. Corey Gregory and Lavell White dropped to the bench and the Gladiators played three strikers with Massiah McDonald and Shaun Tuton up alongside Holland.
Matlock showed their intent straight from the kick off as Tuton bulldozed his way through the home defence to force goalkeeper Steve James to push away his low shot for a corner.
Then on 7 minutes, Holland hit the frame of the goal for the first time, his header from a Tuton cross rebounding off the post with James beaten.
As the pressure was maintained, Micky Harcourt, playing at left back with James Ashmore on the right, glanced a header narrowly wide from a free kick.
The goal, which had been coming, arrived when Cecil Nyoni did well to win possession some thirty yards out, before feeding Holland who crisply shot past James for an excellent finish.
Matlock kept up the intensity in their play as Martin Foster's shot was deflected over - from the corner, an Adam Yates header drifted wide before McDonald was unlucky to see his effort blocked as he met a Tuton centre.
There was an element of comedy as referee Harty was floored by a Nantwich clearance, but there was little fun for the hosts at this stage as they were being outplayed.
Nothing had been seen from Nantwich as an attacking force and there was another chance for Matlock three minutes before the break. The ball fell invitingly to Needham whose well struck shot whistled narrowly too high.
So Matlock went in for their half time refreshment a goal to the good but perhaps reflecting it could have been more.
Holland was again unfortunate as his 48th minute header from a Needham cross rebounded off the bar with again James well beaten.
And Nantwich made the most of this latest escape when they levelled in the next minute.
Andy White floated in a free kick from the left and nobody picked up substitute Alex Frost who scored with a free header which was a really sloppy goal from the Matlock perspective.
Now for the first time in the contest, Nantwich looked dangerous as Alex Meaney drove forward from midfield to see his twenty five yard shot deflected wide.
But Holland might have won it for Matlock on 71 minutes when he was clear on goal - James doing well to save his initial effort before he fired his follow up attempt wide.
A freak incident might have seen the Gladiators return home with nothing as a bouncing ball bobbled awkwardly over Jon Kennedy (left), who then saved the day by tipping the loose ball out for an unproductive corner.
James then ensured Nantwich took a share of the spoils with a fine stop from Tuton, but Matlock will look back on two points that slipped away from what was generally a sound and dominant performance.
\section{Introduction}
The problems of the origin and propagation of Cosmic Rays (CRs) in the Galaxy are long-standing questions, and the combination of several different observations over a wide energy range is required to understand them at least partly.
The most realistic description of CR propagation is given by diffusion models. Two main approaches have been developed so far: analytical (or semi-analytical) diffusion models (see e.g.~\cite{Berezinsky:book} and ref.s therein), which solve the CR transport equation by assuming simplified distributions for the sources and the interstellar gas, and fully numerical diffusion models.
Well known realizations of these two approaches are respectively the {\it two-zone model} (see e.g. \cite{Maurin:01,Maurin:02, Maurin:2002ua}) and the GALPROP package \cite{Strong:98,Strong:04,GALPROPweb,Strong:2007nh}. Recently, some of us developed a new numerical code, DRAGON (Diffusion of cosmic RAys in Galaxy modelizatiON) \cite{Evoli:2008dv}. All these models involve in general a large number of parameters which need to be fixed using several types of experimental data.
Their knowledge is crucial not only for CR physics but also for constraining or determining the properties of an exotic galactic component from indirect measurements.
However, in spite of the strong efforts made on both observational and theoretical sides, most of these parameters are still poorly known. One of the reasons lies in the fact that best quality data on CR spectra (e.g.~the ratios of secondary to primary nuclear species) were available mainly at low energy ($E \lesssim 10 ~{\rm GeV}/{\rm n}$), where several competing physical processes (e.g.~solar modulation, convection, reacceleration) are expected to affect significantly the CR spectra by an {\it a priori} undetermined relative amount. Furthermore, the uncertainties on the spallation cross sections and their effects on the propagated CR composition are still sizable at such low energies.
On the other hand, the interpretation of high energy ($E > 10~{\rm GeV} /{\rm n}$) CR data is, in principle, easier since in this range only spatial diffusion and spallation losses (the latter becoming less and less relevant with increasing energy) are expected to shape the CR spectra. Furthermore, other uncertainties related to the physics of solar modulation and to poorly known nuclear cross sections are reduced by considering only data at energies larger than several GeV/n. Hence, the study of high energy CR spectra allows in principle to constrain the plain diffusion properties of CR in the Galaxy, in particular the strength $D_{0}$ of the diffusion coefficient at a reference rigidity and its energy slope $\delta$, and offers a lever arm to better understand low energy effects (see \cite{Castellina:2005ub} for an interesting discussion about this issue).
This possibility has been precluded for long time by the scarcity of observational data.
The experimental situation however improved recently when the CREAM balloon experiment measured the spectrum of light CR nuclei and especially the boron to carbon ratio (B/C) up to $\sim 1~ {\rm TeV}/{\rm n}$ \cite{CREAM}.
Besides CR nuclear measurements, valuable complementary data were recently provided by the PAMELA satellite experiment which measured the antiproton to proton ratio up to $\sim100~ {\rm GeV}$ with unprecedented accuracy \cite{Adriani:2008zq}.
Other valuable experimental data are expected to come from AMS-02 \cite{ams02} which will soon be installed on board of the International Space Station.
As for other secondary nuclear species, antiprotons are produced by the spallation of primary CRs (mainly protons and Helium nuclei) in the standard scenario. Therefore, their spectrum may provide an independent and complementary check of the validity of CR propagation models and a valuable probe of an extra component which may arise, for example, from secondary production in the CR astrophysical sources \cite{Blasi:2009bd,Blasi:2009hv} and/or from dark matter annihilation or decay (see e.g. \cite{Bergstrom:1999jc,Bergstrom:2008ag,Bertone:2008xr,Cirelli:2008pk}).
Whether the measured secondary/primary nuclear ratios and antiproton spectra are fully compatible within the framework of a standard CR transport model is still not completely clear.
Indeed, while a discrepancy between the parameters allowing to reproduce the B/C and the ${\bar p}/p$ was claimed in \cite{Moskalenko:2001ya} (see also \cite{Strong:2007nh}), a good concordance was found in other analyses \cite{Bergstrom:1999jc,Donato:01}.
Furthermore, even the interpretation of nuclear data alone is still unsettled: analyses based on the leaky-box and semi-analytical diffusion models favor values of $\delta$ significantly larger than the ones found with the numerical GALPROP package. The comparison of such results is not straightforward due to a number of different assumptions. Hence, an independent analysis accounting for the most recent available data is timely.
In this work we use DRAGON \cite{Evoli:2008dv} to constrain the main diffusion parameters against updated experimental data in the energy range $1 \lesssim E \lesssim 10^3~{\rm GeV}/{\rm n}$. This code reproduces the results of the well known GALPROP under the same conditions. Furthermore, it allows to test the effects of a spatially varying diffusion coefficient. Here we use the optimized and updated version of this code, which now accounts for ionization and Coulomb energy losses, diffusive reacceleration and convection, and exploits the performances of modern computer clusters to scan a rather large range of parameters under realistic physical conditions.
These upgrades allow to constrain the diffusion coefficient normalization and spectral index, as well as the Alfv\`en velocity $v_A$, with unprecedented accuracy by means of a statistical analysis of the agreement between model predictions and CR data including recent nuclear and antiproton data.
In the following we will present the results of this analysis. In Sec.~\ref{sec:code} we briefly review the framework of CR propagation we adopt. In Sec.~\ref{sec:analysis} we describe our analysis and constrain the diffusion parameters. In the same section we also discuss how much the secondary antiproton spectrum is allowed to vary under the request that the predicted B/C, N/O and C/O ratios are compatible with the experimental data.
In Sec.\ref{sec:le_model} we introduce an effective diffusion-reacceleration model which allows to match all relevant experimental data down to $E \sim 0.1~{\rm GeV}/{\rm n}$. Finally in Sec.s~\ref{sec:discussion} and \ref{sec:conclusions} we compare them with results from other groups and discuss differences and implications for exotic source searches. Section \ref{sec:conclusions} is further devoted to our final remarks and conclusions.
\section{The CR propagation framework}
\label{sec:code}
Galactic CRs propagate diffusively in the irregular component of the Galactic magnetic field, undergoing nuclear interactions with the gas present in the InterStellar Medium (ISM). Similarly to previous treatments, we assume here that the Galactic CR source, magnetic field and gas distributions can be approximated as cylindrically symmetric.
Under these conditions, and in the energy range we are interested in, CR propagation of stable nuclei obeys the well known transport equation (Ginzburg and Syrovatskii \cite{Ginzburg:64})
\begin{eqnarray}
\label{eq:diffusion_equation}
\frac{\partial N^i}{\partial t} &-& {\bm \nabla}\cdot \left( D\,{\bm \nabla}
-\bm{v}_{c}\right)N^{i} + \frac{\partial}{\partial p} \left(\dot{p}-\frac{p}{3}\bm{\nabla}\cdot\bm{v}_{c}\right) N^i -\frac{\partial}{\partial p} p^2 D_{pp}
\frac{\partial}{\partial p} \frac{N^i}{p^2} = \nonumber \\
&=& Q^{i}(p,r,z) + \sum_{j>i}c\beta n_{\rm gas}(r,z)
\sigma_{ji}N^{j} - c\beta n_{\rm gas}\sigma_{\rm in}(E_{k})N^{i}\;.
\end{eqnarray}
Here $N^i(p,r,z)$ is the number density of the $i$-th atomic species; $p$ is its momentum; $\beta$ its velocity in units of the speed of light $c$; $\sigma_{in}$ is the total inelastic cross section onto the ISM gas, whose density is $n_{\rm gas}$; $\sigma_{ji}$ is the production cross section of the nuclear species $i$ by fragmentation of the $j$-th one; $D$ is the spatial diffusion coefficient; $\bm{v}_{c}$ is the convection velocity.
The last term on the l.h.s. of Eq. (\ref{eq:diffusion_equation}) describes diffusive reacceleration of CRs in the turbulent galactic magnetic field. In agreement with quasi-linear theory, we assume the diffusion coefficient in momentum space $D_{pp}$ to be related to the spatial diffusion coefficient by the relationship (see e.g.~\cite{Berezinsky:book}) $\displaystyle D_{pp} = \frac{4}{3 \delta (4 - \delta^2)(4 - \delta)}\, v_A^2\, p^2 / D$, where $v_A$ is the Alfv\`en velocity. Here we assume that diffusive reacceleration takes place in the entire diffusive halo.
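For concreteness, the quasi-linear relation quoted above reduces to a one-line function. The sketch below is purely illustrative (consistent units are assumed and left to the reader); it is not code taken from DRAGON.

```python
def momentum_diffusion(D, v_A, p, delta):
    """Momentum-space diffusion coefficient from the quasi-linear relation
    D_pp = 4 v_A^2 p^2 / (3 delta (4 - delta^2) (4 - delta) D).
    All quantities are in mutually consistent (arbitrary) units."""
    return 4.0 * v_A**2 * p**2 / (3.0 * delta * (4.0 - delta**2) * (4.0 - delta) * D)
```

Note the inverse dependence $D_{pp} \propto 1/D$: the faster particles diffuse in space, the less time they spend being reaccelerated.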
Although DRAGON allows to account also for CR convection, we neglect this effect in the present analysis showing {\it a posteriori} that it is not necessary to consistently describe all the available data above $1~{\rm GeV}/{\rm n}$ (see Sec.~\ref{sec:discussion}). Hence in the following we will set $v_{c} = 0$.
By this we do not mean that CR data implies that the physical value of $v_{c}$ is actually vanishing but only that an effective description of their propagation is possible even if convection is disregarded (see the discussion at the end of Sec.\ref{sec:discussion}).
DRAGON~\cite{Evoli:2008dv} solves Eq.~(\ref{eq:diffusion_equation}) numerically in the stationary limit $\partial N_{i}/\partial t = 0$
by imposing the following boundary conditions: $N(p,R_{\rm max},z) = N(p,r,z_{\rm min}) = N(p,r,z_{\rm max}) = 0$, corresponding to free escape of CRs at the outer limit of the Galaxy; a symmetry condition on the axis $r = 0$, $N(p,0+\epsilon,z) = N(p,0-\epsilon,z)$ ($\epsilon \ll 1$), due to the assumed cylindrically symmetric setup; a null flux condition $\partial N/\partial p = 0$ on the momentum boundaries, which stems from the fact that particles at null momentum should not lose momentum anymore. Even though the presence of reacceleration can effectively invalidate this condition, by producing a net flux from low to high momenta, we remark that this can affect only the part of the spectrum close to the momentum boundary. For this reason, we adopt an energy grid whose extrema are well below and well above the minimal and maximal energy of the data set we consider. In such a way, our results are in fact independent of the momentum boundary conditions we impose.
The spatial limits of our simulation box are defined by $R_{\rm max} = 20~{\rm kpc}$ and $z_{\rm max} = -z_{\rm min}$. We start the spallation routine from $Z = 16$, having verified that the effect of heavier nuclei on the results of the present analysis is negligible when compared to other uncertainties, being below the 1\% level.
We briefly recall below the main assumptions we make for the terms appearing in Eq.~(\ref{eq:diffusion_equation}).
\subsection{Spatial diffusion coefficient}
The dependence of the diffusion coefficient $D$ on the particle rigidity $\rho$ and on the distance from the Galactic plane $z$ is taken to be (here we assume $D$ to be cylindrically symmetric and independent on the Galactocentric radius $r$)
\begin{equation}
\label{eq:diff_coeff}
D(\rho, z) = D_0 ~\beta^\eta \left(\frac{\rho}{\rho_0}\right)^\delta\ ~ \exp\left\{|z|/z_t \right\}\;,
\end{equation}
where $\beta$ is the particle velocity in units of the speed of light $c$.
As shown in \cite{Evoli:2008dv}, a vertically growing $D$ is physically more realistic than a uniform one and allows to get a more regular behavior of the CR density at the vertical boundaries of the propagation halo with respect to the case of uniform diffusion. As far as the analysis discussed in this paper is concerned, however, the substitution of such a profile with a vertically uniform $D$ only requires a change of the normalization factor $D_0$.
Generally, the value $\eta =1$ is adopted in the related literature. This parameter, however, is not directly constrained by independent observations
and other values have been recently considered (see e.g. \cite{Maurin:09}).
We neglect here a possible dependence on the radial coordinate $r$, which was considered also in \cite{Evoli:2008dv}.
We always set $z_{\rm max} = 2\times z_{t}$ in Eq.~(\ref{eq:diff_coeff}) to avoid border effects, and $\rho_{0} = 3~{\rm GV}$ in the following.
Finally, we assume no break in the power-law dependence of $D$ on rigidity, and we checked that our results do not depend on the choice of $z_{\rm max}$, but only on $z_{t}$, which then acts as the effective vertical size of the diffusive halo.
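As an illustration, Eq.~(\ref{eq:diff_coeff}) is straightforward to evaluate numerically. The sketch below uses placeholder parameter values chosen only to show the functional dependence; they are not the best-fit values derived later in the analysis.

```python
import math

def diffusion_coefficient(rho, z, D0=1.0, eta=1.0, delta=0.44,
                          rho0=3.0, z_t=4.0, beta=1.0):
    """D(rho, z) of Eq. (2): rigidities rho and rho0 in GV, heights z and
    z_t in kpc, D0 in arbitrary units. Defaults are illustrative only."""
    return D0 * beta**eta * (rho / rho0)**delta * math.exp(abs(z) / z_t)
```

At the reference rigidity $\rho_0$ and on the Galactic plane the function returns $D_0\,\beta^\eta$, and it grows by a factor $e$ every $z_t$ away from the plane, as in the text.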
\subsection{Cosmic ray sources}
For the source term we assume the general form
\begin{equation}
Q_{i}(E_{k},r,z) = f_S(r,z)\ q^{i}_{0}\ \left(\frac{\rho(E_{k})}{\rho_0}\right)^{- \alpha_i} \;,
\end{equation}
and impose the normalization condition $f_{S}(r_{\odot},z_{\odot}) = 1$.
We assume $f_S(r,z)$ to trace the SNR distribution as modeled in \cite{Ferriere:01} on the basis of pulsar and progenitor star surveys \cite{Evoli:2007iy}.
This is slightly different from the radial distributions adopted in \cite{Strong:04b} and in \cite{Maurin:01,Maurin:2002ua} which are based on pulsar surveys only.
Two-zone models assume a step like dependence of $f_S(r,z)$ as function of $z$, being 1 in the Galactic disk ($|z| < z_d$) and 0 outside.
For each value of $\delta$ in Eq.~(\ref{eq:diff_coeff}) we fix $\alpha_i$ by requiring that at very high energy ($E_k \gg 100~{\rm GeV}$/n) the equality $\alpha_i + \delta = \gamma_i$ holds, as expected in a plain diffusion regime. Indeed, at such high energies reacceleration and spallation processes are irrelevant.
Here we adopt the same spectral index ($\gamma_i = \gamma$, hence $\alpha_i = \alpha$) for all nuclei as indicated by recent experimental results \cite{CREAM3,Boyle:2008ut,Ave:2008uw}.
The low energy behavior of $Q$ is quite uncertain and several different dependencies of $Q$ on the velocity $\beta$ have been considered (see e.g.~\cite{Maurin:01}). In the energy range explored in this work, however, different choices of such behavior have negligible effects. This strengthens further the importance of relying on high energy data to reduce systematic uncertainties.
The injection abundances $q^i_0$ are tuned so that the propagated, and modulated, spectra of primary species fit the observed ones. Here we choose to normalize the source spectra of Oxygen and heavier nuclides to reproduce the observed spectra in CRs at $E \sim 100~{\rm GeV}$/n. On the other hand, the Carbon and Nitrogen injection abundances (with respect to Oxygen), which together with Oxygen mostly affect the B/C, are free parameters, over which we marginalize our statistical variables in our analysis, in a way which we will describe in Section \ref{sec:analysis}. Our data set for Oxygen and heavier nuclei consists of ACE/CRIS data \cite{ACE}. For B, C and N, besides CREAM's, we use experimental data provided by the HEAO-3 \cite{HEAO-3} and CRN \cite{CRN} satellite-based experiments. HEAO-3 B/C data are nicely confirmed by a recent preliminary analysis of AMS-01 data \cite{AMS1_BC} which, however, we do not use in this work.
We verified {\it a posteriori} that the observed Oxygen spectrum (see below), as well as the subFe/Fe ratios\footnote{ To compute these ratios, of course we extended our numerical simulations up to $Z=28$.}, are reasonably reproduced by our best-fit model.
For the primary proton local interstellar spectrum (LIS) we adopt $J_p = 1.6 \times 10^4\ (E_k/1~{\rm GeV})^{-2.73}~({\rm m}^2~{\rm s}~ {\rm sr}~ {\rm GeV})^{-1}$ as measured by BESS during the 1998 flight \cite{Sanuki:2000wh}. This spectrum also provides an excellent fit to AMS-01 \cite{AMS01} data and, as we will show below, also to preliminary PAMELA proton spectrum data \cite{PAMELA:proton}.
What is most important here, however, is that we assume no spectral breaks in the source spectrum of all nuclear species.
As we will discuss in Sec.~\ref{sec:discussion} this point is crucial to understand the difference between our results and those of some previous works.
\subsection{Nuclear cross sections}
The spallation cross sections and the spallation network are based on a compilation of experimental data (when present) and semi-empirical energy dependent interpolation formulas as provided e.g.~in \cite{Letaw:83,Webber:90,Silbeberg} (see also GALPROP, \cite{GALPROPweb} and references therein, from which data and some related routines have been obtained and included in DRAGON as an external library).
For antiprotons, the main processes responsible for their production are $p - p_{\rm gas}$, $p - {\rm He}_{\rm gas}$, ${\rm He} - p_{\rm gas}$ and
${\rm He} - {\rm He}_{\rm gas}$ reactions, plus a negligible contribution from other nuclei. Similarly to \cite{Moskalenko:2001ya,Donato:01} we adopt the ${\bar p}$ production cross-section calculated using the parametrization given in Tan \& Ng \cite{Tan:1982nc}.
Inelastic scattering, annihilation and tertiary ${\bar p}$ (antiprotons which have been inelastically scattered) are treated as in \cite{Moskalenko:2001ya}.
In order to test the possible dependence of our results on systematical uncertainties on those cross sections, we performed several DRAGON runs using also a different set of nuclear cross sections as determined in \cite{Webber:03} (see Sec.\ref{sec:discussion}).
\subsection{Target gas}
The ISM gas is composed mainly by molecular, atomic and ionized hydrogen (respectively, H$_2$, HI and HII).
Here we adopt the same distributions as in \cite{Strong:98,Evoli:2008dv}.
We checked that other possible choices do not affect significantly our final results.
Following \cite{Asplund:2004eu} we take the He/H numerical fraction in the ISM to be 0.11. We neglect heavier nuclear species.
\subsection{Solar modulation}
We describe the effect of solar modulation on CR spectra by exploiting the widely used force-free approximation \cite{Gleeson&Axford}, prescribing that the modulated spectrum $J(E_k,Z,A)$ of a CR species is given, with respect to the Local Interstellar Spectrum (LIS) $J_{\rm LIS}(E_k,Z,A)$, by
\begin{equation}
\label{eq:modulation}
J(E_k, Z, A) = \frac{ (E_k + m)^2 - m^2}{\left(E_k + m + \frac{Z|e|}{A} \Phi \right)^2 - m^2}\ J_{\rm LIS}(E_k + \frac{Z|e|}{A} \Phi, Z, A)\;,
\end{equation}
where $m$ is the nucleon mass and $\Phi$ is the so called modulation potential. This potential is known to change with the solar activity with a period of 11 years.
It must be stressed that the potential $\Phi$ is not a model independent quantity. Rather, for each propagation model it should be obtained by fitting the CR spectra at low energy. The possibility of restricting our analysis to $E_{k} > 1~{\rm GeV}/{\rm n}$ will reduce the systematic uncertainties associated to this unknown. Above $1~{\rm GeV}/{\rm n}$ the effects of modulation on the secondary/primary CR ratios used in our analysis
are tiny and can safely be accounted for by means of the simple force free approximation.
For protons and antiprotons we use a potential which allows to match BESS98 \cite{Sanuki:2000wh}, AMS-01 \cite{AMS01} and PAMELA \cite{PAMELA:proton} proton data even well below 1 GeV/n (see Fig.~\ref{fig:protons}). Indeed, all these experiments took their data in periods of almost the same, nearly minimal, solar activity. Although a more complicated and realistic treatment of solar modulation, accounting for charge-dependent effects and the 22-year change of polarity associated with the solar cycle, might be needed when dealing with ${\bar p}/p$ ratios for $E_{k} \lesssim 1~{\rm GeV}/{\rm n}$ (see e.g.~\cite{Bieber:1999dn}), we decide to work in the framework of the force-free field approximation and show a posteriori that the data considered in our analysis can naturally be described in that framework.
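To make the prescription explicit, Eq.~(\ref{eq:modulation}) can be implemented in a few lines. The sketch below is an illustration rather than DRAGON code; the power-law `proton_lis` is only a stand-in LIS of the BESS-like form quoted earlier, and the default potential value is arbitrary.

```python
def force_field_modulate(E_k, J_LIS, Z=1, A=1, Phi=0.55, m=0.938):
    """Force-field approximation of Eq. (4).
    E_k: kinetic energy per nucleon [GeV/n]; Phi: modulation potential [GV];
    m: nucleon mass [GeV]; J_LIS: callable returning the LIS flux."""
    shift = abs(Z) / A * Phi                      # energy shift per nucleon
    num = (E_k + m)**2 - m**2
    den = (E_k + m + shift)**2 - m**2
    return num / den * J_LIS(E_k + shift)

# Stand-in LIS: J ~ 1.6e4 (E_k / GeV)^-2.73  [m^-2 s^-1 sr^-1 GeV^-1]
proton_lis = lambda E_k: 1.6e4 * E_k**(-2.73)
```

The suppression is strong near 1 GeV/n and shrinks steadily with energy, which is one reason the fits in this work are restricted to energies above a few GeV/n.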
\section{Analysis and results}
\label{sec:analysis}
Our goal is to constrain the main propagation parameters $\delta$, $D_{0}$, $z_{t}$ and $v_A$ entering Eq.~(\ref{eq:diff_coeff}).
To this aim, we compare to experimental data our prediction for the following physical quantities: the B/C, N/O, C/O ratios for $1 < E_{k} < 10^3~{\rm GeV}/{\rm n}$ and the
$\bar{p}/p$ ratio for $1 < E_{k} < 10^2~{\rm GeV}/{\rm n}$. We will check {\it a posteriori} that also the Oxygen, proton and antiproton absolute spectra are correctly reproduced by our preferred models.
In order to test the relevance of low energy physics on our constraints of the diffusion-reacceleration parameters, we perform our analysis for three different values of the minimal energy $E_{\rm min}$. We will then motivate the choice of the most suitable value of $E_{\rm min}$.
As long as the propagation halo scale height is allowed to vary within the range $2 \lesssim z_t \lesssim 6~{\rm kpc}$ (which is what we assume here),
$D_{0}$ and $z_{t}$ are practically degenerate so that our results depend only on the ratio $D_{0}/z_{t}$. Throughout this paper we will always express this quantity in units of $10^{28}~{\rm cm}^2~{\rm s}^{-1}~{\rm kpc}^{-1}$.
We verified {\it a posteriori} that for this range of $z_t$ values, the predicted $^{10}$Be/$^{9}$Be ratio, which constrains the CR propagation time hence
the vertical scale height of the propagation region \cite{Berezinsky:book} when combined with secondary/primary stable nuclei data,
is consistent with experimental data.
\subsection{Light nuclei ratios}
\label{sec:nuclei_analysis}
\subsubsection{Method}
We already showed \cite{Evoli:2008dv} that in order to constrain correctly the propagation parameters on the basis of B/C measurements it is essential to take into proper account that the main primary parent species of Boron\footnote{ A non negligible contribution to the $^{10}$B comes from the beta decay of $^{10}$Be, which is properly accounted for in our analysis.} are also affected by propagation. This holds not only for the Nitrogen (N = $^{14}$N + $^{15}$N), which gets a significant secondary contribution, but also for Carbon and Oxygen, since for $E_k < 100~{\rm GeV}/{\rm n}$ their spectra are shaped by spallation losses in a propagation dependent way. Therefore, we perform our likelihood analysis in three steps:
\begin{enumerate}
\item for fixed values of the propagation parameters $v_A$, $\delta$, and $D_{0}/z_{t}$ we vary the C/O and N/O source ratios to compute the $\chi^{2}\ $\footnote{Every time we refer to a $\chi^{2}$, we mean the $\chi^{2}$ divided by the number of degrees of freedom, i.e.~the so called reduced $\chi^{2}$.} (which we call $\chi^{2}_{\rm C,N,O}$) of the propagated, and modulated, C/O and N/O ratios against experimental data in the energy range $1 < E_k < 10^3~{{\rm GeV}/{\rm n}}$;
\item for the same fixed value of $v_A$, we finely sample the parameter space ($\delta$, $D_{0}/z_{t}$) by using, for each couple of these parameters, the C/O and N/O source ratios which minimize $\chi^{2}_{\rm C,N,O}$; for each of these realizations we compute the $\chi^{2}$ (which we call $\chi^{2}_{\rm B/C}$) for the B/C modulated ratio against data with $E > E_{\rm min}$;
\item we repeat the same analysis for several values of $v_A$ to probe the effect of diffusive reacceleration. For each value of $v_A$ we then determine the allowed ranges of $\delta$ and $D_{0}/z_{t}$ for several Confidence Levels (CL).
\end{enumerate}
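Schematically, steps (i)--(iii) amount to a brute-force scan of a $\chi^2$ surface over the $(\delta, D_0/z_t)$ plane. The sketch below illustrates step (ii); here `predict_bc` is a hypothetical stand-in for a full propagated B/C prediction (a DRAGON run), which is of course far too expensive to inline.

```python
import numpy as np

def reduced_chi2(model, data, sigma, n_fitted):
    """chi^2 per degree of freedom, as used throughout the analysis."""
    chi2 = np.sum(((model - data) / sigma) ** 2)
    return chi2 / (len(data) - n_fitted)

def grid_scan(predict_bc, data, sigma, deltas, d0_over_zt, n_fitted=2):
    """Sample the (delta, D0/zt) plane; return the reduced-chi^2 surface
    and the best-fit pair. predict_bc(delta, dz) is a placeholder for a
    full propagation run at fixed v_A."""
    surface = np.array([[reduced_chi2(predict_bc(d, dz), data, sigma, n_fitted)
                         for dz in d0_over_zt] for d in deltas])
    i, j = np.unravel_index(np.argmin(surface), surface.shape)
    return surface, deltas[i], d0_over_zt[j]
```

Repeating the scan for several fixed values of $v_A$, as in step (iii), then yields the confidence regions shown in Fig.~\ref{fig:CL}.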
In \cite{Evoli:2008dv} only items (i) and (ii) were performed, for $v_A = 0$ and without accounting for CREAM data, not yet public at that time.
The wide energy range covered by these recent data allows us to perform our analysis using three different energy intervals defined by $E_{\rm min} = 1,\, 5$ and $10~{{\rm GeV}/{\rm n}}$ respectively and by the same $E_{\rm max} = 1~{\rm TeV}/{\rm n}$.
As we already stated in the above, we do not account in our analysis for light nuclei and antiproton data with energy below $1~{\rm GeV}/{\rm n}$ as they are affected by poorly known low energy physics and are not necessary to constrain the high energy behavior of the diffusion coefficient, which is the main goal of this work. In Sec. \ref{sec:le_model} we will show, however, that
specific models which fit all data even below that energy can be built, adopting diffusion coefficients allowed from our analysis.
\subsubsection{Results}
In Tab.~\ref{tab:analysis} we report the best-fit model parameters, and the relative minimal $\chi^{2}_{\rm B/C}$'s, as determined for several values of $v_A$ and $E_{\rm min}$.
\begin{table}[tbp]
\centering
\caption{Best-fit parameters, and the corresponding $\chi^2$ values, resulting from comparing our model predictions with nuclear experimental data alone (B/C analysis) and with nuclear and $\bar{p}/p$ combined data (combined statistical analysis), as described in the text. The values corresponding to $E_{\rm min} = 5~{\rm GeV}/{\rm n}$ for the combined analysis, which are used to constrain our models, are reported in bold.}
\begin{tabular}{|l|l|l|l|l|l|l|l|}
\hline \multicolumn{2}{|c|}{ } & \multicolumn{3}{|c|}{B/C analysis} & \multicolumn{3}{|c|}{combined analysis} \\
\hline $v_A \,{[\rm km/s]}$ & $E_{\rm min}\, [{\rm GeV}/{\rm n}]$ & $\delta$ &
$D_{0}/z_{t}$ & $\chi^2$ & $\delta$ & $D_{0}/z_{t}$ & $\chi^2$ \\
\hline
\multirow{3}{*}{0} & 1 & 0.57 & 0.60 & 0.38 & 0.47 & 0.74 & 3.25 \\
& 5 & 0.52 & 0.65 & 0.33 & {\bf 0.41} & {\bf 0.85} & {\bf 2.04} \\
& 10 & 0.46 & 0.76 & 0.19 & 0.44 & 0.82 & 1.57 \\
\hline
\multirow{3}{*}{10} & 1 & 0.52 & 0.68 & 0.32 & 0.49 & 0.71 & 1.47 \\
& 5 & 0.49 & 0.71 & 0.28 & {\bf 0.41} & {\bf 0.85} & {\bf 1.69} \\
& 10 & 0.44 & 0.82 & 0.20 & 0.44 & 0.82 & 0.12 \\
\hline
\multirow{3}{*}{15} & 1 & 0.46 & 0.76 & 0.33 & 0.47 & 0.76 & 0.94 \\
& 5 & 0.49 & 0.73 & 0.26 & {\bf 0.44} & {\bf 0.82} & {\bf 0.12} \\
& 10 & 0.44 & 0.84 & 0.18 & 0.41 & 0.98 & 0.16 \\
\hline
\multirow{3}{*}{20} & 1 & 0.41 & 0.90 & 0.47 & 0.47 & 0.79 & 2.28 \\
& 5 & 0.44 & 0.84 & 0.22 & {\bf 0.44} & {\bf 0.84} & {\bf 0.85} \\
& 10 & 0.44 & 0.87 & 0.20 & 0.44 & 0.85 & 0.98 \\
\hline
\multirow{3}{*}{30} & 1 & 0.33 & 1.20 & 0.40 & 0.33 & 1.20 & 5.84 \\
& 5 & 0.38 & 1.06 & 0.20 & {\bf 0.36} & {\bf 1.09} & {\bf 2.47} \\
& 10 & 0.41 & 0.98 & 0.16 & 0.38 & 1.04 & 1.61 \\
\hline
\end{tabular}
\label{tab:analysis}
\end{table}
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.5]{CL_COMBO_eta1_vA_10_20_30_Emin_5_phi_500}
\caption{The 68\%, 95\% and 99\% confidence level regions of DRAGON models, computed for $E_{\rm min} = 5~{\rm GeV}/{\rm n}$, are represented in the plane $(D_{0}/z_{t},\delta)$. For the 68\% confidence level the corresponding value of the $\chi^2$ is also shown. The red crosses show the best-fit position.
Each row corresponds to different values of the Alfv\`en velocity: $v_A = 10,20,30~{\rm km}/{\rm s}$ from top to bottom.
Each column corresponds to different analyses: B/C (left panels), $\bar{p}/p$ (center panels) and combined (right panels).
}
\label{fig:CL}
\end{figure}
First of all we notice that in the highest energy range ($E_{\rm min} = 10~{{\rm GeV}/{\rm n}}$) the best-fit values of $\delta$ and $D_{0}/z_{t}$ are weakly dependent on the Alfv\`en velocity. In particular, the best fit value of $\delta$ stays in the very narrow range $0.40 \div 0.46$ as $v_{A}$ varies from $0$ to $30~{\rm km}/{\rm s}$.
This agrees with the common wisdom that reacceleration is almost ineffective at such high energies (see also Fig.~\ref{fig:BC_variVA}).
The most useful results, however, are those obtained for $E_{\rm min} = 5~{{\rm GeV}/{\rm n}}$ since that threshold provides the best compromise between two opposite requirements:
1) to include more experimental data in the analysis and 2) to work in an energy range where propagation is affected as little as possible by poorly known low energy physics. For example, possible charge dependent drift effects in the solar modulation (see e.g. \cite{Bieber:1999dn,Moskalenko:2001ya}) can be safely neglected in that energy range. Best fit parameters and confidence level contours obtained for that value of $E_{\rm min}$ are shown in Tab.~\ref{tab:analysis} and in Fig.~\ref{fig:CL} respectively.
From both we notice that all considered values of $v_A$ are almost equally permitted by the B/C $\chi^2$ analysis, and that the allowed $\delta - D_0/z_t$ region slightly moves towards low $\delta$'s and large $D_0/z_t$'s as $v_A$ is increased from 0 to $30~{\rm km}/{\rm s}$. While Kraichnan diffusion is clearly favored for low values of $v_A$, Kolmogorov becomes favored for $v_A \; \raise0.3ex\hbox{$>$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; 30 ~{\rm km}/{\rm s}$. The choice among those models, however, is difficult in the absence of an independent estimate of $v_A$. We will show that the antiproton/proton data break such degeneracy.
In Fig.~\ref{fig:BC_variVA} we show the effect on the B/C ratio of varying $v_A$ keeping $\delta$ and $D_{0}/z_{t}$ fixed to the values $(0.45, 0.8)$, which will be motivated below.
\subsection{Antiprotons}
\label{sec:antip_analys}
The statistical analysis for the $\bar{p}/p$ ratio is rather simpler than the one for B/C. Indeed, the secondary $\bar p$ production depends, besides on $D_{0}/z_{t}$, $\delta$ and $v_A$, only on the source abundance ratio He/p. This last unknown quantity can be easily fixed by looking at the measured spectrum of He at Earth, which is relatively well known.
Therefore, we do not need to fit the source abundance ratio here and can directly proceed to map the $\chi^{2}_{\bar{p}/p}$ in the ($D_{0}/z_{t},~\delta$) space, for several $v_A$, similarly to what is described in items (ii) and (iii) of the previous subsection.
In the second column of Fig.~\ref{fig:CL} we show the statistically allowed regions in the plane $(D_{0}/z_{t},\delta)$ for several values of $v_A$ and compare them with the corresponding regions determined from the light nuclei analysis (first column in the same figure). The allowed CL region is significantly larger than that determined from the light nuclei analysis (due to the larger experimental errors) and the two overlap only for some values of the Alfv\`en velocity.
In fact, it is remarkable that those regions move in almost opposite directions as $v_A$ varies, so that not all values of $v_A$ are allowed by a combined analysis (see Sec.~\ref{sec:comb}).
\subsection{Combined analysis and constraints on the propagation parameters}\label{sec:comb}
A combined analysis of light secondary/primary nuclei and antiproton/proton data can be performed under the working hypothesis that CR antiprotons are only of secondary origin.
We define the combined reduced $\chi^2$ as $\displaystyle \chi^2_{\rm comb} = \frac{1}{2} \left( \chi^2_{\rm BC} + \chi^2_{\rm ap/p} \right)$.
The CL regions for several values of $v_A$ are reported in the third column of Fig.~\ref{fig:CL} and the corresponding best-fit parameters in Tab.~\ref{tab:analysis}.
Again, here we use only data with $E > E_{\rm min} = 5~{\rm GeV}/{\rm n}$.
As we anticipated in the previous subsection, in general the CL region allowed by the combined analysis is smaller than the B/C one.
Indeed, while the parameter regions constrained by the B/C and $\bar{p}/p$ data nicely overlap
for $10 \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; v_A \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; 20~{\rm km}~ {\rm s}^{-1}$, models outside this range do not allow a combined fit of both data sets at the required level of statistical significance (higher than 95\%).
The fact that only a limited range of Alfv\`en velocity values is allowed is a consequence of the different behavior of the B/C and ${\bar p}/{p}$ ratios with $v_A$, due to the different spectral shapes of these ratios. This is a new and quite interesting result.
It is reassuring to notice that the results of the analysis performed for $E_{\rm min} = 5$ and $10~{\rm GeV}/{\rm n}$ are practically coincident, which makes us confident that the combined analysis performed for $E_{\rm min} = 5~{\rm GeV}/{\rm n}$ probes already the purely diffusive CR regime.
It is also remarkable that the best fit values of $\delta$ and $D_0/z_t$ stay almost unchanged when varying $v_A$. In particular $(\delta, D_0/z_t) \simeq (0.4 - 0.45, 0.8)$ for all allowed values $v_A = 10 - 20~{\rm km}~{\rm s}^{-1}$ of the Alfv\`en velocity. This makes us confident that the combined analysis performed for $E_{\rm min} = 5~{\rm GeV}/{\rm n}$ best probes the diffusion-reacceleration parameters.
Among the values considered, $v_A = 15~{\rm km}~{\rm s}^{-1}$ is the Alfv\`en velocity which minimizes the $\chi^2$ of the combined analysis; hence it gives rise to the best overlap between the light nuclei and the ${\bar p}/{p}$ confidence regions.
This is also visible from Figs.~\ref{fig:BC_variVA} and \ref{fig:app_variVA} where the B/C and the ${\bar p}/{p}$ ratios computed with $(\delta, D_0/z_t) \simeq (0.45, 0.8)$ are plotted for several $v_A$'s.
It is also interesting to notice that the dependence of the ${\bar p}/p$ ratio on $v_A$ is driven by that of the proton spectrum since the absolute ${\bar p}$ spectrum is practically unaffected by re-acceleration (see Fig.\ref{fig:ap_variVA}).
We stress that Figs.~\ref{fig:BC_variVA}-\ref{fig:p_variVA} are given here mainly for illustrative reasons since, below a few GeV/n, some additional physics clearly needs to be introduced to reproduce the B/C data (see Sec.~\ref{sec:le_model}).
Since the $v_A = 15~{\rm km} {\rm s}^{-1}$ combined analysis CL region is the largest, it also provides the most conservative constraints on $\delta$ and $D_0/z_{t}$. They are
$0.3 \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; \delta \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; 0.6$ and $0.6 \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; D_0/z_{t} \; \raise0.3ex\hbox{$<$\kern-0.75em \raise-1.1ex\hbox{$\sim$}}\; 1$ at 95\% CL.
\begin{figure}[tbp]
\centering
\subfigure[]
{
\includegraphics[scale=0.4]{BC_delta044_eta1_500}
\label{fig:BC_variVA}
}
\subfigure[]
{
\includegraphics[scale=0.4]{antiprotons_ratio_delta044_eta1_700}
\label{fig:app_variVA}
}
\subfigure[]
{
\includegraphics[scale=0.4]{antiprotons_delta044_eta1_700}
\label{fig:ap_variVA}
}
\subfigure[]
{
\includegraphics[scale=0.4]{protons_delta044_eta1_700}
\label{fig:p_variVA}
}
\caption{The B/C (panel a) and ${\bar p}/p$ (panel b) ratios, as well as the antiproton (panel c) and proton (panel d) absolute spectra, computed with DRAGON for $\delta = 0.45$ and $D_0/z_t = 0.8$, are plotted for several values of $v_A$ and compared with the respective experimental data. Dotted, short-dashed, solid, dot-dashed and long-dashed lines correspond to $v_A = 0,10,15,20,30~{\rm km}/{\rm s}$ respectively.
Here $\eta = 1$, which clearly does not allow us to match nuclear data below $1~{\rm GeV}/{\rm n}$. For this reason the
modulation potentials adopted here, $\Phi = 500~{\rm MV}$ for the B/C plot (as required to reproduce low energy Oxygen data) and $\Phi = 700~{\rm MV}$ for the ${\bar p}/p$ (to fit proton data), are not representative.}
\end{figure}
It should be kept in mind that our analysis accounts only for statistical experimental errors. Several systematic uncertainties, however, may affect our constraints too. Among them, systematic errors in the experimental data, uncertainties in the Galactic gas density and hydrogen fraction distributions, and nuclear fragmentation cross sections play a major role. A detailed discussion of the possible impact of these uncertainties on the determination of the CR propagation parameters is beyond the aims of this work. A thorough analysis was recently performed in \cite{Maurin:09} showing that, if low energy data are accounted for (which requires introducing several unknown parameters beyond those considered in this work), the systematic uncertainties on $D_0,~\delta$ and $v_A$ can be comparable to, or even larger than, the statistical ones.
However, the former uncertainties are significantly smaller if one considers only a subclass of models without convection and keeps fixed other parameters which only matter at low energies, as we do in this work. For example, it was shown in \cite{Maurin:09} that for models with $v_c = 0$ the effect of considering different cross-section sets amounts to a $\sim 40\,\%$ variation of $\delta$, which reduces to $\sim 10\,\%$ if one considers only the most updated cross section sets.
We verified with DRAGON that replacing the GALPROP nuclear fragmentation cross sections with those given in \cite{Webber:03} produces only a marginal effect on the B/C ratio. The relative effect of cross section uncertainties on the antiproton/proton ratio is negligible here due to the large statistical errors on those data.
\subsection{Maximal and minimal antiproton spectra}
\label{sec:max_min_models}
The previous results clearly favor a standard interpretation of the measured antiproton spectrum in terms of purely secondary production from CR nuclei. It is still possible, however, that a subdominant antiproton component arises from unconventional processes.
In order to constrain such ``exotic" component(s) with experimental data, one has to compare antiproton data with the predictions of the theoretical models validated against CR nuclei data alone.
For this purpose we define, for each value of $v_A$ considered above, a pair of MAX and MIN models which respectively maximize and minimize the antiproton absolute flux integrated in the range $1 - 100~{\rm GeV}$, under the condition of being compatible with secondary/primary light nuclei data down to $1~{\rm GeV}/{\rm n}$ within 95\% CL.
In Fig.~\ref{fig:min_max} we show the allowed ranges of the antiproton absolute spectrum for several values of $v_A$. Among the models considered here the absolute MAX and MIN models are those defined by the parameters $(\delta, D_0/z_t, v_A) = (0.68, 0.46, 0)$ and $(0.30, 1.2, 30)$ respectively.
\begin{figure}[tbp]
\centering
\includegraphics[scale=0.4]{apflux_minmax}
\caption{The ${\bar p}$ absolute spectrum is shown for $v_A = 10, 20, 30~{\rm km}/{\rm s}$ (from the left to the right panel respectively). The upper and lower curves correspond to the MAX and MIN models defined as in Sec.~\ref{sec:max_min_models} respectively.}
\label{fig:min_max}
\end{figure}
Therefore, we conclude that, under the hypotheses adopted in this work, $\bar p$ constraints on an exotic component should not use, as propagation models, any model whose $\bar p$ background prediction is lower than our MIN (or larger than our MAX) model, as it would be in contrast with B/C data at 95\% CL. Hence, the most conservative constraint, under our hypotheses, arises from the requirement that the sum of the background $\bar p$ predicted by the MIN model plus the exotic $\bar p$ component does not exceed the experimental data, within some CL.
\section{A comprehensive model describing all data sets down to $0.1~{\rm GeV}/{\rm n}$}\label{sec:le_model}
The aim of this section is to test the consistency of our previous results with CR data below a few GeV/n and to identify an effective model allowing us to fit all available data.
It is evident from Fig.~\ref{fig:BC_variVA} that while the best fit model obtained for $\eta = 1$ provides an excellent fit of experimental data above a few ${\rm GeV}/{\rm n}$, below that energy it overshoots the B/C observations.
As we discussed, such a discrepancy may be attributable to a number of effects which, at low energies, introduce degeneracies among the relevant parameters.
For this reason, a statistical analysis aimed at fitting those low energy parameters to presently available data would be hard to interpret (see e.g. \cite{Maurin:09}) and is beyond the aims of this work.
Here we follow a more phenomenological approach tuning only the parameter $\eta$ (see Eq. \ref{eq:diff_coeff}) which sets the dependence of the diffusion coefficient on the particle velocity (a similar approach was followed in \cite{Maurin:09}).
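For the reader's convenience we recall the form of the diffusion coefficient being tuned here. Assuming the standard parametrization (the exact normalization conventions are those of Eq.~\ref{eq:diff_coeff}; this recap is ours), it reads
\[
D(\rho) \propto \beta^{\eta}\, \left(\frac{\rho}{\rho_{0}}\right)^{\delta},
\]
where $\beta = v/c$ and $\rho$ is the particle rigidity. Since $\beta^{\eta}$ with $\eta < 0$ enhances diffusion at low velocities, a negative $\eta$ shortens the low energy residence time of CRs in the Galaxy and hence suppresses the low energy secondary production responsible for the B/C overshoot.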
Interestingly, we find that the choice $\eta \simeq - 0.4$ allows us to match light nuclei as well as antiproton data well below $1~{\rm GeV}/{\rm n}$
for almost the same range of $\delta$ and $D_0/z_t$ values found for $\eta =1$.
Indeed, we checked that the $\eta = -0.4$ and $\eta = 1$ CL regions computed for $E_{\rm min} = 5~{\rm GeV}/{\rm n}$ almost coincide (which is not the case for $E_{\rm min} = 1~{\rm GeV}/{\rm n}$).
In Figs.~\ref{fig:BC_comp} - \ref{fig:protons} we show how our best fit model, obtained for $\eta = - 0.4$, $\delta = 0.5$, $D_0/z_t = 0.7$, and $v_A = 15~{\rm km}/{\rm s}$, nicely reproduces all relevant data sets. These include also the N/O and C/O ratios (with $\sim 6~\%$ and $\sim 75~\%$ injection ratios respectively) as well as the absolute oxygen spectrum.
We notice that the modified dependence of the diffusion coefficient upon rigidity, which is the consequence of adopting a value of $\eta$ different from $1$, can be considered as an effective modelization of
the physics taking place at low energy, including some non-linear phenomena such as the dissipation of magneto-hydrodynamic (MHD) waves by their resonant interaction with CRs \cite{Ptuskin:2005ax}. Since this is the same interaction responsible for CR diffusion in the ISM, such an effect is unavoidable at some level. Interestingly, the value of $\delta$ used in \cite{Ptuskin:2005ax} to fit the B/C in the presence of MHD wave dissipation is 0.5, which is consistent with what we found here (differently from what we do here, however, a break in the injection index was invoked in that work).
\section{Discussion and comparison with previous results}
\label{sec:discussion}
As we mentioned above, our numerical diffusion code DRAGON reproduces the same results as GALPROP \cite{GALPROPweb} under the same physical conditions.
Our analysis and main conclusions, however, differ significantly from those reported in several papers based on that code.
In order to clarify the reasons of such a discrepancy, in Fig.~\ref{fig:BC_comp} and \ref{fig:apratio_comp} we compare the predictions of our reference diffusion-reacceleration model $(\delta, D_0/z_{t}, v_A, \eta) = (0.5, 0.7, 15, - 0.4)$, which for brevity we call {\it Kraichnan model},
with those obtained using the propagation parameters (and source distribution) of the {\it Kolmogorov model} discussed in \cite{Strong:04}, namely $(\delta, D_0(4~{\rm GV})/z_{t}, v_A, \eta) = (0.33,1.45, 30, 1)$ \footnote{In \cite{Strong:04} a spatially uniform diffusion coefficient ($z_{t}=z_{\rm max} = 4~{\rm kpc}$) was assumed. As we already noticed, for the purposes of the present analysis adopting a vertically uniform rather than varying diffusion coefficient only amounts to a rescaling of $D_{0}/z_{t}$. We verified that this does not affect any other result of our analysis.}.
For the latter combination of parameters we consider two variants, represented by the solid/dashed red lines, which differ in the presence/absence of a break at $\rho_{\rm break} = 9~{\rm GV}$ in the CR nuclei source spectra. The {\it Kolmogorov model} considered in \cite{Strong:04} adopts such a break. It is evident from Fig.~\ref{fig:protons} that this is needed in order to reproduce the low energy tail of the observed proton spectrum, which otherwise could not be fitted for any choice of the modulation potential. It is important to notice that this problem arises in all models with strong reacceleration ($v_A > 20~{\rm km}~{\rm s}^{-1}$).
On the other hand our Kraichnan reference model requires a ``modified" behavior of the diffusion coefficient at low energy ($\eta = -0.4$ rather than $\eta = 1$) which, however, may be motivated by independent physical arguments as discussed in Sec.\ref{sec:le_model}.
\begin{figure}[tp]
\centering
\subfigure[]
{
\includegraphics[scale=0.35]{BC_comp}
\label{fig:BC_comp}
}
\subfigure[]
{
\includegraphics[scale=0.35]{NO_comp}
\label{fig:NO_comp}
}
\subfigure
{
\includegraphics[scale=0.35]{CO_comp}
\label{fig:CO_comp}
}
\subfigure
{
\includegraphics[scale=0.35]{oxigen_comp}
\label{fig:oxigen}
}
\caption{The B/C (panel a), N/O (panel b) and C/O (panel c) ratios and the absolute oxygen spectrum (panel d) computed with our preferred {\it Kraichnan}
model (blue solid line), with the {\it Kolmogorov} reference model (red solid line) and with the same model with no break in the CR source spectrum (red dashed line), are compared with available experimental data. In both cases we use DRAGON to model CR propagation and interactions (though almost identical results can be found with GALPROP). Here we use $\Phi = 450~{\rm MV}$ to modulate both the {\it Kolmogorov} and our {\it Kraichnan} reference models.
$\Phi = 300~{\rm MV}$ was used only to match the B/C ACE data, which were taken during a phase of very low solar activity.}
\end{figure}
\begin{figure}[tp]
\centering
\subfigure[]
{
\includegraphics[scale=0.35]{apratio_comp_new}
\label{fig:apratio_comp}
}
\subfigure[]
{
\includegraphics[scale=0.35]{apflux_comp_new}
\label{fig:apflux_comp}
}
\subfigure[]
{
\includegraphics[scale=0.35]{pflux_comp}
\label{fig:protons}
}
\caption{The $\bar{p}/p$ ratio (panel a) and the $\bar{p}$ (panel b) and proton (panel c) spectra computed with DRAGON are reported here, using the same models and line notation as in Fig.~4. The solar modulation potential used here is $\Phi = 550~{\rm MV}$. In panels (a) and (b) the PAMELA 2010 data are taken from the recent paper \cite{pamela:2010rc} (see the note at the end of this paper).}
\end{figure}
From Fig.~\ref{fig:BC_comp} the reader can see that while both the Kraichnan and Kolmogorov models reproduce the B/C equally well, the former provides a significantly better description of the N/O ratio measured by HEAO-3 \cite{HEAO-3} and CREAM \cite{CREAM} (see Fig.~\ref{fig:NO_comp}). Furthermore, what mostly favors our Kraichnan reference model are the BESS \cite{Sanuki:2000wh}, CAPRICE \cite{Boezio:2001ac} and especially the PAMELA measurements of the ${\bar p}/p$ ratio \cite{Adriani:2008zq} and of the antiproton absolute spectrum \cite{PAMELA:proton}. Indeed, the discrepancy between low energy antiproton data and the prediction of the ``conventional GALPROP model", which was already noted in \cite{Strong:04}, becomes more compelling with the new PAMELA data, as shown in Fig.~\ref{fig:apflux_comp}.
The comparison of our results with those of semi-analytical models is more difficult for obvious reasons.
One of the difficulties lies in the simplified gas and source distribution adopted in those models (see Sec.~\ref{sec:code}). We verified, however, that such differences
only affect the constraints on $D_0/z_t$, with almost no effect on the determination of $\delta$.
We also need to take into account that semi-analytical models (see e.g.~\cite{Maurin:01,Maurin:02}) assume diffusive reacceleration to take place only in the thin
Galactic disk (whose height is $z_{d}$), while in numerical models, such as the one presented here, it takes place in the entire diffusion halo. Therefore, in order to compare the values of the Alfv\`en velocity in those papers with those reported above it is necessary to perform a proper rescaling. This is approximately given by (see e.g.~Eq.~(18) in \cite{Maurin:2002ua}) $v_A = v_A^{\rm SA}\ \sqrt{z_d/z_t}$, with $v_A^{\rm SA}$ being the Alfv\`en velocity in the semi-analytical models and $z_t$
the half-height of the diffusive halo.
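As a rough numerical illustration of this rescaling (the values $z_d \simeq 0.1~{\rm kpc}$ and $z_t \simeq 4~{\rm kpc}$ are typical figures assumed by us, not taken from the papers being compared):
\[
v_A \simeq v_A^{\rm SA}\,\sqrt{\frac{0.1~{\rm kpc}}{4~{\rm kpc}}} \simeq 0.16\, v_A^{\rm SA},
\]
so a semi-analytical value $v_A^{\rm SA} \simeq 60~{\rm km}/{\rm s}$ would rescale to $v_A \simeq 10~{\rm km}/{\rm s}$, of the order of the values favored in this work.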
In spite of these differences, and of the fact that CREAM and PAMELA data were not included in those analyses for chronological reasons, it is comforting that for low values of the convective velocity ($v_c \simeq 0$) the preferred value of $\delta$ estimated in \cite{Maurin:01,Maurin:02} is in remarkably good agreement with that found in this work: $\delta \simeq 0.45$. Interestingly, the rescaled value of $v_A$ determined in \cite{Maurin:01} is $v_A \simeq 10~{\rm km}/{\rm s}$ for $v_c \simeq 0$, which is also in good agreement with our results. It is important to notice that, similarly to what we did in our analysis, no break in the source spectral index was assumed in \cite{Maurin:01,Maurin:02}.
We remind the reader that in the above we always assumed $v_c = 0$ as higher values of that parameter are not required to interpret CR nuclei and antiproton data.
Models with a finite $v_c$, which may also allow us to fit low energy data though with a different combination of some parameters ($\eta$, the modulation potential $\Phi$, or any other), will be considered elsewhere. We already tested, however, that taking $v_c$ in a reasonable range of values does not affect significantly our constraints on the most relevant diffusion coefficient parameters, namely $\delta$ and $v_A$.
Indeed, we verified that for various choices of the convective velocity, and of its vertical gradient, it is always possible to rescale the
diffusion coefficient normalization $D_0/z_t$ so that both the B/C and the antiproton-to-proton ratio remain almost unaffected above $5~{\rm GeV}/{\rm n}$, i.e. the energy range we considered in our analysis. Our tests confirm what is claimed in \cite{Strong:98}: namely, that the contribution of convection to the B/C energy slope is negligible, especially in the intermediate and high energy regions.
\section{Conclusions}
\label{sec:conclusions}
We used recent data on CR light nuclei and antiprotons to determine the conditions of propagation of high energy CRs in the Galaxy, exploiting our numerical code, DRAGON. In the framework of a diffusion-reacceleration model, we performed a thorough analysis of the agreement of our predictions with experimental information, aimed at constraining, in a statistical sense, the most important model parameters: $D_{0}/z_{t}$, $\delta$ and $v_{A}$. The amount and quality of the data are enough to allow us to perform our analysis in a wide energy range, from 1 to $10^3~{\rm GeV}/{\rm n}$, and also to check the evolution of our results when varying the minimal energy at which data are considered. This is essential to reduce the uncertainties related to possibly unknown low energy physics, including solar modulation, and to disentangle the effects of reacceleration from those of diffusion.
One of the most important results of this analysis is that light nuclei (especially B/C) data and antiproton data fit into a unique, coherent diffusion-reacceleration model of propagation, as can be read off Fig.~\ref{fig:CL}. Indeed, for $E > 5~{\rm GeV}/{\rm n}$ (where only the effects of diffusion and re-acceleration matter), the light nuclei and antiproton CL regions nicely overlap to produce combined constraints on $D_{0}/z_{t}$ and $\delta$.
While this was also shown in previous works (which however did not exploit the new CREAM data), a combined statistical analysis of nuclear and antiproton data has been performed here for the first time. We showed that such an analysis allows us to narrow significantly the allowed values of $\delta$ and $D_{0}/z_{t}$: our constraints $0.3 < \delta < 0.6$ and $0.6 < D_{0}/z_{t} < 1$, as obtained at 95\% C.L., are significantly more stringent than those previously determined in the related literature.
Furthermore we found, for the first time, that only a relatively narrow range ($10 - 20~{\rm km}/{\rm s}$) of Alfv\`en velocity values is allowed.
Even well below $5~{\rm GeV}/{\rm n}$, we showed that it is possible to find effective models which, still fulfilling those constraints, allow us to reproduce all relevant data nicely.
We also found that the preferred values of the N/O and C/O ratios at injection are $\sim 6~\%$ and $\sim 75~\%$ respectively. These results, and in particular the analysis of data with $E_{\rm min} = 5~{\rm GeV}/{\rm n}$, clearly favor Kraichnan-like CR diffusion ($\delta = 0.5$) with respect to Kolmogorov ($\delta = 0.33$). It is worth noticing that a relatively large value of $\delta$, such as that preferred by our analysis, would give rise to too large a CR anisotropy if our results are extrapolated to $E_k \gg 10^{5}~{\rm GeV}/{\rm n}$ (see e.g.~\cite{Blasi:2008ch} and refs. therein). Our results, therefore, may call for some changes in the standard CR propagation scenario.
While the effects of systematic uncertainties on fragmentation cross-sections
are not studied in detail in this work, recent results suggest that they should be smaller than the quoted $2 \sigma$ statistical uncertainties on the transport parameters. Indeed, we performed DRAGON runs using a different set of nuclear cross-sections, as determined in \cite{Webber:03}, finding the same B/C and N/O ratios as above within a few percent.
Given that nuclei data alone are able to provide constraints on $D_{0}/z_{t}$ and $\delta$, we use this information to establish a range for the maximal and minimal flux of antiprotons expected from CR interactions in the gas that is still compatible with light nuclei observations within 95\% CL. This range can be used as a CR background in analyses aimed at constraining or finding some exotic signal in antiproton data.
Forthcoming data from several running or scheduled experiments, such as PAMELA (both for antiprotons and light nuclei), CREAM-II \cite{CREAM3}, TRACER \cite{Boyle:2008ut,Ave:2008uw}, and AMS-02 \cite{ams02}, which will measure both CR nuclei and ${\bar p}$ fluxes from hundreds of MeV/n up to TeV/n, will soon allow tighter constraints.
Especially AMS-02 is expected to provide very accurate data and, what is most relevant here, it will allow simultaneous and consistently calibrated measurements of several nuclear species
and antiprotons (as well as electrons and positrons which will also provide valuable complementary inputs).
The AMS-02 potential to pinpoint CR propagation was also recently shown in \cite{Pato:2010ih} where, however, the power of a combined analysis of CR nuclei and antiproton data was not discussed.
\section*{Note added}
When this paper was in its final refereeing process, the PAMELA collaboration published updated data on the ${\bar p}/{p}$ ratio and the ${\bar p}$ absolute spectrum \cite{pamela:2010rc}. The ${\bar p}/{p}$ data differ very little from those we used in our statistical analysis (discussed in Sec. 3), which were taken from \cite{Adriani:2008zq} (PAMELA 2009). Therefore, their update should not affect significantly our constraints on the propagation parameters. In Fig.~\ref{fig:apratio_comp} and \ref{fig:apflux_comp} we display the new PAMELA antiproton data. It is evident that our best fit model, which was determined using PAMELA 2009 ${\bar p}$ data, nicely matches PAMELA 2010 as well.
\section*{Acknowledgments}
We are indebted to P.~Ullio for invaluable comments and suggestions. We warmly thank P.~Picozza for allowing us to extract preliminary PAMELA proton and antiproton data from his talk at TeVPA 2009. We thank F.~Donato, P.~Maestro, G.~Sigl and A.~W.~Strong for reading the draft of this paper and providing useful comments. We also thank D.~Maurin for giving us the electronic form of the nuclear cross section tables reported in \cite{Webber:03}, with W.R.~Webber's kind permission.
D.~Grasso is supported by the Italian Space Agency under the contract AMS-02.ASI/AMS-02 n.~I/035/07/0.
D.~Grasso and D.~Gaggero acknowledge partial financial support from UniverseNet EU Network under contract n.~MRTN-CT-2006-035863. LM acknowledges support from the State of Hamburg, through the Collaborative Research program ``Connecting Particles with the Cosmos'' within the framework of the LandesExzellenzInitiative (LEXI).
The Gapyeong Canada Monument () is a monument in the Canadian Korean War Memorial Garden erected to commemorate the sacrifice of the Canadian Forces during the Korean War, especially at the Battle of Kapyong. The English text describing the monument reads as follows:
When one walks toward the monument, on the left is a panel explaining the history of the monument, while on the right is a description of the Canadian contribution to the Korean War. The main monument is centred at the far end, alongside both a Korean and a Canadian flag. The main monument is flanked on the left by the monument dedicated to the 2nd Battalion, Princess Patricia's Canadian Light Infantry (PPCLI) and the battle on Hill 677, and on the right by another monument naming all the Canadian units that participated in the Korean War.
The main monument
The main monument was erected December 30, 1983 and its English text reads as follows:
PPCLI monument
To the left of the main monument lies the monument dedicated to the 2nd Battalion of Princess Patricia's Canadian Light Infantry for their actions during the Battle of Kapyong on April 24 and 25, 1951, actions for which they were decorated with the United States Presidential Unit Citation. This monument was erected November 7, 1975.
Canadian contribution to the Korean War
The two rightmost monuments describe the Canadian contribution to the Korean War. The front monument reads as follows:
The rear monument goes into detail, listing the units that participated in the Korean War as well as the size of the contribution: 26,791 Canadians during the war itself and 7,000 until 1955, with 516 dead and 1,255 wounded. The units that served are:
Royal Canadian Navy
HMCS Athabaskan
HMCS Cayuga
HMCS Sioux
HMCS Nootka
HMCS Huron
HMCS Iroquois
HMCS Crusader
HMCS Haida
Army
Lord Strathcona's Horse (Royal Canadians)
2nd Field Regiment (FD Regt.) and 1st Regt. Royal Canadian Horse Artillery
81st FD Regt. Royal Canadian Artillery
The Corps of Royal Canadian Engineers
The Royal Canadian Corps of Signals
The Royal Canadian Regiment
2nd, 1st and 3rd Battalions
Princess Patricia's Canadian Light Infantry
2nd, 1st and 3rd Battalions
Royal 22e Régiment
2nd, 1st and 3rd Battalions
The Royal Canadian Army Service Corps
The Royal Canadian Army Medical Corps
The Royal Canadian Army Dental Corps
Royal Canadian Army Ordnance Corps
The Corps of Royal Canadian Electrical and Mechanical Engineers
Royal Canadian Army Pay Corps
The Royal Canadian Postal Corps
Royal Canadian Army Chaplain Corps
The Canadian Provost Corps
Canadian Intelligence Corps
Royal Canadian Air Force
No. 426 (Thunderbird) Squadron
See also
United Nations Memorial Cemetery – Busan, South Korea, which holds the remains of 378 Canadians killed in the Korean War
External links
Canada Monument
A Profile of the Canadian Korean War Memorial
International Expedition: South Korea
Q: PhoneGap 2.9 Android toast plugin

What is the best toast plugin for PhoneGap 2.9?
I used this plugin, but it is NOT working.
CatLog says: exec() call to unknown plugin : ToastPlugin
Please help me. Thanks
A: Here are links (first, second) to an Android toast message plugin; please try whether one of them works for you.
There are some changes that you will need to make:

*In res -> xml -> config.xml, add this line:
<plugin name="ToastPlugin" value="org.apache.cordova.plugin.ToastPlugin"/>
*Make sure <script src="toast.js"></script> is included properly in your HTML page.
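For reference, the JS side of such a Cordova 2.x plugin is only a thin wrapper around exec(). A minimal sketch (the service name `ToastPlugin` and action `showToast` are illustrative and must match your config.xml entry and native Java class — this is not copied from the linked plugin):

```javascript
// Minimal JS-side wrapper for a toast plugin. The service/action names
// below are assumptions and must agree with config.xml and the native class.
function showToast(message, duration, onSuccess, onError) {
  // PhoneGap 2.x exposed the bridge as either cordova.exec or Cordova.exec.
  var exec = (typeof cordova !== "undefined" && cordova.exec) ||
             (typeof Cordova !== "undefined" && Cordova.exec);
  if (!exec) {
    if (onError) onError("cordova.exec not available");
    return;
  }
  exec(onSuccess, onError, "ToastPlugin", "showToast",
       [{ message: message, duration: duration || "short" }]);
}
```

If the native side still logs "exec() call to unknown plugin", the name in the exec() call does not match the name registered in config.xml.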
A: If you're on PhoneGap 3, check out this one, which supports iOS as well: http://www.x-services.nl/phonegap-toast-plugin/796
UK's 5.6M vax aid to PH 'testament to enduring support': envoy
Posted on November 26, 2021 by Admin
By Joyce Ann L. Rocamora
'ENDURING SUPPORT'. National Task Force Against Covid-19 chief implementer and vaccine czar Secretary Carlito Galvez Jr. (4th from left) and UK Ambassador to the Philippines Laure Beaufils (5th from left) welcome the arrival of 288,000 doses of the AstraZeneca vaccine donated by the UK to the Philippines, at the Ninoy Aquino International Airport Terminal 3 on Friday (Nov. 26, 2021). The batch is part of the 5.2 million doses donated by the UK that will be shipped to Manila by November 27. (PNA photo by Jess M. Escaros Jr.)
MANILA – The British government's donation of 5.6 million doses of the AstraZeneca vaccine is a testament to the United Kingdom's (UK) "enduring support" to help the Philippines battle the coronavirus disease 2019 (Covid-19) pandemic, Ambassador Laure Beaufils said Friday.
The envoy, with Philippine vaccine czar Secretary Carlito Galvez Jr., welcomed the arrival of 288,000 doses of the AstraZeneca vaccine, which are part of the 5.2 million doses that will be shipped to Manila by Saturday.
In August, the UK made its initial donation of 414,040 vaccine doses to the Philippines.
"It's absolutely essential to us that we are able to help our partners through this pandemic and we have committed that we would be supplying up to 100 million doses (globally) and today, this is demonstrating that we're living up to this commitment," Beaufils said in an interview.
"Our contribution to the Philippines is now almost 5.6 million doses and is a testament of our friendship and enduring support through the pandemic."
She said the UK also continues to work with the Philippines on various areas of cooperation, including on education and policy exchanges, to further improve responses against Covid-19.
"So yes, it's a broad cooperation with lots of various engagement," Beaufils said.
Galvez said the fresh British aid was not requested by Manila but was initiated by the UK government.
"It's very timely because we will be having our national vaccination day for three days and we may need more," he said, citing some local government units in the Calabarzon and Central Luzon regions.
From November 29 to December 1, the pandemic task force aims to inoculate 15 million individuals across the country and reach the "population protection" threshold as soon as possible.
As of Friday, about 34,963,067 people have been fully inoculated against Covid-19, while 137,801 have received their booster shots.
Apart from the UK, Poland and South Korea are each set to deliver more than 500,000 doses of the AstraZeneca vaccine before the year ends. (PNA)
# Optimal transportation for a quadratic cost with convex constraints and applications

created by santambro on 26 May 2011, modified on 14 Oct 2011

Accepted Paper
Journal: J. Math. Pures et Appl.
Year: 2011

Abstract:

We prove existence of an optimal transport map in the Monge-Kantorovich problem associated to a cost $c(x,y)$ which is not finite everywhere, but coincides with $|x-y|^2$ if the displacement $y-x$ belongs to a given convex set $C$ and is $+\infty$ otherwise. The result is proven for $C$ satisfying some technical assumptions allowing any convex body in $\mathbb{R}^2$ and any convex polyhedron in $\mathbb{R}^d$, $d>2$. The tools are inspired by the recent Champion-DePascale-Juutinen technique. Their idea, based on density points and avoiding disintegrations and dual formulations, made it possible to deal with $L^\infty$ problems and, later on, with the Monge problem for arbitrary norms.
Scaffolding, in architecture, is a temporary structure that supports workers and materials during the construction, maintenance and repair of buildings. Scaffolds are used on construction sites to provide access to heights and to places that are difficult to reach from the ground. Unsafe scaffolding is a potential source of workplace injury and death. Scaffolding is also used in adapted forms for formwork and shoring, grandstand seating, concert stages, observation towers, exhibition stands and ski ramps.
History
Antiquity
Sockets in the walls around the Palaeolithic cave paintings at Lascaux suggest that a scaffold system was used to paint the ceiling more than 17,000 years ago.
The Berlin Foundry Cup depicts scaffolding in ancient Greece (early 5th century BC). The Egyptians, Nubians and Chinese are also recorded as having used scaffold-like structures to build tall buildings. Early scaffolds were made of wood and lashed together with rope knots.
Modern era
Scaffolding used to be erected by individual firms following a wide range of standards and sizes. The process was revolutionised by Daniel Palmer Jones and David Henry Jones. Modern scaffolding standards, practices and processes can be attributed to these men and their companies: the Rapid Scaffold Tie Company, the Tubular Scaffolding Company and Scaffolding Great Britain.
Daniel Palmer-Jones patented the "Scaffixer", a coupling device far more robust than rope, which revolutionised scaffold construction. In 1913 his company was engaged for the reconstruction of Buckingham Palace, during which his Scaffixer gained considerable publicity. Palmer-Jones followed this in 1919 with the improved "Universal Coupler", which soon became the industry-standard coupling and has remained so to this day.
Advances in metallurgy during the early 20th century led to the introduction of tubular steel water pipes (in place of timber poles) with standardised dimensions, allowing industrial interchangeability of parts and improving the structural stability of scaffolds. The use of diagonal bracing also helped to improve stability, especially on tall buildings. The first frame system was brought to market by SGB in 1944 and was used extensively for post-war reconstruction.
Modern scaffolding
The purpose of a working scaffold is to provide a safe working platform and access suitable for workers to carry out their work. The European standard sets out the requirements for working scaffolds; these are essentially independent of the materials from which the scaffold is made.
Materials
The basic components of scaffolding are tubes, couplers and boards.
Simple, lightweight tube scaffolding, which made erecting scaffolds far easier and remained the standard for decades, was invented and went on sale in the mid-1950s. With a single 12-kilogram base package, scaffolds of various sizes and heights could easily be assembled by a few workers, without the nuts or bolts used previously.
Tubes are usually made of steel or aluminium, although composite scaffolding exists that uses filament-wound glass-fibre tubes in a nylon or polyester matrix; because of the high cost of composite tube, it is generally used only where there is a danger from overhead electric cables that cannot be isolated. Steel tubes are either 'black' or galvanised. Tubes come in a variety of lengths and a standard outside diameter of 48.3 mm (1.5 NPS pipe). The chief difference between the two types of metal tube is the lower weight of aluminium tubes (1.7 kg/m as opposed to 4.4 kg/m); however, they are more flexible and have a lower resistance to stress. Tubes are generally bought in 6.3 m lengths and can then be cut down to typical sizes. Most large companies brand their tubes with their name and address to deter theft.
Boards provide a working surface for scaffold users. They are of seasoned wood and come in three thicknesses (38 mm (common), 50 mm and 63 mm), a standard width (225 mm) and a maximum length of 3.9 m. The board ends are protected either by metal plates or sometimes by nailed-on strips, often stamped with the company name. Timber scaffold boards in the UK should comply with the requirements of BS 2482. In addition to timber, steel or aluminium decking is used, as well as laminate boards. Besides the boards of the working platform, there are sole boards, which are placed beneath the scaffolding if the ground is soft or otherwise suspect, although ordinary boards can also serve this purpose. Another type of sole board is made from a rubber base with a plate cast inside; these are preferred for use on uneven ground because they adapt to it, whereas plain sole boards can split and have to be replaced.
Couplers are the fittings that hold tubes together, and there are three basic types: right-angle couplers, putlog couplers and swivel couplers. To join tubes end-to-end, joint pins or sleeve couplers are used. Only right-angle couplers and swivel couplers may be used to fix a tube in a load-bearing connection; single (putlog) couplers are not load-bearing and have no design capacity.
Other common scaffolding components include base plates, ladders, ropes, anchor ties, braces, gin wheels, sheeting and so on. Most companies adopt a particular colour in which to paint their scaffolding, so that it can be identified at a glance in the event of theft. All metal components may be painted, but timber items should never be painted, as paint can hide defects. Despite metrication, many scaffolders measure tubes and boards in imperial units, with tubes from 21 feet down and boards from 13 feet down.
Bamboo scaffolding is widely used in Hong Kong and Macau, with nylon straps tied into knots serving as couplers. In India, bamboo or other timber scaffolding is mostly used, the poles being lashed together with ropes of coconut fibre.
External links
Illustrated Formwork and Temporary Work Glossary
New York City Scaffolding Regulations PDF (shows nine types of scaffolding)
OSHA Publication 3150, A Guide to Scaffold Use in the Construction Industry
OSHA scaffold types illustrated
Illustrations of many kinds of scaffolding
UK Health & Safety Executive Scaffold Checklist
\section{Introduction}
The indirect astrophysical evidence for missing mass in the form of matter, called dark matter (DM), and the confirmation of tiny neutrino masses through neutrino oscillations are the two major motivations to look for possible extensions of the standard model (SM). According to the Planck data~\cite{Ade:2015xua}, about a fourth of the energy density of the Universe consists of DM. However, in the absence of any direct observation, we are still in the dark about the nature of DM. Over the last three decades, a plethora of candidates have been proposed as DM particles in the literature. Among them, one of the most popular choices is a weakly interacting massive particle (WIMP), whose mass lies in the GeV to TeV range with typically weak interactions. WIMP pair annihilation into SM particles provides a natural mechanism to produce the WIMPs in the early Universe and can also explain the observed DM density in the current Universe. As the mass range lies between a GeV and a TeV, these particles are accessible at current or future colliders as well as in various direct and indirect DM detection experiments.
An enormous number of extensions of the SM have been studied in which the DM particle may have integer or half-integer spin. Stabilization of the DM occurs naturally in supersymmetric models, where the lightest supersymmetric particle acts as a viable DM candidate. In many other beyond-standard-model (BSM) extensions, however, an ad hoc discrete symmetry is imposed to forbid the decay of the DM particle. The assumption that such a discrete symmetry remains unbroken by gravitational effects at the Planck scale is open to suspicion~\cite{Boucenna:2012rc,Mambrini:2015sia}. This problem is usually evaded by attributing the symmetry to some high-scale physics beyond the scope of the model under consideration.
In this paper, we consider an extension of the SM by a $U(1)_{B-L}$ gauge group. One possible way to cancel the gauge anomaly is to include three right-handed neutrinos in the theory; the model can then naturally incorporate the light neutrino masses through the Type-I seesaw mechanism. Various ideas to incorporate a DM candidate in the $U(1)_{B-L}$ context have been explored in the literature~\cite{Basak:2013cga}. An attractive option is to introduce an SM-singlet but $U(1)_{B-L}$-charged scalar particle which is stabilized by a judicious choice of its $B-L$ quantum number.
The main purpose of this work is to revisit the case where the DM pair annihilates into a right-handed neutrino pair through the $B-L$ symmetry-breaking scalar, and to investigate the effect of right-handed neutrino decay and inverse decay on the thermal history of the DM particle. The effect of right-handed neutrino decay in the context of a supersymmetric $U(1)^\prime$ extension of the SM was studied in Ref.~\cite{Bandyopadhyay:2011qm}. We will see that this effect plays an important role in keeping the DM in thermal equilibrium and extends the allowed parameter space satisfying the observed relic density. We show that measurements of the spin-independent (SI) DM-nucleus scattering cross-section in direct detection experiments, especially XENON1T~\cite{Aprile:2017iyp}, can impose bounds on the mass of the $B-L$ gauge boson $Z^\prime$ superior to the collider limits. The decay of a right-handed neutrino also provides very interesting and rich phenomenology from the collider perspective: the displaced vertices arising from right-handed neutrino decays allow us to impose indirect limits on the mass of $Z^\prime$ at the LHC with the current as well as higher integrated luminosities.
The paper is organized as follows. First, we briefly describe the main features of the model in Sec.~\ref{sec:model}. In Sec.~\ref{sec:relic}, the effects of right-handed neutrino decay and inverse decay on the relic density of the DM particle are discussed. Section~\ref{sec:directD} deals with the limits from direct detection experiments. In Sec.~\ref{sec:RHnu}, we explore the LHC signature of the pair production of right-handed neutrinos from the decay of $Z^\prime$, together with the decay of the right-handed neutrino into SM particles. Finally, we conclude in Sec.~\ref{sec:conclusion} with a discussion.
\section{The model}
\label{sec:model}
In this section we briefly discuss the basic setup of the model used in our work. We consider the extension of the SM by a gauged $U(1)_{B-L}$ symmetry. Apart from the SM particles, the model contains: a $U(1)_{B-L}$ gauge boson $Z^\prime$; three right-handed neutrinos $N_i$, which cancel the $B-L$ gauge anomaly; and two SM-singlet $B-L$-charged complex scalars $S$ and $\ensuremath{\phi_\text{\tiny DM}}\,$, where $\ensuremath{\phi_\text{\tiny DM}}\,$ is the would-be dark matter candidate. The interaction terms in the Lagrangian involving the new particles are given by
\begin{align}
\label{lag}
\mathcal{L}_\text{NP}&= - m_S^2 |S|^2- \frac{1}{2} \lambda_{SH} {|S|}^2 |\Phi|^2 - \lambda_{S} (S^\dagger S)^2 - \lambda_{N_i} S \bar{N_i^c} N_i- y_{ij} \bar{L_i} \Phi^\dagger N_j \nonumber \\
& - m_D^2 |\ensuremath{\phi_\text{\tiny DM}}\,\!|^2- \frac{1}{2} \lambda_{D H} {|\ensuremath{\phi_\text{\tiny DM}}\,\!|}^2 |\Phi|^2- \frac{1}{2} \lambda_{D S} {|\ensuremath{\phi_\text{\tiny DM}}\,\!|}^2 |S|^2 - \lambda_{D} (\ensuremath{\phi_\text{\tiny DM}}\,^{\!\dagger} \ensuremath{\phi_\text{\tiny DM}}\,)^2.
\end{align}
The $\Phi$ and $L_i$ are the usual SM Higgs and lepton $SU(2)_L$ doublets, respectively. In Table~\ref{Table} we show the $B-L$ charges assigned to all the SM and BSM particles.
After the $B-L$ symmetry breaking through the vacuum expectation value (vev) of the scalar $S$, a mass term for the $B-L$ gauge boson $Z^\prime$ as well as Majorana masses for the neutrinos $N_i$ are generated. The masses of the light SM neutrinos arise through the usual Type-I seesaw mechanism:
\begin{equation}
{\cal M}^\nu_{ij} = y_{ik} y_{jk} {\langle \Phi \rangle^2 \over m_{N_k}}.
\end{equation}
For the low-scale Type-I seesaw with $m_{N_k} \sim$ TeV, one typically needs Yukawa couplings as small as $y_{ik}\sim 10^{-6} $ to generate neutrino mass scale around 0.1 eV. While lepton flavour violation (LFV) induced by such tiny couplings can hardly appear in low-energy observables,
LFV signatures may appear at the LHC through the right-handed neutrino production and decays \cite{LFVpheno}. On the other hand, in the case of inverse seesaw where $y_{ik}$ can be of order one,
the induced LFV could be observed in various low-energy processes depending on the models \cite{LFVlow} and also interestingly in exotic Higgs decays \cite{Arganda:2004bz,pbej, LFVconst}.
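As a quick numerical illustration of this estimate (a back-of-the-envelope sketch; $\langle \Phi \rangle \simeq 174$ GeV and the benchmark inputs are assumptions, not fits):

```python
# Order-of-magnitude check of the Type-I seesaw relation
# m_nu ~ y^2 <Phi>^2 / m_N  (one-generation approximation).

VEV_GEV = 174.0       # SM Higgs vev <Phi> in GeV (assumed)
GEV_TO_EV = 1.0e9     # 1 GeV = 10^9 eV

def seesaw_mass_ev(y, m_n_gev):
    """Light-neutrino mass in eV for Yukawa y and heavy mass m_N in GeV."""
    return y**2 * VEV_GEV**2 / m_n_gev * GEV_TO_EV

# y ~ 2e-6 with m_N ~ 1 TeV indeed lands near the 0.1 eV scale.
print(seesaw_mass_ev(2e-6, 1000.0))  # ~0.12 eV
```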
\begin{table}[h]
\centering
\begin{tabular}{ |c| c |c |c| c|c|c|c |}
\hline \hline
& $Q$ & $u^c,d^c$ & $L$ & $e^c$ & $N_i$ & $S$ & $\ensuremath{\phi_\text{\tiny DM}}\,$ \\ \hline
$B-L $ & $1/3$ & $-1/3$ & $-1$ & $1$ &$-1$ & $2$ &$q_\text{\tiny DM}$ \\
\hline
\hline
\end{tabular}
\caption{$B-L$ charges for all the particles present in the model. } \label{Table}
\end{table}
Being a scalar DM candidate, $\ensuremath{\phi_\text{\tiny DM}}\,$ is forbidden from acquiring a vev and from mixing with the symmetry-breaking fields $S$ and $\Phi$. Thus the scalar potential relevant for the gauge symmetry breaking is
\begin{align}
V(\Phi,S)= m_H^2 |\Phi|^2 + m_S^2 |S|^2 + \frac{1}{2}\lambda_H (\Phi^\dagger \Phi)^2 +\lambda_{S} (S^\dagger S)^2 + \frac{1}{2}\lambda_{SH} {|S|}^2 |\Phi|^2.
\end{align}
After spontaneous symmetry breaking (SSB), $\Phi= (H^+,H)^T$ with $H=(v+h)/\sqrt{2}$ and $S=(v^\prime + S_0+i S^\prime)/\sqrt{2}$ the scalar mass matrix is given by,
\begin{align}
\mathcal{M}(h,S_0)= \begin{pmatrix}
v^2 \lambda_H~~~~~\frac{1}{2}v\,v^\prime \lambda_{SH} \cr
\frac{1}{2}v\,v^\prime \lambda_{SH} ~~~2{v^\prime}^2 \lambda_S \cr
\end{pmatrix},
\end{align}
where we have used the following minimization conditions,
\begin{align}
\frac{\partial V}{\partial \Phi}\bigg|_{v,v^\prime}&= 0 \Longrightarrow
m_H^2= \frac{1}{4} \left(2v^2 \lambda_H+{v^\prime}^2 \lambda_{SH}\right), \\
\frac{\partial V}{\partial S}\bigg|_{v,v^\prime}&= 0 \Longrightarrow
m_S^2= \frac{1}{4} \left(4{v^\prime}^2 \lambda_S+ v^2 \lambda_{SH}\right).
\end{align}
Allowing mixing between the two neutral components of $\Phi$ and $S$, we can write
\begin{align}
\begin{pmatrix}
\Phi \cr
S \cr
\end{pmatrix}=
\begin{pmatrix}
\text{cos}\,\alpha~~~~~\text{sin}\, \alpha \cr
-\text{sin}\,\alpha~ ~~~\text{cos}\, \alpha \cr
\end{pmatrix} \begin{pmatrix}
h \cr
S_0 \cr
\end{pmatrix},
\end{align}
where the mixing angle is defined as
\begin{align}
\label{eq:alpha}
\text{tan}\,2\alpha = \frac{v\,v^\prime \lambda_{SH}}{v^2 \lambda_H-{2v^\prime}^2 \lambda_S}.
\end{align}
The two mass eigenstates of the scalar bosons are,
\begin{align}
m_{h,S_0}^2=\frac{1}{2}\bigg(v^2 \lambda_H+{2v^\prime}^2 \lambda_S\mp \sqrt{\big(v^2 \lambda_H-{2v^\prime}^2 \lambda_S\big)^2+ v^2 {v^\prime}^2 \lambda_{SH}^2} \bigg).
\end{align}
In view of the current bound on the mixing parameter $\alpha$ from the measurements of the Higgs boson properties at the LHC, we assume an almost vanishing mixing between the two scalars, implying cos$\,\alpha\simeq 1$ throughout our analysis.
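The closed-form eigenvalues can be cross-checked by diagonalizing $\mathcal{M}(h,S_0)$ numerically; a sketch with arbitrary benchmark values for the vevs and couplings (illustrative only, not a fit to the text's benchmark points):

```python
import numpy as np

# Benchmark inputs (illustrative assumptions)
v, vp = 246.0, 6000.0                 # EW and B-L breaking vevs in GeV
lam_H, lam_S, lam_SH = 0.26, 0.1, 1.0e-3

# Mass-squared matrix in the (h, S0) basis, as quoted in the text
M2 = np.array([[v**2 * lam_H,           0.5 * v * vp * lam_SH],
               [0.5 * v * vp * lam_SH,  2.0 * vp**2 * lam_S]])

numeric = np.sort(np.linalg.eigvalsh(M2))

# Closed-form expression for m_{h,S0}^2 from the text
a, b = v**2 * lam_H, 2.0 * vp**2 * lam_S
root = np.sqrt((a - b)**2 + v**2 * vp**2 * lam_SH**2)
closed = np.sort([0.5 * (a + b - root), 0.5 * (a + b + root)])

print(np.allclose(numeric, closed))   # -> True

# Mixing angle: tan(2 alpha) = v vp lam_SH / (a - b), tiny for lam_SH ~ 1e-3
alpha = 0.5 * np.arctan(v * vp * lam_SH / (a - b))
print(abs(np.cos(alpha)))             # very close to 1
```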
The SSB of the $B-L$ symmetry provides a mass term for the $B-L$ gauge boson $Z^\prime$, given by
\begin{align}
m_{Z^\prime}= 2 \,g_{BL}v^\prime,
\end{align}
where $g_{BL}$ is the $B-L$ gauge coupling constant. At tree level, we assume no kinetic mixing between $U(1)_{B-L}$ and $U(1)_Y$ gauge bosons and hence $Z^\prime$ and the SM $Z$ boson do not mix with each other.
The right-handed neutrinos $N_i$ acquire masses after the SSB of $B-L$, $m_{N_i}\sim \sqrt{2} v^\prime \lambda_{N_i}$. We assume that only one of the three $N_i$'s is lighter than the DM candidate and thus relevant for our discussion. For simplicity, we denote the mass eigenstate of the lightest right-handed neutrino by $N$\footnote{Left-handed neutrinos ($\nu_i$) and right-handed neutrinos ($N_i$) mix in their mass basis, yielding three light and three heavy neutrinos.} and its Yukawa coupling with the SM Higgs and lepton $SU(2)_L$ doublets by $y_N$. It turns out that, to satisfy the observed relic abundance, $y_N$ cannot be large enough to produce an SM neutrino mass larger than $\sim 10^{-3}\,$eV. That is, the contribution of $N$ to the SM neutrino mass matrix is negligible, and hence only the other two heavy right-handed neutrinos are relevant for explaining the observed neutrino masses and mixing. We note that in general the Yukawa matrix $y_{ij}$ contains off-diagonal elements; however, its detailed texture is not relevant for the purposes of this paper.
The complex scalar \ensuremath{\phi_\text{\tiny DM}}\, does not acquire a vev and carries a $B-L$ charge $q_\text{\tiny DM}$. As discussed in Ref.~\cite{Rodejohann:2015lca}, particular choices of $q_\text{\tiny DM}$, i.e., $q_\text{\tiny DM}\not =\pm 2 n$ for $n\in \mathbb{Z}$ and $n\le 4$, forbid \ensuremath{\phi_\text{\tiny DM}}\, to decay, and hence it can be a dark matter candidate without invoking any extra symmetry in the theory. We choose $q_\text{\tiny DM}=1/2$ for most of our analysis. Since, as we discuss later, the dominant DM annihilation channel in our case proceeds through $s$-channel $S_0$ exchange, the charge of the DM candidate \ensuremath{\phi_\text{\tiny DM}}\, does not affect the results obtained in Sec.~\ref{sec:relic}. However, the direct detection bounds do depend on the $B-L$ charge of \ensuremath{\phi_\text{\tiny DM}}\, and will be addressed in Sec.~\ref{sec:directD}.
The mass term for $\ensuremath{\phi_\text{\tiny DM}}\,$ receives contribution from both the EW and $B-L$ symmetry breaking given by
\begin{align}
m_{DM}^2= m_D^2 + \frac{1}{4}\lambda_{DH}v^2 + \frac{1}{4}\lambda_{DS}{v^\prime}^2.
\end{align}
It can be seen from Eq.~\eqref{lag} that the Yukawa interaction of the right-handed neutrino allows it to decay into SM particles via its mixing with the SM neutrinos, proportional to $y_N$. Below we quote the expressions for the decay widths of $N$ into the three possible channels $h\nu$, $\ell^\pm W^\mp$ and $Z\nu$, respectively, where we assume cos$\,\alpha\simeq1$:
\begin{align}
\label{eq:Ndecayh}
\Gamma(N\to h \nu)&=\Gamma(N\to h \bar{\nu})= \frac{y_N^2 m_N}{64\pi} \left(1- \frac{m_h^2}{m_N^2}\right)^2, \\
\Gamma(N\to \ell^- W^+)&=\Gamma(N\to \ell^+ W^-)= \frac{y_N^2 m_N}{32\pi} \left(1- \frac{m_W^2}{m_N^2}\right)^2 \left(1+ 2 \frac{m_W^2}{m_N^2}\right), \\
\label{eq:NdecayZ}
\Gamma(N\to Z \nu )&=\Gamma(N\to Z \bar{\nu})= \frac{y_N^2 m_N}{64\pi} \left(1- \frac{m_Z^2}{m_N^2}\right)^2 \left(1+ 2 \frac{m_Z^2}{m_N^2}\right).
\end{align}
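These partial widths translate directly into code. A quick numerical sketch (masses in GeV, widths in GeV; each channel is switched off below its kinematic threshold, an implementation detail not spelled out in the formulas above):

```python
import math

MH, MW, MZ = 125.0, 80.4, 91.2  # SM boson masses in GeV (rounded)

def gamma_h_nu(y_n, m_n):
    # Gamma(N -> h nu) = Gamma(N -> h nubar)
    if m_n <= MH:
        return 0.0
    return y_n**2 * m_n / (64.0 * math.pi) * (1.0 - MH**2 / m_n**2)**2

def gamma_lw(y_n, m_n):
    # Gamma(N -> l- W+) = Gamma(N -> l+ W-)
    if m_n <= MW:
        return 0.0
    return (y_n**2 * m_n / (32.0 * math.pi)
            * (1.0 - MW**2 / m_n**2)**2 * (1.0 + 2.0 * MW**2 / m_n**2))

def gamma_z_nu(y_n, m_n):
    # Gamma(N -> Z nu) = Gamma(N -> Z nubar)
    if m_n <= MZ:
        return 0.0
    return (y_n**2 * m_n / (64.0 * math.pi)
            * (1.0 - MZ**2 / m_n**2)**2 * (1.0 + 2.0 * MZ**2 / m_n**2))

def gamma_total(y_n, m_n):
    # Each partial width counted twice for the charge-conjugate final state
    return 2.0 * (gamma_h_nu(y_n, m_n) + gamma_lw(y_n, m_n) + gamma_z_nu(y_n, m_n))

# For m_N = 100 GeV (the benchmark used below) only lW and Znu are open.
print(gamma_total(1.0e-7, 100.0))
```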
\section{Relic density of the scalar dark matter}
\label{sec:relic}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.8\linewidth]{dias.pdf}
\caption{The Feynman diagrams for the DM particle \ensuremath{\phi_\text{\tiny DM}}\, annihilation to right-handed neutrino $N$ pair (a), annihilation of $N$ to SM fermion anti-fermion $f\bar{f}$ (b) and the decay of $N$ to SM particles (c), (d) are shown. }\label{dia:relic}
\end{center}
\end{figure}
In this section we discuss the thermal relic abundance of the scalar DM candidate \ensuremath{\phi_\text{\tiny DM}}\,. The \ensuremath{\phi_\text{\tiny DM}}\, can annihilate through three different portals: the SM Higgs $h$, the $B-L$ scalar $S_0$, and the $B-L$ gauge boson $Z^\prime$. The simplest Higgs-portal scenario is very strongly constrained~\cite{He:2016mls} by the SI cross-section measurements at direct detection experiments. Alternatively, as the collider bound on the mass of a new gauge boson is currently $\ge 2.8 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ \cite{z'bnd}, producing the observed relic abundance through the $Z^\prime$ portal requires a very heavy DM particle, such that the DM pair can annihilate via resonant production of $Z^\prime$. In this paper, we are interested in a low-mass ($\le \mathcal{O}(\ensuremath{\mathrm{\,Te\kern -0.1em V\,}})$) DM particle, for which the DM annihilates predominantly through the $B-L$ scalar $S_0$. It can be seen from Eq.~\eqref{lag} that the $B-L$ scalar couples directly to only one SM particle, the Higgs doublet $\Phi$, and thus interacts with the SM fermions through mixing. To satisfy the current LHC bounds from the measurements of the Higgs boson properties, the mixing angle $\alpha$ (defined in Eq.~\eqref{eq:alpha}) should be small; hence we assume a tiny $\alpha$ of $\mathcal{O}({10}^{-3})$ for our analysis. Owing to the tree-level coupling of $S$ to the right-handed neutrino $N$, the dominant annihilation channel of the DM candidate \ensuremath{\phi_\text{\tiny DM}}\, is
the process $\ensuremath{\phi_\text{\tiny DM}}\, \ensuremath{\phi^*_\text{\tiny DM}}\, \to N N$ shown in Fig.~\ref{dia:relic}(a). As long as $N$ remains in thermal equilibrium for long enough, the relic density can be estimated by calculating the thermal average of this annihilation process. However, $N$ interacts only through the heavy gauge boson $Z^\prime$ and through the tiny Yukawa coupling $y_N$, and these interactions may be too weak to keep $N$ in thermal equilibrium during the freeze-out of \ensuremath{\phi_\text{\tiny DM}}\,. Thus, to study the thermal history of \ensuremath{\phi_\text{\tiny DM}}\, through the annihilation in Fig.~\ref{dia:relic}(a), one also has to consider the evolution of $N$, determined by its annihilation (Fig.~\ref{dia:relic}(b)) and decay (Figs.~\ref{dia:relic}(c) and \ref{dia:relic}(d)).
We start with the coupled Boltzmann equations, written in terms of the variables $Y_i\equiv n_i/s$, the number of particles $i$ per comoving volume, where $n_i$ is the number density and $s$ the entropy density of the Universe, and $x\equiv m_{DM}/T$:
\begin{align}
\label{eq:dYDM}
\frac{dY_\text{\tiny DM}}{dx}&= -\frac{1}{x^2} \frac{s(m_{DM})}{H(m_{DM})} \langle \sigma v \rangle_{\ensuremath{\phi_\text{\tiny DM}}\,\ensuremath{\phi^*_\text{\tiny DM}}\,\to NN} \left(Y_\text{\tiny DM}^2 -Y_N^2\right), \\
\label{eq:dYN}
\frac{d Y_N}{dx}&= \frac{1}{x^2} \frac{s(m_{DM})}{H(m_{DM})} \langle \sigma v \rangle_{\ensuremath{\phi_\text{\tiny DM}}\, \ensuremath{\phi^*_\text{\tiny DM}}\, \to NN} \left(Y_\text{\tiny DM}^2 -Y_N^2\right) \nonumber \\
&-\frac{1}{x^2} \frac{s(m_{DM})}{H(m_{DM})} \langle \sigma v \rangle_{NN\to f\bar{f}} \left(Y_N^2 -{Y_N^{\text{eq}}}^2\right) -\frac{\Gamma}{H(m_{DM})} x \left(Y_N -Y_N^{\text{eq}}\right).
\end{align}
\noindent
The entropy density $s$ and Hubble parameter $H$ evaluated at the DM mass are
\begin{align}
s(m_{DM})= \frac{2 \pi^2 }{45} g_*\, m_{DM}^3, \quad H(m_{DM})= \frac{\pi}{\sqrt{90}} \frac{\sqrt{g_*}}{M^r_{pl}} m_{DM}^2,
\end{align}
where $M^r_{pl}= 2.44\times {10}^{18}\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ is the reduced Planck mass and $Y_N^{\text{eq}}$ is the equilibrium number density of the right-handed neutrino $N$ given by
\begin{align}
\label{eq:YN_eq}
Y_N^{\text{eq}}\equiv\frac{n_N^{\text{eq}}}{s} &=\frac{45}{2\pi^4} \sqrt{\frac{\pi}{8}}\left( \frac{g}{g_*}\right) \left({\frac{m_N}{T}}\right)^{3/2} e^{-\frac{m_N}{T}} \nonumber \\
&\simeq 0.145 \left( \frac{2}{100}\right) \left( \frac{m_N}{m_{DM}}\right)^{3/2} x^{3/2} e^{-\frac{m_N}{m_{DM}} x}.
\end{align}
Here, in the last line of Eq.~\eqref{eq:YN_eq}, we use the effective number of relativistic degrees of freedom $g_*\simeq100$ and the internal degrees of freedom $g=2$, which holds for the right-handed neutrino $N$ as well as for the complex scalar DM candidate \ensuremath{\phi_\text{\tiny DM}}\,.
The first terms on the right-hand sides of Eqs.~\eqref{eq:dYDM} and \eqref{eq:dYN} denote the forward and backward reactions $\ensuremath{\phi_\text{\tiny DM}}\, \ensuremath{\phi^*_\text{\tiny DM}}\, \leftrightarrow NN$ through $s$-channel $S_0$ exchange, shown in Fig.~\ref{dia:relic}(a). The second term on the right-hand side of Eq.~\eqref{eq:dYN} refers to the forward and backward reactions of $NN$ annihilation into SM fermion pairs $f\bar{f}$ through $s$-channel $Z^\prime$ exchange (Fig.~\ref{dia:relic}(b)), and the third term describes the decay and inverse decay of $N$ shown in Figs.~\ref{dia:relic}(c) and (d), where $\Gamma$ is the total decay width of $N$.
The DM candidate \ensuremath{\phi_\text{\tiny DM}}\, remains in thermal equilibrium through the interactions of $N$. The right-handed neutrino $N$ annihilates into SM fermion pairs via the process $NN \to f \bar{f}$ (Fig.~\ref{dia:relic}(b)). Since the current LHC limit is $m_{Z^\prime}\ge 2.8 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$, this annihilation cross-section is strongly suppressed, and the right-handed neutrinos therefore freeze out earlier than the DM particle \ensuremath{\phi_\text{\tiny DM}}\,. As a consequence, the DM particles are overproduced.
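Schematically, the coupled system of Eqs.~\eqref{eq:dYDM} and \eqref{eq:dYN} can be integrated numerically with a stiff ODE solver. The sketch below illustrates the structure of the computation only (it is not the code behind the figures; the thermally averaged cross-sections and $\Gamma$ are constant toy inputs in natural units, not the full model expressions):

```python
import math
from scipy.integrate import solve_ivp

G_STAR = 100.0            # effective relativistic degrees of freedom
M_PL_RED = 2.44e18        # reduced Planck mass in GeV
M_DM, M_N = 140.0, 100.0  # benchmark masses in GeV

def s_ent(m):   # entropy density evaluated at T = m
    return 2.0 * math.pi**2 / 45.0 * G_STAR * m**3

def hubble(m):  # Hubble rate evaluated at T = m
    return math.pi / math.sqrt(90.0) * math.sqrt(G_STAR) / M_PL_RED * m**2

def y_eq(x, m):  # equilibrium yield of a particle of mass m, with x = M_DM/T
    r = m / M_DM
    return 0.145 * (2.0 / G_STAR) * (r * x)**1.5 * math.exp(-r * x)

def rhs(x, y, sv_dm, sv_nn, gamma):
    """Right-hand sides of the coupled equations for (Y_DM, Y_N)."""
    y_dm, y_n = y
    pref = s_ent(M_DM) / hubble(M_DM) / x**2
    ann_dm = pref * sv_dm * (y_dm**2 - y_n**2)          # phi phi* <-> N N
    ann_nn = pref * sv_nn * (y_n**2 - y_eq(x, M_N)**2)  # N N <-> f fbar
    decay = gamma / hubble(M_DM) * x * (y_n - y_eq(x, M_N))
    return [-ann_dm, ann_dm - ann_nn - decay]

x0, x1 = 1.0, 100.0
y0 = [y_eq(x0, M_DM), y_eq(x0, M_N)]
# toy inputs: <sigma v> in GeV^-2, Gamma in GeV (assumed, for illustration)
sol = solve_ivp(rhs, (x0, x1), y0, args=(1e-9, 1e-12, 1e-15),
                method="Radau", rtol=1e-6, atol=1e-30)
print("final yields:", sol.y[0, -1], sol.y[1, -1])
```

With the decay term switched on, $Y_N$ is pulled back toward its equilibrium value, which in turn depletes $Y_\text{\tiny DM}$ through the first term, mirroring the behaviour discussed around Fig.~\ref{fig:1}.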
\begin{figure}[t]
\begin{center}
\mbox{\hskip -20 pt \subfigure[]{\includegraphics[width=0.5\linewidth]{Y_decay1.pdf}}
\subfigure[]{\includegraphics[width=0.5\linewidth]{Y_decay2.pdf}}}
\mbox{\hskip -20 pt
\subfigure[]{\includegraphics[width=0.5\linewidth]{Y_decay3.pdf}}
\subfigure[]{\includegraphics[width=0.5\linewidth]{Y_decay4.pdf}}}
\caption{The actual numbers of $\ensuremath{\phi_\text{\tiny DM}}\,$ and $N$ per comoving volume are shown as blue dashed and brown dotted curves, respectively. The panels (a)--(d) are obtained by solving the coupled Boltzmann equations (Eqs.~\eqref{eq:dYDM} and \eqref{eq:dYN}) with a total decay width $\Gamma$ of $N$ of $10^{-10} \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $10^{-15}\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $10^{-18}\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $0 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, respectively. The effect of the decay term is evident from the plots. The other parameters are chosen as follows: $m_{DM}=140\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{N}=100\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=300\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{h}=125\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $m_{Z^\prime}=3\ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$.}
\label{fig:1}
\end{center}
\end{figure}
To avoid overproduction of the DM, we include the decay and inverse-decay terms of the right-handed neutrino in the Boltzmann equations. The decay of $N$ to SM particles is governed by the Yukawa coupling $y_N$. We vary the coupling $y_N$ over different values of the total decay width $\Gamma$ of the right-handed neutrino, choosing all other masses as $m_{DM}=140\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{N}=100\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=300\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{h}=125\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $m_{Z^\prime}=3\ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$.
The results due to the variation of $\Gamma$ for the values of $10^{-10} \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $10^{-15}\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $10^{-18}\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $0 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ are shown in Fig.~\ref{fig:1}.
The blue dashed and brown dotted curves represent the actual number of $\ensuremath{\phi_\text{\tiny DM}}\,$ and $N$ per comoving volume, respectively. For larger values of the Yukawa coupling $y_N$, i.e., a larger decay width, the decay term of $N$ in Eq.~\eqref{eq:dYN} dominates over the other interactions of $N$ before the annihilation of $N$ becomes weaker than the dilution due to the expansion of the Universe. Hence, owing to the decay effect, $N$ remains in the thermal bath for a much longer time than in the case where no decay term is present in the analysis. In this case $N$ stays in the thermal bath until after the DM candidate \ensuremath{\phi_\text{\tiny DM}}\, decouples, as can be seen from Figs.~\ref{fig:1}(a) and \ref{fig:1}(b), which reproduce the standard result obtained by assuming $N$ to be in thermal equilibrium. In other words, for this parameter space, the combination of the interactions of $N$ and its decay gives back the result obtained by solving a single Boltzmann equation with $N$ assumed to be in equilibrium. On the other hand, as the decay width decreases, the decay effect is negligible in the early stage, when both $N$ and $\phi_{\rm DM}$ first decouple from the annihilation processes; the DM relic density is then depleted further by the decay effect setting in later, as in Fig.~\ref{fig:1}(c). For a much smaller decay rate, the decay never becomes effective, leaving the DM relic density as in Fig.~\ref{fig:1}(d) with $\Gamma=0$.
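The qualitative role of the decay term can be illustrated with a toy one-species version of the $N$ Boltzmann equation: a relaxation term proportional to the decay strength drives $Y_N$ back toward its falling equilibrium value, while for $\Gamma=0$ the yield simply freezes at its initial value. The following Python sketch is schematic (the dimensionless strength $d$ and the $x$-dependence are illustrative stand-ins, not the full coupled system):

```python
import math

def y_eq(x):
    # Schematic non-relativistic equilibrium yield, Y_eq ~ x^{3/2} e^{-x}
    return x ** 1.5 * math.exp(-x)

def evolve(d, x0=1.0, x1=30.0, steps=30000):
    """Euler-integrate dY/dx = -d * x * (Y - Y_eq(x)).

    d plays the role of a dimensionless decay strength (~ Gamma/H);
    d = 0 switches the decay/inverse-decay term off entirely.
    """
    h = (x1 - x0) / steps
    x, y = x0, y_eq(x0)
    for _ in range(steps):
        y += -d * x * (y - y_eq(x)) * h
        x += h
    return y

y_no_decay = evolve(0.0)    # yield freezes at its initial value
y_with_decay = evolve(10.0) # yield keeps tracking the falling equilibrium curve
```

With the decay term switched on, the yield tracks equilibrium to much later times, mirroring the behavior of panels (a) and (b) of Fig.~\ref{fig:1} versus panel (d).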
In Ref.~\cite{Fujii:2001xp} a similar qualitative behavior of $ Y_\text{\tiny DM}$ solution is found in a completely different context.
The relic abundance of the DM candidate \ensuremath{\phi_\text{\tiny DM}}\, can be evaluated by,
\begin{align}
\label{eq:relic}
\Omega h^2 = \frac{m_{DM} s_0 Y_\text{\tiny DM}(\infty)}{\rho_c/h^2},
\end{align}
where $s_0=2890$ cm$^{-3}$ is the current entropy density of the Universe and $\rho_c/h^2=1.05\times 10^{-5} \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}/ $cm$^3$ is the critical density. $Y_\text{\tiny DM}(\infty)$ is the asymptotic value of the actual number of \ensuremath{\phi_\text{\tiny DM}}\, per comoving volume obtained from numerical solutions of the corresponding Boltzmann equations.
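Eq.~\eqref{eq:relic} is straightforward to evaluate numerically; for instance, one can invert it to obtain the asymptotic yield required to reproduce the observed $\Omega h^2 \approx 0.12$ for a given DM mass. A hedged Python sketch using the constants quoted above:

```python
S0_ENTROPY = 2890.0        # current entropy density of the Universe, cm^-3
RHO_C_OVER_H2 = 1.05e-5    # critical density / h^2, GeV cm^-3

def omega_h2(m_dm_gev, y_inf):
    """Relic abundance from Eq. (relic)."""
    return m_dm_gev * S0_ENTROPY * y_inf / RHO_C_OVER_H2

def y_inf_required(m_dm_gev, omega_target=0.1199):
    """Asymptotic comoving yield needed to match the Planck central value."""
    return omega_target * RHO_C_OVER_H2 / (m_dm_gev * S0_ENTROPY)
```

For $m_{DM}=140$ GeV this gives $Y_\text{\tiny DM}(\infty) \approx 3\times 10^{-12}$, the typical WIMP-scale asymptotic yield.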
We calculate the velocity-averaged DM annihilation cross-section for the contribution to the $NN$ final state through $s$-channel $S_0$ exchange (Fig.~\ref{dia:relic}(a)). The leading term, i.e., the $s$-wave contribution in the non-relativistic limit $s= 4 m_{DM}^2$, is
\begin{align}
\langle \sigma v \rangle_{\ensuremath{\phi_\text{\tiny DM}}\,\ensuremath{\phi^*_\text{\tiny DM}}\,\to NN} \equiv \langle \sigma v \rangle_0= \frac{ \lambda_N^2 \lambda_{DS}^2 v^{\prime 2}}{64 \pi m_{DM}^2} \sqrt{1-\frac{m_N^2}{m_{DM}^2}} \frac{m_{DM}^2 - m_N^2}{\left(4m_{DM}^2 - m_{S_0}^2\right)^2 +m_{S_0}^2 \Gamma_{S_0}^2 },
\end{align}
where $\Gamma_{S_0}$ is the total decay width of the scalar $S_0$.
As discussed earlier, $S_0$ interacts dominantly with the right-handed neutrinos; its total decay width is therefore saturated by its decays to an $N$ pair and/or a \ensuremath{\phi_\text{\tiny DM}}\, pair, and is given by
\begin{align}
\Gamma_{S_0}= \frac{ m_N^2 m_{S_0} }{16 \pi v^{\prime 2}} \left(1- \frac{4 m_N^2}{m_{S _0}^2}\right)^{3/2}\text{cos}^2\alpha \, + \frac{ \lambda_{D S}^2 {v^\prime}^2 }{64 \pi m_{S_0}} \left(1- \frac{4 m_{DM}^2}{m_{S _0}^2}\right)^{1/2}.
\end{align}
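The resonance structure of $\langle \sigma v \rangle_0$ is easy to see numerically: the denominator $(4m_{DM}^2 - m_{S_0}^2)^2$ collapses as $2m_{DM} \to m_{S_0}$, leaving only the width term. A Python sketch of the two expressions above, with kinematic thresholds guarded; the coupling values used for illustration ($\lambda_N$, $\lambda_{DS}$, $v^\prime$) are assumptions, not the paper's fit values:

```python
import math

def gamma_s0(m_s0, m_n, m_dm, v_prime, lam_ds, cos_alpha=1.0):
    """Total S0 width: decays to an N pair and a phi_DM pair (when open)."""
    w = 0.0
    if m_s0 > 2.0 * m_n:
        w += (m_n**2 * m_s0 / (16.0 * math.pi * v_prime**2)
              * (1.0 - 4.0 * m_n**2 / m_s0**2) ** 1.5 * cos_alpha**2)
    if m_s0 > 2.0 * m_dm:
        w += (lam_ds**2 * v_prime**2 / (64.0 * math.pi * m_s0)
              * (1.0 - 4.0 * m_dm**2 / m_s0**2) ** 0.5)
    return w

def sigma_v0(m_dm, m_n, m_s0, lam_n, lam_ds, v_prime):
    """s-wave <sigma v> for phi_DM phi_DM* -> S0 -> N N (m_dm > m_n assumed)."""
    g_s0 = gamma_s0(m_s0, m_n, m_dm, v_prime, lam_ds)
    return (lam_n**2 * lam_ds**2 * v_prime**2 / (64.0 * math.pi * m_dm**2)
            * math.sqrt(1.0 - m_n**2 / m_dm**2) * (m_dm**2 - m_n**2)
            / ((4.0 * m_dm**2 - m_s0**2) ** 2 + m_s0**2 * g_s0**2))
```

For $m_{S_0}=300$ GeV the cross-section near $m_{DM}\approx 150$ GeV exceeds the off-resonance value by orders of magnitude, which is what produces the narrow viable window in the relic-density scan below.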
\begin{figure}[!t]
\begin{center}
\mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.5\linewidth]{relic_new3n.pdf}}
\subfigure[]{\includegraphics[width=0.48\linewidth]{mDM-YNn.pdf}}}
\caption{(a) The comparison of the relic density obtained in three different scenarios, shown as a function of the DM mass $m_{DM}$. The red points denote the proper solutions of the coupled Boltzmann equations \eqref{eq:dYDM} and \eqref{eq:dYN}. The orange points represent the solutions assuming a vanishing decay rate, and the green squares correspond to the solutions with the right-handed neutrino $N$ in thermal equilibrium. By adjusting the Yukawa coupling $y_N$, the observed relic density can be satisfied over a $30\,$\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}~ range around the resonance region of the $B-L$ scalar $S_0$. The other parameters are chosen as $m_{N}=100\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=300\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{h}=125\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{Z^\prime}=3\ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ and $\lambda_{DS}=0.3$. (b) The variation of $y_N$ with $m_{DM}$ corresponding to the red points in the left panel, which satisfy the observed relic density; here the $N$ decay effect is included in the Boltzmann equations. A larger coupling $y_N$, i.e., a larger decay width, is needed to satisfy the relic density away from the $S_0$ resonance region. }
\label{fig:reso}
\end{center}
\end{figure}
To illustrate the effects of three different scenarios, we show the variation of thermal relic density with the DM mass $m_{DM}$ in Fig.~\ref{fig:reso}(a), by choosing a benchmark point
$m_{N}=100\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=300\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{h}=125\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{Z^\prime}=3\ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ and $\lambda_{DS}=0.3$. We stress that Fig.~\ref{fig:reso} is shown for a particular benchmark point; the observations made here can be realized in other regions of the parameter space as well. The green squares correspond to the case where the right-handed neutrino $N$ is assumed to be in thermal equilibrium, so that the result is obtained by solving only one Boltzmann equation. It can be seen that for most of the parameter space the annihilation rate is small and cannot explain the relic abundance observed by Planck, $\Omega h^2 = 0.1199 \pm 0.0027$~\cite{Ade:2015xua} (shown as the black dotted line). The criterion for the correct relic abundance is satisfied only near the resonance region of the scalar $S_0$, essentially at two points, one below and one above the $S_0$ resonance. The orange solid squares depict the case where the interactions of $N$ are included in the analysis: the two coupled Boltzmann equations (Eqs.~\eqref{eq:dYDM} and \eqref{eq:dYN}) are solved without the decay effect of $N$. As the $N N\to Z^\prime \to f \bar{f}$ annihilation rate is suppressed by the large $Z^\prime$ mass, $N$ decouples earlier than the DM, and hence this scenario leads to an overabundance of DM particles. The situation improves significantly after incorporating the decay effect of $N$, as can be seen from the red points. By adjusting the Yukawa coupling $y_N$, we can satisfy the relic density over a $30\,$\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}} range around the resonance region of the $B-L$ scalar $S_0$. Figure~\ref{fig:reso}(b) illustrates the variation of $y_N$ with $m_{DM}$ corresponding to the red points in the left panel, which satisfy the observed relic density.
A larger coupling $y_N$, i.e., a larger decay width, is needed to satisfy the relic density away from the $S_0$ resonance region.
From the curve with green squares it can be seen that there exist two positions, one below and one above the $S_0$ resonance, where the observed relic abundance is satisfied. Right on top of the resonance, the huge enhancement of the $\ensuremath{\phi_\text{\tiny DM}}\, \ensuremath{\phi^*_\text{\tiny DM}}\,\to S_0 \to NN$ annihilation cross-section suppresses the relic density down to $\sim {10}^{-5}$. A similar observation was made in Ref.~\cite{Rodejohann:2015lca}, where it was argued that it is difficult to find viable models right on top of the resonance because of this suppression; we show that incorporating the decay effect of the right-handed neutrino $N$ in the analysis evades this problem.
\section{Constraints from direct detection experiments}
\label{sec:directD}
\begin{figure}[!t]
\begin{center}
\mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.5\linewidth]{directD.pdf}}
\subfigure[]{\includegraphics[width=0.5\linewidth]{directD2.pdf}}}
\caption{Contour plot of the direct detection cross-section through $t$-channel $Z^\prime$ exchange in the $m_{DM}- m_{Z^\prime}$ plane. The left panel (a) corresponds to $q_\text{\tiny DM}\!=\!1/2$ and the right panel (b) is obtained for $q_\text{\tiny DM}\!=\!3/2$. The red, yellow, blue and orange shaded regions denote cross-sections greater than $ 10^{-45}$ cm$^2$, within $ 10^{-45}$cm$^2-10^{-46}$ cm$^2$, within $ 10^{-46}$cm$^2- 10^{-47}$cm$^2$ and within $ 10^{-47}$cm$^2- 10^{-48}$cm$^2$, respectively. The red solid, green dotted and blue dashed curves are the current or future bounds from the XENON1T~\cite{Aprile:2017iyp}, LUX-ZEPLIN~\cite{Mount:2017qzi} and XENONnT~\cite{Aprile:2015uzo} experiments, respectively. The regions below these curves are excluded at 90\% confidence level. We assume the $B-L$ gauge coupling $g_{BL}=0.3$ for both panels.}
\label{fig:Direct}
\end{center}
\end{figure}
In this section we discuss the bounds from DM direct detection experiments. In the model under consideration, the DM is a scalar particle and interacts with nucleons via $t$-channel exchange of either the gauge boson $Z^\prime$ or the SM Higgs boson $h$. The contribution from $h$ exchange to the scattering cross-section depends strongly on the DM-Higgs coupling $\lambda_{D H}$, whose value is not of relevance for the purpose of this paper. Hence, in this section we restrict ourselves to the DM-nuclei interaction through $Z^\prime$ only.
The effective Lagrangian describing the scattering of the scalar DM particle \ensuremath{\phi_\text{\tiny DM}}\, off nucleons through the $Z^\prime$-mediated channel is
\begin{align}
\mathcal{L}_\text{eff}= i\frac{q_\text{\tiny DM} g_{BL}^2}{3 m_{Z^\prime}^2} V^\mu \bar{q} \gamma_\mu q
,\qquad q\in \{u,d\},
\end{align}
where $V^\mu$ is the vector current arising from the kinetic term of the DM particle \ensuremath{\phi_\text{\tiny DM}}\,. Decomposing \ensuremath{\phi_\text{\tiny DM}}\, in terms of real and imaginary components as \ensuremath{\phi_\text{\tiny DM}}\,$= \left(\phi_1+ i\phi_2\right)/\sqrt{2}$, we get $V^\mu\simeq \left(\phi_1 \partial^\mu \phi_2- \phi_2 \partial^\mu \phi_1 \right)$.
The SI DM-nuclei scattering cross-section for the scalar DM mediated by a $t$-channel gauge boson $Z^\prime$ is~\cite{Goodman:1984dc}
\begin{align}
\sigma_{SI}^N= \frac{1}{16 \pi} \left(\frac{M_N \,m_{DM}}{M_N+ m_{DM}} \right)^2 |b_N|^2,
\end{align}
where $M_N$ is the mass of the nuclei and the coefficient $b_N$ is given by
\begin{eqnarray}
b_N= \left(A-Z\right) b_n + Z b_p,~~~b_n= b_u +2 b_d,~~~b_p= 2b_u + b_d,
\end{eqnarray}
where $Z$ and $A$ are the atomic and mass numbers of the nucleus, respectively.
In terms of our model parameters $$ b_n=b_p= i\frac{q_\text{\tiny DM} g_{BL}^2}{ m_{Z^\prime}^2}.$$
Thus the SI scattering contribution for the DM and a single nucleon with mass $M_n$ is
\begin{align}
\label{eq:Dcross}
\sigma_{SI}^{\tiny{Z^\prime}}= \frac{1}{16 \pi} \left(\frac{M_n \,m_{DM}}{M_n+ m_{DM}} \right)^2 \frac{q_\text{\tiny DM}^2 g_{BL}^4}{ m_{Z^\prime}^4}.
\end{align}
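Eq.~\eqref{eq:Dcross} can be evaluated in natural units and converted to cm$^2$ using $(\hbar c)^2 \approx 3.894\times 10^{-28}$ GeV$^2\,$cm$^2$. A Python sketch, with the benchmark values $g_{BL}=0.3$, $q_\text{\tiny DM}=1/2$ used below (the nucleon mass value is an input assumption):

```python
import math

GEV_M2_TO_CM2 = 3.894e-28   # (hbar c)^2 in GeV^2 cm^2
M_NUCLEON = 0.939           # nucleon mass in GeV

def sigma_si_zprime(m_dm, m_zprime, q_dm=0.5, g_bl=0.3):
    """SI DM-nucleon cross-section of Eq. (Dcross), returned in cm^2."""
    mu = M_NUCLEON * m_dm / (M_NUCLEON + m_dm)   # reduced mass, GeV
    sigma_natural = mu**2 * q_dm**2 * g_bl**4 / (16.0 * math.pi * m_zprime**4)
    return sigma_natural * GEV_M2_TO_CM2

# E.g. m_DM = 200 GeV, m_Z' = 3 TeV, q_DM = 1/2, g_BL = 0.3 lands in the
# 1e-46 -- 1e-45 cm^2 band of the contour plot.
```

Since $\sigma \propto q_\text{\tiny DM}^2$, tripling the $B-L$ charge raises the cross-section by an order of magnitude, which is why the $q_\text{\tiny DM}=3/2$ panel is constrained more strongly.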
In Fig.~\ref{fig:Direct}, the predictions for the SI scattering cross-section $\sigma_{SI}^{\tiny{Z^\prime}}$ are shown in the $m_{DM}- m_{Z^\prime}$ plane. We assume the $B-L$ gauge coupling $g_{BL}=0.3$. The left panel (a) corresponds to $q_\text{\tiny DM}\!=\!1/2$ and the right panel (b) is obtained for $q_\text{\tiny DM}\!=\!3/2$. The red, yellow, blue and orange shaded regions denote cross-sections greater than $ 10^{-45}$ cm$^2$, within $ 10^{-45}$cm$^2-10^{-46}$ cm$^2$, within $ 10^{-46}$cm$^2- 10^{-47}$cm$^2$ and within $ 10^{-47}$cm$^2- 10^{-48}$cm$^2$, respectively. The red solid, green dotted and blue dashed curves are the current or future bounds from the XENON1T~\cite{Aprile:2017iyp}, LUX-ZEPLIN~\cite{Mount:2017qzi} and XENONnT~\cite{Aprile:2015uzo} experiments, respectively; the regions below these curves are excluded at 90\% confidence level. It can be inferred that for a DM mass below $1\ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$, the direct detection limit from the XENON1T experiment strongly competes with the current collider bounds on the mass of the $Z^\prime$ gauge boson. It should be noted from Eq.~\eqref{eq:Dcross} that, since $\sigma_{SI}^{\tiny{Z^\prime}} \propto q_\text{\tiny DM}^2$, the bound on $m_{Z^\prime}$ becomes stronger as the $B-L$ charge of \ensuremath{\phi_\text{\tiny DM}}\, increases.
\section{Right-handed neutrino phenomenology}
\label{sec:RHnu}
In the model under consideration, the right-handed neutrinos are introduced to assure the $B-L$ gauge anomaly cancellation. The small neutrino mass terms are generated via Type-I seesaw mechanism as can be seen from Eq.~\eqref{lag}. The right-handed neutrinos are SM gauge singlet but are charged under $B-L$ as shown in Table~\ref{Table}.
As discussed in Sec.~\ref{sec:model}, only one of the three $N_i$'s is lighter than the DM candidate; in this section we consider the phenomenology of this right-handed neutrino $N$, with a mass $\le \mathcal{O}$(TeV).
The coupling $y_N$ of the right-handed neutrino $N$ with the SM leptons is of Yukawa type; it governs the decay of $N$ into three possible channels, with decay widths given in Eqs.~\eqref{eq:Ndecayh}-\eqref{eq:NdecayZ}. However, the production of the right-handed neutrino $N$ is dictated by its $B-L$ charge and the gauge coupling, which is electroweak in nature. Below we discuss the production and decays of the right-handed neutrino at the LHC.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\linewidth]{prod_NNn.pdf}
\caption{The variation of production cross-section of right-handed neutrino $N$ pair with its mass $m_N$, at 14 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}} LHC. The other parameters are chosen such that $m_h= 125 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=354 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{DM}=178 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{Z^\prime}= 3 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}\!$ and $g_{BL}=0.3$.}
\label{fig:sigmaN}
\end{center}
\end{figure}
\subsection{Production}
The dominant production mode for an $N$ pair is the $s$-channel via the $Z^\prime$ gauge boson. Due to the LHC mass bound on $Z^\prime$, i.e., $m_{Z^\prime}\geq 2.8$ TeV \cite{z'bnd}, this mode is suppressed. Nevertheless, it remains the only sizable production mode available.
In Fig.~\ref{fig:sigmaN} we present the pair production cross-section of the right-handed neutrino $N$ at the LHC with 14 TeV $E_{CM}$, using the CTEQ6L parton distribution functions \cite{cteq}. The renormalization and factorization scale is chosen to be $\sqrt{\hat{s}}$. The other parameters are fixed at $m_h= 125 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{S_0}=354 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{DM}=178 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$, $m_{Z^\prime}= 3 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ and $g_{BL}=0.3$. Since the coupling is electroweak in nature and the exchanged $Z^\prime$ is heavy, the cross-section turns out to be only $\mathcal{O}$(fb).
\subsection{Decay}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.48\linewidth]{mN-YN.pdf}
\caption{The total decay width of the right-handed neutrino is shown as a variation in $m_N - y_N$ plane. The different regions of the decay width from $10^{-16}$ GeV to $10^{-13}$ GeV are shown in red, orange, brown and yellow, respectively.}
\label{fig:totdcdth}
\end{center}
\end{figure}
The right-handed neutrino can decay into modes with SM gauge bosons and leptons via mixing, with widths proportional to the Yukawa coupling squared, $y_N^2$; the decay widths for these modes can be found in Eqs.~\eqref{eq:Ndecayh}-\eqref{eq:NdecayZ}. We have seen in Sec.~\ref{sec:relic} that, to satisfy the observed relic density, a tiny value of $y_N \sim 10^{-8}$ is needed for a right-handed neutrino mass $m_N\sim \mathcal{O}(100)\,$GeV.
Such a low Yukawa coupling suppresses the decay rate, which in turn gives rise to displaced decays of the right-handed neutrino. In Fig.~\ref{fig:totdcdth} we show the total decay width of the right-handed neutrino as a function of its mass $m_N$ and the Yukawa coupling $y_N$. The shaded regions of decay width from $10^{-16}$ GeV to $10^{-13}$ GeV are shown in red, orange, brown and yellow, respectively. It is evident from Fig.~\ref{fig:totdcdth} that there is a significant region of parameter space that can be explored in collider searches comprising displaced-vertex signatures. Decays of such a right-handed neutrino into charged leptons and gauge bosons leave a displaced charged track at the collider. In the following collider study we search for such displaced final states at the LHC with 14 TeV $E_{CM}$, choosing suitable benchmark points.
\subsection{Benchmark points and collider signature}
We choose two benchmark points, defined in Table~\ref{Table:2}, to investigate the collider phenomenology of the right-handed neutrino $N$ at the LHC. The benchmark points are chosen in such a way that they satisfy the observed relic density via DM annihilation through $s$-channel $S_0$ exchange and also feature displaced decays of the right-handed neutrino $N$. BP1 deals with a relatively lighter DM particle and right-handed neutrino, with masses $165\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $110\,$GeV, compared to BP2, where the masses are $600\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ and $500\,$GeV, respectively. Below we explore the effect of the different mass spectra on the kinematics of the decay products of the right-handed neutrino $N$.
\begin{table}[h]
\centering
\begin{tabular}{ |c| c |c |c|c|c|}
\hline \hline
& $m_{h}$ & $m_{S_0}$ & $m_{DM}$ & $m_N$ & $m_{Z^\prime}$ \\ \hline
BP1 & $125 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $ 300 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $165 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}} $ & $110 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $3 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ \\
\hline
BP2 & $125 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $ 1225 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $600 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}} $ & $500 \ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}$ & $3 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}}$ \\
\hline
\hline
\end{tabular}
\caption{Masses of different particles for two benchmark points.} \label{Table:2}
\end{table}
\begin{table}[b]
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{||c|c|c||}
\hline\hline
Branching &BP1&BP2\\
fractions of $N$&&\\
\hline
$W^\pm e^\mp$& 79\%& 51.6\%\\
\hline
$Z \nu$ & 21\%& 25.7\%\\
\hline
$h\nu $&--& 22.7\%\\
\hline
\hline
\end{tabular}
\caption{\label{NBr} Branching fractions of the right-handed neutrino $N$ to different decay modes for BP1 and BP2 where the total decay widths are $1.09\times {10}^{-15}$\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}} and $1.74\times {10}^{-14}$\ensuremath{\mathrm{\,Ge\kern -0.1em V\,}}, respectively.}
\end{center}
\end{table}
The lighter right-handed neutrino (in BP1) decays mainly to the gauge boson modes, i.e., $Z\nu$ and $W^\pm e^\mp$. For the higher mass $m_N$ (in BP2), the decay to $h\nu$ is also open. The decay branching fractions are given in Table~\ref{NBr} for the two benchmark points; the $h\nu$ mode is open only for BP2, where all three modes share the branching fractions almost equally, whereas for BP1 $W^\pm e^\mp$ is the dominant mode. The mass spectra chosen for the benchmark points do not allow the decay to $S_0\, \nu$ via mixing. The decay widths of $N$ are proportional to $y^2_N$, as can be seen from Eqs.~\eqref{eq:Ndecayh}-\eqref{eq:NdecayZ}. As discussed below, such small couplings cause displaced decays of $N$, which give rise to displaced charged leptons or bosons. Looking at the decay $N \to W^\pm e^\mp$, it is easily understood that the $W^\pm$ undergoes a prompt decay into either two jets or one charged lepton. Thus the displaced $2\ell,~ 3\ell~ \rm{and} ~4\ell$ final states predicted by these decays are the golden channels to look for at the LHC.
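The total widths quoted in the caption of Table~\ref{NBr} translate directly into macroscopic proper decay lengths via $c\tau = \hbar c/\Gamma$, with $\hbar c \approx 1.973\times 10^{-16}$ GeV$\,$m. A quick numerical check (boost factors at the LHC would stretch the lab-frame lengths further):

```python
HBAR_C = 1.973e-16   # GeV * m

def proper_decay_length_m(gamma_gev):
    """Proper decay length c*tau in metres for a total width given in GeV."""
    return HBAR_C / gamma_gev

ctau_bp1 = proper_decay_length_m(1.09e-15)  # ~0.18 m  -> metre-scale tracks
ctau_bp2 = proper_decay_length_m(1.74e-14)  # ~1.1 cm -> centimetre-scale tracks
```

These numbers are consistent with the centimetre-to-metre displaced tracks found in the simulation below.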
There are other studies of displaced decays of right-handed neutrinos in the context of the Type-I seesaw, where the displaced decay width of the right-handed neutrino is proportional to the Yukawa coupling squared, $y_N^2$ \cite{ty1d}. The situation changes considerably in the context of supersymmetry, as the superpartners of the right-handed neutrino, i.e., the right-handed sneutrinos, can also undergo displaced decays via mixing with the left-handed sneutrinos. In some regions of parameter space such mixing angles become very small due to cancellations, on top of the Type-I seesaw suppression; this prompts displaced decays of such right-handed sneutrinos into charged and neutral leptons \cite{LFVpheno,ty1s}.
For the model under consideration, we simulate right-handed neutrino events, pair produced at the LHC, with displaced charged lepton final states. We use CalcHEP and PYTHIA \cite{calchep,pythia} for event generation and simulation. The jet formation is performed using {\tt Fastjet-3.0.3} \cite{fastjet} with the {\tt CAMBRIDGE AACHEN} algorithm. We select a jet size $R=0.5$ for the jet formation, with the following criteria:
\begin{itemize}
\item the calorimeter coverage is $\rm |\eta| < 4.5$
\item the minimum transverse momentum of the jet $ p_{T,min}^{jet} = 10$ GeV and jets are ordered in $p_{T}$
\item leptons ($\rm \ell=e,~\mu$) are selected with
$p_T \ge 20$ GeV and $\rm |\eta| \le 2.5$
\item no jet should be accompanied by a hard lepton in the event
\item $\Delta R_{\ell j}\geq 0.4$ and $\Delta R_{\ell \ell}\geq 0.2$
\item Since an efficient identification of the leptons is crucial for our study, we additionally require
a hadronic activity within a cone of $\Delta R = 0.3$ between two isolated leptons to be $\leq 0.15\, p^{\ell}_T$ GeV, with
$p^{\ell}_T$ the transverse momentum of the lepton, in the specified cone.
\end{itemize}
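The lepton-isolation requirement in the last item above can be sketched as a function on $(p_T, \eta, \phi)$ objects, with $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}$ and $\phi$ wrapped to $(-\pi,\pi]$. This Python sketch is a simplified stand-in for the detector-level procedure:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Wrap the azimuthal difference into (-pi, pi] before combining with eta.
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def is_isolated(lep, hadrons, cone=0.3, frac=0.15):
    """Hadronic pT inside the cone must not exceed frac * pT(lepton).

    lep and each hadron are (pt, eta, phi) tuples.
    """
    pt_sum = sum(pt for pt, eta, phi in hadrons
                 if delta_r(lep[1], lep[2], eta, phi) < cone)
    return pt_sum <= frac * lep[0]
```

For example, a 50 GeV lepton tolerates at most 7.5 GeV of hadronic activity inside its $\Delta R = 0.3$ cone.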
Figure~\ref{fig:lpt} shows the transverse momentum of the charged leptons arising from the decays of $N$ and the corresponding $W^\pm$ for the two benchmark points. The larger phase space available in BP2 allows the charged lepton to carry very high energy, $\mathcal{O}$(TeV), much higher than in BP1, where it is around a few hundred GeV. The two scenarios can thus be distinguished by applying appropriate lepton $p_T$ cuts and, if realized in nature, can be discovered at the LHC with 14 TeV $E_{CM}$.
\begin{figure}
\begin{center}
\includegraphics[width=0.38\linewidth,angle=-90]{llpt.pdf}
\caption{The transverse momentum of the charged leptons arising from the decay of $N$ and the corresponding $W^\pm$ for the two benchmark points at the 14 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}} LHC. The BP1 and BP2 are defined in Table~\ref{Table:2}.}
\label{fig:lpt}
\end{center}
\end{figure}
\begin{figure}
\begin{center}
\mbox{\hskip -20 pt\subfigure[]{\includegraphics[width=0.35\linewidth,angle=-90]{nu4dcl.pdf}}
\subfigure[]{\includegraphics[width=0.35\linewidth,angle=-90]{nu4dcl2.pdf}}}
\caption{The transverse decay length of the right-handed neutrino $N$, in meters for BP1 and in centimeters for BP2, at the 14 \ensuremath{\mathrm{\,Te\kern -0.1em V\,}} LHC, after imposing the basic cuts in a PYTHIA-based simulation. BP1 and BP2 are defined in Table~\ref{Table:2}.}
\label{fig:histN}
\end{center}
\end{figure}
The charged leptons arising from the decays of the right-handed neutrino $N$ are produced after some travel time of $N$, giving rise to displaced charged tracks. The displaced $W^\pm$'s produced in such decays undergo prompt decays to either charged leptons or quarks, leaving the possibility of another displaced charged track. Figures~\ref{fig:histN}(a) and \ref{fig:histN}(b) show the transverse decay length of the right-handed neutrino $N$ produced at the LHC for BP1 and BP2, respectively. It is evident that the charged tracks can extend from a few centimeters (for BP2) to a few meters (for BP1) for these particular choices of parameters. Such signals have essentially no SM backgrounds, making them completely clean in nature. In Table~\ref{signal} we present the signal numbers at an integrated luminosity of 100 fb$^{-1}$. It can be seen that in the $2\ell$ case we have sufficient events to probe such parameter space. For the displaced $3\ell$ and $4\ell$ signals, one has to wait for higher luminosities, depending on the choice of benchmark point. Non-observation of such displaced charged tracks would clearly put bounds on the $m_{Z'}-m_N$ parameter space at a given luminosity. Below we explore the parameter space that can be ruled out at the LHC with increasing luminosity.
\begin{table}
\begin{center}
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{||c|c|c||}
\hline\hline
Final &BP1&BP2\\
states&&\\
\hline
$\geq 2\ell$&33.7&31.6\\
\hline
$\geq 3\ell$ &5.5&8.5\\
\hline
$\geq 4\ell $&2.9&1.1\\
\hline
\hline
\end{tabular}
\caption{Final state numbers for $2\ell,~ 3\ell,~ 4\ell$ at the 14 TeV LHC at an integrated luminosity of 100 fb$^{-1}$. }\label{signal}
\end{center}
\end{table}
\begin{figure*}[!h]
\begin{center}
\includegraphics[width=0.5\linewidth]{mN-mZpN.pdf}
\caption{The exclusion limits derived from the non-observation of displaced dileptonic charged tracks arising from the decays of the right-handed neutrino $N$, shown in the $m_{N}- m_{Z^\prime}$ plane at 95\% CL for three different luminosities at the $14$ TeV LHC. The deep, lighter and lightest purple regions denote the regions excluded with $100\ensuremath{\text{\,fb}^{-1}}\,\!, 1000$\ensuremath{\text{\,fb}^{-1}}\, and $3000$\ensuremath{\text{\,fb}^{-1}}\, of integrated luminosity, respectively, at the LHC.}
\label{fig:Luminosity}
\end{center}
\end{figure*}
Utilizing the non-observation of displaced dileptonic charged tracks arising from right-handed neutrino decays, we put bounds on the mass of $Z^\prime$, highlighted in Fig.~\ref{fig:Luminosity}. It depicts the bounds in the $m_{Z'}$-$m_N$ plane from non-observation of displaced di-leptons at 95\% CL, assuming a $30\%$ acceptance as obtained for the relevant $m_N$ mass range (similar to the case of BP1). At 100 fb$^{-1}$ of integrated luminosity, such events exclude $m_{Z'} \lesssim 5.5$ TeV for $m_N \sim 100-400$ GeV; the region below the solid line (dark purple region) can be ruled out. Similarly, $m_{Z'} \lesssim 9.5$ TeV can be excluded with 1000 fb$^{-1}$ (dashed line) and $m_{Z'} \lesssim 12.5$ TeV with 3000 fb$^{-1}$ (dotted line) of integrated luminosity at the LHC with 14 TeV $E_{CM}$.
\section{Discussions and conclusions}
\label{sec:conclusion}
In this paper we focus on an extension of the SM in which the $B-L$ charged right-handed neutrinos have three distinct and important consequences for BSM physics. First, they provide an explanation of the tiny neutrino masses via the Type-I seesaw mechanism. Second, since the right-handed neutrinos are charged under the $U(1)_{B-L}$ gauge group, they provide the much-needed annihilation mode for the $B-L$ charged but SM gauge-singlet scalar DM candidate. The $s$-channel annihilation occurs via the $B-L$ symmetry-breaking scalar, which is an SM gauge singlet. Furthermore, the displaced decays of the right-handed neutrinos provide interesting signatures at the LHC and future colliders, which can be used to indirectly constrain the mass of the $B-L$ gauge boson $Z^\prime$.
The requirement of the correct DM relic density calls for the annihilation of the DM pair into a right-handed neutrino pair, which decays further into SM particles. Such decays of the right-handed neutrinos are included in the analysis, and the impact of the decay effect is prominent in Fig.~\ref{fig:reso}.
By changing the decay width, the mass gap between the two viable points on the DM mass axis is completely eliminated, and the observed relic density is satisfied over a $\sim 30$ GeV range near the $B-L$ scalar $S_0$ resonance region. This makes the parameter space viable for model building. Given the bounds on $m_{Z^\prime}$ from collider experiments, we concentrate on the scenario where the $s$-channel annihilation via the $B-L$ symmetry-breaking scalar is dominant. The existence of such a scalar is crucial, not only for generating the $Z^\prime$ boson mass but also for obtaining the correct DM abundance. However, the production of such an SM-singlet scalar is challenging due to its lack of couplings to quarks. Given its tiny mixing angle with the SM Higgs boson and $m_{S_0}> m_h$, it is difficult to discover such a scalar at the LHC.
We study the bound on the mass of $Z^\prime$ from the SI DM-nuclei scattering cross-section measured in direct detection experiments. As the scattering cross-section is proportional to $q_\text{\tiny DM}^2$, with $q_\text{\tiny DM}$ the $B-L$ charge of the DM, the limits on $m_{Z^\prime}$ become more stringent with increasing DM charge and thus compete with recent $Z^\prime$ searches at the LHC.
In this paper, we also investigate the production and decays of the right-handed neutrinos at the LHC. The right-handed neutrinos decay into the SM gauge bosons, the Higgs boson and leptons via Type-I mixing terms, with decay widths proportional to the Yukawa coupling squared, $y_N^2$. Consequently, the decay widths are very small, leading to displaced decays of the right-handed neutrinos and thus to displaced charged lepton tracks. Signals of this kind are mostly free from SM backgrounds and easy to probe at the LHC. We show that a dataset of 100\,fb$^{-1}$ of integrated luminosity is sufficient to probe $m_{Z'} \sim 5.5$ TeV at the 14\ensuremath{\mathrm{\,Te\kern -0.1em V\,}} LHC.
\section*{Acknowledgments }
R.M. thanks the Korea Institute for Advanced Study, Seoul for the hospitality during the initial part of the project.
P.B. acknowledges The Institute of Mathematical Sciences, Chennai for the visit for part of the duration of the collaboration. This project has received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie Grant Agreement No. 690575.
<div class="py2 post-footer">
<img src="{{ site.baseurl }}/images/me.jpeg" alt="Alex Good" class="avatar" />
<p>
This is the website of Alex Good <a href="http://goodalexander.com">GoodAlexander</a>.
</p>
<p>
Follow me on <a href="https://twitter.com/goodalexander">Twitter</a>.
</p>
</div>
Q: WordPress WPAllImport: own PHP function to convert an array to a string
My JSON import URL has a field which sometimes contains a single text value and sometimes an array.
JSON Array:
"wp_u_umlaufart": ["Personal", "Organisation"]
Simple Text:
"wp_u_umlaufart": "Organisation"
With my function I try to check whether the value is an array; if it is, I return it as a comma-separated string, otherwise I return the text unchanged.
My Function:
// Return arrays as a comma-separated string; pass plain text through unchanged.
function my_array_convert( $content ) {
if (is_array($content)) {
$converted_arr = implode(",", $content);
return $converted_arr;
} else {
return $content;
}
}
And this is my custom-field:
Unfortunately, only the plain text values arrive; an array is never converted and returned.
I changed the function to:
// Normalize first: json_decode(json_encode(...), true) turns stdClass objects into arrays.
function my_array_convert( $content ) {
$obj = json_decode(json_encode($content), true);
if (is_array($obj)) {
$converted_arr = implode(",", $obj);
return $converted_arr;
} else {
return $content;
}
}
WP All Import does not output this correctly, and I have not found the error so far.
But if I put this in my PHP sandbox, it works:
<?php
$content = ["Personal","Organisation"];
$obj = json_decode(json_encode($content), true);
echo is_array($obj) ? 'Yes Array' : 'No Array';
if (is_array($obj)) {
$converted_arr = implode(",", $obj);
echo $converted_arr;
} else {
echo $content;
}
?>
\section{Introduction}
The purpose of this paper is to derive Kirchhoff law in the framework of fluctuational electrodynamics introduced by Rytov\cite{Rytov,Landau}. Kirchhoff law plays a fundamental role in the study of thermal emission. It allows one to predict the thermal emission by an object from the knowledge of the absorption of an incident coherent plane wave. Yet, there is no general proof of its validity. The usual derivation given in textbooks is based on energy conservation. Hence, an additional detailed-balance assumption is required to derive the equality of absorptivity and emissivity for a specified frequency, direction and polarization. There is a rigorous proof of its validity for an arbitrary sphere by Kattawar \cite{Kattawar}. It has also been proven within the scalar approximation for any planar interface separating vacuum from any complex medium satisfying reciprocity \cite{Greffet,Snyder}. Many works compare, for different systems, a direct calculation of the emission based on fluctuational electrodynamics with a direct calculation of the absorption. So far, all these numerical calculations were found to agree with Kirchhoff law \cite{Joan1, Joan2, Zhang}. Yet, there is no general proof for an arbitrary finite size object. This unsatisfactory status of the derivation of Kirchhoff law has led to several works questioning its validity. Of particular practical interest is the question of the existence of an upper bound to the emitted power. Is it possible to emit more than a black body? Such an emitter has recently been called a superplanckian emitter. For a finite size object, the power radiated can be modified by modifying its optical environment. It has been reported that the power emitted by a finite size object with area $A$ can be increased up to $n^2 A\sigma T^4$ by placing the emitter in a medium of refractive index $n$.
The energy can then be transferred to the vacuum avoiding total internal reflection by using a solid immersion lens type of geometry as discussed in Refs \cite{Harrick,Fan}. This is in full agreement with standard radiometry as the thermodynamic radiance varies as $n^2I_b(T,\omega)$. There have been experimental reports and theoretical claims that thermal emission exceeding black body radiation in the far field is possible using a photonic crystal \cite{Lin}. This result was refuted \cite{Green} and subsequent work \cite{Fleming} reported an experimental bias. More recently, it has been suggested that hyperbolic metamaterials could be used to achieve superplanckian emitters \cite{Nefedov,Maslovski}. This short literature survey shows that the existence of an upper bound of the thermal emission is still an open question. It is thus important to clarify the status of Kirchhoff law for finite size objects.
Another important issue regarding thermal emission is the case of anisothermal bodies. In that case, Kirchhoff law cannot be applied. Obviously, the derivation based on thermodynamic equilibrium cannot be used to deal with anisothermal objects. However, it is possible to compute the emitted radiance through a direct calculation based on the fluctuational electrodynamics approach. This type of calculation has been reported in Refs \cite{Joan1,Joan2,Kong,Zhang, Han,Bardati,Yurasova} for instance. It can be used to solve the inverse problem in order to deduce the temperature field from the emitted radiance. Another application consists in controlling the emitted radiance. It has been proposed \cite{Norris} to emit at different wavelengths or in different directions or polarizations by heating different parts of an object. Heating only submicron volumes of a structure could be an alternative to achieve modulation of the power emitted by an incandescent source faster than $100$ MHz \cite{Greffetnature,apl,Noda}. The question is therefore whether it is possible to generalize Kirchhoff law to anisothermal systems. Recently, Han has derived a closed form of both the local absorption rate and the local emission rate and has proven their equality \cite{Han} in the case of a periodic multilayer system. Here, we generalize to any finite size body the approach taken by Han. We introduce a generalized Kirchhoff law establishing the equality between the local absorption rate and the local emission rate in any finite size body with arbitrary shape, orientation and structure. The only requirements are i) all materials satisfy reciprocity (i.e. they have a symmetrical permittivity tensor) and ii) it is possible to define a local temperature (local thermal equilibrium). This paves the way to the discussion of the engineering of the emission properties of a hot object surrounded by a cold structure operating as an antenna.
Finally, we push the generalization of Kirchhoff law to anisothermal systems one step further by considering the case where different excitations (e.g. electrons, excitons, phonons) are at different temperatures at the same position. Thermal emission by hot electrons has been observed in a variety of systems such as tunneling tips \cite{Dumas, Bouhelier}, graphene \cite{Avouris,Hone} and quantum wells \cite{Sirtori}. The theoretical tools introduced in this paper provide a rigorous framework to analyze and optimize light emission by these systems.
\section{Generalized Kirchhoff law}
To characterize the absorption of a linearly polarized monochromatic plane wave by a finite size body with volume $V$, it is useful to introduce the absorption cross-section $\sigma_{abs}$ which connects the absorbed power $P_{abs}$ with the incident Poynting vector flux:
\begin{equation}
P_{abs}^{(l)}(\omega)=\sigma_{abs}^{(l)}(\mathbf{u}, \omega)\frac{\epsilon_0 c \vert \mathbf{E}_{inc}^{(l)} (\omega)\vert^2}{2}.
\label{eq1}
\end{equation}
The absorption cross-section depends on the polarization $l=s,p$, the incident direction $\mathbf{u}$ and the frequency $\omega$. We note that this concept does not provide any information about the position where absorption takes place in the body. An alternative form of the absorbed power is given by the integral over volume $V$ of the dissipation rate per unit volume:
\begin{equation}
P_{abs}^{(l)}(\omega)= \int _V \mathrm{Im} [\epsilon(\mathbf{r'},\omega)] \frac{\omega \epsilon _0}{2} \vert \mathbf{E}^{(l)} (\mathbf{r'},\omega)\vert ^2 d^3\mathbf{r'},
\label{eq:absorbed_power_general}
\end{equation}
where $\mathbf{E}^{(l)} (\mathbf{r'})$ is the field in the body illuminated by an $l$-polarized plane wave with incident field amplitude $E_{inc}$ and $\epsilon(\mathbf{r'},\omega)$ is the permittivity. This form describes explicitly where the absorption takes place but is not related explicitly to the incident field so that it cannot be expressed in terms of cross section. We now seek a connection between the incident field $\mathbf{E}_{inc}$ and the field in the absorber $\mathbf{E}(\mathbf{r})$. We will show that the existence of this linear relation allows us to cast the absorption cross section in the form:
\begin{equation}
\sigma_{abs}^{(l)}(\mathbf{u},\omega)=\int_V \d^3\r' \alpha^{(l)} (\mathbf{u},\r',\omega),
\label{eq2}
\end{equation}
where $\alpha^{(l)}$ appears as an absorption cross section density. The unit vector $\mathbf{u}$ denotes the propagation direction of the incident plane wave (see Fig. 1). Note that, just like the absorption cross section, this quantity depends on the object shape and orientation; it is not an intrinsic material property. Note in particular that the absorbing object may have electromagnetic modes which can be resonantly excited, leading to enhanced absorption at some particular locations and frequencies.
\begin{figure}[htb]
\centering
\includegraphics[width=80mm]{Figure1.pdf}
\caption{ Sketch of the system. A finite size volume $V$ located in the half-space $z< 0$ radiates in the solid angle $\d \Omega$ subtended by the surface $\d S$.}
\label{FIG1}
\end{figure}
We now turn to the power emitted by the object. Using the fluctuational electrodynamics framework, we will write it as an integral over the volume of the emitter. We now define a local $l$-polarized emissivity density $\eta^{(l)}(\mathbf{u}, \r', \omega)$ using:
\begin{equation}
P_{e}^{(l)}=\int_{0}^{\infty} \d \omega \int _V \int_{4\pi} \eta^{(l)}(\mathbf{u}, \r', \omega) \frac{I_b [T(\r'),\omega] }{2} d^3\r' d \Omega,
\label{eq3}
\end{equation}
where we have introduced the blackbody radiance $I_b [T(\r'),\omega]=\frac{\omega ^2}{4 \pi ^3 c^2}\frac{\hbar \omega}{\exp(\hbar\omega/kT)-1}$. Note that we have introduced the polarized blackbody radiance $I_b/2$ which is half the blackbody radiance.
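As a numerical sanity check of the radiance just defined, a short Python sketch (the function and constant names are ours, not from the paper) evaluates $I_b(T,\omega)$ directly:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J s
KB = 1.380649e-23       # Boltzmann constant, J/K
C = 2.99792458e8        # speed of light, m/s

def blackbody_radiance(T, omega):
    """I_b(T, omega) = omega^2/(4 pi^3 c^2) * hbar*omega / (exp(hbar*omega/kT) - 1),
    per unit angular frequency."""
    x = HBAR * omega / (KB * T)
    # math.expm1 keeps the Rayleigh-Jeans limit (x -> 0) numerically stable
    return omega**2 / (4 * math.pi**3 * C**2) * HBAR * omega / math.expm1(x)
```

In the Rayleigh-Jeans limit $\hbar\omega \ll k_B T$ the expression reduces to $\omega^2 k_B T/(4\pi^3 c^2)$, which provides an easy consistency test; the polarized radiance entering Eq.~(\ref{eq3}) is half of this value.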
Both quantities, the absorption cross section density $\alpha^{(l)}$ and the emissivity density $\eta^{(l)}$ are polarized, directional and monochromatic. The first goal of this paper is to establish a general form of these two quantities and prove that they are equal for any body made of materials satisfying reciprocity, namely, materials with symmetric permittivities. This result is the generalized form of Kirchhoff law:
\begin{equation}
\eta^{(l)}( \mathbf{u}, \r', \omega)= \alpha^{(l)} (\mathbf{u},\r',\omega).
\label{GKL}
\end{equation}
An immediate consequence of this result is that the total power emitted by an isothermal object in the solid angle $\d \Omega$ is given by:
\begin{equation}
\d P_e^{(l)}= \int_{0}^{\infty} \d\omega \; \sigma^{(l)}_{abs}(\mathbf{u},\omega) \;\frac{ I_b(T,\omega)}{2} \; \d\Omega.
\end{equation}
This result provides all the required information about the emission and shows that it is entirely characterized by the knowledge of the absorption cross-section. We emphasize that it is valid for any size of the object. We also emphasize that the absorption cross section is not related to the actual geometrical size of the object. We can thus consider that the particle has an effective area called the absorption cross section $\sigma^{(l)}_{abs}(\mathbf{u},\omega)$ that can be used to characterize both emission and absorption. In that sense, the two are always equal and there is no superplanckian emission in the far field: the body can be considered to have an effective emissivity or absorptivity which is equal to 1. The concept of superplanckian emission can be introduced if one compares the absorption cross section to the actual geometrical section of the object in a situation where the geometrical section is smaller than the absorption cross section. It is well known that the absorption cross section of a resonant subwavelength sphere can be much larger than the geometrical cross section \cite{Bohren}. It is also known that in the so-called resonant regime, namely for sizes on the order of the wavelength, the absorption cross section can be larger than the geometrical cross section. That would correspond to the so-called super-planckian emitters. However, in the subwavelength and resonant regimes, geometrical optics is not valid so that i) absorption is not expected to be proportional to the geometrical area, ii) the concepts of radiometry such as emissivity and absorptivity are not expected to be valid. Hence, in what follows, we only use the absorption cross section concept which is unequivocally defined as the response of an object to a plane wave. Finally, as opposed to the Kirchhoff law which is only valid for isothermal bodies, the generalized Kirchhoff law can be used to compute the power emitted by an anisothermal body. 
Here, we will derive the emitted power in terms of the absorption cross section density:
\begin{equation}
\d P_e^{(l)}= \int_{0}^{\infty}\d\omega \int_V \d^3\r' \alpha^{(l)} (\mathbf{u},\r',\omega) \frac{ I_b[T(\r'),\omega]}{2}\d\Omega.
\label{Pe}
\end{equation}
In the rest of the paper, we proceed to derive the generalized Kirchhoff law using the reciprocity of the Green tensor. In the first part of the paper, we derive the emission cross section density. In the second part of the paper, we derive the absorption cross section density. Both are given in terms of the Green tensor and the imaginary part of the dielectric permittivity. From a practical point of view, the explicit forms of $\eta^{(l)}$ and $\alpha^{(l)}$ are rather cumbersome. In practice, it is possible to compute numerically $\alpha^{(l)} (\mathbf{u},\r',\omega)$ and insert it in Eq.(\ref{Pe}) to derive the emitted power.
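That last remark can be made concrete with a minimal sketch of the quadrature implied by Eq.~(\ref{Pe}). The Python below assumes the absorption cross-section density and the temperature field are available as caller-supplied callables (hypothetical placeholders, since evaluating $\alpha^{(l)}$ itself requires an electromagnetic solver):

```python
import math

HBAR, KB, C = 1.054571817e-34, 1.380649e-23, 2.99792458e8

def planck_radiance(T, omega):
    # blackbody radiance I_b(T, omega) = omega^2/(4 pi^3 c^2) * hbar*omega/(exp(hbar*omega/kT)-1)
    return omega**2 / (4 * math.pi**3 * C**2) * HBAR * omega \
        / math.expm1(HBAR * omega / (KB * T))

def emitted_power(alpha, temperature, cells, omegas, d_omega, d_volume, d_solid_angle):
    """Riemann-sum version of Eq. (Pe):
    dP_e = int domega int_V alpha(u, r', omega) * I_b[T(r'), omega]/2 d^3r' dOmega.
    `alpha(cell, omega)` and `temperature(cell)` are user-supplied callables;
    the voxel list `cells` discretizes the emitter volume."""
    total = 0.0
    for w in omegas:
        for cell in cells:
            total += alpha(cell, w) * planck_radiance(temperature(cell), w) / 2.0
    return total * d_volume * d_omega * d_solid_angle
```

Because the temperature enters voxel by voxel, the same routine covers anisothermal bodies without modification; only the `temperature` callable changes.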
\begin{figure}[htb]
\centering
\includegraphics[width=80mm]{Figure2bis.pdf}
\caption{ Illustration of two reciprocal situations (1) and (2) obtained upon exchanging source and detector positions.}
\label{FIG2}
\end{figure}
Before proceeding to the formal derivation, we will emphasize two basic ingredients. The first key point is the reciprocity of the Green tensor. This is better understood by looking at Figure 2. The basic idea is that a dipole moment $\mathbf{p}_1$ located at $\mathbf{r}$ generates a field $\mathbf{E}_1$ at $\mathbf{r'}$. Conversely, a dipole moment $\mathbf{p}_2$ located in the absorber at $\mathbf{r}'$ generates a field $\mathbf{E}_2$ at the position $\mathbf{r}$. It follows from reciprocity that these two fields satisfy $\mathbf{p}_1\cdot\mathbf{E}_2=\mathbf{p}_2\cdot\mathbf{E}_1$ \cite{Greffet97}. In other words, the signal is not changed by exchanging point-like sources and detectors. The second key ingredient in the generalized Kirchhoff law is the introduction of the emission angle $\theta$. To proceed, we will use the asymptotic form of the field in the far field. We now turn to the derivation of the local emission and absorption rates.
\section{Emission}
We begin by calculating the thermal emission from a local point $\mathbf{r'}$ towards a point $\mathbf{r}$. To proceed, we consider that thermal fields are radiated by stationary random currents as discussed in \cite{Rytov,Landau}. In what follows we use the notation of Ref. \cite{Landau} for the spectral analysis of stationary random processes. We use the correlation function of the current density given by the fluctuation-dissipation theorem for a medium which is linear and isotropic:
\begin{eqnarray}
&\langle j_n(\r,\omega)j_m(\r',\omega')\rangle = \nonumber \\
&2\pi\delta(\omega+\omega')\delta(\r-\r')\delta_{nm}2\omega\epsilon_0\mathrm{Im}[\epsilon(\r',\omega)]\Theta[T(\r'),\omega],
\label{FD}
\end{eqnarray}
where the brackets denote ensemble average, $\Theta[T(\r'),\omega]=\hbar\omega/(\exp(\hbar\omega/k_BT(\mathbf{r}'))-1)$ and $T(\mathbf{r}')$ is the temperature at point $\mathbf{r}'$.
We consider a point $\mathbf{r}$ in the far field so that the emission direction is specified by the unit vector $\mathbf{u}=\mathbf{r}/r$. The electric field is transverse to the propagation direction $\mathbf{u}$ so that it can be described using only two components. Let us introduce two orthogonal unit vectors denoted $\mathbf{e}^{(s)}$ and $\mathbf{e}^{(p)}$ perpendicular to $\mathbf{u}$. The emitted power flowing through the area $\d A$ is given by the flux of the Poynting vector:
\begin{equation}
\d P_e=\langle\mathbf{E}(\mathbf{r},t)\times\mathbf{H}(\mathbf{r},t)\rangle\cdot \d A\mathbf{u}.
\end{equation}
Let us formally introduce the Fourier transform of the electric field:
\begin{equation}
\mathbf{E}(\mathbf{r},t)=\int_{-\infty}^{\infty}\frac{\d \omega}{2\pi}\mathbf{E}(\mathbf{r},\omega)\exp(-i\omega t).
\end{equation}
In the far field, the electromagnetic field has a plane wave structure so that $\mathbf{H}(\mathbf{r},\omega)=\epsilon_0 c \mathbf{u}\times \mathbf{E}(\mathbf{r},\omega)$.
It follows that the Poynting vector through $\d A$ can be cast in the form:
\begin{eqnarray}
\d P_e=\d A\; \epsilon_0 c \int_{-\infty}^{\infty}\frac{\d \omega}{2\pi}\int_{-\infty}^{\infty}\frac{\d \omega'}{2\pi}\exp[-i(\omega+\omega')t] \nonumber \\
\langle\mathbf{E}(\mathbf{r},\omega)\cdot\mathbf{E}(\mathbf{r},\omega')\rangle .
\label{emission}
\end{eqnarray}
The $m$-component of the electric field is given by:
\begin{equation}
E_m (\r,\omega)= i \mu _0 \omega\int G_{mn}(\mathbf{r},\mathbf{r'},\omega)j_n(\mathbf{r'},\omega)\d^3\mathbf{r'},
\label{eq:eq_champE_green}
\end{equation}
where $G_{mn}$ is a component of the Green tensor and $j_n$ is the $n$-component of the current density. Throughout the paper, we use the Einstein notation so that there is a sum over repeated indices. The amplitude of the field along the unit vector $\mathbf{e}^{(l)}$ is given by:
\begin{equation}
E^{(l)}=\mathbf{e}^{(l)}\cdot\mathbf{E}=i \mu _0 \omega\int e_m^{(l)}G_{mn}(\mathbf{r},\mathbf{r'},\omega)j_n(\mathbf{r'},\omega)d^3\mathbf{r'}.
\label{Green}
\end{equation}
After inserting Eq.(\ref{FD}) and Eq.(\ref{Green}) into Eq.(\ref{emission}) and using $G_{mn}(\r,\r',-\omega)=G_{mn}^*(\r,\r',\omega)$, we get the power emitted in $l$-polarization through $\d A$:
\begin{eqnarray}
\d P_e^{(l)}=&\d A \int_{-\infty}^{\infty}\frac{\d \omega}{2\pi} \int_V e_m^{(l)}G_{mk}(\mathbf{r},\mathbf{r'},\omega)e_n^{(l)}G_{nk}(\mathbf{r},\mathbf{r'},-\omega) \nonumber \\
& 2 k^3 \mathrm{Im}[\epsilon(\r',\omega)]\Theta[T(\r'),\omega] \d^3\r'.
\label{Pe_0}
\end{eqnarray}
We now evaluate the power emitted per unit solid angle.
We start by introducing a plane wave expansion \cite{Mandel} of the Green tensor at the observation point $\r$:
\begin{equation}
G _{mn}(\r,\r',\omega)= \int G _{mn}(\mathbf{k}_{\parallel},z=0,\r',\omega)
e^{i\mathbf{k}_{\parallel}\cdot\r}e^{i \gamma z}\frac{d^{2}\mathbf{k}_\parallel}{(2\pi)^2},
\label{eq:eq_asymp_0}
\end{equation}
where $\gamma=[\omega^2/c^2-k_{\parallel}^2]^{1/2}$, $\mathbf{k}_{\parallel}=(k_x,k_y,0)$ is the wavevector in the $(x,y)$ plane and $G _{mn}(\mathbf{k}_{\parallel},z=0,\r',\omega)$ is the Fourier transform of $G _{mn}(\r,\r',\omega)$ with respect to $x$ and $y$ in the plane $z=0$. An asymptotic evaluation of this integral when $kr \rightarrow \infty$ by the method of the stationary
phase \cite{Mandel} gives:
\begin{equation}
G _{mn} (\r,\r',\omega)\rightarrow \frac{-i k}{2\pi} \cos \theta \,
G _{mn}(\mathbf{k}_{\parallel},z=0,\r',\omega)\frac{e^{ikr}}{r}
\label{eq:eq_asymp}
\end{equation}
Inserting this asymptotic form in Eq.(\ref{Pe_0}) and introducing the solid angle $\d \Omega=\d A/r^2$ yields:
\begin{eqnarray}
\d P_{e}^{(l)}=&\int_{-\infty}^{\infty}\frac{\d \omega}{2\pi} \frac{k^5}{2\pi^2}\cos ^2 \theta \,
\int \vert e_m^{(l)} G_{mn} (\mathbf{k}_{\parallel},z=0,\r',\omega)\vert ^2 \nonumber \\
&\mathrm{Im}[\epsilon(\r',\omega)] \Theta[T(\r'),\omega] \d^3\r' \d \Omega.
\label{eq:emitted_power}
\end{eqnarray}
We now cast the result in a form that can be compared to the standard radiometric approach. We restrict the integration to positive frequencies. The emitted power can be cast in the form:
\begin{equation}
dP_{e}^{(l)}=\int_{0}^{\infty} \d \omega \int _V \eta^{(l)}( \mathbf{u}, \r',\omega) \frac{I_b [T(\r'),\omega] }{2} \d^3\r' \d \Omega,
\label{eq:emitted_power2}
\end{equation}
where we have defined the polarized spectral directional emissivity $\eta^{(l)} (\mathbf{u},\r',\omega)$:
\begin{eqnarray}
&\eta^{(l)} (\mathbf{u}, \r',\omega) = \nonumber \\
& 4 k^3 \cos^2 \theta
\vert e_m ^{(l)} G_{mn}(\mathbf{k}_{\parallel}, z=0,\r',\omega)\vert ^2 \mathrm{Im}[\epsilon(\r',\omega)].
\label{eq:emissivity}
\end{eqnarray}
Note that we define the polarized emission as proportional to $\frac{I_b [T(\r'),\omega] }{2}$ so that the total emitted power is proportional to $(\eta^s+\eta^p)\frac{I_b [T(\r'),\omega] }{2}$.
\section{Absorption}
To proceed, we consider that the incident electric field is generated by a point-like dipole located at $\mathbf{r}$ in the far field with an $l$-polarized dipole moment $p_{inc}\mathbf{e}^{(l)}$. The $m$-component of the field $\mathbf{E}^{(l)}(\r')$ generated at $\mathbf{r}'$ is then given by:
\begin{equation}
E ^{(l)}_m (\r')= G_{mn}(\r',\r,\omega) e^{(l)}_n p_{inc} \mu _0 \omega ^2,
\label{eq:green_tensor_def}
\end{equation}
and the amplitude of the incident field produced by the $l-$polarized electric dipole at point $\mathbf{r}$ is given by:
\begin{equation}
E_{inc}^{(l)}=\frac{\exp(ikr)}{4\pi r}\mu_0 \omega^2 p_{inc}.
\label{eq:E-p}
\end{equation}
It follows that:
\begin{equation}
E ^{(l)}_m (\r')= G_{mn}(\r',\r,\omega) e^{(l)}_n 4\pi r E_{inc}^{(l)} \exp(-ikr)
\label{eq:E_l}
\end{equation}
Besides, we can replace the Green tensor using the reciprocity theorem:
\begin{equation}
G_{mn}(\r',\r,\omega)=G_{nm}(\r,\r',\omega),
\label{eq:thm_recipro}
\end{equation}
to obtain:
\begin{equation}
P_{abs}^{(l)}= \int _V \mathrm{Im}[\epsilon(\mathbf{r'},\omega)] \frac{\omega \epsilon _0}{2} \vert e^{(l)}_n G_{nm}(\r,\r',\omega) 4\pi r E_{inc} \vert ^2 d^3\mathbf{r'}.
\label{eq:absorbed_power_general2}
\end{equation}
We finally insert the already used
asymptotic evaluation of the Green tensor (\ref{eq:eq_asymp}) to get:
\begin{equation}
P_{abs}^{(l)}= \frac{\epsilon _0 c}{2} \vert E_{inc} \vert ^2
\int _V d^3\mathbf{r'} \alpha^{(l)} (\mathbf{u},\r',\omega)
\label{eq:absorbed_power}
\end{equation}
where we have defined a polarized directional absorption cross section density $\alpha^{(l)} (\mathbf{u},\r',\omega)$:
\begin{equation}
\alpha^{(l)} (\mathbf{u},\r',\omega) = 4 k^3 \mathrm{Im}[\epsilon(\mathbf{r'},\omega)] \cos^2 \theta
\vert e^{(l)}_n G_{nm}(\mathbf{k}_{\parallel},z=0,\r',\omega) \vert ^2
\label{eq:absorptivity1}
\end{equation}
Upon inspection, we see that $$\eta^{(l)} (\mathbf{u},\r',\omega)=\alpha^{(l)} (\mathbf{u},\r',\omega),$$
which is the generalized form of the Kirchhoff law.
\section{Anisotropic media}
In this section, we extend the derivation to the case of anisotropic media. It is known that some anisotropic materials may display unusual optical properties associated with hyperbolic dispersion relations. It has been suggested that these features may have implications for heat transfer \cite{Nefedov, Maslovski}. Here, we stress that there are no consequences for the validity of the generalized Kirchhoff law provided that the system is composed of materials with symmetric permittivity tensors $\epsilon_{nm}=\epsilon_{mn}$ as required by reciprocity. The form of the local absorption/emission rate is different, but the absorption rate and the emission rate are still equal. Using the relevant form of the fluctuation-dissipation theorem:
\begin{eqnarray}
&\langle j_n(\r,\omega)j_m(\r',\omega')\rangle =2\pi\delta(\omega+\omega')\delta(\r-\r') \nonumber \\
&2\omega\epsilon_0\mathrm{Im}[\epsilon_{nm}(\r',\omega)]\Theta[T(\r'),\omega],
\label{eq:FD2}
\end{eqnarray}
we find
\begin{eqnarray}
&\eta^{(l)} (\mathbf{u},\r',\omega)=\alpha^{(l)} (\mathbf{u},\r',\omega) = 4 k^3 \cos^2 \theta \,\mathrm{Im}[\epsilon_{pq}(\mathbf{r'},\omega)] \nonumber \\
&e^{(l)}_n G_{np}(\mathbf{k}_{\parallel},0,\r',\omega) e^{(l)}_m G_{mq}(\mathbf{k}_{\parallel},0,\r',\omega).
\label{eq:absorptivity2}
\end{eqnarray}
\section{Emission by anisothermal systems}
The generalized Kirchhoff law introduced above allows a direct calculation of the power emitted by anisothermal systems. An example is the Earth's infrared radiation due to anisothermal soils \cite{Kong,Bardati}. Another example, at a very different length scale, is a graphene film deposited on a substrate \cite{Avouris,Hone}. When the current density in graphene is very large, the electronic temperature can be increased above 1000 K while the substrate remains at lower temperatures. A similar example is the radiation produced by hot electrons in a metallic tip or in a quantum well \cite{Dumas, Bouhelier,Sirtori}. These examples are interesting as they introduce another feature: it is possible to define a temperature for the electrons and a temperature for the lattice. This is the so-called two-temperature model. In that case, we have two different temperatures at the same point but for different subsystems. It is possible to include this feature in the model by using the following form of the power spectral density of the current:
\begin{eqnarray}
&\langle j_n(\r,\omega)j_m(\r',\omega')\rangle = 2\pi\delta(\omega+\omega')\delta(\r-\r')2\omega\epsilon_0 \nonumber \\
& \{\mathrm{Im}[\epsilon_{el}(\r',\omega)]\Theta[T_{el}(\r'),\omega]+\mathrm{Im}[\epsilon_{la}(\r',\omega)]\Theta[T_{la}(\r'),\omega]\},\nonumber \\
\label{FD_2}
\end{eqnarray}
where $\epsilon_{el}$ ($\epsilon_{la}$) denote the electron (lattice) contribution to the permittivity and $T_{el}$ ($T_{la}$) denote the electron (lattice) temperature.
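To illustrate the two-temperature weighting (with toy values of our own choosing, not material data), one can compare the two contributions in Eq.~(\ref{FD_2}) directly:

```python
import math

HBAR, KB = 1.054571817e-34, 1.380649e-23

def theta(T, omega):
    # mean oscillator energy: Theta(T, omega) = hbar*omega / (exp(hbar*omega/kT) - 1)
    return HBAR * omega / math.expm1(HBAR * omega / (KB * T))

def two_temperature_weight(im_eps_el, T_el, im_eps_la, T_la, omega):
    """Spectral weight entering Eq. (FD_2):
    Im[eps_el]*Theta(T_el, omega) + Im[eps_la]*Theta(T_la, omega).
    The Im[eps] inputs are hypothetical placeholders."""
    return im_eps_el * theta(T_el, omega) + im_eps_la * theta(T_la, omega)
```

With hot electrons ($T_{el}=2000$ K) and a cold lattice ($T_{la}=300$ K), the electronic term dominates at frequencies where $\Theta(T_{la},\omega)$ is exponentially suppressed, consistent with the hot-electron emission observations cited above.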
In summary, it is seen that if one can assign to each type of excitation (electrons, excitons, phonons) both a temperature and a contribution to the imaginary part of the permittivity, it is possible to use the generalized Kirchhoff law to account for thermal radiation by non-equilibrium systems.\\
\section{Concluding remarks}
In summary, we have derived a generalized Kirchhoff law valid for any finite size object. When we apply it to isothermal objects, we find that the emission is always equal to the product of the blackbody radiance and the absorption cross section. Optimizing the emission is thus equivalent to optimizing the absorption cross-section. We note that the usual Kirchhoff law, i.e. the equality between the absorptivity and emissivity of an interface, appears as a particular case of the result derived here when we consider planar and isothermal objects much larger than the wavelength. The generalized Kirchhoff law can be used for anisothermal objects including the case where different excitations are at different temperatures.
Let us now explore the implications of the generalized Kirchhoff law for harnessing thermal radiation. It has been known for applications such as bolometers that absorption by a small volume of absorbing material can be increased using antennas \cite{Boreman}. Indeed, the antenna can capture more efficiently the incident power and funnel it into the absorber volume. This absorbed power in the presence of the antenna is then proportional to an effective absorption cross section denoted $\sigma_{ant}$. In addition, the antenna can be directional and frequency selective \cite{Boreman}. It follows from the generalized Kirchhoff law that if the absorption in the absorber volume is enhanced then its thermal emission is enhanced. The total emitted power can be increased by the same factor $\sigma_{ant}/\sigma_{abs}$ which can be larger than two orders of magnitude. Furthermore, the emission can be directional and frequency selective. We anticipate that a metasurface consisting of a periodic array of subwavelength hot objects connected to antennas could be optimized to behave as a blackbody antenna with unity emissivity while using only a very reduced amount of hot material. This approach paves the way to a new class of THz and IR thermal emitters with a controlled emission direction, spectrum and polarization.
\acknowledgements
We acknowledge the support of the Agence Nationale de la Recherche through the grant ANR-14-CE26-0023-03 and the support of the ONERA through the project SONS. This work was supported by the US Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering. It was performed, in part, at the Center for Integrated Nanotechnologies, an Office of Science User Facility operated for the US Department of Energy (DOE) Office of Science. JJG is a senior member of the Institut Universitaire de France.\\
It seems Peter Molyneux is going to stay true to himself, despite earlier signs to the contrary. Fable 2 has now been officially hyped by the master of hype himself.
In his keynote speech at GDC07, Molyneux said that the game will feature three important new main elements. One of them, it was revealed, is love, but the Lionhead boss refused to give away the others just yet. Love is represented in three ways: NPCs will respect their hero more if their affection level is high; women can fall in love and get pregnant; and the player's new pet dog will act differently depending on affection.
The dog itself will, according to Molyneux, be controlled by highly advanced artificial intelligence, so the player is not forced to spend much time commanding it. There will be no map or any other on-screen interface, so the player will at times have to trust the dog's superior instincts. Man's best friend will also attack enemies within range if the player unsheathes his weapon. Molyneux also says that the game may sense whether the player likes his dog, which may lead to some sinister side plots in which the dog is in peril.
The player can also purchase any and all property in the game – from huts and barns to castles and ordinary houses. The regions in Fable 2 will also evolve according to the player's actions.
Whether this all will work out well in practice remains to be seen, as Molyneux is known for being a little on the "all talk, no action" side.
Bubanja () is a Serbo-Croatian surname, derived from bubanj, meaning "drum". It may refer to:
Vladimir Bubanja (born 1989), Serbian footballer
Davor Bubanja (born 1987), Slovenian footballer
See also
Bubanj, settlement
Serbian surnames
Wishful Traveling: 10 Luckiest Places on Earth
By Briana Seftel
Do you believe in luck? Even if you're a skeptic, you can't help but be amused by the rituals and traditions around the world that people follow for good fortune. Join in on the fun and rub, kiss or toss a coin at these luckiest places on earth. Who knows, it may just work...
Trevi Fountain • Rome, Italy
Tossing a coin in the Trevi Fountain is as Italian as pizza and pasta! When in Rome, toss a coin into this beautiful Baroque fountain in the Trevi district to ensure your return to the Eternal City. As the legend goes, you should throw the coin using your right hand over your left shoulder. The tradition dates back to ancient Rome when travelers threw coins in water and prayed to the gods to help them return home safely. You can throw in a second coin if you're looking for love, and a third for marriage!
Point Zero • Paris, France
Located just outside of the Notre Dame Cathedral, Point Zero is a small geographic marker that's regarded as the center of Paris. This easily missable marker is also a popular spot for local rituals. Some spin in a circle on one foot atop the marker to fall in love, while others kiss a loved one above the plate to ensure an eternal devotion. You might also see a few coins scattered on top; make a wish and see if it comes true!
Blarney Castle • Blarney, Ireland
One of the most famous castles in Ireland, Blarney Castle was built nearly 600 years ago by Cormac MacCarthy, one of Ireland's greatest chieftains. While the sprawling castle grounds are a joy to explore, the real draw here is the infamous Blarney Stone. For over 200 years, people have flocked to County Cork to kiss the Blarney Stone, said to grant the gift of eloquence. While the origins of the Blarney Stone are unclear, kissing the stone is one of the most popular tourist attractions in all of Ireland. Climb the steps to the top of the tower and pucker up!
Statue of St. John of Nepomuk • Prague, Czech Republic
Crossing the Vltava river in Prague, Charles Bridge is one of the city's most iconic sights - but did you know it's home to a famous legend? The bridge is decorated by some 30 statues including the statue of St. John of Nepomuk. According to the legend on the base of the statue, he was thrown off the bridge in 1393 by King Wenceslas IV for keeping the queen's confessions a secret (he was her priest). Touching the statue is said to bring luck and a return to Prague. A few steps from the actual statue is a small golden cross marking the spot where the saint's body was thrown into the Vltava river. If you touch the cross and make a wish, it will come true within a year and one day!
Hagia Sophia Wishing Column • Istanbul, Turkey
One of Istanbul's most beautiful and sacred structures, the Hagia Sophia is full of fascinating history. Originally built nearly 1,500 years ago as a Christian basilica, it was converted into a mosque by the Ottoman Empire in the 1400s. Today, it is a public museum attracting millions of visitors per year for its stunning golden mosaics, minarets and central dome. One of the most famous features of the Hagia Sophia is the Wishing (Weeping) Column. According to one tale, Emperor Justinian was wandering through the building with a severe headache and leaned his head against the column, only to later realize his headache disappeared. Legend says if you stick your thumb in the hole of the column and it emerges moist, your wish will be granted.
The Intihuatana Stone • Machu Picchu, Peru
A bucket list destination, Machu Picchu is unlike any other place on earth. Getting to the Lost City of the Incas is an adventure in and of itself, but the journey is well worth it. One landmark you shouldn't miss is the Intihuatana Stone, a ritual stone used by the Incas. Pointed directly at the sun, some believe the stone was used to tether the sun along its annual path in the sky. While the site was roped off some years ago, just being in the presence of the stone is said to give off positive energy. Make your first stop here to soak in the good vibes, then continue your exploration of this incredible ancient site.
Statue of Juliet • Verona, Italy
Verona is known as the setting for Shakespeare's Romeo and Juliet, so it comes as no surprise that the Italian city has several landmarks commemorating the iconic play. One of the most famous landmarks is the bronze statue of Juliet located outside Juliet's house (Casa di Giulietta). The statue has been a subject of controversy for years: tourists rub her left breast for good luck. Too obscene for your taste? Stick up a love note on the wall and take a photo under the famous balcony.
Laughing Buddha • Hangzhou, China
Lingyin Temple in Hangzhou is one of the oldest and most important ancient Buddhist temples in China. A spiritual place for many, the temple is home to a handful of legends including that of the Laughing Buddha. Located in the center of the Hall of the Heavenly King, the statue of Maitreya (a.k.a the Laughing Buddha) is said to be the origin of the tradition of rubbing the Buddha's belly for good luck. Go ahead and give his belly a good 'ole pat!
Il Porcellino • Florence, Italy
If you go to Florence, ask for the pig! On the south side of the Mercato Nuovo is a bronze fountain of a wild boar affectionately known as Il Porcellino ("the piglet"). Originally sculpted in 1634 by master sculptor Pietro Tacca, the statue is believed to be a representation of the mythical Calydonian Boar. Placing a coin in his mouth is said to grant good fortune, while rubbing his shiny golden snout is said to ensure your trip back to Florence! The tradition has become so popular that there are several replica statues in dozens of countries around the world.
Schöner Brunnen • Nuremberg, Germany
If you find yourself in the Bavarian city of Nuremberg, it's hard to miss the spectacular 14th-century Schöner Brunnen in the main market square. Literally translating to "Beautiful Fountain," it is composed of four rows of 40 stone figures representing the worldview of the Holy Roman Empire. Surrounding the fountain is a wrought iron gate embedded with two brass rings on opposite sides. By turning the rings, you will be blessed with good luck and your wish will come true. Who needs German fairy tales anyway?
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 8,640
|
Environmental damage, together with climate change, is driving the water-related crises we see around the world. Floods, drought and water pollution are all made worse by degraded vegetation, soil, rivers and lakes. When we neglect our ecosystems, we make it harder to provide everyone with the clean water we need to survive and thrive.
Nature-based solutions have the potential to solve many of our water challenges. We need to do so much more with 'green' infrastructure and harmonize it with 'grey' infrastructure wherever possible. Planting new forests, reconnecting rivers to floodplains, and restoring wetlands will rebalance the water cycle and improve human health and livelihoods.
World Water Day is a global day of action and a chance to come together to remind the world of the one in nine people who don't have clean water close to home... yet.
Vice President M. Venkaiah Naidu called on people to reduce, reuse and recycle water on World Water Day. The day, observed every year on March 22, focuses attention on the importance of water and the need to preserve it.
Vice President Naidu joined several others on social media stressing the importance of water conservation.
"On world water day, Reduce, reuse, and recycle water must be our watchwords if we have to handover a liveable planet to the future generations,"
Prime Minister Narendra Modi also took to Twitter on World Water Day, saying the day is an occasion to highlight the importance of "Jal Shakti".
To mark World Water Day 2018, here are some mind-blowing and alarming facts.
According to the U.N., 2.1 billion people don't have safe drinking water at home.
And 159 million people still drink untreated water from surface water sources, such as streams or lakes, a serious health risk.
There are 663 million people who live without a safe water supply close to home.
About 71% of the Earth's surface is water-covered, according to The United States Geological Survey Water Science School.
Oceans hold around 97% of all Earth's water, which means all but about 3% of our water is saline.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,333
|
Franchi may refer to:
People
Alberto Herrera Franchi (1874–1954), Cuban general and provisional President of Cuba
Aldo Franchi (born 1882), Italian race car driver
Alessandro Franchi (cardinal) (1819–1878), Italian cardinal and archbishop
Alessandro Franchi (painter) (1838–1914), Italian painter
Andrea Franchi (1335–1401), Italian Roman Catholic member of the Order of Preachers and Bishop of Pistoia
Anna Franchi (1867–1954), Italian novelist, translator, playwright and journalist
Antonio Franchi (1638–1709), Italian painter of the 17th Century
Antonio Franchi (cyclist) (1936–2019), Italian racing cyclist
Artemio Franchi (1922–1983), President of the Italian Football Federation
Ausonio Franchi (1821–1895), Italian philosopher
Carlo Franchi (composer) (1743–1779), Italian composer
Carlo Franchi (1938–2021), Italian racing driver, known as Gimax
Dany Franchi (born 1990), Italian blues guitarist, singer and songwriter
Dorothea Anne Franchi (1920–2003), New Zealand pianist, harpist, music educator and composer
Elena Franchi (born 1996), Italian professional racing cyclist
Franco Franchi (cyclist) (born 1923), Italian racing cyclist
Franco Franchi (1928–1992), Italian comedian
Garry Franchi (born 1983), French professional football player
Giovannina Franchi (1807–1872), Italian Roman Catholic professed religious
Giuseppe Franchi (1731–1806), Italian Neoclassical sculptor
Gregory "Greg" Franchi (born 1982), Belgian racing driver
Jan Martínez Franchi (born 1998), Argentine volleyball player
John Franchi (born 1982), retired American mixed martial artist
Lorenzo Franchi (c. 1563–c. 1630), Italian painter
Morena Franchi (born 1993), Argentine female volleyball player
Rossello di Jacopo Franchi (c. 1377–c. 1456), Italian painter
Rudy Franchi (born 1939), American writer and editor
Sergio Franchi (1926–1990), Italian-American tenor and actor
Buildings
Stadio Artemio Franchi, a football stadium in Florence, Italy
Stadio Artemio Franchi – Montepaschi Arena, a football stadium in Siena, Italy
Firearms
Franchi SPAS-12, a combat shotgun
Franchi SPAS-15, a dual-mode 12 gauge combat shotgun
Franchi AL-48, a semi-automatic shotgun
Franchi LF-57, a pressed-metal submachine gun
Other uses
Artemio Franchi Trophy, an international football competition
Franchi (firearms), an Italian firearms company
See also
Saint-Franchy, Nièvre department, France
Italian-language surnames
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 4,793
|
Q: Micro-kernel architecture based operating system for desktop users? Can we have an operating system with micro-kernel architecture targeted at desktop users? I have read here on this website that older microkernels could be 50% slower than monolithic kernels, while later versions like L4 were only 2% or 4% slower. The L4 kernel is very famous for its performance.
Why don't we have an operating system based on micro-kernel architecture targeted at desktop users? Can we have such operating systems in the future?
A: Existing OSes like Windows and Linux have a huge ecosystem, so most resources go into them. However, there are projects building microkernel-based systems, and they see heavy use in special cases. It takes time and work to create a general-purpose OS, and the practical benefit is quite low for most users, as they won't ever notice which OS kernel they are actually using.
You may give this a try: https://genode.org
Or read this: https://de.wikipedia.org/wiki/Google_Fuchsia
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 6,125
|
Cavusoglu, Timmermans talk EU-Turkey deals
Foreign minister and European Commission VP exchange views over the phone on visa-free travel for Turkish citizens in Schengen
Tuesday, 24 May 2016, 10:24
Foreign Minister Mevlut Cavusoglu discussed the issue of visa-free travel for Turkish citizens within the EU Schengen borderless zone with European Commission Vice President Frans Timmermans on the phone late Monday.
According to Turkish diplomatic sources, who spoke on condition of anonymity due to restrictions on talking to the media, the two officials also spoke by telephone about how to implement the EU-Turkey deal on refugees.
The EU-Turkey deal aims to discourage irregular migration through the Aegean Sea by taking stricter measures against human smugglers and improving the conditions of Syrian refugees in Turkey.
It also stipulated the acceleration of Turkey's EU membership bid and visa-free travel for Turkish nationals within the Schengen area, on the condition that Ankara met 72 requirements set by the EU.
Ankara has met most of the requirements, but the EU's demands for change in Turkey's anti-terrorism law have led to a pause in negotiations.
Meanwhile, Cavusoglu and Timmermans also agreed on meeting in Antalya or in Ankara between May 27-29, within the scope of "The Least Developed Countries Action Plan Revision Meeting".
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 487
|
"""
Represents a Network ACL
"""
from boto.ec2.ec2object import TaggedEC2Object
from boto.resultset import ResultSet
class Icmp(object):
"""
Defines the ICMP code and type.
"""
def __init__(self, connection=None):
self.code = None
self.type = None
def __repr__(self):
        return 'Icmp:(code:%s, type:%s)' % (self.code, self.type)
def startElement(self, name, attrs, connection):
pass
def endElement(self, name, value, connection):
if name == 'code':
self.code = value
elif name == 'type':
self.type = value
class NetworkAcl(TaggedEC2Object):
def __init__(self, connection=None):
super(NetworkAcl, self).__init__(connection)
self.id = None
self.vpc_id = None
self.network_acl_entries = []
self.associations = []
def __repr__(self):
return 'NetworkAcl:%s' % self.id
def startElement(self, name, attrs, connection):
result = super(NetworkAcl, self).startElement(name, attrs, connection)
if result is not None:
# Parent found an interested element, just return it
return result
if name == 'entrySet':
self.network_acl_entries = ResultSet([('item', NetworkAclEntry)])
return self.network_acl_entries
elif name == 'associationSet':
self.associations = ResultSet([('item', NetworkAclAssociation)])
return self.associations
else:
return None
def endElement(self, name, value, connection):
if name == 'networkAclId':
self.id = value
elif name == 'vpcId':
self.vpc_id = value
else:
setattr(self, name, value)
class NetworkAclEntry(object):
def __init__(self, connection=None):
self.rule_number = None
self.protocol = None
self.rule_action = None
self.egress = None
self.cidr_block = None
self.port_range = PortRange()
self.icmp = Icmp()
def __repr__(self):
return 'Acl:%s' % self.rule_number
def startElement(self, name, attrs, connection):
if name == 'portRange':
return self.port_range
elif name == 'icmpTypeCode':
return self.icmp
else:
return None
def endElement(self, name, value, connection):
if name == 'cidrBlock':
self.cidr_block = value
elif name == 'egress':
self.egress = value
elif name == 'protocol':
self.protocol = value
elif name == 'ruleAction':
self.rule_action = value
elif name == 'ruleNumber':
self.rule_number = value
class NetworkAclAssociation(object):
def __init__(self, connection=None):
self.id = None
self.subnet_id = None
self.network_acl_id = None
def __repr__(self):
return 'NetworkAclAssociation:%s' % self.id
def startElement(self, name, attrs, connection):
return None
def endElement(self, name, value, connection):
if name == 'networkAclAssociationId':
self.id = value
elif name == 'networkAclId':
            self.network_acl_id = value
elif name == 'subnetId':
self.subnet_id = value
class PortRange(object):
"""
Define the port range for the ACL entry if it is tcp / udp
"""
def __init__(self, connection=None):
self.from_port = None
self.to_port = None
def __repr__(self):
return 'PortRange:(%s-%s)' % ( self.from_port, self.to_port)
def startElement(self, name, attrs, connection):
pass
def endElement(self, name, value, connection):
if name == 'from':
self.from_port = value
elif name == 'to':
self.to_port = value
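These classes are not called directly; boto's SAX XML handler walks the API response and fires `startElement`/`endElement` on the current object as tags are parsed. A minimal self-contained sketch of that flow, duplicating `PortRange`'s logic so it runs without boto installed (the event list is made up for illustration):

```python
# Stand-in mirroring the PortRange class above; the parser calls
# endElement once per closing tag, passing the tag name and its text.
class PortRange(object):
    def __init__(self, connection=None):
        self.from_port = None
        self.to_port = None

    def endElement(self, name, value, connection):
        if name == 'from':
            self.from_port = value
        elif name == 'to':
            self.to_port = value

# Simulated SAX events for <portRange><from>80</from><to>443</to></portRange>
pr = PortRange()
for tag, text in [('from', '80'), ('to', '443')]:
    pr.endElement(tag, text, None)

print(pr.from_port, pr.to_port)  # 80 443
```

Note that values arrive as strings; boto does not coerce them to integers here.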
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 5,277
|
public class RemoveDuplicatesfromSortedList {
public class ListNode {
int val;
ListNode next;
ListNode(int x) {
val = x;
}
}
public static void main(String[] args) {
// TODO Auto-generated method stub
}
    // my solution
// public ListNode deleteDuplicates(ListNode head) {
// ListNode dummyHead = new ListNode(0);
// ListNode curr = dummyHead;
// ListNode temp = head;
// while (temp != null) {
//
// if (temp.next == null || temp.val != temp.next.val) {
// curr.next = temp;
// curr = curr.next;
// }
// temp = temp.next;
// }
// return dummyHead.next;
// }
    // official solution
public ListNode deleteDuplicates(ListNode head){
ListNode current = head;
while (current != null && current.next !=null) {
if (current.val == current.next.val) {
current.next = current.next.next;
}else {
current = current.next;
}
}
return head;
}
}
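The official solution's pointer logic ports directly to other languages; here is a sketch in Python (my own port, handy for quick testing outside a Java project):

```python
class ListNode:
    def __init__(self, val, nxt=None):
        self.val = val
        self.next = nxt

def delete_duplicates(head):
    # Same in-place logic as the Java version: splice out the next node
    # whenever it repeats the current value, otherwise advance.
    current = head
    while current is not None and current.next is not None:
        if current.val == current.next.val:
            current.next = current.next.next
        else:
            current = current.next
    return head

# 1 -> 1 -> 2 -> 3 -> 3  becomes  1 -> 2 -> 3
head = ListNode(1, ListNode(1, ListNode(2, ListNode(3, ListNode(3)))))
node = delete_duplicates(head)
out = []
while node is not None:
    out.append(node.val)
    node = node.next
print(out)  # [1, 2, 3]
```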
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,973
|
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta http-equiv="X-UA-Compatible" content="IE=edge">
<meta name="viewport" content="width=device-width, initial-scale=1">
<meta name="description" content="">
<meta name="author" content="">
<title>George Goodman Productions</title>
<!-- Bootstrap Core CSS - Uses Bootswatch Flatly Theme: http://bootswatch.com/flatly/ -->
<link href="css/bootstrap.min.css" rel="stylesheet">
<!-- Custom CSS -->
<link href="css/freelancer.css" rel="stylesheet">
<!-- Custom Fonts -->
<link href="font-awesome/css/font-awesome.min.css" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Montserrat:400,700" rel="stylesheet" type="text/css">
<link href="http://fonts.googleapis.com/css?family=Lato:400,700,400italic,700italic" rel="stylesheet" type="text/css">
<!-- HTML5 Shim and Respond.js IE8 support of HTML5 elements and media queries -->
<!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
<!--[if lt IE 9]>
<script src="https://oss.maxcdn.com/libs/html5shiv/3.7.0/html5shiv.js"></script>
<script src="https://oss.maxcdn.com/libs/respond.js/1.4.2/respond.min.js"></script>
<![endif]-->
</head>
<body id="page-top" class="index">
<!-- Navigation -->
<nav class="navbar navbar-default navbar-fixed-top">
<div class="container">
<!-- Brand and toggle get grouped for better mobile display -->
<div class="navbar-header page-scroll">
<button type="button" class="navbar-toggle" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
<span class="sr-only">Toggle navigation</span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
<span class="icon-bar"></span>
</button>
<a class="navbar-brand" href="#page-top">George Goodman Productions</a>
</div>
<!-- Collect the nav links, forms, and other content for toggling -->
<div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
<ul class="nav navbar-nav navbar-right">
<li class="hidden">
<a href="#page-top"></a>
</li>
<li class="page-scroll">
<a href="#portfolio">Portfolio</a>
</li>
<li class="page-scroll">
<a href="#about">About</a>
</li>
<li class="page-scroll">
<a href="#contact">Contact</a>
</li>
</ul>
</div>
<!-- /.navbar-collapse -->
</div>
<!-- /.container-fluid -->
</nav>
<!-- Header -->
<header style= "background-image:url(img/bannerimage.png);">
<div class="container">
<div class="row">
<div class="intro-text">
<span class="name">Animation & Video <br> Production Services</span></div>
<div style="display:block; width:auto; height:auto; left-margin:auto; right-margin:auto;">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_xnrjst0wl0 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/play2.png" alt="Demo Reel"></a>
</div>
</div>
</div>
</header>
<!-- Portfolio Grid Section -->
<section id="portfolio" style="background-color:#e6e6e6;">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center" style="margin-bottom:5%;">
<h2>Portfolio</h2>
</div>
</div>
<div class="row">
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_qwnnuk8slg popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/pagemodoposts.png" alt="Pagemodo Posts">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_9x53cj1xu4 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/backby8.png" alt="Back By 8">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_593o61en88 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/ustravel.png" alt="US Travel Association">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_5oqidrdi5m popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/stats.png" alt="Webs Stats">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_du8iliztfz popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/difyanim.png" alt="Website Design Services">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_qhasileco2 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/coverphotos.png" alt="Pagemodo Cover Photos">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_ulwd0a6j5a popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/locallistings.png" alt="Local Listings">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_jt85frh4yj popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/tower.png" alt="Vistaprint Website Builder">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_aml44gdohd popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/dify.png" alt="Vistaprint Design Services Testimonial">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_kj0jff4ys3 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/scoutmob.png" alt="Scoutmob Sales Video">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_axjhilboaw popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/intown.png" alt="Intown Bicycles Tutorial">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_b1d14hyp5v popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/exercise.png" alt="YMCA Exercise Classes">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_yauoaozvy1 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/ybear.png" alt="Ybear Tennis Commercial">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_kgi3ozo1b4 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/andrewapp.png" alt="Andrew's KPBC Application">
</a></span>
</div>
<div class="col-sm-4 portfolio-item">
<script charset="ISO-8859-1" src="//fast.wistia.com/assets/external/E-v1.js" async></script><span class="wistia_embed wistia_async_qxy3a718x4 popover=true popoverContent=link" style="display:inline">
<a href="#"><img class="img-responsive" src="img/thumbs/webslocal.png" alt="Webs Local Listings">
</a></span>
</div>
</div>
</div>
</section>
<!-- About Section -->
<section class="success" id="about">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2>About</h2>
</div>
</div>
<div class="row" style="width:90%; margin-left:auto; margin-right:auto; padding-top:2%; text-align:center;">
<p>Hi, I'm George, your turnkey video production solution! My years of experience in animation and live video production as well as my role as a product manager at a tech company helps me focus on creating KPI driven videos that deliver solutions. While I pride myself on handling the details so you don't have to, you can rest assured that I'll always have my eyes on the big picture as well. Feel free to reach out to me with any questions!</p>
</div>
<div class="col-lg-8 col-lg-offset-2 text-center"
</div>
</div>
</div>
</section>
<!-- Contact Section -->
<section id="contact">
<div class="container">
<div class="row">
<div class="col-lg-12 text-center">
<h2 style="padding-bottom:2%;">Contact Me</h2>
</div>
</div>
<div class="row" style="width:90%; margin-left:auto; margin-right:auto; padding-top:2%; text-align:center;">
<p class="skills">E-mail: gfgood@gmail.com <br> Phone: (404) 438-2833</p>
</div>
</div>
</section>
<!-- Footer -->
<footer class="text-center">
<div class="footer-below">
<div class="container">
<div class="row">
<div class="col-lg-12">
Copyright © George Goodman Productions 2016
</div>
</div>
</div>
</div>
</footer>
<!-- jQuery -->
<script src="js/jquery.js"></script>
<!-- Bootstrap Core JavaScript -->
<script src="js/bootstrap.min.js"></script>
<!-- Plugin JavaScript -->
<script src="http://cdnjs.cloudflare.com/ajax/libs/jquery-easing/1.3/jquery.easing.min.js"></script>
<script src="js/classie.js"></script>
<script src="js/cbpAnimatedHeader.js"></script>
<!-- Contact Form JavaScript -->
<script src="js/jqBootstrapValidation.js"></script>
<script src="js/contact_me.js"></script>
<!-- Custom Theme JavaScript -->
<script src="js/freelancer.js"></script>
</body>
</html>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 3,148
|
Q: How not to lose Date information when passing dates through a varchar query in PL/SQL? I am writing a stored function that builds a query through a varchar, but I am losing the hours in my Date variable.
This is a working query that should give me the following records.
SELECT
RESERVATIONS.NUMERO,
RESERVATIONS.DATE_DEBUT_PRECIS,
RESERVATIONS.DATE_FIN_PRECIS
FROM RESERVATIONS, LIGNES_RESERVATIONS, OBJETS, CLIENTS
WHERE
LIGNES_RESERVATIONS.OBJ_NUMERO = 261 AND
LIGNES_RESERVATIONS.OBJ_SOCIETES_ID = 5 AND
LIGNES_RESERVATIONS.SOCIETES_ID = 5 AND
OBJETS.NUMERO = LIGNES_RESERVATIONS.OBJ_NUMERO AND
OBJETS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
OBJETS.SOCIETES_ID = 5 AND
RESERVATIONS.SOCIETES_ID = 5 AND
RESERVATIONS.DEMANDE = 0 AND
RESERVATIONS.ANNULER = 0 AND
LIGNES_RESERVATIONS.RES_NUMERO = RESERVATIONS.NUMERO AND
LIGNES_RESERVATIONS.RES_SOCIETES_ID = RESERVATIONS.SOCIETES_ID AND
CLIENTS.NUMERO = RESERVATIONS.CLI_NUMERO AND
CLIENTS.SOCIETES_ID = RESERVATIONS.CLI_SOCIETES_ID AND
CLIENTS.SOCIETES_ID = 5 AND
(TO_DATE('03.10.2022 23:00', 'dd.mm.YYYY hh24:mi') > RESERVATIONS.DATE_DEBUT_PRECIS AND TO_DATE('03.10.2022 07:00', 'dd.mm.YYYY hh24:mi') < RESERVATIONS.DATE_FIN_PRECIS)
NUMERO DATE_DEBUT DATE_FIN
94065 03.10.22 03.10.22
93995 03.10.22 03.10.22
The problem is that the given dates and times in the request are coming from a variable.
This is how I make the query in my function:
sql_stmt VARCHAR2(2000) := 'SELECT
RESERVATIONS.NUMERO,
RESERVATIONS.DATE_DEBUT_PRECIS,
RESERVATIONS.DATE_FIN_PRECIS
FROM RESERVATIONS, LIGNES_RESERVATIONS, OBJETS, CLIENTS
WHERE
LIGNES_RESERVATIONS.OBJ_NUMERO = '||P_OBJET||' AND
LIGNES_RESERVATIONS.OBJ_SOCIETES_ID = '||P_SOCIETE||' AND
LIGNES_RESERVATIONS.SOCIETES_ID = '||P_SOCIETE||' AND
OBJETS.NUMERO = LIGNES_RESERVATIONS.OBJ_NUMERO AND
OBJETS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
OBJETS.SOCIETES_ID = '||P_SOCIETE||' AND
RESERVATIONS.SOCIETES_ID = '||P_SOCIETE||' AND
RESERVATIONS.DEMANDE = 0 AND
RESERVATIONS.ANNULER = 0 AND
LIGNES_RESERVATIONS.RES_NUMERO = RESERVATIONS.NUMERO AND
LIGNES_RESERVATIONS.RES_SOCIETES_ID = RESERVATIONS.SOCIETES_ID AND
CLIENTS.NUMERO = RESERVATIONS.CLI_NUMERO AND
CLIENTS.SOCIETES_ID = RESERVATIONS.CLI_SOCIETES_ID AND
CLIENTS.SOCIETES_ID = '||P_SOCIETE||' AND
'|| P_DATE_FIN ||' > RESERVATIONS.DATE_DEBUT_PRECIS AND '|| P_DATE_DEBUT ||' < RESERVATIONS.DATE_FIN_PRECIS';
But then, my query looks like this
SELECT
RESERVATIONS.NUMERO,
RESERVATIONS.DATE_DEBUT_PRECIS,
RESERVATIONS.DATE_FIN_PRECIS
FROM RESERVATIONS, LIGNES_RESERVATIONS, OBJETS, CLIENTS
WHERE
LIGNES_RESERVATIONS.OBJ_NUMERO = 261 AND
LIGNES_RESERVATIONS.OBJ_SOCIETES_ID = 5 AND
LIGNES_RESERVATIONS.SOCIETES_ID = 5 AND
OBJETS.NUMERO = LIGNES_RESERVATIONS.OBJ_NUMERO AND
OBJETS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
OBJETS.SOCIETES_ID = 5 AND
RESERVATIONS.SOCIETES_ID = 5 AND
RESERVATIONS.DEMANDE = 0 AND
RESERVATIONS.ANNULER = 0 AND
LIGNES_RESERVATIONS.RES_NUMERO = RESERVATIONS.NUMERO AND
LIGNES_RESERVATIONS.RES_SOCIETES_ID = RESERVATIONS.SOCIETES_ID AND
CLIENTS.NUMERO = RESERVATIONS.CLI_NUMERO AND
CLIENTS.SOCIETES_ID = RESERVATIONS.CLI_SOCIETES_ID AND
CLIENTS.SOCIETES_ID = 5 AND
03.10.2022 > RESERVATIONS.DATE_DEBUT_PRECIS AND 03.10.2022 < RESERVATIONS.DATE_FIN_PRECIS
As we can see, there is no hour specification in the query, so I tried to force it in with "TO_CHAR(P_DATE_FIN, 'dd.mm.YYYY hh24:mi')". However, that didn't work and I couldn't get any results from my query, so I then tried to convert the value back into a Date inside the query with "TO_DATE(''' || TO_CHAR(P_DATE_FIN, 'dd.mm.YYYY hh24:mi')" (the TO_DATE function was supposed to be executed during the query), but it just crashed my database.
A: Can you try using DBMS_SQL to parse the query with typed bind variables:
DECLARE
lv_sql VARCHAR2(500);
l_objet VARCHAR2(200);
l_societe VARCHAR2(200);
l_dt_deb DATE;
l_dt_fin DATE;
l_numero VARCHAR2(200);
l_debut_precis DATE;
l_fin_precis DATE;
ln_cursor_id NUMBER;
ln_rows_processed NUMBER;
BEGIN
l_objet := p_objet;
l_societe := p_societe;
SELECT TO_DATE(P_DATE_FIN, 'DD.MM.YYYY HH24:MI'),TO_DATE(P_DATE_DEBUT, 'DD.MM.YYYY HH24:MI')
INTO l_dt_fin, l_dt_deb
FROM dual;
lv_sql:='SELECT RESERVATIONS.NUMERO,
RESERVATIONS.DATE_DEBUT_PRECIS,
RESERVATIONS.DATE_FIN_PRECIS
FROM RESERVATIONS, LIGNES_RESERVATIONS, OBJETS, CLIENTS
WHERE
LIGNES_RESERVATIONS.OBJ_NUMERO = :objet AND
LIGNES_RESERVATIONS.OBJ_SOCIETES_ID = :societe AND
LIGNES_RESERVATIONS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
OBJETS.NUMERO = LIGNES_RESERVATIONS.OBJ_NUMERO AND
OBJETS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
RESERVATIONS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
RESERVATIONS.DEMANDE = 0 AND
RESERVATIONS.ANNULER = 0 AND
LIGNES_RESERVATIONS.RES_NUMERO = RESERVATIONS.NUMERO AND
LIGNES_RESERVATIONS.RES_SOCIETES_ID = RESERVATIONS.SOCIETES_ID AND
CLIENTS.NUMERO = RESERVATIONS.CLI_NUMERO AND
CLIENTS.SOCIETES_ID = RESERVATIONS.CLI_SOCIETES_ID
AND :date_fin > RESERVATIONS.DATE_DEBUT_PRECIS AND :date_debut < RESERVATIONS.DATE_FIN_PRECIS';
ln_cursor_id := DBMS_SQL.OPEN_CURSOR;
DBMS_SQL.PARSE(ln_cursor_id, lv_sql, DBMS_SQL.NATIVE);
DBMS_SQL.BIND_VARIABLE(ln_cursor_id, 'objet', l_objet);
DBMS_SQL.BIND_VARIABLE(ln_cursor_id, 'societe', l_societe);
DBMS_SQL.BIND_VARIABLE(ln_cursor_id, 'date_fin', l_dt_fin);
DBMS_SQL.BIND_VARIABLE(ln_cursor_id, 'date_debut', l_dt_deb);
DBMS_SQL.DEFINE_COLUMN(ln_cursor_id, 1, l_numero, 200); -- VARCHAR2 columns need a max length
DBMS_SQL.DEFINE_COLUMN(ln_cursor_id,2,l_debut_precis);
DBMS_SQL.DEFINE_COLUMN(ln_cursor_id,3,l_fin_precis);
ln_rows_processed := DBMS_SQL.EXECUTE(ln_cursor_id);
LOOP
IF DBMS_SQL.FETCH_ROWS(ln_cursor_id)=0 THEN
EXIT;
ELSE
DBMS_SQL.COLUMN_VALUE(ln_cursor_id,1,l_numero);
DBMS_SQL.COLUMN_VALUE(ln_cursor_id,2,l_debut_precis);
DBMS_SQL.COLUMN_VALUE(ln_cursor_id,3,l_fin_precis);
DBMS_OUTPUT.put_line(l_numero ||'|'|| TO_CHAR(l_debut_precis) ||'|'|| TO_CHAR(l_fin_precis));
END IF;
END LOOP;
DBMS_SQL.CLOSE_CURSOR(ln_cursor_id);
END;
PS. I deleted the duplicate conditions from the query.
A: I just changed the nls_date_format
execute immediate 'alter session set nls_date_format=''dd.mm.YYYY hh24:mi''';
sql_stmt := 'SELECT
RESERVATIONS.NUMERO,
RESERVATIONS.DATE_DEBUT_PRECIS,
RESERVATIONS.DATE_FIN_PRECIS
FROM RESERVATIONS, LIGNES_RESERVATIONS, OBJETS, CLIENTS
WHERE
LIGNES_RESERVATIONS.OBJ_NUMERO = '||P_OBJET||' AND
LIGNES_RESERVATIONS.OBJ_SOCIETES_ID = '||P_SOCIETE||' AND
LIGNES_RESERVATIONS.SOCIETES_ID = '||P_SOCIETE||' AND
OBJETS.NUMERO = LIGNES_RESERVATIONS.OBJ_NUMERO AND
OBJETS.SOCIETES_ID = LIGNES_RESERVATIONS.OBJ_SOCIETES_ID AND
OBJETS.SOCIETES_ID = '||P_SOCIETE||' AND
RESERVATIONS.SOCIETES_ID = '||P_SOCIETE||' AND
RESERVATIONS.DEMANDE = 0 AND
RESERVATIONS.ANNULER = 0 AND
LIGNES_RESERVATIONS.RES_NUMERO = RESERVATIONS.NUMERO AND
LIGNES_RESERVATIONS.RES_SOCIETES_ID = RESERVATIONS.SOCIETES_ID AND
CLIENTS.NUMERO = RESERVATIONS.CLI_NUMERO AND
CLIENTS.SOCIETES_ID = RESERVATIONS.CLI_SOCIETES_ID AND
CLIENTS.SOCIETES_ID = '||P_SOCIETE||' AND
''' || P_DATE_FIN ||''' > RESERVATIONS.DATE_DEBUT_PRECIS AND ''' || TO_CHAR(P_DATE_DEBUT, 'dd.mm.YYYY hh24:mi') ||''' < RESERVATIONS.DATE_FIN_PRECIS';
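For completeness, here is a sketch of a third option that avoids both the nls_date_format change and the string concatenation entirely, by binding the values with OPEN ... FOR ... USING. It assumes P_DATE_DEBUT and P_DATE_FIN are real DATE parameters and P_OBJET/P_SOCIETE are numeric (the join is abbreviated to the relevant tables):
DECLARE
    l_cur    SYS_REFCURSOR;
    l_numero RESERVATIONS.NUMERO%TYPE;
    l_deb    DATE;
    l_fin    DATE;
BEGIN
    -- Bind variables: no concatenation, no NLS_DATE_FORMAT dependency.
    -- With OPEN ... FOR ... USING the binds are positional.
    OPEN l_cur FOR
        'SELECT R.NUMERO, R.DATE_DEBUT_PRECIS, R.DATE_FIN_PRECIS
           FROM RESERVATIONS R
           JOIN LIGNES_RESERVATIONS L
             ON L.RES_NUMERO = R.NUMERO
            AND L.RES_SOCIETES_ID = R.SOCIETES_ID
          WHERE L.OBJ_NUMERO = :objet
            AND L.OBJ_SOCIETES_ID = :societe
            AND R.SOCIETES_ID = :societe2
            AND R.DEMANDE = 0
            AND R.ANNULER = 0
            AND :date_fin > R.DATE_DEBUT_PRECIS
            AND :date_debut < R.DATE_FIN_PRECIS'
        USING P_OBJET, P_SOCIETE, P_SOCIETE, P_DATE_FIN, P_DATE_DEBUT;
    LOOP
        FETCH l_cur INTO l_numero, l_deb, l_fin;
        EXIT WHEN l_cur%NOTFOUND;
        DBMS_OUTPUT.put_line(l_numero || '|' ||
            TO_CHAR(l_deb, 'dd.mm.yyyy hh24:mi') || '|' ||
            TO_CHAR(l_fin, 'dd.mm.yyyy hh24:mi'));
    END LOOP;
    CLOSE l_cur;
END;
/
Since the dates are bound as DATE values, the hours and minutes travel with them regardless of the session's date format.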
\section{introduction}\label{introduction}
Entanglement is a basic
quantum communication resource which
can usefully be manipulated
to suit particular tasks \cite{BBPS,BBPSSW}.
In this paper we investigate the manipulation of a single
entangled mixed state comprising two separated
single qubit subsystems.
We consider two parties, Alice and Bob, who each control one
subsystem, and who are restricted to carrying out
local quantum operations and classical communication (LQCC).
Specifically the quantum operations Alice and Bob are allowed to
perform are local unitary transformations and local filtrations.
The restriction to local quantum operations ensures that
entanglement is indeed treated as a resource: if non-local quantum
operations were allowed, Alice and Bob could
create entanglement between them from initially non-entangled states.
The interest of this problem is that
since any real-world quantum communication channel will be imperfect,
even if Alice could create perfect maximally entangled
states, she would never be able to share such states with
Bob simply by sending one subsystem through the channel.
So it is natural to ask whether Alice and Bob can use LQCC to
obtain states with better entanglement from imperfectly
entangled states.
Various entanglement
purification protocols have been suggested.
If Alice and Bob share a number of copies of an
imperfectly entangled known pure state, they can
obtain a number of maximally entangled states by carrying out
operations on each state individually or by collective
operations on a number of shared states \cite{BBPS}.
The collective algorithm has a higher asymptotic
yield of maximally entangled states in the limit in which
the number of shared states tends to infinity.
Efficient collective algorithms which give a non-zero asymptotic
yield of maximally entangled states from entangled mixed states
have also been described \cite{BBPSSW}.
In practice, though, the number of states will always be finite,
and Alice and Bob will effectively share a single entangled state
of two subsystems whose state spaces are finite dimensional.
For this and other reasons --- for example, Alice and Bob might
actually have only one copy of an entangled state of some
simple system, or it may be technologically difficult to
implement collective operations ---
it is interesting to see what Alice and Bob can
achieve by gambling with the entanglement of a single
state. That is, we would like to know how far the entanglement
of single states could be increased by LQCC if
the outcomes of Alice and Bob's local measurements were
favourable. The Procrustean
algorithm of \cite{BBPS}
provides an answer to this question in the case of pure states.
Here we answer the
question for two qubit mixed states, and in the process
illustrate a general approach to the problem based on
identifying quantities invariant under LQCC.
Though most mixed state entanglement distillation protocols
discussed so far involve collective operations on many
states, it has been established that there exist
entangled mixed states for which single-state LQCC protocols can
increase entanglement \cite{G2,H3}.
Conversely, it is known that there exist entangled
mixed states, including the important case of the Werner
states, for which no single-state LQCC protocol can increase
entanglement \cite{LMP,K,H4}.
We give here a complete description of the effect of LQCC on
entanglement of a single copy of an arbitrary mixed state
$\rho$ of two qubits.
It has been shown previously that if Alice and Bob's local density
matrices are completely random, they cannot increase the Entanglement
of Formation (EOF) by LQCC\cite{LMP,K,H4}.
Here we show that if the local density
matrices are not random and if the EOF is non-vanishing, Alice and Bob
can always increase the EOF. Moreover we construct a procedure that
maximises the EOF of the final state. This procedure, which is unique
up to local unitary transformations, leaves Alice and Bob with
completely random local density matrices.
\section{Main results}
Throughout, the states considered are those of a single system
comprising two separate single qubit subsystems.
We use the following facts \cite{LMP}.
\begin{enumerate}
\item
The LQCC protocols we consider map the state $\rho$ to states of the form
\begin{equation}\label{image}
\rho' = \frac{ A \otimes B \rho A^{\dagger} \otimes B^{\dagger} }{
{\rm{Tr}} ( A \otimes B \rho A^{\dagger} \otimes B^{\dagger} ) } \ ,
\label{finalform}
\end{equation}
where $A$ and $B$ are arbitrary operators that act on Alice and Bob's
Hilbert space respectively.
The only condition they must obey is $A^\dagger A \leq I_2$, $B^\dagger
B \leq I_2$.
The protocol succeeds with
probability ${\rm{Tr}} ( A \otimes B \rho A^{\dagger} \otimes B^{\dagger} )
$.
We need not consider the most general local protocols in which the
final state consists of mixtures of states of the form
eq. (\ref{finalform}) since mixing decreases the EOF.
The operators $A$ and $B$ can be written as
\begin{equation}\label{one}
A \otimes B = U_A f^{\mu,a,{\bf m}} \otimes
U_B f^{\nu, b, {\bf n}} \, ,
\end{equation}
where $U_A , U_B$ are unitary and the filtrations $f$ are defined by
\begin{equation}
f^{\mu,a,{\bf m}} =
\mu (I_2 + a {\bf m}. {\bf \sigma} )\ \hbox{\rm and}\ f^{\nu,b,{\bf n}} =
\nu (I_2 + b {\bf n}. {\bf \sigma} )\, .
\label{ff}
\end{equation}
Here $\mu,\nu , a ,b $ are real numbers, $I_n$ denotes the identity
operator in $n$ dimensions, and the vector ${\bf \sigma} = \{ \sigma_1 ,
\sigma_2 , \sigma_3 \}$ has the Pauli matrices as components.
We can also write these operators as $A = U_A F_A U'_A $,
where $F_A$ takes the form $ \left(\matrix{ \alpha_1 & 0 \cr 0 &
\alpha_2 } \right)$
with the $\alpha_i$ real, $0\leq \alpha_i \leq 1$ and $U_A , U'_A$
unitary; similarly $B = U_B F_B U'_B$.
We can thus write any non-trivial LQCC (i.e. any LQCC which is not
the zero map) in the form
\begin{equation}\label{three}
\gamma U_A \left(\matrix{ 1 & 0 \cr 0 &
\alpha_A } \right) U'_A \otimes
U_B \left(\matrix{ 1 & 0 \cr 0 &
\alpha_B } \right) U'_B \, ,
\end{equation}
where $\gamma$ is a scale factor in the range $0 < \gamma \leq 1$
and $0 \leq \alpha_A , \alpha_B \leq 1$.
\item
The entanglement of formation (or EOF) of a pure state
$|\psi\rangle$ is defined as $E(\psi) = - Tr \rho_A \ln \rho_A =
- Tr \rho_B \ln \rho_B$ where $\rho_A= Tr_B |\psi\rangle\langle \psi
|$, $\rho_B= Tr_A |\psi\rangle\langle \psi
|$ are the local density matrices seen by Alice and Bob. For a mixed
state the EOF is defined as\cite{BDSW}:
$E(\rho) = \min \sum_i p_i E(\psi_i)$ where the minimum is taken over
all decompositions of $\rho$ into pure states $\rho = \sum_i p_i
|\psi_i\rangle \langle \psi_i |$.
In the case of a mixed state comprised of two single qubit subsystems,
Wootters\cite{W1} has given an explicit formula for $E(\rho)$,
verifying an earlier conjecture of Hill and Wootters\cite{HW1}.
Let $\tilde{\rho} = \sigma_2 \otimes \sigma_2
\rho^* \sigma_2 \otimes \sigma_2 $.
Call $\lambda_i$ the positive square roots of the eigenvalues
of the matrix $\rho \tilde{\rho }$ written in decreasing
order. Define the concurrence by
\begin{equation}
C ( \rho ) = \max ( 0, \lambda_1 - \lambda_2 - \lambda_3 - \lambda_4 )
\ .
\end{equation}
Then the EOF of $\rho$ is
\begin{equation}
E ( \rho ) = H ( \frac { 1 + \sqrt{ 1 - C^2 ( \rho ) }}{2} ) \, ,
\end{equation}
where
$H ( p ) = - p \log_2 p - (1-p) \log_2 (1-p)$.
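For example, for the singlet state $\ket{\psi^-} = \frac{1}{\sqrt{2}}
( \ket{01} - \ket{10} )$ one has $\tilde{\rho} = \rho$, so that
$\rho \tilde{\rho} = \rho$ and $\lambda_1 = 1$, $\lambda_2 = \lambda_3 =
\lambda_4 = 0$; thus $C = 1$ and $E = H ( \frac{1}{2} ) = 1$.
For a product state such as $\ket{00}$ one finds $\rho \tilde{\rho} = 0$,
so $C = 0$ and $E = 0$.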
\item
Consider a general density matrix $\rho$ of two qubits. It can be
written as
\begin{equation}
\rho = \frac{1}{4} ( I_4 + \alpha . \sigma \otimes I_2
+ I_2 \otimes \beta . \sigma + R_{ij} \sigma_i \otimes \sigma_j ) \ .
\label{abc}
\end{equation}
In \cite{LMP} it was shown that under LQCC of the form eq. (\ref{one})
the positive square roots of the eigenvalues
of the matrix $\rho \tilde{\rho }$ transform as
\begin{equation}
\lambda_i \to \lambda'_i = \frac { \mu^2 \nu^2 ( 1 - a^2 ) (1 - b^2 ) }{
t(\rho; \mu, a , {\bf m}; \nu, b , {\bf n} ) } \lambda_i
\label{lambda}
\end{equation}
where $t$ is the probability that the LQCC
succeeded
\begin{eqnarray}
&t&(\rho; \mu, a , {\bf m}; \nu, b , {\bf n}) =\nonumber\\
& &\quad \mu^2\nu^2\big[ (1 + a^2)(1+b^2) +
2 a(1+b^2) {\bf n} \cdot {\bf \alpha} + \nonumber\\
& &\qquad 2b(1+a^2) {\bf m} \cdot {\bf \beta}
+ 4 ab R_{ij}
n_i m_j\big].
\label{t}
\end{eqnarray}
Thus the concurrence also transforms as
\begin{equation}
C(\rho') = \frac { \mu^2 \nu^2 ( 1 - a^2 ) (1 - b^2 ) }{
t(\rho; \mu, a , {\bf m}; \nu, b , {\bf n} ) } C(\rho) \, .
\label{CCC}
\end{equation}
It follows from eq. (\ref{lambda}) that the ratios $\lambda_i /
\lambda_j$ are invariant under LQCC. We add here the necessary
qualification that the LQCC must be invertible.
\end{enumerate}
Now our argument runs as follows.
We consider states $\rho$ which have non-zero EOF and which are not
Bell diagonal (recall that a state is Bell diagonal if all its
eigenvectors are maximally entangled; equivalently it satisfies
$tr_A(\rho)= tr_B(\rho)= \frac{1}{2}
I_2$, ie. $\alpha=\beta=0$ in the expression
for $\rho$ given in eq. (\ref{abc})). We show in Theorem 1 that there
is
an LQCC protocol which increases
the EOF of $\rho$ with non-zero probability.
We show further in Theorem 3 that this process can be iterated to obtain
an
LQCC protocol which, with non-zero probability, maps $\rho$ to a
Bell diagonal state with maximal EOF.
In Theorem 4 we show that this is the unique optimal protocol
up to local unitary rotations.
\theorem{1} Let $\rho$ be a density matrix
of a state with non-zero EOF written as in eq. (\ref{abc}).
If $\alpha$ or $\beta$ are non-zero, then
there is an invertible LQCC $A \otimes B$
mapping $\rho$ with non-zero probability to a density matrix
$\rho'$ with higher EOF than $\rho$.
\noindent{\bf Proof} \qquad
For small $a$ and $b$, eq. (\ref{CCC}) takes the form
\begin{equation}
C(\rho') \simeq \frac { 1 }{ 1 + 2a {\bf
m}. \alpha + 2b
{\bf n} \beta} C(\rho) \, .
\label{CCC2}
\end{equation}
Hence if $\alpha$ or $\beta$ are non-zero and if $C( \rho )$ is
non-zero we can always find an
LQCC which, with non-zero probability, increases
the EOF, by choosing appropriately
small $a$ and $b$ and suitable ${\bf m}$ and ${\bf n}$.
\vskip 10pt
We now need a technical lemma about the topology of the space $R$ of
LQCC operations which do not decrease the EOF of a given $\rho$. The
result, namely that $R$ is compact, is needed in Theorem 3.
\lemma{2} Let $\rho$ have non-zero EOF.
There exists a positive bound $\delta (\rho )$ such that
if the state $\rho'$ has greater EOF than $\rho$ and can be obtained from
$\rho$ with non-zero probability by LQCC, then there exists some LQCC
from which $\rho'$ can be obtained
from $\rho$ with probability greater than $\delta ( \rho )$.
Furthermore let $R$ be the space of LQCC which succeed with non-zero
probability in producing a density matrix with EOF greater than or
equal to that of $\rho$. Then $R$ is compact.
\noindent{\bf Proof} \qquad
Fix $\rho$.
If we write $A \otimes B$ in the form (\ref{three}), $\rho'$ is
independent of the scale factor $\gamma$,
so that any $\rho'$ obtainable from $\rho$
can be obtained by a {\it normalised} LQCC, taking the form
(\ref{three}) with $\gamma =1$.
For $\epsilon > 0$, define $S_{\epsilon}$ to
be the set of normalised LQCC of the form (\ref{three})
with $\min\{ \alpha_A , \alpha_B \} = \epsilon$.
Let $
E_{\epsilon} $ be the maximum EOF of any density matrix
$\rho'$ obtained from $\rho$ by the action (\ref{image}) for
some $A \otimes B$ in $S_{\epsilon}$.
Since $
E_{\epsilon}$ is continuous in $\epsilon$ and tends to zero
as $\epsilon$ tends to zero, there is some positive $\epsilon_0 $
such that $
E_{\epsilon} $ is less than or equal to the EOF of $\rho$ for
$\epsilon \leq \epsilon_0 $ and such that $\epsilon_0 (\rho )$ is
maximal with this property.
Let $T_{\epsilon_0}$ be the union for ${1\geq \epsilon\geq \epsilon_0}$ of
$S_{\epsilon}$.
Now if a non-trivial LQCC $A \otimes B$ annihilates $\rho$, i.e.
$ A \otimes B \rho A^{\dagger} \otimes B^{\dagger} = 0 $,
then $A \otimes B \ket { \psi_i } = 0$ for all $i$ (where
$\ket { \psi_i }$ are the eigenvectors of $\rho$ with non-zero
eigenvalue). Thus either
$A$ or $B$ must be a rank one projector up to a
scale factor. Hence no LQCC in $T_{\epsilon_0}$ can annihilate
$\rho$. Also $
T_{\epsilon_0}$ is compact. So the probability
$ {\rm{Tr}} ( A \otimes B \rho A^{\dagger} \otimes B^{\dagger} ) $ of
obtaining $\rho'$ from $\rho$ via the LQCC is non-zero
everywhere in $T_{\epsilon_0}$ and attains a non-zero lower bound
$\delta(\rho)$
on the set. This is a lower bound for all LQCC increasing
the EOF of $\rho$, since no LQCC outside $
T_{\epsilon_0}$ does. The compactness of $R$ follows since it is a closed
subset of $T_{\epsilon_0}$.
\vskip 10pt
\theorem{3} Let $\rho$ written as in eq. (\ref{abc}) be a density
matrix with non-zero EOF.
If $\alpha$ or $\beta$ are non-zero, then
there exists an invertible LQCC
which, with non-zero probability, maps $\rho$ to a Bell diagonal density
matrix
$\rho'$ which has the maximum EOF of any density matrix obtainable
from $\rho$ by LQCC.
\noindent{\bf Proof} \qquad
Since by Lemma 2 the space of normalised LQCC which do not decrease
the EOF of $\rho$
is compact, and the EOF is a
continuous function, the lowest upper bound on the attainable
EOF is attained by some LQCC.
The corresponding density matrix $\rho'$ must have
$\alpha'=\beta'=0$, otherwise, by Theorem 1, its EOF could be increased.
\vskip 10pt
\theorem{4} Let $\rho$ be the density matrix of a state with non-zero
EOF. Then the Bell diagonal state $\rho'$
which can be obtained from $\rho$ by LQCC is unique up to
local unitary transformations. This $\rho'$ has maximal possible EOF.
\noindent{\bf Proof} \qquad
We start by calculating the positive square roots $\lambda_i$ of the
eigenvalues of the matrix $\rho\tilde\rho$. We order them as
$\lambda_1 \geq \lambda_2 \geq\lambda_3 \geq\lambda_4$.
The ratios $\frac{ \lambda_i }{ \lambda_j }$ are invariant
under the actions of invertible LQCC, see eq. (\ref{lambda}).
We characterise these ratios by the three numbers $c_i =
\lambda_i / \lambda_1$, $i=2,3,4$.
{}From Theorem 3 we know that $\rho$ can be brought to Bell diagonal
form by LQCC. We shall now show that the Bell diagonal form is
uniquely specified, up to local unitary transformations, by the ratios
$c_i$.
To this end consider a Bell diagonal state
$ \rho_{R} = \frac{1}{4} ( I_4 + R_{ij} \sigma_i \otimes \sigma_j ) $
with positive EOF.
Local unitary operations $U_A \otimes U_B$ transform $ \rho_{R}$
to $ \rho_{R'} = \frac{1}{4} ( I_4 + R'_{ij} \sigma_i \otimes
\sigma_j ) $, where $R' = ( O_1 )^T R (O_2 )$ for some
elements $O_1$ and $O_2$ of ${\rm SO}(3)$: any pair of $O_i$
can be produced by suitable choices of $U_A , U_B$.
By using a singular value decomposition\cite{eglewis} of $R$,
we can find orthogonal $O_i$ such that $R'$ is diagonal, so
we can find
local unitary operations mapping $\rho_R$ to the form
\begin{equation}
\rho_{r_1 , r_2 , r_3} =
\frac{1}{4} ( I_4 + \sum_{i=1}^3 r_i \sigma_i \otimes \sigma_i ) \, ,
\label{BD}
\end{equation}
with all $r_i$ having the same sign and with $r_1 \leq r_2 \leq r_3$.
Now $\rho_{r_1 , r_2 , r_3} = \tilde{\rho}_{r_1 , r_2 , r_3}$,
hence the eigenvalues of $\rho_{r_1 , r_2 , r_3}$ are equal to the
$\lambda_i$. These eigenvalues are
$ \frac{1}{4} (1 - r_1 - r_2 - r_3 ), \frac{1}{4} (1 + r_1 + r_2 - r_3
) , \frac{1}{4} (1 + r_2 + r_3 - r_1), \frac{1}{4} (1 + r_3 + r_1 -
r_2 )$.
Since $\rho_{r_1 , r_2 , r_3}$ is assumed to be entangled, the $r_i$
are all less than or equal to zero. (This may be verified by
checking that when the $r_i$ are all positive the concurrence
vanishes).
We can
now express the ratios $c_i$ in terms of the $r_i$. For instance
$c_2 = ( 1 + r_2 + r_3 - r_1) / (1 - r_1 - r_2 - r_3 )$. It is
straightforward to verify that the $r_i$ can be uniquely expressed in
terms of the $c_i$ by inverting these equations. Therefore the Bell
diagonal state of the form eq.(\ref{BD}) to which $\rho$ can be
brought is unique.
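Explicitly, since the $\lambda_i$ are here the eigenvalues of
$\rho_{r_1 , r_2 , r_3}$ they sum to one, so that
$\lambda_1 = ( 1 + c_2 + c_3 + c_4 )^{-1}$ and $\lambda_i = c_i \lambda_1$.
Pairwise sums of the eigenvalues listed above are linear in a single
$r_i$; for instance
\begin{equation}
r_1 = 1 - 2 ( \lambda_1 + \lambda_2 )
    = 1 - \frac{ 2 ( 1 + c_2 ) }{ 1 + c_2 + c_3 + c_4 } \, ,
\end{equation}
with $\lambda_2 = \frac{1}{4} ( 1 + r_2 + r_3 - r_1 )$ as in the
expression for $c_2$ above, and similarly for $r_2$ and $r_3$.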
\vskip10pt
\section{Conclusions}
We have shown that any entangled state $\rho$ of two qubits
whose local density
matrices are not completely random can be brought by LQCC to a unique
(up to local unitary transformations) Bell diagonal state. No other
LQCC can bring $\rho$ to a state with more entanglement.
To obtain an explicit expression for this optimal protocol, one should
write explicitly the conditions that the density matrix $\rho'$
obtained from $\rho$ has completely random local density matrices $Tr_A \rho'
= Tr_B \rho' = I_2$. We have shown that these equations have a unique
solution for the coefficients $a,{\bf m},b ,{\bf n}$ of the
filtrations
$f^{\mu,a,{\bf m}}, f^{\nu, b, {\bf n}}$ in eqs. (\ref{one},\ref{ff}).
Our optimal protocol
should be compared to the Procrustean algorithm for
concentrating pure state entanglement of \cite{BBPS} which
brings a non maximally entangled pure state to a maximally entangled
pure state by LQCC.
The main difference between the two methods is that
the optimal mixed state
protocol generally requires Alice and Bob to carry out
different filtrations and then tell each other whether the filtrations have
succeeded. Only if both succeed do they obtain (and know that they
have) the state with maximum extractable entanglement.
The Procrustean algorithm on the other hand can be realised without
classical communication between Alice and Bob, or with only Alice
carrying out the filtration and communicating the result to Bob.
In \cite{LMP} it was noted that the ratios $c_i
= \lambda_i/\lambda_1$ are invariant under invertible LQCC.
The argument used in proving Theorem 4 also shows that for entangled
states they constitute an exhaustive set. Indeed we can bring any
entangled $\rho$ to the form eq. (\ref{BD}) which is characterised by
three parameters $r_i$ and they are in one to one correspondence with
the $c_i$. This gives a characterisation of locally equivalent
entangled density matrices.
Our method also introduces an interesting combination of these
invariants: the maximal extractable entanglement of a density
matrix. This quantity provides a new characterisation of the
entanglement of a state. It has the important
property that it decreases under
mixing (this follows from the convexity of the EOF\cite{BDSW}).
\noindent{\bf Acknowledgments}
We are very grateful to Sandu Popescu for several helpful
discussions. Part of this work was carried out at the 1998
Elsag-Bailey - I.S.I. Foundation research meeting on quantum
computation and at the 1998 workshop of the Benasque Center for Physics.
AK thanks the Royal Society for financial support. SM is a
chercheur qualifi\'e du FNRS.
The European Interoperability Framework (EIF) is a document that, within the European Union, defines a set of guidelines and recommendations for e-government services to guarantee the interoperability of their systems. The original version was drafted by the European Commission pursuant to the eEurope 2005 Action Plan, adopted by the European Council in Seville in 2002. The current version dates from 2017.
The Action Plan stipulates that European interoperability should be based on open standards and on the use and promotion of open-source software. The EIF considers a standard to be open when it satisfies the following conditions:
The intellectual property rights are held by a non-profit organization with a free-access policy.
It has been published through a regulatory body.
It has been adopted through an open decision-making process.
There are no restrictions on its reuse.
Principles
As of 2017, the implementation strategy of the European Interoperability Framework establishes the following underlying principles:
Subsidiarity and proportionality.
Openness. As long as no restrictions apply, all public data must be freely accessible.
Transparency. Public administrations and citizens must be able to see all processes and decision-making.
Reusability. Work already done by other public administrations can be reused when similar problems arise.
Technological neutrality and data portability. Administrations must give citizens access to public services and data without requiring the use of any specific technology.
User-centricity.
Inclusion and accessibility. Every person, regardless of their condition, must be able to access digital public services.
Security and privacy. The aim is for all citizens to trust the digital environment of public administrations.
Multilingualism. The aim is a multilingual European digital ecosystem.
Administrative simplification. Services that provide no public value should be assessed and, where appropriate, eliminated.
Preservation of information. Decisions taken must remain accessible for a set period of time.
Assessment of effectiveness and efficiency. The effectiveness and efficiency of current and future technological solutions must be evaluated.
See also
Esquema Nacional de Interoperabilidad (Spain)
References
External links
European Interoperability Framework
Interoperability
Treaties of the European Union
On baric algebras with prescribed automorphisms. (English) Zbl 0586.17014
The automorphism group of the gametic algebra for a $2m$-ploid population with $n+1$ alleles is isomorphic to $A(n)$, the affine group of $\mathbb{R}^n$. This result is obtained as a special case from a study of automorphisms of slightly more general algebras. The author defines a certain class of algebras called $\alpha$-algebras. Roughly speaking, an $\alpha$-algebra is a baric algebra in which certain specified linear operators are automorphisms. The definition of these operators is too technical to be given here.
The author classifies the $\alpha$-algebras for $m\leq 5$. In a later section he has some results also in the case where $m>5$. Finally, he claims to have results dealing with derivations of $\alpha$-algebras which he says will be published elsewhere.
MSC: 17D92 Genetic algebras
Steele, Steve: The Expat
Alternative/Prog Rock
Ultraviolet Catastrophe Records
On the back of Steve Steele's debut CD, The Expat, he comments, "I'll sing whatever I like, to whomever I like." A more concise and clear artist's statement for any aspiring musician could hardly be written. It fits Steele whose music is as personal and passionate as it is eccentric and eclectic. He is certainly composing the music he likes.
Before speaking to the music, the theme and content of The Expat should be noted. Steele explores isolation and disconnectedness from the modern world, even in the presence of Internet connectivity and social networking. Steele therefore becomes the 'expatriate' in his native Houston. The work is in three parts, with each part preceded by a 'disjointed radio snippet.' Steele is (largely) the protagonist and guide through the songs.
The music on The Expat is largely rock, at least in the broadest sense. With his knowledge of multiple types of music (jazz, classical, rock, and others), the more specific nature of this work might be called art or prog rock. The arrangements are in one sense heavy, but perhaps a better word is dense. Revelation on the Radio and My Brother The Devil are awash with layers of sound. Steele recalls David Bowie and Frank Zappa washed through Brian Eno. Possibly the most interesting element of The Expat is Steele's vocals; they're certainly unique and expressive. However, listening to Dramatic Girls Forever or Via Satellite, to mention only two examples, I can't help wondering if I hear a sneer, or sarcasm, behind that aforementioned artist's statement.
Admittedly, digesting The Expat requires repeated spins. I was initially put off by the nearly grating opening song, Revelation on the Radio, and by Steele's singing style. However, after changing context, from office to automobile, and playing the album through, Steele's music became intriguing (and exposed his creative literacy). Dramatic Girls Forever, Godwin Park, Via Satellite, and Star City are terrific tracks.
Ultimately, Steve Steele's The Expat is genuinely provocative, intelligent, and satisfying music. Often challenging, and sometimes arresting, Steele combines his literate expression with fine musical innovation.
(Source: littlekendra.com, December 29, 2016)

# Columnstore Indexes and Computed Columns in SQL Server 2016

You can't do everything with a columnstore index – but SQL Server's optimizer can get pretty creative, so it can use a columnstore index in ways you might not expect.

## You can't put a computed column in a columnstore index

If you try to create a nonclustered columnstore index on a computed column, you'll get error message 35307:

Msg 35307, Level 16, State 1, Line 270
The statement failed because column 'BirthYear' on table 'FirstNameByBirthDate_1976_2015' is a computed column. Columnstore index cannot include a computed column implicitly or explicitly.

## But SQL Server may still decide to use a columnstore index for a query specifying a computed column!

I went ahead and created a nonclustered columnstore index on the other columns in my table, like this:

```sql
CREATE NONCLUSTERED COLUMNSTORE INDEX col_dbo_FirstNameByBirthDate_1976_2015
ON dbo.FirstNameByBirthDate_1976_2015
    (FakeBirthDateStamp, FirstNameByBirthDateId, FirstNameId, Gender);
GO
```

Then I ran this query against the table, which groups rows by the computed column, BirthYear:

```sql
SELECT TOP 3
    BirthYear,
    COUNT(*) AS NameCount
FROM dbo.FirstNameByBirthDate_1976_2015
WHERE BirthYear BETWEEN 2001 AND 2015
GROUP BY BirthYear
ORDER BY COUNT(*) DESC;
GO
```

Looking at the execution plan, SQL Server decided to scan the nonclustered columnstore index, even though it doesn't contain the computed column BirthYear! This surprised me, because I have a plain old nonclustered index on BirthYear which covers the query as well. I guess the optimizer is really excited about that nonclustered columnstore.

The columnstore index isn't the best choice for this query:

* Duration using nonclustered rowstore index on computed BirthYear: 2.722 seconds
* Duration using nonclustered columnstore index: 5.5 seconds

## Where's BirthYear? Let's look at the Compute Scalar farthest to the right

Clicking on that compute scalar operator and looking at the properties window, we can see that SQL Server looked up the definition for the computed column and figured out that the computation is based on columns in our nonclustered index – so it could scan that index, then run the computation for each row.

SQL Server is waiting until the third operator, a filter, to filter out the rows for BirthYear between 2001 and 2015.

## The cost estimate on that Compute Scalar is waaaayyy low…

This is an actual execution plan, so I have Actual Time Statistics, and I can see exactly how much CPU was burned to compute BirthYear for every row. Scrolling up in the properties window, I find that this took almost five seconds for each thread that worked on the compute scalar. That's more than 80% of the query's duration just to figure out BirthYear.

Oops!

## I can rewrite my query a bit to push that filter down…

My original query has the predicate "BirthYear BETWEEN 2001 and 2015". Let's change that predicate to a non-computed column:

```sql
SELECT TOP 3
    BirthYear,
    COUNT(*) AS NameCount
FROM dbo.FirstNameByBirthDate_1976_2015
WHERE FakeBirthDateStamp >= CAST('2001-01-01' AS DATETIME2(0))
  AND FakeBirthDateStamp <  CAST('2016-01-01' AS DATETIME2(0))
GROUP BY BirthYear
ORDER BY COUNT(*) DESC;
GO
```

I'm still using the computed column BirthYear in my SELECT and GROUP BY.

SQL Server still chooses the columnstore index for this query, but now there is a predicate on the columnstore index scan itself. This means far fewer rows are flowing into the compute scalar operator – we don't have to calculate BirthYear for any of the rows from 1976 through the end of 2000.

## Sure enough, it's faster

Making this change to the query text makes our nonclustered columnstore index highly competitive with Ye Olde covering rowstore b-tree index:

* Duration using nonclustered rowstore index on computed BirthYear: 2.722 seconds
* Duration using nonclustered columnstore index with original query: 5.5 seconds
* Duration using nonclustered columnstore index with predicate re-written to not reference the computed column: 2.2 seconds

If we couldn't re-write the predicate easily for whatever reason, we might choose to keep the nonclustered rowstore index on BirthYear around and use OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX) in our query.

## Be careful with computed columns and columnstore

I had assumed the optimizer would be reluctant to create a plan for a computed column, since that column can't be in the columnstore index. But it turned out to be pretty eager to do it.

If you've got computed columns and are testing out columnstore, look carefully at your queries and check to make sure you don't have any super-expensive compute scalar operators showing up in your plans where you might not want them.

## Vote to allow computed columns in columnstore indexes

Wouldn't this all be easier if you could just put the computed column in the columnstore, anyway? Vote up this Connect item.
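The hint mentioned as a fallback would be applied like this; the table and column names follow the post, and this is a sketch that hasn't been run against the post's dataset:

```sql
-- Keep the rowstore index on BirthYear, but tell the optimizer
-- not to consider the nonclustered columnstore for this query.
SELECT TOP 3
    BirthYear,
    COUNT(*) AS NameCount
FROM dbo.FirstNameByBirthDate_1976_2015
WHERE BirthYear BETWEEN 2001 AND 2015
GROUP BY BirthYear
ORDER BY COUNT(*) DESC
OPTION (IGNORE_NONCLUSTERED_COLUMNSTORE_INDEX);
GO
```

The hint is session-scoped to this one statement, so other queries can still benefit from the columnstore.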
(Source: mathhelpforum.com)

Differential inverse, proof

Prove that the differentiable function $F(x,y,z)=\left( f(x,y,z),\, g(x,y,z),\, f(x,y,z)+g(x,y,z) \right)$ can never have a differentiable inverse.

My attempt: I notice that $\dim \operatorname{dom} F = 3$ and that $\dim \operatorname{Im} F = 2$. Hence $F$ is not injective, hence $F$ doesn't have an inverse, so it cannot have a differentiable inverse.

I'm not sure my attempt is good.

Maybe I should have done it using the Inverse Function Theorem, but if the Jacobian of $F$ is not invertible, I can't say anything about the existence of an inverse of $F$, I believe. Hmm, but according to my memory of linear algebra, if the matrix of a function is not invertible then the function is not injective, so it doesn't have an inverse.

What do you think?
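For what it's worth, the Inverse Function Theorem route the poster considers can be made to work directly; a sketch (my own, not from the thread):

The Jacobian of $F$ stacks the gradients of its components,

$$
DF \;=\; \begin{pmatrix} \nabla f \\ \nabla g \\ \nabla f + \nabla g \end{pmatrix},
$$

and the third row is the sum of the first two, so $\det DF = 0$ at every point. If $F$ had a differentiable inverse $G$, then differentiating $G \circ F = \mathrm{id}$ with the chain rule would give $DG(F(p))\,DF(p) = I$ for every $p$, forcing $DF(p)$ to be invertible. Contradiction, so no differentiable inverse exists. (Note this argument does not by itself rule out a merely continuous inverse; that is where the poster's dimension-count would need more care.)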
The Best July Weather in Generations for the Northwest?
July 2013 was probably the best Northwest weather in a half-century or more. Perfect temperatures west of the Cascade crest, lack of rain, plenty of sun...this month had it all.
And we broke or tied some amazing records this month: be prepared to be impressed. I had to wait until this evening to be sure about several of these records, as a band of convective showers moved northward across the region during the afternoon and early evening hours. But now the story is clear.
Perhaps the most extraordinary occurrence was the aridity of the northern Washington coast. Quillayute (near Forks) tied the driest month on record, set 124 years ago in 1889 (0.01 inch). The precipitation record I am referring to actually combines two stations because of a move: from 1883 to 1966 the station was at Tatoosh Island, and from 1966 on at Quillayute. This is a very big record to tie. No one alive today has experienced such dry conditions on the coast.
What about Seattle? The airport only had a trace of rain, the driest since 1960. There are plenty more of these, but you get the point.
To get some perspective on this, here is a map (from the Western Region Climate Center) showing the percentage of normal precipitation for July 1-30, 2013. The Northwest was VERY dry, with the coastal region experiencing 2% or less of normal.
Ironically, it has been far wetter than normal in Arizona, New Mexico and much of Nevada!
But what about July temperature? How many ways can you say perfection? Let's start with Seattle-Tacoma Airport. In the figure below the red line is the average high and blue line is the average low. Only about a handful of days were below normal and only two days failed to reach 70F. No days in the 90s.
Pasco and Spokane? Same story, but add 10-15F. Warmer than normal with very little cool weather (see graphics). Obviously, the drought and warmth east of the Cascades has a down side, with a substantial fire threat (essentially we had mid to late August ground/fuel moisture in late July).
A more comprehensive view of the temperatures is found in the difference of the monthly average temperatures (through July 30) from normal conditions (climatology)--see graphic. West of the Cascade crest the temperatures were near normal (i.e., near perfect), but warmer than normal conditions were found to the east. Southeast Oregon was very hot (and dry).
I have looked through the Sea-Tac records of the past few decades and could find no July as comfortable as July 2013. This is surely the best July on record for most of you.
We have a few days of cooler than normal temperatures and a higher chance of precipitation ahead of us because of an upper level trough (particularly wet over eastern WA and the northwestern corner of the State). But it should warm up later in the weekend. Here is the latest Climate Prediction Center 6-10 day forecast. Above normal temperatures over our region and below normal precipitation over the western side.
Enjoy. You will be telling your grandchildren about this July one day.
The Serious Fire Season Has Begun in the Northwest: Meteorological Threats Ahead
During the past week two major wildfires have started and spread to thousands of acres here in the Northwest: the Mile Marker 28 fire near Satus Pass in south central Washington and the Colockum Tarps fire south of Wenatchee. And meteorological issues threaten to make the fire situation worse at the end of the week.
Sunday morning's visible satellite image (at 8 AM) shows lots of smoke in eastern Washington (mainly from the Satus Pass fire).
The Colockum Pass fire increased in size during the day (see satellite image around 7 PM Sunday below). Some thunderstorms developed over the north Cascades, and if you look closely you can see a cumulus cloud in the middle of the Colockum Pass smoke plume. The heat was sufficient to cause the air to become highly buoyant, producing a tall cumulus cloud, called a pyrocumulus.
Lori and Don Robbins sent me a picture of the pyrocumulus from a vantage point in Ellensburg.
An amazing picture from "Sooperfly". The smoke rises to a level at which it is no longer buoyant and spreads downwind. The cumulus cloud, with extra warmth from the release of latent heat (heat is released as water vapor condenses), can rise even higher.
A very clear satellite image of the smoke from the fires was available earlier Sunday afternoon from the NASA MODIS satellite:
A big issue for the Colockum fire (and to a lesser degree the Satus Pass fire) has been the strong westerly winds pushing eastward down the Cascade foothills on Sunday, forced by a strong pressure difference across the Cascades. These strong winds and the large pressure difference are associated with the cooler air that has moved into the west side of the mountains.
Here are the maximum winds for the 24h ending 9 PM Sunday. Lots of locations getting to 20-30 mph, a number reaching 30-40 mph. Not good for fighting fires.
The UW WRF model predicted the strong winds on Sunday (see graphic for 5 PM), but forecasts a major weakening on Monday...which should be a boon to the firefighters.
But a bigger threat is on the meteorological horizon...lightning-caused fires. Today, there was quite a bit of lightning in the northeast Cascades and the Okanogan--did it initiate any new fires? But more ominously, the weather situation will be very favorable for thunderstorms during the middle and end of this week. Such lightning, plus the dry conditions of the "fuels" at the surface, will produce a substantial threat of new wildfires.
Here is the current fire danger map from the USDA Forest Service. Eastern Washington has a substantial risk, but eastern Oregon and Idaho are even drier.
Posted by Cliff Mass Weather Blog at July 29, 2013
The Coolest Place in the Pacific Northwest
While much of the west side of the Pacific Northwest warmed into the 80s and east of the Cascades into the 90s and higher, one area has gotten cooler and cooler. Ground zero for chilling out? Portions of the Oregon coast.
And strangely, they have cooled while the rest of us have warmed; they will warm as the weather cools over the interior this week. Weird weather here in the Northwest!
Consider the temperatures at Newport, Oregon, on the central Oregon coast (station NWP03, see map below). Temperatures had fallen there over the past 10 days to highs of 50-52F, but started warming a bit (to a torrid 55F) yesterday. I might note that while the 50s were observed along the coast, 90s were only a short drive away to the east in the Willamette Valley!
During that period it was not only cold, but windy at Newport and other coastal locations, with winds increasing to over 24 kts! The wind chill temperature is down in the mid 40s. Feels like winter in summer!
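The post doesn't give a formula for that wind chill figure, but the standard NWS (2001) wind chill index is easy to check against it. The 51F air temperature below is my assumption (in the range of Newport's highs), and 24 kt is about 27.6 mph; this is a sketch, not a computation from the post.

```python
def wind_chill_f(temp_f: float, wind_mph: float) -> float:
    """NWS (2001) wind chill index; intended for temps near/below 50F, winds > 3 mph."""
    v = wind_mph ** 0.16
    return 35.74 + 0.6215 * temp_f - 35.75 * v + 0.4275 * temp_f * v

# ~51F air with a 24 kt (~27.6 mph) onshore wind:
print(round(wind_chill_f(51, 27.6), 1))  # → 43.7, i.e., a wind chill in the low-to-mid 40s
```

Consistent with the "feels like winter" description: a 50s afternoon that feels like the low 40s.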
A hint of what is going on is found from the latest sea surface temperature chart (see below, in °C, purple is very cold, blue is cold, then warming with green, yellow to red). The coastal Pacific is really cold--at 11 C (about 51F) and cooler. At some locations the coastal Pacific has dropped to around 47F!
In fact, looking at the measurements at some coastal buoys along the Oregon Coast (46050, 46015; see map and plots below), the water temperature has gotten cooler and cooler. Why?
Strangely enough the cooling is directly associated with our nice weather in the interior. The warming interior caused pressure to fall relative to the high pressure over the ocean (the East Pacific High). The resulting pressure differences produced day after day of strong northerlies along the coast (thus the origin of the powerful winds). The following forecast of sea level pressure and winds from the UW WRF model for a few days ago illustrates this.
Strong northerly winds cause upwelling of cool water from below...but this only happens near the coast....thus the area of cold, coastal water. (see explanation of upwelling here)
But as the interior cools and an upper trough moves in during the next few days, the winds should relax along the coast and the upwelling should weaken. So strangely, it will warm at Newport and other coastal locations.
Secret Revealed: The Northwest Has the Best Summer in the Nation. But Why?
The secret is out.
A few days ago, a well known ratings group found Seattle to be the NUMBER ONE city in the U.S. for pleasant summer weather, while Portland followed in second place. Even major newspapers like the Los Angeles Times seem to agree. A table from the authoritative Sperling report says it all (see below). With comfortable average highs in the mid-70s, sleep-friendly lows in the lower fifties, and low dew points and relative humidities, Seattle is meteorological heaven during the summer months.
(Dew point is the most important measure of the amount of water vapor in the air. It is the temperature to which air must be cooled (at constant pressure) to reach saturation. The eastern U.S. often gets into the mid-60s and 70s F. We stay down in the lower 50s or less. Above around 60F the air feels sticky and humid at normal temperatures.)
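The dew point definition above can be made concrete with the Magnus approximation; the coefficients below are one commonly published pair, and the example temperatures and humidities are illustrative, not taken from the Sperling table.

```python
import math

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Dew point via the Magnus approximation (Sonntag coefficients)."""
    b, c = 17.62, 243.12
    gamma = math.log(rel_humidity_pct / 100.0) + b * temp_c / (c + temp_c)
    return c * gamma / (b - gamma)

# A muggy East Coast afternoon vs. a Seattle-style dry warm day:
print(round(dew_point_c(30, 70), 1))  # humid 86F day: dew point well into the "sticky" 70s F range
print(round(dew_point_c(24, 40), 1))  # dry 75F day: dew point around 9-10C (~50F)
```

Note how the same warmth can feel completely different: the humidity, not the temperature, drives the dew point.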
But the Sperling report missed some critical meteorological and other information that makes Northwest summers even closer to heaven on earth!
(1) Being relatively far north, we have the longest days in the lower 48 states. So there are more hours to enjoy perfect weather. Finish work at 6 PM? No problemo...plenty of time to have fun outside.
(2) When we do have one of our "heat waves", it is nearly always DRY HEAT with low dew points. Why? Because to get really warm here, you have to get offshore, downslope flow. Air coming off the cool Pacific is obviously not going to give you a heat wave. The interior of the Northwest is dry and when the air sinks along the western slopes of the Cascades it is compressed by higher pressure and warms. It is virtually IMPOSSIBLE to get a major heat wave (temps above 95F) with high humidity as they get in the eastern two thirds of the country.
(3) Other parts of the country can get severe thunderstorms during the warm season. That is extremely rare around here because of the cool Pacific and low dew points we enjoy.
(4) We do not get hurricanes or tropical storms like the eastern U.S. Again, thank the cool Pacific Ocean.
(5) We have practically NO RAIN in the summer. Really. Seattle is drier than Phoenix in July. So you can enjoy perfect temperatures without the inconvenience of even thinking about an umbrella or rain gear.
(6) We have far fewer mosquitoes and biting flies than the eastern U.S. (probably the lack of rain contributes to that!). And did I mention a lack of poisonous snakes!
(7) Seattle has great visibility in the summer. This is because the air is relatively clean after passing across the Pacific and our low humidity (which prevents particles that absorb water vapor from growing). And we have great things to see as well, like the Cascades, the Olympics, Puget Sound, and Mt. Rainier.
(8) If you don't like the perfect weather of the western lowlands, a short drive can give you something a bit different (but still good!). Head to the coast if you would like to take 10F off the temps and enjoy the sound of a few fog horns. Cross the Cascades for that dry, warm sauna effects at Lake Chelan or other locations. Perfection plus choice.
(9) In the Puget Sound lowlands escape from heat is only a short drive or bus ride away. The Sound is still around 50F during the warmest spells and the beach areas can be in the 60s, while 80s or warmer are found a short distance away.
(10) It is hard for us to stay too hot for too long. The NW has a natural air conditioner system. As temperatures warm, pressures tend to fall over the hot interior. Eventually the pressure difference between the cool (and higher pressure) Pacific and the interior gets so large that marine air surges in. And profound relief follows. Virtually guaranteed.
(11) And even when we have our biggest heat waves, nighttime temperatures are still reasonable. Consider the WARMEST DAY IN SEATTLE HISTORY, when temperatures at Seattle Tacoma Airport climbed to 103F. The temperature dropped to 71F that night! A bit warm, but still ok for sleeping if you have a fan. Folks on the East Coast call that a comfortable night, particularly since our dew points were modest.
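The "dry heat" mechanism in item (2), compressional warming of descending air, is easy to quantify: unsaturated air warms at about 9.8C per kilometer of descent. The ~1500 m crest height below is my illustrative assumption, not a figure from the post.

```python
DRY_ADIABATIC_LAPSE_C_PER_KM = 9.8  # warming rate of unsaturated, sinking air

def downslope_warming_c(descent_m: float) -> float:
    """Temperature gain for dry air descending `descent_m` meters."""
    return DRY_ADIABATIC_LAPSE_C_PER_KM * descent_m / 1000.0

# Air sinking from a ~1500 m Cascade pass down to the western lowlands:
print(round(downslope_warming_c(1500), 1))  # → 14.7 C, roughly 26F of warming
```

Since no moisture is added during the descent, the dew point stays low, which is why Northwest heat waves arrive as dry heat.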
Yes, we live in as close to summertime meteorological nirvana as is available on this planet, a fact the Seattle Chamber of Commerce should use to our great advantage in the tourist trade. And there is another deep secret: because of our proximity to the Pacific the impacts of global warming on local summer weather will be far less than in most areas of the country. We will remain meteorologically blessed.
The Path to Seattle During the Summer
Are Nighttime Heat Waves Increasing in the Northwest?
On Saturday, the Seattle Times ran a front page story about how nighttime heat waves are increasing here in the Northwest during the summer. Based on an article in the Journal of Applied Meteorology and Climatology, the story was repeated by media outlets throughout the region.
But the more one looks at these results, the more questions come to mind. This blog will highlight some of my concerns.
Before I go further, let me note the region they considered was western Oregon and Washington.
Interestingly, the authors found no trend in the extremes of maximum temperature, but only in minimum temperature. That seems a bit odd in itself, but some studies have suggested this is a general finding (more increase in min than max temps), with the idea that more cloud cover or other effects might be the cause. Around here, daytime warming is what really puts folks under acute heat stress, and the "heat waves" examined in this study were associated with nighttime temperatures staying at roughly the mid 60s or above for three days or more. Not very dramatic, and wimpy by heat wave standards for the rest of the country.
But the lack of correspondence between serious daytime and nighttime heat waves IS concerning, since most major heat waves around here (like July 2009) are observed in both maximum and minimum temperatures. There is a reason for that...our biggest heat waves are associated with strong offshore flow and subsidence down the western slopes of the Cascades, and this phenomenon tends to raise both maximum and minimum temperatures.
But what REALLY bothered me was their plot showing heat wave occurrence over time (see figure displayed in the Seattle Times):
There really seems to be a sudden increase in heat waves around 1990. That is why their paper and the Seattle Times are talking about a trend in heat waves. Why would heat waves rev up in the years around 1990? Such a variation is not consistent with human-caused global warming, because that warming increases slowly and would not start abruptly as shown in this figure. Natural variability can cause a sudden warming, but the big Kahuna of multi-decadal variability around here, the Pacific Decadal Oscillation (PDO), switched phase from cold to warm around 1975--which doesn't fit this figure.
So we are left with finding the origin of the sudden increase in nighttime heat waves around 1990. To explore this issue, I started playing around at the wonderful web site supported by the Office of the Washington State Climatologist (OWSC) that allows you to plot temperatures from stations around the region. And as I plotted station after station, a disturbing pattern was apparent: many stations showed abrupt temperature jumps, from one plateau level to another, around 1990. Let me show you a few examples, displaying summer minimum temperatures. Try to ignore the red lines, the trend lines for the entire period...they really are useless. Try covering the figures before and after the jumps...that really drives the point home.
First, there is McMillin Reservoir near Tacoma: a big jump of 1-2F around 1986.
Or Astoria, a jump around 1989.
Everett, a jump in the early 1990s.
Forks moved to a new, higher plateau around 1989.
So what was producing the jumps to higher plateaus? Well, it turns out that NOAA made a major change in temperature sensors during the late 1980s and early 1990s, from old-style alcohol/mercury thermometers in white slotted shelters to electronic sensors called MMTS in plastic housings. Here is how they look.
Big difference. A number of studies have shown that the MMTS sensors tend to read high for minimum temperature, by about 0.5F. But there is another issue. Since MMTS sensors must be wired in, they have generally been moved closer to buildings and structures than the sensors they replaced. Here is an example--the MMTS at Forks (near a deck!)
and this one in Conconully, above hot rocks!
A number of studies have shown that buildings and urbanized settings tend to influence minimum temperatures more than maximum temperatures. For example, rocks, concrete and structures tend to absorb heat during the day and release it at night. And the atmosphere is generally more mixed during the day, so local effects tend to be lessened.
NOAA converted many of the climate sites to MMTS between 1985 and 1995. And my brief examination of the record from each station suggests that many of the jumps were associated with the conversion to electronic thermometers. For example, take McMillin Reservoir. The official NOAA metadata (metadata is a description of sensor changes and moves) shows the conversion to MMTS in 1986. For Everett, 1991-1992. And there was another sensor change at the airports, to the HO83 sensor, which had its own biases.
Further proof of the MMTS conversion issue has been noted by Mark Albright, past state climatologist. He compared McMillin Res against Olympia, since Olympia didn't undergo the MMTS conversion and is only 20 miles SW of McMillin Res. He selected the 2 years prior to the discontinuity and the 2 years following the jump in temperature at McMillin Res for July. In 1984-85 McMillin Res and Olympia were both equally cold in July. Something had changed by 1990-91 to make McMillin Res +2.5 deg F warmer than Olympia.
              July 1984-85    July 1990-91
McMillin Res      49.7            53.0
Olympia           49.7            50.5
Difference         0.0            -2.5
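Albright's paired-station check is, in effect, a difference-in-differences with Olympia as the control station. A minimal sketch of the arithmetic, using the July numbers above:

```python
# July mean temperatures (F) from the comparison above
mcmillin = {"1984-85": 49.7, "1990-91": 53.0}  # converted to MMTS in 1986
olympia  = {"1984-85": 49.7, "1990-91": 50.5}  # no sensor change (control)

# Warming at each station across the conversion period...
mcmillin_change = mcmillin["1990-91"] - mcmillin["1984-85"]  # 3.3 F
olympia_change = olympia["1990-91"] - olympia["1984-85"]     # 0.8 F

# ...and the excess warming at McMillin relative to its unchanged neighbor
print(round(mcmillin_change - olympia_change, 1))  # → 2.5
```

The 2.5F excess is the signal attributed to the sensor/siting change rather than to regional climate, since both stations share essentially the same weather.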
And there is another problem. A number of the minimum "heat wave" days would not be considered heat waves at all by any reasonable person. One of their periods started on July 21, 2007. Take a look at the official record at Seattle Tacoma Airport:
Several days with minimum temperatures in the mid to upper 60s. But the maximum temperatures were only in the seventies, and light rain was observed. During the 3-day "heat wave" of 21-23 July 2007 at SeaTac Airport, rain fell each day, totaling 0.60 inches, and the high temperature averaged a BELOW NORMAL 72 degrees. The average high temperature at Olympia was an even cooler 71 degrees. This is no heat wave.
I want to be careful here--my study was a brief one. But I believe there is real reason to wonder whether this sudden nighttime "heat wave storm" is real. And to assume the increased frequency of nighttime heat here in the Northwest is due to anthropogenic global warming is triply doubtful. Mankind will cause substantial warming of the planet later in this century...but I suspect that this nighttime anomaly has a less profound origin.
Stratus Secrets
Day after day we have experienced the same pattern of stratus developing overnight west of the Cascade crest, with clearing during the morning. But this morning we had "superstratus": thicker, slower to burn off, and accompanied by fog and drizzle in some locations (like north Seattle!).
The enhanced stratus this morning was associated with stronger onshore flow at low levels, with a deep marine layer capped by a strong inversion (see the plot at Quillayute, Washington; the line on the right is temperature, on the left dew point; when the lines are on top of each other the air is saturated). The vertical coordinate is height (in terms of pressure); 850 is roughly 5000 ft. The air is very dry above 925 hPa (about 3000 ft).
It is fascinating to watch the stratus burn off using high-resolution weather satellite imagery. One thing we learned from the satellite pictures is how stratus and fog burn off: from the edges. Let's watch it happen today (Saturday).
Let's start Friday afternoon at 4 PM, when the stratus of the previous day had burned off. Plenty of low clouds offshore!
By 8 PM, the low clouds had begun to move inland (look carefully around Hoquiam and the western Strait of Juan de Fuca).
The images you are looking at are called "visible" imagery--it is what you would see from space. The trouble is that visible light is not available during the night, and infrared imagery (which works 24 hours a day by sensing the temperature of the emitting surface) is not very useful at night for low clouds (since they have a similar temperature to the surface). No worries! By combining a number of wavelengths, NOAA has developed what is known as "fog imagery" that can show the fog even when it is dark out. Here is an example for 4:00 AM. If you look carefully, you can see the low clouds have spread over the western lowlands.
Extensive low clouds are confirmed by the visible imagery at 7:30 AM---coverage is pretty extensive.
Now let's watch it burn off. By 9:30 AM, the low clouds were pulling back from the Cascade valleys.
Pulling back more at 11:30 AM
Shrinking further at 12:30 PM
And then a rapid pull back by 1:30 PM. This is hours later than yesterday, the product of the deeper marine stratus/fog layer.
So stratus/fog burns off from the sides. In addition, there is a tendency for the base of the fog and stratus to lift, as some solar radiation gets through to warm the surface. Meteorologists have a fancy device called the laser ceilometer that can measure the base of clouds by reflecting a laser beam off of them. The UW got some surplus ceilometers from the National Weather Service and has one running in real time (and viewable on the web). Here is the latest graphic...you can see the base of the low clouds rise and weaken in time.
Another perfect weather day in meteorological paradise.
Nighttime Heat Wave Hits Victoria, Paradise Melt-Out, and a Very Dry July
Nearly every summer day Northwest temperatures follow a familiar routine, with temperatures hitting a minimum around 6-7 AM and rising until roughly 5 PM, followed by falling temperatures during the evening and morning hours.
But something very different hit the western suburbs of Victoria B.C. Wednesday morning, with temperatures rapidly rising around midnight. How could this be?
Let's start with a plot of temperatures at Victoria's Lakewood Elementary School during the last week. Yes, temperatures are in Celsius (30C = 86F, 20C = 68F, for those metrically challenged). Temperatures on the 12th, 13th, 14th, and 15th do the normal diurnal thing, rising and falling
at the typical times (the big tick marks indicate midnight). But something weird happened early on the 17th. The temperature rose right after midnight, and then fell again. Strange behavior.
Here is a plot of the temperatures and winds at midnight on Wednesday around Victoria...nothing too strange. A lot of temperatures in the upper 60s. (circles indicate calm, lines with pennants are winds).
But two hours later at 2 AM something weird has happened...temperature zoomed up into the mid to upper 70s in the western part of the city, with one site hitting 79F.
The warmth is explained by the winds. While most of the city had light winds, westerly winds had pushed into the western suburbs. Such flow descends off the terrain to the west of the city, being compressed as it sinks and mixing warmer air down from aloft.
Here is a terrain map around Victoria to give you a better idea of what I am talking about.
A few hours later the westerly winds weakened and the temperature dropped to levels similar to the rest of the area.
As long as I am talking about anomalies....it has been quite dry around here, even for July. So far we have only had a trace at Seattle-Tacoma Airport (a trace is less than .01 inches). If we get no more precipitation for the rest of the month there (a real possibility), we would enjoy the driest July since 1960. You will tell your grandchildren about this one day.
And another big meteorological event happened yesterday....the snow finally melted out at Paradise Ranger Station on Mt. Rainier (see graphic).
According to Mark Albright, past state climatologist, over the past 30 years, the median melt-out date at Paradise is 11 July. (The mean melt-out date is slightly later on 15 July.) During the first 15 years of operation from 1981-1997 the mean melt-out date was 10 July. Since then the mean melt-out date has moved 8 days later to 18 July over the 15 year period from 1998-2012. Over the past 5 years (2008-2012) the mean melt-out date has been 2 August. The earliest melt-out was 5 June 2005 while the latest melt-out was two years ago when the snow pack didn't melt out until 25 August 2011.
Bottom line: the Cascade snow pack is NOT melting out earlier during recent years.
German New Guinea () is the name of a German colonial possession in the western Pacific Ocean. The total area of the colony was 242,476 sq. km, and as of 1912 its population numbered 478,843 natives and 772 Germans. German New Guinea united all German colonial possessions in the South Seas with the exception of Samoa.

Origins and life of German New Guinea

The unification of Germany marked not only the emergence of a new great power on the European continent, but also the beginning of vigorous agitation in German society for the acquisition of colonial possessions of Germany's own. The idea of acquiring colonies ceased to be the fantasy of a small group of people and became a political topic. German Hanseatic trading firms had been quite active in the Pacific region since the middle of the 19th century, and political support for Hanseatic commercial expansion in the region gradually took shape. The Polynesian islands became the center of German trade in this vast region: by 1877, 87% of the exports and 79% of the imports of Samoa and Tonga were in the hands of German merchants.

Colonial enthusiasm in Germany, however, was not universal. Voices were also raised disputing the commercial benefit of acquiring these uncultivated lands so far from the metropole. In the end, the advocates of colonial policy prevailed. Thus in 1882 a group of major bankers founded the New Guinea Company (German: Neuguinea-Kompagnie). The company's goal was colonial expansion in the South Pacific, specifically in New Guinea, the Bismarck Archipelago and the Solomon Islands. Somewhat later the company was renamed the New Guinea Consortium (German: Neuguinea-Konsortium), but its goals remained the same.

On 17 May 1885 the company received from the German Kaiser sovereign rights over the northeastern part of New Guinea, also known as Kaiser-Wilhelmsland. By the end of 1886 the company's rights also encompassed the northern Solomon Islands.

On 7 October 1898 the German government concluded an agreement with the New Guinea Consortium under which it regained sovereign rights over the colony. Although the colony passed under government administration, the New Guinea Consortium, or more precisely the major capitalists behind it, did not abandon their activities there. Attempts were made to grow cotton, tobacco, coconuts and other tropical crops. By the beginning of the 20th century, the rapid expansion of the plantations made it necessary to import labor from China and Java, and taxes on the native population were raised in order to force it to work on the plantations.

As early as 1899 the first postage stamp bearing the imprint German New Guinea appeared. That same year the German government took advantage of Spain's difficult position and purchased the Caroline and Mariana Islands and Palau for 17 million marks. They too were incorporated into the colony.

The capital of German New Guinea was Finschhafen. The port bears the name of the discoverer of its bay, the German explorer Otto Finsch. Owing to a malaria epidemic the port was abandoned in 1891 and restored only 10 years later. In the following years the company's seat was in the present-day town of Madang, then called Friedrich-Wilhelmshafen. From 1899 to 1910 the seat of the colony's governor was Kokopo, then called Herbertshöhe. The last seat of the German governor was Simpsonhafen (today Rabaul). From this period dates the emergence of a second German-based creole language, known as Unserdeutsch and still used today by individual speakers. Simpsonhafen was conceived and planned by the German colonial administration as a handsome town with broad avenues and vast parks.

The First World War put an end to German New Guinea. The greater part of the colony's territory was occupied by Australian troops as early as August 1914, while the Japanese, meeting almost no resistance, took the Mariana, Caroline and Marshall Islands as well as Palau. In 1920 the League of Nations placed the colony under Australian and Japanese mandates. In the meantime nearly all the German inhabitants were forced to leave the former colony; stripped of their property and impoverished, they returned to their homeland.
Pre-Registration Requested For Veteran Stand Down
David Slone, Times Union
Veterans, service members, their families and caregivers who plan to take part in Friday's 2022 Kosciusko County Military Veteran Stand Down are being encouraged to pre-register.
"Pre-registration is pretty important. We're encouraging veterans and their families to pre-register online," said Guy Fisher, vice president of community engagement, Goodwill Industries of Michiana Inc.
Registration can be found online at https://sites.google.com/goodwill-ni.com/veteranservices/stand-downs/warsaw-stand-down
The Stand Down is 3 to 6 p.m. Friday at Warsaw Municipal Airport, 3000 Airport Road, Warsaw.
On-site registration will begin at 2:30 p.m. People are asked not to arrive more than one hour before the start of the Stand Down.
Fisher said the 2021 Stand Down was held at the airport and it went really well. At the event will be information on veteran services, vendors and giveaways.
"It's just a way for us to minister to the veteran community of Kosciusko County," he said.
Veterans and their families should bring their DD214, military ID or Veterans Administration (VA) health care card.
The event is outside, mostly under tents, so Fisher said it will be a little warm. Pre-registering will help make everyone's experience go a little faster.
Approximately 25 to 30 vendors and organizations participated in the 2021 event, which Goodwill Industries hosts, Fisher said. Some of the participating vendors and organizations include WorkOne, Disabled American Veterans, VA, Food Bank of Northern Indiana, local colleges and other resources, plus various Goodwill resources.
He said Goodwill holds the Stand Downs to provide for the needs of veterans and it's a way to give back to the veterans, service members, their families and caregivers. When veterans serve, Fisher said, their families are also serving.
"It's just a way for us to give back to them," he said.
When veterans return from military service, they often find civilian life to be different than what they're used to. Fisher said the Stand Downs are a way for Goodwill to help veterans connect the dots to education, careers and other needs.
There also will be a traveling memory wall at the Stand Down for veterans to leave their memories. A flag ceremony at 3 p.m. will start the Stand Down.
Fisher estimates that 200 to 300 veterans and their families will come to Friday's event. Some will spend more time at the event, catching up with their friends, while others will spend time at specific tents.
"We had one a month ago in Mishawaka and more than 350 showed up," he said.
Anywhere from 25 to 30 volunteers from the community will help keep the event going smoothly, from parking cars to helping families carry the stuff they receive out to their vehicles. Boy Scouts helped direct traffic at the 2021 event.
To volunteer, contact the Goodwill Warsaw Center for Goodwill Connections at 574-269-1351, ext. 4122.