\section{Introduction}
The fractional quantum Hall effect (FQHE) was discovered in
1982~\cite{Tsui:1982yy}, only a couple of years following the
discovery of the integer quantum Hall effect (IQHE). One of the most
nontrivial problems of condensed matter physics, the FQHE has
attracted the attention of theorists ever since. (One of the earliest
and most influential works is the one by
Laughlin~\cite{Laughlin:1983fy}.) This paper surveys the most recent
progress in the understanding of one particular, but very important,
aspect of the FQHE: the composite fermion in the half-filled Landau
level~\cite{Halperin:1992mh}. In particular, we will review the
arguments leading to the Dirac composite fermion
theory~\cite{Son:2015xqa}.
The quantum Hall problem is attractive for theorists partly because of
its very simple starting point: a Hamiltonian describing particles
moving on a two-dimensional plane, in a constant magnetic field, and
interacting with each other through a two-body potential,
\begin{equation}\label{H-TOE}
H = \sum_{a=1}^N \frac{(\p_a + \mathbf{A}(\x_a))^2}{2m}
+ \sum_{\<a,b\>} V(|\x_a-\x_b|).
\end{equation}
Here, $\mathbf{A}$ is the gauge potential corresponding to a constant
magnetic field. The two-body potential $V$ is normally taken to be
the Coulomb potential $V(r)=e^2/r$, but one believes many results are
valid for a large class of repulsive interactions. The quantum Hall
states are characterized by many physical properties, including
quantized Hall resistivity, vanishing longitudinal resistivity, bulk
energy gap, edge modes, etc. For the purpose of this article, we take
the existence of an energy gap to be the defining property of the
quantum Hall states. A very simplified summary of the experimental
situation is as follows: for certain values of the filling factor,
defined as
\begin{equation}
\nu = \frac \rho{B/2\pi}\,,
\end{equation}
where $\rho$ is the two-dimensional electron density, the system is in
one of the quantum Hall states with an energy gap. The values of
$\nu$ for which there is a gap are either integers, in which case we
have IQHE, or rational numbers, which correspond to FQHE.
The existence of a gap for integer $\nu$ can be understood on the
basis of the approximation of noninteracting electrons. In a
magnetic field $B$, the energy eigenvalues of the one-particle
Hamiltonian are organized into Landau levels,
\begin{equation}
E_n = \frac Bm \left( n+\frac12 \right).
\end{equation}
The degeneracy of each Landau level is $B/2\pi$ per unit area. At
integer $\nu$, the levels with $n<\nu$ are completely filled and those
with $n\ge\nu$ are left empty. The system then has a gap equal to the
spacing between Landau levels, $\omega_c=B/m$.
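The degeneracy quoted here is simply the density of flux quanta: in units where $\hbar=e=c=1$ the flux quantum is $\Phi_0=2\pi$, so
\begin{equation}
\frac{N_{\rm orb}}{\textrm{Area}} = \frac{B}{\Phi_0} = \frac{B}{2\pi}\,,
\end{equation}
which is also why the filling factor $\nu=\rho/(B/2\pi)$ counts the number of filled Landau levels.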
In contrast to the IQHE, the fractional quantum Hall effect cannot be
understood from the noninteracting limit. For example, when
$0<\nu<1$, the lowest Landau level (LLL, $n=0$) is partially filled,
so the noninteracting Hamiltonian has an exponentially large (in the
number of electrons) ground state degeneracy. The miracle of the FQHE
is that for certain rational values of $\nu$, interactions between
electrons lead to a gap.
There are two energy scales in the FQH problem. The first scale is
the cyclotron energy $\omega_c=B/m$, while the second scale is the
interaction energy scale. In the case of the Coulomb interaction, the
latter energy scale can be estimated as the potential energy
between two neighboring electrons,
\begin{equation}
  \Delta = \frac{e^2}r \sim e^2 \sqrt B\,,
\end{equation}
where the typical separation $r$ between neighboring electrons is of
order the magnetic length, $r\sim 1/\sqrt B$.
The FQH problem is usually considered in the limit
$\Delta\ll\omega_c$. This limit is reached experimentally by taking
$B\to\infty$ at fixed $\nu$; theoretically, it is also reached by
taking $m\to0$ at fixed $B$. When $\Delta\ll\omega_c$ one can ignore
all Landau levels above the lowest one, and the problem can be
reformulated as pertaining to a Hamiltonian which operates only on the
LLL,
\begin{equation}\label{H-projected}
H = \mathcal P_{\rm LLL} \sum_{\<a,b\>} V(|\x_a-\x_b|),
\end{equation}
where $\mathcal P_{\rm LLL}$ is the projection onto the lowest Landau level.
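Quantitatively, the truncation to the LLL is controlled by the smallness of the dimensionless ratio of the two energy scales,
\begin{equation}
\frac{\Delta}{\omega_c} \sim \frac{e^2\sqrt B}{B/m} = \frac{e^2 m}{\sqrt B}\,,
\end{equation}
which indeed vanishes as $B\to\infty$ at fixed $\nu$, or as $m\to0$ at fixed $B$.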
This extremely simple Hamiltonian, believed to underlie all the
richness of FQH physics, cannot be solved by traditional methods of
perturbation theory due to the lack of a small parameter. In
particular, there is only one energy scale---the Coulomb energy scale
$\Delta$. The FQH problem is essentially nonperturbative.
\section{Flux attachment}
One of the most productive ideas in FQH physics has been the idea
of the composite fermion (CF). The notion of the CF itself is based
on another concept called flux attachment~\cite{Arovas:1985yb}, which
was applied to the FQHE in a number of groundbreaking
works~\cite{Zhang:1988wy,Jain:1989tx,Fradkin:1991wy,Halperin:1992mh}.
I will now review the standard textbook field theory of the composite
fermion, although later on I will argue that it needs some nontrivial
modification to become the correct low-energy effective theory.
In the FQH case, one ``attaches'' an even number (in the simplest
case, two) of magnetic flux quanta to an electron, transforming it to
a new object called the ``composite fermion.'' In field theory
language, one starts from a theory of interacting electrons $\psi_e$
in (2+1) dimensions in a background magnetic field
\begin{equation}\label{L-orig}
\mathcal L = i \psi_e^\+ (\d_t - i A_0) \psi_e
- \frac1{2m}|(\d_i-iA_i)\psi_e|^2 + \cdots
\end{equation}
where $\cdots$ stands for interaction terms, and ``derives,'' following
a certain formal procedure, a new Lagrangian for the composite fermion
$\psi$,
\begin{equation}\label{L-cf}
  \mathcal L = i\psi^\+ (\d_t -iA_0 + ia_0)\psi
  - \frac1{2m} |(\d_i - iA_i + ia_i)\psi|^2
  + \frac12 \frac1{4\pi} \epsilon^{\mu\nu\lambda}
  a_\mu \d_\nu a_\lambda + \cdots
\end{equation}
The Chern--Simons term in Eq.~(\ref{L-cf}) encodes the idea of flux
attachment. In fact, the equation of motion obtained by
differentiating the action with respect to $a_0$ reads
\begin{equation}\label{flux_att}
2 \psi^\+ \psi = \frac b{2\pi}\,,
\qquad b=\bm{\nabla}\times \mathbf{a},
\end{equation}
which means that the magnetic fluxes of the dynamic gauge field
$a_\mu$ are tied to the location of the composite fermions, with two
units of fluxes per particle.
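This equation of motion follows directly: the $a_0$-dependent terms of~(\ref{L-cf}) are
\begin{equation}
\mathcal L \supset -a_0\, \psi^\+\psi + \frac12\frac1{4\pi}\,
\epsilon^{\mu\nu\lambda} a_\mu \d_\nu a_\lambda\,,
\end{equation}
and varying with respect to $a_0$ gives $-\psi^\+\psi + b/4\pi = 0$, which is Eq.~(\ref{flux_att}).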
There are two features of the field theory~(\ref{L-cf})---which will
be called the HLR field theory after Halperin, Lee, and Read, who used
it to study the half-filled Landau level~\cite{Halperin:1992mh}---that
are rather trivial but worth listing here for future reference:
\begin{itemize}
\item The number of composite fermions is the same as the number of
electrons. It cannot be otherwise if the composite fermion results
from attaching magnetic fluxes to an electron.
\item The action contains a Chern--Simons term for $a_\mu$. As
demonstrated above, this term encodes in mathematical terms the idea
of flux attachment.
\end{itemize}
In the literature, it is often stressed that the transformation from
(\ref{L-orig}) to (\ref{L-cf}) can be done in an exact way (see, e.g.,
Ref.~\cite{Fradkin:1991wy}). ``Conservation of difficulty'' then
implies that the theory~(\ref{L-cf}) cannot be solved exactly. To
make any progress at all, one has to start with some approximation
scheme, and in every work so far this has been the mean field
approximation where one replaces the dynamical gauge field $a_\mu$ by
its average value determined from Eq.~(\ref{flux_att}). Since in the
Lagrangian (\ref{L-cf}) the gauge fields $A$ and $a$ enter through the
difference $A-a$, and the density of the composite fermions is the
same as the density of the original electrons, the effective average
magnetic field acting on $\psi$ is
\begin{equation}
B_{\rm eff} = B - \<b\> = B-4\pi\rho.
\end{equation}
Translated to the language of the filling factors,
\begin{equation}
\nu = \frac\rho{B/2\pi}\,, \qquad \nu_{\rm CF} = \frac\rho{B_{\rm eff}/2\pi}\,,
\end{equation}
the equation becomes
\begin{equation}
\nu_{\rm CF}^{-1} = \nu^{-1}-2.
\end{equation}
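Indeed, this follows by dividing $B_{\rm eff}=B-4\pi\rho$ by the composite fermion density, which equals the electron density $\rho$:
\begin{equation}
\nu_{\rm CF}^{-1} = \frac{B_{\rm eff}/2\pi}{\rho}
= \frac{B}{2\pi\rho} - 2 = \nu^{-1} - 2\,.
\end{equation}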
In particular, the values $\nu=\frac{n}{2n+1}$ map to $\nu_{\rm
CF}=n$. In this way we have mapped the FQH problem for the electron
to the IQH problem for the composite fermions, which gives an
``explanation'' for the emergence of an energy gap. Experimentally,
one finds quite robust quantum Hall plateaux at these values of $\nu$,
up to $n\approx10$.
Another sequence of quantum Hall plateaux is found at
$\nu=\frac{n+1}{2n+1}$. Now $\nu>\frac12$, so the effective average
magnetic field $B_{\rm eff}$ is negative, i.e., points in the
direction opposite to the direction of the original $B$. The
composite fermion still forms IQH states, with $n+1$ filled Landau
levels ($\nu_{\rm CF}=-(n+1)$). Together, the two series of FQH
plateaux at $\nu=\frac n{2n+1}$ and $\nu=\frac{n+1}{2n+1}$ are called
the Jain sequences of plateaux.
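Both sequences can be checked directly from $\nu_{\rm CF}^{-1}=\nu^{-1}-2$:
\begin{equation}
\nu=\frac{n}{2n+1}:\quad \nu_{\rm CF}^{-1}=\frac{2n+1}{n}-2=\frac1n\,;
\qquad
\nu=\frac{n+1}{2n+1}:\quad \nu_{\rm CF}^{-1}=\frac{2n+1}{n+1}-2=-\frac1{n+1}\,,
\end{equation}
reproducing $\nu_{\rm CF}=n$ and $\nu_{\rm CF}=-(n+1)$, respectively.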
One of the most spectacular successes of the composite fermion theory
is the prediction of the nature of the $\nu=\frac12$ state (the
half-filled Landau level)~\cite{Halperin:1992mh}. At this filling
fraction, the average effective magnetic field is equal to 0, and the
composite fermion should form a gapless Fermi surface. HLR theory
thus predicts that the low-energy excitation is the fermionic
quasiparticle near the Fermi surface. There is strong experimental
evidence that this is indeed the
case~\cite{Willett:1990,Kang:1993,Goldman:1994zz}. These experiments
give the strongest evidence that the composite fermion is a real
physical object---a quasiparticle near half filling---and not just a
mathematical construct.
Despite its astounding success, the quantum field theory~(\ref{L-cf})
has been criticized on various grounds. The criticism leveled most
often against the theory~(\ref{L-cf}) is the lack of any information
about the projection to the lowest Landau level. In particular, the
energy gap predicted by the mean-field picture is $B_{\rm eff}/m$,
which for generic $\nu$ is of order $\omega_c$, but not $\Delta$. To
remedy the issue, one has to assume that the energy gap is determined
by an effective mass $m_*$, postulated to be parametrically
$B/\Delta$. In particular, $m_*$ is assumed to remain finite in the
limit $m\to0$.
In my view, there are in reality two energy scale problems. The first
problem, which I would call the ``grand problem'' of energy scale, is
to derive, from microscopic calculations, the finite value of $m_*$ in
the limit $m\to0$. The second problem, the more modest ``little
problem'' of energy scale, is to make the low-energy effective field
theory with $m_*$ consistent with the fundamental symmetries of the
original theory of electrons with a much smaller mass.
The ``grand problem'' is the one that attracts most attention. We
note here a few past attempts to address
it~\cite{Shankar:1997zz,Pasquier:1998sre,Read:1998dn}. However
important it is, it will not concern us if our ambition is limited to
capturing the low-energy phenomenology, i.e., the physics at energy
scales much smaller than $\Delta$. The effective mass $m_*$ would
appear simply as an input parameter in a low-energy effective field
theory, and we will simply postulate that such an effective mass
arises somehow as a result of the renormalization group flow from a
UV scale above $\omega_c$ to an IR scale below $\Delta$.
The ``little problem'' of energy scale is a fully low-energy question,
and it can now be solved, in principle, by using the
Newton--Cartan formalism (see, e.g.,
Refs.~\cite{Son:2013rqa,Jensen:2014aia,Geracie:2015xfa,Geracie:2015dea}).
However, the most recent progress in the physics of the half-filled
Landau level has arrived from an attempt to address another problem,
usually regarded as less important and subordinate to the energy scale
problem: the lack of particle--hole (PH) symmetry.
\section{The problem of particle--hole symmetry}
A system of nonrelativistic particles interacting through a two-body
interaction has two discrete symmetries: parity, or spatial reflection
($x\to x$, $y\to -y$), which we denote as $P$, and time reversal,
which will be called $T$. In a constant uniform magnetic field both
$P$ and $T$ are broken, but $PT$ is preserved. In the lowest
Landau level limit ($\Delta\ll\omega_c$), however, the projected
Hamiltonian~(\ref{H-projected}) has an additional discrete symmetry:
the particle--hole symmetry, first considered in
Ref.~\cite{Girvin:1984zz}.
To define the particle--hole symmetry, one chooses a particular basis of
LLL one-particle states $\psi_k(x)$. This basis defines the electron
creation and annihilation operators $c_k^\+$, $c_k$. The many-body
LLL Fock space is obtained by acting with products of creation operators on
the empty Landau level $|\textrm{empty}\>$.
Particle--hole conjugation, $\Theta$, is defined as an antilinear
operator, which maps an empty Landau level to a full one:
\begin{equation}
\Theta: |\textrm{empty}\> \to |\textrm{full}\> = \prod_{k=1}^M
c_k^\+ |\textrm{empty}\>,
\end{equation}
where $M$ is the number of orbitals on the LLL. It also maps a
creation operator to an annihilation operator, and vice versa:
\begin{equation}
\Theta: c_k^\+ \leftrightarrow c_k .
\end{equation}
One can show that the projected Hamiltonian maps to itself, up to the
addition of a chemical potential term,
\begin{equation}
\Theta: H_{\rm LLL} \to H_{\rm LLL} - \mu_0 \sum_k c^\+_k c_k ,
\end{equation}
where $\mu_0$ depends on the interaction $V$. This means that for
$\mu=\mu_0/2$, the Hamiltonian $H_{\rm LLL}-\mu N$ maps to itself: at
this chemical potential the Hamiltonian is particle--hole symmetric.
Under particle--hole conjugation the filling factor $\nu$ transforms
as
\begin{equation}
\nu \to 1-\nu .
\end{equation}
In particular $\nu=1/2$ maps to itself under PH conjugation: the
half-filled Landau level is at the same time half empty. Moreover,
$\nu=\frac n{2n+1}$ maps to $\nu=\frac{n+1}{2n+1}$: the two Jain
sequences of quantum Hall plateaux form pairs that map to each other
under PH conjugation: $\nu=1/3$ and $\nu=2/3$, $\nu=2/5$ and
$\nu=3/5$, etc.
Let us now ask what the discrete symmetries of the HLR field
theory~(\ref{L-cf}) are. It is easy to see that there is only one
such symmetry, $PT$. The Chern--Simons theory does not have any
discrete symmetry that can be associated with particle--hole
conjugation. This is reflected in the asymmetric treatment of the
quantum Hall plateaux: the $\nu=\frac n{2n+1}$ state is described by an
integer quantum Hall state in which the CFs fill $n$ Landau levels,
while its PH conjugate at $\nu=\frac{n+1}{2n+1}$ corresponds to $n+1$
filled Landau levels.
The Fermi liquid state at $\nu=1/2$ presents a particularly baffling
problem for particle--hole symmetry. Naively, one expects PH
conjugation to map a filled state to an empty state and vice versa.
This would mean that the Fermi disk of the CFs, describing the Fermi
liquid state, maps to a hollow disk in momentum states: the states
with momentum $|\k|>k_F$ are filled, and those with $|\k|<k_F$ are
empty. This is obviously silly.
The lack of particle--hole symmetry has been recognized as a problem
of the HLR theory from early on. One aspect of this problem was
noticed in 1997 by Kivelson, Lee, Krotov, and
Gan~\cite{Kivelson:1997}. When disorder is statistically
particle--hole symmetric, particle--hole symmetry implies that at half
filling $\sigma_{xy}$ is exactly $\frac12(e^2/h)$, while the HLR theory,
in the random phase approximation, implies that $\rho_{xy}=2(h/e^2)$.
These two results disagree with each other when the longitudinal
conductivity $\sigma_{xx}$ (or equivalently, the longitudinal
resistivity $\rho_{xx}$) is nonzero. From time to time, the issue of
particle--hole symmetry has been brought up in the literature (for
example, it was crucial for the discovery of the anti-Pfaffian
state~\cite{Levin:2007,SSLee:2007}), but no conclusive resolution of
the problem of the lack of PH symmetry in the HLR theory has been
found.
What makes the PH symmetry problem seem hard is that PH symmetry is
not a symmetry of nonrelativistic electrons in a magnetic field [the
theory~(\ref{H-TOE})]. It only emerges as a symmetry after taking
the lowest Landau level limit [theory~(\ref{H-projected})]. The
particle--hole symmetry of the LLL is not realized as a local operation
acting on fields.
It was commonly thought that the PH symmetry problem is part of the
energy scale problem: PH symmetry becomes exact in the LLL limit,
where the energy scale problem is sharpest. But in fact, the PH
symmetry problem is easier than the ``grand problem'' of energy scale:
PH symmetry is a question about the low-energy effective field theory,
while the CF effective mass, the object of concern of the energy scale
problem, comes mostly from energy scales above $\Delta$.
One can envision three possible scenarios for the problem of
particle--hole asymmetry of the HLR theory to resolve itself:
\begin{itemize}
\item[(i)] Despite the lack of an explicit PH symmetry, the HLR theory
has a hidden PH symmetry.
\item[(ii)] Particle--hole symmetry is spontaneously broken, and the
HLR theory describes only the low-energy excitations around one of
the two ground states.
\item[(iii)] The effective field theory describing the low-energy
excitations is different from HLR. In this theory, particle--hole
symmetry is explicitly realized.
\end{itemize}
Option (i) cannot be ruled out, but a careful diagrammatic analysis by
Kivelson et al.~\cite{Kivelson:1997} does not seem to reveal any
mechanism under which particle--hole symmetry may be hidden. How this
can be reconciled with the supposed exactness of the flux attachment
procedure is not clear, but one should remember that the HLR theory,
as applied in practice, makes an additional assumption of the mean
field Fermi liquid as the starting point. One thing is clear: if one
takes the HLR Lagrangian and declares it (after making some standard
modifications like changing the electron mass $m$ to the effective
mass $m_*$, adding Landau's interactions, etc.) to be the Lagrangian
of a low-energy effective field theory (with a cutoff much smaller
than the Fermi energy), then this effective field theory would show no
indication of particle--hole symmetry.
Option (ii) is self-consistent and was investigated by Barkeshli et
al.~\cite{Barkeshli:2015afa}. If that is the case, there are two
states at $\nu=1/2$: one corresponds to a Fermi surface of ``composite
particles'' and the other to that of ``composite holes.'' However,
there is no numerical or experimental evidence for this kind of
spontaneous particle--hole symmetry breaking. In fact, the
experimental result of Ref.~\cite{Baldwin:2014} seems to indicate, at
least naively, that the $\nu=1/2$ Fermi liquid is equally well
interpreted as being made out of ``composite particles'' or
``composite holes.'' There is now strong numerical evidence that the
$\nu=1/2$ state is particle--hole symmetric~\cite{Geraedts:2015pva}.
We will now try to make sense of option (iii).
\section{Dirac composite fermion}
There exists an alternative theory that satisfies particle--hole
symmetry but also preserves all successful phenomenological
predictions of the HLR theory. This theory is the Dirac composite
fermion theory, first proposed in Ref.~\cite{Son:2015xqa} as the
low-energy effective field theory of the half-filled Landau level.
The essence of the theory is that the composite fermion does not
transform into a ``composite hole'' under particle--hole symmetry, but
remains a composite particle. Only the momentum of the composite
fermion flips sign under particle--hole conjugation,
\begin{equation}\label{PH_CF}
\Theta: \k \to -\k.
\end{equation}
Implicitly, we assume that the Fermi disk of the composite fermion
transforms into itself (a filled disk, not a hollow disk).
Equation~(\ref{PH_CF}) is how time reversal usually works. In the
theory of the Dirac composite fermion, the CF is described by a
two-component spinor field $\psi$, which transforms under PH
conjugation following the formula usually associated with time
reversal,
\begin{equation}
\psi \to i\sigma_2\psi .
\end{equation}
There are several arguments one can put forward to argue that the
composite fermion has to be a massless Dirac particle. One argument,
or rather a hint, comes from the CF interpretation of the
Jain-sequence states. Recall that one problem with the standard CF
picture is that $\nu=\frac n{2n+1}$ corresponds to the composite
fermion filling factor $\nu_{\rm CF}=n$, while $\nu=\frac{n+1}{2n+1}$
maps to $\nu_{\rm CF}=n+1$ (ignoring the sign). On the other hand,
these two states are PH-conjugate pairs and should be described by the
same filling factor of the composite fermion in any PH-symmetric
theory. The most naive way to reconcile these different pictures is
to replace the filling factors $\nu_{\rm CF}=n$ and $\nu_{\rm CF}=n+1$
with the average value $\nu_{\rm CF}=n+\frac12$. But now we have a
problem: we want to map the FQHE in the Jain sequences to the IQHE of
the composite fermions, but is it possible to have an IQH state with
half-integer filling factor? Indeed it is, if the composite fermion
is a massless Dirac fermion. Half-integer quantization of the Hall
conductivity is a characteristic feature of the Dirac fermion,
confirmed in experiments with
graphene~\cite{Novoselov:2005kj,Zhang:2005zz}.
The second argument in favor of the Dirac nature of the CF relies on a
property of the square of the particle--hole conjugation operator
$\Theta^2$~\cite{Geraedts:2015pva}.\footnote{Also, M.~Levin and
D.~T.~Son, unpublished (2015).} It is intuitively clear that
applying particle--hole conjugation twice maps a given state to
itself, but there is a nontrivial factor of $\pm1$ that one gains by
doing so.
Consider a generic state on the LLL with $N_e$ electrons,
\begin{equation}
|\psi\> = \prod_{i=1}^{N_e} c_{k_i}^\+ \, |\textrm{empty}\>.
\end{equation}
Then under PH conjugation,
\begin{equation}
\Theta: |\psi\> \to \prod_{i=1}^{N_e} c_{k_i}^{\phantom{\dagger}} |\textrm{full}\>
= \prod_{i=1}^{N_e} c_{k_i}^{\phantom{\dagger}}
\prod_{j=1}^{M} c_j^\+ |\textrm{empty}\>.
\end{equation}
Applying $\Theta$ again one finds
\begin{equation}
\Theta^2: |\psi\> \to \prod_{i=1}^{N_e} c_{k_i}^\+ \prod_{j=1}^M c_j
|\textrm{full}\> =
\prod_{i=1}^{N_e} c_{k_i}^\+ \prod_{j=1}^M c_j^{\phantom{\dagger}}
\prod_{k=1}^M c_k^\+ |\textrm{empty}\> = (-1)^{M(M-1)/2}|\psi\>.
\end{equation}
This relationship is quite easy to interpret when $M$ is an even
number: $M=2N_{\rm CF}$. Then
\begin{equation}\label{Theta2NCF}
\Theta^2: |\psi\> \to (-1)^{N_{\rm CF}} |\psi\>.
\end{equation}
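The sign on the right-hand side follows from the parity of the exponent: with $M=2N_{\rm CF}$,
\begin{equation}
\frac{M(M-1)}2 = N_{\rm CF}\,(2N_{\rm CF}-1) \equiv N_{\rm CF} \pmod 2\,,
\end{equation}
since $2N_{\rm CF}-1$ is odd.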
This formula suggests the following interpretation: $N_{\rm CF}$ is
the number of composite fermions of the state $|\psi\>$, and each
composite fermion is associated with a factor of $-1$ under
$\Theta^2$. This $-1$ factor is natural for the Dirac fermion.
In order to have a correct $\Theta^2$, we have to identify the number
of composite fermions with half the number of orbitals on the LLL:
$N_{\rm CF}=M/2$, which is \emph{independent} of the number of
electrons $N_e$. This contradicts the intuitive picture of flux
attachment, in which the composite fermion is obtained by attaching
two units of flux quanta to an electron. On the other hand, that is
expected: in a theory that treats particles and holes in a symmetric
way, the number of composite fermions has to be in general different
from the number of electrons, otherwise it would have to be equal to
the number of holes as well.
The tentative theory of the composite fermion can be written as follows:
\begin{equation}\label{L-dual}
\mathcal L = i\bar\psi \gamma^\mu(\d_\mu + 2ia_\mu)\psi + \frac1{2\pi}
\epsilon^{\mu\nu\lambda}A_\mu\d_\nu a_\lambda
+ \frac1{8\pi} \epsilon^{\mu\nu\lambda}A_\mu \d_\nu A_\lambda.
\end{equation}
(with a speed of light which is determined by microscopic physics).
There are two differences between (\ref{L-dual}) and (\ref{L-cf}).
One is the Dirac nature of the composite fermion $\psi$. The other is
the absence of the Chern--Simons term $a\d a$ in the Lagrangian: such a
term (and likewise a mass term for $\psi$), if present, would disallow
any discrete symmetry that could be identified with particle--hole
symmetry. Interestingly, each such modification to the HLR theory
would shift the filling factors of the Jain-sequence plateaux, but
together the shifts cancel each other and the Jain sequences remain
unchanged, as shown below.
How should one visualize the composite fermion? In
Ref.~\cite{Son:2015xqa} it was suggested that the CF is better
interpreted as a type of fermionic vortex, arising from a fermionic
particle--vortex duality. Particle--vortex duality is well known for
bosons~\cite{Peskin:1977kp,Dasgupta:1981zz}, but we are dealing here
with a new duality for fermions. The salient feature of
particle--vortex duality is that it switches the roles of particle
number and magnetic field. Differentiating~(\ref{L-dual}) with
respect to $A_0$, one obtains the electron density
\begin{equation}\label{rhob}
\rho = \frac{\delta S}{\delta A_0} = \frac b{2\pi} +\frac B{4\pi} \,.
\end{equation}
On the other hand, the equation of motion obtained by differentiating
the action with respect to $a_0$ is
\begin{equation}\label{rhoCFB}
\bar\psi \gamma^0 \psi = \frac B{4\pi}\,,
\end{equation}
i.e., the CF density is set by the external magnetic field.
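Equation~(\ref{rhoCFB}) can be obtained by integrating the mixed Chern--Simons term by parts, rewriting it as $\frac1{2\pi}\epsilon^{\mu\nu\lambda}a_\mu\d_\nu A_\lambda$, and then varying the $a_0$-dependent terms of the action:
\begin{equation}
\frac{\delta S}{\delta a_0} = -2\bar\psi\gamma^0\psi + \frac B{2\pi} = 0\,,
\end{equation}
where the factor of 2 in the first term comes from the coupling $2ia_\mu$ in~(\ref{L-dual}).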
If one defines the filling factors of the electron and of the composite
fermion as
\begin{equation}
  \nu = \frac{2\pi\rho} B\,,\qquad \nu_{\rm CF} = -\frac{2\pi\rho_{\rm CF}}{2b}\,,
\end{equation}
where $-2b$ is the effective magnetic field seen by the composite
fermion [which, according to Eq.~(\ref{L-dual}), couples to $a_\mu$
with charge $-2$],
then from Eqs.~(\ref{rhob}) and (\ref{rhoCFB}) we find that they are
related by
\begin{equation}
\nu_{\rm CF} = - \frac1{4(\nu-\frac12)}\,.
\end{equation}
In particular, $\nu=\frac n{2n+1}$ maps to $\nu_{\rm CF}=n+\frac12$,
which is the filling factor of an integer quantum Hall state of the
Dirac fermion.
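Explicitly, for the two Jain sequences,
\begin{equation}
\nu=\frac n{2n+1}:\ \ \nu-\frac12=-\frac1{2(2n+1)}\ \Rightarrow\
\nu_{\rm CF}=n+\frac12\,;\qquad
\nu=\frac{n+1}{2n+1}:\ \ \nu-\frac12=+\frac1{2(2n+1)}\ \Rightarrow\
\nu_{\rm CF}=-\Bigl(n+\frac12\Bigr)\,,
\end{equation}
so PH-conjugate plateaux are described by the same half-integer quantum Hall state of the Dirac composite fermion, differing only in the sign of the effective magnetic field.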
It should be emphasized that the Dirac nature of the CF does not mean
that there is a Dirac cone for the CF. The tip of the cone is at
$\k=0$ while the CF, as a low-energy mode, exists only near the Fermi
surface. The Dirac nature of the CF, strictly speaking, only means
that the fermionic quasiparticle has a Berry phase of $\pi$ around the
Fermi surface. It is easy to show that such a Berry phase follows
from Eqs.~(\ref{PH_CF}) and (\ref{Theta2NCF}). The quasiparticle
Berry phase has been identified as an important ingredient of Fermi
liquids~\cite{Haldane:2004zz}, but the possibility of such a phase for
the composite fermion in FQHE has been overlooked in the literature
until very recently.
\section{Consequences of Dirac composite fermion}
The Dirac composite fermion theory has distinct consequences, in
principle verifiable in experiments and numerical simulations.
It is numerical simulations~\cite{Geraedts:2015pva} that provide the
currently most nontrivial test of the Dirac nature of the composite
fermion. The numerical finding is the disappearance, attributable to
particle--hole symmetry, of the leading $2k_F$ singularity in certain
correlation functions.
It is well known that for a (2+1)D massless Dirac fermion, two-point
correlation functions of time-reversal-invariant operators are free
from the leading $2k_F$ singularity, a fact that originates from the
quasiparticle Berry phase of $\pi$ around the Fermi surface. In the
half-filled Landau level the role of time reversal is played by
particle--hole symmetry; therefore, to test the Berry phase one should
look for the absence of the leading $2k_F$ singularity in correlation
functions of PH-symmetric operators. The
electron density operator $\rho=\psi_e^\dagger\psi_e$ is not PH
symmetric (the deviation of the density from its mean value,
$\delta\rho=\rho-\rho_0$, flips sign under PH conjugation), but one can
easily write down more complicated operators that are PH symmetric,
for example $\delta\rho\, \nabla^2 \delta\rho$. In
Ref.~\cite{Geraedts:2015pva} the leading $2k_F$ singularity in the
correlation function of such an operator was shown to disappear when
PH symmetry is made exact (and to reappear when PH symmetry is
violated), confirming the Dirac nature of the composite fermion.
There are also predictions about transport that are, strictly
speaking, consequences of particle--hole symmetry. If one introduces
the conductivities $\sigma_{xx}$, $\sigma_{xy}$, and the
thermoelectric coefficients $\alpha_{xx}$ and $\alpha_{xy}$,
\begin{equation}
\mathbf{j} = \sigma_{xx} \mathbf{E} + \sigma_{xy}\mathbf{E}\times
\mathbf{\hat{z}} + \alpha_{xx} \bm{\nabla} T
+ \alpha_{xy} \bm{\nabla} T \times \mathbf{\hat{z}},
\end{equation}
then, at exact half filling, particle--hole symmetry
implies~\cite{Son:2015xqa,Potter:2015cdn}
\begin{equation}
\sigma_{xy} = \frac12 \frac{e^2}h\,,\qquad \alpha_{xx}=0.
\end{equation}
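These constraints can be understood as follows. With statistically PH-symmetric disorder, particle--hole conjugation relates the Hall conductivities at conjugate fillings,
\begin{equation}
\sigma_{xy}(\nu) + \sigma_{xy}(1-\nu) = \frac{e^2}h\,,
\end{equation}
so at the self-conjugate point $\nu=\frac12$ one finds $\sigma_{xy}=\frac12\frac{e^2}h$. A similar argument shows that $\alpha_{xx}$ is odd under PH conjugation and must therefore vanish at exact half filling.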
A manifestly particle--hole symmetric theory like the Dirac composite
fermion theory reproduces these results automatically. On the other
hand, the HLR theory, supplemented by the usual approximations to make
it suitable for computation (e.g., the random phase approximation)
would, in general, break both
relationships~\cite{Kivelson:1997,Potter:2015cdn}.
\section{Conclusion}
We have presented arguments in favor of the Dirac nature of the
composite fermion. The Dirac composite fermion provides a very simple
solution to a number of puzzles that have been plaguing the quantum
field theory of the composite fermion for a long time.
A simple demonstration that the Dirac composite fermion emerges from
the dynamics of interacting electrons on the lowest Landau level is
still lacking. (For a recent attempt to address this question see
Ref.~\cite{Murthy:2016jnc}.) One may wonder how the flux attachment
procedure, supposed to be exact, can lead us to something so different
from Eq.~(\ref{L-cf}). The situation becomes less puzzling if one
remembers that the Lagrangian (\ref{L-dual}) is a low-energy effective
Lagrangian, while the action of the type~(\ref{L-cf}) obtained from
the exact flux attachment procedure contains information about all
energy scales. One may also be bothered by the emergence of a Dirac
fermion out of the initial nonrelativistic fermion. Here again, the
situation is not as strange as it sounds: what is important is not
really the nonrelativistic Hamiltonian~(\ref{H-TOE}), but the LLL
projected Hamiltonian~(\ref{H-projected}), which applies equally well
if the original fermion is a Dirac fermion (e.g., the gapless mode on
the surface of a topological insulator). In this case the duality is
one between two theories, both involving Dirac fermions.
Going beyond quantum Hall physics, a very interesting possibility is
that the duality between the free Dirac fermion (the electron theory)
and Dirac fermion interacting with a gauge field is valid even at zero
magnetic field. Such a duality would have consequences for
interacting surfaces of topological insulators: for example, the
so-called T-Pfaffian
state~\cite{Wang:2013uky,Bonderson:2013pla,Chen:2013jha,Metlitski:2015bpa},
otherwise difficult to derive, could be understood simply from the
dual picture (the quantum Hall analog of this state is the state
called PH-Pfaffian in Ref.~\cite{Son:2015xqa} and involves BCS pairing
of Dirac composite fermions in the $s$-wave channel). Much effort has
been made to derive such a
duality~\cite{Metlitski:2015eka,WangSenthil1,WangSenthil2,Mross:2015idy,Karch:2016sxi,Seiberg:2016gmd}.
In one approach, one discretizes the system in one spatial dimension
and utilizes (1+1)D bosonization~\cite{Mross:2015idy}. In another
approach, the duality between the two fermion theories appears as one
particular case of a whole web of (2+1)D dualities which can be
derived from an elemental duality between a bosonic field theory and a
fermionic field theory~\cite{Karch:2016sxi,Seiberg:2016gmd},
establishing a connection with an extensive literature on duality
between (2+1)D Chern--Simons theories (see, e.g.,
\cite{Aharony:2015mjs}). The latter approach, in particular,
clarifies issues related to the parity anomaly matching. It is
unclear, however, if a single two-component fermion coupled to a
dynamical gauge field is stable with respect to spontaneous symmetry
breaking. Numerical efforts are required to settle this question.
There is a claim that QED$_3$ does not spontaneously generate a gap
for two flavors of two-component fermions~\cite{Karthik:2015sgq}, in
contrast to the general belief. The situation with one flavor is not
clear.
According to P.~Freund~\cite{Freund:2015nts}, Nambu was fascinated
with the philosophy of science of Mitsuo Taketani, according to which
scientific development passes through three stages: Phenomenon,
Substance, and Essence. In the story that we have just surveyed, I
guess Nambu would pick the FQH plateaux as the Phenomenon and the
composite fermion as the Substance. Are we catching, in the fermionic
particle--vortex duality and other field-theoretic dualities in
(2+1)D, a glimpse of the Essence?
\medskip
\noindent
\textbf{Acknowledgments}
\smallskip
\noindent
This work is supported, in part, by U.S.\ DOE
grant No.\ DE-FG02-13ER41958, ARO MURI grant No.\ 63834-PH-MUR, and a
Simons Investigator Grant from the Simons Foundation. Additional
support was provided by the Chicago MRSEC, which is funded by NSF
through grant DMR-1420709.
\section{Introduction}
One of the outstanding open problems in birational geometry is the Termination of Flips conjecture, which predicts that there are no infinite chains of
certain birational transformations (\emph{flips}). It is an insight due to Shokurov that this global problem can be reduced to conjectural properties of
invariants of singularities. A typical such property is the Ascending Chain Condition (ACC, for short) which predicts that in a fixed dimension, and with suitable restrictions on
the coefficients of the divisors involved, there are no infinite strictly increasing sequences of such invariants. There are two types of invariants that are important in this setting:
the log canonical thresholds and the minimal log discrepancies. As a rule, log canonical thresholds are easier to study and they are related to many other points of view on singularities.
In particular, Shokurov's ACC conjecture for log canonical thresholds has been proved (see \cite{dFEM2} for the smooth case, \cite{dFEM1} for the case of varieties with bounded singularities,
and \cite{HMX} for the general case). However, while the ACC property in this setting implies the termination of certain families of flips in an inductive setting (see \cite{Birkar} for the precise statement),
it does not allow one to prove any termination result in arbitrary dimension. It turns out that in order to do this one has to work with minimal log discrepancies (mlds, for short). In fact, Shokurov showed in
\cite{Shokurov} that two conjectural properties of mlds (the Semicontinuity conjecture and the ACC conjecture) imply termination of flips. The Semicontinuity conjecture is believed to be the easier of the two problems.
In fact, this is known in some cases (see \cite{EMY} for the case of smooth varieties and \cite{Nakamura} for the case of varieties with quotient singularities).
In this paper we propose an approach towards
Shokurov's ACC conjecture when we only consider mlds on a fixed germ of variety $(X,x)$. In particular, this would cover the case of smooth ambient varieties.
Before stating our main results, let us introduce some notation. We always assume that we work over an algebraically closed field of characteristic $0$.
Let $X$ be a variety and $x\in X$ a (closed) point. We work with $\RR$-ideals $\fra$, that is, formal products $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$, where
the $\lambda_j$ are nonnegative real numbers and the $\fra_j$ are nonzero coherent ideals in $\cO_X$. We say that $\fra$ \emph{has exponents in} a set $I\subseteq\RR_{\geq 0}$ if $\lambda_j\in I$ for all $j$. We assume that $X$ is $\QQ$-Gorenstein
and denote by $\mld_x(X,\fra)$ the minimal log discrepancy of $(X,\fra)$ at $x$ (see \S 2 for the definition). This is a nonnegative real number if and only if $(X,\fra)$ is log canonical in some neighborhood of $x$;
otherwise, if $\dim(X)\geq 2$, then $\mld_x(X,\fra)=-\infty$.
In this paper we consider the following boundedness conjecture for mlds on a fixed germ.
\begin{conjecture}\label{conj_main}
Let $X$ be a klt variety and let $x\in X$. Given a finite subset $I\subset \RR_{\geq 0}$, there is a positive integer $\ell$ ${\rm (}$depending on $(X,x)$ and $I$${\rm )}$ such that
for every $\RR$-ideal $\fra$ on $X$ with exponents in $I$, there is a divisor $E$
that computes $\mld_x(X,\fra)$ and such that $k_E\leq \ell$.
\end{conjecture}
We use the theory of generic limits of ideals
developed in \cite{dFM}, \cite{Kollar1}, and \cite{dFEM1} to show the weaker statement in which we bound the order along $E$ of the ideal defining
the point $x\in X$ (we expect this result to be useful for attacking the
above conjecture). More precisely, we show the following:
\begin{theorem}\label{thm_bound_ord_point}
Let $X$ be a klt variety and $x\in X$ a point defined by the ideal $\frm_x$. For every finite subset $I\subset \RR_{\geq 0}$, there is a positive integer $\ell$ ${\rm (}$depending on $(X,x)$ and $I$${\rm )}$ such that the following conditions hold:
\begin{enumerate}
\item[i)] For every $\RR$-ideal $\fra$ with exponents in $I$ such that $\mld_x(X,\fra)>0$ and {\bf every} divisor $E$ over $X$ that computes $\mld_x(X,\fra)$, we have
$\ord_E(\frm_x)\leq\ell$.
\item[ii)] For every $\RR$-ideal $\fra$ with exponents in $I$ such that $\mld_x(X,\fra)\leq 0$, there is {\bf some} divisor $E$ over $X$ that computes $\mld_x(X,\fra)$ and such that
$\ord_E(\frm_x)\leq\ell$.
\end{enumerate}
\end{theorem}
In a related direction, we also show that if $I$ is a finite set and $(X,x)$ is fixed, then there is a positive integer $\ell$ such that for every $\RR$-ideal $\fra$ on $X$ with exponents in $I$,
in order to check that $(X,\fra)$ is log canonical at $x$ it is enough to check that $a_E(X,\fra)\geq 0$ for all divisors $E$ with center $x$ and with $k_E\leq\ell$ (see
Proposition~\ref{prop_LC}). This result admits a nice consequence concerning the characterization of log canonical pairs in terms of jet schemes
(see Proposition~\ref{consequence_jet_schemes}).
As further evidence for the conjecture, we handle the two-dimensional case and the case of monomial ideals.
\begin{theorem}\label{dim2}
Conjecture~\ref{conj_main} holds if $\dim(X)=2$.
\end{theorem}
\begin{theorem}\label{monomial_case}
Conjecture~\ref{conj_main} holds if $(X,x)=(\AAA^n,0)$ and $\fra$ is a monomial $\RR$-ideal.
\end{theorem}
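To make the monomial setting concrete: for monomial $\RR$-ideals on $\AAA^n$, the minimal log discrepancy at the origin is computed by monomial (toric) valuations $v\in\ZZ_{>0}^n$, with $a_v=\sum_i v_i-\sum_j\lambda_j\min_{m}\langle v,m\rangle$, the inner minimum running over the exponent vectors $m$ of the generators of $\fra_j$. The following sketch (the function names are ours, and the brute-force search over a box is only a heuristic, valid when the minimum is attained at small $v$; in general one needs an a priori bound) illustrates the computation.

```python
from itertools import product

def ord_v(v, gens):
    # Order of a monomial ideal along the monomial valuation v:
    # the minimum of <v, m> over the exponent vectors m of the generators.
    return min(sum(vi * mi for vi, mi in zip(v, m)) for m in gens)

def mld_origin(r_ideal, n, box=10):
    # Brute-force sketch of mld_0(A^n, prod_j a_j^(lambda_j)) for monomial
    # ideals: search monomial valuations v in {1,...,box}^n and minimize
    # a_v = k_v + 1 - ord_v(a), where k_v = sum(v) - 1.
    # Assumes the infimum is attained inside the box.
    best = float("inf")
    for v in product(range(1, box + 1), repeat=n):
        a_v = sum(v) - sum(lam * ord_v(v, gens) for gens, lam in r_ideal)
        best = min(best, a_v)
    return best

# Example: a = (x^2, y^3)^(1/2) on A^2; the minimum is at v = (1, 1),
# where a_v = 2 - 0.5 * min(2, 3) = 1.
print(mld_origin([([(2, 0), (0, 3)], 0.5)], n=2))  # prints 1.0
```

In this toy case one checks by hand that $v=(1,1)$ realizes the minimum, in agreement with the search.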
Our interest in the above conjecture is motivated by the following connection with
Shokurov's ACC conjecture for minimal log discrepancies. Recall that a subset $I\subseteq\RR$ \emph{satisfies ACC} (\emph{DCC}) if
it contains no infinite strictly increasing (resp., decreasing) sequences.
\begin{theorem}\label{thm_acc}
Let $X$ be a klt variety and $x\in X$ be a point such that the assertion in Conjecture~\ref{conj_main} holds for $(X,x)$ and for every finite subset $I\subset \RR_{\geq 0}$.
For every fixed DCC set
$J\subset\RR_{\geq 0}$, the set
$$\{\mld_x(X,\fra)\mid \fra\,\,\text{is}\,\,\text{an}\,\,\RR\text{-ideal on}\,\,X\,\,\text{with exponents in}\,\,J, \,(X,\fra)\,\,\text{is log canonical around}\,\,x\}$$
satisfies ACC.
\end{theorem}
We show that Conjecture~\ref{conj_main} is equivalent to two other conjectures on minimal log discrepancies. One of these is (a uniform version of) the Ideal-adic Semicontinuity conjecture for mlds
(see Conjecture~\ref{ideal_adic} for the precise formulation). This has been studied by Kawakita and various partial answers have been obtained in \cite{Kawakita4}, \cite{Kawakita3}, and \cite{Kawakita1}. The other conjecture is the Generic Limit conjecture on minimal log discrepancies, also studied by Kawakita in \cite{Kawakita2} (see Conjecture~\ref{conjecture:generic_limit}).
\begin{theorem}\label{thm_equivalence}
Conjectures~\ref{conj_main}, \ref{conjecture:generic_limit}, and \ref{ideal_adic} are equivalent.
\end{theorem}
The paper is organized as follows. In \S 2 we recall the definition and some basic facts related to minimal log discrepancies. The following section
is devoted to a review of generic limits and to the proof of Theorem~\ref{thm_bound_ord_point}. In \S 4 and \S 5 we prove Theorems~\ref{dim2} and \ref{monomial_case},
respectively. In \S 6 we prove
Theorem~\ref{thm_acc} and in \S 7 we prove Theorem~\ref{thm_equivalence}.
\subsection*{Acknowledgments}
We would like to thank Dale Cutkosky, Atsushi Ito, Mattias Jonsson, Masayuki Kawakita, Pierre Milman, and Michael Temkin for some useful discussions in connection with this work.
We are especially indebted to Masayuki Kawakita for pointing out an error in an earlier version of this paper.
It is a pleasure to dedicate this paper to Lawrence Ein, on the occasion of his sixtieth birthday. Lawrence's work has had a profound influence on the understanding of singularities of algebraic varieties and their
role in geometry. The first author, in particular, was introduced to this area through their conversations and collaboration. He would like to express his thanks and admiration.
\section{Minimal log discrepancies: definition and basic facts}
In this section we review the definition of minimal log discrepancies and set up the notation that we will use later in the paper.
For more details and for the proofs of some of the facts that we state, we refer to \cite{Ambro}.
We work over an algebraically closed ground field, of characteristic $0$. Let $X$ be a variety (always assumed to be reduced and irreducible).
A \emph{divisor over} $X$ is a prime divisor $E$ on some normal variety $Y$, proper and birational over $X$. Such a divisor defines a discrete
valuation $\ord_E$ of the function field of $X$ and we identify two divisors if they give the same valuation. The image of $E$ on $X$ is the \emph{center} of $E$ on $X$
and it is denoted by $c_X(E)$. For a nonzero coherent ideal sheaf $\fra$ on $X$, one defines $\ord_E(\fra)$ as follows. If $E$ is a prime divisor on $Y$ and $t$ is a uniformizer of the DVR $\cO_{Y,E}$,
then we can write $\fra\cdot\cO_{Y,E}=(t^e)$ for some nonnegative integer $e$ and $\ord_E(\fra):=e$. Note that $\ord_E(\fra)>0$ if and only if $c_X(E)\subseteq\cosupp(\fra)$, where
$\cosupp(\fra)$ is the support of $\cO_X/\fra$.
Let $X$ be a normal variety. One says that $X$ is $\QQ$-Gorenstein if the canonical divisor $K_X$ is $\QQ$-Cartier. In this case, for every
proper, birational morphism $f\colon Y\to X$, with $Y$ normal, we consider the discrepancy divisor $K_{Y/X}$. If $E$ is a divisor over $X$ that appears
as a prime divisor on $Y$, then we denote by $k_E$ the coefficient of $E$ in $K_{Y/X}$ (this is independent of the choice of model $Y$).
Recall that an $\RR$-ideal on $X$ is a formal product $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$, where each $\fra_j$ is a nonzero coherent ideal sheaf on $X$ and each $\lambda_j$ is a nonnegative real number.
Given such $\fra$ and a divisor $E$ over $X$, we put
$$\ord_E(\fra):=\sum_{j=1}^r\lambda_j\cdot\ord_E(\fra_j).$$
If $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$ and $\frb=\prod_{i=1}^s\frb_i^{\mu_i}$ are two $\RR$-ideals and $\delta$ is a positive real number,
then we define the ideals
$$\fra\cdot\frb:=\prod_{j=1}^r\fra_j^{\lambda_j}\cdot\prod_{i=1}^s\frb_i^{\mu_i}$$
and
$$\fra^{\delta}:=\prod_{j=1}^r\fra_j^{\delta\lambda_j}.$$
It is clear that in this case, if $E$ is a divisor over $X$, then $\ord_E(\fra\cdot \frb)=\ord_E(\fra)+\ord_E(\frb)$
and $\ord_E(\fra^{\delta})=\delta\cdot\ord_E(\fra)$.
Suppose now that
$X$ is normal and $\QQ$-Gorenstein and $\fra$ is an $\RR$-ideal on $X$. For every
divisor $E$ over $X$, the \emph{log discrepancy} of $E$ with respect to $(X,\fra)$ is
$$a_E(X,\fra):=k_E+1-\ord_E(\fra).$$
The pair $(X,\fra)$ is \emph{log canonical} (\emph{klt}) if and only if
$a_E(X,\fra)\geq 0$ (respectively, $>0$) for every divisor $E$ over $X$. When $\fra=\cO_X$, one simply says that
$X$ is log canonical (respectively, klt).
Consider a pair $(X,\fra)$, with $X$ a normal, $\QQ$-Gorenstein variety and $\fra$ an $\RR$-ideal on $X$.
For every (closed) point $x\in X$, the \emph{minimal log discrepancy} of $(X,\fra)$ is given by
$$\mld_x(X,\fra):=\inf\{a_E(X,\fra)\mid E\,\,\text{is a divisor over}\, X\,\text{with}\,c_X(E)=x\}.$$
It is a basic fact that $\mld_x(X,\fra)\geq 0$ if and only if $(X,\fra)$ is log canonical in a neighborhood of $x$.
Moreover, if $\mld_x(X,\fra)<0$ and $\dim(X)\geq 2$, then $\mld_x(X,\fra)=-\infty$.
One can also show that if $\mld_x(X,\fra)\geq 0$, then the infimum in the definition is in fact a minimum.
Under this assumption, we say that a divisor $E$ over $X$ \emph{computes} $\mld_x(X,\fra)$ if $c_X(E)=x$
and $a_E(X,\fra)=\mld_x(X,\fra)$. When $\mld_x(X,\fra)<0$, we will say that $E$ computes $\mld_x(X,\fra)$
if $c_X(E)=x$ and $a_E(X,\fra)<0$.
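A basic example, standard and stated here without proof: for $X=\AAA^n$ smooth and $\fra=\cO_X$, the exceptional divisor $E$ of the blow-up of $X$ at $x$ has $k_E=n-1$, hence $a_E(X,\cO_X)=n$; in fact no divisor with center $x$ has smaller log discrepancy, so
$$\mld_x(\AAA^n,\cO_{\AAA^n})=n,$$
and $E$ computes this minimal log discrepancy.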
Recall that if $\fra$ is a nonzero ideal on $X$, then a \textit{log resolution} of $(X,\fra)$ is a proper, birational
morphism $\pi\colon Y\to X$ such that $Y$ is a smooth variety, the exceptional locus ${\rm Exc}(\pi)$ is a divisor,
$\fra\cdot\cO_Y=\cO_Y(-F)$ for some effective divisor
$F$ on $Y$, and $F+{\rm Exc}(\pi)$ has simple normal crossings.
Since we are in characteristic $0$, log resolutions exist by Hironaka's theorem.
It is a basic result that if $X$ is a normal, $\QQ$-Gorenstein
variety, $x\in X$, and $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$ is an $\RR$-ideal on $X$, then for every log resolution $\pi\colon Y\to X$
of $(X,\frm_x\cdot\prod_{j=1}^r\fra_j)$, there is a divisor $E$ on $Y$ which computes $\mld_x(X,\fra)$.
\begin{proposition}\label{prop1}
Let $X$ be a normal, $\QQ$-Gorenstein variety, $\fra$ an $\RR$-ideal on $X$, and $x\in X$ a point defined by $\frm_x$.
If $\mld_x(X,\fra)>0$, then there is $\delta>0$ such that we have $\mld_x(X,\fra\cdot\frm_x^{\delta})=0$.
\end{proposition}
\begin{proof}
Let $\pi\colon Y\to X$ be a log resolution of $(X,\frm_x\cdot\prod_{j=1}^r\fra_j)$, where $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$.
We see that we may take
$$\delta=\min\left\{\frac{a_E(X,\fra)}{\ord_E(\frm_x)}\ \bigg| \ E\,\,\text{divisor on}\,\,Y\,\,\text{with}\,\,c_X(E)=x\right\}.$$
\end{proof}
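As a toy illustration of this formula (using the well-known toric computations on $\AAA^2$, which we do not rederive here): for $X=\AAA^2$, $x=0$, and $\fra=\cO_X$, every monomial divisor $E_v$ with $v\in\ZZ_{>0}^2$ satisfies $a_{E_v}(X,\cO_X)=v_1+v_2$ and $\ord_{E_v}(\frm_x)=\min(v_1,v_2)$, so
$$\delta=\min_{v\in\ZZ_{>0}^2}\frac{v_1+v_2}{\min(v_1,v_2)}=2,$$
attained at $v=(1,1)$, and indeed
$$\mld_0\big(\AAA^2,\frm_0^2\big)=\min_{v\in\ZZ_{>0}^2}\big(v_1+v_2-2\min(v_1,v_2)\big)=0.$$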
In what follows we will also make use of the notion of log canonical threshold. Suppose that $X$ is a log canonical variety and $x\in X$.
If $\fra$ is an $\RR$-ideal on $X$, then the \emph{log canonical threshold} of $(X,\fra)$ at $x$ is given by
$$\lct_x(X, \fra):=\inf\left\{\frac{k_E+1}{\ord_E(\fra)}\ \bigg| \ E\,\,\text{divisor over}\,\,X\,\,\text{with}\,\, x\in c_X(E)\right\}.$$
In fact, if $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$ and $\pi\colon Y\to X$ is a log resolution of $(X,\prod_{j=1}^r\fra_j)$, then
there is a divisor $E$ on $Y$ that computes $\lct_x(X,\fra)$, that is, $\lct_x(X,\fra)=(k_E+1)/\ord_E(\fra)$ and $x\in c_X(E)$.
Note that we have $\mld_x(X,\fra)\geq 0$ if and only if $\lct_x(X,\fra)\geq 1$.
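For monomial ideals the log canonical threshold is explicit: by Howald's theorem (which we invoke here only as an illustration), $\lct_0(\AAA^n,\fra)$ is the largest $c>0$ such that $(1,\ldots,1)\in c\cdot P(\fra)$, where $P(\fra)$ is the Newton polyhedron of $\fra$. For instance,
$$\lct_0\big(\AAA^2,(x^2,y^3)\big)=\frac12+\frac13=\frac56<1,$$
so $(\AAA^2,(x^2,y^3))$ is not log canonical at $0$ and $\mld_0(\AAA^2,(x^2,y^3))=-\infty$, while the rescaled $\RR$-ideal $(x^2,y^3)^{5/6}$ has log canonical threshold exactly $1$, by the scaling property $\lct_x(X,\fra^{\delta})=\delta^{-1}\cdot\lct_x(X,\fra)$.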
We collect in the next proposition a few well-known properties of minimal log discrepancies and log canonical thresholds.
The proof is straightforward and we omit it.
\begin{proposition}\label{general_properties}
Let $X$ be a log canonical variety and let $x\in X$ be defined by $\frm_x$. If $\fra_1,\ldots,\fra_r,\frb_1,\ldots,\frb_r$ are nonzero ideals on $X$
and $\lambda_1,\ldots,\lambda_r,\mu_1,\ldots,\mu_r$ are nonnegative real numbers, then the following hold:
\begin{enumerate}
\item[i)] If $\fra_j\subseteq \frb_j$ for every $j$, then
$$\mld_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})\leq \mld_x(X,\frb_1^{\lambda_1}\dotsm\frb_r^{\lambda_r})\quad\text{and}\quad \lct_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})\leq
\lct_x(X,\frb_1^{\lambda_1}\dotsm\frb_r^{\lambda_r}).$$
\item[ii)] If $\lambda_j\leq\mu_j$ for every $j$, then
$$\mld_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})\geq\mld_x(X,\fra_1^{\mu_1}\dotsm\fra_r^{\mu_r})
\quad\text{and}\quad \lct_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})\geq \lct_x(X,\fra_1^{\mu_1}\dotsm\fra_r^{\mu_r}).$$
\item[iii)] For every $\delta>0$, we have
$$\lct_x(X,\fra_1^{\delta\lambda_1}\dotsm\fra_r^{\delta\lambda_r})=\delta^{-1}\cdot \lct_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r}).$$
\item[iv)] If $E$ is a divisor over $X$ with $c_X(E)=x$ and $E$ computes $\mld_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})$
${\rm (}$resp., $\lct_x(X,\fra_1^{\lambda_1}\dotsm\fra_r^{\lambda_r})$${\rm )}$ and if $d$ is a positive integer such that
$d\cdot \ord_E(\frm_x)\geq\ord_E(\fra_j)$ for all $j$, then $E$ computes
$\mld_x\big(X,\prod_{j=1}^r(\fra_j+\frm_x^d)^{\lambda_j}\big)$ and this is equal to $\mld_x\big(X,\prod_{j=1}^r\fra_j^{\lambda_j}\big)$
${\rm (}$resp., $E$ computes $\lct_x\big(X,\prod_{j=1}^r(\fra_j+\frm_x^d)^{\lambda_j}\big)$ and this is equal to $\lct_x\big(X,\prod_{j=1}^r\fra_j^{\lambda_j}\big)$${\rm )}$.
\end{enumerate}
\end{proposition}
In the next section we will need to work in a more general setting than the one described above, in which $X$ is allowed to be a normal, excellent,
$\QQ$-Gorenstein scheme of characteristic $0$ (that is, all the residue fields of $X$ have characteristic $0$). All the above definitions extend to this setting.
For details, in particular for the precise definitions of $K_X$ and $K_{Y/X}$ in this framework, we refer to \cite[Appendix A]{dFEM1}.
\section{Generic limits: bounding the order of the ideal of the point} \label{section:generic_limit}
Our goal in this section is to prove Theorem~\ref{thm_bound_ord_point}.
The proof uses generic limits of sequences of ideals. Such a construction based on nonstandard methods was given in \cite{dFM} and a different one,
with the same properties but based on
sequences of generic points was later given in \cite{Kollar1}. In what follows we simply recall the basic properties of such a construction, following \cite{dFEM1}.
Let $X$ be a klt variety over $k$ and $x\in X$ a closed point. Given a
positive integer $r$ and $r$
sequences of coherent sheaves of ideals $(\fra^{(i)}_j)_{i\geq 1}$ on $X$ for $1\leq j\leq r$,
we get an affine klt scheme $\widetilde{X}$, a closed point $\widetilde{x}\in\widetilde{X}$, and $r$ ideals $\widetilde{\fra}_1,\ldots,\widetilde{\fra}_r$ on $\widetilde{X}$
(note that some $\widetilde{\fra}_j$ may be zero).
In \cite{dFEM1} one allows the variety $X$ to vary as well; since we assume that this is not the case, it is easy
to describe $\widetilde{X}$. If some affine neighborhood of $x$ in $X$ is defined in some $\AAA_k^N$ by $h_1,\ldots,h_s$, then
$\widetilde{X}=\Spec (K\llbracket x_1,\ldots,x_N\rrbracket/(h_1,\ldots,h_s))$ for some algebraically closed field extension $K$ of $k$,
and $\widetilde{x}$ is the unique closed point of $\widetilde{X}$.
If for some $j$ we have $\fra^{(i)}_j=\frm_x$ for all $i\gg 0$, then $\widetilde{\fra}_j$ is the ideal $\frm_{\widetilde{x}}$ defining $\widetilde{x}$.
We collect in the next proposition some basic properties of this construction.
\begin{proposition}\label{properties_generic_limit}
With the above notation, the following hold:
\begin{enumerate}
\item[i)] If $\widetilde{\fra}_j=0$, then for every $q$, there are infinitely many $i$ such that $\fra_j^{(i)}\subseteq\frm_x^q$.
\item[ii)] For every $d$, there is an infinite subset $\Lambda=\Lambda_d\subset\ZZ_{>0}$ such that for every $i\in \Lambda$ and for every $\lambda_1,\ldots,\lambda_r\in\RR_{\geq 0}$, we have
$$\lct_{\widetilde{x}}\big(\widetilde{X},\prod_{j=1}^r(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)^{\lambda_j}\big)=\lct_x\big(X,\prod_{j=1}^r(\fra_j^{(i)}+\frm_x^d)^{\lambda_j}\big).$$
\item[iii)] For every $\lambda_1,\ldots,\lambda_r\in\RR_{>0}$, if we consider the $\RR$-ideals $\fra^{(i)}=\prod_{j=1}^r(\fra^{(i)}_j)^{\lambda_j}$ and $\widetilde{\fra}=\prod_{j=1}^r\widetilde{\fra}_j^{\lambda_j}$,
then $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$ is a limit point of the set $\{\lct_x(X,\fra^{(i)})\mid i\geq 1\}$ ${\rm (}$with the convention that if some $\widetilde{\fra}_j=0$, then $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})=0$${\rm )}$.
\item[iv)] Suppose that $\widetilde{\fra}_j\neq 0$ for all $j$.
If $E$ is a divisor over $\widetilde{X}$ with $c_{\widetilde{X}}(E)=\widetilde{x}$ and such that $E$ computes $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$, then for every $d\gg 0$ there is an
infinite subset $\Lambda'=\Lambda'_d(E,\lambda_1,\ldots,\lambda_r)\subset\ZZ_{>0}$ with the following property: for every $i\in \Lambda'$ there is a divisor $E_i$ over $X$ that computes
$\lct_x\big(X,\prod_{j=1}^r(\fra^{(i)}_j+\frm_x^d)^{\lambda_j}\big)$, which is equal to $\lct_{\widetilde{x}}\big(\widetilde{X},\prod_{j=1}^r(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)^{\lambda_j}\big)$
and we have $\ord_E(\frm_{\widetilde{x}})=\ord_{E_i}(\frm_x)$ ${\rm (}$in particular, we have $c_X(E_i)=x$${\rm )}$, $k_{E_i}=k_E$, and $\ord_E(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)=\ord_{E_i}(\fra^{(i)}_j+\frm_x^d)$ for $1\leq j\leq r$.
\end{enumerate}
\end{proposition}
\begin{proof}
For the assertion in i), see \cite[Lemma~3.1]{dFEM1}. The statements in ii), iii), and iv) follow from \cite[Proposition~3.3 and Corollary~3.4]{dFEM1}. The only assertion that is not explicitly
mentioned in \emph{loc. cit.} is the one in iv) saying that $k_E=k_{E_i}$. However,
by taking $d$ such that $d\geq \ord_E(\widetilde{\fra}_j)$ for every $j$, we may assume that with $\widetilde{\frb}=\prod_{j=1}^r(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)^{\lambda_j}$, we have $\ord_E(\widetilde{\fra})=\ord_E(\widetilde{\frb})$ and
$\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\frb})=\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$ (see Proposition~\ref{general_properties}).
We now conclude that $k_E=k_{E_i}$ from the other assertions.
\end{proof}
\begin{remark}\label{remark_properties_generic_limit}
With the notation in the above proposition, we also have the following variant of the assertion in Proposition~\ref{properties_generic_limit}:
for every $d$, there is an infinite subset $\Lambda=\Lambda_d\subset\ZZ_{>0}$ such that for every $i\in \Lambda$ and for every $\lambda_1,\ldots,\lambda_r\in\RR_{\geq 0}$, we have
$$\mld_{\widetilde{x}}\big(\widetilde{X},\prod_{j=1}^r(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)^{\lambda_j}\big)=\mld_x\big(X,\prod_{j=1}^r(\fra_j^{(i)}+\frm_x^d)^{\lambda_j}\big).$$
The proof is the same as in the case of log canonical thresholds (see \cite[Proposition~3.3]{dFEM1}), the key point being that minimal log discrepancies are constant generically in a family.
More precisely, suppose that $x\in X$ is fixed, $T$ is an arbitrary variety, and $\frb_1,\ldots,\frb_r$ are ideals on $X\times T$ such that each $\frb_{j,t}=\frb_j\cdot\cO_{X\times\{t\}}$, with $1\leq j\leq r$ and
$t\in T$, is nonzero. In this case, there is an open subset $U$ of $T$ such that for each $\lambda_1,\ldots,\lambda_r\in\RR_{\geq 0}$, the minimal log discrepancy
$$\mld_x\big(X,\prod_{j=1}^r\frb_{j,t}^{\lambda_j}\big)$$
is constant for $t\in U$.
Moreover, the set $\Lambda$ can be chosen such that the ideals
$\widetilde{\fra}_1,\ldots,\widetilde{\fra}_r$ are again generic limits of the sequences
$({\fra}^{(i)}_1)_{i\in\Lambda},\ldots, ({\fra}^{(i)}_r)_{i\in\Lambda}$.
\end{remark}
We can now prove the main result of this section.
\begin{proof}[Proof of Theorem~\ref{thm_bound_ord_point}]
We argue by contradiction. If the conclusion of the theorem fails, then we can find a sequence of $\RR$-ideals $(\fra^{(i)})_{i\geq 1}$ with
exponents in $I$ such that one of the following things happens:
\noindent {\bf Case 1}. We have $\mld_x(X,\fra^{(i)})>0$ for all $i$ and for every $i$ there is a divisor $E_i$ over $X$ that
computes $\mld_x(X,\fra^{(i)})$ and such that $\lim_{i\to\infty}\ord_{E_i}(\frm_x)=\infty$.
\noindent {\bf Case 2}. We have $\mld_x(X,\fra^{(i)})=0$ for all $i$ and for every choice of divisors $E_i$ over $X$ such that
$E_i$ computes $\mld_x(X,\fra^{(i)})$, we have $\lim_{i\to\infty}\ord_{E_i}(\frm_x)=\infty$.
\noindent {\bf Case 3}. We have $\mld_x(X,\fra^{(i)})<0$ for all $i$ and for every choice of divisors $E_i$ over $X$ such that
$E_i$ computes $\mld_x(X,\fra^{(i)})$, we have $\lim_{i\to\infty}\ord_{E_i}(\frm_x)=\infty$.
Suppose that $\lambda_1,\ldots,\lambda_r$ are the nonzero elements of $I$. We may assume that for every $i$ we can write
$\fra^{(i)}=\prod_{j=1}^r(\fra^{(i)}_j)^{\lambda_j}$.
We use the generic limit construction to construct $\widetilde{x}\in \widetilde{X}$ and an ideal $\widetilde{\fra}_j$ on $\widetilde{X}$ corresponding to the
sequence $(\fra^{(i)}_j)_{i\geq 1}$ for $1\leq j\leq r$. Let $\widetilde{\fra}$ be the $\RR$-ideal on $\widetilde{X}$ given by
$\widetilde{\fra}=\prod_{j=1}^r\widetilde{\fra}_j^{\lambda_j}$. When some $\widetilde{\fra}_j$ is zero, we make the convention that $\widetilde{\fra}=0$ and
$\lct_{\widetilde{x}}(\widetilde{X}, \widetilde{\fra})=0$.
Suppose first that we are either in Case 1 or in Case 2. Note that since $\lct_x(X,\fra^{(i)})\geq 1$ for every $i$, it
follows from Proposition~\ref{properties_generic_limit} that $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})\geq 1$. In particular, each $\widetilde{\fra}_j$ is nonzero and we have $\mld_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})\geq 0$.
Let us consider first the case when $\mld_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})>0$. It follows from Proposition~\ref{prop1} that there is $\delta>0$ such that $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra}\cdot\frm_{\widetilde{x}}^{\delta})=1$.
In this case there are infinitely many $i$ such that
$\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})\geq 1$. Indeed, if this is not the case, then $\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})<1$ for all $i\gg 0$. On the other hand, it follows from Proposition~\ref{properties_generic_limit}
that $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra}\cdot\frm_{\widetilde{x}}^{\delta})=1$ is a limit point of the set $\{\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})\mid i\geq 1\}$. This contradicts the fact
that the set $\{\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})\mid i\geq 1\}$ satisfies ACC (see \cite[Theorem~4.2]{dFEM1}).
For every $i$ such that $\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})\geq 1$ and for every divisor $E_i$ that computes $\mld_x(X,\fra^{(i)})$, we obtain
$$\mld_x(X,\fra^{(i)})=k_{E_i}+1- \ord_{E_i}(\fra^{(i)})\geq \delta\cdot\ord_{E_i}(\frm_x).$$
Therefore
$$\ord_{E_i}(\frm_x)\leq \frac{\mld_x(X,\fra^{(i)})}{\delta}\leq \frac{\mld_x(X)}{\delta}$$
for infinitely many $i$, contradicting the fact that, by assumption, we can choose such divisors $E_i$ with $\lim_{i\to\infty}\ord_{E_i}(\frm_x)=\infty$.
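To spell out the displayed estimate (a routine unwinding of the definitions, recorded here only for the reader's convenience): since $\lct_x(X,\fra^{(i)}\cdot\frm_x^{\delta})\geq 1$ and $c_X(E_i)=x$, the log discrepancy $a_{E_i}(X,\fra^{(i)}\cdot\frm_x^{\delta})$ is nonnegative, that is,

```latex
% nonnegativity of the log discrepancy along E_i:
\[
k_{E_i}+1-\ord_{E_i}(\fra^{(i)})-\delta\cdot\ord_{E_i}(\frm_x)\geq 0,
\]
% rearranging, using that E_i computes mld_x(X, a^(i)) and that adding
% an ideal only decreases log discrepancies (so mld_x(X, a^(i)) <= mld_x(X)):
\[
\delta\cdot\ord_{E_i}(\frm_x)\leq k_{E_i}+1-\ord_{E_i}(\fra^{(i)})
=\mld_x(X,\fra^{(i)})\leq\mld_x(X).
\]
```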
We now consider the case when $\mld_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})=0$ (still assuming that we are either in Case 1 or in Case 2). If
$F$ is a divisor over $\widetilde{X}$ that computes $\mld_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$, then it follows from
Proposition~\ref{properties_generic_limit} that for $d\gg 0$, we can find an infinite subset
$\Gamma'=\Gamma'_d(F,\lambda_1,\ldots,\lambda_r)\subset\ZZ_{>0}$ such that the following holds. For every
$i\in \Gamma'$ we have a divisor
$F_i$ over $X$ with $k_F=k_{F_i}$, $\ord_F(\frm_{\widetilde{x}})=\ord_{F_i}(\frm_x)$ (in particular,
$c_X(F_i)=x$), and such that if we put $\frb^{(i)}=\prod_{j=1}^r(\fra^{(i)}_j+\frm_x^d)^{\lambda_j}$ and $\widetilde{\frb}=\prod_{j=1}^r(\widetilde{\fra}_j+\frm_{\widetilde{x}}^d)^{\lambda_j}$, then
$\ord_F(\widetilde{\frb})=\ord_{F_i}(\frb^{(i)})$.
By taking $d\geq \ord_F(\widetilde{\fra}_j)$ for every $j$, we may assume that $\ord_F(\widetilde{\fra})=\ord_F(\widetilde{\frb})$.
We conclude that
$$0=a_F(\widetilde{X},\widetilde{\fra})=k_F+1-\ord_F(\widetilde{\fra})=k_{F_i}+1-\ord_{F_i}(\frb^{(i)})=a_{F_i}(X,\frb^{(i)})\geq a_{F_i}(X,\fra^{(i)})\geq 0$$
for every $i\in\Gamma'$.
In Case 1, this already gives a contradiction, since the last inequality is strict. If we are in Case 2, we conclude that the divisor $F_i$ computes $\mld_x(X,\fra^{(i)})$.
By assumption, we must have $\ord_{F_i}(\frm_x)\to\infty$, contradicting the fact that $\ord_{F_i}(\frm_x)$ is constant for $i\in \Gamma'$.
Finally, suppose that we are in Case 3. Let us assume first that every $\widetilde{\fra}_j$ is nonzero. Since $\lct_x(X,\fra^{(i)})<1$ for every $i$ and the set
$\{\lct_x(X,\fra^{(i)})\mid i\geq 1\}$ has
$\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$ as a limit point by Proposition~\ref{properties_generic_limit}, it follows that
$\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})<1$ (recall that the set
$\{\lct_x(X,\fra^{(i)})\mid i\geq 1\}$ satisfies ACC by \cite[Theorem~4.2]{dFEM1}). Therefore $\mld_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})<0$, so we may choose a divisor $G$
over $\widetilde{X}$ with $c_{\widetilde{X}}(G)=\widetilde{x}$ and $a_G(\widetilde{X},\widetilde{\fra})<0$. We now argue as above: we can find an infinite set $\Gamma''\subset\ZZ_{>0}$
such that the following holds. For every $i\in \Gamma''$ we have a divisor $G_i$ over $X$ with $k_{G}=k_{G_i}$, $\ord_G(\frm_{\widetilde{x}})=\ord_{G_i}(\frm_x)$ (in particular,
$c_X(G_i)=x$), and such that
$\ord_G(\widetilde{\frb})=\ord_{G_i}(\frb^{(i)})$, where $\widetilde{\frb}$ and $\frb^{(i)}$ are defined as above. Furthermore, we may assume
that $\ord_G(\widetilde{\fra})=\ord_G(\widetilde{\frb})$ and we conclude that
$$0>a_G(\widetilde{X},\widetilde{\fra})=k_G+1-\ord_G(\widetilde{\fra})=k_{G_i}+1-\ord_{G_i}(\frb^{(i)})=a_{G_i}(X,\frb^{(i)})\geq a_{G_i}(X,\fra^{(i)})$$
for every $i\in \Gamma''$. Since $\ord_{G_i}(\frm_x)$ is constant for all $i\in \Gamma''$, this gives a contradiction.
Let us consider now the case when some $\widetilde{\fra}_j$ is zero. Let $T$ be a fixed divisor over $X$ with $c_X(T)=x$ and let $q$ be a positive integer
with $q>\frac{k_T+1}{\lambda_j\cdot\ord_T(\frm_x)}$. Since $\widetilde{\fra}_j$ is zero, it follows from Proposition~\ref{properties_generic_limit} that
there are infinitely many $i$ with $\fra_j^{(i)}\subseteq\frm_x^q$. In this case we have
$$a_T(X,\fra^{(i)})\leq a_T(X,\frm_x^{\lambda_j q})=k_T+1-\lambda_jq\cdot\ord_T(\frm_x)<0.$$
Therefore $T$ computes $\mld_x(X,\fra^{(i)})$ for infinitely many $i$, a contradiction. This completes the proof of the theorem.
\end{proof}
While by using generic limits we cannot get a proof for the full statement in Conjecture~\ref{conj_main}, we also obtain the
following related statement.
\begin{proposition}\label{prop_LC}
Let $X$ be a klt variety and $x\in X$ a closed point. If $I\subset\RR_{\geq 0}$ is a finite set, then there is a positive integer $\ell$ such that
for every $\RR$-ideal with exponents in $I$, if $a_E(X,\fra)\geq 0$ for all divisors $E$ over $X$ with $c_X(E)=x$ and $k_E\leq\ell$, then
$(X,\fra)$ is log canonical at $x$.
\end{proposition}
\begin{proof}
Suppose that the conclusion of the proposition fails. In this case we can find a sequence of $\RR$-ideals $\fra^{(i)}$ on $X$, with exponents in $I$,
such that each $(X,\fra^{(i)})$ is not log canonical at $x$, but
$a_E(X,\fra^{(i)})\geq 0$ for all divisors $E$ over $X$ with $c_X(E)=x$ and $k_E\leq i$.
Let $\lambda_1,\ldots,\lambda_r$ be the nonzero elements in $I$ and let us write
$$\fra^{(i)}=\prod_{j=1}^r(\fra^{(i)}_j)^{\lambda_j}.$$
We use the generic limit construction to produce $\widetilde{x}\in \widetilde{X}$ and ideals $\widetilde{\fra}_j$ on $\widetilde{X}$ corresponding to the
sequences $(\fra^{(i)}_j)_{i\geq 1}$ for $1\leq j\leq r$. Let $\widetilde{\fra}$ be the $\RR$-ideal on $\widetilde{X}$ given by
$$\widetilde{\fra}=\prod_{j=1}^r\widetilde{\fra}_j^{\lambda_j}.$$
When some $\widetilde{\fra}_j$ is zero, we make the convention that $\widetilde{\fra}=0$.
Our assumption implies $\lct_x(X,\fra^{(i)})<1$ for every $i$. Recall that $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})$ is a limit point of the sequence
$\big(\lct_x(X,\fra^{(i)})\big)_{i\geq 1}$ by Proposition~\ref{properties_generic_limit} iii). On the other hand, this sequence contains no strictly increasing subsequences by
\cite[Theorem~4.2]{dFEM1}. Therefore $\lct_{\widetilde{x}}(\widetilde{X},\widetilde{\fra})<1$ and the pair $(\widetilde{X},\widetilde{\fra})$ is not log canonical at $\widetilde{x}$. Let $E$ be a divisor over $\widetilde{X}$ with
center $\widetilde{x}$ and such that $a_E(\widetilde{X},\widetilde{\fra})<0$. If $d\in\ZZ_{>0}$ is large enough, but fixed, then we clearly have
$$a_E\big(\widetilde{X}, \prod_j(\widetilde{\fra}_j+\frm_{\widetilde{x}}^{d})^{\lambda_j}\big)=a_E(\widetilde{X},\widetilde{\fra})<0.$$
On the other hand, it follows from Proposition~\ref{properties_generic_limit} iv) that there are infinitely many $i$ for which we can find divisors $E_i$ over $X$ with center $x$,
such that $k_{E_i}=k_E$ and
$$a_{E_i}\big(X,\prod_j(\fra^{(i)}_j+\frm_x^{d})^{\lambda_j}\big)=a_{E}\big(\widetilde{X}, \prod_j(\widetilde{\fra}_j+\frm_{\widetilde{x}}^{d})^{\lambda_j}\big)<0.$$
Since
$$a_{E_i}\big(X,\prod_j(\fra^{(i)}_j)^{\lambda_j}\big)\leq a_{E_i}\big(X,\prod_j(\fra^{(i)}_j+\frm_x^d)^{\lambda_j}\big)<0$$
and $k_{E_i}=k_E$ for infinitely many $i$, we contradict our assumption. This completes the proof of the proposition.
\end{proof}
The assertion in Proposition~\ref{prop_LC} has an interesting consequence in connection with the description of log canonical pairs
in terms of jet schemes, when the ambient variety is smooth. This will not play any role in the following sections, so the reader not interested in jet schemes
could skip this part.
Recall that if $X$ is a smooth variety, $Y$ is a closed subscheme of $X$
defined by the nonzero ideal $\fra$, and $q\in\RR_{\geq 0}$, then the pair $(X,\fra^q)$ is log canonical if and only if
$$\dim(Y_m)\leq (m+1)(\dim(X)-q)\quad\text{for all}\quad m\geq 0,$$
where $Y_m$ is the $m^{\rm th}$ jet scheme of $Y$ (see \cite[Corollary~3.2]{ELM}).
For the definition and basic properties of jet schemes and contact loci, we refer to \cite{ELM}.
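As a quick illustration of this criterion (a standard example, not needed in what follows): for the hyperplane $Y=V(x_1)\subset\AAA^n$ we have $Y_m\simeq\AAA^{(m+1)(n-1)}$, so the criterion reads

```latex
% dim(Y_m) = (m+1)(n-1) for the smooth hypersurface Y = V(x_1), hence
\[
(m+1)(n-1)\leq (m+1)(n-q)\quad\text{for all}\quad m\geq 0
\quad\Longleftrightarrow\quad q\leq 1,
\]
% recovering the familiar fact that (A^n, V(x_1)^q) is log canonical
% precisely when q <= 1, that is, lct(A^n, (x_1)) = 1.
```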
Proposition~\ref{prop_LC} implies that if the dimension of $X$ and $q\in\RR_{\geq 0}$ are fixed, then
it is enough to check the dimensions of only a prescribed number of jet schemes.
\begin{proposition}\label{consequence_jet_schemes}
Given $n\geq 1$ and $q\in\RR_{\geq 0}$, there is a positive integer $N$ that satisfies the following property.
For every smooth $n$-dimensional variety $X$ and for every closed subscheme $Y$ of $X$ defined by
a nonzero ideal $\fra$, the pair $(X,\fra^q)$ is log canonical if and only if
$$\dim(Y_m)\leq (m+1)(n-q)\quad\text{for all}\quad m\leq N.$$
\end{proposition}
\begin{proof}
The case $q=0$ is trivial (the pair is always log canonical in this case, so any $N$ will work), hence we assume from now on that $q>0$.
We first consider the case when $X=\AAA^n$ and choose $\ell$ given by Proposition~\ref{prop_LC},
such that for every nonzero ideal $\fra$ in $\AAA^n$, if $a_E(\AAA^n,\fra^q)\geq 0$ for all divisors $E$ over $\AAA^n$ with center at the origin and $k_E\leq\ell$, then
$(\AAA^n,\fra^q)$ is log canonical at $0$. Let $N=\lfloor \frac{\ell+1}{q}\rfloor$, where $\lfloor u\rfloor$ denotes the largest integer $\leq u$.
We show that if $\fra$ is a nonzero ideal defining the subscheme $Y$ of $\AAA^n$ such that
$\dim(Y_m)\leq (m+1)(n-q)$ for all $m\leq N$, then $(\AAA^n,\fra^q)$ is log canonical at $0$.
Indeed, if $(\AAA^n,\fra^q)$ is not log canonical at $0$, then it follows by assumption that
there is a divisor $E$ over $\AAA^n$ with center $0$ such that $k_E\leq\ell$ and
$k_E+1<q\cdot \alpha_E$, where $\alpha_E=\ord_E(\fra)$. Since $\alpha_E$ is an integer, it follows that
$\alpha_E\geq m+1$, where $m=\lfloor \frac{k_E+1}{q}\rfloor$.
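The two elementary estimates used at this step can be recorded explicitly (pure floor-function arithmetic, included only for convenience):

```latex
% alpha_E is an integer strictly larger than (k_E+1)/q >= m, hence
\[
\alpha_E>\frac{k_E+1}{q}\geq\Big\lfloor\frac{k_E+1}{q}\Big\rfloor=m
\quad\Longrightarrow\quad \alpha_E\geq m+1,
\]
% while k_E <= ell gives the bound m <= N used at the end of the argument:
\[
m=\Big\lfloor\frac{k_E+1}{q}\Big\rfloor\leq\Big\lfloor\frac{\ell+1}{q}\Big\rfloor=N.
\]
```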
Let $f\colon W\to \AAA^n$ be a log resolution of $(\AAA^n,\fra)$ such that
$E$ appears as a divisor on $W$. It follows from \cite[Theorem~2.1]{ELM} that if $C=\overline{f_{\infty}({\rm Cont}^{\geq 1}(E))}$,
then
$$C\subseteq {\rm Cont}^{\geq \alpha_E}(\fra)\subseteq {\rm Cont}^{\geq (m+1)}(\fra)\quad\text{and}\quad {\rm codim}(C)=k_E+1.$$
We thus conclude that
$$\dim(Y_m)=(m+1)n-{\rm codim}({\rm Cont}^{\geq (m+1)}(\fra))\geq (m+1)n-{\rm codim}(C)>(m+1)(n-q).$$
Since $m\leq N$, this proves our assertion.
Suppose now that $X$ is an arbitrary smooth $n$-dimensional variety and $\fra$ is a nonzero ideal, defining the closed subscheme $Y$
of $X$, such that
$$\dim(Y_m)\leq (m+1)(n-q)\quad\text{for all}\quad m\leq N.$$
We show that for every $x\in X$, the pair $(X,\fra^q)$ is log canonical at $x$.
Since $X$ is smooth, after possibly replacing $X$ by an open neighborhood of $x$, we may assume that
we have an \'{e}tale morphism $g\colon X\to\AAA^n$, with $g(x)=0$. Let $\frm_x$ denote the ideal defining $x$
and for every $d\geq 1$, let $\fra_d=\fra+\frm_x^d$, defining the subscheme $V(\fra_d)$ of $X$. For every such $d$, there is an ideal $\frb_d$ on $\AAA^n$ defining
a subscheme $V(\frb_d)$ supported at $0$ and such that $\frb_d\cdot\cO_X=\fra_d$.
Note that for every $d$ and $m$, we have
$$V(\frb_d)_m\simeq V(\fra_d)_m\hookrightarrow Y_m,$$
hence by assumption
$$\dim\big(V(\frb_d)_m\big)\leq (m+1)(n-q)\quad\text{for all}\quad m\leq N.$$
As we have seen, this implies that
$(\AAA^n,\frb_d^q)$ is log canonical.
Since $g$ is \'{e}tale, we have $\lct_x(X,\fra_d)=\lct_0(\AAA^n,\frb_d)\geq q$ for every $d$, while
$$\lct_x(X,\fra)=\lim_{d\to\infty}\lct_x(X,\fra_d)$$
(see, for example, \cite[Proposition~2.15]{dFEM1}).
We conclude that $\lct_x(X,\fra)\geq q$, that is, the pair $(X,\fra^q)$ is log canonical at $x$.
This completes the proof of the proposition.
\end{proof}
\section{A proof of the conjecture in dimension 2}
We begin with the following convexity property of log discrepancies from \cite[Proposition 2.37]{Kollar3}.
\begin{proposition}\label{proposition:conv}
Let $X$ be a surface and $\fra$ an $\RR$-ideal on $X$ such that $(X,\fra)$ is log canonical, and $f \colon Y \to X$ a birational morphism
from a smooth surface $Y$. Assume that $a_E (X, \fra) \le 1$ for every $f$-exceptional divisor $E$.
If $E_1$, $E_2$, and $E_3$ are $f$-exceptional prime divisors that satisfy the following conditions:
\begin{enumerate}
\item $E_1$ meets both $E_2$ and $E_3$, and
\item $E_1$ has self-intersection number $E_1 ^2 \le -2$,
\end{enumerate}
then $a_1 \le \frac{1}{2}(a_2 + a_3)$, where $a_i = a_{E_i} (X, \fra)$.
\end{proposition}
\begin{proof}
Since the statement is local, we may assume that $X$ is affine.
We may write $\fra = \prod \fra _i ^{\lambda _i}$
for nonzero ideal sheaves $\fra _i$ and $\lambda _i \in \RR _{>0}$.
We fix a positive integer $c$ which satisfies $c \ge \lambda _i$ for every $i$.
Take general elements $f_{i1}, \ldots , f_{ic} \in \fra _i$, and
let $D_{i1}, \ldots, D_{ic}$ be the corresponding effective Cartier divisors.
If $\Delta = \frac{1}{c} \sum _{i,j} \lambda _i D_{ij}$, then
$(X, \Delta)$ is log canonical and $a_{E} (X, \Delta) = a_{E} (X, \fra)$
for every $f$-exceptional divisor $E$ (see \cite[Lemma 4.2]{Nak2}).
Let $\{ E_i \}$ be the set of all $f$-exceptional divisors.
We write
\[
f^* (K_X + \Delta) = K_Y + \widetilde{\Delta} +\sum _i (1- a_i) E_i,
\]
where $\widetilde{\Delta}$ is the strict transform of $\Delta$ and $a_i = a_{E_i} (X, \Delta)$.
Note that $1 - a_i \ge 0$ for every $i$, by assumption.
We have
\begin{align*}
0 = f^* (K_X + \Delta) \cdot E_1
= &(K_Y + E_1) \cdot E_1 + \widetilde{\Delta} \cdot E_1 - a_1 E_1 ^2 \\ &+ (1 - a_2) E_1 \cdot E_2
+ (1 - a_3) E_1 \cdot E_3 + \sum _{i \not = 1, 2, 3} (1- a_i) E_1 \cdot E_i.
\end{align*}
It is clear that we have
\[
(K_Y + E_1) \cdot E_1 \ge -2, \quad \widetilde{\Delta} \cdot E_1 \ge 0 , \quad
\sum _{i \not = 1, 2, 3} (1- a_i) E_1 \cdot E_i \ge 0,
\]
and the assumptions (1) and (2) give
\[
- a_1 E_1 ^2 \ge 2 a_1, \quad (1 - a_2) E_1 \cdot E_2 \ge 1 - a_2, \quad (1 - a_3) E_1 \cdot E_3 \ge 1 - a_3.
\]
By combining all these inequalities, we obtain
\[
0 \ge -2 + 2 a_1 + (1 - a_2) + (1 - a_3) = 2 a_1 - a_2 - a_3,
\]
which gives the desired inequality $a_1 \le \frac{1}{2}(a_2 + a_3)$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{dim2}]
Let $X$ be a klt surface, $x \in X$ a point and $I \subset \RR _{\ge 0}$ a finite set.
The non-log-canonical case follows from Proposition \ref{prop_LC},
hence we only consider the log canonical case.
Let $\fra$ be an $\RR$-ideal on $X$ with exponents in $I$ such that $(X,\fra)$ is log canonical around $x$.
Let $X_0\to X$ be the minimal resolution of $X$.
Suppose that $\mld _x (X, \fra)$ is not computed by any $(X_0 \to X)$-exceptional divisor.
Then, there is a sequence of blow-ups
\[
X_n \to X_{n-1} \to \cdots \to X_1 \to X_0 \to X,
\]
with the following properties:
\begin{enumerate}
\item For every $i$ with $0 \le i \le n-1$, the map $X_{i+1} \to X_i$ is the blow-up of $X_i$ at a point $p_i \in X_i$
with exceptional divisor $E_i \subset X_{i+1}$.
\item $p_0$ maps to $x$ by the map $X_0 \to X$.
\item $p_{i+1}$ maps to $p_i$ by the map $X_{i+1} \to X_i$ for every $i$ with $0 \le i \le n-2$ (equivalently, $p_{i+1} \in E_i$).
\item $a_{E_i}(X, \fra) > \mld _x (X, \fra)$ for $i$ with $0 \le i \le n-2$ and
$a_{E_{n-1}} (X, \fra) = \mld _x (X, \fra)$.
\end{enumerate}
The next lemma gives a bound for $k_{E_{n-1}}$ in terms of $n$.
\begin{lemma}\label{lemma:n_to_k}
With the above notation, we have $k_{E_{n-1}} \le 2^{n-1}$.
\end{lemma}
\begin{proof}
We first show that $\ord _{E_{n-1}} F \le 2^{n-1-i}$ for every prime divisor $F$ on $X_i$
which is exceptional over $X_0$ with $0 \le i \le n-1$.
We argue by descending induction on $i$.
The case $i = n-1$ is trivial since each exceptional prime divisor over $X_0$ is smooth.
If $i < n-1$, then the pull-back of $F$ to $X_{i+1}$ is either equal to the strict transform $F'$ of $F$ on $X_{i+1}$ or
it is equal to $F' + E_{i}$. By induction, we conclude that
$$\ord _{E_{n-1}} (F) \le \ord _{E_{n-1}} (F' + E_{i}) \le 2 \cdot 2^{n-2-i} = 2^{n-1-i}.$$
Note now that we have
\[
k_{E_{n-1}} = \ord _{E_{n-1}} (K_{X_n / X}) = \ord _{E_{n-1}} (K_{X_0 / X}) + \sum _{i = 1} ^n \ord _{E_{n-1}} (K_{X_i / X_{i-1}}).
\]
On the other hand, since $X_0$ is the minimal resolution of $X$, we have $K_{X_0/X}\leq 0$, hence
$\ord _{E_{n-1}} (K_{X_0 / X}) \le 0$. Using the assertion at the beginning of the proof, we conclude
\begin{align*}
k_{E_{n-1}}\leq \sum _{i = 1} ^n \ord _{E_{n-1}} (K_{X_i / X_{i-1}}) = \sum _{i = 1} ^n \ord _{E_{n-1}} (E_{i-1}) \le 1 + \sum _{i = 1} ^{n-1} 2^{n-1-i} = 2^{n-1},
\end{align*}
which gives the desired inequality.
\end{proof}
Returning to the proof of Theorem~\ref{dim2}, it follows from
Lemma~\ref{lemma:n_to_k} that in order to conclude the proof of the theorem
it is enough to prove the following lemma, giving a bound on the number $n$ of blow-ups of $X_0$.
\end{proof}
\begin{lemma}\label{lemma:bound_n}
There exists a positive integer $\ell (I)$ depending on the finite set $I$ that satisfies the following condition:
for every $\RR$-ideal $\fra$ on $X$ with exponents in $I$,
if $(X, \fra)$ is log canonical and $\mld_x (X, \fra)$ is not computed by any $(X_0 \to X)$-exceptional divisor,
then for every sequence of blow-ups satisfying conditions (1)--(4) above, we have
$n \le \ell(I)$.
\end{lemma}
\begin{proof}
If $\mld _x (X, \fra) > 1$, then it is known that $X$ is smooth at $x$
(hence $X_0 = X$) and $n=1$ (see \cite[Theorem 4.5]{KollarMori} and its proof).
From now on we suppose $\mld _x (X, \fra) \le 1$.
We begin by proving the following assertion, which we will need
in order to apply Proposition \ref{proposition:conv}:
\begin{equation}\label{eq:eff}
a_{E} (X, \fra) \le 1\quad\text{for every}\quad (X_n \to X)\text{-exceptional divisor}\,\,E.
\end{equation}
Since $X_0 \to X$ is the minimal resolution, we have $K_{X_0/X}\leq 0$, hence
$$a_{E} (X, \fra) \le a_{E} (X) \le 1$$
for every $(X_0 \to X)$-exceptional divisor $E$.
Arguing by contradiction, suppose that $j$ is the smallest index with $a_{E_j} (X, \fra) > 1$.
We define an $\RR$-ideal $\fra _{j}$ on $X_{j}$ as follows:
if $\frb$ is the $\RR$-ideal on $X_j$ such that
$$\fra\cdot\cO_{X_j}=\frb\cdot\prod_E\cO_{X_j}(-E)^{\ord_E(\fra)},$$
where the product is over the $(X_{j} \to X)$-exceptional divisors, then
$$\fra_j=\frb\cdot\prod_E\cO_{X_j}(-E)^{\ord_E(\fra)-k_E}$$
(note that this is well-defined since $\ord_E(\fra)-k_E=1-a_E(X,\fra)\geq 0$
for every such $E$${\rm )}$. It follows from the definition that
$$a_E (X, \fra) = a_E (X_{j}, \fra _{j})\quad\text{for every divisor}\,\,E\,\,\text{over}\,\,X.$$
Since $a_{E_j} (X_j, \fra_{j}) = a_{E_j} (X, \fra) > 1$, we have
$
\mult _{p_{j}} \fra _{j} < 1.
$
By \cite[Theorem 4.5]{KollarMori}, it follows that
$
\mld _{p_{j}} (X_{j}, \fra _{j}) > 1.
$
However, this contradicts
\[
\mld _{p_{j}} (X_{j}, \fra _{j})
\le a_{E_{n-1}} (X_{j}, \fra _{j}) = a_{E_{n-1}} (X, \fra) = \mld _x (X, \fra) \le 1.
\]
This completes the proof of (\ref{eq:eff}).
Suppose now that $F_{0}, F_{1}, \ldots, F_{c}$ are $(X_n \to X_0)$-exceptional divisors
that satisfy the following conditions:
\begin{enumerate}
\item[($\alpha$)] $F_0 = E_{n-1}$ and $F_{i} \not = E_{n-1}$ for $1 \le i \le c$, and
\item[($\beta$)] $F_{i}$ meets $F_{i+1}$ for $0 \le i \le c-1$.
\end{enumerate}
In this case we have the following sequence of inequalities:
\begin{equation}\label{claim2}
a_{E_{n-1}} = a_{F_0} < a_{F_1} < \cdots < a_{F_c},
\end{equation}
where we set $a_{F_i} = a _{F_i} (X, \fra)$.
In order to see this, note first that by the assumption on the sequence of blow-ups, we have
$a_{E_{n-1}} < a_{F}$ for every $(X_n \to X_0)$-exceptional divisor $F$ except for $F = E_{n-1}$.
This gives the first inequality $a_{F_0} < a_{F_1}$.
We next use the fact that $F ^2 \le -2$ for every $(X_n \to X_0)$-exceptional divisor $F$, except for $F = E_{n-1}$;
in particular, we have $F_1 ^2 \le -2$.
It follows from Proposition \ref{proposition:conv} that
$$a_{F_1}\leq\frac{1}{2}(a_{F_0}+a_{F_2})<\frac{1}{2}(a_{F_1}+a_{F_2}).$$
Therefore $a_{F_1}<a_{F_2}$. We deduce in this way (\ref{claim2}) by repeatedly applying
Proposition \ref{proposition:conv}.
By the discreteness of log discrepancies proved by Kawakita \cite{Kawakita2},
there exists a finite subset $U(I) \subset [0,1]$ depending only on $I$ satisfying the following condition:
\begin{itemize}
\item For every $\RR$-ideal $\fra$ with exponents in $I$ such that $(X, \fra)$ is log canonical,
if $a_F (X, \fra) \in [0,1]$, then $a_F (X, \fra) \in U(I)$.
\end{itemize}
Set $\ell_1 (I) := \# U(I)$.
By (\ref{eq:eff}) and the log canonicity of $(X,\fra)$, each $a_{F_i}$ lies in $[0,1]$, hence in $U(I)$. Therefore, if we can find a sequence $F_0, \ldots, F_c$ of exceptional divisors
that satisfies the conditions ($\alpha$) and ($\beta$) above, with $c \ge \ell_1 (I)$, then (\ref{claim2}) exhibits $c+1$ distinct elements of $U(I)$, contradicting $\# U(I) = \ell_1 (I)$.
The graph-theoretic Lemma~\ref{lem_graph} below thus implies
$n < \frac{1}{2}(3^{\ell_1 (I)} -1)$.
Indeed, we apply the lemma for the dual graph $\Gamma$ of $(X_n \to X_0)$-exceptional divisors
(the vertices of this graph are given by these exceptional divisors and two vertices are connected by an edge if and only if
the divisors intersect on $X_n$); note that $\Gamma$
has $n$ vertices and each vertex has degree at most three. This completes the proof of Lemma~\ref{lemma:bound_n}.
\end{proof}
\begin{lemma}\label{lem_graph}
Let $\ell$ be a positive integer and
$G$ be a connected graph of order $n \ge \frac{1}{2}(3^{\ell} -1)$.
If every vertex of $G$ has degree $\leq 3$,
then for any vertex $v$ of $G$, the graph $G$ contains a chain of length $\ell$ in which $v$ has degree $1$.
\end{lemma}
\begin{proof}
We argue by induction on $\ell$, the case $\ell = 1$ being trivial.
Consider the graph $G'$ obtained by removing the vertex $v$ and the edges containing $v$.
Since $G$ is connected and ${\rm deg}(v)\leq 3$,
the number of connected components of $G'$ is at most three.
Let $G''$ be a connected component of $G'$ of order at least
$\frac{1}{3}\big( \frac{1}{2}(3^{\ell} -1) -1 \big) = \frac{1}{2}(3^{\ell -1} -1)$.
Let $v'$ be a vertex in $G''$ which is connected to $v$ by an edge in $G$.
By induction, $G''$ contains a chain of length $\ell -1$ in which $v'$ has degree $1$.
By adding $v$ to this chain, we obtain a chain in $G$ which contains $v$ with degree $1$.
\end{proof}
\section{A proof of the conjecture in the monomial case}
In this section we give a proof of Theorem~\ref{monomial_case}. A \emph{monomial} $\RR$-ideal on $\AAA^n$ is an $\RR$-ideal of the form $\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$,
where each ideal $\fra_j$ is generated by monomials. More precisely, we prove the following result.
\begin{theorem}\label{monomial_case_version2}
Given a positive integer $n$ and a finite subset $I\subset \RR_{\geq 0}$, there is a positive integer $\ell$ ${\rm (}$depending on $n$ and $I$${\rm )}$ such that
for every monomial $\RR$-ideal $\fra$ on $\AAA^n$ with exponents in $I$, there is a divisor $E$
that computes $\mld_0(\AAA^n,\fra)$ and such that $k_E\leq \ell$.
\end{theorem}
We will use the following result of Maclagan \cite[Theorem~1.1]{Maclagan}: given an infinite set ${\mathcal U}$ of monomial ideals in $k[x_1,\ldots,x_n]$,
there are two ideals $I,J\in {\mathcal U}$ such that $I\subseteq J$. This implies that given any sequence $(I_m)_{m\geq 1}$ of monomial ideals in $k[x_1,\ldots,x_n]$,
there is a subsequence $(I_{j_m})_{m\geq 1}$ such that $I_{j_m}\supseteq I_{j_{m+1}}$ for all $m$. Indeed, note first that we may assume that each ideal is equal to $I_m$
for only finitely many values of $m$, since otherwise our assertion is trivial. Since $k[x_1,\ldots,x_n]$ is Noetherian, we can find ideals in $\{I_m\mid m\geq 1\}$ that are maximal with respect to inclusion.
By Maclagan's result, there are only finitely many such maximal ideals, and by our assumption there are only finitely many $m$ with the property that $I_m$ is one of them.
Therefore we can find $m_1\geq 1$ such that $I_{m_1}\supseteq I_m$ for infinitely many values of $m$. By repeating the argument for the ideals $I_m$, with $m>m_1$ and
$I_m\subseteq I_{m_1}$, we obtain our assertion.
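Purely as an illustration of the inclusion test underlying the above discussion (this is our own toy model, not part of the argument): a monomial ideal can be represented by a finite list of generating exponent vectors, and $I\subseteq J$ holds if and only if every generator of $I$ is divisible by some generator of $J$.

```python
def divides(u, v):
    # the monomial x^u divides x^v iff u <= v componentwise
    return all(ui <= vi for ui, vi in zip(u, v))

def contains(J, I):
    # J ⊇ I for monomial ideals given by exponent vectors of generators:
    # every generator of I must lie in J, i.e. be divisible by some generator of J
    return all(any(divides(g, f) for g in J) for f in I)

# a toy weakly decreasing chain of monomial ideals in k[x, y]
I1 = [(1, 0)]             # (x)
I2 = [(2, 0), (1, 1)]     # (x^2, x*y)
I3 = [(3, 0), (1, 2)]     # (x^3, x*y^2)

assert contains(I1, I2) and contains(I2, I3)  # I1 ⊇ I2 ⊇ I3
assert not contains(I2, I1)                   # the chain is strict
```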
\begin{proof}[Proof of Theorem~\ref{monomial_case_version2}]
If the conclusion of the theorem fails, then there is a sequence $(\fra_m)_{m\geq 1}$ of monomial $\RR$-ideals on $\AAA^n$ and a sequence $(\ell_m)_{m\geq 1}$
with $\lim_{m\to\infty}\ell_m=\infty$ such that for every divisor $E$ over $\AAA^n$ that computes $\mld_0(\AAA^n,\fra_m)$, we have $k_E\geq\ell_m$. We will show that
this leads to a contradiction.
Let $\lambda_1,\ldots,\lambda_r$ be the elements of $I$. By assumption, we can write each $\fra_m$ as
$$\fra_m=\prod_{j=1}^r \fra_{m,j}^{\lambda_j},$$
where all $\fra_{m,j}$ are monomial ideals. As we have seen, it follows from Maclagan's result that after passing to a subsequence, we may assume
that $\fra_{m,1}\supseteq\fra_{m+1,1}$ for all $m\geq 1$. Repeating this for the $\fra_{m,2},\ldots,\fra_{m,r}$, it follows that after $r$ such steps, we may assume that
$\fra_{m,j}\supseteq\fra_{m+1,j}$ for all $m\geq 1$ and all $j$, with $1\leq j\leq r$.
In particular, it follows from Proposition~\ref{general_properties} that $(\mld_0(\AAA^n,\fra_m))_{m\geq 1}$ is a weakly decreasing sequence.
On the other hand, a result of Kawakita \cite[Theorem~1.2]{Kawakita2} says that the set of mld's on a fixed klt germ, for $\RR$-ideals with exponents in the finite set $I$,
is finite. We thus conclude that after passing one more time to a subsequence, we may assume that all $\mld_0(\AAA^n,\fra_m)$ take the same value
(possibly equal to $-\infty$).
Let $E$ be a divisor over $\AAA^n$ that computes $\mld_0(\AAA^n,\fra_1)$. Given $m\geq 1$, since $\fra_{1,j}\supseteq\fra_{m,j}$ for all $j$,
it follows that
$$\mld_0(\AAA^n,\fra_m)\leq k_E+1-\ord_E(\fra_m)\leq k_E+1-\ord_E(\fra_1)=\mld_0(\AAA^n,\fra_1).$$
Therefore all the above inequalities are equalities. In particular, $E$ computes $\mld_0(\AAA^n,\fra_m)$ for all $m\geq 1$,
a contradiction. This completes the proof of the theorem.
\end{proof}
\section{Connection with ACC}
Our goal in this section is to prove Theorem~\ref{thm_acc}, relating Conjecture~\ref{conj_main} to the
ACC conjecture.
\begin{proof}[Proof of Theorem~\ref{thm_acc}]
Suppose that we have a sequence $(\fra_i)_{i\geq 1}$ of $\RR$-ideals on $X$ with exponents in $J$ such that
each $(X,\fra_i)$ is log canonical around $x$ and such that, setting
$q_i=\mld_x(X,\fra_i)$, the sequence $(q_i)_{i\geq 1}$ is strictly increasing. Since $q_i\leq\mld_x(X)$ for every $i$, it follows that
$q:=\lim_{i\to\infty}q_i<\infty$.
We may write $\fra_{i}=\prod_{j=1}^{r_i}\fra_{i,j}^{\lambda_{i,j}}$, where each $\fra_{i,j}$ is a nonzero ideal on $X$ with $x\in {\rm Cosupp}(\fra_{i,j})$
and each $\lambda_{i,j}$ is a nonzero element of $J$. Since $J$ is a DCC set, it follows that there is $\epsilon>0$ such that $\lambda_{i,j}\geq\epsilon$ for all $i$ and all $j$ with $1\leq j\leq r_i$.
Let $F$ be a fixed divisor over $X$ with $c_X(F)=x$. For every $i\geq 1$, it follows from the fact that $(X,\fra_i)$ is log canonical around $x$ that
$$r_i\epsilon\leq \sum_{j=1}^{r_i}\lambda_{i,j}\leq \sum_{j=1}^{r_i}\lambda_{i,j}\cdot\ord_F(\fra_{i,j})\leq k_F+1.$$
First, this implies that the $r_i$ are bounded. Second, it implies that the $\lambda_{i,j}$ are bounded.
After possibly passing to a subsequence, we may assume that $r_i=r$ for all $i\geq 1$. Furthermore,
since $J$ is a DCC set, it follows that after possibly passing again to a subsequence, we may assume that
each sequence $(\lambda_{i,j})_{i\geq 1}$ is nondecreasing. Since we have seen that the sequence is bounded, it follows that
$\lambda_j:=\lim_{i\to\infty}\lambda_{i,j}<\infty$.
We consider new $\RR$-ideals $\fra'_i=\prod_{j=1}^r\fra_{i,j}^{\lambda_j}$ for $i\geq 1$. We now show that $(X,\fra'_i)$ is log canonical around $x$ for $i\gg 0$.
Note that the set $J'=J\cup\{\lambda_1,\ldots,\lambda_r\}$ also satisfies DCC, hence
$${\mathcal A}:=\{\lct_x(X,\frb)\mid \frb\,\,\text{is}\,\,\text{an}\,\,\RR\text{-ideal on}\,\,X\,\,\text{with exponents in}\,\,J'\}$$
satisfies ACC (since we work on a fixed variety, this follows from \cite[Theorem~4.2]{dFEM1}; for the general statement, see \cite[Theorem~1.1]{HMX}).
In particular, there is $M$ such that $\lct_x(X,\frb)\leq M$ for every $\RR$-ideal $\frb$ on $X$ with exponents in $J$.
Note that we have
\begin{equation}\label{eq_limit}
\lim_{i\to\infty}(\lct_x(X,\fra'_i)-\lct_x(X,\fra_i))=0.
\end{equation}
Indeed, it follows from Proposition~\ref{general_properties} that for every $\delta>0$ and for every $i$ such that $\lambda_{i,j}\geq (1+\delta)^{-1}\lambda_j$ for all $j$, we have
$$\frac{1}{\delta+1}\cdot \lct_x(X,\fra_i) \leq \lct_x(X,\fra'_i)\leq \lct_x(X,\fra_i),$$
hence
$$0\leq \lct_x(X,\fra_i)-\lct_x(X,\fra'_i)\leq \frac{\delta}{\delta+1}\cdot\lct_x(X,\fra_i)\leq\frac{M\delta}{\delta+1}.$$
This gives (\ref{eq_limit}). On the other hand, we have by assumption $\lct_x(X,\fra_i)\geq 1$ for all $i\geq 1$. Since the set ${\mathcal A}$ satisfies ACC,
we conclude from (\ref{eq_limit}) that $\lct_x(X,\fra'_i)\geq 1$ (hence $(X,\fra'_i)$ is log canonical around $x$) for all $i\gg 0$. After possibly ignoring the first few terms,
we may assume that $(X,\fra'_i)$ is log canonical around $x$ for every $i\geq 1$.
We now choose for every $i$ a divisor $E_i$ over $X$ which computes $\mld_x(X,\fra'_i)$. Since we assume that $X$ satisfies the assertion in
Conjecture~\ref{conj_main} for $I=\{\lambda_1,\ldots,\lambda_r\}$,
we may and will assume that the set $\{k_{E_i}\mid i\geq 1\}$ is bounded above. Since we have
$$0\leq\mld_x(X,\fra'_i)=k_{E_i}+1-\sum_{j=1}^r\lambda_j \cdot\ord_{E_i}(\fra_{i,j}),$$
it follows that there is $B>0$ such that $\ord_{E_i}(\fra_{i,j})\leq B$ for all $i$ and $j$.
On the other hand, since $\lambda_{i,j}\leq\lambda_j$ for all $i$ and $j$, we have by Proposition~\ref{general_properties}
\begin{equation}\label{eq_bound_mld}
a_{E_i}(X,\fra'_i)=\mld_x(X,\fra'_i)\leq \mld_x(X,\fra_i)\leq a_{E_i}(X,\fra_i)=a_{E_i}(X,\fra'_i)+\sum_{j=1}^r(\lambda_j-\lambda_{i,j})\cdot\ord_{E_i}(\fra_{i,j}).
\end{equation}
Since the $\RR$-ideals $\fra'_i$ have exponents in the finite set $\{\lambda_1,\ldots,\lambda_r\}$, it follows from a result of Kawakita \cite[Theorem~1.2]{Kawakita2} that the set
$\{\mld_x(X,\fra'_i)\mid i\geq 1\}$ is finite. After possibly passing to a subsequence, we may thus assume that $\mld_x(X,\fra'_i)=A$ for every $i\geq 1$.
We then conclude from (\ref{eq_bound_mld}) that
\begin{equation}\label{eq_bound_mld2}
A\leq q_i\leq A+B\cdot \sum_{j=1}^r(\lambda_j-\lambda_{i,j}).
\end{equation}
Since $\lim_{i\to\infty}\lambda_{i,j}=\lambda_j$ for all $j$, it follows from (\ref{eq_bound_mld2}) by passing to limit that $q=A$. Using
one more time (\ref{eq_bound_mld2}), we obtain $A\leq q_i\leq q=A$ for every $i$, hence the sequence $(q_i)_{i\geq 1}$ is constant, a contradiction.
\end{proof}
\section{Three equivalent conjectures}
We begin by stating the Generic Limit conjecture and the Ideal-adic Semicontinuity conjecture for minimal log discrepancies.
Let $X$ be a klt variety over $k$ and $x\in X$ a closed point.
Given a positive integer $r$ and $r$ sequences of nonzero coherent sheaves of ideals
$(\fra^{(i)}_j)_{i\geq 1}$ on $X$, for $1\leq j\leq r$,
the generic limit construction (see \S\ref{section:generic_limit}) gives an affine klt scheme $\widetilde{X}$,
a closed point $\widetilde{x}\in\widetilde{X}$,
and $r$ ideals $\widetilde{\fra}_1,\ldots,\widetilde{\fra}_r$ on $\widetilde{X}$.
\begin{conjecture}[{Generic Limit conjecture, \cite[Conjecture 4.5]{Kawakita2}}]\label{conjecture:generic_limit}
For positive real numbers $\lambda_1, \ldots , \lambda _r$, there exists an infinite subset $S \subseteq \mathbb{Z} _{>0}$
such that the following hold:
\begin{itemize}
\item The ideals $\widetilde{\fra}_1,\ldots,\widetilde{\fra}_r$ are again
generic limits of the sequences of ideals $(\fra^{(i)}_1)_{i \in S},\ldots, (\fra^{(i)}_r)_{i \in S}$, and
\item For every $i\in S$, we have
$$
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r \widetilde{\fra}_j ^{\lambda _j}) =
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j}).$$
\end{itemize}
\end{conjecture}
\begin{remark}\label{remark:oneineq}
Note that in the setting of the above conjecture, the inequality
\[
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r \widetilde{\fra}_j ^{\lambda _j}) \ge
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})
\]
can easily be guaranteed. Indeed,
let $E$ be a divisor computing $\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r \widetilde{\fra}_j ^{\lambda _j})$.
Take a positive integer $\ell$ such that $\ell\cdot \ord _E (\frm _{\widetilde{x}}) > \ord _E \widetilde{\fra}_j$ holds for each $j$.
Then we have
\[
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r \widetilde{\fra}_j ^{\lambda _j}) =
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r (\widetilde{\fra}_j + \frm _{\widetilde{x}} ^{\ell} )^{\lambda _j}).
\]
By Remark~\ref{remark_properties_generic_limit},
there exists an infinite subset $S \subseteq \ZZ_{>0}$
such that the first condition in the conjecture holds and
\[
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j = 1} ^r (\widetilde{\fra}_j + \frm _{\widetilde{x}} ^{\ell} )^{\lambda _j})
=
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})
\]
for every $i \in S$.
Since
\[
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})
\ge
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j}),
\]
we obtain the claimed inequality.
\end{remark}
We now turn to the (uniform version of) Ideal-adic Semicontinuity conjecture for minimal log discrepancies.
\begin{conjecture}\label{ideal_adic}
Let $X$ be a klt variety and let $x\in X$ be a point defined by the ideal $\frm_x$. Given a finite set $I\subseteq \RR_{\geq 0}$,
there is a positive integer
$s$ ${\rm (}$depending on $(X,x)$ and $I$${\rm )}$ such that the following holds: for any two $\RR$-ideals
$\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$ and $\frb=\prod_{j=1}^r\frb_j^{\lambda_j}$, with $\lambda_j\in I$ for all $j$, if $\fra_j+\frm_x^s=\frb_j+\frm_x^s$ for all $j$,
then $\mld_x(X,\fra)\geq 0$ if and only if $\mld_x(X,\frb)\geq 0$, and if this is the case\footnote{If this is not the case and $\dim(X)\geq 2$, then the two mlds are equal since they are both $-\infty$.}, then $\mld_x(X,\fra)=\mld_x(X,\frb)$.
\end{conjecture}
We now prove the result stated in the Introduction, saying that Conjectures~\ref{conj_main}, \ref{conjecture:generic_limit}, and \ref{ideal_adic}
are equivalent.
\begin{proof}[Proof of Theorem~\ref{thm_equivalence}]
We first show that Conjecture~\ref{conj_main} implies Conjecture~\ref{ideal_adic}. Suppose that Conjecture~\ref{conj_main} holds for $(X,x)$ and every finite set $I$.
Let $I$ be such a set. By assumption, there is a positive integer $\ell$ such that for every $\RR$-ideal
$\fra=\prod_{j=1}^r\fra_j^{\lambda_j}$ on $X$, with $\lambda_j\in I$ for all $j$,
there is a divisor
$E$ computing $\mld_x(X,\fra)$ with $k_E\leq\ell$. Let $\epsilon$ be the smallest nonzero element of $I$ and let $s$ be a positive integer that satisfies
$s>\frac{\ell+1}{\epsilon}$.
Suppose that $\fra$ and $\frb$ are as in Conjecture~\ref{ideal_adic}, with $\mld_x(X,\fra)\geq 0$.
We may and will assume that $\lambda_j>0$ for all $j$.
Let $\fra'_s=\prod_{j=1}^r(\fra_j+\frm_x^s)^{\lambda_j}$ and $\frb'_s=\prod_{j=1}^r(\frb_j+\frm_x^s)^{\lambda_j}$.
We assume that $\fra_j+\frm_x^s=\frb_j+\frm_x^s$ for all $j$, hence $\fra'_s=\frb'_s$.
Let $E$ be a divisor over $X$ which computes $\mld_x(X,\fra)$ such that $k_E\leq \ell$.
In this case we have
$0\leq\mld_x(X,\fra)=k_E+1-\sum_{j=1}^r\lambda_j\cdot\ord_E(\fra_j)$, hence
$$\sum_{j=1}^r\lambda_j\cdot\ord_E(\fra_j)\leq\ell+1.$$
It follows from the choice of $\epsilon$ and $s$ that
$$s\cdot\ord_E(\frm_x)\geq s> \frac{\ell+1}{\lambda_j}\geq\ord_E(\fra_j)$$ for every $j$. Using Proposition~\ref{general_properties}, we obtain
$\mld_x(X,\fra)=\mld_x(X,\fra'_s)$.
Since $\fra'_s=\frb'_s$, we have
$\mld_x(X,\frb'_s)=\mld_x(X,\fra'_s)=\mld_x(X,\fra)$ and since $\mld_x(X,\frb)\leq \mld_x(X,\frb'_s)$, we conclude that $\mld_x(X,\frb)\leq\mld_x(X,\fra)$.
On the other hand, we have $\mld_x(X,\frb)\geq 0$. Indeed, if this is not the case, then by assumption we can find a divisor $F$ that computes
$\mld_x(X,\frb)$, with $k_F\leq \ell$. Therefore we have $k_F+1<\ord_F(\frb)$. We now use the fact that $\mld_x(X,\frb'_s)\geq 0$. First, this implies that
$\ord_F(\frb'_s)<\ord_F(\frb)$, and since we can write
$$\ord_F(\frb'_s)=\sum_{j=1}^r\lambda_j\cdot\min\{s\cdot\ord_F(\frm_x),\ord_F(\frb_j)\},$$
we conclude that there is $j$ such that $s\cdot\ord_F(\frm_x)<\ord_F(\frb_j)$.
Second, it gives
$$\ell+1\geq k_F+1\geq \ord_F(\frb'_s)\geq \lambda_js\cdot \ord_F(\frm_x)\geq \epsilon s>\ell+1,$$
a contradiction. We thus conclude that $\mld_x(X,\frb)\geq 0$.
We can now run the same argument with the roles of $\fra$ and $\frb$ reversed, to conclude that $\mld_x(X,\frb)\geq\mld_x(X,\fra)$. Therefore $\mld_x(X,\frb)=\mld_x(X,\fra)$.
This completes the proof of the fact that Conjecture~\ref{conj_main} implies Conjecture~\ref{ideal_adic}.
We now show that Conjecture~\ref{ideal_adic} implies Conjecture~\ref{conjecture:generic_limit}.
Suppose that we are in the setting of Conjecture~\ref{conjecture:generic_limit} and let $s$ be the positive integer provided by
Conjecture~\ref{ideal_adic} for the set $I=\{\lambda_1,\ldots,\lambda_r\}$.
By assumption, we have
\[
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})
=
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})
\]
for every $\ell\geq s$ and every $i$.
The argument in Remark~\ref{remark:oneineq} then implies that there is an infinite subset $S\subseteq\ZZ_{>0}$ that satisfies the first condition
in Conjecture~\ref{conjecture:generic_limit} and such that
\[
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})
=
\mld _{\widetilde{x}} (\widetilde{X}, \prod _{j=1} ^r \widetilde{\fra}_j^{\lambda _j}),
\]
for every $i\in S$. We thus have the conclusion in Conjecture~\ref{conjecture:generic_limit}.
Finally, we show that Conjecture~\ref{conjecture:generic_limit} implies Conjecture~\ref{conj_main}.
Let $\lambda _1, \ldots , \lambda_r$ be the nonzero elements of the finite set $I$.
If the assertion in Conjecture~\ref{conj_main} is not true,
then for each positive integer $i$ there exist coherent ideals $\fra ^{(i)} _1, \ldots ,\fra ^{(i)} _r$
with the following property:
\begin{itemize}
\item $k_{E_i} \ge i$ holds for every divisor $E_i$ that computes $\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})$.
\end{itemize}
We use the generic limit construction for $(\fra ^{(i)} _j) _{i\ge 1}$ to obtain coherent ideal sheaves
$\widetilde{\fra}_1,\ldots,\widetilde{\fra}_r$ on $\widetilde{X}$.
By applying successively\footnote{We need the first condition in Conjecture~\ref{conjecture:generic_limit}
in order to be able to apply Remark~\ref{remark:oneineq} to the resulting subsequences of ideals.} Conjecture~\ref{conjecture:generic_limit} and Remark~\ref{remark:oneineq}, we get an infinite subset $S\subseteq\ZZ_{>0}$ such that
\begin{equation}\label{eq_thm_equivalence}
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})
=
\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})
\end{equation}
for every $i\in S$.
Let $\ell'$ be the bound provided by Theorem~\ref{thm_bound_ord_point}.
It follows that for every $i\in S$, there is a divisor $E_i$ that computes
$\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j + \frm _x ^{\ell} )^{\lambda _j})$
such that $\ord_{E_i} (\frm _x) \le \ell '$.
The equality (\ref{eq_thm_equivalence}) implies that $E_i$ also computes
$\mld _x (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})$.
Therefore we have
\[
\ord _{E_i} (\fra ^{(i)} _j + \frm _x ^{\ell}) = \ord _{E_i} (\fra ^{(i)} _j)
\]
for every $j$, hence
\[
\ord _{E_i} (\fra ^{(i)} _j) \le \ord _{E_i} (\frm _x^{\ell}) \le \ell \ell'.
\]
If $i\in S$ satisfies $i > \mld _x (X) - 1 + \ell \ell' \sum _{j=1} ^r \lambda _j$,
then we have
\begin{align*}
a_{E_i} (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})
&= k_{E_i} + 1 - \sum _{j=1} ^r \lambda _j \cdot \ord_{E_i} (\fra ^{(i)} _j) \\
&\ge i +1 - \ell \ell' \sum _{j=1} ^r \lambda _j > \mld _x (X).
\end{align*}
This contradicts the fact that
\[
a_{E_i} (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j})
= \mld _{x} (X, \prod _{j=1} ^r (\fra ^{(i)} _j)^{\lambda _j}) \le \mld _x (X).
\]
We thus showed that Conjecture~\ref{conjecture:generic_limit} implies Conjecture~\ref{conj_main},
completing the proof of the theorem.
\end{proof}
\begin{remark}
By the equivalence of Conjectures~\ref{conj_main} and~\ref{conjecture:generic_limit},
Theorem~\ref{dim2} and Theorem~\ref{thm_acc} also follow from
results of Kawakita, see \cite[Proposition~4.8, Theorem~5.3]{Kawakita2}.
\end{remark}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\begin{bibdiv}
\begin{biblist}
\bib{Ambro}{article}{
author={Ambro, Florin},
title={On minimal log discrepancies},
journal={Math. Res. Lett.},
volume={6},
date={1999},
number={5-6},
pages={573--580},
}
\bib{Birkar}{article}{
author={Birkar, Caucher},
title={Ascending chain condition for log canonical thresholds and
termination of log flips},
journal={Duke Math. J.},
volume={136},
date={2007},
number={1},
pages={173--180},
}
\bib{dFEM2}{article}{
author={de Fernex, Tommaso},
author={Ein, Lawrence},
author={Musta{\c{t}}{\u{a}}, Mircea},
title={Shokurov's ACC conjecture for log canonical thresholds on smooth
varieties},
journal={Duke Math. J.},
volume={152},
date={2010},
number={1},
pages={93--114},
}
\bib{dFEM1}{article}{
author={de Fernex, Tommaso},
author={Ein, Lawrence},
author={Musta{\c{t}}{\u{a}}, Mircea},
title={Log canonical thresholds on varieties with bounded singularities},
conference={
title={Classification of algebraic varieties},
},
book={
series={EMS Ser. Congr. Rep.},
publisher={Eur. Math. Soc., Z\"urich},
},
date={2011},
pages={221--257},
}
\bib{dFM}{article}{
author={de Fernex, Tommaso},
author={Musta{\c{t}}{\u{a}}, Mircea},
title={Limits of log canonical thresholds},
journal={Ann. Sci. \'Ec. Norm. Sup\'er. (4)},
volume={42},
date={2009},
number={3},
pages={491--515},
}
\bib{ELM}{article}{
author={Ein, Lawrence},
author={Lazarsfeld, Robert},
author={Musta{\c{t}}{\u{a}}, Mircea},
title={Contact loci in arc spaces},
journal={Compos. Math.},
volume={140},
date={2004},
number={5},
pages={1229--1244},
}
\bib{EMY}{article}{
author={Ein, Lawrence},
author={Musta{\c{t}}{\u{a}}, Mircea},
author={Yasuda, Takehiko},
title={Jet schemes, log discrepancies and inversion of adjunction},
journal={Invent. Math.},
volume={153},
date={2003},
number={3},
pages={519--535},
}
\bib{HMX}{article}{
author={Hacon, Christopher D.},
author={McKernan, James},
author={Xu, Chenyang},
title={ACC for log canonical thresholds},
journal={Ann. of Math. (2)},
volume={180},
date={2014},
number={2},
pages={523--571},
}
\bib{Kawakita4}{article}{
author={Kawakita, Masayuki},
title={Ideal-adic semi-continuity problem for minimal log discrepancies},
journal={Math. Ann.},
volume={356},
date={2013},
number={4},
pages={1359--1377},
}
\bib{Kawakita3}{article}{
author={Kawakita, Masayuki},
title={Ideal-adic semi-continuity of minimal log discrepancies on
surfaces},
journal={Michigan Math. J.},
volume={62},
date={2013},
number={2},
pages={443--447},
}
\bib{Kawakita2}{article}{
author={Kawakita, Masayuki},
title={Discreteness of log discrepancies over log canonical triples on a
fixed pair},
journal={J. Algebraic Geom.},
volume={23},
date={2014},
number={4},
pages={765--774},
}
\bib{Kawakita1}{article}{
author={Kawakita, Masayuki},
title={A connectedness theorem over the spectrum of a formal power series ring},
journal={Internat. J. Math.},
volume={26},
date={2015},
number={11},
pages={1550088, 27},
}
\bib{Kollar1}{article}{
author={Koll{\'a}r, J{\'a}nos},
title={Which powers of holomorphic functions are integrable?},
eprint={arXiv:0805.0756v1}
}
\bib{Kollar3}{book}{
author={Koll{\'a}r, J{\'a}nos},
title={Singularities of the minimal model program},
series={Cambridge Tracts in Mathematics},
volume={200},
note={With a collaboration of S\'andor Kov\'acs},
publisher={Cambridge University Press, Cambridge},
date={2013},
}
\bib{KollarMori}{book}{
author={Koll{\'a}r, J{\'a}nos},
author={Mori, Shigefumi},
title={Birational geometry of algebraic varieties},
series={Cambridge Tracts in Mathematics},
volume={134},
publisher={Cambridge University Press, Cambridge},
date={1998},
pages={viii+254},
}
\bib{Maclagan}{article}{
author={Maclagan, Diane},
title={Antichains of monomial ideals are finite},
journal={Proc. Amer. Math. Soc.},
volume={129},
date={2001},
number={6},
pages={1609--1615}
}
\bib{Nakamura}{article}{
author={Nakamura, Yusuke},
title={On semi-continuity problems for minimal log discrepancies},
journal={J. Reine Angew. Math.},
volume={711},
date={2016},
pages={167--187}
}
\bib{Nak2}{article}{
author={Nakamura, Yusuke},
title={On minimal log discrepancies on varieties with fixed Gorenstein
index},
journal={Michigan Math. J.},
volume={65},
date={2016},
number={1},
pages={165--187},
}
\bib{Shokurov}{article}{
author={Shokurov, V. V.},
title={Letters of a bi-rationalist. V. Minimal log discrepancies and
termination of log flips},
language={Russian, with Russian summary},
journal={Tr. Mat. Inst. Steklova},
volume={246},
date={2004},
number={Algebr. Geom. Metody, Svyazi i Prilozh.},
pages={328--351},
translation={
journal={Proc. Steklov Inst. Math.},
date={2004},
number={3 (246)},
pages={315--336},
issn={0081-5438},
},
}
\end{biblist}
\end{bibdiv}
\end{document}
package com.topview.controller.interactivezone;
import com.topview.common.CommonResult;
import com.topview.po.interactivezone.BlogQuestion;
import com.topview.po.interactivezone.BlogReply;
import com.topview.po.usermanage.User;
import com.topview.service.interactivezone.BlogQuestionService;
import com.topview.service.interactivezone.BlogReplyService;
import com.topview.util.StringUtils;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.*;
import javax.annotation.Resource;
import javax.servlet.http.HttpSession;
import java.util.Date;
@Controller
@RequestMapping("/blog_reply")
public class BlogReplyController {
@Resource
private BlogReplyService blogReplyService;
@Resource
private BlogQuestionService blogQuestionService;
@ResponseBody
@RequestMapping(value = "/add", method = RequestMethod.POST)
public CommonResult addReply(@RequestParam("blogQuId") Long blogQuId, @RequestParam("content") String content, HttpSession session) {
User user = (User) session.getAttribute("user");
if (user == null) {
return new CommonResult("Add failed: user is not logged in", false);
}
BlogReply blogReply = new BlogReply();
blogReply.setBlogQuId(blogQuId);
if (StringUtils.isEmpty(content)) {
return new CommonResult("Add failed: reply content cannot be empty", false);
} else if (blogQuestionService.selectOneById(blogQuId) == null) {
return new CommonResult("Add failed: the question does not exist", false);
} else if (blogReplyService.select(blogReply) != null) {
return new CommonResult("Add failed: the question has already been replied to", false);
}
blogReply = new BlogReply(content, blogQuId, user.getId(), new Date(), new Date());
int result = blogReplyService.insert(blogReply);
if (result <= 0) {
return new CommonResult("Add failed", false);
}
return new CommonResult("Add succeeded", true);
}
@ResponseBody
@RequestMapping("/delete/{replyId}")
public CommonResult deleteReply(@PathVariable Long replyId) {
if (blogReplyService.selectOneById(replyId) == null) {
return new CommonResult("Delete failed: the reply does not exist", false);
}
int result = blogReplyService.deleteById(replyId);
if (result <= 0) {
return new CommonResult("Delete failed", false);
}
return new CommonResult("Delete succeeded", true);
}
@ResponseBody
@RequestMapping(value = "/update", method = RequestMethod.POST)
public CommonResult updateReply(BlogReply blogReply) {
if (StringUtils.isEmpty(blogReply.getContent())) {
return new CommonResult("Update failed: please provide the reply content", false);
}
blogReply.setModifiedTime(new Date());
int result = blogReplyService.update(blogReply);
if (result <= 0) {
return new CommonResult("Update failed", false);
}
return new CommonResult("Update succeeded", true);
}
}
How to Find the Source of An Image
June 10, 2014, in PC Tips, by Tony Patarini
Let's say you have a friend on Facebook or Twitter or Instagram or whatever that always posts hilariously horrible pictures.
These pictures are so hilariously horrible, in fact, that you'd like to find out where your friend is getting them from.
A couple years ago this meant you'd have to randomly surf the internet looking for pictures similar to the one your friend posted, hoping you'd eventually find the right website.
But now many search engines have added a "reverse image search" feature. This feature actually lets you search the web using an image instead of a search term.
It works like this: You upload an image to the search engine and the search engine analyzes the image. This analysis takes many areas of the image into consideration, like color and shapes in the image, and then attempts to match them to the closest image with similar metrics in its database.
This is a really complicated process for the computers that do the analysis, but it's really easy to use.
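To get a rough feel for what that analysis involves, here is a toy "average hash" comparison written in Python. This is purely an illustrative sketch: real search engines use far more sophisticated features, and the 4x4 number grids below stand in for downscaled grayscale images.

```python
# Toy perceptual "average hash": each image is reduced to a bit string
# recording which pixels are brighter than the image's own average.
# Similar images produce similar bit strings, so an engine can compare
# compact hashes instead of raw pixels.
def average_hash(pixels):
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    return "".join("1" if p > avg else "0" for p in flat)

def hamming(a, b):
    # Number of differing bits; 0 means the hashes match exactly.
    return sum(x != y for x, y in zip(a, b))

original = [[200, 200, 10, 10],
            [200, 200, 10, 10],
            [10, 10, 200, 200],
            [10, 10, 200, 200]]

# A slightly brightened, recompressed copy of the same picture.
recompressed = [[210, 205, 20, 15],
                [205, 210, 15, 20],
                [20, 15, 210, 205],
                [15, 20, 205, 210]]

h1, h2 = average_hash(original), average_hash(recompressed)
print(hamming(h1, h2))  # 0 -- the two versions hash identically
```

Even though every pixel value changed, the brightness pattern did not, so the two hashes match; that robustness to small edits is what makes this family of techniques useful for reverse image search.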
Reverse Image Search Using Google
Searching for the source of images using Google is almost exactly like searching using words.
On the Google Images search page, click the little camera icon. You should see new options pop up that look like this:
If you have the web address of an image you want to find the source of, type it in or paste it on the line provided.
If you have an image on your computer you'd like to find the source of (more common) click "Upload an image" and Google will let you upload your image.
After that, just press "Search by image" and your search will take place like normal, with the results displayed just like a normal Google image search.
Reverse Image Search Using TinEye
TinEye is a similar reverse image search engine that actually predates Google's image search. It also uses different metrics to compare images, so the results may be different than Google. Different results can be helpful in narrowing down where an image came from or finding all of the locations where an image is hosted.
TinEye's search interface is almost just like Google's. You can search by uploading an image, entering its web address, or just dragging and dropping an image onto the page. That third option could be very useful for people who are not skilled at using computers.
After the image search is complete, TinEye will show a list of all the results it found ranked in order of what it thinks is closest to what you were searching for. It will also provide the web address for any images it finds.
That's it, now you know how to track down the source of images online.
Need to track down the solution to a pesky computer problem? ZookaWare computer experts are online 24/7 for remote technical support.
Moore Is Done With CNN, Says It Gave Him The Political Equivalent Of 'Salmonella'
Tom Williams/CQ-Roll Call Group
Former almost-Federal Reserve Board nominee Stephen Moore explained to radio station WMAL why he would not return to CNN as an analyst: "you go to restaurant and you get salmonella, you don't go back to that restaurant."
"They basically begged me to come over from Fox and they begged me to sign a renewal for my contract," he said. "Then literally, the day after I was announced for the Fed, they turned their knives against me. They were vicious and vile."
Moore was felled by a combination of scandals, including his own sexist past writings, trouble with the IRS and a joke about evicting "a black family from public housing" when the Obamas left the White House.
Here's TPM's deep dive on Moore's rise and fall.
Tom Steyer gets emotional discussing suicides in an Iowa woman's family
Raw video: News correspondent Peter Doocy sits down with the Democratic presidential candidate Tom Steyer.
Democratic presidential candidate Tom Steyer grew emotional during a Fox News interview on Monday as he described a woman he met in Iowa over the weekend who lost several family members to suicide.
Steyer recalled how the woman asked him, during a campaign town hall event on Sunday, about the state of mental health in Iowa.
"And as she was describing this, she mentioned that seven of her family had committed suicide," Steyer said while in Iowa City.
The Democratic candidate went on to say that interactions like this on the campaign trail have taught him that while "we talk about this policy, it is very far from the reality of American life."
"If you see someone whose family has had a flood of suicides because people get depressed and are not getting the support they need, you will understand how close to the bone this is, how really important this choice is, that this country is not investing in and supporting its people, that people are being hurt intentionally," Steyer said. "It is not right. And this is what comes out of it to me."
He then appeared to choke up as he said, "I see the lady, as on my team. I don't want to make someone, the people in my team."
Steyer, the liberal billionaire environmentalist who has long pushed for President Trump's impeachment, has struggled to gain traction in the Democratic race. A Fox News poll of the Democratic field released on Sunday shows Steyer with 1 percent support at the national level.
Fox News' Pat Ward and Alex Pappas contributed to this report.
12-year-old set to graduate from high school and college in the same month
Mike Wimmer will graduate from high school and college next month. Facebook
This education was accelerated at hyper-speed.
As Shakespeare wrote "King Lear" during a bubonic plague outbreak, so did this young genius nab not one but two degrees at warp speed during the coronavirus pandemic.
Mike Wimmer, of Salisbury, North Carolina, is just 12 but on May 21, he is set to graduate from Rowan-Cabarrus Community College — followed by his graduation from Concord Academy High School a week later.
"If one door's locked, he'll find out another way around to figure out how to accomplish his goals," his mother, Melissa Wimmer, told CNN of her son, who decided to enroll in additional courses at school when COVID-19 hit and he found himself with newfound downtime.
A year later, the extra courses have seriously paid off, as Mike has earned enough credits to graduate from both the high school and associate's degree portions of his dual enrollment program. He didn't intend for the timeline, he told CNN, but when he realized he was only a few classes away from graduating, he became even further motivated to complete his degrees.
In addition to the two degrees he's about to hold, Mike also created a successful startup.
Mike didn't compromise his grades for speedier graduation: The prodigy finished college with a 4.0 GPA and high school with a 5.45 — making him valedictorian.
Socially, he has also managed to thrive.
The 12-year-old used his downtime during the pandemic to take more courses in school, enabling him to graduate even earlier than anticipated.
"A lot of people think I've given up my childhood or somehow lost it, and I say to them that I'm having the time of my life," said Mike, who enjoys building Lego sets and playing basketball. Last year, his classmates nominated him to Homecoming Court.
As for what's next, he's got an array of options, including various job offers, a fellowship and a growing startup called Reflect Social.
"My entrepreneurial goal is to build technology that enables people to live better lives," he said.
Mike and his family live in Salisbury, NC.
\section{Introduction}
Information Extraction (IE) is a task of natural language processing that involves extracting structured information, which can be easily interpreted by a machine or a program, from plain unstructured text. Since the Internet is filled with huge amounts of data in the form of text, IE systems are extremely important. They can extract meaningful facts from this text, which can then be used for applications like search and QA. Knowledge bases like Freebase \citep{bollacker2008freebase} and DBpedia \citep{auer2007dbpedia}, which are a source of useful information, are far from complete and can be extended using such systems. Information Extraction itself is a huge task consisting of several subtasks like named entity recognition, relation extraction, and event extraction. In this review, we specifically focus on deep learning methods used for the subtask of relation extraction.
IE can be done in the unsupervised or semi-supervised setting, in the form of OpenIE, where we do not have any predefined ontology of relation classes and we extract facts from the data along with the relation phrases themselves. In the supervised setting, the relation extraction and classification tasks specifically refer to the classification of an entity pair into a set of known relations, using documents containing mentions of the entity pair. The RE task refers to predicting whether a given document contains a relation for the pair or not, modeled as a binary classification. Relation classification refers to predicting which relation class from a given ontology the document expresses, given that it does contain a relation (modeled as a multi-class classification problem). The two tasks can be combined into a single multi-class classification problem with an extra \textit{NoRelation} class.
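As a toy illustration of this combined framing (the three-relation ontology and the keyword cues below are invented for the sketch; a real system would use a trained classifier over learned features), the label set is simply the ontology plus an explicit NoRelation class:

```python
# Joint relation extraction + classification as one multi-class problem:
# the label set is the relation ontology plus an explicit NoRelation
# class. The keyword cues are placeholders standing in for a learned model.
ONTOLOGY = ["born_in", "employed_by", "located_in"]
LABELS = ONTOLOGY + ["NoRelation"]

def classify(sentence):
    """Return one label from LABELS for a sentence with a marked entity pair."""
    cues = {"born in": "born_in",
            "works for": "employed_by",
            "is located in": "located_in"}
    for cue, relation in cues.items():
        if cue in sentence:
            return relation
    # The binary "does any relation exist?" decision is folded into the
    # same classifier through this extra class.
    return "NoRelation"

print(classify("Obama was born in Hawaii"))    # born_in
print(classify("Obama met Merkel yesterday"))  # NoRelation
```

The same classifier thus answers both questions at once: whether the document expresses a relation for the pair, and if so, which one.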
Traditional, non-deep-learning methods for relation extraction typically work in the supervised paradigm. They can be divided into two classes: feature-based methods and kernel-based methods. In both, the extracted features and elaborately designed kernels rely on pre-existing NLP systems, so the errors of the various modules accumulate downstream. Moreover, the manually constructed features may not capture all the relevant information. Moving into the domain of deep learning removes this need to manually engineer features.
Supervised machine learning techniques require large amounts of training data, and hand-annotating datasets for relation extraction takes considerable time and effort. \citet{mintz2009distant} proposed a distant supervision method for producing large amounts of training data by aligning KB facts with texts. Such large datasets make it possible to learn more complex models for the task, like convolutional neural networks. The noise present in datasets generated through distant supervision also requires special ways of modeling the problem, like multi-instance learning, as discussed in the subsequent sections.
\section{Datasets}
\subsection{Supervised Training}
The early works on relation extraction using deep learning employed supervised training datasets that had previously been used by non-deep-learning models. These datasets required intensive human annotation, which meant the data contained high-quality tuples with little to no noise. But human annotation is time-consuming, and as a result these datasets were generally small. Both of the datasets mentioned below contain samples in which the sentence is already labeled with the named entities of interest, and the relation class expressed between the entity pair is to be predicted.
\begin{description}
\item[ACE 2005 dataset] The Automatic Content Extraction dataset contains 599 documents drawn from news and email, with relations divided into 7 major types. Of these, 6 relation types contain enough instances (an average of 700 instances per type) to be used for training and testing.
\item[SemEval-2010 Task 8 dataset] This freely available dataset by \citet{hendrickx2009semeval} contains 10,717 samples, split into 8,000 for training and 2,717 for testing. It contains 9 directed relation types. The directionality effectively doubles the number of relations, since an entity pair is considered correctly labeled only if the order is also correct. The final dataset thus has 19 relation classes ($2 \times 9 + 1$ for the \textit{Other} class).
\end{description}
\subsection{Distant Supervision}
To avoid the laborious task of manually building datasets for relation extraction, \citet{mintz2009distant} proposed a distant supervision approach for automatically generating large amounts of training data. They aligned documents with known KBs under the assumption that if a relation exists between an entity pair in the KB, then every document containing a mention of the entity pair expresses that relation. This distant supervision assumption is clearly very strong: a document containing the entity pair mention need not express the relation between the pair. E.g., for the tuple (\textit{Bill\_Gates}, \texttt{Founder\_of}, \textit{Microsoft}) in the database and the document ``\textit{Bill Gates's turn to philanthropy was linked to the antitrust problems Microsoft had in the U.S. and the European union}'', the document does not express the relation \texttt{Founder\_of} even though it contains both entities.
To alleviate this problem and reduce the noise, \citet{riedel2010modeling} relaxed the distant supervision assumption by modeling the task as a multi-instance learning problem (described in the subsequent section). Their dataset is the most widely used in subsequent works building on distant supervision for relation extraction. It was formed by aligning Freebase relations with the New York Times (NYT) corpus. Entity mentions were found in the documents using the Stanford named entity tagger and matched to the names of Freebase entities. There are 53 possible relation classes, including a special relation \textit{NA} indicating that there is no relation between the entity pair. The training data contains 522,611 sentences, 281,270 entity pairs and 18,252 relational facts. The testing set contains 172,448 sentences, 96,678 entity pairs and 1,950 relational facts.
The evaluation on this dataset is usually done by comparing the extracted facts against the entries in Freebase. However, since Freebase is far from complete, the evaluation scheme suffers from false negatives that understate the performance of the models. For comparative studies, however, it works adequately.
\section{Basic Concepts}
This section covers some basic concepts that are common to most deep learning models for relation extraction.
\subsection{Word Embeddings}
Word embeddings \citep{mikolov2013distributed,pennington2014glove} are a form of distributional representation for the words in a vocabulary, where each word is expressed as a vector in a low-dimensional space (low relative to the size of the vocabulary). Word embeddings aim to capture the syntactic and semantic information about the word. They are learnt using unsupervised methods over large unlabeled text corpora, and are implemented using an embedding matrix $E \in \mathbb{R}^{|V| \times d_w}$, where $d_w$ is the dimensionality of the embedding space and $|V|$ is the size of the vocabulary.
\subsection{Positional Embeddings}
In the relation extraction task, along with word embeddings, the input to the model usually also encodes the relative distance of each word from the entities in the sentence, with the help of positional embeddings (introduced by \citet{zeng2014relation}). This helps the network keep track of how close each word is to each entity; the idea is that words closer to the target entities usually carry more useful information regarding the relation class. The positional embeddings encode the relative distance of the current word from the entities. For example, in the sentence ``Bill\_Gates is the founder of Microsoft.'', the relative distance of the word ``founder'' to the head entity ``Bill\_Gates'' is 3 and to the tail entity ``Microsoft'' is $-2$. Each distance is then encoded in a $d_p$-dimensional embedding.
Finally, the overall sentence $x$ can be expressed as a sequence of vectors $x = \{w_1, w_2, ..., w_m\}$, where every word $w_i \in \mathbb{R}^{d}$ and $d = d_w + 2\times d_p$.
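The input representation above can be sketched as follows. This is a toy NumPy illustration, not the authors' implementation: the sentence, dimensions, clipping range and random initialisations are all assumptions made for the example.

```python
import numpy as np

# Toy input encoder: word embeddings concatenated with two positional
# embeddings (relative distance to the head and tail entity).
rng = np.random.default_rng(0)
tokens = ["Bill_Gates", "is", "the", "founder", "of", "Microsoft"]
head_idx, tail_idx = 0, 5          # positions of the two entities
d_w, d_p, max_dist = 4, 2, 10      # illustrative dimensions

E_word = {t: rng.normal(size=d_w) for t in set(tokens)}
# One positional embedding per relative distance in [-max_dist, max_dist].
E_pos = rng.normal(size=(2 * max_dist + 1, d_p))

def token_vector(i):
    rel_head = int(np.clip(i - head_idx, -max_dist, max_dist)) + max_dist
    rel_tail = int(np.clip(i - tail_idx, -max_dist, max_dist)) + max_dist
    return np.concatenate([E_word[tokens[i]], E_pos[rel_head], E_pos[rel_tail]])

# Sentence as a sequence of vectors, shape (m, d_w + 2 * d_p).
x = np.stack([token_vector(i) for i in range(len(tokens))])
```

Each row of `x` is one $w_i \in \mathbb{R}^{d}$ with $d = d_w + 2 d_p = 8$ here.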
\subsection{Convolutional Neural Networks}
For encoding the sentences further, deep learning models for relation extraction usually use convolutional neural network layers to capture n-gram level features, similar to \citet{collobert2011natural}. The convolutional layer operates as follows. Given an input sentence $x$ as a sequence of vectors $x = \{w_1, w_2, ..., w_m\}, w_i \in \mathbb{R}^d$, if $l$ is the window size of the convolutional kernel, then the vector for the $i$-th window ($q_i \in \mathbb{R}^{(d\times l)}$) is formed by concatenating the input vectors in that window,
\begin{equation}
q_i = w_{i:i+l-1} ; (1\leq i\leq m-l+1)
\end{equation}
A single convolutional kernel then consists of a weight vector $W \in \mathbb{R}^{(d\times l)}$ and a bias $b \in \mathbb{R}$, and the output for the $i$-th window is computed as,
\begin{equation}
p_i = f(W^\top q_i + b)
\end{equation}
where $f$ is the activation function. The output of the convolutional kernel is thus a vector $p \in \mathbb{R}^{(m-l+1)}$. A convolutional layer can consist of $d_c$ convolutional kernels, which makes the output of the layer of shape $\mathbb{R}^{d_c \times (m-l+1)}$.
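The windowing and kernel computation above can be sketched in a few lines. This is a minimal NumPy sketch under assumed dimensions; $\tanh$ is used as an illustrative choice of activation $f$.

```python
import numpy as np

# Toy convolutional layer: each of d_c kernels scores each of the
# m - l + 1 windows of l concatenated input vectors.
rng = np.random.default_rng(1)
m, d, l, d_c = 7, 6, 3, 4          # sentence length, input dim, window, kernels
x = rng.normal(size=(m, d))        # input sentence as a sequence of vectors
W = rng.normal(size=(d_c, d * l))  # d_c kernels, each a vector in R^{d*l}
b = rng.normal(size=d_c)

def conv_layer(x):
    # q_i = concatenation of x[i], ..., x[i+l-1]; one row per window.
    windows = np.stack([x[i:i + l].reshape(-1) for i in range(m - l + 1)])
    return np.tanh(windows @ W.T + b)  # activation f = tanh (assumed)

p = conv_layer(x)  # shape (m - l + 1, d_c)
```

Transposed, `p` matches the $\mathbb{R}^{d_c \times (m-l+1)}$ layer output described above.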
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{ConvLayer}
\caption{Encoder structure with Word and Positional Embeddings followed by Convolutional Layer. (Sourced from \citep{lin2016neural})}
\end{figure}
\begin{table*}
\centering
\begin{tabular}{cccccc}
\textbf{Model} & \textbf{\begin{tabular}[c]{@{}c@{}}Multi-instance \\ Learning\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Word \\ Embeddings\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Positional \\ Embeddings\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Additional \\ Lexical \\ Features\end{tabular}} & \textbf{Max Pooling} \\ \hline \hline
\citet{liu2013convolution} & No & Random & No & Yes & No \\ \hline
\citet{zeng2014relation} & No & Pretrained & \begin{tabular}[c]{@{}c@{}}Yes\\ (Not Trained)\end{tabular} & Yes & Yes \\ \hline
\citet{nguyen2015relation} & No & Word2Vec & Yes & No & Yes \\ \hline
\begin{tabular}[c]{@{}c@{}}PCNN \\ \citet{zeng2015distant}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Yes \\ (1 sentence \\ per bag)\end{tabular} & Word2Vec & Yes & No & \begin{tabular}[c]{@{}c@{}}Yes\\ (Piecewise in \\ a sentence)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}PCNN + Att \\ \citet{lin2016neural}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Yes \\ (Attention weighted \\ sum over bag)\end{tabular} & Word2Vec & Yes & No & \begin{tabular}[c]{@{}c@{}}Yes\\ (Piecewise \\ and Full)\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}MIMLCNN \\ \citet{jiangrelation}\end{tabular} & \begin{tabular}[c]{@{}c@{}}Yes \\ (Max of each \\ feature over bag)\end{tabular} & Word2Vec & Yes & No & \begin{tabular}[c]{@{}c@{}}Yes\\ (Cross sentence \\ in a bag)\end{tabular}
\end{tabular}
\caption{Summary of features used in the various models for relation extraction using CNNs}
\end{table*}
\section{Supervised learning with CNNs}
The early works using deep learning for relation extraction operated in the supervised training paradigm with the hand-annotated corpora mentioned previously. These models tried to assign a relation class label to each sentence containing a mention of the entity pair in focus, modeling the problem as multi-class classification.
\subsection{Simple CNN model ~\citep{liu2013convolution}}
This is perhaps the earliest work that uses a CNN to automatically learn features instead of hand-crafting them. It builds an end-to-end network that first encodes the input sentence using word vectors and lexical features, followed by a convolutional layer, a single-layer neural network, and a softmax output layer giving a probability distribution over all the relation classes.
The model uses synonym vectors instead of word vectors, assigning a single vector to each synonym class rather than to every individual word. However, it fails to exploit the real representational power of word embeddings: the embeddings are not trained in an unsupervised fashion on a corpus, but randomly assigned to each synonym class. The model further incorporates some lexical features using word lists, POS lists and entity type lists. It outperforms the state-of-the-art kernel-based model of the time on the ACE 2005 dataset by 9 points of F-score. Several improvements could still be made, but as a first step it served as a proof of concept that deep learning models can perform as well as, or better than, the rigorously engineered feature-based or kernel-based models.
\subsection{CNN model with max-pooling ~\citep{zeng2014relation}}
Like the previous model, this paper used a CNN for encoding sentence-level features; unlike it, the word embeddings were pre-trained on a large unlabeled corpus. The paper was also the first work to use the positional embeddings described earlier, which became standard in all subsequent deep learning RE models. The model also used lexical-level features such as information about the nouns in the sentence and their WordNet hypernyms.
One important contribution of this model was the use of a max-pooling layer over the output of the convolutional network. The output of the convolutional layer $Z \in \mathbb{R}^{d_c\times (m-l+1)}$ depends on the size of the input sentence $m$. To make this output independent of $m$, and to capture the most useful feature in each dimension of the feature vector across the entire sentence, a max operation is used to collapse $Z$ to $Z' \in \mathbb{R}^{d_c}$. Hence the dimension of $Z'$ is no longer related to the sentence length $m$. The model was shown to outperform SVM and MaxEnt based models that used a variety of lexical features. Their ablation study also showed that the positional embeddings gave almost a 9\% improvement in F-score.
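The length-independence of the pooled vector can be seen directly in a toy sketch (dimensions are illustrative assumptions): two sentences of different lengths yield pooled vectors of the same size $d_c$.

```python
import numpy as np

# Max-pooling over the window axis: Z has one column per window, so its
# width varies with sentence length; Z' = max over that axis is fixed-size.
rng = np.random.default_rng(2)
d_c = 4
Z_short = rng.normal(size=(d_c, 5))   # short sentence (m - l + 1 = 5 windows)
Z_long = rng.normal(size=(d_c, 12))   # longer sentence (12 windows)

Z_short_pooled = Z_short.max(axis=1)  # both pooled vectors lie in R^{d_c}
Z_long_pooled = Z_long.max(axis=1)
```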
\subsection{CNN with multi-sized window kernels ~\citep{nguyen2015relation}}
This was one of the last works in the supervised domain for relation extraction, building on \citet{liu2013convolution} and \citet{zeng2014relation}. The model dispenses entirely with external lexical features for enriching the input representation and lets the CNN learn the required features itself. The architecture is similar to \citet{zeng2014relation}, consisting of word and positional embeddings followed by convolution and max-pooling. Additionally, the authors incorporate convolutional kernels of varying window sizes to capture wider ranges of n-gram features. Experimenting with different configurations, they find that kernels with window lengths 2-3-4-5 give the best performance. They also initialize the word embedding matrix with embeddings pre-trained using word2vec \citep{mikolov2013distributed}, which gives a significant boost over random vectors and static word2vec vectors.
\section{Multi-instance learning models with distant supervision}
As mentioned previously, \citet{riedel2010modeling} relaxed the distant supervision assumption by modeling the task as a multi-instance learning problem, so that the large training data created by distant supervision could be exploited while remaining robust to label noise. Multi-instance learning is a form of supervised learning where a label is given to a bag of instances rather than to a single instance. In the context of RE, every entity pair defines a bag, consisting of all the sentences that contain a mention of the pair. Instead of labeling every sentence, a relation class label is given to each bag. \citet{riedel2010modeling} model this with the assumption that ``if a relation exists between an entity pair, then at least one document in the bag for the entity pair must reflect that relation''.
\subsection{Piecewise Convolutional Neural Networks ~\citep{zeng2015distant}}
The PCNNs model uses the multi-instance learning paradigm with a neural network model to build a relation extractor from distant supervision data. The architecture is similar to the models of \citet{zeng2014relation} and \citet{nguyen2015relation} discussed previously, with one important contribution: piecewise max-pooling across the sentence. The authors argue that a single max-pooling operation over the whole sentence reduces the hidden representation too drastically and cannot capture the structure between the entities. This can be avoided by max-pooling within different segments of the sentence instead of over the entire sentence: every sentence can naturally be divided into 3 segments based on the positions of the 2 entities in focus. By max-pooling within each segment, we get a richer representation while still obtaining a vector independent of the input sentence's length.
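The piecewise pooling described above can be sketched as follows. This is a toy NumPy illustration, with assumed dimensions and assumed entity positions, not the paper's implementation.

```python
import numpy as np

# Piecewise max-pooling: the two entity positions split the window axis
# into three segments; a max within each segment gives a 3*d_c vector
# that is still independent of sentence length.
rng = np.random.default_rng(3)
d_c, n_windows = 4, 10
Z = rng.normal(size=(d_c, n_windows))  # convolutional layer output
e1, e2 = 3, 7                          # (assumed) entity window positions

segments = [Z[:, :e1], Z[:, e1:e2], Z[:, e2:]]
z_pcnn = np.concatenate([seg.max(axis=1) for seg in segments])  # R^{3*d_c}
```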
One drawback of this model, addressed in later works, is the way the multi-instance problem enters the loss function. The paper defined the training loss as follows. Given $T$ bags of documents, with bag $i$ containing $q_i$ documents and having label $y_i$, $i = 1,2,...,T$, the neural network gives the probability of extracting relation $r$ from document $j$ of bag $i$, denoted $d_i^j$, as
\begin{equation}
p(r|d_i^j, \theta); j = 1,2,...,q_i
\end{equation}
where $\theta$ is the weight parameters of the neural network. Then the loss is given as,
\begin{equation}
J(\theta) = \sum_{i=1}^T \log p(y_i|d_i^{j^*}, \theta)
\end{equation}
\begin{equation}
j^* = \arg\max_j p(y_i|d_i^j, \theta); \quad j=1,2,...,q_i
\end{equation}
Thus, since the method assumes that ``at least one document in the bag expresses the relation of the entity pair'', it uses only the single most likely document for the entity pair during training and prediction. This means the model neglects large amounts of useful information expressed by the other sentences in the bag. Even though not every sentence in the bag expresses the correct relation, using only a single sentence is a very hard constraint. This issue is addressed in subsequent works.
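The hard selection of the single most likely instance can be sketched as follows. The scores are random stand-ins for PCNN outputs; sizes and the bag label are illustrative assumptions.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

# 'At least one' objective: only the instance j* whose probability of the
# bag label y is highest contributes to the loss.
rng = np.random.default_rng(4)
n_relations, bag_size = 5, 3
scores = rng.normal(size=(bag_size, n_relations))  # one score row per sentence
probs = np.apply_along_axis(softmax, 1, scores)    # p(r | d_i^j, theta)

y = 2                                # (assumed) bag label
j_star = int(np.argmax(probs[:, y])) # most likely instance for label y
loss = -np.log(probs[j_star, y])     # only d_i^{j*} enters the loss
```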
The PCNNs model with multi-instance learning is shown to outperform traditional non-deep-learning models, namely the distant-supervision-based model by \citet{mintz2009distant}, the multi-instance learning method \textit{MultiR} of \citet{hoffmann2011knowledge}, and the multi-instance multi-label model \textit{MIML} of \citet{surdeanu2012multi}, on the dataset by \citet{riedel2010modeling} (Figure 3). The results are discussed further in a later section. Their ablation study also shows the advantages of PCNNs over CNNs and of multi-instance learning over traditional learning, each adding incrementally to the model as shown in Figure 2.
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{PCNNs}
\caption{Effect of piecewise max pooling and multi-instance learning. (Sourced from \citep{zeng2015distant})}
\end{figure}
\subsection{Selective Attention over Instances ~\citep{lin2016neural}}
To address the shortcoming of the previous model, which used only the single most relevant sentence from the bag, \citet{lin2016neural} applied an attention mechanism over all the instances in the bag. In this model, each sentence $d_i^j$ of bag $i$ is first encoded into a vector representation $r_i^j$ using a PCNN or a CNN, as defined previously. The final vector representation for the bag is then an attention-weighted average of all the sentence vectors ($r_i^j, j=1,2...q_i$) in the bag. The model computes a weight $\alpha_j$ for each instance $d_i^j$ of bag $i$; these weights are dynamic in the sense that they differ for each bag and depend on the relation type and the document instance. The final feature vector for the bag of sentences is given as,
\begin{equation}
r_i = \sum_{j=1}^{q_i}\alpha_j r_i^j
\end{equation}
When the loss is computed using this attention-weighted representation of all the instances in the bag, the model inherently learns to separate the informative sentences from the noisy ones, and all the information in the bag is utilized for the relation class prediction.
It can also be observed that the `one most likely sentence' approach used in the PCNN paper is a special case of selective attention in which $\alpha_{j^*} = 1$ for the $j^*$ defined by equation (5) and all remaining $\alpha$ values are zero (hard attention). Using selective attention significantly improves the precision-recall curve of both the CNN and PCNN models. The model predicts the correct relations with higher confidence, as it is able to gather evidence over multiple sentences in the bag.
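A soft attention-weighted bag representation can be sketched as below. The bilinear scoring form and all dimensions are assumptions made for illustration, loosely following the paper rather than reproducing its exact parameterisation.

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

# Selective attention over a bag: score each sentence vector against a
# relation query, normalise into weights, and take the weighted sum.
rng = np.random.default_rng(5)
bag_size, D = 4, 6
R = rng.normal(size=(bag_size, D))  # sentence vectors r_i^j
A = rng.normal(size=(D, D))         # (assumed) bilinear attention matrix
q = rng.normal(size=D)              # query vector for the candidate relation

alpha = softmax(R @ A @ q)          # one weight per instance, sums to 1
r_bag = alpha @ R                   # attention-weighted bag representation
```

Setting one weight to 1 and the rest to 0 recovers the hard selection of the PCNN model as a special case.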
\subsection{Multi-instance Multi-label CNNs ~\citep{jiangrelation}}
The authors of this paper address the information loss problem in \citet{zeng2015distant} by using a cross-document max-pooling layer.
As in the attention model, a vector representation $r_i^j$ is first found for each sentence $d_i^j$ of bag $i$. The final vector representation for the bag of sentences is then found by taking a dimension-wise max over the sentence vectors ($r_i^j, j=1,2...q_i$). The final feature vector for the bag of sentences is given as,
\begin{equation}
(r_i)_k = \max_{j=1,2...q_i} (r_i^j)_k; k=1,2...D
\end{equation}
where $r_i^j, r_i \in \mathbb{R}^{D}$. This allows each feature in the final feature vector to come from the document most prominent for that feature, rather than the entire feature vector coming from a single overall most prominent document.
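The cross-sentence max of equation above is a one-liner in practice; this toy NumPy sketch uses assumed dimensions and random stand-ins for the sentence vectors.

```python
import numpy as np

# Cross-sentence max-pooling: each coordinate of the bag vector is the
# maximum of that coordinate over all sentence vectors in the bag, so
# different features may come from different sentences.
rng = np.random.default_rng(6)
bag_size, D = 3, 5
R = rng.normal(size=(bag_size, D))  # sentence vectors r_i^j

r_bag = R.max(axis=0)               # (r_i)_k = max_j (r_i^j)_k
```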
The paper also addresses the multi-label aspect of relation extraction. Up to this point, all models predicted a single relation class per entity pair. But the same entity pair can have multiple relations (called overlapping relations), supported by different documents. For example, (Steve\_Jobs, \texttt{Founded}, Apple) and (Steve\_Jobs, \texttt{CEO\_of}, Apple) are both valid relations between the same entity pair (Steve\_Jobs, Apple), possibly supported by different sentences. The authors modify the architecture to have sigmoid activations instead of a softmax in the final layer, so the network predicts a probability for each relation class independently rather than a probability distribution over the relations. The loss for training the model is then defined as,
\begin{equation}
J(\theta) = \sum_{i=1}^T \sum_{r=1}^R y_r^i\log p_r^i + (1-y_r^i)\log (1-p_r^i)
\end{equation}
where $R$ is the number of relation classes, $p_r^i$ is the probability predicted by the network that bag $i$ expresses relation $r$, and $y_r^i$ is a binary label indicating whether bag $i$ has relation $r$.
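The per-class sigmoid and binary cross-entropy sum can be sketched directly; logits and labels here are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Multi-label objective: a sigmoid per relation class gives independent
# probabilities (not a softmax distribution), and the loss sums the binary
# cross-entropies over classes.
rng = np.random.default_rng(7)
n_classes = 5
logits = rng.normal(size=n_classes)      # network outputs for one bag
y = np.array([0.0, 1.0, 0.0, 1.0, 0.0])  # bag may carry several relations

p = 1.0 / (1.0 + np.exp(-logits))        # independent sigmoids
loss = -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))
```

Because the probabilities are independent, `p` need not sum to 1, which is what lets the model assign several relations to one entity pair.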
Like the selective attention mechanism, the MIMLCNN model improves on the CNN and PCNN models, as it exploits information across multiple documents in the bag by using the most prominent document for each feature. The results are discussed further in the next section.
\section{Results}
Figure 3 summarizes the results of the various multi-instance learning models on the distant supervision dataset created by \citet{riedel2010modeling}. It shows the results for 3 non-deep-learning models, namely \textit{Mintz} \citep{mintz2009distant}, \textit{MultiR} \citep{hoffmann2011knowledge} and \textit{MIML} \citep{surdeanu2012multi}, alongside the deep learning models discussed in the previous sections.
All the deep learning models perform significantly better than the non-deep-learning models. The multi-instance multi-label mechanism (\textit{MIMLCNN}) further improves the curve over the \textit{PCNN} model. However, the selective attention mechanism applied over the \textit{PCNN} model gives the best performance of all. It is interesting to note the gain from the \textit{PCNN} curve to the \textit{PCNN+Att} curve compared with the \textit{MIMLCNN} curve: since attention is a soft selection mechanism, it turns out to be more robust and exploits information across the sentences more effectively than even the cross-document max mechanism used in \textit{MIMLCNN}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{Results}
\caption{Results for Multi-instance learning models. (Sourced from \citep{lin2016neural} and \citep{jiangrelation})}
\end{figure}
\section{Concluding Remarks}
With the introduction of distant supervision for relation extraction by \citet{mintz2009distant}, modeling the task as a multi-instance problem has been widely adopted. This mechanism also provides enough data for deep learning models to be trained in the multi-instance setting, which accommodates the labeling noise in the data. Successive works have handled the noise and the distant supervision assumption with mechanisms like selective attention over document instances and cross-document max-pooling, which have been shown to increase performance. Some very recent works also try to exploit the interactions between relations through relation paths \citep{zeng2016incorporating} and relation class ties \citep{ye2016jointly} to improve performance further. For example, relations like \texttt{Father\_of} and \texttt{Mother\_of} can be exploited to extract instances of \texttt{Spouse\_of}.
However, these improvements only touch the training and inference methods of the model. As far as the deep learning aspect is concerned, the CNN or PCNN architecture used to encode the sentences is the same across all these works. It is surprising that, to the best of our knowledge, no work on relation extraction has tried to use recurrent neural networks (RNNs) in place of CNNs for encoding the sentences. RNNs and LSTMs intuitively fit natural language tasks more naturally. Although the NLP literature does not draw a clear line between the domains where CNNs or RNNs perform better, recent work has shown that each provides complementary information for text classification tasks \citep{yin2017comparative}. While RNNs perform well on document-level sentiment classification \citep{tang2015document}, CNNs have been shown to outperform LSTMs on language modeling \citep{dauphin2016language}. Future work on relation extraction could thus experiment with LSTMs for encoding sentences and relations.
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 5,466
|
<?php
namespace Stitcher\Test\Integration;
use Stitcher\Test\CreateStitcherObjects;
use Stitcher\Test\StitcherTest;
use Symfony\Component\Yaml\Yaml;
class PaginatedMetaTest extends StitcherTest
{
use CreateStitcherObjects;
/** @test */
public function paginated_meta_test(): void
{
$pageParser = $this->createPageParser();
$pages = $pageParser->parse($this->createConfiguration());
$metaPage1 = $pages->get('test/page-1')->meta()->render();
$metaPage2 = $pages->get('test/page-2')->meta()->render();
$metaPage3 = $pages->get('test/page-3')->meta()->render();
$this->assertContains('next', $metaPage1);
$this->assertNotContains('prev', $metaPage1);
$this->assertContains('next', $metaPage2);
$this->assertContains('prev', $metaPage2);
$this->assertNotContains('next', $metaPage3);
$this->assertContains('prev', $metaPage3);
}
private function createConfiguration(): array
{
return Yaml::parse(<<<EOT
id: test/page-{page}
template: index.twig
variables:
entries:
a:
name: A
b:
name: B
c:
name: C
config:
pagination:
variable: entries
perPage: 1
parameter: page
EOT
);
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 6,708
|
<div class="no-padding col-md-3" >
<?php $form=$this->beginWidget('CActiveForm'); ?>
<div class="">
<?php echo $form->dropDownList($model, 'itemname', $itemnameSelectOptions, array('maxlength'=>255, 'class'=>'form-control')); ?>
<?php echo $form->error($model, 'itemname'); ?>
</div>
</div>
<div class="no-padding col-md-1" >
<div class="buttons">
<?php echo CHtml::submitButton(Rights::t('core', 'Assign'), array("class" => "btn btn-blue") ); ?>
</div>
<?php $this->endWidget(); ?>
</div>
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,613
|
Community preference was a concept in the European Union in which all the member states would be encouraged by the Institutions of the European Union and the Treaties of the European Union to give priority preference to all goods, trade, services, agricultural products and people from their fellow EU member states over all goods, trade, services and people from non-EU countries. Proponents argued that this would add to the benefits of EU membership by encouraging the member states to trade with each other rather than with non-EU countries outside the bloc. It would serve as an integral part of the freedom of movement for workers in the European Union as well as the European Single Market and the European Union Customs Union.
Community preference would not apply to countries of the European Free Trade Association even though they were members of the European Single Market and observed freedom of movement rules.
It was one of the founding principles of the establishment of the European Communities (which would later become the European Union) when the Treaty of Rome was signed in 1958. But in a 1994 judgment, Greece v Council (Case C-353/92), the European Court of Justice (ECJ) confirmed that Community preference was not a principle of EU law. Its legal basis, Article 44 of the Treaty of Rome, was repealed by the 1997 Amsterdam Treaty.
See also
European Union Single Market
European Union Customs Union
Common Agricultural Policy
Common Fisheries Policy
References
Trade blocs
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 7,402
|
Blowout Sale! Up to 65% off on Electronic Accessories at Nicky's Blog, Page 4. Top brands include Lodgepole Leathercraft, Coal Creek Leather, Pacography, JJNUSA, Active Patch, HouseOfBlings, EXTRA STUDIO, AlphaCovers, FobulousFinds, & D&M Leather Studio. Hurry! Limited time offers. Offers valid only while supplies last.
Slim Case for iPhone 5, 5s, SE. USA Flag.
Slim Case for Samsung Galaxy J3 (2017) J327, Emerge, Eclipse, Prime. USA Flag.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 3,540
|
package org.elasticsearch.xpack.monitoring.action;
import org.elasticsearch.Version;
import org.elasticsearch.common.io.stream.BytesStreamOutput;
import org.elasticsearch.common.io.stream.StreamInput;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.test.ESTestCase;
import org.elasticsearch.test.VersionUtils;
import org.elasticsearch.xpack.core.monitoring.action.MonitoringBulkResponse;
\section{Introduction}
The problem of recovering a distribution function from observations additively contaminated with measurement errors is the object of study in this note.
Assuming data are sampled from a convolution kernel mixture, the interest is in \vir{estimating} the mixing or latent distribution from contaminated observations.
The statement of the problem is as follows. Let $X$ be a random variable (r.v.) with probability measure $P_0$
on the Borel-measurable space $(\mathbb{R},\,\mathscr{B}(\mathbb{R}))$, with Lebesgue density $p_0:=\d P_0/\d \lambda$.
Suppose that
\[X=Y+Z,\]
where $Y$ and $Z$ are independent,
unobservable random variables,
$Z$ having Lebesgue density $f$. We examine the case where the error
has the standard Laplace distribution with density
$$f(z)=\frac{1}{2}e^{-|z|}, \quad
z\in\mathbb{R}.$$
The r.v. $Y$ has unknown distribution $G_0$ on some measurable space
$(\mathscr Y,\,\mathscr B(\mathscr Y))$, with $\mathscr Y\subseteq \mathbb{R}$ and
$\mathscr B(\mathscr{Y})$ the Borel $\sigma$-field on
$\mathscr Y$.
The density $p_0$ is then
the convolution of $G_0$ and $f$,
\[p_0(x)=(G_0\ast f)(x)=\int_{\mathscr Y}f(x-y)\,\d G_0(y),\quad x\in\mathbb{R}.\]
In what follows, we also write $p_0\equiv p_{G_0}$ to stress the
dependence of $p_0$ on $G_0$. Letting $\mathscr G$ be the set of all probability measures $G$ on
$(\mathscr Y,\,\mathscr B(\mathscr Y))$, the parameter space
\[\mathscr P:=\Bigg\{p_G(\cdot):=\int_{\mathscr Y}f(\cdot-y)\,\d G(y),\, G\in\mathscr G\Bigg\}\]
is the collection of all convolution Laplace mixtures
and the model is nonparametric.
Suppose we observe $n$ independent copies $X_1,\,\ldots,\,X_n$ of $X$.
The r.v.'s $X_1,\,\ldots,\,X_n$ are independent and identically distributed (i.i.d.) according to the density $p_0\equiv p_{G_0}$ on the real line.
The interest is in recovering the mixing distribution $G_0\in\mathscr G$ from indirect observations.
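As a concrete illustration of this observational scheme, the following sketch simulates contaminated data $X_i=Y_i+Z_i$ from a Laplace convolution mixture; the two-point mixing distribution $G_0$, the sample size and the seed are purely illustrative choices, not quantities from this note.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Hypothetical two-point mixing distribution G0 = 0.3*delta_{-2} + 0.7*delta_{1}.
atoms = np.array([-2.0, 1.0])
weights = np.array([0.3, 0.7])

# Latent variables Y ~ G0 and standard Laplace errors Z with density e^{-|z|}/2.
Y = rng.choice(atoms, size=n, p=weights)
Z = rng.laplace(loc=0.0, scale=1.0, size=n)

# Observed contaminated sample X = Y + Z, distributed according to p0 = G0 * f.
X = Y + Z

# Sanity check: E[X] = E[Y] = 0.3*(-2) + 0.7*1 = 0.1, since the error has mean zero.
print(X.mean())
```

Only the sum $X$ is observed; the deconvolution problem is to recover $G_0$ from such a sample.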
Deconvolution problems may arise in a wide variety of contexts,
the error distribution being typically modelled as a Gaussian,
although the Laplace distribution also has relevant applications.
Full density deconvolution, together with the related many normal
means problem, has drawn attention in the literature since the late 1950s
and different deconvolution methods have been proposed and developed since then taking the
frequentist approach, the most popular being based on nonparametric maximum likelihood and kernel methods. Rates of convergence have been mostly investigated for \emph{density} deconvolution:
Fan (1991a, 1991b) showed that deconvolution kernel density estimators achieve global optimal rates for weighted $L^p$-risks, $p\geq1$, when the smoothness of the density to be recovered is measured in terms of the number of its
derivatives. Hall and Lahiri (2008) considered estimation of the \emph{distribution function} using
the cumulative distribution function corresponding to the deconvolution kernel density estimator
and showed that it attains minimax-optimal pointwise and global rates for the integrated mean-squared error over different functional classes for the error and latent distributions, smoothness being described through the tail behaviour of their Fourier transforms.
For a comprehensive account of the topic, the reader may refer to the monograph of Meister (2009).

In this note, we do not assume that the probability measure $G_0$ possesses a Lebesgue density. Wasserstein metrics are then particularly well-suited as global loss functions: convergence in $L^p$-Wasserstein
metrics for discrete mixing distributions has, in fact, a natural interpretation in terms of convergence
of the single supporting atoms of the probability measures involved.
Dedecker \emph{et al}. (2015) have obtained a lower bound on the rate of convergence for the $L^p$-Wasserstein risk, $p\geq1$, when no smoothness assumption,
except for a moment condition, is imposed on the latent distribution and the error distribution is ordinary smooth, the Laplace being a special case.
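For the Laplace error, the deconvolution kernel density estimator mentioned above admits a closed form: since $1/\hat f(t)=1+t^2$, dividing the empirical characteristic function by $\hat f$ amounts to applying $(1-\d^2/\d x^2)$ to an ordinary kernel estimate. A minimal numerical sketch, with a Gaussian kernel and illustrative data and bandwidth (not choices made in this note), is:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative data: latent Y ~ N(0, 1), standard Laplace error Z.
n = 2000
X = rng.normal(size=n) + rng.laplace(size=n)

def deconv_density(x, data, h):
    # Deconvolution kernel estimate for Laplace error: because
    # 1/f_hat(t) = 1 + t^2, the estimator is (1/n) * sum_j [K_h - K_h''](x - X_j)
    # for an ordinary (here Gaussian) kernel K_h.
    u = (x[:, None] - data[None, :]) / h
    phi = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    Kh = phi / h                       # K_h(x - X_j)
    Kh_dd = phi * (u**2 - 1.0) / h**3  # K_h''(x - X_j)
    return np.mean(Kh - Kh_dd, axis=1)

x = np.linspace(-8.0, 8.0, 801)
p_hat = deconv_density(x, X, h=0.5)

# The estimate integrates to ~1 (K_h integrates to one, K_h'' to zero),
# though, as is typical in deconvolution, it need not be nonnegative.
print(np.sum(p_hat) * (x[1] - x[0]))
```

The lack of nonnegativity is one reason the estimators studied below, which live in the model $\mathscr P$ by construction, are attractive.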
Deconvolution problems have only recently begun to be studied from a Bayesian perspective: the typical scheme considers the mixing distribution as a draw
from a Dirichlet process prior.
Posterior contraction rates for recovering the mixing distribution
in $L^p$-Wasserstein metrics have been
investigated in Nguyen (2013) and Gao and van der Vaart (2016), even though
the upper bounds in these articles do not match with the lower bound in
Dedecker \emph{et al}. (2015).
Minimax-optimal adaptive recovery rates for mixing densities belonging to Sobolev spaces
have been instead obtained by Donnet \emph{et al}. (2018)
in a fully Bayes as well as in an empirical Bayes approach to inference, the latter accounting for a data-driven choice of the prior hyperparameters of the Dirichlet process baseline measure.
In this note, we study nonparametric Bayes and maximum likelihood estimation of the mixing distribution $G_0$, when no smoothness assumption is imposed on it.
The analysis begins with the estimation of the sampling density $p_0$: estimating the \emph{mixed} density $p_0$ can, in effect, be the first step for recovering the \emph{mixing} distribution $G_0$.
Taking a Bayesian approach, if the random density $p_G$ is modelled as
a Dirichlet-Laplace mixture, then $p_0$ can be consistently estimated at a rate
$n^{-3/8}$, up to a $(\log n)$-factor, if $G_0$ has tails matching those of the baseline measure of the Dirichlet process, which essentially requires $G_0$ to be in the weak support of the process, see Propositions \ref{prop:2} and \ref{prop:1}.
This requirement allows us to extend the results of Gao and van der Vaart (2016), which cover only the case of compactly supported mixing distributions, to a possibly unbounded set of locations.
Taking a frequentist approach, $p_0$ can be estimated by the maximum likelihood still
at a rate $n^{-3/8}$, up to a logarithmic factor.
As far as we are aware, the result on the rate of convergence in the Hellinger metric for the maximum likelihood estimator (MLE) of a Laplace convolution mixture is new and is obtained taking the approach proposed by Van de Geer (1996), according to which it is the \vir{dimension} of the class of kernels and the behaviour of $p_0$ near zero that determine the rate of convergence for the MLE. As previously mentioned, results on the estimation of $p_0$ are interesting in view of
the fact that, appealing to an inversion inequality translating the Hellinger or the $L^2$-distance between kernel mixtures, with Fourier transform of the kernel density having polynomially decaying tails, into any $L^p$-Wasserstein distance, $p\geq1$, between the corresponding mixing distributions, rates of convergence in the $L^1$-Wasserstein metric for the MLE and the Bayes' estimator of the mixing distribution can be assessed.
Merging in the $L^1$-Wasserstein metric between Bayes and maximum likelihood for deconvolving Laplace mixtures follows as a by-product.
\medskip
\noindent \emph{Organization}.
The note is organized as follows. Convergence rates in the Hellinger metric for Bayes and maximum likelihood density estimation of Laplace convolution mixtures are preliminarily studied in Sect. \ref{sec:Bayes} and in Sect. \ref{sec:MLE}, respectively, in view of their subsequent instrumental use for assessing the $L^1$-Wasserstein accuracy of the two estimation procedures in recovering the mixing distribution of the sampling density. Merging between Bayes and maximum likelihood follows, as shown in Sect. \ref{sec:merging}. Remarks and suggestions for possible refinements and extensions of the exposed results are presented in Sect. \ref{sec:finrmks}. Auxiliary lemmas, along with the proofs of the main results, are deferred to Appendices A--D.
\medskip
\noindent \emph{Notation}. We fix the notation and recall some
definitions used throughout.\\[7pt]
{\textsf{Calculus}}
\begin{itemize}
\item[--] The symbols \vir{$\lesssim$} and
\vir{$\gtrsim$} indicate inequalities valid up to a constant multiple that is universal or fixed within the context, but anyway
inessential for our purposes.
\item[--] For sequences of real numbers $(a_n)_{n\in\mathbb{N}}$ and $(b_n)_{n\in\mathbb{N}}$, the notation $a_n\sim b_n$ means that $(a_n/b_n)\rightarrow 1$ as $n\rightarrow+\infty$.
Analogously, for real-valued functions $f$ and $g$,
the notation $f\sim g$ means that $f/g\rightarrow1$
in an asymptotic regime that is
clear
from the context.
\end{itemize}
{\textsf{Covering and entropy numbers}}
\begin{itemize}
\item[--] Let $(T,\,d)$ be a (subset of a) semi-metric space. For every $\varepsilon>0$, the $\varepsilon$-\emph{covering number} of $(T,\,d)$, denoted by $N(\varepsilon,\,T,\,d)$, is defined as the minimum number of $d$-balls of radius $\varepsilon$ needed to cover $T$.
Take $N(\varepsilon,\,T,\,d)=+\infty$ if no finite covering by $d$-balls of radius $\varepsilon$ exists. The logarithm of the $\varepsilon$-covering number, $\log N(\varepsilon,\,T,\,d)$,
is called the $\varepsilon$-\emph{entropy}.
\smallskip
\item[--] Let $(T,\,d)$ be a (subset of a) semi-metric space. For every $\varepsilon>0$, the $\varepsilon$-\emph{packing number} of $(T,\,d)$, denoted by $D(\varepsilon,\,T,\,d)$, is defined as the maximum number of points in $T$ such that the distance between each pair is at least $\varepsilon$. Take $D(\varepsilon,\,T,\,d)=+\infty$ if no such finite $\varepsilon$-packing exists. The logarithm of the $\varepsilon$-packing number, $\log D(\varepsilon,\,T,\,d)$,
is called the $\varepsilon$-\emph{entropy}.
\vspace*{-0.05cm}
\end{itemize}
Covering and packing numbers are related by the inequalities
$$N(\varepsilon,\,T,\,d)\leq D(\varepsilon,\,T,\,d)\leq N(\varepsilon/2,\,T,\,d).$$
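These inequalities can be checked by brute force on a toy finite metric space; the six-point set and the value of $\varepsilon$ below are arbitrary illustrative choices.

```python
from itertools import combinations

# Toy finite metric space: six points on the line with the absolute distance.
T = [0, 1, 2, 3, 4, 5]

def dist(a, b):
    return abs(a - b)

def covering_number(eps):
    # Minimum number of closed eps-balls with centres in T needed to cover T.
    for k in range(1, len(T) + 1):
        for centres in combinations(T, k):
            if all(any(dist(t, c) <= eps for c in centres) for t in T):
                return k

def packing_number(eps):
    # Maximum number of points of T with pairwise distances at least eps.
    for k in range(len(T), 0, -1):
        for pts in combinations(T, k):
            if all(dist(a, b) >= eps for a, b in combinations(pts, 2)):
                return k

eps = 1.5
N, D, N_half = covering_number(eps), packing_number(eps), covering_number(eps / 2)
print(N, D, N_half)  # prints: 2 3 6, and indeed N(eps) <= D(eps) <= N(eps/2)
```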
{\textsf{Function spaces and probability}}
\begin{itemize}\item[--] For a real number $1\leq p <+\infty$, let
$$L^p(\mathbb{R}):=\{f|\,f:\mathbb{R}\rightarrow\mathbb{C},\,f \mbox{ is Borel measurable, }\int|f|^p\,\d\lambda<+\infty\}.$$
For $f\in L^p(\mathbb{R})$, the $L^p$-norm of $f$ is defined as
$||f||_p:=(\int|f|^p\,\d\lambda)^{1/p}$. The supremum norm of a function $f$ is defined as $||f||_\infty:=\sup_{x\in\mathbb{R}}|f(x)|$.
\item[--] For $f\in L^1(\mathbb{R})$, the complex-valued function $\hat f(t):=\int_{-\infty}^{+\infty} e^{itx}f(x)\,\d x$, $t\in\mathbb{R}$, is called the \emph{Fourier transform of $f$.}
\item[--] All probability density functions are meant to be with respect to Lebesgue measure $\lambda$
on $\mathbb{R}$ or on some subset thereof.
\item[--] The same symbol, $G$ (say), is used to denote a probability measure on a Borel-measurable space $(\mathscr Y,\,\mathscr B(\mathscr Y))$
and the corresponding cumulative distribution function (c.d.f.).
\item[--] The degenerate probability distribution putting mass one at a point $\theta\in\mathbb{R}$ is denoted by $\delta_\theta$.
\item[--] The notation $Pf$ abbreviates the expected value $\int f\,\d P$, where the integral is understood to extend over the entire natural domain when, here and elsewhere, the domain of integration is omitted.
\item[--] Given a r.v.
$Y$ with distribution $G$, the \emph{moment generating function} of $Y$ or the \emph{Laplace transform of the probability measure $G$} is defined as
$$M_G(s):=E[e^{sY}]=\int_{\mathscr{Y}}e^{sy}\,\d G(y) \,\,\,\mbox{ for all $s$ for which the integral is finite.}$$
\end{itemize}
{\textsf{Metrics and divergences}}
\begin{itemize}
\item[--] The \emph{Hellinger distance} between any pair of probability density functions $q_1$ and $q_2$ on $\mathbb{R}$ is defined as $h(q_1,\,q_2):=\{\int(q_1^{1/2}-q_2^{1/2})^2\,\d \lambda\}^{1/2}$, the $L^2$-distance between the square-root densities. The following inequalities, due to LeCam (1973), p. 40, relating the $L^1$-norm and the Hellinger distance hold:
\begin{equation}\label{eq:Hel_L1}
h^2(q_1,\,q_2)\leq||q_1-q_2||_1
\end{equation}
and
\begin{equation}\label{eq:L1_Hel}
||q_1-q_2||_1\leq 2 h(q_1,\,q_2).
\end{equation}
\item[--] For ease of notation, the same symbol $d$ is used throughout to denote the $L^1$-norm, the $L^2$-norm or the Hellinger metric, the intended meaning being declared at each occurrence.
\item[--] For any probability measure $Q$ on $(\mathbb{R},\,\mathscr{B}({\mathbb{R}}))$ with density $q$,
let
\begin{align*}\textrm{KL}(P_0\|Q):=
\left\{
\begin{array}{ll}
\displaystyle\int \log\frac{\d P_0}{\d Q}\,\d P_0=\int_{p_0q>0} p_0\log\frac{p_0}{q}\,\d\lambda, & \mbox{\quad if $P_0\ll Q$,}\\[10pt]
\quad +\infty, &\mbox{\quad otherwise,}
\end{array}\right.
\end{align*}
be the \emph{Kullback-Leibler divergence} of $Q$ from $P_0$ and, for $k\geq2$, let
\begin{align*}\textrm{V}_k(P_0\|Q):=
\left\{
\begin{array}{ll}
\displaystyle
\int \bigg|\log\frac{\d P_0}{\d Q}\bigg|^k\,\d P_0=
\int_{p_0q>0} p_0\bigg|\log\frac{p_0}{q}\bigg|^k\,\d \lambda, & \mbox{\quad if $P_0\ll Q$,}\\[10pt]
\quad +\infty, &\mbox{\quad otherwise,}
\end{array}\right.
\end{align*}
be the $k$th absolute moment of $\log(\d P_0/\d Q)$.
For any $\varepsilon>0$ and a given $k\geq2$, define a Kullback-Leibler type neighborhood of $P_0$ as
$$B_{\mathrm{KL}}(P_0;\,\varepsilon^k):=\{Q:\,\textrm{KL}(P_0\|Q)\leq\varepsilon^2,\,\textrm{V}_k(P_0\|Q)\leq\varepsilon^k\}.$$
\item[--] For any real number $p\geq 1$ and any pair of probability measures $G_1,\,G_2\in\mathscr G$ with finite $p$th absolute moments,
the $L^p$-\emph{Wasserstein distance} between $G_1$ and $G_2$ is defined as
\[W_p(G_1,\,G_2):=\pt{\inf_{\gamma\in\Gamma(G_1,\,G_2)}\int_{\mathscr Y\times \mathscr Y}|y_1-y_2|^p\,
\gamma(\d y_1,\,\d y_2)}^{1/p},\]
where $\Gamma(G_1,\,G_2)$ is the set of all joint probability measures on $(\mathscr Y\times \mathscr Y)\subseteq\mathbb{R}^2$,
with marginals $G_1$ and $G_2$ on the first and second arguments, respectively.
\end{itemize}
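As a quick numerical sanity check of LeCam's inequalities \eqref{eq:Hel_L1}--\eqref{eq:L1_Hel}, one can compare the two sides for a simple pair of densities; the shifted standard Laplace densities and the grid below are illustrative choices.

```python
import numpy as np

# Two standard Laplace densities with different location parameters.
x = np.linspace(-30.0, 30.0, 200001)
dx = x[1] - x[0]

def laplace_pdf(x, mu):
    return 0.5 * np.exp(-np.abs(x - mu))

q1, q2 = laplace_pdf(x, 0.0), laplace_pdf(x, 1.0)

# L1 distance and Hellinger distance approximated by Riemann sums.
l1 = np.sum(np.abs(q1 - q2)) * dx
h = np.sqrt(np.sum((np.sqrt(q1) - np.sqrt(q2)) ** 2) * dx)

# LeCam's inequalities: h^2 <= ||q1 - q2||_1 <= 2h.
print(h**2, l1, 2 * h)
```

Here the $L^1$-distance can also be computed in closed form as $2(1-e^{-1/2})$, against which the grid approximation can be checked.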
{\textsf{Stochastic order symbols}}\\[4pt]
Let $(Z_n)_{n\in\mathbb{N}}$ be a sequence of real-valued random variables, possibly defined on entirely different probability spaces $(\Omega_n,\,\mathscr F_n,\,\mathbf{P}_n)_{n\in\mathbb{N}}$. Suppressing $n$ in $\mathbf{P}$ causes no confusion if it is understood that $\mathbf{P}$ refers to whatever probability space $Z_n$ is defined on. Let $(k_n)_{n\in\mathbb{N}}$ be a sequence of positive real numbers. We write
\begin{itemize}
\item $Z_n=O_{\mathbf{P}}(k_n)$ if
$\lim_{T\rightarrow+\infty}\limsup_{n\rightarrow+\infty}\mathbf{P}(|Z_n|>Tk_n)=0$. Then,
$Z_n/k_n=O_{\mathbf{P}}(1)$,
\item $Z_n=o_{\mathbf{P}}(k_n)$ if, for every $\varepsilon>0$, $\lim_{n\rightarrow +\infty}\mathbf{P}(|Z_n|>\varepsilon k_n)=0$. Then, $Z_n/k_n=o_{\mathbf{P}}(1)$.
\end{itemize}
Unless otherwise specified, in all stochastic order symbols used throughout, the probability measure $\mathbf{P}$ is understood to be $P_0^n$, the joint law of the first $n$ coordinate projections of the infinite product probability measure $P_0^{\mathbb{N}}$.
\section{Rates of convergence for $L^1$-Wasserstein deconvolution of Dirichlet-Laplace mixtures}\label{sec:Bayes}
In this section, we present some results on the Bayesian recovery of a distribution function from data contaminated with an additive random error following the standard Laplace distribution: we derive rates of convergence
for the $L^1$-Wasserstein deconvolution of Dirichlet-Laplace mixture densities. The density is modeled as a Dirichlet-Laplace mixture
$$p_{G}(\cdot)\equiv (G \ast f)(\cdot)=\int_{\mathscr Y} f(\cdot-y)\,\d G(y),$$
with the kernel density $f$ being the standard Laplace
and the mixing distribution
$G$ being any probability measure on $(\mathscr Y,\,\mathscr{B}(\mathscr{Y}))$, with
$\mathscr Y\subseteq \mathbb{R}$.
As a prior for $G$, we consider
a Dirichlet process with base measure $\alpha$ on
$(\mathscr{Y},\,\mathscr{B}(\mathscr{Y}))$, denoted by $\mathscr{D}_{\alpha}$.
We recall that a Dirichlet process
on a measurable space $(\mathscr{Y},\,\mathscr{B}(\mathscr Y))$,
with finite and positive base measure $\alpha$ on $(\mathscr{Y},\,\mathscr{B}(\mathscr Y))$,
is a random probability measure
$\tilde G$ on $(\mathscr{Y},\,\mathscr{B}(\mathscr Y))$ such that, for every finite partition
$(B_1,\,\ldots,\,B_k)$ of $\mathscr{Y}$, $k\geq1$, the vector of random
probabilities $(\tilde G(B_1),\,\ldots,\,\tilde G(B_k))$ has Dirichlet
distribution with
parameters $(\alpha(B_1),\,\ldots,\,\alpha(B_k))$. A Dirichlet process mixture of Laplace densities can be structurally described as follows:
\vspace{-0.2cm}
\begin{description}
\item[\,\,\,$\bullet$]
\,\,\, $\tilde G\sim\mathscr{D}_{\alpha}$, \vspace*{-0.15cm}
\item[\,\,\,$\bullet$]
\,\,\, given $\tilde G=G$, the r.v.'s $Y_1,\,\ldots,\,Y_n$ are i.i.d. according to $G$, \vspace*{-0.15cm}
\item[\,\,\,$\bullet$]
\,\,\, given $(G,\,Y_1,\,\ldots,\,Y_n)$, the
r.v.'s $Z_1,\,\ldots,\,Z_n$ are i.i.d. according to $f$, \vspace*{-0.15cm}
\item[\,\,\,$\bullet$]
\,\,\,
sampled values from $p_G$ are defined as
$X_i:=Y_i+Z_i$ for $i=1,\,\ldots,\,n$.
\end{description}
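A generative sketch of this hierarchy can be obtained via truncated stick-breaking; the base measure, total mass and truncation level below are illustrative assumptions, not choices made in this note.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_dp_laplace_mixture(n, alpha_mass=2.0, trunc=200):
    """Draw G ~ DP(alpha) by truncated stick-breaking with a standard
    normal base measure, then sample X_i = Y_i + Z_i with Y_i ~ G and
    Z_i standard Laplace.  All tuning values are illustrative."""
    # Stick-breaking: V_k ~ Beta(1, alpha_mass), w_k = V_k * prod_{j<k}(1 - V_j).
    V = rng.beta(1.0, alpha_mass, size=trunc)
    w = V * np.concatenate(([1.0], np.cumprod(1.0 - V[:-1])))
    w /= w.sum()                        # renormalise after truncation
    theta = rng.normal(size=trunc)      # atoms drawn from the base measure
    Y = rng.choice(theta, size=n, p=w)  # latent locations Y_i ~ G
    Z = rng.laplace(size=n)             # standard Laplace errors
    return Y + Z

X = sample_dp_laplace_mixture(1000)
print(X.shape)
```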
Let the sampling density $p_0$ be itself a Laplace mixture with mixing distribution $G_0$, that is, $p_0\equiv p_{G_0}=G_0\ast f$. In order to assess the rate of convergence in the $L^1$-Wasserstein metric for the Bayes' estimator of the true mixing distribution $G_0$, we appeal to an inversion inequality relating the $L^2$-norm or the Hellinger distance between Laplace
mixed densities
to any $L^p$-Wasserstein distance, $p\geq1$, between the corresponding mixing distributions, see Lemma \ref{lem:2} in Appendix D.
Therefore, we first derive rates of contraction in the $L^2$-norm and the Hellinger metric for the posterior distribution of a Dirichlet-Laplace mixture prior: convergence of the posterior distribution at
a rate $\varepsilon_n$, in fact, implies the existence of Bayes' point estimators that converge at least as fast as $\varepsilon_n$ in the frequentist sense. The same indirect approach has been
taken by Gao and van der Vaart (2016), who deal with the case of compactly supported mixing distributions, while we extend the results to mixing distributions possibly supported on the whole real line or on some unbounded subset thereof. We present two results on posterior contraction rates for a Dirichlet-Laplace mixture prior. The first one, as stated in Proposition \ref{prop:2}, is relative to the $L^1$-norm or the Hellinger metric; the second one,
as stated in Proposition \ref{prop:1}, is relative to the $L^2$-metric. Proofs are deferred to Appendix C.
\begin{proposition}\label{prop:2}
Let $X_1,\,\ldots,\,X_n$ be i.i.d. observations from a density $p_0\equiv p_{G_0}=G_0\ast f$,
with the kernel density $f$ being the standard Laplace and the mixing distribution $G_0$ such that, for some decreasing function $A_0:\,(0,\,+\infty)\rightarrow [0,\,1]$ and a constant $0<c_0<+\infty$,
\begin{equation}\label{eq:tailG0SS}
G_0([-T,\,T]^c)\leq A_0(T)\lesssim \exp{(-c_0T)}\quad\mbox{for large $T>0$}.
\end{equation}
If the baseline measure $\alpha$ of the Dirichlet process is symmetric around zero
and possesses density $\alpha'$ such that, for some constants $0<b<+\infty$ and $0<\tau\leq 1$,
\begin{equation}\label{eq:tailG1}
\alpha'(y)\propto \exp{(-b|y|^\tau)},\quad y\in\mathbb{R},
\end{equation}
then there exists a sufficiently large constant $M>0$ such that
$$\Pi(d(p_G,\,p_0)\geq M n^{-3/8}\log^{5/8}n
\mid X^{(n)})=o_{\mathbf{P}}(1),$$
where $\Pi(\cdot\mid X^{(n)})$ denotes the posterior distribution corresponding
to a Dirichlet-Laplace process mixture prior after $n$ observations
and $d$ can be either the Hellinger or the $L^1$-metric.
\end{proposition}
\begin{remark}
In virtue of the following inequality,
$$\forall\,G,\,G'\in\mathscr G,\,\,\, ||p_G-p_{G'}||_2^2\leq 4||f||_\infty h^2(p_G,\,p_{G'}),$$
where $||f||_\infty=1/2$ for the standard Laplace kernel density,
see \eqref{eq:hel^2} in Lemma \ref{lem:l2hel},
the $L^2$-metric posterior contraction rate for a Dirichlet-Laplace mixture prior could, in principle, be
derived from Proposition \ref{prop:2}, which relies on Theorem 2.1 of Ghosal \emph{et al}.
(2000), p. 503, or Theorem 2.1 of Ghosal and van der Vaart (2001), p. 1239,
but this would impose slightly stronger conditions on
the density $\alpha'$ of the baseline measure than those required in Proposition \ref{prop:1} below,
which is based on Theorem 3 of Gin\'{e} and Nickl (2011), p. 2892, that is
tailored for assessing posterior contraction rates in $L^r$-metrics, $1< r< +\infty$, taking an approach
that can only be used if one has sufficiently fine control of the approximation properties of the prior support
in the $L^r$-metric considered.
\end{remark}
\begin{proposition}\label{prop:1}
Let $X_1,\,\ldots,\,X_n$ be i.i.d. observations from a density
$p_0\equiv p_{G_0}=G_0\ast f$, with the kernel density $f$ being the standard Laplace
and the mixing distribution $G_0$ such that condition \eqref{eq:tailG0SS} holds as in Proposition \ref{prop:2}. If the baseline measure $\alpha$ of the Dirichlet process possesses continuous and positive
density $\alpha'$ such that, for some constants $0<b<+\infty$ and $0<\tau\leq1$,
\begin{equation}\label{eq:tailG11}
\alpha'(y)\gtrsim \exp{(-b|y|^\tau)}\quad\mbox{for large $|y|$},
\end{equation}
then there exists a sufficiently large constant $M>0$ such that
\begin{equation}\label{eq:l2norm}
\Pi(||p_G-p_0||_2\geq M n^{-3/8}\log^{5/8}n
\mid X^{(n)})=o_{\mathbf{P}}(1),
\end{equation}
where $\Pi(\cdot\mid X^{(n)})$ denotes the posterior distribution corresponding
to a Dirichlet-Laplace process mixture prior after $n$ observations.
\end{proposition}
As previously mentioned, convergence of the posterior distribution at
a rate $\varepsilon_n$ implies the existence of point estimators that converge at least as fast as $\varepsilon_n$ in the frequentist sense, see, for instance, Theorem 2.5 in Ghosal \emph{et al}.
(2000), p. 506, for the construction of a point estimator that applies to general statistical models and posterior distributions. The posterior expectation of the density $p_G$, which we refer to as the Bayes' density estimator,
$$\hat p_n^{\textrm{B}}(\cdot):=
\int_{\mathscr G} p_G(\cdot)\Pi(\d
G\mid X^{(n)}),
$$
has a similar property when jointly considered with bounded semi-metrics that are convex or whose square is convex in one argument. When the random mixing distribution $\tilde G$ is distributed according to a Dirichlet process,
the expression of the Bayes' density estimator $\hat p_n^{\textrm{B}}$
is given by formula (2.6) of Lo (1984), p. 353,
replacing $K(\cdot,\,u)$ with
$\frac{1}{2}\exp{\{-|\cdot-u|\}}$ at each occurrence.
\begin{corollary}\label{cor:1}
Suppose that condition \eqref{eq:tailG0SS} holds for some decreasing function $A_0:\,(0,\,+\infty)\rightarrow [0,\,1]$ and a finite constant $c_0>(1/e)$ such that
\begin{equation}\label{eq:44}
G_0([-T,\,T]^c)\leq A_0(T)\lesssim \exp{(-e^{c_0T})}\quad\mbox{for large $T>0$}
\end{equation}
and condition \eqref{eq:tailG1} holds as in Proposition \ref{prop:2}.
Then,
$$d(\hat p^{\mathrm{B}}_n,\,p_0)=O_{\mathbf{P}}(n^{-3/8}\log^{1/2} n),$$
for $d$ being either the Hellinger or the $L^1$-metric.
\end{corollary}
\begin{proof}
In virtue of the inequality in \eqref{eq:L1_Hel}, it suffices to prove the assertion for the Hellinger metric. The proof follows standard arguments as, for instance, in Ghosal \emph{et al}. (2000), pp. 506--507.
By convexity of $h^2$ in each argument and
Jensen's inequality, for $\varepsilon_n:=\max\{\bar \varepsilon_n,\, \tilde\varepsilon_n\}=n^{-3/8}(\log n)^{(3\vee 4)/8}=n^{-3/8}\log^{1/2} n
$ and a sufficiently large constant $M>0$,
\[\begin{split}
h^2(\hat p_n^{\textrm{B}},\,p_0)&\leq \int_{\mathscr G} h^2(p_G,\,p_0)\Pi(\d G\mid X^{(n)})\\
&=\pt{\int_{h(p_G,\,p_0)<M\varepsilon_n}+\int_{h(p_G,\,p_0)\geq M\varepsilon_n}
} h^2(p_G,\,p_0)\Pi(\d G\mid X^{(n)})\\[5pt]
&
\lesssim M^2\varepsilon_n^2 + 2 \Pi(h(p_G,\,p_0)
\geq M\varepsilon_n\mid X^{(n)}).
\end{split}\]
It follows that $$P_0^n h^2(\hat p_n^{\textrm{B}},\,p_0)
\lesssim M^2\varepsilon_n^2 + 2 P_0^n\Pi( h(p_G,\,p_0)
\geq M\varepsilon_n\mid X^{(n)})\lesssim \varepsilon^2_n+o(\varepsilon_n^2)$$
because we can apply the almost sure version of Theorem 7 in Scricciolo (2007), p. 636 (see also Theorem A.1 in Scricciolo (2006), p. 2918), which,
under the prior mass condition
\begin{equation}\label{eq:74}
\Pi(h^2(p_G,\,p_0)\|p_0/p_G\|_\infty\leq \tilde\varepsilon_n^2)\gtrsim \exp{(-Bn\tilde\varepsilon_n^2)},
\end{equation}
with $\tilde\varepsilon_n:=n^{-3/8}\log^{1/2} n$ and a constant $0<B<+\infty$,
yields exponentially fast convergence of the posterior distribution since $P_0^n\Pi( h(p_G,\,p_0)
\geq M\varepsilon_n\mid X^{(n)})\lesssim \exp{(-B_1n\tilde\varepsilon_n^2)}$ for a suitable
constant $0<B_1<+\infty$. To verify that condition \eqref{eq:74} is satisfied, we can proceed as in the proof of Proposition \ref{prop:1}: for any $G$ satisfying \eqref{eq:condmixing}, not only is $h(p_G,\,p_0)\lesssim \varepsilon$, but, under assumption \eqref{eq:44}, which guarantees that $M_{G_0}(-1)<+\infty$ and $M_{G_0}(1)<+\infty$, we also have
\[||p_0/p_G||_\infty\leq e^{a_\varepsilon}[M_{G_0}(-1)+M_{G_0}(1)]\lesssim\log(1/\varepsilon),\quad\mbox{ for $a_\varepsilon:= A_0^{-1}(\varepsilon^2)\lesssim\log\log(1/\varepsilon)$}.\]
Then, \[\log \Pi(h^2(p_G,\,p_0)\|p_0/p_G\|_\infty\leq \varepsilon^2\log(1/\varepsilon))\gtrsim -\varepsilon^{-2/3}\log(1/\varepsilon).\]
Condition \eqref{eq:74} is thus verified for $\tilde\varepsilon_n:=\varepsilon\log^{1/2}(1/\varepsilon)=n^{-3/8}\log ^{1/2}n$. Conclude that $h(\hat p^{\mathrm{B}}_n,\,p_0)=O_{\mathbf{P}}(\varepsilon_n)$.
\qed
\end{proof}
\begin{remark}
Admittedly, condition \eqref{eq:44} imposes a stringent constraint on the tail decay rate of $G_0$. An alternative sufficient condition for concluding that
\begin{equation}\label{eq:110}
P_0^n\Pi( d(p_G,\,p_0)
\geq M\varepsilon_n\mid X^{(n)})=o(\varepsilon_n^2),\quad \mbox{ for \,$d=h$\, or \,$d=\|\cdot\|_1$,}
\end{equation}
is a prior mass condition involving the $k$th absolute moment of $\log(p_0/p_G)$ for a suitable value of $k$, in place of the sup-norm $\|p_0/p_G\|_\infty$,
which can possibly induce a lighter condition on $G_0$.
For $\tilde\varepsilon_n:=n^{-3/8}\log^{\omega}n$, with $\omega>0$, let
$\varepsilon_n:=\max\{\bar \varepsilon_n,\, \tilde\varepsilon_n\}=n^{-3/8}(\log n)^{(3/8)\vee \omega}$. It is known from Lemma 10 of Ghosal and van der Vaart (2007), p. 220, that
if
\begin{equation}\label{eq:340}
\Pi(B_{\mathrm{KL}}(P_0;\,\tilde\varepsilon_n^k))\gtrsim\exp{(-Bn\tilde\varepsilon_n^2)},\quad k\geq2,
\end{equation}
then
\begin{equation}\label{eq:93}
P_0^n\Pi( d(p_G,\,p_0)
\geq M\varepsilon_n\mid X^{(n)})\lesssim (n\tilde\varepsilon_n^2)^{-k/2}.
\end{equation}
Thus, if condition \eqref{eq:340} holds for some $k\geq6$ so that $(n\tilde\varepsilon_n^2)^{-k/2}=o(\varepsilon_n^2)$, the value $k=6$ would suffice for the purpose, then condition \eqref{eq:110} is satisfied.
\end{remark}
We now state a result on the rate of convergence for the Bayes' estimator, denoted by $\hat G_n^{\textrm B}$, of the mixing distribution $G_0$ for the $L^1$-Wasserstein deconvolution of Dirichlet-Laplace mixtures.
The Bayes' estimator is the posterior expectation of the random
probability measure $\tilde G$, that is,
$\hat G_n^{\textrm B}(\cdot):=E[\tilde G(\cdot)\mid X^{(n)}]$
and its expression can be derived from the expression of the posterior distribution, cf. Ghosh and Ramamoorthi (2003), pp. 144--146. In order to state the result,
let $M_{\hat G_n^{\textrm B}}(s):=\int_{-\infty}^{+\infty} e^{sy}\,\d\hat G_n^{\textrm B}(y)$, $s\in\mathbb{R}$, whose expression can be obtained from
formula (2.6) of Lo (1984), p. 353, replacing $K(x,\,u)$ with $e^{s u}$ at
all occurrences ($s$ playing the role of $x$).
\begin{proposition}\label{prop:4}
Suppose that the assumptions of
Corollary \ref{cor:1} hold. If, in addition, $\bar\alpha:=\alpha/\alpha(\mathbb{R})$
has finite moment generating function on some interval $(-s_0,\,s_0)$, with $0<s_0<1$, and
\begin{equation}\label{eq:ass1}
\forall\, 0<s<s_0,\quad
\limsup_{n\rightarrow +\infty}P_0^nM_{\hat G_n^{\mathrm{B}}}(-s)\leq M_{G_0}(-s)
\,\, \mbox{and} \,\,\limsup_{n\rightarrow
+\infty}P_0^nM_{\hat G_n^{\mathrm{B}}}(s)\leq M_{G_0}(s),
\end{equation}
then
\begin{equation}\label{eq:wass1}
W_1(\hat G_n^{\mathrm{B}},\,G_0)=O_{\mathbf{P}}(n^{-1/8}(\log n)^{2/3}).
\end{equation}
\end{proposition}
\begin{proof}
Let $\rho_n:=n^{-1/8}(\log n)^{2/3}$ and, for a suitable finite constant $c_1>0$, $M_n=c_1(\log n)$. Fix numbers $s$ and $u$ such that $0<u<s<s_0<1$. For sufficiently large constants $0<T, \,T',\,T''<+\infty$,
reasoning as in Lemma \ref{lem:2},
\[\begin{split}
P_0^n(W_1(\hat G_n^{\mathrm{B}},\,G_0)>T\rho_n)&\leq
P_0^n(h(\hat p^{\mathrm{B}}_n,\,p_0)>T'\rho_n^3(\log n)^{-3/2})\\
&\qquad \quad+
P_0^n(M_{\hat G_n^{\mathrm{B}}}(-s)+M_{\hat G_n^{\mathrm{B}}}(s)>T''e^{uM_n}\rho_n)=:P_1+P_2.
\end{split}\]
By Corollary \ref{cor:1}, $h(\hat p^{\mathrm{B}}_n,\,p_0)=O_{\mathbf{P}}(n^{-3/8}\log^{1/2}n)$. Hence, $P_1\rightarrow0$ as $n\rightarrow+\infty$.
By Markov's inequality, for some real $\nu>0$,
\[\begin{split}
P_2
&\lesssim e^{-uM_n}\rho_n^{-1}
[P_0^nM_{\hat G_n^{\mathrm{B}}}(-s)+P_0^nM_{\hat G_n^{\mathrm{B}}}(s)]\\
&\lesssim \frac{1}{n^\nu}
[P_0^nM_{\hat G_n^{\mathrm{B}}}(-s)+P_0^nM_{\hat G_n^{\mathrm{B}}}(s)]
\rightarrow0 \quad\mbox{ as $n\rightarrow+\infty$}
\end{split}\]
by assumption (\ref{eq:ass1}). Thus, $P_2\rightarrow0$ as $n\rightarrow+\infty$. The assertion follows.
\qed
\end{proof}
Some remarks are in order.
There are two main reasons why we focus on deconvolution in the $L^1$-Wasserstein metric.
The first one is related to the inversion inequality in \eqref{eq:wasserstein},
where the upper bound on the $L^p$-Wasserstein metric, as a function of the order $p\geq1$, increases as $p$ gets larger, thus making it advisable
to begin the analysis from the smallest value of $p$.
The second reason is related to the interpretation of the assertion in \eqref{eq:wass1}:
the $L^1$-Wasserstein distance between any
two probability measures $G_1$ and $G_2$ on some Borel-measurable space $(\mathscr{Y},\,\mathscr{B}(\mathscr{Y}))$, $\mathscr Y\subseteq \mathbb{R}$,
with finite first absolute moments, is by itself an interesting distance
because it metrizes
weak convergence plus convergence of the first absolute moments,
but it is even more interesting in view of the fact that,
letting $G_1^{-1}(\cdot)$ and $G_2^{-1}(\cdot)$ denote the left-continuous inverse or quantile functions, $G_i^{-1}(u):=\inf\{y\in\mathscr{Y}:\,G_i(y)\geq u\}$, $u\in(0,\,1)$, $i=1,\,2$,
it can be written as the $L^1$-distance between the quantile functions or, equivalently, as the $L^1$-distance between the cumulative distribution functions,
\begin{equation}\label{eq:34}
W_1(G_1,\,G_2)
=\int_{0}^{1}|G_1^{-1}(u)-G_2^{-1}(u)|\,\d u=
\int_{\mathscr Y}|G_1(y)-G_2(y)|\,\d y=||G_1-G_2||_1,
\end{equation}
see, \emph{e.g.}, Shorack and Wellner (1986), pp. 64--66. The representation in \eqref{eq:34} was obtained by Dall'Aglio (1956). Thus, by rewriting $W_1(\hat G_n^{\mathrm{B}},\,G_0)$
as the $L^1$-distance between
the c.d.f.'s $\hat G_n^{\mathrm{B}}$ and $G_0$,
the assertion of Proposition \ref{prop:4},
$$W_1(\hat G_n^{\mathrm{B}},\,G_0)=||\hat G_n^{\mathrm{B}}-G_0||_1=O_{\mathbf{P}}(n^{-1/8}(\log n)^{2/3}),$$
becomes more transparent and meaningful.
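The identity in \eqref{eq:34} is easy to check numerically. The following sketch (with arbitrary illustrative atoms; it is not part of the formal development) compares the $L^1$-distance between two empirical c.d.f.'s, computed on a fine grid, with the $L^1$-distance between the corresponding quantile functions, which, for two empirical measures with the same number of atoms, reduces to the mean absolute difference of the order statistics.

```python
import numpy as np

# Two arbitrary empirical distributions on the real line (illustrative atoms).
x = np.array([0.0, 1.0, 3.0])   # atoms of G1, each with mass 1/3
y = np.array([0.5, 2.0, 2.5])   # atoms of G2, each with mass 1/3

# L1-distance between the c.d.f.'s, computed on a fine grid.
grid = np.linspace(-1.0, 4.0, 200001)
G1 = np.searchsorted(np.sort(x), grid, side="right") / x.size
G2 = np.searchsorted(np.sort(y), grid, side="right") / y.size
w1_cdf = np.sum(np.abs(G1 - G2)[:-1] * np.diff(grid))

# L1-distance between the quantile functions: for empirical measures with
# equally many atoms this is the mean absolute difference of order statistics.
w1_quant = np.mean(np.abs(np.sort(x) - np.sort(y)))

print(w1_cdf, w1_quant)  # the two values agree up to grid discretization
```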
\section{Rates of convergence for ML estimation and $L^1$-Wasserstein deconvolution of Laplace mixtures}\label{sec:MLE}
In this section, we first study the rate of convergence in the Hellinger metric for the MLE
$\hat p_n$ of a Laplace mixture density $p_0\equiv p_{G_0}=G_0 \ast f$, with unknown mixing distribution $G_0\in\mathscr G$. We then derive the rate of convergence in the $L^1$-Wasserstein metric for the MLE $\hat G_n$ of the mixing distribution $G_0$, which corresponds to the MLE $\hat p_n$ of the mixed density $p_0$, by appealing to an inversion inequality relating the Hellinger distance between Laplace mixture densities to any $L^p$-Wasserstein distance, $p\geq1$, between the corresponding mixing distributions (see
Lemma \ref{lem:2} in Appendix D).
An MLE
$\hat p_n$ of $p_0$ is a measurable function of the observations taking values in $\mathscr P:=\{p_G:\,G\in\mathscr G\}$ such that
\[\hat p_n\in \underset{p_G\in \mathscr P}{\arg\max}
\frac{1}{n}\sum_{i=1}^n\log p_G(X_i)=\underset{p_G\in \mathscr P}{\arg\max}
\int(\log p_G)\,\d {\mathbb P_n},\]
where ${\mathbb P_n}:={n}^{-1}\sum_{i=1}^n\delta_{X_i}$ is the empirical measure
associated with the random sample $X_1,\,\ldots,\,X_n$, namely,
the discrete uniform distribution on the sample values that puts mass $1/n$ on each one of the observations. We assume that the MLE exists, but do not require it to be unique;
see Lindsay (1995), Theorem 18, p. 112, for sufficient conditions ensuring uniqueness.
Results on rates of convergence in the Hellinger metric for the MLE of a density
can be found in Birg\'{e} and Massart (1993), Van de Geer (1993) and Wong and Shen (1995);
it can, however, be difficult to calculate the
$L^2$-metric entropy \emph{with bracketing} of the square-root densities that is employed in these articles.
Taking instead into account that a mixture model
$\{\int_{\mathscr Y}K(\cdot,\,y)\,\d G(y):\,G\in\mathscr G\}$ is the closure of the convex hull of the collection of kernels
$\{K(\cdot,\,y):\,y\in\mathscr Y\subseteq\mathbb{R}\}$, which is typically a much smaller class, a bound on a form of metric entropy \emph{without bracketing} of the class of mixtures can be derived from a covering number of the class of kernels (a result on metric entropy \emph{without bracketing} of convex hulls that is deducible from Ball and Pajor (1990)), so that a relatively simple \vir{recipe} can be given
to obtain (an upper bound on) the rate of convergence in the Hellinger metric for the MLE of a density in terms of the \vir{dimension} of the class of kernels and the behaviour of $p_0$ near zero, cf. Corollary 2.3 of Van de Geer (1996), p. 298.
\begin{proposition}\label{prop:3}
Let the sampling density $p_0\equiv p_{G_0}=G_0\ast f$, with the kernel density $f$
being the standard Laplace and the mixing distribution $G_0\in\mathscr G$.
Suppose that, for a sequence of non-negative
real numbers $\sigma_n=O(n^{-3/8}\log^{1/8}n)$,
we have\smallskip
\begin{description}
\item[$(a)$] $\int_{p_0\leq \sigma_n}p_0\,\d \lambda\lesssim \sigma_n^2$,\\
\item[$(b)$] $\int_{p_0>\sigma_n}(1/p_0)\,\d \lambda\lesssim \log(1/\sigma_n)$.
\end{description}
Then,
\begin{equation*}\label{eq:MLEhel}
h(\hat p_n,\, p_0)=O_{\mathbf{P}}(n^{-3/8}\log^{1/8}n).
\end{equation*}
\end{proposition}
\begin{proof}
We begin by spelling out the remark mentioned in the introduction concerning the fact that
a mixture model is the closure of the convex hull of the collection of kernels.
Recall that the convex hull of a class $\mathscr K$ of functions,
denoted by $\mathrm{conv}(\mathscr K)$, is defined as the set of all finite convex combinations of functions in $\mathscr K$,
$$\mathrm{conv}(\mathscr K):=\Bigg\{\sum_{j=1}^r\theta_jK_j:\, \theta_j\geq0,\,K_j\in\mathscr K,\,j=1,\,\ldots,\,r,\,\sum_{j=1}^r\theta_j=1,\,r\in\mathbb{N}\Bigg\}.$$
In our case, $$\mathscr K:=\{f(\cdot-y):\,y\in\mathscr Y\subseteq\mathbb{R}\}$$ is the collection of kernels with $f$ the standard Laplace density.
The class $\mathscr P:=\{p_G:\,G\in\mathscr G\}$ of all Laplace convolution mixtures $p_G=G\ast f$
is the closure of the convex hull of $\mathscr K$,
$$\mathscr P=\overline{\mathrm{conv}}(\mathscr K).$$ Clearly, $\mathscr P$ is itself a convex class.
This remark enables us to apply Theorem 2.2 and Corollary 2.3 of Van de Geer (1996), pp. 297--298 and 310, or, equivalently,
Theorem 7.7 of Van de Geer (2000), pp. 104--105, whose conditions
are hereafter shown to be satisfied. To this aim, we define the class
$${\mathscr K}/p_0:=\bigg\{\frac{f(\cdot-y)}{p_0(\cdot)}\mathbf{1}\{p_0>\sigma_n\}:\,y\in\mathscr{Y}\bigg\}$$
and the envelope function
$$\bar K(\cdot):=\sup_{y\in\mathscr{Y}}\frac{f(\cdot-y)}{p_0(\cdot)}\mathbf{1}\{p_0>\sigma_n\},$$
where we have suppressed the subscript $n$ in
${\mathscr K}/p_0$ and $\bar K(\cdot)$, which would stress their possible dependence on $\sigma_n$ when $\sigma_n>0$.
Since, by assumption $(a)$,
\[\int_{p_0\leq\sigma_n}\d P_0= \int_{p_0\leq\sigma_n}p_0\,\d \lambda\lesssim \sigma_n^2
\]
and, by assumption $(b)$, together with the fact that $\|f\|_\infty=1/2$,
\begin{equation}\label{eq:envelope}
\int\bar K^2\,\d P_0 \lesssim
\int_{p_0>\sigma_n}\frac{1}{p_0}\,\d \lambda\lesssim \log(1/\sigma_n),
\end{equation}
we can take the sequence $\delta_n^2\propto\sigma_n^2$ in condition (7.21) of Theorem 7.7 of Van de Geer (2000), p. 104.
Because the (standard) Laplace kernel density $f$ is Lipschitz,
$$\forall\,y_1,\,y_2\in\mathscr Y,\quad|f(\cdot-y_1)-f(\cdot-y_2)|\leq\frac{1}{2} |y_1-y_2|,$$ see, \emph{e.g.}, Lemma A.1 in Scricciolo (2011),
pp. 299--300, on the set
\begin{equation}\label{eq:set1}
\pg{\int \bar K^2\,\d {\mathbb P_n} \leq T^2\log(1/\delta_n)},
\end{equation}
where $T>0$ is a finite constant, we find that,
for
$\d{\mathbb Q_n}:= \d {\mathbb P_n}/(T^2\log(1/\delta_n))$,
\[N(\delta,\,{\mathscr K}/p_0,\,||\cdot||_{2,{\mathbb Q_n}})\lesssim \delta^{-1}\quad\mbox{for }\delta>0,\]
where $||\cdot||_{2,{\mathbb Q_n}}$ denotes the $L^2({\mathbb Q_n})$-norm, that is,
$||g||_{2,{\mathbb Q_n}}:=(\int|g|^2\,\d {\mathbb Q_n})^{1/2}$.
So, in view of the result of Ball and Pajor (1990), reported as Theorem 1.1 in Van de Geer (1996), p. 295,
on the same set as in \eqref{eq:set1},
we have
\[\log N(\delta,\,\overline{\mathrm{conv}}({\mathscr K}/p_0),\,||\cdot||_{2,{\mathbb Q_n}})\lesssim \delta^{-2/3},\]
hence
\[\log N(\delta,\,\overline{\mathrm{conv}}({\mathscr K}/p_0),\,||\cdot||_{2,{\mathbb P_n}})\lesssim \pt{\frac{T\log ^{1/2}(1/\delta_n)}{\delta}}^{2/3}.\]
Next, having defined the class
\[\mathscr P^{(\mathrm{conv})}_{\sigma_n}:=\pg{\frac{2p_G}{p_G+p_0}\mathbf{1}\{p_0>\sigma_n\}:\,p_G\in\mathscr P}\]
considered in condition (7.20) of Theorem 7.7 in Van de Geer (2000), p. 104,
since \[\log N(2\delta,\,\mathscr P^{(\mathrm{conv})}_{\sigma_n},\,||\cdot||_{2,{\mathbb P_n}})\leq
\log N(\delta,\,\overline{\mathrm{conv}}({\mathscr K}/p_0),\,||\cdot||_{2,{\mathbb P_n}}),\]
in view of \eqref{eq:envelope}, we have
\[\sup_{\delta>0}\frac{\log N(\delta,\,\mathscr P^{(\mathrm{conv})}_{\sigma_n},\,||\cdot||_{2,{\mathbb P_n}})}{H(\delta)}=O_{\mathbf{P}}(1)\]
for the non-increasing function of $\delta$
$$H(\delta):=\delta^{-2/3}\log^{1/3}(1/\delta_n),\quad\delta>0.$$
Taking $\Psi(\delta):=c_1\delta^{2/3}\log^{1/6}(1/\delta_n)$, with a suitable finite constant $c_1>0$, we have
$$\forall\,\delta\in(0,\,1),\,\,\,\Psi(\delta)\geq \pt{\int_{\delta^2/c}^\delta H^{1/2}(u)\,\d u} \vee \delta$$
and, for some $\varepsilon>0$,
$\Psi(\delta)/\delta^{2-\varepsilon}$
is non-increasing. Then, for $\delta_n$ such that
$\sqrt{n}\delta_n^2\geq \Psi(\delta_n)$, cf. condition (7.22) of Theorem 7.7 in Van de Geer (2000), p. 104, we have
$h(\hat p_n,\, p_0)=O_{\mathbf{P}}(\delta_n)$; consistently with the initial choice, we can take $\delta_n\propto n^{-3/8}\log ^{1/8}n$, and the proof is complete.
\qed
\end{proof}
\begin{remark}
If $p_0>0$ and $\mathscr Y$ is a compact interval $[-a,\,a]$, with $a>0$, then $h(\hat p_n,\,p_0)=O_{\mathbf{P}}(n^{-3/8})$. In fact, we can take the sequence
$\sigma_n\equiv 0$; then $||\bar K||_\infty\leq e^{2a}$ and $\int\bar K^2\,\d P_0 \leq e^{4a}$ so that, on the set
$\{\int \bar K^2\,\d {\mathbb P_n}\leq T\}$, the entropy
$\log N(\delta,\,\overline{\mathrm{conv}}({\mathscr K}/p_0),\,||\cdot||_{2,{\mathbb P_n}})\lesssim \delta^{-2/3}$ and, reasoning as in Proposition \ref{prop:3}, we find the rate $n^{-3/8}$.
\end{remark}
We now derive a consequence of Proposition \ref{prop:3} on the rate of convergence in the $L^1$-Wasserstein metric for the MLE of $G_0$.
An MLE $\hat p_n$ of the \emph{mixed} density $p_0$ corresponds to an MLE $\hat G_n$ of the \emph{mixing} distribution $G_0$, that is, $\hat p_n\equiv p_{\hat G_n}$, such that
$$\hat G_n\in \underset{G\in \mathscr G}{\arg\max}
\frac{1}{n}\sum_{i=1}^n\log p_G(X_i)=\underset{G\in \mathscr G}{\arg\max}
\int(\log p_G)\,\d {\mathbb P_n}.$$
Clearly, $\hat G_n$ is a discrete distribution, but
the number of its components is unknown: Lindsay (1995) showed that the MLE $\hat G_n$
is a discrete distribution
with at most $k\leq n$ support points, $k$ being the number of distinct observed values or data points.
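Lindsay's characterization suggests a simple computational scheme: fix the support at the distinct data points and optimize only over the weights. The following sketch (a fixed-support EM iteration, with an arbitrary synthetic choice of $G_0$; it is meant purely as an illustration, not as the estimator analyzed above) exhibits the monotone increase of the log-likelihood along the iterations.

```python
import numpy as np

rng = np.random.default_rng(0)

def laplace_pdf(x):
    """Standard Laplace kernel density f(x) = exp(-|x|)/2."""
    return 0.5 * np.exp(-np.abs(x))

# Synthetic sample from a Laplace mixture (illustrative choice of G_0:
# equal mass at -1 and +1).
n = 200
y0 = rng.choice([-1.0, 1.0], size=n)   # mixing draws Y_i
x = y0 + rng.laplace(size=n)           # X_i = Y_i + standard Laplace noise

# Fixed-support EM for the weights: support restricted to the distinct
# data points, as suggested by Lindsay's characterization of the NPMLE.
support = np.unique(x)
F = laplace_pdf(x[:, None] - support[None, :])  # F[i, j] = f(X_i - y_j)
w = np.full(support.size, 1.0 / support.size)   # initial uniform weights

loglik = []
for _ in range(200):
    p = F @ w                         # mixture density at the data points
    loglik.append(np.log(p).sum())
    r = F * w / p[:, None]            # posterior responsibilities
    w = r.mean(axis=0)                # EM weight update

# The log-likelihood is non-decreasing along the EM iterations.
print(loglik[0], loglik[-1])
```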
\begin{corollary}\label{cor:wasserstein}
Suppose that the assumptions of
Proposition \ref{prop:3} hold. If, in addition,
the mixing distribution $G_0$ has
finite moment generating function
in some interval $(-s_0,\,s_0)$, with $0<s_0<1$, and
\begin{equation}\label{eq:ass}
\forall\, 0<s<s_0,\quad
\limsup_{n\rightarrow +\infty}P_0^nM_{\hat G_n}(-s)\leq M_{G_0}(-s)
\quad \mbox{and} \quad \limsup_{n\rightarrow
+\infty}P_0^nM_{\hat G_n}(s)\leq M_{G_0}(s),
\end{equation}
where $M_{\hat G_n}(s):=\int_{\mathscr Y} e^{sy}\,\d \hat G_n(y)$, $s\in\mathbb{R}$, then
\[W_1(\hat G_n,\,G_0)=O_{\mathbf{P}}(n^{-1/8}(\log n)^{13/24}).\]
\end{corollary}
\begin{proof}
Let $k_n:=n^{-1/8}(\log n)^{13/24}$ and, for a suitable finite constant $c_2>0$, let $M_n:=c_2\log n$. Fix numbers $s$ and $u$ such that $0<u<s<s_0<1$. For sufficiently large constants $0<T,\,T',\,T''<+\infty$,
reasoning as in Lemma \ref{lem:2}, we have
\[\begin{split}
P_0^n(W_1(\hat G_n,\,G_0)>T k_n)&\leq
P_0^n(h(\hat p_n,\, p_0)>T' k_n^3(\log n)^{-3/2})\\
&\qquad\quad+
P_0^n(M_{\hat G_n}(-s)+M_{\hat G_n}(s)>T''k_ne^{uM_n})=:P_1+P_2.
\end{split}\]
The term $P_1$ can be made arbitrarily small because $h(\hat p_n,\, p_0)=O_{\mathbf{P}}(n^{-3/8}\log^{1/8}n)$ by Proposition \ref{prop:3}. The term $P_2$ goes to zero as $n\rightarrow+\infty$: in fact, by Markov's inequality and assumption (\ref{eq:ass}), for some real $0<l<+\infty$,
\[\begin{split}
P_2&\lesssim e^{-uM_n}k_n^{-1}
[P_0^nM_{\hat G_n}(-s)+P_0^nM_{\hat G_n}(s)]\\
&\lesssim \frac{1}{n^l}
[P_0^nM_{\hat G_n}(-s)+P_0^nM_{\hat G_n}(s)]\rightarrow0 \quad \mbox{ as }n\rightarrow+\infty
\end{split}\]
and the assertion follows.
\qed
\end{proof}
\begin{remark}
Assumption (\ref{eq:ass}) essentially requires that $M_{\hat G_n}$ is an asymptotically unbiased estimator of $M_{G_0}$ in some neighborhood of zero $(-s_0,\,s_0)$, with $0<s_0<1$.
An analysis of the asymptotic behaviour of certain linear functionals of the MLE $\hat G_n$ is
presented in Van de Geer (1995), wherein sufficient conditions are provided so that they are $\sqrt{n}$-consistent, asymptotically normal and efficient.
\end{remark}
\section{Merging of Bayes and ML for $L^1$-Wasserstein deconvolution of Laplace mixtures}\label{sec:merging}
In this section, we show that the Bayes' estimator and the MLE of
$G_0$ merge in the $L^1$-Wasserstein metric, their discrepancy
vanishing, at worst, at rate $n^{-1/8}(\log n)^{2/3}$
because they both consistently estimate $G_0$ at a speed which is
within a $(\log n)$-factor of
$n^{-1/8}$, cf. Proposition \ref{prop:4} and Corollary \ref{cor:wasserstein}.
\begin{proposition}\label{prop:merging}
Under the assumptions of Proposition \ref{prop:4} and Corollary \ref{cor:wasserstein},
we have
\begin{equation}\label{eq:merge}
W_1(\hat G_n^{\mathrm{B}},\,\hat G_n)=O_{\mathbf{P}}(n^{-1/8}(\log n)^{2/3}).
\end{equation}
\end{proposition}
\begin{proof}
By the triangle inequality,
$$
W_1(\hat G_n^{\mathrm{B}},\,\hat G_n)\leq W_1(\hat G_n^{\mathrm{B}},\,G_0)+ W_1(G_0,\,\hat G_n),$$
where
$W_1(\hat G_n^{\mathrm{B}},\,G_0)=O_{\mathbf{P}}(n^{-1/8}(\log n)^{2/3})$ and $W_1(G_0,\,\hat G_n)=O_{\mathbf{P}}(n^{-1/8}(\log n)^{13/24})$ by Proposition \ref{prop:4} and Corollary \ref{cor:wasserstein}, respectively.
Relationship \eqref{eq:merge} follows.
\qed
\end{proof}
Proposition \ref{prop:merging} states that the Bayes' estimator and the MLE of $G_0$ will eventually be indistinguishable and (an upper bound on) the speed of convergence for their $L^1$-Wasserstein discrepancy is determined by the stochastic orders of their errors in recovering $G_0$.
The crucial question that remains open is whether the Bayes' estimator and the MLE are rate-optimal.
Concerning this issue, we note that, on the one hand, other deconvolution estimators for the distribution function attain the rate $n^{-1/8}$ when the error distribution is the standard Laplace, with the proviso, however, that the $L^1$-Wasserstein metric is not linked to the integrated quadratic risk between the c.d.f.'s used in the
result we are going to mention, so that the rates are not comparable.
For instance, the estimator $G_n^{K}(h_n)(y):=\int_{-\infty}^yp_n^K(h_n)(u)\,\d u$, $y\in\mathbb{R}$, of the c.d.f. $G_0$
based on the standard deconvolution kernel density estimator
is such that
$\{\int_{-\infty}^{+\infty}E[G_n^{K}(h_n)(y)-G_0(y)]^2\,\d y\}^{1/2}=O(n^{-1/8})$ when
no assumptions on $G_0$ are postulated, except for the existence of the first absolute moment, see (3.12) in Corollary 3.3 of Hall and Lahiri (2008), p. 2117.
On the other hand, a recent lower bound result, due to Dedecker \emph{et al}. (2015), Theorem 4.1, pp. 246--248, suggests that better rates are possible.
For $M>0$ and $r\geq1$, let
$\mathscr D(M,\,r)$ be the class of all probability measures $G$ on
$(\mathbb{R},\,\mathscr{B}(\mathbb{R}))$ such that
$\int_{-\infty}^{+\infty} |y|^r\,\d G(y)\leq M$. Let
$f$ be the error density.
Assume that there exist $\beta>0$ and $c>0$ such that, for every $\ell\in\{0,\,1,\,2\}$, it holds
$|\hat f^{(\ell)}(t)|\leq c(1+|t|)^{-\beta}$, $t\in\mathbb{R}$.
Then, there exists a finite constant $C>0$ such that, for \emph{any} estimator ${\Hat{G}}_n$ (we warn the reader of the clash of notation with the symbol $\hat G_n$ previously used to denote the MLE of $G_0$),
$$\liminf_{n\to+\infty}n^{p/(2\beta+1)}\sup_{G\in \mathscr D(M,\,r)}EW_p^p({\Hat{G}}_n,\,G)>C.$$ For $p=1$ and the (standard) Laplace error distribution, this yields the lower bound $n^{-1/5}$, which is better than the leading term $n^{-1/8}$ of the upper bounds we have found, even though it is not known whether either the Bayes' estimator or the MLE attains it.
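Incidentally, the tractability of the Laplace case in the results just discussed rests on the explicit form $\hat f(t)=(1+t^2)^{-1}$ of the error characteristic function: whenever the mixing distribution has a twice-differentiable density $g$, the relation $\hat g(t)=(1+t^2)\hat p_G(t)$ translates into the pointwise inversion formula $g=p_G-p_G''$. The following numerical sanity check (a sketch with an arbitrary Gaussian choice for $g$; grid and truncations are numerical conveniences) verifies this identity.

```python
import numpy as np

# Fine symmetric grid; g is an arbitrary smooth mixing density (standard
# normal here), f is the standard Laplace kernel.
xs = np.linspace(-20.0, 20.0, 4001)
dx = xs[1] - xs[0]
g = np.exp(-xs**2 / 2.0) / np.sqrt(2.0 * np.pi)
f = 0.5 * np.exp(-np.abs(xs))

# Mixture density p = g * f by numerical convolution on the grid
# (odd grid length keeps mode="same" aligned with xs).
p = np.convolve(g, f, mode="same") * dx

# Second derivative of p by central finite differences.
p2 = np.gradient(np.gradient(p, dx), dx)

# Inversion identity for Laplace deconvolution: g = p - p''.
err = np.max(np.abs((p - p2) - g)[500:-500])  # ignore grid edges
print(err)  # small: discretization error only
```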
Finally, a remark on the use of the term \vir{merging}. This term
is used here with a different meaning
from that considered in Barron (1988), where merging is intended as the convergence to one of the ratio of the marginal likelihood to the joint
density of the first $n$ observations,
and from that in Diaconis and Freedman (1986), where
merging refers to the \vir{intersubjective agreement}, as more and more data become available, between two Bayesians with different prior opinions. Nonetheless, the underlying idea is, in a broad sense, the same: different inferential procedures
become essentially indistinguishable for large sample sizes.
\section{Final remarks}\label{sec:finrmks}
In this note, we have studied rates of convergence for Bayes and maximum likelihood estimation of Laplace mixtures and for their $L^1$-Wasserstein deconvolution.
The result on the convergence rate in the Hellinger metric for the MLE of Laplace mixtures is achieved taking a different approach from
that adopted in Ghosal and van der Vaart (2001), which is
based on the $L^1$-metric entropy with bracketing
of the set of densities under consideration and is difficult to apply in the present context, due to the non-analyticity of the Laplace density.
Posterior contraction rates for Dirichlet-Laplace
mixtures have been previously studied by Gao and van der Vaart (2016) in the case of compactly supported mixing distributions and have been here extended to mixing distributions with a possibly unbounded set of locations, this
accounting for the derivation of more general entropy estimates, cf. Appendix B.
An interesting extension to pursue would be that of considering general kernel densities with polynomially decaying Fourier transforms in the sense of Definition \ref{def:algdecr}: indeed, in the proof of Proposition \ref{prop:1}, which gives an assessment of the posterior contraction rate in the $L^2$-metric for Dirichlet-Laplace mixtures, all conditions, except for the Kullback-Leibler prior mass requirement, hold for any kernel density as in Definition \ref{def:algdecr}, provided that $\beta>1$. The missing piece is an extension of Lemma 2 in Gao and van der Vaart (2016), pp. 615--616, which is preliminary for checking the Kullback-Leibler prior mass condition and guarantees that a Laplace mixture, with mixing distribution that is the re-normalized restriction of $G_0$ to a compact interval, can be approximated in the Hellinger metric by a Laplace mixture with a discrete mixing distribution having a sufficiently restricted number of support points.
We believe that, as for the Laplace kernel, the number of support points of the approximating mixing distribution will ultimately depend only on the decay rate of the Fourier transform of the kernel density, even though, in a general proof, the explicit expression of the kernel density cannot be exploited as in the Laplace case. Extending the result on posterior contraction rates to general kernel mixtures would be of interest in itself and for extending the $L^1$-Wasserstein deconvolution result, even though this would pose in more general terms the rate-optimality question, as it happens for the $n^{-1/8}$-rate in the Laplace case, see the remarks at the end of Sect. \ref{sec:merging}.
We hope to report on these issues in a follow-up contribution.
\bigskip
\noindent{\small\bf{Acknowledgements}}\hspace*{0.3cm}The author
would like to thank the Editor and an anonymous Referee for their careful reading of the manuscript and helpful comments that have led to an improved presentation of the results.
She gratefully acknowledges financial support from MIUR, grant n$^\circ$ 2015SNS29B \vir{Modern Bayesian nonparametric methods}.
\section*{Appendix A: Auxiliary results}
\begin{theopargself}
In this section, a sufficient condition on a convolution kernel $K\in L^1(\mathbb{R})$ is stated
in terms of its Fourier transform $\hat K$ so that the exact order of the $L^2$-norm error for approximating any probability density $f$, with polynomially decaying characteristic function $\hat f$ of degree $\beta>1/2$
(see Definition \ref{def:algdecr} below)
by its convolution with $K_h:=h^{-1}K(\cdot/h)$, that is, by $f\ast K_h$, is assessed in terms of the bandwidth $h$. The
result is instrumental to the proof of Proposition \ref{prop:1} to show that any mixture density $p_G=G\ast f$, irrespective of the mixing distribution $G\in\mathscr G$, verifies the \emph{bias} condition $||p_G\ast K_h-p_G||_2=O(h^{\beta-1/2})$, which is involved in the definition of the sieve set in (15) of Theorem 2 in Gin\'{e} and Nickl (2011), p. 2891. We refer to the difference $(p_G\ast K_h-p_G)$ as the \emph{bias} because it is indeed the bias of the kernel density estimator $p_n^K(h):=\mathbb{P}_n\ast K_h$, when the observations are sampled from $p_G$: in fact, the bias $b[p_n^K(h)]:=E[p_n^K(h)]-p_G=p_G\ast K_h-p_G$. The condition in \eqref{eq:integrability} below, which traces back to Watson and Leadbetter (1963), see the first Theorem of Sect. 3B, pp. 486--487, is verified for any kernel $K$ of order $r$ greater than or equal to $\beta$, as later on spelled out in Remark \ref{rem:1}.
\begin{definition}\label{def:algdecr}
Let $f$ be a probability density function on
$\mathbb{R}$. The Fourier transform of $f$ or the characteristic function of the corresponding probability measure on $(\mathbb{R},\,\mathscr{B}(\mathbb{R}))$, denoted by $\hat f$,
is said to decrease algebraically of degree $\beta>0$ if there exists a constant $0<B_f<+\infty$ such that
\begin{equation}\label{eq:algebraic}
\lim_{|t|\rightarrow+\infty}|t|^\beta|\hat f(t)|=B_f.
\end{equation}
\end{definition}
Relationship \eqref{eq:algebraic} describes the tail behaviour of $|\hat f|$ by stating that it decays polynomially as $|t|^{-\beta}$. The class of probability measures on $(\mathbb{R},\,\mathscr{B}(\mathbb{R}))$ that have characteristic functions satisfying condition (\ref{eq:algebraic})
includes\\[-0.5cm]
\begin{itemize}
\item any gamma distribution with shape and scale parameters $\nu>0$ and $\lambda>0$, respectively,
whose characteristic function has expression $(1+ it/\lambda)^{-\nu}$, the role of $\beta$ in \eqref{eq:algebraic}
being played by $\nu$;\\[-0.33cm]
\item any distribution with characteristic function
$(1+|t|^\alpha)^{-1}$, $t\in\mathbb{R}$, for $0<\alpha \leq 2$,
which is called an $\alpha$-\emph{Laplace distribution} or \emph{Linnik's distribution}, cf. Devroye (1990);
the case $\alpha=2$ renders the characteristic function of a standard Laplace distribution.
The role of $\beta$ in \eqref{eq:algebraic} is played by $\alpha$;\\[-0.33cm]
\item any distribution with characteristic function $(1+|t|^\alpha)^{-1/\beta}$, which, for $\beta=1$,
reduces to that of an $\alpha$-Laplace distribution. The exponent $\alpha/\beta$ plays the role of the polynomial's degree $\beta$ in \eqref{eq:algebraic}. Devroye (1990) observes that,
if $S_\alpha$ is any symmetric stable r.v. with characteristic function
$e^{-|t|^\alpha}$, $0<\alpha\leq2$, and $V_\beta$ is an independent r.v. with density
$e^{-v^\beta}/\Gamma(1+1/\beta)$, $v>0$, then the r.v. $S_\alpha V_\beta^{\beta/\alpha}$
has characteristic function $(1+|t|^\alpha)^{-1/\beta}$.
\end{itemize}
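For the standard Laplace distribution, condition \eqref{eq:algebraic} holds with $\beta=2$ and $B_f=1$, and this can be checked numerically by computing the characteristic function through oscillatory quadrature. The following sketch (using SciPy's Fourier-weight quadrature; the evaluation points are arbitrary) shows $|t|^2|\hat f(t)|$ approaching $1$ from below.

```python
import numpy as np
from scipy.integrate import quad

def laplace_cf(t):
    """Characteristic function of the standard Laplace law, computed by
    oscillatory quadrature (QAWF handles the cosine weight):
    f_hat(t) = int exp(itx) exp(-|x|)/2 dx = int_0^inf cos(tx) exp(-x) dx."""
    val, _ = quad(lambda x: np.exp(-x), 0, np.inf, weight="cos", wvar=t)
    return val

# |t|^beta |f_hat(t)| with beta = 2 approaches the finite constant B_f = 1,
# in accordance with the polynomial-decay condition above.
vals = [t**2 * laplace_cf(t) for t in (10.0, 100.0)]
print(vals)  # increases towards 1
```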
\begin{lemma}\label{lem:1}
Let $f\in L^2(\mathbb{R})$ be a probability density function with Fourier transform $\hat f$ satisfying condition \eqref{eq:algebraic} for some $\beta>1/2$ and a constant $0<B_f<+\infty$. If $K
\in L^1(\mathbb{R})$
has Fourier transform $\hat K$ such that $\hat K(0)=1$ and
\begin{equation}\label{eq:integrability}
I^2_\beta[\hat K]:=\int_{\{t\neq0\}}\frac{|1-\hat K(t)|^2}{|t|^{2\beta}}\,\d t<+\infty,
\end{equation}
then $$h^{-2(\beta-1/2)}\|f-f\ast K_h \|_2^2\rightarrow \frac{1}{2\pi}\times
B^2_f\times
I^2_\beta[\hat K
] \quad\mbox{as } h
\rightarrow0.$$
\end{lemma}
\begin{proof}
\smartqed
Since $f\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$ by assumption, $\hat f\in L^2(\mathbb{R})$ and necessarily $\beta>1/2$.
Also, since
$K\in L^1(\mathbb{R})$, we have $\|f\ast K_h\|_p\leq \|f\|_p\|K_h\|_1<+\infty$
for $p=1,\,2$.
Thus, $(f-f\ast K_h)\in L^1(\mathbb{R})\cap L^2(\mathbb{R})$ and, by Plancherel's Theorem, $\|f-f\ast K_h\|_2^2=(2\pi)^{-1}\|\hat f-\hat f \times
\hat K_h\|_2^2$. By the change of variable $z=ht$,
\[\begin{split}
\|f-f\ast K_h\|_2^2&=
\frac{1}{2\pi}\int_{-\infty}^{+\infty}
|\hat f(t)|^2|1-\hat K(h t)|^2\,\d t\\
&=\frac{1}{2\pi}h^{2(\beta-1/2)}
\Bigg\{B_f^2 \times I^2_\beta[\hat K]
+
\int_{\{z\neq0\}}\frac{|1-\hat K(z)|^2}{|z|^{2\beta}}\Big
[|z/h|^{2\beta}|\hat f(z/h)|^2-B_f^2\Big]
\,\d z\Bigg\},
\end{split}
\]
where, for every sequence of positive real numbers $h_n\rightarrow 0$, the integral on the right-hand side of the last display tends to zero by the dominated convergence theorem
due to assumption (\ref{eq:integrability}). The assertion follows.
\qed
\end{proof}
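The limit in Lemma \ref{lem:1} can be checked numerically. The sketch below (for the standard Laplace density, so $\beta=2$ and $B_f=1$, smoothed by a Gaussian kernel; the truncation $|t|\leq 200$ and the bandwidths are arbitrary numerical choices) computes $\|f-f\ast K_h\|_2^2$ via Plancherel's identity and shows the ratio to $h^3$ approaching $(2\pi)^{-1}I^2_2[\hat K]$ from below.

```python
import numpy as np

# Fourier-domain check of the lemma for the standard Laplace density
# (f_hat(t) = 1/(1 + t^2), so beta = 2, B_f = 1) and a Gaussian kernel
# (K_hat(t) = exp(-t^2/2)).
t = np.linspace(-200.0, 200.0, 2_000_001)
dt = t[1] - t[0]
f_hat = 1.0 / (1.0 + t**2)

def l2_bias_sq(h):
    """||f - f * K_h||_2^2 computed via Plancherel's identity."""
    integrand = f_hat**2 * (1.0 - np.exp(-(h * t) ** 2 / 2.0)) ** 2
    return integrand.sum() * dt / (2.0 * np.pi)

# The limit constant (1/2pi) * B_f^2 * I_2^2[K_hat].
tt = t[t != 0.0]
I2 = ((1.0 - np.exp(-tt**2 / 2.0)) ** 2 / tt**4).sum() * dt
limit = I2 / (2.0 * np.pi)

ratios = [l2_bias_sq(h) / h**3 for h in (0.2, 0.1, 0.05)]
print(ratios, limit)  # the ratios increase towards the limit as h decreases
```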
In the following remark, which is essentially due to Davis~(1977), cf. Sect. 3, pp.~532--533, sufficient conditions on a kernel $K\in L^1(\mathbb{R})$ are given so that $\hat K(0)=1$ and the requirement in \eqref{eq:integrability} is satisfied.
The conditions in \eqref{eq:3} below require that $K$ is a \emph{kernel of order $r\geq\beta>1/2$}, the order of a kernel being the first non-zero \vir{moment} of the kernel,
cf. Definition 1.3 in Tsybakov (2004), p. 5.
\begin{remark}\label{rem:1}
For $K\in L^1(\mathbb{R})$, the Fourier transform $\hat K$ is continuous and bounded so that the integral $\int_{-\infty}^{+\infty}|t|^{-2\beta}|1-\hat K(t)|^2\mathbf{1}_{[1,\,+\infty)}(|t|)\,\d t<+\infty$ for $\beta>1/2$. The problem with condition \eqref{eq:integrability} is therefore the integrability of the function $t\mapsto |t|^{-2\beta}|1-\hat K(t)|^2$ for $|t|\in(0,\,1
)$.
Suppose that
\begin{eqnarray}\label{eq:3}
&&
\hspace*{-0.5cm}\int_{-\infty}^{+\infty} K(x)\,\d x=1, \nonumber\\
&&\hspace*{-0.5cm}
\mbox{$\exists\,r\in\mathbb{N}$, $r\geq\beta>\frac{1}{2}$\,:}\int_{-\infty}^{+\infty} x^j K(x)\,\d x=0\,\,\mbox{ for $j=1,\,\ldots,\,r-1$\,\, only if\,\, $r\geq2$,}\nonumber\\
&&\hspace*{-0.7cm}\mbox{and }\hspace*{5cm} \int_{-\infty}^{+\infty} x^r K(x)\,\d x\neq 0
\end{eqnarray}
and
\begin{equation}\label{eq:45}
\int_{-\infty}^{+\infty} |x|^r |K(x)|\,\d x<+\infty,
\end{equation}
(the value $r$ being called the \emph{characteristic exponent} of $\hat K$, see Parzen (1962), pp. 1072--1073), then
\[\hat K(0)=1 \,\,\mbox{ and }\,\, \int_{-\infty}^{+\infty}|t|^{-2\beta}|1-\hat K(t)|^2\mathbf{1}_{(0,\,1)}(|t|)\,\d t<+\infty.\]
In fact, $\hat K(0)=\int_{-\infty}^{+\infty}K(x)\,\d x=1$. Also, for every real number $t\neq0$,
\[\begin{split}
\frac{1-\hat K(t)}{t^{r}} = - \frac{\hat K(t)-1}{t^{r}}
&=-\frac{1}{t^{r}}\int_{-\infty}^{+\infty} (e^{itx}-1)K(x)\, \d x\\
&=- \frac{1}{t^r} \int_{-\infty}^{+\infty} \Bigg[e^{itx}-\sum_{j=0}^{r-1}\frac{(itx)^j}{j!}\Bigg]K(x)\, \d x\\
&= - \frac{i^r}{(r-1)!}\int_{-\infty}^{+\infty} x^r K(x)\int_0^1(1-u)^{r-1}e^{itux}\,\d u\,\d x.
\end{split}
\]
By the dominated convergence theorem, condition \eqref{eq:45} implies that
\[\frac{1-\hat K(t)}{t^r}\rightarrow
-\frac{i^r}{r!}\int_{-\infty}^{+\infty} x^r K(x)\,\d x \quad\mbox{as $t\rightarrow0$,}\]
where the limit is non-zero by virtue of the last condition on the right-hand side of \eqref{eq:3}.
It is seen by comparison that, since $r\geq\beta$, the integral $\int_{-\infty}^{+\infty}|t|^{-2\beta}|1-\hat K(t)|^2\mathbf{1}_{(0,\,1)}(|t|)\,\d t<+\infty$ and condition \eqref{eq:integrability} is satisfied. If, for instance, $1/2<\beta\leq 2$, then any symmetric probability density $K$ on $\mathbb{R}$, with finite, non-zero second moment $\mu_2:=
\int_{-\infty}^{+\infty} x^2 K(x)\,\d x\neq 0$ is such that
$I^2_\beta[\hat K]<+\infty$.
\end{remark}
\end{theopargself}
\section*{Appendix B: Entropy estimates}
\begin{theopargself}
In this section, Hellinger and $L^1$-metric entropy estimates
for a class of Laplace mixture densities, with mixing distributions having tails dominated by a given decreasing function, are provided.
The result of Lemma \ref{lem:entropy} extends, along the lines of Theorem 7 in Ghosal and van der Vaart (2007), pp. 708--709, Proposition 2 of Gao and van der Vaart (2016), p. 617, which deals with Laplace mixtures having compactly supported mixing distributions. Lemma \ref{lem:entropy} is invoked in the proof of Proposition \ref{prop:2}, reported in Appendix C, to verify that the entropy condition is satisfied.
\begin{lemma}\label{lem:entropy}
For a given decreasing function $A:\,(0,\,+\infty)\rightarrow[0,\,1]$, with inverse $A^{-
1}$, define the class of Laplace mixture densities
\begin{equation*}\label{eq:set}
\mathscr P_A:=\{p_G:\,G([-a,\,a]^c)\leq A(a) \,\mbox{ for all } a>0\}.
\end{equation*}
Then, for every $0<\varepsilon<1$,
\begin{itemize}
\item taking $a\equiv a_\varepsilon:=A^{-1}(\varepsilon)$ in the definition of $\mathscr P_A$, we have
\begin{equation}\label{eq:entropyL1}
\mbox{ }
\log N(3\varepsilon,\,\mathscr P_A,\,||\cdot||_1)\lesssim \varepsilon^{-2/3}\log \frac{A^{-1}(\varepsilon)}{\varepsilon^2},
\end{equation}
\item taking $a\equiv a_{\varepsilon^2}:=A^{-1}(\varepsilon^2)$ in the definition of $\mathscr P_A$, we have
\begin{equation}\label{eq:entropyHel}
\log N((\sqrt{2}+1)\varepsilon,\,\mathscr P_A,\,h)
\lesssim \varepsilon^{-2/3}\log \frac{A^{-1}(\varepsilon^2)}{\varepsilon^2}.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
Concerning the $L^1$-metric entropy in \eqref{eq:entropyL1}, since $a\equiv a_\varepsilon:=A^{-1}(\varepsilon)$ satisfies $G([-a_\varepsilon,\,a_\varepsilon]^c)\leq A(a_\varepsilon)=\varepsilon$
for all $G$ as in the definition of $\mathscr P_A$, Lemma A.3 of Ghosal and van der Vaart (2001), p. 1261, implies that
the $L^1$-distance between any density $p_G\in \mathscr P_A$ and the corresponding density $p_{G^\ast}$, with mixing distribution $G^\ast$ defined as the re-normalized restriction of $G$ to
$[-a_\varepsilon,\,a_\varepsilon]$, is bounded above by $2\varepsilon$.
Then, by virtue of the inequality in \eqref{eq:L1_Hel},
a Hellinger $(\varepsilon/2)$-net over the class of densities $\mathscr P_{a_\varepsilon}:=\{p_G:\,G([-a_\varepsilon,\,a_\varepsilon])=1\}$ is an $L^1$-metric $3\varepsilon$-net over $\mathscr P_A$, where
$$\log N\big(\varepsilon/2,\,\mathscr P_{a_\varepsilon},\,h\big)\lesssim \varepsilon^{-2/3}\log \frac{a_\varepsilon}{\varepsilon^2}$$ by Proposition 2 of Gao and van der Vaart (2016), p. 617.
The inequality in \eqref{eq:entropyL1} follows.
Concerning the Hellinger-metric entropy in \eqref{eq:entropyHel}, by taking
$a\equiv a_{\varepsilon^2}:=A^{-1}(\varepsilon^2)$,
for every $p_G\in \mathscr P_A$ and the corresponding $p_{G^\ast}$, with mixing distribution $G^\ast$ defined as the re-normalized restriction of $G$ to
$[-a_{\varepsilon^2},\,a_{\varepsilon^2}]$, by the inequality in \eqref{eq:Hel_L1}, we have
$h^2(p_G,\,p_{G^\ast})\leq ||p_G-p_{G^\ast}||_1\leq 2G([-a_{\varepsilon^2},\,a_{\varepsilon^2}]^c)\leq 2\varepsilon^2$,
which implies that $h(p_G,\,p_{G^\ast})\leq \sqrt{2}\varepsilon$. Thus, a Hellinger $\varepsilon$-net
over $\mathscr P_{a_{\varepsilon^2}}:=\{p_G:\,G([-a_{\varepsilon^2},\,a_{\varepsilon^2}])=1\}$ is a $(\sqrt{2}+1)\varepsilon$-net over $\mathscr P_A$, where
$$\log N\big(\varepsilon,\,\mathscr P_{a_{\varepsilon^2}},\,h\big)\lesssim \varepsilon^{-2/3}\log \frac{a_{\varepsilon^2}}{\varepsilon^2}$$
again by Proposition 2 of Gao and van der Vaart (2016), p. 617. The inequality in \eqref{eq:entropyHel} follows.
\qed
\end{proof}
\end{theopargself}
\section*{Appendix C: Posterior contraction rates in $L^r$-metrics, $1\leq r\leq 2$, for Dirichlet-Laplace mixtures}
\label{appendix:rates}
\begin{theopargself}
In this section, we prove Proposition \ref{prop:2} and Proposition \ref{prop:1} of Sect. \ref{sec:Bayes} on contraction rates in the $L^1$ and $L^2$-metrics, respectively, for
the posterior distribution corresponding to a Dirichlet process mixture of Laplace densities.
\medskip
\noindent\emph{Proof of Proposition \ref{prop:2}}
In order to derive the Hellinger or the $L^1$-metric posterior contraction rate,
we can appeal to Theorem 2.1 of Ghosal \emph{et al}.
(2000), p. 503, or Theorem 2.1 of Ghosal and van der Vaart (2001), p. 1239.
We define a sieve set for which conditions (2.2) or (2.8) and (2.3) or (2.9), postulated in the aforementioned theorems, are satisfied.
To this aim, recalling that $\alpha(\mathbb{R})<+\infty$, let $\bar\alpha:=\alpha/\alpha(\mathbb{R})$ be the
probability measure corresponding to the baseline measure $\alpha$
of the Dirichlet process.
Consistently with the notation adopted throughout, $\bar\alpha$ is also used
to denote the
corresponding cumulative distribution function.
By a result of Doss and Sellke (1982), p. 1304, which concerns the tails of probability measures chosen from a Dirichlet prior, we have that, for almost every sample distribution $G$, if $a>0$ is large enough so that $\bar\alpha(-a)=1-\bar\alpha(a)$ is sufficiently small,
then
\[
\begin{split}
G([-a,\,a]^c)&\leq G(-a)+1-G(a)\\
&\leq \exp{\bigg\{-\frac{1}{\bar\alpha(-a)|\log \bar\alpha(-a)|^2}\bigg\}}+
\exp{\bigg\{-\frac{1}{[1-\bar\alpha(a)]|\log[1-\bar\alpha(a)]|^2}\bigg\}}\\
&=2\exp{\bigg\{-\frac{1}{\bar\alpha(-a)\,|\log \bar\alpha(-a)|^2}\bigg\}}\\
&< A_\eta(a),
\end{split}
\]
where we have set $A_\eta(a):=2\exp{\{-[\bar\alpha(-a)]^{-\eta}\}}$
for some fixed $0<\eta<1$. The inverse function $A_\eta^{-1}:\,(0,\,1)\rightarrow (0,\,+\infty)$
is defined
as $A^{-1}_\eta:\,u\mapsto -\bar\alpha^{-1}(\log^{-1/\eta}(2/u))$,
where the function $\bar\alpha^{-1}(\cdot)$ is the left-continuous inverse of $\bar\alpha(\cdot)$, that is, $\bar\alpha^{-1}(u):=\inf\{y\in\mathbb{R}:\,\bar\alpha(y)\geq u\}$, $u\in(0,\,1)$.
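As a quick numerical sanity check (not part of the proof), one can verify that the map $A_\eta$ and the stated expression for $A_\eta^{-1}$ are indeed inverse to each other, assuming for illustration a standard Laplace base measure, for which $\bar\alpha(y)=e^{y}/2$ on $y<0$ so that $\bar\alpha^{-1}$ has a closed form on $(0,\,1/2)$:

```python
import math

# Sanity check (illustration only): with a standard Laplace base measure,
# bar_alpha(y) = exp(y)/2 for y < 0, verify numerically that
#   A_eta(a)     = 2*exp(-bar_alpha(-a)**(-eta))
# and the stated inverse
#   A_eta_inv(u) = -bar_alpha_inv(log(2/u)**(-1/eta))
# compose to the identity on a range of u in (0, 1/2).

def bar_alpha(y):          # Laplace CDF (only the negative half-line is used)
    return 0.5 * math.exp(y) if y < 0 else 1 - 0.5 * math.exp(-y)

def bar_alpha_inv(u):      # left-continuous inverse, valid for u < 1/2
    return math.log(2 * u)

def A(a, eta):
    return 2 * math.exp(-bar_alpha(-a) ** (-eta))

def A_inv(u, eta):
    return -bar_alpha_inv(math.log(2 / u) ** (-1 / eta))

eta = 0.5
round_trip_ok = all(
    abs(A(A_inv(u, eta), eta) - u) < 1e-12 for u in (1e-1, 1e-2, 1e-4)
)
```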
Considering the class of densities
$\mathscr P_{A_\eta}:=\{p_G:\,G([-a,\,a]^c)\leq A_\eta(a) \,\mbox{ for all } a>0\}$,
we have
$\Pi(\mathscr P_{A_\eta})=1$.
For any sequence of positive real numbers $\bar\varepsilon_n\downarrow0$,
set $a\equiv a_{\bar\varepsilon_n}:=A_\eta^{-1}(\bar\varepsilon_n)$
and define the sieve set
$\mathscr P_n:=\{p_G:\,G([-a_{\bar\varepsilon_n},\,a_{\bar\varepsilon_n}]^c)\leq A_\eta(a_{\bar\varepsilon_n})=\bar\varepsilon_n\}$. Then $$\Pi(\mathscr P\setminus \mathscr P_n)=0$$ and condition (2.3) or (2.9) is satisfied.
As for condition (2.2) or (2.8), taking $\bar\varepsilon_n=n^{-3/8}\log^{3/8}n$, by Lemma \ref{lem:entropy}, we have
\begin{equation}\label{eq:entropy bound}
\log D(\bar\varepsilon_n,\,\mathscr P_n,\, ||\cdot||_1)\leq
\log N(\bar\varepsilon_n/2,\,\mathscr P_n,\, ||\cdot||_1)\lesssim
(\bar\varepsilon_n)^{-2/3}\log \frac{A_\eta^{-1}(\bar\varepsilon_n/6)}{\bar\varepsilon_n^2}\lesssim n\bar\varepsilon_n^2.
\end{equation}
The same bound as in \eqref{eq:entropy bound} also holds for the Hellinger metric entropy.
The Kullback-Leibler prior mass condition (2.4) of Theorem 2.1 of Ghosal \emph{et al}.
(2000), p. 503, or, equivalently, condition (2.10) of Theorem 2.1 of Ghosal and van der Vaart (2001),
p. 1239, can be seen to be satisfied for $\tilde\varepsilon_n:=n^{-3/8}\log^{5/8}n$. For the verification of this condition, we refer the reader to
condition (2) of Proposition \ref{prop:1} below,
whose requirement \eqref{eq:tailG11} is satisfied
under assumption \eqref{eq:tailG1} of Proposition \ref{prop:2}.
The proof is completed by taking
$\varepsilon_n:=\max\{\bar\varepsilon_n,\,\tilde\varepsilon_n\}=n^{-3/8}\log^{5/8}n$. For the sake of clarity, we remark that the role of $\tilde\varepsilon_n$ is played by $\varepsilon_n$ in the proof of Proposition
\ref{prop:1}.
\qed
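The rate arithmetic in the proof above can be checked numerically. With $\bar\varepsilon_n=n^{-3/8}\log^{3/8}n$ one has, identically in $n$, $n\bar\varepsilon_n^2=\bar\varepsilon_n^{-2/3}\log n=n^{1/4}\log^{3/4}n$, so an entropy bound of order $\bar\varepsilon_n^{-2/3}$ times a logarithmic factor is of order $n\bar\varepsilon_n^2$, and $\tilde\varepsilon_n=n^{-3/8}\log^{5/8}n$ dominates $\bar\varepsilon_n$:

```python
import math

# Numerical check of the rate arithmetic: with
# eps_bar(n) = n^(-3/8) * log(n)^(3/8) one has, identically in n,
#   n * eps_bar(n)**2  ==  eps_bar(n)**(-2/3) * log(n),
# and eps_tilde(n) = n^(-3/8) * log(n)^(5/8) dominates eps_bar(n)
# for n >= 3, so max{eps_bar, eps_tilde} = eps_tilde.

def eps_bar(n):
    return n ** (-3 / 8) * math.log(n) ** (3 / 8)

def eps_tilde(n):
    return n ** (-3 / 8) * math.log(n) ** (5 / 8)

checks = []
for n in (10, 10**3, 10**6, 10**9):
    lhs = n * eps_bar(n) ** 2
    rhs = eps_bar(n) ** (-2 / 3) * math.log(n)
    checks.append(abs(lhs / rhs - 1) < 1e-12 and eps_bar(n) <= eps_tilde(n))
rates_ok = all(checks)
```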
\medskip
We now prove Proposition \ref{prop:1} on the posterior contraction rate in the $L^2$-metric.
The result relies on Theorem 3 of Gin\'{e} and Nickl (2011), p. 2892, which gives sufficient
conditions for deriving posterior contraction rates in $L^r$-metrics, $1<r<+\infty$. All assumptions of Theorem 3, except for condition (2), are shown to be satisfied for any kernel density $f$ as in Definition \ref{def:algdecr} with $\beta>1$.
This includes the (standard) Laplace kernel density as a special case when $\beta=2$.
Condition (2), which requires the prior mass in Kullback-Leibler type neighborhoods of the sampling density
$p_0\equiv p_{G_0}=G_0\ast f$ to be not exponentially small, relies on a preliminary result approximating the density $p_{G_0^*}=G_0^*\ast f$, with mixing distribution $G_0^*$ obtained as
the re-normalized restriction of $G_0$ to a compact interval,
by a mixture density whose discrete mixing distribution has a suitably restricted number of support points.
This result is known to hold for the Laplace kernel density in virtue of
Lemma 2 of Gao and van der Vaart (2016), pp. 615--616.
\medskip
\noindent\emph{Proof of Proposition \ref{prop:1}}
We apply Theorem 3 of Gin\'{e} and Nickl (2011), p. 2892, with $r=2$.
We refer to the conditions of this theorem using the same letters/numbers
as in the original article.
Let $\gamma_n\equiv 1$ and
$\delta_n\equiv\varepsilon_n:=n^{-3/8}\log^{5/8}n$, $n\in\mathbb{N}$.
\begin{itemize}
\item \emph{Verification of condition} (b)\\[2pt]
Condition (b), which requires that $\varepsilon_n^2=O(n^{-1/2})$, is satisfied in
the general case for $\varepsilon_n=n^{-(\beta-1/2)/(2\beta)}\log^\kappa n$, with some $\kappa >0$
and $\beta>1$.\\
\item \emph{Verification of condition} (1)\\[2pt]
Condition (1) requires that the prior probability of the complement of a sieve set $\mathscr P_n$ is exponentially small.
We show that, in the present setting, the prior probability of a sieve set $\mathscr P_n$, chosen
as prescribed by (15) in Theorem 2 of Gin\'{e} and Nickl (2011), p. 2891, is equal to zero.
Let $J_n$ be any sequence of positive real numbers satisfying
$2^{J_n}\leq c n\varepsilon_n^2
$ for some fixed constant $0<c<+\infty$.
Let $K$ be a convolution kernel that is of bounded $p$-variation for some finite real number $p\geq1$, right (or left) continuous, and satisfies $||K||_\infty<+\infty$,
$\int_{-\infty}^{+\infty}(1+|z|)^w|K(z)|\,\d z<+\infty$ for some $w>2$, $\hat K(0)=1$ and
$I^2_\beta[\hat K
]<+\infty$, cf. condition \eqref{eq:integrability} in Lemma \ref{lem:1}.
Define the sieve set
$$\mathscr P_n:=\big\{p_G\in\mathscr P:\,||p_G\ast K_{2^{-J_n}}-p_G||_2\leq C\delta
_n\big
\},$$
where $K_{2^{-J_n}}(\cdot):=2^{J_n}K(\cdot2^{J_n})$ and $C>0$ is a finite constant depending only
on $K$ and $f$. Then
\begin{equation*}\label{eq:sieveprob}
\Pi(\mathscr P\setminus \mathscr P_n)=0\quad\mbox{for all $n\in\mathbb{N}$.}
\end{equation*}
In fact, for every $G\in \mathscr G$,
by Plancherel's Theorem,
$||p_G\ast K_{2^{-J_n}}-p_G||_2^2=||p_G-p_G\ast K_{2^{-J_n}}||_2^2=(2\pi)^{-1}
||\hat p_G- \hat p_G\times \hat K
_{2^{-J_n}}||_2^2\leq (2\pi)^{-1}||\hat f-
\hat f \times\hat K
_{2^{-J_n}}||_2^2$
and, by Lemma \ref{lem:1}, $||\hat f-
\hat f \times\hat K
_{2^{-J_n}}||_2^2
\sim (2^{-J_n})^{2\beta-1} \times B_f^2\times I_\beta^2[\hat K]$,
where, for $\beta=2$, we have $(2^{-J_n})^{2\beta-1}=(2^{-J_n})^{3}=
O(\delta_n^2)$.
Thus,
\begin{equation}\label{eq:sieve}
\forall\,G\in\mathscr G,\,\,\,
||p_G\ast K_{2^{-J_n}}-p_G||_2=O(\delta_n)
\end{equation}
and condition (1) is verified. Relationship
\eqref{eq:sieve} holds, in particular, for $p_0\equiv p_{G_0}=G_0\ast f$.
Furthermore, $p_0\in L^2(\mathbb{R})$ if $f\in L^2(\mathbb{R})$, which is the case for the (standard) Laplace kernel density, because
$||p_0||_2^2=(2\pi)^{-1}||\hat p_0||_2^2
\leq (2\pi)^{-1}||\hat f ||_2^2= ||f ||_2^2<+\infty$. \\
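The order comparison behind $(2^{-J_n})^{3}=O(\delta_n^2)$ can be checked numerically. Taking, for illustration, $2^{J_n}=n\varepsilon_n^2$ (that is, $c=1$) with $\varepsilon_n=n^{-3/8}\log^{5/8}n$, the ratio $(2^{-J_n})^3/\varepsilon_n^2$ equals $\log^{-5}n$ and tends to zero:

```python
import math

# Numerical check (illustration only): with 2**J_n = n * eps_n**2 and
# eps_n = n^(-3/8) * log(n)^(5/8), the ratio
#   (2**-J_n)**3 / eps_n**2
# equals log(n)**(-5), hence (2^{-J_n})^{2*beta-1} = O(delta_n^2)
# for beta = 2 is comfortably met.

def ratio(n):
    eps = n ** (-3 / 8) * math.log(n) ** (5 / 8)
    two_pow_neg_J = 1.0 / (n * eps ** 2)
    return two_pow_neg_J ** 3 / eps ** 2

vals = [ratio(n) for n in (10, 10**3, 10**6, 10**9)]
sieve_ok = all(v < 1 for v in vals) and all(
    a > b for a, b in zip(vals, vals[1:])
)
```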
\item \emph{Verification of condition} (2)\\[2pt]
Condition (2) requires that, for some finite constant $C_1>0$, the prior probability of Kullback-Leibler type
neighborhoods of $P_0$ of radius $\varepsilon_n^2$ is at least $\exp{(-C_1 n\varepsilon_n^2)}$, that is,
$\Pi(B_{\textrm{KL}}(P_0;\,\varepsilon_n^2)
)\gtrsim \exp{(-C_1 n\varepsilon_n^2)}$.
Fix $0<\varepsilon\leq (1-e^{-1})/\sqrt{2}$ and
let $a_\varepsilon:=A_0^{-1}(\varepsilon^2)$, where $A_0^{-1}$ is the inverse of the function $A_0$
in condition \eqref{eq:tailG0SS}.
Define $G_0^\ast$ as the re-normalized restriction of $G_0$ to $[-a_\varepsilon,\,a_
\varepsilon]$. By Lemma A.3 of Ghosal and van der Vaart (2001), p. 1261,
and assumption \eqref{eq:tailG0SS}, we have
$||p_{G_0}-p_{G_0^\ast}||_1\leq2 G_0([-a_\varepsilon,\,a_
\varepsilon]^c)\lesssim \varepsilon^2$. From the
inequality in \eqref{eq:Hel_L1},
$h^2(p_{G_0},\,p_{G_0^\ast})\leq ||p_{G_0}-p_{G_0^\ast}||_1\lesssim \varepsilon^2$,
whence $h(p_{G_0},\,p_{G_0^\ast})\lesssim\varepsilon$.
It is known from Lemma 2 of Gao and van der Vaart (2016), pp. 615--616,
that there exists a discrete distribution $G_0'$ such that $h(p_{G_0'},\,p_{G_0^\ast})\lesssim \varepsilon$. The distribution $G_0'$ has at most $N\asymp \varepsilon^{-2/3}$ support points $y_1,\,\ldots,\,y_N$ in $[-a_\varepsilon,\,a_\varepsilon]$, which we may assume to be at least $2\varepsilon^2$-separated. If not, we can take a maximal $2\varepsilon^2$-separated set in the support points of $G_0'$ and replace $G_0'$ with the discrete
distribution $G_0''$ obtained by relocating the masses of $G_0'$ to the nearest points of the $2\varepsilon^2$-net. Then,
$h^2(p_{G_0'},\,p_{G_0''}
)\lesssim \max_{1\leq j
\leq N}|y_j'-y_j''|\lesssim \varepsilon^2$,
as shown in Proposition 2 of Gao and van der Vaart (2016), p. 617.
Let $G_0'=\sum_{j=1}^Np_j\delta_{y_j}$, with $|y_j-y_k|\geq2\varepsilon^2$ for all $1\leq j\neq k\leq N$. For any distribution $G$
such that
\begin{equation}\label{eq:condmixing}
\sum_{j=1}^N|G([y_j-\varepsilon^2,\,y
_j+\varepsilon^2])-p_j|\leq \varepsilon^2,
\end{equation}
we have
$||p_G-p_{G_0'}||_1\lesssim \varepsilon^2$
by Lemma 5 of Gao and van der Vaart (2016), p. 620.
Thus,
\[\begin{split}
h^2(p_G,\,p_{G_0}) &\lesssim
h^2(p_G,\,p_{G_0'}) + h^2(p_{G_0'},\,p_{G_0^\ast}) + h^2(p_{G_0^\ast},\,p_{G_0})\\
&\lesssim
||p_G-p_{G_0'}||_1 + \varepsilon^2 + ||p_{G_0^\ast}-p_{G_0}||_1 \lesssim \varepsilon^2.
\end{split}\]
We can now invoke Lemma A.10 in Scricciolo (2011), p. 305, taking into account Remark A.3 of the same article. To this aim, note that, if $G$ satisfies \eqref{eq:condmixing}, then $G([-(a_\varepsilon+1),\,(a_\varepsilon+1)])>1/2$. The reader may also refer to Scricciolo (2014), p. 305.
For any $G\in\mathscr G$, let $P_G$ stand for the probability measure with density $p_G\in\mathscr P$. The inclusion
\[\bigg\{P_G:\,\sum_{j=1}^N|G([y_j-\varepsilon^2,\,y_j+\varepsilon^2])-p_j|\leq \varepsilon^2\bigg
\}\subseteq B_{\textrm{KL}}\big (P_0;\,\varepsilon^2\log^2(1/\varepsilon)\big
)\]
holds. To apply Lemma A.2 of Ghosal and van der Vaart (2001), p. 1260, note that,
for every $y_j$, $1\leq j\leq N$, we have $\alpha([y_j-\varepsilon^2,\,y_j+\varepsilon^2])\gtrsim\varepsilon ^{b'}$ for some finite constant $b'>0$. Thus,
\[\log\Pi(B_{\textrm{KL}}(P_0;\,\varepsilon^2\log^2(1/\varepsilon)))
\gtrsim -N\log(1/ \varepsilon) \asymp -\varepsilon^{-2/3}\log(1/\varepsilon).\]
Taking $\varepsilon_n:=\varepsilon\log(1/\varepsilon)$, we have
$
\Pi(B_{\textrm{KL}}(P_0;\,\varepsilon_n^2)
)\gtrsim \exp{(-C_1 n\varepsilon_n^2)}
$
and condition (2) is satisfied.\\
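The matching of rates behind this last step can be illustrated numerically. Taking, for illustration, $\varepsilon=n^{-3/8}\log^{-3/8}n$, so that $\varepsilon_n=\varepsilon\log(1/\varepsilon)$ is of the order $n^{-3/8}\log^{5/8}n$, the prior-mass exponent $\varepsilon^{-2/3}\log(1/\varepsilon)$ stays within a constant multiple of $n\varepsilon_n^2$:

```python
import math

# Numerical check of the rate matching behind condition (2): with the
# illustrative choice eps = n^(-3/8) * log(n)^(-3/8), setting
# eps_n := eps * log(1/eps) (of order n^(-3/8) * log(n)^(5/8)), the
# exponent eps^(-2/3) * log(1/eps) is O(n * eps_n**2), with a ratio
# that stays within a fixed constant over many orders of magnitude of n.

ratios = []
for n in (10**3, 10**6, 10**9, 10**12):
    eps = n ** (-3 / 8) * math.log(n) ** (-3 / 8)
    log_inv = math.log(1 / eps)
    eps_n = eps * log_inv
    ratios.append(eps ** (-2 / 3) * log_inv / (n * eps_n ** 2))

mass_ok = all(1.0 < r < 4.0 for r in ratios)
```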
\item \emph{Verification of condition} (3)\\[2pt]
Condition (3) requires that there exists a finite constant $B>0$ such that
$\Pi(||p_G||_\infty>B\mid X^{(n)})=o_{\mathbf{P}}(1)$.
If $||f||_\infty<+\infty$, then
$||p_G||_\infty\leq ||f||_\infty<+\infty$ for all $G\in\mathscr G$, see Lemma \ref{lem:l2hel}. In particular, $||p_0||_\infty=||p_{G_0}||_\infty\leq ||f||_\infty<+\infty$.
Taking $B:=||f||_\infty$, we have
$$\forall\,n\in\mathbb{N},\,\,\,
\Pi(||p_G||_\infty>B\mid X^{(n)})=0\quad P_0^n\mbox{-almost surely},$$
and condition (3) is satisfied. For the (standard) Laplace kernel density, $||f||_\infty=1/2$.
\end{itemize}
The proof is thus complete and assertion \eqref{eq:l2norm} follows.
\qed
\end{theopargself}
\section*{Appendix D: Inversion inequalities}\label{appendix:wasserstein}
\begin{theopargself}
In this section, we state a result relating, for every real number $p\geq1$, the $L^p$-Wasserstein distance
between any pair of mixing distributions $G,\,G'\in\mathscr{G}$ to the $L^2$-distance between the corresponding mixed densities $p_G=G\ast f$ and $p_{G'}=G'\ast f$, with a kernel density $f$ that is ordinary smooth in the sense of condition \eqref{eq:ft} stated below. Lemma \ref{lem:2} extends Lemma 7 of Gao and van der Vaart (2016), pp. 621--622, beyond the case of compactly supported mixing distributions to mixing distributions with finite moment generating functions on some neighborhood of zero $(-s_0,\,s_0)$, with $0<s_0<1$. If, furthermore, the kernel density is bounded, $||f||_\infty<+\infty$, then the inversion inequality in \eqref{eq:wasserstein} below also holds for the Hellinger metric in virtue of the following known result, which is reported for the reader's convenience.
\begin{lemma}\label{lem:l2hel}
For a given kernel density $f$, let $p_G=G\ast f$, with $G\in\mathscr G$.
If $||f||_\infty<+\infty$, then
$$\forall\,G\in\mathscr G,\,\,\, p_G(x)
\leq ||f||_\infty\quad\mbox{for all $x\in\mathbb{R}$,}$$
and
\begin{equation}\label{eq:hel^2}
\forall\,G,\,G'\in\mathscr G,\,\,\, ||p_G-p_{G'}||_2^2\leq 4||f||_\infty h^2(p_G,\,p_{G'}).
\end{equation}
\end{lemma}
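Inequality \eqref{eq:hel^2} can be illustrated numerically for the standard Laplace kernel, for which $||f||_\infty=1/2$, so that the bound reads $||p_G-p_{G'}||_2^2\leq 2h^2(p_G,\,p_{G'})$. Below, $p_G$ and $p_{G'}$ are taken, for illustration, as Laplace densities located at $0$ and $1$, and the integrals are approximated by Riemann sums:

```python
import math

# Numerical illustration of \eqref{eq:hel^2} for the standard Laplace
# kernel (||f||_inf = 1/2): with p_G and p_G' Laplace densities located
# at 0 and 1, the squared L2-distance should not exceed
# 4*||f||_inf * h^2 = 2*h^2. Integrals are approximated on a fine grid;
# the tails beyond |x| = 30 are negligible.

def laplace(x, mu):
    return 0.5 * math.exp(-abs(x - mu))

dx = 1e-3
grid = [-30 + i * dx for i in range(60001)]
p = [laplace(x, 0.0) for x in grid]
q = [laplace(x, 1.0) for x in grid]

l2_sq = sum((a - b) ** 2 for a, b in zip(p, q)) * dx
hel_sq = 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2
                   for a, b in zip(p, q)) * dx

ineq_ok = l2_sq <= 2 * hel_sq and hel_sq > 0
```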
\smallskip
We now state and prove an inequality translating the $L^2$-norm and the
Hellinger distance between mixed densities into any $L^p$-Wasserstein distance, $p\geq 1$, between the corresponding mixing distributions.
\begin{lemma}\label{lem:2}
Let $G$ and $G'$ be probability measures on some Borel-measurable space $(\mathscr{Y},\,\mathscr{B}(\mathscr{Y}))$, $\mathscr Y\subseteq\mathbb{R}$, such that
the associated moment generating functions
$M_G(s)$ and $
M_{G'}(s)$ are finite for all $|s|<s_0$, with $0<s_0<1$.
Let $f$ be a probability density function on $\mathbb{R}$, with Fourier transform $\hat f$ satisfying,
for some real number $\beta>0$, the condition
\begin{equation}\label{eq:ft}
\inf_{t\in\mathbb{R}}(1+|t|^\beta)|\hat f(t)|>0.
\end{equation}
Let $d$ stand for the $L^2$-distance between the mixed densities $p_G=G\ast f$ and $p_{G'}=
G'\ast f$. Then, for any real number
$p\geq 1$,
\begin{equation}\label{eq:wasserstein}
\hspace*{-0.3cm}
W_p(G,\,G')\lesssim d^{1/(p+\beta)}\pt{\log\frac{1}{d} }^{(p+1/2)/(p+\beta)}\quad
\mbox{ for }\, d=||p_G-p_{G'}||_2 \, \mbox{ small enough}.
\end{equation}
If, in addition, $||f||_\infty<+\infty$, then the upper bound in \eqref{eq:wasserstein} also holds for $d$ being the Hellinger distance,
$d=h(p_G,\,p_{G'})$.
\end{lemma}
\begin{proof}
For any real number $h>0$, by the triangle inequality, we have
\begin{equation}\label{eq:wass}
W^p_p(G,\,G')\leq W^p_p(G,\, G\ast\Phi_h) + W^p_p(G\ast \Phi_h,\,G'\ast \Phi_h) +W^p_p(G'\ast \Phi_h,\, G'),
\end{equation}
where $\Phi_h$ stands for a zero-mean Gaussian probability measure with variance $h^2$, whose density is denoted by $\phi_h(\cdot):=h^{-1}\phi(\cdot/h)$, for $\phi$ the density of a standard normal r.v. $W$. The first and third terms on the right-hand side of
\eqref{eq:wass} can be bounded above as follows. By standard arguments, see, for instance, the proof of Theorem 2 in Nguyen~(2013), pp. 389--391,
\begin{equation}\label{eq:max}
\max\{W_p^p(G,\, G\ast\Phi_h),\, W_p^p(G'\ast\Phi_h,\,G')\}\leq E[|hW|^p]\lesssim h^p
\end{equation}
because $E[|W|^p]<+\infty$ for every real number $p>0$, hence, \emph{a fortiori}, for every real $p\geq1$.
Concerning the second term on the right-hand side of
\eqref{eq:wass}, reasoning as in Lemma 7 of Gao and van der Vaart~(2016), pp. 621--622, for any real number $M>0$,
\[W_p^p
(G\ast\Phi_h,\,G'\ast\Phi_h)\lesssim
\pt{\int_{|x|\leq M}+\int_{|x|>M}}|x|^p|
(G-G')\ast\phi_h(x)|\,\d x=:T_1+T_2,\]
where, for every $0<h \leq 1$,
\begin{equation}\label{eq:t1}
T_1\lesssim M^{p+1/2}||(G-G')\ast\phi_h||_2 \lesssim M^{p+1/2} h^{-\beta} ||p_G-p_{G'}||_2
\end{equation}
because $\sup_{t\in\mathbb{R}}|\hat \phi(h
t)|/|\hat f(t)|\lesssim h^{-\beta}$ in virtue of assumption \eqref{eq:ft}. To see this, note that assumption \eqref{eq:ft} implies the existence of a finite constant $L_f>0$ such that $(1+|t|^\beta)|\hat f(t)|\geq L_f$ for all
$t\in\mathbb{R}$. Therefore, if $0<h \leq 1$,
\[\sup_{t\in\mathbb{R}}\frac{|\hat \phi(h
t)|}{|\hat f(t)|}\leq \frac{1}{L_f}
\sup_{t\in\mathbb{R}}[(1+|ht|^\beta)|\hat \phi(h
t)|]\times \sup_{t\in\mathbb{R}}
\bigg(\frac{1+|t|^\beta}{1+|ht|^\beta}\bigg)\lesssim h^{-\beta}.
\]
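This bound can be checked numerically for the standard Laplace kernel, whose Fourier transform is $\hat f(t)=1/(1+t^2)$, so that $\beta=2$ and one may take $L_f=1$; analytically, for $h\leq\sqrt{2}$ the supremum equals $(2/h^2)e^{-1+h^2/2}$, so $h^2\sup_t|\hat\phi(ht)|/|\hat f(t)|\to 2/e$ as $h\to0$:

```python
import math

# Numerical illustration of sup_t |phi_hat(h*t)| / |f_hat(t)| <= C * h^(-beta)
# for the standard Laplace kernel, f_hat(t) = 1/(1 + t^2) (beta = 2).
# The supremum is approximated on a grid of t-values; multiplying by h^2
# should give a quantity bounded away from 0 and infinity as h decreases.

def ratio(t, h):
    phi_hat = math.exp(-(h * t) ** 2 / 2)   # standard normal char. function
    f_hat = 1.0 / (1.0 + t ** 2)            # standard Laplace char. function
    return phi_hat / f_hat

sup_times_h2 = []
for h in (0.5, 0.1, 0.02):
    sup = max(ratio(i * 0.05, h) for i in range(12001))  # t in [0, 600]
    sup_times_h2.append(sup * h ** 2)

bounded_ok = all(0.5 < v <= 1.0 for v in sup_times_h2)
```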
If $||f||_\infty<+\infty$, then the $L^2$-distance between $p_G$ and $p_{G'}$ in \eqref{eq:t1} can be replaced with the Hellinger distance (see Lemma \ref{lem:l2hel}), so that
\[
T_1\lesssim M^{p+1/2} h^{-\beta} h(p_G,\,p_{G'}).
\]
We now deal with the term $T_2$. We preliminarily derive an instrumental inequality.
For every $x\in\mathbb{R}$ and real numbers $p,\,u>0$,
\[
\frac{p}{u}
e^{u|x|/p}=\frac{p}{u}\sum_{j=0}^{+\infty}\frac{(u|x|/p)^j}{j!}\geq |x|,
\]
whence
\begin{equation}\label{eq:23}
|x|^p\leq (p/u)^p e^{u|x|}<(p/u)^p (e^{-ux}+e^{ux}).
\end{equation}
Now fix any number $0<u<s_0<1$. Applying the inequalities in \eqref{eq:23}
and taking into account the expression of the moment generating function of a standard Gaussian distribution $M_{\Phi}(s)=e^{s^2/2}$, $s\in\mathbb{R}$, we get
\begin{align*}
\int_{-\infty}^{+\infty}\max\{1,\,|x|^p\}e^{u|x|}\phi_h(x)\,\d x&\leq
\int_{-\infty}^{+\infty}\max\{e^{u|x|},\,(p/u)^p e^{2u|x|}\}\phi_h(x)\,\d x\\
&<2\max\{e^{(u h)^2/2},\,(p/u)^p e^{2(u h)^2}\}\\
&<
2\max\{e^{s_0^2/2},\,(p/u)^p e^{2s_0^2}\},
\end{align*}
that is, for fixed $u$, the above integral is bounded above by a constant not depending on $h$ and $M$, and can therefore be absorbed into the multiplicative constant when bounding $T_2$. Hence,
\[\begin{split}
T_2 &\lesssim e^{-uM} \int_{|x|>M}|x|^pe^{u|x|}[(G+G')\ast\phi_h(x)]\,\d x\\
&\lesssim e^{-uM}\int_{\mathscr Y} (1+|y|^p)e^{u|y|}\pt{\int_{-\infty}^{+\infty}\max\{1,\,|x|^p\}e^{u|x|}\phi_h(x)\,\d x}\,\d (G+G')(y)\\
& \lesssim e^{-uM}\int_{\mathscr Y}(1+|y|^p)e^{u|y|}\,\d (G+G')(y) \lesssim e^{-uM}
\end{split}\]
because
\[\begin{split}
\int_{\mathscr Y}e^{u|y|}\,\d (G+G')(y) &<\int_{\mathscr Y}
(e^{-uy}+e^{uy})\,\d (G+G')(y)\\
&=(M_G+M_{G'})(-u)+(M_G+M_{G'})(u)<+\infty
\end{split}\]
and, for any fixed real number $0<\xi<1$ such that $0<s:=(\xi+u)<s_0$, by the inequalities in \eqref{eq:23},
\[\begin{split}
\int_{\mathscr Y}|y|^pe^{u|y|}\,\d (G+G')(y) &<
(p/\xi)^p
\int_{\mathscr Y}e^{(\xi+u)|y|}\,\d (G+G')(y)\\&=
(p/\xi)^p
\int_{\mathscr Y} e^{s|y|}\,\d (G+G')(y)\\&< (p/\xi)^p\int_{\mathscr Y}
(e^{-sy}+e^{sy})\,\d (G+G')(y)\\&=(p/\xi)^p[(M_G+M_{G'})(-s)+(M_G+M_{G'})(s)]<+\infty
\end{split}\]
by the assumption that both $G$ and $G'$
have finite moment generating functions on $(-s_0,\,s_0)$, for $0<s_0<1$.
Thus,
\begin{equation}\label{eq:t2}
T_2\lesssim e^{-u M}.
\end{equation}
Combining partial results in \eqref{eq:max}, \eqref{eq:t1} and \eqref{eq:t2},
we get
\begin{equation}\label{eq:wasserstein2}
W^p_p(G,\,G')\lesssim h^p + M^{p+1/2}h^{-\beta}d+e^{-uM}
\end{equation}
and the conclusion follows by
minimizing the expression in \eqref{eq:wasserstein2} with respect to $h$ and $M$, which, for sufficiently small $d$, implies taking
$M=O(\log (1/d))$ and $h^{p+\beta}=O(d \log^{p+1/2} (1/d))$.
\qed
\end{proof}
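The trade-off resolved in the final optimization can be checked numerically. With the illustrative values $p=\beta=2$ and $u=1/2$, plugging $M=u^{-1}\log(1/d)$ and $h^{p+\beta}=d\log^{p+1/2}(1/d)$ into the right-hand side of \eqref{eq:wasserstein2} gives a quantity of the order of $\big(d^{1/(p+\beta)}\log^{(p+1/2)/(p+\beta)}(1/d)\big)^p$, consistently with \eqref{eq:wasserstein} after taking $p$-th roots:

```python
import math

# Numerical check of the final optimization with illustrative values
# p = beta = 2, u = 1/2: plugging M = (1/u)*log(1/d) and
# h^(p+beta) = d * log(1/d)^(p+1/2) into
#   bound = h^p + M^(p+1/2) * h^(-beta) * d + exp(-u*M),
# the ratio bound/target, with
#   target = (d^(1/(p+beta)) * log(1/d)^((p+1/2)/(p+beta)))^p,
# stays within a fixed constant as d decreases.

p, beta, u = 2.0, 2.0, 0.5
ratios = []
for d in (1e-4, 1e-6, 1e-10):
    L = math.log(1 / d)
    M = L / u
    h = (d * L ** (p + 0.5)) ** (1 / (p + beta))
    bound = h ** p + M ** (p + 0.5) * h ** (-beta) * d + math.exp(-u * M)
    target = (d ** (1 / (p + beta)) * L ** ((p + 0.5) / (p + beta))) ** p
    ratios.append(bound / target)

optim_ok = all(1.0 <= r < 10.0 for r in ratios)
```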
\begin{remark}
The standard Laplace kernel density is bounded, with $||f||_\infty=1/2$, and satisfies condition \eqref{eq:ft} for $\beta=2$.
\end{remark}
\end{theopargself}
\bibliographystyle{}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 6,559
|
Estrangeiro () is a 1989 album by the Brazilian singer Caetano Veloso. It was produced by Peter Scherer and Arto Lindsay and features Naná Vasconcelos, Carlinhos Brown, Bill Frisell and Marc Ribot. Robert Christgau named it 27th on "The 1989 Pazz & Jop Critics Poll" of best albums released in that year.
Track listing
Personnel
Caetano Veloso: vocals, acoustic guitar (on tracks 3, 8, 10)
Peter Scherer: keyboards (on 1–7, 9)
Arto Lindsay: guitar (on 1, 3–5, 9) and voice (on 4)
Bill Frisell: guitar (on 1, 6)
Marc Ribot: guitar (on 1, 7)
Toni Costa: guitar (on 2, 9) and acoustic guitar (on 10)
Tavinho Fialho: bass (on 2, 9)
Tony Lewis: drums (on 1)
Cesinha: drums (on 2, 9)
Naná Vasconcelos: percussion (on 1, 5–8)
Carlinhos Brown: percussion (on 2, 4, 9)
References
1989 albums
Caetano Veloso albums
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 4,843
|
Q: I'm looking for a word or phrase that describes the feeling that something very bad or catastrophic is about to happen It may be something that will happen to the person who is having the feeling but it may also be to several persons, as might occur with a highly destructive earthquake, for instance.
The word or phrase would be used in the following sentence:
"I'm no spiritualist but I have a feeling of ___. I hope it's just a feeling."
EDIT - The phrase I'm looking for conveys a much stronger feeling than "I have a bad hunch". It is about something catastrophic which the person feels is "going to happen in a few minutes".
A: How about foreboding: 'a strong inner feeling or notion of a future misfortune, evil, etc'.
A: I have a feeling of impending danger or I have a presentiment of impending danger.
A: Presage:
*
*a sign that something, often something unpleasant, will happen:
*something that foreshadows or portends a future event , omen
*
*The fact that no agreement has been reached by the Prime Ministers is a presage that a conflict may be imminent.
(from www.dictionary.cambridge.org)
A: Is not the word you are searching ominous?
I'm no spiritualist but I have an ominous feeling, ....
Equally, as Patrick Wood points out a feeling of foreboding would do equally well, perhaps engendering even more concern in the listener.
A: My first thoughts on reading the question were of the phrase 'I have a feeling of impending doom.' Since the word 'catastrophic' is used, this doesn't feel unduly strong.
Edit by Centaurus - I'm adding some lines from the reference the answerer has given in his comment below:
Many people experience strong feelings and sensations associated with fear and anxiety. They are especially powerful when they occur for seemingly no reason. Consequently, many people react to these "out of the blue" feelings with fear, which only serves to inflame them. To better understand these strong impending doom feelings, the anxiety symptom "fear of impending doom" is often described as one or many of the following:
*
*Feeling like something awful is about to occur
*A sense that something very dangerous is about to happen
*An overwhelming feeling you are about to die
*A strong feeling that something terrible is about to happen and there isn't anything you can do about it
*A strong feeling of death and destruction that suddenly comes over you
*An overwhelming fear of impending doom, destruction, despair, and gloom
*A horrible feeling of doom and gloom that washes over you
*Fear of impending doom that begins or accompanies a panic attack or anxiety attack
*Such a strong feeling of impending doom that you feel you have to escape immediately or something terrible will happen
A:
I'm no spiritualist but I have a premonition. I hope it's no more than that.
The alternative shown above may work. From en.wiktionary, premonition means
(1) A clairvoyant or clairaudient experience, such as a dream, which resonates with some event in the future.
(2) A strong intuition that something is about to happen (usually something negative, but not exclusively).
A: The word apprehension comes to mind.
A: This is a perfect opportunity to use one of my favorite words:
consternation - noun - "An emotion experienced in anticipation of some specific pain or danger."
A: "I felt the hair on the back of my neck stand up"
A: "I feel like someone just walked over my grave" is a dark, ominous, foreboding description, and it hints toward death with a touch of the supernatural.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 9,528
|
István Ráth-Végh (* 23. November 1870 in Budapest; † 18. Dezember 1959 ebenda) war ein ungarischer Jurist und Schriftsteller, der sich mit Kulturgeschichte befasste.
Ráth-Végh promovierte an der Universität Budapest und arbeitete bis 1934 als Rechtsanwalt.
In deutscher Sprache erschienen unter anderem seine bekanntesten populärwissenschaftlichen Werke: Aus der Geschichte der Dummheit (deutsch 1961), Die Komödie des Buches (1937), Schwarze Chronik (1958).
Weblinks
Biografie (ung.)
Dichterjurist
Rechtsanwalt (Ungarn)
Kulturhistoriker
Schriftsteller (Budapest)
Ungar
Geboren 1870
Gestorben 1959
Mann
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 7,226
|
import cherrypy
class Root(object):
@cherrypy.expose
def index(self):
return 'Hello World!'
cherrypy.config.update({'environment': 'embedded'})
app = cherrypy.tree.mount(Root())
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,534
|
package org.codehaus.groovy.ast.expr;
import org.codehaus.groovy.ast.ClassNode;
import org.codehaus.groovy.ast.GroovyCodeVisitor;
/**
* @author sam
*/
public class UnaryMinusExpression extends Expression {
private final Expression expression;
public UnaryMinusExpression(Expression expression) {
this.expression = expression;
}
public Expression getExpression() {
return expression;
}
public void visit(GroovyCodeVisitor visitor) {
visitor.visitUnaryMinusExpression(this);
}
public Expression transformExpression(ExpressionTransformer transformer) {
Expression ret = new UnaryMinusExpression(transformer.transform(expression));
ret.setSourcePosition(this);
ret.copyNodeMetaData(this);
return ret;
}
public String getText() {
return expression.getText();
}
public ClassNode getType() {
return expression.getType();
}
public boolean isDynamic() {
return false;
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 6,727
|
Внешний вид или Облик:
Внешний вид растения;
Внешний вид человека.
См. также
Экстерьер
Интерьер
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,528
|
Q: How to find the radius if two circles intersect in two distinct points? Question- if two circles $(x-1)^2+(y-3)^2=r^2$ and $x^2+y^2-8x+2y+8=0$ intersect in two distinct points , then find the range in which r exists
I have these two circles
$(x-1)^2+(y-3)^2=r^2$ and $x^2+y^2-8x+2y+8=0$ .
Now, I want to find out the range of r .
I have found out center $(4,-1)$ and radius as 3 of the second circle .
My book has mentioned a condition for circles to intersect at two places , we have.
$r_1-r_2<c_1c_2<r_1+r_2$
I don't understand what is this ?
My book says the answer is $2<r<8$
Please explain me the answer. Thank you !
A: So you have a circle of center $(1,3)$ and radius $r$ and another circle of center $(4,-1)$ and radius $3$. For them to intersect in two places, two things have to happen: the interiors have to overlap (so $r$ cannot be too small) and the first circle cannot completely surround the second circle (so $r$ cannot be too big).
We can simplify the problem by shifting and rotating coordinates so that the first circle is centered at the origin and the second circle is centered on the positive $x$ axis. Then the second circle is now centered at $x=\sqrt{(1-4)^2+(-1-3)^2}=\sqrt{9+16}=5$ and $y=0$.
Having done that, notice that the point on the second circle which is closest to the center of the first circle is $(2,0)$ (the leftmost point). If we want the insides to overlap, then this point will have to be inside the first circle, so you will need $r>2$.
Now try to find a condition so that the first circle does not completely surround the second circle. Hint: what is the furthest point on the second circle from the center of the first circle?
A: For 2 circle intersecting each other as 2 points, the distance between centres $C_1$ and $C_2$ must be shorter than the distance when 2 circle are in contact (touching) for 1 point only, which is the case $C_1C_2=r_1+r_2$ as the centres form a straight line with the only intersecting point. So we have :
$$C_1C_2<r_1+r_2$$
While the minimum distance between centres exists when the smaller circle touches the larger one internally. So the minimum distance $C_1C_2\ge{r_1}-r_2$. However as the 2 circle intersects each other at 2 points, the case of touching ($C_1C_2=r_1-r_2$) should be rejected. So
$$C_1C_2>{r_1}-r_2$$
To conclude,
$$r_1-r_2<c_1c_2<r_1+r_2$$
A: The circles intersect at two distinct points,therefore
$|r_1-r_2|<$distance between centers of the circles$<|r_1+r_2|$
$$
|r-3|<\sqrt{(4-1)^2+(-1-3)^2}<|r+3|
$$
$$
|r-3|<5 \cap |r+3|>5
$$
You can solve after that.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 567
|
Q: Nginx with Supervisor keep changing status b/w Running and Starting Here's a preview of the status running supervisorctl status every 2 seconds:
[root@docker] ~ # supervisorctl status
nginx RUNNING pid 2090, uptime 0:00:02
[root@docker] ~ # supervisorctl status
nginx STARTING
[root@docker] redis-2.8.9 # supervisorctl status
nginx RUNNING pid 2110, uptime 0:00:01
Is this a normal thing for nginx to respawn every few seconds ? Knowing that nginx is setup to be run in the background with this setup:
[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
A: Its been a long time, but it might help someone else... set daemon off in your nginx config. Supervisord requires processes not to run as daemons.
You can also set it directly for the supervisor command:
command=/usr/sbin/nginx -g "daemon off;"
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 5,419
|
La casa di Estridsen fu una dinastia dei re di Danimarca dal 1047 al 1412. La dinastia prende il nome dalla sua antenata, Estrid Svendsdatter. La dinastia è talvolta chiamata Ulfingi, dal nome del marito di Estrid, Ulf Thorgilsson. La dinastia diede anche tre dei sovrani della Svezia negli anni 1125-1412. Il loro stemma dinastico divenne lo stemma della Danimarca e quindi influenzò lo stemma di Tallinn e lo stemma dell'Estonia.
La Corte Reale di Danimarca non distingue tra le diverse case reali tra i primi re danesi, ma usa il termine "discendenti di Gorm il Vecchio" per tutti i monarchi prima degli Oldenburg.
Storia
Il nome della dinastia Estridsen ricorda la loro acquisizione della corona danese attraverso il matrimonio di Ulf Thorgilsson con Estrid Svendsdatter della dinastia dei Knýtlinga (Gorm), figlia di Sweyn Barbaforcuta e sorella di Canuto il Grande. Le genealogie successive (introdotte dallo storico danese Jakob Langebek nel XVIII secolo) fanno risalire la dinastia al leader dei vichinghi di Jomsborg, Styrbjörn il Forte, un rampollo della famiglia reale svedese, che a sua volta discende dal leggendario re Sigurðr Hringr, considerato come mitico dalla maggior parte degli storici moderni (nessuna fonte effettiva menziona tale ascendenza). L'ascendenza affidabile risale a non prima del padre di Ulf, l'oscuro Thorgil Sprakling e il nonno Björn (nelle fonti chiamato Ursius), quest'ultimo quindi identificato come Styrbjörn da Langebek.
La dinastia raggiunse il suo apice con l'Unione Kalmar, quando i suoi membri regnarono come re di Danimarca, Norvegia e Svezia in unione personale. La dinastia terminò nel 1412 con la morte dell'ultimo membro, la regina Margherita I. Tutti i successivi monarchi di Danimarca sono discendenti cognatici della dinastia degli Estridsen.
Albero genealogico
Da Thorgil Sprakling a Eric I il Buono
Thorgil Sprakling;
Ulf Thorgilsson, assassinated in 1026, probably jarl in England from 1017 ⚭ Estrid Svendsdatter (990/997-1057/1073), daughter of Sweyn Forkbeard (c. 960-1014) and a sister of Canute I of England (c. 985 or 995-1035);
Sweyn II of Denmark (c. 1019-1076), jarl from 1042, King of Denmark from 1047;
Sweyn the Crusader, killed in 1097 ⚭ Florinda († 1097), daughter of Odo I, Duke of Burgundy (Capetians);
Harold III (c. 1040-1080);
Sigrid ⚭ Gottschalk († 1066), a prince of the Obotrites (Nakonids);
Saint Canute IV (c. 1042-1086), jarl of Zealand from 1076, King of Denmark from 1080 ⚭ Adela (c. 1064-1115), daughter of Robert I, Count of Flanders;
Blessed Charles I the Good (1083-1127), assassinated on 2 March 1127 in Bruges, Count of Flanders from 1119 ⚭ around 1119 Margaret, daughter of Renaud II, Count of Clermont-en-Beauvaisis;
Cecilia (c. 1085/86-1131) ⚭ Erik, Jarl of Västergötland;
Ingerid (c. 1085/86 - ?) ⚭ Folke the Fat, jarl in Sweden.
Olaf I (c. 1050-1095), jarl of Southern Jutland from 1080, King of Denmark from 1086 ⚭ Ingegerd, daughter of King Harald III Hardrada of Norway (Fairhair dynasty);
Ingerid ⚭ around 1070 King Olaf III of Norway;
Eric I the Good (c. 1060, Slangerup, Denmark - 10 July 1103, Paphos, Cyprus), jarl of Zealand from 1080, King of Denmark from 1095 - for his descendants, see below
Svend Tronkræver († 1104);
Henrik Skadelår (c. 1090-4 June 1134).
Magnus II of Sweden († 1161), King of Sweden from 1160 ⚭ Brigida, daughter of King Harald IV Gille of Norway;
Canute († 12 March 1162), Duke of Southern Jutland from 1150, Duke of Jutland from 1157;
Buris (1130-1167), Duke of Southern Jutland from 1162 ⚭ around 1166 a daughter of Herman II, Count of Winzenburg.
Niels, killed on 25 June 1134, King of Denmark from 1104 ⚭ Margareta Fredkulla, a daughter of King Ingold the Elder of Sweden (Stenkil).
Magnus I (c. 1106-4 June 1134), Duke of Västergötland from 1125, King of Denmark from 1134 ⚭ Richeza, daughter of Bolesław III Wrymouth (Piast).
Canute V (c. 1129-9 August 1157), Duke of Jutland from 1147, co-ruler of Denmark from 1154 ⚭ in 1156 Helena, daughter of King Sverker I of Sweden (Sverker).
Saint Niels († 1180);
Valdemar († 18 July 1236 in Cîteaux Abbey), Bishop of Schleswig from 1182 to 1208, Archbishop of Bremen in 1192.
Björn (killed in 1049), earl in England;
Asbjörn († probably in 1086), jarl in Denmark.
Gytha Thorkelsdóttir ⚭ Godwin of Wessex (Godwin). One of their sons was King Harold II of England;
Eilaf (first mentioned in 1009), earl in England.
From Eric I the Good to Christopher I
Eric I the Good (c. 1060, Slangerup, Denmark - 10 July 1103, Paphos, Cyprus), jarl of Zealand from 1080, King of Denmark from 1095 - for his ancestors, see above
Saint Canute Lavard (12 March 1096-7 January 1131), King of Southern Jutland from 1115, King of the Wends from 1129 ⚭ Ingeborg, daughter of Mstislav I of Kiev (Rurikids);
Christina (c. 1118-1139) ⚭ King Magnus IV of Norway;
Valdemar I the Great (14 January 1131 - 12 May 1182), King of Denmark ⚭ in 1157 Sophia of Minsk († 5 May 1198), daughter of Volodar of Minsk;
(illegitimate) Christopher († 1173), Duke of Southern Jutland;
Sophia († 1208) ⚭ Siegfried III, Count of Orlamünde († 1206);
Canute VI (1163-12 November 1202), King of Denmark from 1182 ⚭ in 1177 Gertrude, daughter of Henry the Lion (Welfs);
Valdemar II (1170-28 March 1241), King of Denmark from 1202 ⚭ (I) in 1205 Dagmar, daughter of King Ottokar I of Bohemia (Přemyslids) ⚭ (II) in 1214 Berengaria, daughter of King Sancho I of Portugal (Capetian House of Burgundy);
(illegitimate) Niels, Count of Halland from 1216 - for his descendants, see Counts of Halland, extinct in 1314;
(illegitimate) Canute (1211-15 October 1260), Duke of Estonia from 1219, Duke of Blekinge from 1232, Duke of Lolland from before 1260 ⚭ Hedwig, a daughter of Swietopelk I, Duke of Pomerania - for his descendants, see Lords of Skarsholm, extinct before 1408;
(I) Valdemar the Young (c. 1209-28 November 1231), co-ruler of Denmark from 1215 ⚭ Eleanor, a daughter of King Afonso II of Portugal (Capetian House of Burgundy);
(II) Eric IV (c. 1216-9 August 1250), King of Denmark from 1241 ⚭ in 1239 Jutta, daughter of Albert I, Duke of Saxony (Ascanians).
Sophia (1241–1286) ⚭ Valdemar, King of Sweden (House of Folkung);
Ingeborg († 1287) ⚭ Magnus VI, King of Norway (Fairhair dynasty);
Jutta (1246–1286/95), mistress of Valdemar, King of Sweden and her brother-in-law, later abbess of St. Agnes;
Agnes (1249-after 1290), founding abbess of St. Agnes.
Sophia (1217-2 November 1247) ⚭ in 1230 John I, Margrave of Brandenburg († 3 April 1266) (Ascanians);
Abel (1218-29 June 1252), King of Denmark from 1250 ⚭ Matilda of Holstein († 1288) - for his descendants, see below
Christopher I (1219-29 May 1259), King of Denmark from 1252 ⚭ in 1248 Margaret of Sambiria, daughter of Duke Sambor II of Pomerania - for his descendants, see below
Ingeborg (1175-29 July 1236) ⚭ in 1193 King Philip II of France († 14 July 1223) (Capetians);
Helena (c. 1180-22 November 1233) ⚭ in 1202 Duke William of Lüneburg († 1213) (Welfs);
Richenza (1190-1220) ⚭ King Eric X of Sweden († 1216).
Harald Kesja (1080-1135), regent of Denmark from 1102 to 1103 ⚭ Ragnhild Magnusdotter, a daughter of King Magnus III of Norway;
Björn Ironside († 1134) ⚭ Catherine Ingesdotter, daughter of King Inge I of Sweden;
Christina († 1170) ⚭ King Eric IX of Sweden († 18 May 1160), King of Sweden from 1156 (House of Erik).
Olaf († c. 1143), Danish anti-king.
(illegitimate) Harald Skrænk, leader of a peasant revolt in Scania around 1182;
Ragnhild Eriksdatter ⚭ Hakon Sunnivasson;
Eric III (c. 1120-27 August 1146), King of Denmark from 1137 ⚭ in 1155 Liutgard, daughter of Rudolf I of Stade, Margrave of the Northern March and Count of Stade (Udonids);
(illegitimate) Magnus Eriksen, imprisoned in 1178.
Eric II (c. 1090-18 July 1137), King of Denmark from 1134 ⚭ Malmfred, daughter of Grand Prince Mstislav I of Kiev (Rurikids).
Sweyn III (c. 1125-23 October 1157), King of Zealand from 1147, King of Denmark from 1152 ⚭ Adela, a daughter of Conrad, Margrave of Meissen (Wettin).
Liutgard ⚭ Berthold I, Margrave of Istria and Carniola († 1188) (Andechs).
Dukes of Schleswig (Abelslægten)
Abel (1218-29 June 1252), King of Denmark from 1250 ⚭ Matilda of Holstein († 1288) - for his ancestors, see above
Valdemar III († 1257), Duke of Schleswig (or, as the Danes call it, Southern Jutland) from 1253;
Sophia (1240-after 1284) ⚭ Bernard I, Prince of Anhalt-Bernburg (c. 1218-1287) (Ascanians). Christian I of Denmark was their great-great-great-great-grandson, and the current Queen Margrethe II descends from Christian I;
Eric I († 27 May 1272), Duke of Schleswig from 1260 ⚭ Margaret, a daughter of Jaromar II, Prince of Rügen;
Valdemar IV († 1312), Duke of Schleswig from 1283 ⚭ Elisabeth, daughter of John I, Duke of Saxony (Ascanians);
Eric II (c. 1290-12 March 1325), Duke of Schleswig from 1312 ⚭ Adelaide, daughter of Henry I, Count of Holstein-Rendsburg (Schaumburg);
Valdemar V (1314–1364), Duke of Schleswig from 1325 to 1326 and from 1330 to 1364, King of Denmark as Valdemar III from 1326 to 1330 ⚭ Richardis, daughter of Gunzelin VI, Count of Schwerin;
Valdemar (c. 1338-1360);
Henry (c. 1342-August 1375), Duke of Schleswig from 1364.
Helvig († 1374) ⚭ King Valdemar IV of Denmark († 24 October 1375) - see below
(illegitimate) Valdemar Eriksen Sappi († 1398).
(illegitimate) Abel Valdemarsen - for his descendants, see the Rynd family, extinct in 1405.
Margaret († after 1313) ⚭ Helmold III, Count of Schwerin;
Eric Longbone (1272–1310), Lord of Langeland ⚭ Sophia, a daughter of Burchard VII, Burgrave of Magdeburg.
Abel (1252-2 April 1279), Lord of Langeland ⚭ Matilda, daughter of Gunzelin III, Count of Schwerin.
From Christopher I to Margaret I
Christopher I (1219-29 May 1259), King of Denmark from 1252 ⚭ in 1248 Margaret Sambiria, daughter of Duke Sambor II of Pomerania - for his ancestors, see above
Eric V "Klipping" (1249 - 22 November 1286), King of Denmark from 1259 ⚭ Agnes, a daughter of John I, Margrave of Brandenburg (Ascanians);
Eric VI Menved (1274-13 November 1319), King of Denmark from 1286 ⚭ Ingeborg Magnusdotter (1277-1319), daughter of King Magnus III of Sweden (House of Folkung);
Christopher II (29 September 1276-2 August 1332), King of Denmark from 1320 to 1326 and from 1329 to 1332 ⚭ Euphemia, daughter of Bogislaw IV, Duke of Pomerania (Griffins);
Margaret (1305–1340) ⚭ Louis V, Duke of Bavaria († 18 September 1361) (Wittelsbach);
Eric (1305-1331 or 1332), elected King of Denmark in 1321 ⚭ Elisabeth, daughter of Henry I, Count of Holstein-Rendsburg (Schaumburg);
Otto (c. 1310-after 1341), Duke of Lolland and Estonia;
Valdemar IV "Atterdag" (c. 1320-24 October 1375), King of Denmark from 1340 ⚭ Helvig, daughter of Eric II, Duke of Schleswig - see above
Christopher († 11 June 1363), Duke of Lolland from 1359;
Ingeborg (1 April 1347-before 16 June 1370) ⚭ Henry III, Duke of Mecklenburg († 24 April 1383). They were the grandparents of Eric of Pomerania (King of Norway as Eric III, King of Denmark as Eric VII and King of Sweden as Eric XIII);
Margaret I (March 1353-28 October 1412), Queen regnant of Denmark from 1375 to 1385 and from 1387 to 1396, Queen regnant of Norway from 1380 to 1385 and from 1387 to 1398, Queen regnant of Sweden from 1389 to 1396, co-founder of the Kalmar Union in 1397 ⚭ in 1363 Haakon Magnusson († 1380), King of Norway from 1355 as Haakon VI, King of Sweden from 1362 to 1364 as Håkan.
Olaf II (1370-23 August 1387), King of Denmark from 1376, King of Norway from 1380, King of Denmark from 1385.
(illegitimate) Erik Christoffersen Løvenbalk; his male line, the Løvenbalk family, died out after June 1598. Frederick VIII, Duke of Schleswig-Holstein, descended from him in the female line, and his granddaughter Helena Adelaide married into the Danish royal family.
Richenza († before 27 October 1318) ⚭ Nicholas II, Lord of Werle († 1316) (Mecklenburg). Christian I of Denmark was their great-great-great-grandson; the current Queen Margrethe II descends from Christian I;
Martha († 2 March 1304) ⚭ in 1298 Birger Magnusson († 31 May 1321), King of Sweden from 1290.
Matilda ⚭ Albert III, Margrave of Brandenburg-Salzwedel († 1300) (Ascanians);
Margaret (c. 1257-1306) ⚭ John II, Count of Holstein-Kiel († 1321) (Schaumburg).
Notes
Bibliography
Detlev Schwennicke: Europäische Stammtafeln, vol. II, 1984, table 98 ff.
See also
Dynasty of Olaf
Knýtlinga (Gorm) dynasty
House of Oldenburg
Other projects
Danish nobles
Henry Drucker, who died in March 1909, was a French chansonnier (songwriter) and playwright active from the late 1870s to the early 1900s.
Biography
Despite a substantial body of work performed over a period of more than thirty years, from 1877 to 1909, almost nothing is known about Henry Drucker except that he was of Alsatian origin and was a member of the Société des auteurs et compositeurs dramatiques from 1895 until his death.
He wrote the lyrics of more than four hundred songs from the late 1870s to the early 1900s, set to music by, among others, Gustave Goublier, Tony Rieffler, Léopold Gangloff, Paul Fauchey, Gaston Maquis, Lucien Collin and Henri Rosès.
He is also the author of the librettos of a dozen operettas and vaudevilles, such as Comme on fait son lit..., a one-act play performed at the Théâtre d'Application.
Career
as a songwriter
1879: Dans ma nacelle, barcarolle, music by Abel Queille
1879: Te souviens-tu ma belle, rêverie-barcarolle, music by Abel Queille
1880: Le Rhin, sung mazurka, music by Olivier Métra
1880: La Marguerite, sung mazurka, music by Olivier Métra
1880: Ballade arabe, music by Louis Gregh
1880: Fatma, reply to the Ballade arabe, music by Louis Gregh
1880: Voici les beaux jours, sung polka, music by Tony Rieffler
1880: A ton bras, sung polka, music by Tony Rieffler
1880: Seule, sung mazurka, music by Tony Rieffler
1881: Les bois reverdissent, song, music by Abel Queille
1881: C'est égal !, chansonnette, music by Abel Queille
1882: Le Bonnet de Marguerite, song, music by Abel Queille
1883: C'est le secret de Polichinelle, chansonnette, music by Abel Queille
1883: Nina, ma belle, barcarolle, music by Abel Queille
1883: Mignonne, donne-moi ta bouche fraîche, mélodie, music by Albert Petit
1884: C'est pas ma faute, j'suis grise !, chansonnette, music by Abel Queille
1886: Les blés sont fauchés, villanelle, music by Jules Javelot
1886: Cette fois, ça y est, chansonnette, with Albert Fontana, music by Lucien Collin
1886: Le Chemin du ciel, chansonnette, with Albert Fontana, music by Henri Chatau
1886: Zut !, chansonnette, with Albert Fontana, music by Tac-Coen
1886: Baisers, envolez-vous !, waltz chansonnette, with Albert Fontana, music by Germain Laurens
1886: La Fête de mon mari, chansonnette, with Albert Fontana, music by Félix Chaudoir
1886: Comment on se quitte, song, music by Tac-Coen. Honorable mention at the Éden-Théâtre song competition
1886: L'Honneur du soldat, patriotic song, with Albert Fontana, music by Tac-Coen
1888: La Bouquetière, chansonnette, music by Gaston Maquis
1888: Un baptême, chansonnette, music by Gaston Maquis
1888: Sous les toits, song, music by Gaston Maquis
1888: Baisers volés, romance, music by François Wohanka
1888: Dans les tambours, march, music by Gaston Maquis
1891: Salut aux hirondelles, music by Gaston Maquis
1891: Retour au nid, romance, music by Gaston Maquis
1891: Les Petites Marionnettes, with Alexandre Trébitsch, music by Gustave Goublier, a song created by Paula Brébion at La Scala.
1891: Je t'aime !, romance, music by Gustave Goublier
1891: Bonjour, petite Thérèse, idyll, music by Gustave Goublier
1892: C'était un rêve, romance, music by Gaston Maquis. Performed by Émile Mercadier in 1925 (Pathé disc no. 4760).
1892: Lettre à Mme ***, mélodie, music by Albert Corbin
1896: Adieu baisers (Nous n'irons plus au bois), with Auguste Ménard, music by Gustave Goublier
1901: Pourquoi m'aimer ?, music by Gustave Goublier
1907: A tes beaux yeux, slow waltz, music by Gustave Goublier
1908: L'Espiègle !, children's chansonnette, music by Gustave Goublier
1909: Le Clou, mazurka, with Auguste Ménard, music by Gustave Goublier
1909: Histoire d'omnibus, "itch" in 3 verses, with Auguste Ménard, music by Gustave Goublier
1909: Il faut en passer à la femme, with Raoul Benoit, music by Gustave Goublier.
as a playwright
1878: La Diva par amour, operetta in one act, with Armand Laffrique, music by Tony Rieffler, at the Alcazar d'hiver
1878: C'était pour rire !, operetta in one act, with Armand Laffrique, music by Tony Rieffler, at the Alcazar d'hiver (May)
1879: Les Deux Favorites, grand duo, with Armand Laffrique, music by Tony Rieffler, at La Scala (27 March)
1879: La Diva par amour, revival at La Scala of the one-act operetta premiered in February 1878 at the Alcazar d'hiver (7 June)
1880: Le Chien de la chanteuse, operetta in one act, with Armand Laffrique, music by Tony Rieffler, at La Scala (17 April)
1881: L'Île des Vierges, comédie-vaudeville in 3 acts, music by Lucien Collin, at the Théâtre des Folies-Marigny
1888: Nini Grand-Livre, naturalist comedy in one act, with Joseph Torin, at the Éden-Concert
1893: La Val-qui-pleure, opérette-bouffe in one act (a parody of Richard Wagner's Die Walküre), with Auguste Ménard, music by Émile Galle, at the Concert parisien
1894: Le Club des célibataires, vaudeville in one act, with Gaston Maquis, at the Théâtre de l'Alcazar (Marseille)
1894: Le Mariage de Lise, vaudeville in one act, with Auguste Ménard, music by Léopold Gangloff, at the Eldorado
1898: U. V. D. C. [Une Veine De Cocu], operetta in one act, with Léon Garnier, music by Gaston Maquis, at the Eldorado
1898: Comme on fait son lit..., comedy in one act, at the Théâtre d'Application.
Discography
1925: C'était un rêve, lyrics by Henry Drucker, music by Gaston Maquis (1892), romance performed by Émile Mercadier of the Eldorado, double-sided Pathé disc no. 1760.
Notes and references
External links
French chansonnier
French dramatist of the 19th century
Deaths in March 1909
Date of birth unknown (19th century)
\section{Introduction}
\label{intro}
The tau is the only lepton heavy enough ($m_{\tau}\sim1.8$ GeV) to decay into hadrons.
At the exclusive level, the hadronic partial width ($\sim65\%$) is the sum of the tau partial widths to strange ($\sim3\%$) and to non-strange ($\sim62\%$) hadronic final states, and provides an advantageous laboratory to investigate the non-perturbative regime of QCD under rather clean conditions, useful for understanding the hadronization of QCD currents, studying form factors and extracting resonance parameters.
While the non-strange decays are largely dominated by the $\pi^{-}\pi^{0}$ mode which, in turn, constitutes the main decay channel of the $\tau$ with an absolute branching ratio of $\sim25\%$, the strange hadronic final states are suppressed with respect to the non-strange ones mainly due to the following two reasons: $i)$ the mass of the strange quark is larger than the mass of the up and down quarks, thus yielding a phase-space suppression; $ii)$ strange decays are Cabibbo suppressed, since the $|V_{us}|$ element of the CKM matrix enters the transition instead of $|V_{ud}|$.
The dominant strangeness-changing $\tau$ decays are into $K\pi$ meson systems, which add up to $\sim42\%$ of the strange spectral function.
However, in order to increase the knowledge of the strange spectral function, the $\tau^{-}\to K^{-}\eta^{(\prime)}\nu_{\tau}$ decays are important.
In this letter, we provide a brief overview of the main results we have obtained in our series of dedicated analyses of two meson tau decays based on the framework of Resonance Chiral Theory supplemented by dispersion relations i.e. $\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}$ and $\tau^{-}\to K^{-}K_{S}\nu_{\tau}$ \cite{Gonzalez-Solis:2019iod}, $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\to K^{-}\eta^{(\prime)}\nu_{\tau}$ \cite{Escribano:2014joa,Escribano:2013bca}, and $\tau^{-}\to\pi^{-}\eta^{(\prime)}\nu_{\tau}$ \cite{Escribano:2016ntp}.
\section{Theoretical framework}
\label{sec-1}
Tau decays into two mesons proceed through the exchange of $W^{\pm}$ gauge bosons, which couple the tau and the generated neutrino to the quark-antiquark pair that subsequently hadronizes into a pair of mesons $P^{-}P^{0}$ (see Fig.\,\ref{fig-1} for a schematic representation).
\begin{figure}[h]
\centering
\includegraphics[width=4cm,clip]{taudecay}
\caption{Schematic picture of a tau decaying into two mesons.}
\label{fig-1}
\end{figure}
The corresponding amplitude can be expressed as an electroweak part times a hadronic matrix element
\begin{equation}
\mathcal{M}\left(\tau^{-}\to P^{-}P^{\prime 0}\nu_{\tau}\right)=\frac{G_{F}}{\sqrt{2}}\bar{u}(p_{\nu_{\tau}})\gamma^{\mu}(1-\gamma^{5})u(p_{\tau})\langle P^{-}P^{\prime 0}|d^{\prime}\gamma^{\mu}u|0\rangle\,,
\label{amplitude}
\end{equation}
where $d^{\prime}=V_{ud}^{*}\bar{d}+V_{us}^{*}\bar{s}$.
In Eq.\,(\ref{amplitude}), we have not considered the full gauge boson propagator, since the explored energy region ($\sqrt{s}<m_{\tau}$) is much lower than the $W^{\pm}$ mass ($M_{W^{\pm}}\sim80$ GeV), but rather its low-energy expansion, and used the well-known relation $G_{F}/\sqrt{2}=g^{2}/8M_{W}^{2}$.
The hadronic matrix element encodes the unknown QCD dynamics and it is given by
\begin{equation}
\langle P^{-}P^{\prime 0}|d^{\prime}\gamma^{\mu}u|0\rangle=\mathcal{C}_{P^{-}P^{\prime 0}}\Bigg\lbrace \left(p_{-}-p_{0}-\frac{\Delta_{P^{-}P^{\prime 0}}}{s}q\right)^{\mu}f_{+}^{P^{-}P^{\prime 0}}(s)+\frac{\Delta_{P^{-}P^{\prime 0}}}{s}q^{\mu}f_{0}^{P^{-}P^{\prime 0}}(s)\Bigg\rbrace\,,
\label{matrixelementff}
\end{equation}
where $\mathcal{C}_{P^{-}P^{\prime 0}}$ are Clebsch-Gordan coefficients, $p_{-}^{\mu}$ and $p_{0}^{\mu}$ are the momenta of the charged and neutral pseudoscalars, respectively, $q^{\mu}=(p_{-}+p_{0})^{\mu}$ is the momentum transfer and $s=q^{2}$.
In Eq.\,(\ref{matrixelementff}), $f_{0}^{P^{-}P^{\prime0}}(s)$ corresponds to the $S$-wave projection of the state $\langle P^{-}P^{\prime0}|$, while $f_{+}^{P^{-}P^{\prime0}}(s)$ is the $P$-wave component, and they are known as the scalar and vector form factors accordingly.
Notice that the scalar contribution is suppressed by the mass-squared difference $\Delta_{P^{-}P^{\prime 0}}=m_{P^{-}}^{2}-m_{P^{\prime 0}}^{2}$.
In terms of these form factors, the differential decay width reads
\begin{eqnarray} \label{spectral function}
& & \frac{d\Gamma\left(\tau^-\to P^{-}P^{\prime0}\nu_\tau\right)}{ds} = \frac{G_F^2M_\tau^3}{768\pi^3}S_{EW}|V_{{\rm{CKM}}}|^2\mathcal{C}_{P^{-}P^{\prime 0}}^{2}
\left(1-\frac{s}{M_\tau^2}\right)^2\nonumber\\
& & \left\lbrace\left(1+\frac{2s}{M_\tau^2}\right)\lambda_{P^{-}P^{\prime0}}^{3/2}(s)\big|f_{+}^{P^{-}P^{\prime0}}(s)\big|^2+\frac{3\Delta_{P^{-}P^{\prime0}}^2}{s^{2}}\lambda_{P^{-}P^{\prime0}}^{1/2}(s)\big|f_{0}^{P^{-}P^{\prime0}}(s)\big|^2\right\rbrace\,,
\end{eqnarray}
where $\lambda_{P^{-}P^{\prime0}}\equiv\lambda(s,m_{P^{-}}^{2},m_{P^{\prime0}}^{2})/s^{2}$ and $S_{\rm{EW}}$ is a short-distance electroweak correction.
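As a numerical cross-check of the kinematics entering the decay-width formula above, the K\"all\'en function $\lambda(a,b,c)=a^{2}+b^{2}+c^{2}-2(ab+bc+ca)$ vanishes at the two-particle threshold $s_{\rm th}=(m_{P^{-}}+m_{P^{\prime0}})^{2}$. A minimal sketch in plain Python (the masses are illustrative inputs, not part of the original analysis):

```python
def kallen(a, b, c):
    """Kallen triangle function lambda(a, b, c)."""
    return a * a + b * b + c * c - 2.0 * (a * b + b * c + c * a)

def phase_space_weights(s, m1, m2):
    """P-wave and S-wave kinematic weights of the decay width:
    lambda_{PP'}(s) = lambda(s, m1^2, m2^2) / s^2, raised to 3/2 and 1/2."""
    lam = kallen(s, m1 * m1, m2 * m2) / s**2
    return lam**1.5, lam**0.5  # (vector, scalar) weights

# Illustrative masses in GeV; both weights vanish at s_th = (m1 + m2)^2
m_pi, m_K = 0.13957, 0.49368
s_th = (m_pi + m_K) ** 2
```

Above threshold both weights are positive, and the scalar one is further suppressed by the $\Delta_{P^{-}P^{\prime0}}^{2}/s^{2}$ prefactor discussed in the text.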
Our initial approach to describing the required vector form factors assumes a Vector Meson Dominance form that includes both the real and imaginary parts of the unitary loop corrections, thus fulfilling analyticity and unitarity.
One can then extract its phase $\phi^{P^{-}P^{0}}_{\rm{input}}(s)$ and insert it into a dispersion relation.
The use of a thrice-subtracted dispersion relation
\begin{equation}
f_{+}^{P^{-}P^{\prime0}}(s)=\exp\left[\alpha_{1}s+\frac{\alpha_{2}}{2}s^{2}+\frac{s^{3}}{\pi}\int_{s_{\rm{th}}}^{\infty}ds^{\prime}\frac{\phi^{P^{-}P^{0}}_{\rm{input}}(s^{\prime})}{(s^{\prime})^{3}(s^{\prime}-s-i0)}\right]\,,
\label{FFthreesub}
\end{equation}
where $\alpha_{1,2}$ are two subtraction constants that can be related to chiral low-energy observables and $s_{\rm{th}}$ is the corresponding two-particle production threshold, is found to be an optimal choice that makes the fit less sensitive to the higher-energy region of the dispersive integral where the phase is less well-known.
In the isospin limit no scalar contributes to $\tau^{-}\to\pi^{-}\pi^{0}\nu_{\tau}$, while for the required $K\pi,K\eta^{(\prime)}$ scalar form factors, we use \cite{Jamin:2001zq}.
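To illustrate how the thrice-subtracted dispersion relation above is evaluated in practice, the sketch below computes the dispersive integral numerically for a toy input phase (a smooth function rising towards $\pi$, not the physical $\delta_{1}^{1}$) and illustrative subtraction constants; for $s$ below threshold no principal value is needed:

```python
import math

def disp_integral(s, phase, s_th, s_max=500.0, n=20000):
    """Thrice-subtracted dispersive integral
    (s^3/pi) * Int_{s_th}^{s_max} phase(s') / (s'^3 (s' - s)) ds',
    via a simple midpoint rule; valid for s below threshold, where
    the integrand is regular. s_max truncates the 1/s'^3-suppressed tail."""
    total = 0.0
    ds = (s_max - s_th) / n
    for i in range(n):
        sp = s_th + (i + 0.5) * ds
        total += phase(sp) / (sp**3 * (sp - s)) * ds
    return s**3 / math.pi * total

# Toy phase: rises smoothly from 0 at threshold towards pi (illustrative)
s_th = 4 * 0.13957**2  # two-pion threshold, GeV^2
phase = lambda sp: math.pi * (sp - s_th) / (sp - s_th + 0.5)

alpha1, alpha2 = 1.9, 4.3  # illustrative subtraction constants
s = 0.05  # GeV^2, below threshold
f_plus = math.exp(alpha1 * s + 0.5 * alpha2 * s**2 + disp_integral(s, phase, s_th))
```

The $s^{3}$ prefactor guarantees $f_{+}(0)=1$ up to the subtraction polynomial, and the $1/s^{\prime3}$ weight is what makes the result insensitive to the high-energy tail of the phase, as stated in the text.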
\subsection{The pion vector form factor and $\tau^{-}\to K^{-}K_{S}\nu_{\tau}$ decay}
\label{sec-2}
The pion vector form factor is a classic object in low-energy QCD since it provides a privileged laboratory to study the effects of $\pi\pi$ interactions under rather clean conditions.
In \cite{Gonzalez-Solis:2019iod}, we have exploited the synergy between Chiral Perturbation Theory and dispersion relations and provided a representation that uses for the phase required as input in Eq.\,(\ref{FFthreesub}):
\begin{equation}
\phi_{\rm{input}}^{\pi\pi}(s)=\left\{ \begin{array}{llll}
\delta_{1}^{1}(s)&&&4m_{\pi}^{2}\le s<1\,\rm{GeV}^{2}\,,\\[1ex]
\psi(s)&&& 1\,{\rm{GeV}}^{2}\le s<m_{\tau}^{2}\,.\\[1ex]
\psi_{\infty}(s)&&&m_{\tau}^{2}\le s\,.\end{array} \right.
\label{PhaseRegions}
\end{equation}
This phase contains the following remarkable features: $i)$ it fully exploits Watson's theorem, providing a model-independent description of the elastic region, i.e.~up to $\sim1$ GeV$^{2}$, through the use of the $\pi\pi$ scattering phase $\delta_{1}^{1}(s)$ \cite{GarciaMartin:2011cn}; $ii)$ for the region $m_{\tau}^{2}\leq s$, we smoothly guide the phase to $\pi$ at high energies, thus ensuring the correct $1/s$ fall-off; $iii)$ for the intermediate region $1\,{\rm{GeV}}^{2}\le s<m_{\tau}^{2}$, we use a parametrization that contains the physics of the inelastic regime up to $m_{\tau}^{2}$ by means of $\psi(s)=\arctan[{\rm{Im}}f_{+}^{\pi\pi}(s)|^{3\,\rm{res}}_{\rm{expo}}/{\rm{Re}}f_{+}^{\pi\pi}(s)|^{3\,\rm{res}}_{\rm{expo}}]$, where $f_{+}^{\pi\pi}(s)|^{3\,\rm{res}}_{\rm{expo}}$ is the Omn\`{e}s exponential representation of the form factor that reads (see Ref.\,\cite{Gonzalez-Solis:2019iod} for details)
\begin{eqnarray}
f_{+}^{\pi\pi}(s)|^{3\,\rm{res}}_{\rm{expo}}&=&\frac{M_{\rho}^{2}+s\left(\gamma e^{i\phi_{1}}+\delta e^{i\phi_{2}}\right)}{M_{\rho}^{2}-s-iM_{\rho}\Gamma_{\rho}(s)}\exp\Bigg\lbrace {\rm{Re}}\Bigg[-\frac{s}{96\pi^{2}F_{\pi}^{2}}\left(A_{\pi}(s)+\frac{1}{2}A_{K}(s)\right)\Bigg]\Bigg\rbrace\nonumber\\[2mm]
&&-\gamma\frac{s\,e^{i\phi_{1}}}{M_{\rho^{\prime}}^{2}-s-iM_{\rho^{\prime}}\Gamma_{\rho^{\prime}}(s)}\exp\Bigg\lbrace-\frac{s\Gamma_{\rho^{\prime}}(M_{\rho^{\prime}}^{2})}{\pi M_{\rho^{\prime}}^{3}\sigma_{\pi}^{3}(M_{\rho^{\prime}}^{2})}{\rm{Re}}A_{\pi}(s)\Bigg\rbrace\nonumber\\[2mm]
&&-\delta\frac{s\,e^{i\phi_{2}}}{M_{\rho^{\prime\prime}}^{2}-s-iM_{\rho^{\prime\prime}}\Gamma_{\rho^{\prime\prime}}(s)}\exp\Bigg\lbrace-\frac{s\Gamma_{\rho^{\prime\prime}}(M_{\rho^{\prime\prime}}^{2})}{\pi M_{\rho^{\prime\prime}}^{3}\sigma_{\pi}^{3}(M_{\rho^{\prime\prime}}^{2})}{\rm{Re}}A_{\pi}(s)\Bigg\rbrace\,.
\label{FFExpThreeRes}
\end{eqnarray}
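The three-region construction of the input phase can be mimicked with a simple continuous toy model. The functional forms below, in particular the $1/s$-type approach to $\pi$ above $m_{\tau}^{2}$, are illustrative assumptions and not the parametrization fitted in Ref.\,\cite{Gonzalez-Solis:2019iod}:

```python
import math

M_TAU2 = 1.77686**2   # GeV^2
S_TH = 4 * 0.13957**2  # two-pion threshold, GeV^2
S_MATCH = 1.0          # GeV^2, end of the elastic region

def delta11(s):
    """Stand-in for the elastic pi-pi P-wave phase (NOT the parametrization
    of Garcia-Martin et al.): rises through pi/2 near the rho mass."""
    return math.atan2(0.775 * 0.149, 0.775**2 - s)

def psi(s):
    """Toy inelastic phase, matched continuously at s = S_MATCH."""
    return delta11(S_MATCH) + 0.3 * (s - S_MATCH)

def phi_input(s):
    """Piecewise phase: elastic / inelastic / asymptotic regions."""
    if s < S_MATCH:
        return delta11(s)
    if s < M_TAU2:
        return psi(s)
    # guide the phase smoothly to pi, ensuring the 1/s fall-off
    return math.pi + (psi(M_TAU2) - math.pi) * M_TAU2 / s
```

The essential properties of the real construction are reproduced: continuity at both matching points and $\phi_{\rm input}(s)\to\pi$ as $s\to\infty$.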
Armed with this parametrization, and variants of it, we have analyzed the high-statistics Belle data \cite{Fujikawa:2008ma}, focusing our effort on improving the description of the energy region where the $\rho(1450)$ and $\rho(1700)$ come into play.
In Fig.\,\ref{fig-2} (left), we display the form factor modulus squared including the statistical fit uncertainty for our reference fit (red error band) and a conservative systematic uncertainty coming from the largest variations of central values with respect to our reference fit (gray error band).
\begin{figure}[h]
\centering
\includegraphics[width=6.5cm,clip]{NewFFComparisonSyst}\includegraphics[width=6.5cm,clip]{SpectrumKKprediction}
\caption{Belle measurement of the modulus squared of the pion vector form factor \cite{Fujikawa:2008ma} (left) and BaBar data \cite{BaBar:2018qry} for $\tau^{-}\to K^{-}K_{S}\nu_{\tau}$ (right) as compared to our fits. See Ref.\,\cite{Gonzalez-Solis:2019iod} for details.}
\label{fig-2}
\end{figure}
Our central results for the physical resonance mass and width of the three participating resonances are found to be \cite{Gonzalez-Solis:2019iod}
\begin{eqnarray}
& & M^{\rm{pole}}_{\rho}\,=\,760.6\pm0.8\,\,\rm{MeV}\,,\quad \Gamma^{\rm{pole}}_{\rho}\,=\,142.0\pm0.4\,\,\rm{MeV}\,,\nonumber\\[1mm]
& & M^{\rm{pole}}_{\rho^{\prime}}\,=\,1289\pm8^{+52}_{-143}\,\,\rm{MeV}\,,\quad \Gamma^{\rm{pole}}_{\rho^{\prime}}\,=\,540\pm16^{+151}_{-111}\,\,\rm{MeV}\,,\nonumber\\[1mm]
& & M^{\rm{pole}}_{\rho^{\prime\prime}}\,=\,1673\pm4^{+68}_{-125}\,\,\rm{MeV}\,,\quad \Gamma^{\rm{pole}}_{\rho^{\prime\prime}}\,=\,445\pm8^{+117}_{-49}\,\,\rm{MeV}\,,
\label{Polesfitpipi}
\end{eqnarray}
where the first error is statistical while the second is our estimated systematic uncertainty.
From our study, we conclude that the determination of the pole mass and width of the $\rho(1450)$ and $\rho(1700)$ is limited by theoretical errors that have usually been underestimated so far.
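The pole values quoted above are defined from the position of the form-factor pole in the complex $s$-plane through $\sqrt{s_{\rm pole}}=M^{\rm pole}-i\Gamma^{\rm pole}/2$. For a fixed-width Breit-Wigner denominator $M^{2}-s-iM\Gamma$ this relation can be sketched as follows (the dispersive fits use energy-dependent widths, so the rho-like numbers below are only illustrative):

```python
import cmath

def pole_parameters(M, Gamma):
    """Pole mass and width from sqrt(s_pole) = M_pole - i*Gamma_pole/2,
    with s_pole = M^2 - i*M*Gamma (constant-width Breit-Wigner)."""
    w = cmath.sqrt(complex(M * M, -M * Gamma))
    return w.real, -2.0 * w.imag

# Illustrative rho-like Breit-Wigner values in MeV
M_pole, G_pole = pole_parameters(775.0, 145.0)
```

With a constant width the pole mass comes out slightly above the Breit-Wigner mass; the shift below it reported for the physical $\rho(770)$ arises from the energy dependence of $\Gamma_{\rho}(s)$.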
The study of the $\tau^{-}\to K^{-}K_{S}\nu_{\tau}$ decay is of timely interest due to the recent measurement of its spectrum released by the BaBar Collaboration \cite{BaBar:2018qry}.
The $K^{-}K_{S}$ threshold opens around 1000 MeV, which is $\sim$100 MeV larger than $M_{\rho}+\Gamma_{\rho}$, a characteristic energy scale for the $\rho(770)$-dominance region.
This implies that this mode is not sensitive to the $\rho(770)$ peak, and consequently not useful to study its properties, but rather enhances its sensitivity to the properties of the heavier copies $\rho(1450)$ and $\rho(1700)$.
In \cite{Gonzalez-Solis:2019iod}, within a dispersive parametrization of the kaon vector form factor, we have performed different fits to the measured spectrum (see right plot of Fig.\,\ref{fig-2}) and determined the $\rho(1450)$ mass and width.
We have pointed out that higher-quality data on this channel would allow the $\rho(1450)$ and $\rho(1700)$ parameters to be extracted with improved precision from a combined analysis with the pion vector form factor data.
\subsection{Combined analysis of the decays $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\to K^{-}\eta\nu_{\tau}$}
\label{sec-4}
We analyze the experimental measurement of the invariant mass distribution of the decay $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ together with the spectrum of the $K^{-}\eta$ mode, both released by Belle \cite{Epifanov:2007rf,Inami:2008ar}.
The former has been studied in detail in \cite{Boito:2008fq,Boito:2010me}, improving the determination of the resonance parameters of both the $K^{*}(892)$ and its first radial excitation $K^{*}(1410)$, while the latter, with a threshold above the $K^{*}(892)$-dominance region, has been studied in \cite{Escribano:2013bca}, obtaining $K^{*}(1410)$ properties that are competitive with those from the $K_{S}\pi^{-}$ channel.
In \,\cite{Escribano:2014joa}, in a simultaneous study of the decay spectra of $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\to K^{-}\eta\nu_{\tau}$ within a dispersive representation of the required form factors, we have illustrated how the $K^{*}(1410)$ resonance parameters can be determined with improved precision as compared to previous studies.
We have also investigated possible isospin violations in the form factor slope parameters and claimed that making available the $K^{-}\pi^{0}$ decay spectrum \cite{Aubert:2007jh} would be extremely useful to get further insights.
\begin{figure}[h]
\centering
\includegraphics[width=9.5cm,clip]{spectra}
\caption{Belle $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ (red circles) and $\tau^{-}\to K^{-}\eta\nu_{\tau}$ (green squares) measurements as compared to our best results (solid black and blue curves, respectively) obtained in combined fits to both data sets.}
\label{fig-3}
\end{figure}
Our best fit results are compared to the measured Belle
distributions in Fig.\,\ref{fig-3}, where satisfactory agreement with data is seen for all data points.
The $K_{S}\pi^{-}$ decay channel is dominated by the $K^{*}(892)$ resonance peak followed by the contribution of the $K^{*}(1410)$ resonance, whose shoulder is visible on the second half of the spectrum.
The scalar form factor contribution is small although important to describe the data immediately above threshold.
There is no such clear peak structure for the $K\eta$ channel due to the interplay between both $K^{*}$ resonances.
The scalar form factor contribution is insignificant in this case.
With the current data, we succeed in improving the determination of the $K^{*}(1410)$ mass and width with the findings
\begin{equation}\label{K^*'}
M_{K^{*}(1410)} \,=\, \left(1304 \pm 17\right)\,\mathrm{MeV} \,, \quad
\Gamma_{K^{*}(1410)} \,=\,\left(171 \pm 62\right)\,\mathrm{MeV}\,.
\end{equation}
The $\tau^{-}\to K^{-}\eta^{\prime}\nu_{\tau}$ decay, in turn, is dominated by the scalar form factor, and we have obtained a branching ratio of $\sim1\times10^{-6}$ \cite{Escribano:2013bca}, well below the experimental upper bound.
\subsection{The second-class current $\tau^{-}\to\pi^{-}\eta^{(\prime)}\nu_{\tau}$ decays}
\label{sec-5}
The non-strange weak hadronic currents can be divided according to their $G$-parity: $i)$ first-class currents with quantum numbers $J^{PG}=0^{++},0^{--},1^{+-},1^{-+}$; $ii)$ second-class currents (SCC), which have $J^{PG}=0^{+-},0^{-+},1^{++},1^{--}$.
The former completely dominate weak interactions, since there has been no evidence of the latter in Nature so far.
We study the $\tau^{-}\to\pi^{-}\eta^{(\prime)}\nu_{\tau}$ decays, which belong to the SCC processes, i.e.~parity conservation implies that these transitions must proceed through the vector current, which has opposite $G$-parity to the $\pi^{-}\eta^{(\prime)}$ system.
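The classification can be made concrete: the $G$-parity of a multi-meson state is the product of the $G$-parities of its constituents, and a vector-current ($J^{P}=1^{-}$) transition into a state with $G=-1$ falls in the second class. A small sketch (the helper function and the short table of $G$-parities are illustrative, not from the paper; only $G$-parity eigenstates are listed):

```python
# G-parities of some self-conjugate (G-eigenstate) mesons
G_PARITY = {"pi": -1, "eta": +1, "eta'": +1, "rho": +1, "omega": -1}

def g_parity(state):
    """G-parity of a multi-meson system: product of the constituents'
    G-parities (kaons carry no definite G-parity and are excluded)."""
    g = 1
    for meson in state:
        g *= G_PARITY[meson]
    return g
```

For example, `g_parity(["pi", "pi"])` gives $+1$, matching the first-class $\rho$-like $1^{-+}$ current, while `g_parity(["pi", "eta"])` gives $-1$, which is why $\tau^{-}\to\pi^{-}\eta\nu_{\tau}$ through the vector current is second class.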
Our predictions \cite{Escribano:2016ntp} are displayed in
Fig.\,\ref{fig-4}, where we show the total decay rate distribution for $\tau^{-}\to\pi^{-}\eta\nu_{\tau}$ (left) and $\tau^{-}\to\pi^{-}\eta^{\prime}\nu_{\tau}$ (right).
The low-energy part of the $\pi\eta$ spectrum is dominated by the vector contribution associated to the $\rho(770)$ while effects of the $a_{0}(980)$ and $a_{0}(1450)$ scalar resonance contributions might show up and dominate the intermediate and high-energy part.
On the contrary, the vector contribution is suppressed in $\tau^{-}\to\pi^{-}\eta^{\prime}\nu_{\tau}$ because the $\pi^{-}\eta^{\prime}$ threshold lies well beyond the region of influence of the $\rho(770)$, so that this mode is dominated by the scalar form factor.
Our branching ratio predictions for $\pi^{-}\eta$ are found to be within the window $[0.36,2.12]\times10^{-5}$ respecting the current experimental upper limit, $7.3\times10^{-5}$ at $90\%$ CL, reported by Belle \cite{Hayasaka:2009zz}.
Regarding the branching ratio of the $\pi^{-}\eta^{\prime}$ mode, it might be one or two orders of magnitude smaller than that of the $\pi^{-}\eta$ channel.
\begin{figure}[h]
\centering
\includegraphics[width=6.2cm,clip]{distributionpieta} \includegraphics[width=6.3cm,clip]{distributionpietap}
\caption{Decay spectrum for $\tau^{-}\to\pi^{-}\eta\nu_{\tau}$ (left) and $\tau^{-}\to\pi^{-}\eta^{\prime}\nu_{\tau}$ (right).
See Ref.\,\cite{Escribano:2016ntp} for details.}
\label{fig-4}
\end{figure}
\section{Summary}
\label{summary}
In this letter, we have provided an overview of all possible semileptonic two-meson decay channels of the $\tau$ lepton.
These decays provide a privileged laboratory to study, under rather clean conditions, the energy region of two-meson form factors where resonances come up into play.
An ideal roadmap for describing them would require a model-independent approach demanding a full knowledge of QCD in both its perturbative and non-perturbative regimes, a knowledge not yet attained.
An alternative to such an enterprise is to pursue a synergy between formal theoretical calculations and experimental data.
In this respect, dispersion relations are a powerful tool to direct oneself towards a model-independent description of form factors.
By exploiting the synergy between dispersion relations and Chiral Perturbation Theory, we have carried out a dedicated study of the high-statistics Belle data of the pion vector form factor, assessing the role of the systematic uncertainties in the determination of the $\rho(1450)$ and $\rho(1700)$ parameters, and performed a first analysis of the $\tau^{-}\to K^{-}K_{S}\nu_{\tau}$ BaBar data.
We have also shown the potential of the combined analysis of $\tau^{-}\to K_{S}\pi^{-}\nu_{\tau}$ and $\tau^{-}\to K^{-}\eta\nu_{\tau}$ to extract the $K^{*}(1410)$ mass and width.
Finally, while for the decay $\tau^{-}\to\pi^{-}\eta\nu_{\tau}$ we find a total branching ratio in the range $[0.36,2.12]\times10^{-5}$, well within the reach of Belle-II, the $\pi^{-}\eta^{\prime}$ channel might be one or two orders of magnitude more suppressed.
\section*{Acknowledgements}
The author thanks the organizers of Phi-Psi 2019 for the very nice workshop we have enjoyed.
This work has been supported by the National Science Foundation (Grant No.\,PHY-1714253).
\section{Introduction}
\label{sec:intro}
With the advent of the Large Hadron Collider (LHC), the search for physics beyond the Standard Model (SM) will enter a new era. While the establishment of the Standard Model constitutes one of the major achievements in 20th century physics, particle physicists have long been aware of its limitations and have looked for clues as to the larger framework in which it is embedded. These limitations include the number of {\em a priori} unknown parameters (nineteen), the absence of any requirement of electric charge quantization, the lack of coupling unification at high scales, and the instability of the electroweak scale under radiative corrections. From the standpoint of cosmology, the SM also falls short, as it provides no explanation for the abundance of either the visible, baryonic matter of the universe or the cold dark matter. The recent observation of neutrino oscillations and the corresponding implication that neutrinos have small, non-vanishing masses also points to physics beyond the SM as originally written down by Glashow, Weinberg, and Salam \cite{SM}.
One of the leading candidates for the larger framework in which the SM lies is supersymmetry (SUSY). For over two decades, the attractive features of SUSY have inspired particle physicists to explore its theoretical basis and phenomenological implications with great vigor. These attractive features include its elegant mechanism for producing electroweak symmetry-breaking and stabilizing the electroweak scale; its generation of coupling unification at the grand unification scale; its introduction of a candidate for the cold dark matter (the lightest supersymmetric particle, or LSP); and its possibilities for explaining the abundance of baryonic matter. In addition, SUSY is a rather generic feature of superstring theories, so one might expect a low-energy remnant of string theory to exhibit features of supersymmetry. The presence of these elements that could resolve many (but not all) of the shortcomings of the SM has outweighed the costs of introducing SUSY, such as the additional large number of {\em a priori} unknown parameters, and has inspired a vast literature in particle physics during the past two decades.
It is hoped among SUSY enthusiasts that experiments at the LHC will finally uncover direct evidence for low-energy SUSY, and the variety of corresponding high-energy signatures has been discussed extensively elsewhere \cite{wang}. In this review, we focus on another frontier in the search for SUSY: the high-precision, low-energy frontier. This low-energy frontier, which lies at the intersection of particle physics with nuclear and atomic physics, has seen substantial recent advances in both experimental techniques and theoretical analysis, making precision low-energy studies a powerful probe of SUSY. Roughly speaking, the sensitivity required for these studies to probe supersymmetric effects is given by $\delta_{\rm SUSY}\sim (\alpha/\pi)(M/{\tilde m})^2$, where $M$ is an appropriate standard model mass scale and ${\tilde m}$ is a typical superpartner mass\footnote{As we discuss later in this review, exceptions occur.}. The prefactor of $\alpha/\pi$ appears because SUSY contributions typically first arise at one loop order and are of electroweak strength\footnote{The SUSY loop corrections scale as $\alpha/\pi$ rather than $\alpha/4\pi$
since the SU(2)$_L$ coupling $g_2$ that enters one loop amplitudes is
related to the electric charge via the weak mixing angle as
$g_2=e/\sin\theta_W$. The value of $\sin^2\theta_W\approx 1/4$ compensates
for the factor of four that would ordinarily appear in the denominator. }. A well-known illustration occurs in the case of the muon anomalous magnetic moment, $(g_\mu-2)$, where $M\sim m_\mu$ and where, for ${\tilde m}\sim M_W$, one has $\delta_{\rm SUSY}\sim 10^{-9}$. Both the precision of the latest experimental measurement of $(g_\mu-2)$ \cite{Bennett:2006fi} as well as the uncertainty in the theoretical, SM prediction, are at this level, making this observable an important means of accessing SUSY loop effects. Indeed, the $\sim 2\sigma$ deviation from the SM prediction reported by the E821 collaboration, gives the first tantalizing hints of low-energy SUSY.
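As a back-of-the-envelope check of these estimates, the scaling $\delta_{\rm SUSY}\sim(\alpha/\pi)(M/{\tilde m})^2$ can be evaluated numerically for the two cases quoted above; the mass values below are rough illustrative inputs, not a fit.

```python
# Back-of-envelope size of a one-loop SUSY correction,
# delta_SUSY ~ (alpha/pi) * (M / m_tilde)^2, for the two cases in the text.
import math

ALPHA = 1 / 137.036   # fine-structure constant (rough value)
M_MU = 0.10566        # muon mass [GeV]
M_W = 80.4            # W boson mass [GeV]

def delta_susy(m_scale, m_susy):
    """Estimate of a one-loop SUSY correction of electroweak strength."""
    return (ALPHA / math.pi) * (m_scale / m_susy) ** 2

# (g_mu - 2): M ~ m_mu and m_tilde ~ M_W gives ~1e-9
delta_gm2 = delta_susy(M_MU, M_W)
# weak decays / electroweak scattering: M ~ M_W gives ~1e-3
delta_weak = delta_susy(M_W, M_W)
```

With these inputs one finds $\delta_{\rm SUSY}\approx 4\times10^{-9}$ for $(g_\mu-2)$ and $\approx 2\times10^{-3}$ for weak decays, matching the orders of magnitude quoted above.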
While there has been considerable recent attention paid to $(g_\mu-2)$ for these reasons, the high energy physics community may have less appreciation for the analogous power of other precision, low-energy observables to provide a window on supersymmetry. In the case of weak decays or electroweak scattering, for example, one has $M\sim M_W$, making $\delta_{\rm SUSY}\sim 10^{-3}$ rather than the $10^{-9}$ in the case of $(g_\mu-2)$. As we discuss in detail throughout the remainder of this review, both the precision of low-energy weak decay and lepton scattering studies -- as well as that of the corresponding theoretical, SM predictions -- is now at the $10^{-3}$ level or better, making an analysis of their implications for SUSY a timely endeavor. As with the study of precision electroweak observables at the end of the last decade, the study of low-energy precision electroweak processes can provide important information that complements what we may learn from the LHC or a future, $e^+ e^-$ linear collider. Indeed, comparisons of the value of the top quark mass implied by precision data with the value determined by direct top quark observation at the Tevatron provided a significant test of the self-consistency of the SM at the level of one-loop radiative corrections and stand as a major success in the history of the SM. Given the level of experimental and theoretical precision now available for the low-energy studies discussed here, one can anticipate a similarly useful comparison of indirect and direct search information in the LHC era.
Moreover, there exist special cases -- such as searches for lepton flavor violation or permanent electric dipole moments -- where the SM predictions lie well below what will be achievable in the next generation of precision studies, where the expectations of SUSY effects lie well within the reach of these experiments, and where the physics reach of the low-energy measurements can significantly exceed what is accessible at the LHC or a linear collider.
In the remainder of this article, we describe the recent experimental and theoretical advances that have led to this situation and discuss the corresponding present and prospective implications of precision, low-energy measurements for SUSY. In doing so, we attempt to provide sufficient background material for readers from both the low-energy and collider communities to be able to appreciate these developments. We begin with a brief review of low-energy SUSY, and refer readers to the excellent, more extensive ``primer'' by S. Martin\cite{Martin:1997ns} for additional details. Because many of the SUSY effects discussed here arise at loop level, we also provide a brief review of renormalization as it is applied to the observables of interest here. The bulk of our subsequent discussion involves a review of low-energy charged current and neutral current experiments, searches for lepton flavor and number violation, tests of CP-violation and the corresponding implications for cosmology. Because members of the high energy community may not be so familiar with the phenomenology of these studies, we provide some background material while referring readers to recent, comprehensive studies of low-energy precision tests\cite{Erler:2004cx}. In addition, the information on SUSY obtained from high energy studies is important in analyzing the low-energy sector, so we also give brief summaries of the present implications of high energy experiments (for recent reviews, see {\em e.g.}, Ref.~\cite{wang,Heinemeyer:2004gx}). Finally, the reader will notice one significant, but intentional, omission from this review: a discussion of the present situation regarding $(g_\mu-2)$. Because the recent literature on this topic is so vast and because there exist useful, recent reviews (see, {\em e.g.}, Refs.~\cite{Hertzog:2006sc,Erler:2004cx,Czarnecki:2001pv}), we believe that a truncated discussion in this article would be redundant and would not do justice to this important measurement.
Thus, we refer the reader to the literature for a proper review of the muon anomalous magnetic moment, and concentrate on the other precision tests in the remainder of this article.
\section{Minimal Supersymmetric Extension of Standard Model}
\label{sec:susy}
\subsection{Introduction}
The Standard Model of elementary particle physics has been
confirmed to high precision by a wide array of experiments. The
strong, weak and electromagnetic interactions are described by ${\rm
SU(3)}_C\times{\rm SU(2)}_L\times{\rm U(1)}_Y$ gauge interactions. At
low energies, ${\rm SU(2)}_L\times{\rm U(1)}_Y$ is broken to ${\rm
U(1)}_{EM}$ symmetry by the Higgs mechanism, which generates masses
for the $W^{\pm}$ and $Z$ bosons, while keeping the photon massless.
For this purpose, a complex scalar ${\rm SU(2)}_L$ Higgs doublet
$H=(H^+, H^0)$ is introduced. The electroweak symmetry is broken
spontaneously when the neutral component $H^0$ gets a vacuum
expectation value (VEV): $\langle H^0 \rangle = v/\sqrt{2}$. While
three degrees of freedom for the Higgs doublet are eaten by $W^{\pm}$
and $Z$ (corresponding to their longitudinal degrees of freedom), one
physical Higgs boson, $h$, which is the real part of $H^0$,
remains in the spectrum.
For the Higgs potential $V(H)=m_H^2H^\dagger H + \lambda |H^\dagger H|^2$,
the mass of the physical Higgs
is related to the Higgs quartic coupling $\lambda$ and Higgs VEV
$v=246$ GeV: $m_{h}^2=2 \lambda v^2$. For $\lambda$ of order unity,
the Higgs mass is around the electroweak scale of a few hundred GeV.
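The tree-level relation $m_{h}^2=2\lambda v^2$ can be inverted to see what quartic coupling a given Higgs mass implies; a short numerical sketch (the 125 GeV input is simply an illustrative electroweak-scale value):

```python
# Tree-level relation m_h^2 = 2 * lambda * v^2, inverted to give the quartic
# coupling implied by a given Higgs mass (125 GeV is an illustrative input).
V_EW = 246.0  # Higgs vacuum expectation value [GeV]

def quartic_coupling(m_h):
    """Quartic coupling lambda from m_h^2 = 2 * lambda * v^2."""
    return m_h ** 2 / (2 * V_EW ** 2)

lam_125 = quartic_coupling(125.0)  # ~0.13, comfortably of order unity
```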
Being a fundamental scalar particle, the Higgs boson can receive large
corrections to its mass from quantum loop effects. Assuming the SM is
an effective theory valid below a cut-off scale $\Lambda_{\rm UV}$, the
Higgs mass, $m_h$, depends strongly on physics at the scale
$\Lambda_{\rm UV}$. For example, any SM fermion $f$ with Yukawa
interaction $(\lambda_f/\sqrt{2}) h \bar{f} f$
induces a one loop correction to
the squared mass of the Higgs.
The leading contribution to the mass of the physical Higgs
depends quadratically on $\Lambda_{\rm UV}$\cite{Drees:1996ca}:
\begin{equation}
\Delta {m_{h}^2}= \frac{|\lambda_f|^2}{16\pi^2}\left[ -2 \Lambda_{\rm
UV}^2+ 6 m_f^2\ln (\Lambda_{\rm UV}^2/m_f^2) + \ldots \right],
\label{eq:mh_f}
\end{equation}
where $\Lambda_{\rm UV}$ is the cutoff scale, which could be as
large as the Planck scale $M_{\rm pl}=(8\pi G_{\rm
Newton})^{-1/2}=2.4 \times 10^{18}$ GeV. A precise cancellation of
32 orders of magnitude between the tree level bare Higgs mass and the
radiative corrections is needed to obtain a physical Higgs mass around
electroweak scale. Such a high level of fine tuning is usually
referred to as the ``hierarchy problem'' \footnote{The hierarchy problem has two elements: the
large scale difference between
$M_{\rm pl}$ and $M_{\rm weak}$, and the need to cancel the radiative corrections to maintain a light Higgs scalar. The need for fine tuning to achieve this cancellation might be more accurately termed a \lq\lq naturalness problem".}. Finding a solution to the hierarchy problem
points to new physics beyond the SM, such as supersymmetry
\cite{Martin:1997ns,susy}, extra dimensions \cite{exD}, little Higgs \cite{lh},
composite Higgs \cite{ch}, Higgsless models\cite{hless}, etc.
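To get a feel for the size of the cancellation, one can evaluate the leading quadratic term of Eq.~(\ref{eq:mh_f}) with the cutoff at the Planck scale and compare it with an electroweak-scale Higgs mass squared; a rough numerical sketch (the $O(1)$ Yukawa and 125 GeV Higgs mass are illustrative assumptions):

```python
# Size of the quadratically divergent piece of Eq. (mh_f) with the cutoff at
# the (reduced) Planck scale, compared with an electroweak-scale Higgs mass
# squared; the O(1) Yukawa and 125 GeV Higgs mass are illustrative inputs.
import math

M_PL = 2.4e18    # reduced Planck mass [GeV]
LAMBDA_F = 1.0   # an O(1) Yukawa coupling
M_H = 125.0      # electroweak-scale Higgs mass [GeV]

delta_mh2 = abs(LAMBDA_F ** 2 / (16 * math.pi ** 2) * (-2 * M_PL ** 2))
orders_of_tuning = math.log10(delta_mh2 / M_H ** 2)  # ~30 orders of magnitude
```

The loop factor shaves a couple of orders off the naive $(M_{\rm pl}/m_h)^2\sim10^{32}$; either way, the required cancellation is enormous.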
Supersymmetry -- a symmetry under interchange of bosonic and
fermionic degrees of freedom -- is one of the most promising new
physics scenarios among various proposals. For each particle in a
supersymmetric theory, there exists a superpartner with spin differing
by a half unit. When SUSY is exact, the masses and the gauge quantum numbers
of superpartners are the same, and the couplings are related by the
symmetry. These features protect the Higgs mass from receiving the
problematic quadratic dependence on $\Lambda_{\rm UV}$ as these
contributions from fermionic and bosonic superpartners cancel.
For
example, the $\Lambda_{\rm UV}^2$ term in Eq.~(\ref{eq:mh_f}) from
fermion $f$ is cancelled precisely by the contribution from its
scalar partners\footnote{One
Dirac fermion $f$ has two complex scalar superpartners.
Eq.~(\ref{eq:mh_s}) assumes that both scalar superpartners have the
same mass $m_S$.} $S$ with mass $m_S$ and Higgs coupling $\lambda_S
|H|^2|S|^2$\cite{Drees:1996ca}:
\begin{equation}
\Delta {m_{h}^2}= \frac{\lambda_S}{16\pi^2}\left[ 2\Lambda_{\rm
UV}^2-2m_S^2 \ln (\Lambda_{\rm UV}^2/m_S^2) + \ldots \right]\ \ \ ,
\label{eq:mh_s}
\end{equation}
A cancellation of the quadratic divergence occurs when we employ the supersymmetric relation $\lambda_S=|\lambda_f|^2$ and add Eqs.~(\ref{eq:mh_f}) and (\ref{eq:mh_s}). The remaining $\Lambda_{\rm UV}$-dependence is only logarithmic, and
fine tuning is no longer necessary. In Eq.~(\ref{eq:mh_s}) the logarithmic term proportional to $m_S^2$ arises from the tadpole graph containing the full quartic scalar interaction.
An additional logarithmic contribution to the Higgs mass
arises from the diagram containing two insertions of the triscalar interaction
$\sqrt{2}\lambda_S v h |S|^2$ induced by the quartic interaction after electroweak symmetry breaking (EWSB).
\begin{equation}
\Delta {m_{h}^2}= \frac{\lambda_S}{16\pi^2}\left[
-4m_f^2\ln (\Lambda_{\rm UV}^2/m_S^2) + \ldots \right],
\label{eq:mh_s2}
\end{equation}
The explicit dependence on $m_f$ appears because
\begin{eqnarray}
\lambda_S |H|^2 |S|^2 & =& |\lambda_f|^2 |H|^2 |S|^2 = |\lambda_f|^2\left(\frac{v^2}{2}+\sqrt{2} v h +\frac{h^2}{2}+\cdots\right) |S|^2 \\
\lambda_f \sqrt{2}v & = & 2 m_f \ \ \
\end{eqnarray}
with the \lq\lq $+\cdots$'' denoting the other scalar doublet degrees of freedom.
Adding Eq.~(\ref{eq:mh_s}) and Eq.~(\ref{eq:mh_s2}), the total contribution from
a pair of complex scalars to the physical Higgs mass is
\begin{equation}
\Delta {m_{h}^2}= \frac{\lambda_S}{16\pi^2}\left[ 2\Lambda_{\rm
UV}^2-(2m_S^2+4 m_f^2)\ln (\Lambda_{\rm UV}^2/m_S^2) + \ldots \right].
\label{eq:mh_s3}
\end{equation}
It is obvious from Eqs.~(\ref{eq:mh_f}) and (\ref{eq:mh_s3})
that in the supersymmetric limit, when $m_f=m_S$, the logarithmic
contributions from scalars and fermion also cancel each other.
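This cancellation can be verified directly by adding Eqs.~(\ref{eq:mh_f}) and (\ref{eq:mh_s3}) numerically in the SUSY limit $\lambda_S=|\lambda_f|^2$, $m_S=m_f$; the input values below are arbitrary illustrative choices:

```python
# Numerical check that Eqs. (mh_f) and (mh_s3) cancel in the SUSY limit
# lambda_S = |lambda_f|^2 and m_S = m_f; the input values are arbitrary.
import math

def dm2_fermion(lam_f, cutoff, m_f):
    """Eq. (mh_f): one-loop fermion contribution to the Higgs mass squared."""
    return lam_f ** 2 / (16 * math.pi ** 2) * (
        -2 * cutoff ** 2 + 6 * m_f ** 2 * math.log(cutoff ** 2 / m_f ** 2))

def dm2_scalar(lam_s, cutoff, m_s, m_f):
    """Eq. (mh_s3): total contribution of the two complex scalar partners."""
    return lam_s / (16 * math.pi ** 2) * (
        2 * cutoff ** 2
        - (2 * m_s ** 2 + 4 * m_f ** 2) * math.log(cutoff ** 2 / m_s ** 2))

lam_f, cutoff, m = 0.95, 2.4e18, 173.0
total = dm2_fermion(lam_f, cutoff, m) + dm2_scalar(lam_f ** 2, cutoff, m, m)
# total vanishes (up to floating-point rounding) although each term is ~1e34
```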
Besides providing an elegant solution to the hierarchy problem, SUSY
has a variety of other attractive features. In the SM, the ${\rm
SU}(3)_C$, ${\rm SU}(2)_L$ and ${\rm U}(1)_Y$ gauge couplings come
close to unifying at high scales, providing a tantalizing hint of
grand unification. With TeV-scale masses for the superpartners, the SUSY
$\beta$ functions lead to coupling unification at a scale $M_{\rm
GUT}\sim 10^{16}$ GeV -- close to the Planck scale \cite{unification}. In addition,
electroweak symmetry breaking can be generated radiatively with SUSY,
due to the ${\cal O}(1)$ Yukawa coupling of top quarks and their
scalar superpartners. Finally, SUSY provides viable
particle physics solutions to problems in cosmology. In the minimal
supersymmetric extension of the SM, for example, the lightest
supersymmetric particle is a natural candidate for cold dark
matter (CDM) if it is protected from decays into SM particles by a
symmetry known as $R$-parity. Similarly, SUSY contributions to the
Higgs potential and the introduction of new CP-violating interactions
involving superpartners can make supersymmetric electroweak
baryogenesis a viable mechanism for explaining the baryon asymmetry of
the universe. In short, the theoretical and cosmological motivation
for considering low energy SUSY is strong.
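The near-unification of the couplings can be illustrated with the standard one-loop running, using the MSSM coefficients $b=(33/5,\,1,\,-3)$ (GUT-normalized hypercharge) and rough input values at $M_Z$; this is a sketch with all superpartner thresholds at $M_Z$ and no two-loop effects, not a precision analysis:

```python
# One-loop running of the inverse gauge couplings,
#   alpha_i^{-1}(mu) = alpha_i^{-1}(M_Z) - b_i/(2 pi) * ln(mu / M_Z),
# with MSSM coefficients b = (33/5, 1, -3) (GUT-normalized hypercharge).
# The M_Z inputs are rough illustrative values, not a precision fit.
import math

M_Z = 91.19                        # [GeV]
ALPHA_INV_MZ = [59.0, 29.6, 8.45]  # alpha_1^-1, alpha_2^-1, alpha_3^-1 at M_Z
B_MSSM = [33.0 / 5.0, 1.0, -3.0]

def alpha_inv(i, mu):
    """Inverse coupling of gauge group i at scale mu [GeV], one loop."""
    return ALPHA_INV_MZ[i] - B_MSSM[i] / (2 * math.pi) * math.log(mu / M_Z)

# scale at which alpha_1 and alpha_2 meet
t_meet = (2 * math.pi * (ALPHA_INV_MZ[0] - ALPHA_INV_MZ[1])
          / (B_MSSM[0] - B_MSSM[1]))
m_gut = M_Z * math.exp(t_meet)                           # ~2e16 GeV
spread = abs(alpha_inv(2, m_gut) - alpha_inv(0, m_gut))  # <1% mismatch here
```

With these inputs, $\alpha_1$ and $\alpha_2$ meet near $2\times10^{16}$ GeV, and $\alpha_3$ arrives at nearly the same inverse-coupling value there, in line with the text.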
\subsection{The Minimal Supersymmetric Extension of Standard Model}
The simplest supersymmetric extension of the SM -- the Minimal Supersymmetric Standard Model (MSSM) --
provides a useful framework for discussing the phenomenology of low
energy SUSY. Although there is considerable interest in extensions
of the MSSM -- particularly in the neutrino and Higgs sectors -- we
will concentrate on the MSSM throughout this article. In the MSSM,
each SM particle is accompanied by a superpartner with the same gauge
quantum numbers as given in Table~\ref{table:MSSMmatter} for the
matter fields and in Table~\ref{table:MSSMgauge} for the gauge
sector. The symbols for the SM superpartners are the same as the
corresponding SM particles, but with a tilde on top. For each
quark and lepton, its spin 0 superpartner is called a {\it squark}
and {\it slepton}, respectively. The fermionic superpartner of each
Higgs boson is called a {\it Higgsino}. Note that in the MSSM, the introduction
of two Higgs doublets with opposite hypercharge -- $H_u$ and $H_d$ --
is dictated by the requirement of anomaly cancellation among
fermionic Higgsinos. In addition, the Yukawa interactions in
supersymmetric models are derived from the superpotential, which must
be holomorphic in order to be supersymmetric. In contrast to the SM,
where the same Higgs gives mass to both the up- and down-type quarks,
in the MSSM the up- and down-type quarks receive mass from the VEVs of
the neutral $H_u$ and $H_d$, respectively. Finally, the spin-$1/2$ superpartners of
the ${\rm SU}(3)_C$, ${\rm SU}(2)_L$ and ${\rm U}(1)_Y$ gauge bosons
are called the {\it gluino, Wino} and {\it Bino}, respectively.
\begin{table}
\begin{tabular}{c|ccc|cc|cc}
\hline &\multicolumn{3}{c|}{quark sector}& \multicolumn{2}{c|}{lepton
sector}& \multicolumn{2}{c}{Higgs sector} \\ \hline
&$Q$&$\bar{u}$&$\bar{d}$&$L$&$\bar{e}$&$H_u$&$H_d$\\ ${\rm SU(3)}_c$,
${\rm SU(2)}_L$, ${\rm U(1)}_Y$
&(3,2,$\frac{1}{6})$&($\bar{3}$,1,$-\frac{2}{3}$)&
($\bar{3}$,1,$\frac{1}{3}$)&(1,2,$-\frac{1}{2}$)&
(1,1,$1$)&(1,2,$\frac{1}{2}$)&(1,2,$-\frac{1}{2}$) \\ \hline spin 0&
$(\tilde{u}_L, \tilde{d}_L)$&$\tilde{u}_R^*$&$\tilde{d}_R^*$
&$(\tilde{\nu}, \tilde{e}_L)$&$\tilde{e}_R^*$& $(H_u^+,
H_u^0)$&$(H_d^0, H_d^-)$ \\ spin $1/2$ &$(u_L,
d_L)$&$u_R^\dagger$&$d_R^\dagger$ &$(\nu,
e_L)$&$e_R^\dagger$&$(\tilde{H}_u^+, \tilde{H}_u^0)$& $(\tilde{H}_d^0,
\tilde{H}_d^-)$ \\ \hline
\end{tabular}
\caption{Field content for the quark, lepton and Higgs sectors of the MSSM.}
\label{table:MSSMmatter}
\end{table}
\begin{table}
\begin{tabular}{c|ccc}
\hline &\multicolumn{3}{c}{gauge sector}\\ \hline ${\rm SU(3)}_c$,
${\rm SU(2)}_L$, ${\rm U(1)}_Y$ &(8,1,0)&(1,3,0)&(1,1,0)\\ \hline spin
$1/2$ & $\tilde{g}$&$\tilde{W}^{\pm},\tilde{W}^0$& $\tilde{B}^0$\\
spin 1&$g$&$W^{\pm}, W^0$&$B^0$\\ \hline
\end{tabular}
\caption{Field content for the gauge sector of the MSSM.}
\label{table:MSSMgauge}
\end{table}
The Lagrangian of MSSM can be written as
\begin{equation}
{\cal L}={\cal L}_{\rm gauge} + {\cal L}_{\rm chiral}-
\sqrt{2}g[(\phi^*T^a\psi)\lambda^a + \lambda^{\dagger
a}(\psi^{\dagger}T^a\phi)] -\frac{1}{2}\sum_{i}
g^2_i(\phi^*T^a\phi)^2,
\end{equation}
where $i$ runs over the ${\rm SU(3)}_C$, ${\rm SU(2)}_L$ and $ {\rm
U(1)}_Y$ gauge groups; $\phi$ denotes a spin-$0$ complex scalar field
and $\psi$ is the corresponding fermionic superpartner; $\lambda^a$
is the gaugino field for ${\rm SU(3)}_C$, ${\rm SU(2)}_L$ and ${\rm
U(1)}_Y$, with $g_i$ being the corresponding gauge coupling and $T^a$
is the hermitian matrix for the gauge group in the fundamental
representation. The Lagrangian for the gauge fields ${\cal L}_{\rm
gauge}$ contains the kinetic term for gauge bosons and two-component
gaugino spinors $\lambda^a$:
\begin{equation}
{\cal L}_{\rm gauge}=-\frac{1}{4}F_{\mu\nu}^aF^{\mu \nu a}-
i\lambda^{a\dagger}\bar{\sigma}^{\mu}D_{\mu}\lambda^a\ \ \ ,
\end{equation}
where the metric is $\eta^{\mu\nu}={\rm diag}(-1,1,1,1)$,
${\bar\sigma}^\mu = (-1, {\vec\sigma})$, and
$D_\mu$ is the gauge covariant derivative\footnote{Here, we have followed the conventions of Ref.~\cite{Martin:1997ns}.}. The Lagrangian for the
matter fields ${\cal L}_{\rm chiral}$ contains kinetic term and
interactions:
\begin{equation}
{\cal L}_{\rm chiral}=-D^{\mu}\phi^*D_{\mu}\phi-
i\psi^{\dagger}\bar{\sigma}^\mu D_{\mu}\psi +{\cal L}_{\rm int},
\end{equation}
where $\psi$ is a two component spinor for either left- or
right-handed fermions and ${\cal L}_{\rm int}$ can be obtained from
the superpotential $W$
\begin{equation}
W_{\rm MSSM}=\bar{u}{\bf y_u}QH_u - \bar{d}{\bf y_d}QH_d -
\bar{e}{\bf y_e}LH_d + \mu H_u H_d.
\label{eq:MSSMsuperpotential}
\end{equation}
using
\begin{equation}
{\cal L}_{\rm int}=(\partial^2 W / \partial \phi_i \partial \phi_j)\psi_i \psi_j
+(\partial W / \partial \phi_i)(\partial W / \partial \phi_i)^*.
\label{eq:lag}
\end{equation}
The first term in Eq.~(\ref{eq:lag}) gives rise to the usual Yukawa
coupling [from the first three terms in
Eq.~(\ref{eq:MSSMsuperpotential})], and the Higgsino mass [from the
last term in Eq.~(\ref{eq:MSSMsuperpotential})]. The second term in
Eq.~(\ref{eq:lag}) gives rise to all the cubic and quartic scalar
interactions.
The general MSSM superpotential also includes baryon and lepton
number violating interactions:
\begin{eqnarray}
W_{\Delta L=1}&=&\frac{1}{2}\lambda_{ijk} L_i L_j\bar{e}_k +
\lambda^{\prime}_{ijk}L_iQ_j\bar{d}_k + \mu^{\prime}_i L_i H_u,
\label{eq:RPVL}\\
W_{\Delta B=1}&=&\frac{1}{2}\lambda_{ijk}^{\prime\prime} \bar{u}_i \bar{d}_j
\bar{d}_k.
\label{eq:RPVB}
\end{eqnarray}
The simultaneous presence of non-vanishing $\lambda^\prime$ and
$\lambda^{\prime\prime}$ couplings allows for rapid proton decay that
conflicts with present bounds on the proton lifetime. One way to
eliminate such terms is to introduce a new symmetry called
$R$-parity, defined by conservation of the quantum number
\begin{equation}
P_R=(-1)^{3(B-L)+2s},
\end{equation}
where $s$ is the spin of the particle. All SM particles have
$P_R=+1$ while all the superpartners have $P_R=-1$. If $R$-parity is
an exact symmetry, then all the terms appearing in
Eqs.~(\ref{eq:RPVL}) and (\ref{eq:RPVB}) are forbidden and no
dangerous proton decay can occur via these interactions.
There are two important phenomenological consequences if $R$-parity is
exactly conserved:
\begin{itemize}
\item{The lightest supersymmetric particle is absolutely stable.}
\item{SM particles are coupled to even numbers of superpartners
(usually two).}
\end{itemize}
If the LSP is colorless and charge neutral, it can be a viable candidate
for the cold dark matter.
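The $R$-parity assignments quoted here follow directly from the definition; a small sketch that evaluates $P_R$ for a few representative states:

```python
# R-parity P_R = (-1)^(3(B - L) + 2s) for a few representative states; exact
# rational arithmetic keeps the exponent an integer, as it must be.
from fractions import Fraction

def r_parity(B, L, s):
    """R-parity of a state with baryon number B, lepton number L, spin s."""
    exponent = 3 * (Fraction(B) - Fraction(L)) + 2 * Fraction(s)
    assert exponent.denominator == 1  # integer for physical states
    return (-1) ** (int(exponent) % 2)

p_quark = r_parity('1/3', 0, '1/2')    # SM quark          -> +1
p_squark = r_parity('1/3', 0, 0)       # squark            -> -1
p_lepton = r_parity(0, 1, '1/2')       # SM lepton         -> +1
p_neutralino = r_parity(0, 0, '1/2')   # gaugino/Higgsino  -> -1
```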
$R$-parity conservation also implies that sparticles are produced in
pairs in collider experiments and that each sparticle other than the LSP
eventually decays into final states containing odd numbers of LSPs.
Moreover, for low-energy processes involving only SM particles in the
initial and final states -- such as those of interest in this article
-- supersymmetric contributions appear only at loop-level ({\em
e.g.}, virtual superpartners are pair produced). However, one may
relax the constraints of $R$-parity conservation while preserving
proton stability via, {\em e.g.}, forbidding baryon number violating
terms in Eq.~(\ref{eq:RPVB}). In this case, the LSP is no longer
stable and tree level SUSY contributions to low energy processes
appear through $R$-parity violating interactions. In what follows, we
will consider the implications of both $R$-parity conserving and
$R$-parity violating (RPV) supersymmetry.
\subsection{Soft SUSY Breaking}
If supersymmetry is exact, superpartners have the same mass as the
corresponding SM particles. However, supersymmetry must be broken in
nature because superpartners have not been experimentally observed at
energies where they could be pair produced if they are degenerate with
SM particles. In order to retain the exact cancellation of quadratic
$\Lambda_{\rm UV}$ dependence of the Higgs mass corrections, all the
SUSY breaking couplings must be \lq\lq soft'' (of positive mass
dimension). After adding the fermion and scalar contributions of Eqs.~(\ref{eq:mh_f}) and (\ref{eq:mh_s}), the remaining logarithmic correction to the Higgs mass
is proportional to the soft SUSY breaking masses\footnote{There
are additional logarithmic contributions proportional to the square of the
triscalar coupling, $a_f$, defined below\cite{Drees:1996ca}.}:
\begin{equation}
\Delta {m_{h}^2}= -\frac{\lambda_S}{8\pi^2}\left[\ \delta m_S^2\ln
(\Lambda_{\rm UV}^2/m_S^2) + \ldots \right]\ \ \ ,
\label{eq:mhsoft}
\end{equation}
where we have taken $m_S^2=m_f^2+\delta m_S^2$. Therefore, the soft
SUSY breaking mass parameters ({\em e.g.}, $\delta m_S^2$)
should be below a few TeV to avoid reintroduction of the
naturalness problem. Throughout this work, we will refer to this scale of SUSY-breaking mass parameters as ${\tilde m}$.
A brief description of soft SUSY breaking
parameters, SUSY particle mass spectra and interactions is given
below. For a more detailed review of MSSM and related phenomenology,
see Refs.~\cite{Martin:1997ns, susy}.
In the MSSM, the Lagrangian for the soft SUSY breaking terms is
\begin{eqnarray}
{\cal L}_{\rm
soft}&=&-\frac{1}{2}(M_3 {\tilde{g}}\tilde{g}+M_2 {\tilde{W}}\tilde{W}
+M_1 {\tilde{B}}\tilde{B})+c.c. \nonumber \\
&&-(\tilde{\bar{u}}{\bf a_u}\tilde{Q}H_u-\tilde{\bar{d}}{\bf
a_d}\tilde{Q}H_d -\tilde{\bar{e}}{\bf a_e}\tilde{L}H_d)+c.c. \nonumber
\\ &&-\tilde{Q}^\dagger{\bf m_Q^2}\tilde{Q} -\tilde{L}^\dagger{\bf
m_L^2}\tilde{L} -\tilde{\bar{u}}{\bf
m_{\bar{u}}^2}\tilde{\bar{u}}^\dagger -\tilde{\bar{d}}{\bf
m_{\bar{d}}^2}\tilde{\bar{d}}^\dagger -\tilde{\bar{e}}{\bf
m_{\bar{e}}^2}\tilde{\bar{e}}^\dagger
-m_{H_u}^2H_u^*H_u-m_{H_d}^2H_d^*H_d
\nonumber \\
&&-(bH_uH_d+c.c.)
\label{eq:soft}
\end{eqnarray}
The first line gives the gaugino mass $M_i$, $i=1,2,3$ for ${\rm
U}(1)_Y$, ${\rm SU}(2)_L$ and ${\rm SU}(3)_C$ gauginos, respectively; the boldfaced quantities indicate matrices in flavor space.
The second line gives the trilinear ``$A$-term'' that couples Higgs
scalars with left- and right- squarks and sleptons. The third line
gives the scalar mass $m_{\tilde{q}_{L,R}}^2$,
$m_{\tilde{l}_{L,R}}^2$, and $m_{H_{u,d}}^2$ for squarks, sleptons
and Higgs scalars, respectively. Finally, the last line is the
bilinear $b$-term, which couples up- and down-type Higgses. In
principle, one may also include RPV soft interactions that correspond
to the terms in the superpotentials $W_{\Delta L=1}$ and $W_{\Delta
B=1}$. However, pure scalar RPV interactions are generally not
relevant to the low-energy observables discussed here, so we will not
include them.
The trilinear $A$-terms and the soft SUSY breaking squark and slepton
masses are in general non-diagonal in the flavor basis, a feature that
introduces flavor-changing-neutral-current (FCNC) effects beyond
those that are GIM-suppressed in the SM. Moreover, after performing
an appropriate set of field redefinitions, ${\cal L}_{\rm soft}$ --
together with the $\mu$-term in the superpotential -- includes 40
CP-violating phases beyond those of the SM (for a useful discussion, see, {\em e.g.},
Ref.~\cite{Dimopoulos:1995ju}). In contrast to the effects
of the CP-violating phase in the Cabibbo-Kobayashi-Maskawa (CKM)
matrix, the effects of these new phases are not suppressed by the
Jarlskog invariant\cite{Jarlskog:1985ht} and light quark Yukawa couplings. Thus, the
interactions in ${\cal L}_{\rm soft}$ can lead to unsuppressed FCNC
and CP-violating effects at low energy. On the other hand, both FCNC
and CP violation have been tightly constrained by experiment.
Attempts to reconcile these two phenomenological implications of
${\cal L}_{\rm soft}$ with experimental bounds on FCNCs and
CP-violation are known as the \lq\lq SUSY flavor'' and \lq\lq SUSY CP''
problems, respectively.
A detailed discussion of the SUSY flavor and CP problems appears in
Section \ref{sec:cpv}. However, for purposes of illustration, we consider one
approach to the flavor problem in which it is assumed that ${\bf
m_Q^2}$, ${\bf m_{\bar{u}}^2}$, ${\bf m_{\bar{d}}^2}$, ${\bf m_L^2}$
and ${\bf m_{\bar{e}}^2}$ are diagonal in flavor basis and that ${\bf
a_u}$, ${\bf a_d}$, and ${\bf a_e}$ are proportional to the
corresponding Yukawa matrices, ${\bf y_{u,d,e}}$. The conventional
parameterization thus gives -- after diagonalization of the Yukawa
matrices --
\begin{equation}
\label{eq:triscalaryukawa}
{\bf a_u}=A_u\left(
\begin{array}{ccc}
y_u&&\\ &y_c&\\ &&y_t
\end{array}
\right),\ \ \ {\bf a_d}=A_d\left(
\begin{array}{ccc}
y_d&&\\ &y_s&\\ &&y_b
\end{array}
\right),\ \ \ {\bf a_e}=A_e\left(
\begin{array}{ccc}
y_e&&\\ &y_\mu&\\ &&y_\tau
\end{array}
\right).
\end{equation}
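The phenomenological point of this alignment ansatz -- that the $A$-terms introduce no new flavor mixing -- can be made concrete with a minimal sketch; the numerical Yukawas and the universal $A_u$ value are illustrative assumptions:

```python
# Alignment ansatz of Eq. (triscalaryukawa): each a-matrix is a universal
# constant A_f times the diagonalized Yukawa matrix, so it introduces no new
# flavor mixing. The Yukawa values and A_u = 500 GeV are illustrative only.
def scaled_diag(A, yukawas):
    """Triscalar matrix a_f = A_f * diag(y_1, y_2, y_3) as a nested list."""
    n = len(yukawas)
    return [[A * yukawas[i] if i == j else 0.0 for j in range(n)]
            for i in range(n)]

a_u = scaled_diag(500.0, [1.3e-5, 7.3e-3, 0.94])  # rough (y_u, y_c, y_t)
off_diag = sum(abs(a_u[i][j]) for i in range(3) for j in range(3) if i != j)
# off_diag == 0: no flavor-changing entries survive in the aligned limit
```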
Specific SUSY breaking scenarios that could solve the FCNC and CP-violation problems
introduced by the soft SUSY breaking terms have been studied in the literature.
If SUSY breaking is mediated from an unseen, high energy ``hidden sector'' to the visible weak scale sector (superparticles
in MSSM) via gravity, it is called Gravity Mediated SUSY Breaking (SUGRA) \cite{sugra}.
If SUSY breaking is mediated from the hidden sector to the visible sector via gauge interactions,
it is called Gauge Mediated SUSY Breaking (GMSB) \cite{gmsb}. Recently, Anomaly Mediated SUSY breaking (AMSB) \cite{amsb} and gaugino Mediated SUSY Breaking scenarios~\cite{gauginomsb} have also been considered.
These models relate the large number of parameters in ${\cal
L}_{\rm soft}$ to a few parameters associated with SUSY-breaking physics at
high scales. We will occasionally refer to these model-dependent relations
throughout this review.
\subsection{Superparticle Spectrum}
The superpartner mass spectrum emerges after diagonalization of the
relevant mass matrices. For squarks and sleptons, the mass matrices
contain three components. In the $(\tilde{f}_L, \tilde{f}_R)$ basis,
there are mass matrices ${\bf M^2_{LL}}$ for the LH fermion
superpartners, $\tilde{f}_L$; ${\bf M^2_{RR}}$ for the $\tilde{f}_R$;
and matrices ${\bf M^2_{LR}}$ that mix the two. The $LR$ mixing
matrices arise only after electroweak symmetry breaking, and to the
extent that the triscalar couplings ${\bf a_f}$ are proportional to
the Yukawa couplings as in Eq. (\ref{eq:triscalaryukawa}), one expects
the effects of this mixing to be relatively small except for the third
generation sfermions. In Section \ref{sec:cc} we discuss low-energy
tests of this expectation. In flavor space, each of these matrices is
$6\times 6$ ($3\times 3$ in the case of the sneutrino). Since an extensive
discussion of the flavor problem appears in Section
\ref{sec:cpv}, we will assume
momentarily that the ${\bf M_{AB}^2}$ ($A,B=L,R$) are flavor diagonal
for purposes of illustration. In general, one has
\begin{equation}
{\bf{M_{LL}^2}} = {\bf m_Q^2}+ {\bf m_q^2 }+{\bf \Delta_f}
\end{equation}
\begin{equation}
{\bf{M_{RR}^2}} = {\bf m_{\bar f}^2}+ {\bf m_q^2 }+{\bf
\bar\Delta_f}
\end{equation}
with
\begin{equation}
{\bf \Delta_f} = \left(I^f_3-Q_f\sin^2\theta_W\right)\ \cos 2\beta
M_Z^2
\end{equation}
\begin{equation}
{\bf \bar\Delta_f} = Q_f\sin^2\theta_W \ \cos 2\beta M_Z^2
\end{equation}
and
\begin{equation}
{\bf M_{LR}^2}={\bf M_{RL}^2} =
\begin{cases}
v\left[{\bf a_f} \sin\beta -\mu {\bf y_f} \cos\beta\right]\ , &
{\tilde u}-{\rm type\ sfermion}\\ v\left[{\bf a_f} \cos\beta -\mu {\bf
y_f} \sin\beta\right]\ , & {\tilde d}-{\rm type\ sfermion}
\end{cases}\ \ \ .
\end{equation}
Here ${\bf m_q^2}$ is the mass matrix for the corresponding fermion species,
$I_3^f$ and $Q_f$ are the third component of weak isospin and the electric charge of the
fermion, respectively, and $\tan\beta$ is the ratio of the neutral Higgs vevs
$\tan\beta=\langle H_u^0 \rangle / \langle H_d^0 \rangle$.
The diagonal elements depend on the unknown soft SUSY breaking
parameters ${\bf m_Q^2}$, ${\bf m_{\bar{u}}^2}$, {\em etc.} while the
off-diagonal elements depend on the supersymmetric parameter $\mu$,
the soft-triscalar coupling ${\bf a_f}$, $v$ and $\tan\beta$.
Assuming no flavor mixing among different sfermion generations, the
sfermion mass matrix reduces to a set of $2\times 2$ matrices for each
flavor. The corresponding mass eigenstates ${\tilde F}_{1,2}$ are
mixtures of the ${\tilde f}_{L,R}$, with the mixing angle $\theta_{\tilde{f}}$:
\begin{equation}
\left(
\begin{array}{c}
{\tilde F}_{1}\\
{\tilde F}_{2}
\end{array}
\right)
=\left(
\begin{array}{cc}
\cos\theta_{\tilde{f}}&\ \ \ -\sin\theta_{\tilde{f}}\\
\sin\theta_{\tilde{f}}&\ \ \ \cos\theta_{\tilde{f}}
\end{array}
\right)
\left(
\begin{array}{c}
{\tilde f}_{L}\\
{\tilde f}_{R}
\end{array}
\right).
\end{equation}
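For a fixed flavor, the mass eigenvalues and the mixing angle follow directly from diagonalizing the corresponding $2\times 2$ matrix (written here for real parameters):
\begin{equation}
m_{{\tilde F}_{1,2}}^2=\frac{1}{2}\left[M_{LL}^2+M_{RR}^2\mp
\sqrt{\left(M_{LL}^2-M_{RR}^2\right)^2+4\left(M_{LR}^2\right)^2}\,\right],
\qquad
\tan 2\theta_{\tilde{f}}=\frac{2M_{LR}^2}{M_{LL}^2-M_{RR}^2}\ \ \ ,
\end{equation}
where $M_{AB}^2$ ($A,B=L,R$) denote the entries of the ${\bf M_{AB}^2}$ for the flavor in question. Since $M_{LR}^2$ is proportional to the corresponding fermion mass, the mixing angle is generically sizable only for third generation sfermions.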
In the more general situation where one allows for flavor mixing, the
diagonalized $6\times 6$ mass matrix is given by
\begin{equation}
\left( {\bf M_f^2}\right)_{ \rm diag} = {\bf Z_f}^\dag\ {\bf M_f^2}\ {\bf
Z_f}
\end{equation}
where
\begin{equation}
{\bf M_f^2} =\left(
\begin{array}{cc}
{\bf M_{LL}^2} & {\bf M_{LR}^2}\\ {\bf M_{LR}^2} & {\bf M_{RR}^2}
\end{array}\right)
\end{equation}
for each species of sfermion. Hence, a given sfermion mass eigenstate
${\tilde F}_j$ is given in terms of the flavor eigenstates ${\tilde
f}_I$ as\footnote{Here, we follow the notation and conventions of Ref.~\cite{Rosiek:1995kg}.}
\begin{equation}
{\tilde f}_I = Z_f^{Ij}\, {\tilde F}_j
\end{equation}
where $I=1,2,3$ indicate the left-handed (LH) flavor states\footnote{For
scalars, the handedness simply indicates that they are the superpartners of
the left-handed or the right-handed fermions.} ${\tilde f}_{L_I}$ and
$I=4,5,6$ refer to the right-handed (RH) flavor states ${\tilde
f}_{R_{I-3}}$. Hence, the simultaneous presence of non-vanishing
$Z_f^{1j}$ and $Z_f^{2j}$ would indicate flavor mixing among first and
second generation LH sfermions, while having both $Z_f^{1j}\not= 0$
and $Z_f^{4j}\not=0$ would imply mixing among the LH and RH sfermions
of the first generation. Note that for sneutrinos, the indices $I,j$
run over only $1,2,3$ and we have no left-right mixing in this case.
The gauginos and higgsinos mix with each other since both are charged
under the electroweak gauge group. The mass matrix for the neutral
states $\psi^0=(\tilde{B}, \tilde{W}^0, \tilde{H}_d^0, \tilde{H}_u^0)$
is
\begin{equation}
{\bf M}_{\tilde{N}}=\left(
\begin{array}{cccc}
M_1&0&-c_{\beta}s_WM_Z&s_{\beta}s_WM_Z\\ 0&M_2&c_\beta c_WM_Z&-s_\beta
c_WM_Z\\ -c_\beta s_WM_Z&c_\beta c_W M_Z&0&-\mu\\ s_\beta
s_WM_Z&-s_\beta c_W M_Z&-\mu&0
\end{array}
\right).
\end{equation}
Here we have introduced the abbreviation $s_\beta=\sin\beta$,
$c_{\beta}=\cos\beta$, $s_W=\sin\theta_W$, and $c_W=\cos\theta_W$.
The mass matrix can be diagonalized by a $4\times 4$ unitary matrix
$N$:
\begin{equation}
{\bf M_{\chi^0}}^{\rm diag}={\bf N}^* {\bf M_{\tilde{N}}} {\bf N}^{-1}\ \ \ .
\end{equation}
The mass eigenstates are called neutralinos
$\chi_i^0=N_{ij}\psi_j^0$, $i=1 \ldots 4$, with
$m_{\chi_1^0}<m_{\chi_2^0}<m_{\chi_3^0}<m_{\chi_4^0}$. In the limit
that $M_Z \ll M_1, M_2, |\mu|$, each neutralino is a pure gaugino or
Higgsino state, while in general, the neutralino is a mixture of
gauginos and Higgsinos.
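For example, taking real parameters with $M_1<M_2\ll|\mu|$ and expanding the eigenvalues of ${\bf M}_{\tilde{N}}$ in powers of $M_Z$, one finds (a sketch of the standard result; see, {\em e.g.}, Ref.~\cite{Martin:1997ns})
\begin{eqnarray}
m_{\chi_1^0}&\simeq& M_1-\frac{M_Z^2 s_W^2\left(M_1+\mu\sin 2\beta\right)}{\mu^2-M_1^2}\ ,\nonumber\\
m_{\chi_2^0}&\simeq& M_2-\frac{M_W^2\left(M_2+\mu\sin 2\beta\right)}{\mu^2-M_2^2}\ ,\\
m_{\chi_{3,4}^0}&\simeq& |\mu|+{\cal O}(M_Z^2/\mu)\ ,\nonumber
\end{eqnarray}
so that in this limit the lightest neutralino is mostly bino, the second mostly wino, and the two heaviest mostly Higgsino.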
Similarly, the mass matrix for the charged gauginos and Higgsinos
$\psi^{\pm}=(\tilde{W}^+, \tilde{H}_u^+, \tilde{W}^-, \tilde{H}_d^-)$
is
\begin{equation}
{\bf M_{\tilde{C}}}=\left(
\begin{array}{cc}
{\bf 0}&{\bf X^T}\\ {\bf X}&{\bf 0}
\end{array}
\right); \ \ \ {\bf X}=\left(
\begin{array}{cc}
M_2&\sqrt{2}s_{\beta}M_W\\ \sqrt{2}c_{\beta}M_W&\mu
\end{array}\right).
\end{equation}
The mass eigenstates are called charginos $\chi_i^{\pm}$, $i=1,2$
($m_{\chi_1^\pm}<m_{\chi_2^\pm}$), which are related to the gauge
eigenstates by two unitary $2\times2$ matrices $U$ and $V$ that
diagonalize the chargino mass matrix:
\begin{equation}
\left(\begin{array}{c} \chi_1^+\\ \chi_2^+ \end{array}\right) ={\bf
V}\left(\begin{array}{c} \tilde{W}^+\\ \tilde{H}_u^+
\end{array}\right);\ \ \ \left(\begin{array}{c} \chi_1^-\\ \chi_2^-
\end{array}\right) ={\bf U}\left(\begin{array}{c} \tilde{W}^-\\
\tilde{H}_d^- \end{array}\right);\ \ \ {\bf U^*XV^{-1}}=\left(
\begin{array}{cc}m_{\chi_1^{\pm}}&0\\0&m_{\chi_2^{\pm}}\end{array}\right)
\end{equation}
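Since ${\bf X}$ is only $2\times 2$, the chargino masses can be written in closed form:
\begin{equation}
m_{\chi_{1,2}^\pm}^2=\frac{1}{2}\left[|M_2|^2+|\mu|^2+2M_W^2\mp
\sqrt{\left(|M_2|^2+|\mu|^2+2M_W^2\right)^2-4\left|\mu M_2-M_W^2\sin 2\beta\right|^2}\,\right]\ \ \ .
\end{equation}
In the limit $M_W\ll |M_2|, |\mu|$ these reduce to $|M_2|$ and $|\mu|$, corresponding to a wino-like and a Higgsino-like chargino, respectively.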
The ${\rm SU}(3)_C$ gluino is a color octet fermion and does not mix
with other particles in the MSSM. Its mass is parametrized by $M_3$ as
defined in Eq.~(\ref{eq:soft}).
In gravity-mediated and gauge-mediated SUSY breaking
models, there is a unification relation for ${\rm SU}(3)_C$, ${\rm
SU}(2)_L$ and ${\rm U}(1)_Y$ gaugino masses $M_{3,2,1}$:
\begin{equation}
\frac{M_3}{\alpha_s}=\frac{M_2}{\alpha_2}=\frac{M_1}{\alpha_1},
\end{equation}
where $\alpha_s$, $\alpha_2$ and $\alpha_1$ are related to the
couplings of ${\rm SU}(3)_C$, ${\rm SU}(2)_L$ and ${\rm U}(1)_Y$ via
\begin{equation} \alpha_s=\frac{{g_s}^2}{4\pi},\ \ \
\alpha_2=\frac{\alpha}{\sin^2\theta_W}=\frac{{g_2}^2}{4\pi},\ \ \
\alpha_1=\frac{5}{3}\frac{\alpha}{\cos^2\theta_W}
=\frac{5}{3}\frac{{g_Y}^2}{4\pi}. \end{equation}
This relation holds at any
energy scale to one-loop order\footnote{We do not discuss possible
threshold effects at the GUT or Planck scale.}. In particular, at the
electroweak scale, if we take $\alpha_s=0.118$, $\alpha=1/128$ and
$\sin^2\theta_W=0.23$, the ratios between the gaugino masses are \begin{equation}
M_3:M_2:M_1\approx 7:2:1. \end{equation} However, such a unification relation
need not hold in the most general MSSM, and $M_3$, $M_2$ and $M_1$ could
be completely independent of each other.
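As a quick numerical check of this ratio (a sketch evaluating the one-loop unification relation with the rounded inputs quoted above):

```python
# Verify the quoted gaugino mass ratio M3 : M2 : M1 ~ 7 : 2 : 1 at the
# electroweak scale, using alpha_s = 0.118, alpha = 1/128, sin^2(theta_W) = 0.23.
alpha_s = 0.118
alpha = 1.0 / 128.0
sin2w = 0.23

alpha_2 = alpha / sin2w                        # SU(2)_L coupling
alpha_1 = (5.0 / 3.0) * alpha / (1.0 - sin2w)  # GUT-normalized U(1)_Y coupling

# One-loop unification relation: M_i / alpha_i is universal, so the mass
# ratios equal the coupling ratios.
r3, r2 = alpha_s / alpha_1, alpha_2 / alpha_1
print(round(r3, 2), round(r2, 2))  # -> 6.98 2.01
```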
The MSSM has two complex Higgs doublets, $H_u$ and $H_d$, which give
mass to up and down type fermions, respectively. The potential for
the neutral Higgs fields is \begin{equation}
V=(|\mu|^2+m_{H_u}^2)|H_u^0|^2+(|\mu|^2+m_{H_d}^2)|H_d^0|^2
-(bH_u^0H_d^0+{\rm c.c.})+\frac{1}{8}(g^2+g^{\prime
2})(|H_u^0|^2-|H_d^0|^2)^2. \end{equation} As in the Standard Model, the
minimum of the potential corresponds to a non-zero VEV for the neutral
Higgs fields, thereby breaking electroweak symmetry. Let us write
$\langle H_u^0 \rangle = v_u/\sqrt{2}$, $\langle H_d^0 \rangle =
v_d/\sqrt{2}$. The sum of $v_u^2$ and $v_d^2$ is related to the $Z$
boson mass and the gauge couplings as $v_u^2+v_d^2=v^2=4
M_Z^2/(g^2+g^{\prime 2})\approx(246\ {\rm GeV})^2$. It is convenient
to write the ratio of $v_u$ and $v_d$ as $\tan\beta$:
$\tan\beta\equiv v_u/v_d$, where $v_u=v \sin\beta$ and
$v_d= v \cos\beta$.
The two complex Higgs doublets contain eight real scalar degrees of
freedom. After EWSB, two charged and one neutral degree of freedom
are the would-be Nambu-Goldstone bosons, $G^{\pm}$ and $G^0$, that are
eaten by $W^{\pm}$ and $Z$ to become their longitudinal modes. We
are, thus, left with five physical Higgs bosons: two neutral CP-even
Higgses, $h^0$ and $H^0$; one neutral CP-odd Higgs, $A^0$; and a
pair of charged Higgses, $H^{\pm}$. When $m_{A^0}\gg M_W$, $h^0$ is
the SM-like Higgs. The tree level Higgs masses can
be obtained via expanding the potential around the Higgs VEVs and
diagonalizing the $2\times 2$ mass matrices. One finds \begin{eqnarray}
m_{A^0}^2&=&2b/\sin 2 \beta,\\ m_{H^{\pm}}^2&=&m_{A^0}^2 + M_W^2,\\
m_{h^0,H^0}^2&=&\frac{1}{2}\left(
m_{A^0}^2+M_Z^2\mp\sqrt{(m_{A^0}^2+M_Z^2)^2 -4 M_Z^2 m_{A^0}^2\cos^2 2
\beta} \right). \end{eqnarray} The mass eigenstates can be written in terms of
the gauge eigenstates as
\begin{eqnarray}
A^0&=&\sqrt{2}(\cos\beta\ {\rm Im}[H_u^0]+\sin\beta\ {\rm
Im}[H_d^0]),\\ H^{+}&=&\cos\beta\ H_u^+ +\sin\beta\ H_d^{-*},\ \ \
H^{-}\ =\ \cos\beta\ H_u^{+*} +\sin\beta\ H_d^{-},\\
\left(\begin{array}{c} h^0\\ H^0
\end{array}
\right)&=& \sqrt{2}\left(\begin{array}{cc} \cos\alpha&-\sin\alpha\\
\sin\alpha&\cos\alpha
\end{array}
\right) \left(\begin{array}{c} {\rm Re}[H_u^0]-v_u/\sqrt{2}\\ {\rm
Re}[H_d^0]-v_d/\sqrt{2}\\
\end{array}
\right),
\label{eq:alpha}
\end{eqnarray}
where \begin{equation} \frac{\sin 2 \alpha}{\sin 2 \beta}=-
\frac{m_{A^0}^2+M_Z^2}{m_{H^0}^2-m_{h^0}^2};\ \ \ \frac{\cos 2
\alpha}{\cos 2 \beta}=- \frac{m_{A^0}^2-M_Z^2}{m_{H^0}^2-m_{h^0}^2}.
\end{equation} The tree level Higgs masses are determined by only two
parameters: $b$ and $\tan\beta$ (or $m_{A^0}$ and $\tan\beta$). For
large $m_{A^0}$, $A^0$, $H^0$ and $H^{\pm}$ are heavy and decouple
from low-energy observables. On the other hand, the tree level mass
of the light
CP-even Higgs $h^0$ is bounded from above, $m_{h^0} < M_Z|\cos 2\beta| \le M_Z$,
a range that has already been excluded by the LEP Higgs searches
\cite{LEPHiggs}. However, the mass of the $h^0$ receives large
radiative corrections from third generation quarks and their
superpartners, due to the large top Yukawa coupling,
allowing for a mass large enough to be consistent with present direct
search limits. The dominant contribution is from the stop loop:
\begin{equation}
\Delta(m_{h^0}^2)=\frac{3}{4\pi^2}v^2 y_t^4 \sin^4\beta \ln \left(
\frac{m_{\tilde{t}_1}m_{\tilde{t}_2}}{m_t^2}\right) \ \ \ .
\end{equation}
For
stop masses around 1 TeV, this correction can push the $m_{h^0}$ above
the current experimental bound. Detailed two loop calculations for
the light CP-even Higgs mass indicate that the upper bound is about
135 GeV. Uncertainties of a few GeV arise from neglected higher order
effects and the experimental error in the top quark mass \cite{mh2loop}.
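A numerical sketch of these statements follows (assumptions: real parameters; the $v\simeq 174$ GeV normalization in which $m_t = y_t v \sin\beta$, under which the one-loop formula above reproduces the standard leading-log result; and illustrative inputs $m_{A^0}=500$ GeV, $\tan\beta=10$, $m_{{\tilde t}_{1,2}}=1$ TeV):

```python
import math

MZ, MT = 91.19, 174.3   # GeV; illustrative Z and top masses (assumption)
V = 174.0               # GeV; normalization with m_t = y_t * V * sin(beta) (assumption)

def mh_tree(mA, tanb):
    """Lighter tree-level CP-even Higgs mass from the 2x2 mass matrix."""
    c2b = math.cos(2.0 * math.atan(tanb))
    s = mA**2 + MZ**2
    d = math.sqrt(s**2 - 4.0 * MZ**2 * mA**2 * c2b**2)
    return math.sqrt(0.5 * (s - d))

def mh_one_loop(mA, tanb, mst1, mst2):
    """Tree mass plus the dominant stop-loop correction quoted in the text."""
    sb = math.sin(math.atan(tanb))
    yt = MT / (V * sb)
    dmh2 = 3.0 / (4.0 * math.pi**2) * V**2 * yt**4 * sb**4 \
        * math.log(mst1 * mst2 / MT**2)
    return math.sqrt(mh_tree(mA, tanb)**2 + dmh2)

mh0 = mh_tree(500.0, 10.0)                       # ~89 GeV: below M_Z, as required
mh1 = mh_one_loop(500.0, 10.0, 1000.0, 1000.0)   # ~127 GeV: lifted above the LEP reach
```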
The couplings of the Higgses to the (s)quarks and (s)leptons are
proportional to the Yukawa couplings and are, therefore, non-negligible
only for the third generation. For the low energy precision
measurements where only light quarks and leptons are involved, the
contribution from the Higgs sector can almost always be
neglected. However, the details of the Higgs sector do affect the
phenomenology of SUSY CP-violation, so we will provide a brief summary
of the status of the experimental searches for MSSM
neutral Higgs bosons in Sec.~\ref{sec:higgs}.
\subsection{SUSY Interactions}
The gauge interactions in the MSSM can be obtained
from the usual SM gauge interactions by replacing two of
the SM particles with their superpartners. For example, the coupling
of the SU(2)$_L$ gauge boson to quarks is (before EWSB and
diagonalization of the quark mass matrices)
\begin{equation}
\label{eq:gauge-quark}
-g{Q}^{\dagger}\bar\sigma^\mu
\frac{\vec\tau}{2}\cdot \vec{W}_{\mu} Q
\end{equation}
while
the corresponding gauge-squark-squark interaction is
\begin{equation}
i g \partial^{\mu} \tilde{Q}^\dagger \frac{\vec\tau}{2} \cdot \vec{W}_{\mu} \tilde{Q}
+c.c.
\label{eq:gauge-squark}
\end{equation}
Similarly, supersymmetry leads to a
squark-quark-SU(2)$_L$ gaugino interaction
\begin{equation}
-\sqrt{2}g (\tilde{Q}^\dagger \frac{\vec\tau}{2} Q)
\cdot
\vec{\tilde{W}} + c.c.
\label{eq:gauge-quark-squark}
\end{equation}
The corresponding Feynman rules are illustrated in the diagrams of
Fig.~\ref{fig:Feyn_gqq}. Additional
gauge boson-gauge boson-squark-squark interactions
appear via the $D^{\mu}\tilde{Q}^*D_{\mu}\tilde{Q}$ term.
\begin{figure}
\resizebox{5 in}{!}{
\includegraphics*[0,570][290,660]{Feyn_gqq.ps}}
\caption{Feynman diagrams for supersymmetric (a) gauge-quark-quark,
(b) gauge-squark-squark and (c) gaugino-quark-squark vertices.}
\label{fig:Feyn_gqq}
\end{figure}
The other fermion-fermion-gauge boson,
sfermion-sfermion-gauge boson, and fermion-sfermion-gaugino
interactions follow a similar pattern.
Similarly, the supersymmetrized gauge boson self-interaction leads to
a gauge boson-gaugino-gaugino coupling, as shown in Fig.~\ref{fig:Feyn_ggg}.
\begin{figure}
\resizebox{4 in}{!}{
\includegraphics*[0,570][200,650]{Feyn_ggg.ps}}
\caption{Feynman diagrams for supersymmetric (a) tri-gauge boson coupling,
(b) gauge boson-gaugino-gaugino coupling.}
\label{fig:Feyn_ggg}
\end{figure}
The Higgs Yukawa interactions are obtained from the superpotential
$W_{\rm MSSM}$ in Eq.~(\ref{eq:MSSMsuperpotential}), which also gives rise
to Higgsino-quark-squark interactions via the
first term in Eq.~(\ref{eq:lag}) and to Higgs-squark-squark interactions via
the second term in Eq.~(\ref{eq:lag}). The soft SUSY breaking trilinear
$A$-terms give rise to additional Higgs-squark-squark couplings.
The couplings between the Higgs and the lepton sector are obtained similarly.
Including the effects of EWSB leads to
modifications of these expressions, due to: (a) diagonalization of the
quark mass matrices, leading to the presence of the CKM matrix in
Eqs.~(\ref{eq:gauge-quark},\ref{eq:gauge-quark-squark}); (b) left-right
mixing (as well as possible flavor mixing) among sfermions, leading to
the presence of the mixing matrices $Z_f^{Ij}$ in
Eqs.~(\ref{eq:gauge-squark},\ref{eq:gauge-quark-squark}); and (c) mixing
of gauginos and Higgsinos into the charginos and neutralinos, leading
to factors of the matrices $N_{ij}$, $V_{ij}$, $U_{ij}$, {\em etc.}
in Eq.~(\ref{eq:gauge-quark-squark}). The
Feynman rules for these interactions appear in several places in the
literature and we do not reproduce a complete list here. Throughout
this article, we generally follow the conventions given in
Refs.~\cite{Rosiek:1995kg,Rosiek:1989rs}.
\subsection{$R$-parity Violating Interactions}
\label{sec:rpv}
Additional $B$- and $L$-violating interactions may appear in the MSSM
if $R$-parity violation is allowed. Rapid proton decay can still be
avoided if we only turn on $B$ or $L$ violating terms, but not both
simultaneously. The RPV terms in the Lagrangian can be obtained from
the superpotentials Eqs.~(\ref{eq:RPVL}) and (\ref{eq:RPVB}) via
Eq.~(\ref{eq:lag}). For low energy processes where light quarks are
present in the initial and final states, the RPV terms of
interest are Yukawa-type interactions:
\begin{eqnarray}
\label{eq:rpv-super}
{\cal L}_{RPV, \ \Delta{L}=1}&\!\!\!= &\!\!\!\lambda_{ijk} (\frac{1}{2} L_i
L_j\tilde{\bar{e}}^\dagger_k +\tilde{L}_i L_j\bar{e}^{\dagger}_k)
+\lambda_{ijk}^{\prime}( L_i Q_j\tilde{\bar{d}}^\dagger_k +\tilde{L}_i
Q_j\bar{d}^{\dagger}_k +{L_i} \tilde{Q}_j\bar{d}^{\dagger}_k)+\mu_i^\prime L_i\tilde{H}_u; \\
{\cal L}_{RPV, \ \Delta{B}=1}&\!\!\!=&\!\!\!\lambda_{ijk}^{\prime\prime} (\bar{u}^{\dagger}_i
\bar{d}^{\dagger}_j \tilde{\bar{d}}^\dagger_k +\tilde{\bar{u}}^\dagger_i \bar{d}^{\dagger}_j
\bar{d}^{\dagger}_k).
\end{eqnarray}
Note that here we follow the notation of Ref.~\cite{Martin:1997ns}, in which
fermion fields are denoted by two-component Weyl spinors.
\begin{figure}[ht]
\begin{center}
\resizebox{3 in}{!}
{\includegraphics*[0,640][220,740]{rpv_muon.ps}}
\caption{
Tree-level $P_R$-violating contributions to muon decay.
}
\label{fig:muondecay}
\end{center}
\end{figure}
Such terms contribute to low energy SM processes via the
exchange of heavy scalar quarks or scalar leptons. For example,
Fig.~\ref{fig:muondecay} shows the RPV contribution
from the purely leptonic term proportional to
$\lambda_{12k}$ to the muon
decay amplitude that determines the Fermi constant, $G_\mu$.
After a Fierz
reordering, the resulting four-fermion amplitude has the same
structure as the tree-level $(V-A)\times(V-A)$ amplitude of the SM,
but with a normalization determined by the ratio of
$|\lambda_{12k}|^2$ to the square of the exchanged slepton mass. More
generally, for momentum transfer $q^2 \ll {\tilde m}^2$, the
correction to low-energy SM amplitudes from RPV interactions can be
parametrized in terms of the following quantities:
\begin{equation}
\label{eq:deltas}
\Delta_{ijk}(\tilde f)={|\lambda_{ijk}|^2\over 4\sqrt{2}G_\mu
m_{\tilde f}^2}\ge 0 \end{equation} with similar definitions for the primed
and double-primed quantities.
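As an illustration of the size of these quantities (a sketch with hypothetical values $|\lambda_{ijk}|=0.1$ and $m_{\tilde f}=100$ GeV):

```python
import math

G_MU = 1.16637e-5  # Fermi constant in GeV^-2

def delta(lam, m_sf):
    """Delta_ijk = |lambda_ijk|^2 / (4*sqrt(2)*G_mu*m_sf^2); m_sf in GeV."""
    return abs(lam)**2 / (4.0 * math.sqrt(2.0) * G_MU * m_sf**2)

# A coupling of 0.1 with a 100 GeV sfermion gives Delta ~ 0.015, well above
# the few-per-mil bounds quoted in the tables; Delta falls as 1/m_sf^2.
d = delta(0.1, 100.0)
```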
The quantities $\Delta_{ijk}$ {\it etc.} are constrained by other
precision measurements and rare decay searches. A summary of the current experimental bounds
can be found in \cite{RPV}. Here, we update our earlier global
analyses of constraints on RPV interactions obtained from the
low-energy observables in Table~\ref{tab:rpv-constrain}.
For each observable, we indicate
the sensitivity to the various $\Delta_{ijk}^{(\prime)}(\tilde f)$ along with a
reference to the chapter in this article where a more detailed
discussion appears.
\begin{table}
\begin{tabular}{|c|ccccc|c|c|}
\hline Quantity & $\Delta_{11k}^{\prime}(\tilde{d}_R^k)$ &
$\Delta_{1k1}^{\prime}(\tilde{q}_L^k)$ & $\Delta_{12k}(\tilde{e}_R^k)$
& $\Delta_{21k}^{\prime}(\tilde{d}_R^k)$
& $\Delta_{2k1}^{\prime}(\tilde{d}_L^k)$
& Value
& Discussion
\\ \hline\hline
$\delta |V_{ud}|^2/|V_{ud}|^2$ &2&0&-2&0&0&$-0.0032\pm 0.0014$(a)
&Sec.~\ref{sec:cc}
\\
&&&&&&$-0.0002\pm 0.0015$(b)
&Sec.~\ref{sec:cc}
\\
$\delta Q_W^{\rm Cs}/Q_W^{\rm Cs}$ &-4.82&5.41&0.05&0&0&$-0.0040\pm
0.0066$
&Sec.~\ref{sec:nc}
\\
$\delta R_{e/\mu}$ &2&0&0&-2&0&$-0.0042 \pm 0.0033$
&Sec.~\ref{sec:cc}
\\
$\delta G_\mu/G_\mu$ &0&0&1&0&0&$0.00025\pm 0.001875$
&Sec.~\ref{sec:nc}
\\
$\delta Q_W^e/Q_W^e$
&0&0&-29.8&0&0&$0.14\pm0.11$
&Sec.~\ref{sec:nc}
\\
$\delta R_{\nu}$
&0&0
&-0.21& 0.22& 0.08&$-0.0033\pm 0.0007$
&Sec.~\ref{sec:nc}
\\
$\delta R_{\bar\nu}$
&0&0&-0.077& 0.132 &0.32
&$-0.0019\pm 0.0016$
&Sec.~\ref{sec:nc}
\\
\hline
\end{tabular}
\caption{$P_R$-violating contributions to $\delta
|V_{ud}|^2/|V_{ud}|^2$, $\delta Q_W^{\rm Cs}/Q_W^{\rm Cs}$, $\delta
R_{e/\mu}$, $\delta G_\mu/G_\mu$, $\delta
Q_W^e/Q_W^e$, $\delta R_\nu$ and $\delta R_{\bar\nu}$.
Here $\delta |V_{ud}|^2/|V_{ud}|^2$ denotes the possible correction to the value of
$|V_{ud}|^2$ extracted from beta-decay that is allowed by first row CKM unitarity
tests. See text for a description of scenarios (a) and (b) for $\delta |V_{ud}|^2/|V_{ud}|^2$.
Columns give the coefficients of the various
corrections from the $\Delta_{ijk}^{\prime}$ and $\Delta_{12k}$ to the
different quantities. The next to last column gives the
value of the corresponding quantity extracted from experiment assuming only Standard Model contributions to the relevant process. The final column gives the section of this review containing the relevant discussion.}
\label{tab:rpv-constrain}
\end{table}
The results of our fit are particularly sensitive to tests of the
unitarity of the first row of the CKM matrix, discussed in Section
\ref{sec:cc}. As we discuss there, the status of
first row CKM unitarity is presently unsettled, so we provide a fit
for two scenarios: (a) using a value of the kaon decay form factor
$f_+(0)$ obtained from large $N_C$ QCD studies, leading to a deviation
from CKM unitarity by roughly two standard deviations; (b) a value
for $f_+(0)$ obtained from recent lattice QCD simulations that implies
agreement with unitarity. The resulting 95\% C.L. ranges for the
$\Delta_{ijk}$ and $\Delta_{ijk}^\prime$ under these two scenarios are given in
Table~\ref{tab:rpvrange}.
\begin{table}
\begin{tabular}{|c|c|c|}\hline
&(a) large $N_c$ QCD&(b) lattice QCD \\ \hline
$\Delta_{11k}^{\prime}(\tilde{d}_R^k)$ & $0-0.0020$
& $0-0.0024$\\
$\Delta_{1k1}^{\prime}(\tilde{q}_L^k)$ &$0-0.0017$ &$0-0.0019$\\
$\Delta_{12k}(\tilde{e}_R^k)$ & $0.0013-0.0039$&$0.0006-0.0031$\\
$\Delta_{21k}^{\prime}(\tilde{d}_R^k)$ &$0-0.0015$&$0-0.0014$\\
$\Delta_{2k1}^{\prime}(\tilde{d}_L^k)$ &$0-0.0013$&$0-0.0009$\\ \hline
\end{tabular}
\caption{95\% C.L. ranges for the $\Delta_{ijk}^{(\prime)}$
obtained from fitting
to the low energy observables listed in Table~\ref{tab:rpv-constrain}.}
\label{tab:rpvrange}
\end{table}
\subsection{SUSY Searches}
Both direct and indirect searches for superparticles have been
carried out in various experiments \cite{pdg}. Sparticles can be
pair produced at $e^+e^-$, $p\bar{p}$ and $pp$ colliders via
intermediate $\gamma^*$, $Z^*$, gluon or sparticle exchange. In
$R$-parity conserving scenarios, each sparticle subsequently decays
into energetic leptons and jets plus the LSP. In most cases, the LSP is a neutral
weakly interacting particle ({\em e.g.}, a neutralino) which travels
through the detector without depositing significant energy.
Therefore, typical signatures consist of some combination of jets,
leptons, possibly photons, and large missing energy.
The Large Electron-Positron (LEP) collider at CERN has completed its
running, reaching center of mass energies more than twice the $Z$
boson mass with a few hundred ${\rm pb}^{-1}$ of integrated luminosity. No
events inconsistent with the SM have been reported, and all visible
sparticles with mass up to half the $Z$ mass have been excluded.
Data taken at center of mass energies above $M_Z$ set stronger
limits on the neutralino, chargino, squark and slepton masses,
although the limits depend on the interplay between the sparticle
masses, cross sections and decay branching ratios. Charginos are
excluded up to 103 GeV except in cases of low acceptance or low
cross section. The limits on the slepton masses are based on
pair production of $\tilde{l}\tilde{l}$ with $\tilde{l}\rightarrow l
\chi_1^0$, which gives lower mass bounds of about 80$-$100 GeV.
Limits on the stop and sbottom masses are about 90 GeV, varying
with the left-right squark mixing angle.
The Tevatron Run I has accumulated about $110\ {\rm pb}^{-1}$ of data for $p\bar{p}$ collisions
at $\sqrt{s}=1.8$ TeV. Pairs of squarks
and gluinos can be produced because of the large strong interaction
cross sections. Signals of energetic multijets plus missing
transverse energy have been searched for, with null results being
reported. The exclusion region in $(m_{\tilde{q}}, m_{\tilde{g}})$
has been derived, and masses up to about 300 GeV have been excluded if
$m_{\tilde{g}}=m_{\tilde{q}}$. The gluino mass has been excluded up
to 195 GeV for any squark mass. Charginos and neutralinos can be
produced via $q\bar{q}^{\prime}\rightarrow \chi_1^{\pm}\chi_2^0$.
Leptonic decays of both $\chi_1^\pm$ and $\chi_2^0$ lead to trilepton
signals, which reduce the background significantly. The same sign
dilepton signal is possible for charginos produced in the squark and
gluino cascade decay. Bounds on charginos and neutralinos, however,
are not as strong as the LEP results. The upgraded Tevatron Run II,
with $\sqrt{s}=2$ TeV and a design integrated luminosity of $L=2\ {\rm
fb}^{-1}$, could cover large regions of SUSY parameter space. The
Large Hadron Collider (LHC) at CERN, colliding $pp$ at
$\sqrt{s}=14$ TeV with $L=100\ {\rm fb}^{-1}$, would cover
squarks and gluinos with masses up to a few TeV.
Precision measurements of $Z$-pole observables, $M_W$, $m_t$ and several low-energy
observables could constrain the MSSM mass spectrum. The bounds from the current precision $Z$-pole measurements on the SUSY parameters
are discussed in Sec.~\ref{sec:zpole}. In addition, FCNC and CP
violation experiments impose strong limits on the possible
flavor structure and CP-violating phases of MSSM, which heavily
constrain the structure of soft-SUSY breaking parameters (see Section \ref{sec:cpv}). In the $R$-parity conserving MSSM, sparticles could affect the low
energy measurements {\em via} radiative corrections. In general, loop-induced SUSY
effects in low-energy observables are proportional to $(\alpha/{\pi}) (M/{\tilde m})^2$, where $M$ is the relevant SM particle mass. For ${\tilde m}\sim M_W$ and $M\sim M_W$, probing these effects therefore requires precision of better than one percent. The analysis and implications of these precision measurements constitute the bulk of the remainder of this review.
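A back-of-the-envelope estimate of this suppression (a sketch with the illustrative choice ${\tilde m}\sim M\sim M_W$):

```python
import math

ALPHA = 1.0 / 128.0
MW = 80.4       # GeV
m_susy = 80.4   # GeV; illustrative superpartner mass (assumption)

# Loop-induced SUSY effects scale as (alpha/pi) * (M / m_susy)^2, i.e. a
# few-per-mil shift for weak-scale superpartners, hence the need for
# sub-percent experimental precision.
rel_shift = (ALPHA / math.pi) * (MW / m_susy)**2
```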
\section{Renormalization}
\label{sec:renorm}
In this review, we focus on possible manifestations of SUSY in low energy processes that may complement the information obtained from high energy collider searches. The low energy processes of interest here further break down into two broad classes: (a) those allowed by the Standard Model and for which there exist precise SM predictions, and (b) processes that are either forbidden by SM symmetries or that are highly suppressed. In order to discern the effects of SUSY in the first category of observables, one generally requires knowledge of SM predictions at the level of one-loop (or higher) radiative corrections. In the case of SUSY models that conserve $R$-parity, SUSY contributions to precision electroweak observables will also appear solely via loop effects.
In order to make meaningful comparisons between experimental results and predictions based on supersymmetric extensions of the SM, one must compute and renormalize superpartner loop effects using the same framework as adopted for the SM predictions. Doing so requires care, however, as one must employ a regulator that preserves supersymmetry. Here, we use dimensional reduction (DR), wherein one works in $d=4-2\varepsilon$ spacetime dimensions while retaining the Clifford algebra appropriate to fermion field operators in $d=4$ dimensions. Renormalized quantities are obtained by introducing counterterms that remove the factors of $1/\varepsilon-\gamma+\ln 4\pi$ that arise in divergent one-loop graphs -- a subtraction scheme known as modified dimensional reduction, or ${\overline {\rm DR}}$. This scheme represents a variation on the more familiar modified minimal subtraction (${\overline {\rm MS}}$) commonly used for the SM. In ${\overline {\rm MS}}$ renormalization, one regularizes divergent graphs using dimensional regularization, which differs from dimensional reduction by continuing both the number of spacetime dimensions as well as the Clifford algebra into $d=4-2\varepsilon$ dimensions. Doing so entails an explicit breaking of supersymmetry, however, as dimensional regularization effectively changes the number of fermionic degrees of freedom relative to those of their bosonic superpartners
in $d\neq 4$. In contrast, ${\overline {\rm DR}}$ retains the correspondence between the number of bosonic and fermionic degrees of freedom.
Most low energy precision electroweak observables are mediated at lowest order by the exchange of a virtual gauge boson (GB), so we consider first the renormalization of GB propagators and GB-fermion interactions. Low energy processes generally involve the lightest quarks and leptons, and since the corresponding Higgs-fermion interactions are suppressed by small Yukawa couplings, we will not discuss renormalization of the Higgs sector in detail here\footnote{A brief discussion
of MSSM neutral Higgs searches is given in Section \ref{sec:higgs}; a more extensive review can be found in Ref.~\cite{Heinemeyer:2004gx}.}. To set our notation, we first discuss renormalization relevant to charged current (CC) processes and subsequently discuss the neutral current (NC) sector.
\subsection{Charged Current Processes}
Radiative corrections to CC amplitudes naturally divide into four topologies: (a) $W$-boson propagator corrections; (b) corrections to the $W$-fermion vertices; (c) fermion propagator corrections; and (d) box graphs. These corrections have been well-studied in the SM and we refer the reader to the extensive literature on the subject for further details (see, {\em e.g.}, Refs.~\cite{Sirlin:1977sv,Marciano:1985pd,Marciano:1993sh,Marciano:2005ec}). As our emphasis in this review lies with the effects of supersymmetric particles, we show in Fig. \ref{fig:cccorr} illustrative superpartner contributions to each type of radiative correction.
\begin{figure}[ht]
\begin{center}
\resizebox{6 in}{!}{
\includegraphics*[20,160][580,620]{cccorrection.ps}}
\caption{Representative supersymmetric corrections to charged current observables: (a) $W$-boson propagator corrections; (b) vertex corrections; (c) external leg corrections; and (d) box graph contributions.}
\label{fig:cccorr}
\end{center}
\end{figure}
One loop corrections to the $W$-boson propagator, fermion propagator, and $W$-fermion vertices are divergent, so one must carry out the appropriate renormalization. After such renormalization, the $W$-boson propagator $iD_{\mu\nu}(k)$ takes the general form in the Feynman gauge
\begin{equation}
iD_{\mu\nu}(k) = -i\left[T_{\mu\nu}{\hat D}_{WW}^T(k^2)+L_{\mu\nu}{\hat D}_{WW}^L(k^2)\right]
\end{equation}
where the transverse and longitudinal projection operators are given by
\begin{eqnarray}
T_{\mu\nu} & = & -g_{\mu\nu}+ k_\mu k_\nu/k^2 \\
L_{\mu\nu} & = & k_\mu k_\nu/k^2
\end{eqnarray}
and ${\hat D}_{WW}^{T,L}(k^2)$ are finite scalar functions; the hat indicates quantities renormalized in the ${\overline {\rm DR}}$ scheme. In low-energy processes, effects associated with the longitudinal term are suppressed by light fermion masses, so we will not discuss this component further. The renormalized transverse component is given by
\begin{equation}
\left[ {\hat D}_{WW}^T(k^2)\right]^{-1} = k^2-{\hat M}_W^2 +{\hat\Pi}_{WW}^T(k^2)\ \ \ ,
\end{equation}
where ${\hat M_W}$ is the finite part of the bare $W$-boson mass parameter appearing in the renormalized Lagrangian after electroweak symmetry breaking and ${\hat\Pi}_{WW}^T(k^2)$ gives the finite loop contribution after ${\overline {\rm DR}}$ subtraction is performed. Both ${\hat M_W}$ and ${\hat\Pi}_{WW}^T(k^2)$ depend on the 't Hooft renormalization scale $\mu$. However, the physical $W$-boson mass -- defined by the value of $k^2$ for which $[{\hat D}_{WW}^T(k^2=M_W^2)]^{-1}=0$ -- is $\mu$-independent. The finite residue ${\hat Z}_W$ of the pole in ${\hat D}_{WW}^T$ is given by
\begin{equation}
{\hat Z}_W = \left[1+{\hat\Pi}_{WW}^{T\ \prime}(M_W^2)\right]^{-1}\ \ \ .
\end{equation}
The corresponding expression for the renormalized, inverse fermion propagator is
\begin{equation}
{\hat S}_f^{-1}(k) = \dslash{k}-{\hat m}_f +\left[ {\hat A}_L(k^2)\dslash{k} +{\hat B}_L(k^2)\right] P_L
+ \left[ {\hat A}_R(k^2)\dslash{k} +{\hat B}_R(k^2)\right]P_R
\end{equation}
where $P_{L,R}$ are the left- and right-handed projectors and the ${\hat A}_{L,R}$ and ${\hat B}_{L,R}$ contain the finite loop contributions. The physical fermion mass is given by
\begin{equation}
m_f=\left[ {\hat m}_f -\frac{1}{2}{\hat B}_L(m_f^2)
-\frac{1}{2}{\hat B}_R(m_f^2)\right]\, \left[1+\frac{1}{2}{\hat A}_L(m_f^2) + \frac{1}{2}{\hat A}_R(m_f^2) \right]^{-1}\ \ \ ,
\ee
while the residue of the pole is
\begin{equation}
{\hat Z}_\psi = \left[1+{\hat A}_L(m_f^2) P_L + {\hat A}_R(m_f^2) P_R\right]^{-1} \ \ \ .
\ee
Note that for CC interactions in the SM, the left-handed (LH) components give the dominant contribution to physical amplitudes, as the presence of right-handed (RH) components will be suppressed by factors of the fermion masses\footnote{For example, the weak magnetic moment operator in the SM is chirality-odd and is generated by one-loop vertex corrections that contain single insertions of the Yukawa interaction.}.
The renormalized vertex functions for CC amplitudes are relatively straightforward. We illustrate using the muon decay process $\mu^-\to \nu_\mu W^-$, for which the tree-level amplitude is
\begin{equation}
i{\cal M}_{0}^{\rm CC} = i \frac{g}{\sqrt{2}}{\bar \nu }_\mu \diracslash{W}^{\ +} P_L \mu \ \ \ .
\ee
After one-loop renormalization, one has
\begin{equation}
i{\cal M}_{0}^{\rm CC}+i{\cal M}_{\rm vertex}^{\rm CC} =i \frac{{\hat g}(\mu)}{\sqrt{2}}\left[1+{\hat F}_V(k^2)\right] {\bar \nu }_\mu \diracslash{W}^{\ +} P_L \mu
\ee
where ${\hat g}(\mu)$ is the running SU(2)$_L$ gauge coupling and ${\hat F}_V(k^2)$ is the finite part of the one-loop vertex correction.
The vertex and propagator corrections outlined above will contribute to the four-fermion amplitudes that describe low energy CC processes of interest to us, such as $\mu$- and $\beta$-decay. Additional, but finite, SM one-loop contributions are generated by box graphs involving the exchange of two vector bosons. To the extent that the external masses and momenta are small compared to the weak scale, the box contributions will have the form of a product of two left-handed currents, $(V-A)\otimes(V-A)$. In the case of $\mu^-\to \nu_\mu e^-{\bar\nu}_e$, one finds
\begin{equation}
i{\cal M}_{\rm box}^{\rm CC} = -i \frac{{\hat g}^2}{2{\hat M}_W^2}{\hat \delta}_{\rm box} \ {\bar \nu}_\mu \gamma^\lambda P_L \mu \ {\bar {e } }\gamma_\lambda P_L {\nu_{\bar e}} + \cdots\ \ \ \ ,
\ee
where the $+\cdots$ indicate terms whose structure differs from the $(V-A)\otimes(V-A)$ structure of the tree-level CC amplitude\footnote{Here, $\nu_{\bar e}$ is the $v$-spinor for the electron antineutrino.}. In the SM, such terms will be suppressed by factors of $m_\mu^2/M_W^2$. Superpartner loops can lead to relatively unsuppressed non-$(V-A)\otimes(V-A)$ contributions in the presence of mixing between left- and right-handed sfermions (see Section \ref{sec:cc}).
Including the box contribution along with the other renormalized one-loop contributions, taking into account the factors of $1/\hat Z_\psi^{1/2}$ that arise in the standard reduction formulae, and working in the $k^2 \ll M_W^2$ limit, one has
\begin{eqnarray}
i{\cal M}_{\rm tree}^{\rm CC}&+&i{\cal M}_{\rm vertex}^{\rm CC}+i{\cal M}_{\rm propagator}^{\rm CC}+i{\cal M}_{\rm box}^{\rm CC} =
-i\frac{{\hat g}^2}{2{\hat M}_W^2}\Bigl[1+\frac{{\hat\Pi}_{WW}^T(0)}{{\hat M}_W^2} \\
\nonumber
& -& \frac{1}{2}\left\{
{\hat A}_L^\mu(m_\mu^2)+{\hat A}_L^e(m_e^2)+{\hat A}_L^{\nu_e}(0)+{\hat A}_L^{\nu_\mu}(0)\right\}
+{\hat F}_V^e(0)+{\hat F}_V^\mu(0)+{\hat \delta}_{\rm box}\Bigr] \\
\nonumber
&&\times \ {\bar \nu }_\mu \gamma^\lambda P_L \mu \ {\bar {e } }\gamma_\lambda P_L {\nu_{\bar e}} +\cdots \ \ \ ,
\eea
or
\begin{equation}
i{\cal M}_{\rm one-loop}^{\rm CC}=-i\frac{{\hat g}^2}{2{\hat M}_W^2}\left[1+\frac{{\hat\Pi}_{WW}^T(0)}{{\hat M}_W^2}+{\hat\delta}_{VB}\right]{\bar \nu }_\mu \gamma^\lambda P_L \mu \ {\bar {e } }\gamma_\lambda P_L {\nu_{\bar e}} +\cdots \ \ \ ,
\ee
where ${\hat\delta}_{VB}$ denotes the fermion propagator, vertex, and box graph contributions.
The resulting rate for muon decay -- including the bremsstrahlung contribution -- is then given by
\begin{eqnarray}
\label{eq:taumu}
\frac{1}{\tau_\mu} & = & \frac{m_\mu^5}{96\pi^3}\left(\frac{{\hat g}^2}{8{\hat M}_W^2}\right)^2 \left[1+\frac{{\hat\Pi}_{WW}^T(0)}{{\hat M}_W^2}+{\hat\delta}_{VB}\right]^2 \ + \ {\rm brem} \\
\nonumber
& = & \frac{m_\mu^5}{192\pi^3} G_\mu^2\left[1+\delta_{\rm QED}\right] \ \ \ ,
\eea
where $\tau_\mu$ is the muon lifetime and the second equality defines the $\mu$-decay Fermi constant, $G_\mu$, and where
\begin{equation}
\delta_{\rm QED} = \frac{\alpha}{2\pi} \left( \frac{25}{4}-\pi^2 \right)+\cdots
\ee
denotes the contributions from real and virtual photons computed in the Fermi theory of the decay.
Thus, one has
\begin{equation}
\frac{G_\mu}{\sqrt{2} }= \frac{{\hat g}^2}{8{\hat M}_W^2}\left[1+\frac{{\hat\Pi}_{WW}^T(0)}{{\hat M}_W^2}+{\hat\delta}_{VB}^{(\mu)}\right] \equiv \frac{{\hat g}^2}{8{\hat M}_W^2}\left(1+{\Delta \hat r}_\mu\right) \ \ \ ,
\ee
where ${\hat\delta}_{VB}^{(\mu)}$ is given by ${\hat\delta}_{VB}$ but with the Fermi theory QED contributions subtracted out.
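As a rough numerical cross-check of the Fermi-theory QED correction quoted above, one can evaluate $\delta_{\rm QED} = (\alpha/2\pi)(25/4-\pi^2)$ directly. The sketch below assumes the low-energy value $\alpha \approx 1/137.036$, which is not quoted in the text:

```python
import math

# Leading QED correction to the muon decay rate in the Fermi theory,
# delta_QED = (alpha / 2 pi) * (25/4 - pi^2).
# alpha = 1/137.036 is the low-energy fine structure constant (an assumed input).
alpha = 1.0 / 137.036
delta_qed = (alpha / (2.0 * math.pi)) * (25.0 / 4.0 - math.pi ** 2)
print(f"delta_QED = {delta_qed:.5f}")  # approximately -0.0042, i.e. a -0.42% effect
```

The negative sign reflects the fact that $25/4 < \pi^2$, so the leading QED effect slightly reduces the decay rate extracted at tree level.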
Along with the fine structure constant and the Z-boson mass, $M_Z$, the value of $G_\mu$ is one of the three most precisely determined parameters in the gauge sector of the SM. Thus, for purposes of computing other electroweak observables, it is conventional to express ${\hat g}^2$ in terms of $G_\mu$, ${\hat M}_W$, and the correction $\Delta \hat r_\mu$:
\begin{equation}
\label{eq:ghat}
{\hat g}^2 = \frac{8{\hat M}_W^2 G_\mu}{\sqrt{2}}\ \frac{1}{1+{\Delta \hat r}_\mu}\ \ \ .
\ee
As a particular application, we consider the corresponding amplitude for the $\beta$-decay $d\to u e^-{\bar\nu}_e$:
\begin{eqnarray}
\label{eq:betaampl}
i{\cal M}_{\beta-{\rm decay} } &=& i\frac{{\hat g}^2}{2{\hat M}_W^2}\, V_{ud}\, \left(1+{ \Delta \hat r}_\beta\right) \, {\bar u}
\gamma^\lambda P_L \, d\ {\bar e }\gamma_\lambda P_L {\nu_{\bar e}} \\
\nonumber
&=& i \frac{G_\mu}{\sqrt{2}}\, V_{ud}\, \left(1+{ \Delta \hat r}_\beta-{ \Delta \hat r}_\mu\right)
{\bar u} \gamma^\lambda (1-\gamma_5)\, d\ {\bar e} \gamma_\lambda (1-\gamma_5) {\nu_{\bar{e}}}\ \ \
\eea
where we have omitted terms that are suppressed by quark masses and where ${ \Delta \hat r}_\beta$ includes virtual photon contributions, in contrast to ${\Delta \hat r}_\mu$.
\subsection{Neutral Current Processes}
The renormalization of neutral current (NC) amplitudes follows similar lines, though additional features arise due to mixing between the SU(2)$_L$ and U(1)$_Y$ sectors (for early computations in the SM, see, {\em e.g.}, Refs.~\cite{Marciano:1980pb,Marciano:1982mm,Sarantakos:1982bp,Marciano:1983ss}). The general structure of the renormalized amplitude for the neutral current process $\ell+f\to\ell +f$ is
\begin{equation}
i{\cal M}_{\rm one-loop}^{\rm NC} = -i\frac{G_\mu}{2\sqrt{2}} {\hat \rho}_{\rm NC}(k^2) {\bar\ell}\, \gamma^\lambda({\hat g}_V^\ell+{\hat g}_A^\ell \gamma_5)\, \ell \ {\bar f}\, \gamma_\lambda({\hat g}_V^f+{\hat g}_A^f \gamma_5)\, f + {\rm box} \ \ \ ,
\ee
where $\ell$ and $f$ denote the lepton and fermion spinors, respectively, and
``+box" denotes the box diagram contributions. The quantity $\hat\rho_{\rm NC}$ is a normalization factor common to all four-fermion NC processes that can be expressed in terms of gauge boson masses, the ${\hat \Pi}^T_{VV}(k^2)$, and ${\Delta \hat{r}}_\mu$:
\begin{eqnarray}
\label{eq:rhonc1}
{\hat\rho}_{\rm NC}(k^2) & = & \frac{M_Z^2}{k^2-M_Z^2+i M_Z\Gamma_Z}\Bigl\{1+
\frac{{\rm Re}\ {\hat\Pi}_{ZZ}^T(M_Z^2)}{M_Z^2}-\frac{{\hat\Pi}_{WW}^T(0)}{M_W^2}\\
\nonumber
&&-\frac{\left[{\hat\Pi}_{ZZ}^T(k^2)-{\hat\Pi}_{ZZ}^T(M_Z^2)\right]}{k^2-M_Z^2} -{\hat\delta}_{VB}^{(\mu)}\Bigr\}\ \ \ ,
\eea
where
\begin{equation}
\label{eq:mzhat}
M_Z^2={\hat M}_Z^2-{\hat\Pi}_{ZZ}^T(M_Z^2)
\ee
and $M_Z\Gamma_Z={\rm Im}\, {\hat\Pi}_{ZZ}^T(M_Z^2)$. For $k^2$ well below the $Z^0$ pole, the width term may be set to zero. Representative superpartner contributions to ${\hat\Pi}_{ZZ}$ are shown in Fig. \ref{fig:nccorr}.
\begin{figure}[ht]
\begin{center}
\resizebox{6 in}{!}{
\includegraphics*[20,160][620,620]{nccorrections.ps}}
\caption{Representative supersymmetric corrections to neutral current observables: (a) $Z$-boson propagator and $Z$-$\gamma$ mixing contributions; (b) vertex corrections; (c) external leg corrections; and (d) box graph contributions.}
\label{fig:nccorr}
\end{center}
\end{figure}
The renormalized vector and axial vector couplings of the $Z^0$ to fermion $f$ -- ${\hat g}_V^f$ and ${\hat g}_A^f$ -- can be expressed in terms of the weak mixing angle, $\theta_W$, and an associated universal renormalization factor, along with process-specific vector and axial vector radiative corrections:
\begin{eqnarray}
{\hat g}_V^f & = & 2 I_3^f- 4 {\hat\kappa}(k^2,\mu) \sin^2{\hat\theta}_W(\mu)Q_f + {\hat\lambda}_V^f\\
{\hat g}_A^f & = & -2 I_3^f +{\hat\lambda}_A^f \ \ \ ,
\eea
where $I_3^f$ and $Q_f$ are the fermion isospin and charge, respectively; $\sin^2{\hat\theta}_W(\mu)\equiv{\hat s}^2(\mu)$ defines the weak mixing angle in the ${\overline {\rm MS}}$ scheme:
\begin{equation}
\sin^2{\hat\theta}_W(\mu) = \frac{{\hat g}^\prime(\mu)^2}{{\hat g}(\mu)^2 +{\hat g}^\prime(\mu)^2} \ \ \ ;
\ee
and ${\hat\lambda}_{V,A}^f$ are process-dependent corrections that vanish at tree-level.
Here $\hat{g}$ and $\hat{g}^\prime$ are the ${\rm SU}(2)_L$ and ${\rm U}(1)_Y$ couplings, respectively.
As with the running QED and QCD couplings, ${\hat\alpha}(\mu)$ and ${\hat\alpha}_s(\mu)$, respectively, the running of the weak mixing angle is a prediction of the SM and provides a useful benchmark for precision studies in the NC sector~\cite{Czarnecki:1995fw}. A renormalization group-improved SM prediction for ${\hat s}^2(\mu)$ in the ${\overline {\rm MS}}$ scheme has recently been carried out in Ref.~\cite{Erler:2004in}, where logarithmic contributions of the form $\alpha^n \ln^n(\mu/\mu_0)$ (with $\mu_0$ being a reference scale) have been summed to all orders. Additional subleading contributions of the form $\alpha^{n+1}\ln^n(\mu/\mu_0)$ and $\alpha\alpha_s^{n+k}\ln^n(\mu/\mu_0)$ with $k=0,1,2$ were also included in that analysis, and a refined estimate of the hadronic physics uncertainty associated with light-quark loops at low scales was performed (see below). The results are shown in Fig.~\ref{fig:sin2theta}, where the scale $\mu$ has been chosen to be $Q=\sqrt{|k^2|}$ for a process occurring at squared momentum transfer $k^2$. The reference scale has been chosen to be $\mu_0=M_Z$ and the running of ${\hat s}^2(Q)$ normalized to reproduce its value at the $Z^0$-pole: $\sin^2{\hat\theta}_W(M_Z)=0.23122(15)$. The discontinuities in the curve of Fig.~\ref{fig:sin2theta} correspond to particle thresholds, below which a particle of the corresponding mass decouples from the running. The change in sign of the slope at $Q=M_W$ arises from the difference in sign of the gauge boson and fermion contributions to the $\beta$ function for ${\hat s}^2(\mu)$. Note that threshold matching conditions in the ${\overline {\rm DR}}$-scheme will differ from those in the ${\overline {\rm MS}}$ framework due to differences in the continuation of the Clifford algebra into $d=4-2\varepsilon$ dimensions\cite{Antoniadis:1982vr, Langacker:1992rq}.
For purposes of this review, it is particularly interesting to quote the value of the running weak mixing angle at $Q=0$\cite{Erler:2004in}:
\begin{equation}
\sin^2{\hat\theta}_W(0)= 0.23867\pm 0.00016 \ \ \ ,
\ee
where the error is dominated by the experimental error in $\sin^2{\hat\theta}_W(M_Z)$ and where the value of ${\hat s}^2(\mu)$ at the two scales differs by roughly three percent. In Section \ref{sec:nc} we describe a variety of low-energy NC experiments designed to test this running.
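The quoted difference between the two scales can be checked directly from the two values given above; a minimal sketch:

```python
# Relative running of the MS-bar weak mixing angle between mu = M_Z and mu = 0,
# using the two central values quoted in the text.
s2_mz = 0.23122   # sin^2 theta_W-hat at mu = M_Z
s2_0  = 0.23867   # sin^2 theta_W-hat at mu = 0
shift = (s2_0 - s2_mz) / s2_mz
print(f"relative shift = {shift:.3f}")  # ~ 0.032, i.e. roughly three percent
```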
\begin{figure}
\begin{center}
\includegraphics[width=4in, angle=-90]{sin2theta.ps}
\caption{Calculated running of the weak mixing angle in the SM, defined in the
$\overline{\rm MS}$ renormalization scheme.
Also shown are the experimental results from APV,
neutrino DIS ($\nu$-DIS),
parity violating asymmetry measurement at E158 ($A_{PV}$),
the expected precision of Qweak
and the lepton forward-backward asymmetry measurement
at CDF ($A_{FB}$). This plot
is taken from Ref.~\cite{sin2theta}.}
\label{fig:sin2theta}
\end{center}
\end{figure}
By itself, ${\hat s}^2(\mu)$ is not an observable since it depends on the renormalization scale. One may, however, define an effective weak mixing angle that is $\mu$-independent and may in principle be isolated experimentally by comparing experiments with different species of fermions:
\begin{equation}
\label{eq:sweff}
\sin^2{\hat\theta}_W(k^2)^{\rm eff} \equiv {\hat\kappa}(k^2,\mu) \sin^2{\hat\theta}_W(\mu)
\ee
where the quantity ${\hat\kappa}(k^2,\mu)$ describes a class of electroweak radiative corrections that is independent of the species of fermion involved in the NC interaction. Contributions to ${\hat\kappa}(k^2,\mu)$ arise primarily from the $Z$-$\gamma$ mixing tensor:
\begin{equation}
{\hat\Pi}^{\mu\nu}_{Z\gamma}(k^2) = {\hat\Pi}^T_{Z\gamma}(k^2) T^{\mu\nu} +{\hat\Pi}^{L}_{Z\gamma}(k^2) L^{\mu\nu} \ \ \ .
\ee
Note that in general, the functions ${\hat\Pi}_{Z\gamma}^T(k^2)$ depend on the choice of electroweak gauge parameter, so to arrive at a gauge-independent ${\hat\kappa}(k^2,\mu)$, a prescription for removing the gauge-dependent components of ${\hat\Pi}_{Z\gamma}^T(k^2)$ must be employed\cite{Ferroglia:2003wa}.
For processes involving $|k^2| \ll M_Z^2$, contributions from light fermions to ${\hat\kappa}(k^2,\mu)$ can lead to the presence of large logarithms when one chooses $\mu=M_Z$. The presence of these logarithms can spoil the expected behavior of the perturbation series unless they are summed to all orders. To illustrate, consider the amplitude for low-energy, parity-violating M\o ller scattering:
\begin{equation}
\label{eq:moller1}
{\cal M}_{PV}^{ee} = \frac{G_\mu}{2\sqrt{2}} {\hat \rho}_{\rm NC}(0) {\hat g}_V^e{\hat g_A}^e\ {\bar e}\gamma_\mu e\ {\bar e} \gamma^\mu\gamma_5 e \ \ \
\ee
with
\begin{equation}
\label{eq:moller2}
Q_W^e \equiv {\hat \rho}_{\rm NC}(0)\, {\hat g}_V^e {\hat g}_A^e = {\hat \rho}_{\rm NC}(0)\left[-1+4{\hat\kappa}(0,\mu){\hat s}^2(\mu)+{\hat\lambda}_V^e+ {\hat\lambda}_A^e(-1+4{\hat s}^2)\right]+\cdots
\ee
being the \lq\lq weak charge" of the electron and with the $+\cdots$ indicating box diagram contributions and terms of order $(\alpha/4\pi)^2$. At tree-level (${\hat\kappa}\to 1$, ${\hat\lambda}_{V,A}^e\to 0$), the weak charge is suppressed, since ${\hat s}^2$ is numerically close to $1/4$: $Q_W^{e,\ \rm tree}\sim -0.1$. Inclusion of one-loop SM radiative corrections reduces the magnitude of $Q_W^e$ by nearly 40\%, owing largely to the near cancellation between the first two terms in Eq. (\ref{eq:moller2}) and the presence of large logarithms in ${\hat\kappa}(0,\mu)$ when $\mu$ is chosen to be $M_Z$, as is conventional\cite{Czarnecki:1995fw}. Given these two considerations, one would expect the relative size of two-loop corrections to $Q_W^e$ to be considerably larger than the nominal $\alpha/4\pi$ scale.
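The suppression and its radiative reduction can be illustrated numerically. The sketch below uses the value of ${\hat s}^2(M_Z)$ quoted earlier in this section; the 40\% reduction is applied schematically and is in no way a substitute for the full one-loop calculation:

```python
# Tree-level electron weak charge, Q_W^e = -1 + 4 * s2, with
# s2 = sin^2 theta_W-hat(M_Z) = 0.23122 as quoted in the text.
s2 = 0.23122
qw_tree = -1.0 + 4.0 * s2
print(f"Q_W^e (tree) = {qw_tree:.4f}")          # ~ -0.075: suppressed since s2 ~ 1/4
# Applying the ~40% one-loop reduction quoted in the text (schematic only):
print(f"after reduction: {0.6 * qw_tree:.4f}")  # magnitude shrinks toward ~0.045
```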
In order to improve the convergence of the SM prediction for $Q_W^e$, one would like to sum the large logarithms to all orders. The use of the running $ \sin^2{\hat\theta}_W(\mu)$ provides a means for doing so. By choosing $\mu\sim Q$ in both ${\hat\kappa}(k^2,\mu)$ and $ \sin^2{\hat\theta}_W(\mu)$, using the requirement that their product is $\mu$-independent as per Eq.~(\ref{eq:sweff}), and solving the RG equations for $ \sin^2{\hat\theta}_W(\mu)$ as in Ref.~\cite{Erler:2004in}, one effectively moves all the large logarithms from ${\hat\kappa}(k^2,\mu)$ into $\sin^2{\hat\theta}_W(\mu)$ and sums them to all orders. The result is a SM prediction for $\sin^2{\hat\theta}_W(k^2)^{\rm eff}$ with substantially smaller truncation error than would be obtained by the naive application of perturbation theory to one-loop order.
In the case of superpartner loop contributions to low-energy observables, it is sufficient to include their effects solely in the form factor ${\hat\kappa}(k^2,\mu)$ while choosing $\mu=M_Z$ (illustrative contributions to ${\hat\Pi}_{Z\gamma}$ are shown in Fig. \ref{fig:nccorr}). In addition, one should include their effects in the value of $\sin^2{\hat\theta}_W(M_Z)$. One may adopt two different strategies for doing so:
\begin{itemize}
\item[(1)] Include their effects implicitly in the value of $\sin^2{\hat\theta}_W(M_Z)$ that is obtained from fits to precision $Z$-pole observables. To be consistent, such fits must include the effects of superpartner contributions to ${\cal O}(\alpha)$ electroweak radiative corrections, and to our knowledge, such an extraction has not been carried out using LEP and SLD data in a way that does not rely on a model for SUSY-breaking mediation (see, {\em e.g.}, Refs.~\cite{Erler:1998ur,Cho:1999km} and references therein).
\item[(2)] Use the requirements of electroweak symmetry to compute superpartner contributions to $\sin^2{\hat\theta}_W(M_Z)$ explicitly. Specifically, using
\begin{eqnarray}
{\hat e}^2(\mu) & =& {\hat g}^2(\mu) {\hat s}^2(\mu) \\
{\hat M}_W^2 & = & {\hat M}_Z^2 {\hat c}^2 \ \ \ ,
\eea
writing
\begin{equation}
{\hat \alpha}(\mu) = \alpha + \Delta{\hat \alpha}(\mu)
\ee
where $\alpha$ is the fine structure constant and ${\hat c}^2 \equiv 1-{\hat s}^2$, employing Eqs.~(\ref{eq:ghat},\ref{eq:mzhat}), and choosing
$\mu=M_Z$ we obtain
\begin{equation}
\label{eq:Gfswmz}
{\hat s}^2 (M_Z) {\hat c}^2 (M_Z) = \frac{\pi\alpha}{\sqrt{2} M_Z^2 G_\mu\left[1-\Delta{\hat r}(M_Z)\right]}
\ee
where
\begin{equation}
\label{eq:deltarhat}
\Delta{\hat r}(\mu) = \Delta{\hat r}_\mu+\frac{\Delta{\hat\alpha}}{\alpha} -\frac{{\hat\Pi}_{ZZ}^T(M_Z^2, \mu)}{M_Z^2}\ \ \ .
\ee
Thus, by computing the superpartner loop corrections to the various terms in Eq.~(\ref{eq:deltarhat}) and employing Eq.~(\ref{eq:Gfswmz}), one may determine the predicted shift in ${\hat s}^2(M_Z)$ explicitly for a given set of SUSY parameters. In the remainder of this article, we follow this second strategy.
\end{itemize}
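Equation (\ref{eq:Gfswmz}) can be checked numerically by solving the quadratic ${\hat s}^2(1-{\hat s}^2) = \pi\alpha/[\sqrt{2}\,M_Z^2 G_\mu(1-\Delta{\hat r})]$ for ${\hat s}^2(M_Z)$. In the sketch below, the inputs $\alpha$, $M_Z$, $G_\mu$, and in particular the assumed SM value $\Delta{\hat r}(M_Z)\approx 0.060$, are illustrative assumptions rather than values quoted in the text:

```python
import math

# Solve s2*(1 - s2) = pi*alpha / (sqrt(2) * M_Z^2 * G_mu * (1 - dr)) for
# s2 = sin^2 theta_W-hat(M_Z), taking the root below 1/2.
alpha = 1.0 / 137.036          # fine structure constant (assumed input)
mz    = 91.1876                # Z-boson mass in GeV (assumed input)
gmu   = 1.16637e-5             # muon-decay Fermi constant in GeV^-2 (assumed input)
dr    = 0.0599                 # assumed SM value of Delta r-hat(M_Z)

rhs = math.pi * alpha / (math.sqrt(2.0) * mz**2 * gmu * (1.0 - dr))
s2 = 0.5 * (1.0 - math.sqrt(1.0 - 4.0 * rhs))
print(f"sin^2 theta_W-hat(M_Z) = {s2:.5f}")  # close to the quoted 0.23122
```

The recovered value lands close to the $Z^0$-pole value quoted earlier, which is the consistency one expects from Eq.~(\ref{eq:Gfswmz}).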
The corrections contained in the ${\hat\lambda}_{V,A}^f$ are fermion-specific, containing the $Zff$ vertex and external leg contributions. Representative superpartner contributions are shown in Fig. \ref{fig:nccorr}. In the case of low-energy interactions involving one or more charged fermions, an additional contribution to the ${\hat\lambda}_V^f$ is generated by $\gamma$ exchange and involves the so-called anapole coupling of the fermion\cite{zeldovich,Musolf:1990sa}:
\begin{equation}
\label{eq:anapole1}
{\cal L}_{\rm anapole} = \frac{eF_A}{M^2} {\bar\psi} \gamma_\mu\gamma_5 \psi \ \partial_\nu F^{\mu\nu} \ \ \ ,
\ee
where $F_A$ is the dimensionless anapole moment and $M$ is an appropriate mass scale. The interaction ${\cal L}_{\rm anapole}$ generates a contribution to the fermion matrix element of the electromagnetic current:
\begin{eqnarray}
\label{eq:anapole2}
\bra{p'} J_\mu^{EM}(0)\ket{p} & = & {\bar U}(p') \Bigl[ F_1 \gamma_\mu + \frac{iF_2}{2 M} \sigma_{\mu\nu} k^\nu \\
\nonumber
&& + \frac{F_A}{M^2} \left(k^2\gamma_\mu -\dslash{k}k_\mu\right)\gamma_5 +\frac{iF_E}{2 M}\sigma_{\mu\nu} k^\nu\gamma_5\Bigr] U(p) \ \ \ ,
\eea
where $k=p'-p$ and where $F_1$, $F_2$, $F_A$, and $F_E$ give the Dirac, Pauli, anapole, and electric dipole form factors, respectively\footnote{Note that the overall sign of the anapole term in Eq.~(\ref{eq:anapole2}) differs from the convention used in Ref.~\cite{Kurylov:2003zh}.}. Both the anapole and electric dipole couplings to the photon are parity-odd, while the electric dipole coupling is also odd under time-reversal. Since only weak interactions can give rise to a parity-odd photon-fermion coupling, we choose $M=M_Z$ in Eq.~(\ref{eq:anapole1}).
From Eq. (\ref{eq:anapole1}) one sees that the anapole coupling gives rise to a contact interaction in coordinate space, since $\partial_\nu F^{\mu\nu} = j^\mu$ with $j^\mu$ being the current of the other fermion involved in the low-energy interaction. This feature is illustrated in momentum space in Fig.~\ref{fig:anapole}, where the contribution of the anapole coupling of fermion $f^\prime$ to the scattering from fermion $f$ is shown. The $\dslash{k}k_\mu$ term in Eq.~(\ref{eq:anapole2}) gives a vanishing contribution due to current conservation, while the $k^2$ term cancels the $1/k^2$ from the photon propagator to yield a four-fermion contact interaction proportional to $F_A^{f^\prime} (k^2)Q_{f}$. Since this interaction involves a coupling to the vector current of fermion $f$, it corresponds to a contribution to $g_V^f$.
Note that at low energies for which $|k^2| \ll M_Z^2$, the prefactor in ${\hat\rho}_{\rm NC}(k^2)$ becomes a $k^2$-independent constant, signaling that the $Z$-exchange contribution is also a contact interaction. In this regime, one has no experimental, kinematic handle with which to separate the anapole and $Z$-exchange contributions\footnote{In contrast, at $k^2\sim M_Z^2$, the anapole contribution becomes negligible compared to the resonant $Z$-exchange amplitude.}. Indeed, the coupling $F_A$ itself depends on the choice of electroweak gauge, and only the complete one-loop scattering amplitude that includes all ${\cal O}(\alpha)$ electroweak radiative corrections (including $F_A$) is gauge-independent (see Ref.~\cite{Musolf:1990sa} and references therein). Nonetheless, when classifying various topologies of the one-loop corrections, it is useful to separate out the anapole contributions, and in doing so, we note that superpartner loop contributions to $F_A$ are gauge-independent. In what follows, we focus on the low-energy $ff^\prime$ interaction, in which the $F_A^{f^\prime}$-contribution to the product of the vector coupling ${\hat g}_V^f$ and the axial vector coupling ${\hat g}_A^{f^\prime}$ is given by
\begin{equation}
\left( {\hat g}_A^{f^\prime} {\hat g}_V^f \right)_{\rm anapole} = -16{\hat c}^2{\hat s}^2 Q_f F_A^{f^\prime}\ \ \ .
\ee
\begin{figure}[ht]
\begin{center}
\resizebox{5 in}{!}{
\includegraphics*[60,520][470,640]{anapole.ps}}
\caption{Anapole contributions to the NC interaction between two fermions.}
\label{fig:anapole}
\end{center}
\end{figure}
In studies of precision $Z$-pole observables, it has been useful to characterize possible corrections to the gauge boson propagators from new heavy particles
in terms of the so-called oblique parameters, $S$, $T$, $U$ \cite{Peskin:1990zt,Golden:1990ig,Marciano:1990dp, Kennedy:1990ib,Kennedy:1991sn,Altarelli:1990zd,Holdom:1990tc,Hagiwara:1994pw}:
\begin{eqnarray}
\label{eq:stu-sirlin}
S&=&\frac{4{\hat s}^2{\hat c}^2}{{\hat \alpha}M_Z^2}{\rm Re}\Biggl\{
{\hat \Pi}_{ZZ}(0)-{\hat \Pi}_{ZZ}(M_Z^2)+\frac{{\hat c}^2-{\hat
s}^2}{{\hat c}{\hat s}} \left[{\hat \Pi}_{Z\gamma}(M_Z^2)-{\hat
\Pi}_{Z\gamma}(0)\right] +{\hat \Pi}_{\gamma\gamma}(M_Z^2)
\Biggr\}^{\rm New} ~,\nonumber \\
T&=&\frac{1}{{\hat \alpha}M_W^2}
\Biggl\{
{\hat c}^2\left( {\hat \Pi}_{ZZ}(0)+\frac{2{\hat s}}{\hat c}
{\hat \Pi}_{Z\gamma}(0) \right) -{\hat \Pi}_{WW}(0) \Biggr\}^{\rm
New} ~,\nonumber \\
U&=&\frac{4{\hat s}^2}{\hat \alpha} \Biggl\{
\frac{{\hat \Pi}_{WW}(0)-{\hat \Pi}_{WW}(M_W^2)}{M_W^2} +{\hat
c}^2\frac{{\hat \Pi}_{ZZ}(M_Z^2)-{\hat \Pi}_{ZZ}(0)}{M_Z^2} \nonumber
\\
&+&2{\hat c}{\hat s}
\frac{ {\hat \Pi}_{Z\gamma}(M_Z^2)-{\hat
\Pi}_{Z\gamma}(0)}{M_Z^2} +{\hat s}^2 \frac{{\hat
\Pi}_{\gamma\gamma}(M_Z^2)}{M_Z^2} \Biggr\}^{\rm New}
~,\end{eqnarray}
where the superscript \lq\lq New" indicates that only the new physics
contributions to the self-energies are included. Contributions to
gauge-boson self energies can be expressed entirely in terms of the
oblique parameters $S$, $T$, and $U$ in the limit that $M_{\rm NEW}\gg
{M_{Z}}$.
However, since present collider limits allow for fairly light
superpartners, we do not work in this limit\footnote{It is possible to extend the oblique parameterization in this case with three additional parameters\cite{Maksymyk:1993zm}. For the low-energy observables of interest here, this extended oblique approximation is not especially useful.}. Consequently, the
corrections arising from the photon self-energy ($\Pi_{\gamma\gamma}$)
and $\gamma$-$Z$ mixing tensor ($\Pi_{Z\gamma}$) contain a residual
$k^2$-dependence not embodied by the oblique parameters. Expressing the contributions to
${\hat\rho}$ and $\sin^2{\hat\theta}_W(k^2)^{\rm eff} = {\hat\kappa}(k^2,\mu) \sin^2{\hat\theta}_W(\mu)$ in terms of $S$,$T$, and $U$ we obtain:
\begin{eqnarray}
\delta{\hat\rho}^{\rm SUSY} & = & {\hat\alpha} T-{\hat\delta}_{VB}^{(\mu)}
~,\nonumber \\
\nonumber \\
\left(\frac{\delta\sin^2{\hat\theta}_W^{\rm eff}}{\sin^2{\hat\theta}_W^{\rm eff}}\right)^{\rm SUSY} & = & \left(
\frac{{\hat c}^2}{{\hat c}^2-{\hat s}^2} \right)
\left(\frac{{\hat\alpha}}{4{\hat s}^2
{\hat c}^2} S-{\hat \alpha} T +{\hat\delta}_{VB}^{(\mu)} \right) +
\frac{{\hat c}}{{\hat s}}\Bigl[ \frac{{\hat\Pi}_{Z\gamma}(k^2)}{k^2}-
\frac{{\hat\Pi}_{Z\gamma}(M_Z^2)}{M_Z^2}\Bigr] \nonumber \\
&&+\Bigl(\frac{{\hat c}^2}{{\hat c}^2-{\hat s}^2}
\Bigr)\Bigl[-\frac{{\hat\Pi}_{\gamma\gamma}(M_Z^2)}{M_Z^2}
+\frac{\Delta{\hat\alpha}}{\alpha} \Bigr] ~,
\label{eq:rho-kappa-stu}
\end{eqnarray}
where
$k^2$ is the typical momentum
transfer for a given process. For low energy interactions,
$k^2\rightarrow 0$. Note that we have included in $\delta\sin^2{\hat\theta}_W^{\rm eff}$ both the contribution from ${\hat\Pi}_{Z\gamma}(k^2)/k^2$ that enters ${\hat\kappa}(k^2,\mu)$ as well as the shift in ${\hat s}^2(M_Z)$ obtained from Eq.~(\ref{eq:Gfswmz}) as discussed above.
In analyzing SUSY radiative corrections to low-energy observables, Eqs.~(\ref{eq:rho-kappa-stu}) provide a useful means of incorporating constraints on new physics from precision $Z^0$-pole observables. For example, $\delta{\hat\rho}^{\rm SUSY}$ is highly constrained by bounds on $T$ obtained from such observables. In contrast, $[\delta \sin^2{\hat\theta}_W(k^2)^{\rm eff}]^{\rm SUSY}$ is less stringently constrained. As we discuss in Section \ref{sec:nc} below, the unconstrained contributions to the effective weak mixing angle can lead to relatively large effects in some low-energy NC processes.
\subsection{Theoretical Uncertainties in Electroweak Radiative Corrections}
An important consideration in exploiting low-energy, precision electroweak observables as a probe of SUSY is to ensure that the theoretical uncertainties associated with SM contributions are well-below the level of possible SUSY effects. The SM uncertainties generally involve one of two considerations: (i) neglect of higher order electroweak contributions, and (ii) contributions from strong interactions. While an extensive discussion of these considerations goes beyond the scope of the present article, we give here a brief overview of the strategies employed to address them.
Nominally, one expects the one-loop contributions to quantities such as ${\Delta \hat{r}}_\mu$, $\hat\kappa$, {\em etc.} to be of order $\alpha/\pi\sim 10^{-3}$, so that neglect of two- and higher-loop effects is well justified for the present level of experimental sensitivity. Moreover, since SUSY loop contributions must generally decouple in the ${\tilde m}\to\infty$ limit, one expects the relative magnitude of their contributions to be
\begin{equation}
\delta_{\rm SUSY\ loop} = \frac{\delta{\cal O}^{\rm SUSY\ loop}}{{\cal O}^{\rm SM}} \sim \frac{\alpha}{\pi}\left(\frac{M}{\tilde m}\right)^2 \ \ \ ,
\ee
where $M$ is the relevant SM mass and $\tilde m$ is a generic superpartner mass. For weak processes, such as $\mu$- and $\beta$-decay, one has $M\to M_W$, and to the extent that ${\tilde m}$ is not too different from the weak scale, one would expect $\delta_{\rm SUSY\ loop}$ to be comparable in magnitude to, or slightly smaller than, the scale of one-loop, SM electroweak corrections. Thus, one would expect neglect of two-loop SM contributions to be a justifiable approximation. As discussed above, however, exceptions may occur when the one-loop SM contributions contain large logarithms, when the tree-level SM amplitudes are suppressed, or both. In such situations, summing terms of the form $\alpha^n \ln^n(\mu/\mu_0)$ is essential, and the RG equations can be employed for this purpose.
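To get a feel for the magnitudes involved, the decoupling scaling above can be evaluated for a few superpartner masses. In the sketch below, the coupling value and mass choices are assumptions made purely for illustration:

```python
import math

# Rough size of a superpartner loop correction relative to the SM amplitude,
# delta ~ (alpha/pi) * (M / m_susy)^2, with M = M_W. The coupling value and
# superpartner masses below are illustrative assumptions.
alpha = 1.0 / 128.0   # coupling near the weak scale (assumption)
mw = 80.4             # W-boson mass in GeV (assumption)
deltas = {}
for m_susy in (100.0, 200.0, 500.0):
    deltas[m_susy] = (alpha / math.pi) * (mw / m_susy) ** 2
    print(f"m_susy = {m_susy:5.0f} GeV : delta ~ {deltas[m_susy]:.1e}")
```

For superpartners near the weak scale the correction is indeed comparable to the $\sim 10^{-3}$ scale of one-loop SM electroweak corrections, and it falls off quadratically as the superpartner masses grow.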
Reduction of theoretical uncertainties associated with QCD corrections is generally more challenging. Short-distance QCD contributions can be treated using the operator product expansion (OPE), and the resulting corrections can be computed to a given order in $\alpha_s$ with sufficient effort. In the case of PV electron-proton scattering, for example, the one-loop $WW$ box contribution is anomalously -- but not logarithmically -- enhanced, and its effect on the proton weak charge, $Q_W^p$, nearly cancels that of the large logarithms appearing in ${\hat\kappa}$. Since the semileptonic, $WW$ box graphs involve hadronic intermediate states, one might expect relatively important QCD corrections to the one-loop amplitude. In this case, the loop integral is dominated by high momentum scales ($k^2\sim M_W^2$), so the corrections can be computed using the OPE, leading to\cite{Erler:2003yk}
\begin{equation}
\delta Q_W^p(WW\ {\rm box}) = \frac{\hat\alpha}{4\pi{\hat s}^2}\left[-2+4\left(1-\frac{\alpha_s(M_W)}{\pi}\right)\right]\ \ \
\ee
for a total QCD correction of $\approx -0.7\%$.
A more problematic situation arises for one-loop corrections that sample momenta of order the hadronic scale. To illustrate, we again consider PV electron scattering. For both PV M\o ller and elastic $ep$ scattering, light quark loop contributions to ${\hat\Pi}_{Z\gamma}^T$ lead to hadronic uncertainties in ${\hat\kappa}(0,\mu)$. Traditionally, light quark contributions have been computed by relating ${\hat\Pi}_{Z\gamma}^T$ to the $\sigma(e^+ e^-\to{\rm hadrons})$ via dispersion relations\cite{wjmkappa}, much as one does in computing hadronic vacuum polarization contributions to the muon anomalous magnetic moment. In the case of ${\hat\Pi}_{Z\gamma}^T$, however, additional assumptions regarding flavor symmetry in the current-current correlator are needed in order to make use of $e^+ e^-$ data. Recently, these assumptions have been examined and more stringent bounds on the hadronic uncertainty in ${\hat\kappa}(0,\mu)$ obtained\cite{Erler:2004in}.
For semileptonic processes, additional hadronic uncertainties appear in box graphs that contain one $\gamma$ and one weak gauge boson. In contrast to the situation for the $WW$-box graphs, the $\gamma Z$ loop integral samples all momenta from the hadronic scale to the weak scale. Neglecting the short-distance, perturbative QCD corrections, one finds
\begin{equation}
\label{eq:zgbox}
\delta Q_W^p(\gamma Z\ {\rm box}) = \frac{5\hat\alpha}{2\pi}\left(1-4{\hat s}^2\right)\left[\ln\left(\frac{M_Z^2}{\Lambda^2}\right)+C_{\gamma Z}(\Lambda)\right]\ \ \ ,
\ee
where $\Lambda$ is a scale characterizing the transition between the perturbative and non-perturbative domains and $C_{\gamma Z}(\Lambda)$ parameterizes contributions to the loop integral from momenta
$\sqrt{|k^2|} \buildrel < \over {_\sim} \Lambda$. The coefficient of the logarithm in Eq. (\ref{eq:zgbox}) is determined by short distance dynamics and can be calculated reliably in perturbation theory. However, the values of both $\Lambda$ and $C_{\gamma Z}(\Lambda)$ are sensitive to long-distance scales and have not, as yet, been computed from first principles in QCD. A similar contribution arises in neutron, nuclear, and pion $\beta$-decay. An estimate of the theoretical uncertainty associated with these contributions has been made by varying $\Lambda$ over the range $ 400 \leq \Lambda \leq 1600$ MeV. Recently, Marciano and Sirlin observed that for the $\gamma W$ box, both the pQCD corrections to the logarithmic term as well as the value of $\Lambda$ could be obtained by comparison with the theoretical expression for the Bjorken Sum Rule using isospin symmetry\cite{Marciano:2005ec}. As a result, these authors have reduced the previously-quoted theoretical error by a factor of two. The analogous treatment of the $\gamma Z$ box is more complex, since one cannot obtain the isoscalar contribution from isospin arguments. In both cases, the more refined estimates of the uncertainty associated with the low-energy constants $C_{\gamma Z}$ and $C_{\gamma W}$ remain to be performed. Fortunately, the impact of the uncertainty in $\delta Q_W^p(\gamma Z\ {\rm box})$ due to $C_{\gamma Z}$ is suppressed by the overall factor of $1-4{\hat s}^2\sim 0.1$.
When discussing the implications of various low-energy observables for SUSY, we will also summarize the current situation regarding hadronic uncertainties in the SM predictions.
\section{Charged Current Processes}
\label{sec:cc}
Historically, the study of low energy charged current (CC) processes has played an important role in developing the SM, in determining its parameters, and in testing its self-consistency at the level of one-loop radiative corrections. Indeed, the observation of a parity-violating asymmetry in the $\beta$-decay of polarized $^{60}$Co\cite{Wu:1957my} and $\mu^+$-decay\cite{Garwin:1957hc} confirmed Lee and Yang's hypothesis of parity violation in the weak interaction\cite{Lee:1956qn} and pointed the way toward the $V-A$ structure of the weak charged currents. Measurements of the muon lifetime yield the parameter $G_\mu$ that is one of the three independent, experimental inputs needed for the gauge sector of the theory. Studies of nuclear $\beta$-decay give the most precisely-known element of the CKM matrix -- $V_{ud}$, while measurements of branching ratios for kaon leptonic decays yield a precise value for $V_{us}$ that allows for stringent tests of the unitarity property of the CKM matrix\footnote{The value of $V_{ub}$ is also required, but its magnitude is too small to be relevant.}(for recent discussions, see Refs.~\cite{Hardy:2004id,Severijns:2006dr,Blucher:2005dc}). Comparisons of the widths $\Gamma[\pi\to \mu \nu_\mu (\gamma)]$ and $\Gamma[\pi\to e\nu_e (\gamma)]$ have provided tests of the universality of CC leptonic interactions at the few parts per thousand level\cite{Britton:1992pg,Czapek:kc}. The theoretical interpretation of these precise measurements in terms of the SM has required comprehensive calculations of one-loop radiative corrections to the tree-level amplitudes. The vast majority of this work has been carried out by Sirlin and Marciano, dating back to the classic treatment within the current algebra framework by Sirlin\cite{Sirlin:1977sv}. The implications for various SM extensions have been analyzed extensively by Herczeg and others\cite{Herczeg:2001vk,Deutsch}.
Interest in precise studies of low energy processes remains high, as reviewed recently in Ref.~\cite{Erler:2004cx}. While an extensive survey of the field can be found in that work, we highlight recent developments that motivate the study of CC processes from the standpoint of SUSY. Experimentally one has seen:
\begin{itemize}
\item[i)] New measurements of the Michel parameters that characterize the spectral shape, angular distribution, and polarization properties in polarized $\mu$-decay\cite{Gaponenko:2004mi,Jamieson:2006cf,Danneberg:2005xv} (for a recent global analysis, see Ref.~\cite{Gagliardi:2005fg})
\item[ii)] New efforts to measure $\tau_\mu$ with an order-of-magnitude improvement in precision\cite{fast,mulan}
\item[iii)] Recent Penning trap measurements of \lq\lq superallowed" nuclear $\beta$-decay Q-values \cite{Hardy:2005qv,Eronen:2006if} with significant implications for tests of first-row CKM unitarity
\item[iv)] New measurements of the neutron lifetime, $\tau_n$\cite{Serebrov:2004zf}, and decay correlation coefficients that determine $V_{ud}$ in a manner free from possible nuclear structure ambiguities\cite{Abele02,ucnA,Wietfeldt:2005wz,bowman06}
\item[v)] Extensive new measurements of kaon leptonic decay branching ratios that could imply significant changes in the decades-old value of $V_{us}$ (recently reviewed in Ref.~\cite{Blucher:2005dc}; also, see below)
\item[vi)] Improved precision in the pion $\beta$-decay branching ratio\cite{Pocanic:2002av}
\item[vii)] New efforts to measure the ratio $R_{e/\mu} = \Gamma[\pi\to e\nu_e (\gamma)]/\Gamma[\pi\to \mu \nu_\mu (\gamma)]$ \cite{TRIUMFnew,PSInew}
\end{itemize}
The theoretical interpretation of the semileptonic decays has also been sharpened through
\begin{itemize}
\item[i)] A new analysis of strong interaction uncertainties in $\Delta \hat r^V_\beta$ associated with $W\gamma$ box graphs that reduces the theoretical uncertainty in the extraction of $V_{ud}$ from $\beta$-decay rates by a factor of two\cite{Marciano:2005ec}
\item[ii)] Computations of the ${\cal O}(p^6)$ loop corrections to the kaon decay form factor $f_{+}^K(t)$ whose value at $t=0$ is needed in order to extract $V_{us}$ from kaon decay branching ratios\cite{Post:2001si,Bijnens:2003uy}
\item[iii)] New analyses of the ${\cal O}(p^6)$ counterterm contributions to $f_{+}^K(0)$ using large $N_C$ QCD\cite{Cirigliano:2001mk} and lattice QCD computations (see below)
\end{itemize}
Given the level of activity in this area, consideration of the implications for SUSY is timely. In reviewing these implications, we also provide additional background on the theoretical and experimental issues for the benefit of readers who may not be familiar with the field. We begin with the purely leptonic CC interaction that gives rise to muon decay and follow with an extensive discussion of low-energy semileptonic CC processes. We divide the latter discussion into several parts: (1) general considerations for semileptonic CC processes; (2) pion leptonic decays and the related implications for SUSY; (3) neutron and nuclear $\beta$-decay; (4) pion $\beta$-decay; (5) kaon $\beta$-decay; and (6) implications of first row CKM unitarity tests for SUSY.
\subsection{Muon Decay}
As discussed in Section \ref{sec:renorm}, the measurements of the muon lifetime generally do not, by themselves, provide information on non-SM physics. Rather, the value of $G_\mu$ as extracted from $\tau_\mu$ using Eq. (\ref{eq:taumu}) provides a key input into SM predictions for other observables, and the presence or absence of deviations from these predictions provides information about various SM extensions. For example, one may make use of Eq.~(\ref{eq:Gfswmz}) for this purpose, treating $\alpha$, $G_\mu$, $M_Z$, and $\sin^2{\hat\theta}_W(M_Z)$ as independent, experimentally determined quantities and computing the correction $ \Delta{\hat r}(M_Z)$ in the SM\cite{Marciano:1999ih}. The degree of self-consistency among these quantities in Eq.~(\ref{eq:Gfswmz}) will lead to constraints on any deviation of $ \Delta{\hat r}(M_Z)$ from its SM value associated with possible new physics, such as SUSY radiative corrections to the $\mu$-decay amplitude or the $Z$-boson self-energy [see Eq.~(\ref{eq:deltarhat})].
Studies of the muon spectral shape and angular distribution, along with the electron polarization, can provide additional handles on non-SM contributions. The spectrum and polarization are typically described by the eleven Michel parameters\cite{michel1,michel2}. Four of them ($\rho$, $\eta$, $\delta$, $\xi$) characterize the spectral shape and angular distribution:
\begin{eqnarray}
\nonumber
d\Gamma& = & {G_\mu^2 m_\mu^5\over 192\pi^3} {d\Omega\over 4\pi} x^2\ dx
\times \Biggl\{ {1+h(x)\over 1 + 4\eta(m_e/m_\mu)}\left[
12(1-x)+\frac{4}{3}\rho(8x-6)+ 24\frac{m_e}{m_\mu}{(1-x)\over x}\eta\right]\\
\label{eq:michel1}
&& \pm P_\mu\; \xi\cos\theta \left[ 4 (1-x) + \frac{4}{3}\delta(8x - 6) +
{\alpha\over 2\pi}{g(x)\over x^2}\right]\Biggl\},
\eea
where $x=|{\vec p}_e|/|{\vec p}_e|_{\rm max}$,
$\theta=\cos^{-1}({\hat p}_e\cdot{\hat s}_\mu)$, $P_\mu$ is the $\mu^{\pm}$
polarization, and $h(x)$ and $g(x)$ are momentum dependent radiative
corrections. Five additional parameters ($\xi^\prime$, $\xi^{\prime\prime}$, $\eta^{\prime\prime}$, $\alpha/A$, $\beta/A$) are needed to describe the electron transverse and longitudinal polarization and two more ($\alpha^\prime/A$, $\beta^\prime/A$) parameterize the T-odd correlation of the final state lepton spin and momenta with the muon polarization. The parameter $\eta$ also characterizes deviations of the muon lifetime from its value in the pure $V-A$ Fermi theory, as can be seen from the corresponding modification of the total decay rate:
\begin{equation}
\label{eq:Gftaumu}
\frac{1}{\tau_\mu}=\frac{m_\mu^5}{192\pi^3} G_\mu^2\left[1+\delta_{\rm QED}\right]\left[1+4\eta\frac{m_e}{m_\mu}-8\left(\frac{m_e}{m_\mu}\right)^2\right]\left[1+\frac{3}{5}\left(
\frac{m_\mu}{M_W}\right)^2\right]\ \ \ .
\ee
In the Standard Model, one has $\rho=\delta=3/4$, $P_\mu\xi=1$, and $\eta=0$, so that from measurements of $\tau_\mu$ one obtains a fractional uncertainty in the Fermi constant of
$\Delta G_\mu/G_\mu = 9\times 10^{-6}$. Allowing for $\eta\not=0$ due to possible non-SM contributions to the decay amplitude and constraining such effects with experimental determinations of the Michel parameters can result in a larger uncertainty in $G_\mu$. A recent measurement of transverse positron polarization
from $\mu^+$ decay yields $\eta=(71\pm 37 \pm 5)\times 10^{-3}$, leading to an increase in the relative error on $G_\mu$ by a factor of 40\cite{Danneberg:2005xv}. The results of this experiment also lead to new values for the parameters $\eta^{\prime\prime}$, $\alpha^\prime$ and $\beta^\prime$.
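As a cross-check of Eq.~(\ref{eq:Gftaumu}), the sketch below evaluates the rate with the SM value $\eta=0$ and only the leading one-loop term of $\delta_{\rm QED}$, $(\alpha/2\pi)(25/4-\pi^2)$ (an approximation; higher-order QED pieces are neglected, and the input constants are assumed values). It reproduces the measured lifetime $\tau_\mu\simeq 2.197\ \mu$s:

```python
from math import pi

hbar  = 6.582119569e-25   # GeV s
G_mu  = 1.1663787e-5      # GeV^-2 (assumed input)
m_mu  = 0.1056583745      # GeV
m_e   = 0.000510998950    # GeV
M_W   = 80.379            # GeV (assumed input)
alpha = 1.0 / 137.035999  # fine-structure constant

# Leading one-loop QED correction to the muon decay rate in the Fermi theory;
# higher orders are neglected in this sketch.
delta_QED = (alpha / (2.0 * pi)) * (25.0 / 4.0 - pi**2)

eta = 0.0  # Standard Model value
rate = (G_mu**2 * m_mu**5 / (192.0 * pi**3)
        * (1.0 + delta_QED)
        * (1.0 + 4.0 * eta * m_e / m_mu - 8.0 * (m_e / m_mu)**2)
        * (1.0 + 0.6 * (m_mu / M_W)**2))
tau_mu = hbar / rate
print(tau_mu * 1e6, "microseconds")
```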
New measurements of $\rho$ and $\delta$ have also been completed, and a new global analysis of the Michel parameters has been carried out in Ref.~\cite{Gagliardi:2005fg}. It is conventional to analyze the results in terms of the effective, four-fermion Lagrangian
\begin{equation}
\label{eq:leff0}
{\cal L}^{\mu-\rm decay} = -\frac{4 G_\mu}{\sqrt{2}}\, \sum_\gamma \ g^\gamma_{\epsilon\mu}\
\ {\bar e}_\epsilon \Gamma^\gamma \nu_e\, {\bar\nu}_\mu \Gamma_\gamma \mu_\mu
\ee
where the sum runs over Dirac matrices $\Gamma^\gamma= 1$ (S), $\gamma^\alpha$ (V), and $\sigma^{\alpha\beta}$ (T) and the subscripts $\epsilon$ and $\mu$ denote the chirality ($R$,$L$) of the final state lepton and muon, respectively\footnote{The use of the subscript \lq\lq $\mu$" to denote both the chirality of the muon and the flavor of the corresponding neutrino is an unfortunate historical convention.}. The SM has $g^V_{LL}=1$ with all other couplings vanishing, leading to the SM values for $\rho$, $\delta$, $P_\mu\xi$, and $\eta$ noted above. The sensitivity of the Michel parameters to non-SM interactions is obtained by expressing them in terms of the general set of couplings $g^\gamma_{\epsilon\mu}$. For example, one has
\begin{equation}
\rho-\frac{3}{4} = -\frac{3}{4}\Bigl\{ |g^V_{LR}|^2+|g^V_{RL}|^2+2\left(|g^T_{LR}|^2+|g^T_{RL}|^2\right)+{\rm Re}\, \left(g^S_{LR} g^{T\,\ast}_{LR}+ g^S_{RL} g^{T\,\ast}_{RL}\right)\Bigr\}
\end{equation}
In order for SUSY to affect the Michel spectrum or lepton polarization in a discernible way, it must generate contributions to the effective Lagrangian (\ref{eq:leff0}) other than those associated with the $g^V_{LL}$ term. If one assumes conservation of R-parity, then such contributions can be generated at the one-loop level {\em via} the box graphs of Fig. \ref{fig:susybox} (a). The corresponding amplitudes contribute to both $g^V_{LL}$ and $g^S_{RR}$. One has\cite{Profumo:2006yu}
\begin{eqnarray}
\label{eq:grrloop}
g^S_{RR,\, \rm loop} & = & \frac{\alpha M_Z^2}{2\pi}\Biggl\{ 2 |U_{j'1}|^2 Z_L^{2i\ast} Z_L^{5i} Z_L^{1i'} Z_L^{4i'\ast} |N_{j1}|^2\, {\cal F}_1\left(M_{\chi^0_j}^2, M_{\chi_{j'}}^2, M_{\tilde L_i}^2,
M_{\tilde L_{i'}}^2\right) \\
\nonumber
&&- Z_\nu^{1j\ast} Z_\nu^{2j} Z_L^{5i} Z_L^{4i\ast} \left(N_{j2}^\ast-\tan\theta_W N_{j1}^\ast\right)
N_{j1} \left(N_{j'2}-\tan\theta_W N_{j'1}\right) N_{j'1}^\ast\\
\nonumber
&&\qquad \times\, {\cal F}_1\left(M_{\chi^0_j}^2, M_{\chi^0_{j'}}^2, M_{\tilde\nu_j}^2, M_{\tilde L_{i}}^2\right)\\
\nonumber
&&-Z_\nu^{1j\ast} Z_\nu^{2j} Z_L^{5i} Z_L^{4i\ast} \left(N_{j2}^\ast-\tan\theta_W N_{j1}^\ast\right)
N_{j1} \left(N_{j'2}-\tan\theta_W N_{j'1}\right) N_{j'1}^\ast\\
\nonumber
&&\qquad\times\, M_{\chi^0_j} M_{\chi^0_{j'}}\, {\cal F}_2\left(M_{\chi^0_j}^2, M_{\chi^0_{j'}}^2, M_{\tilde\nu_j}^2, M_{\tilde L_{i}}^2\right)\Biggr\}
\end{eqnarray}
where the $Z_\nu^{Ij}$, $Z_L^{Ij}$, $U_{ij}$, and $N_{ij}$ are the sneutrino, slepton, chargino, and neutralino mixing matrices, respectively, defined in Section \ref{sec:susy} and
\begin{equation}
{\cal F}_n\left(a,b,c,d\right) = \int_0^1 dx\, \int_0^{1-x} dy\, \int_0^{1-x-y}dz\, \left[ax+by+cz+d(1-x-y-z)\right]^{-n}
\ \ \ .
\end{equation}
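The Feynman-parameter integrals ${\cal F}_n$ are straightforward to evaluate numerically. The sketch below (an illustration, not the method of Ref.~\cite{Profumo:2006yu}) maps the simplex to the unit cube and checks the equal-mass limit ${\cal F}_1(M^2,M^2,M^2,M^2)=1/(6M^2)$, which follows because the integrand is then constant over the simplex of volume $1/6$:

```python
def F_n(n, a, b, c, d, N=60):
    """Numerically evaluate the three-fold Feynman-parameter integral F_n
    over the simplex x + y + z <= 1, using the substitution
      x = u, y = (1-u) v, z = (1-u)(1-v) w   (Jacobian (1-u)^2 (1-v))
    and a midpoint rule on the unit cube with N points per axis."""
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        u = (i + 0.5) * h
        for j in range(N):
            v = (j + 0.5) * h
            for k in range(N):
                w = (k + 0.5) * h
                x = u
                y = (1.0 - u) * v
                z = (1.0 - u) * (1.0 - v) * w
                denom = a * x + b * y + c * z + d * (1.0 - x - y - z)
                total += (1.0 - u) ** 2 * (1.0 - v) / denom ** n
    return total * h ** 3

# Equal-mass check: for a = b = c = d = M^2 the integrand is constant,
# so F_1 reduces to (simplex volume)/M^2 = 1/(6 M^2).
print(F_n(1, 1.0, 1.0, 1.0, 1.0))
```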
The $g^S_{RR}$ term in Eq.~(\ref{eq:leff0}) involves scalar couplings between the right-handed (RH) charged leptons and left-handed (LH) neutrinos. Although CC interactions are associated with the SU(2)$_L$ sector of the MSSM, couplings to the RH charged leptons can arise via either lepton Yukawa interactions or mixing of the LH and RH slepton weak eigenstates into mass eigenstates. In obtaining
Eq.~(\ref{eq:grrloop}) we have retained only contributions associated with the latter, as reflected in the presence of the matrices $Z_L^{(I+3)i}$, $I=1,2$ in each of the terms.
\begin{figure}[ht]
\begin{center}
\resizebox{6 in}{!}{
\includegraphics*[50,510][540,630]{susybox.ps}}
\caption{Box graphs contributing to (a) the muon decay parameter $g^S_{RR}$ and
(b) $\beta$ decay parameters $a^S_{RR}$, $a^S_{RL}$, and $a^T_{RL}$. }
\label{fig:susybox}
\end{center}
\end{figure}
As discussed in Ref.~\cite{Profumo:2006yu}, the products $Z_\nu^{1j\ast} Z_\nu^{2j}$ and $Z_L^{5i} Z_L^{4i\ast}$ lead to lepton flavor changing amplitudes at one-loop order and are, thus, highly constrained by searches for lepton flavor violating processes such as $\mu\to e\gamma$. Thus, for practical purposes, one may neglect the last two terms in Eq.~(\ref{eq:grrloop}). In contrast, the first term in $g^S_{RR,\, \rm loop}$ is flavor diagonal but requires the presence of left-right mixing among first and second generation sleptons. The product $Z_L^{2i\ast} Z_L^{5i}$ also enters the SUSY contribution to $a_\mu=(g_\mu-2)/2$, the muon anomalous magnetic moment. Since the magnetic moment operator is chirality odd, a non-vanishing SUSY contribution requires the presence of left-right mixing either through the muon Yukawa coupling or smuon left-right mixing. Consequently,
the degree of left-right mixing in the first term in Eq.~(\ref{eq:grrloop}) is constrained to some degree by the experimental results for $a_\mu$. Taking these considerations into account, the authors of Ref.~\cite{Profumo:2006yu} find that contributions to $g^S_{RR,\, \rm loop}$ as large as a few $\times 10^{-4}$ are possible, with the largest effects occurring when one of the two smuon mass eigenstates becomes light.
Contributions of this magnitude would imply that either
$|\mu|$ lies well above the electroweak scale or that all but the
SM-like, lightest Higgs boson would decouple in order to avoid the presence of charge and color-breaking minima in the scalar potential.
From the standpoint of the Michel parameters, $g^S_{RR}$ contributes quadratically to the combination of parameters that governs the spectral shape and spatial asymmetry
\begin{equation}
1-\xi\frac{\delta}{\rho} = 2|g^V_{RR}|^2+\frac{1}{2}|g^S_{RR}|^2 +\frac{1}{2} |g^S_{LR}-2 g^T_{LR}|^2
\end{equation}
as well as the parameter $\xi^\prime$ that enters the energy-dependence of the outgoing lepton longitudinal polarization. Experimentally, one has\cite{Stoker:1985sj,Jodidio:1986mz,Eidelman:2004wy}
\begin{eqnarray}
P_\mu\xi\frac{\delta}{\rho} &=& 0.99787\pm0.00082 \\
\xi^\prime &=& 1.00\pm 0.04
\end{eqnarray}
leading to $|g^S_{RR}| < 0.067$ at 90\% confidence from the recent global fit of Ref.~\cite{Gagliardi:2005fg} under the assumption that $P_\mu=1$. Given the maximum magnitude of $g^S_{RR,\, \rm loop}$, it appears unlikely that one will probe SUSY contributions using these quantities. In contrast, the parameters $\eta$, $\eta^{\prime\prime}$, and $\beta^\prime/A$ carry a linear dependence on $g^S_{RR,\, \rm loop}$. The impact on the parameter $\eta$ that characterizes the energy-dependence of the isotropic outgoing lepton spectrum is particularly interesting:
\begin{equation}
\label{eq:eta}
\eta= \frac{{\rm Re}\, g^V_{LL} g^{S,\, \ast}_{RR}}{2|g^V_{LL}|^2}+\cdots\ \ \ ,
\end{equation}
where the \lq\lq $+\cdots$" indicate contributions from the other $g^\gamma_{\epsilon\mu}$ that are not generated in the MSSM. The present limit on $\eta$ obtained in Ref.~\cite{Danneberg:2005xv} is about two orders of magnitude above the SUSY expectations, and a substantial improvement in precision would be needed to probe this parameter at an interesting level from the standpoint of SUSY.
While a direct probe of $g^S_{RR,\, \rm loop}$ via measurements of the Michel parameters may be challenging, its contribution to $\eta$ could be large enough to affect the extraction of $G_\mu$ from $\tau_\mu$. As indicated by Eqs.~(\ref{eq:Gftaumu},\ref{eq:eta}), the fractional shift in $G_\mu$ due to this parameter is
\begin{equation}
\label{eq:Gmushift}
\frac{\Delta G_\mu}{G_\mu} = -\frac{m_e}{m_\mu}\, \frac{{\rm Re}\, g^V_{LL} g^{S,\, \ast}_{RR,\, \rm loop}}{|g^V_{LL}|^2}+\cdots\ \ \ ,
\end{equation}
so that contributions to $g^S_{RR,\, \rm loop}$ of order a few $\times 10^{-4}$ would lead to ppm effects in the value of the Fermi constant. Given the objectives of the new PSI experiments\cite{fast,mulan}, this correction may have to be considered if future collider experiments discover superpartners. In principle, one might also probe this SUSY-induced non-$(V-A)\otimes(V-A)$ operator using Eq.~(\ref{eq:Gfswmz}) if the uncertainty in $M_Z$ and the weak mixing angle can be improved by one and three orders of magnitude, respectively. While the latter appears to be an especially daunting task, obtaining a commensurate reduction in the theoretical uncertainty in $\Delta\hat r$ would likely be even more difficult.
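The hierarchy of sensitivities discussed above is easy to quantify. Taking an illustrative $|g^S_{RR,\,\rm loop}|=3\times 10^{-4}$ (near the maximum quoted above, an assumed value) with $g^V_{LL}\simeq 1$, Eqs.~(\ref{eq:eta}) and (\ref{eq:Gmushift}) give:

```python
m_e, m_mu = 0.000510999, 0.1056584   # GeV

g_V_LL = 1.0        # SM-like coupling
g_S_RR = 3e-4       # illustrative size, near the maximum quoted in the text

# Linear sensitivity: contribution to the Michel parameter eta, Eq. (eta)
eta = g_V_LL * g_S_RR / (2.0 * g_V_LL**2)

# Quadratic sensitivity: contribution to 1 - xi*delta/rho
quad = 0.5 * g_S_RR**2

# Induced fractional shift in the Fermi constant, Eq. (Gmushift):
# ppm-level, relevant for the new lifetime experiments
dG_over_G = -(m_e / m_mu) * g_V_LL * g_S_RR / g_V_LL**2

print(eta, quad, dG_over_G)
```

The quadratic combination sits some five orders of magnitude below present spectral sensitivity, while the linear $\eta$ contribution and the ppm-level shift in $G_\mu$ set the scales discussed in the text.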
\subsection{Semileptonic Decays of Light Quark Systems: General Considerations}
\label{sec:semi}
As with the analysis of $\mu$-decay, the theoretical interpretation of semileptonic decays requires careful attention to electroweak radiative corrections. Moreover, the presence of low-energy strong interactions introduces additional complications not present for purely leptonic decays. Nonetheless, considerable progress has been made in reducing the theoretical uncertainties associated with non-perturbative QCD effects, allowing one to derive information on SUSY and other possible new physics scenarios from precise studies of these decays.
In analyzing both the ${\cal O}(\alpha)$ electroweak corrections and non-perturbative QCD effects, it is useful to return to Eq.~(\ref{eq:betaampl}), replacing the quark spinors by the corresponding field operators:
\begin{equation}
\label{eq:semi1}
{\cal L}^{\rm CC}_{\rm semileptonic} = -\frac{G_\mu}{\sqrt{2}}\, V_{ud}\, \left(1+{ \Delta \hat r}_\beta-{ \Delta \hat r}_\mu\right)\, {\bar e}\gamma^\lambda(1-\gamma_5) \nu_e\, {\bar u}\gamma_\lambda(1-\gamma_5)d\, +{\rm h.c.} \ \ \ .
\ee
The computation of decay observables requires taking matrix elements of the effective operator between initial and final hadronic states. Note, however, that the ${\cal O}(\alpha)$ correction ${
\Delta \hat r}_\beta$ can only be computed reliably in perturbation theory down to a scale $\Lambda_{\rm had}$ at which strong interactions between quarks become non-perturbative.
Non-perturbative effects arising from QCD dynamics below this scale lead to a dependence of
the radiative corrections on the structure of the initial and final hadronic states
as well as on the spacetime properties of the hadronic charged current. It is conventional to absorb these QCD contributions into process-dependent ${\cal O}(\alpha)$ corrections. To this end, we write the decay amplitudes for the various semileptonic processes of interest here as
\begin{eqnarray}
\nonumber
{\cal M}_{\ell_2}^\pi & = & -\frac{G_\mu}{\sqrt{2}}\, V_{ud}\, \left(1+{\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu\right)\, {\bar \ell}\gamma^\lambda(1-\gamma_5) \nu_\ell\, \bra{0} {\bar u}\gamma_\lambda \gamma_5 d\ket{\pi^-}\\
\nonumber
{\cal M}_F^\beta & = & -\frac{G_\mu}{\sqrt{2}}\, V_{ud}\, \left(1+{ \Delta \hat r^V_\beta}-{ \Delta\hat r}_\mu\right)\, {\bar e}\gamma^\lambda(1-\gamma_5) \nu_e\, \bra{f} {\bar u}\gamma_\lambda d\ket{i} \\
\label{eq:semi2}
{\cal M}_{GT}^\beta & = & -\frac{G_\mu}{\sqrt{2}}\, V_{ud}\, \left(1+{ \Delta\hat r^A_\beta}-{\Delta\hat r}_\mu\right)\, {\bar e}\gamma^\lambda(1-\gamma_5) \nu_e\, \bra{f} {\bar u}\gamma_\lambda \gamma_5 d\ket{i} \ \ \ ,
\end{eqnarray}
where ${\cal M}_F^\beta$ (${\cal M}_{GT}^\beta$) denote the Fermi (Gamow-Teller) amplitudes for nuclear, neutron, or pion $\beta$-decay that involve hadronic matrix elements of the charged vector (axial vector) current and ${\cal M}_{\ell_2}^\pi$ denotes the amplitude for pion leptonic decay. The ${\Delta\hat r^V_\beta}$, ${\Delta\hat r^A_\beta}$, and ${\Delta\hat r^A_\pi}$ denote the corresponding process-dependent ${\cal O}(\alpha)$ radiative corrections\footnote{The value of the correction ${ \Delta \hat r^V_\beta}$ for the neutron differs from the corresponding correction in pion $\beta$-decay.}.
Since we have factored out the ${\cal O}(\alpha)$ contributions to the amplitudes explicitly, the hadronic matrix elements appearing in Eqs.~(\ref{eq:semi2}) involve purely strong interaction dynamics.
For economy of notation, it is also useful to define a set of process-dependent Fermi constants that encode the ${\cal O}(\alpha)$ corrections and information on the hadronic matrix elements:
\begin{eqnarray}
\nonumber
G_A^\pi & \equiv & G_\mu V_{ud}\, \left(1+{\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu\right) \\
\nonumber
G_V^\beta & \equiv & G_\mu V_{ud}\, \left(1+{\Delta\hat r^V_\beta}-{\Delta\hat r}_\mu\right)g_V(0) \\
\label{eq:semi3}
G_A^\beta & \equiv & G_\mu V_{ud}\, \left(1+{\Delta\hat r^A_\beta}-{\Delta\hat r}_\mu\right)g_A(0) \ \ \ ,
\end{eqnarray}
where $g_V(q^2)$ and $g_A(q^2)$ are the nucleon vector and axial vector form factors, respectively, defined in Eqs.~(\ref{eq:ncurrent}) below. The pion decay matrix element appearing in ${\cal M}_{\ell 2}^\pi$ also contains a dimensionful form factor, $F_\pi(q^2)$, but it is conventional to keep the dependence of the decay rate on $F_\pi\equiv F_\pi(m_\pi^2)=92.4$ MeV explicit rather than absorbing it in $G_A^\pi$.
\vskip 0.2in
\noindent{{\bf \ref{sec:semi}.1 Pion Leptonic Decay}}
\vskip 0.2in
The purely leptonic channel $\pi^+\to \mu^+\nu (\gamma)$ yields a value for the pion decay constant $F_\pi$ that provides a key input for the analysis of chiral dynamics. Theoretically, the SM prediction for $\Gamma[\pi \to \mu \nu (\gamma)]$ has been computed to one-loop order\cite{Marciano:1993sh}. In contrast to the situation for $\mu$-decay, where QED corrections have been factored out before extracting $G_\mu$ from $\tau_\mu$, the correction ${\Delta\hat r^A_\pi}$ contains QED corrections and the associated infrared divergences. Consequently, one considers the SM prediction for the total, infrared finite decay rate
\begin{eqnarray}
\label{eq:pion0}
\Gamma[\pi^+ \to \ell^+ \bar\nu_\ell (\gamma)]& = & \Gamma[\pi^+ \to \ell^+ \bar\nu_\ell ]+\Gamma[\pi^+ \to \ell^+ \bar\nu_\ell \gamma] \\
\nonumber
&=&\frac{ (G_A^\pi)^2}{4\pi}F_\pi^2 m_\pi m_\ell^2 \left[ 1 - {m_\ell^2\over m_\pi^2} \right]^2 +\, {\rm brem} \ \ \ ,
\end{eqnarray}
where the \lq\lq $+\ {\rm brem}$" indicates ${\cal O}(\alpha)$ bremsstrahlung contributions.
The most recent analysis of $\Gamma[\pi^+ \to \ell^+ \bar\nu_\ell (\gamma)]$ has been carried out by Marciano and Sirlin, yielding the result\cite{Marciano:1993sh}
\begin{eqnarray}
\label{eq:piona}
\Gamma[\pi^+ \to \ell^+ \bar\nu_\ell (\gamma)] &=& {G_\mu^2 |V_{ud}|^2 \over
4\pi} F_\pi^2 m_\pi m_\ell^2 \left[ 1 - {m_\ell^2\over m_\pi^2} \right]^2
\left[ 1 + {2\alpha\over\pi}\ln\frac{M_Z}{\mu} \right]\\
\nonumber
&&\times \left[ 1 - {\alpha\over\pi} \left \{ \frac{3}{2} \ln\frac{\mu}{m_\pi}
+ \bar{C}_1(\mu) + \bar{C}_2(\mu) \frac{m_\ell^2}{\Lambda_\chi^2}
\ln\frac{\mu^2}{m_\ell^2} + \bar{C}_3(\mu) \frac{m_\ell^2}{\Lambda_\chi^2}
+ \cdots \right\} \right] \left[ 1 + \frac{\alpha}{\pi} F(x) \right]\ \ \ ,
\end{eqnarray}
where the quantities proportional to $\alpha$ arise from both the electroweak and QED radiative corrections entering $G_A^\pi$ and contributions from real photon radiation. Note that the corrections are manifestly process- and hadron structure-dependent. The quantity
$\Lambda_\chi=4\pi F_\pi$ is the scale of chiral symmetry breaking associated with the onset of nonperturbative dynamics; the ${\bar C}_i(\mu)$ denote low energy constants that parameterize presently incalculable non-perturbative QCD effects and that depend on the renormalization scale $\mu$; $F$ is a calculable function of $x=m_\ell^2/m_\pi^2$; and the $+\cdots$ indicate additional terms that are suppressed by $m_\ell^2/\Lambda_\chi^2$. Both the function $F(x)$ and the terms containing the ${\bar C}_i(\mu)$ arise from QED corrections to the decay rate for a point-like pion\footnote{We have adopted a slightly different normalization convention from the one used in Ref.~\cite{Marciano:1993sh}.}.
The large logarithms $\ln (M_Z/\mu)$ in Eq. (\ref{eq:piona}) have been resummed to all orders in powers of $[(\alpha/\pi)\ln (M_Z/\mu)]^n$ using the renormalization group, yielding the electroweak correction factor $S_{EW}(\mu, M_Z)$ that replaces the factor $1+2(\alpha/\pi)\ln (M_Z/\mu)$\cite{Marciano:1993sh}. For $\mu=m_\rho$ one has $S_{EW}(m_\rho, M_Z)=1.0232$. The authors of Ref.~\cite{Marciano:1993sh} estimate that the uncertainty in $\Gamma[\pi \to \mu \nu (\gamma)]$ associated with the ${\bar C}_i(\mu)$ is $\pm 0.56\%$. This uncertainty dominates the error in $F_\pi$ since the pion lifetime and leptonic branching fraction are known to $\pm 0.02\%$ and $\pm 0.00004\%$ uncertainty, respectively. One thus obtains
\begin{equation}
\label{eq:fpiexp}
F_\pi = 92.4\pm 0.025 \pm 0.25\quad {\rm MeV}
\end{equation}
where the first error is associated with the value of $V_{ud}$ and the second with the effects of the ${\bar C}_i(m_\rho)$.
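For orientation, a leading-order inversion of the rate formula, dropping all ${\cal O}(\alpha)$ corrections and using illustrative values of $\tau_\pi$, the leptonic branching ratio, and $V_{ud}$ (assumed inputs, not the values used in Ref.~\cite{Marciano:1993sh}), already lands close to Eq.~(\ref{eq:fpiexp}):

```python
from math import pi, sqrt

hbar   = 6.582119569e-25  # GeV s
tau_pi = 2.6033e-8        # s, charged-pion lifetime (assumed input)
BR     = 0.9999           # BR(pi -> mu nu (gamma)) (assumed input)
G_mu   = 1.1663787e-5     # GeV^-2
V_ud   = 0.9742           # assumed input
m_pi   = 0.13957039       # GeV
m_mu   = 0.1056584        # GeV

Gamma = BR * hbar / tau_pi   # partial width in GeV

# Tree-level inversion of the helicity-suppressed rate (radiative corrections
# dropped): Gamma = G^2 |Vud|^2/(4 pi) F_pi^2 m_pi m_mu^2 (1 - m_mu^2/m_pi^2)^2
phase = (1.0 - m_mu**2 / m_pi**2) ** 2
F_pi = sqrt(4.0 * pi * Gamma / (G_mu**2 * V_ud**2 * m_pi * m_mu**2 * phase))
print(F_pi * 1000, "MeV")
```

The tree-level value comes out near $93$ MeV; the ${\cal O}(\alpha)$ corrections discussed above (dominated by $S_{EW}$) shift the extraction to the quoted $92.4$ MeV.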
Since one cannot presently compute $F_\pi$ from first principles with a precision comparable to the uncertainties quoted in Eq.~(\ref{eq:fpiexp}), a measurement of $\Gamma[\pi \to \mu \nu (\gamma)]$ by itself does not provide a useful low-energy probe of SUSY effects. On the other hand, inclusion of SUSY corrections to the rate could alter the extracted value of $F_\pi$. To illustrate this possibility, it is convenient to write Eq.~(\ref{eq:piona}) as
\begin{eqnarray}
\label{eq:pionc}
\Gamma[\pi^+ \to \ell^+ \bar\nu_\ell (\gamma)] &=& {G_\mu^2 |V_{ud}|^2 \over
4\pi} F_\pi^2 m_\pi m_\ell^2 \left[ 1 - {m_\ell^2\over m_\pi^2} \right]^2
\\
\nonumber
&\times & \left\{1+\left(2\left[{\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu\right] +{\rm brem}\, \right)_{\rm SM} +2\left({\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu\right)_{\rm new}\right\}
\end{eqnarray}
where $(2[{\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu] +\, {\rm brem})_{\rm SM}$ denotes the ${\cal O}(\alpha)$ SM corrections to the rate appearing on the RHS of Eq.~(\ref{eq:piona}) and $2({\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu)_{\rm new}$ denotes the corrections from new physics. Since the latter generally involve the exchange of heavy particles, one does not encounter infrared singularities in the new physics corrections and need not consider the corresponding bremsstrahlung contributions.
To illustrate the impact of these corrections in SUSY, we first consider the case of RPV interactions, neglecting left-right mixing among sfermions. In terms of the $\Delta_{ijk}({\tilde f})$ and $\Delta_{ijk}^\prime({\tilde f})$ defined in Eq.~(\ref{eq:deltas}), we have \cite{Ramsey-Musolf:2000qn,Barger:1989rk}
\begin{equation}
\label{eq:pione}
\left({\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu\right)_{\rm new}^{\rm RPV} =\left[\Delta_{\ell 1k}^\prime({\tilde d}_R^k)-\Delta_{12k}({\tilde e}_R^k)\right] \ \ \ ,
\ee
where the subscript \lq\lq $\ell$" denotes the generation of the final state leptons.
From a fit to other low energy observables discussed in Section \ref{sec:rpv}, we obtain
\begin{eqnarray}
\label{eq:pionf}
(a)&& -0.004 \leq ({\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu)_{\rm new} \leq -0.001\ \ \
{\rm 95 \%\ C.L.}
\nonumber \\
(b)&&-0.003 \leq ({\Delta\hat r^A_\pi}-{\Delta\hat r}_\mu)_{\rm new} \leq -0.0004\ \ \
{\rm 95 \%\ C.L.}
\end{eqnarray}
for the partial rate involving final state muons,
where we have required the $\Delta_{ijk}$, $\Delta_{ijk}^\prime$ to be non-negative according to the definition of Eq.~(\ref{eq:deltas}).
Here (a) and (b) refer to the two values of $\delta |V_{ud}|^2/|V_{ud}|^2$
in Table~\ref{tab:rpv-constrain} that were used in the fit.
As a result,
we obtain a small shift in the central value for the rate (\ref{eq:piona}) and an
additional uncertainty of roughly $\pm 0.5\%$, with a corresponding increase of roughly a quarter of a percent in the uncertainty of $F_\pi$. It is interesting that the estimated hadronic structure uncertainty associated with SM radiative corrections is comparable to this RPV effect.
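Propagating the fit range (a) of Eq.~(\ref{eq:pionf}) into $F_\pi$ is a one-line exercise: since $\Gamma\propto F_\pi^2\,(1+2\Delta\hat r_{\rm new})$, the fractional shift in the extracted $F_\pi$ is approximately $-(\Delta\hat r)_{\rm new}$. A quick sketch:

```python
# Range (a) of the RPV fit quoted in the text
dr_lo, dr_hi = -0.004, -0.001

# Gamma ∝ F_pi^2 (1 + 2*Delta_r_new)  =>  fractional F_pi shift ≈ -Delta_r_new
shift_lo = -dr_hi   # smallest upward shift in F_pi
shift_hi = -dr_lo   # largest upward shift in F_pi

central = 0.5 * (shift_lo + shift_hi)      # ~0.25 %
half_range = 0.5 * (shift_hi - shift_lo)   # ~0.15 %
print(f"F_pi shift: ({central*100:.2f} +/- {half_range*100:.2f})%")
```

The resulting spread of a couple tenths of a percent is consistent in magnitude with the quarter-percent figure quoted above.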
In order to obtain a probe of new physics using $\pi_{\ell 2}$ decays, one must attempt to circumvent the uncertainties associated with the hadronic matrix element that is parameterized by $F_\pi$. To that end, it is useful to consider the ratio of partial rates for final state electrons and muons
\begin{equation}
\label{eq:remua}
R_{e/\mu} = {\Gamma[\pi^+ \to e^+ \bar\nu_e (\gamma)]\over \Gamma[\pi^+ \to \mu^+ \bar\nu_\mu (\gamma)]} \ \ \ ,
\ee
whose measurement provides a useful test of lepton universality of the CC weak interaction and its breakdown due to ${\cal O}(\alpha)$ corrections or new physics. From Eqs.~(\ref{eq:piona}) and (\ref{eq:pionc}) it follows that
\begin{eqnarray}
\label{eq:remub}
R_{e/\mu} &=& \frac{m_e^2}{m_\mu^2} \left[ {m_\pi^2-m_e^2 \over m_\pi^2-m_\mu^2} \right]^2\Bigl\{1+\left(2\left[{\Delta\hat r^A_\pi}(e)-{\Delta\hat r^A_\pi}(\mu)\right] +\Delta_{\rm brem}\, \right)_{\rm SM}\\
\nonumber
&& \qquad\qquad\qquad\qquad\qquad+2\left[{\Delta\hat r^A_\pi}(e)-{\Delta\hat r_\pi^A(\mu)}\right]_{\rm new}\Bigr\}\\
\nonumber
&\equiv& R_{e/\mu}^{\rm SM}\left\{ 1+2\left[{\Delta\hat r^A_\pi}(e)-{\Delta\hat r_\pi^A(\mu)}\right]_{\rm new}\right\}
\end{eqnarray}
where the ${\Delta\hat r^A_\pi}(\ell)$ indicate the corrections for final state lepton $\ell$, the
\lq\lq $\Delta_{\rm brem}$" denotes the difference in the bremsstrahlung contributions to the rate for final state electrons and muons, and
$R_{e/\mu}^{\rm SM}$ is the SM value for the ratio. The SM contributions have been computed in Ref.~\cite{Marciano:1993sh}
\begin{equation}
\label{eq:remuc}
R_{e/\mu}^{\rm SM} = \frac{m_e^2}{m_\mu^2} \left[ {m_\pi^2-m_e^2 \over m_\pi^2-m_\mu^2} \right]^2
\left\{ 1 +\frac{\alpha}{\pi} \left[ F({m_e\over m_\pi}) -
F({m_\mu\over m_\pi}) + \frac{m_\mu^2}{\Lambda_\chi^2} ( \bar{C}_2
\ln {m_\mu^2\over \Lambda_\chi^2} + \bar{C}_3) \right] \right\}\ \ \ .
\ee
The terms proportional to $\alpha m_e^2$ are numerically insignificant and have been omitted from this expression. To obtain a precise numerical prediction, the authors of Ref.~\cite{Marciano:1993sh} included structure-dependent bremsstrahlung corrections and performed a renormalization group resummation of all $[(\alpha/\pi) \ln(m_e/m_\mu)]^n$ corrections that enter
$F(m_e/m_\pi)-F(m_\mu/m_\pi)$, leading to
\begin{equation}
\label{eq:remud}
R_{e/\mu}^{\rm SM} = (1.2352\pm 0.0005)\times 10^{-4}\ \ \ ,
\end{equation}
where the error is dominated by theoretical uncertainties in the structure dependent
bremsstrahlung contributions.
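The size of the SM correction in Eq.~(\ref{eq:remud}) can be understood from the tree-level helicity-suppressed ratio together with the leading logarithm of the point-pion QED correction. In the sketch below, the $-(3\alpha/\pi)\ln(m_\mu/m_e)$ term is a rough stand-in for $F(m_e/m_\pi)-F(m_\mu/m_\pi)$; structure-dependent terms and the resummation are neglected:

```python
from math import pi, log

m_e, m_mu, m_pi = 0.51099895, 105.6583755, 139.57039  # MeV
alpha = 1.0 / 137.035999

# Tree-level helicity suppression: R ∝ m_e^2 times the squared phase-space factor
R_tree = (m_e**2 / m_mu**2) * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2

# Leading-log piece of the point-pion QED correction (an approximation to
# F(m_e/m_pi) - F(m_mu/m_pi)); structure-dependent terms are neglected.
R_corr = R_tree * (1.0 - 3.0 * alpha / pi * log(m_mu / m_e))
print(R_tree, R_corr)
```

The tree value is $\simeq 1.283\times 10^{-4}$; the leading logarithm alone brings it to $\simeq 1.236\times 10^{-4}$, close to the full SM result $1.2352\times 10^{-4}$.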
The ratio $R_{e/\mu}$ has been measured by groups at TRIUMF~\cite{Britton:1992pg} and PSI~\cite{Czapek:kc}, yielding the world average
\begin{eqnarray}
\label{eq:remuresult}
{R_{e/\mu}^{\rm exp}\over R_{e/\mu}^{\rm SM}}=0.9966\pm 0.0030\pm 0.0004,
\end{eqnarray}
where the first error is experimental and the second is the estimated
theoretical uncertainty. A new measurement that has been approved to run at TRIUMF aims to reduce the experimental uncertainty to a level comparable with the theoretical error\cite{TRIUMFnew}. A measurement at PSI with a similar improvement in precision has also recently been approved\cite{PSInew}.
The new measurements could provide significant tests of possible SUSY contributions to the breaking of CC lepton universality. In the case of RPV contributions, for example, one has
\begin{equation}
\label{eq:remue}
\left[{\Delta\hat r_\pi^A}(e)-{\Delta\hat r_\pi^A}(\mu)\right]_{\rm RPV} = \Delta_{11k}^\prime({\tilde d}_R^k)-\Delta_{21k}^\prime({\tilde d}_R^k) \ \ \ .
\ee
Thus, the precise measurement of $R_{e/\mu}$ provides an important input into the global analysis of the RPV corrections $\Delta_{ijk}^\prime$, {\em etc.} (see Table~\ref{tab:rpv-constrain}). The global analysis leading to Eq.~(\ref{eq:pionf}) uses the present average (\ref{eq:remuresult}), and the new measurements should reduce the range on the RPV-related uncertainty in the value of $F_\pi$.
When $P_R$ is conserved, lepton universality can be broken by superpartner loop corrections when the first and second generation sleptons have unequal masses. SUSY loop corrections to the $W{\bar u} d$ vertices, light quark propagators, and $W$-boson propagator are identical for the $\pi^+\to e^+\nu_e(\gamma)$ and $\pi^+\to \mu^+\nu_\mu(\gamma)$ amplitudes, thus canceling out of the ratio $R_{e/\mu}$ at ${\cal O}(\alpha)$. In contrast, corrections to the $W{\ell}\nu_\ell$ vertices, lepton propagators, and box graphs can differ in the two cases, so that $R_{e/\mu}$ provides a probe of the non-universality of the first and second generation sleptons.
An analysis of these corrections has recently been performed in Ref.~\cite{tulin06}. In the limit that the charginos and neutralinos are approximately pure gaugino and higgsino states (a limit achieved in the absence of electroweak symmetry-breaking), the SUSY correction to $R_{e/\mu}$ is dominated by box graphs. The vertex and external leg corrections for the $W^+\ell^+\nu_{\ell}$ vertex sum to zero as required by the Ward Identity\footnote{When $\overline{DR}$ is used, the SM and superpartner corrections individually contribute $\mu$-independent constants of equal magnitude and opposite sign.}. The resulting correction is
\begin{equation}
\label{eq:remuloop1}
\delta R_{e/\mu}^{SUSY}=\frac{\Delta R_{e/\mu}^{SUSY}}{R_{e/\mu}^{SM}} =\frac{\alpha V_{ud}}{6\pi {\hat s}^2}\, \left(\frac{M_W}{M_2}\right)^2 \left\{F\left(\frac{m_{\tilde L_1}^2}{M_2^2},\frac{m_{\tilde Q_1}^2}{M_2^2}\right)-F\left(\frac{m_{\tilde L_2}^2}{M_2^2},\frac{m_{\tilde Q_1}^2}{M_2^2}\right)\right\}\ \ \ ,
\ee
where $F(x,y)$ is a loop function associated with the box graph with $F(1,1)=1$ (corresponding to $m_{\tilde L_i}=m_{\tilde Q_1} =M_2$). When the two sleptons are equal in mass, $\Delta R_{e/\mu}^{SUSY} =0$, whereas when $m_{\tilde L_i} \gg m_{\tilde L_j}$, one has
\begin{equation}
\label{eq:remuloop2}
\vert \delta R_{e/\mu}^{SUSY}\vert =1.7 \times 10^{-3} \left(\frac{M_W}{M_2}\right)^2 \, F\left(\frac{m_{\tilde L_j}^2}{M_2^2},\frac{m_{\tilde Q_1}^2}{M_2^2}\right)\ \ \ ,
\ee
so that the correction can be as large as a few $\times 10^{-3}$ in this case.
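The size of the coefficient in Eq.~(\ref{eq:remuloop2}) can be understood by evaluating the prefactor of Eq.~(\ref{eq:remuloop1}) with the weak-scale couplings ${\hat\alpha}(M_Z)\simeq 1/128$ and ${\hat s}^2\simeq 0.231$, along with $V_{ud}\simeq 0.974$:
\begin{equation}
\frac{\alpha V_{ud}}{6\pi {\hat s}^2} \simeq 1.7\times 10^{-3}\ \ \ .
\end{equation}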
Allowing for significant gaugino-higgsino mixing can lead to logarithmic enhancements of $\delta R_{e/\mu}^{SUSY}$, as the vertex and external leg corrections no longer sum to zero in this case. The resulting contribution is
\begin{align}
\left. \delta R_{e/\mu}^{\textrm{SUSY}} \right|_{V+L} = & \; \frac{\alpha}{4 \pi s_W^2} \; \ln \left(\frac{m_{\widetilde{e}_L}^2}{m_{\widetilde{\mu}_L}^2} \right)
\times \left[ 2 - 2\: V^{*}_{j1}U^{*}_{j1} N_{i2} N_{i2} \frac{}{} \right. \nonumber \\
&\left. \quad\quad - V^{*}_{j1} U^{*}_{j2} N_{i2} N_{i3}/\sqrt{2}
+ U^{*}_{j1} V^{*}_{j2} N_{i2}^{*} N_{i3}^{*}/\sqrt{2} \frac{}{} \right] + \; ... ,
\end{align}
where the factor in the square brackets that contains the chargino and neutralino mixing matrices vanishes in the limit that $M_{1,2}$ and $\mu$ are much larger than $M_W$ and can be as large as $0.5$ for gaugino and higgsino mass parameters of order 100 GeV. The resulting correction can, thus, be as large as
\begin{equation}
\vert \delta R_{e/\mu}^{SUSY}\vert \buildrel < \over {_\sim} 1.3 \times 10^{-3}\, \ln \left(\frac{m_{\tilde L_1}^2}{m_{\tilde L_2}^2}\right)\ \ \ .
\ee
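Numerically, the coefficient in this bound corresponds to evaluating the prefactor $\alpha/(4\pi s_W^2)$ with the mixing-dependent factor in the square brackets set to its maximal value of $\sim 0.5$:
\begin{equation}
\frac{\alpha}{4\pi s_W^2}\times 0.5 \simeq 1.3\times 10^{-3}\ \ \ .
\end{equation}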
A numerical scan over MSSM parameters performed by the authors of Ref.~\cite{tulin06} is consistent with this bound. Given that the new TRIUMF and PSI experiments hope to reduce the experimental uncertainty to the level of the present SM theory uncertainty ($\sim 5\times 10^{-4}$), one could expect to see a departure from the SM expectations of several standard deviations if the masses of the selectron and smuon differ by more than a factor of $\sim 2$ and the mass of the lightest chargino is less than 300 GeV. Moreover, in the case of such non-degeneracy among sleptons, the sign of $\delta R_{e/\mu}^{SUSY}$ provides an indication of whether the lightest first or second generation slepton is heavier.
\vskip 0.2in
\noindent{{\bf \ref{sec:semi}.2 Neutron and Nuclear $\beta$ Decay: General Features}}
\vskip 0.2in
Studies of nuclear and neutron $\beta$-decay have yielded both important information about parameters of the SM and constraints on physics beyond it (for comprehensive reviews, see Refs.~\cite{Herczeg:2001vk,Deutsch,Severijns:2006dr}). In particular, the most precisely-known element of the Cabibbo-Kobayashi-Maskawa (CKM) matrix, which characterizes the misalignment of quark weak interaction and mass eigenstates in CC interactions, has been determined from \lq\lq superallowed" nuclear $\beta$-decays. The latter involve transitions between spin-parity $J^\pi=0^+$ nuclear states that are mediated solely by the vector charge component of the CC weak current. To the extent that the initial and final $0^+$ nuclear states are states of pure isospin and the energy transfer to the outgoing lepton pair is negligible compared to typical nuclear scales, the transition matrix element is independent of nuclear structure. In contrast, the rates for nuclear decays involving initial and/or final states with non-zero spin -- including the decay of the neutron -- depend more strongly on the details of hadronic and nuclear structure. In these cases, the extraction of precise information on the electroweak sector of the SM or on new physics generally requires measurement of both the decay rate and one or more decay correlation coefficients.
In all cases, the overall decay rate in the SM is characterized by the reduced half-life or \lq\lq $ft$" value, given by
\begin{eqnarray}
\label{eq:ftsuper}
ft&=&\frac{K}{(G_V^\beta)^2 M_F^2 + (G_A^\beta)^2 M_{GT}^2}\\
\nonumber
\\
\nonumber
K&=&\hbar (2\pi^3\ln2 )(\hbar c)^6 /(m_e c^2)^5 \ \ \ ,
\end{eqnarray}
where $t$ is the half-life; $f$ is a factor that takes into account the outgoing $\beta$-particle wavefunction in the presence of the nuclear Coulomb field; the Fermi matrix element $M_F$ is the nuclear transition matrix element of the vector current charge operator $J_0^\dag(x)=u^\dag(x) d(x)+{\rm h.c.}$, while the Gamow-Teller matrix element $ M_{GT}$ involves the spatial component of the axial vector current operator\footnote{For neutron decay, $M_F$ and $M_{GT}$ simply indicate the matrix elements of the vector and axial vector CC operators without respect to their spacetime components.}.
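Numerically, the constant $K$ is fixed entirely by the electron mass; one has
\begin{equation}
K/(\hbar c)^6 = 2\pi^3 (\ln 2)\, \hbar/(m_e c^2)^5 \simeq 8120.3\times 10^{-10}\ {\rm GeV}^{-4}\, {\rm s}\ \ \ .
\end{equation}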
It is often useful to consider the differential decay rate for the decay of a polarized nucleus, for which one has\cite{Jackson} (see also Refs.~\cite{Herczeg:2001vk,Deutsch,Severijns:2006dr})
\begin{eqnarray}
\label{eq:betacor}
d\Gamma& \propto & {\cal N}(E_e)\Biggl\{ 1+a {{\vec p}_e\cdot{\vec p}_\nu\over E_e E_\nu}
+ b{\Gamma m_e\over E_e} + \langle {\vec J}\rangle\cdot \left[A{{\vec p}_e\over E_e}
+ B{{\vec p}_\nu \over E_\nu} + D{{\vec p}_e\times {\vec p}_\nu \over E_e E_\nu}\right] \\
\nonumber
&&+ {\vec\sigma}\cdot\left[N \langle{\vec J}\rangle + G\frac{{\vec p}_e}{E_e}+Q^\prime {\hat p}_e {\hat p}_e\cdot \langle{\vec J}\rangle+R \langle {\vec J}\rangle\times\frac{{\vec p}_e}{E_e}\right]
\Biggr\}
d\Omega_e d\Omega_\nu d E_e,
\end{eqnarray}
where ${\cal N}(E_e)=p_e E_e(E_0-E_e)^2$; $E_e$ ($E_\nu$) and ${\vec p}_e$
(${\vec p}_\nu$) are the $\beta$ (neutrino) energy and momentum, respectively; $E_0$ is the endpoint energy;
${\vec J}$ is the polarization of the decaying nucleus; ${\vec \sigma}$ is the $\beta$ polarization; and $\Gamma=\sqrt{1-(Z\alpha)^2}$. The coefficients of the various correlations involving lepton momenta and nuclear spin depend on the structure of the underlying lepton-quark weak interaction. The correlations parameterized by $A$, $B$, and $G$ are odd under parity (P) and even under time-reversal (T); the $D$-term is T-odd but P-even; the $R$ correlation is both T- and P-odd; and all others are P- and T-even.
The various correlation coefficients in Eq.~(\ref{eq:betacor}) carry complementary dependences on the ratio of Fermi constants
\begin{equation}
\label{eq:gvga}
\lambda = \frac{G_A^\beta}{G_V^\beta} = \frac{g_A(0)}{g_V(0)}\left(1+{\Delta\hat r}_\beta^A-{\Delta\hat r}_\beta^V \right)
\ee
as well as on operators that depart from the $(V-A)\otimes(V-A)$ form of the low-energy SM CC current-current interaction. In the SM, the correlation coefficients can be expressed in terms of $\lambda$ alone.
In the case of neutron decay, for
example, one has:
\begin{equation}
\label{eq:corcoeff}
a = {1-\lambda^2\over 1+3\lambda^2}, \hspace{80pt}
A = -2{\lambda(1+\lambda)\over 1+3\lambda^2}, \hspace{80pt}
B = 2{\lambda(\lambda-1)\over 1+3\lambda^2},
\ee
with analogous expressions for the coefficients $N$ and $G$.
The quantity $b$ appearing in the so-called Fierz interference term is zero for
purely vector and axial vector interactions,
while $N$ vanishes for pure vector
transitions. The T-odd correlations are zero in the SM\footnote{Final state QED interactions can induce non-vanishing contributions to $D$ that mimic the effects of {\em bona fide} T-violation.}.
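For orientation, inserting the measured value $\lambda\simeq -1.274$~\cite{Abele:2002wc} into Eq.~(\ref{eq:corcoeff}) yields
\begin{equation}
a \simeq -0.106, \hspace{80pt} A \simeq -0.119, \hspace{80pt} B \simeq 0.987\ \ \ ,
\end{equation}
illustrating that $B$ is far less sensitive to $\lambda$ in this region than are $a$ and $A$.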
As with $\mu$-decay, it is useful to describe possible non-$(V-A)\otimes(V-A)$ departures using an effective, low-energy Lagrangian. In analogy with Eq.~(\ref{eq:leff0}) we write
\begin{equation}
\label{eq:leffbeta}
{\cal L}^{\beta-\rm decay} = - \frac{4 G_\mu}{\sqrt{2}}\ \sum_{\gamma,\, \epsilon,\, \delta} \ a^\gamma_{\epsilon\delta}\,
\ {\bar e}_\epsilon \Gamma^\gamma \nu_e\, {\bar u} \Gamma_\gamma d_\delta\ \ \ .
\end{equation}
The SM gives
\begin{equation}
\label{eq:avll}
a^V_{LL}=V_{ud}\, \left(1+{\Delta\hat r}_\beta-{ \Delta\hat r}_\mu\right)\ \ \ ,
\ee
with all other $a^\gamma_{\epsilon\delta}=0$. Note that in hadronic matrix elements of the SM $(V-A)\otimes(V-A)$ operator, non-perturbative QCD effects will lead to a renormalization of the axial vector current and to differences in hadronic contributions to the radiative corrections $\Delta{\hat r}_\beta^V$ and $\Delta{\hat r}_\beta^A$ as indicated in Eqs.~(\ref{eq:semi2},\ref{eq:semi3}).
Tree-level, supersymmetric contributions to $a^V_{LL}$ can be generated by RPV interactions. One has~\cite{Ramsey-Musolf:2000qn,Barger:1989rk}
\begin{equation}
\label{eq:ckm2}
\left[{ \Delta\hat r}_\beta^{V,A}-{\Delta\hat r}_\mu\right]_{\rm RPV} = 2\left[\Delta_{11k}^\prime({\tilde d}_R^k)-\Delta_{12k}({\tilde e}_R^k)\right] \ \ \ .
\ee
Note that because the RPV interactions preserve the $(V-A)\otimes(V-A)$ structure of the low-energy CC interaction, the RPV contributions to ${ \Delta\hat r}_\beta^{V}$ and ${ \Delta\hat r}_\beta^{A}$ are identical and, thus, do not affect the value of $\lambda$.
As we discuss below, probes of these contributions can be obtained using a combination of $\beta$-decay, kaon semileptonic ($K_{\ell 3}$) decays, $B$-meson decays, and the unitarity property of the CKM matrix. The results of these unitarity tests have been used as input for the global analysis of RPV corrections $\Delta_{ijk}$, $\Delta_{ijk}^\prime$, {\em etc.} summarized in Table~\ref{tab:rpv-constrain}.
Supersymmetric loop corrections can also contribute to $[{ \Delta\hat r}_\beta^{V,A}-{ \Delta \hat r}_\mu]$. As with the RPV corrections, the SUSY loop corrections to ${ \Delta\hat r}_\beta^{V}$ and ${ \Delta\hat r}_\beta^{A}$ are identical in the MSSM, since the superpartner mass scale lies well above the hadronic scale. Hence, the presence of these corrections leaves the value of $\lambda$ unchanged from its SM value.
An analysis of these corrections has been carried out in Ref.~\cite{Kurylov:2001zx} using the MSSM with R-parity conservation. Doing so requires computation of the corrections illustrated in Fig.~\ref{fig:cccorr} (see Section \ref{sec:renorm}). In general, carrying out an analysis of these corrections over the vast space of MSSM parameters is a formidable task. Typical analyses resort to one of two strategies to contend with the large number of parameters: either one carries out a model-independent analysis by randomly generating a large number of MSSM parameter sets and computing the corrections for each set, or one reduces the number of independent parameters by adopting a model for SUSY-breaking mediation in which the soft SUSY parameters are determined from a small number of parameters at a high scale and their RG running to the weak scale.
In the case of the corrections to $G_V^\beta$, however, cancellations between corrections to ${ \Delta\hat r}_\beta^{V,A}$ and ${ \Delta\hat r}_\mu$ allow for an alternate approach. Specifically, contributions to the $W$-boson propagators -- ${\hat\Pi}_{WW}^T$ -- cancel entirely between the two terms\footnote{Up to negligible corrections arising from the different kinematics of each process.}, as do corrections to the first generation lepton propagators and $We{\bar\nu}_e$ vertices. As a result,
$[{ \Delta \hat r}_\beta^{V,A}-{ \Delta \hat r}_\mu]_{\rm SUSY-loop}$ depends only on the differences in the $W\, \mu\, \nu_\mu$ and $W\, u\, d$ and external leg corrections as well as the box graphs. The authors of Ref.~\cite{Kurylov:2001zx} found that these simplifications allowed completion of a model-independent analysis without resorting to randomly generated parameter sets. We review the results of this analysis below.
One-loop radiative corrections in the MSSM may also induce non-zero scalar and tensor interactions via the box graphs of Fig.~\ref{fig:susybox} (b). These loop-induced non-$(V-A)\otimes(V-A)$ interactions have recently been analyzed by the authors of Ref.~\cite{Profumo:2006yu}, who obtained
\begin{eqnarray}
\label{eq:deltabetaa}
\tilde\delta_\beta^{\mathbf (a)}&=&\frac{\alpha M_Z^2 V_{ud}}{3\pi} \left|U_{k1}\right|^2Z_D^{1i*}Z_D^{4i}Z_L^{1m}Z_L^{4m*}\left|N_{j1}\right|^2{\mathcal F}_1\left(M_{\chi_j^0},M_{\chi_{k}^+},M_{\tilde d_i},M_{\tilde l_{m}}\right)\\
\nonumber
\tilde\delta_\beta^{\mathbf (b)}&=&\frac{-\alpha M_Z^2 V_{ud}}{3\pi} U_{j1}V_{j1}^* Z_U^{1i*}Z_U^{4i}Z_L^{1m}Z_L^{4m *}\left|N_{k1}\right|^2M_{\chi_j^+}M_{\chi_{k}^0}{\mathcal F}_2\left(M_{\chi_j^+},M_{\chi_{k}^0},M_{\tilde u_i},M_{\tilde l_{m}}\right)
\eea
where the notation is similar to that of Eq.~(\ref{eq:grrloop}). The corresponding operator coefficients are
\begin{eqnarray}
\label{eq:box1}
a^S_{RR} & = & \tilde\delta_\beta^{\mathbf (a)} \\
\nonumber
a^S_{RL} = -2 a^T_{RL} & = & \tilde\delta_\beta^{\mathbf (b)}\ \ \ .
\eea
Note that, in contrast to the situation with $\mu$-decay, the presence of SUSY loop-induced scalar and tensor interactions relevant to $\beta$-decay requires only flavor-diagonal L-R mixing among scalar fermions. These loop-induced scalar and tensor interactions generate contributions to the Fierz interference parameter, $b$, of Eq.~(\ref{eq:betacor}); the energy-dependent components of the neutrino asymmetry parameter $B$ and spin-polarization coefficient $Q^\prime$; and the energy-independent component of the spin-polarization coefficient $N$. We discuss the prospects for future $\beta$-decay experimental probes of these loop-induced interactions below.
\subsection{Superallowed Nuclear Decays}
For the superallowed Fermi transitions one has $M_{GT}=0$, leaving only the dependence on $M_F$ in the $ft$-value. To a high degree of accuracy, $M_F$ is independent of the details of nuclear structure, making these transitions an ideal venue for determining $G_V^\beta$ and, thus, $V_{ud}$. As Eqs.~(\ref{eq:semi3}) and (\ref{eq:ftsuper}) indicate, the determination of $V_{ud}$ from these transitions requires both experimental and theoretical input. Experimentally, the $ft$ values for twelve different transitions have been measured with uncertainties of $\sim (3{-}25)\times 10^{-4}$\cite{Hardy:2004dm,Hardy:2004id}, while the value of $G_\mu$ is known to ten ppm accuracy (assuming $\eta=0$ as in the SM). As discussed in
Refs.~\cite{Hardy:2004dm,Hardy:2004id}, a determination of the experimental half lives requires three distinct experimental measurements: the total decay half life, the branching ratio for the decay to the $0^+$ ground state of the daughter nucleus, and the energy release in the decay, or $Q$-value. New measurements of the $Q$-values for several superallowed decays have recently been completed, leading to shifts in the $ft$ values in some, but not all, cases\cite{Savard:2005cv,Eronen:2006if}. The impact of these new measurements on the overall fit to the twelve decays awaits completed analyses of Penning trap measurements of the $Q$-values of $^{26m}$Al and $^{42}$Sc.
Theoretically, one requires computations of the matrix element $M_F$ as well as the SM radiative correction factors ${\Delta\hat r}^V_\beta$ and ${\Delta\hat r}_\mu$. In the limit of zero energy transfer and exact isospin, the Fermi matrix element is
\begin{equation}
\label{eq:mv}
M_F = \langle I,I_Z\pm 1| J_0 |I, I_Z \rangle = [(I\mp I_Z)(I\pm I_Z+1)]^{1/2}\ \ \ .
\ee
For the most precisely-known cases, one has $I=1, I_Z=0$ so that $ M_F =\sqrt{2}$. For realistic nuclei and finite energy transfer, one must apply small nuclear structure-dependent corrections
\begin{equation}
\label{eq:mvcorr}
M_F^2 (1+\delta_R)(1-\delta_C) = [(I\mp I_Z)(I\pm I_Z+1)]
\ee
where $\delta_C$ is a correction that accounts for isospin-breaking and $\delta_R$ is a nucleus-dependent contribution to the ${\cal O}(\alpha)$ electroweak radiative corrections. After applying these corrections, one obtains a corrected $ft$ value
\begin{equation}
\label{eq:ftcorr}
{\cal F}t=ft(1+\delta_R)(1-\delta_C) \ \ \ .
\ee
It follows from Eqs.~(\ref{eq:ftsuper}) and (\ref{eq:ftcorr}) that $ {\cal F}t$ should be the same for each nucleus, since this quantity depends only on the nucleus-independent Fermi constant $G_V^\beta$ and the universal matrix element in Eq.~(\ref{eq:mv}) determined solely by the isospin quantum numbers. For historical reasons, this prediction is described as a consequence of the conserved vector current (CVC) property of the semileptonic, CC weak interaction.
The corrections $\delta_{R,C}$ have been computed by the authors of
Refs.~\cite{Hardy:2004dm,Hardy:2004id} using the nuclear shell model (NSM) and applied to the twelve best-known superallowed transitions, leading to an average ${\cal F}t$ value
\begin{equation}
\label{eq:ftcorrave}
{\overline{ {\cal F}t} }= 3072.7\pm 0.8\ s
\ee
where the error includes the estimated theoretical uncertainty associated with the corrections $\delta_R$ and $\delta_C$\footnote{The $\chi^2$ per degree of freedom for the average is 0.42.}.
This result for ${\overline{ {\cal F}t} }$ implies consistency with CVC at the $0.026\% $ level.
The result in Eq.~(\ref{eq:ftcorrave}) provides a test of the inter-nuclear consistency implied by CVC but does not account for the possibility of a nucleus-independent systematic theoretical error associated with the calculated nuclear corrections. To allow for this possibility, the authors of Refs.~\cite{Hardy:2004dm,Hardy:2004id} compared the NSM ${\cal F}t$ values with those obtained by using the $\delta_C$ Hartree-Fock calculations of Ormand and Brown\cite{ormandbrown}. Since the latter exist for nine of the twelve most accurately measured superallowed transitions, Towner and Hardy (TH) averaged their values with those of Ormand and Brown (OB) for these cases, yielding the average
\begin{equation}
\label{eq:ftcorravenine}
{\overline{ {\cal F}t} }= 3073.5 \pm 1.2\ s
\ee
where an additional theoretical uncertainty, given by half the difference between the TH and OB values for ${\overline{ {\cal F}t} }$, has been included. In what follows, we use the average (\ref{eq:ftcorravenine}) in the discussion of $V_{ud}$.
A value for $G_V^\beta$ can be extracted from ${\overline{ {\cal F}t}}$ by employing Eqs.~(\ref{eq:ftsuper},\ref{eq:ftcorr}) and subsequently used to determine $V_{ud}$ by applying the SM radiative correction difference ${ \Delta \hat r}_\beta^V-{ \Delta \hat r}_\mu$ appearing in $G_V^\beta$. The dominant uncertainty entering the latter is the theoretical uncertainty associated with hadronic contributions to the correction ${ \Delta \hat r}_\beta^V$ arising from the $W\gamma$ box graph.
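Explicitly, setting $M_{GT}=0$ and $M_F^2=2$ in Eq.~(\ref{eq:ftsuper}) and writing $G_V^\beta = G_\mu V_{ud}\left(1+{\Delta\hat r}_\beta^V-{\Delta\hat r}_\mu\right)$, one has
\begin{equation}
|V_{ud}|^2 = \frac{K}{2\, G_\mu^2\, {\overline{ {\cal F}t} }}\left(1+{\Delta\hat r}_\beta^V-{\Delta\hat r}_\mu\right)^{-2}\ \ \ ,
\end{equation}
so that a theoretical uncertainty $\delta({\Delta\hat r}_\beta^V)$ propagates into a relative uncertainty $\delta V_{ud}/V_{ud}\simeq \delta({\Delta\hat r}_\beta^V)$, consistent with the final error quoted in Eq.~(\ref{eq:vudnuc}).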
To ${\cal O}(\alpha)$ one has \cite{Sirlin:1977sv}
\begin{equation}
\label{eq:wgbox}
{\Delta\hat r}_\beta^V(W\gamma\, {\rm box}) = {{\hat\alpha}\over 8\pi} \left[
\ln\left( {M_W^2\over\Lambda^2} \right) + C_{\gamma W}(\Lambda)\right] \ \ \ .
\ee
Here, the leading logarithmic term is generated by short-distance contributions to the loop integral and can be computed reliably in the SM. The constant $C_{\gamma W}(\Lambda)$ parameterizes contributions to the loop
integral below a momentum scale $\Lambda$. An estimate of $C_{\gamma W}(\Lambda)$ was
given by Marciano and Sirlin~\cite{Marciano:1985pd} using nucleon intermediate states in
the box diagram, and this estimate had been retained by subsequent authors for many years. The uncertainty in this contribution was estimated by varying $\Lambda$ over a reasonable range, yielding $\sim\pm 0.00038$ in ${\Delta\hat r}_\beta^V$.
Recently, Marciano and Sirlin have reduced this uncertainty by a factor of two over previous estimates by relating the asymptotic part of the $W\gamma$ box integral to the Bjorken sum rule to include perturbative QCD corrections through ${\cal O}(\alpha_s^3)$ and by employing large $N_C$-based current-current correlators to treat the resonance region\cite{Marciano:2005ec}. The result is an uncertainty of $\sim\pm 0.00019$ in ${ \Delta\hat r}_\beta^V$.
Using the results of this new analysis and an update of the fit in
Refs.~\cite{Hardy:2004dm,Hardy:2004id} by Savard {\em et al.}~\cite{Savard:2005cv} one has
\begin{equation}
\label{eq:vudnuc}
V_{ud}=0.97377(11)(15)(19)
\ee
where the first error arises from combining the experimental error in $\overline{{\cal F}t}$ and nuclear structure theory uncertainty; the second error is associated with nuclear Coulomb distortion effects; and the final error is the theoretical hadronic structure error associated with $C_{\gamma W}(\Lambda)$.
In light of this new result, the uncertainties associated with ${ \Delta \hat r}_\beta^V$ and with the nuclear corrections $\delta_{R,C}$ are comparable. In order to test theoretical, nuclear structure computations of these corrections, new measurements of additional superallowed decays are underway with nuclei in which the magnitude of the calculated nuclear corrections is larger than the corrections for the nine most precisely measured transitions used to obtain the result (\ref{eq:vudnuc}). These studies involve even-Z, $I_Z=-1$ parent nuclei in the mass range $18 \leq A\leq 42$ and four odd-Z, $I_Z=0$ parent nuclei with $A\geq 62$. Obtaining refined measurements of half lives, branching ratios, and $Q$-values for both series of decays is challenging, though the use of Penning trap techniques has allowed for new, highly precise mass measurements for the light cases. A more extensive review of these challenges and prospects for meeting them can be found in Ref.~\cite{Hardy:2005kc}.
In addition to providing a determination of $G_V^\beta$, studies of superallowed transitions also provide a probe of the Fierz interference coefficient $b_F$ (the subscript \lq\lq $F$" denotes those contributions allowed for Fermi decays). In terms of the effective operator coefficients $a^\gamma_{\epsilon\, \delta}$ one has
\begin{equation}
b_F = \pm \frac{2\, g_S}{g_V}\, {\rm Re}\, \left(\frac{a^S_{RL}+a^S_{RR}}{a^V_{LL}}\right)
\end{equation}
independent of the details of the nuclear matrix elements (the upper and lower signs correspond to $\beta^-$ and $\beta^+$ decay, respectively). Here, $g_V$ and $g_S$ are the nucleon vector and scalar current form factors, respectively [see Eq.~(\ref{eq:ncurrent}) below]. A global analysis of superallowed decays leads to $b_F=0.0026(26)$\cite{Hardy:2004dm,Hardy:2004id}, implying stringent bounds on scalar interactions. In order to probe the non-$(V-A)\otimes(V-A)$ interactions generated by superpartner loops, roughly an order-of-magnitude improvement in sensitivity would be required. Specifically, from Eqs.~(\ref{eq:deltabetaa},\ref{eq:box1}) we have
\begin{eqnarray}
\nonumber
b_F&=&\pm\frac{2\alpha}{3\pi}\, \left(\frac{g_S}{g_V}\right)\, {\rm Re}\, Z_L^{1m}Z_L^{4m*} \Bigl[ \left|U_{k1}\right|^2Z_D^{1i*}Z_D^{4i}\left|N_{j1}\right|^2\, M_Z^2{\mathcal F}_1\left(M_{\chi_j^0},M_{\chi_{k}^+},M_{\tilde d_i},M_{\tilde l_{m}}\right)\\
\label{eq:bFsusy}
&&-U_{j1}V_{j1}^* Z_U^{1i*}Z_U^{4i}\left|N_{k1}\right|^2\, M_Z^2M_{\chi_j^+}M_{\chi_{k}^0}{\mathcal F}_2\left(M_{\chi_j^+},M_{\chi_{k}^0},M_{\tilde u_i},M_{\tilde l_{m}}\right)\Bigr]\ \ \ .
\end{eqnarray}
The prefactor $2\alpha/3\pi$ is ${\cal O}(10^{-2})$ while the product of $M_Z^2$ and the loop functions can be as large as $10^{-1}$ for ${\tilde m}\sim M_Z$. For nearly maximal L-R mixing among sfermions, the product of rotation matrices $Z_F^{1k} Z_F^{4k\ast}$ {\em etc.} is ${\cal O}(1)$. Thus, contributions to $b_F$ as large as $\sim 10^{-3}$ can occur in the regime of maximal mixing and superpartner masses of order the weak scale. Measurements designed to observe $b_F$ at the few $\times 10^{-4}$ level would provide interesting probes of L-R mixing among first generation scalar fermions.
Note that non-vanishing results at this scale would
disfavor the alignment hypothesis of Eq.~(\ref{eq:triscalaryukawa}), which, for
$|\mu|$ of order the electroweak scale, implies
that L-R mixing for the first two generations is Yukawa suppressed. As in the case of the muon decay parameter $g^S_{RR}$, large L-R mixing for the first generation also implies that the masses of the Higgs bosons $H^0$, $A^0$, and $H^\pm$ are super heavy, leaving only the light SM-like Higgs $h^0$ as an experimentally accessible degree of freedom. On the other hand, null results would provide added experimental plausibility to the alignment idea.
\subsection{$\beta$-Decay Correlations}
Experimental studies of the $\beta$ spectral shape, angular distribution, and polarization can provide information on both non-$(V-A)\otimes(V-A)$ interactions as well as an alternate means of obtaining $V_{ud}$. As a specific illustration, we consider polarized neutron decay, for which
both the vector and axial vector components of the CC contribute to the $ft$ value in Eq. (\ref{eq:ftsuper}) with
\begin{eqnarray}
\label{eq:ngt}
M_F^2 & = & 1 \\
\nonumber
M_{GT}^2 & = & 3\, g_A^2
\end{eqnarray}
and where $g_A$ is the hadronic axial vector coupling that characterizes the strong interaction renormalization of the axial vector quark current that enters the neutron decay matrix elements. The latter are given by
\begin{eqnarray}
\label{eq:ncurrent}
\bra{p} {\bar u}(0) \gamma_\mu d(0)\ket{n}& = & {\bar U}_p (P') \left[ g_V(q^2) \gamma_\mu + \frac{ i\,g_M(q^2)}{2 m_N} \sigma_{\mu\nu}\, q^\nu \right] U_n(P)\\
\nonumber
\bra{p} {\bar u}(0) \gamma_\mu\gamma_5 d(0)\ket{n} & = & {\bar U}_p (P') \left[ g_A(q^2) \gamma_\mu\gamma_5 + \frac{ g_P(q^2)}{m_N}\, q_\mu\gamma_5\right] U_n(P)\ \ \ .
\end{eqnarray}
At $q^2=0$ one has $g_V(0)=1$ and $g_M(0)=\kappa_p-\kappa_n$ according to the CVC property of the vector CC, whereas $g_A\equiv g_A(0) \approx 1.26$ and $g_P\approx 8.5$. (Here, we quote a value for $g_P$ taken from chiral perturbation theory that is in agreement with the results of ordinary muon capture experiments but that differs from the result obtained from radiative muon capture. For a recent discussion, see, {\em e.g.}, Ref.~\cite{Kammel:2002sd}.) Neglecting the small $q^2$-dependent corrections associated with the $g_M$ and $g_P$ terms, Eq.~(\ref{eq:ncurrent}) leads to the matrix elements in Eq.~(\ref{eq:ngt}). Nucleon matrix elements of scalar and tensor operators, ${\bar u} d$ and ${\bar u}\sigma_{\mu\nu} d$, associated with non-SM interactions are parameterized by analogous form factors, $g_S$ and $g_T$, respectively.
At present, it is not possible to compute $g_A$ with the $0.1 \%$ precision needed for a $ 0.1\%$ determination of $V_{ud}$ from the neutron lifetime, $\tau_n$. Consequently, the axial vector contribution to $ft$ in Eq.~(\ref{eq:ftsuper}) must be separated experimentally from the vector contribution. Doing so requires measurement of a neutron decay correlation coefficient appearing in the partial rate of Eq.~(\ref{eq:betacor}). The coefficients $A$, $a$, and $B$ carry a dependence on $\lambda$ [see Eq.~(\ref{eq:corcoeff})], so that their measurement can yield the requisite separation. The most precise value of $\lambda$ has been obtained with a 0.6\% measurement of the $A$ parameter by the PERKEO collaboration, leading to $\lambda=-1.2739\pm0.0019$\cite{Abele:2002wc}. Since the publication of that result, a new value for $\tau_n$ has been obtained at ILL that differs from the previous world average by more than six standard deviations\cite{Serebrov:2004zf}. Including that result and performing a one-parameter fit to neutron decay measurements yields $\lambda=-1.27293(46)$. The resulting value for $V_{ud}$ is
\begin{equation}
\label{eq:vudnnew}
V_{ud}=0.97757(65)\ \ \ ,
\ee
compared with the value $V_{ud}=0.97192(65)$ obtained using the previous world average for $\tau_n$ and the new average for $\lambda$.
Several efforts are underway in order to obtain a value of $V_{ud}$ using a combination of the neutron lifetime, $\tau_n$, and correlation coefficient measurements, and a review of these studies can be found in Ref.~\cite{Erler:2004cx}. In light of the new $\tau_n$ result, additional precise determinations of the neutron lifetime are either underway or are being planned at ILL, NIST, and LANSCE.
Precise measurements of neutron decay correlation coefficients may also probe the SUSY loop-induced non-$(V-A)\otimes(V-A)$ interactions. For example, the $\beta$ energy-dependent component of the neutrino asymmetry parameter $B$ depends linearly on both the scalar and tensor interactions of Eq.~(\ref{eq:box1}):
\begin{eqnarray}
B_{\rm SUSY\, box} & = & -2\left(\frac{\Gamma m}{E}\right)\, \frac{\lambda}{1+3\lambda^2}\,
{\rm Re}\, \Biggl\{ 4\lambda \left(\frac{g_T}{g_A}\right)\, \left(\frac{a^{T}_{RL}}{a^{V}_{LL}}\right)^\ast\\
\nonumber
&& +\left[2 \left(\frac{g_T}{g_A}\right)\, \left(\frac{a^{T}_{RL}}{a^{V}_{LL}}\right)^\ast - \left(\frac{g_S}{g_V}\right)\,
\left(\frac{a^{S}_{RL}+a^S_{RR}}{a^{V}_{LL}}\right)^\ast\right]\Biggr\}
\eea
As with the Fierz interference term, the SUSY contributions to $B$ can approach the $10^{-3}$ level for nearly maximal L-R mixing. Experimentally, future measurements using cold or ultracold neutrons may be able to determine the energy-dependent component of $B$ with a sensitivity of a few $\times 10^{-4}$\cite{brad}. At present, the results of nuclear $\beta$-decay correlation measurements do not appear to be sensitive to the linear interference of scalar and tensor interactions with the SM amplitude; improvements in superallowed sensitivity to $b_F$ and in neutron decay correlation measurements appear to hold the brightest prospects for probing the SUSY loop-induced non-$(V-A)\otimes(V-A)$ interactions.
\subsection{Pion $\beta$-decay}
In addition to the use of nuclear and neutron decay, measurements of the rate for pion $\beta$-decay ($\pi_\beta$) also yield a determination of $G_V^\beta$. In this case, the non-universal hadronic contributions to the radiative correction ${ \Delta \hat r}_\beta$ for $\pi_\beta$ differ from those for neutron and nuclear decays, and in the past it has been argued that the corresponding theoretical uncertainties for $\pi_\beta$ are smaller. However, a recent analysis using $\chi$PT reported in Ref.~\cite{Cirigliano:2002ng} quotes a theoretical uncertainty in $V_{ud}$ of $\pm 0.0005$ -- a value that is larger than the new theoretical uncertainty associated with neutron and nuclear decays.
The rate for pion $\beta$-decay is given by
\begin{equation}
\label{eq:pionbeta1}
\Gamma(\pi_\beta)= \frac{(G_V^\beta)^2 m_\pi^5 |f_{+}^\pi(0)|^2 I(\lambda_{+}^\pi)}{64\pi^3}\ \ \ ,
\ee
where $f_{\pm}^{\pi}(t)$ are the two pion form factors and $I(\lambda_{+}^\pi)$ is a phase space integral that results from inclusion of real photon emission and that is a function of the slope $\lambda_{+}^\pi$ of $f_{+}^\pi(t)$ at the photon point. The non-universal, long-distance ${\cal O}(\alpha)$ corrections can be included by
replacing $f_{+}^\pi(t)$ by
\begin{equation}
F_{+}^\pi(t,u) = f_{+}^\pi(t)\left[1+\frac{\alpha}{4\pi}\Gamma(u, m_e^2, m_\pi^2, \lambda_\gamma)\right]\ \ \ ,
\ee
where $\lambda_\gamma$ is an infrared regulator whose effect on the total rate is cancelled by the corresponding $\lambda_\gamma$-dependence of $I(\lambda^\pi_{+})$.
Contributions to $\Gamma(u, m_e^2, m_\pi^2, \lambda_\gamma)$ that are non-analytic in the various masses and momenta can be computed unambiguously at one-loop order and carry no dependence on {\em a priori} unknown parameters. However, there exist analytic contributions that arise at the same chiral order that are parameterized by three constants in the effective Lagrangian: $K_{12}^r(\mu)$,
$X_1$, and $X_6^r(\mu)$. Here, $K_{12}^r$ and $X_6^r$ depend on the renormalization scale $\mu$ since one-loop graphs generate divergences having the same structure as the corresponding terms in the Lagrangian. A theoretical prediction for $K_{12}^r(m_\rho)$ with $\sim 10\%$ uncertainty has been given in Ref.~\cite{Moussallam:1997xx}, while bounds on $X_1$ and $X_6^r(m_\rho)$ were obtained by the authors of Ref.~\cite{Cirigliano:2002ng} using dimensional analysis. The uncertainty associated with these constants dominates the theoretical error in the extraction of $V_{ud}$ from $\Gamma(\pi_\beta)$.
Experimentally, the PIBETA collaboration has recently obtained the most precise determination of the ratio of branching ratios for the $\pi_{\beta(\gamma)}$ and $\pi_{e2(\gamma)}$ decays\cite{Pocanic:2003pf}. Multiplying by the current world average for the $\pi_{e2(\gamma)}$ branching ratio leads to the $\pi_\beta$ branching
ratio\cite{Blucher:2005dc}
\begin{equation}
\label{eq:pionbeta2}
B_{\pi_\beta(\gamma)} = \left[1.036 \pm 0.004 ({\rm stat}) \pm 0.004 ({\rm sys}) \pm 0.003 (\pi_{e2(\gamma)})\right]\, \times10^{-8}
\ee
for a total experimental error of $\pm 0.006\times 10^{-8}$. Note that the fractional uncertainty is roughly a factor of ten larger than the theoretical error and fifteen times larger than the combined experimental and nuclear structure theory uncertainty in the superallowed $\overline{{\cal F}t}$ value. Thus, considerable experimental and theoretical progress is necessary before a value for $V_{ud}$ can be obtained from $\pi_\beta$ with precision competitive with the superallowed value.
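The error comparison quoted above can be sketched numerically; the value $V_{ud}\approx 0.974$ used below is an assumed representative input, not a number taken from this section.

```python
# Illustrative error-budget check for pi_beta (rounded inputs; not a fit).
B = 1.036e-8                  # pi_beta branching ratio
dB = 0.006e-8                 # total experimental error on B
frac_B = dB / B               # fractional error in the branching ratio
frac_Vud = 0.5 * frac_B       # Gamma ~ |V_ud|^2, so dV/V = (1/2) dB/B
frac_theory = 0.0005 / 0.974  # quoted chiPT uncertainty, assuming V_ud ~ 0.974

print(f"fractional BR error: {frac_B:.4f}")          # ~0.6%
print(f"implied dV_ud/V_ud:  {frac_Vud:.4f}")        # ~0.3%
print(f"exp/theory ratio:    {frac_B/frac_theory:.1f}")  # roughly ten
```

The last ratio reproduces the "roughly a factor of ten" statement in the text.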
\subsection{Kaon decays and $V_{us}$}
In order to use the value of $V_{ud}$ (\ref{eq:vudnuc}) as a probe of SUSY, one must compare it with the SM expectation. In this case, the SM implies that the CKM matrix is unitary, so that
\begin{equation}
\label{eq:ckm1}
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2= 1 \qquad {\rm SM}\ \ \ .
\ee
The value of $V_{ub}=0.0032\pm 0.0009$ is obtained from the decays of $B$-mesons. Given the level of uncertainty in $V_{ud}$, both the magnitude of $V_{ub}$ and its error are too small to affect a test of Eq.~(\ref{eq:ckm1}). On the other hand, the value of $V_{us}$ as well as its uncertainty are critically important. The theoretically cleanest determination of $V_{us}$ is obtained from the branching ratios for $K_{\ell 3}$ decays, $K\to\pi \ell \nu$. The partial rate for this decay mode is given by\cite{Blucher:2005dc}
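A quick numerical illustration of why $V_{ub}$ is negligible in the unitarity sum; the superallowed central value and error used below are assumed representative inputs.

```python
# Compare |V_ub|^2 with the uncertainty that dV_ud alone induces in the
# first-row sum |V_ud|^2 + |V_us|^2 + |V_ub|^2 (illustrative numbers).
Vud, dVud = 0.9738, 0.0005   # representative superallowed values (assumed)
Vub, dVub = 0.0032, 0.0009   # from B-meson decays, as quoted in the text

vub_sq = Vub**2              # contribution of |V_ub|^2 to the sum
dsum_from_Vud = 2 * Vud * dVud   # error propagated from V_ud alone

print(f"|V_ub|^2            = {vub_sq:.1e}")         # ~1e-5
print(f"uncertainty (V_ud)  = {dsum_from_Vud:.1e}")  # ~1e-3
```

The $|V_{ub}|^2$ term sits roughly two orders of magnitude below the uncertainty floor set by $V_{ud}$, as the text asserts.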
\begin{equation}
\label{eq:ke3partial}
d\Gamma(K^+_{\ell 3}) = \frac{G_\mu^2 m_K^5}{128\pi^3} S_{\rm EW} C(t) |V_{us}|^2 |f_{+}^K(0)|^2 \left[ 1 +
\frac{\lambda_{+}^K\, t}{m_\pi^2}\right]^2\left[1+2\Delta^K_{SU(2)}+2\Delta^{K\ell}_{EM}\right]\ \ \ ,
\ee
where $f_{+}^K(t)$ is the $K$-to-$\pi$ transition form factor with $t=(p_K - p_\pi)^2$, $\lambda_{+}^K/m_\pi^2$ is the slope of the form factor at $t=0$, $C(t)$ is a kinematic function that depends on the kaon form factor, $S_{\rm EW}$ contains the short-distance electroweak radiative corrections, and $\Delta^K_{SU(2)}$ and $\Delta^{K\ell}_{EM}$ indicate corrections generated by the breaking of flavor SU(2) and long-distance electromagnetic corrections, respectively\cite{Cirigliano:2001mk}. To carry out a test of CKM unitarity at the $0.1\%$ level, one must include the $\Delta^K_{SU(2)}$ and $\Delta^{K\ell}_{EM}$ corrections and determine the value of $f^K_{+}(0)$ with one percent uncertainty or better.
As with the determination of $V_{ud}$ using the $\beta$-decay half lives, arriving at a value for $V_{us}$ from the $K_{\ell 3}$ partial rates requires both experimental and theoretical input. Experimentally, new determinations of the $K_{\ell 3}$ branching ratios\cite{Alexopoulos:2004sx,Ambrosino:2005ec, Lai:2004bt,Sher:2003fb,ambrosino} -- combined with experimental values for $\lambda_{+}^K$ and $C(t)$ \cite{Alexopoulos:2004sy,Lai:2004kb,Yushchenko:2004zs} -- have yielded a shift in the world average for the product $V_{us}\times f_{+}^K(0)$. The 2005 Particle Data Group value is
\begin{equation}
V_{us}\, \left[ f_{+}^K(0)/0.961\right ] = 0.2257(9)\ \ \ .
\ee
Here the value of $f_{+}^K(0)$ has been normalized to the combined chiral perturbation theory ($\chi$PT)-quark model prediction of Leutwyler and Roos\cite{Leutwyler:1984je} that had been used for many years. That prediction includes contributions from non-analytic, one-loop terms through chiral order $p^4$, analytic ${\cal O}(p^4)$ terms that can be obtained from fits to other pseudoscalar meson observables, and a quark model estimate of the ${\cal O}(p^6)$ contribution.
Recently, the ${\cal O}(p^6)$ loop contributions have been computed in Refs.~\cite{Post:2001si,Bijnens:2003uy}. In this context, the dominant, remaining uncertainty is associated with the ${\cal O}(p^6)$ analytic contributions that depend on the square of the ${\cal O}(p^4)$ constant $L^r_5(\mu)$ and two of the 94 unknown constants appearing in the ${\cal O}(p^6)$ chiral Lagrangian, $C^r_{12}(\mu)$ and $C^r_{34}(\mu)$:
\begin{equation}
\label{eq:fplus6}
f_{+}^{K\, (6),\, {\rm analytic}} =
8{(m_\pi^2-m_K^2)^2\over F_\pi^4}\left[\frac{L_5^r(\mu)^2}{F_\pi^2}-C_{12}^r(\mu)-C_{34}^r(\mu)\right]+\cdots \ ,
\ee
where $\mu$ is the renormalization scale, usually taken to be $\sim \Lambda_\chi$. The constant $L^r_5(\mu)$ is presently well known from experiment, whereas the determination of $C^r_{12,34}(\mu)$ requires additional input.
In principle, new measurements of the pion and kaon scalar form factors $f^{\pi,\, K}_0(t)$ could allow a determination of these unknown constants, removing this remaining uncertainty. In particular, a 5\% measurement of the slope of $f_0^K(t)$ and a 20\% determination of its curvature -- coupled with a 1\% theoretical determination of the ratio of decay constants $F_K/F_\pi$ -- would be sufficient to determine $C^r_{12,34}(\mu)$ at a level needed for the first row CKM unitarity test\cite{Blucher:2005dc,Bijnens:2003uy}.
Alternatively, these constants can be determined theoretically. Recent work using large-$N_C$ QCD has yielded values for $C^r_{12, 34}$ leading to the prediction\cite{Cirigliano:2005xn}
\begin{equation}
\label{eq:fplusres}
f_{+}^K(0)_{{\rm large}\, N_C} = 0.984 \pm 0.012\ \ \ .
\ee
A number of lattice QCD computations of $f_{+}^K(0)$ have been carried out that yield, in effect, the sum of the nonanalytic and analytic terms in the $\chi$PT analysis. The results tend to favor a smaller value for $f_{+}^K(0)$ that is consistent with the Leutwyler and Roos estimate\cite{Leutwyler:1984je}:
\begin{equation}
f_{+}^K(0)_{\rm lattice} =
\left\{
\begin{array}{ll}
0.960\pm 0.005_{\rm stat} \pm 0.007_{\rm sys}\, & {\rm quenched,\ Wilson}~[119]
\\
0.962(6)(9)\,& {\rm unquenched,\ staggered}~[120]
\\
0.952(6)\,& {\rm unquenched,\ Wilson}~[121]
\\
0.955(12)\,& {\rm unquenched,\ domain\, wall}~[122]
\end{array}
\right. ,
\ee
where recent quenched and unquenched results are shown for different lattice fermion actions (Wilson, staggered, domain wall). Note that the quenched results of Ref.~ \cite{Becirevic:2004ya} do not include any systematic uncertainty associated with the quenched approximation.
An alternative determination of $V_{us}$ can be made by comparing the rates for the leptonic decays of the charged kaon and pion\cite{Marciano:2004uf}. From Eq.~(\ref{eq:piona}) and the analogous expression for the charged kaon decay rate one has
\begin{equation}
\frac{ \Gamma[K^+ \to \mu^+ \nu (\gamma)]}{ \Gamma[\pi^+ \to \mu^+ \nu (\gamma)]} = \frac{V_{us}^2}{V_{ud}^2}\, \frac{F_K^2}{F_\pi^2}\, \frac{m_\pi^3}{m_K^3}\, \frac{(m_K^2-m_\mu^2)^2}{(m_\pi^2-m_\mu^2)^2}\, \left[1-\frac{\alpha}{\pi}\Delta_{K\pi}\right]\ \ \ ,
\ee
where $\Delta_{K\pi}$ gives the difference in the radiative corrections entering the RHS of Eq.~(\ref{eq:piona}). The uncertainty from the hadron structure-dependent contributions to this difference has been estimated to be $\pm 0.75$, corresponding to an uncertainty of $\sim 0.1\%$ in the ratio $(V_{us}/V_{ud})^2$. Preliminary lattice QCD results for the pseudoscalar decay constants obtained by the MILC collaboration give $F_K/F_\pi=1.198\pm 0.003^{+0.016}_{-0.005}$\cite{Bernard:2005ei}. The resulting theoretical uncertainty in $V_{us}$ obtained by this technique is comparable with that entering the analysis of $K_{\ell 3}$ branching ratios. Using a new result for the $K_{\mu 2}$ branching ratio obtained by KLOE one obtains $V_{us}=0.2245^{+0.0011}_{-0.0031}$.
Using these results and those for $V_{ud}$ from the superallowed decays, one obtains for the first row of the CKM matrix
\begin{equation}
|V_{ud}|^2+|V_{us}|^2+|V_{ub}|^2=
\begin{cases}
0.9968\pm 0.0014, & {\rm large}\ N_C~[118]
\\
0.9998\pm0.0015, & {\rm unquenched\, lattice,\ domain\, wall}~[122]
\end{cases}\ \ \ .
\ee
\subsection{CKM Unitarity Tests: Implications for SUSY}
The implications of the CKM unitarity test for SUSY can be significant. In the presence of RPV interactions, a CKM unitarity deficit could be remedied by having $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]_{\rm RPV} < 0$, thereby implying that $\Delta_{12k}({\tilde e}_R^k) > \Delta_{11k}^\prime({\tilde d}_R^k)>0$ [see Eq.~(\ref{eq:ckm2})]. On the other hand, consistency with CKM unitarity would allow both $\Delta_{12k}({\tilde e}_R^k)$ and $\Delta_{11k}^\prime({\tilde d}_R^k)$ to be nonzero, but would imply a strong correlation on their magnitudes.
RPV corrections of order $\sim 0.1\%$ are not unreasonable from the standpoint of expected magnitudes of superpartner masses or couplings. Indeed, having
\begin{equation}
\Delta_{ijk}({\tilde e}_R^k)\sim 0.001
\ee
implies that
\begin{equation}
\frac{m_{\tilde e_R^k}}{100\, {\rm GeV}} \sim 40 \lambda_{ijk}
\ee
or $m_{\tilde e_R^k}\sim 1$ TeV for $\lambda_{ijk} \sim\sqrt{4\pi\alpha}$. As discussed in Ref.~\cite{Ramsey-Musolf:2000qn}, RPV corrections of this magnitude are not inconsistent with analogous bounds on RPV corrections obtained from other precise measurements, such as studies of rare decays or flavor-changing neutral current processes.
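The mass-coupling relation quoted above can be checked numerically if one assumes the common normalization $\Delta_{ijk}({\tilde e}_R^k)=|\lambda_{ijk}|^2/(4\sqrt{2}\,G_\mu\, m_{\tilde e_R^k}^2)$; that definition is our assumption here, since it is not restated in this section.

```python
import math

# Slepton mass giving an RPV correction Delta for a coupling lam, assuming
# Delta = lam^2 / (4*sqrt(2)*G_mu*m^2)  (assumed normalization).
G_mu = 1.16637e-5            # Fermi constant, GeV^-2
alpha = 1 / 137.036

def slepton_mass(lam, Delta):
    """Return the mass (GeV) for which the RPV shift equals Delta."""
    return lam / math.sqrt(4 * math.sqrt(2) * G_mu * Delta)

coef = slepton_mass(1.0, 1e-3) / 100.0    # m/(100 GeV) per unit lambda
print(round(coef))                        # ~ 40, as quoted in the text

lam_em = math.sqrt(4 * math.pi * alpha)   # electromagnetic-strength coupling
print(slepton_mass(lam_em, 1e-3))         # ~ 1.2e3 GeV, i.e. ~ 1 TeV
```

Both numbers reproduce the estimates in the text for a $\sim 0.1\%$ correction.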
The implications of superpartner loop corrections for CKM unitarity have been studied in Ref.~\cite{Kurylov:2001zx}. The contributions from the $W$ propagator, $We{\bar\nu}_e$ vertex, and external leg corrections cancel from the difference $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]$, leaving only the dependence on the $W\mu{\bar\nu}_\mu$ and $Wd{\bar u}$ vertex and external leg corrections as well as the box graphs entering the $\beta$- and $\mu$-decay amplitudes. As with the corrections to $R_{e/\mu}$, these box graphs are subdominant in the presence of gaugino-higgsino mixing, as the vertex and external leg corrections receive logarithmic enhancements. As a result, the dependence of the correction $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]$ on the parameters in ${\cal L}_{\rm soft}$ simplifies considerably, and the authors of Ref.~\cite{Kurylov:2001zx} were able to perform an analytic study of the corresponding parameter dependence.
To illustrate the dependence of $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]$ on the SUSY parameters, it is useful to consider several representative cases:
\begin{itemize}
\item[(i)] For situations in which the scalar superpartners of left- and right-handed fermions mix via the $\mu$-term and tri-scalar SUSY-breaking interactions into mass eigenstates ${\tilde f_{1,2}}$, one has
\begin{equation}
\left[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu\right]_{\rm SUSY\ loop}\sim \frac{\alpha({\hat c}^2-{\hat s}^2)}{32\pi^2 {\hat c}^2 {\hat s}^2}\ \ln\left(\frac{m^2_{\tilde q_2}}{m^2_{\tilde q_1}} \frac{m^4_{\tilde \mu_1}}{m^4_{\tilde \mu_2}}\right)+\cdots\ \ \ ,
\ee
where the $m_{\tilde f_i}$ are the corresponding physical masses and where this expression holds in the limit $|m_{\tilde f_2}-m_{\tilde f_1}| \gg m_{\tilde f_{1}}$. Note that the overall sign of $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]_{\rm SUSY\ loop}$ depends on the relative degree of mass splitting among the sleptons and squarks. At the time when the analysis of Ref.~\cite{Kurylov:2001zx} was carried out, the CKM unitarity deviation favored a negative sign, leading to the requirement
\begin{equation}
\label{eq:susybeta1}
\frac{m_{\tilde \mu_2}}{m_{\tilde \mu_1}} >\left( \frac{m_{\tilde q_2}}{m_{\tilde q_1}} \right)^{1/2}\ \ \ .
\ee
For nearly degenerate squarks, one would need $m_{\tilde\mu_2}\gsim 3 m_{\tilde\mu_1}$. For values of $m_{\tilde\mu_1}$ close to the current direct search bounds, a mass splitting of this magnitude is ruled out by the recent muon $(g-2)_\mu$ results\cite{Bennett:2004pv}.
\item[(ii)] The constraints from $(g-2)_\mu$ may be evaded by taking $M_{LR}^2\approx 0$ for the scalar fermions\footnote{From Eq. (\ref{eq:susybeta1}) we observe that the degree of mixing among squarks must always be less than that for smuons. Hence, setting $M_{LR}^2=0$ for the smuons implies a similar condition for the squarks.}. In this case, only the superpartners of the left-handed fermions contribute to the CC weak interaction, and the requirements on the scalar fermion spectrum are rather different. To illustrate, we consider the limit of large sfermion masses, yielding the asymptotic expression
\begin{equation}
\left[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu\right]_{\rm SUSY-loop}\sim\frac{\alpha}{2\pi}\, \cos{2\beta}\, \left[\frac{1}{3}\frac{M_Z^2}{m_{\tilde q}^2}\ln\frac{m_{\tilde q}^2}{\langle M_{\tilde\chi}^2\rangle}-\frac{M_Z^2}{m_{\tilde \mu}^2}\ln\frac{m_{\tilde \mu}^2}{\langle M_{\tilde\chi}^2\rangle}\right]+\cdots\ \ \ ,
\ee
where $\langle M_{\tilde\chi}^2\rangle^{1/2} $ is the mass scale associated with the charginos and neutralinos. For $\tan\beta>1$ as currently favored, one requires $m_{\tilde\mu}^2\gsim 3
m_{\tilde q}^2$ in order to obtain a negative sign for $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]_{\rm SUSY-loop}$ (up to small logarithmic corrections). Note that such a sfermion spectrum would conflict with models that assume a universal sfermion mass at high scales, since in this case SU(3)$_C$ contributions to the renormalization group evolution of the first generation squark masses increase their magnitude at the weak scale relative to that of the first and second generation slepton masses.
\item[(iii)] A final possibility arises when there exists significant mixing among either the $u$- or $d$-type squarks, but in a way that is not identical for both. In this case, gluino loop effects dominate the SUSY corrections to the $Wud$ vertex, and one can accommodate a negative sign for $[{\Delta\hat r}^V_\beta-{\Delta\hat r}_\mu]_{\rm SUSY-loop}$ without requiring significant mixing among the smuons. In order to evade the $(g-2)_\mu$ constraints, the dominant contribution to $M_{LR}^2$ for the $d$ squarks must arise from a large value for the triscalar coupling $A_d$. In this case, avoiding color- or charge-breaking minima in the scalar potential implies that all but the CP-even Higgs $h^0$ must be quite heavy. One can suppress these supersymmetric SU(3)$_C$ loop corrections for $M_3\gsim 500$ GeV. In this case, however, one returns effectively to the situation characterized by item (i), leaving item (ii) as the only viable option.
\end{itemize}
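As a rough numerical illustration of case (i), one can evaluate the prefactor of the logarithm in the expression above and the size of the correction for a sample spectrum (degenerate squarks, $m_{\tilde\mu_2}=3\,m_{\tilde\mu_1}$); the weak-scale value of $\alpha$ used below is an assumed input.

```python
import math

# Order-of-magnitude check of case (i): prefactor of the log and the size of
# [Delta r_beta - Delta r_mu] for m_mu2 = 3*m_mu1 and degenerate squarks.
alpha = 1 / 127.9          # running alpha near the weak scale (assumed)
s2 = 0.2312                # sin^2(theta_W) in MS-bar at M_Z
c2 = 1 - s2

prefactor = alpha * (c2 - s2) / (32 * math.pi**2 * c2 * s2)
log_term = math.log(1.0 * (1.0 / 3.0)**4)   # ln[(mq2/mq1)^2 * (mmu1/mmu2)^4]
correction = prefactor * log_term

print(f"prefactor  = {prefactor:.2e}")   # ~ 7e-5
print(f"correction = {correction:.2e}")  # ~ -3e-4, negative as required
```

A splitting of this size thus yields a correction of the few $\times 10^{-4}$ scale relevant to the unitarity test, consistent with the $m_{\tilde\mu_2}\gsim 3\,m_{\tilde\mu_1}$ requirement quoted in the text.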
At present, the CKM first row unitarity situation is unclear, so one cannot draw strong conclusions about either RPV or $R$-parity conserving scenarios. However, the prospective implications for the superpartner spectrum or the presence of RPV interactions underline the importance of resolving the various theoretical and experimental uncertainties germane to CKM unitarity outlined above.
\section{Neutral Current Experiments}
\label{sec:nc}
\subsection{Introduction}
Historically, the study of parity-violating (PV) neutral current interactions has played an
important role in elucidating the structure of the electroweak
interaction. In the 1970's, PV deep inelastic scattering (DIS)
measurements performed at
SLAC confirmed the SM prediction for the structure of weak neutral
current interactions \cite{SLAC}. These results were consistent with a
value for the weak mixing angle given by ${\sin^2\theta_W}\approx 1/4$,
implying a tiny $V$(electron)$\times A$(quark) neutral current
interaction. Subsequent PV measurements -- performed at both very low
scales using atoms as well as at the $Z$-pole in $e^+e^-$ annihilation
-- have been remarkably consistent with the results of the SLAC DIS
measurement.
The value of $\hat{s}^2$ at scale $Q=\sqrt{|q^2|}=M_Z$
has been determined precisely via $Z$-pole
precision measurements at LEP and SLD. The global fit to all the
precision observables gives \cite{pdg}
\begin{equation}
\hat{s}^2(M_Z)=0.23118 \pm 0.00017
\end{equation}
The SM predicted scale-dependence of $\hat{s}^2$, on the other hand,
has never been established experimentally to a high precision.
The solid curve in Fig.~\ref{fig:sin2theta} (see Section \ref{sec:renorm}) shows the running of weak
mixing angle ${\sin^2\theta_W}$ [in the modified minimal subtraction
($\overline{\rm MS}$) scheme]
as a function of the scale $Q$.
The dip around ${M_{W}}$ is due to the
decoupling of the $W$ boson when $Q<{M_{W}}$.
As one moves to lower energies, quarks with mass $m_q > Q$
decouple and the slope of the $\hat{s}^2$ running changes.
For $Q<1$ GeV, all the heavy
quarks decouple and $\hat{s}^2$ is roughly a constant. The
difference between $\hat{s}^2$ at $Z$-pole and $\hat{s}^2$
at low energy is $\hat{s}^2(0)-\hat{s}^2({M_{Z}})=0.00749\pm 0.00015 \pm 0.00007$, where the first error is the experimental error\footnote{Here, we have taken the value of $\hat{s}^2({M_{Z}})$ from a fit to precision data rather than the value computed from Eq.~(\ref{eq:Gfswmz}).} in $\hat{s}^2({M_{Z}})$ and the second is the theoretical error associated with the running to $Q=0$.
Recently, determinations of $\hat{s}^2$ at various low energy
scales have been performed, although the experimental error
is still relatively large.
The cesium atomic parity-violation (APV) experiment measured the parity-forbidden transition between
atomic states by exploiting Stark interference effects \cite{APV}. The cesium weak charge,
which depends on $\hat{s}^2$ at $Q\approx 0$,
appears to be consistent with the SM prediction.
At higher energies, the NuTeV collaboration measured $\nu$- ($\bar\nu$-) nucleus
deep inelastic scattering~\cite{NuTeV}, and studied the
ratio between the cross section of the neutral current and
that of the charged current. The result can be interpreted as
a determination of $\hat{s}^2$ at $Q\approx 3$ ${\rm GeV}$.
The observed deviation of cross section ratios from the SM predictions
implies a $+3\sigma$ deviation in $\hat{s}^2$ at that scale.
More recently, the SLAC E158 collaboration measured the electron weak charge via
the PV M{\o}ller ($ee$) scattering \cite{E158}.
It determined
$\hat{s}^2$ at $Q^2\approx 0.026\ {\rm GeV}^2$ and obtained a result that is consistent with the
SM prediction at the 1.1 $\sigma$ level.
The Qweak experiment, which plans to measure the proton weak charge
via elastic PV $ep$ scattering using polarized electron beam at the Jefferson Laboratory (JLab),
is currently under construction. The expected $4\%$ measurement of the
proton weak charge at $Q^2\approx 0.03\ {\rm GeV}^2$
corresponds to a $0.3\%$ determination of ${\sin^2\theta_W}$:
$\delta {\sin^2\theta_W} =0.0007$ \cite{QWEAK}, better than any of the
current measurements.
Several future PV experiments are being considered.
One proposal involves a more precise version of PV M{\o}ller ($ee$) scattering at
JLab using the planned 12 GeV upgrade of the accelerator. With a $2.5\%$ precision in electron weak charge
measurement at $Q^2\approx 0.008\ {\rm GeV}^2$, $\hat{s}^2$ could
be determined with an error of $\delta \hat{s}^2 =0.00025$ \cite{jlabmoller},
comparable to the $Z$-pole measurements.
Another proposal is to study PV electron-deuterium DIS at JLab
with the current 6 GeV electron beam, or with the future 12 GeV upgrade.
Although the expected precision is worse than that of the Qweak experiment,
such measurements offer a unique opportunity to probe the $V(e)\times A(q)$
interactions, QCD higher twist effects, possible charge symmetry violation in the parton distribution functions, and the ratio $d(x)/u(x)$ at $x\to 1$. A DIS electron-deuterium experiment has been approved for running with the present 6 GeV beam\cite{pvdis6GeV}, and several options for experiments using the future 12 GeV beam are under consideration\cite{eDDIS}.
\begin{table}
\begin{tabular}{lcc}
\hline
Measurements&$\delta{\sin^2\theta_W}/{\sin^2\theta_W}$&$\delta{\sin^2\theta_W}$\\ \hline
Z-pole&0.07\%&0.00017\\
0.6\% APV $Q_W({\rm Cs})$&0.7\%&0.0016 \\
NuTeV $\nu$-DIS &0.7\%&0.0016\\
13.1\% SLAC E158 $Q_W(e)$&0.5\%&0.0013\\
$*$2.5\% JLab M{\o}ller $Q_W(e)$&0.1\%&0.00025\\
4\% JLab Qweak $Q_W(p)$&0.3\%&0.00072\\
$*$0.8\% JLab eD DIS-parity &0.45\%&0.0011\\ \hline
\end{tabular}
\caption{Precision of various experiments which are sensitive to the value
of ${\sin^2\theta_W}$ at low energies. Entries with $*$ are ideas for future
experiments.}
\label{tab:sin2thetaprecision}
\end{table}
In Table~\ref{tab:sin2thetaprecision}, we list the precision of
various current measurements and possible future experiments, along with the
sensitivity of the measurements of $\hat{s}^2$. Precise determinations of
the value of $\hat{s}^2$ at different scales, obtained from different types of experiments,
will provide a consistency check of the SM at loop level.
Any significant
deviation from the SM prediction
would constitute striking evidence for new physics.
These high precision,
low energy measurements will be sensitive
to new physics up to the TeV scale. For example, we can write down the
effective four fermion operators that contribute to
parity-violating $eq$ scattering as \cite{MRM99}
\begin{equation}
{\cal{L}}_{eq}^{PV}={\cal{L}}_{\rm SM}^{PV}+{\cal{L}}_{\rm new}^{PV}
=-\frac{G_{\mu}}{2 \sqrt{2}}\bar{e}\gamma_{\mu}\gamma_5e
\sum_{q} Q_W^q\bar{q}\gamma^{\mu}q+\frac{g^2}{4 \Lambda^2}
\bar{e}\gamma_{\mu}\gamma_5e\sum_{q}h_V^q\bar{q}\gamma^{\mu}q,
\end{equation}
where the first term is the SM contribution and the second term
gives the new physics effects.
Here, $g$ is the typical new physics coupling, $\Lambda$ is the new
physics scale, and $h_V^q$ is an ${\cal O}(1)$ coefficient that
parameterizes the new physics contributions for different quarks. A $4\%$ measurement of
the proton weak charge $Q_W^p$ corresponds to a probe
of a new physics scale of
\begin{equation}
\frac{\Lambda}{g}\sim \frac{1}{\sqrt{\sqrt{2}G_{\mu}|\delta Q_W^p|}}
\sim 4.6\ {\rm TeV}.
\end{equation}
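The quoted reach follows directly from the expression above, using the SM value $Q_W^p\simeq 0.0716$ given later in this section:

```python
import math

# Reproduce the ~4.6 TeV reach for a 4% measurement of the proton weak charge.
G_mu = 1.16637e-5                 # Fermi constant, GeV^-2
QWp_SM = 0.0716                   # SM prediction for Q_W^p
dQWp = 0.04 * QWp_SM              # 4% measurement precision

scale = 1.0 / math.sqrt(math.sqrt(2) * G_mu * dQWp)   # Lambda/g in GeV
print(f"Lambda/g = {scale/1000:.1f} TeV")             # ~ 4.6 TeV
```

This matches the $\sim 4.6$ TeV figure in the text.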
Table~\ref{tab:newphysicsscale} summarizes the sensitivity to various
new physics scales from current and future parity-violation
experiments \cite{jlabmoller}.
Also shown are the direct search limits for new physics
from current colliders (LEP2, CDF and HERA) and the indirect search limits from the current
electroweak precision fit.
The scales that the low energy precision measurements
can probe are close to -- and in some cases even exceed -- the ones accessible in high energy, direct searches. Even after the LHC begins running, the low energy measurements will be able to probe new physics scales
comparable to the LHC reach. Once any new physics is discovered at the LHC, low energy
precision measurements can be used to probe details of the new physics, such as
the couplings and charges of the new particles.
\begin{table}
\begin{tabular}{c|cc|cc|cc}
\hline
&\multicolumn{2}{c|}{$Z^\prime$ models}&
\multicolumn{2}{c|}{leptoquark}&\multicolumn{2}{c}{compositeness}\\
&$m(Z_X)$&$m(Z_{LR})$&$m_{LQ}$(up)&$m_{LQ}$(down)&$e-q$&$e-e$\\ \hline
Current direct search limits&0.69&0.63&0.3&0.3&--&--\\
Current electroweak fit&0.78&0.86&1.5&1.5&11$-$26&8$-$10\\
0.6\% $Q_W({\rm Cs})$&1.2&1.3&4.0&3.8&28&-- \\
13.1\% $Q_W(e)$&0.66&0.34&--&--&--&13\\
$*$ 2.5\% $Q_W(e)$&1.5&0.77&--&--&--&29\\
4\% $Q_W(p)$&0.95&0.45&3.1&4.3&28&--\\
\hline
\end{tabular}
\caption{Sensitivity to new physics scales from various current and future
low energy precision measurements. The entry with $*$ is an idea for a future
experiment. Also shown are direct search limits from current colliders
(LEP, CDF and HERA) and the indirect search limit from the current electroweak precision fit. The new physics scales presented here
are the mass of a $Z^\prime$ with an extra U(1) [$m({Z_X})$] or
in left-right models [$m(Z_{LR})$], the mass of a
leptoquark coupling to the up quark sector [$m_{LQ}$(up)] or down quark sector
[$m_{LQ}$(down)], and the compositeness scale for $e-q$ or $e-e$
contact interactions.
Entries with ``--'' either do not exist or do not apply. This Table is updated from
Ref.~\cite{jlabmoller}.}
\label{tab:newphysicsscale}
\end{table}
In the following sections,
we discuss in detail three different types of neutral current
experiments. Sec.~\ref{sec:pves} is devoted to
parity violating electron scattering (PVES),
which includes $ee$ M{\o}ller scattering,
$ep$ elastic scattering, and $eD$ deep inelastic scattering.
Atomic parity violation is discussed in Sec.~\ref{sec:apv}, and
neutrino-nucleus DIS is discussed in Sec.~\ref{sec:nutev}.
\subsection{Parity Violating Electron Scattering:
M{\o}ller and Qweak}
\label{sec:pves}
The SLAC E158 experiment measured PV $ee$ M{\o}ller scattering at
$Q^2 \sim 0.026\ {\rm GeV}^2$ \cite{E158}, while the
Qweak experiment at JLab will measure
PV $ep$ scattering at $Q^2 \sim 0.03\ {\rm GeV}^2$ \cite{QWEAK}.
In both cases, polarized electron beams are used to measure the PV asymmetry
\begin{equation}
A_{PV}=\frac{N_R-N_L}{N_R+N_L} \ \ \ ,
\end{equation}
where $N_R$ ($N_L$) is the number of detected events for incident electrons with positive (negative) helicity. Although the dominant contribution to
the scattering is via parity conserving photon exchange, the
interference between the photon exchange and the parity violating $Z$ exchange
generates the PV asymmetry and filters out the much larger electromagnetic scattering effects.
At leading order in $Q^2$, the contributions to
${A_{PV}}$ are governed by the $A(e)\times V(f)$ operator, with the coefficient being
$Q_W^f$, the \lq\lq weak charge" of the target
fermion $f$ ($f=e$ for $ee$ scattering, and $f=p$ for $ep$
scattering, where $Q_W^p=2 Q_W^u+Q_W^d$). The corresponding effective interaction is
\begin{eqnarray}
\label{eq:Leff}
{\cal L}_{EFF}^{ef}=-\frac{G_\mu}{2\sqrt 2}Q_W^f {\bar e}
\gamma_\mu\gamma_5 e {\bar f}\gamma^\mu f~.
\label{eq:qw}
\end{eqnarray}
At tree-level in the SM the weak charges of both the electron and the
proton are suppressed: $Q_W^p=-Q_W^e=1-4{\sin^2\theta_W}\approx
0.1$. One-loop SM electroweak radiative corrections further reduce
this small number, leading to the predictions
$Q_W^e=-0.0449$ \cite{Mar96,Erl-MJRM-Kur02} and
$Q_W^p=0.0716$ \cite{Erl-MJRM-Kur02}. The factor of $\gsim$10
suppression of these couplings in the SM renders them more transparent
to the possible effects of new physics. Consequently, experimental
precision of order a few percent, rather than a few tenths of a
percent, is needed to probe SUSY loop corrections.
An advantage of measuring ${\sin^2\theta_W}$
in these two PVES experiments is that both use a hydrogen target,
a relatively clean environment in which the
QCD complications of heavy nuclei are avoided. The interpretation is also
theoretically clean, since the hadronic uncertainties are relatively
small and under control\cite{Mar96,Erl-MJRM-Kur02}.
The $ee$ M{\o}ller scattering experiment measured a
parity violating asymmetry \cite{E158}: $A_{LR}=
(-131 \pm 14 \ ({\rm stat.}) \pm 10 \
({\rm syst.})) \times 10^{-9}$, leading to
the determination of the weak mixing angle
${\sin^2\theta_W}=0.2397 \pm 0.0010 \ ({\rm stat.}) \pm 0.0008 \ ({\rm syst.})$,
evaluated at $Q^2=0.026\ {\rm GeV}^2$. Compared to the SM prediction of
${\sin^2\theta_W} = 0.2381 \pm 0.0006$ at this energy scale, the E158 result
agrees with the SM value at the 1.1 $\sigma$ level. When expressed in terms of
the electron weak charge, the difference between the measured value
and the theoretically predicted one is
$\delta Q_W^e=(Q_W^e)_{\rm exp}-(Q_W^e)_{\rm SM}=0.0064 \pm 0.0051$.
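The central value of $\delta Q_W^e$ follows from the tree-level relation $Q_W^e=-(1-4{\sin^2\theta_W})$, which suffices for the shift even though radiative corrections modify the absolute value:

```python
# Map the E158 shift in sin^2(theta_W) onto the electron weak charge:
# Q_W^e = -(1 - 4*s2)  =>  dQ_W^e = 4*ds2  (tree-level relation).
s2_exp = 0.2397     # E158 result at Q^2 = 0.026 GeV^2
s2_sm  = 0.2381     # SM running prediction at the same scale

dQWe = 4 * (s2_exp - s2_sm)
print(f"dQ_W^e = {dQWe:.4f}")   # 0.0064, matching the value quoted above
```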
In the Qweak experiment, the SM prediction for the left-right asymmetry
is $290\times10^{-9}$.
The expected experimental precision for $ep$
scattering in the Qweak experiment is $4\%$. When translated
into a precision in the ${\sin^2\theta_W}$ measurement, it corresponds to
$\delta\hat{s}^2 = 0.0007$,
a factor of two smaller than the precision of the Cs APV
and NuTeV measurements.
In order to interpret these high precision electron scattering experiments in terms of possible new physics,
it is crucial to have the hadronic uncertainties under control. There are two
kinds of QCD uncertainties that one must consider: (a) QCD corrections to the weak charge
itself, and (b) QCD effects that impact the extraction of
the weak charge from the experimentally measured
PV asymmetry.
The QCD corrections to weak charges have been discussed in Sec.~\ref{sec:renorm}.
In the case of Qweak, the latter set of uncertainties is brought under control by a judicious choice of kinematics and by extrapolating the results of other experimental measurements.
At forward angle $\theta$, the PV asymmetry can be written
as~\cite{musolf1995}
\begin{equation}
A_{PV}=\frac{G_{\mu} Q^2}{4 \sqrt{2}\pi\alpha}\left[
Q_W^p+F(\theta, Q^2)
\right],
\end{equation}
where $F(\theta, Q^2)$ is an a priori unknown form factor contribution
that is proportional to $Q^2$ at low energies.
The form factor $F(\theta, Q^2)$ depends on a linear combination of isovector and isoscalar electromagnetic (EM) form factors as well as those associated explicitly with strange quarks. While the results of parity conserving electron scattering experiments provide the needed isovector and isoscalar EM contributions to $F(\theta, Q^2)$, the strange quark contributions can only be determined by additional PV electron scattering measurements. These contributions to $F(\theta, Q^2)$ -- which vanish as $Q^2\to 0$ -- have been studied with an extensive program of measurements by the SAMPLE~\cite{sample},
HAPPEX~\cite{HAPPEX}, PV A4~\cite{A4}, and G0~\cite{G0} Collaborations. The results yield tight constraints on the strange quark contributions to $F(\theta, Q^2)$, which one can then extrapolate
to the much smaller $Q^2$ relevant to the Qweak experiment. On the other hand, one cannot take
$Q^2$ to be too small since ${A_{PV}}$ itself is proportional to $Q^2$ and
the statistical error increases for smaller $Q^2$. Experimentally, these
two effects are optimized and $Q^2$ is
chosen to be about 0.03 ${\rm GeV}^2$. The resulting hadronic uncertainty
from the form factor is about 2\%, half of the total
experimental uncertainty.
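To see the relative size of the two terms in the asymmetry expression above, one can evaluate the leading $Q_W^p$ piece at the Qweak kinematics; using $\alpha(0)$ here is a simplification, since the appropriate scale choice for $\alpha$ is itself part of the radiative-correction analysis.

```python
import math

# Leading Q_W^p term of A_PV at Qweak kinematics (sketch only; the form
# factor F(theta, Q^2) supplies the remainder of the ~2.9e-7 SM asymmetry).
G_mu = 1.16637e-5        # Fermi constant, GeV^-2
alpha = 1 / 137.036      # alpha(0), an assumed simplification
Q2 = 0.03                # GeV^2
QWp = 0.0716             # SM proton weak charge

A_lead = G_mu * Q2 * QWp / (4 * math.sqrt(2) * math.pi * alpha)
print(f"A_lead = {A_lead:.2e}")   # ~ 1.9e-7
```

The weak charge term thus accounts for roughly two thirds of the full $290\times 10^{-9}$ asymmetry, with the form factor term supplying the rest, consistent with the $\sim 2\%$ hadronic error budget discussed above.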
The precise measurements of the weak charges could probe both supersymmetric loop effects and tree-level RPV contributions. For the $R$-parity conserving MSSM, contributions to the weak charge $Q_W^e$
and $Q_W^p$ appear at loop level. With higher-order corrections
included, the weak charge of a fermion $f$ can be written in terms of the parameters $\hat\rho_{NC}(0)$, $\hat\kappa$, and $\hat\lambda^f_{V,A}$ introduced in Section \ref{sec:renorm}:
\begin{equation}
\label{eq:C1f-radcorr}
Q_W^f = \hat\rho_{NC}(0) \left[2 I_3^f -4
Q_f\hat\kappa(0,\mu)\hat{s}^2(\mu)\right]+\hat\lambda^f_V +\left(2I_3^f-4Q_f {\hat s}^2\right)\hat\lambda^f_A+{\rm box}\ \ \ ,
\end{equation}
where $I_3^f$ and $Q_f$ are, respectively, the weak isospin and the
electric charge of the fermion $f$, and
$\hat{s}^2\equiv {\sin^2\theta_W}(M_Z^2)$.
The quantities $\hat\rho_{NC}(0)$ and
$\hat\kappa(0,\mu)$ are universal in that they do not depend on the fermion
$f$ under consideration.
Detailed expressions for $\hat\rho$ and $\hat\kappa$ can
be found in Sec.~\ref{sec:renorm}.
The corrections $\hat\lambda^f_{V,A}$, on the other
hand, depend on the fermion species. They include
the vertex and external leg corrections to the weak charge.
The relevant expressions for supersymmetric contributions to the $\hat\lambda^f_{V,A}$ can be found in
Ref.~\cite{sumichaelpves}\footnote{In these studies, the electron anapole contribution was included in the parameter $\hat\kappa$, which was denoted $\kappa_{PV}$; the quantity $\hat\rho_{NC}(0)$ was denoted $\rho_{PV}$; and the sum of the terms containing the $\hat\lambda^f_{V,A}$ in Eq.~(\ref{eq:C1f-radcorr}) was denoted by $\lambda_f$, which did not include the electron anapole moment contribution.}.
Note that at tree-level, one has
$\hat\rho_{NC}=1=\hat\kappa$ and $\hat\lambda^f_{V,A}=0$, while
both SM and new physics contributions enter at loop level. Consequently, in what follows we will refer to $\delta\hat\rho=\hat\rho_{NC}-1$ and $\delta\hat\kappa=\hat\kappa-1$.
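At tree level ($\hat\rho_{NC}=\hat\kappa=1$, $\hat\lambda^f_{V,A}=0$), Eq.~(\ref{eq:C1f-radcorr}) reduces to $Q_W^f=2I_3^f-4Q_f\hat{s}^2$. The short sketch below evaluates the tree-level charges with an assumed $\hat{s}^2\approx 0.231$; it illustrates the $1-4\hat{s}^2$ suppression and is not a substitute for the loop-corrected values.

```python
# Tree-level weak charges from Q_W^f = 2 I_3^f - 4 Q_f s^2.
# s2 = 0.231 is an assumed input; radiative corrections are omitted.
s2 = 0.231

def qw_tree(I3, Q):
    """Tree-level weak charge of a fermion with weak isospin I3 and charge Q."""
    return 2 * I3 - 4 * Q * s2

QWe = qw_tree(-0.5, -1.0)        # electron: -(1 - 4 s^2)
QWu = qw_tree(+0.5, +2.0 / 3.0)  # up quark
QWd = qw_tree(-0.5, -1.0 / 3.0)  # down quark
QWp = 2 * QWu + QWd              # proton = 2u + d = 1 - 4 s^2
print(QWe, QWp)  # both suppressed by the small factor 1 - 4 s^2
```

The numerical smallness of $1-4\hat{s}^2\approx 0.08$ is what makes both weak charges sensitive probes of new physics.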
The MSSM loop contributions to the electron and proton weak charges
have been analyzed in detail in \cite{sumichaelpves}; the results are
reproduced in Fig.~\ref{fig:mssm-vs-parity}. Here, we plot
the MSSM loop contributions
to the shift in the weak charge of the proton, $\delta Q_W^p =
2\delta Q_W^u+ \delta Q_W^d$, versus the corresponding shift in the
electron's weak charge, $\delta Q_W^e$, normalized to the respective
SM values. The dots show the results of a random scan over a range of MSSM parameters.
The loop corrections in the $R$-parity conserving MSSM can be as
large as $\sim 4\%$ ($Q_W^p$) and $\sim 8\%$ ($Q_W^e$) -- roughly the
size of the experimental errors for the two PVES
measurements. Given the current E158 results, the SUSY loop contributions
are consistent with the measurement at about the $2\sigma$ level.
In general, the MSSM effects are larger for large
$\tan\beta$, light SUSY particles, and large splitting between
sfermions, although the latter is an isospin-breaking effect and
is therefore constrained by the oblique
$T$ parameter.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4in]{rpv_loop_ab.ps}
\caption{Relative shifts in the electron and proton weak charges
due to SUSY effects (updated plot from Ref.~\cite{sumichaelpves}).
Dots indicate MSSM loop corrections for $\sim 3000$
randomly-generated SUSY-breaking parameters. The interiors of the truncated
elliptical regions (a) and (b) give the possible shifts
due to R-parity non-conserving SUSY interactions (95\% confidence),
using the values of
$\delta|V_{ud}|^2/|V_{ud}|^2$ for cases (a) and (b) in Table~\ref{tab:rpv-constrain}, respectively.
}
\label{fig:mssm-vs-parity}
\end{center}
\end{figure}
The shifts $\delta Q_W^{e,p}$ are dominated by $\delta\hat\kappa^{\rm
SUSY}$ since the corrections to $Q_W^{e,p}$ due to shifts in the
$\hat\rho_{NC}$ parameter are suppressed by $1-4\hat{s}^2$. In addition,
the non-universal corrections involving vertex corrections and
wavefunction renormalization experience significant cancellations.
Since $\delta\hat\kappa^{\rm SUSY}$ is universal, and thus identical for both
$Q_W^e$ and $Q_W^p$, the $\delta\hat\kappa$ dominance produces a
linear correlation between the two weak charges.
The correction $\delta\hat\kappa^{\rm SUSY}$ is nearly always
negative, corresponding to a reduction in the value of
${\sin^2\theta_W}^{eff}(Q^2)=\hat\kappa(Q^2,\mu)\hat{s}^2({\mu})$ for the
parity-violating electron scattering experiments [see
Eq.~(\ref{eq:C1f-radcorr})].
\begin{figure}[ht]
\resizebox{5 in}{!}{
\includegraphics*[30,480][430,750]{distinguishNP.ps}}
\caption{Comparison of the anticipated errors
for $Q_W^p$ and $Q_W^e$ with the deviations from the SM expected from
various extensions, together with the ranges allowed (at 95\% CL) by fits to existing
data \cite{Erl-MJRM-Kur02-update}.}
\label{fig:newphy}
\end{figure}
As evident from Fig.~\ref{fig:mssm-vs-parity}, the relative sign of
the corrections to both $Q_W^p$ and $Q_W^e$ -- normalized to the
corresponding SM values -- is nearly always the same and nearly always
positive. Since $Q_W^p>0$ ($ Q_W^e<0$) in the SM, SUSY loop
corrections give $\delta Q_W^p>0$ ($\delta Q_W^e<0$). This
correlation is significant, since the effects of other new physics
scenarios can display different signatures.
The combined measurements of the electron and proton weak charges
can serve as a probe to distinguish among different new physics scenarios, as illustrated in
Fig.~\ref{fig:newphy} \cite{Erl-MJRM-Kur02-update}. The arrows
indicate correlated effects. For example, while both
superpartner loops and leptoquark exchange give a positive contribution to the proton weak
charge, only the MSSM gives rise to a sizable effect on the electron weak
charge \cite{Erl-MJRM-Kur02,MRM99}. For the
general class of $Z^{\prime}$ theories based on the $E_6$ gauge group, with neutral
gauge bosons having mass $\buildrel < \over {_\sim}$ 1000 GeV, the effects on $Q_W^p$ and
$Q_W^e$ are also correlated, but $\delta Q_W^{e,p}/Q_W^{e,p}$ can have
either sign in this case \cite{Erl-MJRM-Kur02, MRM99}.
In cases where $E_6~Z^\prime$ models and the MSSM have similar effects on the
electron and proton weak charges, a measurement of the cesium weak charge using atomic parity violation can further distinguish the two, as explained
in Sec.~\ref{sec:apv} below.
If we relax the assumption of $R$-parity conservation, tree-level
corrections to the weak charges are generated by the RPV interactions. Integrating out the sfermions,
one obtains the following effective four-fermion Lagrangian, using the
notation $\Delta_{ijk}^{({\prime})}$ defined in Eq.~(\ref{eq:deltas}):%
\begin{eqnarray}
\label{eq:rpveffectivepves}
{\cal L}_{{RPV}}^{{EFF}} &=& -{\Delta^{\prime}_{1k1}}(\tilde{q}_L^k){\bar d}_R
\gamma^\mu d_R {\bar e}_{L}\gamma_\mu
e_{L} + {\Delta^{\prime}_{11k}({\tilde d}_R^k)}{\bar
u}_L\gamma^\mu u_L {\bar e}_{L}\gamma_\mu e_{L} \nonumber \\
&&-{\Delta_{12k}({\tilde e}_R^k)}\left[{\bar \nu}_{\mu
L}\gamma^\mu \mu_L {\bar e}_{ L}\gamma_\mu \nu_{e L}+{\rm
h.c.}\right],
\end{eqnarray}
where we have taken $|q^2|\ll m_{\tilde f}^2$ and have retained only
the terms relevant for PVES.
The last term contributes to muon decay, which affects the extraction
of the Fermi constant from the muon lifetime.
Note the absence from Eq.~(\ref{eq:rpveffectivepves})
of the parity-violating contact four-electron
interaction. This is because the superpotential in
Eq.~(\ref{eq:RPVL}) can only produce parity-conserving contact
interactions between identical leptons.
The relative
shifts in the weak charges are \cite{MRM00}:
\begin{eqnarray}
\label{eq:rpv-weak}
\frac{\delta Q_W^{e}}{Q_W^{e}}&\approx&-\left[1+ \left(\frac{4}{
1-4\hat{s}^2}\right)\lambda_x \right]\Delta_{12k}({\tilde e}_R^k)
=-29.8 \Delta_{12k}({\tilde e}_R^k)~,\nonumber \\
\frac{\delta Q_W^{p}}{Q_W^{p}}&\approx &\left(\frac{2}{
1-4\hat{s}^2}\right) \left[ -2\lambda_x \Delta_{12k}({\tilde
e}_R^k) +2\Delta_{11k}^\prime({\tilde
d}_R^k)-\Delta_{1k1}^\prime({\tilde
q}_L^k)\right]-\Delta_{12k}({\tilde e}_R^k)~,\nonumber \\
&=&-18.7\Delta_{12k}({\tilde
e}_R^k) +55.9\Delta_{11k}^\prime({\tilde
d}_R^k)-27.9\Delta_{1k1}^\prime({\tilde
q}_L^k)~,\nonumber \\
\lambda_x&=&\frac{{\hat s}^2(1-{\hat s}^2)}{1-2{\hat s}^2} \frac{1}{
1-\Delta {\hat r^{\rm SM}}} \approx 0.35 ~.
\end{eqnarray}
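The quoted value $\lambda_x\approx 0.35$ follows directly from its definition; the sketch below checks this with assumed representative inputs $\hat{s}^2\approx 0.231$ and $\Delta\hat{r}^{\rm SM}\approx 0.06$.

```python
# Sketch: lambda_x = s^2 (1 - s^2) / (1 - 2 s^2) / (1 - Delta_r_SM),
# evaluated with assumed representative inputs.
s2 = 0.231
delta_r = 0.06  # assumed approximate value of Delta r-hat (SM)

lambda_x = s2 * (1 - s2) / (1 - 2 * s2) / (1 - delta_r)
print(lambda_x)  # approximately 0.35, as quoted in the text
```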
Since the $\Delta_{ijk}^{(\prime)}$
are non-negative, Eq.~(\ref{eq:rpv-weak}) indicates
that the relative shift in $Q_W^e$ is never positive. On the
other hand, the relative shift in $Q_W^p$ can have either sign
depending on the relative magnitudes of $\Delta_{12k}$,
$\Delta_{11k}^\prime$, and $\Delta_{1k1}^\prime$.
The quantities $\Delta_{ijk}$, {\em etc.} in
Eq.~(\ref{eq:rpv-weak}) are constrained from the existing precision
data~\cite{MRM00}. A summary of the existing constraints
is given in Table~\ref{tab:rpv-constrain} in Sec.~\ref{sec:susy}, which
includes superallowed nuclear
$\beta$-decay that constrains $|V_{ud}|$ \cite{towner-super}, atomic
PV measurements of the cesium weak charge $Q_W^{\rm Cs}$ \cite{Ben99},
the ratio $R_{e/\mu}$ of $\pi_{l2}$ decays \cite{pil2}, and a
comparison of the Fermi constant $G_\mu$ with the appropriate
combination of $\alpha$, $M_Z$, and ${\sin^2\theta_W}$ \cite{marciano99}.
The 95\% CL region allowed by this fit in the $\delta Q_W^p/Q_W^p$
vs. $\delta Q_W^e/Q_W^e$ plane is shown by the closed curves (a) and (b) in
Fig.~\ref{fig:mssm-vs-parity}, corresponding to the RPV fit with the values of
$\delta|V_{ud}|^2/|V_{ud}|^2$ for cases (a) and (b) in Table~\ref{tab:rpv-constrain}, respectively.
Note that the truncation of the initially
elliptical curves is due to the sign requirements
$\Delta_{ijk}(\tilde f),~\Delta_{ijk}^\prime(\tilde f)\ge 0$ [see
Eq.~(\ref{eq:deltas})]. The corrections to $Q_W^e$ and $Q_W^p$ from RPV
SUSY could be two to three times larger than the SUSY loop effects.
In addition, the prospective effects of $P_R$ non-conservation are
quite distinct from SUSY loops. The value of $\delta Q_W^e/Q_W^e$ is
never positive in contrast to the situation for SUSY loop effects,
whereas $\delta Q_W^p/Q_W^p$ can have either sign.
Thus, a comparison of results for the two
parity-violating electron scattering experiments could help determine
whether this extension of the MSSM is to be favored over other new
physics scenarios (see also Ref.~\cite{Erl-MJRM-Kur02}).
If SUSY is the new physics beyond the SM, it is particularly
important to know whether $R$-parity is conserved. Indeed, if $R$-parity is
conserved, then the neutral LSP (for example,
the lightest neutralino ${\chi}^0$) would be a suitable candidate for
dark matter. If any deviation of the electron and proton weak charges is observed,
the correlation between the two would help identify whether or not
$R$-parity is conserved and, thus, shed light on the feasibility of SUSY dark matter.
Ideas for measuring ${\sin^2\theta_W}$ at low energy with higher precision have been
explored recently.
A similar M{\o}ller ($ee$) scattering
measurement has been proposed at the JLab 12 GeV upgrade \cite{jlabmoller}.
The estimated precision for the electron weak charge is 2.5\%.
It could be used to determine the value of ${\sin^2\theta_W}$ at
$Q^2\sim 0.008\ {\rm GeV}^2$, with a 0.1\%
precision: $\delta{\sin^2\theta_W}=0.00025$, which is comparable to the precision of
${\sin^2\theta_W}$
determined from $Z$-pole precision measurements \cite{jlabmoller}.
Such high precision enables us to constrain the SUSY parameter space,
whether or not a deviation of $Q_W^e$ is observed.
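The quoted precision $\delta{\sin^2\theta_W}=0.00025$ follows from simple error propagation: since $Q_W^e\approx -(1-4\hat\kappa\hat{s}^2)$, one has $|\partial Q_W^e/\partial\hat{s}^2|\approx 4\hat\kappa$. The sketch below is heavily hedged: the inputs $|Q_W^e|\approx 0.045$ and $\hat\kappa(0)\approx 1.03$ are assumed representative numbers, not values taken from the proposal.

```python
# Error-propagation sketch for a 2.5% measurement of the electron weak charge.
# Assumed inputs: |Q_W^e| ~ 0.045 (radiatively corrected SM magnitude)
# and kappa ~ 1.03, so that |dQ_W^e/ds^2| ~ 4 * kappa.
QWe_mag = 0.045
kappa = 1.03

dQWe = 0.025 * QWe_mag    # absolute error on the electron weak charge
ds2 = dQWe / (4 * kappa)  # propagated error on sin^2(theta_W)
print(ds2)  # roughly 2.7e-4, consistent in size with the quoted 0.00025
```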
\subsection{Electron-Deuterium Parity Violating Deep Inelastic Scattering}
\label{sec:PV-DIS}
In light of the recent developments in parity-violating $ee$ and $ep$ scattering
discussed in Sec.~\ref{sec:pves},
a new generation of PV DIS measurements with deuterium targets
has been considered with the JLab 6 GeV beam \cite{pvdis6GeV} and the 12 GeV upgrade \cite{eDDIS}.
The DIS-parity experiments seek to study the
deep inelastic scattering of
a longitudinally polarized electron beam from an unpolarized deuterium target.
Neglecting target mass and higher-twist corrections as well as
contributions from sea quarks,
the PV asymmetry for eD DIS has the simple form:
\begin{equation}
\label{eq:pvasym}
{A_{\sst PV}^{eD,\ \rm DIS}}={3 G_\mu Q^2\over
2\sqrt{2}\pi\alpha}\frac{2 C_{1u}-C_{1d}+ Y(2 C_{2u}-C_{2d})}{5}\ \ \ ,
\end{equation}
where
\begin{equation}
Y=\frac{1-(1-y)^2}{1+(1-y)^2-y^2R/(1+R)},\ \ \ {\rm and}\ \ \
R(x,Q^2)=\frac{\sigma_L}{\sigma_T}\approx 0.2,
\end{equation}
and $y\in [0,1]$ is the fractional energy transfer to the target in the lab
frame.
The quantities $C_{iq}$ parameterize the low-energy, PV
electron-quark interaction
\begin{equation}
{\cal L}^{eq}_{\rm PV} = {G_\mu\over \sqrt{2}}\sum_q\ \left[ C_{1q}
{\bar e}\gamma^\mu\gamma_5 e {\bar q}\gamma_ \mu q \ + \ C_{2q} {\bar
e}\gamma^\mu e {\bar q} \gamma_\mu\gamma_5 q\right].
\end{equation}
Using the SM values for $C_{iq}$ at tree level, one obtains
\begin{equation}
{A_{\sst PV}^{eD,\ \rm DIS}}\approx 10^{-4}Q^2\left[
\frac{3}{2}(1+Y)-\left(\frac{10}{3}+6Y\right){\sin^2\theta_W}
\right].
\end{equation}
The DIS asymmetry is much larger than $A_{PV}$ in the M{\o}ller scattering
and Qweak experiments: for $Q^2=3.7\ {\rm GeV}^2$, ${A_{\sst PV}^{eD,\ \rm DIS}}=0.0003$.
An expected 0.8\% measurement of ${A_{\sst PV}^{eD,\ \rm DIS}}$ corresponds to 0.45\%
precision in $\hat{s}^2$: $\delta\hat{s}^2=0.0011$.
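The size of this asymmetry can be reproduced from the approximate tree-level formula above. In the sketch below, the kinematic choice $y=1$ (so that $Y=1.2$ for $R\approx 0.2$) and $\hat{s}^2\approx 0.231$ are assumptions made for illustration.

```python
# Sketch: approximate tree-level eD DIS asymmetry,
#   A ~ 1e-4 * Q^2 * [ 3/2 (1 + Y) - (10/3 + 6 Y) s^2 ].
# Assumed inputs: y = 1, R = 0.2, s2 = 0.231, Q2 = 3.7 GeV^2.
s2, R, y, Q2 = 0.231, 0.2, 1.0, 3.7

Y = (1 - (1 - y)**2) / (1 + (1 - y)**2 - y**2 * R / (1 + R))
A = 1e-4 * Q2 * (1.5 * (1 + Y) - (10.0 / 3.0 + 6 * Y) * s2)
print(Y, A)  # Y = 1.2 and A of order 3e-4, matching the quoted size
```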
While the sensitivity to $\hat{s}^2$ in eD DIS is not as good as in
the SLAC E158 and Qweak experiments, it offers a unique opportunity to
constrain the combination
$2 C_{2u}-C_{2d}$.
Assuming the successful completion of the Qweak experiment, an
absolute uncertainty of $\delta C_{1u(d)}=0.005$
will be obtained. With this prospective limit, the DIS-parity
experiment would place an absolute uncertainty of
$\delta(2 C_{2u}-C_{2d})=0.026$. When taken together with the
results from the SAMPLE experiment~\cite{sample}, much tighter bounds
are placed on $C_{2u}$ and $C_{2d}$ than were previously
available \cite{pdg}, as illustrated in
Fig.~\ref{fig:eDDIS} \cite{eDDISplot}.
\begin{figure}
\begin{center}
\includegraphics[width=4in, angle=0]{c2u_c2d.eps}
\caption{The limits on $C_{2u}$ and $C_{2d}$ listed by the Particle
Data Group \cite{pdg}, by the SAMPLE experiment \cite{sample},
and by DIS-parity \cite{eDDIS}. The plot is taken
from Ref.~\cite{eDDISplot}.}
\label{fig:eDDIS}
\end{center}
\end{figure}
For the R-parity conserving MSSM, loop corrections to $C_{iq}$ appear.
The $C_{iq}$ are conveniently computed using the
expressions
\begin{eqnarray}
C_{1q} & = & 2\hat\rho_{NC} I_3^e(I_3^q-2 Q_q {\hat\kappa}
\hat{s}^2)-\frac{1}{2}{\hat\lambda}_{1}^q
\\ C_{2q} & = &
2{\hat\rho}_{NC} I_3^q(I_3^e-2 Q_e{\hat\kappa} \hat{s}^2)
-\frac{1}{2}{\hat\lambda}_{2}^q\ \ \ ,
\end{eqnarray}
where the ${\hat\lambda}^q_{1,2}$ contain the appropriate, process-dependent combinations of the $\hat\lambda^q_{V,A}$. Detailed expressions for the
$\hat\lambda_{i}^q$ can be found in Ref.~\cite{eDDISsu-musolf}.
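Setting $\hat\rho_{NC}=\hat\kappa=1$ and $\hat\lambda^q_{1,2}=0$ in the expressions above recovers the tree-level couplings. The sketch below evaluates them with an assumed $\hat{s}^2\approx 0.231$ and checks the combinations $2C_{1u}-C_{1d}$ and $2C_{2u}-C_{2d}$ that enter the eD DIS asymmetry.

```python
# Tree-level C_1q and C_2q from the expressions above, with
# rho_NC = kappa = 1 and lambda = 0; s2 = 0.231 is an assumed input.
s2 = 0.231
I3e, Qe = -0.5, -1.0  # electron weak isospin and charge

def c1(I3q, Qq):
    return 2 * I3e * (I3q - 2 * Qq * s2)

def c2(I3q, Qq):
    return 2 * I3q * (I3e - 2 * Qe * s2)

C1u, C1d = c1(+0.5, +2.0 / 3.0), c1(-0.5, -1.0 / 3.0)
C2u, C2d = c2(+0.5, +2.0 / 3.0), c2(-0.5, -1.0 / 3.0)
# Combinations probed by the eD DIS asymmetry:
print(2 * C1u - C1d, 2 * C2u - C2d)  # -3/2 + 10/3 s^2 and -3/2 + 6 s^2
```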
As before, tree level contributions to $C_{iq}$ arise for RPV SUSY.
In terms of the $\Delta_{ijk}({\tilde f})$ and
$\Delta_{ijk}^\prime({\tilde f})$, one has the following shifts in the
$C_{iq}$:
\begin{eqnarray}
\Delta C_{1u}^{\rm RPV} & = & -[C_{1u}-\frac{4}{3}\lambda_x
]\Delta_{12k}({\tilde e}^k_R)-\Delta^\prime_{11k}({\tilde d}^k_R), \\
\Delta C_{1d}^{\rm RPV} & = & -[C_{1d}+\frac{2}{3}\lambda_x
]\Delta_{12k}({\tilde e}^k_R)+\Delta^\prime_{1k1}({\tilde q}^k_L),\\
\Delta C_{2u}^{\rm RPV} & = & -[C_{2u}-2\lambda_x
]\Delta_{12k}({\tilde e}^k_R)-\Delta^\prime_{11k}({\tilde d}^k_R),\\
\Delta C_{2d}^{\rm RPV} & = & -[C_{2d}+2\lambda_x
]\Delta_{12k}({\tilde e}^k_R)-\Delta^\prime_{1k1}({\tilde q}^k_L),
\end{eqnarray}
where $\lambda_x$ is defined in Eq.~(\ref{eq:rpv-weak}).
\begin{figure}[ht]
\hspace{0.00in}
\begin{center}
\resizebox{8.cm}{!}{\includegraphics*[30,200][520,600]{Ad_qwe.ps}}
\resizebox{8.cm}{!}{\includegraphics*[30,200][520,600]{Ad_qwp.ps}}
\caption{95\% CL allowed region for the RPV contribution to
${A_{\sst PV}^{eD,\ \rm DIS}}(y=1, Q^2=3.7\ {\rm GeV}^2)$
vs. electron weak charge (a) and proton weak charge (b).
The dots indicate the SUSY loop corrections.
The figures are reprinted from Ref.~\cite{eDDISsu-musolf} with
permission from Elsevier. }
\label{fig:qwe_qwp}
\end{center}
\end{figure}
In Fig.~\ref{fig:qwe_qwp}, we illustrate the sensitivity of ${A_{\sst PV}^{eD,\ \rm DIS}}$
to the effects of MSSM loop contributions and tree-level
RPV effects \cite{eDDISsu-musolf}. We plot the relative shifts
in ${A_{\sst PV}^{eD,\ \rm DIS}}$ vs. those in $Q_W^e$ and $Q_W^p$.
The interior of the truncated ellipse gives
the 95\% C.L. region from RPV effects
allowed by other precision electroweak data.
A deviation of about 1\% could be expected from MSSM loop effects, while
the maximum correction from RPV effects would be $-1.5\%$,
corresponding to about $2\sigma$ for the precision proposed
in Ref.~\cite{eDDIS}.
The presence of RPV
effects would induce negative relative shifts in both ${A_{\sst PV}^{eD,\ \rm DIS}}$ and
$Q_W^e$, whereas the relative sign of the loop corrections is positive
in both cases. A sizable positive
shift in $Q_W^p$ (up to $3\sigma$ for the proposed Qweak
measurement) due to RPV contributions could correspond to a tiny
effect on ${A_{\sst PV}^{eD,\ \rm DIS}}$ whereas a substantial negative shift in the proton
weak charge could also occur in tandem with a substantial negative
correction to ${A_{\sst PV}^{eD,\ \rm DIS}}$. On the other hand, even a result for $Q_W^p$
consistent with the SM would not rule out a sizable effect on
${A_{\sst PV}^{eD,\ \rm DIS}}$.
The addition of an eD DIS measurement would provide a useful
complement to the PV $ee$ and elastic $ep$ measurements,
assuming it can be performed with $\sim$ 0.5\% precision or better.
\subsection{Atomic Parity Violation}
\label{sec:apv}
The effects of atomic parity violation (APV) can be measured either by observing the rotation of the
polarization plane of linearly polarized light, or by measuring the rate for a Stark
induced transition. The most precise measurement of an APV effect has been
performed by the Boulder group using the Stark interference method on a
beam of cesium atoms \cite{APV}. The PV transition amplitude between
two atomic states can be expressed as
\begin{equation}
M(n^\prime P_{1/2} \rightarrow n S_{1/2} )
\sim \frac{G_{\mu}}{2\sqrt{2}}C_{SP}(Z)Q_W(Z,N)+\cdots,
\end{equation}
where the dots denote the effects of finite nuclear size, nucleon substructure,
and the nuclear spin-dependent term \cite{Erl-MJRM-Kur02,apvnuclear}.
The uncertainty associated with these effects is small, about 0.15\%.
$Q_W(Z,N)$ is the weak charge of the atom, a combination of
the weak charges of the up and down quarks:
\begin{equation}
Q_W(Z,N)=(2Z+N)Q_W^u+(Z+2N)Q_W^d\approx Z(1-4 {\sin^2\theta_W})-N \approx -N.
\end{equation}
The coefficient $C_{SP}(Z)$ parametrizes
the contribution related to atomic structure \cite{apvatomic}.
A precise measurement of the cesium transition dipole amplitude
can be used to determine $C_{SP}$ with relatively small
uncertainty \cite{apvC}.
The cesium weak charge extracted from the APV measurement is \cite{APV}
\begin{equation}
Q_W^{\rm Cs}({\rm exp.})=-72.69 \pm 0.48,
\end{equation}
with a combined experimental and theoretical uncertainty
of about 0.6$\%$. This is consistent with the SM
prediction \cite{Erl-MJRM-Kur02}
\begin{equation}
Q_W^{\rm Cs}({\rm SM})=-73.16 \pm 0.13.
\end{equation}
The correction to
the atomic weak charge can be written as
a sum of the corrections to the up- and down-quark weak charges:
\begin{equation}
\delta Q_W(Z,N) = (2Z+N)\delta Q_W^u + (2N+Z)\delta Q_W^d.
\end{equation}
Using the results for the MSSM corrections to the up- and down-quark
weak charges described in Sec.~\ref{sec:pves},
we can obtain the contribution of SUSY loop
corrections to the weak charges of heavy nuclei probed with APV.
Since the sign
of $\delta Q_W^f/Q_W^f$ due to superpartner loops is nearly always the
same, and since $Q_W^u>0$ and $Q_W^d<0$ in the SM, a strong
cancellation between $\delta Q_W^u$ and $\delta Q_W^d$ occurs in heavy
nuclei. This cancellation implies that the magnitude of superpartner loop contributions to $\delta
Q_W(Z,N)/Q_W(Z,N)$ is generally less than about 0.2\% for cesium and
is equally likely to have either sign. Since the presently quoted
uncertainty for the cesium nuclear weak charge is about 0.6\%
\cite{sushov02}, cesium APV does not substantially constrain the SUSY
parameter space. Equally important, the present agreement of
$Q_W^{\rm Cs}$ with the SM prediction does not preclude significant
shifts in $Q_W^{e,p}$ arising from SUSY. The situation is rather
different, for example, in the $E_6~Z^\prime$ scenario, where sizable
shifts in $Q_W^{e,p}$ would also imply observable deviations of
$Q_W^{\rm Cs}$ from the SM prediction.
There are several ongoing atomic parity violation experiments following
the cesium APV results. A more precise cesium APV measurement is
underway \cite{apvparis}. The Seattle group is measuring APV effects using
${\rm Ba}^+$ ions, which may achieve the same level of precision
as cesium \cite{apvseattle}.
The measurement of APV effects along an isotope chain
would eliminate the large theoretical uncertainties from atomic structure.
Efforts are underway at Berkeley \cite{apvberkeley} to measure
APV along the Yb isotope chain.
A measurement of the helium weak charge via the $0^+\rightarrow 0^+$ transition
is under investigation at JLab, which could be used to cross-check
the cesium APV experiments \cite{QWHe}.
Furthermore, weak charge data from JLab
on both hydrogen and helium would have the
advantage of correlated errors that cancel in the ratio
$Q_W^p/Q_W^{\rm He}$, potentially yielding significantly tighter
constraints on new physics. A preliminary study showed that a 1\% measurement
of the helium weak charge can constrain RPV SUSY at nearly the same
level as a 4\% proton weak charge experiment \cite{QWHe, suprivate}.
\subsection{Neutrino-Nucleus Deep Inelastic Scattering}
\label{sec:nutev}
Neutrino scattering experiments have played a key role in elucidating the
structure of the SM. Recently, the NuTeV collaboration has performed a
precise determination of the ratio ${R_\nu}$
(${R_{\bar\nu}}$) of neutral current and charged current deep-inelastic
$\nu_\mu$ ($\bar\nu_\mu$)-nucleus cross sections \cite{NuTeV}, which can be
expressed as:
\begin{equation}
\label{eq:rnudef}
R_{\nu ({\bar\nu})} = \frac{\sigma(\nu (\bar\nu)N \rightarrow \nu X)}
{\sigma(\nu (\bar\nu) N \rightarrow l^{-(+)} X)}=
(g_L^{\rm eff})^2 + r^{(-1)} (g_R^{\rm eff})^2\ \ \ ,
\end{equation}
where $r=\sigma^{CC}_{{\bar\nu} N}/\sigma^{CC}_{\nu N}$ and
$(g_{L,R}^{\rm eff})^2$ are effective hadronic couplings (defined below).
Comparing
the SM predictions \cite{pdg} for $(g_{L,R}^{\rm eff})^2$ with the values
obtained by the NuTeV
Collaboration yields the deviations
$\delta R_{\nu({\bar\nu})}=
R_{\nu({\bar\nu})}^{\rm exp}-R_{\nu({\bar\nu})}^{\rm SM}$:
\begin{equation}
\delta R_{\nu}=-0.0033 \pm 0.0007, \ \ \
\delta R_{\bar{\nu}}=-0.0019 \pm 0.0016.
\label{eq:deltaRnu}
\end{equation}
Within the SM, these results may be interpreted as a test of the
scale dependence of $\hat{s}^2$, since the $(g_{L,R}^{\rm eff})^2$ depend on the
weak mixing angle. The
results of the NuTeV measurement
imply a $+3\sigma$ deviation at $Q \sim 3$ GeV.
This interpretation of the NuTeV results has been the subject of considerable
debate. Previously unaccounted-for effects -- such as
NLO QCD corrections \cite{nutevqcd, davidson}, electroweak radiative
corrections \cite{nutevew, davidson},
a strange sea asymmetry \cite{nutevstrange}, isospin
violation \cite{nuteviso}, nuclear shadowing \cite{nutevshadow},
other nuclear effects such as the neutron excess in the target
and nuclear cross section effects \cite{nutevnuclear},
and the electron neutrino content of the NuTeV beam \cite{nutevnue} --
have
been proposed as possible remedies for the
anomaly. Alternatively, one may consider physics beyond the
SM \cite{davidson, kur-rm-su-nutev, nutevnew}.
In this review, we focus on the
MSSM effects on neutrino-nucleus scattering \cite{kur-rm-su-nutev}.
For momentum transfers $q^\mu$ satisfying $|q^2| \ll {M^2_{Z}}$, the neutrino-quark
interactions can be represented
with sufficient accuracy by an effective four-fermion Lagrangian:
\begin{eqnarray}
{\cal L}_{\nu q}^{NC} & = & -\frac{G_\mu\hat \rho_{NC}}{\sqrt{2}}
{\bar\nu}_\mu\gamma^\lambda (1-\gamma_5) \nu_\mu
\sum_q {\bar q}\gamma_\lambda [\epsilon_L^q (1-\gamma_5)+\epsilon_R^q
(1+\gamma_5)]q\ , \\
{\cal L}_{\nu q}^{CC} & = & -\frac{G_\mu\hat\rho_{CC}}{\sqrt{2}}
{\bar\mu}\gamma^\lambda (1-\gamma_5) \nu_\mu
{\bar u}\gamma_\lambda(1-\gamma_5) d + {\rm h.c.} \ ,
\end{eqnarray}
where
\begin{eqnarray}
\epsilon_L^q & = & I_L^3 - Q_q\hat\kappa_\nu\hat{s}^2 + \hat\lambda_L^q\ , \\
\epsilon_R^q & = & -Q_q\hat\kappa_\nu\hat{s}^2 +\hat\lambda_R^q \ .
\end{eqnarray}
The parameters $\hat\rho_{CC}$ and
$\hat\lambda_{L,R}^q$ are analogous to the quantities defined
in Sec.~\ref{sec:renorm}. The relevant expressions can be found
in Ref.~\cite{kur-rm-su-nutev}\footnote{In that reference a subscript \lq\lq $\nu N$" on $\hat\rho_{NC,\, CC}$ was included.}.
The NC to CC cross section ratios ${R_\nu}$ and ${R_{\bar\nu}}$ can be expressed in
terms of the above parameters via the effective couplings $(g_{L,R}^{\rm
eff})^2$
appearing in Eq. (\ref{eq:rnudef}) in
a straightforward way:
\begin{equation}
\label{eq:glrdef}
(g_{L,R}^{\rm eff})^2 = \left(\frac{{\hat M}_Z^2}{{\hat M}_W^2}\right)^2\left(
\frac{{\hat M}_W^2-Q^2}{{\hat M}_Z^2-Q^2}\right)^2
\left(\frac{\hat\rho_{NC}}{\hat\rho_{CC}}\right)^2\sum_q\
(\epsilon_{L,R}^q)^2 \ \ \ .
\end{equation}
The SM values for these quantities are \cite{pdg} $(g_L^{\rm
eff})^2=0.3042$
and $(g_R^{\rm eff})^2 = 0.0301$ while the NuTeV results imply
$(g_L^{\rm eff})^2=0.3005\pm 0.0014$ and $(g_R^{\rm eff})^2=0.0310\pm 0.0011$.
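The quoted $\sim 3\sigma$ pull can be illustrated with a naive Paschos-Wolfenstein-type estimate, $(g_L^{\rm eff})^2-(g_R^{\rm eff})^2\approx 1/2-{\sin^2\theta_W}$. The sketch below ignores the electroweak and QCD corrections applied in the actual NuTeV analysis, so it is a rough consistency check only.

```python
# Naive Paschos-Wolfenstein sketch: gL^2 - gR^2 ~ 1/2 - sin^2(theta_W).
# The corrections used in the full NuTeV analysis are ignored here.
gL2_sm, gR2_sm = 0.3042, 0.0301    # SM predictions quoted above
gL2_exp, gR2_exp = 0.3005, 0.0310  # NuTeV results quoted above

s2_sm = 0.5 - (gL2_sm - gR2_sm)
s2_exp = 0.5 - (gL2_exp - gR2_exp)
shift = s2_exp - s2_sm
print(shift)  # about +0.005; for an uncertainty of ~0.0016 this is ~3 sigma
```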
The MSSM loop contributions to $R_{\nu}$ and $R_{\bar\nu}$ are
highly correlated, in the range of about
$0-1.5\times 10^{-3}$ \cite{kur-rm-su-nutev}.
The sign of the SUSY loop corrections is
nearly always positive, in contrast to the sign of the NuTeV anomaly.
There is one corner of parameter space that admits a negative
loop contribution. This scenario involves gluino loops, whose effect can become
large and negative when the first-generation
up- and down-type squarks are nearly degenerate and
left-right mixing is close to maximal.
Although the gluino
contribution could be as large as a few $\times 10^{-3}$ in magnitude,
equal and large left-right mixing for both up- and down-type squarks
is inconsistent with other precision electroweak
inputs, such as $M_W$ and charged current universality \cite{kurylov02}, and with the color neutrality of the vacuum (unless the $H^0$, $A^0$, and $H^\pm$ become super heavy).
In addition, the negative gluino contribution to
${R_\nu}$ and ${R_{\bar\nu}}$ cannot account for the apparent deviation of
${\sin^2\theta_W}$ from the SM prediction implied by the NuTeV analysis when a
Paschos-Wolfenstein type relation is used.
When $R$-parity is not conserved,
one obtains the following effective Lagrangian for neutrino-quark scattering
from the tree-level RPV contributions:
\begin{eqnarray}
\label{eq:rpveffective}
{\cal L}_\sst{RPV}^\sst{EFF} &=&
-\Delta^{\prime}_{2k1}(\tilde{d}^k_L){\bar d}_R\gamma^\mu d_R
{\bar\nu}_{\mu L}\gamma_\mu \nu_{\mu L} +
\Delta^{\prime}_{21k}(\tilde d^k_R){\bar d}_L\gamma^\mu
d_L {\bar\nu}_{\mu L}\gamma_\mu \nu_{\mu L} \nonumber\\
&&- \Delta^{\prime}_{21k}(\tilde d^k_R)\left[{\bar
u}_L\gamma^\mu d_L
{\bar\mu}_{ L}\gamma_\mu \nu_{\mu L}+{\rm h.c.}\right]\ \ \ .
\end{eqnarray}
The corresponding shifts in $R_{\nu(\bar\nu)}$ are
\begin{eqnarray}
\label{eq:rnurpv}
\delta R_{\nu (\bar\nu)}&=&\lambda_x
[-\frac{4}{3} \epsilon_L^u + \frac{2}{3}\epsilon_L^d ]
[1+r^{(-1)} ]\Delta_{12k}(\tilde{e}_R^k)
-2[R_{\nu (\bar\nu)}^{\rm SM}+\epsilon_L^d]
\Delta_{21k}^{\prime}(\tilde{d}_{R}^k)
+2r^{(-1)}\epsilon_R^d
\Delta_{2k1}^{\prime}(\tilde{d}_{L}^k)\nonumber \\
&\approx& -0.25 [1+r^{(-1)} ]\Delta_{12k}(\tilde{e}_R^k) -
2[R_{\nu (\bar\nu)}^{\rm SM}-0.43]
\Delta_{21k}^{\prime}(\tilde{d}_{R}^k)+ 1.6 r^{(-1)}
\Delta_{2k1}^{\prime}(\tilde{d}_{L}^k).
\end{eqnarray}
As we discuss in Section \ref{sec:susy}, $\Delta_{12k}({\tilde e}^k_R)$ and
$\Delta^{\prime}_{21k}({\tilde d}^k_R)$ are constrained by other
precision electroweak data, while $\Delta^{\prime}_{2k1}({\tilde
d}^k_L)$ is relatively unconstrained. In Eq.~(\ref{eq:rnurpv}), the
coefficients of $\Delta^{\prime}_{21k}({\tilde d}^k_R)$ and
$\Delta^{\prime}_{2k1}({\tilde d}^k_L)$ are positive, while the
coefficient of $\Delta_{12k}({\tilde e}^k_R)$ is negative. Since the
$\Delta_{ijk}$ are non-negative, a sizable value of
$\Delta_{12k}({\tilde e}^k_R)$ and rather small values of
$\Delta^{\prime}_{21k}({\tilde d}^k_R)$ and
$\Delta^{\prime}_{2k1}({\tilde d}^k_L)$ would be required to account for the negative
shifts in ${R_\nu}$ and ${R_{\bar\nu}}$ implied by the NuTeV result. The
present constraints on $\Delta_{12k}({\tilde e}^k_R)$ from other
precision electroweak observables, as listed in
Table~\ref{tab:rpv-constrain}, however, are fairly stringent. The
possible effects on ${R_\nu}$ and ${R_{\bar\nu}}$ from RPV interactions are by
and large positive. While small negative corrections are also
possible, they are numerically too small to be interesting
\cite{kur-rm-su-nutev}.
In short, the MSSM -- with or
without R-parity conservation -- is likely not responsible for the NuTeV
anomaly. The culprit,
apparently, is to be found elsewhere.
Finally, we note that a reactor-based experiment to measure the weak mixing angle at
$Q^2 =4 \times 10^{-6} \ {\rm GeV}^{2}$
via $\bar{\nu}_e e^-$
elastic scattering has been proposed in \cite{nureactor}.
The estimated error on $\hat{s}^2$ is about 1\%, comparable
to the APV and NuTeV results, but with substantially
different systematic contributions. Such a measurement could probe
the electroweak corrections, though the possible implications for SUSY have not been analyzed.
\section{Flavor, CP, Neutrinos, and Cosmology}
\label{sec:cpv}
The issues of flavor and CP symmetries are generally challenging for SUSY phenomenology. In the case of the MSSM with R-parity conservation, the structure of the soft SUSY-breaking Lagrangian allows for a variety of flavor changing neutral current (FCNC) processes that must be suppressed in order to be consistent with experiment. By itself, the general structure of the soft Lagrangian does not provide for this suppression, so models of SUSY-breaking mediation must be constructed that provide it in a natural way. Similarly, ${\cal L}_{\rm soft}$ contains a host of new CP-violating phases beyond the Standard Model CKM phase that accounts for CP-violation (CPV) in the neutral kaon and B-meson systems. If these phases are ${\cal O}(1)$ and if the soft masses are on the order of a TeV, the associated CPV interactions can give rise to permanent electric dipole moments (EDMs) of the electron, neutron, and neutral atoms that are up to two orders of magnitude larger than experimental EDM limits. Short of any fortuitous cancellations between various CPV effects, there is no {\em a priori} reason to expect large suppressions of these phases as needed for consistency with experiment. The corresponding \lq\lq SUSY CP problem" again provides a challenge to model builders.
Both the SUSY flavor and CP problems have been reviewed extensively elsewhere, and we refer the reader to excellent recent discussions (see, {\em e.g.}, Refs.~\cite{Chung:2003fi,Masiero:2003fy,Nir:2002gu, Buras:1999tb,Masiero:1997bv,Dimopoulos:1995ju}). Here, we focus on aspects of these issues most relevant to the current experimental efforts in the low-energy sector as well as on recent theoretical developments pertaining to their broader implications for particle physics and cosmology. After reviewing general features of SUSY flavor physics and CPV, we concentrate on three areas of interest: (a) lepton flavor violation and the corresponding implications for neutrino physics; (b) EDM searches and their theoretical interpretation; and (c) implications for SUSY baryogenesis and dark matter.
\subsection{General Considerations}
\noindent {\em Flavor}
\vskip 0.25in
Within the Standard Model, the GIM mechanism provides an elegant explanation for the suppression of FCNCs among quarks. In the limit of degenerate quarks, a sum over all intermediate quark states in loop contributions to FCNC processes yields a vanishing result due to the unitarity of the CKM matrix. The natural scale for FCNC effects -- such as $K^0$-${\bar K}^0$ mixing and $b\to s\gamma$ -- is thus governed by differences in the squares of quark Yukawa couplings (and products of off-diagonal elements of the CKM matrix). The presence of these factors provides a natural way to understand the observed suppression of FCNCs.
In general, superpartner loop contributions to FCNCs can upset the GIM suppression mechanism. For example, the difference of scalar quark masses need not be small compared to the weak scale, so that the corresponding loop contributions can be enhanced relative to those arising from quark loops. Similarly, flavor mixing among squarks need not be suppressed since there exists no {\em a priori} reason to expect flavor non-diagonal terms in the squark mass matrix to be small compared to the diagonal terms. Thus, studies of FCNC semileptonic or hadronic weak interactions can provide important constraints on the flavor structure of ${\cal L}_{\rm soft}$.
It has become conventional to characterize these SUSY flavor-violating effects by assuming that they are small enough to be described by single insertions of the relevant flavor-violating soft mass parameter. In this \lq\lq mass insertion" approximation, one may consider the parameter\cite{Chung:2003fi}
\begin{equation}
\label{eq:massinsert}
\left(\delta_{AB}\right)_{ij} = \frac{\left(M_{AB}^2\right)_{ij}}{\left[\left(M_{AA}^2\right)_{ii}\left(M_{BB}^2\right)_{jj}\right]^{1/2}}
\ee
where $A$, $B$ denote $L$ or $R$ and $i$ and $j$ are flavor indices. Note that flavor can be violated separately among the left- and right-handed fermion superpartners as well as in mass terms that mix them after electroweak symmetry breaking, {\em viz}
\begin{equation}
\left( M_{LR}^2\right)_{ij} = \frac{v_d}{\sqrt{2}}\left[- \mu\, \left(Y_d\right)_{ij}\tan\beta+\left(a_d\right)_{ij}\right]
\ee
for down-type squarks. While the term containing the Yukawa matrix can be diagonalized by performing the same rotation on $L$- and $R$-squarks that diagonalize the quark mass matrix, the term containing the triscalar coupling $(a_d)_{ij}$ will generally remain flavor non-diagonal after this rotation.
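As a concrete illustration, Eq.~(\ref{eq:massinsert}) is straightforward to evaluate numerically. The sketch below computes $(\delta_{LL})_{12}$ for a toy slepton mass-squared matrix; the numerical entries are purely illustrative assumptions, not fits to data.

```python
import numpy as np

# Toy evaluation of the mass-insertion parameter of Eq. (massinsert).
# The soft mass-squared entries are illustrative placeholders: an average
# scale of 500 GeV with a 1% off-diagonal LL entry.
m2 = 500.0**2                          # (500 GeV)^2
M_LL = np.diag([m2, 1.1 * m2, 1.3 * m2])
M_LL[0, 1] = M_LL[1, 0] = 0.01 * m2    # flavor-violating (1,2) entry

def delta(M_AB, M_AA, M_BB, i, j):
    """(delta_AB)_ij = (M_AB^2)_ij / sqrt[(M_AA^2)_ii (M_BB^2)_jj]."""
    return M_AB[i, j] / np.sqrt(M_AA[i, i] * M_BB[j, j])

d12 = delta(M_LL, M_LL, M_LL, 0, 1)
print(d12)   # ~9.5e-3: set by the off-diagonal-to-diagonal ratio
```

The normalization by the geometric mean of the diagonal entries is what makes $(\delta_{AB})_{ij}$ a convenient, roughly scale-independent measure of flavor violation.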
Although the mass insertion approximation may not always accurately reflect the scale of flavor-violation in a given scenario, it provides a useful framework for comparing the constraints on SUSY flavor violation obtained from different experiments. A summary of present limits on the $(\delta_{AB})_{ij}$ for various processes can be found in Ref.~\cite{Chung:2003fi}. As emphasized by the authors of that work, there does not exist a sufficient set of experimental observables to completely determine the flavor-violating parameters that enter the
$(\delta_{AB})_{ij}$ even within the MSSM. Consequently, one must use the experimental limits as input for model building. To this end, several broad approaches have been pursued. Among the most popular are:
\begin{itemize}
\item[(i)] Universality, which assumes that the soft terms are flavor diagonal and universal. For example, one may take \cite{susy}
\begin{equation}
\label{eq:univ1}
\mbold{M}^2_f=\tilde{m}^2\,\mbold{1}
\ee
for $f=Q, U, D, L, E$ and
\begin{equation}
\label{eq:univ2}
\mbold{a}_f=A_f\,\mbold{Y}_f
\ee
for $f=U,D,E$. Assuming that Eqs.~(\ref{eq:univ1}-\ref{eq:univ2}) hold at the SUSY-breaking scale, RG evolution will induce corrections to these relations at the electroweak scale. To the extent that the RG evolution is dominated by Yukawa interactions, one would expect the soft parameters at the electroweak scale to have an expansion in the Yukawa matrices (see, {\em e.g.}, Ref.~\cite{Isidori:2006qy} and references therein):
\begin{eqnarray}
\nonumber
\mbold{M}^2_Q & = & {\tilde m}^2\left[{\tilde a}_1 \mbold{1}+{\tilde b}_1 \mbold{Y}_u^\dag\mbold{Y}_u +{\tilde b}_2 \mbold{Y}_d^\dag \mbold{Y}_d
+{\tilde b}_3\left(\mbold{Y}^\dag_d \mbold{Y}_d \mbold{Y}^\dag_u \mbold{Y}_u
+\mbold{Y}^\dag_u \mbold{Y}_u \mbold{Y}^\dag_d \mbold{Y}_d\right)\right]\\
\label{eq:mfv}
\mbold{M}^2_U & = & {\tilde m}^2\left[{\tilde a}_2\mbold{1}+{\tilde b}_4 \mbold{Y}_u \mbold{Y}^\dag_u\right]\\
\nonumber
\mbold{M}^2_D & = & {\tilde m}^2\left[{\tilde a}_3\mbold{1}+{\tilde b}_5 \mbold{Y}_d \mbold{Y}^\dag_d \right]
\eea
with similar expressions for the triscalar couplings $\mbold{A}_{U,D}$ to third order in the Yukawa matrices. Equations~(\ref{eq:mfv}) illustrate an alternative approach known as \lq\lq minimal flavor violation" (MFV) in which all of the flavor violation in the soft sector is dictated solely by the structure of the Yukawa interactions.
\item[(ii)] Alignment, a scenario in which the soft interactions can be diagonalized by the same rotations that diagonalize the SM Yukawa interactions.
\end{itemize}
Both the universality and alignment approaches build a \lq\lq super GIM mechanism" into the soft SUSY-breaking Lagrangian and protect one against the appearance of large FCNC effects. Specific models for SUSY-breaking mediation may or may not lead to either universality or alignment (for a discussion, see {\em e.g.}, Ref.~\cite{Randall:1998te}), and neither of these approaches may ultimately be correct. Nevertheless, they provide a useful starting point for the phenomenology of low-energy precision tests of SUSY in the flavor sector.
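The super-GIM suppression built into MFV can be seen numerically in a minimal sketch: take the soft masses of Eqs.~(\ref{eq:mfv}) and work in the basis where the down-type Yukawa matrix is diagonal, so that all squark flavor mixing descends from $\mbold{Y}_u^\dag\mbold{Y}_u$. The CKM magnitudes and the ${\cal O}(1)$ coefficients below are illustrative assumptions.

```python
import numpy as np

# Sketch of MFV super-GIM suppression, Eq. (mfv): in the down-quark mass
# basis, squark flavor violation enters only through Y_u^dag Y_u and is
# therefore CKM- and Yukawa-suppressed.
yu = np.diag([1.0e-5, 7.0e-3, 1.0])      # up-type Yukawa eigenvalues (rough)
V = np.array([[ 0.974,  0.225, 0.004],
              [-0.225,  0.973, 0.041],
              [ 0.009, -0.040, 0.999]])  # rough |V_CKM| pattern (real, toy)
a1, b1 = 1.0, 1.0                        # O(1) MFV coefficients (assumed)
YudagYu = V.T @ yu**2 @ V                # Y_u^dag Y_u (V real, so V^T = V^dag)
M2Q_over_m2 = a1 * np.eye(3) + b1 * YudagYu   # leading terms of Eq. (mfv)
off = abs(M2Q_over_m2[0, 1])
print(off)   # ~4e-4: (1,2) mixing dominated by the top Yukawa times V_td V_ts
```

Even with ${\cal O}(1)$ coefficients, the 1-2 squark mixing comes out at the few $\times 10^{-4}$ level, comfortably below the kaon-sector bounds on $(\delta_{LL})_{12}$.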
\vskip 0.25in
\noindent{\em Lepton Flavor and Number}
\vskip 0.2in
In the lepton sector, total lepton number (LN) is an exact, accidental symmetry of the SM, while lepton flavor violation (LFV) involving charged leptons is highly suppressed by the scale of neutrino mass. Neither feature generally carries over to SUSY. In addition to the long-standing scrutiny of the flavor structure of ${\cal L}_{\rm soft}$, there has been considerable recent interest in the possibility of total lepton number violation (LNV) in SUSY models.
In the MSSM, LNV arises when the requirement of $P_R$ conservation is relaxed, allowing for the existence of the $\lambda$, $\lambda^\prime$, and $\mu^\prime$ terms in the superpotential of Eq.~(\ref{eq:RPVL}). As discussed earlier, such interactions may generate tree-level corrections to SM CC and NC interactions, some of which provide rather stringent constraints on the associated coupling-to-mass ratios. The interactions in $W_{\Delta L=1}$ may also give rise to a non-zero rate for neutrinoless double $\beta$-decay ($0\nu\beta\beta$)\cite{Faessler:1997db,Faessler:1996ph} -- a $\Delta L=2$ process -- as well as to LN-conserving but LFV processes such as $\mu\to e\gamma$, $\mu\to e$ conversion, and $\mu\to 3e$ (for a recent discussion, see, {\em e.g.}, Ref.~\cite{deGouvea:2000cf} and references therein). For TeV-scale superpartner masses, the interactions in $W_{\Delta L=1}$ may have important consequences for the interpretation of these LNV and LFV processes, as we discuss below. In addition, the existence of $\Delta L=1$ SUSY interactions implies the existence of a radiatively-induced Majorana mass term for the neutrino\cite{Schechter:1981bd}, while extensions of the MSSM that explicitly allow for RH neutrino superfields have taken on renewed interest (the literature on the topic is vast; for representative discussions, see, {\em e.g.}, Refs.~\cite{Mohapatra:2005nc,Dong:2006vk, Kang:2004ix,Frank:2002hk,Hisano:2001qz,Hisano:1995cp}). In SUSY models that explicitly contain tree-level Majorana masses, one may also find the possibility of relatively low-scale leptogenesis. Below, we review some of these recent developments involving LNV and the neutrino sector in SUSY.
\vskip 0.25in
\noindent{\em CP}
The plethora of new CPV phases that arise in ${\cal L}_{\rm soft}$ leads to complications for phenomenology similar to those encountered in the case of flavor. There simply does not exist a sufficient number of experimentally accessible CPV observables to independently constrain all of the phases, and one is generally forced to adopt simplifying model assumptions. The situation is simplest in the Higgs and gauge sectors, where all CPV phases may be rotated into the following invariant combinations of the $\mu$ parameter, the gaugino mass parameters, and $b$ (see, {\em e.g.}, Ref.~\cite{Pospelov:2005pr}):
\begin{equation}
{\rm Arg}\, \left(\mu M_i b^\ast \right)\qquad \qquad {\rm Arg}\, \left(M_i M_j^\ast\right)
\ee
where $i,j$ run over the three gauge groups of the MSSM (leading to a total of three independent phases in this sector). The analysis of CPV in this sector is often further simplified by assuming a common gaugino mass parameter at high scales, thereby reducing the number of independent phases to one. When discussing this special case, we refer to the common relative phase of the $\mu$-parameter and gaugino mass parameters as $\phi_\mu$.
In the most general situation, the parameters in the scalar sector of ${\cal L}_{\rm soft}$ allow for an additional 37 independent CPV phases (for a discussion of parameter counting, see, {\em e.g.}, Ref.~\cite{Dimopoulos:1995ju}). As with the Higgsino-gaugino sector, the number of independent phases can be reduced by adopting a version of the flavor universality or MFV scenarios. In the latter case, for example, most of the CPV phases can be absorbed by sfermion field redefinitions, leaving only the parameters in $\mbold{A}_U$ as complex. Alternatively, assuming flavor diagonality for the scalar mass matrices $\mbold{M}^2_{Q,U,D}$ but a general set of triscalar couplings, one has the additional phases\cite{Pospelov:2005pr}
\begin{equation}
{\rm Arg}\, \left(A_f M_i^\ast\right) \qquad \qquad {\rm Arg}\, \left(A_f A_{f^\prime}^\ast\right)\ \ \ .
\ee
A third, commonly employed scenario is that of a common triscalar coupling and a single CPV phase for the gaugino masses at high scales, leaving only one additional phase $\phi_A$. In what follows, we will consider this minimal scenario wherein only two phases -- $\phi_\mu$ and $\phi_A$ -- need be considered.
The presence of non-zero RPV terms in the superpotential can introduce a large number of additional CPV phases. We will not review the implications of these additional sources of CPV here and instead refer the reader to the recent literature (see, {\em e.g.}, Ref.~\cite{Faessler:2006at}).
\vskip 0.25in
\noindent {\em Connections with Cosmology}
\vskip 0.2in
While the search for CPV beyond that of the SM is interesting in its own right, there exists additional motivation from cosmological considerations. In particular, an explanation of the small (but anthropically relevant) baryonic component of the energy density of the universe points to the need for CPV beyond that of the SM. Indeed, as first observed by Sakharov nearly four decades ago\cite{Sakharov:1967dj}, arriving at a particle physics-based accounting for the baryon asymmetry of the universe (BAU) requires three ingredients in the particle physics of the early universe: (a) violation of baryon number ($B$); (b) violation of both C and CP symmetry; and (c) a departure from thermal equilibrium, assuming that CPT is an exact symmetry. In principle, the SM contains all three ingredients. Baryon number violation arises through anomalous \lq\lq sphaleron" processes that cause transitions between different electroweak vacua having different Chern-Simons number and, therefore, different total $B+L$. At temperatures above the electroweak scale, these transitions are mediated by the excitation of gauge field configurations called sphalerons. At lower temperatures, the probability of sphaleron excitations is Boltzmann suppressed, and transitions between different vacua can only occur via exponentially suppressed tunneling. The SM also contains electroweak CP violation as well as C violating interactions (the gauge-boson couplings to axial vector currents). Finally, a departure from thermal equilibrium occurs as the universe cools through the electroweak temperature and the gauge symmetry of the SM is spontaneously broken.
The strength of the CPV effects is strongly suppressed by the light quark Yukawa couplings and the Jarlskog invariant associated with the CKM matrix\cite{Shaposhnikov:1987tw,Farrar:1993sp,Farrar:1993hn}, while the LEP II lower bound on the mass of the SM Higgs boson precludes a strongly first order electroweak phase transition as needed to prevent washout of the BAU (see, {\em e.g.}, Ref.~\cite{Balazs:2005tu}).
A variety of particle physics scenarios have been proposed that attempt to circumvent these SM shortcomings in explaining the BAU via baryogenesis at different cosmic epochs. At present, we have no conclusive evidence favoring either early time/high scale scenarios such as leptogenesis, or relatively late time, electroweak scale baryogenesis. From the standpoint of phenomenology, consideration of the latter is particularly attractive, since a combination of CPV searches, precision electroweak measurements, and collider studies can highly constrain and possibly even rule out electroweak baryogenesis (EWB). In addition, it is interesting to consider the possibility that new physics at the electroweak scale may provide both an explanation of the BAU and a viable candidate for cold dark matter (DM). In this case, DM considerations can provide additional constraints on EWB.
The study of SUSY DM remains an active, on-going field that has been reviewed extensively elsewhere (see, {\em e.g.}, Refs.~\cite{Bertone:2004pz,Jungman:1995df}). Below, we focus on the viability of SUSY EWB, taking into account recent field theoretical developments and the phenomenology of EDM searches, precision electroweak measurements, collider studies, and DM considerations. In this respect, we provide an update to the extensive reviews of baryogenesis provided by Trodden and Riotto\cite{Riotto:1999yt} and Dine and Thomas\cite{Dine:2003ax}.
\vskip 0.25in
\subsection{Lepton Flavor and Number Violation}
The best-known probe of LFV is the search for the SM-forbidden decay $\mu\to e\gamma$. The current best limit on the branching ratio is\cite{Brooks:1999pu}
\begin{equation}
B_{\mu\to e\gamma} \equiv \frac{\Gamma(\mu^+\to e^+\gamma)}{\Gamma(\mu^+\to e^+\nu{\bar\nu})}
< 1.2 \times 10^{-11}\qquad\qquad {\rm 90\% C.L.}
\ee
obtained by the MEGA collaboration. A similarly interesting bound on the rate for $\mu\to e$ conversion in gold nuclei has been obtained by the SINDRUM collaboration\cite{Wintz:rp}:
\begin{equation}
B^{\rm Au}_{\mu\to e} \equiv \frac{\Gamma[\mu^-+A(N,Z)\to e^-+A(N,Z)]}{\Gamma[\mu^-+A(N,Z) \to \nu
A(Z-1,N+1)]} < 8\times 10^{-13}\qquad {\rm 90\% C.L.} \ \ \ .
\ee
In addition, stringent limits have been obtained for other LFV rates: $1.0\times 10^{-12}$ for $B_{\mu^+\to e^+ e^- e^+}$\cite{Bellgardt:1987du}; $4.3\times 10^{-12}$ for $B_{\mu\to e}^{\rm Ti}$\cite{Dohmen:1993mp}; and $4.6\times 10^{-11}$ for
$B_{\mu\to e}^{\rm Pb}$\cite{Honecker:1996zf}. A new experiment is being performed by the MEG collaboration at PSI that hopes to reach a sensitivity of $\sim 5\times 10^{-14}$ for $B_{\mu\to e\gamma}$\cite{Yashima:2000qz}, while until recently there had been an effort by the MECO collaboration to probe $B^{\rm Al}_{\mu\to e}$ at the level of $5\times10^{-17}$ using the AGS at Brookhaven.
Although that experiment has now been derailed by the U.S. funding agencies, efforts are underway to pursue an experiment at an alternate site, possibly using a future muon storage ring at JPARC in Japan.
The prospective implications of these experiments for supersymmetric lepton flavor structure have been analyzed by several authors (for a discussion within non-SUSY models, see, {\em e.g.}, Refs.~\cite{Cirigliano:2004mv,Cirigliano:2004tc,Atre:2005eb} and references therein). A comprehensive analysis in a minimally-extended MSSM that includes RH neutrino supermultiplets and neutrino mass generation {\em via} the see-saw mechanism has been carried out in Ref.~\cite{Hisano:1995cp}. While the RH neutrino sector decouples from low-energy LFV observables due to the large RH neutrino mass ($M_R\sim 10^{12}$ GeV), the LFV effects of this sector can be communicated to the charged slepton mass matrices and triscalar couplings in ${\cal L}_{\rm soft}$ through RG running from high scales. The authors obtain general expressions for the rates for $\mu\to e\gamma$ and $\mu\to e$ conversion in terms of the slepton and light sneutrino rotation matrices, $Z_L^{Ij}$ and $Z_\nu^{Ij}$, introduced earlier, without relying on the mass insertion approximation. One has
\begin{equation}
\label{eq:meg1}
B_{\mu\to e\gamma} = {48\pi^3 \alpha}\, \left(\left\vert {\tilde A}_2^L\right\vert^2+ \left\vert {\tilde A}_2^R\right\vert^2 \right)
\ee
where ${\tilde A}_2^{L,R}$ are the dipole amplitudes appearing in the amplitude for $\mu\to e\gamma^{(\ast)}$:
\begin{eqnarray}
\nonumber
{\cal M}_{\mu\to e\gamma^{(\ast)}} &=& e G_\mu\, \varepsilon^{\alpha\, \ast}\, {\bar \mu}(p-q)\Bigl[ \left(q^2\gamma_\alpha -\dslash{q} q_\alpha\right) \left({\tilde A}_1^R P_R +{\tilde A}_1^L P_L\right) \\
\label{eq:meg2}
&&
+ i m_\mu \sigma_{\alpha\beta} q^\beta \left({\tilde A}_2^R P_R +{\tilde A}_2^L P_L\right)\Bigr] \, e(p)
\end{eqnarray}
where, in contrast to Ref.~\cite{Hisano:1995cp} we have normalized the amplitudes to $G_\mu$.
Contributions to these amplitudes are generated by the diagrams of Fig. \ref{fig:mueg}. The chargino loop contribution to ${\tilde A}_2^R$, for example, gives
\begin{eqnarray}
\label{eq:meg3}
{\tilde A}_2^{R\ (\chi^\pm)} &=& -\left(\frac{1}{4\sqrt{2}\pi^2}\right) \, \sum_{ j,k}\, \left(\frac{M_W^2}{m_{\tilde\nu_j}^2}\right)\, Z_\nu^{1j} Z_\nu^{2j\, \ast}\, \biggl[ \vert U_{k1}\vert^2\, F_1(x_{jk})\\
\nonumber
&& \qquad -U_{k1}^\ast\, V_{k2}\, \left(\frac{m_{\chi_k}}{m_\mu}\right)F_2(x_{jk})\biggr]
\eea
where $x_{jk}=m^2_{\chi_k}/m_{\tilde\nu_j}^2$, and the $F_{1,2}(x)$ are loop functions defined in Ref.~\cite{Hisano:1995cp}.
Analogous expressions for the other chargino and neutralino loop contributions to the ${\tilde A}_2^{R,L}$ are given in Ref.~\cite{Hisano:1995cp}. Assuming no cancellations among the various contributions, one may use Eqs.~(\ref{eq:meg1}-\ref{eq:meg3}) and experimental limits on $B_{\mu\to e\gamma}$ to derive bounds on the LFV parameters $Z_\nu^{1j} Z_\nu^{2j\, \ast}$ {\em etc.}. Note that these LFV couplings also arise in the non $(V-A)\times (V-A)$ SUSY box graph contributions to the muon decay parameter, $g^S_{RR}$ as in Eq.~(\ref{eq:grrloop}). The bounds on these quantities are sufficiently stringent that only the L-R mixing terms can make appreciable contributions to $g^S_{RR}$. In the case of $\mu\to e$ conversion in nuclei, the \lq\lq penguin" amplitudes proportional to ${\tilde A}_1^{L,\, R}$ contribute to the four fermion ${\bar e}\mu {\bar q}q$ conversion operators, as do LFV $Z^0$-exchange amplitudes and box graphs. As we discuss below, the ${\tilde A}_1^{L,R}$ may give the dominant contributions in the presence of $P_R$ non-conservation.
\begin{figure}
\resizebox{4 in}{!}{
\includegraphics*[60,520][300,640]{muegamma.ps}}
\caption{Contributions to the LFV amplitudes entering $\mu\to e\gamma^{(\ast)}$.}
\label{fig:mueg}
\end{figure}
An older analysis using the mass insertion approximation has been performed by the authors of Refs.~\cite{Gabbiani:1996hi,Gabbiani:1988rb}. For the special case in which the amplitude is dominated by photino loops one has
\begin{equation}
\label{eq:meg4}
B_{\mu\to e\gamma} =\frac{24\alpha\sin^2\theta_W}{\pi}\, \left\{ \left\vert M_3(x)\, \left(\delta_{LL}^\ell\right)_{21} +\frac{M_{\tilde\gamma}}{m_{\tilde\ell}}\, M_1(x)\, \left(\delta_{LR}^\ell\right)_{21}
\right\vert^2+\, L\leftrightarrow R\right\}
\ee
where $m_{\tilde\ell}$ is an average slepton mass, $x=M_{\tilde\gamma}^2/m_{\tilde\ell}^2$, and the $M_i(x)$ are defined in Ref.~\cite{Gabbiani:1996hi}. The limits on the parameters $(\delta_{AB}^\ell)_{21}$ vary with $x$. At $x=1.0$ one has
\begin{equation}
\left\vert\left(\delta_{LL}^\ell\right)_{21}\right\vert \leq1.9\times 10^{-3}\qquad \qquad
\left\vert\left(\delta_{LR}^\ell\right)_{21}\right\vert\leq 4.3\times 10^{-7}
\ee
for $m_{\tilde\ell}=100$ GeV. These authors did not analyze the $\mu\to e$ conversion process, so no limits on the $(\delta_{AB}^\ell)_{21}$ are available for this process.
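The scaling of these limits with the superpartner scale is simple to track: at fixed $x$, $B_{\mu\to e\gamma}\propto \vert\delta\vert^2/m_{\tilde\ell}^4$, so the bound on $\vert\delta\vert$ grows as $m_{\tilde\ell}^2$. A minimal sketch, taking the quoted $x=1.0$ limits as reference values:

```python
# Rescaling the quoted x = 1.0 mass-insertion limits with the average
# slepton mass.  Since B_{mu -> e gamma} ~ |delta|^2 / m_slepton^4 at
# fixed x, the bound on |delta| scales as (m_slepton / 100 GeV)^2.
def delta_limit(m_slepton_GeV, ref_at_100GeV):
    return ref_at_100GeV * (m_slepton_GeV / 100.0) ** 2

LL_21 = delta_limit(300.0, 1.9e-3)   # LL bound for 300 GeV sleptons
LR_21 = delta_limit(300.0, 4.3e-7)   # LR bound for 300 GeV sleptons
print(LL_21, LR_21)                  # ~1.7e-2 and ~3.9e-6
```

The LR bound remains far tighter at any scale, reflecting the chirality-flip enhancement $M_{\tilde\gamma}/m_{\tilde\ell}$ relative to $m_\mu$ in the dipole amplitude.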
In some scenarios, the mass insertion approximation may not provide a realistic guide to the magnitude of LFV effects. A well-known illustration occurs in SUSY GUT models\cite{Barbieri:1994pv,Barbieri:1995tw}, wherein contributions from the large, third generation up-type Yukawa coupling to the RG running of the sfermion and triscalar couplings from the Planck scale to the GUT scale lead to splittings among these parameters. After implementing symmetry-breaking at the GUT scale and evolving these parameters to the weak scale one has for an SU(5) GUT
\begin{equation}
{\bf m_L^2}=m_{\tilde{L}}^2 \mbold{1}\quad {\bf m_{\bar{e}}^2}=m_{\tilde{e}}^2\,\mbold{1}-\mbold{I}_G\quad
{\bf a_e}=\left[A_e\mbold{1}-\frac{1}{3}\mbold{I}_G^\prime\right]\mbold{y}_e
\ee
where $\mbold{I}_G$ and $\mbold{I}_G^\prime$ are contributions arising from the Yukawa induced running from $M_P$ to $M_G$. It is convenient to rotate the sfermion fields by the same transformations that diagonalize the Yukawa matrices,
\begin{equation}
{\tilde e}_R^\prime = \mbold{V}_e\, {\tilde e}_R \qquad \qquad {\tilde L}^\prime = \mbold{U}_e^\dag L
\ee
so that the term in ${\bf M_{LR}^2}$ containing $\mbold{I}_G^\prime$ becomes
\begin{equation}
-\frac{1}{3} {\tilde e}_R^{\prime \dag} \mbold{I}_G^\prime \mbold{V}_e^\ast \mbold{M}_e
{\tilde e}_L^{\prime} +{\rm h.c.}
\ee
where $\mbold{M}_e$ is the charged lepton mass matrix. For an SO(10) GUT, the LH slepton mass matrix ${\bf m_L^2}$ picks up an additional contribution from running above $M_G$, while the induced triscalar contribution becomes
\begin{equation}
-\frac{5}{14} {\tilde e}_R^{\prime \dag} \mbold{I}_G^\prime \mbold{V}_e^\ast \mbold{M}_e \mbold{V}_e^\dag {\tilde e}_L^\prime +{\rm h.c.}
\ee
In general, the numerical impact on $B_{\mu\to e\gamma}$ of these LFV couplings is not well reproduced using the mass insertion approximation. Importantly, the magnitude of this LFV observable is expected to be well within the reach of the MEG experiment for superpartner masses on the order of a few hundred GeV. For sufficiently large $m_{\tilde e_R}$, however, $B_{\mu\to e\gamma}$ scales as $1/m_{\tilde e_R}^4$, so a null result for the MEG experiment would imply TeV-scale superpartner masses in this GUT scenario.
The foregoing analyses assume conservation of $P_R$, so that the potentially significant LFV effects at the weak scale are not accompanied by corresponding LNV. If $P_R$ is not conserved, however, then the terms in the superpotential $W_{\Delta L=1}$ can lead to observable low-scale LNV as well as LFV. An analysis of the possibility that low-scale LFV is accompanied by low-scale LNV has recently been carried out in Ref.~\cite{Cirigliano:2004tc} using an effective operator approach, following on the earlier analysis of Raidal and Santamaria\cite{Raidal:1997hq}. The leading LFV and LNV operators have dimension six and nine, respectively, and appear in the effective Lagrangians valid below the electroweak scale\footnote{Consequently, the operators do not generally respect the SU(2)$_L\times$U(1)$_Y$ symmetry of the SM.}:
\begin{eqnarray}
{\cal L}_{\rm LFV} &=& \sum_i\frac{c_i}{\Lambda^2} {\cal O}_i^{(6)} +\cdots \\
{\cal L}_{\rm LNV} &=& \sum_i\frac{\tilde c_i}{\Lambda^5} {\cal O}_i^{(9)} +\cdots
\end{eqnarray}
where the $+\cdots$ indicate terms containing higher dimension operators and where
\begin{eqnarray}
{\cal O}^{(6)}_{\sigma L} & = & {\bar \ell}_{iL} \sigma_{\mu\nu} i D\!\!\!\!/ \, \ell_{jL} F^{\mu\nu} +{\rm h.c.}
\\
{\cal O}^{(6)}_{\ell L} & = & {\bar \ell}_{iL} \ell^c_{jL} {\bar\ell}^c_{kL} \ell_{mL}\\
{\cal O}^{(6)}_{\ell q} & = & {\bar \ell}_{iL} \Gamma_\ell \ell_{j} {\bar q_L}\Gamma_q q
\end{eqnarray}
where $\Gamma_{\ell,q}$ is a shorthand for all possible $\gamma$-matrix insertions and where the corresponding operators with RH fields have not been explicitly included\footnote{In the semileptonic operator ${\cal O}^{(6)}_{\ell q}$ we have not included chirality labels on all the fields since scalar and tensor interactions flip chirality while vector and axial vector interactions preserve it.}. In the case of ${\cal L}_{\rm LNV}$ one has operators of the general form that will contribute to $0\nu\beta\beta$
\begin{equation}
{\cal O}^{(9)} = {\bar q} \Gamma_1 q\ {\bar q}\Gamma_2q\ {\bar e}\Gamma_3 e^c +{\rm h.c.}
\ee
where a complete list of operators of this form can be found in Ref.~\cite{Prezeau:2003xn}. All searches for $0\nu\beta\beta$-decay involve $0^+\to 0^+$ transitions, and for these cases the dominant operator is \cite{Faessler:1997db,Prezeau:2003xn}
\begin{equation}
{\cal O}^{(9)++}_{+}=\left[
{\bar q}_R\tau^+ q_L {\bar q}_R \tau^+ q_L + {\bar q}_L\tau^+ q_R {\bar q}_L \tau^+ q_R\right] {\bar e}(1+\gamma_5) e^c
\ee
with coefficient in the case of $P_R$ non-conservation given by
\begin{equation}
\frac{\tilde c}{\Lambda^5} = \frac{8\pi\alpha_s}{9}\frac{|\lambda_{111}^\prime|^2}{m_{\tilde q}^4 m_{\tilde g}}\, + \cdots
\ee
where $m_{\tilde q}$ is an average squark mass, $m_{\tilde g}$ is the gluino mass, and the $+\cdots$ indicate contributions proportional to the semiweak coupling $\alpha_2$.
The chiral structure of ${\cal O}^{(9)++}_{+}$ implies that it can contribute to an effective $\pi\pi ee$ operator that contains no derivatives\cite{Prezeau:2003xn}, thereby leading to an enhanced, long-range pion-exchange contribution to the $0\nu\beta\beta$-decay rate\cite{Faessler:1997db}. From experimental upper limits on this rate and taking into account this long-range contribution, the authors of Ref.~\cite{Faessler:1997db} obtained the constraint
\begin{equation}
\label{eq:0nurpv}
\lambda^\prime_{111} \leq 2\times 10^{-4} \left(\frac{m_{\tilde q}}{100\, {\rm GeV}}\right)^2\, \left(\frac{m_{\tilde g}}{100\, {\rm GeV}}\right)^{1/2}\ \ \ .
\ee
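Eq.~(\ref{eq:0nurpv}) makes transparent how quickly the $0\nu\beta\beta$ constraint on $\lambda^\prime_{111}$ relaxes with the superpartner scale; a minimal numerical sketch:

```python
# Upper bound on lambda'_111 from 0nubb-decay, Eq. (0nurpv), as a
# function of the average squark mass and the gluino mass (both in GeV).
def lambda111_limit(m_squark, m_gluino):
    return 2e-4 * (m_squark / 100.0) ** 2 * (m_gluino / 100.0) ** 0.5

print(lambda111_limit(100.0, 100.0))    # 2e-4 at the 100 GeV reference point
print(lambda111_limit(1000.0, 1000.0))  # ~6e-2 for TeV-scale superpartners
```

The bound weakens by more than two orders of magnitude between 100 GeV and 1 TeV superpartners, driven mostly by the quadratic squark-mass dependence.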
The coefficients of the LFV operators can similarly be expressed in terms of the $\lambda$ and $\lambda^\prime$ couplings:
\begin{eqnarray}
\nonumber
\frac{c_\sigma}{\Lambda^2} &\sim & \frac{\lambda\lambda^\ast}{m_{\tilde\ell}^2}\, ,
\frac{\lambda^\prime\lambda^{\prime\ast}}{m_{\tilde q}^2} \\
\frac{c_\ell}{\Lambda^2} &\sim& \frac{\lambda_{i11}\lambda_{i21}^\ast}{m_{\tilde\nu_i}^2},\quad
\frac{\lambda_{i11}^\ast\lambda_{i12}}{m_{\tilde\nu_i}^2}\\
\nonumber
\frac{c_{\ell q}}{\Lambda^2} &\sim& \frac{\lambda^{\prime\ast}_{11i}\lambda_{21i}^{\prime}}{m_{\tilde d_i}^2},\quad
\frac{\lambda_{1i1}^{\prime\, \ast} \lambda_{2i1}^\prime}{m_{\tilde u_i}^2}
\ \ \ ,
\end{eqnarray}
where the various combinations of the $\lambda$ and $\lambda^\prime$ entering $c_\sigma$ are given in Ref.~\cite{deGouvea:2000cf}. Limits on various combinations of the $P_R$-violating couplings are given in Tables II and III of that work. From the present limits on $B_{\mu\to e\gamma}$, one has for example
\begin{eqnarray}
\label{eq:megrpv}
|\lambda_{131}\lambda_{231}| \leq 2.3 \times 10^{-4} \left(\frac{m_{\tilde \ell}}{100\ {\rm GeV}}\right)^2\\
|\lambda_{111}^\prime\lambda_{211}^\prime| \leq 7.6 \times 10^{-5} \left(\frac{m_{\tilde q}}{100\ {\rm GeV}}\right)^2
\end{eqnarray}
while from $B^{\rm Au}_{\mu\to e}$ one obtains
\begin{eqnarray}
\label{eq:merpv}
|\lambda_{131}\lambda_{231}| \leq 1.1 \times 10^{-5} \left(\frac{m_{\tilde \ell}}{100\ {\rm GeV}}\right)^2\\
|\lambda_{111}^\prime\lambda_{211}^\prime| \leq 6.0 \times 10^{-7} \left(\frac{m_{\tilde q}}{100\ {\rm GeV}}\right)^2
\end{eqnarray}
Analogous limits on other combinations of the couplings can be found in Ref.~\cite{deGouvea:2000cf}.
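The relative strength of the two probes is worth quantifying: for the semileptonic combination $\vert\lambda^\prime_{111}\lambda^\prime_{211}\vert$, the conversion limit beats the $\mu\to e\gamma$ limit by roughly two orders of magnitude despite the extra factor of $e^2$. A quick check of the quoted numbers:

```python
# Comparing the quoted sensitivities for |lambda'_111 lambda'_211| at a
# common 100 GeV squark mass: mu -> e conversion in Au vs. mu -> e gamma.
lim_from_meg = 7.6e-5    # from B_{mu -> e gamma}, Eq. (megrpv)
lim_from_conv = 6.0e-7   # from B^Au_{mu -> e}, Eq. (merpv)
r = lim_from_meg / lim_from_conv
print(r)                 # ~127: conversion wins by about two decades
```
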
The results in Eqs.~(\ref{eq:0nurpv}-\ref{eq:merpv}) lead to several observations. First, for superpartner masses of ${\cal O}(100)$ GeV, the bounds on $\lambda^\prime_{111}$ obtained from $0\nu\beta\beta$ are stronger than those derived from the LFV observables, though the gap between them shrinks as the superpartner masses are increased. Second, the LFV $\mu\to e$ conversion limits are more stringent than those derived from $\mu\to e\gamma$, even though $B^{\rm Au}_{\mu\to e}$ contains an extra factor of $e^2$ suppression compared to $B_{\mu\to e\gamma}$. The reason is that the penguin operators generated by the terms proportional to ${\tilde A}_1^{L,R}$ in Eq.~(\ref{eq:meg2}) contain large logarithmic enhancements while the dipole operators do not, and these large logs overcome the nominal $4\pi\alpha$ suppression of the conversion process. In addition, tree-level exchange of superpartners can occur in the presence of $P_R$ non-conservation, leading to the presence of four-fermion operators not directly related to the LFV electromagnetic amplitudes of Eq.~(\ref{eq:meg2}). This situation differs from that of the $P_R$-conserving GUT scenario of Refs.~\cite{Barbieri:1994pv,Barbieri:1995tw}, wherein the magnetic amplitudes are dominant and the naive expectations for the relative magnitudes of the conversion and $\mu\to e\gamma$ branching ratios obtain.
\begin{figure}
\resizebox{6 in}{!}{
\includegraphics*[50,500][580,660]{rpvneutrino.ps}}
\caption{Contributions from RPV interactions to (a) $0\nu\beta\beta$-decay
and (b) neutrino mass. Figure (a) gives a representative contribution from
semileptonic trilinear RPV interactions parameterized by the coupling
$\lambda^\prime_{111}$; the \lq\lq $+\cdots$" indicate contributions
involving neutralino exchange. Figure (b) shows the analogous neutrino mass
contribution from semileptonic trilinear RPV; the cross and shaded circle
denote fermion mass and sfermion L-R mixing insertions, respectively. In
both cases generation indices are suppressed.}
\end{figure}
These observations can have important consequences for the interpretation of $0\nu\beta\beta$ in terms of light Majorana neutrino exchange. Indeed, for $\buildrel < \over {_\sim}{\cal O}$(TeV) masses in ${\cal L}_{\rm soft}$, the contribution from ${\tilde c}{\cal O}^{(9)++}_{+}/\Lambda^5$ to the rate for $0\nu\beta\beta$ can be comparable to the contribution from the exchange of a light Majorana neutrino. Denoting the amplitudes for the former and latter as $A_H$ and $A_L$, respectively, one has\cite{Cirigliano:2004tc}
\begin{equation}
\label{eq:heavylight}
\frac{A_H}{A_L}\sim \frac{M_W^4 {\bar k}^2}{\Lambda^5 m_{\beta\beta}}
\ee
where $m_{\beta\beta}$ is the effective mass associated with the exchange of the light Majorana neutrino and ${\bar k}\sim 50$ MeV is its typical virtuality. For $\Lambda\sim 1$ TeV and $m_{\beta\beta}\sim 0.1-0.5$ eV, the ratio in Eq.~(\ref{eq:heavylight}) can be order unity. In general, one would like to know whether or not there exist important heavy particle contributions when attempting to extract robust bounds on $m_{\beta\beta}$ from the $0\nu\beta\beta$ rate limits. To this end, it was observed in Ref.~\cite{Cirigliano:2004tc} that the presence of important heavy particle (superpartner), $P_R$-violating exchange contributions is also accompanied by logarithmically-enhanced contributions to the conversion rate. Should the results of future studies find that the relative sizes of the LFV branching ratios are in accord with the naive expectations, one would conclude that there are no large $P_R$-violating contributions to the $0\nu\beta\beta$-decay process. A similar conclusion holds for other, non-SUSY scenarios, pointing to the usefulness of LFV studies as a diagnostic for the presence of low scale LNV.
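A dimensional estimate makes the point concrete. Taking $A_H\sim 1/\Lambda^5$ for the dimension-nine operator and $A_L\sim G_F^2\, m_{\beta\beta}/{\bar k}^2$ for light-neutrino exchange, with $G_F\sim 1/M_W^2$, gives $A_H/A_L\sim M_W^4\,{\bar k}^2/(\Lambda^5\, m_{\beta\beta})$; the sketch below evaluates this, with $\Lambda$ and $m_{\beta\beta}$ as input assumptions.

```python
# Order-of-magnitude estimate of the heavy-to-light amplitude ratio in
# 0nubb: A_H/A_L ~ M_W^4 kbar^2 / (Lambda^5 m_bb), from A_H ~ 1/Lambda^5
# and A_L ~ G_F^2 m_bb / kbar^2 with G_F ~ 1/M_W^2.  All scales in GeV.
M_W = 80.4          # W-boson mass
kbar = 0.05         # typical neutrino virtuality, ~50 MeV

def AH_over_AL(Lambda, m_bb_eV):
    m_bb = m_bb_eV * 1.0e-9       # convert eV -> GeV
    return (M_W**4 * kbar**2) / (Lambda**5 * m_bb)

r = AH_over_AL(1000.0, 0.1)
print(r)            # ~1: heavy-particle exchange can compete at the TeV scale
```

For $\Lambda\sim 1$ TeV and $m_{\beta\beta}\sim 0.1$ eV the ratio is indeed of order unity, illustrating why LFV diagnostics are needed to disentangle the two mechanisms.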
\subsection{R Parity Violation and Neutrino Mass}
One may incorporate massive neutrinos in SUSY models by supersymmetrizing any of the SM extensions that incorporate non-vanishing $m_\nu$. As discussed above, the authors of Ref.~\cite{Hisano:1995cp} carried out such an analysis in an extension of the MSSM that contains right-handed neutrino superfields $N_j^C$ that are singlets under the MSSM gauge groups. The small masses of the light neutrinos arise via the see saw mechanism with a right-handed neutrino mass parameter $M_R\sim10^{12}$ GeV. Except for the effects of the resulting non-zero light neutrino Majorana mass, the effects of the fields in $N_j^C$ decouple from low-energy observables. Variations on this class of SUSY models with massive neutrinos can be found
in the literature (see Refs.~\cite{Mohapatra:2005nc,Dong:2006vk, Kang:2004ix,Frank:2002hk,Hisano:2001qz,Hisano:1995cp} and references therein).
An alternate mechanism for generating neutrino mass involves the LNV interactions in $W_{\Delta L=1}$ (for a recent analysis and survey of the literature, see {\em e.g.}, Ref.~\cite{Grossman:2003gq}). Tree-level Majorana masses can arise from the bilinear terms that lead to mixing between neutrinos and neutral Higgsinos [see Eq.~(\ref{eq:RPVL})]. Specifically, one has the contribution to the light neutrino mass matrix\cite{Grossman:2003gq}
\begin{equation}
\label{eq:neutrino1}
\left[ m_\nu\right]_{ij}^{(\mu\mu)} \sim \mu_i^\prime \mu_j^\prime \, \frac{\cos^2\beta}{\tilde m}
\ee
where $\tilde m$ is a characteristic soft mass parameter. In the absence of fine-tuning between the contribution in Eq.~(\ref{eq:neutrino1}) and other tree-level neutrino mass terms, one obtains the following rough upper bound on the scale of the $P_R$-violating bilinear coupling
\begin{equation}
\label{eq:neutrino2}
\left\vert \frac{\mu_i^\prime}{\tilde m}\right\vert \buildrel < \over {_\sim} \frac{3\times 10^{-6}}{\cos\beta}\left(\frac{m_\nu}{1\, {\rm eV}}\right)^{1/2} \, \left(\frac{100\, {\rm GeV}}{\tilde m}\right)^{1/2}.
\ee
The neutrino mass implications for the magnitude of the dimensionful $P_R$ violating SUSY couplings are thus quite severe.
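For orientation, the bound in Eq.~(\ref{eq:neutrino2}) follows by inverting Eq.~(\ref{eq:neutrino1}):
\begin{equation}
\left\vert \frac{\mu_i^\prime}{\tilde m}\right\vert \sim \frac{1}{\cos\beta}\, \left(\frac{m_\nu}{\tilde m}\right)^{1/2} = \frac{1}{\cos\beta}\, \left(\frac{10^{-9}\ {\rm GeV}}{100\ {\rm GeV}}\right)^{1/2} \approx \frac{3\times 10^{-6}}{\cos\beta}
\ee
for $m_\nu = 1$ eV and ${\tilde m}=100$ GeV.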
The triscalar couplings in $W_{\Delta L=1}$ may also induce contributions to $m_\nu$ through one-loop radiative corrections. Such contributions have been studied extensively, and we give only a flavor of these analyses here (for an extensive list of the literature, see Ref. [4] of Ref.~\cite{Grossman:2003gq}). Roughly speaking, one finds
\begin{eqnarray}
\label{eq:neutrino3}
\left[ m_\nu\right]_{ij}^{(\lambda\lambda)} & = & \frac{1}{32\pi^2}\, \sum_{\ell, k}\, \lambda_{i\ell k}\lambda_{jk\ell}\, m_{L_\ell} \left(\delta^L_{LR}\right)_{kk}\ {\bar\xi}^L_{kk}\\
\nonumber
\left[ m_\nu\right]_{ij}^{(\lambda^\prime\lambda^\prime )} & = & \frac{N_C}{32\pi^2}\, \sum_{\ell, k}\, \lambda_{i\ell k}^\prime \lambda_{jk\ell}^\prime\, m_{d_\ell} \left(\delta^d_{LR}\right)_{kk}\ {\bar\xi}^d_{kk}
\end{eqnarray}
where
\begin{equation}
{\bar\xi}^f_{kk} = \frac{
\sqrt{(M_L^2)^f_{kk} (M_R^2)^f_{kk}}}{ (M_L^2)^f_{kk} +(M_R^2)^f_{kk} }
\ee
is a number typically of ${\cal O}(1)$ and where we have assumed that $(M_L^2)^f_{kk} \sim(M_R^2)^f_{kk} \gg (M_{LR}^2)^f_{kk} $. From Eqs.~(\ref{eq:neutrino3}) we may derive neutrino mass naturalness bounds on products of the triscalar couplings for a given value of the flavor diagonal LR mixing parameters, $(\delta^f_{LR})_{kk}$. Assuming that $(M_{LR}^2)^f\propto m_f$, so that
$(\delta^f_{LR})_{kk}\sim m_f/{\tilde m}$, we obtain the most restrictive bounds for third generation fermions ($\ell=3$):
\begin{eqnarray}
\label{eq:neutrino4}
\lambda^\prime_{i3k}\lambda^\prime_{jk3}\buildrel < \over {_\sim} 4\times 10^{-7}\, \left(\frac{m_\nu}{1\, {\rm eV}}\right)\left(\frac{\tilde m}{100\, {\rm GeV}}\right)\\
\lambda_{i3k}\lambda_{jk3}\buildrel < \over {_\sim} 4\times 10^{-5}\, \left(\frac{m_\nu}{1\, {\rm eV}}\right)\left(\frac{\tilde m}{100\, {\rm GeV}}\right)
\end{eqnarray}
Note that these limits are comparable to those obtained from LFV and LNV observables for ${\tilde m}\sim 100$ GeV, though the dependence on the soft masses differs in the various cases.
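As a rough illustration of how the bounds in Eqs.~(\ref{eq:neutrino4}) arise, consider the $\lambda^\prime\lambda^\prime$ contribution with internal $b$ quarks and ${\tilde b}$ squarks ($\ell=k=3$). Setting $N_C=3$, ${\bar\xi}^d_{33}\sim 1$, and $(\delta^d_{LR})_{33}\sim m_b/{\tilde m}$ in Eqs.~(\ref{eq:neutrino3}) gives $m_\nu \sim (3/32\pi^2)\, \lambda^\prime_{i33}\lambda^\prime_{j33}\, m_b^2/{\tilde m}$, so that
\begin{equation}
\lambda^\prime_{i33}\lambda^\prime_{j33} \buildrel < \over {_\sim} \frac{32\pi^2}{3}\, \frac{m_\nu\, {\tilde m}}{m_b^2} \approx 4\times 10^{-7}
\ee
for $m_\nu = 1$ eV, ${\tilde m}=100$ GeV, and $m_b\simeq 5$ GeV, reproducing the scale of Eqs.~(\ref{eq:neutrino4}); the $\lambda\lambda$ bound follows analogously with $N_C\to 1$ and $m_b\to m_\tau$.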
\subsection{EDM Searches: Implications for SUSY}
Recent advances in experimental techniques have put the field of EDM searches on the brink of a revolution. The present limits on the EDMs of the electron\cite{Regan:ta}, neutron\cite{baker06}, and mercury atom\cite{Romalis:2000mg} -- shown in Table \ref{tab:edm} -- are already remarkably stringent. The pursuit of EDMs that began with the pioneering studies of the neutron by Purcell and Ramsey in the 1950's \cite{Purcell:1950,Smith:ht} is poised to enter a new era with prospective improvements in sensitivity of up to four orders of magnitude. New efforts are underway that aim to push the sensitivity of the EDM searches for the electron\cite{DeMille:2000,Kawall:2003ga,Liu:2004,hunter05}, neutron\cite{Mischke:ac,Aleksandrov:2002}, and neutral atoms\cite{Romalis:2001,Romalis:2004,Holt:2004}, as well as for the muon and deuteron\cite{Semertzidis:2003iq}. These prospective experimental advances, as well as related theoretical issues and developments, have been discussed in several recent reviews\cite{Erler:2004cx,Pospelov:2005pr,Fortson:fi,Ginges:2003qt} as well as the text by Lamoreaux and Khriplovich\cite{Khriplovich:ga}, and we refer the reader to those publications for an extensive survey of the literature.
While the potential improvements shown in Table \ref{tab:edm} would not provide access to EDMs associated with the CPV phase of the CKM matrix\cite{Shabalin:rs,Shabalin:sg,Bernreuther:1990jx}, they could allow one to observe an EDM associated with SUSY CPV. As noted above, present EDM limits already preclude ${\cal O}(1)$ CPV phases and TeV scale superpartner masses in the MSSM, and the future measurements will make these constraints even more stringent.
\begin{table}
\begin{center}
\begin{minipage}[t]{16.5 cm}
\caption[]{ Present and prospective EDM limits. Expectations based on SM (CKM)
CP violation are also shown.}
\label{tab:edm}
\vspace*{4pt}
\end{minipage}
\begin{tabular}{|c|r|l|r|c|}
\hline
&&&&\\[-8pt]
System & Present Limit ($e$-cm)& Group & Future Sensitivity & Standard Model
(CKM) \\
\hline
&&&&\\[-8pt]
$e^-$ & $1.6\times 10^{-27}$ (90\%~CL) & Berkeley & & $<10^{-38}$ \\
$e^-$ & & Yale (PbO) & $\sim 10^{-29}$ & \\
$e^-$ & & Indiana/Yale & $\sim 10^{-30}$ & \\
$e^-$ & & Amherst & $\sim 10^{-30}$ & \\
$e^-$ & & Sussex (YbF) & $\sim 10^{-29} $ & \\
&&&&\\
\hline
&&&&\\
$\mu$ & $ 9.3\times 10^{-19}$ (90\%~CL) & CERN & &$<10^{-36}$ \\
$\mu$ & & BNL & $\sim 10^{-24}$ & \\
&&&&\\
\hline
&&&&\\
$n$ & $2.9\times 10^{-26}$ (90\%~CL) & ILL & $1.5\times 10^{-26}$ &
$1.4\times 10^{-33} - 1.6\times 10^{-31} $ \\
$n$ & & PSI & $7\times 10^{-28}$ &\\
$n$ & & SNS & $2\times 10^{-28}$ & \\
$n$ & & ILL & $2\times 10^{-28}$ & \\
&&&&\\
\hline
&&&&\\
$^{199}$Hg & $2.1 \times 10^{-27}$ (95\%~CL) & Seattle & $5\times 10^{-28}$&
$\buildrel < \over {_\sim} 10^{-33}$ \\
$^{225}$Ra & & Argonne & $10^{-28}$ & \\
$^{129}$Xe & & Princeton & $10^{-31}$ & $\buildrel < \over {_\sim} 10^{-34}$ \\
D & & BNL & $\sim 10^{-27}$ & \\
$^{223}$Rn & & TRIUMF & $\sim 10^{-28}$ & \\
[-8pt] &&&&\\
\hline
\end{tabular}
\end{center}
\end{table}
The theoretical interpretation of the present and prospective EDM searches in terms of the parameters of ${\cal L}_{\rm soft}$ requires a careful delineation of a variety of effects. The most straightforward analysis occurs at the level of operators involving the SM fermion and gauge boson fields. In the strong sector of the SM, the lowest-dimension, gauge invariant operator that can generate an EDM is the QCD $\theta$-term:
\begin{equation}
{\cal L}_{(4)}^{\rm CPV} = \frac{\alpha_s \bar\theta}{8\pi} {\rm Tr}\left( G_{\mu\nu}{\tilde G}^{\mu\nu}\right)
\ee
where $G_{\mu\nu}$ is the SU(3)$_C$ field strength tensor and $\tilde G^{\mu\nu} = (1/2)\epsilon^{\alpha\beta\mu\nu} G_{\alpha\beta}$. The present limits from the EDM of the neutron, $d_n$, lead to the most stringent bounds on ${\bar\theta}$ \cite{baker06}:
\begin{equation}
\label{eq:thetabar}
|{\bar\theta}| < (1.2\pm 0.6) \times 10^{-10}\ \ \ (90\% \ {\rm C.L.})\ \ \ ,
\ee
where the $(1.2\pm 0.6)$ prefactor is obtained using the QCD sum rule result of Ref.~\cite{Pospelov:2005pr} and includes the theoretical error quoted in that work.
CP-violation in the electroweak sector arises from the complex phase in the CKM matrix that enters the renormalizable interactions of quarks with $W^\pm$ gauge bosons. The magnitude of its effects is governed by the Jarlskog invariant~\cite{Jarlskog:1985ht},
\begin{equation}
\label{eq:jarlskog}
J = \cos\theta_1\cos\theta_2\cos\theta_3\sin^2\theta_1\sin\theta_2
\sin\theta_3\sin\delta = (2.88\pm 0.33)\times 10^{-5},
\ee
with the $\theta_i$ and $\delta$ being the three angles and complex phase in the CKM matrix. The EDMs generated by SM electroweak CP violation arise at multi-loop level\cite{Shabalin:rs,Shabalin:sg,Gavela:1981sm,Khriplovich:1981ca,He:1989xj} and are proportional to $J$, suppressing their effects to the levels indicated in Table \ref{tab:edm} (for a discussion of the corresponding SM effects in atoms and nuclei, see, {\em e.g.}, Refs.~\cite{Haxton:dq,Flambaum:1984fb,Donoghue:dd,Schiff:1963,Engel:2003rz,Engel:1999np,Dzuba:2002kg,Dmitriev:2003sc,Khriplovich:1999qr}).
The effects of supersymmetric CPV arise through loop corrections to operators involving SM fields. For example, one-loop SUSY corrections to quark propagators can generate complex phases in the quark mass matrix, and through redefinitions of the quark fields, these phases can be absorbed into ${\bar\theta}$. In the absence of a Peccei-Quinn (PQ) symmetry that allows one to absorb these contributions into the axion field and maintain a vanishing ${\bar\theta}$ prior to symmetry-breaking, the experimental limits on ${\bar \theta}$ given in Eq.~(\ref{eq:thetabar}) lead to tight constraints on CPV in the ${\rm SU(3)}_C$ sector of the MSSM\cite{Pospelov:2005pr}. Generally speaking, however, phenomenological analyses of supersymmetric CPV implicitly assume such a PQ mechanism, leading one to consider SUSY contributions to higher dimension operators.
The lowest dimension non-renormalizable, gauge-invariant CPV operators arise at dimension six:
\begin{eqnarray}
\label{eq:L6}
{\cal L}_{(6)}^{\rm CPV} & = & \frac{i\, g_1 d_u^B}{\Lambda^2} {\bar Q} \sigma_{\mu\nu}\gamma_5 B^{\mu\nu} H_u U + \frac{i\, g_1 d_d^B}{\Lambda^2} {\bar Q} \sigma_{\mu\nu} \gamma_5 B^{\mu\nu} H_d D \\
\nonumber
& +& \frac{i\, g_2 d_u^W}{\Lambda^2} {\bar Q} \sigma_{\mu\nu} \gamma_5 \tau^A W^{\mu\nu\, A} H_u U + \frac{i\, g_2 d_d^W}{\Lambda^2} {\bar Q} \sigma_{\mu\nu}\gamma_5 \tau^A W^{\mu\nu\, A} H_d D\\
\nonumber
& +& \frac{i\, g_3 d_u^G}{\Lambda^2} {\bar Q} \sigma_{\mu\nu}\gamma_5 \lambda^A G^{\mu\nu\, A} H_u U + \frac{i\, g_3 d_d^G}{\Lambda^2} {\bar Q} \sigma_{\mu\nu} \gamma_5\lambda^A G^{\mu\nu\, A} H_d D\\
\nonumber
&+& \frac{w}{\Lambda^2} {\rm Tr}\left(G^{\mu\nu} G_{\nu\alpha} {\tilde G}^{\alpha}_\mu\right)\\
\nonumber
&+& \frac{1}{\Lambda^2} {\rm Tr}\left(G^{\mu\nu} {\tilde G_{\mu\nu}}\right)\, \left[w_u H^\dag_u H_u+w_d H^\dag_d H_d + w_{ud} \left(H_u^\dag \epsilon H_d+{\rm h.c.}\right)\right] \\
\nonumber
& +& \frac{1}{\Lambda^2} {\rm Tr}\left(W^{\mu\nu} {\tilde W_{\mu\nu}}\right)\, \left[c_u H^\dag_u H_u+c_d H^\dag_d H_d + c_{ud} \left(H_u^\dag \epsilon H_d+{\rm h.c.}\right)\right] \\
\nonumber
&+& \frac{1}{\Lambda^2} {\rm Tr}\left(B^{\mu\nu} {\tilde B_{\mu\nu}}\right)\, \left[b_u H^\dag_u H_u+b_d H^\dag_d H_d + b_{ud} \left(H_u^\dag \epsilon H_d+{\rm h.c.}\right)\right] \\
\nonumber
&+& \frac{1}{\Lambda^2} {\rm Tr}\left(W^{\mu\nu\, a} {\tilde B_{\mu\nu}}\right)\, \left[a_u H^\dag_u\tau^a H_u+a_d H^\dag_d\tau^a H_d + a_{ud} \left(H_u^\dag \tau^a \epsilon H_d+{\rm h.c.}\right)\right] \\
\nonumber
& +& \sum_{abcd} \frac{C_{abcd}}{\Lambda^2} \epsilon_{ij} {\bar Q}_i^a d^c {\bar Q}_j^b i\gamma_5 u^d +\cdots
\eea
where the $+\cdots$ indicate gauge invariant operators involving lepton fields. Here, we have chosen to normalize the operators in terms of a new physics scale $\Lambda$ taken to be greater than the scale of electroweak symmetry breaking. After electroweak symmetry-breaking, the terms containing the SU(2)$_L$ and U(1)$_Y$ field strength tensors give rise to the electric dipole moments of the elementary fermions:
\begin{equation}
{\cal L}_{EDM} = -\frac{i\, d_u^\gamma}{2\Lambda} {\bar U}_L \sigma_{\mu\nu} F^{\mu\nu} U_R- \frac{i\, d_d^\gamma}{2\Lambda} {\bar D}_L \sigma_{\mu\nu} F^{\mu\nu} D_R-\frac{i\, d_\ell^\gamma}{2\Lambda} {\bar \ell}_L \sigma_{\mu\nu} F^{\mu\nu} \ell_R
\ee
where
\begin{eqnarray}
d_u^\gamma & =& -\frac{\sqrt{2}\, v_u\left(c_W\, d_u^B+s_W\, d_u^W\right)}{\Lambda}\\
d_d^\gamma & =& -\frac{\sqrt{2}\, v_d\left(c_W\, d_d^B+s_W\, d_d^W\right)}{\Lambda}\\
d_\ell^\gamma & =& -\frac{\sqrt{2}\, v_d\left(c_W\, d_\ell^B+s_W\, d_\ell^W\right)}{\Lambda} \ \ \
\eea
where $c_W\equiv\cos\theta_W$ and $s_W\equiv\sin\theta_W$.
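These combinations simply reflect the decomposition of the neutral gauge fields after electroweak symmetry breaking,
\begin{equation}
B_\mu = c_W\, A_\mu - s_W\, Z_\mu\ \ , \qquad W^3_\mu = s_W\, A_\mu + c_W\, Z_\mu\ \ ,
\ee
so that replacing $H_{u,d}$ by their vevs in Eq.~(\ref{eq:L6}) projects the $B^{\mu\nu}$ and $W^{\mu\nu\, 3}$ dipole operators onto the photon field strength $F^{\mu\nu}$ with relative weights $c_W$ and $s_W$.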
The terms coupling $G^{\mu\nu}$ to quarks are the chromoelectric dipole moment operators; the term containing three powers of $G^{\mu\nu}$ is the CPV Weinberg three gluon operator; and the four fermion operator involves products of SU(2)$_L$ doublet and singlet fields (with SU(2)$_L$ indices $i,j$) of flavors $a, \ldots, d$. As discussed in Ref.~\cite{Manohar:2006gz}, the terms containing products of $H^\dag_u H_u$ and $G {\tilde G}$ {\em etc.}, become topological when the Higgs fields are replaced by their vevs, and these effects cannot be analyzed in perturbation theory. They do, however, contribute to the operator ${\cal L}_{(4)}^{\rm CPV}$ and amount to a shift in the value of ${\bar\theta}$ that is constrained by $d_n$. The parts of the $H^\dag_u H_u G {\tilde G}$ operators containing physical scalar degrees of freedom contribute to a renormalization of ${\cal L}_{(4)}^{\rm CPV}$ and, thus, to EDMs. Naive dimensional analysis suggests that values of the operator coefficients $w_u\sim \alpha_s/4\pi$ would lead to EDMs that are consistent with present limits for the mass scale $\Lambda\gsim 1$ TeV. A comprehensive study of these operators, however, has not been performed at present.
\begin{figure}
\resizebox{6 in}{!}{
\includegraphics*[60,480][550,660]{oneloopedm.ps}}
\caption{Representative one loop supersymmetric contributions to elementary
fermion (a) electric dipole moment
and (b) chromoelectric dipole moment (quarks only).}
\label{fig:dfoneloop}
\end{figure}
One-loop SUSY contributions to the operator coefficients in Eq. (\ref{eq:L6}) have been computed in Refs.~\cite{Bernreuther:1990jx,Ibrahim:1997gj,Falk:1999tm} (for older one-loop computations, see the literature cited in these studies), while two-loop contributions to the elementary fermion EDMs $d_f^\gamma$ have recently been analyzed in Refs.~\cite{Giudice:2005rz,Chang:2005ac,Chang:2002ex,Pilaftsis:2002fe}. Illustrative contributions are shown in Figs. \ref{fig:dfoneloop} and \ref{fig:twoloopedm}.
Approximate results for the one-loop quark and lepton EDMs and quark chromo-EDMs have been given in Ref.~\cite{Pospelov:2005pr} for a simplified scenario in which only two CP-violating phases contribute: $\phi_\mu$, a common relative phase between the electroweak gaugino masses and the $\mu$ parameter, and $\phi_A$, a common phase associated with the triscalar couplings:
\begin{eqnarray}
\frac{d_e^\gamma}{e\kappa_e} & = & -\frac{g_1^2}{12}\sin\phi_A+\left(\frac{5g_2^2}{24}+\frac{g_1^2}{24}\right)\sin\phi_\mu\tan\beta\\
\frac{d_q^\gamma}{e_q\kappa_q} & = & \frac{2 g_3^2}{9}\left(\sin\phi_\mu[\tan\beta]^{\pm 1} +\sin\phi_A\right)
+{\cal O}(g_2^2,g_1^2)\\
\frac{d_q^G}{\kappa_q} & = & \frac{5 g_3^3}{18}\left(\sin\phi_\mu[\tan\beta]^{\pm 1} +\sin\phi_A\right)
+{\cal O}(g_2^2,g_1^2)
\eea
where $e_q$ is the quark charge,
\begin{equation}
\kappa_f = \frac{m_f}{16\pi^2\, {\tilde m}}
\ee
and where for simplicity we take $\Lambda={\tilde m}$, a common soft mass; the upper (lower) sign corresponds to negatively (positively) charged quarks.
These results, together with two-loop contributions to the Weinberg three gluon operator coefficient $w$, can be used to compute the EDMs of charged leptons, the neutron, and neutral atoms. In the case of hadrons and atoms, one must contend with a variety of theoretical issues involving non-perturbative QCD, nuclear structure, and atomic structure theory. Extensive, recent reviews of these issues can be found in Refs.~\cite{Erler:2004cx,Pospelov:2005pr,Fortson:fi,Ginges:2003qt,Khriplovich:ga}, so we do not reproduce those discussions here. Instead, we illustrate the sensitivity of these EDMs to the CPV phases. Considering first the electron, we scale the SUSY mass scale ${\tilde m}$ to 100 GeV and use
$g_1^2 = 4\pi\alpha/\cos^2\theta_W$ and $g_2^2 = 4\pi\alpha/\sin^2\theta_W$ to obtain
\begin{equation}
\label{eq:oneloopest}
\frac{d_e^\gamma}{\Lambda} \approx 5\times 10^{-25}\, \left(\frac{100\, {\rm GeV}}{{\tilde m}}\right)^2 \left[ \tan\beta\sin\phi_\mu - 0.05\sin\phi_A\right]\ e-{\rm cm}\ \ \ .
\ee
In this case, we see that for $\tan\beta \gsim 1$, $d_e^\gamma$ is overwhelmingly sensitive to the relative phase of the $\mu$ parameter and electroweak gaugino soft masses and that the present experimental bounds in Table \ref{tab:edm} imply
\begin{equation}
\label{eq:deoneloopest}
\sin\phi_\mu \buildrel < \over {_\sim} 3\times 10^{-3} \, \left(\frac{{\tilde m}}{100\, {\rm GeV}}\right)^2\, \cot\beta\ \ \ .
\ee
In short, only for ${\tilde m}\gsim $ a few TeV can one accommodate an ${\cal O}(1)$ phase and remain consistent with present limits.
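To make this concrete, saturating Eq.~(\ref{eq:deoneloopest}) with a maximal phase, $\sin\phi_\mu = 1$, requires
\begin{equation}
{\tilde m} \gsim 100\ {\rm GeV}\times\left(\frac{\tan\beta}{3\times 10^{-3}}\right)^{1/2} \approx 3\ {\rm TeV}
\ee
for $\tan\beta = 3$, with the scale growing as $\sqrt{\tan\beta}$ for larger values.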
In Fig. \ref{fig:e-n-oneloop}, we illustrate the complementary sensitivity of various EDMs to the SUSY phases using illustrative results for the electron and neutron. Here, we have assumed a common sfermion mass
$m_{\tilde f} = $ 1 TeV for the first two generations and show results for two different scenarios for the values of $|\mu|$ and $M_2$. The widths of the bands correspond solely to the experimental error and contain no theoretical uncertainty associated with the non-perturbative methods used to obtain $d_n$.
The shaded band indicates the regions required to produce the baryon asymmetry during the supersymmetric electroweak phase transition. The band associated with the $^{199}$Hg EDM limit is similar to that for $d_n$ (see, {\em e.g.}, \cite{Pospelov:2005pr}). The bands in Fig. \ref{fig:e-n-oneloop} illustrate the general feature that various EDMs display complementary dependences on SUSY CP-violating phases and that results from a variety of EDM searches are needed to obtain meaningful constraints in a given scenario. Moreover, the scale of the allowed phases is quite small: $\sim {\rm few}\, \times 10^{-2}$. This small scale does not appear to be {\em a priori} natural, and leads to the SUSY CP problem indicated earlier.
\begin{figure}
\includegraphics*[width=3 in]{1TeVoffpeak.eps}
\includegraphics*[width=3 in]{1TeVpeak.eps}
\caption{
One loop constraints on CPV phases $\phi_\mu$ (horizontal axis) and
$\phi_A$ (vertical axis) from present 95\% C.L. limits on the EDMs of
the electron (solid lines), neutron (dashed lines) and the baryon asymmetry
(colored bands). Constraints consistent with the WMAP value for the baryon
asymmetry are given by the blue band and those obtained from BBN are given
by the blue + green bands. Left panel corresponds to choosing $(|\mu|,
M_2)=(250, 200)$ GeV (non-resonant baryogenesis) and the right panel
corresponds to $(200, 200)$ GeV (resonant baryogenesis). In obtaining both
figures, a common sfermion mass of 1 TeV was used. These figures are courtesy of C.~Lee. }
\label{fig:e-n-oneloop}
\end{figure}
As discussed in Ref.~\cite{Pospelov:2005pr} two-loop \lq\lq Barr-Zee" contributions to the EDMs of elementary fermions may become important for TeV scale sfermions in the limit of large $\tan\beta$. For the electron, these contributions are given approximately by
\begin{equation}
\label{eq:debarrzee}
\frac{d_e^\gamma}{e\kappa_e} \approx \frac{\alpha\, y_t^2}{9\pi}\, \ln\left(\frac{\tilde m^2}{m_A^2}\right)\, \tan\beta\, \sin(\phi_\mu+\phi_A)\ \ \ ,
\ee
where $y_t$ is the top Yukawa coupling and $m_A$ is the mass of the CP-odd Higgs. The dependence on these parameters arises from the coupling of the CP-odd Higgs to scalar top quarks in the two-loop amplitude. Comparing with the dominant one-loop contribution we observe that -- apart from the logarithmic factor -- the two-loop Barr-Zee contribution is suppressed relative to the one-loop EDM by a little more than the $1/16\pi^2$ loop factor. Since the dependence on $\tan\beta$ is similar in both cases, the two-loop Barr-Zee contribution will be comparable with the one-loop contribution only if $|\phi_\mu| \ll |\phi_A| $ and a large splitting between the masses of the scalar fermions and the CP-odd Higgs generates logarithmic enhancements of the two-loop amplitude.
The tight one-loop limits on the SUSY CP violating phases may be relaxed for sufficiently heavy sfermion masses, as in \lq\lq split supersymmetry" scenarios. In this case, the dominant contributions arise from the two-loop graphs of Fig. \ref{fig:twoloopedm}. Omitting the Barr-Zee contribution and letting
\begin{equation}
d_f^\gamma({\rm 2\ loop}) = d_f^\gamma(\gamma h) + d_f^\gamma(Zh) + d_f^\gamma(WW)
\ee
corresponding to the three different graphs of Fig. \ref{fig:twoloopedm}, one has, for example, \cite{Giudice:2005rz,Chang:2005ac,Pilaftsis:2002fe}
\begin{equation}
\label{eq:dftwoloop}
\frac{d_f^\gamma(\gamma h)}{\Lambda^2} =
\frac{e Q_f \alpha^2}{4\sqrt{2}\pi^2 s_W^2}\, {\rm Im}\left(D_{ii}^R\right) \frac{m_f m_{\chi^+_i}}{M_W m_{h^0}^2}\, F_{\gamma H}(r_{iH}^+)
\ee
where $r^+_{iH} = (m_{\chi^+_i}/m_{h^0})^2$ and $D_{ii}^R$ involves combinations of the chargino diagonalization matrices and Higgs-Higgsino-Gaugino couplings. In the simplified scenario discussed above in which the electroweak gaugino mass parameters have a common phase, ${\rm Im}(D_{ii}^R)\propto \sin\phi_\mu$. Analogous expressions can be obtained for the other contributions in Fig. \ref{fig:twoloopedm}. However, the literature does not agree on the results for these graphs. Numerically, the authors of Ref.~\cite{Giudice:2005rz} obtain in the heavy chargino limit
\begin{eqnarray}
d_e^\gamma(Zh) & \approx & 0.05 d_e^\gamma(\gamma h) \\
d_e^\gamma(WW) & \approx & -0.3 d_e^\gamma(\gamma h) \\
d_n^\gamma(Zh) & \approx & d_n^\gamma(\gamma h) \\
d_n^\gamma(WW) & \approx & -0.7 d_n^\gamma(\gamma h)
\eea
where the results for $d_n$ include QCD evolution from the electroweak scale to the hadronic scale.
Note that in the case of the electron, the $Zh$ contribution is suppressed by the small vector coupling of the $Z$-boson to the electron. Moreover, the authors find substantial cancellations between the $WW$ and $\gamma h$ contributions to $d_n$. In contrast, the authors of Ref.~\cite{Chang:2005ac} find an opposite sign for the $WW$ contribution and no cancellation. Since this disagreement is less consequential for $d_e$, we concentrate on the electron EDM below. The sensitivity of the two-loop EDM to the CPV phases is given approximately by
\begin{equation}
\label{eq:twoloopest}
\frac{d_e^\gamma(\gamma h)}{\Lambda} \approx 2\times 10^{-27} \left(\frac{m_{\chi^\pm}}{m_{h^0}}\right)\, \left(\frac{100\, {\rm GeV}}{m_{h^0}}\right) \left({\rm Im} D_{ii}^R\right) F_{\gamma H}(r_{iH}^+)\ \ e-{\rm cm}\ \ \ .
\ee
For $F_{\gamma H}(r_{iH}^+)\sim {\cal O}(1)$ and ${\rm Im} D_{ii}^R\sim\sin\phi_\mu$ the two-loop EDM is roughly 300 times less sensitive to $\sin\phi_\mu$ than the one-loop contribution. Thus,
the present $d_e$ limits may accommodate ${\cal O}(1)$ phases for sfermion masses of ${\cal O}(10\, {\rm TeV})$. For $\sin\phi_\mu=0.5$, for example, the one-loop contributions become suppressed relative to the two-loop effects for $m_{\tilde f}\gsim 4-5$ TeV and for $200\, {\rm GeV}\buildrel < \over {_\sim} \mu, M_2 \buildrel < \over {_\sim} 1$ TeV\cite{Cirigliano:2006dg}. We discuss the corresponding implications for SUSY baryogenesis and dark matter below.
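The crossover between the one- and two-loop regimes can be estimated by equating Eqs.~(\ref{eq:oneloopest}) and (\ref{eq:twoloopest}) for a common phase, up to the ${\cal O}(1)$ loop function and mass-ratio factors appearing in Eq.~(\ref{eq:twoloopest}):
\begin{equation}
\left(\frac{m_{\tilde f}}{100\ {\rm GeV}}\right)^2 \sim \frac{5\times 10^{-25}}{2\times 10^{-27}}\, \tan\beta\ \ \Longrightarrow\ \ m_{\tilde f}\sim 1.6\ {\rm TeV}\times\sqrt{\tan\beta}\ \ \ ,
\ee
which gives $m_{\tilde f}\sim 4-5$ TeV for $\tan\beta\sim 5-10$, consistent with the more detailed analysis of Ref.~\cite{Cirigliano:2006dg}.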
\begin{figure}
\resizebox{5 in}{!}{
\includegraphics*[0,0][680,240]{twoloopgraphs.eps}}
\caption{Two-loop contributions to elementary fermion EDMs. The figures are
reprinted from Ref.~\cite{Giudice:2005rz} with
permission from Elsevier.}
\label{fig:twoloopedm}
\end{figure}
While the foregoing discussion relies largely on flavor-diagonal CPV, elementary fermion EDMs may also provide information on the flavor structure of ${\cal L}_{\rm soft}$. For example, one-loop gluino contributions to the d-quark EDM can be enhanced by a factor of $m_b/m_d$ in the presence of flavor non-diagonal CPV\cite{Pospelov:2005pr}
\begin{equation}
\label{eq:dquarkedm}
d_d^\gamma = e\, Q_d\, \delta^d_{131}\, \left(\frac{m_b}{{\tilde m}^2}\right)\, \frac{\alpha_s\tan\beta}{45\pi}
\ee
where
\begin{equation}
\delta^d_{131}\, = {\rm Arg}\left[\left(\delta^d_{LL}\right)_{13}\, \left(\delta^d_{LR}\right)_{33}\, \left(\delta^d_{RR}\right)_{31}\right]
\ee
and where the $\left(\delta^d_{AB}\right)_{ij}$ have been normalized to an average sfermion mass-squared rather than to the denominator appearing in Eq.~(\ref{eq:massinsert}). The quantity $\delta^d_{131}$ also enters the imaginary part of the one-loop quark mass renormalization, and so will be constrained to the $10^{-9}$ level in the absence of a PQ symmetry. Imposing the latter and taking the present EDM limits one finds bounds on the $\delta^f_{131}$ ranging from $10^{-3}$ to $10^{-6}$ for ${\tilde m}=1$ TeV.
Apart from the Weinberg operator proportional to ${\rm Tr}\, GG{\tilde G}$, the other dimension six operators appearing in Eq.~(\ref{eq:L6}) have received less scrutiny than the dipole operators. It has been observed in Ref.~\cite{Lebedev:2002ne} that the EDMs of the neutron and neutral atoms may receive important contributions from the four fermion operators in the limit of large $\tan\beta$, as these contributions grow with $\tan^3\beta$.
\subsection{SUSY Baryogenesis and Dark Matter}
Explaining the origin of the matter density of the universe is an ongoing task that lies at the interface of particle physics, nuclear physics, and cosmology. As is well known, the smallest -- but most anthropically relevant -- component of the energy of the universe consists of baryonic matter, while the next largest component is the non-baryonic cold dark matter (DM). In principle, SUSY could provide a particle physics basis for explaining how both of these matter components came to be and take on their observed abundance. The literature pertaining to both supersymmetric dark matter and baryogenesis is vast, and comprehensive reviews have appeared over the past decade\cite{Jungman:1995df,Bertone:2004pz,Riotto:1999yt,Dine:2003ax}. Here, we review recent theoretical developments and the corresponding implications for SUSY CP-violation.
The baryon asymmetry of the universe (BAU) can be characterized by the ratio of the baryon number density to the entropy density:
\begin{equation}
\label{eq:ewb1}
Y_B\equiv \frac{n_B}{s} =
\biggl\{
\begin{array}{cc}
(7.3\pm 2.5)\times 10^{-11}, & \text{BBN \cite{Eidelman:2004wy}}\\
(9.2\pm 1.1)\times 10^{-11}, & \text{WMAP \cite{Spergel:2003cb}}
\end{array}
\ee
where the first value (BBN) is obtained from observed light element abundances and standard Big Bang Nucleosynthesis and the second value is obtained from the cosmic microwave background as probed by the WMAP collaboration.
Despite the presence of all three Sakharov ingredients discussed above, it was shown by Shaposhnikov\cite{Shaposhnikov:1987tw} that they are not sufficiently effective within the SM to account for the observed BAU. In order to prevent ``washout" of any baryon number created at the electroweak temperature, the electroweak phase transition (EWPT) has to be strongly first order. The strength of the EWPT depends on the parameters of the Higgs potential, $V_H$, that also govern the mass of the Higgs boson. In the SM, the mass of the Higgs must be below about 45 GeV to allow for a strong first order EWPT, so the present LEP II direct search limit $m_h > 114.4$ GeV precludes this possibility.
Roughly speaking, this bound arises from a competition between thermal contributions to $V_H$ and those generated by parameters in the Lagrangian at $T=0$:
\begin{eqnarray}
\label{eq:vh1}
V_H(T,\phi) & = & V_{\rm eff}(\phi)+V_{\rm th}(T,\phi)\\
V_{\rm eff}(\phi) & = & \frac{1}{2} m^2\phi^\dag \phi+\frac{\lambda}{4}(\phi^\dag\phi)^2+\cdots
\eea
where $V_{\rm eff}(\phi)$ is the zero temperature one-loop effective potential and $V_{\rm th}(T,\phi)$
gives thermal contributions to the potential. One may then use the high temperature expansion to determine the shape of $V_H(T,\phi)$ as a function of $T$, leading to (see, {\em e.g.}, Ref.~\cite{Quiros:1999jp})
\begin{equation}
\label{eq:veff}
V_H(T,\phi) = D(T^2-T_0^2) \phi^2 - ET \phi^3 +\frac{\lambda(T)}{4}\phi^4
\ee
where $T_0$ is the temperature below which the quadratic term becomes negative and where $D$, $E$, $\lambda(T)$ and $T_0$ are functions of the scalar, vector boson, and fermion masses, $v$, and $T$. The EWPT is characterized by two temperatures in addition to $T_0$:
\begin{eqnarray}
\label{eq:ewpt1}
T_1^2 & =& \frac{8\lambda D T_0^2}{8\lambda D - 9E^2}\\
\nonumber
T_C^2 & = & \frac{\lambda D T_0^2}{\lambda D-E^2}\ \ \ .
\eea
For $T< T_1$, a potential barrier forms between the phases of broken and unbroken electroweak symmetry, associated with minima of the potential at $\phi^0=0$ and $\phi^0=v(T)/\sqrt{2}\equiv \phi_m$. The formation of a barrier is
accompanied by the onset of tunneling between the two phases and the formation of bubbles of broken electroweak symmetry. For $T<T_C$, one has $V_H(\phi_m,T) < V_H(\phi=0,T)$, while for $T<T_0$, the potential at $\phi=0$ becomes a local maximum.
In order to avoid washout of the baryon asymmetry during the EWPT, the energy associated with the sphaleron configurations, $E_{\rm sph}$, must be sufficiently large as to suppress the associated transition rate between different vacua. The sphaleron energy is also tied to the value of $v(T)$ since the gauge and Higgs fields are coupled through the associated equations of motion. From numerical studies, one obtains
\begin{equation}
\label{eq:ewpt2}
\frac{E_{\rm sph}}{T_C} = \frac{4\pi}{g}\, \frac{v(T_C)}{T_C}\, B(\lambda/g^2) \gsim 40
\ee
where $B(\lambda/g^2)$ is a constant of ${\cal O}(1)$ that depends weakly on $\lambda/g^2$ and where the last inequality is imposed in order to ensure a value of $Y_B$ no smaller than that given in Eq.~(\ref{eq:ewb1}). The latter requirement, thus, leads to
\begin{equation}
\label{eq:ewpt3}
\frac{v(T_C)}{T_C}\gsim 1\ \ \ .
\ee
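Numerically, with $g\simeq 0.65$ and a typical value $B(\lambda/g^2)\simeq 1.9$ from the numerical studies, Eq.~(\ref{eq:ewpt2}) translates into
\begin{equation}
\frac{v(T_C)}{T_C} \gsim \frac{40\, g}{4\pi\, B} \approx 1.1\ \ \ ,
\ee
which is the origin of the criterion in Eq.~(\ref{eq:ewpt3}).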
The value of $v(T_C)$ is, in turn, determined by the minimization condition $V^\prime(\phi,T_C)=0$ that yields
\begin{equation}
\label{eq:ewpt4}
\frac{v(T_C)}{T_C} = \frac{2 E}{\lambda(T_C)} = 4E\, \left( \frac{v_0^2}{m_h^2}\right)+\cdots \ \ \ ,
\ee
where the \lq\lq$+\cdots$" indicate small corrections arising from the logarithmic $T$-dependence of $\lambda$. From Eqs.~(\ref{eq:ewpt3},\ref{eq:ewpt4}) one has
\begin{equation}
\label{eq:ewpt5}
4 E\, \left( \frac{v_0^2}{m_h^2}\right)\gsim 1
\ee
as the condition on the parameters in $V_H(\phi,T)$ that must be satisfied in order to obtain a strong first order EWPT that prevents washout of the baryon asymmetry.
In the SM, the cubic coupling is given by
\begin{equation}
\label{eq:ewpt6}
E = \frac{2 m_W^3+m_Z^3}{4\pi v^3} \approx 0.01
\ee
implying that $m_h$ must be lighter than about 45 GeV in order to satisfy the condition (\ref{eq:ewpt5}). As this bound is well below the present LEP II lower bound, electroweak baryogenesis in the SM is clearly ruled out experimentally.
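The quoted bound follows directly from Eqs.~(\ref{eq:ewpt5}) and (\ref{eq:ewpt6}): using $m_h^2 = 2\lambda v_0^2$ one finds
\begin{equation}
m_h \buildrel < \over {_\sim} 2\, v_0\, \sqrt{E} \approx 2\times (246\ {\rm GeV})\times 0.1 \approx 50\ {\rm GeV}\ \ \ ,
\ee
in line with the $\sim 45$ GeV value obtained from more careful treatments of the thermal potential.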
In the MSSM, the bound on the lightest CP-even Higgs mass can be relaxed either through additional scalar contributions to $V_H(\phi,T)$ that increase the value of $E$ or through choices of the other parameters that allow for a lighter Higgs boson (see the discussion in Section \ref{sec:higgs}). The former possibility is realized in the presence of a light, right-handed stop that couples strongly to the Higgs fields, contributes to $E$ at the one-loop level, and enhances $E$ by roughly an order of magnitude:
\begin{eqnarray}
\label{eq:ewpt7}
E_{MSSM} & \approx & E_{SM}+\frac{y_t^3\, \sin^3\beta\, \left(1-{\bar A}_t^2/M^2_{Q_3}\right)^{3/2}}{4\sqrt{2}\pi} \approx 9\, E_{SM}\\
\nonumber
{\bar A}_t & = & A_t-\mu\cot\beta \ \ \ .
\eea
In principle, light left-handed stops could also generate large enhancements, but precision electroweak data rule out the existence of such a light ${\tilde t}_L$. The feasibility of the light ${\tilde t}_R$-induced enhancement of $E$ also depends on the avoidance of color- and charge-breaking minima, leading to conditions on the soft parameter $M^2_{U_3}$. From the analysis of this requirement in Ref.~\cite{Carena:1996wj}, one finds that RH stops lighter than about 130 GeV are disfavored. Assuming these conditions are satisfied, the enhanced value of $E_{MSSM}$ increases the upper bound on the lightest Higgs from Eq.~(\ref{eq:ewpt5}) to $\sim 120$ GeV.
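Since the washout bound scales as $m_h\buildrel < \over {_\sim} 2\, v_0\sqrt{E}$ [cf. Eq.~(\ref{eq:ewpt5})], the enhancement in Eq.~(\ref{eq:ewpt7}) raises the SM bound by roughly a factor of
\begin{equation}
\sqrt{E_{MSSM}/E_{SM}} \approx 3\ \ \ ,
\ee
and more complete treatments of the thermal potential, including two-loop effects, yield the $\sim 120$ GeV figure quoted above.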
Going beyond the MSSM, one may strengthen the EWPT by introducing singlet Higgs supermultiplets $S$ via the superpotential\cite{Gunion:1984yn,Pietroni:1992in,Davies:1996qn,huber96}
\begin{equation}
\label{eq:nmssm1}
W_{\rm singlet}=\left(\mu+\alpha S\right)\, H_1 H_2 + \beta S + \frac{\kappa}{3} S^3
\ee
and the corresponding soft Lagrangian that contains triscalar couplings
\begin{equation}
\label{eq:nmssm2}
{\cal L}_{\rm singlet} = -\left(\alpha A_\alpha H_1\epsilon H_2 S+{\rm h.c.}\right)-\left(\frac{1}{3}\kappa A_\kappa S^3+{\rm h.c.}\right)+\cdots \ \ \ ,
\ee
where $\epsilon$ is the antisymmetric SU(2) tensor, and $A_\alpha$ and $A_\kappa$ are the soft SUSY breaking parameters.
Writing the vevs of the neutral fields as
\begin{eqnarray}
\nonumber
{\rm Re}\langle H_1^0\rangle & = & \frac{\phi}{\sqrt{2}}\cos\gamma\cos\beta\\
\label{eq:nmssm3}
{\rm Re}\langle H_2^0\rangle & = & \frac{\phi}{\sqrt{2}}\cos\gamma\sin\beta\\
\nonumber
{\rm Re} \langle S\rangle & = & \frac{\phi}{\sqrt{2}}\sin\gamma
\eea
the corresponding singlet contribution to the cubic term is given by
\begin{equation}
\label{eq:nmssm4}
E_{\rm singlet} = \frac{2\sqrt{2}\, \sin\gamma}{T}\, \left(\alpha A_\alpha \cos^2\gamma\sin 2\beta +\frac{2}{3}\kappa A_\kappa \sin^2\gamma\right)\ \ \ .
\ee
Supersymmetric models with singlet Higgs are generically referred to as \lq\lq next-to-minimal" and are motivated largely by a desire to generate the $\mu$ parameter as the vev of the singlet field\footnote{In writing Eq.~(\ref{eq:nmssm1}), we follow the conventions of Ref.~\cite{Davies:1996qn}, where a possible quadratic term is eliminated by a constant shift of the field $S\to S+c$. In this case, an explicit $\mu$ parameter appears.}. Models of this type have received considerable attention recently, as in the studies of Refs.~\cite{nonminimal}.
The feasibility of choosing the soft parameters to obtain a sufficiently large value of $E_{\rm singlet}$ while respecting the experimental lower bounds on the lightest CP-even Higgs was initially studied in Ref.~\cite{Pietroni:1992in}. It was shown there that sizable regions of the singlet parameter space lead to a sufficiently strong first order EWPT. The more stringent LEP II lower bounds on the mass of the Higgs reduce this available parameter space, but there remain considerable regions that admit $v(T_C)/T_C \gsim 1$.
In addition to these EWPT considerations, the size of the CP-violating asymmetries generated by particle physics interactions at the phase boundary must also be studied. In the SM, these asymmetries are highly suppressed by the Jarlskog invariant (\ref{eq:jarlskog}) and by
\begin{equation}
\label{eq:ewb2}
(y_t^2-y_c^2)(y_t^2-y_u^2)(y_c^2-y_u^2)(y_b^2-y_s^2)(y_b^2-y_d^2)(y_s^2-y_d^2)\approx 4\times 10^{-17}
\ee
since CP-violating effects should vanish in the limit that any two like-charge quarks become degenerate.
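As a rough consistency check (our own estimate, using $y_q=\sqrt{2}\,m_q/v$ with illustrative electroweak-scale quark masses; the precise number depends on the scale at which the masses are evaluated), the product is dominated by the four small factors:

```latex
(y_c^2-y_u^2)(y_b^2-y_s^2)(y_b^2-y_d^2)(y_s^2-y_d^2)
  \approx y_c^2\, y_b^4\, y_s^2 \sim 10^{-18}\mbox{--}10^{-17}\,,
```

while the two factors involving $y_t^2\approx 1$ contribute a factor of order unity.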
Farrar and Shaposhnikov\cite{Farrar:1993sp,Farrar:1993hn} subsequently argued, however, that the relevant CP-violating asymmetry depends solely on the difference between the probabilities for reflection and transmission of $s$- and $d$-quark currents at the phase transition boundary, so that $Y_B$ is proportional to $y_s-y_d$ rather than the combination in Eq.~(\ref{eq:ewb2}). Assuming that a non-SM mechanism generates a strong first order EWPT, the resulting expectation for the BAU in the SM is much closer to the observed value than in Shaposhnikov's original work.
As with the EWPT, the presence of supersymmetric interactions at the phase boundary may also lead to larger CP-violating asymmetries, as there exist a plethora of CPV interactions that are not Jarlskog suppressed. The computation of these asymmetries was initiated by the authors of Refs.~\cite{Cohen:1994ss,Joyce:1994zn} using conventional transport methods and was followed up by a number of subsequent studies in the MSSM\cite{Huet:1995sh,mssmewb}. The results generally indicated that SUSY CPV phases of $\buildrel < \over {_\sim} {\cal O}(1)$ would be needed to obtain the observed values of $Y_B$\cite{Huet:1995sh}. In the late 1990's, however, Riotto pointed out -- using more sophisticated non-equilibrium field theory methods -- that memory effects in the EWPT plasma could resonantly enhance the CP-violating sources needed for successful EWB with smaller CP-violating phases\cite{Riotto:1998zb} for appropriately tuned values of the MSSM parameters. Subsequent detailed analyses of these resonant enhancements were performed in Refs.~\cite{carena97,resonantewb}. These resonant enhancements allow for successful EWB in the MSSM with significantly smaller CPV phases than implied by earlier work, thereby allowing for consistency with the corresponding EDM bounds on these phases.
The CPV sources appear in the transport equations for Higgs and quark supermultiplet densities that govern the production of chiral charge at the phase boundary.
In the case of the Higgs supermultiplet current density, $H_\mu$, for example, one has
\begin{equation}
\label{eq:Heq}
\partial^\mu H_\mu = - \Gamma_H\frac{H}{k_H}
-\Gamma_Y\biggl(\frac{Q}{k_Q} - \frac{T}{k_T} + \frac{H}{k_H}\biggr) -
{\tilde\Gamma}_Y\biggl(\frac{B}{k_B} - \frac{Q}{k_Q} +
\frac{H}{k_H}\biggr)+ \bar\Gamma_h\frac{h}{k_h} + S_{\widetilde
H}^{CP\!\!\!\!\!\!\!\raisebox{1pt}{\scriptsize$\diagup$}} \ \ \ .
\ee
$Q$ and
($B$,$T$) are the number densities of particles in the third
generation left- and right-handed quark supermultiplets, respectively;
the $k_{H,h,Q,T,B}$ are statistical weights; $S_{\widetilde H}^{CP\!\!\!\!\!\!\!\raisebox{1pt}{\scriptsize$\diagup$}}$
is a CP-violating source; and $\Gamma_H$, $\Gamma_Y$,
${\tilde\Gamma}_Y$, and $\bar\Gamma_h$ are transport coefficients. The terms proportional to $\Gamma_H$ and ${\bar\Gamma}_h$ cause any non-zero Higgs supermultiplet asymmetry to relax to zero, as favored by minimization of the free energy. The terms containing $\Gamma_Y$ and
${\tilde\Gamma}_Y$ favor the transfer of the Higgs asymmetry into the baryon sector and are, thus, essential for the generation of a non-vanishing $Y_B$ from CPV in the Higgs sector. The net baryon asymmetry depends on a detailed competition between the effects of the CPV sources and the CP-conserving relaxation and Higgs-to-baryon transfer rates.
Analogous equations obtain for the quark supermultiplet densities. Carena {\em et al} observed that the enhancements of the sources $S_{{\widetilde H},T,Q}^{CP\!\!\!\!\!\!\!\raisebox{1pt}{\scriptsize$\diagup$}}$ occur when LH and RH scalar top quarks or Higgsinos and gauginos are nearly degenerate, leading to resonant scattering from the spacetime varying Higgs vevs\cite{carena97}. As noted above, however, the requirements of a strong first order EWPT and of precision electroweak data preclude the occurrence of such degeneracies in the scalar top sector, implying that resonant supersymmetric EWB may only occur via gauginos and Higgsinos.
These developments were followed up by studies in
Refs.~\cite{Konstandin:2004gy,Konstandin:2003dx,Konstandin:2005cd}, who found somewhat smaller resonance effects from the sources, and by the work of Refs.~\cite{Lee:2004we,Cirigliano:2006wh}, in which the relaxation and Higgs-to-baryon transfer coefficients in the quantum transport equations were computed using the same non-equilibrium methods. In particular, the authors of the latter work observed that the various terms in Eq.~(\ref{eq:Heq}) (and the analogous quark supermultiplet equations) could be derived by expanding the Green's functions and self-energies entering the non-equilibrium Schwinger-Dyson equations in ratios of physical scales present during the EWPT.
It was also pointed out that resonance effects could enhance the Higgsino and chiral charge relaxation coefficients ($\Gamma_H$, {\em etc}.), thereby mitigating the effect of the enhanced CP-violating sources. In addition, the lowest order contributions to the Higgs-quark transfer coefficient, $\Gamma_Y$, had been neglected in earlier analyses. The net impact of these refinements is that the viability of supersymmetric EWB in the MSSM is a quantitative question, depending in detail on the parameters of the theory and requiring input from EDMs, present and future collider studies, and precision electroweak data.
Although formal theoretical issues in EWB remain to be addressed, it is instructive to consider the phenomenological implications of the recent studies for EDM searches. Recent analyses have appeared in Refs.~\cite{Balazs:2004ae,Cirigliano:2006dg,Huber:2006ri,YaserAyazi:2006zw}.
Illustrative results are given in Fig.~\ref{fig:edm-dm-ewb}, where we show the regions of the $\mu$-$M_1$ parameter space that correspond to resonant MSSM EWB (light blue bands), assuming a GUT relation between the gaugino masses. The funnel-like structure corresponds to the resonance conditions: $\mu\sim M_1$ or $\mu\sim M_2$. The red region is excluded by LEP II. The semicircular bands indicate the exclusion region derived from the present (dark blue) and prospective (black) electron EDM measurements for $\sin\phi_\mu\sim 0.5$ using the two-loop computations of Refs.~\cite{Giudice:2005rz,Chang:2005ac}. An analogous set of plots for various values of $\tan\beta$ and $m_A$ is given in Ref.~\cite{Cirigliano:2006dg} for AMSB-type gaugino mass relations.
\begin{figure}
\resizebox{3 in}{!}{
\includegraphics{bau_summary_2.ps}}
\caption{Constraints on MSSM parameters from resonant electroweak baryogenesis, the two-loop electric dipole moment of the electron, and LEP II. This figure is courtesy of S. Profumo.}
\label{fig:edm-dm-ewb}
\end{figure}
The study of Ref.~\cite{Cirigliano:2006dg} indicates that the viability of resonant EWB in the MSSM implies $|d_e|\gsim 10^{-28}$ e-cm and gaugino/Higgsino mass parameters $\buildrel < \over {_\sim} 1$ TeV. Although some portion of the remaining parameter space can be explored with LHC studies, full collider exploration of the needed MSSM parameters will await the ILC. In addition, future dark matter detection experiments may provide a complementary probe, as the character of the LSP is governed by the same parameters $\mu$ and $M_{1,2}$ that determine the viability of resonant EWB. Assuming that the relevant portions of the MSSM parameter space lead to the observed DM relic abundance, one could expect resonant EWB to be accompanied by enhanced production of, and detection rates for, high energy neutrinos produced by neutralino annihilation in the sun. The absence of evidence for high energy solar neutrinos in Super Kamiokande data implies that a portion of the parameter space in Fig.~\ref{fig:edm-dm-ewb} near the base of the EWB funnels would be excluded under such scenarios. Future neutrino telescopes and ton-scale direct detection experiments will considerably extend the reach of these DM probes of EWB. In most cases, however, obtaining the observed relic abundance requires modifications from standard cosmology, such as the presence of an additional energy density that increases the Hubble parameter and allows for earlier decoupling of neutralinos.
\section{$Z$-pole Electroweak Precision Measurements}
\label{sec:zpole}
The electron-positron colliders SLC and LEP have produced millions of $Z$ bosons
at $\sqrt{s}\simeq M_Z$. The measurements of the $Z$ lineshape, forward-backward
asymmetries, and polarized asymmetries at the $Z$-pole
lead to a precise determination of the $Z$-boson pole mass, the total
$Z$-width, and the $Z$ couplings to fermion pairs.
The $Z$-pole precision observables are also combined
with the results from other experiments, like CDF, D0, and low
energy atomic parity violation (APV) and scattering measurements to confront
theories such as the Standard Model or other new physics extensions.
The global fit to the electroweak precision observables is
in excellent agreement with
the Standard Model. Any new physics beyond the SM that could
contribute to these precision observables is, therefore, tightly constrained by
the existing data.
In the $R$-parity conserving MSSM, new corrections to the precision observables arise
from superparticle loops. The SUSY contributions
could be significant when the superparticles are light.
Global analyses of precision observables in the
framework of MSSM have appeared in the
literature
\cite{cho, boer, erler, Heinemeyer:2004gx, ellis,DjouadiYK,AltarelliWX,Belanger:2004ag},
which will be briefly reviewed in this section.
\subsection{Precision Observables}
The $Z$ pole observables can be organized into several
groups \cite{erler,EWG}:
\begin{itemize}
\item{9 lineshape observables}\\
A fit to the $Z$ lineshape and the leptonic forward-backward asymmetries
determines the $Z$ boson mass $M_Z$, the total $Z$-width $\Gamma_Z$,
the hadronic cross section $\sigma_{\rm had}^0$, and, for each lepton flavor,
$l=e$, $\mu$, $\tau$, the ratio
$R_l=\Gamma_{\rm had}/\Gamma_{ll}$\footnote{The results are combined into one
$R_l$ if lepton flavor universality is assumed.},
and the pole asymmetry $A_{FB}^{0,l}$. Defining
\begin{equation}
A_f=\frac{1-4 |Q_f| \sin^2\theta_{\rm eff}^f}{1-4 |Q_f| \sin^2\theta_{\rm eff}^f
+8 Q_f^2 \sin^4\theta_{\rm eff}^f},
\end{equation}
where $\sin^2\theta_{\rm eff}^f$ is the effective weak mixing angle for
fermion $f$ at the $Z$-pole, we have
\begin{equation}
A_{FB}^{0,f}=\frac{3}{4}A_e A_f.
\end{equation}
\item{3 further LEP asymmetries}\\
These include the $\tau$ polarization
${\cal P}(\tau)=A_{\tau}$, its forward-backward asymmetry
${\cal P}_{FB}(\tau)=A_e$, and the hadronic charge asymmetry,
$\langle Q_{FB} \rangle$, which is used as a determination of
$\sin^2\theta_{\rm eff}^e$.
\item{6 heavy flavor observables}\\
These include the ratios $R_q=\Gamma_{qq}/\Gamma_{\rm had}$,
the forward-backward pole asymmetries $A_{FB}^{0,q}$,
and the left-right forward-backward asymmetries
$A^{LR}_{FB}(q)=A_q$, each for $q=b,c$.
\item{3 further SLD asymmetries}\\
The precise measurements at SLD using the polarized electron beam
determine $A_e$, $A_\mu$ and $A_\tau$\footnote{The
results are combined into one
$A_l$ if lepton flavor universality is assumed.}.
\end{itemize}
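As a numerical illustration of the asymmetry definitions above (our own estimate, taking the illustrative value $\sin^2\theta_{\rm eff}^l\approx 0.2315$ for charged leptons and assuming lepton universality, so that $A_e=A_l$):

```latex
A_l \approx \frac{1-4\,(0.2315)}{1-4\,(0.2315)+8\,(0.2315)^2}
    \approx \frac{0.074}{0.50} \approx 0.15\,,
\qquad
A_{FB}^{0,l} = \frac{3}{4}A_l^2 \approx 0.016\,.
```

The strong sensitivity of $A_l$ to the weak mixing angle, entering through the small combination $1-4\sin^2\theta_{\rm eff}^l\approx 0.074$, is what makes the leptonic asymmetries such powerful probes of $\sin^2\theta_{\rm eff}^l$.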
Besides the precision measurements performed at the $Z$-resonance,
the $W$ boson mass ($M_W$) and width ($\Gamma_W$)
have been measured to a relatively high precision
at both the Tevatron and the LEP 2. These quantities are usually included in the list of
precision observables.
Low energy precision observables are sometimes included in global fits.
These observables include
\begin{itemize}
\item{Two weak charge measurements from atomic parity violation:
$Q_W^{\rm Tl}$ and $Q_W^{\rm Cs}$.}
\item{Deep inelastic scattering experiments that yield
$\kappa$, which is a linear
combination of effective 4-Fermi operator coefficients.}
\item{$\nu_\mu e$ scattering experiments that yield the leptonic 4-Fermi
operator coefficients $g_V^{\nu e}$ and $g_A^{\nu e}$.}
\item{NuTeV results on the neutrino-nucleus deep inelastic scattering.}
\item{The branching ratio of $B\rightarrow X_s \gamma$.}
\item{Muon anomalous magnetic moment $(g-2)_{\mu}$.}
\end{itemize}
\subsection{SM Global Fit}
The Standard Model contributions to the precision observables depend on
the free parameters in the model. In the gauge and Higgs sectors one has five parameters: the three gauge couplings, the Higgs vev, and the physical Higgs boson mass ($m_h$). From among these, one ordinarily chooses the fine structure constant ($\alpha$) and Fermi constant ($G_\mu=1/\sqrt{2}v^2$) as independent inputs because they are known precisely from low-energy measurements. The remaining quantities in this sector that are relevant for $Z$-pole observables are the mass of the $Z$-boson ($M_Z$), $m_h$, and the SU(3)$_C$ coupling ($\alpha_s$). In addition, $Z$-pole observables depend on the value of the running QED coupling $\alpha(M_Z)$. Since $\alpha(M_Z)/\alpha$ receives hadronic contributions associated with the five light quarks that are not calculable with the same precision as other SM contributions, one treats these contributions, denoted as $\Delta \alpha_{\rm had}^{(5)}$, as a separate parameter to be obtained from the fits. The top quark does not contribute to the running couplings at $\mu=M_Z$ and below because $m_t>M_Z$. However, one-loop radiative corrections depend strongly on $m_t$, so it is also treated as a fit parameter.
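For example, the relation quoted above fixes the Higgs vev from the precisely measured Fermi constant, $G_\mu \approx 1.166\times 10^{-5}~{\rm GeV}^{-2}$ (a standard numerical exercise):

```latex
v = \left(\sqrt{2}\, G_\mu\right)^{-1/2}
  \approx \left(1.65\times 10^{-5}~{\rm GeV}^{-2}\right)^{-1/2}
  \approx 246~{\rm GeV}\,.
```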
The $Z$-boson mass
$M_Z$ has been determined at LEP to a high precision comparable to that of $G_{\mu}$.
Therefore, $M_Z$ is sometimes taken as fixed input instead of as a fitting parameter.
The top quark mass $m_t$ has been measured directly at CDF and D0.
Its value from the direct measurement is included in the global fit
as a constraint. The strong coupling constant $\alpha_s$ can be determined
from non-$Z$-lineshape data\cite{alphas},
and $\Delta \alpha_{\rm had}^{(5)}$ is obtained
from hadronic $\tau$ decay \cite{AlemanyTN}. Both determinations
are included in the global fit as extra constraints.
The lower bound on the Higgs mass from LEP Higgs searches is sometimes
included as well.
A global fit to the precision observables with extra
constraints on $m_t$, $\alpha_s$, $\Delta \alpha_{\rm had}^{(5)}$
and $m_h$ determines the fitted
values of the input parameters. The studies from LEP electroweak
working group \cite{EWG} included the latest results on the
mass of the top quark from Tevatron: $m_t=171.4 \pm 2.1$ GeV,
the width of the $W$ boson from the Tevatron and LEP-2:
$\Gamma_W = 2.147\pm 0.060$ GeV,
and the mass of the $W$ from LEP-2: $M_W = 80.392 \pm 0.029$ GeV.
The program ZFITTER is used to calculate the SM
predictions for those precision observables, including full
one-loop radiative corrections\footnote{ZFITTER uses on-shell renormalization.} and higher order QCD and electroweak
corrections \cite{ZFITTER}.
The global fit to all the high $Q^2$ precision observables showed
excellent agreement between the measurements and
the SM fitted values. The discrepancies are usually
less than 2 $\sigma$ for almost all the observables except for $A_{FB}^{b}$,
where the deviation is about 3 $\sigma$.
The fitted Higgs mass
is in the range of $85_{-28}^{+39}$ GeV at 68\% C.L. An upper limit of
$m_h < 166$ GeV is obtained at 95\% C.L. This limit increases to 199 GeV
when the LEP-2 direct Higgs search limit of 114 GeV is included in the fit.
\subsection{MSSM Contributions to the Precision Measurements}
The success of the SM global fit to the electroweak precision measurements
imposes strong constraints on any new physics extension beyond the Standard
Model. Supersymmetric models can always avoid such constraints
since supersymmetric corrections decouple in the ${\tilde m}\to\infty$ limit. Thus, supersymmetric models
look just like the Standard Model if the supersymmetric mass
scale is large enough. As long as the Standard Model with a light
Higgs boson provides a good fit to the data, supersymmetric models can as
well. On the other hand, large contributions from SUSY are possible when a
supersymmetric spectrum includes light superparticles.
There have been numerous attempts to identify
constraints on the MSSM parameters from precision
observables, both in the general framework of
MSSM \cite{cho, boer, AltarelliWX},
or in a specific SUSY breaking scenario, {\em e.g.},
mSUGRA\cite{boer, erler,ellis,Heinemeyer:2004gx,DjouadiYK, Belanger:2004ag},
or GMSB\cite{erler}.
The results from different groups differ slightly, depending on the choices of
the set of precision observables that are included in the fit, the order of
loop corrections, and the experimental values that are used.
We will review the general features of SUSY contributions to the
precision observables in this section, and leave the discussion of the
global fit in a particular SUSY framework to section~\ref{sec:sugra_gmsb}.
When either the supersymmetric scalars (squarks and sleptons) or
the supersymmetric fermions (charginos and neutralinos) are sufficiently
heavy, the radiative corrections to precision observables are
dominated by the universal gauge boson propagator corrections,
or the oblique corrections $S$, $T$ and $U$ [see Eqs.~(\ref{eq:stu-sirlin})]. The authors of Ref.~\cite{cho} systematically studied the MSSM contributions to the oblique parameters from four different sectors:
squarks, sleptons, neutralinos/charginos, and the Higgs sector.
They found that relatively light squarks/sleptons generally make the
fit to the electroweak data worse than the SM fit.
The squark sector contributes mainly in the positive $T$ direction.
The slepton sector contributes negatively to $S$, but $T$ remains
constant or slightly positive for large $\tan\beta$. Both tend
to be disfavored by data.
The contributions from
light charginos and neutralinos make both
$S$ and $T$ negative, which slightly improves the fit. The best
fit is obtained when the lightest chargino
mass is near its experimental lower bound. The
contributions from the MSSM Higgs sector are found to be small
when the light CP-even MSSM Higgs mass is taken to be the SM Higgs boson
mass.
When both the supersymmetric
scalars and fermions are light, additional
non-oblique (vertex, external leg, and box graph) corrections to all the $Zff$ vertices
become important. In addition to considering the non-oblique corrections to specific processes, one must also include the MSSM contributions to the muon decay amplitude since we express the coupling ${\hat g}^2$ in terms of $G_\mu$ (see Section \ref{sec:renorm}).
Moreover, for large $\tan\beta$, the bottom and tau Yukawa couplings are
large. In this case, the MSSM Higgs boson loops can appreciably affect the
$Zbb$, $Z\tau\tau$ and $Z\nu_{\tau}\nu_{\tau}$ vertices,
especially for small $m_A$. As a result, the overall fit to the precision
observables for large $\tan\beta$ is worse than in the SM.
On the other hand, the order $\alpha_s$ SUSY-QCD contributions to the $Zqq$
vertex via gluino-squark loops are found to be
negligibly small when the gluino and squark masses are
larger than about 200 GeV.
The electroweak contributions to $Zqq$ vertices due to light
squarks and light neutralinos/charginos are
insignificant when their masses are above the
current direct search limit.
When the left-handed sleptons and the neutralinos/charginos
are light, the $Zll$ vertices as well as the muon-decay
amplitude are affected significantly. The fit is improved
slightly when the left-handed slepton mass
is in the range 200--500 GeV \cite{cho}.
In contrast, the right-handed slepton
and squark masses are not constrained significantly, and hence
smaller masses are still allowed.
In summary, for a SUSY spectrum with light left-handed sleptons and
chargino/neutralinos,
the global fit in the MSSM has a lower $\chi^2$ value than in the SM.
Since the MSSM fit has fewer degrees of freedom than the SM fit\footnote{The increase in the number of parameters in the MSSM compared to the SM reduces the number of fitting degrees of freedom.},
the overall fit probability in the MSSM is very similar
to that in the SM.
\subsection{Global Analysis in mSUGRA and GMSB}
\label{sec:sugra_gmsb}
In a particular SUSY breaking scenario such as mSUGRA or GMSB, the
complete SUSY spectrum can be determined from the RGE running of only a
few parameters from the high energy scale down to the weak scale.
Due to the relatively small numbers of parameters in these scenarios,
a global fit to the precision observables
can be used to exclude certain regions of the parameter space if
the fit is significantly worse than the SM.
Ref.~\cite{erler} studied the global fit of precision observables (including
certain low energy measurements) in the framework of mSUGRA and
GMSB. It is shown that significant portions of the parameter spaces
of mSUGRA and GMSB are excluded. Requiring
$\chi^2_{\rm MSSM}-\chi^2_{\rm SM}< 3.84$,
one obtains a lower limit of $m_h \geq 78$ GeV on the mass of the light
CP-even Higgs. Also, the first
and second generation squark masses are constrained to be above
280(325) GeV in the mSUGRA(GMSB) model.
The global fit in mSUGRA performed in Ref.~\cite{boer}
[including $(g-2)_{\mu}$ and $b\rightarrow s \gamma $]
showed that at 95\% C.L.,
the value of $\tan\beta$ is constrained to be above 6.5,
while the value of the gaugino masses at the GUT scale has to be above
$\sim$ 220 GeV, which corresponds to a lower limit on the lightest
neutralino(chargino) of 95 (175) GeV.
Several analyses focusing
on the region of mSUGRA parameter space favored by the cold dark matter
relic density have been
performed \cite{ellis, Heinemeyer:2004gx, DjouadiYK,Belanger:2004ag}.
Ref.~\cite{ellis} includes all the high $Q^2$
precision observables, muon $(g-2)_{\mu}$ and
$b\rightarrow s \gamma$. The analysis shows a favored region for $\mu>0$
and small $m_0$ and $m_{1/2}$.
Ref.~\cite{Heinemeyer:2004gx}
focuses on the SUSY contributions to the $W$ boson mass $M_W$,
the effective leptonic weak mixing angle
$\sin^2\theta_{\rm eff}$, the anomalous magnetic moment of the
muon $(g-2)_{\mu}$, and $b\rightarrow s \gamma$. Higher order
loop corrections are included and both theoretical and experimental
errors are treated.
A fit to these precision quantities
in mSUGRA shows
a clear preference for relatively small values of $m_{1/2}$, with
a best-fit value of about 300 GeV for $\tan\beta=10$.
An upper bound of about 600 GeV on $m_{1/2}$ is obtained at 90\% C.L. We note that only the fits of Refs.~\cite{Heinemeyer:2004gx,Belanger:2004ag} take into account the most recent $(g-2)_{\mu}$ results (for the final report of the Brookhaven E821 experiment, see Ref.~\cite{Bennett:2006fi}).
\section{The Experimental Limit on the MSSM Neutral Higgses}
\label{sec:higgs}
As emphasized in Section~\ref{sec:cpv}, the Higgs
potential plays a crucial role in determining the viability of electroweak
baryogenesis. Both the shape of the potential and the mass of the lightest,
Standard Model-like Higgs boson are important in this respect. Here, we
review what is known about the neutral Higgs bosons in the MSSM and
highlight the various assumptions associated with the corresponding Higgs
mass limits. Importantly, some scenarios allow for experimental lower
bounds on $m_{h^0}$ that are weaker than the corresponding SM Higgs mass bound,
which forces the SM Higgs to be too heavy to accommodate a strong first order EWPT.
There have been extensive searches for Higgs bosons at LEP.
No signals have been found so far, and a lower limit
of 114.4 GeV has been set on the mass of the SM Higgs boson at
95\% C.L. \cite{SMHiggssearch}. In the MSSM, two Higgs doublets need
to be introduced and there are five physical Higgses in the
spectrum: three neutral ones and two charged ones. When CP is conserved
in the Higgs sector, the three neutral Higgses are CP eigenstates:
two CP-even ones $h^0$ and $H^0$ and one CP-odd one $A^0$. However,
there is no reason to exclude CP violation in the Higgs sector.
In particular, CP violation in MSSM could provide one of the ingredients
to explain the observed matter-antimatter asymmetry in the
Universe~\cite{Riotto:1999yt,Dine:2003ax}, as discussed above. In the CP violating scenario, the three neutral
Higgs mass eigenstates $H_1$, $H_2$ and $H_3$
are a mixture of the CP-even and the CP-odd states. The Higgs production and
decay might differ significantly from the CP-conserving scenario. Analyses
of the LEP Higgs searches have been performed in both
scenarios \cite{LEPMSSMneutral, OPAL, ALEPH, DELPHI, L3}. In this
section, we will briefly review the Higgs searches and the
exclusion bounds on the neutral Higgs masses and other parameters.
In the CP conserving scenario, the main production
processes of $h^0$, $H^0$ and $A^0$ at LEP are Higgsstrahlung
$e^+e^-\rightarrow h^0 Z$ and $e^+e^-\rightarrow H^0Z$ (if kinematically possible) and
pair production $e^+e^- \rightarrow h^0A^0$ and $e^+e^- \rightarrow H^0A^0$ (if kinematically possible).
These two processes are complementary since $\sigma_{h^0Z}$ is
governed by the $h^0ZZ$ coupling, which is proportional to $\sin(\beta-\alpha)$,
while $\sigma_{h^0A^0}$ is governed by the $Zh^0A^0$ coupling, which is
proportional to $\cos(\beta-\alpha)$. For the heavy Higgs $H^0$, the roles of
$\sin(\beta-\alpha)$ and $\cos(\beta-\alpha)$ are interchanged.
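The complementarity can be made explicit: at tree level the two relevant couplings obey the standard two-Higgs-doublet sum rule

```latex
g_{h^0ZZ}^2 + g_{Zh^0A^0}^2 \propto
  \sin^2(\beta-\alpha) + \cos^2(\beta-\alpha) = 1\,,
```

so whenever one production mode is suppressed, the other is near maximal (up to kinematic factors).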
The light neutral Higgs $h^0$, whose mass is typically below 140 GeV
\cite{Higgsmass}, decays dominantly into fermion pairs since its mass
is below the $WW$ and $ZZ$ thresholds. For particular
choices of parameters, however, the fermionic decays may be strongly suppressed,
as will be discussed below for the {\it large $\mu$},
{\it gluophobic} and {\it small $\alpha_{\rm eff}$} benchmark models \cite{benchmark}. For the
CP-odd state $A^0$, the decays are also dominantly into fermion pairs, since
its couplings to gauge boson pairs vanish at tree level. For not too small
$\tan\beta$, the Higgses decay into $b\bar{b}$ or $\tau^+\tau^-$, while
for $\tan\beta<1$, decays to $c\bar{c}$ might be important.
In the CP violating scenario, the mass eigenstates $H_i$, ($i=1,2,3)$ are
a mixture of CP eigenstates. Each of them can be produced by Higgsstrahlung
$e^+e^-\rightarrow H_iZ$ via the CP-even field components,
and also in pairs $e^+e^-\rightarrow H_i H_j (i\neq j)$. The relative
rates depend on the CP-even/odd mixing. Such mixing does not occur in the tree-level potential, but does appear at one-loop order. The degree of CP-mixing is proportional to the quantity
\begin{equation}
\frac{m_t^4}{v^2}\frac{{\rm Im}(\mu A)}{{\tilde m}^2}
\label{eq:CPmixing}
\end{equation}
that arises from stop loops.
Large CP violation in the Higgs sector is expected for small ${\tilde m}$
and large ${\rm Im}(\mu A)$. The CP violation effects are also very sensitive
to the precise value of the top quark mass, which is known only with an
experimental error of a few GeV.
The Higgs searches in the
CP-violating scenario are, in general, more challenging than in the CP-conserving
case: the production of the lightest Higgs $H_1$ is
reduced due to its suppressed coupling to the $Z$, while the production of the
heavier Higgses is suppressed or forbidden by kinematics. The decays
of the Higgses in the CP-violating scenario are very similar to the
CP-conserving case discussed above.
Higgs searches have been performed at LEP up to the highest LEP energy of
209 GeV, carried out by the four LEP collaborations \cite{OPAL, ALEPH, DELPHI, L3}.
The searches include the Higgsstrahlung and pair production processes,
which together cover the accessible MSSM parameter space due to their
complementarity.
For the Higgsstrahlung process, the principal signal topologies are Higgs
decays to fermion pairs $b\bar{b}$, $\tau^+\tau^-$ or flavor independent
$q\bar{q}$, while the $Z$ decays into a pair of jets ($q\bar{q}$),
$\nu\bar\nu$ (associated with missing energy), or lepton pairs $e^+e^-$, $\mu^+\mu^-$, $\tau^+\tau^-$. The
reconstruction of the $Z$ mass offers a discrimination of the signal
over the background. Searches including Higgs cascade decay
$e^+e^-\rightarrow H_2 Z \rightarrow (H_1 H_1) Z$ have also been performed,
which might play an important role when this decay mode is open.
For the Higgs pair production process, $b$-pair and $\tau$-pair final states
have been studied when the Higgs masses are above the $\tau^+\tau^-$ threshold.
When the $b\bar{b}$ decay mode of the Higgs is suppressed in certain
regions of parameter space, flavor-independent searches are used as
a supplement or replacement.
The combined LEP data show no significant signal for Higgs boson
production \cite{LEPMSSMneutral}. Therefore, the search results are used to set an upper limit
on the Higgs production cross section, and they are interpreted
in a set of representative MSSM ``benchmark'' models \cite{benchmark}.
For the CP-conserving scenario, the {\it $m_h$-max} benchmark \cite{benchmark} is defined by
setting the stop mixing parameter to a large value,
$X_t=A_t-\mu \cot\beta=2 M_{\rm SUSY}$. This model is designed to maximize
the theoretical upper bound on $m_h^0$ for a
given $\tan\beta$, thereby providing the largest parameter space
and therefore the most conservative exclusion limits among all
the CP-conserving scenarios studied.
Fig.~\ref{fig:mhmax} \cite{LEPMSSMneutral}
shows the excluded region in $(m_h^0,m_A^0)$ (left plot)
and in $(m_h^0, \tan\beta)$ (right plot).
For $\tan\beta<5$, the 95\% C.L.
exclusion bound on the Higgs mass is about 114 GeV, provided by the
Higgsstrahlung process, while for higher values of $\tan\beta$,
the pair production process dominates and the bound is about 93 GeV
for both $m_h^0$ and $m_A^0$. A region of $\tan\beta$ between
0.5 and 3 is also excluded, which, however, depends on the
precise value of the top quark mass. The excluded region shrinks
for larger $m_t$, and no limit can be set on $\tan\beta$ for $m_t>183$ GeV.
\begin{figure}
\includegraphics*[width=3 in]{fig7a.eps}
\includegraphics*[width=3 in]{fig7b.eps}
\caption{The MSSM exclusions, at 95\% C.L. (light-green) and 99.7\% C.L. (dark-green),
for the {\it $m_h$-max} benchmark scenario, with $m_t$=174.3 GeV. The theoretically
inaccessible regions are shown in yellow. The dashed lines indicate the boundaries of the
regions expected to be excluded on the basis of the Monte Carlo simulations with no
signal. In the right plot, the upper edge of the parameter space is indicated for various
top quark masses; from left to right: $m_t=$ 169.3, 174.3, 179.3 and 183.0 GeV.
The figures are reproduced from Ref.~\cite{LEPMSSMneutral} with kind permission of
Springer Science and Business Media.}
\label{fig:mhmax}
\end{figure}
For the {\it no-mixing} benchmark \cite{benchmark}, the stop mixing parameter $X_t$ is set to
zero, thereby minimizing the stop contribution to the Higgs mass.
The theoretical bounds of the parameter space are more restrictive than
in the {\it $m_h$-max} case, although the experimental bounds are similar.
It is worth mentioning that a small domain at $m_h^0 \approx 80$ GeV,
$m_A^0 < 3$ GeV and $\tan\beta < 0.7$ is still allowed. This domain is not
covered by the current searches since the branching ratio of
$h^0\rightarrow b \bar{b}$ is suppressed while $A^0\rightarrow \tau^+\tau^-$ is
not kinematically allowed.
In the {\it large-$\mu$} scenario \cite{benchmark}, the experimental detection
is {\it a priori} challenging due to the suppressed decays
$h^0\rightarrow b\bar{b}$ and
$h^0\rightarrow\tau^+\tau^-$. The dominant decay modes are
$h^0\rightarrow c\bar{c}$, $gg$ and $W^+W^-$. The flavor- and
decay-mode-independent searches are used instead, which
exclude almost all of the accessible MSSM parameter space.
The {\it gluophobic} scenario \cite{benchmark} is constructed so that the Higgs-gluon-gluon
coupling is suppressed, leading to reduced Higgs production by gluon fusion
at the LHC. The {\it small $\alpha_{eff}$} scenario refers to the case when
$h^0\rightarrow b \bar{b}$ and $\tau^+\tau^-$ are suppressed, since the
corresponding couplings are proportional to $\alpha_{eff}$.
Note that $\alpha_{eff}$ is the effective mixing angle of the neutral CP-even Higgs
sector (defined in Eq.~(\ref{eq:alpha}) in Sec.~\ref{sec:susy})
including radiative corrections.
Both scenarios
were devised to test situations that might be problematic at the LHC. In both cases
large regions of parameter space are excluded by the LEP searches.
The parameters of the CP-violating benchmark \cite{CPXbenchmark} have been chosen to maximize
the difference with respect to the CP-conserving scenario:
${\tilde m}=500$ GeV, $\mu=2000$ GeV and ${\rm arg}(\mu A)=90^\circ$.
Fig.~\ref{fig:mhcp} \cite{LEPMSSMneutral} shows the excluded region in $(m_{H_1},m_{H_2})$ (left plot)
and in $(m_{H_1}, \tan\beta)$ (right plot).
For large $m_{H_2}$, $H_1$ is almost completely CP-even and the
95\% C.L. exclusion bound on the $H_1$ mass is about 113 GeV. For lighter
$H_2$, with $m_{H_2}<130$ GeV, $H_1$ has a large CP-odd admixture, leading to
an unexcluded domain. For $\tan\beta$ between
about 3.5 and 10, the exclusion is particularly weak.
Nonetheless, the region of $m_{H_1}<114$ GeV and $\tan\beta<3.0$ is
excluded by the data.
Furthermore,
at 95\% C.L. $\tan\beta<2.6$ is excluded for all values of Higgs masses.
\begin{figure}
\includegraphics*[width=3 in]{fig16a.eps}
\includegraphics*[width=3 in]{fig16b.eps}
\caption{Exclusions, at 95\% C.L. (light-green) and 99.7\% C.L. (dark-green),
for the CP-violating scenario, with $m_t$=174.3 GeV. The theoretically
inaccessible regions are shown in yellow. The dashed lines indicate the boundaries of the
regions expected to be excluded, at the 95\% C.L.,
on the basis of the Monte Carlo simulations with no
signal.
The figures are reproduced from Ref.~\cite{LEPMSSMneutral} with kind permission of
Springer Science and Business Media.}
\label{fig:mhcp}
\end{figure}
The exclusion region for the Higgs mass,
however, depends strongly on the top quark mass.
For the limits discussed above, $m_t=174.3$ GeV was used. The exclusion
power is reduced for larger top quark mass, especially in the
region of $\tan\beta$ between 4 and 10. The bound on $\tan\beta$ quoted
above, however, is barely sensitive to the precise value of $m_t$.
The exclusion region also depends on the CP-violating phase, ${\rm arg}(\mu A)$,
the $\mu$ parameter, and ${\tilde m}$. The exclusion region is somewhat
larger for values that deviate from the benchmark parameters.
\section{Conclusions and Outlook}
\label{sec:conclude}
Precision measurements of electroweak observables played an important role in developing and testing the Standard Model, and they will undoubtedly be a crucial tool in determining the larger framework in which the SM lies. If that framework includes low-energy supersymmetry, then one would expect a rich array of effects to be discernible in precision measurements carried out at both high and low energies. In this review, we have concentrated on the low-energy domain, where the \lq\lq precision frontier" will lie at least until the era of a future $e^+e^-$ linear collider. We hope to have demonstrated that through precise measurements of both SM observables as well as those forbidden or highly-suppressed in the SM, studies of these low-energy observables will offer important information about SUSY that can complement what we may learn from the Large Hadron Collider.
We also hope to have illustrated the opportunities and challenges in this field. Experimentally, recent advances have made the prospects for carrying out $\sim 0.1\%$ measurements of SM electroweak observables -- as needed to probe SUSY -- quite realistic, and a number of efforts are underway with such precision as a goal. Recent theoretical developments have also made it possible to interpret measurements at this level in terms of SUSY, as a number of strong interaction uncertainties have been circumvented or reduced. In both cases, going beyond the $\sim 0.1\%$ precision level for SM observables represents the next horizon, one that both experimentalists and theorists are beginning to approach. At the same time, the prospects for performing measurements of rare and forbidden observables, such as electric dipole moments and lepton flavor violating effects, have improved dramatically, with several orders of magnitude increases in sensitivity now within reach. As we hope to have shown, the \lq\lq physics reach" of such experiments can match and even exceed what will be achievable at both the LHC and a linear collider, assuring that they will remain important avenues of study well into the future collider era.
Finally, we emphasize that though we have concentrated here on the Minimal Supersymmetric Standard Model, it is by no means assured that the MSSM (together with the conventional assumptions about SUSY-breaking mediation) is the right low-energy manifestation of SUSY. There remains a wealth of possible variations and a correspondingly rich field of low-energy, precision electroweak phenomenology to be explored. This opportunity, and the attendant experimental and theoretical challenges, will surely keep particle, nuclear, and atomic physicists busy for many years.
\begin{acknowledgments}
We would like to thank the following people for discussions and interactions
while finishing this paper:
J. Erler, P. Langacker, W. Marciano, V. Cirigliano, D. Hertzog, D. Bryman,
J. Hardy, D. Pocanic, G. Savard, P. Reimer, X. Zheng, P. Souder, K. Kumar,
R. Holt, Z. Lu, B. Filippone, G. Greene, M. Pospelov, M. Wise, D. Mack, C.
Lee (also for assistance with figures), S. Profumo (also for assistance with figures), S. Martin,
S. Tulin, P. Vogel, R. McKeown, C. Wagner, M. Carena, P. Herczeg, W.-F. Chang, R. Carlini.
M.J. R.-M. thanks the Institute for Nuclear Theory, U. Pennsylvania, U.
Arizona, and Los Alamos National Laboratory for hospitality during the
completion of this work.
S. Su thanks California Institute of Technology for hospitality during the
completion of this work.
MRM is supported under
Department of Energy Contract \# DE-FG02-05ER41361 and NSF Award
PHY-0555674. SS is
supported under U.S. Department of Energy
contract \# DE-FG02-04ER-41298.
\end{acknowledgments}
check-netapp-volume is a Python 2.7 / Nagios plugin that allows you to check volume space against defined thresholds.
It uses an SSH connection to connect to the filer, gets the entire volume list, parses it and outputs the result.
<br><br>
The plugin excludes the snap reserve from the output.
<br><br>
Example of a run:
check_netapp-2.7.py -H BIGFiler -U supervision -P superpassword -I vol1,vol3 -W 70 -C 95
'-I' allows you to ignore volumes from the check (a volume name or partial volume name, not a regexp).
Example :
If you want to exclude a volume named 'DataDBOracle', you can use
-I Data
or
-I DBOra
-I is case sensitive.
###Why use SSH instead of SNMP?
When the Netapp filer is under heavy load, browsing the MIB from an SNMP client does not work very well, because many requests to the filer are needed to get the stats. The plugin then reports an UNKNOWN state because of timeouts.
With SSH, we only need one connection and one remote command execution to get the volume list (volume size and usage).
Calculation and reporting are done by the script.
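To illustrate the idea, here is a rough sketch (not the plugin's actual code) of the kind of calculation involved: parsing a simplified 'df'-style listing and mapping usage against the warning/critical thresholds. The column layout and volume names are assumptions for illustration only; the real NetApp 'df' output differs.

```python
# Sketch only: assumes a simplified "df"-like output of
# "<volume> <total-kB> <used-kB>" per line. Not the plugin's parser.

def check_volumes(df_output, warn, crit, ignore=()):
    """Return (nagios_state, per-volume usage strings)."""
    state = "OK"
    messages = []
    for line in df_output.strip().splitlines():
        name, total_kb, used_kb = line.split()
        # Skip snapshot-reserve lines and any ignored (partial) names
        if ".snapshot" in name or any(pat in name for pat in ignore):
            continue
        pct = 100.0 * int(used_kb) / int(total_kb)
        if pct >= crit:
            state = "CRITICAL"
        elif pct >= warn and state == "OK":
            state = "WARNING"
        messages.append("%s=%.1f%%" % (name, pct))
    return state, messages

sample = """\
/vol/vol1 1000000 500000
/vol/vol1/.snapshot 50000 10000
/vol/vol3 1000000 960000
"""
print(check_volumes(sample, warn=70, crit=95))
```

As in the plugin, 'ignore' patterns match volume names partially and case-sensitively.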
##Todo before use
#####On the Netapp host
We need an SSH account to connect to the filer, with minimal privileges (for security reasons).
This can be done by creating a Netapp role (associated with the 'df' command), a group and a user.
Example :
filer# useradmin role add only_ssh -a login-ssh
filer# useradmin role add df_command -a cli-df*
filer# useradmin group add ssh_supervision -r only_ssh,df_command
filer# useradmin user add supervision -g ssh_supervision
Now you can use 'supervision' as the SSH user.
<br>
#####On the Nagios host
check-netapp-volume is a Python script that uses a very limited number of modules.
Before running the script, you have to install the 'fabric' module with pip:
pip install fabric
If you use a Python version older than 2.7.9, you must install pip manually:
https://pip.pypa.io/en/stable/installing/
<br>
#####On the script header (optional)
There are a couple of variables that you can define (the local directory used to store the SSH stdout, and the file name).
See the script help for more details.
<br>
#####Network issue
Ensure that firewalls and other network security tools are open between the Nagios host and the filers.
Q: Combine PHP arrays with matching values
I have a set of PHP arrays
$arrayOne = array(
    0 => "new",
    1 => "old",
    2 => "fresh",
    3 => "new",
    4 => "old",
    5 => "fresh",
    6 => "new",
    7 => "old",
    8 => "fresh",
);
$arrayTwo = array(
    0 => "yellow",
    1 => "green",
    2 => "red",
    3 => "blue",
    4 => "grey",
    5 => "orange",
    6 => "purple",
    7 => "pink",
    8 => "brown",
);
$arrayThree = array(
    0 => "Monday",
    1 => "Tuesday",
    2 => "Wednesday",
    3 => "Thursday",
    4 => "Friday",
    5 => "Saturday",
    6 => "Sunday",
    7 => "Monday2",
    8 => "Monday3",
);
These array's are being looped though and placed in a table
for ($index = 0; $index < count($arrayOne); $index++) {
    $returnVariable .= '<td>'.$arrayOne[$index].'</td>';
    $returnVariable .= '<td>'.$arrayTwo[$index].'</td>';
    $returnVariable .= '<td>'.$arrayThree[$index].'</td>';
}
When returned and displayed on the page, the table works just as intended, with everything matched the way it is supposed to be
new yellow monday
old green tuesday
fresh red wednesday
etc, etc. I would like to group the first column so that it lists all the 'new', then all the 'old', then all the 'fresh', while keeping the intended matching, e.g.,
new yellow monday
new blue thursday
new purple sunday
old green tuesday
old grey friday
old pink Monday2
etc etc
A: First, join the three arrays into one. Then, sort the new array by the first value (new first, then old, then fresh):
<?php
$arrayOne = [
0 => "new",
1 => "old",
2 => "fresh",
3 => "new",
4 => "old",
5 => "fresh",
6 => "new",
7 => "old",
8 => "fresh",
];
$arrayTwo = [
0 => "yellow",
1 => "green",
2 => "red",
3 => "blue",
4 => "grey",
5 => "orange",
6 => "purple",
7=> "pink",
8 => "brown"
];
$arrayThree = [
0 => "Monday",
1 => "Tuesday",
2 => "Wednesday",
3 => "Thursday",
4 => "Friday",
5 => "Saturday",
6 => "Sunday",
7 => "Monday2",
8 => "Monday3",
];
echo "<pre>";
for ($i = 0; $i < count($arrayOne); $i++) {
$array[] = [
$arrayOne[$i],
$arrayTwo[$i],
$arrayThree[$i],
];
}
$values = [ // give these strings a numeric value to compare them
"new" => 0,
"old" => 1,
"fresh" => 2,
];
usort($array, function($a, $b) use ($values) {
return $values[$a[0]] - $values[$b[0]];
});
A: Create new array to hold all then sort it and implode it into a string
<?php
$arrayOne = array(
0 => "new",
1 => "old",
2 => "fresh",
3 => "new",
4 => "old",
5 => "fresh",
6 => "new",
7 => "old",
8 => "fresh",
);
$arrayTwo = array(
0 => "yellow",
1 => "green",
2 => "red",
3 => "blue",
4 => "grey",
5 => "orange",
6 => "purple",
7=> "pink",
8 => "brown"
);
$arrayThree =array(
0 => "Monday",
1 => "Tuesday",
2 => "Wednesday",
3 => "Thursday",
4 => "Friday",
5 => "Saturday",
6 => "Sunday",
7 => "Monday2",
8 => "Monday3"
);
$returnVariable=array();
for($index = 0; $index<count($arrayOne); $index++){
$returnVariable[$index][0]= '<td>'.$arrayOne[$index].'</td>';
$returnVariable[$index][1]= '<td>'.$arrayTwo[$index].'</td>';
$returnVariable[$index][2]= '<td>'.$arrayThree[$index].'</td>';
}
sort($returnVariable);
echo "<table>";
for ($i=0; $i<count($returnVariable); $i++) {
if (@is_array($returnVariable[$i]))
$returnVariable[$i] = implode($returnVariable[$i]," ");
echo "<tr>";
print $returnVariable[$i];
echo "</tr>";
}
echo "</table>";
A: Straight-forward solution:
$arrayOne = array(0 => "new",1 => "old",2 => "fresh",3 => "new",4 => "old",5 => "fresh",6 => "new",7 => "old",8 => "fresh",);
$arrayTwo = array(0 => "yellow",1 => "green",2 => "red",3 => "blue",4 => "grey",5 => "orange",6 => "purple",7=> "pink",8 => "brown");
$arrayThree = array(0 => "Monday",1 => "Tuesday",2 => "Wednesday",3 => "Thursday",4 => "Friday",5 => "Saturday",6 => "Sunday",7 => "Monday2",8 => "Monday3");
$result = [];
foreach($arrayOne as $k => $v){
$result[$v][] = "<tr><td>$v</td><td>{$arrayTwo[$k]}</td><td>{$arrayThree[$k]}</td></tr>";
}
echo '<table>';
foreach(['new', 'old', 'fresh'] as $k){
echo implode("", $result[$k]);
}
echo '</table>';
The output:
<table border="1"><tr><td>new</td><td>yellow</td><td>Monday</td></tr><tr><td>new</td><td>blue</td><td>Thursday</td></tr><tr><td>new</td><td>purple</td><td>Sunday</td></tr><tr><td>old</td><td>green</td><td>Tuesday</td></tr><tr><td>old</td><td>grey</td><td>Friday</td></tr><tr><td>old</td><td>pink</td><td>Monday2</td></tr><tr><td>fresh</td><td>red</td><td>Wednesday</td></tr><tr><td>fresh</td><td>orange</td><td>Saturday</td></tr><tr><td>fresh</td><td>brown</td><td>Monday3</td></tr></table>
A: Another way to do it is
$row ="";
$temp = array();
foreach($arrayOne as $key => $value){
$temp[$value][] = $key;
}
foreach($temp as $value){
foreach($value as $value2){
$row .= ' '.$arrayOne[$value2].'';
$row .= ' '.$arrayTwo[$value2].'';
$row .= ' '.$arrayThree[$value2]."\n";
}
}
echo $row;
Live demo : https://eval.in/850007
Published: April 8, 2015, 5:01 pm
Tags: Florida, Neighborhood Crime, News, Local
Man arrested after stealing boat from Stock Island marina, deputies say
Yasmani Chamizo-Betancourt, 24, charged with criminal mischief, grand theft
STOCK ISLAND, Fla. – A man from Cape Coral was arrested Tuesday after he stole a boat from Sunset Water Sports on Stock Island, deputies said.
Yasmani Chamizo-Betancourt, 24, is charged with criminal mischief and grand theft.
Detectives responded to the Hurricane Hole Marina in the morning after Florida Fish and Wildlife officials found the boat abandoned on the west side of Key West. They said the boat was grounded on flats.
Employees of Sunset Water Sports hadn't discovered that the boat was missing until the 21-foot center console vessel was found by authorities.
Deputies said surveillance video from the marina showed Chamizo-Betancourt riding a bike toward the marina just after 6 a.m. They said he walked onto the boat, whose keys were left on board. Deputies said he left the marina on the boat.
A witness who lives where the boat ran aground told deputies that he saw the suspect trying to get the boat off the flats. He said he saw Chamizo-Betancourt swim to an abandoned sailboat nearby and climb aboard.
Authorities found Chamizo-Betancourt on the boat, and said he claimed to have been dropped off on the sailboat by a friend.
He was arrested and taken to jail.
On my way into work, I was stuck in rush hour traffic. Normally I don't really care as I listen to my sports news on my way into work. It passes the time very nicely. I am not one to get upset being in rush hour traffic. My blood pressure is normal and I intend to keep it that way.
This morning was a bit different. The traffic was backed up a bit more than usual, causing me to have a sudden panic attack. I wanted out of the traffic right away. My wife was soundly asleep in the passenger seat. During my daily excursions into this metal chaos, I'll steal a few seconds to look at her and smile. She is a few months pregnant and that always makes me smile. I quickly remind myself that I am lucky that I can drive her to work so that she doesn't have to deal with feeling crappy while in transit.
This morning however, something was bugging me. I have a baby on the way in December and I am starting to feel the aggravation of being a parent. Am I a good person? Will I be a good Dad? I know the answer is yes to both questions and many others that pop into my head.
Paulina Gretzky, along with other kids of rich parents, is very lucky in my eyes. They are afforded the spoils of life that most of society can only dream of. I am looking at this from a 10,000 foot view of course. I am not saying that she shouldn't enjoy life. She deserves it. Her blood line has worked for it; they must reap what was sowed.
All I am thinking is that I wish I could do that for my future kid. I want to be Wayne Gretzky.
There are equally successful people who have come from nothing to be something. I just want to make sure my kid doesn't have to endure any hardships they don't need to face. I suppose all parents wish this. I have to remember that Walter Gretzky wasn't born with a silver spoon in his mouth.
I look at Wayne and his success and I attribute that success to the work that his Dad and Mom did when he was younger. Wayne would be a shadow of himself if his parents didn't have the work ethic and most importantly, parenting ethic to allow Wayne to be what he is today. I am almost positive they did the same thing with the rest of Wayne's siblings. Wayne reaped the fame & financial rewards while the others have reaped the rewards of being raised in a good home. I am making all these judgement calls based on zero knowledge of how the Gretzkys were raised. From an outside perspective, they look like they have done alright. I felt good to know that I am ready and prepared to do what I need to do for my child. Right now I want to be Walter Gretzky.
I turned the corner off Lakeshore onto Yonge street. I had forgotten for a few minutes where I was. The aura of Walter Gretzky's family life relaxed me, and I fell into my normal driving meditation state. I stole another look at my wife. She is rubbing her belly and slowly awakening from her nap. I smiled again, the worries of the world are lost. In my head, I say a little prayer and say good morning to my kid. I know he/she can't "hear" me, but I am sure he/she is listening. This child chose us as his/her vehicle to enter the world. That alone is rich enough. At this moment, I am Walter Gretzky.
#ifndef SRC_BGP_ROUTING_INSTANCE_ISERVICE_CHAIN_MGR_H_
#define SRC_BGP_ROUTING_INSTANCE_ISERVICE_CHAIN_MGR_H_
#include <stddef.h>
#include <stdint.h>
#include <string>
class RoutingInstance;
class ServiceChainConfig;
class ServiceChainGroup;
class ShowServicechainInfo;
class IServiceChainMgr {
public:
virtual ~IServiceChainMgr() { }
virtual void ManagedDelete() = 0;
virtual void StopServiceChain(RoutingInstance *rtinstance) = 0;
virtual bool LocateServiceChain(RoutingInstance *rtinstance,
const ServiceChainConfig &config) = 0;
virtual void UpdateServiceChain(RoutingInstance *rtinstance,
bool group_oper_state_up) = 0;
virtual void UpdateServiceChainGroup(ServiceChainGroup *group) = 0;
virtual bool ServiceChainIsUp(RoutingInstance *rtinstance) const = 0;
virtual size_t PendingQueueSize() const = 0;
virtual size_t ResolvedQueueSize() const = 0;
virtual uint32_t GetDownServiceChainCount() const = 0;
virtual bool IsQueueEmpty() const = 0;
virtual bool FillServiceChainInfo(RoutingInstance *rtinstance,
ShowServicechainInfo *info) const = 0;
virtual bool ServiceChainIsPending(RoutingInstance *rtinstance,
std::string *reason = NULL) const = 0;
private:
template <typename U> friend class ServiceChainIntegrationTest;
template <typename U> friend class ServiceChainTest;
virtual ServiceChainGroup *FindServiceChainGroup(
RoutingInstance *rtinstance) = 0;
virtual ServiceChainGroup *FindServiceChainGroup(
const std::string &group_name) = 0;
virtual void set_aggregate_host_route(bool value) = 0;
virtual void DisableResolveTrigger() = 0;
virtual void EnableResolveTrigger() = 0;
virtual void DisableGroupTrigger() = 0;
virtual void EnableGroupTrigger() = 0;
virtual void DisableQueue() = 0;
virtual void EnableQueue() = 0;
};
#endif // SRC_BGP_ROUTING_INSTANCE_ISERVICE_CHAIN_MGR_H_
199 member nations of FIFA's six confederations entered for the 29 qualification places. The places were distributed among the continents according to the strength of their teams, as follows:
Europe (UEFA): 50 entrants for 14.5 places (France, as defending champion, qualified for the World Cup automatically)
Africa (CAF): 51 entrants for 5 places
South America (CONMEBOL): 10 entrants for 4 or 5 places
Asia (AFC): 40 entrants for 2 or 3 places (South Korea and Japan, as hosts, qualified for the World Cup automatically)
North and Central America and the Caribbean (CONCACAF): 35 entrants for 3 places
Australia and Oceania (OFC): 10 entrants for 0 or 1 place.
In total, 193 teams played at least one qualifying match. Altogether 777 matches were played, producing 2,452 goals (an average of 3.16 goals per match).
Participating teams
Qualification groups
Europe (UEFA)
Qualified for the 2002 FIFA World Cup as group winners
, , , , , , , and .
Qualified for the 2002 FIFA World Cup via the play-offs
, , and .
Qualified for the 2002 FIFA World Cup via the intercontinental play-off
South America (CONMEBOL)
Qualified for the World Cup after the group stage
(as group winner), , and
Qualified for the 2002 FIFA World Cup via the intercontinental play-off
Africa (CAF)
Qualified for the World Cup as group winners
, , , and
Australia and Oceania (OFC)
Australia lost the intercontinental play-off and therefore did not qualify for the World Cup.
Asia (AFC)
Qualified for the World Cup as group winners
and
Iran and the United Arab Emirates, the runners-up of their groups, played a play-off against each other to decide who would take part in the UEFA vs AFC intercontinental tie. Iran won the two-legged tie (4–0), but was then eliminated by Ireland in the intercontinental play-off.
North and Central America and the Caribbean (CONCACAF)
Qualified for the World Cup after the group stage
(as group winner), and
Intercontinental play-offs
The winners of these ties qualified for the 2002 FIFA World Cup:
and .
Notes
External links
The official FIFA website
2002
Qualification
\section{Introduction}\label{sec:intro}
This paper holds a similarity optimization view towards two elemental deep feature learning paradigms, \emph{i.e.}, learning from data with class-level labels and from data with pair-wise labels. The former employs a classification loss function (\emph{e.g.}, softmax cross-entropy loss~\cite{sun2014deep,Liu2016LargeMarginSL,wen2016discriminative}) to optimize the similarity between samples and weight vectors. The latter leverages a metric loss function (\emph{e.g.}, triplet loss~\cite{hoffer2015deep,schroff2015facenet}) to optimize the similarity between samples.
In our interpretation, there is no intrinsic difference between these two learning approaches. They both seek to minimize between-class similarity $s_n$, as well as to maximize within-class similarity $s_p$.
From this viewpoint, we find that many popular loss functions (\emph{e.g.}, triplet loss~\cite{hoffer2015deep,schroff2015facenet}, softmax cross-entropy loss and its variants~\cite{sun2014deep,Liu2016LargeMarginSL,wen2016discriminative,wang2018additive,Wang_2018_CVPR,deng2019arcface}) share a similar optimization pattern. They all embed $s_n$ and $s_p$ into similarity pairs and seek to reduce $(s_n-s_p)$. In $(s_n-s_p)$, increasing $s_p$ is equivalent to reducing $s_n$. We argue that this symmetric optimization manner is prone to the following two problems.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\linewidth]{figures/fig_intro.pdf}
\caption{Comparison between the popular optimization manner of reducing $(s_n-s_p)$ and the proposed optimization manner of reducing $(\alpha_n s_n -\alpha_p s_p)$. (a) Reducing $(s_n-s_p)$ is prone to inflexible optimization ($A$, $B$ and $C$ all have equal gradients with respect to $s_n$ and $s_p$), as well as ambiguous convergence status (both $T$ and $T'$ on the decision boundary are acceptable). (b) With $(\alpha_n s_n -\alpha_p s_p)$, the Circle loss dynamically adjusts its gradients on $s_p$ and $s_n$, and thus benefits from a flexible optimization process. For $A$, it emphasizes on increasing $s_p$; for $B$, it emphasizes on reducing $s_n$. Moreover, it favors a specified point $T$ on the circular decision boundary for convergence, setting up a definite convergence target.}
\vspace{-4mm}
\label{fig:intro}
\end{figure}
$\bullet$ \textbf{Lack of flexibility for optimization.} The penalty strength on $s_n$ and $s_p$ is restricted to be equal. Given the specified loss functions, the gradients with respect to $s_n$ and $s_p$ are of same amplitudes (as detailed in Section~\ref{sec:revisit}). In some corner cases, \emph{e.g.}, $s_p$ is small and $s_n$ already approaches 0 (``$A$'' in Fig.~\ref{fig:intro} (a)), it keeps on penalizing $s_n$ with a large gradient. It is inefficient and irrational.
$\bullet$ \textbf{Ambiguous convergence status.}
Optimizing $(s_n-s_p)$ usually leads to a decision boundary of $s_p-s_n=m$ ($m$ is the margin). This decision boundary allows ambiguity (\emph{e.g.}, ``$T$'' and ``$T'$'' in Fig.~\ref{fig:intro} (a)) for convergence. For example, $T$ has $\{s_n,s_p\}=\{0.2,0.5\}$ and $T'$ has $\{s_n',s_p'\}=\{0.4,0.7\}$. They both obtain the margin $m=0.3$. However, comparing them against each other, we find the gap between $s_n'$ and $s_p$ is only $0.1$. Consequently, the ambiguous convergence compromises the separability of the feature space.
With these insights, we reach an intuition that different similarity scores should have different penalty strength. If a similarity score deviates far from the optimum, it should receive a strong penalty. Otherwise, if a similarity score already approaches the optimum, it should be optimized mildly. To this end, we first generalize $(s_n-s_p)$ into $(\alpha_n s_n - \alpha_p s_p)$, where $\alpha_n$ and $\alpha_p$ are independent weighting factors, allowing $s_n$ and $s_p$ to learn at different paces.
We then implement $\alpha_n$ and $\alpha_p$ as linear functions \wrt $s_n$ and $s_p$ respectively, to make the learning pace adaptive to the optimization status: The farther a similarity score deviates from the optimum, the larger the weighting factor will be. Such optimization results in the decision boundary $\alpha_n s_n - \alpha_p s_p = m$, yielding a circle shape in the $(s_n,s_p)$ space, so we name the proposed loss function \emph{Circle loss}.
Being simple, Circle loss intrinsically reshapes the characteristics of the deep feature learning from the following three aspects:
\textbf{First, a unified loss function}. From the unified similarity pair optimization perspective, we propose a unified loss function for two elemental learning paradigms, \emph{learning with class-level labels and with pair-wise labels.}
\textbf{Second, flexible optimization}. During training, the gradient back-propagated to $s_n$ ($s_p$) will be amplified by $\alpha_n$ ($\alpha_p$). Those less-optimized similarity scores will have larger weighting factors and consequentially get larger gradients. As shown in Fig.~\ref{fig:intro} (b), the optimization on $A$, $B$ and $C$ are different to each other.
\textbf{Third, definite convergence status.}
On the circular decision boundary, Circle loss favors a specified convergence status (``$T$'' in Fig.~\ref{fig:intro} (b)), as to be demonstrated in Section~\ref{sec:method_character}. Correspondingly, it sets up a definite optimization target and benefits the separability.
The main contributions of this paper are summarized as follows:
\begin {itemize}
\item We propose Circle loss, a simple loss function for deep feature learning. By re-weighting each similarity score under supervision, Circle loss benefits the deep feature learning with flexible optimization and definite convergence target.
\item We present Circle loss with compatibility to both class-level labels and pair-wise labels. Circle loss degenerates to triplet loss or softmax cross-entropy loss with slight modifications.
\item We conduct extensive experiments on a variety of deep feature learning tasks, \emph{e.g.} face recognition, person re-identification, car image retrieval and so on. On all these tasks, we demonstrate the superiority of Circle loss with performance on par with the state of the art.
\end {itemize}
\section{A Unified Perspective}\label{sec:revisit}
Deep feature learning aims to maximize the within-class similarity $s_p$, as well as to minimize the between-class similarity $s_n$. Under the cosine similarity metric, for example, we expect $s_p\rightarrow1$ and $s_n\rightarrow0$.
To this end, \textbf{learning with class-level labels} and \textbf{learning with pair-wise labels} are two elemental paradigms. They are conventionally considered separately and significantly differ from each other \emph{w.r.t} to the loss functions. Given class-level labels, the first one basically learns to classify each training sample to its target class with a classification loss, \textit{e.g.} L2-Softmax~\cite{ranjan2017l2}, Large-margin Softmax~\cite{liu2017sphereface}, Angular Softmax~\cite{Liu2016LargeMarginSL}, NormFace~\cite{wang2017normface}, AM-Softmax~\cite{wang2018additive}, CosFace~\cite{Wang_2018_CVPR}, ArcFace~\cite{deng2019arcface}. These methods are also known as proxy-based learning, as they optimize the similarity between samples and a set of proxies representing each class.
In contrast, given pair-wise labels, the second one directly learns pair-wise similarity (\emph{i.e.}, the similarity between samples) in the feature space and thus requires no proxies, \emph{e.g.}, contrastive loss~\cite{hadsell2006dimensionality,chopra2005learning}, triplet loss~\cite{hoffer2015deep, schroff2015facenet}, Lifted-Structure loss~\cite{oh2016deep}, N-pair loss~\cite{Sohn2016ImprovedDM}, Histogram loss~\cite{Ustinova2016LearningDE}, Angular loss~\cite{Wang2017DeepML}, Margin based loss~\cite{wu2017sampling}, Multi-Similarity loss~\cite{wang2019multi} and so on.
This paper views both learning approaches from a unified perspective, with no preference for either proxy-based or pair-wise similarity.
Given a single sample $x$ in the feature space, let us assume that there are $K$ within-class similarity scores and $L$ between-class similarity scores associated with $x$. We denote these similarity scores as $\{s_p^i\}\, (i=1,2,\cdots,K)$ and $\{s_n^j\}\,(j=1,2,\cdots,L)$, respectively.
To minimize each $s_n^j$ as well as to maximize $s_p^i$, $(\forall i\in \{1,2,\cdots,K\}, \, \forall j \in \{1,2,\cdots,L\})$, we propose a unified loss function by:
\begin{equation}\label{eq:proto}
\footnotesize{
\begin{aligned}
\mathcal{L}_{uni}&=\log\Big[1+\sum_{i=1}^K\sum_{j=1}^L\exp(\gamma(s_n^j - s_p^i+m))\Big]\\
&=\log\Big[1+\sum_{j=1}^L\exp(\gamma(s_n^j+m))\sum_{i=1}^K\exp(\gamma(-s_p^i))\Big],\\
\end{aligned}
}
\end{equation}
in which $\gamma$ is a scale factor and $m$ is a margin for better similarity separation.
Eq.~\ref{eq:proto} is intuitive. It iterates through every similarity pair to reduce $(s_n^j-s_p^i)$.
We note that it degenerates to triplet loss or classification loss with slight modifications.
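As a minimal NumPy sketch (function and variable names are ours, for illustration only), Eq.~\ref{eq:proto} can be evaluated directly from the two sets of similarity scores:

```python
import numpy as np

def unified_loss(s_p, s_n, gamma=1.0, m=0.0):
    """Sketch of the unified loss in Eq. (1).

    s_p : array of K within-class similarity scores.
    s_n : array of L between-class similarity scores.
    Iterates (vectorized) over every (s_n^j, s_p^i) pair to penalize
    (s_n^j - s_p^i + m), aggregated by log-sum-exp.
    """
    logits = gamma * (s_n[:, None] - s_p[None, :] + m)   # shape (L, K)
    # log(1 + sum(exp(logits))), computed stably by shifting the maximum
    mx = max(logits.max(), 0.0)
    return mx + np.log(np.exp(-mx) + np.exp(logits - mx).sum())
```

With a single similarity pair, the sketch reduces to a softplus over $\gamma(s_n-s_p+m)$.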
\begin{figure*}[t!]
\centering
\includegraphics[width=1\linewidth]{figures/gradients.pdf}
\caption{The gradients of the loss functions. (a) Triplet loss. (b) AM-Softmax loss. (c) The proposed Circle loss. Both triplet loss and AM-Softmax loss lack flexibility in optimization. The gradients with respect to $s_p$ (left) and $s_n$ (right) are restricted to be equal to each other and undergo a sudden decrease upon convergence (the similarity pair $B$). For example, at $A$, the within-class similarity score $s_p$ already approaches $1$, yet still incurs a large gradient. Moreover, the decision boundaries are parallel to $s_p=s_n$, which allows ambiguous convergence. In contrast, the proposed Circle loss assigns different gradients to the similarity scores, depending on their distances to the optimum. For $A$ (where both $s_n$ and $s_p$ are large), Circle loss lays emphasis on optimizing $s_n$. For $B$, since $s_n$ has significantly decreased, Circle loss reduces its gradient and thus enforces a moderated penalty. Circle loss has a circular decision boundary and promotes an accurate convergence status.}
\vspace{-4mm}
\label{fig:gradient}
\end{figure*}
\textbf{Given class-level labels}, we calculate the similarity scores between $x$ and weight vectors $w_i~(i=1,2,\cdots,\,N)$ ($N$ is the number of training classes) in the classification layer. Specifically, we get $(N-1)$ between-class similarity scores by: $s_n^j=w_j^\intercal x/(\|w_j\|\|x\|)$ ($w_j$ is the $j$-th non-target weight vector). Additionally, we get a single within-class similarity score (with the superscript omitted) $s_p=w_y^\intercal x/(\|w_y\|\|x\|)$.
With these prerequisites, Eq.~\ref{eq:proto} degenerates to AM-Softmax~\cite{wang2018additive,Wang_2018_CVPR}, an important variant of Softmax loss (\emph{i.e.}, softmax cross-entropy loss):
\begin{equation}\label{eq:degenerate_softmax}
\footnotesize{
\begin{aligned}
\mathcal{L}_{am}
&=\log\Big[1+\sum_{j=1}^{N-1}\exp(\gamma (s_n^j+m))\exp(-\gamma s_p)\Big]\\
&=-\log\frac{\exp(\gamma (s_p-m))}{\exp(\gamma (s_p-m))+\sum_{j=1}^{N-1}\exp(\gamma s_n^j)}.\\
\end{aligned}
}
\end{equation}
Moreover, with $m=0$, Eq.~\ref{eq:degenerate_softmax} further degenerates to NormFace~\cite{wang2017normface}.
By replacing the cosine similarity with the inner product and setting $\gamma=1$, it finally degenerates to Softmax loss.
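To make the degeneration concrete, the following sketch (ours, not the authors' code) computes Eq.~\ref{eq:degenerate_softmax} in its negative log-softmax form; it can be checked numerically against the unified form in the first line of Eq.~\ref{eq:degenerate_softmax}:

```python
import numpy as np

def am_softmax(s_p, s_n, gamma, m):
    """AM-Softmax (Eq. 2) as the negative log-softmax of the margin-shifted
    target logit. s_p is the single within-class score, s_n holds the
    N-1 between-class scores (names are ours)."""
    logits = np.concatenate(([gamma * (s_p - m)], gamma * np.asarray(s_n)))
    # -log softmax of the target logit, stabilized by the max shift
    mx = logits.max()
    return -(logits[0] - mx - np.log(np.exp(logits - mx).sum()))
```

Both lines of Eq.~\ref{eq:degenerate_softmax} agree: the value above equals $\log[1+\sum_j\exp(\gamma(s_n^j+m))\exp(-\gamma s_p)]$ up to floating-point error.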
\textbf{Given pair-wise labels}, we calculate the similarity scores between $x$ and the other features in the mini-batch. Specifically,
$s_n^j=(x_n^j)^\intercal x/(\|x_n^j\|\|x\|)$ ($x_n^j$ is the $j$-th sample in the negative sample set $\mathcal{N}$) and $s_p^i=(x_p^i)^\intercal x/(\|x_p^i\|\|x\|)$ ($x_p^i$ is the $i$-th sample in the positive sample set $\mathcal{P}$). Correspondingly, $K=|\mathcal{P}|,\,L=|\mathcal{N}|$. Eq.~\ref{eq:proto} degenerates to triplet loss with hard mining~\cite{schroff2015facenet, hermans2017defense}:
\begin{equation}\label{eq:degenerate_triplet}
\footnotesize{
\begin{aligned}
\mathcal{L}_{tri}&={\lim_{\gamma \to +\infty}}\frac{1}{\gamma}\mathcal{L}_{uni}\\
&={\lim_{\gamma \to +\infty}}\frac{1}{\gamma}\log\Big[1+\sum_{i=1}^{K}\sum_{j=1}^{L}\exp(\gamma (s_n^j -s_p^i+m))\Big]\\
&=\max_{i,j}\big[s_n^j-s_p^i+m\big]_+.\\
\end{aligned}
}
\end{equation}
Specifically, we note that in Eq.~\ref{eq:degenerate_triplet}, the ``$\sum\exp(\cdot)$'' operation is utilized by Lifted-Structure loss~\cite{oh2016deep}, N-pair loss~\cite{Sohn2016ImprovedDM}, Multi-Similarity loss~\cite{wang2019multi}, \emph{etc.}, to conduct ``soft'' hard mining among samples. Enlarging $\gamma$ gradually reinforces the mining intensity, and when $\gamma\rightarrow+\infty$, it results in the canonical hard mining of~\cite{schroff2015facenet, hermans2017defense}.
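The mining-intensity effect of $\gamma$ can be illustrated with a small sketch (ours): for moderate $\gamma$ the log-sum-exp softly weights all pairs, while a large $\gamma$ recovers the hardest pair of Eq.~\ref{eq:degenerate_triplet}:

```python
import numpy as np

def soft_triplet(s_p, s_n, gamma, m):
    """(1/gamma) * unified loss: 'soft' hard mining over all (i, j) pairs."""
    logits = gamma * (s_n[:, None] - s_p[None, :] + m)
    mx = max(logits.max(), 0.0)
    return (mx + np.log(np.exp(-mx) + np.exp(logits - mx).sum())) / gamma

def hard_triplet(s_p, s_n, m):
    """The gamma -> +inf limit: canonical hard-mining triplet loss."""
    return max(s_n.max() - s_p.min() + m, 0.0)
```

For finite $\gamma$ the soft version upper-bounds the hard one and converges to it as $\gamma$ grows.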
\textbf{Gradient analysis.}
Eq.~\ref{eq:degenerate_softmax} and Eq.~\ref{eq:degenerate_triplet} show that triplet loss, Softmax loss and several of its variants can be interpreted as specific cases of Eq.~\ref{eq:proto}. In other words, they all optimize $(s_n-s_p)$.
Under the toy scenario where there are only a single $s_p$ and $s_n$, we visualize the gradients of triplet loss and AM-Softmax loss in Fig.~\ref{fig:gradient} (a) and (b), from which we draw the following observations:
\begin{itemize}
\item First, before the loss reaches its decision boundary (upon which the gradients vanish), the gradients with respect to $s_p$ and $s_n$ are equal to each other. The status $A$ has $\{s_n, s_p\}=\{0.8,0.8\}$, indicating good within-class compactness. However, $A$ still receives a large gradient with respect to $s_p$.
This leads to a lack of flexibility during optimization.
\item Second, the gradients stay (roughly) constant before convergence and undergo a sudden decrease upon convergence. The status $B$ lies closer to the decision boundary and is better optimized than $A$. However, the loss functions (both triplet loss and AM-Softmax loss) enforce approximately equal penalties on $A$ and $B$. This is further evidence of inflexibility.
\item Third, the decision boundaries (the white dashed lines) are parallel to $s_n-s_p=m$. Any two points (\emph{e.g.}, $T$ and $T'$ in Fig.~\ref{fig:intro}) on this boundary have an equal similarity gap of $m$ and are thus of equal difficulty to achieve. In other words, loss functions minimizing $(s_n-s_p+m)$ have no preference between $T$ and $T'$ for convergence, and are prone to ambiguous convergence. Experimental evidence of this problem is presented in Section~\ref{sec:exp_mechanism}.
\end{itemize}
These problems originate from the optimization manner of minimizing $(s_n-s_p)$, in which reducing $s_n$ is equivalent to increasing $s_p$. In the following Section~\ref{sec:circle_loss}, we generalize this optimization manner to facilitate higher flexibility.
\section{A New Loss Function} \label{sec:circle_loss}
\subsection{Self-paced Weighting}
We consider enhancing the optimization flexibility by allowing each similarity score to learn at its own pace, depending on its current optimization status. We first neglect the margin term $m$ in Eq.~\ref{eq:proto} and transform the unified loss function into the proposed Circle loss:
\begin{equation}\label{eq:circle}
\footnotesize{
\begin{aligned}
\mathcal{L}_{circle}&=\log\Big[1+\sum_{i=1}^K\sum_{j=1}^L\exp\big(\gamma(\alpha_n^j s_n^j - \alpha_p^i s_p^i)\big)\Big]\\
&=\log\Big[1+\sum_{j=1}^L\exp(\gamma\alpha_n^j s_n^j)\sum_{i=1}^K\exp(-\gamma\alpha_p^i s_p^i)\Big],\\
\end{aligned}
}
\end{equation}
in which $\alpha_n^j$ and $\alpha_p^i$ are non-negative weighting factors.
Eq.~\ref{eq:circle} is derived from Eq.~\ref{eq:proto} by generalizing $(s_n^j-s_p^i)$ into $(\alpha_n^j s_n^j - \alpha_p^i s_p^i)$.
During training, the gradient with respect to $(\alpha_n^j s_n^j - \alpha_p^i s_p^i)$ is multiplied by $\alpha_n^j$ ($\alpha_p^i$) when back-propagated to $s_n^j$ ($s_p^i$).
When a similarity score deviates far from its optimum (\emph{i.e.}, $O_n$ for $s_n^j$ and $O_p$ for $s_p^i$), it should receive a large weighting factor so as to get an effective update with a large gradient.
To this end, we define $\alpha_p^i$ and $\alpha_n^j$ in a self-paced manner:
\begin{equation}\label{eq:scale}
\footnotesize{
\left\{
\begin{aligned}
\alpha_p^i=[O_p-s_p^i ]_+,\\
\alpha_n^j=[s_n^j-O_n]_+,\\
\end{aligned}
\right.
}
\end{equation}
in which $[\cdot]_+$ is the ``cut-off at zero'' operation to ensure $\alpha_p^i$ and $\alpha_n^j$ are non-negative.
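The weighting of Eq.~\ref{eq:scale} is straightforward to sketch (ours; we assume the scores are plain floats, and in a real implementation the weights would typically be treated as constants during back-propagation):

```python
import numpy as np

def circle_weights(s_p, s_n, O_p, O_n):
    """Self-paced weights of Eq. (5): the farther a score lies from its
    optimum (O_p for s_p, O_n for s_n), the larger its weight."""
    alpha_p = np.maximum(O_p - s_p, 0.0)   # [O_p - s_p]_+
    alpha_n = np.maximum(s_n - O_n, 0.0)   # [s_n - O_n]_+
    return alpha_p, alpha_n
```

Scores already beyond their optimum receive zero weight, so they no longer contribute gradient.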
\textbf{Discussions.} Re-scaling the cosine similarity under supervision is a common practice in modern classification losses~\cite{ranjan2017l2,wang2017normface,wang2018additive,Wang_2018_CVPR,Zhang2018HeatedUpSE,Zhang2019AdaCosAS}. Conventionally, all the similarity scores share an equal scale factor $\gamma$. The equal re-scaling is natural when we consider the softmax value in a classification loss function as the probability of a sample belonging to a certain class. In contrast, Circle loss multiplies each similarity score with an independent weighting factor before re-scaling. It thus gets rid of the constraint of equal re-scaling and allows more flexible optimization. Besides better optimization, another significance of such a re-weighting (or re-scaling) strategy lies in the underlying interpretation. Circle loss abandons the interpretation of classifying a sample to its target class with a large probability. Instead, it holds a similarity-pair optimization perspective, which is compatible with both elemental learning paradigms.
\subsection{Within-class and Between-class Margins}\label{sec:method_margin}
In loss functions optimizing $(s_n-s_p)$, adding a margin $m$ reinforces the optimization~\cite{liu2017sphereface,Liu2016LargeMarginSL,wang2018additive,Wang_2018_CVPR}. Since $s_n$ and $-s_p$ are in symmetric positions, a positive margin on $s_n$ is equivalent to a negative margin on $s_p$. It thus only requires a single margin $m$. In Circle loss, $s_n$ and $s_p$ are in asymmetric positions. Naturally, it requires respective margins for $s_n$ and $s_p$, which is formulated by:
\begin{scriptsize}
\begin{equation}\label{eq:margin_circle}
\mathcal{L}_{circle}
=\log\big[1+\sum_{j=1}^L\exp(\gamma \alpha_n^j (s_n^j-\Delta_n))\sum_{i=1}^K\exp(-\gamma \alpha_p^i (s_p^i-\Delta_p))\big],
\end{equation}
\end{scriptsize}
in which $\Delta_n$ and $\Delta_p$ are the between-class and within-class margins, respectively.
Basically, Circle loss in Eq.~\ref{eq:margin_circle} expects $s_p^i>\Delta_p$ and $s_n^j<\Delta_n$. We further analyze the settings of $\Delta_n$ and $\Delta_p$ by deriving the decision boundary. For simplicity, we consider the case of binary classification, in which the decision boundary is achieved at $\alpha_n (s_n-\Delta_n)-\alpha_p (s_p- \Delta_p)=0$.
Combined with Eq.~\ref{eq:scale}, the decision boundary is given by:
\begin{equation}\label{eq:boundary}
\footnotesize
(s_n-\frac{O_n+\Delta_n}{2})^2 + (s_p-\frac{O_p+\Delta_p}{2})^2=C,
\end{equation}
in which $C=\big((O_n-\Delta_n)^2+(O_p-\Delta_p)^2\big)/4$.
Eq.~\ref{eq:boundary} shows that the decision boundary is the arc of a circle, as shown in Fig.~\ref{fig:intro} (b). The center of the circle is at $s_n=(O_n+\Delta_n)/2, s_p=(O_p+\Delta_p)/2$, and its radius equals $\sqrt{C}$.
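For completeness, Eq.~\ref{eq:boundary} can be verified as follows. Substituting Eq.~\ref{eq:scale} into $\alpha_n (s_n-\Delta_n)-\alpha_p (s_p- \Delta_p)=0$ (assuming both cut-offs are inactive) gives
\begin{equation*}
\footnotesize
(s_n-O_n)(s_n-\Delta_n)=(O_p-s_p)(s_p-\Delta_p),
\end{equation*}
and completing the square on both sides, \emph{i.e.}, $(s_n-\frac{O_n+\Delta_n}{2})^2-\frac{(O_n-\Delta_n)^2}{4}$ on the left and $\frac{(O_p-\Delta_p)^2}{4}-(s_p-\frac{O_p+\Delta_p}{2})^2$ on the right, rearranges into Eq.~\ref{eq:boundary}.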
There are five hyper-parameters for Circle loss, \emph{i.e.}, $O_p$, $O_n$ in Eq.~\ref{eq:scale} and $\gamma$, $\Delta_p$, $\Delta_n$ in Eq.~\ref{eq:margin_circle}. We reduce the hyper-parameters by setting $O_p=1+m$, $O_n=-m$, $\Delta_p=1-m$, and $\Delta_n=m$.
Consequently, the decision boundary in Eq.~\ref{eq:boundary} is reduced to:
\begin{equation}\label{eq:simple_boundary}
\begin{aligned}
(s_n-0)^2 + (s_p-1)^2=2m^2.
\end{aligned}
\end{equation}
With the decision boundary defined in Eq.~\ref{eq:simple_boundary}, we have another intuitive interpretation of Circle loss. It aims to optimize $s_p\rightarrow1$ and $s_n \rightarrow0$. The parameter $m$ controls the radius of the decision boundary and can be viewed as a relaxation factor. In other words, Circle loss expects $s_p^i>1-m$ and $s_n^j<m$.
Hence there are only two hyper-parameters, \emph{i.e.}, the scale factor $\gamma$ and the relaxation margin $m$.
We will experimentally analyze the impacts of $m$ and $\gamma$ in Section~\ref{sec:exp_param}.
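Under the reduced hyper-parameters, the whole loss fits in a few lines of NumPy (our sketch, not the authors' implementation; the cut-offs of Eq.~\ref{eq:scale} are kept):

```python
import numpy as np

def circle_loss(s_p, s_n, gamma=256.0, m=0.25):
    """Circle loss (Eq. 6) with O_p = 1+m, O_n = -m,
    Delta_p = 1-m, Delta_n = m."""
    alpha_p = np.maximum(1 + m - s_p, 0.0)       # [O_p - s_p]_+
    alpha_n = np.maximum(s_n + m, 0.0)           # [s_n - O_n]_+
    logit_p = -gamma * alpha_p * (s_p - (1 - m))
    logit_n = gamma * alpha_n * (s_n - m)
    # log of the product of the two exp-sums, via stable log-sum-exp
    lse = lambda x: x.max() + np.log(np.exp(x - x.max()).sum())
    z = lse(logit_n) + lse(logit_p)
    return np.log1p(np.exp(z)) if z < 30 else z  # log(1 + e^z), stably
```

For a single similarity pair this reduces to a softplus over $\gamma\big(\alpha_n(s_n-m)-\alpha_p(s_p-(1-m))\big)$.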
\subsection{The Advantages of Circle Loss}\label{sec:method_character}
{The gradients} of Circle loss with respect to $s_n^j$ and $s_p^i$ are derived as follows:
\begin{small}
\begin{equation}
\footnotesize
\frac{\partial \mathcal{L}_{circle}}{\partial s_n^j}
=Z\frac{\exp\big(\gamma((s_n^j)^2-m^2)\big)}{\sum_{l=1}^L\exp\big(\gamma((s_n^l)^2-m^2)\big)}\gamma(s_n^j+m),
\end{equation}
\end{small}
and
\begin{small}
\begin{equation}\label{eq:gradient_neg}
\footnotesize
\frac{\partial \mathcal{L}_{circle}}{\partial s_p^i} =Z\frac{\exp\big(\gamma((s_p^i-1)^2-m^2)\big)}{\sum_{k=1}^K\exp\big(\gamma((s_p^k-1)^2-m^2)\big)}\gamma(s_p^i-1-m),
\end{equation}
\end{small}
in both of which
$
\footnotesize{
Z=1-\exp(-\mathcal{L}_{circle})}.
$
Under the toy scenario of binary classification (or only a single $s_n$ and $s_p$), we visualize the gradients under different settings of $m$ in Fig.~\ref{fig:gradient} (c), from which we draw the following three observations:
$\bullet~$\emph{Balanced optimization on $s_n$ and $s_p$.} We recall that loss functions minimizing $(s_n-s_p)$ always assign equal gradients to $s_p$ and $s_n$ and are inflexible. In contrast, Circle loss presents a dynamic penalty strength. Within a specified similarity pair $\{s_n, s_p\}$, if $s_p$ is better optimized than $s_n$ (\emph{e.g.}, $A=\{0.8,0.8\}$ in Fig.~\ref{fig:gradient} (c)), Circle loss assigns a larger gradient to $s_n$ (and vice versa), so as to decrease $s_n$ with higher priority. Experimental evidence of the balanced optimization is presented in Section~\ref{sec:exp_mechanism}.
$\bullet~$\emph{Gradually-attenuated gradients.} At the start of training, the similarity scores deviate far from the optimum and gain large gradients (\emph{e.g.}, ``$A$'' in Fig.~\ref{fig:gradient} (c)). As training gradually approaches convergence, the gradients on the similarity scores correspondingly decay (\emph{e.g.}, ``$B$'' in Fig.~\ref{fig:gradient} (c)), yielding a mild optimization. Experimental results in Section~\ref{sec:exp_param} show that the learning effect is robust to various settings of $\gamma$ (in Eq.~\ref{eq:margin_circle}), which we attribute to the automatically-attenuated gradients.
$\bullet~$\emph{A (more) definite convergence target.}
Circle loss has a circular decision boundary and favors $T$ rather than $T'$ (Fig.~\ref{fig:intro}) for convergence, because $T$ has the smallest gap between $s_p$ and $s_n$ among all the points on the decision boundary. In other words, $T'$ has a larger gap between $s_p$ and $s_n$ and is inherently more difficult to maintain. In contrast, losses that minimize $(s_n - s_p)$ have a homogeneous decision boundary, that is, every point on the decision boundary is equally difficult to reach.
Experimentally, we observe that Circle loss leads to a more concentrated similarity distribution after convergence, as to be detailed in Section \ref{sec:exp_mechanism} and Fig.~\ref{fig:scatter}.
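The balanced-optimization property can be checked numerically with finite differences on a single-pair version of the loss (a sketch under our reduced-parameter implementation, not the authors' code; point $A$ follows Fig.~\ref{fig:gradient}):

```python
import numpy as np

def circle_pair(s_p, s_n, gamma=1.0, m=0.25):
    """Single-pair Circle loss, Eq. (6) with the reduced hyper-parameters."""
    a_p = max(1 + m - s_p, 0.0)
    a_n = max(s_n + m, 0.0)
    return np.log1p(np.exp(gamma * (a_n * (s_n - m) - a_p * (s_p - (1 - m)))))

def num_grad(f, x, eps=1e-6):
    """Central finite-difference approximation of df/dx."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

# At A = {s_n, s_p} = {0.8, 0.8}: s_p is already near its optimum,
# so the gradient magnitude on s_n should dominate the one on s_p.
g_sp = num_grad(lambda v: circle_pair(v, 0.8), 0.8)
g_sn = num_grad(lambda v: circle_pair(0.8, v), 0.8)
```

The loss increases with $s_n$ and decreases with $s_p$, with a much larger magnitude on $s_n$ at this point.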
\section{Experiments}
We comprehensively evaluate the effectiveness of Circle loss under two elemental learning approaches, \emph{i.e.}, learning with class-level labels and learning with pair-wise labels. For the former approach, we evaluate our method on face recognition (Section~\ref{sec:exp_face}) and person re-identification (Section~\ref{sec:exp_reid}) tasks. For the latter approach, we use the fine-grained image retrieval datasets (Section~\ref{sec:exp_finegrain}), which are relatively small and encourage learning with pair-wise labels. We show that Circle loss is competent under both settings.
Section~\ref{sec:exp_param} analyzes the impact of the two hyper-parameters, \emph{i.e.}, the scale factor $\gamma$ in Eq.~\ref{eq:margin_circle} and the relaxation factor $m$ in Eq.~\ref{eq:simple_boundary}. We show that Circle loss is robust under reasonable settings. Finally, Section~\ref{sec:exp_mechanism} experimentally confirms the characteristics of Circle loss.
\subsection{Settings}
\textbf{Face recognition.}\quad We use the popular dataset MS-Celeb-1M~\cite{guo2016ms} for training. The native MS-Celeb-1M data is noisy and has a long-tailed data distribution. We clean the dirty samples and exclude a few tail identities ($\le3$ images per identity), resulting in $3.6M$ images and $79.9K$ identities. For evaluation, we adopt MegaFace Challenge 1 (MF1)~\cite{kemelmacher2016megaface}, IJB-C~\cite{maze2018iarpa}, LFW~\cite{LFWTech}, YTF~\cite{wolf2011face} and CFP-FP~\cite{cfp-paper} datasets, and the official evaluation protocols are used. We also polish the probe set and the 1M distractors on MF1 for more reliable evaluation, following~\cite{deng2019arcface}.
For data pre-processing, we resize the aligned face images to $112\times112$ and linearly normalize the pixel values of RGB images to $[-1,1]$~\cite{wen2016discriminative,liu2017sphereface,Wang_2018_CVPR}. We only augment the training samples by random horizontal flip. We choose the popular residual networks~\cite{he2016deep} as our backbones.
All the models are trained for 182k iterations. The learning rate starts at 0.1 and is reduced by 10$\times$ at 50\%, 70\% and 90\% of the total iterations, respectively. The default hyper-parameters of our method are $\gamma=256$ and $m=0.25$, if not specified otherwise.
For all the model inference, we extract the 512-D feature embeddings and use cosine distance as the metric.
\textbf{Person re-identification.}\quad
Person re-identification (re-ID) aims to spot the appearance of the same person in different observations.
We evaluate our method on two popular datasets, \emph{i.e.}, Market-1501~\cite{Zheng_2015_ICCVmarket} and MSMT17~\cite{Wei_2018_CVPRMSMT17}. Market-1501 contains 1,501 identities, 12,936 training images and 19,732 gallery images captured with 6 cameras. MSMT17 contains 4,101 identities, 126,411 images captured with 15 cameras and presents a long-tailed sample distribution. We adopt two network structures, \emph{i.e.}, a global feature learning model backboned on ResNet50 and a part-feature model named MGN~\cite{Wang_2018MGN}. We use MGN in consideration of its competitive performance and relatively concise structure. The original MGN uses a Softmax loss on each part-feature branch for training. Our implementation concatenates all the part features into a single feature vector for simplicity. For Circle loss, we set $\gamma=128$ and $m=0.25$.
\textbf{Fine-grained image retrieval.}\quad We use three datasets for evaluation on fine-grained image retrieval, \textit{i.e.} CUB-200-2011~\cite{WahCUB_200_2011}, Cars196~\cite{krause20133d} and Stanford Online Products~\cite{oh2016deep}.
Cars196 contains $16,183$ images of $196$ car classes. The first $98$ classes are used for training and the last $98$ classes for testing. CUB-200-2011 has $200$ bird classes. We use the first $100$ classes with $5,864$ images for training and the last $100$ classes with $5,924$ images for testing. SOP is a large dataset consisting of $120,053$ images belonging to $22,634$ classes of online products. The training set contains $11,318$ classes with $59,551$ images, and the remaining $11,316$ classes with $60,499$ images are used for testing.
The experimental setup follows~\cite{oh2016deep}. We use BN-Inception~\cite{ioffe2015batch} as the backbone to learn 512-D embeddings. We adopt the P-K sampling strategy~\cite{hermans2017defense} to construct mini-batches, with $P=16$ and $K=5$.
For Circle loss, we set $\gamma=80$ and $m=0.4$.
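The P-K batch construction above can be sketched as follows (our simplification: classes with fewer than $K$ samples are skipped, whereas real implementations usually sample with replacement; all names are ours):

```python
import random
from collections import defaultdict

def pk_batch(labels, P=16, K=5, rng=random):
    """Draw P classes, then K samples per class, for one mini-batch.

    labels : list of class ids, one per dataset index.
    Returns a list of P*K dataset indices.
    """
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    # only classes with at least K samples are eligible in this sketch
    eligible = [y for y, idxs in by_class.items() if len(idxs) >= K]
    batch = []
    for y in rng.sample(eligible, P):
        batch.extend(rng.sample(by_class[y], K))
    return batch
```

Each batch thus guarantees $K-1$ positives and $(P-1)K$ negatives for every sample, which is what pair-wise losses need.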
\begin{table}[t]
\small
\centering
\caption{Face identification and verification results on the MF1 dataset. ``Rank 1'' denotes rank-1 identification accuracy. ``Veri.'' denotes verification TAR (True Accepted Rate) at 1e-6 FAR (False Accepted Rate) with $1M$ distractors. ``R34'' and ``R100'' denote using ResNet34 and ResNet100 backbones, respectively.}
\label{tab:mf1}
\begin{tabularx}{\linewidth}{Xcccc}
\toprule
\multirow{2}{*}{Loss function} & \multicolumn{2}{c}{Rank 1 (\%)} &
\multicolumn{2}{c}{Veri. (\%)}\\
\cmidrule(l{2pt}r{2pt}){2-3} \cmidrule(l{2pt}r{2pt}){4-5}
& R34 & R100 &R34 & R100 \\
\midrule
Softmax &92.36 &95.04& 92.72 &95.16 \\
NormFace~\cite{wang2017normface} &92.62 &95.27 &92.91 &95.37 \\
AM-Softmax~\cite{wang2018additive,Wang_2018_CVPR} &97.54 &98.31 & 97.64 &98.55 \\
ArcFace~\cite{deng2019arcface} &97.68 & 98.36 &97.70 &98.58 \\
CircleLoss (ours) &\textbf{97.81} &\textbf{98.50}&\textbf{98.12} &\textbf{98.73} \\
\bottomrule
\end{tabularx}
\end{table}
\begin{table}[t]
\small
\centering
\caption{Face verification accuracy (\%) on LFW, YTF and CFP-FP with ResNet34 backbone.}
\label{tab:face-verif}
\begin{tabularx}{\linewidth}{lccc}
\toprule
Loss function & LFW~\cite{LFWTech}& YTF~\cite{wolf2011face} & CFP-FP~\cite{cfp-paper} \\
\midrule
Softmax &99.18 & 96.19& 95.01\\
NormFace~\cite{wang2017normface} & 99.25 & 96.03 & 95.34\\
AM-Softmax~\cite{wang2018additive,Wang_2018_CVPR} & 99.63 & 96.31 & 95.78 \\
ArcFace~\cite{deng2019arcface} & 99.68 & 96.34 & 95.84\\
CircleLoss(ours) & \textbf{99.73} & \textbf{96.38} & \textbf{96.02} \\
\bottomrule
\end{tabularx}
\end{table}
\subsection{Face Recognition}\label{sec:exp_face}
For face recognition task, we compare Circle loss against several popular classification loss functions, \emph{i.e.}, vanilla Softmax, NormFace~\cite{wang2017normface}, AM-Softmax~\cite{wang2018additive} (or CosFace~\cite{Wang_2018_CVPR}), ArcFace~\cite{deng2019arcface}. Following the original papers~\cite{wang2018additive, deng2019arcface}, we set $\gamma=64,m=0.35$ for AM-Softmax and $\gamma=64, m=0.5$ for ArcFace.
\begin{table}[t]
\small
\centering
\caption{Comparison of TARs on the IJB-C 1:1 verification task.}
\label{tab:ijb-c}
\begin{tabularx}{\linewidth}{Xccc}
\toprule
\multirow{2}{*}{Loss function} & \multicolumn{3}{c}{TAR@FAR (\%)} \\
\cmidrule{2-4}
& 1e-3 & 1e-4 & 1e-5 \\
\midrule
ResNet34, AM-Softmax~\cite{wang2018additive,Wang_2018_CVPR} & 95.87 & 92.14 & 81.86\\
ResNet34, ArcFace~\cite{deng2019arcface} &95.94 & 92.28 & 84.23\\
ResNet34, CircleLoss(ours) & \textbf{96.04} & \textbf{93.44} & \textbf{86.78} \\
\hline
ResNet100, AM-Softmax~\cite{wang2018additive,Wang_2018_CVPR} & 95.93 & 93.19 & 88.87\\
ResNet100, ArcFace~\cite{deng2019arcface} &96.01 & 93.25 & 89.10\\
ResNet100, CircleLoss(ours) & \textbf{96.29} & \textbf{93.95} & \textbf{89.60} \\
\bottomrule
\end{tabularx}
\end{table}
We report the identification and verification results on the MegaFace Challenge 1 dataset (MF1) in Table~\ref{tab:mf1}. Circle loss marginally outperforms its counterparts under different backbones. For example, with ResNet34 as the backbone, Circle loss surpasses the most competitive one (ArcFace) by +0.13\% at rank-1 accuracy. With ResNet100 as the backbone, while ArcFace achieves a high rank-1 accuracy of 98.36\%, Circle loss still outperforms it by +0.14\%. The same observations also hold for the verification metric.
Table~\ref{tab:face-verif} summarizes face verification results on LFW~\cite{LFWTech}, YTF~\cite{wolf2011face} and CFP-FP~\cite{cfp-paper}.
We note that performance on these datasets is already near saturation. Specifically, ArcFace is higher than AM-Softmax by +0.05\%, +0.03\%, +0.07\% on three datasets, respectively. Circle loss remains the best one, surpassing ArcFace by +0.05\%, +0.06\% and +0.18\%, respectively.
We further compare Circle loss with AM-Softmax and ArcFace on IJB-C 1:1 verification task in Table~\ref{tab:ijb-c}. Under both ResNet34 and ResNet100 backbones, Circle loss presents considerable superiority. For example, with ResNet34, Circle loss significantly surpasses ArcFace by +1.16\% and +2.55\% on ``TAR@FAR=1e-4'' and ``TAR@FAR=1e-5'', respectively.
\begin{table}[t]
\small
\centering
\caption{Evaluation of Circle loss on re-ID task. We report R-1 accuracy (\%) and mAP (\%). }
\label{tab:person-reid}
\begin{tabular}{lcccc}
\toprule
\multirow{2}{*}{Method} & \multicolumn{2}{c}{Market-1501} & \multicolumn{2}{c}{MSMT17}\\
\cmidrule{2-5}
& R-1 & mAP & R-1& mAP \\
\midrule
PCB~\cite{Sun_2018_ECCVPCB} (Softmax)&93.8&81.6&68.2&40.4\\
MGN~\cite{Wang_2018MGN} (Softmax+Triplet) &95.7&86.9&-&-\\
JDGL~\cite{Zheng_2019_CVPRJDGL} &94.8&86.0&\textbf{77.2}&\textbf{52.3}\\
ResNet50 + AM-Softmax &92.4&83.8&75.6&49.3\\
ResNet50 + CircleLoss(ours) &94.2& 84.9 &76.3&50.2\\
MGN + AM-Softmax &95.3&86.6&76.5&51.8\\
MGN + CircleLoss(ours) &\textbf{96.1}&\textbf{87.4}&{76.9}&{52.1}\\
\bottomrule
\end{tabular}
\end{table}
\begin{table*}[t]
\small
\centering
\caption{Comparison of R@K(\%) on three fine-grained image retrieval datasets. Superscript denotes embedding size.}
\label{tab:cub-cars}
\begin{tabularx}{\textwidth}{Xcccccccccccccc}
\toprule
\multirow{2}{*}{Loss function} & \multicolumn{4}{c}{CUB-200-2011~\cite{WahCUB_200_2011}} && \multicolumn{4}{c}{Cars196~\cite{krause20133d}} && \multicolumn{4}{c}{Stanford Online Products~\cite{oh2016deep}}\\
\cmidrule{2-5} \cmidrule{7-10} \cmidrule{12-15}
& R@1 & R@2 & R@4 & R@8 & & R@1 & R@2 & R@4 & R@8 & & R@1 &R@10& R@$10^2$ & R@$10^3$\\
\midrule
LiftedStruct$^{64}$~\cite{oh2016deep} &43.6 &56.6&68.6&79.6 &&53.0&65.7&76.0&84.3 &&62.5&80.8&91.9&97.4 \\
HDC$^{384}$~\cite{Song_2017_CVPRHDC} &53.6 &65.7&77.0&85.6 &&73.7&83.2&89.5&93.8 &&69.5&84.4&92.8&97.7\\
HTL$^{512}$~\cite{Ge_2018_ECCVHTL} &57.1 &68.8&78.7&86.5 &&81.4&88.0&92.7&95.7 &&74.8&88.3&94.8&98.4\\
ABIER$^{512}$~\cite{ABIER} &57.5 &71.5&79.8&87.4 &&82.0&89.0&93.2&96.1 &&74.2&86.9&94.0&97.8\\
ABE$^{512}$~\cite{Kim_2018_ECCVABE} &60.6 &71.5&79.8&87.4 &&\textbf{85.2}&\textbf{90.5}&94.0&96.1 &&76.3&88.4&94.8&98.2\\
Multi-Simi$^{512}$~\cite{wang2019multi} & 65.7 & 77.0 & \textbf{86.3} & 91.2 && {84.1} & {90.4} & 94.0 & 96.5 && 78.2 & 90.5 & 96.0 & \textbf{98.7}\\
CircleLoss$^{512}$ & \textbf{66.7} & \textbf{77.4} & 86.2 & \textbf{91.2} && 83.4 & 89.8 & \textbf{94.1} & \textbf{96.5} && \textbf{78.3} & \textbf{90.5} & \textbf{96.1} & 98.6 \\
\bottomrule
\end{tabularx}
\end{table*}
\subsection{Person Re-identification}\label{sec:exp_reid}
We evaluate Circle loss on the re-ID task in Table~\ref{tab:person-reid}. MGN~\cite{Wang_2018MGN} is one of the state-of-the-art methods and is characterized by learning multi-granularity part-level features. Originally, it uses both Softmax loss and triplet loss to facilitate joint optimization. For simplicity, our implementations of ``MGN (ResNet50) + AM-Softmax'' and ``MGN (ResNet50) + Circle loss'' use only a single loss function.
We make three observations from Table~\ref{tab:person-reid}. First, Circle loss achieves competitive re-ID accuracy against the state of the art. We note that ``JDGL'' is slightly higher than ``MGN + Circle loss'' on MSMT17~\cite{Wei_2018_CVPRMSMT17}; JDGL~\cite{Zheng_2019_CVPRJDGL} uses a generative model to augment the training data and significantly improves re-ID on this long-tailed dataset. Second, comparing Circle loss with AM-Softmax, we observe the superiority of Circle loss, which is consistent with the experimental results on the face recognition task. Third, comparing ``ResNet50 + Circle loss'' against ``MGN + Circle loss'', we find that part-level features bring incremental improvement to Circle loss, implying that Circle loss is compatible with part-models specially designed for re-ID.
\subsection{Fine-grained Image Retrieval}\label{sec:exp_finegrain}
\vspace{0.5em}
We evaluate the compatibility of Circle loss with pair-wise labeled data on three fine-grained image retrieval datasets, \emph{i.e.}, CUB-200-2011, Cars196, and Stanford Online Products. On these datasets, the majority of methods~\cite{oh2016deep,Song_2017_CVPRHDC,Ge_2018_ECCVHTL,ABIER,Kim_2018_ECCVABE,wang2019multi} adopt the encouraged setting of learning with pair-wise labels. We compare Circle loss against these state-of-the-art methods in Table~\ref{tab:cub-cars}. We observe that Circle loss achieves competitive performance on all three datasets. Among the competing methods, LiftedStruct~\cite{oh2016deep} and Multi-Simi~\cite{wang2019multi} are specially designed with elaborate hard mining strategies for learning with pair-wise labels. HDC~\cite{Song_2017_CVPRHDC}, ABIER~\cite{ABIER} and ABE~\cite{Kim_2018_ECCVABE} benefit from model ensembles. In contrast, the proposed Circle loss achieves performance on par with the state of the art, without any bells and whistles.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/params.pdf}
\caption{Impact of two hyper-parameters. In (a), Circle loss presents high robustness on various settings of scale factor $\gamma$. In (b), Circle loss surpasses the best performance of both AM-Softmax and ArcFace within a large range of relaxation factor $m$. }
\label{fig:params}
\end{figure}
\subsection{Impact of the Hyper-parameters}\label{sec:exp_param}
We analyze the impact of two hyper-parameters, \emph{i.e.}, the scale factor $\gamma$ in Eq.~\ref{eq:margin_circle} and the relaxation factor $m$ in Eq.~\ref{eq:simple_boundary} on face recognition tasks.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/train_logit}
\caption{The change of $s_p$ and $s_n$ values during training. We linearly lengthen the curves within the first 2$k$ iterations to highlight the initial training process (in the \textcolor{green}{green} zone). During the early training stage, Circle loss rapidly increases $s_p$, because $s_p$ deviates far from the optimum at the initialization and thus attracts higher optimization priority.}
\label{fig:logit-train}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.9\linewidth]{figures/distribution.pdf}
\caption{Visualization of the similarity distribution after convergence. The \textcolor{blue}{blue} dots mark the similarity pairs crossing the decision boundary during the whole training process. The \textcolor{green}{green} dots mark the similarity pairs after convergence. (a) AM-Softmax seeks to minimize $(s_n-s_p)$. During training, the similarity pairs cross the decision boundary through a wide passage. After convergence, the similarity pairs scatter in a relatively large region in the $(s_n, s_p)$ space. In (b) and (c), Circle loss has a circular decision boundary. The similarity pairs cross the decision boundary through a narrow passage and gather into a relatively concentrated region. }
\label{fig:scatter}
\end{figure*}
\textbf{The scale factor $\gamma$} determines the largest scale of each similarity score. The concept of the scale factor is critical in many variants of Softmax loss. We experimentally evaluate its impact on Circle loss and make a comparison with several other loss functions involving scale factors. We vary $\gamma$ from $32$ to $1024$ for both AM-Softmax and Circle loss. For ArcFace, we only set $\gamma$ to 32, 64 and 128, as it becomes unstable with larger $\gamma$ in our implementation. The results are visualized in Fig.~\ref{fig:params}. Compared with AM-Softmax and ArcFace, Circle loss exhibits high robustness with respect to $\gamma$. The main reason for this robustness is the automatic attenuation of gradients. As the similarity scores approach the optimum during training, the weighting factors gradually decrease. Consequently, the gradients automatically decay, leading to a moderated optimization.
\textbf{The relaxation factor $m$} determines the radius of the circular decision boundary. We vary $m$ from $-0.2$ to $0.3$ (with $0.05$ as the interval) and visualize the results in Fig.~\ref{fig:params} (b). It is observed that under all the settings from $-0.05$ to $0.25$, Circle loss surpasses the best performance of ArcFace, as well as AM-Softmax, presenting a considerable degree of robustness.
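For reference, the roles of $\gamma$ and $m$ can be made concrete with a minimal sketch of the unified Circle loss formula, with optima $O_p = 1+m$, $O_n = -m$ and margins $\Delta_p = 1-m$, $\Delta_n = m$; the function name and default values below are illustrative, not our actual implementation:

```python
import math

def circle_loss(sp, sn, gamma=256.0, m=0.25):
    """Unified Circle loss for one sample; sp and sn are lists of
    within-class and between-class similarity scores."""
    Op, On = 1.0 + m, -m        # optima for s_p and s_n
    Dp, Dn = 1.0 - m, m         # margins defining the circular boundary
    # Self-paced weights [O_p - s_p]_+ and [s_n - O_n]_+ shrink to zero
    # as each score approaches its optimum, attenuating the gradients.
    logit_n = sum(math.exp(gamma * max(s - On, 0.0) * (s - Dn)) for s in sn)
    logit_p = sum(math.exp(-gamma * max(Op - s, 0.0) * (s - Dp)) for s in sp)
    return math.log(1.0 + logit_n * logit_p)
```

With these defaults, a well-separated pair such as $(s_p, s_n) = (0.9, 0.1)$ incurs a loss many orders of magnitude below that of an ambiguous pair $(0.5, 0.5)$, which is exactly the gradient attenuation discussed above.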
\subsection{Investigation of the Characteristics}\label{sec:exp_mechanism}
\textbf{Analysis of the optimization process.}\quad
To intuitively understand the learning process, we show the change of $s_n$ and $s_p$ during the whole training process in Fig.~\ref{fig:logit-train}, from which we draw two observations:
First, at initialization, all the $s_n$ and $s_p$ scores are small. This is because randomized features are prone to be far away from each other in the high-dimensional feature space~\cite{Zhang2019AdaCosAS,helanqing_dissection}. Correspondingly, the $s_p$ scores get significantly larger weights (compared with $s_n$), and the optimization on $s_p$ dominates the training, incurring a fast increase of the similarity values in Fig.~\ref{fig:logit-train}. This phenomenon evidences that Circle loss maintains a flexible and balanced optimization.
Second, at the end of the training, Circle loss achieves both better within-class compactness and between-class discrepancy (on the training set), compared with AM-Softmax. Because Circle loss achieves higher performance on the testing set, we believe that it indicates better optimization.
\textbf{Analysis of the convergence.}\quad
We analyze the convergence status of Circle loss in Fig.~\ref{fig:scatter}.
We investigate two issues: how the similarity pairs consisting of $s_n$ and $s_p$ cross the decision boundary during training, and how they are distributed in the $(s_n, s_p)$ space after convergence. The results are shown in Fig.~\ref{fig:scatter}. In Fig.~\ref{fig:scatter} (a), AM-Softmax loss adopts the optimal setting of $m=0.35$. In Fig.~\ref{fig:scatter} (b), Circle loss adopts a compromised setting of $m=0.325$. The decision boundaries of (a) and (b) are tangent to each other, allowing an intuitive comparison. In Fig.~\ref{fig:scatter} (c), Circle loss adopts its optimal setting of $m=0.25$. Comparing Fig.~\ref{fig:scatter} (b) and (c) against Fig.~\ref{fig:scatter} (a), we find that Circle loss presents a relatively narrower passage on the decision boundary, as well as a more concentrated distribution after convergence (especially when $m=0.25$). It indicates that Circle loss facilitates more consistent convergence for all the similarity pairs, compared with AM-Softmax loss.
This phenomenon confirms that Circle loss has a more definite convergence target, which promotes the separability in the feature space.
\section{Conclusion}
This paper provides two insights into the optimization process for deep feature learning. First, a majority of loss functions, including the triplet loss and popular classification losses, conduct optimization by embedding the between-class and within-class similarity into similarity pairs. Second, within a similarity pair under supervision, each similarity score favors different penalty strength, depending on its distance to the optimum. These insights result in Circle loss, which allows the similarity scores to learn at different paces. The Circle loss benefits deep feature learning with high flexibility in optimization and a more definite convergence target. It has a unified formula for two elemental learning approaches, \emph{i.e.}, learning with class-level labels and learning with pair-wise labels. On a variety of deep feature learning tasks, \emph{e.g.}, face recognition, person re-identification, and fine-grained image retrieval, the Circle loss achieves performance on par with the state of the art.
{\small
\bibliographystyle{ieee}
\section{Introduction}
Problems of \textit{graph partitioning} arise in various areas of computer science, engineering, and related fields, for example in high performance computing \cite{schloegel2000gph}, community detection in social networks \cite{journals/corr/abs-0905-4918}, and route planning \cite{journals/jea/BauerDSSSW10}.
The graph partitioning problem is particularly valuable for parallel computing.
In this area, graph partitioning is mostly used to partition the underlying graph model of computation and communication.
Roughly speaking, vertices in this graph represent computation units and edges denote communication.
This graph needs to be partitioned such that there are few edges between the blocks (pieces).
In particular, if we want to use $k$ processors we want to partition the graph into $k$ blocks of about equal size.
In this paper we focus on a version of the problem that constrains the maximum block size to $(1+\epsilon)$ times the average block size and tries to minimize the total cut size, i.e., the number of edges that run between blocks.
It is well known that this problem is NP-complete \cite{journals/ipl/BuiJ92} and that there is no approximation algorithm with a constant ratio factor for general graphs \cite{journals/ipl/BuiJ92}. Therefore, heuristic algorithms are mostly used in practice.
A successful heuristic for partitioning large graphs is the \emph{multilevel graph partitioning} (MGP) approach depicted in Figure~\ref{fig:mgp}
where the graph is recursively \emph{contracted} to achieve smaller graphs which should reflect the same basic structure as the input graph. After applying an \emph{initial partitioning} algorithm to the smallest graph, the contraction is undone and, at each level, a
\emph{local refinement} method is used to improve the partitioning induced by the coarser level.
The main focus of this paper is a technique which integrates an evolutionary search algorithm with our multilevel graph partitioner KaFFPa and its scalable parallelization.
We present novel mutation and combine operators which, in contrast to previous methods that use a graph partitioner \cite{soper2004combined, delling2010graph}, do not need random perturbations of edge weights.
We show in Section~\ref{s:experiments} that the usage of edge weight perturbations decreases the overall quality of the underlying graph partitioner.
The new combine operators enable us to combine individuals of different kinds (see Section~\ref{s:evolutionarycomponents} for more details).
Due to the parallelization, our system is able to compute partitions of quality comparable to or better than previous entries in Walshaw's well-known partitioning benchmark \textit{within a few minutes} for graphs of moderate size.
Previous methods of Soper et al. \cite{soper2004combined} required runtimes of up to one week for graphs of that size.
We therefore believe that in contrast to previous methods, our method is very valuable in the area of high performance computing.
The paper is organized as follows.
We begin in Section~\ref{s:preliminaries} by introducing basic concepts.
After briefly presenting related work in Section~\ref{s:related}, we continue by describing the main evolutionary components in Section~\ref{s:evolutionarycomponents} and their
\begin{wrapfigure}{r}{7cm}
\begin{center}
\vspace*{-0.4cm}
\includegraphics[width=0.4\textwidth]{pics/MGP}
\end{center}
\label{fig:mgp}
\vspace*{-.75cm}
\caption{Multilevel graph partitioning.}
\end{wrapfigure}
parallelization in Section~\ref{s:parallelization}.
A summary of extensive experiments done to tune the algorithm and evaluate its performance is presented in Section~\ref{s:experiments}.
A brief outline of the techniques used in the multilevel graph partitioner KaFFPa is provided in Appendix~\ref{s:kaffpa}.
We have implemented these techniques in the graph partitioner KaFFPaE (Karlsruhe Fast Flow Partitioner Evolutionary) which is written in C++.
Experiments reported in Section~\ref{s:experiments} indicate that KaFFPaE is able to compute partitions of very high quality and scales well to large networks and machines.
\section{Preliminaries}\label{s:preliminaries}
\subsection{Basic concepts}
Consider an undirected graph $G=(V,E,c,\omega)$
with edge weights $\omega: E \to \ensuremath{\mathbb{R}}_{>0}$, node weights
$c: V \to \ensuremath{\mathbb{R}}_{\geq 0}$, $n = |V|$, and $m = |E|$.
We extend $c$ and $\omega$ to sets, i.e.,
$c(V')\Is \sum_{v\in V'}c(v)$ and $\omega(E')\Is \sum_{e\in E'}\omega(e)$.
$\Gamma(v)\Is \setGilt{u}{\set{v,u}\in E}$ denotes the neighbors of $v$.
We are looking for \emph{blocks} of nodes $V_1$,\ldots,$V_k$
that partition $V$, i.e., $V_1\cup\cdots\cup V_k=V$ and $V_i\cap V_j=\emptyset$
for $i\neq j$. The \emph{balancing constraint} demands that
$\forall i\in \{1..k\}: c(V_i)\leq L_{\max}\Is (1+\epsilon)c(V)/k+\max_{v\in V} c(v)$ for
some parameter $\epsilon$.
The last term in this equation arises because each node is atomic and
therefore a deviation of the heaviest node has to be allowed.
The objective is to minimize the total \emph{cut} $\sum_{i<j}w(E_{ij})$ where
$E_{ij}\Is\setGilt{\set{u,v}\in E}{u\in V_i,v\in V_j}$.
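As a small illustration of these definitions (the function names and the representation of a partition as a node-to-block map are ours, not part of any implementation), the cut objective and the balancing constraint can be checked as follows, assuming unit edge weights:

```python
def edge_cut(edges, block):
    """Number of edges running between different blocks (unit weights)."""
    return sum(1 for u, v in edges if block[u] != block[v])

def is_balanced(node_weight, block, k, eps):
    """Check c(V_i) <= L_max = (1+eps)*c(V)/k + max node weight for all i."""
    total = sum(node_weight.values())
    lmax = (1 + eps) * total / k + max(node_weight.values())
    load = {}
    for v, b in block.items():
        load[b] = load.get(b, 0) + node_weight[v]
    return all(w <= lmax for w in load.values())
```

For a 4-cycle bisected into two adjacent pairs, `edge_cut` reports a cut of 2 and the bisection satisfies the constraint even for $\epsilon = 0$, thanks to the additive term for the heaviest node.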
A clustering is also a partition of the nodes, however $k$ is usually not given in advance and the balance constraint is removed.
A vertex $v \in V_i$ that has a neighbor $w \in V_j, i\neq j$, is a boundary vertex.
An abstract view of the partitioned graph is the so called \emph{quotient graph}, where vertices represent blocks and edges are induced by connectivity between blocks.
Given two clusterings $\mathcal{C}_1$ and $\mathcal{C}_2$ the \emph{overlay clustering} is the clustering where each block corresponds to a connected component of the graph $G_\mathcal{E} = (V,E\backslash \mathcal{E})$ where $\mathcal{E}$ is the union of the cut edges of $\mathcal{C}_1$ and $\mathcal{C}_2$, i.e. all edges that run between blocks in either $\mathcal{C}_1$ or $\mathcal{C}_2$.
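The overlay clustering can be sketched directly from this definition (illustrative code; it assumes nodes are hashable and each clustering is given as a node-to-block map):

```python
from collections import defaultdict

def overlay_clustering(nodes, edges, c1, c2):
    """Blocks of the overlay clustering = connected components of G after
    removing every edge cut by either clustering c1 or c2."""
    kept = defaultdict(list)
    for u, v in edges:
        if c1[u] == c1[v] and c2[u] == c2[v]:   # not cut in either clustering
            kept[u].append(v)
            kept[v].append(u)
    label, comp = {}, 0
    for s in nodes:                              # label components by DFS
        if s in label:
            continue
        label[s] = comp
        stack = [s]
        while stack:
            u = stack.pop()
            for w in kept[u]:
                if w not in label:
                    label[w] = comp
                    stack.append(w)
        comp += 1
    return label
```

For example, on a path $0$-$1$-$2$-$3$ where $\mathcal{C}_1$ cuts $\{1,2\}$ and $\mathcal{C}_2$ cuts $\{0,1\}$, the overlay clustering has the three blocks $\{0\}$, $\{1\}$, $\{2,3\}$.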
By default, our initial inputs will have unit edge and node weights.
However, even those will be translated into weighted problems in the course of the algorithm.
A matching $M\subseteq E$ is a set of edges that do not share any common nodes, i.e., the graph $(V,M)$ has maximum degree one. \emph{Contracting} an edge $\set{u,v}$ means to replace the nodes $u$ and $v$ by a new node $x$ connected
to the former neighbors of $u$ and $v$.
We set $c(x)=c(u)+c(v)$ so the weight of a node at each level is the number of nodes it is representing in the original graph. If replacing edges of the form $\set{u,w}$,$\set{v,w}$ would generate two parallel edges $\set{x,w}$, we insert a single edge with
$\omega(\set{x,w})=\omega(\set{u,w})+\omega(\set{v,w})$.
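The contraction rule above can be sketched as follows (a simplified, illustrative version; the actual partitioner is implemented in C++ with different data structures, and the fresh-node-id scheme here assumes integer node ids):

```python
def contract_edge(adj, c, u, v):
    """Contract edge {u,v}: replace u and v by a new node x, merge parallel
    edges by summing their weights, and set c(x) = c(u) + c(v).
    adj maps node -> {neighbor: edge weight}; c maps node -> node weight."""
    x = max(adj) + 1                       # fresh node id (assumes int ids)
    merged = {}
    for old in (u, v):
        for w, wt in adj[old].items():
            if w in (u, v):
                continue                   # drop the contracted edge itself
            merged[w] = merged.get(w, 0) + wt   # sum parallel edge weights
    for w, wt in merged.items():
        adj[w].pop(u, None)
        adj[w].pop(v, None)
        adj[w][x] = wt
    del adj[u], adj[v]
    adj[x] = merged
    c[x] = c.pop(u) + c.pop(v)             # x represents both original nodes
    return x
```

Contracting edge $\{0,1\}$ in a triangle with $\omega(\{0,2\})=2$ and $\omega(\{1,2\})=3$ yields a single edge of weight $5$ to node $2$, as the rule prescribes.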
\emph{Uncontracting} an edge $e$ undoes its contraction.
In order to avoid tedious notation, $G$ will denote the current state of the graph
before and after a (un)contraction unless we explicitly want to refer to
different states of the graph.
The \textit{multilevel approach} to graph partitioning consists of three main phases.
In the \emph{contraction} (coarsening) phase, we iteratively identify matchings $M\subseteq E$ and contract the edges in $M$.
Contraction should quickly reduce the size of the input and each computed level should reflect the global structure of the input network.
Contraction is stopped when the graph is small enough to be partitioned directly by some other, more expensive algorithm.
In the \emph{refinement} (or uncoarsening) phase, the matchings are iteratively uncontracted.
After uncontracting a matching, a refinement algorithm moves nodes between blocks in order to improve the cut size or balance.
KaFFPa, which we use as a base case partitioner, extended the concept of \emph{iterated multilevel algorithms} which was introduced by \cite{walshaw2004multilevel}.
The main idea is to iterate the coarsening and uncoarsening phase.
Once the graph is partitioned, edges that are between two blocks are not contracted.
An \emph{F-cycle} works as follows: on \emph{each} level we perform at most \emph{two recursive calls} using different random seeds during contraction and local search.
A second recursive call is only made the second time that the algorithm reaches a particular level.
This ensures nondecreasing quality of the partition since our refinement algorithms guarantee no worsening and break ties randomly. These so called \textit{global search strategies} are more effective than plain restarts of the algorithm.
\emph{Extending this idea} will yield the new combine and mutation operators described in Section~\ref{s:evolutionarycomponents}.
\noindent Local search algorithms find good solutions in a very short amount of time but often get stuck in local optima. In contrast to local search algorithms, genetic/evolutionary algorithms are good at searching the problem space globally.
However, genetic algorithms lack the ability to fine-tune a solution, so that local search algorithms can help to improve the performance of a genetic algorithm.
The combination of an evolutionary algorithm with a local search algorithm is called \textit{hybrid} or \textit{memetic} evolutionary algorithm \cite{conf/gecco/KimHKM11}.
\section{Related Work}\label{s:related}
There has been a huge amount of research on graph partitioning so that we refer the reader to \cite{fjallstrom1998agp,Walshaw07} for more material on multilevel graph partitioning and to \cite{conf/gecco/KimHKM11} for more material on genetic approaches for graph partitioning.
All general purpose methods that are able to obtain good partitions for large real world graphs are based on the multilevel principle outlined in Section~\ref{s:preliminaries}.
Well-known software packages based on this approach include Jostle~\cite{Walshaw07}, Metis \cite{karypis1999pmk}, and Scotch \cite{Scotch}.
KaFFPa \cite{kappa} is a MGP algorithm using local improvement algorithms that are based on flows and more localized FM searches.
It obtained the best results for many graphs in \cite{soper2004combined}.
Since we use it as a base case partitioner it is described in more detail in Appendix \ref{s:kaffpa}.
KaSPar \cite{kaspar} is a graph partitioner based on the central idea to (un)contract only a single edge between two levels.
KaPPa \cite{kappa} is a ``classical'' matching-based MGP algorithm designed for scalable parallel execution.
Soper et al. \cite{soper2004combined} provided the first algorithm that combined an evolutionary search algorithm with a multilevel graph partitioner. Here crossover and mutation operators have been used to compute edge biases, which yield hints for the underlying multilevel graph partitioner.
Benlic et al. \cite{conf/ieeeconftoolsartintell/benlichao2010} provided a multilevel memetic algorithm for balanced graph partitioning. This approach is able to compute many entries in Walshaw's Benchmark Archive \cite{soper2004combined} for the case $\epsilon=0$. PROBE \cite{journals/tc/ChardaireBM07} is a meta-heuristic which can be viewed as a genetic algorithm without selection. It outperforms other metaheuristics, but it is restricted to the case $k=2$ and $\epsilon=0$.
Very recently an algorithm called PUNCH \cite{delling2010graph} has been introduced.
This approach is not based on the multilevel principle.
However, it creates a coarse version of the graph based on the notion of natural cuts.
Natural cuts are relatively sparse cuts close to denser areas.
They are discovered by finding minimum cuts between carefully chosen regions of the graph.
They also introduced an evolutionary algorithm similar to that of Soper et al. \cite{soper2004combined}, i.e. one using a combine operator that computes edge biases yielding hints for the underlying graph partitioner.
Experiments indicate that the algorithm computes very good partitions for road networks.
For instances that, unlike road networks, lack such a natural structure, natural cuts are not very helpful.
\section{Evolutionary Components} \label{s:evolutionarycomponents}
The general idea behind evolutionary algorithms (EA) is to use mechanisms which are highly inspired by biological evolution such as selection, mutation, recombination and survival of the fittest.
An EA starts with a population of individuals (in our case partitions of the graph) and evolves the population into different populations over several rounds.
In each round, the EA uses a selection rule based on the fitness of the individuals (in our case the edge cut) of the population to select good individuals and combine them to obtain improved offspring \cite{goldbergGA89}.
Note that we can use the cut as a fitness function since our partitioner almost always generates partitions that are within the given balance constraint, i.e. there is no need to use a penalty function or something similar to ensure that the final partitions generated by our algorithm are feasible.
When an offspring is generated an eviction rule is used to select a member of the population and replace it with the new offspring.
In general one has to take both the fitness of an individual and its distance to other individuals in the population into consideration \cite{baeckEvoAlgPHD96}.
Our algorithm generates only one offspring per generation. Such an evolutionary algorithm is called \textit{steady-state} \cite{dejongEvoComp2006}.
A typical structure of an evolutionary algorithm is depicted in Algorithm~\ref{alg:generalsteadystateEA}.
For an evolutionary algorithm it is of major importance to keep the diversity in the population high \cite{baeckEvoAlgPHD96}, i.e. the individuals should not become too similar, in order to avoid a premature convergence of the algorithm.
In other words, to avoid getting stuck in local optima a procedure is needed that randomly perturbs the individuals.
In classical evolutionary algorithms, this is done using a mutation operator.
It is also important to have operators that introduce unexplored search space to the population.
Through a new kind of crossover and mutation operators, introduced in Section~\ref{s:combineoperators}, we introduce more elaborate diversification strategies which allow us to explore the search space more effectively.
Interestingly, Inayoshi et al. \cite{conf/ppsn/InayoshiM94} noticed that good local solutions of the graph partitioning problem tend to be close to one another.
Boese et al. \cite{boese1994new} showed that the quality of the local optima overall decreases as the distance from the global optimum increases.
We will see in the following that our combine operators can exchange good parts of solutions quite effectively especially if they have a small distance.
\begin{center}
\small
\begin{algorithm}[h!]
\begin{algorithmic}
\STATE \textbf{procedure} \textit{steady-state-EA}
\STATE \quad create initial population $P$
\STATE \quad \textbf{while} stopping criterion not fulfilled
\STATE \quad \quad \textit{select} parents $p_1, p_2$ from $P$
\STATE \quad \quad \textit{combine} $p_1$ with $p_2$ to create offspring $o$
\STATE \quad \quad \textit{mutate} offspring $o$
\STATE \quad \quad \textit{evict} individual in population using $o$
\STATE \quad \textbf{return} the fittest individual that occurred
\end{algorithmic}
\caption{A classic general steady-state evolutionary algorithm.}
\label{alg:generalsteadystateEA}
\end{algorithm}
\end{center}
\subsection{Combine Operators} \label{s:combineoperators}
We now describe the general combine operator framework. This is followed by three instantiations of this framework.
In contrast to previous methods that use a multilevel framework our combine operators do not need perturbations of edge weights since we integrate the operators into our partitioner and do not use it as a complete black box.
Furthermore all of our combine operators assure that the offspring has a partition quality \textit{at least as good as the best of both parents}.
Roughly speaking, the combine operator framework combines an individual/partition $\mathcal{P} = V^\mathcal{P}_1, ..., V^\mathcal{P}_k$ (which has to fulfill a balance constraint) with a clustering $\mathcal{C} = V^\mathcal{C}_1, ..., V^\mathcal{C}_{k'}$.
Note that
\begin{wrapfigure}{r}{6.1cm}
\begin{center}
\vspace*{-0.5cm}
\includegraphics[width=3.5cm]{pics/general_crossover.pdf}
\end{center}
\caption{On the top a graph $G$ with two partitions, the dark and the light line, are shown. Cut edges are not eligible for the matching algorithm. Contraction is done until no matchable edge is left. The best of the two given partitions is used as initial partition.}
\vspace*{-0.5cm}
\label{fig:generalcrossover}
\end{wrapfigure}
the clustering does not necessarily have to fulfill a balance constraint and $k'$ is not necessarily given in advance.
All instantiations of this framework use a different kind of clustering or partition.
The partition and the clustering are both used as input for our multi-level graph partitioner KaFFPa in the following sense.
Let $\mathcal{E}$ be the set of edges that are cut edges, i.e. edges that run between two blocks, in either $\mathcal{P}$ \textit{or} $\mathcal{C}$.
All edges in $\mathcal{E}$ are blocked during the coarsening phase, i.e. they \textit{are not contracted} during the coarsening phase.
In other words these edges are not eligible for the matching algorithm used during the coarsening phase and therefore are not part of any matching computed.
An illustration of this can be found in Figure~\ref{fig:generalcrossover}.
The stopping criterion for the multi-level partitioner is modified such that it stops when no contractable edge is left.
Note that the coarsest graph is now exactly the same as the quotient graph $\mathcal{Q'}$ of the overlay clustering of $\mathcal{P}$ and $\mathcal{C}$ of $G$ (see Figure~\ref{fig:crossover}).
Hence vertices of the coarsest graph correspond to the connected components of $G_\mathcal{E} = (V, E\backslash \mathcal{E})$ and the weight of the edges between vertices corresponds to the sum of the edge weights running between those connected components in $G$.
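The construction of this coarsest graph and its initial partition can be sketched as follows (illustrative only; the function `combine_coarsest`, its representation of $\mathcal{P}$ and $\mathcal{C}$ as node-to-block maps, and unit edge weights are our assumptions):

```python
from collections import defaultdict

def combine_coarsest(nodes, edges, part_p, clus_c):
    """Coarsest level of the combine operator: vertices are the connected
    components of G minus all edges cut in P or C; edges between components
    carry the summed (here: unit) weight; each component inherits P's block,
    which is well defined since no cut edge of P was contracted."""
    keep = defaultdict(list)
    for u, v in edges:
        # block edges cut in either parent: only edges uncut in both remain
        if part_p[u] == part_p[v] and clus_c[u] == clus_c[v]:
            keep[u].append(v)
            keep[v].append(u)
    comp = {}
    for s in nodes:                      # label connected components (DFS)
        if s in comp:
            continue
        comp[s] = s
        stack = [s]
        while stack:
            u = stack.pop()
            for w in keep[u]:
                if w not in comp:
                    comp[w] = s
                    stack.append(w)
    qweight = defaultdict(int)           # quotient-graph edge weights
    for u, v in edges:
        if comp[u] != comp[v]:
            qweight[frozenset((comp[u], comp[v]))] += 1
    init = {comp[v]: part_p[v] for v in nodes}   # initial partition from P
    return comp, dict(qweight), init
```

On the path example from above, the coarsest graph has three vertices and two edges, and the initial partition simply replays the better parent on the components.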
As soon as the coarsening phase is stopped, we apply the partition $\mathcal{P}$ to the coarsest graph and use this as
initial partitioning.
This is possible since we did not contract any cut edge of $\mathcal{P}$.
Note that due to the specialized coarsening phase and this specialized initial partitioning we obtain a high quality initial solution on a very coarse graph which is usually not discovered by conventional partitioning algorithms.
Since our refinement algorithms guarantee no worsening of the input partition and use random tie breaking we can assure nondecreasing partition quality.
Note that the refinement algorithms can effectively exchange good parts of the solution on the coarse levels by moving only a few vertices.
Figure~\ref{fig:crossover} gives an example.
Also note that this combine operator can be extended to be a multi-point combine operator, i.e. the operator would use $p$ instead of two parents.
However, during the course of the algorithm a sequence of two-point combine steps is executed which somehow ``emulates'' a multi-point combine step.
Therefore, we restrict ourselves to the case $p=2$.
When the offspring is generated we have to decide which solution should be evicted from the current population.
We evict the solution that is \textit{most similar} to the offspring among those individuals in the population that have a cut worse than or equal to that of the offspring itself.
The difference of two individuals is defined as the size of the symmetric difference between their sets of cut edges.
This ensures some diversity in the population and hence makes the evolutionary algorithm more effective.
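A sketch of this eviction rule under the stated distance measure (illustrative; partitions are assumed to be node-to-block maps, and using the size of the cut-edge set as the cut value assumes unit edge weights):

```python
def cut_edge_set(edges, block):
    """Set of cut edges of a partition, as unordered pairs."""
    return {frozenset(e) for e in edges if block[e[0]] != block[e[1]]}

def pick_eviction(population, offspring, edges):
    """Among individuals whose cut is worse than or equal to the offspring's,
    evict the one with the smallest symmetric difference of cut-edge sets."""
    off = cut_edge_set(edges, offspring)
    worse = [p for p in population if len(cut_edge_set(edges, p)) >= len(off)]
    return min(worse, key=lambda p: len(cut_edge_set(edges, p) ^ off))
```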
\subsubsection{Classical Combine using Tournament Selection}
This instantiation of the combine framework corresponds to a classical evolutionary combine operator $C_1$.
That means it takes two individuals $P_1, P_2$ of the population and performs the combine step described above.
In this case $\mathcal{P}$ corresponds to the partition having the smaller cut and $\mathcal{C}$ corresponds to the partition having the larger cut.
Random tie breaking is used if both parents have the same cut.
The selection process is based on the tournament selection rule \cite{Miller95geneticalgorithms}, i.e. $P_1$ is the fittest out of two random individuals $R_1, R_2$ from the population.
The same is done to select $P_2$.
Note that in contrast to previous methods the generated offspring will have a cut smaller or equal to the cut of $\mathcal{P}$.
Due to the fact that our multi-level algorithms are randomized, a combine operation performed twice using the same parents can yield different offspring.
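Tournament selection as used here can be sketched as follows (illustrative; the draws are with replacement, which is our simplifying assumption):

```python
import random

def tournament_select(population, cut, rng=random):
    """Binary tournament: of two individuals drawn at random, the one
    with the smaller cut wins (ties broken in favor of the first draw)."""
    r1, r2 = rng.choice(population), rng.choice(population)
    return r1 if cut(r1) <= cut(r2) else r2
```

Fitter individuals are selected more often, but every individual retains a nonzero chance, which preserves diversity.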
\subsubsection{Cross Combine / (Transduction)}
In this instantiation of the combine framework $C_2$, the clustering $\mathcal{C}$ corresponds to a partition of $G$.
But instead of choosing an individual from the population we create a new individual in the following way.
We choose $k'$ uniformly at random in $[k/4, 4k]$ and $\epsilon'$ uniformly at random in $[\epsilon, 4\epsilon]$.
We then use KaFFPa to create a $k'$-partition of $G$ fulfilling the balance constraint $\max c(V_i) \leq (1+\epsilon')c(V)/k'$.
In general larger imbalances reduce the cut of a partition which then yields good clusterings for our crossover.
To the best of our knowledge there has been no genetic algorithm that performs combine operations combining individuals from different search spaces.
\begin{figure}[t]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=4cm]{fragmaster/easyexchange.pdf}& \quad \quad \includegraphics[width=4cm]{fragmaster/easyexchange2.pdf}
\end{tabular}
\end{center}
\caption{A graph $G$ and two bipartitions; the dotted and the dashed line (left). Curved lines represent a large cut. The four vertices correspond to the coarsest graph in the multilevel procedure. Local search algorithms can effectively exchange $v_2$ or $v_4$ to obtain the better partition depicted on the right hand side (dashed line). }
\label{fig:crossover}
\end{figure}
\subsubsection{Natural Cuts} \label{s:combinenaturalcut}
Delling et al. \cite{delling2010graph} introduced the notion of \textit{natural cuts} as a preprocessing technique for the partitioning of road networks.
The preprocessing technique is able to find relatively sparse cuts close to denser areas.
We use the computation of natural cuts to provide another combine operator, i.e. combining a $k$-partition with a clustering generated by the computation of natural cuts.
We closely follow their description:
The computation of natural cuts works in rounds.
Each round picks a center vertex $v$ and grows a breadth-first search (BFS) tree $T$.
The BFS is stopped as soon as the weight of the tree, i.e. the sum of the vertex weights of the tree, reaches \begin{wrapfigure}{r}{.3\textwidth}
\begin{center}
\includegraphics[width=3cm]{fragmaster/naturalcutexplained2.pdf} \\ \includegraphics[width=5cm]{pics/naturalcutexplained_all.pdf}
\end{center}
\caption{On the top we see the computation of a natural cut. A BFS Tree which starts from $v$ is grown. The gray area is the core. The dashed line is the natural cut. It is the minimum cut between the contracted versions of the core and the ring (shown as the solid line). During the computation several natural cuts are detected in the input graph (bottom).}
\vspace*{-0.5cm}
\label{fig:naturalcutsexplained}
\end{wrapfigure}
$\alpha U$, for some parameters $\alpha$ and $U$. The set of the neighbors of $T$ in $V \backslash T$ is called the \textit{ring} of $v$.
The \textit{core} of $v$ is the union of all vertices added to $T$ before its size reached $\alpha U / f$ where $f > 1$ is another parameter.
The core is then temporarily contracted to a single vertex $s$ and the ring into a single vertex $t$ to compute the minimum $s$-$t$-cut between them using the given edge weights as capacities.
To assure that every vertex eventually belongs to at least one core, and therefore is inside at least one cut, the vertices $v$ are picked uniformly at random among all vertices that have not yet been part of any core in any round.
The process is stopped when there are no such vertices left.
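The growth of the tree, core, and ring in one round can be sketched as follows (the minimum $s$-$t$-cut computation between the contracted core and ring is omitted; the exact tie handling at the thresholds is our assumption):

```python
from collections import deque

def grow_core_and_ring(adj, node_weight, v, alpha, U, f):
    """Grow a BFS tree T from v until its weight reaches alpha*U.
    The core is what was added while the weight was still <= alpha*U/f;
    the ring is the set of neighbors of T outside T."""
    tree, core = [], []
    seen = {v}
    queue = deque([v])
    weight = 0.0
    while queue and weight < alpha * U:
        u = queue.popleft()
        weight += node_weight[u]
        tree.append(u)
        if weight <= alpha * U / f:
            core.append(u)
        for w in adj[u]:
            if w not in seen:
                seen.add(w)
                queue.append(w)
    tset = set(tree)
    ring = {w for u in tree for w in adj[u]} - tset
    return set(core), tset, ring
```

The natural cut is then the minimum cut between the contracted core and the contracted ring, computed with the given edge weights as capacities.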
In the original work \cite{delling2010graph} each connected component of the graph $G_C = (V, E \backslash C)$, where $C$ is the union of all edges cut by the process above, is contracted to a single vertex.
Since we do not use natural cuts as a preprocessing technique here, we do not contract these components.
Instead we build a clustering $\mathcal{C}$ of $G$ such that each connected component of $G_C$ is a block.
This technique yields the third instantiation of the combine framework $C_3$ which is divided into two stages, i.e. the clustering used for this combine step is dependent on the stage we are currently in.
In both stages the partition $\mathcal{P}$ used for the combine step is selected from the population using tournament selection.
During the first stage we choose $f$ uniformly at random in $[5,20]$, $\alpha$ uniformly at random in $[0.75, 1.25]$ and we set $U = |V|/3k$.
Using these parameters we obtain a clustering $\mathcal{C}$ of the graph which is then used in the combine framework described above.
This kind of clustering is used until we reach an upper bound of ten calls to this combine step.
When the upper bound is reached we switch to the second stage.
In this stage we use the clusterings computed during the first stage, i.e. we extract elementary natural cuts and use them to quickly compute new clusterings.
An \textit{elementary natural cut} (ENC) consists of a set of cut edges and the set of nodes in its core.
Moreover, for each node $v$ in the graph, we store the set of ENCs $N(v)$ that contain $v$ in their core.
With these data structures it is easy to pick a new clustering $\mathcal{C}$ (see Algorithm \ref{alg:computeNCclustering}) which is then used in the combine framework described above.
\begin{algorithm}[h!]
\small
\begin{algorithmic}[1]
\STATE unmark all nodes in $V$
\STATE \textbf{for each} $v \in V$ in random order \textbf{do}
\STATE \quad \textbf{if} $v$ is not marked \textbf{then}
\STATE \quad \quad pick a random ENC $C$ in $N(v)$
\STATE \quad \quad output $C$
\STATE \quad \quad mark all nodes in $C$'s core
\end{algorithmic}
\caption{computeNaturalCutClustering (second stage)}
\label{alg:computeNCclustering}
\end{algorithm}
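A direct transcription of Algorithm~\ref{alg:computeNCclustering} (the representation of an ENC as a (cut edges, core nodes) pair is our own encoding):

```python
import random

def natural_cut_clustering(nodes, encs, rng=random):
    """Second-stage clustering from elementary natural cuts (ENCs).
    encs[v] lists the ENCs containing v in their core; each ENC is a
    (cut_edges, core_nodes) pair."""
    marked = set()
    picked = []
    order = list(nodes)
    rng.shuffle(order)                 # visit nodes in random order
    for v in order:
        if v in marked:
            continue
        enc = rng.choice(encs[v])      # pick a random ENC covering v
        picked.append(enc)
        marked.update(enc[1])          # mark all nodes in the ENC's core
    return picked
```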
\subsection{Mutation Operators}
We define two mutation operators, an ordinary and a modified F-cycle.
Both mutation operators use a random individual from the current population.
The main idea is to iterate coarsening and refinement several times using different seeds for random tie breaking.
The first mutation operator $M_1$ can assure that the quality of the input partition does not decrease.
It is basically an ordinary F-cycle which is an algorithm used in KaFFPa.
Edges between blocks are not contracted.
The given partition is then used as initial partition of the coarsest graph.
In contrast to KaFFPa, we can now use the given partition as input to the multilevel scheme from the very beginning.
This ensures nondecreasing quality since our refinement algorithms guarantee no worsening.
The second mutation operator $M_2$ works quite similar with the small difference that the input partition is not used as initial partition of the coarsest graph.
That means we obtain very good coarse graphs but we can not assure that the final individual has a higher quality than the input individual.
In both cases the resulting offspring is inserted into the population using the eviction strategy described in Section~\ref{s:combineoperators}.
\section{Putting Things Together and Parallelization}
\label{s:parallelization}
We now explain the parallelization and describe how everything is put together. Each processing element (PE) basically performs the same operations using different random seeds (see Algorithm~\ref{alg:localview}).
First we estimate the population size $S$: each PE performs a partitioning step and measures the time $\overline{t}$ spent for partitioning.
We then choose $S$ such that the time for creating $S$ partitions is approximately $t_{\text{total}}/f$ where the fraction $f$ is a tuning parameter and $t_{\text{total}}$ is the total running time that the algorithm is given to produce a partition of the graph.
Each PE then builds its own population, i.e. KaFFPa is called several times to create $S$ individuals/partitions.
Afterwards the algorithm proceeds in rounds as long as time is left.
With corresponding probabilities, mutation or combine operations are performed and the new offspring is inserted into the population.
We choose a parallelization/communication protocol that is quite similar to \textit{randomized rumor spreading} \cite{conf/icalp/DoerrF11}.
Let $p$ denote the number of PEs used. A communication step is organized in rounds.
In each round, a PE chooses a communication partner and sends it the currently best partition $P$ of the local population.
The communication partner is selected uniformly at random among those PEs to which $P$ has not already been sent.
Afterwards, a PE checks if there are incoming individuals and, if so, inserts them into the local population using the eviction strategy described above.
If $P$ is improved, all PEs become eligible as communication partners again.
This is repeated $\log p$ times.
Note that the algorithm is implemented \textit{completely asynchronously}, i.e. there is no need for a global synchronisation.
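One round of the partner selection can be sketched as follows (illustrative Python with hypothetical names; the actual implementation is asynchronous C++ code on top of MPI):

```python
import random

def choose_partner(my_rank, num_pes, already_received, rng=random):
    # Rumor-spreading step: pick a partner uniformly at random among the
    # PEs that have not yet received the current best partition P.
    candidates = [pe for pe in range(num_pes)
                  if pe != my_rank and pe not in already_received]
    if not candidates:
        return None  # every PE already has P; wait until P improves
    return rng.choice(candidates)
```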
The process of creating individuals is parallelized as follows:
Each PE makes $s' = S/p$ calls to KaFFPa using different seeds to create $s'$ individuals.
Afterwards we do the following $S-s'$ times:
The root PE computes a random cyclic permutation of all PEs and broadcasts it to all PEs.
Each PE then sends a random individual to its successor in the cyclic permutation and receives an individual from
its predecessor in the cyclic permutation.
We call this particular part of the algorithm \textit{quick start}.
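The cyclic exchange of the quick start can be sketched as follows (illustrative Python; in the implementation the root PE broadcasts the permutation via MPI):

```python
import random

def cyclic_exchange_schedule(num_pes, rng=random):
    # Draw a random cyclic permutation of all PEs; PE i sends a random
    # individual to succ[i] and receives one from its predecessor.
    order = list(range(num_pes))
    rng.shuffle(order)
    return {order[i]: order[(i + 1) % num_pes] for i in range(num_pes)}
```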
A tuning parameter $c$ controls the ratio $\frac{c}{10}:\frac{10-c}{10}$ of mutation to crossover operations.
As we will see in Section~\ref{s:experiments} the ratio $1:9$ is a good choice.
After some experiments we fixed the ratio of the mutation operators $M_1:M_2$ to $4:1$ and the ratio of the combine operators $C_1:C_2:C_3$ to $3:1:1$.
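Operator selection with these ratios can be sketched as follows (illustrative Python; the function name is ours):

```python
import random

def pick_operation(c=1, rng=random):
    # Flip the biased coin: mutation with probability c/10, combine otherwise.
    # The fixed sub-operator ratios are M1:M2 = 4:1 and C1:C2:C3 = 3:1:1.
    if rng.random() < c / 10:
        return rng.choices(["M1", "M2"], weights=[4, 1])[0]
    return rng.choices(["C1", "C2", "C3"], weights=[3, 1, 1])[0]
```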
Note that the communication step in the last line of the algorithm could also be performed only every $x$ iterations (where $x$ is a tuning parameter) to save communication time.
Since the communication network of our test system is very fast (see Section~\ref{s:experiments}), we perform the communication step in each iteration.
\begin{center}
\vspace*{-0.3cm}
\begin{algorithm}[h!]
\small
\begin{algorithmic}
\STATE \textbf{procedure} \textit{locallyEvolve}
\STATE \quad estimate population size $S$
\STATE \quad \textbf{while} time left
\STATE \quad \quad \textbf{if} elapsed time $< t_{\text{total}}/f$ \textbf{then} create individual and insert into local population
\STATE \quad \quad \textbf{else}
\STATE \quad \quad\quad flip coin $c$ with corresponding probabilities
\STATE \quad \quad\quad \textbf{if} $c$ shows head \textbf{then}
\STATE \quad \quad\quad \quad perform a mutation operation
\STATE \quad \quad\quad \textbf{else}
\STATE \quad \quad\quad \quad perform a combine operation
\STATE \quad \quad\quad insert offspring into population if possible
\STATE \quad \quad communicate according to communication protocol
\end{algorithmic}
\caption{All PEs perform basically the same operations using different random seeds.}
\label{alg:localview}
\end{algorithm}
\vspace*{-0.3cm}
\end{center}
\section{Experiments}\label{s:experiments}
\paragraph*{Implementation.}
We have implemented the algorithm described above using C++. Overall,
our program (including KaFFPa) consists of about 22\,500 lines of code.
We use two base case partitioners, KaFFPaStrong and KaFFPaEco.
KaFFPaEco is a good tradeoff between quality and speed, and KaFFPaStrong is
focused on quality.
For the following comparisons we used Scotch 5.1.9 and kMetis 5.0 (pre2).
\paragraph*{System.}
Experiments have been done on two machines. Machine A is a cluster with 200 nodes where each node is equipped with two quad-core Intel Xeon X5355 processors running at a clock speed of 2.667 GHz.
Each node has $2\times 4$ MB of level-2 cache and runs Suse Linux Enterprise 10 SP 1.
All nodes are attached to an InfiniBand 4X DDR interconnect, which is characterized by its very low latency of below 2 microseconds and a point-to-point bandwidth between two nodes of more than 1300 MB/s.
Machine B has two Intel Xeon X5550, 48GB RAM, running Ubuntu 10.04. Each CPU has 4 cores (8 cores when hyperthreading is active) running at 2.67 GHz.
Experiments in Sections \ref{sec:parametertuning}, \ref{sec:expscalability}, \ref{sec:comparisionkaffpaandother} and \ref{sec:walshawbenchmark} have been conducted on machine A, and experiments in Sections \ref{sec:combineopexperiment} and \ref{sec:exproadnetworks} have been conducted on machine B.
All programs were compiled using GCC Version 4.4.3 and optimization level~3 using OpenMPI 1.5.3.
Henceforth, a PE is one core.
\paragraph*{Instances.}
We report experiments on three suites of instances (small, medium sized and road networks) summarized in
Appendix~\ref{sec:instances}.
\Id{rggX} is a \emph{random geometric graph} with
$2^{X}$ nodes where nodes represent random points in the unit square and edges
connect nodes whose Euclidean distance is below $0.55 \sqrt{ \ln n / n }$.
This threshold was chosen in order to ensure that the graph is almost connected.
\Id{DelaunayX} is the Delaunay triangulation of $2^{X}$
random points in the unit square. Graphs \Id{uk},\Id{3elt}..\Id{fe\_body} and
\Id{t60k}..\Id{memplus} come from Walshaw's benchmark archive
\cite{walshaw2000mpm}. Graphs \Id{deu} and \Id{eur}, \Id{bel} and \Id{nld} are undirected versions of the road networks, used in \cite{DSSW09}.
\Id{luxemburg} is a road network taken from \cite{dimacschallengegraphpartandcluster}.
Our default values for the number of blocks $k$ are 2, 4, 8, 16, 32 and 64 since they are the default values in \cite{walshaw2000mpm}, and in some cases we additionally use 128 and 256.
Our default value for the allowed imbalance is 3\% since this is one
of the values used in \cite{walshaw2000mpm} and the default value in Metis.
Our default number of PEs is 16.
\paragraph*{Methodology.} We mostly present two kinds of data: average values and plots that show the evolution of solution quality (\textit{convergence plots}).
In both cases we perform multiple repetitions. The number of repetitions is dependent on the test that we perform.
Average values over multiple instances are obtained as follows: for each instance (graph, $k$) we first compute the average edge cut value over all repetitions, and then report the geometric mean of these per-instance averages.
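For concreteness, the final aggregation is an ordinary geometric mean over the per-instance averages (illustrative Python):

```python
import math

def geometric_mean(values):
    # Geometric mean of the per-instance average edge cut values.
    return math.exp(sum(math.log(v) for v in values) / len(values))
```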
We now explain how we compute the convergence plots.
We start by explaining how we compute them for a single instance $I$:
whenever a PE creates a partition it reports a pair ($t$, cut), where the timestamp $t$ is the currently elapsed time on the particular PE and cut refers to the cut of the partition that has been created.
When performing multiple repetitions we report average values ($\overline{t}$, avgcut) instead.
After the completion of KaFFPaE we are left with $p$ sequences of pairs ($t$, cut), one per PE, which we now merge into one sequence.
The merged sequence is sorted by the timestamp $t$.
The resulting sequence is called $T^I$.
Since we are interested in the evolution of the solution quality, we compute another sequence $T^I_{\text{min}}$.
For each entry (in sorted order) in $T^I$ we insert the entry $(t, \min_{t'\leq t} \text{cut}(t'))$ into $T^I_\text{min}$.
Here $\min_{t'\leq t} \text{cut}(t')$ is the minimum cut that occurred until time $t$.
$N^I_{\text{min}}$ refers to the normalized sequence, i.e. each entry ($t$, cut) in $T^I_\text{min}$ is replaced by ($t_n$, cut) where $t_n = t/t_I$ and $t_I$ is the average time that KaFFPa needs to compute a partition for the instance $I$.
To obtain average values over \textit{multiple instances} we do the following: for each instance we label all entries in $N^I_{\text{min}}$, i.e. ($t_n$, cut) is replaced by ($t_n$, cut, $I$). We then merge all sequences $N^I_\text{min}$ and sort by $t_n$. The resulting sequence is called $S$.
The final sequence $S_g$ presents \textit{event based} geometric averages values.
We start by computing the geometric mean cut value $\mathcal{G}$ using the first value of all $N^I_\text{min}$ (over $I$).
To obtain $S_g$ we basically sweep through $S$: for each entry (in sorted order) $(t_n, c, I)$ in $S$ we update $\mathcal{G}$, i.e. the cut value of $I$ that took part in the computation of $\mathcal{G}$ is replaced by the new value $c$, and insert $(t_n, \mathcal{G})$ into $S_g$.
Note that $c$ can only be smaller than or equal to the old cut value of $I$.
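The construction of $T^I_\text{min}$ and of the event-based averages $S_g$ can be sketched as follows (illustrative Python; as a simplification the sketch starts averaging over an instance at its first event instead of initializing $\mathcal{G}$ with all first values up front):

```python
import math

def min_cut_sequence(events):
    # T_min: for the merged, time-sorted (t, cut) pairs of one instance,
    # record the best cut seen up to each timestamp.
    best, out = float("inf"), []
    for t, cut in sorted(events):
        best = min(best, cut)
        out.append((t, best))
    return out

def event_based_geomean(labeled_events):
    # S_g: sweep over the merged sequence of (t_n, cut, instance) entries,
    # keep the current cut value per instance, and emit the geometric
    # mean over all instances seen so far at every event.
    current, out = {}, []
    for t_n, cut, inst in sorted(labeled_events):
        current[inst] = cut
        g = math.exp(sum(math.log(v) for v in current.values()) / len(current))
        out.append((t_n, g))
    return out
```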
\subsection{Parameter Tuning}
\label{sec:parametertuning}
We now tune the fraction parameter $f$ and the ratio between mutation and crossover operations.
For the parameter tuning we choose our small testset because runtimes for a single graph partitioner call are not too large.
To save runtime we focus on $k=64$ for tuning the parameters.
For each instance we gave KaFFPaE ten minutes of time and 16 PEs to compute a partition.
During this test the quick start option is disabled.
For this test the flip coin parameter $c$ is set to one.
In Figure~\ref{fig:parametertuning} we can see that the algorithm is not too sensitive to the exact choice of this parameter.
However, larger values of $f$ speed up the convergence rate and improve the result achieved in the end.
Since $f=10$ and $f=50$ are the best parameters in the end, we choose $f=10$ as our default value.
For tuning the ratio $\frac{c}{10}:\frac{10 - c}{10}$ of mutation and crossover operations, we set $f$ to ten.
We can see that for smaller values of $c$ the algorithm is not too sensitive to the exact choice of the parameter.
However, if $c$ exceeds 8, the convergence speed slows down, which yields worse average results in the end.
We choose $c=1$ because it has a slight advantage in the end.
The parameter tuning uses KaFFPaStrong as a partitioner.
We also performed the parameter tuning using KaFFPaEco as a partitioner (see Appendix~\ref{sec:furtherparametertuning}).
\begin{figure}[t!]
\vspace*{-1cm}
\begin{center}
\includegraphics[width=0.4\textwidth]{pics/parameter_tuning_strong_fraction.pdf}
\includegraphics[width=0.4\textwidth]{pics/parameter_tuning_strong_flip_coin.pdf}
\end{center}
\vspace*{-1cm}
\caption{Conv. plots for the \textit{fraction} $f$ using $c=1$ (left) and the \textit{flip coin} $c$ using $f=10$ (right). }
\vspace*{-.5cm}
\label{fig:parametertuning}
\end{figure}
\subsection{Scalability}
\label{sec:expscalability}
In this Section we study the scalability of our algorithm. We do the following to obtain a fair comparison:
basically each configuration has the same amount of time, i.e. when doubling the number of PEs used,
we divide the time that KaFFPaE has to compute a partition per instance by two.
To be more precise, when we use one PE KaFFPaE has $t_1=15360s$ to compute a partition of an instance.
When KaFFPaE uses $p$ PEs, then it gets time $t_p=t_1/p$ to compute a partition of an instance.
For all the following tests the quick start option is enabled.
To save runtime we use our small sized testset and fix $k$ to 64.
Here we perform five repetitions per instance.
We can see in Figure~\ref{fig:scalabilityKaFFPaE} that using more processors speeds up convergence and up to $p=128$ also \textit{improves} the quality in the end (in these cases the speedups are optimal in the end).
This might be due to island effects \cite{AlbaT02}.
For $p=256$ results are worse compared to $p=1$.
This is because the algorithm is barely able to perform combine and mutation steps, due to the very small amount of time given to KaFFPaE (60 seconds).
On the largest graph of the testset (delaunay16) we need about 20 seconds to create a partition into $k=64$ blocks.
We now define pseudo speedup $S_p(t_n)$ which is a measure for speedup at a particular normalized time $t_n$ of the configuration using one PE.
Let $c_p(t_n)$ be the mean minimum cut that KaFFPaE has computed using $p$ PEs until normalized time $t_n$.
The pseudo speedup is then defined as $S_p(t_n) = c'_1(t_n)/ c'_p(t_n)$ where $c'_i(t_n) = \min_{c_i(t') \leq c_1(t_n)} t'$. If $c_p(t) > c_1(t_n)$ for all $t$, we set $S_p(t_n) = 0$ (in this case the parallel algorithm is not able to match the result computed by the sequential algorithm at normalized time $t_n$; this is only the case for $p=256$).
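The pseudo speedup can be computed from two convergence curves as follows (illustrative Python; curves are sorted lists of (normalized time, mean minimum cut) pairs, and the names are ours):

```python
def pseudo_speedup(curve_1, curve_p, t_n):
    # Target: the cut the 1-PE configuration has reached at time t_n.
    target = min(c for t, c in curve_1 if t <= t_n)

    def first_time_reaching(curve):
        # c'_i: earliest time at which the curve reaches the target cut.
        for t, c in curve:
            if c <= target:
                return t
        return None

    t_1 = first_time_reaching(curve_1)
    t_p = first_time_reaching(curve_p)
    if t_p is None:
        return 0.0  # the p-PE run never matches the sequential quality
    return t_1 / t_p
```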
We can see in Figure~\ref{fig:scalabilityKaFFPaE} that after a short amount of time we reach superlinear pseudo speedups in most cases.
\begin{figure}[h!]
\vspace*{-.5cm}
\begin{center}
\includegraphics[width=5cm]{pics/scalability_stdplot.pdf} \quad
\includegraphics[width=5cm]{pics/scalability_nope_normalized_pe1.pdf} \quad
\includegraphics[width=5cm]{pics/scalability_speedup.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{Scalability of our algorithm: (left) a normal convergence plot, (middle) mean minimum cut relative to best cut of KaFFPaE using one PE, (right) pseudo speedup $S_p(t_n)$ (larger versions can be found in Appendix~\ref{sec:largerscalabilityKaFFPaE}).}
\vspace*{-.5cm}
\label{fig:scalabilityKaFFPaE}
\end{figure}
\vspace*{-1cm}
\clearpage
\subsection{Comparison with KaFFPa and other Systems}
\label{sec:comparisionkaffpaandother}
\begin{wraptable}{r}{0.3\textwidth}
\begin{center}
\vspace*{-1cm}
\small
\begin{tabular}{r||r|r}
\hline
$k$/Algo. & Reps. & KaFFPaE \\
& Avg. & impr. \%\\
\hline
\hline
2 & \numprint{569}& 0.2\%\\
4 & \numprint{1229} & 1.0\%\\
8 &\numprint{2206}& 1.5\%\\
16 &\numprint{3568}& 2.7\%\\
32 &\numprint{5481}& 3.4\%\\
64 &\numprint{8141}& 3.3\%\\
128 &\numprint{11937}& 3.9\%\\
256 &\numprint{17262}& 3.7\%\\
\hline
\hline
overall &\numprint{3872}& 2.5\%\\
\hline
\end{tabular}
\end{center}
\vspace*{-0.5cm}
\caption{Comparison of KaFFPaE with repeated executions of KaFFPa after two hours of time on 16 PEs.}
\vspace*{-0.25cm}
\end{wraptable}
In this Section we compare ourselves with repeated executions of KaFFPa and other systems.
We switch to our middle sized testset to avoid the effect of overtuning our algorithm parameters to the instances used for calibration.
We use 16 PEs and two hours of time per instance when we use KaFFPaE.
We parallelized repeated executions of KaFFPa (embarrassingly parallel, different seeds) and also gave 16 PEs and two hours to KaFFPa.
We look at $k \in \{2,4,8,16,32,64,128,256\}$ and performed three repetitions per instance.
Figure~\ref{fig:comparision} shows convergence plots for $k \in \{32, 64, 128, 256\}$. All convergence plots can be found in Appendix~\ref{sec:comparision_all}.
As expected the improvements of KaFFPaE relative to repeated executions of KaFFPa increase with increasing $k$. The largest improvement is obtained for $k=128$.
Here KaFFPaE produces partitions that have a 3.9\% smaller cut value than plain restarts of the algorithm.
Note that using a weaker base case partitioner, e.g. KaFFPaEco, increases this value.
On the small sized testset we obtained an improvement of 5.9\% for $k=64$ compared to plain restarts of KaFFPaEco.
Tables comparing KaFFPaE with the best results out of ten repetitions of Scotch and Metis can be found in the Appendix Table~\ref{fig:allnumberscomparision}.
Overall, Scotch and Metis produce 19\% and 28\% larger (best) cuts than KaFFPaE respectively.
However, these methods are much faster than ours (Appendix Table~\ref{fig:allnumberscomparision}).
\begin{figure}
\vspace*{-1cm}
\begin{center}
\includegraphics[width=4cm]{pics/comparision_middlesize_k32.pdf}
\includegraphics[width=4cm]{pics/comparision_middlesize_k64.pdf}
\includegraphics[width=4cm]{pics/comparision_middlesize_k128.pdf}
\includegraphics[width=4cm]{pics/comparision_middlesize_k256.pdf}
\end{center}
\vspace*{-0.5cm}
\caption{Convergence plots for the comparison of KaFFPaE with repeated executions of KaFFPa.}
\label{fig:comparision}
\end{figure}
\vspace*{-0.25cm}
\subsection{Combine Operator Experiments}
\label{sec:combineopexperiment}
\begin{wraptable}{l}{0.35\textwidth}
\begin{center}
\vspace*{-1cm}
\small
\begin{tabular}{r||r|r|r|r}
\hline
Algo. & S3R & K3R & KC & SC \\
\hline
$k$ & Avg. & \multicolumn{3}{c}{improvement \%}\\
\hline
\hline
2 & \numprint{591} & \numprint{2.4} & \numprint{1.6} & \numprint{0.2} \\
4 & \numprint{1304} & \numprint{3.4} & \numprint{4.0} & \numprint{0.2} \\
8 & \numprint{2336} & \numprint{3.7} & \numprint{3.6} & \numprint{0.2} \\
16 & \numprint{3723} & \numprint{2.9} & \numprint{2.0} & \numprint{0.2} \\
32 & \numprint{5720} & \numprint{2.7} & \numprint{3.3} & \numprint{0.0} \\
64 & \numprint{8463} & \numprint{2.8} & \numprint{3.0} & \numprint{-0.6} \\
128 & \numprint{12435} & \numprint{3.6} & \numprint{4.5} & \numprint{0.0} \\
256 & \numprint{17915} & \numprint{3.4} & \numprint{4.2} & \numprint{-0.1} \\
\hline
\end{tabular}
\end{center}
\vspace*{-.5cm}
\caption{Comparison of quality of different algorithms relative to S3R.}
\label{tab:combineexperiementquality}
\vspace*{-.5cm}
\end{wraptable}
We now look into the effectiveness of our combine operator $C_1$.
We conduct the following experiment: we compare the best result of three repeated executions of KaFFPa (\textit{K3R}) against a combine step (\textit{KC}), i.e. after creating two partitions we report the result of the combine step $C_1$ combining both individuals.
The same is done using the combine operator of Soper et al. \cite{soper2004combined} (\textit{SC}), i.e. we create two individuals using perturbed edge weights as in \cite{soper2004combined} and report the cut produced by the combine step proposed there (the best out of the three individuals). We also present the best results out of three repetitions when using perturbed edge weights as in Soper et al. (\textit{S3R}).
Since our partitioner does not support edge weights of type double, we computed the perturbations and scaled them by a factor of \numprint{10000} (for S3R and SC).
We performed ten repetitions on the middle sized testset.
Results are reported in Table~\ref{tab:combineexperiementquality}.
A table presenting absolute average values and comparing the runtime of these algorithms can be found in Appendix Table~\ref{tab:combineexperiementruntime}.
We can see that for large $k$ our new combine operator yields improved partition quality in comparable or less time (KC vs. K3R).
Most importantly, we can see that edge biases decrease the solution quality (K3R vs. S3R).
This is due to the fact that edge biases make edge cuts optimal that are not close to optimal in the unbiased problem.
For example, on 2D grid graphs straight edge cuts are optimal.
Random edge biases make bent edge cuts optimal.
However, these cuts are not close to optimal cuts of the original graph partitioning problem.
Moreover, local search algorithms (Flow-based, FM-based) work better if there are a lot of equally sized cuts.
\subsection{Walshaw Benchmark}
\label{sec:walshawbenchmark}
We now apply KaFFPaE to Walshaw's benchmark archive \cite{soper2004combined} using the rules used there, i.e., running time is not an issue but we want to achieve minimal cut values for $k \in \{2,4,8,16,32,64\}$ and balance parameters $\epsilon \in \{0,0.01,0.03,0.05\}$.
We focus on $\epsilon \in \{1\%,3\%,5\%\}$ since KaFFPaE (more precisely KaFFPa) is not made for the case $\epsilon=0$.
We run KaFFPaE with a time limit of two hours using 16 PEs (two nodes of the cluster) per graph, $k$ and $\epsilon$ and report the best results obtained in the Appendix~\ref{sec:walshawbenchmarktable}.
KaFFPaE computed 300 partitions which are better than previous best partitions reported there: 91 for 1\%, 103 for 3\% and 106 for 5\%. Moreover, it reproduced \textit{equally sized} cuts in 170 of the 312 remaining cases.
When only considering the 15 largest graphs and $\epsilon \in \{0.03, 0.05\}$ we are able to reproduce or improve the current result in 224 out of 240 cases. Overall our systems (including KaPPa, KaSPar, KaFFPa, KaFFPaE) now improved or reproduced the entries in 550 out of 612 cases (for $\epsilon \in \{0.01, 0.03, 0.05\}$).
\vspace*{-.25cm}
\subsection{Comparison with PUNCH}
\label{sec:exproadnetworks}
\begin{wraptable}{r}{0.4\textwidth}
\small
\vspace*{-1cm}
\begin{center}
\begin{tabular}{r||r|r||r|r||r}
\hline
grp, $k$ & \multicolumn{5}{c}{algorithm/runtime $t$} \\
\hline
ger. & P$_{best}$ & $t_{\text{total}}$ & B$_\text{avg}$ & $t_{\text{avg}}$ & B$_\text{best}$ \\
\hline
2 & \numprint{164} & 83 & 161 & 6 & \textbf{\numprint{161}} \\
4 & \numprint{400} & 96 & 394 & 6 & \textbf{\numprint{393}} \\
8 & \numprint{711} & 102 & 694 & 9 & \textbf{\numprint{693}} \\
16 & \numprint{1144} & 83 & \numprint{1148} & 16 & \textbf{\numprint{1137}} \\
32 & \numprint{1960} & 71 & \numprint{1928} & 31 & \textbf{\numprint{1898}} \\
64 & \numprint{3165} & 83 & \numprint{3164} & 62 & \textbf{\numprint{3143}} \\
\hline
\hline
eur. & P$_{best}$ & $t_{\text{total}}$ & B$_\text{avg}$ & $t_{\text{avg}}$ & B$_\text{best}$ \\
\hline
2 & \numprint{129} & 423 & \numprint{149} & 39 & \textbf{\numprint{129}} \\
4 & \textbf{\numprint{309}} & 358 & \numprint{313} & 39 & \numprint{310} \\
8 & \textbf{\numprint{634}} & 293 & \numprint{693} & 47 & \numprint{659} \\
16 & \numprint{1293} & 252 & \numprint{1261} & 73 & \textbf{\numprint{1238}} \\
32 & \numprint{2289} & 217 & \numprint{2259} & 130 & \textbf{\numprint{2240}} \\
64 & \numprint{3828} & 241 & \numprint{3856} & 248 & \textbf{\numprint{3825}} \\
\hline
\end{tabular}
\caption{Results on road networks: best results of PUNCH (P) out of 100 repetitions and total time [m] needed to compute these results; average and best cut results of Buffoon (B) as well as average runtime [m] (including preprocessing).}
\vspace*{-0.5cm}
\label{tab:resultsonroadnetworks}
\end{center}
\end{wraptable}
In this Section we focus on finding partitions for road networks.
We implemented a specialized algorithm, Buffoon, which is similar to PUNCH \cite{delling2010graph} in the sense that it also uses natural cuts as a preprocessing technique to obtain a coarser graph on which the graph partitioning problem is solved.
For more information on natural cuts, we refer the reader to \cite{delling2010graph}.
Using our (shared memory) parallelized version of natural cut preprocessing we obtain a coarse version of the graph.
Note that our preprocessing uses slightly different parameters than PUNCH (using the notation of \cite{delling2010graph}, we use $\mathcal{C}=2, U=(1+\epsilon)\frac{n}{2k}, f=10, \alpha=1$).
Since partitions of the coarse graph correspond to partitions of the original graph, we use KaFFPaE to partition the coarse version of the graph.
After preprocessing, we gave KaFFPaE $t_{\text{eur},k} = k \times 3.75\text{ min}$ on europe and $t_{\text{ger},k} = k \times 0.9375\text{ min}$ on germany, to compute a partition.
In both cases we used all 16 cores (hyperthreading active) of machine B for preprocessing and for KaFFPaE. The experiments were repeated ten times.
A summary of the results is shown in Table~\ref{tab:resultsonroadnetworks}.
Interestingly, on germany our average values are already smaller than or equal to the best results out of 100 repetitions obtained by PUNCH.
Overall in 9 out of 12 cases we compute a best cut that is better or equal to the best cut obtained by PUNCH.
Note that for obtaining the best cut values we invest significantly more time than PUNCH.
However, their machine is about a factor two faster (12 cores running at 3.33GHz compared to 8 cores running at 2.67GHz) and our algorithm is not tuned for road networks.
A table comparing the results on road networks against KaFFPa, KaSPar, Scotch and Metis can be found in Appendix~\ref{tab:detailedroadnetworks}.
These algorithms produce 9\%, 12\%, 93\% and 288\% larger cuts on average respectively.
\vspace*{-.25cm}
\section{Conclusion and Future Work}
KaFFPaE is a distributed evolutionary algorithm to tackle the graph partitioning problem.
Due to new crossover and mutation operators as well as its scalable parallelization it is able to compute the best known partitions for many standard benchmark instances in only a \textit{few minutes}.
We therefore believe that KaFFPaE will be helpful in the area of high performance computing.
Regarding future work, we want to integrate other partitioners, provided that they support blocking edges during the coarsening phase and using the given partition as the initial solution.
It would be interesting to try other domain specific combine operators, e.g. on social networks it could be interesting to use a modularity clusterer to compute a clustering for the combine operation.
\bibliographystyle{plain}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 4,031
|
Курдиманш сир Есон () насеље је и општина у Француској у региону Париски регион, у департману Есон која припада префектури Еври.
По подацима из 2011. године у општини је живело 268 становника, а густина насељености је износила 47,69 становника/-{km²}-. Општина се простире на површини од 5,62 -{km²}-. Налази се на средњој надморској висини од 5913 метара (максималној 146 -{m}-, а минималној 58 -{m}-).
Демографија
График промене броја становника у току последњих година
Види још
Списак општина у департману Есон
Референце
Спољашње везе
База података: -{Insee}-
Courdimanche-sur-Essonne на страници Националног географског института Француске
Courdimanche-sur-Essonne на страници организације -{INSEE}-
Најближа насеља (километража, правац и координате)
Положај места Courdimanche-sur-Essonne на мапи Француске (са основним подацима о месту)
План насеља Courdimanche-sur-Essonne на мапи (-{Mapquest}-)
Департман Есон у Француској
Википројект географија/Насеља у Француској
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,253
|
Q: Python Is it possible to recreate a whole call stack? I'm interested in experimenting with python. I know I can inspect and inject local and global variables into a frame using frame.f_locals and frame.f_globals, but I am now itching to create a full call stack.
What is keeping me from just changing the stack information is the fact that python doesn't allow me to change it.
I have actually considered programmatically transforming the python module I am using, in order to simulate winding the stack. But I am aware it is a terrible solution because client code usage of if, while, with and try would easily break my code.
I've also looked at manipulating frame.f_back, to no avail. It's read-only.
>>> import sys
...
... frm = sys._getframe()
...
... frm.f_back = None
Traceback (most recent call last):
File "<pyshell#4>", line 5, in <module>
frm.f_back = None
TypeError: readonly attribute
What I'm trying to do
As an experiment, I'm trying to implement fork() across a network.
I'm aware stackless python may have what I want, but it's still impossible to change the frame.f_back attribute.
A: Have a look on Online Python Tutor (http://www.pythontutor.com/). What it does is that it captures frames during execution to create visualization of python code. So, you could use the captured frames.
A: >>> type(sys._getframe())()
TypeError: cannot create 'frame' instances
Sorry.
A: You should look at the AST module and the symtable module
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 6,872
|
{"url":"https:\/\/socratic.org\/questions\/do-double-bonds-affect-bond-angle","text":"Do double bonds affect bond angle?\n\nDec 7, 2015\n\nYes, the bond angle will be affected due to a different hybridization on the atoms.\n\nExplanation:\n\nA double bond involves a sigma bond and a pi bond. The pi bond is between pi orbitals of adjacent atoms in a molecule. Because of the acquisition of these additional orbitals, the hybridization must change.\n\nIn ${\\text{CH\"_3-\"CH}}_{3}$, the carbons have a hybridization of ${\\text{sp}}^{3}$, resulting in a tetrahedral structure.\n\nIn ${\\text{CH\"_2=\"CH}}_{2}$, the carbons have a hybridization of ${\\text{sp}}^{2}$, resulting in trigonal planar structure.\n\nTetrahedral and trigonal planar molecular structures have different bond angles.","date":"2020-07-14 07:32:08","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 4, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7533366680145264, \"perplexity\": 2881.015753779714}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-29\/segments\/1593657149205.56\/warc\/CC-MAIN-20200714051924-20200714081924-00055.warc.gz\"}"}
| null | null |
The present study investigated the influence that trait hostility and serotonin may have on individuals' mood and cardiovascular responses to stress. Sixty high and low hostile males and females participated in an either an acute tryptophan depletion, a procedure that lowers brain serotonin levels, or a sham tryptophan depletion, that leaves serotonin levels unchanged, and the four resulting groups (Low Hostile-Non-Depleted, High Hostile-Non-Depleted, Low Hostile-Depleted, High Hostile-Depleted) were subsequently exposed to an interpersonal conflict. High and low hostile participants in the tryptophan depleted group reported increases in hostility-related affect following the 5.5 hour waiting phase, a period of time necessary for the full effects of the tryptophan manipulation to take effect. This finding partially supports previous research reports. There were no mood differences as a function of hostility status during this waiting period. Overall participants, regardless of grouping, exhibited a cardiovascular change pattern that is generally associated with a more relaxed state, a result that is incongruent with the increased negative affect in the tryptophan depleted groups. High hostile individuals showed a slightly less relaxed pattern during this period without any tryptophan-related differences. All participants exhibited heightened cardiovascular responses to the interpersonal conflict, as well as reduced positive affect and increased negative affect, including hostility/anger-related mood changes. Contrary to expectations, there were no differential effects of trait hostility status nor tryptophan condition. Possible reasons for these findings are explored.
xiv, 126 leaves ; 29 cm.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,375
|
{"url":"http:\/\/books.duhnnae.com\/2017\/jun4\/149730325428-Zero-modes-on-cosmic-strings-in-an-external-magnetic-field-Francesc-Ferrer-Harsh-Mathur-Tanmay-Vachaspati-Glenn-D-Starkman.php","text":"# Zero modes on cosmic strings in an external magnetic field\n\nA classical analysis suggests that an external magnetic field can cause trajectories of charge carriers on a superconducting domain wall or cosmic string to bend, thus expelling charge carriers with energy above the mass threshold into the bulk. We study this process by solving the Dirac equation for a fermion of mass $m f$ and charge $e$, in the background of a domain wall and a magnetic field of strength $B$. We find that the modes of the charge carriers get shifted into the bulk, in agreement with classical expectations. However the dispersion relation for the zero modes changes dramatically - instead of the usual linear dispersion relation, $\\omega k =k$, the new dispersion relation is well fit by $\\omega \\approx m f tanhk-k *$ where $k *=m f$ for a thin wall in the weak field limit, and $k *=eBw$ for a thick wall of width $w$. This result shows that the energy of the charge carriers on the domain wall remains below the threshold for expulsion even in the presence of an external magnetic field. If charge carriers are expelled due to an additional perturbation, they are most likely to be ejected at the threshold energy $\\sim m f$.\n\nAuthor: Francesc Ferrer; Harsh Mathur; Tanmay Vachaspati; Glenn D. 
Starkman\n\nSource: https:\/\/archive.org\/","date":"2017-10-22 03:09:43","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7115681767463684, \"perplexity\": 332.5561352277007}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-43\/segments\/1508187825057.91\/warc\/CC-MAIN-20171022022540-20171022042540-00671.warc.gz\"}"}
| null | null |
After incubating at 37°C and 5% CO₂ for 48 h, 1 μCi ³H-thymidine (Amersham) was added to each well. The cultures were harvested 18 h later and then processed for measurement of incorporated radioactivity in a liquid scintillation counter. The inhibitors of NO, 200 μM L-NMMA; of arginase, 40 μM nor-NOHA (Nω-hydroxy-nor-L-arginine) (Calbiochem); or the ROS scavenger, 5 mM NAC (N-acetyl-L-cysteine) (Sigma) were added at the beginning of the culture. One million SCs or IHLs were incubated in 1%
FBS, 1% BSA in PBS with the relevant Abs. Intracellular cytokine staining [48], nitrotyrosine staining [35], and detection of CD107a (BioLegend) [49] were performed as previously described. For iNOS detection, splenocytes were cultured and stimulated with Con A (5 mg/mL) for 48 h. Then, cells were stained with allophycocyanin-anti-CD11b (clone M1/70) and PE-anti-Gr1, fixed, permeabilized with Cytofix/Cytoperm buffer, and incubated with rabbit
polyclonal anti-iNOS Ab (BD Bioscience). After washing, samples were examined using a BD FACSCanto II flow cytometer (BD Biosciences). The conjugated Abs were allophycocyanin-anti-Ly6G/Ly6C (Gr-1, clone RB6–8C5), PE-anti-Ly6G (clone 1A8), FITC-anti-Ly6C (clone AL-21) (BD Bioscience), allophycocyanin-anti-CD4 (clone GK1.5) (BioLegend), PE-anti-CD8 (clone 53-6.7), PE-anti-IL6 (MP5-20F3), PE-anti-IFNγ (XMG1.2), PE-anti-IL-17A (clone eBio17B7) (eBioscience), and anti-Phospho-Stat3 (Tyr705) (clone D3A7) (Cell Signaling). The oxidation-sensitive dye DCFDA (Molecular Probes/Invitrogen) was used to measure ROS production [27]. Cytokine levels were determined by sandwich ELISA for TNF-α, IL6, and IFN-γ (eBioscience) in plasma and in culture supernatants from sorted MDSCs cultured in supplemented RPMI 1640 for 24 h. Splenocytes were cultured
with ConA for 48 h, fixed in 4% paraformaldehyde, blocked with 1% PBS-BSA and labeled with allophycocyanin-anti-CD4, PE-anti-CD8, and Alexa Fluor 488-anti-NT, and visualized using an FV1000 (Olympus) confocal microscope. Sorted CD11b+Gr1+ cells were placed on a slide by the cytospin technique and stained with the DNA-binding fluorochrome Hoechst 33258 (2 μg/mL) and FITC-anti-phosphoSTAT3. Slides were observed with a Nikon Eclipse microscope. Purified MDSCs were washed and lysed (1% Triton X-100, 0.5% sodium deoxycholate, 9% SDS, 1 mM sodium orthovanadate, and 10 g PMSF in PBS). Aliquots of tissue lysates were separated on a 10% SDS-PAGE gel and transferred to nitrocellulose membranes. After blocking, they were incubated with rabbit polyclonal Ab anti-p47phox (Santa Cruz) followed by HRP-anti-rabbit Ab (Sigma) and assayed using the ECL chemiluminescent system. Protein loading was visualized with an anti-actin Ab (Santa Cruz). Experimental differences over the controls were analyzed with Student's t-test and nonparametric tests, and differences with a p-value of <0.
Both methods present advantages and disadvantages. In solid pieces of tissue, neurones are mixed together with glial populations, which could help the maturation of the tissue in the host brain [145]. Importantly, with the latter approach, cells do not undergo mechanical stress, trauma or necrosis due to axotomy, although cell death may still occur upon dissection
of the tissue [146]. On the other hand, cell suspensions, which require the mechanical dissociation of the tissue with potential accompanying cell damage, are surgically easier to implant in the brain. Dissociated cells are also more likely to be integrated in the host brain and to form afferent and efferent connections with the latter [147]. However, the trituration of the tissue leads to the destruction of the donor vasculature, leaving the graft to rely strictly on the vascular supply of the host [90,148,149]. Solid pieces of tissue maintain their own angioarchitecture and will more readily anastomose with surrounding vessels [114,148,150,151]. Finally, cell suspensions trigger a weaker inflammatory response, in part because they are injected through a smaller cannula than solid grafts [139]. In clinical trials, the cell suspensions utilized were not completely dissociated and small clusters of cells were maintained, introducing a source of variability with regard to the effective number of cells implanted
between transplants. However, the method of cell suspension seems to yield a better outcome [139]. The regime of immunosuppression is another parameter that may be predictive of graft outcome and one that is intermingled with the cellular and molecular immune/inflammatory responses against grafted tissue (Table 1).
The early work on transplantation in animal models of disease demonstrated that the long-term survival of dopaminergic xenografts (mouse to rat and human to rat) was improved when the immunosuppressive drug cyclosporine A was administered to the recipient animal, even for a short period of time [152,153]. However, halting cyclosporine treatment reduced the functional effects of grafted tissue at later time points (6 months), although the improvement of the behavioural phenotype of the immunosuppressed animals was still greater than in non-immunosuppressed animals [154]. Clinically, the withdrawal of immunosuppression coincided with the decline of beneficial effects in PD patients [155]. It was suggested that this could reflect graft rejection, although graft survival was confirmed both by PET scans of Fluoro-dopa uptake and later by post-mortem histological analysis [155], similarly to previous reports [156]. In other PD cases, the withdrawal of the immunotherapy treatment did not lead to graft rejection [157,158]. Two independent reports have further described graft survival in the absence of any immunosuppressive treatment [109,159].
All patients had experienced symptoms for a prolonged time period (mean time of disease 10±14 years) and presented with mucosal lesions involving the nasal cavity (100%), pharynx (35%) and/or larynx (11%). All tissue specimens were obtained before treatment; afterwards, patients received N-methylglucamine antimoniate (20 mg/Sb/kg/d) for 30 days. Nasal mucosal biopsy was performed under local anaesthesia with Lidocaine® spray (10%). Normal mucosal samples were obtained from turbinectomy nasal
surgery. Tissue fragments were cryopreserved or conserved in 10% formalin. This study was approved by the Gonçalo Moniz Research Center (CPqGM/FIOCRUZ-Bahia) Institutional Review Board, and informed consent was obtained from all patients before enrolment. Frozen sections (5 μm thick) were obtained and immunohistochemistry was performed as described previously 2. The following primary antibodies were used: rabbit anti-IL-17 (4 μg/mL) or anti-TGF-β (2 μg/mL) (both Santa Cruz Biotechnology, Santa Cruz, CA, USA), goat anti-IL-23 (0.01 μg/mL), mouse anti-IL-6 (25 μg/mL), mouse anti-IL-1β (10 μg/mL) or goat anti-MMP-9 (4 μg/mL) (all R&D Systems,
Abingdon, UK), goat anti-MPO (4 μg/mL; US Biological, Swampscott, MA, USA) and goat anti-NE (12 μg/mL; Santa Cruz Biotechnology). Biotin-labelled anti-rabbit, anti-mouse or anti-goat IgG (Vector Laboratories, Peterborough, England) was used as a secondary antibody. Isotype control antibodies (R&D Systems) were used as negative controls. Positive-control sections consisted of frozen mucosal tonsillar tissue and frozen nasal polyps. Digital images of tissue sections were captured using a Nikon E600 light microscope and a Q-Color 1 Olympus digital camera. Quantification of stained areas was performed using Image Pro-Plus software (Media Cybernetics). Double immunofluorescence staining was performed for IL-17 and CD4, CD8, CD14 or
CCR6 markers. The following primary antibodies were used: mouse anti-CD4 (BD Biosciences, San Jose, CA, USA), mouse anti-CD8 (BD Biosciences), mouse anti-CCR6 (R&D Systems) and rabbit anti-IL-17 (8 μg/mL, Santa Cruz Biotechnology). Secondary antibodies were biotin anti-mouse IgG (Vector Laboratories) or anti-rabbit Alexa 488 (Molecular Probes, Eugene, OR, USA). Streptavidin Cy3 (Sigma, Buchs, Switzerland) was used after biotin antibodies. Multiple images representing positive staining and negative controls were acquired using a confocal microscope (Leica TCS SP2 SE and SP5 AOB5). Image Pro Plus was used for image processing. The extraction of total RNA from mucosal tissues was performed following the protocol recommended by the manufacturer (Life Technologies, Rockville, MD, USA). cDNA was synthesised using 1 μg of RNA through a reverse transcription reaction (M-MLV reverse transcriptase, Promega, Madison, WI, USA).
The PBMCs from patients with TM (n = 35), patients with TH (n = 30), patients with NT (n = 21) and HC (n = 32) were examined for the subset population, defined as the percentage of Th17 cells among total CD4+ T cells using flow cytometry. Summarized
data from all individuals indicated that the proportion of Th17 cells in TM group was significantly higher than those in HC group (1.49 ± 0.59% versus 0.99 ± 0.12%, P < 0.05) (Fig. 1A,B). There was no significant difference in the frequency of Th17 cells between TH group (1.38 ± 0.42%), NT group (1.08 ± 0.52%) and HC group (P > 0.05). There was also no significant difference in the frequency of Th17 cells between TM group and TH group (P > 0.05). We also compared the number of the Treg cells in PBMCs in patients with MG to that in healthy subjects. The proportion of Treg cells in TM group (3.23 ± 0.64%) was lower than those in TH group (5.87 ± 0.51%, P < 0.05), NT group (6.27 ± 0.51%, P < 0.05) and HC group (6.21 ± 0.12%, P < 0.05) (Fig. 1C). There was no significant difference in the frequency of Treg cells between TH group, NT group and HC group (P > 0.05). The results suggested that increased numbers of Th17 cells and decreased numbers of Treg cells specifically correlate with MG patients with TM but
not all patients with MG. To further evaluate possible alterations in the expression of pro-Th17 genes in MG, we tested their mRNA levels in patients with MG and healthy subjects by using real-time quantitative PCR. The values were calculated as copy numbers of the genes of interest relative to the house-keeping gene (β-actin). The relative quantification values (RQ values) of mRNA are shown in Fig. 2. The expression levels of IL-17 mRNA (23.1 ± 4.7) were upregulated significantly versus those in HC group (13.8 ± 3.0, P < 0.01). As IL-1β, IL-6 and IL-23 were involved in the generation of human Th17 cells, we further detected their mRNA expression. The expression levels of IL-1β mRNA significantly
increased in TM group (7.3 ± 2.1) versus those in HC group (4.8 ± 1.6, P < 0.05). The expression levels of IL-6 mRNA increased in TM group (8.4 ± 1.9) versus those in HC group (4.9 ± 1.3, P < 0.05). The expression levels of IL-23 mRNA in TM group (18.4 ± 2.1) increased significantly versus those in HC group (11.3 ± 2.9, P < 0.05). No differences in expression levels of TGF-β1 mRNA were found (P > 0.05). We used ELISA to detect the Th17-related cytokine levels in serum. As shown in Fig. 3, the mean concentration of IL-17A was upregulated significantly in TM group (30.0 ± 7.2 pg/ml) versus HC group (20.0 ± 4.9 pg/ml, P < 0.05). Serum levels of IL-23 were always increased in TM group (208.0 ± 85.6 pg/ml) versus HC group (93 ± 48.3 pg/ml, P < 0.01). The expression of IL-1β in TM group (72.0 ± 34.5 pg/ml) and in TH group (86.0 ± 30.1 pg/ml) increased significantly versus those in HC group (45 ± 25.3 pg/ml, P < 0.05).
Optimization of the benefit-to-risk ratio for individual substances can be achieved on multiple
levels, including (a) patient selection according to clinical/paraclinical criteria, (b) optimization of treatment and monitoring protocols, (c) identification of patients at higher risk for SADRs and (d) the development of biomarkers for treatment response and/or risk profile (Fig. 1). In the following we will discuss these aspects, focusing on treatment of MS and NMO with mAbs (NAT, alemtuzumab, daclizumab and others), FTY, teriflunomide, dimethylfumarate (DMF) and MX. The alpha-4-integrin-inhibitor natalizumab (Tysabri®) [39] was approved by the Food and Drug Administration (FDA) and European Medicines Agency (EMA) in 2005/06 for the treatment of highly active forms of the relapsing–remitting disease course (RRMS), but not chronic progressive forms [primary or secondary progressive MS (PPMS, SPMS)]. Efficacy in SPMS is under investigation in a Phase
IIIb study, ASCEND in SPMS (A Clinical Study of the Efficacy of Natalizumab on Reducing Disability Progression in Subjects With SPMS; ClinicalTrials.gov NCT01416181). Therapeutic efficacy has also been reported in paediatric cohorts with high disease activity [40, 41]. In NMO, the use of NAT should be avoided, as current data suggest negative effects on relapse rate and disease progression as well as severe astrocyte damage in spite of natalizumab treatment [42, 43]. Monthly NAT administration is standard treatment. So far, there are only a few data on the prolongation of infusion intervals [44]. The REFINE trial (Exploratory Study of the Safety, Tolerability and Efficacy of Multiple Regimens of Natalizumab in Adult Subjects With Relapsing Multiple Sclerosis (MS); ClinicalTrials.gov NCT01405820) is investigating both different dosing schemes and application routes [intravenous (i.v.), subcutaneous (s.c.)]; thus far, this approach cannot be recommended outside clinical trials. Safety considerations and monitoring were profoundly influenced by the occurrence of progressive multi-focal leucoencephalopathy (PML). This is a relatively rare but potentially fatal (22%) opportunistic viral
infection of the CNS which can result in severe disability in 40% of the patients [45]. Epidemiological data on the frequency of NAT-associated PML has shown an increase of PML incidence after a treatment duration of 2 years (i.e. 24 infusions) [45]. Thus, therapy continuation for more than 24 infusions requires updated documented informed consent [46] and re-evaluation of the individual risk–benefit ratio. In addition, adequate counselling of patients and relatives is crucial for the early recognition of symptoms and signs of possible PML, as neuropsychological symptoms may prevail initially. Regular clinical monitoring and magnetic resonance imaging (MRI) are required to detect symptoms suggestive of PML or suspicious lesions [47].
Patients with other connective tissue disorders were excluded from the analysis as the numbers were insignificant. Results: The
mean estimated glomerular filtration rate of vasculitis and LN patients improved from 28.8 to 51.3 mL/min/1.73 m2 and 62.42 to 65.53 mL/min/1.73 m2 respectively. The mean urine protein/creatinine ratio of vasculitis and LN improved from 273 to 79.5 and 406 to 70 respectively. No patients died in either group. Only one vasculitic and two LN patients required maintenance dialysis. Three LN patients underwent renal transplantation. Conclusion: Compared to published studies, our results show better patient and renal survival. Long-term follow up is needed before firm conclusions can be made. 221 INDICATIONS AND DIAGNOSES OF KIDNEY BIOPSIES AT A SINGLE INSTITUTION 2008–2013 A LECAMWASAM, MA ROBERTS, D LEE, H LIEW, L MCMAHON Box Hill Hospital, Australia Aim: To evaluate the distribution of clinical indications and histological diagnoses of renal biopsies. A secondary aim was to examine the clinical outcomes from the most common diagnoses. Background: A retrospective audit of all renal biopsies
performed at Eastern Health between January 2008 and October 2013 was performed. Methods: Reports of all renal biopsies and clinical data during the study period were obtained from the electronic health records at Eastern Health. Results: Of 197 biopsies performed, 170 were native kidneys and 27 transplant kidneys. The main indications for native kidney biopsy were reduced kidney function (44%), proteinuria (37%) and haematuria (11%). The main indications for transplant kidneys were protocol biopsy (n = 15) and suspected rejection (n = 12). In 60 patients with combined haematuria and proteinuria, IgA nephropathy was the predominant pathology (n = 26, 43%), followed by pauci-immune glomerulonephritis (n = 13, 22%). In
17 patients considered to have nephrotic syndrome, membranous nephropathy (n = 8) was the dominant lesion. The mean eGFR of 16 IgA nephropathy patients with complete follow up data, at biopsy, 6 months, and at most recent follow-up (median 2.8 years) was 51.6, 53.9 and 51.6 mL/min/1.73 m2 respectively. The corresponding mean proteinuria was 3.3, 1.2 and 0.5 g/day respectively. The corresponding systolic blood pressure measurements improved from a mean of 130 at biopsy to 120 and 112 mmHg at 6 months and most recent follow-up respectively. Three quarters of patients received an antagonist of the renin-angiotensin system. Conclusions: Reduced kidney function was the most frequent indication and IgA nephropathy the most common histological diagnosis in this kidney biopsy audit. Patients with IgA with follow-up data had a good short term prognosis. 222 TOWARDS A NATIONAL SURVEILLANCE NETWORK FOR CHRONIC KIDNEY DISEASE (CKD) WE HOY1, HG HEALY1,2,3, D WAUGH3,4, M JOSE5, H KULKARNI6, I KATZ7, C NELSON3,8, K PANARETTO9, R WALKER10 1CKD.
Diameters were determined for n = 72 beads and were 136 µm (range 74–205 µm) for LB and 40 µm (range 15–85 µm) for SB (Fig. 1a). Using the formula for sphere volume = 4/3 × π × r³, the LB were found to have a mean volume of 1 317 000 µm³ compared to 34 000 µm³ for the SB, giving a ratio difference in volume of 38·7 between LB and SB. Using the formula for sphere surface area = 4 × π × r², the LB were found to have a surface area of 58 107 µm² compared to 5027 µm² for the SB, giving a ratio difference in surface area of 11·6 between LB and SB. Because both groups received the same amount of bacteria and alginate, this provides a 3·3-fold larger total surface area for the SB (38·7/11·6 = 3·3).
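As a quick arithmetic check (ours, not part of the original study), the bead geometry above can be recomputed directly; note that the ratio of the volume ratio to the surface-area ratio reduces analytically to the radius ratio 68/20 = 3.4, which the rounded intermediate values in the text bring down to 3.3:

```python
from math import pi

def sphere_volume(r):
    # V = 4/3 * pi * r^3
    return 4.0 / 3.0 * pi * r ** 3

def sphere_area(r):
    # A = 4 * pi * r^2
    return 4.0 * pi * r ** 2

r_lb = 136.0 / 2   # large-bead radius in micrometres (mean diameter 136 um)
r_sb = 40.0 / 2    # small-bead radius in micrometres (mean diameter 40 um)

vol_ratio = sphere_volume(r_lb) / sphere_volume(r_sb)   # ~39.3 (38.7 in the text, from rounded volumes)
area_ratio = sphere_area(r_lb) / sphere_area(r_sb)      # ~11.6
print(round(vol_ratio / area_ratio, 1))                 # 3.4, i.e. exactly r_lb / r_sb
```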
In addition, the volume of alginate in the two bead suspensions was adjusted to ensure equal volumes of alginate in the two groups. At day 1 after challenge, a significantly higher number of CFUs was observed in the lungs of SB group compared to the LB group (P < 0·003) (Fig. 2). At days 3, 5 and 6 no significant differences in quantitative bacteriology were observed between the two groups. P. aeruginosa could be cultured from the majority of mice at all time-points (Fig. 2). Four mice from each group were killed 2 h after infection, and lungs examined for
number of CFUs to confirm that the infection dose was equal in the two groups. No significant differences were observed in CFUs 2 h after challenge (Fig. 2). As expected, a PMN-dominated inflammation was observed in all mice at day 1 after infection (Table 1). However, in the SB group the inflammation was located exclusively endobronchially, in contrast to a partially mixed localization in the LB group (Table 1). In the SB group this shifted
significantly to a mixed localization or exclusively parenchymal localization on days 2/3 after challenge (P < 0·005, Table 1), and in general was paralleled by a more peripheral presence of the bacteria in the alveoli of the SB group. For the SB group, a significantly faster resolution of inflammation at days 5/6 compared to the LB group was observed (P < 0·03, Table 1). For both groups together, a significant increase in degree of inflammation from day 1 to days 2/3 was observed (P < 0·01, Table 1). However, the difference between the two groups for this observation did not reach significance. The areas of the biofilm-like structures identified by Alcian blue staining were significantly smaller in the SB group compared to the LB group at day 1 and days 2/3 (P < 0·001, Figs 3 and 4). In accordance, the areas of the airways in which biofilm-like structures were identified were significantly smaller in the SB group compared to the LB group at days 2/3 (P < 0·002, Figs 3 and 4). The number of identified biofilm-like structures was 137 in the LB group versus 308 in the SB group. PNA-FISH and DAPI staining confirmed the presence of P. aeruginosa in the biofilm-like structures (Fig. 5).
Table 1 lists the primers that
were used for mRNA quantification. Samples were analysed using a Bio-Rad iCycler iQ (Bio-Rad, Hercules, CA). Changes in gene expression were determined by calculating the Δ cycle threshold (Ct) by subtracting the Ct for ribosomal protein L19 (RPL19) (reference gene) from the Ct of the gene of interest for each sample.26 The ΔCt of the control was subtracted from the corresponding treated sample, giving rise to the ΔΔCt. The fold change was derived from the equation 2^(−ΔΔCt). To confirm that the reference gene ribosomal protein L19 was stably expressed in MoDCs and BDCs, a comparison was performed using either glyceraldehyde 3-phosphate dehydrogenase (GAPDH) or RPL19 as the reference gene. Similar trends in fold change were observed. Complementary DNA was diluted to generate
a standard curve whose correlation coefficient was > 0·99. The efficiency of qPCR was determined from the slope using the equation (10^(−1/M) − 1) × 100 and ranged between 90% and 110%. To evaluate changes in cytokine secretion, 1 × 10⁶ MoDCs or BDCs were incubated in 1 ml culture medium for 24 hr in six-well plates (Corning) and culture supernatants were collected. Concentrations of IL-6, IL-8 and IL-10 were assayed using commercial kits as per the manufacturer's instructions (R&D Systems, Minneapolis, MN). The ELISA for IFN-α, TNF-α and IL-12 were performed as previously described.27 Statistical analysis was performed by non-parametric Mann–Whitney U-tests (P-value < 0·05) using the statistical software programme graphpad prism 5 (GraphPad Software, Inc., La Jolla, CA). In this study, 800 ml of EDTA blood yielded approximately 2 × 10⁹ PBMCs. Following CD14+ selection, an average of 2 × 10⁸ monocytes were cultured in the presence of IL-4 and GM-CSF to generate MoDCs. On day 6, approximately 2 × 10⁷ MoDCs were harvested and cultured for use. The CD14− population
was positively selected for cells expressing CD172, which equates to the BDC (CD14− CD172+) population. Approximately 3 × 10⁷ BDCs were therefore isolated and rested overnight. In contrast to other studies, the protocol used in this study resulted in lower numbers of MoDCs compared with BDCs from an equal amount of blood.28 Dendritic cell morphology is characterized by a large cytoplasmic cell mass and extrusion of dendrites which increase the surface area available to sample and take up antigens. In this study, the morphologies of Giemsa-stained MoDCs (Fig. 1a) and BDCs (Fig. 1b) were compared. Both DC populations displayed a typical DC morphology, characterized by an irregular cell border with a large cytoplasmic cell mass. Expression of cell surface markers CD172, MHC II, CD16, CD1, CD80/86 and CD14 was assessed by flow cytometry in 6-day-old MoDCs and BDCs (Table 2). Both MoDCs and BDCs expressed all of these markers; however, BDCs showed similar expression of CD172 and MHC II, higher expression of CD16 and lower expression of CD80/86 and CD1.
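The ΔΔCt and standard-curve efficiency calculations described above can be sketched as follows; the Ct values and slope used here are invented purely for illustration:

```python
def fold_change(ct_target_treated, ct_ref_treated, ct_target_control, ct_ref_control):
    """Relative expression by the 2^-(ddCt) method, normalised to a reference gene."""
    d_ct_treated = ct_target_treated - ct_ref_treated   # dCt of the treated sample
    d_ct_control = ct_target_control - ct_ref_control   # dCt of the control
    dd_ct = d_ct_treated - d_ct_control
    return 2.0 ** (-dd_ct)

def pcr_efficiency(slope):
    """Efficiency (%) from a standard-curve slope M: (10^(-1/M) - 1) * 100."""
    return (10.0 ** (-1.0 / slope) - 1.0) * 100.0

print(fold_change(24.0, 18.0, 26.0, 18.0))  # ddCt = -2 -> 4.0 (4-fold up-regulation)
print(round(pcr_efficiency(-3.32), 1))      # 100.1, inside the 90-110% window
```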
Other activating family members for inhibitory receptors also fail to bind the physiological ligand; CD200RLa and CD200RLb do not bind CD200 99 and SIRP-β does not bind CD47 100. These results suggest that activating family members of inhibitory receptors have evolved in response to bacterial or viral ligands and that, in binding the latter, they have lost the capacity to bind the physiological
ligand. The presence of activating family members may be an important determinant in the outcome of infection. For example, C57BL/6J mice are protected from mouse cytomegalovirus infection by NK-cell expression of the activating receptor Ly49H, which binds to the MCMV-encoded MHC class I-like glycoprotein m157 and induces NK-cell cytotoxicity. On the contrary, 129/J mice express the inhibitory Ly49I receptor instead of the activating Ly49H and show increased susceptibility to MCMV during the early phase of infection 101. Thus, activating family members of inhibitory receptors may protect from infection
by binding bacterially encoded ligands. Inhibitory receptors play a pivotal role in diverse aspects of phagocyte function and can provide an activation threshold, regulate, or terminate immune cell activation, hence contributing to immune homeostasis. Inhibitory receptors thus play an important regulatory role during various stages of the immune response. Bacteria may encode ligands for inhibitory receptors that lead to reduced immune cell activation, hence providing them an evolutionary advantage. An intriguing possibility is that besides acknowledged ligands for inhibitory
receptors, some inhibitory receptors may bind additional molecules, as demonstrated for Siglec-10 with CD24 and KIR3DL2 with CpG DNA; these interactions could contribute to inhibitory receptor specificity. Indeed, it is intriguing that although signaling through a commonly shared motif, each inhibitory receptor has specific functionality, most inhibiting, but some enhancing immune cell function (Fig. 1). The affinity with which SHP-1 and/or SHP-2 are recruited, together with regulated receptor and ligand expression, may add to the nonredundant roles of inhibitory receptors in immune regulation. In addition, alternative molecules recruited to the phosphorylated ITIMs may contribute to specific function (Fig. 2), and it is likely that more such molecules will be recognized. Finally, cellular localization of inhibitory receptors and associated SHP-1/2 may be a major determinant of inhibitory receptor capacity. To conclude, the general view of inhibitory receptors as global inhibitors of immune cell activation does not fully represent their functional repertoire. Further research is necessary to elucidate the molecular mechanisms behind inhibitory receptor function that lead to divergent or even opposing roles in phagocytic cell regulation. The authors thank Professor Paul Coffer, Dr. Peter Boross, and Dr.
Results: Mean patient age was 63 years with male predominance (62.8%). Median bone length harvested was 8 cm (range, 3–12 cm) with prophylactic plating of the radius following harvest.
Donor site morbidity included fracture (1 patient, 0.5%) and sensory neuropathy (5 patients, 2.3%). Mean DASH scores were comparable between groups and with established normative values. Mandibular malunion rate was 3.2% and hardware extrusion at the recipient site occurred in 15.6%. Conclusion: Reluctance to perform FRFOCF by surgeons usually centers on concerns regarding potential donor site morbidity and adequacy of available bone stock; however, we identified minimal objective or patient perceived donor site morbidity or recipient site complications following harvest of FRFOCFs. Mild wrist weakness and stiffness are common but do not impede ability to perform activities of daily living. Data from this and other reports suggest this flap is particularly useful for midfacial and short segment mandibular reconstruction. © 2012 Wiley Periodicals, Inc. Microsurgery, 2012. "
"Introduction: The basic idea of video-microsurgery is the improvement of ergonomic conditions in microsurgical
procedures by replacing the bulky operating microscope with a compact videosystem. Objective: To specify optical requirements on a videosystem for microsurgical intracranial procedures in neurosurgery. Methods: During 27 microsurgical intracranial procedures (12 cerebellopontine angle and 15 supratentorial) zoom factor, focus distance and illumination parameters of the operating microscope were continuously recorded. Ergonomic aspects were documented as well. Results: The zoom factor ranged from 1.7 to 13.5 in CPA procedures and from 1.4 to 13.4 in supratentorial procedures. The focus
distance ranged from 180 mm to 367 mm in CPA procedures and from 188 mm to 472 mm in supratentorial procedures. Conclusion: From an optical point of view current operating microscopes meet the requirements of intracranial microneurosurgery. However, ergonomically further developments are highly desirable. Video microsurgery is a promising field and could hold a solution to this problem. © 2011 Wiley-Liss, Inc. Microsurgery, 2011. "
"Introduction: Appropriate and adequate blood flow and oxygen delivery to a free flap is paramount to viability and success. We present a comprehensive examination of perioperative anemia, determining its prevalence and effect on complications and outcomes in autologous breast reconstruction. Methods: We analyzed all autologous free flap breast reconstruction at the Hospital of the University of Pennsylvania from 2005 to 2011 with regards to anemia (hemoglobin (Hgb) <12 g dL−1). Anemic patients were compared to those with Hgb > 12 g dL−1 at preoperative and postoperative timepoints. Complications were analyzed relative to HgB levels and the incidence of anemia. Subgroups were analyzed based on worsening degrees of anemia.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 1,068
|
\section{Introduction}\label{s:intr}
Consider a classical nonautonomous Hamiltonian system on the phase-space $T^*{\mathbb T}^d={\mathbb R}^d\times{\mathbb T}^d=\{(p,q)\}$
or $T^*{\mathbb R}^d={\mathbb R}^d\times{\mathbb R}^d$
with a Hamiltonian $H(p,q,t)$:
\begin{equation}\label{0.1}
\dot p=-\nabla_q H,\qquad \dot q=\nabla_p H.
\end{equation}
The corresponding quantum Hamiltonian operator is obtained by replacing in $H(p,q,t)$ the variable
$q_j$, $j=1,\dots,d$, by the operator which acts on a complex function $u(x)$ by
multiplying by $x_j$, and replacing each $p_j$ by the operator $\frac{\hbar}{i} \frac{{\partial}}{{\partial} x_j} $, where
$\hbar$ is the Planck constant.\footnote[3]{This rule of quantisation is the most common, but certainly it is not
unique. More generally one may replace $q_j$ and $p_j$ by any operators $Q_j$ and $P_j$ such that
$[Q_j,P_k]=i\hbar\delta_{j,k}$, for all $j$ and $k$.}
The Hamiltonian operator
$
{\cal H}=H(\frac{\hbar}{i} \nabla_x,x,t)
$
defines a quantum system, and a classical problem of quantum mechanics, going back to its
first years of existence, is to study (spectral) properties of the operator ${\cal H}$ and the properties of the
corresponding evolutionary equation
\begin{equation}\label{0.2}
i\hbar\, \dot u(t,x)= {\cal H} u(t,x),
\end{equation}
in their relation with the classical system \eqref{0.1}.
For example, if
\begin{equation}\label{0.3}
H(p,q,t)=|p|^2+V(t,q),
\end{equation}
then
\begin{equation}\label{0.0}
{\cal H}={\cal H}_t=-\hbar^2\Delta+V(t,x),
\end{equation}
i.e. ${\cal H}$ is the Schr\"odinger operator with the potential $V$.
In this paper we discuss properties of the Hamiltonian operator ${\cal H}$ that correspond to properties of
system \eqref{0.1} described by the KAM-related theories, namely
KAM theory proper, averaging, Nekhoroshev stability, and
diffusion (this list is by no means canonical; it reflects the authors' taste). We
discuss results for quantum systems \eqref{0.2} which we regard as parallel to the three classical
theories above, mostly restricting ourselves to the case of periodic boundary conditions $x\in{\mathbb T}^d$ and
assuming that $\hbar=\,$const. Scaling $x$ and $t$ in the dynamical equation \eqref{0.2}, \eqref{0.0} we
achieve $\hbar=1$. A discussion concerning semiclassical limit $\hbar\to0$, when it is not appropriate
to scale $\hbar$ to 1, is contained in Section~\ref{quasi-classic}. There we consider the equations in the whole
space, $x\in{\mathbb R}^d$, since for the periodic boundary conditions the corresponding results are less developed.
All quantum results we discuss deal with non-autonomous equations \eqref{0.2}, \eqref{0.0},
so their classical analogies are ``KAM-related'' theories for non-autonomous Hamiltonian systems \eqref{0.3}.
We do not touch the very interesting, important and complicated problem of constructing eigenfunctions of
nearly integrable Hamiltonian operators by quantising KAM tori of the corresponding autonomous
Hamiltonian systems (see \cite{Laz}).
\medskip
Let $u(t)$ be a solution of the equation \eqref{0.2}, \eqref{0.0}. Multiplying the equation by $\bar u$ and integrating
over ${\mathbb T}^d$ we get that
$\
|u(t)|_{L_2}^2=\mathop{\rm const}\nolimits.
$
Write $\ u(t,x)=\sum_su_s(t) \varphi_s(x)$, where $\{\varphi_s\}$ are the eigenfunctions of the ``unperturbed'' Hamiltonian operator.
Then $\sum |u_s(t)|^2\equiv\,$const. What happens to the quantities
$|u_s(t)|^2$ as $t$ grows, i.e. how is the total probability $\sum |u_s(t)|^2$ distributed among the states $s\in{\mathbb Z}^d$
when $t$ is large? This is the question addressed by the theorems we discuss.
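The redistribution of the probabilities $|u_s(t)|^2$ is easy to explore numerically. The following sketch (ours, not from the paper; the drive $V=\cos x\,\cos t$ and all numerical parameters are illustrative assumptions) integrates the equation on ${\mathbb T}^1$ with $\hbar=1$ by a split-step Fourier method and checks that $\sum_s|u_s(t)|^2$ is conserved while the individual probabilities migrate between modes.

```python
import numpy as np

# Illustrative sketch: i u_t = -u_xx + V(t,x) u on the circle, hbar = 1.
# V = cos(x) cos(t) is a hypothetical drive chosen for the demo.
N, dt = 64, 1e-3
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)          # integer wavenumbers s

def split_step(u, t):
    """Half kinetic step, full potential step, half kinetic step."""
    u = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(u))
    u = np.exp(-1j * np.cos(x) * np.cos(t) * dt) * u
    return np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(u))

u = np.exp(1j * x)                        # pure state s = 1
p0 = np.abs(np.fft.fft(u) / N)**2         # probabilities |u_s(0)|^2
t = 0.0
for _ in range(2000):                     # integrate up to t = 2
    u = split_step(u, t)
    t += dt
p1 = np.abs(np.fft.fft(u) / N)**2
print(p0.sum(), p1.sum(), p1[1])          # total probability is conserved,
                                          # but |u_1|^2 has leaked to other modes
```

Each factor of the splitting is unitary, so the conservation $\sum_s|u_s(t)|^2=\,$const holds exactly for the scheme, while the resonance between the drive frequency and the gap between the modes $s=0$ and $s=1$ makes the leakage visible already on this short time interval.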
\medskip
\noindent
{\bf Acknowledgments.} The authors are thankful for discussions to Sergey Dobrokhotov, H{\aa}kan~Eliasson, and Johannes Sj\"ostrand. SK acknowledges the support of l'Agence Nationale de la Recherche through the grant
ANR-10-BLAN~0102.
\section{Quantum averaging}
\subsection{Averaging and adiabatic invariance}\label{ss1}
Let a classical Hamiltonian \eqref{0.3} have the form
\begin{equation}\label{ad_class}
H(p,q,\varepsilon t)=H_{\varepsilon}=|p|^2+ V(\varepsilon t,q),
\end{equation}
where the unperturbed Hamiltonian $|p|^2+ V(\tau,q)\,,\ \tau={\rm const}$, is integrable for each
$\tau$. Let $I_j, 1\le j\le d$, be the corresponding actions.
The classical averaging principle (see, e.g., \cite{AKN, LochM}) implies that each action is an adiabatic invariant: namely, if $u_\varepsilon(t)$ is a solution of
the perturbed equation \eqref{0.1}${}_{H=H_\varepsilon}$,
then $I_j(u_\varepsilon(t))$ stays almost constant on time-intervals of order $\varepsilon^{-1}$. The averaging principle is a heuristic statement, and it does not always lead to correct results. The adiabatic invariance of classical systems is discussed in more detail in Section~\ref{quasi-classic}.
Now let us drop the assumption that the Hamiltonians \eqref{ad_class} with frozen $t$ are integrable and
consider the corresponding
quantum system:
\begin{equation}\label{S}
\dot u= -i\big(-\Delta u+ V (\varepsilon t,x) u\big),\quad
x\in{\mathbb T}^d.
\end{equation}
We assume that the function $V(\tau,x)$ is $C^2$-smooth and bounded, and
denote by $A_{\varepsilon t}$ the linear operator in \eqref{S},
$$A_{\varepsilon t}=-\Delta + V(\varepsilon t,x).
$$
Let $\{\varphi_s(\tau), s\in{\mathbb Z}^d\}$ and $\{\lambda_s(\tau)\}$ be the eigenvectors and the eigenvalues of $A_\tau$, where
each $\lambda_s(\tau)$ is continuous in $\tau$.
Let $u(t,x)$ be a solution of \eqref{S}, equal at $t=0$ to a pure state,
\begin{equation}\label{15}
u(0,x)= \varphi_{s_0}(0),
\end{equation}
such that for each $\varepsilon t$, $\lambda_{s_0}(\varepsilon t)$ is an isolated eigenvalue of $A_{\varepsilon t}$ of constant multiplicity. Consider the expansion of $u(t,x)$ in the basis $\{\varphi_s(\tau), s\in{\mathbb Z}^d\}$:
$$
u(t,x)=\sum_su_s(t)\varphi_s(\varepsilon t)\, .
$$
The quantum adiabatic theorem says that $u(t,x)$ stays close to the eigenspace, corresponding to $\lambda_{s_0}(\varepsilon t)$:
\begin{theorem}\label{tB-F} (M.~Born, V.~Fock \cite{BF28} and T.~Kato \cite{Kat50})
\begin{equation}\label{kato}
\sup_{0\le t\le \varepsilon^{-1} }\sum_{s:\, \lambda_s(\varepsilon t)\ne\lambda_{s_0}(\varepsilon t)}|u_s(t)|^2\to0\;\;\text{as}\;\; \varepsilon\to0.
\end{equation}
\end{theorem}
This is a very general result which remains true for systems in the whole space (when $x\in{\mathbb R}^d$)
if the operators $A_{\varepsilon t}$ have mixed spectrum,
provided that $\lambda_{s_0}(\varepsilon t)$ always is an isolated eigenvalue of constant multiplicity, see \cite{LochM}.
The case when this eigenvalue may be approached by other eigenvalues is considered in \cite{AvEl}.
Both for classical and quantum systems, adiabatic theorems are often considered on the infinite time interval $-\infty<t<\infty$, under the condition that the dependence of the potential $V$ on time disappears fast enough as $t\to\pm\infty$ and the system is sufficiently smooth. In this case, for classical Hamiltonians with $d=1$, the difference between the values of the action along a trajectory as $t\to\pm\infty$ tends to 0 much faster than $\varepsilon$ as $\varepsilon\to 0$; in the analytic case this difference is $O(\exp(-{\rm const}/\varepsilon))$, see \cite{LL} and references in \cite{AKN}, Sect. 6.4.5. For quantum systems, if as $\tau\to-\infty$
all the probability is concentrated
in the states corresponding to the eigenvalue $\lambda_{s_0}(\tau)$, then all the probability but a very small remnant will be absorbed by these states as $\tau\to+\infty$. In the analytic case this remnant is $O(\exp(-{\rm const}/\varepsilon))$ \cite{Ne, JP} (this result also follows from the calculus developed
in \cite{S}).
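The finite-time statement \eqref{kato} can be visualised on the simplest possible example, a two-level system with a slowly rotating Hamiltonian. The sketch below is our illustration only; the specific Hamiltonian $H(\tau)=\cos\tau\,\sigma_z+\sin\tau\,\sigma_x$ (whose spectral gap equals $2$ for all $\tau$) and the numerical parameters are assumptions made for the demo.

```python
import numpy as np

# Two-level illustration of the adiabatic theorem (our sketch; the
# Hamiltonian H(tau) = cos(tau) sigma_z + sin(tau) sigma_x is an
# assumption of the demo, chosen so that the gap is 2 for all tau).
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(tau):
    return np.cos(tau) * sz + np.sin(tau) * sx

def evolve(eps, dt=0.01, tau_end=np.pi / 2):
    """Integrate i u' = H(eps*t) u from the ground state of H(0)."""
    u = np.array([0.0, 1.0], dtype=complex)
    steps = int(tau_end / (eps * dt))
    t = 0.0
    for _ in range(steps):
        w, v = np.linalg.eigh(H(eps * (t + dt / 2)))   # midpoint propagator
        u = v @ (np.exp(-1j * w * dt) * (v.conj().T @ u))
        t += dt
    return u, eps * t

u, tau = evolve(0.01)
w, v = np.linalg.eigh(H(tau))
fidelity = abs(v[:, 0].conj() @ u)**2    # overlap with instantaneous ground state
print(fidelity)                          # close to 1 for small eps
```

The propagator is unitary at each step, so the $L_2$ norm is conserved exactly, and for small $\varepsilon$ the state follows the instantaneous ground state on the whole interval $0\le t\le \tau_{\rm end}/\varepsilon$, in agreement with Theorem~\ref{tB-F}.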
We will return to quantum adiabaticity in Section~\ref{quasi-classic}. We note that
there are also adiabatic theorems for systems where the Hamiltonian slowly depends not only on time but also
on a part of the space variables; see, e.g., \cite{AKN}, Sect. 6.4.1 for classical systems and \cite{Dobr} for quantum systems.
\subsection{Around Nekhoroshev's theorem}\label{ss2}
Let us start with classical systems.
Let $H_\varepsilon(p,q)=h_0(p)+\varepsilon h_1(p,q)$, where the function
$h_0$ is analytic and steep (e.g., strictly
convex; for the definition of steep functions see \cite{Nek77} and \cite{LochM, AKN}).
Let $(p(t),q(t))$ be a solution of \eqref{0.1}.
Then there are $a,b>0$
such that
\begin{equation}\label{Nek}
|p(t)-p(0)|\le\varepsilon^a \qquad \forall\,|t|\le e^{\varepsilon^{-b}},
\end{equation}
see in \cite{Nek77, LochM, AKN}.
There are many related results. For example: let
$$
H_\varepsilon(p,q,t)=h_0(p)+\varepsilon h_1(\omega t; p,q),\quad \omega\in{\mathbb R}^N,
$$
where $h_1$ is an analytic function on ${\mathbb T}^N\times{\mathbb R}^d\times{\mathbb T}^d$, $N\ge1$.
Then for a typical $\omega$ the estimate \eqref{Nek} holds.
In particular, let us take
$$
H_\varepsilon(p,q,t)=|p|^2+\varepsilon V(\omega t;q).
$$
The corresponding quantised Hamiltonian is the operator
$
-\Delta +\varepsilon V(\omega t; x),
$
and the evolutionary equation is
\begin{equation}\label{0}
\dot u= - i \big(-\Delta u+\varepsilon V(\omega t;x) u\big).
\end{equation}
Is there an analogue of the Nekhoroshev estimate \eqref{Nek} for solutions of \eqref{0}? That is, is it true that
the actions of the unperturbed system, evaluated along solutions of the perturbed equation \eqref{0}, do not change
much during an exponentially long time? It turns out that a weaker form of this assertion holds true, even when $\varepsilon=1$!
Let us consider the equation
\begin{equation}\label{0.02}
\dot u= -i \big(-\Delta u+ V(t, x) u\big),
\end{equation}
and
consider the squared $r$-th Sobolev norm of $u$:
$$
\|u\|_r^2 = \sum_{s\in{\mathbb Z}^d} |u_s|^2(1+|s|^2)^r, \qquad r\in{\mathbb R}.
$$
This is a linear combination of the actions for the unperturbed system with $V=0$.
\begin{theorem}\label{tB1} (\cite{Bo99}).
Let
$V(t,x)=\tilde V(\omega t,x)$, where $\omega\in{\mathbb R}^N$ is a Diophantine vector and $\tilde V$ is a smooth
function on ${\mathbb T}^N\times {\mathbb T}^d$. Then for each $r\ge1$ there exists $c(r)$
such that any solution $u(t)$ of \eqref{0.02} satisfies
\begin{equation} \label{0.11}
\|u(t)\|_r\le \mathrm{Const}\cdot
(\ln t)^{c(r)}\|u_0\|_r,\qquad \forall\,t\ge2.
\end{equation}
\end{theorem}
So if $u_0$ is smooth, then the high states $u_s$ stay almost non-excited for a very long time. We lack a result
which would imply that the quantity in \eqref{kato}, computed for solutions of \eqref{0.02}, \eqref{15}, stays small for a long time.
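A crude numerical experiment (ours; the quasiperiodic drive $V=\cos x\,(\cos t+\cos\sqrt2\,t)$, the time horizon and all parameters are illustrative assumptions) is consistent with the bound \eqref{0.11}: for a smooth initial state the $H^1$ norm stays of the order of $\|u_0\|_1$ on the interval computed, rather than growing polynomially.

```python
import numpy as np

# Crude illustration of slow Sobolev-norm growth under a smooth
# quasiperiodic drive (split-step Fourier on the circle; all
# parameters are assumptions made for this sketch).
N, dt = 64, 1e-3
x = 2 * np.pi * np.arange(N) / N
k = np.fft.fftfreq(N, d=1.0 / N)

def h1_norm(u):
    us = np.fft.fft(u) / N
    return np.sqrt(np.sum(np.abs(us)**2 * (1 + k**2)))

u = np.exp(1j * x)                 # smooth initial state
norm0 = h1_norm(u)
t, norms = 0.0, []
for n in range(20000):             # integrate up to t = 20
    V = np.cos(x) * (np.cos(t) + np.cos(np.sqrt(2.0) * t))
    u = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(u))
    u = np.exp(-1j * V * dt) * u
    u = np.fft.ifft(np.exp(-1j * k**2 * dt / 2) * np.fft.fft(u))
    t += dt
    if n % 500 == 0:
        norms.append(h1_norm(u))
print(norm0, max(norms))           # no blow-up of the H^1 norm is observed
```

Of course a finite-time computation cannot distinguish $(\ln t)^{c}$ from slow polynomial growth; the sketch only shows the absence of fast transfer of probability to high modes in this example.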
It is surprising that a weaker version of this result holds for potentials $V$ which are not time-quasiperiodic:
\begin{theorem}\label{tB2} (\cite{Bo99c}).
Let $V$ be smooth and $C^k$-bounded uniformly in $(t,x)$ for each $k$. Then for each $r\ge1$ and $a>0$
there exists $C_a$
such that any solution $u(t)$ of \eqref{0.02} satisfies
\begin{equation*}
\|u(t)\|_r\le C_a\, t^a \|u_0\|_r,
\qquad \forall\,t\ge2.
\end{equation*}
\end{theorem}
Also see \cite{Del10}.
If the potential $V(t,x)$ is analytic, then the norm $\|u(t)\|_r$ satisfies \eqref{0.11},
see \cite{WM08}.
We are not aware of any classical analogue of these results.
\section{Quantum KAM}\label{s2}
Let $(p,q)\in {\mathbb R}^d\times {\mathbb T}^d$. Consider the integrable Hamiltonian $h_0(p)=|p|^2$ and its time-quasiperiodic
perturbation
$H_\varepsilon(p,q)=h_0(p)+\varepsilon V(\omega t,q)$, $\omega\in {\mathbb R}^n$, where
$V$ is analytic. For the corresponding Hamiltonian
equation we have a KAM result:
{\it For a typical $(p(0),q(0))$ and a typical $\omega$ the solution $(p(t),q(t))$ is time-quasiperiodic.
}
The quantised Hamiltonian defines the dynamical equation \eqref{0}.
We regard the vector $\omega$ as a parameter of the problem:
$
\omega\in U\Subset {\mathbb R}^n.
$
We abbreviate $L^2=L^2({\mathbb T}^d,{\mathbb C})$ and provide this space with the exponential basis
$$\{e^{is\cdot x},s\in{\mathbb Z}^d \}.
$$
For any linear operator $B:L^2\to L^2$ let $(B_{ab}, a,b\in{\mathbb Z}^d)$
be its matrix in this basis.
The theorem below may be regarded as a quantum analogue of the KAM theorem above.
For $d=1$ it is proven in \cite{BG01}, and for $n\ge2$ in \cite{EK09}. We do not know how to pass in this result to
the semiclassical limit.
\begin{theorem}\label{tEK2}
If $\varepsilon\ll1$, then for most $\omega$ we can find a $\varphi$-dependent
complex-linear isomorphism $\Psi(\varphi)=\Psi_{\varepsilon,\omega}(\varphi)$, \ $\varphi\in{\mathbb T}^n$,
$$
\Psi(\varphi):L^2\to L^2,\quad u(x)\mapsto \Psi(\varphi)u(x),
$$
and a bounded Hermitian operator $Q=Q^{\varepsilon,\omega}$ such that a curve $u(t)\in L^2$ solves eq.~\eqref{0}
if and only if $v(t)=\Psi( t\omega)u(t)$ satisfies
$$
\dot v= i\big( \Delta v- \varepsilon Qv\big).
$$
The matrix $(Q_{ab})$ is block-diagonal, i.e.
$\
Q_{ab}=0\quad\text{if}\quad |a|\ne|b|
$, and it satisfies
$$
Q_{ab}=(2\pi)^{-n-d}\int_{{\mathbb T}^n}\int_{{\mathbb T}^d} V(\varphi,x)e^{i(a-b)\cdot x }\,dxd\varphi+O(\varepsilon^\gamma),\quad \gamma>0.
$$
Moreover, for any $p\in{\mathbb N}$ we have
$
\|Q\|_{H^p,H^p}\le C_1$ and $ \|\Psi(\varphi)-\mathop{\rm id}\nolimits \|_{H^p,H^p}\le \varepsilon C_2
$.
\end{theorem}
Here
``for most'' means ``for all $\omega\in U_\varepsilon\subset U$, where mes$\,(U\setminus U_\varepsilon)\le\varepsilon^\kappa$
for some $\kappa>0$''. In particular, for any $\omega$ as in the theorem all solutions of eq.~\eqref{0} are almost-periodic
functions of time.
Their Sobolev norms are almost constant:
\begin{corollary} For $\omega$ as in the theorem and for
any $p$ solutions of \eqref{0} satisfy
$$
(1-C\varepsilon)\|u(0)\|_p\le \|u(t)\|_p\le (1+C\varepsilon)\|u(0)\|_p,\quad \forall\,t\ge0.
$$
\end{corollary}
This property is called the {\it dynamical localisation}.
{\bf Proof.} Since $Q$ is block-diagonal, $\|v(t)\|_p=\,$const. Since $v(t)=\Psi(t\omega)u(t)$
and $\|\Psi-\mathop{\rm id}\nolimits\|_{H^p,H^p}\le\varepsilon C_2$, the estimate follows. \hfill $\Box$
\medskip
{\bf Remarks.} 1) Let $n=0$. Then \eqref{0} becomes the equation
$\
\dot u=-i \big(-\Delta u+\varepsilon V(x) u\big).
$
The theorem states that this equation may be reduced to a block-diagonal equation
$\ \
\dot u=-i Au$, where\ $A_{ab}=0\;\;\text{if}\;\; |a|\ne|b|.\
$
This is a well-known fact.
2) For $n=1$ the theorem's assertion is the Floquet theorem for the time-periodic equation \eqref{0}.
In contrast with the finite-dimensional case, this is a perturbative result, valid only for `typical' frequencies $\omega\in{\mathbb R}$ and small $\varepsilon$.
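The block-diagonal structure in Remark 1 can be checked directly on a truncated Fourier matrix. In the following sketch (ours; the potential $V(x)=2\cos x$, the truncation $|a|\le 8$ and $\varepsilon=0.05$ are illustrative choices) the eigenvectors of $-\Delta+\varepsilon V$ on ${\mathbb T}^1$ concentrate on the blocks $\{a,-a\}$, with $O(\varepsilon^2)$ leakage outside a block, so the diagonalising unitary is $\varepsilon$-close to a block-diagonal one.

```python
import numpy as np

# Numerical sketch for Remark 1: the truncated Fourier matrix of
# -Delta + eps*V on the circle with V(x) = 2 cos(x), i.e.
# A_ab = a^2 delta_ab + eps for |a - b| = 1 (illustrative choices).
M, eps = 8, 0.05
modes = np.arange(-M, M + 1)
A = np.diag(modes.astype(float)**2)
A += eps * (np.abs(modes[:, None] - modes[None, :]) == 1)
w, v = np.linalg.eigh(A)

# Each eigenvector should live on a pair {a, -a} up to O(eps^2) leakage.
leakage = []
for col in v.T:
    a = abs(modes[np.argmax(np.abs(col))])      # dominant block
    outside = np.abs(modes) != a
    leakage.append(np.sum(col[outside]**2))
print(max(leakage))                              # of order eps^2
```

The eigenvalues also stay $O(\varepsilon^2)$-close to the unperturbed values $a^2$, so at this order the truncated operator is indistinguishable from a block-diagonal one.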
{\it
Proof of the Theorem}. Eq. \eqref{0} is a non-autonomous linear Hamiltonian system in $L^2$:
$$
\dot u=-i \frac{\delta}{\delta \bar u} H_\varepsilon(u),\quad H_\varepsilon(u)=\frac12 \langle\nabla u,\nabla \bar u\rangle
+\frac12 \varepsilon \langle V(\varphi_0+t\omega,x)u,\bar u \rangle.
$$
Consider the extended phase-space $L^2\times {\mathbb T}^n\times {\mathbb R}^n=\{(u,\varphi,r)\}$. There
the equation above can be written as the autonomous Hamiltonian system
\begin{equation*}\begin{split}
&\dot u=-i \frac{\delta}{\delta \bar u} h_\varepsilon(u,\varphi,r),\\
&\dot\varphi=\nabla_r h_\varepsilon=\omega,\\
&\dot r=-\nabla_\varphi h_\varepsilon,
\end{split}
\end{equation*}
where
$\
h_\varepsilon(u,\varphi,r,\varepsilon)=\omega\cdot r+
\frac12 \langle\nabla u,\nabla \bar u\rangle
+\frac12 \varepsilon \langle V(\varphi,x)u,\bar u \rangle.
$
So
$h_\varepsilon$ is a small perturbation of the integrable quadratic Hamiltonian
$\ h_0=\omega\cdot r+
\frac12 \langle\nabla u,\nabla \bar u\rangle
$.
The KAM theorem from \cite{EK10} applies to perturbations of $h_0$.
To show how this implies the Theorem~\ref{tEK2} let us write $h_\varepsilon$ as
$$
h_\varepsilon(u,\varphi,r,\varepsilon)=\omega\cdot r+
\frac12 \langle\nabla u,\nabla \bar u\rangle
+\varepsilon f(u,\varphi,r).
$$
In our case
$f=\frac12 \langle V(\varphi,x)u,\bar u \rangle$. The theorem below is the main result of \cite{EK10}.
\begin{theorem}\label{tEK1}
There exist a domain
$\ \ {\cal O} =\{\|u\|<\delta\} \times {\mathbb T}^n\times\{|r|<\delta\}
$
and a symplectic transformation $\ \ \Phi:{\cal O}\to L^2\times {\mathbb T}^n\times {\mathbb R}^n\ $
which transforms $h_\varepsilon$ to
$$
h_0=\omega'\cdot r+\frac12\langle \nabla u,\nabla \bar u\rangle +\varepsilon \langle Qu,\bar u\rangle
+f'(u,\varphi,r),
$$
where $f'=O(|u|^3)+O(|r|^2)$.
\end{theorem}
The torus $T_0=\{0\}\times {\mathbb T}^n\times \{0\}$ is invariant for the transformed system, so
$\Phi(T_0)$ is invariant for the original equation. This is the usual KAM statement.
Here it is trivial, since it simply states that $u(t)\equiv 0$ is a solution of the original equation.
But the KAM theorem above tells us more. A simple analysis of the proof (see a remark in [EK2]) shows that if the
perturbation $\varepsilon f$ is quadratic in $u$ and $r$-independent, then
the KAM transformations are linear in $u$ and do not change $\omega$.
So the transformed Hamiltonians stay quadratic in $u$. Hence, the
Hamiltonian $h_0$ is such that $f'=0$. That is,
$$
h_0=\omega'\cdot r+\frac12\langle \nabla u,\nabla \bar u\rangle +\varepsilon \langle Qu,\bar u\rangle.
$$
This proves Theorem \ref{tEK2}.
\section{Quantum diffusion}\label{s3}
Let $(p,q)\in {\mathbb R}^d\times {\mathbb T}^d$. Consider
$H_\varepsilon(p,q)=|p|^2+\varepsilon V(\omega t,q)$, where $\omega\in {\mathbb R}^N$ and $V$ is analytic. Then
i) by KAM, for a typical $\omega$ and typical initial data $(p_0,q_0)$ the solution such that $(p(0), q(0))=(p_0, q_0)$ is time-quasiperiodic;
ii) for exceptional $\omega$ and $(p_0,q_0)$ we ``should'' have Arnold diffusion: the
action $p(t)$ of a corresponding solution slowly
``diffuses away''
from $p_0$.
As before,
the quantised Hamiltonian defines the dynamical equation \eqref{0}.
{\bf Claim 4.1.}
Let $d=1$, $N\ge2$, and let the potential $V$ be nondegenerate in a suitable sense.
Then there exist a smooth function $u(0,x)$ and $\omega\in {\mathbb R}^N$
such that
\begin{equation}\label{growth}
\limsup_{t\to\infty} \| u(t)\|_s=\infty
\end{equation}
for some $s\ge1$.
\smallskip
An {\it example} of a time-periodic potential $V$ satisfying \eqref{growth} is given in \cite{Bo99}. It is conjectured by
H.~Eliasson that the validity of the Claim for a {\it typical} potential follows from the method of his work \cite{E02}.
A proof of this assertion is work in preparation.
\section{Perturbed harmonic and anharmonic oscillators}
In Sections~\ref{s2},~\ref{s3} we
deal with the evolutionary Schr\"odinger equation under periodic boundary
conditions. Some similar results are available for equations in the whole space with growing
potentials:
\begin{itemize}
\item Consider Schr\"odinger equation in ${\mathbb R}^1$:
$$
\dot u= -i\big(-u_{xx} +(x^2+\mu x^{2m})u +\varepsilon V(t\omega,x)u\big),
$$
where $\mu>0, \ m\in{\mathbb N},\ m\ge2$; $V(\varphi,x)$ is $C^2$-smooth in $\varphi, x$ and analytic in $\varphi$, bounded uniformly
in $\varphi,x$.
An analogue of Theorem~\ref{tEK2} holds. See \cite{K1} (Section 2.5) for the needed KAM theorem.
\item Due to Bambusi-Graffi \cite{BG01}, the result holds for non-integer $m$. That is, for equations
$$
\dot u=-i \big( -u_{xx} +Q(x)u +\varepsilon V(t\omega,x)u\big),
$$
where $Q(x)\sim |x|^\alpha, \alpha>2$ as $|x|\to\infty$. The potential $V$ may grow to infinity as $|x|\to\infty$.
\item Liu-Yuan \cite{LY10} allow faster growth of the perturbing potential in $x$.
Their result applies to prove an analogue of Theorem~\ref{tEK2} for the {\it quantum Duffing oscillator}
$$
\dot u= - i\big(
-u_{xx} +x^4u +\varepsilon xV(t\omega,x)u\big).
$$
\item Due to Grebert and Thomann \cite{GT11}, the assertion holds for the perturbed harmonic oscillator
$$
\dot u=-
i\big(-u_{xx} +x^2 u +\varepsilon V(t\omega,x)u\big).
$$
\end{itemize}
What happens in higher dimensions, $d\ge2$? This is completely unknown.
\section{Quantum adiabatic theorem in semiclassical limit}
\label {quasi-classic}
In this Section we consider the classical system on $T^*{\mathbb R}^d={\mathbb R}^d\times{\mathbb R}^d$
with a Hamiltonian
\begin{equation}\label{ad_class1}
H(p,q,\tau)=|p|^2+ V(\tau,q),
\quad \tau =\varepsilon t,
\end{equation}
and the corresponding quantum system
\begin{equation}\label{S_h}
i\hbar \, \dot u= -{\hbar}^2\Delta u+ V (\tau,x) u = {\cal H}_\tau u,
\quad \tau =\varepsilon t,
\end{equation}
(see \eqref{0.0}). We assume that for each $\tau$ the potential
$V(\tau,x)$ grows to infinity with
$|x|$, so the operator ${\cal H}_\tau$ has a discrete spectrum.
We fix $\varepsilon$ small enough to allow some statements about the
dynamics of the classical system, and then pass to the limit as $\hbar\to 0$. This limiting
dynamics may be quite different from that in Section~\ref{ss1},
where $\hbar$ is fixed and $\varepsilon\to 0$, as was demonstrated by M.~Berry \cite{Ber84} in the following striking example. Let $d=1$ and suppose that for $\tau={\rm const}$ the potential $V$ has two (non-symmetric) wells. Generically, for $\tau={\rm const}$ and small enough $\hbar$ each well supports a family of pure quantum states localised mainly in this well. Consider a
solution $u(t,x)$ of equation (\ref{S_h}) with an initial condition which is a pure quantum state from the left well. For arbitrarily small $\varepsilon$ there exists $\hbar_0=\hbar_0(\varepsilon)>0$ such that if $0<\hbar<\hbar_0$, then for each $t\in[0,1/\varepsilon]$ the function $u(t,\cdot)$ is localised in the same left well. On the other hand, under some rather general assumptions, for arbitrarily small $\hbar$ there exist $\varepsilon_0=\varepsilon_0(\hbar)$ and positive constants $ a_1< a_2$, such that if $0<\varepsilon<\varepsilon_0$ then the function $u(t,\cdot)$ is localised in the right well for $ a_1\hbar/\varepsilon\le t\le a_2\hbar/\varepsilon$.
Discussion of the case $\varepsilon\sim \hbar$ is contained in \cite{Kar90}. In what follows $\varepsilon_0, c, c_i$ are positive constants.
\subsection{Systems with one degree of freedom}
Assume first that the
classical Hamiltonian (\ref{ad_class1}) has one degree of freedom. We suppose that $V$ is
$C^{\infty}$-smooth and that in the phase plane of the Hamiltonian system (\ref{ad_class1}), for each $\tau={\rm const }$, there is a domain filled by closed trajectories. In this domain we introduce action-angle variables $I=I(p,q,\tau)$, $\chi=\chi(p,q,\tau)\ {\rm mod}\ 2\pi$ (i.e. $\chi\in{\mathbb T}^1$). Invert these relations: $p=p(I, \chi, \tau)$, $q=q(I, \chi, \tau)$.
Suppose that there is an interval $[a_1, b_1]$, $ 0<a_1< b_1$, such that
the map $(I, \chi, \tau) \mapsto (p, q, \tau)$ is smooth for $I\in [a_1, b_1],\chi\in {\mathbb T}^1, \tau\in[0,1] $.
We express Hamiltonian (\ref{ad_class1}) via the
action variable and slow time: $H(p,q,\tau)=E(I, \tau)$.
For $\varepsilon >0$ let $(p(t), q(t))$ be a solution of the perturbed system with the Hamiltonian $H(p,q,\varepsilon t)$.
\begin{theorem} (see, e.g., \cite{A1}) There exist $\varepsilon_0, c_1$ such that for $0<\varepsilon<\varepsilon_0$ we have
$$| I(p(t),q(t),\varepsilon t)-I(p(0),q(0),0)| <c_1\varepsilon \;\; {\rm for} \;\;\;0\le t\le 1/\varepsilon\,.$$
\end{theorem}
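This theorem is easy to confirm numerically. The sketch below (ours; the frequency law $\omega(\tau)=1+\tau/2$, the initial data and the step size are assumptions of the demo) integrates the system with $H=p^2+\omega(\varepsilon t)^2q^2$, for which $I=E/2\omega$, and monitors the action on the interval $[0,1/\varepsilon]$.

```python
# Sketch: adiabatic invariance of the action for H = p^2 + w(eps t)^2 q^2
# (frequency law, initial data and step size are illustrative choices).
eps = 1e-3

def w(tau):
    return 1.0 + 0.5 * tau

def action(q, p, t):
    wt = w(eps * t)
    return (p * p + wt * wt * q * q) / (2.0 * wt)   # I = E / (2 w)

def rhs(q, p, t):                                   # dq/dt = dH/dp, dp/dt = -dH/dq
    wt = w(eps * t)
    return 2.0 * p, -2.0 * wt * wt * q

q, p, t, dt = 1.0, 0.0, 0.0, 0.02
I0, dev = action(q, p, 0.0), 0.0
while t < 1.0 / eps:                                # classical RK4 integrator
    k1q, k1p = rhs(q, p, t)
    k2q, k2p = rhs(q + dt / 2 * k1q, p + dt / 2 * k1p, t + dt / 2)
    k3q, k3p = rhs(q + dt / 2 * k2q, p + dt / 2 * k2p, t + dt / 2)
    k4q, k4p = rhs(q + dt * k3q, p + dt * k3p, t + dt)
    q += dt / 6 * (k1q + 2 * k2q + 2 * k3q + k4q)
    p += dt / 6 * (k1p + 2 * k2p + 2 * k3p + k4p)
    t += dt
    dev = max(dev, abs(action(q, p, t) - I0))
print(I0, dev)      # dev stays of order eps on the whole interval [0, 1/eps]
```

The energy $E=p^2+\omega^2q^2$ itself changes by an amount of order 1 over this interval (the frequency grows by a factor $3/2$), while the action drifts only by $O(\varepsilon)$, which is exactly the statement of the theorem.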
Now assume that for each $\tau={\rm const } \in [0,1]$ and each $I_*\in (a_1, b_1)$ the Hamiltonian \eqref{ad_class1} has a unique trajectory with the action $I=I_*$.
Consider the corresponding quantum system (\ref{S_h}). The operator ${\cal H}_{\tau}$ has a series of eigenfunctions $\varphi_{s}(\tau)=\varphi_s(\tau,x)$ such that
\begin{equation}
\label{norm}
||\varphi_{s}(\tau)||=1, \qquad \varphi_{s}(\tau,x)\to0\;\;\text{as}\;\; x\to\infty,
\end{equation}
and the corresponding
eigenvalues are
$\lambda_s(\tau) = E(I_s, \tau)+ O(\hbar^2)$, where $I_s=\hbar(s+1/2) \in [a_1, b_1]$ (this is the
Bohr-Sommerfeld quantisation rule, see \cite{MF}). We assume that $V$ is such that
the convergence to zero in \eqref{norm} is
faster than any negative power of $|x|$. Let $u(t,x)$ be a solution
of non-stationary equation (\ref{S_h}) with a pure state initial condition $u(0,x)=\varphi_{s_0}(0)$.
Denote by ${\mathbb P}_{(\alpha, \beta)}^\tau$ the orthogonal projector in $L^2({\mathbb R})$ onto the linear span
of vectors $\varphi_{s}(\tau)$ with $I_s\in (\alpha, \beta)$. The approach in \cite{Bor} leads to the following
\begin{conjecture}
\label{1d_quantum adiabatic}
There exist $ \varepsilon_0, c_1$ such that if $0<\varepsilon<\varepsilon_0$ and $0<\hbar\le \varepsilon$, then for any $m\ge1$ and a suitable
$c_2(m)>0$ we have
\begin{equation}
\label{qad_est}
\sup_{0\le t\le \varepsilon^{-1} }||u(t)-{\mathbb P}_{(I_{s_0}-c_1\varepsilon, I_{s_0}+c_1\varepsilon)}^{\varepsilon t} u (t)|| <c_2(m) \left(\frac{\hbar}{\varepsilon}\right)^m \,.
\end{equation}
\end{conjecture}
Thus $u(t,\cdot)$ stays close to the eigenspace that corresponds to eigenvalues from $O(\varepsilon)$-neighbourhood of $E( I_{s_0},\varepsilon t).$
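For the harmonic potential $V=x^2$ the Bohr-Sommerfeld rule is easy to test numerically: here $E(I)=2I$, so the eigenvalues of $-\hbar^2 u''+x^2u$ should be close to $2\hbar(s+1/2)$. The finite-difference sketch below (ours; the value of $\hbar$, the box $[-3,3]$ and the grid are illustrative choices) confirms this for the first ten levels.

```python
import numpy as np

# Finite-difference check of the Bohr-Sommerfeld rule for V = x^2,
# where E(I) = 2I and I_s = hbar (s + 1/2); the grid parameters and
# the Dirichlet box [-3, 3] are illustrative choices.
hbar, L, n = 0.05, 3.0, 800
x = np.linspace(-L, L, n)
h = x[1] - x[0]
main = 2.0 * hbar**2 / h**2 + x**2
off = -hbar**2 / h**2 * np.ones(n - 1)
Hmat = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
lam = np.linalg.eigvalsh(Hmat)

s = np.arange(10)
bohr = 2.0 * hbar * (s + 0.5)          # E(I_s) with I_s = hbar (s + 1/2)
print(np.max(np.abs(lam[:10] - bohr))) # small: the rule is accurate here
```

For the harmonic oscillator the Bohr-Sommerfeld values are in fact exact, so the residual seen here is purely the discretisation error of the second-order finite-difference Laplacian.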
\subsection{Systems with several degrees of freedom}
Now let the classical Hamiltonian (\ref{ad_class1}) have $d>1$ degrees of freedom. As before, we
assume that $V\in C^{\infty}$. For each $\tau={\rm const}$ let the corresponding Hamiltonian system be completely integrable and in its phase space there is a domain filled by invariant tori. In this domain we introduce action-angle variables $I=I(p,q,\tau)$, $\ \chi=\chi(p,q,\tau) \in {\mathbb T}^d$. Invert these relations: $p=p(I, \chi, \tau), q=q(I, \chi, \tau)$. Suppose that there is a compact domain $ {\cal A}\Subset {\mathbb R}^d_+$ such that
the map $(I, \chi, \tau) \mapsto (p, q, \tau)$ is smooth for $I\in {\cal A},\chi\in {\mathbb T}^d, \tau\in[0,1] $.
We express Hamiltonian (\ref{ad_class1}) via the action variables and slow time, $H(p,q,\tau)=E(I, \tau)$, and
denote by $\omega(I,\tau)= {\partial} E/{\partial} I$ the frequency vector of the unperturbed motion. We assume that the system is non-degenerate or iso-energetically nondegenerate (see definition in \cite{A1}, Appendix 8). The dynamics of the variables
$(I, \chi)(t)=(I,\chi)(p(t), q(t), \varepsilon t) $ is described by a Hamiltonian of the form (see \cite{A1}, Sect. 52F)
\begin{equation}
\label{H_I}
{\cal H}(I,\chi,\tau,\varepsilon)=E(I, \tau)+\varepsilon H_1(I,\chi,\tau),
\end{equation}
where $H_1$ is a smooth function on ${\cal A}\times{\mathbb T}^d\times[0,1]$.
Let $K_0$ be a compact set in $ {\mathbb R}^{2d}$. For $(p_0,q_0)\in K_0$ denote by
$(p,q)(t)= (p,q)(t,p_0,q_0)$ a solution of the perturbed system
with initial condition $(p,q)(0)=(p_0, q_0)$.
\begin{theorem} (see, e.g., \cite{AKN, LochM}). If $0<\varepsilon<\varepsilon_0$, then
$$
\int\limits_{K_0} \sup_{0\le t\le \varepsilon^{-1} }| I(p(t),q(t),\varepsilon t)-I(p(0),q(0),0)|dp_0dq_0 <c_1\sqrt{\varepsilon} \,.
$$
\end{theorem}
In systems with $d>1$ degrees of freedom the value of the action vector as a function of time
may change considerably
for some initial conditions due to resonances between the unperturbed frequencies, i.e. the components of the vector $\omega(I,\tau)$. We say that there is a resonance at some $(I, \tau)$ if $(k\cdot\omega)(I,\tau)=0$ for a suitable
vector $k\in{\mathbb Z}^d\setminus\{0\}$ (here $\cdot$ denotes the Euclidean scalar product).
Now consider the corresponding quantum system (\ref{S_h}). Under some conditions, the operator ${\cal H}_{\tau}$
has a series of eigenfunctions $\varphi_{m}(\tau)=\varphi_m(\tau,x)$, $m\in{\mathbb Z}^d_+$, satisfying \eqref{norm}, with eigenvalues $\lambda_m(\tau) = E(I_m, \tau)+ O(\hbar^2)$, where $I_m=\hbar(m+\frac{1}{4}\kappa) \in {\cal A}$ and $\kappa\in{\mathbb Z}^d$ is the vector of the Maslov-Arnold indices \cite{MF}
(the Bohr-Sommerfeld quantisation rule). Consider now the solution $u(t,x)$ of the non-stationary equation (\ref{S_h}) with a pure state initial condition $u(0,x)=\varphi_{m_0}(0)$.
If we fix some small $\hbar$ and proceed to the limit as $\varepsilon\to 0$, then Theorem \ref{tB-F} applies. However, now we are interested in another limit, when a small
$\varepsilon$ is fixed and $\hbar\to 0$. Not much is known about the corresponding limiting dynamics. So we will formulate
natural {\it hypotheses} about the limiting quantum dynamics as $\hbar\to0$ and will use them jointly with the
known results about the dynamics of the classical Hamiltonian (\ref{ad_class1}) with small $\varepsilon$.
For Theorem~\ref{tB-F} to hold
it is important that $\lambda_{m_0}(\tau)$ is an isolated eigenvalue for all $\tau$. Consider the
distance between $\lambda_{m}(\tau)$ and $\lambda_{m_0}(\tau)$, where
$ m, m_0\in{\mathbb Z}^d$ are such that
$m\ne m_0$ and $|m-m_0|\sim 1$:
\begin{equation*}\begin{split}
\lambda_{m}(\tau)-\lambda_{m_0}(\tau)&= E(I_m, \tau)-E(I_{m_0}, \tau) + O(\hbar^2)\\
&= (I_m-I_{m_0}) \cdot \omega(I_{m_0},\tau) +O((I_m-I_{m_0})^2)+ O(\hbar^2)\\
&=\hbar(m-{m_0})\cdot\omega(I_{m_0},\tau)+ O(\hbar^2) .
\end{split}
\end{equation*}
Thus if there is no resonance at $(I_{m_0}, \tau)$, then distance between $\lambda_{m_0}(\tau)$ and nearby eigenvalues is $\sim \hbar$. However, if there is a resonance $k\cdot\omega(I_{m_0},\tau)=0$, then $\lambda_{m_0+\nu k}(\tau)-\lambda_{m_0}(\tau)=O(\hbar^2)$ for integer $\nu\sim1$. Thus classical resonances correspond to almost multiple points of the spectrum of the quantum problem. Therefore it seems that they should
also manifest themselves in the quantum adiabaticity.
\medskip
For Hamiltonian (\ref{ad_class1}) there is rather detailed information about the dynamics in the two-frequency case
$d=2$. We will now use this information and the Bohr-Sommerfeld quantisation rule to
state some conjectures about the dynamics of the quantum system (\ref{S_h}) with $d=2$.
Following P.~Dirac \cite{Dir25} we assume that\footnote[4]{Condition (\ref{A}) just means that the ratio of the frequencies changes at a non-zero rate along solutions of the system with Hamiltonian (\ref{H_I}): $\omega_2^2\frac{d}{dt}\big(
\frac{\omega_1}{\omega_2}
\big) >c^{-1}\varepsilon$. Similarly, condition (\ref{barA}) means that the ratio of the frequencies changes at a non-zero rate in the adiabatic dynamics: $\omega_2^2\frac{d}{dt}\big(
\frac{\omega_1}{\omega_2}
\big)_{I={\rm const}} >c^{-1}\varepsilon$.}
\begin{equation}
\label{A}
\omega_2\frac{{\partial}\omega_1}{{\partial} \tau}-\omega_1\frac{{\partial}\omega_2}{{\partial} \tau}-\left(\omega_2\frac{{\partial}\omega_1}{{\partial} I}-\omega_1\frac{{\partial}\omega_2}{{\partial} I}\right)\frac{{\partial} H_1}{{\partial}\chi}>c^{-1}
\end{equation}
for all $I,\chi$. A general result by V.~I.~Arnold about averaging in two-frequency systems \cite{Arn65, AKN} implies that in this case
\begin{equation}
\label{Aest}
| I(p(t),q(t),\varepsilon t)-I(p(0),q(0),0)| <c_1\sqrt {\varepsilon} \quad {\rm for} \ 0\le t\le 1/\varepsilon\,.
\end{equation}
On the basis of the Bohr-Sommerfeld quantisation rule, and by analogy with Conjecture~\ref{1d_quantum adiabatic},
it is natural to conjecture that for $0\le t\le 1/\varepsilon$
the total probability $|u(t)|_{L_2}^2$ is mostly concentrated in the states
corresponding to actions in the $C\sqrt\varepsilon$-vicinity of the original action $I_{m_0}$.
Now assume that instead of (\ref{A}) the following condition is satisfied (cf. the fourth footnote):
\begin{equation}
\label{barA}
\omega_2\frac{{\partial}\omega_1}{{\partial} \tau}-\omega_1\frac{{\partial}\omega_2}{{\partial} \tau}>c^{-1}\,.
\end{equation}
This is a particular case of a condition introduced by V.~I.~Arnold in \cite{Arn65}.
If, in addition to (\ref{barA}),
some general position condition is satisfied (see details in \cite{AKN}), then estimate (\ref {Aest}) in which
$\sqrt{\varepsilon}$ is replaced with $\sqrt{\varepsilon}|\ln\varepsilon|$ holds for all initial data outside
a set of measure
$O(\sqrt{\varepsilon})$ \cite{AKN}, Sect. 6.1.8. The latter set mainly consists of initial data of trajectories with {\it capture into resonance};
along these trajectories the actions change by values $\sim 1$. Since for some initial
data $I(0), \chi(0)$ the solution $I(t)$ is not localised in the vicinity of $I(0)$, we should
not expect for the quantum system (\ref{S_h}) any estimate similar to that of Conjecture~\ref{1d_quantum adiabatic}, where the amplitudes of the eigenmodes tend to 0 as $\hbar \to 0$ outside some small interval of actions.
Consider the classical Hamiltonian (\ref{ad_class1}) under condition (\ref{barA}). Then capture is only
possible for a finite number of resonances,
and the dynamics with a capture into the resonance $k_1\omega_1+k_2\omega_2=0$ with
co-prime $k_1, k_2$ is as follows \cite{N05}. Denote $(I, \chi)(t)=(I,\chi)(p(t), q(t), \varepsilon t) $.
Suppose that at the initial moment $t=0$ we have no resonance:
$$
k_1\omega_1(I(0),0)+k_2\omega_2(I(0),0)\ne0,
$$
and let $\tau_*\in(0,1)$ be the first moment when the resonance occurs:
$$
k_1\omega_1(I(0),\tau_*)+k_2\omega_2(I(0),\tau_*)=0.
$$
Then for $0\le\varepsilon t\le \tau_*$ the values of actions are approximately
conserved:
$$
I(t)=I(0)+O(\sqrt{\varepsilon}|\ln\varepsilon|) \,.
$$
For $\tau_*\le\varepsilon t\le1$ the system is captured into resonance, and evolution of actions is described by two relations:
\begin{equation*}\begin{split}
& k_1\omega_1(I(t),\varepsilon t)+k_2\omega_2(I(t),\varepsilon t )=O(\sqrt{\varepsilon}|\ln\varepsilon|),\\
& k_2 I_1(t)- k_1 I_2(t)= k_2 I_1{(0)}- k_1 I_2{(0)}+ O(\sqrt{\varepsilon}|\ln\varepsilon|)\;.\;
\end{split}
\end{equation*}
The first of these relations means that the system stays near the resonance, while the second says that the dynamics has an approximate first integral. Jointly the two relations approximately
define the trajectory $I(t)$ for $\tau_*\le \varepsilon t\le1$.
Based on this description and the Bohr-Sommerfeld quantisation rule, by analogy with
Conjecture~\ref{1d_quantum adiabatic} we conjecture that for the quantum problem \eqref{S_h} the
capture into resonance in the classical system \eqref{ad_class1} results in the transfer of a $C\varepsilon$-amount of the total probability from the vicinity of the initially excited pure state, corresponding to the action $I_{s_0}$, to the
vicinity of a state $s_t\in {\mathbb Z}^2$ such that the lattice vector
$I(t)=\hbar(s_t+\frac{1}{4}\kappa) $ satisfies the two relations above. This transfer happens for $t\ge\varepsilon^{-1}\tau_*$. When
$\hbar\to0$, this $C\varepsilon$-amount stays positive, of order $\varepsilon$.
\smallskip
There is also a more detailed description of the dynamics of the phases of points
captured into resonance \cite{N05}. Consider
the resonant phase $\gamma= k_1\chi_1+k_2\chi_2$. It turns out that the behaviour of $\gamma$ is described by an auxiliary Hamiltonian system with one degree of freedom and the Hamiltonian of the form
$$
F=\sqrt{\varepsilon}\left(\alpha(\tau)p_{\gamma}^2/2 +f(\gamma, \tau) +L(\tau)\gamma \right)\,.
$$
Here $p_{\gamma}, \gamma$ are canonically conjugate variables, function $f$ is $2\pi$-periodic in $\gamma$,
and $\alpha,L\ne 0$.
In the phase portrait of the system for frozen $\tau$ there are domains of oscillations of $\gamma$. Motion in these
domains can be approximately represented as a composition of the motion along a trajectory of the Hamiltonian $F$ with frozen $\tau$ and a slow evolution of this trajectory due to the change of $\tau$.
This evolution follows the adiabatic rule: the area surrounded by the trajectory remains constant.
In the original variables $p,q$ this motion is represented as motion along a slowly evolving torus.
Angular variables on this torus are $\gamma$ and $\psi=l_1\varphi_1+l_2\varphi_2$, where $l_1$ and $l_2$ are integers
such that $k_1l_2-k_2l_1=1$. This torus drifts along the resonant surface $k_1\omega_1+k_2\omega_2=0$ as described above. It is not known which quantum object corresponds to it.
Q: How to create Dialogflow login for users? How would I go about making a user login / account-creation flow for a Dialogflow agent that would consist of a username and password, and then store it on Firebase / Firestore?
I'm making an app that will require users to log in, but the app depends almost entirely on the Dialogflow agent, and considering most things on Dialogflow are fairly easy, I figured this might be easier.
From what I've read, there is a way of doing this through the Actions on Google console; however, I was hoping to use a webhook / the inline editor to make a function. I would provide a code sample of what I have tried, but truthfully I'm not even really sure where to start.
With your answer, if you could maybe provide a general code snippet I could probably build off of that.
Thank you for your help or any suggestions!
Note: If ultimately the Actions on Google route is a lot easier and better, I will go that route; I just do not want to have the dependency on the Google Assistant.
A: Account linking is handled by Actions on Google, instead of in Dialogflow (though you'll still have to handle the fulfillment on your end). Dialogflow itself doesn't have the capability of doing any user-login flow, but can assist AoG in doing so.
Authentication comes in 3 flavors, the easiest being "Google Sign-In", which just requests that a user log in with their Google Account. More info here. The example covers your question pretty closely, and should even work using the inline editor.
You could write your own OAuth service (which would somewhat allow you to store user credentials in Firestore), but it is definitely going to be more work. More info on the AoG details here.
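One practical note regardless of the route chosen: if you do roll your own username/password flow, never store raw passwords in Firestore — store a salted hash. A minimal sketch using only the Python standard library (the Firestore write itself is only indicated in a comment; `users_collection` and the document layout are hypothetical, not a Dialogflow or Firebase API):

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Derive a PBKDF2-HMAC-SHA256 digest; returns (salt, digest)."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    """Re-derive the digest and compare in constant time."""
    _, candidate = hash_password(password, salt)
    return hmac.compare_digest(candidate, digest)

# In your webhook fulfillment you would then store something like:
# users_collection.document(username).set({"salt": salt, "hash": digest})
```

Only the salt and digest ever need to reach Firestore; verification repeats the derivation on login.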
It seems that the media industry, in many respects, will at least take some action against those who are outed or accused of sexual misconduct, even though when it comes to government, we often just let it slide. The latest, following all the Harvey Weinstein brouhaha, comes aimed at actor Kevin Spacey, star of the Netflix series "House of Cards."
In the aftermath of today's allegations of unwanted sexual advances in 1986 by Kevin Spacey against a then-teenage Anthony Rapp, Netflix has decided to pull the plug on House of Cards after the upcoming sixth season next year, Deadline reported.
"I don't know how— We got in through the front door," Rapp continued. "We didn't have to show ID. And we sat with him in some VIP area." Rapp noted that he had no memory of being offered alcohol — "It was just a fun night just talking and hanging out," he said — and at some point, Spacey invited him to attend a party he was hosting a few days later at his Manhattan apartment.
At some point, Rapp said he turned to see Spacey standing at the bedroom door. And that's when he first realized that everyone else had left the party. They were alone.
After pushing Spacey off him, Rapp remembered he was able to step into the bathroom and close the door. "I was like, 'What is happening?'" he said. "I saw on the counter next to the sink a picture of him having his arm around a man. So I think on some level I was like, Oh. He's gay. I guess. Then I opened the door, and I was like, 'OK, I'm going to go home now.' He followed me to the front door of the apartment, and as I opened the door to leave, he was leaning on the front door[frame]. And he was like, 'Are you sure you wanna go?' I said, 'Yes, good night,' and then I did leave."
House of Cards is the fictitious story of U.S. Rep. Francis Underwood of South Carolina, who starts out as a ruthless politician seeking revenge.
He's promised the post of Secretary of State in exchange for his support to help ensure the election of Garrett Walker to the presidency.
However, Walker changes his mind before the inauguration, telling Underwood he's too valuable in Congress.
Outwardly, Underwood accepts his marching orders, but secretly he and his wife, an environmental activist, make a pact to destroy Walker and his allies.
The story is based on the United Kingdom miniseries of the same name. However, the US version offers a look behind the scenes at the greed and corruption in American politics.
This is partially where its popularity comes from, as today's politics have become nothing more than a form of entertainment for many.
Deadline.com added, "We have also heard that Netflix's Spacey-starring film Gore about the acerbic author Gore Vidal may be on the chopping block now too. The Reed Hastings-run streaming service has stayed officially silent on the sexual advance claims. No word yet if the NYPD is looking into the Rapp claims that allegedly occurred in its jurisdiction."
Perhaps this will motivate people to become involved in real politics with real consequences.
The 10 Dumbest Artificial Intelligences Ever Created
Gordon Jackson
Filed to:superlist
The Singularity is coming soon! Artificial intelligences will reinvent everything, and there will be unlimited rice pudding. Except, of course, that when we imagine artificial intelligences in fiction, they're often not that smart. Case in point? These 10 ridiculously dumb artificial intelligences.
1) B1 Battle Droids (Star Wars)
Controlled by a central computer intelligence, inexpensive to manufacture and easy to produce (limbs were held on with electromagnets), the battle droids were intended to overwhelm their foes in swarms rather than with ruthless cunning. Following the Battle of Naboo, where the droids' central intelligence hub was destroyed by a nine-year-old boy, the Federation began experimenting with individual AIs—which only caused more problems.
From Wookieepedia:
"Labored with more and more specialized roles that pushed the limits of their programming, many older droids developed personality quirks and a tendency to excessively comment on their situations in an attempt to handle the data overflow that had strained their inadequate logic modules."
That's very thoughtful and interesting! Peter Watts makes a similar case for consciousness as a biological malfunction in his excellent 2006 novel, Blindsight.
2) Yung-Star & Cy-Star (Terrahawks)
Gerry Anderson's final "Supermarionation" show—actually filmed in "SuperMACROmation," meaning more latex puppets were involved—Terrahawks involved a geriatric android named Zelda invading Mars as an outpost for her future attack on Earth. Using robotic Cubes that worked like building blocks, Zelda's plans mostly either involved weird applications of her squared foot soldiers, or defrosting monsters that she kept in her refrigerator.
Zelda, who cherished the aesthetics of the "oldest and wisest"—commonplace on her home planet, Guk—built a robotic family retaining these ideals. They included a son named Yung-Star, a gurgling, dim-witted old man who ate rocks. And a sister named Cy-Star, an excitable old woman who was only dimly aware of her surroundings.
Dressed in rags and dismissed as an idiot by his siblings, Yung-Star still retained a high opinion of himself—even if his best idea was simply to stack twenty Cubes into the shape of a gun. As for Cy-Star, she kept a pet Cube named Pluto, while constantly straining to prevent her wig from sliding off her head each time she would bellow her catchphrase, "WONDERFFFUUULL"—which was uttered whenever a new genocidal plan was put forth by Zelda.
Despite its terrifying premise, Terrahawks is actually quite beloved: Big Finish just recently reunited the original voice cast for a fourth, audio-only season just this year—with another CD box set on the way in 2016.
3) HAL 9000 (2001: A Space Odyssey)
After some minor malfunctions, the scheming computer HAL (Heuristically programmed ALgorithmic computer) plots to kill the human crew members before they get the chance to shut him down. HAL's task of maintaining the ship's functions offers a litany of methods to crush, maim, starve and suffocate the members of his crew, should he desire—but in the film, HAL cuts Dr. Poole's air hose during a spacewalk, locks Dave Bowman outside the ship, and switches off the life support mechanism in each astronaut's hibernation chamber. For a program capable of lip-reading, we expected better.
4) Bill & Ted's Robot Doubles (Bill and Ted's Bogus Journey)
Sent from the future to kill Bill and Ted by a jealous cultist named De Nomolos, the robotic replicas only accomplished their mission by throwing the duo off a cliff. The perfect duplicates in every way, "Evil" Bill and Ted even retained the same level of intelligence as their death-marked counterparts.
And while they did technically succeed in killing Bill & Ted, sending the duo on a journey to the afterlife, outer space and beyond, they were ultimately unspooled by another pair of robots built from local hardware store parts—assembled by the dual alien entity the original Bill and Ted befriended along the way, a being collectively known as "Station!"
5) ED-209 (RoboCop)
The Enforcement Droid Series 209, a fully automated "peacekeeping" droid built by Omni Consumer Products for "urban pacification," went disastrously wrong during its first demonstration, murdering a junior executive in the process. Built to look cool, with every cost-cutting measure undertaken by manufacturer Dick Jones, ED-209's failure to perform ultimately gave rise to the RoboCop program—so we can thank it for that! Unable to navigate stairwells and programmed to issue forth a sound similar to a squealing pig in times of distress, the ED-209 also suffers from bad logic circuits—unable to process information as quickly as a human brain, the model is easily hacked, tricked and manipulated. In RoboCop 3, a young girl named Nikko is able to override its command system by simply manipulating three serial ports on its right leg.
6) Leon (Blade Runner)
A Replicant of below-average intelligence who prickles at the distinction between a tortoise and a turtle, Leon makes up for his slow wit with tremendous strength and lightning-fast reflexes. Regrettably, he made himself vulnerable while taunting his enemy, the replicant-hunting Deckard, and consequently had his brains—his only weak point!—obliterated with a future-revolver by Sean Young.
7) Box (Logan's Run)
"Protein, Plankton, fish from the sea…"
The demented, grandstanding robot with his own menagerie of ice sculptures and frozen specimens from 1976's Logan's Run? Pretty dumb.
Box relishes his job of cataloging and preserving life forms in ice—perhaps a little too much. The flailing robot causes a cave-in while chasing Michael York and Jenny Agutter through his lair after he got a little too caught up in the moment, blasting away at the refugees with his ice beams, chuckling all the way.
8) Waspinator (Transformers: Beast Wars)
While the violent and excitable robot-dinosaurs, the Dinobots, should be considered ("Grimlock smash brains!"), the dumbest Transformer in the franchise is probably Waspinator—or at least, so he appears. Destroyed or demolished in nearly every episode of Beast Wars, Waspinator, who speaks in buzzing, fragmented sentences, does at least show some level of introspection while on a mission to reclaim his colleague Inferno's scattered parts. He asks himself: "Inferno blow up, Waspinator salvage. Waspinator blow up, nobody salvage. Why universe hate Waspinator?"
9) Dynomutt (Dynomutt)
The robotic sidekick to superhero Blue Falcon, Dynomutt's history and backstory is ambiguous: is he a robot dog, like K9? Or is he an enhanced cyborg like Inspector Gadget? Whatever the case, we can all agree he is very, very stupid. More of a liability for the Blue Falcon's war on crime than an asset, Dynomutt's over-eagerness to lend a hand often causes misery and extensive property damage, but it's his very dim-wittedness that often saves the day, too—accidentally capturing such fearsome super criminals as The Worm, The Glob and Madame Ape Face.
10) Mega Man (Captain N)
While the character is typically portrayed as a semi-tragic figure, lamenting the necessity to destroy his reprogrammed robot brethren for some vague and unattainable vision of peace, Captain N decided to go in a different direction with Mega Man altogether. Instead of ruminating on the necessity of violence for social change, the TV series had him shout phrases like, "Mega-hi!" in a croaky, toad-like voice. The dim bulb in the roster of Captain N's Nintendo heroes, this version of the character is considered to be the worst to date.
The number one complaint from fans, though? He's the wrong color!
WHY GIVE TO THE CHURCH?
"No one can serve two masters. For you will hate one and love the other; you will be devoted to one and despise the other. You cannot serve both God and money."
The greatest competitor for the throne of our hearts is our money. In this passage, Jesus is calling us to make a decision - will we serve Him or money? Giving allows us to demonstrate that Jesus has our hearts more than our material possessions.
Whether you'd like to give a single gift, schedule ongoing donations, or view your giving history, you can do it all online. It's quick, easy, and secure. With this online service you can give by using your debit or credit card (that you pay off each month!), whichever is most convenient. This safe and flexible option is one of the easiest ways to give to The Refuge. Thank you for your support and for making an eternal difference in the lives of others.
Text any dollar amount to 501.404.9396 (example: $50).
Follow a quick, self-guided setup process to tie your mobile phone to The Refuge, your donor account, and a payment source. After that, donating is as easy as sending a text.
## Intermediate Algebra (12th Edition), Chapter 4 Test, Problem 7

$\text{Scientific notation: } 3.0\times10^{-4} \qquad \text{Standard form: } 0.0003$

A number in scientific notation takes the form $a\times10^n$ where $1\le a\lt10$ and $n$ is an integer. Hence, the given expression, $\dfrac{2{,}500{,}000\times0.00003}{0.05\times5{,}000{,}000},$ is equivalent to
$$\dfrac{(2.5\times10^6)(3.0\times10^{-5})}{(5.0\times10^{-2})(5.0\times10^{6})}.$$
Using the law of exponents which states that $a^x\cdot a^y=a^{x+y}$, the expression above simplifies to
$$\dfrac{(2.5)(3.0)\times10^{6+(-5)}}{(5.0)(5.0)\times10^{-2+6}} = \dfrac{7.5\times10^{1}}{25\times10^{4}}.$$
Using the law of exponents which states that $\dfrac{a^x}{a^y}=a^{x-y}$, this becomes
$$(7.5\div25)\times10^{1-4} = 0.3\times10^{-3} = 3.0\times10^{-4}.$$
Hence, the simplified form is $3.0\times10^{-4}$ in scientific notation, or $0.0003$ in standard form.
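The scientific-notation arithmetic in the preceding excerpt can also be checked numerically; a quick standard-library sketch (not part of the original solution):

```python
# Evaluate (2,500,000 * 0.00003) / (0.05 * 5,000,000) directly.
numerator = 2.5e6 * 3.0e-5      # = 75.0
denominator = 5.0e-2 * 5.0e6    # = 250000.0
result = numerator / denominator
print(result)  # 0.0003, i.e. 3.0e-4
```

The direct evaluation agrees with the exponent-law manipulation above.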
Definition at line 33 of file movedfwdfrmsbyobjpos.hxx.
Definition at line 26 of file movedfwdfrmsbyobjpos.cxx.
Definition at line 30 of file movedfwdfrmsbyobjpos.cxx.
Definition at line 54 of file movedfwdfrmsbyobjpos.hxx.
Definition at line 62 of file movedfwdfrmsbyobjpos.cxx.
References SwFrame::FindPageFrame(), SwIterator< TElementType, TSource, eMode >::First(), SwPageFrame::GetPhyPageNum(), SwLayoutFrame::IsAnLower(), maMovedFwdFrames, and SwIterator< TElementType, TSource, eMode >::Next().
Definition at line 46 of file movedfwdfrmsbyobjpos.cxx.
Definition at line 35 of file movedfwdfrmsbyobjpos.cxx.
Definition at line 41 of file movedfwdfrmsbyobjpos.cxx.
Definition at line 36 of file movedfwdfrmsbyobjpos.hxx.
Referenced by DoesRowContainMovedFwdFrame(), FrameMovedFwdByObjPos(), Insert(), and Remove().
CSKA (in Cyrillic: ЦСКА, an abbreviation of Центральный Спортивный Клуб Армии - Tsentralny Sportivny Klub Armii, which translates as "Central Sports Club of the Army") is a name given to several sports clubs or institutions in Russia and Eastern Europe:
Multi-sport clubs
CSKA Moscow, a Russian multi-sport club, which notably has:
a football section,
a women's football section,
a basketball section,
a women's basketball section,
an ice hockey section,
a handball section,
a women's handball section,
a rugby section,
a former volleyball section.
CSKA Sofia, a Bulgarian multi-sport club, which notably has:
a football section,
a basketball section,
a women's basketball section,
an ice hockey section,
a volleyball section,
a handball section.
CSKA Kiev, a former Ukrainian multi-sport club
Football clubs
PFK CSKA Sofia, a Bulgarian football club
FK CSKA 1948 Sofia, another Bulgarian football club
CSKA Dushanbe, a former Tajik club
CSKA-Pamir Dushanbe, a Tajik club
Other
CSKA Ice Sports Palace, a multi-purpose sports arena in Moscow
former name of HK Arystan Temirtaw, a Kazakh ice hockey club
VEB Arena, a football stadium also known as Arena CSKA
CSKA Samara, a women's basketball club
SKA Saint Petersburg, an army ice hockey club based in Saint Petersburg
Q: Swift 3: Enlarging images when selected in CollectionView Good day, I have only seen examples of this in Objective-C or Swift 2 but not yet in Swift 3 running Xcode 8. My situation is that I have a collection view with a set of images and I want them to be enlarged when the user taps on them. Here is my code:
@IBOutlet weak var collectionView: UICollectionView!
var images = ["catPic1.jpg", "catPic2.jpg", "catPic3.jpg"]
override func viewDidLoad() {
super.viewDidLoad()
// Do any additional setup after loading the view.
collectionView.delegate = self
collectionView.dataSource = self
} // end of view did load
public func collectionView(_ collectionView: UICollectionView, numberOfItemsInSection section: Int) -> Int
{
return images.count
}
public func collectionView(_ collectionView: UICollectionView, cellForItemAt indexPath: IndexPath) -> UICollectionViewCell
{
let cell = collectionView.dequeueReusableCell(withReuseIdentifier: "CustomCell" , for: indexPath) as! CustomCell
cell.myImage.image = UIImage(named: images[indexPath.row])
return cell
}
By the way, I have the collection view cell code in another swift class. Here's the code:
class CustomCell: UICollectionViewCell {
@IBOutlet weak var myImage: UIImageView!
}
All the images are showing correctly, I just need them to somehow enlarge when the user selects them. Thanks.
A: You can use another UIImageView to display the selected image fullscreen:
func collectionView(_ collectionView: UICollectionView, didSelectItemAt indexPath: IndexPath) {
let imageView = UIImageView(image: UIImage(named: images[indexPath.row]))
imageView.frame = self.view.frame
imageView.backgroundColor = .black
imageView.contentMode = .top
imageView.isUserInteractionEnabled = true
let tap = UITapGestureRecognizer(target: self, action: #selector(dismissFullscreenImage))
imageView.addGestureRecognizer(tap)
self.view.addSubview(imageView)
}
// Tap handler that dismisses the fullscreen image
func dismissFullscreenImage(_ sender: UITapGestureRecognizer) {
sender.view?.removeFromSuperview()
}
Q: How to set specific elements of an array with minimal code C++ I was wondering if there is an easy way to set more than one element of an array in a single line of code. For example, instead of:
int Array[10];
Array[4] = 100;
Array[7] = 100;
Is there some way to do something like the following?
int Array[10];
Array[4 & 7] = 100;
I know the above code doesn't work, but I can't really think of any other way to display my question. Anyhow, thanks in advance to anyone who would like to share their opinion :)
A: int array[10];
array[4] = array[7] = 100;
array[4] = 100, array[7] = 100;
4[array] = 7[array] = 100;
EDIT:
You may want to use loops for a somewhat dynamic setting of elements
int i, array[10] = {0}, array_element[3] = { 3, 5, 6 }; //Positions you wish to set
for (i = 0; i < 3; i++) array[array_element[i]] = 100;
Another option is to define a function if by 'minimal' code you mean abstraction
overlord::set(array, 100, "3, 5, 6");
overlord::set(array, 100, "{ 3, 5, 6 }");
overlord::set(array, "3: 200, 5: 400, 6: 500");
Either way you won't find "DYNAMIC" language features in C++ or C. You'll have to implement an abstraction over basic existing functionality to be able to get that silly dynamic typing.
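A helper along the lines of the hypothetical `overlord::set` above can be sketched with `std::initializer_list` (this is an illustration of one possible abstraction, not a standard utility):

```cpp
#include <cassert>
#include <initializer_list>

// Set every listed index of arr to value.
void set_at(int* arr, std::initializer_list<int> indices, int value) {
    for (int i : indices) arr[i] = value;
}
```

Usage: `set_at(array, {4, 7}, 100);` replaces the two separate assignments, and the index list can grow without adding lines.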
A: You could possibly do it this way
int Array[10];
Array[4] = Array[7] = 100;
A: If you are trying to set a range of elements you can use a for loop
int array[10];
for(int i=0; i<10; i++) {
array[i] = 100;
}
You can also do it for only certain numbers by using this trick
int nums[2] = { 4,7 }; //Positions you wish to set
for(int i=0; i<2; i++) {
array[nums[i]] = 100; //nums[0] = 4, array[4]
//nums[1] = 7, array[7]
}
A: You have this perfectly readable code:
int Array[10];
Array[4] = 100;
Array[7] = 100;
And you want to "set more than one element of an array in a single line of code." Okay:
int Array[10];
Array[4] = 100; Array[7] = 100;
But why would you? Is there a newline shortage that I haven't heard about?
Just Listed: 280 Vernon Ave, Princeton, BC - $279,000!!
Investor, first-time buyer, or moving out of the Lower Mainland?
Currently rented for $1,000 a month to a tenant who would like to stay if possible. 2,300 sq ft home on a 1/4 lot with lane access, within walking distance of downtown Princeton. 2 bedrooms up and a 2-bedroom suite down. 2 large decks with views to the south. The price reflects that it needs some paint and a little TLC. This could be the time to invest in Princeton, as a large manufacturing plant is opening in the next 6 months with 300 jobs available.
To view this property and other properties in the Princeton area.
# Fourier Series: Triangular

## Formula

The triangular wave is a symmetric waveform with a stronger decrease towards higher partials than the square wave or sawtooth. Its Fourier series has the following characteristics:

- only odd harmonics
- alternating sign
- amplitudes decaying with the square of the partial index

$X(t) = \frac{8}{\pi^2} \sum\limits_{i=0}^{N} (-1)^{i} \frac{\sin(2 \pi (2i +1) f\, t)}{(2i +1)^2}$

## Interactive Example

(The original page features an interactive example with controls for pitch, number of harmonics and output gain, plus time-domain and frequency-domain displays.)

# Sampling & Aliasing: Square Example

For the following example, a sawtooth with 20 partials is used without band limitation. Since the built-in Web Audio oscillator is band-limited, a simple additive synth is used in this case. At a pitch of about $2000 Hz$, the aliases become audible. For certain fundamental frequencies, all aliases will be located at actual multiples of the fundamental, resulting in a correct synthesis despite aliasing. In most cases, the mirrored partials are inharmonic and distort the signal, and for higher fundamental frequencies the pitch is fully dissolved.

## Anti-Aliasing Filters

In analog-to-digital conversion, simple anti-aliasing filters can be used to band-limit the input and discard signal components above the Nyquist frequency. In the case of digital synthesis, however, this principle cannot be applied. When generating a square wave signal with an infinite number of harmonics, aliasing happens instantaneously and cannot be removed afterwards.

## Band-Limited Generators

In order to avoid the aliasing, band-limited signal generators are provided in most audio programming languages and environments.

# Karplus-Strong in Faust

## White Tone Oscillator

As explained in the Sound Synthesis Introduction, the Karplus-Strong algorithm is based on a sequence of white noise. The following example uses a feedback structure to create a looped version of a white noise array.

The main components of the example are the excitation and the resonator. The resonator is a feedback loop with an adjustable delay. The excitation passes a random sequence to the resonator once the gate is activated. It will oscillate until the gate is released.

Load the example in the Faust online IDE for a quick start:

```
// white_tone.dsp
//
// Henrik von Coler
// 2021-07-04

import("all.lib");

// Control parameters:
freq = hslider("freq Hz", 50, 20, 1000, 1) : si.smoo; // Hz
gate = button("gate");

// processing elements for excitation:
diffgtz(x) = (x-x') > 0;
decay(n,x) = x - (x>0)/n;
release(n) = + ~ decay(n);
trigger(n) = diffgtz : release(n) : > (0.0);

P = SR/freq;

// Resonator:
resonator = (+ : delay(4096, P) * gate) ~ _;

// processing function:
process = noise : *(gate : trigger(P)): resonator <: _,_;
```

## Karplus-Strong in Faust

The Karplus-Strong algorithm for plucked string sounds is explained in detail in the Sound Synthesis Introduction. That implementation is based on a ring buffer with a moving average filter. For the Faust implementation, this example has been adjusted slightly (Smith, 2007).

### Exercise

Extend the White Tone example with a filter in the feedback to turn it into a Karplus-Strong synthesis.

## References

- Romain Michon, Julius Smith, Chris Chafe, Ge Wang, and Matthew Wright. The Faust physical modeling library: a modular playground for the digital luthier. In International Faust Conference, 2018.
- Julius Smith. Making virtual electric guitars and associated effects using Faust. REALSIMPLE Project, 2007. URL: https://ccrma.stanford.edu/realsimple/faust_strings/faust_strings.pdf

# Spatializing Rhythmic Music

Spatialization of rhythmic music is quite different from working with experimental content. Movements need to be synced to the rhythmic structure. For techno and related genres it is even more restrictive, since movements and signal alteration through rendering algorithms must not degrade the bass structure and the transients. The music must not lose its energy, and spatial effects have to be used carefully. Kick and bass are usually not spatialized at all, making them as tight as possible.

## Garbicz 2019

### Setup

A surround system with a diameter of ~22 m, featuring 6 Funktion One Resolution 2 and 4 Res9 (and many subs), was installed at Garbicz Festival 2019. Ambisonics rendering was performed with IRCAM's PanoramixApp.

### Software

PD was used to perform beat-tracking and real-time feature extraction, as well as for generating synced source movements. An Ableton Push could be used with the PD patches for controlling the source movements with a simple, intuitive interface. The patch allows the spatialization of multiple mono sources for live acts and the treatment of stereo sources through MS processing for DJs.

## Movement Demo

This video shows beat-aligned source movements and free rotations for a multi-track spatialization. It is created with GEM (Pure Data), which is also used for visualizing the source movements during operation of the system. This example is only a mockup - the audio is not rendered from these movements but the standard stereo mix is used.

Demonstration of source movements with 'Combination 03' by JPLS

# 2020-2021 Class

## Concept

For the first online edition of the SPRAWL class, all students were equipped with the original Access Points, used for the original approach. The concept relied on irregular weekend sessions with additional meetings during the week.

In each session, the SPRAWL System was used for audio connections. Video and additional talkback for troubleshooting were realized with a parallel Zoom session, as shown in the figure below. For live streams, as in the closing concert, the audio from the JackTrip connections is merged with the video from the Zoom meeting by means of OBS on an additional Access Point dedicated to streaming.

## Scores and Compositions

Several concepts were explored during the semester, including graphic scores, text-based compositions and configuration-based improvisations.

### Graphic Scores

Graphic scores are a simple but effective means for guiding improvisations in network performances. They can be distributed to all participants via screen sharing to ensure a decent synchronization.

### Blodgett

Blodgett is a text-based score by Robert Stokowy, commissioned by the EOC in 2019:

https://robertstokowy.bandcamp.com/track/blodgett-i

The score gives precise instructions on the spatial behavior of the sound sources. In the SPRAWL System, each participant takes control of his/her own source position, thus sharing the task of spatialization. Focusing on simple properties like proximity/distance and movement/stillness, each student programmed a Pure Data patch, allowing GUI-based control on the default Access Points' touch screen.

### Granular Confusion

Granular Confusion is a concept by Mario Hillenbrand, developed for the SPRAWL System. The Access Points are divided into sound generators and processors:

- Generators can use any means for sound generation.
- Processors are all running the same granular patch.

The Access Points are statically connected, as shown in the figure below. An additional sound director takes care of spatialization and manages the start/stop procedure of the configuration. A minimal timeline is used to guide the improvisation, telling the generators when to be active.

# Compiling JackTrip

The SPRAWL System needs some additional features that are missing from the main branch of JackTrip. To build the correct JackTrip version, the JackTrip git repository must be cloned and checked out to the correct branch.

Therefore git must be installed. On macOS git is often already installed. Linux users should install git through their package manager. Windows users download the installer from git-scm.

## Getting the JackTrip Source Code

Now the JackTrip source code can be downloaded from the official JackTrip repository:

```
git clone https://github.com/jacktrip/jacktrip.git
git checkout nils
```

Changes in the remote repository can be pulled with:

```
git pull
```

Afterwards you can follow the official build instructions.

# Moving Files with SCP

SCP (secure copy protocol) is an SSH-based tool for transferring files between machines in local and wide area networks. It is a safe and quick way to exchange data.

## Copying to a Remote Machine

The following command copies the file testfile.html from the local machine to the home directory of the user student on the server with the specified address 11.22.33. Instead of the home directory (~/), any other target can be specified:

```
$ scp testfile.html student@11.22.33:~/
```

Add the -r flag to copy a directory recursively:

```
$ scp -r /foo/bar student@11.22.33:~/
```

Exercise: Select or create a short WAV file on your local machine and copy it to your personal directory on the server using SCP.

## Copying From a Remote Machine

To copy a file from a remote server, the arguments' order needs to be swapped. The dot (.) copies the data to the current directory. Any other path can be used as target.

```
$ scp student@85.214.78.6:~/WebAudioFreqGain.html .
```

Exercise: Create a text file in your personal directory on the server. Copy it to your local machine using the SCP command from your local machine.

# Waveshaping Example

The following interactive example offers control over the pre-gain to add overtones to the sinusoidal source signal. (The original page features controls for pitch, pre-gain and output gain, plus time-domain and frequency-domain displays.)

# Using Python for Control

Python offers many useful tools for preparing data and controlling synthesis processes. Although it can also be used for actual digital signal processing, its versatility makes it a great tool for auxiliary tasks. Most notably, it can be used for flexible processing and routing of OSC messages, especially in the field of data sonification.

## Python & OSC

A large variety of Python packages offers the possibility of using OSC. They can be installed using pip:

```
$ pip install python-osc
$ pip install pythonosc
```

An example project for controlling a Faust-built synthesizer with Python is featured in this software repository: https://github.com/anwaldt/py2faust_synth

## Python & JACK

The JACK Audio Connection Kit Client for Python by Matthias Geier connects Python processes to the JACK server. This integration of Python in a JACK ecosystem can be helpful not only for audio processing, but also for synchronization of processes.
Since the Python package also implements the JACK transport functions, it can be used to couple Python threads to the timeline of audio projects.\n\nContents \u00a9 Henrik von Coler 2021 - Contact","date":"2021-07-25 22:52:40","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24253897368907928, \"perplexity\": 6075.432658975826}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046151866.98\/warc\/CC-MAIN-20210725205752-20210725235752-00375.warc.gz\"}"}
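Packages like python-osc take care of the OSC wire format for you. As a stdlib-only illustration of what they do under the hood, the following sketch packs an OSC 1.0 message by hand (NUL-padded address pattern, type tag string, big-endian float32 arguments). The address `/synth/freq` is a made-up example, not part of any of the systems described above.

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """NUL-terminate and pad to a multiple of 4 bytes, as OSC strings require."""
    return data + b"\x00" * (4 - len(data) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Pack an OSC 1.0 message with float32 arguments."""
    packet = osc_pad(address.encode("ascii"))            # address pattern
    packet += osc_pad(("," + "f" * len(args)).encode())  # type tag string
    for value in args:
        packet += struct.pack(">f", value)               # big-endian float32
    return packet

packet = osc_message("/synth/freq", 440.0)
```

Such a packet could be sent to PD or a Faust-built synth through a plain UDP socket; in practice the python-osc client does exactly this for you.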
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Windows.Forms;
using Zad4;

namespace Zad7
{
    class Program
    {
        static void Main(string[] args)
        {
            //Application.Run(new Form1());
            var neuralNetDimensions = new[] {2, 8, 3};
            int type1WeightsAndS = neuralNetDimensions[0] * neuralNetDimensions[1];
            int type2Weights = 0;
            for (int i = 1; i < neuralNetDimensions.Length - 1; i++)
            {
                type2Weights += neuralNetDimensions[i] * neuralNetDimensions[i + 1];
            }
            Solution.NumberOfGenes = type1WeightsAndS * 2 + type2Weights;

            Evaluator.EvaluationFunction = dot =>
            {
                var genes = dot.Variables;
                var net = ConfigureNeuralNetFromNetInputValues(genes, type1WeightsAndS, neuralNetDimensions);
                var dataset = Dataset.Data;
                double error = 0;
                foreach (var singleExample in dataset)
                {
                    var input = new List<double> {singleExample.Item1, singleExample.Item2};
                    var netOutput = net.ForwardPropagate(input);
                    Func<double, double> Sq = d => d * d;
                    // Accumulate the squared error between each network output
                    // and the corresponding target value of the example.
                    error += Sq(netOutput[0] - singleExample.Item3)
                             + Sq(netOutput[1] - singleExample.Item4)
                             + Sq(netOutput[2] - singleExample.Item5);
                }
                error /= dataset.Count;
                return error;
            };

            var result = GeneticAlgorithm.RunDefaultGeneticAlgo();
            var dots = result.Item2.Genes;
            var finalNet = ConfigureNeuralNetFromNetInputValues(dots, type1WeightsAndS, neuralNetDimensions);
            var ddataset = Dataset.Data;
            foreach (var singleExample in ddataset)
            {
                var input = new List<double> {singleExample.Item1, singleExample.Item2};
                var netOutput = finalNet.ForwardPropagate(input);
                //var net = new NeuralNet();
            }
        }

        private static NeuralNet ConfigureNeuralNetFromNetInputValues(List<double> genes, int type1WeightsAndS,
            int[] neuralNetDimensions)
        {
            // The first block of genes holds the weights of the first (type-1) layer,
            // the second block its s-parameters, and the rest the ordinary (type-2) weights.
            var type1Weights = genes.Take(type1WeightsAndS).ToList();
            var type1S = genes.Skip(type1WeightsAndS).Take(type1WeightsAndS).ToList();
            var type2 = genes.Skip(type1WeightsAndS * 2).ToList();
            var type1 = new List<(List<double>, List<double>)>();
            for (int i = 0; i < neuralNetDimensions[1]; i++)
            {
                var ws = new List<double>();
                var ss = new List<double>();
                for (int j = 0; j < neuralNetDimensions[0]; j++)
                {
                    ws.Add(type1Weights[i * neuralNetDimensions[0] + j]);
                    ss.Add(type1S[i * neuralNetDimensions[0] + j]);
                }
                type1.Add((ws, ss));
            }
            int sofar = 0;
            var allLayers = new List<List<List<double>>>();
            for (int i = 1; i < neuralNetDimensions.Length - 1; i++)
            {
                var all = new List<List<double>>();
                for (int j = 0; j < neuralNetDimensions[i + 1]; j++)
                {
                    var ws = new List<double>();
                    for (int k = 0; k < neuralNetDimensions[i]; k++)
                    {
                        ws.Add(type2[sofar++]);
                    }
                    all.Add(ws);
                }
                allLayers.Add(all);
            }
            return new NeuralNet(type1, allLayers);
        }
    }
}
package com.thoughtworks.xstream.converters.basic;

import com.thoughtworks.xstream.converters.SingleValueConverter;

/**
 * Base abstract implementation of {@link com.thoughtworks.xstream.converters.SingleValueConverter}.
 * <p/>
 * <p>Subclasses should implement methods canConvert(Class) and fromString(String) for the conversion.</p>
 *
 * @author Joe Walnes
 * @author Jörg Schaible
 * @author Mauro Talevi
 * @see com.thoughtworks.xstream.converters.SingleValueConverter
 */
public abstract class AbstractSingleValueConverter implements SingleValueConverter {

    public abstract boolean canConvert(Class type);

    public String toString(Object obj) {
        return obj == null ? null : obj.toString();
    }

    public abstract Object fromString(String str);
}
\section{Introduction}
One of the outstanding problems in string theory is to derive the effective theories
appearing as a low energy limit of string compactifications.
This is an important step in connecting string theory to the real world,
which can also shed light on its non-perturbative structure.
Eventually, we are interested in compactifications preserving $N=1$
or no supersymmetry, since these cases are relevant from the phenomenological point of view.
However, at present the full low energy description of such compactifications seems to be beyond our reach.
At the same time, compactifications preserving eight supercharges in four dimensions seem to be more tractable,
while still involving very non-trivial and rich physics.
One type of such compactifications is provided by type II string theory on a Calabi-Yau manifold.
In recent years, significant progress has been achieved in obtaining its non-perturbative
description (see \cite{Alexandrov:2011va,Alexandrov:2013yva} for reviews).
The corresponding effective theory is completely determined by the metric on its moduli space,
which is a direct product of vector and hypermultiplet components, $\mathcal{M}_V$ and $\mathcal{M}_H$.
It is in the latter space where all complications are hidden. Being a quaternion-K\"ahler (QK) manifold \cite{Bagger:1983tt},
$\mathcal{M}_H$ receives instanton corrections coming from branes wrapping non-trivial cycles of the Calabi-Yau \cite{Becker:1995kb}.
There are two types of such branes which contribute to the non-perturbative metric: D-branes and NS5-branes.
Using twistorial techniques \cite{Hitchin:1986ea,MR1327157,Alexandrov:2008nk},
which provide a very efficient parametrization of quaternionic manifolds,
the D-brane instantons have been incorporated in a series of works \cite{RoblesLlana:2006is,Alexandrov:2008gh,Alexandrov:2009zh}.
This result left NS5-brane instantons as the only remaining unknown piece of the full non-perturbative picture.
An attempt to include these corrections was made in \cite{Alexandrov:2010ca}. It was based on the fact
that in the type IIB formulation S-duality maps D5-branes into NS5-branes, which makes it possible to obtain the latter
once the former are known. However, due to the complicated action of S-duality on the twistor space,
where the D-instantons have the most simple formulation,
this idea had been realized only in the one-instanton approximation.
In this paper we go beyond this restriction and provide a complete description of fivebrane instantons
which includes all orders of the instanton expansion.
The clue to such a result is an improved parametrization of contact transformations which encode the geometry
of the twistor space of $\mathcal{M}_H$. It allows essentially to linearize the action of S-duality and thus to extract
fivebrane instanton corrections to the hypermultiplet metric.
In a companion paper \cite{AB:2014} we will show the consistency of our result with the U-duality symmetry group
of $\mathcal{M}_H$, which requires an improved realization of these symmetries, and extend it to include contributions from
interactions between fivebrane and D1-D(-1)-instantons.
\section{Twistor approach and contact geometry}
\subsection{Darboux coordinates and transition functions}
We start with a brief review of twistorial techniques which are necessary tools for describing instanton
corrections to the metric on $\mathcal{M}_H$ \cite{Alexandrov:2011va}. The need for these techniques stems from the fact
that a generic QK manifold is not even a complex manifold so that the constraints of the QK geometry
appear to be highly non-trivial. At the same time, $4n$-dimensional QK spaces $\mathcal{M}$ are in one-to-one correspondence
with $(4n+2)$-dimensional K\"ahler spaces $\mathcal{Z}_\mathcal{M}$ carrying a {\it holomorphic contact structure}. The latter are known as
{\it twistor spaces} and appear as $\IC P^1$ bundles over the original QK manifolds.
The presence of the complex and holomorphic contact structures makes the twistor spaces much easier to work with.
In particular, the contact structure can be represented by a set of holomorphic
one-forms $\mathcal{X}^{[i]}$ on each of the patches of the covering $\mathcal{Z}_\mathcal{M}=\cup \mathcal{U}_i$. They are defined
up to rescalings by nowhere vanishing holomorphic smooth functions, and
such that $\mathcal{X}^{[i]} \wedge \(\mathrm{d}\mathcal{X}^{[i]}\)^n \ne 0$ is a holomorphic top form.
Given these one-forms, in each patch one can introduce a system of Darboux coordinates
$(\xii{i}^\Lambda,\txii{i}_\Lambda,\ai{i})$, $\Lambda=0,\dots,n-1$,
fixed (non-uniquely) by the requirement that
\begin{equation}
\mathcal{X}\ui{i} = \mathrm{d}\ai{i} + \xii{i}^\Lambda \mathrm{d}\txii{i}_\Lambda.
\end{equation}
Then all information about the geometry of $\mathcal{Z}_\mathcal{M}$, and hence $\mathcal{M}$,
is contained in the changes of the Darboux coordinates between different patches. They should preserve
the contact one-form up to an overall holomorphic factor $\mathcal{X}\ui{i} = \hat f_{ij}^{2} \, \mathcal{X}\ui{j}$.
Such contact transformations can be parametrized by holomorphic functions $\Hij{ij}$ which, by analogy with
canonical transformations, are taken to depend on $\xi^\Lambda$ in patch $\mathcal{U}_i$ and $\tilde\xi_\Lambda, \alpha$ in patch $\mathcal{U}_j$.
In these terms the gluing conditions read \cite{Alexandrov:2009zh}
\begin{equation}
\begin{split}
\xii{j}^\Lambda = &\, \xii{i}^\Lambda -\partial_{\txii{j}_\Lambda }\Hij{ij}
+\xii{j}^\Lambda \, \partial_{\ai{j} }\Hij{ij},
\\
\txii{j}_\Lambda =&\, \txii{i}_\Lambda
+ \partial_{\xii{i}^\Lambda } \Hij{ij} ,
\\
\ai{j} =&\, \ai{i}
+ \Hij{ij}- \xii{i}^\Lambda \partial_{\xii{i}^\Lambda}\Hij{ij} ,
\end{split}
\label{glucon}
\end{equation}
which results in $\hat f_{ij}^2=1-\partial_{\ai{j} }\Hij{ij}$.
Supplemented by proper regularity conditions, these relations can be rewritten as integral equations and solved
with respect to Darboux coordinates as functions of coordinates on the QK base and the $\IC P^1$ fiber.
Starting from these solutions, there is a straightforward, albeit somewhat non-trivial, procedure to derive the metric
on $\mathcal{M}$ \cite{Alexandrov:2008nk}.
Thus, the twistorial description encodes the geometry of a QK space into a covering of its twistor space
and the associated set of holomorphic functions $\Hij{ij}$. In fact, for the purpose of constructing the {\it local} metric on $\mathcal{M}$,
the recipe can even be simplified: it is sufficient to provide a set of contours $\ell_i$ on $\IC P^1$ and a set of transition functions $\Hij{i}$
attached to each contour.
Whereas the closed contours typically correspond to boundaries of open patches $\mathcal{U}_i$, open contours arise as an effective description
of transition functions with branch cuts. We refer to \cite{Alexandrov:2011va} for more details.
Applying this approach to the problem of computing the non-perturbative metric on the hypermultiplet moduli space $\mathcal{M}_H$,
we see that it reduces to the problem of finding contours and holomorphic functions for each type of quantum corrections
contributing to the metric. The perturbative metric was put into this language in \cite{Alexandrov:2008nk} and the twistor data
for D-instantons have been found in \cite{Alexandrov:2008gh,Alexandrov:2009zh}.
As a result, a D-instanton of charge $\gamma=(p^\Lambda,q_\Lambda)$
is generated by the data consisting of the contour
\begin{equation}
\ell_\gamma=\{t\in \IC P^1:\ Z_\gamma/t\in\mathrm{i}\mathbb{R}^-\},
\end{equation}
where $ Z_\gamma$ is the central charge of the $N=1$ supersymmetry algebra preserved by the D-brane,
and the transition function
\begin{equation}
\Hij{\gamma} = H_{\gamma} - \frac12\, q_{\Lambda}p^\Lambda \(H'_{\gamma}\)^2,
\label{trHij}
\end{equation}
where $H_{\gamma}$ is given by
\begin{equation}
H_{\gamma}(\Xi_{\gamma})
= \frac{\bar \Omega(\gamma)}{4 \pi^2}\, \sigma_D (\gamma) \, e^{-2\pi\mathrm{i} \Xi_{\gamma}}.
\label{Hgam}
\end{equation}
Here $\sigma_D (\gamma)$ is the so-called quadratic refinement,
$\bar \Omega(\gamma)$ are rational DT invariants \footnote{The original result has been formulated in terms of the dilogarithm and
integer DT invariants, which also makes explicit its relation to Thermodynamic Bethe Ansatz \cite{Gaiotto:2008cd,Alexandrov:2010pp}.
The two formulations are readily equivalent.}, and
we used the notation $\Xi_\gamma=q_\Lambda \xi^\Lambda-p^\Lambda\tilde\xi_\Lambda$.
\subsection{Contact bracket and improved transition functions}
Although the parametrization of the contact transformations \eqref{glucon} using transition functions $\Hij{ij}$
is very explicit, it also has some inconveniences. The most important problem comes from the fact that the arguments of $\Hij{ij}$
belong to different patches. As a result, all symmetries of the twistor space are realized on the transition functions
in a very non-trivial way. An example is the symplectic invariance of the D-instanton corrections:
whereas the function \eqref{Hgam} is clearly symplectic invariant,
as $(p^\Lambda, q_\Lambda)$ and $(\xi^\Lambda,\tilde\xi_\Lambda)$ transform as vectors under symplectic transformations,
this is not the case for the transition function \eqref{trHij}.
This suggests that there should be another way to parametrize the contact transformations where the fundamental role
is shifted to the function \eqref{Hgam} \footnote{In fact, such a symplectic invariant description has already been proposed in
\cite{Alexandrov:2009zh}. But it works only if transition functions satisfy a certain integrability condition,
which is indeed the case for $H_\gamma$, but turns out not to be the case for the NS5-transition functions found below.}.
To introduce the new parametrization, we need first to define the so-called ``contact bracket",
which can be viewed as a lift of the Poisson bracket to the realm of contact geometry and was defined previously in
\cite[Eq.(2.44)]{Alexandrov:2008gh}.
This can be done in a coordinate independent way as follows.
Let us associate with a function $h$
the ``contact vector field" $X_h$ determined by the following relations
\begin{equation}
\iota_{X_h}\mathrm{d} \mathcal{X}=-\mathrm{d} h+ R(h) \mathcal{X},
\qquad
\iota_{X_h}\mathcal{X}=h,
\label{defXh}
\end{equation}
where $\iota_X$ is contraction of vector $X$ with a differential form,
$\mathcal{X}$ is the contact one-form, and
$R$ is the Reeb vector field which is the unique element of the kernel of $\mathrm{d}\mathcal{X}$ such that $\mathcal{X}(R) =1$.
Then the contact bracket between two functions $h$ and $f$ is defined as
\begin{equation}
\{h,f\}=X_h(f).
\end{equation}
In terms of Darboux coordinates, it reads explicitly as
\begin{equation}
\begin{split}
\{h,\xi^\Lambda\}=&\, -\partial_{\tilde\xi_\Lambda} h+\xi^\Lambda\partial_\alpha h,
\qquad
\{h,\tilde\xi_\Lambda\}=\partial_{\xi^\Lambda} h,
\\
&\qquad
\{h,\alpha\}=h-\xi^\Lambda\partial_{\xi^\Lambda} h.
\end{split}
\label{contbr}
\end{equation}
Note that the bracket is not antisymmetric, but satisfies
\begin{equation}
\{h,h\} =hR(h)=h\partial_\alpha h,
\label{commHH}
\end{equation}
which can be obtained by applying $\iota_{X_h}$ to the first equation in \eqref{defXh}.
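In detail, since $\iota_{X_h}\iota_{X_h}=0$, contracting the first equation in \eqref{defXh} once more with $X_h$ and using the second equation yields
\begin{equation*}
0=\iota_{X_h}\iota_{X_h}\mathrm{d}\mathcal{X}=-\iota_{X_h}\mathrm{d} h+R(h)\,\iota_{X_h}\mathcal{X}
=-\{h,h\}+h\, R(h),
\end{equation*}
which is precisely \eqref{commHH}.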
Besides, the bracket does not satisfy the Leibniz rule in the first argument, giving instead
\begin{equation}
\begin{split}
\{ h_1 h_2,f\}=&\,
h_1\{h_2,f\}+h_2\{h_1,f\}-h_1 h_2\partial_\alpha f.
\end{split}
\label{notLeibn}
\end{equation}
The reason for the different behaviour with respect to the two arguments is that actually
they are supposed to be local sections of different bundles, $\mathcal{O}(2)$ and $\mathcal{O}(0)$,
respectively, and the contact bracket maps them into an $\mathcal{O}(0)$ section \cite{Alexandrov:2008gh}.
A crucial property of the contact bracket, which immediately follows from
its definition, is that it generates an infinitesimal transformation scaling the contact one-form
\begin{equation}
\mathcal{L}_{X_h}\mathcal{X}=\mathrm{d}\(\iota_{X_h}\mathcal{X}\)+\iota_{X_h}\mathrm{d}\mathcal{X}=R(h)\mathcal{X},
\end{equation}
i.e. a contact transformation. Conversely, any {\it finite} contact transformation, for instance, the one which
relates Darboux coordinates in different patches,
can be parametrized by a holomorphic function $\hHij{ij}$
and generated by the contact bracket via the following action
\begin{equation}
\Xi^{[j]}=e^{\{\hHij{ij},\, \cdot\,\}}\, \Xi^{[i]},
\label{glcond-KS}
\end{equation}
where $\Xi^{[i]}$ denotes the set of Darboux coordinates in the patch $\mathcal{U}_i$.
What is important here is that $\hHij{ij}$ is considered as a function of Darboux coordinates in {\it one} patch.
We call it {\it improved transition function}.
Comparing the gluing conditions \eqref{glcond-KS} and \eqref{glucon}, one obtains relations between
the two types of transition functions.
In particular, this gives the following explicit expression (we dropped the patch indices)
\begin{equation}
H=\(e^{\{h,\, \cdot\,\}}-1\)\alpha+\xi^\Lambda\( e^{\{h,\, \cdot\,\}}-1\)\tilde\xi_\Lambda.
\label{relHH}
\end{equation}
To illustrate the new parametrization, let us apply the relation \eqref{relHH} to the D-instanton case taking
the improved transition function to be $h=H_\gamma(\Xi_\gamma)$. Using the fact that
$\alpha$-independent functions commute with themselves (see \eqref{commHH}), one obtains
\begin{eqnarray}
\( e^{\{H_\gamma,\, \cdot\,\}}-1\)\tilde\xi_\Lambda&=& q_\Lambda H'_\gamma,
\\
\(e^{\{H_\gamma,\, \cdot\,\}}-1\)\alpha&=& H_\gamma-q_\Lambda\xi^\Lambda H'_\gamma-{1\over 2}\, p^\Lambda q_\Lambda(H'_\gamma)^2.
\nonumber
\end{eqnarray}
Substituting this into \eqref{relHH}, one reproduces the previous result \eqref{trHij}.
\section{S-duality in twistor space}
As we explained in the introduction, we are going to find fivebrane instanton contributions to the metric on $\mathcal{M}_H$
by applying S-duality to the D5-brane corrections. Therefore, it is important to understand how S-duality acts
at the level of the twistor space, in particular, on Darboux coordinates and on transition functions.
This question has been already addressed in the previous works \cite{Alexandrov:2008gh,Alexandrov:2012bu,Alexandrov:2013mha}.
It was found that for an $SL(2,\mathbb{Z})$ transformation $\gl{c,d}=\(\begin{array}{cc}
a & b
\\
c & d
\end{array}\)$ to be an isometry of $\mathcal{M}_H$ lifted to the twistor space the Darboux coordinates should transform as
\begin{eqnarray}
&
\displaystyle{\xi^0 \mapsto \frac{a \xi^0 +b}{c \xi^0 + d} \, , \qquad
\xi^a \mapsto \frac{\xi^a}{c\xi^0+d} \, ,}
&
\nonumber\\
&
\displaystyle{\tilde\xi_a \mapsto \tilde\xi_a + \frac{c\kappa_{abc} \xi^b \xi^c}{2(c \xi^0+d)}- c_{2,a}\varepsilon(\gl{c,d})\, ,}
&
\label{SL2Zxi}\\
&
\begin{pmatrix} \tilde\xi_0 \\ \alpha \end{pmatrix} \!\mapsto\!
\begin{pmatrix} d & -c \\ -b & a \end{pmatrix}
\begin{pmatrix} \tilde\xi_0 \\ \alpha \end{pmatrix}
- \displaystyle{\frac{c}{6}\, \kappa_{abc} \xi^a\xi^b\xi^c}
\!\begin{pmatrix}
\frac{-c}{c \xi^0+d}\\
\frac{c (a\xi^0 + b)+2}{(c \xi^0+d)^2}
\end{pmatrix}\!,
\nonumber
\end{eqnarray}
where $a=1,\dots,n-1$,
$\kappa_{abc}$ are triple intersection numbers of the Calabi-Yau,
$c_{2,a}$ are components of the second Chern class along a basis of 2-forms,
and $\varepsilon(\gl{c,d})$ is the multiplier system of the Dedekind eta function.
Combined with the gluing conditions \eqref{glucon}, the transformation \eqref{SL2Zxi} can be used
to get the transformation property of the transition functions $\Hij{ij}$.
Although its explicit form was found \cite{Alexandrov:2013mha}, it is highly non-linear and its direct application
is rather involved. We do not give it here since we found a way to proceed which is much more instructive and elegant.
The idea is that the passage to the improved transition functions $\hHij{ij}$ will also improve
transformation properties under S-duality: one can hope that the non-linearities appearing
in the transformation law for $\Hij{ij}$ are of the same origin as those in \eqref{trHij} and will disappear
when one works in terms of $\hHij{ij}$.
To show that this is indeed the case, let us note the following property of the contact bracket
\begin{equation}
\{ (c\xi^0+d)\gl{c,d}\cdot h, \gl{c,d}\cdot f\}=\gl{c,d}\cdot \{h,f\},
\end{equation}
which can be verified by direct calculation using \eqref{contbr} and \eqref{SL2Zxi}.
With its help it becomes easy to evaluate the operator relating Darboux coordinates in two patches after the $SL(2,\mathbb{Z})$ transformation.
One finds
\begin{equation}
\begin{split}
\gl{c,d}\cdot e^{\{h,\, \cdot\,\}} \cdot\gl{c,d}^{-1}
=&\,
e^{\gl{c,d}\cdot\{h,\, \cdot\,\}\cdot\gl{c,d}^{-1}}
\\
=&\,
e^{\{(c\xi^0+d)\gl{c,d}\cdot h,\, \cdot\,\}}.
\end{split}
\end{equation}
This implies that a QK manifold carries the isometric action of the S-duality group $SL(2,\mathbb{Z})$ only
if the improved transition functions on its twistor space are split into $SL(2,\mathbb{Z})$ multiplets and transform {\it linearly }
inside each multiplet with weight $-1$, e.g. \footnote{It is possible also to have on the r.h.s. some regular contributions, which can always be absorbed
into a redefinition of Darboux coordinates not affecting the contact structure.}
\begin{equation}
h_{m,n}\mapsto \frac{h_{m',n'}}{c\xi^0+d},
\quad
\begin{pmatrix} m'\\ n' \end{pmatrix} =
\begin{pmatrix}
a & c
\\
b & d
\end{pmatrix}
\begin{pmatrix} m \\ n \end{pmatrix},
\label{transhH}
\end{equation}
where the pair $(m,n)$ labels the elements of the multiplet.
\section{Fivebrane instantons}
An important property of type IIB string theory compactified on a Calabi-Yau is
that quantum corrections to the hypermultiplet moduli space $\mathcal{M}_H$ are arranged into different sectors
invariant under S-duality. This happens according to the following pattern:
\begin{equation}
\mbox{
\begin{tabular}{l|c|c|c|c|c|c|cc|}
\cline{2-2} \cline{4-4}
$\alpha'$: \hspace{0.1cm} & perturb. & \hspace{0.1cm} & w.s. inst & \multicolumn{4}{c}{} \rule{0pt}{10pt}
\\
\cline{6-6} \cline{8-9}
$g_s$: & 1-loop \ D(-1) & & D1 & \hspace{0.1cm} & \,D3\, & \hspace{0.1cm} & \,D5 & NS5 \rule{0pt}{11pt}
\\
\cline{2-2} \cline{4-4} \cline{6-6} \cline{8-9}
\end{tabular}}
\label{quantcor}
\end{equation}
Thus, D(-1)-instantons are combined with perturbative $\alpha'$ and $g_s$-corrections,
D1-instantons mix with worldsheet instantons, D3-instantons are S-duality invariant, whereas D5 and NS5-instantons
transform as a doublet under $SL(2,\mathbb{Z})$. This splitting allows one to study each sector independently of the others.
In particular, a twistorial description of the first two sectors, which is manifestly S-duality invariant,
has been given in \cite{Alexandrov:2009qq,Alexandrov:2012bu}.
The sector of D3-branes has been studied in \cite{Alexandrov:2012au},
where it was shown that the transition functions \eqref{trHij} restricted to this sector are consistent with S-duality.
Here we concentrate on the sector of fivebranes and derive all corresponding instanton corrections
from the knowledge of D5-instantons.
In type IIB theory D5-branes, or more precisely D5-D3-D1-D(-1)-bound states,
are characterized by the rational valued generalized Mukai vector
$\gamma=(p^0,p^a,q_a,q_0)$ with $p^0\ne0$ \cite{Douglas:2006jp}.
Below we will also need the so-called invariant charges \cite{Alexandrov:2010ca}
\begin{equation}
\begin{split}
\hat q_a = &\, \textstyle{q_a + \frac12 \,\kappa_{abc} \frac{p^b p^c}{p^0},}
\\
\hat q_0 =&\, \textstyle{ q_0 + \frac{p^a q_a}{p^0} + \frac13\, \kappa_{abc}\frac{p^a p^b p^c}{(p^0)^2}}
\end{split}
\end{equation}
and the reduced charge vector $\hat \gamma=(p^a,\hat q_a,\hat q_0)$.
Since fivebrane instantons form a doublet of $SL(2,\mathbb{Z})$, their
improved transition functions must follow the transformation law \eqref{transhH}.
Identifying D5-branes with the component $(m,n)=(0,p^0)$, all of them can be obtained by applying $\gl{c,d}$
to the function $\hHij{\hat \gamma}_{0,p^0}=H_\gamma$. Taking into account the physical interpretation of the charges,
it is convenient to take
\begin{equation}
c= - k/p^0,
\qquad
d=p/p^0,
\qquad
p^0=\gcd(p,k).
\end{equation}
Then $k$ appears to be precisely the NS5-brane charge.
Using \eqref{SL2Zxi}, it is straightforward to obtain
\begin{equation}
\hHij{\hat \gamma}_{k,p}=-\frac{\bar\Omega_{k,p}(\hat \gamma)}{4\pi^2} \frac{k}{p^0} (\xi^0-n^0)\sigma_D(\gamma)\, e^{2\pi\mathrm{i} S_{k,p;\hat \gamma}},
\label{fivebraneh}
\end{equation}
where
\begin{eqnarray}
S_{k,p;\hat \gamma}&=& k\(\alpha + n^\Lambda \tilde\xi_\Lambda + F^{\rm cl} (\xi - n)\)
\\
&+& \frac{p^0(k \hat q_a (\xi^a - n^a)+ p^0 \hat q_0)}{k^2(\xi^0 - n^0)} + \frac{a}{k}\,p^0 q_0- c_{2,a} p^a \varepsilon(\gl{c,d}),
\nonumber
\end{eqnarray}
$F^{\rm cl}(X)=-\kappa_{abc} \,\frac{X^a X^b X^c}{6 X^0}$ is the classical prepotential, $n^a=p^a/k$, $n^0=p/k$,
and $\bar\Omega_{k,p}(\hat \gamma)\equiv\bar\Omega(\gamma;\gl{c,d}\cdot z)$ takes into account the fact that DT invariants
are only piecewise constant and generically jump along lines of marginal stability in the moduli space of K\"ahler structure deformations $z^a$.
It should not be surprising that the resulting function \eqref{fivebraneh},
up to the factor ensuring the correct modular weight, coincides with the result found in \cite[Eq.(5.30)]{Alexandrov:2010ca}
in the one-instanton approximation: in this approximation the two types of transition functions are identical and
the remarkable feature of \eqref{fivebraneh} is that it is {\it exact} at linear order.
Furthermore, it is possible to evaluate explicitly the contact transformation generated by the function $\hHij{\hat \gamma}_{k,p}$
by applying the operator \eqref{glcond-KS}. Referring to \cite{AB:2014} for details,
here we give the result just for the transition function
defined by the relation \eqref{relHH} \footnote{The function (\ref{tranNS5all}) is written in terms of Darboux coordinates in one patch, which is not
sufficient to calculate its derivatives entering the gluing conditions (\ref{glucon}). Their expressions can be found in \cite{AB:2014}.}
\begin{equation}
\Hij{\hat \gamma}_{k,p}
= \hHij{\hat \gamma}_{k,p}
-2\pi^2(\hHij{\hat \gamma}_{k,p})^2\[\frac{\hat q_0 (p^0)^2}{k(\xi^0-n^0)}-\frac{2k^2F^{\rm cl}(\xi-n)}{\(1-2\pi\mathrm{i} k\hHij{\hat \gamma}_{k,p}\)^2} \].
\label{tranNS5all}
\end{equation}
One observes that this function generates an infinite expansion in instantons equivalent to the expansion in DT invariants
or in $\hHij{\hat \gamma}_{k,p}$. In \cite{AB:2014} we also show that it solves the non-linear S-duality constraint
found in \cite{Alexandrov:2013mha} and is consistent with all discrete symmetries generating the U-duality group of the compactified theory.
\section{Conclusions}
In this paper we found a twistorial description of fivebrane instanton corrections to the hypermultiplet moduli space $\mathcal{M}_H$ of
type II string theory on a Calabi-Yau. It is provided by the holomorphic functions \eqref{fivebraneh} and \eqref{tranNS5all}.
With these results at hand, one can now write integral equations for Darboux coordinates on the twistor space of $\mathcal{M}_H$,
whose solution uniquely defines the metric on the moduli space and thereby the non-perturbative low energy effective action
of the compactified theory.
This progress has become possible due to a new parametrization \eqref{glcond-KS}
of contact transformations with the help of the so-called contact bracket.
The improved transition functions $\hHij{ij}$ entering this parametrization appear to be more fundamental
than their ordinary cousins $\Hij{ij}$. As a result, all transformation laws and results for instantons
take a much simpler linear form being reformulated in their terms.
Although our results provide a complete description of the fivebrane sector of quantum corrections to $\mathcal{M}_H$,
it still remains to put all quantum effects, shown in \eqref{quantcor}, into one unifying picture.
This problem is addressed in \cite{AB:2014},
where we provide the twistor space construction including all sectors except the one of D3-branes.
The latter represents a challenge: although it is captured by the transition functions \eqref{trHij},
it has not yet been reformulated in an explicitly S-duality invariant way,
even in the one-instanton approximation. It is expected that a crucial role in such a reformulation
will be played by mock modular forms \cite{Alexandrov:2012au}.
Another interesting problem is to translate our type IIB construction into the mirror type IIA formulation.
In particular, it is interesting to see how the NS5-brane instantons deform the integrable structure
of the D-instantons encoded in the Thermodynamic Bethe Ansatz equations \cite{Gaiotto:2008cd,Alexandrov:2010pp}.
Besides, there are a number of important issues that can be approached once the fivebrane instantons have been incorporated.
These include: a resolution of the singularity generated by the one-loop correction on $\mathcal{M}_H$;
divergence of the sum over charges due to the exponential growth of DT invariants in supergravity,
which was argued to be related to the NS5-brane effects \cite{Pioline:2009ia};
and consistency of the NS5-brane instantons with wall-crossing.
Finally, one can hope that the results presented here can be useful as well for phenomenological studies.
For instance, in \cite{Ketov:2014roa} it was argued that the fivebrane instantons may be crucial for the derivation
of the Starobinsky model \cite{Starobinsky:1980te} of the inflationary cosmology from compactifications of string theory.
\section*{Acknowledgments}
We are grateful to Sylvain Ribault for careful reading of the manuscript.
Ginni Linn
Executive Editor/Print Manager
Hess Re-elected to Township Association's Executive Committee
Members of the Pennsylvania State Association of Township Supervisors have re-elected Steven D. Hess Sr., a supervisor and roadmaster for North Centre Township in Columbia County, to a second three-year term on the association's Executive Committee.
The election took place during PSATS' 95th Annual Educational Conference and Trade Show, held April 23-26 in Hershey. This is the largest municipal event of its kind in the state, with close to 4,000 attendees. The conference attracted township officials from every county in Pennsylvania except Philadelphia, which has no townships.
The seven-member Executive Committee, the association's five officers, and the immediate past president make up the Executive Board, which is responsible for managing the affairs of the state association. The board meets frequently throughout the year to oversee association business and plan new projects that will benefit member townships.
The Pennsylvania State Association of Township Supervisors represents Pennsylvania's 1,454 townships of the second class and is committed to preserving and strengthening township government and securing greater visibility and involvement for townships in the state and federal political arenas. Townships of the second class cover 95 percent of Pennsylvania's land mass and represent more residents — 5.5 million — than any other type of political subdivision in the commonwealth.
Hess is a member of the association's Grassroots Lobbying Network, which addresses legislative issues that affect every Pennsylvanian who lives in a township of the second class. He is also the board liaison to the Townships Between 2,000 and 5,000 Population Committee. He previously served as chairman of the PSATS Resolutions, Nominations, and Townships Under 4,000 Population committees.
Hess, the deputy emergency management coordinator for North Centre Township, currently serves as president of the Columbia County Association of Township Officials. He previously held the post of vice president and was also chairman of that association's Resolutions Committee.
Hess serves as vice president of the Columbia County Sanitation Inspection Office, president and firefighter with the Lime Ridge Fire Company, and a member and sound technician with the Stillwater Christian Church, where he attends with his wife, Carla.
His past affiliations include president and vice president of the Tri-County Council of Governments and Executive Committee member for the COG's IBC Inspection Service. He was financial secretary for the Lime Ridge Fire Company, head coach for the Lime Ridge Little League, and assistant coach for the Orangeville Cornhuskers Midget Football.
Hess also served as a leader for the Boy Scouts and Cub Scouts and served in various capacities with the Ridge Street United Methodist Church, including chairman of the trustees and Staff Parish Relations Committee, lay leader, and volunteer with the children's fellowship. Hess also was a United Methodist Pennsylvania state-certified lay speaker.
Q: Exception thrown at 0x0FF0EA10 (ucrtbased.dll) in 2209_1.exe: 0xC0000005: access violation reading location 0xFFFFFFE4. I have run into a small problem: I want to sort a dynamically allocated array of characters. I pass a pointer to the first character of my array into the function, but as soon as I try to use str[i] and str[j] inside the function, everything breaks.
Simply walking through the array works fine:
int i = 0;
while (str[i] != '\0') { printf("%c", str[i]); i++; }
Here is the C code.
#define _CRT_SECURE_NO_WARNINGS
#include <Windows.h>
#include <stdio.h>
#include <string.h>
#include <malloc.h>
#include <stdlib.h>
void quickSort(char* str, int left, int right);
int main()
{
int n;
printf("Enter size of string: ");
scanf("%d", &n);
char* str = (char*)malloc(sizeof(char) * (n)+4);
printf("Enter your string : ");
scanf("%s", str);
quickSort(str, 0, n);
printf("Sorted string : \n %i", str);
free(str);
system("pause");
}
void quickSort(char* str, int left, int right)
{
int i, j, p;
i = left;
j = right - 1;
char tmp[100];
while (i != j) {
if ((strcmp(str[i], str[j]) > 0) != (i < j))
{
strcpy(tmp, str[i]);
strcpy(str[i], str[j]);
strcpy(str[j], tmp);
p = i;
i = j;
if (p < j)
j = p + 1;
else j = p - 1;
}
else
{
if (i < j) j--;
else j++;
};
};
if (left < i - 1)
quickSort(str ,left, i - 1);
if (i + 1 < right)
quickSort(str, i + 1, right);
}
# American Institute of Mathematical Sciences

January 2011, 7(1): 139-156. doi: 10.3934/jimo.2011.7.139

## Optimal consumption and investment under irrational beliefs

1. School of Business and Management, Hong Kong University of Science and Technology, Hong Kong, China
2. School of Economics and Management, Tsinghua University, Beijing, China

Received March 2010; Revised October 2010; Published January 2011

In this paper, we study how irrationality affects the investor's consumption and investment decisions. We build a continuous-time financial model in which an irrational investor determines his consumption and investment according to an exogenous price process. The main results are as follows. First, compared with a rational investor, an optimistic irrational investor tends to consume more, while a pessimistic irrational investor tends to consume less. Second, the more irrational the investor, the more volatile his consumption. Third, an extremely irrational investor can obtain more ex ante expected utility than his rational counterpart, whether he is optimistic or pessimistic.

Citation: Lei Sun, Lihong Zhang. Optimal consumption and investment under irrational beliefs. Journal of Industrial & Management Optimization, 2011, 7(1): 139-156. doi: 10.3934/jimo.2011.7.139
# Notation for eventually less than

Is there some existing notation for

$f(n)\leq g(n)$ for sufficiently large $n$

apart from just writing that itself? I'm thinking of something compact like the Landau notation $f\ll g$.

(Apologies if this is too specific for MathOverflow — just close it if so. I was also unsure what tags to add, so just edit it accordingly.)

– Of course, you realize that the two conditions you list are not equivalent? My general impression is that there is not a lot of standard notation for asymptotic relations. – Harald Hanche-Olsen
– Of course, sorry. I have edited out my dim moment. – Thomas Bloom
– Whatever notation you end up picking, please explain it before you use it! – Mariano Suárez-Alvarez

**Answer (Joel David Hamkins).** In logic, this relation is called *almost less than or equal*, and is denoted with an asterisk on the relation symbol, like this: $f \leq^* g$. For example, the bounding number is the size of the smallest family of functions from $\mathbb{N}$ to $\mathbb{N}$ that is not bounded with respect to this relation. Under CH, the bounding number is the continuum, but it is consistent with the failure of CH that the bounding number takes another, intermediate value.

– For example, set theorists often speak of the almost inclusion relation $A \subset^* B$, which means that $A-B$ is finite. More generally, we have an ideal $I$ and use this notation to mean that $A-B$ is in $I$. The structure $P(\omega)/\mathrm{Fin}$ is extremely interesting, and uses this relation. – Joel David Hamkins

**Answer (Matt Noonan).** Why not just overload $\leq$ when applied to sequences? I don't think there is any opportunity for confusion, and it fits with the notation you would use when extending $\leq$ to an ultraproduct. This is what Jim Henle does in his "non-nonstandard analysis", which uses "eventually" as a replacement for an ultrafilter.

– The ultrafilter usage would be a mathematically different concept. For example, with an ultrafilter the order is a total (linear) order, but almost-less-than is not. The almost-less-than relation in the question uses a mere filter, the Fréchet filter of all co-finite sets. – Joel David Hamkins
– There certainly is opportunity for confusion in overloading $\leq$. It's not uncommon to use $\leq$ to mean an entrywise comparison of sequences (or more generally pointwise comparison of functions). And since it's common (sloppy) practice to obscure the distinction between a sequence and its entries, many readers are likely to assume that's what $\leq$ means (or find it distracting nevertheless even if the meaning is clearly defined). – Mark Meckes

**Answer.** Good notation should be self-explaining and not require the reader to remember too much. I would write either $$f\le g\quad\text{eventually}$$ or $$f\le g\quad\text{near }\infty.$$ If you use it more than 100 times in a paper you could use something like $$f \preccurlyeq g.$$

**Answer.** I agree with Joel Hamkins's answer, but I don't entirely agree with his comment on that answer. I generally use asterisks to mean "with finitely many exceptions" or "modulo finite sets", so I'd use $f\leq^*g$ and $A\subseteq^*B$ as Joel says. But when working modulo some ideal $I$ other than the ideal of finite sets, I'd ordinarily avoid asterisks and instead write $f\leq_I g$ and $A\subseteq_I B$.

I'd like to protest vigorously against the use of $\ll$ in this situation. To me, $f\ll g$ means that $f$ is a lot smaller than $g$ (at least eventually), whereas here you might have $f(n)=g(n)-1$ for all $n$.

– But then $\ll$ in analytic number theory and related fields has a very different meaning (referred to by the OP); essentially a synonym for big-Oh. – quid
Paegle:
(1942—2019) — Latvian politician.
(born 1943) — Latvian linguist.
Paegle, Leon (1890—1926) — Latvian writer, playwright, and public figure.
Paegle, Spricis (1876—1962) — Latvian businessman, politician, and public figure.
See also
Poegli, Vadim Yuryevich (born 1964) — Russian political journalist.
# Statistical Decision Theory

This post treats the topic with a bird's-eye view of the subject. This is my first conscious attempt to understand a subject with the Inside Out technique and to share my thoughts.

Main interest: comparing two or more statistical decisions.

Let us define the basics, which we will slowly develop.

We have a sample $X \in \chi$, where $\chi$ is the sample space.

Statistical decision problem triplet: $(\Theta, A, L)$.

A = set of actions; L = loss function; D = set of decision rules; D* = set of randomized decision rules; A* = set of randomized actions; risk of a decision d: $R(\theta, d)$.

Main idea: comparing two or more statistical decisions based on their risk.

To compare, we need something like numbers, on which we can impose an order (inequality). First, we need to devise logical techniques to map a decision to a real number, so that we can compare two decisions based on the values of the real numbers they are mapped to. We will, of course, need the help of the risk functions of the two decisions.

The risk function has an intrinsic problem: it depends on the true parameter value, which we may or may not know. So we need to make some changes.

Method 1 (restricting the space of decision rules D):

Example 1: unbiased estimators
Example 2: invariant estimators, etc.

Then find the best among them, like the UMVUE estimator, the UMPU test, etc.

Method 2.1 (Bayesian framework): $\theta \sim \pi$, where we know the prior distribution. Hence we define $r(\pi, d) = E_{\pi}[R(\theta, d)]$, which is a real number and can be compared. The decision rule with the minimum such value is called the Bayes rule.

Method 2.2 (minimax rule):

Theorem: For squared-error loss, there is no unbiased Bayes estimator.

1. If the minimax theorem holds, and a least favorable distribution $\pi_{0}$ exists, then any minimax rule $\delta_{0}$ is Bayes w.r.t. $\pi_{0}$.
2. If equality holds in the minimax theorem, then any minimax rule $\delta_{0}$ is an extended Bayes rule.
\section{Preliminaries and main result}
\subsection{Introduction}
The study of the dynamics of a system of ordinary differential equations (ODEs) in a neighbourhood of an equilibrium nowadays boasts a rich and well-established theory, whose foundations go back to the late nineteenth century and the contributions of Poincar\'{e} \cite{poi79} and Lyapunov \cite{lyap92}. Given an analytic vector field, the possibility of writing the motions of the associated system in the vicinity of an equilibrium as convergent power series is deeply related to \emph{non-resonance} conditions on the eigenvalues of the linear part.\\
These results were later extended in the studies of Siegel, started in \cite{sieg42}. The problem of the reducibility of a given system to linear form via an analytic transformation is shown in \cite{siegel52} to be solvable for a full-measure set of eigenvalues. \\
In the case of a Hamiltonian structure, investigated later in \cite{siegel54}, the problem can be naturally interpreted in terms of the existence of a (convergent) canonical transformation of variables, casting a Hamiltonian of the form ``quadratic'' $+$ ``perturbation'' into a suitable\footnote{I.e. such that the corresponding canonical equations are integrable.} normal form in some neighbourhood of the examined equilibrium. Based on this approach, the paper \cite{giorLyap} provides a generalisation of the results by Lyapunov, removing the hypothesis of purely imaginary eigenvalues. \\
In any case, we remark that, as a common feature of this class of problems, without any assumption on the eigenvalues the program of casting the Hamiltonian at hand into a normal form fails in general. In fact, it is immediate to recognize how the linear combinations of eigenvalues occurring in the normalization scheme could produce ``small divisor effects''. As is well known, this phenomenon can either obstruct the formal solvability of the homological equations produced during the normalization or jeopardize the convergence of the series. \\
We recall that, for instance, the described problem of well-posedness of the homological equation is overcome by Moser in \cite{moser56} in the case of a ``one-and-a-half\footnote{With periodic time dependence.}'' degrees of freedom Hamiltonian $H(p,q,t)$ close to a hyperbolic equilibrium located at $p=q=0$. The strategy consists of keeping terms of the form $(pq)^k$, $k \geq 2$, in the normal form. In this way the canonical equations are still integrable ($x:=pq$ is a first integral), while the division by zero that those terms would otherwise have produced in the homological equation is avoided. This analysis plays a fundamental role in the context of instability phenomena in Hamiltonian systems with several degrees of freedom (Arnold diffusion), where it is used to describe the flow in the neighbourhood of partially hyperbolic tori of a priori unstable systems, see \cite{chga94}. \\
The pioneering work by Pustil'nikov \cite{pust} aims to extend the results of the paper \cite{siegel52} by introducing a time dependence in the non-linear part of the vector field (not necessarily Hamiltonian). Naturally, the choice of a suitable class of time-dependent perturbations and their treatment adds a further difficulty to the phenomenon of ``resonances''. In \cite{pust}, under the non-resonance condition already assumed in \cite{siegel52} for the autonomous case, it is required that the perturbation be asymptotic to a time-independent, analytic function. However, no restrictions are imposed on the ``type'' of the time dependence; in particular, it need be neither periodic nor quasi-periodic. This case is also known as \emph{aperiodic} time dependence.\\
After \cite{pust}, the interest in a general dependence on time has been renewed in \cite{giozen} then followed by \cite{boun13}, \cite{fw14a} and subsequent papers. Basically, all of them deal with the Hamiltonian case (see \cite{fw15a} for the case of Poisson systems). The paper \cite{fw15} extends the above described result by Moser to the case of a perturbation aperiodically dependent on time. \\
As a matter of fact, the absence of a Hamiltonian structure is not a real obstruction to the use of the tools apt to treat the Hamiltonian case. In fact, any given system of ODEs can always be interpreted as (``a half'' of the) canonical equations of a suitable Hamiltonian system of larger dimension, see e.g. \cite{berdvar}. The strategy of this paper is to derive the integrability of the system of ODEs at hand, see \ff{eq:sys}, as a particular case of the existence of a normal form for a real-analytic Hamiltonian with an aperiodic perturbation, see \ff{eq:ham}, by using the tools introduced in \cite{fw15} for the one degree of freedom case.\\
Casting the Hamiltonian \ff{eq:ham} into a normal form is shown to be possible in the two cases described in Theorem \ref{thm}. In the second case, we deal with perturbations linear in the $y$ variables, in the presence of a non-resonance assumption on the eigenvalues. This case is directly related to the Hamiltonian formulation of a system of ODEs (due to the linearity in $y$). It is immediate to notice that, with respect to \cite[(0.3)]{pust}, the condition \ff{eq:nonres} on the eigenvalues is clearly more restrictive. Nevertheless, the hypothesis of asymptotic time-independence assumed in \cite{pust} is weakened to simple boundedness.\\
The first case, on the other hand, has a more general character: if the perturbation decays\footnote{The exponential decay, see \ff{eq:decay}, is chosen for simplicity of discussion. The only necessary assumption is the summability in $t$ of the perturbing function over the non-negative real semi-axis, see \cite{fw15c}.} in time, both the described assumption on the form of $f$ and the one on the eigenvalues turn out to be unnecessary. Basically, the presence of resonance phenomena is no longer an obstruction to the existence of the normal form, see also \cite{fw15c}.\\
The paper, based on the \emph{Lie series} formalism developed by A. Giorgilli et al., can be regarded, at the same time, as a non-autonomous version of \cite{giorLyap}.
\subsection{Setting}
Let us consider the following Hamiltonian
\beq{eq:ham}
H(x,y,\eta,t)=h(x,y,\eta)+ f(x,y,t), \qquad h(x,y,\eta):=\eta + \sum_{l=1}^n \lambda_l x_l y_l \mbox{,}
\end{equation}
where $(x,y,\eta) \in \mathcal{D}:=[-r,r]^n \times [-r,r]^n \times \RR$, with $n \geq 1$ and $r>0$, $\lambda_l \in \CC$ and $t \in \RR^+:=[0,+\infty)$. The assumptions on $f$ will be discussed below. The system (\ref{eq:ham}) is nothing but the ``autonomous equivalent'' of $\mathcal{H}(x,y,t)=\sum_{l=1}^n \lambda_l x_l y_l + f(x,y,t)$, once $\eta$ has been defined as the variable conjugate to $t$.\\
The standard use of the analytic tools requires the complexification of the domain $\mathcal{D}$ as follows. Given $R \in (0,1/2]$ set $\mathcal{D}_R:=\mathcal{Q}_R \times \mathcal{S}_{R} $, where
$$
\mathcal{Q}_R:=\{(x,y) \in \CC^{2n} : |x|,|y| \leq R \},\qquad \mathcal{S}_R:=\{\eta \in \CC : |\Im \eta| \leq R \} \mbox{.}
$$
It will be required that, for all $t \in \RR^+$, $f$ belongs to the space of functions real-analytic on $\accentset{\circ}{\mathcal{Q}}_R$ and continuous on its boundary, which we denote by $\mathfrak{C}(\mathcal{Q}_R)$. In this way $H \in \mathfrak{C}(\mathcal{D}_R)$.\\
In particular, the space of all the $G \in \mathfrak{C}(\mathcal{Q}_R)$ is endowed with the
\emph{Taylor norm}
\beq{eq:taylor}
\norm{G(x,y,t)}{R}:=\sum_{\alpha,\beta \in \NN^n} |g_{\alpha,\beta}(t)| R^{|\alpha+\beta|} \mbox{,}
\end{equation}
where $G(x,y,t)=:\sum_{\alpha,\beta \in \NN^n} g_{\alpha,\beta}(t) x^{\alpha} y^{\beta}$ and\footnote{It is understood that $x^{\alpha}y^{\beta}:=x_1^{\alpha_1}\cdot\ldots\cdot x_n^{\alpha_n} \cdot y_1^{\beta_1}\cdot\ldots\cdot y_n^{\beta_n}$.} $|\alpha|:=\sum_{l=1}^n \alpha_l$. We recall the standard result that, if $G \in \mathfrak{C}(\mathcal{Q}_R)$ for all $t \in \RR^+$, then $|g_{\alpha,\beta}(t)| \leq \snorm{G}{R}R^{-|\alpha+\beta|}$, where $|G|_R:=\sup_{(x,y) \in \mathcal{Q}_R}|G|$. In particular, $\norm{G}{R'}<+\infty$ for all $R'<R$.\\
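For completeness, we recall the classical Cauchy-estimate argument behind this bound (a standard computation, spelled out here only as a guide for the reader): for $G \in \mathfrak{C}(\mathcal{Q}_R)$ the Cauchy integral formula gives
$$
g_{\alpha,\beta}(t)=\frac{1}{(2\pi \mathrm{i})^{2n}} \oint_{|x_1|=R}\cdots\oint_{|y_n|=R} \frac{G(x,y,t)}{x^{\alpha+\mathbf{1}}\, y^{\beta+\mathbf{1}}}\, \mathrm{d}x\, \mathrm{d}y \mbox{,}
$$
with $\mathbf{1}:=(1,\ldots,1)$, whence $|g_{\alpha,\beta}(t)| \leq |G|_R R^{-|\alpha+\beta|}$ by bounding the integrand. Consequently, for all $R'<R$, $\norm{G}{R'} \leq |G|_R \sum_{j \geq 0} \binom{j+2n-1}{2n-1} (R'/R)^{j}<+\infty$, since the number of monomials of total degree $j$ in $2n$ variables is $\binom{j+2n-1}{2n-1}$.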
Throughout this paper we shall deal with perturbations satisfying the following conditions:
\begin{enumerate}
\item $f$ is ``at least'' quadratic in $x$ and ``at least'' linear in $y$: a property that we will denote with ($QxLy$), i.e. $f_{\alpha,\beta}(t) = 0$ for all $t \in \RR^+$ and for all $(\alpha,\beta) \in \NN^{2n} \setminus \Gamma$, where $\Gamma:=\{(\alpha,\beta) \in \NN^{2 n}:|\alpha| \geq 2, \, |\beta| \geq 1 \}$,
\item there exist $M_f \in [1,+\infty)$ and $a \in [0,1)$ such that\footnote{The interval $a \in [0,1)$ is a compact way to denote either the time decay $a \in (0,1)$ or the boundedness $a=0$. As in our previous paper we recall that we are interested in the case of small $a$ (slow decay) and the upper bound $a=1$ is set for simplicity. On the other hand, it is easy to realise that the case $a \geq 1$ is straightforward.}, for all $(x,y,t) \in \mathcal{Q}_R \times \RR^+$,
\beq{eq:decay}
\norm{f(x,y,t)}{R} \leq M_f e^{-a t} \mbox{.}
\end{equation}
\end{enumerate}
\subsection{Main result}
In the described setting, the main result can be stated as follows
\begin{satz}\label{thm}
Suppose that one of the following conditions is satisfied:
\begin{description}
\item{I. Time decay:} $a>0$.
\item{II. Linearity in $y$ $+$ non-resonance:} $a=0$ and the perturbation is linear in $y$, denoted by ($Ly$), i.e. of the form $f(x,y,t)=y \cdot g(x,t)$. In addition, the vector $\Lambda:=(\lambda_1,\ldots,\lambda_n)$, satisfies the \emph{non-resonance condition}
\beq{eq:nonres}
\max_{l=1,\ldots,n} \left(|\Re \mathcal{U}(\alpha,e_l,\Lambda)|^{-1} \right)\leq \gamma |\alpha|^{\tau}, \qquad \forall \alpha \in \NN^n \mbox{,}
\end{equation}
where $\mathcal{U}(\alpha,\beta,\Lambda):=(\alpha-\beta) \cdot \Lambda$, for some $\gamma>0$ and $\tau \geq n$. Here $e_l$ stands for the $l$-th vector of the canonical basis of $\RR^n$.
\end{description}
Then it is possible to determine $R_*,R_0$ with $0<R_*<R_0 \leq R^{16}$ and a family of canonical transformations $(x,y,\eta)=\mathcal{M}(x^{(\infty)},y^{(\infty)},\eta^{(\infty)})$, $\mathcal{M}:\mathcal{D}_{R_*} \rightarrow \mathcal{D}_{R_0}$, analytic on $\mathcal{D}_{R_*}$ for all $t \in \RR^+$, casting the Hamiltonian (\ref{eq:ham}) into the \emph{strong normal form}
\beq{eq:normalform}
H^{(\infty)}(x^{(\infty)},y^{(\infty)},\eta^{(\infty)})=h(x^{(\infty)}, y^{(\infty)} , \eta^{(\infty)}) \mbox{.}
\end{equation}
\end{satz}
\begin{rem}
One immediately recognizes the similarity between (\ref{eq:nonres}) and the standard Diophantine condition. Clearly, all the vectors $\Lambda$ whose real part is a Diophantine vector satisfy condition (\ref{eq:nonres}), no matter what the imaginary part is. Hence the set of vectors satisfying (\ref{eq:nonres}) is, \emph{a fortiori}, a full-measure set.\\
As anticipated in the introduction, we stress that condition (\ref{eq:nonres}) is stronger than the non-resonance condition imposed in \cite{pust} and it is not satisfied in the case of purely imaginary $\Lambda$.
\end{rem}
\begin{rem}\label{remtwo}
As usually done in the \emph{Lie series method}, see e.g. \cite{gio03}, the transformation $\mathcal{M}$ will be constructed as the limit (defined, at the moment, only at a formal level)
\beq{eq:composition}
\mathcal{M}:=\lim_{j \rightarrow \infty} \mathcal{M}^{(j)} \circ \mathcal{M}^{(j-1)} \circ \ldots \circ \mathcal{M}^{(0)} \mbox{,}
\end{equation}
where $\mathcal{M}^{(j)}:=\exp(\lie{j}) \equiv \id + \sum_{s \geq 1} (s!)^{-1} \lie{j}^s$ and $\lie{j}:=\{\cdot,\chi^{(j)}\}$. The \emph{generating sequence} $\{\chi^{(j)}\}_{j \in \NN}$, where $\chi^{(j)}=\chi^{(j)}(x,y,t)$, see \cite{giozen}, is meant to be determined. \\
We will show (see the proof of Lemma \ref{lem}) that if the perturbation is ($Ly$), then $\chi^{(j)}(x,y,t)$ is ($Ly$) as well, for all $j \in \NN$. In such a case, it is easy to check by induction that $x^{(j)}= \mathcal{M}^{(j)} x^{(j+1)}$ \emph{does not} depend on the variable $y$, for all $j$. Hence the composition $x \equiv x^{(0)}=\mathcal{M} x^{(\infty)} =:\mathcal{M}_x (x^{(\infty)},t)$ does not depend on $y^{(\infty)}$, i.e. it is an analytic map $\mathcal{M}_x: \tilde{\mathcal{Q}}_{R_*} \rightarrow \tilde{\mathcal{Q}}_{R_0}$ parametrised by $t$, where $\tilde{\mathcal{Q}}_{R}:=\{x \in \CC^n : |x|\leq R\}$.
This will play a key role in the next section.
\end{rem}
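The $y$-independence claimed above can be checked on a toy generator: the following Python/sympy sketch (one degree of freedom; all names are ours) implements the canonical Poisson bracket and verifies that $\{x,\chi\}$ is independent of $y$ when $\chi$ is ($Ly$):

```python
import sympy as sp

x, y = sp.symbols('x y')

def pbracket(F, chi):
    # canonical Poisson bracket {F, chi} in one degree of freedom
    return sp.expand(sp.diff(F, x)*sp.diff(chi, y) - sp.diff(F, y)*sp.diff(chi, x))

# chi = x^2 * y is (QxLy); its action on x is y-independent
chi = x**2 * y
assert pbracket(x, chi) == x**2
assert sp.diff(pbracket(x, chi), y) == 0
```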
\subsection{The corollary}
Let us consider the following non-linear system
\beq{eq:sys}
\dot{v}=Av+g(v,t) \mbox{,}
\end{equation}
where $v \in \RR^n$, $A$ is an $n \times n$ matrix with real entries and the function $g$ is such that $\partial_{v}^{\nu}g(0,t) \equiv 0$ for all $\nu \in \NN^n$ such that $|\nu| \leq 1$, i.e. $g$ is at least quadratic in $v$. We restrict ourselves to the class of diagonalizable $A$ with non-purely imaginary eigenvalues $\lambda_l$. In the obvious system of coordinates, denoted by $x$, the system (\ref{eq:sys}) reads as
\beq{eq:sysdiag}
\dot{x}_l=\lambda_l x_l + \tilde{g}_l(x,t), \qquad l=1,\ldots,n\mbox{.}
\end{equation}
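The diagonalization step leading to (\ref{eq:sysdiag}) can be illustrated numerically; the following Python/NumPy sketch (the matrix below is an arbitrary illustrative choice) verifies that the linear part becomes diagonal in the new coordinates:

```python
import numpy as np

# toy diagonalizable A with real, non-purely-imaginary eigenvalues
A = np.array([[-1.0, 2.0],
              [ 0.0, -3.0]])
lam, P = np.linalg.eig(A)            # columns of P are eigenvectors
x_coords = np.linalg.inv(P) @ A @ P  # linear part in the x coordinates

assert np.allclose(x_coords, np.diag(lam))
```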
In this framework one can state the following
\begin{cor}
Suppose that $f(x,y,t):=y \cdot \tilde{g}(x,t)$ and $\Lambda$ is such that the conditions described in II of Theorem \ref{thm} are satisfied. Then the system (\ref{eq:sysdiag}) is integrable in a suitable neighbourhood of the origin. \\
The same result holds, in particular, without any non-resonance condition on $\Lambda$, provided that $\tilde{g}(x,t)$ is such that (\ref{eq:decay}) is satisfied with $a>0$.
\end{cor}
\proof
The key remark, see e.g. \cite{berdvar}, is that (\ref{eq:sysdiag}) can be interpreted as a set of canonical equations of the Hamiltonian system with Hamiltonian $
\mathcal{K}:=\eta + \sum_{l=1}^n y_l(\Lambda_l x_l + \tilde{g}_l(x,t))$, i.e. (\ref{eq:ham}) with $f(x,y,t)$ defined in the statement. Hence, by Theorem \ref{thm}, there exists a suitable neighbourhood of the origin endowed with a set of coordinates $(x^{(\infty)},y^{(\infty)},\eta^{(\infty)})$, such that $\mathcal{K}$ is cast into the (integrable) strong normal form $\mathcal{K}^{(\infty)}=\eta^{(\infty)}+ \sum_{l=1}^n \lambda_l y_l^{(\infty)} x_l^{(\infty)}$. Furthermore, as noticed in Remark \ref{remtwo}, $\mathcal{M}_x$ is an analytic map between $x$ and $x^{(\infty)}$. Hence $x(t)=\mathcal{M}_x (x^{(\infty)}(0)\exp(\mathcal{A} t),t)$, with $\mathcal{A}:=\diag(\lambda_1,\ldots,\lambda_n)$, gives the explicit solution of (\ref{eq:sysdiag}).
\endproof
\section{Some preliminary results}
\subsection{Two elementary inequalities}
\begin{prop}\label{prop:estimates}
For all $\mathcal{R} \leq e^{-4}$ and all $\delta \leq 1/2$ the following inequalities hold
\beq{eq:in}
\sum_{\substack{\nu \in \NN^m \\ |\nu|\geq N}} \mathcal{R}^{|\nu|} \leq 2 m e^{3m-3} \mathcal{R}^{\frac{3N}{4}},\qquad
\sum_{\nu \in \NN^m} |\nu|^{\mu} (1-\delta)^{|\nu|} \leq \mathcal{C}(m,\mu) \delta^{-m-\mu-1} \mbox{,}
\end{equation}
where $m \geq 2$, $\mu \geq 0$ and $\mathcal{C}(m,\mu):=e^{4m+\mu-1} (m+\mu)^{(m+\mu)}/(m-1)!$.
\end{prop}
\proof See Appendix.
\endproof
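Both bounds in (\ref{eq:in}) are elementary but easy to mistype; a quick numerical spot-check of the first one (Python; the parameter values are our choice) sums over multi-indices by total degree, using the fact that the number of $\nu \in \NN^m$ with $|\nu| = l$ is $\binom{l+m-1}{m-1}$:

```python
from math import comb, exp

def lhs(m, R, N, terms=200):
    # sum_{|nu| >= N} R^{|nu|}, grouped by total degree l
    return sum(comb(l + m - 1, m - 1) * R**l for l in range(N, N + terms))

def rhs(m, R, N):
    return 2 * m * exp(3*m - 3) * R**(3*N/4)

R = exp(-4)
for m in (2, 3, 4):
    for N in (0, 1, 5):
        assert lhs(m, R, N) <= rhs(m, R, N)
```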
\subsection{A result on the homological equation}
\begin{prop}\label{propone}
Consider the following equation
\beq{eq:homol}
\lie{j} h + f^{(j)}=0 \mbox{,}
\end{equation}
where $h$ has been defined in (\ref{eq:ham}) and $f^{(j)}=f^{(j)}(x,y,t)=\sum_{(\alpha,\beta) \in \Gamma} f_{\alpha,\beta}^{(j)} (t) x^{\alpha} y^{\beta}$ satisfies $\norm{f^{(j)}}{\tilde{R}} \leq M_j \exp(-a t)$ for some $a \in [0,1)$. The following statements hold for all $\delta \in (0,1/2]$:
\begin{enumerate}
\item If $a>0$, there exists $C_1=C_1(n,\Lambda)>0$ such that
\beq{eq:estimone}
\norm{\chi^{(j)}}{(1-\delta)\tilde{R}},\norm{\partial_t \chi^{(j)}}{(1-\delta)\tilde{R}} \leq C_1 M_j a^{-1} \delta^{-2(n+1)} \mbox{.}
\end{equation}
\item If $a=0$, $f^{(j)}$ is of the form $f^{(j)}=y \cdot g^{(j)} (x,t)$ and $\Lambda$ satisfies (\ref{eq:nonres}), there exists $C_2=C_2(n,\Lambda,\tau,\gamma)>0$ such that
\beq{eq:estimtwo}
\norm{\chi^{(j)}}{(1-\delta)\tilde{R}},\norm{\partial_t \chi^{(j)}}{(1-\delta)\tilde{R}} \leq C_2 M_j \delta^{-(n+\tau+2)} \mbox{.}
\end{equation}
\end{enumerate}
\end{prop}
\proof
First of all note that $\lie{j} h=\partial_t \chi ^{(j)}+ \sum_{l=1}^n \lambda_l (x_l \partial_{x_l}-y_l \partial_{y_l})\chi^{(j)} $. By expanding the generating function as $\chi^{(j)} (x,y,t)=\sum_{(\alpha,\beta) \in \NN^{2n}} c_{\alpha,\beta}^{(j)} (t) x^{\alpha} y^{\beta}$, equation (\ref{eq:homol}) reads, in terms of Taylor coefficients, as
\beq{eq:homolocomp}
\dot{c}_{\alpha,\beta}^{(j)} (t) + \mathcal{U}(\alpha,\beta,\Lambda) c_{\alpha,\beta}^{(j)} =f_{\alpha,\beta}^{(j)} (t) \mbox{.}
\end{equation}
The solution of (\ref{eq:homolocomp}) is easily written, for all $(\alpha,\beta) \in \Gamma$, as
\beq{eq:solhom}
c_{\alpha,\beta}^{(j)} (t) = e^{-\mathcal{U}(\alpha,\beta,\Lambda)t}\left[c_{\alpha,\beta}^{(j)} (0) + \int_0^t e^{\mathcal{U}(\alpha,\beta,\Lambda)s} f_{\alpha,\beta}^{(j)} (s) ds \right] \mbox{,}
\end{equation}
while trivially $c_{\alpha,\beta}^{(j)}(t) \equiv 0$ for all $(\alpha,\beta) \in \NN^{2n} \setminus \Gamma$.\\
Now denote $\mathcal{U}_R+i \mathcal{U}_I:=\mathcal{U}(\alpha,\beta,\Lambda)$ with $\mathcal{U}_{I,R} \in \RR$ and recall that, by hypothesis, $|f_{\alpha,\beta}^{(j)} (t)| \leq M_j \tilde{R}^{-|\alpha+\beta|} e^{-a t}$.\\
\textbf{Case} $a>0$. For all $(\alpha,\beta) \in \Gamma$ such that $\mathcal{U}_R \geq 0$ we choose $c_{\alpha,\beta}^{(j)} (0)=0$; then we have
\[|c_{\alpha,\beta}^{(j)}| \leq e^{-\mathcal{U}_R t} \int_0^t e^{\mathcal{U}_R s} |f_{\alpha,\beta}^{(j)} (s)|ds \leq M_j \tilde{R}^{-|\alpha+\beta|} \int_0^t e^{-as} ds \leq M_j \tilde{R}^{-|\alpha+\beta|} a^{-1} \mbox{.}\]
Otherwise, for those $\alpha$ and $\beta$ such that $\mathcal{U}_R<0$, redefine $\mathcal{U}_R:=-\mathcal{U}_R$ with $\mathcal{U}_R>0$ and choose $c_{\alpha,\beta}^{(j)} (0):=-\int_{\RR^+} \exp(\mathcal{U}(\alpha,\beta,\Lambda)s) f_{\alpha,\beta}^{(j)}(s) ds$. Note that $|c_{\alpha,\beta}^{(j)} (0)| < + \infty$. In this case we have $|c_{\alpha,\beta}^{(j)}| \leq \exp(\mathcal{U}_R t) \int_t^{\infty} \exp(-\mathcal{U}_R s) |f_{\alpha,\beta}^{(j)} (s)|ds \leq M_j \tilde{R}^{-|\alpha+\beta|} a^{-1}$. Hence $|c_{\alpha,\beta}^{(j)}|\leq M_j \tilde{R}^{-|\alpha+\beta|} a^{-1}$ for all $(\alpha,\beta) \in \Gamma$. By recalling (\ref{eq:taylor}) one gets $\norm{\chi^{(j)}}{(1-\delta)\tilde{R}} \leq M_j a^{-1} \sum_{(\alpha,\beta) \in \NN^{2n}} (1-\delta)^{|\alpha+\beta|}$. The use of the second of \ff{eq:in} with $\nu:=(\alpha,\beta)$, yields the first part of (\ref{eq:estimone}) with $C_1$ set for the moment to $\hat{C}_1:=\mathcal{C}(2n,0)$.\\
Directly from (\ref{eq:homolocomp}) we get $|\dot{c}_{\alpha,\beta}^{(j)}| \leq |\alpha+\beta||\Lambda||c_{\alpha,\beta}^{(j)}|+|f_{\alpha,\beta}^{(j)}| \leq a^{-1} M_j (1+|\Lambda|) |\alpha+\beta|\tilde{R}^{-|\alpha+\beta|}$. By \ff{eq:in} with $\mu=1$ we get the second part of \ff{eq:estimone}. The constant is chosen as $C_1:=(1+ |\Lambda|) \mathcal{C}(2n,1) > \hat{C}_1$. \\
\textbf{Case} $a=0$. In this case, the homological equation reads as
\beq{eq:homolocomptwo}
\dot{c}_{\alpha,l}^{(j)} (t) + \mathcal{U}(\alpha,e_l,\Lambda) c_{\alpha,l}^{(j)} =f_{\alpha,l}^{(j)} (t) \mbox{,}
\end{equation}
where $f_{\alpha,l}^{(j)}:=f_{\alpha,\beta}^{(j)}|_{\beta=e_l}$
(the same notation for $c_{\alpha,l}^{(j)}$), for all $\alpha \in \NN^n$ such that $|\alpha| \geq 2$ and for all $l=1,\ldots,n$. By hypothesis (\ref{eq:nonres}), $\mathcal{U}_R \neq 0$. Similarly to the case $a>0$, if $\mathcal{U}_R >0$ we set $c_{\alpha,l}^{(j)} (0) =0$, otherwise $c_{\alpha,l}^{(j)} (0):=-\int_{\RR^+} \exp(\mathcal{U}(\alpha,e_l,\Lambda)s) f_{\alpha,l}^{(j)}(s) ds$. Proceeding as before, one obtains, by using (\ref{eq:nonres}),
\[
|c_{\alpha,l}^{(j)} (t)| \leq M_j \mathcal{U}_R^{-1} \tilde{R}^{-|\alpha|-1} \leq \gamma M_j |\alpha|^{\tau} \tilde{R}^{-|\alpha|-1} \mbox{.}
\]
This implies $\norm{\chi^{(j)}}{(1-\delta)\tilde{R}} \leq n \gamma M_j \sum_{\alpha \in \NN^n} |\alpha|^{\tau} (1-\delta)^{|\alpha|}$ which is, by \ff{eq:in}, the first part of \ff{eq:estimtwo} with $\hat{C}_2=n \gamma \mathcal{C}(n,\tau)$. On the other hand, from the homological equation, we get $|\dot{c}_{\alpha,l}^{(j)}(t)| \leq M_j |\alpha|^{\tau+1} (1+\gamma|\Lambda|) \tilde{R}^{-|\alpha|-1}$. Similarly, the latter yields the second part of \ff{eq:estimtwo} with $C_2:=\max\{ n (1+\gamma|\Lambda|) \mathcal{C}(n,\tau+1), \hat{C}_2\}$.
\endproof
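The formula (\ref{eq:solhom}) is standard variation of constants; as a symbolic sanity check (Python/sympy, with a generic coefficient $f$ and $u$ playing the role of $\mathcal{U}(\alpha,\beta,\Lambda)$), one can verify that it solves the scalar equation (\ref{eq:homolocomp}):

```python
import sympy as sp

t, s = sp.symbols('t s')
u, c0 = sp.symbols('u c0')      # u stands for U(alpha, beta, Lambda)
f = sp.Function('f')

# variation-of-constants solution (eq:solhom)
c = sp.exp(-u*t) * (c0 + sp.integrate(sp.exp(u*s) * f(s), (s, 0, t)))

# it satisfies c' + u*c = f(t)
assert sp.simplify(sp.diff(c, t) + u*c - f(t)) == 0
```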
\subsection{A bound on the Lie operator}
\begin{prop}\label{proptwo}
Let $F,G$ be two functions such that $\norm{F}{(1-\tilde{d})\tilde{R}},\norm{G}{(1-\tilde{d})\tilde{R}}< +\infty$ for some $\tilde{d} \in (0,1/4]$ and $\tilde{R}>0$. Then for all $ s \in \NN$ the following bound holds
\beq{eq:spoisson}
\norm{\mathcal{L}_{G}^s F}{(1-2 \tilde{d})\tilde{R}} \leq e^{-2} s! [e^2 (\tilde{R} \tilde{d} )^{-2} \norm{G}{(1-\tilde{d})\tilde{R}}]^s \norm{F}{(1-\tilde{d})\tilde{R}} \mbox{.}
\end{equation}
\end{prop}
\proof
Straightforward from \cite[Sec 3.2]{giorLyap} and \cite[Lemma 4.2]{gio03}.
\endproof
\section{Proof of the main result: convergence of the normal form}
\subsection{Preparation of the domains}
Taking into account the domain restriction imposed by Proposition \ref{proptwo}, the canonical transformations will be constructed in the form $\mathcal{M}_j:\mathcal{D}_{R_{j+1}} \rightarrow \mathcal{D}_{R_j} \ni (x^{(j)},y^{(j)},\eta^{(j)})$ (with the understanding that $(x^{(0)},y^{(0)},\eta^{(0)}) \equiv (x,y,\eta)$), where $\{\mathcal{D}_{R_j}\}_{j \in \NN}$ is a suitable sequence of nested domains. We will also provide another sequence $\{\epsilon_j\}$ which will be used to control the size of the remainder.
\begin{lem}\label{propthree}
Let us consider the following sequences
\beq{eq:rec}
\epsilon_{j+1}=K a^{-1} d_j^{-\sigma} \epsilon_j^{2},\qquad
R_{j+1}:=(1-2 d_j) R_j \mbox{,}
\end{equation}
with $\epsilon_j,R_j < 1$, $d_j \leq 1/4$ and where $\epsilon_0, R_0,a,K,\sigma>0$ are given. If
\beq{eq:convcond}
\epsilon_0 \leq \epsilon_a:=a (2 \pi)^{-\sigma} K^{-1} \mbox{,}
\end{equation}
then it is possible to construct $\{d_j\}_{j \in \NN}$ in such a way that $R_j \geq R_*:=R_0/2$ and $\epsilon_j \rightarrow 0$ monotonically as $j \rightarrow \infty$.
\end{lem}
\begin{rem}
The property $R_*>0$ is crucial, as $R_*$ is the lower bound for the analyticity radius of the normalised Hamiltonian.
\end{rem}
\proof Straightforward from \cite[Lemma 4.4]{fw15c}. We recall that a suitable choice is $\epsilon_j=\epsilon_0(j+1)^{-2\sigma}$, then, by \ff{eq:rec}, $d_j=(\epsilon_0 K a^{-1} )^{(1/\sigma)}(j+2)^2/(j+1)^4$. From the latter, one has
\beq{eq:series}
\sum_{j \geq 0} d_j \leq 1/6 \mbox{,}
\end{equation}
provided that condition \ff{eq:convcond} is satisfied.
\endproof
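As a numerical cross-check of \ff{eq:rec} (the parameter values below are illustrative only), note that the displayed formula for $d_j$ corresponds to the choice $\epsilon_j=\epsilon_0(j+1)^{-2\sigma}$; with these two expressions the recursion closes identically:

```python
# numerical consistency check of (eq:rec); parameter values are illustrative
eps0, K, a, sigma = 1e-9, 2.0, 0.5, 9
c = (eps0 * K / a) ** (1.0 / sigma)

def eps(j): return eps0 * (j + 1) ** (-2 * sigma)
def d(j):   return c * (j + 2) ** 2 / (j + 1) ** 4

for j in range(20):
    assert abs(eps(j + 1) - (K / a) * d(j) ** (-sigma) * eps(j) ** 2) <= 1e-10 * eps(j + 1)
```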
\subsection{Iterative lemma}
Let us define for all $j \geq 0$, $H^{(j+1)}:=\mathcal{M}_j H^{(j)}$ with $H^{(0)}:=H$.
\begin{lem}\label{lem}
Under the same hypotheses as Theorem \ref{thm} and under condition \ff{eq:convcond}, it is possible to find an $R_0$ and a sequence $\{\chi^{(j)}\}_{j \in \NN}$ such that $H^{(j)}(x,y,\eta,t)=h(x,y,\eta)+f^{(j)} (x,y,t)$ with $f^{(j)}$ ($QxLy$) and such that $\norm{f^{(j)}}{R_j} \leq \epsilon_j e^{-at}$ for all $j$, where $\epsilon_j,R_j$ are given by \ff{eq:rec}.
\end{lem}
The stated result exploits the possibility of removing the perturbation via the normalization algorithm, obtaining, in this way, the desired normal form \ff{eq:normalform}. The interpretation of $\epsilon_j$ as a bound for the remainder is clearly related to the well-known feature of the \emph{quadratic method}.
\proof
By induction. If $j=0$, the statement is clearly true by hypothesis, by setting $f^{(0)}:=f$, in either case $I$ or case $II$. We are supposing here that $\epsilon_0$ is small enough to satisfy \ff{eq:convcond}. This will be achieved later by a suitable choice of $R_0$.\\
Let us suppose the statement to be valid for $j$. In this way we get
\[
H^{(j+1)} \equiv \exp(\lie{j}) H^{(j)} = h+ f^{(j)} + \lie{j} h + \sum_{s \geq 1} (s!)^{-1} \lie{j}^s f^{(j)} + \sum_{s \geq 2} (s!)^{-1} \lie{j}^s h \mbox{.}
\]
We shall determine $\chi^{(j)}$ in such a way that \ff{eq:homol} is satisfied so that, by setting
\beq{eq:fjpo}
f^{(j+1)}:=\sum_{s \geq 1} \frac{1}{s!} \lie{j}^s f^{(j)} + \sum_{s \geq 2} \frac{1}{s!} \lie{j}^s h \Heq{\ref{eq:homol}}{=} \sum_{s \geq 1} \frac{s}{(s+1)!} \lie{j}^s f^{(j)} \mbox{,}
\end{equation}
one has $H^{(j+1)}=h+f^{(j+1)}$.\\
It is immediate from \ff{eq:homolocomp} that $\chi^{(j)}$ has the same null Taylor coefficients as $f^{(j)}$. Hence if $f^{(j)}$ is ($QxLy$) then so is $\chi^{(j)}$. It is easy to check by induction that this implies that $\lie{j}^s f^{(j)}$ is ($QxLy$) for all $s$, hence $f^{(j+1)}$ is ($QxLy$). Similarly, equation \ff{eq:homolocomptwo} implies that if $f^{(j)}$ is ($Ly$) then so is $\chi^{(j)}$. This implies that $\lie{j}^s f^{(j)} $ is ($Ly$) for all $s$, hence $f^{(j+1)}$ is ($Ly$). This completes the formal part. In particular, by induction, $f^{(j)} $ is ($Ly$) for all $j$, as claimed in Remark \ref{remtwo}. \\
Let us now discuss the quantitative estimate on $f^{(j)}$ in the case $a>0$. By Propositions \ref{propone}, \ref{proptwo} and the inductive hypothesis, one gets
\beq{eq:lief}
\norm{\lie{j}^s f^{(j)}}{(1-2 d_j)R_j} \leq s! \Theta^s \epsilon_j e^{-at},\qquad
\Theta:=\frac{e^2 C_1 }{ a R_*^2 d_j^{2n+4}} \epsilon_j \mbox{.}
\end{equation}
Setting $K:=2 n e^2 C_1 R_*^{-2}$ and $\sigma:=2n + 5$, we have that
\beq{eq:}
2 n \Theta=(K \epsilon_j a^{-1} d_j^{-\sigma})d_j \leq d_j \mbox{,}
\end{equation}
as $\epsilon_{j+1}/\epsilon_j<1$ by Lemma \ref{propthree}. Hence, $\Theta < 1/2$ and the series defined in \ff{eq:fjpo} is convergent; furthermore
\beq{eq:lastlief}
e^{at} \norm{f^{(j+1)}}{R_{j+1}} \leq \epsilon_j \sum_{s \geq 1} \Theta^s \leq 2 n \Theta \epsilon_j \Heq{\ref{eq:}}{\leq} K a^{-1} d_j^{-\sigma} \epsilon_j^2 \Heq{\ref{eq:rec}}{=} \epsilon_{j+1} \mbox{,}
\end{equation}
which completes the inductive step. The condition \ff{eq:convcond} in this case reads as
\beq{eq:condone}
\epsilon_0 \leq a R_0^2 (2\pi)^{-\sigma} (8 n e^2 C_1)^{-1} \mbox{.}
\end{equation}
On the other hand, from the analyticity of $f$, we get $|f_{\alpha,\beta}(t)| \leq M_f R^{-|\alpha+\beta|} \leq M_f R_0^{-|\alpha+\beta|/16}$, as $R_0 \leq R^{16}$ by hypothesis. By using the first of \ff{eq:in} we get $\norm{f}{R_0} \leq M_f \sum_{(\alpha,\beta) \in \NN^{2n}} R_0^{(15/16)|\alpha+\beta|} \leq 2n e^{(2n-1)} M_f R_0^{135/64}=:\epsilon_0$. Replacing the latter in \ff{eq:condone}, the condition on $R_0$ described in the statement of Theorem \ref{thm} must be supplemented with the following one
\beq{eq:rzeroa}
R_0 \leq [a/(16 (2 \pi)^{\sigma} e^{2n+1} n^2 C_1 M_f) ]^{64/7} \mbox{.}
\end{equation}
The case $a=0$ is analogous: it is sufficient to replace $C_1$ with $C_2$, remove the term $e^{\pm a t}$ from the statement, \ff{eq:lief} and \ff{eq:lastlief}, then replace $a$ with $1$ from \ff{eq:lief} to \ff{eq:condone}, where now $\sigma=n+\tau+5$. The only substantial difference consists in the sum obtained from \ff{eq:in}, which is slightly improved, since $f$ is linear in $y$. We have in this case $\norm{f}{R_0} \leq n^2 e^{n-1} M_f R_0^{75/32}=:\epsilon_0$ leading to
\beq{eq:rlesszeroa}
R_0 \leq [8 (2 \pi)^{\sigma} e^{n+1} n^3 C_2 M_f]^{-32/11}\mbox{.}
\end{equation}
\endproof
\subsection{Bounds on the coordinate transformation}
\begin{lem}
The transformation of coordinates defined by the limit \ff{eq:composition} satisfies
\beq{eq:trasf}
|x^{(\infty)}-x|,|y^{(\infty)}-y|,|\eta^{(\infty)}-\eta| \leq R_0/6 \mbox{,}
\end{equation}
in particular, it defines an analytic map $\mathcal{M}:\mathcal{D}_{R_*} \rightarrow \mathcal{D}_{R_0}$ and $H^{(\infty)}:=\mathcal{M} H$ is an analytic function on $\mathcal{D}_{R_*}$.
\end{lem}
\proof
We will discuss the case $a>0$. The case $a=0$ is straightforward: simply replace $C_1$ with $C_2$ and $a$ with $1$, and change the value of $\sigma$ where necessary.\\
Let us start from the variable $x$. Note that, by Proposition \ref{proptwo}, one has $\norm{\lie{j}^s x_l^{(j+1)}}{(1-2 d_j)R_j} \leq s! \Theta^s R_0$ for all $l=1,\ldots,n$. Hence we have, by \ff{eq:}
\[
|x^{(j+1)}-x^{(j)}| \leq n \max_{l=1,\ldots,n} \sum_{s \geq 1} \frac{1}{s!}
\norm{\lie{j}^s x_l^{(j+1)}}{(1-2 d_j)R_j} \leq 2 n R_0 \Theta \leq R_0 d_j \mbox{.}
\]
In this way $ |x^{(\infty)}-x| \leq \sum_{j \geq 0} |x^{(j+1)}-x^{(j)}| $ converges by \ff{eq:series}. The procedure for $y$ is analogous. \\
As for the third of \ff{eq:trasf}, it is necessary to observe that $\lie{j} \eta = - \partial_t \chi^{(j)}$. Hence, by \ff{eq:spoisson} and the second of \ff{eq:estimone}, one has $
\norm{\lie{j}^s \eta}{(1-2 d_j)R_j} \leq e^{-2}s! \Theta^{s-1} (R_*^2 e^{-2}\Theta) \leq s! \Theta^s R_0$, hence $|\eta^{(j+1)}-\eta^{(j)}| \leq 2 n R_0 \Theta \leq R_0 d_j $.\\
The bounds \ff{eq:trasf} ensure that points in $\mathcal{D}_{R_*}$ are mapped within $\mathcal{D}_{R_0}$, where $R_*=R_0/2$. Furthermore, the absolute convergence of the above described series, ensured by \ff{eq:series}, guarantees uniform convergence on every compact subset of $\mathcal{D}_{R_*}$; the analyticity of $\mathcal{M}$, and then of $H^{(\infty)}$, follows from the theorem of Weierstra\ss, see e.g. \cite{dett}.
\endproof
\subsection*{Appendix. Proof of Proposition \ref{prop:estimates}}
First of all, recall $\sum_{|\nu| \geq N} |\nu|^{\mu} \mathcal{R}^{|\nu|} =
\sum_{l \geq N}
\binom{l+m-1}{m-1} l^{\mu} \mathcal{R}^l$.
Now note that $\log \prod_{j=1}^{m-1}(l+j) \leq \int_1^m \log(l+x)dx =1-m+\log[(m+l)^{(m+l)}(1+l)^{-(1+l)}]$ hence $(m-1)!\binom{l+m-1}{m-1}= \prod_{j=1}^{m-1}(l+j) \leq e^{m-1} (m+l)^{(m+l)}(1+l)^{-(1+l)} \leq e^{2m-2} (m+l)^{(m+\mu)} $. This yields
\beq{eq:intin}
\sum_{|\nu| \geq N} |\nu|^{\mu} \mathcal{R}^{|\nu|} \leq [e^{2m-2}/(m-1)!]
\sum_{l \geq N} (m+l)^{(m+\mu)} \mathcal{R}^{l} \mbox{.}
\end{equation}
On the other hand, the function $h(x):=(m+x)^{\kappa}\mathcal{R}^{x/4}$ has a maximum at $x=0$ (on the non-negative semi-axis) if $\mathcal{R} \leq \exp(-4 \kappa /m)$ and at $x^*:=-m-4 \kappa/\log\mathcal{R} $ otherwise. Hence, from (\ref{eq:intin}) with $\mu=0$ we have $\sum_{|\nu| \geq N} \mathcal{R}^{|\nu|} \leq [(m-1)!]^{-1} m^m e^{2m-2}
\sum_{l \geq N} \mathcal{R}^{(3/4)l}$ which gives the first of \ff{eq:in} by using the inequality $m^m \leq e^{m-1}m!$ and recalling $\mathcal{R} \leq e^{-4}$.\\
Now set $\mathcal{R}=1-\delta$. By hypothesis $\mathcal{R}>e^{-4}$, hence $(m+l)^{(m+\mu)}(1-\delta)^{l/4} \leq (1-\delta)^{-m/2}(-2 (m+\mu)/\log (1-\delta))^{(m+\mu)}$. By substituting the latter in \ff{eq:intin} with $N=0$, then using the inequalities $-\log(1-\delta) \geq \delta$ and $[1-(1-\delta)^{3/4}] \geq \delta/2$ as $\delta \leq 1/2$, the second of \ff{eq:in} easily follows.
\subsection*{Acknowledgements}
The first author is grateful to Prof. Dario Bambusi for remarkable discussions on this problem.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:1}
Over the last decade, the problem of exceptional orthogonal polynomials (XOPs) has generated a great deal of interest and activity in several areas of mathematics and physics. Most of these activities focused on Jacobi and Laguerre XOPs (see \textit{e.g.}
\cite{Bonneux2019,Bonneux2018a,Gomez-Ullate2010a,Gomez-Ullate2010,Ho2012,Liaw2015,Odake2010a,Quesne2008,Quesne2012,Sasaki2010}
and references therein), Hermite XOPs (see \textit{e.g.} \cite{Bonneux2018,Gomez-Ullate2014,Gomez-Ullate2018,Gomez-Ullate2013,Gomez-Ullate2016,Kuijlaars2015}) and multi-indexed orthogonal polynomials (see \textit{e.g.}
\cite{Odake2012,Odake2013a,Odake2013b,Odake2017a} and references therein). XOPs have been shown to play an essential role in several branches of physics, mostly related to the quantum harmonic oscillator. In particular, Jacobi XOPs were applied to the description of the Kepler-Coulomb quantum model in \cite{Hoque2018} and were seen as having an electrostatic interpretation in \cite{Dimitrov2014}. Moreover, Hermite XOPs were applied to the description of coherent states in \cite{Hoffmann2018}.
The results obtained in \cite{Gomez-Ullate2014} for the construction of Hermite XOPs were so promising that it seems worthwhile to study a particular case of Hermite XOPs corresponding to a codimension of two by an analytical method. We demonstrate the links between Hermite XOPs and minimal surfaces immersed in the $\frak{su}(2)$ Lie algebra. For this purpose, it is convenient to write the structural equations of these surfaces in terms of the moving frame using $2\times 2$ matrices. In particular, in \cite{Chalifour2019}, such a description was applied to the investigation of classical orthogonal polynomials \cite{Szego1939} and their link with special classes of minimal surfaces associated with the Hermite, Bessel, Chebyshev, Legendre, Laguerre, Gegenbauer and Jacobi polynomials.
In this paper, we examine certain aspects of Hermite XOPs for which we fix the partition. We set the partition defining a family of polynomials of codimension two, corresponding to the gap sequence in the spectrum of the exceptional Hermite differential operator. We show that setting the partition to a specific value allows us to determine the ordinary differential equation (ODE) whose general solution includes the Hermite XOPs and to determine the explicit form of the minimal surfaces associated with these polynomials. The methodological approach assumed in this paper is based on the general solution of such ODEs describing orthogonal polynomials as presented in \cite{Chalifour2019,Doliwa2012}. It allows us to identify a specific ODE describing XOPs corresponding to the reduced linear problem associated with minimal surfaces. The main idea is to investigate Hermite XOPs as the solutions of the linear problem for the moving frame. These polynomials are introduced in the linear problem, resulting in a moving frame directly determined by Hermite XOPs. The resolution of the linear problem then leads to the explicit computation of the associated Enneper-Weierstrass formula for the immersion of minimal surfaces in the 3-dimensional Euclidean space $\mathbb{E}^3$. This is, in short, the aim of this paper.
The paper is organized as follows. In section \ref{sec:2}, we recall some basic notions and definitions on the theory of Hermite XOPs. In section \ref{sec:3}, we use the theory of Hermite XOPs to study a family of exceptional polynomials of codimension two, for a fixed partition. We then study a formulation of Hermite XOPs in terms of classical Hermite polynomials. The orthogonality relation is presented. From the Hermite exceptional differential operator, we derive an ODE associated with a family of codimension two for this partition. We present a fundamental system for this ODE composed of analytic solutions obtained by the method of generalized series. We study the dependence link between Hermite XOPs of codimension two and these new solutions. In section \ref{sec:4}, we construct minimal surfaces associated with Hermite XOPs of codimension two, benefiting from the link between the Enneper-Weierstrass representation of surfaces and the linear problem associated with the moving frame. We first present the immersion formula for minimal surfaces and associate it with a matrix representation in the $\frak{su}(2)$ algebra. We then apply a method for the reduction of the linear problem by a gauge transformation, which leads to a second-order linear ODE. This ODE is identified with the ODE associated with the Hermite XOPs of codimension two under consideration. We determine the arbitrary functions of the Enneper-Weierstrass representation, which lead to the explicit form of the minimal surfaces in the Euclidean space $\mathbb{E}^3$. These results are illustrated by numerical representations for different values of the parameter of the exceptional Hermite ODE. In Appendices \ref{app:1} and \ref{app:2}, we present some proofs by induction related to the general solution of the exceptional Hermite ODE.
\newpage
\section{Exceptional Hermite polynomials}
\label{sec:2}
\subsection{Sturm-Liouville operator in terms of Hermite polynomials}
To make the paper self-contained, we present in this section some known results concerning Hermite XOPs which are relevant for our purposes. A summary of recent developments on this subject can be found in the work of G\'omez-Ullate et al \cite{Gomez-Ullate2014}. As a starting point, consider the Sturm-Liouville problem
\begin{equation}\label{eq:Sturm_Liouville}
L\psi = \lambda\psi
\end{equation}
for the Schr{\"o}dinger operator possessing a potential $U(z)$
\begin{equation}\label{eq:SchrodingerOp}
L = -\frac{d^2}{dz^2}+U(z).
\end{equation}
If the operator (\ref{eq:SchrodingerOp}) is without monodromy, then the potential is of the form \cite{Oblomkov1999}
\begin{equation}\label{eq:potential}
U(z) = -2\frac{d^2}{dz^2} \log{Wr(H_{k_1},H_{k_2},..., H_{k_n})}+z^2.
\end{equation}
In this context, $\{k_i\}_{i = 1}^n$ is a strictly increasing sequence of positive integers and $H_n(z)$ is the $n^{\text{th}}$ classical Hermite polynomial, which can be described by the Rodrigues formula \cite{Abramowitz1965}
\begin{equation}\label{eq:Rodriques}
H_n(z) = (-1)^ne^{z^2}\frac{d^n}{dz^n}e^{-z^2}.
\end{equation}
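The Rodrigues formula can be checked symbolically; the following Python/sympy sketch compares it with sympy's built-in (physicists') Hermite polynomials for the first few degrees:

```python
import sympy as sp

z = sp.symbols('z')

def hermite_rodrigues(n):
    # H_n(z) = (-1)^n e^{z^2} d^n/dz^n e^{-z^2}
    return sp.expand((-1)**n * sp.exp(z**2) * sp.diff(sp.exp(-z**2), z, n))

for n in range(6):
    assert hermite_rodrigues(n) == sp.hermite(n, z).expand()
```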
The potential (\ref{eq:potential}) is rational and has singularities corresponding to the zeros of the Wronskian
\begin{equation}
Wr(H_{k_1},H_{k_2},..., H_{k_n})(z).
\end{equation}
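For a concrete instance (our choice of sequence), take $\{k_1,k_2\}=\{1,2\}$: the Wronskian is $Wr(H_1,H_2)=8z^2+4$, which has no real zeros, and the potential (\ref{eq:potential}) becomes an explicit rational deformation of the harmonic one. A Python/sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')

# Wronskian for the sequence {k_1, k_2} = {1, 2}
W = sp.wronskian([sp.hermite(1, z), sp.hermite(2, z)], z)
assert sp.expand(W) == 8*z**2 + 4          # no real zeros

# potential (eq:potential): U = -2 (log W)'' + z^2
U = sp.simplify(-2*sp.diff(sp.log(W), z, 2) + z**2)
assert sp.simplify(U - (z**2 + (16*z**2 - 8)/(2*z**2 + 1)**2)) == 0
```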
Zeros of Wronskians of Hermite polynomials have been studied in \cite{Felder2012,Kuijlaars2015}. The following theorem (formulated in a slightly different way in \cite{Gomez-Ullate2014}) summarizes the results obtained by Krein \cite{Krein1957} and Adler \cite{Adler1994} concerning the zeros of the Wronskian of the eigenfunctions of the problem (\ref{eq:Sturm_Liouville}) in a more general context (general eigenfunctions). In our formulation, the sequence $\{k_1, ..., k_n\}$ is expressed as a new sequence $\{0,1,\ldots, M_0',M_1, \ldots, M_1', \ldots, M_s, \ldots, M_s'\}$, in order to clarify its structure. Here, according to the notation used in \cite{Adler1994}, the symbol prime $'$ denotes the biggest positive integer of the block $\{M_l, ..., M_l'\}$, while $M_l$ denotes the smallest positive integer of the same block.
\begin{theorem}\label{th:1} (Krein-Adler) Let $\phi_j$ be the eigenfunctions of a pure-point Sturm-Liouville operator $L = -\frac{d^2}{dx^2} +U$ defined on the real line
\begin{equation}
L[\phi_j] = \lambda_j\phi_j, \qquad j = 0, 1, 2, ...,\qquad x\in(-\infty,\infty),
\end{equation}
with suitable boundary conditions. The Wronskian Wr$(\phi_{k_1}, ..., \phi_{k_n})$ has no zero on the real line if and only if the sequence of distinct positive integers $\{k_1, ..., k_n\}$, when arranged in an ascending order, has the following structure
\begin{equation}\label{eq:CondThOne}
\{0, 1, ..., M_0'\}\cup\{M_1, ..., M_1'\}\cup...\cup\{M_s, ..., M_s'\}
\end{equation}
where $M_l'+1 < M_{l+1}$ for all $l = 0, ..., s-1$. Here, the block $\{0, 1, ..., M_0'\}$ may be absent and the blocks $\{M_l, ..., M_l'\}$ consist of an even number of terms when $l\geq1$.
\end{theorem}
Condition (\ref{eq:CondThOne}) means that the sequence is allowed to (but does not necessarily) begin with a sequence of arbitrary length, composed of consecutive positive integers starting with zero, followed by an arbitrary number of blocks of even length. The meaning of the inequality $M_l'+1 < M_{l+1}$ is that there is a gap greater or equal to 1 between the biggest positive integer $M_l'$ of a block and the smallest positive integer $M_{l+1}$ of the next block. These results are used to construct Hermite XOPs.
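The admissibility condition (\ref{eq:CondThOne}) is purely combinatorial and can be encoded directly; a minimal Python sketch (function name is ours), which splits the sequence into maximal blocks of consecutive integers:

```python
def is_krein_adler(ks):
    # ks: strictly increasing sequence of non-negative integers
    blocks, cur = [], [ks[0]]
    for a, b in zip(ks, ks[1:]):
        if b == a + 1:
            cur.append(b)
        else:                  # a gap: M_l' + 1 < M_{l+1} holds automatically
            blocks.append(cur)
            cur = [b]
    blocks.append(cur)
    start = 1 if blocks[0][0] == 0 else 0   # initial block {0,...,M_0'} may have any length
    return all(len(blk) % 2 == 0 for blk in blocks[start:])

assert is_krein_adler([1, 2])             # one even block
assert is_krein_adler([0, 1, 2, 4, 5])    # initial block plus an even block
assert not is_krein_adler([1, 2, 3])      # odd block not starting at zero
```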
\newpage
\subsection{Construction of exceptional Hermite polynomials}
The definition of Hermite XOPs begins with the choice of a partition $\lambda = (\lambda_1, ..., \lambda_l)$ for a specific positive integer $m\in\mathbb{N}$, which consists of a non-decreasing sequence
\begin{equation}\label{eq:partition1}
0\leq\lambda_1\leq\lambda_2\leq...\leq \lambda_l.
\end{equation}
For a combinatorial interpretation of the partition associated with XOPs, see \textit{e.g.} \cite{Gomez-Ullate2018,Bonneux2018a,Bonneux2018,Bonneux2019,Felder2012}. The sequence (\ref{eq:partition1}) is a partition of a unique positive integer $m$ if
\begin{equation}
m=\sum_{k=1}^l\lambda_k.
\end{equation}
A sequence of type (\ref{eq:partition1}) determines a strictly increasing sequence called a gap sequence \cite{Gomez-Ullate2013}, of the form
\begin{equation}\label{eq:gapSeq}
0\leq k_1<k_2<...< k_l,
\end{equation}
where
\begin{equation}\label{eq:gapSeqRel}
k_i = \lambda_i +i - 1.
\end{equation}
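The correspondence (\ref{eq:gapSeqRel}) between a partition and its gap sequence is immediate to implement (Python; the function name is ours):

```python
def gap_sequence(partition):
    # k_i = lambda_i + i - 1 (1-indexed i), for a non-decreasing partition
    return [lam + i for i, lam in enumerate(partition)]

assert gap_sequence([1, 1]) == [1, 2]     # double partition (1,1)
ks = gap_sequence([0, 2, 2, 5])           # gives 0, 3, 4, 8: strictly increasing
assert ks == [0, 3, 4, 8]
assert all(a < b for a, b in zip(ks, ks[1:]))
```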
\begin{definition}
A partition of length $l$ is called a double partition if $l$ is even and $\lambda_{2i-1} = \lambda_{2i}$ for all $i$.
\end{definition}
From an arbitrary partition $\lambda$, we define a double partition of length $2l$ by duplicating each term of the partition (\ref{eq:partition1})
\begin{equation}\label{eq:partition2}
\lambda^2 = (\lambda_1, \lambda_1, \lambda_2, \lambda_2, ..., \lambda_l, \lambda_l).
\end{equation}
The sequence $\{k_1,k_2, ..., k_{2l}\}$ arising from the application of the relation (\ref{eq:gapSeqRel}) to any double partition of the form (\ref{eq:partition2}) respects the structure (\ref{eq:CondThOne}) of Theorem \ref{th:1}.
\begin{definition}
An Adler partition is either a double partition or a double partition preceded by a sequence of zeros of arbitrary length.
\end{definition}
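For illustration, the passage from a partition to its double partition and to the associated gap sequence can be sketched numerically. The following Python snippet is our own verification aid (not part of the construction itself); the example partition $(2,3)$ is chosen arbitrarily, and the helper names are ours. It applies relation (\ref{eq:gapSeqRel}):

```python
# Sketch: from a partition to its double partition and gap sequence,
# following k_i = lambda_i + i - 1 (the example partition (2, 3) is arbitrary).

def double_partition(lam):
    """Duplicate each part: (2, 3) -> [2, 2, 3, 3]."""
    return [p for part in lam for p in (part, part)]

def gap_sequence(lam):
    """k_i = lambda_i + i - 1 with 1-based index i."""
    return [p + i for i, p in enumerate(lam)]  # 0-based i, so k = p + i

lam = (2, 3)                  # arbitrary example partition of m = 5
lam2 = double_partition(lam)  # -> [2, 2, 3, 3]
ks = gap_sequence(lam2)       # -> [2, 3, 5, 6]
print(lam2, ks)
```

The resulting gap sequence $\{2,3,5,6\}$ consists of two blocks of even length, $\{2,3\}$ and $\{5,6\}$, separated by a gap, in agreement with the structure (\ref{eq:CondThOne}).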
For each partition $\lambda$, we consider the Wronskians
\begin{align}\label{eq:wronskiens}
&H_\lambda := Wr(H_{k_1}, ..., H_{k_l}),\\\label{eq:wronskiens2temp}
&H_{\lambda, n}:= Wr(H_{k_1}, ..., H_{k_l}, H_n), \quad n\notin\{k_1, ..., k_l\}.
\end{align}
\begin{definition}\label{def:polExc}
\cite{Gomez-Ullate2014} For any double partition (\ref{eq:partition2}) of length $2l$, we define the X$_\lambda$-Hermite family of polynomials, denoted by $\{H^{(\lambda)}_n\}$, as the following countable sequence
\begin{equation}\label{def:HermiteX}
H^{(\lambda)}_n = H_{\lambda^2, n},\quad n\in\mathbb{N}\backslash\{k_1, k_1+1, ..., k_l, k_l+1\}.
\end{equation}
\end{definition}
\begin{definition}\label{def:Codimension}\cite{Gomez-Ullate2010}
Let $\lambda=(\lambda_1, \lambda_2, ..., \lambda_l)$ be a partition of length $l$. The positive integer
\begin{equation}
2m:=\left|\lambda^2\right|=\sum_{i=1}^l(\lambda_{2i-1}+\lambda_{2i}) = 2\sum_{i=1}^l \lambda_{2i}
\end{equation}
is called the codimension of the $X_\lambda$-Hermite family of polynomials.
\end{definition}
It is known that Hermite XOPs exist only for even codimension $2m$ \cite{Gomez-Ullate2014}. This is a consequence of Theorem \ref{th:1} that leads to the choice of a double partition, as in equation (\ref{eq:partition2}). Hermite XOPs were applied to the description of coherent states in \cite{Hoffmann2018}. In what follows, the notation $X_{2m}^{\lambda}$-Hermite will be used to refer to the family of Hermite XOPs associated with a partition $\lambda$ of some positive integer $m$ with codimension $2m$.
From the Wronskian (\ref{eq:wronskiens2temp}) and from Definition \ref{def:polExc}, we obtain that \cite{Gomez-Ullate2014}
\begin{equation}\label{eq:degre}
\deg H^{(\lambda)}_n(x) = 2\sum_{k=1}^l \lambda_k - 2l+n.
\end{equation}
Equation (\ref{eq:degre}) tells us that the degree of a polynomial of the $X_{2m}^{\lambda}$-Hermite family is $n$ if and only if $\lambda_1>0$ and $m=l$, \textit{i.e.} the positive integer $m$ is equal to the length of its partition, which leaves only one possibility for the $m$-component partition, namely $\lambda = (1,1,...,1)$.
\subsection{Differential operator and orthogonality relation}
We consider the classical Hermite differential operator
\begin{align}\label{eq:HermioteOp}
T[y]:= \frac{d^2y}{dx^2} -2x \frac{dy}{dx}.
\end{align}
The exceptional Hermite operator is obtained through the use of state-deleting Darboux-Crum transformations and intertwining relations, as in \cite{Gomez-Ullate2014}, and through the study of polynomial flags, as in \cite{Gomez-Ullate2013}
\begin{align}\label{eq:HermioteXOp}
T_\lambda[y]:=\frac{d^2y}{dx^2}-2\left(x+\frac{H_\lambda'}{H_\lambda}\right)\frac{dy}{dx}+\left(\frac{H_\lambda''}{H_\lambda}+2x\frac{H_\lambda'}{H_\lambda}\right)y,
\end{align}
where the prime symbol $'$ denotes the derivative with respect to $x$. In general, the differential operator (\ref{eq:HermioteXOp}) has singular rational coefficients for an arbitrary partition $\lambda$. However, for an Adler partition $\lambda^2$, the operator $T_{\lambda^2}$ is non-singular on $\mathbb{R}$ and is called the X$_\lambda$-Hermite operator or exceptional Hermite operator \cite{Gomez-Ullate2014}. Oblomkov has studied regular singularities of the potential (\ref{eq:potential}) in \cite{Oblomkov1999}. The following results were discussed in \cite{Gomez-Ullate2014}.
\begin{proposition}\cite{Gomez-Ullate2014}
For every partition $\lambda$, we have
\begin{equation}
T_\lambda[H_{\lambda, n}] = 2(l-n)H_{\lambda, n},\quad n\notin \{k_1, ..., k_l\}
\end{equation}
where $l$ is the length of the partition.
\end{proposition}
\begin{corollary}\label{eq:EDOXHermite}\cite{Gomez-Ullate2014}
The Hermite XOPs $H^{(\lambda)}_n$ introduced in Definition \ref{def:polExc} are eigenfunctions of the following second-order differential operator
\begin{equation}\label{eq:OperatorTlambdaSquared}
T_{\lambda^2}[H^{(\lambda)}_n] = 2(2l-n)H^{(\lambda)}_n, \quad n\in\mathbb{N}\backslash \{k_1, k_1+1, ..., k_l, k_l+1\},
\end{equation}
where $T_\lambda$ is given by (\ref{eq:HermioteXOp}).
\end{corollary}
If we define the polynomial of degree $l$
\begin{equation}\label{eq:polyn}
p_\lambda(x):= (x-k_1)(x-k_2)\cdots(x-k_l),
\end{equation}
we conclude that, for any double partition, $p_{\lambda^2}(n)\geq0\;\forall n\in\mathbb{N}$. Proposition \ref{prop:orth}, which was formulated in \cite{Gomez-Ullate2014,Gomez-Ullate2018}, is true because Hermite XOPs are defined through an Adler partition $\lambda^2$, as in Definition \ref{def:polExc}.
\begin{proposition} \label{prop:orth} \cite{Gomez-Ullate2014,Gomez-Ullate2018}
The Hermite XOPs $H^{(\lambda)}_n$ satisfy the orthogonality relation
\begin{equation}\label{eq:Orthog_general}
\int_{-\infty}^{+\infty}H^{(\lambda)}_m(x)H^{(\lambda)}_n(x)W_{\lambda^2}(x)dx = \delta_{m,n}2^{n+2l}n!\sqrt{\pi}p_{\lambda^2}(n),
\end{equation}
where the orthogonality weight, given by
\begin{equation}\label{eq:poids}
W_{\lambda^2}(x)=\frac{e^{-x^2}}{(H_{\lambda^2}(x))^2}
\end{equation}
is regular and positive definite.
\end{proposition}
\newpage
\section{General solution of the exceptional Hermite differential equation associated with the reduced double partition $\lambda = (1)$}
\label{sec:3}
In this section, we fix a partition $\lambda$ and use the theoretical results from section \ref{sec:2} to obtain the $X_{2m}^\lambda$-Hermite ODE associated with the chosen partition. We express the $X_{2m}^\lambda$-Hermite polynomials in terms of classical Hermite polynomials and we find the general solution of the $X_{2m}^\lambda$-Hermite ODE associated with the fixed partition $\lambda$.
\subsection{Exceptional Hermite polynomials in terms of classical Hermite polynomials}
The partition (\ref{eq:partition1}) can start with a sequence of zeros of arbitrary length. In what follows, we consider a reduced double partition, \textit{i.e.} a partition for which $\lambda_1 >0$ \cite{Gomez-Ullate2014}. If we set $m = 1$, then the only possible reduced partition is $\lambda = (1)$. Therefore, $l=1$ and $\lambda^2 = (1,1)$ is a reduced double partition for which the associated strictly increasing sequence of length $2l$ is $\{k_1, k_2\}$ (the gap sequence). From relation (\ref{eq:gapSeqRel}), we obtain
\begin{align}
k_1 = \lambda_1 +1 - 1 = 1,\\
k_2 = \lambda_2+2 - 1 = 2.
\end{align}
Due to Definition \ref{def:Codimension}, the codimension of the family of polynomials which results from this choice of partition is $2m = 2$. We therefore consider the countable family of polynomials which constitutes the $X_2^{(1)}$-Hermite family
\begin{equation}
\left\{H_n^{(1)}(x)\;|\; n\in\mathbb{N}\backslash \{1, 2\}\right\}.
\end{equation}
From relation (\ref{eq:degre}), we see that the degree of a polynomial of this family reduces to
\begin{equation}\label{eq:Degre2014}
\deg H^{(1)}_n(x) = n\quad \forall n\in\mathbb{N}\backslash \{1, 2\}.
\end{equation}
The Wronskians defined in (\ref{eq:wronskiens}) and (\ref{eq:wronskiens2temp}) become
\begin{align}\label{eq:polyn_poids_1}
&H_{(1,1)}(x) = Wr(H_{1}, H_{2})(x) = 4(1+2x^2),\\\label{eq:HermiteExceptFixe}
&H^{(1)}_n(x) = H_{(1,1), n}(x)= Wr(H_{1}, H_{2}, H_n)=\left| \begin{array}{ccc} 2x & -2+4x^2 & H_n\\2 & 8x&H_n' \\0 &8 & H_n''\end{array} \right|,
\end{align}
where $n\notin\{1, 2\}$. Under the above hypotheses, we obtain the following result.
\begin{theorem}\label{cor:Poynome2019}
For the fixed partition $\lambda = (1)$, the polynomials (\ref{eq:HermiteExceptFixe}) satisfy the relation
\begin{equation}\label{eq:corol_rec}
H_{n}^{(1)}(x) = 8(n-1)(n-2)\hat{H}_{n}(x),\qquad \forall n\in\mathbb{N}\backslash\{1, 2\}
\end{equation}
where
\begin{equation}\label{eq:HChapeau}
\hat{H}_{n}(x):= H_n(x)+4nH_{n-2}(x)+4n(n-3)H_{n-4}(x).
\end{equation}
\end{theorem}
\begin{preuve}\normalfont
Making use of the differential relation \cite{Abramowitz1965}
\begin{equation}\label{eq:HermiteDerivee}
H_n'(x) = 2nH_{n-1}(x)
\end{equation}
and Definition \ref{def:polExc}, we find
\begin{align}\nonumber
H_{n}^{(1)}&=H_{(1,1), n} = \left| \begin{array}{ccc} 2x & -2+4x^2 & H_n\\2 & 8x&H_n' \\0 &8 & H_n''\end{array} \right|
=\left| \begin{array}{ccc} 2x & -2+4x^2 & H_n\\2 & 8x&2nH_{n-1} \\0 &8 & 4n(n-1)H_{n-2}\end{array} \right|\\\label{eq:temp1}
&=16\left(H_n -2nxH_{n-1}+2n(n-1)x^2H_{n-2}+n(n-1)H_{n-2}\right).
\end{align}
Through successive applications of the recurrence relation \cite{Abramowitz1965}
\begin{equation}\label{eq:recurrHermiteClassique}
2xH_{n+1}(x) = 2(n+1)H_n(x)+H_{n+2}(x)
\end{equation}
to equation (\ref{eq:temp1}), we obtain
\begin{equation}
H_{n}^{(1)}= 8(n-1)(n-2)\left(H_n(x)+4nH_{n-2}(x)+4n(n-3)H_{n-4}(x)\right).
\end{equation}
\end{preuve}
$\left.\right.\hfill\square$\\~\\
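Theorem \ref{cor:Poynome2019} can be spot-checked symbolically. The following Python/SymPy sketch is our own verification script (the helper names \texttt{H1n} and \texttt{Hhat} are ours): it expands the Wronskian determinant (\ref{eq:HermiteExceptFixe}) directly and compares it with $8(n-1)(n-2)\hat{H}_n$ for a few admissible values of $n$:

```python
# Symbolic check of the identity H^{(1)}_n = 8(n-1)(n-2)*Hhat_n
# (our own verification script; helper names H1n, Hhat are ours).
import sympy as sp

x = sp.symbols('x')
H = sp.hermite                      # classical Hermite polynomials H_k(x)

def H1n(n):
    """Wr(H_1, H_2, H_n): 3x3 determinant of successive derivatives."""
    rows = [[sp.diff(H(1, x), x, d), sp.diff(H(2, x), x, d), sp.diff(H(n, x), x, d)]
            for d in range(3)]
    return sp.Matrix(rows).det()

def Hhat(n):
    """Hhat_n = H_n + 4n H_{n-2} + 4n(n-3) H_{n-4}; negative-index terms are
    dropped only where their prefactor vanishes (n = 0 and n = 3)."""
    expr = H(n, x)
    if n >= 2:
        expr += 4*n*H(n - 2, x)
    if n >= 4:
        expr += 4*n*(n - 3)*H(n - 4, x)
    return expr

for n in (0, 3, 4, 5, 6):
    assert sp.expand(H1n(n) - 8*(n - 1)*(n - 2)*Hhat(n)) == 0
```

For $n=0$, for instance, the determinant evaluates to $16 = 8(-1)(-2)\hat{H}_0$.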
\begin{remark}
The function $\hat{H}_n(x)$ (\ref{eq:HChapeau}) is known from Cari\~{n}ena et al.\ \cite{Cariena2008}, where we also find the analogue of the Rodrigues formula (\ref{eq:Rodriques}) adapted to relation (\ref{eq:HChapeau})
\begin{equation}
\hat{H}_n(x) = (-1)^ne^{x^2}\left(\frac{d^n}{dx^n}+4n \frac{d^{n-2}}{dx^{n-2}}+4n(n-3)\frac{d^{n-4}}{dx^{n-4}}\right)e^{-x^2}.
\end{equation}
\end{remark}
\subsection{Orthogonality relation of the $X_2^{(1)}$-Hermite polynomials}
When $\lambda = (1)$, the polynomial (\ref{eq:polyn}) associated with the double partition $\lambda^2=(1,1)$ becomes
\begin{equation}\label{eq:polyn_1}
p_{(1,1)}(n) = (n-1)(n-2)\geq0 \qquad \forall \; n\in\mathbb{N}.
\end{equation}
Replacing (\ref{eq:polyn_poids_1}) in relation (\ref{eq:poids}), the weight becomes
\begin{equation}\label{eq:Poidslambda1}
W_{(1,1)}(x)=\frac{e^{-x^2}}{(H_{(1,1)}(x))^2} = \frac{e^{-x^2}}{(4(1+2x^2))^2}>0\quad\forall x\in\mathbb{R}.
\end{equation}
The orthogonality relation (\ref{eq:Orthog_general}) becomes
\begin{equation}\label{eq:orthogX1}
\int_{-\infty}^{+\infty}H^{(1)}_m(x)H^{(1)}_n(x)\frac{e^{-x^2}}{(4(1+2x^2))^2}dx = \delta_{m,n}\sqrt{\pi}2^{n+2}n!(n-1)(n-2),
\end{equation}
where $m, n \in \mathbb{N}\backslash\{1,2\}$. Equivalently, using Theorem \ref{cor:Poynome2019}, we find
\begin{equation}\label{eq:orthogPolynome2019}
\int_{-\infty}^{+\infty}\hat{H}_{m}(x)\hat{H}_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx = \delta_{m,n}\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)},
\end{equation}
which was shown independently in \cite{Cariena2008}. Because of an order relation between integrands, the integral (\ref{eq:orthogX1}) diverges if the integral (\ref{eq:orthogPolynome2019}) diverges. Indeed, for all $m = n\in\mathbb{N}\backslash\{1,2\}$,
\begin{align}
&0<4^2\hat{H}_{n}^2<4^3(n-1)^2(n-2)^2\hat{H}_{n}^2.
\end{align}
Using Theorem \ref{cor:Poynome2019}, we find
\begin{align}
&0<\hat{H}_{n}^2<\frac{\left(H^{(1)}_{n}\right)^2}{4^2},
\end{align}
and multiplying by $e^{-x^2}/(1+2x^2)^2$, we get
\begin{equation}
0<\hat{H}_{n}^2\frac{e^{-x^2}}{(1+2x^2)^2}<\left(H^{(1)}_{n}\right)^2\frac{e^{-x^2}}{(4(1+2x^2))^2}.
\end{equation}
Integrating each side on the orthogonality interval $(-\infty, +\infty)$, we obtain
\begin{equation}
0<\int_{-\infty}^{+\infty}\hat{H}_{n}^2\frac{e^{-x^2}}{(1+2x^2)^2}dx<\int_{-\infty}^{+\infty}\left(H^{(1)}_n\right)^2\frac{e^{-x^2}}{(4(1+2x^2))^2}dx.
\end{equation}
The norm of $X_2^{(1)}$-Hermite polynomials is therefore defined on $\mathbb{N}$ except for integer values which are zeros of the polynomial (\ref{eq:polyn_1}), namely $n = 1, 2$. For $\lambda=(1)$, these integer values correspond to the gap sequence (\ref{eq:gapSeq}).
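The norm formula (\ref{eq:orthogPolynome2019}) can also be spot-checked numerically. The sketch below is our own check, using Python's \texttt{mpmath}; the expansions $\hat{H}_3 = 8x^3+12x$ and $\hat{H}_5 = 32x^5-40x$ follow from (\ref{eq:HChapeau}). It evaluates the $n=3$ norm and the $\hat{H}_3$--$\hat{H}_5$ cross term:

```python
# Numerical spot-check of the norm/orthogonality relation for Hhat_n
# (our own verification; Hhat_3 and Hhat_5 are expanded by hand from the
# definition Hhat_n = H_n + 4n H_{n-2} + 4n(n-3) H_{n-4}).
from mpmath import mp, quad, exp, sqrt, pi, factorial, inf

mp.dps = 30                                   # working precision

hhat3 = lambda x: 8*x**3 + 12*x               # Hhat_3 = H_3 + 12 H_1
hhat5 = lambda x: 32*x**5 - 40*x              # Hhat_5 = H_5 + 20 H_3 + 40 H_1
w = lambda x: exp(-x**2)/(1 + 2*x**2)**2      # orthogonality weight

norm3 = quad(lambda x: hhat3(x)**2 * w(x), [-inf, inf])
cross = quad(lambda x: hhat3(x)*hhat5(x) * w(x), [-inf, inf])

expected = sqrt(pi) * 2**3 * factorial(3) / ((3 - 1)*(3 - 2))   # 24*sqrt(pi)
assert abs(norm3 - expected) < 1e-15
assert abs(cross) < 1e-15
```

The computed norm agrees with $24\sqrt{\pi}$, and the cross term vanishes, as the orthogonality relation requires.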
\begin{remark}
It was proved in \cite{Gomez-Ullate2014} that
\begin{equation}
span\left\{H^{(1)}_n(x)\;|\;n\in \mathbb{N}\backslash\{1,2\} \right\}
\end{equation}
is dense in the Hilbert space $L^2(\mathbb{R}, W_{(1,1)}(x))$.
\end{remark}
\subsection{$X_2^{(1)}$-Hermite differential equation}
Consider the first and second-order derivatives of the polynomial (\ref{eq:polyn_poids_1}) obtained from the double partition $\lambda^2 = (1,1)$
\begin{align}\label{eq:temp3a}
&H_{(1,1)}(x) = 4(1+2x^2),\\\label{eq:temp3}
&H_{(1,1)}'(x) =16x,\\\label{eq:temp4}
&H_{(1,1)}''(x) = 16.
\end{align}
Corollary \ref{eq:EDOXHermite} is written in terms of the differential operator (\ref{eq:HermioteXOp}) as
\begin{align}\label{eq:temp5}
&H_{\lambda^2, n}''-2\left(x+\frac{H_{\lambda^2}'}{H_{\lambda^2}}\right)H_{\lambda^2, n}'+\left(\frac{H_{\lambda^2}''}{H_{\lambda^2}}+2x\frac{H_{\lambda^2}'}{H_{\lambda^2}}\right)H_{\lambda^2, n} = 2(2l-n)H_{\lambda^2, n}.
\end{align}
Making use of (\ref{eq:temp3a})-(\ref{eq:temp4}) with $l=1$, equation (\ref{eq:temp5}) becomes
\begin{align}\nonumber
&H_{(1,1), n}''-2\left(x+\frac{16x}{4(1+2x^2)}\right)H_{(1,1), n}'\\
&\qquad\qquad\qquad\qquad+\left(\frac{16}{4(1+2x^2)}+2x\frac{16x}{4(1+2x^2)}\right)H_{(1,1), n}
= 2(2-n)H_{(1,1), n},
\end{align}
or equivalently,
\begin{align}
\left(H^{(1)}_n(x)\right)''-2\left(x+\frac{4x}{(1+2x^2)}\right)\left(H^{(1)}_n(x)\right)'+2nH^{(1)}_n(x) = 0.
\end{align}
In other words, the polynomial $H^{(1)}_n(x)$ (\ref{eq:HermiteExceptFixe}) is a solution of the second-order linear homogeneous ODE
\begin{equation}\label{eq:EDOX}
\omega''(x)-2\left(x+\frac{4x}{1+2x^2}\right)\omega'(x)+2n\omega(x) = 0, \quad x\in\mathbb{R}, \;\; n\in \mathbb{N}\backslash\{1,2\}.
\end{equation}
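As an independent check of (\ref{eq:EDOX}), one can verify symbolically that the polynomials $\hat{H}_n$, which by Theorem \ref{cor:Poynome2019} are proportional to $H^{(1)}_n$, annihilate its left-hand side. The Python/SymPy sketch below is our own verification script:

```python
# Symbolic check that Hhat_n solves the X_2^{(1)}-Hermite ODE
# (our own verification script; helper names are ours).
import sympy as sp

x = sp.symbols('x')
H = sp.hermite

def Hhat(n):
    expr = H(n, x)
    if n >= 2:
        expr += 4*n*H(n - 2, x)          # prefactor 4n vanishes for n = 0
    if n >= 4:
        expr += 4*n*(n - 3)*H(n - 4, x)  # prefactor vanishes for n = 3
    return expr

def ode_residual(w, n):
    """Left-hand side of w'' - 2(x + 4x/(1+2x^2)) w' + 2n w = 0."""
    return sp.diff(w, x, 2) - 2*(x + 4*x/(1 + 2*x**2))*sp.diff(w, x) + 2*n*w

for n in (0, 3, 4, 5, 7):
    assert sp.simplify(ode_residual(Hhat(n), n)) == 0
```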
From this point on, we will refer to equation (\ref{eq:EDOX}) as the $X_2^{(1)}$-Hermite ODE, which was presented in \cite{Milson2019} as the exceptional Hermite differential equation. Consider the complex extension of the $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOX})
\begin{equation}\label{eq:EDOXComplexe}
\omega''(z)-2\left(z+\frac{4z}{1+2z^2}\right)\omega'(z)+2n\omega(z) = 0, \qquad z\in\mathbb{C}.
\end{equation}
Equation (\ref{eq:EDOXComplexe}) possesses two regular singular points at $\{\pm i/\sqrt{2}\}$, while the point at infinity is an irregular singular point, as is already the case for the classical Hermite equation. To verify the nature of the finite singular points, we use Fuchs' theorem, setting
\begin{equation}\label{eq:coeff}
p(z) = -2\left(z+\frac{2z}{(z-\frac{i}{\sqrt{2}})(z+\frac{i}{\sqrt{2}})}\right),\qquad q(z) = 2n,
\end{equation}
where we used the factorization $1+2z^2 = 2(z-\frac{i}{\sqrt{2}})(z+\frac{i}{\sqrt{2}})$. Let $B_\delta(z_0)$ be the open ball of radius $\delta = \sqrt{2}$ around the point $z_0\in\mathbb{C}$. Then the functions
\begin{equation}
(z-i/\sqrt{2})p(z),\qquad (z-i/\sqrt{2})^2q(z)
\end{equation}
are analytic on $B_\delta(i/\sqrt{2})$ and the functions
\begin{equation}
(z+i/\sqrt{2})p(z),\qquad (z+i/\sqrt{2})^2q(z)
\end{equation}
are analytic on $B_\delta(-i/\sqrt{2})$, which shows that $\{\pm i/\sqrt{2}\}$ are regular singular points. To classify the point at infinity, we apply the M{\"o}bius transformation
\begin{equation}
M: z\mapsto \zeta = \frac{1}{z-\frac{i}{\sqrt{2}}},
\end{equation}
which maps $z=\infty$ to $\zeta = 0$, to the ODE (\ref{eq:EDOXComplexe}). Since $\frac{d}{dz} = -\zeta^2\frac{d}{d\zeta}$, we obtain
\begin{equation}\label{eq:EDOMobius}
\omega''(\zeta)+\left(\frac{2}{\zeta^{3}}+\frac{\sqrt{2}\,i}{\zeta^{2}}+\frac{2}{\zeta}+\frac{4\left(\frac{i}{\sqrt{2}}\,\zeta+1\right)}{\zeta\left(\sqrt{2}\,i\,\zeta+1\right)}\right)\omega'(\zeta)+\frac{2n}{\zeta^{4}}\omega(\zeta) = 0.
\end{equation}
We define the new coefficients
\begin{equation}
\tilde{p}(\zeta) = \frac{2}{\zeta^{3}}+\frac{\sqrt{2}\,i}{\zeta^{2}}+\frac{2}{\zeta}+\frac{4\left(\frac{i}{\sqrt{2}}\,\zeta+1\right)}{\zeta\left(\sqrt{2}\,i\,\zeta+1\right)},\qquad \tilde{q}(\zeta) = \frac{2n}{\zeta^{4}}.
\end{equation}
The function $\zeta\tilde{p}(\zeta)$ has a pole of order two at $\zeta = 0$, so it is not analytic on any neighborhood of $\zeta_0 = 0$; therefore $\{\infty\}$ is an irregular singular point of the ODE (\ref{eq:EDOXComplexe}).
\subsection{Polynomial and non-polynomial solutions of the $X_2^{(1)}$-Hermite differential equation}
In this section, we study the polynomial and non-polynomial solutions of the complex $X_{2}^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}). We find new solutions using the method of generalized series. We compare these solutions to the Hermite XOPs and we perform an extension of the classical Hermite polynomials to negative integers, leading to non-polynomial solutions.
\begin{corollary}\label{cor:HChapeau}
The function $\hat{H}_n(x)$ defined in (\ref{eq:HChapeau}) is a polynomial solution of the $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOX}) for all $n\in\mathbb{N}\backslash\{1,2\}$. Moreover, this solution and the polynomial $H^{(1)}_n(x)$ defined in (\ref{eq:HermiteExceptFixe}) are linearly dependent for all $n\in\mathbb{N}\backslash\{1,2\}$.
\end{corollary}
\begin{preuve}\normalfont
Let $n\in\mathbb{N}\backslash\{1,2\}$. The proof is straightforward, considering that the polynomial $\hat{H}_n(x)$ is equal to the polynomial $H^{(1)}_n(x)$, up to a constant (which depends on $n$), by Theorem \ref{cor:Poynome2019}.\\$\left.\right.\hfill\square$
\end{preuve}
\begin{remark}
By Corollary \ref{cor:HChapeau}, we know that the polynomials $H^{(1)}_n(z)$ and $\hat{H}_n(z)$ are linearly dependent solutions of the ODE (\ref{eq:EDOXComplexe}) for $n\in\mathbb{N}\backslash\{1,2\}$. However, by the relation (\ref{eq:HermiteExceptFixe}) and by Theorem \ref{cor:Poynome2019}, we see that $H^{(1)}_n(z)$ is a trivial solution of the ODE (\ref{eq:EDOXComplexe}) for $n=1,2$. Performing the extension of the classical Hermite polynomials to negative integers, we notice that $\hat{H}_n(z)$ is a non-polynomial and nontrivial solution of the ODE (\ref{eq:EDOXComplexe}) for $n=1, 2$. Indeed, making use of the Rodrigues formula associated with classical Hermite polynomials (\ref{eq:Rodriques}), we define
\begin{equation}\label{eq:DefHermite_Neg}
H_{-1}(z):=e^{z^2}\int_z^\infty e^{-t^2}\,dt = \frac{\sqrt{\pi}}{2}e^{z^2} (1-erf(z)),
\end{equation}
where $erf(z)$ is the error function defined by \cite{Abramowitz1965}
\begin{equation}\label{eq:temp17}
erf(z) = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}dt.
\end{equation}
The non-polynomial extension to negative integers may then be established by the recurrence relation (\ref{eq:recurrHermiteClassique}) and by substituting Definition (\ref{eq:DefHermite_Neg}) into relation (\ref{eq:HChapeau}). We find
\begin{align}\nonumber
\hat{H}_1(z) &= 4z+\sqrt{\pi} e^{z^2}(1-2z^2)\left(1-erf(z)\right),\\\label{eq:HermiteNegInt}
\hat{H}_2(z) &=2+4z^2+4\sqrt{\pi} ze^{z^2}\left(1-erf(z)\right),
\end{align}
which are non-polynomial and nontrivial solutions of the complex $X_{2}^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for $n=1,2$. However, the integrals
\begin{align}
\int_{-\infty}^{+\infty} \left(\hat{H}_1(x)\right)^2\frac{e^{-x^2}}{(1+2x^2)^2}\; dx,\qquad \int_{-\infty}^{+\infty} \left(\hat{H}_2(x)\right)^2\frac{e^{-x^2}}{(1+2x^2)^2}\; dx,
\end{align}
diverge. We recall that $\hat{H}_1(x)$ and $\hat{H}_2(x)$ are not part of the complete orthogonal polynomial system formed by the $X_2^{(1)}$-Hermite family of polynomials. However, when considering the non-polynomial extension of classical Hermite polynomials to negative integers, we can say that $\hat{H}_n(z)$ is a solution of the complex $X_{2}^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for all $n\in\mathbb{N}$.
\end{remark}
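The claim of the preceding Remark can likewise be checked symbolically. The following Python/SymPy sketch is our own verification: it substitutes the expressions (\ref{eq:HermiteNegInt}) into the $X_2^{(1)}$-Hermite ODE, restricted to a real variable, for $n=1$ and $n=2$:

```python
# Symbolic check that the erf-based functions Hhat_1, Hhat_2 solve the
# X_2^{(1)}-Hermite ODE for n = 1, 2 (our own verification script).
import sympy as sp

z = sp.symbols('z')
E = sp.exp(z**2)*(1 - sp.erf(z))     # recurring factor e^{z^2}(1 - erf z)

hhat1 = 4*z + sp.sqrt(sp.pi)*(1 - 2*z**2)*E
hhat2 = 2 + 4*z**2 + 4*sp.sqrt(sp.pi)*z*E

def residual(w, n):
    return sp.diff(w, z, 2) - 2*(z + 4*z/(1 + 2*z**2))*sp.diff(w, z) + 2*n*w

assert sp.simplify(residual(hhat1, 1)) == 0
assert sp.simplify(residual(hhat2, 2)) == 0
```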
\begin{remark}
We notice that the online application \textit{WolframAlpha} does consider the extension of classical Hermite polynomials to negative integers.
\end{remark}
\paragraph{\textbf{The method of generalized series}} The variable coefficients $p(z)$ and $q(z)$ are analytic on $B_{\delta'}(0)$, where $\delta' = 1/\sqrt{2}$ is the distance from the origin to the nearest singularity. Therefore, we can find at least one solution by the method of generalized series. Moreover, since $z_0 = 0$ is an ordinary point, the principal part of the Laurent series of a solution around $z_0 = 0$ must vanish. Let
\begin{equation}\label{eq:Series1and2}
\beta_n(z) = z^{\sigma_1}\sum_{k=0}^\infty c_k z^k ,\qquad \nu_n(z) = z^{\sigma_2}\sum_{k=0}^\infty \tilde{c}_k z^k,
\end{equation}
where $\sigma_1$ and $\sigma_2$ are the roots of the indicial equation associated with equation (\ref{eq:EDOXComplexe})
\begin{equation}\label{eq:EQDet}
\sigma(\sigma - 1)+a_0\sigma +b_0 = 0.
\end{equation}
By definition, we have
\begin{equation}
a_0 = \lim_{z\rightarrow 0}z\cdot\left(-2z-\frac{8z}{1+2z^2}\right) = 0,\quad b_0 = \lim_{z\rightarrow 0}z^2\cdot2n = 0.
\end{equation}
The indicial equation (\ref{eq:EQDet}) reduces to
\begin{equation}
\sigma(\sigma - 1) = 0,
\end{equation}
which possesses the roots $\sigma_1 = 0$ and $\sigma_2 = 1$. The two series (\ref{eq:Series1and2}) become
\begin{align}\label{eq:SerieRacine1}
\beta_n(z) &= \sum_{k=0}^\infty c_k z^k,\\\label{eq:SerieRacine2}
\nu_n(z) &= \sum_{k=0}^\infty \tilde{c}_k z^{k+1}.
\end{align}
\paragraph{\textbf{Case 1: root $\sigma_1 = 0$.}}
We substitute the series $\beta_n(z)$ (\ref{eq:SerieRacine1}) and its derivatives up to order two into the ODE (\ref{eq:EDOXComplexe}) and obtain
\begin{align}\nonumber
&(2c_2 +2nc_0) + (6c_3+2(n-5)c_1)z \\
&\qquad\qquad\qquad+\sum_{k=2}^\infty\left[(k+2)(k+1)c_{k+2}+2(k(k-6)+n)c_k+4(n-k+2)c_{k-2}\right]z^k = 0,
\end{align}
where $c_0$ and $c_1$ are arbitrary constants. Let $c_0 = c_1 = 1$. Then the first coefficients take the form
\begin{equation}
c_2 = -n, \quad c_3 = -\frac{1}{3}(n-5), \quad c_4 = \frac{1}{6}n(n-10),
\end{equation}
and we conclude that for all $k\geq4$, the recurrence relation is as follows
\begin{equation}\label{eq:recSerie}
c_k = \frac{-2((k-2)(k-8)+n)c_{k-2}-4(n-k+4)c_{k-4}}{k(k-1)}.
\end{equation}
The first even and odd coefficients are presented in Table \ref{tab:1}.
\begin{table}[H]
\centering
\caption{First coefficients of the series $\beta_n(z)$}
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\quad k$ & $\quad c_{2k}$ & $\quad c_{2k-1}$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\quad0$ &$\quad1$ & \quad- \\
$\quad1$ & $\quad-n$ & $\quad1$ \\
$\quad2$ & $\quad\frac{n^2-10n}{6}$ & $ \quad-\frac{n-5}{3}$ \\
$\quad3$& $\quad-\frac{ n^3-30 n^2+104 n }{90}$ & $\quad\frac{n^2-20n+51}{30}$ \\
$\quad4$& $\quad\frac{n^4- 60 n^3+ 524 n^2-1200 n}{2520}$ & $\quad-\frac{ n^3- 45 n^2+311 n-555 }{630}$ \\
$\quad5$&$\quad- \frac{ n^5-100 n^4+1580 n^3-8720 n^2+ 15744 n}{113400}$ & $\quad\frac{ n^4- 80 n^3+ 1046 n^2- 4720 n+6825 }{22680} $\\
\noalign{\smallskip}\hline
\end{tabular}
\label{tab:1}
\end{table}
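The entries of Table \ref{tab:1} can be regenerated directly from the recurrence relation (\ref{eq:recSerie}) with the initial values $c_0 = c_1 = 1$; the following Python/SymPy sketch (our own verification script) confirms a few of them for symbolic $n$:

```python
# Regenerate the coefficients c_k from the recurrence with c_0 = c_1 = 1
# and compare with the entries of Table 1 (our own verification script).
import sympy as sp

n = sp.symbols('n')

def coeffs(kmax):
    c = {0: sp.Integer(1), 1: sp.Integer(1),
         2: -n, 3: -(n - 5)/sp.Integer(3)}
    for k in range(4, kmax + 1):
        c[k] = sp.expand((-2*((k - 2)*(k - 8) + n)*c[k - 2]
                          - 4*(n - k + 4)*c[k - 4]) / (k*(k - 1)))
    return c

c = coeffs(7)
assert c[4] == sp.expand(n*(n - 10)/6)
assert c[5] == sp.expand((n**2 - 20*n + 51)/30)
assert c[6] == sp.expand(-(n**3 - 30*n**2 + 104*n)/90)
assert c[7] == sp.expand(-(n**3 - 45*n**2 + 311*n - 555)/630)
```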
\noindent For $k\geq2$, the denominators of the even coefficients $c_{2k}$ from Table \ref{tab:1} take the form $(2k)!/2^{k}$ while the denominators of the odd coefficients $c_{2k-1}$ from Table \ref{tab:1} take the form $(2k-1)!/2^{k-1}$. This is due to the fact that the recurrence relation (\ref{eq:recSerie}) may be written as
\begin{equation}
c_k = \frac{-((k-2)(k-8)+n)c_{k-2}-2(n-k+4)c_{k-4}}{\frac{2k(k-1)}{2^2}}.
\end{equation}
The sign of the highest power of $n$ appearing in the even and odd coefficients from Table \ref{tab:1} alternates. Therefore, the coefficients of the series $\beta_n(z)$ (\ref{eq:SerieRacine1}) are of the form
\begin{equation}\label{eq:CoeffTemp}
c_{2k} = (-1)^k\frac{p_k(n)}{(2k)!/2^{k}},\qquad c_{2k-1} = (-1)^{k+1}\frac{q_k(n)}{(2k-1)!/2^{k-1}},
\end{equation}
where $p_k(n)$ and $q_k(n)$ are polynomials in the integer variable $n$ of degree $k$ and $k-1$, respectively. We denote by $\lambda_p(k)$ and $\lambda_q(k)$ the roots of the polynomials $p_k(n)$ and $q_k(n)$, respectively.
\begin{table}[H]
\caption{Roots of the coefficients of the series $\beta_n(z)$}
\label{tab:2}
\centering
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\quad k$ & $\qquad \lambda_p(k)$ & $\qquad \lambda_q(k)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\quad0$ & \qquad- & \qquad- \\
$\quad1$ & $\qquad0$ & \qquad- \\
$\quad2$ & $\qquad0,\mathbf{10}$ & $\qquad\mathbf{5}$ \\
$\quad3$& $\qquad0,4,\mathbf{26}$ & $\qquad3,\mathbf{17}$ \\
$\quad4$& $\qquad0,4,6,\mathbf{50}$ & $\qquad3,5,\mathbf{37}$ \\
$\quad5$& $\qquad0,4,6,8,\mathbf{82}$ & $\qquad3,5,7,\mathbf{65}\quad$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
\noindent Table \ref{tab:2} shows the roots of the first terms associated with the series $\beta_n(z)$ (\ref{eq:SerieRacine1}). For $k\geq3$, the coefficients $c_{k}$ (even and odd) have in particular as a root $\lambda(k) = (k-1)^2+1$. These roots correspond to the positive integers in bold character. The coefficients $c_{2k}$ then possess the factor
\begin{equation}\label{eq:Factor1}
(n-((2k-1)^2+1)),
\end{equation}
while the coefficients $c_{2k-1}$ possess the factor
\begin{equation}\label{eq:Factor2}
(n-((2(k-1))^2+1)).
\end{equation}
If the coefficient is even, say $c_{2k}$, the remaining roots consist of a sequence of even positive integers $0, 4, 6, ..., 2(k-1)$, where the positive integer $2$ is excluded. The even coefficients then possess the factors
\begin{equation}\label{eq:Factor3}
n\cdot\prod_{j=1}^{k-2}(n-2(1+j)).
\end{equation}
If the coefficient is odd, say $c_{2k-1}$, the remaining roots consist of a sequence of odd positive integers $3, 5, 7, ..., 2k-3$, where the positive integer $1$ is excluded. The odd coefficients then possess the factors
\begin{equation}\label{eq:Factor4}
\prod_{j=1}^{k-2}(n-2(1+j)+1).
\end{equation}
Fixing the positive integer $n$ therefore truncates the series of even or odd coefficients, but not both, depending on the parity of $n$. Since even and odd coefficients have no root in common, the series must be infinite. Moreover, the fact that the positive integers $1$ and $2$ are excluded indicates that the $X_2^{(1)}$-Hermite family of polynomials is defined on the spectrum $\mathbb{N}\backslash\{1,2\}$.
Making use of (\ref{eq:Factor1})-(\ref{eq:Factor4}), we find
\begin{align}\nonumber
p_2(n) &=n(n-10),\\\label{eq:pk}
p_k(n) &= n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j)),\quad\quad\;\;\; k\geq3,\\\nonumber
q_2(n)& = (n-5),\\\label{eq:qk}
q_k(n) &= (n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1),\quad k\geq3,
\end{align}
and taking (\ref{eq:pk}) and (\ref{eq:qk}) into account, the coefficients (\ref{eq:CoeffTemp}) become
\begin{align}\label{eq:coeffPair}
c_{2k} &= (-1)^k \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}},\\\label{eq:coeffImpair}
c_{2k-1} &=(-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}.
\end{align}
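The closed-form coefficients (\ref{eq:coeffPair}) and (\ref{eq:coeffImpair}) can be compared against the recurrence (\ref{eq:recSerie}) for symbolic $n$. The Python/SymPy sketch below is our own consistency check, for $k = 3,\dots,6$:

```python
# Compare the closed-form coefficients c_{2k}, c_{2k-1} with the values
# produced by the recurrence, for symbolic n (our own consistency check).
import sympy as sp

n = sp.symbols('n')

def rec(kmax):
    c = {0: sp.Integer(1), 1: sp.Integer(1),
         2: -n, 3: -(n - 5)/sp.Integer(3)}
    for k in range(4, kmax + 1):
        c[k] = sp.expand((-2*((k - 2)*(k - 8) + n)*c[k - 2]
                          - 4*(n - k + 4)*c[k - 4]) / (k*(k - 1)))
    return c

def c_even(k):   # closed form for c_{2k}, valid for k >= 3
    prod = sp.prod([n - 2*(1 + j) for j in range(1, k - 1)])
    return (-1)**k * n*(n - ((2*k - 1)**2 + 1)) * prod / (sp.factorial(2*k)/2**k)

def c_odd(k):    # closed form for c_{2k-1}, valid for k >= 3
    prod = sp.prod([n - 2*(1 + j) + 1 for j in range(1, k - 1)])
    return (-1)**(k + 1)*(n - ((2*(k - 1))**2 + 1)) * prod \
           / (sp.factorial(2*k - 1)/2**(k - 1))

c = rec(12)
for k in range(3, 7):
    assert sp.expand(c_even(k) - c[2*k]) == 0
    assert sp.expand(c_odd(k) - c[2*k - 1]) == 0
```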
The series $\beta_n(z)$ (\ref{eq:SerieRacine1}) is therefore of the form
\begin{align}\nonumber
\beta_n(z) &= 1+z-nz^2-\frac{n-5}{3}z^3+\frac{n(n-10)}{6}z^4\\\label{eq:Sol1}
&+\sum_{k=3}^\infty \left[(-1)^k \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}}z^{2k}\right.\\\nonumber
&\left.\qquad \quad+(-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}z^{2k-1}\right].
\end{align}
The series $\beta_n(z)$ (\ref{eq:Sol1}) converges. Indeed, it may be written in terms of two separate series for the even and for the odd coefficients
\begin{align}\nonumber
\beta_n(z) &= 1+z-nz^2-\frac{n-5}{3}z^3+\frac{n(n-10)}{6}z^4\\\label{eq:s1}
&+\sum_{k=3}^\infty \left[(-1)^k \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}}z^{2k}\right]\\\label{eq:s2}
& +\sum_{k=3}^\infty\left[(-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}z^{2k-1}\right],
\end{align}
where the series (\ref{eq:s1}) and (\ref{eq:s2}) respectively have as a general term $c_{2k}$ (\ref{eq:coeffPair}) and $c_{2k-1}$ (\ref{eq:coeffImpair}). We apply the D'Alembert ratio test and find
\begin{align}\nonumber
\lim_{k\rightarrow\infty}\left|\frac{c_{2(k+1)}}{c_{2k}}\right| &= \lim_{k\rightarrow\infty}\left|\frac{(-1)^{k+1} \frac{n(n-((2(k+1)-1)^2+1))\prod_{j=1}^{(k+1)-2}(n-2(1+j))}{(2(k+1))!/2^{k+1}}}{(-1)^k \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}}}\right|\\\label{eq:Convergencemu}
&\quad=2 \lim_{k\rightarrow\infty}\frac{ (n-((2(k+1)-1)^2+1))\prod_{j=1}^{(k+1)-2}(n-2(1+j))}{ (2k+1)(2k+2)(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}\\\nonumber
&\quad=0<1,
\end{align}
\begin{align}\nonumber
\lim_{k\rightarrow\infty}\left|\frac{c_{2(k+1)-1}}{c_{2k-1}}\right| &= \lim_{k\rightarrow\infty}\left|\frac{(-1)^{k} \frac{(n-((2((k+1)-1))^2+1))\prod_{j=1}^{(k+1)-2}(n-2(1+j)+1)}{(2(k+1)-1)!/2^{(k+1)-1}}}{(-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}}\right|\\\label{eq:ConvergenceH2}
&\quad= \lim_{k\rightarrow\infty} \frac{ (n-((2((k+1)-1))^2+1))\prod_{j=1}^{(k+1)-2}(n-2(1+j)+1)}{ k(2k+1)(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}\\\nonumber
&\quad=0<1.
\end{align}
We conclude that the series (\ref{eq:s1}) and (\ref{eq:s2}) converge absolutely; therefore, the series $\beta_n(z)$ (\ref{eq:Sol1}) converges.
\paragraph{\textbf{Case 2: root $\sigma_2 = 1$.}}
Consider the series $\nu_n(z)$ (\ref{eq:SerieRacine2}) associated with the root $\sigma_2$ of the indicial equation (\ref{eq:EQDet}). Based on the above reasoning, we obtain
\begin{align}\label{eq:Sol2}
&\nu_n(z) = z-\frac{1}{3}(n-5)z^3+\sum_{k=3}^\infty \left[ (-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}z^{2k-1}\right],
\end{align}
which is a convergent series, by (\ref{eq:ConvergenceH2}). The expansion of the series $\nu_n(z)$ (\ref{eq:Sol2}) is finite for all values of $n$ which correspond to a root of the polynomial $q_k(n)$ (see the sequence $\lambda_q(k)$ in Table \ref{tab:2}). The coefficients of the series $\nu_n(z)$ (\ref{eq:Sol2}) correspond to the odd coefficients of the series $\beta_n(z)$ (\ref{eq:Sol1}). For this reason, we will now define a notation that will be useful in what follows
\begin{align}
\mu_n(z):&=1-nz^2+\frac{n(n-10)}{6}z^4\label{eq:mu}+\sum_{k=3}^\infty \left[(-1)^k \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}}z^{2k}\right],
\end{align}
so that the series $\beta_n(z)$ (\ref{eq:Sol1}) may be rearranged as
\begin{equation}
\beta_n(z) = \mu_n(z) + \nu_n(z).
\end{equation}
\begin{remark}
The series $\mu_n(z)$ (\ref{eq:mu}) converges, by relation (\ref{eq:Convergencemu}).
\end{remark}
\begin{proposition}\label{th:GenSol}
The series $\beta_n(z)$ (\ref{eq:Sol1}) is a non-polynomial solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for all $n\in\mathbb{N}$.
\end{proposition}
\begin{preuve}\normalfont
See Appendix \ref{app:1}.
\end{preuve}
\begin{proposition}\label{th:GenSol1}
The series $\mu_{n}(z)$ (\ref{eq:mu}) is a polynomial solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for all $n\in2\mathbb{N}\backslash\{2\}$, while the series $\nu_{n}(z)$ (\ref{eq:Sol2}) is a polynomial solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for all $n\in(2\mathbb{N}-1)\backslash\{1\}$.
\end{proposition}
\begin{preuve}\normalfont
See Appendix \ref{app:2}.
\end{preuve}
We study the linear dependence relation between $\beta_n(z)$ and $\hat{H}_n(z)$, as well as the linear dependence relation between $\mu_n(z)$ and $\hat{H}_n(z)$ and between $\nu_n(z)$ and $\hat{H}_n(z)$. We find
\begin{equation}\label{eq:WronskienBeta}
Wr\left(\hat{H}_n,\beta_n\right)(z) = \phi_1(n) e^{z^2}(1+2z^2)^2 = \frac{\phi_1(n)}{16\,W_{(1,1)}(z)},
\end{equation}
for all $n \in \mathbb{N}\backslash\{1,2\}$, where $|\phi_1|:\mathbb{N}\backslash\{1,2\}\rightarrow\mathbb{N}\backslash\{0\}$ is a strictly increasing function. On the other hand, we find
\begin{equation}\label{eq:Wronskienmu}
Wr\left(\hat{H}_n,\mu_n\right)(z) =
\left\{\begin{matrix}
\phi_2(n)e^{z^2}(1+2z^2)^2,\;\;\; n\in (2\mathbb{N}-1)\backslash\{1\}\\
0, \qquad\qquad\qquad\quad n\in (2\mathbb{N})\backslash\{2\}\\
\end{matrix}\right.,
\end{equation}
where $|\phi_2|:\mathbb{N}\backslash\{1,2\}\rightarrow\mathbb{N}\backslash\{0\}$ is a strictly increasing function, and
\begin{equation}\label{eq:WronskienHChapeauH2}
Wr\left(\hat{H}_n,\nu_n\right)(z) =
\left\{\begin{matrix}
0, \qquad\qquad\qquad\qquad \;\,n\in (2\mathbb{N}-1)\backslash\{1\}\\
\phi_3(n)e^{z^2}(1+2z^2)^2,\;\; n\in 2\mathbb{N}\backslash\{2\}\quad\quad\;
\end{matrix}\right.,
\end{equation}
where $|\phi_3|:\mathbb{N}\backslash\{1,2\}\rightarrow\mathbb{N}\backslash\{0\}$ is a strictly increasing function.
The numerators of the coefficients $c_{2k}$ (\ref{eq:coeffPair}) and $c_{2k-1}$ (\ref{eq:coeffImpair}) have no factor in common, therefore the solution $\beta_n(z)$ (\ref{eq:Sol1}) is non-polynomial for all values of $n$. Consider the solution $\nu_n(z)$ (\ref{eq:Sol2}) and let
\begin{align}
r_1(k):&=(2(k-1))^2+1,\qquad k\geq2,\\
r_2(j):&=2(1+j)-1,\qquad\quad\; 1\leq j\leq k-2.
\end{align}
Then the solution $\nu_n(z)$ (\ref{eq:Sol2}) may be written as
\begin{align}\label{eq_Sol2Prime}
&\nu_n(z) = z-\frac{1}{3}(n-r_1(2))z^3+\sum_{k=3}^\infty \left[ (-1)^{k+1} \frac{(n-r_1(k))\prod_{j=1}^{k-2}(n-r_2(j))}{(2k-1)!/2^{k-1}}z^{2k-1}\right].
\end{align}
The roots of the polynomials $q_k(n)$ (\ref{eq:qk})
\begin{align}\nonumber
q_k(n) &= (n-r_1(k)),\qquad\qquad\qquad\qquad\;\;\; k=2,\\\label{eq:PolTemp}
q_k(n) &= (n-r_1(k))\prod_{j=1}^{k-2}(n-r_2(j)),\qquad k\geq 3,
\end{align}
are odd positive integers, as shown in Table \ref{tab:2}
\begin{align}\label{eq:Seq1}
r_1(k)&\in\{5, 17, 37, 65, 101, ...\},\\\label{eq:Seq2}
r_2(j)&\in \{3, 5, 7, ..., 2k-3\}.
\end{align}
Therefore, the solution $\nu_n(z)$ (\ref{eq:Sol2}) is non-polynomial for all $n\in2\mathbb{N}\cup\{1\}$, and polynomial for all odd values of $n$ except $n=1$. The first polynomial cases of the solution $\nu_n(z)$ (\ref{eq:Sol2}) are presented in Table \ref{tab:3}.
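For instance, the lowest polynomial case $n=3$ can be read off directly from (\ref{eq_Sol2Prime}): since $n-r_2(1) = 3-3 = 0$, the factor $\prod_{j=1}^{k-2}(n-r_2(j))$ vanishes for every $k\geq3$, and the series truncates to
\begin{equation}
\nu_3(z) = z-\frac{1}{3}(3-r_1(2))z^3 = z+\frac{2}{3}z^3.
\end{equation}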
\begin{table}[H]
\caption{First polynomial cases of the solution $\nu_n(z)$}
\label{tab:3}
\centering
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\quad l$ & $2l-1$ & $\qquad\qquad\qquad\qquad\qquad \nu_{2l-1}(z)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\quad2$ & $\quad3$ & $z+\frac{2}{3}z^3$ \\\\
$\quad3$& $\quad5$ & $z+\mathbf{0}\cdot z^3-\frac{4}{5}z^5$ \\\\
$\quad4$& $\quad7$ & $z-\frac{2}{3}z^3-\frac{4}{3}z^5+\frac{8}{21}z^7$ \\
$\quad\vdots$& $\quad\;\vdots$ & $\vdots$ \\
$\quad8$& $\quad15$ & $z-\frac{10}{3}z^3-\frac{4}{5}z^5+\frac{88}{21}z^7-\frac{400}{89}z^9+\frac{1376}{3465}z^{11}-\frac{64}{2079}z^{13}+\frac{128}{155925}z^{15}$ \\\\
$\quad9$& $\quad17$ & $z-4z^3+\mathbf{0}\cdot z^5+\frac{16}{3}z^7-\cdots-\frac{256}{2297295}z^{17}$ \\\\
$\quad10$& $\quad19$ & $z-\frac{14}{3}z^3+\frac{16}{15} z^5+\frac{32}{5}z^7-\cdots+\frac{512}{38513475}z^{19}$ \\
$\quad\vdots$& $\quad\;\vdots$ & $\vdots$ \\
$\quad18$& $\quad35$ & $z-10z^3+\frac{96}{5}z^5+ \frac{64}{21}z^7-\frac{320}{9}z^9+\cdots +\frac{131072}{6716457438687871875}z^{35}$ \\\\
$\quad19$& $\quad37$ & $z-\frac{32}{3}z^3+\frac{68}{3}z^5+\mathbf{0}\cdot z^7-\frac{1088}{27}z^9+\cdots -\frac{262144}{234308415218225473125}z^{37}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Table \ref{tab:3} shows that for the values $n=2l-1\in\{5, 17, 37, ...\}$, one term is missing. These values correspond to the roots $\lambda_q(k)$ shown in boldface in Table \ref{tab:2}. This is due to the fact that in these cases, the degree of the polynomial solution $\nu_{2l-1}(z)$ (\ref{eq:Sol2}) corresponds to a root of the polynomials $q_k(n)$ (\ref{eq:PolTemp}), namely a value of the sequence (\ref{eq:Seq1}), \textit{i.e.}
\begin{equation}
n\in\{r_1(k)\;|\;k\geq2\}\subset \lambda_q(k),
\end{equation}
where $(2k-1)$ is the degree of the missing term.
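For instance, for $n = r_1(2) = 5$ the coefficient of $z^3$ in (\ref{eq_Sol2Prime}) is $-\frac{1}{3}(5-r_1(2)) = 0$, so the term of degree $3$ is missing, in agreement with the entry
\begin{equation}
\nu_5(z) = z+\mathbf{0}\cdot z^3-\frac{4}{5}z^5
\end{equation}
of Table \ref{tab:3}.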
The coefficient of the first-order term in the polynomials from Table \ref{tab:3} is normalized, because we made the arbitrary choice $\tilde{c}_1 = 1$ during the construction of the generalized series $\nu_{n}(z)$ (\ref{eq:SerieRacine2}). We notice that for each $n\in\{2l-1\;|\;l\geq2\}$, there exists a proportionality constant that depends on $l$, so that
\begin{equation}
\hat{H}_{2l-1}(z)=M_1(2l-1)\cdot \nu_{2l-1}(z),\qquad l = 2, 3, \dots.
\end{equation}
As an example, we have that
\begin{equation}\label{eq:example}
\hat{H}_3(z) = 12 \nu_3(z).
\end{equation}
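Explicitly, with $\nu_3(z) = z+\frac{2}{3}z^3$ from Table \ref{tab:3}, relation (\ref{eq:example}) reads
\begin{equation}
\hat{H}_3(z) = 12\left(z+\frac{2}{3}z^3\right) = 8z^3+12z,
\end{equation}
whose leading coefficient $2^3$ is consistent with (\ref{eq:HChapeau}).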
Moreover, equation (\ref{eq:WronskienHChapeauH2}) shows that when $n = 2l-1$ for some integer $l\geq2$, the series $\nu_{n}(z)$ (\ref{eq:Sol2}) equals the polynomial $\hat{H}_{n}(z)$ (\ref{eq:HChapeau}) up to a constant. A part of the $X_2^{(1)}$-Hermite polynomials (those of odd degree $2l-1\geq3$) therefore arises directly from the construction of the solution of the ODE (\ref{eq:EDOXComplexe}) by the method of generalized series.
\begin{theorem}\label{th:propHChapeauH2}
The solutions $\hat{H}_{n}(z)$ (\ref{eq:HChapeau}) and $\nu_{n}(z)$ (\ref{eq:Sol2}) of equation (\ref{eq:EDOXComplexe}) follow the proportionality relation
\begin{equation}
\hat{H}_n(z) = M_1(n)\nu_n(z),
\end{equation}
for all $n = 2l-1,\; l\geq2$, where
\begin{equation}\label{eq:M1}
M_1(n) = \frac{(-1)^{(n+1)/2}n!2^{(n+1)/2}}{p_{(1,1)}(n)\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)},
\end{equation}
where
\begin{equation}
p_{(1,1)}(n) = (n-1)(n-2)
\end{equation}
is the polynomial associated with the fixed partition $\lambda = (1)$ defined in (\ref{eq:polyn_1}), and where
\begin{equation}
\prod_{j=1}^{0}\alpha(j):=1.
\end{equation}
\end{theorem}
\begin{preuve}\normalfont
Let $n = 2l-1,\; l\geq2$, and let $\hat{H}_n(z)$ and $\nu_n(z)$ be the functions defined in (\ref{eq:HChapeau}) and (\ref{eq:Sol2}), respectively. We first show that $\nu_n(z)$ is a polynomial of degree $n$. Indeed, by equations (\ref{eq:Degre2014}) and (\ref{eq:corol_rec}), we know that $\hat{H}_n(z)$ is a polynomial of degree $n$, and the vanishing Wronskian (\ref{eq:WronskienHChapeauH2}) then implies that $\nu_n(z)$ is also a polynomial of degree $n$. The missing term in the solution $\nu_n(z)$ illustrated in Table \ref{tab:3} corresponds to a power of $z$ that is always strictly smaller than $n$: if there is a missing term, then $n = (2(k-1))^2+1$ for some $k\geq2$, and since $(2(k-1))^2+1>2k-1$ for all $k\geq2$, the missing term is always associated with a power of $z$ smaller than $n$. The contrary would contradict the fact that $\nu_n(z)$ is a polynomial of degree $n$.
Let $\tilde{c}_n$ be the coefficient of the term $z^n$ in the polynomial $\nu_n(z)$. From equation (\ref{eq:Sol2}), the coefficient of $z^{2k-1}$, for $k\geq3$, is given by the rational expression
\begin{equation}\label{eq:rational}
\tilde{c}_k = (-1)^{k+1} \frac{(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1)}{(2k-1)!/2^{k-1}}.
\end{equation}
The highest order term is of degree $n = 2k-1$, which implies that
\begin{equation}\label{eq:rational2}
k = \frac{n+1}{2}.
\end{equation}
Substituting the right-hand side of (\ref{eq:rational2}) into equation (\ref{eq:rational}), we obtain
\begin{equation}\label{eq:cTilde}
\tilde{c}_n = \frac{(-1)^{(n+1)/2}p_{(1,1)}(n)2^{(n-1)/2}\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)}{n!}.
\end{equation}
Let $\hat{c}_n$ be the coefficient of the term $z^n$ in the polynomial $\hat{H}_n(z)$. Then, by relation (\ref{eq:HChapeau}), we have that
\begin{equation}\label{eq:cHat}
\hat{c}_n = 2^n.
\end{equation}
Evaluating the ratio of the coefficients (\ref{eq:cTilde}) and (\ref{eq:cHat}), we get
\begin{equation}\label{eq:tempM}
M_1(n) = \frac{\hat{c}_n}{\tilde{c}_n} = \frac{(-1)^{(n+1)/2}n!2^{(n+1)/2}}{p_{(1,1)}(n)\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)}.
\end{equation}
The only zeros of the denominator of $M_1(n)$ are the zeros of $p_{(1,1)}(n)$, namely the gap sequence $\{1,2\}$, because
\begin{equation}
\left\{\prod_{j=1}^{(n-3)/2}(n-r_2(j))\;:\;n = 3, 5, 7, ...\right\} = \{1, 2, 2\cdot4,2\cdot4\cdot6, ...\},
\end{equation}
which means that $M_1(n)$ is not defined on the gap sequence. We complete the proof by verifying the case $k=2$, \textit{i.e.} $\hat{H}_3(z) = M_1(3)\nu_3(z)$ holds. Indeed,
\begin{equation}
M_1(3) = 12,
\end{equation}
which corresponds to the example given in (\ref{eq:example}).\\
$\left.\right.\hfill\square$
\end{preuve}
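As an illustration of Theorem \ref{th:propHChapeauH2} beyond the case $n=3$, formula (\ref{eq:M1}) gives
\begin{equation}
M_1(5) = \frac{(-1)^{3}\,5!\,2^{3}}{p_{(1,1)}(5)\cdot(5-3)} = \frac{-960}{24} = -40,
\end{equation}
so that $\hat{H}_5(z) = -40\,\nu_5(z) = -40z+32z^5$, in agreement with the entry $\nu_5(z) = z-\frac{4}{5}z^5$ of Table \ref{tab:3} and with the leading coefficient $2^5$.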
From Theorem \ref{th:propHChapeauH2}, we find that the function $\phi_2(n)$ in the Wronskian (\ref{eq:Wronskienmu}) corresponds to the additive inverse of the function $M_1(n)$ (\ref{eq:M1})
\begin{equation}\label{eq:WronskienmuPrime}
Wr\left(\hat{H}_n,\mu_n\right)(z) = -M_1(n)e^{z^2}(1+2z^2)^2
\left\{\begin{matrix}
1,\qquad n\in (2\mathbb{N}-1)\backslash\{1\}\\
0, \qquad n\in (2\mathbb{N})\backslash\{2\}\;\;\,\quad\\
\end{matrix}\right..
\end{equation}
Moreover, Theorem \ref{th:propHChapeauH2} sheds light on the fact that Hermite XOPs $\hat{H}_{2l-1}$ arise from the \textit{odd part} of the series $\beta_n(z)$ (\ref{eq:Sol1}). This motivates the search for Hermite XOPs $\hat{H}_{2l}$ in the \textit{even part} of the series $\beta_n(z)$ (\ref{eq:Sol1}).
Consider the series $\mu_n(z)$ (\ref{eq:mu}) and let
\begin{align}
r_3(k):&=(2k-1)^2+1,\qquad k\geq2,\\
r_4(j):&=2(1+j),\qquad\qquad\; 1\leq j\leq k-2.
\end{align}
Then the series $\mu_n(z)$ (\ref{eq:mu}) may be written as
\begin{align}\label{eq_muPrime}
&\mu_n(z) = 1-nz^2 +\frac{n(n-r_3(2))}{6}z^4+\sum_{k=3}^\infty \left[ (-1)^{k} \frac{n(n-r_3(k))\prod_{j=1}^{k-2}(n-r_4(j))}{(2k)!/2^{k}}z^{2k}\right].
\end{align}
The roots of the polynomials $p_k(n)$ (\ref{eq:pk})
\begin{align}\nonumber
p_k(n) &= (n-r_3(k)),\qquad\qquad \qquad\qquad\;\;\;\; k = 2,\\\label{eq:PolTempmu}
p_k(n) &= n(n-r_3(k))\prod_{j=1}^{k-2}(n-r_4(j)),\qquad k \geq3,
\end{align}
are even positive integers as shown in Table \ref{tab:2}
\begin{align}\label{eq:Seq1mu}
r_3(k)&\in\{10, 26, 50, 82, 122, ...\},\\\label{eq:Seq2mu}
r_4(j)&\in \{4, 6, 8, ..., 2k-2\}.
\end{align}
Therefore, the series $\mu_n(z)$ (\ref{eq:mu}) is non-polynomial for all $n\in(2\mathbb{N}-1)\cup\{2\}$, and polynomial for all even values of $n$ except $n=2$. The first polynomial cases of the series $\mu_n(z)$ (\ref{eq:mu}) are presented in Table \ref{tab:4}.
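For instance, the lowest polynomial case $n=4$ can be read off directly from (\ref{eq_muPrime}): since $n-r_4(1) = 4-4 = 0$, the factor $\prod_{j=1}^{k-2}(n-r_4(j))$ vanishes for every $k\geq3$, and the series truncates to
\begin{equation}
\mu_4(z) = 1-4z^2+\frac{4(4-r_3(2))}{6}z^4 = 1-4z^2-4z^4.
\end{equation}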
\begin{table}[H]
\caption{First polynomial cases of the series $\mu_n(z)$}
\label{tab:4}
\centering
\begin{tabular}{lll}
\hline\noalign{\smallskip}
$\quad l$ & $2l$ & $\qquad\qquad\qquad\qquad\qquad \mu_{2l}(z)$ \\
\noalign{\smallskip}\hline\noalign{\smallskip}
$\quad0$ & $\quad0$ & $1$ \\\\
$\quad2$& $\quad4$ & $1-4z^2-4z^4$ \\
$\quad\vdots$& $\quad\;\vdots$ & $\vdots$ \\
$\quad4$& $\quad8$ & $1-8z^2-\frac{8}{3}z^4+\frac{32}{5}z^6-\frac{16}{15}z^8$ \\\\
$\quad5$& $\quad10$ & $1-10z^2+\mathbf{0}\cdot z^4+\frac{32}{3}z^6-\frac{80}{21}z^8+\frac{32}{105}z^{10}$ \\\\
$\quad6$& $\quad12$ & $1-12z^2+4 z^4+\frac{224}{15}z^6-\cdots-\frac{64}{945}z^{12}$ \\
$\quad\vdots$& $\quad\;\vdots$ & $\vdots$ \\
$\quad12$& $\quad24$ & $1-24z^2+56z^4+\cdots-\frac{4096}{13749310575}z^{24}$ \\\\
$\quad13$& $\quad26$ & $1-26z^2+\frac{208}{3}z^4+\mathbf{0}\cdot z^6-\cdots+\frac{8192}{316234143225}z^{26}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
Table \ref{tab:4} shows that for the values $n=2l\in\{10, 26, 50, ...\}$, one term is missing. These values correspond to the roots $\lambda_p(k)$ shown in boldface in Table \ref{tab:2}. This is due to the fact that in these cases, the degree of the polynomial $\mu_n(z)$ (\ref{eq:mu}) corresponds to a root of the polynomials $p_k(n)$ (\ref{eq:PolTempmu}), namely a value of the sequence (\ref{eq:Seq1mu}), \textit{i.e.}
\begin{equation}
n\in\{r_3(k)\;|\;k\geq2\}\subset \lambda_p(k),
\end{equation}
where $(2k)$ is the degree of the missing term.
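For instance, for $n = r_3(2) = 10$ the coefficient of $z^4$ in (\ref{eq_muPrime}) is $\frac{10(10-r_3(2))}{6} = 0$, so the term of degree $4$ is missing, in agreement with the entry
\begin{equation}
\mu_{10}(z) = 1-10z^2+\mathbf{0}\cdot z^4+\frac{32}{3}z^6-\frac{80}{21}z^8+\frac{32}{105}z^{10}
\end{equation}
of Table \ref{tab:4}.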
The constant term in the polynomials from Table \ref{tab:4} is normalized, because we made the arbitrary choice $c_1 = 1$ during the construction of the generalized series $\beta_n(z)$ (\ref{eq:SerieRacine1}). We notice that for each $n\in\{2l\;|\;l\geq2\}$, there exists a proportionality constant that depends on $l$, so that
\begin{equation}
\hat{H}_{2l}(z)=M_2(2l)\cdot \mu_{2l}(z),\qquad l = 2, 3, \dots.
\end{equation}
\begin{theorem}\label{th:propHChapeaumu}
The functions $\hat{H}_n(z)$ (\ref{eq:HChapeau}) and $\mu_n(z)$ (\ref{eq:mu}) follow the proportionality relation
\begin{equation}
\hat{H}_n(z) = M_2(n)\mu_n(z),
\end{equation}
for all $n = 2l,\; l\in\{0, 2, 3, 4, ...\}$, where
\begin{align}\label{eq:M2}
&M_2(n) = \frac{(-1)^{(n+2)/2}n!2^{n/2}}{n \cdot p_{(1,1)}(n)\prod_{j=1}^{(n-4)/2}(n-2(1+j))},\\\nonumber
&M_2(0):=1.
\end{align}
\end{theorem}
\begin{preuve}\normalfont
Let $n = 2l,\; l\geq2$, and let $\hat{H}_n(z)$ and $\mu_n(z)$ be the functions defined in (\ref{eq:HChapeau}) and (\ref{eq:mu}), respectively. Since the Wronskian (\ref{eq:Wronskienmu}) vanishes for even $n$ and $\hat{H}_n(z)$ is a polynomial of degree $n$, the function $\mu_n(z)$ is also a polynomial of degree $n$. The missing term in the polynomial $\mu_n(z)$ illustrated in Table \ref{tab:4} corresponds to a power of $z$ that is always strictly smaller than $n$: if there is a missing term, then $n = (2k-1)^2+1$ for some $k\geq2$, and since $(2k-1)^2+1>2k$ for all $k\geq2$, the missing term is always associated with a power of $z$ smaller than $n$.
Let $c_n$ be the coefficient of the term $z^n$ in the polynomial $\mu_n(z)$. From equation (\ref{eq:mu}), the coefficient of $z^{2k}$, for $k\geq3$, is given by the rational expression
\begin{equation}\label{eq:rationalmu}
c_k = (-1)^{k} \frac{n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j))}{(2k)!/2^{k}}.
\end{equation}
The highest order term is of degree $n = 2k$, which implies that
\begin{equation}\label{eq:rational2Prime}
k=n/2.
\end{equation}
Substituting the right-hand side of (\ref{eq:rational2Prime}) into (\ref{eq:rationalmu}), we obtain
\begin{equation}\label{eq:cmu}
c_n = \frac{(-1)^{(n+2)/2}\,n\cdot p_{(1,1)}(n)2^{n/2}\prod_{j=1}^{(n-4)/2}(n-2(1+j))}{n!}.
\end{equation}
Let $\hat{\hat{c}}_n$ be the coefficient of the term $z^n$ in the polynomial $\hat{H}_n(z)$. Then, by relation (\ref{eq:HChapeau}), we have that
\begin{equation}\label{eq:cHatmu}
\hat{\hat{c}}_n = 2^{n}.
\end{equation}
Evaluating the ratio of the coefficients (\ref{eq:cmu}) and (\ref{eq:cHatmu}), we get
\begin{equation}\label{eq:tempMmu}
M_2(n) = \frac{\hat{\hat{c}}_n}{c_n} = \frac{(-1)^{(n+2)/2}n!2^{n/2}}{n \cdot p_{(1,1)}(n)\prod_{j=1}^{(n-4)/2}(n-2(1+j))}.
\end{equation}
The only zeros of the denominator of $M_2(n)$ are the zeros of $p_{(1,1)}(n)$ together with zero itself, namely the gap sequence $\{1,2\}$ and $n=0$, because
\begin{equation}
\left\{\prod_{j=1}^{(n-4)/2}(n-r_4(j))\;:\;n = 4, 6, 8, ...\right\} = \{1, 2, 2\cdot4,2\cdot4\cdot6, ...\},
\end{equation}
which means that $M_2(n)$ is defined neither on the gap sequence nor at $n=0$, which is why the latter case is defined separately in equations (\ref{eq:M2}). Remark \ref{rem:6} discusses another formulation of the constant $M_2(n)$.
We complete the proof by verifying the case $k=2$, \textit{i.e.} that $\hat{H}_4(z) = M_2(4)\mu_4(z)$ holds. Indeed,
\begin{equation}
M_2(4) = -4,
\end{equation}
so that $\hat{H}_4(z) = -4\,\mu_4(z) = -4+16z^2+16z^4$.\\
$\left.\right.\hfill\square$
\end{preuve}
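As an illustration of Theorem \ref{th:propHChapeaumu} beyond the case $n=4$, formula (\ref{eq:M2}) gives
\begin{equation}
M_2(6) = \frac{(-1)^{4}\,6!\,2^{3}}{6\cdot p_{(1,1)}(6)\cdot(6-4)} = \frac{5760}{240} = 24,
\end{equation}
so that $\hat{H}_6(z) = 24\,\mu_6(z)$, where $\mu_6(z) = 1-6z^2-4z^4+\frac{8}{3}z^6$ follows from (\ref{eq:mu}); the leading coefficient of $\hat{H}_6(z)$ is then $24\cdot\frac{8}{3} = 2^6$.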
\begin{remark}\label{rem:6}
Since $n$ is even, $M_2(n)$ can be equivalently expressed in terms of the Euler gamma function \cite{Abramowitz1965}
\begin{equation}
M_2(n) = (-1)^{(n+2)/2}2^{n-1}\pi^{-1/2}\Gamma\left(\frac{n-1}{2}\right),\qquad \forall n\in 2\mathbb{N}\backslash\{2\}.
\end{equation}
For instance, $M_2(4) = -2^{3}\pi^{-1/2}\Gamma(3/2) = -4$ and $M_2(0) = -2^{-1}\pi^{-1/2}\Gamma(-1/2) = 1$.
\end{remark}
From Theorem \ref{th:propHChapeaumu}, we find that the function $\phi_3(n)$ in the Wronskian (\ref{eq:WronskienHChapeauH2}) corresponds to the function $M_2(n)$ (\ref{eq:M2})
\begin{equation}\label{eq:WronskienHChapeauH2Prime}
Wr\left(\hat{H}_n,\nu_n\right)(z) = M_2(n)e^{z^2}(1+2z^2)^2\cdot
\left\{\begin{matrix}
0, \qquad n\in (2\mathbb{N}-1)\backslash\{1\}\\
1,\qquad n\in 2\mathbb{N}\backslash\{2\}\qquad\;\;
\end{matrix}\right..
\end{equation}
From Theorems \ref{th:propHChapeauH2} and \ref{th:propHChapeaumu}, we find that the function $\phi_1(n)$ appearing in the Wronskian (\ref{eq:WronskienBeta}) corresponds to the additive inverse of the function $M_1(n)$ (\ref{eq:M1}) when $n$ is odd, and to the function $M_2(n)$ (\ref{eq:M2}) when $n$ is even
\begin{equation}\label{eq:WronskienBetaPrime}
Wr\left(\hat{H}_n,\beta_n\right)(n,z) = e^{z^2}(1+2z^2)^2 \cdot
\left\{\begin{matrix}
-M_1(n), \qquad n\in (2\mathbb{N}-1)\backslash\{1\}\\
\;M_2(n), \qquad n\in 2\mathbb{N}\backslash\{2\}\qquad
\end{matrix}\right..
\end{equation}
\begin{corollary}\label{cor:Prop2019H2}
The functions $H^{(1)}_n(z)$ (\ref{eq:HermiteExceptFixe}), $\mu_n(z)$ (\ref{eq:mu}) and $\nu_n(z)$ (\ref{eq:Sol2}) follow the proportionality relations
\begin{equation}
H^{(1)}_n(z) = \frac{(-1)^{(n+1)/2}8n!2^{(n+1)/2}}{\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)}\nu_n(z)
\end{equation}
for all $n = 2l-1,\; l\geq2$,
\begin{equation}
H^{(1)}_n(z) = \frac{(-1)^{(n+2)/2}8n!2^{n/2}}{n \prod_{j=1}^{(n-4)/2}(n-2(1+j))}\mu_n(z)
\end{equation}
for all $n = 2l,\; l\geq2,$
\begin{equation}
H^{(1)}_0(z) = 16\mu_0(z).
\end{equation}
\end{corollary}
\begin{preuve}\normalfont
The proof is straightforward, making use of Theorems \ref{cor:Poynome2019}-\ref{th:propHChapeaumu}.
\end{preuve}
\begin{corollary}\label{cor:OrthogonalityH2}
The functions $\nu_n(z)$ (\ref{eq:Sol2}) and $\mu_n(z)$ (\ref{eq:mu}) follow the orthogonality relations
\begin{equation}
\int_{-\infty}^\infty \nu_m(x)\nu_n(x)\frac{e^{-x^2}}{(1+2x^2)^2}\;dx = \delta_{m,n}\frac{\sqrt{\pi}p_{(1,1)}(n)\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)^2}{2\cdot n!}
\end{equation}
for all $n = 2l-1,\; l\geq2$,
\begin{equation}
\int_{-\infty}^\infty \mu_m(x)\mu_n(x)\frac{e^{-x^2}}{(1+2x^2)^2}\;dx = \delta_{m,n}\frac{\sqrt{\pi}\,n\cdot p_{(1,1)}(n)\prod_{j=1}^{(n-4)/2}(n-2(1+j))^2}{ (n-1)!}
\end{equation}
for all $n = 2l,\; l\geq2$,
\begin{equation}
\int_{-\infty}^\infty \left(\mu_0(x)\right)^2\frac{e^{-x^2}}{(1+2x^2)^2}\;dx = \frac{\sqrt{\pi}}{2}.
\end{equation}
\end{corollary}
\begin{preuve}\normalfont
The orthogonality relation for $\hat{H}_n(x)$ is given by (\ref{eq:orthogPolynome2019})
\begin{equation}\label{eq:tempOrthog}
\int_{-\infty}^{+\infty}\hat{H}_{m}(x)\hat{H}_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx = \delta_{m,n}\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)}.
\end{equation}
If $n = 2l-1,\; l\geq2$, making use of Theorem \ref{th:propHChapeauH2}, then the orthogonality relation (\ref{eq:tempOrthog}) becomes
\begin{equation}\label{eq:tempOrthog2}
M_1(m)M_1(n)\int_{-\infty}^{+\infty}\nu_{m}(x)\nu_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx = \delta_{m,n}\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)},
\end{equation}
which implies that
\begin{align}\label{eq:tempOrthog3}
\int_{-\infty}^{+\infty}\nu_{m}(x)\nu_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx &= \delta_{m,n}M_1^{-2}(n)\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)}\\\nonumber
&=\delta_{m,n}\frac{\sqrt{\pi}p_{(1,1)}(n)\prod_{j=1}^{(n-3)/2}(n-2(1+j)+1)^2}{2\cdot n!}.
\end{align}
The case $n=0$ is easily verified. If $n = 2l,\; l\geq2$, making use of Theorem \ref{th:propHChapeaumu}, then the orthogonality relation (\ref{eq:tempOrthog}) becomes
\begin{equation}\label{eq:tempOrthog2mu}
M_2(m)M_2(n)\int_{-\infty}^{+\infty}\mu_{m}(x)\mu_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx = \delta_{m,n}\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)},
\end{equation}
which implies that
\begin{align}\label{eq:tempOrthog3mu}
\int_{-\infty}^{+\infty}\mu_{m}(x)\mu_{n}(x)\frac{e^{-x^2}}{(1+2x^2)^2}dx &= \delta_{m,n}M_2^{-2}(n)\frac{\sqrt{\pi}2^{n}n!}{(n-1)(n-2)}\\\nonumber
&=\delta_{m,n}\frac{\sqrt{\pi}\,n\cdot p_{(1,1)}(n)\prod_{j=1}^{(n-4)/2}(n-2(1+j))^2}{ (n-1)!}.
\end{align}
$\left.\right.\hfill\square$
\end{preuve}
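For instance, for $n=m=3$ the first relation of Corollary \ref{cor:OrthogonalityH2} gives
\begin{equation}
\int_{-\infty}^\infty \left(\nu_3(x)\right)^2\frac{e^{-x^2}}{(1+2x^2)^2}\;dx = \frac{\sqrt{\pi}\,p_{(1,1)}(3)}{2\cdot 3!} = \frac{\sqrt{\pi}}{6},
\end{equation}
which may equivalently be obtained from (\ref{eq:tempOrthog}) with $M_1(3) = 12$, namely $24\sqrt{\pi}/12^2 = \sqrt{\pi}/6$.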
\begin{remark}
On the gap sequence, the functions $\hat{H}_{1}(z)$ and $\hat{H}_{2}(z)$ may be defined from the extension of classical Hermite polynomials to negative integers, as in equations (\ref{eq:HermiteNegInt}), leading to non-polynomial solutions of the ODE (\ref{eq:EDOXComplexe}). Theorems \ref{th:propHChapeauH2} and \ref{th:propHChapeaumu} state that there exists a proportionality constant between the polynomials $\hat{H}_{2l}(z)$ (\ref{eq:HChapeau}) and $\mu_{2l}(z)$ (\ref{eq:mu}), where $l\in\{0, 2, 3, ...\}$, and between the polynomials $\hat{H}_{2l-1}(z)$ (\ref{eq:HChapeau}) and $\nu_{2l-1}(z)$ (\ref{eq:Sol2}), where $l\in\{2, 3, 4, ...\}$. The same phenomenon holds between $\hat{H}_{2}(z)$ and $\mu_{2}(z)$ and between $\hat{H}_{1}(z)$ and $\nu_{1}(z)$, namely on the gap sequence, which indicates that the extension of classical Hermite polynomials to negative integers arises naturally in the construction of the solution of the ODE (\ref{eq:EDOXComplexe}).
\end{remark}
We showed in the present section that the $X_2^{(1)}$-Hermite polynomials of even degree may be expressed as the \textit{even part} of the series $\beta_{n}(z)$ (\ref{eq:Sol1}), when $n$ is even, and that the $X_2^{(1)}$-Hermite polynomials of odd degree may be expressed as the \textit{odd part} of the series $\beta_{n}(z)$ (\ref{eq:Sol1}), when $n$ is odd. We would like to express the general solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) as a linear combination of two separate functions, where the first function would include the $X_2^{(1)}$-Hermite polynomials together with the non-polynomial cases $n=1,2$ (the gap sequence), and where the second function would include non-polynomial solutions. Let $\hat{H}_{1}(z)$ and $\hat{H}_{2}(z)$ be defined as in (\ref{eq:HermiteNegInt}). Then $\{\hat{H}_{n}(z)\}_{n=0}^\infty$ is a countable sequence of functions which includes the $X_2^{(1)}$-Hermite polynomials, up to a constant, and two non-polynomial solutions, namely $n=1,2$ (the gap sequence). Consider the proportionality constants $M_1(n)$ and $M_2(n)$ from Theorems \ref{th:propHChapeauH2} and \ref{th:propHChapeaumu}, respectively, and let
\begin{equation}\label{eq:M3}
M_3(n) := \left\{\begin{matrix}
1, \qquad\;\; n\in\{1,2\}\;\;\;\\
M_1^{-1}(n), \quad n \in (2\mathbb{N}-1)\backslash\{1\}\\
M_2^{-1}(n), \quad n \in 2\mathbb{N}\backslash\{2\}\quad\;\;
\end{matrix}\right..
\end{equation}
We define
\begin{equation}\label{eq:alpha}
\alpha_n(z) := M_3(n)\hat{H}_{n}(z), \qquad n\in \mathbb{N},
\end{equation}
where the functions $\hat{H}_{1}(z)$ and $\hat{H}_{2}(z)$ are defined as in (\ref{eq:HermiteNegInt}). The constant $M_3(n)$ in the definition (\ref{eq:alpha}) is not essential, but it provides a convenient normalization: for $n\notin\{1,2\}$, the odd and even cases of $\alpha_n(z)$ correspond exactly to the polynomial cases of $\nu_n(z)$ (\ref{eq:Sol2}) and $\mu_n(z)$ (\ref{eq:mu}), respectively, resulting in polynomials whose coefficient of the smallest power of $z$ is normalized, as shown in Tables \ref{tab:3} and \ref{tab:4}.
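For example, for $n=3$ definition (\ref{eq:alpha}) gives
\begin{equation}
\alpha_3(z) = M_1^{-1}(3)\hat{H}_3(z) = \frac{1}{12}\hat{H}_3(z) = \nu_3(z) = z+\frac{2}{3}z^3,
\end{equation}
the normalized polynomial appearing in Table \ref{tab:3}.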
Under the above hypotheses, we have the following theorem.
\begin{theorem} (\textbf{Main result}) \label{th:Main}
The general solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) is
\begin{equation}\label{eq:GenSolDecomposed}
k_1\alpha_n(z)+k_2\beta_n(z), \qquad\qquad k_1, k_2 \in\mathbb{C},
\end{equation}
where the functions $\alpha_n(z)$ and $\beta_n(z)$ are defined as in (\ref{eq:alpha}) and (\ref{eq:Sol1}), respectively. Moreover, the countable sequence
\begin{equation}\label{eq:OPS}
\{\alpha_{n}(z)\}_{n\in\mathbb{N}\backslash\{1,2\}}
\end{equation}
corresponds to the exceptional Hermite orthogonal polynomials of codimension $2$ associated with the partition $\lambda = (1)$ of the positive integer $m=1$, up to a constant that depends on $n$, whereas the set
\begin{equation}
\{\alpha_{n}(z)\}_{n\in\{1,2\}}
\end{equation}
corresponds to non-polynomial solutions defined from the extension of classical Hermite polynomials to negative integers, which complete the gap $\{1,2\}$ in the spectrum of the orthogonal polynomial system (\ref{eq:OPS}), but are not part of the orthogonal polynomial system (\ref{eq:OPS}) itself. The countable sequence
\begin{equation}\label{eq:NonPol}
\{\beta_{n}(z)\}_{n\in\mathbb{N}}
\end{equation}
is composed of non-polynomial functions.
\end{theorem}
\begin{preuve}\normalfont
Consider the definition of the function $\alpha_n(z)$ (\ref{eq:alpha}). By Corollary \ref{cor:HChapeau}, we know that $\hat{H}_n(z)$ is a polynomial solution of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for all $n\in\mathbb{N}\backslash\{1,2\}$, which indicates that $\alpha_n(z)$ is a polynomial solution for these values of $n$. The non-polynomial cases $n=1,2$ are easily verified by substituting the expressions (\ref{eq:HermiteNegInt}) and their derivatives up to order 2 into the ODE (\ref{eq:EDOXComplexe}), when considering the extension of classical Hermite polynomials to negative integers. This is confirmed by Theorems \ref{th:propHChapeauH2} and \ref{th:propHChapeaumu} and by the Wronskians
\begin{equation}\label{eq:WronskienmuComplete}
Wr\left(\alpha_n,\mu_n\right)(z) = e^{z^2}(1+2z^2)^2\cdot
\left\{\begin{matrix}
-2,& n=1\;\\
0,& n=2\;\\
-1,& \qquad\qquad\;\,\; n\in (2\mathbb{N}-1)\backslash\{1\}\\
0, & \qquad\;\;\;\; n\in (2\mathbb{N})\backslash\{2\}\\
\end{matrix}\right.,
\end{equation}
\begin{equation}\label{eq:WronskienHChapeauH2Complete}
Wr\left(\alpha_n,\nu_n\right)(z) = e^{z^2}(1+2z^2)^2\cdot
\left\{\begin{matrix}
0,& \;\;\;n= 1\;\\
2,& \;\;\;n=2\;\\
0, & \qquad\qquad\;\,\;\;\;\; n\in (2\mathbb{N}-1)\backslash\{1\}\\
1,&\qquad\;\;\;\;n\in 2\mathbb{N}\backslash\{2\}
\end{matrix}\right.,
\end{equation}
showing that $\{\hat{H}_{2l}(z),\mu_{2l}(z)\}$ and $\{\hat{H}_{2l+1}(z),\nu_{2l+1}(z)\}$ are linearly dependent sets for all $l\geq0$ (we recall that it was shown in Appendix \ref{app:2} that $\mu_{2l}(z)$ and $\nu_{2l+1}(z)$ are solutions for all $l\geq0$).
By Proposition \ref{th:GenSol}, we know that $\beta_n(z)$ is a non-polynomial solution of the ODE (\ref{eq:EDOXComplexe}) for all $n\in\mathbb{N}$. Moreover, the Wronskian
\begin{equation}\label{eq:WronskienBetaComplete}
Wr\left(\alpha_n,\beta_n\right)(n,z) = e^{z^2}(1+2z^2)^2 \cdot
\left\{\begin{matrix}
(\sqrt{\pi}-2),&n=1\qquad\quad\;\;\\
2(1-2\sqrt{\pi}),&n=2\qquad\quad\;\;\\
-1, &\quad\;\; n\in (2\mathbb{N}-1)\backslash\{1\}\\
\;1, &\quad \;n\in 2\mathbb{N}\backslash\{2\}\qquad
\end{matrix}\right.,
\end{equation}
shows that $\alpha_n(z)$ and $\beta_n(z)$ are linearly independent functions.
Making use of the theoretical results from section \ref{sec:2}, it was shown in detail in section \ref{sec:3} (by construction) that the countable sequence
\begin{equation}\label{eq:OPSPrime}
\{\alpha_{n}(z)\}_{n\in\mathbb{N}\backslash\{1,2\}}
\end{equation}
corresponds to the exceptional Hermite orthogonal polynomials of codimension $2$ associated with the partition $\lambda = (1)$ of the positive integer $m=1$, up to a constant that depends on $n$. This was shown extensively in Theorems \ref{cor:Poynome2019}-\ref{th:propHChapeaumu} and in Corollaries \ref{cor:HChapeau} and \ref{cor:Prop2019H2}.
By the definition of $\alpha_n(z)$ (\ref{eq:alpha}) and by equations (\ref{eq:HermiteNegInt}), the set
\begin{equation}
\{\alpha_{n}(z)\}_{n\in\{1,2\}}
\end{equation}
corresponds to non-polynomial solutions defined from the extension of classical Hermite polynomials to negative integers.
The countable sequence
\begin{equation}\label{eq:NonPolPrime}
\{\beta_{n}(z)\}_{n\in\mathbb{N}}
\end{equation}
is composed of non-polynomial functions. This is due to the form of the series $\beta_n(z)$ (\ref{eq:Sol1}), composed of coefficients $c_{2k}(n)$ (\ref{eq:coeffPair}) associated with even powers of $z$ which have no roots in common with the coefficients $c_{2k-1}(n)$ (\ref{eq:coeffImpair}) associated with odd powers of $z$ (see the roots $\lambda_p(k)$ and $\lambda_q(k)$ in Table \ref{tab:2}).
\\
$\left.\right.\hfill\square$
\end{preuve}
\begin{remark}
The linear combination (\ref{eq:GenSolDecomposed}) is the analytical general solution of the ODE (\ref{eq:OperatorTlambdaSquared}) for $n\in\mathbb{N}$ (no gap), on the complex plane, for the particular case $\lambda = (1)$. Provided that $\lambda^2$ is an Adler partition, the differential operator $T_{\lambda^2}$ is non-singular on $\mathbb{R}$. We showed that in the case $\lambda = (1)$, the operator possesses singularities at $\pm i/\sqrt{2}\notin\mathbb{R}$. This is due to the fact that the operator $T_\lambda$ (\ref{eq:HermioteXOp}) has singularities corresponding to the zeros of the Wronskian $H_\lambda$ (\ref{eq:wronskiens}). Indeed, the general solution was built from an Adler partition and making use of the transformation (\ref{eq:gapSeqRel}), leading to a gap sequence which fulfills the hypotheses of the Krein-Adler Theorem \ref{th:1}, so the ODE arising from these choices has no singularity on $\mathbb{R}$. Moreover, the general solution (\ref{eq:GenSolDecomposed}) was built using the method of generalized series, under the assumption that it could be expressed as a Taylor series around $z=0$, resulting in an analytical function on $\mathbb{C}$. The solution $\alpha_n(z)$ (\ref{eq:alpha}) is non-polynomial on the gap sequence arising from the choice of the partition $\lambda = (1)$. The mathematical expression of the coefficients of the series $\beta_n(z)$ (\ref{eq:Sol1}) constructed from the differential operator $T_{(1,1)}$ provides some clues explaining the existence of a gap in the eigenvalue spectrum of the differential operator. The coefficients $c_{2k}(n)$ (\ref{eq:coeffPair}) and $c_{2k-1}(n)$ (\ref{eq:coeffImpair}) of the series $\beta_n(z)$ (\ref{eq:Sol1}), associated with even and odd powers of $z$, respectively, are polynomials in the parameter $n$ which possess no root on the gap sequence (see Table \ref{tab:2}), leading to the non-polynomial solutions $\nu_1(z)$ and $\mu_2(z)$ for the values $n=1,2$.
\end{remark}
\newpage
\section{Minimal surfaces associated with $X_2^{(1)}$-Hermite polynomials}
\label{sec:4}
In this section, we make use of the link between the classical Enneper-Weierstrass formula for the immersion of a minimal surface $F$ in the Euclidean space $\mathbb{E}^3$ and the linear problem for the moving frame
\begin{equation}\label{eq:MovingFrame}
\sigma = (\partial F, \overline{\partial} F, N)^T
\end{equation}
on the surface, where we used the notation for the holomorphic and antiholomorphic derivatives
\begin{equation}
\partial = \frac{1}{2}\left(\frac{\partial}{\partial x}-i\frac{\partial}{\partial y}\right),\qquad \overline{\partial} = \frac{1}{2}\left(\frac{\partial}{\partial x}+i\frac{\partial}{\partial y}\right), \qquad z = x+iy.
\end{equation}
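As a quick check of this notation, one has
\begin{equation}
\partial z = \frac{1}{2}(1-i\cdot i) = 1, \qquad \overline{\partial} z = \frac{1}{2}(1+i\cdot i) = 0,
\end{equation}
so a function satisfies $\overline{\partial}f = 0$ precisely when it obeys the Cauchy-Riemann equations, \textit{i.e.} when it is holomorphic.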
This link, expressed as a second-order linear ODE \cite{Doliwa2012}, allows us to investigate the behavior of Hermite XOPs represented by minimal surfaces. We calculate the explicit form of the immersion formula and we show a numerical display of these surfaces for different values of the parameter of the $X_2^{(1)}$-Hermite complex ODE (\ref{eq:EDOXComplexe}).
\subsection{Enneper-Weierstrass formula and $\frak{su}(2)$ representation}
Consider the Enneper-Weierstrass immersion formula \cite{enneper1868analytisch,weierstrass1866fortsetzung}, describing a surface of zero mean curvature ($H=0$) in terms of two locally holomorphic arbitrary functions
\begin{equation}
\label{eq:F}F(\xi_0,\xi) = \frac{1}{2}\,\mathrm{Re}\left( \int_{\xi_{0}}^\xi \left( 1 - \chi^2,\; i(1 + \chi^2),\; 2\chi\right)^T\eta^2 \; dz\right) \in \mathbb{E}^3,
\end{equation}
where
\begin{equation}\label{eq:EtaChiHolom}
\overline{\partial}\eta =0,\qquad \overline{\partial}\chi = 0.
\end{equation}
The integration in formula (\ref{eq:F}) is performed on an arbitrary path from the constant $\xi_0\in\mathbb{C}$ to the complex variable $\xi\in\mathbb{C}\backslash\{\xi_0\}$.
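As a classical example, the simplest admissible data $\eta = 1$, $\chi = z$ with $\xi_0 = 0$ yield
\begin{equation}
F(0,\xi) = \frac{1}{2}\,\mathrm{Re}\left(\xi-\frac{\xi^3}{3},\; i\left(\xi+\frac{\xi^3}{3}\right),\; \xi^2\right)^T,
\end{equation}
which is the Enneper surface \cite{enneper1868analytisch}.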
Let $\tilde{F}\in\frak{su}(2)\simeq \mathbb{E}^3$ be the quaternionic description of the minimal surface described by formula (\ref{eq:F}). In order to determine the explicit form of this representation, we identify the Euclidean space $\mathbb{E}^3$ with the imaginary quaternions by the formula \cite{Bobenko1994}
\begin{equation}\label{eq:Desc_Quatern} \tilde{F} = -i\sum_{\alpha = 1}^3F_\alpha\sigma_\alpha \in \mathbb{I}m\mathbb{H}\simeq \frak{su}(2),\qquad Tr(\tilde{F})=0 ,\quad \tilde{F}^\dagger = -\tilde{F},
\end{equation}
where dagger $\dagger$ denotes the Hermitian conjugate of the considered expression. The matrices $\sigma_\alpha, \alpha = 1,2,3$ are the Pauli matrices, such that $\sigma_\alpha^\dagger = \sigma_\alpha$. The inner product is then
\begin{equation}
\braket{X,Y} = -\frac{1}{2}Tr(XY),\qquad \forall X,Y \in \frak{su}(2).
\end{equation}
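The identification (\ref{eq:Desc_Quatern}) is an isometry: using $Tr(\sigma_\alpha\sigma_\beta) = 2\delta_{\alpha\beta}$, one finds
\begin{equation}
\braket{\tilde{F},\tilde{F}} = -\frac{1}{2}Tr\left(-\sum_{\alpha,\beta=1}^3 F_\alpha F_\beta\,\sigma_\alpha\sigma_\beta\right) = \sum_{\alpha=1}^3 F_\alpha^2,
\end{equation}
which is the squared Euclidean norm of $F\in\mathbb{E}^3$.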
Substituting the components $F_\alpha, \alpha = 1,2,3,$ of the Enneper-Weierstrass representation (\ref{eq:F}) into formula (\ref{eq:Desc_Quatern}), we obtain a matrix formulation of the surface \cite{Chalifour2019}
\begin{equation}\label{eq:53}
\tilde{F} = -\frac{i}{2}\left(\begin{array}{cc} \int_{\xi_{0}}^\xi \chi\eta^2 \;dz + \left(\int_{\xi_{0}}^\xi \chi\eta^2 \;dz \right)^* & \int_{\xi_{0}}^\xi \eta^2 \;dz - \left(\int_{\xi_{0}}^\xi \chi^2\eta^2 \;dz \right)^* \\ \\ -\int_{\xi_{0}}^\xi \chi^2\eta^2 \;dz + \left(\int_{\xi_{0}}^\xi \eta^2 \;dz \right)^* & - \int_{\xi_{0}}^\xi \chi\eta^2 \;dz - \left(\int_{\xi_{0}}^\xi \chi\eta^2 \;dz \right)^* \\ \end{array}\right),
\end{equation}
where the star $*$ denotes the complex conjugate of the considered expression. Formula (\ref{eq:53}) for $\tilde{F}$ is a quaternionic representation of the surface immersed in the Lie algebra $\frak{su}(2)$, since $Tr(\tilde{F}) = 0$ and $\tilde{F}^\dagger = -\tilde{F}$.
\newpage
\subsection{Holomorphic reduction of the linear problem for the moving frame}
Making use of the Lie algebra isomorphism $\frak{so}(3) \simeq \frak{su}(2)$, the Gauss-Weingarten equations for the moving frame $\sigma$ (\ref{eq:MovingFrame}) may be written in terms of $2\times2$ complex-valued matrices \cite{Bobenko1994,Bobenko2000}, where the wavefunction $\Phi \in SU(2, \mathbb{C})$ satisfies the linear differential equations
\begin{equation}\label{eq:15}\partial\Phi = \mathcal{U}\Phi, \qquad \overline{\partial}\Phi = \mathcal{V}\Phi,\end{equation}
and where $\mathcal{U}(z,\bar{z}),\mathcal{V}(z,\bar{z})\in\frak{sl}(2,\mathbb{C})$. When the mean curvature vanishes ($H=0$), the matrices $\mathcal{U}$ and $\mathcal{V}$ are of the form
\begin{equation}\label{eq:U_V} \mathcal{U} = \left( \begin{array}{cc}
\frac{1}{4}\partial u & -Qe^{-\frac{u}{2}} \\
0 & -\frac{1}{4}\partial u \\
\end{array}\right),\quad
\mathcal{V} = \left( \begin{array}{cc}
-\frac{1}{4}\bar{\partial}u & 0 \\
\bar{Q}e^{-\frac{u}{2}} & \frac{1}{4}\bar{\partial}u\\
\end{array}\right)\in \frak{sl}(2,\mathbb{C}), \end{equation}
where $\mathcal{U}^\dagger = -\mathcal{V}$. We apply the gauge transformation $M$ to the wavefunction $\Phi\in SU(2, \mathbb{C})$ of the linear problem (\ref{eq:15}), as proposed in \cite{Doliwa2012} and used afterwards in \cite{Chalifour2019}
\begin{equation}\label{eq:31_0}
\Psi = M\Phi, \qquad \text{ where } \;M =\left( \begin{array}{cc}\frac{|\eta|(1+\chi\overline{\chi})^{1/2}}{\eta\chi}&0\\ -\frac{|\eta|}{\eta(1+\chi\overline{\chi})^{1/2}}&\frac{\eta\chi}{|\eta|(1+\chi\overline{\chi})^{1/2}} \end{array}\right)\in SL(2, \mathbb{C}).
\end{equation}
We obtain
\begin{equation}\label{eq:32}
\partial\Psi = \lambda \eta^2\left( \begin{array}{cc}\chi &-1\\ \chi^2&-\chi\\ \end{array}\right)\Psi, \qquad \overline{\partial} \Psi=0,
\end{equation}
where
\begin{equation}\label{eq:50}
\tilde{\mathcal{U}}(\lambda;z) = \lambda\eta^2\left( \begin{array}{cc} \chi & -1\\ \chi^2 & -\chi \end{array}\right)\in\frak{sl}(2,\mathbb{C}).
\end{equation}
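This reduction follows from the standard gauge-transformation rule: substituting $\Psi = M\Phi$ into the linear problem (\ref{eq:15}), the transformed potential matrices are
\begin{equation*}
\tilde{\mathcal{U}} = (\partial M)M^{-1} + M\mathcal{U}M^{-1},\qquad \tilde{\mathcal{V}} = (\overline{\partial} M)M^{-1} + M\mathcal{V}M^{-1} = 0,
\end{equation*}
so that the antiholomorphic part of the linear problem is gauged away.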
The system (\ref{eq:32}) is a reduced linear problem for the holomorphic wavefunction $\Psi(z)$. The potential matrix $\tilde{\mathcal{U}}$ is parametrized by the spectral parameter $\lambda\in \mathbb{C}\backslash\{0\}$, where $\eta = re^{i\theta}, \;\; r \in \mathbb{R}^+, \;\;\theta\in [0, 2\pi[$ and $\lambda = \eta/\overline{\eta} = e^{2i\theta}$. The linear system (\ref{eq:32}) can be equivalently expressed by the system
\begin{align}\label{eq:52}
&\partial^2\Psi_1 - 2\frac{\partial\eta}{\eta}\partial \Psi_1 - \lambda\eta^2\partial\chi\Psi_1 = 0,\\\label{eq:52_a}
&\Psi_2 = \chi \Psi_1 - \frac{\partial \Psi_1}{\lambda \eta^2},
\end{align}
where $\Psi = (\Psi_1, \Psi_2)^T$. The coefficients of the linear second-order ODE (\ref{eq:52}) possess a degree of freedom involving two arbitrary locally holomorphic complex-valued functions $\eta(z)$ and $\chi(z)$. These functions correspond to the arbitrary functions from the Enneper-Weierstrass representation (\ref{eq:F}) describing minimal surfaces in $\mathbb{E}^3$.
\subsection{Links between the linear problem and the $X_2^{(1)}$-Hermite differential equation}
Making use of the approach described in \cite{Chalifour2019}, we carry out an association between the coefficients of the ODE (\ref{eq:52}) and the coefficients of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}). We obtain the system
\begin{equation}\label{eq:Association}
- 2\frac{\partial\eta}{\eta} = -2\left(z+\frac{4z}{1+2z^2}\right), \qquad - \lambda\eta^2\partial\chi = 2n.
\end{equation}
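Indeed, the first relation of (\ref{eq:Association}) can be integrated directly,
\begin{equation*}
\frac{\partial\eta}{\eta} = z+\frac{4z}{1+2z^2}\;\;\Longrightarrow\;\; \ln\eta = \frac{z^2}{2}+\ln\left(1+2z^2\right)+\mathrm{const}\;\;\Longrightarrow\;\; \eta = c_1e^{z^2/2}\left(1+2z^2\right),
\end{equation*}
while the second relation determines $\partial\chi = -2n/(\lambda\eta^2)$, which is integrated in turn.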
The association made in (\ref{eq:Association}) signifies that the component $\Psi_1(n;z)$ of the holomorphic wavefunction $\Psi$ corresponds to the general solution of equation (\ref{eq:EDOXComplexe}). We obtain the explicit form of the arbitrary functions from the Enneper-Weierstrass representation (\ref{eq:F})
\begin{align}\label{eq:etaExceptional}
\eta^2(z)&=c_1^2 e^{z^2}(1+2z^2)^2 = \frac{16c_1^2}{W_{(1,1)}(z)},\\\nonumber
\chi(n;\lambda;z) &= -\frac{2n}{\lambda c_1^2}\left(c_2 +\frac{\sqrt{\pi}}{4}erf(z)+\frac{e^{-z^2}}{2(1+2z^2)}\right)\\\label{eq:chiExceptional}
&= -\frac{2n}{\lambda c_1^2}\left(c_2 +\frac{\sqrt{\pi}}{4}erf(z)+8(1+2z^2)W_{(1,1)}(z)\right),
\end{align}
where $c_1\in\mathbb{C}\backslash\{0\}$ and $c_2\in\mathbb{C}$ are arbitrary constants. The functions $\eta$ (\ref{eq:etaExceptional}) and $\chi$ (\ref{eq:chiExceptional}) are written in terms of the complex extension of the weight (\ref{eq:Poidslambda1}). They can be compared to the arbitrary functions $\eta_0$ and $\chi_0$ arising from the identification with the classical complex Hermite ODE \cite{Szego1939}
\begin{equation}
\omega''(z)-2z\omega'(z)+2n\omega(z) = 0,\qquad z\in\mathbb{C},\; n\in\mathbb{N}.
\end{equation}
These were obtained in \cite{Chalifour2019}
\begin{equation}
\eta_0^2(z) = c_1^2e^{z^2}, \quad \chi_0(n;\lambda;z) = -\frac{2n}{\lambda c_1^2}\left(c_2+ \frac{\sqrt{\pi}}{2}erf(z)\right).
\end{equation}
Substituting the functions $\eta$ (\ref{eq:etaExceptional}) and $\chi$ (\ref{eq:chiExceptional}), the components of the potential matrix $\tilde{\mathcal{U}}(n; \lambda;z) = (u_{ij})$ (\ref{eq:50}) become
\begin{align}\nonumber
u_{11} &=-u_{22} = -\frac{32n}{W_{(1,1)}(z)}\left(c_2+\frac{\sqrt{\pi}}{4}erf(z)+8(1+2z^2)W_{(1,1)}(z)\right),\\
u_{12} &= -\frac{16\lambda c_1^2}{W_{(1,1)}(z)},\\\nonumber
u_{21} &= \frac{64n^2}{\lambda c_1^2W_{(1,1)}(z)}\left(c_2+\frac{\sqrt{\pi}}{4}erf(z)+8(1+2z^2)W_{(1,1)}(z)\right)^2.
\end{align}
From equation (\ref{eq:52}) and from the association made in equations (\ref{eq:Association}), we know that the component $\Psi_1$ of the wavefunction corresponds to the general solution of the complex $X_2^{(1)}$-Hermite ODE given by Theorem \ref{th:Main}. The component $\Psi_2$ of the wavefunction is given by equation (\ref{eq:52_a}). We obtain
\begin{equation}\label{eq:Wavefunction}
\Psi(n; \lambda; z) = \left(\begin{matrix}\Psi_1\\\Psi_2\end{matrix}\right) = \left(\begin{matrix}k_1\alpha_n(z)+k_2\beta_n(z)\\\chi \left(k_1\alpha_n(z)+k_2\beta_n(z)\right) - \frac{\partial \left(k_1\alpha_n(z)+k_2\beta_n(z)\right)}{\lambda \eta^2}\end{matrix}\right).
\end{equation}
Substituting the functions $\eta$ (\ref{eq:etaExceptional}) and $\chi$ (\ref{eq:chiExceptional}) into equation (\ref{eq:Wavefunction}), we obtain
\begin{equation}\label{eq:WavefunctionFinal}
\Psi = \left(\begin{matrix}k_1\alpha_n(z)+k_2\beta_n(z)\\\\
-\frac{2n}{\lambda c_1^2}\left(c_2 +\frac{\sqrt{\pi}}{4}erf(z)+8(1+2z^2)W_{(1,1)}(z)\right) \left(k_1\alpha_n(z)+k_2\beta_n(z)\right) \\
- \frac{1}{16\lambda c_1^2}\partial \left(k_1\alpha_n(z)+k_2\beta_n(z)\right)W_{(1,1)}(z)\end{matrix}\right),
\end{equation}
where $c_1,\lambda \in\mathbb{C}\backslash\{0\}$ and $c_2, k_1, k_2\in\mathbb{C}$.
\subsection{Minimal surfaces describing the $X_{2}^{(1)}$-Hermite polynomials}
The explicit form of the components of the Enneper-Weierstrass representation (\ref{eq:F}) is obtained by integration of the functions $\eta$ (\ref{eq:etaExceptional}) and $\chi$ (\ref{eq:chiExceptional}). Let
\begin{equation}\label{eq:Integrales}
I_1:= \int_{\xi_{0}}^\xi \eta^2\;dz, \quad I_2 := \int_{\xi_{0}}^\xi \chi^2\eta^2\;dz, \quad I_3:=\int_{\xi_{0}}^\xi \chi\eta^2\;dz.
\end{equation}
Then the Enneper-Weierstrass immersion formula (\ref{eq:F}) describing a minimal surface immersed in $\mathbb{E}^3$ becomes
\begin{equation}\label{eq:Integrales2}
F(n; \lambda;\xi_0, \xi) = \left(\frac{1}{2}\mathbb{R}e\left( I_1 - I_2 \right), -\frac{1}{2}\mathbb{I}m\left( I_1 + I_2 \right), \mathbb{R}e\left( I_3 \right)\right)^T \;\; \in \mathbb{E}^3,
\end{equation}
where
\begin{equation}\label{eq:I1}
I_1 = c_1^2\left[ \sqrt{\pi} erfi(z)+e^{z^2}z(2z^2-1) \right]_{\xi_0}^\xi,
\end{equation}
\begin{align}\nonumber
I_2 = &\frac{4n^2}{\lambda^2 c_1^2}\left[ c_2^2\sqrt{\pi}erfi(z)+\frac{\sqrt{\pi}}{6}z^2 erf(z)+ \frac{\sqrt{\pi}}{4}z erf(z)+\frac{\sqrt{\pi}}{8} erf(z)-\frac{c_2}{2}z^2\left.\right._{2}F_{2}(1,1;-1/2,2;z^2)\right.\\\label{eq:I2}
&\quad\left.+c_2\sqrt{\pi}z^2\left.\right._{2}F_{2}(1,1;1/2,2;z^2)+\frac{c_2\sqrt{\pi}}{2}z^2 \left.\right._{2}F_{2}(1,1;3/2,2;z^2)-\frac{c_2\sqrt{\pi}}{2} z^4+\frac{2c_2}{3}z^3-\frac{c_2\sqrt{\pi}}{2}z^2\right.\\\nonumber
&\quad\left.+c_2z+2c_2^2z^3e^{z^2}-c_2^2ze^{z^2}+\frac{1}{6}z^2 e^{-z^2}+\frac{5}{12}e^{-z^2}
\right]_{\xi_0}^\xi+\frac{n^2\pi}{4\lambda^2 c_1^2}\int_{\xi_0}^\xi e^{z^2}(2z^2+1)erf^2(z)\;dz,
\end{align}
\begin{align}\nonumber
I_3 = &\frac{2n}{\lambda }\left[ c_2\sqrt{\pi}erfi(z)+c_2e^{z^2}(2z^2-1)-\frac{1}{4}z^2\left.\right._{2}F_{2}(1,1;-1/2,2;z^2)\right.\\\label{eq:I3}
&\quad\left.+\frac{1}{2}z^2\left.\right._{2}F_{2}(1,1;1/2,2;z^2)+\frac{1}{4}z^2 \left.\right._{2}F_{2}(1,1;3/2,2;z^2)-\frac{1}{4} z^4+\frac{1}{3}z^3-\frac{1}{4}z^2+\frac{1}{2}z\right]_{\xi_0}^\xi.
\end{align}
The function $erfi(z)$ appearing in the components (\ref{eq:I1})-(\ref{eq:I3}) is the imaginary error function, defined by \cite{Abramowitz1965}
\begin{equation}
erfi(z) = -i\cdot erf(iz),
\end{equation}
and the function ${}_{p}F_{q}(a_1, a_2, \dots, a_p;b_1, b_2, \dots, b_q;z)$ is the generalized hypergeometric function defined by \cite{Abramowitz1965}
\begin{equation}
{}_{p}F_{q}(a_1, a_2, \dots, a_p;b_1, b_2, \dots, b_q;z) = \sum_{k=0}^\infty\frac{(a_1)_k (a_2)_k\cdots (a_p)_k}{(b_1)_k (b_2)_k\cdots (b_q)_k}\frac{z^k}{k!},
\end{equation}
where $(a)_k:=a(a+1)\cdots(a+k-1)$ is the Pochhammer symbol. The components of the surface (\ref{eq:Integrales2}) take the form
\begin{align}\nonumber
&F_1 = \frac{1}{2}\mathbb{R}e\left[\sqrt{\pi}\left(c_1^2 - \frac{4n^2c_2^2}{\lambda^2c_1^2}\right) erfi(z)\bigg\vert_{\xi_0}^\xi+\left(c_1^2 - \frac{4n^2}{\lambda^2c_1^2} \right)e^{z^2}z(2z^2-1)\bigg\vert_{\xi_0}^\xi\right.\\\nonumber
&\quad\quad\left. -\frac{4n^2}{\lambda^2 c_1^2}\left[\frac{\sqrt{\pi}}{2}\left(\frac{1}{3}z^2 erf(z)+\frac{1}{2}z\cdot erf(z) +\frac{\sqrt{\pi}}{4} erf(z)\right)-\frac{c_2}{2}z^2 \left.\right._{2}F_{2}(1,1;-1/2,2;z^2)\right.\right.\\\label{eq:ComponentF1}
&\quad\quad\left.\left.+c_2\sqrt{\pi}z^2\left.\right._{2}F_{2}(1,1;1/2,2;z^2)+\frac{c_2\sqrt{\pi}}{2}z^2 \left.\right._{2}F_{2}(1,1;3/2,2;z^2)-\frac{c_2\sqrt{\pi}}{2} z^4+\frac{2c_2}{3}z^3\right.\right.\\\nonumber
&\quad\quad\left.\left.-\frac{c_2\sqrt{\pi}}{2}z^2+c_2z+\frac{1}{6}z^2 e^{-z^2}+\frac{5}{12}e^{-z^2}
\right]_{\xi_0}^\xi +\frac{\pi}{16}\int_{\xi_0}^\xi e^{z^2}(2z^2+1)^2 erf^2(z)dz
\right],
\end{align}
\begin{align}\nonumber
&F_2 = -\frac{1}{2}\mathbb{I}m\left[\sqrt{\pi}\left(c_1^2 + \frac{4n^2c_2^2}{\lambda^2c_1^2}\right) erfi(z)\bigg\vert_{\xi_0}^\xi+\left(c_1^2 + \frac{4n^2}{\lambda^2c_1^2} \right)e^{z^2}z(2z^2-1)\bigg\vert_{\xi_0}^\xi\right.\\\nonumber
&\quad\quad\left. +\frac{4n^2}{\lambda^2 c_1^2}\left[\frac{\sqrt{\pi}}{2}\left(\frac{1}{3}z^2 erf(z)+\frac{1}{2}z\cdot erf(z) +\frac{\sqrt{\pi}}{4} erf(z)\right)-\frac{c_2}{2}z^2 \left.\right._{2}F_{2}(1,1;-1/2,2;z^2)\right.\right.\\\label{eq:ComponentF2}
&\quad\quad\left.\left.+c_2\sqrt{\pi}z^2\left.\right._{2}F_{2}(1,1;1/2,2;z^2)+\frac{c_2\sqrt{\pi}}{2}z^2 \left.\right._{2}F_{2}(1,1;3/2,2;z^2)-\frac{c_2\sqrt{\pi}}{2} z^4+\frac{2c_2}{3}z^3\right.\right.\\\nonumber
&\quad\quad\left.\left.-\frac{c_2\sqrt{\pi}}{2}z^2+c_2z+\frac{1}{6}z^2 e^{-z^2}+\frac{5}{12}e^{-z^2}
\right]_{\xi_0}^\xi -\frac{\pi}{16}\int_{\xi_0}^\xi e^{z^2}(2z^2+1)^2 erf^2(z)dz
\right],
\end{align}
\begin{align}\nonumber
&F_3 = \mathbb{R}e\left[\frac{2n}{\lambda}\left[c_2\sqrt{\pi} erfi(z)+c_2e^{z^2}z(2z^2-1)-\frac{1}{4}z^2\left.\right._{2}F_{2}(1,1;-1/2,2;z^2)\right.\right.\\\label{eq:ComponentF3}
&\quad\quad\left.\left.+\frac{1}{2}z^2\left.\right._{2}F_{2}(1,1;1/2,2;z^2)+\frac{1}{4}z^2\left.\right._{2}F_{2}(1,1;3/2,2;z^2)-\frac{1}{4}z^4+\frac{1}{3}z^3-\frac{1}{4}z^2+\frac{1}{2}z\right]_{\xi_0}^\xi
\right].
\end{align}
The integral involving the error function
\begin{equation}\label{eq:integral}
I_4 := \int_{\xi_0}^\xi e^{z^2}(2z^2+1)^2 erf^2(z)dz
\end{equation}
appearing in equations (\ref{eq:ComponentF1}) and (\ref{eq:ComponentF2}) may be numerically approximated for the plotting of the surface. It may also be reduced to the numerical approximation of the integral
\begin{equation}
\int_{\xi_0}^\xi e^{z^2} erf^2(z)dz.
\end{equation}
We integrate by parts, setting
\begin{equation}
u = erf^2(z), \qquad dv = e^{z^2}(2z^2+1)^2dz.
\end{equation}
The integral $I_4$ (\ref{eq:integral}) becomes
\begin{align}\nonumber
&I_4 = erf^2(z)\left(\sqrt{\pi}erfi(z)+e^{z^2}z(2z^2-1)\right)\bigg\vert_{\xi_0}^\xi-\frac{1}{2\sqrt{\pi}}(4z^4-4z^2-1)erf(z)\bigg\vert_{\xi_0}^\xi\\\label{eq:temp:16}
&\qquad\qquad-\frac{1}{\pi}e^{-z^2}(2z^3+z)\bigg\vert_{\xi_0}^\xi-4\int_{\xi_0}^\xi e^{-z^2} erf(z)erfi(z)dz.
\end{align}
The integral
\begin{equation}\label{eq:I5}
I_5:=\int_{\xi_0}^\xi e^{-z^2} erf(z)erfi(z)dz
\end{equation}
appearing in equation (\ref{eq:temp:16}) may also be integrated by parts, setting
\begin{equation}
s = erfi(z),\qquad dt = e^{-z^2}erf(z)dz.
\end{equation}
The integral $I_5$ (\ref{eq:I5}) becomes
\begin{equation}
I_5 = \frac{\sqrt{\pi}}{4}erf^2(z)erfi(z)\bigg\vert_{\xi_0}^\xi-\frac{1}{2}\int_{\xi_0}^\xi e^{z^2} erf^2(z)dz,
\end{equation}
and the integral $I_4$ (\ref{eq:integral}) becomes
\begin{align}
&I_4 = \left[erf^2(z)\left(\sqrt{\pi}erfi(z)+e^{z^2}z(2z^2-1)\right)-\sqrt{\pi}erf^2(z)erfi(z)\right.\\\nonumber
&\quad\quad\qquad\left.-\frac{1}{2\sqrt{\pi}}(4z^4-4z^2-1)erf(z)-\frac{1}{\pi}e^{-z^2}(2z^3+z)\right]_{\xi_0}^\xi+2\int_{\xi_0}^\xi e^{z^2} erf^2(z)dz.
\end{align}
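As a sanity check, the integration-by-parts identity for $I_4$ may be verified numerically along a path in the complex plane. The following Python sketch (an illustrative verification, not part of the derivation; the endpoints and the number of quadrature nodes are arbitrary choices) compares both sides using a trapezoidal rule and the complex error functions from SciPy.

```python
import numpy as np
from scipy.special import erf, erfi


def path_integral(f, z0, z1, n=4001):
    """Trapezoidal rule for the integral of f along the segment from z0 to z1."""
    t = np.linspace(0.0, 1.0, n)
    z = z0 + (z1 - z0) * t
    v = f(z)
    return (z1 - z0) * (0.5 * v[0] + v[1:-1].sum() + 0.5 * v[-1]) / (n - 1)


def boundary(z):
    """Boundary term of the integration-by-parts identity for I_4."""
    return (erf(z) ** 2 * (np.sqrt(np.pi) * erfi(z) + np.exp(z ** 2) * z * (2 * z ** 2 - 1))
            - np.sqrt(np.pi) * erf(z) ** 2 * erfi(z)
            - (4 * z ** 4 - 4 * z ** 2 - 1) * erf(z) / (2 * np.sqrt(np.pi))
            - np.exp(-z ** 2) * (2 * z ** 3 + z) / np.pi)


z0, z1 = 0.0 + 0.0j, 0.8 + 0.3j  # arbitrary integration path in the complex plane
lhs = path_integral(lambda z: np.exp(z ** 2) * (2 * z ** 2 + 1) ** 2 * erf(z) ** 2, z0, z1)
rhs = (boundary(z1) - boundary(z0)
       + 2 * path_integral(lambda z: np.exp(z ** 2) * erf(z) ** 2, z0, z1))
print(abs(lhs - rhs))  # close to zero
```

Since all integrands are entire, the value of each integral is path-independent, so the straight segment suffices.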
In terms of the integrals (\ref{eq:Integrales}), the immersion formula (\ref{eq:53}) describing a minimal surface immersed in $\frak{su}(2)$ becomes
\begin{equation}\label{eq:FTilde_I1I2I3}
\tilde{F}(n; \lambda; z) = -\frac{i}{2}\left(\begin{array}{cc} I_3+ I_3^* &I_1 - I_2^* \\ & \\ -I_2+ I_1^* & -(I_3+ I_3^*) \\ \end{array}\right)\in \frak{su}(2),
\end{equation}
because $Tr(\tilde{F}) = 0$, $\tilde{F}^\dagger = -\tilde{F}$, and where $I_k^*$ denotes the complex conjugate of the integral $I_k$, $k = 1, 2, 3$. The expressions for the components of the surface $\tilde{F}\in \frak{su}(2)$ are rather long, so we omit them here. It suffices to substitute the integrals (\ref{eq:I1}), (\ref{eq:I2}) and (\ref{eq:I3}) into formula (\ref{eq:FTilde_I1I2I3}).
\subsection{Numerical representation of minimal surfaces describing $X_2^{(1)}$-Hermite\\polynomials}
Even though the $X_2^{(1)}$-Hermite XOPs are not defined for $n=1, 2$, we are able to construct the surfaces describing the behavior of the solutions of the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}) for these values of $n$. Indeed, the surface may be described by the holomorphic wavefunction $\Psi$ (the solution of the linear problem (\ref{eq:32})), which acts as the moving frame on the surface and is determined by the general solution (\ref{eq:GenSolDecomposed}), defined for all $n \in\mathbb{N}$. For $n=0$, the surface coincides with the plane $F_3\equiv0$. Figures \ref{fig:1} to \ref{fig:4} below show the evolution of the surface for $n=1, 2, 3$ and $7$. They were obtained with the Mathematica symbolic computation software by applying the Enneper-Weierstrass immersion formula (\ref{eq:F}) associated with the $X_2^{(1)}$-Hermite polynomials, whose components were calculated in (\ref{eq:ComponentF1}), (\ref{eq:ComponentF2}) and (\ref{eq:ComponentF3}). The integration constants and the parameter were fixed as $c_1=c_2=1$ and $\lambda=\sqrt{\pi}$, respectively. The integration was performed from $\xi_0 = 1+3i$ to $\xi = x+iy$, where $x\in[-1,1]$, $y\in[-1,1]$. The parameter $n$ is the one appearing in the complex $X_2^{(1)}$-Hermite ODE (\ref{eq:EDOXComplexe}). As $n$ grows, the surface expands, but its evolution suggests a global flattening phenomenon for the third component $F_3$ (notice the change of scale from one figure to another). A mirror symmetry about a plane $F_2\equiv C$, for some $C<0$, appears clearly in each image.
\newpage
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{HermiteN1.jpg}
\caption{Representation of the $X_2^{(1)}$-Hermite polynomials for $n=1$.}
\label{fig:1}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{HermiteN2.jpg}
\caption{Representation of the $X_2^{(1)}$-Hermite polynomials for $n=2$.}
\label{fig:2}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{HermiteN3.jpg}
\caption{Representation of the $X_2^{(1)}$-Hermite polynomials for $n=3$.}
\label{fig:3}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.65\textwidth]{HermiteN7.jpg}
\caption{Representation of the $X_2^{(1)}$-Hermite polynomials for $n=7$.}
\label{fig:4}
\end{figure}
\newpage
\noindent \textbf{Acknowledgements}\\~\\
V.C. and A.M.G. have each been partially supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). The authors would like to thank R. Conte ({\'E}cole Normale Sup{\'e}rieure de Cachan) and K. G{\'o}rska (H. Niewodnicza{\'n}ski Institute of Nuclear Physics) for helpful discussions on this topic.
\begin{appendix}
\section{Proof of Proposition \ref{th:GenSol}}\label{app:1}
We show that the series $\beta_n(z)$ (\ref{eq:Sol1}) is a non-polynomial solution of equation (\ref{eq:EDOXComplexe}) for all $n\in\mathbb{N}$.
\begin{preuve}\normalfont
Consider the following proposition:
\begin{equation}
P_1(n):\;\; \text{The function $\beta_n(z)$ (\ref{eq:Sol1}) is a solution of equation (\ref{eq:EDOXComplexe}) for all }
n\in\mathbb{N}.
\end{equation}
We proceed by induction. Multiplying equation (\ref{eq:EDOXComplexe}) by $(1+2z^2)$, we can equivalently express it as
\begin{equation}\label{eq:EDOXComplexeModif}
\left(1+2z^2\right)\omega''(z)+\left(-4z^3-10z\right)\omega'(z)+\left(2n+4nz^2\right)\omega(z) = 0.
\end{equation}
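Conversely, dividing by $(1+2z^2)$, equation (\ref{eq:EDOXComplexeModif}) takes the form
\begin{equation*}
\omega''(z)-2\left(z+\frac{4z}{1+2z^2}\right)\omega'(z)+2n\omega(z) = 0,
\end{equation*}
whose first-order coefficient is precisely the function appearing in the association (\ref{eq:Association}).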
\paragraph{\textbf{Case} $n = 0$.} Equation (\ref{eq:EDOXComplexeModif}) becomes
\begin{equation}\label{eq:EDOXComplexeModifZero}
\left(1+2z^2\right)\omega''(z)+\left(-4z^3-10z\right)\omega'(z) = 0.
\end{equation}
Let
\begin{equation}\label{eq:Delta1}
\Delta_1(k):=(-1)^k2^{k-1}((2(k-1))^2+1)\prod_{j=1}^{k-2}(1-2(1+j)).
\end{equation}
Then the series (\ref{eq:Sol1}) and its first and second-order derivatives take the form
\begin{align}\label{eq:H0}
\beta_0(z) &= 1+z+\frac{5}{3}z^3+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-1)!}z^{2k-1},\\\label{eq:H0Derivee}
\frac{d\beta_0}{dz}(z) &= 1+5z^2+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-2)!}z^{2k-2},\\\label{eq:H0DeriveeSeconde}
\frac{d^2\beta_0}{dz^2}(z) &= 10z+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-3)!}z^{2k-3}.
\end{align}
Substituting (\ref{eq:H0Derivee}) and (\ref{eq:H0DeriveeSeconde}) into the left-hand side (LHS) of equation (\ref{eq:EDOXComplexeModifZero}), we obtain
\begin{align}\nonumber
G_1(0;z) &= \left(1+2z^2\right)\cdot10z+\left(-4z^3-10z\right)\cdot\left(1+5z^2\right)+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-3)!}z^{2k-3}\\
&\quad+\sum_{k=3}^\infty\frac{2\Delta_1(k)}{(2k-3)!}z^{2k-1}-\sum_{k=3}^\infty\frac{4\Delta_1(k)}{(2k-2)!}z^{2k+1}-\sum_{k=3}^\infty\frac{10\Delta_1(k)}{(2k-2)!}z^{2k-1}.
\end{align}
In order to obtain powers corresponding to $(2k-1)$ in all series, we shift the summation index where necessary
\begin{align}\nonumber
G_1(0;z) &= \left(1+2z^2\right)\cdot10z+\left(-4z^3-10z\right)\cdot\left(1+5z^2\right)+\sum_{k=2}^\infty\frac{\Delta_1(k+1)}{(2k-1)!}z^{2k-1}\\\label{eq:TempP1}
&\quad+\sum_{k=3}^\infty\frac{2\Delta_1(k)}{(2k-3)!}z^{2k-1}-\sum_{k=4}^\infty\frac{4\Delta_1(k-1)}{(2k-4)!}z^{2k-1}-\sum_{k=3}^\infty\frac{10\Delta_1(k)}{(2k-2)!}z^{2k-1}.
\end{align}
Extracting the terms with $k\leq3$ from the series and combining them with the terms outside the series in equation (\ref{eq:TempP1}), we see that they cancel each other. Regrouping all the series, we get
\begin{align}\label{eq:G1}
G_1(0;z)=&\sum_{k=4}^\infty\left[\frac{\Delta_1(k+1)}{(2k-1)!}
+\frac{2\Delta_1(k)}{(2k-3)!}
-\frac{4\Delta_1(k-1)}{(2k-4)!}
-\frac{10\Delta_1(k)}{(2k-2)!} \right]z^{2k-1}.
\end{align}
Evaluating $\Delta_1$ from relation (\ref{eq:Delta1}), we obtain
\begin{align}
G_1(0;z) = \sum_{k=4}^\infty\left[ \frac{(-1)^k2^{k}\prod_{j=1}^{k-3}(1+2(1+j))}{(2k-4)!}\cdot\Delta_2(k) \right]z^{2k-1},
\end{align}
where
\begin{align}\nonumber
\Delta_2(k):&= - \frac{((2k)^2+1)(1-2(1+(k-2)))(1-2(1+(k-1)))}{(2k-3)(2k-2)(2k-1)} +\frac{((2(k-1))^2+1)(1-2(1+(k-2)))}{(2k-3)}\\\nonumber
&\qquad+((2(k-2))^2+1) -\frac{5((2(k-1))^2+1)(1-2(1+(k-2)))}{(2k-3)(2k-2)}\\\label{eq:Delta2}
&=0,
\end{align}
for all $k\geq4$. We conclude that $P_1(0)$ is true.
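The base case can also be checked numerically: truncating the series (\ref{eq:H0}) and evaluating the left-hand side of equation (\ref{eq:EDOXComplexeModifZero}) at a sample point yields a residual at round-off level. The following Python sketch (an illustrative check; the truncation order $K$ and the sample point are arbitrary choices) implements this.

```python
import math

def delta1(k):
    """The coefficient Delta_1(k), k >= 3, defined in the proof."""
    prod = 1
    for j in range(1, k - 1):  # j = 1, ..., k-2
        prod *= 1 - 2 * (1 + j)
    return (-1) ** k * 2 ** (k - 1) * ((2 * (k - 1)) ** 2 + 1) * prod

K = 25  # truncation order of the series
coeffs = [0.0] * (2 * K)
coeffs[0], coeffs[1], coeffs[3] = 1.0, 1.0, 5.0 / 3.0
for k in range(3, K):
    coeffs[2 * k - 1] = delta1(k) / math.factorial(2 * k - 1)

def horner(c, z):
    """Evaluate the polynomial with coefficient list c at z."""
    acc = 0.0
    for ci in reversed(c):
        acc = acc * z + ci
    return acc

def derivative(c):
    return [i * c[i] for i in range(1, len(c))]

z = 0.3
d1, d2 = derivative(coeffs), derivative(derivative(coeffs))
residual = ((1 + 2 * z ** 2) * horner(d2, z)
            + (-4 * z ** 3 - 10 * z) * horner(d1, z))
print(abs(residual))  # ~ 0 up to truncation and round-off
```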
\paragraph{\textbf{Induction hypothesis}.} Suppose that $P_1(n-1)$ is true for some $n\geq1$, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifNMoins1}
&\left(1+2z^2\right)\left(\beta_{n-1}(z)\right)''+\left(-4z^3-10z\right)\left(\beta_{n-1}(z)\right)'+2(n-1)\left(1+2z^2\right)\beta_{n-1}(z) =0.
\end{align}
We want to show that $P_1(n)$ is true, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifN}
&\left(1+2z^2\right)\left(\beta_{n}(z)\right)''+\left(-4z^3-10z\right)\left(\beta_{n}(z)\right)'+2n\left(1+2z^2\right)\beta_{n}(z) =0.
\end{align}
Subtracting the LHS of (\ref{eq:EDOXComplexeModifN}) from the LHS of (\ref{eq:EDOXComplexeModifNMoins1}), we need to show the equality
\begin{align}\label{eq:EDOAMONTRER}
& \left(1+2z^2\right)\left[\left(\beta_{n-1}\right)''-\left(\beta_{n}\right)''\right](z)+\left(-4z^3-10z\right)\left[\left(\beta_{n-1}\right)'-\left(\beta_{n}\right)'\right](z)\\\nonumber
&\qquad\qquad\qquad\qquad\qquad+2n(1+2z^2)\left[\beta_{n-1}-\beta_{n}\right](z) -2(1+2z^2)\beta_{n-1}(z)=0.
\end{align}
Let
\begin{align}\label{eq:Delta7}
\Delta_3(k):&=(n-1)(n-((2k-1)^2+2))\prod_{j=1}^{k-2}(n-(2(1+j)+1)),\\\label{eq:Delta8}
\Delta_4(k):&=n(n-((2k-1)^2+1))\prod_{j=1}^{k-2}(n-2(1+j)),\\\label{eq:Delta9}
\Delta_{5}(k):&=(n-((2(k-1))^2+2))\prod_{j=1}^{k-2}(n-2(1+j)),\\\label{eq:Delta10}
\Delta_{6}(k):&=(n-((2(k-1))^2+1))\prod_{j=1}^{k-2}(n-2(1+j)+1).
\end{align}
Then we get
\begin{align}\nonumber
&\left[\beta_{n-1}-\beta_{n}\right](z) = z^2+\frac{1}{3}z^3+\frac{11-2n}{6}z^4\\\label{eq:Diff1}
&\qquad\qquad\qquad\quad+\sum_{k=3}^\infty\left[\frac{(-1)^k2^k}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k}+\frac{(-1)^{k+1}2^{k-1}}{(2k-1)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-1}\right],\\\nonumber
&\left[\left(\beta_{n-1}\right)'-\left(\beta_{n}\right)'\right](z)=2z+z^2+\frac{22-4n}{3}z^3\\\label{eq:Diff2}
&\qquad\qquad\qquad\quad+\sum_{k=3}^\infty\left[ \frac{(-1)^k2^k}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k-1} + \frac{(-1)^{k+1}2^{k-1}}{(2k-2)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-2} \right],\\\nonumber
&\left[\left(\beta_{n-1}\right)''-\left(\beta_{n}\right)''\right](z)=2+2z+(22-4n)z^2\\\label{eq:Diff3}
&\qquad\qquad\qquad\quad+\sum_{k=3}^\infty\left[\frac{(-1)^k2^k}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k-2}+\frac{(-1)^{k+1}2^{k-1}}{(2k-3)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-3}\right].
\end{align}
Substituting (\ref{eq:Diff1})-(\ref{eq:Diff3}) into the LHS of equation (\ref{eq:EDOAMONTRER}), we obtain
\begin{align}\nonumber
G_1(n;z) &=\left(1+2z^2\right)\left(2+2z+(22-4n)z^2\right)+\left(-4z^3-10z\right) \left(2z+z^2+\frac{22-4n}{3}z^3\right)\\\nonumber
&\qquad+2n(1+2z^2)\left(z^2+\frac{1}{3}z^3+\frac{11-2n}{6}z^4\right)\\\nonumber
&\qquad-2\left(1+2z^2\right)\left(1+z-(n-1)z^2-\frac{n-6}{3}z^3+\frac{(n-1)(n-11)}{6}z^4\right)\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^k}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k-2} +\frac{(-1)^{k+1}2^{k-1}}{(2k-3)!}\left(\Delta_5 - \Delta_6\right)z^{2k-3}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k} +\frac{(-1)^{k+1}2^{k}}{(2k-3)!}\left(\Delta_5 - \Delta_6\right)z^{2k-1}\right]\\\label{eq:TempP4}
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+2}}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k} +\frac{(-1)^{k}2^{k+1}}{(2k-2)!}\left(\Delta_5 - \Delta_6\right)z^{2k+1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{5(-1)^{k+1}2^{k+1}}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k} +\frac{5(-1)^{k}2^{k}}{(2k-2)!}\left(\Delta_5 - \Delta_6\right)z^{2k-1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}n}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k} +\frac{(-1)^{k+1}2^{k}n}{(2k-1)!}\left(\Delta_5 - \Delta_6\right)z^{2k-1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+2}n}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k+2} +\frac{(-1)^{k+1}2^{k+1}n}{(2k-1)!}\left(\Delta_5 - \Delta_6\right)z^{2k+1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+1}}{(2k)!}\Delta_{3}z^{2k} +\frac{(-1)^{k}2^{k}}{(2k-1)!}\Delta_{5}z^{2k-1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+2}}{(2k)!}\Delta_{3}z^{2k+2} +\frac{(-1)^{k}2^{k+1}}{(2k-1)!}\Delta_{5}z^{2k+1}\right].
\end{align}
In order to obtain powers corresponding to $(2k)$ and $(2k-1)$ in all series, we shift the summation index where necessary. Extracting the terms with $k\leq3$ from the series and combining them with the terms outside the series in equation (\ref{eq:TempP4}), we see that they cancel each other. Regrouping all the series, we obtain
\begin{align}\nonumber
G_1(n;z) &=\sum_{k=4}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-3)!}\left(\prod_{j=1}^{k-3}\left( n-(2(1+j)+1)\right)\Delta_{7}-\prod_{j=1}^{k-3}\left( n-2(1+j)\right)\Delta_{8}\right)z^{2k}\right. \\
& \qquad\quad+\left.\frac{(-1)^{k+1}2^{k}}{(2k-4)!}\left(\prod_{j=1}^{k-3}\left( n-2(1+j)\right)\Delta_{9}-\prod_{j=1}^{k-3}\left( n-(2(1+j)+1)\right)\Delta_{10}\right) z^{2k-1} \right],
\end{align}
where
\begin{align}\nonumber
\Delta_{7}(k):&=-\frac{(n-1)(n-((2k+1)^2+2))(n-(2(k-1)+1))(n-(2k+1))}{(2k-2)(2k-1)(2k)}\\\nonumber
&+(n-1)(n-((2k-1)^2+2))(n-(2(k-1))+1)\\\label{eq:Delta7Prime}
&\qquad\qquad\qquad\cdot\left(\frac{1}{(2k-2)}-\frac{5}{(2k-2)(2k-1)}+\frac{n}{(2k-2)(2k-1)(2k)}\right)\\\nonumber
&-\frac{(n-1)(n-((2k-1)^2+2))(n-(2(k-1)+1))}{(2k-2)(2k-1)(2k)}\\\nonumber
&+\frac{(n-1)(n-((2k-3)^2+2))}{(2k-2)}+(n-1)(n-((2k-3)^2+2))\left(1-\frac{n}{(2k-2)}\right)\\\nonumber
&=0,\\
\nonumber
\Delta_{8}(k):&=-\frac{n(n-((2k+1)^2+1))(n-2(k-1))(n-2k)}{(2k-2)(2k-1)(2k)}\\\nonumber
&+n(n-((2k-1)^2+1))(n-2(k-1))\\\label{eq:Delta8Prime}
&\qquad\qquad\qquad\cdot\left(\frac{1}{(2k-2)}-\frac{5}{(2k-2)(2k-1)}+\frac{n}{(2k-2)(2k-1)(2k)}\right)\\\nonumber
&+n(n-((2k-3)^2+1))\left(1-\frac{n}{(2k-2)}\right)\\\nonumber
&=0,\\
\nonumber
\Delta_{9}(k):&=-\frac{(n-((2k)^2+2))(n-2(k-1))(n-2k)}{(2k-3)(2k-2)(2k-1)}\\\nonumber
&+(n-((2(k-1))^2+2))(n-2(k-1))\\\label{eq:Delta13}
&\qquad\qquad\qquad\cdot\left(\frac{1}{(2k-3)}-\frac{5}{(2k-3)(2k-2)}+\frac{n}{(2k-3)(2k-2)(2k-1)}\right)\\\nonumber
&+(n-((2(k-2))^2+2))\left(1-\frac{n}{(2k-3)}\right)\\\nonumber
&-\frac{(n-((2(k-1))^2+2))(n-2(k-1))}{(2k-3)(2k-2)(2k-1)}+\frac{(n-((2(k-2))^2+2))}{(2k-3)}\\\nonumber
&=0,\\
\nonumber
\Delta_{10}(k):&=-\frac{(n-((2k)^2+1))(n-2(k-1)+1)(n-2k+1)}{(2k-3)(2k-2)(2k-1)}\\\nonumber
&+(n-((2(k-1))^2+1))(n-2(k-1)+1)\\\nonumber
&\qquad\qquad\qquad\cdot\left(\frac{1}{(2k-3)}-\frac{5}{(2k-3)(2k-2)}+\frac{n}{(2k-3)(2k-2)(2k-1)}\right)\\\label{eq:Delta14}
&+(n-((2(k-2))^2+1))\left(1-\frac{n}{(2k-3)}\right)\\\nonumber
&=0,
\end{align}
for all $k\geq4$. We conclude that the induction hypothesis implies that $P_1(n)$ is true. By construction, the solution $\beta_n(z)$ (\ref{eq:Sol1}) is non-polynomial, because the coefficients $c_{2k}(n)$ (\ref{eq:coeffPair}) and $c_{2k-1}(n)$ (\ref{eq:coeffImpair}), associated with even and odd powers of $z$, respectively, are polynomials of the parameter $n$, possessing no root on the gap sequence (see Table \ref{tab:2}), which completes the proof of Proposition \ref{th:GenSol}.\\
$\left.\right.\hfill\square$
\end{preuve}
\newpage
\section{Proof of Proposition \ref{th:GenSol1}}\label{app:2}
We show that the series $\mu_{n}(z)$ (\ref{eq:mu}) is a polynomial solution of equation (\ref{eq:EDOXComplexe}) for all $n\in2\mathbb{N}\backslash\{2\}$ and that the series $\nu_{n}(z)$ (\ref{eq:Sol2}) is a polynomial solution of equation (\ref{eq:EDOXComplexe}) for all $n\in(2\mathbb{N}-1)\backslash\{1\}$.
\begin{preuve}\normalfont
Consider the following proposition:
\begin{equation}
P_2(n):\;\; \text{The function $\mu_n(z)$ (\ref{eq:mu}) is a solution of equation (\ref{eq:EDOXComplexe}) for all }
n\in\mathbb{N}.
\end{equation}
We proceed by induction.
\paragraph{\textbf{Case} $n = 0$.}
The series $\mu_n(z)$ (\ref{eq:mu}) and its first and second-order derivatives take the form
\begin{align}
\mu_0(z) = 1, \qquad
\frac{d\mu_0}{dz}(z) = 0,\qquad
\frac{d^2\mu_0}{dz^2}(z) = 0,
\end{align}
so we see immediately that $\mu_0(z)$ is a solution of equation (\ref{eq:EDOXComplexeModifZero}). We conclude that $P_2(0)$ is true.
\paragraph{\textbf{Induction hypothesis}.} Suppose that $P_2(n-1)$ is true for some $n\geq1$, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifNMoins1mu}
&\left(1+2z^2\right)\left(\mu_{n-1}(z)\right)''+\left(-4z^3-10z\right)\left(\mu_{n-1}(z)\right)'+2(n-1)\left(1+2z^2\right)\mu_{n-1}(z) =0.
\end{align}
We want to show that $P_2(n)$ is true, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifNmu}
&\left(1+2z^2\right)\left(\mu_{n}(z)\right)''+\left(-4z^3-10z\right)\left(\mu_{n}(z)\right)'+2n\left(1+2z^2\right)\mu_{n}(z) =0.
\end{align}
Subtracting the LHS of (\ref{eq:EDOXComplexeModifNmu}) from the LHS of (\ref{eq:EDOXComplexeModifNMoins1mu}), we need to show the equality
\begin{align}\label{eq:EDOAMONTRERmu}
& \left(1+2z^2\right)\left[\left(\mu_{n-1}\right)''-\left(\mu_{n}\right)''\right](z)+\left(-4z^3-10z\right)\left[\left(\mu_{n-1}\right)'-\left(\mu_{n}\right)'\right](z)\\\nonumber
&\qquad\qquad\qquad\qquad\qquad\qquad\qquad+2n(1+2z^2)\left[\mu_{n-1}-\mu_{n}\right](z) -2(1+2z^2)\mu_{n-1}(z)=0.
\end{align}
We get
\begin{align}\label{eq:Diff1mu}
&\left[\mu_{n-1}-\mu_{n}\right](z) = z^2+\frac{11-2n}{6}z^4+\sum_{k=3}^\infty\left[\frac{(-1)^k2^k}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k}\right],\\\label{eq:Diff2mu}
&\left[\left(\mu_{n-1}\right)'-\left(\mu_{n}\right)'\right](z)=2z+\frac{22-4n}{3}z^3+\sum_{k=3}^\infty\left[ \frac{(-1)^k2^k}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k-1} \right],\\\label{eq:Diff3mu}
&\left[\left(\mu_{n-1}\right)''-\left(\mu_{n}\right)''\right](z)=2+(22-4n)z^2+\sum_{k=3}^\infty\left[\frac{(-1)^k2^k}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k-2}\right].
\end{align}
Substituting (\ref{eq:Diff1mu})-(\ref{eq:Diff3mu}) into the LHS of equation (\ref{eq:EDOAMONTRERmu}), we obtain
\begin{align}\nonumber
&G_2(n;z) =\left(1+2z^2\right)\left(2+(22-4n)z^2\right)+\left(-4z^3-10z\right) \left(2z+\frac{22-4n}{3}z^3\right)\\\nonumber
&\qquad+2n(1+2z^2)\left(z^2+\frac{11-2n}{6}z^4\right)-2\left(1+2z^2\right)\left(1-(n-1)z^2+\frac{(n-1)(n-11)}{6}z^4\right)\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^k}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k-2} \right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-2)!}\left(\Delta_3 - \Delta_4\right)z^{2k} \right]\\\label{eq:TempP4mu}
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+2}}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k} \right]+\sum_{k=3}^\infty\left[ \frac{5(-1)^{k+1}2^{k+1}}{(2k-1)!}\left(\Delta_3 - \Delta_4\right)z^{2k} \right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}n}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k} \right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+2}n}{(2k)!}\left(\Delta_3 - \Delta_4\right)z^{2k+2} \right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+1}}{(2k)!}\Delta_{3}z^{2k} \right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+2}}{(2k)!}\Delta_{3}z^{2k+2} \right].
\end{align}
In order to obtain powers corresponding to $(2k)$ in all series, we perform a translation of the summation variable where necessary. Extracting the terms of degree $k\leq3$, and considering the terms outside of a series in equation (\ref{eq:TempP4mu}), we see that they cancel each other. Regrouping all series, we obtain
\begin{align}\nonumber
&G_2(n;z) =\sum_{k=4}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-3)!}\left(\prod_{j=1}^{k-3}\left( n-(2(1+j)+1)\right)\Delta_{7}-\prod_{j=1}^{k-3}\left( n-2(1+j)\right)\Delta_{8}\right)z^{2k} \right],
\end{align}
where we already showed by the relations (\ref{eq:Delta7Prime}) and (\ref{eq:Delta8Prime}) that $\Delta_7(k)\equiv\Delta_8(k)\equiv0$ for all $k\geq4$. We conclude that the induction hypothesis implies that $P_2(n)$ is true. By construction, the coefficients $c_{2k}(n)$ of the series $\mu_n(z)$ (\ref{eq:mu}) possess only even roots $\lambda_p(k)$, from which $n=2$ is excluded, as illustrated in Table \ref{tab:2} and by the polynomials $p_k(n)$ (\ref{eq:pk}). Therefore the only polynomial cases are $\mu_{2l}$, where $l\in\{0,2,3,4,\ldots\}$.
\\~\\
Consider the following proposition:
\begin{equation}
P_3(n):\;\; \text{The function $\nu_n(z)$ (\ref{eq:Sol2}) is a solution of equation (\ref{eq:EDOXComplexe}) for all }
n\in\mathbb{N}.
\end{equation}
\paragraph{\textbf{Case} $n = 0$.} The series $\nu_n(z)$ (\ref{eq:Sol2}) and its first- and second-order derivatives take the form
\begin{align}\label{eq:H0Prime}
\nu_0(z) &= z+\frac{5}{3}z^3+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-1)!}z^{2k-1},\\\label{eq:H0DeriveePrime}
\frac{d\nu_0}{dz}(z) &= 1+5z^2+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-2)!}z^{2k-2},\\\label{eq:H0DeriveeSecondePrime}
\frac{d^2\nu_0}{dz^2}(z) &= 10z+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-3)!}z^{2k-3}.
\end{align}
Substituting (\ref{eq:H0DeriveePrime}) and (\ref{eq:H0DeriveeSecondePrime}) into equation (\ref{eq:EDOXComplexeModifZero}), we obtain
\begin{align}\nonumber
G_3(0;z)&=\left(1+2z^2\right)\cdot10z+\left(-4z^3-10z\right)\cdot\left(1+5z^2\right)+\sum_{k=3}^\infty\frac{\Delta_1(k)}{(2k-3)!}z^{2k-3}\\
&+\sum_{k=3}^\infty\frac{2\Delta_1(k)}{(2k-3)!}z^{2k-1}-\sum_{k=3}^\infty\frac{4\Delta_1(k)}{(2k-2)!}z^{2k+1}-\sum_{k=3}^\infty\frac{10\Delta_1(k)}{(2k-2)!}z^{2k-1}.
\end{align}
In order to obtain powers corresponding to $(2k-1)$ in all series, we perform a translation of the summation variable where necessary:
\begin{align}\nonumber
G_3(0;z)&=\left(1+2z^2\right)\cdot10z+\left(-4z^3-10z\right)\cdot\left(1+5z^2\right)+\sum_{k=2}^\infty\frac{\Delta_1(k+1)}{(2k-1)!}z^{2k-1}\\\label{eq:TempP1Prime}
&+\sum_{k=3}^\infty\frac{2\Delta_1(k)}{(2k-3)!}z^{2k-1}-\sum_{k=4}^\infty\frac{4\Delta_1(k-1)}{(2k-4)!}z^{2k-1}-\sum_{k=3}^\infty\frac{10\Delta_1(k)}{(2k-2)!}z^{2k-1}.
\end{align}
Extracting the terms of degree $k\leq3$, and considering the terms outside of a series in equation (\ref{eq:TempP1Prime}), we see that they cancel each other. Regrouping all series, we get
\begin{align}
&G_3(0;z)=\sum_{k=4}^\infty\left[\frac{\Delta_1(k+1)}{(2k-1)!}
+\frac{2\Delta_1(k)}{(2k-3)!}
-\frac{4\Delta_1(k-1)}{(2k-4)!}
-\frac{10\Delta_1(k)}{(2k-2)!} \right]z^{2k-1}.
\end{align}
Evaluating $\Delta_1$ from relation (\ref{eq:Delta1}), we obtain
\begin{align}
G_3(0;z)=\sum_{k=4}^\infty\left[ \frac{(-1)^k2^{k}\prod_{j=1}^{k-3}(1+2(1+j))}{(2k-4)!}\cdot\Delta_2(k) \right]z^{2k-1}.
\end{align}
We already showed by the relation (\ref{eq:Delta2}) that $\Delta_2(k)\equiv0$ for all $k\geq4$. We conclude that $P_3(0)$ is true.
\paragraph{\textbf{Induction hypothesis}.} Suppose that $P_3(n-1)$ is true for some $n\geq1$, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifNMoins1Prime}
&\left(1+2z^2\right)\left(\nu_{n-1}(z)\right)''+\left(-4z^3-10z\right)\left(\nu_{n-1}(z)\right)'+2(n-1)\left(1+2z^2\right)\nu_{n-1}(z) =0.
\end{align}
We want to show that $P_3(n)$ is true, \textit{i.e.}
\begin{align}\label{eq:EDOXComplexeModifNPrime}
&\left(1+2z^2\right)\left(\nu_{n}(z)\right)''+\left(-4z^3-10z\right)\left(\nu_{n}(z)\right)'+2n\left(1+2z^2\right)\nu_{n}(z) =0.
\end{align}
Subtracting the left-hand sides (LHS) of (\ref{eq:EDOXComplexeModifNMoins1Prime}) and (\ref{eq:EDOXComplexeModifNPrime}), we need to show the equality
\begin{align}\label{eq:EDOAMONTRERPrime}
&\left(1+2z^2\right)\left[\left(\nu_{n-1}\right)''-\left(\nu_{n}\right)''\right](z)+\left(-4z^3-10z\right)\left[\left(\nu_{n-1}\right)'-\left(\nu_{n}\right)'\right](z)\\\nonumber
&\qquad\qquad\qquad\qquad\qquad+2n\left(1+2z^2\right)\left[\nu_{n-1}-\nu_{n}\right](z) -2(1+2z^2)\nu_{n-1}(z)=0.
\end{align}
We get
\begin{align}\label{eq:Diff1Prime}
&\left[\nu_{n-1}-\nu_{n}\right](z) =\frac{1}{3}z^3+\sum_{k=3}^\infty\left[\frac{(-1)^{k+1}2^{k-1}}{(2k-1)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-1}\right],\\\label{eq:Diff2Prime}
&\left[\left(\nu_{n-1}\right)'-\left(\nu_{n}\right)'\right](z)=z^2+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k-1}}{(2k-2)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-2} \right],\\\label{eq:Diff3Prime}
&\left[\left(\nu_{n-1}\right)''-\left(\nu_{n}\right)''\right](z)=2z+\sum_{k=3}^\infty\left[\frac{(-1)^{k+1}2^{k-1}}{(2k-3)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-3}\right].
\end{align}
Substituting (\ref{eq:Diff1Prime})-(\ref{eq:Diff3Prime}) into the LHS of equation (\ref{eq:EDOAMONTRERPrime}), we obtain
\begin{align}\nonumber
&G_3(n;z) = \left(1+2z^2\right)\cdot2z+\left(-4z^3-10z\right) \cdot z^2+2n(1+2z^2)\cdot \frac{1}{3}z^3-2\left(1+2z^2\right)\left(z-\frac{n-6}{3}z^3\right)\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k-1}}{(2k-3)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-3}\right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k}}{(2k-3)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-2)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k+1}\right]+\sum_{k=3}^\infty\left[ \frac{5(-1)^{k}2^{k}}{(2k-2)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-1}\right]\\\nonumber
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k}n}{(2k-1)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k-1}\right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k+1}2^{k+1}n}{(2k-1)!}\left(\Delta_5 - \Delta_{6}\right)z^{2k+1}\right]\\\label{eq:TempP4Prime}
&\qquad+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k}}{(2k-1)!}\Delta_{5}z^{2k-1}\right]+\sum_{k=3}^\infty\left[ \frac{(-1)^{k}2^{k+1}}{(2k-1)!}\Delta_{5}z^{2k+1}\right].
\end{align}
In order to obtain powers corresponding to $(2k-1)$ in all series, we perform a translation of the summation variable where necessary. Extracting the terms of degree $k\leq3$, and considering the terms outside of a series in (\ref{eq:TempP4Prime}), we see that they cancel each other. Regrouping all series, we obtain
\begin{align}
&G_3(n;z) =\sum_{k=4}^\infty\left[ \frac{(-1)^{k+1}2^{k}}{(2k-4)!}\left(\prod_{j=1}^{k-3}\left( n-2(1+j)\right)\Delta_{9}-\prod_{j=1}^{k-3}\left( n-(2(1+j)+1)\right)\Delta_{10}\right) z^{2k-1} \right],
\end{align}
where we already showed by the relations (\ref{eq:Delta13}) and (\ref{eq:Delta14}) that $\Delta_{9}(k)\equiv\Delta_{10}(k)\equiv0$ for all $k\geq4$. We conclude that the induction hypothesis implies that $P_3(n)$ is true.
By construction, the coefficients $\tilde{c}_k(n)$ of the series $\nu_n(z)$ (\ref{eq:Sol2}) possess only odd roots $\lambda_q(k)$, from which $n=1$ is excluded, as illustrated in Table \ref{tab:2} and by the polynomials $q_k(n)$ (\ref{eq:qk}). Therefore the only polynomial cases are $\nu_{2l-1}$, where $l\geq2$, which completes the proof of Proposition \ref{th:GenSol1}.\\
$\left.\right.\hfill\square$
\end{preuve}
\newpage
\end{appendix}
\bibliographystyle{spmpsci}
\section{Introduction}
\label{sec:introduction}
\noindent Multiphase simulations are in high demand in both industry and science, especially in the field of process technology, where complex reactions and phase transitions need to be calculated.
Exemplary applications are liquid-solid, gas-solid and gas-liquid reactors \cite{reschetilowski}, phase separators and transport units \cite{sattler, bohnet}.
Today, computational fluid dynamics (CFD) eases the design and optimization of such processes.
Three different strategies for the simulation of complex flows akin to the above have been established, namely volume of fluid (VOF) methods, discrete element methods (DEM), and Eulerian multiphase methods \cite{hiltunen}.
Whereas VOF methods track the phase interface in detail, the DEM approach calculates paths of discrete particles.
The Eulerian multiphase simulation schemes typically consider each phase as continuous and solve mass and momentum conservation equations for each of them.
The coupling between phases is realized via volume averaging of the phase flow variables.
The resulting volume averaged Navier\textendash Stokes equations (VANSE) \cite{GIDASPOW1994337} involve also the phase interaction forces in the momentum equation.
Besides multiphase flows, the VANSE are suitable for the modeling of porous flows \cite{zhu}.
In comparison to VOF, the Eulerian methods require lower computational resources in general.
Similarly, the latter outperform the DEM if the number of particles reaches billions or more.
The Navier\textendash Stokes equations (NSE) and VANSE can be solved numerically in the discretized form with the finite difference method (FDM) \cite{PEPIOT2012104}, finite element method (FEM) or finite volume method (FVM) \cite{moukalled} on the macroscopic level or with the lattice Boltzmann methods (LBM) \cite{kruger}, which are based on mesoscopic kinetic theory \cite{hanel2004molekulare}.
In LBM, the fluid is considered as an ensemble of colliding and streaming particles.
The state of particles is described by a discretized particle distribution function (population) \textemdash the probability of the particle to be located at the regarded coordinates in the phase space.
The equilibrium population is the Maxwellian distribution based on the equation of state.
The collision and streaming of populations is described by a simplified version of the Boltzmann equation.
Taking moments of the populations in one lattice cell yields the macroscopic quantities density, velocity, and pressure.
Through the Chapman\textendash Enskog (CE) expansion \cite{li} or limit consistency \cite{simonis2022limit}, the lattice Boltzmann equation can be linked to the NSE.
The most prolific feature of LBM is the suitability for parallelization due to explicitly local calculation of populations.
Meanwhile, LBM has been found to provide advanced capabilities for the parallel simulation of turbulent flows \cite{simonis2022temporal,simonis2021linear,haussmann2019direct}, advection\textendash diffusion transport \cite{simonis2020relaxation,dapelo2021lattice-boltzmann}, and more specific photobioreactors \cite{mink2021comprehensive}, Flettner rotors \cite{simonis2022forschungsnahe} or Coriolis mass flow meters \cite{haussmann2021fluid-structure}.
As an example of the efficiency of the LBM, the comparison between the open-source software packages OpenLB \cite{krause,kummerlander2022olb15} and OpenFOAM shows a 32 times faster computation time for the former in the in-cylinder flow test \cite{krause, tuprints}.
Particular LBM for the solution of VANSE were developed by several authors.
The ansatz of Guo et al. \cite{guo} for flows through porous media is a discretization of the Darcy\textendash Lapwood\textendash Brinkman equation.
Unfortunately, this realization is only valid for temporally and spatially constant void fractions.
Blais et al.~\cite{blais} proposed a scheme, which is based on the method of moments, where first the population moments necessary for the VANSE are chosen and after that the equilibrium distribution is composed.
The volume fraction enters only the zeroth population, which makes the pressure calculation more stable but limits the applicability of this model to volume fractions above $0.5$.
This model fits the majority of porous flows, but is not universally applicable to all multiphase flows.
Although Höcker et al.~\cite{hocker} and Maier et al.~\cite{Maier2021_1000132643} correct the zeroth moment on the lattice Boltzmann level, the CE expansion of this method is not fulfilled in the case of strongly varying local volume fractions.
The simplest and most uniform VANSE LBM is suggested by Zhang et al.~\cite{zhang}.
The method fits cases with temporally and spatially varying volume fractions except for the pressure distribution.
To the knowledge of the present authors, the pressure discrepancy in \cite{zhang} is due to an inconsistent interpretation of the zeroth moment in the CE expansion performed there.
This in turn leads to a density calculation which alters the pressure correction forces and the pressure itself.
Based on the preceding approaches, the present work proposes a consistent way of the numerical VANSE solution with lattice Boltzmann methods for one, two, and three dimensions.
The paper is structured as follows.
First, the principles of VANSE and the corresponding LBM scheme are derived in Section \ref{sec:meth}.
In particular, the novel population moments are presented and locally varying void fractions are taken into account.
In Section \ref{sec:numerics}, the validation of the new correction is performed on stationary and transient examples with spatially changing volume fractions between $0.1$ and $0.9$.
The numerical results suggest a second order convergence of flow velocity and pressure.
Section \ref{sec:conc} draws conclusions and suggests future research.
At last, the CE expansion, formally proving the approximation of the VANSE with the present LBM up to higher order terms, is detailed in \ref{sec:appendix}.
\section{Methodology}\label{sec:meth}
\subsection{Volume averaged Navier\textendash Stokes equations}
\noindent If subgrid particles are contained in the regarded control volume, any quantity of a fluid phase can be adjusted to refer to the whole volume, which also includes these particles.
Below, this adjustment is called volume averaging, denoted with $\langle \cdot \rangle$, and can be written as follows for any fluid quantity $q^{\flat}$, where $\flat$ indicates the corresponding phase.
Let \(V\) denote the overall volume and \(V^{\flat}\) the volume which is occupied by phase $\flat$, hence
\begin{linenomath}\begin{align}
\sum\limits_{\flat} V^\flat = V.
\end{align}\end{linenomath}
The ratio of these volumes is defined via the respective void fraction
\begin{linenomath}\begin{align}
\phi^{\flat} &= \frac{V^{\flat}}{V}.
\end{align}\end{linenomath}
In the following, volume averaged scalars are denoted with $\tilde{\cdot}$ and volume averaged vectors with $\overline{\cdot}$.
Thus, for scalars \(q\) and vectors \(\bm{s}\) we define
\begin{linenomath}\begin{align}
\phi^{\flat} \tilde{q^{\flat}} &= \langle q^{\flat} \rangle \equiv \frac{1}{V} \int\limits_{V^{\flat}} q^{\flat} \,\mathrm{d}V, \\
\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{s}^{\flat}} &= \langle \rho^{\flat} \bm{s}^{\flat} \rangle.
\end{align}\end{linenomath}
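For illustration, the averaging relations above can be sketched numerically by sampling one control volume with equally sized subvolumes. The function name and arguments below are hypothetical, not part of the model:

```python
def volume_average(q, is_fluid):
    """Approximate phi, <q>, and the intrinsic average q_tilde on one control
    volume sampled by equally sized subvolumes.

    q        -- pointwise values of the fluid quantity at the sample points
    is_fluid -- 1 if the sample point lies in the fluid phase, 0 otherwise
    """
    n = len(q)
    phi = sum(is_fluid) / n                               # void fraction V^b / V
    avg = sum(qi for qi, m in zip(q, is_fluid) if m) / n  # <q> = (1/V) int_{V^b} q dV
    q_tilde = avg / phi                                   # defined via phi * q_tilde = <q>
    return phi, avg, q_tilde
```

For instance, `volume_average([2.0, 4.0, 6.0, 8.0], [1, 1, 0, 0])` yields $\phi^{\flat}=0.5$, $\langle q\rangle=1.5$, and $\tilde{q}=3.0$, so that $\phi^{\flat}\tilde{q}$ recovers $\langle q\rangle$.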
By volume averaging all terms of the NSE, the VANSE are deduced \cite{hiltunen, GIDASPOW1994337}
\begin{linenomath}\begin{align}
\partial_t (\phi^{\flat} \tilde{\rho^{\flat}})+ \bm{\nabla} \cdot (\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{u}^{\flat}}) & = 0, \label{eq:vansEquMass} \\
\partial_t (\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{u}^{\flat}})+ \bm{\nabla} \cdot (\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{u}^{\flat}} \overline{\bm{u}^{\flat}})+ \phi^{\flat} \bm{\nabla} \tilde{p} & = \nu \bm{\nabla} \cdot (\phi^{\flat} \tilde{\rho^{\flat}} (\bm{\nabla} \overline{\bm{u}^{\flat}} + \overline{\bm{u}^{\flat}} \bm{\nabla})) + \phi^{\flat} \overline{\bm{F}^{\flat}}, \label{eq:vansEquMom}
\end{align}\end{linenomath}
where $\tilde{\rho^{\flat}}$ and $\overline{\bm{u}^{\flat}}$ denote the volume averaged versions of the fluid density and the velocity, respectively.
The pressure $\tilde{p}$ is common for all phases in the system.
\subsection{Lattice Boltzmann scheme for volume averaged Navier\textendash Stokes equations}
\noindent In the following, equations \eqref{eq:vansEquMass} and \eqref{eq:vansEquMom} are approximated with an LBM based on Bhatnagar\textendash Gross\textendash Krook (BGK) collision \cite{bgk} and Guo et al. forcing \cite{guozhaoli} on two- and three-dimensional \(D2Q9\) and \(D3Q27\) lattices.
One-dimensional stencils are also discussed but not focused here.
The discrete velocity sets are visualized in Figures \ref{fig:D2Q9_lattice} and \ref{fig:D3Q27_lattice}.
The corresponding discretization parameters are given in Tables \ref{tab:latt_par} and \ref{tab:latt_3d}, respectively.
\begin{figure}[h]
\centering
\includegraphics[scale=1]{vans-figure0.pdf}
\caption{Schematic view of the \(D2Q9\) discrete velocity set.}
\label{fig:D2Q9_lattice}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=1]{vans-figure1.pdf}
\caption{Schematic view of the \(D3Q27\) discrete velocity set.}
\label{fig:D3Q27_lattice}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{ccc}
\toprule
Directions \(i\) & Normalized lattice velocity $\bm{\xi}_{i}$ & Lattice weights $w_{i}$ \\
\midrule
$0$ & $(0, 0)$ & $4/9$ \\
$1,2,3,4$ & $(\pm 1, 0), (0, \pm 1)$ & $1/9$ \\
$5,6,7,8$ & $(\pm 1, \pm 1)$ & $1/36$ \\
\bottomrule
\end{tabular}
\caption{Lattice discretization parameters of \(D2Q9\).}
\label{tab:latt_par}
\end{table}
\begin{table}[h]
\centering
\begin{tabular}{ccc}
\toprule
Directions \(i\) & Normalized lattice velocity $\bm{\xi}_{i}$ & Lattice weights $w_{i}$ \\
\midrule
$0$ & $(0, 0)$ & $8/27$ \\
$1,2,\ldots,6$ & $(\pm 1, 0, 0), (0, \pm 1, 0), (0, 0, \pm 1)$ & $2/27$ \\
$7,8,\ldots,18$ & $(\pm 1, \pm 1, 0), (\pm 1, 0, \pm 1), (0, \pm 1, \pm 1)$ & $1/54$ \\
$19,20,\ldots,26$ & $(\pm 1, \pm 1, \pm 1)$ & $1/216$ \\
\bottomrule
\end{tabular}
\caption{Lattice discretization parameters of \(D3Q27\).}
\label{tab:latt_3d}
\end{table}
Unless stated otherwise, \(i=0,1,\ldots,26\) denotes the population index.
The space-time discrete lattice Boltzmann equation (LBE) reads
\begin{linenomath}\begin{align}\label{eq:lbe}
f_{i}(\bm{x}+\bm{\xi}_{i} \triangle t, t+\triangle t) = f_{i}(\bm{x},t) + \frac{\triangle t}{\tau}(f_{i}^{eq}(\bm{x},t) - f_{i}(\bm{x},t)) + \Omega^{F}_{i} .
\end{align}\end{linenomath}
The equilibrium particle distribution function used by Zhang et al. \cite{zhang} as well as by Höcker et al. \cite{hocker} and Maier \cite{Maier2021_1000132643} is simple, universal for all populations from 0 to 26, and stable for all possible volume fraction values.
It is the common second-order truncated Maxwell equilibrium, multiplied with the local volume fraction:
\begin{linenomath}\begin{align}\label{eq:equilibrium}
f_{i}^{\mathrm{eq}}(\bm{x},t) = w_{i} \tilde{\rho^{\flat}} \phi^{\flat} \Bigl(1 + \frac{\xi_{i\alpha} \overline{u_{\alpha}^{\flat}}}{c_{s}^{2}} + \frac{(\xi_{i \alpha} \xi_{i \beta} - c_{s}^{2} \delta_{\alpha \beta}) \overline{u_{\alpha}^{\flat}} \overline{u_{\beta}^{\flat}}}{2c_{s}^{4}} \Bigr) .
\end{align}\end{linenomath}
After the first time step, $\tilde{\rho^{\flat}} \phi^{\flat}$ is replaced by the zeroth population moment $\sum_{i} f_{i}$.
The standard LBM presupposes a constant fluid density, which is typically fulfilled, e.g., in multiphase or porous flows.
In contrast, if the constant density is multiplied with the spatially and temporally varying volume fraction, the result is not constant anymore.
The density in lattice units usually takes the value of 1, whereas the volume fraction can vary between 0 and 1, such that the effective density considered here in turn also varies between 0 and 1.
Taking into account the streaming of effective densities along the lattice directions, the new form of the equilibrium distribution function is then after the first collision
\begin{linenomath}\begin{align}
f_{i}^{\mathrm{eq}}(\bm{x},t) = w_{i} \tilde{\rho^{\flat}} \Bigl( \int_V \phi^{\flat}(\bm{x},t) \mathrm{d}V \Bigr) \Bigl(1 + \frac{\xi_{i\alpha} \overline{u_{\alpha}^{\flat}}}{c_{s}^{2}} + \frac{(\xi_{i \alpha} \xi_{i \beta} - c_{s}^{2} \delta_{\alpha \beta}) \overline{u_{\alpha}^{\flat}} \overline{u_{\beta}^{\flat}}}{2c_{s}^{4}} \Bigr) . \label{eq:feq}
\end{align}\end{linenomath}
Based on that, we define the effective density and velocity as
\begin{linenomath}\begin{align}
\tilde{\rho^{\flat}} & = \frac{\sum_{i} f_{i}}{\int_V \phi^{\flat}(\bm{x},t) \mathrm{d}V}, \label{eq:density} \\
\overline{\bm{u}^{\flat}} & = \frac{\sum_{i} \bm{\xi}_{i} f_{i}}{\sum_{i} f_{i}} + \frac{\triangle t}{2} \frac{\sum_{k} \overline{\bm{F}_{k}}}{\sum_{i} f_{i}}, \label{eq:velocity}
\end{align}\end{linenomath}
respectively.
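As a consistency check of \eqref{eq:density} and \eqref{eq:velocity}, the zeroth and first moments of the second-order equilibrium \eqref{eq:equilibrium} recover \(\tilde{\rho^{\flat}}\phi^{\flat}\) and \(\tilde{\rho^{\flat}}\phi^{\flat}\overline{\bm{u}^{\flat}}\) exactly on \(D2Q9\). A sketch in lattice units with hypothetical names:

```python
CS2 = 1.0 / 3.0
# D2Q9 velocity set and weights as in Table 1
XI = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1),
      (1, 1), (-1, -1), (1, -1), (-1, 1)]
W = [4.0 / 9.0] + 4 * [1.0 / 9.0] + 4 * [1.0 / 36.0]

def feq_d2q9(rho_phi, u):
    # second-order truncated Maxwell equilibrium scaled with the
    # local effective density rho_phi = rho~ * phi
    out = []
    for (cx, cy), w in zip(XI, W):
        cu = cx * u[0] + cy * u[1]
        uu = u[0] * u[0] + u[1] * u[1]
        out.append(w * rho_phi * (1.0 + cu / CS2
                                  + cu * cu / (2.0 * CS2 * CS2)
                                  - uu / (2.0 * CS2)))
    return out
```

For example, with \(\tilde{\rho^{\flat}}\phi^{\flat}=0.7\) and \(\overline{\bm{u}^{\flat}}=(0.05,-0.02)\), the sums \(\sum_i f_i^{\mathrm{eq}}\) and \(\sum_i \bm{\xi}_i f_i^{\mathrm{eq}}\) return \(0.7\) and \(0.7\,\overline{\bm{u}^{\flat}}\) up to rounding.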
For the volume fraction integration, the density definition uses data from the neighboring cells, which is considered further below.
Due to the Guo et al. forcing scheme \cite{guozhaoli}, the velocity $\overline{\bm{u}^{\flat}}$ contains the sum of the forces $\sum_{k} \overline{\bm{F}_{k}}$ applied in the respective example.
Further, the forcing term is defined as
\begin{linenomath}\begin{align}
\Omega_{i}^{F} = \Bigl(1 - \frac{\triangle t}{2\tau} \Bigr) w_{i} \Bigl(\frac{\xi_{i\alpha}}{c_{s}^{2}} + \frac{(\xi_{i \alpha} \xi_{i \beta} - c_{s}^{2} \delta_{\alpha \beta}) \overline{u_{\beta}^{\flat}}}{c_{s}^{4}} \Bigr) \sum\limits_{k} \overline{F_{k\alpha}}.
\end{align}\end{linenomath}
The sum \(\sum_{k} \overline{F_{k\alpha}} \) includes the phase interaction forces and the pressure correction force proposed by Zhang et al. \cite{zhang}
\begin{linenomath}\begin{align}\label{eq:pressForce}
\overline{\bm{F}_{\mathrm{PC}}} = \tilde{p} \bm{\nabla} \phi^{\flat} = \tilde{\rho^{\flat}} c_{s}^{2} \bm{\nabla} \phi^{\flat}.
\end{align}\end{linenomath}
This correction force adjusts the pressure term in the momentum equation, which is $\bm{\nabla} (\phi^{\flat} \tilde{p})$ according to the CE expansion of Zhang et al. equilibrium particle distribution and should be $\phi^{\flat} \bm{\nabla} \tilde{p}$ as in VANSE.
The phase interaction forces are for example in the case of a particle-laden flow given by the drag, lift, gravity, virtual mass and turbulence interaction forces.
These interaction forces are not considered in the present work due to the focus on model validation.
Note that the consistent incorporation of the neglected forces can be done with Guo et al. forcing scheme alongside the pressure correction.
Hence, without loss of generality we assume that \(\sum_{k} \overline{\bm{F}_{k}} = \overline{\bm{F}_{\mathrm{PC}}}\).
Further, the gradient of volume fraction appearing in \eqref{eq:pressForce} is discretized through central differences, thus for example in two dimensions
\begin{linenomath}\begin{align}
\bm{\nabla} \phi^{\flat} \approx \frac{1}{2\triangle x} \begin{pmatrix} \phi_{x+1}^{\flat} - \phi_{x-1}^{\flat} \\ \phi_{y+1}^{\flat} - \phi_{y-1}^{\flat} \end{pmatrix}.
\end{align}\end{linenomath}
The above-mentioned effective density is part of the equilibrium distribution function, and hence propagates from and to the neighboring lattice cells (cf. \eqref{eq:equilibrium} $\rightarrow$ \eqref{eq:feq}), such that the volume fraction becomes integrated over the cell volume \(\tilde{\rho^{\flat}} \phi^{\flat} \rightarrow \tilde{\rho^{\flat}}\int_V \phi^{\flat}(\bm{x},t) \mathrm{d}V\).
Each cell contains its own distinct effective density and different density values at the interfaces, calculated by integration with the effective densities of the neighboring cells.
For the discretized integral calculation we use a quadrature rule
\begin{linenomath}\begin{align}\label{eq:quad}
\int_V \phi^{\flat}(\bm{x},t) \mathrm{d}V &= \sum_i^{N} \varpi_i(N) \phi^{\flat}(\bm{x} - \bm{\xi}_{i} \triangle t,t),
\end{align}\end{linenomath}
which is rearranged to
\begin{linenomath}\begin{align}
\sum_i^{N} \varpi_i(N) \phi^{\flat}(\bm{x} - \bm{\xi}_{i} \triangle t,t) & = \left( \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} + \phi^{\flat} \right) \left( \bm{x}, t\right). \label{eq:latDiffRuleIIre}
\end{align}\end{linenomath}
The number of quadrature points $N$ depends on the number of directions in which the void fraction varies.
Hereby, the diagonal directions are not considered.
In particular, the volume fraction integration is performed on the $D1Q3$ lattice if the volume fraction changes only in one direction, on the $D2Q5$ lattice if in two and on $D3Q7$ if in all three directions.
In \eqref{eq:quad} and \eqref{eq:latDiffRuleIIre} $N$ is equal to $Q$.
It is to be noted that the weighting factors, which are listed in Table \ref{tab:weights}, do not conform to the weights of a discrete velocity set.
\begin{table}[h]
\centering
\begin{tabular}{ccc}
\toprule
Dimensions & $\varpi_0$ & $\varpi_{i \neq 0}$ \\
\midrule
$d=1$ & $1/2$ & $1/4$ \\
$d=2$ & $1/3$ & $1/6$ \\
$d=3$ & $1/6$ & $5/36$ \\
\bottomrule
\end{tabular}
\caption{Quadrature weights for void fraction integration over a lattice cell.}
\label{tab:weights}
\end{table}
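Note that with the weights of Table \ref{tab:weights}, the rearrangement \eqref{eq:latDiffRuleIIre} is exact when \(\bm{\nabla}^{2}\) is read as the standard discrete Laplacian on the respective stencil, since \(\varpi_0 + 2d\,\varpi_{i\neq0} = 1\) in every dimension \(d\). A sketch for \(D2Q5\) in lattice units (\(\triangle x = \triangle t = 1\)), with hypothetical names:

```python
# quadrature weights (varpi_0, varpi_{i != 0}) per dimension d, cf. the table
WEIGHTS = {1: (1.0 / 2.0, 1.0 / 4.0),
           2: (1.0 / 3.0, 1.0 / 6.0),
           3: (1.0 / 6.0, 5.0 / 36.0)}

def quad_d2q5(phi, x, y):
    # left-hand side: Sum_i varpi_i * phi(x - xi_i) on D2Q5, periodic wrap
    w0, wn = WEIGHTS[2]
    nx, ny = len(phi), len(phi[0])
    nb = (phi[(x + 1) % nx][y] + phi[(x - 1) % nx][y]
          + phi[x][(y + 1) % ny] + phi[x][(y - 1) % ny])
    return w0 * phi[x][y] + wn * nb

def laplace_form_d2q5(phi, x, y):
    # right-hand side: phi + varpi_{i != 0} * (discrete Laplacian of phi)
    w0, wn = WEIGHTS[2]
    nx, ny = len(phi), len(phi[0])
    lap = (phi[(x + 1) % nx][y] + phi[(x - 1) % nx][y]
           + phi[x][(y + 1) % ny] + phi[x][(y - 1) % ny]
           - 4.0 * phi[x][y])
    return phi[x][y] + wn * lap
```

Both forms agree for arbitrary fields, which is the algebraic content of the rearrangement above.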
The equilibrium moments with varying local volume fractions are thus computed via \eqref{eq:feq}, \eqref{eq:density}, \eqref{eq:velocity}, and \eqref{eq:latDiffRuleIIre} in a separately regarded lattice cell in the pre-collision state to
\begin{linenomath}\begin{align}
M^{\mathrm{eq}}_{0} &= \sum_{i} f_{i}^{\mathrm{eq}} = \tilde{\rho^{\flat}} \Bigl(\phi^{\flat} (\bm{x},t) + \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} \Bigr), \label{eq:M0} \\
M_{1\alpha}^{\mathrm{eq}} &= \sum_{i} \xi_{i\alpha} f_{i}^{\mathrm{eq}} = \tilde{\rho^{\flat}} \Bigl(\phi^{\flat} (\bm{x},t) + \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} \Bigr) \overline{\bm{u}^{\flat}}, \\
M_{2\alpha\beta}^{\mathrm{eq}} &= \sum_{i} \xi_{i \alpha} \xi_{i \beta} f_{i}^{\mathrm{eq}} = \tilde{\rho^{\flat}} \Bigl(\phi^{\flat} (\bm{x},t) + \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} \Bigr) \overline{u_{\alpha}^{\flat}} \overline{u_{\beta}^{\flat}} \nonumber \\
&+ \tilde{\rho^{\flat}} c_s^{2} \Bigl(\phi^{\flat} (\bm{x},t) + \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} \Bigr), \\
M_{3\alpha\beta\gamma}^{\mathrm{eq}} &= \sum_{i} \xi_{i \alpha} \xi_{i \beta} \xi_{i \gamma} f_{i}^{\mathrm{eq}} = \tilde{\rho^{\flat}} \Bigl(\phi^{\flat} (\bm{x},t) + \varpi_{i \neq 0}(N) \bm{\nabla}^{2} \phi^{\flat} \Bigr) \overline{\bm{u}^{\flat}} \delta_{\alpha \beta \gamma}. \label{eq:M3}
\end{align}\end{linenomath}
Finally, using these moments, a CE expansion (see \ref{sec:appendix}) of the above proposed lattice Boltzmann scheme yields formal consistency towards the VANSE \eqref{eq:vansEquMass}, \eqref{eq:vansEquMom}.
\section{Numerical validation}\label{sec:numerics}
\noindent The numerical validation of the proposed LBM for VANSE is performed on a stationary and a transient example.
Both examples are built with the method of manufactured solutions (MMS) \cite{roache}.
Thereby, analytical functions for volume fraction, fluid velocity and pressure are chosen such that they fulfill the mass conservation law of the VANSE.
For these fixed functions $\phi^{\flat},\overline{\bm{u}^{\flat}}, \tilde{p_{i}}$, the MMS force is calculated with central finite differences to
\begin{linenomath}\begin{align}
\bm{F}_{\mathrm{MMS}} = \partial_t \left(\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{u}^{\flat}}\right) & + \bm{\nabla} \cdot \left(\phi^{\flat} \tilde{\rho^{\flat}} \overline{\bm{u}^{\flat}} \overline{\bm{u}^{\flat}}\right)+ \phi^{\flat} \bm{\nabla} \tilde{p_{i}} \nonumber \\
& - \nu \bm{\nabla} \cdot \left(\phi^{\flat} \tilde{\rho^{\flat}} \left(\bm{\nabla} \overline{\bm{u}^{\flat}} + \overline{\bm{u}^{\flat}} \bm{\nabla}\right)\right) ,
\end{align}\end{linenomath}
including all terms of the momentum equation.
This force is used as forcing term in the LBE \eqref{eq:lbe} together with the pressure correction force
\begin{linenomath}\begin{align}
\sum_{k} \overline{\bm{F}_{k}} = \overline{\bm{F}_{\mathrm{MMS}}} + \overline{\bm{F}_{\mathrm{PC}}}.
\end{align}\end{linenomath}
The examples are evaluated through several error measurements.
The errors correspond to $L^1$-, $L^2$- and $L^{\infty}$-norms over nodal values of velocity and pressure deviations between the simulated and the prescribed data \cite{oberkampf}, i.e.
\begin{linenomath}\begin{align}
r_{L^{1}} \left( q^{\flat} \right) &= \frac{1}{N_{\mathrm{node}}} \sum_{c=1}^{N_{\mathrm{node}}} \left\vert q_{c}^{\flat} - q_{c}^{\flat,\star} \right\vert , \\
r_{L^{2}} \left( q^{\flat} \right) &= \sqrt{ \frac{1}{N_{\mathrm{node}}} \sum_{c=1}^{N_{\mathrm{node}}} \left\vert q_{c}^{\flat} - q_{c}^{\flat,\star} \right\vert^{2}}, \\
r_{L^{\infty}} \left( q^{\flat} \right) &= \max\limits_{c = 1,..,N_{\mathrm{node}}} \left\vert q_{c}^{\flat} - q_{c}^{\flat,\star} \right\vert,
\end{align}\end{linenomath}
respectively, where \(q^{\flat,\star}\) denotes the corresponding analytical solution.
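The error measures translate directly into code; a minimal sketch over lists of nodal values (names are illustrative):

```python
import math

def error_norms(q, q_star):
    # L1, L2 and Linf deviations between simulated nodal values q and
    # the prescribed analytical values q_star
    n = len(q)
    diffs = [abs(a - b) for a, b in zip(q, q_star)]
    r_l1 = sum(diffs) / n
    r_l2 = math.sqrt(sum(d * d for d in diffs) / n)
    r_linf = max(diffs)
    return r_l1, r_l2, r_linf
```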
The solutions of the VANSE are chosen for time-independent and time-dependent cases, constructed similarly to the ones of Blais et al.~\cite{BLAIS2015121} and Höcker et al.~\cite{hocker}.
The here tested configurations are summarized as follows.
\begin{enumerate}
\item\label{ex:stat2d} Stationary two-dimensional example:
\begin{linenomath}\begin{align}
\phi^{\flat} &= 0.5 + 0.4 \sin{(\pi x)} \sin{(\pi y)}, \\
\overline{\bm{u}^{\flat}} &= 2 \begin{pmatrix} -(\sin{(\pi x)})^{2} \sin{(\pi y)} \cos{(\pi y)} \\ (\sin{(\pi y)})^{2} \sin{(\pi x)} \cos{(\pi x)} \end{pmatrix}, \\
\tilde{p^{\flat}} &= \sin{(\pi x)} \sin{(\pi y)} .
\end{align}\end{linenomath}
\item\label{ex:stat3d} Stationary three-dimensional example:
\begin{linenomath}\begin{align}
\phi^{\flat} &= 0.5 + 0.4 \sin{(\pi x)} \sin{(\pi y)} \sin{(\pi z)}, \\
\overline{\bm{u}^{\flat}} &= \begin{pmatrix} (\sin{(\pi x)})^{2} \sin{(\pi y)} \cos{(\pi y)} \sin{(\pi z)} \cos{(\pi z)} \\ (\sin{(\pi y)})^{2} \sin{(\pi x)} \cos{(\pi x)} \sin{(\pi z)} \cos{(\pi z)} \\ -2 (\sin{(\pi z)})^{2} \sin{(\pi x)} \cos{(\pi x)} \sin{(\pi y)} \cos{(\pi y)} \end{pmatrix}, \\
\tilde{p^{\flat}} &= \sin{(\pi x)} \sin{(\pi y)} \sin{(\pi z)} .
\end{align}\end{linenomath}
\item\label{ex:tran1d} Transient one-dimensional example:
\begin{linenomath}\begin{align}
\phi^{\flat} &= 0.5 + 0.4 \sin{(\pi (x - 0.5t))}, \\
\overline{\bm{u}^{\flat}} &= \begin{pmatrix} 0.5 + \frac{1}{\phi^{\flat}} \\ 0 \end{pmatrix}, \\
\tilde{p^{\flat}} &= \sin{(\pi (x - 0.5t))} .
\end{align}\end{linenomath}
\item\label{ex:tran2d} Transient two-dimensional example:
\begin{linenomath}\begin{align}
\phi^{\flat} &= 0.5 + 0.4 \sin{(\pi (x - 0.5t))} \sin{(\pi (y - 0.5t))}, \\
\overline{\bm{u}^{\flat}} &= \begin{pmatrix} 0.5 + \frac{1}{\phi^{\flat}} \\ 0.5 + \frac{1}{\phi^{\flat}} \end{pmatrix}, \\
\tilde{p^{\flat}} &= \sin{(\pi (x - 0.5t))} \sin{(\pi (y - 0.5t))} .
\end{align}\end{linenomath}
\item\label{ex:tran3d} Transient three-dimensional example:
\begin{linenomath}\begin{align}
\phi^{\flat} &= 0.5 + 0.4 \sin{(\pi x)} \sin{(\pi y)} \sin{(\pi z)}, \\
\overline{\bm{u}^{\flat}} &= \begin{pmatrix} 0.5 + \frac{1}{\phi^{\flat}} \\ 0.5 + \frac{1}{\phi^{\flat}} \\ 0.5 + \frac{1}{\phi^{\flat}} \end{pmatrix}, \\
\tilde{p^{\flat}} &= \sin{(\pi x)} \sin{(\pi y)} \sin{(\pi z)} .
\end{align}\end{linenomath}
\end{enumerate}
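As a plausibility check of the manufactured solutions, the mass conservation law \eqref{eq:vansEquMass} with \(\tilde{\rho^{\flat}} = 1\) can be verified numerically for Example \ref{ex:tran1d}: since \(\phi^{\flat}\,\overline{u^{\flat}} = 0.5\,\phi^{\flat} + 1\), the residual \(\partial_t \phi^{\flat} + \partial_x(\phi^{\flat}\overline{u^{\flat}})\) vanishes identically. A sketch with hypothetical names:

```python
import math

def phi(x, t):
    # volume fraction of the transient one-dimensional example
    return 0.5 + 0.4 * math.sin(math.pi * (x - 0.5 * t))

def mass_residual(x, t, h=1e-3):
    # central-difference residual of d_t(phi) + d_x(phi * u) with rho~ = 1
    flux = lambda x_, t_: phi(x_, t_) * (0.5 + 1.0 / phi(x_, t_))  # phi * u
    d_t = (phi(x, t + h) - phi(x, t - h)) / (2.0 * h)
    d_x = (flux(x + h, t) - flux(x - h, t)) / (2.0 * h)
    return d_t + d_x
```

Evaluated at arbitrary space-time points, the residual stays at the level of the finite-difference truncation error, confirming that the prescribed fields satisfy the continuity equation.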
The spatial simulation domain comprises $2\,\mathrm{m}$ in each coordinate direction with periodic boundary conditions in every example.
The fluid density is set to $1\,\mathrm{kg/m^{3}}$ and the kinematic viscosity to $0.1\,\mathrm{m^{2}/s}$.
The relaxation time is held constant for all resolutions under diffusive scaling and is equal to $0.53$ for the stationary and $0.5075$ for the transient examples.
Exemplary solutions are visualized in Figure \ref{fig:3D_stationary} for the stationary three-dimensional example \ref{ex:stat3d} and in Figure \ref{fig:3D_transient} for the transient three-dimensional example \ref{ex:tran3d}.
\begin{figure}
\centering
\begin{tabular}{cc}
\multirow{2}{*}[9.5em]{
\includegraphics[width=0.6\linewidth]{bild_stationary_3_c.jpeg}
}
&
\includegraphics[scale=1]{vans-figure2.pdf} \\
&
\includegraphics[scale=1]{vans-figure3.pdf}
\end{tabular}
\caption{Stationary three-dimensional velocity and porosity distribution of Example \ref{ex:stat3d}.}
\label{fig:3D_stationary}
\end{figure}
\begin{figure}[ht!]
\centering
\begin{tabular}{cc}
\multirow{2}{*}[9.5em]{
\includegraphics[width=0.6\linewidth]{bild_transient_3.jpeg}
}
&
\includegraphics[scale=1]{vans-figure4.pdf} \\
&
\includegraphics[scale=1]{vans-figure5.pdf}
\end{tabular}
\caption{Transient three-dimensional velocity and porosity distribution of Example \ref{ex:tran3d}.}
\label{fig:3D_transient}
\end{figure}
The convergence plots for the examples in each error norm are shown in Figures \ref{fig:stationary_plots_2d}, \ref{fig:stationary_plots_3d}, \ref{fig:transient_plots_1d}, \ref{fig:transient_plots_2d}, and \ref{fig:transient_plots_3d}, respectively.
\begin{figure}[ht!]
\centerline{
\subfloat[Velocity error]{
\includegraphics[scale=1]{vans-figure6.pdf}}
\subfloat[Pressure error]{
\includegraphics[scale=1]{vans-figure7.pdf}}
}
\caption{Error measurements for (a) velocity and (b) pressure of the stationary two-dimensional Example \ref{ex:stat2d}.}
\label{fig:stationary_plots_2d}
\end{figure}
\begin{figure}[ht!]
\centerline{
\subfloat[Velocity error]{
\includegraphics[scale=1]{vans-figure8.pdf}}
\subfloat[Pressure error]{
\includegraphics[scale=1]{vans-figure9.pdf}}
}
\caption{Error measurements for (a) velocity and (b) pressure of the stationary three-dimensional Example \ref{ex:stat3d}.}
\label{fig:stationary_plots_3d}
\end{figure}
\begin{figure}[ht!]
\centerline{
\subfloat[Velocity error]{
\includegraphics[scale=1]{vans-figure10.pdf}}
\subfloat[Pressure error]{
\includegraphics[scale=1]{vans-figure11.pdf}}
}
\caption{Error measurements for (a) velocity and (b) pressure of the transient one-dimensional Example \ref{ex:tran1d}.}
\label{fig:transient_plots_1d}
\end{figure}
\begin{figure}[ht!]
\centerline{
\subfloat[Velocity error]{
\includegraphics[scale=1]{vans-figure12.pdf}}
\subfloat[Pressure error]{
\includegraphics[scale=1]{vans-figure13.pdf}}
}
\caption{Error measurements for (a) velocity and (b) pressure of the transient two-dimensional Example \ref{ex:tran2d}.}
\label{fig:transient_plots_2d}
\end{figure}
\begin{figure}[ht!]
\centerline{
\subfloat[Velocity error]{
\includegraphics[scale=1]{vans-figure14.pdf}}
\subfloat[Pressure error]{
\includegraphics[scale=1]{vans-figure15.pdf}}
}
\caption{Error measurements for (a) velocity and (b) pressure of the transient three-dimensional Example \ref{ex:tran3d}.}
\label{fig:transient_plots_3d}
\end{figure}
All examples are evaluated after the state has stabilized and the error norms have become asymptotically constant.
Notably, the error norms themselves reach a steady state after a sufficiently long simulation time, because the periodic boundary conditions keep the maximal and minimal variable values constant: they change only in position, not in amplitude.
In Figures \ref{fig:stationary_plots_2d}, \ref{fig:stationary_plots_3d}, \ref{fig:transient_plots_1d}, \ref{fig:transient_plots_2d}, and \ref{fig:transient_plots_3d} we observe the same experimental convergence order of two in every norm type for the velocity as well as the pressure error.
The absolute pressure deviation does not scale with an increase of the target pressure values, so that the relative pressure error can be made sufficiently small.
The results above demonstrate that the proposed LBM model for approximating the VANSE converges with second order and is thus consistent in the present numerical tests.
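The experimental order of convergence reported above can be computed from error norms at successive resolutions as $p=\log(e_{1}/e_{2})/\log(N_{2}/N_{1})$. A small sketch with hypothetical error values (not the measured data of the figures):

```python
import math

def eoc(errors, resolutions):
    """Experimental order of convergence between successive resolutions."""
    pairs = zip(zip(errors, resolutions), zip(errors[1:], resolutions[1:]))
    return [math.log(e1 / e2) / math.log(n2 / n1) for (e1, n1), (e2, n2) in pairs]

# hypothetical errors decaying as N^-2, i.e. second order
errors = [1.0e-2, 2.5e-3, 6.25e-4]
print(eoc(errors, [32, 64, 128]))   # -> [2.0, 2.0]
```

In practice the orders are read off pairwise from the error norms plotted in the convergence figures and should approach 2 as the lattice is refined.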
\section{Conclusion}\label{sec:conc}
\noindent We establish a novel LBM for approximating the VANSE.
The present LBM is formulated with an appropriate equilibrium distribution and pressure correction forcing term.
The new moments of the equilibrium function and forcing terms, which account for locally and temporally varying void fractions, are provided and justified.
This unconventional point of view is based on considering streaming of the effective density from cell to cell.
In particular, the population moments taken at one lattice cell include the effective density streamed from the neighboring cell in the chosen direction, so that a finite difference scheme is applicable.
The numerical validation of the proposed LBM is performed on steady and transient examples constructed with the MMS.
Under diffusive scaling upon refinement of the lattice resolution, the second-order convergence of the fluid velocity and the pressure is confirmed.
Finally, the presented CE expansion formally validates the pressure correction forcing term via cancellation of moments with corresponding terms.
Based on that, the expansion recovers the VANSE.
In future studies the proposed LBM is to be extended to a full multiphase Eulerian model with phase interaction forces.
Due to the intrinsic computing efficiency and optimal parallelizability of LBM, large eddy simulations \cite{smagorinsky} of complex entire reactor geometries with Eulerian multiphase LBM will become feasible.
A second necessary extension of the model is accounting for mass transfer between phases.
In conclusion, the planned future research might render multiphase LBM an equal competitor to the common FVM typically used in industrial solvers.
\section*{Author contribution}\noindent
\textbf{Fedor Bukreev}:
Conceptualization,
Validation,
Formal analysis,
Investigation,
Resources,
Data Curation,
Writing - Original Draft;
\textbf{Stephan Simonis}:
Methodology,
Validation,
Formal analysis,
Investigation,
Data Curation,
Writing - Review \& Editing,
Supervision;
\textbf{Adrian Kummerländer}:
Software,
Supervision;
\textbf{Julius Jeßberger}:
Writing - Review \& Editing;
\textbf{Mathias J. Krause}:
Software,
Resources,
Funding acquisition.
\section*{Acknowledgment}
\noindent
This work was performed on the HoreKa supercomputer funded by the Ministry of Science, Research and the Arts Baden-Württemberg and by the Federal Ministry of Education and Research. The current research is a part of the DFG project number 436212129 "Increase of efficiency in phosphate recovery by understanding the interaction of flow and loading processes with modeling and simulation".
Q: Is it possible to edit a CSS file in a running MVC site? I have an MVC site which is colored black and white except for certain design elements, which are colored with one specific color (let's say blue). I do all the coloring from CSS.
What I want to do is switch this color to another one from time to time. The problem is that if I do this switching, let's say from jQuery when the document is loaded, the colors in an asynchronously loaded element won't change.
Is it possible to change the CSS file itself from MVC, or maybe there is an event for async-loaded elements too?
A: What about placing the different elements in different files and then including only the one you care about? For example:
In file default.css
#myDiv {
color: blue;
}
Now I can have other files... let's say red.css.
#myDiv {
color: red !important;
}
Now in your master page you can load the red.css based on whatever business logic you like. Here's a sample:
<link rel="stylesheet" type="text/css" href="/css/default.css" />
<% if (SomeCondition) { %>
<link rel="stylesheet" type="text/css" href="/css/red.css" />
<% } %>
A: How frequently are your colors going to change? If the change isn't happening very often, you could create an action method that returns a different stylesheet based on the current time.
public ActionResult GetCss() {
string stylesheet = GetStylesFromSomewhere();
return Content(stylesheet, "text/css");
}
A: I reread the question and decided that I should give a wholly different answer.
CSS is mainly declarative. This is a big convenience. If you declare that something is a certain way, you don't need to care about reapplying the CSS on new elements, or reprocessing the DOM. It happens "in the browser". Imperatively changing new elements to a new state every time something happens (e.g. adding a new element) is not just inconvenient but can lead to a whole lot of mistakes. So no, there is no general event for "async loaded elements", but you shouldn't be looking for one.
CACHING
In any web environment having a constant css file is mainly a good idea. The browser caches the css, and can use it every time it's needed. If you change your css, the browser won't notice. Of course there are a lot of settings to consider, but it's true in most cases.
So even if you could edit your CSS file, it wouldn't really matter, as you would need to somehow tell the browser to notice the change.
There is a technique called cache-busting which is mainly putting a GET parameter after the css, like:
<link href="https://www.famous-cats.com/style/puna.css?v=3" rel="stylesheet" type="text/css" />
The browser thinks that ?v=1 is different from ?v=2, although you just referenced the same file. So it will cache every version with a different GET parameter separately.
OK.
CHANGING FILES
But even if you change the file: is it a good idea to change the css file itself? If 95% of the rules are the same every time (everything that is not about that 1 color), you might want to separate it in a "main" file, and only change the parts about the colors, which can be in a different file/inline html.
Another thing about changing files: if you are not planning on procedurally generating new styles, you should just write constant files and switch between them.
DOTLESS
It's a port of LESS to ASP.NET. Check http://lesscss.org/ for its features; it's basically Sass with a slightly different syntax.
The good thing about it is that you can use variables and get their values from URL parameters, so if you link style.less?color=fuschia it will just set the variable named "color" to your favorite color.
Less is neat stuff and can considerably reduce the development time.
THE SOLUTION I RECOMMEND
Just do the coloring in a css different from the main one:
<link href="https://www.famous-cats.com/style/main.css" rel="stylesheet" type="text/css" />
<link href="https://www.famous-cats.com/style/color.less?color=00FF55" id="colorcss" rel="stylesheet" type="text/css" />
And if you need to change the colors, just call:
document.getElementById('colorcss').href = 'color.less?color=F35741';
(I have given the link tag an id).
And it works.
Dive into less a little more and check how you can use color variables. You can make it possible on the admin to just select that one color.
Also, inline CSS is not the work of the devil; you can easily create it in Razor. If it is just a few rules, it won't hurt anyone. So if you are not into LESS, you can use inline CSS, or generate the CSS with an MVC controller action (just set the MIME type to text/css, and use a URL parameter for color changing/cache busting).
I hope this answers your question.
\section*{Notation}
\addcontentsline{toc}{section}{Notation}
\begin{center}
\begin{longtable}[t]{ll}
\bfseries Symbol & \bfseries Meaning\\\hline
$a(t)\leftrightarrow A(f)$ & background noise of the instrument A\\
$b(t)\leftrightarrow B(f)$ & background noise of the instrument B\\
$c(t)\leftrightarrow C(f)$ & DUT noise, i.e., the useful signal\\
$b_i$ & coefficients of the power-law approximation of $S_\varphi(f)$\\&(in AM-PM noise)\\
$\mathrm{dev}\{\,\}$ & deviation, $\mathrm{dev}\{x\}=\smash{\sqrt{\mathbb{V}\{x\}}}$\\
$\mathbb{E}\{\,\}$ & mathematical expectation\\
$f$ & Fourier frequency, Hz\\
$f(x)$ & probability density function (PDF)\\
$F(x)$ & cumulative density function (CDF)\\
$\mathcal{F}\{\,\}$ & Fourier transform operator\\
$h_i$ & coefficients of the power-law model of $S_\alpha(f)$ or $S_y(f)$\\&(in AM-PM noise)\\
$i$ & integer number, often used as an index\\
$\imath$ & imaginary unit, $\imath^2=-1$\\
$\Im\{\,\}$ & imaginary part of a complex quantity, as in $X''=\Im\{X\}$\\
$m$ & number of averaged spectra, as in $\left<|S_{yx}|\right>_m$\\
$O(\,)$ & order of, as in $e^x=1+x+O(x^2)$\\
$\mathbb{P}\{\,\}$ & probability, as in $\mathbb{P}\{x>0\}$\\
$P_N$ & probability that a value is negative, as in $P_N=\mathbb{P}\{x<0\}$\\
$P_P$ & probability that a value is positive, as in $P_P=\mathbb{P}\{x>0\}$\\
$R_{xx}(t')$ & autocorrelation function\\
$\Re\{\,\}$ & real part of a complex quantity, as in $X'=\Re\{X\}$\\
$S_{xx}(f)$ & PSD of the quantity $x$\\
$S_{yx}(f)$ & cross PSD of the quantities $y$ and $x$\\
$t$ & time\\
$T$ & measurement time\\
$\mathbb{V}\{\,\}$ & variance, mathematical expectation of\\
$x(t)\leftrightarrow X(f)$ & generic variable\\
$x(t)\leftrightarrow X(f)$ & signal at the FFT analyzer input, channel 1\\
$\mathbf{x}(t)$, $\mathbf{y}(t)$ & stochastic processes, of which $x(t)$ and $y(t)$ are realizations\\
$y(t)\leftrightarrow Y(f)$ & generic variable\\
$y(t)\leftrightarrow Y(f)$ & signal at the FFT analyzer input, channel 2\\
$\alpha(t)\leftrightarrow\mathcal{A}(f)$ & normalized-amplitude noise (in AM-PM noise)\\
$\Gamma(x)$& the gamma function used in probability\\
$\kappa^2$ & PSD of the signal $c(t)$\\
$\mu$ & average (the value of)\\
$\nu$ & frequency (Hz), used for carrier signals (in AM-PM noise)\\
$\nu$ & no.\ of degrees of freedom, in probability functions\\
$\sigma(\tau)$& Allan deviation, $\sqrt{\text{Allan variance}}$ (in AM-PM noise)\\
$\tau$ & measurement time of the Allan variance (in AM-PM noise)\\
$\varphi(t)\leftrightarrow\Phi(f)$ & phase noise (in AM-PM noise)\\
$\chi^2$ & in probability, $\chi^2=\mathbf{x}_1^2+\mathbf{x}_2^2+\mathbf{x}_3^2+\ldots$\ originates the\\& $\chi^2$ distribution\\[1em]
\bfseries Subscript & \bfseries Meaning\\\hline
$T$ & truncated over the meas.\ time $T$, as in $x_T(t)$, $X_T(f)$\\[1em]
\bfseries Superscript & \bfseries Meaning\\\hline
$\ast$ & complex conjugate, as in $|X|^2=XX^\ast$\\[1em]
\bfseries Symbol & \bfseries Meaning\\\hline
$\left<~\right>$ & average. Also $\left<~\right>_m$ average of $m$ values\\
$\hat{~}$ & estimator of a quantity, as in $\smash{\hat{S}}_{yx}=\left<S_{yx}\right>_m$ \\
$'$, $''$ & real and imaginary part, as in $X=X'+\imath X''$\\
$\leftrightarrow$& transform inverse-transform pair, as in $x(t)\leftrightarrow X(s)$\\
$\dot{~}$ & time-derivative, as in $\dot{\varphi}(t)$ (in AM-PM noise)\\[1em]
\bfseries Acronym & \bfseries Meaning\\\hline
AM & Amplitude Modulation, often `AM noise' (in AM-PM noise)\\
CDF & Cumulative Density Function\\
DUT & Device Under Test\\
FFT & Fast Fourier Transform\\
PM & Phase Modulation, often `PM noise' (in AM-PM noise)\\
PDF & Probability Density Function\\
PLL & Phase Locked Loop (in AM-PM noise)\\
PSD & (single-side) Power Spectral Density\\[1em]
\bfseries font/case & \bfseries Meaning\\\hline
uppercase & Fourier transform of the lower-case function\\
rm-bf & stochastic processes, as in $x(t)$ is a realization of $\mathbf{x}(t)$\\\hline
\multicolumn{2}{l}{Font/case is used in this way only in some special (and obvious) cases}
\end{longtable}
\end{center}
\clearpage
\section{Introduction}\label{sec:xsp-introduction}
Measuring a device under test (DUT), the observed spectrum contains the DUT noise, which we can call \emph{signal} because it is the object of the measurement, and the background noise of the instrument. The core of the cross-spectrum measurement method is that we can measure the DUT simultaneously with two equal instruments. Provided that experimental skill and a pinch of good luck guarantee that DUT and instruments are statistically independent, statistics enables us to extract the DUT spectrum from the background.
\begin{figure}[b]
\centering\namedgraphics{scale=0.6}{xsp-correl-basics}{\textwidth}
\caption{Basics of the cross-spectrum method.}
\label{fig:xsp-correl-basics}
\end{figure}
\begin{figure}
\centering\namedgraphics{scale=0.8}{mce-sqrt-law}{\textwidth}
\vspace*{-2em}
\caption{Average and deviation of the cross spectrum $|\left<S_{yx}\right>_m|$, as a function of the number $m$ of averaged realizations of white Gaussian noise. Since the statistical properties of $S_{yx}(f)$ are the same at any frequency, only one point (i.e., one frequency) is shown and the variable $f$ is dropped.
The DUT noise is 10 dB lower than the background.}
\label{fig:mce-sqrt-law}
\end{figure}
The two-channel measurement can be modeled as the block diagram of Fig.~\ref{fig:xsp-correl-basics}, where $a(t)$ and $b(t)$ are the background of the two instruments, and $c(t)$ the DUT noise, under the hypothesis that $a(t)$, $b(t)$ and $c(t)$ are statistically independent. Thus, the observed signals are
\begin{align*}
x(t)&=c(t)+a(t)\\
y(t)&=c(t)+b(t)~.
\end{align*}
We are interested in the power spectral density\footnote{The PSD as a statistical concept will be defined afterwards. Newcomers can provisionally use $S_{yx}(f)=\frac1TY(f)X^\ast(f)$, which is the readout of the FFT analyzer. $T$ is the measurement time.} (PSD), which is a normalized form of spectrum that expresses the power per unit of bandwidth, denoted with $S(f)$.
It will be shown that the average cross-PSD $\left<S_{yx}(f)\right>$ converges to the DUT PSD $S_{cc}(f)$, which is what we want to measure.
The idea of the cross-spectrum method is explained in Fig.~\ref{fig:mce-sqrt-law}. This figure builds on the output of the free-running analyzer, after selecting one frequency ($f_0$). This output is a sequence of realizations $|S_{yx}(f_0)|$, which we average on contiguous groups of $m$ values, obtaining $|\left<S_{yx}(f_0)\right>_m|$. The averages form a (slower) sequence whose statistical properties depend on $m$. So, Fig.~\ref{fig:mce-sqrt-law} plots the average and the deviation of the sequence of averages, as a function of $m$.
At small values of $m$, the background is dominant and decreases as $m$ increases. Beyond $m\approx100$, we observe that $|\left<S_{yx}(f)\right>_m|$ stops decreasing and approaches the value of 0.1 ($-10$ dB), which is the DUT noise in this example. The standard deviation further decreases. The background is dominant below $m\approx100$. Beyond, the DUT noise shows up and the estimation accuracy increases, as seen from the deviation-to-average ratio.
Notice that the choice of $|\left<S_{yx}(f)\right>_m|$ as an estimator of $S_{yx}(f)$ is still arbitrary and will be further discussed.
All this report is about how and why the cross-spectrum converges to the DUT noise $S_{cc}(f)$, and about how this fact can be used in the laboratory practice.
The scheme of Fig.~\ref{fig:xsp-correl-basics} is analyzed from the following standpoints
\begin{description}
\item[Normal use.] All the noise processes [$a(t)$, $b(t)$ and $c(t)$] have non-negligible power. We use the statistics to extract $S_{cc}(f)$.
\item[Statistical limit.] In the absence of a correlated phenomenon, thus with $c=0$, the average cross spectrum takes a finite nonzero value, limited by the number of averaged realizations.
\item[Hardware limit.] After removing the DUT, a (small) correlated part remains. This phenomenon, due to crosstalk or to other effects, limits the instrument sensitivity.
\end{description}
Though the author is inclined to use phase and amplitude noise as the favorite examples (Section \ref{ssec:xsp-pm-noise} and \ref{ssec:xsp-am-noise}), the cross-spectrum method is of far more general interest. Examples from a variety of research fields will be discussed in Section~\ref{ssec:xsp-other-applications}.
As a complement to this report, the reader is encouraged to refer to classical textbooks of probability and statistics, among which \cite{Feller:probability,Papoulis:probability,Cramer:statistics,Davenport-Root:noise} are preferred.
\section{Power spectral density}
The processes we describe are stationary and ergodic. The requirement that noise be stationary and ergodic is not a stringent constraint in the laboratory practice because the words `stationary' and `ergodic' are the equivalent of `repeatable' and `reproducible' in experimental physics. Thus, a realization $x(t)$ has the same statistical properties independently of the origin of time, and it also shares the statistical properties of the entire process $\mathbf{x}(t)$. Unless otherwise specified, $\mathbf{x}(t)$ is a zero-mean finite-power process.
The power spectral density (PSD) of such processes is
\begin{align}
S_{xx}(f) &= \mathcal{F}\left\{R_{xx}(t')\right\}
\label{eqn:xsp-psd-def}
\end{align}
where $\mathcal{F}\{\:\}$ is the Fourier transform operator,
\begin{align}
R_{xx}(t') &= \mathbb{E}\left\{ \mathbf{x}(t) \, \mathbf{x}(t+t')\right\}
\end{align}
the autocorrelation function, and $\mathbb{E}\{\:\}$ the mathematical expectation.
As a simplified notation, we use the upper case for the Fourier transform, and the left-right arrow for the transform inverse-transform pair, thus
\begin{align*}
x(t)\leftrightarrow X(f)\qquad\text{Fourier transform -- inverse transform pair}~.
\end{align*}
The two-sided Fourier transform and spectra are generally preferred in theoretical issues, while the experimentalist often prefers the single-sided representation. Though we use the one-sided representation in all figures, often we do not need the distinction between one-sided and two-sided representation. In most practical measurements the Fast Fourier Transform (FFT) replaces the traditional Fourier transform, and the frequency is a discrete variable.
The Wiener-Khintchine theorem for ergodic and stationary processes enables us to calculate the PSD through the absolute value of the Fourier transform. Thus it holds that
\begin{align}
\label{eqn:xsp-psd-wk}
\mathbb{E}\left\{S_{xx}(f)\right\}
&=\mathbb{E}\Bigl\{\lim_{T\rightarrow\infty}\Bigl[\frac1T\,X_T(f)\,X_T^\ast(f)\Bigr]\Bigr\}\\
&=\mathbb{E}\Bigl\{\lim_{T\rightarrow\infty}\Bigl[\frac1T\,\left|X_T(f)\right|^2\Bigr]\Bigr\}~,
\label{eqn:xsp-psd-wk-abs}
\end{align}
where the subscript $T$ means truncated over the measurement time $T$, and the superscript `$\ast$' stands for complex conjugate. By the way, the factor $\frac1T$ is necessary for $S_{xx}(f)$ to have the physical dimension of a \emph{power density}, i.e., power per unit of frequency.
Omitting the expectation, \req{eqn:xsp-psd-wk} can be seen as a realization of the PSD\@. In actual experiments the expectation is replaced with the average on a suitable number $m$ of spectrum samples
\begin{align}
&\left<S_{xx}(f)\right>_m = \frac1T\,\left<|X_T(f)|^2\right>_m
&&\text{(avg, $m$ spectra)}~.
\label{eqn:xsp-psd-avg}
\end{align}
As an obvious extension, the cross PSD of two generic random processes $\mathbf{x}(t)$ and $\mathbf{y}(t)$
\begin{align}
&S_{yx}(f) =
\mathcal{F}\left\{R_{yx}(t')\right\}
\end{align}
is measured as
\begin{align}
&\left<S_{yx}(f)\right>_m = \frac1T\,\left<Y_T(f)\,X_T^\ast(f)\right>_m~.
\label{eqn:xsp-measured-Syx}
\end{align}
\subsection{Measurement time \boldmath$T$}
In practical experiments the measurement time is finite, so we can only access the truncated version $x_T(t)\leftrightarrow X_T(f)$ of a realization.
In order to simplify the notation, the subscript $T$ for the \emph{truncation time} will be omitted. Thus for example we write \req{eqn:xsp-measured-Syx} as
\begin{align*}
\left<S_{yx}(f)\right>_m&=\frac1T\,\left<Y(f)\,X^\ast(f)\right>_m
&&\text{(abridged notation)}~.
\end{align*}
\subsection{Why white Gaussian noise}
However simplistic it may seem at first sight, the use of white Gaussian noise is justified as follows. First, spectrally-smooth noise phenomena originate from large-number statistics (electrons and holes, semiconductor defects, shot noise, etc.), which by virtue of the central limit theorem yield Gaussian processes. Second, most non-white noise phenomena of interest follow the power-law model $S(f)=\sum h_if^i$, hence they can be converted into white noise after multiplication by a suitable power of $f$ without affecting the PDF, and converted back after analysis. The idea of whitening and un-whitening a noise spectrum is by the way of far broader usefulness than shown here. For these reasons we can take full benefit from the simplicity of white Gaussian noise. Yet, it is understood that white noise rolls off at some point, so that all signals have finite power.
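As a toy illustration of whitening and un-whitening, consider $f^{-2}$ noise (a random walk): discrete differencing multiplies the amplitude spectrum by roughly $f$, recovering the white Gaussian increments without affecting the PDF, and a cumulative sum restores the original record (unit sampling step assumed):

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=1 << 14)    # white Gaussian noise
x = np.cumsum(w)                # random walk: S_x(f) ~ 1/f^2

x_white = np.diff(x)            # whiten: differencing ~ multiplying |X(f)| by f
# ... analysis with white-noise statistics goes here ...
x_back = np.concatenate(([x[0]], x[0] + np.cumsum(x_white)))   # un-whiten

assert np.allclose(x_white, w[1:])   # the increments are exactly the white noise
assert np.allclose(x_back, x)        # the original record is restored
```

The same trick extends to any integer power-law slope by repeated differencing or integration.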
\section{The cross-spectrum method}
Recalling the definitions of Section \ref{sec:xsp-introduction}, we denote with $a(t)$ and $b(t)$ the background of the two instruments, with $c(t)$ the common noise, and with $A$, $B$ and $C$ their Fourier transforms, leaving the frequency implied. Working with realizations, we no longer need a separate notation for the process. By definition, $a(t)$, $b(t)$ and $c(t)$ are statistically independent. We also assume that they are ergodic and stationary. The two instrument outputs are
\begin{gather}
x(t)=c(t)+a(t)~~\leftrightarrow~~X = C+A\\
y(t)=c(t)+b(t)~~\leftrightarrow~~Y = C+B\makebox[0pt]{~~.}
\end{gather}
First, we observe that the cross-spectrum $S_{yx}$ converges to $S_{cc}$. In fact, \begin{align}
\mathbb{E}\{S_{yx}\}
&=\tfrac1T\,\mathbb{E}\{YX^\ast\}\nonumber\\
&=\tfrac1T\,\mathbb{E}\{[C+A]\times[C+B]^\ast\}\nonumber\\
&=\tfrac1T\,\bigl[\mathbb{E}\{CC^\ast\} + \mathbb{E}\{CB^\ast\} +
\mathbb{E}\{AC^\ast\} + \mathbb{E}\{AB^\ast\}\bigr]\nonumber\\
&= S_{cc}
\end{align}
because the hypothesis of statistical independence gives
\begin{align*}
\mathbb{E}\{CB^\ast\}=0, \qquad
\mathbb{E}\{AC^\ast\}=0, \qquad\text{and}\qquad
\mathbb{E}\{AB^\ast\}=0~.
\end{align*}
Then we replace the expectation with the average on $m$ measured spectra
\begin{align}
\left<S_{yx}\right>_m
&=\tfrac1T\,\left<YX^\ast\right>_m \nonumber\\
&=\tfrac1T\,\left<[C+A]\times[C+B]^\ast\right>_m\nonumber\\
&=\tfrac1T\,\bigl[\left<CC^\ast\right>_m + \left<CB^\ast\right>_m +
\left<AC^\ast\right>_m + \left<AB^\ast\right>_m\bigr]\nonumber\\
&= S_{cc} + O(\sqrt{1/m})~,
\label{eqn:xsp-syx-avg}
\end{align}
where $O(\:)$ means `order of.' Owing to statistical independence, the cross terms decrease proportionally to $1/\sqrt{m}$.
\subsection{Statistical limit}
With no DUT noise it holds that $c=0$, hence $S_{cc}=0$.
Maintaining the hypothesis of statistical independence of the two channels, we notice that the number of averaged spectra sets a statistical limit to the measurement.
Only the cross terms remain in \req{eqn:xsp-syx-avg}, which decrease proportionally to $1/\sqrt{m}$. Thus, the statistical limit is
\begin{align}
\left<S_{yx}\right>_m &= \tfrac1T\left<AB^\ast\right>_m
\approx\sqrt{\frac1m\,\left<S_{yy}\right>_m\left<S_{xx}\right>_m}
\qquad\text{(statistical limit)}.
\end{align}
Accordingly, a 5 dB improvement on the single-channel noise costs a factor of 10 in averaging, thus in measurement time. The convergence law will be extensively discussed afterwards.
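A quick Monte Carlo check of the $1/\sqrt{m}$ law, using two independent unit-power complex white Gaussian channels (sample sizes are illustrative): averaging ten times more spectra lowers $|\left<AB^\ast\right>_m|$ by $\sqrt{10}$, i.e., 5~dB on a power scale.

```python
import numpy as np

rng = np.random.default_rng(2)

def cross_term(m, trials=2000):
    """Mean of |<A B*>_m| for independent unit-power complex Gaussian A, B."""
    shape = (trials, m)
    A = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
    B = (rng.normal(size=shape) + 1j * rng.normal(size=shape)) / np.sqrt(2)
    return np.mean(np.abs(np.mean(A * np.conj(B), axis=1)))

r10, r100 = cross_term(10), cross_term(100)
print(10 * np.log10(r10 / r100))   # close to 5 dB gained per factor of 10 in m
```

The $10\log_{10}$ is used because $S_{yx}$ is a power-like quantity; on an amplitude scale the same factor $\sqrt{10}$ would read 10 dB.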
\subsection{Hardware limit}
Breaking the hypothesis of the statistical independence of the two channels, we are interested in the \emph{correlated noise} of the instrument, which limits the sensitivity. This can be due for example to the crosstalk between the two channels, or to environmental fluctuations (ac magnetic fields, temperature, etc.) acting simultaneously on the two channels.
The mathematical description is simplified by setting the true DUT noise to zero, and by re-interpreting $c(t)$ as the \emph{correlated noise} of the instrument observed on an unlimited number of averaged spectra
\begin{align}
\mathbb{E}\{S_{yx}\} = \mathbb{E}\{S_{cc}\}
\qquad\text{(hardware limit)}~.
\label{eqn:ddl-correl-hw-limit}
\end{align}
Nonetheless, the correct identification of this limit may require non-trivial experimental skill.
\subsection{Regular DUT measurement}
The accurate measurement of a regular DUT requires that
\begin{enumerate}
\item The number $m$ is large enough for the statistical limit to be negligible
\item The hardware background noise is negligible as compared to the DUT noise
\end{enumerate}
In these conditions, the average cross spectrum converges to the expectation of the DUT noise
\begin{align}
\left<S_{yx}\right>_m ~~\rightarrow~~ \mathbb{E}\{S_{cc}\}
\qquad\text{(DUT measurement)}.
\label{eqn:xsp-correl-dut-meas-reg}
\end{align}
This is the regular use of the instrument.
\section{Running the experiment}\label{sec:fft-display}
Before getting through mathematical details, it is instructive to start from a simplified picture of what happens when we run an experiment. For this purpose, we chose $\smash{\hat{S}}_{yx}=|\left<S_{yx}\right>_m|$ as an estimator of $S_{yx}$, which is often the default of the FFT analyzer in cross-spectrum mode. This estimator is suitable to be displayed on a logarithmic scale (dB) because it takes only nonnegative values, but it is biased.
We observe the PSD on the display of the FFT analyzer as $m$ increases, looking for the signature of $\smash{\hat{S}}_{yx}$ converging to $S_{cc}$.
We restrict our attention to the case of DUT noise smaller than the single-channel background, as usually occurs when we need the correlation. The purpose of this assumption is to make the simulations representative of the laboratory practice. And of course we assume that the two channels are equal.
\subsection{Ergodicity}
\begin{figure}[t]
\centering\namedgraphics{scale=0.7}{xsp-ergodicity-3d}{\textwidth}
\caption{Sequence of cross spectra $|\left<S_{yx}(f)\right>_{32}|$.}
\label{fig:xsp-ergodicity-3d}
\end{figure}
Averaging on $m$ realizations, the progression of a measurement gives a sequence of spectra $|\left<S_{yx}\right>_{m}|_i$ of running index $i$, as shown in Fig.~\ref{fig:xsp-ergodicity-3d}.
For a given frequency $f_0$, the sequence
$|\left<S_{yx}(f_0)\right>_{m}|_i$ is a time series.
Since $S_{yx}(f_1)$ and $S_{yx}(f_2)$ are statistically independent for $f_1\neq f_2$, also $|\left<S_{yx}(f_1)\right>_{m}|_i$ and $|\left<S_{yx}(f_2)\right>_{m}|_i$ are statistically independent. For this reason, scanning the frequency axis gives access to (a subset of) the statistical ensemble.
\begin{figure}[t]
\centering\namedgraphics{scale=0.45}{xsp-convergence-3d}{\textwidth}
\caption{Sequence of cross spectra $|\left<S_{yx}\right>_{m}|$.}
\label{fig:xsp-convergence-3d}
\end{figure}
Ergodicity allows us to interchange time statistics and ensemble statistics, and thus the running index $i$ of the sequence and the frequency $f$. The important consequence is that the average and the deviation calculated on the frequency axis give access to the average and deviation of the time series, without waiting for multiple realizations to be available. This property helps detect when the cross spectrum leaves the $1/\sqrt{m}$ law and converges to the DUT noise.
Figure~\ref{fig:xsp-convergence-3d} shows a sequence of cross spectra $|\left<S_{yx}\right>_{m}|$, increasing $m$ in powers of two. On the left-hand side of Fig.~\ref{fig:xsp-convergence-3d}, the DUT noise is set to zero. Increasing $m$, the average cross spectrum decreases proportionally to $1/\sqrt{m}$, as emphasized by the slanted plane. The $1/\sqrt{m}$ law is easily seen after averaging on the frequency axis separately for each value of $m$, and then transposing the law to each point of the frequency axis thanks to ergodicity.
The right-hand side of Fig.~\ref{fig:xsp-convergence-3d} shows the same simulation, yet with the DUT noise set to a value of 10 dB lower than the single-channel background. At small values of $m$ the cross-spectrum is substantially equal to the previous case. Yet at $m\gtrsim100$ the cross-spectrum leaves the $1/\sqrt{m}$ law (slanted plane) and converges to the DUT noise (horizontal plane at $-10$ dB). Once again, thanks to ergodicity we can transpose the average on the frequency axis to each point of the frequency axis.
In the rest of this Section we will refer to a generic point of the PSD, letting the frequency unspecified. The variable $f$ is omitted in order to simplify the notation. Hence for example we will write $\Re\{S_{yx}\}$ instead of $\Re\{S_{yx}(f)\}$.
\subsection{Single-channel noise.}
It is explained in Sec.~\ref{sec:xsp-estimation-Sxx} that the single-channel PSD $\left<S_{xx}\right>_m$ is $\chi^2$ distributed with $2m$ degrees of freedom. The average PSD is equal to $\frac1T\,\mathbb{V}\{X\}=\frac1T\mathbb{V}\{A\} + \frac1T\mathbb{V}\{C\}$, where $\mathbb{V}\{\,\}$ is the variance; the deviation-to-average ratio is equal to $1/\sqrt{m}$. Of course the same holds for $S_{yy}$, after replacing $A$ with $B$.
The track seen on the display converges to the DUT noise \emph{plus} the background noise, and shrinks as $m$ increases. The track thickness is twice the deviation. This fact is shown on Fig.~\ref{fig:spectra-seq-11-1024-0316-absSyx-WIDE}. The green plot, labeled $|S_{xx}|$, keeps the same vertical position as $m$ increases, and shrinks.
\begin{figure}[t]
\centering\namedgraphics{scale=0.56, angle=90}{spectra-seq-11-1024-0316-absSyx-WIDE}{\textwidth}
\caption{Simulated PSD, plotted for increasing number $m$ of averaged realizations. The parameter $g=0.32$ ($-10$ dB), which is $\kappa$ in the main text, is the correlated noise, while the single-channel background is one.}
\label{fig:spectra-seq-11-1024-0316-absSyx-WIDE}
\end{figure}
\subsection{Cross-spectrum observed with insufficient \boldmath$m$.}
When the number $m$ of averaged realizations is insufficient for the DUT noise to show up, the system behaves as if the two channels were (almost) statistically independent. In these conditions we can predict the spectrum by setting $X\simeq A$, $Y\simeq B$ and $C\simeq0$, thus $\mathbb{E}\{S_{yx}\}\simeq0$.
The estimator $\smash{\hat{S}}_{yx}=|\left<S_{yx}\right>_m|$ has a Rayleigh distribution with $2m$ degrees of freedom.
Normalizing on the single-channel background $\mathbb{E}\{S_{xx}\}=\mathbb{E}\{S_{yy}\}=1$, and using the results of Sec.~\ref{sec:xsp-noise-rejection}, we find that
\begin{align}
\mathbb{E}\{\hat{S}_{yx}\}
&=\mathbb{E}\{|\left<S_{yx}\right>_m|\}=\sqrt{\frac{\pi}{4m}}=\frac{0.886}{\sqrt{m}}\nonumber\\[1ex]
\mathbb{V}\{\hat{S}_{yx}\}
&=\mathbb{V}\{|\left<S_{yx}\right>_m|\}
=\frac1m\left(1-\frac{\pi}{4}\right)=\frac{0.215}{m}~,\nonumber
\intertext{and therefore}
\mathrm{dev}\{\hat{S}_{yx}\}
&=\sqrt{\mathbb{V}\{|\left<S_{yx}\right>_m|\}}
=\sqrt{\frac{1}{m}\left(1-\frac{\pi}{4}\right)}=\frac{0.463}{\sqrt{m}}\nonumber\\[1ex]
\frac{\mathrm{dev}\{\hat{S}_{yx}\}}{\mathbb{E}\{\hat{S}_{yx}\}}
&=\sqrt{\frac{4}{\pi}-1}=0.523\qquad\text{(independent of $m$)}~.\nonumber
\end{align}
The track is centered at $\smash{\frac{0.886}{\sqrt{m}}}$. This is the estimator bias. The track looks like a horizontal band located at $\mathrm{avg}\pm\mathrm{dev}$, thus on a logarithmic scale extending from $10\log_{10}(1-\mathrm{dev/avg})=-3.21~\unit{dB}$ to $10\log_{10}(1+\mathrm{dev/avg})=+1.83~\unit{dB}$, asymmetrically distributed around the average. This is shown on Fig.~\ref{fig:spectra-seq-11-1024-0316-absSyx-WIDE}. For $m\lesssim100$, the blue plot labeled $|S_{yx}|$ decreases proportionally to $1/\sqrt{m}$ and has the constant thickness of half a decade (5 dB), independent of $m$.
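The Rayleigh figures above are easy to verify with a short Monte Carlo run. A minimal sketch (Python with numpy; unit background, $T=1$, $\kappa=0$; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
m, ntrials = 32, 4000   # realizations per average, Monte Carlo trials

def cg(n):
    """Unit-power complex Gaussian: variance 1/2 per quadrature."""
    s = np.sqrt(0.5)
    return rng.normal(0, s, n) + 1j * rng.normal(0, s, n)

est = np.empty(ntrials)
for i in range(ntrials):
    A, B = cg(m), cg(m)                        # independent channels, no DUT
    est[i] = np.abs(np.mean(B * np.conj(A)))   # |<S_yx>_m| with T = 1

print(est.mean() * np.sqrt(m))   # ~0.886, the estimator bias
print(est.std() / est.mean())    # ~0.523, independent of m
```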
\subsection{Cross-spectrum observed with large \boldmath$m$.} When the number $m$ of averaged realizations is large enough, the background noise vanishes and the DUT spectrum shows up. The cross spectrum no longer decreases but the variance still does. Qualitatively speaking, the average is set by the DUT noise $S_{cc}$ and the deviation is set by the instrument background divided by $\sqrt{m}$. On a logarithmic scale, the track no longer decreases and starts shrinking. This is shown on Fig.~\ref{fig:spectra-seq-11-1024-0316-absSyx-WIDE} for $m\gtrsim100$, blue plot labeled $|S_{yx}|$.
The above reasoning can be reversed. The simultaneous observation that the cross spectrum \emph{stops decreasing} and \emph{shrinks} is the signature that the averaging process is converging. The single-channel background is rejected and the instrument measures the DUT noise (or the hardware limit, whichever is higher). This fact is of paramount importance in some measurements, where for some reason we cannot remove the DUT.
\section{Estimation of \boldmath$S_{xx}$}\label{sec:xsp-estimation-Sxx}
The measurement accuracy depends on three main factors: instrument calibration, instrument background (front-end and quantization), and statistical estimation. Only the latter is analyzed in this Section.
As a property of zero-mean white Gaussian noise, the Fourier transform $X=X'+\imath X''$ is also zero-mean Gaussian, and the energy is equally split between $X'$ and $X''$.
Restricting our attention to a generic point (i.e., to an unspecified frequency), the PSD is
\begin{align*}
\mathbb{E}\{S_{xx}\} &= \frac1T\,\mathbb{E}\Bigl\{\left|X\right|^2\Bigr\}
= \frac1T\,\mathbb{E}\Bigl\{\left[X'^{\,2}+X''^{\,2}\right]\Bigr\}~.
\end{align*}
For use in this Section we define
\begin{align*}
\varsigma^2=\mathbb{E}\{S_{xx}\}~,
\end{align*}
which is the power in 1 Hz bandwidth.
Since $X'$ and $X''$ are zero-mean Gaussian-distributed random variables, a single realization
\begin{align*}
S_{xx} &= \frac1T\,\left[X'^{\,2}+X''^{\,2}\right]
\end{align*}
follows a $\chi^2$ distribution with two degrees of freedom.
After our definition of $\varsigma^2$, and because $S_{xx}$ includes a factor $\frac1T$, we find that
\begin{align*}
\mathbb{V}\{X'\}=\mathbb{V}\{X''\}=\frac{T}{2}\,\varsigma^2~.
\end{align*}
This is seen on the ``scaled $\chi^2$'' column of Table~\ref{tab:xsp-chi-square-prop}, after setting $\nu=2$ (degrees of freedom) and $\sigma^2=\frac{1}{2}T\,\varsigma^2$. On that Table we find that $\mathbb{E}\{S_{xx}\}=\frac{1}{T}\,\nu\sigma^2$, which is equal to $\varsigma^2$, and that $\mathbb{V}\{S_{xx}\}=\frac{1}{T^2}\,2\nu\sigma^4$, hence
\begin{align*}
\mathbb{V}\{S_{xx}\}=\varsigma^4~.
\end{align*}
Averaging on $m$ realizations of $S_{xx}$
\begin{align*}
\left<S_{xx}\right>_m = \frac{1}{m} \sum_{i=1}^{m} \;\frac1T\left[X_i'^{\,2}+X_i''^{\,2}\right],
\end{align*}
we notice that $\left<S_{xx}\right>_m$ has a $\chi^2$ distribution with $2m$ degrees of freedom. Using the right-hand column of Table~\ref{tab:xsp-chi-square-prop}, we find $\mathbb{V}\{\left<S_{xx}\right>_m\}=\frac1m\varsigma^4$.
The uncertainty (standard deviation) is therefore
\begin{align*}
\text{dev}\{\left<S_{xx}\right>_m\} &= \frac{1}{\sqrt{m}} \varsigma^2
&\frac{\text{dev}\{\left<S_{xx}\right>_m\} }{\mathbb{E}\{\left<S_{xx}\right>_m\}} &= \frac{1}{\sqrt{m}}~.
\end{align*}
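As a sanity check, the $1/\sqrt{m}$ law can be reproduced with a short Monte Carlo run. The sketch below (Python with numpy; $T=1$ and $\varsigma^2=1$ assumed, variable names are ours) draws $X'$ and $X''$ directly in the frequency domain:

```python
import numpy as np

rng = np.random.default_rng(2)
m, ntrials = 50, 5000    # averaged realizations, Monte Carlo trials
var = 1.0                # varsigma^2, power in 1 Hz bandwidth (T = 1)

# X' and X'' are zero-mean Gaussian, each with variance varsigma^2/2
Xr = rng.normal(0, np.sqrt(var / 2), (ntrials, m))
Xi = rng.normal(0, np.sqrt(var / 2), (ntrials, m))
Sxx_m = np.mean(Xr**2 + Xi**2, axis=1)   # <S_xx>_m, chi^2 with 2m dof

print(Sxx_m.mean())              # ~1, i.e. E{<S_xx>_m} = varsigma^2
print(Sxx_m.std() * np.sqrt(m))  # ~1, i.e. dev/avg = 1/sqrt(m)
```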
Figure \ref{fig:xsp-S-pdf} shows an example PDF of the spectrum averaged on $m$ realizations. The $\chi^2$ distribution is normalized so that the standard deviation is equal to one. Increasing $m$, the PDF converges to the normal distribution and shrinks.
\begin{figure}[t]
\centering\namedgraphics{scale=0.6}{xsp-S-pdf}{\textwidth}
\caption{Probability density function $f(x)$ of the PSD averaged on $m$ realizations.}
\label{fig:xsp-S-pdf}
\end{figure}
Finally, we may find useful the following normalization
\begin{align*}
S_{aa}&=1\quad\text{(background)} & S_{cc}&=\kappa^2\quad\text{(DUT)}~.
\end{align*}
Expanding $X=X'+\imath X''=(A'+C')+\imath(A''+C'')$ we notice that $X$ is zero-mean white Gaussian noise, and that
\begin{align*}
\mathbb{E}\left\{\left<S_{xx}\right>_m\right\}&=1+\kappa^2 &
\text{dev}\left\{\left<S_{xx}\right>_m\right\}&=\frac{1+\kappa^2}{\sqrt{m}}~.
\end{align*}
\section{Estimation of \boldmath$S_{yx}$ and noise rejection}\label{sec:xsp-noise-rejection}
It is obvious from Eq.~\req{eqn:xsp-psd-avg} that the spectrum $S_{xx}(f)$ always takes \emph{real positive} values, even if averaged on a small number of realizations. Since some kind of fundamental noise is always present in a physical experiment, the probability that $S_{xx}(f)$ nulls at some frequency is zero. Conversely, the cross-spectrum $S_{yx}(f)$ is a \emph{complex} function that converges to the positive function $S_{cc}(f)$ only after averaging on a sufficient number $m$ of realizations, as seen in Eq. \req{eqn:xsp-syx-avg}.
In numerous practical cases we need to plot $S_{yx}(f)$ on a logarithmic vertical scale, which is of course impossible where $S_{yx}(f)$ is not positive.
\begin{itemize}
\item In radio engineering virtually all spectra are given in decibels, which resorts to a logarithmic scale.
\item When the spectrum spreads over a large dynamic range, only a compressed scale makes sense. The logarithmic scale is by far the preferred representation.
\item Numerous spectra found in physical experiments follow a polynomial law because the time-domain derivative (integral) maps into a multiplication (division) of the spectrum by $f^2$. On a logarithmic plot, a power of $f$ maps into a straight line.
\item It is explained in Section \ref{sec:fft-display} that, running the experiment, the average and the deviation of the instrument noise are ruled by the same $1/\sqrt{m}$ law until the number of averaged realizations is sufficient for $S_{yx}(f)$ to converge to $S_{cc}(f)$. This is most comfortably seen on a logarithmic scale.
\end{itemize}
Thus, we need to extend Section \ref{sec:xsp-estimation-Sxx} to the cross spectrum, discussing the suitable estimators. The estimator may introduce noise and bias. In everyday life a better estimator may save only a small amount of time, and in this case it could be appreciated mainly because it is smarter. By contrast, in long-term measurements, like \emph{timekeeping} and \emph{radioastronomy}, a single data point takes years of observation. Here, the choice of the estimator may determine whether the experiment is feasible or not.
\subsection{Basic material}
Let us expand $S_{yx}$
\begin{align}
S_{yx}
&=\tfrac1T\,\mathbb{E}\left\{YX^\ast\right\}\nonumber\\
&=\tfrac1T\,\mathbb{E}\left\{(B+C)\times(A+C)^\ast\right\}\nonumber\\
&=\tfrac1T\,\mathbb{E}\left\{(B'+\imath B''+C'+\imath C'')\times(A'-\imath A''+C'-\imath C'')\right\}\nonumber\\
&=\tfrac1T\,\mathbb{E}\left\{\bigl(B'A'+B''A''+B'C'+B''C''+C'A'+C''A''+C'^{\,2}+C''^{\,2}\bigr)\right.\nonumber\\[0.5ex]
&\left.\qquad+\imath\bigl(B''A'-B'A''+B''C'-B'C''+C''A'-C'A''\bigr)\right\}
\label{eqn:xsp-correl-dut-meas}
\end{align}
and simplify the calculus by normalizing on the variances as follows
\begin{align}
\mathbb{V}\{A\}&=1 & \mathbb{V}\{A'\}&=1/2 & \mathbb{V}\{A''\}&=1/2\nonumber\\
\mathbb{V}\{B\}&=1 & \mathbb{V}\{B'\}&=1/2 & \mathbb{V}\{B''\}&=1/2\nonumber\\
\mathbb{V}\{C\}&=\kappa^2\ll1 & \mathbb{V}\{C'\}&=\kappa^2/2 & \mathbb{V}\{C''\}&=\kappa^2/2~.\nonumber
\end{align}
Notice that a factor $T$ must be restored a-posteriori for a proper normalization on $\mathbb{E}\{S_{aa}\}=\mathbb{E}\{S_{bb}\}=1$ (background power in 1 Hz bandwidth equal to one), as we did in Section \ref{sec:xsp-estimation-Sxx}. Thanks to energy equipartition, the normalized $\mathbb{V}\{A'\}=1/2$ then becomes $\mathbb{V}\{A'\}=T/2$, etc.
The assumption that $\kappa^2\ll1$, though not necessary, is quite representative of actual experiments because the main virtue of the correlation method is the capability of extracting the DUT noise when it is lower than the background.
Looking at \req{eqn:xsp-correl-dut-meas}, we identify the following classes
\def\pba#1{\parbox{27ex}{\setlength{\baselineskip}{2.5ex}#1}}
\def\pbb#1{\parbox{22ex}{\setlength{\baselineskip}{2.5ex}#1}}
\def0pt{0pt}
\begin{center}
\begin{tabular}{|l|c|c|c|l|}\hline
\rule[-1.5ex]{0pt}{4ex}%
terms & $\mathbb{E}$ & $\mathbb{V}$ & PDF & comment\\\hline
\rule[-2.5ex]{0pt}{6ex}%
\pba{$B'A'$, $B''A''$, $B''A'$, $B'A''$}&0&$1/4$&Gauss
&\pbb{product of zero-mean Gaussian processes}\\\hline
\rule[-2.5ex]{0pt}{6ex}%
\pba{$B'C'$, $B''C''$, $C'A'$, $C''A''$ , $B''C'$, $B'C''$, $C''A'$, $C'A''$}
&0&$\kappa^2/4$&Gauss
&\pbb{product of zero-mean Gaussian processes}\\\hline
\rule[0ex]{0pt}{2.5ex}%
$C'^{\,2}+C''^{\,2}$&$\kappa^2$&$\kappa^4$&$\chi^2$&sum of zero-mean\\
&&&$\nu=2$& square Gaussian proc.\\\hline
\end{tabular}
\end{center}
Equation \req{eqn:xsp-correl-dut-meas} can be rewritten as
\begin{align}
S_{yx}&=\tfrac1T\,\mathbb{E}\left\{\mathscr{A}+\imath\mathscr{B}+\mathscr{C}\right\}
\label{eqn:xsp-correl-dut-meas-ABC}
\intertext{where the terms}
\mathscr{A}&=B'A'+B''A''+B'C'+B''C''+C'A'+C''A''\nonumber\\
\mathscr{B}&=B''A'-B'A''+B''C'-B'C''+C''A'-C'A''\nonumber\\
\mathscr{C}&=C'^{\,2}+C''^{\,2}\nonumber
\end{align}
have the statistical properties listed underneath. Notice that $\left<\mathscr{C}\right>_m$ follows a $\chi^2$ distribution with $2m$ degrees of freedom, thus for large $m$ it can be approximated with a Gaussian-distributed variable of equal average and variance, which is denoted by $\bigl<\tilde{\mathscr{C}}\bigr>_m$.
\def0pt{0pt}
\begin{center}
\begin{tabular}{|l|c|c|c|l|}\hline
\rule[-1.5ex]{0pt}{4ex}%
term & $\mathbb{E}$ & $\mathbb{V}$ & PDF & comment\\\hline
\rule[-2.5ex]{0pt}{6.5ex}%
$\left<\mathscr{A}\right>_m$&0&$\displaystyle\frac{1+2\kappa^2}{2m}$&Gauss&average (sum) of zero-mean\\\cline{1-4}
\rule[-2.5ex]{0pt}{6.5ex}%
$\left<\mathscr{B}\right>_m$&0&$\displaystyle\frac{1+2\kappa^2}{2m}$&Gauss&Gaussian processes\\\hline
\rule[0ex]{0pt}{2.5ex}%
$\left<\mathscr{C}\right>_m$&$\kappa^2$&$\displaystyle\kappa^4/m$&$\chi^2$&average (sum) of\\
&&&$\nu=2m$& chi-square processes\\\hline
\rule[-1.5ex]{0pt}{4ex}%
$\bigl<\tilde{\mathscr{C}}\bigr>_m$&$\kappa^2$&$\displaystyle\kappa^4/m$&Gauss& approximates $\left<\mathscr{C}\right>_m$ for large $m$\\\hline
\end{tabular}
\end{center}
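The entries of the table above can be verified numerically. A minimal sketch (Python with numpy), assuming the text's normalization $\mathbb{V}\{A\}=\mathbb{V}\{B\}=1$ and an arbitrary $\kappa^2=0.25$:

```python
import numpy as np

rng = np.random.default_rng(3)
m, ntrials, k2 = 100, 4000, 0.25    # averaged terms, trials, kappa^2

def g(var, shape):
    """Zero-mean Gaussian quadrature with variance var/2."""
    return rng.normal(0, np.sqrt(var / 2), shape)

sh = (ntrials, m)
Ar, Ai, Br, Bi = g(1, sh), g(1, sh), g(1, sh), g(1, sh)
Cr, Ci = g(k2, sh), g(k2, sh)

# <A>_m and <C>_m of Eq. (xsp-correl-dut-meas-ABC)
sA = np.mean(Br*Ar + Bi*Ai + Br*Cr + Bi*Ci + Cr*Ar + Ci*Ai, axis=1)
sC = np.mean(Cr**2 + Ci**2, axis=1)

print(sA.var() * 2 * m)   # ~1 + 2 kappa^2 = 1.5
print(sC.mean())          # ~kappa^2 = 0.25
print(sC.var() * m)       # ~kappa^4 = 0.0625
```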
Next, we will analyze the properties of some useful estimators $\hat{S}_{yx}$ of $S_{yx}$.
Running an experiment, the logarithmic plot is comfortable because the average-to-deviation ratio is easily identified as the thickness of the track, independent of the vertical position. Yet, the logarithmic plot can only be used to display nonnegative quantities.
\subsection{\boldmath$\hat{S}_{yx}=\left|\left<S_{yx}\right>_m\right|$}\label{ssec:xsp-estimator-Abs-value}
The main reason for us to pay attention to this estimator is that it is the default setting for cross-spectrum measurement in most FFT analyzers. Besides, it can be used in conjunction with $\arg\left<S_{yx}\right>_m$ when the delays of the two channels are not equal and useful information is contained in the argument, as happens in radio-astronomy. $|\left<S_{yx}\right>_m|$ is of course suitable to logarithmic plot because it can only take nonnegative values.
The relevant objections against this estimator are
\begin{itemize}
\item There is no reason to include $\Im\left\{S_{yx}\right\}$, which contains half of the total background noise.
\item The instrument background turns into relatively large estimation bias.
\end{itemize}
For large $m$, where $\left<\mathscr{C}\right>_m$ tends to $\left<\smash{\tilde{\mathscr{C}}}\right>_m$, the estimator is expanded as
\begin{align*}
|\left<S_{yx}\right>_m|
&=\frac1T\sqrt{\left[\Re\left\{\left<YX^\ast\right>_m\right\} \right]^2
+ \left[\Im\left\{\left<YX^\ast\right>_m\right\} \right]^2}\\
&=\frac1T \sqrt{\left[\left<\mathscr{A}\right>_m+\left<\smash{\tilde{\mathscr{C}}}\right>_m \right]^2
+ \left[\left<\mathscr{B}\right>_m\right]^2}~.
\end{align*}
\subsubsection{The (not so) silly case of \boldmath$\kappa=0$}
\begin{figure}[t]
\centering\namedgraphics{scale=0.6}{xsp-Gauss-Rayleigh-pdf}{\textwidth}
\caption{Gaussian distribution of variance $\sigma^2=1/2$ and Rayleigh distribution generated by a pair of Gaussian variables of variance $\sigma^2=1/2$.}
\label{fig:xsp-Gauss-Rayleigh-pdf}
\end{figure}
The analysis of this case tells us what happens when $m$ is insufficient for the single-channel background to be rejected, so that the displayed average spectrum is substantially the bias of the estimator.
Since $c\leftrightarrow C=0$, it holds that $\mathscr{C}=0$. Letting
\begin{align*}
\left<\mathscr{Z}\right>_m&=\sqrt{\left[\left<\mathscr{A}\right>_m\right]^2
+ \left[\left<\mathscr{B}\right>_m\right]^2}~,
\end{align*}
we notice that $\left<\mathscr{Z}\right>_m$ is Rayleigh distributed with $2m$ degrees of freedom.
Using Table~\ref{tab:xsp-rayleigh}, we find that
\begin{align*}
&\mathbb{E}\{\left<\mathscr{Z}\right>_m\}=\sqrt{\frac{\pi}{4m}}=\frac{0.886}{\sqrt{m}}&&\text{(average)}\\[1ex]
&\mathbb{V}\{\left<\mathscr{Z}\right>_m\}
=\frac1m\left(1-\frac{\pi}{4}\right)=\frac{0.215}{m}&&\text{(variance)}
\end{align*}
Figure~\ref{fig:xsp-Gauss-Rayleigh-pdf} compares the case $m=1$ (Rayleigh distribution) to the Gaussian distribution associated with the best estimator (Section~\ref{ssec:xsp-estimator-Real-part}).
Interestingly, the deviation-to-average ratio, which also applies to $|\left<S_{yx}\right>_m|$,
\begin{align}
\frac{\displaystyle\text{dev}\{|\left<S_{yx}\right>_m|\}}{\displaystyle\mathbb{E}\{|\left<S_{yx}\right>_m|\}}
&=\sqrt{\frac{4}{\pi}-1}=0.523
\end{align}
is independent of $m$.
In logarithmic scale, the cross spectrum appears as a strip decreasing by $5\log_{10}(m)$ dB, yet \emph{of constant thickness} of approximately 5 dB (dev/avg). This is seen in the example of Fig.~\ref{fig:spectra-seq-11-1024-0316-absSyx-WIDE}.
\subsubsection{Large number of averaged realizations}
The estimator converges to $\kappa^2$, which is trivial, and for $\kappa\ll1$ the deviation-to-average ratio is approximately $1/\sqrt{m}$. This issue is not further expanded here.
\subsection{\boldmath$\hat{S}_{yx}=\Re\left\{\left<S_{yx}\right>_m\right\}$}\label{ssec:xsp-estimator-Real-part}
\begin{figure}[t]
\centering\namedgraphics{scale=0.8}{xsp-estimator-Re}{\textwidth}
\caption{PDF of the estimator $\hat{S}_{yx}=\Re\left\{\left<S_{yx}\right>_m\right\}$.}
\label{fig:xsp-estimator-Re}
\end{figure}
This is the best estimator to the extent that
\begin{itemize}
\item All the useful information is in $\Re\left\{S_{yx}\right\}=\frac1T(\mathscr{A}+\mathscr{C})$.
\item Since the instrument background is equally split in $\Re\left\{S_{yx}\right\}$ and $\Im\left\{S_{yx}\right\}$, discarding $\Im\left\{S_{yx}\right\}$ results in 3 dB improvement of the SNR\@.
\item For the same reason, the instrument background does not contribute to the bias.
\end{itemize}
The main drawback is that this estimator is not suitable to logarithmic plot because $\Re\left\{\left<S_{yx}\right>_m\right\}$ can take negative values, especially at small $m$.
For large $m$ we can approximate $\left<\mathscr{C}\right>_m$ with $\bigl<\smash{\tilde{\mathscr{C}}}\bigr>_m$, which is Gaussian distributed.
Letting
\begin{align*}
\left<\mathscr{Z}\right>_m=\left<\mathscr{A}\right>_m+\left<\smash{\tilde{\mathscr{C}}}\right>_m~,
\end{align*}
the PDF of $\left<\mathscr{Z}\right>_m$ is Gaussian (Fig.~\ref{fig:xsp-estimator-Re}).
Using the results of Sec.~\ref{ssec:xsp-gaussian}, we find
\begin{align}
\mathbb{E}\left\{\left<\mathscr{Z}\right>_m\right\}&=\kappa^2\\
\mathbb{V}\left\{\left<\mathscr{Z}\right>_m\right\}&=\frac{1+2\kappa^2+2\kappa^4}{2m}\\
\text{dev}\left\{\left<\mathscr{Z}\right>_m\right\}&=\sqrt{\frac{1+2\kappa^2+2\kappa^4}{2m}}
\approx\frac{1+\kappa^2}{\sqrt{2m}}\\
\frac{\text{dev}\left\{\left<\mathscr{Z}\right>_m\right\}}{%
\mathbb{E}\left\{\left<\mathscr{Z}\right>_m\right\}}
&=\frac{\sqrt{1+2\kappa^2+2\kappa^4}}{\kappa^2\:\sqrt{2m}}
\approx\frac{1+\kappa^2}{\kappa^2\:\sqrt{2m}}
\label{eqn:xsp-dev-avg-Re}\\[1ex]
P_N&=\frac12 \text{erfc}\!\left(\frac{\kappa^2}{\sqrt{2}\:\sigma}\right)
&&(\mathbb{P}\{\mathbf{x}<0\},~\text{Sec.~\ref{ssec:xsp-gaussian}})\\
P_P&=1-\frac12 \text{erfc}\!\left(\frac{\kappa^2}{\sqrt{2}\:\sigma}\right)
&&(\mathbb{P}\{\mathbf{x}>0\},~\text{Sec.~\ref{ssec:xsp-gaussian}})~.
\end{align}
Accordingly, for $\kappa\ll1$ a 0 dB SNR requires that $m=\frac{1}{2\kappa^4}$. If for example the DUT noise is 20 dB lower than the single-channel background, thus $\kappa=0.1$, averaging on $5{\times}10^3$ spectra is necessary to get a SNR of 0 dB.
On the other hand, if $\kappa\gg1$ the deviation-to-average ratio converges to $1/\sqrt{2m}$, which is what we expect if the instrument background is negligible.
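The rule $m=\frac{1}{2\kappa^4}$ is easy to check by simulation. A sketch (Python with numpy; $\kappa=0.1$, $T=1$, unit background; parameter choices are ours):

```python
import numpy as np

rng = np.random.default_rng(4)
k = 0.1                     # DUT noise 20 dB below the background
m = int(1 / (2 * k**4))     # 5000, predicted for a 0 dB SNR
ntrials = 400

def g(var, n):
    """Zero-mean Gaussian quadrature with variance var/2."""
    return rng.normal(0, np.sqrt(var / 2), n)

est = np.empty(ntrials)
for i in range(ntrials):
    Ar, Ai, Br, Bi = g(1, m), g(1, m), g(1, m), g(1, m)
    Cr, Ci = g(k**2, m), g(k**2, m)
    # Re{<S_yx>_m} with X = A + C, Y = B + C and T = 1
    est[i] = np.mean((Br + Cr)*(Ar + Cr) + (Bi + Ci)*(Ai + Ci))

print(est.mean())               # ~kappa^2 = 0.01, the DUT noise
print(est.std() / est.mean())   # ~1, i.e. 0 dB SNR at m = 1/(2 kappa^4)
```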
\subsubsection{Precision vs.\ energy conservation}
The term $\sqrt{2}$ in the denominator of \req{eqn:xsp-dev-avg-Re} means that the SNR of the correlation system is 3 dB better than the single-channel system. In a physical system ruled by energy conservation this factor does not come for free because the DUT power is equally split into two channels. The conclusion is that the factor $\sqrt{2}$ in the SNR cancels with the $\sqrt{2}$ intrinsic loss of the power splitter. So, the basic \emph{conservation laws} of thermodynamics (or information) are \emph{not violated}.
\subsection{\boldmath$\hat{S}_{yx}=\left|\Re\left\{\left<S_{yx}\right>_m\right\}\right|$}\label{ssec:xsp-estimator-Abs-Real-part}
\begin{figure}[t]
\centering\namedgraphics{scale=0.8}{xsp-estimator-abs-Re}{\textwidth}
\caption{PDF of the estimator $\hat{S}_{yx}=\left|\Re\left\{\left<S_{yx}\right>_m\right\}\right|$.}
\label{fig:xsp-estimator-abs-Re}
\end{figure}
The negative values of $\left<S_{yx}\right>_m$ are folded up, so that $\smash{\hat{S}}_{yx}$ is always positive and can be plotted on a logarithmic axis.
Approximating $\left<\mathscr{C}\right>_m$ with $\left<\smash{\tilde{\mathscr{C}}}\right>_m$ for large $m$, the estimator is expanded as
\begin{align*}
\left|\Re\left\{\left<S_{yx}\right>_m\right\}\right|
&=\frac1T \left| \left<\mathscr{A}\right>_m +\left<\smash{\tilde{\mathscr{C}}}\right>_m\right|~.
\end{align*}
The PDF of $|\Re\{\left<S_{yx}\right>_m\}|$ is obtained from the PDF of $\Re\{\left<S_{yx}\right>_m\}$, already studied in Section~\ref{ssec:xsp-estimator-Real-part}, by folding%
\footnote{A theorem states as follows. Let $\mathbf{x}$ be a random variable, $f(x)$ its PDF, and $\mathbf{y}=|\mathbf{x}|$ a function of $\mathbf{x}$. The PDF of $\mathbf{y}$ is $g(y)=f(y)\mathfrak{u}(y)+f(-y)\mathfrak{u}(-y)$, where $\mathfrak{u}(y)$ is the Heaviside (step) function. Notice that the term $f(-y)\mathfrak{u}(-y)$ is the negative-half-plane ($y<0$) side of $f(y)$ folded onto the positive half plane.}
the negative-half-plane of the original PDF on the positive half plane.
The result is shown in Fig.~\ref{fig:xsp-estimator-abs-Re}.
\subsection{\boldmath$\hat{S}_{yx}=\Re\left\{\left<S_{yx}\right>_{m'}\right\}$, averaging on the positive values}%
\label{ssec:xsp-estimator-Neg-discarded}
\begin{figure}[t]
\centering\namedgraphics{scale=0.8}{xsp-estimator-Re-discard-neg}{\textwidth}
\caption{PDF of the estimator obtained averaging the positive values of $\Re\left\{S_{yx}\right\}$.}
\label{fig:xsp-estimator-Re-discard-neg}
\end{figure}
Averaging $m$ values of $\Re\{S_{yx}\}$, we expect $m'=m\,P_P$ positive values and $m-m'=m\,P_N$ negative values. This estimator consists of averaging over the $m'$ positive values, discarding the negative ones.
As usual, we assume that for large $m$ the term $\left<\mathscr{C}\right>_m$ is approximated with $\left<\smash{\tilde{\mathscr{C}}}\right>_m$, so that its PDF is Gaussian.
The PDF of this estimator is formed%
\footnote{A theorem states as follows. Let $f(x)$ be the PDF of a process, and $g(x)$ the PDF conditional on the event $\mathbf{e}$. The conditional PDF is obtained in two steps. First, an auxiliary function $h(x)$ is obtained from $f(x)$ by selecting the sub-domain defined by $\mathbf{e}$. Second, the desired PDF is $g(x)=h(x)/\int_{-\infty}^{\infty}h(x)\:dx$. The first step generates $h(x)$ equal to $f(x)$, but taking away the portions not allowed by $\mathbf{e}$. The second step scales the function $h(x)$ up so that $\int_{-\infty}^{\infty}g(x)\:dx=1$ (probability of all possible events), thus it is a valid PDF.}
from the PDF of $\Re\{\left<S_{yx}\right>_m\}$ after removing the negative-half-plane values and scaling up the result for the integral of the PDF to be equal to one. This is illustrated in Fig.~\ref{fig:xsp-estimator-Re-discard-neg}.
\subsection{Estimator \boldmath$\hat{S}_{yx}=\left<\max(\Re\{S_{yx}\}, 0_+)\right>_m$}%
\label{ssec:xsp-estimator-Neg-set-to-zero}
\begin{figure}[t]
\centering\namedgraphics{scale=0.8}{xsp-estimator-Re-make-pos}{\textwidth}
\caption{PDF of the estimator $\hat{S}_{yx}=\left<\max(\Re\{S_{yx}\}, 0_+)\right>_m$.}
\label{fig:xsp-estimator-Re-make-pos}
\end{figure}
Averaging $\Re\{S_{yx}\}$, the negative values are replaced with $0_+$. The reason for using $0_+$ instead of just 0 is that $\lim_{x\rightarrow0_+}\log(x)$ exists, while $\lim_{x\rightarrow0}\log(x)$ does not. The notation ``$0_+$'' is a nerdish replacement for the ``smallest positive floating-point number'' available in the computer. This small number is equivalent to zero for all practical purposes, but never produces a floating-point error in the evaluation of the logarithm.
Since the negative values are replaced with zero, the PDF of this estimator (Fig.~\ref{fig:xsp-estimator-Re-make-pos}) derives from the PDF of $\Re\{\left<S_{yx}\right>_m\}$ replacing the negative-half-plane side with a Dirac delta function.
\subsection{Choice among the positive (biased) estimators}
\begin{figure}[t]
\centering\namedgraphics{scale=0.8}{xsp-estimator-comparison}{\textwidth}
\caption{Comparison of the estimators based on $\Re\{S_{yx}\}$.}
\label{fig:xsp-estimator-comparison}
\end{figure}
Having accepted that an estimator suitable to logarithmic plot is positive, thus inevitably biased, the best choice is the estimator that exhibits the lowest variance and the lowest bias.
This criterion first excludes $|\left<S_{yx}\right>_m|$ in favor of one of the estimators based on $\Re\{\left<S_{yx}\right>_m\}$ because $\Im\{S_{yx}\}$ contains only the instrument background, which goes into both the average (bias) and the variance of $|\left<S_{yx}\right>_m|$. Taking $\Im\{S_{yx}\}$ away, the estimator is necessarily based on $\Re\{\left<S_{yx}\right>_m\}$.
Then, we search for a suitable low-bias estimator with the heuristic reasoning shown in Figure \ref{fig:xsp-estimator-comparison}.
It is shown in Sec.~\ref{ssec:xsp-estimator-Real-part} that for large $m$ the PDF of $\Re\{\left<S_{yx}\right>_m\}$ is a Gaussian distribution with mean value $\kappa^2$ and variance $\sigma^2=\smash{\frac{1+2\kappa^2+2\kappa^4}{2m}}$.
The probability of the events $\Re\{\left<S_{yx}\right>_m\}<0$ is represented in Fig.~\ref{fig:xsp-estimator-Re} as the grey area on the left-hand half-plane.
These events have probability $P_N$.
Using the results of Section~\ref{ssec:xsp-gaussian}, the average of these negative events is
\begin{align*}
\mu_N=\int_{-\infty}^{\infty}x\,f_N(x)\:dx = \mu-\frac{1}{\frac12\text{erfc}\!\left(\frac{\mu}{\sqrt{2}\:\sigma}\right)} \: \frac{\sigma}{\sqrt{2\pi\exp(\mu^2/\sigma^2)}}
\qquad\text{(Eq.~\req{eqn:xsp-Gauss-mu-N})}~.
\end{align*}
The estimator is made positive by moving the area $P_N$ from the left-hand half-plane to the right-hand half-plane. The bias depends on the shape taken by this area, and ultimately on the average associated to this shifted $P_N$.
By inspection on Fig.~\ref{fig:xsp-estimator-comparison} we notice that
\begin{description}
\item[Section \ref{ssec:xsp-estimator-Neg-discarded}.] $\smash{\hat{S}_{yx}}=\Re\{\left<S_{yx}\right>_{m'}\}$ makes use only of the positive values, the negative values are discarded.
The PDF area associated with $P_N$ has the same shape as the right-hand side of the PDF\@. We denote the average of this shape with $\mu_1$.
\item[Section \ref{ssec:xsp-estimator-Abs-Real-part}.]
$\smash{\hat{S}_{yx}}=|\Re\{\left<S_{yx}\right>_m\}|$.
The shadowed area associated to $P_N$ is flipped from the negative half-plane to the positive half-plane. The average is $\mu_2=-\mu_N$.
\item[Section \ref{ssec:xsp-estimator-Neg-set-to-zero}.] $\smash{\hat{S}_{yx}}=\left<\max(\Re\{S_{yx}\}, 0_+)\right>_m$.
The shadowed area associated to $P_N$ collapses into a Dirac delta function. The average is $\mu_3=0$.
\end{description}
From the graphical construction of Fig.~\ref{fig:xsp-estimator-comparison}, it is evident that \begin{align*}\mu_1>\mu_2>\mu_3~.\end{align*}
The obvious conclusion is that the preferred estimator is
\begin{align*}
\hat{S}_{yx}=\left<\max(\Re\{S_{yx}\}, 0_+)\right>_m
\qquad\text{(Preferred, Sec.~\ref{ssec:xsp-estimator-Neg-set-to-zero})}~.
\end{align*}
It is worth pointing out that the naive approach of just \emph{discarding the negative values} before averaging (Sec.~\ref{ssec:xsp-estimator-Neg-discarded}) turns out to be the \emph{worst choice} among the estimators we analyzed.
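The ordering $\mu_1>\mu_2>\mu_3$, and thus the ranking of the estimators, can be confirmed with a short Monte Carlo run. In the sketch below (Python with numpy; $\kappa^2=0.1$, $m=50$, $T=1$ are arbitrary choices) the three operations are applied to samples of $\Re\{\left<S_{yx}\right>_m\}$, following the PDF construction of Fig.~\ref{fig:xsp-estimator-comparison}:

```python
import numpy as np

rng = np.random.default_rng(5)
k2, m, ntrials = 0.1, 50, 3000    # kappa^2, averaged terms, trials

def g(var, shape):
    """Zero-mean Gaussian quadrature with variance var/2."""
    return rng.normal(0, np.sqrt(var / 2), shape)

sh = (ntrials, m)
Ar, Ai, Br, Bi = g(1, sh), g(1, sh), g(1, sh), g(1, sh)
Cr, Ci = g(k2, sh), g(k2, sh)
re = np.mean((Br + Cr)*(Ar + Cr) + (Bi + Ci)*(Ai + Ci), axis=1)

tiny = np.finfo(float).tiny       # the "0_+" of the text
e1 = re[re > 0].mean()            # average only the positive values
e2 = np.abs(re).mean()            # fold the negative values
e3 = np.maximum(re, tiny).mean()  # replace negatives with 0_+

print(e1, e2, e3)                 # biased high, in the order mu1 > mu2 > mu3
```

All three averages exceed the true value $\kappa^2$, with the discard-the-negatives estimator the most biased, in agreement with the graphical construction.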
\subsection{The use of \boldmath$\Im\{\left<S_{yx}\right>_m\}$}
It has been shown in Sec.~\ref{sec:xsp-noise-rejection} (Eq.~\req{eqn:xsp-correl-dut-meas}) that all the DUT signal goes into $\Re\{S_{yx}\}$, and that $\Im\{S_{yx}\}$ contains only the instrument background. More precisely, \req{eqn:xsp-correl-dut-meas} is rewritten as
\begin{align*}
&S_{yx}=\tfrac1T\,\mathbb{E}\left\{\mathscr{A}+\imath\mathscr{B}+\mathscr{C}\right\}
\qquad\qquad\quad\qquad\text{(Eq.~\req{eqn:xsp-correl-dut-meas-ABC})}\\[1ex]
&\Re\{S_{yx}\}=\tfrac1T\,\mathbb{E}\left\{\mathscr{A}+\mathscr{C}\right\}
\quad\text{and}\quad
\Im\{S_{yx}\}=\tfrac1T\,\mathbb{E}\left\{\mathscr{B}\right\}
\end{align*}
where $\mathscr{A}$ and $\mathscr{B}$ come from the background and have equal statistics, while $\mathscr{C}$ comes from the DUT spectrum. Therefore
\begin{itemize}
\item $\Im\{\left<S_{yx}\right>_m\}$ is a good estimator of the background
\item the contrast $\Re\{\left<S_{yx}\right>_m\}-\Im\{\left<S_{yx}\right>_m\}$ is a good indicator of the averaging convergence to $S_{cc}$.
\end{itemize}
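A quick numerical experiment illustrates the indicator. The sketch below (Python with numpy; $\kappa^2=0.1$, $T=1$; comparing the frequency-averaged magnitudes is a choice of ours) shows that real and imaginary parts are nearly equal before convergence, while after convergence their contrast approaches $S_{cc}=\kappa^2$:

```python
import numpy as np

rng = np.random.default_rng(6)
k2, nbins = 0.1, 2000          # DUT noise and number of frequency bins

def cg(var, n):
    """Complex Gaussian noise with variance var/2 per quadrature."""
    s = np.sqrt(var / 2)
    return rng.normal(0, s, n) + 1j * rng.normal(0, s, n)

def re_im(m):
    """Frequency-averaged |Re| and |Im| of <S_yx>_m, T = 1."""
    acc = np.zeros(nbins, complex)
    for _ in range(m):
        A, B, C = cg(1, nbins), cg(1, nbins), cg(k2, nbins)
        acc += (B + C) * np.conj(A + C)
    S = acc / m
    return np.abs(S.real).mean(), np.abs(S.imag).mean()

re4, im4 = re_im(4)          # not converged: Re and Im comparable
re4k, im4k = re_im(2000)     # converged: Re ~ k2, Im keeps shrinking
print(re4 / im4, re4k - im4k)
```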
\section{Statistical independence on the frequency axis}
\begin{figure}[t]
\centering\namedgraphics{scale=0.7}{xsp-truncation-effect}{\textwidth}
\caption{Effect of the finite duration of the measurement on the spectrum.}
\label{fig:xsp-truncation-effect}
\end{figure}
As a relevant property of white Gaussian noise, the Fourier transform is also Gaussian, with all values on the frequency axis statistically independent. This property is taken as a good representation of reality even in the case of discrete spectra measured over a finite measurement time $T$, and is used extensively in this report.
Yet, in a strictly mathematical sense time-domain truncation breaks the hypothesis of statistical independence in the frequency domain.
This happens because time truncation is equivalent to a multiplication by a rectangular pulse, which maps into a convolution by a sinc(\,) function in the frequency domain.
This concept is shown in Fig.~\ref{fig:xsp-truncation-effect}, and expanded as follows
\begin{align*}
x(t) &\qquad\Rightarrow&x_T(t)&=x(t)\,\Pi(t/T)\\[1ex]
X(f) &\qquad\Rightarrow&X_T(f)&=X(f) \ast T\frac{\sin(\pi Tf)}{\pi Tf}
\end{align*}
where
\begin{align*}
\Pi(t)=\begin{cases}1&-1/2<t<1/2\\0&\text{elsewhere}\end{cases}
\qquad\leftrightarrow\qquad
\text{sinc}(f)=\frac{\sin(\pi f)}{\pi f}~.
\end{align*}
The consequences are the following.
\begin{itemize}
\item The side-lobes of $T$sinc$(Tf)$ cause energy leakage, thus a small correlation on the frequency axis.
\item Accuracy is reduced because each point collects energy from other frequencies. This may show up in the presence of high peaks (50--60 Hz, for example) or high roll-off bumps.
\item One should question whether the number of degrees of freedom is reduced.
\end{itemize}
The truncation function is called ``window'' on the front panel of analyzers, and sometimes ``taper'' in textbooks about spectral analysis.
Reduced frequency leakage is obtained by a different choice of the truncation function, like the Bartlett (triangular), Hanning (cosine) or Parzen (cubic) window.
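The effect of the truncation function is easy to demonstrate with a tone placed halfway between two FFT bins, where leakage is worst. A sketch (Python with numpy; the tone frequency, observation bin and window choice are arbitrary):

```python
import numpy as np

N = 1024
t = np.arange(N)
x = np.sin(2 * np.pi * (100.5 / N) * t)   # tone between two bins

def psd(x, w):
    """Windowed periodogram, normalized by the window power."""
    X = np.fft.rfft(x * w)
    return np.abs(X)**2 / np.sum(w**2)

S_rect = psd(x, np.ones(N))      # plain truncation, Pi(t/T)
S_hann = psd(x, np.hanning(N))   # Hanning (cosine) taper

# relative leakage roughly 50 bins away from the tone
leak_rect = S_rect[150] / S_rect.max()
leak_hann = S_hann[150] / S_hann.max()
print(leak_rect, leak_hann)      # the Hanning side lobes are far lower
```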
\section{Applications and experimental techniques}
\subsection{PM noise}\label{ssec:xsp-pm-noise}
The first application to frequency metrology was the measurement of Hydrogen masers \cite{vessot64nasa} in the early sixties. Then, the method was used for the measurement of phase noise \cite{walls76fcs} in the seventies, but it found some popularity only in the nineties, when dual-channel FFT analyzers started to be available.
\begin{figure}[t]
\centering\namedgraphics{scale=0.64, angle=0}{xsp-sphi-schemes}{\textwidth}
\caption{Basics schemes for the measurement of phase noise.}
\label{fig:xsp-sphi-schemes}
\end{figure}
Figure \ref{fig:xsp-sphi-schemes} shows some of the most popular schemes for the measurement of phase noise. The mixer is a saturated phase-to-voltage converter in Fig.~\ref{fig:xsp-sphi-schemes} A-C, and a synchronous down-converter in Fig.~\ref{fig:xsp-sphi-schemes} D\@. In all cases correlation is used to reject the noise of the two mixers.
The background noise turns out to be limited by the thermal homogeneity, rather than by the absolute temperature referred to the carrier power. This property was understood only after working on the scheme D \cite{rubiola2000rsi-correlation}. At that time, the other schemes were already known.
Scheme A \cite{walls76fcs} is suited to the measurement of low-noise two-port devices, mainly passive devices with a small group delay, so that the noise of the reference oscillator can be rejected.
Scheme B consists of two separate PLLs, each of which measures the oscillator under test. Correlation rejects the noise of the two reference oscillators. In this way it is possible to measure an oscillator by comparing it to a pair of synthesizers, even if the noise of the synthesizers is higher than that of the oscillator. This is relevant to the development of oscillator technology, where manufacturing constraints make it difficult to have the oscillator at the round frequency of the available standards, and also difficult to build two prototypes at the same frequency.
Scheme C derives from A after introducing a delay in the arms \cite{lance84}. It can be implemented using either a pair of resonators or a pair of delay lines. The optical-fiber delay line is the most promising solution because the delay line can be adapted to the arbitrary frequency of the oscillator under test, while a resonator cannot \cite{rubiola2005josab-delay-line}. Correlation removes the fluctuations of the delay lines \cite{salik04fcs-xhomodyne,salzenstein2007appa-dual-delay-line}.
Scheme D is based on a bridge that nulls the carrier before amplification and synchronous detection of the noise sidebands. This scheme derives from the pioneering work of Sann \cite{sann68mtt}. At that time, the mixer was used to down-convert the fluctuation of the null at the output of a magic Tee. Amplification of the noise sidebands \cite{labaar82microw} and correlation \cite{rubiola2000rsi-correlation} were introduced afterwards.
With modern RF/microwave components, isolation between the two channels may not be a serious problem. The hardware sensitivity is limited by environmental effects, like temperature fluctuations and low-frequency magnetic fields, and by AM noise. The latter is taken in through the sensitivity of the mixer offset to the input power. Only partial solutions are available \cite{rubiola2007uffc-am-to-pm-pollution}.
\subsection{AM noise}\label{ssec:xsp-am-noise}
\begin{figure*}[t]
\centering\textbf{A: amplitude noise of an RF/microwave source}\\[0.5em]
\namedgraphics{scale=0.8}{am-correl-scheme}{\textwidth}\\[2em]
\centering\textbf{B: relative intensity noise (RIN) of a laser}\\[0.5em]
\centering\namedgraphics{scale=0.78}{mce-am-optical}{\textwidth}\\[2em]
\centering\textbf{C: amplitude noise of a photonic RF/microwave source}\\[0.5em]
\centering\namedgraphics{scale=0.8}{am-mwave-photonic}{\textwidth}
\caption{Basic schemes for the measurement of amplitude noise (from \cite{rubiola2005arxiv-am-noise}).}
\label{fig:mce-am}
\end{figure*}
Figure \ref{fig:mce-am} shows some schemes for the cross spectrum measurement of AM noise, taken from \cite{rubiola2005arxiv-am-noise}.
In Fig.~\ref{fig:mce-am}~A, two Schottky-diode or tunnel-diode passive power-detectors are used to measure simultaneously the power fluctuations of the source under test. Isolation between channels is guaranteed by the isolation of the power splitter (18--20 dB) and by the fact that the power detectors do not send noise back to the input.
Correlation enables the rejection of the single-channel noise.
\begin{figure}[t]
\centering\namedgraphics{scale=0.63}{am-wenzel-spectrum}{\textwidth}
\caption{Example of cross spectrum measurement (amplitude noise of an oven-controlled quartz oscillator), taken from \cite{rubiola2005arxiv-am-noise}.}
\label{fig:am-wenzel-spectrum}
\end{figure}
As an example, Fig.~\ref{fig:am-wenzel-spectrum} shows the measurement of a quartz oscillator. Converting the $1/f$ noise into stability of the fractional amplitude $\alpha$, we get $\sigma_\alpha(\tau)=4.3{\times}10^{-7}$ (Allan deviation, constant vs.\ the measurement time $\tau$). This oscillator exhibits the lowest AM noise measured in our laboratory. The single-channel noise rejection achieved by correlation and averaging is more than 10 dB.
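The conversion from $1/f$ noise to a flat Allan deviation used above follows from a standard flicker-noise result, stated here without derivation ($h_{-1}$ denotes the flicker coefficient):

```latex
% For a flicker PSD of the fractional amplitude,
%   S_\alpha(f) = h_{-1}/f ,
% the Allan variance is independent of the measurement time:
\sigma_\alpha^2(\tau) = 2\ln(2)\, h_{-1}
```

This is why $\sigma_\alpha(\tau)$ can be quoted as a single number, constant versus $\tau$.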
Figure \ref{fig:mce-am}~B is the obvious adaptation of scheme A to the measurement of the laser relative intensity noise (RIN). We have started using it routinely.
The scheme of Fig.~\ref{fig:mce-am}~C, presently under study, is intended for the measurement of the microwave AM noise on the modulated light beam at the output of the new generation of opto-electronic oscillators based on optical fibers \cite{Yao1996josab-oeo}, or on whispering-gallery optical resonators.
\begin{figure}
\centering\namedgraphics{scale=0.64}{xsp-am-detector-meas}{\textwidth}
\caption{Measurement of the background noise of a power detector.}
\label{fig:xsp-am-detector-meas}
\end{figure}
\subsubsection{Single-channel vs.\ dual-channel measurements}
In the measurement of PM noise it is more or less possible to test the background of a single-channel instrument by removing the DUT\@. This is possible because the two phase detectors can always be driven from a single oscillator, which serves as the phase reference.\footnote{This statement of course applies only to the background noise of the instrument. When the instrument is used to measure an oscillator we need a reference oscillator, the noise of which must be validated separately.}
The correlation schemes are more complex than the single-channel counterparts, and sometimes difficult to operate. Obviously, the experimentalist prefers the single-channel measurements and uses the correlation schemes only when the sensitivity of the former is insufficient.
Conversely, the measurement of AM noise relies upon the power detector, which does not work without the source. Thus we cannot remove the device under test, and of course we cannot assess the single-channel background noise \emph{of the instrument} in this way. One can object that even in the case of PM noise we cannot measure an oscillator in single-channel mode if we do not have a low-noise reference oscillator. The difference is that in the case of PM noise we can at least validate the instrument, while in the case of AM noise we cannot.
Another difference between AM and PM is that the phase detector is always more or less sensitive to AM noise \cite{rubiola2007uffc-am-to-pm-pollution}, while the amplitude detector is not sensitive to phase noise. In correlation systems, this fact makes the channel separation simple to achieve and to test.
The conclusion is that the cross-spectrum measurement is inherently simpler with AM noise than with PM noise.
\subsection{Other applications}\label{ssec:xsp-other-applications}
Tracking back through the literature, the first use of the cross-spectrum was for the determination of the angular size of stellar radio sources \cite{hanbury-brown52nat}.
In the case of a signal coming through two antennas separated by an appropriate baseline, the baseline introduces a delay that depends on the direction of the source in space. Hence the useful signal $S_{cc}$ cannot be real. Instead, the angle $\arctan(\Im/\Re)$ gives information on the source direction. Very-long-baseline interferometry (VLBI) can be seen as a generalization of this method.
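The role of $\arctan(\Im/\Re)$ can be sketched numerically: for a pure delay between the two antennas, the phase of the cross-spectrum grows linearly with frequency, and its slope returns the delay, hence the direction. A toy Python example (the delay and record length are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(1)
n, d = 4096, 5                        # samples, delay in samples

s = rng.standard_normal(n)            # broadband source signal
x = s                                 # antenna 1
y = np.roll(s, d)                     # antenna 2: delayed copy
Sxy = np.fft.rfft(x) * np.conj(np.fft.rfft(y))

f = np.fft.rfftfreq(n)                # cycles per sample
phi = np.unwrap(np.angle(Sxy))        # arctan(Im/Re), bin by bin
slope = np.polyfit(f[1:200], phi[1:200], 1)[0]
d_est = slope / (2 * np.pi)           # recovered delay
print(d_est)
```

For a circular delay of a noiseless record the fit recovers the delay essentially exactly; with real antennas the slope estimate degrades gracefully with the signal-to-noise ratio.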
When the same method was applied to the intensity interferometer \cite{HanburyBrown1956nature-Correlation,HanburyBrown1956nature-Syrius}, an anti-correlation effect was discovered, due to the discrete nature of light. This phenomenon, known as the Hanbury Brown--Twiss (HBT) effect, was later observed also in microwave signals in the photonic regime \cite{Gabelli2004prl-056801}, i.e., with $h\nu>kT$.
The correlation method finds another obvious application in radiometry \cite{allred62jrnbs}, and of course in Johnson thermometry, which is often considered a branch of radiometry.
Since the cross-spectrum enables the comparison of the PSDs of two noise sources, it can be used to measure a temperature by comparing thermal noise to a reference shot noise. The latter is in turn measured as a dc value by exploiting the property of Poisson processes that the variance equals the mean.
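The Poisson property invoked here (the variance of the counts equals their mean) is easy to verify numerically, and is what allows the shot-noise power to be calibrated from a simple dc measurement. A small Python sketch (the rate and sample count are arbitrary illustration values):

```python
import numpy as np

rng = np.random.default_rng(2)
rate = 1e4                      # mean electron count per gate time
counts = rng.poisson(rate, size=100_000)

# For a Poisson process the variance equals the mean, so the noise
# power (variance) of the shot-noise reference follows from the
# average alone, i.e., from a dc measurement.
fano = counts.var() / counts.mean()
print(fano)                     # close to 1
```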
In a tunnel junction, theory predicts the amount of shot and thermal noise. This fact can be exploited for precision thermometry \cite{Spietz2003Science}, and ultimately to redefine the temperature in terms of fundamental constants.
The measurement of low $1/f$ voltage fluctuations is an important diagnostic tool in semiconductor technology. Field-effect transistors are suited to this task because of the low bias current at their input. In fact, the bias current flowing into the sample turns into a fully correlated voltage through Ohm's law. Additionally, the electrode capacitance may limit the instrument sensitivity. The reader can refer to \cite{sampietro99rsi} for a detailed treatise.
In metallurgy, the cross-spectrum method has been used for the measurement of electromigration in thin metal films through the $1/f$ fluctuation of the conductor resistance. This is relevant in microprocessor technology because the high current density in metal connections can limit the life of the component and make it unreliable. For this reason, aluminum is no longer used.
The high sensitivity is based on the idea that, with white Gaussian noise, $X'$ and $X''$ (the real and imaginary parts) are statistically independent. By synchronously detecting the signal with two orthogonal references, it is therefore possible to reject the amplifier noise even if a single amplifier is shared by the two channels \cite{verbruggen89apa}.
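The statistical independence of $X'$ and $X''$ for white Gaussian noise is easy to check numerically; it is what lets the two orthogonal detection outputs be treated as separate, correlatable channels even though they share one amplifier. A brief Python sketch (the record sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Many records of white Gaussian noise, Fourier transformed.
X = np.fft.rfft(rng.standard_normal((2000, 256)), axis=1)
re = X.real[:, 1:-1]            # X', dropping DC and Nyquist bins
im = X.imag[:, 1:-1]            # X''

cross = (re * im).mean()        # <X' X''> averages toward zero
auto = (re ** 2).mean()         # <X' X'> stays finite
print(cross, auto)
```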
Adapting this idea to RF and microwaves is straightforward \cite{rubiola2002rsi-matrix}. Unfortunately, we still have no application for this.
# Examples of interesting/unconventional solutions to rather "standard" problems?

For example, I recently came across the following way to evaluate the integral of $\cos^2 x - \sin^2 x$ without using double angle formulas:

$$\int (\cos^2 x - \sin^2 x)\,dx = \int (\cos x + \sin x)(\cos x - \sin x)\,dx = \int u\,du = \frac{u^2}{2} + C$$

where $u = \cos x + \sin x$. One can expand the final result to get $\sin x \cos x + C'$, i.e. $\frac{1}{2} \sin (2x) + C'$. Though this may take longer, I find this solution valuable because it reminds us that there is more than one solution, even to a seemingly rigid problem like this.

Another example is solving $\lim \limits_{n \to \infty} \sqrt[n]{n}$ using AM-GM and the squeeze theorem:

$\frac{n - 2 + 2 \sqrt{n}}{n} \geq \sqrt[n]{n} \geq 1$ by AM-GM,

so $1 - \frac{2}{n} + \frac{2}{\sqrt{n}} \geq \sqrt[n]{n} \geq 1$, and then squeeze.

I found this solution to be much more interesting than the standard one, which is to take $\ln$ of the expression and find that limit.

Can you show me other examples of this? (I am personally only an advanced high schooler, but I will certainly appreciate answers at any level and hopefully I will be able to fully understand them some time in the future.) Also note that any area of study is acceptable.

Perhaps the most standard way to compute the sum $$1+3+5+\ldots +(2n-1)=n^2$$ would be to use the arithmetic series with $a_0=1$, $d=2$.
Buffalo Mop is a small unincorporated community in Limestone County, Texas, United States.
References
Unincorporated communities in Texas
Unincorporated communities in Limestone County, Texas
Ghost towns in East Texas
Gun Control Australia Making up Issues Again
March 4, 2020
ONCE again, a major Australian media outlet is giving precious column inches to the inane thought bubbles of Gun Control Australia.
The latest offender is Sydney's The Daily Telegraph, which ran a story entitled "Guns Out For Schoolboys" on February 29.
We're not going to link to the story – we know that media metrics only record total clicks/visits to a story, not the reason people read it, so reading the story to be angry about it only encourages them to write more, since their metrics say lots of people click on those stories.
In the story, GCA call for a ban on shooting competition events in all NSW schools, singling out two private schools in Sydney and demanding they end their shooting competition programme and remove all guns from the schools.
The schools, to their credit, declined to engage with the anti-gun rhetoric, and the Association of Independent Schools NSW literally states the schools are well within their rights to offer a totally legal activity to students, and fully supports them offering legal sporting activities to their students.
Remember: No-one is forcing these students to participate. They will be using specialist single- shot target rifles – quite possibly air rifles, even – conducted in serious competitions affiliated with peak sporting bodies.
Do we want teenagers learning about responsible firearm use in a safe, controlled environment as part of a sport with an Olympic or Commonwealth Games pathway at its upper levels, or do we want them thinking video games are a realistic representation of guns?
The guns the hoplophobes wet their collective pants over have been illegal in Australia for nearly three decades. They were illegal more than a decade before most current high school students were even born. It is absurd to be even considering properly organised target shooting competitions at high schools as some sort of undesirable or harmful situation.
Many of our members – including our president and our media officer – learned safe and responsible firearms handling as part of competition target shooting in high school.
The number of students who have been hurt as part of an organised school-based target shooting competition match in Australia is, as far as we can tell, zero – which makes it safer than rugby, AFL, soccer, cricket or even regular athletics.
What continually irritates us is how the media so frequently ignore our press releases, even on major issues – yet seem only too happy to give oxygen to GCA's empty and harmful drivel.
Here's what we need you to do: When you see an anti-gun story, or a story with GCA as the primary source, contact the media outlet and complain. Be polite about it, but make it clear that every time they run a story that paints law-abiding firearms users in a negative light, they're going to get complaints – and with readership numbers falling dramatically, they really shouldn't be trying to alienate even more readers.
For our part, we've asked the journalist responsible for this story why they thought GCA's random thoughts were newsworthy, and how come they never seem to run any positive stories based on the press releases we send them.
Keeping media and politicians accountable is important work, and we need all the support we can get to keep it up and achieve even more results on behalf of #allshooters.
If you're not already a member, why not join Shooters Union – and if you are a member, why not consider sponsoring a friend to join? https://shootersunion.com.au/join-shooters-union/
Shooters Union joins Federal Firearms Stakeholder Meeting
Ensuring Gel Blasters remain classified as toys, a national permanent firearms amnesty and implementing digital firearms licences and PTAs topped the agenda of a recent Commonwealth- level meeting in Canberra attended by Shooters Union Australia.
Our vice president David Brown attended a Federal Firearms Stakeholder committee meeting in Canberra on February 28, following an invitation from Border Force representatives for Shooters Union to be involved alongside representatives from the shooting industry and Commonwealth
One of the top items on the agenda was the agreement to implement a national permanent firearms amnesty. At this stage, it is envisioned the amnesty will be run along the successful Queensland model, where guns are taken to firearms dealers and registered to a licence or surrendered to the dealer.
It was also reportedly agreed at the meeting that all states would move towards digital licensing and PTA systems; many are already well on the way to implementation.
The Commonwealth Government is also planning new firearms trafficking laws, with maximium penalties including life terms for particularly serious offences.
Mr Brown's suggestion for better education on what gun parts require import approval and which ones do not was also well received and we will be following it up in due course.
One topic featured prominently at the meeting – namely gel blasters, and a general willingness to keep them considered as toys.
"The idea that gel blasters and airsoft guns are toys and are classified as such is very important to Shooters Union," Mr Brown said. "Australia is the only civilised country in the world which restricts them – even the United Kingdom and Communist China allow their citizens to own and use them."
Mr Brown said it was very clear at the meeting that both industry and ABF did not want further restrictions and were even supportive of the toy importers getting a better deal.
"It became quite clear to me listening to the meeting and in subsequent conversations with the two gel blaster industry representatives that we, the firearm industry, need to support these retail groups in keeping gel blasters out of the 'real' firearms world," Mr Brown said.
"It is in our interest because if gel blasters and airsoft guns end up restricted or banned because of their appearance, it could flow onto real firearms with hysterical media and anti-gun groups spouting their uninformed, hateful nonsense to the public and politicians."
Shooters Union is about standing up for all shooters, and as part of that we are a vocal supporter of gel blaster and soft air/airsoft guns, and openly supports the gel blaster and airsoft industry in their efforts towards full, nationwide legalisation and acceptance.
Make sure you're following us on Facebook and Twitter, or visit us on the OzGunLobby forums, to stay up to date with everything we're doing!
Some advice about responding to the Qld Govt "Consultation on replica guns and gel blasters
February 19, 2020
The Queensland Government currently has a "Public consultation on Gel Blasters and other Replica Firearms" underway.
The consultation/survey can be found here:
https://www.getinvolved.qld.gov.au/gi/consultation/7300/view.html
It is very obvious to us the survey has been worded with a view to further restricting gel blasters and replica guns in Queensland, so it is important as many law-abiding firearms users (and people who are concerned about Government over-reach) provide helpful responses to the consultation.
Shooters Union Australia has a very simple position: Gel Blasters are toys and replica guns are harmless inert items; they cannot hurt anyone and they should not be restricted in any way.
To that end, we have put together some "answering points" for you to respond to the consultation with. Please do not just copy/paste these responses; public servants know when this is happening and will disregard the responses.
Use these points to help you put the answers in your own words. If you need any help, contact us on media@shootersunion.com.au
QUESTION: What do you know about gel blasters?
ANSWERING POINT: They are toys which fire a harmless gel ball and cannot hurt anyone.
QUESTION: How do you think gel blaster and replica firearm ownership impacts on community safety?
ANSWERING POINT: It does not, especially considering cricket bat ownership and kitchen knife ownership are not deemed to impact community safety and they have actually killed and injured people, unlike gel blasters or replica guns.
QUESTION: Are you supportive of a sensible set of regulations around replica firearms and gel blasters to support a greater level of community safety?
ANSWER: Strongly Disagree
QUESTION: Please provide your opinion on there being a need for a person to have a reasonable excuse when possessing replica firearms and gel blasters (for example, a reasonable excuse may include being a member of a gel blaster club and taking part in club activities, military re-enactments etc).
ANSWERING POINT: People should not need a reason to own a gel blaster or replica gun. They are toys or decorative items; they cannot hurt anyone.
Also, we do not require baseball bat owners to be a member of a sports club, so why should we make people buying a toy be a member of a club?
QUESTION: How should replica firearms and gel blasters be required to be stored when not in use?
ANSWERING POINT: However the owner feels like it. They are toys and we should not be regulating how people store their toys.
QUESTION: Do you have any ideas on other ways to enhance community safety around the use of replica firearms and any suggestions to enhance the Queensland Police Service ongoing awareness campaign 'Stop and Think', which focuses on responsible ownership of gel blasters and replica firearms?
ANSWERING POINT: Educate the public that assault rifles and machine-guns have been illegal for a long time and are almost non-existent in Australia, so if people see one then it is almost certainly a gel blaster or replica and they should not be alarmed. Also there are laws in place covering brandishing replica weapons (including gel blasters) in public, with prison terms attached, so there is no need for additional laws or regulations.
Northern Territory Reverses A22R Reclassification
February 12, 2020
In a huge win for shooters, the NT News is reporting the Northern Territory Police Service has apparently agreed to reverse its recategorisation of the Savage A22R and Verney-Carron Speedline rifles.
Late last year, the Acting NT Police Commissioner used his powers under the Territory's Firearms Act to declare the guns were Category C and D respectively, effectively banning them.
We spearheaded the effort to fight this, alongside our friends at SIFA, the NT Field & Game Association and SSAA NT, and we are delighted to say the NT Police have indicated they will likely be reversing the decision and returning the guns to Categories A and B where they belong.
According to the story, a Northern Territory Police spokeswoman said "NT Police are repealing the declaration with a view to engaging in more consultation with all parties involved; to make a considered and informed decision on the classification of these types of firearms".
This is an incredible result and a significant win for shooters not just in the NT, but all across Australia.
It's also one of the few times since 1996 that a significant restriction on shooters has been successfully fought and overturned and sends a strong message to politicians and the anti-gun mob that we've had enough of being used for political points scoring and being treated like potential criminals.
This outcome has been made possible by the strength and support of our members as well – so if you know someone who isn't a Shooters Union member (maybe it's you?), why not encourage them to join so we can keep fighting for the rights of #allshooters?
Follow this link and get involved! https://shootersunion.com.au/join-shooters-union/
Learn more about our Legal Defence Fund
February 7, 2020
You can donate here.
More details are available here: FAQs
If you have any questions, please email legal@shootersunion.com.au
Shooters Union is establishing a legal fighting fund
One of the questions we get asked is: How come we're not constantly fighting the State Governments in court every time they pass anti-gun laws?
The simple answer is: Fighting Governments in court is really expensive, even if you win – and even more so if you don't.
However, it's become increasingly apparent that there are cases where it's a necessity – and to that end, Shooters Union Australia is establishing a Legal Defence Fund to help stand up for the rights of #AllShooters.
What will the Fund be used for?
The Legal Defence Fund will be used to make financial contributions to legal cases (including Civil Administrative Tribunal hearings and appeals) directly affecting law-abiding firearms owners as a group.
Who will administer the Fund?
Funding decisions will be made by the Shooters Union Australia executive board, with input from members and experts if required or appropriate.
If a SUA member is charged with a firearms offence, will the Fund help pay for their lawyer?
At this stage no, unfortunately. The Fund is intended for legal matters which affect a number of law-abiding shooters – such as getting suppressors legalised, ensuring handguns remain available to primary producers for occupational reasons, making Category C firearms available for competition target shooting and recreational shooting, or challenging unfair legislation – rather than individual cases.
Having said that, if the outcome of an SUA member's legal case may set a legal precedent, the executive may opt to make assistance from the Fund available at their discretion and on a case-by-case basis.
How can I get in touch about Fund-related matters?
Email: legal@shootersunion.com.au
Queenslanders slam government for shifting legal goalposts
February 7, 2020
THE Queensland Government is coming under pressure from furious primary producers and shooters over allegations they 'moved the goalposts' to nullify a legal case result which had ruled against them.
Queensland farmer James Ryder requires a firearm suppressor to mitigate hearing loss from controlling feral pests on his farm, and applied to the police Weapons Licensing Branch (WLB) for an exemption to own one.
His application was denied due to suppressors being categorised the same as machine-guns and rocket-launchers under current law, but he was advised – in writing – that he could appeal the matter to the Queensland Civil and Administrative Tribunal (QCAT), which he did.
QCAT not only ruled in Mr Ryder's favour, it questioned the restrictions on suppressors and suggested they be removed from the restricted category.
The police appealed the ruling, claiming QCAT did not have jurisdiction to hear the case – and QCAT agreed with them, essentially nullifying its earlier ruling.
The decision has caused outrage among primary producers and shooters alike, with Mr Ryder accusing the state Government of moving the goalposts because they didn't get the outcome they expected.
"I have followed all the correct procedures, including the police service's own advice, and they've decided to change the rules because they don't like how it's turned out for them," he said.
"It's completely unacceptable and I will be taking this legal fight further."
Mr Ryder said he honestly did not see why there was such a fuss over suppressors, either.
"If I was 150km south across the border in NSW, I could easily apply for – and get – a suppressor permit for use on my farm, yet in Queensland they are lumped in with machine guns, bazookas and land mines," he said.
"Hearing loss is not a particularly pleasant thing to experience ,especially when there is a simple fix available that satisfies both Worksafe noise guidelines and biosecurity obligations for farmers at no risk to the community.
"Noise induced hearing loss is a major issue amongst farmers, farm workers and recreational shooters. Suppressor are a safe effective engineering control measure that follows the hierarchy of control guidelines required under Queensland Work Safe laws."
His continuing fight to legalise a vital piece of safety equipment is not over and has the support, backing and assistance of the state's pre-eminent pro-gun organisation, Shooters Union Australia, with president Graham Park describing the suppressor ban as ridiculous and harmful.
"We firmly believe that farmers, primary producers and hunters should be able to legally own suppressors for their firearms," he said.
"It's common knowledge the ban on suppressors exists because of how they're portrayed in movies and video games. People have no idea how they actually work in real life.
"They do not completely silence the shot – it is still quite loud – but what they do is bring the noise level down to a safer level to mitigate hearing loss."
Mr Park said Shooters Union had established a legal fighting fund to help Mr Ryder appeal his case further and get the vital equipment legalised in Queensland.
"We're not all going to suddenly turn into John Wick because we can put a sound suppressor on a hunting rifle," Mr Park said.
"NSW issues suppressor permits and they haven't had any problems, they're freely available in New Zealand without issues – so why is Queensland dragging their heels on this?
"Even if you don't like guns, the implications of the Government shifting the goals to get results it wants are extremely worrying and should concern all Australians.
"It's just not on, and we should all be taking a stand against it."
Positive News from the NT
January 30, 2020
WE have some good news to start 2020 off!
As you may recall, late last year the Northern Territory Government unilaterally recategorised the Savage A22R lever-release .22 rifle from Category A to Category C, and recategorised the Verney-Carron Speedline lever-release centrefire rifles from Category B to Category D.
This was done through a provision in the state's Firearms Act which, regrettably, allows the Territory's Police Commissioner to recategorise firearms simply because he or she feels like it.
Needless to say, we at Shooters Union were not having a bar of it and were quick to make our displeasure known, challenging the NT Police Commissioner and the NT Justice Minister on the issue and demanding an explanation.
We are very pleased to report that, as a direct result of our efforts and those of our fellow shooting organisations – including SIFA, the NT Field & Game Association, and SSAA NT – the Territory's Government has extended the deadline for surrender or relicensing of the affected firearms by 90 days (to the end of April) and is now actively consulting with shooters on the situation.
Given there is an election coming up in March, the NT Government is highly unlikely to want 16,000 law-abiding firearms owners (and voters!) off-side, and we are reliably informed the Government is giving some serious thought to its position on the A22R and Speedline issue.
While there's still some work to be done, the message and the takeaway is clear: When you speak up, get active and get involved, you can get results!
On that note, we have also contributed financially to a hugely successful campaign in the NT creating "I SHOOT, I VOTE" and "I HUNT, I VOTE" bumper stickers for distribution.
The campaign, being spearheaded by our friends at the NT Field & Game Association, has been a runaway success, with the entire first printing run of about 7,000 stickers being taken up already and a second print run in the works too.
The stickers are available absolutely free from shooting clubs, gun shops, pubs and newsagents throughout the Territory, as well as in a digital file for an e-mail signature too.
Make sure you get one of the stickers and help get the message out – and remind politicians that people with gun licences also have a vote on election day, and will use it to support the people who genuinely support us.
Why the Virginia Demonstrations matter to Shooters in Australia
January 24, 2020
Media images of well over 20,000 protestors, many of them lawfully and openly carrying firearms, on the steps of the legislative assembly in the US state of Virginia have been getting extensive coverage online – and with good reason.
The protest was held on January 20th, a day traditionally used for protesting proposed legislative matters in the state as it is a public holiday – Martin Luther King Day – and generated a massive turnout from law-abiding citizens concerned their rights were being ignored to suit a political agenda.
The state's Governor is trying to pass laws which would, among other things, ban the sale of military-style semi-automatic centrefire rifles, restrict handgun sales to an individual to one per month, and enact "red flag" laws allowing the police to seize firearms from individuals who have been reported as being a danger to themselves.
Red flag laws are a controversial subject amongst Second Amendment supporters in the US, with many feeling the laws are far too easy to be abused – the details of who can "red flag" a gun owner vary from state to state.
The rally had been organised by a pro-gun organisation and while firearms were prohibited on the grounds of the state's legislative assembly building, they were permitted elsewhere in the city of Richmond – which law-abiding citizens of the state took full advantage of, with many of them openly bringing rifles, shotguns and handguns to the rally in exercise of their legal rights.
Inconveniently for the anti-gun brigade, who were clearly hoping the whole thing would kick off and shots would be fired, the rally was a completely peaceful affair with minimal police presence – and not only that, but once it had finished, several pro-gun group members stayed behind to clean up the rubbish from the event.
That, friends, is how you peacefully protest for firearms rights, and it's a lesson we would do well to learn in Australia too.
One of the reasons the rally was so important was summed up by Reddit user Glothr, who said: "Not only was it peaceful, it was diverse. They [the media and antis] wanted SO BADLY for this to be a bunch of old racist white dudes so they could perpetuate their white nationalist narrative. Instead, they were greeted by people of all races and backgrounds who united peacefully around the cause of defending liberty. That's what we're about and that's what they demonstrated to the country today. This was exactly the kind of win we needed."
Indeed, there was only one arrest on the day – a 21-year-old woman arrested for refusing to remove a face covering when directed to do so by police, despite being warned twice that she faced arrest for not doing so.
Leftist attempts to deride pro-gun owners as racist nutters playing soldier have been utterly destroyed by the peaceful, organised, and incident-free way the rally was conducted, and it has provided one of the most powerful proofs yet that gun owners are, first and foremost, law-abiding citizens.
So why is a pro-gun rally in the US relevant to Australia? Because it proves the right and the left can get along when the stakes are high. The issue in this case isn't firearms, it's the government being seen to ride roughshod over citizens' rights (in the US case, rights enshrined in their Constitution) and an understanding that while today it's scary black rifles on the hit list, tomorrow it could very easily be "anyone who disagrees with the Government".
In this case, not only were pro-gun people marching against the laws, they were joined by organisations including Antifa and even the Black Panthers.
Picture that for a moment – three groups which are not usually considered to be on each other's Christmas Card lists put aside their differences to come together on an issue that mattered.
Imagine if we could do that in Australia?
There's also a more important message here about not being politically apathetic – it means that when something that affects gun owners happens here, we need to do more than just whinge about it on Facebook or at the range.
The message is clear: Get active, get out there and be involved in the fight – and as Australia's pre-eminent pro-shooting organisation, we're here to help.
Writing Guide for NT Reclassification
November 28, 2019
As you may have heard, earlier this month the then-Acting NT Police Commissioner Michael Murphy unilaterally declared that "all linear repeating firearms with assisted ejection, chambered for rim fire ammunition, to be category C firearms" and "all linear repeating firearms with assisted ejection, chambered for centrefire ammunition, to be category D firearms," under section 8(1) of the Firearms Act 1997.
In practical terms, this means the Savage A22R .22 lever-release rifle is now treated the same as a Ruger 10/22 or any other semi-auto .22 and essentially unavailable to most shooters, while the Verney-Carron Speedline lever-release rifle is now treated the same as an AR-15 or L1A1 SLR and effectively banned.
Both these guns are Category A and B literally everywhere else in Australia.
We wrote to the NT Acting Police Minister and the NT Justice Minister a fortnight ago demanding an explanation and none has been forthcoming.
We are writing to all Shooters Union members in the Northern Territory to ask for your help in making sure this matter isn't swept under the rug and that law-abiding shooters in the Territory get their voices heard in the fight against unfair and unjust gun laws.
Please, contact your local MP as soon as you can and let them know the following key points:
You are angry about the arbitrary re-categorisation and how it has been carried out
You want the decision reversed
You want the Firearms Act amended to remove the Police Commissioner's ability to recategorise firearms into a higher category
Failure to take a pro-gun stance on this (and any other relevant) matter WILL cost them your vote and the vote of all other law-abiding firearms users in their electorate at the next Territory election
Make no mistake, this recategorisation is just the tip of the iceberg and is being closely watched by the other States.
We all need to stand up and take action now to fight for a fair go for all shooters.
Even if you don't own, or have no intention of owning, a lever-release rifle, if this isn't stopped then it's only a matter of time before the antis decide to start coming after pump and lever-action rifles, then "high power sniper rifles" (any centrefire with a telescopic sight), and even potentially PCP air guns (for being more or less silent).
Contacting your MP will only take a few minutes but will make an enormous difference and is one of the most effective ways to get our message to the people in charge.
If you've got any questions or would like more information on how you can help, please don't hesitate to contact media@shootersunion.com.au
Yours in shooting,
The Shooters Union Australia team
package com.google.cloud.billing.v1;
import static com.google.cloud.billing.v1.CloudBillingClient.ListBillingAccountsPagedResponse;
import static com.google.cloud.billing.v1.CloudBillingClient.ListProjectBillingInfoPagedResponse;
import com.google.api.gax.core.NoCredentialsProvider;
import com.google.api.gax.grpc.GaxGrpcProperties;
import com.google.api.gax.grpc.testing.LocalChannelProvider;
import com.google.api.gax.grpc.testing.MockGrpcService;
import com.google.api.gax.grpc.testing.MockServiceHelper;
import com.google.api.gax.rpc.ApiClientHeaderProvider;
import com.google.api.gax.rpc.InvalidArgumentException;
import com.google.api.resourcenames.ResourceName;
import com.google.common.collect.Lists;
import com.google.iam.v1.AuditConfig;
import com.google.iam.v1.Binding;
import com.google.iam.v1.GetIamPolicyRequest;
import com.google.iam.v1.Policy;
import com.google.iam.v1.SetIamPolicyRequest;
import com.google.iam.v1.TestIamPermissionsRequest;
import com.google.iam.v1.TestIamPermissionsResponse;
import com.google.protobuf.AbstractMessage;
import com.google.protobuf.ByteString;
import io.grpc.StatusRuntimeException;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.UUID;
import javax.annotation.Generated;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Assert;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;
@Generated("by gapic-generator-java")
public class CloudBillingClientTest {
private static MockCloudBilling mockCloudBilling;
private static MockServiceHelper mockServiceHelper;
private LocalChannelProvider channelProvider;
private CloudBillingClient client;
@BeforeClass
public static void startStaticServer() {
mockCloudBilling = new MockCloudBilling();
mockServiceHelper =
new MockServiceHelper(
UUID.randomUUID().toString(), Arrays.<MockGrpcService>asList(mockCloudBilling));
mockServiceHelper.start();
}
@AfterClass
public static void stopServer() {
mockServiceHelper.stop();
}
@Before
public void setUp() throws IOException {
mockServiceHelper.reset();
channelProvider = mockServiceHelper.createChannelProvider();
CloudBillingSettings settings =
CloudBillingSettings.newBuilder()
.setTransportChannelProvider(channelProvider)
.setCredentialsProvider(NoCredentialsProvider.create())
.build();
client = CloudBillingClient.create(settings);
}
@After
public void tearDown() throws Exception {
client.close();
}
@Test
public void getBillingAccountTest() throws Exception {
BillingAccount expectedResponse =
BillingAccount.newBuilder()
.setName(BillingAccountName.of("[BILLING_ACCOUNT]").toString())
.setOpen(true)
.setDisplayName("displayName1714148973")
.setMasterBillingAccount("masterBillingAccount1488941620")
.build();
mockCloudBilling.addResponse(expectedResponse);
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
BillingAccount actualResponse = client.getBillingAccount(name);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
GetBillingAccountRequest actualRequest = ((GetBillingAccountRequest) actualRequests.get(0));
Assert.assertEquals(name.toString(), actualRequest.getName());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void getBillingAccountExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
client.getBillingAccount(name);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void getBillingAccountTest2() throws Exception {
BillingAccount expectedResponse =
BillingAccount.newBuilder()
.setName(BillingAccountName.of("[BILLING_ACCOUNT]").toString())
.setOpen(true)
.setDisplayName("displayName1714148973")
.setMasterBillingAccount("masterBillingAccount1488941620")
.build();
mockCloudBilling.addResponse(expectedResponse);
String name = "name3373707";
BillingAccount actualResponse = client.getBillingAccount(name);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
GetBillingAccountRequest actualRequest = ((GetBillingAccountRequest) actualRequests.get(0));
Assert.assertEquals(name, actualRequest.getName());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void getBillingAccountExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String name = "name3373707";
client.getBillingAccount(name);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void listBillingAccountsTest() throws Exception {
BillingAccount responsesElement = BillingAccount.newBuilder().build();
ListBillingAccountsResponse expectedResponse =
ListBillingAccountsResponse.newBuilder()
.setNextPageToken("")
.addAllBillingAccounts(Arrays.asList(responsesElement))
.build();
mockCloudBilling.addResponse(expectedResponse);
ListBillingAccountsPagedResponse pagedListResponse = client.listBillingAccounts();
List<BillingAccount> resources = Lists.newArrayList(pagedListResponse.iterateAll());
Assert.assertEquals(1, resources.size());
Assert.assertEquals(expectedResponse.getBillingAccountsList().get(0), resources.get(0));
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
ListBillingAccountsRequest actualRequest = ((ListBillingAccountsRequest) actualRequests.get(0));
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void listBillingAccountsExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
ListBillingAccountsRequest request =
ListBillingAccountsRequest.newBuilder()
.setPageSize(883849137)
.setPageToken("pageToken873572522")
.setFilter("filter-1274492040")
.build();
client.listBillingAccounts(request);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void updateBillingAccountTest() throws Exception {
BillingAccount expectedResponse =
BillingAccount.newBuilder()
.setName(BillingAccountName.of("[BILLING_ACCOUNT]").toString())
.setOpen(true)
.setDisplayName("displayName1714148973")
.setMasterBillingAccount("masterBillingAccount1488941620")
.build();
mockCloudBilling.addResponse(expectedResponse);
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
BillingAccount account = BillingAccount.newBuilder().build();
BillingAccount actualResponse = client.updateBillingAccount(name, account);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
UpdateBillingAccountRequest actualRequest =
((UpdateBillingAccountRequest) actualRequests.get(0));
Assert.assertEquals(name.toString(), actualRequest.getName());
Assert.assertEquals(account, actualRequest.getAccount());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void updateBillingAccountExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
BillingAccount account = BillingAccount.newBuilder().build();
client.updateBillingAccount(name, account);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void updateBillingAccountTest2() throws Exception {
BillingAccount expectedResponse =
BillingAccount.newBuilder()
.setName(BillingAccountName.of("[BILLING_ACCOUNT]").toString())
.setOpen(true)
.setDisplayName("displayName1714148973")
.setMasterBillingAccount("masterBillingAccount1488941620")
.build();
mockCloudBilling.addResponse(expectedResponse);
String name = "name3373707";
BillingAccount account = BillingAccount.newBuilder().build();
BillingAccount actualResponse = client.updateBillingAccount(name, account);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
UpdateBillingAccountRequest actualRequest =
((UpdateBillingAccountRequest) actualRequests.get(0));
Assert.assertEquals(name, actualRequest.getName());
Assert.assertEquals(account, actualRequest.getAccount());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void updateBillingAccountExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String name = "name3373707";
BillingAccount account = BillingAccount.newBuilder().build();
client.updateBillingAccount(name, account);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void createBillingAccountTest() throws Exception {
BillingAccount expectedResponse =
BillingAccount.newBuilder()
.setName(BillingAccountName.of("[BILLING_ACCOUNT]").toString())
.setOpen(true)
.setDisplayName("displayName1714148973")
.setMasterBillingAccount("masterBillingAccount1488941620")
.build();
mockCloudBilling.addResponse(expectedResponse);
BillingAccount billingAccount = BillingAccount.newBuilder().build();
BillingAccount actualResponse = client.createBillingAccount(billingAccount);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
CreateBillingAccountRequest actualRequest =
((CreateBillingAccountRequest) actualRequests.get(0));
Assert.assertEquals(billingAccount, actualRequest.getBillingAccount());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void createBillingAccountExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
BillingAccount billingAccount = BillingAccount.newBuilder().build();
client.createBillingAccount(billingAccount);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void listProjectBillingInfoTest() throws Exception {
ProjectBillingInfo responsesElement = ProjectBillingInfo.newBuilder().build();
ListProjectBillingInfoResponse expectedResponse =
ListProjectBillingInfoResponse.newBuilder()
.setNextPageToken("")
.addAllProjectBillingInfo(Arrays.asList(responsesElement))
.build();
mockCloudBilling.addResponse(expectedResponse);
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
ListProjectBillingInfoPagedResponse pagedListResponse = client.listProjectBillingInfo(name);
List<ProjectBillingInfo> resources = Lists.newArrayList(pagedListResponse.iterateAll());
Assert.assertEquals(1, resources.size());
Assert.assertEquals(expectedResponse.getProjectBillingInfoList().get(0), resources.get(0));
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
ListProjectBillingInfoRequest actualRequest =
((ListProjectBillingInfoRequest) actualRequests.get(0));
Assert.assertEquals(name.toString(), actualRequest.getName());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void listProjectBillingInfoExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
BillingAccountName name = BillingAccountName.of("[BILLING_ACCOUNT]");
client.listProjectBillingInfo(name);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void listProjectBillingInfoTest2() throws Exception {
ProjectBillingInfo responsesElement = ProjectBillingInfo.newBuilder().build();
ListProjectBillingInfoResponse expectedResponse =
ListProjectBillingInfoResponse.newBuilder()
.setNextPageToken("")
.addAllProjectBillingInfo(Arrays.asList(responsesElement))
.build();
mockCloudBilling.addResponse(expectedResponse);
String name = "name3373707";
ListProjectBillingInfoPagedResponse pagedListResponse = client.listProjectBillingInfo(name);
List<ProjectBillingInfo> resources = Lists.newArrayList(pagedListResponse.iterateAll());
Assert.assertEquals(1, resources.size());
Assert.assertEquals(expectedResponse.getProjectBillingInfoList().get(0), resources.get(0));
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
ListProjectBillingInfoRequest actualRequest =
((ListProjectBillingInfoRequest) actualRequests.get(0));
Assert.assertEquals(name, actualRequest.getName());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void listProjectBillingInfoExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String name = "name3373707";
client.listProjectBillingInfo(name);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void getProjectBillingInfoTest() throws Exception {
ProjectBillingInfo expectedResponse =
ProjectBillingInfo.newBuilder()
.setName("name3373707")
.setProjectId("projectId-894832108")
.setBillingAccountName("billingAccountName929322205")
.setBillingEnabled(true)
.build();
mockCloudBilling.addResponse(expectedResponse);
String name = "name3373707";
ProjectBillingInfo actualResponse = client.getProjectBillingInfo(name);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
GetProjectBillingInfoRequest actualRequest =
((GetProjectBillingInfoRequest) actualRequests.get(0));
Assert.assertEquals(name, actualRequest.getName());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void getProjectBillingInfoExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String name = "name3373707";
client.getProjectBillingInfo(name);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void updateProjectBillingInfoTest() throws Exception {
ProjectBillingInfo expectedResponse =
ProjectBillingInfo.newBuilder()
.setName("name3373707")
.setProjectId("projectId-894832108")
.setBillingAccountName("billingAccountName929322205")
.setBillingEnabled(true)
.build();
mockCloudBilling.addResponse(expectedResponse);
String name = "name3373707";
ProjectBillingInfo projectBillingInfo = ProjectBillingInfo.newBuilder().build();
ProjectBillingInfo actualResponse = client.updateProjectBillingInfo(name, projectBillingInfo);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
UpdateProjectBillingInfoRequest actualRequest =
((UpdateProjectBillingInfoRequest) actualRequests.get(0));
Assert.assertEquals(name, actualRequest.getName());
Assert.assertEquals(projectBillingInfo, actualRequest.getProjectBillingInfo());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void updateProjectBillingInfoExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String name = "name3373707";
ProjectBillingInfo projectBillingInfo = ProjectBillingInfo.newBuilder().build();
client.updateProjectBillingInfo(name, projectBillingInfo);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void getIamPolicyTest() throws Exception {
Policy expectedResponse =
Policy.newBuilder()
.setVersion(351608024)
.addAllBindings(new ArrayList<Binding>())
.addAllAuditConfigs(new ArrayList<AuditConfig>())
.setEtag(ByteString.EMPTY)
.build();
mockCloudBilling.addResponse(expectedResponse);
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
Policy actualResponse = client.getIamPolicy(resource);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
GetIamPolicyRequest actualRequest = ((GetIamPolicyRequest) actualRequests.get(0));
Assert.assertEquals(resource.toString(), actualRequest.getResource());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void getIamPolicyExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
client.getIamPolicy(resource);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void getIamPolicyTest2() throws Exception {
Policy expectedResponse =
Policy.newBuilder()
.setVersion(351608024)
.addAllBindings(new ArrayList<Binding>())
.addAllAuditConfigs(new ArrayList<AuditConfig>())
.setEtag(ByteString.EMPTY)
.build();
mockCloudBilling.addResponse(expectedResponse);
String resource = "resource-341064690";
Policy actualResponse = client.getIamPolicy(resource);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
GetIamPolicyRequest actualRequest = ((GetIamPolicyRequest) actualRequests.get(0));
Assert.assertEquals(resource, actualRequest.getResource());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void getIamPolicyExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String resource = "resource-341064690";
client.getIamPolicy(resource);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void setIamPolicyTest() throws Exception {
Policy expectedResponse =
Policy.newBuilder()
.setVersion(351608024)
.addAllBindings(new ArrayList<Binding>())
.addAllAuditConfigs(new ArrayList<AuditConfig>())
.setEtag(ByteString.EMPTY)
.build();
mockCloudBilling.addResponse(expectedResponse);
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
Policy policy = Policy.newBuilder().build();
Policy actualResponse = client.setIamPolicy(resource, policy);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
SetIamPolicyRequest actualRequest = ((SetIamPolicyRequest) actualRequests.get(0));
Assert.assertEquals(resource.toString(), actualRequest.getResource());
Assert.assertEquals(policy, actualRequest.getPolicy());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void setIamPolicyExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
Policy policy = Policy.newBuilder().build();
client.setIamPolicy(resource, policy);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void setIamPolicyTest2() throws Exception {
Policy expectedResponse =
Policy.newBuilder()
.setVersion(351608024)
.addAllBindings(new ArrayList<Binding>())
.addAllAuditConfigs(new ArrayList<AuditConfig>())
.setEtag(ByteString.EMPTY)
.build();
mockCloudBilling.addResponse(expectedResponse);
String resource = "resource-341064690";
Policy policy = Policy.newBuilder().build();
Policy actualResponse = client.setIamPolicy(resource, policy);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
SetIamPolicyRequest actualRequest = ((SetIamPolicyRequest) actualRequests.get(0));
Assert.assertEquals(resource, actualRequest.getResource());
Assert.assertEquals(policy, actualRequest.getPolicy());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void setIamPolicyExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String resource = "resource-341064690";
Policy policy = Policy.newBuilder().build();
client.setIamPolicy(resource, policy);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void testIamPermissionsTest() throws Exception {
TestIamPermissionsResponse expectedResponse =
TestIamPermissionsResponse.newBuilder().addAllPermissions(new ArrayList<String>()).build();
mockCloudBilling.addResponse(expectedResponse);
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
List<String> permissions = new ArrayList<>();
TestIamPermissionsResponse actualResponse = client.testIamPermissions(resource, permissions);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
TestIamPermissionsRequest actualRequest = ((TestIamPermissionsRequest) actualRequests.get(0));
Assert.assertEquals(resource.toString(), actualRequest.getResource());
Assert.assertEquals(permissions, actualRequest.getPermissionsList());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void testIamPermissionsExceptionTest() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
ResourceName resource = BillingAccountName.of("[BILLING_ACCOUNT]");
List<String> permissions = new ArrayList<>();
client.testIamPermissions(resource, permissions);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
@Test
public void testIamPermissionsTest2() throws Exception {
TestIamPermissionsResponse expectedResponse =
TestIamPermissionsResponse.newBuilder().addAllPermissions(new ArrayList<String>()).build();
mockCloudBilling.addResponse(expectedResponse);
String resource = "resource-341064690";
List<String> permissions = new ArrayList<>();
TestIamPermissionsResponse actualResponse = client.testIamPermissions(resource, permissions);
Assert.assertEquals(expectedResponse, actualResponse);
List<AbstractMessage> actualRequests = mockCloudBilling.getRequests();
Assert.assertEquals(1, actualRequests.size());
TestIamPermissionsRequest actualRequest = ((TestIamPermissionsRequest) actualRequests.get(0));
Assert.assertEquals(resource, actualRequest.getResource());
Assert.assertEquals(permissions, actualRequest.getPermissionsList());
Assert.assertTrue(
channelProvider.isHeaderSent(
ApiClientHeaderProvider.getDefaultApiClientHeaderKey(),
GaxGrpcProperties.getDefaultApiClientHeaderPattern()));
}
@Test
public void testIamPermissionsExceptionTest2() throws Exception {
StatusRuntimeException exception = new StatusRuntimeException(io.grpc.Status.INVALID_ARGUMENT);
mockCloudBilling.addException(exception);
try {
String resource = "resource-341064690";
List<String> permissions = new ArrayList<>();
client.testIamPermissions(resource, permissions);
Assert.fail("No exception raised");
} catch (InvalidArgumentException e) {
// Expected exception.
}
}
}
{"url":"https:\/\/datascience.stackexchange.com\/questions\/55198\/confused-with-the-derivation-of-the-gradient-descent-update-rule","text":"# Confused with the derivation of the gradient descent update rule\n\nI have been going over some theory for gradient descent. The source I am looking at said that the change in cost can be described by the following equation: $$\u2206C=\u2207C\u2219\u2206w$$ where $$\u2207C$$ is the gradient vector\/vector derivative of the cost function (MSE) and $$\u2206w$$ is the change in weights. It said that the goal is to make the change in cost negative. Good so far. My issue is with the next part. It states that $$\u2206v=-\u03b7\u2207C$$ My issue is with this, and why $$\u2206v$$ is set to this. Why would we want to change the weights by a small amount of the gradient function?\n\nUpon writing this I have realised the answer to the question. I am still going to post so that anyone else who wants to learn where the update rule comes from can do so. I have come to this by studying the equation carefully. $$\u2207C$$ is the gradient vector of the cost function. The definition of the gradient vector is a collection of partial derivatives that point in the direction of steepest ascent. Since we are performing gradient 'descent', we take the negative of this, as we hope to descend towards the minimum point. The issue for me was how this relates to the weights. It does so because we want to 'take'\/'travel' along this vector towards the minimum, so we add this onto the weights. Finally, we use neta which is a small constant. It is small so that the inequality $$\u2206C>0$$ is obeyed, because we want to always decrease the cost, not increase it. However, too small, and the algorithm will take a long time to converge. 
This means the value for eta must be experimented with.","date":"2020-10-19 21:50:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 7, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8417530059814453, \"perplexity\": 128.98042353239154}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-45\/segments\/1603107866404.1\/warc\/CC-MAIN-20201019203523-20201019233523-00201.warc.gz\"}"}
The Triouleyre was a French automobile manufactured from 1896 to 1898. The car had a rear-mounted five-horsepower horizontal engine along the lines of a Benz, driving the back axle through belts and chains. Two started in the 1896 Paris–Marseille–Paris and Paris–Nantes races but failed to finish.
References
David Burgess Wise, The New Illustrated Encyclopedia of Automobiles.
1890s cars
Defunct motor vehicle manufacturers of France
Cars introduced in 1896
Laupheim is a German town located in the state of Baden-Württemberg; since January 2016 it has been a Große Kreisstadt (major district town).
Notes
Other projects
External links
Municipalities in the district of Biberach
{"url":"https:\/\/github.com\/eugenio-valdano\/threshold","text":"# eugenio-valdano\/threshold\n\nNo description, website, or topics provided.\nPython\nFetching latest commit\u2026\nCannot retrieve the latest commit at this time.\n Failed to load latest commit information. .gitignore LICENSE README.md test_system.py threshold.py threshold_util.py\n\n# Computing the Epidemic Threshold on Temporal Networks\n\nProvides Python tools for computing the epidemic threshold on temporal network, as explained in paper\n\nAnalytical Computation of The Epidemic Threshold on Temporal Networks\n\nValdano E, Ferreri L, Poletto C, Colizza V, Phys Rev X 5, 021005 2015.\n\nWhen you use this code, please cite the above reference.\n\n## Content\n\n\u2022 test_system.py checks if your system has all the needed libraries.\n\u2022 threshold.py main module.\n\u2022 threshold_util.py additional methods for network handling.\n\n## Required external modules\n\n\u2022 numpy\n\u2022 scipy\n\u2022 networkx\n\u2022 pandas (for threshold_util.py)\n\nRun test_system.py to check if you have everything you need.\n\n# Overview\n\nThe package consists of two objects: the class tnet for uploading and managing the temporal network, and the class threshold, for the actual computation of the threshold.\n\n## import\n\nThe directory containing threshold.py must be in your Python search path. You can temporarily add it using\n\nfrom sys import path\npath.append('<dir to threshold.py>')\n\nThen actually import the module as, for instance,\n\nimport threshold as thr # main module\nimport threshold_util as thu # additional utils\n\n## tnet: manage your temporal network\n\nClass tnet is able to load a temporal network given in different formats:\n\n\u2022 path to a text file containing the whole edge list. First two columns represent edges' origin and destination, while last column is the time stamp. Time stamps are assumed to be integers from 0. 
If there are more than 3 columns, then 3rd column is interpreted as edge weight. Further columns between the 3rd and the last (time) are disregarded. Default separator is \\t; different separators (e.g. separator=',') can be input via the optional keyword separator in the tnet constructor. By default the edge list is assumed undirected; this can be changed via the optional keyword directed in the tnet constructor.\n\u2022 (Python) list of networkx Graph or DiGraph objects. If the network is weighted, weights must be assigned to edges as weight keywords.\n\nThe network can then be loaded in class tnet as follows:\n\nR = thr.tnet(my_network)\n\n### Arguments for tnet, with their default values\n\n\u2022 my_network: where to look for the network, according to supported formats (see above);\n\u2022 period = None: set period like this, if only a part of the network is to be used, up to period T (less than the one inferred from time stamps);\n\u2022 dtype = 'float128': the bit length of the used float. 'float128' is the default because it is often needed. Every string that is not 'float64' is interpreted as 'float128'.\n##### other optional keywords\n\u2022 directed: it may be used when loading from text file. If directed=True, then the edge list is assumed to be directed. If not specified, treated as directed=False. When loading from a list of networkx graphs, it inherits from them the fact of being (un)directed.\n\u2022 attributes=None: with this keyword you can provide a dictionary for assigning node attributes. Imagine your nodes are people, you could set attributes={'id1':'male','id2':'female'}. The dictionary does not have to be exhaustive. Nodes without attribute are allowed.\n\u2022 separator: it may be used when loading from text file, to specify the separator. If not specified, treated as separator='\\t'.\n\n### Attributes\n\nname description\nN number of nodes.\nT period. You can manually reduce it. 
It will drop the time steps in excess from the end.\nweighted True\/False\nlG list of networkx graphs\nlA list of adjacency matrices in scipy.sparse.csr_matrix format\nattributes node attributes\nnodelist list of nodes\n\n## threshold: compute the threshold\n\nIntstantiate a threshold object like this:\n\nmyth = th.threshold(X)\n\nWhere X can be either a tnet object or a list of adjacency matrices in scipy.sparse.csr_matrix. Additional optional arguments are\n\n##### related to power method:\n\u2022 eval_max=20000: maximum number of eigenvalue evaluations.\n\u2022 tol=1e-6 : tolerance for power method convergence.\n\u2022 store=10 : number of eigenvector(value) values to use to check convergence.\n\u2022 convergence_on_eigenvector=True. If True uses the algorithm that checks convergence on the L1 norm of the principal eigenvector (probably more accurate). If False, checks the convergence of the eigenvalue estimate itself.\n##### related to the temporal network:\n\u2022 weighted=None. You have to specify it when you provide a list of adjacency matrices instead of a tnet object. You can specify it also with a tnet object if you want to override the .weighted attribute of the tnet object. If the network itself is weighted, you still can set weighted=False here. It simply means it multiplies transmissibility directly to the adjacency matrices. To know more about weights, read this article. weighted=False is more time-efficient than weighted=True.\n\u2022 attributes=None. It is ignored when X is a tnet object, as it will inherit the attributes from X. When X is a list of matrices, you can use this to provide a list of length N containing the attribute of each node. If you do not wish to set an attribute for node i, put None in the list at place i.\n\nYou can access and edit eval_max, tol, store and weighted as class attributes.\n\nThe class has also the attribute convergente_on which is either eigenvector or eigenvalue. 
You can access it and edit it.\n\nFor instance:\n\nmyth.tol = 1e-5\nmyth.convergence_on = 'eigenvalue'\n\nThe class has the attribute lA which is the list of adjacency matrices. You can access it and set it safely.\n\nFinally, the attribute avg_k returns the average (weighted) degree of the network, i.e., \\frac{\\sum_{t=1}^T\\sum_{i,j}A_{t,ij}}{NT}\n\n### compute method\n\nThis carries out the actual computation of the threshold.\n\nx = th.compute(mu, vmin=1e-3, vmax=1, maxiter=50, root_finder='brentq', **kwargs)\n\u2022 mu is the only compulsory argument. It can be either a single value (recovery probability) or a dictionary having a recovery probability for every attribute: {'attr 1': 0.1, 'attr 2': 0.3, 'default':0.6}. It must always have a 'default' value, which will be assigned to nodes with no attribute.\n\u2022 vmin and vmax are the boundaries of the intervals in which to look for the threshold.\n\u2022 maxiter is the maximum number of iterations of the root finding algorithm.\n\u2022 root_finder can be either 'brentq' or 'bisect', referring to the functions in scipy.optimize. For further details see, for instance, scipy documentation.\n\u2022 Other keyword arguments are directly sent to the root finding scipy function (e.g. xtol and rtol).\n\n## threshold_util\n\nThis module contains two functions: DataFrame_to_lG and DataFrame_to_lA. They turn a pandas.DataFrame object into a list of networkx graphs or scipy.sparse CSR matrix. 
The former is a suitable input for threshold.tnet, the latter for threshold.threshold.\n\n### DataFrame_to_lG\n\nlG = thu.DataFrame_to_lG(df, directed=False, weight=None, source='source', target='target', time='time')\n\u2022 df is a pandas.DataFrame.\n\u2022 directed bool variable about (un)directedness.\n\u2022 source name of the column of source nodes.\n\u2022 target name of the column of target nodes.\n\u2022 time name of the column with timestamps.\n\u2022 weight can be None (unweighted network) or a string with the name of the column to be interpreted as weights.\n\nIt returns a list of networkx Graph or DiGraph objects.\n\n### DataFrame_to_lA\n\nAssumes node id's are integers from 0 to N-1, where N is the number of nodes.\n\nlA = thu.DataFrame_to_lA(df, directed=False, source='source', target='target', time='time', weight='weight', dtype=np.float128, force_beg=None, force_end=None)\n\u2022 df is a pandas.DataFrame.\n\u2022 directed bool variable about (un)directedness.\n\u2022 source name of the column of source nodes.\n\u2022 target name of the column of target nodes.\n\u2022 time name of the column with timestamps.\n\u2022 weight can be None (unweighted network) or a string with the name of the column to be interpreted as weights.\n\u2022 force_beg if not None, will discard all timesteps smaller than this.\n\u2022 force_end if not None, will discard all timesteps larger than this.","date":"2017-03-01 20:55:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 
0.27434423565864563, \"perplexity\": 3820.376919464102}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-09\/segments\/1487501174276.22\/warc\/CC-MAIN-20170219104614-00131-ip-10-171-10-108.ec2.internal.warc.gz\"}"}
Q: Can I evaluate a string in C#? I have to pad a string and return it before I can do a Substring on the string.
Is there a way I can evaluate the string after the padding and combine the two statements into one line?
This works
string numberOfRecords = allRecords.Count().ToString().PadLeft(8, '0');
numberOfRecords = numberOfRecords.Substring(numberOfRecords.Length-8,8);
But this does not
string numberOfRecords = allRecords.Count().ToString();
numberOfRecords = numberOfRecords.PadLeft(8, '0').Substring(numberOfRecords.Length-8,8);
A: string is immutable in C#. In the first case, you call PadLeft(8, '0') and assign the result back to numberOfRecords, so it contains the updated (padded) value.
But in the second case, you call PadLeft without storing its result, so the value of numberOfRecords stays the same. The call then fails because numberOfRecords.Length still refers to the original value (allRecords.Count().ToString()), and Substring may throw an exception when that length is less than 8. Since every string method returns a new string, you can keep chaining off the padded result instead of off the original variable.
\section{Introduction}
\begin{figure}[b]
\onecolumn
\centering
\copyright This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
\vspace{-2.05cm}
\twocolumn
\end{figure}
\ac{ris}-aided systems are enabling enhanced communication performance, e.g., because of the potential to serve \acp{mt} in a blocked position, and are thus considered to be a key technology in 6G systems \cite{IRS_6G}.
Having accurate estimates of both the direct and the cascaded channel including the \ac{ris} is crucial.
Since the \ac{ris} only consists of passive elements, processing the impinging waves is not possible.
Consequently, no separate channel estimation can be conducted at the \ac{ris}.
To illuminate the cascaded channel, a large number of training sequences has to be transmitted over different phase allocations at the \ac{ris}.
Since these phase shifts heavily impact the estimation quality,
optimizing them is an important~task.
A variety of approaches for phase optimization and channel estimation has been considered in the literature. An on/off strategy was proposed in \cite{8879620,9127834,9130088} where the direct and the cascaded channels are estimated sequentially. However, it was shown in \cite{9053695} that this strategy is suboptimal. As shown in \cite{9747624,9053695}, the \ac{dft} matrix is the optimal phase allocation matrix when employing the \ac{ls} estimator for full illumination.
Unfortunately, the optimal phase allocations are unknown in general for the \ac{mmse} channel estimator or when having reduced phase allocations.
Therefore, the work in \cite{9133142} discusses optimization of discrete phase shifts, and \cite{9081935} investigates joint pilot and phase optimization.
In \cite{9543577}, a projected gradient descent algorithm based on a sparse geometry-modeled channel is proposed for optimizing the phase allocation matrix. Therein, the optimized phase matrix outperforms \ac{dft}-based phase allocations.
Furthermore, there exist different strategies to reduce the pilot overhead, e.g., an element grouping strategy \cite{9039554}, or a two-phase channel estimation procedure exploiting correlations in the cascaded channel \cite{9732214}.
The work in \cite{9747624} provides an overview of the \ac{ls} and the \ac{mmse} estimator for full illumination.
Surveys on channel estimation in \ac{ris}-aided systems can be found in \cite{9326394,9722893}.
\textit{Contributions:}
We perform a study on reduced \ac{dft}-based phase allocations where we exhaustively search for the best combination of \ac{dft} columns as phase matrix for a given radio propagation environment and a certain \ac{ris} configuration. This is in contrast to the analysis in \cite{9543577} where a combination of \ac{dft} columns is chosen heuristically for comparison. Thereby, we show the great potential of optimizing the phase allocations, which depend heavily on the considered scenario.
Motivated by this observation, we propose a \ac{nn} which jointly learns the phase matrix and the channel estimator in a supervised manner.
The first part of the \ac{nn} emulates the observed signal by interpreting the angles of the reduced phase matrix as parameterizable weights. The phase matrix module by design fulfills the unit magnitude constraint enforced by the passive nature of the \ac{ris} elements that is problematic in classical \ac{ris} optimization algorithms.
This allows the reduced phase matrix to be adapted to the propagation scenario through training.
The second part of the \ac{nn} consists of a \ac{cnn} for channel estimation.
We show in numerical experiments that the proposed approach outperforms \ac{dft}-based and random phase allocations together with state-of-the-art channel estimators.
We further study the properties of the learned reduced phase allocations, i.e., the performance with respect to different channel estimators.
\section{System and Channel Model}\label{sec:system}
We consider a \ac{ris}-aided \ac{simo} system where we denote the direct channel between a single-antenna \ac{mt} and an $M$-antenna \ac{bs} by $\B h_0\in\mathbb{C}^M$. The channel between the \ac{ris} with $L$ passive elements and the \ac{mt} is denoted by $\B h_1\in\mathbb{C}^L$, whereas the channel between the \ac{ris} and the \ac{bs} is denoted by $\B H_2\in\mathbb{C}^{M\times L}$. The received uplink signal is then given by
\begin{align}
\B y^\prime &= \B h_0 + \B H_2\B \Phi \B h_1 + \B n^\prime
\label{eq:system1}
\end{align}
where $\B \Phi = \operatorname{diag}(\B v)\in\mathbb{C}^{L\times L}$ comprises the unimodular phase shift coefficients at the \ac{ris} elements and $\B n^\prime\sim\mathcal{N}_\mathbb{C}(\B 0, \sigma^2\bm{\op{I}})$ is additive white Gaussian noise.
Due to the passive elements at the \ac{ris}, the amplitudes of the reflected signals are not changed. Hence, $v_\ell= \op e^{\op j \theta_\ell}$
with the angle $\theta_\ell\in[0,2\pi)$ and unit-magnitude entries $|v_\ell| = 1$ for $\ell=1,\dots,L$.
With $\B H = [\B h_0, \B h_1^{\op T} \circledast \B H_2 ]\in\mathbb{C}^{M\times (L+1)}$, where $\circledast$ denotes the Khatri-Rao product,
the system in \eqref{eq:system1} can be written as $\B y^\prime = \B H \B v^\prime + \B n^\prime$ where $\B v^\prime = [1, \B v^{\op T}]^{\op T}$, see e.g., \cite{9747624}.
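The Khatri-Rao identity behind this reformulation can be checked numerically. The following numpy sketch (toy dimensions, chosen for illustration only) verifies that the cascaded channel $\B H_2\B \Phi \B h_1$ is linear in the phase vector $\B v$ with effective matrix $\B h_1^{\op T} \circledast \B H_2$:

```python
import numpy as np

rng = np.random.default_rng(5)
M, L = 4, 6  # toy sizes, not the paper's configuration

h1 = rng.standard_normal(L) + 1j * rng.standard_normal(L)
H2 = rng.standard_normal((M, L)) + 1j * rng.standard_normal((M, L))
v = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, L))  # unimodular RIS phases

# Khatri-Rao product of the row h1^T with H2: column l of H2 scaled by h1[l]
G = H2 * h1[None, :]

# cascaded channel H2 diag(v) h1 equals G v for any phase vector v
assert np.allclose(H2 @ np.diag(v) @ h1, G @ v)
```

This is why only the products $h_{1,\ell}\,[\B H_2]_{:,\ell}$, and not $\B h_1$ and $\B H_2$ individually, can be identified from the observations.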
Note that $\B h_1$ and $\B H_2$ of the cascaded channel $\B H_2\B \Phi \B h_1$ cannot be estimated explicitly \cite{9747624}. Therefore, $N_v$ different phase allocations are considered, that are collected in $\B V = [\B v^\prime_1,\dots,\B v^\prime_{N_v}]$, to illuminate the channel. This yields
\begin{align}\label{eq:system_unvec}
\B Y &= \B H \B V + \B N \in\mathbb{C}^{M\times N_v}
\end{align}
as the training sequence where the $N_v$ different observations are collected as the columns of $\B Y$.
After vectorization, we get
\begin{equation}
\B y = (\B V^{\op T} \otimes \bm{\op{I}}) \B h + \B n = \B A \B h + \B n \in\mathbb{C}^{MN_v},
\label{eq:system_vec}
\end{equation}
with the vectorized expressions $\B h = \operatorname{vec}(\B H)$, $\B y=\operatorname{vec}(\B Y)$, $\B n=\operatorname{vec}(\B N)$, and the observation matrix $\B A = \B V^{\op T} \otimes \bm{\op{I}}$, where $\otimes$ denotes the Kronecker product.
We define the \ac{snr} as $\text{SNR} = 1/\sigma^2$ where we normalize the channels to $\op E [\|\B h\|_2^2] = M(L+1)$.
For the construction of a scenario-specific channel dataset, we use the QuaDRiGa channel simulator \cite{QuaDRiGa1}.
We consider an \ac{uma} scenario following the 3GPP 38.901 specification, where the \ac{bs} is placed at a height of 25m and covers a sector of 120°. The \ac{ris} is placed opposite to the \ac{bs} with a distance of 500m at the same height, cf. Figure \ref{fig:bs_cell_irs}. Note that, opposite to the \ac{mt} with possibly \ac{nlos} channels, the channel between \ac{ris} and \ac{bs} has \ac{los} condition. We want to highlight that, although the position of the \ac{bs} and \ac{ris} is fixed, the corresponding channel is not constant within the dataset but slightly changes according to the \ac{uma} conditions.
The generated channels are post-processed to remove the path gain.
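The matrix and vectorized forms of the signal model can be sketched in a few lines of numpy; the dimensions below are toy values chosen for illustration, not the paper's configuration. The sketch confirms the identity $\operatorname{vec}(\B H \B V) = (\B V^{\op T} \otimes \bm{\op I})\operatorname{vec}(\B H)$ used to go from the matrix to the vectorized model:

```python
import numpy as np

rng = np.random.default_rng(0)
M, L, N_v = 4, 8, 5  # toy sizes, not the paper's configuration

# Stacked channel H = [h_0, h_1^T (Khatri-Rao) H_2] of size M x (L+1)
H = (rng.standard_normal((M, L + 1))
     + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

# N_v unimodular phase allocations; first row is the fixed 1 for the direct path
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, N_v))
V = np.vstack([np.ones((1, N_v)), np.exp(1j * theta)])

Y = H @ V  # noiseless training observations in matrix form

# Vectorized form: vec(H V) = (V^T kron I_M) vec(H)
A = np.kron(V.T, np.eye(M))
y = A @ H.flatten(order="F")  # column-major flattening implements vec(.)
assert np.allclose(y, Y.flatten(order="F"))
```

Note that the column-major (`order="F"`) flattening is what matches the $\operatorname{vec}$ operator in the derivation.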
\begin{figure}[t]
\centering
\input{BS_cell_tikz}
\caption{Sketch of the considered \ac{ris}-aided system in an \ac{uma} scenario where the \ac{ris} is placed opposite to the \ac{bs}.}
\label{fig:bs_cell_irs}
\end{figure}
\section{Reference Methods}\label{sec:refs}
We compare our proposed method for joint phase optimization and channel estimation to state-of-the-art choices of the phase matrix and the channel estimator known from the literature, which are introduced in the following.
\subsection{Channel Estimation}\label{sec:refs_est}
We briefly introduce the \acp{gmm} and the \ac{cme} based thereon from \cite{9842343,9747226}.
A \ac{gmm} with $K$ components is a \ac{pdf} of the form
$f_{\B h}^{(K)}(\B h) = \sum_{k=1}^K p(k) \mathcal{N}_{\mathbb{C}}(\B h; \B \mu_k, \B C_k)$
consisting of a weighted sum of $ K $ Gaussian \acp{pdf}.
Given data samples, an \ac{em} algorithm can be used to fit a $ K $-components \ac{gmm}~\cite[Sec. 9.2]{bookBi06}.
In \cite{9842343,9747226}, a \ac{cme} is formulated based on \acp{gmm}, which is proven to asymptotically converge to the true \ac{cme} when $K$ grows large. The estimator is formulated as a convex combination of \ac{lmmse} terms, given as
\begin{equation}\label{eq:gmm_full}
\hat{\B h}^{(K)} = \sum_{k=1}^K p(k \mid \B y) ( \B \mu_k + \B C_k\B A^{\op H} \B C_{\B y,k}^{-1} (\B y - \B A\B \mu_k))
\end{equation}
where the responsibilities $p(k \mid \B y)$ are computed by
\begin{equation}\label{eq:gmm_likelihood}
p(k \mid \B y) = \frac{p(k) \mathcal{N}_{\mathbb{C}}(\B y; \B A\B\mu_k, \B C_{\B y,k}) }{\sum_{i=1}^K p(i) \mathcal{N}_{\mathbb{C}}(\B y; \B A\B \mu_i, \B C_{\B y,i}) }
\end{equation}
with $\B C_{\B y,k} = \B A \B C_k \B A^{\op H} +\sigma^2 \bm{\op{I}}$, cf. \eqref{eq:system_vec}.
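As a concrete illustration, the following numpy sketch evaluates this convex combination of \ac{lmmse} terms for a toy \ac{gmm} whose parameters are drawn at random; in the paper, the means and covariances stem from an \ac{em} fit to channel samples, and all dimensions below are assumptions for illustration only. The responsibilities are computed in the log domain for numerical stability:

```python
import numpy as np

rng = np.random.default_rng(1)
n_h, n_y, K = 6, 4, 3  # toy dims (n_h = M(L+1), n_y = M*N_v in the paper)

A = rng.standard_normal((n_y, n_h)) + 1j * rng.standard_normal((n_y, n_h))
sigma2 = 0.1

# toy GMM parameters (in the paper these come from an EM fit to channel data)
p = np.full(K, 1.0 / K)
mus = rng.standard_normal((K, n_h)) + 1j * rng.standard_normal((K, n_h))
Cs = []
for _ in range(K):
    B = rng.standard_normal((n_h, n_h)) + 1j * rng.standard_normal((n_h, n_h))
    Cs.append(B @ B.conj().T / n_h + np.eye(n_h))  # Hermitian positive definite
Cs = np.stack(Cs)

def gmm_estimate(y):
    """Convex combination of per-component LMMSE estimates with GMM responsibilities."""
    log_like = np.empty(K)
    lmmse = np.empty((K, n_h), dtype=complex)
    for k in range(K):
        Cy = A @ Cs[k] @ A.conj().T + sigma2 * np.eye(n_y)
        d = y - A @ mus[k]
        sol = np.linalg.solve(Cy, d)
        # complex Gaussian log-density up to the common -n_y*log(pi) constant
        log_like[k] = np.log(p[k]) - np.linalg.slogdet(Cy)[1] - np.real(d.conj() @ sol)
        lmmse[k] = mus[k] + Cs[k] @ A.conj().T @ sol
    resp = np.exp(log_like - log_like.max())  # responsibilities via softmax
    resp /= resp.sum()
    return resp @ lmmse

y = rng.standard_normal(n_y) + 1j * rng.standard_normal(n_y)
h_hat = gmm_estimate(y)
```

The common normalization constant of the complex Gaussian densities cancels in the responsibilities, which is why it can be dropped from the log-likelihoods.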
We further introduce a simple \ac{lmmse} estimator based on the sample covariance matrix $\B C = \frac{1}{N}\sum_{n=1}^N \B h_n \B h^{\op H}_n$, computed from $N=19\cdot 10^4$ training samples, via
\begin{equation}
\B h_{\text{sample cov.}} = \B C\B A^{\op H} (\B A \B C \B A^{\op H} + \sigma^2\bm{\op{I}})^{-1} \B y.
\label{eq:sample_cov}
\end{equation}
Finally, the \ac{ls} estimator is $\hat{\B h}_{\text{LS}} = \B A^\dagger \B y = ((\B V^{\op T})^\dagger \otimes \bm{\op{I}})\B y$,
where $(\B V^{\op T})^\dagger$ is the pseudoinverse of $\B V^{\op T}$ and we used $(\B V^{\op T} \otimes \bm{\op{I}})^\dagger = (\B V^{\op T})^\dagger \otimes \bm{\op{I}}$.
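Because of the Kronecker structure, applying the pseudoinverse to the vectorized observation is equivalent to right-multiplying the observation matrix $\B Y$ by the pseudoinverse of $\B V$. A numpy sketch with toy dimensions (full illumination, so that $\B V$ is square and generically invertible and the noiseless \ac{ls} estimate recovers $\B H$ exactly):

```python
import numpy as np

rng = np.random.default_rng(2)
M, L, sigma2 = 4, 8, 0.01
N_v = L + 1  # full illumination: V is square and generically invertible

H = (rng.standard_normal((M, L + 1))
     + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)
theta = rng.uniform(0.0, 2.0 * np.pi, size=(L, N_v))
V = np.vstack([np.ones((1, N_v)), np.exp(1j * theta)])

Nmat = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N_v))
                              + 1j * rng.standard_normal((M, N_v)))
Y = H @ V + Nmat

# LS estimate in matrix form: applying ((V^T)^dagger kron I) to vec(Y)
# is the same as right-multiplying Y by the pseudoinverse of V
H_ls = Y @ np.linalg.pinv(V)
```

For $N_v < L+1$, the same expression yields the minimum-norm \ac{ls} solution, which is one reason reduced phase allocations benefit from better estimators.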
\subsection{Phase Allocations}\label{sec:refs_phase}
A simple choice for the phase allocations is to use random phase shifts for every \ac{mt}. We therefore construct a phase matrix by sampling i.i.d. Gaussian realizations from $\mathcal{N}_\mathbb{C}(0,1)$ per entry and dividing each entry by its absolute value to fulfill the unit magnitude constraint. Note that these phase allocations might be difficult to implement in a practical system because of the very limited processing ability at the \ac{ris}.
Since \ac{dft}-based phases are optimal in the full-illumination case for the \ac{ls} estimator \cite{9747624,9053695}, we evaluate the use of a \ac{dft} submatrix for reduced phase allocations, i.e., the $m,n$th entry is given as $V^{\text{sub-DFT}}_{m,n} = \exp((m-1)(n-1)\op j2\pi/N_v)$ with $m=1,\dots,L+1$ and $n=1,\dots,N_v$.
Note that the columns of the \ac{dft} submatrix are not orthogonal for $N_v < L+1$.
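The submatrix definition and the non-orthogonality of its columns can be checked directly; the sketch below uses the paper's configuration $L=16$, $N_v=8$:

```python
import numpy as np

def dft_submatrix(L, N_v):
    """Entry (m, n) = exp(j 2 pi (m-1)(n-1) / N_v) for m = 1..L+1, n = 1..N_v."""
    m = np.arange(L + 1)[:, None]
    n = np.arange(N_v)[None, :]
    return np.exp(2j * np.pi * m * n / N_v)

V = dft_submatrix(16, 8)  # L = 16 RIS elements, N_v = 8 reduced allocations
G = V.conj().T @ V        # Gram matrix: off-diagonal entries are nonzero here
```

For this configuration the Gram matrix works out to $16\bm{\op I} + \mathbf{1}\mathbf{1}^{\op T}$, i.e., every pair of distinct columns has inner product one, confirming the non-orthogonality for $N_v < L+1$.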
\section{DFT-Based Phase Allocation Study}\label{sec:dft_study}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}
[
ybar=-7pt,
bar width=7pt,
width=1\columnwidth,
height=0.5\columnwidth,
xtick={1,3,5,7,9,11,13,15,17},
xmin=0,
xmax=18,
xlabel={DFT Column},
ymin= 1e-2,
ymax=1e-1,
ylabel= {Relative Frequency},
grid = both,
legend columns = 3,
legend entries={
Parallel,
30$^\circ$ Downtilt,
},
legend style={at={(0.5,1.0)}, anchor=south},
]
\addplot[color=TUMBeamerBlue,line width=1.2pt,pattern=north east lines, pattern color=TUMBeamerBlue,opacity=0.7]
table[x=col, y=rel, col sep=comma]
{csvdat/histogram-DFT_scen1_1x8BS_1x1UE_4x4IRS_Nv=8_IRS-parallel_SNR=40dBm.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,fill=TUMOrange,opacity=0.7,pattern=crosshatch dots, pattern color=TUMOrange,densely dashed]
table[x=col, y=rel, col sep=comma]
{csvdat/histogram-DFT_scen2_1x8BS_1x1UE_4x4IRS_Nv=8_30deg-down_SNR=40dBm.csv};
\end{axis}
\end{tikzpicture}
\caption{Histogram of the occurrence of the \ac{dft} columns in the exhaustive search approach for different \ac{ris} configurations with $M=8$, $L=16$, and $N_v =8$ at a \ac{snr} of 40dBm.}
\label{fig:histogram_40dBm}
\end{figure}
In this section, we investigate the potential of the phase allocation optimization based on a \ac{dft} grid.
We consider a \ac{bs} with a \ac{ula} consisting of $M=8$ antennas serving single-antenna \acp{mt} supported by a \ac{ris} with $L=4\times 4$ elements, cf. Fig. \ref{fig:bs_cell_irs}. Instead of full illumination with $L+1$ phase allocations at the \ac{ris} we set $N_v=8$ in order to simulate a reduced phase allocation situation.
We then exhaustively search for the best combination of eight columns drawn from the full $(L+1)$-dimensional \ac{dft} matrix for $10,000$ uniformly sampled \acp{mt} in the \ac{bs} cell.
Note that this procedure is infeasible in practical systems since in general $\binom{L+1}{N_v}$
combinations of \ac{dft} columns have to be tested for every \ac{mt}, a number which grows drastically with the number of \ac{ris} patches.
In the considered case this already yields $\binom{17}{8} = 24,310$ combinations.
For each \ac{mt}, we choose the combination which yields the best channel estimation performance based on the \ac{gmm} estimator introduced in \Cref{sec:refs_est} at an \ac{snr} of 40dBm. The histogram in Fig. \ref{fig:histogram_40dBm} shows the relative frequency with which each \ac{dft} column appears in the best-performing combinations over all \acp{mt} for two different scenarios.
In the first scenario, the \ac{ris} array is placed in parallel to the \ac{bs} array where it can be observed that especially the first and last \ac{dft} columns occur more frequently in the best-performing combinations. On average, the best combination of \ac{dft} columns for this scenario is $\{1,2,3,4,14,15,16,17\}$. In contrast to that, for a scenario where the \ac{ris} has a downtilt of $30^\circ$, the middle \ac{dft} columns occur primarily in the best combinations and $\{3,5,6,7,8,9,10,11\}$ is the best combination on average.
The conclusions of this study are twofold. First, we have seen that there is great potential for optimizing the phases since for a given setting, some \ac{dft} columns are much more important than others. Second, we showed that the optimization of the phase allocations heavily depends on the considered scenario.
We further evaluate the optimization based on the \ac{dft} grid in the numerical experiments section where we compare this brute force approach to our proposed optimization procedure.
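To make the combinatorial search concrete, the following small-scale numpy sketch enumerates all column combinations of a full \ac{dft} matrix and picks the best one. The dimensions are toy values (the paper uses $M=8$, $L=16$, $N_v=8$), and a simple Monte-Carlo \ac{ls} estimation error serves as a stand-in for the \ac{gmm}-based scoring used in the study:

```python
import itertools
import numpy as np

rng = np.random.default_rng(3)
M, L, N_v, sigma2 = 2, 4, 3, 0.1  # toy sizes; the paper uses M=8, L=16, N_v=8

# full (L+1)-dimensional DFT matrix to draw columns from
F = np.exp(2j * np.pi * np.outer(np.arange(L + 1), np.arange(L + 1)) / (L + 1))
H = (rng.standard_normal((M, L + 1))
     + 1j * rng.standard_normal((M, L + 1))) / np.sqrt(2)

def score(cols, trials=50):
    """Average LS estimation error for the phase matrix built from these columns."""
    V = F[:, list(cols)]
    Vp = np.linalg.pinv(V)
    err = 0.0
    for _ in range(trials):
        Nmat = np.sqrt(sigma2 / 2) * (rng.standard_normal((M, N_v))
                                      + 1j * rng.standard_normal((M, N_v)))
        err += np.linalg.norm((H @ V + Nmat) @ Vp - H) ** 2
    return err / trials

# exhaustive search over all C(L+1, N_v) = C(5, 3) = 10 column combinations
best = min(itertools.combinations(range(L + 1), N_v), key=score)
```

Even in this toy setting the cost is one estimator evaluation per combination per \ac{mt}, which illustrates why the approach does not scale to large \ac{ris} arrays.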
\section{Learning-Based Joint Phase Optimization and Channel Estimation}\label{sec:joint}
In \Cref{sec:dft_study}, we have seen that the choice of the phase allocation matrix is depending heavily on the underlying system setup, i.e., the configuration of the \ac{ris}, as well as on the propagation environment that induces structural properties which can be exploited for reduced phase allocations.
However, on the one hand, the optimization procedure from \Cref{sec:dft_study} is infeasible in practice because of the combinatorial search; on the other hand, it is limited to a search on the \ac{dft} grid, which may be sub-optimal in general.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\node at (0,0) {$\B H$};
\draw [->, thick] (0.2,0) -- (1,0) {};
\draw[fill=TUMBeamerBlue,fill opacity=0.3,rounded corners] (1,-0.5) rectangle ++(1.5,1);
\node at (1.75,0) {$\B V_{\text{NN}}$};
\draw [->, thick] (2.5,0) -- ++(1,0) {};
\filldraw[black, fill=white,thick] (3.68,0) circle (5pt);
\draw[thick] (3.5,0) -- ++(0.36,0) {};
\draw[thick] (3.68,0.18) -- ++(0,-0.36) {};
\node at (3.68,1.2) {$\B N$};
\draw [->, thick] (3.68,1) -- ++(0,-0.82) {};
\draw [->, thick] (3.86,0) -- node[above] {$\B Y_{\text{NN}}$} ++(1,0) {};
\draw[fill=TUMOrange,fill opacity=0.3,rounded corners] (4.86,-0.5) rectangle ++(1.5,1);
\node at (5.61,0) {CNN};
\draw [->, thick] (6.36,0) -- ++(1,0) {};
\node at (8,0) {$\hat{\B H}(\B V_{\text{NN}})$};
\end{tikzpicture}
\caption{Flowchart of the proposed \ac{nn} architecture for joint phase optimization and channel estimation.}
\label{fig:flowchart}
\end{figure}
Thus, we propose to utilize machine learning for joint phase optimization and channel estimation via a specific \ac{nn} architecture. In essence, we parametrize the phase allocations of the matrix $\B V$. Due to the passivity of the \ac{ris}, which enforces the unit magnitude constraint, we only train with respect to the angles of the phase matrix. In particular, the phase matrix is constructed as
\begin{equation}\label{eq:V_learning}
\B V_{\text{NN}} = \cos(\B \Phi) + \op j \sin(\B \Phi)
\end{equation}
where $\B \Phi\in\mathbb{R}^{(L+1)\times N_v}$. A similar approach for the optimization of a sensing matrix with a magnitude constraint was employed in \cite{KOLLER2022108553}, which serves as a motivation for our considerations.
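To make this construction concrete, the following NumPy sketch (with the dimensions $L=16$ and $N_v=17$ from our simulations; all variable names are our own and not part of the actual implementation) builds a phase matrix from an unconstrained real parameter matrix:

```python
import numpy as np

# Sizes matching the simulated setup: L = 16 RIS patches
# (plus one column for the direct path) and N_v = 17 phase allocations.
L, N_v = 16, 17

rng = np.random.default_rng(0)
Phi = rng.uniform(-np.pi, np.pi, size=(L + 1, N_v))  # unconstrained angles

# Eq. (V_learning): V_NN = cos(Phi) + j sin(Phi). Every entry has unit
# magnitude by construction, so the RIS unit-modulus constraint holds
# for any real Phi, which is what makes unconstrained training possible.
V_NN = np.cos(Phi) + 1j * np.sin(Phi)
```

Since $|\cos\phi + \op j \sin\phi| = 1$ for any real $\phi$, gradient steps on $\B\Phi$ can never leave the feasible set, which is the motivation for this parametrization.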
The training procedure is summarized as follows.
The parametrized phase matrix $\B V_{\text{NN}}$
given by \eqref{eq:V_learning} is multiplied with a channel realization from the training dataset. Afterwards, we artificially add additive white Gaussian noise. This yields an emulated observation $\B Y_{\text{NN}}$ following the model in \eqref{eq:system_unvec}.
The emulated observation $\B Y_{\text{NN}}$ then serves as the input of a \ac{cnn} which yields a channel estimate $\hat{\B H}(\B V_{\text{NN}})$ at the output. To this end, the complex-valued input of the \ac{cnn} is split into its real and imaginary parts, which are treated as separate convolution channels, and each layer employs 2D convolutions.
Since the phase optimization and the training of the \ac{cnn} for channel estimation depend on each other, it is not possible to separately update their parameters. Thus, we jointly optimize the phase matrix $\B V_{\text{NN}}$ and the \ac{cnn} for which we exploit the efficient framework of \acp{nn} with powerful gradient-based optimization techniques. As such, we can interpret the phase matrix $\B V_{\text{NN}}$ as a layer with a specific structure, cf. \eqref{eq:V_learning}, of a larger \ac{nn} that contains the \ac{cnn} as further layers.
The described architecture is summarized as a flowchart in Fig. \ref{fig:flowchart}.
We utilize labeled data from the constructed dataset, cf. \Cref{sec:system}, to compute gradients with the \ac{mse}
\begin{equation}
\text{MSE} = \op E[ \| \B H - \hat{\B H}(\B V_{\text{NN}}) \|_F^2]
\end{equation}
as cost function. Note that a single forward pass propagates through both \ac{nn} parts and, therefore, all network parameters are updated simultaneously. After training, the optimized phase allocations are given by \eqref{eq:V_learning} and the trained \ac{cnn} is extracted as the channel estimator.
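For evaluation, the plots below report a normalized version of this cost. A minimal NumPy sketch of the Monte-Carlo estimate (with hypothetical shapes and residual-error level; this is an illustration, not the actual evaluation code) is:

```python
import numpy as np

rng = np.random.default_rng(1)
B, M, L1 = 100, 8, 17  # hypothetical batch of 100 channels H in C^{8 x 17}
H = rng.standard_normal((B, M, L1)) + 1j * rng.standard_normal((B, M, L1))
# Stand-in for the CNN output: the true channel plus a small residual error.
H_hat = H + 0.1 * (rng.standard_normal(H.shape) + 1j * rng.standard_normal(H.shape))

# Monte-Carlo estimate of E[||H - H_hat||_F^2] / E[||H||_F^2].
err = np.mean(np.sum(np.abs(H - H_hat) ** 2, axis=(1, 2)))
nmse = err / np.mean(np.sum(np.abs(H) ** 2, axis=(1, 2)))
```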
We initialize the weights of the phase matrix randomly at the beginning of the training and we perform a random hyper-parameter search for the \ac{nn} parameters, i.e., the batch size ($\in[2^5,2^{11}]$), activation functions (ReLU, Tanh, Sigmoid, SiLU, ELU), batch normalization, learning rate ($\in[10^{-5}, 10^{-1}]$), number of kernels ($\in [16, 512]$) and layers ($\in[3,9]$), where we choose the best setting over 100 random initializations.
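The random search over these ranges can be sketched as follows (a simplified illustration with our own helper names; `validate` stands for the full training-and-validation run and is not shown):

```python
import random

random.seed(0)

# Sample one hyper-parameter configuration from the ranges stated above.
def sample_config():
    return {
        "batch_size": 2 ** random.randint(5, 11),       # [2^5, 2^11]
        "activation": random.choice(["ReLU", "Tanh", "Sigmoid", "SiLU", "ELU"]),
        "batch_norm": random.choice([True, False]),
        "learning_rate": 10 ** random.uniform(-5, -1),  # [1e-5, 1e-1]
        "num_kernels": random.randint(16, 512),
        "num_layers": random.randint(3, 9),
    }

configs = [sample_config() for _ in range(100)]  # 100 random initializations
# best = min(configs, key=validate)  # hypothetical validation routine
```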
\section{Numerical Results}\label{sec:numeric}
We present numerical results for the described setting in \Cref{sec:system}. We utilize a dataset consisting of $19\cdot10^4$ data samples for fitting the \ac{gmm} with $K=128$ components and training the \ac{nn}. Each method is evaluated using $10^4$ samples which are not part of the training data. For all plots, we evaluate the scenario with a parallel \ac{ris} opposite to the \ac{bs} since the results are qualitatively the same for both depicted scenarios in \Cref{sec:dft_study}.
The curves labeled ``LS'', ``sample-cov'', or ``GMM'' refer to the baseline estimators from \Cref{sec:refs_est}, whereas ``CNN joint'' refers to the proposed approach from \Cref{sec:joint}. The additional labeling ``DFT'', ``rand'', ``opt'', or ``hist'' refers to the choice of the phase allocation matrix based on the \ac{dft} (sub)matrix or on random allocations, cf. \Cref{sec:refs_phase}, the optimized phase allocations from the \ac{nn}, cf. \Cref{sec:joint}, or the histogram based search from \Cref{sec:dft_study}, respectively.
\subsection{Full Illumination}\label{sec:numeric_full}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}
[width=1\columnwidth,
height=0.6\columnwidth,
xtick=data,
xmin=-10,
xmax=40,
xlabel={SNR [dBm]},
ymode = log,
ymin= 1e-3,
ymax=1e0,
ylabel= {Normalized MSE},
ylabel shift = 0.0cm,
grid = both,
legend columns = 2,
legend entries={
\scriptsize LS DFT,
\scriptsize sample-cov DFT,
\scriptsize sample-cov rand,
\scriptsize GMM DFT,
\scriptsize GMM rand,
\scriptsize GMM opt,
\scriptsize CNN joint,
},
legend style={at={(0.0,0.0)}, anchor=south west},
]
\addplot[mark options={solid},color=black,line width=1.2pt]
table[x=SNR, y=LS, col sep=comma]
{csvdat/2022-10-11_19-42-17_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=17_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle]
table[x=SNR, y=sample_cov, col sep=comma]
{csvdat/2022-10-11_19-42-17_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=17_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle,dashed]
table[x=SNR, y=sample_cov, col sep=comma]
{csvdat/2022-10-13_05-27-19_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=17_np=1_NLOS_dftphase=False_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o]
table[x=SNR, y=gmm, col sep=comma]
{csvdat/2022-10-11_19-42-17_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=17_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o,dashed]
table[x=SNR, y=gmm, col sep=comma]
{csvdat/2022-10-13_05-27-19_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=17_np=1_NLOS_dftphase=False_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerGreen,line width=1.2pt,mark=square,dotted]
table[x=snr, y=gmm, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_Nv=17.csv};
\addplot[mark options={solid},color=TUMBeamerRed,line width=1.2pt,mark=diamond]
table[x=snr, y=cnn, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_Nv=17.csv};
\end{axis}
\end{tikzpicture}
\caption{$M=8$ ULA BS antennas, $L = 4\times 4 = 16$ \ac{ura} \ac{ris} patches and single-antenna \acp{mt} with $N_v=L+1$.}
\label{fig:full_scenario1}
\end{figure}
In Fig. \ref{fig:full_scenario1}, we depict results for the case of full illumination, i.e., $N_v=L+1$ with $M=8$ \ac{ula} \ac{bs} antennas and $L=4\times 4$ \ac{ura} \ac{ris} patches.
In the case of the \ac{ls} estimator, the \ac{dft} phase matrix is shown to be optimal, cf. \cite{9747624,9053695}. When we use the \ac{ls} estimator in combination with random phase matrices, the normalized \ac{mse} is larger than one over the considered \ac{snr} range and therefore not visible in Fig. \ref{fig:full_scenario1}, which might be because a randomly chosen phase matrix is not guaranteed to have full rank. When using the \ac{gmm} or the sample-covariance based estimator, the \ac{dft}-based phase allocations also yield a better performance compared to random allocations.
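The rank issue can be illustrated with a small NumPy experiment (our own sketch, not the paper's simulation code): the square \ac{dft} allocation matrix has orthogonal columns and is hence perfectly conditioned, whereas a random unit-modulus matrix is typically far worse conditioned, which hurts the pseudo-inverse in the \ac{ls} estimate.

```python
import numpy as np

N = 17  # N_v = L + 1 = 17, the full-illumination case
rng = np.random.default_rng(2)

# DFT-based allocation: orthogonal columns, condition number exactly 1.
k = np.arange(N)
V_dft = np.exp(-2j * np.pi * np.outer(k, k) / N)

# Random unit-modulus allocation: full rank is not guaranteed and the
# conditioning is generally much worse.
V_rand = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=(N, N)))

cond_dft = np.linalg.cond(V_dft)
cond_rand = np.linalg.cond(V_rand)
```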
Interestingly, it can be observed that the channel estimators with the optimized phase matrix (``CNN joint'' and ``GMM opt'') are able to outperform the \ac{dft} matrix in the low \ac{snr} regime, with a vanishing gap in the high \ac{snr} regime where the \ac{ls} estimator is reasonable. This means that phase optimization is in fact useful also for the full illumination case at low \ac{snr} values. Furthermore, since the performance is very similar for both the \ac{gmm} and \ac{cnn} estimators, it can be concluded that the optimized phase matrix is not only useful for the jointly trained \ac{cnn}, but is generally adapted to the scenario and can be used in combination with an arbitrary channel estimator.
\subsection{Reduced Phase Allocations}\label{sec:numeric_reduced}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}
[width=1\columnwidth,
height=0.6\columnwidth,
xtick=data,
xmin=-10,
xmax=50,
xlabel={SNR [dBm]},
ymode = log,
ymin= 7*1e-3,
ymax=1e0,
ylabel= {Normalized MSE},
ylabel shift = 0.0cm,
grid = both,
legend columns = 3,
legend entries={
\scriptsize LS DFT,
\scriptsize LS rand,
\scriptsize sample-cov DFT,
\scriptsize sample-cov rand,
\scriptsize GMM DFT,
\scriptsize GMM rand,
\scriptsize GMM opt,
\scriptsize CNN joint,
\scriptsize GMM hist,
},
legend style={at={(0.5,1.0)}, anchor=south},
]
\addplot[mark options={solid},color=black,line width=1.2pt]
table[x=SNR, y=LS, col sep=comma]
{csvdat/2022-10-12_09-47-18_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=black,line width=1.2pt,dashed]
table[x=SNR, y=LS, col sep=comma]
{csvdat/2022-10-12_14-31-36_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=False_down=False.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle]
table[x=SNR, y=sample_cov, col sep=comma]
{csvdat/2022-10-12_09-47-18_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle,dashed]
table[x=SNR, y=sample_cov, col sep=comma]
{csvdat/2022-10-12_14-31-36_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=False_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o]
table[x=SNR, y=gmm, col sep=comma]
{csvdat/2022-10-12_09-47-18_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=True_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o,dashed]
table[x=SNR, y=gmm, col sep=comma]
{csvdat/2022-10-12_14-31-36_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_nv=8_np=1_NLOS_dftphase=False_down=False.csv};
\addplot[mark options={solid},color=TUMBeamerGreen,line width=1.2pt,mark=square,dotted]
table[x=snr, y=gmm, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_Nv=8.csv};
\addplot[mark options={solid},color=TUMBeamerRed,line width=1.2pt,mark=diamond]
table[x=snr, y=cnn, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_Nv=8.csv};
\addplot[mark options={solid},color=mylila,line width=1.2pt,mark=|]
table[x=snr, y=hist, col sep=comma]
{csvdat/mse_scen1_hist-snr=40dBm_L=16_ntest=10000_Nv=8.csv};
\end{axis}
\end{tikzpicture}
\caption{$M=8$ ULA BS antennas, $L = 4\times 4 = 16$ \ac{ura} \ac{ris} patches and single-antenna \acp{mt} with $N_v=8$.}
\label{fig:nv=8_scenario1}
\end{figure}
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}
[width=1\columnwidth,
height=0.6\columnwidth,
xtick=data,
xmin=2,
xmax=17,
xlabel={Phases $N_v$},
ymode = log,
ymin= 1e-3,
ymax=1e0,
ylabel= {Normalized MSE},
ylabel shift = 0.0cm,
grid = both,
legend columns = 3,
legend entries={
\scriptsize LS DFT,
\scriptsize LS rand,
\scriptsize sample-cov DFT,
\scriptsize sample-cov rand,
\scriptsize GMM DFT,
\scriptsize GMM rand,
\scriptsize GMM opt,
\scriptsize CNN joint,
},
legend style={at={(0.5,1.0)}, anchor=south},
]
\addplot[mark options={solid},color=black,line width=1.2pt]
table[x=phases, y=LS, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases.csv};
\addplot[mark options={solid},color=black,line width=1.2pt,dashed]
table[x=phases, y=LS, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle]
table[x=phases, y=sample_cov, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle,dashed]
table[x=phases, y=sample_cov, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o]
table[x=phases, y=gmm, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o,dashed]
table[x=phases, y=gmm, col sep=comma]
{csvdat/2022-10-12_10-06-46_ant_irs=16_ant_bs=8_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMBeamerGreen,line width=1.2pt,mark=square,dotted]
table[x=nv, y=gmm, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_snr=40dBm.csv};
\addplot[mark options={solid},color=TUMBeamerRed,line width=1.2pt,mark=diamond]
table[x=nv, y=cnn, col sep=comma]
{csvdat/cnn_gmm_scen1_1x8BS_1x1UE_4x4IRS_IRS-parallel_snr=40dBm.csv};
\end{axis}
\end{tikzpicture}
\caption{$M=8$ ULA BS antennas, $L = 4\times 4 = 16$ \ac{ura} \ac{ris} patches and single-antenna \acp{mt} with an \ac{snr} of $40\,$dBm.}
\label{fig:phases_scenario1}
\end{figure}
In Fig. \ref{fig:nv=8_scenario1}, we show the same setting as in \Cref{sec:numeric_full} but with a reduced number of $N_v=8$ phase allocations, i.e., less than $50\%$ of the fully illuminated case. First, it can be observed that the \ac{ls} estimator performs poorly due to the underdetermined system. Second, the random phase allocations outperform the sub-\ac{dft} allocations when using the \ac{gmm} or sample covariance estimator, with the \ac{gmm} estimator performing significantly better than the sample covariance estimator. Finally, the \ac{cnn} and \ac{gmm} estimators with optimized phase allocations show a similar performance which is better than that of all baseline methods, including the \ac{gmm} based on the histogram search method from \Cref{sec:dft_study}. This demonstrates the great potential of optimization for reduced phase allocations.
Fig. \ref{fig:phases_scenario1} depicts the same setting as before for a fixed \ac{snr} of $40\,$dBm with a varying number of phase allocations.
Similarly as before, the methods with optimized phase allocations outperform all baseline algorithms.
For the \ac{gmm} estimator it is possible to achieve a normalized \ac{mse} of $10^{-2}$ with only $N_v=9$ phase allocations, whereas more than $N_v=12$ phase allocations are needed to achieve the same \ac{mse} when having random or \ac{dft}-based phase allocations.
For the case of full illumination, i.e., $N_v=17$, the performance of the \ac{cnn} and \ac{gmm} estimators with optimized or \ac{dft}-based phase allocation matrix is very similar which is expected due to the insights from \Cref{sec:numeric_full}.
Finally, in Fig. \ref{fig:phases_scenario3}, we show results for a larger system setup with $M=16$ \ac{ula} \ac{bs} antennas and $L=8\times 8$ \ac{ura} \ac{ris} patches for a fixed \ac{snr} of $40\,$dBm. It can be observed that the gap to the \ac{dft}-based phase allocations, which perform poorly for this larger system setup, increases drastically. To achieve a normalized \ac{mse} of $4\cdot10^{-2}$, $N_v = 32$ optimized phase allocations are necessary, whereas $N_v=48$ random or even $N_v=58$ \ac{dft}-based phase allocations have to be used to achieve the same \ac{mse}. In conclusion, the optimization of the phase allocations has increasing potential for larger systems, which is in line with the trend towards massive \ac{mimo} systems.
\begin{figure}[t]
\centering
\begin{tikzpicture}
\begin{axis}
[width=1\columnwidth,
height=0.6\columnwidth,
xtick={8,16,24,32,40,48,56,65},
xmin=8,
xmax=65,
xlabel={Phases $N_v$},
ymode = log,
ymin= 1e-3,
ymax=1e0,
ylabel= {Normalized MSE},
ylabel shift = 0.0cm,
grid = both,
legend columns = 3,
legend entries={
\scriptsize LS DFT,
\scriptsize LS rand,
\scriptsize sample-cov DFT,
\scriptsize sample-cov rand,
\scriptsize GMM DFT,
\scriptsize GMM rand,
\scriptsize GMM opt,
\scriptsize CNN joint,
},
legend style={at={(0.5,1.0)}, anchor=south},
]
\addplot[mark options={solid},color=black,line width=1.2pt,mark=square]
table[x=phases, y=LS, col sep=comma]
{csvdat/2022-10-14_09-24-32_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases.csv};
\addplot[mark options={solid},color=black,line width=1.2pt,mark=square,dashed]
table[x=phases, y=LS, col sep=comma]
{csvdat/2022-10-19_02-38-40_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle]
table[x=phases, y=sample_cov, col sep=comma]
{csvdat/2022-10-14_09-24-32_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases.csv};
\addplot[mark options={solid},color=TUMOrange,line width=1.2pt,mark=triangle,dashed]
table[x=phases, y=sample_cov, col sep=comma]
{csvdat/2022-10-19_02-38-40_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o]
table[x=phases, y=gmm, col sep=comma]
{csvdat/2022-10-14_09-24-32_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=True_down=False_phases_gmm.csv};
\addplot[mark options={solid},color=TUMBeamerBlue,line width=1.2pt,mark=o,dashed]
table[x=phases, y=gmm, col sep=comma]
{csvdat/2022-10-19_02-38-40_ant_irs=64_ant_bs=16_ant_ue=1_comp=128_sum=0.99_ntrain=190000_ntest=10000_np=1_NLOS_dftphase=False_down=False_phases.csv};
\addplot[mark options={solid},color=TUMBeamerGreen,line width=1.2pt,mark=square,dotted]
table[x=nv, y=gmm, col sep=comma]
{csvdat/cnn_gmm_scen3_1x16BS_1x1UE_8x8IRS_IRS-parallel_snr=40dBm.csv};
\addplot[mark options={solid},color=TUMBeamerRed,line width=1.2pt,mark=diamond]
table[x=nv, y=cnn, col sep=comma]
{csvdat/cnn_gmm_scen3_1x16BS_1x1UE_8x8IRS_IRS-parallel_snr=40dBm.csv};
\end{axis}
\end{tikzpicture}
\caption{$M=16$ ULA BS antennas, $L = 8\times 8 = 64$ URA \ac{ris} patches and single-antenna \acp{mt} with an \ac{snr} of $40\,$dBm.}
\label{fig:phases_scenario3}
\end{figure}
\section{Conclusion}
In this work, we investigated the potential of optimizing the reduced phase allocation matrix for channel estimation in \ac{ris}-aided systems. With a study based on a selection of \ac{dft} columns, we found that the system setup drastically influences the choice of the phase allocations. We then proposed a \ac{nn} which jointly learns a phase allocation matrix together with a channel estimator. We were able to show that the optimized phases outperform the generic \ac{dft} matrix in the low \ac{snr} regime even for full illumination. For cases with a reduced number of phase allocations, the proposed approach outperforms the baseline approaches over the whole \ac{snr} range. In addition, when using the optimized phase allocation matrix with a different channel estimator, the performance is still significantly increased.
This leads to the conclusion that the optimized phase allocation matrix is able to translate the inherent structure of the environment and the chosen system setup into a performance gain. Lastly, we have shown that the potential for optimization increases with a larger number of antennas and \ac{ris} patches, which is in accordance with massive \ac{mimo} trends.
\bibliographystyle{IEEEtran}
typedef BOOL (WINAPI *CRYPTPROTECTMEMORY)(LPVOID pData,DWORD cbData,DWORD dwFlags);
typedef BOOL (WINAPI *CRYPTUNPROTECTMEMORY)(LPVOID pData,DWORD cbData,DWORD dwFlags);

#ifndef CRYPTPROTECTMEMORY_BLOCK_SIZE
#define CRYPTPROTECTMEMORY_BLOCK_SIZE     16
#define CRYPTPROTECTMEMORY_SAME_PROCESS   0x00
#define CRYPTPROTECTMEMORY_CROSS_PROCESS  0x01
#endif

class CryptLoader
{
  private:
    HMODULE hCrypt;
    bool LoadCalled;
  public:
    CryptLoader()
    {
      hCrypt=NULL;
      pCryptProtectMemory=NULL;
      pCryptUnprotectMemory=NULL;
      LoadCalled=false;
    }
    ~CryptLoader()
    {
      if (hCrypt!=NULL)
        FreeLibrary(hCrypt);
      hCrypt=NULL;
      pCryptProtectMemory=NULL;
      pCryptUnprotectMemory=NULL;
    }
    void Load()
    {
      if (!LoadCalled)
      {
        hCrypt = LoadSysLibrary(L"Crypt32.dll");
        if (hCrypt != NULL)
        {
          // Available since Vista.
          pCryptProtectMemory = (CRYPTPROTECTMEMORY)GetProcAddress(hCrypt, "CryptProtectMemory");
          pCryptUnprotectMemory = (CRYPTUNPROTECTMEMORY)GetProcAddress(hCrypt, "CryptUnprotectMemory");
        }
        LoadCalled=true;
      }
    }

    CRYPTPROTECTMEMORY pCryptProtectMemory;
    CRYPTUNPROTECTMEMORY pCryptUnprotectMemory;
};

// We need to call FreeLibrary when RAR is exiting.
CryptLoader GlobalCryptLoader;
#endif
SecPassword::SecPassword()
{
  CrossProcess=false;
  Set(L"");
}


SecPassword::~SecPassword()
{
  Clean();
}


void SecPassword::Clean()
{
  PasswordSet=false;
  cleandata(Password,sizeof(Password));
}


// When we call memset at the end of a function to clean local variables
// for security reasons, the compiler optimizer can remove such a call.
// So we use our own function for this purpose.
void cleandata(void *data,size_t size)
{
  if (data==NULL || size==0)
    return;
#if defined(_WIN_ALL) && defined(_MSC_VER)
  SecureZeroMemory(data,size);
#else
  // 'volatile' is required. Otherwise optimizers can remove this function
  // if cleaning local variables, which are not used after that.
  volatile byte *d = (volatile byte *)data;
  for (size_t i=0;i<size;i++)
    d[i]=0;
#endif
}
// We got a complaint from a user that it is possible to create a WinRAR dump
// with the "Create dump file" command in Windows Task Manager and then easily
// locate the Unicode password string in the dump. It is insecure if several
// people share the same computer and somebody left a WinRAR copy with an
// entered password. So we decided to obfuscate the password to make it more
// difficult to find in a dump.
void SecPassword::Process(const wchar *Src,size_t SrcSize,wchar *Dst,size_t DstSize,bool Encode)
{
  // The source string can be shorter than the destination, as in the case
  // when we process the -p<pwd> parameter, so we need to take into account
  // both sizes.
  memcpy(Dst,Src,Min(SrcSize,DstSize)*sizeof(*Dst));
  SecHideData(Dst,DstSize*sizeof(*Dst),Encode,CrossProcess);
}


void SecPassword::Get(wchar *Psw,size_t MaxSize)
{
  if (PasswordSet)
  {
    Process(Password,ASIZE(Password),Psw,MaxSize,false);
    Psw[MaxSize-1]=0;
  }
  else
    *Psw=0;
}


void SecPassword::Set(const wchar *Psw)
{
  if (*Psw==0)
  {
    PasswordSet=false;
    memset(Password,0,sizeof(Password));
  }
  else
  {
    PasswordSet=true;
    Process(Psw,wcslen(Psw)+1,Password,ASIZE(Password),true);
  }
}


size_t SecPassword::Length()
{
  wchar Plain[MAXPASSWORD];
  Get(Plain,ASIZE(Plain));
  size_t Length=wcslen(Plain);
  cleandata(Plain,ASIZE(Plain));
  return Length;
}


bool SecPassword::operator == (SecPassword &psw)
{
  // We cannot compare encoded data directly, because there is no guarantee
  // that the encryption function will always produce the same result for the
  // same data (salt?) and because we do not clean the rest of the password
  // buffer after the trailing zero before encoding the password. So we
  // decode first.
  wchar Plain1[MAXPASSWORD],Plain2[MAXPASSWORD];
  Get(Plain1,ASIZE(Plain1));
  psw.Get(Plain2,ASIZE(Plain2));
  bool Result=wcscmp(Plain1,Plain2)==0;
  cleandata(Plain1,ASIZE(Plain1));
  cleandata(Plain2,ASIZE(Plain2));
  return Result;
}
void SecHideData(void *Data,size_t DataSize,bool Encode,bool CrossProcess)
{
  // CryptProtectMemory is not available in UWP and CryptProtectData
  // increases the data size, not allowing in-place conversion.
#if defined(_WIN_ALL)
  // Try to utilize the secure Crypt[Un]ProtectMemory if possible.
  if (GlobalCryptLoader.pCryptProtectMemory==NULL)
    GlobalCryptLoader.Load();
  size_t Aligned=DataSize-DataSize%CRYPTPROTECTMEMORY_BLOCK_SIZE;
  DWORD Flags=CrossProcess ? CRYPTPROTECTMEMORY_CROSS_PROCESS : CRYPTPROTECTMEMORY_SAME_PROCESS;
  if (Encode)
  {
    if (GlobalCryptLoader.pCryptProtectMemory!=NULL)
    {
      if (!GlobalCryptLoader.pCryptProtectMemory(Data,DWORD(Aligned),Flags))
      {
        ErrHandler.GeneralErrMsg(L"CryptProtectMemory failed");
        ErrHandler.SysErrMsg();
        ErrHandler.Exit(RARX_FATAL);
      }
      return;
    }
  }
  else
  {
    if (GlobalCryptLoader.pCryptUnprotectMemory!=NULL)
    {
      if (!GlobalCryptLoader.pCryptUnprotectMemory(Data,DWORD(Aligned),Flags))
      {
        ErrHandler.GeneralErrMsg(L"CryptUnprotectMemory failed");
        ErrHandler.SysErrMsg();
        ErrHandler.Exit(RARX_FATAL);
      }
      return;
    }
  }
#endif

  // Crypt[Un]ProtectMemory is not available, so only slightly obfuscate the data.
  uint Key;
#ifdef _WIN_ALL
  Key=GetCurrentProcessId();
#elif defined(_UNIX)
  Key=getpid();
#else
  Key=0; // Just an arbitrary value.
#endif

  for (size_t I=0;I<DataSize;I++)
    *((byte *)Data+I)^=Key+I+75;
}
The Relaxed Top: A breezy silhouette for a looser fit that guarantees comfort! This pretty printed blouse features a crew neck, zippered pocket and short sleeves. Pair it with jeans or dark pants for a simple yet classic look.
Muyyl-mayy-beremese is a Bashkir vatrushka (open-faced pastry) filled with bird-cherry butter.
Muyyl-mayy-beremese is made from a rich yeast dough. The dough is rolled out into flatbreads 10 mm thick, and a filling of bird-cherry butter is placed on each flatbread. The edges are brushed with sour cream and folded over.
The flatbread is baked in the oven for about 20 minutes.
Interesting facts
Muyyl-mayy-beremese may also be prepared with a potato, cottage-cheese, or nut filling.
Links
Bashkir cuisine
http://www.nnre.ru/kulinarija/million_velikolepnyh_blyud_dlja_yubileev_svadeb_i_prazdnichnyh_stolov_narodov_rossii/p8.php#metkadoc111
Bashkir cuisine
\section{Introduction}
\label{intro}
The study of curves is an important chapter of geometry. Besides its intrinsic importance, curves also play a major role in the analysis of surfaces, and manifolds in general. In particular, the consideration of curves that belong to a given surface, such as planar or spherical curves \cite{ArnoldRMS1995,ChmutovOregon2006,LinJDG1996}, may be of great value. In this respect an interesting problem is {\it``how can we characterize those (spatial) curves that belong to a certain surface $\Sigma$?"}. Despite the simplicity of formulating the problem, there is no known general solution to it except for a few cases: namely, when $\Sigma$ is a plane \cite{Struik}, a sphere \cite{WongMonatshMath,Struik}, or a cylinder \cite{StarostinMonatshMath}. The solution for planar curves is quite trivial once we introduce the Frenet frame: the torsion must vanish. On the other hand, the characterization for spherical curves generally involves an ODE relating the curvature function and the torsion \cite{WongMonatshMath}, while the characterization for cylindrical curves involves a system of algebro-differential equations \cite{StarostinMonatshMath}. In the 1970s, through the idea of equipping a curve with a relatively parallel moving frame, Bishop was able to characterize spherical curves through a linear equation relating the coefficients that dictate the frame motion \cite{BishopMonthly}. The coefficients of such a Bishop frame admit a simple geometric interpretation and, besides its impact on the study of spherical curves, a Bishop frame also has the advantage of being globally defined even if a curve has points of zero curvature \cite{BishopMonthly}.
Naturally, it also finds applications in problems which make use of frames along curves, such as in rotation-minimizing frames in rigid body dynamics \cite{FaroukiCAGD2014}, computer graphics and visualization \cite{HansonTechrep1995}, robotics \cite{WebsterIJRR2010}, quantum waveguides \cite{HaagAHP2015}, integrable systems \cite{SandersMMJ2003}, and also in mathematical biology in the study of DNA \cite{ChirikjianBST2013,ClauvelinJCTC2012} and protein folding \cite{HuPRE2011}, just to name a few.
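For reference, Bishop's relatively parallel frame $\{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2\}$ obeys the equations below, and his spherical criterion is linear in the frame coefficients (stated here in standard notation as a reminder; see \cite{BishopMonthly} for details):

```latex
\begin{aligned}
\mathbf{t}'   &= k_1\,\mathbf{n}_1 + k_2\,\mathbf{n}_2,\\
\mathbf{n}_1' &= -k_1\,\mathbf{t},\\
\mathbf{n}_2' &= -k_2\,\mathbf{t}.
\end{aligned}
% A curve lies on a sphere of radius r if and only if its normal
% development (k_1(s), k_2(s)) lies on a line not passing through the
% origin, at distance 1/r from it: a k_1 + b k_2 = 1 with a^2 + b^2 = r^2.
```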
In the quest for spherical curves, we should not restrict ourselves to the context of an Euclidean ambient space, $(E^3,\langle\cdot,\cdot\rangle\,)$. Indeed, we can consider the more general setting of a Lorentz-Minkowski space, $(E_1^3,(\cdot,\cdot)\,)$, where one has to deal with three types of spheres: pseudo-spheres $\mathbb{S}_1^2(P;r)=F_P^{-1}(r^2)$; pseudo-hyperbolic spaces $\mathbb{H}_0^2(P;r)=F_P^{-1}(-r^2)$; and light-cones $\mathcal{C}^2(P)=F_P^{-1}(0)$, where $F_P(x)=(x-P,x-P)$ and $(\cdot,\cdot)$ has index 1. Indeed, it is possible to find characterizations of some classes of spherical curves scattered among a few papers: pseudo-spherical \cite{BektasBMMS1998,IlarslamJII-PP2003,PekmenMM1999,Petrovic-TorgasevMM2000,Petrovic-TorgasevMV2001} and pseudo-hyperbolic curves \cite{IlarslamJII-PP2003,Petrovic-TorgasevKJM2000} via Frenet frame; and also curves on light-cones \cite{ErdoganJST2009,LiuRM2011,LiuJG2016} by exploiting conformal invariants and the concept of cone curvature \cite{LiuBAG2004}. It is also possible to find constructions of Bishop frames on curves in $E_1^3$ for spacelike curves \cite{BukcuCFSUA2008,BukcuSJAM2010,LowJGSP2012,OzdemirMJMS2008} with a non-lightlike normal, and timelike curves \cite{KaracanSDUJS2008,LowJGSP2012,OzdemirMJMS2008}, along with several characterizations of spherical curves through a linear equation via Bishop frames \cite{BukcuCFSUA2008,BukcuSJAM2010,KaracanSDUJS2008,OzdemirMJMS2008}. All the above-mentioned studies in $E_1^3$ have in common that much attention is paid to the possible combinations of causal characters of the tangent and normal vectors, which makes it necessary to consider several separate instances in the investigation of Bishop frames and spherical curves. Moreover, none of them take into account the possibility of a lightlike tangent or a lightlike normal. Naturally, this is reflected in the incompleteness of the available characterizations of spherical curves in $E_1^3$.
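The trichotomy of Lorentzian spheres above can be made concrete with a small numeric sketch (our own illustration; we fix the signature convention $(-,+,+)$, i.e., the first coordinate is timelike):

```python
import numpy as np

# Index-1 bilinear form on R^3 with signature (-, +, +).
G = np.diag([-1.0, 1.0, 1.0])

def F_P(x, P):
    """F_P(x) = (x - P, x - P) for the Lorentzian inner product."""
    d = np.asarray(x, dtype=float) - np.asarray(P, dtype=float)
    return d @ G @ d

def sphere_type(x, P):
    """Which type of Lorentzian sphere centered at P passes through x."""
    q = F_P(x, P)
    if q > 0:
        return "pseudo-sphere S_1^2"      # (x-P, x-P) =  r^2 > 0
    if q < 0:
        return "pseudo-hyperbolic H_0^2"  # (x-P, x-P) = -r^2 < 0
    return "light-cone C^2"               # (x-P, x-P) =  0
```

For instance, with $P$ the origin, the point $(0,1,0)$ lies on a pseudo-sphere, $(1,0,0)$ on a pseudo-hyperbolic space, and $(1,1,0)$ on the light-cone.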
Here we apply these ideas in order to characterize those spatial curves that belong to surfaces implicitly defined by a smooth function, $\Sigma=F^{-1}(c)$, by reinterpreting the problem in the new geometric setting of an inner product induced by the Hessian, $\mbox{Hess}\,F=\partial^2F/\partial x^i\partial x^j$. Although simple, this idea will prove to be very useful. Moreover, since a Hessian may fail to be positive definite, one is naturally led to the study of the differential geometry of curves in Lorentz-Minkowski spaces. In this work, we then present a systematic approach to moving frames on curves in $E_1^3$. The turning point is that one should exploit the causal character of the tangent vector and the induced causal character on the normal plane only. In this way, we are able to furnish a systematic approach to the construction of Bishop frames in $E_1^3$. This formalism allows us to give a complete characterization of spherical curves in $E_1^3$. Finally, we present a necessary and sufficient criterion for a curve to lie on a level surface of a smooth function. More precisely, we present a functional relationship involving the coefficients of a Bishop frame with respect to the Hessian metric along a curve on $\Sigma=F^{-1}(c)$, which reduces to a linear relation when $\mbox{Hess}\,F$ is constant. In this last case, we are able to characterize spatial curves that belong to a given non-degenerate Euclidean quadric $\mathcal{Q}=\{x:\langle B(x-P),(x-P)\rangle=\rho\}$, $\rho\in\mathbb{R}$ constant, by using $(\cdot,\cdot)=\langle B\cdot,\cdot\rangle$. We also furnish an interpretation for the causal character that a curve may assume when we pass from $E^3$ to $E_1^3$, which also allows us to understand why certain types of curves do not exist on a given quadric or on a given Lorentzian sphere, if we reinterpret the problem from $E_1^3$ in $E^3$.
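As a toy instance of this idea (our own numeric check, with an ad-hoc choice of $B$): for $B=\operatorname{diag}(1,1,-1)$ and $P=0$, the quadric $\langle Bx,x\rangle=1$ is the hyperboloid $x^2+y^2-z^2=1$, which becomes a unit Lorentzian sphere for the induced product $(\cdot,\cdot)=\langle B\cdot,\cdot\rangle$, and the curve $\alpha(t)=(\cosh t,0,\sinh t)$ stays on it:

```python
import numpy as np

# Quadric <B(x - P), x - P> = rho with B = diag(1, 1, -1), P = 0, rho = 1:
# the one-sheeted hyperboloid x^2 + y^2 - z^2 = 1. With respect to the
# induced inner product (u, v) = <B u, v>, it is a unit Lorentzian sphere.
B = np.diag([1.0, 1.0, -1.0])

def alpha(t):
    # A curve lying on the hyperboloid (a hyperbola in the xz-plane).
    return np.array([np.cosh(t), 0.0, np.sinh(t)])

ts = np.linspace(-2.0, 2.0, 41)
vals = [alpha(t) @ B @ alpha(t) for t in ts]  # (alpha, alpha)_B along the curve
assert np.allclose(vals, 1.0)  # cosh^2 t - sinh^2 t = 1 for all t
```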
To the best of our knowledge, this is the first time that this characterization problem is considered in a general context.
This work is organized as follows. In Section 2 we review the construction of relatively parallel moving frames in Euclidean space according to Bishop. Section 3 is devoted to moving frames in Lorentz-Minkowski spaces: subsection 3.1 to Frenet frames in $E_1^3$; subsections 3.2 and 3.3 to Bishop frames along space- and timelike curves and their geometric interpretations, respectively; and subsection 3.4 to null frames along lightlike curves. In Section 4 we characterize spherical curves in Lorentz-Minkowski spaces, i.e., curves on pseudo-spheres, pseudo-hyperbolic spaces, and light-cones. In Section 5 we characterize curves on a non-degenerate Euclidean quadric. In Section 6 we present a characterization of curves that lie on a regular level surface. Finally, in Section 7 we present our conclusions along with some open problems and directions of future research.
\section{Moving frames on curves in $E^3$}
\label{sec:MovingFrameCurves}
Let us denote by $E^3$ the $3d$ Euclidean space, i.e., $\mathbb{R}^3$ equipped with the standard metric $\langle\cdot,\cdot\rangle$. Given a regular curve $\alpha:I\rightarrow E^3$ parametrized by arc-length, the usual way to introduce a moving frame along it is by means of the Frenet frame $\{\mathbf{t},\mathbf{n},\mathbf{b}\}$ \cite{Struik}. However, we can also consider any other adapted orthonormal moving frame $\{\mathbf{e}_0(s),\mathbf{e}_1(s),\mathbf{e}_2(s)\}$ along $\alpha(s)$, i.e., $\mathbf{e}_0\propto \mathbf{t}$ and $\langle\mathbf{e}_i,\mathbf{e}_j\rangle=\delta_{ij}$. The equation of motion of such a moving frame is given by a skew-symmetric $3\times3$ matrix. For the Frenet frame one of the entries of this matrix is zero and the other two are the curvature function $\kappa$ and the torsion $\tau$:
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \kappa & 0\\
-\kappa & 0 & \tau\\
0 & -\tau & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right).\label{eq::FrenetEqs}
\end{equation}
By introducing the notion of a relatively parallel vector field, Bishop considered a moving frame $\{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2\}$, where $\mathbf{n}_i$ are normal vectors to the unit tangent $\mathbf{t}$, whose equation of motion is \cite{BishopMonthly}
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}_1\\
\mathbf{n}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \kappa_{1} & \kappa_{2}\\
-\kappa_{1} & 0 & 0\\
-\kappa_{2} & 0 & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}_1\\
\mathbf{n}_2\\
\end{array}
\right).\label{eq::BishopEqs}
\end{equation}
The coefficients $\kappa_1$ and $\kappa_2$ relate with the curvature function and torsion according to \cite{BishopMonthly}
\begin{equation}
\left\{
\begin{array}{c}
\kappa_1 = \kappa\cos\theta\\
\kappa_2 = \kappa\sin\theta\\
\theta'= \tau\\
\end{array}
\right..
\end{equation}
\begin{remark}
A vector field $\mathbf{e}(s)$ along $\alpha(s)$ is {\it relatively parallel} if the derivative of its normal component, $\mathbf{e}^{\perp}$, is a multiple of the unit tangent vector, i.e., ${\rm d}\mathbf{e}^{\perp}/{\rm d}s=\eta(s)\mathbf{t}(s)$, and the tangent component is a constant multiple of $\mathbf{t}$ \cite{BishopMonthly}.
\end{remark}
\begin{remark}
Such a frame may also be named a {\it rotation minimizing frame}, since $\mathbf{n}_i$ does not rotate around $\mathbf{t}$. In addition, it can be proved that $\mathbf{n}_i$ is parallel transported along $\alpha(s)$ with respect to the normal connection of the curve \cite{Etayo2016}. Observe that for a closed curve, $\alpha(s_i)=\alpha(s_f)$, the vector $\mathbf{n}_1(s_f)$ will differ from $\mathbf{n}_1(s_i)$ by an angular amount of $\Delta \theta = \int_{s_i}^{s_f} \tau(x)\,{\rm d}x$.
\end{remark}
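The relations above can be checked numerically. The sketch below (ours, using \texttt{numpy}) takes a circular helix, whose curvature and torsion are constant, builds the Bishop curvatures from $\theta'=\tau$, and verifies that the normal development lies on the circle of radius $\kappa$:

```python
import numpy as np

# Sanity check (ours) of kappa_1 = kappa*cos(theta), kappa_2 = kappa*sin(theta),
# theta' = tau, for the circular helix alpha(s) = (a cos(s/c), a sin(s/c), b s/c)
# with c = sqrt(a^2 + b^2); its curvature and torsion are the constants
# kappa = a/c^2 and tau = b/c^2.
a, b = 2.0, 1.0
c = np.hypot(a, b)
kappa, tau = a / c**2, b / c**2

s = np.linspace(0.0, 10.0, 200)
theta = tau * s + 0.3          # theta' = tau; the 0.3 fixes the SO(2) ambiguity
k1 = kappa * np.cos(theta)     # first Bishop curvature
k2 = kappa * np.sin(theta)     # second Bishop curvature

# The normal development (k1, k2) stays on the circle of radius kappa.
assert np.allclose(k1**2 + k2**2, kappa**2)
```

The choice of the constant in $\theta$ only rotates the normal development, consistent with the $SO(2)$ ambiguity discussed below.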
An advantage of such a relatively parallel moving frame, or {\it Bishop frame} for short\footnote{This frame has been independently discovered several times \cite{DaCostaPRA1981,TangIEEE1970}, e.g., in the physics literature it is sometimes named the Tang frame. However, Bishop seems to be the first to exploit the geometric implications of such frames.}, is that it can be globally defined even if the curve is degenerate, i.e., if the curvature $\kappa$ vanishes at some points \cite{BishopMonthly}. Furthermore, it also allows for a simple characterization of spherical curves:
\begin{theorem}[Bishop \cite{BishopMonthly}]
A $C^2$ regular curve lies on a sphere if and only if its normal development, i.e., the curve $(\kappa_1(s),\kappa_2(s))$, lies on a line not passing through the origin. Moreover, the distance of this line from the origin, $d$, and the radius of the sphere, $r$, are reciprocals: $r=d^{-1}$.
\label{theo:BishopCharacSpherericalCurves}
\end{theorem}
\begin{remark}
Straight lines passing through the origin characterize planar curves which are not spherical \cite{BishopMonthly}.
\end{remark}
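For later comparison with the Lorentzian case, the ``only if'' direction of the theorem can be sketched in a few lines (a standard argument, written here in our notation):

```latex
% Sketch (ours) of the ``only if'' direction. If
% \langle\alpha-P,\alpha-P\rangle=r^2, differentiating gives
% \langle\alpha-P,\mathbf{t}\rangle=0, so
% \alpha-P=a_1\mathbf{n}_1+a_2\mathbf{n}_2 with
% a_i=\langle\alpha-P,\mathbf{n}_i\rangle constant, since
% a_i'=\langle\mathbf{t},\mathbf{n}_i\rangle
%      -\kappa_i\langle\alpha-P,\mathbf{t}\rangle=0.
% Differentiating \langle\alpha-P,\mathbf{t}\rangle=0 once more yields the line
\begin{equation*}
1 + a_1\kappa_1(s) + a_2\kappa_2(s) = 0,
\qquad a_1^2 + a_2^2 = r^2,
\end{equation*}
% whose distance from the origin is 1/\sqrt{a_1^2+a_2^2} = 1/r.
```

The same differentiation strategy is what drives the Lorentzian characterizations in Section 4.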
Finally, Bishop frames are not uniquely defined. Indeed, any rotation of $\mathbf{n}_1$ and $\mathbf{n}_2$ still gives two relatively parallel fields, i.e., there is an ambiguity associated with the group $SO(2)$ acting on the normal plane. However, the coefficients still determine a curve up to rigid motions \cite{BishopMonthly}. Moreover, $\kappa$-constant curves are represented in the normal development plane by circles centered at the origin with radius $\kappa$ \cite{BishopMonthly}, which can be seen as the orbits of the symmetry group $SO(2)$.
In the following we shall extend this formalism in order to present a way of building Bishop frames along curves in $E_1^3$ and then apply it to furnish a unified approach to the characterization of spherical curves in $E_1^3$ (i.e., curves on pseudo-spheres, pseudo-hyperbolic spaces, and light-cones), curves on quadrics in $E^3$, and finally characterize curves that lie on level surfaces of a smooth function by reinterpreting the problem in a new geometric setting.
\section{Moving frames on curves in $E_{1}^3$}
\label{sec:MovingFrameCurvesE3_1}
Let us denote by $E_{1}^3$ the vector space $\mathbb{R}^3$ equipped with a pseudo-metric $(\cdot,\cdot)$ of index $1$. In fact, the concepts below, and the construction of Bishop-like frames as well, are still valid in the context of a 3-dimensional semi-Riemannian manifold \cite{ONeill}, but to help intuition, the reader may keep in mind the particular setting of $\mathbb{R}^3$ equipped with the standard Minkowski metric, i.e., $(x,y)=x_1y_1+x_2y_2-x_3y_3$. Naturally, in a more general context, the derivative of a vector field along a curve should be understood as a covariant derivative.
Before discussing the moving frame method on curves in $E_1^3$, let us introduce some terminology and geometric properties associated with $E_1^3$ (for more details, we refer to \cite{LopesIEJG2014,ONeill}).
One property that makes the geometry in Lorentz-Minkowski spaces $E_1^3$ more difficult and richer than the geometry in $E^3$ is that curves and vector subspaces may assume different causal characters:
\begin{definition}
A vector $v\in E_1^3$ assumes one of the following {\it causal characters}:
\begin{enumerate}[(a)]
\item $v$ is {\it spacelike}, if $(v,v)>0$ or $v=0$;
\item $v$ is {\it timelike}, if $(v,v)<0$;
\item $v$ is {\it lightlike}, if $(v,v)=0$ and $v\not=0$.
\end{enumerate}
\end{definition}
The inner product $(\cdot,\cdot)$ induces a pseudo-norm defined by $\Vert x\Vert=\sqrt{\vert(x,x)\vert}$. Given a vector subspace $U\subseteq\mathbb{R}^3$, we define the orthogonal complement $U^{\perp}$ in the usual way: $U^{\perp}=\{v\in E_1^3:\forall\,u\in U,\,(v,u)=0\}$. Moreover, we can consider the restriction of $(\cdot,\cdot)$ to $U$, $(\cdot,\cdot)|_{U}$.
\begin{definition}
Let $U$ be a vector subspace, then
\begin{enumerate}[(a)]
\item $U$ is {\it spacelike} if $(\cdot,\cdot)|_{U}$ is positive definite;
\item $U$ is {\it timelike} if $(\cdot,\cdot)|_{U}$ has index 1;
\item $U$ is {\it lightlike} if $(\cdot,\cdot)|_{U}$ is degenerate.
\end{enumerate}
\end{definition}
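In computations it is convenient to have the classification above as a small routine; the sketch below (ours) uses the standard Minkowski metric $(x,y)=x_1y_1+x_2y_2-x_3y_3$ of index 1:

```python
# A minimal helper (ours, not from the paper) classifying the causal
# character of a vector in E_1^3 with the standard index-1 metric.

def minkowski(x, y):
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2]

def causal_character(v, tol=1e-12):
    q = minkowski(v, v)
    if q > tol:
        return "spacelike"
    if q < -tol:
        return "timelike"
    # (v, v) = 0: the zero vector is spacelike by convention,
    # any other null vector is lightlike.
    return "spacelike" if tuple(v) == (0.0, 0.0, 0.0) else "lightlike"

# e.g. (1,0,0) is spacelike, (0,0,1) is timelike, (1,0,1) is lightlike.
```

The tolerance is only there for floating-point inputs; exact integer vectors classify exactly.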
We have the following useful properties related to the causal characters of vector subspaces:
\begin{proposition}
Let $U\subseteq E_1^3$ be a vector subspace. Then,
\begin{enumerate}[(i)]
\item $\dim U^{\perp} = 3-\dim U$ and $(U^{\perp})^{\perp}=U$;
\item $U$ is lightlike if and only if $U^{\perp}$ is lightlike;
\item $U$ is spacelike (timelike) if and only if $U^{\perp}$ is timelike (spacelike).
\item $U$ is lightlike if and only if $U$ contains a lightlike vector but not a timelike one. Moreover, $U$ admits an orthogonal basis formed by a lightlike and a spacelike vectors.
\end{enumerate}
\end{proposition}
Given two vectors $u,v\in E_1^3$, the Lorentzian vector product, denoted by $u\times v$, is the only vector that satisfies
\begin{equation}
\forall\,w\in E_1^3,\,(u\times v,w)=\det(u,v,w),\label{eq::LorentzVectorProd}
\end{equation}
where the columns of $(u,v,w)$ are formed by the entries of $u,v$, and $w$.
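In coordinates, with the standard Minkowski metric of index 1, one can check that the defining property above forces $u\times v = G\,(u\times_E v)$, where $G=\mbox{diag}(1,1,-1)$ and $\times_E$ is the Euclidean cross product; a quick numerical check (ours):

```python
import numpy as np

G = np.diag([1.0, 1.0, -1.0])  # standard Minkowski metric of index 1

def lorentz_cross(u, v):
    # (u x v, w) = det(u, v, w) for all w forces u x v = G @ (u x_E v),
    # where x_E denotes the Euclidean cross product.
    return G @ np.cross(u, v)

u = np.array([1.0, 2.0, 0.5])
v = np.array([0.0, 1.0, 3.0])
w = np.array([2.0, -1.0, 1.0])

lhs = lorentz_cross(u, v) @ G @ w                 # (u x v, w)
rhs = np.linalg.det(np.column_stack([u, v, w]))   # det with columns u, v, w
assert np.isclose(lhs, rhs)
```

Equivalently: $(Gp,w)=p^TGGw=p\cdot w$, so flipping the sign of the third component of the Euclidean product is exactly what the determinant identity requires.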
From these definitions, we say that a curve $\alpha:I\to E_1^3$ is spacelike, timelike, or lightlike, if its velocity vector $\alpha'$ is spacelike, timelike, or lightlike, respectively. Analogously, we say that a surface is spacelike, timelike, or lightlike, if its tangent planes are spacelike, timelike, or lightlike, respectively.
If a curve is lightlike we cannot define an arc-length parameter (in $E^3$ this is always possible). In this case, one must introduce the notion of a {\it pseudo arc-length parameter}, i.e., a parameter $s$ such that $(\alpha''(s),\alpha''(s))=1$. More precisely, if $\alpha$ is a lightlike curve and $(\alpha'',\alpha'')\not=0$ (otherwise $\alpha''$ and $\alpha'$ would be linearly dependent and the curve would be a straight line), we define the {\it pseudo arc-length parameter} as
\begin{equation}
s = \int_a^t\Vert\alpha''(u)\Vert\,{\rm d}u\,.\label{eq:PseudoArclength}
\end{equation}
On the other hand, if $\alpha$ is not a lightlike curve, then the {\it arc-length parameter} is defined as usual
\begin{equation}
s = \int_a^t\Vert\alpha'(u)\Vert\,{\rm d}u\,.\label{eq:arclength}
\end{equation}
In the following we will assume every curve to be parametrized by the arc-length or pseudo arc-length parameter.
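For instance (our example), the curve $\alpha(t)=(\cos t,\sin t,t)$ is lightlike and already pseudo arc-length parametrized, since $(\alpha',\alpha')=0$ while $(\alpha'',\alpha'')=1$; a numerical check:

```python
import numpy as np

# Our example of a lightlike curve already in pseudo arc-length:
# alpha(t) = (cos t, sin t, t).
def mink(x, y):
    return x[0]*y[0] + x[1]*y[1] - x[2]*y[2]

t = 0.7
d1 = np.array([-np.sin(t), np.cos(t), 1.0])   # alpha'(t)
d2 = np.array([-np.cos(t), -np.sin(t), 0.0])  # alpha''(t)

assert np.isclose(mink(d1, d1), 0.0)  # lightlike velocity
assert np.isclose(mink(d2, d2), 1.0)  # unit acceleration: pseudo arc-length
```

Both identities hold for every $t$, so the integrand in Eq. \eqref{eq:PseudoArclength} is identically $1$ and $s=t$.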
\subsection{Frenet frame in $E_1^3$}
The study of the local properties of a curve $\alpha\subset E_1^3$ in a Frenet frame fashion becomes quite cumbersome due to the various possibilities for the causal characters of the tangent and its derivative. In essence, there is a construction for each combination of the causal characters of $\mathbf{t}$ and $\mathbf{t}'$.
Let $\mathbf{t}(s)=\alpha'(s)$ be the (unit) tangent and $s$ the arc- or pseudo arc-length parameter. If $\mathbf{t}'$ is not a lightlike vector, let $\mathbf{n}=\mathbf{t}'/\Vert\mathbf{t}'\Vert$ be the normal vector. We shall denote by $\epsilon=(\mathbf{t},\mathbf{t})$ and $\eta=(\mathbf{n},\mathbf{n})$ the parameters that encode the causal characters of the tangent and normal vectors. If $\mathbf{t}$ and $\mathbf{n}$ are not lightlike, then
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \eta\,\kappa & 0\\
-\epsilon\,\kappa & 0 & -\epsilon\eta\,\tau\\
0 & -\eta\,\tau & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \kappa & 0\\
-\kappa & 0 & \tau\\
0 & -\tau & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{n},\mathbf{b}}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right),\label{eq::FrenetEqsInE13}
\end{equation}
where $\mathbf{b}=\mathbf{t}\times\mathbf{n}$, and $\kappa = (\mathbf{t}',\mathbf{n})$ and $\tau=(\mathbf{n}',\mathbf{b})$ are the curvature function and torsion of $\alpha$, respectively\footnote{Our definition for $\kappa$ is slightly different from that of L\'opez \cite{LopesIEJG2014}. Indeed, despite the fact that the definition is formally identical to the Euclidean version, our $\kappa$ is a signed curvature and its sign encodes the causal character of the curve in a natural manner.}. Here $E_{\mathbf{t},\mathbf{n},\mathbf{b}}=\mbox{diag}(\epsilon,\eta,-\epsilon\eta)=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_1=\mathbf{n},\mathbf{e}_2=\mathbf{b}\}$.
If $\mathbf{t}$ is spacelike and $\mathbf{t}'$ is lightlike, we define $\mathbf{n}=\mathbf{t}'$, while $\mathbf{b}$ is the unique lightlike vector orthogonal to $\mathbf{t}$ that satisfies $(\mathbf{n},\mathbf{b})=-1$. The Frenet equations are
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \,1 & 0\\
0 & \,\tau & 0\\
1 & \,0 & -\tau\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=E_{\mathbf{t},\mathbf{n},\mathbf{b}}\left(
\begin{array}{ccc}
0 & 1 & 0\\
-1 & 0 & \tau\\
0 & -\tau & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right),\label{eq::FrenetEqsInE13tprimelightlike}
\end{equation}
where $\tau=-(\mathbf{n}',\mathbf{b})$ is the pseudo-torsion. Here $E_{\mathbf{t},\mathbf{n},\mathbf{b}}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the null frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_1=\mathbf{n},\mathbf{e}_2=\mathbf{b}\}$.
Finally, if $\mathbf{t}$ is lightlike, we define $\mathbf{n}=\mathbf{t}'$ (we assume this normal vector to be spacelike, otherwise $\alpha$ is a straight line), while $\mathbf{b}$ is the unique lightlike vector that satisfies $(\mathbf{n},\mathbf{b})=0$ and $(\mathbf{t},\mathbf{b})=-1$. The Frenet equations are then
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & 1 & 0\\
-\tau & 0 & 1\\
0 & -\tau & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & 1 & 0\\
-1 & 0 & \tau\\
0 & -\tau & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{n},\mathbf{b}}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}\\
\mathbf{b}\\
\end{array}
\right),\label{eq::FrenetEqsInE13tangentlightlike}
\end{equation}
where $\tau=(\mathbf{n}',\mathbf{b})$ is the pseudo-torsion. Here $E_{\mathbf{t},\mathbf{n},\mathbf{b}}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the null frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_1=\mathbf{n},\mathbf{e}_2=\mathbf{b}\}$.
\begin{remark}
In $E^3$ the coefficient matrix of a Frenet frame is always skew-symmetric. On the other hand, this does not happen in $E_1^3$ \cite{LowJGSP2012}. However, the above expressions show that the coefficient matrix can be obtained from a skew-symmetric matrix through a right-multiplication, or a left one if $\mathbf{t}'$ is lightlike, by the matrix $E_{\mathbf{t},\mathbf{n},\mathbf{b}}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ associated with the respective Frenet frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_1=\mathbf{n},\mathbf{e}_2=\mathbf{b}\}$ in $E_1^3$. This skew-symmetric matrix is precisely the coefficient matrix that we would obtain for a Frenet frame in $E^3$. Let us mention that $\mathbf{t}'$ being lightlike does not mean that the curvature function is $\kappa=1$; a curvature function is not well defined for such curves \cite{LopesIEJG2014}.
\end{remark}
\begin{remark}
In the following, when discussing Bishop frames in $E_1^3$ along non-lightlike curves and null frames along lightlike curves, we will see that the coefficient matrix can be obtained from a skew-symmetric matrix (precisely the matrix that we would obtain for a Bishop frame in $E^3$) through a right-multiplication by the matrix associated with a convenient basis.
\end{remark}
\subsection{Relatively parallel moving frames along spacelike or lightlike curves}
A quite complete and systematic approach to the problem of the existence of Bishop-like frames along curves in $E_1^3$ was presented by \"Ozdemir and Ergin \cite{OzdemirMJMS2008}, where they build Bishop-like frames on timelike and spacelike curves with a non-lightlike normal. However, as in the Frenet frame case, they also paid much attention to the causal character of $\mathbf{t}'$. Here, we show that one must exploit the structure of the normal plane inherited from the causal character of $\mathbf{t}$ in order to build a unified treatment of the problem. More precisely, instead of considering the problem for each combination of the causal characters of $\mathbf{t}$ and $\mathbf{t}'$, one must pay attention to the symmetry associated with the problem, which is reflected in an ambiguity in the definition of a Bishop frame. The study of moving frames along curves in $E_1^3$ is then divided into three cases only: (i) timelike curves; (ii) spacelike curves; and (iii) lightlike curves. As a direct consequence, the characterization of spherical curves can be split into three theorems only.
\begin{definition}
A vector field $\mathbf{e}(s)$ along a curve $\alpha:I\to E_1^3$ is {\it relatively parallel} if the derivative of its normal component is a multiple of the unit tangent vector $\mathbf{t}=\alpha'$ and its tangent component is a constant multiple of $\mathbf{t}$.
\end{definition}
Let $\alpha:I\to E_1^3$ be a timelike curve. Since $\mathbf{t}$ is a timelike vector, the normal plane $N_{\alpha(s)}=\mbox{span}\{\mathbf{t}(s)\}^{\perp}$ is spacelike. To prove the existence of relatively parallel moving frames, let $\mathbf{x}_1$ and $\mathbf{x}_2=\mathbf{t}\times\mathbf{x}_1$ be an orthonormal basis of $N_{\alpha}$. The frame $\{\mathbf{t},\mathbf{x}_1,\mathbf{x}_2\}$ satisfies the following equations
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{x}_1\\
\mathbf{x}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & p_{01} & p_{02}\\
p_{01} & 0 & p_{12}\\
p_{02} & -p_{12} & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{x}_1\\
\mathbf{x}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & p_{01} & p_{02}\\
-p_{01} & 0 & p_{12}\\
-p_{02} & -p_{12} & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{x}_1,\mathbf{x}_2}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{x}_1\\
\mathbf{x}_2\\
\end{array}
\right),
\end{equation}
for some functions $p_{ij}$, where $E_{\mathbf{t},\mathbf{x}_1,\mathbf{x}_2}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the time-oriented frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_k=\mathbf{x}_k\}$. Let $\theta$ be a smooth function such that $\mathbf{x}=L\cos\theta\,\mathbf{x}_1+L\sin\theta\,\mathbf{x}_2$, where $L$ is a constant. Then,
\begin{equation}
\mathbf{x}'=L(p_{01}\cos\theta+p_{02}\sin\theta)\mathbf{t}+L(\theta'+p_{12})(-\sin\theta\mathbf{x}_1+\cos\theta\mathbf{x}_2).
\end{equation}
Thus, it follows that $\mathbf{x}$ is relatively parallel if and only if $\theta'+p_{12}=0$. By the existence of a solution $\theta(s)$ for any initial condition, this shows that relatively parallel vector fields do exist along timelike curves. Observe that Bishop frames are not unique. Indeed, any rotation of the normal vectors still gives two relatively parallel vector fields, i.e., there is an ambiguity associated with the group $SO(2)$.
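The construction can be carried out in closed form for the timelike helix $\alpha(s)=(\cos s,\sin s,\sqrt{2}\,s)$ (our example, arc-length parametrized since $(\alpha',\alpha')=-1$): taking $\mathbf{x}_1=\mathbf{t}'$ and $\mathbf{x}_2=\mathbf{t}\times\mathbf{x}_1$ one finds $p_{12}=\sqrt{2}$, so $\theta(s)=-\sqrt{2}\,s$ solves $\theta'+p_{12}=0$. The sketch below verifies numerically that the resulting $\mathbf{n}_1$ is relatively parallel, i.e., $\mathbf{n}_1'$ is a multiple of $\mathbf{t}$:

```python
import numpy as np

# Numerical check (ours) that n1 = cos(theta) x1 + sin(theta) x2 with
# theta(s) = -sqrt(2) s is relatively parallel along the timelike helix
# alpha(s) = (cos s, sin s, sqrt(2) s).
G = np.diag([1.0, 1.0, -1.0])  # Minkowski metric of index 1

def frame(s):
    t  = np.array([-np.sin(s),  np.cos(s), np.sqrt(2)])   # unit timelike tangent
    x1 = np.array([-np.cos(s), -np.sin(s), 0.0])          # x1 = t', unit spacelike
    x2 = G @ np.cross(t, x1)                               # Lorentzian vector product
    th = -np.sqrt(2) * s                                   # solves theta' + p12 = 0
    return t, np.cos(th) * x1 + np.sin(th) * x2

s, h = 0.4, 1e-6
t, n1 = frame(s)
_, n1h = frame(s + h)
dn1 = (n1h - n1) / h                 # numerical derivative of n1

# n1' is parallel to t: its Lorentzian cross product with t vanishes.
assert np.allclose(G @ np.cross(dn1, t), 0.0, atol=1e-4)
assert np.isclose(n1 @ G @ n1, 1.0)  # n1 stays a unit spacelike vector
```

Analytically one finds $\mathbf{n}_1'=\cos(\sqrt{2}\,s)\,\mathbf{t}$, consistent with $\mathbf{n}_i'=-\epsilon\kappa_i\mathbf{t}$ below.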
On the other hand, if $\alpha:I\to E_1^3$ is a spacelike curve, $\mathbf{t}$ is a spacelike vector and then the normal plane $N_{\alpha(s)}=\mbox{span}\{\mathbf{t}(s)\}^{\perp}$ is timelike. In a Frenet frame fashion, the study would be divided into three cases, depending on the causal character of $\mathbf{t}'\in N_{\alpha}$, i.e., whether $\mathbf{t}'$ is a space-, time-, or lightlike vector. But, if we only take into account the structure of $N_{\alpha}$, this is no longer necessary.
To prove the existence of relatively parallel moving frames along spacelike curves, let $\mathbf{y}_1\in N_{\alpha}$ be a timelike vector and let $\mathbf{y}_2=\mathbf{t}\times\mathbf{y}_1$ be spacelike. Then, the frame $\{\mathbf{t},\mathbf{y}_1,\mathbf{y}_2\}$ is an orthonormal time-oriented basis of $E_1^3$ along $\alpha$. The frame $\{\mathbf{t},\mathbf{y}_1,\mathbf{y}_2\}$ satisfies the following equation of motion
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{y}_1\\
\mathbf{y}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & -p_{01} & p_{02}\\
-p_{01} & 0 & p_{12}\\
-p_{02} & p_{12} & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{y}_1\\
\mathbf{y}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & p_{01} & p_{02}\\
-p_{01} & 0 & p_{12}\\
-p_{02} & -p_{12} & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{y}_1,\mathbf{y}_2}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{y}_1\\
\mathbf{y}_2\\
\end{array}
\right),
\end{equation}
for some functions $p_{ij}$, where $E_{\mathbf{t},\mathbf{y}_1,\mathbf{y}_2}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the time-oriented frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_k=\mathbf{y}_k\}$. Let $\theta$ be a smooth function such that $\mathbf{y}=L\cosh\theta\,\mathbf{y}_1+L\sinh\theta\,\mathbf{y}_2$, where hyperbolic trigonometric functions are used because the normal plane is timelike. Then, we have
\begin{equation}
\mathbf{y}'=L(-p_{01}\cosh\theta-p_{02}\sinh\theta)\mathbf{t}+L(\theta'+p_{12})(\sinh\theta\mathbf{y}_1+\cosh\theta\mathbf{y}_2).
\end{equation}
Thus, it follows that $\mathbf{y}$ is relatively parallel if and only if $\theta'+p_{12}=0$. By the existence of a solution $\theta(s)$ for any initial condition, this shows that relatively parallel vector fields do exist along spacelike curves. As in the previous case, observe that Bishop frames are not unique. Indeed, any (hyperbolic) rotation of the normal vectors still gives two relatively parallel vector fields, i.e., there is an ambiguity associated with the group $SO_1(2)$, which is a
component of the symmetry group of a Lorentzian plane $E^2_1$ \cite{LopesIEJG2014,ONeill}.
When $\mathbf{n}$ has a causal character distinct from that of $\mathbf{n}_1$, we cannot obtain $\mathbf{n},\mathbf{b}$ from an $SO_1(2)$-rotation of $\mathbf{n}_1,\mathbf{n}_2$, i.e., there exists no $M\in SO_1(2)$ such that $M(\mathbf{n})=\mathbf{n}_1$ and $M(\mathbf{b})=\mathbf{n}_2$. In this case, we must first exchange $\mathbf{n}_1$ and $\mathbf{n}_2$ and then rotate them \cite{OzdemirMJMS2008}. However, we can still read the information about the causal character of $\mathbf{n}$, including the lightlike case, from the ``circles'' of the normal plane, i.e., the orbits of $O_1(2)$, see figure 1 and Proposition \ref{prop::geomNormalDevelopm} below.
Now we put together the above-mentioned existence results for relatively parallel vector fields along non-lightlike curves. Let $\{\mathbf{n}_1,\mathbf{n}_2\}$ be a basis for $N_{\alpha}$ formed by relatively parallel vectors such that
\begin{equation}
\mathbf{n}'_i(s) = -\epsilon \kappa_i\,\mathbf{t}(s),
\end{equation}
where $\epsilon = (\mathbf{t},\mathbf{t})=\pm1$ and we have defined the Bishop curvatures
\begin{equation}
\kappa_i = (\mathbf{t}',\mathbf{n}_i),\,i=1,2\,.
\end{equation}
Then, defining $\epsilon_1=(\mathbf{n}_1,\mathbf{n}_1)=\pm1$, we can write the following equation of motion
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}_1\\
\mathbf{n}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \epsilon_1\kappa_{1} & \kappa_{2}\\
-\epsilon\kappa_{1} & 0 & 0\\
-\epsilon\kappa_{2} & 0 & 0\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}_1\\
\mathbf{n}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \kappa_{1} & \kappa_{2}\\
-\kappa_{1} & 0 & 0\\
-\kappa_{2} & 0 & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{n}_1\\
\mathbf{n}_2\\
\end{array}
\right),\label{eq::GenBishopEqsNonlightCurves}
\end{equation}
where $E_{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the time-oriented frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_k=\mathbf{n}_k\}$. The numbers $\epsilon$ and $\epsilon_1$ determine the causal characters of $\mathbf{t}$ and $\mathbf{n}_1$, respectively, and since $\mathbf{n}_2=\mathbf{t}\times\mathbf{n}_1$, we have $\epsilon_2=(\mathbf{n}_2,\mathbf{n}_2)=-\epsilon\epsilon_1=+1$. So, in this case $E_{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2}=\mbox{diag}(\epsilon,\epsilon_1,-\epsilon\epsilon_1)$.
\subsection{Geometry of the normal development of spacelike and timelike curves}
\begin{figure*}[tbp]
\centering
{\includegraphics[width=0.33\linewidth]{Figure1a.eps}}
{\includegraphics[width=0.32\linewidth]{Figure1b.eps}}
{\includegraphics[width=0.32\linewidth]{Figure1c.eps}}
\caption{The geometry of the normal development $(\kappa_1,\kappa_2)$: (a) On a space- or timelike normal plane, lines through the origin (dashed red line) represent planar curves (Proposition \ref{prop::CharacPlaneCurves}), and lines not passing through the origin (solid blue line) represent spherical curves (Section 4); (b) On a spacelike normal plane, circles represent $\kappa$-constant curves; and (c) On a timelike normal plane, hyperbolas represent $\kappa$-constant curves with spacelike normal vector (solid blue line) or timelike normal vector (dashed red line), and the degenerate hyperbola $\kappa_1=\pm \kappa_2$ represents curves with a lightlike normal vector (dotted black line).}
\end{figure*}
The normal development of $\alpha(s)$ is the planar curve $(\kappa_1(s),\kappa_2(s))$. After proving the existence of Bishop moving frames along non-lightlike curves, the natural question is how to relate the Bishop curvatures $\kappa_1,\kappa_2$ to the geometry of the curve that defines them.
From the Frenet equations we have
\begin{equation}
\eta\mathbf{n}=\frac{\mathbf{t}'}{\kappa}=\frac{\epsilon_1\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2}{\kappa}\Rightarrow \eta = \epsilon_1\frac{\kappa_1^2}{\kappa^2}+\frac{\kappa_2^2}{\kappa^2}\,,
\end{equation}
where $\eta=(\mathbf{n},\mathbf{n})\in\{-1,0,+1\}$. Then we have the following relations (see figure 1):
\begin{proposition}
For a fixed value of the parameter $s$, the point $(\kappa_1(s),\kappa_2(s))$ lies on a conic. More precisely,
\begin{enumerate}[(a)]
\item If $\mathbf{t}(s)$ is timelike (so $\mathbf{n}(s)$ must be spacelike), then $(\kappa_1(s),\kappa_2(s))$ lies on a circle of radius $\kappa(s)$: $\kappa^2=X^2+Y^2$;
\item If $\mathbf{t}(s)$ is spacelike and $\mathbf{n}(s)$ is timelike (spacelike), then $(\kappa_1(s),\kappa_2(s))$ lies on a hyperbola with foci on the $x$ axis ($y$ axis): $\kappa^2=\pm X^2\mp Y^2$;
\item If $\mathbf{t}(s)$ is spacelike and $\mathbf{n}(s)$ is lightlike, then $(\kappa_1(s),\kappa_2(s))$ lies on the lines $X=\pm Y$, which are the asymptotes of the hyperbolas in item (b).
\end{enumerate}
\label{prop::geomNormalDevelopm}
\end{proposition}
\begin{remark}
Observe that $\kappa$-constant curves are precisely the orbits of $O_1(2)$, the symmetry group of a Lorentzian plane.
\end{remark}
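As a concrete illustration of item (b), consider the following planar example (ours):

```latex
% Our illustration of item (b): the spacelike curve
% \alpha(s) = (\sinh s,\, 0,\, \cosh s) satisfies (\alpha',\alpha') = 1 and
% \mathbf{t}' = \alpha, with (\mathbf{t}',\mathbf{t}') = -1, so its normal is
% timelike. The curve is planar, the hyperbolic angle \theta is constant, and
\begin{equation*}
(\kappa_1,\kappa_2) = (\kappa\cosh\theta_0,\,\kappa\sinh\theta_0)
\quad\Longrightarrow\quad
\epsilon_1\kappa_1^2 + \kappa_2^2 = -\kappa^2,
\end{equation*}
% with \epsilon_1 = -1: the normal development is a single point on the
% hyperbola \kappa^2 = X^2 - Y^2 of item (b), consistent with \eta = -1.
```

Non-planar curves with $\tau\not=0$ trace out arcs of these conics instead of single points.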
\begin{proposition}
Let $\alpha:I\to E_1^3$ be a $C^2$ regular curve which is not spherical. Then, the curve is planar if and only if its normal development $(\kappa_1(s),\kappa_2(s))$ lies on a straight line passing through the origin.
\label{prop::CharacPlaneCurves}
\end{proposition}
\begin{proof}
Suppose $a\,\kappa_1+b\,\kappa_2=0$ for some constants $a$ and $b$, and define $\mathbf{x}(s)=a\mathbf{n}_1(s)+b\mathbf{n}_2(s)\in N_{\alpha(s)}=\mbox{span}\{\mathbf{t}(s)\}^{\perp}$. It follows that $\mathbf{x}$ is constant and also that
\begin{equation}
(\alpha,\mathbf{x})' = (\alpha,-(a\epsilon\kappa_1+b\epsilon\kappa_2)\mathbf{t})=-\epsilon(\alpha,\mathbf{t})(a\kappa_1+b\kappa_2)=0\,.
\end{equation}
Thus, $(\alpha(s),\mathbf{x})$ is constant and then $(\alpha(s)-\alpha(s_0),\mathbf{x})=0$. So, $\alpha$ is a planar curve.
Conversely, let $\alpha$ be contained on a plane $(\alpha(s)-\alpha(s_0),\mathbf{x})=0$. Since the tangent $\mathbf{t}$ also belongs to this plane, we can write $\mathbf{x}=a\mathbf{n}_1+b\mathbf{n}_2$ for some constants $a,b$. Then,
\begin{equation}
0=(\alpha-\alpha_0,\mathbf{x})' = (\alpha-\alpha_0,-(a\epsilon\kappa_1+b\epsilon\kappa_2)\mathbf{t})=-\epsilon(a\kappa_1+b\kappa_2)\,(\alpha-\alpha_0,\mathbf{t})\,.
\end{equation}
Thus, $a\kappa_1+b\kappa_2=0$ or $(\alpha-\alpha_0,\mathbf{t})=0$. In this last case, the curve would be spherical. Indeed, if it were $(\alpha-\alpha_0,\mathbf{t})=0$, then $\alpha-\alpha_0=b_1\mathbf{n}_1+b_2\mathbf{n}_2$, for some constants $b_1$ and $b_2$, because $b_i=\epsilon_i(\alpha-\alpha_0,\mathbf{n}_i)$ and $b_i'=\epsilon_i(\mathbf{t},\mathbf{n}_i)-\epsilon\epsilon_i\kappa_i(\alpha-\alpha_0,\mathbf{t})=0$.
Taking the derivative of $(\alpha-\alpha_0,\mathbf{t})=0$ gives $0=(\mathbf{t},\mathbf{t})+(\alpha-\alpha_0,\epsilon_1\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2)=\epsilon+b_1\kappa_1+b_2\kappa_2$. But $0=1+\epsilon b_1\kappa_1+\epsilon b_2\kappa_2$ is the equation of a spherical curve (see the theorems in Section 4 below).
\qed
\end{proof}
\begin{remark}
If the pseudo-torsion of a spacelike curve with a lightlike normal vanishes, then the curve is planar. The converse is not true: indeed, L\'opez \cite{LopesIEJG2014} gives an example of a planar curve with a non-zero pseudo-torsion. However, it follows from the above propositions that all spacelike curves with a lightlike normal are planar, regardless of the value of the pseudo-torsion.
\end{remark}
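A simple family of examples (ours) exhibits this phenomenon explicitly:

```latex
% Our example: for any smooth f with f'' \neq 0, the curve
% \alpha(s) = (s,\, f(s),\, f(s)) is spacelike and arc-length parametrized,
% (\alpha',\alpha') = 1 + f'^2 - f'^2 = 1, with lightlike normal direction
% \alpha'' = (0, f'', f''). It lies on the lightlike plane orthogonal to
% the lightlike vector \mathbf{x} = (0,1,1):
\begin{equation*}
(\alpha(s)-\alpha(0),\,\mathbf{x})
= \big(f(s)-f(0)\big) - \big(f(s)-f(0)\big) = 0,
\end{equation*}
% so \alpha is planar for every choice of f, i.e., regardless of its
% pseudo-torsion.
```

Note that the containing plane is itself lightlike, in agreement with item (ii) of the proposition on orthogonal complements.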
\subsection{Moving frames along lightlike curves}
It is not possible to define a Bishop frame along lightlike curves; we cannot even define an orthonormal frame. In this case we must work with the concept of a null frame (see Inoguchi and Lee \cite{InoguchiIEJG2008} for a survey on the geometry of lightlike curves and null frames along them). As in the previous cases, we will introduce along $\alpha$ a (null) frame by exploiting the structure of the normal plane only.
Let $\alpha:I\to E_1^3$ be a lightlike curve. In this case, since $\alpha'$ is a lightlike vector, the normal plane $N_{\alpha(s)}=\mbox{span}\{\alpha'(s)\}^{\perp}$ is lightlike and $\alpha'\in N_{\alpha}$. So, we have $N_{\alpha(s)}=\mbox{span}\{\alpha'(s),\mathbf{z}_1(s)\}$, where $\mathbf{z}_1$ is a unit spacelike vector. Denote by $\mathbf{t}=\alpha'$ the tangent vector. If $\mathbf{t}'$ is spacelike, then we can assume $\alpha$ parametrized by pseudo arc-length. Let $\mathbf{z}_2$ be the lightlike vector orthogonal to $\mathbf{z}_1$ and satisfying $(\mathbf{t},\mathbf{z}_2)=-1$. In this case, the equations of motion are
\begin{equation}
\frac{{\rm d}}{{\rm d}s}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{z}_1\\
\mathbf{z}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
\kappa_3 & \kappa_1 & 0\\
-\kappa_2 & 0 & \kappa_1\\
0 & -\kappa_{2} & -\kappa_3\\
\end{array}
\right)\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{z}_1\\
\mathbf{z}_2\\
\end{array}
\right)=\left(
\begin{array}{ccc}
0 & \kappa_1 & -\kappa_3\\
-\kappa_1 & 0 & \kappa_2\\
\kappa_3 & -\kappa_{2} & 0\\
\end{array}
\right)E_{\mathbf{t},\mathbf{z}_1,\mathbf{z}_2}\left(
\begin{array}{c}
\mathbf{t}\\
\mathbf{z}_1\\
\mathbf{z}_2\\
\end{array}
\right),\label{eq::FrameEqsLightCurves}
\end{equation}
where $\kappa_1=(\mathbf{t}',\mathbf{z}_1)$, $\kappa_2=(\mathbf{z}_1',\mathbf{z}_2)$, and $\kappa_3=(\mathbf{z}_2',\mathbf{t})$. Here $E_{\mathbf{t},\mathbf{z}_1,\mathbf{z}_2}=[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$ denotes the matrix associated with the null frame $\{\mathbf{e}_0=\mathbf{t},\mathbf{e}_1=\mathbf{z}_1,\mathbf{e}_2=\mathbf{z}_2\}$. The coefficient $\kappa_1$ plays a significant role in the theory of moving frames along lightlike curves.
\begin{remark}
If $\mathbf{t}'$ is spacelike and if we take $\mathbf{z}_1=\mathbf{t}'=\mathbf{n}$, then $\mathbf{z}_2=\mathbf{b}$ and $\kappa_1=1$, $\kappa_2=\tau$, and $\kappa_3=0$. However, the Frenet frame is not defined when $\mathbf{t}'$ is lightlike. Here, the presence of $\kappa_3$ allows for a description of lightlike curves regardless of the causal character of $\mathbf{t}'$.
\end{remark}
\begin{proposition}
A lightlike curve $\alpha:I\to E_1^3$ is a straight line if and only if $\kappa_1=0$. Moreover, if $\alpha$ is not a straight line and is parametrized by the pseudo arc-length, then $\kappa_1^2=1$.
\label{prop::CharacLightlikeLine}
\end{proposition}
\begin{proof}
If $\kappa_1=0$, then $\mathbf{t}'=\kappa_3\mathbf{t}$. Integration of this equation gives $\alpha=\alpha_0+(\int {\rm e}^{\int\kappa_3})\mathbf{t}_0$, where $\alpha_0$ and $\mathbf{t}_0$ are constants. Then, $\alpha$ is a straight line. Conversely, let $\alpha=\alpha_0+f\,\mathbf{t}_0$, with $f$ a smooth function. Taking derivatives, it is easy to verify that $\kappa_1=0$.
Now, suppose $\kappa_1\not=0$, so if $\alpha$ is parametrized by pseudo arc-length we have
\begin{equation}
1=(\mathbf{t}',\mathbf{t}')=\kappa_1^2,
\end{equation}
as expected.
\qed
\end{proof}
\section{Characterization of spherical curves in $E_1^3$}
In $E^3$ the function $F(x)=\langle x-P,x-P\rangle$ is non-negative. A sphere of radius $r$ and center $P$ in $E^3$, $\mathbb{S}^2(P;r)$, is then defined as a level set of $F$, i.e., $\langle x-P,x-P\rangle=r^2$ (if $r=0$ the sphere degenerates to a single point). On the other hand, in $E_1^3$ the function $F_1(x)=(x-P,x-P)$ may assume any real value. So, in $E_1^3$ we still define spheres as level sets of $F_1$, but one must consider three types of spheres, depending on the sign of $F_1$. We shall adopt the following standard notations:
\begin{equation}
\mathbb{S}_1^2(P;r) =\{x\in E_1^3\,:\, (x-P,x-P)=r^2\},
\end{equation}
\begin{equation}
\mathcal{C}^2(P) = \{x\in E_1^3\,:\,(x-P,x-P)=0\},
\end{equation}
and
\begin{equation}
\mathbb{H}_0^2(P;r) =\{x\in E_1^3\,:\, (x-P,x-P)=-r^2\},
\end{equation}
where $r\in (0,\infty)$. These spheres are known as the pseudo-sphere, light-cone, and pseudo-hyperbolic space, respectively. As surfaces in $E_1^3$, pseudo-spheres and pseudo-hyperbolic spaces have constant Gaussian curvature $1/r^2$ and $-1/r^2$ \cite{LopesIEJG2014}, respectively\footnote{If we see them as surfaces in $E^3$, their Gaussian curvatures are not constant and, additionally, for $\mathbb{S}_1^2(P;r)$ it is negative, while for $\mathbb{H}_0^2(P;r)$ it is positive.}.
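As a small illustration (not in the paper), the three types of spheres centered at a point $P$ can be told apart by the sign of $F_1$. The code below assumes the signature convention $(+,+,-)$, with the third coordinate timelike.

```python
import numpy as np

def minkowski(u, v):
    """Inner product of signature (+, +, -) on R^3 (assumed convention)."""
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

def sphere_type(x, P=np.zeros(3)):
    """Which level set of F_1(x) = (x-P, x-P) the point x belongs to."""
    val = minkowski(x - P, x - P)
    if val > 0:
        return 'pseudo-sphere'        # S_1^2(P; sqrt(val))
    elif val < 0:
        return 'pseudo-hyperbolic'    # H_0^2(P; sqrt(-val))
    return 'light-cone'               # C^2(P)

print(sphere_type(np.array([1.0, 0.0, 0.0])))  # pseudo-sphere
print(sphere_type(np.array([0.0, 0.0, 1.0])))  # pseudo-hyperbolic
print(sphere_type(np.array([1.0, 0.0, 1.0])))  # light-cone
```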
It is well known that the Minkowski metric restricted to $\mathbb{H}_0^2(P;r)$ is a positive definite metric. It follows that $\mathbb{H}_0^2(P;r)$ is a spacelike surface and, consequently, there are no lightlike or timelike curves in $\mathbb{H}_0^2(P;r)$. On the other hand, light-cones are lightlike surfaces \cite{LopesIEJG2014} and, consequently, there are no timelike curves on them. The pseudo-sphere is the only one that carries all three types of curves \cite{InoguchiIEJG2008,LopesIEJG2014}:
\begin{lemma}
There exist no time- and lightlike curves in $\mathbb{H}_0^2(P;r)$ and no timelike curves in $\mathcal{C}^2(P)$.
\end{lemma}
Now we generalize Bishop's characterization of spherical curves in $E^3$ \cite{BishopMonthly} to the context of spheres in $E_1^3$.
\begin{theorem}
A $C^2$ regular spacelike or timelike curve $\alpha:I\to E_1^3$ lies on a sphere of nonzero radius, i.e., $\alpha\subseteq \mathbb{H}_0^2(P;r)$ or $\mathbb{S}_1^2(P;r)$, if and only if its normal development, i.e., the curve $(\kappa_1(s),\kappa_2(s))$, lies on a line not passing through the origin. Moreover, the distance of this line from the origin, $d$, and the radius of the sphere are reciprocals: $d=1/r$.
\label{theo::characSpaceAndLightCurves}
\end{theorem}
\begin{remark}
When a curve is spacelike, the normal plane is timelike, and the distance in the normal development plane should then be understood as the distance induced by the restriction of $(\cdot,\cdot)$ to the normal plane. So, circles in this plane metric are hyperbolas.
\end{remark}
\begin{proof}
{\it of Theorem} \ref{theo::characSpaceAndLightCurves}. Denote by $\mathcal{Q}$ a sphere $\mathbb{H}_0^2(P;r)$ or $\mathbb{S}_1^2(P;r)$. If $\alpha$ lies in $\mathcal{Q}$, then taking the derivative of $(\alpha-P,\alpha-P)=\pm r^2$ gives
\begin{equation}
(\alpha-P,\mathbf{t})=0.\label{eq::aux1}
\end{equation}
This implies that $\alpha-P=a_1\mathbf{n}_1+a_2\mathbf{n}_2$. Now, let us investigate the coefficients $a_i$. Since $a_i=\epsilon_i(\alpha-P,\mathbf{n}_i)$, where $\epsilon_i=(\mathbf{n}_i,\mathbf{n}_i)$, we have
\begin{eqnarray}
a_i' & = & \epsilon_i(\mathbf{t},\mathbf{n}_i)+\epsilon_i(\alpha-P,\mathbf{n}_i')=0\,.
\end{eqnarray}
Therefore, the coefficients $a_1$ and $a_2$ are constants. Finally, taking the derivative of Eq. (\ref{eq::aux1}), we find
\begin{equation}
0=(\mathbf{t},\mathbf{t})+(\alpha-P,\epsilon_1\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2)=\epsilon+a_1\kappa_1+a_2\kappa_2.
\end{equation}
Thus, the normal development $(\kappa_1,\kappa_2)$ lies on a straight line $1+\epsilon a_1X+\epsilon a_2Y=0$ not passing through the origin. If $\mathcal{Q}=\mathbb{S}_1^2(P;r)$, then $r^2=(\alpha-P,\alpha-P)=\epsilon_1 a_1^2+a_2^2=1/d^2$, where $d$ is the distance of the line from the origin. On the other hand, if $\mathcal{Q}=\mathbb{H}_0^2(P;r)$, then the curve is necessarily spacelike and $\epsilon_1=-1$, since $\mathbf{n}_1$ is timelike (as mentioned before, $\mathbb{H}_0^2(P;r)$ is a spacelike surface). So, we have $r^2=-(\alpha-P,\alpha-P)= a_1^2-a_2^2=\pm1/d^2$ (the orientation of the hyperbolas will depend on the causal character of the normal vector $\mathbf{n}$ according to Proposition \ref{prop::geomNormalDevelopm}: see figure 1).
Conversely, assume that $0=1+\epsilon a_1\kappa_1+\epsilon a_2\kappa_2$ for some constants $a_1$ and $a_2$. Define the point $P=\alpha-a_1\mathbf{n}_1-a_2\mathbf{n}_2$. Then $P'=\mathbf{t}+(a_1\epsilon\kappa_1+a_2\epsilon\kappa_2)\mathbf{t}=0$ and therefore $P$ is a fixed point. It follows that $\alpha$ lies on a sphere of nonzero radius and center $P$: $(\alpha-P,\alpha-P)=\epsilon_1a_1^2+a_2^2$.
\qed
\end{proof}
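Theorem \ref{theo::characSpaceAndLightCurves} can be checked on a concrete curve (an illustration not in the original, assuming the signature convention $(+,+,-)$ and the relation $\mathbf{t}'=\epsilon_1\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2$ used in the proof). Take the spacelike circle $\alpha(s)=(R\cos(s/R),R\sin(s/R),0)$ on $\mathbb{S}_1^2(0;R)$; the frame $\mathbf{n}_1=(0,0,1)$, $\mathbf{n}_2=\alpha/R$ is relatively parallel ($\mathbf{n}_1'=0$, $\mathbf{n}_2'=\mathbf{t}/R$), giving $\kappa_1=0$ and $\kappa_2=-1/R$, and the normal development satisfies the linear relation with $a_2=R=r$.

```python
import numpy as np

mink = lambda u, v: u[0]*v[0] + u[1]*v[1] - u[2]*v[2]   # signature (+,+,-) assumed

R = 2.0
s = 0.8
# Spacelike circle on the pseudo-sphere S_1^2(0; R), parametrized by arc length.
alpha = np.array([R*np.cos(s/R), R*np.sin(s/R), 0.0])
t  = np.array([-np.sin(s/R), np.cos(s/R), 0.0])          # unit spacelike tangent
n1 = np.array([0.0, 0.0, 1.0])                           # timelike normal, n1' = 0
n2 = np.array([np.cos(s/R), np.sin(s/R), 0.0])           # spacelike normal, n2' = t/R

assert abs(mink(t, t) - 1.0) < 1e-12                     # (t,t) = 1

# Bishop curvatures from t' = eps1*k1*n1 + k2*n2, with t' = -(1/R)*n2:
k1, k2 = 0.0, -1.0/R

# Coefficients of the normal development line: a_i = eps_i*(alpha - P, n_i), P = 0.
a1 = -mink(alpha, n1)    # eps1 = -1 for the timelike normal
a2 =  mink(alpha, n2)    # eps2 = +1

print(abs(1.0 + a1*k1 + a2*k2) < 1e-12)  # True: the development lies on the line
print(abs(a2 - R) < 1e-12)               # True: a2 = R, the radius of the sphere
```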
For spacelike curves on light-cones (as mentioned before, there are no timelike curves on light-cones: Lemma 1) we have an analogous characterization:
\begin{theorem}
A $C^2$ regular spacelike curve $\alpha:I\to E_1^3$ lies on a light-cone $\mathcal{C}^2(P)$, i.e., lies on a sphere of zero radius, if and only if its normal development, i.e., the curve $(\kappa_1(s),\kappa_2(s))$, lies on a line $\{a_1X+a_2Y+1=0\}$ not passing through the origin. Moreover, we have the relation $a_2=\pm a_1$.
\end{theorem}
\begin{proof}
Let $\alpha$ be a curve in $\mathcal{C}^2(P)$ with $(\mathbf{t},\mathbf{t})=1$ and $(\mathbf{n}_1,\mathbf{n}_1)=-1$, i.e., $\epsilon=1$ and $\epsilon_1=-1$. Now taking the derivative of $(\alpha-P,\alpha-P)=0$ gives
\begin{equation}
(\alpha-P,\mathbf{t})=0.\label{eq::aux2}
\end{equation}
This implies that $\alpha-P=a_1\mathbf{n}_1+a_2\mathbf{n}_2$. Since $a_i=\epsilon_i(\alpha-P,\mathbf{n}_i)$, where $\epsilon_i=(\mathbf{n}_i,\mathbf{n}_i)$, we have
\begin{eqnarray}
a_i' & = & \epsilon_i(\mathbf{t},\mathbf{n}_i)+\epsilon_i(\alpha-P,\mathbf{n}_i')=0\,.
\end{eqnarray}
Therefore, the coefficients $a_1$ and $a_2$ are constants. Finally, taking the derivative of Eq. (\ref{eq::aux2}), we find
\begin{equation}
0=(\mathbf{t},\mathbf{t})+(\alpha-P,-\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2)=1+a_1\kappa_1+a_2\kappa_2.
\end{equation}
Thus, the normal development $(\kappa_1(s),\kappa_2(s))$ lies on a straight line $1+a_1X+a_2Y=0$ not passing through the origin. Moreover, $0=(\alpha-P,\alpha-P)=- a_1^2+a_2^2$, which implies $a_2=\pm a_1$.
Conversely, assume that $0=1+ a_1\kappa_1\pm a_1\kappa_2$ for some constant $a_1$. Define the point $P=\alpha-a_1\mathbf{n}_1\mp a_1\mathbf{n}_2$, which satisfies $P'=\mathbf{t}+(a_1\kappa_1\pm a_1\kappa_2)\mathbf{t}=0$. In other words, $P$ is a fixed point and it follows that $\alpha$ lies on a light-cone $\mathcal{C}^2(P)$ of center $P$.
\qed
\end{proof}
For lightlike curves we are not able to use a Bishop frame. However, by using null frames, we can still state a criterion for a lightlike curve to be contained in pseudo-spheres or light-cones (trying to follow the same steps as in the previous cases does not work, due to the lack of good orthogonality properties). In fact, the following results are generalizations of those of Inoguchi and Lee \cite{InoguchiIEJG2008} for pseudo-spherical lightlike curves.
\begin{theorem}
If a $C^2$ regular lightlike curve $\alpha:I\to E_1^3$ lies on a pseudo-sphere or a light-cone, then $\kappa_1=0$ or, equivalently, $\alpha$ is a straight line.
\end{theorem}
\begin{proof}
Let $\mathcal{Q}$ be a sphere of non-negative radius denoted by $\mathcal{Q}=\{x\,:\,(x-P,x-P)=\rho\}$ where $\rho=r^2$ ($r>0$) or $0$, i.e., $\mathcal{Q}$ is a pseudo-sphere $\mathbb{S}_1^2(P;r)$ or a light-cone $\mathcal{C}^2(P)$. If $\alpha\subseteq\mathcal{Q}$, taking the derivative of $(x-P,x-P)=\rho$ gives
\begin{equation}
(\mathbf{t},x-P)=0.\label{eq::auxtx-pzero}
\end{equation}
Differentiating the above equation gives
\begin{equation}
\kappa_1(\mathbf{z}_1,x-P)=0.
\end{equation}
If $\kappa_1$ were not zero, then we would find $(\mathbf{z}_1,x-P)=0$, which by taking a derivative again gives $(\mathbf{z}_2,x-P)=0$. From these last two equations, and from Eq. (\ref{eq::auxtx-pzero}), we would conclude that $x-P=0$, which is not possible. In short, the curve must satisfy $\kappa_1=0$. Finally, by Proposition \ref{prop::CharacLightlikeLine} it follows that $\alpha$ must be a straight line.
\qed
\end{proof}
\begin{remark}
Surfaces in a semi-Riemannian manifold $M_1^3$ have an interesting property: a lightlike curve is always a pregeodesic, i.e., there exists a parametrization that makes the curve a parametrized geodesic \cite{ONeill}. In $\mathbb{R}^3$ equipped with the standard Minkowski metric, a lightlike curve is a geodesic if and only if it is a straight line \cite{InoguchiIEJG2008}.
\end{remark}
The converse of the above theorem is not true. In fact, taking $(\cdot,\cdot)$ as the standard Minkowski metric, the straight line $\alpha(\tau)=(0,0,\tau)$ does not lie on any pseudo-sphere or light-cone. However, we have the following partial converse:
\begin{proposition}
Let $\alpha_0\in \mathcal{Q}(P;\rho)=\{x:(x-P,x-P)=\rho\}$ be a point on a pseudo-sphere or light-cone, i.e., $\rho=r^2$ ($r>0$) or $=0$. If $\mathbf{u}\in T_{\alpha_0}\mathcal{Q}(P;\rho)$ is a lightlike vector, then for any smooth function $f(\tau)$ the curve $\alpha(\tau)=\alpha_0+f(\tau)\,\mathbf{u}$ is a lightlike straight line that lies on $\mathcal{Q}(P;\rho)$.
\end{proposition}
\begin{proof}
Using that $\mathbf{u}\in T_{\alpha_0}\mathcal{Q}(P;\rho)$ implies $(\alpha_0-P,\mathbf{u})=0$, we find
\begin{eqnarray}
(\alpha-P,\alpha-P) & = & (\,(\alpha_0-P)+f\,\mathbf{u},(\alpha_0-P)+f\,\mathbf{u})\nonumber\\
& = & (\alpha_0-P,\alpha_0-P)=\rho\,.
\end{eqnarray}
So, the desired result follows.
\qed
\end{proof}
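The computation in the proof can be replayed numerically (an illustrative sketch, assuming the signature convention $(+,+,-)$): take $\mathcal{Q}=\mathbb{S}_1^2(0;1)$, the point $\alpha_0=(1,0,0)$, and the lightlike tangent $\mathbf{u}=(0,1,1)$; the line $\alpha_0+f(\tau)\,\mathbf{u}$ stays on the pseudo-sphere for an arbitrary smooth $f$.

```python
import numpy as np

def minkowski(u, v):
    # Signature (+, +, -) assumed for illustration.
    return u[0]*v[0] + u[1]*v[1] - u[2]*v[2]

P = np.zeros(3)
alpha0 = np.array([1.0, 0.0, 0.0])   # (alpha0-P, alpha0-P) = 1, so alpha0 in S_1^2(P;1)
u = np.array([0.0, 1.0, 1.0])        # lightlike: (u,u)=0, and tangent: (alpha0-P,u)=0

assert abs(minkowski(u, u)) < 1e-12
assert abs(minkowski(alpha0 - P, u)) < 1e-12

# alpha(tau) = alpha0 + f(tau) u stays on the sphere for any smooth f:
# (alpha-P, alpha-P) = (alpha0-P, alpha0-P) + 2 f (alpha0-P, u) + f^2 (u,u) = 1.
f = lambda t: np.sin(3.0 * t) + t**2   # arbitrary smooth function
for tau in np.linspace(-2.0, 2.0, 9):
    x = alpha0 + f(tau) * u
    assert abs(minkowski(x - P, x - P) - 1.0) < 1e-12
print("line stays on the pseudo-sphere")
```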
\section{Characterization of curves on Euclidean quadrics}
Quadrics are the simplest examples of level surfaces and understanding how the characterization works in this particular instance will prove very useful. Indeed, it will become clear in the following that the proper geometric setting to attack the characterization problem on a surface $\Sigma=F^{-1}(c)$ is that of a metric induced by the Hessian of $F$.
Points on a quadratic surface $\mathcal{Q}\subset\mathbb{R}^3$ can be characterized by a symmetric matrix $B\in\mbox{M}_{3\times3}(\mathbb{R})$ as
\begin{equation}
x\in\mathcal{Q}\Leftrightarrow\langle \,B(x-P),x-P\,\rangle=r^2,\label{eq:QuadraticSurface}
\end{equation}
where $P$ is a fixed point (the center of $\mathcal{Q}$), $r>0$ is a constant (the radius of $\mathcal{Q}$), and $\langle\cdot,\cdot\rangle$ is the canonical inner product on $\mathbb{R}^3$. Naturally, if the symmetric matrix $B$ has a non-zero determinant, then this non-degenerate quadric induces a metric or a pseudo-metric on $\mathbb{R}^3$ by defining
\begin{equation}
(\cdot,\cdot)=\langle B\,\cdot,\cdot\rangle\,.
\end{equation}
If the matrix $B$ has index 0, then $\mathcal{Q}$ is an ellipsoid and it can be seen as a sphere on the 3-dimensional Riemannian manifold $M^3=(\mathbb{R}^3,\langle B\,\cdot,\cdot\rangle)$. The characterization of those spatial curves that belong to an ellipsoid can be made through a direct adaptation of Bishop's characterization of spherical curves in $E^3$ \cite{Etayo2016}. Indeed, one just uses the metric $\langle B\,\cdot,\cdot\rangle$ instead of $\langle\cdot,\cdot\rangle$ and then follows the steps on the construction of a Bishop frame in $E^3$. On the other hand, if the matrix $B$ has index 1, then $\mathcal{Q}$ is a one-sheeted hyperboloid and can be seen as a pseudo-sphere on a Lorentz-Minkowski space $E^3_1=(\mathbb{R}^3,\langle B\,\cdot,\cdot\rangle)$. If $B$ has index 2, $\mathcal{Q}$ is then a two-sheeted hyperboloid and can be seen as a pseudo-hyperbolic plane on a Lorentz-Minkowski space $E^3_1=(\mathbb{R}^3,\langle -B\,\cdot,\cdot\rangle)$. This way, the results of the previous section can be applied in order to characterize those spatial curves that belong to a (one- or two-sheeted) hyperboloid.
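A minimal sketch (not in the original) of the correspondence between the index of $B$ and the type of the quadric, computing the index as the number of negative eigenvalues:

```python
import numpy as np

def index_of(B, tol=1e-12):
    """Number of negative eigenvalues of the symmetric matrix B."""
    return int(np.sum(np.linalg.eigvalsh(B) < -tol))

quadric = {0: 'ellipsoid',
           1: 'one-sheeted hyperboloid',
           2: 'two-sheeted hyperboloid'}

B_ell = np.diag([1.0, 2.0, 3.0])    # index 0
B_one = np.diag([1.0, 1.0, -1.0])   # index 1
B_two = np.diag([1.0, -1.0, -1.0])  # index 2

for B in (B_ell, B_one, B_two):
    print(quadric[index_of(B)])
```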
Since the characterization of curves on a quadric is made by reinterpreting the problem in a new geometric setting, a natural question then arises: {\it How do we interpret the causal character that a spatial curve assumes when we pass from $E^3$ to $E_1^3$?}
This question can be answered if we take into account the following expression for the normal curvature on a level surface $\Sigma=F^{-1}(c)$ \cite{DombrowskiMN1968}
\begin{equation}
\kappa_n(p,\mathbf{v}) = \frac{\langle \mbox{Hess}_pF\,\mathbf{v},\mathbf{v}\rangle}{\Vert\nabla_p F\Vert},\label{eqNormalCurvLevelSets}
\end{equation}
where $\mathbf{v}\in T_p\Sigma$, and $\mbox{Hess}\,F$ and $\nabla F$ are the Hessian and the gradient vector of $F$, respectively (for more details involving the expressions for the curvatures of level set surfaces see \cite{GoldmanCAGD2005}). Then, we have the following interpretation:
\begin{proposition}
If $\alpha:I\to\mathbb{R}^3$ is a curve on a non-degenerate quadric $\mathcal{Q}$, then asymptotic directions (in $\mathcal{Q}\subseteq E^3$) correspond to lightlike directions (in $\mathcal{Q}\subseteq E_1^3$).
\label{prop::InterpretCasualChar}
\end{proposition}
\begin{proof}
Quadrics are level sets of $F(x)=\langle B\,(x-P),x-P\rangle$, whose Hessian is constant: $\mbox{Hess}\,F=2B$ (the factor $2$ does not change the sign of the numerator in Eq. (\ref{eqNormalCurvLevelSets}), so it is immaterial here). Now, since the quadric is non-degenerate, $\mathcal{Q}$ is the inverse image of a regular value of $F$, and we can apply Eq. (\ref{eqNormalCurvLevelSets}).
\qed
\end{proof}
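For instance (an illustration not in the paper), on the one-sheeted hyperboloid $F(x)=x_1^2+x_2^2-x_3^2=1$ the tangent direction $\mathbf{v}=(0,1,1)$ at $p=(1,0,0)$ is asymptotic, and it is precisely a lightlike direction for the Hessian metric:

```python
import numpy as np

# Hyperboloid F(x) = x1^2 + x2^2 - x3^2 = 1, a non-degenerate quadric.
grad_F = lambda p: np.array([2*p[0], 2*p[1], -2*p[2]])
hess_F = np.diag([2.0, 2.0, -2.0])    # constant Hessian of a quadratic F

p = np.array([1.0, 0.0, 0.0])         # point on the hyperboloid
v = np.array([0.0, 1.0, 1.0])         # tangent: <grad F(p), v> = 0

# Numerator of the normal curvature kappa_n(p, v) in Eq. (normal curvature):
num = v @ hess_F @ v
print(num)  # 0.0 -> v is asymptotic, i.e. lightlike for the metric <Hess F ., .>
```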
Based on these constructions we can better interpret why pseudo-spheres $\mathbb{S}_1^2$ have both space- and timelike tangent vectors, while pseudo-hyperbolic planes $\mathbb{H}_0^2$ only have spacelike ones. Indeed, Eq. (\ref{eqNormalCurvLevelSets}) shows that the sign of the Gaussian curvature (in $E^3$), $K_{E^3}$, has an impact on the causal character of the tangent plane: points with $K_{E^3}>0$ have spacelike tangent planes, while points with $K_{E^3}<0$ have timelike tangent planes.
Finally, observe that quadrics are level sets of $F(x)=\langle B(x-P),x-P\rangle$, which has a constant Hessian: $\mbox{Hess}\,F=2B$. This motivates us to consider this procedure for any level surface.
\section{Curves on level surfaces of a smooth function}
Let $\Sigma$ be a surface implicitly defined by a smooth function $F:U\subseteq\mathbb{R}^3\to\mathbb{R}$. Then, the Hessian of $F$ induces on $\mathbb{R}^3$ a (pseudo-) metric
\begin{equation}
(\cdot,\cdot)_p = \langle\mbox{Hess}_p\,F\,\cdot\,,\cdot\rangle=\left\langle\frac{\partial^2F(p)}{\partial x^i\partial x^j}\,\cdot\,,\cdot\right\rangle\,.\label{eq::HessMetric}
\end{equation}
By using Eq. (\ref{eqNormalCurvLevelSets}), Proposition \ref{prop::InterpretCasualChar} is still valid for $\Sigma$ in the context of a Hessian pseudo-metric. Moreover, if $\det(\mbox{Hess}_pF)\not=0$, then $\mbox{Hess}\,F$ is non-degenerate on a neighborhood of $p$. Likewise, since the eigenvalues vary continuously \cite{SerreMatrixBook} and the index can be seen as the number of negative eigenvalues, the Hessian $\mbox{Hess}\,F$ has a constant index on an open neighborhood. Then, $(\cdot,\cdot)$ in Eq. (\ref{eq::HessMetric}) is well defined on a neighborhood of a non-degenerate point $p$.
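Unlike the quadric case, $\mbox{Hess}\,F$ now varies from point to point. The sketch below (with an arbitrarily chosen non-quadratic $F$, not from the paper) computes the index at a sample point and exhibits a point where the Hessian degenerates, so that the pseudo-metric of Eq. (\ref{eq::HessMetric}) is not defined there:

```python
import numpy as np

# A non-quadratic example: F(x) = x1^4 + x2^2 - x3^2.
def hess_F(p):
    # Hessian of F at p (diagonal for this particular F).
    return np.diag([12.0 * p[0]**2, 2.0, -2.0])

def index_of(H, tol=1e-9):
    # Index = number of negative eigenvalues.
    return int(np.sum(np.linalg.eigvalsh(H) < -tol))

p = np.array([1.0, 0.5, 0.5])
H = hess_F(p)
print(np.linalg.det(H) != 0, index_of(H))  # non-degenerate point, index 1

# At x1 = 0 the Hessian degenerates and the pseudo-metric is not defined:
print(abs(np.linalg.det(hess_F(np.array([0.0, 1.0, 1.0])))))  # 0.0
```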
Now we ask ourselves whether the techniques developed in the previous sections can be applied to characterize curves that lie on a level surface. Unfortunately, we are not able to establish a characterization via a linear equation as previously done. Nonetheless, we can still exhibit a functional relationship between the curvatures $\kappa_1$ and $\kappa_2$ of a Bishop frame of the corresponding curves with respect to the Hessian metric. Before that, let us try to understand the technical difficulties involved in the study of level surfaces:
\begin{example}[index 1 Hessian]
Suppose that $\mbox{index}(\mbox{Hess}\,F)=1$ on a certain neighborhood of a non-degenerate point $p$. Let $\alpha:I\to E^3$ be a curve on a regular level surface $\Sigma = F^{-1}(c)$ whose velocity vector $\alpha'\in T_{\alpha(s)}\Sigma$ is not an asymptotic direction for any $s\in I$, i.e., $\kappa_n(\alpha(s),\alpha'(s))\not=0$. This means that the curve is timelike or spacelike. Denote by $\{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2\}$ a Bishop frame along $\alpha$ with respect to Eq. (\ref{eq::HessMetric}), and denote by $D$ the covariant derivative and by a prime $'$ the usual one.
From $F(\alpha(s))=c$ it follows that
\begin{equation}
(\mbox{grad}_{\alpha(s)}F,\mathbf{t}) = 0\Rightarrow\mbox{grad}_{\alpha}F=a_1\mathbf{n}_1+a_2\mathbf{n}_2\,,\label{eq:gradF_hessMet_InNormalPlane}
\end{equation}
where $\mbox{grad}_{\alpha}F$ denotes the gradient vector with respect to $(\cdot,\cdot)$. The coefficients $a_1$ and $a_2$ satisfy $a_i=\epsilon_i(\mbox{grad}_{\alpha}F,\mathbf{n}_i)$ and, therefore,
\begin{eqnarray}
\epsilon_ia_i' & = & (D\,\mbox{grad}_{\alpha}F, \mathbf{n}_i)+(\mbox{grad}_{\alpha}F,D\,\mathbf{n}_i)\nonumber\\
& = & H^F(\mathbf{t}, \mathbf{n}_i)-\epsilon\kappa_i(\mbox{grad}_{\alpha}F,\mathbf{t})\nonumber\\
& = & H^F(\mathbf{t}, \mathbf{n}_i),
\end{eqnarray}
where $H^F$ denotes the Hessian with respect to $(\cdot,\cdot)$, whose coefficients can be expressed as \cite{ONeill}
\begin{equation}
H^F_{ij} = \left(\frac{\partial^2F}{\partial x^i\partial x^j}-\sum_k\Gamma_{ij}^k\frac{\partial F}{\partial x^k}\right)\,.
\end{equation}
From this expression we see that $a_i'$ need not vanish, and so we cannot apply the same steps as in the previous sections. Indeed, orthogonality of the Bishop frame $\{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2\}$ with respect to $\mbox{Hess}\,F=[\partial^2F/\partial x^i\partial x^j]$ and orthogonality with respect to $H^F$ do not coincide, unless $\mbox{Hess}\,F$ is constant.
\qed
\end{example}
\begin{theorem}
Let $\mathcal{U}_p\subseteq\mathbb{R}^3$ be a neighborhood of a non-degenerate point $p\in\Sigma=F^{-1}(c)$ where the index is constant, and let $H^F$ denote the Hessian with respect to the Hessian metric $(\cdot,\cdot)_q=\langle{\rm Hess}_qF\,\cdot,\cdot\rangle$.
If $\alpha:I\to\mathcal{U}_p\cap\Sigma$ is a $C^2$ regular curve, with no asymptotic direction when ${\rm index}({\rm Hess}\,F)\not\in\{0,3\}$, i.e., $\kappa_n(\alpha,\alpha')\not=0$, then its normal development $(\kappa_1(s),\kappa_2(s))$ satisfies
\begin{equation}
a_2(s)\kappa_2(s)+a_1(s)\kappa_1(s)+a_0(s)=0, \label{eq::characLevelSurfacesCurves}
\end{equation}
where $a_0=H^F(\mathbf{t},\mathbf{t})$, $a_i=\epsilon_i({\rm grad}_{\alpha}F,\mathbf{n}_i)$, and $a_i'(s)=\epsilon_i H^F(\mathbf{t},\mathbf{n}_i)$, with $\epsilon_i=(\mathbf{n}_i,\mathbf{n}_i)=\pm1$ (so $\epsilon_i=1$ when ${\rm index}({\rm Hess}\,F)\in\{0,3\}$). Here, the Bishop frame is defined with respect to the Hessian metric.
Conversely, if Eq. (\ref{eq::characLevelSurfacesCurves}) is valid and $({\rm grad}_{\alpha(s_0)}F,\mathbf{t}(s_0))=0$ at some point $\alpha(s_0)$, then $\alpha$ lies in a level surface of $F$.
\label{theo::CurvesInLevelSets}
\end{theorem}
\begin{remark}
If $\Sigma=F^{-1}(c)$, where $c$ is a regular value of $F$, then $\Sigma$ is an orientable surface. The converse of this result is also valid, i.e., every orientable surface is the inverse image of a regular value of some smooth function \cite{Guillemin}. Then, the above theorem can be applied to any orientable surface (we still have to exclude those points where the Hessian has a zero determinant).
\end{remark}
\begin{proof}
{\it of theorem }\ref{theo::CurvesInLevelSets}.
If the index is $0$, then the Hessian metric defines a Riemannian metric (if $\mbox{index}(\mbox{Hess}\,F)=3$, then its negative defines a Riemannian metric). On the other hand, the construction of a Bishop frame for a pseudo-metric of index 2 in dimension 3 is completely analogous to the case of index 1. Moreover, when the index of $\mbox{Hess}\,F$ is 1 (or 2), the assumption that $\alpha'$ is not an asymptotic direction means that $\alpha$ must be a space- or a timelike curve.
In the following, let us assume that ${\rm index}(\mbox{Hess}\,F)=1$, the other cases being analogous. In this case, Eq. (\ref{eq::HessMetric}) defines a pseudo-metric in $\mathcal{U}_p\subseteq\mathbb{R}^3$.
Since $F(\alpha(s))=c$, we have
\begin{equation}
(\mbox{grad}_{\alpha(s)}F,\mathbf{t}) = 0\Rightarrow\mbox{grad}_{\alpha}F=a_1\mathbf{n}_1+a_2\mathbf{n}_2\,,\label{eqGradFHessMetric}
\end{equation}
where $\mbox{grad}_{\alpha}F$ denotes the gradient vector with respect to $(\cdot,\cdot)$. The coefficients $a_1$ and $a_2$ satisfy $a_i=\epsilon_i(\mbox{grad}_{\alpha}F,\mathbf{n}_i)$ and, therefore,
\begin{eqnarray}
a_i' & = & \epsilon_i(D\,\mbox{grad}_{\alpha}F, \mathbf{n}_i)+\epsilon_i(\mbox{grad}_{\alpha}F,D\,\mathbf{n}_i)\nonumber\\
& = & \epsilon_iH^F(\mathbf{t}, \mathbf{n}_i)-\epsilon_i\epsilon\kappa_i(\mbox{grad}_{\alpha}F,\mathbf{t})\nonumber\\
& = & \epsilon_iH^F(\mathbf{t}, \mathbf{n}_i),
\end{eqnarray}
where $H^F$ denotes the Hessian with respect to $(\cdot,\cdot)$ \cite{ONeill}. Taking the derivative of Eq. (\ref{eqGradFHessMetric}) gives
\begin{eqnarray}
0 & = & (D\,\mbox{grad}_{\alpha}F,\mathbf{t})+(\mbox{grad}_{\alpha}F,D\,\mathbf{t})\nonumber\\
& = & H^F(\mathbf{t},\mathbf{t})+(a_1\mathbf{n}_1+a_2\mathbf{n}_2,\epsilon_1\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2)\nonumber\\
& = & H^F(\mathbf{t},\mathbf{t})+a_1\kappa_1+a_2\kappa_2\,.
\end{eqnarray}
Then, Eq. (\ref{eq::characLevelSurfacesCurves}) is satisfied.
Conversely, suppose that Eq. (\ref{eq::characLevelSurfacesCurves}) is satisfied. Let us define the function $f(s)=F(\alpha(s))$. We must show that $f$ is constant, i.e., $f'(s)=0$. Taking the derivative of $f$ twice gives
\begin{equation}
f' = (\mbox{grad}_{\alpha}F,\mathbf{t}),
\end{equation}
and
\begin{eqnarray}
f'' & = & (D\,\mbox{grad}_{\alpha}F,\mathbf{t})+(\mbox{grad}_{\alpha}F,D\,\mathbf{t})\nonumber\\
& = & H^F(\mathbf{t},\mathbf{t})+\epsilon_1\kappa_1(\mbox{grad}_{\alpha}F,\mathbf{n}_1)+\kappa_2(\mbox{grad}_{\alpha}F,\mathbf{n}_2)\nonumber\\
& = & 0.
\end{eqnarray}
Then, $f'(s)=(\mbox{grad}_{\alpha(s)}F(s),\mathbf{t}(s))$ is constant. By assumption, we have $f'(s_0)=0$, then $f(s)=F(\alpha(s))$ is constant on an open neighborhood of $s_0$, i.e., $\alpha$ lies on a level surface of $F$.
\qed
\end{proof}
\begin{remark}
The Christoffel symbols $\Gamma_{ij}^k$ of a Hessian metric $g_{ij}=\partial^2F/\partial x^i\partial x^j$ vanish if and only if $\mbox{Hess}\,F$ is constant; this is the case for a quadric, which was treated in the previous section.
\end{remark}
If $\mbox{Hess}\,F$ degenerates, i.e., $\det(\mbox{Hess}_pF)=0$ at some points, then the Hessian matrix does not define a metric. Nonetheless, it is still possible to characterize curves on a level surface by using the standard metric of $\mathbb{R}^3$. In fact, this approach can be used even when $\mbox{Hess}\,F$ is non-degenerate, although in that case the characterization of curves on non-degenerate quadrics is no longer recovered as a particular instance. The obtained criterion is completely analogous to the previous one in Theorem \ref{theo::CurvesInLevelSets}. Indeed, we have
\begin{theorem}
If $\alpha:I\to\Sigma\subseteq E^3$ is a $C^2$ regular curve, where $\Sigma=F^{-1}(c)$, then its normal development $(\kappa_1(s),\kappa_2(s))$ satisfies
\begin{equation}
b_2(s)\kappa_2(s)+b_1(s)\kappa_1(s)+b_0(s)=0, \label{eq::characLevelSurfacesCurves2}
\end{equation}
where $b_0=\langle ({\rm Hess}\,F)\,\mathbf{t},\mathbf{t}\rangle$, $b_i=\langle\nabla_{\alpha}F,\mathbf{n}_i\rangle$, and $b_i'(s)=\langle ({\rm Hess}\,F)\,\mathbf{t},\mathbf{n}_i\rangle$. Here, the Bishop frame is defined with respect to the usual metric in $E^3$.
Conversely, if Eq. (\ref{eq::characLevelSurfacesCurves2}) is valid and $\langle\nabla_{\alpha(s_0)}F,\mathbf{t}(s_0)\rangle=0$ at some point $\alpha(s_0)$, then $\alpha$ lies in a level surface of $F$.
\end{theorem}
\begin{proof}
Let $\{\mathbf{t},\mathbf{n}_1,\mathbf{n}_2\}$ be a Bishop frame along $\alpha:I\to E^3$. If $F(\alpha(s))=c$, then we have
\begin{equation}
\langle\nabla_{\alpha(s)}F,\mathbf{t}\rangle = 0\Rightarrow\nabla_{\alpha}F=b_1\mathbf{n}_1+b_2\mathbf{n}_2\,,\label{eqGradFUsualMetric}
\end{equation}
where $\nabla_{\alpha}F$ denotes the gradient vector with respect to usual metric in $E^3$. The coefficients $b_1$ and $b_2$ satisfy $b_i=\langle\nabla_{\alpha}F,\mathbf{n}_i\rangle$ and, therefore,
\begin{eqnarray}
b_i' = \langle (\mbox{Hess}\,F)\,\mathbf{t}, \mathbf{n}_i\rangle-\kappa_i\langle\nabla_{\alpha}F,\mathbf{t}\rangle=\langle (\mbox{Hess}\,F)\,\mathbf{t}, \mathbf{n}_i\rangle.
\end{eqnarray}
Taking the derivative of Eq. (\ref{eqGradFUsualMetric}) gives
\begin{eqnarray}
0 & = & \langle (\mbox{Hess}\,F)\,\mathbf{t},\mathbf{t}\rangle+\langle b_1\mathbf{n}_1+b_2\mathbf{n}_2,\kappa_1\mathbf{n}_1+\kappa_2\mathbf{n}_2\rangle\nonumber\\
& = & \langle (\mbox{Hess}\,F)\,\mathbf{t},\mathbf{t}\rangle+b_1\kappa_1+b_2\kappa_2\,.
\end{eqnarray}
So, Eq. (\ref{eq::characLevelSurfacesCurves2}) is valid.
Conversely, suppose that Eq. (\ref{eq::characLevelSurfacesCurves2}) is satisfied. Let us define the function $f(s)=F(\alpha(s))$. Taking the derivative of $f$ twice gives
\begin{equation}
f' = \langle\nabla_{\alpha}F,\mathbf{t}\rangle,
\end{equation}
and
\begin{eqnarray}
f''
& = & \langle (\mbox{Hess}\,F)\,\mathbf{t},\mathbf{t}\rangle+\kappa_1\langle\nabla_{\alpha}F,\mathbf{n}_1\rangle+\kappa_2\langle\nabla_{\alpha}F,\mathbf{n}_2\rangle= 0.
\end{eqnarray}
Then, $f'(s)=\langle\nabla_{\alpha(s)}F(s),\mathbf{t}(s)\rangle$ is constant. By assumption, we have $f'(s_0)=0$, then $f(s)=F(\alpha(s))$ is constant on an open neighborhood of $s_0$, i.e., $\alpha$ lies on a level surface of $F$.
\qed
\end{proof}
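The key identity in the converse direction, $f''=\langle(\mbox{Hess}\,F)\,\mathbf{t},\mathbf{t}\rangle+\langle\nabla_\alpha F,\alpha''\rangle$, can be checked on a simple example (an illustration not in the original): a great circle on the unit sphere, the level surface of $F(x)=|x|^2$, for which $f$ is constant and so $f''$ must vanish.

```python
import numpy as np

# F(x) = |x|^2; level surface: the unit sphere; alpha a great circle on it.
alpha   = lambda s: np.array([np.cos(s), np.sin(s), 0.0])
dalpha  = lambda s: np.array([-np.sin(s), np.cos(s), 0.0])
d2alpha = lambda s: -alpha(s)

grad_F = lambda x: 2.0 * x
hess_F = 2.0 * np.eye(3)

# f(s) = F(alpha(s)) is constant, so f'' = <Hess F t, t> + <grad F, alpha''> = 0.
s = 0.37
t = dalpha(s)
val = t @ hess_F @ t + grad_F(alpha(s)) @ d2alpha(s)
print(abs(val) < 1e-12)  # True
```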
\section{Discussion and Conclusions}
In this work we were interested in the characterization of curves that lie on a given surface. The main tool to achieve that was the use of moving frames along curves. In the construction of Frenet frames in Lorentz-Minkowski spaces $E_1^3$, we showed that the coefficient matrix of the frame motion can be obtained from a skew-symmetric matrix (precisely the matrix that would appear in a Euclidean context) through a right-multiplication by the matrix that describes a frame $\{\mathbf{e}_0,\mathbf{e}_1,\mathbf{e}_2\}$ as a basis of $E_1^3$: $[(\mathbf{e}_i,\mathbf{e}_j)]_{ij}$. Later, by adapting Bishop's idea of relatively parallel moving frames, we were able to furnish a complete characterization of spherical curves in $E_1^3$ through a linear equation relating the coefficients which dictate the frame motion. To attain that, we developed a systematic approach to the construction of Bishop frames by exploiting the structure of the normal planes induced by the causal character of the curve, while for lightlike curves we made use of null frames. In both cases, the coefficient matrix of the frame motion can be obtained from a skew-symmetric matrix, the matrix that would appear in a Euclidean context, through a right-multiplication by the matrix that describes the frame as a basis. We then applied these ideas to surfaces that are level sets of a smooth function, $\Sigma=F^{-1}(c)$, by reinterpreting the problem in the context of the metric given by the Hessian of $F$, which is not always positive definite. So, we are naturally led to the study of curves in $E_1^3$. We also interpreted the causal character that a curve may assume when we pass from $E^3$ to $E_1^3$ and finally established a criterion for a curve to lie on a level surface of a smooth function, which reduces to a linear equation when the Hessian is constant, as happens for non-degenerate Euclidean quadrics.
An interesting problem which remains open is to consider the possibility of a curve changing its causal character. Since the property of being space- or timelike is open, i.e., if it is valid at a point it must be valid on a neighborhood of that point, the real problem is to understand what happens near lightlike points. Moreover, the techniques applied here can be extended to higher dimensions and also to the setting of a Riemannian or a semi-Riemannian manifold $M_{\nu}^n$. Indeed, we believe that it is possible to systematically build Bishop and null frames along curves in $M_{\nu}^n$ as done in this work and then apply these constructions to study level hypersurfaces of a smooth function $F:M_{\nu}^3\to\mathbb{R}$. Since the relation between normal curvature and the Hessian with respect to a (pseudo-)metric is still valid, we can also interpret what happens in the transition from $M_{\nu}^n$ to the new geometric setting of a Hessian metric, which may be of a Lorentzian nature since the Hessian may fail to be positive definite.
To the best of our knowledge, this is the first time that the characterization problem for curves has been considered for such a large class of surfaces, which makes this work a relevant contribution to the geometry of curves and surfaces.
\begin{acknowledgements}
The author would like to thank useful discussions with J. Deibsom da Silva and F. A. N. Santos, and also the financial support by Conselho Nacional de Desenvolvimento Cient\'ifico e Tecnol\'ogico - CNPq (Brazilian agency).
\end{acknowledgements}
Q: Laravel query builder - using data within a row to compare with time in another column I have got this working using a whereRaw as follows:
Reports::whereRaw('last_check_datetime < CURRENT_TIMESTAMP - INTERVAL `check_schedule_minutes` MINUTE')->get();
The table has a number of reports in it, with differing schedule minutes. i.e. I want one report to be checked every 15 mins and another to be checked every 60 mins. If the time in last_check_datetime is not older than the number of minutes in check_schedule_minutes then the report should not be returned in the collection.
Is there a way to achieve the same thing using a more Eloquent syntax, ideally with Carbon?
A: Tbh I think what you're doing here is right. The problem is in your question - you're wanting to merge PHP/MySQL syntax, but the data is in MySQL. So without first fetching all records and then doing a check, what you've done is right, and not really avoidable. However, if you really want a more eloquent query-builder solution, here:
Reports::where('last_check_datetime', '<', DB::raw('CURRENT_TIMESTAMP - INTERVAL `check_schedule_minutes` MINUTE'))->get();
Tbh though, I think the whereRaw is neater.
A: You can use Eloquent's whereBetween method on date columns
Reports::whereBetween('last_check_datetime', [Carbon::now()->subMinutes($check_schedule_minutes), Carbon::now()])->get();
Should do the trick for you.
EDIT: I realise this relies on you knowing the $check_schedule_minutes value before making the query, where your question is reading check_schedule_minutes from the table.
One way to solve this would be to have a predefined standard for check_schedule_minutes (15 or 60 mins as you said) then have that value passed into the function performing the query and add that to your conditions:
$check_schedule_minutes = 60;
Reports::whereBetween('last_check_datetime', [Carbon::now()->subMinutes($check_schedule_minutes), Carbon::now()])
->where('check_schedule_minutes', $check_schedule_minutes)
->get();
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,934
|
ISIL speaker Arab League Vows to Take All Measures to Confront 'Daesh'
Arab states agreed to take the "necessary measures" to confront the so-called "Daesh" [ISIL] extremist group at a meeting of foreign ministers in Egypt's Cairo on Sunday, as US President Barack Obama prepares to go to lawmakers and the American public with his own plan to stop the militants.
Arab League chief Nabil Elaraby said at a news conference, "The Arab foreign ministers have agreed to take the necessary measures to confront terrorist groups including" "Daesh," without explicitly supporting US calls for a coalition to back its air campaign against the militants in Iraq.
"What is needed is a clear decision for a comprehensive confrontation, militarily and politically," Elarabi said, a day after he and US Secretary of State John Kerry discussed "Daesh."
A senior US State Department official, speaking on condition of anonymity because the person was not authorized to publicly discuss the private conversation, said that Kerry updated Elaraby on efforts to combat the insurgents.
"They discussed the need for the Arab League and its members to take a strong position in the coalition that is developing … and the importance of decisive action" to stop the flow of foreign fighters, disrupt "Daesh" financing and combat incitement, the official said.
The Arab League moreover endorsed in the closing statement of its meeting a UN Security Council resolution passed last month [August] calling on member states to "act to suppress the flow of foreign fighters, financing and other support to extremist groups in Iraq and Syria."
It wasn't immediately clear what steps the Arab League would take in supporting the West's campaign against "Daesh."
Elaraby said the rise of "Daesh" in Iraq challenged not merely the authority of the state but "its very existence and the existence of other states" and called for a decisive resolution to confront terrorism militarily, politically, economically and culturally.
He noted that the Arab League's member states have failed to help each other in the past when facing local armed groups, often because of disagreements and fear of being accused of meddling in one another's affairs.
Meanwhile, Obama will meet with congressional leaders on Tuesday and then outline his plan to tackle "Daesh" to the American public on Wednesday, the eve of the 13th anniversary of the September 11, 2001 attacks in the United States.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 9,198
|
Q: How to convert all data in column to datetime - pandas I have a large dataframe that, in its date column, has a mixture of date formats (only 2).
Most are in the correct format but there is some data that is in a different format.
i.e. most are 2013-11-07. Some are 20170510. Pandas throws an exception when I try to validate the code against a schema I have.
Is there a quick way to convert all dates to have the same format as the majority? Or do I have to do something more painful/manual?
i.e.
date \
0 2013-11-07 False
2 2013-11-07 False
... ... ... ... ... ...
3595037 20170510 NaN
3595038 20200701 NaN
A: Is there a quick way to convert all dates to have the same format as the majority?
Considering that you have only two formats, one represented by 2013-11-07 and another by 20170510 it is enough to remove - from first to get common format, i.e.
import pandas as pd
df = pd.DataFrame({'day':['2013-11-07','20170510']})
df['day'] = df['day'].str.replace('-','')
print(df)
output
day
0 20131107
1 20170510
pandas.to_datetime does understand it correctly
df['day'] = pd.to_datetime(df['day'])
print(df)
output
day
0 2013-11-07
1 2017-05-10
Disclaimer: I converted to format of minority not majority. It is possible to convert that to format of majority using regular expression, however if you are interested in datetime objects, this is unnecessary complication.
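An alternative sketch that yields datetime objects directly: parse with the majority format first, letting `errors='coerce'` turn mismatches into NaT, then fill those gaps by retrying the compact format. (This is an illustration, not part of the original answer.)

```python
import pandas as pd

s = pd.Series(['2013-11-07', '20170510'])

# Majority format first; rows that fail become NaT ...
parsed = pd.to_datetime(s, format='%Y-%m-%d', errors='coerce')
# ... and are then filled in by a second pass with the compact format.
parsed = parsed.fillna(pd.to_datetime(s, format='%Y%m%d', errors='coerce'))
print(parsed)
```

This avoids assuming that stripping dashes is always safe, and generalizes to more than two formats by chaining further `fillna` passes.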
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 2,870
|
Jeffrey Patterson
COVID-19: Developing a Safety Plan for Your Workplace
In a recent publication, the Government of Ontario is encouraging all employers to develop and implement a COVID-19 Safety Plan as part of their obligation to comply with the Occupational Health and Safety Act ("OHSA"). A government template is available to assist employers in this regard. While employers do not need to submit their plan to the Ministry of Labour, Training and Skills Development, a ministry inspector may ask about a plan during a workplace inspection. Accordingly, it is wise for employers to create or update their existing plan.
In developing your safety plan, the Government of Ontario has noted six (6) questions for employers to consider. They are paraphrased as follows:
How will safety measures be communicated to employees?
How will employees be screened for COVID-19?
How will the risk of transmission be controlled?
How will the employer respond if there is a potential, or confirmed, case of COVID-19 in the workplace?
How will the employer manage any new risks that arise from changes to the workplace?
How will the employer ensure that the plan is, and will remain, effective?
Further details about these considerations can be found here.
An understanding of the virus will be critical to answering the above questions and developing an effective safety plan. Employers are expected to use current public health and health and safety information. A useful resource in this regard is the Government of Canada's COVID-19 database, which you can find here. Employers should also consult their local by-laws, health authorities, and workplace-specific health and safety advisories to stay apprised of the virus and current government responses (particularly with respect to initiatives such as mask-wearing protocols).
The COVID-19 pandemic remains an evolving situation, and challenges for workplace health and safety will certainly persist. Developing a sound COVID-19 safety plan and reviewing this plan frequently should be considered one piece of your workplace's strategy to proactively address the coronavirus and remain compliant with the OHSA.
We will continue to provide guidance on this issue and provide you with further updates as they become available. Please note that this bulletin is intended for informational purposes only and does not constitute legal advice or an opinion on any issue. We strongly recommend that you contact your Miller Canfield lawyer with your specific questions so that those questions can be addressed properly with you.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 6,094
|
Instructions
For this assignment, you will submit a single C++ compilable file containing a program written in C++.
Background
Ok, so Moe liked the work that you did for him and now wants you to code another project. He is lazy and fairly unp .... no, very unpleasant. (You can see it in his face.) So Moe wants a computer stationed at the door of his bar so that patrons entering will run the "Greeter Program". It will ask questions of each potential patron, and then either welcome them in or tell them to leave.
Specifications
Your program will behave as follows:
* First prompt/read in a name (first only is fine).
* The patron (always referred to hereafter by name) is then asked if he/she/it is a "teetotaler"1 and promptly thrown out if the answer is yes.
* If not, they are asked their age and asked to leave if they are not 21 or older.
* Then they are asked if they intend to drink beer, soda, or hard liquor.
* The affirmative on soda will get them a ticket out the door with a "we don't soyrv you sissies in dis place!" response.
* Beer and hard-stuff drinkers are welcomed, but questioned how much money they have on them.
With this info, Moe (your program, actually) will tell them how many beers (for beer drinkers) or how many drinks (for hard-stuff drinkers) they can buy with that amount of money. The program will then inquire as to how many drinks they intend to buy. If the answer is less than the maximum number of drinks they can buy with their money, they will promptly be thrown in the gutter ("come back when you want to spend all your money!"). Otherwise, your program will tell them to "come right in, step up to the bar!" If the patron has not enough money to buy even one drink, throw them out with a "get outta here, ya bum!". For any cases above where I haven't specified what is said to the customer, you make up something appropriate. If the patron is "Barney", just respond with, "C'mon in Barney".
This will end the interaction with a given potential patron. Your program should then prompt for another patron by saying something like, "Anybody else there??". An affirmative answer will have the user go through the entire "interview" again, and end the program otherwise. At any time in the interview a potential patron is "asked to leave", this prompt for another patron is invoked.
At Moe's:
* beers are $2.00 ea.
* hard liquor drinks are all $4.25 ea.
Remember: When writing your code, be sure to:
* Use meaningful variable names.
* Use proper spacing for indentations.
* Use constant variable declarations where appropriate.
* Include the comment block at the head of your file.
* Comment code that needs it.
* Be literate in your welcoming/signing-off messages and prompts and output.
Note: You are expected to check the inputs from the user for range acceptance. What that means is that whenever appropriate, an input value should be checked for validity. For example, if the user inputs -8 or 213 for their age, then they are to be re-prompted. So what are acceptable ranges? You decide this....and make it a good decision! To do this range checking and the required re-prompting for new patrons, you will need to use loops. We may not have covered loops by the time this assignment is posted, but we will very soon. You can still get started on the coding for this program.
Also, since you will be displaying dollar amounts, it's bad form to have $3.50 come out as $3.5. So, put the following code at the top of your main after declarations and it will force exactly two decimal places to be shown always.
cout.setf(ios::fixed);
cout.setf(ios::showpoint);
cout.precision(2);
We'll explain where this code comes from later in the semester.
Optional
You can optionally display how much "change" patrons (who drink) leave the bar with. For example, suppose a patron comes in with $4.60 and drinks beers. He/She/It would have two beers with $0.60 left over. You would report this 60 cents.
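The drink-count and change arithmetic above is just whole-number division on the money. A quick sketch (in Python for brevity; the assignment itself must be submitted in C++; the prices are the ones from the spec):

```python
# Illustrative sketch of Moe's drink arithmetic (Python, not the required C++).
BEER_PRICE = 2.00     # beers are $2.00 each, per the spec
LIQUOR_PRICE = 4.25   # hard-liquor drinks are $4.25 each, per the spec

def max_drinks(money, price):
    """Whole drinks the money buys, plus the leftover change."""
    count = int(money // price)
    change = round(money - count * price, 2)
    return count, change

# The optional example from the spec: $4.60 buys 2 beers with $0.60 left over.
print(max_drinks(4.60, BEER_PRICE))  # (2, 0.6)
```

Note the `round(..., 2)` guard: raw floating-point subtraction would print 0.5999999999999996 instead of 0.60, which is the same display problem the `cout.precision(2)` snippet solves on the C++ side.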
Submitting
When you submit:
* enter the bar not as a teetotaler, but a legal beer drinker with $21.45 in your pocket. Report you intend to drink 10 beers.
* next patron is to be a teetotaler.
* next patron is to be a legal soda drinker.
* no more patrons
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 7,354
|
Crookfest 2020 postponed until 2021
Coronavirus – fixtures suspended until Friday 3rd April
After taking all advice into account and giving the matter careful consideration, the league management committee has decided to suspend all scheduled fixtures with effect from Saturday 14th March 2020 up until Friday 3rd April 2020. Although the decision itself was a difficult one, the fact that a growing number of clubs were reporting players, management and officials as self-isolating meant the effect on the health and …
Jamie's Thoughts……
Here are Jamie's thoughts after two games in the last week against Newcastle University and Birtly Town, and looking ahead to Saturday's game at home to Chester le Street 'I was really pleased with our performance levels on Saturday at Newcastle University. They're a tricky side and have beaten us twice this season so we knew we had to start …
Kenty notches number 20
Kenty's on fire – his goal yesterday as part of Crook Town's 5-1 win against Newcastle University moves him up to 20 for the season (18 league and two in cup competitions). With eleven games remaining who knows how many he could get before the end of the season?
Massive March!
With eleven games remaining this season Crook are sat third in the league, the top four at the end of the season are set to win promotion so Crook Town's seven games scheduled for March are going to be massive. There are only two home games in March – one of them being a Friday night derby against Esh Winning, …
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 321
|
Q: How do you align multiple empheq boundary conditions? I am looking to align my curly brackets on the slide but cannot figure out how to do this. I have tried grouping an align statement around each as such:
\begin{align}
\begin{subequations}
&\begin{empheq}[left =\text{Solar Surface =}\empheqlbrace]{alignat = 2}
& \uvec{v}=\uvec{v}_\odot, & &\text{for} \ r=r_\odot \\
& \uvec{B}=\uvec{B}_\odot &\quad & \text{for}\ r=r_\odot
\end{empheq}
\end{subequations}
\begin{subequations}
&\begin{empheq}[left =\text{Critical Distance =} \empheqlbrace]{alignat = 2}
& \uvec{v}=\uvec{v}_c, & &\text{for} \ r=r_c \\
& \uvec{B}=\uvec{B}_c &\quad & \text{for}\ r=r_c
\end{empheq}
\end{subequations}
\begin{subequations}
&\begin{empheq}[left =\text{Infinite Distance =} \empheqlbrace]{alignat = 2}
& \uvec{v}\to\uvec{0}, & &\text{for} \ r\to\infty \\
& \uvec{B}\to\uvec{0} &\quad & \text{for}\ r\to\infty
\end{empheq}
\end{subequations}
\end{align}
A: Without the need for numbering - something that doesn't really help with a presentation - using a single align* and some cases works without problem:
\documentclass{beamer}
\usepackage{amsmath}
\usetheme{Berlin}
\usecolortheme{whale}
\newcommand{\uvec}[1]{\underline{#1}}
\begin{document}
\begin{frame}
\begin{align*}
\text{Solar Surface} &= \begin{cases}
\uvec{v} = \uvec{v}_\odot, & \text{for $r = r_\odot$} \\
\uvec{B} = \uvec{B}_\odot & \text{for $r = r_\odot$}
\end{cases}
\\
\text{Critical Distance} &= \begin{cases}
\uvec{v} =\uvec{v}_c, & \text{for $r = r_c$} \\
\uvec{B} =\uvec{B}_c & \text{for $r = r_c$}
\end{cases}
\\
\text{Infinite Distance} &= \begin{cases}
\uvec{v} \to \uvec{0}, & \text{for $r \to \infty$} \\
\uvec{B} \to \uvec{0} & \text{for $r \to \infty$}
\end{cases}
\end{align*}
\end{frame}
\end{document}
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 4,983
|
The Nordstrand in Erfurt is a flooded gravel pit that has been used as a leisure and recreation park since 1972. The total area of the park is 35 ha, of which about 16 ha is water and 19 ha is land.
The lake lies close to Erfurt's urban area, in the district of Johannesvorstadt, south of a chain of former gravel pits around the Alperstedter See that are to be renaturalized and developed into a water landscape under a regional development concept.
At the lake itself there are a water-skiing facility, a bathing beach, a diving school, beach volleyball courts, beach soccer pitches, a nature trail and untouched quiet zones.
See also
Erfurter Seen
External links
Website of the Nordstrand Erfurt
Lake in Thuringia
Lake in Europe
Body of water in Erfurt
Bathing facility in Thuringia
Johannesvorstadt
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 2,673
|
Poet and politician—okay. Václav Havel, the Czech dissident who became the first post-Soviet president of his country, was a playwright. That's sort of like being a poet. (Playwrights—and poets—may disagree with that assessment.) Havel wrote bravely and as openly as he could about the totalitarianism choking his country when it was a Soviet state and spent time in prison because of it.
Politicians and poets (or playwrights) both work with words. But jockey? You just never know what you're going to find in Wikipedia.
It turns out that of Gordon's three occupations, the least likely was not jockey but politician.
Why? I guess because it was there.
I know there's a lot to take in from that paragraph. The 19th century equivalent of a reality star gets elected by three freaking votes. And—surprise, surprise—spends his 18-month-long political career making "entertaining but largely irrelevant" speeches. But the thing that nearly made me do a spit-take was the first sentence. A two-month-long election cycle? How can we get one of those in the U.S.?
Sadly, that "greater activity" didn't last long. Beset by injuries and personal and financial woes, he shot himself less than four years after ending his political career. He was 36 years old.
It is so often quoted—and embroidered—that I will forgive you for reading it as a Hallmark sentiment. But let's look a little deeper into the poem, shall we?
You just never know what you're going to find when you go on Story Safari. I started out intending to write about this inspirational little quote I'd tucked into my quotation file, and I end up finding a story about a person famous for very little reason who ends up gaining elective office. I mean, who could imagine something like that ever happening?
But Gordon is right. We have to "live and labour/Till yon goal be won." And along the way, question hypocrisy wherever we find it, especially when it's cloaked in religion. And yes—be kind to those facing troubles, and have courage to face our own.
Learn to tell your story powerfully. Join me for my free webinar "The Courage to Communicate: Write Right to Lead"—Wednesday November 30th at 8 p.m. Eastern, 5 p.m. Pacific.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,147
|
/**
* This class is generated by jOOQ
*/
package org.carbon.sample.ext.jooq.tables.interfaces;
import java.io.Serializable;
import java.time.LocalDateTime;
import javax.annotation.Generated;
/**
* This class is generated by jOOQ.
*/
@Generated(
value = {
"http://www.jooq.org",
"jOOQ version:3.8.6"
},
comments = "This class is generated by jOOQ"
)
@SuppressWarnings({"all", "unchecked", "rawtypes"})
public interface ILecturerRoom extends Serializable {
/**
* Setter for <code>carbondb.lecturer_room.id</code>.
*/
public void setId(Long value);
/**
* Getter for <code>carbondb.lecturer_room.id</code>.
*/
public Long getId();
/**
* Setter for <code>carbondb.lecturer_room.lecturer_id</code>.
*/
public void setLecturerId(Long value);
/**
* Getter for <code>carbondb.lecturer_room.lecturer_id</code>.
*/
public Long getLecturerId();
/**
* Setter for <code>carbondb.lecturer_room.room_name</code>.
*/
public void setRoomName(String value);
/**
* Getter for <code>carbondb.lecturer_room.room_name</code>.
*/
public String getRoomName();
/**
* Setter for <code>carbondb.lecturer_room.room_detail</code>.
*/
public void setRoomDetail(String value);
/**
* Getter for <code>carbondb.lecturer_room.room_detail</code>.
*/
public String getRoomDetail();
/**
* Setter for <code>carbondb.lecturer_room.begin_datetime</code>.
*/
public void setBeginDatetime(LocalDateTime value);
/**
* Getter for <code>carbondb.lecturer_room.begin_datetime</code>.
*/
public LocalDateTime getBeginDatetime();
/**
* Setter for <code>carbondb.lecturer_room.end_datetime</code>.
*/
public void setEndDatetime(LocalDateTime value);
/**
* Getter for <code>carbondb.lecturer_room.end_datetime</code>.
*/
public LocalDateTime getEndDatetime();
// -------------------------------------------------------------------------
// FROM and INTO
// -------------------------------------------------------------------------
/**
* Load data from another generated Record/POJO implementing the common interface ILecturerRoom
*/
public void from(org.carbon.sample.ext.jooq.tables.interfaces.ILecturerRoom from);
/**
* Copy data into another generated Record/POJO implementing the common interface ILecturerRoom
*/
public <E extends org.carbon.sample.ext.jooq.tables.interfaces.ILecturerRoom> E into(E into);
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 8,444
|
# Clock and Data Recovery/Structures and types of CDRs/Applications of the 2nd order type 2 architecture

Source: https://en.wikibooks.org/wiki/Clock_and_Data_Recovery/Structures_and_types_of_CDRs/Applications_of_the_2nd_order_type_2_architecture

The 2nd order type 2 loop has become the preferred architecture for high-performance CDRs (regenerators) ever since advancing IC technology made it possible to integrate all CDR blocks inside one silicon chip.
All blocks can be integrated (high-gain and low-noise amplifiers, A/Ds, D/As and DSPs) with very low manufacturing costs.
Even the VCO is an integrated block, but it cannot reach the accuracies possible with a crystal oscillator. This is compensated by the more sophisticated architecture and some additional circuitry (the PFD).

This architecture uses the option of a zero (the gain increases at lower frequencies) in the filter stage between the phase comparator and the VCO.

[Figure: Bode magnitude plot of the loop filter. The zero is at angular frequency ωz.]
The zero is at angular frequency ωz = 1/τz; ωw = 1/τw is the frequency of gain = 1 = 0 dB, and τz = Gf τw.
Lower frequencies are amplified more (-20 dB/dec), and the lowest frequency, the d.c. jitter component, gets a theoretically infinite amplification.
The decrease of the amplification continues up to ωz, above which the gain becomes flat (Gf).
In practical applications of type 2 CDRs, Gf << 1 (often referred to as "the filter attenuation" and called β) and ωw << ωz.

The maximum output of the comparator corresponds to much less than the maximum frequency deviation of the VCO, a deviation that can be reached gradually thanks to the integration of low frequencies performed by the loop filter.

The response to an input variation always ends with catching up completely (zero steady-state error in this type 2 loop), even if the loop gain is much less than "infinite" or the "latency" is relevant,[1] and every CDR design is tailored to its specific application to avoid a significant overshoot in the response.

It is like using either the accelerator or the brake, increasing the pressure on the pedal indefinitely until the response of the system becomes sufficient.
It is easy to understand how this architecture reacts immediately but only to a limited extent, and then increases the correction slowly but progressively if the immediate response has not proven sufficient. This is especially evident during the acquisition phase.

The low gain at high frequencies attenuates the high-frequency input jitter more strongly (in many cases it also attenuates the bang-bang tracking jitter), while the integration of the input phase error (assisted by a PFD) provides for acquisition over a much wider range of input frequencies.

## Where used and how made

The 2-2 architecture is well suited for monolithic implementations, which explains its widespread use.

This architecture finds its practical applications where a bang-bang phase detector must be used and where the accuracy of the VCO is so poor as to exceed the jitter bandwidth required.

With a type 1 architecture it is practically impossible to phase lock onto a signal if the VCO free-running frequency differs from the frequency of the incoming signal by more than the bandwidth of the jitter transfer characteristic.
The strong reduction of the low-frequency noise from the VCO and the reduction of Es to a minimum (possibly zero) are additional advantages.

A bang-bang phase detector is inevitable in monolithic applications at very high line frequencies,[2] and monolithic also implies an on-chip oscillator, which is relatively noisy and relatively inaccurate in ωfr. An inaccurate ωfr means that acquisition is only possible using a (bang-bang) phase and frequency detector (PFD).

On the other hand, the high-volume applications of today are correspondingly very cost sensitive, and require that a single silicon chip accommodate the whole CDR, and much more besides.

In practice a bang-bang phase and frequency detector is always used, and the VCO is a linear monolithic one, either a ring oscillator or a low-Q LC.

Consequently all the monolithic:

• line regenerators,
• slave clocks in telecom networks,
• CDRs of portable electronics

are made with this 2 - 2 architecture.

A long run-length can affect the tolerance margin in sampling the received signal, but this is normally mitigated by the use of a ternary phase detector.

This loop type is the most "powerful" of the three, because it incorporates a loop filter that offers a very high gain at low frequencies.
This is the characteristic feature of this architecture, and it generates both its strong and its weak points.
The high gain at low frequencies allows the compression of any steady-state error to a very low value, unlike the other two loops (1 - 1 and 2 - 1). The filter gain is maximum at very low frequencies, decreases up to ωz, then flattens to the asymptotic value and stays constant in the bandwidth of interest (until parasitic poles, always present, make it drop).
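This gain shape (a -20 dB/dec rise below the zero, flat gain Gf above it) can be checked numerically. The sketch below uses the standard single-zero, single-integrator filter form H(s) = Gf·(1 + ωz/s); the values of Gf and ωz are illustrative assumptions, not taken from the text:

```python
import math

Gf = 0.05   # illustrative flat high-frequency gain ("filter attenuation")
wz = 1e6    # illustrative zero frequency, rad/s

def gain_db(w):
    # |H(jw)| for H(s) = Gf * (1 + wz/s)
    mag = Gf * math.sqrt(1.0 + (wz / w) ** 2)
    return 20.0 * math.log10(mag)

for w in (1e4, 1e5, 1e6, 1e7, 1e8):
    print(f"w = {w:.0e} rad/s -> {gain_db(w):6.1f} dB")
```

The printed values rise by about 20 dB per decade below ωz and settle at 20·log10(Gf) above it; unity gain (0 dB) falls at ωw = Gf·ωz, consistent with τz = Gf·τw and ωw << ωz.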
Gf is very low, in all cases below 1, so that it is often referred to as "the filter attenuation".
The flat gain of the loop filter at high frequencies allows good tracking of medium-frequency jitter, unlike the other second order loop (2 - 1).
The very low gain of the loop filter at mid and high frequencies keeps the tracking jitter (i.e. the jitter generated by the bang-bang frequency jumps) adequately low.
The high gain at low frequencies allows the frequency and phase acquisitions even when the VCO free-running frequency is shifted much more than ωz from the frequency of the signal to recover.

This 2 - 2 loop tracks very well, as long as the loop filter and the VCO operate within their range of normal operation.

Care must be paid to the very high (closed-loop) gain at low jitter frequencies when either:

• the lack of data transitions temporarily opens the feedback path (this is mitigated by the use of a ternary PD, but it still presents the risk of a phase error that may increase out of control for as long as the loop remains open);
• the (sinusoidal) input jitter has a significant amplitude. The filter gain at high frequencies is kept low to limit the jitter generation resulting from the bang-bang, but this limits the ability of the loop to track a fast and large swing of the input phase.
The jitter tolerance depends on the corner frequency of the loop filter and on the length of runs without transitions.\nClosed loop gain and corner frequency of the tolerance curve are concepts easily identified if the loop is linear.\nWhen the loop incorporates a bang-bang detector, and the data transitions are random, it is necessary to restrict the study only to conditions of practical interest and to use simulations.\n\n### Introductory example\n\nThis simulation diagram helps understand the operation of the 2 \u2013 2 architecture with bang-bang phase and frequency detector.\n\n.\nThe two waveforms of the two phase detectors of the PFD are shifted higher in the diagram, for easier interpretation. The other waveforms are not shifted.\nThe outputs of the PFD and of the loop filter are scaled differently, with the filter output more amplified in the representation. (The filter output does not reach the clamping level(s), and the VCO drive signal is not different from it, i.e. would be exactly overlapped to it in the diagram. The high frequency parasitic pole of the charge pump is at 200 Mrad\/sec, and its effect is barely visible in the shape of the filter bang bang spikes).\nTo the left the waveforms show that there is no incoming signal. LOS is asserted and the loop is open. The VCO (=the CDR) output simply drifts away with a slope proportional to the difference between the signal frequency and the VCO free-running frequency.\nAfter 1.28 \u03bcs (= 150 simulation steps) the signal appears with a phase mismatch equal to -4.0 radian, which happens to reduce the cumulated phase drift that had reached 10.06 rad.\nLOS is dis-asserted at that moment and the loop starts catching up.\nInitially the output of the PFD is a constant positive level (meaning the VCO is slower than the incoming signal. 
The filter, that integrates the low frequencies of the comparator output, adds a further positive ramp.\nThe output phase undershoots the input phase before lock, with even a small overshoot when the filter makes the first negative bang and starts a negative ramp to finally reach a stable continuous bang-bang condition.\nDuring the frequency acquisition three slips have createdted a gap of exactly 3 \u03c0 between input and output. The gap remains constant from then on, apart from a small additional phase error.\nAs soon as the loop has caught up with the input, the typical pattern of bang-bang starts: the loop is in lock. The comparator bangs rapidly between is two output states and the filter output maintains a d.c. bias that compensate the \u03c9p - \u03c9fr distance.\nAfter 8.53 \u03bcs (= 1000 simulation steps) the input signal phase starts a sinusoidal jittering with a large amplitude (3.20 rad) and a frequency of 0.9 rad\/\u03bcs.\nThis jitter brings the loop close to its tolerance limit, which is shown by the detector and by the filter outputs that is alternatively forced out of bang bang, as well as by the error signal that shows deviations from its average of 3\u03c0-4 rad in correspondence with those periods of difficult tracking. The error signal deviations are not large in this case, and remain within +0.25 and -0.26 rad.\n\nIn this example the transition density is 100%, but the random nature of the input signal must also be taken into account.\n\n### Single zero filter\n\nThe zero is at angular frequency \u03c9z =1\/\u03c4z,\n\u03c9w = 1\/\u03c4 is the frequency of gain = 1 = 0 dB . 
\u03c4z = Gf \u03c4 .\n\nThis loop type is the most \u201cpowerful\u201d of the three, because it incorporates a loop filter that offers a very high gain at low frequencies, that is the key feature of the 2 - 2 architecture and that generates its strong and weak points.\n\nThe filter gain is maximum at very low frequencies, and decreases up to \u03c9z, then it flattens to the value Gf and stays constant in the bandwidth of interest (until parasitic poles, always present, make it drop).\nThe high gain at low frequencies allows the frequency and phase acquisitions (a ternary PFD is used in these applications) even when the VCO free-running frequency \u03c9fr is shifted much more than \u03c9z away from the frequency \u03c9p of the signal to recover.\nThe high gain at low frequencies allows the compression to very low value of any steady state error, unlike the other two loops ( 1 - 1 and 2 - 1 ).\n\nThe (almost) infinite gain at low frequencies gives this architecture some very useful properties, but it can also be the origin of unexpected troubles.\n\nAs shown in the figure above, Gf is very low, in all cases below 1, so that it is often referred to as \"the filter attenuation\" and called \u03b2. As a consequence, \u03c9z is higher than \u03c9w, the zero-gain frequency.\n\nIn the 1 - 1 architecture the output of a bang-bang binary PD always makes the VCO jump from one end of its control range to the opposite end. This leaves a strong residual \"tracking\" jitter in the output phase of the PLL, because only the 1\/s slope of the VCO characteristic does filter the sharp and large swings.\nThe 2 - 2 architecture in principle behaves the same for high frequency jitter as its loop filter does not filter out the high frequency components (higher than \u03c9z) coming from the comparator output. The filter passes the high frequency components of the jitter to the VCO with a flat transfer function. 
But the value of the filter gain above \u03c9z ( although flat up to where parasitic poles at even higher frequencies make themselves felt ) is much smaller in a 2 - 2 architecture than the equivalent gain in a 1 - 1 architecture and that makes the tracking jitter proportionally smaller.\nThe use of a ternary phase detector further reduces the peak tracking jitter.[3]\n\nThis 2 - 2 loop tracks very well, as long as the loop filter and the VCO operate within their range of normal operation.\n\nA unit step input generates an output with an initial step as high as the high frequency gain,\nand a following ramp, with a slope equal to the high frequency gain times the cut-off frequency.\n\n### Loose-tracking conditions\n\nWhen the phase detector outputs a constant request for higher, or for lower, frequency for a significant number of clock cycles,\na temporary lack of bang-bang around the locking condition occurs, while the VCO lags behind a rising or falling input phase.\n\nAs this interval grows, the tracking error increases (see the linear ramp of the step response of the filter), and this might result in a phase error beyond the tolerance limit.\n\nThis 2 - 2 loop may drift away from lock more than the other two loops (which are of type 1), if the phase information is not refreshed, as indicated also by its linear model.\n\nThis may occur for two different causes:\n\n\u2022 the input signal has too few transitions (run-length problem, that can be approximately described using the stability factor \u03be) or\n\u2022 the phase of the input signal varies too fast for the loop to track (slew-rate problem, that is investigated by simulations with sinusoidal input jitter close to the tolerance limit).\n\nBoth causes can reduce the tolerance of the CDR, and the effects of each can be reduced at the expense of other CDR performances.\n\n#### Run-length problem\n\nThe actual statistics of the transitions in the input signal are so difficult to predict that only a simple worst-case approach
can be of use.\n\nThe very extreme case where isolated transitions come periodically and separated by a constant number of line-pulse periods may give some insight and explain why large design margins are found in real monolithic CDRs.\n\nIn a condition of very rare but periodic transitions, the fundamental parameter is the time that the loop waits before the next update of the input phase, update that comes from the next transition of the input signal.\n\nWhen a transition comes as soon as possible, the update time is: tupdate = 1\/fp.\nWhen a bit of the same sign follows in the input signal, tupdate at least doubles. More precisely:\ntupdate = run-length * 1\/fp.\n\nIt is convenient to define a parameter \u03be:\n\n\u03be = 2 \u03c4z \/ tupdate\n\nThe longer the run length, the more critical the loop response may become . In fact, \u03be is called the stability factor.[4]\n\nLet's focus now on what happens in the condition of bang-bang tracking.\n\nThe last bang corrects the filter output, higher or lower, by a jump of G\u03c6Gf [volt], and a ramp follows, in the same direction, with slope G\u03c6Gf \u03c9z [volt\/sec]. (See the figure above).\nThe VCO frequency jumps G\u03c6GfGVCO = G [rad\/sec] and then ramps with a slope of G\u03c6GfGVCO\u03c9z = G\u03c9z [rad\/sec2]\n(G\u03c6GfGVCO = G, where the phase detector outputs can be -G\u03c6, +G\u03c6 if the detector is binary and -G\u03c6, 0, +G\u03c6 if it is ternary)\nThe quantity G is very closely related to the quantity fbb.[5] Both measure the frequency jump during bang-bang tracking, and are related by the formula G = 2\u03c0 fbb.\nWhile fbb is easy to interpret as the frequency jump and is fixed by the circuitry of the CDR, G is used as a nominal value for the open loop gain, in the nominal conditions of DT = 1 and maximum phase error (i.e. minimum G\u03c6). 
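The jump-plus-ramp behaviour just described fixes how far the VCO phase can drift between two transitions. A rough numerical sketch (all names and parameter values below are assumptions for illustration, not taken from the text):

```python
import math

# Illustrative sketch (assumed values) of the worst-case run-length analysis.
# Between two transitions the VCO phase drifts by a linear part G*t plus a
# parabolic part (1/2)*G*wz*t^2, and the stability factor is xi = 2*tau_z/t.
def phase_drift(G, wz, t_update):
    return G * t_update + 0.5 * G * wz * t_update ** 2

def stability_factor(tau_z, t_update):
    return 2.0 * tau_z / t_update

G = 2 * math.pi * 1e4      # assumed bang-bang frequency step, rad/s
wz = 4.0e5                 # assumed filter zero, rad/s
tau_z = 1.0 / wz
t_update = 72 * 1e-10      # assumed run length of 72 bits at 10 Gbit/s

xi = stability_factor(tau_z, t_update)   # also the ratio linear/parabolic drift
drift = phase_drift(G, wz, t_update)     # must stay below the LEO, in rad
```

With these assumed values the drift per update stays far below a lateral eye opening of \u03c0, consistent with the large design margins mentioned for real monolithic CDRs.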
In fact, G is always found multiplied by DT in the formulae that describe the loop behaviour.\n\nThe VCO phase, as a function of the time t, is the sum of a linear ramp Gt plus a parabolic ramp ${\displaystyle {\tfrac {1}{2}}}$G\u03c9zt2 .\n\nAfter tupdate, the VCO phase has increased (or decreased) by a linear part G tupdate [rad] plus a parabolic part ${\displaystyle {\tfrac {1}{2}}}$ G \u03c9z tupdate2 [rad].\nThe ratio of the linear part of the phase increase to the parabolic part of the phase increase is exactly the stability factor \u03be:\nG tupdate \/ (${\displaystyle {\tfrac {1}{2}}}$ G \u03c9z tupdate2) = 2\u03c4z \/ tupdate = \u03be\nWhen the update takes place, the new bang makes the filter output jump in the opposite direction by a step of G\u03c6Gf [volt], followed by another ramp, now in this new direction.\nIn order to stay in tracking and not to drift out of lock, the phase drift during tupdate must not make the VCO phase drift outside the lateral eye opening:\n${\displaystyle {\tfrac {1}{2}}}$ G \u03c9z tupdate2 + G tupdate < LEO (0 < LEO \u2264 \u03c0)\n${\displaystyle {\tfrac {1}{2}}}$ G \u03c9z (2\u03c4z\/\u03be)2 + G (2\u03c4z\/\u03be) < LEO (0 < LEO \u2264 \u03c0)\nThe equation always yields one positive real root,[6] corresponding to \u03be \u2265 2.\n\nThe value of G however decreases from its nominal value proportionally to the reduction of DT from its maximum of 100 %.\n\nThe ability of the CDR to tolerate, with minimal phase drift, some very long run-lengths and\/or periodic repetitions of them, can be increased by increasing the value of \u03c4z at the design stage (\u03be = 2\u03c4z \/ tupdate).\n\nThis reduces the bandwidth of the loop filter \u03c9z and has the adverse effect of reducing the frequency lock-in range and of increasing the lock-in time.\n\nIn practice, values of \u03be in excess of 1000, even with low transition densities as found in SONET transmissions, are not used.\n\n#### Slew-rate problem\n\nThere always exists the
possibility that the VCO is not able to follow the rapidly changing phase of the input, because the rate of change of the VCO phase is insufficient. The VCO is \"slew-rate\" limited.\n\nThis is not normally due to late response of the VCO driven by the signal from the filter output.\nThe frequency deviation limits of the CDR are set by the characteristics of the loop filter block, rather than by the extremes of the frequency range of the VCO itself.\n\"The VCO is designed to respond fully in one update time. This is usually very easy to achieve in ring-oscillators and possible with some care using low-Q VCOs.\" [7]).\n\nThe 2 - 2 CDR is not able to vary rapidly its frequency. If the input signal offers the maximum transition density, the loop can respond to an input phase step with a frequency step equal to G [rad\/sec], i.e. with a phase ramp of slope G. ( More precisely, the output phase of the loop increases -until the next transition that comes after tupdate- with a parabolic increase where the linear part is normally the largest: \u03c6 = Gtupdate + 1\/2 G\u03c9ztupdate2 )\n\nThe value G (calculated, unless otherwise specified, at DT = 100%) is the loop reference gain ( G\u03c6 is calculated with the largest phase input error, typically \u00b1\u03c0 , multiplied by DT, Gf is the value of the filter gain in the flat region, GVCO is a linear approximation around the fp working point ), but is also proportional to the bang-bang frequency step (G = 2\u03c0 fbbG\u03c6).\n\nBut if the phase of the input signal varies more rapidly than G rad\/sec (G for a binary PD, GDT for a ternary PD), then a phase error appears and it may grow and possibly affect the CDR tolerance.\n\nThis problem may be investigated using a sinusoidal input phase jitter. 
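As a rough sketch of this slew-rate limit (function names and parameter values are assumptions for illustration): the peak slope of a sinusoidal jitter Aj sin(\u03c9j t) is Aj \u03c9j [rad/s], so the largest amplitude the loop can follow without slewing falls off as 1/\u03c9j:

```python
import math

# Illustrative sketch (assumed values): a sinusoidal input jitter
# Aj*sin(wj*t) has a peak phase slope of Aj*wj [rad/s]. The loop can only
# ramp its output phase at about G*DT [rad/s], so slewing starts when
# Aj*wj exceeds that slope.
def max_tracked_amplitude(G, wj, DT=1.0):
    return G * DT / wj   # largest Aj (rad) still tracked without slewing

G = 2 * math.pi * 1e4    # assumed loop gain / bang-bang frequency step, rad/s
wj = 1.0e4               # assumed jitter angular frequency, rad/s

Aj_max = max_tracked_amplitude(G, wj)
```

Halving \u03c9j doubles the tolerable amplitude, which is the -20 dB/dec behaviour of the slew-rate-limited part of the tolerance curve.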
A sinusoidal input is also convenient because the tolerance curve of the CDR is measured as a function of a sinusoidal input jitter.\n\nThe slew rate concept, and how the slew rate limits in particular the tolerance curve of a 2nd order type 2 CDR with a bang-bang PD, has already been introduced and discussed to some extent in the jitter tolerance page, where an introductory slew-rate case for this architecture -with bang-bang PFD- is also shown.\n\nIn fact, the slew rate is relevant because it directly limits the jitter tolerance of the CDR, more than any other performance.\nThe phase tolerance in a 2nd order type 2 CDR can be (prudently) approximated, in its frequency dependent part, using the condition of slew-rate onset. The phase tolerance there can be derived from the frequency range possible for its VCO.\n\nThis figure shows, using asymptotic approximations, how the tolerance curve (of the 2nd order type 2 CDR with bang-bang detector) depends on the loop fundamental parameters.\n\nThe curve identifies all the regions of theoretical interest, over the entire frequency range.\nTwo of the four asymptotic regions (the two regions with a -20 dB\/dec slope) correspond to limitations of the VCO frequency range:\nthe low frequency region corresponds to the large-signal d.c. clamping of the VCO drive signal;\nthe medium frequency region, above \u03c9z, corresponds to small-signal high-frequency limitation of the bang-bang drive step (Vbb).\n\nBut the frequency range of practical interest for the jitter tolerance does not extend more than a couple of decades either:\n\n1. below the frequency region where the asymptotic slope of -20\u00a0dB\/dec is identified (because all PLL based CDRs follow that slope -or a steeper one- at lower frequencies, while the network requirements always saturate to a buffer width, i.e. to a constant phase, from a certain jitter frequency downwards), nor\n2. 
above the frequency where the flattening of the horizontal asymptote is identified ( because a PLL based CDR will follow that asymptote for all higher frequencies of jitter).\nThe following figure shows tolerance curves (in dB and in UI, obtained by numerical simulations for another 2nd order type 2 CDR with bang-bang detector) in the range of frequencies of practical interest.\nTolerance curves in dB and in UI obtained by simulation of a 2 - 2 CDR with PFD and transition stuffing.\nThe LEO is set at a low value of 1 rad. The asymptotes are positioned manually over the interpolated dB curve.\nA good agreement is found between the corner point of the two oblique asymptotes and the given \u03c9z value of 4.0 10+5.\n\nThis figure is like a zoom on the frequency region of practical interest, but shows the same fundamental behaviour (in that region) that is present in the previous figure.\n\nIt may be noted that the asymptotic tolerance at high frequencies is slightly lower than the minimum usually specified of 0.15 UI.\nThis is a consequence of having used the pessimistic value of 1 rad for the Lateral Eye Opening.\n\nThe values of Gf (or \u03c9bb = GVCO Gf = G\/G\u03c6, because it has been assumed that the output of the PD is +1 or -1 volt) and of \u03c9z are always chosen as a compromise between opposite application requirements.\n\n1. the value of Gf (Gf is the factor of G that is easiest to vary) is kept as small as possible to reduce the "tracking" jitter due to the bang-bang jumps. But it cannot be reduced too much or else the "slew-rate" becomes excessively small. The slew-rate G (of the -20\u00a0dB\/dec in the frequency range of interest for the loop tolerance) is originated by the loop filter that has a very limited gain (= an attenuation) at frequencies higher than \u03c9z. 
It can be seen in the figure of asymptotic tolerance that the intersection point with the 0\u00a0dB axis is:\nGVCO Vbb = \u03c9bb\nand drifts to the left when Gf (or \u03c9bb) is made smaller.\n2. The value of \u03c9z is kept low in order to have a stability factor \u03be large enough, that is to tolerate long run-lengths. But \u03c9z cannot be too low, or else the times for acquisition of frequency and phase become too long, and the time before the steady state error is squeezed to zero becomes too long as well. The PLL would resemble too much a type 1 system with low gain.\n\n## Jitter Bandwidths in 2nd order type 2 bang-bang CDRs\n\nBang-bang CDRs comprise most of the 1-1 loops (which use a PD) and practically all of the 2-2 loops (which use a PFD).\n\nThese loops are intrinsically non-linear, primarily because of the bang-bang nature of the phase detector: this is the first non-linearity.\n\nThe very large (\u2261 infinite) gain of the detector needs to be compensated by a signal level limitation at another point of the loop.\n(The level limitation keeps signal levels within the physical capability of the circuit elements).\nThe level limitation takes place either:\n1. in the limitation of the VCO drive signal made by the circuits driving the VCO, or\n2. in the extremes of the VCO voltage-to-frequency characteristic.\n\nIt is a frequency limitation. This is the second non-linearity. In both cases of frequency limitation, when the limit is reached, the output phase waveform of the CDR enters a slew-rate condition.\n\nThe VCO intrinsic characteristic, at both ends, depends very much (and often unpredictably) on manufacturing and environmental variations.\nTherefore, the frequency range is always deliberately limited by a clamp of the output range of the VCO drive stage, +\/- Vdr .\n\nOverall, two non-linearities (PD or PFD, and VCO frequency constraints) combine, making transfer (e.g. 
jitter transfer) functions become families of functions.\n\nEach function in a family is associated with a specific input waveform.\n\nIn the (transfer functions') case of sinusoidal inputs, each function of \u03c9j is associated with the input amplitude only.\n\nThe output is also periodic, and the transfer function is defined as the ratio of the output peak amplitude to the input peak amplitude Aj (that is constant at all frequencies for each function in the family).\n\nVbb = the absolute value of the voltage step that takes place in these CDRs, at the input of the VCO, when the PD bangs up or down from its intermediate level.\n\nIt coincides with the value of Gf(\u03c9) for \u03c9 >> \u03c9z. In fact, the PD output is either +1 or -1 [volt] (or 0 V in the case of a ternary PD).\nIn 1 -1 loops +\/- Vbb is the total drive range of the VCO and coincides with +\/- Vdr .\nEd is generated by a deviation of the duty cycle away from 50% in the VCO drive waveform.\nIn 2 - 2 loops +\/- Vbb is much smaller than the drive range of the VCO +\/-Vdr (e.g. Vbb \/ Vdr = 10-3).\nThe VCO drive waveform shows a tiny bang-bang three-level ripple added to a slower and much larger waveform, that is made by the filter amplification of low jitter frequency components.\nEd is obtained by a variation of the mean level of the bang-bang ripple in the drive waveform.\n\n## Jitter transfer functions in bang-bang CDRs of 2nd order and type 2 [8]\n\nThis is a type 2 system, which means that -during tracking- the mean values of the input and of the output phase waveforms do not differ.\n\nThe difference of mean values is the steady state sampling error Es, and Es = 0 in these systems.\nIn other words, the average level of the difference between the input and output waveforms is zero (= the average phase error is zero).\n\nIn 2nd order systems the signal processing between PD and VCO consists of two regions along the frequency axis:\n\n1. 
a flat attenuation above \u03c9z (that makes the drive signal so small that, in tracking, the VCO frequency bang-bangs within a small range around fp).\n2. a region of increasing amplification as jitter frequency decreases from \u03c9z, that slopes at -20 dB\/dec.\n\nThe two regions of the loop filter define the loop behaviour unless either of the two VCO frequency limits is reached (with an input jitter large enough), where the resulting slew-rate corresponds to either of the VCO limits (+\/- Vdr or +\/- Vbb).\n\nThe two frequency regions of the loop filter, with jitter levels that make Vbb slewing evident in both of them, can be identified in the figure already shown in the jitter tolerance page.\n\nSlew-rate limitations at medium and at high jitter frequencies.\nThe triangular wave of the output at high frequencies is symmetric: the type 2 loop squeezes to zero any steady state output error.\n\nIn steady state with a sinusoidal input, the output of the CDR is just periodic with the same period. 
The jitter transfer function can be defined as the ratio of the output peak value to the input peak value.\n\nThe waveform (in black) of the VCO drive signal bang-bangs when there is no slewing, but does not bang-bang and slowly ramps away as long as the phase error keeps the same sign (constant slew-rate condition of either sign).\nThe steep bang transitions, and the slope of the ramps that follow, implement the step response of the loop filter driven by the steps (bang-bangs) of the PD output.\nIn the PD output, the slow ramp during slewing compensates any possible steady-state error of the loop output.\n\nThe figure is drawn with values of jitter amplitudes Aj still within the jitter tolerance boundary, but larger than those used to measure the jitter transfer characteristic of a CDR.\n\nIn jitter transfer measurements, at low \u03c9j (= at low jitter frequencies), the slewing that is shown in the left hand side of the figure does not appear.\nThe amplitude of the jitter input used in those measurements (= Aj) is lower, and the CDR output phase tracks the input phase jitter and the transfer curve is a constant 1, i.e. a flat 0\u00a0dB in a Bode plot. A value of Aj = 1.5 UIpp is typical for the measurement of jitter transfer below \u03c9j, and of 0.15 UIpp above \u03c9j. [9]\n\nBut for jitter frequencies \u03c9j sufficiently high, even if Aj ( the amplitude of the input sinusoidal jitter ) is relatively low, the tracking is not perfect and the output phase waveform from sinusoidal becomes triangular like in the right hand side of the figure above:\n\nAcquisition and tracking of a 2nd order type 2 CDR at the very onset of triangular slewing (DT = 1).\nThere the triangular output has fixed slopes (+\/- VbbGVCO) and it takes one quarter of the jitter period, 1\/4 * 2\u03c0\/\u03c9j, for the output to vary from 0 to its peak value.\nIts peak value is: ${\displaystyle {\tfrac {\pi V_{bb}G_{VCO}}{2\omega _{j}}}}$. 
As the peak varies inversely to \u03c9j, in this region the jitter transfer curve rolls off at -20 dB\/dec.\nThe ratio of the output peak value to the input peak value (Aj) in the Vbb slew-rate region is therefore\u00a0:\n${\displaystyle {\tfrac {\pi V_{bb}G_{VCO}}{2A_{j}\omega _{j}}}}$\nIf the output was measured with the amplitude of its fundamental component instead of with its peak value, the roll off part of the transfer curve would just be translated (downwards) a tiny -0.091 dB.\n\nThe overall jitter transfer curve has one horizontal asymptote at 0\u00a0dB towards low frequencies, and a - 20\u00a0dB\/dec asymptote towards high frequencies.\n\nThe transition between the two asymptotes can be approximated using the model of a first order linear system, whose corner frequency \u03c9jc is where the sloping asymptote intersects the 0\u00a0dB horizontal asymptote. (The approximation given by this model is more than adequate for most engineering purposes)\u00a0:\n\n\u03c9jc = ${\displaystyle {\tfrac {\pi V_{bb}G_{VCO}}{2A_{j}}}}$[10]\n\nThe jitter transfer can therefore be modelled as:\n\nJitterTransfer(\u03c9) = ${\displaystyle {\frac {1}{\sqrt {1+{\tfrac {\omega ^{2}}{\omega _{jc}^{2}}}}}}}$\n\nThe Aj used to measure the transfer characteristic may be smaller than \u03a6m.\n\nWith large Aj (e.g. 1.5 UIpp ), a sinusoidal jitter is transferred unchanged to the output.\nAt the onset of triangular slewing, that is a function of Aj, the transfer ratio rolls off at -20 dB\/dec. 
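The first-order model above can be sketched as follows. This is an illustrative sketch with assumed parameter values; the low-pass form is written so that the curve is flat at 0 dB below \u03c9jc and rolls off at -20 dB/dec above it, matching the asymptotes described in the text:

```python
import math

# Sketch of the first-order low-pass model for the jitter transfer
# (parameter values are assumptions, not from the text):
# w_jc = pi*Vbb*Gvco/(2*Aj), JT(w) = 1/sqrt(1 + (w/w_jc)^2).
def corner_frequency(Vbb, Gvco, Aj):
    return math.pi * Vbb * Gvco / (2.0 * Aj)

def jitter_transfer(w, w_jc):
    return 1.0 / math.sqrt(1.0 + (w / w_jc) ** 2)

w_jc = corner_frequency(Vbb=1e-3, Gvco=2 * math.pi * 1e9, Aj=0.15 * 2 * math.pi)
# At the corner the model is about 3 dB down; one decade above it is ~20 dB down.
at_corner = 20 * math.log10(jitter_transfer(w_jc, w_jc))
one_decade_up = 20 * math.log10(jitter_transfer(10 * w_jc, w_jc))
```

Note that the corner frequency, and hence the measured bandwidth, moves inversely with the chosen Aj, as discussed in the surrounding text.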
This region is typically explored with Aj down to 0.15 UIpp.\nThe transition from 0 dB flat to -20 dB\/dec roll-off takes place at a frequency inversely proportional to Aj.\nIf Aj is made too small (smaller than \u03a6m), the bandwidth of the transfer curve does not increase any more.\n\nThis is a realistic assumption in many practical cases, and may even be used to design the CDR so that a certain mask of jitter transfer is met.\n\nThe sinusoidal jitter that is to be applied in the bandwidth from the maximum \u03c9jc and a couple of decades above does not normally exceed 0.15 UIpp [11]\nThe \u03a6m in a documented 10 Gbps CDR is found to be about 0.26 UIpp, taking into account just the metastability in the PD flip-flops, without other smaller contributions.[12]\n\nThe model becomes the linear model of a 2nd order type 2 loop with a linear comparator of gain G\u03c6 = 1\/\u03a6m.\n\nThe transfer function has a fixed (= independent of Aj in this case) cut-off frequency at\n\n\u03c9n2 = ${\displaystyle {\sqrt {G\omega _{z}}}}$\nIt may be remarked that the linear 2 - 2 loop has an open loop gain G = G\u03c6GfGVCO and that in this case G\u03c6 can be computed as Aj\/(Aj\u03a6m) and Gf can be computed as Vbb.\nSubstituting inside the formula for \u03c9n2 above, the formula for \u03c9jc is obtained.\nAs G is in these practical cases very high (higher than 4\u03c9z), the 2 - 2 model that can be used for small Aj (smaller than \u03a6m) has a damping coefficient \u03b6 larger than 1 and therefore no gain peaking ( \u03b6 = ${\displaystyle {\sqrt {\tfrac {G}{4\omega _{z}}}}}$).\n\n## Jitter tolerance of bang-bang CDRs of 2nd order and type 2\n\nIn real 2 - 2 CDRs several different regions of jitter tolerance, more than in the applications of other CDR architectures, can be found.\n\nThe phase tolerance of a 2nd order type 2 bang-bang CDR can be (prudently) approximated using the condition of slew-rate onset.\nThe 0 dB reference depends on the measurement unit 
chosen for the sinusoidal jitter Amplitude Aj.\nIf it is radian, then 0 dB corresponds to 1 rad; if it is Unit Intervals, then 0 dB corresponds to 1 U.I.\n\u2022 There is the high frequency asymptote corresponding to the bare tolerance of the lateral eye opening, where the PLL is too slow to follow the jitter and tracks just the jitter average value. (This is the only region where the tolerance depends on a quantity - the LEO- that is not a characteristic of the CDR itself).\n\u2022 This region is preceded by a -20\u00a0dB\/dec region corresponding to the slew rate limitation originated by the limited high-frequency gain of the loop filter, i.e. to a range limited to +\/- Vbb for the VCO drive signal.\n\u2022 The corner between these two asymptotes corresponds to their intersection and may be called \u03c9hor. It can be obtained by extrapolation of \u03c90\u00a0dB, the frequency at which the Vbb asymptote crosses the 0\u00a0dB axis.\n\u03c90\u00a0dB = GVCO*Vbb\n\u03c9hor = \u03c90\u00a0dB\/(LEO - \u03c9bbTp\/2)\n\u2022 A good fitting with simulated results is obtained by smoothing the corner with a first order approximation of the tolerance curve [13]\n\u2022 A third region, at even lower frequencies, with a slope of -40\u00a0dB\/dec, is generated by a quasi-linear operation of the CDR. 
There the bang-bang is less obtrusive and the behavior of the PLL is well approximated by the linear model of the previous page, although with a larger value of G\u03c6.\nA good fitting has been found using for G\u03c6 the value:\nG\u03c6 = ${\\displaystyle {\\tfrac {1}{T_{j}}}\\int \\limits _{t_{0}}^{t_{0}+T_{j}}\\mid }$ G(\u03c6(t)) ${\\displaystyle \\mid }$ ${\\displaystyle dt\\ ,\\ \\ }$\nwhere:\nG(\u03c6(t)) = ${\\displaystyle {\\begin{cases}{\\frac {\\pi }{{\\text{\u03a6}}_{M}}},&\\mid {\\text{ \u03c6(t)}}\\mid {\\text{ \u2264 }}{\\text{\u03a6}}_{M}\\\\{\\frac {\\pi G_{f}}{\\text{\u03c6(t)}}},&{\\text{\u03a6}}_{M}{\\text{ \u2264}}\\mid {\\text{\u03c6(t)}}\\mid {\\text{ \u2264 }}\\pi \\end{cases}}}$\nwhere Gf represents the static gain value at the largest phase difference close to the output inversion (i.e. the end-of-range, minimum, value of the gain of the bang-bang detector).\n\u2022 The border between this and the adjacent Vbb region to the right is conceptually at frequency \u03c9z.\n\u2022 At low jitter frequencies, the slew rate associated with the limitations of the VCO control range (conceptually due to the intrinsic VCO characteristic, but in practice due to the tighter control range forced by the clamping in the loop filter output) originates another region of slew-rate (-20\u00a0dB\/dec), higher up in the left hand part of the Bode plot.\n\u2022 The border between this and the adjacent region to the right is at frequency \u03c9D = \u03c9z Vbb \/ VD.\n\u2022 The last low-frequency region to the left might be a flat part of the curve if the CDR incorporated the additional feature of a phase aligner.\nThe tolerance curve is found when the amplitude of the sinusoidal input jitter makes the error signal reach the LEO value.\nWhen slewing creates the tolerance boundary, it is convenient to model the tolerance curve with the onset of slewing (a conservative estimate).\nThe onset of slewing just makes the error function increase, and there is some margin 
in Aj before this increase reaches the LEO value.\nThe margin between the Aj value at the onset of slewing and the Aj value that truly makes the peak of the phase error reach the LEO value (i.e. the condition that truly defines the border of the tolerance region) is negligible at smaller \u03c9, and increases somewhat with larger \u03c9.\nWhen the output phase jitter becomes triangular and the high frequency tolerance goes from -20 dB\/dec to the flat LEO horizontal asymptote, the onset of slewing is a conservative estimation. A first order transition, with a 3 dB smoothing at the corner, gives a good approximation, as confirmed by numerical simulations. See the figures below for an example.\n\n#### Examples of CDR behaviour close to the tolerance border\n\nThe jitter tolerance is generally important in the range of jitter amplitudes from 0.1 U.I. to 2 U.I.\n\nA sinusoidal jitter smaller than 0.1 U.I. is always tolerated, even if the CDR does not track at all, because the LEO tolerance is always larger than that.\nA sinusoidal jitter larger than 2 U.I. can only be present in a network at frequencies that a CDR (meeting the other requirements) tracks perfectly.\n\nTherefore the frequency range of interest extends from a couple of octaves below \u03c9z up to where the curve is almost flat.\n\nThe following figure shows tolerance curves (in dB and in U.I.) obtained by numerical simulations.\n\nTolerance curves in dB and in UI obtained by simulation of a 2 - 2 CDR with PFD and transition stuffing. 0 dB here corresponds to 1.5 U.I.\nThe LEO is set at a low value of 1 rad. The asymptotes are positioned manually over the interpolated dB curve.\nA good agreement is found between the corner point of the two oblique asymptotes and the given \u03c9z value of 4.0 10+5.\n\nIt may be noted that the asymptotic tolerance at high frequencies is slightly lower than the minimum usually specified of 0.15 UI. 
This is a consequence of having used the pessimistic value of 1 rad for the Lateral Eye Opening.\n\nA frequency of particular interest is the frequency at the limit between two conditions:\n\n\u2022 either both tracking and slewing are present during each jitter period (lower frequencies) or\n\u2022 slewing only is present all the time (higher frequencies, and triangular output).\n\nA sinusoidal jitter at such frequency, starting from zero, makes the first peak of the output reach up to the sinusoid peak.\n\n(The next peaks of the output reach a little lower, as the balancing effect of the type 2 loop takes place).\nOnset of triangular slewing in a 2-2 CDR.\nA sinusoidal jitter starts abruptly, at a frequency much higher than the frequency \u03c9z of the zero in the loop filter.\nAt this frequency the Jitter transfer is about - 1 dB.\n\nIt can be calculated as (suffix j means jitter, SR means Slew Rate):\n\ninput sinusoid peak = Slew-Rate x Tj \/ 4\nAj = SR \u03c0 \/ (2 \u03c9j)\nSR = 2 Aj \u03c9j \/ \u03c0\n\u03c9j = \u03c0 SR \/ (2 Aj)\nIn the example shown, the figure is obtained by simulation, and the limit condition is obtained by trial and error, looking for a jitter frequency that makes the high frequency bang-bang disappear between the straight alternating slopes of the output phase.\nAn alternative method, that yields results that are not much different, consists in solving the equation that requires the slope of the triangle in tracking always to start by being exactly tangent to the sinusoid. 
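Both estimates of the onset frequency can be sketched numerically. This is an illustrative sketch only; the function names and numeric values are assumptions:

```python
import math

# Illustrative sketch (assumed names/values) of the onset of triangular
# slewing. With slew rate SR and sinusoidal jitter Aj*sin(wj*t), the
# triangular output peak is SR*Tj/4, so the limit is wj = pi*SR/(2*Aj).
def onset_frequency(SR, Aj):
    return math.pi * SR / (2.0 * Aj)

# Tangency method: requiring the triangle slope to start tangent to the
# sinusoid gives a peak delay t0 = arctan(2/pi)/wj and a peak ratio cos(wj*t0).
t0_times_wj = math.atan(2.0 / math.pi)
ratio_db = 20 * math.log10(math.cos(t0_times_wj))   # about -1.48 dB
```

The tangency method reproduces the peak ratio of about -1.48 dB quoted in the text that follows.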
This leads to a delay of the triangular peak versus the sinusoid peak of t0 = ${\\displaystyle {\\tfrac {1}{\\omega _{j}}}}$arctg(${\\displaystyle {\\tfrac {2}{\\pi }})}$, to a ratio of peak values of cos(\u03c9jt0) = - 1.48 dB and to a limit slew-rate of Aj(${\\displaystyle {\\tfrac {2\\omega _{j}}{\\pi }}}$)cos(\u03c9jt0).\n\nThe following four figures describe the conditions and the waveforms in the same CDR.\n\nEach figure corresponds to a region where the CDR behaviour is different.\n\nAll figures correspond to conditions where the CDR tolerance is important. At lower frequencies the tolerance is not important because the CDR tolerates larger jitter amplitudes than required by the network operation and in excess of what specified for its performances. For instance, in telecom networks the essential requirement at low jitter frequencies is to tolerate sinusoidal jitter that grows with 1\/f towards low frequencies.[14]\n\nThe PLLs of type 1 do comply as they follow a 1\/f low frequency asymptote. In this case of type 2 PLLs the asymptote has an additional margin because it follows a 1\/f2 slope.\nA 2nd order, type 2 CDR, with bang-bang PFD, at its tolerance limit, in the region between the two slew-rate regions.\nTo the left (smaller frequencies) it is the slew rate of the VCO that makes the tolerance border. 
To the right, the limit is the slew rate imposed by the high-frequency attenuation of the loop filter. The CDR operates in an almost linear mode, with almost sinusoidal signal waveforms. Around this frequency the tolerance curve falls at −40 dB/dec.

The same CDR operating at another point on its tolerance border: the jitter frequency coincides with the frequency of the zero of the CDR loop filter. The CDR operates in a condition intermediate between the linear mode and the slew-rate limit set by the limited high-frequency gain of the loop filter.

The same CDR operating at yet another point on its tolerance border: the jitter frequency is centred inside the region of the slew-rate limit set by the limited high-frequency gain of the loop filter. At this frequency the tolerance curve falls at −20 dB/dec.

The same CDR operating at a fourth point on its tolerance border: the jitter frequency is so high that the CDR correctly tracks just the average phase of the input, but very poorly the input sinusoid. The output phase is clearly slew-rate limited, and jitters with a low-amplitude triangular waveform. At this frequency the tolerance curve has become almost flat.

## Notes and External References

1. Richard C. Walker (2003). "Designing Bang-Bang PLLs for Clock and Data Recovery in Serial Data Transmission Systems", pp. 34–45, a chapter in *Phase-Locking in High-Performance Systems – From Devices to Architectures*, ed. Behzad Razavi, IEEE Press, 2003, ISBN 0-471-44727-7. C. Response to Phase Step.
2. Walker (2003), A. Run-length and Latency, p. 10.
3. Walker (2003), A. Stability Factor, p. 4.
4. Walker (2003), p. 3.
5. The roots of $\tfrac{1}{2} G \omega_z (2\tau_z/\xi)^2 + G\,(2\tau_z/\xi) - LEO = 0$ are $1/\xi = \tfrac{-1 \pm \sqrt{1 + 2\,LEO/(G\tau_z)}}{2}$.
6. Walker (2003), VII. C. VCO Tuning Bandwidth, p. 10.
7. Jri Lee, Kenneth S. Kundert, Behzad Razavi (September 2004). "Analysis and Modeling of Bang-Bang Clock and Data Recovery Circuits", *IEEE Journal of Solid-State Circuits*, vol. 39, no. 9, pp. 1571–1579, III. Jitter Analysis, A. Jitter Transfer. Retrieved 2015-01-25.
8. ITU-T G.8251, *The control of jitter and wander within the optical transport network (OTN)* (09/2010), A.7 Jitter transfer: "The jitter transfer function of a 3R regenerator shall be under the curve given in Figure A.7-1 when input sinusoidal jitter up to the masks of Figures ..., is applied."
9. Lee, Kundert, Razavi (2004), Jitter Analysis, A. Jitter Transfer: $\omega_{-3\,\mathrm{dB}} = \tfrac{\pi K_{VCO} I_p R_p}{2\phi_{in,p}j}$ (12).
10. ITU-T Rec. G.8251 (2010-09), *The control of jitter and wander within the optical transport network (OTN)*, A.7 Jitter transfer.
11. Lee, Kundert, Razavi (2004), II. Bang-Bang PD Model, A. Effect of Metastability, Fig. 3(b), simulated characteristic at transistor level. Retrieved 2015-01-25.
12. Lee, Kundert, Razavi (2004), II. Bang-Bang PD Model, A. Effect of Metastability, Fig. 3(b). Retrieved 2015-01-25.
13. ITU-T G.8251 (09/2010), *The control of jitter and wander within the optical transport network (OTN)*, 6. Jitter and wander tolerance of network interfaces, and its Amendment 1 (04/2011).
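The −20 dB/dec region of the tolerance curve described above follows directly from the slew-rate limit: a phase output that can slew at most S (UI/s) can track a sinusoid A·sin(2πft) only while its peak slope A·2πf stays below S. A minimal numerical sketch (the slew rate and frequencies below are illustrative assumptions, not values from the referenced papers):

```python
import math

def slew_limited_tolerance(f_hz: float, slew_ui_per_s: float) -> float:
    """Largest sinusoidal jitter amplitude (in UI) trackable at frequency
    f_hz when the recovered phase can slew at most slew_ui_per_s:
    the peak slope of A*sin(2*pi*f*t) is A*2*pi*f, so A = S / (2*pi*f)."""
    return slew_ui_per_s / (2.0 * math.pi * f_hz)

# Illustrative slew rate: 0.1 UI per microsecond = 1e5 UI/s.
S = 1e5
for f in (1e5, 1e6, 1e7):
    print(f"{f:>10.0f} Hz -> {slew_limited_tolerance(f, S):.4g} UI")
```

Each decade of jitter frequency reduces the trackable amplitude by a factor of 10, which is exactly the −20 dB/dec slope of the tolerance border in this region.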
A Quick Brief Of Istanbul's History
If you're visiting Istanbul for the first time or the tenth, you're certainly aware that the city never ceases to amaze its tourists. There are plenty of historical landmarks, cultural attractions, and museums to visit in this cosmopolitan metropolis. Istanbul's magic derives not only from its historical sites, but also from its inhabitants and living culture. When you visit Istanbul, you'll want to get to the center of the city's culture as quickly as possible. As a result, we recommend that you book an Istanbul Tourist Pass for your journey to Istanbul. Yet, before your Istanbul trip, let's take a look at the magnificent history of this city!
The first settlers of Istanbul lived on the Asian side of the city, in settlements dating back to the second millennium BC. The city takes its first name from the Megarian king Byzas, who brought his colonists here in the 7th century BC to found Byzantium, the Greek name for the city on the Bosphorus. Byzas selected this location after consulting the oracle at Delphi, who advised him to settle across from the "land of the blind." Indeed, Byzas concluded that earlier explorers must have been "blind" to have missed this magnificent spot at the mouth of the Bosphorus strait, the Black Sea's only entry point.
The city was controlled by the Persians in the 6th century BC and, after Alexander the Great took it over in the 4th century BC, remained prosperous into the 2nd century BC. The Roman emperor Septimius Severus conquered the city in 193 AD, and it stayed under Roman rule until the 4th century AD, when emperor Constantine the Great made Byzantium the capital of the entire Roman Empire and renamed it Constantinople. After the 5th century, the Eastern Roman Empire became known as the Byzantine Empire. Like Rome, the city was founded on seven hills.
Early Byzantine emperors, particularly between the 4th and 6th centuries, when the city's population surpassed half a million, filled their city with the riches of the ancient world. The city was devastated by riots in 532, during the reign of Justinian I. However, it was restored, and notable buildings such as Hagia Sophia still remain as monuments to the Byzantine Empire's golden age. Istanbul's later history is full of intrigues and sieges; it was besieged by Arabs in the 7th and 8th centuries and by barbarians in the 9th and 10th centuries, and it was ruled by the Fourth Crusaders from 1204 to 1261, who sacked and ruined much of its wealth. Constantinople never recovered its former wealth or power after that.
The Fall of Constantinople
In 1453, the Ottoman Turks, led by Sultan Mehmet II, conquered Constantinople. The city was renamed Islambol and became the Ottoman Empire's capital. Sultans constructed many mosques and public buildings between the 15th and 16th centuries, bringing Istanbul's population back up to about half a million by the mid-1500s, and Istanbul remained a significant cultural, political, and economic centre. Over the centuries, the name "Istanbul" has been traced both to "Islambol" ("full of Islam" in Turkish) and to "eis tin Polin" ("to the City" in Greek). Important landmarks such as Topkapi Palace were built under Ottoman rule in Istanbul, while Hagia Sophia was converted into an imperial mosque.
Ottoman rule continued until World War I, when Allied forces occupied Istanbul. The Republic of Turkey was established in 1923, after years of struggle led by Ataturk against the occupying powers, and the capital was relocated to Ankara. Istanbul, meanwhile, has grown rapidly, with a population of over 13 million people and rising, and it remains Turkey's economic and intellectual epicenter.
Covid-Safe Istanbul Trip with Istanbul Tourist Pass!
We, like the museums, take precautions very seriously. Istanbul is a low-risk travel destination compared with many other countries, and its travel operators follow strict safety measures. Social distance is maintained during museum visits, and masks are required at all times. The number of guests admitted at any given time is limited. In addition, since the Istanbul Tourist Pass is fully digital and contactless, there is less chance of transmission when registering or when visiting Istanbul's museums and palaces, such as Hagia Sophia and Topkapi Palace.
Few peacemakers in Israel's Knesset
By Neve Gordon - posted Wednesday, 18 February 2009
Israeli voters have elected a majority of lawmakers who are against the two-state solution. Now it's up to the world - and the Obama administration - to respond. The Nation, February 11, 2009.
Israelis have had their say at the polls, and now it is up to the world, and particularly the Obama administration, to respond.
Thirty-three parties ran for the Knesset (the Israeli parliament), ranging from the well-known Kadima, Likud and Labor to a variety of lesser known parties that ran on an array of platforms from the rights of the disabled to legalising cannabis. However, only 12 parties managed to garner enough votes to secure seats in the Knesset.
The incoming Knesset will have a solid right-wing bloc, made up of Likud with 27 seats, Yisrael Beiteinu with 15 seats, two ultra-Orthodox parties with 16 seats and two smaller nationalist parties with seven seats. This bloc has four more than the 61-seat threshold needed to form a coalition.
The centre bloc was able to muster 41 seats. This bloc consists of Kadima with 28 seats and Labor with 13 seats. The remaining 14 seats were won by liberal, leftist and Arab national parties.
The results clearly testify to the fact that a large majority of the elected politicians are against an Israeli-Palestinian peace agreement based on the two-state solution. Moreover, some parties have blatant neo-fascist tendencies. Yisrael Beiteinu, for example, ran under the banner of "no citizenship without loyalty", and would like to strip any person who is critical of Israeli policies towards the Palestinians of their citizenship. People like me.
While the devastating effects of these elections on internal Israeli politics may not concern the international community, their repercussions for Israel's relations with its neighbours - not least the Palestinians - should certainly concern world leaders and specifically President Barack Obama, who has already declared that Middle East stability and peace are vital to US interests.
Obama's political vision has engendered hope not only in the United States, but around the world. My expectation is that he will make good on his promise for change and introduce a courageous initiative that will finally bring peace to Israelis and Palestinians. He has both an opportunity and a responsibility to do so.
The opportunity has arisen as a result of more than 18 years of political negotiations on the two-state solution (from the Madrid Conference in 1991, through Oslo, Camp David, Taba, and Annapolis) as well as the publication of promising initiatives (from the Geneva Initiative and the Arab Peace Initiative to the Nusseibeh and Ayalon Plan), which have clarified exactly what needs to be done to reach a peace settlement between the warring sides.
The two-state solution entails three central components:
Israel's full withdrawal to the 1967 border, with possible one-for-one land swaps so that ultimately the total amount of land that was occupied is returned.
Jerusalem's division according to the 1967 borders with certain land swaps to guarantee that each side has control over its own religious sites and large neighbourhoods. These two components entail the dismantling of Israeli settlements and the return of the Jewish settlers to Israel.
The acknowledgment of the right of return of all Palestinians but with the following stipulation: while all Palestinians who so desire will be able to return to the fledgling Palestinian state, only a limited number agreed upon by the two sides will be allowed to return to Israel; those who cannot exercise this right or, alternatively, choose not to, will receive full compensation.
Obama's responsibility arises from the fact that the only way to advance US regional interests and to provide real security for the two peoples is by having Israelis and Palestinians sign a comprehensive agreement of this kind. Taking into account the results of the current Israeli elections, Obama will have to neutralise the rejectionists in order to resolve this bloody conflict once and for all.
Neve Gordon is the co-author (with Nicola Perugini) of the newly released The Human Right to Dominate.
Q: Why doesn't pandoc convert a plaintext file to PDF properly? Commands tried:
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=pdflatex 1.txt -o 1.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=lualatex 1.txt -o 2.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=xelatex 1.txt -o 3.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=latexmk 1.txt -o 4.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=tectonic 1.txt -o 5.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=wkhtmltopdf 1.txt -o 6.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=weasyprint 1.txt -o 7.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=prince 1.txt -o 8.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=context 1.txt -o 9.pdf
pandoc -V 'fontfamily:Courier' --variable mainfont="Courier" --pdf-engine=pdfroff 1.txt -o 10.pdf
Contents of 1.txt:
--------------------------------------------------------------------------------
Left Right
--------------------------------------------------------------------------------
Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum 1
whatever. Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever. 2
Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum 3
whatever. Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever. 4
Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever. 5
--------------------------------------------------------------------------------
Results:
Out of all those allegedly supported "engines", only the first and third produce a PDF at all (the others just dump a bunch of nonsensical errors). And the two that do produce PDFs produce horribly butchered ones:
*"pdflatex" (the first command) entirely ignores the specified font, so it's completely useless.
*"xelatex" (the third command) seems to mostly use the right font, but it apparently deletes all the spaces between "Left" and "Right", morphs the "-"s into straight lines (that's not how that font looks...), misaligns the lines so that the numbers in the last column are no longer right-aligned, and crams the entire content into the middle of the page instead of, as expected, near the top-left corner:
screenshot of the xelatex-produced PDF
I have spent an enormous amount of time hunting for options and trying countless variations of the above commands, but it seems like this tool is fundamentally broken. I have no idea how others (apparently) use these tools, but for me they just don't work. It seems impossible to convert a text file to PDF...
A: Pandoc is not broken; it is doing just what its documentation says it will do. Pandoc treats your input file as Markdown with pandoc extensions (since you didn't specify a format). What you have here is a one-column simple table (since there is no break in the line of ----s to indicate a column break).
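For comparison, pandoc's simple-table syntax infers column boundaries from gaps in the dashed line under the header, so a genuinely two-column version of the input would need to look something like this (illustrative):

```
Left                                                                      Right
----------------------------------------------------------------------   -----
Lorem ipsum whatever. Lorem ipsum whatever. Lorem ipsum whatever.             1
```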
If what you want is a rendering of this content as verbatim text in a PDF, you could use e.g. enscript 1.txt --output=- | ps2pdf - > 1.pdf. If you want to do it using pandoc, then the easiest way is to put the content inside backtick fences so that it is treated as a markdown verbatim block. One way to do this would be to modify your file, but you could also do it by creating a file ticks.txt containing just
```
and then run
pandoc ticks.txt 1.txt ticks.txt -o 1.pdf
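The same fencing trick can also be done programmatically instead of keeping a ticks.txt file around. A small sketch (the helper name is mine, not part of pandoc):

```python
def fence_verbatim(text: str) -> str:
    """Wrap `text` in backtick fences so pandoc's Markdown reader treats it
    as a verbatim code block instead of parsing it as a simple table."""
    fence = "```"
    # Use a longer fence if the text itself contains a backtick run.
    while fence in text:
        fence += "`"
    return f"{fence}\n{text.rstrip()}\n{fence}\n"

# Example: wrap 1.txt and write the result for pandoc to consume.
# with open("1.txt") as f:
#     wrapped = fence_verbatim(f.read())
# with open("1_wrapped.md", "w") as f:
#     f.write(wrapped)
```

Passing the single wrapped file to pandoc is then equivalent to the three-file invocation above.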
Emerging Subjects Blog
Emerging Subjects of the New Economy: Tracing Economic Growth in Mongolia
Emerging Subjects Blog »
New Forms of Political Protests in Ulaanbaatar: Calling for Justice through Shoes
By uczipm0, on 28 April 2016
This post was written by Sh. Tuya, a researcher on the Emerging Subjects project.
On April 7th, Chinggis Khaan Square in front of the government palace was covered in shoes of all varieties. Strikingly, the protest was a quiet space devoid of the protesters themselves. It was part of a series of campaigns organized by the Mongolian Youth Association (hereafter MYA), titled Mongolia without Thieves: Black List, White List.
Shoe Protest in front of the government building on Chinggis Square. 6 April 2016.
Black List, White List
The black list was compiled by the MYA and includes the names of public figures who should not be permitted to run in rural constituencies for the parliamentary elections and the Citizens' Representative Khural.[1] The release of the list, as announced by the president of the MYA, was scheduled for October 2nd, 2015, the day that S. Zorig, the leader of Mongolia's democratic movement, was assassinated.[2] However, the release was postponed until closer to the elections in spring. The campaign was criticized by another youth association, the Union of Mongolian Students (hereafter UMS), for politicizing the MYA and implicating it in political party propaganda.[3] The statement was released after the end of a brief coalition between the MYA and UMS in spring. According to the press release, the UMS was abandoning the coalition because of the list and because the MYA's new board members have a political background. The statement further emphasized that non-government organizations such as the MYA do not have a legitimate right to accuse individuals without proper legal procedure, and that only the justice system has the right to conduct trials and issue pardons.[4]
Why Shoes and Not Faces?
The shoe protest in Ulaanbaatar first struck me as similar to the shoe protest in Paris last fall, where 10,000 shoes stood in for a large march that was cancelled during the Paris Climate Talks following the attacks on the city. However, this protest, unlike its Paris counterpart, was not seeking to address a global environmental issue, and there was no ban on authorized demonstrations in Ulaanbaatar.
The shoe protest took place for one day in a controlled space on the far west side of the square. All the shoes were placed in space fenced off by the police. At the center of the fenced-in space stood a giant shoe with a display covered in black and white reading: "White List: We Will Elect, Let's Liberate!" When I entered the fence to snap photographs of the shoes, I noticed small tablets attached to each pair with individual pleas. The tablets read a range of frustrations covering many aspects of society, including political, economic, culture and even the sex industry.
For instance, one pair of shoes complained about the devaluation of meritorious titles that have existed since the socialist era, such as STA "Soyolyn Terguunii Ajiltan" (Leading Cultural Worker). A pair of worn out boots proposed to legalize sex workers and allocate a street for them. The messages on tablets were very broad and subjective, which did not sit well with just the political agenda of the upcoming elections.
The overall site was modest in size, with a few curious people approaching the table where pamphlets were given out. I arrived there at 11 in the morning during a workday and most of the small crowd of people consisted of elderly people, students who worked at the booth, and several journalists snapping photos. If a young population of workers and students wanted to show their frustration, this was indeed a convenient method of protest with no missed office hours or classes. Even though the title, White List/Black List, seemed threatening, the protest itself did not feel this way.
"Leading Cultural Workers (i.e. artists) have become worthless!!!!!"
Sign calling for the dedication of a "pink street" for sex workers.
Hashtag #хөлфи (#shoelfie): Active social media discussions versus absent physical protest
The media coverage of the protest was quite high both in official outlets, as well as on social networking websites, especially Twitter. A Twitter hashtag, #хөлфи –meaning shoelfie, a reference to shoe selfies – went viral.
Both the form of the protest and its performative aspect point to specific aesthetics that are emerging as a civil-society critique of the failures of the state. The faceless protest was novel in its performative approach, which resonated with an internet culture in which people are expected to perform a certain role on a daily basis. The political message of the protest drew on the appeal of the moral aesthetics of politics: the range of individual messages, united under the black-and-white banner, questioned what is good and beautiful as opposed to bad and repulsive. Perhaps that is why the messages even included frustrations over clandestine topics rarely discussed in public life, such as the legalization of sex work, the abolition of devalued artist titles, and the lack of cash among students.
Furthermore, given that the public is becoming increasingly aware of censorship and surveillance, a 'faceless' protest of shoes may seem to be a less risky protest strategy. Since there were no faces or names associated with the shoes, the identities of the shoes' owners is unknown. This is not the first time such a 'faceless' protest has been carried out in Mongolia. In 2014, people protesting uranium mining and the disposal of radioactive waste in Mongolia donned masks similar to those used by the Anonymous movement during a protest in Chinggis Square.
Vita Peacock, an anthropologist at UCL who studies forms of street protest such as the Anonymous movement, observes that, "the point of being present then, is not to do one thing but to question another." [5] The performative aspect of anonymity points to the theatrical concept of the double: the functioning of the body, an emptied vessel without an identifiable subject, and what it represents. Such anonymity allows for direct non-violent action against established hierarchy, according to the anarchist anthropologist David Graeber.[6] The performance guarantees the anonymity of identities that would otherwise directly threaten the fragile consensus between the state and society, while the physical presence of empty forms of materiality – such as shoes – still challenges the stable understanding of society.
Thank you to my research partner, Lauren Bonilla, for contributing to this post.
All photos © of Sh. Tuya.
[1] The Citizens Representatives Khural is an elected, quasi-state institution that is meant to support local, regional, and urban governance ( http://www.khural.mn/en-us/n/8xyy).
[2] http://khulgaichguimongol.myf.mn/
[3] http://www.bolod.mn/News/146486.html
[5] See Vita Peacock on Anonymous Movements in London: http://allegralaboratory.net/million-mask-marching-performance/
[6] David Graeber, 2004, "Fragments of an Anarchist Anthropology", p. 94. Prickly Paradigm Press.
Filed under Environmental and Nationalist Movements, Ethnographic Studies
Tags: elections, politics, protest, resistance, social movement
## Thursday, September 10, 2015

### Feature Extraction by CNN and Classification by SVM

#### Introduction

In the previous page, I performed scene recognition using the Convolutional Neural Network (CNN) that the library Caffe provides. In this page, the same CNN is used to extract feature vectors from the same dataset, and the classification is accomplished by means of the Support Vector Machine (SVM) in the library LIBLINEAR. The recognition accuracy reaches about 95%.

#### Feature Extraction

The following code is used to extract feature vectors from the CNN and to convert them into the input format that LIBLINEAR requires. The format of training and testing data files for LIBLINEAR is the same as that for LIBSVM: each line contains a label and a feature vector. I extracted feature vectors from the layer "fc7"; their dimension is 4096. The contents of the file "total_list_15_in_local_machine.txt" described in the above code are as follows: each line consists of a file path, a label, and a phase ("train"/"valid"/"test"). In this work, the two phases "train" and "valid" are merged into one phase, "train." After executing the above Python code, I got two files, "libsvm_train_inputs.txt" and "libsvm_test_inputs.txt", which are the input files for LIBLINEAR. The number of training images is 7560 and the number of testing images is 1220.

#### Execution of SVM

The following command is run to train an SVM. Then, this command is run to predict the categories. The recognition accuracy is about as high as that of the CNN.

## Sunday, September 6, 2015

### Scene Recognition by Caffe

#### Introduction

In this page, I perform scene recognition by means of the library Caffe. It is shown that with the pre-training model that Caffe provides, fine-tuned on scene images, the recognition accuracy reaches about 95%.

#### Computation Environment

I used the g2.2xlarge instance in Amazon EC2, which is equipped with a GPU.

#### Dataset

I trained the CNN using the dataset LSP15 from this page. The dataset consists of the following 15 directories:

1. MITcoast
2. MITforest
3. MIThighway
4. MITinsidecity
5. MITmountain
6. MITopencountry
7. MITstreet
8. MITtallbuilding
9. bedroom
10. CALsuburb
11. industrial
12. kitchen
13. livingroom
14. PARoffice
15. store

The name of each directory represents the category of the scene. Each directory contains about 200 to 300 images belonging to its category.

#### Data Augmentation

In order to augment the dataset, I added mirror images to it. The images are then split into two groups, "train" and "test." The size of each image is 256 $\times$ 256, and the number of channels is 3. The number of images in each category is as follows:

| label | name | number of train | number of test |
|------:|:-----|----------------:|---------------:|
| 0 | MITcoast | 610 | 100 |
| 1 | MIThighway | 440 | 70 |
| 2 | MITmountain | 630 | 100 |
| 3 | MITstreet | 490 | 80 |
| 4 | MITforest | 550 | 90 |
| 5 | MITinsidecity | 520 | 80 |
| 6 | MITopencountry | 690 | 110 |
| 7 | MITtallbuilding | 600 | 100 |
| 8 | bedroom | 360 | 60 |
| 9 | CALsuburb | 400 | 60 |
| 10 | industrial | 520 | 80 |
| 11 | kitchen | 360 | 60 |
| 12 | livingroom | 490 | 80 |
| 13 | PARoffice | 360 | 60 |
| 14 | store | 540 | 90 |
|  | total | 7560 | 1220 |

#### Dataset for Caffe

Caffe requires the following directories and files:

1. a directory which contains the training images
2. a directory which contains the test images
3. a text file in which the names and labels of the training images are described
4. a text file in which the names and labels of the test images are described

In my environment, they are put in the following paths:

1. /home/ubuntu/data/caffe_256_15/train/
2. /home/ubuntu/data/caffe_256_15/test/
3. /home/ubuntu/data/caffe_256_15/train.txt
4. /home/ubuntu/data/caffe_256_15/test.txt

The contents of the file "test.txt" are as follows:

MITstreet_image_0179_flipped.jpg 3
MITtallbuilding_image_0173_flipped.jpg 7
MITcoast_image_0126.jpg 0
store_image_0158_flipped.jpg 14
MITinsidecity_image_0102_flipped.jpg 5
MITforest_image_0200_flipped.jpg 4
industrial_image_0189_flipped.jpg 10
MITcoast_image_0142.jpg 0
kitchen_image_0019_flipped.jpg 11
bedroom_image_0210_flipped.jpg 8
bedroom_image_0116_flipped.jpg 8
livingroom_image_0008_flipped.jpg 12
kitchen_image_0051_flipped.jpg 11
MITstreet_image_0167_flipped.jpg 3
MITcoast_image_0315.jpg 0
....

The contents of the file "train.txt" are as follows:

industrial_image_0190.jpg 10
CALsuburb_image_0103_flipped.jpg 9
bedroom_image_0022_flipped.jpg 8
MITopencountry_image_0222.jpg 6
MITstreet_image_0040.jpg 3
MIThighway_image_0053_flipped.jpg 1
livingroom_image_0063_flipped.jpg 12
store_image_0106_flipped.jpg 14
industrial_image_0144.jpg 10
kitchen_image_0085_flipped.jpg 11
bedroom_image_0040.jpg 8
MIThighway_image_0088_flipped.jpg 1
industrial_image_0264.jpg 10
bedroom_image_0117_flipped.jpg 8
MITcoast_image_0021_flipped.jpg 0
...

After storing the images specified in "test.txt" and "train.txt" in the directories "test" and "train" respectively, this script is run to create the dataset for Caffe. "test_leveldb" and "train_leveldb", which are the inputs for Caffe, are output as shown below.

#### Definition of CNN

I defined the structure of the CNN in the file "model/scene_recognition/train_val.prototxt". The file is based on "/home/ubuntu/buildspace/caffe-master/models/bvlc_reference_caffenet/train_val.prototxt"; the differences between the original file and my own are in the layers "data" and "fc8." I replaced the layer "fc8" with a new layer, "scene_fc8." Moreover, in accordance with the explanation in this page, the parameters in the layer "scene_fc8" were modified as shown above.

#### Definition of Solver

Based on the file "models/bvlc_reference_caffenet/solver.prototxt", the text file used for training the CNN is defined as follows. Its path is "model/scene_recognition/solver.prototxt".

#### Training

This script is run to train the CNN. The pre-training model that Caffe provides, "models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel", is passed as the argument of the command option "-weights." The script fine-tunes the pre-training model using the current dataset.

#### Result

The x-axis indicates the iteration number and the y-axis the recognition accuracy. Because the total iteration number is 80,000 and the solver is configured to output the accuracy once per 500 iterations, the maximum value of the x-axis is 160 (= 80,000/500). The recognition accuracy reaches about 95%.

#### Construction of Classifier

After the training, the file "scene_train_iter_80000.caffemodel" is created; it contains the information of the fine-tuned CNN. In order to construct the classifier from the model file, a file named "deploy.prototxt" is needed. It is made from "model/scene_recognition/train_val.prototxt" according to the following procedure:

1. Remove the layer "data" and add the four lines shown below.
2. Remove the layers "loss" and "accuracy" and add this layer.

The four lines with which the layer "data" is replaced mean:

1. input_dim: 20 --- batch size
2. input_dim: 3 --- channel number
3. input_dim: 227 --- width of an image
4. input_dim: 227 --- height of an image

The code to classify an image is implemented as follows; it is named "classifier.py." Now I can classify the images.

## Wednesday, September 2, 2015

### Scene Recognition by Caffe (8-Class Problem), Part 2

#### Introduction

On the previous page I attempted scene recognition (an 8-class problem) using Caffe. This time, I consider the same problem using the pre-training model that Caffe provides.

#### Computation Environment

I used Amazon EC2; the instance type is g2.2xlarge, a machine equipped with a GPU.

#### Dataset

The dataset is the same as last time. However, to match the pre-training model, the image size was changed to 256 $\times$ 256 and the number of channels to 3.

#### Creating the Dataset for Caffe

This, too, is the same as last time. The input data for Caffe is created with the following command. Last time grayscale images were used, so the command option -gray was given; this time the images have 3 channels, so -gray is omitted.

#### Network Design

Using the file train_val.prototxt in the directory containing the Caffe source code, /home/ubuntu/buildspace/caffe-master/models/bvlc_reference_caffenet, as a template, I defined the following network structure (model/scene_recognition/train_val.prototxt). There are two changes: the data layer and the fc8 layer (the fc8 layer was replaced with a scene_fc8 layer). In addition, following the explanation in this page, the parameter lr_mult of the scene_fc8 layer was changed.

#### Creating the Solver

The text file for training, model/scene_recognition/solver.prototxt, was written as follows, based on the sample file that Caffe provides (models/bvlc_reference_caffenet/solver.prototxt). The current number of test images is 730; with a batch size of 10, one pass over all the images takes 73 (= test_iter) iterations. The test is run once every 500 (= test_interval) training iterations.

#### Training

Run the following. The file models/bvlc_reference_caffenet/bvlc_reference_caffenet.caffemodel, given as the argument of the command option -weights, is the pre-training model that Caffe provides. With the above command, the pre-training model, built in advance from large-scale data, can be fine-tuned with my own data.

The results are shown below.

test images:
train images:

#### Construction of the Classifier

The computation so far produces the file scene_train_iter_40000.caffemodel, which contains the network structure with the parameters fixed by fine-tuning. To build a classifier from this model file, a deploy.prototxt file like the following is needed. It is made from the previously defined model/scene_recognition/train_val.prototxt by removing the data layer and adding the input_dim lines, and by removing the loss and accuracy layers at the end. The meanings of the input_dim lines inserted in place of the data layer are:

1. input_dim: 10 --- batch size
2. input_dim: 3 --- number of channels of the input image
3. input_dim: 227 --- width of the input image
4. input_dim: 227 --- height of the input image
input_dim: 227 --- \u5165\u529b\u753b\u50cf\u306e\u9ad8\u3055","date":"2020-04-09 06:01:52","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5303027629852295, \"perplexity\": 7228.497128865087}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-16\/segments\/1585371830894.88\/warc\/CC-MAIN-20200409055849-20200409090349-00509.warc.gz\"}"}
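The solver bookkeeping quoted in the posts (730 test images scanned in 73 batches of 10, and 160 accuracy samples from 80,000 iterations tested once every 500) can be checked with a short script. The helper below is a hypothetical sketch, not part of Caffe; it only mirrors the numbers stated in the text.

```python
# Sketch of the Caffe-style solver arithmetic described in the posts.
# solver_plan is a hypothetical helper, not a Caffe API.

def solver_plan(num_test_images, test_batch_size, max_iter, test_interval):
    """Return (test_iter, accuracy_samples) for a Caffe-style solver."""
    # test_iter: number of test batches needed to scan the whole test set once
    test_iter = num_test_images // test_batch_size
    # the solver reports accuracy once every test_interval training iterations
    accuracy_samples = max_iter // test_interval
    return test_iter, accuracy_samples

# Values from the first post: 730 test images, batch size 10,
# 80,000 iterations, a test pass every 500 iterations.
print(solver_plan(730, 10, 80_000, 500))  # -> (73, 160)
```

With the second post's 40,000 total iterations, the same helper gives 80 accuracy samples instead of 160.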