Q: My query is not finding results. I'm trying this query but I'm not able to get any results, and I can't find the error!
Here is my table structure:
id   norm (mediumtext)   bohrung (int)   breite (int)
2    DIN 5462            26              6
3    DIN 5462            28              7
4    DIN 5462            32              6
5    DIN 5462            36              7
6    DIN 5462            42              8
7    DIN 5462            46              9
Here is my PHP code with the SQL query:
<?php
if (isset($_POST['bohrung'])) {
    $bohrung = $_POST['bohrung'];
    $result = mysqli_query($con, "SELECT * FROM keilnaben WHERE norm = {bohrung}");
    if ($result && mysqli_num_rows($result) > 0) {
        echo '<table class="table" border="2">
            <tr>
                <th>norm</th>
                <th>norm</th>
                <th>norm</th>
            </tr>';
        while ($row = mysqli_fetch_array($result)) {
            echo "<tr>
                <td>" . $row['norm'] . "</td>
                <td>" . $row['bohrung'] . "</td>
                <td>" . $row['breite'] . "</td>
            </tr>";
        }
        echo "</table>";
    }
}
The problem is that when I enter, for example, DIN5462 in the text box, the query does not return anything, but if I try the same for bohrung or breite, it does return results. I don't know why.
A: The problem is this line:
SELECT * FROM keilnaben WHERE norm = {bohrung}
                                     ^^^^^^^^^
// {bohrung} is sent as literal text -- without a leading $, PHP does not interpolate the variable
Change it to this and at least escape your input:
$bohrung = $con->real_escape_string($_POST['bohrung']);
$result = mysqli_query($con,"SELECT * FROM keilnaben WHERE norm = '$bohrung' ");
Or, better, use a prepared statement:
if (isset($_POST['bohrung'])) {
    $input = $_POST['bohrung'];
    $select = $con->prepare('SELECT * FROM keilnaben WHERE norm = ?');
    $select->bind_param('s', $input);
    $select->execute();
    $select->store_result(); // required before num_rows is populated
    if ($select->num_rows > 0) {
        echo '<table class="table" border="2">
            <tr>
                <th>norm</th>
                <th>bohrung</th>
                <th>breite</th>
            </tr>';
        // SELECT * returns four columns (id, norm, bohrung, breite),
        // so bind_result needs four variables
        $select->bind_result($id, $norm, $bohrung, $breite);
        while ($select->fetch()) {
            echo "<tr>
                <td>" . $norm . "</td>
                <td>" . $bohrung . "</td>
                <td>" . $breite . "</td>
            </tr>";
        }
        echo "</table>";
        $select->close();
    }
}
\section{Introduction}
Most anatomical structures are orientable, closed surfaces.
Examples include the hippocampus, trabecular bone, and liver.
Since these surfaces are orientable and closed, they permit an embedding, enclosing a defined volume.
This paper is concerned with measuring morphological properties of such surfaces.
While this work was developed in the context of bone-related research, it is applicable to any object representable as a closed surface.
Methods for measuring the volume, area, and curvatures of surfaces are well described.
When the surface is represented as a triangulated mesh, curvatures can be estimated at each vertex by using geometric information from neighboring vertices \cite{goldfeather2004novel,rusinkiewicz2004estimating,flynn1989reliable}, volume by summing the signed volume of each face~\cite{zhang2001efficient}, and area by summing the area of each triangle~\cite{zhang2001efficient,alyassin1994evaluation}.
Alternatively, the surface can be represented implicitly as the level set of an embedding~\cite{osher1988fronts}.
Doing so permits estimating the curvatures locally based on the embedding gradients, while volume, area, and total curvatures can be computed from volume integrals of the embedding~\cite{sethian1999level,chan2001active}.
There are advantages to using the implicit representation over the parametric mesh representation for morphometric analyses.
The first advantage is that spatial gradients of the embedding are well defined, avoiding the need to smooth or fit the surface \cite{goldfeather2004novel,rusinkiewicz2004estimating,flynn1989reliable}.
Second, morphometry can be measured during curve evolution problems where topology can change without explicit splitting and merging techniques~\cite{osher1988fronts}.
This has been the primary feature that made level set methods popular, used extensively in computational fluid dynamics~\cite{peng1999pde,sussman1994level}, object segmentation~\cite{chan2001active,caselles1993geometric,vese2002multiphase}, and biophysical simulations~\cite{besler2018bone}.
The one disadvantage is that implicit representations can require large amounts of memory to store and process.
However, there is an artifact that occurs during embedding that prevents the application of these methods to study anatomical structures.
More precisely, anatomical structures are typically represented as binary images, which are embedded using the signed distance transform~\cite{danielsson1980euclidean}.
However, due to a quantization error in the distance transform of binary images, gradients in the image are very noisy\sloppy~\cite{besler2020artifacts}.
Thus, measures of local mean and Gaussian curvature are poorly estimated based on these embeddings.
This work is principally concerned with demonstrating the unsuitability of the signed distance transform and providing an alternative.
It summarizes morphometrics for orientable, closed surfaces and provides an embedding method suitable for their computation.
The method is local, meaning that the morphometrics can be evaluated at arbitrary locations along the surface.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{overview}%
\caption{Workflow for local morphometry of closed surfaces demonstrated on a lumbar vertebra. Clipping in the curvature images originates from the finite support of the Gaussian filter.}
\label{fig:overview}
\end{figure}
\begin{table}
\centering
\begin{tabular}{ccc}
\hline
Parameter & Units & Selection \\
\hline
$\epsilon$ & $[-]$ & Equation~\ref{eqn:epsilon} \\
$\sigma$ & [$\si{\milli\metre}$] & $<$ Structure Thickness \\
$t$ & [$\si{\milli\metre}$] & $> \Delta x$ \\
$T$ & $[-]$ & $0.5$ \\
\hline
\end{tabular}
\caption{Summary of the method parameters. Only two parameters are needed: Gaussian blur standard deviation, $\sigma$, and the thickness over which to compute area elements, $t$. $\Delta x$ denotes the voxel resolution.}
\label{tab:parameters}
\end{table}
\section{Morphometry of Closed Surfaces}
An overview of the method is given in Figure~\ref{fig:overview} and a summary of the method parameters is given in Table~\ref{tab:parameters}.
The method relies on the local evaluation of the mean and Gaussian curvature as well as volume integrals to derive global morphometrics.
Since the computation of curvatures is local, they can be visualized across the surface.
A motivating example for this work is given in Figure~\ref{fig:torus} where it is demonstrated that a signed distance transform produces enormous errors in local curvature, whereas the proposed method produces smoother and more realistic results.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{SimpleSurfaces}%
\caption{Local differences between the signed distance transform and proposed method applied to a binary torus (inner radius = $\SI{20}{\milli\metre}$ outer radius = $\SI{40}{\milli\metre}$).}
\label{fig:torus}
\end{figure}
\subsection{Mathematical Preliminaries}
\subsubsection{Differential Geometry}
Define an orientable, closed, two-dimensional surface $C : \mathbb{R}^2 \rightarrow \mathbb{R}^3$.
Being closed and orientable allows the surface to define a volume.
Two principal curvatures, $\kappa_1$ and $\kappa_2$, exist at each point on the surface measuring the least and greatest curvature at that point.
The Gaussian ($K$) and mean ($H$) curvatures are defined as the product and average of the principal curvatures.
\begin{eqnarray}
K &=& \kappa_1 \kappa_2 \\
H &=& \frac{1}{2} (\kappa_1 + \kappa_2)
\end{eqnarray}
Mean curvature is an extrinsic property of the surface, which can be understood intuitively as the divergence of the normals.
The Gaussian curvature is an intrinsic property of the surface, which can be understood intuitively as the amount of shrinking or expanding that occurs walking along the surface.
It is related to the topology of the surface as is elucidated below.
\subsubsection{Topology}
By stretching, bending, and compressing the surface, it is possible to mold a sphere into a femur without cutting or gluing the object.
The study of surfaces related by such continuous deformations (homeomorphisms) is called topology.
An important measure in topology is the Euler--Poincar\'e~characteristic.
If an object is viewed as a graph or mesh, the Euler--Poincar\'e~characteristic $\chi$ can be computed from the vertices $V$, edges $E$, and faces $F$ of the mesh:
\begin{equation}
\chi = V - E + F
\end{equation}
When viewed continuously, the Euler--Poincar\'e~characteristic is related to the genus $g$ (colloquially, the number of holes) of a closed orientable surface.
\begin{equation}
\chi = 2 - 2 g
\end{equation}
Both the Euler--Poincar\'e~characteristic and the genus are topological invariants, meaning they do not change with bending and stretching of the surface, only with cutting or gluing.
By example, to mold a femur into a vertebra, a hole must be created corresponding to the foramen (colloquially, spinal cord hole).
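To make the two formulas concrete, the following is a minimal Python sketch (our own illustration, not part of the original work) computing $\chi$ and $g$ from mesh counts:

```python
def euler_poincare(V, E, F):
    """Euler-Poincare characteristic of a mesh: chi = V - E + F."""
    return V - E + F

def genus(chi):
    """Genus of a closed orientable surface: chi = 2 - 2g  =>  g = (2 - chi) / 2."""
    return (2 - chi) // 2

# A cube's surface mesh has 8 vertices, 12 edges, 6 faces: chi = 2, genus 0 (sphere-like).
chi_cube = euler_poincare(8, 12, 6)
# The minimal (Csaszar) triangulation of a torus has 7 vertices, 21 edges, 14 faces: chi = 0, genus 1.
chi_torus = euler_poincare(7, 21, 14)
```

Creating the foramen in the femur-to-vertebra example above would change $\chi$ from 2 to 0.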
\subsubsection{Gauss-Bonnet Theorem}
Remarkably, the local curvature of a surface can be related to its topology.
More precisely, the Gauss-Bonnet theorem states that the Gaussian curvature integrated over a closed surface equals $2\pi$ times the Euler--Poincar\'e~characteristic.
\begin{equation}
\int_M K dA = 2 \pi \chi
\end{equation}
In this work, the Gauss-Bonnet theorem will be used to measure the Euler--Poincar\'e~topological invariant from local Gaussian curvature.
\subsection{Embedding of Closed Surfaces}
The problem of embedding a closed surface is described. Consider a binary image $I: \Omega \rightarrow \{0,1\}$ to be embedded, where $\Omega \subset \mathbb{Z}^n$ is the discrete domain.
An embedding $\phi$ is sought such that it recovers the underlying binary image:
\begin{equation}
\label{eqn:recovery}
\theta(-\phi) = I
\end{equation}
where $\theta$ is the Heaviside function.
The nomenclature common in statistics, $\theta$, is used for the Heaviside to avoid confusion with mean curvature.
Furthermore, the surface is recoverable as the zero level set of the embedding:
\begin{equation}
\label{eqn:level_set}
C(x) = \{ x \given \phi(x) = 0\}
\end{equation}
The surface can be any level set of the embedding but will be taken as the zero level set in this work.
The problem is under-constrained and does not permit a unique embedding.
As a convention, this work considers the inside of the surface as having a negative embedding.
An embedding is always possible for a closed and orientable surface.
The embedding $\phi$ is a non-parametric representation containing the same information as $C$.
However, it can be much easier to work with $\phi$ computationally than $C$ because of issues of parametrization.
The success of the level set method~\cite{osher1988fronts} is largely due to the ease of working with the embedding while being able to recover the surface at a later time.
\subsubsection{Signed Distance Transform}
The most commonly used embedding is the signed Euclidean distance transform~\cite{rosenfeld1966sequential,danielsson1980euclidean}.
This transform assigns a value to every point $x$ in the image based on its signed distance $d$ from the surface $C$:
\begin{equation}
\label{eqn:sdt}
\phi(x) = \pm d(x, C)
\end{equation}
The embedding is unique given the additional constraint that the magnitude of the gradient of the embedding equals one.
The signed distance transform is a computationally fast method of embedding a binary image~\cite{danielsson1980euclidean}.
However, the distance transform of sampled signals produces a quantized representation of the true signal~\cite{besler2020artifacts}.
As a result, gradients are extremely noisy and independent of image spacing.
Furthermore, reinitialization methods~\cite{peng1999pde,sussman1994level} to overcome this problem converge slowly~\cite{besler2020artifacts} making them impractical for removing quantization errors.
\subsubsection{Proposed Embedding Technique}
A different embedding is proposed in this work based on a Gaussian blur of the binary image.
The image intensities are shifted by a threshold $T \in [0, 1]$ such that the zero crossing corresponds to the binary surface:
\begin{equation}
\label{eqn:embedding}
\phi = T - G_\sigma * I
\end{equation}
where $G_\sigma$ denotes a Gaussian filter of standard deviation $\sigma$ and $*$ denotes the convolution operator.
There are two parameters to this embedding: the threshold and the standard deviation.
The threshold should be selected as $0.5$ to preserve the localization of flat surfaces and the standard deviation should be selected larger than the size of a voxel but not larger than the structure.
The optimal amount of smoothing is application specific.
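For concreteness, Equation~\ref{eqn:embedding} can be sketched in a few lines of NumPy (a minimal illustration of ours; the hand-rolled separable blur could be replaced by \texttt{scipy.ndimage.gaussian\_filter}):

```python
import numpy as np

def gaussian_blur(img, sigma, truncate=4.0):
    # Separable Gaussian convolution in pure NumPy;
    # scipy.ndimage.gaussian_filter(img, sigma) is the usual drop-in replacement.
    radius = int(truncate * sigma + 0.5)
    t = np.arange(-radius, radius + 1)
    kernel = np.exp(-t**2 / (2.0 * sigma**2))
    kernel /= kernel.sum()
    out = img.astype(float)
    for axis in range(img.ndim):
        out = np.apply_along_axis(np.convolve, axis, out, kernel, mode='same')
    return out

def embed(binary, sigma, T=0.5):
    # phi = T - G_sigma * I: negative inside the object, positive outside,
    # with the zero level set at the (smoothed) surface.
    return T - gaussian_blur(binary, sigma)
```

The binary image must already be flattened to $\{0,1\}$ values, as noted below, for the threshold $T = 0.5$ to localize the surface correctly.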
Gaussian blurring a binary image to generate a surface mesh using Marching Cubes is a common task in image processing~\cite{lorensen1987marching}.
Properties of the proposed embedding technique should be made explicit.
First, the proposed method modifies the binary image.
That is, the Heaviside of the embedding does not recover the original binary image exactly.
Areas of concavity shrink and areas of convexity expand (Figure~\ref{fig:gauss}).
Second, the embedding technique does not produce a signed distance image.
If a signed distance signal is needed, reinitialization~\cite{peng1999pde,sussman1994level,kimmel1996sub} can be performed on the embedding.
Finally, the resulting image has intensities in the range $[-0.5,0.5]$.
Within this context, it should be noted that many binary images are stored as the largest value in their dynamic range ($127$ for a signed char, $255$ for an unsigned char) and should be flattened to $\{0,1\}$ before embedding as described above.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{GaussBlur}%
\caption{Gaussian blurring modifies the underlying object like mean curvature flow.}
\label{fig:gauss}
\end{figure}
\subsubsection{Relation to Mean Curvature Smoothing}
While Gaussian blurring smooths the binary image, we would like to know how that translates into surface smoothing.
Prior literature is used to show that Equation~\ref{eqn:embedding} leads to mean curvature smoothing of the surface.
First, consider the heat flow of the image:
\begin{numcases}{\hspace{-20pt}}
I_\tau = \Delta I & $\text{on }\Omega \times (0, \infty)$ \\
I(x,0) = I_0 & $\text{on }\Omega \times 0$
\end{numcases}
where $\tau$ is a time-like parameter, $\Delta = \nabla^2$ is the scalar Laplacian, and $I_0$ is the original binary image.
It is well known that the solution of the heat equation is Gaussian convolution~\cite{witkin1984scale,koenderink1984structure}:
\begin{equation}
I(x,\tau) = G_\tau(x) * I_0(x)
\end{equation}
with $2\tau = \sigma^2$.
The image $I(x,\tau)$ is related to Equation~\ref{eqn:embedding} by shifting the embedding such that the $T$ level set in $I(x,\tau)$ is the zero level set in $\phi$.
Importantly, this combination of heat flow and level set shift corresponds to the BMO (Bence-Merriman-Osher) algorithm in computational physics for simulating mean curvature flow~\cite{merriman1992diffusion}.
BMO simulates mean curvature flow by blurring a binary image using the heat equation and rebinarizing the field with a threshold at $0.5$.
Evans (Theorem 5.1, \cite{evans1993convergence}) proved that if $u$ is the viscosity solution of mean curvature flow and $I(x,\tau)$ the solution of the diffusion equation, the two are equivalent in the limit of small $\tau$.
As Equation~\ref{eqn:embedding} has the effect on the surface of mean curvature flow, general principles can be known about how the technique modifies the surface~\cite{evans1991motion}.
First, mean curvature flow is the gradient descent of the first variation of surface area.
As such, the surface area of the object will be greatly affected by blurring.
Furthermore, areas of high curvature, such as at the tip of the transverse process in lumbar vertebrae, experience more change than flat areas, such as the articulating surface of the vertebral body.
Second, minimal surfaces where mean curvature is zero everywhere will experience no local shift in the surface location.
Finally, the topology of the surface can change corresponding to singularities in mean curvature flow.
\subsection{Morphometry of Embedded Surfaces}
Attention is now placed on measuring geometric and topological parameters from an embedding.
\subsubsection{Measures of Volume and Area}
Methods for measuring the volume and area of an implicit surface have been known for some time~\cite{sethian1999level,chan2001active} and are summarized here.
Briefly, the volume of the surface can be determined by summing up all volume elements inside the surface.
This can be well-defined as the integral of the Heaviside function of the embedding:
\begin{equation}
\text{Volume}(\phi) = \int_\Omega \theta(-\phi) dV
\end{equation}
where $dV$ is the volume of a volume element.
By considering area as the variation of volume, the area can be defined similarly:
\begin{equation}
\label{eqn:area}
\text{Area}(\phi) = \int_\Omega \delta(\phi) \lvert \nabla \phi \rvert dV
\end{equation}
where $\delta$ is the Dirac delta function (the derivative of the Heaviside function, so that $\delta(\phi) \lvert \nabla \phi \rvert = \lvert \nabla \theta(\phi) \rvert$) and $\nabla$ is the gradient operator.
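The two integrals can be sketched numerically using the finite-support approximations of $\theta$ and $\delta$ given in the implementation section (a NumPy illustration of ours, tested here on an analytic signed distance field of a sphere rather than a blurred binary image):

```python
import numpy as np

def heaviside(x, eps):
    """Finite-support sine approximation of the Heaviside function."""
    out = np.where(x > eps, 1.0, 0.0)
    mask = np.abs(x) <= eps
    out[mask] = 0.5 * (1 + x[mask] / eps + np.sin(np.pi * x[mask] / eps) / np.pi)
    return out

def dirac(x, eps):
    """Finite-support cosine approximation of the Dirac delta function."""
    out = np.zeros_like(x)
    mask = np.abs(x) <= eps
    out[mask] = (1 + np.cos(np.pi * x[mask] / eps)) / (2 * eps)
    return out

def volume(phi, eps, h):
    """Volume(phi) = integral of theta(-phi) dV, as a Riemann sum."""
    return heaviside(-phi, eps).sum() * h**3

def area(phi, eps, h):
    """Area(phi) = integral of delta(phi) |grad phi| dV, as a Riemann sum."""
    gx, gy, gz = np.gradient(phi, h)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    return (dirac(phi, eps) * grad_mag).sum() * h**3
```

For a sphere of radius $\SI{10}{\milli\metre}$ on a $\SI{1}{\milli\metre}$ grid, both estimates land within a couple of percent of $\frac{4}{3}\pi R^3$ and $4 \pi R^2$.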
\subsubsection{Measuring Local Curvature}
For an embedding, the Gaussian and mean curvature are typically computed first and then principal curvatures derived~\cite{sethian1999level}.
Mean curvature is computed as one half the divergence of the surface normal:
\begin{eqnarray}
N &=& \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \\
H &=& \frac{1}{2} \nabla \cdot \left( \frac{\nabla \phi}{\lvert \nabla \phi \rvert} \right)
\end{eqnarray}
where $N$ denotes the unit normal vector, $\nabla$ is the gradient operator, $\lvert\cdot\rvert$ is the $\ell^2$ norm, and $H$ is mean curvature.
The one-half factor is not typically used in the literature on level set methods and mean curvature flow~\cite{osher1988fronts,sethian1999level}.
It comes from averaging the principal curvatures, of which there are two on a two-dimensional surface.
This makes the mean curvature computed from the embedding equal to the mean curvature as defined in differential geometry.
In higher dimensions, the factor would be one divided by one less than the dimension of the embedding domain.
Similarly, the Gaussian curvature can be defined from the level set embedding in terms of first and second derivatives of the embedding:
\begin{equation}
K = -\frac{\begin{vmatrix}
\phi_{xx} & \phi_{xy} & \phi_{xz} & \phi_x \\
\phi_{yx} & \phi_{yy} & \phi_{yz} & \phi_y \\
\phi_{zx} & \phi_{zy} & \phi_{zz} & \phi_z \\
\phi_x & \phi_y & \phi_z & 0
\end{vmatrix}}{\lvert \nabla \phi \rvert^4}
\end{equation}
where $\lvert \cdot \rvert$ denotes the matrix determinant and $K$ is the Gaussian curvature.
While not used in this work, the principal curvatures can be computed from the mean and Gaussian curvature.
\begin{equation}
\kappa_1, \kappa_2 = H \pm \sqrt{H^2 - K}
\end{equation}
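The mean curvature formula translates almost directly into NumPy (a sketch of ours using second-order central differences via \texttt{np.gradient}, rather than the fourth-order stencils discussed later):

```python
import numpy as np

def mean_curvature(phi, h=1.0):
    """H = 0.5 * div(grad phi / |grad phi|), via central differences."""
    gx, gy, gz = np.gradient(phi, h)
    mag = np.sqrt(gx**2 + gy**2 + gz**2) + 1e-12  # avoid division by zero off-surface
    nx, ny, nz = gx / mag, gy / mag, gz / mag
    divergence = (np.gradient(nx, h, axis=0)
                  + np.gradient(ny, h, axis=1)
                  + np.gradient(nz, h, axis=2))
    return 0.5 * divergence
```

On an analytic signed distance field of a sphere of radius $R$, the computed $H$ near the surface is close to the expected $1/R$; applied to the signed distance transform of a *binary* sphere, the same code would exhibit the quantization artifacts this paper describes.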
\subsubsection{Computing Total Curvature}
Based on the definition of area, a way of computing integrals along the surface is defined.
Consider some quantity $Q$ to be integrated over a surface.
This integral can be generalized using the definition of area given in Equation~\ref{eqn:area}:
\begin{equation}
\label{eqn:surface_integrals}
\int_M Q dA = \int_\Omega Q \delta(\phi) \lvert \nabla \phi \rvert dV
\end{equation}
The form of this integral allows surface integrals to be performed in general. Thus, the total mean ($\bar{H}$) and Gaussian curvature ($\bar{K}$) can be computed:
\begin{eqnarray}
\bar{H} &=& \int_\Omega H \delta(\phi) \lvert \nabla \phi \rvert dV \\
\bar{K} &=& \int_\Omega K \delta(\phi) \lvert \nabla \phi \rvert dV
\end{eqnarray}
Now that area, volume, total mean curvature, and total Gaussian curvature are defined, other morphometric quantities can be derived.
First, the average mean curvature $\langle H \rangle$ can easily be defined by dividing by the total area.
\begin{equation}
\langle H \rangle = \frac{\bar{H}}{A}
\end{equation}
By the Gauss-Bonnet theorem, the Euler--Poincar\'e~characteristic can also be computed.
\begin{equation}
\chi = \frac{\bar{K}}{2\pi}
\end{equation}
Volume, area, average mean curvature, and the Euler--Poincar\'e~characteristic are the most natural global descriptors of surfaces.
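As a numerical sanity check of the Gauss-Bonnet computation, the sketch below (our own illustration, not the authors' implementation) evaluates $K$ from the determinant formula and recovers $\chi \approx 2$ for an analytically embedded sphere:

```python
import numpy as np

def gaussian_curvature(phi, h=1.0):
    """K = -det(bordered Hessian) / |grad phi|^4, via central differences."""
    gx, gy, gz = np.gradient(phi, h)
    gxx, gxy, gxz = np.gradient(gx, h)
    gyx, gyy, gyz = np.gradient(gy, h)
    gzx, gzy, gzz = np.gradient(gz, h)
    M = np.zeros(phi.shape + (4, 4))
    M[..., 0, :] = np.stack([gxx, gxy, gxz, gx], axis=-1)
    M[..., 1, :] = np.stack([gyx, gyy, gyz, gy], axis=-1)
    M[..., 2, :] = np.stack([gzx, gzy, gzz, gz], axis=-1)
    M[..., 3, :] = np.stack([gx, gy, gz, np.zeros_like(phi)], axis=-1)
    grad4 = (gx**2 + gy**2 + gz**2)**2 + 1e-12
    return -np.linalg.det(M) / grad4

def euler_characteristic(phi, eps, h=1.0):
    """Gauss-Bonnet: chi = (1 / 2 pi) * integral of K delta(phi) |grad phi| dV."""
    gx, gy, gz = np.gradient(phi, h)
    grad_mag = np.sqrt(gx**2 + gy**2 + gz**2)
    delta = np.zeros_like(phi)
    mask = np.abs(phi) <= eps
    delta[mask] = (1 + np.cos(np.pi * phi[mask] / eps)) / (2 * eps)
    K = gaussian_curvature(phi, h)
    return (K * delta * grad_mag).sum() * h**3 / (2 * np.pi)
```

Since the delta function vanishes away from the surface, the poorly conditioned curvature values far from the zero level set do not contaminate the integral.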
\subsection{Implementation Considerations}
\subsubsection{Numerical Approximations}
A numerical approximation is needed for the Heaviside and Dirac delta functions.
Many are available~\cite{chan2001active} but the finite support sine approximation is used in this work:
\begin{eqnarray}
\theta_\epsilon(x) &=& \begin{cases}
\frac{1}{2} \left[1 + \frac{x}{\epsilon} + \frac{1}{\pi} \sin\left(\frac{\pi x}{\epsilon}\right) \right] & \lvert x \rvert \leq \epsilon \\
1 & x > \epsilon \\
0 & x < -\epsilon
\end{cases} \\
\delta_\epsilon(x) &=& \begin{cases}
\frac{1}{2\epsilon} \left[ 1 + \cos\left(\frac{\pi x}{\epsilon}\right)\right] & \lvert x \rvert \leq \epsilon \\
0 & \lvert x \rvert > \epsilon
\end{cases}
\end{eqnarray}
The finite support is advantageous as it is conceptually easy to design with in comparison to infinite support approximations such as the hyperbolic tangent.
Outside the interval $[-\epsilon, \epsilon]$, the response is exactly zero or one.
The regularization parameter $\epsilon$ should be selected larger than one voxel edge and less than the support of the Gaussian filter.
For the distance transform, $\epsilon$ is often selected as some multiple of image resolution.
However, the proposed embedding technique does not permit such a simple method of parameter selection since the embedding no longer encodes physical space.
Instead, if the intent is to average over some physical thickness $t$, $\epsilon$ should be selected as:
\begin{equation}
\label{eqn:epsilon}
\epsilon = \frac{1}{2} \text{erf}\left(\frac{t}{2\sqrt{2}\sigma}\right)
\end{equation}
where $\sigma$ is the filter standard deviation and $\text{erf}\left(\cdot\right)$ is the error function.
Derivation of Equation~\ref{eqn:epsilon} is given in Appendix~\ref{app:regulariz_selection}.
The parameter $t$ is termed the regularization thickness to denote it has dimensions of length and can be interpreted as a physical size.
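Equation~\ref{eqn:epsilon} amounts to a one-line computation (the function name below is ours):

```python
import math

def regularization_epsilon(t, sigma):
    """eps = 0.5 * erf(t / (2 * sqrt(2) * sigma)) for regularization thickness t
    and Gaussian standard deviation sigma (Equation for epsilon selection)."""
    return 0.5 * math.erf(t / (2.0 * math.sqrt(2.0) * sigma))
```

Note the behavior at the extremes: $\epsilon \rightarrow 0$ as the thickness $t$ shrinks, and $\epsilon \rightarrow 0.5$ (the full dynamic range of the embedding) as $t$ grows large relative to $\sigma$.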
Furthermore, the integrals of the form of Equation~\ref{eqn:surface_integrals} need to be numerically approximated.
Simply summing across the image and multiplying by the product of spacing works well.
Alternatively, Simpson's rule can be applied along each direction in the image.
This leads to a slight improvement in the volume measurement for the monotonic Heaviside function, where a single-sided approximation of the Riemann integral leads to a systematic error.
This work uses Simpson's rule for completeness.
\subsubsection{Finite Differences}
All infinitesimal differences are approximated with finite differences.
This work uses fourth-order accurate central differences for the first, second, and mixed derivatives.
At least second-order accurate differences are required~\cite{coquerelle2016fourth} and all derivatives should be of the same order.
No boundary conditions are defined for the problem, and they are not needed.
Either the image can be padded based on features of the embedding (details in Section~\ref{subsubsec:image_boundary}) or the finite difference stencil can be shifted near the edges of the image.
Finite difference equations are given in Appendix~\ref{app:stencils}.
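For illustration, a fourth-order accurate central first difference takes the familiar five-point form (a 1D sketch of ours; the paper applies such stencils along each image axis):

```python
import numpy as np

def d1_fourth_order(f, h):
    """Fourth-order central first derivative on interior points:
    f'(x_i) ~ (-f[i+2] + 8 f[i+1] - 8 f[i-1] + f[i-2]) / (12 h).
    The two points at each boundary are left at zero; in practice the image
    is padded or the stencil shifted there, as described in the text."""
    d = np.zeros_like(f)
    d[2:-2] = (-f[4:] + 8.0 * f[3:-1] - 8.0 * f[1:-3] + f[:-4]) / (12.0 * h)
    return d
```

Applied to a smooth test function such as $\sin(x)$, the interior error decays as $O(h^4)$, compared with $O(h^2)$ for the standard two-point central difference.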
\subsubsection{Image Boundary}
\label{subsubsec:image_boundary}
Care should be taken at the boundary of the image for two reasons.
First, the surface may be clipped by the edge of the image, modifying morphometry.
Second, the formula for the Euler--Poincar\'e~characteristic includes an additional term when the surface is clipped, which is not obvious to compute.
The image can be padded to avoid boundary issues.
If the given data is a binary image, background voxels can be padded before embedding.
If the given data is an embedding, padding can be selected as appropriate for the embedding method.
For distance transforms where the inside of the curve is negative, this would be padding with a positive number.
For the proposed method, this would be padding with a constant of 0.5.
Still, some structures are artificially clipped by the imaging protocol.
For instance, abdominal clinical computed tomography clips the femur at the lesser trochanter and extremity imaging clips long bones to the scanner field of view.
Closing these surfaces by padding background voxels is required.
Once closed, clipping location will still affect the measured outcomes and care should be taken to standardize clipping in a study.
\subsubsection{Visualization and Histograms}
One advantage of the proposed method is that we can visualize curvatures on the surface.
If a volume renderer~\cite{drebin1988volume} is used, opacity can be mapped by a regularized Heaviside function and color mapped by a transfer function of the embedding.
Alternatively, the object can be meshed using marching cubes at the zero iso-contour~\cite{lorensen1987marching} and finite difference stencils placed on the mesh vertices to compute mean and Gaussian curvature.
As gradients are well defined, both methods give excellent visualizations.
Marching cubes is used in this work as it permits the computation of histograms from vertices.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{Segmentation}%
\caption{Visualization of the dataset. L4 vertebra are fully imaged while the proximal femur is clipped at an operator defined level roughly corresponding to the lesser trochanter. Manual segmentation artifacts are evident. Starting top left and going clockwise, the images are an axial slice, sagittal slice, coronal slice, and 3D render. Scale bar is for the 2D visuals.}
\label{fig:data}
\end{figure}
\section{Experiments}
Experiments are conducted to demonstrate the unsuitability of the signed distance transform, gain intuition on the parameters of the proposed embedding and morphometry technique, and validate the morphometry against existing methods.
Fifty abdominal clinical computed tomography images are used.
The scan volume started at the T12 vertebra and ended at the lesser trochanter.
The right femur and fourth lumbar vertebra were manually segmented from each dataset.
33 (66\%) of the subjects were male. Reported median [min--max] age was 61.5 [50.0--102.0] years, in-plane resolution was 0.703 [0.580--0.977] $\si{\milli\metre}$, and slice thickness was 0.625 [0.624--1.000] $\si{\milli\metre}$.
A median (by volume) segmented femur and vertebra are displayed in Figure~\ref{fig:data}.
Small artifacts due to manual segmentation along the axial direction are evident. Additional details on the data can be found in a previous study~\cite{michalski2021opportunistic}.
\subsection{Necessity of the Embedding Technique}
One vertebra was used to visually demonstrate that the signed distance transform is insufficient for local morphometry.
This subject had an in-plane resolution of $\SI{0.639}{\milli\metre}$ and a slice thickness of $\SI{0.625}{\milli\metre}$.
The binary image was embedded with the signed distance transform~\cite{danielsson1980euclidean} and with the proposed method ($\sigma = \SI{1.0}{\milli\metre}$).
The surface was extracted using the marching cubes method.
Mean and Gaussian curvatures were estimated at each vertex and visualized across the mesh triangles.
The histogram for mean and Gaussian curvature were generated from the vertices.
\subsection{Structural Changes of the Proposed Embedding Technique}
\label{subsec:structural}
The next experiment aimed at investigating the change in structure as a consequence of blurring.
Therefore, the right femur and L4 vertebra of one subject were used.
This subject had an in-plane resolution of $\SI{0.672}{\milli\metre}$ and a slice thickness of $\SI{1.000}{\milli\metre}$.
Embedding was repeated for 100 Gaussian standard deviations spaced uniformly from $\SI{1.0}{\milli\metre}$ to $\SI{5.0}{\milli\metre}$.
The regularization thickness ($t = \SI{1.5}{\milli\metre}$) did not change.
Measured outcomes were volume ($V$, [$\si{\milli\metre\cubed}$]), area ($A$, [$\si{\milli\metre\squared}$]), average mean surface curvature ($\langle H \rangle$, [$\si{\per\milli\metre}$]), and Euler--Poincar\'e~characteristic ($\chi$, $[-]$).
Embeddings at select standard deviations are visualized for qualitative assessment.
\subsection{Sensitivity of Morphometrics to Regularization Thickness}
Next, the stability of the morphometric calculations to regularization thickness was explored.
The same dataset and same morphometric outcomes were used as described in Section~\ref{subsec:structural}.
Outcomes were plotted for 100 regularization thicknesses spaced uniformly from $\SI{0.5}{\milli\metre}$ to $\SI{10.0}{\milli\metre}$.
The blurring ($\sigma = \SI{2.5}{\milli\metre}$) was kept constant for this experiment.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{DT-Artifact}%
\caption{Distance transforms produce quantization artifacts in the computation of curvature. (\ref{fig:sdt}a) Visualization of the surface extracted from the respective embeddings, (\ref{fig:sdt}b) 2D axial slices through the embeddings, (\ref{fig:sdt}c) Histograms of the curvatures evaluated at the vertices of the extracted surface. The curvature histograms computed from the distance transform demonstrate severe artifacts.}
\label{fig:sdt}
\end{figure*}
\subsection{Validation of Morphometrics}
The objective of the final experiment was to compare global morphometric outcomes of the proposed method with established methods.
All 50 right femurs and 50 fourth lumbar vertebrae were used for comparison. Images were embedded ($\sigma = \SI{2.0}{\milli\metre}$) and morphometry performed with the proposed method ($t = \SI{2.5}{\milli\metre}$).
Measured outcomes were volume ($V$, [$\si{\milli\metre\cubed}$]), area ($A$, [$\si{\milli\metre\squared}$]), surface average mean curvature ($\langle H \rangle$, [$\si{\per\milli\metre}$]), and Euler--Poincar\'e~characteristic ($\chi$, $[-]$).
The embeddings were re-binarized by thresholding below zero to compare the proposed method with traditional morphometric techniques.
In this way, the underlying structure is the same in both methods.
The binary image was embedded using a signed distance transform and morphometry performed ($\epsilon = \SI{1.25}{\milli\metre}$) to quantify error when using a signed distance transform embedding.
Traditional morphometry was computed on the binary image using Image Processing Language (IPL v5.42, SCANCO Medical AG, Brüttisellen, Switzerland).
Euler--Poincar\'e~characteristic was computed using an exact method for a 3D binary image~\cite{odgaard1993quantification}, area and volume are estimated by triangulating the surface, and surface average mean curvature is computed using a dilation technique~\cite{hildebrand1997quantification}.
The proposed measures of volume, area, and average mean surface curvature were compared to standard techniques using regression and Bland-Altman analysis~\cite{bland1986statistical}.
The categorical Euler--Poincar\'e~characteristic was compared using a confusion matrix.
The proposed method gives a continuous outcome for the Euler--Poincar\'e~characteristic while the standard method gives a categorical outcome; this nuance is outlined in the discussion.
For purposes of comparison, the proposed technique was rounded to an integer.
Analysis was stratified by femur and vertebra. Statistical analysis was performed in R (v4.0.0, The R Foundation for Statistical Computing, Vienna, Austria)~\cite{team2013r}.
\section{Results}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{StructuralChanges}%
\caption{Structural changes as a function of smoothing. (\ref{fig:smoothing}a) Morphometrics vary as a function of smoothing. (\ref{fig:smoothing}b) Visualizing the surfaces for different smoothing values. $t = \SI{1.5}{\milli\metre}$ throughout.}
\label{fig:smoothing}
\end{figure*}
\subsection{Necessity of the Embedding Technique}
Surfaces of the proposed and signed distance transform embedding techniques are rendered in Figure~\ref{fig:sdt}a for a selected case.
Overall, the rendered surface is smoother using the proposed method.
In contrast to this, the mean and Gaussian curvature exhibit a large amount of noise when computed from the signed distance transform image.
A 2D slice is taken through the embedding in Figure~\ref{fig:sdt}b. Histograms of the curvatures are displayed in Figure~\ref{fig:sdt}c.
Large curvatures from the signed distance transform are severely quantized compared with the proposed method.
This result is fundamental to the signed distance transform of binary images~\cite{besler2020artifacts}.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{RegularizationSensitivity}%
\caption{Sensitivity to regularization thickness. (\ref{fig:regularization}a) Morphometrics as a function of regularization thickness. The dashed vertical line indicates the largest edge length of the dataset's anisotropic voxel ($\SI{1}{\milli\metre}$). (\ref{fig:regularization}b) Visualizing the Dirac delta and Heaviside response for varying regularization thicknesses. $\sigma = \SI{2.5}{\milli\metre}$ throughout.}
\label{fig:regularization}
\end{figure*}
\subsection{Structural Changes of the Proposed Embedding Technique}
Changes in morphometry as a function of smoothing are shown in Figure~\ref{fig:smoothing}.
Both the femur and vertebra show rough voxel edges when little smoothing is applied.
At a standard deviation of \SI{3.0}{\milli\metre}, the vertebra is oversmoothed, taking on a bubble-like appearance.
The Euler--Poincar\'e~characteristic is poorly estimated for low standard deviations.
Sensitivity of area, volume, and averaged mean surface curvature are consistent with the Gaussian modifying the underlying object.
The transverse processes shrink in Figure~\ref{fig:smoothing}b while small variations along the surface are removed with increasing smoothing.
This is consistent with the decreasing surface averaged mean curvature where areas of high absolute curvature are smoothed more rapidly than areas of low curvature.
Finally, the Euler--Poincar\'e~characteristic settles near 2 for the femur and 0 for the vertebra.
This is expected as the femur is topologically equivalent to a sphere and the vertebra is topologically equivalent to a torus.
If the smoothing is selected much larger than the thickness of the vertebral arch, a hole may be introduced in the surface, changing the topology.
However, the Euler--Poincar\'e~characteristic of a vertebrae is not necessarily always zero.
There can be physical damage or anatomical anomalies, which disconnect or form holes in the vertebral arch.
Interestingly, not all vertebrae are topologically equivalent either, with cervical vertebrae having two additional holes corresponding to the transverse foramen.
An important result of this experiment is that small smoothing values still have quantized gradients, evident by the increase in Euler--Poincar\'e~characteristic.
Such large values of the Euler--Poincar\'e~characteristic are physically impossible since the only orientable, closed surface with an Euler--Poincar\'e~characteristic greater than zero is the sphere.
There were no disconnected particles in the image which could have increased the Euler--Poincar\'e~characteristic past two.
Depending on the object structure relative to image resolution size, the required amount of smoothing may be prohibitive.
When embedding a binary image, it is recommended to select one standard deviation for each tissue class but not necessarily the same standard deviation across tissue classes.
The smoothing should be selected such that morphometry is accurate, but the object is not artificially smoothed for subsequent processing.
Experimental designers using this technique are responsible for quantifying and understanding the structural changes that occur with Gaussian blurring.
In general, smoothing should be selected larger than the image resolution but smaller than the object thickness or holes in the object.
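A minimal sketch of the Gaussian embedding discussed in this subsection (the 0.5 shift and the inside-negative sign convention are assumptions chosen so that thresholding below zero recovers the object; names are illustrative):

```python
# Embed a binary image by Gaussian blurring at a physical standard
# deviation, producing a smooth field whose zero level set is the surface.
import numpy as np
from scipy.ndimage import gaussian_filter

def gaussian_embedding(binary, sigma_mm, spacing=(1.0, 1.0, 1.0)):
    sigma_vox = [sigma_mm / s for s in spacing]   # physical sigma to voxels
    blurred = gaussian_filter(binary.astype(float), sigma=sigma_vox,
                              mode="nearest")
    return 0.5 - blurred                          # smooth, negative inside
```

Specifying the standard deviation in physical units (here millimetres, converted per-axis to voxels) keeps the smoothing consistent across anisotropic voxels and varying resolutions.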
\subsection{Sensitivity of Morphometrics to Regularization}
Changes in morphometry as a function of regularization thickness are plotted in Figure~\ref{fig:regularization}a.
Morphometric outcomes hardly vary as a function of regularization thickness, suggesting that the method is insensitive to regularization thickness.
The Euler--Poincar\'e~characteristic is the most sensitive outcome, exhibiting non-integer values for $t$ smaller than a voxel.
Finally, increasing the regularization thickness increases the response size of the Dirac delta and Heaviside responses, as expected (Figure~\ref{fig:regularization}b).
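One common compactly supported regularization with thickness $t$ is the cosine form below (a sketch; the paper's exact functional form may differ, but the widening response with increasing $t$ is the same):

```python
# Regularized Heaviside and Dirac delta with support |phi| <= t.
# dirac is the derivative of heaviside, so the pair is consistent.
import numpy as np

def heaviside(phi, t):
    h = 0.5 * (1.0 + phi / t + np.sin(np.pi * phi / t) / np.pi)
    return np.where(phi < -t, 0.0, np.where(phi > t, 1.0, h))

def dirac(phi, t):
    d = (1.0 + np.cos(np.pi * phi / t)) / (2.0 * t)
    return np.where(np.abs(phi) > t, 0.0, d)
```

As $t$ grows the Dirac response widens and its peak lowers (its integral stays one), consistent with the behavior described above.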
\subsection{Validation of Morphometrics}
Regression and Bland-Altman plots for volume, area, and average mean surface curvature are displayed in Figure~\ref{fig:regression-ba-sdt} for the signed distance transform and Figure~\ref{fig:regression-ba-proposed} for the proposed embedding.
Regression and Bland-Altman statistics for both embeddings are summarized in Table~\ref{tab:agreement}.
The proposed method reduces variability in area and average mean curvature measures, while greatly reducing the proportional bias in area as compared to the signed distance transform.
Computed as the difference in limits of agreement over the average between methods, the area proportional bias slope improved from -5.0\% in the femur and -3.1\% in the vertebrae to 0.6\% in the femur and 0.8\% in the vertebrae using the proposed method.
Regression slopes all improved or remained unity.
Excellent agreement is seen between the proposed and traditional methods.
While the global morphometric outcomes appear reasonable for the signed distance transform, Figure~\ref{fig:sdt} demonstrates that the local measures are quantized and averaging across the surface has increased global outcome accuracy.
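For concreteness, the Bland-Altman quantities reported here (bias, 95\% limits of agreement, and the proportional-bias slope obtained by regressing the difference on the mean) can be sketched as follows; this is a standard formulation, not the authors' script:

```python
# Bland-Altman agreement between a method and a standard.
import numpy as np

def bland_altman(method, standard):
    m = np.asarray(method, float)
    s = np.asarray(standard, float)
    diff = m - s
    mean = 0.5 * (m + s)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    slope = np.polyfit(mean, diff, 1)[0]   # proportional-bias slope
    return bias, loa, slope
```

A nonzero slope indicates proportional bias, i.e. a disagreement that grows with the size of the measured quantity.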
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Regression-BA-SDT}%
\caption{Regression and Bland-Altman plots demonstrating agreement between the signed distance transform and standard methods for the femur and L4 vertebra (n=50 in each group). Dashed light gray lines denote ideal relationships.}
\label{fig:regression-ba-sdt}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{Regression-BA-Proposed}%
\caption{Regression and Bland-Altman plots demonstrating agreement between the proposed and standard methods for the femur and L4 vertebra (n=50 in each group). Dashed light gray lines denote ideal relationships.}
\label{fig:regression-ba-proposed}
\end{figure}
\end{landscape}
\begin{landscape}
\begin{table}
\footnotesize
\centering
\begin{threeparttable}
\begin{tabular}{llcllclllcll}
& & & \multicolumn{2}{c}{Descriptive Statistics\tnote{$\dagger$}} & & \multicolumn{3}{c}{Regression Analysis} & & \multicolumn{2}{c}{Bland-Altman Analysis} \\
\cline{4-5}\cline{7-9}\cline{11-12}
& & & Method & Standard & & Slope (95\% CI) & Intercept (95\% CI) & R\textsuperscript{2} & & Bias (95\% LoA) & p-value\tnote{$\ddagger$}\\
\hline
\multicolumn{12}{c}{Signed Distance Transform} \\
\hline
\multirow{3}{*}{Femur} & $V~[\si{\milli\metre\cubed}]$ & & $159959.74 \pm 45821.52$ & $160513.90 \pm 45992.36$ & & $1.0037~(1.0034, 1.0040)$ & $-42.12~(-94.53, 10.29)$ & $1.000$ & & $554.16~(205.45, 902.88)$ & $<2\cdot 10^{-16}$ \\
& $A~[\si{\milli\metre\squared}]$ & & $19669.37 \pm 4213.27$ & $18775.06 \pm 3991.35$ & & $0.9470~(0.9397, 0.9543)$ & $148.27~(1.76, 294.77)$ & $0.999$ & & $-894.31~(-1378.64, -409.97)$ & $<2\cdot 10^{-16}$ \\
& $\langle H \rangle~[\si{\per\milli\metre}]$ & & $0.0319 \pm 0.0029$ & $0.0324 \pm 0.0029$ & & $0.9615~(0.8900, 1.0329)$ & $0.0018~(-0.0005, 0.0041)$ & $0.938$ & & $0.0006~(-0.0009, 0.0020)$ & $0.834$ \\
\cline{2-12}
\multirow{3}{*}{L4} & $V~[\si{\milli\metre\cubed}]$ & & $66762.55 \pm 12878.85$ & $67009.42 \pm 12961.63$ & & $1.0064~(1.0055, 1.0073)$ & $-181.96~(-242.27, -121.65)$ & $1.000$ & & $246.86~(67.29, 426.43)$ & $<2\cdot 10^{-16}$ \\
& $A~[\si{\milli\metre\squared}]$ & & $14829.09 \pm 2049.11$ & $14110.63 \pm 1971.38$ & & $0.9613~(0.9499, 0.9727)$ & $-144.0273~(-314.79, 26.73)$ & $0.998$ & & $-718.47~(-940.13, -496.81)$ & $3.5\cdot 10^{-8}$ \\
& $\langle H \rangle~[\si{\per\milli\metre}]$ & & $0.0515 \pm 0.0044$ & $0.0507 \pm 0.0040$ & & $0.9010~(0.0009, 0.0078)$ & $0.0043~(0.0009, 0.0078)$ & $0.939$ & & $-0.0008~(-0.0029, 0.0013)$ & $0.0473$ \\
\hline
\multicolumn{12}{c}{Proposed Method} \\
\hline
\multirow{3}{*}{Femur} & $V~[\si{\milli\metre\cubed}]$ & & $159857.37 \pm 45810.44$ & $160513.90 \pm 45992.36$ & & $1.0040~(1.0037, 1.0043)$ & $21.82~(-30.31, 73.96)$ & $1.000$ & & $656.54~(287.05, 1026.02)$ & $<2\cdot 10^{-16}$ \\
& $A~[\si{\milli\metre\squared}]$ & & $18660.90 \pm 3967.92$ & $18775.06 \pm 3991.35$ & & $1.0059~(1.0049, 1.0069)$ & $4.09~(-15.92, 24.11)$ & $1.000$ & & $114.17~(60.36, 167.97)$ & $3.46\cdot 10^{-15}$ \\
& $\langle H \rangle~[\si{\per\milli\metre}]$ & & $0.0325 \pm 0.0029$ & $0.0324 \pm 0.0029$ & & $1.0024~(0.9975, 1.0074)$ & $-0.0001~(-0.0003, 0.0000)$ & $1.000$ & & $0.0000~(-0.0002, 0.0000)$ & $0.2972$ \\
\cline{2-12}
\multirow{3}{*}{L4} & $V~[\si{\milli\metre\cubed}]$ & & $66679.18 \pm 12865.13$ & $67009.42 \pm 12961.63$ & & $1.0075~(1.0062, 1.0087)$ & $-169.28~(-254.42, -84.14)$ & $1.000$ & & $330.24~(112.16, 548.32)$ & $3.78\cdot 10^{-16}$ \\
& $A~[\si{\milli\metre\squared}]$ & & $14055.01 \pm 1961.87$ & $14110.63 \pm 1971.38$ & & $1.0047~(1.0007, 1.0088)$ & $-11.13~(-68.20, 45.94)$ & $1.000$ & & $55.62~(-0.72, 111.96)$ & $0.019$ \\
& $\langle H \rangle~[\si{\per\milli\metre}]$ & & $0.0511 \pm 0.0041$ & $0.0507 \pm 0.0040$ & & $0.9958~(0.9858, 1.0057)$ & $-0.0002~(-0.0007, 0.0003)$ & $0.999$ & & $-0.0004~(-0.0007, -0.0001)$ & $0.466$ \\
\hline
\end{tabular}
\begin{tablenotes}
\item[$\dagger$] Reported mean $\pm$ standard deviation.
\item[$\ddagger$] Computed on slope in the Bland-Altman diagram to test for proportional bias.
\end{tablenotes}
\caption{Agreement analysis for the signed distance transform and proposed method relative to the standard methods for continuous outcomes.}
\label{tab:agreement}
\end{threeparttable}
\end{table}
\end{landscape}
At the $\alpha = 0.05$ level, statistically significant proportional bias is seen in measures of area and volume using the proposed method.
While the proportional bias is statistically significant, it is practically insignificant exhibiting a slope of less than 1\% for all measures.
The source of the proportional bias is believed to be an interaction between the Gaussian filtration and regularized Heaviside and Dirac delta functions, where there is a sub-voxel shift of no more than one voxel in the embedding with a direction and magnitude that depends on the local mean curvature.
The embedding shrinks at areas of positive mean curvature and expands at areas of negative mean curvature (Figure~\ref{fig:gauss}).
Although locally small, the volume and area integrals accumulate the error across the volume into a detectable bias.
This leads to a proportional bias because the larger the object, the more accumulation.
It should be noted that this is independent from the structural changes caused by Gaussian blurring as the analysis was performed on the data re-binarized after embedding.
Confusion matrices for the computation of the Euler--Poincar\'e~characteristic are given in Figure~\ref{fig:confusion_matrix}. 100\% accuracy is seen in the femur and 96\% accuracy is seen in the L4 vertebrae.
The Euler--Poincar\'e~characteristic was nonsensical when computed from the signed distance transform, so the results are visualized using a histogram rather than the confusion matrix (Figure~\ref{fig:chi_histogram}).
The reason for such large errors is that the quantization in the signed distance transform makes the embedding at best $O(h)$ accurate in initialization~\cite{besler2020artifacts}, causing the second derivatives to amplify noise.
This result is fundamental to the signed distance transform and cannot be corrected by increasing the resolution or transforming the signed distance transform in some way~\cite{besler2020artifacts}.
Two vertebrae display an Euler--Poincar\'e~characteristic of -2 indicating a second hole in their shape.
These images are displayed in Figure~\ref{fig:l4_hole}.
One vertebra has a hole of a single voxel while another has a hole of a few voxels.
As the regularization thickness is larger than these holes, an error is seen in the proposed method's computation of Euler--Poincar\'e~characteristic giving 0 in the first case and -1 in the second.
Most likely, there are more small holes in the surfaces caused by segmentation errors or anatomical defects that are removed with Gaussian filtering.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ConfusionMatrix}%
\caption{Confusion matrix demonstrating agreement between the proposed and standard method for computing the Euler-Poincaré characteristic in the femur and L4 vertebra (n=50 in each group).}
\label{fig:confusion_matrix}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{ChiHistogram}%
\caption{Histogram of Euler-Poincaré characteristic in the femur and L4 vertebra (n=50 in each group) computed using the signed distance transform embedding. Values should be around 0 and 2 but instead are erroneously larger.}
\label{fig:chi_histogram}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{L4Hole}%
\caption{A second hole in two L4 vertebrae, highlighted in red. This could be due to a contouring error or due to an anatomical defect in the pedicles.}
\label{fig:l4_hole}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{SurfaceOverview}%
\caption{(Left) Visualization of local mean and Gaussian curvature for a femur and L4 vertebra. (Right) histogram of mean and Gaussian curvature across the surface.}
\label{fig:surface_overview}
\end{figure*}
A femur and L4 vertebra, selected as the objects with median surface average mean curvature, are rendered in Figure~\ref{fig:surface_overview}.
Local mean curvature is consistent with intuition: the fovea capitis has a negative mean curvature and the tips of the transverse processes show a positive mean curvature.
The mean and Gaussian histograms are fat-tailed distributions with the L4 mean curvature histogram exhibiting skewness.
The femur mean curvature histogram has a large spike at zero corresponding to the flat distal section where the scan volume of interest cropped the femur.
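To make the local curvature maps and global integrals concrete, here is a hedged NumPy sketch: mean curvature as half the divergence of the unit normal, Gaussian curvature via the adjugate of the Hessian, and surface integrals for area, average mean curvature, and the Euler--Poincaré characteristic via Gauss--Bonnet. The cosine Dirac delta, voxel-counting volume, and all names are assumptions, not the authors' implementation.

```python
# Local curvatures and global morphometrics from a smooth embedding phi
# (negative inside, zero level set = surface).
import numpy as np

def dirac(phi, t):
    # compactly supported cosine delta (an assumed regularization)
    d = (1.0 + np.cos(np.pi * phi / t)) / (2.0 * t)
    return np.where(np.abs(phi) > t, 0.0, d)

def curvatures(phi, spacing=(1.0, 1.0, 1.0)):
    g = np.gradient(phi, *spacing)
    gnorm = np.sqrt(sum(gi ** 2 for gi in g)) + 1e-12
    # mean curvature: half the divergence of the unit normal
    n = [gi / gnorm for gi in g]
    Hm = 0.5 * sum(np.gradient(n[i], spacing[i], axis=i) for i in range(3))
    # Gaussian curvature from the adjugate of the Hessian
    Hess = [[np.gradient(g[i], spacing[j], axis=j) for j in range(3)]
            for i in range(3)]
    def adj(i, j):                       # adjugate entry (Hessian symmetric)
        r = [k for k in range(3) if k != j]
        c = [k for k in range(3) if k != i]
        return (-1) ** (i + j) * (Hess[r[0]][c[0]] * Hess[r[1]][c[1]]
                                  - Hess[r[0]][c[1]] * Hess[r[1]][c[0]])
    K = sum(g[i] * adj(i, j) * g[j]
            for i in range(3) for j in range(3)) / gnorm ** 4
    return Hm, K, gnorm

def morphometry(phi, t, spacing=(1.0, 1.0, 1.0)):
    dv = float(np.prod(spacing))
    Hm, K, gnorm = curvatures(phi, spacing)
    d = dirac(phi, t)
    A = (d * gnorm).sum() * dv                         # surface area
    V = (phi < 0).sum() * dv                           # volume (voxel count)
    avgH = (Hm * d * gnorm).sum() * dv / A             # avg mean curvature
    chi = (K * d * gnorm).sum() * dv / (2.0 * np.pi)   # Gauss--Bonnet
    return V, A, avgH, chi
```

On an analytic sphere of radius $r$ this recovers $V \approx \frac{4}{3}\pi r^3$, $A \approx 4\pi r^2$, $\langle H \rangle \approx 1/r$, and $\chi \approx 2$, matching the continuum identities the method relies on.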
\section{Discussion}
The main contribution of this paper is the description of a method for performing morphometry on closed, implicit surfaces.
For this purpose, an embedding procedure for binary images based on Gaussian blurring is suggested as an alternative to the signed distance transform to avoid errors from quantization.
Morphometrics resulting from this embedding are validated against well-established methods and show excellent agreement and considerably better results compared to using the standard signed distance transform.
The proposed method is a refinement and summation of many classic works~\cite{sethian1999level}.
Measuring area and volume from implicit surfaces is well-defined~\cite{chan2001active} while total Gaussian curvature has been previously used to count the number of objects in a volume preserving flow~\cite{peng1999pde}.
Mean curvature has been used extensively in the computation of mean curvature flow~\cite{osher1988fronts,chopp1991computing}.
The main contribution of this work is a well-defined embedding function and synthesis of previously described methods into morphometrics founded in differential geometry.
This technique provides a way to measure mathematically well-described properties on anatomical structures for basic or clinical research.
The embedding method based on Gaussian blurring is the key to enable an advanced morphometric analysis since computation of curvatures from signed distance transforms of binary images result in considerable quantization-related errors~\cite{besler2020artifacts}.
These errors can have a particular negative effect on the computation of Euler--Poincar\'e~characteristic and will be most obvious and severe in that outcome first.
In essence, the problem is to assign well defined spatial gradients to binary images.
The consequences of the quantized embedding are more profound than just morphometry.
An error in the representation of embeddings can also lead to irreducible error in curve evolution problems~\cite{coquerelle2016fourth} that are independent of voxel spacing~\cite{besler2020artifacts}.
This highly motivates the use of flexible, local level set initialization methods~\cite{li2005level} for active contour problems.
Embedding methods can be designed for specific applications but generally require that the gradients are accurate.
We previously proposed a dithering and reinitialization algorithm~\cite{peng1999pde} for fixing the gradient issue in signed distance transforms.
While the algorithm improves accuracy compared to using signed distance transforms directly, the method does not produce the accuracy seen in this work since the algorithm stops improving before gradients of the embedding are highly accurate (data not shown).
Furthermore, the algorithm requires an exceptional amount of computation time limiting practical application.
The proposed method was inspired by a sub-voxel distance mapping method~\cite{caselles1993geometric} and flexible, local level set initialization~\cite{li2005level}.
The point remains that there is space for design around the embedding method.
It is important to highlight that in some workflows, no embedding procedure is needed as the data already comes embedded.
This is true in active contour segmentation models~\cite{chan2001active,vese2002multiphase,cremers2007review} in particular.
The general design criterion for embeddings is that the Heaviside recovers the object (Equation~\ref{eqn:recovery} and~\ref{eqn:level_set}) and that the embedding is monotonic across the zero level set.
Practically, the embedding only needs to be defined near the surface --- the so-called narrow band method~\cite{adalsteinsson1995fast}.
A Gaussian filter was used due to its speed, ease of design, and widespread applicability.
However, other methods such as anisotropic diffusion~\cite{perona1990scale}, anti-aliasing filters~\cite{whitaker2000reducing}, or simple non-nearest neighbor interpolation~\cite{thevenaz2000image} could also be used and may prove beneficial.
The design objective is that 1) the Heaviside recovers the object, 2) gradients are accurate, and 3) features relevant for experimental work are not removed.
Within this context, embedding with a Gaussian filter has advantages and disadvantages.
The major disadvantage is that the Gaussian filter modifies the underlying object.
Objects thinner or closer together than the full-width half maximum of the blurring Gaussian are likely to be closed or opened.
Nevertheless, large organs relative to image spacing such as long bones, the hippocampus, and the liver are unlikely to change.
One advantage is that Gaussian filtration also helps to remove small imperfections in the binary images (e.g. manual segmentation artifacts).
In this work, manual contouring artifacts were seen in the data, which would have appeared as noise in the curvature outcomes.
The Gaussian standard deviation is an intuitive and obvious parameter to handle this artifact.
Furthermore, it helps when handling images of varying resolutions.
As the image resolution increases, smaller dimples in the surface can be resolved, increasing the absolute value of the curvature that can be measured.
By filtering the data at a fixed physical scale, these small differences between datasets can be standardized.
The primary advantage of the proposed method is that it is local.
In this context, local means that the morphometrics can be evaluated locally in the image while global means the values can only be computed for the surface as a whole.
Global methods exist for the computation of the mean curvature~\cite{hildebrand1997quantification,hahn1992trabecular,jinnai2002surface} and Euler--Poincar\'e~characteristic~\cite{odgaard1993quantification} while binarizing the embedding allows computation of volume and area. Local curvature can also be evaluated from a mesh of the surface \cite{goldfeather2004novel,rusinkiewicz2004estimating,flynn1989reliable}.
However, meshes are not ideal for curve evolution problems due to the splitting and merging required to change topology.
Having the ability to evaluate these outcomes locally opens up many possibilities.
First, they can be visualized and correlated with other measures such as local stress from finite element analysis~\cite{loundagin2020stressed} and local bone formation rates~\cite{schulte2011vivo}.
Second, they can be used as a loss function in deep learning models~\cite{litjens2017survey} because gradients in back propagation can be defined through the spatial gradients.
An important feature of the proposed method is that the computation of Euler--Poincar\'e~characteristic is not limited to integer values because a continuous measure is integrated across the surface.
As was seen in the vertebra, this can produce misleading results when small holes relative to the regularization thickness are present in the structure.
This will not be an issue in many cases and the Euler--Poincar\'e~characteristic can be rounded to an integer.
However, this mistake will be obvious to spot in the resulting data if odd values or values greater than 2 are measured for the Euler--Poincar\'e~characteristics.
The main limitation of this study is that local curvatures were not directly validated as we assumed that if the global morphometry is accurate, the local morphometry will also be accurate.
However, this may not be the case since averaging across the surface should increase the accuracy of the results.
Given that the Euler--Poincar\'e~characteristic is not averaged and accumulates errors across the surface, it is reasonable to assume the error in local curvature is small.
Additionally, qualitative analyses based on visualizing curvatures on the surface provided evidence that local measures are accurate.
In the future, ideal parametric surfaces such as spheres, tori, or triply period minimal surfaces~\cite{schoen1970infinite} where curvatures can be computed analytically should be used for validation.
\section{Conclusion}
A method of computing volume, area, average mean curvature, and Euler--Poincar\'e~characteristic of closed, orientable surfaces is described.
The fast and simple Gaussian filter is proposed for embedding binary images to overcome the quantization errors associated with the signed distance transform. The method is accurate and local, allowing the visualization of curvatures across the surface.
Q: Simple login "if" condition only half works (Android Studio, Java) Hello, I am new to Android Java coding. I created a "login activity" and a "Users class".
My goal is to add a number of users and passwords to the "Users class" and check whether the given password is correct before logging in.
However, it works correctly when I enter the right password, but if the password is wrong the app just crashes and the if-else condition does not run.
public class MainActivity extends AppCompatActivity {
private EditText Name;
private EditText Password;
private TextView Loginmsg;
public Button Btlogin;
public static int Counter;
public static ArrayList <User> mUsers;
public static int Tester;
private String nameHolder;
private String passHolder;
@Override
protected void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
setContentView(R.layout.activity_main);
Name = findViewById(R.id.etpsudo);
Password = findViewById(R.id.etpaswword);
Loginmsg = findViewById(R.id.txtlog);
Btlogin = findViewById(R.id.btnlogin);
Tester=0;
Btlogin.setOnClickListener(new View.OnClickListener() {
@SuppressLint("SetTextI18n")
@Override
public void onClick(View v) {
nameHolder=Name.getText().toString();
passHolder=Password.getText().toString();
for(int i=0; i<=mUsers.size();i++){
if((nameHolder.equals(mUsers.get(i).getmNom()))&&(passHolder.equals(mUsers.get(i).getmPass()))){
Counter=i;
i=mUsers.size()+1;
Tester=1;
}
}
if (Tester==1){
Intent drawless = new Intent(MainActivity.this,newacc.class);
startActivity(drawless);
}
else{
Loginmsg.setText("eroor");
}
}
});
mUsers = new ArrayList<>();
mUsers.add(new User("amine","12345"));
mUsers.add(new User("bouhali","4523"));
mUsers.add(new User("khawla","ae12"));
}
}
A: You have a problem in your for loop.
Look carefully at
for(int i=0; i<=mUsers.size();i++)
ArrayList indices start at 0, so the last valid index is mUsers.size() - 1, not mUsers.size(). When i reaches mUsers.size(), the call mUsers.get(i) throws an IndexOutOfBoundsException — that is why the app crashes: with a correct password the loop exits early, but with a wrong password it runs past the end of the list. Change the condition to i < mUsers.size().
Also, I agree with Dave: use a break rather than modifying the value of i to end the loop; it is much clearer.
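A self-contained sketch of the corrected lookup (the User class here is a minimal stand-in for yours; returning -1 replaces the Tester flag):

```java
// Corrected search: strict "i < size()" bound and an early return
// (equivalent to a break), so a wrong password cannot crash the app.
import java.util.ArrayList;

public class LoginCheck {
    static class User {
        private final String mNom, mPass;
        User(String nom, String pass) { mNom = nom; mPass = pass; }
        String getmNom() { return mNom; }
        String getmPass() { return mPass; }
    }

    static int findUser(ArrayList<User> users, String name, String pass) {
        for (int i = 0; i < users.size(); i++) {   // strictly less than size()
            if (name.equals(users.get(i).getmNom())
                    && pass.equals(users.get(i).getmPass())) {
                return i;                           // found: stop looping
            }
        }
        return -1;                                  // not found: no crash
    }

    public static void main(String[] args) {
        ArrayList<User> users = new ArrayList<>();
        users.add(new User("amine", "12345"));
        users.add(new User("bouhali", "4523"));
        System.out.println(findUser(users, "amine", "12345")); // 0
        System.out.println(findUser(users, "amine", "wrong")); // -1
    }
}
```

In your onClick you would then branch on the returned index instead of the Tester variable: a non-negative index means login succeeded.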
\section{Introduction}
\IEEEPARstart{H}{umanoid} robots have been widely studied for decades in pursuit of better mobility, higher dexterity and stronger intelligence. Making humanoids work in human-tailored environments or interact with humans, however, still faces enormous problems. As one of the most fundamental skills for legged robots, balance maintenance is of primary importance for a humanoid. Balance control approaches based on simplified models were well studied in early works. The robot is simplified as a single rigid body mounted on top of an inverted pendulum, with the regulation of the center of mass (CoM) as the control objective \cite{2016li}. Further, a simplified three-link planar model with more information is proposed in \cite{2013nenchev}. These approaches, which utilize the ankle or hip joints, have proved practical on real robots but suffer from unnatural behavior because they fail to account for the motion of all joints.
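The simplified model mentioned above can be illustrated with a linear inverted pendulum (LIPM): the CoM accelerates away from the ZMP as $\ddot{x} = (g/z)(x - p)$. The sketch below is purely illustrative; the height, time step, and names are assumptions.

```python
# Linear inverted pendulum at constant CoM height z with ZMP input p.
import numpy as np

def simulate_lipm(x0, v0, zmp, z=0.8, g=9.81, dt=0.002, steps=500):
    w2 = g / z                      # squared natural frequency
    x, v = x0, v0
    traj = []
    for _ in range(steps):
        a = w2 * (x - zmp)          # com_ddot = (g/z) * (com - zmp)
        v += a * dt                 # semi-implicit Euler integration
        x += v * dt
        traj.append(x)
    return np.array(traj)
```

Placing the ZMP at the CoM holds the pendulum still, while any offset makes the CoM diverge exponentially, which is why the simplified-model controllers above regulate the CoM.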
To fully exploit the capability of redundant joints, whole-body control (WBC) has been proposed and has gained growing interest within the robotics community over the past twenty years. The existing WBCs can be categorised into: (1) null space projection based WBC (NSP-WBC), (2) weighted quadratic program based WBC (wQP-WBC) and (3) hierarchical quadratic program based WBC (hQP-WBC). The NSP-WBC implements the hierarchy of tasks via null space projections. Such an idea originates from the redundancy control of robot manipulators \cite{1987nakamura} and was first extended to the walking of the HRP-2 robot in \cite{2003kajita}. However, this method is limited in that it cannot take inequality constraints into account. For torque-controlled robots, dynamic constraints such as joint torque limits are highly necessary for the robot's safety. The wQP-WBC handles this problem by formulating the tasks and constraints as a quadratic optimization problem. It finds the optimal solution that minimizes the task errors while satisfying the constraints. This method was applied on the Atlas robot to execute multiple tasks during the DARPA Robotics Challenge \cite{2014dai}\cite{2015feng}\cite{2016Koolen}. However, a strict task hierarchy cannot be guaranteed in this method; only a soft hierarchy can be realized by tuning the task weights. This limitation becomes a concern when tasks conflict with each other.
The hQP-WBC has been devised to combine task hierarchy and inequality constraints together \cite{2010de} \cite{2011kanoun} \cite{2014escande}. The basic idea is to construct each layer as a quadratic optimization problem and solve the layers in a sequence such that lower priority QPs cannot disturb higher priority QPs. As a result, the inequality constraints stack up in the QPs as the priority decreases, leading to a time-consuming problem. \cite{2016herzog} proposed an efficient way to reduce the computational cost by introducing null space projection and implemented it on the torque-controlled robot Sarcos with a 1 KHz control loop. Similarly, \cite{2016bellicoso} implemented this method on the quadruped robot ANYmal, showing natural adaptation to the terrain while walking. In short, the hQP-WBC has become a generic whole-body motion generation tool for torque-controlled robots.
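The null space projection idea underlying NSP-WBC (and reused in the efficient hQP variants) can be sketched at the velocity level. This is a hedged, inequality-free illustration with hypothetical task Jacobians, following the classic prioritized-redundancy recursion: each task is solved in the null space of all higher-priority tasks.

```python
# Strict task priority via recursive null-space projection.
import numpy as np

def prioritized_velocities(tasks, n_dof):
    """tasks: list of (J, xdot_des), highest priority first.
    Lower-priority tasks cannot disturb higher-priority ones."""
    qdot = np.zeros(n_dof)
    N = np.eye(n_dof)                    # accumulated null-space projector
    for J, xd in tasks:
        JN = J @ N                       # Jacobian restricted to free space
        JN_pinv = np.linalg.pinv(JN)
        qdot = qdot + JN_pinv @ (xd - J @ qdot)
        N = N @ (np.eye(n_dof) - JN_pinv @ JN)   # shrink the null space
    return qdot
```

With two conflicting one-dimensional tasks on a 2-DoF system, the higher-priority task is satisfied exactly and the lower one only as far as the remaining null space allows, which is precisely the strict hierarchy that wQP-WBC cannot guarantee.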
For achieving highly dynamic motions, proprioceptive actuation has been proposed and implemented in legged robots \cite{2012seok}. The torque is controlled directly by the motor's current, with the torque loss in the gear reducer well compensated. Thanks to its high bandwidth and high torque density, the MIT Cheetah robot demonstrated dynamic contact interactions in its high-speed locomotion \cite{2017wensing}. However, the joint friction becomes non-negligible as the gear reduction ratio grows larger, thus requiring an off-line model identification process.
Besides the high bandwidth characteristic of the actuators, the overall performance of the robot highly depends on the control frequency. \cite{2016kim} noticed in the torque-controlled robot Mercury that increasing the control frequency from 1 KHz to 1.5 KHz provides a larger posture and foot position control bandwidth. This puts forward a higher demand on the real-time computation of WBC. Moreover, a faster WBC is also preferred when other time-consuming techniques such as model predictive control are embedded into the control framework.
This paper mainly focuses on the systematic integration of algorithm, software and hardware in the dynamic balancing of a humanoid robot with proprioceptive actuation. We first customize a reasonable hierarchy of tasks and constraints for adapting to disturbances. Then a reduced whole-body control is implemented and solved in real time by computationally efficient WBC software. A modular master control system, UBTMaster, is also designed to provide real-time communication and powerful computing capability on the hardware side. Next, a model identification process is conducted on the robot to address the joint friction and model inaccuracy issues. Finally, extensive experiments on various balancing scenarios are implemented on the Walker3 robot with proprioceptive actuation. The balance performance under various kinds of disturbances is discussed.
Walker3 is a humanoid robot consisting of two legs and a torso, as shown in Fig. \ref{fig_robot_model}. The robot is 1.6 m tall and weighs 43 kg. An inertial measurement unit (IMU) is mounted on the torso for state estimation. Two 6-axis force sensors are installed on the soles to measure the center of pressure (CoP) of each foot. Each leg has 6 electrical motors in series. The motors use gear reducers to enlarge the output torque, with a gear reduction ratio of up to 100. The motors are controlled in real time using the EtherCAT communication protocol.
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Robot_Model.pdf}
\caption{Walker3 humanoid robot with 12 actuated DoFs (a), and its kinematic model (b).}
\label{fig_robot_model}
\end{figure}
\section{Control Approach}
To adapt to the uncertain disturbances, an online task planner is proposed to make adjustments on the task trajectory. The basic idea of this planner is to ensure the foot ZMP resides in the safe region as much as possible. Then the desired trajectory is tracked by a reduced whole-body controller coupled with a hierarchical optimization solver. The hierarchy of tasks and constraints is divided into four layers according to their priorities. Each layer is solved by a quadratic optimization and the strict hierarchy among layers is guaranteed by the null space projection. After solving a sequence of QPs, the whole-body controller outputs the optimized joint torques.
For a robot with proprioceptive actuation, joint torques are converted into joint current commands. The desired joint velocity commands, obtained through numerical integration of the desired joint accelerations, are also considered here to improve the performance of the joint-level control. The whole control architecture is illustrated in Fig. \ref{fig_control_architecture}. The state of the robot is estimated by the kinematics solver from the joint and IMU sensory data.
\begin{figure*}[htp]
\centering
\includegraphics[width=7.0in]{figures/Control_Architecture.pdf}
\caption{Overview of the control architecture}
\label{fig_control_architecture}
\end{figure*}
\section{Tasks and constraints in dynamic balancing}
\subsection{Task planner}
\label{Sec_control_approach_ZMP_based_planner}
In dynamic environments, the robot needs to tune the desired task motion properly. The task planner presented here makes real-time adjustments to the task trajectory according to the ZMP state.
Different from the method in \cite{2016herzog}, we have no extra sensors to obtain information about the moving support; only the force sensors detect the contact between foot and support. When the robot rests stably on a moving support, the ZMP of each foot must reside inside its support polygon. Therefore, the desired motion of each foot is set by the following rule: if the measured ZMP is inside the safe region of the support polygon, we set the desired position, posture and velocity of the foot to its current state, ${r_{F,ref}} = {r_F} $ and ${v_{F,ref}} = {v_F}$.
In addition, the desired motion of the CoM should be adjusted along with the desired foot motion. The desired horizontal position is located in the middle of the two feet, $ {r_{{\rm{CoM}},ref}}\left( {x,y} \right){\rm{ = }}\left( {{r_{LF,ref}}\left( {x,y} \right) + {r_{RF,ref}}\left( {x,y} \right)} \right)/2 $, and the desired vertical position is set to $ {r_{{\rm{CoM}},ref}}\left( z \right) = \left( {{r_{LF,ref}}\left( z \right) + {r_{RF,ref}}\left( z \right)} \right)/2 + {\rm{C}} $, where $\rm C $ is a constant depending on the robot's standing pose. The desired velocity is set to the average velocity of the two feet, $ {v_{{\rm{CoM}},ref}}{\rm{ = }}\left( {{v_{LF,ref}} + {v_{RF,ref}}} \right)/2 $. In our planner, the desired motion of the torso and the desired foot contact forces remain unchanged throughout the balance control: the desired torso motion is set to $ {r_{T,ref}}{\rm{ = }}0, {v_{T,ref}}{\rm{ = }}0 $ and the desired vertical contact force is set to $ {F_{LF,ref}}\left( z \right){\rm{ = }}{F_{RF,ref}}\left( z \right){\rm{ = }}mg/2 $, where $m$ is the total mass of the robot.
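For concreteness, the planner rules above can be sketched in a few lines of Python. This is a simplified illustration under our own assumptions: the function names are hypothetical, the safe region is approximated as a rectangle given by half-extents, and the stand-pose constant is passed in as C.

```python
import numpy as np

def plan_foot_reference(zmp_xy, safe_half_extents, r_foot, v_foot,
                        r_foot_ref, v_foot_ref):
    """One foot's rule: if the measured ZMP lies inside the safe region of
    the support polygon, latch the reference onto the current foot state;
    otherwise keep the previous reference."""
    if np.all(np.abs(zmp_xy) <= safe_half_extents):
        return r_foot.copy(), v_foot.copy()
    return r_foot_ref, v_foot_ref

def com_reference(r_lf_ref, r_rf_ref, v_lf_ref, v_rf_ref, C):
    """CoM reference: horizontal midpoint of the two foot references,
    vertical midpoint plus the stand-pose constant C; velocity is the
    average of the two foot reference velocities."""
    r_com = 0.5 * (r_lf_ref + r_rf_ref)
    r_com[2] += C
    v_com = 0.5 * (v_lf_ref + v_rf_ref)
    return r_com, v_com
```

The torso reference and the desired vertical contact forces stay constant, as stated above, so they need no update step.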
\subsection{Tasks and constraints hierarchy}
\label{Sec_control_approach_tasks_and_constraints_hierarchy}
When multiple tasks have to be performed simultaneously, how to handle the conflicts among their objectives is crucial. Prioritized hierarchy strategies have been widely adopted for redundant robots. The motion solver accomplishes the lower-priority tasks as well as possible under the prerequisite that the higher-priority tasks are satisfied first. For example, balance is always considered a top-priority task for a humanoid robot: the robot tends to sacrifice its posture under disturbance to ensure that the feet stay fully in contact with the ground. Likewise, physical constraints concerning the robot's safety should be put at the highest priority. Table \ref{table_task_constraint_hierarchy} specifies the hierarchy of tasks and constraints.
\begin{table*}[htbp]
\centering
\caption{Tasks and constraints hierarchy}
\renewcommand\arraystretch{1.5}
\begin{tabular}{@{}lllll@{}}
\toprule
Level & Task & Task Dimensions & Constraint & Constraint Dimensions \\ \midrule
1 & Floating base dynamics & 6 & Joint torque saturation & 12 \\
2 & Foot position and posture & 12 & \begin{tabular}[c]{@{}l@{}}Center of pressure\\ Friction cone\end{tabular} & 18 \\
3 & Linear momentum & 3 & & \\
4 & \begin{tabular}[c]{@{}l@{}}Torso posture\\ Foot contact force\end{tabular} & 15 & & \\ \bottomrule
\end{tabular}
\label{table_task_constraint_hierarchy}
\end{table*}
\subsubsection{Floating base dynamics}
Humanoid robots, as typical floating-base systems, are a particular case of underactuated systems due to the partial actuation that arises when they make contact with the environment. The configuration of a humanoid robot is represented by the generalized coordinates $ q = {\left[ {\begin{array}{*{20}{c}}
{q_f^{\rm{T}}}&{q_a^{\rm{T}}}
\end{array}} \right]^{\rm{T}}} $, where $ {q_f} $ represents the position and orientation of the free-floating base and $ {q_a} $ represents the $ {n} $ actuated joints of the robot. When the robot is in contact with the environment, the dynamics of the system can be fully described by
\begin{equation}
{S_f}M\ddot q + {S_f}C + {S_f}G = {S_f}{J^{\rm{T}}}{F}
\end{equation}
where $ {M} $ is the generalized inertia matrix, $ {C} $ is the nonlinear vector of Coriolis and centrifugal forces, $ {G} $ is the gravity vector, $ {F} $ is the contact force vector, and $ {J} $ is the Jacobian matrix of the contact points. $ {S_f} = \left[ {\begin{array}{*{20}{c}}
I&0
\end{array}} \right] $ is a matrix selecting the free-floating joints; thus, the actuated torque vector $ {\tau} $ is eliminated from the dynamic equation. Instead, choosing the complementary selection matrix $ {S_a} = \left[ {\begin{array}{*{20}{c}}
0&I
\end{array}} \right] $ yields a linear relation between $ {\tau} $ and $ {\ddot q} $, $ {F} $. Owing to this linear dependence, the full-body dynamics of
$ {n+6} $ dimensions are reduced to the 6-dimensional floating-base dynamics. This reduction is crucial for cutting the optimization time and helps realize the 1 kHz control loop.
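As a numerical illustration of this selection-matrix split (a sketch with numpy standing in for the robot's dynamics library; the helper name and the stacking order of $q$, floating coordinates first, are our assumptions):

```python
import numpy as np

def floating_base_split(M, h, J, F, qdd, n_act):
    """Split M*qdd + h = J^T F + S_a^T tau (with h = C + G) into the 6
    floating rows, whose residual must vanish for (qdd, F) to be
    physically feasible, and the actuated rows, which return the joint
    torques tau as a linear expression."""
    n = M.shape[0]
    Sf = np.hstack([np.eye(6), np.zeros((6, n - 6))])              # floating rows
    Sa = np.hstack([np.zeros((n_act, n - n_act)), np.eye(n_act)])  # actuated rows
    lhs = M @ qdd + h - J.T @ F
    residual = Sf @ lhs   # 6-dim residual used at the top priority level
    tau = Sa @ lhs        # actuated torques recovered after the optimization
    return residual, tau
```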
As the highest priority task, the dynamics equation is the essential description of a physical multi-rigid-body system: once it holds, the motions of the system are physically feasible. The second-priority task is foot position and posture control; good tracking here guarantees a good contact between foot and support, which is a premise for the contact force. Linear momentum control, the third-priority task, has been proved essential for good balance by regulating the CoM state \cite{2012lee}. At the lowest priority, we place the torso posture control and the foot contact force control on the same level. Given that there are only 30 optimization variables (18 for $ {\ddot q} $ and 12 for $ {F} $) while the dynamics equation, the foot position and posture task, and the linear momentum task lock 6, 12 and 3 variables respectively, only 9 free variables are left for the torso posture and foot contact force control.
\subsubsection{Operational space tasks}
Operational space tasks such as foot position and posture control, linear momentum control and torso posture control can be phrased as
\begin{equation}
{J}\ddot q + {\dot J}\dot q = {a}
\end{equation}
where $ {J} $ is the Jacobian matrix of a specific task and $ a $ is the task's desired acceleration, determined by a feedforward-plus-feedback control law. The $ {\dot J}\dot q $ term depends on the robot state. In particular, the Jacobian matrix of the linear momentum task is also known as the centroidal momentum matrix and can be calculated using an efficient O($ n $) algorithm based on the generalized inertia matrix $ {M} $ \cite{2016wensing}.
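A feedforward-plus-feedback law of this kind can be sketched as follows (the function names and gain values are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def task_acceleration(x_ref, x, v_ref, v, a_ff, Kp, Kd):
    """Desired task acceleration a = a_ff + Kp*(x_ref - x) + Kd*(v_ref - v)."""
    return a_ff + Kp * (x_ref - x) + Kd * (v_ref - v)

def task_rows(J, dJdq, a_des):
    """Stack the task J*qdd = a - dJ*dq as the linear equality A*qdd = b
    consumed by the hierarchical solver."""
    return J, a_des - dJdq
```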
\subsubsection{Robot safety constraints}
Given the physical limitations of the robot, several safety issues have to be properly addressed. The joint torque saturation constraint $ {\tau _{\min }} \le \tau \le {\tau _{\max }} $ is especially important for generating control commands that are valid on the real robot.
A stable contact between foot and support is an essential precondition for generating a 6-dimensional contact wrench, which means the foot is not allowed to tilt or slide relative to the support. Stable contact is ensured from two aspects. One is the center of pressure constraint: the center of pressure at each foot must not exceed the boundary of the foot's support polygon. The other is the friction cone constraint, which requires that the foot contact force stay inside the friction cone. The cones are approximated as pyramids here so that the constraints can be expressed as linear inequalities.
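These two linear inequalities can be written explicitly. The sketch below makes several assumptions of our own: a rectangular foot with half-lengths (lx, ly), force ordering [Fx, Fy, Fz] for the pyramid, and the CoP convention CoP_x = -My/Fz, CoP_y = Mx/Fz in the foot frame.

```python
import numpy as np

def friction_pyramid_rows(mu):
    """Friction pyramid on F = [Fx, Fy, Fz], written as D*F <= 0:
    |Fx| <= mu*Fz, |Fy| <= mu*Fz, Fz >= 0."""
    return np.array([
        [ 1.0,  0.0, -mu],
        [-1.0,  0.0, -mu],
        [ 0.0,  1.0, -mu],
        [ 0.0, -1.0, -mu],
        [ 0.0,  0.0, -1.0],
    ])

def cop_rows(lx, ly):
    """CoP inside a rectangular foot for the wrench w = [Fx,Fy,Fz,Mx,My,Mz]:
    -lx <= -My/Fz <= lx and -ly <= Mx/Fz <= ly, linearized for Fz > 0
    as D*w <= 0."""
    return np.array([
        #  Fx   Fy    Fz    Mx    My   Mz
        [0.0, 0.0, -lx,  0.0, -1.0, 0.0],   # -My - lx*Fz <= 0
        [0.0, 0.0, -lx,  0.0,  1.0, 0.0],   #  My - lx*Fz <= 0
        [0.0, 0.0, -ly,  1.0,  0.0, 0.0],   #  Mx - ly*Fz <= 0
        [0.0, 0.0, -ly, -1.0,  0.0, 0.0],   # -Mx - ly*Fz <= 0
    ])
```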
\section{Real-time WBC}
\subsection{Reduced hierarchical whole-body control}
\label{Sec_control_approach_whole_body_control}
The tasks can be formulated as equalities and the constraints as inequalities, so the tasks and constraints at the same level can be stacked vertically into the form
\begin{equation}
\label{eq:task_and_constraint}
\left\{ {\begin{array}{*{20}{c}}
{{{{A}}_i}{{x}} - {{{b}}_i} = 0}\\
{{{{D}}_i}{{x}} - {{{f}}_i} \le 0}
\end{array}} \right.
\end{equation}
where $ {{{A}}_i} $ is the $ i $th task matrix, $ {{{b}}_i} $ is the $ i $th task reference vector, $ {{{D}}_i} $ is the $ i $th constraint matrix, $ {{{f}}_i} $ is the $ i $th constraint boundary vector, and $ {{x}} = {\left[ {\begin{array}{*{20}{c}}
{{{{{\ddot q}}}^{\rm{T}}}}&{{{{F}}^{\rm{T}}}}
\end{array}} \right]^{\rm{T}}} $ is the vector of optimization variables. The goal of each level is to find $ {{{\ddot q}}} $ and $ {{F}} $ that satisfy these objectives as well as possible; under such linear inequality constraints, the solution can be obtained by quadratic programming. The tasks and constraints at different levels must be optimized in a strict prioritized order. Solving level $ p $ yields an optimal solution $ {{x}}_p^* $. To ensure strict prioritization of the tasks, the solution of level $ p+1 $ is sought in the null space of all higher-priority tasks, $ {{N}}_p^{} = {{N}}_{p - 1}^{}\left( {{{I}} - {{\hat A}}_p^\# {{{{\hat A}}}_p}} \right) $, where $ {{N}}_{p - 1}^{} $ is the null space projector of all tasks from level 1 to $ p-1 $, $ \left( {{{I}} - {{\hat A}}_p^\# {{{{\hat A}}}_p}} \right) $ is the null space projector of the task at level $ p $, and $ {{\hat A}_p} = {A_p}{N_{p - 1}} $ is the task matrix of level $p$ projected into the null space of all higher-priority tasks. The solution of level $ p+1 $ is written as ${{{x}}_{p + 1}}{ = }{{x}}_p^* + {{{N}}_p}{{{u}}_{p + 1}}$, where ${{{u}}_{p + 1}}$ is an arbitrary vector lying in the row space of ${{{N}}_p}$. Substituting ${{{x}}_{p + 1}}$ into the QP problem at level $ p+1 $ yields
\begin{equation}
\begin{array}{l}
\mathop {\min .}\limits_{{{{u}}_{p + 1}}} \;\;\;\;\;\;\;{\left\| {{{{A}}_{p + 1}}\left( {{{x}}_p^* + {{{N}}_p}{{{u}}_{p + 1}}} \right) - {{{b}}_{p + 1}}} \right\|^2}\\
\;\;\;{\rm{s}}.{\rm{t}}.\;\;\;\;\;\;\;\;\;\;\;\;\;{{{D}}_{p + 1}}\left( {{{x}}_p^* + {{{N}}_p}{{{u}}_{p + 1}}} \right) - {{{f}}_{p + 1}} \le 0\\
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;{{{D}}_p}\left( {{{x}}_p^* + {{{N}}_p}{{{u}}_{p + 1}}} \right) - {{{f}}_p} \le 0\\
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\; \vdots \\
\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;{{{D}}_1}\left( {{{x}}_p^* + {{{N}}_p}{{{u}}_{p + 1}}} \right) - {{{f}}_1} \le 0
\end{array}
\end{equation}
All the higher-priority constraints are stacked into the optimization to ensure their strict prioritization. Using this recursive algorithm, the QP at each level is solved sequentially according to its priority.
Different from the general formulation in \cite{2016herzog}, the slack variables are omitted from the optimization. Slack variables are originally introduced to turn hard constraints into soft ones. In our case, however, we observe that the optimized slack variables are always zero, meaning the solver can find the optimal result without violating the hard constraints. Therefore, the slacks are excluded from the optimization variables to reduce the computational complexity. This simplification is not fully general, but it proved computationally efficient in our physical experiments.
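The recursion can be sketched for the equality-only case (the inequalities, which the paper handles inside each QP, are omitted here, and the pseudo-inverse plays the role of $\hat A_p^\#$; the function name is ours):

```python
import numpy as np

def hierarchical_solve(tasks):
    """Prioritized least squares: each level p minimizes ||A_p x - b_p||^2
    over x = x* + N u, where N is the accumulated null-space projector of
    all higher-priority task matrices, so lower levels never disturb
    higher ones."""
    n = tasks[0][0].shape[1]
    x = np.zeros(n)
    N = np.eye(n)
    for A, b in tasks:
        A_hat = A @ N                              # task projected into remaining null space
        u = np.linalg.pinv(A_hat) @ (b - A @ x)    # best correction at this level
        x = x + N @ u
        N = N @ (np.eye(n) - np.linalg.pinv(A_hat) @ A_hat)
    return x
```

A lower-priority task that conflicts with a higher one finds its projected matrix rank-deficient and therefore cannot alter the higher-priority solution.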
To implement the above algorithm efficiently, a WBC software is developed in C++. Fig. \ref{fig_wbc_architecture} depicts its architecture. The software contains four basic classes: RobotDynamics, Task, Constraint and Wbc. These classes provide the basic interfaces for user development, and the derived classes for Walker3 are implemented on top of them. The following paragraphs describe these classes in detail.
The RobotDynamics class contains the member variables related to the kinematics and dynamics of a robot, such as the number of generalized joints, the number of contact forces, the inertia matrix, the Coriolis and centrifugal vector, the gravity vector, the selection matrices, the Jacobian matrices and so on. For the Walker3 robot, it is implemented by a subclass named RobotDynamics\_Walker3. The model structure of Walker3 can be constructed in the subclass directly or loaded from URDF files. Calling the calcWbcDependence() function computes all the required kinematics and dynamics quantities. The open-source Rigid Body Dynamics Library (RBDL), a highly efficient C++ library containing essential rigid-body dynamics algorithms \cite{2014featherstone}, is used here.
The Task and Constraint classes are constructed according to Eq. (\ref{eq:task_and_constraint}), with member variables including the task's or constraint's name, priority, dimension, matrix, reference (or boundary) vector, and the DoF of the variables. Each task or constraint of Walker3 is implemented by a subclass, thus forming a task and constraint library. Calling the update(const RobotDynamics \&) function updates the member variables with the computed kinematics and dynamics quantities.
The Wbc class, the core of the software, holds pointers to the other three classes. It manages the addition, deletion and adjustment of tasks and constraints. As its implementations, two subclasses based on different algorithms are developed. One, named WqpWbc, forms all the tasks and constraints as one quadratic program \cite{2015feng}. The other, named HqpWbc, implements the hierarchical quadratic optimization described above.
Benefiting from this software, much redundant code is avoided, enhancing development efficiency. The developers have also built the dynamic model of a quadruped together with its task and constraint library, and users can build their own robots without rewriting the WBC solver code.
\begin{figure*}[htp]
\centering
\includegraphics[width=7.0in]{figures/Wbc_Software_Architecture.pdf}
\caption{The architecture of WBC software}
\label{fig_wbc_architecture}
\end{figure*}
\subsection{High-performance master control system}
The WBC software is embedded in a modular master control system named UBTMaster, shown in Fig. \ref{fig_master_control_system}(a). It is designed for strong real-time computation, extensible computing capability and configurable interfaces. It can realize different combinations of typical hardware platforms such as ARM, GPU, X86 and DSP, and allows the computing capability to be expanded for different scenarios. The X86 basic edition delivers about 100 GFLOPS with an Intel Core i7-7600U.
The software architecture is illustrated in Fig. \ref{fig_master_control_system}(b). The real-time operating system, based on the PREEMPT\_RT kernel, serves real-time applications that process data as it arrives, typically without buffer delays, ensuring that each task completes within its defined time constraints. In the real-time communication layer, we apply the high-speed real-time bus protocol EtherCAT for short data update times (also called cycle times; $ {\le} $ 100 us) with low communication jitter (for precise synchronization; $ {\le} $ 1 us). Together, these two measures keep the timing jitter at the microsecond level.
In the upper layer, roboCore runs in a real-time process and isolates the applications from the hardware platform. Users can develop their own applications on top of it to meet specific requirements.
\begin{figure*}[htp]
\centering
\includegraphics[width=7.0in]{figures/UBTMaster.pdf}
\caption{The modular master control system (a) and its software architecture (b)}
\label{fig_master_control_system}
\end{figure*}
\section{Proprioceptive actuation with large reduction ratio}
\subsection{Joint-level control}
\label{Sec_joint_level_control}
Given the inputs, the WBC software outputs the optimized joint torques. However, the lack of torque sensors in proprioceptive actuation does not allow direct torque control. A common workaround is to use an admittance coupling to convert joint torque to joint velocity \cite{2012Dietrich}. Since the bandwidth of admittance control would limit the torque tracking performance, we use direct current control instead.
For proprioceptive actuation with a small reduction ratio, the joint current can be approximated as a linear function of the joint torque because the torque loss in the reducer is negligible. With a large reduction ratio, however, there is a significant amount of static friction in the reducer; this stiction translates into a joint torque stiction of up to 5 Nm. Consequently, joint friction compensation is essential. Besides, the desired joint velocity obtained by integrating the optimized joint acceleration is added to the joint current command as a kinematic compensation term to improve the joint impedance \cite{2016Koolen}.
In our paper, the final control law for joint current is calculated as
\begin{equation}
{i^{{\rm{cmd}}}}{\rm{ = }}{k_i}\left( {{\tau ^{{\rm{opt}}}}{\rm{ + }}{k_f}{\tau ^f} + {\tau ^{\dot q}}} \right)
\end{equation}
where $ {{\tau ^{{\rm{opt}}}}} $ is the optimized joint torque, $ {{\tau ^f}} $ is the joint friction compensation torque, $ {{k_f}} $ is the corresponding friction compensation coefficient, $ {{\tau ^{\dot q}}} $ is the joint kinematic compensation torque. $ {{\tau ^{{{opt}}}}} $ ,$ {{\tau ^f}} $ and $ {{\tau ^{\dot q}}} $ can be expressed as
\begin{equation}
{\tau ^{opt}} {\rm{ = }}S_a^{}M{\ddot q^{opt}} + S_a^{}\left( {C + G} \right) - S_a^{}{J^{\rm{T}}}{F^{opt}}
\end{equation}
\begin{equation}
{\tau ^f}{\rm{ = }}\left\{ {\begin{array}{*{20}{l}}
{{F_c} + {F_v}\left( {\int {{{\ddot q}^{opt}}dt} - {{\dot q}^*}} \right),{\kern 1pt} {\kern 1pt} {\kern 1pt} \int {{{\ddot q}^{opt}}} dt \ge {{\dot q}^*}}\\
{\int {{{\ddot q}^{opt}}} dt\frac{{{F_c}}}{{{{\dot q}^*}}},{\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} {\kern 1pt} - {{\dot q}^*} \le \int {{{\ddot q}^{opt}}} dt \le {{\dot q}^*}}\\
{ - {F_c} + {F_v}\left( {\int {{{\ddot q}^{opt}}} dt + {{\dot q}^*}} \right),{\kern 1pt} {\kern 1pt} {\kern 1pt} \int {{{\ddot q}^{opt}}} dt \le - {{\dot q}^*}}
\end{array}} \right.
\end{equation}
\begin{equation}
{\tau ^{\dot q}}{\rm{ = }}{k_{\dot q}}\left( {\int {{{\ddot q}^{{\rm{opt}}}}dt} - \dot q} \right)
\end{equation}
where $ {{\tau ^{opt}}} $ is reorganized from the full dynamics. The joint friction compensation torque $ {\tau ^f} $ is modeled with two parts, Coulomb and viscous friction: $ {F_c} $ is the Coulomb friction and $ {F_v} $ is the viscous friction coefficient. $ {{\dot q}^*} $ is a user-defined threshold that prevents a sudden jump in the compensation torque as the motor reverses direction. $ {k_{\dot q}} $ is a gain acting on the difference between the desired joint velocity $ \int {{{\ddot q}^{{\rm{opt}}}}dt} $ and the measured joint velocity $ \dot q $.
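Per joint, the control law above reduces to a few lines (a scalar sketch; the function names and any parameter values used below are placeholders, not the identified ones):

```python
def friction_torque(qd_des, Fc, Fv, qd_star):
    """Coulomb-viscous compensation with a linear band of width 2*qd_star
    around zero velocity to avoid a torque jump at direction reversal."""
    if qd_des >= qd_star:
        return Fc + Fv * (qd_des - qd_star)
    if qd_des <= -qd_star:
        return -Fc + Fv * (qd_des + qd_star)
    return qd_des * Fc / qd_star          # linear interpolation inside the band

def current_command(tau_opt, tau_f, tau_qd, k_i, k_f):
    """Joint current command i = k_i * (tau_opt + k_f*tau_f + tau_qd)."""
    return k_i * (tau_opt + k_f * tau_f + tau_qd)
```

Note that the three branches of the friction term meet continuously at $\pm\dot q^*$, which is exactly the purpose of the linear band.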
\subsection{Model identification}
\label{sec_model_identification}
Model identification is an effective way to obtain a robot's dynamic parameters. Besides the dynamic parameters of interest such as the links' mass, inertia and center of mass, the joint friction model can also be incorporated into the linearized dynamic equation \cite{2018Gaz}. Here, the Coulomb-viscous friction model is preferred due to its linear expression.
The main process can be divided into three parts.
\subsubsection{Linearization of dynamic equation}
Using the recursive Newton-Euler algorithm, the joint torque $ {\tau} $ is reorganized as a linear function of the dynamic parameters $ {\pi} $ (including the joint friction parameters), $ \tau = Y\pi $, where $ Y $ is the identification matrix, uniquely determined by the joint motion $ q $, $ {\dot q} $ and $ {\ddot q} $.
\subsubsection{Optimal excitation trajectory}
The excitation trajectory is first parameterized by a finite Fourier series and then optimized for the minimum of a user-defined cost function \cite{2016siciliano} while satisfying the constraint conditions. The number of harmonics and the base frequency of the Fourier series are set to 5 and 0.1 Hz, respectively. The condition number of the $ Y $ matrix is closely related to the mean square error of the identification results and is therefore selected as the cost function.
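The parameterization can be sketched with the standard finite Fourier series for periodic excitation (the coefficient names are ours; the paper does not give the exact form of its series):

```python
import numpy as np

def fourier_trajectory(t, a, b, q0, wf):
    """Joint position from a finite Fourier series with N = len(a)
    harmonics and base angular frequency wf = 2*pi*f0 (f0 = 0.1 Hz and
    N = 5 in the paper); a and b are the coefficients to be optimized
    for a small condition number of the identification matrix Y."""
    q = q0
    for k in range(1, len(a) + 1):
        q += (a[k-1] / (wf * k)) * np.sin(wf * k * t) \
           - (b[k-1] / (wf * k)) * np.cos(wf * k * t)
    return q
```

The trajectory is periodic with period 1/f0, which eases repeated data collection over many cycles.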
\subsubsection{Dynamic parameters optimization}
An optimization problem is constructed to find the optimal dynamic parameters $ {\pi} $ that minimize the error between the measured joint torque and the joint torque predicted by the linear dynamic equation.
Since the motors lack torque sensors, a prior calibration of the current-torque coefficient is used to obtain approximate joint torques from the measured joint currents.
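In the simplest unconstrained case, the fitting step is an ordinary least-squares problem over the stacked samples (a sketch; the paper's optimization may add constraints such as physical consistency of the parameters):

```python
import numpy as np

def identify_parameters(Y_stack, tau_stack):
    """Least-squares fit of tau = Y*pi over all stacked samples."""
    pi, _, _, _ = np.linalg.lstsq(Y_stack, tau_stack, rcond=None)
    return pi

def condition_number(Y_stack):
    """Cost used when optimizing the excitation trajectory: cond(Y)."""
    return np.linalg.cond(Y_stack)
```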
Fig. \ref{fig_model_identification} compares four sets of joint torques in the left leg. The red dotted line indicates the torque measured through the joint current. The blue solid line indicates the torque predicted with the identified dynamic parameters, while the green dotted line indicates the torque predicted with the identified base dynamic parameters. The black dotted line indicates the theoretical torque calculated with parameters obtained from the 3D model. Obviously, there is a large error between the measured and theoretical torques, mainly because the joint friction torque is not considered in the theoretical calculation.
Not surprisingly, the error between the measured and predicted torques is very small. Several torque jumps are measured when a motor changes its rotation direction; thanks to the Coulomb friction term in our model, the predicted torque curve follows the measured one closely even there. This directly proves that the identified dynamic parameters reflect the actual dynamic characteristics and can be used for modeling in the real control system.
\begin{figure*}[htp]
\centering
\includegraphics[width=7.0in]{figures/Dynamic_Parameter_Identification.pdf}
\caption{The comparison of joint torque in left leg}
\label{fig_model_identification}
\end{figure*}
\section{Experimental validation}
The control approach described above is experimentally evaluated on the Walker3 humanoid. The balance performance is tested in three scenarios: push recovery on the ground, balancing on a seesaw, and push recovery on two moving skateboards. A summary video of the experiments is available in the supplementary materials.
\subsection{Push recovery on the ground}
The robot is subjected to impulses from the X and Y directions while standing on the ground. A ball weighing 5 kg is used to generate impulses at the torso of the robot. The impulses are quantified by the ball's momentum, given its known mass and velocity. Fig. \ref{fig_push_recovery} shows a series of snapshots when the robot is struck along the X-axis and Y-axis. Eight impulses along the X-axis and seven along the Y-axis are exerted on the robot.
For the push recovery along X-axis, the index and magnitude of impulses are listed in Tab. \ref{tab_impulses_magnitude}.
\begin{table}[htp]
\centering
\caption{The magnitude of impulses along X-axis}
\label{tab_impulses_magnitude}
\renewcommand\arraystretch{2}
\begin{tabular}{lllll}
\hline
Index & 1 & 2 & 3 & 4$\sim$8 \\ \hline
Magnitude & 8Ns & 10Ns & 11Ns & 12Ns \\ \hline
\end{tabular}
\end{table}
Several key features of the tasks are plotted in Fig. \ref{fig_push_recovery_task}. The peak of the foot pitch angle gradually increases as the impulses become larger, reaching 1.5$^\circ$ at the maximum impulse of 12 Ns. Theoretically, the foot pitch angle should be zero because the CoP constraint is considered in the hierarchical optimization; given that there is a carpet between the foot and the ground, such a small pitch angle is acceptable.
Fig. \ref{fig_push_recovery_task}(b) plots the CoM position along the X-axis, which fluctuates between -29 mm and 56 mm under the continuous impulses. Fig. \ref{fig_push_recovery_task}(c) plots the pitch angle of the torso. The robot rotates its upper body to preserve the foot posture and the CoM position, much like a human rotating the trunk to maintain balance. The maximum torso pitch angle is 25.7$^\circ$; further increasing the impulse would cause a balance failure because the torso pitch angle would exceed the joint angle limits.
For the three tasks in Fig. \ref{fig_push_recovery_task}, a tiny steady-state error always remains even though the tasks' feedback control laws work: the control command resulting from such a tiny error is not enough to drive the joints. Accordingly, in the video the robot does not recover to its original state after the disturbances.
Fig. \ref{fig_push_recovery_error}(a) compares the measured ZMP with the optimized one. The measured ZMP is calculated from the force sensors, while the optimized ZMP is calculated from the optimized foot contact force. Ideally, the measured ZMP should follow the optimized one closely, but in practice there is a slight difference, meaning the motors cannot generate exactly the optimized foot contact force. The main reason is the joint torque error due to the limited identification accuracy of the joint friction model in Sec. \ref{sec_model_identification}.
In addition, the optimized ZMP reaches its boundary several times. This indicates a strong linear dependence among the optimized foot contact force components, which directly reduces the dimensionality of the optimization variables. As a result, the robot tends to sacrifice the lowest-priority task due to the lack of DoFs. As evidence, Fig. \ref{fig_push_recovery_error}(b) plots the task error $ error = {\left\| {Ax - b} \right\|^2} $ of the lowest-priority task. The task error is small enough during the first three impulses; however, a sharp peak with a magnitude of $ {10^3} $ always appears once the impulses reach 12 Ns. Such a huge task error deteriorates the control performance: the robot is designed to exhibit torso compliance according to the PD parameters, but this compliance cannot be ensured owing to the task error. The sacrificed compliance enlarges the amplitude of the torso pitch angle, which further limits the push recovery performance.
\begin{figure*}[htp]
\centering
\includegraphics[width=6.0in]{figures/Push_Recovery.pdf}
\caption{The balancing behavior in push recovery scenario along the X-axis (a), and Y-axis (b).}
\label{fig_push_recovery}
\end{figure*}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Push_Recovery_Task.pdf}
\caption{The measured pitch angle of right foot (a), CoM position along X-axis (b) and pitch angle of torso (c).}
\label{fig_push_recovery_task}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Push_Recovery_Error.pdf}
\caption{The measured and optimized ZMP of right foot (a) and the task error of the lowest priority task (b).}
\label{fig_push_recovery_error}
\end{figure}
\subsection{Balancing on a seesaw}
The robot balances on a seesaw with inclination disturbances along the X-axis and Y-axis. Fig. \ref{fig_seesaw} shows how the robot adapts to the inclined surface to keep balance, and Fig. \ref{fig_seesaw_state} plots the measured orientation and angular velocity of the right foot. Since no additional IMU is mounted on the seesaw to measure its real movement, the measured state of the foot is taken as an approximation of the seesaw's state, which is valid because the ZMP resides inside the foot polygon throughout the test.
The amplitude of the inclination angle along both axes reaches 6$^\circ $, as shown in Fig. \ref{fig_seesaw_state}. Meanwhile, the maximum angular velocity along the Y-axis is 1 rad/s, slightly larger than that along the X-axis (0.6 rad/s). The main reason is that joint friction has a minor influence on the balance performance when the seesaw inclines along the Y-axis: the robot merely needs to modulate its ankle pitch joints to adapt to a Y-axis inclination, whereas it must modulate its ankle roll joints as well as the lengths of both legs to adapt to an X-axis inclination. The presented angular velocity data have been processed by a low-pass filter with a cutoff frequency of 20 Hz.
Fig. \ref{fig_seesaw_zmp} shows the response of the right foot's ZMP during the disturbances. The ZMP components along the X and Y axes never exceed the constraint boundaries defined by the foot geometry. This proves that the task planner works well and the robot can adjust the tasks' targets to resist the disturbance from the seesaw.
\begin{figure*}[htp]
\centering
\includegraphics[width=6.0in]{figures/Seesaw.pdf}
\caption{The balancing behaviors when the seesaw rotates along X-axis (a), and Y-axis (b).}
\label{fig_seesaw}
\end{figure*}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Seesaw_State.pdf}
\caption{The measured orientation and angular velocity of right foot.}
\label{fig_seesaw_state}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Seesaw_ZMP.pdf}
\caption{The measured ZMP of right foot.}
\label{fig_seesaw_zmp}
\end{figure}
\subsection{Push recovery on two moving skateboards}
The final experiment is balance maintenance on two moving skateboards, as shown in Fig. \ref{fig_moving_support}. The two feet of the robot rest on two skateboards separately and suffer inclination and shift disturbances independently. To evaluate the anti-disturbance capability, the right skateboard is moved by hand to translate along the X and Y axes and rotate about the X, Y and Z axes while the left one is locked (Fig. \ref{fig_moving_support}(a)). Fig. \ref{fig_moving_support_velocity} plots the measured velocity of the right foot with each direction tested separately. The shift disturbance along the Z-axis is measured indirectly by rotating the skateboard about the Y-axis, where the foot rises as the tilt angle increases. The robot can resist the moving-skateboard disturbance with maximum linear velocities of 0.94 m/s, 0.89 m/s and 0.47 m/s along the X, Y and Z axes, and maximum angular velocities of 1.8 rad/s, 1.4 rad/s and 0.5 rad/s about the X, Y and Z axes.
The robot can also maintain good balance when the two skateboards not only have different inclination angles but also translate back and forth with out-of-phase velocities. Meanwhile, when 8 Ns impulses along the Y-axis are exerted on the robot while the skateboards keep moving, the robot generates a large torso rotation to keep balance (Fig. \ref{fig_moving_support}(b)).
To achieve a 1 kHz control loop, the on-board computer needs to solve the hierarchical optimization, which contains four quadratic programs, within 1 ms. Fig. \ref{fig_moving_support_time} plots the computation time of the whole algorithm including the state estimation, trajectory planning, whole-body control and joint-level control parts. The average and maximum computation times are 0.363 ms and 0.639 ms. As the most time-consuming portion, the computation time of the whole-body control part is also plotted (yellow line); its average is 0.321 ms, about 88 percent of the total, leaving 0.042 ms for the other parts. Within the whole-body control part, the quadratic programs and the null-space projection matrices are computed sequentially: about 72\% of the time is spent on quadratic optimization and the remaining 28\% on the null-space projections. The quadratic programs are solved with the C++ open-source QP solver qpOASES \cite{2014ferreau}, which implements an active-set algorithm. The null-space projection requires the pseudo-inverse of the task matrix, computed with the complete orthogonal decomposition implemented in the Eigen matrix library.
\begin{figure*}[htp]
\centering
\includegraphics[width=6.0in]{figures/Moving_Support.pdf}
\caption{The balancing behaviors when the right support moves in all directions (a), and the balancing behaviors in push recovery on moving support scenario (b).}
\label{fig_moving_support}
\end{figure*}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Moving_Support_Velocity.pdf}
\caption{The measured velocity of the right foot, with each direction tested separately.}
\label{fig_moving_support_velocity}
\end{figure}
\begin{figure}[htp]
\centering
\includegraphics[width=3.5in]{figures/Moving_Support_Time.pdf}
\caption{The computation time of the algorithm.}
\label{fig_moving_support_time}
\end{figure}
\section{Conclusions}
The purpose of this paper is to endow the proprioceptively actuated humanoid robot with the ability of dynamic balance. To that end, the tasks and constraints are assigned and customized with a reasonable hierarchy. The hierarchical whole-body control is reduced for real-time computation and implemented via computationally efficient WBC software. A modular master control system, UBTMaster, characterized by real-time communication and powerful computing capability, is also designed. With the aid of this software and hardware, users can easily develop real-time applications on their robots without rewriting the WBC solver code. For the robot with proprioceptive actuation, key dynamic parameters are identified to cover the nonlinear joint friction and model inaccuracy problems. The identified model is accurate enough that the predicted torque follows the measured one closely, with a mean residual of less than 1Ns.
With these three aspects carefully considered, the balance performance of the humanoid robot Walker3 is fully tested in various scenarios. The robot can maintain balance under continuous impulses along the X and Y axes with magnitudes up to 12Ns. It tends to rotate its upper body to preserve the foot posture and the CoM position, which is similar to human behavior. In addition, we identify the factor that limits the push-recovery performance: the dimensionality reduction in the optimization variables as they reach the constraint boundary leads to the sacrifice of the lowest-priority task. The higher-priority tasks are not affected thanks to the strict hierarchy. An effective solution to the problem is to provide more redundant DoFs, such as adding dual arms to the robot.
When Walker3 stands on a seesaw, it actively adapts to the inclined surface to keep balance. Different from \cite{2016herzog}, the state of the seesaw is estimated without an additional IMU and then used to update the task trajectories. The experimental results show that the inclination angles along both axes reach up to 6$^\circ$. Meanwhile, the maximum angular velocities along the X and Y axes are 0.6rad/s and 1rad/s respectively, which is about 1.7 to 2.8 times the performance of the torque-controlled robot COMAN \cite{2013li}.
To further exploit the robot's adaptability to uncertain disturbances, we place Walker3 on two moving skateboards and impose inclination and shift disturbances simultaneously. The robot can resist the disturbances with maximum velocities of 0.94m/s, 0.89m/s and 0.47m/s along the X, Y and Z axes, and maximum angular velocities of 1.8rad/s, 1.4rad/s and 0.5rad/s about the X, Y and Z axes. The robot even resists 8Ns impulses while the two skateboards not only have different inclination angles but also translate back and forth with out-of-phase velocities.
All these results prove that, with the strict hierarchy, real-time computation and joint friction handled carefully, a robot with proprioceptive actuation shows excellent balance performance comparable to that of a torque-controlled robot. For future work, we would like to improve the balance framework with a higher-level planner to handle larger disturbances. The authors believe that combining offline nonlinear optimization and online model predictive control techniques will improve the robustness of the robot significantly.
\section*{Acknowledgment}
The authors would like to thank the anonymous reviewers for their detailed and pertinent comments.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\input{Main_V3.bbl}
\bibliographystyle{IEEEtran}
\end{document}
Huaneng power Contact Number
Huaneng Power International, Inc. Company Profile
Company Description: Huaneng Power International is one of China's largest independent power producers. Its nearly 50 power plants in about 20 provinces have a capacity of more than 66,700 MW; nearly all of the company's power is produced from coal.
Employees: 58K
Location: No.6, Fuxingmen Inner Street, Xicheng District, Beijing, 100000
about Huaneng
Huaneng Taishan Electric Power Co. Ltd Shandong Taifeng Holding Group Co. Ltd contact us. contact number :0538-2077077. address:About 140 meters south of the intersection of Hesheng Road and Yonghe Road, Tai'an City Xintai City, Shandong Province. Huaneng WeChat Official Account.
Company Overview CHINA HUANENG GROUP
Contact Us. English Name:China Huaneng Group Co., Ltd. English Name for Short:CHINA HUANENG. Abbreviation: CHNG. CHNG Registered Address:No. 6, FuXingMenNei St, Xicheng District, Beijing. Postal Code:100031. Tel:(+8610)63228800. Fax:(+8610)63228866
Huaneng Power International Crunchbase Company …
Huaneng Power International goal is to provide sufficient, reliable and green electricity to the society as a power company and achieve long-term, stable and increasing paybacks to the shareholder as a listed company. Huaneng Power International was founded on June 30, 1994, and is headquartered in Beijing, China.
Founded: Jul 01, 2007
Location: Beijing
Huaneng Power International Inc Company Profile and …
Company profile page for Huaneng Power International Inc including stock price, company news, press releases, executives, board members, and contact information
Founded: Jun 30, 1994
HUANENG POWER INTERNATIONAL, INC.
HUANENG POWER INTERNATIONAL, INC. Email and/or Facsimile number and Address of Company Contact Person) Securities registered or to be registered pursuant to Section 12(b) of the Act. available hours For a power plant for any period, the …
Commercial Credit Report for Huaneng Power International
HUANENG POWER INTERNATIONAL,INC. is a China-based company principally engaged in the development, construction, operation and management of power plants. The Company mainly operates through the generation and sale of electric power. The …
Huaneng Power International, Inc. (0902.HK) Company
See the company profile for Huaneng Power International, Inc. (0902.HK) including business summary, industry/sector information, number of employees, business summary, corporate governance, key
Huaneng Power International Wikipedia
Huaneng Power International, Inc. (HPI), commonly known as Huaneng Power, is a Chinese electric power company. It was established in 1994 by the China Huaneng Group, one of the five largest power producers in China. It engages in the development, construction and operation of large power plants. As of 31 May 2018, the market capitalization of its H share was …
Huaneng Power Intl (HNP) Company Profile & Facts …
See the company profile for Huaneng Power Intl (HNP) including business summary, industry/sector information, number of employees, business summary, corporate governance, key executives and their
China Huaneng Group Wikipedia
China Huaneng Group Co., Ltd., abbreviated as CHNG or Huaneng Group, is one of the five largest state-owned electricity generation enterprises in China, administrated by the State Council. It engages in the investment, construction, operation and management of power generation assets and the production and sale of electricity. In 2012, the company was ranked …
HUANENG POWER INTERNATIONAL, INC. : Shareholders Board
HUANENG POWER INTERNATIONAL, INC. : 902 Stock Price
HUANENG POWER INTERNATIONAL, INC. HUANENG POWER INTERNATIONAL,INC. is a China-based company principally engaged in the development, construction, operation and management of power plants. The Company mainly operates through the generation and sale of electric power. The Company also provides supply of heat.
HNP Stock Huaneng Power International Inc SEC Filings
Huaneng Power International Inc is primarely in the business of electric services. For financial reporting, their fiscal year ends on December 31st. This page includes all SEC registration details as well as a list of all documents (S-1, Prospectus, Current Reports, 8-K, 10K, Annual Reports) filed by Huaneng Power International Inc.
'Smooth operator': world's largest floating solar plant
China has linked the world's largest floating solar plant to enter operation so far with wind generation and battery storage. Completion of the second phase of the Dezhou Dingzhuang floating PV array by Chinese power giant Huaneng International brought the total to 320MW, billed as the largest to enter service so far globally.
World's largest floating PV plant goes online in China
Huaneng Power International has switched on a 320 MW floating PV array in China's Shandong province. It deployed the plant in two phases on a reservoir near its 2.65 GW Dezhou thermal power station.
HUANENG POWER INTERNATIONAL INC OTC Markets
HUANENG BUILDING, 6 FUXINGMENNEI STREET, XICHENG DISTRICT, BEIJING, PEOPLE'S REPUBLIC OF CHINA Tel: +86 (10) 6322 6999 Fax: +86 (10) 6322 6888 (Name, Telephone, Email and/or Facsimile number and Address of Company Contact Person) Securities registered or to be registered pursuant to Section 12(b) of the Act.
Huaneng Power International, Inc. (NYSE:HNP) Short
Huaneng Power International, Inc. (NYSE:HNP) saw a significant drop in short interest in December. As of December 15th, there was short interest totalling 31,700 shares, a drop of 32.4% from the November 30th total of 46,900 shares. Based on an average daily trading volume, of 39,500 shares, the short-interest ratio is presently 0.8 days. Currently, 0.0% of the …
China Huaneng Group Co., Ltd. Credit Ratings :: Fitch Ratings
China Huaneng Group Co., Ltd. Entity with Fitch Analyst Adjusted Financials as featured on Fitch Ratings. Credit Ratings, Research and Analysis for the global capital markets.
Appendix 1: Nuclear Organisations in China World Nuclear
1. The State-owned Assets Supervision and Administration Commission (SASAC) of the State Council was founded in 2003 to take over the responsibilities of the former State Economic and Trade Commission as investor of state-owned assets on behalf of the central government and in guiding state-owned enterprises' reform and management. It aims to speed up restructuring of the state-owned economy and push forward reform of state-owned enterprises, as well as harvesting dividends from them. At the end of...
2. The State Administration of Science, Technology and Industry for National Defence (SASTIND) was set up by merger in 2008 under the Ministry for Industry and Information Technology (MIIT) and supervises defence, aeronautics and nuclear energy. The CAEA is its nuclear arm, which complements the role of NEA under NDRC, and has a national Nuclear Emergency Office. In 2014 SASTIND and the General Staff HQ of the People's Liberation Army (PLA) jointly set up a response team for nuclear emergencies....
FCC approves texting to 988 to contact the National
"Today's action requires covered text providers to support text messaging to 988 by routing text messages sent to 988 to the Lifeline's 10-digit number, 1-800-273-8255 (TALK)," Thursday's announcement stated. "The rules establish a process that will require covered text providers to support transmitting messages to 988 in additional text messaging formats that …
Is Huaneng Power International Inc ADR Class N (HNP) Stock
Huaneng Power International Inc ADR Class N is near the top in its sector according to InvestorsObserver. HNP gets an overall rating of 68. That means it scores higher than 68% of stocks. Huaneng Power International Inc ADR Class N gets a 71 rank in the Utilities sector. Utilities is number 10 out of 11 sectors.
Siemens signs strategic cooperation agreement with Chinese
Huaneng Power International, Inc. is principally engaged in the investment, construction, operation and management of power plants throughout China. As one of Chinese largest power producers, the company owns power plants located in 19 provinces, municipalities and autonomous regions in China.
Shidaowan gives China edge in nuclear power tech
Shidaowan gives China edge in nuclear power tech. The world's first industrial-scale demonstration plant of a high-temperature gas-cooled reactor with pebble-bed module, the No 1 reactor of the Shidaowan nuclear power plant, located in Shandong province, was connected to the grid and put into operation on Dec 20, said China Huaneng Group, its
China's Huaneng pushes carbon capture but costs bite Reuters
Directory of sites Login Contact Support. but despite a number of pilot projects across the world, the technology is far from mature. Huaneng's Shanghai project aims to sequester 10,000
Report of Foreign Issuer Pursuant to Rule 13a16 or 15d16
Form 6-K. REPORT OF FOREIGN PRIVATE ISSUER PURSUANT TO RULE. 13a-16 OR 15d-16 UNDER. THE SECURITIES EXCHANGE ACT OF 1934. For the month of December 2021. Commission File Number 001-13314. Huaneng Power International, Inc. (Translation of registrant's name into English) Huaneng Power International, Inc.
China Huaneng Group Treasury Management (Hong Kong) Ltd
2020-02-05 HNP / Huaneng Power International, Inc. / BlackRock Inc. Passive Investment. 2020-02-05 sec.gov - cne1000006z4020420.txt SECURITIES AND EXCHANGE COMMISSION Washington, D.C. 20549 SCHEDULE 13G Under the Securities Exchange Act of 1934 (Amendment No: 9) HUANENG POWER INTERNATIONAL INC - (Name of Issuer) Common Stock - (Title of Class …
The Shidaowan plant, with independent intellectual property rights, has a total installed capacity of 200,000 kilowatts. Since 2021, it has been built jointly by China Huaneng Group Co Ltd, China
Offshore Wind farms in China 4C Offshore
Offshore Wind Farms. China. China has 339 offshore wind farm projects of which 86 currently operating, 11 where construction has progressed enough to connect the turbines and generate electricity, 33 are in the build phase, and 33 are either consented or have applied for consent.
Huaneng Power International (NYSE:HNP) Cut to Sell at
Huaneng Power International has a fifty-two week low of $12.79 and a fifty-two week high of $28.77. The stock has a market cap of $10.96 billion, a P/E ratio of 19.46 and a beta of 0.67. Huaneng Power International (NYSE:HNP) last issued its quarterly earnings data on Tuesday, October 26th.
Huaneng Power International Inc (HNP) Price Targets From
The number of analysts covering the stock of HNP is greater than 0.88% of stocks in the large market cap category. HNP has a greater average analyst price target than 4.1% of Utilities stocks. Stocks similar to Huaneng Power International Inc in the Utilities industry regarding analyst recommendations and price targets are UTL , EDN and CEPU .
China power generators' profits tumble on record coal prices
Huadian Power International, a subsidiary of China Huadian Corp on Tuesday reported profits for the January-September period dropped 58% from a year earlier to 1.6 billion yuan ($251 million) with a third-quarter loss of 1.8 billion yuan. Huaneng Power International, a listed arm of China Huaneng Group, also said their earnings in the first
China's CGN and Huaneng enhance cooperation : Corporate
CGN, he said, will give Huaneng full support in nuclear power operation and project management. CGN has 21 power reactors in operation in China and a further seven under construction. Under a strategic investment agreement signed in October 2016, CGN agreed to take a 33.5% stake in EDF Energy's Hinkley Point C project in Somerset, UK, as well
Insights – QUATRO International Inc.
The government of Norway has decided to raise its projected spending for 2022 to support businesses affected by the pandemic and the households hit by soaring electricity prices. The government plans to spend 355.1 billion Norwegian crowns ($40.9 billion) compared to an original projection of 322.4 billion crowns.
Laboratory signs agreement with China to develop clean
Huaneng is a big player in CCS by operating GreenGen, the first large-scale coal-fueled power plant to employ integrated CCS. In addition, the company has operated the world's largest CCS pilot at the Shanghai Shidongkou Coal Power Plant, which uses about 15 megawatts of energy to capture 120,000 tons of carbon dioxide (CO2) per year.
Leading and managing people Essay Company
HuaNeng's challenge is a huge number of people working longer than people are new recruited. And this part of ageing employees already lead to some negative impacts as follow: Firstly, part of ageing workforces influence the working enthusiasm of young employees.
Hedge Funds Are Crazy About United Therapeutics
The number of long hedge fund bets moved up by 7 recently. United Therapeutics Corporation (NASDAQ: UTHR ) was in 52 hedge funds' portfolios at the end of September. The all time high for this
HNP: Huaneng Power International Inc Stock Price, Quote
Get Huaneng Power International Inc (HNP:NYSE) real-time stock quotes, news, price and financial information from CNBC.
What is Huaneng Asset Turnover from 2010 to 2022 NYSE
Check Huaneng Power financial statements over time to gain insight into future company performance. You can evaluate financial statements to find patterns among Huaneng main balance sheet or income statement drivers, such as Consolidated Income of 2.4 B, Cost of Revenue of 112 B or Earning Before Interest and Taxes EBIT of 14.4 B, as well as many …
What are Huaneng Company Financials from 2010 to 2021
Huaneng Power Earnings Before Interest Taxes and Depreciation Amortization EBITDA are very stable at the moment as compared to the past year. Huaneng Power reported last year Earnings Before Interest Taxes and Depreciation Amortization EBITDA of 36.12 Billion. As of 29th of December 2021, Earnings before Tax is likely to grow to about 6.4 B, while Free Cash Flow is …
HNP Stock Snapshot Fidelity
As of April 27, 2021, the company had a controlled generating capacity of 113,805 megawatts and an equity-based installed capacity of 99,570 megawatts. Huaneng Power International, Inc. was incorporated in 1994 and is based in Beijing, the People's Republic of China. View less.
form6k.htm SEC.gov
In order to support the business development of the Company, expand the scope of its regional operations, further address the issue of competition with the controlling shareholder while implementing Huaneng Group's previous non-competition undertaking made with the Company, on 14 October 2016, the Company entered into the Transaction
www.sec.gov
Eligible shareholders who wish to attend the Extraordinary General Meeting are advised to complete and return this reply slip to the Company's business address at Capital Market Department, Huaneng Power International, Inc., Huaneng Building, 6 Fuxingmennei Street, Xicheng District, Beijing 100031, the PRC by post or by facsimile (Fax no
Report of Foreign Issuer (6k) ih.advfn.com
The number of new domestic shares or new overseas listed foreign shares (other than those issued by conversion of the surplus reserve into share capital in accordance with the Company Law and the articles of Huaneng Power International) conditionally or unconditionally, separately or concurrently allotted, issued and dealt with (whether
HNP Is Its Stock Price A Worthy Investment? Learn More.
Huaneng Power International, Inc. (NYSE:HNP) shares gapped up before the market opened on Monday. The stock had previously closed at $16.30, but opened at $17.45. Huaneng Power International shares last traded at $17.47, with a volume of 29 shares. HNP has been the subject of a number of analyst reports. Zacks Investment Research upgraded []
China Oilfield Services OilfieldWiki
1. COSL's overseas revenue surged 133% year on year in the first half to 209.8 million yuan, driven by demand for its services in Indonesia, West Africa and the Middle East. With crude prices more than doubling in the past two years, a speculative merger and acquisition frenzy has gripped the oil services market. COSL had an office in Kuala Lumpur, Malaysia briefly in the early 2000's when it had its first western Vice President, Alan Good, a British born Oilman who helped launch the company internationally and pushed for further growth in the international sector. COSL also maintains an office in Houston, TX as COSL America
China: Huaneng Power International has switched on a 320 MW floating PV array in China's Shandong province. The plant was deployed in two phases on a reservoir near the company's 2.65 GW Dezhou thermal power station. The company built …
Who is Huaneng Power International?
Huaneng Power International is one of China's largest independent power producers. Its nearly 50 power plants in about 20 provinces have a capacity of more than 66,700 MW; nearly all of the company's power is produced from coal. Huaneng Power International, which is always expanding, also owns Singapore's electricity retailer Tuas Power.
What is the abbreviation for Huaneng Power?
HPI), commonly known as Huaneng Power or in Chinese: 华能国际 (literally Huaneng International), is a Chinese electric power company. It was established in 1994 by China Huaneng Group. Its parent company China Huaneng Group is one of the five largest power producers in China.
Who is the parent company of China Huaneng Group?
It was established in 1994 by China Huaneng Group. Its parent company China Huaneng Group is one of the five largest power producers in China. It engages in the development, construction and operation of large power plants.
Is Huaneng Power International Inc (HNP) a good value stock to buy?
The strongest trend for HNP is in Value, which has been heading down over the past 139 days. HNP's current lowest rank is in the Quality metric (where it is better than 8.41% of US stocks). The price/operating cash flow metric for Huaneng Power International Inc is higher than just 1.96% of stocks in our set with a positive cash flow.
\section{Introduction}
\label{sec1}
Trajectory tracking is one of the most important objectives in the area of motion control, particularly for autonomous systems, robotics and electromechanical systems. It is concerned with designing a feedback law that makes the given system asymptotically follow a time-parameterized path. The {\em de facto} standard technical route in tracking control is to translate it into a stabilization problem by defining an error dynamics and then regulating the induced nonlinear system to zero, so that many nonlinear control techniques become applicable. However, this brings the challenge of analyzing nonlinear \emph{time-varying} systems.
An alternative route is to study control systems differentially along their solutions, which is widely known as contraction or incremental stability analysis \cite{ANDetal,FORSEP,LOHSLO,ANG}. The basic results on contraction analysis can be traced back to the field of differential equations, see for example \cite{DEM,LEW}. Contraction analysis allows us to study the evolution of nearby trajectories toward each other via an auxiliary linearized dynamics, the stability of which can be characterized by Finsler-Lyapunov functions with an elegant geometric interpretation \cite{FORSEP}. Nevertheless, most works on contraction theory are devoted to systems analysis, and the corresponding constructive solutions are rarely discussed, with notable exceptions \cite{MANSLO,MANSLOcsl}. In \cite{MANSLO} the control contraction metric (CCM) is introduced as a sufficient condition for exponential stabilizability of {\em all feasible trajectories} of given nonlinear systems; the notion resembles control Lyapunov functions (CLFs) for asymptotic regulation of nonlinear systems. The CCM-based controller enjoys the benefit that the key constructive procedure can be formulated as off-line convex optimization. In \cite{CHAMAN}, the CCM method is extended to Finsler manifolds. On the application side, the CCM technique has provided solutions to a wide variety of physical systems; see \cite{BAZSTE,SINetal} for applications to human manipulation and motion planning.
In this paper, we present some further results on asymptotic tracking in the context of contraction analysis. The main contributions are twofold.
\begin{itemize}
\item[1)] Similarly to CLFs, which are necessary for asymptotic controllability, CCMs are also a necessary condition for universal asymptotic tracking of general nonlinear systems. We also consider the cases of robust tracking and of dynamic extension, and show that CCMs are invariant under dynamic extension.
\vspace{0.1cm}
\item[2)] Motivated by the proposed necessary conditions, we provide a simple differential controller design, {\em i.e.}, injecting damping along an elaborated differentially passive output. The design is globally smooth, unlike the one in \cite{MANSLO}, which excludes a set of zero Lebesgue measure.
\end{itemize}
The paper is organized as follows. In Section \ref{sec:2}, we give the problem formulation and some preliminaries on differential dynamics. Section \ref{sec:3} presents the main results of the paper on necessary conditions for universal asymptotic tracking from several perspectives. Based on them, we discuss the tracking controller design in Section \ref{sec:4}. Some examples are given in Section \ref{sec:5}, and the paper is wrapped up with some concluding remarks.
\textbf{Notations.} All mappings are assumed smooth. For full-rank matrix $g\in \mathbb{R}^{n\times m}$ ($m<n$), we denote the generalized inverse as $g^\dagger = [g^\top g]^{-1} g^\top$ and $g_\bot$ a full-rank left-annihilator. Given a matrix $M(x)$, a function $V(x)$ and a vector field $f(x)$ with proper dimensions, we define the directional derivative as $\partial_f M(x)= \sum_{i}{\partial M(x) \over \partial x_i} f_i(x)$ and $L_f V$ as the Lie derivative of $V$. For a square matrix $A$, $\texttt{sym}\{A\}$ represents its symmetric part $(A+A^\top)/2$.
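As a quick numerical illustration of these notations (not part of the paper; the full-rank matrix $g$ is randomly generated for the example), the generalized inverse $g^\dagger = [g^\top g]^{-1} g^\top$ and a full-rank left-annihilator $g_\bot$ built from the SVD satisfy $g^\dagger g = I_m$ and $g_\bot g = 0$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 5, 2
g = rng.standard_normal((n, m))      # full column rank almost surely

# Generalized (left) inverse: g† = (gᵀ g)⁻¹ gᵀ, so g† g = I_m.
g_dag = np.linalg.inv(g.T @ g) @ g.T
assert np.allclose(g_dag @ g, np.eye(m))

# Full-rank left annihilator g⊥: its rows span the orthogonal
# complement of range(g), taken here from the left singular vectors.
U, _, _ = np.linalg.svd(g)
g_perp = U[:, m:].T                  # shape (n-m, n)
assert np.allclose(g_perp @ g, 0)
assert np.linalg.matrix_rank(g_perp) == n - m
```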
\section{Preliminary}
\label{sec:2}
Consider the nonlinear control system
\begin{equation}
\label{eq:NL}
\dot{x} = f(x) + B(x) u,
\end{equation}
with states $x \in \mathbb{R}^n$, input $u \in \mathbb{R}^m$ and $x(0) =x_0$, where the input matrix $B(x) \in \mathbb{R}^{n \times m}$ ($n>m$) is full rank. We denote its solution as $x(t) = X(t,x_0,u)$. The control target is to track a predefined trajectory $x_d(t)$ generated by
\begin{equation}
\label{eq:target}
\dot{x}_d = f(x_d) + B(x_d) u_d(t), \qquad x_d(0) = x_{d0}
\end{equation}
with input $u_d(t)$. Following standard practice in tracking control, we assume that the system \eqref{eq:target} is forward complete and define the feasible input set as
$
{\cal E}_{x_{d0}} := \left\{
u_d \in {\cal L}_\infty^m \cap {\cal C}^1 | \exists
X(t,x_{d0},u_d),~ \forall t\ge 0
\right\}
$
for a given $x_{d0}$. To streamline the presentation, we recall some definitions first.
\begin{definition}\rm\cite{ANG}
Consider the system \eqref{eq:NL} under the control $u = \alpha(x,t)$, the solution of which is forward invariant in ${\cal E} \subset \mathbb{R}^n$. The closed-loop system in ${\cal E}$ is
\noindent {(IAS)} incrementally asymptotically stable (or asymptotically contracting) if $\forall(x_1,x_2) \in {\cal E}$,
$$
|X(t,x_1,\alpha(x_1,t)) - X(t,x_2,\alpha(x_2,t))| \le \kappa(|x_1-x_2|,t)
$$
holds for any $t\ge 0$ and some function $\kappa$ of class $\mathcal{K}{\cal L}$.
\noindent {(IES)} incrementally exponentially stable (or contracting) if the system is IAS with $\kappa(a,t) = k_1 e^{-k_2 t} a$ for some constants $k_1,k_2 >0$.
\end{definition}
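For a concrete instance of IES (a hypothetical scalar example, not taken from the references), consider $\dot{x} = -x + \sin t$: the difference of any two solutions obeys $\dot{\delta} = -\delta$, so the definition holds with $\kappa(a,t) = e^{-t}a$, i.e. $k_1 = k_2 = 1$. A quick simulation confirms the bound:

```python
import numpy as np

def simulate(x0, T=5.0, dt=1e-3):
    """Explicit Euler integration of the contracting system ẋ = -x + sin t."""
    x, t = x0, 0.0
    for _ in range(int(round(T / dt))):
        x += dt * (-x + np.sin(t))
        t += dt
    return x

x1, x2 = simulate(2.0), simulate(-1.0)
# The gap |x1(T) - x2(T)| should match e^{-T} |x1(0) - x2(0)| up to
# the Euler discretization error, since δ̇ = -δ holds exactly.
assert abs(abs(x1 - x2) - 3.0 * np.exp(-5.0)) < 1e-3
```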
\begin{definition}\rm
\label{def:lya}
For the system $\dot x = F(x,t)$, a function $V: {\cal E}\times {\cal E} \to \mathbb{R}_+$ is called an IAS (or IES) Lyapunov function if
\begin{equation}
\label{cond:1}
L_{F(x,t)} V(x,\xi) + L_{F(\xi,t)} V(x,\xi) <0
\end{equation}
[or $\le -\lambda V(x,\xi)$ for some $\lambda>0$], and there exist $a_1,a_2>0$ satisfying
\begin{equation}
\label{cond:2}
a_1 |x - \xi|^2 \le V(x,\xi) \le a_2 |x-\xi|^2.
\end{equation}
\end{definition}
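To illustrate Definition \ref{def:lya} on the hypothetical scalar system $\dot{x} = -x + \sin t$ (an example of ours, not from the paper), the function $V(x,\xi) = (x-\xi)^2$ satisfies \eqref{cond:1} with exponential rate $\lambda = 2$ and \eqref{cond:2} with $a_1 = a_2 = 1$, since $L_{F(x,t)}V + L_{F(\xi,t)}V = -2(x-\xi)^2$. A randomized numerical check:

```python
import numpy as np

F = lambda x, t: -x + np.sin(t)      # closed-loop vector field
V = lambda x, xi: (x - xi) ** 2      # candidate IES Lyapunov function

rng = np.random.default_rng(2)
for _ in range(100):
    x, xi, t = rng.uniform(-5, 5, size=3)
    # Left-hand side of the Lyapunov inequality:
    # L_{F(x,t)} V + L_{F(ξ,t)} V evaluated in closed form.
    dV = 2 * (x - xi) * F(x, t) + 2 * (xi - x) * F(xi, t)
    assert dV <= -2 * V(x, xi) + 1e-9   # exponential rate λ = 2
```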
The IAS of the system $\dot{x}=F(x,t)$ is equivalent to the existence of an IAS Lyapunov function for set stability of $x=\xi$, by considering an auxiliary dynamics $\dot\xi =F(\xi,t)$ \cite{ANG}. We are interested in designing a feedback law such that
\begin{equation}\label{eq:convergence}
\lim_{t\to\infty} |x(t) - x_d(t)| =0.
\end{equation}
\noindent {\bf Problem Formulation} For the systems \eqref{eq:NL} and \eqref{eq:target} with any $u_d(t) \in {\cal E}_{x_{d0}}$ and $\forall x_{d0} \in {\cal E}$, design a controller $u=\alpha(x,t)$ achieving {\bf i}) the IAS of the system \eqref{eq:NL} (or IES for exponential tracking) and {\bf ii}) invariance of $\{(x,x_d) \in \mathbb{R}^{2n}| x=x_d\}$.
\begin{remark}\rm
The qualifier ``universal'' refers to target trajectories generated by arbitrary $u_d \in {\cal E}_{x_{d0}}$ and $x_{d0}\in \mathbb{R}^n$. Another well-studied formulation of trajectory tracking is to achieve \eqref{eq:convergence} for \emph{a class of} inputs $u_d$, which is expected to impose weaker requirements on control systems. It, however, involves additional excitation assumptions on desired trajectories, or equivalently on $u_d$ \cite{JIANIJ}. A similar issue appears in nonlinear observers, where the universal case is related to \emph{uniform observability} \cite{BES}. For weakly observable systems, persistent excitation of system trajectories is required to carry out the observer design \cite{ORTetal}.
\end{remark}
For any $(x_{d0}, x_0) \in \mathbb{R}^{2n}$ and any $\varepsilon>0$, there exists a regular smooth curve $\bar{\gamma}:[0,1] \to \mathbb{R}^n$ such that $\bar\gamma(0)= x_{d0}$, $\bar\gamma(1) = x_0$, and
\begin{equation}\label{eq:ineq1}
\int_{0}^{1} \sqrt{ \dot{\bar\gamma} (s)^\top M(\bar{\gamma}(s) ) \dot{\bar\gamma}(s) }\, ds
\le
(1+\varepsilon) d(x_{d0}, x_0).
\end{equation}
Consider the infinitesimal displacement $\delta x(t) = {\partial \over \partial s} X(t,\bar{\gamma}(s),u_s)|_{s=1}$ and $\delta u := {\partial \over \partial s} u_s|_{s=1}$; the displacement evolves according to
\begin{equation}\label{diff_dyn}
\delta\dot{x} = A(x,u) \delta x + B(x) \delta u,
\end{equation}
with $A(x,u):= {\partial f (x)\over \partial x} + {\partial B(x) u \over \partial x}$.
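As an illustration, the matrix $A(x,u)$ of the differential dynamics can be computed symbolically. The sketch below uses an assumed toy drift $f$ and input matrix $B$ chosen purely for illustration (they are not from the development above).

```python
# Hedged sketch: form A(x,u) = df/dx + d(B(x)u)/dx, the matrix driving the
# differential dynamics delta_xdot = A(x,u) delta_x + B(x) delta_u.
# The drift f and input matrix B below are illustrative assumptions.
import sympy as sp

x1, x2, u1 = sp.symbols('x1 x2 u1')
x = sp.Matrix([x1, x2])
f = sp.Matrix([-x1 + x2**2, -x2])   # assumed drift
B = sp.Matrix([[0], [1 + x1**2]])   # assumed input matrix
u = sp.Matrix([u1])

# Jacobian of the drift plus Jacobian of B(x)u with u frozen
A = f.jacobian(x) + (B * u).jacobian(x)
print(A)
```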
\section{Main Results of Necessary Conditions}
\label{sec:3}
In this section, we present the main results of the paper, namely, necessary conditions on systems that may achieve universal tracking, under several different assumptions. The links to CCMs will also be clarified.
\subsection{Necessary Condition of Universal Tracking}
\label{sec:31}
Let us consider the basic case of universal tracking, in which we need the following.
\begin{assumption}
\rm\label{ass:1}
Consider the system \eqref{eq:NL} and the target dynamics \eqref{eq:target} forward invariant in ${\cal E} \subset \mathbb{R}^n$ with any input $u_d$ and $x_{d0}$. There exists a feedback law $u = \alpha(x,t)$\footnote{The feedback $u=\alpha(x,t)$ may also depend on $x_d(t)$ and $u_d(t)$, and the ``time-varying'' form is adopted to show this point.} such that
\begin{itemize}
\item[1)] The set $\{(x,x_d)\in {\cal E}\times {\cal E}|x = x_d\}$ is forward invariant.
\item[2)] The system $\dot x = f(x) + B(x)\alpha(x,t) := F(x,t)$ is IAS (or IES) with the Lyapunov function in the sense of Definition \ref{def:lya}.
\end{itemize}
\end{assumption}
The above assumption characterizes the problem formulation of universal tracking in terms of incremental stability.
\begin{proposition}\rm
\label{prop:necessary}
If Assumption \ref{ass:1} holds, then there exists a symmetric matrix $2a_1 I_n \le M(x) \le 2a_2 I_n $ such that
\noindent{C1)} for any non-zero $v\in \mathbb{R}^n$, we have
\begin{equation}
\label{eq:ccm}
\begin{aligned}
v^\top M(x)B(x) = 0
\quad \implies \quad
v^\top \Bigg[
\partial_f M(x)
+ 2M(x) {\partial f(x)\over \partial x}
\Bigg] v < 0
\end{aligned}
\end{equation}
[or $\le - \lambda v^\top M(x) v$ for IES] and the PDEs for $i=1,\ldots, m$
\begin{equation}
\label{pde:1}
\partial_{B_i} M(x) + {\partial B_i(x) \over \partial x}^\top M(x) + M(x){\partial B_i(x) \over \partial x} =0.
\end{equation}
\noindent{C2)} The \emph{dual} differential system
\begin{equation}
\label{syst:dual}
\begin{aligned}
\dot{p} = {\partial f(x) \over \partial x}^\top p, ~
y_p = [M(x)B(x)]^\top p
\end{aligned}
\end{equation}
is uniformly zero-state detectable (with exponential convergence speed for IES).
\end{proposition}
\begin{proof}
Considering the IAS case, we define $M(x) := {\partial^2 V \over \partial \xi^2}(x,x)$, motivated by \cite{SANPRA}. According to \eqref{cond:2}, we have $V(x,x)=0$ and
$
2a_1 I_n \le M(x) \le 2a_2 I_n.
$
For any pair $(x,\xi) \in {\cal E} \times {\cal E}$, we parameterize $\xi$ as $\xi=x+ rv$ for any $v\in \mathbb{R}^n$ with $|r|$ sufficiently small. We get
$$
{\partial V \over \partial x}(x, x+rv) F(x,t) + {\partial V \over \partial \xi}(x,x+rv) F(x+rv,t) <0.
$$
A \emph{necessary} condition to the above inequality is that the second-order terms in the Taylor expansion with respect to $r$ are negative, that is
\begin{equation}
\label{ineq:p1_1}
\begin{aligned}
{r^2 \over 2}
\Bigg[
{\partial \over \partial x}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)}
+
{\partial \over \partial \xi}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)}\Bigg]F(x,t)
+ 2r^2 v^\top M(x) {\partial F(x,t) \over \partial x} v
<0.
\end{aligned}
\end{equation}
According to the definition of $M(x)$, we have
$$
{\partial \over \partial x}(v^\top M v)
=
{\partial \over \partial x}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)}
+
{\partial \over \partial \xi}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)},
$$
thus the inequality \eqref{ineq:p1_1} becomes
\begin{equation}\label{ineq:p1_2}
{\partial \over \partial x}(v^\top M(x) v) F + v^\top\Bigg[ {\partial F\over \partial x}^\top M(x) + M(x){\partial F\over \partial x} \Bigg] v <0,
\end{equation}
for any non-zero $v\in\mathbb{R}^n$. This condition amounts to the existence of a feedback $\alpha(x,t)$ satisfying the above inequality.
Now we decompose the feedback $\alpha(x,t)$ into
\begin{equation}
\label{decomposition}
\alpha(x,t) = \alpha_0(x,t) + u_d.
\end{equation}
For any trajectory $x_d\in {\cal E}$, we assume that $x=x_d$ is invariant in Assumption \ref{ass:1}. That is, for $x_0 = x_{d0}$, we have that
$$
\dot{x} - \dot{x}_d = f(x) + B(x) \alpha(x,t) - f(x_d) - B(x_d)u_d \quad
\implies \quad \alpha_0(x_d,t)=0,
$$
where we used the full rank of $B(x)$ in the last implication. Invoking the Lagrange remainder representation of the Taylor series expansion, we note that $\alpha_0(x,t)$ can be represented as $\alpha_0(x,t) = \alpha_1(x,t)(x-x_d)$ for some function $\alpha_1$. Substituting \eqref{decomposition} into the inequality \eqref{ineq:p1_2}, we have
$$
\begin{aligned}
& v^\top \Bigg[ \partial_{(f + B(\alpha_0 + u_d ))} M + 2\Big({\partial f \over \partial x} +B { \partial \alpha \over \partial x} \Big)^\top M
+ \sum_{i=1}^{m} \Big[{\partial B_i \over \partial x}^\top M + M {\partial B_i \over \partial x}
\Big](\alpha_0 + u_d)_i
\Bigg]v <0,
\end{aligned}
$$
which must be satisfied uniformly for \emph{arbitrary} $u_d \in {\cal E}_{x_{d0}}$ with any $x_{d0}\in{\cal E}$ and $v\neq 0$; hence the PDEs \eqref{pde:1} must hold. Then, we have
$$
v^\top \Bigg[\partial_f M(x) + 2\Big({\partial f(x) \over \partial x}^\top + {\partial \alpha(x,t) \over \partial x}^\top B(x)^\top\Big) M(x)\Bigg]v <0
$$
for any non-zero $v\in\mathbb{R}^n$, equivalently written as \eqref{eq:ccm}.
Let us now turn to the necessary condition C2, for which we need to show that, for the system \eqref{syst:dual},
\begin{equation}
\label{impl:detectable}
y_p \equiv 0 \quad \implies \quad \lim_{t\to\infty} p(t) =0.
\end{equation}
Consider the Lyapunov function candidate $\mathcal{V}(x,p) = p^\top M(x) p$, the time derivative of which is
$$
\dot{\mathcal{V}} = p^\top \Bigg[\dot{M}(x) + {\partial f(x) \over \partial x}^\top M(x) + M(x){\partial f(x) \over \partial x} \Bigg] p,
$$
where $x$ is generated by \eqref{eq:NL}. Considering the case $y_p \equiv 0$ and invoking \eqref{eq:ccm}, we have $\dot{\mathcal{V}} < 0$ for any $p \neq0$, thus verifying \eqref{impl:detectable}. The IES case can be proved \emph{mutatis mutandis}.
\end{proof}
\begin{remark}\rm
The condition C1 for universal asymptotic tracking resembles the ``strong'' CCM proposed in \cite{MANSLO}, but without a fixed contraction rate. We underscore that the PDE \eqref{pde:1} is also necessary for \emph{differential passivity} \cite{VDS}. The condition C2 motivates the construction of tracking controllers based on the observation that stabilizing the differential system can be translated into driving the differential output to zero.
\end{remark}
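To make condition C1 concrete, the following numerical sketch checks it for an illustrative linear system with a constant metric; all matrices below are assumptions, not taken from the development above. With $B$ constant, the PDEs \eqref{pde:1} hold trivially, and C1 reduces to negativity of $A^\top M + MA$ on the subspace annihilating $MB$, even though the open-loop plant may be unstable.

```python
# Hedged numerical check of condition C1 for a linear system xdot = Ax + Bu
# with a constant metric M:  v^T M B = 0  =>  v^T (A^T M + M A) v < 0.
# The matrices A, B, M below are illustrative assumptions.
import numpy as np

A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])    # open-loop unstable (eigenvalue +1)
B = np.array([[0.0], [1.0]])
M = np.eye(2)                  # candidate contraction metric

# a vector spanning { v : v^T M B = 0 }
v = np.array([1.0, 0.0])
assert abs((v @ M @ B).item()) < 1e-12

Q = A.T @ M + M @ A
print(v @ Q @ v)               # negative => C1 holds on that subspace
```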
\begin{remark}\rm
As pointed out in \cite{MANSLO}, the CCM resembles the CLF for asymptotic stabilization of nonlinear systems \cite{SON}. The existence of a CLF is, indeed, a \emph{necessary} condition for asymptotic controllability of nonlinear systems. Similarly, Proposition \ref{prop:necessary} establishes the CCM as a necessary condition for universal asymptotic tracking.
\end{remark}
\subsection{Dynamic Extension is Unnecessary}
The CCM was originally introduced for \emph{static} feedback control. On the other hand, dynamic feedback is a widely used technique in feedback control for different purposes, {\em e.g.}, achieving relative degree, output feedback, performance enhancement and relaxing constraints. In particular, it is well known that dynamic extension may make a given nonlinear system achieve relative degree; combined with feedback linearization, this yields a dynamic controller producing an error system with linear time-invariant dynamics, able to track any feasible trajectory \cite[Section 5.4]{ISI}. Therefore, a natural question is whether we can simplify the necessary conditions by introducing dynamic extensions. Let us first consider the following example.
\begin{example}\rm
Consider the nonlinear system
\begin{equation}
\label{examp:dyn_ext}
\dot{x} = \begmat{-x_1 \\ x_4 \\ -x_3 + x_4 \\ 0}
+
\begmat{1 & 0 \\ x_3^2 +1 & 0 \\ 0 & 0 \\ 0 & 1}u,
\quad
y = \begmat{x_1 \\ x_2}
\end{equation}
with input $u \in \mathbb{R}^2$. A simple solution to output tracking is via feedback linearization. Note that the system does not have a relative degree with the given output mapping \cite[Section 5.4]{ISI}. However, we are able to achieve (vector) relative degree $[2,2]$ w.r.t. the new input $[v_1~~u_2]^\top$ by adding the dynamic extension
$
u_1 = \xi , ~
\dot{\xi} = v_1,
$
and then use feedback linearization to solve the problem. It is easy to verify that the system enjoys a CCM by performing a change of input $u =\mbox{col}((x_3^2+1)(-x_2 + v_1), v_2)$. It implies that a static feedback can achieve universal tracking for this example.
\end{example}
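The relative-degree claim in the example can be checked symbolically. The sketch below builds the extended dynamics with $u_1=\xi$, $\dot\xi = v_1$, verifies that neither input appears in the first-order Lie derivatives of the outputs, and computes the order-two decoupling matrix, which is unimodular.

```python
# Hedged symbolic check: after the extension u1 = xi, xi_dot = v1, the
# output y = (x1, x2) of the example has vector relative degree [2, 2]
# with respect to (v1, u2).
import sympy as sp

x1, x2, x3, x4, xi = sp.symbols('x1 x2 x3 x4 xi')
X = sp.Matrix([x1, x2, x3, x4, xi])
# extended drift; the former input u1 is replaced by the state xi
f = sp.Matrix([-x1 + xi, x4 + (x3**2 + 1)*xi, -x3 + x4, 0, 0])
g1 = sp.Matrix([0, 0, 0, 0, 1])   # channel of the new input v1
g2 = sp.Matrix([0, 0, 0, 1, 0])   # channel of u2

def Lie(h, vec):
    """Lie derivative of the scalar h along the vector field vec."""
    return (sp.Matrix([h]).jacobian(X) * vec)[0]

h = [x1, x2]
Lfh = [Lie(hi, f) for hi in h]
# inputs do not appear at first order ...
assert all(Lie(hi, g) == 0 for hi in h for g in (g1, g2))
# ... and the order-two decoupling matrix is nonsingular
D = sp.Matrix([[Lie(Lfh[i], g) for g in (g1, g2)] for i in range(2)])
print(D)
```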
The above example shows that relative degree is not fundamentally related to universal stabilizability. We are now in a position to show that dynamic extension is {\em unnecessary} for relaxing the requirements in contraction analysis. Consider the objective that the system \eqref{eq:NL} asymptotically tracks the trajectory generated by the target system \eqref{eq:target} under an integral control\footnote{It can be extended to more general cases, but we adopt the basic case here to streamline the underlying mechanism. See Remark \ref{rem:general_dyn_ext}.}, that is
\begin{equation}
\label{target_dyn_int}
\begin{aligned}
u_d = \theta , \quad
\dot{\theta} = f_c(x_d,\xi) + u_{\tt I}^d
\end{aligned}
\end{equation}
with the extended state $\theta\in \mathbb{R}^{m}$ and $u_{\tt I}^d \in \mathbb{R}^{m}$ entering through the integral action. We are interested in the necessary conditions for universal tracking of $x_d$ generated by \eqref{target_dyn_int}.
\begin{proposition}\rm
Consider the system \eqref{eq:NL} in closed loop as
\begin{equation}
\label{dyn_closed}
\begin{aligned}
\dot{x} = f(x) + B(x) u_{\tt K}, \quad
\dot{x}_c = u_{\tt I},
\end{aligned}
\end{equation}
with the dynamic feedback $(u_{\tt K}, u_{\tt I})$. Suppose that the system \eqref{dyn_closed} is IAS and forward complete with $x_d$ generated under \eqref{target_dyn_int} as a particular solution for any $u_{\tt I}^d$. Then there exists a metric $2a_1 I_n \le M(x) \le 2a_2 I_n$ satisfying the condition C1.
\end{proposition}
\begin{proof}
The condition C1 is equivalent to the existence of a dual metric $W(x) = M^{-1}(x)$ such that
\begin{equation}
\label{W}
\begin{aligned}
& B_\bot^\top(x) \bigg( -\partial_f W + {\partial f(x) \over \partial x} W + W {\partial f(x) \over \partial x}^\top \bigg) B_\bot(x) <0 \\
%
& \partial_{B_i} W(x) - {\partial B_i(x) \over \partial x} W - W {\partial B_i(x) \over \partial x}^\top =0,
\end{aligned}
\end{equation}
for $i=1,\ldots,m$, with $B_\bot(x)$ a full-rank left annihilator of $B(x)$.
When we introduce the additional degree of freedom to design dynamic extensions, it is equivalent to verify the above condition for the extended dynamics
$$
\dot{\chi} = \bar f(\chi) + \bar B(\chi)\begin{pmatrix}u_{\tt K} \\ u_{\tt I}
\end{pmatrix}
$$
with $\chi:= \mbox{col}(x,x_c)$, $\bar f(\chi) = \mbox{col}(f(x), 0_{m \times 1})$ and $\bar B(\chi) =\mbox{diag}(B(x) , I_{m})$. Since $x_c$ may follow any feasible trajectory in the target system \eqref{target_dyn_int}, following the proof of Proposition \ref{prop:necessary} and using duality, the extended system should satisfy
\begin{equation}
\begin{aligned}
& \bar B_\bot^\top(\chi) \bigg( -\partial_{\bar f} \bar W(x,x_c) + {\partial \bar f(\chi) \over \partial \chi} \bar W + \bar W {\partial \bar f(\chi) \over \partial \chi}^\top \bigg) \bar B_\bot(\chi) <0 \\
%
& \partial_{\bar B_i} \bar W(x,x_c) - {\partial \bar B_i(\chi) \over \partial \chi} \bar W - \bar W {\partial \bar B_i(\chi) \over \partial \chi}^\top =0,
\label{barW}
\end{aligned}
\end{equation}
for some $\bar{a}_1 I_{n + m} \le \bar W(x,x_c) \le \bar{a}_2 I_{n + m}$ with $\bar a_1, \bar a_2 >0$. We partition the matrix $\bar W(x,x_c)$ conformally as
$$
\bar{W}(x,x_c) = \begmat{{\bar W}_x(x,x_c)& {\bar W}_{xc}(x,x_c)\\{\bar W}_{xc}^\top(x,x_c) & {\bar W}_c(x,x_c)},
$$
and note that ${\bar W}_x(x,x_c)$ is also positive definite. Computing the $(1,1)$-block of the second equation in \eqref{barW} for $i=1,\ldots,m$, we get
$$
\partial_{B_i} {\bar W}_x(x,x_c) - 2{\tt sym}
\left\{ {\partial B_i(x) \over \partial x} {\bar W}_x \right\} = 0
$$
as a \emph{necessary} condition, where we used $\partial_{\bar B_i} {\bar W}_x(x,x_c) = \partial_{ B_i} {\bar W}_x(x,x_c)$, since the last $m$ elements of $\bar{B}_i(x)$ $(i=1,\ldots, m)$ are zero.
It is clear that a feasible full-rank annihilator of $\bar B(\chi)$ is
$
\bar B_\bot(\chi) = \mbox{col}(B_\bot(x), 0),
$
based on which the $(1,1)$-block of the first inequality in \eqref{barW} gives
$$
B_\bot^\top(x) \bigg( -\partial_f \bar W_x(x,x_c) + {\partial f(x) \over \partial x} \bar W_x + \bar W_x {\partial f(x) \over \partial x}^\top \bigg) B_\bot(x) <0.
$$
Note that the above inequality holds for all $(x,x_c) \in \mathbb{R}^n \times \mathbb{R}^m$. Simply selecting
$$
M(x) = {\bar W}^{-1}_x(x,0),
$$
and invoking the duality, we complete the proof.
\end{proof}
The above analysis shows that the necessary conditions cannot be weakened by adding an integral action.
\begin{remark}\rm
\label{rem:general_dyn_ext}
It is natural to consider the more general case of dynamic extension
$
u = \alpha(x,x_c) , ~
\dot{x}_c = \eta(x,x_c) + \gamma(x,x_c)v
$
with new input $v$ and $x_c \in \mathbb{R}^{n_c}$. If $\alpha(x,x_c)$ is radially unbounded in $x_c$ for fixed $x$, C1 is still a necessary condition. This shows the invariance of CCMs under dynamic extension. Note that the additional radial unboundedness assumption is used to force the PDEs \eqref{pde:1} to hold uniformly.
\end{remark}
\begin{remark}\rm
Invoking the fact that every feedback linearizable system admits a CCM, we conclude that a system which can achieve relative degree via dynamic extension also admits a CCM. Roughly speaking, if a nonlinear system can achieve universal asymptotic tracking of desired trajectories generated by a dynamic controller, then the system has a CCM.
\end{remark}
\subsection{Necessary Condition for Robust Tracking}
\label{sec:robust}
We now carry out the analysis for robust universal tracking control. Consider the closed loop
\begin{equation}
\label{syst_pert}
\dot{x} = f(x) +
B(x)\alpha(x,t) + w(t)
\end{equation}
under the feedback $\alpha(x,t)$, in the presence of a perturbation $w(t) \in \mathcal{L}_2^e$, which asymptotically {\em practically} tracks the trajectories of \eqref{eq:target} with any $u_d(t) \in {\cal E}_{x_{d0}}$ and $x_{d0}\in \mathbb{R}^n$. With a slight abuse of notation, we denote the solution of \eqref{syst_pert} as $X_F(t,x_0,w(t))$ and define $F(x,t):=f(x) +B(x)\alpha(x,t)$. To this end, we require that the target trajectory $x_d$ generated by \eqref{eq:target} is a particular solution of \eqref{syst_pert} in the absence of $w(t)$, and that the closed loop is incrementally input-to-state stable (ISS), {\em i.e.},
$$
|X_F(t,\xi_1,w_1) - X_F(t,\xi_2,w_2)| \le \beta_1(|\xi_1 - \xi_2|,t) + \beta_2(|w_1 - w_2|_\infty),
$$
with $\beta_1 \in \mathcal{KL}$ and $\beta_2 \in \mathcal{K}_\infty$ for any pairs $(\xi_1,\xi_2) \in \mathbb{R}^n \times \mathbb{R}^n$. We need the following.
\begin{definition}
\label{def:iISS}\rm
A smooth function $V(x,\xi) :\mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}_{\ge 0}$ is called an incremental ISS Lyapunov function if \eqref{cond:2} holds and there exists $\beta_3 \in \mathcal{K}_\infty$ such that for all $w_1,w_2\in \mathcal{U} \subset \mathbb{R}^n$ and all $x,\xi\in\mathbb{R}^n$, we have
$$
\beta_3(|x-\xi|) \ge |w_1 - w_2| \quad \implies \quad
L_{F(x,t) + w_1} V(x,\xi) + L_{F(\xi,t) + w_2} V(x,\xi) < -\lambda V(x,\xi),
$$
with $\lambda >0$.
\end{definition}
\begin{proposition}
\label{prop:robust_necessary}\rm
If Assumption \ref{ass:1} holds but with an incremental ISS Lyapunov function in the sense of Definition \ref{def:iISS}, then there exists a metric $2a_1 I_n \le M(x) \le 2a_2 I_n $ such that
\begin{equation}\label{lmi:iISS}
\begin{aligned}
& \bar v^\top \mbox{col}(M(x)B(x), 0_{n\times m}) = 0 \quad \implies \\
& \bar v ^\top \begmat{ \partial_fM + 2{\tt sym}\{M{\partial f\over \partial x}\} + \lambda M(x) & M(x) \\
M(x) & - \gamma_0 I_n } \bar v < 0
\end{aligned}
\end{equation}
for any non-zero $\bar v \in \mathbb{R}^{2n}$ and some $\gamma_0>0$.
\end{proposition}
\begin{proof}
The proof is similar to that of Proposition \ref{prop:necessary}, selecting $M(x) = {\partial^2 V \over \partial \xi^2}(x,x)$. The implication in Definition \ref{def:iISS} is equivalent to
$$
{\partial V(x,\xi)\over \partial x} (F(x,t) + w_1) + {\partial V(x,\xi) \over \partial \xi} (F(\xi,t) + w_2)
< -\lambda V(x,\xi) + \beta_4(|w_1 - w_2|)
$$
with $\beta_4 \in \mathcal{K}_\infty$ \cite[Remark 2.4, pp. 353]{SONWAN}. If $w_1=w_2= 0$, then the above inequality degenerates to the IES case studied in Proposition \ref{prop:necessary}, thus \eqref{eq:ccm} and \eqref{pde:1} also hold for this case.
For any pairs $(x,\xi) \in {\cal E} \times {\cal E}$ and $(w_1,w_2) \in \mathcal{U} \times \mathcal{U}$, we parameterize
$
\xi = x + rv,~w_1 = w_2 - 2re
$
with $|r|$ and $|e|$ sufficiently small. Focusing on the second-order term in the Taylor expansion with respect to $r$, we get the following necessary condition
$$
\begin{aligned}
{r^2 \over 2}
\Bigg[
{\partial \over \partial x}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)}
+
{\partial \over \partial \xi}\Big( v^\top {\partial^2 V \over \partial \xi^2 } v \Big)\Big|_{(x,x)}\Bigg]F(x,t)
+ 2r^2 v^\top M(x) {\partial F(x,t) \over \partial x} v
- 2r^2 v^\top {\partial^2 V \over \partial x\partial \xi}\Big|_{(x,x)}e \\
< -\lambda {r^2 \over 2}\, v^\top {\partial^2 V\over \partial \xi^2}\Big|_{(x,x)}v + r^2 \gamma_0\, e^\top e,
\end{aligned}
$$
for some $\gamma_0 >0$. It can be written as
$$
\begmat{v \\ e}^\top
\begmat{\partial_f M + 2{\tt sym}\{M{\partial F\over\partial x}\} + \lambda M & M \\ M & - \gamma_0 I_n}
\begmat{v \\ e} <0
$$
for any $\mbox{col}(v,e) \neq 0$, where we have used \eqref{cond:2} and ${\partial^2 V \over \partial x\partial \xi}\Big|_{(x,x)} = - M(x)$. Cancelling the input $\alpha(x,t)$ from the above inequality, we get the inequality \eqref{lmi:iISS}.
\end{proof}
\begin{remark}\rm
In \cite[Theorem 2]{ANG}, it was shown that the above incremental ISS Lyapunov function provides a sufficient and necessary condition for the incremental ISS property of \eqref{syst_pert}, assuming that $\mathcal{U}$ is compact and $\alpha(\cdot)$ is time invariant. It is interesting to observe that the condition \eqref{lmi:iISS} is nothing but the robust CCM proposed in \cite{MANSLOcsl} for nonlinear ${\cal H}_\infty$ control, with the ``output'' $y=x$. The above analysis reveals the necessity aspect of the robust CCM in \cite{MANSLOcsl}.
\end{remark}
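As a sanity check of \eqref{lmi:iISS} in the simplest setting, the sketch below evaluates the block matrix for an illustrative linear system with constant $M$ (the matrices and constants $\lambda$, $\gamma_0$ are assumptions) and verifies negative definiteness on the subspace annihilating $\mbox{col}(MB,0)$.

```python
# Hedged numerical sketch of the robust condition: for a linear system with
# constant M, the 2n x 2n block matrix must be negative definite on the
# subspace { vbar : vbar^T col(MB, 0) = 0 }. All data are illustrative.
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)
lam, gamma0 = 0.5, 2.0         # assumed rate and gain

Q = np.block([[A.T @ M + M @ A + lam * M, M],
              [M, -gamma0 * np.eye(2)]])
# basis of the annihilating subspace (here col(MB, 0) is the 2nd unit vector)
V = np.eye(4)[:, [0, 2, 3]]
eigs = np.linalg.eigvalsh(V.T @ Q @ V)
print(eigs.max())              # negative => condition holds
```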
\section{Further Results}
\label{sec:4}
\subsection{Stabilizing Differential System via Damping Injection}
\label{sec:41}
In this section, we discuss some further consequences of the presented necessary conditions, which motivate tracking controller design. In \cite{MANSLO}, a Sontag-type differential feedback controller $\delta u$ is constructed in order to stabilize the infinitesimal displacement $\delta x$, thus achieving IES. However, the obtained differential controller cannot be guaranteed to be smooth at $\delta x= 0$, since the \emph{small control property} only guarantees continuity. Overcoming this drawback is one of our motivations.
\begin{assumption}
\label{ass:sufficient}\rm
Given the system \eqref{eq:NL} and a forward invariant target dynamics \eqref{eq:target} under $u_d$, there exists a metric $\underline p I \le M(x) \le \bar p I$ satisfying the condition C1 of the IES case.
\end{assumption}
Unlike the CLF, the CCM defined on a Riemannian manifold enjoys a \emph{quadratic} form, making it possible to conduct a structural decomposition. The differential detectability condition C2 motivates us to carefully design an output injection, along which the differential system can be rendered passive.
\begin{proposition}
\label{prop:passivity}\rm
Consider the system \eqref{eq:NL} satisfying Assumption \ref{ass:sufficient}. Then, there exists a globally defined smooth function $\gamma:\mathbb{R}^n \to \mathbb{R}_{\ge0}$ such that the differential feedback controller
\begin{equation}
\label{diff_control}
\delta u = -\gamma(x) {\cal R}(x) \delta y + \delta v
\end{equation}
with the damping matrix ${\cal R}(x):= [(MB)^\top MB]^{-1} $ and the differential output
$
\delta y = [M(x)B(x)]^\top \delta x,
$
makes the system differentially passive with the input-output pair $(\delta v,\delta y)$. Furthermore, the damping injection $\delta v= - \gamma_0 \delta y$ with $\gamma_0>0$ makes the origin of the differential dynamics \eqref{diff_dyn} asymptotically stable, and
$
\delta v= - \gamma_0 {\cal R}(x) \delta y
$
makes the origin exponentially stable.
\end{proposition}
\begin{proof}
For convenience, we denote $P(x) := M(x)B(x)$, and decompose each infinitesimal displacement $\delta x$ into two parts, one of which is tangent to $P(x)$ denoted as $\delta x_p$, and the other $\delta x_v$ is orthogonal to $P(x)$, that is,
$
\delta x_p := P(x)P^\dagger(x)\delta x,
$
and
$
\delta x_v := \delta x - \delta x_p.
$
It is easy to verify
$
\delta x_v^\top P(x) = \delta x^\top [I - P(P^\top P)^{-1} P^\top] P =0.
$
We define a differential storage function as $V(x,\delta x) = {1\over 2}\delta x^\top M(x) \delta x$, the time derivative of which is
$$
\begin{aligned}
\dot{V}(x,\delta x)
\le & \delta x^\top
\Bigg[
{1\over 2}\partial_f M(x) + {\partial f(x) \over \partial x}^\top M(x)
\Bigg]\delta x
+
\delta x^\top M(x) \delta u
\\
\le & ({1\over 4r} - {\lambda\over 2}) |\delta x_v|_{M(x)}^2 + \Big[ {r\over \underline p} \Upsilon(x)^2 - \gamma(x)\Big]|\delta x_p|^2 +\delta v^\top \delta y
\end{aligned}
$$
where the input $u$ does not appear in the first inequality thanks to \eqref{pde:1}, and in the second one we substituted $\delta x =\delta x_p + \delta x_v$ and used $\delta x_v^\top P=0$, with
$$
\Upsilon(x):=
\left\|
\partial_f M(x) +2 {\tt sym}\Big\{{\partial f(x) \over \partial x}^\top M(x)\Big\}
\right\|,
$$
and we have also used ${\cal R}(x) P(x)^\top = P^\dagger(x)$ and $\delta x_p^\top P(x) \delta v = \delta v^\top \delta y$. For any $r>{1\over 2\lambda}$, selecting a smooth function
$$
\gamma(x) \ge {r\over \underline p} \Upsilon(x)^2,~ \forall x\in \mathbb{R}^n,
$$
we have
$$
{d\over dt}V(x,\delta x) \le \delta v^\top \delta y.
$$
It implies that the given system can be differentially passivitified via \eqref{diff_control}.
By adding a damping term $\delta v= -\gamma_0 \delta y$ with $\gamma_0 >0$, we have ${d\over dt}V(x,\delta x) \le - \gamma_0|\delta y|^2$, thus
$$
\lim_{t\to\infty} \delta y(t) =0,
$$
by Barbalat's lemma. In the proof of Proposition \ref{prop:necessary} we have shown that the condition C1 implies the zero-state detectability of the differential system with the output $\delta y = P(x)^\top \delta x$. It follows that the origin of the differential system is asymptotically stable.
For the case of $\delta v= - \gamma_0 {\cal R}(x) \delta y$, we have
$$
\begin{aligned}
\dot{V}(x,\delta x) & \le
- {1 \over 2} \Big(\lambda - {1\over 2r}\Big)|\delta x_v|_{M(x)}^2
+
\delta x_p^\top P(x) \delta v \\
& \le - \lambda_0 V(x,\delta x)
\end{aligned}
$$
with $\lambda_0 = \min\big\{\lambda - {1\over 2r}, {2\over \underline p}\big\}$.
\end{proof}
The above analysis shows that the differential controller
\begin{equation}
\label{diff_control2}
\delta u = - [\gamma(x) + \gamma_0] {\cal R}(x) \delta y := K(x) \delta x
\end{equation}
can exponentially stabilize the differential dynamics, which is simply damping injection along the direction of the differentially zero-detectable output $\delta y$, identified in the condition C2. It guarantees ${d\over dt}V(x,\delta x) \le - \lambda_0 V(x,\delta x)$.
\begin{remark}\rm
The proposed differential controller enjoys global smoothness, and is simpler than the Sontag-type design in \cite{MANSLO}; the latter fails to be smooth on a set of zero Lebesgue measure. In \cite{MANSLO} the well-known Finsler's Lemma is applied \emph{point-wise} to compute the differential controller in the form $\delta u = \rho(x) \delta y $ and the metric $M(x)$ simultaneously. Another difference between the proposed design and the one in \cite{MANSLO} lies in the involvement of the weighting matrix ${\cal R}(x)$.
\end{remark}
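The damping-injection gain \eqref{diff_control2} can be illustrated numerically. In the linear sketch below, $A$, $B$, $M=I$ and a constant aggregate gain standing in for $\gamma(x)+\gamma_0$ are assumptions; with the gain large enough, the closed-loop differential dynamics contract in the metric $M$.

```python
# Hedged sketch of the damping-injection differential gain on a linear
# example: delta_u = -(gamma + gamma0) R (MB)^T delta_x, with
# R = [(MB)^T MB]^{-1}. All numbers are illustrative assumptions.
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
M = np.eye(2)

P = M @ B
R = np.linalg.inv(P.T @ P)     # damping matrix R(x)
k = 5.0                        # stands in for gamma(x) + gamma0 (constant here)
K = -k * R @ P.T               # differential gain, delta_u = K delta_x

A_cl = A + B @ K
S = A_cl.T @ M + M @ A_cl      # negative definite => contraction in M
print(np.linalg.eigvalsh(S))
```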
\subsection{Motivating Case and Path Integral}
\label{sec:42}
In this section, we study the construction of a tracking controller $u=\alpha(x,t)$ complying with the differential controller $\delta u = K(x)\delta x$ proposed in Section \ref{sec:41}. We start from a motivating case and invoke the well-developed path-integral methods of \cite{MANSLO}. Our new analytical design will be introduced in the next subsection.
Let us come back to the differential systems with the initial condition $x(0)= \bar\gamma(s)$; the corresponding differential controller at $X(t,\bar\gamma(s),u_s)$ is
\begin{equation}
\label{eq:diff_control_s}
\delta u_s = K(X(t,\bar{\gamma}(s),u_s)) \delta x_s.
\end{equation}
The objective \eqref{eq:convergence} implies forward invariance, {\em i.e.}, $x(t) = x_d(t)$ for all $t \ge t_0$ if $x(t_0)= x_d(t_0)$. A necessary condition for this is the {\em boundary condition}
\begin{equation}\label{eq:boundary}
u_s (t,\cdot) \Big|_{s=0} = u_d(t).
\end{equation}
The differential feedback \eqref{eq:diff_control_s} may be rewritten as
\begin{equation}\label{eq:ode_s}
{\partial u_s(t, \cdot) \over \partial s} = K(X(t,\bar{\gamma}(s),u_s)) {\partial X (\cdot) \over \partial s}
\end{equation}
For a given moment $t>t_0$, the curve $\bar{\gamma}(\cdot)$ and a family of signals $u_\mu(\cdot) \in {\cal L}_\infty^m[0,t]$, $\mu \in [0,1]$, the solution $ X(t,\bar{\gamma}(\mu),u_\mu) =: \bar{\gamma}_x (t, \mu)$ defines a mapping
$$
\bar{\gamma}_x : [0,\infty)\times [0,1] \to \mathbb{R}^n,
$$
whose image at each time $t$ is a smooth curve ${\cal I}_t$ connecting $x(t)$ and $x_d(t)$, governed by \eqref{eq:NL}-\eqref{eq:target}.
which is a smooth curve ${\cal I}_t$ connecting $x(t)$ and $x_d(t)$ governed by \eqref{eq:NL}-\eqref{eq:target}. Along the curve ${\cal I}_t$, considering the boundary condition \eqref{eq:boundary} and solving the ordinary differential equation (ODE) \eqref{eq:ode_s} at each moment $t\in [0,\infty)$\footnote{The differential equation \eqref{eq:ode_s} can be regarded as an ODE with respect to the variable $s$ with a given $t$.}, we get the desired control signal as
$$
\begin{aligned}
u(t,\cdot) =u_s(t,\cdot)\Big|_{s=1}
= u_d(t) + \int_{0}^{1} K(\bar{\gamma}_x (t,\mu)) {\partial \bar{\gamma}_x(t,\mu) \over \partial \mu} d\mu.
\end{aligned}
$$
The implementation of the above controller relies on computing the mapping $\bar\gamma_x(t,\cdot)$ numerically, which carries a relatively heavy online computational burden. An alternative method is to use the minimal geodesic $\gamma_{\tt m}$ between $x(t)$ and $x_d(t)$ with respect to the Riemannian metric $M(x)$. We have the following.
\begin{proposition}\rm
\label{prop:path_integral}
Consider the system \eqref{eq:NL} satisfying Assumption \ref{ass:sufficient}. For any $x_{d0}\in \mathbb{R}^n$ and $u_d \in {\cal E}_{x_{d0}}$, the feedback controller
\begin{equation}
\label{control:path}
u = u_d(t) + \int_{0}^{1} K(\gamma_{\tt m} (x,x_d,\mu)) {\partial \gamma_{\tt m}(x,x_d,\mu) \over \partial \mu} d\mu
\end{equation}
achieves \eqref{eq:convergence} exponentially, where $K(\cdot)$ is defined in \eqref{diff_control2}, and the mapping $\gamma_{\tt m}$ is the minimal geodesic w.r.t. the metric $M(x)$ between $x(t)$ and $x_d(t)$, parameterized by $\mu \in [0,1]$.
\end{proposition}
\begin{proof}
The proof follows from Proposition \ref{prop:passivity} and the proof of the third item in \cite[Theorem 1]{MANSLO}.
\end{proof}
See \cite{MANSLO,WANMAN} for more details about implementation, and \cite{LEUMAN} for the online computation of the minimal geodesic. This step is widely recognized as the computationally heaviest part of the online realization.
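When the metric is constant, say $M=I$, minimal geodesics are straight lines and the path-integral controller \eqref{control:path} collapses to $u = u_d + K(x-x_d)$ for a constant gain. The simulation sketch below (illustrative matrices, an arbitrary target input, and a simple Euler discretization, all assumptions) shows the resulting exponential tracking.

```python
# Hedged sketch of the path-integral controller in the simplest setting:
# constant M = I, straight-line geodesics, so u = u_d + K (x - x_d).
# Plant matrices, gain and target input are illustrative assumptions.
import numpy as np

A = np.array([[-1.0, 2.0], [0.0, 1.0]])
B = np.array([[0.0], [1.0]])
K = np.array([[0.0, -5.0]])          # differential gain from damping injection

def step(x, u, dt):
    """One explicit Euler step of xdot = A x + B u."""
    return x + dt * (A @ x + B @ u)

dt, T = 1e-3, 8.0
x  = np.array([1.0, -1.0])           # plant initial condition
xd = np.array([0.0,  0.0])           # target initial condition
for k_ in range(int(T / dt)):
    ud = np.array([np.sin(k_ * dt)]) # arbitrary feasible target input
    u  = ud + K @ (x - xd)           # collapsed path integral
    xd = step(xd, ud, dt)
    x  = step(x,  u,  dt)

print(np.linalg.norm(x - xd))        # small: the tracking error has decayed
```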
If we make a change of variables in the ODE \eqref{eq:ode_s}, fixing $u=\alpha(x,t)$, we get the PDE
\begin{equation}\label{eq:pde_k1}
{\partial \alpha(x,t) \over \partial x} = K(x).
\end{equation}
We denote by $K_i(\cdot)$ the $i$-th column of $K(\cdot)$. The equation \eqref{eq:pde_k1} is solvable if and only if
\begin{equation}
\label{pde:k}
{\partial K_i(x) \over \partial x_j} = {\partial K_j(x) \over \partial x_i}, \quad \forall i,j \in \{1,\ldots,n\}.
\end{equation}
We have the following corollary, which is straightforward to prove but motivates our new development in the next subsection.
\begin{corollary}\rm
\label{cor:poincare}
Consider the system \eqref{eq:NL} satisfying Assumption \ref{ass:sufficient}. For any $x_{d0}\in \mathbb{R}^n$ and $u_d \in {\cal E}_{x_{d0}}$, if we can find a smooth function $\gamma(x)$ guaranteeing \eqref{pde:k}, then the feedback controller
\begin{equation}
\label{control:poincare}
u = u_d(t) + \beta(x) - \beta(x_d)
\end{equation}
achieves \eqref{eq:convergence} exponentially, where $K(\cdot)$ is defined in \eqref{diff_control2}, and
$$
\beta(x) = \int_{0}^x K(\chi)d\chi
$$
is a (path-independent) line integral.
\end{corollary}
\begin{proof}
The PDE \eqref{pde:k} guarantees the existence of $\beta(x)$ by the Poincar\'e lemma. The infinitesimal displacement $\delta u_s$ at $X(t,\bar{\gamma}(s), u_s)$ is
$$
\delta u_s = K(X(t,\bar{\gamma}(s), u_s))\delta x_s.
$$
Selecting the differential Lyapunov function $\bar V(t,s) := V(X(\cdot),\delta x_s) = {1\over 2}\delta x_s^\top M(X(\cdot))\delta x_s$, its time derivative satisfies
$$
{d\over dt} \bar V(t,s) \le -\lambda_0 \bar V(t,s),
$$
since the mapping $K(\cdot)$ is constructed following Proposition \ref{prop:passivity}. Invoking the main results in \cite{FORSEP}, we conclude the incremental exponential stability of the closed-loop system under the controller \eqref{control:poincare}. Noting the invariance of $x = x_d$, we achieve the universal tracking task \eqref{eq:convergence}.
\end{proof}
\vspace{0.1cm}
\begin{remark}\rm
The selection of the damping injection provides an additional degree of freedom to make the mapping $K$ satisfy \eqref{pde:k}. When the Riemannian metric $M(x)$ and the input matrix $B(x)$ are independent of the state $x$, it is simple to complete the controller design. However, guaranteeing \eqref{pde:k} is a daunting task in general.
\end{remark}
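The integrability condition \eqref{pde:k} and the resulting line integral can be checked symbolically. The gain below is a made-up single-input example satisfying the mixed-partial condition; it is not derived from a CCM.

```python
# Hedged symbolic sketch: check the integrability condition and recover beta
# with d beta/dx = K via a line integral along the segment s*(x1, x2).
# The 1 x 2 gain K is an illustrative assumption.
import sympy as sp

x1, x2, t = sp.symbols('x1 x2 t')
K = sp.Matrix([[-x2, -x1 - 3]])          # assumed differential gain (m = 1)

# mixed partials agree  <=>  a potential beta exists (Poincare lemma)
assert sp.diff(K[0, 0], x2) == sp.diff(K[0, 1], x1)

# line integral along s |-> s*(x1, x2), s in [0, 1]
beta = sp.integrate((K.subs({x1: t*x1, x2: t*x2}) @ sp.Matrix([x1, x2]))[0],
                    (t, 0, 1))
print(sp.expand(beta))                   # a potential for K
```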
\subsection{The Controller Design}
The last step is the construction of the tracking controller $u=\alpha(x,t)$ from the obtained differential feedback $\delta u$, which is solvable if and only if
$
{\partial K_i (x) \over \partial x_j} = {\partial K_j(x) \over \partial x_i}
$
for all $i,j$, with $K_i(\cdot)$ the $i$-th column of $K(\cdot)$. Here, we propose an alternative method to (locally) realize the proposed differential controller. Indeed, the above-mentioned PDE is widely adopted in nonlinear observer design and adaptive control; see \cite{KARetal} for a recent review. We have the following.
\begin{proposition}
\label{prop:dyn_impl}\rm
Consider the system \eqref{eq:NL} and the target system \eqref{eq:target} satisfying Assumption \ref{ass:sufficient} with $x_d \in {\cal L}_\infty$. The system \eqref{eq:NL} is contracting under the dynamic feedback law
\begin{equation}
\label{eq:z}
\dot{z} = f(x) + B(x) u - \ell (z -x), \quad u = u_d + \beta(x,z) - \beta(x_d,z)
\end{equation}
with $\ell>0$, $z(0),x_0 \in \mathcal{B}_\varepsilon(x_{d0})$ for $\varepsilon$ smaller than some $\varepsilon_\star >0$ and
$$
\beta(x,z) = \int_{0}^{x_1} K_1(\mu, z_2, \ldots, z_n)d\mu + \ldots
+ \int_{0}^{x_n} K_n( z_1, \ldots, z_{n-1}, \mu)d\mu,
$$
thus achieving the task \eqref{eq:convergence}.
\end{proposition}
\begin{proof}
We give a sketch of the proof. The dynamic extension \eqref{eq:z} is an IES system with $z=x$ a particular solution. If the system \eqref{eq:NL} is forward complete, then we have $\lim_{t\to\infty} |z(t) - x(t)| =0$. For convenience, we denote the feedback law in \eqref{eq:z} as
$$
\alpha(x,t) := u_d(t) + \beta(x,z(t)) - \beta(x_d(t),z(t)).
$$
We then have
\begin{equation}\label{eq:partial}
\begin{aligned}
{\partial \alpha \over \partial x}
& =
{\partial \beta(x,z) \over \partial x}
=
\bigg[
\begin{aligned}
K_1(x_1, z_2, \ldots, z_n)~~ \Big|&~ \ldots&\Big| K_m(z_1,\ldots, z_{n-1}, x_n)
\end{aligned}
\bigg]
=:
\hat{K}(x,z).
\end{aligned}
\end{equation}
Define
\begin{equation}
\label{eq:Delta}
\begin{aligned}
\Delta(x,z) := &\hat{K}(x,z) - K(x)
\end{aligned}
\end{equation}
satisfying
$
\Delta(x,x) =0.
$
If the closed-loop system \eqref{eq:NL} is forward complete, which can be shown for small $\varepsilon>0$, we have $\lim_{t\to\infty} \Delta(x(t),z(t)) = 0$ exponentially.
We now prove the contraction property of the closed-loop system by investigating its differential system along the solution $x =X(t,\bar\gamma(1),\alpha(x,t))$, which is
\begin{equation}
\label{diff_dyn_pert}
\delta \dot{x} = [A(x,u) + g(x)(K(x)+ \Delta(x,z))]\delta x.
\end{equation}
Invoking Proposition \ref{prop:passivity}, the above differential system can be regarded as an exponentially stable LTV system perturbed by a term $g(x)\Delta(x,z) \delta x$. Assuming that $x_0 \in B_\varepsilon(x_{d0})$ and $x_d(t) \in {\cal L}_\infty$ with small $\varepsilon>0$, we have $x(t) \in {\cal L}_\infty$. Therefore, the perturbation $g(x) \Delta(x,z) \delta x$ is an exponentially decaying term, and we conclude the exponential stability of \eqref{diff_dyn_pert} with some basic perturbation analysis \cite[Chapter 9]{KHA}. Using the converse Lyapunov theorem and \cite[Theorem 1]{FORSEP}, we are able to prove the (local) IES of the control system \eqref{eq:NL} under the proposed feedback law.
\end{proof}
\section{Examples}
\label{sec:5}
\subsection{A Numerical Example}
In this subsection we consider a simple numerical example to verify the results in Section \ref{sec:43}, showing the relatively large domain of attraction. Consider the system
\begin{equation}
\label{eq:numerical}
\begin{aligned}
\dot{x}_1 & = {1\over3}x_2^3 + x_2 \\
\dot{x}_2 & = -x_2 + u.
\end{aligned}
\end{equation}
We obtain the metric $M = \begmat{3 & -1 \\ -1 & 2}$ with the differential controller $\delta u= K(x) \delta x$, where $K(x) = [-(x_2^2+1)~ ~-x_2^2]$. Constructing the dynamic extension and following the results in Subsection \ref{sec:42}, we obtain the feedback law as
$$
u = - (z_2^2+1)(x_2) + (x_{d,2}^2 + 1)x_{d,2} - {1\over 3}(x_2^3 - x_{d,2}^3) + u_d(t).
$$
We compare it with the controller \eqref{control:path} obtained by path integral via simulations. The initial conditions are $x_{d0}=[3~-1]^\top$, $x_0 = [-5 ~2]^\top$ and $z(0)=[0~0]^\top$, with $\ell =5$ and $u_d = \sin(t) - \cos(t)^2 x_{d,1}(t)$. We show the simulation result in Fig. \ref{fig:num}, where both methods achieve IES. As expected, the proposed method has a larger overshoot at the beginning due to the dynamic extension, but it reduces the online computational burden. We also tested the controller with different initial conditions, illustrating that the domain of attraction is relatively large.
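As a quick sanity check of the dynamic extension \eqref{eq:z}, note that the error $e := z - x$ obeys $\dot e = -\ell e$ regardless of the input $u$, so $\|z-x\|$ decays exactly as $e^{-\ell t}$. The sketch below is a hypothetical script (not the code behind Fig. \ref{fig:num}; the bounded input $u=\sin t$ and the initial conditions are illustrative) verifying this for the example dynamics \eqref{eq:numerical} with $\ell = 5$:

```python
import numpy as np

# Verifies that the dynamic-extension error e = z - x obeys e' = -ell * e
# for any input u, using the drift of the numerical example.
def f(x):
    # x1' = x2^3/3 + x2,  x2' = -x2  (the input enters through B)
    return np.array([x[1]**3 / 3.0 + x[1], -x[1]])

B = np.array([0.0, 1.0])
ell, dt, T = 5.0, 1e-4, 2.0
x = np.array([-0.5, 0.2])      # plant state (illustrative initial condition)
z = np.array([0.0, 0.0])       # dynamic-extension state

e0 = np.linalg.norm(z - x)
n = int(round(T / dt))
for i in range(n):
    u = np.sin(i * dt)         # any bounded input: the e-dynamics ignore u
    rhs = f(x) + B * u
    x, z = x + dt * rhs, z + dt * (rhs - ell * (z - x))
print(np.linalg.norm(z - x), e0 * np.exp(-ell * T))  # agree up to O(dt)
```

Since the $e$-dynamics are linear and autonomous, this decay is independent of the particular controller applied.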
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{fig/fig_num1.pdf}
\includegraphics[width=0.3\textwidth]{fig/fig_num3.pdf}\caption{Simulation results of the numerical example}
\label{fig:num}
\end{figure}
\subsection{Electrostatic Microactuator}
To illustrate the results, let us consider the problem of position tracking of the electrostatic microactuator, the model of which is given by \cite{MAIetal}
\begin{equation}
\label{EM_syst}
\begmat{\dot q\\ \dot p \\ \dot Q}
=
\begmat{{1 \over m} p \\ - k(q -1) - {1\over 2A\epsilon} Q^2 - {b\over m}p \\ -{1\over R A\epsilon}qQ+ {1\over R}u },
\end{equation}
and we denote $x:=\mbox{col}(q,p,Q)$, representing the air gap, the momentum and the charge of the device. The system state is defined on $\{(q,p,Q) \in \mathbb{R}^3~ |~ 0\le q\le2, Q\ge 0\}$ due to physical constraints. Solving the inequality \eqref{eq:ccm}, we get a feasible solution
$$
M^{-1} = \begmat{
{1\over 2bk} + {b^2 +km \over 2bk}& -{m\over 2}& 0\\
-{m \over 2} & {km^2 \over 2b} + {m \over 2b} & 0\\
0 & 0 & 1
},
$$
which is positive definite uniformly in the parameters $k>0, m>0$ and $b>0$. Noting that this example evolves in a bounded state space, we can simply use a constant $\gamma>0$ for trajectory tracking. We give the simulation results in Fig. \ref{fig:EM} with normalized parameters $m=1$, $k=1$, $b=2$, $R=1$, $A = 3$ and $\epsilon = {1\over 2}$, and $x_{d0}=[0.2~0~0]^\top$ and $x_0=[1.5~1~2]^\top$. The control input of the target dynamics is selected as $u_d = {1 \over 2} |\sin({1\over 5}t) + \cos(t)|$, and we fix $\gamma =2$. The simulation results validate the theoretical part.
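The positive definiteness of the metric for the simulation parameters can also be verified numerically. The short sketch below (a sanity check with the normalized values $m=1$, $k=1$, $b=2$ used above, not part of the controller design) computes the eigenvalues of $M^{-1}$:

```python
import numpy as np

# Eigenvalue check of the inverse contraction metric M^{-1} for the
# normalized simulation parameters m = 1, k = 1, b = 2.
m, k, b = 1.0, 1.0, 2.0
M_inv = np.array([
    [1/(2*b*k) + (b**2 + k*m)/(2*b*k), -m/2.0,                  0.0],
    [-m/2.0,                            k*m**2/(2*b) + m/(2*b), 0.0],
    [0.0,                               0.0,                    1.0],
])

eigs = np.linalg.eigvalsh(M_inv)
print(eigs)   # all positive, so M^{-1} (and hence M) is positive definite
```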
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{fig/fig_EM1.pdf}\caption{Simulation results for the model of electrostatic microactuator}
\label{fig:EM}
\end{figure}
\section{Concluding Remarks}
\label{sec:6}
In this paper we have studied the necessary conditions for systems to achieve trajectory tracking in different cases, including universal asymptotic tracking, tracking with dynamic extension, and the robust case. The invariance of CCMs under dynamic extension is clarified. We also show that the proposed differential detectability condition is intuitive for tracking controller design. Extensions in the following directions are of interest: 1) it is of practical interest to modify the results in Proposition \ref{prop:dyn_impl} in order to get a semi-global design; and 2) in this paper, we limit our attention to general nonlinear systems in the form \eqref{eq:NL}. For systems with specific structures, it is promising to get more systematic constructive solutions.
\section{Introduction}
Three dimensional supersymmetry is important as it has been
observed in the Kondo effect \cite{18a}-\cite{a18}. The original Kondo effect
describes a defect interacting with a free Fermi liquid of itinerant
electrons, and supersymmetry is introduced if
the ambient theory is an interacting CFT. In fact, this introduces qualitatively new features into the system.
A meta-magnetic transition in models for heavy fermions has been analysed using
a doped Kondo lattice model in two dimensions
\cite{16}. It has been demonstrated that such a system
exhibits a field-driven quantum phase transition due to a breakdown of the
Kondo effect \cite{a1}-\cite{a12}. Such systems are analysed using
Lifshitz theories which are theories based on an anisotropic scaling between space and time.
The second order quantum phase transition has also been analysed using Lifshitz theories \cite{1}-\cite{4}.
The location of a Fermi-surface-changing Lifshitz transition is determined by carrier doping
in some heavy fermion compounds \cite{15}.
Due to strong correlations, the chemical potential does not cause a heavy band to shift rigidly; this behaviour
is determined by the interplay of heavy and additional light bands crossing the Fermi level.
Three dimensional supersymmetry has also been observed in graphene \cite{s5}-\cite{s4}. Furthermore,
the van der Waals and Casimir interactions, between graphene
and a material plate, between a single-wall carbon nanotube and a plate, and between graphene and an atom or a molecule, have been analysed using Lifshitz scaling \cite{a5}.
It may be noted that by generalizing the usual Lifshitz theory, it is possible to describe such materials which could not be described with the local dielectric response \cite{a3}. The
Casimir-Lifshitz free energy between two parallel plates made of dielectric material possessing a constant conductivity at low temperatures has been studied, and the temperature correction for this system has also been analysed \cite{a4}. Many properties of narrow heavy fermion bands can be described by a
Zeeman-driven Lifshitz transition \cite{a2}. The fermionic theories with $ z = 3$ have been analysed \cite{2a}-\cite{3a}.
In fact, the Nambu-Jona-Lasinio type four-fermion coupling at the $z=3$ Lifshitz fixed point in four dimensions is asymptotically free and generates a mass scale dynamically \cite{5a}. Furthermore, fermionic theories with $z=2$ have been constructed, and it has been demonstrated that the construction of such fermionic theories requires a non-local differential operator \cite{6a}. However, it is possible to analyse this non-local differential operator using the harmonic extension of functions \cite{7a}-\cite{12a}.
The Lifshitz theories based on the generalized uncertainty principle have also been constructed \cite{field}.
The generalized uncertainty principle is motivated by the existence of a minimum length scale, which in turn is predicted from almost all approaches to quantum gravity. According to most quantum gravity theories, the classical picture of spacetime as a continuous differential manifold breaks down below the Planck length. This is because fluctuations in the geometry of order one at the Planck scale impose a minimum length scale below which space cannot be probed. Such a minimum measurable length scale
occurs in string theory, as space cannot be probed below the string length scale in perturbative string theory
\cite{unz2}-\cite{un2z}. In loop quantum gravity, the existence of a minimum length
turns the big bang into a big bounce \cite{unz1}. Even though the existence of a minimum measurable length scale in predicted in almost
all theories of quantum gravity, it is not consistent with the usual
Heisenberg uncertainty principle. This is because according to the usual Heisenberg uncertainty principle
length can in principle be measured with arbitrary precision, if the momentum is not
measured \cite{unzasaqsw,un1,un11, un12, un13, un14, un15, un17,un18,un19,un10,un51,un52,un5}. So, according to the usual Heisenberg uncertainty principle, a minimum measurable
length scale does not exist. Therefore,
it is necessary to modify the Heisenberg uncertainty principle to make it consistent with the existence of a minimum measurable length scale.
This modified uncertainty principle is called the generalized uncertainty principle.
The modification of the Heisenberg uncertainty principle leads to a deformation of the usual Heisenberg algebra.
Even though the generalized uncertainty principle is motivated by quantum gravity, a modification of this
principle can have low energy effects
which can be detected in laboratory \cite{un54}. In fact, such effects are expected to be observed in
Lamb shift, Landau levels, and the tunneling current in a scanning tunneling microscope \cite{un}.
Furthermore, it is expected that such a low energy effect from the
generalized uncertainty principle can also be observed in graphene.
Thus, it is both interesting and important to analyse supersymmetric theories in three
dimensions with Lifshitz scaling based on the generalized uncertainty principle.
Such an analysis would be important for studying
the low energy effect of the generalized uncertainty principle on the Kondo effect in heavy metals, and on the van der Waals and Casimir interactions in graphene.
It is possible to construct a free supersymmetric theory based on the generalized uncertainty principle and
Lifshitz scaling. Even though the introduction of interactions breaks the supersymmetry of
such a theory, it might still be interesting, as
free field theories are also very important as effective field theories describing materials like graphene.
It may be noted that four dimensional supersymmetric theories with Lifshitz scaling have been studied \cite{lifs}-\cite{lifs1}, but
so far three dimensional supersymmetric theories with Lifshitz scaling have not been studied. Furthermore, the generalized uncertainty principle has
never been combined with supersymmetric field theories based on Lifshitz scaling. However, such a construction is important
for analysing condensed matter systems. So, in this paper, we will analyse three dimensional supersymmetric Lifshitz field theories based
on the generalized uncertainty principle.
\section{ Deformed Superspace}
In this paper, we shall analyse supersymmetric Lifshitz theories in the presence of a minimum measurable length scale.
Let us first introduce these two concepts. First, the existence of a minimum measurable length scale is manifested by deforming the usual uncertainty principle to the
generalized uncertainty principle,
\begin{equation}
\Delta x \Delta p \geq \frac{1}{2} [1 + \beta (\Delta p)^2],
\end{equation}
where $\beta = \beta_0 \ell_{Pl}^2 $, $\beta_0$ is a constant normally assumed to be of order one,
and $\ell_{Pl} \approx 10^{-35}~m$.
This deformation of the uncertainty principle in turn deforms the usual Heisenberg algebra to
\begin{equation}
[x^i, p_j ] =
i [\delta_{j}^i + \beta p^2 \delta_{j}^i + 2 \beta p^i p_j].
\end{equation}
Correspondingly, the
coordinate representation of the momentum operator is modified to the first order in $\beta$ as,
\begin{equation}
p_i = -i \partial_i (1 - \beta \partial^i \partial_i ).
\end{equation}
Second, in theories with Lifshitz scaling, space and time scale differently. Thus, we can write the scaling of
space and time as
\begin{eqnarray}
x \to bx, \nonumber \\ t \to b^z t,
\end{eqnarray}
where $z$ is called the degree of anisotropy and $b$ is called the scaling factor.
In this paper, we shall consider $z =2$.
It may be noted that this transformation reduces to the usual conformal transformation for
$z =1$.
Now we will incorporate the generalized uncertainty principle into a theory with Lifshitz scaling. The resulting deformed three dimensional Lifshitz bosonic action is given by \cite{field}
\begin{eqnarray}
S_{b}&=& \frac{1}{2}\int d^3 x~\left(
\phi \partial^0 \partial _{0}\phi- \kappa ^{2} \partial ^{i}\phi \mathcal{T}^2_{\partial}
\partial _{i}\phi \right),
\end{eqnarray}
where the non-local fractional derivative operator $\mathcal{T}_{\partial}$ is given by
\begin{eqnarray}
\mathcal{T}_{\partial} &=& T_\partial (1 - \beta \partial^j \partial_j)
\nonumber \\ &=& \sqrt{-\partial^i \partial_i} (1 - \beta \partial^j \partial_j).
\end{eqnarray}
Such an incorporation breaks the Lifshitz scaling,
as $\beta$ does not scale with space and time. However, it is possible to preserve the Lifshitz scaling by
promoting the parameter $\beta$ to a background field which scales as \cite{field}
\begin{equation}
\beta \to b^2 \beta.
\end{equation}
It may be noted that the non-local differential operator used in the construction of the Lifshitz bosonic action
based on the generalized uncertainty principle can be analysed using the harmonic extension of functions from
$R^2$ to $R^2 \times (0, \infty)$ \cite{6a}-\cite{12a}. In fact, it can be effectively viewed as a local
differential operator by using this harmonic extension of functions.
The operator $ {T}_{\partial}$ can be defined by its action on
functions
$f: R^2 \to R $. In this case, its harmonic extension $u: R^2\times (0, \infty) \to R$ satisfies,
$
{T}_{\partial} f(x) = -\partial_y u (x, y)| _{y =0}
$. Now let $u: R^2 \times (0, \infty) \to R$ be the harmonic extension
of $f: R^2 \to R$, such that
its restriction
to $R^2$ coincides with $f: R^2 \to R$. Now the solution of the Dirichlet problem defined by $u(x, 0) = f(x)$ and $ \partial^2 u (x, y) =0 $,
can be used to find $u$, where
$\partial^2$ is the Laplacian on $R^3$.
There exists a unique harmonic extension
$ u \in C^\infty (R^2 \times (0, \infty))$ for a smooth function $f \in C^\infty_0 (R^2) $.
Now we can write $
{T}_{\partial}^2 f (x) = \partial^2_y u(x, y)|_{y =0}
= - \partial^i \partial_i u(x, y)|_{y =0}$, because $ {T}_{\partial} f (x)$ also has a harmonic
extension to $R^2 \times (0, \infty)$.
Furthermore, it is possible to write $ {T}_{\partial} = \sqrt{- \partial^i \partial_i}$,
as $ {T}_{\partial}^2 f(x) = - \partial^i \partial_i f(x)$.
Thus, we obtain $ {T}_{\partial} \exp ikx = |k| \exp ikx$, as
$ {T}_{\partial}^2 \exp ikx = |k|^2 \exp ikx$.
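In Fourier space, therefore, ${T}_{\partial}$ acts as multiplication by $|k|$, and the deformed operator $\mathcal{T}_{\partial} = {T}_{\partial}(1-\beta\partial^j\partial_j)$ as multiplication by $|k|(1+\beta k^2)$. A one-dimensional spectral sketch (grid size and the value of $\beta$ are illustrative):

```python
import numpy as np

# Spectral realization of T = sqrt(-d^2/dx^2) on a periodic 1d grid:
# a Fourier mode is multiplied by |k|; the GUP-deformed operator
# T (1 - beta d^2/dx^2) is multiplied by |k| (1 + beta k^2).
N, L = 256, 2 * np.pi
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)   # integer wavenumbers for L = 2*pi

def frac_op(f, beta=0.0):
    symbol = np.abs(k) * (1.0 + beta * k**2)
    return np.fft.ifft(symbol * np.fft.fft(f)).real

f = np.cos(3 * x)                            # eigenfunction with |k| = 3
beta = 0.01
print(np.max(np.abs(frac_op(f) - 3 * f)))                         # ~ 0
print(np.max(np.abs(frac_op(f, beta) - 3 * (1 + 9 * beta) * f)))  # ~ 0
```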
Now, using this operator, we can write
the bosonic action as
\begin{equation}
S_b =\frac{1}{2}\int d^{3}x~\partial ^{\mu }\phi ~G_{\mu \nu }
\partial ^{\nu }\phi,
\end{equation}
where $G_{\mu\nu}$ is an effective metric containing the non-local operator $\mathcal{T}_{\partial}$.
It is also possible to define a
set of gamma matrices such that they satisfy
\begin{equation}
\{ \Gamma_\mu, \Gamma_\nu\} = 2 G_{\mu\nu}.
\end{equation}
It is possible to write a Lifshitz
fermionic operator based on the generalized uncertainty principle as
\begin{equation}
\Gamma^\mu \partial_\mu =
\gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i }.
\end{equation}
This is because if $
\{ \gamma_\mu, \gamma_\nu\} = 2 \eta_{\mu\nu}
$, then it is possible to write
$\Gamma_0 = \gamma_0$ and $\Gamma_i = \kappa \mathcal{T}_{\partial} \gamma_i$.
Furthermore, we can also write
\begin{equation}
\Gamma^\mu \partial _{\mu }
\Gamma^\nu \partial _{\nu } = \partial^0\partial_0 - \kappa^2 (\partial^i\partial_i(1 -\beta \partial^k \partial_k)) ^2.
\end{equation}
We can write a Lifshitz fermionic action based on the generalized uncertainty principle using three dimensional spinor fields,
$\psi_a = \psi^b C_{ba}, $ and $ \psi^a = C^{ab}\psi_b$. Here we have
$C_{ab}C^{cd} = \delta^c_a \delta^d_b -\delta^c_b \delta^d_a $.
The square of these spinor fields is given by $\psi^2 = \psi^a \psi_a/2$.
Now the Lifshitz fermionic action based on the generalized uncertainty principle can be written as
\begin{eqnarray}
S_f&=& \frac{1}{2}
\int d^{3}x~\psi^a ( \Gamma^\mu \partial _{\mu })^b_a \psi_b \nonumber \\
&=&\frac{1}{2}
\int d^{3}x~ \psi^a ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })^b_a \psi_b.
\end{eqnarray}
We now have the Lifshitz bosonic and fermionic theories based on the generalized uncertainty principle,
and so we
can construct a free supersymmetric theory with $\mathcal{N} =1$ supersymmetry using these actions.
Thus, motivated by the definition of the generator of ordinary
$\mathcal{N} =1$ supersymmetry, we can write the
generator of $\mathcal{N} =1$ supersymmetry for a Lifshitz theory based on the generalized uncertainty principle as
\begin{equation}
Q_a = \partial_a - ( \gamma^0 \partial _{0 } \theta + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i }\theta)_a.
\end{equation}
Now let $u (x, y) $ be the harmonic extension of $f (x) $, and so
$ \partial_i u (x, y)$ will be the harmonic extension of $\partial_i f(x)$,
\begin{eqnarray}
{T}_{\partial} \partial_i f(x) &=& -
\partial_y \partial_i u(x, y)|_{y =0} \nonumber \\ &=& - \partial_i u_y (x, y)|_{y =0}.
\end{eqnarray}
Furthermore, we have
$- \partial_i u_y (x, y)|_{y =0} = \partial_i {T}_{\partial}
f(x)$ as $ {T}_{\partial} f(x) = - u_y (x, 0)$.
So, the operator $ {T}_{\partial}
$ commutes with an ordinary derivative $\partial_i$,
\begin{equation}
{T}_{\partial} \partial_i f(x) = \partial_i {T}_{\partial} f(x).
\end{equation}
Thus, we can now construct a super-derivative $D_a$ which will commute with the
generator of $\mathcal{N} = 1$ supersymmetry,
\begin{equation}
D_a = \partial_a - ( \gamma^0 \partial _{0 } \theta - \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i }\theta)_a.
\end{equation}
Furthermore, they also obey the following non-local supersymmetric algebra,
\begin{eqnarray}
\{Q_a, Q_b\} &=&
2 ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_{ab},\nonumber \\
\{D_a, D_b\} &=&
- 2 ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_{ab},\nonumber \\
\{Q_a, D_b\} &=&
0.
\end{eqnarray}
The states in this theory
that are invariant under a symmetry are annihilated by
generators of that symmetry. So, by taking the trace of $\langle E |\{ Q_a, Q_b \}| E \rangle $,
it is possible to demonstrate that
the energy of the
ground state vanishes even for this deformed supersymmetric theory. Furthermore, as the
Lifshitz momentum deformed by the generalized uncertainty principle
again commutes with the generators of the supersymmetry, there
occurs a degeneracy in the mass of two states that are related to
each other by these generators of supersymmetry.
However, because of the non-local differential operator in the definition of
$Q_a$, these variations do not obey the Leibniz rule, and so the variation of
a product of superfields is not the sum of the variations of each of those superfields.
This problem can be evaded for free theories. This is because for free theories we can always
shift one differential operator at a time from one field to another in the Lagrangian. Thus,
in case of free theories, even theories with Lifshitz scaling deformed by generalized uncertainty principle, we can still construct a non-local supersymmetric field theory using
superspace formalism.
But as soon as the interactions are introduced, they will tend to break this supersymmetry.
Now we will analyse some properties of the superspace which is suitable to construct free non-local supersymmetric theories.
First, we have
\begin{equation}
D_a D_b = - C_{ab} D^2 - ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_{ab}.
\end{equation}
Furthermore, the complete anti-symmetrization of three two-component spinor indices vanishes,
\begin{equation}
2 D_a D_b D_c = D_a \{ D_b , D_c \} + D_b \{ D_a , D_c\} + D_c \{ D_a , D_b\}.
\end{equation}
So, we can write, $D^a D_b D_a =0$, and $ D^2 D_a = - D_a D^2 $, where
$
D^2 D_a = ( \gamma^0 \partial _{0 } D + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i } D)_{a}
$.
These properties will be used to study various non-local Lifshitz supersymmetric field theories based on the
generalized uncertainty principle.
\section{Supersymmetric Field Theory}
In this section, we will analyse Lifshitz supersymmetric field theories based on the generalized uncertainty principle.
We will write an action for a generalized uncertainty principle deformed Lifshitz theory in $\mathcal{N} =1$ superspace
formalism, so that it has manifest
$\mathcal{N} =1$ supersymmetry. In order to do that, we first expand a superfield $\Phi$ as
$
\Phi = \phi + \psi^a \theta_a - \theta^2 F
$.
Now we can write $
\phi = [\Phi]_|, \, \psi_a = [D_a \Phi]_|,
\,
F = [D^2 \Phi]_|
$, here $'|'$ means that at the end of calculations we set $\theta_a =0$.
The non-local supersymmetric transformations generated by $\epsilon^a Q_a$
can be written as
\begin{eqnarray}
\epsilon^a Q_a \phi &=&
- \epsilon^a \psi_a,
\nonumber \\
\epsilon^a Q_a \psi_a &=&
- \epsilon^b [C_{ab} F + ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_{ab}\phi ],
\nonumber \\
\epsilon^a Q_a F &=&
- \epsilon^a ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_{a}^b \psi_b.
\end{eqnarray}
We can write a free action for the deformed supersymmetic theory in $\mathcal{N} =1$ superspace as
\begin{eqnarray}
S_{free} [\Phi] &=& \frac{1}{2} \int d^3 x D^2 [\Phi D^2 \Phi ]_| \nonumber \\
&=& \frac{1}{2} \int d^3 x [ D^2 \Phi D^2 \Phi + D^a \Phi D_a D^2 \Phi + \Phi (D^2)^2 \Phi ]_|
\nonumber \\
&=& \frac{1}{2} \int d^3 x [F^2 +
\phi ( \partial^0\partial_0 - \kappa^2 (\partial^i\partial_i(1 - \beta \partial^j \partial_j)) ^2)\phi
\nonumber \\ &&
+
\psi^a ( \gamma^0 \partial _{0 } + \gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })^b_a \psi_b
]
\nonumber \\ &=&
S_a + S_b + S_f,
\end{eqnarray}
where $S_b$ is the deformed bosonic action, $S_f$ is
the deformed fermionic action, and $S_a$ is the deformed action for the auxiliary field $F$.
In this action, the supersymmetric variations of the temporal parts cancel out as in the
ordinary supersymmetric field theories. Furthermore, the non-local supersymmetric
variation of a part of the bosonic action generates,
$ \epsilon ^a \psi _a \kappa^2 (\partial^i\partial_i(1 - \beta \partial^j \partial_j)) ^2\phi $,
and this term exactly cancels with
a term generated by the non-local supersymmetric variation of a part of fermionic action.
The non-local supersymmetric variation of the fermionic action also contains the term
$\epsilon^b (\gamma^j \kappa \mathcal{T}_{\partial} \partial _{j })_b^a\phi \,
(\gamma^i \kappa \mathcal{T}_{\partial} \partial _{i })_a^c \psi_c $.
This does not directly cancel
out with
the non-local supersymmetric variation of the bosonic part. However,
if we view the non-local operator in terms
of harmonic extensions of functions,
then this term can be written as
$\epsilon^b \phi \kappa^2 (\partial^i\partial_i(1 - \beta \partial^j \partial_j)) ^2\psi_b $.
Here the derivatives only act on the fermionic part.
Let $u_1(x, y)$ be the harmonic extension of $f_1 (x)$ to $ C = R^2 \times (0, \infty)$, and $u_2 (x, y)$ be the harmonic extension of
$f_2 (x)$ to $ C = R^2 \times (0, \infty)$. Now
both these harmonic extensions vanish
for $|x| \to \infty $ and $|y| \to \infty $, and we can write \cite{5a01}
\begin{equation}
\int_C u_1(x, y) \partial^2 u_2 (x, y) dx dy -
\int_C u_2(x, y) \partial^2 u_1 (x, y) dx dy
= 0.
\end{equation}
Thus, we obtain
\begin{equation}
\int_{R^2} \left(u_1(x, y) \partial_y u_2 (x, y) - u_2(x, y)
\partial_y u_1 (x, y) \right)\left. \right|_{y =0} dx
= 0.
\end{equation}
Using $u_i(x,0) = f_i(x)$ and $ {T}_{\partial} f_i(x) = -\partial_y u_i (x, y)|_{y =0}$, this can be expressed in terms of $f_1 (x) $ and $f_2 (x)$,
\begin{equation}
\int_{R^2}\left(f_1(x) {T}_{\partial}
f_2 (x) - f_2(x) {T}_{\partial} f_1 (x)\right) dx
= 0.
\end{equation}
Thus, $\mathcal{T}_{\partial}$ is moved from $f_2 (x)$ to $f_1 (x)$,
\begin{equation}
\int_{R^2} f_1 (x) \mathcal{T}_{\partial} f_2 (x) =
\int_{R^2} f_2 (x) \mathcal{T}_{\partial} f_1 (x).
\end{equation}
Now the non-local term generated by the non-local supersymmetric
variation of the fermionic action can be expressed as
$ \epsilon ^a \phi \kappa^2 (\partial^i\partial_i(1 - \beta \partial^j \partial_j)) ^2\psi_a $, and so it
also cancels out with the non-local supersymmetric variation of the bosonic action.
It may be noted that this can be done only formally by using the theory of
harmonic extensions of functions from
$R^2$ to $R^2 \times (0, \infty)$. Similarly, the remaining terms generated by non-local supersymmetric
variation of the fermionic part cancel with the terms generated by the non-local supersymmetric variation
of the auxiliary field.
This theory retains the generalized uncertainty principle deformed Lifshitz scaling and $\mathcal{N} =1$ supersymmetry,
even after the mass term, $m D^2 [ \Phi^2 ]_|/2= m\psi^2 + m \phi F$,
is added to its Lagrangian. It is possible to show that this mass term is also invariant under the non-local supersymmetric transformations.
This is because the invariance of the temporal part is again similar
to the usual non-local supersymmetric theories and the invariance of the remaining part can
be demonstrated by using the theory of harmonic extensions of functions from
$R^2$ to $R^2 \times (0, \infty)$, as in the previous case.
We can now use the standard functional integral method to quantize the supersymmetric Lifshitz free
field theory deformed by the generalized uncertainty principle. If it were possible to extend this to an
interacting theory, we could also obtain the Feynman graphs using this method.
However, it will be demonstrated that interaction terms break the
supersymmetry in such theories.
The generating functional integral for the free theory can be written as
\begin{equation}
Z_0 [J] = \frac{ \int D\Phi \exp i \left( S_{free}[\Phi]+
J\Phi \right)}{ \int D\Phi \exp i \left( S_{free} [\Phi]\right)},
\end{equation}
where
\begin{equation}
J\Phi = \int d^3 x D^2 [J \Phi]_|.
\end{equation}
Thus, we obtain
\begin{equation}
Z_0[J] = \exp \left( - i \int d^3 x D^2[ J (D^2 +m)^{-1}J]_| \right).
\end{equation}
Now the superfield propagator can be written as
\begin{equation}
\langle \Phi (p, \theta_1) \Phi (-p, \theta_2) \rangle =
\frac{D^2 - m}{ p^0p_0 - \kappa^2 (p^ip_i (1 -\beta p^k p_k)) ^2 - m^2}
\delta( \theta_1- \theta_2).
\end{equation}
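The poles of this propagator encode the deformed $z=2$ dispersion relation $E(p)^2 = \kappa^2 (p^i p_i (1-\beta p^k p_k))^2 + m^2$. A small numerical sketch (the parameter values are illustrative) showing that the $\beta=0$ limit reproduces the undeformed Lifshitz dispersion, and that the correction lowers the energy at larger momenta:

```python
import numpy as np

# Dispersion relation read off from the propagator poles:
# E(p)^2 = kappa^2 (p^2 (1 - beta p^2))^2 + m^2, with p^2 = p^i p_i.
kappa, m, beta = 1.0, 0.5, 0.01

def E(p, beta):
    return np.sqrt(kappa**2 * (p**2 * (1.0 - beta * p**2))**2 + m**2)

p = np.linspace(0.0, 2.0, 5)
print(E(p, 0.0))    # undeformed z = 2 dispersion sqrt(kappa^2 p^4 + m^2)
print(E(p, beta))   # GUP correction lowers the energy at larger momenta
```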
It may be noted that adding any interaction term will break the supersymmetry of this theory.
This is because, even though for a free field theory the non-local derivative can be shifted from one field to
another by using harmonic extensions of functions from
$R^2$ to $R^2 \times (0, \infty)$, the Leibniz rule does not hold in general. Thus,
when we have interacting theories, the non-local supersymmetric variation of a product of more than
two fields is not equal to the sum of the individual non-local supersymmetric variations of those fields.
In fact, if we take a simple interaction of the form,
\begin{equation}
S[\Phi] = S_{free}[\Phi] + S_{int}[\Phi],
\end{equation}
where
\begin{eqnarray}
S_{int}[\Phi]&=& \frac{\lambda}{6} \int d^3 x\, D^2 [\Phi^3]_|
\nonumber \\ &=&\frac{\lambda}{2} \int d^3 x\, (\phi\psi^a\psi_a + \phi^2 F ),
\end{eqnarray}
then it is not invariant under the non-local
supersymmetric variation generated by $\epsilon^a Q_a$. This is because in ordinary supersymmetric
field theories we need to show that $ \epsilon^a \psi^b (\gamma^\mu \partial_\mu)_{ab} \phi^2
= 2 \epsilon^a \psi^b \phi (\gamma^\mu \partial_\mu)_{ab} \phi $, however, for
the non-local part of this deformed theory, we have
$\epsilon^a \psi^b (\gamma^i \kappa \mathcal{T}_{\partial} \partial_i)_{ab} \phi^2
\neq 2 \epsilon^a \psi^b \phi (\gamma^i \kappa \mathcal{T}_{\partial} \partial_i)_{ab} \phi$.
Thus,
the non-local supersymmetric variation of the interaction terms can not cancel out.
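The underlying obstruction is that ${T}_{\partial}$ is not a derivation: ${T}_{\partial}(fg) \neq f\, {T}_{\partial} g + g\, {T}_{\partial} f$ in general, since $|k_1+k_2| \neq |k_1|+|k_2|$ for Fourier modes of opposite sign. A one-dimensional spectral sketch exhibiting an order-one discrepancy (the test functions are illustrative):

```python
import numpy as np

# T = sqrt(-d^2/dx^2) does not obey the Leibniz rule: T(fg) != f T(g) + g T(f).
# Spectral realization on a periodic 1d grid.
N, L = 512, 2 * np.pi
x = np.arange(N) * L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

def T(h):
    return np.fft.ifft(np.abs(k) * np.fft.fft(h)).real

f, g = np.cos(x), np.cos(2 * x)
lhs = T(f * g)                 # = (3 cos 3x + cos x) / 2
rhs = f * T(g) + g * T(f)      # = (3 cos 3x + 3 cos x) / 2
print(np.max(np.abs(lhs - rhs)))   # = 1: an order-one discrepancy
```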
\section{Conclusion}
In this paper, we analysed a
supersymmetric theory deformed by the generalized uncertainty principle and Lifshitz scaling. The action of this deformed theory contains non-local fractional
derivatives. Thus, even the generators of supersymmetry contain non-local fractional derivative terms.
However, these fractional derivative terms can effectively be treated as a local operator by using
harmonic extensions of functions from
$R^2$ to $R^2 \times (0, \infty)$. Furthermore, this non-local operator commutes with the local
derivatives, and so we could construct a super-derivative which commutes
with the generator of the supersymmetry. This super-derivative was
used in the construction of various non-local supersymmetric field theories. A free matter theory
deformed by the generalized uncertainty principle and Lifshitz scaling was constructed such that it
was invariant under non-local supersymmetric transformations.
It was argued that any free non-local supersymmetric theory will be invariant under non-local supersymmetric transformations.
However, it was demonstrated that even a simple
interaction term will break the supersymmetry of this theory.
The effect of the generalized uncertainty principle
on AdS/CFT has already been analysed \cite{faiz}. The AdS/CFT correspondence relates the supergravity solutions on AdS to a superconformal field
theory on its boundary \cite{13a}-\cite{17a}. It would be interesting to analyse the AdS/CFT correspondence for Lifshitz theories based on
the generalized uncertainty principle.
The holographic dual to the Lifshitz field theory has also been analysed \cite{10}-\cite{14}.
In these Lifshitz theories, the dependence of
physical quantities such as the energy density on the momentum scale is evaluated using the
renormalization group flow at finite temperature \cite{b14}.
In fact, gravity with anisotropic scaling is obtained from
the holographic renormalization of asymptotically Lifshitz spacetimes \cite{c14}.
The holographic counter-terms induced near anisotropic infinity take the
form of the action for gravity at a Lifshitz point. It has been observed that the $z=2$ anisotropic Weyl anomaly in
dual field theories, in three dimensions, can be obtained from the
holographic renormalization of Horava-Lifshitz gravity \cite{a14}.
In fact, Lifshitz theories have also become important because of the development of Horava-Lifshitz gravity
\cite{5}-\cite{9}. Even though the addition of higher order curvature terms to the
gravitational action makes it renormalizable, it spoils the unitarity of this theory.
However, it is possible to add
higher order spatial derivatives without adding any higher order temporal
derivatives.
Even though this breaks Lorentz symmetry in the Horava-Lifshitz theory
of gravity, General Relativity is recovered in the infrared limit.
It may be noted that a system at finite temperature and finite chemical potential
with a Lifshitz black hole in place of a Lifshitz geometry has been used for analysing
the fermionic retarded Green's function with $z = 2$
\cite{17}. In fact, the
Hawking radiation for Lifshitz fermions has also been studied \cite{18}. It would be interesting to analyse the effect
that generalized uncertainty principle can have on such systems.
\section*{Acknowledgement}
We would like to thank Ali Nassar for pointing out that the
parameters in a conformal field theory
can be promoted to background fields. We would also like to thank Douglas Smith for useful discussions on Lifshitz supersymmetry. The work of Q.Z. is
supported by NUS Tier 1 FRC Grant R-144-000-316-112.
Cinema
Sydney (Hard Eight) – 1996 film directed by Paul Thomas Anderson
Geography
Sydney – capital of New South Wales (Australia)
City of Sydney – Local Government Area in New South Wales
Sydney – community in Nova Scotia (Canada)
Sydney – river in Nova Scotia (Canada)
Given name
Sydney – variant of the given name Sidney
Television
Sydney – American television series broadcast in 1990
Sydney – character in the television series The Pretender
Sydney Bristow – character in the television series Alias
Related pages
Sidney
HMAS Sydney
# A Cheat Sheet for Using your JWT Authentication with Django REST Framework to log in to Firebase

*Getting DRF, SimpleJWT and Firebase to play nice.*

---

[Firebase](https://firebase.google.com/) is a Google platform composed of more than eighteen products that can speed up the development of mobile and web applications. It has some really nice advanced features, such as real-time updates and horizontal scalability for greenfield software projects. However, applications with a considerable user base can end up being expensive to host on Firebase. For cases like this, it's usually worth migrating to a different solution like Django. For a general strategy for how to do this, have a look at [How to Migrate from Firebase to Django](https://monadical.com/posts/from-firebase-to-django.html).

Whether as a first step towards a larger migration, or a component of a hybrid approach (for example, if you want to augment Firebase with data that has to be hosted on your own servers), sometimes it's useful to migrate the login process while retaining access to Firebase's services.

This post gives you a quick cheat sheet for **how to use Firebase services with a token emitted from your Django backend. The token will be both valid for Firebase and for your backend.** This way you'll also be prepared for any future migrations from Firebase. We will set up SimpleJWT, DRF and Django to log in to Firebase.

## Installation and Setup

For this setup, we are going to use [DRF](https://www.django-rest-framework.org/) with [SimpleJWT](https://github.com/SimpleJWT/django-rest-framework-simplejwt). First, let's install the packages.

```bash
pip install djangorestframework
pip install djangorestframework_simplejwt
```

Then let's configure DRF to use JWTAuthentication and require authenticated requests as default. These SimpleJWT settings tell SimpleJWT to use the appropriate algorithm and set some of the claims required by Firebase without needing to install and call [firebase-admin-python](https://github.com/firebase/firebase-admin-python) to create the token.

`settings.py`:

```python
# django-rest-framework - https://www.django-rest-framework.org/api-guide/settings/
REST_FRAMEWORK = {
    "DEFAULT_AUTHENTICATION_CLASSES": (
        "rest_framework_simplejwt.authentication.JWTAuthentication",
    ),
    "DEFAULT_PERMISSION_CLASSES": ("rest_framework.permissions.IsAuthenticated",),
}

# See https://firebase.google.com/docs/auth/admin/create-custom-tokens#create_custom_tokens_using_a_third-party_jwt_library
SIMPLE_JWT = {
    "ALGORITHM": "RS256",
    "SIGNING_KEY": env.str("FIREBASE_PRIVATE_KEY", multiline=True),
    "VERIFYING_KEY": env.str("FIREBASE_PUBLIC_KEY", multiline=True),
    "ISSUER": env.str("FIREBASE_SERVICE_EMAIL"),
    "USER_ID_CLAIM": "uid",
    # Firebase allows only max=1h
    "ACCESS_TOKEN_LIFETIME": timedelta(hours=1),
    "AUDIENCE": "https://identitytoolkit.googleapis.com/google.identity.identitytoolkit.v1.IdentityToolkit",
}
```

Also notice that we loaded the secret and sensitive information (SIGNING_KEY, VERIFYING_KEY, ISSUER) from environment variables for the required [service account](https://firebase.google.com/support/guides/service-accounts).

Next we'll need to add URL routes for our token endpoints.

`urls.py`:

```python
from django.urls import path
from rest_framework_simplejwt.views import TokenRefreshView

from .views import FirebaseTokenObtainPairView

urlpatterns = [
    path('api/token/', FirebaseTokenObtainPairView.as_view(), name='token_obtain_pair'),
    path('api/token/refresh/', TokenRefreshView.as_view(), name='token_refresh'),
    # Your other urls ...
]
```

The URL routes need to be connected to views -- let's add a view to obtain the token pair and a serializer that will add the additional claims required by Firebase.

`views.py`:

```python
from django.conf import settings
from rest_framework_simplejwt.serializers import TokenObtainPairSerializer
from rest_framework_simplejwt.views import TokenObtainPairView


class FirebaseTokenObtainPairSerializer(TokenObtainPairSerializer):
    @classmethod
    def get_token(cls, user):
        token = super().get_token(user)
        # Add custom claims
        token["sub"] = settings.SIMPLE_JWT.get("ISSUER", "")
        # When the serializer is called token['exp'] does not reflect the
        # settings.ACCESS_TOKEN_LIFETIME and is set to now + 1 day,
        # thus we subtract a day to get iat
        token["iat"] = token["exp"] - (60 * 60 * 24)
        # Add additional claims here
        token["claims"] = {"is_superuser": user.is_superuser, "is_staff": user.is_staff}
        return token


class FirebaseTokenObtainPairView(TokenObtainPairView):
    serializer_class = FirebaseTokenObtainPairSerializer
```

Here we overrode the view and the serializer to include all of the required fields for Firebase. An alternative approach would be to generate the token with firebase-admin-python as follows:

```python
uid = 'some-uid'
additional_claims = {
    'premiumAccount': True
}
custom_token = auth.create_custom_token(uid, additional_claims)
```

When we have the token, we return it at the view. And that's it! We have our token.

## Test

Let's do a quick test with [httpie](https://httpie.org/) and one of our existing Django accounts.

```bash
http post localhost:8000/api/token username=test password=test
```

We get a token valid for both our Django backend and Firebase services. Woot.

```
HTTP/1.1 200 OK
allow: POST, OPTIONS
content-language: en
content-length: 1945
content-type: application/json
date: Mon, 19 Oct 2020 21:22:21 GMT
server: uvicorn
vary: Accept, Accept-Language, Origin
x-content-type-options: nosniff
x-frame-options: DENY
x-xss-protection: 1; mode=block

{
    "access": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9……",
    "refresh": "eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiJ9……"
}
```

## Credits and Further Reading

1. https://simpleisbetterthancomplex.com/tutorial/2018/12/19/how-to-use-jwt-authentication-with-django-rest-framework.html
2. https://monadical.com/posts/from-firebase-to-django.html
3. https://firebase.google.com/docs/auth/admin/create-custom-tokens
4. https://firebase.google.com/support/guides/service-accounts
Q: Can someone help deconstruct this terse Java function into plain English? I'm trying to port the PriorityQueue class from the OpenJDK implementation to another language (Xojo) that doesn't have a similar data structure. I'm really struggling to break down the following method into pseudocode so I can translate it to Xojo:
public E poll() {
final Object[] es;
final E result;
if ((result = (E) ((es = queue)[0])) != null) {
modCount++;
final int n;
final E x = (E) es[(n = --size)];
es[n] = null;
if (n > 0) {
final Comparator<? super E> cmp;
if ((cmp = comparator) == null)
siftDownComparable(0, x, es, n);
else
siftDownUsingComparator(0, x, es, n, cmp);
}
}
return result;
}
The variable queue is an Object[] array defined on the class.
There are a couple of lines that are confusing me. Firstly:
if ((result = (E) ((es = queue)[0])) != null)
Does this mean "assign the array queue to the variable es and access element 0 and do the following if it's not null?" What does the result = (E) expression mean? I know E is a generic type.
What is the order of operation of final E x = (E) es[(n = --size)];? Does this mean decrement size, assign that value to n and then access this index within the es array? If so, what does x = (E) before this expression mean? I'm guessing it means to cast the element to type E?
Lastly, these lines:
final Comparator<? super E> cmp;
if ((cmp = comparator) == null)
comparator is a class variable (holding a Comparator). Why assign it to a local variable cmp and what does the question mark mean on the first line?
A:
Does this mean "assign the array queue to the variable es"
Yes.
and access element 0 and do the following if it's not null?
Yes.
What does the result = (E) expression mean?
At the same time as the two expressions above, it also assigns queue[0] to result. The (E) is a cast to a type. So it's basically just:
result = queue[0]
With some extra stuff thrown in.
final E x = (E) es[(n = --size)];? Does this mean decrement size, assign that value to n and then access this index within the es array?
Yes.
If so, what does x = (E) before this expression mean? I'm guessing it means to cast the element to type E?
Yes, again just a cast like before.
comparator is a class variable
Just to be pedantic, comparator is likely an instance variable, not a class variable. Check its definition.
Why assign it to a local variable cmp
I suppose to make a local variable copy. I don't see a good reason to do that in the code, so it might be a mistake or something that was left in after some previous code got changed.
and what does the question mark mean on the first line?
The question mark means the type of the Comparator is unknown, and can be anything as long as it's a super class of E. For example, if Integer doesn't have a Comparator but Number does, then that's OK, Number is a super class of Integer and that's good enough.
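If it helps with the port, here is the same method with every compound assignment unpacked into its own statement. This is a simplified, self-contained stand-in, not the real OpenJDK source: the class name Heap, the fixed-size array, and the sift-down bodies are my own sketch of the standard binary-heap logic, and the modCount bookkeeping is omitted.

```java
import java.util.Comparator;

// A de-obfuscated sketch of PriorityQueue.poll(), with every compound
// assignment from the terse original unpacked into its own statement.
public class Heap {
    Object[] queue;                 // the binary min-heap
    int size;                       // number of elements currently stored
    Comparator<Object> comparator;  // null means "use natural ordering"

    Heap(Object[] initial, int count) {
        queue = initial;
        size = count;
    }

    @SuppressWarnings("unchecked")
    Object poll() {
        Object[] es = queue;        // es = queue
        Object result = es[0];      // result = (E) es[0]
        if (result != null) {       // empty heap -> result == null
            size = size - 1;        // --size
            int n = size;           // n = --size
            Object x = es[n];       // x = (E) es[n]  (last element)
            es[n] = null;           // clear the vacated slot
            if (n > 0) {            // re-heapify unless the heap emptied
                Comparator<Object> cmp = comparator;
                if (cmp == null)
                    siftDownComparable(0, x, es, n);
                else
                    siftDownUsingComparator(0, x, es, n, cmp);
            }
        }
        return result;
    }

    // Standard binary-heap sift-down using the elements' natural ordering.
    @SuppressWarnings("unchecked")
    static void siftDownComparable(int k, Object x, Object[] es, int n) {
        Comparable<Object> key = (Comparable<Object>) x;
        int half = n >>> 1;                    // loop while k has a child
        while (k < half) {
            int child = 2 * k + 1;             // left child
            int right = child + 1;
            if (right < n
                    && ((Comparable<Object>) es[child]).compareTo(es[right]) > 0)
                child = right;                 // pick the smaller child
            if (key.compareTo(es[child]) <= 0)
                break;
            es[k] = es[child];                 // move child up
            k = child;
        }
        es[k] = key;
    }

    // Same logic, but ordering is supplied by an explicit Comparator.
    static void siftDownUsingComparator(int k, Object x, Object[] es, int n,
                                        Comparator<Object> cmp) {
        int half = n >>> 1;
        while (k < half) {
            int child = 2 * k + 1;
            int right = child + 1;
            if (right < n && cmp.compare(es[child], es[right]) > 0)
                child = right;
            if (cmp.compare(x, es[child]) <= 0)
                break;
            es[k] = es[child];
            k = child;
        }
        es[k] = x;
    }

    public static void main(String[] args) {
        // {1,3,2,7,4} is already a valid min-heap; extra slots stay null.
        Heap h = new Heap(new Object[]{1, 3, 2, 7, 4, null, null, null}, 5);
        StringBuilder out = new StringBuilder();
        for (Object e = h.poll(); e != null; e = h.poll())
            out.append(e).append(' ');
        System.out.println(out.toString().trim()); // prints "1 2 3 4 7"
    }
}
```

Each statement corresponds one-to-one with a piece of the terse original, which should make a line-by-line Xojo translation mechanical.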
A: 1- if ((result = (E) ((es = queue)[0])) != null)
First it assigns the array queue to the variable esand gets element 0 from it the casts it t E generic type and assigns it to result then checks if result is not null.
2- final E x = (E) es[(n = --size)];
First java evaluates --size then assigns to int type n then gets nth from es array, casts it to E, and then assigns it to variable x.
I think the next two lines you asked are clear now!
A: Let's see if I can help:
if ((result = (E) ((es = queue)[0])) != null)
The above means "assign queue to es, access index 0 of it, cast it to type E, assign it to result and do the following if it's not null".
final E x = (E) es[(n = --size)];
This means "substract one from size and assign the new value to n, use it as an index of es and cast that element to type E, then assign it to the final variable x of type E.
final Comparator<? super E> cmp;
The question mark is a wildcard. <? super E> means "some type which is an ancestor of E". As for why comparator is assigned to the local variable cmp, I'm not quite sure but I remember something similar being asked recently in another question. I'll see if I can find it and edit this answer. I hope that helps you at least a bit. If any of what I said is not clear just ask and I'll try to reword the explanation.
Edit: This is the question I mentioned in the previous paragraph. The answers suggest performance benefits, but again I'm not sure if that's the reason in this case, as the specifics differ slightly.
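To make the wildcard concrete, here is a tiny self-contained example (the names WildcardDemo and min are mine, purely for illustration). A Comparator written for a supertype can be reused for any subtype, which is exactly what Comparator<? super E> permits:

```java
import java.util.Comparator;

public class WildcardDemo {
    // Accepts any Comparator able to compare E's — including one
    // declared for a supertype of E.
    static <E> E min(E a, E b, Comparator<? super E> cmp) {
        return cmp.compare(a, b) <= 0 ? a : b;
    }

    public static void main(String[] args) {
        // A Comparator<Number> counts as a Comparator<? super Integer>,
        // so it can be used where Integers are being compared.
        Comparator<Number> byValue =
                (x, y) -> Double.compare(x.doubleValue(), y.doubleValue());
        System.out.println(min(3, 5, byValue)); // prints "3"
    }
}
```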
'Nep' is a sticky rice and 'Phu Loc' is a province in Northern Vietnam, so Nếp Phú Lộc literally means sticky rice from Phu Loc Village. Sơn Tinh Nếp Phú Lộc is distilled in a copper pot still from fermented sticky rice from Vietnam's Red River Delta. It is rested for five years prior to bottling.
(sample from 2008) Crystal clear.
Buttered scones and yeast brown bread with white pepper.
Oily mouthfeel with delicate bready flavours fighting for attention against aggressive black pepper spice.
Bready notes and lingering black pepper spice.
import {
SHARED_AUTHENTICATION_FAILED,
SHARED_AUTHENTICATION_STARTED,
SHARED_RECEIVE_TOKEN,
} from '../../constants/actionTypes';
export const isAuthenticating = (prevState = false, action) => {
switch (action.type) {
case SHARED_AUTHENTICATION_STARTED:
return true;
case SHARED_RECEIVE_TOKEN:
case SHARED_AUTHENTICATION_FAILED:
return false;
default:
return prevState;
}
};
package panda.interpreter.syntax.scope.block;
import panda.interpreter.architecture.statement.PandaBlock;
import panda.interpreter.architecture.statement.Scope;
import panda.interpreter.architecture.statement.Variable;
import panda.interpreter.architecture.statement.VariableData;
import panda.interpreter.parser.Context;
import panda.interpreter.parser.PandaParserFailure;
import panda.interpreter.source.Location;
import panda.interpreter.token.Snippet;
import panda.interpreter.resource.syntax.keyword.Keywords;
import panda.interpreter.syntax.PandaSourceReader;
import panda.interpreter.syntax.scope.variable.VariableDataInitializer;
import panda.std.reactive.Completable;
import panda.std.Option;
public final class TryCatchParser extends BlockParser<TryCatch> {
@Override
public String name() {
return "try-catch";
}
@Override
public Option<Completable<TryCatch>> parse(Context<?> context) {
PandaSourceReader sourceReader = new PandaSourceReader(context.getStream());
Location tryLocation = sourceReader.toLocation();
if (sourceReader.read(Keywords.TRY).isEmpty()) {
return Option.none();
}
Option<Snippet> tryBody = sourceReader.readBody();
if (tryBody.isEmpty()) {
throw new PandaParserFailure(context, "Missing try body");
}
Location catchLocation = sourceReader.toLocation();
if (sourceReader.read(Keywords.CATCH).isEmpty()) {
            throw new PandaParserFailure(context, "Missing catch clause");
}
Option<Snippet> catchWhat = sourceReader.readArguments();
if (catchWhat.isEmpty()) {
throw new PandaParserFailure(context, "Missing catch arguments");
}
Option<Snippet> catchBody = sourceReader.readBody();
if (catchBody.isEmpty()) {
throw new PandaParserFailure(context, "Missing catch body");
}
Scope parent = context.getScope();
Scope tryBlock = SCOPE_PARSER.parse(context, new PandaBlock(parent, tryLocation), tryBody.get());
TryCatch tryCatch = new TryCatch(tryLocation, tryBlock, new PandaBlock(parent, tryLocation));
context.getScope().addStatement(tryCatch);
Scope catchBlock = new PandaBlock(parent, catchLocation);
VariableDataInitializer dataInitializer = new VariableDataInitializer(context, catchBlock);
VariableData variableData = dataInitializer.createVariableDataByDeclaration(catchWhat.get(), false, false);
Variable variable = catchBlock.createVariable(variableData);
SCOPE_PARSER.parse(context, catchBlock, catchBody.get());
if (context.getTypeLoader().requireType("panda/java@::Throwable").isAssignableFrom(variableData.getKnownType())) {
//noinspection unchecked
tryCatch.addHandler((Class<? extends Throwable>) variable.getKnownType().getType().getAssociated().get(), variable, catchBlock);
}
return Option.withCompleted(tryCatch);
}
}
require 'bundler'
begin
Bundler.setup(:default, :development)
rescue Bundler::BundlerError => e
$stderr.puts e.message
$stderr.puts "Run `bundle install` to install missing gems"
exit e.status_code
end
$LOAD_PATH.unshift(File.dirname(__FILE__) + '/../../lib')
require 'schematize'
require 'rspec/expectations'
Somerville acquires lot to ease parking woes
The Downtown Somerville Alliance will contribute $250,000 toward the $750,000 acquisition.
Mike Deak, @MikeDeakMyCJ Published 5:50 p.m. ET Dec. 1, 2015 | Updated 7:44 p.m. ET Dec. 1, 2015
New lot may serve downtown residents and employees who work on Main Street
The purchase of a new parking lot may relieve concerns about long-term parking for downtown Somerville employees and residents.(Photo: ~File)
SOMERVILLE - The borough's downtown parking woes may be eased by the acquisition of an 80-space lot that will be designated for use by downtown residents and workers.
The 0.72-acre property on High Street, east of Davenport Street, is adjacent to a municipal parking lot and may be reserved for long-term parking, said Beth Anne Macdonald, executive director of the Downtown Somerville Alliance. The property has been used for parking by the adjacent Stires Associates.
The parking lot will relieve worries from store owners and downtown residents that the borough's new parking rules, which include higher rates, would create a financial hardship for those who work at Main Street stores and inconvenience for residents who would have to wake up early on Saturday mornings to feed the meters.
"I wish we had done this three months ago," Mayor Brian Gallagher said at the Nov. 16 Borough Council meeting.
The Downtown Somerville Alliance, the organization charged with promoting the downtown business district, will contribute $250,000 toward the $750,000 acquisition.
The lot will be purchased by the Somerset County Improvement Authority. Somerville has entered into a lease-purchase agreement with the authority.
At the end of the lease, the borough will buy the lot from the authority for $1, the mayor said.
The borough's parking task force will develop a plan on how to manage the lot, Gallagher said.
In January, Gallagher created a task force to review parking policies after the borough received an increasing number of complaints.
After six months, the task force concluded that while the borough had changed since the last revision to parking laws in 1984, the laws had not, and needed to be updated to keep pace with the changes.
The task force recommended changes to promote turnover of parking spaces by discouraging store owners and employees from feeding parking meters. That would free more parking spaces for customers.
"There is no reason not to park in the Main Street spaces and feed the meters," the task force wrote in its report. "This creates pressure on the areas closer to Main Street through people continually hunting for a parking space."
The report says that many of the Main Street spaces are used by store owners and their employees "reducing the ability for shoppers/visitors to park near the location they wish to go to."
The task force found that parking rates had not changed in the last decade and had fallen behind rates in other towns and that enforcement ended before peak parking times.
There are more than 1,000 paid parking spaces in the borough, with 60 percent of those in municipal parking lots.
That does not include the 372 spaces at ShopRite, the 270 spaces in the parking garage at Post Office Plaza on Division Street and 700 spaces in the borough-owned garage on Veterans Memorial Drive. The total also does not include the parking garage owned by Somerset County on High Street for county employees and jurors.
The task force recommended that the rates be increased and parking would no longer be free on Saturdays. The task force also said that enforcement should be stepped up. The fine for overtime parking would go from $23 to $29.
One of the major changes is eliminating free parking on Saturday and extending the hours for parking to 8 a.m. to 8 p.m.
Once these changes go into effect, the borough's annual parking revenues, after expenses, could rise 145 percent to $363,884, according to the task force. That total does not include income from parking tickets.
In 2014, the borough collected $212,337 from parking tickets, with $85,640 going to the state. The increased fines could bring another $125,000 to the borough.
Inger Lemvigh-Müller (28 October 1902 – 21 June 1994) was a Danish equestrian. She competed in two events at the 1956 Summer Olympics.
References
1902 births
1994 deaths
Danish female equestrians
Danish dressage riders
Olympic equestrians of Denmark
Equestrians at the 1956 Summer Olympics
Sportspeople from Copenhagen
Q: Changing Units of a prj file in ArcGIS I have a multipatch feature class which comes from a conversion of a IFC model. Its coordinate system is very strange ("IFC_COORD_SYS_0") and I'd like to change it to EPSG:32632.
I am using ArcGIS Pro.
As the project tool fails (it shows an "extent error" message), my way to go was to:
*
*use the Define Projection tool to set its coordinate system to EPS:32632.
*Move everything with the Move To tool to my area of interest, having the Map Frame coordinate system set to 32632 as well.
Doing so, the layer looks in the right place, but it spans over whole regions while it is supposed to be just a single building. I suppose the problem is that its units are millimeters rather than meters.
I think I need to make a custom PRJ with millimeters units. My idea was to save projection as EPSG:32632 as prj file, and modify it from METERS to MILLIMETERS units. But, I have some problems in doing so, as when I assign the new custom PRJ my layer shows up in the wrong place all the time.
Below are both the original (in meters) and the modified (in millimeters) PRJ files. What could I have done wrong?
EPSG:32632 (METERS)
PROJCS["WGS_1984_UTM_Zone_32N",
GEOGCS["GCS_WGS_1984",
DATUM["D_WGS_1984",
SPHEROID["WGS_1984",6378137.0,298.257223563]],
PRIMEM["Greenwich",0.0],
UNIT["Degree",0.0174532925199433]],
PROJECTION["Transverse_Mercator"],
PARAMETER["False_Easting",500000.0],
PARAMETER["False_Northing",0.0],
PARAMETER["Central_Meridian",9.0],
PARAMETER["Scale_Factor",0.9996],
PARAMETER["Latitude_Of_Origin",0.0],
UNIT["Meter",1.0],
AUTHORITY["EPSG",32632]]
EPSG:32632 (MILLIMETERS)
PROJCS["WGS_1984_UTM_Zone_32N_MM",
GEOGCS["GCS_WGS_1984",
DATUM["D_WGS_1984",
SPHEROID["WGS_1984",6378137.0,298.257223563]],
PRIMEM["Greenwich",0.0],
UNIT["Degree",0.0174532925199433]],
PROJECTION["Transverse_Mercator"],
PARAMETER["False_Easting",500000000.0],
PARAMETER["False_Northing",0.0],
PARAMETER["Central_Meridian",9000.0],
PARAMETER["Scale_Factor",0.9996],
PARAMETER["Latitude_Of_Origin",0.0],
UNIT["MILLIMETER",1]]
This session is concerned with summary statistics and Gibbs models for multitype point patterns.
The lecturer's R script is available here (right click and save).

### Exercise 1

The amacrine dataset contains the locations of cells of two types ("on" and "off" detectors) in a layer of the retina.

1. Compute and plot the bivariate $$L$$ function for the amacrine data.

   ```r
   Lam <- alltypes(amacrine, "L")
   ```

2. plot estimates of the bivariate pair correlation functions by

   ```r
   plot(alltypes(amacrine, pcfcross))
   ```

3. What is the overall interpretation of these summary functions?

### Exercise 2

Continuing with the amacrine data,

1. Use alltypes to plot the bivariate $$G$$-functions $$G_{ij}$$ for each pair of types $$i,j$$ in the amacrine data.

2. Use alltypes to plot the functions $$G_{i\bullet}$$ (Gdot in spatstat) for each type $$i$$ in the amacrine data.

3. What is the overall interpretation of the $$G$$-functions?

### Exercise 3

The dataset bramblecanes gives the locations and ages of bramble cane plants in a study region. Age is a categorical variable, with three levels. We will conduct a randomisation test of the Random Labelling Property.

1. Read the help for the command rlabel.

2. We will use the bivariate $$K$$-function $$K_{2,0}$$ as our summary statistic. Compute this for the data using Kcross(bramblecanes, "2", "0") and plot it.

3. Read the help for Kcross. Find the names of the second and third arguments to the function.

4. Generate the simulation envelopes as follows

   ```r
   shuffle <- expression(rlabel(bramblecanes))
   E <- envelope(bramblecanes, Kcross, nsim=19, simulate=shuffle, i="2", j="0")
   plot(E)
   ```

   Note that the named arguments i and j are not recognised by the envelope command (as we can check from the help file for envelope), so they are passed to the command Kcross as we intended.

5. Generate the corresponding simulation envelopes of the bivariate $$L$$-function, either by replacing Kcross by Lcross in the code above, or by

   ```r
   plot(E, sqrt(./pi) ~ r)
   ```

### Exercise 4

We want to fit a Gibbs process model to the betacells data.

1. Access the betacells data and plot the pattern.

2. Save the data as a point pattern X and save only the mark type

   ```r
   X <- betacells
   marks(X) <- marks(betacells)$type
   ```

3. Plot the bivariate $$K$$ functions.
   1. Does it appear that cells of the same type interact? If so, guess at a suitable interaction distance.
   2. Does it appear that cells of different types interact? If so, guess at a suitable interaction distance.

4. Fit a multitype Strauss model using the selected interactions. For example if your answer to question i was "yes, at 20 metres" and your answer to question ii was "yes, at 30 metres",

   ```r
   rad <- matrix(c(20,30,30,20), 2, 2)
   ppm(X ~ marks, MultiStrauss(typ,rad))
   ```

   ```r
   rad <- matrix(c(NA,60,60,NA), 2, 2)
   ppm(X ~ marks, MultiStrauss(typ,rad))
   ```

   Interpret the fitted model. Plot the array of fitted pairwise interactions using plot(fitin(fit)) where fit is the fitted model. What is the fitted strength of the interaction?

5. For comparison purposes, fit the following models, interpret them, and compare the results:

   ```r
   fitU <- ppm(X ~ marks, Strauss(60))
   fitE <- ppm(X ~ marks, MultiStrauss(rad))
   ```

### Exercise 5

Here we will use profile pseudolikelihood to estimate the interaction distances for the multitype Strauss model in Question 4. We'll assume that points of different types do not interact, and that points of the same type interact at a distance $$R$$ which is the same for each type.

1. Create a vector of values of $$R$$ to search over:

   ```r
   rval <- data.frame(R=seq(50,100,by=5))
   ```

   This will become the argument s of profilepl.

2. We need the argument f of profilepl, and this should be a function that takes the value $$R$$ and produces a multitype Strauss interaction. So define

   ```r
   MS <- function(R){ MultiStrauss(diag(c(R,R))) }
   ```

   Try typing MS(50) to check that this is what you expect.

3. Then we can use maximum profile pseudolikelihood:

   ```r
   profilepl(rval, MS, X, ~marks)
   ```
| null | null |
\section{Introduction}
Quantum dots containing a single Mn impurity have been a subject of growing interest in the last few years.\cite{Besombes_PRL04,Leger_PRL06,Kudelski_PRL07,Krebs_PRB09}
Optical orientation of a single Mn spin inside a CdTe quantum dot (QD) has been observed recently. \cite{LeGall_PRL09,Goryca_PRL09,Besombes_SSC09,LeGall_PRB10} In these experiments circularly polarized light was creating a spin-polarized exciton (X) in the dot, and upon constant illumination the Mn spin became polarized on a timescale of $\tau_{\text{Mn}} \! < \! 100$ ns. One mode of excitation is quasi-resonant, through an excited state of the dot in Ref.~\onlinecite{LeGall_PRL09} or through an exciton transfer from a resonantly excited nearby Mn-free dot in Ref.~\onlinecite{Goryca_PRL09}. Alternatively, one of the states of the Mn+X complex can be resonantly driven, as in Ref.~\onlinecite{LeGall_PRB10}. This mode of operation was originally proposed in Ref.~\onlinecite{Govorov_PRB05}, and it is the focus of this paper.
Specifically, we will consider the situation from Ref.~\onlinecite{LeGall_PRB10} in which the highest-energy line of the X+Mn complex is excited with $\sigma_{-}$ polarized light, thus creating a dominantly $\ket{-5/2;-1}$ state (written in the basis of the $S^{z}$ component of the Mn spin and the $J^{z}$ projection of the total spin of the exciton). Due to such an excitation the population of the $\ket{-5/2}$ Mn level was observed to decrease on the timescale of less than 100 ns.
These recent experimental achievements could pave the way to optical control of the Mn spin state (e.g.~being able to initialize the Mn spin in each of its six states, as proposed in Ref.~\onlinecite{Reiter_PRL09}).
The physical mechanism of the optical orientation of the Mn spin remains, however, unclear, and its proper understanding will most probably be crucial for further experimental developments. The goal of this paper is to elucidate a possible microscopic mechanism of Mn optical orientation.
The ``intrinsic'' relaxation of Mn due to spin-lattice interaction (Mn spin flip due to scattering with phonons) is well known to be very slow for isolated Mn spins,\cite{Scalbert_pssb96,Dietl_PRL95} e.g.~spin-lattice relaxation times longer than a microsecond were observed in dilute samples at $T\! = \! 5$ K and at magnetic field of about $10$ T in Ref.~\onlinecite{Strutz_PRL92}. It should be stressed that the high-field relaxation times are relevant to the case of Mn interacting with a confined exciton, which splits the Mn spin levels via the sp-d exchange interaction. The relatively fast relaxation of isolated Mn spins observed recently at zero $B$ field\cite{Goryca_relaxation_PRL09} is possibly relevant when the exciton is absent.
On the other hand, the phonon-induced processes of carrier spin relaxation (spin flips of the electron, of the hole, or of the whole exciton) were predicted to be quite effective for large energy transfer involved in a spin-flip.\cite{Khaetskii_PRB01,Woods_PRB04,Tsitsishvili_PRB03,Tsitsishvili_PRB05,Roszak_PRB07} In the Mn-doped dot the spin splitting of the carrier states is enhanced by the sp-d exchange interaction, and clear signatures of both exciton and hole spin relaxation were observed there.\cite{LeGall_PRB10}
It is therefore clear that the spin flips of the carriers occur on timescales relevant to the Mn optical pumping process. In contrast, the existence of a fast (on a timescale of tens of ns) process of ``intrinsic'' Mn spin flip in the presence of the exciton is still somewhat controversial. Such a process was included in the model used in Ref.~\onlinecite{LeGall_PRB10} in order to explain the $\tau_{\text{Mn}} \sim 70$ ns timescale of Mn orientation (it was also used in the original proposal\cite{Govorov_PRB05} of optically orienting the Mn spin by driving one of the six X+Mn transitions). As mentioned above, it is highly improbable that the spin-lattice interaction can account for such a fast process. A Mn spin relaxation time of the order of 10 ns was observed in Ref.~\onlinecite{Besombes_PRB08} and explained there by assuming that the Mn is coupled to extended electronic states from the wetting layer.
This mechanism requires the presence of free hot photocarriers outside of the dot (which would scatter on the Mn spin), which should not be the case for (quasi)resonant excitation of a single dot.
The goal of this paper is to show that it is not necessary to include the ``intrinsic'' rate of Mn spin relaxation in the presence of an exciton, $\Gamma_{\text{Mn-X}}$, into the description of the optical pumping process. The Mn optical orientation can occur due to carrier spin relaxation (specifically the hole spin relaxation in the case of experiment from Ref.~\onlinecite{LeGall_PRB10}) and mixing of the exciton and Mn states due to sp-d exchange interaction. Because of the latter the eigenstates of the X+Mn system are superpositions of states with different exciton $J^{z}$ and Mn spin $S^{z}$. When a high-energy X+Mn state is excited, the carrier spin relaxation leads to transition to lower-energy states having different $S^{z}$ composition, and subsequent spontaneous recombination of these states leaves the Mn spin changed. In other words, in order to achieve Mn spin orientation, it is enough to consider the carrier spin relaxation in the strongly coupled system of the carriers and the Mn spin.
The paper is organized in the following way. In Sec.~\ref{sec:H} we introduce the Hamiltonian of the system, including the sp-d exchange interaction and the electron-hole exchange. In Sec.~\ref{sec:norelax} we briefly discuss the possibility of Mn spin optical orientation without any spin relaxation in the system. This optical pumping mechanism turns out to be inefficient, but its discussion highlights the significance of mixing of X and Mn states via the sp-d exchange interaction. An impatient reader can skip this Section and proceed directly to Sec.~\ref{sec:relax}, where we include the carrier spin relaxation in the system dynamics and show that it could account for the recent observations.
\section{The Hamiltonian of a single Manganese spin interacting with an exciton in a quantum dot} \label{sec:H}
The Hamiltonian at zero magnetic field is $\hat{H} = \hat{H}_{sp-d} + H_{e-h} $. The first term is the sp-d interaction
\begin{eqnarray}
\hat{H}_{sp-d}& = & -A_{e}( \hat{S}^{z}\hat{s}^{z} + \frac{1}{2}[\hat{S}^{+}\hat{s}^{-} + \hat{S}^{-}\hat{s}^{+}]) \nonumber \\
& & + A_{h}( \hat{S}^{z}\hat{\kappa}^{z}/2 + \frac{1}{2}[\epsilon\hat{S}^{+}\hat{\kappa}^{-} + \epsilon^{*}\hat{S}^{-}\hat{\kappa}^{+} ]) \,\, ,
\end{eqnarray}
where $\hat{S}^{i}$ are the operators of the Mn spin ($S\! = \! 5/2$), $\hat{s}^{i}$ are the electron spin operators, and $\hat{\kappa}^{i}$ are the Pauli matrices operating in the two-dimensional subspace of dominantly heavy hole states (the Kramers doublet of the lowest-energy hole states confined in the dot).
They appear after taking the matrix elements of the p-d interaction $A_{h}\mathbf{\hat{S}}\cdot\mathbf{\hat{j}}/3$ (with $\mathbf{\hat{j}}$ being the spin-$3/2$ operator) within the subspace of two mostly heavy-hole (hh) states being confined in the QD. The finite admixture of the light hole (lh) states in the relevant low-energy states (due to e.g.~anisotropic strain\cite{Besombes_JAP07,Leger_PRB07}) leads to $\epsilon \! \neq \! 0$ allowing for the flip-flop between the hole spin and the Mn spin.
$A_{e}$ and $A_{h}$ are the exchange interaction energies for the electron and the hole (with our sign convention they are both positive).
The second term is the electron-hole exchange interaction,\cite{Bayer_PRB02} which is written as
\begin{eqnarray}
\hat{H}_{e-h} & = & \frac{\delta_{0}}{2}( \ket{1}\bra{1} + \ket{-1}\bra{-1} - \ket{2}\bra{2} - \ket{-2}\bra{-2} ) \nonumber \\
& & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! + \frac{\delta_{1}}{2}( \ket{1}\bra{-1} + \ket{-1}\bra{1}) + \frac{\delta_{2}}{2}( \ket{2}\bra{-2} + \ket{-2}\bra{2}) \,\, ,
\end{eqnarray}
where we have used the basis of the total exciton angular momentum along the $z$ axis $\ket{J^{z}=s^{z}_{e} + j^{z}_{h}}$, and we have approximately identified the two mostly hh-like states with $j^{z}_{h} \! =\! \pm 3/2$ (thus neglecting the small corrections due to the hh-lh mixing).
$\delta_{0}$ is the isotropic exchange splitting of the bright and dark excitons, $\delta_{1}$ is the splitting of bright excitons present in dots with broken cylindrical symmetry, and $\delta_{2}$ gives the splitting of dark excitons. The last two terms come from $b_{i} (J^{i}_{h})^{3}s^{i}_{e}$ terms in the e-h exchange Hamiltonian, which are present due to the cubic symmetry of the lattice, and as such break the cylindrical symmetry of the exchange Hamiltonian, thus leading to mixing of states with different $J^{z}$.
\begin{figure}[t]
\includegraphics[width=0.99\linewidth]{Fig_PL.eps}
\caption{(Color online) Photoluminescence spectrum of the dot with a single Mn spin calculated using the parameters given in the text. All levels are assumed to be equally populated. The contribution from the mostly dark states to the total PL is plotted with the dashed line. Line broadening of $0.1$ meV was used.} \label{fig:PL}
\end{figure}
In the calculations below we will use the following parameters typical for small CdTe QDs. We take $A_{h} \! \equiv \! -\beta|\Psi_{h}(\mathbf{r}_{\text{Mn}} )|^2 \! = \! 0.8$ meV, with $\beta$ being the hole exchange integral, and $ \Psi_{h}(\mathbf{r}_{\text{Mn}})$ being the amplitude of the hole wavefunction at the Mn site. This value corresponds to $\approx \! 3$ meV width of the sextuplet of bright exciton lines of the X+Mn complex (see Fig.~\ref{fig:PL}). The value of $A_{e} \! \equiv \! \alpha|\Psi_{e}(\mathbf{r}_{\text{Mn}})|^2 \! = \! 0.2$ meV follows from the ratio of $|\beta/\alpha| \! \approx \! 4$ in CdTe and from a somewhat arbitrary assumption of equal amplitude of electron and hole wavefunctions at the Mn spin site (the hole is believed to be more weakly bound than the electron in CdTe dots, but its binding is enhanced in the presence of an electron,\cite{Besombes_JAP07} and it is unclear which effect prevails). For the parameter giving the strength of the hole-Mn flip-flop we take a typical value of $|\epsilon| \! = \! 0.1$ deduced from the linear polarization of the QD photoluminescence\cite{Besombes_JAP07,Leger_PRB07} (the phase of $\epsilon$ determines the polarization axis, but it is irrelevant for the optical orientation effect discussed here). For the electron-hole exchange energies we use\cite{Kazimierczuk_APPA09} $\delta_{0} \! =\! 1$ meV and $\delta_{1} = 0.1 $ meV, and we assume $\delta_{2} \! =\! 0.1$ meV (which probably is an overestimate).
The energy spectrum of the above Hamiltonian is clearly visible in the photoluminescence (PL) signal from an excited dot.\cite{Besombes_PRL04,ROssier_PRB06} In the zeroth approximation, we can neglect $A_{e}$ (since it is usually much smaller than $A_{h}$), and also put $\epsilon$, $\delta_{1}$, and $\delta_{2}$ equal to zero. Then the spectrum consists of 12 doubly-degenerate levels: 6 of them are bright (i.e.~they couple to light and contribute to the PL signal), and 6 are dark. Within each group the spacing of the levels is given by $A_{h}/2$, and the bright states are higher in energy by $\delta_{0}$ compared
to the dark ones. In the calculation with the full Hamiltonian the main change is the ``brightening'' of the dark excitons, which occurs due to the flip-flop parts of the sp-d exchange interactions. This is shown in Fig.~\ref{fig:PL}, where
the total spectrum is deformed, and more than 6 peaks are visible (the additional ones corresponding to mostly dark excitons, the contribution of which to the total PL is shown by the dashed line). Nonzero values of $\delta_{1}$ and $\epsilon$ also lead to the linear polarization of the PL signal,\cite{Leger_PRB07,Besombes_JAP07} which is however irrelevant for this paper.
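The spectrum in Fig.~\ref{fig:PL} can be checked by direct diagonalization of $\hat{H}$ in the 24-dimensional product space (6 Mn states $\times$ 2 electron spin states $\times$ 2 hole pseudospin states). The sketch below is illustrative rather than definitive: it assumes the identification $\kappa^{z} \! =\! +1 \leftrightarrow j^{z}_{h} \! =\! +3/2$, a real $\epsilon$, and the ladder-operator normalization for $\hat{\kappa}^{\pm}$; the parameter values are those quoted above.

```python
import numpy as np

def spin_ops(s):
    """Return (Sz, S+, S-) for spin s in the basis m = s, s-1, ..., -s."""
    mz = np.arange(s, -s - 1.0, -1.0)
    Sz = np.diag(mz)
    Sp = np.zeros_like(Sz)
    for i in range(1, len(mz)):          # S+|m> = sqrt(s(s+1)-m(m+1)) |m+1>
        m = mz[i]
        Sp[i - 1, i] = np.sqrt(s * (s + 1) - m * (m + 1))
    return Sz, Sp, Sp.T.copy()

# parameters from the text (meV); eps taken real for simplicity
Ae, Ah, eps = 0.2, 0.8, 0.1
d0, d1, d2 = 1.0, 0.1, 0.1

Sz, Sp, Sm = spin_ops(2.5)               # Mn spin 5/2
sz, sp, sm = spin_ops(0.5)               # electron spin 1/2
kz, kp, km = 2.0 * sz, sp, sm            # hole pseudospin (kz = diag(1,-1))

I6, I2 = np.eye(6), np.eye(2)
def k3(a, b, c):                         # ordering: Mn (x) electron (x) hole
    return np.kron(a, np.kron(b, c))

# sp-d exchange
H = (-Ae * (k3(Sz, sz, I2) + 0.5 * (k3(Sp, sm, I2) + k3(Sm, sp, I2)))
     + Ah * (0.5 * k3(Sz, I2, kz)
             + 0.5 * eps * (k3(Sp, I2, km) + k3(Sm, I2, kp))))

# electron-hole exchange; with kz=+1 <-> jh=+3/2 the product basis
# (se, kappa) maps to (+1/2,+1)->|2>, (+1/2,-1)->|-1>,
# (-1/2,+1)->|1>, (-1/2,-1)->|-2>
Heh = np.diag([-d0 / 2, d0 / 2, d0 / 2, -d0 / 2])
Heh[1, 2] = Heh[2, 1] = d1 / 2           # |-1><1| + h.c. (bright mixing)
Heh[0, 3] = Heh[3, 0] = d2 / 2           # |2><-2| + h.c. (dark mixing)
H += np.kron(I6, Heh)

E, V = np.linalg.eigh(H)

# PL weight of eigenstate n: overlap with the 12 bright basis states
bright = np.zeros(24, dtype=bool)
bright[1::4] = bright[2::4] = True       # exciton indices 1 (|-1>) and 2 (|1>)
pl = (V[bright, :] ** 2).sum(axis=0)
print(E.max(), pl.sum())
```

With these conventions the highest eigenvalue comes out near $1.78$ meV (the mostly $\ket{\pm 5/2; \pm 1}$ doublet, shifted up slightly by level repulsion from the dark states), and the total bright weight summed over all eigenstates equals 12, as it must for an orthonormal eigenbasis.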
\section{Manganese optical orientation without spin relaxation} \label{sec:norelax}
First, let us note that Mn polarization can be induced by resonant optical pumping even in the absence of \emph{any} spin relaxation processes. Only the breaking of the cylindrical symmetry of the dot is needed, i.e.~$\epsilon \! \neq \! 0$ and/or $\delta_{2} \! \neq \! 0$.
We focus now on the excitation of the highest energy X+Mn state with $\sigma_{-}$ polarized light.\cite{LeGall_PRB10} The $\ket{e}$ state excited in such a way contains a large amplitude of $\ket{-5/2;-1}$, but it also has admixtures of other states. For typical values of parameters the dominant admixtures are the ones of $\ket{-5/2; +1}$ state (caused by $\delta_{1}$ term mixing the bright excitons) and of $\ket{-3/2; -2}$ (caused by the electron-Mn flip-flop). In the second order of perturbation theory the latter state contains an admixture of $\ket{-3/2;2}$ state due to $\delta_{2}$ interaction mixing the dark excitons, and in the third order the admixtures of $\ket{-1/2;1}$ and $\ket{-1/2;-1}$ states are created from $\ket{-3/2;2}$ state by electron and hole spin flip-flops with the Mn spin, respectively. The admixture of these $\ket{-1/2;\pm 1}$ states in the $\ket{e}$ state (with $b_{\pm 1}$ amplitudes) leads to a finite probability of the recombination of the $\ket{e}$ state into the empty dot $\ket{-1/2}$ state. Using the third order perturbation theory we have
\begin{eqnarray}
b_{1} & = & \frac{ (A_{e}\sqrt{8}/2)\cdot (\delta_{2}/2) \cdot ( A_{e}\sqrt{5}/2 ) } {(2A_{e} + \frac{1}{2}A_{h} + \delta_{0} ) (\frac{1}{2}A_{e} + 2A_{h} + \delta_{0}) ( \frac{3}{2}A_{e}+\frac{3}{2}A_{h}) } \nonumber \\
b_{-1} & = & b_{1} \frac{3\epsilon}{2} \frac{A_{h}}{A_{e}} \,\, . \nonumber
\end{eqnarray}
In the simplest case, when $|b_{-1}| ~\! \ll \! |b_{1}|$ (corresponding to negligible hh-lh mixing), the re-absorption from the $\ket{-1/2}$ state can be neglected due to the optical selection rules ($\sigma_{-}$ light coupling only to $\ket{-1}$ excitons), and in the process of optical pumping of the $\ket{e}$ state the population of the $S^{z}$ levels is transferred from $\ket{-5/2}$ to $\ket{-1/2}$ by spontaneous emission of $\sigma_{+}$ polarized photons.
While the calculations of pumping dynamics with both $b_{\pm 1}$ amplitudes being finite show rich and interesting features (e.g.~the possibility of either depleting the $\ket{-5/2}$ level or increasing its occupation, depending on the values of $b_{\pm 1}$ and other parameters), one can quickly see that this kind of process is incapable of explaining the experimental timescale of Mn optical orientation. In the limit of $\epsilon\! = \! 0$ we can write rate equations for 3 levels (occupations of $\ket{e}$ state and the two empty dot states $\ket{-5/2}$ and $\ket{-1/2}$). With generation rate $G$ and spontaneous recombination rate $\Gamma_{\text{rec}}\! = \! 1/T_{\text{rec}}$, and with $\ket{e} = a\ket{-5/2;-1} + b\ket{-1/2;1} + ...$ (with other admixed states being optically inactive or having negligible amplitudes), we have the equations for occupation probabilities
\begin{eqnarray}
\frac{d p_{e}}{dt} & = & -G|a|^{2}p_{e} -\Gamma_{\text{rec}}p_{e} + G|a|^{2}p_{-5/2} \,\, , \\
\frac{d p_{-5/2}}{dt} & = & G|a|^{2}p_{e} +\Gamma_{\text{rec}}|a|^{2}p_{e} - G|a|^{2}p_{-5/2} \,\, , \\
\frac{d p_{-1/2}}{dt} & = & \Gamma_{\text{rec}}|b|^{2}p_{e} \,\, .
\end{eqnarray}
For strong driving, $G\! \gg \! \Gamma_{\text{rec}}$, we get for times longer than $1/G$ that $p_{-5/2} \! \approx \! \frac{1}{4}\exp(-\Gamma_{\text{rec}}|b|^2 t/2)$, which gives the Mn orientation timescale $\tau_{\text{Mn}} = 2T_{\text{rec}}/|b|^2$. With the parameters used here one gets $\tau_{\text{Mn}} \approx 10^{7} T_{\text{rec}}$, while in the experiment $\tau_{\text{Mn}} \! < \! 10^{3} T_{\text{rec}}$ was seen (using the value of $T_{\text{rec}} \! =\! 200$ ps).
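As a quick numerical check of this estimate, one can plug the parameter values quoted above into the perturbative expression for $b_{1}$ (taking $b \! \approx \! b_{1}$, i.e.~neglecting the small $b_{-1}$):

```python
import numpy as np

# parameters from the text (meV)
Ae, Ah, d0, d2 = 0.2, 0.8, 1.0, 0.1

# third-order admixture amplitude b_1 from the expression above
num = (Ae * np.sqrt(8) / 2) * (d2 / 2) * (Ae * np.sqrt(5) / 2)
den = ((2 * Ae + Ah / 2 + d0)
       * (Ae / 2 + 2 * Ah + d0)
       * (1.5 * Ae + 1.5 * Ah))
b1 = num / den

tau_over_Trec = 2.0 / b1**2       # tau_Mn = 2 T_rec / |b|^2
print(b1, tau_over_Trec)          # b1 ~ 4e-4, tau_Mn ~ 1e7 T_rec
```

This reproduces the $\tau_{\text{Mn}} \approx 10^{7}\, T_{\text{rec}}$ quoted in the text.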
\section{Manganese optical orientation with carrier spin relaxation} \label{sec:relax}
The optical orientation mechanism described above is inefficient because it relies on very small $\delta_{2}$-induced mixing of the dark excitons, and also because both of the flip-flop related admixtures involve the energy denominators $\Delta E \! > \! \delta_{0}$, with the latter being much larger than the off-diagonal couplings $A_{e}$ and $\epsilon A_{h}$.
Much more efficient optical orientation can be obtained when we include the processes of carrier spin relaxation. A phonon-induced spin-flip of an electron (a hole) leads to a transition from a bright state $\ket{m; \pm 1}$ to a dark state $\ket{m ;\pm 2} $ ($\ket{m;\mp 2}$).
The mostly dark eigenstates of the full Hamiltonian contain admixtures of bright states with $m' \! = \! m\pm 1$ appearing in the first order of the perturbation theory. The electron-Mn flip-flop terms
are connecting the $\ket{m; \pm 2}$ state to $\ket{m\pm 1; \pm 1}$, while the hole-Mn flip-flop terms $\sim \! \epsilon A_{h} \hat{\kappa}^{\pm}\hat{S}^{\mp}$ are connecting it to $\ket{m\pm 1; \mp 1}$.
From the resonantly excited state $\ket{e} \! \approx \! \ket{-5/2; -1}$ the electron spin relaxation leads to $\ket{-5/2; -2}$ state, which is \emph{not} coupled by sp-d exchange to any other states, and in the first order of perturbation theory does not have any admixtures of states with flipped Mn spin.
We are thus led to consider the possibility of the hole spin relaxation event, which leads to a transition into the state $\ket{r} \approx a\ket{-5/2; 2} + b_{e}\ket{-3/2; 1} + b_{h}\ket{-3/2; -1}$, with the amplitudes of other states being much smaller.
The main admixture amplitudes are
\begin{eqnarray}
b_{e} & \approx & \frac{A_{e}\sqrt{5}/2}{\delta_{0}-2A_{e}+A_{h}/2} \,\, , \\
b_{h} & \approx & \frac{ \epsilon A_{h}\sqrt{5}/2}{ \delta_{0} + 2A_{h} - A_{e}/2 } \,\, .
\end{eqnarray}
For typical parameters we get $b_{e} \! > \! b_{h}$ (e.g.~with values used here we have $b_{e} \approx 0.2$, $b_{h}\approx 0.04$).
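These numbers follow directly from the parameter values quoted above; a short numerical check (illustrative only):

```python
import numpy as np

# parameters from the text (meV; eps dimensionless)
Ae, Ah, eps, d0 = 0.2, 0.8, 0.1, 1.0

# first-order admixture amplitudes in the mostly dark state |r>
b_e = (Ae * np.sqrt(5) / 2) / (d0 - 2 * Ae + Ah / 2)
b_h = (eps * Ah * np.sqrt(5) / 2) / (d0 + 2 * Ah - Ae / 2)
print(b_e, b_h)    # ~0.22 and ~0.036
```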
The presence of hole spin relaxation was seen in Ref.~\onlinecite{LeGall_PRB10}, where it was shown that excitation of $\ket{1/2; +1}$ state was leading to the strongest PL from the ``dark'' state $\ket{1/2; -2}$, which was being populated by hole spin relaxation from the initial state. The optical activity of the mostly dark states is also visible in the calculated PL spectrum shown in Fig.~\ref{fig:PL}, where the PL signal from states having mostly dark character is plotted with the dashed line. With the parameters employed here, the dark states most strongly mixed with the bright states are the ones with dominant $\ket{-1/2; \pm 2}$ character (with energy $\approx \! -0.44$ meV, see the strongest ``dark'' transition in Fig.~\ref{fig:PL}), which contain large admixtures of $\ket{-3/2; \mp 1}$ states caused by the hole-Mn flip-flop term allowed by hh-lh mixing.
The rate equations for the populations of $\ket{e}$, $\ket{r}$, $\ket{-5/2}$, and $\ket{-3/2}$ levels (the $\ket{-1/2}$ level considered previously is neglected here, since its pumping has been shown in the previous Section to occur on a much longer timescale) are
\begin{eqnarray}
\frac{d p_{e}}{dt} & = & -(G + \Gamma_{\text{rec}} +\Gamma_{\text{h}}) p_{e} + Gp_{-5/2} + \Gamma^{'}_{\text{h}}p_{r}\,\, , \\
\frac{d p_{-5/2}}{dt} & = & ( G +\Gamma_{\text{rec}}) p_{e} - Gp_{-5/2} \,\, , \\
\frac{d p_{-3/2}}{dt} & = & \Gamma_{\text{d}}p_{r} \,\, ,\\
\frac{d p_{r}}{dt} & = & \Gamma_{\text{h}}p_{e} -(\Gamma_{\text{d}} + \Gamma^{'}_{\text{h}}) p_{r} \,\, ,
\end{eqnarray}
where the spontaneous recombination rate of the ``dark'' exciton is $\Gamma_{\text{d}} \approx (|b_{e}|^{2} + |b_{h}|^{2})\Gamma_{\text{rec}}$, $\Gamma_{\text{h}}$ is the hole relaxation rate, and $\Gamma^{'}_{\text{h}} \! =\! \exp(-\Delta E/k_{B}T)\Gamma_{\text{h}}$ is the rate for the hole spin flip from the dark state back to the bright state. $\Delta E \! \approx\! \delta_{0} + \frac{5}{2}A_{h}$ is the energy difference between the two states. At $T\! = \! 5$ K and for $\Delta E \! = \! 3 $ meV obtained from the parameters used here $\Gamma^{'}_{\text{h}} \! =\! 10^{-3} \Gamma_{\text{h}}$. However, even with larger $\Gamma^{'}_{\text{h}}$ the results discussed below are changed very little, and we will put $\Gamma^{'}_{\text{h}} \! = \! 0$ from here on.
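The quoted ratio $\Gamma^{'}_{\text{h}}/\Gamma_{\text{h}} \! =\! 10^{-3}$ is just the Boltzmann factor for the given values; a quick check (with $k_{B}$ in meV/K):

```python
import math

kB = 0.08617          # Boltzmann constant in meV/K
dE, T = 3.0, 5.0      # energy difference (meV) and temperature (K) from the text
ratio = math.exp(-dE / (kB * T))   # Gamma'_h / Gamma_h
print(ratio)          # ~1e-3
```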
We start with the initial conditions of $p_{-5/2}(0) \! =\! p_{-3/2}(0) \! = \! 1/2$ and all the other $p_{i}(0) \! =\! 0$.
In the strong driving (saturation) regime ($G \! \gg \! \Gamma_{\text{rec}}$, $\Gamma_{\text{h}}$)
we get that at times $t \! \gg \! G^{-1}$ we have $p_{-5/2}(t) \! \approx \! \frac{1}{4}\exp(-\Gamma_{\text{h}}t/2)$, i.e.~the $\ket{-5/2}$ state gets emptied on the timescale of hole spin relaxation. Its population is shifted to $\ket{-3/2}$ and $\ket{r}$ levels. For the population of the former state we have
\begin{equation}
p_{-3/2}(t) \approx 1 + \frac{\Gamma_{\text{h}}e^{-\Gamma_{\text{d}}t} - \Gamma_{\text{d}}e^{-\Gamma_{\text{h}}t/2} }{2(\Gamma_{\text{d}} - \Gamma_{\text{h}}/2)} \,\,
\end{equation}
when $\Gamma_{\text{d}} \! \neq \! \Gamma_{\text{h}}/2$, and $p_{-3/2}(t) \approx 1 - \frac{1}{2} e^{-\Gamma_{\text{d}}t}(\Gamma_{\text{d}}t+1)$ when $\Gamma_{\text{d}} \! = \! \Gamma_{\text{h}}/2$.
Before we reach the times $t \! \gg \! 2\Gamma_{\text{h}}^{-1}$, $\Gamma_{\text{d}}^{-1}$ most of the initial population of $\ket{-5/2}$ will have moved to $\ket{-3/2}$ state. The driven transition becomes then optically inactive, and the optical orientation process is complete.
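This pumping dynamics can be verified by integrating the four rate equations directly. Below is a minimal forward-Euler sketch with the rates used for the figures, an illustrative drive $G \! =\! 10\,\Gamma_{\text{rec}}$, and $\Gamma^{'}_{\text{h}} \! =\! 0$:

```python
# rates in ns^-1, matching the values used in the figures
G_rec, G_h = 5.0, 0.1
be, bh = 0.2, 0.04
G_d = (be**2 + bh**2) * G_rec        # "dark"-state recombination rate
G = 10.0 * G_rec                     # strong driving, G >> G_rec, G_h

# populations of |e>, |-5/2>, |-3/2>, |r>; Mn initially unpolarized
pe, p52, p32, pr = 0.0, 0.5, 0.5, 0.0
dt = 1e-3                            # time step (ns)
for _ in range(int(100.0 / dt)):     # integrate to t = 100 ns
    dpe = -(G + G_rec + G_h) * pe + G * p52
    dp52 = (G + G_rec) * pe - G * p52
    dp32 = G_d * pr
    dpr = G_h * pe - G_d * pr
    pe += dt * dpe; p52 += dt * dp52
    p32 += dt * dp32; pr += dt * dpr

print(p32)   # close to 1: the |-5/2> population has been pumped to |-3/2>
```

At $t \! =\! 100$ ns essentially all of the initial $\ket{-5/2}$ population has been transferred to the $\ket{-3/2}$ level, while the total population stays conserved, consistent with the analytical formulas above.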
\begin{figure}[t]
\includegraphics[width=0.99\linewidth]{Fig_G.eps}
\caption{(Color online) Time dependence of the occupation of the empty dot $\ket{-3/2}$ Mn level upon pumping of the $\ket{-5/2,-1}$ transition with different light intensities (corresponding to different exciton generation rates $G$), with normalization $p_{-3/2}+p_{-5/2} \! =\! 1$. The spontaneous recombination rate is $\Gamma_{\text{rec}} \! = \! 5$ ns$^{-1}$ and the hole spin relaxation rate is $\Gamma_{\text{h}} \! = \! 0.1$ ns$^{-1}$. The exciton spin relaxation rate $\Gamma_{\text{X}}$ is assumed to be zero with exception of the dashed line for $G\! = \! 0.1\Gamma_{\text{rec}}$, for which $\Gamma_{\text{X}} \! = \! 1$ ns$^{-1}$. The other parameters are given in the text.
} \label{fig:G}
\end{figure}
\begin{figure}[t]
\includegraphics[width=0.99\linewidth]{Fig_Tsr.eps}
\caption{(Color online) The same as in Fig.~\ref{fig:G}, only for $G/\Gamma_{\text{rec}} \!= \! 1$ and with various hole spin relaxation times $T_{\text{h}} \! = \! \Gamma_{\text{h}}^{-1}$. } \label{fig:Tsr}
\end{figure}
With $b_{e} \! \approx \! 0.2$ we get the ``dark'' state recombination time $\Gamma_{\text{d}}^{-1} = 5$ ns, which is close to the observed value of $8$ ns.\cite{Besombes_PRB08}
The calculations of $p_{-3/2}(t)$ for hole spin relaxation time $T_{\text{h}}\! =\! \Gamma_{\text{h}}^{-1} \! =\! 10$ ns are shown in Fig.~\ref{fig:G} for different exciton generation rates $G$. For $G\! \gg \! \Gamma_{\text{rec}}$ the analytical formulas given above are accurate, while at lower $G$ the rate equations have to be solved numerically. In Fig.~\ref{fig:Tsr} we show $p_{-3/2}(t)$ for various $T_{\text{h}}$ when $G\! = \! \Gamma_{\text{rec}}$.
The exciton spin relaxation\cite{Tsitsishvili_PRB03} leads to transitions from $\ket{e}$ to $\ket{f} \! \equiv \! \ket{-5/2; 1}$ state with rate $\Gamma_{\text{X}} \! \equiv \! T^{-1}_{X}$. Including this effect in our rate equations is straightforward. However, as long as an assumption of $\Gamma_{\text{X}} \! \ll \! \Gamma_{\text{rec}}$ is made, the processes of exciton spin relaxation and subsequent spontaneous emission of $\sigma_{+}$ photon have very little impact on the optical orientation of the Mn spin. This is shown in Fig.~\ref{fig:G}, where a result for $G\! = \! 0.1 \Gamma_{\text{rec}}$ is shown also for $T_{\text{X}} \! = \! 1$ ns, and one can see that this leads to an insignificant slowing down of the orientation process. At higher $G$ the influence of finite $T_{\text{X}} \! > \! T_{\text{rec}}$ is even smaller.
The third process of carrier spin relaxation is the electron spin relaxation, which leads to a transition into a dark state $\ket{d}\! = \! \ket{-5/2; -2}$ with recombination time of at least a few hundred ns (using our rather large value of $\delta_{2}$, which is needed to bring about the mixing of this state with a bright one), which basically means that on the timescale of $\sim \! 100$ ns this state is perfectly trapping. If the electron spin relaxation time were faster than the hole spin relaxation time, then instead of the pumping of the $\ket{-3/2}$ level the system would get trapped in the dark state $\ket{d}$. In the strong driving regime the transition corresponding to the $\ket{-5/2; -1}$ state would become inactive on the timescale of the electron spin relaxation time $T_{\text{e}} \! = \! \Gamma_{\text{e}}^{-1}$, and instead of achieving optical orientation of the Mn spin one would obtain a dot with a very long-lived dark exciton trapped in it. The fact that this does not happen in Ref.~\onlinecite{LeGall_PRB10}, where the observations are consistent with the transfer of population between the Mn spin states, and not with the creation of a stable dark exciton, shows that the electron spin relaxation is slower than the hole and exciton spin relaxation processes.
\section{Conclusions}
We have shown that the experimental result from Ref.~\onlinecite{LeGall_PRB10}, the optical orientation of the Mn spin in tens of nanoseconds under a resonant driving of the highest-energy line of the exciton+Mn complex, can be explained by the process of hole spin relaxation occurring on this timescale (which also has been observed in Ref.~\onlinecite{LeGall_PRB10}). The optical orientation occurs because the hole relaxation leads to a transition to an eigenstate of mostly dark character, which is mixed with optically active states via the sp-d exchange interaction. Since this admixture consists of states with a flipped Mn spin, the emission from the ``dark'' state populated by hole relaxation leads to a change of the spin polarization of the Mn ion. Consequently, the intrinsic processes of Mn spin relaxation (due to interaction with carriers in the wetting layer or phonons) do not have to be invoked in order to explain the optical orientation (this of course does not rule out their existence in some cases\cite{Besombes_PRB08}).
Our analysis has also shown that the heavy-light hole mixing, while visibly present in the PL spectra, is not necessary to explain the Mn optical orientation (at least in the case of exciting the highest energy state of X+Mn complex). The three processes of hole, exciton, and electron spin relaxation, together with the electron-Mn exchange, can lead to quite a complicated behavior, with the excitation pattern considered here leading to a relatively simple dynamics. Further experiments involving resonant excitation of various lines of X+Mn complex, and observation of PL signals induced in this way (as in Ref.~\onlinecite{LeGall_PRB10}), coupled with a calculation of X+Mn state mixing, might give more quantitative information on all the involved relaxation times. This knowledge could be then used in modeling of the situation from Ref.~\onlinecite{Goryca_PRL09}, where simultaneous excitation of many X+Mn levels leads to more complicated dynamics.
One feature of the experimental results from Ref.~\onlinecite{LeGall_PRB10} which cannot be explained by the model proposed here is the saturation of the depletion of $\ket{-5/2}$ state at $75$ \%. Addressing this question is left for future research.
\emph{Note added.} The ``brightening'' of dark excitons due to the sp-d exchange interaction was very recently observed in Ref.~\onlinecite{Goryca_brightening}, where it was also shown that recombination from these states is an efficient channel of the Mn spin orientation.
\section{Acknowledgements}
The author is grateful to T.~Dietl for discussions, reading of the manuscript, and commenting on it. Discussions with {\L}.~K{\l}opotowski, M.~Goryca, O.~Krebs, and C.~Le Gall are also acknowledged. The financial support from the Homing programme of the Foundation for Polish Science supported by the EEA Financial Mechanism, from the EU FunDMS Advanced Grant of the European Research Council within the ``Ideas'' 7th Framework Programme, and from the European Union within European Regional Development Fund through grant Innovative Economy (POIG.01.03.01-00-159/08, ``InTechFun''), is gratefully acknowledged.
dr_github Diagnose potential GitHub issues git_checks Git checks. reload Unload and reload package. revdep Reverse dependency tools. show_news Show package news source_gist Run a script on gist check Build and check a package, cleaning up automatically on success. No Results!","date":"2019-07-22 21:18:53","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.1804821938276291, \"perplexity\": 14034.28800507636}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-30\/segments\/1563195528220.95\/warc\/CC-MAIN-20190722201122-20190722223122-00389.warc.gz\"}"}
| null | null |
Q: Hide form tabs for specific user groups There is a requirement to hide certain tabs for certain user groups on a form. The thing is, hiding all the fields in the tab does not seem to work. Any ideas are appreciated. Working with AX 2009.
A: In case anyone needed an answer to this with visuals:
In AX you can assign a security key to a tab via the properties sheet:
Then you can either make use of existing security keys or create your own, and control visibility by granting access to that key only to the users who should see the tab.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 518
|
Microdata (95)
Colombia - The Medium-term Effects of Home-based Early Childhood Development Intervention Impact Evaluation 2011
The Medium-Term Effects of Home-based Early Childhood Development Intervention Impact Evaluation (ECDIIE) covered 96 small towns in central Colombia, representing a large number of small communities across a relatively big...
Data Type: Microdata Last Updated: Dec 17, 2019 Publisher: Development Data Group; The World Bank
Tunisia - World Bank Group Country Survey 2018
The Country Opinion Survey in Tunisia assists the World Bank Group (WBG) in gaining a better understanding of how stakeholders in Tunisia perceive the WBG. It provides the WBG with systematic feedback from national and local...
Data Type: Microdata Last Updated: Aug 28, 2019 Publisher: Development Data Group; The World Bank Group, Corporate Communications; The World Bank Group
Croatia - World Bank Group Country Survey 2018
The Country Opinion Survey in Croatia assists the World Bank Group (WBG) in gaining a better understanding of how stakeholders in Croatia perceive the WBG. It provides the WBG with systematic feedback from national and local...
Data Type: Microdata Last Updated: Aug 15, 2019 Publisher: Development Economics Data Group; The World Bank
Croatia - Enterprise Survey 2009
The objective of the survey is to obtain feedback from enterprises in client countries on the state of the private sector as well as to help in building a panel of enterprise data that will make it possible to track changes in...
Data Type: Microdata Last Updated: Jul 20, 2019 Publisher: Antonina Redko
Egypt, Arab Rep. - Population Housing and Establishments Census 2006 - IPUMS Subset
IPUMS-International is an effort to inventory, preserve, harmonize, and disseminate census microdata from around the world. The project has collected the world's largest archive of publicly available census samples. The data...
Data Type: Microdata Last Updated: Mar 13, 2019
Egypt, Arab Rep. - Population Housing and Establishment Census 1996 - IPUMS Subset
IPUMS-International is an effort to inventory, preserve, harmonize, and disseminate census microdata from around the world. The project has collected the world's largest archive of publicly available census samples. The data...
Data Type: Microdata Last Updated: Jan 22, 2019
Colombia - Enterprise Survey 2017
An Enterprise Survey is a firm-level survey of a representative sample of an economy's private sector. The surveys cover a broad range of business environment topics including access to finance, corruption, infrastructure,...
Data Type: Microdata Last Updated: Nov 27, 2018
Croatia - Global Financial Inclusion (Global Findex) Database 2017
Financial inclusion is critical in reducing poverty and achieving inclusive economic growth. When people can participate in the financial system, they are better able to start and expand businesses, invest in their children's...
Data Type: Microdata Last Updated: Oct 31, 2018
Egypt, Arab Rep. - Global Financial Inclusion (Global Findex) Database 2017
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 2,751
|
It would appear that I fell into some sort of space/time rupture there and now I'm just emerging.
Well, that's not the case. I apparently wound up taking a little bit of a hiatus from the blogging world again. Sorry. It happens sometimes.
So why did I go dark this time? Well, a couple of reasons.
First of all, I wound up wrapping my The Lutheran Difference series. Remember those? Yeah, that kept me going for a while but, once I got done with the whole thing, I wasn't sure what I should do on Mondays anymore. I used to post little devotional thoughts based on assigned Scripture readings, but I wasn't sure if those were as helpful as I wanted them to be. So I'm open to suggestions as to what you might want to see on Mondays. Something spiritual, that's about the only rule of thumb for me.
And then there's the writing business. Things have been quiet there too. My latest WIP, Unmasked, is in the hands of my very capable agent. I'm waiting to hear from her and see what suggestions she might have for it. And then we'll see who might be interested in a YA fantasy for the general market.
So what am I doing now? Well, that's changed a bit from last time I updated all y'all. The last time I posted a writing update, I said that I was going to finally start working on a book series that had been simmering in the back of my mind for the last two or three years. In many ways, this was a frustrating project for me. Usually I come up with the plot first, then the characters, and the storyworld kind of fills in around them as I work. This time, though, I was going about things backwards. I've been crafting the storyworld first, and I have some thoughts on the characters. I just can't figure out what these guys are going to do.
But I had figured enough was enough. I was going to squeeze a plot out of the twenty pages of notes that I had been compiling. And that was my intention up until a month ago.
And then I went to OYAN again.
But here's the thing: the more I thought about this little story idea, the more I was intrigued by it. As I kept turning this idea over and over in my mind, the more details started to spill out. Maybe it was a result of being with these kids for a week, talking about writing and stories and all that. Maybe their enthusiasm for the craft was contagious. Whatever the case, I decided to jettison the troublesome storyworld and focus on my #gottabebae story (that's not the title, just a working label for this story until I can come up with something better).
So what's it about? Well, I won't go into all the twists and turns of the plot. Instead, I'll say this: it's a retelling of the story of Esther set in a science fantasy world. And I'm excited about this idea. In the last month, I've been burning through my prewriting process and I'm hoping that, within a week or two, I can start putting actual words to paper.
So that's where I've been. You can expect more Wordcount Wednesday posts as #gottabebae takes off. And I may ruminate about other stuff from time to time.
Oh, and I'm also heading out to Realm Makers for the first time ever. You might expect a post about that too at some point.
WAY COOL! I'm excited about the concept of Esther in a Sci-Fi/Fantasy realm ^__^ I'm still trying to get my Eirinth to as many as I can. I need to finish editing my 8-10 year-old children's "Guinean Guards" Fantasy Fairy-tale while I wait for my artist to finish more pics for it. LUV YA BUNCHES IN JESUS WITH HOPE FOR US ALL!
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 5,660
|
A forest saw (Danish: skovsav) is used for felling trees and, when these are large, also for cutting through the felled trunk; the forest saw must be at least twice as long as the tree's diameter; it consists of a broad, stiff steel blade and two detachable handles; the blade's edge is usually curved and thicker than the back, the saw teeth are as a rule elongated (M-teeth, wolf teeth and the like) and often have open gaps between them, which are meant to make room for the sawdust. The handles must be removable so that the saw can be pulled out of the cut even after wedges have been driven into it.
Sawing tools
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 2,592
|
{"url":"https:\/\/www.gradesaver.com\/textbooks\/math\/applied-mathematics\/elementary-technical-mathematics\/chapter-9-section-9-4-applications-involving-pairs-of-linear-equations-exercises-page-338\/3","text":"## Elementary Technical Mathematics\n\nLet x be the time in operation of the 180 gal\/h pump and y be the time in operation of the 250 gal\/h pump. $x+y=6$ since the total test time was 6 hours. $180x+250y=1325$ since the total amount pumped was 1325 gal. Multiply both sides of the first equation by -250, then add the two equations. $-250x-250y=-1500$ $\\underline{\\ \\ \\ 180x+250y=\\ \\ \\ 1325}$ $\\ -70x\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ =-175$ $\\longrightarrow$ divide both sides by -70 to solve for x $-70x\\div-70=-175\\div-70$ $x=2.5$ Substitute 2.5 for x in the first equation to solve for y. $2.5+y=6$ $2.5+y-2.5=6-2.5$ $y=3.5$","date":"2018-09-25 19:42:34","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7160370945930481, \"perplexity\": 436.8769037567601}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": 
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-39\/segments\/1537267162385.83\/warc\/CC-MAIN-20180925182856-20180925203256-00541.warc.gz\"}"}
| null | null |
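The elimination steps in the pump problem above can be checked numerically. A minimal sketch in plain Python (the helper name `solve_2x2` is mine, not from the source) solving the same system x + y = 6, 180x + 250y = 1325 via Cramer's rule, which is algebraically equivalent to the elimination shown:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("system is singular")
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y

# x + y = 6 hours of pumping; 180x + 250y = 1325 gallons pumped.
x, y = solve_2x2(1, 1, 6, 180, 250, 1325)
print(x, y)  # 2.5 3.5, matching the worked answer
```

Both equations are satisfied by the result, confirming the hand elimination: 2.5 hours on the 180 gal/h pump and 3.5 hours on the 250 gal/h pump.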
{"url":"https:\/\/everything.explained.today\/Digamma_function\/","text":"# Digamma function explained\n\nIn mathematics, the digamma function is defined as the logarithmic derivative of the gamma function:\n\n\\psi(x)=\\frac{d}{dx}\\ln\\Gamma(x)=\\frac{\\Gamma'(x)}{\\Gamma(x)}\\sim\\ln x-\\frac{1}{2x}.\n\nIt is the first of the polygamma functions.\n\nThe digamma function is often denoted as \\psi_0(x), \\psi^{(0)}(x) or \\digamma [1] (the uppercase form of the archaic Greek consonant digamma meaning double-gamma).\n\n## Relation to harmonic numbers\n\nThe gamma function obeys the equation\n\n\\Gamma(z+1)=z\\Gamma(z).\n\nTaking the derivative with respect to z gives:\n\n\\Gamma'(z+1)=z\\Gamma'(z)+\\Gamma(z)\n\nDividing by \\Gamma(z+1) or the equivalent z\\Gamma(z) gives:\n\n\\frac{\\Gamma'(z+1)}{\\Gamma(z+1)}=\\frac{\\Gamma'(z)}{\\Gamma(z)}+\\frac{1}{z}\n\nor:\n\n\\psi(z+1)=\\psi(z)+\\frac{1}{z}\n\nSince the harmonic numbers are defined for positive integers n as\n\nH_n=\\sum_{k=1}^{n}\\frac{1}{k},\n\nthe digamma function is related to them by\n\n\\psi(n)=H_{n-1}-\\gamma,\n\nwhere H_0=0 and \\gamma is the Euler\u2013Mascheroni constant. 
For half-integer arguments the digamma function takes the values\n\n\\psi\\left(n+\\tfrac12\\right)=-\\gamma-2ln2\n\n n +\\sum k=1\n 2 2k-1\n\n.\n\n## Integral representations\n\nIf the real part of is positive then the digamma function has the following integral representation due to Gauss:[2]\n\n\\psi(z)=\n\n infty \\int \\left( 0\n e-t t\n\n-\n\n e-zt 1-e-t\n\n\\right)dt.\n\n\\gamma\n\ngives:\n\n\\psi(z+1)=-\\gamma+\n\n 1 \\int \\left( 0\n 1-tz 1-t\n\n\\right)dt.\n\nThe integral is Euler's harmonic number\n\nHz\n\n, so the previous formula may also be written\n\n\\psi(z+1)=\\psi(1)+Hz.\n\nA consequence is the following generalization of the recurrence relation:\n\n\\psi(w+1)-\\psi(z+1)=Hw-Hz.\n\nAn integral representation due to Dirichlet is:[2]\n\n\\psi(z)=\n\n infty \\int 0\n\n\\left(e-t-\n\n 1 \\right) (1+t)z\n dt t\n\n.\n\nGauss's integral representation can be manipulated to give the start of the asymptotic expansion of\n\n\\psi\n\n.[3]\n\n\\psi(z)=logz-\n\n 1 2z\n\n-\n\n infty \\int \\left( 0\n 1 2\n\n-\n\n 1 t\n\n+\n\n 1 et-1\n\n\\right)e-tzdt.\n\nThis formula is also a consequence of Binet's first integral for the gamma function. 
The integral may be recognized as a Laplace transform.\n\nBinet's second integral for the gamma function gives a different formula for\n\n\\psi\n\nwhich also gives the first few terms of the asymptotic expansion:[4]\n\n\\psi(z)=logz-\n\n 1 2z\n\n-\n\n infty 2\\int 0\n tdt (t2+z2)(e2\\pi-1)\n\n.\n\nFrom the definition of\n\n\\psi\n\nand the integral representation of the gamma function, one obtains\n\n\\psi(z)=\n\n 1 \\Gamma(z)\n infty \\int 0\n\ntz-1ln(t)e-tdt,\n\nwith\n\n\\Rez>0\n\n.[5]\n\n## Infinite product representation\n\nThe function\n\n\\psi(z)\/\\Gamma(z)\n\nis an entire function, and it can be represented by the infinite product\n \\psi(z) \\Gamma(z)\n\n=-e2\\gamma\n\n infty\\left(1- z xk\n\\prod\nk=0\n z xk\n\\right)e\n\n.\n\nHere\n\nxk\n\nis the kth zero of\n\n\\psi\n\n(see below), and\n\n\\gamma\n\nis the Euler\u2013Mascheroni constant.\n\nNote: This is also equal to\n\n - d dz\n 1 \\Gamma(z)\ndue to the definition of the digamma function:\n \\Gamma'(z) \\Gamma(z)\n\n=\\psi(z)\n\n.\n\n## Series representation\n\n### Series formula\n\nEuler's product formula for the gamma function, combined with the functional equation and an identity for the Euler\u2013Mascheroni constant, yields the following expression for the digamma function, valid in the complex plane outside the negative integers (Abramowitz and Stegun 6.3.16):\n\n\\begin{align} \\psi(z+1) &=-\\gamma+\n\n infty \\sum \\left( n=1\n 1 n\n\n-\n\n 1 n+z\n\n\\right), \u2003\u2003 z-1,-2,-3,\\ldots,\\\\ &=-\\gamma+\n\n infty \\sum \\left( n=1\n z n(n+z)\n\n\\right), \u2003\u2003 z-1,-2,-3,\\ldots. 
\\end{align}\n\nEquivalently,\n\n\\begin{align} \\psi(z) &=-\\gamma+\n\n infty \\sum \\left( n=0\n 1 n+1\n\n-\n\n 1 n+z\n\n\\right), \u2003\u2003 z0,-1,-2,\\ldots,\\\\ &=-\\gamma+\n\n infty \\sum n=0\n z-1 (n+1)(n+z)\n\n, \u2003\u2003 z0,-1,-2,\\ldots,\\\\ \\end{align}\n\n#### Evaluation of sums of rational functions\n\nThe above identity can be used to evaluate sums of the form\n\n infty \\sum n=0\n\nun=\\sum\n\n infty n=0\n p(n) q(n)\n\n,\n\nwhere and are polynomials of .\n\nPerforming partial fraction on in the complex field, in the case when all roots of are simple roots,\n\nu\n n= p(n) q(n)\n m =\\sum k=1\n ak n+bk\n\n.\n\nFor the series to converge,\n\n\\limn\\toinftynun=0,\n\notherwise the series will be greater than the harmonic series and thus diverge. Hence\n\n m \\sum k=1\n\nak=0,\n\nand\n\n infty \\begin{align} \\sum n=0\n\nun&=\n\n m ak n+bk\n\\sum\nk=1\n m \\\\ &=\\sum k=1\na-\n k\\left( 1 n+bk\n 1 n+1\n\n\\right)\n\n m\\left(a \\\\ &=\\sum k\\sum\n infty\\left( 1 n+bk\n-\nn=0\n 1 n+1\n m \\right)\\right)\\\\ &=-\\sum k=1\n\nak(\\psi(bk)+\\gamma)\n\n m \\\\ &=-\\sum k=1\n\nak\\psi(bk). \\end{align}\n\nWith the series expansion of higher rank polygamma function a generalized formula can be given as\n\n infty \\sum n=0\n\nun=\\sum\n\n m k=1\nak\n rk (n+b k)\n m =\\sum k=1\n rk (-1)\n(rk-1)!\n (rk-1) a k\\psi\n\n(bk),\n\nprovided the series on the left converges.\n\n### Taylor series\n\nThe digamma has a rational zeta series, given by the Taylor series at . This is\n\n\\psi(z+1)=-\\gamma\n\n infty -\\sum k=1\n\n\\zeta(k+1)(-z)k,\n\nwhich converges for . Here, is the Riemann zeta function. This series is easily derived from the corresponding Taylor's series for the Hurwitz zeta function.\n\n### Newton series\n\nThe Newton series for the digamma, sometimes referred to as Stern series,[6] [7] reads\n\n infty \\psi(s+1)=-\\gamma-\\sum k=1\n (-1)k k\n\n\\binom{s}{k}\n\nwhere is the binomial coefficient. 
It may also be generalized to\n\n\\psi(s+1)=-\\gamma-\n\n 1 m\n m-1 \\sum k=1\n m-k - s+k\n 1 m\n infty (-1)k k\n\\sum\nk=1\n\n\\left\\{\\binom{s+m}{k+1}-\\binom{s}{k+1}\\right\\}, \u2003\u2003 \\Re(s)>-1,\n\nwhere\n\n### Series with Gregory's coefficients, Cauchy numbers and Bernoulli polynomials of the second kind\n\nThere exist various series for the digamma containing rational coefficients only for the rational arguments. In particular, the series with Gregory's coefficients is\n\n\\psi(v)=lnv-\n\n infty |Gn|(n-1)! (v)n\n\\sum\nn=1\n\n, \u2003\u2003 \\Re(v)>0,\n\n\\psi(v)=2ln\\Gamma(v)-2vlnv+2v+2lnv-ln2\\pi-\n\n infty |Gn(2)| (v)n\n2\\sum\nn=1\n\n(n-1)!, \u2003\u2003 \\Re(v)>0,\n\n\\psi(v)=3ln\\Gamma(v)-6\\zeta'(-1,v)+3v2ln{v}-\n\n 32 v\n\n2-6vln(v)+3v+3ln{v}-\n\n 32ln2\\pi +\n 12 -\n infty |Gn(3)| (v)n\n3\\sum\nn=1\n\n(n-1)!, \u2003\u2003 \\Re(v)>0,\n\nwhere is the rising factorial, are the Gregory coefficients of higher order with, is the gamma function and is the Hurwitz zeta function.[8] Similar series with the Cauchy numbers of the second kind reads\n\n\\psi(v)=ln(v-1)+\n\n infty Cn(n-1)! (v)n\n\\sum\nn=1\n\n, \u2003\u2003 \\Re(v)>1,\n\nA series with the Bernoulli polynomials of the second kind has the following form\n\n\\psi(v)=ln(v+a)+\n\ninfty\n n\\psi (-1) (a)(n-1)! n\n(v)n\n\\sum\nn=1\n\n, \u2003\u2003 \\Re(v)>-a,\n\nwhere are the Bernoulli polynomials of the second kind defined by the generatingequation\n z(1+z)a ln(1+z)\n\n=\n\n infty \\sum n=0\n\nzn\\psin(a), \u2003\u2003 |z|<1,\n\nIt may be generalized to\n\n\\psi(v)=\n\n 1 r\n r-1 \\sum l=0\n\nln(v+a+l)+\n\n 1 r\n infty (-1)nNn,r(a)(n-1)! (v)n\n\\sum\nn=1\n\n, \u2003\u2003 \\Re(v)>-a,r=1,2,3,\\ldots\n\nwhere the polynomials are given by the following generating equation\n (1+z)a+m-(1+z)a ln(1+z)\n infty =\\sum n=0\n\nNn,m(a)zn, \u2003\u2003 |z|<1,\n\nso that . 
Similar expressions with the logarithm of the gamma function involve these formulas\n\n\\psi(v)=\n\n 1 v+a-\\tfrac12\n\n\\left\\{ln\\Gamma(v+a)+v-\n\n 12ln2\\pi -\n 12 +\n infty (-1)n\\psin+1(a) (v)n\n\\sum\nn=1\n\n(n-1)!\\right\\}, \u2003\u2003 \\Re(v)>-a,\n\nand\n\n\\psi(v)=\n\n 1 \\tfrac{1\n\n{2}r+v+a-1}\\left\\{ln\\Gamma(v+a)+v-\n\n 12ln2\\pi -\n 12 +\n 1 r\n r-2 \\sum n=0\n\n(r-n-1)ln(v+a+n)+\n\n 1 r\n infty (-1)nNn+1,r(a) (v)n\n\\sum\nn=1\n\n(n-1)!\\right\\},\n\nwhere\n\n\\Re(v)>-a\n\nand\n\nr=2,3,4,\\ldots\n\n.\n\n## Reflection formula\n\nThe digamma function satisfies a reflection formula similar to that of the gamma function:\n\n\\psi(1-x)-\\psi(x)=\\pi\\cot\\pix\n\n## Recurrence formula and characterization\n\nThe digamma function satisfies the recurrence relation\n\n \\psi(x+1)=\\psi(x)+ 1 x\n\n.\n\nThus, it can be said to \"telescope\", for one has\n\n\\Delta[\\psi](x)=\n\n 1 x\n\nwhere is the forward difference operator. This satisfies the recurrence relation of a partial sum of the harmonic series, thus implying the formula\n\n\\psi(n)=Hn-1-\\gamma\n\nwhere is the Euler\u2013Mascheroni constant.\n\nMore generally, one has\n\n\\psi(1+z)=-\\gamma+\n\n infty \\sum k=1\n\n\\left(\n\n 1 - k\n 1 z+k\n\n\\right).\n\nfor\n\nRe(z)>0\n\n. Another series expansion is:\n \\psi(1+z)=ln(z)+ 1 2z\n infty -\\displaystyle\\sum j=1\n B2j 2jz2j\n\n, where\n\nB2j\n\nare the Bernoulli numbers. This series diverges for all and is known as the Stirling series.\n\nActually, is the only solution of the functional equation\n\n F(x+1)=F(x)+ 1 x\n\nthat is monotonic on and satisfies . This fact follows immediately from the uniqueness of the function given its recurrence equation and convexity restriction. This implies the useful difference equation:\n\n N-1 \\psi(x+N)-\\psi(x)=\\sum k=0\n 1 x+k\n\n## Some finite sums involving the digamma function\n\nThere are numerous finite summation formulas for the digamma function. 
Basic summation formulas, such as\n\n m \\sum \\psi\\left( r=1\n r m\n\n\\right)=-m(\\gamma+lnm),\n\n m \\sum \\psi\\left( r=1\n r m\n\n\\right)\\exp\\dfrac{2\\pirki}{m}=mln\\left(1-\\exp\n\n 2\\piki m\n\n\\right), \u2003\u2003 k\\in\\Z,m\\in\\N,k\\nem.\n\n m-1 \\sum \\psi\\left( r=1\n r m\n\n\\right)\\cos\\dfrac{2\\pirk}{m}=mln\\left(2\\sin\n\n k\\pi m\n\n\\right)+\\gamma, \u2003\u2003 k=1,2,\\ldots,m-1\n\n m-1 \\sum r=1\n\n\\psi\\left(\n\n r m\n\n\\right)\\sin\n\n 2\\pirk = m\n \\pi 2\n\n(2k-m), \u2003\u2003 k=1,2,\\ldots,m-1\n\nare due to Gauss.[9] [10] More complicated formulas, such as\n\n m-1 \\sum r=0\n\n\\psi\\left(\n\n 2r+1 \\right) \u22c5 \\cos 2m\n (2r+1)k\\pi m\n\n=mln\\left(\\tan\n\n \\pik 2m\n\n\\right), \u2003\u2003 k=1,2,\\ldots,m-1\n\n m-1 \\sum r=0\n\n\\psi\\left(\n\n 2r+1 2m\n\n\\right)\\sin\\dfrac{(2r+1)k\\pi}{m}=-\n\n \\pim 2\n\n, \u2003\u2003 k=1,2,\\ldots,m-1\n\n m-1 \\sum \\psi\\left( r=1\n r \\right) \u22c5 \\cot m\n \\pir m\n\n=-\n\n \\pi(m-1)(m-2) 6\n m-1 \\sum r=1\n\n\\psi\\left(\n\n r m\n\n\\right)\n\n r =- m\n \\gamma (m-1)- 2\n m 2\n\nlnm-\n\n \\pi 2\n m-1 \\sum r=1\n r \u22c5 \\cot m\n \\pir m\n\n m-1 \\sum r=1\n\n\\psi\\left(\n\n r m\n\n\\right)\\cos\\dfrac{(2\\ell+1)\\pir}{m}=-\n\n \\pi m\n m-1 \\sum r=1\n r \u22c5 \\sin\\dfrac{2\\pir m\n m-1 \\sum r=1\n\n\\psi\\left(\n\n r m\n\n\\right)\\sin\\dfrac{(2\\ell+1)\\pir}{m}=-(\\gamma+ln2m)\\cot\n\n (2\\ell+1)\\pi 2m\n\n+\\sin\\dfrac{(2\\ell+1)\\pi\n\n m-1 }{m}\\sum r=1\n ln\\sin\\dfrac{\\pir m\n m-1 \\sum r=1\n 2\\left( r m\n\\psi\n\n\\right)=(m-1)\\gamma2+m(2\\gamma+ln4m)ln{m}-m(m-1)ln22+\n\n \\pi2(m2-3m+2) 12\n m-1 +m\\sum \\ell=1\n\nln2\\sin\n\n \\pi\\ell m\n\nare due to works of certain modern authors (see e.g. 
Appendix B in Blagouchine (2014)[11]).\n\n## Gauss's digamma theorem\n\nFor positive integers and, the digamma function may be expressed in terms of Euler's constant and a finite number of elementary functions\n\n \\psi\\left( r m\n\n\\right)=-\\gamma-ln(2m)-\n\n \\pi \\cot\\left( 2\n r\\pi m\n\n\\right)\n\n\\left\\lfloor\n m-1 2\n\\right\\rfloor\n+2\\sum\\cos\\left(\nn=1\n 2\\pinr m\n\n\\right)ln\\sin\\left(\n\n \\pin m\n\n\\right)\n\nwhich holds, because of its recurrence equation, for all rational arguments.\n\n## Asymptotic expansion\n\nThe digamma function has the asymptotic expansion\n\n\\psi(z)\\simlnz-\n\n 1 2z\n\n+\n\n infty \\sum n=1\n \\zeta(1-2n) z2n\n\n=lnz-\n\n 1 2z\n\n-\n\n infty \\sum n=1\n B2n 2nz2n\n\n,\n\nwhere is the th Bernoulli number and is the Riemann zeta function. The first few terms of this expansion are:\n\n\\psi(z)lnz-\n\n 1 2z\n\n-\n\n 1 12z2\n\n+\n\n 1 120z4\n\n-\n\n 1 252z6\n\n+\n\n 1 240z8\n\n-\n\n 1 132z10\n\n+\n\n 691 32760z12\n\n-\n\n 1 12z14\n\n+.\n\nAlthough the infinite sum does not converge for any, any finite partial sum becomes increasingly accurate as increases.\n\nThe expansion can be found by applying the Euler\u2013Maclaurin formula to the sum[12]\n\n infty \\sum \\left( n=1\n 1 n\n\n-\n\n 1 z+n\n\n\\right)\n\nThe expansion can also be derived from the integral representation coming from Binet's second integral formula for the gamma function. Expanding\n\nt\/(t2+z2)\n\nas a geometric series and substituting an integral representation of the Bernoulli numbers leads to the same asymptotic series as above. Furthermore, expanding only finitely many terms of the series gives a formula with an explicit error term:\n\n\\psi(z)=lnz-\n\n 1 2z\n\n-\n\n N \\sum n=1\n B2n 2nz2n\n\n+(-1)N+1\n\n 2 z2N\n infty \\int 0\n t2N+1dt (t2+z2)(e2\\pi-1)\n\n.\n\n## Inequalities\n\nWhen, the function\n\nlogx-\n\n 1 2x\n\n-\\psi(x)\n\nis completely monotonic and in particular positive. 
This is a consequence of Bernstein's theorem on monotone functions applied to the integral representation coming from Binet's first integral for the gamma function. Additionally, by the convexity inequality $1 + t \le e^t$, the integrand in this representation is bounded above by $e^{-tz/2}$. The function $\frac{1}{x} - \log x + \psi(x)$ is also completely monotonic. It follows that, for all $x > 0$,

$$\log x - \frac{1}{x} \le \psi(x) \le \log x - \frac{1}{2x}.$$

This recovers a theorem of Horst Alzer.[13] Alzer also proved that, for $s \in (0, 1)$,

$$\frac{1-s}{x+s} < \psi(x+1) - \psi(x+s).$$

Related bounds were obtained by Elezovic, Giordano, and Pecaric, who proved that, for $x > 0$,

$$\log\left(x + \tfrac{1}{2}\right) - \frac{1}{x} < \psi(x) < \log\left(x + e^{-\gamma}\right) - \frac{1}{x},$$

where $\gamma$ is the Euler–Mascheroni constant.[14] The constants appearing in these bounds are the best possible.[15]

The mean value theorem implies the following analog of Gautschi's inequality: if $x > x_0$, where $x_0 \approx 1.461$ is the unique positive real root of the digamma function, and if $s > 0$, then

$$\exp\left((1-s)\,\frac{\psi'(x+1)}{\psi(x+1)}\right) \le \frac{\psi(x+1)}{\psi(x+s)} \le \exp\left((1-s)\,\frac{\psi'(x+s)}{\psi(x+s)}\right).$$

Moreover, equality holds if and only if $s = 1$.[16]

Inspired by the harmonic mean value inequality for the classical gamma function, Horst Alzer and Graham Jameson proved, among other things, a harmonic mean-value inequality for the digamma function:

$$-\gamma \le \frac{2\,\psi(x)\,\psi(\tfrac{1}{x})}{\psi(x) + \psi(\tfrac{1}{x})}$$

for $x > 0$, with equality if and only if $x = 1$.[17]

## Computation and approximation

The asymptotic expansion gives an easy way to compute $\psi(x)$ when the real part of $x$ is large. To compute $\psi(x)$ for small $x$, the recurrence relation

$$\psi(x+1) = \frac{1}{x} + \psi(x)$$

can be used to shift the value of $x$ to a higher value. Beal[18] suggests using the above recurrence to shift $x$ to a value greater than 6 and then applying the above expansion with the higher-order terms cut off, which yields "more than enough precision" (at least 12 digits except near the zeroes).

As $x$ goes to infinity, $\psi(x)$ gets arbitrarily close to both $\ln(x - \tfrac{1}{2})$ and $\ln x$. Going down from $x+1$ to $x$, $\psi$ decreases by $\frac{1}{x}$, $\ln(x - \tfrac{1}{2})$ decreases by $\ln\frac{x + 1/2}{x - 1/2}$, which is more than $\frac{1}{x}$, and $\ln x$ decreases by $\ln\frac{x+1}{x}$, which is less than $\frac{1}{x}$. From this we see that for any positive $x$ greater than $\tfrac{1}{2}$,

$$\psi(x) \in \left(\ln\left(x - \tfrac{1}{2}\right),\, \ln x\right)$$

or, for any positive $x$,

$$\exp \psi(x) \in \left(x - \tfrac{1}{2},\, x\right).$$

The exponential $\exp \psi(x)$ is approximately $x - \tfrac{1}{2}$ for large $x$, but gets closer to $x$ at small $x$, approaching 0 at $x = 0$.

For $x < 1$, we can calculate limits based on the fact that between 1 and 2, $\psi \in [-\gamma,\, 1 - \gamma]$, so

$$\psi(x) \in \left(-\frac{1}{x} - \gamma,\; 1 - \frac{1}{x} - \gamma\right), \qquad x \in (0, 1),$$

or

$$\exp \psi(x) \in \left(\exp\left(-\frac{1}{x} - \gamma\right),\; e \exp\left(-\frac{1}{x} - \gamma\right)\right).$$

From the above asymptotic series for $\psi$, one can derive an asymptotic series for $\frac{1}{\exp \psi(x)}$. The series matches the overall behaviour well; that is, it behaves asymptotically as it should for large arguments, and has a zero of unbounded multiplicity at the origin too.

$$\frac{1}{\exp \psi(x)} \sim \frac{1}{x} + \frac{1}{2 \cdot x^2} + \frac{5}{4 \cdot 3! \cdot x^3} + \frac{3}{2 \cdot 4! \cdot x^4} + \frac{47}{48 \cdot 5! \cdot x^5} - \frac{5}{16 \cdot 6! \cdot x^6} + \cdots$$

This is similar to a Taylor expansion, but it does not converge.[19] (The function is not analytic at infinity.) A similar series exists for $\exp \psi(x)$ which starts with

$$\exp \psi(x) \sim x - \tfrac{1}{2}.$$

If one calculates the asymptotic series for $\exp \psi(x + \tfrac{1}{2})$ it turns out that there are no odd powers of $x$ (there is no $x^{-1}$ term). This leads to the following asymptotic expansion, which saves computing terms of even order:

$$\exp \psi\left(x + \tfrac{1}{2}\right) \sim x + \frac{1}{4! \cdot x} - \frac{37}{8 \cdot 6! \cdot x^3} + \frac{10313}{72 \cdot 8! \cdot x^5} - \frac{5509121}{384 \cdot 10! \cdot x^7} + \cdots$$

## Special values

The digamma function has values in closed form for rational numbers, as a result of Gauss's digamma theorem. Some are listed below:

$$\begin{align}
\psi(1) &= -\gamma \\
\psi\left(\tfrac{1}{2}\right) &= -2\ln 2 - \gamma \\
\psi\left(\tfrac{1}{3}\right) &= -\frac{\pi}{2\sqrt{3}} - \frac{3}{2}\ln 3 - \gamma \\
\psi\left(\tfrac{1}{4}\right) &= -\frac{\pi}{2} - 3\ln 2 - \gamma \\
\psi\left(\tfrac{1}{6}\right) &= -\frac{\pi\sqrt{3}}{2} - 2\ln 2 - \frac{3}{2}\ln 3 - \gamma \\
\psi\left(\tfrac{1}{8}\right) &= -\frac{\pi}{2} - 4\ln 2 - \frac{\pi + \ln(2+\sqrt{2}) - \ln(2-\sqrt{2})}{\sqrt{2}} - \gamma.
\end{align}$$

Moreover, by taking the logarithmic derivative of $|\Gamma(bi)|^2$ or $|\Gamma(\tfrac{1}{2} + bi)|^2$, where $b$ is real-valued, it can easily be deduced that

$$\operatorname{Im} \psi(bi) = \frac{1}{2b} + \frac{\pi}{2}\coth(\pi b), \qquad \operatorname{Im} \psi\left(\tfrac{1}{2} + bi\right) = \frac{\pi}{2}\tanh(\pi b).$$

Apart from Gauss's digamma theorem, no such closed formula is known for the real part in general. We have, for example, at the imaginary unit the numerical approximation

$$\operatorname{Re} \psi(i) = -\gamma - \sum_{n=0}^{\infty} \frac{n-1}{n^3 + n^2 + n + 1} \approx 0.09465.$$

## Roots of the digamma function

The roots of the digamma function are the saddle points of the complex-valued gamma function. Thus they all lie on the real axis. The only one on the positive real axis is the unique minimum of the real-valued gamma function, at $x_0 = 1.46163\ldots$ All others occur singly between the poles on the negative axis.

Already in 1881, Charles Hermite observed[20] that

$$x_n = -n + \frac{1}{\ln n} + O\left(\frac{1}{(\ln n)^2}\right)$$

holds asymptotically. A better approximation of the location of the roots is given by

$$x_n \approx -n + \frac{1}{\pi}\arctan\left(\frac{\pi}{\ln n}\right), \qquad n \ge 2,$$

and using a further term it becomes still better:

$$x_n \approx -n + \frac{1}{\pi}\arctan\left(\frac{\pi}{\ln n + \frac{1}{8n}}\right), \qquad n \ge 1,$$

which both spring off the reflection formula via

$$0 = \psi(1 - x_n) = \psi(x_n) + \frac{\pi}{\tan \pi x_n}$$

and substituting $\psi(x_n)$ by its non-convergent asymptotic expansion. The correct second term of this expansion is $\frac{1}{2n}$, where the given one works well to approximate roots with small $n$.

Another improvement of Hermite's formula can be given:

$$x_n = -n + \frac{1}{\log n} - \frac{1}{2n(\log n)^2} + O\left(\frac{1}{n^2 (\log n)^2}\right).$$

Regarding the zeros, the following infinite sum identities were recently proved by István Mező and Michael Hoffman:[21]

$$\begin{align}
\sum_{n=0}^{\infty} \frac{1}{x_n^2} &= \gamma^2 + \frac{\pi^2}{2}, \\
\sum_{n=0}^{\infty} \frac{1}{x_n^3} &= -4\zeta(3) - \gamma^3 - \frac{\gamma \pi^2}{2}, \\
\sum_{n=0}^{\infty} \frac{1}{x_n^4} &= \gamma^4 + \frac{\pi^4}{9} + \frac{2}{3}\gamma^2 \pi^2 + 4\gamma\zeta(3).
\end{align}$$

In general, the function

$$Z(k) = \sum_{n=0}^{\infty} \frac{1}{x_n^k}$$

can be determined, and it is studied in detail by the cited authors.

The following results[21]

$$\sum_{n=0}^{\infty} \frac{1}{x_n^2 + x_n} = -2, \qquad \sum_{n=0}^{\infty} \frac{1}{x_n^2 - x_n} = \gamma + \frac{\pi^2}{6\gamma}$$

also hold true. Here $\gamma$ is the Euler–Mascheroni constant.

## Regularization

The digamma function appears in the regularization of divergent integrals

$$\int_0^{\infty} \frac{dx}{x + a};$$

this integral can be approximated by a divergent general harmonic series, but the following value can be attached to the series:

$$\sum_{n=0}^{\infty} \frac{1}{n + a} = -\psi(a).$$

## References

1. Pairman, Eleanor (1919). *Tables of the Digamma and Trigamma Functions*. Cambridge University Press. p. 5.
2. Whittaker and Watson, 12.3.
3. Whittaker and Watson, 12.31.
4. Whittaker and Watson, 12.32, example.
5. NIST Digital Library of Mathematical Functions, DLMF, 5.9.
6. Nörlund, N. E. (1924). *Vorlesungen über Differenzenrechnung*. Berlin: Springer.
7. Blagouchine, Ia. V. (2018). "Three Notes on Ser's and Hasse's Representations for the Zeta-functions". *INTEGERS: The Electronic Journal of Combinatorial Number Theory*, 18A, 1–45. arXiv:1606.02044.
8. Blagouchine, Ia. V. (2016). "Two series expansions for the logarithm of the gamma function involving Stirling numbers and containing only rational coefficients for certain arguments". *Journal of Mathematical Analysis and Applications*, 442, 404–434. doi:10.1016/J.JMAA.2016.04.032. arXiv:1408.3902.
9. Campbell, R. (1966). *Les intégrales eulériennes et leurs applications*. Paris: Dunod.
10. Srivastava, H. M.; Choi, J. (2001). *Series Associated with the Zeta and Related Functions*. The Netherlands: Kluwer Academic Publishers.
11. Blagouchine, Iaroslav V. (2014). "A theorem for the closed-form evaluation of the first generalized Stieltjes constant at rational arguments and some related summations". *Journal of Number Theory*, 148, 537–592. doi:10.1016/j.jnt.2014.08.009. arXiv:1401.3724.
12. Bernardo, José M. (1976). "Algorithm AS 103: psi (digamma function) computation". *Applied Statistics*, 25, 315–317. doi:10.2307/2347257.
13. Alzer, H. (1997). "On some inequalities for the gamma and psi functions". *Math. Comp.*, 66 (217), 373–389.
14. Elezovic, N.; Giordano, C.; Pecaric, J. (2000). "The best bounds in Gautschi's inequality". *Math. Inequal. Appl.*, 3, 239–252.
15. Qi, F.; Guo, B.-N. "Sharp inequalities for the psi function and harmonic numbers". arXiv:0902.2524.
16. Laforgia, A.; Natalini, P. (2013). "Exponential, gamma and polygamma functions: Simple proofs of classical and new inequalities". *J. Math. Anal. Appl.*, 407, 495–504.
17. Alzer, Horst; Jameson, Graham (2017). "A harmonic mean inequality for the digamma function and related results". *Rendiconti del Seminario Matematico della Università di Padova*, 137, 203–209. doi:10.4171/RSMUP/137-10.
18. Beal, Matthew J. (2003). *Variational Algorithms for Approximate Bayesian Inference* (PhD thesis). The Gatsby Computational Neuroscience Unit, University College London. pp. 265–266.
19. If it converged to a function, then that function would have the same Maclaurin series as the formal series above. But this does not converge, because the series given earlier for $\psi$ does not converge.
20. Hermite, Charles (1881). "Sur l'intégrale Eulérienne de seconde espéce". *Journal für die reine und angewandte Mathematik*, 90, 332–338.
21. Mező, István; Hoffman, Michael E. (2017). "Zeros of the digamma function and its Barnes G-function analogue". *Integral Transforms and Special Functions*, 28 (11), 846–858. doi:10.1080/10652469.2017.1376193.

{"date": "2022-08-07 16:31:36", "warc_path": "s3://commoncrawl/crawl-data/CC-MAIN-2022-33/segments/1659882570651.49/warc/CC-MAIN-20220807150925-20220807180925-00457.warc.gz"}
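The shift-then-expand recipe described above can be sketched in a few lines of Python. This is an illustrative implementation, not from the original article: the cutoff at x ≥ 6 follows Beal's suggestion, while the number of retained series terms (Bernoulli-number coefficients 1/12, 1/120, 1/252, 1/240) is a choice made here.

```python
import math

def digamma(x):
    """Approximate psi(x) for real x > 0."""
    # Use the recurrence psi(x) = psi(x + 1) - 1/x to shift the
    # argument up until it is large enough for the asymptotic series.
    result = 0.0
    while x < 6.0:
        result -= 1.0 / x
        x += 1.0
    # Asymptotic expansion: psi(x) ~ ln x - 1/(2x) - sum_k B_{2k}/(2k x^{2k}),
    # truncated after the x^{-8} term (evaluated in nested Horner form).
    inv2 = 1.0 / (x * x)
    series = inv2 * (1.0 / 12 - inv2 * (1.0 / 120 - inv2 * (1.0 / 252 - inv2 * (1.0 / 240))))
    return result + math.log(x) - 0.5 / x - series
```

For example, `digamma(1.0)` returns approximately `-0.5772156649`, i.e. −γ.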
| null | null |
The Classy Wood Metal Glass Candle Holder will add beauty to your candles and make a great addition to your space. It blends with all kinds of interiors and is sure to grab the attention of many.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,760
|
---
title: Check if an Element is Present in a Binary Search Tree
localeTitle: Verificar se um elemento está presente em uma árvore de pesquisa binária
---
## Check if an Element is Present in a Binary Search Tree

This is a stub. [Help our community to expand it](https://github.com/freecodecamp/guides/tree/master/src/pages/certifications/coding-interview-prep/data-structures/check-if-an-element-is-present-in-a-binary-search-tree/index.md).

[This quick style guide will help ensure your pull request gets accepted](https://github.com/freecodecamp/guides/blob/master/README.md).
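As an illustrative sketch of the topic this stub names (my own example, not part of the original guide): membership testing in a binary search tree follows a single root-to-leaf path, comparing the target with each node's value.

```python
class Node:
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left
        self.right = right

def contains(root, target):
    # Descend the tree: go left for smaller targets, right for larger ones.
    while root is not None:
        if target == root.value:
            return True
        root = root.left if target < root.value else root.right
    return False
```

On a tree rooted at 8 with subtrees under 3 and 10, `contains(root, 6)` is `True` while `contains(root, 7)` is `False`.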
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,155
|
class eop_aux
{
public:
template<typename eT> arma_inline static typename arma_integral_only<eT>::result acos (const eT x) { return eT( std::acos(double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result asin (const eT x) { return eT( std::asin(double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result atan (const eT x) { return eT( std::atan(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result acos (const eT x) { return std::acos(x); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result asin (const eT x) { return std::asin(x); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result atan (const eT x) { return std::atan(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result acos (const eT x) { return arma_acos(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result asin (const eT x) { return arma_asin(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result atan (const eT x) { return arma_atan(x); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result acosh (const eT x) { return eT( arma_acosh(double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result asinh (const eT x) { return eT( arma_asinh(double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result atanh (const eT x) { return eT( arma_atanh(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result acosh (const eT x) { return arma_acosh(x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result asinh (const eT x) { return arma_asinh(x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result atanh (const eT x) { return arma_atanh(x); }
template<typename eT> arma_inline static typename arma_not_cx<eT>::result conj(const eT x) { return x; }
template<typename T> arma_inline static std::complex<T> conj(const std::complex<T>& x) { return std::conj(x); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result sqrt (const eT x) { return eT( std::sqrt (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result log10 (const eT x) { return eT( std::log10(double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result log (const eT x) { return eT( std::log (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result exp (const eT x) { return eT( std::exp (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result cos (const eT x) { return eT( std::cos (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result sin (const eT x) { return eT( std::sin (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result tan (const eT x) { return eT( std::tan (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result cosh (const eT x) { return eT( std::cosh (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result sinh (const eT x) { return eT( std::sinh (double(x)) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result tanh (const eT x) { return eT( std::tanh (double(x)) ); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result sqrt (const eT x) { return std::sqrt (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result log10 (const eT x) { return std::log10(x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result log (const eT x) { return std::log (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result exp (const eT x) { return std::exp (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result cos (const eT x) { return std::cos (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result sin (const eT x) { return std::sin (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result tan (const eT x) { return std::tan (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result cosh (const eT x) { return std::cosh (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result sinh (const eT x) { return std::sinh (x); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result tanh (const eT x) { return std::tanh (x); }
template<typename eT> arma_inline static typename arma_unsigned_integral_only<eT>::result neg (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_signed_only<eT>::result neg (const eT x) { return -x; }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result floor (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result floor (const eT x) { return std::floor(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result floor (const eT& x) { return eT( std::floor(x.real()), std::floor(x.imag()) ); }
template<typename eT> arma_inline static typename arma_integral_only<eT>::result ceil (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result ceil (const eT x) { return std::ceil(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result ceil (const eT& x) { return eT( std::ceil(x.real()), std::ceil(x.imag()) ); }
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result round (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result round (const eT x) { return std::round(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result round (const eT& x) { return eT( std::round(x.real()), std::round(x.imag()) ); }
#else
template<typename eT> arma_inline static typename arma_integral_only<eT>::result round (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result round (const eT x) { return (x >= eT(0)) ? std::floor(x+0.5) : std::ceil(x-0.5); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result round (const eT& x) { return eT( eop_aux::round(x.real()), eop_aux::round(x.imag()) ); }
#endif
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result trunc (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result trunc (const eT x) { return std::trunc(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result trunc (const eT& x) { return eT( std::trunc(x.real()), std::trunc(x.imag()) ); }
#else
template<typename eT> arma_inline static typename arma_integral_only<eT>::result trunc (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_real_only<eT>::result trunc (const eT x) { return (x >= eT(0)) ? std::floor(x) : std::ceil(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result trunc (const eT& x) { return eT( eop_aux::trunc(x.real()), eop_aux::trunc(x.imag()) ); }
#endif
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result log2 (const eT x) { return eT( std::log(double(x))/ double(0.69314718055994530942) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result log2 (const eT x) { return std::log2(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result log2 (const eT& x) { typedef typename get_pod_type<eT>::result T; return std::log(x) / T(0.69314718055994530942); }
#else
template<typename eT> arma_inline static typename arma_integral_only<eT>::result log2 (const eT x) { return eT( std::log(double(x))/ double(0.69314718055994530942) ); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result log2 (const eT x) { typedef typename get_pod_type<eT>::result T; return std::log(x) / T(0.69314718055994530942); }
#endif
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result exp2 (const eT x) { return eT( std::pow(double(2), double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result exp2 (const eT x) { return std::exp2(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result exp2 (const eT& x) { typedef typename get_pod_type<eT>::result T; return std::pow( T(2), x); }
#else
template<typename eT> arma_inline static typename arma_integral_only<eT>::result exp2 (const eT x) { return eT( std::pow(double(2), double(x)) ); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result exp2 (const eT x) { typedef typename get_pod_type<eT>::result T; return std::pow( T(2), x); }
#endif
template<typename eT> arma_inline static typename arma_integral_only<eT>::result exp10 (const eT x) { return eT( std::pow(double(10), double(x)) ); }
template<typename eT> arma_inline static typename arma_real_or_cx_only<eT>::result exp10 (const eT x) { typedef typename get_pod_type<eT>::result T; return std::pow( T(10), x); }
template<typename eT> arma_inline static typename arma_unsigned_integral_only<eT>::result arma_abs (const eT x) { return x; }
template<typename eT> arma_inline static typename arma_signed_integral_only<eT>::result arma_abs (const eT x) { return std::abs(x); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result arma_abs (const eT x) { return std::abs(x); }
template<typename T> arma_inline static typename arma_real_only< T>::result arma_abs (const std::complex<T>& x) { return std::abs(x); }
template<typename eT> arma_inline static typename arma_unsigned_integral_only<eT>::result sign (const eT x) { return (x > eT(0)) ? eT(+1) : eT(0); }
template<typename eT> arma_inline static typename arma_signed_integral_only<eT>::result sign (const eT x) { return (x > eT(0)) ? eT(+1) : ( (x < eT(0)) ? eT(-1) : eT(0) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result sign (const eT x) { return (x > eT(0)) ? eT(+1) : ( (x < eT(0)) ? eT(-1) : eT(0) ); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result sign (const eT& x) { typedef typename eT::value_type T; return (x.real() != T(0) && x.imag() != T(0)) ? (x / std::abs(x)) : x; }
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result erf (const eT x) { return eT( std::erf(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result erf (const eT x) { return std::erf(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result erf (const eT& x) { arma_ignore(x); return eT(0); }
#elif defined(ARMA_HAVE_TR1)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result erf (const eT x) { return eT( std::tr1::erf(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result erf (const eT x) { return std::tr1::erf(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result erf (const eT& x) { arma_ignore(x); return eT(0); }
#else
template<typename eT> arma_inline static eT erf (const eT x) { arma_ignore(x); arma_stop_logic_error("erf(): need C++11 compiler"); return eT(0); }
#endif
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result erfc (const eT x) { return eT( std::erfc(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result erfc (const eT x) { return std::erfc(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result erfc (const eT& x) { arma_ignore(x); return eT(0); }
#elif defined(ARMA_HAVE_TR1)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result erfc (const eT x) { return eT( std::tr1::erfc(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result erfc (const eT x) { return std::tr1::erfc(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result erfc (const eT& x) { arma_ignore(x); return eT(0); }
#else
template<typename eT> arma_inline static eT erfc (const eT x) { arma_ignore(x); arma_stop_logic_error("erfc(): need C++11 compiler"); return eT(0); }
#endif
#if defined(ARMA_USE_CXX11)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result lgamma (const eT x) { return eT( std::lgamma(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result lgamma (const eT x) { return std::lgamma(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result lgamma (const eT& x) { arma_ignore(x); return eT(0); }
#elif defined(ARMA_HAVE_TR1)
template<typename eT> arma_inline static typename arma_integral_only<eT>::result lgamma (const eT x) { return eT( std::tr1::lgamma(double(x)) ); }
template<typename eT> arma_inline static typename arma_real_only<eT>::result lgamma (const eT x) { return std::tr1::lgamma(x); }
template<typename eT> arma_inline static typename arma_cx_only<eT>::result lgamma (const eT& x) { arma_ignore(x); return eT(0); }
#else
template<typename eT> arma_inline static eT lgamma (const eT x) { arma_ignore(x); arma_stop_logic_error("lgamma(): need C++11 compiler"); return eT(0); }
#endif
template<typename T1, typename T2> arma_inline static typename arma_integral_only<T1>::result pow (const T1 base, const T2 exponent) { return T1( std::pow( double(base), double(exponent) ) ); }
template<typename T1, typename T2> arma_inline static typename arma_real_or_cx_only<T1>::result pow (const T1 base, const T2 exponent) { return std::pow(base, exponent); }
template<typename eT>
arma_inline
static
typename arma_integral_only<eT>::result
direct_eps(const eT)
{
return eT(0);
}
template<typename eT>
inline
static
typename arma_real_only<eT>::result
direct_eps(const eT x)
{
//arma_extra_debug_sigprint();
  // according to the IEEE Standard for Floating-Point Arithmetic (IEEE 754)
// the mantissa length for double is 53 bits = std::numeric_limits<double>::digits
// the mantissa length for float is 24 bits = std::numeric_limits<float >::digits
//return std::pow( std::numeric_limits<eT>::radix, (std::floor(std::log10(std::abs(x))/std::log10(std::numeric_limits<eT>::radix))-(std::numeric_limits<eT>::digits-1)) );
const eT radix_eT = eT(std::numeric_limits<eT>::radix);
const eT digits_m1_eT = eT(std::numeric_limits<eT>::digits - 1);
// return std::pow( radix_eT, eT(std::floor(std::log10(std::abs(x))/std::log10(radix_eT)) - digits_m1_eT) );
return eop_aux::pow( radix_eT, eT(std::floor(std::log10(std::abs(x))/std::log10(radix_eT)) - digits_m1_eT) );
}
template<typename T>
inline
static
typename arma_real_only<T>::result
direct_eps(const std::complex<T>& x)
{
//arma_extra_debug_sigprint();
//return std::pow( std::numeric_limits<T>::radix, (std::floor(std::log10(std::abs(x))/std::log10(std::numeric_limits<T>::radix))-(std::numeric_limits<T>::digits-1)) );
const T radix_T = T(std::numeric_limits<T>::radix);
const T digits_m1_T = T(std::numeric_limits<T>::digits - 1);
return std::pow( radix_T, T(std::floor(std::log10(std::abs(x))/std::log10(radix_T)) - digits_m1_T) );
}
};
//! @}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 9,764
|
Q: Apache mod_rewrite rules ignored when request uri contains known extension These are my rules:
# Rewrite rules
RewriteRule ^$ path/to/webroot/index.php [L]
RewriteRule (.*) path/to/webroot/index.php [L]
Which I would expect to rewrite all requests to: path/to/webroot/index.php (please note, I've simplified this for demonstration purposes)
Tests:
Request Response Result
/test 200 [PASSED]
/another_test 200 [PASSED]
/index.html 404 [FAILED]
/index.htmlXX 200 [PASSED]
/test.css 404 [FAILED]
/test.cssXX 200 [PASSED]
/index.php 200 [PASSED]
tl;dr: %.html & %.css fail, everything else passes.
It appears that for requests containing extensions the server understands (html, css, ...), the rewrites get ignored. The one exception in my test is .php, which the server rewrites and serves correctly.
Unfortunately I don't currently have access to the server logs.
Am I doing it wrong?
A: It appears that, if the file does exist, then your rewrite rules are not applied. This often comes from the MultiViews option.
Solution to try:
*
*disable MultiViews in .htaccess: Options -MultiViews
*remove MultiViews from default config
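Putting that together with the rules from the question, a minimal `.htaccess` might look like this (paths are the question's placeholders; `RewriteEngine On` is assumed here):

```apache
# Disable content negotiation so existing files don't bypass the rules
Options -MultiViews

RewriteEngine On
RewriteRule ^$ path/to/webroot/index.php [L]
RewriteRule (.*) path/to/webroot/index.php [L]
```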
A: First off, I apologise for wasting anyone's time on this question. I wasn't fully aware of the server setup when I posted this question. I won't go into the details but I will post the answer because hopefully it might help others who run into the same situation.
The 'problem' arose as Nginx is being used to the serve the static assets - js|ico|gif|jpg|jpeg|png|css|etc|etc... and hence the mod_rewrite rules are not being applied to any requests with these file extensions.
Obviously the solution is to create rewrite rules for the assets using HttpRewriteModule
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,564
|
Story: Richard Ainscough
Inspired by their own compact New York City bathroom, Matthew Malin and Andrew Goetz decided that they wanted to develop products that could be shared between a couple to help minimize bathroom clutter; this meant foregoing the norm and keeping their entire eponymous skin care line gender neutral. "We're addressing skin issues and we're trying to treat them, so for us that's not about race or gender, it's really just about: do you have a great cleanser, do you have a great moisturizer, and is it effective?" says Malin. Although, leaving some of the colour-coordinated Malin + Goetz products out on the bathroom counter wouldn't be the end of the world.
The white background of each tub, bottle, and tube is plastered with blue, red, or green writing that categorizes the products into face, hair, and body. Malin + Goetz has amassed a following of committed users with its gentle formulations that are free of synthetic fragrances and colours, and packed with amino acids known for their replenishing properties. Minimizing potential irritants within the range is at the foundation of the brand, as Malin struggled to find grooming products that wouldn't upset his rosacea and eczema. The body washes have a dense viscosity closer to bubble bath than typical fare, the lotions are hydrating but never greasy, and the scented candles permeate a room without overpowering.
"One of our goals for Malin + Goetz is always to try to simplify the process, to make it really uncomplicated, and keep it as effective along the way—so we have really kept our assortment very tight, and we really focused on those essential items that make a difference to somebody's regimen," says Malin. The brand keeps a small circle of treatment products, including masks and exfoliators, which complement a handful of cleaners and moisturizers, forgoing the inclusion of toners. "Why spend money on [a toner] when we can just make a better cleanser and a better moisturizer?" says Malin with a laugh. The co-founder speaks of his company in a way that conveys his passion for skin care, and his knowledge of the industry thanks to his tenure at Kiehl's as well as Prada's fragrance division, all while being incredibly down-to-earth about the entire operation.
Malin + Goetz recently opened two shops in Los Angeles, and is in the process of building its first international boutique, scheduled to open in London in 2016. Malin says that the idea behind the brand is "to modernize the traditional apothecary concept from a neighbourhood perspective and on to a global reach," which the duo has certainly achieved (their products are stocked in boutiques around the world and found in the amenity offerings of the Morgans Hotel Group). The highly recognizable fragrances of bergamot and dark rum, which are two of the brand's signature scents, have amassed an international customer base that can recognize the scents upon first whiff, regardless of location.
The company continues to grow and develop "organically", according to Malin, releasing new products only a few times annually. The coming year will see another fragrance added to their compact but diverse collection of scents, and a replenishing facial oil will also launch. "We have been a self-funded business throughout our life, just sort of operating based on profitability, and growing organically—so it's been very manageable along the way," says Malin. "It just feels very homegrown and special."
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 6,580
|
\section{Analysis}
\label{sec:analysis}
Some smoothness assumptions on the densities are warranted to make estimation
tractable.
We use the \holder class, which is now standard in
the nonparametrics literature. \\
\begin{definition}[\holder Class]
Let $\Xcal \subset \RR^d$ be a compact space.
For any $r = (r_1, \dots, r_d), r_i \in \NN$, define $|r| = \sum_i r_i$ and
$D^r = \frac{\partial^{|r|}}{\partial x_1^{r_1} \dots \partial x_d^{r_d}}$. The
\holder class \textbf{$\Sigma(s, L)$} is the set of functions on $L_2(\Xcal)$ satisfying,
\[
|D^r f(x) - D^r f(y)| \leq L \|x - y\|^{s - |r|}, \;\;
\]
for all $r$ s.t. $|r| \leq \floor{s}$ and for all $x, y \in \Xcal$.
\end{definition}
Moreover, define the Bounded \holder Class $\Sigma(s, L, B', B)$ to be
$\cbr{f\in \Sigma(s,L): B' < f < B}$.
Note that large $s$ implies higher smoothness.
Given $n$ samples $\Xn$ from a $d$-dimensional density $f$, the kernel density
estimator (KDE) with bandwidth $h$ is $\fhat(t) = 1/(nh^d) \sum_{i=1}^n
K\left(\frac{t-X_i}{h} \right)$.
Here $K:\RR^d \rightarrow \RR$ is a smoothing kernel \cite{tsybakov08nonparametric}.
When $f \in \Sigma(s,L)$, by
selecting $h \in \Theta(n^{\frac{-1}{2s+d}})$
the KDE achieves the minimax rate of $\bigOp(n^{\frac{-2s}{2s+d}})$ in mean squared
error.
Further, if $f$ is in the bounded \holder class $\Sigma(s, L, B',B)$ one can
truncate the KDE from below at $B'$ and from above at $B$
and achieve the same convergence rate~\citep{birge95estimation}.
In our analysis, the density estimators $\fhatone, \fhatmi, \ghatone, \ghatmi$
are formed by either a KDE or a
truncated KDE, and we will make use of these results.
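For intuition, we recall the standard bias--variance calculation behind this bandwidth choice (a textbook sketch, not specific to our setting): for $f \in \Sigma(s,L)$ the pointwise bias of the KDE is of order $h^s$ and its variance of order $(nh^d)^{-1}$, so the mean squared error scales as
\[
h^{2s} + \frac{1}{n h^d},
\]
which is minimized by $h \asymp n^{\frac{-1}{2s+d}}$, yielding the rate $n^{\frac{-2s}{2s+d}}$ stated above.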
We will also need the following regularity condition
on the influence
function. This is satisfied for smooth functionals including those
in Table~\ref{tb:functionalDefns}. We demonstrate this in our example in
Appendix~\ref{sec:workedExample}.
\\[\thmparaspacing]
\begin{assumption}
For a functional $T(f)$, the influence function $\psi$ satisfies,
\[
\EE\big[(\psi(X;f') - \psi(X;f))^2\big]
\in \bigO( \|f-f'\|^2 ) \;\;\textrm{ as }\;\;
\|f-f'\|^2 \rightarrow 0.
\]
For a functional $T(f,g)$ of two distributions, the influence functions $\psi_f,
\psi_g$ satisfy,
\begin{align*}
&\EE_f\Big[(\psi_f(X;f', g') - \psi_f(X;f,g))^2\Big]
\in \bigO( \|f-f'\|^2 + \|g-g'\|^2 )
\;\;\textrm{ as }\;\;
\|f-f'\|^2, \|g-g'\|^2 \rightarrow 0. \\
&\EE_g\Big[(\psi_g(Y;f', g') - \psi_g(Y;f,g))^2\Big]
\in \bigO( \|f-f'\|^2 + \|g-g'\|^2 )
\;\;\textrm{ as }\;\;
\|f-f'\|^2, \|g-g'\|^2 \rightarrow 0.
\end{align*}
\label{asm:infFunRegularity}
\end{assumption}
\vspace{-0.2in}
Under the above assumptions, it is known \cite{emery98probstat,robins09quadraticvm}
that the \dss estimator on a single distribution achieves the mean squared error
(MSE) $\EE[(\Tfds-T(f))^2] \in
\bigO(n^{\minfours} + n^{-1})$ and further is asymptotically normal when $s>d/2$.
We have reviewed them along with a proof in Appendix~\ref{sec:appOneDistro}.
Note that
\citet{robins09quadraticvm} analyse $\Tfds$ in the semiparametric setting.
We present a simpler, self-contained analysis that directly uses the VME
and has more interpretable assumptions.
Bounding the bias and variance
of the \dss estimator to establish the
convergence rate follows via a straightforward conditioning argument and
Cauchy-Schwarz. However, an attractive property is that the analysis is agnostic to the
density estimator used provided it achieves the correct rates.
For the \loos estimator proposed in~\eqref{eqn:looEstimator},
we establish the following result.
\vspace{\parathmspacing}
\begin{theorem}[\textbf{Convergence of \loos Estimator for $T(f)$}]
Let $f\in\Sigma(s,L,B,B')$ and $\psi$ satisfy Assumption~\ref{asm:infFunRegularity}.
Then,
$\EE[(\Tf - T(f))^2]$ is
$\bigO(n^{\frac{-4s}{2s+d}})$ when $s<d/2$ and $\bigO(n^{-1})$ when $s\geq d/2$.
\label{thm:convOneLoo}
\end{theorem}
The key technical challenge in analysing the \loos estimator (when compared to the
\dss estimator) is in bounding the variance with
several correlated terms in the summation.
The bounded difference inequality is a popular trick used in such settings,
but this requires a supremum on the influence functions which leads to
significantly worse rates.
Instead we use the Efron-Stein inequality which provides an integrated
version of bounded differences that can recover the correct rate when coupled with
Assumption~\ref{asm:infFunRegularity}.
Our proof is contingent on the
use of the KDE as the density estimator.
While our empirical studies indicate that $\Tf$'s limiting distribution is normal
(Fig~\ref{fig:looHellingerAN}), the proof seems challenging due to the correlation
between terms in the summation.
We conjecture that $\Tf$ is indeed asymptotically normal but for now leave it as an
open problem.
We reiterate that while the convergence rates are the same for both \dss and \loos
estimators, the data splitting degrades empirical performance of $\Tfds$.
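To make the one-distribution estimator concrete: for the Shannon entropy $T(f) = -\int f \log f$ the influence function is $\psi(x;f) = -\log f(x) - T(f)$, so the \loos estimator~\eqref{eqn:looEstimator} collapses to $\hat T = -\frac{1}{n}\sum_i \log \fhatmi(X_i)$. The pure-Python sketch below implements this with a Gaussian KDE; the bandwidth rule and the Gaussian test density are illustrative assumptions, not the tuned choices used in the experiments.

```python
import math
import random

def loo_entropy(data, h):
    """Leave-one-out estimate of Shannon entropy:
    That = -(1/n) * sum_i log fhat^{-i}(X_i), where fhat^{-i} is a Gaussian
    KDE built from all points except X_i."""
    n = len(data)
    norm = 1.0 / ((n - 1) * h * math.sqrt(2.0 * math.pi))
    total = 0.0
    for i, t in enumerate(data):
        f_loo = norm * sum(math.exp(-0.5 * ((t - x) / h) ** 2)
                           for j, x in enumerate(data) if j != i)
        total += math.log(f_loo)
    return -total / n

random.seed(1)
n = 1000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]
h = 1.06 * n ** (-1.0 / 5)              # illustrative rule-of-thumb bandwidth
est = loo_entropy(sample, h)
truth = 0.5 * math.log(2.0 * math.pi * math.e)  # entropy of N(0,1), ~1.4189
print(est, truth)
```

Note that the $T(\fhatmi)$ and $-T(\fhatmi)$ terms cancel exactly, which is why no numerical integration is needed for this particular functional.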
Now we turn our attention to functionals of two distributions.
When analysing asymptotics we will assume that
as $n,m \rightarrow \infty$,
$n/(n+m) \rightarrow \zeta \in (0,1)$.
Denote $N = n+m$.
For the \dss estimator~\eqref{eqn:dsEstimatorTwoD} we generalise
\emph{our} analysis for one
distribution to establish the theorem below.
\\[\thmparaspacing]
\begin{theorem}[\textbf{Convergence/Asymptotic Normality of \dss Estimator
for $T(f,g)$}]
Let $f,g\in\Sigma(s,L,B,B')$ and $\psif,\psig$ satisfy
Assumption~\ref{asm:infFunRegularity}. Then,
$\;\EE[(\Tfds - T(f,g))^2]$ is
$\bigO(n^{\frac{-4s}{2s+d}} + m^{\frac{-4s}{2s+d}})$ when $s<d/2$
and $\bigO(n^{-1} + m^{-1})$ when $s\geq d/2$.
Further, when $s>d/2$ and when $\psif, \psig \neq \zero$, $\Tfds$ is asymptotically
normal,
\begin{align*}
&\sqrt{N}(\Tfds - T(f,g)) \indistribution \numberthis
\label{eqn:asympNormalTwoDistro}
\Ncal \left( 0, \frac{1}{\zeta} \VV_{f} \left[\psi_f(X; f, g)\right]
+ \frac{1}{1 - \zeta} \VV_{g} \left[\psi_g(Y; f, g)\right] \right).
\end{align*}
\label{thm:convTwoDS}
\end{theorem}
\vspace{-0.18in}
The asymptotic normality result is useful as it allows us to construct asymptotic
confidence intervals for a functional.
Even though the asymptotic variance of the influence function is
not known, by Slutsky's theorem
any consistent estimate of the variance gives a valid asymptotic confidence
interval. In fact, we can use an influence function based estimator for the
asymptotic variance, since it is also a
differentiable functional of the densities.
We demonstrate this in our example in Appendix~\ref{sec:workedExample}.
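To illustrate the construction, consider the simple functional $T(f,g) = \EE_f[X] - \EE_g[Y]$, for which $\psif(x;f,g) = x - \EE_f[X]$ and $\psig(y;f,g) = -(y - \EE_g[Y])$. The pure-Python sketch below plugs sample variances of the estimated influence values into~\eqref{eqn:asympNormalTwoDistro} to form an asymptotic confidence interval; the Gaussian data and sample sizes are illustrative assumptions.

```python
import math
import random
import statistics

def mean_diff_ci(xs, ys, z=1.96):
    """Asymptotic 95% CI for T(f,g) = E_f[X] - E_g[Y].
    The influence values are psi_f(x) = x - mean(xs) and
    psi_g(y) = -(y - mean(ys)); their sample variances are consistent
    estimates, so by Slutsky's theorem the interval is asymptotically valid."""
    n, m = len(xs), len(ys)
    that = statistics.fmean(xs) - statistics.fmean(ys)
    v_f = statistics.variance(xs)   # = sample variance of the psi_f values
    v_g = statistics.variance(ys)   # = sample variance of the psi_g values
    half = z * math.sqrt(v_f / n + v_g / m)
    return that - half, that + half

random.seed(2)
xs = [random.gauss(1.0, 1.0) for _ in range(400)]   # f = N(1, 1)
ys = [random.gauss(0.0, 2.0) for _ in range(300)]   # g = N(0, 4)
lo, hi = mean_diff_ci(xs, ys)
print(lo, hi)   # interval for the true value T(f,g) = 1
```

For general functionals only the influence values change; the variance-plug-in step is identical.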
The condition $\psif, \psig\neq \zero$ is somewhat technical. When \emph{both}
$\psi_f$ and $\psi_g$ are zero, the first
order terms vanish and the estimator converges very fast (at rate $1/n^2$).
However, the asymptotic behavior of the estimator is unclear.
While this degeneracy occurs only on a meagre set, it does
arise for important choices. One example is the null hypothesis $f=g$ in two-sample
testing problems.
Finally, for the \loos estimator~\eqref{eqn:looEstimatorTwoD} on two distributions
we have the following result. \\[\thmparaspacing]
\begin{theorem}[\textbf{Convergence of \loos Estimator for $T(f,g)$}]
Let $f,g\in\Sigma(s,L,B,B')$ and $\psif,\psig$ satisfy
Assumption~\ref{asm:infFunRegularity}. Then,
$\;\EE[(\Tf - T(f,g))^2]$ is
$\bigO(n^{\frac{-4s}{2s+d}} + m^{\frac{-4s}{2s+d}})$ when $s<d/2$ and
$\bigO(n^{-1} + m^{-1})$ when $s\geq d/2$.
\label{thm:convTwoLoo}
\end{theorem}
For many functionals, a H\"olderian assumption ($\Sigma(s,L)$) alone is
sufficient to guarantee
the rates in Theorems~\ref{thm:convOneLoo},\ref{thm:convTwoDS}
and~\ref{thm:convTwoLoo}. However, for
some functionals (such as the $\alpha$-divergences) we require
$\fhat, \ghat, f, g$ to be bounded above and below.
Existing results~\citep{krishnamurthy14renyi,birge95estimation} demonstrate
that estimating such quantities is difficult without this assumption.
Now we turn our attention to the question of statistical difficulty. Via lower
bounds given by~\citet{birge95estimation} and~\citet{laurent1996efficient}
we know that the \dss and \loos estimators
are minimax optimal when $s>d/2$ for functionals of one distribution.
In the following theorem, we present a lower bound for estimating functionals of
two distributions.
\\[\thmparaspacing]
\begin{theorem}[\textbf{Lower Bound for $T(f,g)$}]
Let $f,g \in \Sigma(s,L)$ and $\That$ be any estimator for $T(f,g)$.
Define $\tau = \min\{8s/(4s+d), 1\}$. Then there exists a strictly positive
constant $c$ such
that,
\[
\liminf_{n\rightarrow \infty}\; \inf_{\That} \; \sup_{f,g \in \Sigma(s,L)}
\EE\big[ (\That - T(f,g))^2 \big] \geq c \left( n^{-\tau} + m^{-\tau} \right).
\]
\label{thm:lowerbound}
\end{theorem}
\vspace{\thmparaspacing}
Our proof, given in Appendix~\ref{sec:appLowerBound}, is based on LeCam's
method \cite{tsybakov08nonparametric} and
generalises the analysis of \citet{birge95estimation} for functionals
of one distribution.
This establishes minimax
optimality of the \ds/\loos estimators for functionals of two distributions when
$s\geq d/2$.
However, when $s<d/2$ there is a gap between our technique and the lower bound and
it is natural to ask if it is possible
to improve on our rates in this regime.
A series of work \citep{birge95estimation, laurent1996efficient,
kerkyacharian1996estimating} shows that, for integral
functionals of one distribution, one can achieve the $n^{-1}$ rate when $s > d/4$
by estimating the second order term in the functional Taylor expansion.
This second order correction was also done for polynomial functionals of two
distributions with similar statistical gains \citep{krishnamurthy14renyi}.
While we believe this is possible here, these estimators are conceptually
complicated and computationally expensive -- requiring
$O(n^3+m^3)$ effort when compared to the $O(n^2+m^2)$ effort for our estimator.
The first order estimator has a favorable
balance between statistical and computational efficiency.
Further, not much is known about the limiting distribution of
second order estimators.
\section{Auxiliary Results}
\label{sec:appAncillaryResults}
\begin{lemma}[VME and Functional Taylor Expansion]
\label{lem:vmeTaylorEqOneDistro}
Let $P, Q$ have densities $p, q$ and let $T(P) = \phi(\int \nu(p))$.
Then the first order VME of $T(Q)$ around $P$ reduces to a functional Taylor
expansion around $p$:
\begin{equation}
T(Q) = T(P) + T'(Q-P;P) + R_2 = T(p) +
\phi'\left( \int \nu( p ) \right)\int \nu'(p) (q-p) + R_2
\end{equation}
\begin{proof}
It is
sufficient to show that the first order terms are equal.
\begin{align*}
T'(Q-P;P) &= \partialfracat{t}{T( (1-t)P + tQ)}{0}
= \partialfrac{t}{} \phi \left(\int \nu( (1-t)p + tq) \right)\big|_{t=0} \\
&= \phi'\left( \int \nu( (1-t)p + tq) \right) \int \nu'((1-t)p + tq) (q-p)
\big|_{t=0} \\
&= \phi'\left( \int \nu(p) \right)\int \nu'(p) (q-p)
\end{align*}
\end{proof}
\end{lemma}
\begin{lemma}[VME and Functional Taylor Expansion - Two Distributions]
\label{lem:vmeTaylorEqTwoDistro}
Let $P_1, P_2, Q_1, Q_2$ be distributions with densities $p_1, p_2, q_1, q_2$.
Let $T(P_1, P_2) = \phi\left(\int \nu(p_1, p_2)\right)$. Then,
\begin{align*}
T(Q_1, Q_2) &= T(P_1, P_2) + T'_1(Q_1-P_1; P_1, P_2) + T_2'(Q_2-P_2; P_1, P_2) +
R_2 \numberthis \\
&= T(P_1, P_2) +
\phi'\left(\int \nu(p_1, p_2) \right) \Big(
\int \partialfrac{p_1(x)}{\nu(p_1(x), p_2(x))} (q_1 - p_1) \ud x + \\
&\hspace{0.6in}
\int \partialfrac{p_2(x)}{\nu(p_1(x), p_2(x))} (q_2 - p_2) \ud x \Big) + R_2
\end{align*}
\begin{proof}
Is similar to Lemma~\ref{lem:vmeTaylorEqOneDistro}.
\\[\thmparaspacing]
\end{proof}
\end{lemma}
\begin{lemma} Let $f, g$ be two densities bounded above and below on
a compact space. Then for all $a, b$
\[
\| f^a - g^a \|_b \in O(\|f - g\|_b)
\]
\begin{proof}
Follows from the expansion,
\begin{equation*}
\int |f^a - g^a|^b = \int |g^a(x) + a(f(x)-g(x))g_*^{a-1}(x) - g^a(x)|^b \leq
a^b \sup |g_*^{b(a-1)}(x)| \int |f-g|^b.
\end{equation*}
Here $g_*(x)$ takes an
intermediate value between $f(x)$ and $g(x)$. In the second step we have used
the boundedness of $f$, $g$ to bound $g_*$.
\label{lem:densitypowers}
\end{proof}
\end{lemma}
Finally, we will make use of the Efron-Stein inequality, stated below, in our analysis.
\\[\thmparaspacing]
\begin{lemma}[Efron-Stein Inequality]
Let $X_1, \dots, X_n, X'_1, \dots, X'_n$ be independent random variables where $X_i,
X'_i \in \Xcal_i$.
Let $Z = f(X_1, \dots, X_n)$ and $\Zii{i} = f(X_1, \dots, X'_i, \dots, X_n)$ where
$f:\Xcal_1\times\dots\times\Xcal_n \rightarrow \RR$. Then,
\[
\VV(Z) \;\leq\; \frac{1}{2} \;\EE\left[ \sum_{i=1}^n (Z - \Zii{i})^2 \right]
\]
\label{lem:efronstein}
\end{lemma}
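As a sanity check, the inequality holds with equality for the sample mean: with $Z = \frac{1}{n}\sum_i X_i$ one gets $\frac{1}{2}\EE\sum_i (Z - \Zii{i})^2 = \sigma^2/n = \VV(Z)$. The Monte-Carlo sketch below verifies this numerically for uniform variables; the sample sizes are arbitrary.

```python
import random
import statistics

random.seed(3)
n, reps = 50, 20000

def sample_mean(xs):
    return sum(xs) / len(xs)

zs, es_terms = [], []
for _ in range(reps):
    xs = [random.random() for _ in range(n)]
    z = sample_mean(xs)
    zs.append(z)
    # resample a single coordinate; by symmetry,
    # E sum_i (Z - Z^i)^2 = n * E (Z - Z^1)^2
    z1 = sample_mean([random.random()] + xs[1:])
    es_terms.append(n * (z - z1) ** 2)

var_z = statistics.variance(zs)        # true value: (1/12)/n = 1/600
es_bound = 0.5 * statistics.fmean(es_terms)
print(var_z, es_bound)
```

Both quantities concentrate around $\sigma^2/n = 1/600$, matching the equality case of the lemma.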
\section{Addendum to Experiments}
\label{sec:appExperiments}
\subsection{Details on Simulations}
In our simulations, for the first figure comparing the Shannon Entropy
in Fig~\ref{fig:toyOne} we generated data
from the following one-dimensional density:
\[
f_1(t) = 0.5 + 5\, t^9
\]
For this, with probability $1/2$ we sample from the uniform distribution $U(0,1)$
on $(0,1)$ and otherwise sample $10$ points from $U(0,1)$ and pick the maximum.
For the third figure in Fig~\ref{fig:toyOne} comparing the KL divergence, we
generate data from the one-dimensional density
\[
f_2(t) = 0.5 + \frac{0.5t^{19}(1-t)^{19}}{B(20,20)}
\]
where $B(\cdot,\cdot)$ is the Beta function. For this, with probability $1/2$ we
sample from $U(0,1)$ and otherwise sample from a $\textrm{Beta}(20,20)$ distribution.
For the second and fourth figures of Fig~\ref{fig:toyOne} we sampled from a
$2$-dimensional density whose first dimension was $f_1$ and whose second was $U(0,1)$.
The fifth and sixth were from a $2$-dimensional
density whose first dimension was $f_2$ and whose second was $U(0,1)$.
In all figures of Fig.~\ref{fig:toyTwo}, the first distribution was a $4$-dimensional density
where all dimensions are $f_2$; the second was $U(0,1)^4$.
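For reference, both sampling schemes can be reproduced in a few lines of pure Python (the seed and sample size are arbitrary):

```python
import random

def sample_f1():
    """With prob 1/2 draw U(0,1); otherwise take the max of 10 U(0,1) draws.
    The max has density 10 t^9, so the mixture density is 0.5 + 5 t^9."""
    if random.random() < 0.5:
        return random.random()
    return max(random.random() for _ in range(10))

def sample_f2():
    """With prob 1/2 draw U(0,1); otherwise draw Beta(20,20)."""
    if random.random() < 0.5:
        return random.random()
    return random.betavariate(20.0, 20.0)

random.seed(4)
xs1 = [sample_f1() for _ in range(20000)]
xs2 = [sample_f2() for _ in range(20000)]
mean1 = sum(xs1) / len(xs1)   # true mean: 1/4 + 5/11 ~ 0.7045
mean2 = sum(xs2) / len(xs2)   # true mean: 0.5
print(mean1, mean2)
```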
\textbf{Methods compared to: }
In addition to the plug-in, \dss and \loos estimators we perform comparisons with
several other estimators.
For the Shannon Entropy we compare our method to the \knns estimator of
\citet{goria2005new}, the method of \citet{stowell2009fast} which uses $K-D$
partitioning, the method of \citet{noughabi2013entropy} based on Vasicek's spacing
method and that of \citet{learned2003ica} based on Voronoi tessellation.
For the KL divergence we compare against the \knns method of \citet{perez2008kullback}
and that of \citet{ramirez2009entropy} based on the power spectral density
representation using Szego's theorem. For \renyi, \tsallis and Hellinger divergences
we compared against the \knns method of \citet{poczos12divergence}.
\subsection{Image Clustering Task}
Here we demonstrate a simple image clustering task using a nonparametric divergence
estimator. For this we use images from the ETH-80 dataset.
The objective here is not to champion our approach against all existing
methods for image clustering.
Rather, we just wish to demonstrate that our estimators can
be easily and intuitively applied to many Machine Learning problems.
We use the three categories Apples, Cows and Cups and randomly select $50$ images
from each category. Some sample images are shown in Fig~\ref{fig:clusImages}.
We convert the images to grey scale and extract the SIFT features from each
image. The SIFT features are $128$-dimensional, but we project them to $4$ dimensions
via PCA. This is necessary because nonparametric methods work best in low dimensions.
Now we can treat each image as a collection of features, and hence a sample from a $4$
dimensional distribution.
We estimate the Hellinger divergence between these ``distributions".
Then we construct an affinity matrix $A$ where the similarity metric between the
$i$\superscript{th} and $j$\superscript{th} image is given by $A_{ij} =
\exp(-\widehat{H}^2(X_i, X_j))$. Here $X_i$ and $X_j$ denote the projected SIFT
samples from images $i$ and $j$ and $\widehat{H}(X_i, X_j)$ is the estimated
Hellinger divergence between the distributions.
Finally, we run a spectral clustering algorithm on the matrix $A$.
Figure~\ref{fig:affinity} depicts the affinity matrix $A$ when the images were
ordered according to their class label. The affinity matrix exhibits
block-diagonal structure which indicates that our Hellinger divergence estimator
can in fact identify patterns in the images.
Our approach achieved a clustering accuracy of $92.47\%$. When we used the $k$-NN
based estimator of~\cite{poczos12divergence} we achieved an accuracy of
$90.04\%$.
When we instead applied spectral clustering naively,
with $A_{ij} = \exp(-L_2(P_i,P_j)^2)$ where $L_2(P_i,P_j)$ is
the $L_2$ distance between the pixel intensities, we achieved an accuracy of
$70.18\%$. We also tried $A_{ij} = \exp(-\alpha \widehat{H}^2(X_i, X_j))$ as the
affinity for
different choices of $\alpha$ and found that our estimator still performed best.
We also experimented with the
\renyi and \tsallis divergences and obtained similar results.
On the same note, one can imagine that these divergence estimators can also be used
for a classification task. For instance we can treat $\exp(-\widehat{H}^2(X_i, X_j))$
as a similarity
metric between the images and use it in a classifier such as an SVM.
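The pipeline can be sketched end-to-end in pure Python, with a crude histogram plug-in standing in for our Hellinger estimator and synthetic one-dimensional samples standing in for the SIFT features; everything below (bin count, sample distributions) is an illustrative assumption.

```python
import math
import random

def hellinger_sq(xs, ys, bins=20):
    """Crude histogram plug-in for the squared Hellinger divergence
    H^2 = 2 - 2 * int sqrt(p q), for samples supported on [0, 1]."""
    def hist(zs):
        counts = [0] * bins
        for z in zs:
            counts[min(int(z * bins), bins - 1)] += 1
        return [c / len(zs) for c in counts]
    p, q = hist(xs), hist(ys)
    return 2.0 - 2.0 * sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

random.seed(5)
# two "clusters" of three sample sets each: uniform vs Beta(5, 2)
groups = [[random.random() for _ in range(2000)] for _ in range(3)] + \
         [[random.betavariate(5.0, 2.0) for _ in range(2000)] for _ in range(3)]
A = [[math.exp(-hellinger_sq(x, y)) for y in groups] for x in groups]
# A is (approximately) block diagonal: within-cluster affinities exceed
# cross-cluster ones, which is what spectral clustering exploits
print(A[0][1], A[0][4])
```

In the experiments the histogram plug-in is replaced by our influence-function estimator, and the affinity matrix is fed to a standard spectral clustering routine.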
\insertFigClustering
\section{Proof of Lower Bound (Theorem~\ref{thm:lowerbound})}
\label{sec:appLowerBound}
We will prove the lower bound in the bounded \holder class $\Sigma(s,L,B,B')$
noting that the lower bound also applies to $\Sigma(s,L)$.
Our main tool will be LeCam's method where we reduce the estimation problem to a
testing problem. In the testing problem we construct a set of alternatives satisfying
certain separation properties from the null. For this we will use some
technical results from \citet{birge95estimation} and
\citet{krishnamurthy14renyi}.
First we state LeCam's method below adapted to our setting.
We define the squared Hellinger Divergence between two distributions
$P,Q$ with densities $p, q$ to be
\[
H^2(P,Q) = \int \big(\sqrt{p(x)} - \sqrt{q(x)}\big)^2 \ud x
= 2 - 2\int \sqrt{p(x)q(x)} \ud x
\]
\\[\thmparaspacing]
\begin{theorem}
\label{thm:lecam}
Let $T:\Mcal \times \Mcal \rightarrow \RR$. Consider a parameter space $\Theta
\subset \Mcal \times \Mcal$ such that $(f,g) \in \Theta$ and $(\plambda, \qlambda)
\in \Theta$ for all $\lambda$ in some index set $\Lambda$.
Denote the distributions of $f,g,\plambda,\qlambda$ by $F,G,\Plambda,\Qlambda$
respectively.
Define
$\PQbar = \frac{1}{|\Lambda|} \sum_{\lambda \in \Lambda} \Plambda^n \times
\Qlambda^m$.
If, there exists $(f,g) \in \Theta$,
$\gamma < 2$ and $\beta > 0$ such that the following two conditions
are satisfied
\begin{align*}
& H^2(\FnGm, \PQbar) \leq \gamma \\
& T(\plambda,\qlambda) \geq T(f,g) + 2\beta \;\;\; \forall\; \lambda \in \Lambda
\end{align*}
then,
\[
\inf_{\That} \sup_{(f,g) \in \Theta}
\PP\left( |\That - T(f,g)| > \beta\right) \geq \frac{1}{2}\left(1 -
\sqrt{\gamma(1-\gamma/4)} \right) > 0.
\]
\begin{proof}
The proof is a straightforward modification of Theorem 2.2 of
\citet{tsybakov08nonparametric} which we provide here
for completeness.
Let $\Theta_0 = \{(p,q) \in \Theta | T(p,q) \leq T(f,g)\}$ and
$\Theta_1 = \{(p,q) \in \Theta| T(p,q) \geq T(f,g) + 2\beta\}$. Hence
$(f,g) \in \Theta_0$ and $\pqlambda \in \Theta_1$ for all $\lambda \in \Lambda$.
Given $n$ samples from $p'$ and $m$ samples from $q'$
consider the simple vs simple hypothesis testing problem of
$H_0:(p',q') \in \Theta_0$ vs $H_1: (p',q') \in \Theta_1$. The probability of error
$p_e$ of any test $\Psi$ is lower bounded by
\[
p_e \geq \frac{1}{2}\left(1 - \sqrt{H^2(\FnGm,\PQbar)\left(1-
H^2(\FnGm,\PQbar)/4\right)}\right).
\]
See Lemma 2.1, Lemma 2.3 and Theorem 2.2 of \citet{tsybakov08nonparametric}.
Therefore,
\[
\inf_{\Psi} \sup_{(p',q')\in \Theta_0,\, (p'',q'')\in \Theta_1}
p_e \geq \frac{1}{2}\left(1-\sqrt{\gamma(1-\gamma/4)}\right)
\]
An error in the testing problem entails an error of at least $\beta$ in the
estimation problem, which completes the proof of the theorem.
\end{proof}
\end{theorem}
\vspace{\thmparaspacing}
Consider the set $\Gamma = \{-1,1\}^\ell$ and a set of densities
$\pgamma = f(1 + \sum_{j=1}^\ell \gamma_j v_j)$ indexed by each $\gamma \in \Gamma$.
Here $f$ is itself a density and the $v_j$'s are perturbations on $f$.
We will also use the following result from
\citet{birge95estimation} which bounds the
Hellinger divergence between the product distribution $\Fn$ and the mixture
product distribution $\Pbarn = \frac{1}{|\Gamma|} \sum_{\gamma \in \Gamma}
\Pgamman$. \\[\thmparaspacing]
\begin{proposition}
\label{prop:hellinger}
Let $\{R_1, \dots, R_\ell\}$ be a partition of $[0,1]^d$.
Let each $\rho_j$ be zero outside $R_j$ and satisfy
$\|\rho_j\|_\infty \leq 1$, $\int \rho_j f = 0$ and $\int \rho_j^2 f =\alpha_j$.
Further, denote $\alpha = \sum_j\|\rho_j\|_\infty$,
$s = n\alpha^2 \sup_j P(R_j)$ and
$c = n \sup_j \alpha_j$. Then,
\[
H^2(\Fn, \Pbarn) \leq \frac{n^2}{3} \sum_{j=1}^\ell \alpha_j^2.
\]
\end{proposition}
We also use the following technical result from \citet{krishnamurthy14renyi} and adapt it
to our setting. \\[\thmparaspacing]
\begin{proposition}[Taken from \cite{krishnamurthy14renyi}]
\label{prop:construction}
Let $R_1, \dots, R_\ell$ be a partition of $[0,1]^d$ into cubes of side length $\ell^{-1/d}$.
There exists functions $u_1,\dots,u_\ell$ such that,
\begin{align*}
&\supp(u_j) \subset \{x| B(x,\epsilon) \subset R_j\}, \;\;\;\;\;\;\;\;
\int u_j^2 \in \Theta(\ell^{-1}), \;\;\;\;\;\;\;\;
\int u_j = 0, \\
&\int \psi_f(x; f,g) u_j(x) = \int \psi_g(x; f,g) u_j(x) = 0, \;\;\;\;\;\;\;\;
\|D^r u_j\|_\infty \leq \ell^{|r|/d} \;\; \forall r \textrm{ s.t. } \sum_i r_i \leq s + 1
\end{align*}
where $B(x,\epsilon)$ denotes an $L_2$ ball around $x$ with radius $\epsilon$. Here
$\epsilon$ is any number between $0$ and $1$.
\begin{proof}
For this we use an orthonormal system of $q \;(>4)$ functions
on $(0,1)^d$ satisfying $\phi_1 = 1$, $\supp(\phi_j) \subset [\epsilon,
1-\epsilon]^d$ for any $\epsilon > 0$ and $\|D^r \phi_j\|_\infty \leq J$
for some $J < \infty$.
Now for any given functions $\eta_1, \eta_2$
we can find a function $\upsilon$ such that $\upsilon \in \spann(\{\phi_j\})$,
$\int \upsilon \phi_1 = \int \upsilon \eta_1 = \int \upsilon \eta_2 = 0$.
Write $\upsilon = \sum_j c_j\phi_j$ with $\sum_j c_j^2 = 1$. Then $D^r\upsilon = \sum_j c_j D^r \phi_j$, which
implies $\|D^r\upsilon\|_\infty \leq J\sqrt{q}$. Let $\nu(\cdot) =
\frac{1}{J\sqrt{q}} \upsilon(\cdot)$. Clearly, $\int \nu^2$ is upper and lower
bounded and $\|D^r\nu\|_\infty \leq 1$.
To construct the functions $u_j$, we map $(0,1)^d$ to $R_j$ by appropriately scaling
it. Then, $u_j(x) = \nu(\ell^{1/d}(x - \jb))$ where $\jb$ is the point corresponding to
$\zero$ after mapping.
Moreover let $\eta_1$ be $\psif(\cdot; f,g)$ restricted to $R_j$ (and scaled back to fit
$(0,1)^d$). Let $\eta_2$ be the same with $\psig$.
Now, $\int_{R_j} u_j^2 = \frac{1}{\ell} \int \nu^2 \in \Theta(\ell^{-1})$.
Also, clearly $\|D^r u_j\|_\infty \leq \ell^{|r|/d}$. All $5$ conditions above are satisfied.
\end{proof}
\end{proposition}
We now have all necessary ingredients to prove the lower bound.
\begin{proof}[Proof of Theorem~\ref{thm:lowerbound}]
To apply Theorem~\ref{thm:lecam} we will need to construct the set of alternatives
$\Lambda$ which contains tuples $\pqlambda$ that satisfy the conditions of
Theorem~\ref{thm:lecam}.
First apply Proposition~\ref{prop:construction} with $\ell = \ell_1$ to obtain the
index set $\Gammat = \{-1,1\}^{\ell_1}$ and the functions $u_1,\dots,u_{\ell_1}$.
Apply it again with $\ell = \ell_2$ to obtain the
index set $\Deltat = \{-1,1\}^{\ell_2}$ and the functions $v_1,\dots, v_{\ell_2}$.
Define $\Gamma,\Delta$ to be the following sets of functions, perturbed around
$f$ and $g$ respectively,
\begin{align*}
\Gamma &= \Big\{\pgamma = f + K_1 \sum_{j=1}^{\ell_1} \gamma_j u_j | \gamma \in \Gammat
\Big\}
\\
\Delta &= \Big\{\qdelta = g + K_2 \sum_{j=1}^{\ell_2} \delta_j v_j | \delta \in \Deltat
\Big\}
\end{align*}
Since the perturbations in Proposition~\ref{prop:construction}
are concentrated on the small $R_j$'s, unscaled they would violate
the \holder assumption.
The scalings $K_1$ and $K_2$ are necessary to shrink the perturbations and ensure that
$\pgamma, \qdelta \in \Sigma(s,L)$.
By following essentially an identical argument to
\cite{krishnamurthy14renyi} (Section E.2) we have that $\pgamma \in \Sigma(s,L)$ if
$K_1 \asymp \ell_1^{-s/d}$ and $\qdelta \in \Sigma(s,L)$ if $K_2 \asymp \ell_2^{-s/d}$.
We will set $\ell_1$ and $\ell_2$ later on to obtain the required rates.
For future reference denote
$\Pbarn = \frac{1}{|\Gamma|} \sum_{\gamma \in \Gamma} \Pgamma^n$ and
$\Qbarm = \left(\frac{1}{|\Delta|} \sum_{\delta \in\Delta} \Qdelta^m\right) $.
Now our set of alternatives are formed by the product of $\Gamma$ and $\Delta$
\[
\Lambda = \Gamma \times \Delta = \left\{ (\pgamma,\qdelta) | \pgamma \in \Gamma,
\qdelta \in \Delta \right\}
\]
First note that for any $\pqlambda = (\pgamma,\qdelta) \in \Lambda$,
by the second order functional
Taylor expansion we have,
\begin{align*}
T\pqlambda &= T(f,g) + \int \psif(x;f,g) \plambda + \int \psig(x; f,g) \qlambda
+ R_2
\end{align*}
By Lemma~\ref{thm:infFunDeriv} and the construction
the first order terms vanish since,
\[
\int \psif(x; f,g) \left(f + K_1 \sum_j \gamma_j u_j\right)
= K_1 \sum_j\gamma_j \int \psif(x;f,g) u_j = 0.
\]
The same is true for $\int \psig(x; f,g)$. The second order term can be upper bounded
by
\begin{align*}
R_2 &= \phi''\left(\int \nu(f^*,g^*)\right) \bigg(
\int \frac{\partial^2 \nu(f^*(x), g^*(x))}{\partial f^2(x)} (\plambda - f)^2 +
\int \frac{\partial^2 \nu(f^*(x), g^*(x))}{\partial g^2(x)} (\qlambda - g)^2 + \\
&\hspace{0.4in}
2\int \frac{\partial^2 \nu(f^*(x), g^*(x))}{\partial f(x)\partial g(x)}
(\plambda - f)(\qlambda - g) \bigg) \\
&\geq \sigma_{\min}\left( \|\plambda - f\|^2 + \|\qlambda - g\|^2 \right)
\;\geq \sigma_{\min}\left(K_1^2 + K_2^2\right)
\end{align*}
For the second step note that $(f^*,g^*)$ lies on the line segment between $(\plambda,
\qlambda)$ and $(f,g)$ and is therefore both upper and lower bounded. Therefore,
the Hessian evaluated at $(f^*, g^*)$ is strictly positive definite
with some minimum eigenvalue $\sigma_{\min}$.
For the third step we have used that $(\plambda - f, \qlambda - g) =
(K_1\sum_{j=1}^{\ell_1} \gamma_j u_j, K_2\sum_{j=1}^{\ell_2} \delta_j v_j)$
and that the $u_j$'s (and likewise the $v_j$'s) have disjoint supports with
$\sum_j \int u_j^2 \in \Theta(1)$; constants are absorbed into $\sigma_{\min}$.
This establishes the $2\beta$ separation between the null and the alternative as
required by Theorem~\ref{thm:lecam} with
$\beta = \sigmamin(K_1^2 + K_2^2)/2$. Precisely,
\[
T\pqlambda \geq T(f,g) + \Omega(\ell_1^{-2s/d} + \ell_2^{-2s/d})
\]
Now we need to bound the Hellinger separation, between $\FnGm$ and $\PQbar$.
First note that by our construction,
\[
\PQbar = \frac{1}{|\Lambda|} \sum_{\lambda \in \Lambda} \Plambda^n \times \Qlambda^m
= \left(\frac{1}{|\Gamma|} \sum_{\gamma \in \Gamma} \Pgamma^n\right) \times
\left(\frac{1}{|\Delta|} \sum_{\delta \in\Delta} \Qdelta^m\right)
= \Pbarn \times \Qbarm
\]
By the tensorization property of the Hellinger affinity we have,
\begin{align*}
H^2(\FnGm, \PQbar)
&= 2\left( 1 - \left( 1- \frac{H^2(\Fn,\Pbarn)}{2}\right)
\left( 1- \frac{H^2(\Gm,\Qbarm)}{2}\right) \right) \\
&\leq H^2(\Fn,\Pbarn) + H^2(\Gm,\Qbarm)
\end{align*}
We now apply Proposition~\ref{prop:hellinger} to bound each Hellinger divergence.
If we denote $\rho_j(\cdot) = K_1 u_j(\cdot)/f(\cdot)$ then we see that the
$\rho_j$'s satisfy the conditions of the proposition and further $\pgamma = f(1 +
\sum_j \gamma_j \rho_j)$ allowing us to use the bound.
Accordingly $\alpha_j = \int \rho_j^2 f \leq C K_1^2 /\ell_1 $ for some $C$.
Hence,
\[
H^2(\Fn,\Pbarn) \leq \frac{n^2}{3} \sum_{j=1}^{\ell_1} \alpha_j^2
\leq \frac{C n^2 K_1^4}{\ell_1} \in \bigO( n^2 \ell_1^{-\frac{4s+d}{d}}).
\]
A similar argument yields
$H^2(\Gm,\Qbarm) \in \bigO(m^2 \ell_2^{-\frac{4s+d}{d}})$.
If we pick $\ell_1 = n^{\frac{2d}{4s+d}}$ and $\ell_2 = m^{\frac{2d}{4s+d}}$
and hence $K_1 = n^{\frac{-2s}{4s+d}}$ and $K_2 = m^{\frac{-2s}{4s+d}}$, then we have
that the Hellinger separation is bounded by a constant.
\[
H^2(\FnGm,\PQbar)
\leq H^2(\Fn,\Pbarn) + H^2(\Gm,\Qbarm) \in \bigO(1)
\]
Further, with constant probability the error is at least $\beta \asymp K_1^2 + K_2^2
\asymp n^{\frac{-4s}{4s+d}} + m^{\frac{-4s}{4s+d}}$.
The first part of the lower bound for $\tau = 8s/(4s+d)$
is concluded by Markov's inequality,
\[
\frac{\EE\big[(\That - T(f,g))^2\big]}{ (n^{-\tau/2} + m^{-\tau/2})^2 }
\;\geq\; \PP\left(|\That - T(f,g)| > n^{-\tau/2} + m^{-\tau/2} \right) \;>\; c
\]
where we note that $(n^{-\tau/2} + m^{-\tau/2})^2 \asymp n^{-\tau} + m^{-\tau}$.
The $n^{-1} + m^{-1}$ lower bound is straightforward, as we cannot do better than the
parametric rate \cite{bickel1988estimating}.
See \cite{krishnamurthy14renyi} for a proof that uses a contradiction argument in
the setting $n=m$.
\end{proof}
\subsection{\loos Estimator}
\label{sec:looEstimatorTwoD}
\begin{proof}[Proof of Theorem~\ref{thm:convTwoLoo}]
Assume w.l.o.g that $n>m$.
As before, the bias follows via conditioning.
\begin{align*}
|\EE[\Tf] - T(f,g)| &= \big|\EE[ T(\fhatmi, \ghatmi) + \psif(X_i;\fhatmi, \ghatmi)
+ \psig(Y_i; \fhatmi, \ghatmi) - T(f,g) ]\big| \\
&\leq \EE\left[ \bigO(\|\fhatmi - f\|^2 + \|\ghatmi - g\|^2)\right]
\leq C_1 (n^{\mintwos} + m^{\mintwos})
\end{align*}
for some constant $C_1$.
To bound the variance we use the Efron-Stein inequality.
Consider the samples $\{X_1, \dots, X_n, Y_1, \dots, Y_m\}$ and
$\{X'_1, \dots, X_n, Y_1, \dots, Y_m\}$ and denote the estimates obtained by $\Tf$
and $\Tf'$ respectively.
Recall that for the Efron-Stein inequality we need to bound $\EE[(\Tf - \Tf')^2]$.
Note that,
\begin{align*}
& |\Tf - \Tf'| \leq
\frac{1}{n}|\psif(X_1;\fhatmii{1}, \ghatmii{1}) - \psif(X'_1;\fhatmii{1},\ghatmii{1})| +
\\
&\hspace{0.2in}
\frac{1}{n}\sum_{i\neq 1} |T(\fhatmi, \ghatmi) - T(\fhatmi',\ghatmi) | +
|\psif(X_i;\fhatmi, \ghatmi) -\psif(X_i;\fhatmi', \ghatmi) | +
|\psig(Y_i;\fhatmi, \ghatmi) -\psig(Y_i;\fhatmi', \ghatmi) |
\end{align*}
The first term can be bounded by $2\|\psif\|_\infty/n$ using the boundedness of
the influence function on bounded densities.
By using an argument similar to Equation~\eqref{eqn:tempone} in the one
distribution case, we can also bound each term inside the summation of the
second term via,
\[
|T(\fhatmi,\ghatmi) - T(\fhatmi', \ghatmi)| \leq
\frac{ \|K\|_\infty L_\phi L_\nu }{n}
\]
Then, by Jensen's inequality we have,
\begin{align*}
|\Tf - \Tf'|^2 &\leq \frac{8\|\psif\|^2_\infty}{n^2} +
\frac{4\|K\|^2_\infty L^2_\phi L^2_\nu}{n^2}
+\frac{4}{n^2} \left(\sum_{i\neq 1}
|\psif(X_i;\fhatmi, \ghatmi) -\psif(X_i;\fhatmi', \ghatmi) |\right)^2 \\
&\hspace{0.4in} +\frac{4}{n^2} \left(\sum_{i\neq 1}
|\psig(Y_i;\fhatmi, \ghatmi) -\psig(Y_i;\fhatmi', \ghatmi) |\right)^2
\end{align*}
The third and fourth terms can be bounded in expectation using a technique similar
to that used for the third term in Equation~\eqref{eqn:tempone}.
Precisely, by using Assumption~\ref{asm:infFunRegularity} and Cauchy-Schwarz
we get,
\begin{align*}
\EE\big[|\psif(X_i;\fhatmi,\ghatmi) - \psif(X_i;\fhatmi', \ghatmi)|
|\psif(X_j;\fhatmj,\ghatmj) - \psif(X_j;\fhatmj', \ghatmj)|\big]
&\leq \frac{2CB^2\|K\|^2_\infty}{n^2} \\
\EE\big[|\psig(Y_i;\fhatmi,\ghatmi) - \psig(Y_i;\fhatmi', \ghatmi)|
|\psig(Y_j;\fhatmj,\ghatmj) - \psig(Y_j;\fhatmj', \ghatmj)|\big]
&\leq \frac{2CB^2\|K\|^2_\infty}{n^2}
\end{align*}
This leads us to a $\bigO(1/n^2)$ bound for $\EE[(\Tf - \Tf')^2]$,
\begin{align*}
\EE[(\Tf-\Tf')^2] \leq \frac{ 8\|\psif\|^2_\infty
+ 4\|K\|^2_\infty L^2_\phi L^2_\nu +
16CB^2\|K\|^2_\infty}{n^2}
\end{align*}
Now consider, the set of samples
$\{X_1, \dots, X_n, Y_1, \dots, Y_m\}$ and
$\{X_1, \dots, X_n, Y'_1, \dots, Y_m\}$ and denote the estimates obtained by $\Tf$
and $\Tf'$ respectively. Note that some of the $Y$ instances are repeated but
each point occurs at most $n/m$ times.
The remaining argument is exactly the same except that we need to account for
this repetition. We have,
\begin{align*}
&|\Tf - \Tf'| \leq
\frac{n}{m} \frac{1}{n}|\psig(Y_1;\fhatmii{1}, \ghatmii{1}) -
\psig(Y'_1;\fhatmii{1},\ghatmii{1})| \,+\,
\frac{n}{m} \frac{1}{n}\sum_{i\neq 1} \Big(|T(\fhatmi, \ghatmi)
- T(\fhatmi,\ghatmi') | + \\
&\hspace{0.4in}
|\psif(X_i;\fhatmi, \ghatmi) -\psif(X_i;\fhatmi, \ghatmi') | \,+
|\psig(Y_i;\fhatmi, \ghatmi) -\psig(Y_i;\fhatmi, \ghatmi') | \Big)
\numberthis \label{eqn:temptwo}
\end{align*}
And hence,
\begin{align*}
\EE[(\Tf - \Tf')^2] &\leq
\frac{\|\psig\|^2_\infty}{m^2} +
\frac{n^2}{m^4} 4\|K\|^2_\infty L^2_\phi L^2_\nu
+ \bigO\left(\frac{n^4}{m^6}\right)
\end{align*}
where the last two terms of~\eqref{eqn:temptwo} are bounded by $\bigO(n^4/m^6)$
after squaring and then taking the expectation.
We have been a bit sloppy by bounding the difference by $n/m$ and not
$\ceil{n/m}$ but it is clear that this doesn't affect the rate.
Finally by the Efron Stein inequality we have
\[
\VV(\Tf) \in \bigO\left(\frac{1}{n} + \frac{n^4}{m^5}\right)
\]
which is $\bigO(1/n + 1/m)$ if $n$ and $m$ are of the same order. This is the
case if, for instance, there exist $\zeta_l, \zeta_u \in (0,1)$ such that
$\zeta_l \leq n/(n+m) \leq \zeta_u$.
Therefore the mean squared error is
$\EE[(\Tf - T(f,g))^2] \in \bigO(n^{-\frac{4s}{2s+d}} + m^{-\frac{4s}{2s+d}}
+ n^{-1} + m^{-1})$, which completes the proof.
\end{proof}
\section{Proofs of Results on Functionals of Two Distributions}
\label{sec:appMultipleDistros}
\subsection{\dss Estimator}
\label{sec:dssEstimator}
We generalise the results in Appendix~\ref{sec:appOneDistro} to analyse the \dss
estimator for two distributions. As before we begin with a series of lemmas. \\
\begin{lemma}
The influence functions have zero mean. I.e.
\begin{align}
\EE_{P_1}[\psi_1(X;P_1, P_2)] = 0 \hspace{0.1in} \forall P_2 \in \Mcal
\hspace{0.5in}
\EE_{P_2}[\psi_2(Y;P_1, P_2)] = 0 \hspace{0.1in} \forall P_1 \in \Mcal
\end{align}
\begin{proof}
$0 = T'_i(P_i-P_i;P_1, P_2) = \int \psi_i(u;P_1, P_2) \ud P_i(u)$ for $i = 1,2$.
\end{proof}
\label{thm:infFunDeriv}
\end{lemma}
\begin{lemma} [Bias \& Variance of \eqref{eqn:dsEstimatorTwoD}]
Let $\fhatone, \ghatone$ be consistent estimators for $f, g$ in the $L_2$ sense. Let
$T$ have bounded second derivatives and let
$\sup_x \psi_f(x; f, g)$, $\sup_x \psi_g(x; f, g)$, $\VV_{f}\psi_f(X;f',g')$,
$\VV_{g}\psi_g(Y;f',g')$ be bounded for all $f, f', g, g' \in \Mcal$.
Then the bias of $\Tfdsone$ conditioned on $\Xone, \Yone$
is $|T(f, g) - \EE[\Tfdsone|\Xone, \Yone]| \in \bigO( \|f - \fhatone\|^2 + \|g - \ghatone\|^2)$.
The conditional variance is $\VV[\Tfdsone|\Xone, \Yone] \in \bigO(n^{-1} + m^{-1})$.
\label{thm:biasvarTwoDistro}
\begin{proof}
First consider the bias conditioned on $\Xone, \Yone$,
\begin{align*}
&\EE \left[ \Tfdsone - T(f, g) | \Xone, \Yone \right] \\
&\hspace{0.2in}= \EE\left[ T(\fhatone, \ghatone)
+ \frac{2}{n}\nsumsechalf \psi_f(X_i; \fhatone, \ghatone)
+ \frac{2}{m}\msumsechalf \psi_g(Y_j; \fhatone, \ghatone)
- T(f, g) \Bigg| \Xone, \Yone \right] \\
&\hspace{0.2in}= T(\fhatone, \ghatone) + \int \psi_f(x; \fhatone, \ghatone) f(x) \ud \mu(x) +
\int \psi_g(x; \fhatone, \ghatone) g(x) \ud \mu(x) - T(f, g) \\
&\hspace{0.2in}= \bigO\left( \|f - \fhatone\|^2 + \|g - \ghatone\|^2 \right)
\end{align*}
The last step follows from the boundedness of the second derivatives from which
the first order functional Taylor expansion~\eqref{eqn:functaylortwoD} holds.
The conditional variance is,
\begin{align*}
\VV\left[ \Tfdsone| \Xone, \Yone \right]
&= \VV\left[ \frac{2}{n}\nsumsechalf \psi_f(X_i; \fhatone, \ghatone) \Big| \Xone \right]
+ \VV\left[ \frac{2}{m}\msumsechalf \psi_g(Y_j; \fhatone, \ghatone) \Big| \Yone \right]
\\
&= \frac{2}{n} \VV_f \left[ \psi_f(X; \fhatone, \ghatone) \right] +
\frac{2}{m} \VV_g \left[ \psi_g(Y; \fhatone, \ghatone) \right]
\in \bigO\left(\frac{1}{n} + \frac{1}{m}\right)
\end{align*}
The last step follows from the boundedness of the variance of the influence
functions.
\end{proof}
\end{lemma}
The following lemma characterises conditions for asymptotic normality. \\
\begin{lemma} [Asymptotic Normality] Suppose, in addition to the conditions in
Lemma~\ref{thm:biasvarTwoDistro} above and the regularity
assumption~\ref{asm:infFunRegularity} we also have
$\|\fhat - f\|\in o_P(n^{-1/4}), \|\ghat - g\| \in o_P(m^{-1/4})$ and
$\psif, \psig \neq \zero$.
Then we have asymptotic Normality for $\Tfds$,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\sqrt{N}(\Tfds - T(f,g)) \indistribution \numberthis
\label{eqn:asympNormalTwoDistro} \\
&\hspace{0.05in}
\Ncal \left( 0, \frac{1}{\zeta} \VV_{f} \left[\psi_f(X; f, g)\right]
+ \frac{1}{1 - \zeta} \VV_{g} \left[\psi_g(Y; f, g)\right] \right)
\end{align*}
}
{
\begin{align*}
&\sqrt{N}(\Tfds - T(f,g)) \indistribution \numberthis
\label{eqn:asympNormalTwoDistro}
\Ncal \left( 0, \frac{1}{\zeta} \VV_{f} \left[\psi_f(X; f, g)\right]
+ \frac{1}{1 - \zeta} \VV_{g} \left[\psi_g(Y; f, g)\right] \right)
\end{align*}
}
\label{thm:asympNormalTwoDistro}
\begin{proof}
We begin with the following expansions around
$(\fhatone, \ghatone)$,
\begin{align*}
&T(f,g) = T(\fhatone, \ghatone) + \int \psi_f(u; \fhatone, \ghatone) f(u) \ud u
+ \int \psi_g(u; \fhatone, \ghatone) g(u) \ud u \;+ \\
&\hspace{1in} \bigO\left( \|f - \fhatone\|^2 + \|g - \ghatone\|^2 \right)
\end{align*}
Consider $\Tfdsone$. We can write
\begingroup
\allowdisplaybreaks
\begin{align*}
&\sqrt{\frac{N}{2}}(\Tfdsone - T(f,g))
\numberthis \label{eqn:cltExpansion} \\
&\hspace{0.3in}=
\sqrt{\frac{N}{2}}\left( T(\fhatone, \ghatone) +
\frac{2}{n}\nsumsechalf \psi_f(X_i;f, g) +
\frac{2}{m}\msumsechalf \psi_g(Y_j;f, g) - T(f, g) \right) \\
&\hspace{0.3in}
= \sqrt{\frac{N}{2}}\Bigg( \frac{2}{n} \nsumsechalf \psi_f(X_i; \fhatone, \ghatone) +
\frac{2}{m} \msumsechalf \psi_g(Y_j; \fhatone, \ghatone)
- \EE_f\left[\psi_f(X;\fhatone,\ghatone)\right] \\
&\hspace{0.6in}
- \EE_g\left[\psi_g(Y;\fhatone,\ghatone)\right]
\Bigg) + \sqrt{N} O\left( \|f - \fhatone\|^2 + \|g - \ghatone\|^2
\right) \\
&\hspace{0.3in}=\sqrt{\frac{2N}{n}} n^{-1/2} \nsumsechalf \left(
\psi_f(X_i; \fhatone, \ghatone) - \psi_f(X_i; f, g) -
(\EE_f\psi_f(X;\fhatone,\ghatone) -
\EE_f\psi_f(X;f,g) ) \right) + \\
&\hspace{0.4in}
\sqrt{\frac{2N}{m}} m^{-1/2} \msumsechalf \left(
\psi_g(Y_j; \fhatone, \ghatone) - \psi_g(Y_j; f, g) -
(\EE_g\psi_g(Y;\fhatone,\ghatone) -
\EE_g\psi_g(Y;f,g) ) \right) + \\ &\hspace{0.4in}
\sqrt{\frac{2N}{n}} n^{-1/2} \nsumsechalf \psi_f(X_i; f, g)\,+\,
\sqrt{\frac{2N}{m}} m^{-1/2} \msumsechalf \psi_g(Y_j; f, g)\,+\, \\
&\hspace{0.6in}
\sqrt{N} \bigO\left( \|f - \fhatone\|^2 + \|g - \ghatone\|^2 \right)
\end{align*}
\endgroup
The fifth term is $o_P(1)$ by the assumptions. The first and second terms are
also $o_P(1)$ . To see this, denote the first term by $Q_n$.
\begin{align*}
\VV\left[Q_n|\Xone, \Yone \right] &= \frac{N}{n}\VV_f \left[
\psi_f(X; \fhatone, \ghatone) - \psi_f(X; f, g) - (\EE_f\psi_f(X;\fhatone,\ghatone) -
\EE_f\psi_f(X;f,g) ) \,\Big|\, \Xone, \Yone \right] \\
&\leq \frac{N}{n} \EE_f\left[ \left(\psi_f(X; \fhatone, \ghatone) -
\psi_f(X; f, g)\right)^2 \right] \rightarrow 0
\end{align*}
where we have used the regularity assumption~\ref{asm:infFunRegularity}.
Further,
$\PP(|Q_n| > \epsilon | \Xone, \Yone) \leq \VV[Q_n|\Xone, \Yone]
/ \epsilon^2 \rightarrow 0$, hence the first term is $o_P(1)$. The proof for the
second term is similar.
Therefore we have,
\begin{align*}
\sqrt{\frac{N}{2}}(\Tfdsone - T(f,g)) =
\sqrt{\frac{2N}{n}} n^{-1/2} \nsumsechalf \psi_f(X_i; f, g)\,+\,
\sqrt{\frac{2N}{m}} m^{-1/2} \msumsechalf \psi_g(Y_j; f, g)\,+\,
o_P(1)
\end{align*}
Using a similar argument on $\Tfdstwo$ we get,
\begin{align*}
\sqrt{\frac{N}{2}}(\Tfdstwo - T(f,g)) =
\sqrt{\frac{2N}{n}} n^{-1/2} \sum_{i=1}^{n/2} \psi_f(X_i; f, g)\,+\,
\sqrt{\frac{2N}{m}} m^{-1/2} \sum_{j=1}^{m/2} \psi_g(Y_j; f, g)\,+\,
o_P(1)
\end{align*}
Therefore,
\begin{align*}
\sqrt{N}(\Tfds - T(f,g)) &=
\frac{1}{\sqrt{2}} \left(
\sqrt{\frac{N}{2}}\left(\Tfdsone - T(f,g)\right)\,+\,
\sqrt{\frac{N}{2}}\left(\Tfdstwo - T(f,g)\right)
\right) \\
&=
\sqrt{\frac{N}{n}}\, n^{-1/2} \sum_{i=1}^{n} \psi_f(X_i; f, g)\,+\,
\sqrt{\frac{N}{m}}\, m^{-1/2} \sum_{j=1}^{m} \psi_g(Y_j; f, g)\,+\,
o_P(1)
\end{align*}
By the CLT and
Slutsky's theorem this converges weakly to
the RHS of~\eqref{eqn:asympNormalTwoDistro}.
\end{proof}
\end{lemma}
We are now ready to prove the rates of convergence for the \dss estimator
in the \holder class. \\
\begin{proof}[Proof of Theorem~\ref{thm:convOneDS}]
We first note that in a \holder class, with $n$ samples the KDE achieves the rate
$\EE\|p - \phat\|^2 \in O(n^{\frac{-2s}{2s+d}})$. Then the bias for the
preliminary estimator $\Tfdsone$ is,
\begin{align*}
\EE \left[ \Tfdsone - T(f, g) | \Xone, \Yone \right]
&= \EE_{\Xone, \Yone}\left[ O\left( \|f - \fhatone\|^2 + \|g - \ghatone\|^2 \right)
\right] \\
&\in O\left(n^{\frac{-2s}{2s+d}} + m^{\frac{-2s}{2s+d}} \right)
\end{align*}
The same could be said about $\Tfdstwo$.
It therefore follows that
\[
\EE\left[ \Tfds-T\right] = \EE\left[ \frac{1}{2}\left(\Tfdsone-T(f,g)\right) +
\frac{1}{2}\left(\Tfdstwo-T(f,g)\right) \right]
\in O\left(n^{\frac{-2s}{2s+d}} + m^{\frac{-2s}{2s+d}} \right)
\]
For the variance, we use Lemma~\ref{thm:biasvarTwoDistro} and the Law of total
variance to first control $\VV\Tfdsone$,
\begin{align*}
\VV\left[ \Tfdsone \right]
&= \frac{1}{n} \EE\left[\VV_f \left[ \psi_f(X; \fhatone, \ghatone)|\Xone \right]\right]+
\frac{1}{m} \EE\left[\VV_g \left[ \psi_g(Y; \fhatone, \ghatone)|\Yone \right]\right]
\\ &\hspace{0.3in} + \VV
\left[ \EE\left[ \Tfdsone - T(f,g) | \Xone, \Yone \right]
\right] \\
&\in O\left(\frac{1}{n} + \frac{1}{m}\right) +
\EE \left[O\left( \|f - \fhatone\|^4 + \|g - \ghatone\|^4 \right)
\right] \\
&\in O\left(n^{-1} + m^{-1}+ n^{\frac{-4s}{2s+d}} + m^{\minfours} \right)
\end{align*}
In the second step we used the fact that $\VV Z \leq \EE Z^2$.
Further,
$\EE_{\Xone}\VV_f \left[ \psi_f(X; \fhatone, \ghatone) \right]$,
$\EE_{\Yone}\VV_g \left[ \psi_g(Y; \fhatone, \ghatone) \right]$ are bounded since
$\psi_f$, $\psi_g$ are bounded. Then by applying the Cauchy-Schwarz inequality
as before we get $\VV\Tfds \in
O\left(n^{-1} + m^{-1}+ n^{\frac{-4s}{2s+d}} + m^{\minfours} \right)$.
Finally when $s > d/2$, we have the required $o_P(n^{-1/4}), o_P(m^{-1/4})$
rates on $\|\fhat - f\|$ and $\|\ghat - g\|$
which gives us asymptotic normality.
\end{proof}
\section{Analysis of \loos Estimator}
\label{sec:convOneLoo}
\begin{proof}[Proof of Theorem~\ref{thm:convOneLoo}]
First note that we can bound the mean squared error via the bias and variance terms.
\[
\EE [(\Tf-T(f))^2] \leq |\EE\Tf - T(f)|^2 + \EE[(\Tf - \EE\Tf)^2]
\]
The bias is bounded via a straightforward conditioning argument.
\begin{align*}
|\EE\Tf-T(f)| &= \left|\EE[ T(\fhatmi) + \psi(X_i;\fhatmi) - T(f) ]\right|
= \left|\EE_{\Xmi}\left[ \EE_{X_i}[ T(\fhatmi) + \psi(X_i;\fhatmi) - T(f)] \right]\right| \\
&= \EE_{\Xmi}\left[ \bigO(\|\fhatmi - f\|^2) \right] \;
\leq C_1n^{\frac{-2s}{2s+d}}
\numberthis\label{eqn:biasBound}
\end{align*}
for some constant $C_1$.
The last step follows by observing that the KDE achieves the rate
$n^{\frac{-2s}{2s+d}}$ in integrated squared error.
To bound the variance we use the Efron-Stein inequality.
For this consider two sets of samples $\Xn=\{X_1, X_2, \dots, X_n\}$ and
$\Xn'=\{X'_1, X_2, \dots, X_n\}$ which are the same except for the
first point. Denote the estimators obtained using $\Xn$ and $\Xn'$ by $\Tf$
and $\Tf'$ respectively. To apply Efron-Stein we shall bound $\EE[(\Tf - \Tf')^2]$.
Note that,
\begin{align*}
|\Tf - \Tf'| &\leq
\frac{1}{n}|\psi(X_1;\fhatmii{1}) - \psi(X'_1;\fhatmii{1})| +
\frac{1}{n}\sum_{i\neq 1} |T(\fhatmi) - T(\fhatmi') | \\
&\hspace{0.6in}
+ \frac{1}{n}\sum_{i\neq 1} |\psi(X_i;\fhatmi) -\psi(X_i;\fhatmi') |
\numberthis \label{eqn:diffdecomp}
\end{align*}
The first term can be bounded by $2\|\psi\|_\infty/n$ using the boundedness of
$\psi$.
Each term inside the summation in the second term
in~\eqref{eqn:diffdecomp} can be bounded via,
\begin{align*}
|T(\fhatmi) - T(\fhatmi')| &\leq
L_\phi \int |\nu(\fhatmi) - \nu(\fhatmi')| \leq
L_\phi L_\nu \int |\fhatmi - \fhatmi'| \numberthis \label{eqn:tempone} \\
& \leq L_\phi L_\nu \int \frac{1}{nh^d}
\Big| K\left(\frac{X_1 - u}{h}\right) -
K\left(\frac{X'_1 - u}{h}\right) \Big| \ud u
\leq \frac{\|K\|_\infty L_\phi L_\nu}{n}.
\end{align*}
The substitution $(X_1-u)/h = z$ for integration eliminates the $1/h^d$.
Here $L_\phi, L_\nu$ are the Lipschitz constants of $\phi, \nu$.
To apply Efron-Stein we need to bound the expectation of the LHS over
$X_1, X'_1, X_2, \dots, X_n$.
Since the first two terms in~\eqref{eqn:diffdecomp}
are bounded pointwise by $\bigO(1/n)$ they are also
bounded in expectation.
By Jensen's inequality we can write,
\begin{align*}
|\Tf - \Tf'|^2 &\leq
\frac{12\|\psi\|^2_\infty}{n^2} +
\frac{ 3 \|K\|^2_\infty L^2_\phi L^2_\nu}{n^2}
+ \frac{3}{n^2}\left(\sum_{i\neq 1}|\psi(X_i;\fhatmi) -\psi(X_i;\fhatmi') |
\right)^2
\numberthis \label{eqn:esdecomp}
\end{align*}
For the third term, such a pointwise bound does not hold, so
we will directly bound its expectation.
\begin{align*}
\sum_{1\neq i, j}
\EE \Big[|\psi(X_i;\fhatmi) -\psi(X_i;\fhatmi') |
|\psi(X_j;\fhatmj) -\psi(X_j;\fhatmj') | \Big]
\numberthis \label{eqn:thirdterm}
\end{align*}
We then have,
\begin{align*}
\EE \big[(\psi(X_i;\fhatmi) -\psi(X_i;\fhatmi') )^2\big]
&\leq \EE_{X_1,X'_1}\left[C \int |\fhatmi - \fhatmi'|^2 \right] \\
&\leq CB^2 \int \frac{1}{n^2h^{2d}}
\left( K\left(\frac{x_1 - u}{h}\right) -
K\left(\frac{x'_1 - u}{h}\right) \right)^2 \ud u \ud x_1 \ud x'_1 \\
&\leq \frac{2 CB^2 \|K\|^2_\infty }{n^2}
\end{align*}
In the first step we have used Assumption~\ref{asm:infFunRegularity} and in the
last step the substitutions $(x_1 - u)/h = z$ and $(x'_1 - u)/h = z'$
remove the $1/h^d$ twice.
Then, by applying the Cauchy-Schwarz inequality, each term inside the summation
in~\eqref{eqn:thirdterm} is $\bigO(1/n^2)$;
since there are $(n-1)^2$ such terms, the sum is $\bigO(1)$.
Combining all these results with equation~\eqref{eqn:esdecomp} we get,
\[
\EE[(\Tf-\Tf')^2] \in \bigO\left(\frac{1}{n^2} \right)
\]
Now, by applying the Efron-Stein inequality we get $\VV(\Tf) \leq \frac{C}{2n}$.
Therefore the mean squared error $\EE[(T - \Tf)^2] \in \bigO(n^{-\frac{4s}{2s+d}}
+ n^{-1})$ which completes the proof.
\end{proof}
\section{Review: \dss Estimator on a Single Distribution}
\label{sec:appOneDistro}
This section is intended to be a review of the data split estimator used in
\cite{robins09quadraticvm}. The estimator was originally analysed in the
semiparametric setting. However, in order to be self-contained we provide an
analysis that directly uses the Von Mises Expansion.
We state our main result below. \\
\begin{theorem}
Suppose $f\in\Sigma(s,L,B,B')$ and $\psi$ satisfies
Assumption~\ref{asm:infFunRegularity}. Then,
$\;\EE[(\Tfds - T(f))^2]$ is
$\bigO(n^{\frac{-4s}{2s+d}})$ when $s<d/2$ and $\bigO(n^{-1})$ when $s>d/2$.
Further, when $s>d/2$ and when $\psi \neq \zero$, $\Tfds$ is asymptotically
normal,
\begin{align*}
\sqrt{n}(\Tfds - T(f)) \indistribution
\Ncal \left( 0, \VV_{f} \left[\psi(X; f)\right] \right).
\end{align*}
\label{thm:convOneDS}
\end{theorem}
We begin the proof with a series of technical lemmas. \\
\begin{lemma}
The influence function has zero mean, i.e. $\EE_P[\psi(X;P)] = 0$.
\begin{proof}
$0 = T'(P-P;P) = \int \psi(x;P) \ud P(x)$.
\end{proof}
\label{lem:infFunMean}
\end{lemma}
Now we prove the following lemma on the
preliminary estimator $\Tfdsone$. \\
\begin{lemma}[Conditional Bias and Variance]
\label{thm:biasvar}
Let $\fhatone$ be a consistent estimator for $f$ in the $L_2$ metric. Let $T$ have
bounded second derivatives and let $\sup_x \psi(x; f)$ and $\VV_{X\sim
f}\psi(X;g)$
be bounded for all $g \in \Mcal$. Then,
the bias of the preliminary estimator $\Tfdsone$~\eqref{eqn:dsEstimator} conditioned on
$\Xone$ is $\bigO(\|f - \fhatone\|_2^2)$. The conditional
variance is $\bigO(1/n)$.
\begin{proof}
First consider the conditional bias,
\begingroup
\allowdisplaybreaks
\begin{align*}
\EE_{\Xtwo} \left[ \Tfdsone - T(f) | \Xone \right] &=
\EE_{\Xtwo} \left[ T(\fhatone) + \frac{2}{n} \nsumsechalf \psi(X_i; \fhatone)
- T(f) | \Xone \right] \\
&= T(\fhatone) + \EE_f\left[ \psi(X; \fhatone) \right] - T(f) \in \bigO(\|\fhatone -
f\|_2^2). \numberthis
\end{align*}
\endgroup
The last step follows from the boundedness of the second derivative from which
the
first order functional Taylor expansion~\eqref{eqn:functaylor} holds.
The conditional variance is,
\begin{equation}
\VV_\Xtwo \left[ \Tfdsone|\Xone \right] =
\VV_\Xtwo \left[ \frac{2}{n} \nsumsechalf \psi(X; \fhatone)\Big|\Xone \right] =
\frac{2}{n} \VV_f \left[ \psi(X; \fhatone) \right] \in \bigO(n^{-1}).
\end{equation}
\end{proof}
\end{lemma}
\begin{lemma}[Asymptotic Normality]
\label{thm:asympnormal}
Suppose in addition to the conditions in the lemma above we also have
Assumption~\ref{asm:infFunRegularity} and
$\|\fhatone - f \|_2 \in o_P(n^{-1/4})$ and
$\psi \neq \zero$.
Then,
\[\sqrt{n}(\Tfds - T(f)) \indistribution \Ncal(0, \VV_f \psi(X; f)).
\]
\begin{proof}
We begin with the following expansion around $\fhatone$,
\begin{align}
T(f) &= T(\fhatone) + \int \psi(u; \fhatone) f(u) \ud \mu(u) + O(\|\fhatone - f\|^2).
\label{eqn:vmeTfhat}
\end{align}
First consider $\Tfdsone$. We can write
\begingroup
\allowdisplaybreaks
\begin{align*}
&\sqrt{\frac{n}{2}} \left( \Tfdsone - T(f) \right) =
\sqrt{\frac{n}{2}} \left( T(\fhatone) +
\frac{2}{n} \nsumsechalf \psi(X_i; \fhatone) - T(f) \right)
\numberthis \label{eqn:cltdecomp} \\
&\hspace{0.3in}= \sqrt{\frac{2}{n}}\nsumsechalf\left[
\psi(X_i; \fhatone) - \psi(X_i; f) -
\left(\int \psi(u;\fhatone) f(u) \ud u - \int \psi(u;f) f(u) \ud u\right)
\right] \\
&\hspace{0.5in} + \sqrt{\frac{2}{n}} \nsumsechalf \psi(X_i; f) +\;\;\;
\sqrt{n}\bigO\left( \|\fhatone - f\|^2 \right).
\end{align*}
\endgroup
In the second step we used the VME in~\eqref{eqn:vmeTfhat}. In the third step,
we added and subtracted $\sum_i \psi(X_i; f)$ and also added $\EE\psi(\cdot; f)
= 0$. Above, the third term is $o_P(1)$ as $\|\fhatone - f\|_2 \in o_P(n^{-1/4})$.
The first term which we shall denote by $Q_n$ can also be
shown to be $o_P(1)$ via Chebyshev's inequality. It is sufficient to show
$\PP(|Q_n|>\epsilon | \Xone) \rightarrow 0$. First note that,
\begingroup
\allowdisplaybreaks
\begin{align*}
\VV [Q_n|\Xone] &=
\VV \left[\sqrt{\frac{2}{n}}\nsumsechalf\left(
\psi(X_i; \fhatone) - \psi(X_i; f) -
\left(\int \psi(u;\fhatone) f(u) \ud u - \int \psi(u;f) f(u) \ud u\right)
\right) \Big| \Xone \right] \\
&= \VV \left[ \psi(X; \fhatone) - \psi(X; f) -
\left(\int \psi(u;\fhatone) f(u) \ud u - \int \psi(u;f) f(u) \ud u\right)
\Big| \Xone \right] \\
&\leq \EE \left[ \left(\psi(X; \fhatone) - \psi(X; f)\right)^2 \right]
\;\in\; \bigO(\|\fhatone - f\|^2)
\rightarrow 0, \numberthis
\end{align*}
\endgroup
where the last step follows from Assumption~\ref{asm:infFunRegularity}.
Now, $\PP(|Q_n|>\epsilon | \Xone) \leq \VV(Q_n|\Xone) /\epsilon^2 \rightarrow 0$.
Hence we have
\[
\sqrt{\frac{n}{2}}(\Tfdsone - T(f)) =
\sqrt{\frac{2}{n}} \nsumsechalf \psi(X_i; f) + o_P(1)
\]
We can similarly show
\[
\sqrt{\frac{n}{2}}(\Tfdstwo - T(f)) =
\sqrt{\frac{2}{n}} \sum_{i=1}^{n/2} \psi(X_i; f) + o_P(1)
\]
Therefore, by the CLT and Slutsky's theorem,
\begin{align*}
\sqrt{n}(\Tfds - T(f)) &= \frac{1}{\sqrt{2}}
\left( \sqrt{\frac{n}{2}}(\Tfdsone - T(f)) +
\sqrt{\frac{n}{2}}(\Tfdstwo - T(f)) \right) \\
&= n^{-1/2} \nsumwhole \psi(X_i; f) + o_P(1) \;\;
\indistribution \Ncal(0, \VV_f \psi(X;f))
\end{align*}
\end{proof}
\end{lemma}
We are now ready to prove Theorem~\ref{thm:convOneDS}. Note that the brunt of the
work for the \dss estimator was in analysing the preliminary estimator $\Tfdsone$.
\begin{proof}[Proof of Theorem~\ref{thm:convOneDS}]
We first note that in a \holder class, with $n$ samples the KDE achieves the rate
$\EE\|p - \phat\|^2 \in O(n^{\frac{-2s}{2s+d}})$. Then the bias of $\Tfds$ is,
\begin{align*}
\EE_{\Xone} \EE_{\Xtwo} \left[ \Tfdsone - T(f) | \Xone \right]
&= \EE_{\Xone}\left[ O\left( \|f - \fhatone\|^2 \right) \right]
\in O\left(n^{\frac{-2s}{2s+d}} \right)
\end{align*}
It immediately follows that $\EE\left[\Tfds-T(f)\right] \in
O\left(n^{\frac{-2s}{2s+d}} \right)$.
For the variance, we use Lemma~\ref{thm:biasvar} and the Law of total
variance for $\Tfdsone$,
\begin{align*}
\VV_{\Xonetwo}\left[ \Tfdsone \right]
&= \frac{1}{n} \EE_{\Xone}\VV_f \left[ \psi(X; \fhatone) \right]
+ \VV_{\Xone} \left[ \EE_{\Xtwo}\left[ \Tfdsone - T(f) | \Xone \right]
\right] \\
&\in O\left(\frac{1}{n} \right) +
\EE_{\Xone} \left[O\left( \|f - \fhatone\|^4 \right) \right] \\
&\in O\left( n^{-1} + n^{\frac{-4s}{2s+d}} \right)
\end{align*}
In the second step we used the fact that $\VV Z \leq \EE Z^2$.
Further,
$\EE_{\Xone}\VV_f \left[ \psi(X; \fhatone) \right]$ is bounded since
$\psi$ is bounded.
The variance of $\Tfds$ can be bounded using the Cauchy-Schwarz inequality,
\begin{align*}
\VV\left[\Tfds\right] &= \VV\left[ \frac{\Tfdsone+ \Tfdstwo}{2} \right]
= \frac{1}{4}\left( \VV\Tfdsone + \VV\Tfdstwo + 2\Covar(\Tfdsone, \Tfdstwo) \right)\\
&\leq \max\left( \VV\Tfdsone, \VV\Tfdstwo\right)
\in O\left( n^{-1} + n^{\frac{-4s}{2s+d}} \right)
\end{align*}
Finally for asymptotic normality, when $s > d/2$, $\EE\|\fhatone - f\|_2
\in \bigO(n^{\frac{-s}{2s+d}})
\in o\,(n^{-1/4})$.
\end{proof}
\section{Proofs of Results in Section~\ref{sec:workedExample}}
\label{sec:appWorkedExample}
\begin{proof}[Proof of Proposition~\ref{thm:tsalliscdInfFun}]
Recall that we can derive the influence functions via
$\psixz(X,Z_1; p) = \tsalliscd'_{XZ}(\delta_{X,Z_1}-\pxz; p)$,
$\psiyz(Y,Z_2; p) = \tsalliscd'_{YZ}(\delta_{Y,Z_2}-\pyz; p)$
where $\tsalliscd'_{XZ}, \tsalliscd'_{YZ}$ are the \gateaux derivatives of $\tsalliscd$
w.r.t $\pxz, \pyz$ respectively. Hence,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psixz(X,Z_1) = \\
& \frac{1}{\alpha-1} \frac{\partial}{\partial t}
\int ((1-t)\pxz + t\delta_{X Z_1})^\alpha \pyz^\beta \Big|_{t=0} \\
&= \frac{\alpha}{\alpha-1}\int\pxz^{\alpha-1} \pyz^\beta (\delta_{X Z_1} - \pxz)
\end{align*}
}
{
\begin{align*}
\psixz(X,Z_1)
&= \frac{1}{\alpha-1} \frac{\partial}{\partial t}
\int ((1-t)\pxz + t\delta_{X Z_1})^\alpha \pyz^\beta \Big|_{t=0} \\
&= \frac{\alpha}{\alpha-1}\int\pxz^{\alpha-1} \pyz^\beta (\delta_{X Z_1} - \pxz)
\end{align*}
}
\endgroup
from which the result follows. Deriving $\psiyz$ is similar.
Alternatively, we can directly show that $\psixz, \psiyz$ in Equation~\eqref{eqn:tsalliscdInfFun}
satisfy Definition~\ref{def:infFun}.
\end{proof}
To prove Corollary~\ref{thm:condTsallisAsympNormal} we will first need the
following Lemma. \\
\begin{lemma}
\label{lem:tsalliscdInfFunOrder}
Let $\phat = (\pxzhat, \pyzhat)$ be
a consistent estimator for $p=(\pxz, \pyz)$. Then
$\EE_{XZ}\left[\left(\psixz(V; \phat) - \psixz(V; p)\right)^2\right]$,
$\EE_{YZ}\left[\left(\psiyz(W; \phat) - \psiyz(W; p)\right)^2\right]$
$\in O(\|\pxz - \pxzhat\|^2 + \|\pyz - \pyzhat\|^2)$.
\begin{proof}
We will show this for $\EE_{XZ}\left[( \psixz(V;\phat) - \psixz(V;p) )^2
\right]$. The proof for $\EE_{YZ}\left[( \psiyz(W;\phat) - \psiyz(W;p) )^2
\right]$ is similar.
\begin{align*}
& \int \left( \psixz(u; \pxz, \pyz) - \psixz(u; \pxzhat, \pyzhat) \right)^2 \pxz\\
&\hspace{0.2in}= \frac{\alpha^2}{(1-\alpha)^2}
\int \left( \pxz^{\alpha-1}\pyz^\beta - \int \pxz^\alpha \pyz^\beta
-\left[ \pxzhat^{\alpha-1}\pyzhat^\beta - \int \pxzhat^\alpha \pyzhat^\beta \right]
\right)^2 \pxz \\
&\hspace{0.2in}\leq 2\frac{\alpha^2}{(1-\alpha)^2} \left[
\int \left(\pxz^{\alpha-1}\pyz^\beta - \pxzhat^{\alpha-1}\pyzhat^\beta \right)^2\pxz +
\left( \int \pxz^\alpha \pyz^\beta - \int\pxzhat^\alpha \pyzhat^\beta \right)^2
\right] \\
&\hspace{0.2in}\leq 2\frac{\alpha^2}{(1-\alpha)^2} \left[
\int \left(\pxz^{\alpha-1}\pyz^\beta - \pxzhat^{\alpha-1}\pyzhat^\beta \right)^2\pxz +
\int \left( \pxz^\alpha \pyz^\beta - \pxzhat^\alpha \pyzhat^\beta \right)^2
\right] \\
&\hspace{0.2in} \leq 4\frac{\alpha^2}{(1-\alpha)^2} \bigg[
\|\pyz^{\beta}\|_\infty^2 \int (\pxz^{\alpha-1} - \pxzhat^{\alpha-1})^2 +
\|\pxzhat^{\alpha-1}\|_\infty^2 \int (\pyz^\beta - \pyzhat^\beta)^2 + \\
&\hspace{0.6in}
\|\pyz^{\beta}\|_\infty^2 \int (\pxz^{\alpha} - \pxzhat^{\alpha})^2 +
\|\pxzhat^{\alpha}\|_\infty^2 \int (\pyz^\beta - \pyzhat^\beta)^2
\bigg] \\
&\hspace{0.2in}\in
O\left( \|\pxz -\pxzhat\|^2 \right) +
O\left( \|\pyz -\pyzhat\|^2 \right) \rightarrow 0
\end{align*}
where, in the second and fourth steps we have used Jensen's inequality.
The last step follows from the boundedness of all our densities and estimates
and from Lemma~\ref{lem:densitypowers}.
\end{proof}
\end{lemma}
We are now ready to prove asymptotic normality for the Conditional Tsallis
Estimator.
\begin{proof}[Proof of Corollary~\ref{thm:condTsallisAsympNormal}]
\vspace{-0.1in}
Recall that we need to verify the following: \begin{enumerate}
\item $\|\pxzhat - \pxz\|, \|\pyzhat - \pyz\| \in o_P(n^{-1/4})$
\item $\psixz(V; p), \psiyz(W; p)$ have bounded variance.
\item If $\|\pxzhat - \pxz\|, \|\pyzhat - \pyz\| \rightarrow 0$, then
$\EE_{XZ} \left[\left(\psixz(V; \phat) -
\psixz(V; p)\right)^2\right] \rightarrow 0$
and \\
$\EE_{YZ}\left[\left(\psiyz(W; \phat) -
\psiyz(W; p)\right)^2\right] \rightarrow 0$
\item $\psixz, \psiyz \neq \zero$.
\end{enumerate}
The first condition is true whenever $s > d/2$.
The second condition follows from the boundedness of the densities.
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\VV_{\pxz} \psixz(V;\pxz, \pyz) \\
&\leq \frac{\alpha^2}{ (\alpha-1)^2 }
\EE_{\pxz} \left[\pxz^{2\alpha-2}(X,Z_1) \pyz^{2\beta}(X,Z_1) \right] \\
&= \frac{\alpha^2}{ (\alpha-1)^2 }\int \pxz^{2\alpha-1} \pyz^{2\beta} < \infty
\end{align*}
}
{
\begin{align*}
\VV_{\pxz} \psixz(V;\pxz, \pyz)
&\leq \frac{\alpha^2}{ (\alpha-1)^2 }
\EE_{\pxz} \left[\pxz^{2\alpha-2}(X,Z_1) \pyz^{2\beta}(X,Z_1) \right] \\
&= \frac{\alpha^2}{ (\alpha-1)^2 }\int \pxz^{2\alpha-1} \pyz^{2\beta} < \infty
\end{align*}
}
\endgroup
We can bound $\VV_{\pyz}\psiyz$ similarly. The third condition follows from
Lemma~\ref{lem:tsalliscdInfFunOrder}.
For the fourth condition, note that when $\pxz = \pyz$,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psi_{XZ}(X,Z_1; \pxz, \pxz) \\
&= \frac{\alpha}{\alpha -1} \left(
\pxz^{\alpha+\beta-1}(X,Z_1) - \int \pxz\right)
= 0,
\end{align*}
}
{
\begin{align*}
\psi_{XZ}(X,Z_1; \pxz, \pxz)
&= \frac{\alpha}{\alpha -1} \left(
\pxz^{\alpha+\beta-1}(X,Z_1) - \int \pxz\right)
= 0,
\end{align*}
}
and similarly $\psi_{YZ} = \zero$.
Otherwise, $\psixz$ depends explicitly on $X,Z$ and is nonzero.
Therefore we have asymptotic normality away from $\pxz = \pyz$.
\end{proof}
\section{Comparison with Other Approaches}
\label{sec:comparison}
Estimation of statistical functionals under nonparametric assumptions
has received considerable attention over the last few decades. A
large body of work has focused on estimating the Shannon entropy;
\citet{beirlant1997nonparametric} gives a nice review of results and
techniques. More recent work in the single-distribution setting
includes estimation of R\'{e}nyi and Tsallis
entropies~\citep{leonenko2010statistical,pal2010renyi}.
There are also several papers extending some of these techniques to
divergence estimation~\citep{krishnamurthy14renyi,poczos2011estimation,wang2009divergence,kallberg2012estimation,perez2008kullback}.
Many of the existing methods can be categorized as plug-in methods: they are based
on estimating the densities either via a KDE or using $k$-Nearest Neighbors (\knn)
and evaluating the functional on these estimates.
Plug-in methods are conceptually simple but unfortunately suffer several drawbacks.
First, they typically have worse convergence rates than our approach,
achieving the parametric rate only when $s \ge d$ as opposed to
$s \ge d/2$~\citep{liu2012exponential,singh14plugin}.
Second, with either the KDE or \knn,
obtaining the best rates for plug-in methods requires undersmoothing the
density estimate, and we are not aware of principled approaches for hyperparameter
tuning in this regime.
In contrast, the bandwidth used in our estimators is the optimal bandwidth for
density estimation, so a number of approaches such as cross validation are available.
This is convenient for a practitioner as our method \emph{does not require tuning hyper
parameters}.
Third, plug-in methods based on the KDE always require computationally
burdensome numeric
integration. In our approach, numeric integration can be avoided for many functionals
of interest (See Table~\ref{tb:functionalDefns}).
There is also another line of work on estimating $f$-Divergences.
\citet{nguyen2010estimating} estimate $f$-divergences by solving a
convex program and analyse the technique when the likelihood ratio of the densities
is in an RKHS. Comparing the theoretical results is not straightforward since
it is not clear how to port their assumptions to our setting.
Further, the size of the convex program increases with the sample size
which is problematic for large samples. \citet{moon14fdivergence} use
a weighted ensemble estimator for $f$-divergences. They establish
asymptotically normality and the parametric convergence
rates only when $s \ge d$, which is a stronger smoothness assumption
than is required by our technique. Both these works only consider
$f$-divergences. Our method has wider applicability and includes
$f$-divergences as a special case.
\section{Conclusion}
\label{sec:conclusion}
We generalise
existing results in Von Mises estimation by proposing an empirically superior
\loos technique for estimating functionals and extending the framework
to functionals of two distributions.
We also prove a lower bound for the latter setting.
We demonstrate the practical utility of our technique via
comparisons against other alternatives and an image clustering
application. An open problem arising out
of our work is to derive the limiting distribution of the \loos estimator.
\section{Estimating Functionals}
\label{sec:estimation}
First consider estimating a functional of a single distribution,
$T(f)= \phi(\int \nu(f) d\mu)$
from samples $\Xonetwo \sim f$.
Using the VME~\eqref{eqn:functaylor}, \citet{emery98probstat}
and~\citet{robins09quadraticvm}
suggested a natural estimator.
If we use half of the data $\Xone$ to construct a density estimate
$\fhatone = \fhatone(\Xone)$, then by~\eqref{eqn:functaylor}:
\begin{align*}
T(f) - T(\fhatone) = \int \psi(x; \fhatone)f(x)d\mu + \bigO(\|f - \fhatone\|_2^2).
\end{align*}
Since the influence function $\psi(\,\cdot\,; \fhatone)$ does not depend on the
unknown density $f$, the first term on the right hand side is simply the
expectation of $\psi(X;\fhatone)$ under $f$.
We use the second half of the data to estimate this expectation with its sample mean.
This leads to the following preliminary estimator:
\begin{equation}
\Tfdsone = T(\fhatone) + \frac{1}{n/2}\nsumsechalf \psi(X_i; \fhatone).
\label{eqn:dsEstimator}
\end{equation}
We can similarly construct an estimator $\Tfdstwo$ by using $\Xtwo$
for density estimation and $\Xone$ for averaging. Our final estimator is
obtained via the average $\Tfds =(\Tfdsone+\Tfdstwo)/2$.
In what follows, we shall refer to this estimator as the Data-Split (\ds) estimator.
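To make the construction concrete, the following sketch implements the \dss estimator for the Shannon entropy $T(f) = -\int f \log f$, whose influence function is $\psi(x; f) = -\log f(x) - T(f)$; for this functional the $T(\fhatone)$ terms cancel, so no numeric integration is needed. This is only an illustrative sketch: it uses a Gaussian KDE in place of the Legendre-polynomial KDE from our experiments, and the name \texttt{entropy\_ds} is ours.

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_ds(x):
    """Data-split (DS) estimate of the Shannon entropy T(f) = -int f log f.
    With psi(x; fhat) = -log fhat(x) - T(fhat), the correction term in the
    DS formula cancels T(fhat), so each half-estimate is just the average
    of -log fhat over the held-out samples."""
    n = len(x) // 2
    halves = (x[:n], x[n:])
    ests = []
    for fit, avg in (halves, halves[::-1]):
        fhat = gaussian_kde(fit)                  # density estimate from one half
        ests.append(-np.mean(np.log(fhat(avg))))  # averaged over the other half
    return 0.5 * (ests[0] + ests[1])              # symmetrise the two splits
```

Because of the cancellation, the estimator reduces to averaging $-\log \fhatone$ over the held-out half, which is why no integral appears in the code.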
The rate of convergence of this estimator is determined by the error in the VME
$\bigO(\|f-\fhatone\|_2^2)$ and the $n^{-1/2}$ rate for estimating an expectation.
Lower bounds from the literature
\citep{laurent1996efficient,birge95estimation}
confirm the minimax optimality of the \dss estimator when $f$ is sufficiently
smooth. The data splitting trick is commonly used in several other works
\cite{birge95estimation, laurent1996efficient, krishnamurthy14renyi}.
The analysis of \dss estimators is straightforward as the rate directly follows from the
Cauchy-Schwarz inequality.
While \dss estimators enjoy good rates of convergence in theory,
from a practical standpoint the data splitting
is unsatisfying since using
only half the data each for estimation and averaging invariably decreases the accuracy.
As an alternative, we propose a Leave-One-Out (\loo) version of the above estimator which
makes more efficient use of the data,
\begin{equation}
\Tf = \frac{1}{n}\sum_{i=1}^n \left( T(\fhatmi) + \psi(X_i; \fhatmi) \right),
\label{eqn:looEstimator}
\end{equation}
where $\fhatmi$ is the kernel density estimate using all the samples $\Xn$
except for $X_i$.
Theoretically, we prove that the \loos Estimator achieves the same rate of convergence as
the \dss estimator but empirically it performs much better.
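Continuing the Shannon entropy example (again a sketch in which a Gaussian KDE stands in for the estimator used in our experiments, and \texttt{entropy\_loo} is an illustrative name), the \loos estimator~\eqref{eqn:looEstimator} takes the following form; the $O(n^2)$ cost of refitting the density for each held-out point is visible in the loop.

```python
import numpy as np
from scipy.stats import gaussian_kde

def entropy_loo(x):
    """Leave-one-out (LOO) estimate of the Shannon entropy.
    With psi(x; f) = -log f(x) - T(f), the summand T(fhat_{-i}) +
    psi(X_i; fhat_{-i}) collapses to -log fhat_{-i}(X_i)."""
    n = len(x)
    logs = np.empty(n)
    for i in range(n):
        fhat_mi = gaussian_kde(np.delete(x, i))  # KDE on all samples except X_i
        logs[i] = np.log(fhat_mi(x[i:i + 1])[0])
    return float(-logs.mean())
```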
We can extend this method for functionals of two distributions.
Akin to the one distribution case, we propose the following \dss and \loos versions.
\begingroup
\allowdisplaybreaks
\begin{align*}
\Tfdsone &= T(\fhatone, \ghatone) +
\frac{1}{n/2} \nsumsechalf \psi_f(X_i; \fhatone, \ghatone)
+ \frac{1}{m/2} \msumsechalf \psi_g(Y_j; \fhatone, \ghatone). \numberthis
\label{eqn:dsEstimatorTwoD} \\
\Tf &= \frac{1}{\max(n,m)}\sum_{i=1}^{\max(n,m)}
\left( T(\fhatmi, \ghatmi) + \psif(X_i;\fhatmi, \ghatmi) +
\psig(Y_i;\fhatmi, \ghatmi) \right). \numberthis
\label{eqn:looEstimatorTwoD}
\end{align*}
\endgroup
Here, $\ghatone,\ghatmi$ are defined similar to $\fhatone,\fhatmi$.
For the \dss estimator we swap the samples to compute $\Tfdstwo$ and then
average.
For the \loos estimator, if $n>m$ we cycle through the points $\Ym$ until we have
summed over all $\Xn$ or vice versa.
Note that $\Tf$ is asymmetric when $n\neq m$.
A seemingly natural alternative would be to sum over all $nm$ pairings of
$X_i$'s and $Y_j$'s. However, the latter approach is more computationally
burdensome. Moreover, a straightforward modification of our analysis in
Appendix~\ref{sec:looEstimatorTwoD} shows that both estimators have the same
rate of convergence if $n$ and $m$ are of the same order.
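As a two-distribution illustration of~\eqref{eqn:dsEstimatorTwoD}, consider the KL divergence $T(f,g) = \int f \log(f/g)$, whose influence functions are $\psi_f(x; f, g) = \log(f/g)(x) - T(f,g)$ and $\psi_g(y; f, g) = 1 - (f/g)(y)$. Plugging these in makes the $T(\fhat,\ghat)$ terms cancel, so once again no numeric integration is required. The following is a sketch only: Gaussian KDEs are assumed and \texttt{kl\_ds} is an illustrative name.

```python
import numpy as np
from scipy.stats import gaussian_kde

def kl_ds(x, y):
    """Data-split (DS) estimate of KL(f || g) from samples x ~ f and y ~ g.
    With psi_f(x) = log(f/g)(x) - T(f, g) and psi_g(y) = 1 - (f/g)(y),
    the T(fhat, ghat) terms cancel in the DS formula, leaving only
    sample averages (no numeric integration)."""
    n, m = len(x) // 2, len(y) // 2
    ests = []
    for (xf, xa), (yf, ya) in (((x[:n], x[n:]), (y[:m], y[m:])),
                               ((x[n:], x[:n]), (y[m:], y[:m]))):
        fhat, ghat = gaussian_kde(xf), gaussian_kde(yf)
        term_f = np.mean(np.log(fhat(xa) / ghat(xa)))  # average of psi_f + T
        term_g = 1.0 - np.mean(fhat(ya) / ghat(ya))    # average of psi_g
        ests.append(term_f + term_g)
    return 0.5 * (ests[0] + ests[1])                   # symmetrise the two splits
```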
\textbf{Examples: }
We demonstrate the generality
of our framework by presenting estimators for several
entropies, divergences and mutual informations and their
conditional versions in Table~\ref{tb:functionalDefns}.
For several functionals in the table, \emph{these are the first estimators proposed}.
We hope that this table will serve as a good reference for practitioners.
For several functionals (e.g. conditional and unconditional \renyi divergence,
conditional \tsallis mutual information and more) the estimators are not listed only
because the expressions are too long to fit into the table. Our software implements
a total of $17$ functionals which include all the estimators in the table.
In Appendix~\ref{sec:workedExample} we illustrate how to apply our framework
to derive an estimator for any functional via an example.
As will be discussed in Section~\ref{sec:comparison},
when compared to other alternatives, our technique has several favourable properties.
Computationally, the complexity of our method is $O(n^2)$ when compared to $O(n^3)$
for some other methods and for several functionals we do not require numeric
integration.
Additionally, unlike most other methods, we do not require any tuning of
hyperparameters.
\input{functionalEstimators2}
\section{Experiments}
\label{sec:experiments}
\vspace{-0.05in}
We compare the estimators derived using our methods on a series of synthetic
examples in $1-4$ dimensions against the methods in
\cite{stowell2009fast,goria2005new,noughabi2013entropy,miller2003new,ramirez2009entropy,
perez2008kullback,poczos12divergence,poczos2011estimation}. Software for the
estimators is obtained either directly from the papers or from~\citet{szabo14ite}.
For the \ds/\loos estimators, we estimate the density
via a KDE with the smoothing kernels constructed using
Legendre polynomials \citep{tsybakov08nonparametric}.
In both cases and for
the plug in estimator we choose the bandwidth by
performing $5$-fold cross validation. The integration for the plug in estimator
is approximated numerically.
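A minimal sketch of this bandwidth-selection step is given below (Gaussian KDE and an illustrative bandwidth grid; both are simplifications of the Legendre-kernel setup we actually use): hold out a fold, fit the KDE on the remainder, score the held-out log-likelihood, and pick the bandwidth with the best average score.

```python
import numpy as np

def kde_heldout_loglik(train, test, h):
    """Mean held-out log-likelihood of a Gaussian KDE fit on `train`."""
    d = (test[:, None] - train[None, :]) / h
    dens = np.exp(-0.5 * d ** 2).mean(axis=1) / (h * np.sqrt(2.0 * np.pi))
    return np.mean(np.log(dens + 1e-300))       # guard against log(0)

def cv_bandwidth(x, grid, n_folds=5, seed=0):
    """Pick a KDE bandwidth from `grid` by k-fold cross-validated
    held-out log-likelihood."""
    x = np.random.default_rng(seed).permutation(np.asarray(x, float))
    folds = np.array_split(x, n_folds)
    scores = []
    for h in grid:
        s = 0.0
        for k in range(n_folds):
            train = np.concatenate(
                [folds[j] for j in range(n_folds) if j != k])
            s += kde_heldout_loglik(train, folds[k], h)
        scores.append(s / n_folds)
    return grid[int(np.argmax(scores))]

rng = np.random.default_rng(2)
h_star = cv_bandwidth(rng.standard_normal(1000),
                      grid=[0.05, 0.1, 0.2, 0.4, 0.8])
```

Since the bandwidth is chosen by cross-validation, no hyperparameter is left to the user.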
We test the estimators on a series of synthetic datasets in $1-4$ dimensions.
The specifics of the densities used in the examples and methods compared to
are given in Appendix~\ref{sec:appExperiments}.
The results are shown in Figures~\ref{fig:toyOne} and~\ref{fig:toyTwo}.
We make the
following observations. In most cases the \loos estimator performs best. The \dss
estimator approaches the \loos estimator when there are many samples but is generally
inferior to the \loos estimator with few samples. This, as we have explained before is
because data splitting does not make efficient use of the data.
The \knns estimator for divergences
\cite{poczos12divergence} requires choosing a $k$. For this estimator,
we used the default setting for $k$ given in the software.
As performance is sensitive to the choice of $k$, the estimator performs well in some
cases but poorly in others.
We reiterate that our estimators do not require any setting of hyperparameters.
Next, we present some results on asymptotic normality. We test the \dss and
\loos estimators on a $4$-dimensional Hellinger divergence estimation problem. We use
$4000$ samples for estimation. We repeat this experiment $200$ times and compare the
empirical asymptotic distribution (i.e. the $\sqrt{4000}(\widehat{T} -
T(f,g))/\widehat{S}$ values where $\widehat{S}$ is the estimated asymptotic
variance) to a $\Ncal(0,1)$
distribution on a QQ plot. The results in Figure~\ref{fig:toyTwo} suggest that both
estimators are asymptotically normal.
\textbf{Image clustering: } We demonstrate the use of our nonparametric divergence
estimators in an image clustering task on the ETH-80 dataset.
Using our Hellinger divergence estimator we achieved an accuracy of $92.47\%$
whereas a naive spectral clustering approach achieved only $70.18\%$.
When we used a $k$-NN estimator for the Hellinger
divergence~\cite{poczos12divergence} we achieved $90.04\%$
which attests to the superiority of our method.
Since this is not the main focus of this work we defer this to
Appendix~\ref{sec:appExperiments}.
\insertFigToyTwo
\section{Experiments}
\label{sec:appExperiments}
\vspace{-0.05in}
\insertFigToyOne
\subsection{Simulations}
First, we compare the estimators derived using our methods on a series of synthetic
examples in $1-4$ dimensions.
For the \ds/\loos estimators, we estimate the density
via a KDE with the smoothing kernels constructed using
Legendre polynomials \citep{tsybakov08nonparametric}.
In both cases and for
the plug in estimator we choose the bandwidth by
performing $5$-fold cross validation. The integration for the plug in estimator
is approximated numerically.
We test the estimators on a series of synthetic datasets in $1-4$ dimensions.
The specifics of the data generating distributions and methods compared to are given
below.
The results are shown in Figures~\ref{fig:toyOne} and~\ref{fig:toyTwo}.
We make the
following observations. In most cases the \loos estimator performs best. The \dss
estimator approaches the \loos estimator when there are many samples but is generally
inferior to the \loos estimator with few samples. This, as we have explained before is
because data splitting does not make efficient use of the data.
The \knns estimator for divergences
\cite{poczos12divergence} requires choosing a $k$. For this estimator,
we used the default setting for $k$ given in the software.
As performance is sensitive to the choice of $k$, the estimator performs well in some
cases but poorly in others.
We reiterate that our estimators do not require any setting of hyperparameters.
Next, we present some results on asymptotic normality. We test the \dss and
\loos estimators on a $4$-dimensional Hellinger divergence estimation problem. We use
$4000$ samples for estimation. We repeat this experiment $200$ times and compare the
empirical asymptotic distribution (i.e. the $\sqrt{4000}(\widehat{T} -
T(f,g))/\widehat{S}$ values where $\widehat{S}$ is the estimated asymptotic
variance) to a $\Ncal(0,1)$
distribution on a QQ plot. The results in Figure~\ref{fig:toyTwo} suggest that both
estimators are asymptotically normal.
\textbf{Details: }
In our simulations, for the first figure comparing the Shannon Entropy
in Fig~\ref{fig:toyOne} we generated data
from the following one dimensional density,
\[
f_1(t) = 0.5 + 5 t^9
\]
For this, with probability $1/2$ we sample from the uniform distribution $U(0,1)$
on $(0,1)$ and otherwise sample $10$ points from $U(0,1)$ and pick the maximum.
For the third figure in Fig~\ref{fig:toyOne} comparing the KL divergence, we
generate data from the one dimensional density
\[
f_2(t) = 0.5 + \frac{0.5t^{19}(1-t)^{19}}{B(20,20)}
\]
where $B(\cdot,\cdot)$ is the Beta function. For this, with probability $1/2$ we
sample from $U(0,1)$ and otherwise sample from a $\textrm{Beta}(20,20)$ distribution.
For the second and fourth figures of Fig~\ref{fig:toyOne} we sampled from a $2$ dimensional
density where the first dimension was $f_1$ and the second was $U(0,1)$.
The fifth and sixth were from a $2$ dimensional
density where the first dimension was $f_2$ and the second was $U(0,1)$.
In all figures of Fig.~\ref{fig:toyTwo}, the first distribution was a $4$-dimensional density
where all dimensions are $f_2$. The latter was $U(0,1)^4$.
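The two mixtures can be sampled directly from the descriptions above; a short Python transcription:

```python
import numpy as np

def sample_f1(n, rng):
    """f1(t) = 0.5 + 5 t^9 on (0,1): with probability 1/2 draw U(0,1),
    otherwise the maximum of 10 U(0,1) draws (which has density 10 t^9)."""
    u = rng.uniform(size=n)
    m = rng.uniform(size=(n, 10)).max(axis=1)
    return np.where(rng.uniform(size=n) < 0.5, u, m)

def sample_f2(n, rng):
    """f2(t) = 0.5 + 0.5 t^19 (1-t)^19 / B(20,20): an equal mixture of
    U(0,1) and Beta(20,20)."""
    u = rng.uniform(size=n)
    b = rng.beta(20.0, 20.0, size=n)
    return np.where(rng.uniform(size=n) < 0.5, u, b)

rng = np.random.default_rng(3)
x1, x2 = sample_f1(50000, rng), sample_f2(50000, rng)
# E[f1] = 1/4 + 5/11 (about 0.7045); E[f2] = 1/2 by symmetry.
```

Multivariate versions are obtained by drawing each coordinate independently from the stated one-dimensional densities.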
\textbf{Methods compared to: }
In addition to the plug-in, \dss and \loos estimators we perform comparisons with
several other estimators.
For the Shannon Entropy we compare our method to the \knns estimator of
\citet{goria2005new}, the method of \citet{stowell2009fast} which uses $K-D$
partitioning, the method of \citet{noughabi2013entropy} based on Vasicek's spacing
method and that of \citet{learned2003ica} based on Voronoi tessellation.
For the KL divergence we compare against the \knns method of \citet{perez2008kullback}
and that of \citet{ramirez2009entropy} based on the power spectral density
representation using Szego's theorem. For \renyi, \tsallis and Hellinger divergences
we compared against the \knns method of \citet{poczos12divergence}.
Software for these
estimators is obtained either directly from the papers or from~\citet{szabo14ite}.
\insertFigToyTwo
\subsection{Image Clustering Task}
Here we demonstrate a simple image clustering task using a nonparametric divergence
estimator. For this we use images from the ETH-80 dataset.
The objective here is not to champion our approach for image clustering against
all methods for image clustering out there.
Rather, we just wish to demonstrate that our estimators can
be easily and intuitively applied to many Machine Learning problems.
We use the three categories Apples, Cows and Cups and randomly select $50$ images
from each category. Some sample images are shown in Fig~\ref{fig:clusImages}.
We convert the images to grey scale and extract the SIFT features from each
image. The SIFT features are $128$-dimensional but we project them to $4$ dimensions
via PCA. This is necessary because nonparametric methods work best in low dimensions.
Now we can treat each image as a collection of features, and hence a sample from a $4$
dimensional distribution.
We estimate the Hellinger divergence between these ``distributions''.
Then we construct an affinity matrix $A$ where the similarity metric between the
$i$\superscript{th} and $j$\superscript{th} image is given by $A_{ij} =
\exp(-\widehat{H}^2(X_i, X_j))$. Here $X_i$ and $X_j$ denotes the projected SIFT
samples from images $i$ and $j$ and $\widehat{H}(X_i, X_j)$ is the estimated
Hellinger divergence between the distributions.
Finally, we run a spectral clustering algorithm on the matrix $A$.
Figure~\ref{fig:affinity} depicts the affinity matrix $A$ when the images were
ordered according to their class label. The affinity matrix exhibits
block-diagonal structure which indicates that our Hellinger divergence estimator
can in fact identify patterns in the images.
Our approach achieved a clustering accuracy of $92.47\%$. When we used the $k$-NN
based estimator of~\cite{poczos12divergence} we achieved an accuracy of
$90.04\%$.
When we instead applied Spectral clustering naively,
with $A_{ij} = \exp(-L_2(P_i,P_j)^2)$ where $L_2(P_i,P_j)$ is
the $L_2$ distance between the pixel intensities we achieved an accuracy of
$70.18\%$. We also tried $A_{ij} = \exp(-\alpha \widehat{H}^2(X_i, X_j))$ as the
affinity for
different choices of $\alpha$ and found that our estimator still performed best.
We also experimented with the
\renyi and \tsallis divergences and obtained similar results.
On the same note, one can imagine that these divergence estimators can also be used
for a classification task. For instance we can treat $\exp(-\widehat{H}^2(X_i, X_j))$
as a similarity
metric between the images and use it in a classifier such as an SVM.
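The pipeline -- estimate pairwise divergences, exponentiate them into an affinity matrix, then spectrally cluster -- can be sketched end-to-end as follows. To keep the sketch self-contained we substitute the closed-form Hellinger divergence between Gaussians fitted to each bag of samples for the nonparametric estimator, and use a minimal two-cluster spectral step (sign split of the Fiedler vector) in place of a full spectral clustering routine; the structure of the computation is otherwise the same.

```python
import numpy as np

def hellinger2_gauss(x, y):
    """Squared Hellinger divergence between the 1-d Gaussians fitted to
    the sample bags x and y (a parametric stand-in, for brevity)."""
    m1, s1, m2, s2 = x.mean(), x.std(), y.mean(), y.std()
    v = s1 ** 2 + s2 ** 2
    return 1.0 - np.sqrt(2.0 * s1 * s2 / v) * np.exp(-((m1 - m2) ** 2) / (4.0 * v))

def spectral_two_clusters(A):
    """Minimal two-way spectral clustering of an affinity matrix:
    sign split of the Fiedler vector of the normalized Laplacian."""
    d = A.sum(axis=1)
    L = np.eye(len(A)) - A / np.sqrt(np.outer(d, d))
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)  # second-smallest eigenvector

rng = np.random.default_rng(4)
# 40 "images": each a bag of 60 draws from one of two distributions.
groups = ([rng.normal(0.0, 1.0, 60) for _ in range(20)]
          + [rng.normal(3.0, 1.0, 60) for _ in range(20)])
labels = np.array([0] * 20 + [1] * 20)
H2 = np.array([[hellinger2_gauss(a, b) for b in groups] for a in groups])
A = np.exp(-H2)                          # A_ij = exp(-H^2(X_i, X_j))
pred = spectral_two_clusters(A)
acc = max(np.mean(pred == labels), np.mean(pred != labels))
```

In the experiments above the parametric stand-in is replaced by our nonparametric Hellinger estimator, which is what allows the method to pick up non-Gaussian structure.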
\insertFigClustering
\section{Estimators for Some Information Theoretic Quantities}
\newcommand{\twolinecell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}}#2\end{tabular}}
\newcommand{\fourlinecell}[2][c]{%
\begin{tabular}[#1]{@{}c@{}c@{}c@{}}#2\end{tabular}}
\begin{table*}[htbp]
\begin{center}
\small{
\begin{tabular}{|c|c|}
\hline
\textbf{Functional} & \textbf{\loos Estimator} \\ \hline
\twolinecell{\tsallis Entropy \\ $\frac{1}{\alpha-1}\left(1 - \int p^\alpha\right)$} &
$\frac{1}{\alpha-1} +
\frac{1}{n}\sum_i \int \phatmi^\alpha -\frac{\alpha}{\alpha-1} \phatmi^{\alpha-1}(X_i) $ \\ \hline
\twolinecell{\renyi Entropy \\ $\frac{-1}{\alpha-1}\log \int p^\alpha$} &
$ \frac{\alpha}{\alpha-1} + \frac{1}{n}\sum_i
\frac{-1}{\alpha-1}\log \int \phatmi^\alpha -
\frac{\alpha}{\alpha-1} \frac{\phatmi^{\alpha-1}(X_i)}{\int \phatmi^\alpha} $
\\ \hline
\twolinecell{Shannon Entropy \\ $- \int p \log p$} &
$ -\frac{1}{n} \sum_i \log \phatmi(X_i)$
\\ \hline
\twolinecell{$\ltwotwo$ Divergence\\ $ \int (p_X - p_Y)^2$} &
$\frac{2}{n}\sum_i \pxhatmi(X_i) - \pyhat(X_i) - \int (\pxhatmi - \pyhat)^2 +
\frac{2}{m}\sum_j \pyhatmj(Y_j) - \pxhat(Y_j) $
\\ \hline
\twolinecell{Hellinger Divergence \\ $ 2-2\int \px^{1/2}\py^{1/2} $} &
$2- \frac{1}{n}\sum_i \pxhatmi^{-1/2}(X_i)\pyhat^{1/2}(X_i)
- \frac{1}{m}\sum_j \pxhat^{1/2}(Y_j)\pyhatmj^{-1/2}(Y_j) $
\\ \hline
\twolinecell{Chi-Squared Divergence \\ $\int \frac{(\px-\py)^2}{\px}$}&
$ -1 - \frac{1}{n}\sum_i \frac{\pyhat^2(X_i)}{\pxhatmi^2(X_i)} +
2\frac{1}{m}\sum_j \frac{\pyhatmj(Y_j)}{\pxhat(Y_j)} $\\ \hline
\twolinecell{$f$-Divergence \\ $\int \phi ( \frac{\px}{\py} ) \py $}&
$ \frac{1}{n}\sum_i\phi' \left(\frac{\pxhatmi(X_i)}{\pyhat(X_i)}\right)
+ \frac{1}{m} \sum_j \left(\phi\left(\frac{\pxhat(Y_j)}{\pyhatmj(Y_j)}\right)
- \frac{\pxhat(Y_j)}{\pyhatmj(Y_j)}\phi'\left(\frac{\pxhat(Y_j)}{\pyhatmj(Y_j)}\right) \right)
$\\ \hline
\twolinecell{\tsallis Divergence \\ $ \frac{1}{\alpha-1}\left(
\int \px^\alpha \py^{1-\alpha} -1 \right) $} &
$ \frac{1}{1-\alpha} + \frac{\alpha}{\alpha-1}\frac{1}{n}\sum_i
\left(\frac{\pxhatmi(X_i)}{\pyhat(X_i)}\right)^{\alpha-1} -
\frac{1}{m}\sum_j \left( \frac{\pxhat(Y_j)}{\pyhatmj(Y_j)}\right)^{\alpha} $\\ \hline
\twolinecell{KL divergence \\ $ \int p_{X}\log\frac{p_{X}}{p_{Y}} $} &
$ 1 + \frac{1}{n}\sum_i \log \frac{\pxhatmi(X_i)}{\pyhat(X_i)} -
\frac{1}{m}\sum_j \frac{\pxhat(Y_j)}{\pyhatmj(Y_j)} $ \\ \hline
\twolinecell{Conditional-\tsallis divergence \\ $ \int p_Z\frac{1}{\alpha-1}\left(\int p_{X|Z}^{\alpha}
p_{Y|Z}^{1-\alpha} -1\right) $} &
$ \frac{1}{1 - \alpha} + \frac{\alpha}{\alpha-1}
\frac{1}{n}\sum_i \left(\frac{\pxzhatmi(V_i)}{\pyzhat(V_i)}\right)^{\alpha-1}
- \frac{1}{m}\sum_{j} \left(\frac{\pxzhat(W_j)}{\pyzhatmj(W_j)}\right)^{\alpha} $
\\ \hline
\twolinecell{Conditional-KL divergence \\ $ \int p_Z\int p_{X|Z}\log\frac{p_{X|Z}}{p_{Y|Z}} $} &
$ 1 + \frac{1}{n}\sum_i \log \frac{\pxzhatmi(V_i)}{\pyzhat(V_i)} -
\frac{1}{m}\sum_j \frac{\pxzhat(W_j)}{\pyzhatmj(W_j)} $ \\ \hline
\twolinecell{Shannon Mutual Information \\
$ \int p_{XY}\log\frac{p_{XY}}{p_{X}p_Y} $} &
$ \frac{1}{n} \sum_i \log \pxyhatmi(X_i, Y_i) -\log \pxhatmi(X_i)
-\log \pyhatmi(Y_i) $ \\ \hline
\twolinecell{Conditional \tsallis MI \\
$\int p_Z \frac{1}{\alpha-1}\left(\int p_{X,Y|Z}^{\alpha}
p_{X|Z}^{1-\alpha}p_{Y|Z}^{1-\alpha} -1\right)$} &
\fourlinecell{$
\frac{1}{1-\alpha} + \frac{1}{\alpha-1}\frac{1}{n}\sum_i
\alpha \left(\frac{\pxyzhatmi(X_i,Y_i,Z_i)\pzhat(Z_i)}
{\pxzhatmi(X_i,Z_i)\pyzhatmi(Y_i,Z_i)}\right)^{\alpha-1}$ \\
{\scriptsize
$- (1-\alpha) \frac{1}{\alpha-1}\frac{1}{n}\sum_i\pzhat^{\alpha-2}(Z_i) \int\pxyzhatmi^\alpha(\cdot,\cdot,Z_i)
\pxzhatmi^{1-\alpha}(\cdot,Z_i)\pyzhatmi^{1-\alpha}(\cdot,Z_i) $} \\
{ \scriptsize
$ +\frac{1}{\alpha-1}\frac{1}{n}\sum_i
(1-\alpha)\pxzhatmi^{-\alpha}(X_i,Z_i)\pzhat^{\alpha-1}(Z_i)
\int\pxyzhatmi^\alpha(X_i,\cdot,Z_i)\pyzhatmi^{1-\alpha}(\cdot,Z_i)
$}
\\
{\scriptsize
$+ \frac{1}{\alpha-1}\frac{1}{n}\sum_i
(1-\alpha)\pyzhatmi^{-\alpha}(Y_i,Z_i)\pzhat^{\alpha -1}(Z_i)
\int \pxyzhatmi^\alpha(\cdot,Y_i,Z_i)\pxzhatmi^{1-\alpha}(\cdot,Z_i)
$} }
\\ \hline
\end{tabular}
}
\vspace{-0.1in}
\caption{
Definitions of functionals and the corresponding estimators.
Here $p_{X|Z}, \pxz$ etc. are conditional and joint distributions.
For the conditional divergences we take $V_i = (X_i, Z_{1i})$, $W_j = (Y_j,
Z_{2j})$ to be the samples from $\pxz, \pyz$ respectively.
For the mutual informations we have samples $(X_i, Y_i)\sim \pxy$ and for
the conditional versions we have $(X_i, Y_i, Z_i)\sim \pxyz$.
}
\label{tb:functionalDefns}
\end{center}
\end{table*}
\section*{Acknowledgements}
This work is supported in part by NSF Big Data grant IIS-1247658 and
DOE grant DESC0011114.
{\small
\section{An Illustrative Example - The Conditional Tsallis Divergence}
\label{sec:workedExample}
In this section we present a step by step guide on applying our framework to
estimating any desired functional.
We choose the Conditional Tsallis divergence because pedagogically it is a
good example in Table~\ref{tb:functionalDefns} to illustrate the technique.
By following a similar procedure, one may derive an estimator for any desired
functional.
The estimators are derived in Section~\ref{sec:tsallisEstimator} and
in Section~\ref{sec:tsallisAnalysis} we discuss conditions for the theoretical
guarantees and asymptotic normality.
The Conditional \tsallis divergence
($\alpha \neq 0, 1$) between $X$ and $Y$ conditioned on $Z$ can be written
in terms of joint densities $\pxz, \pyz$.
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\tsalliscd(p_{X|Z} \| p_{Y|Z}; p_Z) = \tsalliscd(\pxz, \pyz) = \\
&=\int p_Z(z)\frac{1}{\alpha-1}\left(\int p_{X|Z}^{\alpha}(u,z)
p_{Y|Z}^{1-\alpha}(u,z) \ud u -1\right) \ud z \\
&= \frac{1}{1- \alpha} + \frac{1}{\alpha-1}
\int p^{\alpha}_{XZ}(u,z) p^{\beta}_{YZ}(u,z) \ud u \ud z
\end{align*}
}
{
\begin{align*}
\tsalliscd(p_{X|Z} \| p_{Y|Z}; p_Z) = \tsalliscd(\pxz, \pyz)
&=\int p_Z(z)\frac{1}{\alpha-1}\left(\int p_{X|Z}^{\alpha}(u,z)
p_{Y|Z}^{1-\alpha}(u,z) \ud u -1\right) \ud z \\
&= \frac{1}{1- \alpha} + \frac{1}{\alpha-1}
\int p^{\alpha}_{XZ}(u,z) p^{\beta}_{YZ}(u,z) \ud u \ud z
\end{align*}
}
where we have taken $\beta = 1-\alpha$. We have samples
$V_i = (X_i, Z_{1i}) \sim \pxz, i = 1, \dots, n$,
and $W_j = (Y_j,Z_{2j})\sim \pyz, j=1, \dots, m$.
We will assume $\pxz, \pyz \in \Sigma(s,L,B', B)$.
For brevity, we will write $p = (\pxz, \pyz)$ and $\phat = (\pxzhat, \pyzhat)$.
\subsection{The Estimators}
\label{sec:tsallisEstimator}
We first compute the influence functions of $\tsalliscd$ and the use
it to derive the \ds/\loos estimators. \\[\thmparaspacing]
\begin{proposition}[Influence Functions of $\tsalliscd$]
\label{thm:tsalliscdInfFun}
The influence functions of $\tsalliscd$ w.r.t $\pxz$, $\pyz$ are
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psixz(X, Z_1; \pxz, \pyz) = \numberthis \label{eqn:tsalliscdInfFun} \\
&\frac{\alpha}{\alpha-1} \left(
\pxz^{\alpha-1}(X,Z_1)\pyz^{\beta}(X,Z_1) - \int \pxz^\alpha\pyz^\beta \right) \\
&\psiyz(Y, Z_2; \pxz, \pyz) = \\
& - \left(
\pxz^{\alpha}(Y,Z_2)\pyz^{\beta-1}(Y,Z_2) - \int \pxz^\alpha\pyz^\beta \right)
\end{align*}
}
{
\begin{align*}
\psixz(X, Z_1; \pxz, \pyz) &= \numberthis \label{eqn:tsalliscdInfFun}
\frac{\alpha}{\alpha-1} \left(
\pxz^{\alpha-1}(X,Z_1)\pyz^{\beta}(X,Z_1) - \int \pxz^\alpha\pyz^\beta \right) \\
\psiyz(Y, Z_2; \pxz, \pyz) &=
- \left(
\pxz^{\alpha}(Y,Z_2)\pyz^{\beta-1}(Y,Z_2) - \int \pxz^\alpha\pyz^\beta \right)
\end{align*}
}
\endgroup
\begin{proof}
Recall that we can derive the influence functions via
$\psixz(X,Z_1; p) = \tsalliscd'_{XZ}(\delta_{X,Z_1}-\pxz; p)$,
$\psiyz(Y,Z_2; p) = \tsalliscd'_{YZ}(\delta_{Y,Z_2}-\pyz; p)$
where $\tsalliscd'_{XZ}, \tsalliscd'_{YZ}$ are the \gateaux derivatives of
$\tsalliscd$
w.r.t $\pxz, \pyz$ respectively. Hence,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psixz(X,Z_1) = \\
& \frac{1}{\alpha-1} \frac{\partial}{\partial t}
\int ((1-t)\pxz + t\delta_{X Z_1})^\alpha \pyz^\beta \Big|_{t=0} \\
&= \frac{\alpha}{\alpha-1}\int\pxz^{\alpha-1} \pyz^\beta (\delta_{X Z_1} - \pxz)
\end{align*}
}
{
\begin{align*}
\psixz(X,Z_1)
&= \frac{1}{\alpha-1} \frac{\partial}{\partial t}
\int ((1-t)\pxz + t\delta_{X Z_1})^\alpha \pyz^\beta \Big|_{t=0} \\
&= \frac{\alpha}{\alpha-1}\int\pxz^{\alpha-1} \pyz^\beta (\delta_{X Z_1} - \pxz)
\end{align*}
}
\endgroup
from which the result follows. Deriving $\psiyz$ is similar.
Alternatively, we can directly show that $\psixz, \psiyz$ in
Equation~\eqref{eqn:tsalliscdInfFun}
satisfy Definition~\ref{def:infFun}.
\end{proof}
\end{proposition}
\vspace{\thmparaspacing}
\textbf{\dss estimator}:
Use
$\Vone, \Wone$ to construct density estimates $\pxzhatone, \pyzhatone$ for $\pxz,
\pyz$. Then, use $\Vtwo, \Wtwo$ to add the sample means of the influence
functions given in
Proposition~\ref{thm:tsalliscdInfFun}. This results in our preliminary estimator,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
\tsalliscdestimss{1} &= \frac{1}{1 - \alpha} + \frac{\alpha}{\alpha-1}
\frac{2}{n}\nsumsechalf
\left( \frac{\pxzhatone(X_i, Z_{1i})}{\pyzhatone(X_i, Z_{1i})} \right)^{\alpha-1} \\
&\hspace{0.2in} - \frac{2}{m}\msumsechalf
\left( \frac{\pxzhatone(Y_j, Z_{2j})}{\pyzhatone(Y_j, Z_{2j})} \right)^{\alpha}
\numberthis.
\label{eqn:tsalliscdestim}
\end{align*}
}
{
\begin{align*}
\tsalliscdestimss{1} &= \frac{1}{1 - \alpha} + \frac{\alpha}{\alpha-1}
\frac{2}{n}\nsumsechalf
\left(\frac{\pxzhatone(X_i, Z_{1i})}{\pyzhatone(X_i, Z_{1i})} \right)^{\alpha-1}
- \frac{2}{m}\msumsechalf
\left(\frac{\pxzhatone(Y_j, Z_{2j})}{\pyzhatone(Y_j, Z_{2j})} \right)^{\alpha}
\numberthis
\label{eqn:tsalliscdestim}
\end{align*}
}
\endgroup
The final estimate is
$\tsalliscdestimds = (\tsalliscdestimss{1} + \tsalliscdestimss{2})/2$
where $\tsalliscdestimss{2}$ is obtained by swapping the two samples.
\textbf{\loos Estimator:}
Denote the density estimates of $\pxz, \pyz$ without the $i$\superscript{th} sample by
$\pxzhatmi$ and $\pyzhatmi$. Then the \loos estimator is,
\begin{equation}
\tsalliscdestim = \frac{1}{1-\alpha} +
\frac{\alpha}{\alpha-1} \frac{1}{n} \nsumwhole
\left(\frac{\pxzhatmi(X_i, Z_{1i})}{\pyzhat(X_i, Z_{1i})} \right)^{\alpha-1}
- \left(\frac{\pxzhat(Y_i, Z_{2i})}{\pyzhatmi(Y_i, Z_{2i})} \right)^{\alpha}
\label{eqn:tsalliscdlooestim}
\end{equation}
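As a quick numerical check of~\eqref{eqn:tsalliscdlooestim}: with a trivial (constant) conditioning variable it reduces to the unconditional \tsallis divergence estimator, which should concentrate around zero when both samples come from the same density. A Python sketch (Gaussian KDE, fixed bandwidth, $n=m$, all chosen for brevity):

```python
import numpy as np

def _kde(at, data, h, leave_one_out=False):
    """Gaussian KDE of `data` evaluated at `at`; when `at` is `data`
    itself, `leave_one_out=True` drops the i-th point from fhat_{-i}."""
    d = (at[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * d ** 2) / (h * np.sqrt(2.0 * np.pi))
    if leave_one_out:
        np.fill_diagonal(K, 0.0)
        return K.sum(axis=1) / (len(data) - 1)
    return K.mean(axis=1)

def loo_tsallis(x, y, alpha, h=0.25):
    """LOO von Mises estimate of the Tsallis-alpha divergence (n = m):
    1/(1-a) + a/(a-1) * mean_i (fhat_{-i}(X_i)/ghat(X_i))^(a-1)
            -          mean_i (fhat(Y_i)/ghat_{-i}(Y_i))^a
    """
    a = alpha
    f_loo_x = _kde(x, x, h, leave_one_out=True)
    g_x = _kde(x, y, h)
    f_y = _kde(y, x, h)
    g_loo_y = _kde(y, y, h, leave_one_out=True)
    return (1.0 / (1.0 - a)
            + a / (a - 1.0) * np.mean((f_loo_x / g_x) ** (a - 1.0))
            - np.mean((f_y / g_loo_y) ** a))

rng = np.random.default_rng(5)
# Both samples from the same density: the divergence is zero.
est = loo_tsallis(rng.standard_normal(2000), rng.standard_normal(2000),
                  alpha=0.5)
```

When the density ratios are identically one, the three terms cancel exactly, so the residual here reflects only the KDE fluctuations.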
\subsection{Analysis and Asymptotic Confidence Intervals}
\label{sec:tsallisAnalysis}
We begin with a functional Taylor expansion of $\tsalliscd(f,g)$ around
$(f_0, g_0)$. Since $\alpha, \beta \neq 0, 1$,
we can bound the second order terms by $O\left( \|f-f_0\|^2 + \|g-g_0\|^2
\right)$.
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
\tsalliscd(f,g) &=
\tsalliscd(f_0, g_0) +
\frac{\alpha}{\alpha - 1} \int f_0^{\alpha-1}g_0^\beta (f - f_0) \numberthis
\label{eqn:tsalliscdvme} \\
& - \int f_0^{\alpha}g_0^{\beta-1} (g - g_0) +
O\left( \|f-f_0\|^2 + \|g-g_0\|^2 \right)
\end{align*}
}
{
\begin{align*}
\tsalliscd(f,g) &=
\tsalliscd(f_0, g_0) +
\frac{\alpha}{\alpha - 1} \int f_0^{\alpha-1}g_0^\beta (f - f_0) \numberthis
\label{eqn:tsalliscdvme}
- \int f_0^{\alpha}g_0^{\beta-1} (g - g_0) +
O\left( \|f-f_0\|^2 + \|g-g_0\|^2 \right)
\end{align*}
}
Precisely, the second order
remainder is,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\frac{\alpha^2}{\alpha - 1} \int f_*^{\alpha-2}g_*^\beta (f - f_0)^2
-\beta \int f_*^{\alpha}g_*^{\beta-2} (g - g_0)^2 \\
&+\frac{\alpha\beta}{\alpha -1} \int f_*^{\alpha-1}g_*^{\beta-1} (f-f_0)(g-g_0)
\end{align*}
}
{
\begin{align*}
&\frac{\alpha^2}{\alpha - 1} \int f_*^{\alpha-2}g_*^\beta (f - f_0)^2
-\beta \int f_*^{\alpha}g_*^{\beta-2} (g - g_0)^2
+\frac{\alpha\beta}{\alpha -1} \int f_*^{\alpha-1}g_*^{\beta-1} (f-f_0)(g-g_0)
\end{align*}
}
where $(f_*,g_*)$ is in the line segment between $(f,g)$ and $(f_0,g_0)$.
If $f,g,f_0,g_0$ are bounded above and below so are $f_*, g_*$ and
$ f_*^a g_*^b $ where $a,b$ are coefficients depending on $\alpha$.
The first two terms are respectively
$O\left( \|f-f_0\|^2\right)$, $O\left(\|g-g_0\|^2 \right)$.
The cross term can be bounded via,
$\abr{\int (f-f_0)(g-g_0) } \leq \int \max\{ |f-f_0|^2, |g-g_0|^2 \} \in
O( \|f-f_0\|^2 + \|g-g_0\|^2)$.
As mentioned earlier, the boundedness of the densities give us the required rates
given in Theorem~\ref{thm:convTwoLoo} for both estimators.
For the \dss estimator,
to show asymptotic normality, we need to verify the conditions in
Theorem~\ref{thm:asympNormalTwoDistro}.
We state it formally below, but prove it at the end of this section. \\
\begin{corollary}
\label{thm:tsalliscdAsympNormal}
Let $\pxz,\pyz \in \Sigma(s, L, B, B')$. Then $\tsalliscdestimds$
is asymptotically normal when $\pxz \neq \pyz$ and $s>d/2$.
\label{thm:condTsallisAsympNormal}
\end{corollary}
Finally, to construct a confidence interval we need a consistent
estimate of the asymptotic variance :
$\frac{1}{\zeta} \VV_{XZ}\left[ \psixz(V; p) \right]
+ \frac{1}{1-\zeta} \VV_{YZ} \left[ \psiyz(W; p) \right]$ where,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
& \VV_{XZ}\left[ \psixz(X, Z_1; \pxz, \pyz) \right] = \\
& \left( \frac{\alpha}{\alpha-1} \right)^2 \left( \int \pxz^{2\alpha-1}
\pyz^{2\beta} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right) \\
& \VV_{YZ}\left[ \psiyz(Y, Z_2; \pxz, \pyz) \right] = \\
& \left( \int \pxz^{2\alpha}
\pyz^{2\beta-1} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right)
\end{align*}
}
{
\begin{align*}
\VV_{XZ}\left[ \psixz(X, Z_1; \pxz, \pyz) \right] &=
\left( \frac{\alpha}{\alpha-1} \right)^2 \left( \int \pxz^{2\alpha-1}
\pyz^{2\beta} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right) \\
\VV_{YZ}\left[ \psiyz(Y, Z_2; \pxz, \pyz) \right] &=
\left( \int \pxz^{2\alpha}
\pyz^{2\beta-1} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right)
\end{align*}
}
\endgroup
From our analysis above,
we know that any
functional of the form $S(a,b) = \int \pxz^a \pyz^b$, $a+b=1, a,b\neq 0,1$
can be estimated via a \loos estimate
\begin{align*}
\widehat{S}(a,b) =
\frac{1}{n}\nsumwhole a \frac{\widehat{p}^b_{YZ,-i}(V_i)}{\widehat{p}^b_{XZ,-i}(V_i)}
+ b \frac{\widehat{p}^a_{XZ,-i}(W_i)}{\widehat{p}^a_{YZ,-i}(W_i)}
\end{align*}
where $\pxzhatmi,\pyzhatmi$ are the density estimates from
$V_{-i},W_{-i}$ respectively.
$n/N$ is a consistent estimator for $\zeta$.
This gives the following estimator for the asymptotic variance,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\frac{N}{n}\frac{\alpha^2}{(\alpha-1)^2} \widehat{S}(2\alpha-1, 2\beta) +
\frac{N}{m} \widehat{S}(2\alpha, 2\beta-1) \\
&-
\frac{N (m\alpha^2 + n(\alpha-1)^2)}{nm(\alpha-1)^2}\widehat{S}^2(\alpha, \beta).
\end{align*}
}
{
\begin{align*}
\frac{N}{n}\frac{\alpha^2}{(\alpha-1)^2} \widehat{S}(2\alpha-1, 2\beta) +
\frac{N}{m} \widehat{S}(2\alpha, 2\beta-1) -
\frac{N (m\alpha^2 + n(\alpha-1)^2)}{nm(\alpha-1)^2}\widehat{S}^2(\alpha, \beta).
\end{align*}
}
The consistency of this estimator follows from the consistency of
$\widehat{S}(a,b)$ for $S(a,b)$, Slutsky's theorem and the
continuous mapping theorem.
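Given the point estimate and a consistent estimate $\widehat{S}$ of the asymptotic variance, the asymptotic level-$(1-\delta)$ confidence interval takes the usual form $\widehat{T} \pm z_{1-\delta/2}\sqrt{\widehat{S}/N}$. A dependency-free sketch (the numeric values fed in at the bottom are placeholders, not results from the paper):

```python
import math

def asymptotic_ci(t_hat, s_hat, n_total, delta=0.05):
    """Two-sided asymptotic confidence interval
    t_hat +/- z_{1-delta/2} * sqrt(s_hat / N), valid whenever
    sqrt(N) (T_hat - T) / sqrt(S_hat) converges to N(0,1)."""
    target = 1.0 - delta / 2.0
    lo, hi = 0.0, 10.0
    for _ in range(80):                  # bisection for Phi(z) = target
        mid = (lo + hi) / 2.0
        if 0.5 * (1.0 + math.erf(mid / math.sqrt(2.0))) < target:
            lo = mid
        else:
            hi = mid
    z = (lo + hi) / 2.0
    half = z * math.sqrt(s_hat / n_total)
    return t_hat - half, t_hat + half

# Placeholder numbers: a point estimate 0.31 with estimated asymptotic
# variance 2.0 from N = 4000 samples.
lo95, hi95 = asymptotic_ci(t_hat=0.31, s_hat=2.0, n_total=4000)
```

The standard normal quantile is obtained here by bisection on the error function, avoiding any external dependency.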
\begin{proof}[Proof of Corollary~\ref{thm:condTsallisAsympNormal}]
We now prove that the \dss estimator satisfies the necessary conditions for
asymptotic normality. We begin by showing that $\tsalliscd$'s influence functions
satisfy the regularity condition~\ref{asm:infFunRegularity}.
We will show this for $\psixz$. The proof for $\psiyz$ is similar.
Consider two pairs of densities $(f,g)$ $(f',g')$ on the $(XZ,YZ)$ spaces.
\begingroup
\allowdisplaybreaks
\begin{align*}
& \int \left( \psixz(u; f, g) - \psixz(u; f', g') \right)^2 f\\
&\hspace{0.2in}= \frac{\alpha^2}{(1-\alpha)^2}
\int \left( f^{\alpha-1}g^\beta - \int f^\alpha g^\beta
-\left[ f'^{\alpha-1}g'^\beta - \int f'^\alpha g'^\beta \right]
\right)^2 f \\
&\hspace{0.2in}\leq 2\frac{\alpha^2}{(1-\alpha)^2} \left[
\int \left(f^{\alpha-1}g^\beta - f'^{\alpha-1}g'^\beta \right)^2f +
\left( \int f^\alpha g^\beta - \int f'^\alpha g'^\beta \right)^2
\right] \\
&\hspace{0.2in}\leq 2\frac{\alpha^2}{(1-\alpha)^2} \left[
\int \left(f^{\alpha-1}g^\beta - f'^{\alpha-1}g'^\beta \right)^2f +
\int \left( f^\alpha g^\beta - f'^\alpha g'^\beta \right)^2
\right] \\
&\hspace{0.2in} \leq 4\frac{\alpha^2}{(1-\alpha)^2} \bigg[
\|g^{\beta}\|_\infty^2 \int (f^{\alpha-1} - f'^{\alpha-1})^2 +
\|f'^{\alpha-1}\|_\infty^2 \int (g^\beta - g'^\beta)^2 + \\
&\hspace{0.6in}
\|g^{\beta}\|_\infty^2 \int (f^{\alpha} - f'^{\alpha})^2 +
\|f'^{\alpha}\|_\infty^2 \int (g^\beta - g'^\beta)^2
\bigg] \\
&\hspace{0.2in}\in
O\left( \|f -f'\|^2 \right) +
O\left( \|g-g'\|^2 \right)
\end{align*}
\endgroup
where, in the second and fourth steps we have used Jensen's inequality.
The last step follows from the boundedness of all our densities and estimates
and by Lemma~\ref{lem:densitypowers}.
The bounded variance condition of the influence functions also follows from the
boundedness of the densities.
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\VV_{\pxz} \psixz(V;\pxz, \pyz) \\
&\leq \frac{\alpha^2}{ (\alpha-1)^2 }
\EE_{\pxz} \left[\pxz^{2\alpha-2}(X,Z_1) \pyz^{2\beta}(X,Z_1) \right] \\
&= \frac{\alpha^2}{ (\alpha-1)^2 }\int \pxz^{2\alpha-1} \pyz^{2\beta} < \infty
\end{align*}
}
{
\begin{align*}
\VV_{\pxz} \psixz(V;\pxz, \pyz)
&\leq \frac{\alpha^2}{ (\alpha-1)^2 }
\EE_{\pxz} \left[\pxz^{2\alpha-2}(X,Z_1) \pyz^{2\beta}(X,Z_1) \right] \\
&= \frac{\alpha^2}{ (\alpha-1)^2 }\int \pxz^{2\alpha-1} \pyz^{2\beta} < \infty
\end{align*}
}
\endgroup
We can bound $\VV_{\pyz}\psiyz$ similarly.
For the fourth condition, note that when $\pxz = \pyz$,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psi_{XZ}(X,Z_1; \pxz, \pxz) \\
&= \frac{\alpha}{\alpha -1} \left(
\pxz^{\alpha+\beta-1}(X,Z_1) - \int \pxz\right)
= 0,
\end{align*}
}
{
\begin{align*}
\psi_{XZ}(X,Z_1; \pxz, \pxz)
&= \frac{\alpha}{\alpha -1} \left(
\pxz^{\alpha+\beta-1}(X,Z_1) - \int \pxz\right)
= 0,
\end{align*}
}
and similarly $\psi_{YZ} = \zero$.
Otherwise, $\psixz$ depends explicitly on $X,Z$ and is nonzero.
Therefore we have asymptotic normality away from $\pxz = \pyz$.
\end{proof}
\section{INTRODUCTION}
\else
\section{Introduction}
\fi
\label{sec:intro}
Entropies, divergences, and mutual informations are classical information-theoretic
quantities that play fundamental roles in statistics, machine learning,
and across the mathematical sciences.
In addition to their use as analytical tools, they arise in a variety of
applications including hypothesis testing, parameter estimation, feature selection, and optimal experimental
design.
In many of these applications, it is important to \emph{estimate} these
functionals from data so that they can be used in downstream algorithmic or
scientific tasks.
In this paper, we develop a recipe for estimating statistical
functionals of one or more nonparametric distributions based on the notion of
influence functions.
Entropy estimators are used in applications ranging from independent components
analysis~\citep{learned2003ica}, intrinsic dimension
estimation~\citep{carter10intrinsic} and several signal processing
applications~\citep{hero02entropicGraphs}.
Divergence estimators are useful in statistical tasks such as two-sample testing.
Recently they have also gained popularity as they are used to measure (dis)-similarity between objects that are modeled as distributions, in what is known as the ``machine learning on distributions" framework~\citep{dhillon03information,poczos12divergence}.
Mutual information estimators have been used in learning tree-structured Markov random fields~\citep{liu2012exponential}, feature selection~\citep{peng2005feature}, clustering~\citep{lewi2006real} and neuron classification~\citep{schneidman02neurons}.
In the parametric setting, conditional divergence and conditional mutual
information estimators are
used for conditional two sample testing or as
building blocks for structure learning in graphical models.
Nonparametric estimators for these quantities
could potentially allow us to generalise several of these
algorithms to the nonparametric domain.
Our approach gives sample-efficient estimators for all these quantities
(and many others), which often outperform the existing estimators both
theoretically and empirically.
Our approach to estimating these functionals is based on post-hoc correction
of a preliminary estimator using the Von Mises Expansion
\cite{vandervaart98asymptotic,fernholz83vonmises}.
This idea has been used before in
semiparametric statistics literature~\citep{birge95estimation,robins09quadraticvm}.
However, hitherto most
studies are restricted to functionals of one distribution and have
focused on a ``data-split" approach which splits the samples for
density estimation and functional estimation.
While the data-split (\ds) estimator is known to achieve the parametric convergence
rate for sufficiently
smooth densities \cite{birge95estimation,laurent1996efficient}, in practical settings
splitting the data results in poor empirical performance.
In this paper we introduce the calculus of influence functions to the machine
learning community and
considerably expand existing results by proposing a ``leave-one-out" (\loo) estimator
which makes efficient use of the data and has
better empirical performance than the DS technique.
We also extend the framework of influence functions
to functionals of multiple distributions and develop both \dss and \loos
estimators.
The main contributions of this paper are:
\begin{enumerate}[leftmargin=*]
\item We propose a \loos technique to estimate functionals of a single distribution
with the same convergence rates as the \dss estimator. However, the
\loos estimator performs better empirically.
\item We extend the framework to functionals of multiple distributions and
analyse their convergence.
Under sufficient smoothness both \dss and \loos estimators achieve the parametric
rate and the \dss
estimator has a limiting normal distribution.
\item We prove a lower bound for estimating functionals of multiple distributions.
We use this to establish minimax optimality of the
\dss and \loos estimators under sufficient smoothness.
\item We use the approach to construct and implement estimators for various entropy,
divergence, and mutual information quantities, and their conditional versions.
A subset of these functionals are
listed in Table~\ref{tb:functionalDefns}. For several functionals, \emph{these are the only
known estimators}.
Our software is publicly available at
\texttt{github.com/kirthevasank/if-estimators}.
\item We compare our estimators against several other approaches in simulation.
Despite the generality of our approach, our estimators are
competitive with and in many cases superior to existing specialized approaches for
specific functionals. We also demonstrate how our estimators can be used in machine
learning applications via an image clustering task.
\end{enumerate}
Our focus on information theoretic quantities is due to their
relevance in machine learning applications, rather than a limitation
of our approach. Indeed our techniques apply to any smooth functional.
\label{sec:relatedwork}
\textbf{History:}
We provide a brief history of the post-hoc correction technique and influence functions.
We defer a detailed discussion of other approaches to estimating
functionals to Section~\ref{sec:comparison}.
To our knowledge, the first paper using a post-hoc correction
estimator was that of~\citet{bickel1988estimating}.
The line of work
following this paper analyzed integral functionals of a single
one-dimensional density of the form $\int \nu(p)$
~\citep{bickel1988estimating,birge95estimation,
laurent1996efficient,kerkyacharian1996estimating}. A recent paper
by \citet{krishnamurthy14renyi} also extends this line to functionals
of multiple densities, but only considers polynomial functionals of
the form $\int p^\alpha q^\beta$ for densities $p$ and $q$.
Moreover, all these works use data splitting.
Our work builds on these by extending to a more general class of
functionals and we propose the superior \loos estimator.
A fundamental quantity in the design of our estimators is
the \emph{influence function}, which appears both in
robust and semiparametric
statistics. Indeed, our work is inspired
by that of~\citet{robins09quadraticvm} and~\citet{emery98probstat}
who propose a (data-split)
influence-function based estimator for functionals of a single
distribution.
Their analysis for nonparametric problems relies
on ideas from semiparametric statistics: they define
influence functions for parametric models and then analyze estimators
by looking at all parametric submodels through the true parameter.
\section{Functionals of Multiple Distributions}
\label{sec:multipleDistros}
\input{functionalEstimators}
We focus here on functionals of two distributions -- the extension to multiple
distributions is straightforward.
In the same vein as above, we take $\Mcal$ to be the set of all measures
with densities in $L_2(\Xcal)$, and from now on we write all derivatives in terms of
the densities.
The VME reduces to a Taylor expansion on the densities
(Lemma~\ref{lem:vmeTaylorEqTwoDistro}),
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&T(q_1, q_2)
\numberthis \label{eqn:funcTaylorTwoDistro} \\
&= T(p_1, p_2) + \phi'\left(\int \nu(p_1, p_2) \right)
\int \partialfrac{p_1}{\nu} (q_1 - p_1) \\
&\hspace{0.2in} + \phi'\left(\int \nu(p_1, p_2) \right)
\int \partialfrac{p_2}{\nu} (q_2 - p_2) + R_2 \\
&= T(p_1, p_2) + \int \psi_1(\cdot; p_1, p_2) q_1 +
\int \psi_2(\cdot; p_1, p_2) q_2\\
&\hspace{0.2in} + O \left( \|p_1 - q_1\|^2 + \|p_2 - q_2\|^2 \right),
\end{align*}
}
{
\begin{align*}
T(q_1, q_2)
\numberthis \label{eqn:funcTaylorTwoDistro}
&= T(p_1, p_2) + \phi'\left(\int \nu(p_1, p_2) \right)
\int \partialfrac{p_1}{\nu} (q_1 - p_1)
+ \phi'\left(\int \nu(p_1, p_2) \right)
\int \partialfrac{p_2}{\nu} (q_2 - p_2) + R_2 \\
&= T(p_1, p_2) + \int \psi_1(\cdot; p_1, p_2) q_1 +
\int \psi_2(\cdot; p_1, p_2) q_2
+ O \left( \|p_1 - q_1\|^2 + \|p_2 - q_2\|^2 \right),
\end{align*}
}
\endgroup
where $R_2 \in O \left( \|p_1 - q_1\|^2 + \|p_2 - q_2\|^2 \right)$ if the second derivatives are bounded.
As before, we have replaced the first order terms with the corresponding terms in the VME.
The estimator for two distributions is analogous to the one distribution case.
Suppose we have samples $\Xonetwo\sim f$ and $\Yonetwo\sim g$.
Let the influence functions of $T$ w.r.t
$f, g$ be $\psi_f, \psi_g$ respectively.
Use the first halves of the data $\Xone,
\Yone$ to construct density estimates $\fhat = \fhat(\Xone)$ and $\ghat = \ghat(\Yone)$. Then, use $\Xtwo, \Ytwo$ as follows:
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
\Tf_1 &= T(\fhat, \ghat) +
\frac{1}{n} \sum_{i=n+1}^{2n} \psi_f(X_i; \fhat, \ghat) \\
&\hspace{0.3in}
+ \frac{1}{m} \sum_{j=m+1}^{2m} \psi_g(Y_j; \fhat, \ghat) \numberthis
\label{eqn:estimatorTwoDistro}
\end{align*}
}
{
\begin{align*}
\Tf_1 &= T(\fhat, \ghat) +
\frac{1}{n} \sum_{i=n+1}^{2n} \psi_f(X_i; \fhat, \ghat)
+ \frac{1}{m} \sum_{j=m+1}^{2m} \psi_g(Y_j; \fhat, \ghat) \numberthis
\label{eqn:estimatorTwoDistro}
\end{align*}
}
Finally, we swap the roles of the two halves $(\Xone, \Yone)$ and $(\Xtwo, \Ytwo)$ to obtain $\Tf_2$ and then
compute $\Tf = (\Tf_1 + \Tf_2)/2$.
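To make the construction concrete, the sketch below instantiates \eqref{eqn:estimatorTwoDistro} on the toy functional $T(f,g) = \int fg$, whose influence functions work out to $\psi_f(x) = g(x) - T(f,g)$ and $\psi_g(y) = f(y) - T(f,g)$, so the corrected estimator collapses to the mean of $\ghat$ over $\Xtwo$ plus the mean of $\fhat$ over $\Ytwo$ minus the plug-in $\int \fhat\ghat$. Everything here (the Gaussian KDE, the bandwidth, the Gaussian test case) is an illustrative assumption, not part of the paper:

```python
import numpy as np

def kde(train, x, h):
    # Gaussian kernel density estimate built from `train`, evaluated at `x`.
    d = (x[:, None] - train[None, :]) / h
    return np.exp(-0.5 * d * d).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))

def inner_product_if(X, Y, h=0.1):
    """Data-split influence-function estimate of T(f, g) = int f g.
    With psi_f(x) = g(x) - T and psi_g(y) = f(y) - T, the corrected estimator
    collapses to: mean g_hat(X_2) + mean f_hat(Y_2) - int f_hat g_hat."""
    n, m = len(X) // 2, len(Y) // 2
    grid = np.linspace(-5.0, 5.0, 2001)
    dx = grid[1] - grid[0]
    ests = []
    for (Xa, Xb), (Ya, Yb) in [((X[:n], X[n:]), (Y[:m], Y[m:])),
                               ((X[n:], X[:n]), (Y[m:], Y[:m]))]:
        f_hat, g_hat = kde(Xa, grid, h), kde(Ya, grid, h)
        plug_in = (f_hat * g_hat).sum() * dx          # T(f_hat, g_hat)
        ests.append(np.mean(kde(Ya, Xb, h)) + np.mean(kde(Xa, Yb, h)) - plug_in)
    return float(np.mean(ests))                       # average over the two folds

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 4000)   # f = N(0, 1)
Y = rng.normal(0.5, 1.0, 4000)   # g = N(0.5, 1)
est = inner_product_if(X, Y)
truth = np.exp(-0.5**2 / 4) / (2 * np.sqrt(np.pi))   # closed form for two unit Gaussians
```

Averaging over the two fold assignments mirrors the $\Tf = (\Tf_1 + \Tf_2)/2$ step described above.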
\subsection{Some Examples}
We demonstrate the applicability
of this framework by presenting estimators for some common functionals.
They are given in Table~\ref{tb:functionalDefns}.
While all estimators have been derived using our method of influence functions,
we mention that for computational reasons this approach may not be desirable.
For example, the estimator for \renyi divergence in Table~\ref{tb:functionalDefns} requires numerical integration.
This can be avoided by instead using an influence function-based estimator for $\int \px^\alpha \py^{1-\alpha}$ and passing this estimate through the $\log$ function.
More generally, one can estimate a functional $\phi\left(\int \nu(p)\right)$ by first estimating $\int \nu(p)$ and plugging into $\phi$.
If $\int \nu(p)$ is bounded and $\phi$ is Lipschitz in that domain, then this
estimator achieves the same rates.
Further, by the delta method, it will also be asymptotically normal.
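A quick Monte Carlo illustration of this plug-in argument (all numbers below are invented for the demo): if $\widehat S$ is a $\sqrt{n}$-consistent estimate of $S = \int \nu(p)$ with asymptotic variance $v$, then $\phi(\widehat S)$ with $\phi = \log$ (as in the \renyi case) has asymptotic variance $\phi'(S)^2 v = v/S^2$ by the delta method:

```python
import numpy as np

rng = np.random.default_rng(1)
S, v, n = 0.5, 0.09, 400                  # hypothetical target, asy. variance, sample size
# Simulate many sqrt(n)-consistent estimates of S.
S_hat = S + rng.normal(0.0, np.sqrt(v / n), size=200_000)
phi_hat = np.log(S_hat)                   # plug each estimate into phi = log
delta_var = v / S**2                      # delta method: phi'(S)^2 * v
```

Empirically, $n \cdot \mathrm{Var}(\phi(\widehat S))$ matches `delta_var` closely, as the delta method predicts.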
\section{Functionals of One Distribution}
\label{sec:oneDistro}
In this section, we focus on estimating a functional
$T(F) = T(f)= \phi(\int \nu(f) d\mu)$ (where $f$ is the density of $F$)
from samples $\Xonetwo \sim F$.
Equation~\eqref{eqn:functaylor} leads to a natural estimator.
If we use half of the data $\Xone$ to construct a density estimator
$\fhat = \fhat(\Xone)$, then Equation~\eqref{eqn:functaylor} shows that:
\begin{align*}
T(f) - T(\fhat) = \int \psi(x; \fhat)f(x)d\mu + O(\|f - \fhat\|_2^2)
\end{align*}
Since the influence function does not depend on the unknown distribution $F$,
the first term on the right hand side is simply an expectation.
We use the second half of the data to estimate this expectation with its sample mean.
This leads to the following preliminary estimator:
\begin{equation}
\Tf_1 = T(\fhat) + \frac{1}{n}\sum_{i=n+1}^{2n} \psi(X_i; \fhat)
\label{eqn:estimator}
\end{equation}
We can similarly construct an estimator $\Tf_2$ by using $\Xtwo$
for density estimation and $\Xone$ for averaging. Our final estimator is
obtained via the average $\Tf =(\Tf_1+\Tf_2)/2$. While $\Tf_1$ itself attains
the correct rate of convergence, averaging to form $\Tf$ also gives the correct
constants for asymptotic normality.
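As a concrete instance of \eqref{eqn:estimator} (illustrative only, not an example from the paper), take $T(p) = \int p^2$, for which $\psi(x; p) = 2p(x) - 2\int p^2$; the corrected estimator then simplifies to twice the sample mean of $\fhat$ over the held-out half minus the plug-in $\int \fhat^2$. The KDE, bandwidth, and Gaussian test case below are assumptions for the demo:

```python
import numpy as np

def kde(train, x, h):
    # Gaussian kernel density estimate built from `train`, evaluated at `x`.
    d = (x[:, None] - train[None, :]) / h
    return np.exp(-0.5 * d * d).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))

def quadratic_if(X, h=0.1):
    """Data-split estimator of T(p) = int p^2 with influence function
    psi(x; p) = 2 p(x) - 2 int p^2, averaged over both fold assignments."""
    n = len(X) // 2
    grid = np.linspace(-5.0, 5.0, 2001)
    dx = grid[1] - grid[0]
    ests = []
    for train, evals in [(X[:n], X[n:]), (X[n:], X[:n])]:
        p_hat = kde(train, grid, h)
        plug_in = (p_hat ** 2).sum() * dx                  # T(p_hat)
        psi_bar = 2 * kde(train, evals, h).mean() - 2 * plug_in
        ests.append(plug_in + psi_bar)                     # = 2 mean p_hat(X) - plug-in
    return float(np.mean(ests))

rng = np.random.default_rng(0)
est = quadratic_if(rng.normal(0.0, 1.0, 4000))
truth = 1 / (2 * np.sqrt(np.pi))                           # int phi^2 for N(0, 1)
```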
We establish the following result for $\Tf$. \\
\begin{theorem}[Bias, Variance, Asymptotic Normality]
Let $T$ have bounded second derivatives and $\psi$ be bounded. If $f \in
\Sigma(s,L)$
the Bias is $O\left(n^{\frac{-2s}{2s+d}}\right)$.
The Variance is
$O(n^{-1})$ when $s>d/2$ and $O\left(n^{\frac{-4s}{2s+d}}\right)$ otherwise.
Under certain regularity conditions, the estimator is asymptotically normal when
$s > d/2$.
\end{theorem}
Even though our analysis of this estimator is new, we defer it to the appendix
since the result is incremental in comparison to \cite{robins09quadraticvm,
emery98probstat} who
analyse similar estimators in the semiparametric setting.
However, we extend this analysis for functionals of multiple
distributions in the next section.
\section{Preliminaries}
\label{sec:prelims}
Let $\Xcal$ be a compact metric space equipped with a measure $\mu$, e.g. the
Lebesgue measure.
Let $P$ and $Q$ be measures over $\Xcal$ that are absolutely continuous w.r.t $\mu$.
Let $p,q \in L_2(\Xcal)$ be the Radon-Nikodym derivatives with respect to $\mu$.
We focus on estimating functionals of the form:
\begin{align}
T(P) &= T(p) = \phi\left(\int\nu(p)d\mu\right) \qquad \textrm{or} \qquad
T(P,Q) = T(p,q) = \phi\left(\int \nu(p,q) d\mu\right),
\end{align}
where $\phi, \nu$ are real-valued Lipschitz functions that are twice differentiable.
Our framework permits more general
functionals -- e.g. functionals based on the conditional densities --
but we will focus on this form for ease of exposition.
To facilitate presentation of the main definitions, it is easiest to work with
functionals of one distribution $T(P)$.
Define $\Mcal$ to be the set of all measures that are absolutely
continuous w.r.t $\mu$, whose Radon-Nikodym derivatives belong to $L_2(\Xcal)$.
Central to our development is the Von Mises expansion (VME), which is the
distributional analog of the Taylor expansion.
For this we introduce the \gateaux derivative which imposes a notion of
differentiability in topological spaces. We then introduce the \emph{influence
function}. \\[\thmparaspacing]
\begin{definition
The map $T':\Mcal \rightarrow \RR$ where
$
T'(H;P) = \partialfrac{t}{T(P + tH )}|_{t=0}
$
is called the \textbf{\gateaux
derivative} at $P$ if the derivative exists and is linear and continuous in $H$.
We say $T$ is \gateaux differentiable at $P$ if $T'$ exists at $P$.
\\[\thmparaspacing]
\end{definition}
\begin{definition
Let $T$ be \gateaux differentiable at $P$.
A function $\psi(\cdot; P) : \Xcal \rightarrow \RR$ which satisfies
$
T'(Q-P; P) = \int \psi(x; P) \ud Q(x),
$
is the \textbf{influence function} of $T$ w.r.t the distribution $P$.
\label{def:infFun}
\end{definition}
The existence and uniqueness of the influence function is guaranteed by the
Riesz representation theorem, since the domain of $T$ is in bijection with
$L_2(\Xcal)$ and is consequently a Hilbert space.
The classical work of~\citet{fernholz83vonmises} defines the influence function
in terms of the \gateaux derivative by,
\begin{equation}
\psi(x, P) = T'(\delta_x - P, P) = \partialfracat{t}{T((1-t)P + t\delta_x )}{0},
\label{eqn:infFunGateaux}
\end{equation}
where $\delta_x$ is the Dirac delta function at $x$.
While our functionals are defined only on non-atomic distributions, we can still
use~\eqref{eqn:infFunGateaux} to compute the influence function.
The function computed this way can be shown to satisfy
Definition~\ref{def:infFun}.
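Equation~\eqref{eqn:infFunGateaux} can also be checked numerically. The sketch below (an illustration under assumed settings, with a narrow Gaussian standing in for $\delta_x$) differences $T((1-t)P + t\delta_x)$ for the toy functional $T(p) = \int p^2$ and compares against the closed-form influence function $\psi(x; p) = 2p(x) - 2\int p^2$:

```python
import numpy as np

grid = np.linspace(-6.0, 6.0, 4001)
dx = grid[1] - grid[0]
p = np.exp(-grid**2 / 2) / np.sqrt(2 * np.pi)    # standard normal density

def T(dens):
    return (dens ** 2).sum() * dx                # plug-in int p^2 on the grid

x0, eps, t = 0.7, 0.05, 1e-4
# Narrow Gaussian as a smoothed Dirac delta at x0.
delta = np.exp(-(grid - x0)**2 / (2 * eps**2)) / (eps * np.sqrt(2 * np.pi))
fd = (T((1 - t) * p + t * delta) - T(p)) / t     # finite-difference Gateaux derivative
closed = 2 * np.interp(x0, grid, p) - 2 * T(p)   # psi(x0; p) = 2 p(x0) - 2 int p^2
```

The finite difference agrees with the closed form up to the smoothing width of the surrogate delta and the step size $t$.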
Based on the above, the first order VME is,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
T(Q) &= T(P) + T'(Q-P; P) + R_2(P,Q) \numberthis
\label{eqn:oneDistroVME}\\
&= T(P) + \int \psi(x; P) \ud Q(x) + R_2(P,Q),
\end{align*}
}
{
\begin{align}
T(Q) = T(P) + T'(Q-P; P) + R_2(P,Q)
= T(P) + \int \psi(x; P) \ud Q(x) + R_2(P,Q),
\label{eqn:oneDistroVME}
\end{align}
}
where $R_2$ is the second order remainder.
\gateaux differentiability alone will not be
sufficient for our purposes. In what follows, we will assign
$Q \rightarrow F$
and $P\rightarrow \widehat{F}$, where $F$, $\widehat{F}$ are the
true and estimated distributions.
We would like to bound the remainder in
terms of a distance between $F$ and $\widehat{F}$.
By taking the domain of $T$ to be only measures with continuous densities,
we can control $R_2$ using the $L_2$ metric of the
densities.
This essentially means that our functionals satisfy a stronger form of
differentiability called \frechet differentiability
\citep{vandervaart98asymptotic, fernholz83vonmises} in the $L_2$ metric.
Consequently, we can write all derivatives in terms of the
densities, and the VME reduces to a functional Taylor expansion on the densities
(Lemmas~\ref{lem:vmeTaylorEqOneDistro},~\ref{lem:vmeTaylorEqTwoDistro}
in Appendix~\ref{sec:appAncillaryResults}):
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
T(q) &= T(p) + \phi'\left(\int \nu(p)\right)\int(q-p) \nu'(p) \\
&\hspace{0.8in}+ R_2(p,q) \\
&= T(p) + \int \psi(x; p) q(x) \ud x + \bigO(\|p-q\|_2^2).
\numberthis
\label{eqn:functaylor}
\end{align*}
}
{
\begin{align*}
T(q) &= T(p) + \phi'\left(\int \nu(p)\right)\int(q-p) \nu'(p) +
R_2(p,q) \\
&= T(p) + \int \psi(x; p) q(x) \ud x + \bigO(\|p-q\|_2^2).
\numberthis
\label{eqn:functaylor}
\end{align*}
}
This expansion will be the basis for our estimators.
These ideas generalize to functionals of multiple distributions and to settings where the functional involves quantities other than the density.
A functional $T(P,Q)$ of two distributions has two \gateaux derivatives, $T'_i(\cdot; P,Q)$ for $i=1,2$ formed by perturbing the $i$th argument with the other fixed.
The influence functions $\psi_1,\psi_2$ satisfy, $\forall P_1,P_2\in\Mcal$,
\begingroup
\allowdisplaybreaks
\begin{align*}
&T_1'(Q_1-P_1; P_1, P_2) =
\partialfracat{t}{T(P_1+t(Q_1-P_1), P_2)}{0} =
\int \psi_1(u; P_1, P_2) \ud Q_1(u).
\numberthis \label{eqn:defInfFun} \\
&T_2'(Q_2-P_2; P_1, P_2) =
\partialfracat{t}{T(P_1,P_2 +t(Q_2-P_2))}{0} =
\int \psi_2(u; P_1, P_2) \ud Q_2(u).
\end{align*}
\endgroup
The VME can be written as,
\begin{align*}
T(q_1, q_2) &= T(p_1, p_2) + \int \psi_1(x; p_1, p_2) q_1(x) \ud x +
\int \psi_2(x; p_1, p_2) q_2(x) \ud x \\
&\hspace{0.2in}+ \bigO(\|p_1-q_1\|_2^2) + \bigO(\|p_2-q_2\|_2^2).
\numberthis
\label{eqn:functaylortwoD}
\end{align*}
\section*{\Large Appendix}
\input{appAncillaryResults}
\input{appOneDistro}
\input{appOneDistroLooES}
\input{appMultipleDistros}
\input{appMultipleDistrosLooES}
\input{appLowerBound}
\input{illustrativeExample}
\input{appExperiments}
\input{functionalEstimators2}
\end{document}
\section{An Illustrative Example}
\label{sec:workedExample}
Now we present a step-by-step guide on applying our framework to
estimate any desired functional.
We choose the conditional \tsallis divergence from Table~\ref{tb:functionalDefns}
because it is a pedagogically good example with which to illustrate the technique.
By following a similar procedure, one may derive an estimator for any desired
functional.
The Conditional \tsallis divergence
($\alpha \neq 0, 1$) between $X$ and $Y$ conditioned on $Z$ can be written
in terms of joint densities $\pxz, \pyz$.
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\tsalliscd(p_{X|Z} \| p_{Y|Z}; p_Z) = \tsalliscd(\pxz, \pyz) \\
&=\int p_Z(z)\frac{1}{\alpha-1}\left(\int p_{X|Z}^{\alpha}(u,z)
p_{Y|Z}^{1-\alpha}(u,z) \ud u -1\right) \ud z \\
&= \frac{1}{1- \alpha} + \frac{1}{\alpha-1}
\int p^{\alpha}_{XZ}(u,z) p^{\beta}_{YZ}(u,z) \ud u \ud z
\end{align*}
}
{
\begin{align*}
\tsalliscd(p_{X|Z} \| p_{Y|Z}; p_Z) = \tsalliscd(\pxz, \pyz)
&=\int p_Z(z)\frac{1}{\alpha-1}\left(\int p_{X|Z}^{\alpha}(u,z)
p_{Y|Z}^{1-\alpha}(u,z) \ud u -1\right) \ud z \\
&= \frac{1}{1- \alpha} + \frac{1}{\alpha-1}
\int p^{\alpha}_{XZ}(u,z) p^{\beta}_{YZ}(u,z) \ud u \ud z
\end{align*}
}
where we have taken $\beta = 1-\alpha$. We have samples
$V_i = (X_i, Z_{1i}) \sim \pxz, i = 1, \dots, 2n$
and $W_j = (Y_j,Z_{2j})\sim \pyz, j=1, \dots, 2m$.
We will assume $\pxz, \pyz \in \Sigma(s,L,B', B)$.
For brevity, we will write $p = (\pxz, \pyz)$ and $\phat = (\pxzhat, \pyzhat)$.
We begin with a functional Taylor expansion of $\tsalliscd(f,g)$ around
$(f_0, g_0)$. Since $\alpha, \beta \neq 0, 1$,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
\tsalliscd(f,g) &=
\tsalliscd(f_0, g_0) +
\frac{\alpha}{\alpha - 1} \int f_0^{\alpha-1}g_0^\beta + \numberthis
\label{eqn:tsalliscdvme} \\
& - \int f_0^{\alpha}g_0^{\beta-1} +
O\left( \|f-f_0\|^2 + \|g-g_0\|^2 \right)
\end{align*}
}
{
\begin{align*}
\tsalliscd(f,g) &=
\tsalliscd(f_0, g_0) +
\frac{\alpha}{\alpha - 1} \int f_0^{\alpha-1}g_0^\beta \numberthis
\label{eqn:tsalliscdvme}
- \int f_0^{\alpha}g_0^{\beta-1} +
O\left( \|f-f_0\|^2 + \|g-g_0\|^2 \right)
\end{align*}
}
We can bound the second order terms by $O\left( \|f-f_0\|^2 + \|g-g_0\|^2
\right)$. Precisely, the second order
remainder is,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\frac{\alpha^2}{\alpha - 1} \int f_*^{\alpha-2}g_*^\beta (f - f_0)^2
-\beta \int f_*^{\alpha}g_*^{\beta-2} (g - g_0)^2 \\
&+\frac{\alpha\beta}{\alpha -1} \int f_*^{\alpha-1}g_*^{\beta} (f-f_0)(g-g_0)
\end{align*}
}
{
\begin{align*}
&\frac{\alpha^2}{\alpha - 1} \int f_*^{\alpha-2}g_*^\beta (f - f_0)^2
-\beta \int f_*^{\alpha}g_*^{\beta-2} (g - g_0)^2
+\frac{\alpha\beta}{\alpha -1} \int f_*^{\alpha-1}g_*^{\beta} (f-f_0)(g-g_0)
\end{align*}
}
where $(f_*,g_*)$ is in the line segment between $(f,g)$ and $(f_0,g_0)$.
If $f,g,f_0,g_0$ are bounded above and below, then so are $f_*, g_*$ and
$ f_*^a g_*^b $ where $a,b$ are coefficients depending on $\alpha$.
The first two terms are respectively
$O\left( \|f-f_0\|^2\right)$, $O\left(\|g-g_0\|^2 \right)$.
The cross term can be bounded via,
$\abr{\int (f-f_0)(g-g_0) } \leq \int \max\{ |f-f_0|^2, |g-g_0|^2 \} \in
O( \|f-f_0\|^2 + \|g-g_0\|^2)$. We now state the influence functions of
$\tsalliscd$. The derivation is fairly straightforward using the \gateaux
derivative and is given in Appendix~\ref{sec:appWorkedExample}.
\\[\thmparaspacing]
\begin{proposition}[Influence Functions of $\tsalliscd$]
The influence functions of $\tsalliscd$ w.r.t $\pxz$, $\pyz$ are
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\psixz(X, Z_1; \pxz, \pyz) = \numberthis \label{eqn:tsalliscdInfFun} \\
&\frac{\alpha}{\alpha-1} \left(
\pxz^{\alpha-1}(X,Z_1)\pyz^{\beta}(X,Z_1) - \int \pxz^\alpha\pyz^\beta \right) \\
&\psiyz(Y, Z_2; \pxz, \pyz) = \\
& - \left(
\pxz^{\alpha}(Y,Z_2)\pyz^{\beta-1}(Y,Z_2) - \int \pxz^\alpha\pyz^\beta \right)
\end{align*}
}
{
\begin{align*}
\psixz(X, Z_1; \pxz, \pyz) &= \numberthis \label{eqn:tsalliscdInfFun}
\frac{\alpha}{\alpha-1} \left(
\pxz^{\alpha-1}(X,Z_1)\pyz^{\beta}(X,Z_1) - \int \pxz^\alpha\pyz^\beta \right) \\
\psiyz(Y, Z_2; \pxz, \pyz) &=
- \left(
\pxz^{\alpha}(Y,Z_2)\pyz^{\beta-1}(Y,Z_2) - \int \pxz^\alpha\pyz^\beta \right)
\end{align*}
}
\endgroup
\label{thm:tsalliscdInfFun}
\end{proposition}
We use
$\Vone, \Wone$ to construct density estimates $\pxzhat, \pyzhat$ for $\pxz,
\pyz$. Then, use $\Vtwo, \Wtwo$ to add the sample means of the influence
functions given in
Proposition~\ref{thm:tsalliscdInfFun}. This results in our preliminary estimator,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
\tsalliscdestimss{1} &= \frac{1}{1 - \alpha} + \frac{\alpha}{\alpha-1}
\frac{1}{n}\sum_{i=n+1}^{2n}
\left( \frac{\pxzhat(X_i, Z_{1i})}{\pyzhat(X_i, Z_{1i})} \right)^{\alpha-1} \\
&\hspace{0.2in} - \frac{1}{m}\sum_{j=m+1}^{2m}
\left( \frac{\pxzhat(Y_j, Z_{2j})}{\pyzhat(Y_j, Z_{2j})} \right)^{\alpha}
\numberthis.
\label{eqn:tsalliscdestim}
\end{align*}
}
{
\begin{align*}
\tsalliscdestimss{1} &= \frac{1}{1 - \alpha} + \frac{\alpha}{\alpha-1}
\frac{1}{n}\sum_{i=n+1}^{2n}
\left(\frac{\pxzhat(X_i, Z_{1i})}{\pyzhat(X_i, Z_{1i})} \right)^{\alpha-1}
- \frac{1}{m}\sum_{j=m+1}^{2m}
\left(\frac{\pxzhat(Y_j, Z_{2j})}{\pyzhat(Y_j, Z_{2j})} \right)^{\alpha}
\numberthis.
\label{eqn:tsalliscdestim}
\end{align*}
}
\endgroup
The final estimate is
$\tsalliscdestim = (\tsalliscdestimss{1} + \tsalliscdestimss{2})/2$
where $\tsalliscdestimss{2}$ is obtained by swapping the two samples.
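For intuition, the snippet below implements the direct analogue of \eqref{eqn:tsalliscdestim} for the \emph{unconditional} \tsallis divergence (drop the $Z$ coordinate; the algebra goes through unchanged). The KDE, bandwidth, and Gaussian test case are assumptions for the demo, not part of the paper:

```python
import numpy as np

def kde(train, x, h):
    # Gaussian kernel density estimate built from `train`, evaluated at `x`.
    d = (x[:, None] - train[None, :]) / h
    return np.exp(-0.5 * d * d).sum(axis=1) / (len(train) * h * np.sqrt(2 * np.pi))

def tsallis_if(X, Y, alpha=0.75, h=0.2):
    """Data-split influence-function estimator of the (unconditional) Tsallis
    divergence, averaged over the two fold assignments."""
    n, m = len(X) // 2, len(Y) // 2
    ests = []
    for (Xa, Xb), (Ya, Yb) in [((X[:n], X[n:]), (Y[:m], Y[m:])),
                               ((X[n:], X[:n]), (Y[m:], Y[:m]))]:
        fX, gX = kde(Xa, Xb, h), kde(Ya, Xb, h)   # f_hat, g_hat at the X eval fold
        fY, gY = kde(Xa, Yb, h), kde(Ya, Yb, h)   # f_hat, g_hat at the Y eval fold
        ests.append(1.0 / (1 - alpha)
                    + alpha / (alpha - 1) * np.mean((fX / gX) ** (alpha - 1))
                    - np.mean((fY / gY) ** alpha))
    return float(np.mean(ests))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, 4000)   # f = N(0, 1)
Y = rng.normal(0.3, 1.0, 4000)   # g = N(0.3, 1)
alpha = 0.75
# Closed form for unit-variance Gaussians: int f^a g^(1-a) = exp(-a(1-a) d^2 / 2).
truth = (np.exp(-alpha * (1 - alpha) * 0.3**2 / 2) - 1) / (alpha - 1)
est = tsallis_if(X, Y, alpha)
```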
As mentioned earlier, the boundedness of the densities give us the rates in
Corollary~\ref{thm:ratesholder}.
To show asymptotic normality, we need to verify the conditions in
Theorem~\ref{thm:asympNormalTwoDistro}.
This is a fairly routine exercise and we outline the steps in
Appendix~\ref{sec:appWorkedExample}.
We state the result formally below. \\
\begin{corollary}
\label{thm:tsalliscdAsympNormal}
Let $\pxz,\pyz \in \Sigma(s, L, B, B')$. Then $\tsalliscdestim$
is asymptotically normal when $\pxz \neq \pyz$ and $s>d/2$.
\label{thm:condTsallisAsympNormal}
\end{corollary}
Finally, to construct a confidence interval we need a consistent
estimate of the asymptotic variance:
$\frac{1}{\zeta} \VV_{XZ}\left[ \psixz(V; p) \right]
+ \frac{1}{1-\zeta} \VV_{YZ} \left[ \psiyz(W; p) \right]$ where,
\begingroup
\allowdisplaybreaks
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
& \VV_{XZ}\left[ \psixz(X, Z_1; \pxz, \pyz) \right] = \\
& \left( \frac{\alpha}{\alpha-1} \right)^2 \left( \int \pxz^{2\alpha-1}
\pyz^{2\beta} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right) \\
& \VV_{YZ}\left[ \psiyz(Y, Z_2; \pxz, \pyz) \right] = \\
& \left( \int \pxz^{2\alpha}
\pyz^{2\beta-1} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right)
\end{align*}
}
{
\begin{align*}
\VV_{XZ}\left[ \psixz(X, Z_1; \pxz, \pyz) \right] &=
\left( \frac{\alpha}{\alpha-1} \right)^2 \left( \int \pxz^{2\alpha-1}
\pyz^{2\beta} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right) \\
\VV_{YZ}\left[ \psiyz(Y, Z_2; \pxz, \pyz) \right] &=
\left( \int \pxz^{2\alpha}
\pyz^{2\beta-1} - \left( \int \pxz^\alpha \pyz^\beta \right)^2 \right)
\end{align*}
}
\endgroup
From our analysis above,
we know that any
functional of the form $S(a,b) = \int \pxz^a \pyz^b$, $a+b=1, a,b\neq 0,1$
can be estimated via
\begin{align*}
\widehat{S}(a,b) =
\frac{a}{n}\sum_{i=n+1}^{2n} \frac{\widehat{p}^b_{YZ}(V_i)}{\widehat{p}^b_{XZ}(V_i)}
+
\frac{b}{m}\sum_{j=m+1}^{2m} \frac{\widehat{p}^a_{XZ}(W_j)}{\widehat{p}^a_{YZ}(W_j)}
\end{align*}
where $\pxzhat,\pyzhat$ are the density estimates from $\Vone, \Wone$.
$n/N$ is a consistent estimator for $\zeta$.
This gives the following estimator for the asymptotic variance,
\ifthenelse{\boolean{istwocolumn}}
{
\begin{align*}
&\frac{N}{n}\frac{\alpha^2}{(\alpha-1)^2} \widehat{S}(2\alpha-1, 2\beta) +
\frac{N}{m} \widehat{S}(2\alpha, 2\beta-1) \\
&-
\frac{N (m\alpha^2 + n(\alpha-1)^2)}{nm(\alpha-1)^2}\widehat{S}^2(\alpha, \beta).
\end{align*}
}
{
\begin{align*}
\frac{N}{n}\frac{\alpha^2}{(\alpha-1)^2} \widehat{S}(2\alpha-1, 2\beta) +
\frac{N}{m} \widehat{S}(2\alpha, 2\beta-1) -
\frac{N (m\alpha^2 + n(\alpha-1)^2)}{nm(\alpha-1)^2}\widehat{S}^2(\alpha, \beta).
\end{align*}
}
The consistency of this estimator follows from the consistency of
$\widehat{S}(a,b)$ for $S(a,b)$, Slutsky's theorem and the
continuous mapping theorem.
Sawiskera (88611 Teharonhiawako I Sawiskera) is the satellite of the trans-Neptunian object (88611) Teharonhiawako.

Discovery

Sawiskera was discovered on the night of 11–12 October 2001 at the Las Campanas Observatory by the astronomers D. Osip and S. Burles.

Sawiskera (Mohawk: Sawiskera; Huron: Tawis-karong, Tawiscara) is, in Iroquois mythology, the evil twin brother of Teharonhiawako (Ioskeha among the Hurons).

Orbit

Sawiskera orbits the primary at a distance of 27,300 km.

Physical characteristics

Its diameter is 122 ± 14 km, i.e. about two thirds the size of the primary.

Notes

Trans-Neptunian objects
Satellites of asteroids
12 Fluid Dynamics and Its Biological and Medical Applications

# 12.4 Viscosity and Laminar Flow; Poiseuille's Law

### Summary

- Define laminar flow and turbulent flow.
- Explain what viscosity is.
- Calculate flow and resistance with Poiseuille's law.
- Explain how pressure drops due to resistance.

# Laminar Flow and Viscosity

When you pour yourself a glass of juice, the liquid flows freely and quickly. But when you pour syrup on your pancakes, that liquid flows slowly and sticks to the pitcher. The difference is fluid friction, both within the fluid itself and between the fluid and its surroundings. We call this property of fluids viscosity. Juice has low viscosity, whereas syrup has high viscosity. In the previous sections we have considered ideal fluids with little or no viscosity. In this section, we will investigate what factors, including viscosity, affect the rate of fluid flow.

The precise definition of viscosity is based on laminar, or nonturbulent, flow. Before we can define viscosity, then, we need to define laminar flow and turbulent flow. Figure 1 shows both types of flow. Laminar flow is characterized by the smooth flow of the fluid in layers that do not mix. Turbulent flow, or turbulence, is characterized by eddies and swirls that mix layers of fluid together.

Figure 2 shows schematically how laminar and turbulent flow differ. Layers flow without mixing when flow is laminar. When there is turbulence, the layers mix, and there are significant velocities in directions other than the overall direction of flow. The lines that are shown in many illustrations are the paths followed by small volumes of fluids. These are called streamlines. Streamlines are smooth and continuous when flow is laminar, but break up and mix when flow is turbulent.
Turbulence has two main causes. First, any obstruction or sharp corner, such as in a faucet, creates turbulence by imparting velocities perpendicular to the flow. Second, high speeds cause turbulence. The drag both between adjacent layers of fluid and between the fluid and its surroundings forms swirls and eddies, if the speed is great enough. We shall concentrate on laminar flow for the remainder of this section, leaving certain aspects of turbulence for later sections.

### MAKING CONNECTIONS: TAKE-HOME EXPERIMENT: GO DOWN TO THE RIVER

Try dropping simultaneously two sticks into a flowing river, one near the edge of the river and one near the middle. Which one travels faster? Why?

Figure 3 shows how viscosity is measured for a fluid. Two parallel plates have the specific fluid between them. The bottom plate is held fixed, while the top plate is moved to the right, dragging fluid with it. The layer (or lamina) of fluid in contact with either plate does not move relative to the plate, and so the top layer moves at $v$ while the bottom layer remains at rest. Each successive layer from the top down exerts a force on the one below it, trying to drag it along, producing a continuous variation in speed from $v$ to 0 as shown. Care is taken to ensure that the flow is laminar; that is, the layers do not mix. The motion in Figure 3 is like a continuous shearing motion. Fluids have zero shear strength, but the rate at which they are sheared is related to the same geometrical factors $A$ and $L$ as is shear deformation for solids.

A force $F$ is required to keep the top plate in Figure 3 moving at a constant velocity $v$, and experiments have shown that this force depends on four factors. First, $F$ is directly proportional to $v$ (until the speed is so high that turbulence occurs, in which case a much larger force is needed, and it has a more complicated dependence on $v$). Second, $F$ is proportional to the area $A$ of the plate.
This relationship seems reasonable, since $A$ is directly proportional to the amount of fluid being moved. Third, $F$ is inversely proportional to the distance $L$ between the plates. This relationship is also reasonable; $L$ is like a lever arm, and the greater the lever arm, the less force that is needed. Fourth, $F$ is directly proportional to the coefficient of viscosity, $\eta$. The greater the viscosity, the greater the force required. These dependencies are combined into the equation

$$F = \eta \frac{vA}{L},$$

which gives us a working definition of fluid viscosity $\eta$. Solving for $\eta$ gives

$$\eta = \frac{FL}{vA},$$

which defines viscosity in terms of how it is measured. The SI unit of viscosity is $\mathrm{N \cdot s/m^2} = \mathrm{Pa \cdot s}$. Table 1 lists the coefficients of viscosity for various fluids.

Viscosity varies from one fluid to another by several orders of magnitude. As you might expect, the viscosities of gases are much less than those of liquids, and these viscosities are often temperature dependent. The viscosity of blood can be reduced by aspirin consumption, allowing it to flow more easily around the body. (When used over the long term in low doses, aspirin can help prevent heart attacks, and reduce the risk of blood clotting.)

# Laminar Flow Confined to Tubes: Poiseuille's Law

What causes flow? The answer, not surprisingly, is pressure difference. In fact, there is a very simple relationship between horizontal flow and pressure. Flow rate $Q$ is in the direction from high to low pressure. The greater the pressure differential between two points, the greater the flow rate. This relationship can be stated as

$$Q = \frac{P_2 - P_1}{R},$$

where $P_1$ and $P_2$ are the pressures at two points, such as at either end of a tube, and $R$ is the resistance to flow. The resistance $R$ includes everything, except pressure, that affects flow rate. For example, $R$ is greater for a long tube than for a short one.
The greater the viscosity of a fluid, the greater the value of $R$. Turbulence greatly increases $R$, whereas increasing the diameter of a tube decreases $R$.

If viscosity is zero, the fluid is frictionless and the resistance to flow is also zero. Comparing frictionless flow in a tube to viscous flow, as in Figure 4, we see that for a viscous fluid, speed is greatest at midstream because of drag at the boundaries. We can see the effect of viscosity in a Bunsen burner flame, even though the viscosity of natural gas is small.

The resistance $R$ to laminar flow of an incompressible fluid having viscosity $\eta$ through a horizontal tube of uniform radius $r$ and length $\ell$, such as the one in Figure 5, is given by

$$R = \frac{8\eta\ell}{\pi r^4}.$$

This equation is called Poiseuille's law for resistance after the French scientist J. L. Poiseuille (1799–1869), who derived it in an attempt to understand the flow of blood, an often turbulent fluid.

Let us examine Poiseuille's expression for $R$ to see if it makes good intuitive sense. We see that resistance is directly proportional to both fluid viscosity $\eta$ and the length $\ell$ of a tube. After all, both of these directly affect the amount of friction encountered: the greater either is, the greater the resistance and the smaller the flow. The radius $r$ of a tube affects the resistance, which again makes sense, because the greater the radius, the greater the flow (all other factors remaining the same). But it is surprising that $r$ is raised to the fourth power in Poiseuille's law. This exponent means that any change in the radius of a tube has a very large effect on resistance. For example, doubling the radius of a tube decreases resistance by a factor of $2^4 = 16$.

Taken together, $Q = (P_2 - P_1)/R$ and $R = 8\eta\ell/(\pi r^4)$ give the following expression for flow rate:

$$Q = \frac{(P_2 - P_1)\pi r^4}{8\eta\ell}.$$

This equation describes laminar flow through a tube.
It is sometimes called Poiseuille's law for laminar flow, or simply Poiseuille's law.

### Example 1: Using Flow Rate: Plaque Deposits Reduce Blood Flow

Suppose the flow rate of blood in a coronary artery has been reduced to half its normal value by plaque deposits. By what factor has the radius of the artery been reduced, assuming no turbulence occurs?

Strategy

Assuming laminar flow, Poiseuille's law states that

$$Q = \frac{(P_2 - P_1)\pi r^4}{8\eta\ell}.$$

We need to compare the artery radius before and after the flow rate reduction.

Solution

With a constant pressure difference assumed and the same length and viscosity, along the artery we have

$$\frac{Q_2}{Q_1} = \frac{r_2^4}{r_1^4}.$$

So, given that $Q_2 = 0.5\,Q_1$, we find that $r_2^4 = 0.5\,r_1^4$.

Therefore, $r_2 = (0.5)^{0.25} r_1 = 0.841\,r_1$, a decrease in the artery radius of 16%.

Discussion

This decrease in radius is surprisingly small for this situation. To restore the blood flow in spite of this buildup would require an increase in the pressure difference $(P_2 - P_1)$ of a factor of two, with subsequent strain on the heart.

Table 1. Coefficients of Viscosity of Various Fluids

| Fluid | Temperature (°C) | Viscosity η (mPa·s) |
|---|---|---|
| **Gases** | | |
| Air | 0 | 0.0171 |
| Air | 20 | 0.0181 |
| Air | 40 | 0.0190 |
| Air | 100 | 0.0218 |
| Ammonia | 20 | 0.00974 |
| Carbon dioxide | 20 | 0.0147 |
| Helium | 20 | 0.0196 |
| Hydrogen | 0 | 0.0090 |
| Mercury | 20 | 0.0450 |
| Oxygen | 20 | 0.0203 |
| Steam | 100 | 0.0130 |
| **Liquids** | | |
| Water | 0 | 1.792 |
| Water | 20 | 1.002 |
| Water | 37 | 0.6947 |
| Water | 40 | 0.653 |
| Water | 100 | 0.282 |
| Whole blood | 20 | 3.015 |
| Whole blood | 37 | 2.084 |
| Blood plasma | 20 | 1.810 |
| Blood plasma | 37 | 1.257 |
| Ethyl alcohol | 20 | 1.20 |
| Methanol | 20 | 0.584 |
| Oil (heavy machine) | 20 | 660 |
| Oil (motor, SAE 10) | 30 | 200 |
| Oil (olive) | 20 | 138 |
| Glycerin | 20 | 1500 |
| Honey | 20 | 2000–10000 |
| Maple syrup | 20 | 2000–3000 |
| Milk | 20 | 3.0 |
| Oil (corn) | 20 | 65 |

The circulatory system provides many examples of Poiseuille's law in action, with blood flow regulated by changes in vessel size and blood pressure. Blood vessels are not rigid but elastic.
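The radius factor in Example 1 follows from $Q \propto r^4$ alone and is easy to verify:

```python
# With Q proportional to r^4 at fixed pressure difference, length, and
# viscosity, halving the flow means the radius shrank by 0.5**(1/4).
factor = 0.5 ** 0.25                 # r_2 / r_1
reduction_pct = 100 * (1 - factor)   # percent decrease in radius, about 16%
```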
Adjustments to blood flow are primarily made by varying the size of the vessels, since the resistance is so sensitive to the radius. During vigorous exercise, blood vessels are selectively dilated to important muscles and organs and blood pressure increases. This creates both greater overall blood flow and increased flow to specific areas. Conversely, decreases in vessel radii, perhaps from plaques in the arteries, can greatly reduce blood flow. If a vessel's radius is reduced by only 5% (to 0.95 of its original value), the flow rate is reduced to about $(0.95)^4 = 0.81$ of its original value. A 19% decrease in flow is caused by a 5% decrease in radius. The body may compensate by increasing blood pressure by 19%, but this presents hazards to the heart and any vessel that has weakened walls. Another example comes from automobile engine oil. If you have a car with an oil pressure gauge, you may notice that oil pressure is high when the engine is cold. Motor oil has greater viscosity when cold than when warm, and so pressure must be greater to pump the same amount of cold oil.

### Example 2: What Pressure Produces This Flow Rate?

An intravenous (IV) system is supplying saline solution to a patient at a given rate through a needle of radius 0.150 mm and length 2.50 cm. What pressure is needed at the entrance of the needle to cause this flow, assuming the viscosity of the saline solution to be the same as that of water? The gauge pressure of the blood in the patient's vein is 8.00 mm Hg. (Assume that the temperature is 20ºC.)

Strategy

Assuming laminar flow, Poiseuille's law applies. This is given by

$Q = \dfrac{(P_2 - P_1)\pi r^4}{8\eta\ell}$,

where $P_2$ is the pressure at the entrance of the needle and $P_1$ is the pressure in the vein.
The only unknown is $P_2$.

Solution

Solving for $P_2$ yields

$P_2 = \dfrac{8\eta\ell}{\pi r^4}Q + P_1$.

$P_1$ is given as 8.00 mm Hg, which converts to $1.066 \times 10^3\ \mathrm{N/m^2}$. Substituting this and the other known values yields $P_2 = 1.62 \times 10^4\ \mathrm{N/m^2}$.

Discussion

This pressure could be supplied by an IV bottle with the surface of the saline solution 1.61 m above the entrance to the needle (this is left for you to solve in this chapter's Problems and Exercises), assuming that there is negligible pressure drop in the tubing leading to the needle.

# Flow and Resistance as Causes of Pressure Drops

You may have noticed that water pressure in your home might be lower than normal on hot summer days when there is more use. This pressure drop occurs in the water main before it reaches your home. Let us consider flow through the water main as illustrated in Figure 6. We can understand why the pressure $P_1$ to the home drops during times of heavy use by rearranging

$Q = \dfrac{P_2 - P_1}{R}$

to

$P_2 - P_1 = QR$,

where, in this case, $P_2$ is the pressure at the water works and $R$ is the resistance of the water main. During times of heavy use, the flow rate $Q$ is large. This means that $QR$ must also be large. Thus $P_1$ must decrease. It is correct to think of flow and resistance as causing the pressure to drop from $P_2$ to $P_1$. The relation $P_2 - P_1 = QR$ is valid for both laminar and turbulent flows.

We can use $P_2 - P_1 = QR$ to analyze pressure drops occurring in more complex systems in which the tube radius is not the same everywhere. Resistance will be much greater in narrow places, such as an obstructed coronary artery. For a given flow rate $Q$, the pressure drop will be greatest where the tube is most narrow. This is how water faucets control flow. Additionally, $R$ is greatly increased by turbulence, and a constriction that creates turbulence greatly reduces the pressure downstream.
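The needle-entrance pressure in Example 2 can be estimated numerically from Poiseuille's law. In this sketch the flow rate `Q` is an assumed illustrative value (the original figure is not reproduced here), while the needle geometry and vein pressure follow the example:

```python
import math

# Pressure needed at the needle entrance from Poiseuille's law:
#   P2 = Q * 8*eta*l / (pi * r**4) + P1
eta = 1.002e-3      # Pa*s, viscosity of water at 20 C (Table 1)
l = 2.50e-2         # m, needle length
r = 0.150e-3        # m, needle radius
P1 = 8.00 * 133.3   # Pa, vein gauge pressure (8.00 mm Hg)
Q = 0.120e-6        # m**3/s, assumed flow rate (hypothetical value)

P2 = Q * 8 * eta * l / (math.pi * r**4) + P1
print(f"P2 = {P2:.3g} Pa")  # with this Q, about 1.62e4 Pa
```

With these inputs the required pressure is dominated by the viscous term $8\eta\ell Q/(\pi r^4)$; the 8.00 mm Hg vein pressure contributes only about 1 kPa.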
Plaque in an artery reduces pressure and hence flow, both by its resistance and by the turbulence it creates.

Figure 7 is a schematic of the human circulatory system, showing average blood pressures in its major parts for an adult at rest. Pressure created by the heart's two pumps, the right and left ventricles, is reduced by the resistance of the blood vessels as the blood flows through them. The left ventricle increases arterial blood pressure that drives the flow of blood through all parts of the body except the lungs. The right ventricle receives the lower pressure blood from two major veins and pumps it through the lungs for gas exchange with atmospheric gases: the disposal of carbon dioxide from the blood and the replenishment of oxygen. Only one major organ is shown schematically, with typical branching of arteries to ever smaller vessels, the smallest of which are the capillaries, and rejoining of small veins into larger ones. Similar branching takes place in a variety of organs in the body, and the circulatory system has considerable flexibility in flow regulation to these organs by the dilation and constriction of the arteries leading to them and the capillaries within them. The sensitivity of flow to tube radius makes this flexibility possible over a large range of flow rates.

Each branching of larger vessels into smaller vessels increases the total cross-sectional area of the tubes through which the blood flows. For example, an artery may branch into 20 smaller arteries whose combined cross-sectional area is larger than that of the original artery. In that manner, the resistance of the branchings is reduced so that pressure is not entirely lost. Moreover, because $Q = A\bar{v}$ and the total area $A$ increases through branching, the average velocity of the blood in the smaller vessels is reduced. The blood velocity in the aorta is about 25 cm/s, while in the capillaries the velocity is about 1 mm/s.
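The velocity drop from branching follows from the continuity equation $Q = A\bar{v}$: for the same total flow, average speed scales inversely with total cross-sectional area. A minimal sketch using only the speeds quoted above:

```python
# Continuity equation Q = A * v_bar: with the same total flow Q,
# the average speed scales as 1/A.
v_aorta = 0.25         # m/s, blood speed in the aorta (25 cm/s)
v_capillaries = 1e-3   # m/s, blood speed in the capillaries (1 mm/s)

# Total capillary cross-sectional area relative to the aorta's
area_ratio = v_aorta / v_capillaries
print(f"total capillary area / aorta area ~ {area_ratio:.0f}")  # ~ 250
```

So the combined cross section of all capillaries is roughly 250 times that of the aorta, which is what slows the blood enough for exchange.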
This reduced velocity allows the blood to exchange substances with the cells in the capillaries and alveoli in particular.

# Section Summary

• Laminar flow is characterized by smooth flow of the fluid in layers that do not mix.
• Turbulence is characterized by eddies and swirls that mix layers of fluid together.
• Fluid viscosity $\eta$ is due to friction within a fluid. Representative values are given in Table 1. Viscosity has units of Pa·s (equivalently N·s/m²).
• Flow is proportional to pressure difference and inversely proportional to resistance:
$Q = \dfrac{P_2 - P_1}{R}$.
• For laminar flow in a tube, Poiseuille's law for resistance states that
$R = \dfrac{8\eta\ell}{\pi r^4}$.
• Poiseuille's law for flow in a tube is
$Q = \dfrac{(P_2 - P_1)\pi r^4}{8\eta\ell}$.
• The pressure drop caused by flow and resistance is given by $P_2 - P_1 = QR$.

### Conceptual Questions

1: Explain why the viscosity of a liquid decreases with temperature; that is, how might increased temperature reduce the effects of cohesive forces in a liquid? Also explain why the viscosity of a gas increases with temperature; that is, how does increased gas temperature create more collisions between atoms and molecules?

2: When paddling a canoe upstream, it is wisest to travel as near to the shore as possible. When canoeing downstream, it may be best to stay near the middle. Explain why.

3: Why does flow decrease in your shower when someone flushes the toilet?

4: Plumbing usually includes air-filled tubes near water faucets, as shown in Figure 8.
Explain why they are needed and how they work.

### Problems & Exercises

1: (a) Calculate the retarding force due to the viscosity of the air layer between a cart and a level air track, given the air temperature, a cart speed of 0.400 m/s, the cart's surface area, and the thickness of the air layer. (b) What is the ratio of this force to the weight of the 0.300-kg cart?

2: What force is needed to pull one microscope slide over another at a speed of 1.00 cm/s, if there is a 0.500-mm-thick layer of water between them, given the contact area?

3: A glucose solution being administered with an IV has a given flow rate. What will the new flow rate be if the glucose is replaced by whole blood having the same density but a viscosity 2.50 times that of the glucose? All other factors remain constant.

4: The pressure drop along a length of artery is 100 Pa, the radius is 10 mm, and the flow is laminar. The average speed of the blood is 15 mm/s. (a) What is the net force on the blood in this section of artery? (b) What is the power expended maintaining the flow?

5: A small artery has a given length and radius. If the pressure drop across the artery is 1.3 kPa, what is the flow rate through the artery? (Assume a given temperature.)

6: Fluid originally flows through a tube at a given rate. To illustrate the sensitivity of flow rate to various factors, calculate the new flow rate for the following changes with all other factors remaining the same as in the original conditions. (a) Pressure difference increases by a factor of 1.50. (b) A new fluid with 3.00 times greater viscosity is substituted. (c) The tube is replaced by one having 4.00 times the length. (d) Another tube is used with a radius 0.100 times the original.
(e) Yet another tube is substituted with a radius 0.100 times the original and half the length, and the pressure difference is increased by a factor of 1.50.

7: The arterioles (small arteries) leading to an organ constrict in order to decrease flow to the organ. To shut down an organ, blood flow is reduced naturally to 1.00% of its original value. By what factor did the radii of the arterioles constrict? Penguins do this when they stand on ice to reduce the blood flow to their feet.

8: Angioplasty is a technique in which arteries partially blocked with plaque are dilated to increase blood flow. By what factor must the radius of an artery be increased in order to increase blood flow by a factor of 10?

9: (a) Suppose a blood vessel's radius is decreased to 90.0% of its original value by plaque deposits and the body compensates by increasing the pressure difference along the vessel to keep the flow rate constant. By what factor must the pressure difference increase? (b) If turbulence is created by the obstruction, what additional effect would it have on the flow rate?

10: A spherical particle falling at a terminal speed in a liquid must have the gravitational force balanced by the drag force and the buoyant force. The buoyant force is equal to the weight of the displaced fluid, while the drag force is assumed to be given by Stokes' law, $F_s = 6\pi r\eta v$. Show that the terminal speed is given by

$v = \dfrac{2R^2 g}{9\eta}\left(\rho_s - \rho_1\right)$,

where $R$ is the radius of the sphere, $\rho_s$ is its density, $\rho_1$ is the density of the fluid, and $\eta$ the coefficient of viscosity.

11: Using the equation of the previous problem, find the viscosity of motor oil in which a steel ball of radius 0.8 mm falls with a terminal speed of 4.32 cm/s. The densities of the ball and the oil are 7.86 and 0.88 g/mL, respectively.

12: A skydiver will reach a terminal velocity when the air drag equals their weight.
For a skydiver with high speed and a large body, turbulence is a factor. The drag force then is approximately proportional to the square of the velocity. Taking the drag force to be proportional to $\rho A v^2$ and setting this equal to the person's weight, find the terminal speed for a person falling "spread eagle." Find both a formula and a number for $v_t$, with assumptions as to size.

13: A layer of oil 1.50 mm thick is placed between two microscope slides. Researchers find that a certain force is required to glide one over the other at a speed of 1.00 cm/s, given their contact area. What is the oil's viscosity? What type of oil might it be?

14: (a) Verify that a 19.0% decrease in laminar flow through a tube is caused by a 5.00% decrease in radius, assuming that all other factors remain constant, as stated in the text. (b) What increase in flow is obtained from a 5.00% increase in radius, again assuming all other factors remain constant?

15: Example 2 dealt with the flow of saline solution in an IV system. (a) Verify that a pressure of $1.62 \times 10^4\ \mathrm{N/m^2}$ is created at a depth of 1.61 m in a saline solution, assuming its density to be that of sea water. (b) Calculate the new flow rate if the height of the saline solution is decreased to 1.50 m. (c) At what height would the direction of flow be reversed? (This reversal can be a problem when patients stand up.)

16: When physicians diagnose arterial blockages, they quote the reduction in flow rate. If the flow rate in an artery has been reduced to 10.0% of its normal value by a blood clot and the average pressure difference has increased by 20.0%, by what factor has the clot reduced the radius of the artery?

17: During a marathon race, a runner's blood flow increases to 10.0 times her resting rate. Her blood's viscosity has dropped to 95.0% of its normal value, and the blood pressure difference across the circulatory system has increased by 50.0%.
By what factor have the average radii of her blood vessels increased?

18: Water supplied to a house by a water main has a given pressure early on a summer day when neighborhood use is low. This pressure produces a flow of 20.0 L/min through a garden hose. Later in the day, pressure at the exit of the water main and entrance to the house drops, and a flow of only 8.00 L/min is obtained through the same hose. (a) What pressure is now being supplied to the house, assuming resistance is constant? (b) By what factor did the flow rate in the water main increase in order to cause this decrease in delivered pressure? The pressure at the entrance of the water main is given, and the original flow rate was 200 L/min. (c) How many more users are there, assuming each would consume 20.0 L/min in the morning?

19: An oil gusher shoots crude oil 25.0 m into the air through a pipe with a 0.100-m diameter. Neglecting air resistance but not the resistance of the pipe, and assuming laminar flow, calculate the gauge pressure at the entrance of the 50.0-m-long vertical pipe. Take the density and viscosity of the oil as given. Note that you must take into account the pressure due to the 50.0-m column of oil in the pipe.

20: Concrete is pumped from a cement mixer to the place it is being laid, instead of being carried in wheelbarrows. The flow rate is 200.0 L/min through a 50.0-m-long, 8.00-cm-diameter hose, and the pressure at the pump is given. (a) Calculate the resistance of the hose. (b) What is the viscosity of the concrete, assuming the flow is laminar? (c) How much power is being supplied, assuming the point of use is at the same level as the pump? You may neglect the power supplied to increase the concrete's velocity.

21: Consider a coronary artery constricted by arteriosclerosis.
Construct a problem in which you calculate the amount by which the diameter of the artery is decreased, based on an assessment of the decrease in flow rate.

22: Consider a river that spreads out in a delta region on its way to the sea. Construct a problem in which you calculate the average speed at which water moves in the delta region, based on the speed at which it was moving up river. Among the things to consider are the size and flow rate of the river before it spreads out and its size once it has spread out. You can construct the problem for the river spreading out into one large river or into multiple smaller rivers.

## Footnotes

1. The ratios of the viscosities of blood to water are nearly constant between 0°C and 37°C.
2. See note on Whole Blood.

## Glossary

laminar
a type of fluid flow in which layers do not mix
turbulence
fluid flow in which layers mix together via eddies and swirls
viscosity
the friction in a fluid, defined in terms of the friction between layers
Poiseuille's law for resistance
the resistance to laminar flow of an incompressible fluid in a tube: $R = 8\eta\ell/(\pi r^4)$
Poiseuille's law
the rate of laminar flow of an incompressible fluid in a tube: $Q = (P_2 - P_1)\pi r^4/(8\eta\ell)$

### Solutions

Problems & Exercises

1:

(a)

(b)

3:

5:

7:

0.316

9:

(a) 1.52

(b) Turbulence will decrease the flow rate of the blood, which would require an even larger increase in the pressure difference, leading to higher blood pressure.

11:

13:

Olive oil.

15:

(a)

(b)

(c) 10.6 cm

17:

1.59

19:

(gauge pressure)
\section{Introduction}
Since Hanbury Brown and Twiss (HBT) first observed the two-photon bunching effect in 1956 \cite{brown1956correlation,brown1956test}, it has been known that photons emitted by a thermal light source are more inclined to arrive in pairs than at random. The degree of second-order correlation of thermal light in the HBT interferometer was measured as 2, which greatly promoted the development of quantum optics \cite{glauber1963quantum,glauber1963coherent,glauber1963photon,sudarshan1963equivalence,glauber2006nobel}. The subsequent discovery of the superbunching effect, in which the degree of second-order correlation is greater than 2, has also attracted the attention of researchers \cite{lipeles1965direct,kaul1966observation,akhmanov1967nonlinear,kocher1967polarization,auffeves2011few,hoi2012generation,grujic2013repulsively,bhatti2015superbunching,hong2012two,zhou2017superbunching,manceau2019indefinite,Yu2019Experimental}. It is generally accepted that the HBT effect is well explained by both classical and quantum interpretations, which can be unified. Classical intensity fluctuation correlation theory is usually employed to interpret this phenomenon \cite{brown1957interferometry}; it emphasizes whether the two detectors detect speckles with the same intensity fluctuation in the HBT interferometer. The effect can also be interpreted by quantum theory, in which two-photon bunching results from the superposition of different alternatives that trigger a two-photon coincidence count \cite{glauber1963quantum,glauber1963coherent,glauber1963photon,fano1961quantum}. In 2017, Bin Bai et al. reported an experiment in which the bunching effect was observed without two-photon interference, since there was only one alternative to trigger a joint detection event \cite{bai2017hanbury}. This seems to indicate a discrepancy between classical and quantum theory; ultimately, Glauber's quantum optical correlation theory was used to explain this interesting phenomenon. Later, Yu Zhou et al.
proposed a quantum two-photon interference theory to explain superbunching pseudothermal light based on multi-path interference \cite{zhou2017superbunching}. However, there has been no classical intensity fluctuation correlation theory analyzing such a superbunching effect with distinguishable interference paths in the temporal and spatial domains.
In this paper, we propose and demonstrate a method to observe the temporal and spatial superbunching effect from a pair of modulated, distinguishable classical light fields. The normalized temporal and spatial second-order correlation functions were measured as ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$ and ${g^{\left(2\right)}}\left(0\right)=3.42\pm0.02$, respectively. At the same time, the theoretical results derived from classical intensity fluctuation correlation theory are in good agreement with the experimental results. Based on the above study, this work further demonstrates the unity and complementarity of quantum and classical theory, which not only enriches the theoretical study of the superbunching effect but is also helpful for understanding its physical essence.
The remainder of the paper is organized as follows. In Sec. \ref{Methods}, classical intensity fluctuation correlation theory is employed to interpret the superbunching effect, and the theoretical results are then verified by the designed experimental scheme. The discussion and conclusions are given in Secs. \ref{Discussion} and \ref{Conclusion}, respectively.
\section{\label{Methods}Methods}
\subsection{Classical intensity fluctuation correlation theory}
Instead of the quantum two-photon interference theory, the classical intensity fluctuation correlation theory is used here to explain the superbunching effect, which arises from the correlation between the electromagnetic waves propagating to two detectors located at different space-time coordinates $({r_1},{t_1})$ and $({r_2},{t_2})$. The second-order correlation function of the optical field can be expressed as
\begin{equation}\label{equ1}
{G^{(2)}}({r_1},{t_1};{r_2},{t_2}) = \left\langle {{E_1}({r_1},{t_1}){E_2}({r_2},{t_2}){E_1}^*({r_1},{t_1}){E_2}^*({r_2},{t_2})} \right\rangle,
\end{equation}
where ${\left\langle { \cdot \cdot \cdot } \right\rangle}$ denotes the ensemble average taking all possible realizations into account, ${E_i}({r_i},{t_i})$ is the electric field at the surface of detector $\rm D_i$, and ${E_i}^*({r_i},{t_i})$ is the complex conjugate of ${E_i}({r_i},{t_i})$, $i = 1,2$.
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{figure1.pdf}
\caption{\label{figure1} The theoretical model diagram of a joint detection event triggered by different electromagnetic waves. $\rm RG_1$ and $\rm RG_2$ are two rotating ground glass plates. $\rm D_1$ and $\rm D_2$ are two single-photon detectors in the HBT interferometer. $\rm CC$ is the two-photon coincidence count detection system.}
\end{figure}
When a single-mode continuous-wave laser beam passes through two rotating ground glass (RG) plates in succession, the scattered pseudothermal light, which fluctuates in time and space, arrives at the surfaces of the two detectors. The theoretical model of the superbunching effect is shown in figure \ref{figure1}. There are eight different electromagnetic waves triggering the two detectors: $E_{111}$, ${E_{112}}$, ${E_{121}}$, ${E_{122}}$, ${E_{211}}$, ${E_{212}}$, ${E_{221}}$, ${E_{222}}$. Taking $E_{121}$ as an example, it represents the electromagnetic wave scattered by the laser passing through position 1 of $\rm RG_1$, then propagating to position 2 of $\rm RG_2$, and finally arriving at the surface of $\rm D_1$. The meanings of the other electromagnetic waves follow by analogy. $E_{121}$ is in fact a compound electric field, obtained by multiplying an initial electromagnetic wave by the propagation functions under different conditions. According to the Green function for a point light source in classical optics \cite{Goodman1995Introduction}, the electromagnetic wave arriving at the detector surface, with the spatial part omitted, can be expressed as
\begin{eqnarray}\label{equ2}
{E_{ijk}} & =E_{i} \cdot T_{j | i} \cdot T_{k|j|i} \nonumber \\
& =e^{-i\omega_{0}\left(t_{i}-t_{0}\right)} \int_{\omega_{0}-\frac{1}{2} \Delta \omega_{1}}^{\omega_{0}+\frac{1}{2} \Delta \omega_{1}} e^{-i\omega_{0j}\left(t_{j}^{''}-t_{i}^{'}\right)} d \omega_{0 j} \int_{\omega_{0}-\frac{1}{2}\Delta \omega_{2}}^{\omega_{0}+\frac{1}{2} \Delta \omega_{2}} e^{-i\omega_{k}\left(t_{k}-t_{j}^{''}\right)} d \omega_{k},
\end{eqnarray}
where $E_i$ is the electromagnetic wave produced by the laser passing through position $i$ of $\rm RG_1$, ${T_{j|i}}$ represents the propagation function of the pseudothermal light when $E_i$ passes through position $j$ of $\rm RG_2$, and ${T_{k|j|i}}$ is the propagation function of the electromagnetic wave when it arrives at the surface of $\rm D_k$ under the condition ${E_i}\cdot{T_{j|i}}$. ${\omega _0}$ is the center frequency of the laser, ${\omega _{0i}}$ is the frequency of the light scattered by position $i$ of $\rm RG_1$, and ${\omega _{j}}$ is the frequency of the light scattered by position $j$ of $\rm RG_2$. $\Delta {\omega _{\rm{1}}}$ and $\Delta {\omega _{\rm{2}}}$ represent the spectral widths of the pseudothermal light scattered by $\rm RG_1$ and $\rm RG_2$, respectively. $t_i^{'}$, $t_j^{''}$ and $t_k$ are the times at which the electromagnetic wave arrives at position $i$ of $\rm RG_1$, position $j$ of $\rm RG_2$ and the surface of detector $k$, respectively, with $i,j,k = 1,2$.
There are then four types of coincidence contributions detected by the joint detection system shown in figure \ref{figure1}: ${E_{111}}{E_{222}}$, ${E_{112}}{E_{221}}$, ${E_{121}}{E_{212}}$, ${E_{122}}{E_{211}}$. Therefore, the total two-detector field product can be expressed as
\begin{equation}\label{equ3}
\fl {E_1}({r_1},{t_1}){E_2}({r_2},{t_2}) = \frac{1}{4}{E_{111}}{E_{222}} + \frac{1}{4}{E_{112}}{E_{221}} + \frac{1}{4}{E_{121}}{E_{212}} + \frac{1}{4}{E_{122}}{E_{211}}.
\end{equation}
Substituting equation (\ref{equ3}) into equation (\ref{equ1}), the second-order correlation function can be written as
\begin{eqnarray}\label{equ4}
\fl {G^{(2)}}({r_1},{t_1};{r_2},{t_2}) &= \frac{1}{{16}}\langle \left({{E_{111}}{E_{222}} + {E_{112}}{E_{221}}+{E_{121}}{E_{212}} + {E_{122}}{E_{211}}} \right)\nonumber\\&\cdot
\left({{E_{111}}^*{E_{222}}^* + {E_{112}}^*{E_{221}}^* + {E_{121}}^*{E_{212}}^* + {E_{122}}^*{E_{211}}^*} \right)\rangle.
\end{eqnarray}
There will be 16 terms after expanding equation (\ref{equ4}): 12 cross-correlation terms in addition to four autocorrelation terms. The whole second-order correlation function can be categorized into four groups. We calculate one term from each group; the other terms are calculated in the same way. Substituting equation (\ref{equ2}) into equation (\ref{equ4}), one can obtain the concrete expressions.
The first group consists of the autocorrelation terms, whose calculated results are all constants. Take $E_{111}E_{222}E_{111}^*E_{222}^*$ as an example:
\begin{eqnarray}\label{equ5}
\fl \langle E_{111} E_{222} E_{111}^{*} E_{222}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2 | 2} T_{2|2| 2}\right)\left(E_{1}^{*} T_{1 | 1}^{*} T_{1|1| 1}^{*}\right)\left(E_{2}^{*} T_{2 | 2}^{*} T_{2|2| 2}^{*}\right)\rangle \nonumber \\
& =\left(\Delta \omega_{1}\right)^{2}\left(\Delta \omega_{2}\right)^{2}.
\end{eqnarray}
The remaining three terms in this group, ${E_{112}}{E_{221}}{E_{112}}^*{E_{221}}^*$, ${E_{121}}{E_{212}}{E_{121}}^*{E_{212}}^*$ and ${E_{122}}{E_{211}}{E_{122}}^*{E_{211}}^*$, give the same result as equation (\ref{equ5}).
The second type of term that needs to be calculated is $E_{111}E_{222}E_{112}^*E_{221}^*$. With the same method as above, it can be expressed as
\begin{eqnarray}\label{equ6}
\fl \langle E_{111} E_{222} E_{112}^{*} E_{221}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{1|1}^{*} T_{2|1| 1}^{*}\right)\left(E_{2}^{*} T_{2|2}^{*} T_{1|2| 2}^{*}\right)\rangle \nonumber \\
& = {\left({\Delta {\omega _1}}\right)^2}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{{\rm{ - }}i {{\omega _1}\left( {{t_1} - {t_2}} \right)} }}d} {\omega _1}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i{{\omega _2}\left( {{t_2} - {t_1}} \right)} }}d} {\omega _2} \nonumber \\
& = {\left({\Delta {\omega _1}}\right)^2}{\left( {\Delta {\omega _2}} \right)^2}\,\mathrm{sinc}^2\left[{\frac{1}{2}\Delta {\omega _2}\left({{t_1} - {t_2}} \right)}\right],
\end{eqnarray}
where $\mathrm{sinc}(x)=\sin (x)/x$. The other terms in this group, ${E_{112}}{E_{221}}{E_{111}}^*{E_{222}}^*$, ${E_{121}}{E_{212}}{E_{122}}^*{E_{211}}^*$ and ${E_{122}}{E_{211}}{{E_{121}}^*}{E_{212}}^*$, give the same result as equation (\ref{equ6}).
All of the terms in the second group therefore add up to $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]$.
The third type of term that needs to be calculated is ${E_{111}}{E_{222}}{E_{121}}^*{E_{212}}^*$, which simplifies to
\begin{eqnarray}\label{equ7}
\fl \langle E_{111} E_{222} E_{121}^{*} E_{212}^{*}\rangle
& =\langle \left(E_{1} T_{1|1} T_{1|1 | 1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{2|1}^{*} T_{1|2| 1}^{*}\right)\left(E_{2}^{*} T_{1|2}^{*} T_{2|1| 2}^{*}\right)\rangle \nonumber \\
& = {\left( {\Delta {\omega _2}} \right)^2}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i {{\omega _{01}}\left( {{t_1}^{''} - {t_2}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{-i{{\omega _{02}}\left( {{t_2}^{''} - {t_1}^{''}} \right)}}}d} {\omega _{02}} \nonumber \\
& = {\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}\,\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right].
\end{eqnarray}
${E_{112}}{E_{221}}{E_{122}}^*{E_{211}}^*$, ${E_{121}}{E_{212}}{E_{111}}^*{E_{222}}^*$ and ${E_{122}}{E_{211}}{E_{112}}^*{E_{221}}^*$ are the remaining terms of this group; they give the same result as equation (\ref{equ7}). The whole third group therefore sums to $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]$.
The fourth type of term that needs to be calculated is ${E_{111}}{E_{222}}{E_{122}}^ * {E_{211}}^*$, which is
\begin{eqnarray}\label{equ8}
\fl \langle E_{111} E_{222} E_{122}^{*} E_{211}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{2|1}^{*} T_{2|2| 1}^{*}\right)\left(E_{2}^{*} T_{1|2}^{*} T_{1|1| 2}^{*}\right)\rangle \nonumber \\
& = \int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i {{\omega _{01}}\left( {{t_1}^{''} - {t_2}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i\left[ {{\omega _{02}}\left( {{t_1} - {t_2}} \right)} \right]}}d} {\omega _{02}} \nonumber \\
& \cdot\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i{{\omega _{01}}\left( {{t_2}^{''} - {t_1}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i\left[ {{\omega _{02}}\left( {{t_2} - {t_1}} \right)} \right]}}d} {\omega _{02}} \nonumber \\
& = {\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}\,\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right].
\end{eqnarray}
The remaining terms of this group,
${E_{112}}{E_{221}}{E_{121}}^*{E_{212}}^*$,
${E_{121}}{E_{212}}{E_{112}}^*{E_{221}}^*$ and
${E_{122}}{E_{211}}{E_{111}}^*{E_{222}}^*$, give the same result as equation (\ref{equ8}), so the group sums to $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]\mathrm{sinc}^2\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]$.
Finally, adding all the above terms together, the second-order temporal correlation function with two RGs in the scheme shown in figure \ref{figure1} is
\begin{eqnarray}\label{equ9}
\fl {G^{(2)}}({t_1} - {t_2};{t_1}^{''} - {t_2}^{''})&=\frac{1}{4}{\left( {\Delta {\omega _1}} {\Delta {\omega _2}} \right)^2}({1 + sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left({{t_1}^{''}-{t_2}^{''}} \right)} \right] +sin{c^2}\left[{\frac{1}{2}\Delta {\omega _2}\left( {{t_1}-{t_2}} \right)} \right]} \nonumber \\
&+ sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''}-{t_2}^{''}} \right)} \right]sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]),
\end{eqnarray}
where ${\tau _1} = {t_1}^{''} - {t_2}^{''}$ and ${\tau _{\rm{2}}} = {t_1} - {t_2}$ represent the time differences for the light passing through the first and second RG, respectively. Substituting these definitions into equation (\ref{equ9}), the normalized second-order temporal correlation function can be expressed as
\begin{equation}\label{equ10}
{g^{(2)}}({\tau _1};{\tau _{\rm{2}}}) = \left[ {1 + sin{c^2}\left( {\frac{1}{2}\Delta {\omega _1}{\tau _1}} \right)} \right]\cdot\left[ {1 + sin{c^2}\left( {\frac{1}{2}\Delta {\omega _{\rm{2}}}{\tau _2}} \right)} \right].
\end{equation}
When ${\tau _1}$ and ${\tau _2}$ approach infinity, ${g^{(2)}}({\tau _1};{\tau _{\rm{2}}})$ equals 1, which means the detection events are independent of each other. When ${\tau _1}$ and ${\tau _2}$ both equal zero, ${g^{(2)}}({\tau _1};{\tau _{\rm{2}}})$ equals 4, which means the superbunching effect can be observed. This theoretical result agrees with the conclusion derived from the quantum two-photon interference theory \cite{zhou2017superbunching}.
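The limiting behaviour of equation (\ref{equ10}) can be checked numerically. Below is a minimal sketch (the spectral widths $\Delta\omega_1$, $\Delta\omega_2$ are placeholder values, not the experimental ones); note that \texttt{np.sinc(x)} is $\sin(\pi x)/(\pi x)$, so the paper's $sinc(\frac{1}{2}\Delta\omega\,\tau)$ corresponds to \texttt{np.sinc(dw * tau / (2 * np.pi))}:

```python
import numpy as np

def g2(tau1, tau2, dw1, dw2):
    """Equation (10): g2 = [1 + sinc^2(dw1*tau1/2)] * [1 + sinc^2(dw2*tau2/2)],
    with sinc(x) = sin(x)/x.  np.sinc(a) = sin(pi*a)/(pi*a)."""
    s1 = np.sinc(dw1 * tau1 / (2.0 * np.pi)) ** 2
    s2 = np.sinc(dw2 * tau2 / (2.0 * np.pi)) ** 2
    return (1.0 + s1) * (1.0 + s2)

# Placeholder spectral widths (rad/s), not the experimental values.
dw1, dw2 = 2 * np.pi * 1.0e6, 2 * np.pi * 0.5e6

print(g2(0.0, 0.0, dw1, dw2))  # -> 4.0: the superbunching peak
print(g2(1.0, 1.0, dw1, dw2))  # ~1: delays far beyond the coherence time
```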
\subsection{Experimental verification}
The experimental setup for observing two-photon superbunching from a pair of modulated distinguishable classical light beams is shown in figure \ref{figure2}. The initial polarizations of the two employed He-Ne lasers are horizontal. A half-wave plate (HWP) behind laser 1 changes the horizontal polarization to vertical polarization. $\rm GP_1$ and $\rm GP_2$ are two Glan prisms, which purify the polarizations of the two lasers. $\rm L_1$ and $\rm L_2$ are converging lenses with focal lengths of 50 mm. The distance between $\rm RG_1$ and $\rm P_1$ is 300 mm. The transverse coherence length of the pseudothermal light generated by $\rm RG_1$ is 1.9 mm in the plane of $\rm P_1$. The diameter of $\rm P_1$ is 1 mm, smaller than the coherence length, so that the light passing through $\rm P_1$ lies within a single coherence area. $\rm L_2$ is employed to focus the light onto $\rm RG_2$. The distance between $\rm L_2$ and $\rm RG_2$ needs to be larger than the focal length of $\rm L_2$, mainly because the pseudothermal light scattered by $\rm RG_1$ is not parallel. $\rm D_1$ and $\rm D_2$ are two single-photon detectors (PerkinElmer, SPCM-AQRH-14-FC). C.C is a two-photon coincidence count detection system (Becker \& Hickl GmbH, DPC-230). The experiment consists of the following three steps.
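For context, the quoted transverse coherence length is consistent with the standard far-field estimate $l_c \approx \lambda L/d$, where $d$ is the illuminated spot size on $\rm RG_1$; the spot diameter below is an assumed value chosen for illustration, since it is not given in the text:

```python
wavelength = 632.8e-9  # He-Ne laser wavelength (m)
L = 0.300              # RG1 -> P1 distance from the text (m)
d = 0.10e-3            # assumed focused-spot diameter on RG1 (m)

l_c = wavelength * L / d  # far-field transverse coherence length estimate
print(f"l_c = {l_c * 1e3:.2f} mm")  # ~1.9 mm, consistent with the quoted value
```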
\begin{figure}[htb]
\centering\includegraphics[width=12cm]{figure2.pdf}
\caption{\label{figure2} Experimental setup for measuring the superbunching effect from a pair of modulated distinguishable classical light beams. Laser 1 and Laser 2 are single-mode continuous-wave He-Ne lasers. $\rm PBS_1$ and $\rm PBS_2$ are two polarized beam splitters. $\rm RG_1$ and $\rm RG_2$ are two rotating ground glass plates whose rotational speeds can be adjusted. $\rm P_1$ and $\rm P_2$ are pinholes. The measuring system is a HBT-like intensity interferometer.}
\end{figure}
In the first step, laser 1 is turned on and laser 2 is turned off, so that the vertically polarized light passes through $\rm RG_1$ and $\rm RG_2$ successively. The traditional second-order temporal correlation function of the ordinary pseudothermal superbunching effect is measured to verify the reliability of the experimental test system. $\rm RG_1$ and $\rm RG_2$ rotate at 100 Hz and 50 Hz, respectively. Owing to the vertical polarization, the modulated light passes through $\rm PBS_2$ and only arrives at $\rm D_1$. The experimental results are shown in figure \ref{figure3}(a). The coincidence counts are almost constant over the 50 s collection time; no superbunching effect appears, simply because almost no light reaches $\rm D_2$. Keeping the rest of the setup unchanged, an additional HWP placed between $\rm PBS_1$ and $\rm L_1$ (not shown in figure \ref{figure2}) is rotated so that the light is polarized at $45^{\circ}$ with respect to the horizontal direction; the pseudothermal light is then split into two beams of equal intensity after passing through $\rm PBS_2$. Finally, the HBT intensity interferometer is triggered by the two pseudothermal beams. The results are shown in figure \ref{figure3}(b). A degree of second-order correlation ${g^{\left(2\right)}}\left(0\right) = 3.91\pm0.02$ was observed, which means the superbunching effect of pseudothermal light was achieved. The measured full width at half maximum (FWHM) is about $(1.70\pm0.02)\,\mu \rm s$ and the visibility of the peak is about 59.2\%.
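The quoted visibilities follow from the measured ${g^{(2)}}(0)$ values if one adopts the common definition $V = [g^{(2)}(0) - g^{(2)}(\infty)]/[g^{(2)}(0) + g^{(2)}(\infty)]$ with a background $g^{(2)}(\infty)=1$; this definition is an assumption on our part, not stated in the text:

```python
def peak_visibility(g2_peak, g2_background=1.0):
    """Visibility of a second-order correlation peak:
    V = (g2(0) - g2(inf)) / (g2(0) + g2(inf))."""
    return (g2_peak - g2_background) / (g2_peak + g2_background)

# First step: g2(0) = 3.91 -> V ~ 59.3%, close to the quoted 59.2%.
print(f"{peak_visibility(3.91):.1%}")
# Third step: g2(0) = 3.83 -> V ~ 58.6%, close to the quoted 58.5%.
print(f"{peak_visibility(3.83):.1%}")
```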
\begin{figure}
\centering\includegraphics[width=12cm]{figure3.pdf}
\caption{\label{figure3} (a) Two-photon coincidence counts when only laser 1 is turned on. (b) The degree of second-order temporal correlation when only $45^{\circ}$ linearly polarized light is incident. (c) The measured normalized second-order temporal correlation function when the two pseudothermal beams are focused on different areas of $\rm RG_1$. (d) The measured normalized second-order temporal correlation function when the two pseudothermal beams are focused on the same area of $\rm RG_1$. $\tau$ is the time difference between the two single-photon detection
events within a two-photon coincidence count. The black circles are measured results and the red lines are theoretical fittings.}
\end{figure}
In the second step, laser 1 and laser 2 are turned on simultaneously. Vertically polarized light from laser 1 and horizontally polarized light from laser 2 are combined into one beam at $\rm PBS_1$, and then focused onto $\rm RG_1$ by the lens $\rm L_1$. Note that the two beams are focused on different areas of $\rm RG_1$ in this step, which means $\rm RG_1$ produces two completely different sets of speckles. When the two sets of speckles with mutually perpendicular polarization directions pass through $\rm PBS_2$, the vertically polarized light from laser 1 reaches detector $\rm D_1$, and the horizontally polarized light from laser 2 reaches detector $\rm D_2$. The measurement result is shown in figure \ref{figure3}(c): ${g^{\left(2\right)}}\left(\tau\right)$ is flat and no superbunching effect occurs.
In the third step, the procedure is the same as in the second step, except that the two light beams from laser 1 and laser 2 combined at $\rm PBS_1$ are focused on the same area of $\rm RG_1$. Hence the speckle patterns scattered by $\rm RG_1$ and $\rm RG_2$ are almost the same, and they have the same temporal and spatial fluctuations. The measurement result is shown in figure \ref{figure3}(d). The circles are the measured results; the red line is the theoretical fitting obtained with equation (\ref{equ9}). The FWHM is $(1.71\pm0.02)\,\mu \rm s$ and the visibility of the peak is about 58.5\%. The degree of second-order correlation is ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$, as depicted in figure \ref{figure3}(d). ${g^{\left(2\right)}}\left(0\right)$ is clearly greater than 2, which means the temporal superbunching effect of pseudothermal light was observed successfully.
\begin{figure}[htb]
\centering\includegraphics[scale=0.3]{figure4.pdf}
\caption{\label{figure4} The result of measured normalized second-order spatial correlation functions. The black circles are measured results and the red line is theoretical fitting.}
\end{figure}
Based on the third step, we proceed to study the superbunching effect in the spatial domain. With $\rm D_1$ fixed at $x_1=0$, $\rm D_2$ is moved transversely in steps of 2 mm through $x_2= \pm 12$ mm. The collection time for each step is 50 s. The spatial normalized second-order correlation function ${g^{\left(2\right)}}\left(x_1-x_2\right)$ is calculated, and the resulting second-order correlation pattern is shown in figure \ref{figure4}. The FWHM of the peak is $(4.8 \pm 0.01)$ mm, while the visibility of the peak is about 22.4\%. The degree of second-order correlation of pseudothermal light was measured as ${g^{\left(2\right)}}\left(0\right)=3.42\pm0.02$, which means the superbunching effect in the spatial domain was also observed.
\section{\label{Discussion}Discussion}
The superbunching effect was observed in the first step. From the classical point of view, when the $45^{\circ}$ linearly polarized light passes through $\rm RG_1$, $\rm RG_2$ and $\rm PBS_2$ in sequence, the two single-photon detectors at symmetric positions are triggered by pseudothermal light that possesses the same intensity fluctuations and photon statistical distribution. Therefore, the classical intensity-fluctuation theory \cite{goodman2007speckle,ou1988violation} can explain why the superbunching effect is observed. In the quantum two-photon interference theory \cite{mandel1999quantum}, when the horizontally polarized light is changed to $45^{\circ}$ linearly polarized light by rotating the HWP and the pseudothermal light passes through $\rm PBS_2$, there are two different and indistinguishable paths to trigger the two detectors. This is why quantum two-photon interference theory can also explain the observed superbunching effect of ${g^{\left(2\right)}}\left(0\right) = 3.91\pm0.02$.
On the basis of the first step, with laser 1 and laser 2 turned on simultaneously, the second and third steps form a pair of comparative experiments. The difference between them is whether the two beams are focused on the same area of $\rm RG_1$ or not. The classical theory of intensity fluctuation correlation predicts different results for the second and the third step. When the lasers are focused on different positions of $\rm RG_1$, the two-photon coincidence count system is triggered by two completely different sets of speckles. Therefore, the superbunching effect cannot be observed, as shown in figure \ref{figure3}(c). The third step is the opposite of the second step: the superbunching effect is observed, as shown in figure \ref{figure3}(d), mainly because the lasers are focused on the same area of $\rm RG_1$. The two sets of speckles that pass through $\rm RG_1$ and $\rm RG_2$ then share identical fluctuations within the same coherence region.
However, the quantum two-photon interference theory gives the same prediction for the second and the third step. It is well known that two-photon interference theory emphasizes whether the interference paths are different and indistinguishable. In the second and third steps, the polarization directions of the two incident lasers are orthogonal. After passing through $\rm PBS_2$, the horizontally polarized light enters detector $\rm D_2$, and the vertically polarized light enters detector $\rm D_1$. There is only one path to trigger the two-photon coincidence count detection system. Quantum two-photon interference theory therefore predicts no superbunching effect in either the second or the third step. But we observed the superbunching effect of pseudothermal light in the third step, where the degree of second-order temporal correlation was measured as ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$, as shown in figure \ref{figure3}(d). This is contrary to the prediction of the quantum two-photon interference theory. The temporal superbunching effect of classical light in figure \ref{figure3}(d) seems to break the unity of classical and quantum theory, since the quantum two-photon interference theory cannot explain this strange phenomenon. Here we use the classical intensity-fluctuation theory to explain the superbunching effect without two-photon interference.
Because the polarizations of the two beams are orthogonal and the beams are focused on the same area of $\rm RG_1$, the two pseudothermal light beams have different polarization directions but undergo the same fluctuations. This means the distribution of the electric field ${E}({r},{t})$ is the same. After passing through $\rm RG_1$ and $\rm RG_2$ in series, the two-photon coincidence count system is triggered by two sets of identical speckles. The specific derivation is shown in Sec. \ref{Methods} above. Therefore, the superbunching effect is well explained by the classical intensity-fluctuation theory.
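The classical intensity-fluctuation argument can be illustrated with a toy Monte Carlo model: if both detectors sample one doubly modulated intensity $I = I_1 I_2$, where $I_1$ and $I_2$ are independent negative-exponential (thermal) speckle intensities produced by the two RGs, then $\langle I^2\rangle/\langle I\rangle^2 = 2 \times 2 = 4$. The sketch below is purely illustrative, not a simulation of the actual optical system:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000_000

# Independent negative-exponential speckle intensities from RG1 and RG2.
I1 = rng.exponential(1.0, n)
I2 = rng.exponential(1.0, n)

# Both detectors see the same doubly modulated intensity, as in the third
# step where the orthogonally polarized beams share one speckle pattern.
I = I1 * I2
g2_zero = np.mean(I * I) / np.mean(I) ** 2
print(g2_zero)  # fluctuates around 4, the superbunching value
```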
In the spatial correlation measurement, the background in the superbunching correlation diagram is equal to 2. This is mainly because $\rm P_1$, which is set in front of $\rm RG_2$, performs a spatial mode selection on the light before it reaches $\rm RG_2$, so that the speckles share the same fluctuation within the same spatial mode. However, the longitudinal correlation length related to the superbunching effect in the time domain is unchanged. Therefore, the temporal superbunching effect differs from the spatial superbunching effect, and the measured backgrounds are 1 and 2, respectively.
It is well known that a space-time duality exists between the equations that describe the paraxial diffraction of light beams \cite{Kolner:89,301659,TORRESCOMPANY20111}. Similar to the above calculation of the temporal superbunching effect, the classical intensity fluctuation correlation theory can also be used to calculate the spatial superbunching effect. The electromagnetic wave transmitted to the surface of a detector can be expressed as
\begin{eqnarray}\label{equ11}
E_{i j k} &=E_{i} \cdot T_{j | i} \cdot T_{k|j|i} \nonumber \\
& =e^{-ik_{0}\left(r_{i}-r_{0}\right)} \int_{-\frac{1}{2} d_{1}}^{\frac{1}{2}d_{1}} e^{-ik_{0j}\left(r_{j}^{''}-r_{i}^{'}\right)} d r_{0 j} \int_{-\frac{1}{2}d_{2}}^{\frac{1}{2} d_{2}} e^{-ik_{k}\left(r_{k}-r_{j}^{''}\right)} d r_{k}.
\end{eqnarray}
The symbols in equation (\ref{equ11}) are analogous to those in equation (\ref{equ2}). $d_1$ and $d_2$ represent the lengths of the pseudothermal light sources formed by $\rm RG_1$ and $\rm RG_2$, respectively. ${k_0}$ is the wave vector of the laser, ${k _{0i}}$ is the wave vector of the light scattered by position $i$ of the $\rm RG_1$, and ${k _{j}}$ is the wave vector of the light scattered by position $j$ of the $\rm RG_2$. $r_i^{'}$, $r_j^{''}$ and $r_k$ are the position vectors at which the electromagnetic wave arrives at position $i$ of the $\rm RG_1$, position $j$ of the $\rm RG_2$ and the surface of detector $k$, respectively, where $i,j,k = 1,2$. Substituting equation (\ref{equ11}) into equation (\ref{equ4}), the normalized second-order spatial correlation function is
\begin{equation}\label{equ12}
{g^{(2)}}(\Delta {x_1};\Delta {x_2}) = \left[ {1 + sin{c^2}\left( {\frac{{\pi L_1}}{{\lambda d_1}}\Delta {x_1}} \right)} \right]\cdot\left[{1 + sin{c^2}\left({\frac{{\pi L_2}}{{\lambda d_2}}\Delta {x_2}} \right)} \right],
\end{equation}
where $\Delta {x_1} = x_1^{'} - x_2^{'}$ is the position difference between the two orthogonally polarized beams on $\rm RG_1$, and $\Delta {x_2} = x_1 - x_2$ is the transverse position difference between detectors $\rm D_1$ and $\rm D_2$. $\lambda$ is the wavelength of the light source, $L_1$ is the distance from $\rm RG_1$ to the detectors and $L_2$ is the distance from $\rm RG_2$ to the detectors.
Since the two orthogonally polarized light beams are focused on the same area of $\rm RG_1$ ($\Delta {x_1} = 0$), they acquire the same fluctuations after passing through $\rm RG_1$, so equation (\ref{equ12}) can be further simplified to
\begin{equation}\label{equ13}
{g^{(2)}}(\Delta {x_2})=2\left[{1+sin{c^2}\left({{\frac{{\pi L_2}}{{\lambda d_2}}}\Delta {x_2}}\right)}\right],
\end{equation}
when $\Delta {x_2}$ approaches infinity, ${g^{(2)}}(\Delta {x_2})$ equals 2, which means the two detectors are far enough apart; ${g^{(2)}}(\Delta {x_2})$ equals 4 when $\Delta {x_2}=0$, which means the two detectors are at the same symmetric position.
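The limits of equation (\ref{equ13}) can be checked numerically in the same way as the temporal case; the geometry values below ($L_2$ and $d_2$) are placeholders, not the experimental parameters:

```python
import numpy as np

WL = 632.8e-9          # He-Ne wavelength (m)
L2, d2 = 1.0, 0.25e-3  # assumed RG2 -> detector distance and source size (m)

def g2_spatial(dx2):
    """Equation (13): g2 = 2 * [1 + sinc^2(pi*L2*dx2 / (WL*d2))],
    with sinc(x) = sin(x)/x, i.e. np.sinc(a) = sin(pi*a)/(pi*a)."""
    return 2.0 * (1.0 + np.sinc(L2 * dx2 / (WL * d2)) ** 2)

print(g2_spatial(0.0))   # -> 4.0 at the symmetric position
print(g2_spatial(0.05))  # ~2.0 far from the peak
```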
In our spatial superbunching experiment the background is close to 2, for the following reasons. Removing $\rm RG_2$ and repeating the third step above, the measured degree of second-order temporal correlation is $1.90 \pm 0.01$, corresponding to the ordinary bunching effect with ${g^{\left(2\right)}}\left(0\right)=2$. When $\rm RG_2$ is added, the modulated light is modulated again, which is why the spatial superbunching effect can be observed. In addition, the rotation of $\rm RG_1$ and $\rm RG_2$ causes vibrations of the optical table in the actual experiment, so the measured degree of second-order correlation is about 1.9 rather than exactly 2. Therefore, the measured background of the spatial superbunching effect accords with the theoretical value of 2.
\section{\label{Conclusion}Conclusion}
In summary, we achieved the superbunching effect from a pair of modulated distinguishable classical light beams in the temporal and spatial domains. We employed the classical theory of intensity fluctuation correlation to derive the normalized second-order correlation functions of the temporal and spatial superbunching effects. The theory shows good agreement with the measured ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$; the degree of second-order spatial correlation was measured as $3.42\pm0.02$. Although the classical theory of intensity fluctuation correlation and the quantum two-photon interference theory give completely opposite predictions for the phenomena in this experiment, this does not mean that quantum theory contradicts classical theory; it only shows that there is no two-photon interference in this experiment. Therefore, studying this interesting phenomenon is not only conducive to future research on the superbunching effect, which has potential applications in improving the visibility of ghost imaging, but also plays an important role in understanding the relationship between the classical theory of intensity fluctuation correlation and the quantum two-photon interference theory.
\section*{Acknowledgments}
The authors would like to thank Dr. J.B. Liu for helpful discussions. This work was supported by Shaanxi Key Research and Development Project (Grant No. 2019ZDLGY09-10); Key Innovation Team of Shaanxi Province (Grant No. 2018TD-024) and 111 Project of China (Grant No.B14040).
\section*{ORCID iDs}
Sheng Luo https://orcid.org/0000-0001-6495-0900\\
Huai-Bin Zheng https://orcid.org/0000-0003-2313-4119
\bibliographystyle{iopart-num}
\section{Introduction}
Since Hanbury Brown and Twiss (HBT) first observed the two-photon bunching effect in 1956 \cite{brown1956correlation,brown1956test}, they found that photons emitted by thermal light source are more inclined to arrive in pairs rather than randomly. The degree of second-order correlation of thermal light in the HBT interferometer was measured as 2, which greatly promoted the development of quantum optics \cite{glauber1963quantum,glauber1963coherent,glauber1963photon,sudarshan1963equivalence,glauber2006nobel}. Then the discovery of the superbunching effect that the degree of second-order correlation is greater than 2 has also attracted the attention of researchers \cite{lipeles1965direct,kaul1966observation,akhmanov1967nonlinear,kocher1967polarization,auffeves2011few,hoi2012generation,grujic2013repulsively,bhatti2015superbunching,hong2012two,zhou2017superbunching,manceau2019indefinite,Yu2019Experimental}. It is generally accepted that the HBT effect is all well explained by both classical and quantum interpretations which can be unified. Classical intensity fluctuation correlation theory was usually employed to interpret this phenomenon \cite{brown1957interferometry}, which emphasized whether the two detectors could detect specks with the same intensity fluctuation in the HBT interferometer. It can also be interpreted by quantum theory, in which two-photon bunching is interpreted by superposition of different alternatives to trigger a two-photon coincidence count \cite{glauber1963quantum,glauber1963coherent,glauber1963photon,fano1961quantum}. In 2017, Bin Bai et al. reported an experiment that bunching effect was observed without two-photon interference since there is only one alternative to trigger a joint detection event \cite{bai2017hanbury}. It seems to indicate the discrepancy between classical and quantum theory. Ultimately, he used Glauber's quantum optical correlation theory to explain this interesting phenomenon. Later, Yu Zhou et al. 
proposed quantum two-photon interference theory to explain superbunching pseudothermal light based on multi-path interference \cite{zhou2017superbunching}. However, there is no classical intensity fluctuation correlation theory analyzing the such superbunching effect with distinguishable interference paths in the temporal and spatial domain.
In this paper, we proposed and demonstrated a method to observe temporal and spatial superbunching effect from a pair of modulated distinguishable classical light. The normalized temporal and spatial second-order correlation function were measured as ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$ and ${g^{\left(2\right)}}\left(0\right)=3.42\pm0.02$, respectively. At the same time, the
theoretical results derived from classical intensity fluctuation correlation theory are in good agreement with the experimental results. Based on above study, it further demonstrates the unity and complementarity between quantum theory and classical theory, which is not only replenish the theoretical study of superbunching effect, but also helpful to understand the physical essence of superbunching effect.
The remaining parts of the paper are organized as follows. In Sec. \ref{Methods}, classical intensity fluctuation correlation theory is employed to interpret the superbunching effect, then the theoretical derivation results is verified by the designed experimental scheme. The discussions and conclusions are in Sec. \ref{Discussion} and \ref{Conclusion}, respectively.
\section{\label{Methods}Methods}
\subsection{Classical intensity fluctuation correlation theory}
Instead of the quantum two-photon interference theory, the classical intensity fluctuation correlation theory is used to explain the superbunching effect. This is mainly due to the correlation between the electromagnetic waves propagating to two detectors located at different space-time coordinates $({r_1},{t_1})$ and $({r_2},{t_2})$. Therefore, the function of the second-order correlation of optical field can be expressed as
\begin{equation}\label{equ1}
{G^{(2)}}({r_1},{t_1};{r_2},{t_2}) = \left\langle {{E_1}({r_1},{t_1}){E_2}({r_2},{t_2}){E_1}^*({r_1},{t_1}){E_2}^*({r_2},{t_2})} \right\rangle,
\end{equation}
where ${\left\langle { \cdot \cdot \cdot } \right\rangle}$ is the ensemble average by taking all the possible realizations into account, ${E_i}({r_i},{t_i})$ is the electric field intensity on the surface of detector $\rm D_i$, where ${E_i}^*({r_i},{t_i})$is the complex conjugate of ${E_i}({r_i},{t_i})$, $i = 1,2$.
\begin{figure}[htb]
\centering
\includegraphics[width=10cm]{figure1.pdf}
\caption{\label{figure1} The theoretical model diagram of a joint detection event triggered by different electromagnetic waves. $\rm RG_1$ and $\rm RG_2$ are two rotating ground glass plates. $\rm D_1$ and $\rm D_2$ are two single-photon detectors in the HBT interferometer. $\rm CC$ is the two-photon coincidence count detection system.}
\end{figure}
When a single-mode continuous-wave laser light passes through two rotating ground glass (RG) plates continuously, the scattered pseudothermal light, which fluctuates in time and space, will arrive at the surfaces of the two detectors. The theoretical model of superbunching effect is shown in figure \ref{figure1}. There will be eight different electromagnetic waves triggering the two detectors, $E_{111}$, ${E_{112}}$, ${E_{121}}$, ${E_{122}}$, ${E_{211}}$, ${E_{212}}$, ${E_{221}}$, ${E_{222}}$. Taking $E_{121}$ as an example, it represents the electromagnetic wave scattered by the laser passing through position 1 of the $\rm RG_1$, then propagates to the position 2 of the $\rm RG_2$, finally arrives at the surface of $\rm D_1$. Other meaning of electromagnetic waves are analogized in turn. For the meaning of $E_{121}$, it is actually a compound electric field, which is obtained by multiplying a initial electromagnetic waves with the propagation function under different conditions. According to Green function for a point light source in classical optics \cite{Goodman1995Introduction}, the electromagnetic wave which arrived at the surface of detector without the spatial part can be expressed as
\begin{eqnarray}\label{equ2}
{E_{ijk}} & =E_{i} \cdot T_{j | i} \cdot T_{k|j|i} \nonumber \\
& =e^{-i\omega_{0}\left(t_{i}-t_{0}\right)} \int_{\omega_{0}-\frac{1}{2} \Delta \omega_{1}}^{\omega_{0}+\frac{1}{2} \Delta \omega_{1}} e^{-i\omega_{0j}\left(t_{j}^{''}-t_{i}^{'}\right)} d \omega_{0 j} \int_{\omega_{0}-\frac{1}{2}\Delta \omega_{2}}^{\omega_{0}+\frac{1}{2} \Delta \omega_{2}} e^{-i\omega_{k}\left(t_{k}-t_{j}^{''}\right)} d \omega_{k},
\end{eqnarray}
where $E_i$ is the electromagnetic wave which propagated by laser passing through position $i$ of the $\rm RG_1$, ${T_{j|i}}$ represents the propagation function of pseudothermal light when $E_i$ passes through the position $j$ of the $\rm RG_2$, and ${T_{k|j|i}}$ is the propagation function of electromagnetic wave when it arrives at the surface of $\rm D_k$ under the condition of ${E_i}\cdot{T_{j|i}}$. ${\omega _0}$ is the center frequency of the laser, ${\omega _{0i}}$ is the frequency which the laser scattered by the position $i$ of the $\rm RG_1$, and ${\omega _{j}}$ is the frequency of the light which scattered by the position $j$ of the $\rm RG_2$. $\Delta {\omega _{\rm{1}}}$ and $\Delta {\omega _{\rm{2}}}$ represent the spectral widths of pseudothermal light scattered by the $\rm RG_1$ and $\rm RG_2$, respectively. $t_i^{'}$, $t_j^{''}$ and $t_k$ are the time when the electromagnetic wave arrives at position $i$ of the $\rm RG_1$, position $j$ of the $\rm RG_2$ and the detector $k$ surface respectively, $i,j,k = 1,2$.
Then there are four types of coincidence current detected by joint detection system as shown in the figure \ref{figure1}, ${E_{111}}{E_{222}}$, ${E_{112}}{E_{221}}$, ${E_{121}}{E_{212}}$, ${E_{122}}{E_{211}}$. Therefore, the total current measured by the detector can be expressed as
\begin{equation}\label{equ3}
\fl {E_1}({r_1},{t_1}){E_2}({r_2},{t_2}) = \frac{1}{4}{E_{111}}{E_{222}} + \frac{1}{4}{E_{112}}{E_{221}} + \frac{1}{4}{E_{121}}{E_{212}} + \frac{1}{4}{E_{122}}{E_{211}}.
\end{equation}
Substituting equation (\ref{equ3}) into equation (\ref{equ1}), the second-order correlation function can be written as
\begin{eqnarray}\label{equ4}
\fl {G^{(2)}}({r_1},{t_1};{r_2},{t_2}) &= \frac{1}{{16}}\langle \left({{E_{111}}{E_{222}} + {E_{112}}{E_{221}}+{E_{121}}{E_{212}} + {E_{122}}{E_{211}}} \right)\nonumber\\&\cdot
\left({{E_{111}}^*{E_{222}}^* + {E_{112}}^*{E_{221}}^* + {E_{121}}^*{E_{212}}^* + {E_{122}}^*{E_{211}}^*} \right)\rangle.
\end{eqnarray}
There will be 16 terms after expanding equation (\ref{equ4}). It is obvious that there are 12 cross-correlation terms besides four auto-correlation terms. The whole second-order correlation function can be categorized into four groups. We calculate one term for each group, and the other terms are calculated in the same way. Substituting equation (\ref{equ2}) into equation (\ref{equ4}), one can get concrete expression.
The first group is all the autocorrelation group. The calculated results are all constant. Take $E_{111}E_{222}E_{111}^*E_{222}^*$ as a example
\begin{eqnarray}\label{equ5}
\fl \langle E_{111} E_{222} E_{111}^{*} E_{222}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2 | 2} T_{2|2| 2}\right)\left(E_{1}^{*} T_{1 | 1}^{*} T_{1|1| 1}^{*}\right)\left(E_{2}^{*} T_{2 | 2}^{*} T_{2|2| 2}^{*}\right)\rangle \nonumber \\
& =\left(\Delta \omega_{1}\right)^{2}\left(\Delta \omega_{2}\right)^{2},
\end{eqnarray}
The rest of the three terms, ${E_{112}}{E_{221}}{E_{112}}^*{E_{221}}^*$,
${E_{121}}{E_{212}}{E_{121}}^*{E_{212}}^*$,
${E_{122}}{E_{211}}{E_{122}}^*{E_{211}}^*$ in the same group have the same result as the one of equation (\ref{equ5}).
The second term that needs to be calculated is $E_{111}E_{222}E_{112}^*E_{221}^*$. With the same method above, it can express as
\begin{eqnarray}\label{equ6}
\fl \langle E_{111} E_{222} E_{112}^{*} E_{221}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{1|1}^{*} T_{2|1| 1}^{*}\right)\left(E_{2}^{*} T_{2|2}^{*} T_{1|2| 2}^{*}\right)\rangle \nonumber \\
& = {\left({\Delta {\omega _1}}\right)^2}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{{\rm{ - }}i {{\omega _1}\left( {{t_1} - {t_2}} \right)} }}d} {\omega _1}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i{{\omega _2}\left( {{t_2} - {t_1}} \right)} }}d} {\omega _2} \nonumber \\
& = {\left({\Delta {\omega _1}}\right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[{\frac{1}{2}\Delta {\omega _2}\left({{t_1} - {t_2}} \right)}\right],
\end{eqnarray}
where $\sin{c}(x)=\sin (x)/x$, the other terms ${E_{112}}{E_{221}}{E_{111}}^*{E_{222}}^*,{E_{121}}{E_{212}}{E_{122}}^*{E_{211}}^*,{E_{122}}{E_{211}}\\\cdot{{E_{121}}^*}{E_{212}}^*$ in the same group have the same result as the one of equation (\ref{equ6}).
All of the terms in the second group add up to $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]$.
The third term that needs to be calculated is ${E_{111}}{E_{222}}{E_{121}}^*{E_{212}}^*$, it can be obtained by simplification as
\begin{eqnarray}\label{equ7}
\fl \langle E_{111} E_{222} E_{121}^{*} E_{212}^{*}\rangle
& =\langle \left(E_{1} T_{1|1} T_{1|1 | 1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{2|1}^{*} T_{1|2| 1}^{*}\right)\left(E_{2}^{*} T_{1|2}^{*} T_{2|1| 2}^{*}\right)\rangle \nonumber \\
& = {\left( {\Delta {\omega _2}} \right)^2}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i {{\omega _{01}}\left( {{t_1}^{''} - {t_2}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{-i{{\omega _{02}}\left( {{t_2}^{''} - {t_1}^{''}} \right)}}}d} {\omega _{02}} \nonumber \\
& = {\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right].
\end{eqnarray}
${E_{112}}{E_{221}}{E_{122}}^*{E_{211}}^*$,
${E_{121}}{E_{212}}{E_{111}}^*{E_{222}}^*$,
${E_{122}}{E_{211}}{E_{112}}^*{E_{221}}^*$ are the rest of the terms, they have the same result as equation (\ref{equ7}). The whole third group is $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]$.
The fourth term that needs to be calculated is ${E_{111}}{E_{222}}{E_{122}}^ * {E_{211}}^*$, which is
\begin{eqnarray}\label{equ8}
\fl \langle E_{111} E_{222} E_{122}^{*} E_{211}^{*}\rangle
& =\langle\left(E_{1} T_{1|1} T_{1|1|1}\right)\left(E_{2} T_{2|2} T_{2|2|2}\right)\left(E_{1}^{*} T_{2|1}^{*} T_{2|2| 1}^{*}\right)\left(E_{2}^{*} T_{1|2}^{*} T_{1|1| 2}^{*}\right)\rangle \nonumber \\
& = \int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i {{\omega _{01}}\left( {{t_1}^{''} - {t_2}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i\left[ {{\omega _{02}}\left( {{t_1} - {t_2}} \right)} \right]}}d} {\omega _{02}} \nonumber \\
& \cdot\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _1}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _1}} {{e^{{\rm{ - }}i{{\omega _{01}}\left( {{t_2}^{''} - {t_1}^{''}} \right)}}}d} {\omega _{01}}\int_{{\omega _0}{\rm{ - }}\frac{1}{2}\Delta {\omega _2}}^{{\omega _0}{\rm{ + }}\frac{1}{2}\Delta {\omega _2}} {{e^{ - i\left[ {{\omega _{02}}\left( {{t_2} - {t_1}} \right)} \right]}}d} {\omega _{02}} \nonumber \\
& = {\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right].
\end{eqnarray}
And the rest of the terms,
${E_{112}}{E_{221}}{E_{121}}^*{E_{212}}^*$,
${E_{121}}{E_{212}}{E_{112}}^*{E_{221}}^*$,
${E_{122}}{E_{211}}{E_{111}}^*{E_{222}}^*$ in the same group have the same result as equation (\ref{equ8}); together they add up to $4{\left( {\Delta {\omega _1}} \right)^2}{\left( {\Delta {\omega _2}} \right)^2}sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''} - {t_2}^{''}} \right)} \right]sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]$.
Finally, when all the above terms are added together, the second-order temporal correlation function with two RGs in the scheme as shown in figure \ref{figure1} is
\begin{eqnarray}\label{equ9}
\fl {G^{(2)}}({t_1} - {t_2};{t_1}^{''} - {t_2}^{''})&=\frac{1}{4}{\left( {\Delta {\omega _1}} {\Delta {\omega _2}} \right)^2}({1 + sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left({{t_1}-{t_2}} \right)} \right] +sin{c^2}\left[{\frac{1}{2}\Delta {\omega _2}\left( {{t_1}-{t_2}} \right)} \right]} \nonumber \\
&+ sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _1}\left( {{t_1}^{''}-{t_2}^{''}} \right)} \right]sin{c^2}\left[ {\frac{1}{2}\Delta {\omega _2}\left( {{t_1} - {t_2}} \right)} \right]),
\end{eqnarray}
where ${\tau _1} = {t_1}^{''} - {t_2}^{''}$ and ${\tau _{\rm{2}}} = {t_1} - {t_2}$ are the time differences associated with the light passing through the first and second RG, respectively. Substituting these into equation (\ref{equ9}), the normalized
second-order temporal correlation function can be expressed as
\begin{equation}\label{equ10}
{g^{(2)}}({\tau _1};{\tau _{\rm{2}}}) = \left[ {1 + sin{c^2}\left( {\frac{1}{2}\Delta {\omega _1}{\tau _1}} \right)} \right]\cdot\left[ {1 + sin{c^2}\left( {\frac{1}{2}\Delta {\omega _{\rm{2}}}{\tau _2}} \right)} \right].
\end{equation}
When the values of ${\tau _1}$ and ${\tau _2}$ approach infinity, ${g^{(2)}}({\tau _1};{\tau _{\rm{2}}})$ equals 1, which means the detection events are independent of each other. When ${\tau _1}$ and ${\tau _2}$ both equal zero, ${g^{(2)}}({\tau _1};{\tau _{\rm{2}}})$ equals 4, which means the superbunching effect can be observed. This result also agrees with the conclusion derived from the quantum two-photon interference theory \cite{zhou2017superbunching}.
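The limiting values of equation (\ref{equ10}) can be checked with a short numerical sketch. The bandwidths $\Delta\omega_1$, $\Delta\omega_2$ below are arbitrary illustrative values, not the experimental parameters.

```python
# Sanity check of the normalized second-order temporal correlation function,
# g2(tau1, tau2) = [1 + sinc^2(dw1*tau1/2)] * [1 + sinc^2(dw2*tau2/2)].
# The bandwidths dw1, dw2 are arbitrary illustrative values.
import math

def sinc(x):
    # unnormalized sinc: sin(x)/x, with the removable singularity at x = 0
    return 1.0 if x == 0.0 else math.sin(x) / x

def g2(tau1, tau2, dw1=1.0, dw2=0.5):
    return (1.0 + sinc(0.5 * dw1 * tau1) ** 2) * (1.0 + sinc(0.5 * dw2 * tau2) ** 2)

print(g2(0.0, 0.0))             # -> 4.0 (superbunching peak)
print(round(g2(1e6, 1e6), 6))   # -> 1.0 (independent detection events)
```

The sketch reproduces the two limits discussed above: a peak of 4 at zero delay and a background of 1 for large delays.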
\subsection{Experimental verification}
The experimental setup for observing two-photon superbunching from a pair of modulated distinguishable classical light beams is shown in figure \ref{figure2}. The initial polarizations of the two employed He-Ne lasers are horizontal. A half-wave plate (HWP) behind laser 1 changes the horizontal polarization to vertical. $\rm GP_1$ and $\rm GP_2$ are two Glan prisms, which purify the polarizations of the two lasers. $\rm L_1$ and $\rm L_2$ are convergent lenses with focal lengths of 50 mm. The distance between $\rm RG_1$ and $\rm P_1$ is 300 mm. The transverse coherence length of the pseudothermal light generated by $\rm RG_1$ is 1.9 mm in the plane of $\rm P_1$. The diameter of $\rm P_1$ is 1 mm, smaller than the coherence length, so that the light passing through $\rm P_1$ lies within a single coherence area. $\rm L_2$ is employed to focus the light onto $\rm RG_2$. The distance between $\rm L_2$ and $\rm RG_2$ needs to be larger than the focal length of $\rm L_2$, mainly because the pseudothermal light scattered by $\rm RG_1$ is not parallel. $\rm D_1$ and $\rm D_2$ are two single-photon detectors (PerkinElmer, SPCM-AQRH-14-FC). C.C is a two-photon coincidence count detection system (Becker \& Hickl GmbH, DPC-230). The experiment mainly includes the following three steps.
\begin{figure}[htb]
\centering\includegraphics[width=12cm]{figure2.pdf}
\caption{\label{figure2} Experimental setup for measuring superbunching effect from a pair of modulated distinguishable classical light. Laser 1 and Laser 2 are single-mode continuous-wave He-Ne lasers. $\rm PBS_1$ and $\rm PBS_2$ are two polarized beam splitters. $\rm RG_1$ and $\rm RG_2$ are two rotating ground glass plates that can adjust the rotational speed. $\rm P_1$ and $\rm P_2$ are the pinholes. The measuring system is a HBT-like intensity interferometer.}
\end{figure}
In the first step, laser 1 is turned on and laser 2 is turned off, letting the vertically polarized light pass through $\rm RG_1$ and $\rm RG_2$ successively. The second-order temporal correlation function of the ordinary pseudothermal superbunching effect is measured to verify the reliability of the experimental test system. $\rm RG_1$ and $\rm RG_2$ rotate at 100 Hz and 50 Hz, respectively. As a result of the vertical polarization, the modulated light passes through $\rm PBS_2$ and only arrives at $\rm D_1$. The experimental results are shown in figure \ref{figure3}(a). The coincidence count is almost constant over the 50 s collection time. Obviously there is no superbunching effect, because almost no light reaches $\rm D_2$. Keeping the setup unchanged, we insert an additional HWP between $\rm PBS_1$ and $\rm L_1$ (not shown in figure \ref{figure2}) and rotate it so that the light is linearly polarized at $45^{\circ}$ with respect to the horizontal direction; the pseudothermal light is then split into two beams of equal intensity after passing through $\rm PBS_2$. Finally, the HBT intensity interferometer is triggered by the two pseudothermal light beams. The results are shown in figure \ref{figure3}(b). A degree of second-order correlation ${g^{\left(2\right)}}\left(0\right) = 3.91\pm0.02$ was observed, which means the superbunching effect of pseudothermal light was achieved. The measured full width at half maximum (FWHM) is about $(1.70\pm0.02)\,\mu s$ and the visibility of the peak is about 59.2\%.
\begin{figure}
\centering\includegraphics[width=12cm]{figure3.pdf}
\caption{\label{figure3} (a) Two-photon coincidence counts when only laser 1 is turned on. (b) The degree of second-order temporal correlation when only $45^{\circ}$ linearly polarized light is incident. (c) The measured normalized second-order temporal correlation function when the two pseudothermal light beams are focused on different areas of $\rm RG_1$. (d) The measured normalized second-order temporal correlation function when the two pseudothermal light beams are focused on the same area of $\rm RG_1$. $\tau$ is the time difference between the two single-photon detection
events within a two-photon coincidence count. The black circles are measured results and the red lines are theoretical fittings.}
\end{figure}
In the second step, laser 1 and laser 2 are turned on simultaneously. The vertically polarized light from laser 1 and the horizontally polarized light from laser 2 are combined into one beam at $\rm PBS_1$, and then focused onto $\rm RG_1$ by the lens $\rm L_1$. Note that the two beams are focused on different areas of $\rm RG_1$ in this step, which means $\rm RG_1$ produces two completely different sets of speckles. When the two sets of speckles with mutually perpendicular polarization directions pass through $\rm PBS_2$, the vertically polarized light from laser 1 reaches detector $\rm D_1$, and the horizontally polarized light from laser 2 reaches detector $\rm D_2$. The measurement result is shown in figure \ref{figure3}(c): ${g^{\left(2\right)}}\left(\tau\right)$ is flat and no superbunching effect occurs.
In the third step, the experimental procedure is the same as in the second step, except that the two combined light beams from laser 1 and laser 2 are focused on the same area of $\rm RG_1$ after passing through $\rm PBS_1$. Hence the speckle patterns scattered by $\rm RG_1$ and $\rm RG_2$ are almost the same, and they have the same temporal and spatial fluctuations. The measurement result is shown in figure \ref{figure3}(d). The circles are the measured results, and the red line is the theoretical fitting obtained by employing equation (\ref{equ9}). The FWHM is $(1.71 \pm 0.02)\,\mu s$ and the visibility of the peak is about 58.5\%. The degree of second-order correlation is
${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$, as depicted in figure \ref{figure3}(d). It is obvious that ${g^{\left(2\right)}}\left(0\right)$ is greater than 2, which means the temporal superbunching effect of pseudothermal light was observed successfully.
\begin{figure}[htb]
\centering\includegraphics[scale=0.3]{figure4.pdf}
\caption{\label{figure4} The result of measured normalized second-order spatial correlation functions. The black circles are measured results and the red line is theoretical fitting.}
\end{figure}
Based on the third step, we proceed to study the superbunching effect in the spatial domain. With $\rm D_1$ fixed at $x_1=0$, $\rm D_2$ is moved transversely in steps of 2 mm through $x_2= \pm 12$ mm. The collection time for every step is 50 s. The spatial normalized second-order correlation function ${g^{\left(2\right)}}\left(x_1-x_2\right)$ is calculated and the second-order correlation pattern is shown in figure \ref{figure4}. The FWHM of this peak is $(4.8 \pm 0.01)$ mm, while the visibility of the peak is about 22.4\%. The degree of second-order correlation of the pseudothermal light was measured as ${g^{\left(2\right)}}\left(0\right)=3.42\pm0.02$, which means the superbunching effect in the spatial domain was also observed.
\section{\label{Discussion}Discussion}
The superbunching effect was observed in the first step. From a classical point of view, when the $45^{\circ}$ linearly polarized light passes through $\rm RG_1$, $\rm RG_2$ and $\rm PBS_2$ in sequence, the two single-photon detectors at symmetric positions are triggered by pseudothermal light with the same intensity fluctuation and photon statistical distribution. Therefore, the classical intensity fluctuation theory \cite{goodman2007speckle,ou1988violation} can explain why the superbunching effect is observed. In the quantum two-photon interference theory \cite{mandel1999quantum}, when the horizontal polarization is changed to $45^{\circ}$ linear polarization by rotating the HWP, the pseudothermal light passing through $\rm PBS_2$ has two different and indistinguishable paths to trigger the two detectors. This is why quantum two-photon interference theory can also explain the observed superbunching effect of ${g^{\left(2\right)}}\left(0\right) = 3.91\pm0.02$.
On the basis of the first step, with laser 1 and laser 2 turned on simultaneously, the second and third steps form a comparative experiment. The difference between them is whether the two beams are focused on the same area of $\rm RG_1$ or not. From the classical theory of intensity fluctuation correlation, the second and third steps should give different results. When the lasers are focused on different positions of $\rm RG_1$, the two-photon coincidence count system is triggered by two completely different sets of speckles. Therefore, the superbunching effect cannot be observed, corresponding to figure \ref{figure3}(c). The third step is the opposite of the second step: the superbunching effect is observed, corresponding to figure \ref{figure3}(d), mainly because the lasers are focused on the same area of $\rm RG_1$. The two sets of speckles that pass through $\rm RG_1$ and $\rm RG_2$ have identical fluctuations in the same coherence region.
However, the quantum two-photon interference theory gives the same prediction for the second and third steps. It is well known that two-photon interference theory emphasizes whether the interference paths are different and indistinguishable. In the second and third steps, the polarization directions of the two incident lasers are orthogonal. After passing through $\rm PBS_2$, the horizontally polarized light enters detector $\rm D_2$, and the vertically polarized light enters detector $\rm D_1$. There is only one path to trigger the two-photon coincidence count detection system. Under the prediction of quantum two-photon interference theory, there is no superbunching effect in either the second or the third step. But we observed the superbunching effect of pseudothermal light in the third step, where the degree of second-order temporal correlation was measured as ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$, as shown in figure \ref{figure3}(d). This is contrary to the prediction of the quantum two-photon interference theory. The temporal superbunching effect of classical light in figure \ref{figure3}(d) seems to break the unity of classical and quantum theory: the quantum two-photon interference theory cannot explain this strange phenomenon. Here we use the classical intensity fluctuation theory to explain the superbunching effect without two-photon interference.
Since the polarizations of the two beams are orthogonal and they are focused on the same area of $\rm RG_1$, the two pseudothermal light beams have different polarization directions but undergo the same fluctuation. This means the distribution of the electric field ${E}({r},{t})$ is the same. After passing through $\rm RG_1$ and $\rm RG_2$ in series, the two-photon coincidence count system is triggered by two identical sets of speckles. The specific derivation is shown in Sec. \ref{Methods} above. Therefore, the superbunching effect can be well explained by the classical intensity fluctuation theory.
In the fourth step, the background equals 2 in the spatial superbunching correlation diagram. This is mainly due to the fact that $\rm P_1$, which is set in front of $\rm RG_2$, performs a spatial mode selection on the light passing through $\rm RG_2$, so that the speckles have the same fluctuation in the same spatial mode. However, there is no change in the longitudinal correlation length related to the superbunching effect in the time domain. Therefore, the temporal superbunching effect is different from the spatial superbunching effect, and their measured backgrounds are 1 and 2, respectively.
It is well known that there exists a space-time duality between the equations that describe the paraxial diffraction of light beams \cite{Kolner:89,301659,TORRESCOMPANY20111}. Similar to the above method for calculating the temporal superbunching effect, the classical intensity fluctuation correlation theory can also be used to calculate the spatial superbunching effect. The electromagnetic wave arriving at the detector surface can be expressed as
\begin{eqnarray}\label{equ11}
E_{i j k} &=E_{i} \cdot T_{j | i} \cdot T_{k|j|i} \nonumber \\
& =e^{-ik_{0}\left(r_{i}-r_{0}\right)} \int_{-\frac{1}{2} d_{1}}^{\frac{1}{2}d_{1}} e^{-ik_{0j}\left(r_{j}^{''}-r_{i}^{'}\right)} d r_{0 j} \int_{-\frac{1}{2}d_{2}}^{\frac{1}{2} d_{2}} e^{-ik_{k}\left(r_{k}-r_{j}^{''}\right)} d r_{k}.
\end{eqnarray}
The meaning of the symbols in equation (\ref{equ11}) is similar to that in equation (\ref{equ2}). $d_1$ and $d_2$ represent the sizes of the pseudothermal light sources formed by $\rm RG_1$ and $\rm RG_2$, respectively. ${k_0}$ is the wave vector of the laser, ${k _{0j}}$ is the wave vector of the light scattered from position $i$ of $\rm RG_1$ to position $j$ of $\rm RG_2$, and ${k _{k}}$ is the wave vector of the light scattered from position $j$ of $\rm RG_2$ to the detector. $r_i^{'}$, $r_j^{''}$ and $r_k$ are the position vectors of the electromagnetic wave at position $i$ of $\rm RG_1$, position $j$ of $\rm RG_2$ and the detector surface $k$, respectively, where $i,j,k = 1,2$. Substituting equation (\ref{equ11}) into equation (\ref{equ4}), the normalized second-order spatial correlation function is
\begin{equation}\label{equ12}
{g^{(2)}}(\Delta {x_1};\Delta {x_2}) = \left[ {1 + sin{c^2}\left( {\frac{{\pi L_1}}{{\lambda d_1}}\Delta {x_1}} \right)} \right]\cdot\left[{1 + sin{c^2}\left({\frac{{\pi L_2}}{{\lambda d_2}}\Delta {x_2}} \right)} \right],
\end{equation}
where $\Delta {x_1} = x_1^{'} - x_2^{'}$ is the transverse position difference between the two orthogonally polarized beams on $\rm RG_1$, and $\Delta {x_2} = x_1 - x_2$ is the transverse position difference between detectors $\rm D_1$ and $\rm D_2$. $\lambda$ is the wavelength of the light source, $L_1$ is the distance from $\rm RG_1$ to the detectors and $L_2$ is the distance from $\rm RG_2$ to the detectors.
As a result of two orthogonal light beams focus on the same area of $\rm RG_1$ ($\Delta {x_1} = 0$), they generate same fluctuation after passing through $\rm RG_1$. So the equation (\ref{equ12}) can be further simplified as
\begin{equation}\label{equ13}
{g^{(2)}}(\Delta {x_2})=2\left[{1+sin{c^2}\left({{\frac{{\pi L_2}}{{\lambda d_2}}}\Delta {x_2}}\right)}\right],
\end{equation}
when the value of $\Delta {x_2}$ approaches infinity, ${g^{(2)}}(\Delta {x_2})$ equals 2, which means the two detectors are far enough apart; ${g^{(2)}}(\Delta {x_2})$ equals 4 when $\Delta {x_2}=0$, which means the two detectors are at symmetric positions.
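These two limits of equation (\ref{equ13}) can also be verified numerically. The distance, wavelength and source-size values below are illustrative assumptions (a He-Ne wavelength, a 1 mm source, a 0.3 m propagation distance), not fitted experimental parameters.

```python
# Numerical check of the limiting values of the spatial correlation function
# g2(dx2) = 2 * [1 + sinc^2(pi * L2 * dx2 / (lambda * d2))].
# L2, lam, d2 are illustrative assumed values, not the experimental ones.
import math

def sinc(x):
    # unnormalized sinc: sin(x)/x with the removable singularity at x = 0
    return 1.0 if x == 0.0 else math.sin(x) / x

def g2_spatial(dx2, L2=0.3, lam=633e-9, d2=1e-3):
    return 2.0 * (1.0 + sinc(math.pi * L2 * dx2 / (lam * d2)) ** 2)

print(g2_spatial(0.0))            # -> 4.0 (detectors at symmetric positions)
print(round(g2_spatial(0.1), 6))  # -> 2.0 (detectors far apart: background)
```

Note that the background here is 2 rather than 1, matching the measured spatial correlation discussed below.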
The background is close to 2 in our experiment on the spatial superbunching effect. The reasons are as follows: removing $\rm RG_2$ and repeating the third step above, the measured degree of second-order temporal correlation is $1.90 \pm 0.01$, which corresponds to the bunching effect with ${g^{\left(2\right)}}\left(0\right)=2$. When $\rm RG_2$ is added, the modulated light is modulated again; this is why we can observe the spatial superbunching effect. At the same time, the rotation of $\rm RG_1$ and $\rm RG_2$ causes a vibration of the optical table in the actual experiment, so the measured degree of second-order correlation is 1.9 or higher because of the vibration of the laser. Therefore, the measured background of the spatial superbunching effect accords with our theory that it equals 2.
\section{\label{Conclusion}Conclusion}
In summary, we achieved the superbunching effect from a pair of modulated distinguishable classical light beams in the temporal and spatial domains. We employed the classical theory of intensity fluctuation correlation to deduce the normalized second-order correlation functions of the temporal and spatial superbunching effects. The theory shows good agreement with the measured temporal value ${g^{\left(2\right)}}\left(0\right)=3.83\pm0.02$; the degree of second-order spatial correlation was measured as $3.42\pm0.02$. Although the classical theory of intensity fluctuation correlation and the quantum two-photon interference theory give completely opposite predictions for the phenomena in this experiment, this does not mean quantum theory contradicts classical theory; it only shows that there is no two-photon interference in this experiment. Therefore, studying this interesting phenomenon is not only conducive to future research on the superbunching effect, which has potential applications in improving the visibility of ghost imaging, but also plays an important role in understanding the relationship between the classical theory of intensity fluctuation correlation and the quantum two-photon interference theory.
\section*{Acknowledgments}
The authors would like to thank Dr. J.B. Liu for helpful discussions. This work was supported by Shaanxi Key Research and Development Project (Grant No. 2019ZDLGY09-10); Key Innovation Team of Shaanxi Province (Grant No. 2018TD-024) and 111 Project of China (Grant No.B14040).
\section*{ORCID iDs}
Sheng Luo https://orcid.org/0000-0001-6495-0900\\
Huai-Bin Zheng https://orcid.org/0000-0003-2313-4119
\bibliographystyle{iopart-num}
\section{Introduction}
Reinforcement learning (RL) studies sequential decision-making problems where the agent aims to maximize its expected total reward by interacting with an unknown environment over time. However, in many applications such as electric grids and robotics, the agent often deals with conflicting requirements \cite{mannor2004geometric}, or has safety constraints during the learning process \cite{achiam2017constrained}. The constrained reinforcement learning (C-RL) framework is a natural way to embed all conflicting requirements efficiently and incorporate safety \cite{altman1999constrained,paternain2019constrained,castellano2021reinforcement,achiam2017constrained,chen2021primal,bai2022achieving,calvo2021state}.
There are two major approaches to finding the optimal policy of a C-RL problem, where the first approach solves it in the occupancy measure space. The constrained Markov Decision Process (CMDP) framework is a standard, and well-studied formulation for reinforcement learning with constraints \cite{altman1999constrained}. The agent aims to maximize the total reward function while satisfying requirements in secondary cumulative reward constraints. The CMDP problem can be equivalently written as a linear programming problem in occupancy measure space, and the optimal policy could be recovered from the optimal occupancy measure \cite{altman1999constrained}. However, this approach requires knowledge of the transition kernel of the underlying dynamical system explicitly, which is not always available in many realistic applications.
An alternative approach is to apply the Lagrangian duality and solve the C-RL problem in policy space \cite{chen2021primal,bai2022achieving,calvo2021state,liu2021learning,ding2020natural}. These approaches solve the min-max optimization problem using a sampling-based primal-dual algorithm or stochastic gradient descent-ascent (SGDA) algorithm, where the Lagrangian function is augmented with a possible regularization term, e.g., a KL divergence regularization. The primal variables and dual variables are updated iteratively, either using gradient information or solving a sub-optimization problem. The outcome of primal-dual algorithms is often subject to two cases: in the first case, the output of the primal-dual algorithm is a mixing policy, which is a weighted average of history outputs \cite{chen2021primal,bai2022achieving,calvo2021state}. In the second case, instead of showing the output policy converges to the optimal policy, they present a regret analysis for objective functions, and constraints \cite{liu2021learning,ding2020natural}. In summary, a key limitation is that the policy often oscillates and does not converge to the optimal policy, i.e., there is a mismatch between the behavioral policy and the optimal one. In this paper, we aim to tackle the above limitations by introducing a novel SGDA algorithm leveraging recent results on regularized saddle flow dynamics. Some of the proofs are omitted due to space constraints.
The key reason that the above sampling-based primal-dual algorithms do not converge is that the Lagrangian function for the C-RL problem does not possess sufficient convexity: it is bilinear in occupancy measure space and non-convex-concave in policy space. Our proposed method is rooted in the study of saddle flow dynamics \cite{you2021saddle, cherukuri2016asymptotic}. By adding a carefully crafted augmented regularization, the dissipative saddle flow proposed in \cite{you2021saddle} makes minimal convexity-concavity requirements and yet still guarantees asymptotic convergence to a saddle point.
Leveraging tools from this dissipative saddle flow framework, we propose a novel algorithm to solve the C-RL problem in occupancy measure space, where the dynamics asymptotically converge to the optimal occupancy measure and optimal policy. We further extend the continuous-time algorithm to a model-free setting, where the discretized SGDA algorithm is shown to be a stochastic approximation of the continuous-time saddle flow dynamics. We prove that the SGDA algorithm almost surely converges to the optimal solution of the C-RL problem. To the best of our knowledge, this work is the first attempt to solve the C-RL problem with convergence to the optimal occupancy measure and policy.
Notation: Let $ \mathcal{K} \subset \mathbb{R}^n $ be a closed convex set. Given a point $ y \in \mathbb{R}^n $, $\Psi_{ \mathcal{K}}[y] = \argmin_{z \in \mathcal{K}} \|z- y\| $ denote the point-wise projection (nearest point) in $ \mathcal{K}$ to $y$. Given $x \in \mathcal{K}$ and $v \in \mathbb{R}^n$, define the vector field projection of $v$ at $x$ with respect to $ \mathcal{K}$ as:
$ \Pi_{\mathcal{K}}[x,v] = \lim_{\delta \to 0^+} \frac{\Psi_{ \mathcal{K}}[x + \delta v] -x}{\delta}.$
\section{Problem Formulation}
In the constrained reinforcement learning problem (C-RL), $\mathcal{S}$ denotes the finite state space, $ \mathcal{A}$ denotes the finite action space, and $P: \mathcal{S} \times \mathcal{A} \to \triangle^{| \mathcal{S}|}$ gives the transition dynamics of the CMDP, where $P(\cdot | s,a)$ denotes the probability distribution of next state conditioned on the current state $s$ and action $a$. $r: \mathcal{S} \times \mathcal{A} \to [0,1]$ is the reward function, $g^i: \mathcal{S} \times \mathcal{A} \to [-1,1]$ denotes the $i^{th}$ constraint cost function. The scalar $\gamma$ denotes the discount factor, and $q$ denotes the initial distribution of the states.
A stationary policy is a map $\pi: \mathcal{S} \to \triangle^{| \mathcal{A}|}$ from states to a distribution in the action space. The value functions for both reward and constraints' cost following policy $\pi$ are given by:
\begin{align*}
& V^{\pi}_{r}(q) = (1-\gamma)\mathbf{E}_\pi[\textstyle\sum_{t=0}^\infty \gamma^t r(s_t,a_t) \,|\, s_0\sim q ],\\
& V^{\pi}_{g^i}(q) = (1-\gamma)\mathbf{E}_\pi[\textstyle\sum_{t=0}^\infty \gamma^t g^i(s_t,a_t) \,|\, s_0\sim q].
\end{align*}
The standard C-RL problem aims to maximize the total reward function while satisfying requirements in secondary cumulative reward constraints:
\begin{align}\label{problem:CMDP}
\max_{\pi}\;\; & V^{\pi}_{r}(q) \nonumber\\
s.t.\;\; & V^{\pi}_{g^i}(q) \geq h^i, \;\;\forall i \in [I] .
\end{align}
There exist two classes of approaches to finding the optimal policy of a constrained reinforcement learning problem. The constrained Markov Decision Process (CMDP) framework equivalently expresses the C-RL problem as a linear programming problem in occupancy measure space \cite{altman1999constrained}.
Given a policy $\pi$, define $\lambda^{\pi}: \mathcal{S} \times \mathcal{A} \to [0,1]$ as occupancy measure:
\begin{align*}
\lambda^{\pi}(s,a) = (1-\gamma)\textstyle\sum_{t=0}^\infty \gamma^t P_q^\pi(s_t=s,a_t=a) ,
\end{align*}
where $s_0 \sim q$. By definition, the occupancy measure belongs to the probability simplex $\lambda^{\pi} \in \Delta$. Problem \eqref{problem:CMDP} can be equivalently written as a linear programming problem:
\begin{align}\label{eq:LP_CMDP}
\max_{\lambda \in \Delta}\;\; &\textstyle \sum_{a} \lambda_a^Tr_a \\
s.t. \;\; &\textstyle \sum_{a}\lambda_a^T g^i_a \geq h^i, \;\;i \in [I] \nonumber \\
&\textstyle \sum_{a} (I - \gamma P_a^T)\lambda_a = (1-\gamma)q ,\nonumber
\end{align}
where $\lambda_a = [\lambda(1,a),\dots,\lambda(s,a)]^T \in \mathbb{R}^{|\mathcal{S} |} $ is the $a^{th}$ column of $\lambda^{\pi}$, $r_a =[r(1,a),\dots,r(s,a)]^T \in \mathbb{R}^{|\mathcal{S} |}$ denotes reward function associated with action $a$, $P_a$ denotes the transition matrix associated with action $a$. The optimal policy could be recovered by finding the optimal occupancy measure
$\lambda^* $ from \eqref{eq:LP_CMDP}:
$ \pi^*(a|s) = \lambda^*(s,a)/\sum_{a'\in \mathcal{A}}\lambda^*(s,a')$ \cite{altman1999constrained}. However, a key limitation of this approach is that it requires explicit knowledge of the transition kernel of the underlying dynamical system, i.e., $P_a, r_a, g_a^i$.
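To illustrate the occupancy-measure viewpoint, the following sketch computes $\lambda^\pi$ for a fixed policy on a toy two-state, two-action MDP by truncating the discounted sum, and then recovers the policy from it. The transition kernel, initial distribution, and policy below are arbitrary illustrative choices.

```python
# Toy illustration of the occupancy measure: compute lambda^pi by truncating
# the discounted sum, then recover pi via pi(a|s) = lam(s,a) / sum_a' lam(s,a').
# P, q and pi are arbitrary illustrative choices, not from any real problem.
gamma = 0.9
S, A = 2, 2
P = [[[0.8, 0.2], [0.3, 0.7]],   # P[a][s][s']: next-state distribution
     [[0.1, 0.9], [0.6, 0.4]]]
q = [0.5, 0.5]                   # initial state distribution
pi = [[0.7, 0.3], [0.2, 0.8]]    # pi[s][a]

lam = [[0.0] * A for _ in range(S)]
d = list(q)                      # state distribution at time t
for t in range(2000):            # truncate the geometric series
    for s in range(S):
        for a in range(A):
            lam[s][a] += (1 - gamma) * gamma ** t * d[s] * pi[s][a]
    d = [sum(d[s] * pi[s][a] * P[a][s][sp] for s in range(S) for a in range(A))
         for sp in range(S)]

total = sum(sum(row) for row in lam)   # the occupancy measure sums to 1
pi_rec = [[lam[s][a] / sum(lam[s]) for a in range(A)] for s in range(S)]
print(round(total, 6), round(pi_rec[0][0], 6))   # -> 1.0 0.7
```

The recovered policy matches the original one exactly (up to floating point), illustrating the one-to-one correspondence between occupancy measures and stationary policies used by the LP formulation.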
Another approach is to apply the primal-dual algorithm to find the saddle points of the associated Lagrangian function of problem \eqref{problem:CMDP} in policy space:
\begin{align*}
L(\pi,\mu) = V^{\pi}_{r} + \textstyle \sum_{i=1}^I \mu_i (V^{\pi}_{g^i} - h^i).
\end{align*}
Algorithms often augment the Lagrangian function with a regularization term $ \hat{L}(\pi,\mu) = L(\pi,\mu) + R(\pi,\mu)$, e.g., a KL divergence regularization, and update the policy and dual variable using one of the following rules:
\begin{align*}
\pi_{k+1}\! =\! \begin{cases}
\pi_k \! + \! \eta\nabla_{\pi}\hat{L}(\pi_k,\mu_k) \\
\mathrm{argmax}_{\pi} \hat{L}(\pi,\mu_k)
\end{cases}
\mu_{k+1} \!= \!\begin{cases}
\mu_k \!- \eta\nabla_{\mu}\hat{L}(\pi_k,\mu_k) \\
\mathrm{argmin}_{\mu} \hat{L}(\pi_k,\mu)
\end{cases}
\end{align*}
Among the sampling-based primal-dual algorithms, several output a mixing policy of the form $ \pi_T = \sum_{k=0}^{T-1} \eta_k \pi_k$, a weighted average of the history updates \cite{chen2021primal,bai2022achieving,calvo2021state}. The output policy oscillates and does not converge to the optimal policy. On the other hand, several papers provide a regret analysis instead of showing convergence of the algorithm.
To summarize, the CMDP approach can directly solve for the optimal occupancy measure and the optimal policy, but it requires knowledge of the transition kernel. The sampling-based primal-dual algorithms often output a mixing policy over the history and do not converge to the optimal policy. The key limitation is that the Lagrangian function for the C-RL problem does not possess sufficient convexity: it is bilinear in occupancy measure space and nonconvex in policy space. In this paper, we provide a novel algorithm that tackles these difficulties.
\section{Key insight from saddle flow dynamics}
Before introducing our algorithm, we illustrate the key insight from saddle flow dynamics, which explains why the primal-dual algorithm oscillates and does not converge. For a min-max optimization problem, primal-dual algorithms require the Lagrangian function $L(x,y)$ to be strictly convex in $x$ or strictly concave in $y$ in order to converge. Consider the following motivating example with a bilinear Lagrangian function:
\begin{align*}
\min_{x} \max_{y}L(x,y):=xy .
\end{align*}
Our goal is to apply different dynamic laws that seek to converge to some saddle point $(x^*,y^*)=(0,0)$ of $ L(x,y)$, which satisfies $L(x^*,y) \leq L(x^*,y^*) \leq L(x,y^*) $. In particular, consider the following classical primal-dual algorithm:
\begin{align*}
& \dot{x} = -\nabla_x L(x,y) = -y,\\
& \dot{y} = \nabla_y L(x,y) = x.
\end{align*}
In Figure \ref{fig:Bilinear_example}, (a) plots the time series trajectories of the states $x$ and $y$, and (b) plots the vector field and the corresponding phase portrait. We observe that the dynamical system oscillates and does not converge to the saddle point $(0,0)$.
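This oscillation can be reproduced in a few lines: a forward-Euler simulation of the flow $\dot{x}=-y$, $\dot{y}=x$ conserves (up to discretization error) the radius $\sqrt{x^2+y^2}$, so the trajectory keeps circling the saddle point. The step size and horizon are illustrative choices.

```python
# Forward-Euler simulation of the classical primal-dual flow for L(x, y) = x*y:
# xdot = -y, ydot = x. The continuous flow conserves x^2 + y^2, so the
# trajectory circles the saddle point (0, 0) and never converges to it.
dt, steps = 1e-3, 10000
x, y = 1.0, 0.0
for _ in range(steps):
    x, y = x - dt * y, y + dt * x
radius = (x * x + y * y) ** 0.5
print(round(radius, 1))   # -> 1.0 (the radius does not decay)
```

In fact, explicit Euler slightly *inflates* the radius by a factor $\sqrt{1+dt^2}$ per step, so the discretized trajectory spirals outward rather than toward the saddle point.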
\begin{figure}[!htb]
\centering
\subfloat[\centering time series trajectories]{{\includegraphics[width=4.5cm]{Bilinear_time.png} }}%
\subfloat[\centering phase portrait]{{\includegraphics[width=4.5cm]{Bilinear_phase.png} }}%
\caption{Primal-dual dynamics of bilinear Lagrangian function}%
\label{fig:Bilinear_example}%
\end{figure}
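The oscillation can be reproduced numerically. The following is a minimal sketch of our own (not part of the paper); the forward-Euler discretization, step size, and horizon are arbitrary illustrative choices.

```python
import math

# Forward-Euler discretization of the primal-dual dynamics
#   xdot = -y,  ydot = x
# for the bilinear Lagrangian L(x, y) = x * y.
h, n_steps = 0.01, 10_000
x, y = 1.0, 0.0
r0 = math.hypot(x, y)
for _ in range(n_steps):
    x, y = x - h * y, y + h * x
# The trajectory circles the saddle point (0, 0): the distance to it
# never shrinks (for explicit Euler it even grows slowly).
assert math.hypot(x, y) >= r0
```

The continuous-time flow conserves $x^2+y^2$ exactly, so the trajectory orbits the saddle point forever instead of approaching it.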
In \cite{you2021saddle}, the authors introduce a regularization framework for saddle flow dynamics that guarantees asymptotic convergence to a saddle point based on mild assumptions. In this paper, we further extend the above framework to solve the C-RL problem. Specifically, consider the following constrained min-max optimization problem,
\begin{align*}
\min_{x \in \mathcal{K}} \max_{y \in\mathcal{V}}L(x,y)
\end{align*} where $\mathcal{K} \subset \mathbb{R}^n,\mathcal{V} \subset \mathbb{R}^m$ are bounded closed convex sets. We propose a regularized surrogate for $L(x,y)$ via the following augmentation:
\begin{align*}
L(x,y,z,w) :=\frac{1}{2\rho}\|x-z \|^2+ L(x,y)-\frac{1}{2\rho}\|y-w \|^2
\end{align*}
The following projected and regularized saddle flow dynamics aim to find the saddle points of the regularized Lagrangian, whose set of saddle points contains the saddle points of the original Lagrangian. The regularized saddle flow dynamics preserve the same distributed structure, so they can still be implemented in a fully distributed fashion, and they require the same gradient information as the classical primal-dual algorithm:
\begin{align}\label{eq: projected saddle flow dynamics subequations}
&\dot{x} = \Pi_{\mathcal{K}}\Bigr[x, \!-\!\nabla_xL(x,y) \!-\! \frac{1}{\rho}(x-z)\Bigr] , \dot{z} =\Pi_{\mathcal{K}}\Bigr[z, \frac{1}{\rho}(x-z)\Bigr] \nonumber \\
&\dot{y} = \Pi_{\mathcal{V}}\Bigr[y,\!-\!\nabla_yL(x,y) \!-\! \frac{1}{\rho}(y-w)\Bigr] , \dot{w} =\Pi_{\mathcal{V}}\Bigr[w, \frac{1}{\rho}(y-w) \Bigr]
\end{align}
\begin{thm}\label{thm:Saddle FLow Dynamics}
Assume that $L(\cdot, y)$ is convex for all $y$, that $L(x,\cdot)$ is concave for all $x$, that $L$ is continuously differentiable, and that there exists at least one saddle point $(x^* \in \mathcal{K} ,y^* \in \mathcal{V})$, where $\mathcal{K} \subset \mathbb{R}^n,\mathcal{V} \subset \mathbb{R}^m$ are closed and convex. Then the projected saddle flow dynamics \eqref{eq: projected saddle flow dynamics subequations} asymptotically converge to some saddle point $(x^*,y^*)$ of $L(x,y)$, while $x(t) \in \mathcal{K}, y(t) \in \mathcal{V}$ for all $t$, with initialization $x(0) \in \mathcal{K}, y(0) \in \mathcal{V} $.
\ifthenelse{\boolean{arxiv}}{
\textit{Proof}: See Appendix}
{\textit{Proof}: See \cite{zheng2022constrained}.}
\end{thm}
The above theorem shows the projected and regularized saddle flow dynamics will asymptotically converge to the saddle point of the Lagrangian function, which requires mild assumptions on convexity. Additionally, the following result summarizes conditions under which the solutions of the projected system exist and are unique.
\begin{prop}\cite[Prop 2.2]{cherukuri2016asymptotic} \label{pro:existence_projected_system}
Let $f: \mathbb{R}^n \to \mathbb{R}^n$ be Lipschitz on a closed convex polyhedron $\mathcal{K} \in \mathbb{R}^n $. Then, for any $x_0 \in \mathcal{K} $, there exists a unique solution $t \to x(t)$ of the projected system $\dot{x} = \Pi_{\mathcal{K}}\Bigr[x, f(x)\Bigr]$ with $x(0) = x_0$.
\end{prop}
We now apply the regularized saddle flow dynamics to the bilinear Lagrangian function $L(x,y)=xy $.
\ifthenelse{\boolean{arxiv}}{
\begin{subequations}
\begin{align*}
& \dot{x} =-y-\frac{1}{\rho}(x-z), & \dot{z} = \frac{1}{\rho}(x-z), \\
& \dot{y} = x - \frac{1}{\rho}(y-w), & \dot{w} = \frac{1}{\rho}(y-w).
\end{align*}
\end{subequations}
}{}According to Figure \ref{fig:2}, the trajectories of the above saddle flow dynamics asymptotically converge to the saddle point $(0,0,0,0)$, even when the original Lagrangian function is bilinear.
\begin{figure}[!htb]
\centerline{\includegraphics[width=0.6\columnwidth]{Bilinear_Regularized.png}}
\caption{Regularized saddle flow dynamics for $ L(x,y)=xy$ }
\label{fig:2}
\end{figure}
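The same discretization applied to the regularized dynamics illustrates the contrast. This is again our own sketch, with $\rho=1$ and the same arbitrary step size as before.

```python
import math

# Forward-Euler discretization of the regularized saddle flow dynamics
# for L(x, y) = x * y with rho = 1:
#   xdot = -y - (x - z),  zdot = (x - z),
#   ydot =  x - (y - w),  wdot = (y - w).
h, n_steps = 0.01, 20_000
x, z, y, w = 1.0, 0.0, 1.0, 0.0
for _ in range(n_steps):
    dx = -y - (x - z)
    dz = x - z
    dy = x - (y - w)
    dw = y - w
    x, z, y, w = x + h * dx, z + h * dz, y + h * dy, w + h * dw
# The augmented trajectory now converges to the saddle point (0, 0, 0, 0).
assert math.hypot(x, y) < 1e-6 and abs(z) + abs(w) < 1e-6
```

All four eigenvalues of this linear system have strictly negative real part, which is why the regularized trajectory decays to the origin while the unregularized one only orbits it.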
A direct application of the above projected and regularized saddle flow dynamic is to solve the C-RL problem in occupancy measure space \eqref{eq:LP_CMDP}, where the Lagrangian function is also bilinear. Specifically, the Lagrangian function for \eqref{eq:LP_CMDP} in occupancy measure space is:
\begin{align}\label{eq:Lagrangian_LP_CMDP}
L(\lambda,\mu,v)= &\sum_{a} \lambda_a^Tr_a + \sum_{i}\mu_i(\sum_{a}\lambda_a^T g^i_a - h^i)\nonumber \\ & +(1-\gamma) \langle q,v\rangle - \sum_{a\in \mathcal{A}}\lambda_a^T (I - \gamma P_a)v,
\end{align} where $\mu_i \geq 0$ is the Lagrange multiplier associated with the $i^{th}$ inequality constraint and $v$ is the Lagrange multiplier associated with the equality constraint. Therefore, motivated by the projected and regularized saddle flow dynamics framework, we propose a regularized surrogate for \eqref{eq:Lagrangian_LP_CMDP} via the following augmentation:
\begin{align}\label{eq:augmented lagrangian}
L(v,\hat{v},\mu,\hat{\mu},\lambda,\hat{\lambda})& := \frac{1}{2\rho}\|v-\hat{v} \|^2+ \frac{1}{2\rho}\|\mu-\hat{\mu} \|^2 \nonumber \\ &+ L(v,\mu,\lambda) -\frac{1}{2\rho}\|\lambda-\hat{\lambda} \|^2
\end{align}
Slater's condition for C-RL and the following lemma establish the boundedness of the dual decision variables, which naturally provides a closed convex set for projection.
\begin{ass}[Slater's condition for C-RL]\label{ass:Slater C-RL}
There exists a strictly feasible occupancy measure $\tilde{\lambda} \in \Delta$ of problem \eqref{eq:LP_CMDP}, i.e., there exist some $\psi > 0$ such that
\begin{align*}
& \sum_{a}\tilde{\lambda}_a^T g^i_a \geq h^i + \psi,\;\; i \in [I] \nonumber \\
& \sum_{a\in \mathcal{A}} (I - \gamma P_a^T)\tilde{\lambda}_a = (1-\gamma)q ,\nonumber
\end{align*}
\end{ass}
\begin{lem}\cite[Lem.1]{bai2022achieving}[Bounded dual variable]
Under Assumption \ref{ass:Slater C-RL}, the optimal dual variables $\mu^*,v^*$ are bounded. Formally, it holds that $\| \mu^* \|_1 \leq \frac{2}{\psi}$ and $\| v^* \|_{\infty} \leq \frac{1}{1-\gamma}+\frac{2}{(1-\gamma)\psi}$.
\end{lem}
Therefore, we propose the following projected saddle flow dynamics to find the saddle points of \eqref{eq:augmented lagrangian}, where $\mathcal{U}:= \{\mu \,|\, \mu \in \mathbb{R}^{I}_{\geq 0}, \| \mu \|_1 \leq \frac{2}{\psi} \}$ and $\mathcal{V}:= \{ v \,|\, v \in \mathbb{R}^s, \| v \|_{\infty} \leq \frac{1}{1-\gamma}+\frac{2}{(1-\gamma)\psi} \}$ are both closed convex polyhedrons.
\begin{align}\label{eq:regularized saddle flow dynamic C-RL}
&\Dot{v} \;= \Pi_{\mathcal{V}}\Bigr[v, \sum_{a\in \mathcal{A}} (I - \gamma P_a^T)\lambda_a-(1-\gamma)q - \frac{1}{\rho}(v-\hat{v})\Bigr] \nonumber ,\\
&\Dot{\hat{v}} \;= \Pi_{\mathcal{V}}\Bigr[\hat{v},\frac{1}{\rho}(v-\hat{v}) \Bigr]\nonumber,\\
&\Dot{\mu}_i = \Pi_{\mathcal{U}}\Bigr[\mu_i, h^i- \sum_{a}\lambda_a^T g^i_a - \frac{1}{\rho}(\mu_i-\hat{\mu}_i)\Bigr] \nonumber,\\
&\Dot{\hat{\mu}}_i = \Pi_{\mathcal{U}}\Bigr[\hat{\mu}, \frac{1}{\rho}(\mu-\hat{\mu})\Bigr] \nonumber,\\
&\Dot{\lambda}_a =\Pi_{\Delta}\Bigr[\lambda, r_a - (I - \gamma P_a)v + \sum_{i}\mu_i g^i_a- \frac{1}{\rho}(\lambda_a-\hat{\lambda}_a)\Bigr] \nonumber,\\
&\Dot{\hat{\lambda}}_a =\Pi_{\Delta}\Bigr[\hat{\lambda}_a, \frac{1}{\rho}(\lambda-\hat{\lambda})\Bigr] ,
\end{align}
The following theorem is a direct application of Theorem \ref{thm:Saddle FLow Dynamics} and Proposition \ref{pro:existence_projected_system}; it guarantees that \eqref{eq:regularized saddle flow dynamic C-RL} asymptotically converges to the unique (optimal) saddle point of the C-RL problem \eqref{eq:LP_CMDP}. We can then recover the optimal policy from the optimal occupancy measure $\lambda^*$.
\begin{thm}\label{Thm:Saddle flow for C-RL}
Let Assumption \ref{ass:Slater C-RL} hold. Then the projected saddle flow dynamics \eqref{eq:regularized saddle flow dynamic C-RL} asymptotically converge to some saddle point $(\lambda^* ,\mu^* ,v^*)$ of $L(\lambda,\mu,v)$, while satisfying $\lambda(t)\in \Delta, \mu(t) \in \mathcal{U}, \forall t$ with proper initialization.
\end{thm}
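To make the occupancy-measure formulation concrete, the following Python sketch (our own toy example; all model data are invented for illustration) solves the linear program in occupancy-measure space with SciPy and recovers the policy $\pi(a|s) = \lambda(s,a)/\sum_a \lambda(s,a)$.

```python
import numpy as np
from scipy.optimize import linprog

# Toy CMDP (invented data): 2 states, 2 actions; action 0 keeps the state,
# action 1 swaps it.  Decision variable: occupancy measure lam[s, a], flattened.
gamma = 0.9
q = np.array([0.5, 0.5])                       # initial distribution
P = {0: np.eye(2), 1: np.array([[0.0, 1.0], [1.0, 0.0]])}
r = np.array([[1.0, 1.0], [0.0, 0.0]])         # reward for occupying state 0
g = np.array([[1.0, 0.0], [1.0, 0.0]])         # utility 1 whenever a = 0
h = 0.2                                        # require sum_{s,a} lam*g >= h

c = -r.flatten()                               # linprog minimizes => negate
# Flow equalities: sum_a (I - gamma P_a^T) lam_a = (1 - gamma) q.
A_eq = np.zeros((2, 4))
for s in range(2):
    for a in range(2):
        for sp in range(2):
            A_eq[sp, 2 * s + a] = float(s == sp) - gamma * P[a][s, sp]
b_eq = (1 - gamma) * q
A_ub = -g.flatten()[None, :]                   # g.lam >= h  <=>  -g.lam <= -h
b_ub = np.array([-h])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=(0, None))
lam = res.x.reshape(2, 2)
policy = lam / lam.sum(axis=1, keepdims=True)  # recover pi(a | s)
assert res.status == 0 and abs(lam.sum() - 1.0) < 1e-6
```

Note that summing the flow equalities already forces $\sum_{s,a}\lambda(s,a)=1$, so no separate normalization constraint is needed.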
\section{Stochastic Approximation for C-RL}
In this section, we extend the proposed continuous-time saddle flow algorithm \eqref{eq:regularized saddle flow dynamic C-RL} to a model-free setting. Specifically, we propose a novel stochastic gradient descent-ascent (SGDA) algorithm that does not require knowledge of the transition kernel. We show that the SGDA algorithm is a stochastic approximation of the continuous-time saddle flow dynamics \eqref{eq:regularized saddle flow dynamic C-RL} and almost surely (w.p.1) converges to the unique saddle point of the C-RL problem.
In many optimization problems, the goal is to find some recursive numerical procedure that sequentially approximates a value of the decision variable $x$, which minimizes the objective function, e.g., $\dot{x} =h(x)$ or $ x^{n+1} = x^n + \alpha^nh(x^n)$. Stochastic approximations attempt to solve the problem when one cannot actually observe $h(x)$, but rather $h(x)$ plus some error or noise. Consider the following projection algorithm:
\begin{align}\label{eq:projection algorithm from projection theorem}
x^{n+1} = \Psi_{ \mathcal{G} }\Bigr[x^n + \alpha^n \Bigr(h(x^n) + \xi^n\Bigr) \Bigr],
\end{align}
where $\mathcal{G} := \{x: q_i(x) \leq 0, i \in [s] \} $ denotes the constraint set and $\{\xi^n \}$ denotes a sequence of random variables. The goal is to generate a sequence of estimates $\{x^n\}$ of the optimal value of $x$ when each observation $h(x^n) + \xi^n$ is corrupted by random noise. In general, the projection $\Psi_{ \mathcal{G} }[x]$ is easy to compute when the constraints are linear, i.e., when $ \mathcal{G} $ is a polyhedron. We introduce the following list of standard assumptions for stochastic approximation.
\begin{ass}[Stochastic Approximation]\label{Ass: Stochastic approximation} \;\;
\begin{enumerate}
\item[1.1] $h(\cdot)$ is a continuous function.
\item[1.2] $\{\alpha^n\} $ is a sequence of positive real numbers such that $\alpha^n >0, \sum_n \alpha^n = \infty, \sum_n(\alpha^n)^2 < \infty$,
\item[1.3] $\mathcal{G}$ is the closure of its interior and is bounded. The $q_i(\cdot), i \in [s]$, are continuously differentiable.
\item[1.4] There is a $T>0$ such that for each $\epsilon > 0$
\begin{align*}
\lim_n P\{\sup_{j \geq n} \max_{t \leq T}|\sum_{i=m(jT)}^{m(jT+t)-1}\alpha^i\xi^i |\geq \epsilon \}=0,
\end{align*}
where $t^n := \sum^{n-1}_{i=0} \alpha^i$ and $ m(t):= \max_n\{ t^n \leq t\}$ for $t \geq 0$.
\end{enumerate}
\end{ass}
Under those standard assumptions for stochastic approximations, the sequence $\{x^n \}$ generated by the projection algorithm \eqref{eq:projection algorithm from projection theorem} will converge almost surely to a stable solution to the projected system.
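A minimal one-dimensional instance of the projected algorithm makes this concrete. The sketch below is our own illustration (the target function, noise law, and constraint set are invented): a projected Robbins--Monro iteration with step sizes $\alpha^n = 1/(n+1)$, which satisfy Assumption 1.2.

```python
import random

# Projected Robbins-Monro iteration for h(x) = 1 - x on G = [0, 2],
# observed through additive N(0, 1) noise:
#   x_{n+1} = Psi_G[ x_n + alpha_n (h(x_n) + xi_n) ],  alpha_n = 1/(n+1).
random.seed(0)
x = 2.0
for n in range(200_000):
    alpha = 1.0 / (n + 1)            # sum alpha_n = inf, sum alpha_n^2 < inf
    noisy = (1.0 - x) + random.gauss(0.0, 1.0)
    x = min(max(x + alpha * noisy, 0.0), 2.0)   # projection onto G = [0, 2]
# The iterates converge (a.s.) to the root x* = 1 of h.
assert abs(x - 1.0) < 0.05
```

Despite per-step noise of unit variance, the diminishing step sizes average the noise out, and the iterate tracks the stable point of the projected ODE.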
\begin{thm}\cite[Theorem 5.3.1]{kushner2012stochastic}\label{Thm: Projection Theorem}
Let Assumption \ref{Ass: Stochastic approximation} hold. Consider the following ODE:
\begin{align}\label{eq:Projected ODE projection theorem}
\dot{x} = \Pi_{\mathcal{G}}\Bigr[x,h(x)\Bigr].
\end{align} Let $x^*$ denote an asymptotically stable point of \eqref{eq:Projected ODE projection theorem} with domain of attraction $DA(x^*)$, and let $x^n$ be generated by \eqref{eq:projection algorithm from projection theorem}.
If $A \in DA(x^*)$ is compact and $x^n \in A$ infinitely often, then $x^n$ converges to $x^*$ almost surely as $n \to \infty$.
\end{thm}
Consider the following randomized primal-dual approach proposed in \cite{bai2022achieving,wang2020randomized}, where we assume the presence of a generative model. For a given state action pair $(s,a)$, the generative model provides the next state $s' $ and the reward functions $r(s,a), g^i(s,a)$ to train the policy. Consider the following stochastic approximation for the Lagrangian function \eqref{eq:Lagrangian_LP_CMDP} for a distribution $ \xi $:
\begin{align} \label{eq:Stochastic approximation for Lagrangian function}
& L^{\xi}(\lambda,\mu,v)=(1-\gamma)v(s_0) - \sum_{i \in [I] } \mu_ih^i +\\ & \mathbf{1}_{\xi(s,a)>0}\frac{\lambda(s,a)\Bigr[r(s,a)-v(s)+\gamma v(s')+\sum_{i \in [I]}\mu_i g^i(s,a) \Bigr]}{\xi(s,a)} \nonumber
\end{align}
where $s_0 \sim q, (s,a) \sim \xi $, and the next state $s' \sim P(\cdot | s,a)$. The stochastic approximation $ L^{\xi}(\lambda,\mu,v)$ \eqref{eq:Stochastic approximation for Lagrangian function} is an unbiased estimator for the Lagrangian function \eqref{eq:Lagrangian_LP_CMDP}, i.e., $\mathbf{E}_{\xi, P(\cdot | s,a),q }\Bigr[L^{\xi}(\lambda,\mu,v)\Bigr] = L(\lambda,\mu,v) $. Using the proposed stochastic approximation of the Lagrangian function, consider the following projection algorithm for solving the C-RL problem in a model-free setting:
\begin{align}\label{eq:SA regularized saddle flow dynamic}
v^{n+1} &= \Psi_{\mathcal{V}}\Bigr[ v^n+\alpha^n\Bigr(\mathbf{1}_{\xi(s,a)>0}\frac{\lambda(s,a)[e(s)-\gamma e(s')]}{\xi(s,a)} \nonumber \\ &-(1-\gamma)\mathbf{e}(s_0)- \frac{1}{\rho}(v^n-\hat{v}^n)\Bigr) \Bigr] , \nonumber \\
\hat{v}^{n+1} &= \Psi_{\mathcal{V}}\Bigr[\hat{v}^{n}+\alpha^n \frac{1}{\rho}(v^n-\hat{v}^n) \Bigr], \nonumber\\
\mu_i^{n+1} &= \Psi_{\mathcal{U}}\Bigr[\mu_i^n + \alpha^n\Bigr(h^i -
\mathbf{1}_{\xi(s,a)>0}\frac{\lambda(s,a)g^i(s,a) }{\xi(s,a)} \nonumber \\
& - \frac{1}{\rho}(\mu_i^n-\hat{\mu}_i^n)\Bigr)\Bigr], \nonumber\\
\hat{\mu}_i^{n+1} &= \Psi_{\mathcal{U}}\Bigr[\hat{\mu}_i^{n}+\alpha^n \frac{1}{\rho}(\mu_i^n-\hat{\mu}_i^n)\Bigr], \nonumber\\
\lambda_a^{n+1} &=\Psi_{ \Delta}\Bigr[ \lambda_a^{n}+\alpha^n\Bigr(-\frac{1}{\rho}(\lambda_a^n-\hat{\lambda}_a^n) \nonumber \\ & + \mathbf{1}_{\xi(s,a)>0}\frac{ r(s,a) - v(s)+\gamma v(s')+ \sum_{i}\mu_i^n g^i(s,a)}{\xi(s,a)} \Bigr)\Bigr] , \nonumber\\
\hat{\lambda}_a^{n+1} &= \Psi_{ \Delta} \Bigr[\hat{\lambda}_a^{n} + \frac{1}{\rho}(\lambda_a^n-\hat{\lambda}_a^n) \Bigr],
\end{align}
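The unbiasedness of the importance-weighted estimator $L^{\xi}$ can be checked numerically. The following sketch is our own sanity check on a small randomly generated MDP (all data are invented); it compares a Monte Carlo average of the estimator against the exact Lagrangian.

```python
import numpy as np

rng = np.random.default_rng(0)
# Small random MDP (invented data): 3 states, 2 actions, one constraint.
S, A, gamma, h, mu = 3, 2, 0.9, 0.5, 0.5
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r, g = rng.random((S, A)), rng.random((S, A))
q = np.full(S, 1.0 / S)
lam, v = rng.random((S, A)), rng.random(S)
xi = np.full((S, A), 1.0 / (S * A))            # uniform sampling distribution

# Exact Lagrangian in occupancy-measure space.
exact = (1 - gamma) * q @ v - mu * h + sum(
    lam[si, ai] * (r[si, ai] - v[si] + gamma * P[si, ai] @ v + mu * g[si, ai])
    for si in range(S) for ai in range(A))

# Monte Carlo average of the importance-weighted estimator L^xi.
N = 200_000
s0 = rng.choice(S, size=N, p=q)                # s_0 ~ q
flat = rng.choice(S * A, size=N)               # (s, a) ~ xi (uniform)
s, a = flat // A, flat % A
u = rng.random(N)                              # inverse-CDF sampling of s'
sp = np.minimum((P[s, a].cumsum(axis=1) < u[:, None]).sum(axis=1), S - 1)
est = ((1 - gamma) * v[s0] - mu * h
       + lam[s, a] * (r[s, a] - v[s] + gamma * v[sp] + mu * g[s, a]) / xi[s, a])
assert abs(est.mean() - exact) < 0.1           # agrees up to Monte Carlo error
```

The sample average matches the exact value up to the Monte Carlo error of order $1/\sqrt{N}$, consistent with the estimator being unbiased.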
The following theorem is a direct application of Theorems \ref{Thm: Projection Theorem} and \ref{Thm:Saddle flow for C-RL}; it shows that the sequence generated by \eqref{eq:SA regularized saddle flow dynamic} almost surely converges to the optimal solution of the C-RL problem.
\begin{thm}
Let Assumptions \ref{ass:Slater C-RL} and \ref{Ass: Stochastic approximation} hold. Then, as $n \to \infty$, the sequence $\{\lambda^n, v^n ,\mu^n\} $ generated by \eqref{eq:SA regularized saddle flow dynamic} almost surely (w.p.1) converges to the optimal solution of the C-RL problem \eqref{eq:LP_CMDP}.
\end{thm}
\section{Numerical Examples} \label{sec:numerical}
In this section, we illustrate the effectiveness of our proposed approach on a classical CMDP problem: flow and service control in a single-server queue \cite{altman1999constrained}. Specifically, we consider a discrete-time single-server queue with a buffer of finite size $L$. We assume that at most one customer may join the system in a time slot. The state $s$ corresponds to the number of customers in the queue at the beginning of a time slot $ (|\mathcal{S}| = L+1) $.
The service action $a$ is selected from a finite subset $A$, and the flow action $b$ is selected from a finite subset $B$. Specifically, for two real numbers satisfying $ 0 < a_{\min} \leq a_{\max} < 1$, if the queue is non-empty and if the action of the server is $a\in A$, where $A$ is a finite subset of $[a_{\min}, a_{\max}] $, then the service of a customer is successfully completed with probability $a$. Likewise, for two real numbers satisfying $ 0 \leq b_{\min} \leq b_{\max} < 1$, if the queue is not full and if the flow action is $b \in B(x)$, where $B(x)$ is a finite subset of $[b_{\min}, b_{\max}]$, then the probability of having one arrival during this time slot is equal to $b$. We assume that $0 \in B(x)$ for all $x$; moreover, when the buffer is full, no arrivals are possible $(B(L) = \{0\})$. The transition law $P(y \mid x, (a,b))$ is therefore given by:
\begin{align*}
P(y \mid x, (a,b)) =
\begin{cases}
a(1-b) \;\; &\mathrm{if} \;1\leq x\leq L,y = x-1;\\
ab+(1-a)(1-b) \;\; &\mathrm{if} \;1\leq x\leq L,y = x;\\
(1-a)b \;\; &\mathrm{if} \;0 \leq x < L,y = x+1;\\
1-(1-a)b \;\; & \mathrm{if} \; y=x=0;
\end{cases}
\end{align*}
The reward function $r(s,a,b)$ is a real-valued decreasing function that depends only on $s$, which can be interpreted as a holding cost. The reward function $g^1(s,a,b)$, corresponding to the service rate, is assumed to be a decreasing function that depends only on $a$; it can be interpreted as a higher service success rate incurring a higher cost. The reward function $g^2(s,a,b)$, corresponding to the flow rate $b$, is assumed to be an increasing function that depends only on $b$; it can be interpreted as a higher flow rate being more desirable.
Suppose we want to solve for the optimal policy of the C-RL problem \eqref{problem:CMDP} while satisfying the constraints on service and flow. In the following numerical example, we compare the result generated by \eqref{eq:SA regularized saddle flow dynamic} with the ground-truth result obtained by directly solving the linear program \eqref{eq:LP_CMDP} using the transition law stated above. Specifically, we choose $L = 4, A = [0.2,0.3,0.5,0.6,0.8], B = [0.1,0.3,0.5,0.9,0]$. The initial distribution $q$ is set as the uniform distribution. The reward functions are $r(s) = -s+5, g^1(a) = -10a+3, g^2(b) = 10b-3 $.
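The case distinction above can be turned directly into code. The following is a minimal sketch of our own that builds the queue's transition rows for the parameters just listed and verifies that every admissible state-action pair yields a valid probability distribution.

```python
import numpy as np

L = 4
A_set = [0.2, 0.3, 0.5, 0.6, 0.8]   # service actions
B_set = [0.1, 0.3, 0.5, 0.9, 0.0]   # flow actions

def feasible_b(x):
    # no arrivals are allowed when the buffer is full: B(L) = {0}
    return [0.0] if x == L else B_set

def transition_row(x, a, b):
    """Row P(. | x, (a, b)) of the queue transition law."""
    p = np.zeros(L + 1)
    if x == 0:
        p[1] = (1 - a) * b                # one arrival into the empty queue
        p[0] = 1 - (1 - a) * b            # queue stays empty
    else:
        p[x - 1] = a * (1 - b)            # departure, no arrival
        p[x] = a * b + (1 - a) * (1 - b)  # both events or neither
        if x < L:
            p[x + 1] = (1 - a) * b        # arrival, no departure
    return p

# Every admissible (state, action-pair) yields a probability distribution.
for x in range(L + 1):
    for a in A_set:
        for b in feasible_b(x):
            row = transition_row(x, a, b)
            assert abs(row.sum() - 1.0) < 1e-12 and (row >= 0).all()
```

This kernel is the model input needed for the ground-truth linear program; the SGDA iteration, by contrast, only queries it through samples.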
\begin{figure}[ht]
\label{fig7}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{reward.png}
\caption{objective function}
\vspace{4ex}
\end{minipage}%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{constraint.png}
\caption{constraint functions}
\vspace{4ex}
\end{minipage}
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{lambda.png}
\caption{occupancy measure $\lambda$}
\vspace{4ex}
\end{minipage}%
\begin{minipage}[b]{0.5\linewidth}
\centering
\includegraphics[width=1\linewidth]{v.png}
\caption{dual variable v}
\vspace{4ex}
\end{minipage}
\end{figure}
We compare the cumulative reward function, constraint functions, and output decision variables $\lambda,\mu,v$ with the ground truth result by directly solving the linear programming problem \eqref{eq:LP_CMDP}. Results show that the decision variables converge to the optimal solution while satisfying the constraints for flow and service.
\section{Conclusion}
In this work, we propose a novel SGDA algorithm that solves the C-RL problem in occupancy-measure space, leveraging tools from regularized saddle flow dynamics. Even though the Lagrangian function is bilinear, the continuous-time dynamics asymptotically converge to the optimal occupancy measure and policy. The discretized SGDA algorithm is a stochastic approximation of the continuous-time saddle flow dynamics, and we prove that it almost surely converges to the optimal solution of the C-RL problem.
package com.srotya.sidewinder.core.storage.compression.gorilla;

import java.nio.ByteBuffer;
import java.security.NoSuchAlgorithmException;

import com.srotya.sidewinder.core.predicates.Predicate;
import com.srotya.sidewinder.core.storage.RejectException;
import com.srotya.sidewinder.core.storage.compression.FilteredValueException;
import com.srotya.sidewinder.core.storage.compression.Reader;

/**
 * Reads Gorilla-compressed timestamps from a {@link ByteBuffer}, optionally
 * filtering the decompressed values through a {@link Predicate}.
 */
public class GorillaTimestampReader implements Reader {

	private int count;
	private int counter;
	private Predicate valuePredicate;
	private GorillaTimestampDecompressor decompressor;
	private ByteBuffer buf;
	private int checkSumLocation;

	public GorillaTimestampReader(ByteBuffer buf, int startOffset, int checkSumLocation) {
		this.buf = buf;
		this.checkSumLocation = checkSumLocation;
		buf.position(startOffset);
		// first header int: number of compressed entries in this buffer
		this.count = buf.getInt();
		// the second header int is read and discarded; it is not needed here
		buf.getInt();
		this.decompressor = new GorillaTimestampDecompressor(new ByteBufferBitInput(buf));
	}

	@Override
	public long read() throws FilteredValueException, RejectException {
		if (counter < count) {
			long value = decompressor.read();
			// reject values that fail the (optional) predicate
			if (valuePredicate != null && !valuePredicate.test(value)) {
				throw FILTERED_VALUE_EXCEPTION;
			}
			counter++;
			return value;
		} else {
			// all entries have been consumed
			throw EOS_EXCEPTION;
		}
	}

	@Override
	public int getCounter() {
		return counter;
	}

	@Override
	public int getCount() {
		return count;
	}

	@Override
	public void setPredicate(Predicate predicate) {
		valuePredicate = predicate;
	}

	@Override
	public byte[] getDataHash() throws NoSuchAlgorithmException {
		// the checksum is stored at a fixed location in the backing buffer
		ByteBuffer duplicate = buf.duplicate();
		duplicate.rewind();
		duplicate.position(checkSumLocation);
		byte[] ary = new byte[GorillaTimestampWriter.MD5_PADDING];
		duplicate.get(ary);
		return ary;
	}
}
---
layout: note
title: Juvenile Libertarians
---
### Juvenile Libertarians
Ouch, I am afraid that I think [Brian Leiter](http://leiterlawschool.typepad.com/leiter/2014/01/why-women-arent-welcome-on-the-internet.html) got it right about the EFF.
> It also reveals that, once again, the Electronic Frontier Foundation is on the morally depraved side of the issue. (At what point will so-called progressive law professors become ashamed to ally themselves with the juvenile libertarians of EFF, the leading supporters of keeping cyberspace the cesspool it is?)
I guess I was kind of hiding the fact that I think that from myself because it's hard to think of the EFF as anything else but do-gooders. Or, at least I am very used to thinking that they are the guys and gals on "our side".
In fact, I guess I can admit to myself now that juvenile libertarianism is where most public advocacy groups and companies go when they promote a privacy agenda or discuss a privacy policy.
This is why I've resisted the temptation of researching Privacy Policy: Too many people are doing it, and not for the right reasons. At least not my idea of right, which is not, as Leiter is suggesting,
Just to clarify, I don't think I agree with the paper that got Leiter to share his opinion about the EFF, or that I believe its description of women's experience online (certainly, it's not my online experience, but this obviously is nisht ahin, nisht aher, neither here nor there.)
\section{Introduction}
The motion of a system of electromagnetic charges in special relativity in terms of multipoles is a well-known subject, see for example \cite{Dixon}. The motion can be described in terms of the world line of the center of mass and the multipoles of the distribution of charges.
The space-time symmetries of particles moving in an electro-magnetic background are much less known. To our knowledge, the first attempt to study these symmetries was in the framework of kinematical algebras \cite{Bacry:1968zf} and was made by Bacry--Combe--Richard (BCR) in \cite{Bacry:1970ye} for the case of a particle in a fixed, constant electromagnetic field $F_{ab}$. The Lorentz generators that leave $F_{ab}$ invariant are the two combinations of Lorentz generators
\begin{equation}
\label{bcr}
G=\frac 12 F^{ab} M_{ab} \,,\quad\quad
{}\star G=\frac 12 {}\star F^{ab} M_{ab}\,.
\end{equation}
The BCR algebra has six generators: the four translations and the two Lorentz transformations above.
The algebra has two central charges
\begin{equation}
\label{bcr1}
\left[ P_a, P_b \right]=Z\, F_{ab} +{}\star Z\, {}\star F_{ab}
\end{equation}
and the two central charges are identified with electric and magnetic charge.
Later, the Maxwell algebra was introduced by Schrader \cite{Schrader:1972zd}, where the electro-magnetic field $F_{ab}$ is allowed to transform under the Lorentz algebra; the Poincar\'e algebra then receives six non-central extensions\footnote{It is well-known that the $D$-dimensional Poincar\'e algebra admits non-central extensions~\cite{Galindo}.}
\begin{align}
\label{M0}
\left[ P_a, P_b \right] = Z_{ab}\,.
\end{align}
This Maxwell algebra appears in~\cite{Bonanos:2008ez} as the symmetry group of a particle moving in a generic constant electro-magnetic background. The Maxwell algebra describes at the same time the particle and the constant electro-magnetic background in which the particle moves. In~\cite{Bonanos:2008ez}, an infinite extension of the Maxwell algebra ($\textrm{Maxwell}_{\infty}$) was also envisioned through the process of extending by non-trivial two-forms in the Chevalley--Eilenberg Lie algebra cohomology. This was done in an iterative procedure, and the full mathematical structure of $\textrm{Maxwell}_{\infty}$ was not uncovered.
In this paper, we find the mathematical structure underlying $\textrm{Maxwell}_{\infty}$. It turns out to be the semi-direct product of the Lorentz algebra with a free Lie algebra generated by the Poincar\'e translations and the algebra is $\mathbb{Z}$-graded (with empty negative levels). The finite level extensions found in~\cite{Bonanos:2008ez} are easily reproduced and the associated Young tableaux are identified. This construction works in any space-time dimension. We present the lowest level generators of the algebra in table~\ref{tab:intro} for the four-dimensional case. From the free algebra one can in principle compute any finite level extension and the corresponding commutation relations. We also study possible quotients of $\textrm{Maxwell}_{\infty}$. As is clear from table~\ref{tab:intro} and~\eqref{M0}, the Maxwell algebra studied by Schrader~\cite{Schrader:1972zd} corresponds to the quotient of $\textrm{Maxwell}_{\infty}$ where one only keeps levels $\ell=0$, $\ell=1$ and $\ell=2$.
\renewcommand{\arraystretch}{1.5}
\begin{table}[t!]
\centering
\caption{\it Summary of the low level generators of $\textrm{Maxwell}_{\infty}$ in $D=4$ dimensions. The Young tableau describes the tensor type as a representation of the symmetric group. All generators are Lorentz tensors under the anti-symmetric $M_{ab}$. Levels $\ell=0$ and $\ell=1$ together constitute the Poincar\'e algebra and the translation generators $P_a$ generate the higher levels freely (in a Lie algebraic sense). We also display the coordinates that are associated with the generators in the non-linear realisation of $\textrm{Maxwell}_{\infty}/$Lorentz.\label{tab:intro}}
\begin{tabular}{c||c|c|c}
Level & Young tableau & Generator and some commutators & coordinates\\
\hline\hline
$\ell=0$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(1,1)$}} & $M_{ab}$ & none\\\hline
$\ell=1$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(1)$}} & $P_a$ & $x^a$ \\\hline
$\ell=2$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(1,1)$}} & $Z_{ab}=\left[ P_a,P_b \right]$ & $\theta^{ab}$ \\\hline
$\ell=3$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(2,1)$}} & $Y_{ab,c}=\left[ Z_{ab}, P_c\right]$ & $\xi^{ab,c}$ \\\hline
$\ell=4$ & \raisebox{0.1\height}{\scalebox{0.6}{$\yng(3,1)$}} & $S_{ab,c,d}^1$, see~\eqref{eq:inv4} & $\sigma_1^{ab,c,d}$ \\[3mm]
& \raisebox{-0.1\height}{\scalebox{0.6}{$\yng(2,1,1)$}} & $S_{abc,d}^2$, see~\eqref{eq:inv4} & $\sigma_2^{abc,d}$
\end{tabular}
\end{table}
We construct a dynamical system with $\textrm{Maxwell}_{\infty}$ symmetry. The model that we analyze is lowest order in derivatives and could therefore be considered as the first term of an effective description of an electro-magnetic interaction of particles. Possible higher derivative extensions are not treated in this article. The basic tool that we use is the non-linear realisation in terms of the coset $\textrm{Maxwell}_{\infty}/\mathrm{Lorentz}$ and the associated Maurer--Cartan forms. The coset can be described in terms of the generalized coordinates $x^a, \theta^{ab}, \xi^{ab,c}, \sigma_1^{ab,c,d}, \ldots$ dual to the generators at levels $\ell>0$. Mathematically, these coordinates are just a local parametrisation of the infinite-dimensional coset in a Lorentz
gauge-fixed form. We will also assign physical significance to them by thinking of $x^a$ as giving the space-time coordinate (of the center of mass) of a charge distribution and the additional coordinates could be considered either as describing higher inertial multipoles of a charge distribution or as coordinates on some generalized space-time. This latter viewpoint has been taken repeatedly~\cite{Sohnius:1978fw,Duff:1989tf,Gauntlett:1990nk,Duff:1990hn,deWit:2000wu} and is particularly pronounced in recent work on exceptional symmetries in supergravity~\cite{West:2003fc,Kleinschmidt:2003jf,Hull:2007zu,Coimbra:2011ky,Coimbra:2012af,Berman:2012vc,Hohm:2013uia}.
We also require the introduction of an infinite set of new dynamical variables
$f_{ab}, f_{ab,c}, f_{ab,c,d}^1, \cdots$ that make it possible to write a manifest Lorentz invariant Lagrangian for the system. These quantities are also the canonical momenta associated to the generalized coordinates. By formal similarity of our equations to those of~\cite{Dixon}, the extra dynamical variables are related to the higher electro-magnetic multipole moments of the system of charges while the coordinates $\theta^{ab}$, $\xi^{ab,c}$,\ldots are similar to higher inertial moments like angular momentum. However, this assignment is based only on similarity and we consider further analysis of the precise interpretation of the higher coordinates and dynamical variables to be necessary.
The dynamical equations of motion of our system relate the extra dynamical variables to the coordinates on $\textrm{Maxwell}_{\infty}/\mathrm{Lorentz}$. A universal equation of motion that is always present in our dynamical system is\footnote{We have used the proper time of the center of mass as the evolution parameter.}
\begin{align}
m\ddot{x}_a = f_{ab} \dot{x}^b\,.
\end{align}
This equation is the Lorentz force for our system. Note that our differential equations of motion are $\textrm{Maxwell}_{\infty}$ invariant. When we consider a particular solution of the equations of motion for $f_{ab}, f_{ab,c}, f_{ab,c,d}^1, \ldots$, we see that $f_{ab}$ is now, in general, a function of the generalized coordinates, $f_{ab}\rightarrow F_{ab}(x,\theta,\ldots)$, and the symmetry $\textrm{Maxwell}_{\infty}$ is spontaneously broken.
It is not clear whether $F_{ab}$ can always be interpreted as an electro-magnetic field in ordinary space-time that satisfies the integrability (or Bianchi) identity $\partial_{[a} F_{bc]}=0$.\footnote{The violation of the Bianchi identity in ordinary configuration space could be associated with magnetic monopoles.} We confirm the finding of~\cite{Bonanos:2008ez} that this is not always the case. The electro-magnetic field $F_{ab}(x,\theta\cdots)$ appearing in the Lorentz equation is not necessarily integrable if one considers all of $\textrm{Maxwell}_{\infty}$. Restricting the free Lie algebra to a suitable quotient renders the field integrable and in fact the quotient just corresponds to the unfolding formalism for the Maxwell field studied for example in~\cite{Vasiliev:2005zu,Boulanger:2015mka}. We can thus describe the motion of a charged particle in a generic (analytic) background. Our formulation gives a different view on the standard electro-dynamics of a point particle and potentially also describes generalizations for systems of charges in terms of inertial moments.
Going beyond the quotient related to the unfolding formalism one encounters additional dynamical equations and non-integrable solutions to the Maxwell equation. These appear to correspond to the back-reaction of the multipoles on the motion of the center of mass.
The organisation of the paper is as follows: In section~\ref{sec:MA}, we review the non-central extension process for the Poincar\'e algebra. Section~\ref{sec:FLA} contains the description of free Lie algebras and the isomorphism of the free Lie algebra generated by the translations $P_a$ to the extension obtained in section~\ref{sec:MA}. We also discuss quotients of $\textrm{Maxwell}_{\infty}$. In section~\ref{sec:dyn}, we construct a dynamical point particle model with $\textrm{Maxwell}_{\infty}$ symmetry and analyse its equations of motion. The relation to electrodynamics and unfolded dynamics is discussed. We offer some concluding remarks in section~\ref{sec:concl} and collect some background material on free Lie algebras in an appendix.
\section{Extensions of the Poincar\'e algebra}
\label{sec:MA}
In this section, we briefly review the extension of the Poincar\'e algebra based on Chevalley--Eilenberg cohomology. The results of this section were obtained in~\cite{Bonanos:2008ez}.
The Poincar\'e algebra in $D$ space-time dimensions is a semi-direct sum of the Lorentz algebra $\mf{so}(1,D-1)$ and the abelian algebra of space-time translations. We denote the Lorentz generators by $M_{ab}=M_{[ab]}$ for $a=0,\ldots, D-1$ and the translation generators by $P_a$. Their Poincar\'e Lie algebra is
\begin{align}
\label{eq:PA}
\left[ M_{ab} , M_{cd} \right] &= \eta_{bc} M_{ad} -\eta_{bd} M_{ac} -\eta_{ac} M_{bd} + \eta_{ad} M_{bc}\,,\nonumber\\
\left[ M_{ab}, P_c \right] &= \eta_{bc} P_a - \eta_{ac} P_b\,,\nonumber\\
\left[ P_a, P_b \right] &=0\,.
\end{align}
Here, $\eta_{ab}=(-+\ldots+)$ is the flat Minkowski metric. We will refer to the Lorentz generators as level $\ell=0$ and the Poincar\'e generators as level $\ell=1$.\footnote{This terminology differs from the one in~\cite{Bonanos:2008ez} but it turns out to be more convenient for the connection to free Lie algebras discussed in the present paper.}
In order to determine possible extensions of this Lie algebra one can study the Chevalley--Eilenberg cohomology~\cite{AzcarragaIzquierdo}. It turns out~\cite{Bonanos:2008ez} that there is a sequence of extensions of the algebra by generators that are Lorentz tensors and in fact can be viewed also as tensors of the general linear algebra $\mf{gl}(D)$ and represented by Young tableaux. In this representation, the translation generators $P_a$ of level $\ell=1$ are written as a single box
\begin{align}
\textrm{Level $\ell=1$:}\quad\quad P_a \longleftrightarrow \yng(1)\,.
\end{align}
We will also assign level $\ell=0$ to the Lorentz generators $M_{ab}$.
The Poincar\'e algebra has non-trivial cohomology that can be parametrised by the anti-symmetric tensor $Z_{ab}=Z_{[ab]}$~\cite{Schrader:1972zd,Bonanos:2008ez}.\footnote{Here and everywhere in the paper we use (anti-)symmetrizations of strength one.} Adding this generator to the Poincar\'e algebra deforms the commutation relations in~\eqref{eq:PA} to
\begin{align}
\label{eq:M2}
\left[ P_a, P_b \right] = Z_{ab}\,,\quad\quad
\left[ Z_{ab}, Z_{cd} \right] =0\,,\quad\quad
\left[ Z_{ab}, P_c \right] =0\,.
\end{align}
The tensorial nature of $Z_{ab}$ is expressed by the commutation relation
\begin{align}
\left[ M_{ab} , Z_{cd} \right] &= \eta_{bc} Z_{ad} -\eta_{bd} Z_{ac} -\eta_{ac} Z_{bd} + \eta_{ad} Z_{bc}
\end{align}
with the Lorentz generators $M_{ab}$. We will refer to the generator $Z_{ab}$ as level $\ell=2$ and as a Young tableaux it is given by
\begin{align}
\textrm{Level $\ell=2$:}\quad\quad Z_{ab} \longleftrightarrow \yng(1,1)\,.
\end{align}
The algebra generated by $(M_{ab}, P_a, Z_{ab})$ closes and is commonly called the \textit{Maxwell algebra}~\cite{Schrader:1972zd}. In view of the sequence of further extensions of the Poincar\'e algebra we will refer to it as $\textrm{Maxwell}_2$ as it uses the generators up to level $\ell=2$. The connection between this algebra and particle motion in a constant electro-magnetic background $F_{ab}=\textrm{const.}$ has been well-studied~\cite{Bonanos:2008ez}, see also~\cite{Schrader:1972zd,Bacry:1970ye,Bacry:1970du}. We note that the commutators~\eqref{eq:M2} are consistent with the level grading that we have assigned to the generators. The commutator $\left[ P_a, P_b\right] =Z_{ab}$ has two generators of level $\ell=1$ on the left-hand side and a single generator of level $\ell=2$ on the right-hand side. The vanishing $\left[ Z_{ab}, Z_{cd}\right]=\left[ Z_{ab},P_c\right]=0$ within the algebra $\textrm{Maxwell}_2$ is then simply due to the fact that there are no generators of level $\ell>2$ in $\textrm{Maxwell}_2$.
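As an independent cross-check, the closure of these commutation relations can be tested by brute force. The following Python sketch (ours, not part of the original analysis) encodes the structure constants of $\textrm{Maxwell}_2$ in $D=4$ and verifies the Jacobi identity on all basis triples:

```python
import itertools

D = 4
eta = [-1, 1, 1, 1]  # mostly-plus Minkowski metric

def g(a, b):
    """Diagonal metric component eta_{ab}."""
    return eta[a] if a == b else 0

def add(vec, kind, idx, coeff):
    """Accumulate coeff * generator, canonicalising the antisymmetric pairs
    M_ab = -M_ba and Z_ab = -Z_ba and dropping vanishing entries."""
    if kind in ('M', 'Z'):
        a, b = idx
        if a == b:
            return
        if a > b:
            a, b, coeff = b, a, -coeff
        idx = (a, b)
    key = (kind,) + tuple(idx)
    vec[key] = vec.get(key, 0) + coeff
    if vec[key] == 0:
        del vec[key]

def bracket(x, y):
    """Lie bracket of two basis generators of Maxwell_2 as {generator: coeff}."""
    (kx, *ix), (ky, *iy) = x, y
    if (kx, ky) in (('P', 'M'), ('Z', 'M')):   # [X, M] = -[M, X]
        return {k: -v for k, v in bracket(y, x).items()}
    out = {}
    if kx == 'M' and ky in ('M', 'Z'):         # M and Z transform as tensors
        (a, b), (c, d) = ix, iy
        add(out, ky, (a, d),  g(b, c))
        add(out, ky, (a, c), -g(b, d))
        add(out, ky, (b, d), -g(a, c))
        add(out, ky, (b, c),  g(a, d))
    elif kx == 'M' and ky == 'P':              # [M_ab, P_c] = eta_bc P_a - eta_ac P_b
        (a, b), c = ix, iy[0]
        add(out, 'P', (a,),  g(b, c))
        add(out, 'P', (b,), -g(a, c))
    elif kx == 'P' and ky == 'P':              # [P_a, P_b] = Z_ab
        add(out, 'Z', (ix[0], iy[0]), 1)
    # all brackets of Z with P or Z vanish in Maxwell_2
    return out

basis = ([('M', a, b) for a in range(D) for b in range(a + 1, D)]
         + [('P', a) for a in range(D)]
         + [('Z', a, b) for a in range(D) for b in range(a + 1, D)])

# Jacobi identity [[x,y],z] + [[y,z],x] + [[z,x],y] = 0 on all basis triples
for x, y, z in itertools.product(basis, repeat=3):
    total = {}
    for u, v, w in ((x, y, z), (y, z, x), (z, x, y)):
        for k1, c1 in bracket(u, v).items():
            for k2, c2 in bracket(k1, w).items():
                add(total, k2[0], k2[1:], c1 * c2)
    assert not total, (x, y, z, total)
print("Jacobi identity verified on all", len(basis) ** 3, "basis triples")
```

The only non-trivial Jacobi identities involve the Lorentz generators; the purely translational ones vanish because $Z_{ab}$ is central in this truncation.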
The cohomological analysis can be repeated and one finds that there is also a non-trivial cohomology of the algebra $\textrm{Maxwell}_2$~\cite{Bonanos:2008ez}. One can therefore extend $\textrm{Maxwell}_2$ to an algebra $\textrm{Maxwell}_3$ by introducing new generators
\begin{align}
\textrm{Level $\ell=3$:}\quad\quad Y_{ab,c} \longleftrightarrow \yng(2,1)
\end{align}
that we will call generators of level $\ell=3$ (since their Young tableau has three boxes). They can appear in the commutation relations consistently with the level grading and we will see below how they change some of the commutators in~\eqref{eq:M2}.
The Young tableau above has mixed symmetry and since we will encounter many such Young tableaux we will now fix our conventions for labelling tensor generators associated with them: We traverse the Young tableau column by column from left to right; each column corresponds to an anti-symmetric set of indices. We separate the columns (=sets of anti-symmetric indices) by commas. A tensor associated with such a Young tableau is therefore automatically anti-symmetric in every set of indices. The irreducibility (as a $\mf{gl}(D)$ representation) of the representation encoded in the Young tableau is equivalent to the requirement that the anti-symmetrization of the indices of one column with any single index of a column to the right gives zero. If there are columns of equal length, the tensor is symmetric under interchange of the sets of indices. A discussion of Young symmetrizers and representations of the symmetric group based on tableaux can be found for example in~\cite{Fulton}.
In the example above this means that $Y_{ab,c}$ has the following symmetry properties
\begin{align}
Y_{[ab],c} = Y_{ab,c}\,,\quad Y_{[ab,c]} =0\,.
\end{align}
It arises in the commutator\footnote{The order of indices on $Y_{ab,c}$ and the normalisation of the generator differs (by $3$) from the one used in~\cite{Bonanos:2008ez}.}
\begin{align}
\label{eq:L3MA}
\left[ Z_{ab} , P_c \right] = Y_{ab,c}\,.
\end{align}
This commutator automatically satisfies that the totally anti-symmetric part vanishes by the Jacobi identity upon substitution of~\eqref{eq:M2}. Moreover, $Y_{ab,c}$ transforms as a tensor under the Lorentz generators $M_{ab}$ in the standard way
\begin{align}
\left[ M_{ab} , Y_{cd,e} \right] = \eta_{bc} Y_{ad,e} - \eta_{ac} Y_{bd,e} + \eta_{bd} Y_{ca,e} -\eta_{ad} Y_{cb,e} + \eta_{be} Y_{cd,a}-\eta_{ae} Y_{cd,b}\,.
\end{align}
We will denote the Lie algebra generated by $(M_{ab}, P_a, Z_{ab}, Y_{ab,c})$ by $\textrm{Maxwell}_3$. The generator $Y_{ab,c}$ of level $\ell=3$ commutes with all generators of levels $\ell\ge 1$ if one considers $\textrm{Maxwell}_3$. This is again due to the fact that our level assignment provides a consistent grading of the Lie algebra $\textrm{Maxwell}_3$.
Performing the cohomological analysis of $\textrm{Maxwell}_3$ one finds again that it admits an extension~\cite{Bonanos:2008ez}.\footnote{We assume $D\geq 4$ here for simplicity.} This time the extending generators belong to two different irreducible representations of $\mf{gl}(D)$. They are given by\footnote{Also here the convention for the order of indices differs from~\cite{Bonanos:2008ez}. The reason is that we want to maintain our general labelling convention for Young tableaux.}
\begin{align}
\label{eq:M4}
\textrm{Level $\ell=4$:}\quad\quad S_{ab,c,d}^1 \longleftrightarrow \yng(3,1)\quad\textrm{and}\quad S_{abc,d}^2 \longleftrightarrow \yng(2,1,1)\,.
\end{align}
Even though the generators could be distinguished by their tensor structure, we have put superscripts on them to make them easier to distinguish. In agreement with our rules above, the generators have the following tensor properties
\begin{align}
S^1_{[ab],c,d} = S^1_{ab,c,d}\,,\quad
S^1_{ab,(c,d)} = S^1_{ab,c,d}\,,\quad
S^1_{[ab,c],d} = 0
\end{align}
and
\begin{align}
S^2_{[abc],d} = S^2_{abc,d}\,,\quad
S^2_{[abc,d]} = 0\,.
\end{align}
The new generators arise in the commutators as follows:
\begin{align}
\label{eq:L4MA}
\left[ Y_{ab,c},P_d \right] &= S^1_{ab,c,d} + 2S^2_{abd,c}- S^2_{bcd,a}- S^2_{cad,b}\nonumber\\
&= S^1_{ab,c,d} +3 S^2_{dab,c} - S^2_{abc,d}\,,
\end{align}
where we have written the right-hand side in two different ways using the irreducibility constraint $S^2_{[abc,d]}=0$ of the second generator arising at level $\ell=4$. This relation fixes also
\begin{align}
\left[ Z_{ab} , Z_{cd} \right] =
3S^2_{abd,c} -S^2_{abc,d} - 3 S^2_{abc,d}+S^2_{abd,c}
= -8 S^2_{ab[c,d]}
\end{align}
by Jacobi identities. All remaining commutators are also completely determined by the level grading and the tensor structure of the generators. The Lie algebra that is generated by $(M_{ab}, P_a, Z_{ab}, Y_{ab,c}, S_{ab,c,d}^1, S_{abc,d}^2)$ will be called $\textrm{Maxwell}_4$. For completeness, we also note the inverse relations
\begin{align}
\label{eq:inv4}
S^2_{abc,d} &= -\frac38 \left[ Z_{[ab}, Z_{c]d}\right] = -\frac38 \left[ P_{[c}, Y_{ab],d}\right]\,,\nonumber\\
S^1_{ab,c,d} &= \frac38 \left[ P_d, Y_{ab,c} \right] + \frac38 \left[ P_c, Y_{ab,d} \right] -\frac14 \left[ P_{[a} , Y_{b]c,d} \right] -\frac14 \left[ P_{[a} , Y_{b]d,c} \right]\,.
\end{align}
The tableaux and their dimensions are summarized for $D=4$ in table~\ref{tab:L4}.
\renewcommand{\arraystretch}{1.5}
\begin{table}[t!]
\centering
\caption{\it Summary of the positive level generators for $\textrm{Maxwell}_4$ in $D=4$ dimensions.\label{tab:L4}}
\begin{tabular}{c||c|c|c}
Level & Young tableau & Generator & Dimension\\
\hline\hline
$\ell=1$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(1)$}} & $P_a$ & $4$\\\hline
$\ell=2$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(1,1)$}} & $Z_{ab}$ & $6$\\\hline
$\ell=3$ & \raisebox{0.2\height}{\scalebox{0.6}{$\yng(2,1)$}} & $Y_{ab,c}$ & $20$\\\hline
$\ell=4$ &\raisebox{0.1\height}{\scalebox{0.6}{$\yng(3,1)$}} & $S_{ab,c,d}^1$ & $45$\\[3mm]
& \raisebox{-0.1\height}{\scalebox{0.6}{$\yng(2,1,1)$}} & $S_{abc,d}^2$ & $15$
\end{tabular}
\end{table}
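The dimensions in table~\ref{tab:L4} follow from the standard hook content formula for irreducible $\mf{gl}(D)$ representations; as a quick cross-check (ours), they can be reproduced with a few lines of Python:

```python
from math import prod

def gl_dim(shape, D):
    """Dimension of the irreducible gl(D) representation with Young
    diagram row lengths `shape`, via the hook content formula."""
    cells = [(i, j) for i, row in enumerate(shape) for j in range(row)]
    def hook(i, j):
        arm = shape[i] - j - 1
        leg = sum(1 for k in range(i + 1, len(shape)) if shape[k] > j)
        return arm + leg + 1
    num = prod(D + j - i for i, j in cells)
    return num // prod(hook(i, j) for i, j in cells)

D = 4
assert gl_dim((1,), D) == 4        # P_a
assert gl_dim((1, 1), D) == 6      # Z_{ab}
assert gl_dim((2, 1), D) == 20     # Y_{ab,c}
assert gl_dim((3, 1), D) == 45     # S^1_{ab,c,d}
assert gl_dim((2, 1, 1), D) == 15  # S^2_{abc,d}
print("all dimensions of the table reproduced")
```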
This cohomological process could now be continued and~\cite{Bonanos:2008ez} gives the tableaux for the generators at level $\ell=5$. Rather than pursuing further the step by step cohomological analysis, we now identify the full Lie algebraic structure in slightly different terms.
\section{Free Lie algebras and their quotients}
\label{sec:FLA}
The sequence of Maxwell algebras $\textrm{Maxwell}_n$ reviewed in the previous section exhibits an intriguing pattern: The generators at level $0<\ell\le n$ are given by tensors of $\mf{gl}(D)$ that transform according to Young tableaux with $\ell$ boxes. The generators at level $\ell+1$ can be obtained from the ones at level $\ell$ upon commutation with the $\ell=1$ generators $P_a$ and all commutators are consistent with the level grading.
Inspection of the Young tableaux that arise up to $\textrm{Maxwell}_5$ shows that the structure is fully consistent with quotients of a free Lie algebra on $D$ generators. These $D$ generators are precisely the translation generators $P_a$ for $a=0,\ldots, D-1$. This is one of the central results of this paper. Before proving it we review for completeness some basic features of free Lie algebras. More details can be found in appendix~\ref{app:FLA} and~\cite{BourbakiFree,Viennot,KleinschmidtThesis}.
\subsection{Free Lie algebras}
For any (finite) set of independent generators $P_a$ one can define a free Lie algebra $\mf{f}$ as follows. One considers the linear space spanned by all possible multi-commutators $\left[ \left[ \left[ P_{a_1}, P_{a_2} \right],\ldots P_{a_{\ell-1}}\right] , P_{a_\ell}\right]$ and identifies all elements that are related to each other by the relations of the Lie bracket, namely anti-symmetry and the Jacobi identity. When we refer to this general $\ell$-fold multiple commutator we will sometimes denote it by $Y_{a_1\ldots a_\ell}$ below; in this case the tensorial symmetry properties of the generator are not specified. Since no relations other than anti-symmetry and the Jacobi identity are used, this is the minimal requirement on a Lie algebra, and the space spanned by all these multi-commutators is therefore called the free Lie algebra $\mf{f}$ generated by the $P_a$. Since, in particular, there are no relations that change the number of the $P_a$ in a multi-commutator, one can consider the free Lie algebra $\mf{f}$ for a fixed number $\ell$ of elements in the multi-commutator. This is called the level $\ell$ part $\mf{f}_\ell$ of the free Lie algebra, so that $\mf{f}$ decomposes as the direct sum
\begin{align}
\label{eq:fgrad}
\mf{f} = \bigoplus_{\ell>0} \mf{f}_\ell = \mf{f}_1 \oplus \mf{f}_2 \oplus \ldots\,.
\end{align}
More precisely, $\mf{f}_1$ is the $D$-dimensional vector space spanned by the $P_a$. The space $\mf{f}_2$ is the space spanned by all commutators $\left[ P_a, P_b\right]$. Due to the anti-symmetry of the Lie bracket this space is of dimension $D(D-1)/2$. We can write this also as $\mf{f}_2 = \left[ \mf{f}_1, \mf{f}_1\right]$. As a vector space, $\mf{f}_2$ is the exterior square of $\mf{f}_1$, but we prefer to use the bracket notation since this extends to arbitrary level\footnote{This statement contains non-trivial information: Not all multi-commutators with $\ell$ elements are of the form $\left[ \left[ \left[ P_{a_1}, P_{a_2} \right],\ldots P_{a_{\ell-1}}\right] , P_{a_\ell}\right]$ but the commutators can also be nested in a different way, e.g., $\left[ \left[ P_{a_1}, P_{a_2}\right],\left[ P_{a_3},P_{a_4}\right]\right]$. The statement here is that it is possible to find a basis of the iterated commutator form given.}
\begin{align}
\label{eq:levelstep}
\mf{f}_{\ell+1} = \left[ \mf{f}_\ell,\mf{f}_1 \right]\,.
\end{align}
In appendix~\ref{app:FLA}, we review what is known about the dimensions of the space $\mf{f}_\ell$ and how free Lie algebras can be understood as special cases of generalized Kac--Moody algebras introduced by Borcherds~\cite{Borcherds}. We note also that the grading~\eqref{eq:fgrad} satisfies $\left[ \mf{f}_\ell, \mf{f}_{\ell'} \right] \subset \mf{f}_{\ell+\ell'}$.
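For reference, the dimensions $\dim\mf{f}_\ell$ are given by the classical Witt formula $\dim \mf{f}_\ell = \frac{1}{\ell} \sum_{d \mid \ell} \mu(d)\, D^{\ell/d}$, with $\mu$ the M\"obius function. The following sketch (ours) evaluates it and reproduces the dimensions found below for $D=4$, including $\dim\mf{f}_4 = 60 = 45+15$:

```python
def mobius(n):
    """Moebius function mu(n) by trial factorisation."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # square factor
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def witt(D, ell):
    """dim f_ell for the free Lie algebra on D generators (Witt formula)."""
    return sum(mobius(d) * D ** (ell // d)
               for d in range(1, ell + 1) if ell % d == 0) // ell

assert [witt(4, l) for l in range(1, 5)] == [4, 6, 20, 60]  # 60 = 45 + 15
print([witt(4, l) for l in range(1, 8)])  # [4, 6, 20, 60, 204, 670, 2340]
```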
\subsection{Level decomposition}
\label{sec:LD}
We now want to make closer contact to the explicit generators introduced above in section~\ref{sec:MA}. To this end we again notice that the generating set $P_a$ spanning the $D$-dimensional space $\mf{f}_1$ can be viewed as the fundamental of $\mf{gl}(D)$ and written as a Young tableau
\begin{align}
\mf{f}_1=\langle P_a \rangle \quad\longleftrightarrow\quad \yng(1)\,.
\end{align}
As a representation of $\mf{gl}(D)$, the next level $\mf{f}_2$ is the anti-symmetric product of two fundamental representations leading to\footnote{Here, we assume $D\geq 4$ for simplicity.}
\begin{align}
\mf{f}_2 \quad\longleftrightarrow\quad \yng(1,1)\,.
\end{align}
Introducing the corresponding tensorial generators $Z_{ab}$ of the free Lie algebra agrees with $\left[ P_a, P_b\right] =Z_{ab}$ in~\eqref{eq:M2}. (This fixes a convenient normalization.)
Continuing to the next level of the free Lie algebra, we know that $\mf{f}_3$ has to be contained in the $\mf{gl}(D)$ tensor product of $\mf{f}_2$ with $\mf{f}_1$ according to~\eqref{eq:levelstep}. This tensor product is
\begin{align}
\label{eq:f213}
\yng(1,1) \otimes \yng(1) = \yng(1,1,1) \oplus \yng(2,1)\,.
\end{align}
The space $\mf{f}_3$ is a proper subset of the full tensor product since one has to impose the Jacobi identity for three elements $\left[ \left[ P_a, P_b\right],P_c\right]$. The Jacobi identity $\left[ \left[ P_{[a}, P_b\right], P_{c]}\right]=0$ is completely anti-symmetric and therefore $\mf{f}_3$ does not contain the totally anti-symmetric representation of $\mf{gl}(D)$, leading to
\begin{align}
\mf{f}_3 \quad\longleftrightarrow\quad \yng(2,1)\,.
\end{align}
Introducing a corresponding tensorial generator $Y_{ab,c}$, it is obtained from the lower ones by $\left[ Z_{ab},P_c\right]=Y_{ab,c}$ in agreement with~\eqref{eq:L3MA}.
Four-fold commutators in the free Lie algebra must necessarily be contained in the tensor product of $\mf{f}_3$ with $\mf{f}_1$ according to~\eqref{eq:levelstep}
\begin{align}
\yng(2,1) \otimes \yng(1) = \yng(3,1)\oplus \yng(2,1,1) \oplus \yng(2,2)\,.
\end{align}
Again, not all these representations belong to $\mf{f}_4$ as one has to impose anti-symmetry and the Jacobi identity of the Lie bracket. This eliminates the last Young tableau from the above tensor product. This can be understood in general by considering the explicit multiplicities of the generators as discussed in appendix~\ref{app:FLA}. In the present case it can also be seen by realising that if this tableau were part of $\mf{f}_4$ it would arise also in the commutator $\left[\mf{f}_2,\mf{f}_2\right]$, which is anti-symmetric in the $Z_{ab}$ and therefore cannot produce a Young tableau with shape \raisebox{0.3\height}{\scalebox{0.5}{$\yng(2,2)$}} in its commutator. Thus
\begin{align}
\mf{f}_4 \quad\longleftrightarrow\quad \yng(3,1)\oplus \yng (2,1,1)\,.
\end{align}
The corresponding generators are again those found in the cohomological approach, namely $S^1_{ab,c,d}$ and $S^2_{abc,d}$ with commutation relations shown in~\eqref{eq:L4MA}.
This process can be continued and we give the generators up to $\mf{f}_7$ in appendix~\ref{app:FLA}.
We would like to make one important remark here. For classifying the generators of the free Lie algebra $\mf{f}$, or equivalently those of the extensions of the Poincar\'e algebra, the use of Young tableaux encoding irreducible representations of $\mf{gl}(D)$ is most convenient. However, since these are tensors of the Lorentz algebra $\mf{so}(1,D-1)\subset \mf{gl}(D)$ one should properly consider the decomposition of the tensors into irreducibles of $\mf{so}(1,D-1)$. This corresponds to considering possible traces of the tensors. Starting from $\mf{f}_3$ such traces are possible.
We will use the notation that an irreducible tensor of $\mf{so}(1,D-1)$ is also denoted by a Young tableau giving the permutation symmetries of the indices. But in order to emphasize the fact that all possible traces have been removed from the tensor we will put a tilde over the tableau. For example, for $\mf{f}_3$ the decomposition of the $\mf{gl}(D)$ tensor $Y_{ab,c}$ into tensors of $\mf{so}(1,D-1)$ reads in tableau form
\begin{align}
\yng(2,1) \quad \longrightarrow\quad
\widetilde{\yng(2,1)}\oplus \yng(1)\,.
\end{align}
As tensors we write this relation as
\begin{align}
Y_{ab,c} = \tilde{Y}_{ab,c} + \frac{1}{D-1}\left(\eta_{bc} Y_a - \eta_{ac} Y_b\right)
\end{align}
such that the trace of $Y_{ab,c}$ gives $Y_a$ and $\tilde{Y}_{ab,c}$ is traceless:
\begin{align}
\eta^{bc} Y_{ab,c} = Y_a\,,\quad
\eta^{bc} \tilde{Y}_{ab,c} =0\,.
\end{align}
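The decomposition can be checked numerically. The sketch below (our illustration) generates a random tensor with the $(2,1)$ hook symmetry and verifies that $\tilde{Y}_{ab,c}$ defined above is indeed traceless:

```python
import numpy as np

D = 4
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # its own inverse for this metric
rng = np.random.default_rng(1)

# random tensor with the (2,1) hook symmetry: antisymmetric in (a, b),
# vanishing total antisymmetrisation Y_{[ab,c]} = 0
T = rng.standard_normal((D, D, D))
A = T - T.transpose(1, 0, 2)
Y = A - (A + A.transpose(1, 2, 0) + A.transpose(2, 0, 1)) / 3

# trace Y_a = eta^{bc} Y_{ab,c} and the traceless part from the text
trace = np.einsum('bc,abc->a', eta, Y)
Ytilde = Y - (np.einsum('bc,a->abc', eta, trace)
              - np.einsum('ac,b->abc', eta, trace)) / (D - 1)

assert np.allclose(np.einsum('bc,abc->a', eta, Ytilde), 0)   # traceless
assert np.allclose(Ytilde + Ytilde.transpose(1, 0, 2), 0)    # still antisymmetric
print("tilde-Y is traceless")
```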
The $\ell=4$ generators decompose similarly under the Lorentz group as
\begin{align}
\yng(3,1) &\quad \longrightarrow\quad \widetilde{\yng(3,1)}\oplus \widetilde{\yng(2)} \oplus \yng(1,1)\,,\nonumber\\
\yng(2,1,1) &\quad \longrightarrow\quad \widetilde{\yng(2,1,1)}\oplus \yng(1,1)\,.
\end{align}
We can use this decomposition to make contact with the algebras $\mathcal{B}_n$ studied in~\cite{Salgado:2014qqa}. The algebra $\mathcal{B}_5$ contains everything on $\ell=2$ and the vector generator on $\ell=3$. The algebra $\mathcal{B}_6$ contains everything on $\ell=2$, the vector generator on $\ell=3$ and the second two-form on $\ell=4$ (the one coming from (3,1)). Thus the $\mathcal{B}_n$ algebras are subalgebras of $\textrm{Maxwell}_{\infty}$.
\subsection{Relation to Maxwell algebra}
As we have seen in the previous section, there is a close relationship between the generators appearing in the maximally extended Maxwell algebra $\textrm{Maxwell}_{\infty}$ and those of the free Lie algebra $\mf{f}$. For levels $\ell=1,\ldots,4$ we have seen above that the correspondence is exact not only for the generators but also for the commutation relations. We will now prove that this continues to all levels. In other words, the central statement is that
\begin{align}
\textrm{Maxwell}_{\infty} \cong \mf{so}(1,D-1) \oplus \mf{f}\,,
\end{align}
where the sum is semi-direct and $\mf{f}$ is the free Lie algebra generated by the translation generators $P_a$. The isomorphism above is an isomorphism of Lie algebras. The essential part of the isomorphism is the `translation part' where the Lorentz algebra $\mf{so}(1,D-1)$ is frozen and we will restrict to this. We note that we could also introduce a dilatation operator $D$ on level $\ell=0$ that gives the level of the free Lie algebra generators as its eigenvalue:
\begin{align}
\left[ D, \mf{f}_{\ell} \right] = \ell \mf{f}_\ell\,.
\end{align}
The Lorentz generators then have eigenvalue zero under $D$ since they are at level $\ell=0$.
The proof relies on studying the Chevalley--Eilenberg cohomology at level $\ell$ and goes by induction. One way of studying the cohomology is to construct the Lie algebra valued Maurer--Cartan one-form up to level $\ell$. We first need to introduce some notation. Let
\begin{align}
\Omega^{(\ell)} = g^{-1} dg = \sum_{k=1}^\ell \Omega_{(k)} \,,
\end{align}
be the Maurer--Cartan one-form up to level $\ell$, where
\begin{align}
\Omega_{(k)} = \Omega^{a_1\ldots a_k} Y_{a_1\ldots a_k}
\end{align}
is the one-form at level $k$ as signalled by the fact that the generator is a tensor with $k$ indices, meaning the generator is at level $k$ in $\textrm{Maxwell}_{\infty}$. Here, the notation does not fix the tableau structure of the indices on the level $k$ generator $Y_{a_1\ldots a_k}$. As examples, we have
\begin{align}
\Omega_{(1)} = \Omega^a P_a\,,\quad
\Omega_{(2)} = \frac12\Omega^{ab} Z_{ab}\,,\quad
\Omega_{(3)} = \frac12 \Omega^{ab,c} Y_{ab,c}\,.
\end{align}
The Maurer--Cartan equation says that
\begin{align}
d \Omega_{(k)} = - \sum_{m=1}^{k-1} \Omega_{(m)} \wedge \Omega_{(k-m)}\,,
\end{align}
in particular $d\Omega_{(1)} = 0$.
For the Chevalley--Eilenberg cohomology $H^2$ we have to study the non-trivial two-forms inductively by level. Before discussing the general case, we consider as an illustrative example $\ell=2$. All possible two-forms at this level are contained in $\Omega^{a_1a_2} \wedge \Omega^b$ and we need to find the closed but non-exact invariant ones. The action of the differential on an arbitrary two-form is
\begin{align}
\label{eq:d21}
d \left(\Omega^{a_1a_2} \wedge \Omega^b \right) = - \Omega^{a_1} \wedge \Omega^{a_2} \wedge \Omega^b = - \Omega^{[a_1} \wedge \Omega^{a_2} \wedge \Omega^{b]} = d\left( \Omega^{[a_1a_2}\wedge \Omega^{b]}\right) \,,
\end{align}
where we have used the Maurer--Cartan equation $d\Omega^{a_1a_2} = - \Omega^{a_1} \wedge \Omega^{a_2}$ and have shown in the last step that the differential of an arbitrary two-form is equal to that of the anti-symmetrised projection. This projection occurs in the decomposition of the product $\Omega^{a_1a_2}\wedge \Omega^b$ into irreducible representations according to~\eqref{eq:f213}
\begin{align}
\Omega^{a_1a_2}\wedge \Omega^b = \underbrace{\Omega^{[a_1a_2}\wedge \Omega^{b]}}_{\raisebox{-0.3\height}{\scalebox{0.7}{\yng(1,1,1)}}} + \underbrace{\left(\Omega^{a_1a_2}\wedge \Omega^b -\Omega^{[a_1a_2}\wedge \Omega^{b]}\right)}_{\raisebox{-0.3\height}{\scalebox{0.7}{\yng(2,1)}}}\,.
\end{align}
Applying the differential $d$ to this equation and using~\eqref{eq:d21} we deduce that the structure with Young shape \raisebox{0.3\height}{\scalebox{0.5}{$\yng(2,1)$}} is closed. Moreover, the shape \raisebox{0.3\height}{\scalebox{0.5}{$\yng(2,1)$}} is not exact since $\Omega^b$ is not the differential of anything \textit{invariant}. This last qualification is very important as we are interested in the cohomology of invariant forms. We note also that the representation \raisebox{0.3\height}{\scalebox{0.5}{$\yng(1,1,1)$}} can be viewed as the Jacobi identity at this level.
After this example, we now return to the general case. In order to study the cohomology at level $\ell$ we need to consider all possible two-forms at exactly level $\ell+1$, that is linear combinations of (the overcomplete set)
\begin{align}
\Omega_{(1)} \wedge \Omega_{(\ell)}\,, \Omega_{(2)} \wedge \Omega_{(\ell-1)}\,, \ldots\,, \Omega_{(\ell)} \wedge \Omega_{(1)}\,,
\end{align}
and determine the ones that are closed but not exact. Let us focus on the last term to start with and see when it is closed:
\begin{align}
d \left( \Omega_{(\ell)} \wedge \Omega_{(1)}\right) = d\Omega_{(\ell)} \wedge \Omega_{(1)} = -\left(\sum_{k=1}^{\ell-1} \Omega_{(k)} \wedge \Omega_{(\ell-k)}\right) \wedge \Omega_{(1)}\,.
\end{align}
This double commutator vanishes exactly when one can arrange the terms such that they correspond to the Jacobi identity in the free Lie algebra, similar to the example above. In other words, projecting the equation to the Jacobi identity representation does not change the result and therefore any term that satisfies the Jacobi identity will be closed while terms that do not satisfy the Jacobi identity cannot be closed.
None of the two-forms that satisfy the Jacobi identity can be exact (i.e., are differentials of an invariant one-form). If the closed two-form $\Omega_{(\ell)}\wedge \Omega_{(1)}+\ldots$ in question were exact, there would be an invariant one-form $\Theta$ with $d\Theta = \Omega_{(\ell)}\wedge\Omega_{(1)} +\ldots$. However, the form $\Omega_{(\ell)}$ is by induction assumption not exact and neither is $\Omega_{(1)}$, thus such a $\Theta$ cannot exist. This induction step for the cohomology of the Maxwell algebra mirrors the inductive statement~\eqref{eq:levelstep}.
Therefore we have shown that the Chevalley--Eilenberg cohomology computed level by level generates exactly a free Lie algebra. This is perhaps not surprising as the free Lie algebra is the maximal Lie algebra one can construct over a generating set $P_a$ and therefore there cannot be any additional extensions provided by the cohomology.
\subsection{Ideals and quotients}
\label{sec:quot}
Free Lie algebras admit many non-trivial quotient Lie algebras that arise from non-trivial ideals of the free Lie algebra. For example, the construction of standard semi-simple Lie algebras of Kac--Moody type can be viewed in this language and the class of ideals relevant there can be described in terms of Dynkin diagrams with the ideals being generated by the Serre relations~\cite{Kac}. Here, we will consider three other classes of ideals of $\mf{f}$.
The first family of ideals is defined as
\begin{align}
\mf{i}_{\ell} =\bigoplus_{k>\ell} \mf{f}_k
\end{align}
and consists of all multi-commutators with more than $\ell$ basic generators. Since the grading~\eqref{eq:fgrad} respects the commutator this clearly is an ideal of $\mf{f}$, \textit{i.e.}, $\left[ \mf{f},\mf{i}_\ell \right] \subset \mf{i}_\ell$. The associated quotient Lie algebra $\mf{q}_\ell$ is
\begin{align}
\mf{q}_\ell = \mf{f} / \mf{i}_\ell = \bigoplus_{k=1}^\ell \mf{f}_k
\end{align}
with the same commutators as $\mf{f}$ except that all terms leading to generators contained in $\mf{f}_k$ with $k>\ell$ are set to zero. From this we conclude that the finite Maxwell extensions of the Poincar\'e algebra satisfy
\begin{align}
\textrm{Maxwell}_\ell = \mf{so}(1,D-1) \oplus \mf{q}_\ell\,.
\end{align}
The second family of ideals is given for integers $r>0$ by
\begin{align}
\mf{s}_r = \left\langle \textrm{Young tableaux in $\mf{f}$ with more than $r$ rows} \right\rangle\,.
\end{align}
This forms an ideal since the commutation relations of the free Lie algebra will never remove boxes from a Young tableau and therefore $\mf{s}_r$ is stable under the adjoint action of $\mf{f}$. We denote the associated quotient by
\begin{align}
\label{eq:DI}
\mf{d}_r = \mf{f}/\mf{s}_r\,.
\end{align}
The third ideal $\mf{u}$ corresponds to
\begin{align}
\mf{u} = \left\langle \textrm{Young tableaux in $\mf{f}$ with more than $2$ rows or more than one box in the second row} \right\rangle\,.
\end{align}
The quotient
\begin{align}
\label{eq:unfold}
\mf{w} = \mf{f}/\mf{u}
\end{align}
then consists only of tableaux of the shape
\begin{align}
\young(a)\,,\quad \young(a,b)\,,\quad \young(ac_1c_2\cdots c_n,b)
\end{align}
for $n>0$. Except for the first tableau corresponding to $P_a$ these are exactly the tensors needed to unfold the Maxwell field strength~\cite{Boulanger:2015mka}.
There are many other ideals that could be considered for quotients. For example, any non-zero generator $x$ of $\mf{f}$ generates a non-trivial principal ideal. Other options include removing ideals generated by certain low-lying tableaux; the Serre relations of standard (Kac--Moody) Lie algebras are of this type.
\section{Point particle model with $\textrm{Maxwell}_{\infty}$ symmetry}
\label{sec:dyn}
In this section, we begin to consider a dynamical realisation of the symmetry algebra $\textrm{Maxwell}_{\infty}$ studied in the previous sections in terms of a relativistic massive or massless particle in an electro-magnetic background. For this we use the language of non-linear realisations and the Lagrangian formulation also used in~\cite{Bonanos:2008kr,Bonanos:2008ez,Gibbons:2009me}.
\subsection{Coset element, Maurer--Cartan forms and Lagrangian}
We consider the coset $\textrm{Maxwell}_{\infty} / SO(1,3)$ and write a group element as\footnote{We note that our normalisations differ slightly from those employed in~\cite{Bonanos:2008ez}. For mixed-symmetry Young tableaux the choice of combinatorial coefficients is not fully canonical and our choice gives simple rational coefficients in the equations.}
\begin{align}
\label{eq:gpelem}
g = e^{x^a P_a} e^{\frac12\theta^{ab} Z_{ab}} e^{\frac12 \xi^{ab,c} Y_{ab,c}} e^{\frac14 \sigma_1^{ab,c,d} S_{ab,c,d}^1} e^{\frac14\sigma_2^{abc,d} S_{abc,d}^2}\cdots\,,
\end{align}
such that $X^M=(x^a,\theta^{ab},\xi^{ab,c},\sigma_1^{ab,c,d},\sigma_2^{abc,d},\ldots)$ are a choice of local coordinates on the coset space and we have fixed a Lorentz gauge. The basic building block of the non-linear realisation is the Maurer--Cartan one-form
\begin{align}
\label{eq:CM4}
\Omega = g^{-1} dg &= d x^a P_a + \frac12\left(d\theta^{ab} +dx^a x^b\right)Z_{ab}
+ \frac12\left(d\xi^{ab,c} - \theta^{ab} dx^c + \frac13 dx^a x^b x^c \right) Y_{ab,c}\nonumber\\
&\quad+ \frac14 \left(d\sigma_1^{ab,c,d} -2\xi^{ab,c} dx^d +\frac16 dx^a x^b x^c x^d\right) S_{ab,c,d}^1 \nonumber\\
&\quad + \frac14\left(d\sigma_2^{abc,d} +4\theta^{ab} d\theta^{cd} - 6\xi^{ab,d} dx^c -8 dx^a x^b \theta^{cd} \right)S^2_{abc,d} +\ldots\,.
\end{align}
The form $\Omega$ is by construction invariant under all positive level elements of $\textrm{Maxwell}_{\infty}$ acting by global transformations from the left. If one wanted to keep the Lorentz gauge invariance unfixed one would have to include a factor $\exp(\tfrac12 r^{ab}M_{ab})$ in the group element~\eqref{eq:gpelem}. This could alternatively be accommodated by considering a covariant extension of the differential $d$
\begin{align}
d \quad \rightarrow\quad D = d + \mathcal{M}^{ab}M_{ab} \,,
\end{align}
where the last term indicates the action of the Maurer--Cartan one-form $\mathcal{M}^{ab}M_{ab}$ coming from the Lorentz piece in the group element~\cite{Bonanos:2008ez}. As we work in the fixed Lorentz gauge above, we will not require this in the sequel.
Note that the contraction with the generators automatically projects the coefficients in~\eqref{eq:CM4} onto the correct Young tableau symmetries. Defining the coefficients in general as
\begin{align}
\Omega = \Omega^a P_a + \frac12 \Omega^{ab} Z_{ab} + \frac12 \Omega^{ab,c} Y_{ab,c} + \frac14 \Omega^{ab,c,d}_1 S^1_{ab,c,d} + \frac14\Omega^{abc,d}_2 S^2_{abc,d} + \ldots
\end{align}
one has the following expanded form of the projected coefficients
\begin{subequations}
\label{eq:C4exp}
\begin{align}
\Omega^a &= dx^a\,,\\
\Omega^{ab} &= d\theta^{ab}+\frac12 \left(dx^a x^b - dx^b x^a\right)\,,\\
\Omega^{ab,c} &= d\xi^{ab,c} -\frac13\left(2\theta^{ab} dx^c - \theta^{bc} dx^a - \theta^{ca} dx^b\right)
+ \frac16 \left(dx^a x^b x^c - dx^b x^a x^c \right)\,,\\
\Omega^{ab,c,d}_1 &= d\sigma_1^{ab,c,d} -\frac34\xi^{ab,c} dx^d -\frac34 \xi^{ab,d} dx^c
+\frac14\left(\xi^{bc,d} + \xi^{bd,c}\right) dx^a - \frac14\left(\xi^{ac,d} + \xi^{ad,c}\right) dx^b\nonumber\\
&\hspace{20mm} + \frac1{12}\left( dx^a x^b x^c x^d - dx^b x^a x^c x^d\right)\,,\\
\Omega^{abc,d}_2 &= d\sigma_2^{abc,d} -6 dx^{[a} \xi^{bc],d} + 4 \theta^{[ab} d\theta^{c]d} -4 \theta^{[ab} d\theta^{cd]} -8 dx^{[a} x^b \theta^{c]d} +8 dx^{[a} x^b \theta^{cd]}\,.
\end{align}
\end{subequations}
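To illustrate where these coefficients come from, consider the level-two term. For the $e^{x\cdot P}$ factor of the group element~\eqref{eq:gpelem}, the expansion $e^{-A}\, d\, e^{A} = dA - \frac12 \left[A, dA\right] + \ldots$ together with $\left[P_a, P_b\right] = Z_{ab}$ yields
\begin{align}
e^{-x\cdot P}\, d\, e^{x\cdot P} = dx^a P_a - \frac12 x^a dx^b\, Z_{ab} + \ldots = dx^a P_a + \frac14 \left(dx^a x^b - dx^b x^a\right) Z_{ab}+\ldots\,,
\end{align}
which, combined with the $d\theta^{ab}$ contribution from the $\exp\!\big(\tfrac12 \theta^{ab}Z_{ab}\big)$ factor, reproduces $\Omega^{ab}$ above; the ordering of the factors only affects the result at level three and beyond. The higher coefficients arise in the same way from nested commutators.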
Considering infinitesimal transformations generated by $\mf{f}$ on the lowest levels with rigid generator
\begin{align}
T = \epsilon^a P_a + \frac12 \epsilon^{ab} Z_{ab} + \frac12 \epsilon^{ab,c} Y_{ab,c}
+\frac14 \epsilon^{ab,c,d}_1 S^1_{ab,c,d} + \frac14 \epsilon^{abc,d}_2 S^2_{abc,d}
+ \ldots\,,
\end{align}
we find that the coset fields $X^M=(x^a,\theta^{ab},\xi^{ab,c},\sigma_1^{ab,c,d},\sigma_2^{abc,d},\ldots)$ transform as
\begin{subequations}
\begin{align}
\delta_T x^a &= \epsilon^a\,,\\
\delta_T \theta^{ab} &= \epsilon^{ab} - \frac12 \left(x^a \epsilon^b-x^b \epsilon^a\right)\,,\\
\delta_T \xi^{ab,c} &= \epsilon^{ab,c} +\frac13\left(2\epsilon^{ab} x^c -\epsilon^{bc} x^{a} -\epsilon^{ca} x^{b}\right)
+ \frac13\left(\epsilon^a x^b x^c - \epsilon^b x^a x^c\right)\,,\\
\delta_T \sigma_1^{ab,c,d} &= \epsilon_1^{ab,c,d} + \frac34\epsilon^{ab,c} x^d + \frac34 \epsilon^{ab,d} x^c -\frac14\left(\epsilon^{bc,d}+\epsilon^{bd,c}\right) x^a +\frac14\left(\epsilon^{ac,d}+\epsilon^{ad,c}\right)x^b\nonumber\\
&\hspace{20mm}+\frac12\epsilon^{ab} x^c x^d -\frac14\left(\epsilon^{bc}x^a x^d - \epsilon^{ad} x^b x^c - \epsilon^{ac}x^b x^d +\epsilon^{bd} x^a x^c\right)\nonumber\\
&\hspace{20mm} +\frac14\left(\epsilon^a x^b x^c x^d - \epsilon^b x^a x^c x^d\right) \,,\\
\delta_T \sigma_2^{abc,d} &= \epsilon_2^{abc,d} + 6 x^{[a} \epsilon^{bc],d} + 4 \theta^{[ab} \epsilon^{c]d} -4 \theta^{[ab} \epsilon^{cd]} +2 \epsilon^{[ab}x^{c]} x^d- 4 \epsilon^{[a} x^b \theta^{c]d}+4\epsilon^{[a} x^b \theta^{cd]}\,.
\end{align}
\end{subequations}
It is natural to think of these transformations as generalized translations in the extended space spanned by all the $X^M$. The Cartan--Maurer forms~\eqref{eq:C4exp} are invariant under these transformations.
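As a quick check at level two, the variation of $\Omega^{ab}$ cancels between its two pieces,
\begin{align}
\delta_T \Omega^{ab} = d\big(\delta_T\theta^{ab}\big) + \frac12\left(dx^a\, \delta_T x^b - dx^b\, \delta_T x^a\right)
= -\frac12\left(dx^a \epsilon^b - dx^b \epsilon^a\right) + \frac12\left(dx^a \epsilon^b - dx^b \epsilon^a\right) = 0\,,
\end{align}
where we used that the parameters are constant so that $d(\delta_T x^a) = 0$. The invariance of the higher Cartan--Maurer forms works out analogously.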
Our next task will be to construct a (massive) particle model for motion on this coset. For this we consider the coordinates $X^M=(x^a,\theta^{ab},\xi^{ab,c},\sigma_1^{ab,c,d}, \sigma_2^{abc,d},\ldots)$ as functions of a world-line parameter $\tau$, \textit{i.e.}, $X^M\equiv X^M(\tau)$. Then the differential $d$ becomes a derivative with respect to the world-line parameter $\tau$ by the chain rule, for example $\Omega^a = \dot{x}^a d\tau$. An invariant Lagrangian up to $\ell=4$ can be constructed from the pull-backs of the one-forms $\Omega$ as~\cite{Bonanos:2008ez}
\begin{align}
\label{eq:L4}
L \, d\tau = m \sqrt{- \Omega^a \Omega_a } + \frac12 f_{ab} \Omega^{ab} + \frac12 f_{ab,c} \Omega^{ab,c}+\frac14 f_{ab,c,d}^1 \Omega^{ab,c,d}_1 + \frac14 f_{abc,d}^2 \Omega^{abc,d}_2+\ldots\,,
\end{align}
where indices are raised and lowered with the flat Minkowski metric. The new dynamical quantities $f_{ab}$, $f_{ab,c}$, $f_{ab,c,d}^1$ and $f_{abc,d}^2$ multiply the invariant one-forms and should be thought of as momentum-like variables. They have the same symmetries as the corresponding generators of the free Lie algebra.
The first Cartan form $\Omega^a=\dot{x}^a d\tau$ associated with the space-time coordinates $x^a$ plays a special role in the construction in that it is not set to zero by a Lagrange multiplier but provides a kinetic term for the particle's motion.
One could also consider the case of massless particles by changing the first term in the Lagrangian (\ref{eq:L4}) to
$\frac{\dot x^a \dot{x}_a}{2e}$,
where $e$ is the einbein variable on the world-line. If we assign to the momentum variables dilatation $D$-eigenvalues opposite to the dilatation weights of the corresponding coordinates, all terms in~\eqref{eq:L4} are invariant under $D$ except for the first term. In the massless case one can make the whole Lagrangian dilatation invariant by letting $\left[ D,e\right] =-2e$.
\subsection{Equations of motion}
The Euler--Lagrange equations following from a Lagrangian of the type given in~\eqref{eq:L4} take a universal and simple form. To understand this we first consider an even simpler Lagrangian of the form
\begin{align}
L \, d\tau = f_A \Omega^A\,,
\end{align}
where $\Omega^A$ are the components of a Maurer--Cartan form $\Omega= g^{-1} dg = \Omega^A t_A$ expanded in a basis $t_A$ of a Lie algebra. The equations of motion of this system are then obtained by considering variations $g^{-1} \delta g = \Sigma^A t_A$ of the coordinates on the group manifold. Varying the Lagrangian above leads to (up to a total derivative)
\begin{align}
\delta L \, d\tau = \delta f_A \Omega^A + f_A \delta \Omega^A = \delta f_A \Omega^A - \left(\dot{f}_A \, d\tau + c_{AB}{}^C f_C \Omega^B \right)\Sigma^A\,,
\end{align}
in terms of the structure constants $\left[ t_A, t_B \right] = c_{AB}{}^C t_C$ of the algebra. Here, we have used that the variation of the components of the Maurer--Cartan form is
\begin{align}
\delta \Omega^A = d\Sigma^A +c_{BC}{}^A \Omega^B \Sigma^C\,.
\end{align}
The equations of motion of such a system are therefore
\begin{align}
\Omega^A=0 \,,\quad \dot{f}_A = 0\,.
\end{align}
The Lagrangian~\eqref{eq:L4} does not have Lagrange multipliers $f_A$ for all Maurer--Cartan forms; the first component $\Omega^a$ is treated differently. As a result there will be associated contributions to the equations for all $\dot{f}_A$. The non-trivial Maurer--Cartan form at level $\ell=1$ is $\Omega^a=\dot{x}^a d\tau$ and therefore all equations for $\dot{f}_{\ldots}$ for levels $\ell>1$ will be proportional to $\dot{x}^a$ contracted into the Lagrange multiplier $f_{\ldots}$ of level $\ell+1$.
To be more precise, our Lagrangian~\eqref{eq:L4} is
\begin{align}
L \, d\tau = m\sqrt{-\Omega_a\Omega^a} + \sum_{\ell>1} f_\ell\, \Omega^\ell\,,
\end{align}
where we have used a schematic notation for the Lagrange multipliers and Maurer--Cartan forms at levels $\ell>1$. The variation of such a Lagrangian is then
\begin{align}
\delta L \, d\tau &= m \delta \sqrt{-\Omega^a\Omega_a} +\sum_{\ell>1} \delta f_\ell \Omega^\ell + \sum_{\ell>1} f_\ell \delta \Omega^\ell\nonumber\\
&= m \ddot{x}_a \delta x^a d\tau + \sum_{\ell>1} \delta f_\ell \Omega^\ell - \sum_{\ell>1}\dot{f}_\ell \Sigma^\ell - \sum_{\ell,m=1}^\infty c_{\ell m}{}^{\ell+m} f_{\ell+m} \Omega^m \Sigma^\ell \,,
\end{align}
where we have discarded total derivatives and employed proper time gauge. Moreover, the $\mathbb{Z}$-grading on the free Lie algebra was used to simplify the structure constants. The equations of motion therefore imply
\begin{align}
\label{eq:U2}
\Omega^{\ell} =0\,, \quad\quad
\dot{f}_\ell d\tau= -c_{\ell\, 1}{}^{\ell+1} f_{\ell+1} \Omega^1\quad\quad
\textrm{ for $\ell>1$.}
\end{align}
Using the fact that $\Omega^\ell=0$ for $\ell>1$, the last term in the variation simplifies and we see that on-shell
every $f_\ell$ only couples to the one on the next level ($f_{\ell+1}$) multiplied by $\Omega^1$ that corresponds to $\Omega^a=\dot{x}^a d\tau$. The equations for the first level are
\begin{align}
\label{eq:MU}
m\ddot{x}_a= f_{ab} \dot{x}^b\,,
\end{align}
where we have also used the commutator $\left[ P_a, P_b\right] =Z_{ab}$ to make the structure constant explicit. Therefore the Lorentz equation~\eqref{eq:MU} is universal for our Lagrangian~\eqref{eq:L4} to all levels in the free Lie algebra. All the other equations~\eqref{eq:U2} are similarly universal. As an example, we work out the equation for $\dot{f}_2$: from the structure constant $\left[ Z_{ab} , P_c \right]=Y_{ab,c}$ we deduce $\dot{f}_{ab} =- f_{ab,c} \dot{x}^c$. In section~\ref{sec:MP}, we will address the question to what extent $f_{ab}$ can be interpreted as an external electro-magnetic field in ordinary space-time.
Since the Lagrangian is constructed from the $\textrm{Maxwell}_{\infty}$-invariant Maurer--Cartan forms, our dynamical system has global $\textrm{Maxwell}_{\infty}$ symmetry.
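As an elementary consistency check of the universal Lorentz equation~\eqref{eq:MU}, one can integrate it numerically for a constant field $f_{ab}$ (the $\textrm{Maxwell}_2$ truncation, for which $\dot{f}_{ab}=0$) and verify that the antisymmetry of $f_{ab}$ conserves the Minkowski norm of the velocity along the world-line. The following is a minimal numerical sketch with illustrative field values and $m=1$:

```python
import numpy as np

# Numerically integrate m xddot^a = f^a_b xdot^b for a constant field f_{ab}
# (the Maxwell_2 truncation) and check that eta_{ab} xdot^a xdot^b is conserved,
# as guaranteed by the antisymmetry of f_{ab}.  Field values are illustrative.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])   # mostly-plus Minkowski metric
f = np.zeros((4, 4))
f[0, 1], f[1, 0] = 0.4, -0.4           # an "electric" component f_{01}
f[1, 2], f[2, 1] = 1.3, -1.3           # a "magnetic" component f_{12}
m = 1.0
M = np.linalg.inv(eta) @ f / m         # M^a_b = eta^{ac} f_{cb} / m

u = np.array([1.2, 0.5, 0.1, 0.0])     # initial velocity xdot^a(0)
norm0 = u @ eta @ u

dtau = 1e-3
for _ in range(5000):                  # classical RK4 step for udot = M u
    k1 = M @ u
    k2 = M @ (u + 0.5 * dtau * k1)
    k3 = M @ (u + 0.5 * dtau * k2)
    k4 = M @ (u + dtau * k3)
    u = u + dtau / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

assert abs(u @ eta @ u - norm0) < 1e-7  # norm conserved to integrator accuracy
```

The conservation of $\dot{x}^a\dot{x}_a$ is what makes the proper-time gauge consistent with the dynamics.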
\subsection{Equations of motion up to level $\ell=4$}
We now give the explicit form of the equations of motion up to level $\ell=4$ in proper time gauge. These can be calculated most easily using the universal forms derived above together with the explicit expressions~\eqref{eq:C4exp} for the Maurer--Cartan forms.
\begin{subequations}
\label{eq:EL}
\begin{align}
\dot{\sigma}_1^{ab,c,d} &= \frac34\xi^{ab,c} \dot{x}^d +\frac34 \xi^{ab,d} \dot{x}^c
-\frac14\left(\xi^{bc,d} + \xi^{bd,c}\right) \dot{x}^a + \frac14\left(\xi^{ac,d} + \xi^{ad,c}\right) \dot{x}^b\nonumber\\
&\quad - \frac1{24}\left( \dot{x}^a x^b x^c x^d - \dot{x}^b x^a x^c x^d\right)\,,\\
\dot{\sigma}_2^{abc,d} &= -\frac{27}{4} \dot{x}^{[a} \xi^{bc],d} - 3 \theta^{[ab} \dot{\theta}^{c]d} - 3 \theta^{d[a} \dot{\theta}^{bc]} +6 \theta^{[ab} x^{c]} \dot{x}^d +6 \theta^{d[a} x^b \dot{x}^{c]}
\,,\\
\dot{\xi}^{ab,c} &=\frac13\left( 2\theta^{ab}\dot{x}^c-\theta^{ca} \dot{x}^b-\theta^{bc} \dot{x}^a\right)
-\frac16\left( \dot{x}^a x^b x^c- \dot{x}^b x^a x^c\right)\,,\\
\dot{\theta}^{ab} &= -\frac12\left(\dot{x}^a x^b - \dot{x}^b x^a\right)\,,\\
\dot{f}^1_{ab,c,d} &=0\,,\\
\dot{f}^2_{abc,d} &=0\,,\\
\label{eq:F3}
\dot{f}_{ab,c} &=-f^1_{ab,c,d} \dot{x}^d +\left(f^2_{abc,d} - 3f^2_{abd,c}\right)\dot{x}^d\,,\\
\label{eq:F2}
\dot{f}_{ab}
&= -f_{ab,c} \dot{x}^c\,,\\
\label{eq:MW4}
m \ddot{x}_a
&=f_{ab} \dot{x}^b\,.
\end{align}
\end{subequations}
The first equations are simply the vanishing of the Maurer--Cartan forms~\eqref{eq:C4exp} enforced by the Lagrange multipliers $f_{\ldots}$ for levels $\ell>1$. If one calculated these equations directly from the Lagrangian~\eqref{eq:L4} with all expressions substituted from~\eqref{eq:C4exp} as Euler--Lagrange equations, one would of course arrive at the same result. However, the simple final expressions might appear surprising if one did not know about the underlying symmetry structure and the general considerations of the preceding section.
\subsection{Relation to multipoles}
\label{sec:MP}
Let us analyse in more detail equation~\eqref{eq:MW4} that looks like a standard Lorentz equation. The field $f_{ab}$ appearing on the right-hand side can be integrated explicitly using the equations~\eqref{eq:EL}. First, we note from~\eqref{eq:F3} that
\begin{align}
f_{ab,c} = -F_{ab,c,d}^1 x^d + \left( F^2_{abc,d} - 3 F^2_{abd,c}\right) x^d + F_{ab,c}\,,
\end{align}
where $F_{\ldots}$ are constants (along the world-line). When we consider any solution of the equations of motion, the Maxwell symmetry of the dynamical system is spontaneously broken. Substituting this into~\eqref{eq:F2} one finds
\begin{align}
f_{ab} = \frac12 F^1_{ab,c,d} x^c x^d + \left( F^2_{abc,d}-3F^2_{abd,c}\right) \left(\theta^{cd} - \frac12 x^c x^d\right)
-F_{ab,c} x^c + F_{ab}\,,
\end{align}
where $\dot{x}^c x^d = -\frac{d}{d\tau}\left(\theta^{cd} - \frac12x^c x^d\right)$ has been used. As already noted in~\cite{Bonanos:2008ez}, this `electro-magnetic field' depends on the new coordinate $\theta^{ab}$ and is not integrable in the space-time coordinates $x^a$. It is useful to separate it into an integrable part (which is a genuine electro-magnetic background) in ordinary configuration space and a non-integrable part:
\begin{align}
f_{ab} = f_{ab}^{\mathrm{int}} + f_{ab}^{\mathrm{non-int}}
\end{align}
with
\begin{subequations}
\begin{align}
f_{ab}^{\mathrm{int}} &= \frac12 F_{ab,c,d}^1 x^c x^d - F_{ab,c} x^c + F_{ab}\,,\\
f_{ab}^{\mathrm{non-int}} &= 4 F^2_{abc,d} \theta^{cd} + F^2_{abc,d} x^c x^d
\end{align}
\end{subequations}
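The total-derivative identity used in this integration follows directly from the equation of motion for $\theta^{cd}$:
\begin{align}
\frac{d}{d\tau}\left(\theta^{cd} - \frac12 x^c x^d\right) = \dot{\theta}^{cd} - \frac12\left(\dot{x}^c x^d + \dot{x}^d x^c\right)
= -\frac12\left(\dot{x}^c x^d - \dot{x}^d x^c\right) - \frac12\left(\dot{x}^c x^d + \dot{x}^d x^c\right) = -\dot{x}^c x^d\,.
\end{align}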
The integrable part satisfies the Bianchi identity $\partial_{[a} f_{bc]}^{\mathrm{int}} =0$. We note that the non-integrable part only depends on $F^2_{abc,d}$ whereas the integrable part looks like a Taylor expansion of an electro-magnetic field. The full equation for $x_a$ can then be written as
\begin{align}
\label{eq:LEQ}
m\ddot{x}_a -4F^2_{abc,d}\dot{x}^b \theta^{cd} + F^2_{abc,d} \dot{\theta}^{bc} x^d = f_{ab}^{\mathrm{int}} \dot{x}^b\,.
\end{align}
For $F^2_{abc,d}=0$ one obtains the equation of motion for a particle moving in an electro-magnetic field that depends up to quadratic order on the coordinates. The condition $F^2_{abc,d}=0$ is satisfied automatically when working in the quotient $\mf{d}_2$ defined in~\eqref{eq:DI}.
If we consider the particular solution $f_{ab}^{\mathrm{int}}=0$, this eliminates the coupling of the world-line to the electro-magnetic field and should therefore be interpreted as vanishing total charge of the system. We can, however, keep the non-integrable part in~\eqref{eq:LEQ}, in which case the resulting equation takes the form
\begin{align}
m\ddot{x}_a = 4F^2_{abc,d}\dot{x}^b \theta^{cd} -F^2_{abc,d} \dot{\theta}^{bc} x^d\,.
\end{align}
This equation is similar to the equation of motion of an electric dipole studied in~\cite{Anandan:1999ig}.
\subsection{Consistent truncation of the equations to quotients of $\textrm{Maxwell}_{\infty}$}
As discussed in section~\ref{sec:quot}, there are many quotients of $\textrm{Maxwell}_{\infty}$ that can be constructed. The dynamical equations automatically truncate consistently to any such quotient. In view of the non-integrable contributions to the Lorentz equation~\eqref{eq:LEQ} we now consider two such quotients.
The first is to work in the quotient $\mf{d}_2$ of equation~\eqref{eq:DI} where one only keeps Young tableaux with at most two rows. In this quotient up to level $\ell=4$, the non-integrable part proportional to $F^2_{abc,d}$ in~\eqref{eq:LEQ} drops out and one has a standard Lorentz equation with an electro-magnetic field that is at most quadratic in the coordinate $x^a$.
We shall also extend the equations of motion to $\ell=5$ in the quotient $\mf{d}_2$. The level five generators are given in appendix~\ref{app:CR}. The new relevant equations in the quotient $\mf{d}_2$ are
\begin{subequations}
\begin{align}
\dot{f}_{ab,c,d,e} &= 0\,,\\
\dot{f}_{ab,cd,e} &=0 \,,\\
\dot{f}_{ab,c,d} &= - f_{ab,c,d,e} \dot{x}^e - \left(f_{ab,ce,d}+f_{ab,de,c}\right)\dot{x}^e\,.
\end{align}
\end{subequations}
Integrating iteratively for $f_{ab}$ then gives the following
\begin{subequations}
\begin{align}
f_{ab,c,d} &= - F_{ab,c,d,e} x^e - \left(F_{ab,ce,d}+F_{ab,de,c}\right) x^e + F_{ab,c,d}\,,\\
f_{ab,c} &= \frac12 F_{ab,c,d,e} x^d x^e - \left(F_{ab,ce,d}+F_{ab,de,c}\right) \left(\theta^{de}-\frac12 x^dx^e\right) -F_{ab,c,d} x^d + F_{ab,c}\,,\\
f_{ab} &= \frac16 F_{ab,c,d,e} x^c x^d x^e +\left(F_{ab,ce,d}+F_{ab,de,c}\right) \left( \xi^{de,c} -\frac16 x^c x^d x^e\right) +\frac12 F_{ab,c,d} x^c x^d - F_{ab,c} x^c +F_{ab}\,.
\end{align}
\end{subequations}
The correction to the $\xi$ term in the last line comes from
\begin{align}
\left(\theta^{de}-\frac12 x^dx^e\right)\dot{x}^c + (c\leftrightarrow d) = \frac{d}{d\tau} \left( \xi^{de,c} -\frac16 x^c x^d x^e\right) + (c\leftrightarrow d)
\end{align}
but seems to drop out when contracted with the $F$-terms. However, one is left with a non-integrable contribution to $f_{ab}$ given by
\begin{align}
f_{ab}^{\mathrm{non-int}} &= \left(F_{ab,ce,d}+F_{ab,de,c}\right) \xi^{de,c} \,.
\end{align}
This does not satisfy the Bianchi identity in the standard space-time and comes from the tableau \scalebox{0.8}{$\yng(3,2)$}. We therefore conclude that the quotient $\mf{d}_2$ introduces new additions to the Lorentz equation that do not simply correspond to the Taylor expansion of a standard electro-magnetic field. We will comment more on this in the conclusion.
The second quotient we consider is the one given in~\eqref{eq:unfold} where one only keeps Young tableaux with at most two rows such that the second row has at most one box. This will also remove the non-integrable contribution above and it is straightforward to see that in this case one can extend the algebra and equations of motion to arbitrary level. The resulting integrable electro-magnetic field takes the form
\begin{align}
f_{ab} = \sum_{\ell\geq 0} (-1)^\ell F_{ab,i_1\ldots i_\ell} x^{i_1} \cdots x^{i_\ell}\,.
\end{align}
(The alternating sign is a consequence of our convention of appending indices on the right in the free algebra, cf.~\eqref{eq:levelstep}.) The Lorentz equation becomes simply
\begin{align}
m\ddot{x}_a = f_{ab} \dot{x}^b
\end{align}
and this quotient then describes the Taylor expansion of an electro-magnetic field. One might also refer to it as the unfolding of particle motion in an electro-magnetic field.
\section{Conclusions}
\label{sec:concl}
The Maxwell algebra introduced in \cite{Galindo,Schrader:1972zd} is an extension of the Poincar\'e algebra with an antisymmetric generator $Z_{ab}$. A particle moving in a generic constant electro-magnetic background~\cite{Bonanos:2008ez} is a realisation of this algebra. The Maxwell algebra describes at the same time the particle and the constant electro-magnetic background in which the particle moves.
In this paper, we have introduced a maximal infinite sequential extension of this algebra that we call $\textrm{Maxwell}_{\infty}$. It is an infinite-dimensional free Lie algebra generated by the translation generators $P_a$ at level $\ell=1$, to which we adjoin the Lorentz generators $M_{ab}$ at level $\ell=0$. The higher levels correspond to generators with a precise Young tableau structure. The relation with Chevalley--Eilenberg cohomology was elucidated. The existence of different infinite Lie algebra ideals of $\textrm{Maxwell}_{\infty}$ allows the construction of different truncations of the Maxwell algebra. One of these truncations corresponds to the finite Maxwell algebras of~\cite{Bonanos:2008ez}. Another truncation gives the unfolding of the Maxwell field given in \cite{Boulanger:2015mka,Vasiliev:2005zu}. It will be interesting to study other possible truncations of $\textrm{Maxwell}_{\infty}$.
As a possible realisation of $\textrm{Maxwell}_{\infty}$ we have constructed a model at low order in derivatives that tentatively describes the motion of a distribution of charged particles in a generic electro-magnetic field. The motion is characterised in terms of the center of mass coordinates and an infinite set of momenta that are conjugate to the generalised coordinates of the coset $\textrm{Maxwell}_{\infty}/\mathrm{Lorentz}$. These equations take a universal form and are invariant under the $\textrm{Maxwell}_{\infty}$ symmetry by construction. By contrast, any solution will break this symmetry spontaneously and the residual symmetry can be smaller, for example agreeing with the BCR algebra~\eqref{bcr1}
in the case of $\textrm{Maxwell}_2$.
We see many avenues of future research. Our treatment of the dynamical system was in Lagrangian form; for a proper analysis of the Killing symmetries of the system a transition to a canonical Hamiltonian form along the lines of~\cite{Gibbons:2009me} will be useful. The dynamical variables $f_{\ldots}$ for $\ell>1$ will then play the roles of momenta while the conjugate momentum $\pi_a$ to the position $x^a$ will take a more complicated form. The canonical formulation is also crucial for considering the potential quantisation of the system.
Two possible generalisations of the present work are to study either the non-relativistic Galilei (or Carroll) case~\cite{Bonanos:2008kr,Bergshoeff:2014jla} or the supersymmetric extension~\cite{Bonanos:2009wy}. In either case only finite extensions of the standard kinematical algebra are known but we anticipate an embedding of these structures in an appropriate free Lie algebra construction, possibly with quotients. It would also be interesting to study the relation of the electro-magnetic $\textrm{Maxwell}_{\infty}$ to other finite- or infinite-dimensional symmetries that involve gravity and/or higher rank gauge fields~\cite{Sezgin:1996cj,West:2001as,Damour:2002cu,Henneaux:2010ys}.
In conclusion, we consider our construction as providing a very general framework that can serve to analyse many different physical situations depending on which quotient of $\textrm{Maxwell}_{\infty}$ one chooses.
\subsection*{Acknowledgements}
We would like to thank M.~Henneaux and J.~Palmkvist for useful discussions. This research was started during the 2015 Benasque program ``Gauge theory, supergravity and strings''. We are grateful to the Benasque center and to the program organisers for providing a stimulating environment ultimately leading to the results of this paper. AK gratefully acknowledges the warm hospitality of the University of Barcelona.
JG has been supported in part by FPA2013-46570-C2-1-P, 2014-SGR-104 (Generalitat de Catalunya) and Consolider CPAN and by
the Spanish government (MINECO/FEDER) under project MDM-2014-0369 of ICCUB (Unidad de Excelencia Mar\'\i a de Maeztu).
Q: Is it possible to set up an interface with both DHCP and a static IP? I'm sending a box out to a colo, and the guy taking it has no idea about networking. I was hoping I could set it up so I can access it no matter which of two switches he plugs it into. After that I can remove one of the entries.
Something like:
auto eth0
iface eth0:0 inet dhcp
iface eth0:1 inet static
address 66.66.66.220
netmask 255.255.255.224
gateway 66.66.66.254
broadcast 66.66.66.223
A: Finally got someone at the console just in case, but this works.
auto eth0
iface eth0 inet dhcp
auto eth0:1
iface eth0:1 inet static
address 66.66.66.220
netmask 255.255.255.224
gateway 66.66.66.254
broadcast 66.66.66.223
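Once it's up, you can sanity-check that both addresses actually got attached (assuming the interface really is eth0 on your hardware):

ip addr show eth0

The DHCP lease and the static 66.66.66.220/27 should both be listed. After you know which switch the box landed on, just delete the stanza you don't need from /etc/network/interfaces and run ifdown/ifup again.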
Euthalia bolitissa is a species of butterfly first described by Hans Fruhstorfer in 1913. Euthalia bolitissa belongs to the genus Euthalia and the family Nymphalidae (the brush-footed butterflies). No subspecies are listed in the Catalogue of Life.
UNFPA Kyrgyzstan
Despite considerable improvements in sexual and reproductive health and the success of the ongoing health-care reform, the Kyrgyz Republic has one of the highest maternal mortality rates in the region. UNFPA places a strong focus on helping the country make progress towards reducing maternal mortality and works on the integration of reproductive health services into the general health-care delivery system. It partners with state agencies on supporting youth-friendly health services and improving data availability and policy formulation around population dynamics, sexual and reproductive health and gender equality.
Key results of Kyrgyzstan in 2021
Sexual and reproductive health in emergency preparedness plans
Sexual and reproductive health was integrated into emergency preparedness plans
School-based comprehensive sexuality education
A comprehensive sexuality education curricula was operationalized in accordance with international standards
Integration of sexual and reproductive health of adolescents and youth into strategies of sectors apart from health sector
At least two sectors (other than health) had strategies which integrated the sexual and reproductive health of adolescents and youth
Gender-based violence in emergencies
At least 15 of the 18 minimum standards were applied for the prevention of and response to gender-based violence in emergencies
Minimum Initial Services Package
Health service providers and managers were trained on the minimum initial service package
Programme Activities
Kyrgyzstan 2014 Programme Activities data
Utilization of sexual and reproductive health services
Every woman, adolescent and youth everywhere, especially those furthest behind, has utilized integrated sexual and reproductive health services and exercised reproductive rights, free of coercion, discrimination and violence
Total Spending:
UNFPA $404,125 (96%)
NGO $14,968 (4%)
Core Resources (91%)
Non-core Resources (9%)
Increased national capacity to strengthen enabling environments, increase demand for and supply of modern contraceptives and improve quality family planning services that are free of coercion, discrimination and violence
UNFPA $29,072 (100%)
Core Resources (100)
Increased national capacity to deliver comprehensive maternal health services
UNFPA $329,888 (100%)
NGO $1,635 (0%)
Non-core Resources (0)
Increased national capacity to deliver HIV programmes that are free of stigma and discrimination, consistent with the UNAIDS unified budget results and accountability framework (UBRAF) commitments
Core Resources (3)
Non-core Resources (97)
Sexual and reproductive health in emergencies
Increased national capacity to provide sexual and reproductive health services in humanitarian settings
UNFPA $2,000 (13%)
NGO $13,334 (87%)
Increased national capacity to deliver integrated sexual and reproductive health services
UNFPA $5,278 (100%)
Empowerment of young people
Every adolescent and youth, in particular adolescent girls, is empowered to have access to sexual and reproductive health and reproductive rights, in all contexts
Non-core Resources (31%)
Adolescents and youth
Increased national capacity to conduct evidence-based advocacy for incorporating adolescents and youth and their human rights/needs in national laws, policies, programmes, including in humanitarian settings
Core Resources (65)
Increased national capacity to design and implement community and school based comprehensive sexuality education (CSE) programmes that promote human rights and gender equality
UNFPA $23,661 (61%)
Gender equality and empowerment of women and girls
Gender equality, the empowerment of all women and girls, and reproductive rights are advanced in development and humanitarian settings
NGO $114,019 (29%)
GOV $14,343 (4%)
Civil society and rights for all
Strengthened engagement of civil society organizations to promote reproductive rights and women's empowerment, and address discrimination, including of marginalized and vulnerable groups, people living with HIV and key populations
Ending harmful practices
Increased capacity to prevent gender-based violence and harmful practices and enable the delivery of multisectoral services, including in humanitarian settings
Evidence-based policymaking
Strengthened national policies and international development agendas through integration of evidence-based analysis on population dynamics and their links to sustainable development, sexual and reproductive health and reproductive rights, HIV and gender equality
Core Resources (100%)
Population data analysis
Increased availability of evidence through cutting-edge in-depth analysis on population dynamics, sexual and reproductive health, HIV and their linkages to poverty eradication and sustainable development
Organizational effectiveness and efficiency
Organizational adaptability
Increased adaptability through innovation, partnership and communications
UNFPA $777 (100%)
Non-core Resources (100)
Programme effectiveness
Enhanced programme effectiveness by improving quality assurance, monitoring, and evaluation
NGO $9,345 (100%)
NGO $111,583 (100%)
Strengthened international and national protection systems for advancing reproductive rights, promoting gender equality and non-discrimination and addressing gender-based violence
GOV $48,304 (27%)
GOV $133,657 (29%)
GOV $9,943 (5%)
NGO $16,435 (100%)
Data and policies
Strengthened capacity for the formulation and implementation of rights-based policies (global, regional and country) that integrate evidence on population dynamics, sexual and reproductive health, HIV, and their links to sustainable development
UNFPA $500 (2%)
NGO $1,188 (16%)
Everyone, everywhere, is counted, and accounted for, in the pursuit of sustainable development equality
Increased availability of evidence through cutting-edge in-depth analysis on population dynamics, sexual and reproductive health, HIV and their linkages to poverty eradication and sustainable development
Integrated sexual and reproductive health services
Strengthened capacities to provide high-quality, integrated information and services for family planning, comprehensive maternal health, sexually transmitted infections and HIV, as well as information and services that are responsive to emergencies and fragile contexts
Sexual and reproductive health polices for those furthest behind
Enhanced capacities to develop and implement policies, including financial protection mechanisms, that prioritize access to information and services for sexual and reproductive health and reproductive rights for those furthest behind, including in humanitarian settings
Health workforce capacity
Strengthened capacities of the health workforce, especially those of midwives, in health management and clinical skills for high-quality and integrated sexual and reproductive health services, including in humanitarian settings
Accountability for sexual and reproductive health
Improved domestic accountability mechanisms for sexual and reproductive health and reproductive rights through the involvement of communities and health-system stakeholders at all levels
NGO $4 (100%)
Marginalized Girls
Increased capacity of partners to design and implement comprehensive programmes to reach marginalized adolescent girls including those at risk of child marriage
UNFPA $1,087 (7%)
Gender equality laws and policies
Strengthened policy, legal and accountability frameworks to advance gender equality and empower women and girls to exercise their reproductive rights and to be protected from violence and harmful practices
UNFPA $9 (100%)
NGO $0 (0%)
Improved mobilization, management and alignment of resources through an increased focus on value for money and systematic risk management
$-11
UNFPA $-11 (100%)
Population data systems
Improved national population data systems to map and address inequalities; to advance the achievement of the Sustainable Development Goals and the commitments of the Programme of Action of the International Conference on Population and Development; and to strengthen interventions in humanitarian crises
Delivery of sexual and reproductive health commodities
Strengthened capacities to effectively forecast, procure, distribute and track the delivery of sexual and reproductive health commodities, ensuring resilient supply chains
Adolescents and youth skills and capabilities
Young people, in particular adolescent girls, have the skills and capabilities to make informed choices about their sexual and reproductive health and rights, and well-being
Youth leadership and participation
Young people have opportunities to exercise leadership and participate in sustainable development, humanitarian action and in sustaining peace
Gender and sociocultural norms
Strengthened engagement of civil society organizations to promote reproductive rights and women's empowerment, and address discrimination
Strengthened engagement for promoting gender equality and non-discrimination and addressing gender-based violence
Core Resources (1%)
Demographic intelligence
Mainstreamed demographic intelligence to improve the responsiveness, targeting and impact of development policies, programmes and advocacy
Adolescents and youth policies
Policies and programmes in relevant sectors tackle the determinants of adolescent and youth sexual and reproductive health, development and well-being
Strengthened civil society and community mobilization to eliminate discriminatory gender and sociocultural norms affecting women and girls
Prevention and addressing of gender-based violence
Increased multisectoral capacity to prevent and address gender-based violence using a continuum approach in all contexts, with a focus on advocacy, data, health and health systems, psychosocial support and coordination
development, sexual and reproductive health and reproductive rights, HIV and gender equality
Dashboards available for Kyrgyzstan
Programme Documentation
CPD Kyrgyzstan [2023-2027] (DP/FPA/CPD/KGZ/5)
Cycle: 2023-2027
CPE Kyrgyzstan [2010-2011] (DP/FPA/2009/9)
CPE Kyrgyzstan [2017] (DP/FPA/2015/14)
Technical notes and sources
The results featured here are only a selection of key results in line with strategic plan 2014-17 indicators. The selection does not reflect the full picture of all results achieved during the strategic plan cycle by UNFPA programme countries.
The source of data for most country level indicators is the UNFPA country annual reports for 2014-2017, unless stated otherwise
Results featured are cumulative, i.e., achieved within the 2014-2017 timeframe, and reflect the net situation as of the year selected
The majority of the results are captured from 127 UNFPA programme countries
Indicators that are marked 'not achieved' could imply any of the following conditions:
The country may have achieved the result without the support of UNFPA
The country has not targeted the given indicator during the 2014-2017 period
UNFPA is supporting this area of work, but the result has not yet been achieved
Notes for key results
Maternal deaths averted: Results reflected for 46 UNFPA Supplies programme countries
Unintended pregnancies averted: Results reflected for 46 UNFPA Supplies programme countries
Unsafe abortions averted: Results reflected for 46 UNFPA Supplies programme countries
Number of countries that developed midwifery workforce policies based on international standards: Baseline data not available
Number of fistula repair surgeries supported: Baseline data not available
Number of countries that implemented at least 8 out of the UNFPA 10-step strategic-approach to comprehensive condom programming: Results achieved in 2014-2016 only
Number of countries that have capacity to implement the Minimum Initial Service Package at the onset of a crisis: UNFPA reflected non-cumulative figures for this indicator; the Minimum Initial Service Package (MISP) is a series of crucial actions required to respond to reproductive health needs at the onset of every humanitarian crisis. View here for more information on MISP
Percentage of countries affected by humanitarian crises that have functioning inter-agency gender-based violence coordination body as a result of UNFPA guidance and leadership: Baseline data not available
Number of countries that established comprehensive plan to report on UNFPA-supported Sustainable Development Goal indicators: Results achieved in 2017 only; baseline data not available
Number of countries that established online national population data platforms that are publicly accessible by users: Results achieved in 2017 only; baseline data not available
Number of countries in which the capacity of national statistical authorities was developed to analyse and use disaggregated data on adolescent and youth: Baseline data not available; UNFPA reflected non-cumulative figures for this indicator
Number of countries that generated and used sub-national estimates of population, health and social data: Baseline data not available
The designations employed and the presentation of material on the map do not imply the expression of any opinion whatsoever on the part of UNFPA concerning the legal status of any country, territory, city or area or its authorities, or concerning the delimitation of its frontiers or boundaries. The dotted line represents approximately the Line of Control in Jammu and Kashmir agreed upon by India and Pakistan. The final status of Jammu and Kashmir has not yet been agreed upon by the parties.
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 6,599
|
Kati Jo Spisak (born November 22, 1983) is an American soccer player and coach.
Career
Player
From 2007 to 2009, Spisak appeared in 20 league matches for the Washington Freedom franchise, 17 of them in the W-League and three in the inaugural WPS season. She spent the 2010 and 2011 seasons with the WPS rivals Saint Louis Athletica and Boston Breakers, but made no competitive appearances for either club. From the 2014 season onward she served sporadically as backup goalkeeper for the NWSL franchise Washington Spirit, again without a league appearance.
Spisak was part of the United States youth national teams at the U-21 and U-23 levels and won the Nordic Cup with the U-21 team in 2004.
Coach
For the 2014 season of the National Women's Soccer League, she joined the coaching staff of the Washington Spirit under head coach Mark Parsons. From the 2015 season she additionally took over as coach of the Spirit reserve team in the W-League, winning the championship final at the first attempt.
Honors and awards
As a player
2004: Nordic Cup winner (USA U-21)
2006: Keough Award as the best female soccer player in the Greater St. Louis area
2007: W-League champion (Washington Freedom)
As a coach
2015: W-League champion (Washington Spirit Reserves)
References
External links
Soccer goalkeeper (Washington Freedom)
Soccer goalkeeper (Saint Louis Athletica)
Soccer goalkeeper (Boston Breakers, 2008)
Soccer goalkeeper (Washington Spirit)
Soccer coach (United States)
American
Born 1983
Woman
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,180
|
\section{Introduction}
In statistical physics, it is often assumed that individuals are intrinsically identical. In neuroscience also, identical parameters are typically assumed for all neurons in the study of neuronal population activity and correlation transmission. Real neurons, though, even if they are of the same type and located in the same brain area, exhibit intrinsic differences. Their morphologies and the intracellular concentrations of ions, to name just two examples, can differ widely, although in principle they have been generated by the same mechanisms \cite{Marder2006}. As a consequence, neuronal spike patterns can differ although neurons receive identical inputs \cite{Mainen1995,Padmanabhan2010}. Recently, \textit{in vitro} intracellular recordings of isolated mitral cells in the mouse olfactory bulb were conducted while they responded to identical input \cite{Padmanabhan2010} (Fig.\,\ref{f1}(a)). The neurons displayed diverse output firing rates and pairwise correlations. Specifically, the spike correlation coefficient obtained with a $1\,\mathrm{ms}$ observation window co-varied with the rate difference of the neuron pairs: small differences resulted in a wide range of different spike correlations, but large differences led always to small spike correlation.
In homogeneous network models, additional independent Gaussian white noises or independent Poisson spikes are very often added to every constituent identical neuron to account for their diverse spike timing. In real brain networks, not only the spike timing but also the spiking rate of neurons differ due to their intrinsic biophysical diversity. Therefore, it is of great interest to understand how the biophysical heterogeneity of a neuronal population contributes to neural coding and information processing in neuronal networks. Research work has been conducted on the coding properties \cite{Chelaru2008,Mejias2012} and synchronous responses \cite{Tsodyks1993,Wang1996,Brette2012} in a network of heterogeneous neurons. In many cases, neuronal heterogeneity was implemented simply by replacing one or more fixed neuronal parameters, such as the offset current \cite{Tsodyks1993,Wang1996}, the spiking threshold \cite{Mejias2012}, or the synaptic conductance \cite{Chelaru2008}, by a Gaussian- or uniformly-distributed random variable.
Here we investigated more fundamental questions, using both theoretical analysis and simulations: how neuronal heterogeneity can be represented appropriately in theory and how it can affect the neuronal dynamics and the spiking statistics in a population of simple leaky integrate-and-fire (LIF) neurons. The limitations of the existing approaches are addressed first. Then we suggest a more general scheme to implement biophysical diversity when either rate or correlation is of interest. By rescaling the dynamical equation, we derive mathematical relations between multiple neuronal parameters and the input noise. The main impact of common input to heterogeneous neurons on rate and correlation can be realized by an identical (frozen) noise current injection with different values of mean and variance, whereas the complete effect is captured by additionally drawing distributed values of the membrane time constant and the refractory period. In this scheme, the rate difference of heterogeneous LIF neurons can be treated analytically. As for correlation, we utilize alternative correlation measures to illustrate that spikes from heterogeneous neurons may be desynchronized by several milliseconds, thus escaping detection by a $1\,\mathrm{ms}$ observation window.
\section{Model}
We consider a population of isolated leaky integrate-and-fire (LIF) neurons, each of which has its membrane potential $V(t)$ governed by
\begin{eqnarray}
\tau_{m}\dot{V}(t) &=& -V(t) + RI(t), \label{lif}
\end{eqnarray}
where the input synaptic current
\begin{eqnarray}
RI(t) &=& \tau_{m}J_{E}\sum_{j}\delta(t-t_{j}) - \tau_{m}J_{I}\sum_{k}\delta(t-t_{k}).
\end{eqnarray}
$\tau_{m} = RC$ is the membrane time constant. $R$ and $C$ are the membrane resistance and capacitance, respectively. $J_{E}$ ($J_{I}$) is the amplitude of an excitatory (inhibitory) post-synaptic potential, whereas $t_{j}$ ($t_{k}$) represents the time of the $j$th ($k$th) excitatory (inhibitory) input spike.
When $V(t) = \theta$, $V(t)$ is reset to $V_{r}$ and a pause for synaptic integration $\tau_{r}$ is imposed to mimic the refractory period. In the high-input regime, the sum of synaptic inputs to a neuron can be approximated by a fluctuating input noise \cite{Ricciardi1979,Kuhn2004}
\begin{eqnarray}
I(t) &\equiv& \tau_{m}[\mu + \sigma\eta(t)], \label{current}
\end{eqnarray}
where
\begin{eqnarray}
\mu &=& J_{E}\nu_{E} - J_{I}\nu_{I}, \label{mu} \\
\sigma &=& \sqrt{J_{E}^{2}\nu_{E} + J_{I}^{2}\nu_{I}}. \label{sigma}
\end{eqnarray}
$\eta(t)$ is a white noise random process such that $\langle \eta(t)\eta(t')\rangle = \delta(t-t')$. $\nu_{E}$ ($\nu_{I}$) is the firing rate of the excitatory (inhibitory) input.
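As a quick illustration, Eqs. (4)-(5) map the synaptic amplitudes and input rates onto the mean and standard deviation of the effective noise. A minimal sketch (the numerical values of $J$ and $\nu$ below are purely illustrative, not taken from the text; rates are in spikes/ms):

```python
def input_stats(j_e, j_i, nu_e, nu_i):
    """Diffusion approximation, Eqs. (4)-(5): mean and standard deviation
    of the effective input noise for given synaptic amplitudes (j_e, j_i)
    and input rates (nu_e, nu_i, in spikes/ms)."""
    mu = j_e * nu_e - j_i * nu_i
    sigma = (j_e ** 2 * nu_e + j_i ** 2 * nu_i) ** 0.5
    return mu, sigma

# Illustrative values (not from the paper):
mu, sigma = input_stats(j_e=0.1, j_i=0.4, nu_e=1.0, nu_i=0.1)
```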
The numerical integration of Eq.~\ref{lif} in our simulations was performed using the fourth-order Runge-Kutta method with a time step of $0.01\,\mathrm{ms}$.
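For readers who want to reproduce the basic setup, a minimal sketch of the integration follows. We use a simple Euler-Maruyama step, the standard stochastic analogue of explicit integration, rather than the fourth-order Runge-Kutta scheme used for the actual simulations; parameter values follow the text (potential in arbitrary units, times in ms).

```python
import numpy as np

def simulate_lif(mu=0.06, sigma=0.2, tau_m=20.0, tau_r=2.0,
                 theta=1.0, v_r=0.0, T=1000.0, dt=0.01, seed=0):
    """Integrate tau_m dV/dt = -V + tau_m*(mu + sigma*eta(t)) with
    threshold theta, reset v_r and refractory period tau_r (times in ms).
    Returns the spike times."""
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    noise = rng.standard_normal(n_steps)
    v, refrac, spikes = v_r, 0.0, []
    for i in range(n_steps):
        if refrac > 0.0:          # synaptic integration paused
            refrac -= dt
            continue
        # Euler-Maruyama: deterministic drift + sqrt(dt)-scaled white noise
        v += (-v / tau_m + mu) * dt + sigma * np.sqrt(dt) * noise[i]
        if v >= theta:
            spikes.append((i + 1) * dt)
            v = v_r
            refrac = tau_r
    return np.array(spikes)
```

With the default values the mean drive $\mu\tau_{m} = 1.2$ exceeds the threshold, so the neuron fires in the drift-dominated regime at a few tens of spikes per second.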
\begin{figure}
\includegraphics[scale=0.22]{fig1.ps}
\caption{(Color online). (a) Correlation coefficient ($1\,\mathrm{ms}$ window) of $589$ spike trains from mitral cells \textit{in vitro} receiving an identical input as a function of rate difference, adapted with permission from \cite{Padmanabhan2010}.
(b) The same measure for $100$ simulated heterogeneous LIF neurons using uniformly distributed $\tau_{m}$, $\tau_{r}$, $C$ and $\theta$ with different distribution widths.
\label{f1}}
\end{figure}
\section{Heterogeneity}
Intrinsic diversity of a population of neurons can be directly imposed by drawing neuronal parameters from a distribution. Here, we tested the response of $100$ isolated heterogeneous LIF neurons to an identical fluctuating input current in the form of Eq.\,(\ref{current}). Neuronal heterogeneity is implemented by drawing four uniformly distributed parameters: $\tau_{m}$, $\tau_{r}$, $C$ (or $R$) and $\theta$, which, together with $V_{r}$, represent all the independent parameters of a LIF neuron in response to a current input. The mean values of the uniform distribution are $20\,\mathrm{ms}$, $2\,\mathrm{ms}$, $1$ and $1$ (in arbitrary units) respectively, and $V_{r}$ is fixed to $0$. These values are used throughout this work. We maintain the temporal scale of the dynamics of a typical neuron and rescale the potential by setting the mean reset to zero and the mean threshold to $1$. The correlation coefficient as a function of the output firing rate difference of all possible pairs with different distribution widths (percentage with respect to the mean) from $100\,\mathrm{s}$ of simulations (an example with $\mu = 0.03$ and $\sigma = 0.3$ shown in Fig.\,\ref{f1}(b)) closely resembles the experimental findings in \cite{Padmanabhan2010} (Fig.\,\ref{f1}(a)).
Diverse neuronal spike timing in a network has very often been achieved by adding independent random inputs to individual neurons. We provide every identical neuron with a common input as the input signal plus an independent input with the same statistics among neurons, in the form of
$\mu + \sigma\big[\sqrt{c}\eta(t) + \sqrt{1-c}\xi_{i}(t)\big]$ where $\eta(t)$ and $\xi_{i}(t)$ are independent Gaussian white noises. Fig.\,\ref{f2}(a) displays the raster and the correlation coefficient as functions of the rate difference when $\mu = 0.06$, $\sigma = 0.2$ and $c = 0.9$. The rate difference is close to zero and the correlation coefficient between any pair is nearly the same \cite{delaRocha2007}. Decreasing $c$ leads to a drop in spike correlation but has no effect on the rate difference. These observations are very distinct from both the experimental (Fig.\,\ref{f1}(a)) and simulation (Fig.\,\ref{f1}(b)) results. In view of some previous work on the reliability of single neurons in response to a repeated input \cite{Mainen1995,Teramae2008,Padmanabhan2010}, this implementation may be adopted to account for trial-to-trial variability of the same neuron.
It is common practice to implement heterogeneity of neurons by drawing random variables for a single neuronal parameter. To test its validity, we provide every identical neuron with a common input plus a random value of the spiking threshold drawn from a uniform distribution $\theta \in [0.5,1.5]$. Fig.\,\ref{f2}(b) displays the raster and the correlation coefficient as functions of the rate difference. Another example with distributed values of the input offset current $\mu$ instead of $\theta$ is shown in Fig.\,\ref{f3}(a). In either case, neurons exhibit dispersive firing rates. However, the spike correlation distribution at different values of rate difference is too narrow compared with Fig.\,\ref{f1}(a) and (b), and the region of small rate difference and small spike correlation cannot be reached. Thus, the impact of neuronal heterogeneity is only partially accounted for. We describe our alternative approach in the next section.
\begin{figure}
\includegraphics[scale=0.15]{fig2.ps}
\caption{Rasterplot, peri-stimulus time histogram (PSTH) and correlation coefficient as a function of rate difference for the following input into $100$ LIF neurons:
(a) $\mu + \sigma\big[\sqrt{c}\eta(t) + \sqrt{1-c}\xi_{i}(t)\big]$ ($c = 0.9$);
(b) identical input $\mu + \sigma\eta(t)$ but distributed $\theta_{i}$;
(c) our scheme using $\mu_{i} + \sigma_{i}\eta(t)$ together with distributed $\tau_{m}$ and $\tau_{r}$, consistent with both the experimental findings \cite{Padmanabhan2010} and the mathematical analysis. The subscript $i$ denotes ``independent''.
\label{f2}}
\end{figure}
\section{Mathematical relations}
In addition to the five parameters mentioned above, two additional parameters correspond to synaptic inputs: $J_{E}$ and $J_{I}$. We analyze the contribution of the heterogeneity in the seven independent neuronal parameters in a population of LIF neurons in the high-input regime.
Regarding synaptic inputs, heterogeneity in the parameters $J_{E}$ and $J_{I}$ can be captured by heterogeneity in $\mu$ and $\sigma$, according to Eqs. (\ref{mu}) and (\ref{sigma}). For example, if $J_{E}$ is uniformly distributed, $\mu$ is also uniformly distributed, whereas $\sigma^{2}$ is distributed as the square of a uniformly distributed variable.
The other five parameters are present in the neuronal dynamics irrespective of the type of inputs (current or spikes). First, $R$ (or $C$, depending on the form of writing) can be absorbed into $I(t)$ as shown in Eq.\,(\ref{lif}) so any distribution of $R$ can be accounted for by a corresponding distribution of $\mu$ and $\sigma$.
The difference between $\theta$ and $V_{r}$, which is the potential difference a neuron has to traverse, is a quantity relative to the synaptic strengths $J_{E}$ and $J_{I}$. For instance, lifting $\theta$, or lowering $V_{r}$, is equivalent to reducing $J_{E}$ and $J_{I}$ together by the same ratio. Thus, heterogeneity in $\theta$ and $V_{r}$ can be included in $\mu$ and $\sigma$ by means of rescaling.
Unlike the above five parameters related to the potential, the remaining two shaping the neuronal response in the temporal scale, $\tau_{m}$ and $\tau_{r}$, cannot be rescaled or captured by $\mu$ and $\sigma$. Their distributions among neurons have to be accounted for separately. Therefore, in the high input regime when the approximation of a fluctuating input noise is valid, the seven independent neuronal parameters (and their distributions) can be reduced to four: $\mu$, $\sigma$, $\tau_{m}$ and $\tau_{r}$. Based on this analysis, we suggest using distributed values of these four parameters together with an identical noise $\eta(t)$ to account for the effects of all the parameters in a population of LIF neurons receiving identical inputs. This is in contrast to the common practice of using independent noises as shown in Fig.\,\ref{f2}(a). We draw the parameters from uniform distributions
$\mu \in [0.015,0.105]$, $\sigma \in [0.1,0.3]$, $\tau_{m} \in [16,24]\,\mathrm{ms}$ and $\tau_{r} \in [1.5,2.5]\,\mathrm{ms}$.
In Fig.\,\ref{f2}(c), both the rate difference and the correlation coefficient, as well as their relation, are consistent with the experimental and our simulation results.
The respective contributions of the four parameters are investigated. Fig.\,\ref{f3} shows the correlation as functions of the rate difference for $\mu$, $\sigma$, $\tau_{m}$ and $\tau_{r}$ separately. Each realization is drawn from a uniform distribution of $10\,\%$, $20\,\%$ and $50\,\%$ around their mean values, which are $0.06$, $0.2$, $20\,\mathrm{ms}$ and $2\,\mathrm{ms}$, respectively. The firing rate of a neuron is largely shaped by $\mu$, whereas the distributed values of the variance give rise to different degrees of imprecise spiking. The wider the distribution of $\tau_{m}$ and $\tau_{r}$, the larger is the rate difference and the lower the correlation. When $\tau_{r}\,\ll\,1/\nu$, the effect of $\tau_{r}$ is small.
\begin{figure}
\includegraphics[scale=0.2]{fig3.ps}%
\caption{(Color online). Correlation coefficient as a function of rate difference when neurons receive identical noise with uniformly distributed (a) $\mu$, (b) $\sigma$, (c) $\tau_{m}$ and (d) $\tau_{r}$ separately. \label{f3}}
\end{figure}
\section{Rate difference}
The firing rate of a LIF neuron receiving Gaussian-distributed noise is known analytically \cite{Brunel1999,Siegert1951}:
\begin{eqnarray}
\frac{1}{\nu} &=& \tau_{r} + \tau_{m}\sqrt{\pi}\int_{\frac{V_{r}-\mu\tau_{m}}{\sigma\sqrt{\tau_{m}}}}^{\frac{\theta-\mu\tau_{m}}{\sigma\sqrt{\tau_{m}}}}due^{u^{2}}\big(1+\text{erf}(u)\big), \text{when } \sigma > 0; \nonumber\\
\frac{1}{\nu} &=& \tau_{r} - \tau_{m}\text{ln}(1-\frac{\theta}{\mu\tau_{m}}), \text{when } \sigma = 0.
\end{eqnarray}
Only six parameters $\mu$, $\sigma$, $\tau_m$, $\tau_r$, $\theta$ and $V_{r}$ influence the firing rates, of which only the first four are independent. Changing any of them can result in a rate difference as shown in Fig.\,\ref{f4}, and this explains the wide distribution of firing rates in a heterogeneous neuronal population.
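As a sanity check, the rate formula can be evaluated numerically. A minimal sketch using only the standard library (simple trapezoidal quadrature; directly evaluating $e^{u^{2}}(1+\mathrm{erf}(u))$ is numerically safe for the moderate integration bounds arising from the parameter values used here):

```python
import math

def siegert_rate(mu, sigma, tau_m=20.0, tau_r=2.0, theta=1.0, v_r=0.0, n=2000):
    """Firing rate (spikes/ms) of the LIF neuron from the formula above,
    with a trapezoidal rule for the integral."""
    if sigma == 0.0:
        if mu * tau_m <= theta:
            return 0.0            # drift alone never reaches threshold
        return 1.0 / (tau_r - tau_m * math.log(1.0 - theta / (mu * tau_m)))
    lo = (v_r - mu * tau_m) / (sigma * math.sqrt(tau_m))
    hi = (theta - mu * tau_m) / (sigma * math.sqrt(tau_m))
    f = lambda u: math.exp(u * u) * (1.0 + math.erf(u))
    h = (hi - lo) / n
    integral = h * (0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n)))
    return 1.0 / (tau_r + tau_m * math.sqrt(math.pi) * integral)
```

For $\mu = 0.06$ and $\sigma = 0.2$ this yields roughly $0.04$ spikes/ms, and the rate grows monotonically with $\mu$, as in Fig.\,\ref{f4}(a).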
\begin{figure}
\includegraphics[scale=0.15]{fig4.ps}%
\caption{(Color online). Output rate as a function of (a) $\mu$, (b) $\sigma$ and (c) $\tau_{m}$ from theory (black) and simulation (colored).
\label{f4}}
\end{figure}
\section{Imprecise spiking}
The raster plot in Fig.\,\ref{f2}(c) shows population synchrony with spike-time jitter. Low average correlation coefficients in a population of neurons do not necessarily imply an asynchronous state of the population. Removing spikes from one of the two identical spike trains can also reduce the correlation coefficient significantly. When we compute the correlation coefficient in our data with a larger bin size, larger values for the correlation coefficient are obtained, as shown in Fig.\,\ref{f5}(a). This is because some spikes become ``coincident'' only for larger bin sizes. The output spike trains behave like jittered spikes as discussed in earlier theoretical studies \cite{Tetzlaff2008}. Whether the decorrelation due to neuronal heterogeneity is significant depends critically on the bin size, or the integration window of the neurons receiving such inputs.
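The bin-size dependence is easy to reproduce with synthetic data. A minimal sketch (a Poisson-like train compared against a jittered copy of itself, not the simulated LIF trains themselves):

```python
import numpy as np

def binned_corrcoef(spikes_a, spikes_b, bin_ms, T):
    """Pearson correlation coefficient of two spike trains binned at bin_ms."""
    edges = np.arange(0.0, T + bin_ms, bin_ms)
    ca, _ = np.histogram(spikes_a, edges)
    cb, _ = np.histogram(spikes_b, edges)
    return np.corrcoef(ca, cb)[0, 1]

rng = np.random.default_rng(1)
T = 100_000.0                                      # ms
base = np.sort(rng.uniform(0.0, T, 2000))          # ~20 Hz train
jittered = base + rng.normal(0.0, 2.0, base.size)  # 2 ms spike-time jitter
r_1ms = binned_corrcoef(base, jittered, 1.0, T)
r_10ms = binned_corrcoef(base, jittered, 10.0, T)  # r_10ms >> r_1ms
```

Identical spikes jittered by a couple of milliseconds appear only weakly correlated at a $1\,\mathrm{ms}$ resolution but strongly correlated at $10\,\mathrm{ms}$, mirroring Fig.\,\ref{f5}(a).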
In view of a significant number of spikes jittering outside a $1\,\mathrm{ms}$ bin, we look into the cross-correlation function
\begin{eqnarray}
r_{xy}(\tau) &=& \frac{c_{xy}(\tau)}{\sigma_{x}\sigma_{y}} = \frac{\langle x(t)y(t+\tau)\rangle - \langle x(t)\rangle\langle y(t)\rangle}{\sigma_{x}\sigma_{y}},
\end{eqnarray}
where $x(t)$ and $y(t)$ denote two output spike trains in discrete time, consisting of 0 and 1 with bin size of $0.1\,\mathrm{ms}$. $x(t)$ is assigned to be the spike train with lower spike count. $c_{xy}(\tau)$ is the covariance function and $\sigma_{x}$ and $\sigma_{y}$ denote the standard deviation of the two spike trains, considered as discrete signals. Fig.\,\ref{f5}(b) shows that the mean $r_{xy}(\tau)$ over all pairs is positive in a small neighborhood of $\tau$, indicating a higher-than-chance probability of observing coincident spikes. Spikes are jittered rather than asynchronous. In addition, $r_{xy}(\tau)$ is asymmetric, and its area is skewed towards negative $\tau$, indicating that the higher-firing-rate neuron is more likely to lead in terms of spiking \cite{Ostojic2009,Tchumatchenko2010}.
We further look at the normalized total cross-covariance $\kappa$ \cite{Bair2001,Tetzlaff2008}
\begin{eqnarray}
\kappa &=& \frac{\int_{-\infty}^{\infty}d\tau c_{xy}(\tau)}{\sqrt{\int_{-\infty}^{\infty}d\tau c_{xx}(\tau)\int_{-\infty}^{\infty}d\tau' c_{yy}(\tau')}},
\end{eqnarray}
which is an overall measure for the fraction of the spikes that are correlated above chance level. Fig.\,\ref{f5}(c) shows that $\kappa$ is quite close to unity, and has a weak dependence on the rate difference. This indicates that spikes would not be decorrelated when a larger time window is considered.
\begin{figure}
\includegraphics[scale=0.15]{fig5.ps}%
\caption{(Color online).
(a) Correlation coefficient as a function of window size (colored lines indicate five examples; the black thick line indicates the average over all pairs).
(b) Cross-correlation. Positive correlation at negative $\tau$ indicates that the higher-rate neuron is leading.
(c) $\kappa$, which takes jittered spikes into account, as a function of rate difference.
\label{f5}}
\end{figure}
\section{Discussion}
We remark that the plots of the correlation coefficient as a function of rate difference from our simulations (such as Figs.\,\ref{f1}(b) and \ref{f2}(c)) can fit the experimental results in \cite{Padmanabhan2010} more satisfactorily by introducing a random delay of up to $1\,\mathrm{ms}$ to every incoming input spike. The spike correlation distribution then becomes broader, and the regime of low correlated output at small rate differences can also be reached (data not shown). This effect could be due to some (unknown) biological mechanism not captured in a simple LIF neuron model.
We emphasize that common input into heterogeneous neurons is better realized by a shared noise with distributed mean and variance and, more completely, with additionally distributed values of the membrane time constant and the refractory period. This insight is based on both the mathematical analysis presented here and the \textit{in vitro} experimental findings \cite{Padmanabhan2010}. As far as firing rates and spike correlations are concerned, the distributions of mean and variance of the input account for most of the experimental observations. Diversity of $\tau_{m}$ and $\tau_{r}$ has a smaller effect on the quantities in question, nevertheless, including them can account for the full degree of heterogeneity.
In the raster plot of a neuronal population with similar rate differences and spike correlations, synchrony is obvious, although spike times are not precise. Spikes, if present, are jittered in the millisecond range, which cannot be captured by the $1\,\mathrm{ms}$ temporal window used for analysis. This is why using larger bins leads to larger values for spike correlation. We emphasize that neuronal heterogeneity alone does impose an appreciable decorrelation effect on the population activity. However, whether decorrelation is functionally significant depends on the readout of the downstream neurons. On top of that, a network of heterogeneous neurons may give rise to richer network dynamics. It remains to be explored whether such intrinsic heterogeneity can facilitate other decorrelation mechanisms to increase the amount of information flow \cite{Renart2010,Wiechert2010,Padmanabhan2010}. Given the significant reduction in spike correlation among heterogeneous neurons, research concerning correlation transmission must take neuronal heterogeneities into consideration.
\begin{acknowledgments}
We thank Volker Pernice, Moritz Deger, and Tom Tetzlaff for discussions. The present work was supported by the German Federal Ministry of Education and Research (BMBF Grant No. 01GQ0420 to ``BCCN Freiburg'' and BMBF Grant No. 01GW0730 ``Impulse Control'') and the EU (INTERREG-V Grant to Neurex: TriNeuron).
\end{acknowledgments}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 7,978
|
Payday Loans & Personal Loans in Forest Hill, LA.
The population of Forest Hill, LA was 803 in 2019. More than 15% of employed adults in Louisiana apply at least once a year for a Payday Loan of $100 to $1,000. And as the statistics show, more than 60 people get approved for a small-dollar loan, even with a bad FICO credit score.
Short-term Payday Loans up to $1,000 and long-term Personal Loans up to $15,000 in Forest Hill, Louisiana.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 1,862
|
Can't bear the thought of only buying one thing?
Whether you're shopping for yourself or for a gift, here's where you'll find all the best bargains on our custom package deals. If you'd like us to let you know when the site is updated, sign up for our Newsletter!
Get a free packet of Salad Sprout Blend with your order!
Fresh Sprouts Right From Your Kitchen Counter!
The plastic used in iPlant™ is environmentally safe, non-toxic, imparts no taste, is abrasion resistant, and easy to clean.
An automatic water-sprinkler consistently provides sufficient water to grow the sprouts.
Our special ventilation design provides fresh air and oxygen to promote optimum sprout growth.
The temperature controlled, automated heating system efficiently and automatically adjusts the temperature underneath the growing beds—producing great harvests in any season.
The translucency of the plant tank and the cover's double layer insulating design provide optimum sprouting conditions for vegetables attracted to sunlight—causing them to sprout quicker and promote rapid growth.
Designed to provide a quiet and stable growing environment.
The Eco-Friendly energy efficient design consumes very little power—during both heating and non-heating cycles.
On average, iPlant™ uses under 0.5 kWh of power over 24 hours—which usually costs just a few cents.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 9,607
|
.class Landroid/support/v4/graphics/BitmapCompatHoneycombMr1;
.super Ljava/lang/Object;
.source "BitmapCompatHoneycombMr1.java"
# direct methods
.method constructor <init>()V
.locals 0
.prologue
.line 23
invoke-direct {p0}, Ljava/lang/Object;-><init>()V
return-void
.end method
.method static getAllocationByteCount(Landroid/graphics/Bitmap;)I
.locals 1
.param p0, "bitmap" # Landroid/graphics/Bitmap;
.prologue
.line 26
invoke-virtual {p0}, Landroid/graphics/Bitmap;->getByteCount()I
move-result v0
return v0
.end method
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,707
|
Q: Can I access Intent.extras from outside of the OnCreate method? I'm building an Android app using the Model-View-Presenter design pattern. My app receives a Firebase messaging notification which, when my app is not running, passes its data into the app (from the notification bar) using an Intent. I have all this working and can see the intent data with this code in the onCreate method:
intent.extras?.let {
for (key in it.keySet()) {
val value = intent.extras?.get(key)
Log.d(TAG, "Key: $key Value: $value")
}
}
I want to save this information to my presenter however and the presenter hasn't been created yet. Can I access intents.extras after onCreate? I tried and got an null pointer exception, so it appears the Intent object gets destroyed after onCreate. Is this the case? How can my presenter get access to this data? My only idea at this point is to create a member variable in the Activity and then save this data to that member variable in onCreate, and then later have the presenter fetch it, but then I'm sort of working against the Model-View-Presenter design pattern of having a dumb view.
A:
Can I access intents.extras after onCreate?
Yes.
I tried and got an null pointer exception, so it appears the Intent object gets destroyed after onCreate. Is this the case?
No.
However, one risk with Kotlin is if you have a property or variable named intent — intent.extras might refer to that property or variable, rather than calling getIntent() on the activity itself.
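For illustration, here is a minimal sketch of reading the extras lazily, outside onCreate, once the presenter exists. The presenter class and its method names are hypothetical; this is framework-bound Android code, not a runnable standalone program.

```kotlin
class MainActivity : AppCompatActivity() {

    private lateinit var presenter: MainPresenter   // hypothetical presenter class

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        presenter = MainPresenter()
        // No need to copy the extras here: getIntent() keeps returning
        // the Intent that launched this Activity.
    }

    override fun onStart() {
        super.onStart()
        // Reading the extras well after onCreate() is fine. Note that
        // `intent` resolves to Activity.getIntent() -- avoid declaring a
        // property or local named `intent` that would shadow it.
        val data = intent.extras?.let { extras ->
            extras.keySet().associateWith { key -> extras.get(key) }
        } ?: emptyMap()
        presenter.onNotificationData(data)          // hypothetical method
    }
}
```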
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
\section{Introduction}
As machine learning moves into sensitive prediction tasks, it becomes critical to ensure the fair performance of prediction models. Naively trained machine learning systems can replicate biases present in their training data, resulting in unfair outcomes that accentuate societal inequities. For example, machine learning systems have been found to be unfair in predicting time to criminal recidivism \citep{dieterich2016compas}, ranking applications to nursing school \citep{romano2020achieving}, and recognizing faces \citep{buolamwini2018gender}. Most prior work in this area has focused on ensuring fairness for binary outcomes. However, there are many important real-world applications with multiclass outcomes instead. For example, a self-driving car will need to be able to distinguish clearly between humans, non-human animals (such as dogs), and non-sentient objects while nonetheless maintaining fair performance for both wheelchair users and non-wheelchair users.
Most work has also been done with the assumption that model parameters are accessible to the algorithm, but there is increasing availability of powerful blackbox models whose internal parameters can be either inaccessible or too costly to train. In this paper, we address the case where outcomes are multiclass and the user has received a pre-trained blackbox model. The main contributions of our work are as follows:
\begin{itemize}
\item We show how to extend \citet{hardt2016equality} to multiclass outcomes.
\item We demonstrate in what data regimes multiclass postprocessing is likely to produce fair, useful, and accurate results via a set of rigorous synthetic experiments.
\item We demonstrate the results of our post-processing algorithm on publicly available real-world applications.
\end{itemize}
\paragraph{Code and Dataset Availability} All of the code used to produce our experimental results as well as the synthetic and real-world datasets can be found on our github page\footnote{https://github.com/scotthlee/fairness/tree/aaai}.
\section{Technical Approach}
As in \citet{hardt2016equality}, we consider the problem of enforcing fairness on a blackbox classifier without changing its internal parameters. This means that our approach only has access to the predicted labels $\hat{y_i}$ from the blackbox classifier, the true labels $y_i$, and the protected attributes $a_i$ for $i\in\{1, ..., N\}$, where $N$ is the number of individuals. The goal of our approach is to produce a new set of fair 'adjusted' predictions $y^{\text{adj}}_i$ that satisfy a desired fairness criterion. For each of $\hat{y_i}$, $y_i$, and $a_i$, we define corresponding random variables $\hat{Y}$, $Y$, and $A$. Then, following \citet{hardt2016equality}, we define the random variable for the adjusted predictions $Y^{\text{adj}}$ to be a randomized function of $\hat{Y}$ and $A$.
We extend the approach in \citet{hardt2016equality} by allowing multiclass outcomes, such that the sample spaces of $\hat{Y}$, $Y$, and $Y^{\text{\text{adj}}}$ are a collection of discrete and mutually exclusive outcomes $\mathcal{C} = \{1, 2, .... , |C|\}$. We in principle allow the sample space of the protected group $A$, $\mathscr{A}$, to contain any number of discrete values as well: $\mathscr{A} = \{1, 2, ..., |\mathscr{A}|\}$.
\paragraph{Linear Program} Our approach involves the construction of a linear program over the conditional probabilities of the adjusted predictor $Pr(Y^{\text{\text{adj}}}=y^{\text{\text{adj}}}|\hat{Y}=\hat{y}, A=a)$ such that a desired fairness criterion is satisfied by those probabilities. In order to construct the linear program, both the loss and fairness criteria must be linear in terms of the protected attribute conditional probability matrices ${\mathbf{P^{a}} =Pr(Y^{\text{adj}}|\hat{Y}, A=a)}$ which have dimensions $|C|\times |C|$.
\paragraph{Types of Objective Functions} We consider objective functions which are linear in the group conditional adjusted probabilities $\mathbf{P^{a}}$. More specifically we consider minimizing expected losses of the form:
\begin{align*}
&E[l(y^{\text{adj}}, y, a)] = \\ &\sum_{a\in\mathscr{A}}\sum_{i=1}^{|C|} \sum_{j\neq i} Pr(Y^{\text{adj}}=i, Y=j, A=a)l(i, j, a) \\
&= \sum_{a\in\mathscr{A}}\sum_{i=1}^{|C|} \sum_{j\neq i} W^{a}_{ij}\;\;Pr(A=a, Y=j)\;l(i, j, a)
\end{align*}
where $W^a_{ij}=Pr(Y^{\text{adj}}=i|Y=j, A=a)$ are the protected attribute conditional confusion matrices.
Under the independence assumption $Y^{\text{adj}} \perp Y | A, \hat{Y}$, we can write $\mathbf{W^a}=\mathbf{P^a} \mathbf{Z^a}$ where $\mathbf{Z^a} = Pr(\hat{Y}|Y, A=a)$, the class conditional confusion matrices of the original blackbox classifier's predictions. The matrices $\mathbf{Z^a}$ are estimated empirically from the training data ($y_i$ and $a_i$) and the blackbox predictions of the model ($\hat{y_i}$). Therefore, this formulation of the objective function remains linear in the protected attribute conditional probability matrices $\mathbf{P^a}$, as is necessary for the linear program. This definition is similar to \citet{hardt2016equality}, except that we let the loss $l(i, j, a)$ also be a function of the protected attributes instead of just the true and adjusted labels, which allows controlling the strictness of penalties for errors made for specific protected groups and classes. The most straightforward version of this loss is letting $l(y^{\text{adj}}, y, a)$ be the zero-one loss (ignoring the protected attributes), which results in minimizing the sum of the joint probabilities of mismatch between $Y^{\text{adj}}$ and $Y$. We refer to this approach as the \textit{unweighted} loss. Another approach is to set $l(y^{\text{adj}}, y, a)$ equal to one over the joint probability of the true label and protected attribute, $1/Pr(Y=y, A=a)$ (estimated empirically), which we refer to as the \textit{weighted} loss. Intuitively, this option reweights the loss to give rarer protected group and label combinations equal importance in the optimization, which could improve fairness when minority protected groups with very low membership exist in the dataset. This option for the objective function can be equivalently minimized by maximizing the diagonals (true detection rates) of the group conditional confusion matrices $\mathbf{W^a}$.
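As a concrete toy illustration of these quantities--with made-up confusion counts rather than any real dataset--the following sketch estimates $\mathbf{Z^a}$ for one group, forms $\mathbf{W^a} = \mathbf{P^a}\mathbf{Z^a}$, and evaluates both versions of the loss:

```python
import numpy as np

# Hypothetical quantities for |C| = 3 classes and a single group a:
#   Za[j, k]  = Pr(Yhat = j | Y = k, A = a), estimated by column-normalizing
#               the empirical confusion counts of the blackbox predictor
#   Pa[i, j]  = Pr(Y^adj = i | Yhat = j, A = a), the adjustment to be learned
#   p_yk[k]   = Pr(Y = k, A = a) restricted to this group (here Pr(A=a) = 1)
counts = np.array([[40.0, 5.0, 5.0],
                   [5.0, 40.0, 5.0],
                   [5.0, 5.0, 40.0]])          # rows: Yhat, columns: Y
Za = counts / counts.sum(axis=0, keepdims=True)
Pa = np.eye(3)                                 # identity = "no adjustment"
p_yk = counts.sum(axis=0) / counts.sum()

Wa = Pa @ Za    # Wa[i, k] = Pr(Y^adj = i | Y = k, A = a)

# Unweighted expected zero-one loss: off-diagonal joint error mass.
unweighted = sum(Wa[i, k] * p_yk[k]
                 for i in range(3) for k in range(3) if i != k)

# Weighted loss with l = 1 / Pr(Y = k, A = a): the joint probabilities cancel,
# leaving the summed per-class error rates, i.e. |C| minus the trace of Wa.
weighted = sum(Wa[i, k] for i in range(3) for k in range(3) if i != k)
```

With the identity adjustment, $\mathbf{W^a}$ simply reproduces $\mathbf{Z^a}$; the linear program searches over non-trivial $\mathbf{P^a}$ instead.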
\paragraph{Types of Fairness} We consider several versions of multiclass fairness criteria, all of which can be written as a collection of $|\mathscr{A}| - 1$ pairwise equalities setting a fairness criterion of interest equal across all groups. Moreover, each of the terms in these equalities can be written as some $|C|\times|C|$ matrix $M^a$ times the adjusted probability matrix $\mathbf{P^a}$, and therefore are linear in the adjusted probabilities as needed for the linear program (see appendix A for the exact form $M^a$ takes for the different fairness criteria).
The first definition involves requiring strictly equal performance across protected groups.
\begin{definition}[Term-by-Term Multiclass Equality of Odds]
A multiclass predictor satisfies term-by-term equality of odds if the protected group conditional confusion matrices $\mathbf{W^a}$ are equal across all protected groups:
\begin{equation}
\label{def.strict}
\mathbf{W^{1}}= \mathbf{W^{2}}=\dots=\mathbf{W^{|\mathscr{A}|}}
\end{equation}
where $\mathbf{W^{a}}=Pr(Y^{\text{adj}}|Y, A=a)$.
\end{definition}
This is a straightforward extension to the multiclass case of the equality of odds criterion defined in \citet{hardt2016equality}. Notice that since this definition requires equality of each off-diagonal term of $\mathbf{W^a}$ across all groups, it enforces not only that errors are made at the same overall rate across groups, but also that the rates of specific types of errors are equal.
For some practical applications, term-by-term equality of odds is important, such as predicting criminal recidivism times binned into three years, two years, one year, and "never recommits". In this case, making the error of predicting 3 years until recidivism when the actual time is 1 year is much worse than predicting 3 years when the actual time is 2. Therefore, it is critical for fairness in this application that the rates of specific types of errors are strictly equal across groups.
Instead of requiring strict equality of off-diagonal terms of $\mathbf{W^a}$ we can instead enforce equality across the classwise overall false detection rates $FDR$, which leads to the next fairness definition:
\begin{definition}[Classwise Multiclass Equality of Odds]
A multiclass predictor satisfies classwise multiclass equality of odds if the diagonals of the protected group conditional confusion matrices and the protected attribute conditional vector of false detection rates are equal across all protected groups:
\begin{equation*}
\label{def.relaxed}
\begin{array}{l}
diag(\mathbf{W^{1}})= diag(\mathbf{W^{2}})=\dots=diag(\mathbf{W^{|\mathscr{A}|}})\\
\mathbf{FDR^{1}} = \mathbf{FDR^{2}} = \dots = \mathbf{FDR^{|\mathscr{A}|}}\numberthis
\end{array}
\end{equation*}
where $\mathbf{FDR^{a}} = Pr(Y^{\text{adj}}|Y^{\text{adj}}\neq Y, A=a)$.
\end{definition}
This version of fairness can 'trade' better performance for a specific protected group on one off-diagonal term in $\mathbf{W^a}$ (i.e., lower error probability for that term) for poorer performance of the same group on a different off-diagonal term (i.e., higher error probability for another term). Each class label individually has its true detection rate and overall false detection rate set equal across groups. Thus, this type of fairness is 'classwise'.
For some problems it is sufficient to maintain fair true detection rates across classes and allow false detection rates to differ across groups. This is even less restrictive than Definition \ref{def.relaxed}. It may be desirable when, for example, deciding whether an admitted college applicant should be placed into an honors program, accepted with a scholarship, or regularly accepted. Since all the outcomes are positive, unfairness across false detection rates may not be critical, as long as the true detection rates are fair across groups. This motivates the following fairness criterion:
\begin{definition}[Multiclass Equality of Opportunity]
A multiclass predictor satisfies equality of opportunity if the diagonals of the protected group conditional confusion matrices $\mathbf{W^a}$ are equal across all groups:
\begin{equation}
diag(\mathbf{W^{1}})= diag(\mathbf{W^{2}})=\dots=diag(\mathbf{W^{|\mathscr{A}|}})
\end{equation}
where $\mathbf{W^{a}}=Pr(Y^{\text{adj}}|Y, A=a)$.
\end{definition}
A common and even more relaxed version of fairness called demographic parity only requires the rate of class predictions across different groups to be equal \citep{calders2009building}.
\begin{definition}[Multiclass Demographic Parity]
A multiclass predictor satisfies demographic parity if the protected group conditional class probabilities are equal across groups:
\begin{equation}
\begin{array}{l}
Pr(Y^{\text{adj}}|A=1)=\\
Pr(Y^{\text{adj}}|A=2) =\dots =Pr(Y^{\text{adj}}|A=|\mathscr{A}|)
\end{array}
\end{equation}
\end{definition}
Enforcing this version of fairness for certain datasets may produce effectively unfair outcomes \citep{dwork2012fairness}. However, in synthetically produced data, this definition has been shown to reduce the reputation of disadvantaged protected groups when repeatedly applied over a long period of time to sensitive decision-making tasks such as hiring \citep{hu2018short}.
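Each of the four criteria above reduces to an equality test on simple functionals of the group confusion matrices. A small illustrative checker (the $\mathbf{W^a}$ matrices below are hypothetical, not drawn from any dataset in this paper):

```python
import numpy as np

def fairness_checks(W, p_y, tol=1e-6):
    """Check the multiclass fairness definitions given group confusion
    matrices W[a][i, k] = Pr(Y^adj=i | Y=k, A=a) and group-conditional
    class priors p_y[a][k] = Pr(Y=k | A=a)."""
    def fdr(a):
        joint = W[a] * p_y[a]                  # Pr(Y^adj=i, Y=k | A=a)
        err = joint - np.diag(np.diag(joint))  # off-diagonal error mass
        return err.sum(axis=1) / err.sum()     # Pr(Y^adj | Y^adj != Y, A=a)
    def rates(a):
        return W[a] @ p_y[a]                   # Pr(Y^adj | A=a)
    groups = range(len(W))
    eq = lambda xs: all(np.allclose(xs[0], x, atol=tol) for x in xs[1:])
    return {
        "term_by_term": eq(W),
        "classwise": eq([np.diag(w) for w in W]) and eq([fdr(a) for a in groups]),
        "equal_opportunity": eq([np.diag(w) for w in W]),
        "demographic_parity": eq([rates(a) for a in groups]),
    }

# Two groups with equal diagonals but different error patterns satisfy
# equality of opportunity but none of the stricter criteria:
W0 = np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]])
W1 = np.array([[.8, .2, .0], [.1, .8, .2], [.1, .0, .8]])
p = np.ones(3) / 3
checks = fairness_checks([W0, W1], [p, p])
```

This also makes the nesting of the definitions explicit: term-by-term equality implies classwise equality, which in turn implies equality of opportunity.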
Note that while the learned adjusted probabilities after running the linear program, $\mathbf{P^a}$ are guaranteed to be fair, taking the max value over the learned probabilities when predicting on an individual level will not maintain fairness. In fact, it can occur that taking the max over the adjusted probabilities will just result in identical predictions as those made by the original blackbox classifier. Instead, when predicting the class of an individual, the corresponding learned adjusted probabilities must be sampled from in order to maintain the fairness guarantee.
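To make the full pipeline concrete, here is a minimal end-to-end sketch--our own illustration with made-up empirical estimates, using the demographic-parity constraint for brevity (the other criteria swap in different equality rows). It solves the linear program for $\mathbf{P^a}$ with an off-the-shelf solver and then samples individual predictions from the learned columns rather than taking the argmax:

```python
import numpy as np
from scipy.optimize import linprog

C, G = 3, 2  # hypothetical: 3 classes, 2 protected groups

# Hypothetical empirical estimates from (y_i, yhat_i, a_i):
#   Z[a][j, k] = Pr(Yhat=j | Y=k, A=a)  (columns sum to 1)
#   p_ya[a][k] = Pr(Y=k, A=a)
Z = [np.array([[.8, .1, .1], [.1, .8, .1], [.1, .1, .8]]),
     np.array([[.6, .2, .3], [.2, .6, .2], [.2, .2, .5]])]
p_ya = [np.array([.2, .2, .1]), np.array([.2, .2, .1])]
q = [Z[a] @ (p_ya[a] / p_ya[a].sum()) for a in range(G)]  # Pr(Yhat | A=a)

n = C * C
idx = lambda a, i, j: a * n + i * C + j  # position of P^a[i, j] in the LP vector

# Objective: maximize sum_a sum_k Pr(Y=k, A=a) * (P^a Z^a)[k, k],
# i.e. minimize the unweighted expected zero-one loss.
c = np.zeros(G * n)
for a in range(G):
    for k in range(C):
        for j in range(C):
            c[idx(a, k, j)] -= p_ya[a][k] * Z[a][j, k]  # negated: linprog minimizes

A_eq, b_eq = [], []
# Each column of P^a = Pr(Y^adj | Yhat=j, A=a) is a probability distribution.
for a in range(G):
    for j in range(C):
        row = np.zeros(G * n)
        for i in range(C):
            row[idx(a, i, j)] = 1.0
        A_eq.append(row); b_eq.append(1.0)
# Demographic parity: (P^0 q^0)_i = (P^1 q^1)_i for every class i.
for i in range(C):
    row = np.zeros(G * n)
    for j in range(C):
        row[idx(0, i, j)] = q[0][j]
        row[idx(1, i, j)] = -q[1][j]
    A_eq.append(row); b_eq.append(0.0)

res = linprog(c, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
              bounds=[(0, 1)] * (G * n))
P = np.clip(res.x.reshape(G, C, C), 0, 1)  # learned adjusted probabilities P^a

# Individual-level prediction must SAMPLE from the learned column, not argmax:
rng = np.random.default_rng(0)
def adjusted_predict(y_hat, a):
    col = P[a][:, y_hat]
    return rng.choice(C, p=col / col.sum())
```

Because `adjusted_predict` samples rather than maximizes, the parity guarantee holds in expectation over the randomization, matching the point above.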
\section{Related Work}
Most prior work on post-processing based fairness approaches focuses on binary prediction tasks. \citet{wei2019optimized} create a post-processing algorithm that modifies the raw scores of a binary classifier (instead of thresholded hard predictions) in order to achieve desired fairness constraints expressed as linear combinations of the per-group expected raw scores. \citet{ye2020unbiased} develop a general in-processing fairness framework which alternates between selecting a subset of the training data and fitting a classifier to that data.
Several adversarial approaches to multiclass fairness have been investigated recently, although these are not blackbox post-processing algorithms. \citet{zhang2018mitigating} first present the idea of adversarial debiasing, while \citet{romano2020achieving} present a multiclass approach for in-process training based on adversarial learning, with the discriminator distinguishing between the distribution of the model's current predictions, the true label, and artificial protected attributes resampled to be fair, and the true distribution of the predictions, true labels, and true protected attributes.
Multiclass blackbox post-processing techniques are less studied, although a few new approaches have appeared recently. Notably, \citet{denis2021fairness} derive an optimally fair classifier from a pre-trained model and show several nice theoretical guarantees, including the asymptotic fairness of their proposed plug-in estimator. We see three key differences between their approach and the extension to \citet{hardt2016equality} that we propose: they only consider binary protected attributes ($|\mathscr{A}|=2$), while we allow categorical protected attributes ($|\mathscr{A}| > 2$) that can, at least in principle, take on any number of unique values; their method requires fitting a new estimator to the test data, whereas ours only requires computing probabilities and solving a linear program, which is relatively efficient; and, perhaps most importantly, their approach is limited to the demographic parity fairness constraint, whereas our approach applies to any constraint that is linear in $\mathbf{P^a}$.
In broader terms, \citet{hossain2020designing} unify many of the published methods for learning fair classifiers by showing that equalized odds, equal opportunity, and other common measures of fairness in the binary setting are subsumed by their proposed generalizations of the economic notions of envy-freeness and equitability. They show that these generalizations of fairness apply to the multiclass setting, but post-processing techniques are incapable of achieving them. We show here that this notion is not entirely correct, at least in a narrow sense, and that fairness can be achieved with post-processing techniques in the multiclass setting, so long as the joint distribution $P(Y, \hat{Y}, A)$ is either fully known or can be reasonably approximated by a large-enough sample of training data.
\begin{table*}
\footnotesize
\centering
\footnotesize
\begin{tabular}{cccc}
\multicolumn{4}{c}{\bf Experiments with $\mathbf{|\mathscr{A}|=3}$}\\
\toprule \textbf{Hyperparameter} & \textbf{Level} & \textbf{Change in Acc (CI)} & \textbf{Change in $\mathbf{TDR}$ (CI)}
\\
\midrule
Intercept & -- & -0.13 (-0.17, -0.09) & -0.18 (-0.21, -0.15)
\\
\midrule
Loss & Unweighted & -- & --
\\ & Weighted & -0.11 (-0.13, -0.09) & 0.12 (0.10, 0.13)
\\
\midrule
Goal & Equalized Odds & -- & --
\\ & Demographic Parity & 0.24 (0.22, 0.27) & 0.21 (0.18, 0.23)
\\ & Equal Opportunity & 0.08 (0.05, 0.11) & 0.03 (0.01, 0.05)
\\ & Term-by-Term & 0.08 (0.05, 0.11) & 0.02 (-0.01, 0.04)
\\
\midrule
Group Balance & No Minority & -- & --
\\ & One Slight Minority & -0.03 (-0.06, 0.00) & -0.02 (-0.04, 0.01)
\\ & One Strong Minority & -0.04 (-0.07, -0.00) & -0.01 (-0.03, 0.02)
\\ & Two Slight Minorities & -0.05 (-0.08, -0.02) & -0.02 (-0.04, 0.01)
\\ & Two Strong Minorities & -0.07 (-0.11, -0.04) & -0.01 (-0.04, 0.01)
\\
\midrule
Class Balance & Balanced & -- & --
\\ & One Rare & 0.02 (-0.00, 0.04) & -0.04 (-0.06, -0.02)
\\ & Two Rare & 0.07 (0.04, 0.09) & -0.18 (-0.20, -0.17)
\\
\midrule
Pred Bias & Low One & -- & --
\\ & Low Two & 0.00 (-0.03, 0.04) & -0.00 (-0.03, 0.02)
\\ & Medium One & -0.06 (-0.09, -0.02) & -0.06 (-0.08, -0.03)
\\ & Medium Two & -0.04 (-0.07, -0.00) & -0.06 (-0.08, -0.03)
\\ & High One & -0.18 (-0.22, -0.15) & -0.16 (-0.19, -0.14)
\\ & High Two & -0.15 (-0.19, -0.12) & -0.13 (-0.16, -0.11)
\\
\midrule
\end{tabular}
\caption{Predicted change and 95\% confidence intervals for accuracy and mean $TDR$ as a function of the experimental hyperparameters in our synthetic datasets with a three-level protected attribute. All datasets had a 3-class outcome.}
\end{table*}
\section{Synthetic Data Experiments}
\paragraph{Synthetic Data}
To explore the effect of different data regimes and optimization goals on post-adjustment discrimination, we conducted thorough (though by no means exhaustive) synthetic experiments for a 3-class outcome. We constructed synthetic datasets with $N=1,000$ observations for each unique combination of the following data-generating hyperparameters:
\begin{itemize}
\item The number of unique values for the protected attribute, $|\mathscr{A}|$. We explored setting $|\mathscr{A}|=2$ or $|\mathscr{A}|=3$ (see results with $|\mathscr{A}|=2$ in our github repository).
\item The amount of class imbalance for the labels $Y$. For simplicity, we did not allow this to vary across protected groups.
\item Group balance, or the number and relative size of minority groups compared to majority groups. This varied according to the number of groups but was generally either none, weak, or strong.
\item Predictive bias as the difference in mean true detection rate, $TDR$, between the groups. We vary this from mild predictive bias (10 percent difference) to severe bias with the minority group $TDR$ being near chance. The predictive bias is set to always favor the majority group.
\end{itemize}
This process yielded 117 datasets. For each one, we ran the linear program to adjust the (synthetic) biased blackbox predictions 8 times, once for each unique combination of the objective function and type of fairness, yielding a total of 936 adjustments. After each adjustment, we recorded two broad measures of the fair predictor's performance:
\begin{itemize}
\item Triviality, or whether any of the columns in $\mathbf{W^a}=Pr(Y^{\text{adj}}|Y, A=a)$ contained all zeroes (i.e., whether any levels of the outcome were no longer predicted).
\item Discrimination, or the percent change in loss for the adjusted predictor relative to that of the original predictor. For this measure, we examined two specific metrics: global accuracy and the mean of the group-wise $TDRs$. These are equivalent to 1 minus the post-adjustment loss under the two versions of the objective functions we present above.
\end{itemize}
To quantify the average effect of each hyperparameter on discrimination, we fit two multivariable linear regression models to the resulting dataset, one for each discrimination metric. Before fitting the models, we converted the categorical hyperparameters (so all but loss) to one-hot variables, and then we set a reference level for each, removing the corresponding column from the design matrix. We then fit the models separately using ordinary least squares (OLS) and calculated confidence intervals (CIs) for the resulting coefficients.
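As an illustration of this analysis step, a compact sketch of one-hot encoding with reference levels, OLS via least squares, and normal-approximation 95\% CIs (the outcome values and hyperparameter subset here are made up for brevity):

```python
import numpy as np

# Hypothetical experiment records: (fairness goal, loss type, discrimination).
levels = {"goal": ["eo", "dp", "opp", "tbt"], "loss": ["unw", "w"]}
rows = [("dp", "w", 0.12), ("eo", "unw", -0.05), ("tbt", "w", 0.02),
        ("opp", "unw", 0.01), ("dp", "unw", 0.20), ("eo", "w", -0.11),
        ("opp", "w", 0.05), ("tbt", "unw", 0.03)]

def design(rows):
    X, y = [], []
    for goal, loss, outcome in rows:
        x = [1.0]                                            # intercept
        x += [float(goal == g) for g in levels["goal"][1:]]  # reference = "eo"
        x += [float(loss == "w")]                            # reference = "unw"
        X.append(x); y.append(outcome)
    return np.array(X), np.array(y)

X, y = design(rows)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
dof = len(y) - X.shape[1]
sigma2 = resid @ resid / dof
se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
ci = np.stack([beta - 1.96 * se, beta + 1.96 * se], axis=1)  # rough 95% CIs
```

Each coefficient is then read as the average change in the discrimination metric relative to the reference level, as reported in Table 1.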
\paragraph{Results}
Table 1 shows coefficients and 95\% confidence intervals for the regression models with $|\mathscr{A}|=3$. The results highlight several important points:
\begin{itemize}
\item Predictive bias and class imbalance are the two main drivers of decreases in post-adjustment discrimination, for both accuracy, and $TDR$.
\item High group imbalance for the protected attributes lowers post-adjustment discrimination, but only from the perspective of global accuracy--even with 2 strong minorities (3-group scenario), mean $TDR$ only drops by 1.1\%.
\item Relative to the weighted objective, the unweighted objective leads to higher scores for global accuracy but lower scores for mean $TDR$. This is perhaps unsurprising, but it is worth noting nonetheless.
\item Despite finding better accuracy solutions, we also found that the unweighted objective leads to trivial solutions far more frequently (30\% of the time it was used) than the weighted version of the loss (0.2\% of the time it was used). This trend will likely worsen with increasing dimension of either the number of classes or the number of protected groups.
\item Fairness is generally harder to achieve with 3 protected groups than with 2, since the intercepts are lower for both accuracy and mean $TDR$. We believe this to be a general consequence of forcing fairness across more groups and expect this trend to continue as the number of groups increases.
\end{itemize}
\begin{figure*}[!h]
\centering
\includegraphics[scale=.5]{figures/brier_fds.png}
\caption{Fairness-discrimination plots for our postprocessing algorithm on our 4 real-world datasets, created by systematically relaxing the fairness equality constraints of the linear program. The plots show Brier score as a function of the maximum average difference between groups of the corresponding fairness criterion. Performance of the original, unadjusted predictor is marked by an X.
}
\label{fig:fairness_vs_discrimination}
\end{figure*}
\section{Experiments with Real-World Data}
\paragraph{Dataset Descriptions}
To further examine the performance characteristics of our algorithm, we ran it on several real-world datasets described below.
\begin{enumerate}
\item \textbf{Drug Usage} \cite{fehrman2017five}. This dataset has inherently multiclass outcomes, with the target being a 7-level categorical variable indicating recentness of use for a variety of drugs. We focus on predicting cannabis usage, where we collapsed the 7-level usage indicator into 3 broader categories: never used, used but not in the past year, and used in the past year. Predictors included demographic variables like age, gender, and level of education, as well as a variety of measures of personality traits hypothesized to affect usage habits.
\item \textbf{Obesity} \cite{palechor2019dataset}. This dataset has inherently multiclass outcomes, with the target being a 7-level categorical variable indicating weight category; the protected attribute is gender (Male/Female). Because some of the observations are synthetic in order to protect privacy, not all of the gender/weight categories had sufficient numbers for modeling, and so we omitted observations from the 2 most extreme weight categories, Obesity Type-II and Obesity Type-III, leaving a 5-level target for prediction. Predictors included age, gender, family medical history, and several measures of physical activity and behavioral health.
\item \textbf{LSAC Bar Passage} \cite{wightman1998lsac}. This dataset has inherently multiclass outcomes, with the target being a 3-level variable indicating bar exam passage status (passed first time, passed second time, or did not pass). The protected attribute is race, which we collapsed from its original 8 levels to 2 (white and non-white). Predictors included mostly measures of educational achievement, like undergraduate GPA, law school GPA, and LSAT score.
\item \textbf{Parkinson's Telemonitoring} \cite{tsanas2009accurate}. This dataset does not have inherently multiclass outcomes; the target for prediction is the Unified Parkinson's Disease Rating Scale (UPDRS), a continuous score that increases with the severity of impairment. We used Otsu's method to bin the score into 3 categories--low impairment, moderate impairment, and high impairment--which we took as the new class labels. The protected attribute is a 2-level variable for gender (Male/Female). Predictors included mostly biomedical measurements from the voice recordings of patients with Parkinson's Disease.
\end{enumerate}
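The multiclass targets for the Parkinson's dataset were obtained by thresholding the continuous UPDRS score. A minimal multi-level Otsu-style binning sketch (our own illustration on synthetic scores, not the authors' exact procedure) brute-forces the cut points that minimize total within-class variance:

```python
import numpy as np
from itertools import combinations

def otsu_thresholds(x, n_bins=3, grid=64):
    """Choose n_bins - 1 cut points, from an evenly spaced candidate grid,
    minimizing the total within-class variance of the 1-D scores in x."""
    cands = np.linspace(x.min(), x.max(), grid)[1:-1]
    best, best_cuts = np.inf, None
    for cuts in combinations(cands, n_bins - 1):
        labels = np.digitize(x, cuts)
        wcv = sum(x[labels == b].var() * np.mean(labels == b)
                  for b in range(n_bins) if np.any(labels == b))
        if wcv < best:
            best, best_cuts = wcv, np.array(cuts)
    return best_cuts

# Synthetic well-separated severity scores standing in for UPDRS values.
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(10, 1, 200),   # "low impairment"
                         rng.normal(25, 1, 200),   # "moderate"
                         rng.normal(40, 1, 200)])  # "high"
cuts = otsu_thresholds(scores)
bins = np.digitize(scores, cuts)  # 3-level class labels for prediction
```

Real implementations typically use histogram-based dynamic programming rather than this brute-force search, but the objective is the same.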
For each of these datasets, we obtained a potentially biased predictor $\hat{Y}$ by training a random forest on all available informative features (including the protected attribute) to predict the multiclass outcome, and then taking the categories corresponding to the row-wise maxima of the out-of-bag decision scores as the set of predicted labels. We then adjusted the predictions with the weighted objective and the term-by-term equality of odds fairness constraint, and recorded the relative changes in global accuracy and mean $TDR$ as the outcome measures of interest, as in our synthetic experiments.
\begin{table*}
\footnotesize
\centering
\hspace{.61 cm}
\begin{tabular}{c|cccc}
\multicolumn{5}{c}{\bf In-Sample Results}\\
\toprule
Dataset (N) & \# Terms& Old Acc $\shortrightarrow$ New Acc & Old $TDR\shortrightarrow$ New $TDR$& Pre $\shortrightarrow$ Post-Adj Disparity\\
&in $\mathbf{P^a}$&(\% change)&(\% change)&(\% change)\\
\midrule
Bar (N=22406)& 18&88 \% $\shortrightarrow$ 88\% (-1\%)& 36\% $\shortrightarrow$ 34\% (-7\%)&0.11 $\shortrightarrow$0.00 (-100\%)\\
Cannabis (N=1885)&18&74\% $\shortrightarrow$ 71\% (-4\%)& 67\% $\shortrightarrow$ 63\% (-6\%)&0.07 $\shortrightarrow$0.00 (-100\%)\\
Obesity (N=1490)&50&78\% $\shortrightarrow$ 73\% (-7\%)& 78\% $\shortrightarrow$ 73\% (-7\%)&0.05 $\shortrightarrow$ 0.00 (-100\%)\\
Parkinsons (N=5875)&18&93\% $\shortrightarrow$ 91\% (-2\%)& 92\% $\shortrightarrow$ 89\% (-3\%)&0.04 $\shortrightarrow$0.00(-100\%)\\
\end{tabular}
\newline
\vspace{.5 cm}
\newline
\begin{tabular}{c|cccc}
\multicolumn{5}{c}{\bf Out of Sample Results}\\
\toprule
Dataset (N) & \# Terms& Old Acc $\shortrightarrow$ New Acc & Old $TDR\shortrightarrow$ New $TDR$& Pre $\shortrightarrow$ Post-Adj Disparity\\
&in $\mathbf{P^a}$&(\% change)&(\% change)&(\% change)\\
\midrule
Bar (N=22406)& 18&88 \% $\shortrightarrow$ 83\% (-6\%)& 36\% $\shortrightarrow$ 33\% (-8\%)&0.11 $\shortrightarrow$0.01 (-95\%)\\
Cannabis (N=1885)&18&74\% $\shortrightarrow$ 61\% (-18\%)& 67\% $\shortrightarrow$ 52\% (-22\%)&0.07 $\shortrightarrow$0.16 (124\%)\\
Obesity (N=1490)&50&78\% $\shortrightarrow$ 41\% (-47\%)& 78\% $\shortrightarrow$ 42\% (-46\%)&0.05 $\shortrightarrow$ 0.07 (45\%)\\
Parkinsons (N=5875)&18&93\% $\shortrightarrow$ 82\% (-12\%)& 92\% $\shortrightarrow$ 78\% (-15\%)&0.04 $\shortrightarrow$0.05(33\%)\\
\end{tabular}
\caption{Results of applying the linear program to adjust the blackbox predictions and produce $y^{der}$ for four real-world datasets. The top table is without any splitting. Results shown in the bottom table are cross-validated across five 80/20 splits of each dataset. Accuracy and $TDR$ are shown before and after adjustment, with $TDR$ being the mean across all classes. Percent changes, shown in parentheses, are the relative percent drops in accuracy and mean $TDR$. Post-adjustment disparity is the element-wise mean difference across all groups of $\mathbf{W^a}$.}
\end{table*}
\paragraph{Exploring the Effect of Finite Sampling}
\citet{hardt2016equality} note that their method will not be affected by finite sample variability as long as the joint distribution $Pr(Y, \hat{Y}, A)$ is known, or at least well-approximated by a large sample. In practical applications, however, the sample at hand may not be large enough to approximate the joint distribution with precision. This problem is exacerbated when the number of observations $N$ is small relative to the number of probabilities learned by the algorithm, of which there are $|C|\times|C|\times|\mathscr{A}|$ in total. This difficulty is therefore more severe for our extension in this work, where $|C| > 2$.
In these cases, the adjusted predictor $Y^{\text{adj}}$ may have worse classification performance and higher disparity when applied to unseen, out-of-sample data.
As a preliminary exploration of this effect, we used 5-fold cross-validation to generate out-of-sample predictions for each of the observations in our real-world datasets. Keeping $Y$, $\hat{Y}$, and $A$ fixed, we solved the linear program on $80\%$ of the data and then used the adjusted probabilities $\mathbf{P^a}$ to obtain class predictions for the observations in the remaining $20\%$. As with the predictions obtained from solving the linear program on the full dataset, we measured the changes in accuracy and mean $TDR$ for the cross-validated predictions. Because fairness is not guaranteed when the joint distribution assumption is violated, we also measured post-adjustment fairness.
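The outcome measures used here can be computed directly from $(y, y^{\text{adj}}, a)$; a minimal sketch (function and variable names are our own):

```python
import numpy as np

def mean_tdr(y_true, y_pred, n_classes):
    """Mean over classes of the true detection rate Pr(Y_pred = c | Y = c)."""
    rates = [np.mean(y_pred[y_true == c] == c)
             for c in range(n_classes) if np.any(y_true == c)]
    return float(np.mean(rates))

def group_confusion(y_true, y_pred, a, group, n_classes):
    """W^a with W[i, j] = Pr(Y_pred = i | Y = j, A = group)."""
    W = np.zeros((n_classes, n_classes))
    for j in range(n_classes):
        sel = (a == group) & (y_true == j)
        if sel.any():
            for i in range(n_classes):
                W[i, j] = np.mean(y_pred[sel] == i)
    return W

def disparity(y_true, y_pred, a, n_classes):
    """Element-wise mean absolute difference of W^a across group pairs."""
    Ws = [group_confusion(y_true, y_pred, a, g, n_classes)
          for g in np.unique(a)]
    pairs = [np.abs(Ws[p] - Ws[q]).mean()
             for p in range(len(Ws)) for q in range(p + 1, len(Ws))]
    return float(np.mean(pairs))
```

A perfectly accurate predictor has a mean $TDR$ of 1 and zero disparity; out-of-sample, both metrics degrade as the empirical $\mathbf{W^a}$ drift from their in-sample estimates.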
\paragraph{Exploring the Fairness-Discrimination Tradeoff}
When there are large gaps in a predictor's performance across groups, i.e., when predictive bias is high, strict fairness may not always be possible or desirable to achieve because of the large amount of randomization required to balance the blackbox classifier's predictions. To explore the tradeoff between fairness and discrimination, we ran the linear program on each of the real-world datasets once for each of the four kinds of fairness. For each combination of dataset and fairness type, we varied the equality constraints of the linear program--the maximum percent difference allowed between any pairwise comparison of fairness measures between groups--from 0.0 to 1.0 in increments of 0.01, and then plotted the value of the weighted objective at each point as a function of the global measure of fairness corresponding to the fairness type under consideration. To obtain these global measures, we took the maximum of the mean differences across pairs of groups of the following metrics:
\begin{itemize}
\item $\mathbf{W}$, or the matrix of probabilities $P(Y^{\text{adj}}|Y)$, for term-by-term equality of odds
\item Youden's J index, or $TDR + (1-FDR) - 1$, for classwise equality of odds
\item $TDR$ for equal opportunity
\item $P(Y^{\text{adj}})$ for demographic parity
\end{itemize}
We note here that taking the maximum of the maxima of the pairwise differences would also be a valid and sensible global measure. So that the plots show performance under optimal conditions, we do not use cross-validation to obtain $Y^{\text{adj}}$, i.e., we obtain it by solving the linear program on the entire dataset.
\paragraph{Results}
Table 2 shows changes in global accuracy and mean $TDR$ after adjustment with the weighted objective and term-by-term conditional fairness constraint for our four datasets, using cross-validation as described above to capture some of the variability that comes with finite sampling. Overall, adjustment lowered both accuracy and mean $TDR$. For the bar passage, drug usage, and Parkinson's datasets, the drops were moderate, with average relative changes in the two metrics of around 12\% and 15\%, respectively (without cross-validation, the drops were much smaller at 3\% and 4\%). For the obesity dataset, the drops are much larger at 47\% and 46\%, respectively, which are substantial and would likely make the predictor unusable in practical settings. On in-sample data, these drops were both only around 7\%, and so we suspect that characteristics of the data, like large class imbalance or small overall sample size, are responsible for the poor performance. Perhaps most importantly, the post-adjustment disparity for all datasets is non-zero, and for three of the datasets it actually increases. The bar passage dataset was the only example where the out-of-sample post-adjustment disparity decreased to near zero, likely because it is the largest dataset. This starkly illustrates the sensitivity of the method to the estimation of the joint probabilities $Pr(Y, \hat{Y}, A)$, and shows that the approach is unlikely to work in smaller dataset regimes with a larger number of combinations of classes and protected attributes. Note that for in-sample results, post-adjustment disparity drops completely to 0.0 for all datasets, since it is strictly enforced by the linear program (Table 2).
Figure 1 shows fairness-discrimination plots for our 4 datasets with the weighted objective and each of the 4 fairness constraints. Under strict fairness, with inequality set to 0, equalized odds is the hardest to satisfy, showing the largest increase in Brier score. For the drug usage, obesity, and Parkinson's datasets, discrimination improves approximately linearly as fairness worsens; for the bar passage dataset, discrimination improves to a point, but then worsens as fairness approaches the value for the original, unadjusted predictor $\hat{Y}$. For all datasets, the total loss of discrimination under strict fairness is relatively small (the biggest drop is around 7.5 percentage points on Brier score), but the random forests' predictions were only mildly biased to begin with, so we expect this gap to increase for less-fair predictors.
\section{Discussion}
Generally, our post-processing approach to achieving fairness in multiclass settings seems both feasible and efficient given a large enough dataset size. We have shown above that the linear programming technique proposed by \citet{hardt2016equality} can be extended to accommodate a theoretically arbitrarily large number of discrete outcomes and levels of a protected attribute. Nonetheless, our synthetic experiments and analyses of real-world datasets show that there are a few important considerations for using the approach in practice.
In many cases, the effect of finite sampling may be non-negligible, especially when the number of observations $N$ is small relative to the number of outcomes $|C|$ or the number of protected groups $|\mathscr{A}|$. For example, the obesity dataset with $|C|=5$ and $N=1,490$ saw a large relative drop of 46\% in mean $TDR$ after adjustment under cross-validation.
We also saw this effect extend to fairness, which was not reduced completely to zero on out-of-sample data for any of the real-world datasets. In fact, for the drug usage dataset we found post-adjustment disparity doubled on out-of-sample data.
This last observation raises a concerning point: for some classification problems, the post-adjustment predictions on out-of-sample data may increase disparity rather than lower it. For the largest of the datasets, the bar passage dataset with $N=22,406$, neither of these issues was a concern. Even under cross-validation, the relative change in $TDR$ was only -8\%, and the disparity dropped to near 0 (a 95\% decrease). Given this, we expect that with a large enough dataset, our approach will be far more reliable on out-of-sample data. Future work is needed to quantify more precisely the number of training examples required for reliable out-of-sample fair performance with our approach.
More generally, even when finite sampling variability is not an issue, not all datasets will lend themselves well to this kind of post-processing approach. In our synthetic experiments, we showed that severe class imbalance and severe predictive bias (predicting at nearly the level of chance for minority protected groups) lead to large drops in post-adjustment performance on average. In many of the single experimental runs for synthetic datasets with these settings, the resulting derived predictor was effectively useless, either producing trivial results or lowering predictive performance to near chance (for all groups) for one or more class outcomes. In these circumstances, it may be more sensible to enforce fairness through a combination of pre-processing, in-processing, and post-processing methods, rather than through a post-processing method alone. Indeed, \citet{woodworth2017learning} make this point generally, albeit for the binary setting, by showing that unless the biased predictor $\hat{Y}$ is very close to being Bayes optimal, the derived predictor $Y^{\text{adj}}$ proposed by \citet{hardt2016equality} can underperform relative to other methods, sometimes substantially. Under less extreme circumstances, however, we found our approach produces good results, especially given the time-efficiency of solving the linear program relative to other methods.
\section{Acknowledgments}
This work was supported in part by the HPI Research Center in Machine Learning and Data Science at UC Irvine (P. Putzel), as well as in part by an appointment to the Research Participation Program at the Centers for Disease Control and Prevention, administered by the Oak Ridge Institute for Science and Education (P. Putzel). We would also like to thank Chad Heilig, and Padhraic Smyth for their helpful comments on the approach and paper.
\section{Introduction}
As machine learning begins moving into sensitive prediction tasks, it becomes critical to ensure the fair performance of prediction models. Naively trained machine learning systems can replicate biases present in their training data, resulting in unfair outcomes that can accentuate societal inequities. For example, machine learning systems have been discovered to be unfair in predicting time to criminal recidivism \citep{dieterich2016compas}, ranking applications to nursing school \citep{romano2020achieving}, and recognizing faces \citep{buolamwini2018gender}. Most prior work in this area has focused on ensuring fairness for binary outcomes. However, there are many important real-world applications with multiclass outcomes instead. For example, a self-driving car will need to be able to distinguish clearly between humans, non-human animals (such as dogs), and non-sentient objects while nonetheless maintaining fair performance for both wheelchair users and non-wheelchair users.
Most work has also been done with the assumption that model parameters are accessible to the algorithm, but there is increasing availability of powerful blackbox models whose internal parameters can be either inaccessible or too costly to train. In this paper, we address the case where outcomes are multiclass and the user has received a pre-trained blackbox model. The main contributions of our work are as follows:
\begin{itemize}
\item We show how to extend \citet{hardt2016equality} to multiclass outcomes.
\item We demonstrate in what data regimes multiclass postprocessing is likely to produce fair, useful, and accurate results via a set of rigorous synthetic experiments.
\item We demonstrate the results of our post-processing algorithm on publicly available real-world applications.
\end{itemize}
\paragraph{Code and Dataset Availability} All of the code used to produce our experimental results as well as the synthetic and real-world datasets can be found on our github page\footnote{https://github.com/scotthlee/fairness/tree/aaai}.
\section{Technical Approach}
As in \citet{hardt2016equality}, we consider the problem of enforcing fairness on a blackbox classifier without changing its internal parameters. This means that our approach only has access to the predicted labels $\hat{y_i}$ from the blackbox classifier, the true labels $y_i$, and the protected attributes $a_i$ for $i\in\{1, \dots, N\}$, where $N$ is the number of individuals. The goal of our approach is to produce a new set of updated and fair 'adjusted' predictions $y^{\text{adj}}_i$ that satisfy a desired fairness criterion. For each of $\hat{y_i}$, $y_i$, and $a_i$, we define corresponding random variables $\hat{Y}$, $Y$, $A$. Then, following \citet{hardt2016equality}, we define the random variable for the adjusted predictions $Y^{\text{adj}}$ to be a randomized function of $\hat{Y}$ and $A$.
We extend the approach in \citet{hardt2016equality} by allowing multiclass outcomes, such that the sample spaces of $\hat{Y}$, $Y$, and $Y^{\text{adj}}$ are a collection of discrete and mutually exclusive outcomes $\mathcal{C} = \{1, 2, \dots, |C|\}$. We in principle allow the sample space $\mathscr{A}$ of the protected attribute $A$ to contain any number of discrete values as well: $\mathscr{A} = \{1, 2, \dots, |\mathscr{A}|\}$.
\paragraph{Linear Program} Our approach involves the construction of a linear program over the conditional probabilities of the adjusted predictor $Pr(Y^{\text{adj}}=y^{\text{adj}}|\hat{Y}=\hat{y}, A=a)$ such that a desired fairness criterion is satisfied by those probabilities. In order to construct the linear program, both the loss and fairness criteria must be linear in terms of the protected attribute conditional probability matrices ${\mathbf{P^{a}} =Pr(Y^{\text{adj}}|\hat{Y}, A=a)}$, which have dimensions $|C|\times |C|$.
\paragraph{Types of Objective Functions} We consider objective functions which are linear in the group conditional adjusted probabilities $\mathbf{P^{a}}$. More specifically we consider minimizing expected losses of the form:
\begin{align*}
&E[l(y^{\text{adj}}, y)] = \\ &\sum_{a\in\mathscr{A}}\sum_{i=1}^{|C|} \sum_{j\neq i} Pr(Y^{\text{adj}}=i, Y=j, A=a)l(i, j, a) \\
&= \sum_{a\in\mathscr{A}}\sum_{i=1}^{|C|} \sum_{j\neq i} W^{a}_{ij}\;\;Pr(A=a, Y=j)\;l(i, j, a)
\end{align*}
where $W^a_{ij}=Pr(Y^{\text{adj}}=i|Y=j, A=a)$ are the protected attribute conditional confusion matrices.
Under the independence assumption $Y^{\text{adj}} \perp Y | A, \hat{Y}$, we can write $\mathbf{W^a}=\mathbf{P^a} \mathbf{Z^a}$ where $\mathbf{Z^a} = Pr(\hat{Y}|Y, A=a)$, the class conditional confusion matrices of the original blackbox classifier's predictions. The matrices $\mathbf{Z^a}$ are estimated empirically from the training data ($y_i$, and $a_i$) and blackbox predictions of the model ($\hat{y_i}$). Therefore, this formulation of the objective function remains linear in the protected attribute conditional probability matrices, $\mathbf{P^a}$, as is necessary for the linear program. This definition is similar to \citet{hardt2016equality} except we let the loss $l(i, j, a)$ also be a function of protected attributes instead of just the true and adjusted labels, which allows controlling the strictness of penalties for errors made for specific protected groups and classes. The most straightforward version of this loss is letting $l(y^{\text{adj}}, y, a)$ be the zero-one loss (ignoring the protected attributes) which results in minimizing the sum of the joint probabilities of mismatch between $Y^{\text{adj}}$ and $Y$. We refer to this approach as \textit{unweighted} loss. Another approach is to set $l(y^{\text{adj}}, y, a)$ equal to one over the joint probabilities of the true label and protected attribute $1/Pr(Y=y, A=a)$ (estimated empirically), which we refer to as \textit{weighted} loss. Intuitively, this option reweights the loss to give rarer protected groups and label combinations equal importance to the optimization which could improve fairness when very low membership minority protected groups exist in the dataset. This option for the objective function can be equivalently minimized by maximizing the diagonals (true detection rates) of the group conditional confusion matrices $\mathbf{W^a}$.
\paragraph{Types of Fairness} We consider several versions of multiclass fairness criteria, all of which can be written as a collection of $|\mathscr{A}| - 1$ pairwise equalities setting a fairness criterion of interest equal across all groups. Moreover, each of the terms in these equalities can be written as some $|C|\times|C|$ matrix $M^a$ times the adjusted probability matrix $\mathbf{P^a}$, and therefore are linear in the adjusted probabilities as needed for the linear program (see appendix A for the exact form $M^a$ takes for the different fairness criteria).
The first definition involves requiring strictly equal performance across protected groups.
\begin{definition}[Term-by-Term Multiclass Equality of Odds]
A multiclass predictor satisfies term-by-term equality of odds if the protected group conditional confusion matrices $\mathbf{W^a}$ are equal across all protected groups:
\begin{equation}
\label{def.strict}
\mathbf{W^{1}}= \mathbf{W^{2}}=\dots=\mathbf{W^{|\mathscr{A}|}}
\end{equation}
where $\mathbf{W^{a}}=Pr(Y^{\text{adj}}|Y, A=a)$.
\end{definition}
This is a straightforward extension to the multiclass case of the equality of odds defined in \citet{hardt2016equality}. Notice that since this definition requires equality of each off-diagonal term of $\mathbf{W^a}$ across all groups, it enforces not only that errors are made at the same overall rate across groups, but also that the rates of specific types of errors are equal.
For some practical applications, term-by-term equality of odds is important, such as predicting criminal recidivism times binned into three years, two years, one year, and ``never recommits''. In this case, making the error of predicting 3 years until recidivism when the actual time is 1 year is much worse than predicting 3 years when the actual time is 2. Therefore, it is critical for fairness in this application that the rates of specific types of errors are strictly equal across groups.
Instead of requiring strict equality of off-diagonal terms of $\mathbf{W^a}$ we can instead enforce equality across the classwise overall false detection rates $FDR$, which leads to the next fairness definition:
\begin{definition}[Classwise Multiclass Equality of Odds]
A multiclass predictor satisfies classwise multiclass equality of odds if the diagonals of the protected group conditional confusion matrices and the protected attribute conditional vector of false detection rates are equal across all protected groups:
\begin{equation*}
\label{def.relaxed}
\begin{array}{l}
diag(\mathbf{W^{1}})= diag(\mathbf{W^{2}})=\dots=diag(\mathbf{W^{|\mathscr{A}|}})\\
\mathbf{FDR^{1}} = \mathbf{FDR^{2}} = \dots = \mathbf{FDR^{|\mathscr{A}|}}\numberthis
\end{array}
\end{equation*}
where $\mathbf{FDR^{a}} = Pr(Y^{\text{adj}}|Y^{\text{adj}}\neq Y, A=a)$.
\end{definition}
This version of fairness can `trade' better performance for a specific protected group on one off-diagonal term of $\mathbf{W^a}$ (i.e., a lower error probability for that term) for poorer performance of the same group on a different off-diagonal term (i.e., a higher error probability for that term). Individually, each class label has its true detection rate and overall false detection rate set equal across groups; this type of fairness is therefore `classwise'.
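The false detection rates entering this constraint can be estimated empirically; a small sketch with our own naming:

```python
import numpy as np

def fdr_vector(y_true, y_pred, a, group, n_classes):
    """FDR^a[i] = Pr(Y_pred = i | Y_pred != Y, A = group): among the
    errors made for a group, the share assigned to each predicted class."""
    err = (a == group) & (y_pred != y_true)
    if not err.any():
        return np.zeros(n_classes)
    return np.array([np.mean(y_pred[err] == i) for i in range(n_classes)])
```

Classwise equality of odds then asks that this vector, together with the diagonal of $\mathbf{W^a}$, match across all groups.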
For some problems it is sufficient to maintain fair true detection rates across classes and allow false detection rates to differ across groups. This is even less restrictive than Definition \ref{def.relaxed}. It may be desirable when, for example, deciding whether an admitted college applicant should be placed into an honors program, admitted with a scholarship, or regularly admitted. Since all the outcomes are positive, unfairness across false detection rates may not be critical, as long as the true detection rates are fair across groups. This motivates the following fairness criterion:
\begin{definition}[Multiclass Equality of Opportunity]
A multiclass predictor satisfies equality of opportunity if the diagonals of the protected group conditional confusion matrices $\mathbf{W^a}$ are equal across all groups:
\begin{equation}
diag(\mathbf{W^{1}})= diag(\mathbf{W^{2}})=\dots=diag(\mathbf{W^{|\mathscr{A}|}})
\end{equation}
where $\mathbf{W^{a}}=Pr(Y^{\text{adj}}|Y, A=a)$.
\end{definition}
A common and even more relaxed version of fairness called demographic parity only requires the rate of class predictions across different groups to be equal \citep{calders2009building}.
\begin{definition}[Multiclass Demographic Parity]
A multiclass predictor satisfies demographic parity if the protected group conditional class probabilities are equal across groups:
\begin{equation}
\begin{array}{l}
Pr(Y^{\text{adj}}|A=1)=\\
Pr(Y^{\text{adj}}|A=2) =\dots =Pr(Y^{\text{adj}}|A=|\mathscr{A}|)
\end{array}
\end{equation}
\end{definition}
Enforcing this version of fairness for certain datasets may produce effectively unfair outcomes \citep{dwork2012fairness}. Moreover, in synthetically produced data, this definition has been shown to reduce the reputation of disadvantaged protected groups when repeatedly applied over a long period of time to sensitive decision-making tasks such as hiring \citep{hu2018short}.
Note that while the adjusted probabilities $\mathbf{P^a}$ learned by the linear program are guaranteed to be fair, taking the max value over them when predicting at the individual level will not maintain fairness. In fact, taking the max over the adjusted probabilities can simply reproduce the predictions made by the original blackbox classifier. Instead, when predicting the class of an individual, the corresponding learned adjusted probabilities must be sampled from in order to maintain the fairness guarantee.
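Concretely, individual-level prediction must sample from the relevant column of $\mathbf{P^a}$ rather than take its argmax; a sketch (the helper name is ours):

```python
import numpy as np

def sample_adjusted(y_hat, a, Ps, seed=None):
    """Draw Y_adj ~ P^a[:, y_hat] for each individual, where Ps[g] is the
    learned column-stochastic matrix P^a for group g. Taking an argmax of
    the column instead can collapse back to the blackbox predictions and
    voids the fairness guarantee."""
    rng = np.random.default_rng(seed)
    C = Ps[0].shape[0]
    out = np.empty(len(y_hat), dtype=int)
    for n, (yh, g) in enumerate(zip(y_hat, a)):
        out[n] = rng.choice(C, p=Ps[g][:, yh])
    return out
```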
\section{Related Work}
Most prior work on post-processing-based fairness approaches focuses on binary prediction tasks. \citet{wei2019optimized} create a post-processing algorithm that modifies the raw scores of a binary classifier (instead of thresholded hard predictions) in order to achieve desired fairness constraints expressed as linear combinations of the per-group expected raw scores. \citet{ye2020unbiased} develop a general in-processing fairness framework which alternates between selecting a subset of the training data and fitting a classifier to that data.
Several adversarial approaches to multiclass fairness have been investigated recently; although these are not blackbox post-processing algorithms. \citet{zhang2018mitigating} first present the idea of adversarial debiasing, while \citet{romano2020achieving} present a multiclass approach for in-process training based on adversarial learning, with the discriminator distinguishing between the distribution of the model's current predictions, the true label, and artificial protected attributes resampled to be fair, and the true distribution of the predictions, true labels, and true protected attributes.
Multiclass blackbox post-processing techniques are less studied, although there have been a few new approaches recently. Notably, \citet{denis2021fairness} derive an optimally fair classifier from a pre-trained model and show several nice theoretical guarantees, including the asymptotic fairness of their proposed plug-in estimator. We see 3 key differences between their approach and the extension to \citet{hardt2016equality} that we propose: they only consider binary protected attributes ($|\mathscr{A}|=2$), while we allow categorical protected attributes ($|\mathscr{A}| > 2$) that can, at least in theory, take on any number of unique values; their method requires fitting a new estimator to the test data, whereas ours only requires computing probabilities and solving a linear program, which is relatively efficient; and, perhaps most importantly, their approach is limited to the demographic parity fairness constraint, whereas ours applies to any constraint that is linear in $\mathbf{P^a}$.
In broader terms, \citet{hossain2020designing} unify many of the published methods for learning fair classifiers by showing that equalized odds, equal opportunity, and other common measures of fairness in the binary setting are subsumed by their proposed generalizations of the economic notions of envy-freeness and equitability. They show that these generalizations of fairness apply to the multiclass setting, but post-processing techniques are incapable of achieving them. We show here that this notion is not entirely correct, at least in a narrow sense, and that fairness can be achieved with post-processing techniques in the multiclass setting, so long as the joint distribution $P(Y, \hat{Y}, A)$ is either fully known or can be reasonably approximated by a large-enough sample of training data.
\begin{table*}
\footnotesize
\centering
\begin{tabular}{cccc}
\multicolumn{4}{c}{\bf Experiments with $\mathbf{|\mathscr{A}|=3}$}\\
\toprule \textbf{Hyperparameter} & \textbf{Level} & \textbf{Change in Acc (CI)} & \textbf{Change in $\mathbf{TDR}$ (CI)}
\\
\midrule
Intercept & -- & -0.13 (-0.17, -0.09) & -0.18 (-0.21, -0.15)
\\
\midrule
Loss & Unweighted & -- & --
\\ & Weighted & -0.11 (-0.13, -0.09) & 0.12 (0.10, 0.13)
\\
\midrule
Goal & Equalized Odds & -- & --
\\ & Demographic Parity & 0.24 (0.22, 0.27) & 0.21 (0.18, 0.23)
\\ & Equal Opportunity & 0.08 (0.05, 0.11) & 0.03 (0.01, 0.05)
\\ & Term-by-Term & 0.08 (0.05, 0.11) & 0.02 (-0.01, 0.04)
\\
\midrule
Group Balance & No Minority & -- & --
\\ & One Slight Minority & -0.03 (-0.06, 0.00) & -0.02 (-0.04, 0.01)
\\ & One Strong Minority & -0.04 (-0.07, -0.00) & -0.01 (-0.03, 0.02)
\\ & Two Slight Minorities & -0.05 (-0.08, -0.02) & -0.02 (-0.04, 0.01)
\\ & Two Strong Minorities & -0.07 (-0.11, -0.04) & -0.01 (-0.04, 0.01)
\\
\midrule
Class Balance & Balanced & -- & --
\\ & One Rare & 0.02 (-0.00, 0.04) & -0.04 (-0.06, -0.02)
\\ & Two Rare & 0.07 (0.04, 0.09) & -0.18 (-0.20, -0.17)
\\
\midrule
Pred Bias & Low One & -- & --
\\ & Low Two & 0.00 (-0.03, 0.04) & -0.00 (-0.03, 0.02)
\\ & Medium One & -0.06 (-0.09, -0.02) & -0.06 (-0.08, -0.03)
\\ & Medium Two & -0.04 (-0.07, -0.00) & -0.06 (-0.08, -0.03)
\\ & High One & -0.18 (-0.22, -0.15) & -0.16 (-0.19, -0.14)
\\ & High Two & -0.15 (-0.19, -0.12) & -0.13 (-0.16, -0.11)
\\
\midrule
\end{tabular}
\caption{Predicted change and 95\% confidence intervals for accuracy and mean $TDR$ as a function of the experimental hyperparameters in our synthetic datasets with three protected attributes. All datasets had a 3-class outcome.}
\end{table*}
\section{Synthetic Data Experiments}
\paragraph{Synthetic Data}
To explore the effect of different data regimes and optimization goals on post-adjustment discrimination, we conducted thorough (though by no means exhaustive) synthetic experiments for a 3-class outcome. We constructed synthetic datasets with $N=1,000$ observations for each unique combination of the following data-generating hyperparameters:
\begin{itemize}
\item The number of unique values for the protected attribute, $|\mathscr{A}|$. We explored setting $|\mathscr{A}|=2$ or $|\mathscr{A}|=3$ (see results with $|\mathscr{A}|=2$ in our github repository)
\item The amount of class imbalance for the labels $Y$. For simplicity, we did not allow this to vary across protected groups.
\item Group balance, or the number and relative size of minority groups compared to majority groups. This varied according to the number of groups but was generally either none, weak, or strong.
\item Predictive bias as the difference in mean true detection rate, $TDR$, between the groups. We vary this from mild predictive bias (10 percent difference) to severe bias with the minority group $TDR$ being near chance. The predictive bias is set to always favor the majority group.
\end{itemize}
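A data generator along these lines (our own simplified sketch, with misclassifications spread uniformly over the remaining classes) is:

```python
import numpy as np

def synth_predictions(N, C, group_probs, class_probs, tdr_by_group, seed=0):
    """Generate (y, y_hat, a) with group-specific true detection rates;
    group_probs and class_probs control group and class imbalance, and
    tdr_by_group sets the predictive bias between groups."""
    rng = np.random.default_rng(seed)
    a = rng.choice(len(group_probs), size=N, p=group_probs)
    y = rng.choice(C, size=N, p=class_probs)
    y_hat = y.copy()
    for n in range(N):
        # With probability 1 - TDR for this group, flip to a wrong class.
        if rng.random() > tdr_by_group[a[n]]:
            others = [c for c in range(C) if c != y[n]]
            y_hat[n] = rng.choice(others)
    return y, y_hat, a
```

Varying `group_probs`, `class_probs`, and `tdr_by_group` reproduces the group-balance, class-balance, and predictive-bias conditions described above.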
This process yielded 117 datasets. For each one, we ran the linear program to adjust the (synthetic) biased blackbox predictions 8 times, once for each unique combination of the objective function and type of fairness, yielding a total of 936 adjustments. After each adjustment, we recorded two broad measures of the fair predictor's performance:
\begin{itemize}
\item Triviality, or whether any of the columns in $\mathbf{W^a}=Pr(Y^{\text{adj}}|Y, A=a)$ contained all zeroes (i.e., whether any levels of the outcome were no longer predicted).
\item Discrimination, or the percent change in loss for the adjusted predictor relative to that of the original predictor. For this measure, we examined two specific metrics: global accuracy and the mean of the group-wise $TDRs$. These are equivalent to 1 minus the post-adjustment loss under the two versions of the objective functions we present above.
\end{itemize}
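The triviality check above can be implemented directly; note that with our convention $W^a[i, j] = Pr(Y^{\text{adj}} = i | Y = j, A = a)$, a never-predicted class corresponds to an all-zero row:

```python
import numpy as np

def is_trivial(Ws, tol=1e-9):
    """A derived predictor is 'trivial' if some class is (numerically)
    never predicted for some group, i.e. an entire row of some W^a,
    with W^a[i, j] = Pr(Y_adj = i | Y = j, A = a), is zero."""
    return any((np.abs(W) < tol).all(axis=1).any() for W in Ws)
```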
To quantify the average effect of each hyperparameter on discrimination, we fit two multivariable linear regression models to the resulting dataset, one for each discrimination metric. Before fitting the models, we converted the categorical hyperparameters (all but the loss) to one-hot variables, and we set a reference level for each, removing the corresponding column from the design matrix. We then fit the models separately using ordinary least squares (OLS) and calculated confidence intervals (CIs) for the resulting coefficients.
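The encoding-plus-OLS step can be sketched as follows (a package such as statsmodels would also supply the confidence intervals; the names here are our own):

```python
import numpy as np

def ols_with_reference(X_cats, y):
    """One-hot encode categorical columns, drop the first level of each
    as the reference, and fit OLS coefficients via least squares."""
    cols, names = [], []
    for name, col in X_cats.items():
        levels = sorted(set(col))
        for lev in levels[1:]:                 # first level = reference
            cols.append([1.0 if v == lev else 0.0 for v in col])
            names.append(f"{name}={lev}")
    X = np.column_stack([np.ones(len(y))] + cols)
    beta, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return dict(zip(["intercept"] + names, beta))
```

Each coefficient is then the average change in the discrimination metric relative to the reference level of that hyperparameter, as reported in Table 1.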
\paragraph{Results}
Table 1 shows coefficients and 95\% confidence intervals for the regression models with $|\mathscr{A}|=3$. The results highlight several important points:
\begin{itemize}
\item Predictive bias and class imbalance are the two main drivers of decreases in post-adjustment discrimination, for both accuracy, and $TDR$.
\item High group imbalance for the protected attributes lowers post-adjustment discrimination, but only from the perspective of global accuracy--even with 2 strong minorities (3-group scenario), mean $TDR$ only drops by 1.1\%.
\item Relative to the weighted objective, the unweighted objective leads to higher scores for global accuracy but lower scores for mean $TDR$. This is perhaps unsurprising, but it is worth noting nonetheless.
\item Despite finding better accuracy solutions, we also found that the unweighted objective leads to trivial solutions far more frequently (30\% of the time it was used) than the weighted version of the loss (0.2\% of the time it was used). This trend will likely worsen as either the number of classes or the number of protected groups grows.
\item Fairness is generally harder to achieve with 3 protected groups than with 2, since the intercepts are lower for both accuracy and mean $TDR$. We believe this to be a general consequence of forcing fairness across more groups and expect this trend to continue as the number of groups increases.
\end{itemize}
\begin{figure*}[!h]
\centering
\includegraphics[scale=.5]{figures/brier_fds.png}
\caption{Fairness-discrimination plots for our postprocessing algorithm on our 4 real-world datasets, created by systematically relaxing the fairness equality constraints of the linear program. The plots show Brier score as a function of the maximum average difference between groups of the corresponding fairness criterion. Performance of the original, unadjusted predictor is marked by an X.
}
\label{fig:fairness_vs_discrimination}
\end{figure*}
\section{Experiments with Real-World Data}
\paragraph{Dataset Descriptions}
To further examine the performance characteristics of our algorithm, we ran it on several real-world datasets described below.
\begin{enumerate}
\item \textbf{Drug Usage} \cite{fehrman2017five}. This dataset has inherently multiclass outcomes, with the target being a 7-level categorical variable indicating recentness of use for a variety of drugs. We focus on predicting cannabis usage, where we collapsed the 7-level usage indicator into 3 broader categories: never used, used but not in the past year, and used in the past year. Predictors included demographic variables like age, gender, and level of education, as well as a variety of measures of personality traits hypothesized to affect usage habits.
\item \textbf{Obesity} \cite{palechor2019dataset}. This dataset has inherently multiclass outcomes, with the target being a 7-level categorical variable indicating weight category; the protected attribute is gender (Male/Female). Because some of the observations are synthetic in order to protect privacy, not all of the gender/weight categories had sufficient numbers for modeling, and so we omitted observations from the 2 most extreme weight categories, Obesity Type-II and Obesity Type-III, leaving a 5-level target for prediction. Predictors included age, gender, family medical history, and several measures of physical activity and behavioral health.
\item \textbf{LSAC Bar Passage} \cite{wightman1998lsac}. This dataset has inherently multiclass outcomes, with the target being a 3-level variable indicating bar exam passage status (passed first time, passed second time, or did not pass). The protected attribute is race, which we collapsed from its original 8 levels to 2 (white and non-white). Predictors included mostly measures of educational achievement, like undergraduate GPA, law school GPA, and LSAT score.
\item \textbf{Parkinson's Telemonitoring} \cite{tsanas2009accurate}. This dataset does not have inherently multiclass outcomes: the target for prediction is the Unified Parkinson's Disease Rating Scale (UPDRS), a continuous score that increases with the severity of impairment. We again used Otsu's method to bin the continuous score into 3 categories--low impairment, moderate impairment, and high impairment--which we took as the new class labels. The protected attribute is a 2-level variable for gender (Male/Female). Predictors included mostly biomedical measurements from the voice recordings of patients with Parkinson's Disease.
\end{enumerate}
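The Otsu-style binning of a continuous score into 3 classes, used above for the UPDRS target, can be sketched with a brute-force two-threshold search that minimizes the pooled within-bin variance. The scores below are synthetic placeholders, not UPDRS data; in practice a library routine such as scikit-image's threshold_multiotsu could be used instead.

```python
import numpy as np
from itertools import combinations

def otsu_two_thresholds(x, n_candidates=64):
    """Brute-force multi-Otsu: choose two cut points minimizing the
    weighted sum of within-bin variances (equivalently, maximizing
    between-bin variance)."""
    cands = np.quantile(x, np.linspace(0.02, 0.98, n_candidates))
    best, best_cuts = np.inf, None
    for t1, t2 in combinations(cands, 2):
        bins = np.digitize(x, [t1, t2])  # bin indices 0, 1, 2
        score = 0.0
        for b in range(3):
            xb = x[bins == b]
            if len(xb) == 0:  # reject cuts that empty a bin
                score = np.inf
                break
            score += len(xb) * xb.var()
        if score < best:
            best, best_cuts = score, (t1, t2)
    return best_cuts

rng = np.random.default_rng(1)
# Synthetic scores drawn from three well-separated severity regimes.
scores = np.concatenate([rng.normal(10, 2, 300),
                         rng.normal(25, 2, 300),
                         rng.normal(40, 2, 300)])
t1, t2 = otsu_two_thresholds(scores)
labels = np.digitize(scores, [t1, t2])  # 0=low, 1=moderate, 2=high
```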
For each of these datasets, we obtained a potentially-biased predictor $\hat{Y}$ by training a random forest on all available informative features (including the protected attribute) to predict the multiclass outcome, and then taking the categories corresponding to the row-wise maxima of the out-of-bag decision scores as the set of predicted labels. We then adjusted the predictions with the weighted objective and term-by-term equality of odds fairness constraint and recorded the relative changes in global accuracy and mean $TDR$ as the outcome measures of interest, as with our synthetic experiments.
\begin{table*}
\footnotesize
\centering
\hspace{.61 cm}
\begin{tabular}{c|cccc}
\multicolumn{5}{c}{\bf In-Sample Results}\\
\toprule
Dataset (N) & \# Terms& Old Acc $\shortrightarrow$ New Acc & Old $TDR\shortrightarrow$ New $TDR$& Pre $\shortrightarrow$ Post-Adj Disparity\\
&in $\mathbf{P^a}$&(\% change)&(\% change)&(\% change)\\
\midrule
Bar (N=22406)& 18&88 \% $\shortrightarrow$ 88\% (-1\%)& 36\% $\shortrightarrow$ 34\% (-7\%)&0.11 $\shortrightarrow$0.00 (-100\%)\\
Cannabis (N=1885)&18&74\% $\shortrightarrow$ 71\% (-4\%)& 67\% $\shortrightarrow$ 63\% (-6\%)&0.07 $\shortrightarrow$0.00 (-100\%)\\
Obesity (N=1490)&50&78\% $\shortrightarrow$ 73\% (-7\%)& 78\% $\shortrightarrow$ 73\% (-7\%)&0.05 $\shortrightarrow$ 0.00 (-100\%)\\
Parkinsons (N=5875)&18&93\% $\shortrightarrow$ 91\% (-2\%)& 92\% $\shortrightarrow$ 89\% (-3\%)&0.04 $\shortrightarrow$0.00(-100\%)\\
\end{tabular}
\newline
\vspace{.5 cm}
\newline
\begin{tabular}{c|cccc}
\multicolumn{5}{c}{\bf Out of Sample Results}\\
\toprule
Dataset (N) & \# Terms& Old Acc $\shortrightarrow$ New Acc & Old $TDR\shortrightarrow$ New $TDR$& Pre $\shortrightarrow$ Post-Adj Disparity\\
&in $\mathbf{P^a}$&(\% change)&(\% change)&(\% change)\\
\midrule
Bar (N=22406)& 18&88 \% $\shortrightarrow$ 83\% (-6\%)& 36\% $\shortrightarrow$ 33\% (-8\%)&0.11 $\shortrightarrow$0.01 (-95\%)\\
Cannabis (N=1885)&18&74\% $\shortrightarrow$ 61\% (-18\%)& 67\% $\shortrightarrow$ 52\% (-22\%)&0.07 $\shortrightarrow$0.16 (124\%)\\
Obesity (N=1490)&50&78\% $\shortrightarrow$ 41\% (-47\%)& 78\% $\shortrightarrow$ 42\% (-46\%)&0.05 $\shortrightarrow$ 0.07 (45\%)\\
Parkinsons (N=5875)&18&93\% $\shortrightarrow$ 82\% (-12\%)& 92\% $\shortrightarrow$ 78\% (-15\%)&0.04 $\shortrightarrow$0.05(33\%)\\
\end{tabular}
\caption{Results of applying the linear program to adjust the blackbox predictions and produce $y^{der}$ for four real-world datasets. The top table is without any splitting. Results shown in the bottom table are cross-validated across five 80/20 splits of each dataset. Accuracy and $TDR$ are shown before and after adjustment, with $TDR$ being the mean across all classes. Percent changes, shown in parentheses, are the relative percent drops in accuracy and mean $TDR$. Post-adjustment disparity is the element-wise mean difference across all groups of $\mathbf{W^a}$.}
\end{table*}
\paragraph{Exploring the Effect of Finite Sampling}
\citet{hardt2016equality} note that their method will not be affected by finite sample variability as long as the joint distribution $Pr(Y, \hat{Y}, A)$ is known, or at least well-approximated by a large sample. In practical applications, however, the sample at hand may not be large enough to approximate the joint distribution with precision. This problem is exacerbated when the number of observations $N$ is small relative to the number of probabilities learned by the algorithm, of which there are $|C|\times|C|\times|\mathscr{A}|$ in total. This difficulty is therefore more severe for our extension in this work, where $|C| > 2$.
In these cases, the adjusted predictor $Y^{\text{adj}}$ may have worse classification performance and higher disparity when applied to unseen, out-of-sample data.
As a preliminary exploration of this effect, we used 5-fold cross-validation to generate out-of-sample predictions for each of the observations in our real-world datasets. Keeping $Y$, $\hat{Y}$, and $A$ fixed, we solved the linear program on $80\%$ of the data and then used the adjusted probabilities $\mathbf{P^a}$ to obtain class predictions for the observations in the remaining $20\%$. As with the predictions obtained from solving the linear program on the full dataset, we measured the changes in accuracy and mean $TDR$ for the cross-validated predictions. Because fairness is not guaranteed when the joint distribution assumption is violated, we also measured post-adjustment fairness.
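Applying the learned adjustment to held-out data, as described above, amounts to sampling each derived label from the row of the group-specific adjustment matrix indexed by the blackbox prediction. The matrices and data in this sketch are illustrative placeholders, not quantities learned from any of the datasets above.

```python
import numpy as np

def apply_adjustment(y_hat, groups, P, rng):
    """Randomized derived predictor: P[a][j, k] = Pr(Y_adj = k | Y_hat = j, A = a)."""
    n_classes = next(iter(P.values())).shape[0]
    out = np.empty_like(y_hat)
    for i, (j, a) in enumerate(zip(y_hat, groups)):
        out[i] = rng.choice(n_classes, p=P[a][j])  # sample from row j
    return out

rng = np.random.default_rng(0)
# Illustrative 3-class adjustment matrices for two groups; rows sum to 1.
P = {
    "a0": np.array([[0.90, 0.05, 0.05],
                    [0.10, 0.80, 0.10],
                    [0.05, 0.05, 0.90]]),
    "a1": np.array([[0.70, 0.20, 0.10],
                    [0.10, 0.70, 0.20],
                    [0.10, 0.10, 0.80]]),
}
y_hat = rng.integers(0, 3, size=1000)           # blackbox predictions
groups = rng.choice(["a0", "a1"], size=1000)    # protected attribute
y_adj = apply_adjustment(y_hat, groups, P, rng)
```

Because the adjustment is a randomized map from $(\hat{Y}, A)$ to $Y^{\text{adj}}$, out-of-sample application needs only the held-out predictions and group memberships, never the true labels.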
\paragraph{Exploring the Fairness-Discrimination Tradeoff}
When there are large gaps in a predictor's performance across groups, i.e., when predictive bias is high, strict fairness may not always be possible or desirable to achieve because of the large amount of randomization required to balance the blackbox classifier's predictions. To explore the tradeoff between fairness and discrimination, we ran the linear program on each of the real-world datasets once for each of the four kinds of fairness. For each combination of dataset and fairness type, we varied the equality constraints of the linear program--the maximum percent difference allowed between any pairwise comparison of fairness measures between groups--from 0.0 to 1.0 in increments of 0.01, and then plotted the value of the weighted objective at each point as a function of the global measure of fairness corresponding to the fairness type under consideration. To obtain these global measures, we took the maximum of the mean differences across pairs of groups of the following metrics:
\begin{itemize}
\item $\mathbf{W}$, or the matrix of probabilities $P(Y^{\text{adj}}|Y)$, for term-by-term equality of odds
\item Youden's J index, or $TDR + (1-FDR) - 1$, for classwise equality of odds
\item $TDR$ for equal opportunity
\item $P(Y^{\text{adj}})$ for demographic parity
\end{itemize}
We note here that taking the maximum of the maxima of the pairwise differences would also be a valid and sensible global measure. So that the plots show performance under optimal conditions, we do not use cross-validation to obtain $Y^{\text{adj}}$, i.e., we obtain it by solving the linear program on the entire dataset.
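The global measure for equal opportunity, the maximum over group pairs of the mean classwise $TDR$ difference, can be computed as in the following sketch; the labels and predictor here are synthetic placeholders.

```python
import numpy as np
from itertools import combinations

def tdr_by_group(y, y_pred, groups, n_classes):
    """Per-group true-detection rate (recall) for each class."""
    out = {}
    for a in np.unique(groups):
        m = groups == a
        out[a] = np.array([
            (y_pred[m & (y == c)] == c).mean() if (m & (y == c)).any() else 0.0
            for c in range(n_classes)])
    return out

def max_mean_pairwise_gap(metric_by_group):
    """Max over group pairs of the mean absolute classwise difference."""
    gaps = [np.abs(metric_by_group[a] - metric_by_group[b]).mean()
            for a, b in combinations(sorted(metric_by_group), 2)]
    return max(gaps)

rng = np.random.default_rng(2)
n, n_classes = 3000, 3
y = rng.integers(0, n_classes, size=n)
groups = rng.choice(np.array(["g0", "g1"]), size=n)
# Synthetic predictor that is deliberately noisier for group g1.
flip = rng.random(n) < np.where(groups == "g1", 0.4, 0.1)
y_pred = np.where(flip, rng.integers(0, n_classes, size=n), y)
tdrs = tdr_by_group(y, y_pred, groups, n_classes)
gap = max_mean_pairwise_gap(tdrs)
```

The same two-step pattern (per-group metric vector, then max of mean pairwise differences) applies to the other three global measures by swapping in $\mathbf{W}$, Youden's J, or $P(Y^{\text{adj}})$.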
\paragraph{Results}
Table 2 shows changes in global accuracy and mean $TDR$ after adjustment with the weighted objective and term-by-term conditional fairness constraint for our four datasets, using cross-validation as described above to capture some of the variability that comes with finite sampling. Overall, adjustment lowered both accuracy and mean $TDR$. For the bar passage, drug usage, and Parkinson's datasets, the drops were moderate, with average relative changes in the two metrics of around 12\% and 15\%, respectively (without cross-validation, the drops were much smaller, at 3\% and 4\%). For the obesity dataset, the drops were much larger, at 47\% and 46\%, respectively; these are substantial and would likely make the predictor unusable in practical settings. On in-sample data, these drops were both only around 7\%, and so we suspect that characteristics of the data, like large class imbalance or small overall sample size, are responsible for the poor performance. Perhaps most importantly, the post-adjustment disparity for all datasets is non-zero, and for three of the datasets it actually increases. The bar passage dataset was the only example where the out-of-sample post-adjustment disparity decreased to near zero, likely because it is the largest dataset. This starkly illustrates the sensitivity of the method to estimating the joint probabilities $Pr(Y, \hat{Y}, A)$, and shows that the approach is unlikely to work in smaller-dataset regimes with a larger combination of classes and protected attributes. Note that for the in-sample results in Table 2, post-adjustment disparity drops completely to 0.0 for all datasets, since it is strictly enforced by the linear program.
Figure 1 shows fairness-discrimination plots for our 4 datasets with the weighted objective and each of the 4 fairness constraints. Under strict fairness, with inequality set to 0, equalized odds is the hardest to satisfy, showing the largest increase in Brier score. For the drug usage, obesity, and Parkinson's datasets, discrimination improves approximately linearly as fairness worsens; for the bar passage dataset, discrimination improves to a point, but then worsens as fairness approaches the value for the original, unadjusted predictor $\hat{Y}$. For all datasets, the total loss of discrimination under strict fairness is relatively small (the biggest drop is around 7.5 percentage points on Brier score), but the random forests' predictions were only mildly biased to begin with, so we expect this gap to increase for less-fair predictors.
\section{Discussion}
Generally, our post-processing approach to achieving fairness in multiclass settings seems both feasible and efficient, given a large enough dataset. We have shown above that the linear programming technique proposed by \citet{hardt2016equality} can be extended to accommodate a theoretically arbitrary number of discrete outcomes and levels of a protected attribute. Nonetheless, our synthetic experiments and analyses of real-world datasets show that there are a few important considerations for using the approach in practice.
In many cases, the effect of finite sampling may be non-negligible, especially when the number of observations $N$ is small relative to the number of outcomes $|C|$ or the number of protected groups $|\mathscr{A}|$. For example, the obesity dataset, with $|C|=5$ and $N=1,490$, saw a large relative drop of 46\% in mean $TDR$ after adjustment under cross-validation.
We also saw this effect extend to fairness, which was not reduced completely to zero on out-of-sample data for any of the real-world datasets. In fact, for the drug usage dataset we found that post-adjustment disparity more than doubled on out-of-sample data.
This last observation raises a concerning point: for some classification problems, the post-adjustment predictions on out-of-sample data may increase disparity rather than lowering it. For the largest of the datasets, the bar passage dataset with $N=22,406$, neither of these issues was a concern. Even under cross-validation, the relative change in $TDR$ was only -8\%, and the disparity dropped to near 0 (a 95\% decrease). Given this, we expect that with a large enough dataset size, our approach will be far more reliable on out-of-sample data. Future work is needed to quantify more precisely the number of training examples required for reliable out-of-sample fair performance with our approach.
More generally, even when finite sampling variability is not an issue, not all datasets will lend themselves well to this kind of post-processing approach. In our synthetic experiments, we showed that severe class imbalance and severe predictive bias (predicting at nearly the level of chance for minority protected groups) lead to large drops in post-adjustment performance on average. In many of the single experimental runs for synthetic datasets with these settings, the resulting derived predictor was effectively useless, either producing trivial results or lowering predictive performance to near chance (for all groups) for one or more class outcomes. In these circumstances, it may be more sensible to enforce fairness through a combination of pre-processing, in-processing, and post-processing methods, rather than through a post-processing method alone. Indeed, \citet{woodworth2017learning} make this point generally, albeit for the binary setting, by showing that unless the biased predictor $\hat{Y}$ is very close to being Bayes optimal, the derived predictor $Y^{\text{adj}}$ proposed by \citet{hardt2016equality} can underperform relative to other methods, sometimes substantially. Under less extreme circumstances, however, we found our approach produces good results, especially given the time-efficiency of solving the linear program relative to other methods.
\section{Acknowledgments}
This work was supported in part by the HPI Research Center in Machine Learning and Data Science at UC Irvine (P. Putzel), as well as in part by an appointment to the Research Participation Program at the Centers for Disease Control and Prevention, administered by the Oak Ridge Institute for Science and Education (P. Putzel). We would also like to thank Chad Heilig and Padhraic Smyth for their helpful comments on the approach and paper.
Produced by V. L. Simpson and the Online Distributed
Proofreading Team at http://www.pgdp.net
[Illustration: YOUR AFF PUSSY]
Letters from a Cat.
PUBLISHED BY HER MISTRESS
For the Benefit of all Cats
AND
THE AMUSEMENT OF LITTLE CHILDREN.
BY H. H.,
AUTHOR OF "NELLY'S SILVER MINE."
_WITH SEVENTEEN ILLUSTRATIONS BY ADDIE LEDYARD._
BOSTON:
ROBERTS BROTHERS.
1879.
_Copyright, 1879,_ By Roberts Brothers.
[Illustration: Helen]
INTRODUCTION.
Dear Children:
I do not feel wholly sure that my Pussy wrote these letters herself.
They always came inside the letters written to me by my mamma, or other
friends, and I never caught Pussy writing at any time when I was at
home; but the printing was pretty bad, and they were signed by Pussy's
name; and my mamma always looked very mysterious when I asked about
them, as if there were some very great secret about it all; so that
until I grew to be a big girl, I never doubted but that Pussy printed
them all alone by herself, after dark.
They were written when I was a very little girl, and was away from home
with my father on a journey. We made this journey in our own carriage,
and it was one of the pleasantest things that ever happened to me. My
clothes and my father's were packed in a little leather valise which was
hung by straps underneath the carriage, and went swinging, swinging,
back and forth, as the wheels went round. My father and I used to walk
up all the steep hills, because old Charley, our horse, was not very
strong; and I kept my eyes on that valise all the while I was walking
behind the carriage; it seemed to me the most unsafe way to carry a
valise, and I wished very much that my best dress had been put in a
bundle that I could carry in my lap. This was the only drawback on the
pleasure of my journey,--my fear that the valise would fall off when we
did not know it, and be left in the road, and then I should not have
anything nice to wear when I reached my aunt's house. But the valise
went through all safe, and I had the satisfaction of wearing my best
dress every afternoon while I stayed; and I was foolish enough to think
a great deal of this.
On the fourth day after our arrival came a letter from my mamma, giving
me a great many directions how to behave, and enclosing this first
letter from Pussy. I carried both letters in my apron pocket all the
time. They were the first letters I ever had received, and I was very
proud of them. I showed them to everybody, and everybody laughed hard at
Pussy's, and asked me if I believed that Pussy printed it herself. I
thought perhaps my mamma held her paw, with the pen in it, as she had
sometimes held my hand for me, and guided my pen to write a few words. I
asked papa to please to ask mamma, in his letter, if that were the way
Pussy did it; but when his next letter from mamma came, he read me this
sentence out of it: "Tell Helen I did not hold Pussy's paw to write that
letter." So then I felt sure Pussy did it herself; and as I told you, I
had grown up to be quite a big girl before I began to doubt it. You see
I thought my Pussy such a wonderful Pussy that nothing was too
remarkable for her to do. I knew very well that cats generally did not
know how to read or write; but I thought there had never been such a
cat in the world as this Pussy of mine. It is a great many years since
she died; but I can see her before me to-day as plainly as if it were
only yesterday that I had really seen her alive.
She was a little kitten when I first had her; but she grew fast, and was
very soon bigger than I wanted her to be. I wanted her to stay little.
Her fur was a beautiful dark gray color, and there were black stripes on
her sides, like the stripes on a tiger. Her eyes were very big, and her
ears unusually long and pointed. This made her look like a fox; and she
was so bright and mischievous that some people thought she must be part
fox. She used to do one thing that I never heard of any other cat's
doing: she used to play hide-and-seek. Did you ever hear of a cat's
playing hide-and-seek? And the most wonderful part of it was, that she
took it up of her own accord. As soon as she heard me shut the gate in
the yard at noon, when school was done, she would run up the stairs as
hard as she could go, and take her place at the top, where she could
just peep through the banisters. When I opened the door, she would give
a funny little mew, something like the mew cats make when they call
their kittens. Then as soon as I stepped on the first stair to come up
to her, she would race away at the top of her speed, and hide under a
bed; and when I reached the room, there would be no Pussy to be seen. If
I called her, she would come out from under the bed; but if I left the
room, and went down stairs without speaking, in less than a minute she
would fly back to her post at the head of the stairs, and call again
with the peculiar mew. As soon as I appeared, off she would run, and
hide under the bed as before. Sometimes she would do this three or four
times; and it was a favorite amusement of my mother's to exhibit this
trick of hers to strangers. It was odd, though; she never would do it
twice, when she observed that other people were watching. When I called
her, and she came out from under the bed, if there were strangers
looking on, she would walk straight to me in the demurest manner, as if
it were a pure accident that she happened to be under that bed; and no
matter what I did or said, her frolic was over for that day.
She used to follow me, just like a little dog, wherever I went. She
followed me to school every day, and we had great difficulty on Sundays
to keep her from following us to church. Once she followed me, when it
made a good many people laugh, in spite of themselves, on an occasion
when it was very improper for them to laugh, and they were all feeling
very sad. It was at the funeral of one of the professors in the college.
The professors' families all sat together; and when the time came for
them to walk out of the house and get into the carriages to go to the
graveyard, they were called, one after the other, by name. When it came
to our turn, my father and mother went first, arm-in-arm; then my sister
and I; and then, who should rise, very gravely, but my Pussy, who had
slipped into the room after me, and had not been noticed in the crowd.
With a slow and deliberate gait she walked along, directly behind my
sister and me, as if she were the remaining member of the family, as
indeed she was. People began to smile, and as we passed through the
front door, and went down the steps, some of the men and boys standing
there laughed out. I do not wonder; for it must have been a very comical
sight. In a second more, somebody sprang forward and snatched Pussy up.
Such a scream as she gave! and scratched his face with her claws, so
that he was glad to put her down. As soon as I heard her voice I turned
round, and called her in a low tone. She ran quickly to me, and I picked
her up and carried her in my arms the rest of the way. But I saw even my
own papa and mamma laughing a little, for just a minute. That was the
only funeral Pussy ever attended.
Pussy lived several years after the events which are related in these
letters.
It was a long time before her fur grew out again after that terrible
fall into the soft-soap barrel. However, it did grow out at last, and
looked as well as ever. Nobody would have known that any thing had been
the matter with her, except that her eyes were always weak. The edges of
them never got quite well; and poor Pussy used to sit and wash them by
the hour; sometimes mewing and looking up in my face, with each stroke
of her paw on her eyes, as much as to say, "Don't you see how sore my
eyes are? Why don't you do something for me?"
She was never good for any thing as a mouser after that accident, nor
for very much to play with. I recollect hearing my mother say one day to
somebody,--"Pussy was spoiled by her experience in the cradle. She would
like to be rocked the rest of her days, I do believe; and it is too
funny to see her turn up her nose at tough beef. It was a pity she ever
got a taste of tenderloin!"
At last, what with good feeding and very little exercise, she grew so
fat that she was clumsy, and so lazy that she did not want to do any
thing but lie curled up on a soft cushion.
She had outgrown my little chair, which had a green moreen cushion in
it, on which she had slept for many a year, and of which I myself had
very little use,--she was in it so much of the time. But now that this
was too tight for her, she took possession of the most comfortable
places she could find, all over the house. Now it was a sofa, now it was
an arm-chair, now it was the foot of somebody's bed. But wherever it
happened to be, it was sure to be the precise place where she was in the
way, and the poor thing was tipped headlong out of chairs, shoved
hastily off sofas, and driven off beds so continually, that at last she
came to understand that when she saw any person approaching the chair,
sofa, or bed on which she happened to be lying, the part of wisdom for
her was to move away. And it was very droll to see the injured and
reproachful expression with which she would slowly get up, stretch all
her legs, and walk away, looking for her next sleeping-place. Everybody
in the house, except me, hated the sight of her; and I had many a
pitched battle with the servants in her behalf. Even my mother, who was
the kindest human being I ever knew, got out of patience at last, and
said to me one day:--
"Helen, your Pussy has grown so old and so fat, she is no comfort to
herself, and a great torment to everybody else. I think it would be a
mercy to kill her."
"Kill my Pussy!" I exclaimed, and burst out crying, so loud and so hard
that I think my mother was frightened; for she said quickly:--
"Never mind, dear; it shall not be done, unless it is necessary. You
would not want Pussy to live, if she were very uncomfortable all the
time."
"She isn't uncomfortable," I cried; "she is only sleepy. If people would
let her alone, she would sleep all day. It would be awful to kill her.
You might as well kill me!"
After that, I kept a very close eye on Pussy; and I carried her up to
bed with me every night for a long time.
But Pussy's days were numbered. One morning, before I was up, my mamma
came into my room, and sat down on the edge of my bed.
"Helen," she said, "I have something to tell you which will make you
feel very badly; but I hope you will be a good little girl, and not make
mamma unhappy about it. You know your papa and mamma always do what they
think is the very best thing."
"What is it, mamma?" I asked, feeling very much frightened, but never
thinking of Pussy.
"You will never see your Pussy any more," she replied. "She is dead."
"Oh, where is she?" I cried. "What killed her? Won't she come to life
again?"
"No," said my mother; "she is drowned."
Then I knew what had happened.
"Who did it?" was all I said.
"Cousin Josiah," she replied; "and he took great care that Pussy did not
suffer at all. She sank to the bottom instantly."
"Where did he drown her?" I asked.
"Down by the mill, in Mill Valley, where the water is very deep,"
answered my mother; "we told him to take her there."
At these words I cried bitterly.
"That's the very place I used to go with her to play," I exclaimed.
"I'll never go near that bridge as long as I live, and I'll never speak
a word to Cousin Josiah either--never!"
My mother tried to comfort me, but it was of no use; my heart was nearly
broken.
When I went to breakfast, there sat my cousin Josiah, looking as
unconcerned as possible, reading a newspaper. He was a student in the
college, and boarded at our house. At the sight of him all my
indignation and grief broke forth afresh. I began to cry again; and
running up to him, I doubled up my fist and shook it in his face.
"I said I'd never speak to you as long as I lived," I cried; "but I
will. You're just a murderer, a real murderer; that's what you are! and
when you go to be a missionary, I hope the cannibals'll eat you! I hope
they'll eat you alive raw, you mean old murderer!"
"Helen Maria!" said my father's voice behind me, sternly. "Helen Maria!
leave the room this moment!"
I went away sullenly, muttering, "I don't care, he is a murderer; and I
hope he'll be drowned, if he isn't eaten! The Bible says the same
measure ye mete shall be meted to you again. He ought to be drowned."
For this sullen muttering I had to go without my breakfast; and after
breakfast was over, I was made to beg Cousin Josiah's pardon; but I did
not beg it in my heart--not a bit--only with my lips, just repeating the
words I was told to say; and from that time I never spoke one word to
him, nor looked at him, if I could help it.
My kind mother offered to get another kitten for me, but I did not want
one. After a while, my sister Ann had a present of a pretty little gray
kitten; but I never played with it, nor took any notice of it at all. I
was as true to my Pussy as she was to me; and from that day to this, I
have never had another Pussy!
I.
My Dear Helen:
That is what your mother calls you, I know, for I jumped up on
writing-table just now, and looked, while she was out of the room; and I
am sure I have as much right to call you so as she has, for if you were
my own little kitty, and looked just like me, I could not love you any
more than I do. How many good naps I have had in your lap! and how many
nice bits of meat you have saved for me out of your own dinner! Oh, I'll
never let a rat, or a mouse, touch any thing of yours so long as I live.
I felt very unhappy after you drove off yesterday, and did not know what
to do with myself. I went into the barn, and thought I would take a nap
on the hay, for I do think going to sleep is one of the very best things
for people who are unhappy; but it seemed so lonely without old Charlie
stamping in his stall that I could not bear it, so I went into the
garden, and lay down under the damask rose-bush, and caught flies. There
is a kind of fly round that bush which I like better than any other I
ever ate. You ought to see that there is a very great difference between
my catching flies and your doing it. I have noticed that you never eat
them, and I have wondered that when you were always so kind to me you
could be so cruel as to kill poor flies for nothing: I have often wished
that I could speak to you about it: now that your dear mother has taught
me to print, I shall be able to say a great many things to you which I
have often been unhappy about because I could not make you understand. I
am entirely discouraged about learning to speak the English language,
and I do not think anybody takes much trouble to learn ours; so we cats
are confined entirely to the society of each other, which prevents our
knowing so much as we might; and it is very lonely too, in a place where
there are so few cats kept as in Amherst. If it were not for Mrs.
Hitchcock's cat, and Judge Dickinson's, I should really forget how to
use my tongue. When you are at home I do not mind it, for although I
cannot talk to you, I understand every word that you say to me, and we
have such good plays together with the red ball. That is put away now in
the bottom drawer of the little workstand in the sitting-room. When your
mother put it in, she turned round to me, and said, "Poor pussy, no more
good plays for you till Helen comes home!" and I thought I should
certainly cry. But I think it is very foolish to cry over what cannot be
helped, so I pretended to have got something into my left eye, and
rubbed it with my paw. It is very seldom that I cry over any thing,
unless it is "spilt milk." I must confess, I have often cried when that
has happened: and it always is happening to cats' milk. They put it into
old broken things that tip over at the least knock, and then they set
them just where they are sure to be most in the way. Many's the time
Josiah has knocked over that blue saucer of mine, in the shed, and when
you have thought that I had had a nice breakfast of milk, I had nothing
in the world but flies, which are not good for much more than just a
little sort of relish. I am so glad of a chance to tell you about this,
because I know when you come home you will get a better dish for me.
I hope you found the horse-chestnuts which I put in the bottom of the
carriage for you. I could not think of any thing else to put in, which
would remind you of me: but I am afraid you will never think that it was
I who put them there, and it will be too bad if you don't, for I had a
dreadful time climbing up over the dasher with them, and both my jaws
are quite lame from stretching them so, to carry the biggest ones I
could find.
There are three beautiful dandelions out on the terrace, but I don't
suppose they will keep till you come home. A man has been doing
something to your garden, but though I watched him very closely all the
time, I could not make out what he was about. I am afraid it is
something you will not like; but if I find out more about it, I will
tell you in my next letter. Good by.
Your affectionate Pussy.
[Illustration: "I felt very unhappy after you drove off yesterday."
Page 28.]
[Illustration: "I hope you found the horse-chestnuts which I put in
the carriage for you. I had a dreadful time climbing up over the
dasher with them."--Page 33.]
II.
My Dear Helen:
I do wish that you and your father would turn around directly, wherever
you are, when you get this letter, and come home as fast as you can. If
you do not come soon there will be no home left for you to come into. I
am so frightened and excited, that my paws tremble, and I have upset the
ink twice, and spilled so much that there is only a little left in the
bottom of the cup, and it is as thick as hasty pudding; so you must
excuse the looks of this letter, and I will tell you as quickly as I can
about the dreadful state of things here. Not more than an hour after I
finished my letter to you, yesterday, I heard a great noise in the
parlor, and ran in to see what was the matter. There was Mary with her
worst blue handkerchief tied over her head, her washing-day gown on, and
a big hammer in her hand. As soon as she saw me, she said, "There's that
cat! Always in my way," and threw a cricket at me, and then shut the
parlor door with a great slam. So I ran out and listened under the
front windows, for I felt sure she was in some bad business she did not
want to have known. Such a noise I never heard: all the things were
being moved; and in a few minutes, what do you think--out came the whole
carpet right on my head! I was nearly stifled with dust, and felt as if
every bone in my body must be broken; but I managed to creep out from
under it, and heard Mary say, "If there isn't that torment of a cat
again! I wish to goodness Helen had taken her along!" Then I felt surer
than ever that some mischief was on foot: and ran out into the garden,
and climbed up the old apple-tree at the foot of the steps, and crawled
out on a branch, from which I could look directly into the parlor
windows. Oh! my dear Helen, you can fancy how I felt, to see all the
chairs and tables and bookshelves in a pile in the middle of the floor,
the books all packed in big baskets, and Mary taking out window after
window as fast as she could. I forgot to tell you that your mother went
away last night. I think she has gone to Hadley to make a visit, and it
looks to me very much as if Mary meant to run away with every thing
which could be moved, before she comes back. After awhile that ugly
Irishwoman, who lives in Mr. Slater's house, came into the back gate:
you know the one I mean,--the one that threw cold water on me last
spring. When I saw her coming I felt sure that she and Mary meant to
kill me, while you were all away; so I jumped down out of the tree, and
split my best claw in my hurry, and ran off into Baker's Grove, and
stayed there all the rest of the day, in dreadful misery from cold and
hunger. There was some snow in the hollows, and I wet my feet, which
always makes me feel wretchedly; and I could not find any thing to eat
except a thin dried-up old mole. They are never good in the spring.
Really, nobody does know what hard lives we cats lead, even the luckiest
of us! After dark, I went home; but Mary had fastened up every door,
even the little one into the back shed. So I had to jump into the cellar
window, which is a thing I never like to do since I got that bad sprain
in my shoulder from coming down on the edge of a milk-pan. I crept up to
the head of the kitchen stairs, as still as a mouse, if I'm any judge,
and listened there for a long time, to try and make out, from Mary's
talk with the Irishwoman, what they were planning to do. But I never
could understand Irish, and although I listened till I had cramps in all
my legs, from being so long in one position, I was no wiser. Even the
things Mary said I could not understand, and I usually understand her
very easily. I passed a very uncomfortable night in the carrot bin. As
soon as I heard Mary coming down the cellar stairs, this morning, I hid
in the arch, and while she was skimming the milk, I slipped upstairs,
and ran into the sitting-room. Every thing there is in the same
confusion; the carpet is gone; and the windows too, and I think some of
the chairs have been carried away. All the china is in great baskets on
the pantry floor; and your father and mother's clothes are all taken out
of the nursery closet, and laid on chairs. It is very dreadful to have
to stand and see all this, and not be able to do any thing. I don't
think I ever fully realized before the disadvantage of being only a cat.
I have just been across the street, and talked it all over with the
Judge's cat, but she is very old and stupid, and so taken up with her
six kittens (who are the ugliest I ever saw), that she does not take
the least interest in her neighbors' affairs. Mrs. Hitchcock walked by
the house this morning, and I ran out to her, and took her dress in my
teeth and pulled it, and did all I could to make her come in, but she
said, "No, no, pussy, I'm not coming in to-day; your mistress is not at
home." I declare I could have cried. I sat down in the middle of the
path, and never stirred for half an hour.
I heard your friend, Hannah Dorrance, say yesterday, that she was going
to write to you to-day, so I shall run up the hill now and carry my
letter to her. I think she will be astonished when she sees me, for I am
very sure that no other cat in town knows how to write. Do come home as
soon as possible.
Your affectionate Pussy.
P. S. Two men have just driven up to the front gate in a great cart, and
they are putting all the carpets into it. Oh dear, oh dear, if I only
knew what to do! And I just heard Mary say to them, "Be as quick as you
can, for I want to get through with this business before the folks come
back."
[Illustration: "I climbed up the old apple-tree, and crawled out on
a branch from which I could look directly into the parlor
windows."--Page 38.]
[Illustration: "I crept up to the head of the kitchen stairs, as
still as a mouse, if I'm any judge, and listened."--Page 40.]
III.
My Dear Helen:
I am too stiff and sore from a terrible fall I have had, to write more
than one line; but I must let you know that my fright was very silly,
and I am very much mortified about it. The house and the things are all
safe; your mother has come home; and I will write, and tell you all,
just as soon as I can use my pen without great pain.
Some new people have come to live in the Nelson house; very nice people,
I think, for they keep their milk in yellow crockery pans. They have
brought with them a splendid black cat whose name is Caesar, and
everybody is talking about him. He has the handsomest whiskers I ever
saw. I do hope I shall be well enough to see him before long, but I
wouldn't have him see me now for any thing.
Your affectionate Pussy.
[Illustration: "They have brought with them a splendid black cat
whose name is Caesar, and everybody is talking about him. He has the
handsomest whiskers I ever saw."--Page 46.]
IV.
My Dear Helen:
There is one thing that cats don't like any better than men and women
do, and that is to make fools of themselves. But a precious fool I made
of myself when I wrote you that long letter about Mary's moving out all
the furniture, and taking the house down. It is very mortifying to have
to tell you how it all turned out, but I know you love me enough to be
sorry that I should have had such a terrible fright for nothing.
It went on from bad to worse for three more days after I wrote you. Your
mother did not come home; and the awful Irishwoman was here all the
time. I did not dare to go near the house, and I do assure you I nearly
starved: I used to lie under the rose-bushes, and watch as well as I
could what was going on: now and then I caught a rat in the barn, but
that sort of hearty food never has agreed with me since I came to live
with you, and became accustomed to a lighter diet. By the third day I
felt too weak and sick to stir: so I lay still all day on the straw in
Charlie's stall; and I really thought, between the hunger and the
anxiety, that I should die. About noon I heard Mary say in the shed, "I
do believe that everlasting cat has taken herself off: it's a good
riddance anyhow, but I should like to know what has become of the plaguy
thing!"
I trembled all over, for if she had come into the barn I know one kick
from her heavy foot would have killed me, and I was quite too weak to
run away. Towards night I heard your dear mother's voice calling, "Poor
pussy, why, poor pussy, where are you?"
I assure you, my dear Helen, people are very much mistaken who say, as I
have often overheard them, that cats have no feeling. If they could only
know how I felt at that moment, they would change their minds. I was
almost too glad to make a sound. It seemed to me that my feet were
fastened to the floor, and that I never could get to her. She took me up
in her arms, and carried me through the kitchen into the sitting-room.
Mary was frying cakes in the kitchen, and as your mother passed by the
stove she said in her sweet voice, "You see I've found poor pussy,
Mary." "Humph," said Mary, "I never thought but that she'd be found fast
enough when she wanted to be!" I knew that this was a lie, because I had
heard what she said in the shed. I do wish I knew what makes her hate me
so: I only wish she knew how I hate her. I really think I shall gnaw her
stockings and shoes some night. It would not be any more than fair; and
she would never suspect me, there are so many mice in her room, for I
never touch one that I think belongs in her closet.
The sitting-room was all in most beautiful order,--a smooth white
something, like the side of a basket, over the whole floor, a beautiful
paper curtain, pink and white, over the fire-place, and white muslin
curtains at the windows. I stood perfectly still in the middle of the
room for some time. I was too surprised to stir. Oh, how I wished that I
could speak, and tell your dear mother all that had happened, and how
the room had looked three days before. Presently she said, "Poor pussy,
I know you are almost starved, aren't you?" and I said "Yes," as plainly
as I could mew it. Then she brought me a big soup-plate full of thick
cream, and some of the most delicious cold hash I ever tasted; and after
I had eaten it all, she took me in her lap, and said, "Poor pussy, we
miss little Helen, don't we?" and she held me in her lap till bed-time.
Then she let me sleep on the foot of her bed: it was one of the happiest
nights of my life. In the middle of the night I was up for a while, and
caught the smallest mouse I ever saw out of the nest. Such little ones
are very tender.
In the morning I had my breakfast with her in the dining-room, which
looks just as nice as the sitting-room. After breakfast Mrs. Hitchcock
came in, and your mother said: "Only think, how fortunate I am; Mary did
all the house-cleaning while I was away. Every room is in perfect order;
all the woollen clothes are put away for the summer. Poor pussy, here,
was frightened out of the house, and I suppose we should all have been
if we had been at home."
Can you imagine how ashamed I felt? I ran under the table and did not
come out again until after Mrs. Hitchcock had gone. But now comes
the saddest part of my story. Soon after this, as I was looking out of
the window, I saw the fattest, most tempting robin on the ground under
the cherry-tree: the windows did not look as if they had any glass in
them, and I took it for granted that it had all been taken out and put
away upstairs, with the andirons and the carpets, for next winter. I
knew that there was no time to be lost if I meant to catch that robin,
so I ran with all my might and tried to jump through. Oh, my dear Helen,
I do not believe you ever had such a bump: I fell back nearly into the
middle of the room; and it seemed to me that I turned completely over at
least six times. The blood streamed out of my nose, and I cut my right
ear very badly against one of the castors of the table. I could not see
nor hear any thing for some minutes. When I came to myself, I found your
dear mother holding me, and wiping my face with her own nice
handkerchief wet in cold water. My right fore-paw was badly bruised, and
that troubles me very much about washing my face, and about writing. But
the worst of all is the condition of my nose. Everybody laughs who sees
me, and I do not blame them; it is twice as large as it used to be, and
I begin to be seriously afraid it will never return to its old shape.
This will be a dreadful affliction: for who does not know that the nose
is the chief beauty of a cat's face? I have got very tired of hearing
the story of my fall told to all the people who come in. They laugh as
if they would kill themselves at it, especially when I do not manage to
get under the table before they look to see how my nose is.
Except for this I should have written to you before, and would write
more now, but my paw aches badly, and one of my eyes is nearly closed
from the swelling of my nose: so I must say good-by.
Your affectionate Pussy.
P. S. I told you about Caesar, did I not, in my last letter? Of course I
do not venture out of the house in my present plight, so I have not seen
him except from the window.
[Illustration: "Can you imagine how ashamed I felt? I ran under the
table and did not come out again until after Mrs. Hitchcock had
gone."--Page 54.]
[Illustration: "I knew that there was no time to be lost if I meant
to catch that robin, so I ran with all my might and tried to jump
through."--Page 55.]
V.
My Dear Helen:
I am sure you must have wondered why I have not written to you for the
last two weeks, but when you hear what I have been through, you will
only wonder that I am alive to write to you at all. I was very glad to
hear your mother say, yesterday, that she had not written to you about
what had happened to me, because it would make you so unhappy. But now
that it is all over, and I am in a fair way to be soon as well as ever,
I think you will like to hear the whole story.
In my last letter I told you about the new black cat, Caesar, who had
come to live in the Nelson house, and how anxious I was to know him. As
soon as my nose was fit to be seen, Judge Dickinson's cat, who is a
good, hospitable old soul, in spite of her stupidity, invited me to tea,
and asked him too. All the other cats were asked to come later in the
evening, and we had a grand frolic, hunting rats in the Judge's great
barn. Caesar is certainly the handsomest and most gentlemanly cat I
ever saw. He paid me great attention: in fact, so much, that one of
those miserable half-starved cats from Mill Valley grew so jealous that
she flew at me and bit my ear till it bled, which broke up the party.
But Caesar went home with me, so I did not care; then we sat and talked a
long time under the nursery window. I was so much occupied in what he
was saying, that I did not hear Mary open the window overhead, and was
therefore terribly frightened when there suddenly came down on us a
whole pailful of water. I was so startled that I lost all presence of
mind; and without bidding him good-night, I jumped directly into the
cellar window by which we were sitting. Oh, my dear Helen, I can never
give you any idea of what followed. Instead of coming down as I expected
to on the cabbages, which were just under that window the last time I
was in the cellar, I found myself sinking, sinking, into some horrible
soft, slimy, sticky substance, which in an instant more would have
closed over my head, and suffocated me; but, fortunately, as I sank, I
felt something hard at one side, and making a great effort, I caught on
it with my claws. It proved to be the side of a barrel, and I succeeded
in getting one paw over the edge of it. There I hung, growing weaker and
weaker every minute, with this frightful stuff running into my eyes and
ears, and choking me with its bad smell. I mewed as loud as I could,
which was not very loud, for whenever I opened my mouth the stuff
trickled into it off my whiskers; but I called to Caesar, who stood in
great distress at the window, and explained to him, as well as I could,
what had happened to me, and begged him to call as loudly as possible;
for if somebody did not come very soon, and take me out, I should
certainly die. He insisted, at first, on jumping down to help me
himself; but I told him that would be the most foolish thing he could
do; if he did, we should certainly both be drowned. So he began to mew
at the top of his voice, and between his mewing and mine, there was
noise enough for a few minutes; then windows began to open, and I heard
your grandfather swearing and throwing out a stick of wood at Caesar;
fortunately he was so near the house that it did not hit him. At last
your grandfather came downstairs, and opened the back door; and Caesar
was so frightened that he ran away, for which I have never thought so
well of him since, though we are still very good friends. When I heard
him running off, and calling back to me, from a distance, that he was so
sorry he could not help me, my courage began to fail, and in a moment
more, I should have let go of the edge of the barrel, and sunk to the
bottom; but luckily your grandfather noticed that there was something
very strange about my mewing, and opened the door at the head of the
cellar stairs, saying, "I do believe the cat is in some trouble down
here." Then I made a great effort and mewed still more piteously. How I
wished I could call out and say, "Yes, indeed, I am; drowning to death,
in I'm sure I don't know what, but something a great deal worse than
water!" However, he understood me as it was, and came down with a lamp.
As soon as he saw me, he set the lamp down on the cellar bottom, and
laughed so that he could hardly move. I thought this was the most cruel
thing I ever heard of. If I had not been, as it were, at death's door, I
should have laughed at him, too, for even with my eyes full of that
dreadful stuff, I could see that he looked very funny in his red
night-cap, and without his teeth. He called out to Mary, and your
mother, who stood at the head of the stairs, "Come down, come down;
here's the cat in the soft-soap barrel!" and then he laughed again, and
they both came down the stairs laughing, even your dear kind mother, who
I never could have believed would laugh at any one in such trouble. They
did not seem to know what to do at first; nobody wanted to touch me; and
I began to be afraid I should drown while they stood looking at me, for
I knew much better than they could how weak I was from holding on to the
edge of the barrel so long. At last your grandfather swore that oath of
his,--you know the one I mean, the one he always swears when he is very
sorry for anybody,--and lifted me out by the nape of my neck, holding me
as far off from him as he could, for the soft soap ran off my legs and
tail in streams. He carried me up into the kitchen, and put me down in
the middle of the floor, and then they all stood round me, and laughed
again, so loud that they waked up the cook, who came running out of
her bedroom with her tin candlestick and a chair in her hand, thinking
that robbers were breaking in. At last your dear mother said, "Poor
pussy, it is too bad to laugh at you, when you are in such pain" (I had
been thinking so for some time). "Mary, bring the small washtub. The
only thing we can do is to wash her."
When I heard this, I almost wished they had left me to drown in the soft
soap; for if there is any thing of which I have a mortal dread, it is
water. However, I was too weak to resist; and they plunged me in all
over, into the tub full of ice-cold water, and Mary began to rub me
with her great rough hands, which, I assure you, are very different from
yours and your mother's. Then they all laughed again to see the white
lather it made; in two minutes the whole tub was as white as the water
under the mill-wheel that you and I have so often been together to see.
You can imagine how my eyes smarted. I burnt my paws once in getting a
piece of beefsteak out of the coals where it had fallen off the
gridiron, but the pain of that was nothing to this. You will hardly
believe me when I tell you that they had to empty the tub and fill it
again ten times before the soap was all washed out of my fur. By that
time I was so cold and exhausted, that I could not move, and they began
to think I should die. But your mother rolled me up in one of your old
flannel petticoats, and made a nice bed for me behind the stove. By this
time even Mary began to seem sorry for me, though she was very cross at
first, and hurt me much more than she need to in washing me; now she
said, "You're nothing but a poor beast of a cat, to be sure; but it's
mesilf that would be sorry to have the little mistress come back, and
find ye kilt." So you see your love for me did me service, even when
you were so far away. I doubt very much whether they would have ever
taken the trouble to nurse me through this sickness, except for your
sake. But I must leave the rest for my next letter. I am not strong
enough yet to write more than two hours at a time.
Your affectionate Pussy.
[Illustration: "Judge Dickinson's cat, who is a good hospitable old
soul, in spite of her stupidity, invited me to tea, and asked Caesar
too."--Page 60.]
[Illustration: "When there suddenly came down on us a whole pailful
of water." Page 61.]
[Illustration: "He lifted me out by the nape of my neck, holding me
as far off from him as he could."--Page 68.]
VI.
My Dear Helen:
I will begin where I left off in my last letter.
As you may imagine, I did not get any sleep that night, not even so much
as a cat's nap, as people say, though how cat's naps differ from men's
and women's naps, I don't know. I shivered all night, and it hurt me
terribly whenever I moved. Early in the morning your grandfather came
downstairs, and when he saw how I looked, he swore again, that same
oath: we all know very well what it means when he swears in that way: it
means that he is going to do all he can for you, and is so sorry, that
he is afraid of seeming too sorry. Don't you remember when you had that
big double tooth pulled out, and he gave you five dollars, how he swore
then? Well, he took me up in his arms, and carried me into the
dining-room; it was quite cool; there was a nice wood fire on the
hearth, and Mary was setting the table for breakfast. He said to her in
a very gruff voice, "Here you, Mary, you go up into the garret and
bring down the cradle."
Sick as I was, I could not help laughing at the sight of her face. It
was enough to make any cat laugh.
"You don't ever mean to say, sir, as you're going to put that cat into
the cradle."
"You do as I tell you," said he, in that most awful tone of his, which
always makes you so afraid. I felt afraid myself, though all the time he
was stroking my head, and saying, "Poor pussy, there, poor pussy, lie
still." In a few minutes Mary came down with the cradle, and set it
down by the fire with such a bang that I wondered it did not break. You
know she always bangs things when she is cross, but I never could see
what good it does. Then your grandfather made up a nice bed in the
cradle, out of Charlie's winter blanket and an old pillow, and laid me
down in it, all rolled up as I was in your petticoat. When your mother
came into the room she laughed almost as hard as she did when she saw me
in the soft-soap barrel, and said, "Why, father, you are rather old to
play cat's cradle!" The old gentleman laughed at this, till the tears
ran down his red cheeks. "Well," he said, "I tell you one thing; the
game will last me till that poor cat gets well again." Then he went
upstairs, and brought down a bottle of something very soft and slippery,
like lard, and put it on my eyes, and it made them feel much better.
After that he gave me some milk into which he had put some of his very
best brandy: that was pretty hard to get down, but I understood enough
of what they had said, to be sure that if I did not take something of
the kind I should never get well. After breakfast I tried to walk, but
my right paw was entirely useless. At first they thought it was broken,
but finally decided that it was only sprained, and must be bandaged. The
bandages were wet with something which smelled so badly it made me feel
very sick, for the first day or two. Cats' noses are much more sensitive
to smells than people's are; but I grew used to it, and it did my poor
lame paw so much good that I would have borne it if it had smelled twice
as badly. For three days I had to lie all the time in the cradle: if
your grandfather caught me out of it, he would swear at me, and put me
back again. Every morning he put the soft white stuff on my eyes, and
changed the bandages on my leg. And, oh, my dear Helen, such good things
as I had to eat! I had almost the same things for my dinner that the
rest of them did: it must be a splendid thing to be a man or a woman! I
do not think I shall ever again be contented to eat in the shed, and
have only the old pieces which nobody wants.
Two things troubled me very much while I was confined to the cradle: one
was that everybody who came in to see your mother laughed as if they
never could stop, at the first sight of me; and the other was that I
heard poor Caesar mewing all around the house, and calling me with all
his might; and I knew he thought I was dead. I tried hard to make your
kind mother notice his crying, for I knew she would be willing to let
him come in and see me, but I could not make her understand. I suppose
she thought it was only some common strolling cat who was hungry. I have
always noticed that people do not observe any difference between one
cat's voice and another's; now they really are just as different as
human voices. Caesar has one of the finest, deepest-toned voices I
ever heard. One day, after I got well enough to be in the kitchen, he
slipped in, between the legs of the butcher's boy who was bringing in
some meat; but before I had time to say one word to him, Mary flew at
him with the broom, and drove him out. However, he saw that I was alive,
and that was something. I am afraid it will be some days yet before I
can see him again, for they do not let me go out at all, and the
bandages are not taken off my leg. The cradle is carried upstairs, and I
sleep on Charlie's blanket behind the stove. I heard your mother say
to-day that she really believed the cat had the rheumatism. I do not
know what that is, but I think I have got it: it hurts me all over when
I walk, and I feel as if I looked like Bill Jacobs's old cat, who, they
say, is older than the oldest man in town; but of course that must be a
slander.
The thing I am most concerned about is my fur; it is coming off in
spots: there is a bare spot on the back of my neck, on the place by
which they lifted me up out of the soap barrel, half as large as your
hand; and whenever I wash myself, I get my mouth full of hairs, which
is very disagreeable. I heard your grandfather say to-day, that he
believed he would try Mrs. Somebody's Hair Restorer on the cat, at which
everybody laughed so that I ran out of the room as fast as I could go, and
then they laughed still harder. I will write you again in a day or two,
and tell you how I am getting on. I hope you will come home soon.
Your affectionate Pussy.
[Illustration: "Then your grandfather made up a nice bed in the
cradle, and laid me down in it."--Page 76.]
[Illustration: "One day he slipped in between the legs of the
butcher boy, but before I had time to say a word to him, Mary flew
at him with the broom."--Page 81.]
VII.
My Dear Helen:
I am so glad to know that you are coming home next week, that I cannot
think of any thing else. There is only one drawback to my pleasure, and
that is, I am so ashamed to have you see me in such a plight. I told
you, in my last letter, that my fur was beginning to come off. Your
grandfather has tried several things of his, which are said to be good
for hair; but they have not had the least effect. For my part I don't
see why they should; fur and hair are two very different things, and I
thought at the outset there was no use in putting on my skin what was
intended for the skin of human heads, and even on them don't seem to
work any great wonders, if I can judge from your grandfather's head,
which you know is as bald and pink and shiny as a baby's. However, he
has been so good to me, that I let him do any thing he likes, and every
day he rubs in some new kind of stuff, which smells a little worse than
the last one. It is utterly impossible for me to get within half a mile
of a rat or a mouse. I might as well fire off a gun to let them know I
am coming, as to go about scented up so that they can smell me a great
deal farther off than they can see me. If it were not for this dreadful
state of my fur, I should be perfectly happy, for I feel much better
than I ever did before in my whole life, and am twice as fat as when you
went away. I try to be resigned to whatever may be in store for me, but
it is very hard to look forward to being a fright all the rest of one's
days. I don't suppose such a thing was ever seen in the world as a cat
without any fur. This morning your grandfather sat looking at me for a
long time and stroking his chin: at last he said, "Do you suppose it
would do any good to shave the cat all over?" At this I could not resist
the impulse to scream, and your mother said, "I do believe the creature
knows whenever we speak about her." Of course I do! Why in the world
shouldn't I! People never seem to observe that cats have ears. I often
think how much more careful they would be if they did. I have many a
time seen them send children out of the room, and leave me behind,
when I knew perfectly well that the children would neither notice nor
understand half so much as I would. There are some houses in which I
lived, before I came to live with you, about which I could tell strange
stories if I chose.
Caesar pretends that he likes the looks of little spots of pink skin,
here and there, in fur; but I know he only does it to save my feelings,
for it isn't in human nature--I mean in cat's nature--that any one
should. You see I spend so much more time in the society of men and
women than of cats, that I find myself constantly using expressions
which sound queerly in a cat's mouth. But you know me well enough to be
sure that every thing I say is perfectly natural. And now, my dear
Helen, I hope I have prepared you to see me looking perfectly hideous. I
only trust that your love for me will not be entirely killed by my
unfortunate appearance. If you do seem to love me less, I shall be
wretched, but I shall still be, always,
Your affectionate Pussy.
End of the Project Gutenberg EBook of Letters from a Cat, by Helen Jackson
Insidious Inside is the fourth album by the all-female band Mumble Rumble, released in 2018 on Latlantide.
Track listing
Notes
Mumble Rumble albums
\section{Introduction}
Separation logic (SL)~\cite{OHearnRY01,Reynolds02} has been actively used
recently to reason about imperative programs that alter data structures.
For example, the static analysis tool Infer~\cite{FacebookInfer} of
Facebook has been using SL to discover critical memory safety bugs in
Android and iOS applications. One of the pivotal features making the
success of SL is the {\em separating conjunction} operator $(*)$, which is
used to describe the separation of computer memory. In particular, the
assertion $p * q$ denotes a memory portion which can be decomposed into two
{\em disjoint\/} sub-portions held by $p$ and $q$, respectively. In
addition, SL is also equipped with the ability for users to define
inductive heap predicates \cite{Reynolds00,BrotherstonDP11,IosifRS13}. The
combination of the separating conjunction and inductive heap predicates
makes SL expressive enough to model various types of recursive data
structures, such as linked lists and trees.
However, this powerful expressiveness also poses challenges in reasoning
about SL entailments. Considerable
research has been conducted on the SL entailment proving problem,
including the works \cite{BrotherstonDP11,BrotherstonGP12,ChuJT15} related
to mathematical induction. In particular, Brotherston et al.
~\cite{BrotherstonDP11,BrotherstonGP12} propose the {\em cyclic proof},
which allows proof trees to contain cycles, and can be perceived as
infinite derivation trees. Furthermore, during the proof derivation, induction
hypotheses are not explicitly identified via applications of induction
rules; instead, they are implicitly obtained via the discovery of valid
cyclic proofs. Consequently, a soundness condition needs to be checked
globally on proof trees. On the other hand, Chu et al.~\cite{ChuJT15} apply
{\em structural induction\/} on inductive heap predicates for proving SL
entailments. During proof search, this technique dynamically uses derived
entailments as induction hypotheses. When applying induction hypotheses, it
performs a local check to ensure that predicates in the target entailments
are substructures of predicates in the entailments captured as hypotheses.
This dynamicity in hypothesis generation enables multiple induction
hypotheses within a single proof path to be exploited; however, it does not
admit hypotheses obtained from different proof paths.
In this work, we develop a sequent-based deductive system for proving SL
entailments by using mathematical induction. Our technique is an instance
of Noetherian induction~\cite{Bundy01}, where we propose a novel induction
principle based on a well-founded relation of SL models. Generally, proof
techniques based on Noetherian induction are often classified into two
categories, i.e., {\em explicit} and {\em implicit}
induction~\cite{Bundy01}, and each of them presents advantages over the
other. We follow the explicit induction methods to implement the induction
principle as inference rules, so that it can be easily integrated into a
deductive system, and the soundness condition can be checked locally in
each application of inference rules. In addition, since the well-founded relation defined
in our induction principle does not depend directly on the substructure
relationship, induction hypotheses gathered in one proof path can be used
for hypothesis applications at other proof paths of the entire proof
tree. Thus, our induction principle also favors {\em mutual
induction}, a natural feature of {\em implicit induction}, in which the
goal entailment and other entailments derived during the proof search can be
used as hypotheses to prove each other. Our proof technique, therefore,
does not restrict induction hypotheses to be collected from only one proof
path, but rather from all derived paths of the proof tree.
{\bf Related work.} The entailment proving problem in SL has been actively
studied recently. Various sound and complete techniques have been
introduced,
but they deal with only {\em pre-defined}
inductive heap predicates, whose definitions and semantics are given in
advance
\cite{BerdineCO04,BerdineCO05,CookHOPW11,PerezR13,PerezR11,BozgaIP10,PiskacWZ13,PiskacWZ14}. Since these techniques are designed for only certain classes of
pre-defined predicates, they are not suitable for handling general
inductive heap predicates.
Iosif et al.~\cite{IosifRS13,IosifRV14} and Enea et al.~\cite{EneaLSV14}
aim to prove entailments in more general SL fragments by translating SL
assertions into tree automata. However, these approaches still have certain
restrictions on inductive heap predicates: for example, the predicates must have
the {\em bounded tree width} property, or must be variants of linked-list
structures.
Proof techniques proposed by Nguyen et al.
\cite{NguyenDQC07,NguyenC08,ChinDNQ12}
and by Madhusudan et al.~\cite{QiuGSM13} can prove SL
entailments with {\em general} inductive heap predicates. Nonetheless,
these techniques are semi-automated since users are required to provide
supplementary lemmas to assist in handling those predicates.
In~\cite{EneaSW15}, Enea et al. develop a mechanism to automatically
synthesize these supporting lemmas, but solely limited to certain kinds of
lemmas, i.e., {\em composition lemmas}, {\em completion lemmas} and {\em
stronger lemmas\/}.
Cyclic proof~\cite{BrotherstonDP11,BrotherstonGP12} and the induction proof
in~\cite{ChuJT15} are most closely related to our approach. Recall the
aforementioned comments: cyclic proof requires its soundness condition to
be checked globally on proof trees, whereas the proof technique
in~\cite{ChuJT15} does not allow induction hypotheses collected from one
path of the proof tree to be used to prove entailments in other paths. Our
work differs from both: we not only allow the soundness condition to be
checked locally at the inference-rule level, but also support mutual induction,
where entailments from different proof paths can be used as hypotheses to
prove each other.
{\bf Contribution.}
Our contributions in this work are summarized as follows:
\begin{enumerate}
\setlength{\itemsep}{1pt}\setlength{\parskip}{3pt}
\item[--] We define a well-founded relation on SL models and use it to
construct a novel mutual induction principle for proving SL entailments.
\item[--] We develop a deductive system for proving SL
entailments based on the proposed mutual induction principle, and prove
soundness of the proof system.
\item[--] We implement a prototype prover, named \textsf{Songbird}, and
experiment on it with benchmarks of handcrafted entailments as well as
entailments collected from \textsf{SL-COMP}, an SL competition. Our prover
is available for both online use and download at: \sburl.
\end{enumerate}
\section{Motivating Example}
\label{sec:Motivation}
We consider the procedure $\code{traverse}$ in Fig. \ref{fig:ProgramRandomJump},
which traverses a linked list in an unusual way, by randomly jumping either one
or two steps at a time. In order to verify memory safety of this program,
automated verification tools such as~\cite{CalcagnoDOY09,LeGQC14} will first
formulate the shape of the computer memory manipulated by $\code{traverse}$.
Suppose the initially discovered shape is represented by an {\em inductive heap
predicate} $\hformp{\predTmp}{x}$ in SL, defined as:
\begin{wrapfigure}[11]{r}{0.48\textwidth}
\def\indent{\hspace*{2em}}
\centering
\begin{tabular}{l}
$\code{struct~\,node~\,\{\,struct~\,node~\, \cpointer{next};\}}$ \\
\\[-0.8em]
$\code{void~\,traverse\,(\,struct~\,node~\,\cpointer{x}\,)\,\{}$ \\
$\code{\indent if ~\, (\,x \shorteqeq NULL\,) ~\, return;}$ \\
$\code{\indent bool ~\, jump \,\shorteq\, random();}$ \\
$\code{\indent if ~\, (jump ~\,\&\&~\, x{\rightarrow}next \,\shortneq\, NULL)}$ \\
$\code{\indent\indent
traverse(x{\rightarrow}next{\rightarrow}next);}$ \\
$\code{\indent else ~\, traverse(x{\rightarrow}next); \,\}}$
\end{tabular}
\caption{A linked-list traversal algorithm with random jump}
\label{fig:ProgramRandomJump}
\end{wrapfigure}
\begin{center}
\begin{tabular}{c}
$\hformp{\predTmp} {x}
\,~\triangleq~\,
\predEmp
~\mtor~
\exists u. (\hformn{x}{u} * \hformp{\predTmp}{u})
~\mtor~
\exists u, v. (\hformn{x}{u} * \hformn{u}{v} * \hformp{\predTmp}{v})$
\end{tabular}
\end{center}
Intuitively, $\hformp{\predTmp}{x}$ covers three possible cases of the shape,
which can be an empty memory $\predEmp$ (when $\code{x \shorteqeq NULL}$), or be
recursively expanded by a single data structure $\hformn{x}{u}$ (when
$\code{traverse}$ jumps one step), or be recursively expanded by two
structures $\hformn{x}{u}$ and $\hformn{u}{v}$ (when $\code{traverse}$ jumps two
steps). Note that $\hformn{x}{u}$ and $\hformn{u}{v}$ are SL predicates modeling
the data structure $\code{node}$. Details about the SL syntax will be explained in
Section \ref{sec:Background}.
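To build intuition for why $\code{traverse}$ is memory-safe on NULL-terminated lists, the procedure can be mirrored in an executable sketch (Python; the list encoding, function names, and the visit count are our own additions for illustration, not part of the verification problem):

```python
import random

def make_list(n):
    """Build a NULL-terminated singly linked list of n nodes.
    A node is a dict {'next': node-or-None}; None plays the role of NULL."""
    head = None
    for _ in range(n):
        head = {'next': head}
    return head

def traverse(node):
    """Mirror of the C procedure: randomly jump one or two nodes ahead.
    Returns the number of nodes visited (added for testing only)."""
    if node is None:
        return 0
    jump = random.random() < 0.5
    if jump and node['next'] is not None:
        return 1 + traverse(node['next']['next'])
    return 1 + traverse(node['next'])
```

On a list of $n$ nodes each call consumes one or two nodes, so the traversal always terminates after between $\lceil n/2 \rceil$ and $n$ visits and never dereferences NULL.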
Since the derived shape is anomalous, the verifiers or users may want to
examine if it is actually a linked \underline{{\bf l}}ist
\underline{{\bf s}}egment, modeled by the following predicate:
\begin{center}
\begin{tabular}{c}
$\hformp{\predLs}{x,y}
\,~{\triangleq}~\, (\predEmp \wedge x{=}y) \mtor \exists w. (\hformn{x}{w} *
\hformp{\predLs}{w,y})$
\end{tabular}
\end{center}
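As a sanity check of the intended semantics, the two definition cases of $\predLs$ can be turned into an executable satisfaction test on concrete heaps (a Python sketch; the heap encoding as a dict mapping an address to its successor is our own and stands outside the formalism of the paper):

```python
def sat_ls(heap, x, y):
    """Does the WHOLE heap (dict: address -> next) model a list
    segment from x to y? Mirrors the two cases of ls."""
    if not heap:                      # base case: emp /\ x = y
        return x == y
    if x in heap:                     # inductive case: x |-> w * ls(w, y)
        rest = {a: b for a, b in heap.items() if a != x}
        return sat_ls(rest, heap[x], y)
    return False
```

For example, the heap $\{1 \mapsto 2, 2 \mapsto 3\}$ is a segment from 1 to 3, but not from 1 to 2, since the separating conjunction forces the whole heap to be consumed.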
This can be done by checking the validity of the following entailment:
\begin{center}
\begin{tabular}{c}
$E ~\triangleq ~\hformp{\predTmp}{x} \entails \exists y.\,\hformp{\predLs}{x,y}$
\end{tabular}
\end{center}
In the semantics of SL, the entailment $E$ is said to be {\em valid}, if all
memory models satisfying $\hformp{\predTmp}{x}$ also satisfy $\exists
y.\,\hformp{\predLs}{x,y}$. To prove it by induction, $E$ is first recorded as
an induction hypothesis (IH), then the predicate $\hformp{\predTmp}{x}$ is
analyzed in each case of its definition, via a method called unfolding, to
derive new entailments $E_1,E_2,E_3$ as follows.
\begin{center}
\begin{tabular}{c}
$E_1 ~\triangleq~ \predEmp \entails
\exists y.\,\hformp{\predLs}{x,y}$ \qquad
$E_2 ~\triangleq~ \hformn{x}{u} \,{*}\, \hformp{\predTmp}{u}
\entails \exists y.\,\hformp{\predLs}{x,y}$\\[5pt]
$E_3 ~\triangleq~ \hformn{x}{u} * \hformn{u}{v} * \hformp{\predTmp}{v}
\entails \exists y.\,\hformp{\predLs}{x,y}$
\end{tabular}
\end{center}
The entailment $E_1$ can be easily proved by unfolding the predicate
$\hformp{\predLs}{x,y}$ on the right-hand side
by its base case to obtain a valid entailment $\predEmp \entails \exists
y. (\predEmp \wedge x=y)$. In contrast, the entailment $E_2$ can only be
proved by using the induction hypothesis $E$. Its (simplified) proof tree can be
depicted in Fig. \ref{fig:InductionProofTree}.
\begin{figure}[ht]
\begin{prooftree}
\def\ScoreOverhang{0em}
\def\defaultHypSeparation{\hskip 2em}
\AxiomC{{\small{$ $}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePureEntail)$:
Valid, proved by external provers, e.g. Z3.
}}}
\UnaryInfC{{\small{$
true
\entails
\exists y,w.\,(u{=}w \wedge t{=}y)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarPred)$:
Match and remove predicates $\hformp{\predLs}{u,t}$
and $\hformp{\predLs}{w,y}$.
}}}
\UnaryInfC{{\small{$
\hformp{\predLs}{u,t}
\entails
\exists y,w.\,(\hformp{\predLs}{w,y} \wedge u{=}w)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarData)$:
Match and remove data nodes $\hformn{x}{u}$
and $\hformn{x}{w}$.
}}}
\UnaryInfC{{\small{$
\hformn{x}{u} \,{*}\, \hformp{\predLs}{u,t}
\entails
\exists y,w.\,(\hformn{x}{w} \,{*}\, \hformp{\predLs}{w,y})
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePredIntroRight)$:
Unfold $\hformp{\predLs}{x,y}$
by its inductive case.
}}}
\UnaryInfC{{\small{$
(E_4) ~ \hformn{x}{u} \,{*}\, \hformp{\predLs}{u,t}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\def\extraVskip{3pt}
\RightLabel{
\scriptsize{
$(\ruleHypo)$:
Apply IH $E$ with subst. $[u/x]$,
rename $y$ to fresh $t$}.}
\UnaryInfC{{\small{$
(E_2) ~ \hformn{x}{u} \,{*}\, \hformp{\predTmp}{u}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\end{prooftree}
\caption{Proof tree of $E_2$, using induction hypothesis $E$}
\label{fig:InductionProofTree}
\end{figure}
We can also prove $E_3$ by the same method,
i.e., applying the IH $E$, and its proof tree
is shown in Fig. \ref{fig:InductionProofTreeThree}.
\begin{figure}[h]
\begin{prooftree}
\def\ScoreOverhang{0em}
\def\defaultHypSeparation{\hskip 2em}
\AxiomC{{\small{$ $}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePureEntail)$:
Valid, proved by external prover, e.g. Z3.
}}}
\UnaryInfC{{\small{$
true
\entails
\exists y,z,w.\,(u{=}z \wedge v{=}w \wedge t{=}y)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarPred)$:
Remove predicates $\hformp{\predLs}{v,t}$
and $\hformp{\predLs}{w,y}$.
}}}
\UnaryInfC{{\small{$
\hformp{\predLs}{v,t}
\entails
\exists y,z,w.\,(\hformp{\predLs}{w,y} \wedge u{=}z \wedge v{=}w)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarData)$:
Remove data nodes $\hformn{u}{v}$
and $\hformn{z}{w}$.
}}}
\UnaryInfC{{\small{$
\hformn{u}{v} * \hformp{\predLs}{v,t}
\entails
\exists y,z,w.\,(\hformn{z}{w} * \hformp{\predLs}{w,y} \wedge u{=}z)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePredIntroRight)$:
Unfolding $\hformp{\predLs}{z,y}$
by inductive case.
}}}
\UnaryInfC{{\small{$
\hformn{u}{v} * \hformp{\predLs}{v,t}
\entails
\exists y,z.\,(\hformp{\predLs}{z,y} \wedge u{=}z)
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarData)$:
Remove data nodes $\hformn{x}{u}$
and $\hformn{x}{z}$.
}}}
\UnaryInfC{{\small{$
\hformn{x}{u} * \hformn{u}{v} * \hformp{\predLs}{v,t}
\entails
\exists y,z.\,(\hformn{x}{z} * \hformp{\predLs}{z,y})
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePredIntroRight)$:
Unfold $\hformp{\predLs}{x,y}$
by inductive case.
}}}
\UnaryInfC{{\small{$
\hformn{x}{u} * \hformn{u}{v} * \hformp{\predLs}{v,t}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{$(\ruleHypo)$:~\parbox{24em}{
Apply IH $E$ with substitution $[v/x]$, \\
and rename $y$ to $t$
}}}
\UnaryInfC{{\small{$
(E_3) ~ \hformn{x}{u} * \hformn{u}{v} * \hformp{\predTmp}{v}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\end{prooftree}
\caption{Ordinary proof tree of $E_3$, using induction hypothesis $E$}
\label{fig:InductionProofTreeThree}
\end{figure}
Using a different strategy, we observe that once $E_2$ is proved, entailments
derived during its proof, i.e., $E_2$ and $E_4$, can be used as hypotheses to
prove $E_3$. In this case, the new proof of $E_3$ is much simpler than the above
original induction proof, as demonstrated in Fig. \ref{fig:MutualProofTree}; the
proving process, therefore, is more efficient.
In the new proof tree, the entailment $E_4$ can be directly used as a hypothesis
to prove other entailments since it is already proven {\em valid} (see Fig.
\ref{fig:InductionProofTree}). However, when $E_2$ is applied to prove $E_3$,
thus prove $E$, it is not straightforward to conclude about $E$,
since the validity of $E_2$ is still {\em unknown}.
This is because the proof of $E_2$ in Fig.
\ref{fig:InductionProofTree} also uses $E$ as a hypothesis. Therefore,
$E$ and $E_2$ jointly form a {\em mutual induction} proof, in which
they can be used to prove each other. The theoretical principle of this proof
technique will be introduced in Section \ref{sec:MutualInduction}.
\begin{figure}
\begin{prooftree}
\def\defaultHypSeparation{\hskip 2em}
\def\ScoreOverhang{0em}
\AxiomC{{\small{$ $}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\rulePureEntail)$:
Valid, proved by external provers, e.g., Z3.
}}}
\UnaryInfC{{\small{$
true
\entails
\exists y.\,y=z
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{{
$(\ruleStarPred)$:
Remove predicates $\hformp{\predLs}{x,z}$
and $\hformp{\predLs}{x,y}$.
}}}
\UnaryInfC{{\small{$
\hformp{\predLs}{x,z}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{$(\ruleHypo)$:
Apply $E_4$ with subst. $[r/t]$,
and rename $y$ to $z$.
}}
\UnaryInfC{{\small{$
\hformn{x}{u} \,{*}\, \hformp{\predLs}{u,r}
\entails
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\def\extraVskip{3pt}
\RightLabel{\scriptsize{$(\ruleHypo)$:~\parbox{24em}{
Apply hypothesis $E_2$ with subst. $[u{/}x,v{/}u]$,\\
and rename $y$ to $r$.
}}}
\UnaryInfC{{\small{$
(E_3) ~ \hformn{x}{u} \,{*}\, \hformn{u}{v} \,{*}\, \hformp{\predTmp}{v}
\,{\entails}\,
\exists y.\,\hformp{\predLs}{x,y}
$}}}
\end{prooftree}
\caption{New proof tree of $E_3$, using hypotheses $E_2$ and $E_4$}
\label{fig:MutualProofTree}
\end{figure}
\section{Theoretical background}
\label{sec:Background}
In this work, we consider the {\em symbolic-heap} fragment of separation logic
with arbitrary user-defined inductive heap predicates. We denote this logic
fragment as $\theorySLSH$. It is similar to those introduced in
\cite{IosifRS13,BrotherstonGKR16}, but extended with linear arithmetic
($\theoryLA$) to describe more expressive properties of the data structures,
such as size or sortedness. The syntax and semantics of the $\theorySLSH$
assertions and their entailments are introduced in this section.
\subsection{Symbolic-heap Separation Logic}
{\bf Syntax.} The syntax of our considered separation logic fragment
$\theorySLSH$ is described in Fig. \ref{fig:SyntaxMSSL}. In
particular, the predicate $\predEmp$ represents an {\em empty}
memory. The {\em singleton} heap predicate
$\hformnt{\varSort}{x}{x_1{,}{...}{,}x_n}$ models a single $n$-field
data structure in memory to which $x$ points; its data type is
represented by a unique {\em sort} $\varSort$\footnote{For simplicity
of presentation, the motivating example omits the sort $\varSort$ from
the SL singleton heap predicate denoting the data structure
$\code{node}$.} and values of its fields are
captured by $x_1{,}{...}{,}x_n$. The {\em inductive} heap predicate
$\hformp{\predP}{x_1{,}{...}{,}x_n}$ models a recursively defined data
structure, which is formally defined in Definition
\ref{def:InductivePredicate}. These three heap predicates, called
{\em spatial atoms}, compose the {\em spatial} assertions $\Sigma$ via
the separating conjunction operator $*$. $\Pi$ denotes {\em pure}
assertions in linear arithmetic, which do not contain any spatial
atoms.
\setlength{\intextsep}{0em}
\begin{definition}[Inductive heap predicate]
\label{def:InductivePredicate}
A system of $k$ inductive heap predicates
$\predPi$ of arity $n_i$
and parameters $x^i_1,...,x^i_{n_i}$, with $i = 1, ..., k$,
is syntactically defined as follows:
\setlength{\intextsep}{0em}
\begin{center}
\begin{tabular}{l}
$\Big\{\, \hformp{\predPi}{x^i_1, ..., x^i_{n_i}}$
~~$\triangleq$~~
$\formF^i_1(x^i_1, ..., x^i_{n_i})
\,\mtor \dots \mtor\,
\formF^i_{m_i}(x^i_1, ..., x^i_{n_i}) \,\Big\}^k_{i\,=\,1}$
\end{tabular}
\end{center}
where $\formF^i_j(x^i_1,...,x^i_{n_i})$, with $1 \,{\leq}\, j
\,{\leq}\, m_i$, is a {\em definition case} of
$\hformp{\predPi}{x^i_1,...,x^i_{n_i}}$.
Moreover, $\formF^i_j$ is a {\em base case}
of $\predPi$, if it does not contain any predicate symbol which is (mutually)
recursively defined with $\predPi$; otherwise, it is an {\em inductive case}.
\end{definition}
\begin{figure}[h]
\begin{tabular}{c}
\begin{minipage}{0.45\textwidth}
\begin{adjustwidth}{0em}{}
\begin{tabular}{m{1em}m{1.5em}m{22em}l}
\multicolumn{4}{l}{
$c, x,\varSort,\predP$ resp. denote constants,
variables, data sorts, and predicate symbols.
}\\[2pt]
$e$
& $\defBNF/$
& $c~|~x~|~{-}e~|~e_1{+}e_2~|~e_1{-}e_2$
& Integer expressions \\[2pt]
$a$
& $\defBNF/$
& $\valNil~|~x~$
& Spatial expressions \\[2pt]
$\Pi$
& $\defBNF/$
& $a_1 = a_2~|~a_1 \neq a_2~|~e_1 = e_2~|~e_1 \neq e_2~|$
& Pure assertions \\[2pt]
{} & & \multicolumn{2}{l}{
$e_1 > e_2~|~e_1 \geq e_2~|~e_1 < e_2~|~e_1 \leq e_2~|$}\\
{} & & \multicolumn{2}{l}{
$\neg\Pi~|~\Pi_1 \,{\wedge}\, \Pi_2~|~\Pi_1 \,{\vee}\, \Pi_2~
|~\Pi_1 \,{\Rightarrow}\, \Pi_2~|~
\forall x .\Pi~|~\exists x .\Pi$} \\[2pt]
$\Sigma$
& $\defBNF/$
& $\predEmp~|~\hformnt{\varSort}{x}{x_1{,}{...}{,}x_n}~
|~ \hformp{\predP}{x_1{,}{...}{,}x_n}~
|~ \Sigma_1 * \Sigma_2$
& Spatial assertions\\[2pt]
$F$
& $\defBNF/$
& $\Sigma ~|~ \Pi ~|~ \Sigma \wedge \Pi ~|~\exists x. F$
& $\theorySLSH$ assertions
\end{tabular}
\end{adjustwidth}
\end{minipage}
\end{tabular}
\caption{Syntax of assertions in $\theorySLSH$}
\label{fig:SyntaxMSSL}
\end{figure}
\begin{definition}[Syntactic equivalence]
\label{def:SyntacticEquiv}
The syntactical equivalence relation
of two spatial assertions $\Sigma_1$ and $\Sigma_2$,
denoted as $\Sigma_1 \synequiv \Sigma_2$,
is recursively defined as follows:
\begin{tabular}{lll}
-- $\predEmp \synequiv \predEmp$ \quad\quad
&
-- $\hformnt{\varSort}{u}{v_1{,}...{,}v_n} \synequiv
\hformnt{\varSort}{u}{v_1{,}...{,}v_n}$ \quad\quad
&
-- $\hformp{\predP}{u_1{,}...{,}u_n} \synequiv
\hformp{\predP}{u_1{,}...{,}u_n}$ \\[5pt]
\multicolumn{3}{l}{
-- If $\Sigma_1 \synequiv \Sigma'_1$ and $\Sigma_2 \synequiv \Sigma'_2$,
then
$\Sigma_1 * \Sigma_2 \synequiv \Sigma'_1 * \Sigma'_2$
and
$\Sigma_1 * \Sigma_2 \synequiv \Sigma'_2 * \Sigma'_1$
}
\end{tabular}
\end{definition}
{\bf Semantics.}
The semantics of $\theorySLSH$ assertions are given in Fig. \ref{fig:SemanticsMSSL}.
Given a set $\setVar$ of variables, $\setSort$ of sorts, $\setVal$ of values
and $\setLoc \subset \setVal$ of memory addresses,
a model of an assertion consists of:
\begin{itemize}
\item a {\em stack} model $s$, which is a function
$s{:} ~ \setVar \rightarrow \setVal$. We write \evalForm{\Pi}{s}
to denote the valuation of a pure
assertion $\Pi$ under the stack model $s$.
Note that the constant $\valNil \in \setVal \setminus \setLoc$
denotes the dangling memory address.
\item a {\em heap} model $h$, which is a partial function
$h{:} ~ \setLoc \rightharpoonup_{\code{fin}}
(\setSort \rightarrow (\setVal ~ \code{list}))$.
$\funcDom{h}$ denotes the domain of $h$, and
$\cardFunc{h}$ is the cardinality of $\funcDom{h}$.
We follow Reynolds' semantics~\cite{Reynolds08} to
consider {\em finite} heap models, i.e., $\cardFunc{h} \,{<}\, \infty$.
$h \disjoins h'$ indicates that $h$ and $h'$ have disjoint domains,
i.e., $\funcDom{h} \,{\cap}\, \funcDom{h'} \,{=}\, \setempty$, and
$h \hunions h'$ is the union of the two heap models $h, h'$,
defined only when they are disjoint, i.e., $h \disjoins h'$.
\end{itemize}
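Operationally, the disjointness and union of heap models behave like the following sketch over heaps encoded as Python dicts (the function names are ours; illustrative only):

```python
def disjoint(h1, h2):
    """h1 and h2 have disjoint domains: dom(h1) /\ dom(h2) = {}."""
    return not (h1.keys() & h2.keys())

def hunion(h1, h2):
    """Union of two heap models, defined only when they are disjoint."""
    if not disjoint(h1, h2):
        raise ValueError("overlapping heap domains: union is undefined")
    return {**h1, **h2}
```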
\subsection{Entailments in $\theorySLSH$}
In this section, we formally define the $\theorySLSH$ entailments
and introduce a new concept of {\em model of entailments},
which will be used in the next section to construct
the well-founded relation in our induction principle.
\begin{definition}[Entailment]
An entailment between two assertions $F$ and $G$,
denoted as $F \entails G$,
is said to be {\em valid} (holds), iff
$s,h \satisfies F$ implies that $s,h \satisfies G$,
for all models $s,h$.
Formally,\\[0.5em]
\centerline{
$F \entails G$ is valid, iff~
$\forall s,h.(s,h \satisfies F \mtimply s,h \satisfies G)$
}
\end{definition}
Here, $F$ and $G$ are respectively called the {\em antecedent}
and the {\em consequent} of the entailment.
For simplicity, the entailment $F \entails G$
can be denoted by just $E$, i.e., $E \triangleq F \entails G$.
\begin{figure}[ht]
\begin{tabular}{c}
\begin{minipage}{\textwidth}
\begin{adjustwidth}{1em}{}
\begin{tabular}{m{10em}m{1em}l}
$s,h \satisfies \Pi$
& \hspace{-1.2em} iff
& \hspace{-0.6em} $\evalForm{\Pi}{s}\,{=}\,\valTrue$
and $\funcDom{h}\,{=}\,\setempty$
\\ [0.2em]
$s,h \satisfies \predEmp$
& \hspace{-1.2em} iff
& \hspace{-0.6em} $\funcDom{h}\,{=}\,\setempty$
\\ [0.2em]
$s,h \satisfies \hformnt{\varSort}{x}{x_1{,}{...}{,}x_n}$
& \hspace{-1.2em} iff
& $\evalVar{x}{s} {\in} \setLoc$
and $\funcDom{h}\,{=}\,\{ \evalVar{x}{s} \}$
\\[0.2em]
{}
&
& \hspace*{4em} and $h(\evalVar{x}{s})\varSort\,{=}\,
(\evalVar{x_1}{s}, ..., \evalVar{x_n}{s})$
\\[0.2em]
$s,h \satisfies \hformp{\predP}{x_1{,}{...}{,}x_n}$
& \hspace{-1.2em} iff
& $s,h \satisfies \hformp{R_i}{x_1{,}{...}{,}x_n}$,
& where $\hformp{R_i}{x_1{,}{...}{,}x_n}$ is one of
\\[0.2em]
{}
&
& \hspace*{4em} the definition cases of $\hformp{\predP}{x_1{,}{...}{,}x_n}$
\\[0.2em]
$s,h \satisfies \Sigma_1 * \Sigma_2$
& \hspace{-1.2em} iff
& there exist $h_1, h_2$ such that: $h_1 \disjoins h_2$,
$h_1 \hunions h_2\,{=}\,h$
\\[0.2em]
{}
&
& \hspace*{4em} and $s,h_1 \satisfies \Sigma_1$ and
$s,h_2 \satisfies \Sigma_2$
\\[0.2em]
$s,h \satisfies \Sigma \wedge \Pi$
& \hspace{-1.2em} iff
& $\evalForm{\Pi}{s}\,{=}\,\valTrue$ and $s,h \satisfies \Sigma$
\\[0.2em]
$s,h \satisfies \exists x .F$
& \hspace{-1.2em} iff
& $\exists v \,{\in}\, \setVal \, . \, [s|x{:}v],h \satisfies F$
\\[0.2em]
\end{tabular}
\end{adjustwidth}
\end{minipage}
\end{tabular}
\caption{Semantics of assertions in $\theorySLSH$.
$[f|x{:}y]$ is a function like $f$ except that it returns $y$
for input $x$.}
\label{fig:SemanticsMSSL}
\end{figure}
\setlength{\intextsep}{0em}
\begin{definition}[Model and counter-model]
\label{def:ModelCounterModel}
Given an entailment $E \triangleq F \entails G$, an SL model $s,h$ is called a
{\em model} of $E$, iff $s,h \satisfies F$ implies $s,h \satisfies G$. On the
contrary, $s,h$ is called a {\em counter-model} of $E$, iff $s,h \satisfies F$
and $s,h \nsatisfies G$.
\end{definition}
We denote $s,h \satisfies (F \entails G)$, or $s,h \satisfies E$, if $s,h$ is a
model of $E$. Similarly, we write $s,h \nsatisfies (F \entails G)$, or $s,h
\nsatisfies E$, if $s,h$ is a counter-model of $E$. Given a list of $n$
entailments $E_1,...,E_n$, we write $s,h \satisfies E_1,...,E_n$ if $s,h$ is a
model of {\em all} $E_1,...,E_n$, and $s,h \nsatisfies E_1,...,E_n$ if $s,h$ is
a counter-model of {\em some} $E_1,...,E_n$.
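Definition \ref{def:ModelCounterModel} suggests a simple, necessarily incomplete way to refute an entailment: enumerate small models and test both sides. The sketch below (Python; the heap and predicate encodings are our own) finds a counter-model of the invalid entailment $\hformp{\predLs}{x,y} \entails \predEmp \wedge x{=}y$:

```python
from itertools import product

def sat_ls(heap, x, y):
    """The whole heap (dict: address -> next) is a segment from x to y."""
    if not heap:
        return x == y
    if x in heap:
        rest = {a: b for a, b in heap.items() if a != x}
        return sat_ls(rest, heap[x], y)
    return False

def find_counter_model(addresses=(1, 2)):
    """Search tiny models for s,h |= ls(x,y) and s,h |/= emp /\ x = y."""
    heaps = [{}] + [{a: b} for a, b in product(addresses, repeat=2)]
    for x, y in product(addresses, repeat=2):
        for h in heaps:
            if sat_ls(h, x, y) and not (h == {} and x == y):
                return h, x, y        # counter-model found
    return None
```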
\section{Mutual induction proof for separation logic entailment using model order}
\label{sec:MutualInduction}
In this section, we first introduce the general schema of
{\em Noetherian induction, a.k.a. well-founded induction,}
and then apply it in proving SL entailments.
{\bf Noetherian induction~\cite{Bundy01}}.
Given a conjecture $\mathcal{P}(\alpha)$,
where $\alpha$ is a structure of type $\tau$,
the general schema of Noetherian induction on the structure $\alpha$ is
\setlength{\intextsep}{0.5em}
\begin{figure}[ht]
\begin{prooftree}
\AxiomC{$
\forall \alpha\,{:}\,\tau.~ (\forall \beta\,{:}\,\tau. ~
\beta\,{\prec_{\tau}}\,\alpha \mtimply \mathcal{P}(\beta))
\mtimply \mathcal{P}(\alpha)
$}
\UnaryInfC{$ \forall \alpha\,{:}\,\tau.~ \mathcal{P}(\alpha) $}
\end{prooftree}
\end{figure}
where $\prec_{\tau}$ is a well-founded relation on $\tau$, i.e., there is no
infinite descending chain, like $... \prec_{\tau} \alpha_n \prec_{\tau} ...
\prec_{\tau} \alpha_2 \prec_{\tau} \alpha_1$. Noetherian induction can be
applied for arbitrary type $\tau$, such as data structures or control flow.
However, success in proving a conjecture by induction is highly dependent on the
choice of the induction variable $\alpha$ and the well-founded relation
$\prec_{\tau}$.
{\bf Proving SL entailments using Noetherian induction}. We observe that an SL
entailment $E$ is said to be {\em valid} if $s,h \satisfies E$ for all models
$s,h$, given that heap domains are finite, i.e., $\forall h.\, |h| \in \setNats$,
according to Reynolds' semantics~\cite{Reynolds08}. This inspires us to define a
well-founded relation among SL models, called the {\em model order}, by comparing
the sizes of their heap domains. To prove an SL entailment by Noetherian induction
based on this order, we show that if every model satisfies the entailment
whenever all smaller models do, then
the entailment is satisfied by all models, and is therefore valid. The model order and
induction principle are formally described as follows.
\begin{definition}[Model order]
The {\em model order}, denoted by $\ltmodel$, of SL models is a
binary relation defined as: $s_1,h_1 \ltmodel s_2,h_2$, if $|h_1| < |h_2|$.
\end{definition}
\begin{theorem}[Well-founded relation]
The model order $\ltmodel$ of SL models is a well-founded relation.
\end{theorem}
\begin{proof}
By contradiction, suppose that $\ltmodel$ were not well-founded, then there
would exist an infinite descending chain: $... \ltmodel s_n,h_n \ltmodel ...
\ltmodel s_1,h_1$. It follows that there would exist an
infinite descending chain: $... < |h_n| < ... < |h_1|$. This is
impossible since the domain size of every heap model is a natural number, i.e., $|h_1|, ...,
|h_n|, ... \in \setNats$. \qedhere
\end{proof}
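Concretely, the model order compares only natural numbers, so every strictly descending chain starting from a heap of size $n$ has at most $n+1$ elements. A Python sketch (heaps encoded as dicts, which is our own choice; stacks are irrelevant to the order):

```python
def lt_model(h1, h2):
    """(s1,h1) < (s2,h2) in the model order iff |dom(h1)| < |dom(h2)|."""
    return len(h1) < len(h2)

def descending_chain(h):
    """A maximal strictly descending chain below h, obtained by dropping
    one cell at a time; its length |h| + 1 witnesses well-foundedness."""
    chain = [dict(h)]
    while chain[-1]:
        smaller = dict(chain[-1])
        smaller.popitem()
        chain.append(smaller)
    return chain
```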
\begin{theorem}[Induction principle]
\label{thm:InductionPrinciple}
An entailment $E$ is valid if, for all models $s,h$,
the following holds:
$(\forall s',h'.~ s',h' \ltmodel s,h \mtimply \satentail{s',h'}{E})
\mtimply \satentail{s,h}{E}$. Formally:
\begin{prooftree}
\AxiomC{$
\forall s,h.~ (\forall s',h'.~
s',h' \ltmodel s,h \mtimply \satentail{s',h'}{E})
\mtimply \satentail{s,h}{E}
$}
\UnaryInfC{$
\forall s,h.~ \satentail{s,h}{E}
$}
\end{prooftree}
\end{theorem}
Since our induction principle is constructed on the SL model order, an induction
hypothesis can be used in the proof of any entailment whenever the decreasing
condition on model order is satisfied. This flexibility allows us to extend the
aforementioned principle to support {\em mutual induction}, in which multiple
entailments can participate in an induction proof, and each of them can be used
as a hypothesis to prove the other. In the following, we will introduce our {\em
mutual induction principle}. Note that the induction principle in Theorem
\ref{thm:InductionPrinciple} is an instance of this principle, when only one
entailment takes part in the induction proof.
\begin{theorem}[Mutual induction principle]
\label{thm:MutualInduction}
Given $n$ entailments $E_1,...,E_n$, all of them are valid if, for all models
$s,h$, the following holds: $(\forall s',h'.~ s',h' \ltmodel s,h \mtimply
\satentail{s',h'}{E_1,...,E_n}) \mtimply \satentail{s,h}{E_1,...,E_n}$.
Formally:
\begin{prooftree}
\AxiomC{$
\forall s,h.~ (\forall s',h'.~
s',h' \ltmodel s,h \mtimply \satentail{s',h'}{E_1,...,E_n})
\mtimply \satentail{s,h}{E_1,...,E_n}
$}
\UnaryInfC{$
\forall s,h.~ \satentail{s,h}{E_1,...,E_n}
$}
\end{prooftree}
\end{theorem}
\begin{proof}
By contradiction, assume that some of $E_1,...,E_n$ were invalid. Then, there
would exist some counter-models $s,h$ such that $s,h \nsatisfies E_1,...,E_n$.
Since $\ltmodel$ is a well-founded relation, there would exist the {\em least}
counter-model $s_1,h_1$ such that $s_1,h_1 \nsatisfies E_1,...,E_n$,
and, $s'_1,h'_1 \satisfies E_1,...,E_n$ for all $s'_1, h'_1
\ltmodel s_1, h_1$. Following the theorem's hypothesis $\forall s,h.~ (\forall
s',h'.~ s',h' \,{\ltmodel}\, s,h \mtimply \satentail{s',h'}{E_1,...,E_n})
\mtimply \satentail{s,h}{E_1,...,E_n}$, we have $s_1,h_1 \satisfies
E_1,...,E_n$. This contradicts the assumption that $s_1,h_1$ is a
counter-model. \qed
\end{proof}
\section{The proof system}
\label{sec:Implementation}
In this section, we introduce a sequent-based deductive system, which comprises a
set of inference rules depicted in Fig.~\ref{fig:LogicalRule} (logical rules)
and Fig.~\ref{fig:InductionRule} (induction rules), and a proof search procedure
in Fig.~\ref{fig:ProofSearch}. Each inference rule has zero or more premises, a
conclusion and possibly a side condition. A premise or a conclusion is described
in the form $\varHypo,~ \varTrace,~ F_1 \entails F_2$, where (i) $F_1
\entails F_2$ is an entailment, (ii) $\varHypo$ is a set of entailments with
validity status, which are recorded during proof search and can be used as
hypotheses to prove $F_1 \entails F_2$, and (iii) $\varTrace$ is a proof trace
capturing a chronological list of inference rules applied by the proof search
procedure to reach $F_1 \entails F_2$.
In addition, the entailment in the conclusion of a rule is called the {\em goal
entailment}. Rules with zero (empty) premise is called {\em axiom rules}. A
proof trace $\varTrace$ containing $n$ rules $\ruleR_1, \ldots, \ruleR_n$, with
$n \,{\geq}\, 0$, is represented by $[(\ruleR_1), \ldots, (\ruleR_n)]$, where
the head $(\ruleR_1)$ of $\varTrace$ is the latest rule used by the proof search
procedure. In addition, some operations over proof traces are (i) insertion:
$(\ruleR)\,{\inserttrace}\,\varTrace$, (ii) membership checking: $(\ruleR)
\membertrace \varTrace$, and (iii) concatenation: $\varTrace_1 \concattrace
\varTrace_2$.
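A proof trace is thus simply a head-first sequence of rule names, and its three operations admit a direct rendering (a Python sketch; the operation names are ours):

```python
def insert_trace(rule, trace):
    """(R) inserted at the head: the most recently applied rule comes first."""
    return [rule] + trace

def member_trace(rule, trace):
    """(R) occurs somewhere in the trace."""
    return rule in trace

def concat_traces(t1, t2):
    """t1 ++ t2: all rules of t1 are more recent than those of t2."""
    return t1 + t2
```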
\setlength{\intextsep}{0.8em}
\begin{figure}[ht]
\begin{small}
\begin{minipage}{\textwidth}
\begin{tabular}{ll}
\minipageFalseLeftOne{0.29\textwidth}
&
\minipageFalseLeftTwo{0.48\textwidth}
\\[1.5em]
\minipagePureEntail{0.3\textwidth}
&
\minipageEmpLeftOne{0.41\textwidth}
\\[1.5em]
\minipageEqualLeft{0.3\textwidth}
&
\minipageEmpRightOne{0.47\textwidth}
\\[1.8em]
\minipageEqualRight{0.3\textwidth}
&
\hspace{-1.3em}
\minipageStarData{0.65\textwidth}
\\[1.8em]
\minipageExistsLeft{0.35\textwidth}
&
\minipageStarPred{0.595\textwidth}
\\[1.5em]
\minipageExistsRight{0.2\textwidth}
&
\hspace{-1.8em}
\minipagePredIntroRight{0.54\textwidth}
\\[1.8em]
\end{tabular}
\caption{Logical rules.
Note that for a rule $\ruleR$ with trace $\varTrace$ in its conclusion,
the trace in its premise is $\varTrace' \triangleq (\ruleR) \inserttrace \varTrace$.}
\label{fig:LogicalRule}
\end{minipage}
\end{small}
\end{figure}
\titlespacing*{\subsection}{0pt}{1ex plus .1ex}{0.5ex plus .1ex}
\subsection{Logical rules}
Logical rules in Fig.~\ref{fig:LogicalRule} deal with the logical structure of SL
entailments. For brevity, in these rules, we write the {\em complete\/}
symbolic-heap assertion $\exists \vec{x}. (\Sigma \wedge \Pi)$ as a {\em
standalone\/} $F$. We define the {\em conjoined\/} assertion $F * \Sigma'
\triangleq \Sigma * \Sigma' \wedge \Pi$ and $F \wedge \Pi' \triangleq \Sigma \wedge
\Pi \wedge \Pi'$, given that existential quantifiers do not occur in the
outermost scope of $F$, i.e., $F \triangleq \Sigma \wedge \Pi$. The notation
$\vec{u}{=}\vec{v}$ means $(u_1{=}v_1) \,{\wedge}\, \ldots \,{\wedge}\,
(u_n{=}v_n)$, given that $\vec{u}{=}u_1{,} \ldots {,}u_n$ and $\vec{v}{=}v_1{,}
\ldots {,}v_n$ are two lists containing the same number of variables. We also
write $\vec{x} \disjoins \vec{y}$ to denote $\vec{x}$ and $\vec{y}$ are
disjoint, i.e., $\nexists u. (u \in \vec{x} \mtand u \in \vec{y})$, and use
\freevars{F} to denote the list of all free variables of an assertion $F$.
Moreover, $F[e/x]$ is a formula obtained from $F$ by
substituting the expression $e$ for all occurrences of the free variable
$x$ in $F$.
The set of logical rules is explained in detail as follows:
\setlength{\intextsep}{0.0em}
\begin{enumerate}
\setlength{\itemsep}{3pt}\setlength{\parskip}{0.5pt}
\item[--] {\bf Axiom rules.} The rule $\rulePureEntail$ proves a pure entailment
$\Pi_1 \entails \Pi_2$ by invoking off-the-shelf provers such as
Z3~\cite{MouraB08} to check the pure implication $\Pi_1 \,{\Rightarrow}\,
\Pi_2$ in its side condition. The two rules $\ruleFalseLeftOne$ and
$\ruleFalseLeftTwo$ decide that an entailment is {\em vacuously\/} valid if its
antecedent is unsatisfiable, i.e., the antecedent contains a contradiction
$(u{\neq}u)$ or overlaid data nodes $(\hformnt{\varSort_1}{u}{\vec{v}} *
\hformnt{\varSort_2}{u}{\vec{w}})$.
\item[--] {\bf Normalization rules.} These rules simplify their goal
entailments by either eliminating existentially quantified variables
($\ruleExistsLeft, \ruleExistsRight$), or removing equalities
($\ruleEqualLeft, \ruleEqualRight$) or empty heap predicates ($\ruleEmpLeft,
\ruleEmpRight$) from antecedents (left side) or consequents (right side) of
the entailments.
\item[--] {\bf Frame rules.} The two rules $\ruleStarData$ and $\ruleStarPred$
  apply the {\em frame property\/} of SL~\cite{Reynolds08} to remove {\em
  identical\/} spatial atoms from the two sides of entailments. Note that
  identicality is guaranteed by adding equality constraints on these
  spatial atoms' arguments to the consequents of the derived entailments.
\item[--] {\bf Unfolding rules.} The rule $\rulePredIntroRight$ derives a new
entailment by unfolding a heap predicate in the goal entailment's consequent
by its inductive definition. Note that unfolding a heap predicate in the
entailment's antecedent will be performed by the induction rule
$\ruleInduction$, as discussed in the next section.
\end{enumerate}
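As a small illustration of the frame rules (an instance constructed here for exposition, not one taken from the proof system's benchmarks), applying $\ruleStarData$ to a goal whose two sides share a data node at the same address $x$ removes that node and records the equality of its remaining arguments in the consequent:
\[
\hformnt{\varSort}{x}{\vec{u}} * \hformp{\predP}{\vec{y}}
~\entails~
\hformnt{\varSort}{x}{\vec{v}} * \hformp{\predP}{\vec{y}}
\]
reduces to the derived entailment
\[
\hformp{\predP}{\vec{y}}
~\entails~
\hformp{\predP}{\vec{y}} \wedge \vec{u}{=}\vec{v}.
\]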
\setlength{\intextsep}{0.3em}
\subsection{Induction rules}
\setlength{\intextsep}{0em}
\begin{figure}[H]
\begin{small}
\begin{tabular}{l}
\begin{minipage}{0.6\textwidth}
\begin{prooftree}
\def\defaultHypSeparation{\hskip 1em}
\def\ScoreOverhang{0em}
\AxiomC{$
\varHypo \cup \{(H,\statusUnknown)\},\,
\varTracePrim,\,
F_1 * \hformp{\formF^{\predP}_1}{\vec{u}}
\entails
F_2
$}
\AxiomC{$\dots$}
\AxiomC{$
\varHypo \cup \{(H,\statusUnknown)\},\,
\varTracePrim,\,
F_1 * \hformp{\formF^{\predP}_m}{\vec{u}}
\entails
F_2
$}
\def\extraVskip{3pt}
\LeftLabel{\rulename{\ruleInduction}}
\RightLabel{$\dagger_{(\ruleInduction)}$}
\TrinaryInfC{$
\varHypo \sepAnte\, \varTrace \sepAnte\,
F_1 * \hformp{\predP}{\vec{u}}
\entails
F_2
$}
\end{prooftree}
\end{minipage}
\\
\begin{minipage}{\textwidth}
\begin{adjustwidth}{1.3em}{}
\begin{small}
Given
$H \triangleq F_1 * \hformp{\predP}{\vec{u}} \entails F_2$,
$\varTracePrim = (\ruleInduction) \inserttrace \varTrace$,
and
$\dagger_{(\ruleInduction)}$:~
$\hformp{\predP}{\vec{u}} \triangleq
\hformp{\formF^{\predP}_1}{\vec{u}} \mtor
\ldots
\mtor \hformp{\formF^{\predP}_m}{\vec{u}}$
\end{small}
\end{adjustwidth}
\end{minipage}
\end{tabular}
\\[0.8em]
\begin{tabular}{l}
\begin{minipage}{\textwidth}
\raggedleft{}
\begin{prooftree}
\def\defaultHypSeparation{\hskip 2em}
\def\ScoreOverhang{0em}
\AxiomC{$
\varHypo \cup \{(H, status)\},~
(\ruleHypo) \inserttrace \varTrace,~
F_4\theta \,{*}\, \Sigma' \,{\wedge}\, \Pi_1
\entails
F_2
$}
\def\extraVskip{3pt}
\LeftLabel{\rulename{\ruleHypo}}
\RightLabel{~\parbox{15em}{\rulesidecondright{
\exists{} \theta{,} \Sigma'. (\Sigma_1 {\synequiv} \Sigma_3\theta{*} \Sigma'
\mtand{} \Pi_1 {\Rightarrow} \Pi_3\theta)},\\
\rulesidecondright{\dagger_{(\ruleHypo)}}
}}
\UnaryInfC{$
\varHypo \,{\cup}\, \{(H \,{\triangleq}\, \Sigma_3 {\wedge} \Pi_3 {\entails} F_4, status)\},\,
\varTrace,\,
\Sigma_1 \,{\wedge}\, \Pi_1
\,{\entails}\,
F_2
$}
\end{prooftree}
\end{minipage}
\\
\begin{minipage}{\textwidth}
\begin{adjustwidth}{1em}{}
\begin{small}
\begin{tabular}{ll}
with $\dagger_{(\ruleHypo)}$:
$(status {=} \statusValid)$
&
$\mtor
\exists \varSort, u, \vec{v}, \Sigma''.
(\Sigma' \synequiv \hformntShort{\varSort}{u}{\vec{v}} * \Sigma'')$\\
{}&
$\mtor
\exists \varTrace_1, \varTrace_2.(
\varTrace \,{=}\, \varTrace_1 {\concattrace} [(\ruleStarData)] {\concattrace}
\varTrace_2
\mtand (\ruleInduction) \,{\nmembertrace}\, \varTrace_1
\mtand (\ruleInduction) \,{\membertrace}\, \varTrace_2)$.\\
\end{tabular}
\end{small}
\end{adjustwidth}
\end{minipage}
\end{tabular}
\end{small}
\caption{Induction rules}
\label{fig:InductionRule}
\end{figure}
Fig.~\ref{fig:InductionRule} presents inference rules implementing
our mutual induction principle.
The {\em induction\/} rule $\ruleInduction$ firstly records its goal
entailment as an induction hypothesis $H$, and unfolds an inductive heap
predicate in the antecedent of $H$ to derive new entailments. When $H$ is inserted
into the hypothesis vault $\varHypo$, its status is initially assigned to
$\statusUnknown$ ({\em unknown\/}), indicating that its validity is not known at
the moment. Later, the status of $H$ will be updated to $\statusValid$ ({\em
valid\/}) once the proof search procedure is able to prove it valid.
Generally, given an entailment $E$ and its proof tree $\mathcal{T}$, the proof
search procedure concludes that $E$ is valid if (i) every leaf of $\mathcal{T}$
is empty via applications of axiom rules, and (ii) all hypotheses used by the
{\em apply hypothesis\/} rule $\ruleHypo$ must be derived in $\mathcal{T}$.
Rule $\ruleHypo$ is the key rule of our mutual induction principle, which
applies an appropriate hypothesis $H \triangleq \Sigma_3 \wedge \Pi_3 \entails
F_4$ in proving its goal entailment $E \triangleq \Sigma_1 \wedge \Pi_1 \entails
F_2$. The rule firstly unifies the antecedents of $H$ and $E$ by a substitution
$\theta$, i.e., there exists a spatial assertion $\Sigma'$ such that $\Sigma_1
\synequiv \Sigma_3\theta * \Sigma'$ and $\Pi_1 \Rightarrow \Pi_3\theta$. If such
$\theta$ and $\Sigma'$ exist, we can weaken the antecedent of $E$ as follows: $
(\Sigma_1 \wedge \Pi_1) \entails (\Sigma_3\theta * \Sigma' \wedge \Pi_3\theta
\wedge \Pi_1) \entails (F_4\theta * \Sigma' \wedge \Pi_1) $. Note that we use
Reynolds's substitution law~\cite{Reynolds08} to obtain $\Sigma_3\theta \wedge
\Pi_3\theta \entails F_4\theta$ from the hypothesis $H$. The proof system then
derives the next goal entailment $F_4\theta * \Sigma' \wedge \Pi_1 \entails F_2$
as shown in the premise of rule $\ruleHypo$.
\begin{wrapfigure}[8]{r}{0.43\textwidth}
\begin{tikzpicture}[
->,>=stealth',shorten >=1pt,auto,node distance=2.4cm,thick,
main node/.style={circle,draw,font=\sffamily},
label/.style={draw=none},
hidden node/.style={font=\sffamily}]
\node[main node] (E) {};
\node[label] [right =0.05cm of E] (L) {{\small $E, (\ruleHypo)$}};
\node[main node] (H) [left =1.8cm of E] {};
\node[label] [above left =-0.1cm and -0.05cm of H] (L) {{\small $H$}};
\node[main node] (R) [below left=1.2cm and 0.8cm of E] {};
\node[label] [right =0.05cm of R] (L) {{\small $I, (\ruleInduction)$}};
\draw [rounded corners,dotted] (H) -- node{{\scriptsize apply hypo}} (E);
\draw [rounded corners,dashed] (R) -- (E);
\end{tikzpicture}
\caption{Applying hypothesis}
\label{fig:ApplyingHypothesis}
\end{wrapfigure}
The side condition $\dagger_{(\ruleHypo)}$ of rule $\ruleHypo$ ensures the
decreasing condition of the mutual induction principle. In particular,
suppose that the proof search procedure applies a hypothesis $H$ in $\varHypo$
to prove an entailment $E$ via rule $\ruleHypo$. If the status of $H$ is
$\statusValid$, denoted by the first condition in $\dagger_{(\ruleHypo)}$, then
$H$ is already proved to be valid; thus it can be freely used to prove other
entailments. Otherwise, the status of $H$ is $\statusUnknown$, and $H$ may
participate in a (mutual) induction proof with an entailment $I$ in the proof
path of $E$, as depicted in Fig.~\ref{fig:ApplyingHypothesis}. Note that the
entailment $I$ has been recorded earlier as an induction hypothesis by an
application of the induction rule $\ruleInduction$.
In the latter case, the induction principle requires the decrease of model size
when applying the hypothesis $H$ to prove entailment $I$. We then show
that this decreasing condition holds if one of the following conditions of
$\dagger_{(\ruleHypo)}$ is satisfied.
\setlength{\intextsep}{0em}
\begin{enumerate}
\setlength{\itemsep}{2pt}\setlength{\parskip}{0pt}
\item[(i)] $\exists \varSort, u, \vec{v}, \Sigma''. (\Sigma' {\synequiv}
\hformntShort{\varSort}{u}{\vec{v}} {*} \Sigma'')$ indicates that the
left-over heap part $\Sigma'$ after unifying antecedent of $H$ into that
of $E$ contains at least one singleton heap predicate, or
\item[(ii)] $\exists \varTrace_1, \varTrace_2.( \varTrace {=} \varTrace_1
{\concattrace} [(\ruleStarData)] {\concattrace} \varTrace_2 \mtand
(\ruleInduction) {\nmembertrace} \varTrace_1 \mtand (\ruleInduction)
{\membertrace} \varTrace_2)$ requires that there is a removal step of a
singleton heap predicate by the rule $\ruleStarData$ applied
between this hypothesis application $\ruleHypo$ and the most recent
induction step $\ruleInduction$.
\end{enumerate}
Consider an arbitrary model $s,h$ satisfying $I$. During the derivation path from $I$ to
$E$, the model $s,h$ is transformed into a corresponding model $s_e,h_e$ of $E$.
We always have $|h_e| \leq |h|$ as the applications of logical rules and rule
$\ruleInduction$ never increase heap model size of entailments.
Moreover,
when applying $H$ to prove $E$, the model $s', h'$ of $H$, which
corresponds to $s_e, h_e$ of $E$,
satisfies $|h'| \leq |h_e|$, due to the unification step in
rule $\ruleHypo$. We consider the following two cases. If condition
(i) is satisfied, then heap model size of the left-over part $\Sigma'$ is at
least 1 since $\Sigma'$ contains a singleton heap predicate. As a result, $|h'|
< |h_e|$ and it follows that $|h'| < |h|$. If condition (ii) is satisfied, then
$|h_e| < |h|$ since a singleton heap predicate, whose heap model size is 1,
is removed when deriving $E$ from $I$. This implies that $|h'| < |h|$.
In summary, we obtain that $|h'| < |h|$ for both cases; thus, $s',h' \ltmodel
s,h$. This concludes our explanation of the rule $\ruleHypo$.
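As a small illustrative instance of $\ruleHypo$ (constructed here for exposition): take a hypothesis $H \triangleq \hformp{\predP}{a} \entails F_4$ with an empty pure part, and a goal entailment whose antecedent is $\hformp{\predP}{x} * \hformn{y}{z} \wedge \Pi_1$. The substitution $\theta = [x/a]$ unifies the two antecedents, since $\hformp{\predP}{a}\theta \synequiv \hformp{\predP}{x}$, leaving the left-over part $\Sigma' \synequiv \hformn{y}{z}$. Here $\Sigma'$ contains a singleton heap predicate, so condition (i) of $\dagger_{(\ruleHypo)}$ holds, and the rule derives the new goal $F_4\theta * \hformn{y}{z} \wedge \Pi_1 \entails F_2$.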
\vspace{1em}
\begin{figure}[H]
\begin{algorithm}[H]
\begin{small}
\caption*{{\bf Procedure} \psCall{\procProve}{\varHypo,\varTrace,F \entails{} G}}
\psKey{Input:}
$\varHypo, F \entails G$ and $\varTrace$
are respectively a set of hypotheses, a goal entailment and its corresponding proof trace.\\
\psKey{Output:} Validity result ({$\valValid$} or {$\valUnknown$}),
a set of derived entailments with their validity statuses,
and a set of hypotheses used in proof of $F \entails G$.
\label{proc:ProveEntailment}
\begin{algorithmic}[1]
\def\indent{\hspace{2em}}
\State
\psAssign{\varRuleSelected}
{\setenum{\varRule_{inst}~|~
\varRule_{inst} = \psCall{\proc{Unify}}{\varRule, (\varHypo, \varTrace, F \entails{} G)}
~\mtand~\varRule\in\varRuleSet}}
\label{line:FindRule}
\If{$\ruleRs = \setempty$}
\psReturn{\valUnknown, \setempty, \setempty}
\Comment{no rule is selected}
\EndIf{}
\label{line:NoRule}
\For{\psKey{each} $\varRule_{inst}$ \psKey{in} $\ruleRs$}
\If{\psCall{\proc{GetName}}{\varRule_{inst}} $\in$ $\{\rulePureEntail, \ruleFalseLeftOne, \ruleFalseLeftTwo\}$ }
\Comment{$\varRule$ is an axiom rule}
\State\psReturn{\valValid, \setempty, \setempty}
\EndIf{}
\label{line:AxiomRule}
\State\psAssign{\varHypo_{used}}{\setempty}
\label{line:UsedHypoOne}
\If{$\varRule_{inst} = \ruleHypo$ with hypothesis $E$}
\psAssign{\varHypo_{used}}{\varHypo_{used} \cup{} \{E\}}
\EndIf{}
\label{line:UsedHypoTwo}
\State\psAssign{\varHypo_{derived}}{\setempty}
\label{line:DerivedEntailOne}
\State \psAssign{(\varHypo_i,\varTrace_i,F_i \entails G_i)_{i=1, \ldots, n}}{
\psCall{\proc{GetPremises}}{\varRule_{inst}}}
\Comment{all premises of $\varRule_{inst}$}
\label{line:PremiseBegin}
\For{i = 1 \psKey{to} n}
\State \psAssign{res,\varHypo_{derived},\varHypo'_{used}}{
\psCall{\procProve}
{\varHypo_i \oplus \varHypo_{derived},\varTrace_i,F_i \entails G_i}}
\label{line:MutualInduction}
\label{line:UsedHypoThree}
\label{line:DerivedEntailTwo}
\If{$res = {\valUnknown}$}
\psReturn{\valUnknown, \setempty, \setempty}
\label{line:OnePremiseFail}
\EndIf{}
\State{\psAssign{\varHypo_{used}} {\varHypo_{used} \cup{} \varHypo'_{used}}}
\label{line:UsedHypoFour}
\EndFor{}
\If{$\varHypo_{used} \subseteq
(\psCall{\proc{GetEntailments}}{\varHypo_{derived}} \cup \{ F \entails G\})$}
\label{line:DecideStatusValidOne}
\State{\psAssign{\varHypo_{derived}}{\varHypo_{derived} \oplus{} \{(F \entails{} G, \statusValid)\}}}
\label{line:DecideStatusValidTwo}
\label{line:DerivedEntailThree}
\Else
\,\psAssign{\varHypo_{derived}}{\varHypo_{derived} \oplus \{(F \entails G, \statusUnknown)\}}
\label{line:DerivedEntailFour}
\label{line:DecideStatusValidThree}
\EndIf{}
\State{\psReturn{\valValid, \varHypo_{derived}, \varHypo_{used}}}
\label{line:AllPremisesValid}
\Comment{all derived premises are proved}
\EndFor{}
\label{line:PremiseEnd}
\State{\psReturn{\valUnknown, \setempty, \setempty}}
\Comment{all rules fail to prove $F \entails G$}
\label{line:AllRuleFail}
\label{line:ProveEntailmentEnd}
\end{algorithmic}
\end{small}
\end{algorithm}
\caption{General proof search procedure,
in which $\varRuleSet$ is the set of inference rules given in
Fig.~\ref{fig:LogicalRule} and~\ref{fig:InductionRule}.}
\label{fig:ProofSearch}
\end{figure}
\titlespacing*{\subsection}{0pt}{1ex plus .1ex}{0.5ex plus .1ex}
\subsection{Proof search procedure}
Our proof search procedure $\procProve$ is designed in a recursive manner,
as presented in Fig.~\ref{fig:ProofSearch}. Its inputs consist of a set of hypotheses,
a proof trace, and an entailment, which are components of an inference
rule's conclusion. To prove a candidate entailment $F \entails G$, the
hypothesis set $\varHypo$ and the proof trace are initially set to empty
($\setempty$ and $[\,]$, respectively).
Firstly, the procedure $\procProve$ finds a set $\varRuleSelected$ of suitable
rules, whose conclusion can be unified with the goal entailment $F \entails G$,
among all inference rules in $\varRuleSet$ (line~\ref{line:FindRule}). If no
suitable rule is found, the procedure immediately returns $\valUnknown$,
indicating that it is unable to prove the entailment (line~\ref{line:NoRule}).
Otherwise, it subsequently processes each discovered rule $\varRule_{inst}$ in
$\varRuleSelected$ by either (i) returning $\valValid$ to announce a valid
result, if an axiom rule is selected (line \ref{line:AxiomRule}), or (ii)
recursively searching for proofs of the derived entailments in the premises of
$\varRule_{inst}$ (lines \ref{line:PremiseBegin}--\ref{line:PremiseEnd}). In the
latter case, the procedure returns $\valUnknown$ if one of the derived
entailments is not proved (line~\ref{line:OnePremiseFail}), or returns
$\valValid$ if all of them are proved (line~\ref{line:AllPremisesValid}).
Finally, it simply returns $\valUnknown$ when it cannot prove the goal
entailment with all selected rules (line~\ref{line:AllRuleFail}).
The procedure uses a local variable $\varHypo_{used}$ to store all hypotheses
used during the proof search. $\varHypo_{used}$ is updated when the rule
$\ruleHypo$ is applied (line~\ref{line:UsedHypoTwo}) or after the procedure
finishes proving a derived entailment (lines~\ref{line:UsedHypoThree}
and~\ref{line:UsedHypoFour}). We also use another variable $\varHypo_{derived}$
to capture all generated entailments with their validity statuses.
The condition at line~\ref{line:DecideStatusValidOne} checks if all
hypotheses used to prove the entailment $F \entails G$ are only introduced
during the entailment's proof. If this condition is satisfied, then $F \entails
G$ is updated with a {\em valid status\/} $\statusValid$
(line~\ref{line:DecideStatusValidTwo}). Otherwise, the entailment may
participate in a (mutual) induction proof, thus its status is assigned to {\em
unknown\/} $\statusUnknown$ (line \ref{line:DecideStatusValidThree}).
At line~\ref{line:MutualInduction}, the procedure uses not only the hypothesis
set $\varHypo_i$, introduced by the selected inference rule, but also the set
$\varHypo_{derived}$ containing entailments derived during proof search to prove
a new goal entailment $F_i \entails G_i$. This reflects our mutual induction
principle which allows derived entailments to be used as hypotheses in other
entailments' proofs. Note that the {\em union and update\/} operator $\oplus$
used in the algorithm will insert new entailments and their statuses into the
set of hypotheses, or update the existing entailments with their new statuses.
In addition, the auxiliary procedures used in our proof search procedure are
named in a self-explanatory manner. In particular, $\proc{Unify}$,
$\proc{GetName}$ and $\proc{GetPremises}$ respectively unifies an inference rule
with a goal entailment, or returns name and premises of an inference rule.
Finally, $\proc{GetEntailments}$ returns all entailments stored in the set of
derived entailments $\varHypo_{derived}$.
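To make this control flow concrete, the following plain-Python sketch (our own simplification, not the actual implementation; it omits the hypothesis vault, proof traces, and validity statuses) mirrors the recursive skeleton of $\procProve$: collect the applicable rules, close the branch when an axiom rule applies, and otherwise recurse on every premise of the selected rule.

```python
# Toy model of the recursive proof search.  A "rule" maps a goal to either
# None (rule not applicable) or a list of premise goals ([] = axiom rule).
VALID, UNKNOWN = "valid", "unknown"

def prove(goal, rules):
    # select every rule whose conclusion unifies with the goal
    selected = [(name, rule(goal)) for name, rule in rules.items()]
    selected = [(name, prems) for name, prems in selected if prems is not None]
    if not selected:
        return UNKNOWN                      # no rule is selected
    for name, premises in selected:
        if not premises:                    # axiom rule closes the branch
            return VALID
        if all(prove(p, rules) == VALID for p in premises):
            return VALID                    # all derived premises are proved
    return UNKNOWN                          # all rules fail to prove the goal

# Toy instance: goal n stands for an entailment that an axiom closes at
# n = 0 and that an "unfolding" rule reduces from n to n - 1.
rules = {
    "axiom":  lambda n: [] if n == 0 else None,
    "unfold": lambda n: [n - 1] if n > 0 else None,
}
print(prove(3, rules))    # -> valid
```

The real procedure additionally threads the sets $\varHypo_{used}$ and $\varHypo_{derived}$ through the recursion, which is what enables the mutual use of derived entailments as hypotheses.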
{\bf Soundness}. Soundness of our proof system is stated in Theorem
\ref{thm:Soundness}. Due to the page constraint, we present the detailed proof
in Appendix~\ref{app:SoundnessInferenceRule}.
\setlength{\intextsep}{0em}
\begin{theorem}[Soundness]
\label{thm:Soundness}
Given an entailment $E$, if the proof search procedure returns $\valValid$
when proving $E$, then $E$ is valid.
\end{theorem}
\section{Experiment}
\label{sec:Experiment}
We have implemented the proposed induction proof technique into a prototype
prover, named $\songbird$. The proof system and this paper's artifact are
available for both online use and download at {\sburl}.
\setlength{\intextsep}{0.6em}
\begin{figure}[H]
\begin{small}
\begin{tabular}{cc}
\begin{minipage}{0.6\textwidth}
\def\arraystretch{1.1}
\setlength{\tabcolsep}{1.5pt}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
\multicolumn{1}{|c|}{{\bf Category}} &
{\bf \slide} &
{\bf \spen} &
{\bf \sleek} &
{\bf \cyclist} &
{\bf $\songbird$}
\\
\hline
~\textsf{singly-ll} \hfill (64) & 12 & 3 & 48 & {\bf 63} & {\bf 63} \\
~\textsf{doubly-ll} \hfill (37) & 14 & 0 & 17 & 24 & {\bf 26} \\
~\textsf{nested-ll} \hfill (11) & 0 & {\bf 11} & 5 & 6 & {\bf 11} \\
~\textsf{skip-list} \hfill (13) & 0 & {\bf 12} & 4 & 5 & 7 \\
~\textsf{tree} \hfill (26) & 12 & 1 & 14 & 18 & {\bf 22} \\
\hline
\hline
~Total \hfill (151) & 38 & 27 & 88 & 116 & {\bf 129} \\
\hline
\end{tabular}
\end{minipage}
&
\begin{minipage}{0.4\textwidth}
\def\arraystretch{1.2}
\setlength{\tabcolsep}{1pt}
\begin{tabular}{|l|c|c|c|c|}
\hline
{} & \multicolumn{4}{c|}{{\bf $\songbird$}} \\
\cline{2-5}
&
$\cmarksb\,\xmarkother$ &
$\xmarksb\,\cmarkother$ &
$\cmarksb\,\cmarkother$ &
$\xmarksb\,\xmarkother$
\\
\hline
{\bf \cyclist} & 13 & 0 & 116 & 22 \\
\hline
{\bf \sleek} & 41 & 0 & 88 & 22 \\
\hline
{\bf \spen} & 109 & 7 & 20 & 15 \\
\hline
{\bf \slide} & 103 & 12 & 26 & 10 \\
\hline
\end{tabular}
\end{minipage}
\\
(a) & (b)
\end{tabular}
\end{small}
\caption{Overall evaluation on the benchmark
\textsf{slrd\_entl} of \textsf{SL-COMP}}
\label{fig:ExpSlcompAll}
\end{figure}
To evaluate our
technique, we compared our system against state-of-the-art SL provers, including
{\slide}~\cite{IosifRS13,IosifRV14}, {\spen}~\cite{EneaLSV14},
{\sleek}~\cite{ChinDNQ12} and {\cyclist}~\cite{BrotherstonDP11,BrotherstonGP12},
which had participated in the recent SL competition
\textsf{SL-COMP}~\cite{SighireanuC14}. We were, however, unable to make a direct
comparison with the induction-based proof technique presented in \cite{ChuJT15},
as their prover was not publicly available. Our evaluation was performed on an
Ubuntu 14.04 machine with CPU Intel E5-2620 (2.4GHz) and RAM 64GB.
Firstly, we conducted the experiment on a set of {\em valid}
entailments\footnote{We exclude the set of invalid entailments because some
evaluated proof techniques, such as~\cite{ChinDNQ12,BrotherstonDP11}, aim to
only prove validity of entailments.}, collected from the benchmark
\textsf{slrd\_entl}\footnote{Available at
\textsf{https://github.com/mihasighi/smtcomp14-sl/tree/master/bench}.} of
\textsf{SL-COMP}. These entailments contain {\em general} inductive heap predicates
denoting various data structures, such as singly linked lists
(\textsf{singly-ll}), doubly linked lists (\textsf{doubly-ll}), nested lists
(\textsf{nested-ll}), skip lists (\textsf{skip-list}) and trees (\textsf{tree}).
We then categorize problems in this benchmark based on their predicate types. In
Fig. \ref{fig:ExpSlcompAll}(a), we report the number of entailments successfully
proved by a prover in each category, with a timeout of 30 seconds for proving an
entailment. For each category, the total number of problems is put in
parentheses, and the maximum number of entailments that can be proved by the
list of provers are highlighted in bold. As can be seen, $\songbird$ can prove
more entailments than all the other tools. In particular, we are the best in
almost all categories, except for \textsf{skip-list}. However, in this category, we
are behind only {\spen}, which has been specialized for skip lists
\cite{EneaLSV14}. Our technique might require more effective generalization to
handle the unproven \textsf{skip-list} examples.
In Fig. \ref{fig:ExpSlcompAll}(b), we make a detailed comparison between $\songbird$
and the other provers. Specifically, the first column ($\cmarksb\,\xmarkother$)
shows the number of entailments that $\songbird$ can prove valid whereas the
others cannot. The second column ($\xmarksb\,\cmarkother$) reports the number of
entailments that can be proved by other tools, but not by $\songbird$. The last
two columns list the number of entailments that both $\songbird$ and the others can
($\cmarksb\,\cmarkother$) or cannot ($\xmarksb\,\xmarkother$) prove. We would
like to highlight that our prover efficiently proves {\em all} entailments
proved by {\cyclist} (resp. {\sleek}) in {\em approximately half the time},
i.e., 20.92 vs 46.40 seconds for 116 entailments, in comparison with {\cyclist}
(resp. 8.38 vs 15.50 seconds for 88 entailments, in comparison with {\sleek}).
In addition, there are 13 (resp. 41) entailments that can be proved by our tool,
but {\em not} by {\cyclist} (resp. {\sleek}). Furthermore, our $\songbird$
outperforms {\spen} and {\slide} by more than 65\% of the total entailments,
thanks to the proposed mutual induction proof technique.
Secondly, we would like to highlight the efficiency of {\em mutual induction} in
our proof technique via a comparison between $\songbird$ and its variant
$\songbirdSI$, which exploits only induction hypotheses found within a {\em
single} proof path. This mimics the structural induction technique which
explores induction hypotheses in the same proof path. For this purpose, we
designed a new entailment benchmark, namely \textsf{slrd\_ind}, whose problems
are more complex than those in the \textsf{slrd\_entl} benchmark. For example,
our handcrafted benchmark\footnote{The full benchmark is available at \sburl.}
contains an entailment $\hformp{\predLsEven}{x,y} * \hformn{y}{z} *
\hformp{\predLsEven}{z,t} \entails \exists u.\,\hformp{\predLsEven}{x,u} *
\hformn{u}{t}$ with the predicate $\hformp{\predLsEven}{x,y}$ denoting list
segments with even length. This entailment was inspired by the entailment
$\hformp{\predLsEven}{x,y} * \hformp{\predLsEven}{y,z} \entails
\hformp{\predLsEven}{x,z}$ in the problem \textsf{11.tst.smt2} of
\textsf{slrd\_entl}, contributed by team {\cyclist}. Note that entailments in
our benchmark were constructed on the same set of linked list predicates
provided in \textsf{slrd\_entl}, comprised of regular singly linked lists
(\textsf{ll}), linked lists with even or odd length (\textsf{ll-even/odd}) and
linked list segments which are left- or right-recursively defined
(\textsf{ll-left/right}). We also use a new \textsf{ll2} list segment predicate
whose structure is similar to the predicate \textsf{tmp} in our motivating
example. In addition, problems in the \textsf{misc.} category involve all
aforementioned linked list predicates.
As shown in Fig. \ref{fig:ExpHandcraftedEntails}, $\songbirdSI$ is able to prove
nearly 70\% of the total entailments, which is slightly better than
{\cyclist}\footnote{We do not list other provers in Fig.
\ref{fig:ExpHandcraftedEntails} as they cannot prove any problems in
\textsf{slrd\_ind}.}, whereas $\songbird$, with full capability of mutual
induction, can prove the {\em whole} set of entailments. This result is
encouraging as it shows the usefulness and essential role of our mutual explicit
induction proof technique in proving SL entailments.
\setlength{\intextsep}{0.8em}
\begin{figure}[H]
\begin{center}
\def\arraystretch{1.1}
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{1}{|c|}{{\bf Category}} &
{\bf \cyclist} &
{\bf $\songbird_\textbf{SI}$} &
{\bf $\songbird$}
\\
\hline
~ \textsf{ll/ll2} \hfill (24) & 18 & 22 & 24 \\
~ \textsf{ll-even/odd} \hfill (20) & 8 & 17 & 20 \\
~ \textsf{ll-left/right} \hfill (20) & 12 & 10 & 20 \\
~ \textsf{misc.} \hfill (32) & 17 & 16 & 32 \\
\hline
~ Total \hfill (96) & 55 & 65 & 96 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison on \textsf{slrd\_ind} benchmark}
\label{fig:ExpHandcraftedEntails}
\end{figure}
\section{Conclusion}
We have proposed a novel induction technique and developed a proof system for
automatically proving entailments in a fragment of SL with general inductive
predicates. In essence, we show that induction can be performed on the size of
the heap models of SL entailments. The implication is that, during automatic
proof construction, the goal entailment and entailments derived in the entire
proof tree can be used as hypotheses to prove other derived entailments, and
vice versa. This novel proposal has opened up the feasibility of mutual
induction in automatic proof, leading to shorter proof trees being built.
In the future, we would like to develop a verification system on top of the prover
{$\songbird$}, so that our {\em mutual explicit induction} technique can be
effectively used for automated verification of memory safety in imperative
programs.
{\bf Acknowledgement}. We would like to thank the anonymous reviewers for their
valuable and helpful feedback. The first author would like to thank Dr. James
Brotherston for the useful discussion about the cyclic proof. This work has been
supported by NUS Research Grant R-252-000-553-112. Ton Chanh and Wei-Ngan are
partially supported by MoE Tier-2 grant MOE2013-T2-2-146.
\bibliographystyle{splncs03}
\section{Introduction}
\label{intro}
This article outlines a simple approach to a general problem in text
analysis, the selection of documents for costly annotation. We then
show how inverse regression can be applied with variable interactions
to obtain both generic and subject-specific predictions of document
sentiment, our annotation of interest.
We are motivated by the problem of design and analysis of
a particular text mining experiment: the scoring of Twitter posts (`tweets') for
positive, negative, or neutral sentiment directed towards particular
US politicians. The contribution is structured first with a proposal
for optimal design of text data experiments, followed by application
of this technique in our political tweet case study and analysis of
the resulting data through inverse regression.
Text data are viewed throughout simply as counts, for each document,
of phrase occurrences. These phrases can be words (e.g., {\it tax})
or word combinations (e.g. {\it pay tax} or {\it too much tax}).
Although there are many different ways to process raw text into these
{\it tokens}, perhaps using sophisticated syntactic or semantic rules,
we do not consider the issue in detail and assume tokenization as
given; our case study text processing follows a few simple rules
described below. Document $i$ is represented as $\bm{x}_i =
[x_{i1},\ldots,x_{ip}]'$, a sparse vector of counts for each of $p$
tokens in the vocabulary, and a document-term count matrix is written
$\bm{X} = [\bm{x}_1 \cdots \bm{x}_n]'$, where $n$ is the number of
documents in a given corpus. These counts, and the associated
frequencies $\bm{f}_i = \bm{x}_i/m_i$ where $m_i = \sum_{j=1}^p
x_{ij}$, are then the basic data units for statistical text analysis.
Hence, text data can be characterized simply as exchangeable counts in
a very large number of categories, leading to the common assumption of
a multinomial distribution for each $\bm{x}_i$.
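A minimal sketch of this representation in plain Python (using a naive whitespace tokenizer on a toy corpus, purely for illustration):

```python
# Build the document-term count matrix X and row frequencies f_i = x_i / m_i
# for a toy corpus, using a naive whitespace tokenizer (illustration only).
from collections import Counter

docs = ["pay tax", "too much tax", "tax tax pay"]
vocab = sorted({tok for d in docs for tok in d.split()})

def counts(doc):
    c = Counter(doc.split())
    return [c[t] for t in vocab]          # x_i: counts over the p tokens

X = [counts(d) for d in docs]             # n x p document-term count matrix
F = [[x / sum(row) for x in row] for row in X]   # frequencies f_i = x_i / m_i

print(vocab)   # ['much', 'pay', 'tax', 'too']
print(X[2])    # [0, 1, 2, 0]
```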
We are concerned with predicting the {\it sentiment} $\bm{y} =
[y_1,\ldots,y_n]'$ associated with documents in a corpus. In our main
application, this is positive, neutral, or negative sentiment
directed toward a given politician, as measured through a reader
survey. More generally, sentiment can be replaced by any annotation
that is correlated with document text. Text-sentiment
prediction is thus just a very high-dimensional regression problem,
where the covariates have the special property that they can be
represented as draws from a multinomial distribution.
Any regression model needs to be accompanied with data for training.
In the context of sentiment prediction, this implies documents scored
for sentiment. One can look to various sources of `automatic'
scoring, and these are useful to obtain the massive amounts of data
necessary to train high-dimensional text models. Section \ref{data}
describes our use of emoticons for this purpose. However, such
automatic scores are often only a rough substitute for the true
sentiment of interest. In our case, generic happy/sad sentiment is
not the same as sentiment directed towards a particular politician.
It is then necessary to have a subset of the documents annotated with
precise scores, and since this scoring will cost money we need to
choose a subset of documents whose content is most useful for
predicting sentiment from text. This is an application for {\it pool
based active learning}: there is a finite set of examples for which
predictions are to be obtained, and one seeks to choose an optimal
representative subset.
There are thus two main elements to our study: design -- choosing the
sub-sample of tweets to be sent for scoring -- and analysis -- using
sentiment-scored tweets to fit a model for predicting Twitter sentiment
towards specific politicians. This article is about both components.
As a design problem, text mining presents a difficult situation where
raw space filling is impractical -- the dimension of $\bm{x}$ is so
large that every document is very far apart -- and we argue in Section
\ref{design} that it is unwise to base design choices on the poor
estimates of predictive uncertainty provided by text regression. Our
solution is to use a space-filling design, but in an estimated lower
dimensional multinomial-factor space rather than in the original
$\bm{x}$-sample. Section \ref{design}.1 describes a standard class of
{\it topic models} that can be used to obtain low-dimensional factor
representations for large document collections. The resulting
unsupervised algorithm (i.e., sampling proceeds without regard to
sentiment) can be combined with any sentiment prediction model. We
use the multinomial inverse regression of \cite{Tadd2012a}, with the
addition of politician-specific interaction terms, as described in
Section \ref{mnir}.
\subsection{Data application: political sentiment on Twitter}
\label{data}
The motivating case study for this article is an analysis of sentiment
in tweets about US politicians on Twitter, the micro-blogging service,
from January 27 to February 28, 2012, a period that included the
Florida (1/31), Nevada (2/4), Colorado, Missouri, and Minnesota (2/7),
Maine (2/11), and Michigan and Arizona (2/28) presidential primary
elections. Twitter provides streaming access to a large subset of
public (as set by the user) tweets containing terms in a short list of
case insensitive filters. We were interested in conversation on the
leading candidates in the Republican presidential primary, as well as
that concerning current president Barack Obama; our list of filter
terms was {\smaller\sf obama}, {\smaller\sf romney}, {\smaller\sf gingrich}, {\smaller\sf ron paul},
and, from February 13 onward, {\smaller\sf santorum}. Note that Romney,
Gingrich, and Paul were the only front-runners at the beginning of our
study, but Santorum gained rapidly in the polls following his surprise
victories in three state votes on February 7: the Minnesota and
Colorado caucuses and the Missouri Primary. Daily data collection is
shown by politician-subject in Figure \ref{volume}; total counts are
10.2\e{5} for Obama, 5\e{5} for Romney, 2.2\e{5} for Gingrich,
2.1\e{5} for Santorum, and 1.5\e{5} for Paul, for a full sample of
about 2.1 million tweets.
In processing the raw text, we remove a limited set of stop words
(terms that occur at a constant rate regardless of subject, such as
{\it and} or {\it the}) and punctuation before converting to lowercase
and stripping suffixes from roots according to the Porter stemmer
\citep{Port1980}. The results are then tokenized into single terms
based upon separating white-space, and we discard any tokens that
occur in $<$ 200 tweets and are not in the list of tokens common in
our generic emoticon-sentiment tweets, described in the next
paragraph. This leads to 5532 unique tokens for Obama, 5352 for
Romney, 5143 for Gingrich, 5131 for Santorum, and 5071 for Paul.
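The preprocessing pipeline above can be sketched in a few lines of Python. This fragment is illustrative only: the analysis used the Porter stemmer \citep{Port1980}, whereas the {\smaller\sf stem} function below is a crude suffix-stripping stand-in, and the stop-word list and document-frequency threshold are placeholders for the actual ones.

```python
import re
from collections import Counter

STOP_WORDS = {"and", "the", "a", "of", "to"}  # illustrative subset only

def tokenize(tweet, stem=lambda w: re.sub(r"(ing|ed|s)$", "", w)):
    """Lowercase, strip punctuation, drop stop words, and stem.

    The paper uses the Porter stemmer; the default `stem` here is a
    crude suffix-stripping stand-in so the sketch stays self-contained.
    """
    words = re.sub(r"[^\w\s#@]", " ", tweet.lower()).split()
    return [stem(w) for w in words if w not in STOP_WORDS]

def build_vocabulary(docs, min_docs=2):
    """Keep tokens appearing in at least `min_docs` documents
    (the paper's threshold is 200 tweets)."""
    doc_freq = Counter(t for d in docs for t in set(tokenize(d)))
    return sorted(t for t, n in doc_freq.items() if n >= min_docs)
```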
\begin{figure}[t]
\includegraphics[width=6.3in]{polsVolume}
\caption{\label{volume} Tweet sample volume for political
candidates. All are taken from the stream of public Twitter posts
from Jan 27 through the end of February, except for Santorum who was
only tracked after Feb 13. }
\end{figure}
The primary analysis goal is to classify tweets by
sentiment: positive, negative, or neutral. We have two data sources
available: Twitter data that is scored for generic
sentiment, and the ability to survey readers about sentiment
in tweets directed at specific politicians. In the first case, 1.6 million
tweets were obtained, from the website {\smaller\sf
http://twittersentiment.appspot.com}, that have been automatically
identified as positive or negative by the presence of an emoticon
(symbols included by the author -- e.g., a happy face indicates a
positive tweet and a sad face a negative tweet). Tokenization for
these tweets followed the same rules as for the political Twitter
sample above, and we discard tokens that occur in less than 0.01\%
of tweets. This leads to a vocabulary of 5412 `emoticon' tokens; due
to considerable overlap, the combined vocabulary across all tweets
(political and emoticon) is only 5690 tokens.
As our second data source, we use the Amazon Mechanical Turk ({\smaller\sf
https://www.mturk.com/}) platform for scoring tweet sentiment.
Tweets are shown to anonymous workers for categorization as
representing either positive (e.g., `support, excitement, respect, or
optimism') or negative (e.g., `anger, distrust, disapproval, or
ridicule') feelings or news towards a given politician, or as neutral
if the text is `irrelevant, or not even slightly positive or
negative'. Each tweet is seen by two independent workers, and it is
only considered scored if the two agree on categorization. In
addition, workers were pre-screened as `masters' by Amazon and we
monitored submissions for quality control, blocking poor workers.
Given the 2-3 cents per-tweet paid to individual workers, as well as
the overhead charged by Amazon, our worker agreement rates of around
80\% imply an average cost near \$0.075 per sentiment-scored tweet.
\section{Sentiment prediction via multinomial inverse regression}
\label{mnir}
Sentiment prediction in this article follows the multinomial inverse
regression (MNIR) framework described in \citet{Tadd2012a}.
Section \ref{mnir}.1 summarizes that approach, while Section \ref{mnir}.2
discusses an adaptation specific to the main application of this paper. Inverse
regression as a general strategy looks to estimate the {\it inverse
distribution} for covariates given response, and to use
this as a tool in building a {\it forward model} for $y_i$ given
$\bm{x}_i$. The specific idea of MNIR is to estimate a simple model
for how the multinomial distribution on text counts changes with
sentiment, and to derive from this model low dimensional text
projections that can be used for predicting sentiment.
\subsection{Single-factor MNIR}
As a simple case, suppose that $y_i$ for document $i$ is a discrete
ordered sentiment variable with support $\mc{Y}$ -- say $y_i \in
\{-1,0,1\}$ as in our motivating application. Only a very complicated
model will be able to capture the generative process for an
individual's text, $\bm{x}_i |y_i$, which involves both
heterogeneity between individuals and correlation across dimensions of
$\bm{x}_i$. Thus estimating a model for $\bm{x}_i |y_i$ can be far
harder than predicting $y_i$ from $\bm{x}_i$, and inverse regression
does not seem a clever place to be starting analysis. However, we can
instead concentrate on the {\it population average} effect of
sentiment on text by modeling the conditional distribution for
collapsed token counts $\bm{x}_{y} = \sum_{i:y_i=y} \bm{x}_i$.
A basic MNIR model is then
\begin{equation} \label{basic-mnir} \bm{x}_{y} \sim \mr{MN}(\bm{q}_{y},
m_{y})~~\text{with}~~ q_{yj} = \frac{\exp[\alpha_j +
y\varphi_j]}{\sum_{l=1}^p \exp[\alpha_l + y\varphi_l
]},~~\text{for}~~j=1,\ldots,p,~~y \in \mc{Y}
\end{equation}
where each $\mr{MN}$ is a $p$-dimensional multinomial distribution
with size $m_{y} = \sum_{i:y_i=y} m_i$ and probabilities
$\bm{q}_{ y} = [q_{y1},\ldots,q_{yp}]'$ that are a linear function of
$y$ through a logistic link. Although independence assumptions
implied by (\ref{basic-mnir}) are surely incorrect, within-individual
correlation in $\bm{x}_i$ is quickly overwhelmed in aggregation and
the multinomial becomes a decent model for $\bm{x}_{y}$. (One
could also argue against an equidistant three-point scale for $y$;
however, such a scale is useful to simplify inverse regression,
and we assume that misspecification here can be accommodated in
forward regression.)
Given sentiment $y$ and counts $\bm{x}$ drawn from the multinomial
distribution $\mr{MN}(\bm{q}_{y}, m)$ in (\ref{basic-mnir}), the
projection $\bs{\varphi}'\bm{x}$ is {\it sufficient for sentiment} in
the sense that $y \perp\!\!\!\perp \bm{x} \mid \bs{\varphi}'\bm{x}, m$. A
simple way to demonstrate this is through application of Bayes rule
(after assigning prior probabilities for each element of $\mc{Y}$).
Then given $\bm{x}_i$ counts for an {\it individual} document,
$\bs{\varphi}'\bm{x}_i$ seems potentially useful as a low-dimensional
index for predicting $y_i$. More specifically, we normalize by
document length in defining the {\it sufficient reduction} (SR) score
\begin{equation}\label{basic-sr}
z_i = \bs{\varphi}'\bm{f}_i = \bs{\varphi}'\bm{x}_i/m_i.
\end{equation}
Now, since (\ref{basic-mnir}) is a model for collapsed text counts
rather than for $\bm{x}_i$ given $y_i$, the SR score in
(\ref{basic-sr}) is {\it not} theoretically sufficient for that
document's sentiment. \cite{Tadd2012a} describes specific random
effects models for the information loss in regressing $y_i$ onto $z_i$
instead of $\bm{x}_i$, and under certain models the individual
document regression coefficients approach $\bs{\varphi}$. However, in
general this population average projection is {\it misspecified} as an
individual document projection. Hence, instead of applying Bayes rule
to invert (\ref{basic-mnir}) for sentiment prediction, $z_i$ is
treated as an observable in a second-stage regression for $y_i$ given
$z_i$. Throughout this article, where $y$ is always an ordered
discrete sentiment variable, this {\it forward regression} applies
logistic proportional odds models of the form $\mr{p}(y_i \leq c) =
\left(1 + \exp[ -(\gamma_c + \beta z_i)]\right)^{-1}$.
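The two computational pieces of this subsection -- collapsing counts by sentiment level and forming the SR scores of (\ref{basic-sr}) -- can be sketched directly. In practice $\bs{\varphi}$ comes from gamma-lasso MNIR estimation (Section \ref{mnir}.3); the loadings vector used below is an arbitrary stand-in for illustration.

```python
import numpy as np

def collapse_counts(X, y):
    """Collapsed counts x_y = sum over {i: y_i = y} of x_i,
    one row per sentiment level."""
    levels = np.unique(y)
    return levels, np.vstack([X[y == lev].sum(axis=0) for lev in levels])

def sr_scores(X, phi):
    """Sufficient reduction scores z_i = phi' x_i / m_i."""
    m = X.sum(axis=1)
    return (X @ phi) / m

# Toy example: 4 documents over 3 tokens, sentiment in {-1, 1}.
X = np.array([[2, 1, 0], [1, 1, 1], [0, 1, 2], [0, 0, 3]], dtype=float)
y = np.array([-1, -1, 1, 1])
levels, collapsed = collapse_counts(X, y)      # rows: x_{-1}, x_{+1}
z = sr_scores(X, np.array([-1.0, 0.0, 1.0]))   # stand-in loadings phi
```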
\subsection{MNIR with politician-interaction}
In the political twitter application, our approach needs to be adapted
to allow different text-sentiment regression models for each
politician, and also to accommodate positive and negative emoticon tweets,
which are sampled from all public tweets rather than always being associated
with a politician. This is achieved naturally within the MNIR framework by
introducing interaction terms in the inverse regression.
The data are now written with text in the i$^{th}$ tweet for
politician $s$ as $\bm{x}_{si}$, containing a total of $m_{si}$ tokens
and accompanied by sentiment $y_{si}\in \{-1,0,1\}$, corresponding to
negative, neutral, and positive sentiment respectively. Collapsed
counts for each politician-sentiment combination are obtained as
$x_{syj} = \sum_{i: y_{si} = y} x_{sij}$ for each token $j$. This
yields 17 `observations': each of three sentiments for five
politicians, plus positive and negative emoticon tweets. The
multinomial inverse regression model for sentiment-$y$ text counts
directed towards politician $s$ is then $\bm{x}_{sy} \sim
\mr{MN}(\bm{q}_{sy}, m_{sy})$, $q_{syj} = e^{\eta_{syj}}/\sum_{l=1}^p
e^{\eta_{syl}}$ for $j=1\ldots p$, with linear equation
\begin{equation}\label{pols-mnir}
\eta_{syj} = \alpha_{0j} + \alpha_{sj} +
y(\varphi_{0j} + \varphi_{sj}).
\end{equation}
Politician-specific terms are set to zero for emoticon tweets (which
are not associated with a specific politician), say $s=e$, such that
$\eta_{eyj} = \alpha_{0j} + y\varphi_{0j}$ as a generic sentiment
model. Thus all text is centered on main effects in
$\bs{\alpha}_{0}$ and $\bs{\varphi}_0$, while interaction terms
$\bs{\alpha}_s$ and $\bs{\varphi}_s$ are
identified only through their corresponding turk-scored political
sentiment sample.
Results in \cite{Tadd2012a} show that $\bm{x}'[\bs{\varphi}_{0},
\bs{\varphi}_{s}]$ is sufficient for sentiment when $\bm{x}$ is drawn
from the collapsed count model implied by (\ref{pols-mnir}). Thus
following the same logic behind our univariate SR scores in
(\ref{basic-sr}), $\bm{z}_{i} = [z_{i0}, z_{is}] =
\bm{f}_{i}'[\bs{\varphi}_{0}, \bs{\varphi}_{s}]$ is a bivariate
sufficient reduction score for tweet $i$ on politician $s$. The
forward model is again proportional-odds
logistic regression,
\begin{equation}\label{pols-fwd}
\mr{p}( y_{i} \leq c) = 1/(1 + \exp[ \beta_0 z_{i0} +
\beta_s z_{is} - \gamma_c ]),
\end{equation}
with main $\beta_0$ and subject $\beta_s$ effects. Note the absence
of subject-specific $\gamma_{sc}$: a tweet containing no significant
tokens (such that $z_{i0} = z_{is} = 0$) is assigned probabilities
according to the overall aggregation of tweets. Such `empty' tweets
have $\mr{p}(-1) = 0.25$, $\mr{p}(0) = 0.65$, and $\mr{p}(1) = 0.1$ in
the fitted model of Section \ref{analysis}, and are thus all
classified as `neutral'.
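As a concrete check on the `empty tweet' probabilities just quoted, the proportional-odds map in (\ref{pols-fwd}) from SR scores to class probabilities can be written out directly. The cutpoint values below are back-solved from $\mr{p}(-1)=0.25$ and $\mr{p}(0)=0.65$ at $z_{i0}=z_{is}=0$; they are illustrative, not the fitted estimates.

```python
import numpy as np

def class_probs(z0, zs, beta0, betas, gammas):
    """p(y = c), c in {-1, 0, 1}, under the proportional-odds model:
    p(y <= c) = 1 / (1 + exp[beta0*z0 + betas*zs - gamma_c])."""
    eta = beta0 * z0 + betas * zs
    cum = 1.0 / (1.0 + np.exp(eta - np.asarray(gammas)))  # p(y <= -1), p(y <= 0)
    cum = np.append(cum, 1.0)                             # p(y <= 1) = 1
    return np.diff(np.concatenate(([0.0], cum)))          # [p(-1), p(0), p(1)]

# Cutpoints back-solved so an 'empty' tweet (z = 0) reproduces the
# probabilities quoted in the text: 0.25, 0.65, 0.10.
gammas = [np.log(0.25 / 0.75), np.log(0.90 / 0.10)]
p_empty = class_probs(0.0, 0.0, beta0=1.0, betas=1.0, gammas=gammas)
```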
\subsection{Notes on MNIR estimation}
Estimation of MNIR models like those in (\ref{basic-mnir}) and
(\ref{pols-mnir}) follows exactly the procedures of \cite{Tadd2012a},
and the interested reader should look there for detail. Briefly, we
apply the {\it gamma lasso} estimation algorithm, which corresponds to
MAP estimation under a hierarchical gamma-Laplace coefficient prior
scheme. Thus, and this is especially important for the interaction
models of Section \ref{mnir}.2, parameters are estimated as exactly
zero until a large amount of evidence has accumulated. Optimization
proceeds through coordinate descent and, along with the obvious
efficiency derived from collapsing observations, allows for estimation
of single-factor SR models with hundreds of thousands of tokens in
mere seconds. The more complicated interaction model in
(\ref{pols-mnir}) can be estimated in less than 10 minutes.
To restate the MNIR strategy, we are using a simple but very
high-dimensional (collapsed count) model to obtain a useful but
imperfect text summary for application in low dimensional sentiment
regression. MNIR works because the multinomial is a useful
representation for token counts, and this model assumption increases
efficiency by introducing a large amount of information
about the functional relationship between text and sentiment into the
prediction problem. Implicit here is an assumption that ad-hoc
forward regression can compensate for mis-application of
population-average summary projections to individual document counts.
\cite{Tadd2012a} presents empirical evidence that this holds
true in practice, with MNIR yielding higher quality prediction at
lower computational cost when compared to a variety of text regression
techniques. However the design algorithms of this article are not
specific to MNIR and can be combined with any sentiment prediction
routine.
\section{Topic-optimal design}
\label{design}
Recall the introduction's pool-based design problem: choosing from the
full sample of 2.1 million political tweets a subset to be scored, on
mechanical turk, as either negative, neutral, or positive about the
relevant politician.
A short review of some relevant literature on active learning and
experimental design is in the appendix. In our specific situation of a
very high-dimensional input space (i.e., a large vocabulary), effective
experimental design is tough to implement. Space-filling is impractical
since limited sampling will always leave a large distance between
observations. Boundary selection -- where documents with roughly
equal sentiment-class probabilities are selected for scoring -- leads
to samples that are very sensitive to model fit and is impossible in
early sampling where the meaning of most terms is unknown (such that
the vast majority of documents lie on this boundary). Moreover,
one-at-a-time point selection implies sequential algorithms that
scale poorly for large applications, while more elaborate active
learning routines which solve for optimal batches of new points tend
to have their own computational limits in high dimension. Finally,
parameter and predictive uncertainty -- which are relied upon in many
active learning routines -- is difficult to quantify in complicated
text regression models; this includes MNIR, in which the posterior is
non-smooth and is accompanied by an ad-hoc forward
regression step. The vocabulary is also growing with sample size and
a full accounting of uncertainty about sentiment in unscored texts
would depend heavily on a prior model for the meaning of previously
unobserved words.
While the above issues make tweet selection difficult, we do have an
advantage that can be leveraged in application: a huge pool of
unscored documents. Our solution for text sampling is thus to look at
space-filling or optimal design criteria (e.g., D-optimality) but on a
reduced dimension factor decomposition of the covariate space rather
than on $\bm{X}$ itself. That is, although the main goal is to learn $\bs{\Phi}$
for the sentiment projections of Section \ref{mnir}, this cannot be done until
enough documents are scored and we instead look to space-fill on an
{\it unsupervised} factor structure that can be estimated without labelled examples.
This leads to what we call {\it
factor-optimal design}. Examples of this approach include
\citet{GalvMaccBezz2007} and \citet{ZhanEdga2008}, who apply optimal
design criteria on principal components, and \citet{DavyLuz2007}, a
text classification contribution that applies active learning criteria
to principal components fit for word counts. The proposal here is to
replace generic principal component analysis with text-appropriate
topic model factorization.
\subsection{Multinomial topic factors}
A $K$-topic model \citep{BleiNgJord2003} represents each vector of
document token counts, $\bm{x}_i \in \{\bm{x}_{1}\ldots \bm{x}_n\}$
with total $m_i = \sum_{j=1}^p x_{ij}$, as a multinomial factor
decomposition
\begin{equation}\label{eq:tpc}
\bm{x}_i \sim \mr{MN}(\omega_{i1} \bs{\theta}_{1} + \ldots + \omega_{iK}
\bs{\theta}_{K}, m_i)
\end{equation}
where topics $\bs{\theta}_k = [\theta_{k1} \cdots \theta_{kp}]'$ and
weights $\bs{\omega}_i$ are probability vectors. Hence, each topic
$\bs{\theta}_k$ -- a vector of probabilities over words or phrases --
corresponds to factor `loadings' or `rotations' in the usual factor
model literature. Documents are thus characterized through a
mixed-membership weighting of topic factors and $\bs{\omega}_i$ is a
reduced dimension summary for $\bm{x}_i$.
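The generative side of (\ref{eq:tpc}) is easy to make concrete: each document's token probabilities are a convex combination of the topic vectors, and counts are then multinomial draws. A minimal sketch with made-up topics:

```python
import numpy as np

def doc_token_probs(omega, theta):
    """Token probabilities q = omega_1*theta_1 + ... + omega_K*theta_K.
    omega: (K,) topic weights; theta: (K, p) topic matrix; all rows sum to one."""
    return omega @ theta

# Toy factorization: K = 2 topics over p = 4 tokens.
theta = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.1, 0.1, 0.1, 0.7]])
omega = np.array([0.5, 0.5])
q = doc_token_probs(omega, theta)
# Counts for an m-token document are then x ~ MN(q, m):
x = np.random.default_rng(0).multinomial(100, q)
```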
Briefly, this approach assumes independent prior
distributions for each probability vector,
\begin{equation}\label{eq:prior}
\bs{\omega}_i \stackrel{iid}{\sim} \mr{Dir}(1/K),~i=1\ldots n,~~\text{and}~~
\bs{\theta}_k \stackrel{iid}{\sim}
\mr{Dir}(1/(Kp)),~k=1\ldots K,
\end{equation}
where $\bs{\theta} \sim \mr{Dir}(\alpha)$ indicates a Dirichlet
distribution with concentration parameter $\alpha$ and density
proportional to $\prod_{j=1}^{\mr{dim}(\bs{\theta})}
\theta_j^{\alpha-1}$. These $\alpha < 1$ specifications encourage a few
dominant categories among mostly tiny probabilities by placing
weight at the edges of the simplex. The particular specification in
(\ref{eq:prior}) is chosen so that prior weight, measured as the sum
of concentration parameters multiplied by the dimension of their
respective Dirichlet distribution, is constant in both $K$ and $p$
(although not in $n$). The model is estimated through posterior
maximization as in \citet{Tadd2012b}, and we employ a Laplace
approximation for simulation from the conditional posterior for
$\bs{\Omega}$ given $\bs{\Theta} = [\bs{\theta}_1 \cdots
\bs{\theta}_K]$. The same posterior approximation allows us to
estimate Bayes factors for potential values of $K$, and we use this to
{\it infer} the number of topics from the data. Details are in
Appendix \ref{bayes}.
\subsection{Topic D-optimal design}
As a general practice, one can look to implement any space filling
design in the $K$ dimensional $\bs{\omega}$-space. For the current
study, we focus on D-optimal design rules that seek to maximize the
determinant of the information matrix for linear regression; the
result is thus loosely optimal under the assumption that sentiment has
a linear trend in this representative factor space. The algorithm
tends to select observations that are at the edges of the topic space.
An alternative option that may be more robust to sentiment-topic
nonlinearity is to use a Latin hypercube design; this will lead to a
sample that is spread evenly throughout the topic space.
In detail, we
seek to select a design of documents $\{i_1 \ldots i_T\} \subset
\{1\ldots n\}$ to maximize the topic information determinant $D_T =
|\bs{\Omega}_T'\bs{\Omega}_T|$, where $\bs{\Omega}_T = [\bs{\omega}_1
\cdots \bs{\omega}_T]'$ and $\bs{\omega}_t$ are topic weights
associated with document $i_t$. Since construction of exact D-optimal
designs is difficult and the algorithms are generally slow
\citep[see][for an overview of both exact and approximate optimal
design]{AtkiDone1992}, we use a simple greedy search to obtain an {\it
ordered} list of documents for evaluation in a near-optimal design.
Given $D_t = |\bs{\Omega}_t'\bs{\Omega}_t|$ for a current sample of
size $t$, the topic information determinant after adding $i_{t+1}$ as
an additional observation is
\begin{equation}\label{dup}
D_{t+1} = \left|\bs{\Omega}_t'\bs{\Omega}_t + \bs{\omega}_{t+1}
\bs{\omega}_{t+1}'\right| = D_{t}\left( 1 +
\bs{\omega}_{t+1}' \left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1} \bs{\omega}_{t+1}\right),
\end{equation}
due to a standard linear algebra identity. This implies that, given
$\bs{\Omega}_t$ as the topic matrix for your currently evaluated
documents, $D_{t+1}$ is maximized simply by choosing $i_{t+1}$ such that
\begin{equation}\label{max}
\bs{\omega}_{t+1} = \mr{argmax}_{\{\bs{\omega}\in
\bs{\Omega}/\bs{\Omega}_t\}} ~~\bs{\omega}' \left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1} \!\!\bs{\omega}.
\end{equation}
Since the topic weights are a low ($K$) dimensional summary, the
necessary inversion $\left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1}$ is
on a small $K\times K$ matrix and will not strain computing resources.
This inverted matrix provides an operator that can quickly be applied
to the pool of candidate documents (in parallel if desired), yielding
a simple score for each that represents the proportion by which its
inclusion increases our information determinant.
For the recursive equation in (\ref{max}) to apply, the design must be
initially seeded with at least $K$ documents, such that
$\bs{\Omega}_t'\bs{\Omega}_t$ will be non-singular. We do this by
starting from a simple random sample of the first $t=K$ documents
(alternatively, one could use more principled space-filling in factor
space, such as a Latin hypercube sample). Note
that again topic-model dimension reduction is crucial: for our greedy
algorithm to work in the full $p$ dimensional token space, we would
need to sample $p$ documents before having an invertible information
matrix. Since this would typically be a larger number of documents
than desired for the full sample, such an approach would never move
beyond the seeding stage.
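The full greedy search is summarized below as a Python sketch (the study itself used the authors' own implementation; names and defaults here are illustrative). It seeds with a simple random sample of $K$ documents and then repeatedly adds the candidate maximizing the score in (\ref{max}), applying the rank-one information update implied by (\ref{dup}).

```python
import numpy as np

def greedy_d_optimal(Omega, T, seed=0):
    """Greedy near-D-optimal ordering of documents.

    Omega : (n, K) matrix of estimated topic weights, one row per document.
    T     : total number of documents to select (T >= K).
    Returns the ordered list of selected row indices.
    """
    rng = np.random.default_rng(seed)
    n, K = Omega.shape
    chosen = list(rng.choice(n, size=K, replace=False))  # random seed sample
    remaining = sorted(set(range(n)) - set(chosen))
    info = Omega[chosen].T @ Omega[chosen]               # K x K information matrix
    while len(chosen) < T:
        inv = np.linalg.inv(info)                        # seed is a.s. nonsingular
        cand = np.array(remaining)
        # determinant-multiplier score: omega' (Omega_t' Omega_t)^{-1} omega
        scores = np.einsum("ij,jk,ik->i", Omega[cand], inv, Omega[cand])
        best = int(cand[np.argmax(scores)])
        chosen.append(best)
        remaining.remove(best)
        info += np.outer(Omega[best], Omega[best])       # rank-one update
    return chosen
```

Because each added document multiplies the determinant by a factor greater than one, the information determinant grows monotonically along the returned ordering.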
In execution of this design algorithm, the topic weights for each
document must be estimated. In what we label MAP topic D-optimal
design, each $\bs{\omega}_i$ for document $i$ is fixed at its MAP
estimate as described in Section \ref{design}.1. As an alternative,
we also consider a {\it marginal} topic D-optimality wherein a set of
topic weights $\{\bs{\omega}_{i1}\ldots \bs{\omega}_{iB}\}$ are
sampled for each document from the approximate posterior in Appendix
A.1, such that recursively D-optimal documents are chosen to
maximize the {\it average} determinant multiplier over this set.
Thus, instead of (\ref{max}), marginal D-optimal $i_{t+1}$ is
selected to maximize $\frac{1}{B}\sum_b \bs{\omega}_{i_{t+1}b}'
\left(\bs{\Omega}_t'\bs{\Omega}_t\right)^{-1}
\!\!\bs{\omega}_{i_{t+1}b}$.
\subsection{Note on the domain of factorization}
The basic theme of this design framework is straightforward: fit an
unsupervised factor model for $\bm{X}$ and use an optimal design rule
in the resulting factor space. Given a single sentiment
variable, as in examples of Section \ref{examples}, the $\bm{X}$ to
be factorized is simply the entire text corpus.
Our political twitter case study introduces the added variable of
`politician', and it is no longer clear that a single shared
factorization of all tweets is appropriate. Indeed, the interaction
model of Section \ref{mnir}.2 includes parameters (the $\alpha_{sj}$
and $\varphi_{sj}$) that are only identified by tweets on the
corresponding politician. Given the massive amount of existing data
from emoticon tweets on the other model parameters, any
parameter learning from new sampling will be concentrated on these
interaction parameters. Our solution in Section \ref{analysis} is to
apply stratified sampling: fit independent factorizations to each
politician-specific sub-sample of tweets, and obtain D-optimal designs
on each. Thus we ensure a scored sample of a chosen size for each
individual politician.
\section{Example Experiment}
\label{examples}
To illustrate this design approach, we consider two simple
text-sentiment examples. Both are detailed in
\cite{Tadd2012a,Tadd2012b}, and available in the {\smaller\sf textir} package
for {\smaller\sf R}. {\it Congress109} contains 529 legislators' usage counts
for each of 1000 phrases in the $109^{th}$ US Congress, and we
consider party membership as the `sentiment' of interest: $y=1$ for
Republicans and $0$ otherwise (two independents caucused with
Democrats). {\it We8there} consists of counts for 2804 bigrams in
6175 online restaurant reviews, accompanied by restaurant {\it
overall} rating on a scale of one to five. To mimic the motivating
application, we group review sentiment as negative ($y=-1$) for
ratings of 1-2, neutral ($y=0$) for 3-4, and positive ($y=1$) for 5
(average rating is 3.95, and the full 5-class analysis is in
\citealt{Tadd2012a}). Sentiment prediction follows the single-factor MNIR
procedure of Section \ref{mnir}, with binary logistic forward
regression $\ds{E}[y_i] = \exp[ \gamma + \beta z_i]/(1 + \exp[ \gamma
+ \beta z_i] )$ for the congress data, and proportional-odds logistic
regression $\mr{p}(y_i \leq c) = \exp[ \gamma_c - \beta z_i]/(1 +
\exp[ \gamma_c - \beta z_i] )$, $c=-1,0,1$ for the we8there data.
We fit $K =$ 12 and 20 topics respectively to the congress109 and
we8there document sets. In each case, the number of topics is chosen
to maximize the approximate marginal data likelihood, as detailed in
the appendix and in \cite{Tadd2012b}. Ordered sample designs were
then selected following the algorithms of Section \ref{design}.2: for
MAP D-optimal, using MAP topic weight estimates, and for marginal
D-optimal, based upon approximate posterior samples of 50 topic
weights for each document. We also consider principal component
D-optimal designs, built following the same algorithm but with topic
weights replaced by the same number (12 or 20) of principal component
directions fit on token frequencies $\bm{f}_i = \bm{x}_i/m_i$.
Finally, simple random sampling is included as a baseline, and was
used to seed each D-optimal algorithm with its first $K$ observations.
Each random design algorithm was repeated 100 times.
\begin{figure}[t]
\hskip -.4cm\includegraphics[width=6.6in]{LearningExperiment}
\caption{\label{experiment} Average error rates on 100 repeated
designs for the 109$^{th}$ congress and we8there examples.
`MAP' is D-optimal search on MAP estimated topics; `Bayes' is our
search for marginal D-optimality when sampling from the topic
posterior; `PCA' is the same D-optimal search in principal
components factor space; and `random' is simple random
sampling. Errors are evaluated over the entire dataset.}
\end{figure}
Results are shown in Figure \ref{experiment}, with average error rates
(misclassification for congress109 and mean absolute error for
we8there) reported for maximum probability classification over the
entire data set. The MAP D-optimal designs perform better than simple
random sampling, in the sense that they provide faster reduction in
error rates with increasing sample size. The biggest improvements are
in early sampling and error rates converge as we train on a larger
proportion of the data. There is no advantage gained from using a
principal component (rather than topic) D-optimal design, illustrating
that misspecification of factor models can impair or eliminate their
usefulness in dimension reduction. Furthermore, we were surprised to
find that, in contrast with some previous studies on active learning
\citep[e.g.][]{TaddGramPols2011}, averaging over posterior uncertainty
did not improve performance: the MAP D-optimal design does as well or
better than the marginal alternative, which is even outperformed by
random sampling in the we8there example. Our hypothesis is that, since
conditioning on $\bs{\Theta}$ removes dependence across documents,
sampling introduces Monte Carlo variance without providing any
beneficial information about correlation in posterior uncertainty.
Certainly, given that the marginal algorithm is also much more time
consuming (with every operation executed $B$ times in addition to the
basic cost of sampling), it seems reasonable to focus on the MAP
algorithm in application.
\section{Analysis of Political Sentiment in Tweets}
\label{analysis}
This section describes selection of tweets for sentiment scoring from
the political Twitter data described in Section \ref{data}, under the
design principles outlined above, along with an MNIR analysis of the
results and sentiment prediction over the full collection.
\subsection{Topic factorization and D-optimal design}
As the first step in experimental design, we apply the topic
factorization of Section \ref{design}.1 independently to each
politician's tweet set. Using the Bayes factor approach of
\cite{Tadd2012b}, we tested $K$ of 10, 20, 30 and 40 for each
collection and, in every case, selected the simple $K=10$ model as
most probable. Although this is a smaller topic model than often seen
in the literature, we have found that posterior evidence tends to
favor such simple models in corpora with short documents
\citep[see][for discussion of information increase with
$m_i$]{Tadd2012b}.
Across politicians, the most heavily used topic (accounting for about
20\% of words in each case) always had {\smaller\sf com}, {\smaller\sf
http}, and {\smaller\sf via} among the top five tokens by
topic lift -- the probability of a token within a topic over its
overall usage proportion. Hence, these topics appear to represent a
Twitter-specific list of stopwords. The other topics are a mix of
opinion, news, or user specific language. For example, in the Gingrich
factorization one topic accounting for 8\% of text with top tokens {\smaller\sf
herman}, {\smaller\sf cain}, and {\smaller\sf endors} is focused on Herman Cain's
endorsement, {\smaller\sf \#teaparty} is a top token in an 8\% topic that
appears to contain language used by self identified members of the Tea
Party movement (this term loads heavily in a single topic for each
politician we tracked), while another topic with {\smaller\sf @danecook} as
the top term accounts for 10\% of traffic and is dominated by posts of
unfavorable jokes and links about Gingrich by the comedian Dane Cook
(and forwards, or `retweets', of these jokes by his followers).
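Topic lift as used above can be computed directly from a fitted topic matrix and the corpus counts; a small sketch with made-up numbers:

```python
import numpy as np

def topic_lift(theta, X):
    """Lift of each token within each topic: theta_{kj} divided by the
    token's overall usage proportion in the corpus.
    theta: (K, p) topic matrix; X: (n, p) token counts."""
    overall = X.sum(axis=0) / X.sum()
    return theta / overall

theta = np.array([[0.6, 0.2, 0.2],
                  [0.1, 0.1, 0.8]])
X = np.array([[5, 3, 2], [2, 3, 5]])   # overall proportions: 0.35, 0.3, 0.35
lift = topic_lift(theta, X)            # lift > 1 marks tokens over-used in a topic
```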
Viewing the sentiment collection problem through these interpreted
topics can be useful: since a D-optimal design looks (roughly) for
large variance in topic weights, it can be seen as favoring tweets
on single topics (e.g., the Cain endorsement) or rare
combinations of topics (e.g., a Tea Partier retweeting a Dane Cook
joke). As a large proportion of our data are retweets (near 40\%),
scoring those sourced from a single influential poster can yield a
large reduction in predictive variance, and tweets containing
contradictory topics help resolve the relative weighting of words.
In the end, however, it is good to remember that the topics do not
correspond to subjects in the common understanding, but are simply
loadings in a multinomial factor model. The experimental design described
in the next section treats the fitted topics as such.
\subsection{Experimental design and sentiment collection}
Using the MAP topic D-optimal algorithm of Section \ref{design}.2,
applied to each politician's topic factorization,
we built ordered lists of tweets to be scored on Mechanical Turk: 500
for each Republican primary candidate, and 750 for Obama. Worker
agreement rates varied from 78\% for Obama to 85\% for Paul, leading
to sample sizes of 406 for Romney, 409 for Santorum, 418 for Gingrich,
423 for Paul, and 583 for Obama.
Unlike the experiment of Section \ref{examples}, we have no ground truth
for evaluating model performance across samples without having to pay
for a large amount of turk scoring. Instead, we propose two metrics:
the number of non-zero politician-specific loadings $\varphi_{sj}$,
and the average entropy $-\sum_{c=-1,0,1} \mr{p}_{c} \log(\mr{p}_{c})$
across tweets for each politician, where $\mr{p}_{c} = \mr{p}(y =c)$
is based on the forward proportional-odds regression described below
in \ref{analysis}.2. We prefer the former for measuring the {\it
amount of sample evidence} -- the number of tokens estimated as
significant for politician-specific sentiment in gamma-lasso penalized
estimation -- as a standard statistical goal in design of experiments,
but the latter corresponds to the more common machine learning metric
of classification precision (indeed, entropy calculations inform many
of the close-to-boundary active learning criteria in Appendix \ref{back}).
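As an illustrative sketch (not code from this work), the mean-entropy metric above can be computed directly from fitted class probabilities over $c \in \{-1,0,1\}$; the probabilities below are made-up values for illustration only.

```java
import java.util.List;

public class MeanEntropy {
    // Entropy -sum_c p_c log(p_c) of one tweet's fitted class probabilities,
    // with classes c in {-1, 0, 1}; zero-probability terms contribute nothing.
    static double entropy(double[] p) {
        double h = 0.0;
        for (double pc : p) {
            if (pc > 0.0) h -= pc * Math.log(pc);
        }
        return h;
    }

    public static void main(String[] args) {
        // Hypothetical fitted probabilities p(y=-1), p(y=0), p(y=1) for three tweets.
        List<double[]> tweets = List.of(
                new double[]{1.0 / 3, 1.0 / 3, 1.0 / 3},  // maximally uncertain
                new double[]{0.1, 0.6, 0.3},
                new double[]{0.9, 0.05, 0.05});           // confidently negative
        double mean = tweets.stream().mapToDouble(MeanEntropy::entropy).average().orElse(0.0);
        System.out.printf("mean entropy = %.3f%n", mean);  // prints "mean entropy = 0.797"
    }
}
```

A uniform distribution over the three classes attains the maximum entropy $\log 3 \approx 1.10$, so a declining mean entropy, as in the right panel of Figure \ref{learning}, indicates sharper classifications.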
\begin{figure}[t]
\includegraphics[width=6.4in]{polsLearning}
\caption{\label{learning} Learning under the MAP topic D-optimal
design. For increasing numbers of scored tweets added from the
ranked design, the left shows the number of significant (nonzero) loadings in the direction of
politician-specific sentiment and the right shows mean entropy $-\sum
\mr{p}_c \log(\mr{p}_{c})$ over the full sample. As in
Figure \ref{volume}, blue is Obama, orange Romney, red Santorum, pink
Gingrich, and green Paul.}
\end{figure}
Results are shown in Figure \ref{learning} for the sequential addition
of scored tweets from the design-ranked Turk results (sentiment
regression results are deferred until Section \ref{analysis}.3). On the left, we
see that there is a steady climb in the number of nonzero
politician-specific loadings as the sample sizes increase. Although
the curves flatten with more sampling, it does appear that had we
continued spending money on sending tweets to the Turk it would have
led to larger politician-sentiment dictionaries. The right plot shows
a familiar pattern of early overfit (i.e., underestimated
classification variance) before the mean entropy begins a slower
steady decline from $t=200$ onwards.
\subsection{MNIR for subject-specific sentiment analysis}
After all Turk results are incorporated, we are left with 2242 scored
political tweets, plus the 1.6 million emoticon tweets, and a 5566
token vocabulary. These data were used to fit the
politician-interaction MNIR model detailed in Section \ref{mnir}.2.
The top ten politician-specific loadings
($\varphi_{sj}$) by absolute value are shown in Table \ref{loadings}
(recall that these are the effect on log odds for a unit increase in
sentiment; thus, e.g., negatively loaded terms occur more frequently
in negative tweets). This small sample shows some large
coefficients, corresponding to indicators for users or groups, news
sources and events, and various other labels. For example, the Obama
column results suggest that his detractors prefer to use `GOP' as
shorthand for the republican party, while his supporters simply use
`republican'. However, one should be cautious about interpretation:
these coefficients correspond to the partial effects of sentiment on the
usage proportion for a term {\it given} corresponding change in
relative frequency for all other terms. Moreover, these are only
estimates of average correlation; this analysis is not
intended to provide a causal or long-term text-sentiment model.
Summary statistics for fitted SR scores are shown in Table
\ref{zsmry}. Although we are not strictly forcing orthogonality on
the factor directions -- $z_{0}$ and $z_s$, say the emotional and
political sentiment directions respectively -- the political scores
have only weak correlation (absolute value $<$ 0.2) with the generic
emotional scores. This is due to an MNIR setup that estimates
politician-specific loadings $\varphi_{sj}$ as the sentiment effect on
language about a given politician {\it after} controlling for generic
sentiment effects. Notice that there is greater variance in political
scores than in emotional scores; this is due to a few large token
loadings that arise by identifying particular tweets (that are heavily
retweeted) or users that are strongly associated with positive or
negative sentiment. However, since we have far fewer scored political
tweets than there are emoticon tweets, fewer token-loadings are
non-zero in the politician-specific directions than in the generic
direction: $\bs{\varphi}_0$ is only 7\% sparse, while the
$\bs{\varphi}_s$ are an average of 97\% sparse.
\begin{table}[t]
\vspace{.1cm}
\centering\small
\begin{tabular}{cl|cl|cl|cl|cl}
\multicolumn{2}{c}{\normalsize \sc Obama}
&\multicolumn{2}{c}{\normalsize \sc Romney}
&\multicolumn{2}{c}{\normalsize \sc Santorum}
&\multicolumn{2}{c}{\normalsize \sc Gingrich}
&\multicolumn{2}{c}{\normalsize \sc Paul}\\
[-1.5ex]\\
\smaller\sf republican&15.5&\smaller\sf fu&-10&\smaller\sf @addthi&-11.5&\smaller\sf bold&10.6&\smaller\sf \#p2&11.1\\
\smaller\sf gop&-13.2&\smaller\sf 100\%&-9.6&\smaller\sf @newtgingrich&-9.9&\smaller\sf mash&-10&\smaller\sf \#teaparti&11\\
\smaller\sf \#teaparti&-12.7&\smaller\sf lover&-9.4&\smaller\sf clown&-9.4&\smaller\sf ap&9.9&\smaller\sf ht&10\\
\smaller\sf \#tlot&-11.9&\smaller\sf quot&-9.4&\smaller\sf @youtub&-9.2&\smaller\sf obama&9.9&\smaller\sf airplan&9.6\\
\smaller\sf economi&11&\smaller\sf anytim&-9.2&\smaller\sf polit&-8.7&\smaller\sf campaign&-9.9&\smaller\sf legal&-9.5\\
\smaller\sf cancer&10&\smaller\sf abt&-8.6&\smaller\sf speech&-8.6&\smaller\sf lesbian&-9.7&\smaller\sf paypal&7.4\\
\smaller\sf cure&9.6&\smaller\sf lip&-8.5&\smaller\sf opportun&-8.2&\smaller\sf pre&9.5&\smaller\sf flight&6.9\\
\smaller\sf ignor&9.2&\smaller\sf incom&-8.4&\smaller\sf disgust&-8.2&\smaller\sf bid&9.5&\smaller\sf rep&6.7\\
\smaller\sf wors&9.2&\smaller\sf januari&8.1&\smaller\sf threw&-7.4&\smaller\sf recip&-9.2&\smaller\sf everyth&-6.4\\
\smaller\sf campaign&9.2&\smaller\sf edg&8&\smaller\sf cultur&-7.3&\smaller\sf america&9.1&\smaller\sf debat&6
\end{tabular}
\caption{ Top ten politician-specific token loadings $\varphi_{sj}$ by
their absolute value in
MNIR. \label{loadings}}
\end{table}
\begin{figure}[t]
\begin{minipage}{3.2in}
\includegraphics[width=3.2in]{polsFit}
\caption{\label{fit} In-sample sentiment fit: the forward
model probabilities for each observation's true category. }
\end{minipage}
~~~~~
\begin{minipage}{2.8in}\small
\vspace{1cm}
\begin{tabular}{lccc}
\multicolumn{1}{c}{} & $\mr{cor}(z_s,z_0)$ & $\mr{sd}(z_s)$ & $\bar z_s$ \\
\\[-1.5ex]\cline{2-4}\\[-1.5ex]
\sf obama&-0.12&0.31&0.1\\
\sf romney&0.17&0.23&-0.07\\
\sf santorum&0.16&0.19&-0.19\\
\sf gingrich&-0.07&0.26&-0.06\\
\sf paul&0.07&0.16&0.1\\
\sf emoticons &---&0.06&0.006
\end{tabular}
\vskip .25cm
\captionof{table}{\label{zsmry} {\it Full} sample summary statistics for
politician-specific sufficient reduction scores. }
\end{minipage}
\end{figure}
Figure \ref{fit} shows fitted values in forward proportional-odds
logistic regression for these SR scores. We observe some very high
fitted probabilities for both true positive and negative tweets,
indicating again that the analysis is able to identify a subset of
similar tweets with easy sentiment classification. Tweet
categorization as neutral corresponds to an absence of evidence in
either direction, and neutral tweets have fitted $\mr{p}(0)$ with mean
around 0.6. In other applications, we have found that a large number
of `junk' tweets (e.g., selling an unrelated product) requires
non-proportional-odds modeling to obtain high fitted neutral
probabilities, but there appears to be little junk in the current
sample. As an aside, we have experimented with adding `junk' as a
fourth possible categorization on Mechanical Turk, but have been
unable to find a presentation that avoids workers consistently getting
confused between this and `neutral'.
\begin{table}[t]
\vspace{.2cm}
\small\hspace{-.3cm}
\begin{tabular}{lccccccccc}
&\multicolumn{2}{c}{\normalsize \sc Intercepts $\gamma_c$} &
&\multicolumn{6}{c}{\normalsize \sc SR score coefficients $\beta_0$,
$\beta_s$} \\
\cline{2-3} \cline{5-10} \\[-2ex]
& $\leq -1$ & \hspace{-.3cm}$\leq 0$ && \smaller\sf emoticons & \smaller\sf obama & \smaller\sf romney & \smaller\sf
santorum & \smaller\sf gingrich & \smaller\sf paul\\ [-2ex]\\
Estimate & -1.1 {\smaller (0.1)}&\hspace{-.3cm} 2.2 {\smaller (0.1)}&&8.3 {\smaller (1.1)}&4.9 {\smaller (0.5)}&
5.6 {\smaller (0.5)}&5.8 {\smaller (0.5)}&7.9 {\smaller
(1.0)}&11.9 {\smaller (1.1)}\\ [-2.5ex]\\
$\beta \times \bar z_s$ & & & & 0.0 & 0.5 & -0.4& -1.1& -0.5 & 1.2 \\
{$\exp[\beta \times \mr{sd}(z)]$}\hspace{-.2cm} & & & & 1.6 & 4.5 & 3.6 & 2.9 & 7.7 & 6.4
\end{tabular}
\caption{ MAP estimated parameters and the conditional standard
deviation (ignoring variability in $\bm{z}$) in the forward proportional-odds
logistic regression $\mr{p}( y_{i} \leq c) = (1 + \exp[ \beta_0 z_{i0} +
\beta_s z_{is} - \gamma_c])^{-1}$, followed by the average effect on
log-odds for each sufficient reduction score and exponentiated
coefficients scaled according to the corresponding full-sample score
standard deviation. \label{coef}}
\end{table}
The forward parameters are MAP estimated, using the {\smaller\sf arm} package
for {\smaller\sf R} \citep{GelmSuYajiHillPittKermZhen2012}, under diffuse
$t$-distribution priors; these estimates are printed in Table
\ref{coef}, along with some summary statistics for the implied effect
on the odds of a tweet being at or above any given sentiment level.
The middle row of this table contains the average effect on log-odds
for each sufficient reduction score: for example, we see that Santorum
tweet log-odds shift by an average of $-1.1$ ($e^{-1.1} \approx 0.3$)
when his politician-specific tweet information is included. The
bottom row shows implied effect on sentiment odds scaled for a
standard deviation increase in each SR score direction: an extra
deviation in emotional $z_0$ multiplies the odds by $e^{0.5} \approx
1.6$, while a standard deviation increase in political SR scores
implies more dramatic odds multipliers of 3 (Santorum) to 8
(Gingrich). This agrees with the fitted probabilities of Figure
\ref{fit}, and again indicates that political directions are
identifying particular users or labels, and not `subjective language'
in the general sense.
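The derived rows of Table \ref{coef} can be checked by direct computation. The sketch below (not code from this work) multiplies the tabulated coefficients $\beta_s$ by the score means and standard deviations of Table \ref{zsmry}. Because the tabulated inputs are themselves rounded, the exponentiated column agrees only approximately with the table (e.g., it prints 4.6 for Obama versus the tabulated 4.5).

```java
public class OddsEffects {
    public static void main(String[] args) {
        // Coefficients and score summaries copied from the tables in the text;
        // order: emoticons, obama, romney, santorum, gingrich, paul.
        String[] name = {"emoticons", "obama", "romney", "santorum", "gingrich", "paul"};
        double[] beta = {8.3, 4.9, 5.6, 5.8, 7.9, 11.9};
        double[] mean = {0.006, 0.1, -0.07, -0.19, -0.06, 0.1};
        double[] sd   = {0.06, 0.31, 0.23, 0.19, 0.26, 0.16};
        for (int s = 0; s < name.length; s++) {
            double avgLogOdds = beta[s] * mean[s];          // row "beta x zbar"
            double oddsPerSd  = Math.exp(beta[s] * sd[s]);  // row "exp[beta x sd(z)]"
            System.out.printf("%-10s %6.1f %6.1f%n", name[s], avgLogOdds, oddsPerSd);
        }
    }
}
```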
\begin{figure}[t]
\includegraphics[width=6.3in]{polsSentiment}
\caption{\label{sentiment} Twitter sentiment regression full-sample
predictions. Daily tweet count percentages by sentiment
classification are shown with green for positive, grey neutral, and
red negative. }
\end{figure}
Figure \ref{sentiment} shows predicted sentiment classification for
each of our 2.1 million collected political tweets, aggregated by day
for each politician-subject. In each case, the majority of traffic
lacks enough evidence in either direction, and is classified as
neutral. However, some clear patterns do arise. The three
`mainstream' Republicans (Romney, Santorum, Gingrich) have far more
negative than positive tweets, with Rick Santorum performing worst.
Libertarian Ron Paul appears to be relatively popular on Twitter,
while President Obama is the only other politician to receive
(slightly) more positive than negative traffic. It is also possible
to match sentiment classification changes to events; for example,
Santorum's negative spike around Feb 20 comes after a weekend of new
aggressive speeches in which he referenced Obama's `phony theology' and
compared schools to `factories', among other lines that generated
controversy.
Finally, note for comparison that without the interaction terms (i.e.,
with only {\it score} as a covariate in inverse regression), the
resulting univariate SR projection is dominated by emoticon-scored
text. These projections turn out to be a poor summary of sentiment in
the political tweets: there is little discrimination between SR scores
across sentiment classes, and the in-sample mis-classification rate jumps
to 42\% (from 13\% for the model that uses politician-specific
intercepts). Fitted class probabilities are little different from
overall class proportions, and with true neutral tweets being less
common (at 22\% of our turk-scored sample) the result is that all
future tweets are unrealistically predicted as either positive or
negative.
\section{Discussion}
\label{discussion}
This article makes two simple proposals for text-sentiment analysis.
First, looking to optimal design in topic factor space can be useful
for choosing documents to be scored. Second, sentiment can be
interacted with indicator variables in MNIR to allow subject-specific
inference to complement information sharing across generic
sentiment.
Both techniques deserve some caution. Topic D-optimal design ignores
document length, even though longer documents can be more informative;
this is not a problem for the standardized Twitter format, and did not
appear to harm design for our illustrative examples, but it could be
an issue in other settings. In the MNIR analysis, we have observed that
subject-specific sentiment loadings (driven in estimation by small
sample subsets) can be dominated by news or authors specific to the
given sample. While this is not technically overfit, since it is
finding persistent signals in the current time period, it indicates
that one should constantly update models when using these techniques
for longer-term prediction.
A general lesson from this study is that traditional statistical techniques, such
as experimental design and variable interaction, will apply in new
areas like text mining when used in conjunction with careful
dimension reduction. Basic statistics principles can then be relied
upon to build optimal predictive models and to assess their risk
and sensitivity in application.
Q: Current flow and voltage difference

When there is a potential difference between two points in an electric circuit current flows. Then why does not current flow through an open circuit?

• Let me mess you up with what could be your next doubt. Consider a perfect conductor carrying a current from a battery V to a load R. There is current flowing, right? But if the conductor is perfect, it has no resistance, hence no voltage fall across any two points. So, how can current flow without voltage difference? (hint: when you have to cope with infinite and zero quantities, always consider them as limits of very big and very small quantities) – Sredni Vashtar May 18 at 9:12

• And to add to the other answers, if the voltage difference is too large, you can have an electrical breakdown of the medium between the two "ends" of the open circuit and so a temporary circuit where current flows through: en.wikipedia.org/wiki/Electric_arc But this is a very specific case. – DimP May 18 at 11:59

A: "When there is a potential difference between two points in an electric circuit current flows."
An open circuit is not a circuit.
Oil shale (Polish: łupki bitumiczne) is a type of sedimentary rock, a variety of clay shale, saturated with bitumens (solid hydrocarbons).
One tonne contains up to 40 litres of petroleum-derived materials.
World resources are estimated at about 900 billion tonnes of petroleum-derived materials contained in oil shales.
Oil shale is exploited mainly in Estonia (70% of world production), which is also home to the two largest power plants in the world fired with this fuel. Oil shale also occurs in Scotland.
The largest oil shale deposits are located in the United States, mainly in the states of Utah, Wyoming and Colorado. They constitute 62% of all world resources (the equivalent of 800 billion barrels of crude oil, three times the entire oil reserves of Saudi Arabia). The USA, Russia and Brazil together hold 86% of all available resources.
Ichthammol (ichthyol) is obtained from high-sulfur oil shales.
Q: unsigned right shift by negative number in Java I am having a difficult time wrapping my head around negative unsigned bitwise operators.
For example, I have the code below. It prints out the value 7 and I don't understand why.
int num1 = -37, num2 = -3;
System.out.println( num1 >>> num2);
// Essentially the same as System.out.println( -37 >>> -3);
// I just wanted to emphasize that I am working with int values
From my knowledge, the number -37 in binary format is as shown below.
11111111 11111111 11111111 11011011 = -37 (in decimal format)
If we apply an unsigned right shift of 3 ( -37 >>> 3, not -37 >>> -3), from my knowledge (please correct me if my theory is flawed or lacking key concepts), it shifts the bits to the right by 3: the 3 bits that fall off the rightmost positions are discarded, and zeros fill in the leftmost positions, meaning that we get the following result.
00011111 11111111 11111111 11111011 = 536870907 (in decimal format).
However, if we apply an unsigned right shift of -3 ( -37 >>> -3), we get the result 7. I don't understand why it is returning 7. Could somebody please explain it to me?
A: It seems counter-intuitive, but when shifting an int, only the last 5 bits of the shift amount are used, according to the JLS, Section 15.19.
If the promoted type of the left-hand operand is int, then only the five lowest-order bits of the right-hand operand are used as the shift distance. It is as if the right-hand operand were subjected to a bitwise logical AND operator & (§15.22.1) with the mask value 0x1f (0b11111). The shift distance actually used is therefore always in the range 0 to 31, inclusive.
This means that -3 only has the last 5 bits used, or 29. Here's -3 in bits to show what's going on:
11111111 11111111 11111111 11111101
The last 5 bits are 11101, or 29 in decimal.
The right-shift distance is 29, which means that the first 3 bits are kept and shifted all the way to the right. Taking -37:
11111111 11111111 11111111 11011011
After the unsigned shift of 29 places, 7 is left:
00000000 00000000 00000000 00000111
As you can see, a negative shift amount is confusing at best. Try to avoid it and always attempt to use an actual number between 0 and 31 for the shift amount for shifting ints.
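To see the mask in action, here is a short runnable demonstration that the shift distance is reduced modulo 32 for ints, so `num >>> -3` and `num >>> 29` give the same result:

```java
public class ShiftMaskDemo {
    public static void main(String[] args) {
        int num = -37;
        // For int shifts, Java uses only the low 5 bits of the shift amount:
        // -3 & 0x1f == 29, so num >>> -3 is the same as num >>> 29.
        System.out.println(-3 & 0x1f);   // 29
        System.out.println(num >>> -3);  // 7
        System.out.println(num >>> 29);  // 7
        // The top 3 bits of -37 are all ones, so the shifted value is 0b111 = 7.
        System.out.println(Integer.toBinaryString(num)); // 11111111111111111111111111011011
    }
}
```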
\section{Introduction}
Transit ephemeris is crucial for exoplanet follow-up investigations, e.g., atmosphere analysis \citep{Berta2012, Deming2013, YangLD} and orbital evolution \citep{Lendl2014, Dawson2018, Millholland2018,2020wasp12b}.
The newly commissioned Transiting Exoplanet Survey Satellite \citep[TESS;][]{Ricker2015} provides precise timings over a long baseline when combined with previous works, which enables us to obtain a better transit ephemeris.
The observed transit timing could deviate from the ephemeris prediction due either to underestimation of ephemeris uncertainties \citep{Mallonn2019} or to physical processes \citep{Agol2018}. The TTV could originate from tidal dissipation, orbital precession, the R\o{}mer effect, mass loss, and multiple planets \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS,wasp-4b2020, Ragozzine2009, Lai2010,Mazeh2013, Agol2018}. For hot Jupiters, planetary companions are usually not massive or close enough to generate significant TTVs \citep{Huang2016, Dawson2018}.
TTV provides direct evidence of tidal dissipation that likely drives hot Jupiter migration \citep{Dawson2018}. WASP-12b has been reported to undergo tidal dissipation based on observational TTVs \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS}. The TTVs are at the level of $\sim$5 minutes over a 10-year baseline compared to the ephemeris obtained from a constant period \citep{2020wasp12b}. Apsidal precession is reported to be the major competing explanation and seems to be ruled out by more than 10 years of observations, including the most recent TESS timings \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS}. The referred works also discuss and exclude the other possible effects, including the R\o{}mer effect and mass loss \citep{Ragozzine2009, Lai2010}.
The R\o{}mer effect, i.e., the acceleration along the line of sight, probably due to stellar companions, has been reported to dominate the TTV of WASP-4b \citep{wasp-4b2020}. Using TESS light curves, \cite{WASP-4b} present a period decrease of -12.6 $\pm$ 1.2 ms yr$^{-1}$. Further radial velocity (RV) monitoring indicates that the Doppler effect contributes most of the period decrease \citep{wasp-4b2020}. For another example, WASP-53b and WASP-81b should harbor brown dwarf companions which could cause TTVs of $\sim$30 s, according to the calculation of \cite{Triaud2017}.
We compare TESS timings and archival ephemeris predictions\footnote{Exoplanet Archive: \url{https://exoplanetarchive.ipac.caltech.edu/index.html}}, and
report significant transit timing offsets for 31 hot Jupiters in this work. The paper is organized as follows. We present the sample selection and data reduction in Section 2. In Section 3, we show the transit timings and their offsets compared to the previous ephemeris, as well as the ephemeris refinement. In Section 4, we discuss the possible physical origins of the timing offsets. We briefly summarize the work in Section 5.
\section{Sample Selection and TESS Timing}
The exoplanet sample in this work consists of hot Jupiters identified in previous work, with transit timings taken from the TESS Objects of Interest (TOI) Catalog \citep{TOIcatalog}. The sample selection requires an orbital period of less than 10 days, a planet mass larger than 0.5 $M_{J}$, and a planet radius larger than 0.5 $R_{J}$.
These criteria leave 421 hot Jupiters, among which 262 are listed in the TOI catalog with transit timings.
\subsection{TESS Photometry and TOI Catalog}
TESS was launched in 2018, possessing four cameras with a total field of view (FOV) of 24$\times$96 degrees and a pixel scale of 21 arcseconds \citep{Ricker2015}. The full-frame images covering the FOV are released at a cadence of 30 minutes, while $\sim$200,000 selected targets are recorded as 11$\times$11 pixel cut-out images at a cadence of 2 minutes (as shown in Figure \ref{image:example}).
\begin{figure}
\centering
\includegraphics[width=3.2in]{imgaexample.pdf}
\caption{TESS example images of 14$\times$14 pixels. The images correspond to HAT-P-31b, HAT-P-35b, and WASP-56b, from top to bottom. The blue points refer to the planet position in the Gaia catalog \citep{GaiaDr2} while red points present nearby source positions.}
\label{image:example}
\end{figure}
The TOI catalog is built from the light curves obtained from TESS image products, including both 2-minute and 30-minute frames \citep{TOIcatalog}. The 2-minute cadence light curves are generated by the Science Processing Operations Center (SPOC) pipeline
and the 30-minute light curves by the Quick Look Pipeline (QLP) \citep{Twicken2016, Huang2020}. \citet{TOIcatalog} generate an automated pipeline to derive transit parameters and thereby identify planet candidates with a method similar to the Kepler Robovetter \citep{Thompson2018}. More than 2000 planet candidates (a number that is continuously updated) are identified in the TOI catalog, including both newly discovered and previously known planets.
The timing from the TOI catalog provides a long time baseline when compared with the previous ephemeris. The median timing baseline of the 262 exoplanets is 2368 days, while the median uncertainties of timings from archival data and from the TOI catalog are 0.59 and 0.84 minutes, respectively. The median uncertainty of archival periods is 4$\times$10$^{-6}$ days. 159 of the 262 hot Jupiters show TESS timings consistent within 1 $\sigma$ when compared to the previous ephemeris predictions, which circumstantially demonstrates the accuracy of TOI timings. We neglect the difference between the Barycentric Julian Date (BJD) and the Heliocentric Julian Date (HJD) in this work; the difference is within $\pm$4 s, well below the timing precision discussed.
The TOI catalog has been well utilized for exoplanet research, including TTV analyses that use the data under conditions similar to this work \citep{TOIforTTV, Martins2020TOI1,2021HowardTOI2}.
\subsection{TESS Transit Timing Validation}
\begin{figure}
\centering
\includegraphics[width=3.3in]{kelt-19.pdf}
\includegraphics[width=3.3in]{kelt-19all.pdf}
\caption{Light curves of KELT-19Ab as an example: a single epoch around the TOI timing (top panel) and folded multiple visits at the reference epoch (bottom panel). The blue points present the observations (10-minute cadence), while the green points are bins of every three points for clarity. The red line gives the transit model fit, with the yellow region indicating the 1 $\sigma$ confidence region. The vertical blue line gives the fitted timing; the black vertical line, the TOI timing; the green vertical line, the previous ephemeris prediction. The timings from single-epoch and folded-epoch fitting are only 0.14 minutes earlier and 0.20 minutes later than the TOI timing, respectively, a negligible difference as shown in the image (overlapping blue and black lines). The observed TESS timings show an offset of $\sim$15 minutes compared to the previous ephemeris prediction, as shown by the vertical green line. The fitting uncertainty is 0.54 minutes for a single epoch and 0.23 minutes for folded epochs.
}
\label{image:lc}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=7.5in]{HAT-P-31timing.pdf}
\caption{The timing difference of HAT-P-31b. The timing difference is the observed mid-transit times minus the ephemeris predictions. The red point refers to the TESS timing difference; black points refer to the timing differences of other observations from the literature \citep{Kipping2011, Mallonn2019}; the black dashed line is the reference ephemeris; the blue line, the alternative reference ephemeris; the red line is the refined ephemeris derived by including the TESS observation; the green region is the 1 $\sigma$ confidence region of the reference ephemeris; the brown region, the 1 $\sigma$ confidence region of the alternative reference ephemeris. We note that our refined ephemeris overlaps the alternative reference ephemeris, indicating the consistency of the two ephemerides.
}
\label{image:timing}
\end{figure*}
A validation of the TOI timing precision is necessary for the study of timing offsets relative to the previous ephemeris. The majority of the hot Jupiters (159 of 262) present consistent TOI timings, which is circumstantial evidence. A direct evaluation is performed by independently reducing the data and obtaining the TESS timings.
We have generated a photometric pipeline for the TESS image products (as shown in Figure \ref{image:example}) and applied it to the analysis of transiting exoplanets \citep{Yangatmos, YangLD}. The pipeline includes modules for, e.g., astrometry checking, aperture photometry, deblending of nearby contaminating flux, and light curve detrending. The details and evaluations of the pipeline are described in previous work \citep{Yangatmos, YangLD}. From the tests and applications so far, the derived transit parameters agree within 1 $\sigma$ when we apply the same fitting to the TESS SPOC light curves.
We check whether the timings obtained from our pipeline are consistent with the TOI timings and find that the differences are within 2 minutes. The comparison is performed on WASP-161b (the largest timing offset, with the TESS timing earlier than the prediction), WASP-17b (the largest offset with the TESS timing later than the prediction), and WASP-58b (whose TESS timing is consistent with the previous ephemeris) \citep{Anderson2010, wasp161, Mallonn2019}.
The comparison also includes WASP-121b and KELT-19Ab, which have been applied in analyses of the transit depth in previous work \citep{Yangatmos, YangLD}.
Applying a Markov chain Monte Carlo (MCMC) method \citep{pymc,pya}, the light curve is fitted with a planet transit model \citep{Mandel_Agol2002}. The light curve detrending is performed with a stellar activity check from archival data and TESS photometry to avoid possible timing bias. The choice of a `circular orbit' or a `Keplerian orbit' is consistent with the archival reference work. The details of the free parameters and priors are as described in \citet{Yangatmos, YangLD}. For a `circular orbit', the free parameters are the transit mid-point ($T_0$), the radius ratio of planet to star ($R_{p}/R_{\ast}$), the semi-major axis ($a/R_{\ast}$), the inclination ($i$), and the limb darkening parameters. For a `Keplerian orbit', the model has extra free parameters: the orbital eccentricity ($e$), the longitude of the ascending node, the periapsis passage time, and the periapsis argument. The MCMC fitting runs 50,000 steps after the first 50,000 steps of initialization. All the priors are uniform, except for the limb darkening, which applies a Gaussian prior interpolated from the limb darkening table \citep{TESSLD}.
We apply the transit model to both the light curve of a single epoch and the light curve folded from multiple epochs (examples as shown in Figure \ref{image:lc}). The folding is based on the archival ephemeris, and we evaluate the fitting parameter bias incurred by folding with an inappropriate period \citep[using the same method as in][]{Yanghats5b}. For one TESS sector, the timing bias is $\sim$4 minutes if the period is biased by 0.0004 days. Such a large period bias would cause significant TESS timing offsets when compared to the ephemeris prediction and is thereby flagged. The fold-and-check method has been well utilized in period searching researches \citep{PDM1989,yangbinary,yang2021ltd064402245919}.
The fitted timings from our pipeline are consistent within 4 minutes when compared to the TOI timings in the comparison sample. The median timing offset between our results and the TOI timings is 1.43 minutes. The median TOI timing uncertainty is 0.83 minutes. We conclude that it is reasonable to use the TOI timings. The TOI timing offset to the previous ephemeris is regarded as significant if the offset is larger than 4 minutes, which is 3 times the median difference. We also require the timing offset to be larger than 1 combined $\sigma$, which is the square root of the quadratic sum of the archival ephemeris uncertainty and the TESS timing uncertainty. These criteria lead to a final sample of 31 hot Jupiters.
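As a minimal sketch (not the authors' code) of this selection rule, an offset is flagged when it exceeds both 4 minutes and one combined sigma; the split of WASP-161b's combined 4.1-minute uncertainty into ephemeris and TESS components below is illustrative only.

```java
public class TimingOffset {
    // Combined 1-sigma uncertainty: square root of the quadratic sum of the
    // archival ephemeris uncertainty and the TESS timing uncertainty (minutes).
    static double combinedSigma(double sigmaEphemeris, double sigmaTess) {
        return Math.sqrt(sigmaEphemeris * sigmaEphemeris + sigmaTess * sigmaTess);
    }

    // Selection rule from the text: |offset| > 4 minutes and |offset| > 1 combined sigma.
    static boolean isSignificant(double offsetMinutes, double sigmaEphemeris, double sigmaTess) {
        double sigma = combinedSigma(sigmaEphemeris, sigmaTess);
        return Math.abs(offsetMinutes) > 4.0 && Math.abs(offsetMinutes) > sigma;
    }

    public static void main(String[] args) {
        // WASP-161b's offset of -203.7 minutes is clearly flagged;
        // a 2-minute offset is not, whatever its uncertainty.
        System.out.println(isSignificant(-203.7, 4.0, 1.0));  // true
        System.out.println(isSignificant(2.0, 0.6, 0.8));     // false
    }
}
```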
\section{Hot Jupiters with TESS Timing Offsets}
We obtain a sample of 31 targets with significant timing offsets compared to the previous ephemeris predictions. An example is shown in Figure \ref{image:timing}, with the whole sample shown in Figure \ref{image:timing appendix}. The parameters are presented in Table \ref{table:timing}, including the planet ID, the TESS time minus the predicted time from the previous ephemeris ($\Delta T_{C}$), the transit midpoint $T_{C}$, the orbital period $P$, the category flag, and the parameter reference.
In our sample, the median $\Delta T_{C}$ is 17.8 minutes while the median combined uncertainty is 5.2 minutes. Therefore the signal-to-noise ratio (SNR) is 3.4. Among 31 Jupiters, WASP-161b presents the earliest offset timing of -203.7$\pm$4.1 minutes. WASP-17b gives the latest offset timing of 70.8$\pm$11.7 minutes. The timing uncertainty is derived as the quadratic sum of uncertainties of previous ephemeris and TESS timing.
We classify the sources into four categories, according to the potential properties implied by the timing offsets.
A type I target is a source whose timings can be modeled with a linear function. The timing offset could be due either to underestimated systematic errors or to some physical process. A linear model has a constant derivative, corresponding to a constant period. Most type I targets have only two timings, so we can hardly distinguish the origin of the period difference between alternative linear functions. We regard the period obtained by the linear function fitted to the TESS timing as a refined period. Type II targets are those whose timing differences cannot be modeled by a linear function but can be fitted with a quadratic function. The quadratic behavior can be due to abnormal points or to physical processes that lead to a constant period derivative. We identify targets as type III if the TESS timing offset is probably due to a systematic effect in the previous ephemeris and the timing differences (at least three data sets) can be well fitted by a linear function.
The fitted linear function then refines the ephemeris. Sources are classified as type IV if the timings cannot be fitted by either a linear or a quadratic function.
The possible physical origins of the timing offsets are discussed in Section~\ref{sec:physics}.
{
\setlength{\tabcolsep}{2pt}
\setlength\LTcapwidth{\textwidth}
\begin{longtable*}{|l|l|l|l|l|l|}
\caption{Exoplanet parameters. `1' in the column `Planet ID' indicates the reference ephemeris in Figures \ref{image:timing} and \ref{image:timing appendix}, while `2' indicates the alternative ephemeris.}
\label{table:timing}\\
\hline
Planet ID & $\Delta $$T_{C}$ & $T_{c}$ & $P$ & Category flags & Reference \\
& minutes & HJD & days & & \\
\endfirsthead
\multicolumn{6}{c}%
{\tablename\ \thetable\ -- \textit{Continued from previous page}} \\
\hline
Planet ID & $\Delta $$T_{C}$ & $T_{c}$ & $P$ & Category flags & Reference \\
& minutes & HJD & days & & \\
\hline
\endhead
\hline \multicolumn{6}{c}{\textit{Continued on next page}} \\
\endfoot
\hline
\endlastfoot
\hline
WASP-161b & -203.7$\pm$4.1 & 2459249.035676$\pm$0.000594 & 5.405625$\pm$0.000003 & I & This work \\
1 & & 2457416.5289$\pm$0.0011 &
5.4060425$\pm$0.0000048 & & \cite{wasp161} \\
\hline
HAT-P-31b & -206.0$\pm$131.6 & 2459010.826736$\pm$0.001149 & 5.005272$\pm$0.000005 & III & This work \\
1 & & 2454320.8866$\pm$0.0052 &5.005425$\pm$0.000092 & &\cite{Bonomo2017} \\
2& & 2458169.9410$\pm$0.0017 &
5.0052724$\pm$0.0000063 & &\cite{Mallonn2019} \\
\hline
KELT-1b & -67.4$\pm$53.9 & 2458765.533813$\pm$0.000299 & 1.217494$\pm$0.0000002 & III & This work \\
1 & & 2455914.1628$\pm$0.0023 &
1.217514$\pm$0.000015 & & \cite{Siverd2012} \\
2 & & 2456093.13464$\pm$0.00019 &
1.21749448$\pm$0.00000080 & & \cite{Baluev2015}\\
\hline
TOI-163 b & -57.2$\pm$22.0 & 2459310.502979$\pm$0.000817 & 4.231135$\pm$0.000003 & I & This work \\
1 & & 2458328.87970$\pm$0.00063 &
4.231306$\pm$0.000063 & & \cite{Kossakowski2019} \\
\hline
WASP-54b & -55.9$\pm$8.6 & 2458931.236409$\pm$0.000435 & 3.693599$\pm$0.000001 & I & This work \\
1 & & 2455518.35087$\pm$0.00053 &
3.6936411$\pm$0.0000059 & &\cite{Bonomo2017} \\
\hline
WASP-173Ab & 1.2$\pm$0.9 & 2458355.195662$\pm$0.00047 &
1.386632$\pm$0.000001 & III & This work \\
& -30.4$\pm$1.1 & 2458355.173660$\pm$0.000620 &
1.386632$\pm$0.000001 & & TOI timing \\
1 & & 2457288.8585$\pm$0.0002 &
1.38665318$\pm$0.00000027& & \cite{Hellier2019} \\
2 & & 2458105.59824$\pm$0.00090 &
1.3866529$\pm$0.0000027 & & \cite{Labadie2019} \\
\hline
KELT-18b & -26.8$\pm$2.3 & 2458714.181140$\pm$0.000380 & 2.871706$\pm$0.000001 & I & This work \\
1 & & 2457542.52504$\pm$0.00039 &
2.8717518$\pm$0.0000028 & & \cite{McLeod2017} \\
2 & & 2457542.52463$\pm$0.00057 &
2.8716992$\pm$0.0000013 & & \cite{Maciejewski2020}\\
\hline
XO-3b & -17.8$\pm$1.2 & 2458819.064098$\pm$0.000279 & -- & IV & This work \\
1 & & 2455292.43266$\pm$0.00015 &
3.19153285$\pm$0.00000058 & & \cite{Wong2014} \\
2 & & 2454449.86816$\pm$0.00023 &
3.1915239$\pm$0.0000068 & & \cite{Winn2008}\\
& & 2456419.04365$\pm$0.00026 &
3.19153247$\pm$0.00000055& & \cite{Wong2014}\\
\hline
WASP-101b & -17.3$\pm$5.2 & 2459223.302264$\pm$0.000132 & 3.585708$\pm$0.0000002 & I & This work \\
1 & & 2456164.6934$\pm$0.0002 &
3.585722$\pm$0.000004 & & \cite{Hellier2014} \\
\hline
K2-237b & -15.5$\pm$3.9 & 2458626.800781$\pm$0.000869 & 2.180534$\pm$0.0000014 & I & This work \\
1 & & 2457656.4633789$\pm$0.0000048 & 2.1805577$\pm$0.0000057 & & \cite{Smith2019} \\
\hline
KELT-7b & -12.4$\pm$5.4 & 2458819.253410$\pm$0.000240 & 2.734765$\pm$0.0000002 & I & This work \\
1 & & 2456355.229809$\pm$0.000198 & 2.734775$\pm$0.0000039 & & \cite{Bieryla2015} \\
\hline
WASP-76b & -11.9$\pm$2.9 & 2459117.687201$\pm$0.000119 & 1.809881$\pm$0.0000001 & I & This work \\
1 & & 2456107.85507$\pm$0.00034 & 1.809886$\pm$0.000001 & & \cite{West2016} \\
\hline
WASP-95b & -10.7$\pm$2.9 & 2459084.585010$\pm$0.000110 & 2.184667$\pm$0.0000001 & I & This work \\
1 & & 2456338.458510$\pm$0.000240 & 2.184673$\pm$0.0000014 & & \cite{Hellier2014} \\
\hline
KELT-14b & -10.7$\pm$5.2 & 2459252.535529$\pm$0.000108 & -- & II & This work \\
1 & & 2457091.028632$\pm$0.000470 & 1.710059$\pm$0.0000025 & & \cite{Rodriguez2016} \\
2 & & 2456665.224010$\pm$0.000210 & 1.710057$\pm$0.0000032 & & \cite{Turner2016} \\
\hline
KELT-21b & -9.8$\pm$2.4 & 2458686.841940$\pm$0.000580 & 3.612747$\pm$0.0000013 & I & This work \\
1 & & 2457295.934340$\pm$0.000410 & 3.612765$\pm$0.0000030 & & \cite{Johnson2018} \\
\hline
WASP-35b & -9.5$\pm$3.5 & 2459176.768453$\pm$0.000197 & 3.161569$\pm$0.0000002 & I & This work \\
1 & & 2455531.479070$\pm$0.000150 & 3.161575$\pm$0.0000020 & & \cite{Enoch2011} \\
\hline
TOI-1333b & 2.67$\pm$1.44& 2458715.1230$\pm$0.0010 & 4.720172$\pm$0.000025 & I & This work \\
& -5.7$\pm$1.5 & 2458715.117140$\pm$0.000550 & 4.720314$\pm$0.0000116 & & TOI timing \\
1 & & 2458913.370330$\pm$0.000450 & 4.720219$\pm$0.0000110 & & \cite{Rodriguez2021} \\
\hline
WASP-17b & 70.8$\pm$11.7 & 2458627.126221$\pm$0.000584 & 3.735485$\pm$0.0000003 & III & This work \\
1 & & 2454559.181020$\pm$0.000280 & 3.735442$\pm$0.0000072 & & \cite{Anderson2010} \\
& & 2454577.85806$\pm$0.00027 & 3.7354380$\pm$0.0000068 & & \cite{Anderson2011} \\
& & 2454592.80154$\pm$0.00050 & 3.7354845$\pm$0.0000019 & & \cite{Southworth2012} \\
& & 2457192.69798$\pm$0.00028 & 3.735438 & & \cite{Sedaghati2016} \\
\hline
WASP-99b & 61.6$\pm$31.2 & 2459135.796019$\pm$0.000239 & 5.752595$\pm$0.0000020 & I & This work \\
1 & & 2456224.983200$\pm$0.001400 & 5.752510$\pm$0.0000400 & & \cite{Bonomo2017} \\
\hline
WASP-58b & 37.4$\pm$13.5 & 2458986.981902$\pm$0.000409 & 5.017215$\pm$0.0000013 & III & This work \\
1 & & 2455183.933500$\pm$0.001000 & 5.017180$\pm$0.0000110 & & \cite{Hebrard2013} \\
2 & & 2457261.059700$\pm$0.000620 & 5.017213$\pm$0.0000026 & & \cite{Mallonn2019} \\
\hline
WASP-187b & 34.5$\pm$8.7 & 2458764.856300$\pm$0.002600 & 5.147913$\pm$0.0000033 & I & This work \\
1 & & 2455197.352900$\pm$0.002000 & 5.147878$\pm$0.0000050 & & \cite{Schanche2020} \\
\hline
HAT-P-6b & 26.3$\pm$9.2 & 2458740.188710$\pm$0.000360 & 3.853000$\pm$0.0000003 & I & This work \\
1 & & 2454035.675750$\pm$0.000280 & 3.852985$\pm$0.0000050 & & \cite{Noyes2008} \\
& & 2454347.76763$\pm$0.00042 & & & \cite{Szabo2010} \\
& & 2454698.3908$\pm$0.0011 & & & \cite{Szabo2010} \\
\hline
KELT-23Ab & 23.8$\pm$7.7 & 2458683.911214$\pm$0.000056 & -- & IV & This work \\
1 & & 2458140.379200$\pm$0.002700 & 2.255251$\pm$0.0000110 & & \cite{Johns2019} \\
2 & & 2458140.386980$\pm$0.000200 & 2.255288$\pm$0.0000007 & & \cite{Maciejewski2020} \\
\hline
WASP-33b & 22.4$\pm$6.9 & 2458791.414307$\pm$0.000169 & 1.219871$\pm$0.0000001 & III & This work \\
1 & & 2454163.223730$\pm$0.000260 & 1.219867$\pm$0.0000012 & & \cite{Collier2010} \\
2 & & 2455507.522200$\pm$0.000300 & 1.219868$\pm$0.0000011 & & \cite{von2014} \\
\hline
WASP-78b & 18.8$\pm$11.1 & 2459175.589610$\pm$0.000863 & 2.175185$\pm$0.0000006 & III & This work \\
1 & & 2455882.359640$\pm$0.000530 & 2.175176$\pm$0.0000047 & & \cite{Bonomo2017} \\
2 & & 2456139.030300$\pm$0.000500 & 2.175173$\pm$0.0000030 & & \cite{Brown2017} \\
\hline
KELT-19Ab & 15.2$\pm$5.9 & 2459222.789720$\pm$0.000183 & 4.611734$\pm$0.0000007 & I & This work \\
1 & & 2457281.249537$\pm$0.000361 & 4.611709$\pm$0.0000088 & & \cite{Siverd2018} \\
\hline
WASP-178b & 12.9$\pm$3.1 & 2458602.836430$\pm$0.001860 & 3.344842$\pm$0.0000014 & III & This work \\
1 & & 2456927.068390$\pm$0.000470 & 3.344829$\pm$0.0000012 & & \cite{Hellier2019} \\
2 & & 2458321.867240$\pm$0.000380 & 3.344841$\pm$0.0000033 & & \cite{Rodr2020} \\
\hline
WASP-94Ab & 10.2$\pm$4.0 & 2459039.335846$\pm$0.000386 & 3.950201$\pm$0.0000005 & I & This work \\
1 & & 2456416.402150$\pm$0.000260 & 3.950191$\pm$0.0000037 & & \cite{Bonomo2017} \\
\hline
HAT-P-69b & 9.7$\pm$1.5 & 2459242.559429$\pm$0.000245 & 4.786992$\pm$0.0000036 & I & This work \\
1 & & 2458495.788610$\pm$0.000720 & 4.786949$\pm$0.0000018 & & \cite{Zhou2019} \\
\hline
KELT-24b & 7.9$\pm$0.9 & 2458684.821890$\pm$0.000320 & -- & II & This work \\
1 & & 2458540.477590$\pm$0.000360 & 5.551493$\pm$0.0000081 & & \cite{Rodriguez2019} \\
2 & & 2458268.454590$\pm$0.000870 & 5.551492$\pm$0.0000086 & & \cite{Maciejewski2020} \\
\hline
TOI-628b & 3.8$\pm$3.4 & 2458469.232700$\pm$0.002220 & 3.409511$\pm$0.000048 & I & This work \\
& 7.4$\pm$1.2 & 2458469.235200$\pm$0.000430 & 3.409458$\pm$0.0000086 & & TOI timing \\
1 & & 2458629.479720$\pm$0.000390 & 3.409568$\pm$0.0000070 & & \cite{Rodriguez2021} \\
\hline
\end{longtable*}
}
We verify the TOI timings of the 31 hot Jupiters, among which WASP-173Ab, TOI-1333b, and TOI-628b need timing recalibration. We check the TESS raw data (2-minute cadence) of WASP-173Ab and find an abnormal data point around a transit at 2458356.564637 (HJD). The abnormal point biases the modeling if not clipped by the automatic pipeline. We refit the TESS light curve with the abnormal data clipped. The timing is 2458355.195662$\pm$0.00047 (HJD) when we fit one transit visit and 2458355.195907$\pm$0.0001 (HJD) when fitting the visits folded through the whole sector. These two results are consistent within 0.35 minutes and differ from the TOI timing by 29 minutes. The refitted TESS timing is consistent with the previous ephemeris (as shown in Figure \ref{image:timing_corre}).
The TOI-1333b timing derived by refitting the TESS light curve is 2458715.1230$\pm$0.0010 (HJD), which is 8.4 minutes later than the TOI-derived timing (as shown in Figure \ref{image:timing_corre}). The TESS 30-minute data (available for TOI-1333b) have some abnormal points around transits that would bias the timings if sigma clipping is not applied. Removing the abnormal data points, we refit the light curve for the timing. The timings derived from a single transit and from conjunct transits differ by 1.8 minutes (within 0.3 combined $\sigma$). The timing is close ($\sim$ 1 $\sigma$) to the prediction of the previous ephemeris (Figure \ref{image:timing_corre}).
We derive a conjunct timing of 1469.23270$\pm$0.00222 (HJD) for TOI-628b, while a single transit visit gives a midpoint at 1469.2332$\pm$0.0074 (HJD). The value is $\sim$ 1 $\sigma$ earlier than the TOI timing and is consistent with the previous ephemeris. We note that the TOI timings are highly reliable: only two sources among the 262 TOI hot Jupiters are found with significantly inappropriate values.
\begin{figure}
\centering
\includegraphics[width=3.3in]{WASP-173Abtiming.pdf}
\includegraphics[width=3.3in]{TOI-1333btiming.pdf}
\includegraphics[width=3.3in]{TOI-628btiming.pdf}
\caption{The timing differences with corrected timings for WASP-173Ab, TOI-1333b, and TOI-628b. The symbols are similar to Figure \ref{image:timing}. The green diamond indicates the TOI timing; the red diamond gives the timing generated from the TESS raw images. We classify WASP-173Ab as a type III target, and TOI-1333b and TOI-628b as type IV targets (as shown in Table \ref{table:timing}).
}
\label{image:timing_corre}
\end{figure}
\subsection{Ephemeris Refinement}
We refine the ephemeris of the type I and III targets in our sample; we do not apply any refinement to the type II and IV sources. The new ephemeris consists of the TESS timing and a refined period (as shown in Table \ref{table:timing}). The refinement has a median precision of 1.11 minutes until 2025 and 1.86 minutes until 2030. The largest uncertainty is 7.22 minutes in 2030, for WASP-187b.
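The future-epoch precisions follow from standard propagation of the linear ephemeris, $\sigma_{T}(E) = \sqrt{\sigma_{T_0}^2 + E^2\sigma_P^2}$, assuming uncorrelated $T_0$ and $P$ errors (a sketch):

```python
import math

def ephemeris_uncertainty_minutes(sigma_t0_days, sigma_p_days,
                                  period_days, elapsed_days):
    """Propagated 1-sigma transit-time uncertainty after `elapsed_days`,
    for a linear ephemeris with uncorrelated T0 and P errors."""
    epoch = elapsed_days / period_days
    return math.hypot(sigma_t0_days, epoch * sigma_p_days) * 24.0 * 60.0
```

The linear growth of the period term explains why the prediction degrades from 2025 to 2030.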
The ephemeris precision depends on the length of the time baseline and on the transit timing precision. The timing uncertainties could be underestimated owing to the techniques used in light curve generation and high-dimensional model fitting \citep{Yangatmos, YangLD}. Empirically, timings obtained from a single transit visit are usually accurate to within twice the reported uncertainty (Yang et al. submitted). A conjunct timing derived from multiple visits under a constant-period assumption might be biased if the folding period is not precise, especially when the light curves only partially cover the transits. Correcting the timing biases in archival papers (if present) is beyond the scope of this work.
The period can be updated when more observations become available \citep{Mallonn2019, Edwards2021, Wang2021}. For the type I Jupiters, the periods from previous works are significantly different from the periods derived in our refinement. We note that these period differences might originate from physical processes, which would make the refinement inappropriate (as discussed in Section~\ref{sec:physics}). The type III exoplanets likely reflect significant uncertainty underestimation in the previous ephemeris. Type II and IV targets might result from the inclusion of abnormal timings or from more complicated physical processes.
\begin{figure}
\centering
\includegraphics[width=3.3in]{kelt-19quadratic.pdf}
\caption{KELT-19 timings fitted with a quadratic function. The symbols are similar to Figure \ref{image:timing}. The red line shows the quadratic function model.}
\label{image:timingbinary}
\end{figure}
\section{Discussion: Possible Physical Origin}
\label{sec:physics}
Some targets in our sample present very significant period differences when compared to former results. It might not be a good hypothesis to attribute all of these differences to the underestimation of archival period uncertainties: the period bias caused by a timing shift of 2 minutes would be only $\sim$ 10$^{-5}$ days for a time baseline of 1 year.
We argue that a very significant period difference might instead be attributed to a physical period-changing process. We find that the targets in our sample with an offset SNR larger than 10 all present earlier observed timings; these sources are WASP-161b, XO-3b, and KELT-18b. A period difference caused by systematic underestimation should be unsigned (equally likely early or late), which is not the case. To the best of our knowledge, tidal dissipation could explain this observational phenomenon.
The tidal torque transfers energy between the star-planet orbit and the rotation of the star and planet \citep{Goldreich1966, Lin1996, Naoz2011, Wu2011, Dawson2018, RodetandLai}. The process could cause period decay and apsidal precession \citep{Hut1981, Ragozzine2009}. The induced TTV has been discovered in WASP-12b at the level of a few minutes \citep{2011wasp12b,2017wasp12b}, and TESS provides the most recent evidence for the WASP-12b TTV \citep{2021wasp12bTESS}.
\begin{figure*}
\centering
\includegraphics[width=7.5in]{xo-3bqua.pdf}
\caption{The TTV of XO-3b. The symbols are similar to Figure \ref{image:timing} while the red dashed line presents the fitted quadratic function as described in the text.}
\label{image:xo-3}
\end{figure*}
WASP-161b, which shows the most significant TESS timing offset in this sample, presents a period derivative of -1.16$\times$10$^{-7}\pm$2.25$\times$10$^{-8}$ (details in Yang et al. 2021, submitted). WASP-161b is possibly undergoing tidal dissipation. We have an approved CHEOPS program of two visits in early 2022 for further investigation. WASP-161b is regarded as a type I target in this work.
The period of XO-3b has been reported with different values in previous works \citep[][and references therein]{Winn2008, Winn2009, Johns-Krull2008, Wong2014, Bonomo2017}. The TESS timing presents an offset of -17.8$\pm$1.2 minutes (14.8 $\sigma$) from the newest archival ephemeris of \citet{Bonomo2017}. The timing generated by our pipeline is consistent within 0.3 minutes with the TOI timing, and the uncertainties are similar ($\sim$ 0.45 minutes). We gather the archival timings \citep{Winn2008, Johns-Krull2008, Wong2014, Bonomo2017} and obtain a timing baseline longer than 10 years, with the TESS timing as the most recent point (as shown in Figure \ref{image:xo-3}). The timings cannot be well fitted by any linear function.
We find that a quadratic function is a good fit to the data sets, corresponding to a model with a constant period derivative. The period derivative ($\dot{P}$) is 5.8$\times$10$^{-9}$$\pm$9.3$\times$10$^{-10}$.
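Under a constant period derivative, $T(E) = T_0 + PE + \tfrac{1}{2}P\dot{P}E^2$, so $\dot{P}$ follows from the quadratic coefficient of a polynomial fit to the timings versus epoch. A sketch with synthetic timings (the epoch grid and injected values are illustrative, chosen to match the numbers quoted here):

```python
import numpy as np

def fit_period_derivative(epochs, times_days):
    """Fit T(E) = T0 + P*E + 0.5*(dP/dE)*E^2 and return (P, Pdot),
    where Pdot = (dP/dE)/P is the dimensionless period derivative."""
    c2, c1, c0 = np.polyfit(epochs, times_days, 2)
    period = c1
    pdot = 2.0 * c2 / period
    return period, pdot

# synthetic check with XO-3b-like values from the text
epochs = np.arange(0, 1200, dtype=float)
p_true, pdot_true = 3.19153, 5.8e-9
times = p_true * epochs + 0.5 * p_true * pdot_true * epochs ** 2
p_fit, pdot_fit = fit_period_derivative(epochs, times)
```

On noiseless data the fit recovers the injected $P$ and $\dot{P}$; real timings carry measurement errors, which propagate into the quoted uncertainty.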
The $\dot{P}$ could be explained by a tidal dissipation model, expressed as \citep[`equilibrium tide';][]{Hut1981,Themodel}:
\begin{equation}
\label{modelapplied}
\begin{split}
\dot{P} = \frac{27\pi}{Q_p'}\left(\frac{M_\star}{M_p}\right)\left(\frac{R_p}{a}\right)^5\left[N(e)\,x_\mathrm{p}\,
\frac{\omega_\mathrm{p}}{n}-N_a(e)\right] \\
+ \frac{27\pi}{Q_*'}\left(\frac{M_p}{M_\star}\right)\left(\frac{R_\star}{a}\right)^5\left[N(e)\,x_\star\,
\frac{\omega_\star}{n}-N_a(e)\right].
\end{split}
\end{equation}
Here $Q_p'$ is the `modified tidal quality factor', $\omega_\mathrm{p}$ the planet's rotation rate, $x_\mathrm{p}$ the obliquity, and $n$ the orbital mean motion. Replacing the subscript $p$ with $\star$ gives the corresponding stellar parameters. $N(e)$ and $N_a(e)$ are defined as:
\begin{equation*}
\label{n_e}
N(e) = \frac{1+\frac{15}{2}e^2+\frac{45}{8}e^4+\frac{5}{16}e^6}{(1-e^2)^{6}}
\end{equation*}
and
\begin{equation*}
\label{na_e}
N_a(e)=\frac{1+\frac{31}{2}e^2+\frac{255}{8}e^4+\frac{185}{16}e^6+\frac{25}{64}e^8}{(1-e^2)^{15/2}}.
\end{equation*}
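The eccentricity functions can be transcribed directly (both reduce to unity for a circular orbit, which is a quick sanity check):

```python
def N(e):
    """Eccentricity function N(e) from the tidal model above."""
    return (1 + 15/2*e**2 + 45/8*e**4 + 5/16*e**6) / (1 - e**2)**6

def N_a(e):
    """Eccentricity function N_a(e) from the tidal model above."""
    return (1 + 31/2*e**2 + 255/8*e**4 + 185/16*e**6 + 25/64*e**8) \
        / (1 - e**2)**7.5
```

Both functions grow steeply with $e$, which is why XO-3b's eccentricity of $\sim$0.276 matters for the inferred quality factors.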
The value of $Q'_\star$ is 1.6$\times$10$^{5}$$\pm$0.2$\times$10$^{5}$ under the assumption that the period decay is due to the stellar tide, and the value of $Q'_p$ is 1.9$\times$10$^{4}$$\pm$0.2$\times$10$^{4}$ under the assumption that it is due to the planetary tide.
The values are derived from Equation~\ref{modelapplied}, with a stellar mass of 1.213$\pm$0.066 $M_{\odot}$, a stellar radius of 1.377$\pm$0.083 $R_{\odot}$, a planet mass of 11.70$\pm$0.42 $M_{J}$, a planet radius of 1.217$\pm$0.073 $R_{J}$, an eccentricity of 0.27587$^{+0.00071}_{-0.00067}$, an orbital semi-major axis of 4.95$\pm$0.18 (in units of stellar radii), and an obliquity of 70$\pm$15 degrees \citep{Hebrard2008,Bonomo2017,Stassun2017}.
XO-3b is reported as a candidate giant planet undergoing migration in previous work \citep{Bonomo2017}, due to its large eccentricity and high $M_{p}$/$M_{\ast}$ (larger than 6$\times$10$^{-3}$). Our timing result provides direct observational evidence to support this scenario.
Apsidal precession can be excited when the tidal torque exists \citep{Ragozzine2009}. Distinguishing tidal dissipation from precession requires modeling occultation timings \citep{2017wasp12b,2020wasp12b, 2021wasp12bTESS}. XO-3b has also been proposed as a candidate for precession in previous work \citep{Jordan2008}. No clues of a binary companion are reported in the XO-3b references. Further investigation of XO-3b is presented in a following work (Yang et al. submitted). We note that period changes originating from precession or the R$\o$mer effect should be unsigned, the same as those from systematic underestimation.
The relation between the planetary period derivative and the host-star acceleration is well modeled \citep{wasp-4b2020}. In our sample, KELT-19Ab shows the maximum stellar acceleration, 4 m s$^{-1}$ yr$^{-1}$, originating from its binary companion \citep{KELT-19}. This acceleration would cause a period derivative of 5.32 ms yr$^{-1}$, according to the calculation of \citet{wasp-4b2020}. We generate the TESS timings in both 2019 and 2020. The TOI catalog gives the 2020 timing, which differs from our result by only 0.14 minutes (as shown in Figure \ref{image:lc} and the caption therein). We find that the timings can be fitted with both a linear and a quadratic function (as shown in Figure \ref{image:timingbinary}). The quadratic fit indicates a period derivative of 112$\pm$94 ms yr$^{-1}$. Therefore, we conclude that combining TESS and archival timings does not reveal a significant TTV dominated by stellar acceleration for KELT-19Ab, and we regard the R$\o$mer effect as beyond the detection limit of this work.
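The quoted 5.32 ms yr$^{-1}$ follows from the standard Doppler relation $\dot{P} = P\,\dot{v}_{r}/c$; a one-line check (a sketch, with the KELT-19Ab numbers above):

```python
def doppler_pdot_ms_per_yr(period_days, accel_m_per_s_per_yr):
    """Period derivative induced by a line-of-sight acceleration:
    Pdot = P * (dv_r/dt) / c, returned in ms per year."""
    c = 299792458.0  # speed of light, m/s
    period_s = period_days * 86400.0
    return period_s * accel_m_per_s_per_yr / c * 1000.0
```

With $P = 4.611734$ days and 4 m s$^{-1}$ yr$^{-1}$ this gives $\approx 5.3$ ms yr$^{-1}$, consistent with the value above and far below the 112$\pm$94 ms yr$^{-1}$ sensitivity of the quadratic fit.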
\section{Summary}
We discuss the ephemeris of 31 hot Jupiters whose TESS timings show significant offsets. The TESS timings come from the TOI catalog and are validated using our own pipeline, which extracts the light curve from the raw TESS images and fits it with a planet transit model. Our pipeline gives results consistent with the TOI catalog.
Among the sample, the TESS timings present a median offset of 17.8$\pm$5.2 minutes, equivalent to an SNR of 3.4, when compared to the previous ephemeris. WASP-161b and XO-3b give the most significant timing offsets. The ephemeris refinement serves potential follow-up observations with facilities such as CHEOPS, the ongoing James Webb Space Telescope, and the Ariel Space Telescope. The refined timings reach a median precision within 1.11 minutes over the next five years and 1.86 minutes over the next ten years.
WASP-161b, XO-3b, and KELT-18b present timing offsets larger than 10 $\sigma$. All three targets have observed timings earlier than the predictions of the previous ephemeris under a constant-period assumption. WASP-161b presents evidence of period decay in previous work (Yang et al. submitted).
XO-3b presents a tentative TTV that can be modeled by a quadratic function. The derived period derivative is 5.8$\times$10$^{-9}$$\pm$9.3$\times$10$^{-10}$. Tidal dissipation can explain the TTV with a $Q'_\star$ of 1.6$\times$10$^{5}$$\pm$0.2$\times$10$^{5}$ or a $Q'_p$ of 1.9$\times$10$^{4}$$\pm$0.2$\times$10$^{4}$. Apsidal precession could be an alternative explanation for the TTV. Interestingly, all four targets with significant observed TTVs (WASP-161b, XO-3b, WASP-12b, WASP-4b) show timings earlier than the constant-period prediction. Apsidal precession cannot explain this, since the timing variation caused by precession should be unsigned. Further observations, e.g., occultation timing monitoring, would be helpful for confirmation.
\section*{Acknowledgements}
This work made use of the NASA Exoplanet Archive\footnote{\url{https://exoplanetarchive.ipac.caltech.edu/index.html}} and PyAstronomy\footnote{\url{https://github.com/sczesla/PyAstronomy}} \citep{pya}. We would like to thank Ranga-Ram Chary for helpful discussions. Su-Su Shan, Fan Yang, and Ji-Feng Liu acknowledge funding from the Cultivation Project for LAMOST Scientific Payoff and Research Achievement of CAMS-CAS, the National Key Research and Development Program of China (No. 2016YFA0400800), and the National Natural Science Foundation of China (NSFC; No. 11988101). Xing Wei is supported by NSFC (No. 11872246, 12041301) and the Beijing Natural Science Foundation (No. 1202015). Hai-yan Zhang acknowledges NSFC (No. 12041301, U1831128).
\clearpage
\newpage
\section*{Appendix}
\renewcommand{\thefigure}{A\arabic{figure}}
\setcounter{figure}{0}
\begin{figure}[ht]
\centering
\includegraphics[width=3.3in]{appendix/WASP-161b.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-31b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-1b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-163b.pdf}
\caption{Timing differences of the 31 targets. The symbols are the same as in Figure \ref{image:timing}; the in-image legend is omitted for clarity.}
\label{image:timing appendix}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-54b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-173Ab.pdf}
\includegraphics[width=3.3in]{appendix/KELT-18b.pdf}
\includegraphics[width=3.3in]{appendix/XO-3b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-101b.pdf}
\caption{(Continued)}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/K2-237b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-7b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-76b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-95b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-14b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/KELT-21b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-35b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-1333b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-17b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-99b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-58b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-187b.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-6b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-23Ab.pdf}
\includegraphics[width=3.3in]{appendix/WASP-33b.pdf}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-78b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-19Ab.pdf}
\includegraphics[width=3.3in]{appendix/WASP-178b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-94Ab.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-69b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/KELT-24b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-628b.pdf}
\caption{(Continued) }
\end{figure}
\clearpage
\newpage
\bibliographystyle{aasjournal}
\section{Introduction}
Transit ephemerides are crucial for exoplanet follow-up investigations, e.g., atmosphere analysis \citep{Berta2012, Deming2013, YangLD} and orbital evolution \citep{Lendl2014, Dawson2018, Millholland2018,2020wasp12b}.
The newly commissioned Transiting Exoplanet Survey Satellite \citep[TESS;][]{Ricker2015} provides precise timings over a long baseline when combined with previous works, enabling a better transit ephemeris.
The observed transit timing could deviate from the ephemeris prediction owing either to the underestimation of ephemeris uncertainties \citep{Mallonn2019} or to physical processes \citep{Agol2018}. Such transit timing variations (TTVs) could originate from tidal dissipation, orbital precession, the R$\o$mer effect, mass loss, and multiple planets \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS,wasp-4b2020, Ragozzine2009, Lai2010,Mazeh2013, Agol2018}. For hot Jupiters, planetary companions are usually neither massive nor close enough to generate significant TTVs \citep{Huang2016, Dawson2018}.
TTVs provide direct evidence of the tidal dissipation that likely drives hot Jupiter migration \citep{Dawson2018}. WASP-12b has been reported to undergo tidal dissipation based on observed TTVs \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS}. The TTVs are at the level of $\sim$ 5 minutes over a 10-year baseline relative to a constant-period ephemeris \citep{2020wasp12b}. Apsidal precession was the major competing explanation and seems to be ruled out by more than ten years of observations, including the most recent TESS timings \citep{2017wasp12b,2020wasp12b,2021wasp12bTESS}. These works also discuss and exclude other possible effects, including the R$\o$mer effect and mass loss \citep{Ragozzine2009, Lai2010}.
The R$\o$mer effect, i.e., line-of-sight acceleration typically due to a stellar companion, has been reported to dominate the TTV of WASP-4b \citep{wasp-4b2020}. Using TESS light curves, \cite{WASP-4b} report a period decrease of -12.6 $\pm$ 1.2 ms yr$^{-1}$. Further radial velocity (RV) monitoring indicates that the Doppler effect contributes most of the period decrease \citep{wasp-4b2020}. As another example, WASP-53b and WASP-81b should harbor brown-dwarf companions that could cause TTVs of $\sim$ 30 s, according to the calculation of \cite{Triaud2017}.
We compare TESS timings with archival ephemeris predictions\footnote{Exoplanet Archive: \url{https://exoplanetarchive.ipac.caltech.edu/index.html}} and report significant transit timing offsets for 31 hot Jupiters. The paper is organized as follows. We present the sample selection and data reduction in Section 2. In Section 3, we show the transit timings and their offsets from the previous ephemeris, together with the ephemeris refinement. In Section 4, we discuss the possible physical origins of the timing offsets. We briefly summarize the work in Section 5.
\section{Sample Selection and TESS Timing}
The exoplanet sample in this work consists of hot Jupiters identified in previous works that have transit timings available in the TESS Objects of Interest (TOI) Catalog \citep{TOIcatalog}. The sample selection requires an orbital period of less than 10 days, a planet mass larger than 0.5 $M_{J}$, and a planet radius larger than 0.5 $R_{J}$.
These criteria leave 421 hot Jupiters, among which 262 are listed in the TOI catalog with transit timings.
\subsection{TESS Photometry and TOI Catalog}
TESS was launched in 2018, carrying four cameras with a total field of view (FOV) of 24$\times$96 square degrees and a pixel scale of 21 arcseconds \citep{Ricker2015}. The full-frame images covering the FOV are released at a cadence of 30 minutes, while $\sim$ 200,000 selected targets are recorded as 11$\times$11 pixel cut-out images at a cadence of 2 minutes (as shown in Figure \ref{image:example}).
\begin{figure}
\centering
\includegraphics[width=3.2in]{imgaexample.pdf}
\caption{TESS example images of 14$\times$14 pixels, corresponding to HAT-P-31b, HAT-P-35b, and WASP-56b from top to bottom. The blue points mark the planet positions in the Gaia catalog \citep{GaiaDr2}, while the red points mark nearby sources.}
\label{image:example}
\end{figure}
The TOI catalog is built from the light curves obtained from the TESS image products, including both the 2-minute and 30-minute frames \citep{TOIcatalog}. The 2-minute cadence light curves are generated by the Science Processing Operations Center (SPOC) pipeline, and the 30-minute light curves by the Quick Look Pipeline (QLP) \citep{Twicken2016, Huang2020}. \citet{TOIcatalog} built an automated pipeline to derive transit parameters and thereby identify planet candidates, with a method following the Kepler Robovetter \citep{Thompson2018}. More than 2000 planet candidates (continuously updated) are identified in the TOI catalog, including both newly discovered and previously known planets.
The timings from the TOI catalog provide a long time baseline when compared with the previous ephemeris. The median timing baseline of the 262 exoplanets is 2368 days, while the median uncertainties of the timings from archival data and from the TOI catalog are 0.59 and 0.84 minutes, respectively. The median uncertainty of the archival periods is 4$\times$10$^{-6}$ days. Of the 262 hot Jupiters, 159 show TESS timings consistent within 1 $\sigma$ with the previous ephemeris predictions, which circumstantially demonstrates the accuracy of the TOI timings. We neglect the difference between the Barycentric Julian Date (BJD) and the Heliocentric Julian Date (HJD) in this work; the difference is within $\pm$4 s, well below the timing precision discussed here.
The TOI catalog has been well utilized in exoplanet research, including TTV analyses that use the data in a manner similar to this work \citep{TOIforTTV, Martins2020TOI1,2021HowardTOI2}.
\subsection{TESS Transit Timing Validation}
\begin{figure}
\centering
\includegraphics[width=3.3in]{kelt-19.pdf}
\includegraphics[width=3.3in]{kelt-19all.pdf}
\caption{Light curves of KELT-19Ab as an example: a single epoch around the TOI timing (top panel) and multiple visits folded at the reference epoch (bottom panel). The blue points present the observations (10-minute cadence), while the green points are bins of every three points for clarity. The red line gives the transit model fit, with the yellow region indicating the 1 $\sigma$ confidence region. The vertical blue line gives the fitted timing; the black vertical line, the TOI timing; the green vertical line, the previous ephemeris prediction. The timings from the single-epoch and folded-epoch fits are only 0.14 minutes earlier and 0.20 minutes later than the TOI timing, respectively, a negligible difference as shown in the image (the blue and black lines overlap). The observed TESS timing shows an offset of $\sim$ 15 minutes compared to the previous ephemeris prediction, as indicated by the vertical green line. The fitting uncertainty is 0.54 minutes for the single epoch and 0.23 minutes for the folded epochs.
}
\label{image:lc}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=7.5in]{HAT-P-31timing.pdf}
\caption{The timing difference of HAT-P-31b, defined as the observed mid-transit times minus the ephemeris predictions. The red point shows the TESS timing difference; the black points show the timing differences of other observations from the literature \citep{Kipping2011, Mallonn2019}; the black dashed line is the reference ephemeris; the blue line, the alternative reference ephemeris; the red line, the refined ephemeris derived by including the TESS observation; the green region, the 1 $\sigma$ confidence region of the reference ephemeris; the brown region, the 1 $\sigma$ confidence region of the alternative reference ephemeris. We note that our refined ephemeris overlaps the alternative reference ephemeris, indicating the consistency of the two ephemerides.
}
\label{image:timing}
\end{figure*}
A validation of the TOI timing precision is necessary for the purpose of studying timing offsets relative to the previous ephemerides. The majority of the hot Jupiters (159 of 262) present consistent TOI timings, which provides circumstantial evidence of their reliability. A direct evaluation is performed by independently reducing the data and obtaining the TESS timings.
We have built a photometric pipeline for the TESS image products (as shown in Figure \ref{image:example}) and applied it to the analysis of transiting exoplanets \citep{Yangatmos, YangLD}. The pipeline includes modules for, e.g., astrometry checking, aperture photometry, deblending of nearby contaminating flux, and light-curve detrending. The details and evaluations of the pipeline are described in previous work \citep{Yangatmos, YangLD}. In the tests and applications so far, the derived transit parameters agree within 1 $\sigma$ with those obtained when we apply the same fitting to the TESS SPOC light curves.
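As a toy illustration of the aperture-photometry step, the following pure-numpy sketch sums the flux in a circular aperture and subtracts a median background estimated in an annulus (the aperture and annulus radii are illustrative; the actual pipeline additionally performs astrometry checking, deblending, and detrending):

```python
import numpy as np

def aperture_flux(img, x0, y0, r_ap=2.5, r_in=4.0, r_out=6.0):
    """Toy aperture photometry: sum pixels within r_ap of (x0, y0),
    subtracting the median background measured in an annulus."""
    yy, xx = np.mgrid[:img.shape[0], :img.shape[1]]
    rr = np.hypot(xx - x0, yy - y0)
    bkg = np.median(img[(rr >= r_in) & (rr < r_out)])
    ap = rr < r_ap
    return img[ap].sum() - bkg * ap.sum()

img = np.full((11, 11), 10.0)   # flat background of 10 counts per pixel
img[5, 5] += 100.0              # a point source at the center
print(aperture_flux(img, 5, 5))
```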
We check whether the timings obtained from our pipeline are consistent with the TOI timings and find the differences to be within 2 minutes. The comparison is performed on WASP-161b (the largest timing offset earlier than the ephemeris prediction), WASP-17b (the largest offset later than the prediction), and WASP-58b (whose TESS timing is consistent with the previous ephemeris) \citep{Anderson2010, wasp161, Mallonn2019}.
The comparison also includes WASP-121b and KELT-19Ab, which were used in analyses of transit depths in previous work \citep{Yangatmos, YangLD}.
Applying Markov chain Monte Carlo (MCMC) sampling \citep{pymc,pya}, the light curve is fitted with a planet transit model \citep{Mandel_Agol2002}. A stellar activity check using archival data and TESS photometry is performed on the detrended light curve to avoid possible timing bias. The choice of `circular orbit' or `Keplerian orbit' is consistent with the archival reference work. The details of the free parameters and priors are as described in \citet{Yangatmos} and \citet{YangLD}. For the `circular orbit', the free parameters are the transit midpoint ($T0$), the radius ratio of planet to star ($R_{p}/R_{\ast}$), the scaled semi-major axis ($a/R_{\ast}$), the inclination ($i$), and the limb-darkening parameters. For the `Keplerian orbit', the model has extra free parameters: the orbital eccentricity ($e$), the longitude of the ascending node, the periapsis passage time, and the argument of periapsis. The MCMC fitting runs for 50,000 steps after the first 50,000 steps of initialization. All the priors are uniform, except for the limb darkening, which uses a Gaussian prior interpolated from the limb-darkening table \citep{TESSLD}.
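For illustration, the midpoint determination can be sketched as a chi-square grid search for $T0$ on a synthetic box-shaped transit; the actual fit is an MCMC over the full \citet{Mandel_Agol2002} model, and all numbers below are synthetic:

```python
import numpy as np

def box_transit(t, t0, depth=0.01, duration=0.12):
    """Toy box-shaped transit model: flux = 1 outside, 1 - depth inside."""
    flux = np.ones_like(t)
    flux[np.abs(t - t0) < duration / 2] -= depth
    return flux

rng = np.random.default_rng(42)
t = np.arange(0.0, 1.0, 2.0 / 60 / 24)   # 2-minute cadence, in days
true_t0 = 0.503
obs = box_transit(t, true_t0) + rng.normal(0.0, 2e-3, t.size)

# Chi-square grid search for the transit midpoint
grid = np.arange(0.45, 0.55, 1e-4)
chi2 = [np.sum((obs - box_transit(t, g)) ** 2) for g in grid]
t0_fit = grid[np.argmin(chi2)]
print(f"recovered T0 = {t0_fit:.4f} d (true {true_t0} d)")
```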
We apply the transit model both to the light curve of a single epoch and to the light curve folded from multiple epochs (examples are shown in Figure \ref{image:lc}). The folding is based on the archival ephemeris, and we evaluate the bias in the fitted parameters introduced by folding with an inaccurate period \citep[using the same method as in][]{Yanghats5b}. For one TESS sector, the timing bias is $\sim$ 4 minutes if the period is biased by 0.0004 days. Such a large period bias would cause a significant TESS timing offset compared to the ephemeris prediction and is thereby flagged. The fold-and-check method has been well utilized in period-searching research \citep{PDM1989,yangbinary,yang2021ltd064402245919}.
The fitted timings from our pipeline are consistent within 4 minutes with the TOI timings in the comparison sample. The median timing offset between our results and the TOI timings is 1.43 minutes, and the median TOI timing uncertainty is 0.83 minutes. We conclude that it is reasonable to use the TOI timings, and we regard a TOI timing offset relative to the previous ephemeris as significant if the offset is larger than 4 minutes, i.e., 3 times the median difference. We also require the timing offset to be larger than 1 combined $\sigma$, defined as the square root of the quadratic sum of the archival ephemeris uncertainty and the TESS timing uncertainty. These criteria lead to a final sample of 31 hot Jupiters.
\section{Hot Jupiters with TESS Timing Offsets}
We obtain a sample of 31 targets with significant timing offsets compared to the previous ephemeris predictions. An example is shown in Figure \ref{image:timing}, with the whole sample shown in Figure \ref{image:timing appendix}. The parameters are presented in Table \ref{table:timing}, including the planet ID, the TESS timing minus the time predicted from the previous ephemeris ($\Delta T_{C}$), the transit midpoint $T_{C}$, the orbital period $P$, the category flag, and the parameter reference.
In our sample, the median $\Delta T_{C}$ is 17.8 minutes, while the median combined uncertainty is 5.2 minutes; the signal-to-noise ratio (SNR) is therefore 3.4. Among the 31 hot Jupiters, WASP-161b presents the earliest timing offset, of -203.7$\pm$4.1 minutes, and WASP-17b the latest, of 70.8$\pm$11.7 minutes. The timing uncertainty is derived as the square root of the quadratic sum of the uncertainties of the previous ephemeris and the TESS timing.
We classify the sources into four categories, according to the potential properties implied by the timing offsets.
A type I target is a source whose timings can be modeled with a linear function. The timing offset could be due either to an underestimation of systematic errors or to some physical process. A linear model has a constant derivative, corresponding to a constant period. Most type I targets have only two timings, so we can hardly distinguish the origin of the period difference between the two linear functions. We regard the period obtained from the linear function fitted to the TESS timing as the refined period. Type II refers to targets whose timing differences cannot be modeled by a linear function but can be modeled by a quadratic function. The quadratic behavior can be due to abnormal points or to physical processes that lead to a constant period derivative. We identify targets as type III if the TESS timing offset is probably due to a systematic effect in the previous ephemeris and the timing differences (at least three data sets) can be well fitted by a linear function.
The fitted linear function refines the ephemeris. Sources are classified as type IV if their timings cannot be fitted with any linear or quadratic function.
The possible physical origins of the timing offsets are discussed in Section~\ref{sec:physics}.
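The classification logic above can be sketched as a toy comparison of weighted linear and quadratic fits to timing residuals; the reduced chi-square threshold of 2 is an assumption for illustration, not the criterion used in this work:

```python
import numpy as np

def classify_timings(epoch, t_obs, sigma):
    """Toy classifier mirroring the type scheme in the text:
    linear fit adequate    -> constant period (type I/III behavior);
    quadratic fit adequate -> constant period derivative (type II behavior);
    neither                -> type IV behavior."""
    for deg, label in ((1, "linear (constant P)"),
                       (2, "quadratic (constant dP/dN)")):
        coef = np.polyfit(epoch, t_obs, deg, w=1.0 / sigma)
        resid = (t_obs - np.polyval(coef, epoch)) / sigma
        chi2_red = np.sum(resid**2) / max(len(epoch) - (deg + 1), 1)
        if chi2_red < 2.0:   # crude adequacy threshold (assumption)
            return label, coef
    return "neither (type IV)", None

# Synthetic timings (days since a reference epoch) with a constant dP/dt
epoch = np.array([0, 200, 400, 800, 1200])
P = 3.19153                                       # days
t_obs = P * epoch + 0.5 * P * 5.8e-9 * epoch**2   # quadratic ephemeris
label, coef = classify_timings(epoch, t_obs, sigma=np.full(5, 2e-4))
print(label)
```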
{
\setlength{\tabcolsep}{2pt}
\setlength\LTcapwidth{\textwidth}
\begin{longtable*}{|l|l|l|l|l|l|}
\caption{Exoplanet parameters. In the `Planet ID' column, `1' indicates the reference ephemeris in Figures \ref{image:timing} and \ref{image:timing appendix}, while `2' indicates the alternative ephemeris.}
\label{table:timing}\\
\hline
Planet ID & $\Delta $$T_{C}$ & $T_{c}$ & $P$ & Category flags & Reference \\
& minutes & HJD & days & & \\
\endfirsthead
\multicolumn{6}{c}%
{\tablename\ \thetable\ -- \textit{Continued from previous page}} \\
\hline
Planet ID & $\Delta $$T_{C}$ & $T_{c}$ & $P$ & Category flags & Reference \\
& minutes & HJD & days & & \\
\hline
\endhead
\hline \multicolumn{6}{c}{\textit{Continued on next page}} \\
\endfoot
\hline
\endlastfoot
\hline
WASP-161b & -203.7$\pm$4.1 & 2459249.035676$\pm$0.000594 & 5.405625$\pm$0.000003 & I & This work \\
1 & & 2457416.5289$\pm$0.0011 &
5.4060425$\pm$0.0000048 & & \cite{wasp161} \\
\hline
HAT-P-31b & -206.0$\pm$131.6 & 2459010.826736$\pm$0.001149 & 5.005272$\pm$0.000005 & III & This work \\
1 & & 2454320.8866$\pm$0.0052 &5.005425$\pm$0.000092 & &\cite{Bonomo2017} \\
2& & 2458169.9410$\pm$0.0017 &
5.0052724$\pm$0.0000063 & &\cite{Mallonn2019} \\
\hline
KELT-1b & -67.4$\pm$53.9 & 2458765.533813$\pm$0.000299 & 1.217494$\pm$0.0000002 & III & This work \\
1 & & 2455914.1628$\pm$0.0023 &
1.217514$\pm$0.000015 & & \cite{Siverd2012} \\
2 & & 2456093.13464$\pm$0.00019 &
1.21749448$\pm$0.00000080 & & \cite{Baluev2015}\\
\hline
TOI-163 b & -57.2$\pm$22.0 & 2459310.502979$\pm$0.000817 & 4.231135$\pm$0.000003 & I & This work \\
1 & & 2458328.87970$\pm$0.00063 &
4.231306$\pm$0.000063 & & \cite{Kossakowski2019} \\
\hline
WASP-54b & -55.9$\pm$8.6 & 2458931.236409$\pm$0.000435 & 3.693599$\pm$0.000001 & I & This work \\
1 & & 2455518.35087$\pm$0.00053 &
3.6936411$\pm$0.0000059 & &\cite{Bonomo2017} \\
\hline
WASP-173Ab & 1.2$\pm$0.9 & 2458355.195662$\pm$0.00047 &
1.386632$\pm$0.000001 & III & This work \\
& -30.4$\pm$1.1 & 2458355.173660$\pm$0.000620 &
1.386632$\pm$0.000001 & & TOI timing \\
1 & & 2457288.8585$\pm$0.0002 &
1.38665318$\pm$0.00000027& & \cite{Hellier2019} \\
2 & & 2458105.59824$\pm$0.00090 &
1.3866529$\pm$0.0000027 & & \cite{Labadie2019} \\
\hline
KELT-18b & -26.8$\pm$2.3 & 2458714.181140$\pm$0.000380 & 2.871706$\pm$0.000001 & I & This work \\
1 & & 2457542.52504$\pm$0.00039 &
2.8717518$\pm$0.0000028 & & \cite{McLeod2017} \\
2 & & 2457542.52463$\pm$0.00057 &
2.8716992$\pm$0.0000013 & & \cite{Maciejewski2020}\\
\hline
XO-3b & -17.8$\pm$1.2 & 2458819.064098$\pm$0.000279 & -- & IV & This work \\
1 & & 2455292.43266$\pm$0.00015 &
3.19153285$\pm$0.00000058 & & \cite{Wong2014} \\
2 & & 2454449.86816$\pm$0.00023 &
3.1915239$\pm$0.0000068 & & \cite{Winn2008}\\
& & 2456419.04365$\pm$0.00026 &
3.19153247$\pm$0.00000055& & \cite{Wong2014}\\
\hline
WASP-101b & -17.3$\pm$5.2 & 2459223.302264$\pm$0.000132 & 3.585708$\pm$0.0000002 & I & This work \\
1 & & 2456164.6934$\pm$0.0002 &
3.585722$\pm$0.000004 & & \cite{Hellier2014} \\
\hline
K2-237b & -15.5$\pm$3.9 & 2458626.800781$\pm$0.000869 & 2.180534$\pm$0.0000014 & I & This work \\
1 & & 2457656.4633789$\pm$0.0000048 & 2.1805577$\pm$0.0000057 & & \cite{Smith2019} \\
\hline
KELT-7b & -12.4$\pm$5.4 & 2458819.253410$\pm$0.000240 & 2.734765$\pm$0.0000002 & I & This work \\
1 & & 2456355.229809$\pm$0.000198 & 2.734775$\pm$0.0000039 & & \cite{Bieryla2015} \\
\hline
WASP-76b & -11.9$\pm$2.9 & 2459117.687201$\pm$0.000119 & 1.809881$\pm$0.0000001 & I & This work \\
1 & & 2456107.85507$\pm$0.00034 & 1.809886$\pm$0.000001 & & \cite{West2016} \\
\hline
WASP-95b & -10.7$\pm$2.9 & 2459084.585010$\pm$0.000110 & 2.184667$\pm$0.0000001 & I & This work \\
1 & & 2456338.458510$\pm$0.000240 & 2.184673$\pm$0.0000014 & & \cite{Hellier2014} \\
\hline
KELT-14b & -10.7$\pm$5.2 & 2459252.535529$\pm$0.000108 & -- & II & This work \\
1 & & 2457091.028632$\pm$0.000470 & 1.710059$\pm$0.0000025 & & \cite{Rodriguez2016} \\
2 & & 2456665.224010$\pm$0.000210 & 1.710057$\pm$0.0000032 & & \cite{Turner2016} \\
\hline
KELT-21b & -9.8$\pm$2.4 & 2458686.841940$\pm$0.000580 & 3.612747$\pm$0.0000013 & I & This work \\
1 & & 2457295.934340$\pm$0.000410 & 3.612765$\pm$0.0000030 & & \cite{Johnson2018} \\
\hline
WASP-35b & -9.5$\pm$3.5 & 2459176.768453$\pm$0.000197 & 3.161569$\pm$0.0000002 & I & This work \\
1 & & 2455531.479070$\pm$0.000150 & 3.161575$\pm$0.0000020 & & \cite{Enoch2011} \\
\hline
TOI-1333b & 2.67$\pm$1.44& 2458715.1230$\pm$0.0010 & 4.720172$\pm$0.000025 & I & This work \\
& -5.7$\pm$1.5 & 2458715.117140$\pm$0.000550 & 4.720314$\pm$0.0000116 & & TOI timing \\
1 & & 2458913.370330$\pm$0.000450 & 4.720219$\pm$0.0000110 & & \cite{Rodriguez2021} \\
\hline
WASP-17b & 70.8$\pm$11.7 & 2458627.126221$\pm$0.000584 & 3.735485$\pm$0.0000003 & III & This work \\
1 & & 2454559.181020$\pm$0.000280 & 3.735442$\pm$0.0000072 & & \cite{Anderson2010} \\
& & 2454577.85806$\pm$0.00027 & 3.7354380$\pm$0.0000068 & & \cite{Anderson2011} \\
& & 2454592.80154$\pm$0.00050 & 3.7354845$\pm$0.0000019 & & \cite{Southworth2012} \\
& & 2457192.69798$\pm$0.00028 & 3.735438 & & \cite{Sedaghati2016} \\
\hline
WASP-99b & 61.6$\pm$31.2 & 2459135.796019$\pm$0.000239 & 5.752595$\pm$0.0000020 & I & This work \\
1 & & 2456224.983200$\pm$0.001400 & 5.752510$\pm$0.0000400 & & \cite{Bonomo2017} \\
\hline
WASP-58b & 37.4$\pm$13.5 & 2458986.981902$\pm$0.000409 & 5.017215$\pm$0.0000013 & III & This work \\
1 & & 2455183.933500$\pm$0.001000 & 5.017180$\pm$0.0000110 & & \cite{Hebrard2013} \\
2 & & 2457261.059700$\pm$0.000620 & 5.017213$\pm$0.0000026 & & \cite{Mallonn2019} \\
\hline
WASP-187b & 34.5$\pm$8.7 & 2458764.856300$\pm$0.002600 & 5.147913$\pm$0.0000033 & I & This work \\
1 & & 2455197.352900$\pm$0.002000 & 5.147878$\pm$0.0000050 & & \cite{Schanche2020} \\
\hline
HAT-P-6b & 26.3$\pm$9.2 & 2458740.188710$\pm$0.000360 & 3.853000$\pm$0.0000003 & I & This work \\
1 & & 2454035.675750$\pm$0.000280 & 3.852985$\pm$0.0000050 & & \cite{Noyes2008} \\
& & 2454347.76763$\pm$0.00042 & & & \cite{Szabo2010} \\
& & 2454698.3908$\pm$0.0011 & & & \cite{Szabo2010} \\
\hline
KELT-23Ab & 23.8$\pm$7.7 & 2458683.911214$\pm$0.000056 & -- & IV & This work \\
1 & & 2458140.379200$\pm$0.002700 & 2.255251$\pm$0.0000110 & & \cite{Johns2019} \\
2 & & 2458140.386980$\pm$0.000200 & 2.255288$\pm$0.0000007 & & \cite{Maciejewski2020} \\
\hline
WASP-33b & 22.4$\pm$6.9 & 2458791.414307$\pm$0.000169 & 1.219871$\pm$0.0000001 & III & This work \\
1 & & 2454163.223730$\pm$0.000260 & 1.219867$\pm$0.0000012 & & \cite{Collier2010} \\
2 & & 2455507.522200$\pm$0.000300 & 1.219868$\pm$0.0000011 & & \cite{von2014} \\
\hline
WASP-78b & 18.8$\pm$11.1 & 2459175.589610$\pm$0.000863 & 2.175185$\pm$0.0000006 & III & This work \\
1 & & 2455882.359640$\pm$0.000530 & 2.175176$\pm$0.0000047 & & \cite{Bonomo2017} \\
2 & & 2456139.030300$\pm$0.000500 & 2.175173$\pm$0.0000030 & & \cite{Brown2017} \\
\hline
KELT-19Ab & 15.2$\pm$5.9 & 2459222.789720$\pm$0.000183 & 4.611734$\pm$0.0000007 & I & This work \\
1 & & 2457281.249537$\pm$0.000361 & 4.611709$\pm$0.0000088 & & \cite{Siverd2018} \\
\hline
WASP-178b & 12.9$\pm$3.1 & 2458602.836430$\pm$0.001860 & 3.344842$\pm$0.0000014 & III & This work \\
1 & & 2456927.068390$\pm$0.000470 & 3.344829$\pm$0.0000012 & & \cite{Hellier2019} \\
2 & & 2458321.867240$\pm$0.000380 & 3.344841$\pm$0.0000033 & & \cite{Rodr2020} \\
\hline
WASP-94Ab & 10.2$\pm$4.0 & 2459039.335846$\pm$0.000386 & 3.950201$\pm$0.0000005 & I & This work \\
1 & & 2456416.402150$\pm$0.000260 & 3.950191$\pm$0.0000037 & & \cite{Bonomo2017} \\
\hline
HAT-P-69b & 9.7$\pm$1.5 & 2459242.559429$\pm$0.000245 & 4.786992$\pm$0.0000036 & I & This work \\
1 & & 2458495.788610$\pm$0.000720 & 4.786949$\pm$0.0000018 & & \cite{Zhou2019} \\
\hline
KELT-24b & 7.9$\pm$0.9 & 2458684.821890$\pm$0.000320 & -- & II & This work \\
1 & & 2458540.477590$\pm$0.000360 & 5.551493$\pm$0.0000081 & & \cite{Rodriguez2019} \\
2 & & 2458268.454590$\pm$0.000870 & 5.551492$\pm$0.0000086 & & \cite{Maciejewski2020} \\
\hline
TOI-628b & 3.8$\pm$3.4 & 2458469.232700$\pm$0.002220 & 3.409511$\pm$0.000048 & I & This work \\
& 7.4$\pm$1.2 & 2458469.235200$\pm$0.000430 & 3.409458$\pm$0.0000086 & & TOI timing \\
1 & & 2458629.479720$\pm$0.000390 & 3.409568$\pm$0.0000070 & & \cite{Rodriguez2021} \\
\hline
\end{longtable*}
}
We verify the TOI timings of the 31 hot Jupiters, among which WASP-173Ab, TOI-1333b, and TOI-628b need timing recalibration. We check the TESS raw data (2-minute cadence) of WASP-173Ab and find an abnormal data point around the transit at 2458356.564637 (HJD). The abnormal data point biases the modeling if it is not clipped by the automatic pipeline. We refit the TESS light curve with the abnormal data clipped. The timing is 2458355.195662$\pm$0.00047 (HJD) when we fit one transit visit and 2458355.195907$\pm$0.0001 (HJD) when fitting the visits folded through the whole sector. These two results are consistent within 0.35 minutes and differ from the TOI timing by 29 minutes. The refitted TESS timing is consistent with the previous ephemeris (as shown in Figure \ref{image:timing_corre}).
The TOI-1333b timing derived by refitting the TESS light curve is 2458715.1230$\pm$0.0010 (HJD), which is 8.4 minutes later than the TOI-derived timing (as shown in Figure \ref{image:timing_corre}). The TESS 30-minute data (available for TOI-1333b) have some abnormal points around the transits, which would bias the timings if sigma clipping were not applied. Removing the abnormal data points, we refit the light curve for the timing. The timings derived from a single transit and from the joint transits differ by 1.8 minutes (within 0.3 combined $\sigma$). The timing is close ($\sim$ 1 $\sigma$) to the prediction of the previous ephemeris (Figure \ref{image:timing_corre}).
We derive a joint timing of 1469.23270$\pm$0.00222 (HJD) for TOI-628b, while a single transit visit yields a midpoint of 1469.2332$\pm$0.0074 (HJD). The value is $\sim$ 1 $\sigma$ earlier than the TOI timing and is consistent with the previous ephemeris. We note that the TOI timings are highly reliable: only two sources among the 262 TOI hot Jupiters are found with significantly inappropriate values.
\begin{figure}
\centering
\includegraphics[width=3.3in]{WASP-173Abtiming.pdf}
\includegraphics[width=3.3in]{TOI-1333btiming.pdf}
\includegraphics[width=3.3in]{TOI-628btiming.pdf}
\caption{The timing differences with corrected timings for WASP-173Ab, TOI-1333b, and TOI-628b. The symbols are similar to those in Figure \ref{image:timing}. The green diamond indicates the TOI timing, and the red diamond gives the timing generated from the TESS raw images. We classify WASP-173Ab as a type III target, and TOI-1333b and TOI-628b as type IV targets (as shown in Table \ref{table:timing}).
}
\label{image:timing_corre}
\end{figure}
\subsection{Ephemeris Refinement}
We refine the ephemerides of the type I and III targets in our sample; we do not apply any ephemeris refinement to the type II and IV sources. The new ephemeris consists of the TESS timing and a refined period (as shown in Table \ref{table:timing}). The refinement has a median precision of 1.11 minutes through 2025 and 1.86 minutes through 2030. The largest uncertainty is 7.22 minutes in 2030, for WASP-187b.
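The future timing precision follows from propagating the refined ephemeris forward, $\sigma_{T}(N)=\sqrt{\sigma_{T_0}^2+(N\,\sigma_P)^2}$, where $N$ counts epochs since the new reference. A sketch with the WASP-187b values from Table \ref{table:timing} (the epoch count is an illustrative assumption, so the number below is not meant to reproduce the exact 7.22-minute figure):

```python
import math

# Predicted transit-time uncertainty N epochs after the reference epoch:
# sigma_T(N) = sqrt(sigma_T0^2 + (N * sigma_P)^2)
sigma_T0 = 0.0026    # days (WASP-187b T0 uncertainty, Table 1)
sigma_P = 3.3e-6     # days (WASP-187b period uncertainty, Table 1)
N = 800              # roughly a decade of 5.15-day orbits (assumption)
sigma_T = math.sqrt(sigma_T0**2 + (N * sigma_P)**2)
print(f"predicted timing uncertainty: {sigma_T * 24 * 60:.2f} minutes")
```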
The ephemeris precision depends on the length of the time baseline and on the transit timing precision. The timing uncertainties could be underestimated owing to the techniques used in light-curve generation and high-dimensional model fitting \citep{Yangatmos, YangLD}. Empirically, the timings obtained from a single transit visit are usually within two times the reported uncertainty (Yang et al. submitted). Joint timings derived from multiple visits under a constant-period assumption might be biased if the folding period is not precise, especially when the light curves only partially cover the transits. Correcting the timing biases in archival papers (if present) is beyond the scope of this work.
The period can be updated when more observations are available \citep{Mallonn2019, Edwards2021, Wang2021}. For the type I Jupiters, the periods from previous works are significantly different from the periods derived in our refinement. We note that these period differences might originate from physical processes, which would make the refinement inappropriate (as discussed in Section~\ref{sec:physics}). The type III exoplanets are likely to present significant uncertainty underestimation in the previous ephemerides. The type II and IV targets might be affected by abnormal timings or by more complicated physical processes.
\begin{figure}
\centering
\includegraphics[width=3.3in]{kelt-19quadratic.pdf}
\caption{KELT-19 timings fitted with a quadratic function. The symbols are similar to Figure \ref{image:timing}. The red line shows the quadratic function model.}
\label{image:timingbinary}
\end{figure}
\section{Discussion: Possible Physical Origin}
\label{sec:physics}
Some targets in our sample present very significant period differences when compared to former results. It might not be a good hypothesis to attribute all of these differences to the underestimation of the archival period uncertainties. The period bias caused by a timing shift of 2 minutes would be only $\sim$ 10$^{-5}$ days when the time baseline is 1 year.
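This order-of-magnitude statement can be checked directly by spreading a 2-minute timing shift over the number of epochs in a 1-year baseline (the 3-day period is an illustrative assumption):

```python
dt = 2.0 / 60 / 24     # a 2-minute timing shift, in days
baseline = 365.0       # time baseline, days
P = 3.0                # a typical hot-Jupiter period (assumption), days
n_epochs = baseline / P
print(f"period bias ~ {dt / n_epochs:.1e} days")
```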
We argue that a very significant period difference might be attributed to physical period-changing processes. We find in our sample that the targets with offset SNR larger than 10 all present earlier observed timings. These sources are WASP-161b, XO-3b, and KELT-18b. A period difference caused by systematic underestimation should be unsigned, which is not the case here. To the best of our knowledge, tidal dissipation could explain this observational phenomenon.
The tidal torque transfers energy between the star-planet orbit and the rotation of the star and planet \citep{Goldreich1966, Lin1996, Naoz2011, Wu2011, Dawson2018, RodetandLai}. The process can cause period decay and apsidal precession \citep{Hut1981, Ragozzine2009}. The induced TTV has been discovered in WASP-12b at the level of a few minutes \citep{2011wasp12b,2017wasp12b}, and TESS provides the most recent evidence for the WASP-12b TTV \citep{2021wasp12bTESS}.
\begin{figure*}
\centering
\includegraphics[width=7.5in]{xo-3bqua.pdf}
\caption{The TTV of XO-3b. The symbols are similar to Figure \ref{image:timing} while the red dashed line presents the fitted quadratic function as described in the text.}
\label{image:xo-3}
\end{figure*}
WASP-161b, which shows the most significant TESS timing offset in this sample, presents a period derivative of -1.16$\times$10$^{-7}\pm$2.25$\times$10$^{-8}$ (as described in detail in Yang et al. 2021, submitted). WASP-161b is possibly undergoing tidal dissipation. We have an approved CHEOPS program of two visits in early 2022 for further investigation. WASP-161b is regarded as a type I target in this work.
The period of XO-3b has been reported differently in previous works \citep[][and references therein]{Winn2008, Winn2009, Johns-Krull2008, Wong2014, Bonomo2017}. The TESS timing presents an offset of -17.8$\pm$1.2 minutes (14.8 $\sigma$) relative to the newest archival ephemeris from \citet{Bonomo2017}. The timing generated by our pipeline is consistent within 0.3 minutes with the TOI timing, and the uncertainties are similar ($\sim$ 0.45 minutes). We gather the archival timings \citep{Winn2008, Johns-Krull2008, Wong2014, Bonomo2017} and obtain a timing baseline longer than 10 years, with the TESS timing as the most recent point (as shown in Figure \ref{image:xo-3}). The timings cannot be well fitted with any linear function.
We find that a quadratic function is a good fit to the data sets, corresponding to a model with a constant period derivative. The period derivative ($\dot{P}$) is 5.8$\times$10$^{-9}$$\pm$9.3$\times$10$^{-10}$.
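In such a model the transit times follow $T_N = T_0 + P N + \frac{1}{2}\dot{P} P N^2$, so the fitted quadratic coefficient $q$ maps to $\dot{P} = 2q/P$. A minimal sketch of the size of the induced timing shift, using the XO-3b period and $\dot{P}$ from the text and an illustrative epoch count:

```python
P = 3.19153            # XO-3b orbital period, days (from the text)
pdot = 5.8e-9          # dimensionless dP/dt from the quadratic fit
q = 0.5 * pdot * P     # quadratic coefficient, days per epoch^2
N = 1200               # illustrative number of elapsed orbits (~10.5 yr)
shift_min = q * N**2 * 24 * 60
print(f"timing shift ~ {shift_min:.1f} minutes after {N} epochs")
```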
The $\dot{P}$ could be explained by a tidal dissipation model, expressed as \citep[`equilibrium tide';][]{Hut1981,Themodel}:
\begin{equation}
\label{modelapplied}
\begin{split}
\dot{P} = \frac{27\pi}{Q_p'}\left(\frac{M_\star}{M_p}\right)\left(\frac{R_p}{a}\right)^5\left[N(e)\,x_\mathrm{p}\,
\frac{\omega_\mathrm{p}}{n}-N_a(e)\right] \\
+ \frac{27\pi}{Q_*'}\left(\frac{M_p}{M_\star}\right)\left(\frac{R_\star}{a}\right)^5\left[N(e)\,x_\star\,
\frac{\omega_\star}{n}-N_a(e)\right].
\end{split}
\end{equation}
Here $Q_p'$ is the `modified tidal quality factor', $\omega_\mathrm{p}$ the planet's rotation rate, and $x_\mathrm{p}$ the obliquity; replacing the subscript $p$ with $\star$ gives the corresponding stellar parameters. $N(e)$ and $N_a(e)$ are defined as:
\begin{equation*}
\label{n_e}
N(e) = \frac{1+\frac{15}{2}e^2+\frac{45}{8}e^4+\frac{5}{16}e^6}{(1-e^2)^{6}}
\end{equation*}
and
\begin{equation*}
\label{na_e}
N_a(e)=\frac{1+\frac{31}{2}e^2+\frac{255}{8}e^4+\frac{185}{16}e^6+\frac{25}{64}e^8}{(1-e^2)^{15/2}}.
\end{equation*}
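For reference, the two eccentricity functions can be evaluated numerically; both reduce to unity for a circular orbit and grow steeply with $e$ (a sketch, with the XO-3b eccentricity as an example input):

```python
def N_e(e):
    """N(e) from the tidal model above."""
    return (1 + 15/2 * e**2 + 45/8 * e**4 + 5/16 * e**6) / (1 - e**2)**6

def Na_e(e):
    """N_a(e) from the tidal model above."""
    return (1 + 31/2 * e**2 + 255/8 * e**4 + 185/16 * e**6
            + 25/64 * e**8) / (1 - e**2)**7.5

print(N_e(0.0), Na_e(0.0))           # both exactly 1.0 for e = 0
print(N_e(0.27587), Na_e(0.27587))   # XO-3b eccentricity
```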
The value of $Q'_\star$ is 1.6$\times$10$^{5}$$\pm$0.2$\times$10$^{5}$ if the period decay is assumed to be due to the stellar tide, and the value of $Q'_p$ is 1.9$\times$10$^{4}$$\pm$0.2$\times$10$^{4}$ if the period decay is assumed to be due to the planetary tide.
The values are derived from Equation~\ref{modelapplied}, with a stellar mass of 1.213$\pm$0.066 $M_{\odot}$, a stellar radius of 1.377$\pm$0.083 $R_{\odot}$, a planet mass of 11.70$\pm$0.42 $M_{J}$, a planet radius of 1.217$\pm$0.073 $R_{J}$, an eccentricity of 0.27587$^{+0.00071}_{-0.00067}$, an orbital semi-major axis of 4.95$\pm$0.18 (in units of stellar radii), and an obliquity of 70$\pm$15 degrees \citep{Hebrard2008,Bonomo2017,Stassun2017}.
XO-3b is reported as a candidate giant planet undergoing migration in previous work \citep{Bonomo2017}, due to its large eccentricity and high $M_{p}$/$M_{\ast}$ (larger than 6$\times$10$^{-3}$). Our timing result provides direct observational evidence to support this scenario.
Apsidal precession can be excited when a tidal torque exists \citep{Ragozzine2009}. Distinguishing between tidal dissipation and precession requires modeling the timings of occultations \citep{2017wasp12b,2020wasp12b, 2021wasp12bTESS}. XO-3b has also been proposed as a candidate for precession in previous work \citep{Jordan2008}. No clues of a binary companion are reported in the XO-3b references. A further investigation of XO-3b is presented in a following work (Yang et al. submitted). We note that period changes originating from precession and the R$\o$mer effect should be unsigned, the same as those from systematic underestimation.
The relation between the planet period derivative and the host star acceleration rate is well modeled \citep{wasp-4b2020}. In our sample, KELT-19Ab shows the maximum stellar acceleration, 4 m s$^{-1}$ yr$^{-1}$, originating from a binary companion \citep{KELT-19}. This acceleration would cause a period derivative of 5.32 ms yr$^{-1}$, according to the calculation from \citet{wasp-4b2020}. We generate the TESS timings in both 2019 and 2020. The TOI catalog gives the 2020 timing, which differs by only 0.14 minutes from our result (as shown in Figure \ref{image:lc} and the caption therein). We find that the timings can be fitted with both a linear and a quadratic function (as shown in Figure \ref{image:timingbinary}). The fitting result of the quadratic function indicates a period derivative of 112$\pm$94 ms yr$^{-1}$. Therefore, we conclude that combining the TESS and archival timings does not reveal a significant TTV dominated by stellar acceleration for KELT-19Ab. We regard the R$\o$mer effect as beyond the detection limit of this work.
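The quoted period derivative follows directly from $\dot{P} = P\,\dot{v}_{r}/c$ \citep{wasp-4b2020}; a quick numerical check with the KELT-19Ab values:

```python
# Period derivative induced by host-star radial acceleration,
# dP/dt = P * (dv_r/dt) / c  (the relation cited in the text)
c = 2.998e8                  # speed of light, m/s
P = 4.611734 * 86400         # KELT-19Ab orbital period, seconds
accel = 4.0                  # stellar radial acceleration, m/s per year
pdot_ms_per_yr = P * (accel / c) * 1e3   # dP/dt in ms per year
print(f"{pdot_ms_per_yr:.2f} ms/yr")     # reproduces the 5.32 ms/yr quoted
```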
\section{Summary}
We discuss the ephemerides of 31 hot Jupiters whose TESS timings show significant offsets. The TESS timings come from the TOI catalog and are validated using our self-built pipeline, which obtains the light curves from the raw TESS images and fits them with a planet transit model. Our pipeline gives results consistent with the TOI catalog.
Among the sample, the TESS timings present a median offset of 17.8$\pm$5.2 minutes, equivalent to an SNR of 3.4, when compared to the previous ephemerides. WASP-161b and XO-3b give the most significant timing offsets. The ephemeris refinement serves potential follow-up observations with facilities such as CHEOPS, the ongoing James Webb Space Telescope, and the Ariel space telescope. The refined timings reach a precision within 1.11 minutes over the next 5 years and 1.86 minutes over the next ten years.
WASP-161b, XO-3b, and KELT-18b present timing offsets larger than 10 $\sigma$. All three targets have observed timings earlier than the predictions of the previous ephemerides under a constant-period assumption. WASP-161b presents evidence of period decay in previous work (Yang et al. submitted).
XO-3b presents a tentative TTV, which can be modeled by a quadratic function. The derived period derivative is 5.8$\times$10$^{-9}$$\pm$9.3$\times$10$^{-10}$. Tidal dissipation can explain the TTV with a $Q_\star$ of 1.6$\times$10$^{5}$$\pm$0.2$\times$10$^{5}$ or a $Q_p$ of 1.9$\times$10$^{4}$$\pm$0.2$\times$10$^{4}$. Apsidal precession could be an alternative explanation of the TTV. Interestingly, all four targets with significant observed TTVs (WASP-161b, XO-3b, WASP-12b, and WASP-4b) show timings earlier than the predictions of a constant-period model. Apsidal precession cannot explain this, since the timing variation caused by precession should be unsigned. Further observations, e.g., occultation timing monitoring, would be helpful for confirmation.
\section*{acknowledgements}
This work made use of the NASA Exoplanet Archive \footnote{\url{https://exoplanetarchive.ipac.caltech.edu/index.html}} and PyAstronomy\footnote{\url{https://github.com/sczesla/PyAstronomy}} \citep{pya}. We would like to thank Ranga-Ram Chary for helpful discussions. Su-Su Shan, Fan Yang, and Ji-Feng Liu acknowledge funding from the Cultivation Project for LAMOST Scientific Payoff and Research Achievement of CAMS-CAS, the National Key Research and Development Program of China (No.2016YFA0400800), and the National Natural Science Foundation of China (NSFC; No.11988101). Xing Wei is supported by NSFC (No.11872246, 12041301) and the Beijing Natural Science Foundation (No. 1202015). Hai-yan Zhang acknowledges NSFC (No.12041301, U1831128).
\clearpage
\newpage
\section*{Appendix}
\renewcommand{\thefigure}{A\arabic{figure}}
\setcounter{figure}{0}
\begin{figure}[ht]
\centering
\includegraphics[width=3.3in]{appendix/WASP-161b.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-31b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-1b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-163b.pdf}
\caption{Timing differences of the 31 targets. The symbols are the same as in Figure \ref{image:timing}, while the in-image legend is omitted for clarity.}
\label{image:timing appendix}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-54b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-173Ab.pdf}
\includegraphics[width=3.3in]{appendix/KELT-18b.pdf}
\includegraphics[width=3.3in]{appendix/XO-3b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-101b.pdf}
\caption{(Continued)}
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/K2-237b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-7b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-76b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-95b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-14b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/KELT-21b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-35b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-1333b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-17b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-99b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-58b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-187b.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-6b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-23Ab.pdf}
\includegraphics[width=3.3in]{appendix/WASP-33b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/WASP-78b.pdf}
\includegraphics[width=3.3in]{appendix/KELT-19Ab.pdf}
\includegraphics[width=3.3in]{appendix/WASP-178b.pdf}
\includegraphics[width=3.3in]{appendix/WASP-94Ab.pdf}
\includegraphics[width=3.3in]{appendix/HAT-P-69b.pdf}
\caption{(Continued) }
\end{figure}
\addtocounter{figure}{-1}
\begin{figure}
\centering
\includegraphics[width=3.3in]{appendix/KELT-24b.pdf}
\includegraphics[width=3.3in]{appendix/TOI-628b.pdf}
\caption{(Continued) }
\end{figure}
\clearpage
\newpage
\bibliographystyle{aasjournal}
|
{
"redpajama_set_name": "RedPajamaArXiv"
}
| 5,623
|
The Weiherbach is a left tributary of the Mooshamer Weiherbach in Upper Bavaria.
The Weiherbach rises south of the hamlet of Schallkofen, first flows through the Harmatinger Weiher, makes a bend to the west, and finally empties into the Mooshamer Weiherbach. The stream runs almost entirely within the FFH protected area Moore zwischen Dietramszell und Deining (moors between Dietramszell and Deining).
External links
Course of the Weiherbach on the BayernAtlas
Watercourses in the district of Bad Tölz-Wolfratshausen
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 9,127
|
A customer satisfaction survey of Kindle Fire owners shows that while the vast majority are satisfied with their purchase, it is mainly the low price fueling their happiness. ChangeWave Research asked a sample of new Kindle Fire owners how they were enjoying their device so far; slightly more than half reported being "very satisfied," and 59 percent said the $199 price of the Kindle Fire was what they liked best about it.
The survey asked 254 people who had recently acquired a Kindle Fire what they liked about the device, and beyond the low price, they had little to say. Thirty-one percent liked the color screen, 27 percent the ease of use, and 20 percent liked the selection of books. "Long battery life" and "screen size" were the favorite features of only 12 percent of respondents.
When asked what their least favorite part of the device was, 27 percent said they didn't like that there were no hardware volume up and down buttons. Twenty-one percent were most displeased that the Kindle Fire has no camera, and 15 percent said that the battery life was too short.
Overall, 54 percent of the Kindle Fire owners reported being "very satisfied" with it—not quite the iPad's 74 percent of customers who report being "very satisfied," but better than the 49 percent figure for other tablet devices. Combined with the 38 percent "somewhat satisfied" group, the Kindle Fire reached a 92 percent approval rating, according to ChangeWave.
The Kindle Fire has met with wide success in spite of lukewarm reviews, many of which cited the price as the main mitigating factor for its shortcomings—at least 4 million Kindle units were sold in December, the bulk of which were Kindle Fires, and the Kindle Fire shot up to a 36 percent market share of Android tablets in only three months. However, Boy Genius Report points out that the percentage of people "very likely" to buy a Kindle Fire has dropped to 2 percent, down from 4 percent in December.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,480
|
#pragma once
#ifndef _LIB_ALMATH_ALMATH_DSP_DIGITALFILTER_H_
#define _LIB_ALMATH_ALMATH_DSP_DIGITALFILTER_H_
#include <vector>
#include <almath/api.h>
#include <boost/version.hpp>
#if (BOOST_VERSION < 106200) && !defined(BOOST_CB_DISABLE_DEBUG)
# define BOOST_CB_DISABLE_DEBUG
#endif
#include <boost/circular_buffer.hpp>
namespace AL
{
namespace Math
{
namespace DSP
{
class ALMATH_API DigitalFilter
{
public:
DigitalFilter(void);
~DigitalFilter(void);
/*! \fn void AL::Math::DSP::DigitalFilter::configureFilter(
const std::vector<float> & pWeightsIn,
const std::vector<float> & pWeightsOut,
float pDcGain)
\brief Configure the digital filter.
\param pWeightsIn A vector of float weights applied to the current and past input samples
\param pWeightsOut A vector of float weights applied to the past output samples
\param pDcGain Static (DC) gain of the filter
*/
void configureFilter(const std::vector<float> & pWeightsIn,
const std::vector<float> & pWeightsOut,
float pDcGain);
/*! \fn void AL::Math::DSP::DigitalFilter::resetFilter()
\brief Reset the processing of the filter
*/
void resetFilter();
/*! \fn float AL::Math::DSP::DigitalFilter::processFilter(float pInputData)
\brief Process one step of the filter and return the filtered value.
\param pInputData Signal input sample
*/
float processFilter(float pInputData);
private:
boost::circular_buffer<float> fFilterBufferIn;
boost::circular_buffer<float> fFilterBufferOut;
float fFilterDcGain;
std::vector<float> fFilterWeightsIn;
std::vector<float> fFilterWeightsOut;
};
}
}
}
#endif // _LIB_ALMATH_ALMATH_DSP_DIGITALFILTER_H_
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 304
|
Because there are a LOT of exports to follow 2. As it looks like, a table with 0x bytes is filled with an increasing counter. SXG75: very bug-ridden firmware.
SXG75: very bug-ridden firmware. In the algorithm it is used as the size of the file, or the size of the data, for calculating the checksum. Concerning my "change RF NV item" project, we found that on some phones this is a read-only item. Visual Express is not suitable because it doesn't include the MFC library.
Made popular by you. I have reimplemented the CRC30 calculation algorithm in Delphi. Originally Posted by adfree.
Last edited by viperbjk; Of course it does. I never understood the different modes offline-A etc.
Can't uderstand how to get it work Page 23 of What your ideas on it? So we are able to check some small parts in mmgsdi library. Com part works with any QC chipset based mobile, f.
A lot of FFs and three numbers MBN at the right position. To eliminate from AMSS. In my opinion, it is the last link in the calculation chain, for the last byte of the calculated data.
Might be that single-image differs.
Originally Posted by jockyw PS: Maybe we start with working functions Can you explain these to me? Last edited by adfree; Also I am now able to decipher the various error messages structures stored in AMSS that could not be clearly linked to the functions.
Elimination of extracted files in amss. Believe it or not. And it had the same CRC30 value? Originally Posted by viperbjk.
The last bytes are never included or checked in the CRC30 routine. Integrate the icons into the TMO firmware. Is it possible to check this with some tool? Recalculate, and the checksum result is the same.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 8,343
|
Hush Now, Banshee! is a commercial-quality, 7x6", 30-page, full-color, 40 pt matte-laminated paper board book.
Lexile-measured reading level: Ages 1 to 7.
This book is aligned with Common Core Standards.
For international orders, customs and/or duties fees may apply.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 4,304
|
The term Wortphilologie ("word philology") denotes a research method in philology that, in contrast to Sachphilologie ("material philology"), seeks to gain knowledge solely through the analysis of the texts themselves (grammar, stylistics, textual criticism), without drawing on other disciplines. This method took shape above all in classical philology, which deals with a limited corpus of texts and strives to reconstruct their original form.
In the 19th century, in the course of the humanities' emerging reflection on method, a methodological dispute broke out between Gottfried Hermann and August Böckh, the editor of the Corpus Inscriptionum Graecarum. Böckh's extensive use of the other studies of antiquity, especially epigraphy, drew sharp reviews from Hermann. Hermann himself held that relying too heavily on the findings of other disciplines clouded the true insight of philology. This methodological dispute continued to reverberate in Germany into the 20th century. Nevertheless, two definitive camps never formed, and open hostility between the leading proponents of one method and those of the other was rare.
Literature
Böckh-Hermann-Auseinandersetzung. In: Der Neue Pauly 13, col. 523–526. Stuttgart 1999. ISBN 3-476-01483-5
History of science
Classical philology
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 8,035
|
using System;
using System.Linq;
using RequestReduce.Configuration;
using RequestReduce.IOC;
using RequestReduce.SqlServer;
using RequestReduce.SqlServer.ORM;
using RequestReduce.Utilities;
using Xunit;
using RequestReduce.ResourceTypes;
namespace RequestReduce.Facts.Store
{
public class RepositoryFacts
{
class FakeFileRepository : FileRepository
{
public FakeFileRepository(IRRConfiguration config)
: base(config)
{
RequestReduceDB.DefaultProviderName = "System.Data.SQLite";
var db = GetDatabase();
db.KeepConnectionAlive = true;
db.Execute(SqliteHelper.GetSqlLightSafeSql());
}
private static string GetSqlLightSafeSql(string sql)
{
var result = sql.Replace("[dbo].", string.Empty);
result = result.Replace("(max)", "(1000)");
result = result.Replace("CLUSTERED", string.Empty);
result = result.Replace("GO", string.Empty);
return result;
}
}
class TestableRepository : Testable<FakeFileRepository>, IDisposable
{
public TestableRepository()
: this("RRConnection")
{
}
public TestableRepository(string connectionString)
{
Mock<IRRConfiguration>().Setup(x => x.ConnectionStringName).Returns(connectionString);
}
public void Dispose()
{
RRContainer.Current.Dispose();
RRContainer.Current = null;
}
}
public class Save
{
[Fact]
public void WillSaveToDatabase()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] {1},
FileName = "fileName",
Key = Guid.NewGuid(),
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = id
};
testable.ClassUnderTest.Save(file);
var savedFile = testable.ClassUnderTest.SingleOrDefault<RequestReduceFile>(id);
Assert.Equal(file.Content.Length, savedFile.Content.Length);
Assert.Equal(file.Content[0], savedFile.Content[0]);
Assert.Equal(file.FileName, savedFile.FileName);
Assert.Equal(file.Key, savedFile.Key);
Assert.Equal(file.OriginalName, savedFile.OriginalName);
Assert.Equal(file.RequestReduceFileId, savedFile.RequestReduceFileId);
Assert.True((file.LastUpdated - savedFile.LastUpdated) <= TimeSpan.FromMilliseconds(4));
}
[Fact]
public void WillSaveToDatabaseUsingConnectionString()
{
var testable = new TestableRepository("Data Source=:memory:;Version=3;New=True");
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "fileName",
Key = Guid.NewGuid(),
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = id
};
testable.ClassUnderTest.Save(file);
var savedFile = testable.ClassUnderTest.SingleOrDefault<RequestReduceFile>(id);
Assert.Equal(file.Content.Length, savedFile.Content.Length);
Assert.Equal(file.Content[0], savedFile.Content[0]);
Assert.Equal(file.FileName, savedFile.FileName);
Assert.Equal(file.Key, savedFile.Key);
Assert.Equal(file.OriginalName, savedFile.OriginalName);
Assert.Equal(file.RequestReduceFileId, savedFile.RequestReduceFileId);
Assert.True((file.LastUpdated - savedFile.LastUpdated) <= TimeSpan.FromMilliseconds(4));
}
[Fact]
public void WillUpdateContentAndLastUpdatedTime()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "fileName",
Key = Guid.NewGuid(),
LastUpdated = new DateTime(2010, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = id
};
testable.ClassUnderTest.Save(file);
var file2 = new RequestReduceFile()
{
Content = new byte[] { 2 },
FileName = "fileName",
Key = Guid.NewGuid(),
OriginalName = "originalName",
RequestReduceFileId = id
};
testable.ClassUnderTest.Save(file2);
var savedFile = testable.ClassUnderTest.SingleOrDefault<RequestReduceFile>(id);
Assert.Equal(2, savedFile.Content[0]);
Assert.True(savedFile.LastUpdated > new DateTime(2011, 1, 1));
}
[Fact]
public void WillThrowIfFilenameIsTooLong()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-123456789-1",
Key = Guid.NewGuid(),
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = id
};
var ex = Record.Exception(() => testable.ClassUnderTest.Save(file));
Assert.NotNull(ex);
}
[Fact]
public void WillThrowIfFilenameIsNull()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = null,
Key = Guid.NewGuid(),
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = id
};
var ex = Record.Exception(() => testable.ClassUnderTest.Save(file));
Assert.NotNull(ex);
}
[Fact]
public void WillThrowIfContentIsNull()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = null,
FileName = "filename",
Key = Guid.NewGuid(),
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = id
};
var ex = Record.Exception(() => testable.ClassUnderTest.Save(file));
Assert.NotNull(ex);
}
}
public class Update
{
[Fact]
public void WillUpdateContentAndLastUpdatedTime()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "fileName",
Key = Guid.NewGuid(),
LastUpdated = new DateTime(2010, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = id
};
testable.ClassUnderTest.Save(file);
file.Content = new byte[] {2};
testable.ClassUnderTest.Update(file);
var savedFile = testable.ClassUnderTest.SingleOrDefault<RequestReduceFile>(id);
Assert.Equal(2, savedFile.Content[0]);
Assert.True(savedFile.LastUpdated > new DateTime(2011, 1, 1));
}
}
public class GetActiveFiles
{
[Fact]
public void WillReturnListOfCssFiles()
{
var testable = new TestableRepository();
var builder = new RequestReduce.Utilities.UriBuilder(testable.Mock<IRRConfiguration>().Object);
var id = Guid.NewGuid();
var id2 = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id, new byte[] { 1 }),
Key = id,
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = Hasher.Hash(new byte[] { 1 })
};
var file2 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildSpriteUrl(id, new byte[] { 2 }),
Key = id,
LastUpdated = DateTime.Now,
RequestReduceFileId = Hasher.Hash(new byte[] { 2 })
};
var file3 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id2, new byte[] { 3 }),
Key = id2,
LastUpdated = DateTime.Now,
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 3 })
};
testable.ClassUnderTest.Save(file);
testable.ClassUnderTest.Save(file2);
testable.ClassUnderTest.Save(file3);
var result = testable.ClassUnderTest.GetActiveFiles();
Assert.Equal(2, result.Count());
Assert.True(result.Contains(file.FileName));
Assert.True(result.Contains(file3.FileName));
}
[Fact]
public void WillNotReturnExpiredKeys()
{
var testable = new TestableRepository();
var builder = new RequestReduce.Utilities.UriBuilder(testable.Mock<IRRConfiguration>().Object);
var id = Guid.NewGuid();
var id2 = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id, new byte[] { 1 }),
Key = id,
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = Hasher.Hash(new byte[] { 1 }),
IsExpired = true
};
var file2 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id2, new byte[] { 2 }),
Key = id2,
LastUpdated = DateTime.Now,
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 2 })
};
testable.ClassUnderTest.Save(file);
testable.ClassUnderTest.Save(file2);
var result = testable.ClassUnderTest.GetActiveFiles();
Assert.Equal(1, result.Count());
Assert.Equal(file2.FileName, result.First());
Assert.True(result.Contains(file2.FileName));
}
[Fact]
public void WillReturnMostRecentActiveEntryPerKey()
{
var testable = new TestableRepository();
var builder = new RequestReduce.Utilities.UriBuilder(testable.Mock<IRRConfiguration>().Object);
var id = Guid.NewGuid();
var id2 = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id, new byte[] { 1 }),
Key = id,
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = Hasher.Hash(new byte[] { 1 }),
IsExpired = true
};
var file2 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id, new byte[] { 2 }),
Key = id,
LastUpdated = DateTime.Now.Subtract(new TimeSpan(0, 0, 2)),
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 2 })
};
var file3 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id, new byte[] { 3 }),
Key = id,
LastUpdated = DateTime.Now.Subtract(new TimeSpan(0, 0, 3)),
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 3 })
};
var file4 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = builder.BuildResourceUrl<CssResource>(id2, new byte[] { 4 }),
Key = id2,
LastUpdated = DateTime.Now.Subtract(new TimeSpan(0, 0, 3)),
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 4 })
};
var file5 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "file.png",
Key = id2,
LastUpdated = DateTime.Now,
OriginalName = "originalName2",
RequestReduceFileId = Hasher.Hash(new byte[] { 5 })
};
testable.ClassUnderTest.Save(file);
testable.ClassUnderTest.Save(file2);
testable.ClassUnderTest.Save(file3);
testable.ClassUnderTest.Save(file4);
testable.ClassUnderTest.Save(file5);
var result = testable.ClassUnderTest.GetActiveFiles();
Assert.Equal(2, result.Count());
Assert.True(result.Contains(file2.FileName));
Assert.True(result.Contains(file4.FileName));
}
}
public class GetFilesFromKey
{
[Fact]
public void WillPullTheFilesWithTheSpecifiedKey()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var id2 = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = new CssResource().FileName,
Key = id,
LastUpdated = DateTime.Now,
OriginalName = "originalName",
RequestReduceFileId = Guid.NewGuid()
};
var file2 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = new CssResource().FileName,
Key = id,
LastUpdated = DateTime.Now,
RequestReduceFileId = Guid.NewGuid()
};
var file3 = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = new CssResource().FileName,
Key = id2,
LastUpdated = DateTime.Now,
OriginalName = "originalName2",
RequestReduceFileId = Guid.NewGuid()
};
testable.ClassUnderTest.Save(file);
testable.ClassUnderTest.Save(file2);
testable.ClassUnderTest.Save(file3);
var result = testable.ClassUnderTest.GetFilesFromKey(id);
Assert.Equal(2, result.Count());
Assert.True(result.All(x => x.Key == id));
Assert.NotNull(result.Single(f => f.RequestReduceFileId == file.RequestReduceFileId));
Assert.NotNull(result.Single(f => f.RequestReduceFileId == file2.RequestReduceFileId));
}
}
public class GetUrlByKey
{
[Fact]
public void WillGetCssUrlFromDb()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "fileName1" + new CssResource().FileName,
Key = id,
LastUpdated = new DateTime(2010, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = Guid.NewGuid()
};
testable.ClassUnderTest.Save(file);
var file2 = new RequestReduceFile()
{
Content = new byte[] { 2 },
FileName = "fileName2" + new CssResource().FileName,
Key = Guid.NewGuid(),
LastUpdated = new DateTime(2011, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = Guid.NewGuid()
};
testable.ClassUnderTest.Save(file2);
var result = testable.ClassUnderTest.GetActiveUrlByKey(id, typeof(CssResource));
Assert.Equal(file.FileName, result);
testable.Dispose();
}
}
[Fact]
public void WillGetNonExpiredCssUrlFromDb()
{
var testable = new TestableRepository();
var id = Guid.NewGuid();
var file = new RequestReduceFile()
{
Content = new byte[] { 1 },
FileName = "fileName1" + new CssResource().FileName,
Key = id,
LastUpdated = new DateTime(2010, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = id,
IsExpired = true
};
testable.ClassUnderTest.Save(file);
var file2 = new RequestReduceFile()
{
Content = new byte[] { 2 },
FileName = "fileName2" + new CssResource().FileName,
Key = id,
LastUpdated = new DateTime(2011, 1, 1),
OriginalName = "originalName",
RequestReduceFileId = Guid.NewGuid()
};
testable.ClassUnderTest.Save(file2);
var result = testable.ClassUnderTest.GetActiveUrlByKey(id, typeof(CssResource));
Assert.Equal(file2.FileName, result);
testable.Dispose();
}
}
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 4,822
|
Telets (), a member of the Ugain clan, was the ruler of Bulgaria from 762 to 765. Byzantine sources indicate that Telets replaced the legitimate rulers of Bulgaria. The same sources describe Telets as a brave and energetic man in his prime (about 30 years old). Scholars have conjectured that Telets may have belonged to an anti-Slavic faction of the Bulgarian nobility.
After his accession, Telets led a well-trained and well-armed army against the Byzantine Empire and devastated the Empire's frontier zone, inviting the emperor to a contest of strength. Emperor Constantine V Kopronymos marched north on June 16, 763, while another army was carried by a fleet of 800 ships (each carrying infantry and 12 horsemen) with the intent to create a pincer movement from the north.
Telets at first fortified the mountain passes with his troops and some twenty thousand Slavic auxiliaries. Later he changed his mind and led out his troops to the plain of Anchialos (Pomorie) on June 30. The bloody battle of Anchialus then began at mid-morning, and lasted until dusk. At the end, Telets' Slavic auxiliaries deserted him for the emperor, who won the field but chose to return home in triumph. According to the Byzantine sources, Constantine V brought home a throng of Bulgarian prisoners in wooden restraints, for the entertainment of Constantinople's populace.
The military defeat sealed the fate of Telets, who was lynched together with his supporters by his rebellious subjects.
See also
History of Bulgaria
Bulgars
References
Mosko Moskov, Imennik na bălgarskite hanove (novo tălkuvane), Sofia 1988.
Jordan Andreev, Ivan Lazarov, Plamen Pavlov, Koj koj e v srednovekovna Bălgarija, Sofia 1999.
(primary source), Bahši Iman, Džagfar Tarihy, vol. III, Orenburg 1997.
730s births
765 deaths
8th-century murdered monarchs
Monarchs of the Bulgars
Murdered Bulgarian monarchs
8th-century Bulgarian monarchs
Bulgarian people of the Byzantine–Bulgarian Wars
Turkic rulers
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 5,206
|
Q: Apache reverse proxy setup I have a JBoss application server on machine1. The application address is http://ip-address:8080/webapp. I wanted to have only an IP pointing to the application, so on machine2 I set up an Apache proxy. But that only moves the application to port 80; the /webapp directory cannot be removed, so through the proxy the address is http://ip-address/webapp. Is there a way to have just the IP point to the application? For example, the address http://ip-address should open the web page of the application.
A: JBoss integration with apache2 is best done using the Tomcat connector (mod_jk).
http://community.jboss.org/wiki/UsingModjk12WithJBoss
Depending on your server environment you may even have readymade packages available to quickly setup mod_jk.
A: Take a look at this SF question:
*simple apache2 reverse proxy setup not working
Here's a bit more verbose version:
*http://www.apachetutor.org/admin/reverseproxies
Take a look at "Debugging your Proxy Configuration" section.
Note that there may be issues with this setup, depending on what you app is doing. The simple case would be if you use any URLs in JavaScript - these may need to be converted if they change the behavior of the app in a way visible to the end user (e.g. doing redirects or so).
A: Using AJP, I have used Apache as a front end to Tomcat. (Note: the ProxyPass ajp:// syntax below is handled by mod_proxy_ajp rather than mod_jk.)
The working vhost configuration I had is
<VirtualHost *:80>
ServerName yourapp.name
ProxyPass / ajp://internal.com
ProxyPassReverse / ajp://internal.com
</VirtualHost>
You can include other security and logging directives in the definition as needed.
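For the original question — serving the application at the root of the proxy host — a plain mod_proxy_http mapping of / onto the /webapp context is one minimal option. The sketch below uses the hostname and port from the question as placeholders; whether it is sufficient depends on the app, since applications that emit absolute /webapp links inside their HTML may additionally need rewriting (e.g. mod_proxy_html):

<VirtualHost *:80>
    # machine1:8080 is the JBoss backend from the question.
    # The trailing slashes on both arguments are significant.
    ProxyPass        / http://machine1:8080/webapp/
    ProxyPassReverse / http://machine1:8080/webapp/
</VirtualHost>

ProxyPassReverse only rewrites redirect headers such as Location, not URLs inside the response body.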
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 1,183
|
Terence Kemp McKenna () was an American ethnobotanist, mystic, psychonaut, lecturer, author, and advocate for the responsible use of naturally occurring psychedelic plants. He spoke and wrote about a variety of subjects, including psychedelic drugs, plant-based entheogens, shamanism, metaphysics, alchemy, language, philosophy, culture, technology, the environment, and the theoretical origins of human consciousness. He was called the "Timothy Leary of the '90s", "one of the leading authorities on the ontological foundations of shamanism", and the "intellectual voice of rave culture".
McKenna formulated a concept of the nature of time based on fractal patterns he claimed to have discovered in the Yi-Jing, which he called novelty theory, proposing that it predicted the end of time and a transition of consciousness in the year 2012. His promotion of novelty theory and of its connection to the Maya calendar is considered one of the factors behind the widespread beliefs about 2012 eschatology. Novelty theory is considered pseudoscience.
Ideas
Novelty theory
Novelty theory is a pseudoscientific idea that purports to predict the ebb and flow of novelty in the universe as an intrinsic quality of time, proposing that time is not a constant but has various qualities tending toward either "habit" or "novelty". Habit, in this context, can be thought of as entropic, repetitive, or conservative; novelty as creative, disjunctive, or progressive phenomena. McKenna's idea was that the universe is an engine designed for the production and conservation of novelty, and that as novelty increases, so does complexity. Each level of complexity achieved becomes the platform for a further ascent toward complexity.
The basis of the theory was originally conceived in the mid-1970s, after McKenna's experiences with hallucinogenic mushrooms at "La Chorrera" in the Amazon led him to study the King Wen sequence of the Yi-Jing closely.
References
1946 births
2000 deaths
20th-century historians
20th-century American writers
Religious skeptics
Mystics
American anarchists
American people of Welsh descent
American people of Irish descent
Psychoactive drugs
|
{
"redpajama_set_name": "RedPajamaWikipedia"
}
| 1,897
|
\section{Introduction}
In 1948 it was shown by Casimir \cite{casi48-51-793} that the presence of a boundary significantly modifies the vacuum structure of a quantum field. In particular, he computed the pressure between two perfectly conducting parallel plates due to the ground state of an electromagnetic field and determined that an attractive force exists between them. As a result, an enormous amount of literature has been produced which has been focused on the analysis of the effects that the geometry of the boundary and the boundary conditions have on the vacuum structure of quantum fields (see e.g. \cite{milt01b,bord01-353-1,bord09b,eliz95b,kirs02b,milo94b,most97b} and references therein). In the vast majority of cases, however, research has been centered on the analysis of quantum systems endowed with ideal boundary conditions. The reason for such polarized interest lies in the fact that the eigenvalues and the eigenfunctions of the relevant Laplace type operator with ideal boundary conditions are often explicitly known. This makes the analysis of quantum vacuum effects, as a result, less involved.
Ideal boundary conditions, although of crucial theoretical importance for understanding important aspects of quantum vacuum effects, do not always provide an accurate description of physical systems. For instance, it is well known that a neutral metal plate does not constitute an impenetrable boundary for all the frequencies associated with a quantum field. For this reason different methods have been suggested in order to construct models that more closely characterize physical properties of real materials.
One of the approaches used to more precisely describe quantum systems relies on the replacement of ideal boundary conditions with a boundary modeled by a potential. The rationale behind this idea is that it is possible to describe physical characteristics of a boundary by choosing an appropriate potential function. This process reduces to substituting a geometric boundary of the system with a non-dynamical external field. Along these ideas, the analysis of the Casimir effect in the setting of potentials modeling a boundary have appeared, for example, in \cite{acto95-52-3581,eliz97-30-5393}.
It is worth mentioning that the importance of external potentials in quantum systems is not confined to the description of a non-ideal boundary. In fact, they play an important role in describing background potentials resulting from classical solutions like monopoles \cite{hoof74-79-276,poly74-20-194}, sphalerons \cite{klin84-30-2212} and electroweak Skyrmions \cite{gips81-183-524,gips84-231-365,ambj85-256-434,eila86-56-1331,frie77-15-1694,frie77-16-1096,skyr61-260-127,skyr62-31-556,adki83-228-552}.
Furthermore, the formalism needed to study the vacuum energy of scalar fields under the influence of integrable potentials in unbounded Euclidean space has been developed in \cite{bord96-53-5753,dunn06-39-11915,dunn09-42-075402}. It is the purpose of the current work to extend this analysis to include finite spatial volumes and additional Kaluza-Klein dimensions.
The models considered in this paper can be formally reduced to the analysis of the Casimir effect on a one-dimensional finite interval with an arbitrary, sufficiently smooth, potential. Assuming two additional free dimensions this represents a piston configuration in flat space, where the piston is modeled by a potential with a sharply concentrated support. The arbitrary additional dimensions are represented by a smooth Riemannian manifold ${\cal N}$ with or without boundary, which describes the cross-section of the piston. Piston configurations have received significant interest in recent years as they are often free of divergences which allows for an unambiguous prediction of forces \cite{cava04-69-065015}. Prior research has concentrated on infinitely thin pistons where different types of ideal boundary conditions were imposed; see, e.g., \cite{eliz09-79-065023,fucci12a,hert05-95-250402,kirs09-79-065019,mara07-75-085019}. As previously mentioned, this work differs substantially from previous ones in that infinitely thin pistons are replaced by potentials modeling pistons of finite thickness.
In this paper the zeta function regularization method is used, in which the Casimir energy and force are given by contour integrals that involve boundary values of unique solutions to initial value problems. This particular approach has been successfully applied to the evaluation of functional determinants \cite{kirs03-308-502,kirs04-37-4649}, for which explicit results can be obtained through purely analytical means and can be represented in closed form. In order to compute the Casimir energy and the corresponding force, however, numerical integration through suitable quadrature methods is necessary.
The outline of the paper is as follows. In Section \ref{onedim} the spectral zeta function associated with a boundary value problem in one dimension is represented in terms of a contour integral and the formalism needed to perform the analytic continuation to a suitable domain in the complex plane is developed. The obtained meromorphic extension is exploited to analyze the Casimir energy for a massive scalar field on an interval under the influence of a background potential. The Casimir force on the piston is found for cases where the potential mimics a piston; see Equation (\ref{2.16}). In Section \ref{Sec:Examples} numerical results for the Casimir force are given for various potentials constructed from smooth, compactly supported functions, each of which can be realized as a delta-sequence. In Section \ref{hdp}, additional Kaluza-Klein dimensions are included in the analysis and the Casimir energy is determined. In this setting, results for the Casimir force are provided for cases in which the additional dimensions are chosen to be either $\mathbb{R}$ or $\mathbb{R}^{2}$. In the Conclusions, the
results are summarized and additional studies along the lines of this work are outlined.
\section{Massive scalar field on a one-dimensional interval}\label{onedim}
In this section a non-selfinteracting massive scalar field is considered on the interval $I = [0,L]\subset\mathbb{R}$ under the influence of a smooth background potential $V(x)$. The one-particle energy eigenvalues $\lambda_\ell$ of this system are determined by the differential equation
\beq
\left( - \frac{d^2}{dx^2} + V(x) + m^2 \right) \phi_\ell (x) = \lambda_\ell \phi_\ell (x)\;, \label{2.1}
\eeq
augmented by boundary conditions which, for definiteness, are chosen to be of Dirichlet type
\beq
\phi_\ell (0) = \phi_\ell (L) =0\;.\label{2.2}
\eeq
In what follows, the mass parameter $m$ is assumed large enough so that all the eigenvalues of the problem (\ref{2.1}) and (\ref{2.2}) are strictly positive.
The spectral zeta function associated with the above boundary value problem is defined by
\beq
\zeta_I (s) = \sum_{\ell =0} ^\infty \lambda_\ell ^{-s}\;, \label{2.3}
\eeq
and, due to the
asymptotic behavior of the eigenvalues, it is convergent in the half-plane $\Re(s) > 1/2$. In the framework of zeta function regularization, the Casimir energy $E_{Cas}$ of the system is encoded in the value of $\zeta_I (s)$ at $s=-1/2$ \cite{bord09b,eliz95b,kirs02b}. More precisely
\beq
E_{Cas} = \lim_{\alpha \to 0} \frac {\mu^{2\alpha}} 2 \zeta_I \left( \alpha - \frac 1 2 \right)\;, \label{2.4}
\eeq
where $\mu$ represents an arbitrary parameter with the dimension of a mass.
Since the point $s=-1/2$ does not belong to the domain of convergence of the series (\ref{2.3}),
it is necessary to perform the analytic continuation of this series to a neighborhood of $s=-1/2$. Here, the strategy employed consists in rewriting the series (\ref{2.3}) as a contour integral using Cauchy's residue theorem \cite{conw78b} and then utilizing the obtained integral representation as a starting point for the analytic continuation. This technique has been successfully used for the calculation of functional determinants in the setting described by Equation (\ref{2.1}) in \cite{kirs03-308-502,kirs04-37-4649}.
Following the ideas developed in \cite{fucci12,kirs03-308-502,kirs04-37-4649}, instead of the boundary value problem (\ref{2.1}) and (\ref{2.2}), the following equivalent {\it initial value problem} is to be considered,
\beq
\left( - \frac {d^2}{dx^2} + V(x) +m^{2}\right) u_\nu (x) = \nu^2 u_\nu (x)\;, \quad \quad u_\nu (0) =0\;, \quad u_\nu ' (0) =1\;, \label{2.5}
\eeq
where $\nu \in \com$. The eigenvalues $\lambda_\ell$ of the original eigenvalue problem (\ref{2.1}) with Dirichlet boundary conditions (\ref{2.2}) are then determined as solutions to the transcendental equation
\beq
u_\nu (L) =0\;.\label{2.6}
\eeq
Let us point out that the solution to Equation (\ref{2.5}) is uniquely determined and defines an analytic function of $\nu$.
For $\Re (s) > 1/2$, the zeta function (\ref{2.3}) can be represented in terms of a contour integral as \cite{fucci12}
\beq
\zeta _I (s) = \frac 1 {2\pi i} \int\limits_\gamma d\nu (\nu^2 + m^2 ) ^{-s} \frac d {d\nu} \ln u_\nu (L)\;, \label{2.7}
\eeq
where the contour $\gamma$ encloses all solutions to Equation (\ref{2.6}), assumed to be on the positive real axis,
in the counterclockwise direction. A deformation of the integration contour to the imaginary axis in (\ref{2.7})
leads to the following expression
\beq
\zeta _I (s) = \frac{\sin (\pi s)} \pi \int\limits_m^\infty dk (k^2-m^2)^{-s} \frac d {dk} \ln u_{ik} (L)\;, \label{2.8}
\eeq
which is now valid for $1/2 < \Re (s) < 1$. The analytic continuation of (\ref{2.8}) to the region $\Re(s)\leq 1/2$ is
obtained by adding and subtracting the large-$k$ asymptotic behavior of the solution $u_{ik} (L)$ \cite{fucci12}.
The needed asymptotic behavior can be determined by applying a standard WKB technique to the unique solution of the
initial value problem
\beq
\left( - \frac{d^2} {dx^2} + V(x) + k^2 \right) u_{ik} (x) =0\;, \quad \quad u_{ik} (0) =0\;, \quad u_{ik} ' (0) =1\;.\label{2.9}
\eeq
Although in the process of analytic continuation one only needs to be concerned with the exponentially growing part for large $k$,
at this stage it is important to take into account both the exponentially growing and the exponentially decaying contributions in order to be able to
correctly impose the initial condition in (\ref{2.9}). Following, e.g., \cite{bend10b,mill06b}, it is convenient to introduce the auxiliary function
\beq
S(x,k) = \partial _x \ln \psi_k (x)\;, \label{w1}
\eeq
where $\psi_k (x)$ satisfies
\beq
\left( - \frac {d^2}{dx^2} + V(x) + k^2 \right) \psi _k (x) =0. \label{2.10}
\eeq
By using (\ref{w1}) in (\ref{2.10}), it is straightforward to show that $S(x,k)$ satisfies the differential equation
\beq
S ' (x,k) = k^2 + V(x) - S^2 (x,k)\;, \label{2.11}
\eeq
where, here and in the rest of this work, the prime indicates differentiation with respect to the variable $x$.
The asymptotic expansion of $S(x,k)$ for large $k$ can be written in the form
\beq
S(x,k) = \sum _{i=-1}^\infty k^{-i} S_i (x)\;,
\eeq
where
the asymptotic orders $S_i (x)$ are recursively determined by
\beq
S_{-1}(x) &=& \pm 1\;, \quad S_0 (x) =0\;, \quad S_1 (x) = \pm \frac{V(x)} 2\;, \label{2.12}\\
S_{i+1} (x) &=& \mp \frac 1 2 \left(S_i ' (x) + \sum_{j=0}^i S_j (x) S_{i-j} (x) \right)\;, \quad i\geq 1\;.\nn
\eeq
The two different signs in (\ref{2.12}) correspond to the exponentially growing and decaying solutions $\psi_k (x)$ of (\ref{2.10}). Letting $S^{\pm } (x,k)$ denote the solutions of (\ref{2.11}) corresponding to the two signs, the associated solutions of (\ref{2.10}) have the form
\beq
\psi_k ^\pm (x) = A^\pm \exp \left\{ \int\limits_0^x dt \,\, S^\pm (t,k) \right\}\;.
\eeq
The original function of interest, namely $u_{ik} (x)$, is obtained as a linear combination
\beq \label{w3}
u_{ik} (x) = A^+ \exp \left\{ \int\limits_0^x dt \,\, S^+ (t,k) \right\} + A^- \exp \left\{ \int\limits_0^x dt \,\, S^- (t,k) \right\}\;,
\eeq
where the arbitrary coefficients $A^{+}$ and $A^{-}$ can be found by imposing the initial condition in (\ref{2.9}) and they read
\beq \label{w4}
A^+ = - A^-, \quad \quad A^+ = \frac 1 {S^+ (0,k) - S^- (0,k)}\;.
\eeq
By using the result (\ref{w4}) in the expression (\ref{w3}) the large-$k$ behavior of $u_{ik} (L)$ can be found to be
\beq
u_{ik} (L) = \frac 1 {S^+ (0,k) - S^- (0,k)} \exp \left\{ \int\limits_0^L dt \,\, S^+(t,k) \right\} \big(1+ E(k)\big)\;,
\eeq
where $E(k)$ denotes exponentially decreasing terms as $k\to\infty$.
The quantity relevant for the integral (\ref{2.8}) then reads
\beq \label{w61}
\ln u_{ik} (L) &=& - \ln \left( S^+ (0,k) - S^- (0,k)\right) + \int\limits_0^L dt \,\, S^+(t,k) \nn\\
&=& - \ln (2k) + k L+ \sum_{j=0}^\infty d_j k^{-j}\;,
\eeq
where exponentially small terms have been omitted and the coefficients $d_j$ are defined as
\beq\label{w5}
d_{2j+1}=\int\limits_{0}^{L}dx\,S_{2j+1}^{+}(x)\;,
\eeq
and
\beq
d_{2j}=\int\limits_{0}^{L}dx\,S_{2j}^{+}(x)-\Omega_{j}(0)\;,
\eeq
with $\Omega_{j}(0)$ defined through the cumulant expansion
\beq
\ln\left[1+\sum_{n=1}^{\infty}\frac{S^{+}_{2n-1}(0)}{z^{2n}}\right]\simeq\sum_{i=1}^{\infty}\frac{\Omega_{i}(0)}{z^{2i}}\;.
\eeq
For completeness, the first six coefficients $d_j$ are explicitly given by
\beq
d_0&=&0\;,\quad\quad d_1=\frac{1}{2}\int\limits_0^L dt\,\,V(t)\;,\quad\quad d_2 = - \frac 1 4 [V(L) + V(0) ]\;,\nn\\
d_3&=&\frac{1}{8}[V'(L)-V'(0)]-
\frac{1}{8}\int\limits_0^Ldt\,\, V^2(t)\;,\nn\\
d_4&=&-\frac{1}{16}[V''(L)+V''(0)]+\frac{1}{8}[V^2(L)+V^2(0)]\;,\label{dsubi}\\
d_5&=&\frac{1}{32}[V^{(3)}(L)-V^{(3)}(0)]-\frac{5}{32}[V(L)V'(L)-V(0)V'(0)]+\frac{1}{16}
\int\limits_0^Ldt\,\,V^3(t)\nn\\
&~&-\frac{1}{32}\int\limits_0^Ldt\,\,V(t)V''(t)\;.\nn
\eeq
It is clear that by using (\ref{2.12}) and the definition (\ref{w5}), an arbitrary number of coefficients can be determined by using an algebraic computer program.
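This can be illustrated with a short script. The sketch below (Python with exact rational arithmetic; an illustrative stand-in for a computer algebra program, not the code used for this paper) builds the $S^{+}_{i}$ and the coefficients $d_{j}$ for the exactly solvable case of a constant potential $V(x)=c$, where $u_{ik}(x)=\sinh(\kappa x)/\kappa$ with $\kappa=\sqrt{k^{2}+c}$, and verifies that they reproduce the large-$k$ expansion of $\kappa L-\ln(2\kappa)$. The test values of $c$ and $L$ are arbitrary.

```python
# Exact check of the recursion for the S_i and of the coefficients d_j in the
# solvable case of a constant potential V(x) = c on [0, L].  There
# u_{ik}(x) = sinh(kappa x)/kappa with kappa = sqrt(k^2 + c), so that
# ln u_{ik}(L) = kappa L - ln(2 kappa) up to exponentially small terms, and
# the d_j must reproduce the expansion of this expression in powers of 1/k.
from fractions import Fraction as Fr

c, L = Fr(3), Fr(2)  # arbitrary rational test values

# upper-sign branch: S_{-1} = 1, S_0 = 0, S_1 = V/2; for a constant potential
# the derivative term S_i'(x) in the recursion vanishes identically
S = {-1: Fr(1), 0: Fr(0), 1: c / 2}
for i in range(1, 5):
    S[i + 1] = Fr(-1, 2) * sum(S[j] * S[i - j] for j in range(0, i + 1))

# cumulant coefficients Omega_i from ln(1 + S_1 w^2 + S_3 w^4 + S_5 w^6),
# with polynomials in w^2 stored as coefficient lists [1, w^2, w^4, w^6]
def poly_mul(a, b, order=4):
    out = [Fr(0)] * order
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < order:
                out[i + j] += ai * bj
    return out

p = [Fr(0), S[1], S[3], S[5]]
log_series, term = [Fr(0)] * 4, [Fr(1), Fr(0), Fr(0), Fr(0)]
for m in range(1, 4):                      # ln(1+p) = p - p^2/2 + p^3/3 - ...
    term = poly_mul(term, p)
    log_series = [ls + Fr((-1) ** (m + 1), m) * t
                  for ls, t in zip(log_series, term)]
Omega = {i: log_series[i] for i in (1, 2, 3)}

# d_j: integral of S_j over [0, L] (here simply L*S_j), minus Omega_{j/2}
# for even j
d = {j: S[j] * L - (Omega[j // 2] if j % 2 == 0 else 0) for j in range(1, 6)}

# exact coefficients of kappa*L - ln(2*kappa) - kL + ln(2k): odd j = 2n-1
# from the binomial series of sqrt(1 + c/k^2), even j = 2n from
# -(1/2) ln(1 + c/k^2)
def binom_half(n):
    num = Fr(1)
    for i in range(n):
        num *= Fr(1, 2) - i
    for i in range(1, n + 1):
        num /= i
    return num

exact = {}
for n in (1, 2, 3):
    exact[2 * n - 1] = L * binom_half(n) * c**n
    exact[2 * n] = Fr((-1) ** n, 2 * n) * c**n

assert all(d[j] == exact[j] for j in range(1, 6))
```

Since the arithmetic is exact, the check confirms the recursion and the cumulant subtraction term by term through $d_5$.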
By adding and subtracting from the integral representation (\ref{2.8}) the leading $N+1$ terms in the asymptotic expansion (\ref{w61}), the zeta function can be represented as a sum of two terms
\beq\label{w6}
\zeta_I (s) = \zeta _I ^{(f)} (s) + \zeta _I ^{(as)} (s)\;,
\eeq
where \cite{fucci12}
\beq
\zeta_I ^{(f)} (s) = \frac{\sin \pi s} \pi \int\limits_{m} ^\infty dk \,\, (k^2 - m^2)^{-s} \frac d {dk} \left\{ \ln u_{ik} (L) -kL + \ln (2k) -\sum_{j=0}^N d_j k^{-j} \right\}\;, \label{2.14}
\eeq
and
\beq
\zeta_I ^{(as)} (s) = \frac{\sin \pi s} \pi \int\limits_{m} ^\infty dk \,\, (k^2 - m^2)^{-s} \frac d {dk} \left\{ kL - \ln (2k) +\sum_{j=0}^N d_j k^{-j} \right\}\;. \label{2.15}
\eeq
By construction, the function $\zeta_I^{(f)} (s)$ is analytic in the region $\Re (s) >-(N+1)/2$ and the meromorphic structure of $\zeta_I ^{(as)} (s)$ is made manifest once the $k$-integration is performed. More explicitly,
\beq \label{w7}
\zeta_I ^{(as)} (s) = \frac 1 {2 \Gamma (s)} \left\{ \frac{L \Gamma \left( s- \frac 1 2 \right)}{\sqrt \pi} m^{1-2s}
- \Gamma (s) m ^{-2s} - \sum_{j=1}^N j d_j \frac{ \Gamma \left( s+ \frac j 2 \right)}{\Gamma \left( 1 + \frac j 2 \right)} m^{-j-2s} \right\}\;,
\eeq
where it is clear now that $\zeta_I ^{(as)} (s)$ represents a meromorphic function in the entire complex plane possessing only simple poles.
The expression (\ref{w6}) together with (\ref{2.14}) and (\ref{2.15}) represents the desired analytic continuation of the spectral zeta function (\ref{2.7}).
For the purpose of computing the Casimir energy it is sufficient to choose $N=1$ in the above expressions for $\zeta_I ^{(f)} (s)$ and $\zeta_I ^{(as)} (s)$.
In this case, $\zeta_I ^{(f)} (s)$ is analytic for $\Re(s)>-1$ and one can simply set $s=-1/2$ in (\ref{2.14}) to obtain
\beq\label{w8}
\zeta ^{(f)} _I(-1/2) &=& - \frac 1 \pi \int\limits_{m } ^\infty dk \,\, (k^2 - m^2)^{1/2} \frac d {dk} \left\{ \ln u_{ik} (L) - kL + \ln (2k) - \frac{d_1}{ k} \right\} ,
\eeq
while, in the neighborhood of $s=-1/2$, (\ref{w7}) gives
\begin{eqnarray}\label{w9}
\lefteqn{\zeta_I^{(as)} (-1/2+\alpha ) = \frac 1 \alpha \frac{ 2 d_1 + Lm^2}{4\pi}}\nn\\
&+& \frac 1 {4\pi} \left[ - 2m\pi + Lm^2 (\ln 4-1) + 4d_1 (\ln 2-1) - 2 (2d_1 + Lm^2) \ln m\right] + O(\alpha )\;.
\end{eqnarray}
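The pole and finite part displayed above can be checked numerically by evaluating (\ref{w7}) with $N=1$ at $s=-1/2\pm\alpha$ for small $\alpha$; the symmetric combination of the two values isolates the residue and the constant term. The short Python sketch below is an illustrative verification only, with arbitrary test values for $L$, $m$ and $d_1$.

```python
# Numerical check of the Laurent expansion (w9): evaluate (w7) with N = 1
# symmetrically around s = -1/2 and extract residue and finite part.
import math

L, m, d1 = 1.0, 2.0, 0.3  # arbitrary test values

def zeta_as(s):
    """Right-hand side of (w7) with N = 1."""
    return 0.5 / math.gamma(s) * (
        L * math.gamma(s - 0.5) * m**(1.0 - 2.0 * s) / math.sqrt(math.pi)
        - math.gamma(s) * m**(-2.0 * s)
        - d1 * math.gamma(s + 0.5) / math.gamma(1.5) * m**(-1.0 - 2.0 * s))

# zeta_as(-1/2 + alpha) = R/alpha + C + O(alpha); the symmetric evaluation
# recovers R and C up to O(alpha^2)
alpha = 1e-4
f_p, f_m = zeta_as(-0.5 + alpha), zeta_as(-0.5 - alpha)
residue = alpha * (f_p - f_m) / 2.0
finite = (f_p + f_m) / 2.0

# pole and finite part as displayed in (w9)
R_exact = (2.0 * d1 + L * m**2) / (4.0 * math.pi)
C_exact = (-2.0 * m * math.pi + L * m**2 * (math.log(4.0) - 1.0)
           + 4.0 * d1 * (math.log(2.0) - 1.0)
           - 2.0 * (2.0 * d1 + L * m**2) * math.log(m)) / (4.0 * math.pi)

assert abs(residue - R_exact) < 1e-6
assert abs(finite - C_exact) < 1e-5
```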
The explicit form of the Casimir energy for this system easily follows by substituting (\ref{w8}) and (\ref{w9})
in the following expression
\begin{equation}\label{w10}
E_{Cas}=\frac{1}{2}\textrm{FP}\zeta_{I}\left(-\frac{1}{2}\right)+\frac{1}{2}\left(\frac{1}{\alpha}+\ln\mu^{2}\right)\textrm{Res}\,\zeta_{I}\left(-\frac{1}{2}\right)+O(\alpha)\;,
\end{equation}
obtained by expanding (\ref{2.4}) about $\alpha= 0$. In the above formula and throughout the rest of this paper $\textrm{Res}$ denotes the residue of the function and $\textrm{FP}$ its finite part.
It is evident from (\ref{w9}) and (\ref{w10}) that the Casimir energy is, in general, not well defined because of the presence of the term $\textrm{Res}\,\zeta_{I}(-1/2)$.
In order to overcome this problem, the system is interpreted in terms of a piston configuration where the piston itself is modeled by the potential $V(x)$.
For this purpose, the potential $V(x)$ is assumed to have compact support within the interval $[0,L]$. More precisely, it is assumed that the support of $V(x)$ is contained in $[a-\epsilon , a+\epsilon] \subset [0,L]$. According to this description, the point $x=a$ represents the position of the piston.
The asymptotic terms $d_{j}$ are expressed either in terms of the values of $V(x)$ and its derivatives at the endpoints $x=0$ and $x=L$, which vanish for a potential supported in the interior, or as integrals of $V(x)$ and its derivatives over $[0,L]$; in either case they are independent of the position $a$. It follows from the previous remarks that the Casimir force on the piston, defined in terms of the Casimir energy as
\beq
F_{Cas}=-\frac{\partial}{\partial a}E_{Cas}\;,
\eeq
is a well defined quantity since $\textrm{Res}\,\zeta_{I}(-1/2)$ is, in this setting, independent of $a$. Hence, the explicit expression for the force is
\beq
F_{Cas}(a) = \frac 1 {2\pi } \int\limits_{m} ^\infty dk \,\,
(k^2 - m^2)^{1/2} \,\, \frac \partial {\partial a} \frac \partial {\partial k} \ln u_{ik} (L)\;.\label{2.16}
\eeq
It is worth pointing out that, according to the above formula,
the magnitude and direction of the force are encoded in the boundary value of
the solution to an initial value problem associated with an ordinary differential equation, and no additional information is necessary.
Despite the simplicity of (\ref{2.16}), information about the behavior of $F_{Cas}$ as a function of $a$ can only be obtained through numerical integration techniques since the solution of (\ref{2.9}) is not explicitly known for an arbitrary $V(x)$. In the following section the analysis of $F_{Cas}$ is provided for different types of potentials constructed from smooth, compactly supported functions.
\section{Examples: Gaussian Potentials}\label{Sec:Examples}
It is clear, from (\ref{2.16}), that in order to extract information about the Casimir force $F_{Cas}$ the evaluation of $u_{ik} (L)$ is necessary.
An immediate numerical concern is that solutions to the initial value problem
(\ref{2.9}) for large $k$ contain an exponentially increasing term of the type $e^{kx}$. For this reason, to ensure accuracy in the numerical evaluation, discretization sizes must be chosen to be sufficiently small. This restriction is computationally costly but can be circumvented with relative ease in the following manner. Let $u_{ik} (x) = e^{kx} \varphi_{ik} (x)$ in (\ref{2.9}). The newly introduced function $\varphi_{ik} (x)$ satisfies the initial value problem
\beq
\left( - \frac {d^2} {dx^2} - 2k \frac d {dx} + V(x) \right) \varphi_{ik} (x) =0\;, \quad \quad \varphi_{ik } (0) =0\;, \quad \varphi _{ik} ' (0) =1\;.\label{3.1}
\eeq
The Casimir force in (\ref{2.16}) can, therefore, be written as
\begin{equation}
F_{Cas}(a)=\frac 1 {2\pi } \int\limits_{m} ^\infty dk \,\,
(k^2 - m^2)^{1/2} \,\, \frac \partial {\partial a} \frac \partial {\partial k}\left[kL+\ln \varphi_{ik} (L)\right]\;,
\end{equation}
and since the term $kL$ does not depend on the position $a$, this simplifies to
\beq
F_{Cas}(a) = \frac 1 {2\pi } \int\limits_{m} ^\infty dk \,\,
(k^2 - m^2)^{1/2} \,\,\frac \partial {\partial a} \frac \partial {\partial k} \ln \varphi_{ik} (L)\;.\label{3.2}
\eeq
The expression (\ref{3.2}) is now suitable for a numerical evaluation since the exponentially growing term has been dealt with analytically
and, as a result, stringent tolerances on the discretization sizes have been alleviated.
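To illustrate the gain, the transformed initial value problem (\ref{3.1}) can be integrated with an off-the-shelf scheme at modest step sizes. The Python sketch below uses a fixed-step fourth-order Runge-Kutta method, an illustrative stand-in for the adaptive second-order scheme described in the next paragraphs, and reproduces the exact value for a constant potential $V(x)=c$, for which $\varphi_{ik}(L)=e^{-kL}\sinh(\kappa L)/\kappa$ with $\kappa=\sqrt{k^{2}+c}$.

```python
# Integrate phi'' = -2 k phi' + V(x) phi with phi(0) = 0, phi'(0) = 1, i.e.
# Eq. (3.1); the exponential factor e^{kx} has been removed analytically, so
# phi stays O(1/k) and no stringent step-size restriction arises.
import math

def phi_L(k, V, L=1.0, n=4000):
    """phi_{ik}(L) from classical RK4 applied to the system (phi, phi')."""
    h = L / n
    y, yp, x = 0.0, 1.0, 0.0
    for _ in range(n):
        def rhs(x_, y_, yp_):
            return yp_, -2.0 * k * yp_ + V(x_) * y_
        k1, l1 = rhs(x, y, yp)
        k2, l2 = rhs(x + h / 2, y + h / 2 * k1, yp + h / 2 * l1)
        k3, l3 = rhs(x + h / 2, y + h / 2 * k2, yp + h / 2 * l2)
        k4, l4 = rhs(x + h, y + h * k3, yp + h * l3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        yp += h / 6 * (l1 + 2 * l2 + 2 * l3 + l4)
        x += h
    return y

# exact solution for a constant potential V(x) = c:
# u_{ik}(x) = sinh(kappa x)/kappa and phi_{ik}(x) = e^{-kx} u_{ik}(x)
k, c = 3.0, 5.0
kappa = math.sqrt(k**2 + c)
exact = math.exp(-k) * math.sinh(kappa) / kappa
assert abs(phi_L(k, lambda x: c) - exact) < 1e-10
```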
In the next subsections, the results for the Casimir force on Gaussian potentials centered at $a$ and with support of extension $\epsilon$ are presented. More specifically, background potentials of the following form are considered,
\beq \label{w11}
V(x) = \left\{ \begin{array} {ll}\eta \frac{\exp \left( \frac{-(x-a)^2}{\epsilon ^2 - (x-a)^2} \right)}{\int\limits_0^1 \exp \left( \frac{-(y-a)^2}{\epsilon ^2 - (y-a)^2} \right) dy } &
\mbox{for }|x-a| < \epsilon, \\
0 & \mbox{otherwise}\;, \end{array} \right.
\eeq
so that the strength $\eta$ coincides with the integral of $V(x)$ over the interval.
For simplicity, in the following examples $m=0$, $L=1$, and $\epsilon = 10^{-4}$.
In order to obtain the Casimir force on the piston as a function of the position $a$ from the expression (\ref{3.2}), an adaptive second order Runge-Kutta method is used to integrate the differential equation (\ref{3.1}), with a potential of the form (\ref{w11}). This approximates the value of $\varphi_{ik}(x)$ at $x=L$. For potentials of the type (\ref{w11}) this method can be shown to be convergent. More importantly, the discretization size can be determined such that the error in the approximation is within a user-specified tolerance. Upon successful calculation of $\varphi_{ik}(L)$, standard second order centered differences are used to approximate the necessary derivatives in the integral formulation for the Casimir force.
In the next step, the integral over the finite interval $(m,M)$ is computed through the use of a symplectic integrator. The cutoff parameter, $M$, in the integral is determined in the following manner. Let $I_M$ be the calculated Casimir force up to $M$. Now, since the integral is convergent, there exists a value of $M$ such that the contribution of the integral beyond $M$ is negligible. In effect, this can be determined as the first value of $M$ for which $|I_{M+1} - I_{M}|<\delta$ for some arbitrary $\delta$. Here, $\delta$ is chosen to be of the same order as the error obtained in the numerical integration of (\ref{3.1}). This efficient process is highly accurate, allows for large flexibility, and offers improved confidence in the results obtained.
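The steps just described can be sketched end to end in a few lines. In the Python sketch below all numerical parameters are illustrative choices, much coarser than those used for the figures: the bump width is $\epsilon=0.05$ rather than $10^{-4}$, a fixed-step fourth-order Runge-Kutta scheme replaces the adaptive second-order method, and the truncated $k$-integral is approximated by a simple midpoint rule with a fixed cutoff $M$ instead of the adaptive cutoff criterion.

```python
# End-to-end sketch of the force computation: RK4 for (3.1), second-order
# centered differences for the mixed derivative in (3.2), and a truncated
# midpoint rule for the k-integral.  All parameter values are illustrative.
import math

# L^1 norm of the reference mollifier exp(-t^2/(1-t^2)) on (-1, 1),
# computed once with a midpoint rule
_NORM = sum(math.exp(-t * t / (1.0 - t * t))
            for t in (-1.0 + (i + 0.5) / 200.0 for i in range(400))) / 200.0

def bump(x, a, eps, eta):
    """Bump potential of the type (w11), centered at a, with integral eta."""
    if abs(x - a) >= eps:
        return 0.0
    return eta * math.exp(-(x - a)**2 / (eps**2 - (x - a)**2)) / (eps * _NORM)

def phi_L(k, V, L=1.0, n=1000):
    """phi_{ik}(L) from classical RK4 for phi'' = -2k phi' + V phi."""
    h = L / n
    y, yp, x = 0.0, 1.0, 0.0
    for _ in range(n):
        def rhs(x_, y_, yp_):
            return yp_, -2.0 * k * yp_ + V(x_) * y_
        k1, l1 = rhs(x, y, yp)
        k2, l2 = rhs(x + h / 2, y + h / 2 * k1, yp + h / 2 * l1)
        k3, l3 = rhs(x + h / 2, y + h / 2 * k2, yp + h / 2 * l2)
        k4, l4 = rhs(x + h, y + h * k3, yp + h * l3)
        y += h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        yp += h / 6 * (l1 + 2 * l2 + 2 * l3 + l4)
        x += h
    return y

def casimir_force(a, eta=1.0, eps=0.05, m=0.0, M=20.0, nk=30, da=1e-2, dk=1e-2):
    """Casimir force (3.2) on a bump of strength eta centered at a."""
    def lnphi(k, aa):
        return math.log(phi_L(k, lambda x: bump(x, aa, eps, eta)))
    def integrand(k):
        mixed = (lnphi(k + dk, a + da) - lnphi(k - dk, a + da)
                 - lnphi(k + dk, a - da) + lnphi(k - dk, a - da)) / (4 * da * dk)
        return math.sqrt(k * k - m * m) * mixed
    hk = (M - m) / nk
    return sum(integrand(m + (i + 0.5) * hk) for i in range(nk)) * hk / (2 * math.pi)
```

With these coarse settings the sketch already reproduces the qualitative behavior discussed below: a positive bump placed at $a<1/2$ experiences a negative force, and the force vanishes at the symmetry point $a=1/2$.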
\subsection{Positive and Negative Potential}
By setting $\eta=1$ in (\ref{w11}) one obtains a smooth, positive potential possessing a maximum at the point $a=1/2$. This potential is depicted in the first graph of Figure \ref{Figure1Pot1Fcas}. The Casimir force on the piston modeled by this potential is plotted in the second graph of Figure \ref{Figure1Pot1Fcas}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{Potential1e10n4.eps}
\includegraphics[scale=0.85]{DataForm1P1e10n4.0a0.01to0.99k0to500.eps}
\caption{(a) The potential is shown over the domain of its support. (b) The Casimir force, $F_{Cas}$, is calculated using a standard potential of radius $\epsilon =10^{-4}$ centered at $a$ and $\eta=1$. }\label{Figure1Pot1Fcas}
\end{center}
\end{figure}
The Casimir force in this case is negative when the potential is close to the left boundary at $x=0$, while it is positive when the potential is close to the right boundary at $x=1$. In addition, the force vanishes at $a=1/2$ as one would expect. This means that the positive potential considered above is always attracted to the closest boundary, making $a=1/2$ a point of unstable equilibrium.
By setting $\eta=-1$ in (\ref{w11}) one obtains the potential illustrated in the first graph of Figure \ref{Figure2Pot2Fcas}. This potential is smooth, negative, and possesses a minimum at the point $a=1/2$. The Casimir force associated with this potential has been plotted in the second graph of Figure \ref{Figure2Pot2Fcas}.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{Potential2e10n4.eps}
\includegraphics[scale=0.85]{DataForm1P2e10n4.0a0.01to0.99k0to500.eps}
\caption{(a) The potential is shown over the domain of its support. (b) The Casimir force, $F_{Cas}$, is calculated using a standard potential of radius $\epsilon = 10^{-4}$ centered at $a$ and $\eta=-1$. }\label{Figure2Pot2Fcas}
\end{center}
\end{figure}
In this situation the behavior of the Casimir force is exactly opposite to the one found for the positive potential. In particular, the negative potential is always repelled from the closest boundary and $a=1/2$ is, in this case, a point of stable equilibrium.
\subsection{Doubly-peaked Positive and Negative Potential}
The doubly-peaked positive potential is constructed by setting $\eta=1/2$ and by adding two potentials of the form (\ref{w11}), one centered at $a+\epsilon/2$ and the other at $a-\epsilon/2$.
The resulting potential, shown in the first plot of Figure \ref{Figure3Pot5Fcas}, is smooth, positive, and possesses two maxima and a local minimum at $x=a$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{Potential5e10n4.eps}
\includegraphics[scale=0.85]{DataForm1P5e10n4.0a0.01to0.99k0to500.eps}
\caption{(a) The potential is shown over the domain of its support. (b) The Casimir force, $F_{Cas}$, is calculated using a double potential of radius $\epsilon = 10^{-4}$ centered at $a$. }\label{Figure3Pot5Fcas}
\end{center}
\end{figure}
The Casimir force for this potential is reported in the second graph of Figure \ref{Figure3Pot5Fcas}. The behavior of the force as
function of the position of the potential is qualitatively similar to the case of the positive potential. In particular, the doubly-peaked positive potential is
always attracted to the closest boundary.
The doubly-peaked negative potential is constructed in the same way as the doubly-peaked positive potential by setting $\eta=-1/2$. The resulting potential, characterized by two minima and a local maximum at $x=a$, is depicted in the first graph of Figure \ref{Figure4Pot6Fcas} and the associated Casimir force is plotted in the second graph.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{Potential6e10n4.eps}
\includegraphics[scale=0.85]{DataForm1P6e10n4.0a0.01to0.99k0to500.eps}
\caption{(a) The potential is shown over the domain of its support. (b) The Casimir force, $F_{Cas}$, is calculated using a double potential of radius $\epsilon = 10^{-4}$ centered at $a$. }\label{Figure4Pot6Fcas}
\end{center}
\end{figure}
Hence, the Casimir force in this case is analogous to the one found for the negative potential. This means that the doubly-peaked negative potential is always repelled from the closest boundary.
It is not surprising that the behavior of the Casimir force on the doubly-peaked potentials matches the one found for the single potentials considered in the previous subsection. This should be expected since, effectively, the potentials have been evenly split while maintaining the total area under the curve equal to that of the single potentials.
\subsection{Mixed Potential}
The mixed potential, illustrated in the first plot of Figure \ref{Figure5Pot3Fcas}, is obtained by adding two potentials of the form (\ref{w11}): one with $\eta=1/2$ centered at $a-\epsilon/2$ and the other with $\eta=-1/2$ centered at $a+\epsilon/2$. The resulting Casimir force acting on this potential is provided by the second plot in Figure \ref{Figure5Pot3Fcas}. It is observed that the force on the potential
is always negative, in contrast to the other cases considered. In other words, the mixed potential is repelled from the right boundary at $x=1$ but attracted to the left boundary at $x=0$.
\begin{figure}[h]
\begin{center}
\includegraphics[scale=0.85]{Potential3e10n4.eps}
\includegraphics[scale=0.85]{DataForm1P3e10n4.0a0.01to0.99k0to500.eps}
\caption{(a) The potential is shown over the domain of its support. (b) The Casimir force, $F_{Cas}$, is calculated using a double potential of radius $\epsilon = 10^{-4}$ centered at $a$. }\label{Figure5Pot3Fcas}
\end{center}
\end{figure}
It is interesting to notice that the opposite signs of the potential have the net effect of nearly canceling the force throughout most of the interval, except for the regions closer to the boundaries.
In the proximity of the right boundary the negative part of the potential becomes dominant and, therefore, the resulting Casimir force in that region resembles the one for negative potentials.
Near the left boundary, instead, the positive part of the potential becomes dominant resulting in a force that behaves similarly to the one found in the case of positive potentials.
\section{Higher dimensional pistons}\label{hdp}
In this section higher dimensional pistons modeled by potentials and constructed as product manifolds are studied. Let $M$ be a $D=(d+1)$-dimensional manifold such that $M=[0,L]\times {\cal N}$, where ${\cal N}$ is a smooth $d$-dimensional Riemannian manifold representing the additional Kaluza-Klein dimensions. From the manifold $M$ a piston configuration is obtained by modeling the piston itself with a smooth potential $V(x)$ having support in the interior of the interval $[0,L]$. In this setting, the manifold ${\cal N}$ represents the cross-section of the piston.
The dynamics of massless scalar fields propagating on $M$ under the influence of the potential $V(x)$ is described by the operator
\begin{eqnarray}
\mathcal{L} = - \frac{\partial^2}{\partial x^2} - \Delta_{{\cal N}} + V(x)\;, \label{2.17}
\end{eqnarray}
where $\Delta_{\cal N}$ denotes the Laplacian operator on the manifold ${\cal N}$. The eigenvalue equation
\begin{equation}\label{w12}
\mathcal{L}\phi(x,y)=\lambda^{2}\phi(x,y)\;,
\end{equation}
is separable and its solutions can, hence, be written as a product
\beq\label{w13}
\phi (x,y) = X(x) \varphi (y)\;,
\eeq
where $y$ denotes the coordinates on the manifold ${\cal N}$ and $\varphi (y)$ represents the eigenfunctions
of $\Delta_{\cal N}$ satisfying
\begin{eqnarray}
- \Delta _{{\cal N}} \varphi _\ell (y) = \eta_\ell ^2 \varphi _\ell (y)\;. \label{2.18}
\end{eqnarray}
By substituting the expression (\ref{w13}) in the equation (\ref{w12}), and by setting $\lambda^{2}=\nu^{2}+\eta^{2}_{\ell}$ one can show that the functions $X(x)$
are solutions to the equation
\begin{eqnarray}
\left( - \frac{d^2} {d x^2} + V(x) \right) X_\nu (x) = \nu^2 X_\nu (x)\;. \label{2.19}
\end{eqnarray}
The eigenvalues $\nu$ appearing in (\ref{2.19}) are uniquely determined once the boundary conditions for $X(x)$ have been specified. As previously done in Section \ref{onedim}, Dirichlet boundary conditions are imposed, namely,
\beq
X_\nu (0) = X_\nu (L) =0\;.
\eeq
The spectral zeta function for the system under consideration is
\begin{eqnarray}
\zeta (s) =\sum_{\lambda} \lambda^{-2s}= \sum_{\ell, \nu} (\nu^2 + \eta_\ell^2)^{-s}\;, \label{5emi}
\end{eqnarray}
which converges for $\Re (s) > (d+1)/2$.
It is clear, at this point, that since the eigenvalue equation (\ref{w12}) is separable, leading to (\ref{2.18}) and to (\ref{2.19}), the analysis of the Casimir energy for higher dimensional pistons and the associated Casimir force can be performed by using the methods described for the one-dimensional case.
Following the ideas developed in Section \ref{onedim}, the spectral zeta function (\ref{5emi}) is represented in terms of a contour integral as follows
\beq
\zeta (s) = \frac 1 {2\pi i} \sum_\ell \int\limits_\gamma d\nu\, (\nu^2 + \eta_\ell ^2)^{-s} \frac d {d\nu} \ln u_\nu (L)\;,
\eeq
where $u_{\nu}(x)$ are the solutions to the initial value problem (cf. Section \ref{onedim})
\begin{equation}
\left(-\frac{d^2}{dx^{2}}+V(x)\right)u_{\nu}(x)=\nu^{2}u_{\nu}(x)\;,\qquad u_{\nu}(0)=0\;,\quad u^{\prime}_{\nu}(0)=1\;.
\end{equation}
Due to the presence of the manifold ${\cal N}$ the analytically continued expression for the spectral zeta function $\zeta(s)$ will be written in terms of the spectral zeta function
associated with $\Delta_{\cal N}$, namely
\beq
\zeta_{{\cal N}} (s) = \sum_\ell \eta_\ell ^{-2s}\;,
\eeq
which is well defined for $\Re (s) > d/2$ and can be extended to a meromorphic function in the entire complex plane possessing only simple poles.
The standard technique of adding and subtracting the leading asymptotic terms then yields \cite{fucci12}
\beq & &\hspace{-.5cm}\zeta ^{(f)}(s) = \frac{\sin \pi s} \pi \sum_\ell \int\limits_{\eta _\ell}^\infty dk \,\, (k^2 - \eta_\ell^2)^{-s} \frac d {dk} \left\{ \ln u_{ik} (L) - kL + \ln (2k) -\sum_{j=0}^N \frac{d_j}{ k^{j}} \right\} , \label{2.20}\\
& &\hspace{-.5cm}\zeta ^{(as)} (s) = \frac 1 {2 \Gamma (s)} \left\{ \frac{ L \Gamma \left( s- \frac 1 2 \right)}{\sqrt \pi} \zeta _{{\cal N}} \left( s- \frac 1 2 \right)
- \Gamma (s) \zeta _{{\cal N}} (s) \right.\nn\\
& &\hspace{3.0cm}\left.- \sum_{j=1}^N j d_j \frac{ \Gamma \left( s+ \frac j 2 \right)}{\Gamma \left( 1 + \frac j 2 \right)} \zeta_{{\cal N}} \left( s + \frac j 2\right) \right\}.\label{2.21}\eeq
Here, $\zeta^{(f)}(s)$ can be proved to be well defined for $(D-N-2)/2<\Re (s)<1$. By choosing $N=D$, $\zeta^{(f)}(s)$ becomes an analytic function in the neighborhood of $s=-1/2$ and can, therefore, be used for the computation of the Casimir energy with no further manipulations. Using the well-known meromorphic structure of the spectral zeta function associated with the Laplacian on ${\cal N}$ \cite{kirs02b}, the relevant expression for $\zeta(s)$ at $s=-1/2$ reads
\beq\label{w22}
\zeta ^{(f)} (-1/2) &=& - \frac 1 \pi \sum_\ell \int\limits_{\eta_\ell } ^\infty dk \,\, (k^2 - \eta_\ell^2)^{1/2} \frac d {dk} \left\{ \ln u_{ik} (L) - kL + \ln (2k) - \sum_{j=1}^D \frac{d_j}{ k^{j}} \right\}, \;\;\;\;
\eeq
and
\begin{eqnarray}\label{w23}
\zeta ^{(as)} (-1/2+\alpha) &=& \frac 1 \alpha \Bigg\{ \frac L {4\pi} \zeta_{{\cal N}} (-1) - \frac 1 2 \mbox{Res } \zeta_{{\cal N}} (-1/2) + \frac{d_1 \zeta_{{\cal N}} (0)} {2\pi} \nn\\
&+& \sum_{j=2}^D \frac{d_j}{2\sqrt \pi} \frac{\Gamma \left( \frac{j-1} 2 \right)} {\Gamma \left( \frac j 2 \right)} \mbox{Res } \zeta_{{\cal N}} \left( \frac{j-1} 2 \right) \Bigg\}\nn\\
&+& \frac L {4\pi} \left[ \zeta_{{\cal N}} ' (-1) + \zeta_{{\cal N}} (-1) \left( \ln 4-1\right) \right]- \frac 1 2 \mbox{FP } \zeta_{{\cal N}} (-1/2) \nn\\
&+&\frac{d_1}{2\pi} \left[ \zeta_{{\cal N}} ' (0) + \zeta_{{\cal N}} (0) \left(\ln 4-2\right)\right]+\sum_{j=2}^D \frac{ d_j} {2\sqrt \pi} \frac{\Gamma \left( \frac{j-1} 2 \right)} {\Gamma \left( \frac j 2 \right) } \Bigg[ \mbox{FP} \zeta_{{\cal N}} \left( \frac{j-1} 2 \right) \nn\\
&+& \mbox{Res}\zeta_{{\cal N}} \left( \frac{j-1} 2 \right) \left(\Psi\left(\frac{j-1}{2}\right)-\gamma-\ln 4+2\right)\Bigg]+O(\alpha)\;,
\end{eqnarray}
where $\Psi(x)$ represents the logarithmic derivative of the Euler gamma function and $\gamma$ denotes the Euler-Mascheroni constant.
According to (\ref{2.4}), the Casimir energy is obtained by adding the expressions (\ref{w22}) and (\ref{w23}) and by multiplying by $1/2$. It is worth noting that
the above results are quite general and valid for an arbitrary smooth Riemannian manifold ${\cal N}$ and for any smooth potential with compact support in $[0,L]$. More explicit results can only be found once the manifold ${\cal N}$ and the potential $V(x)$ are completely specified. Despite the lack of an explicit expression for the Casimir energy, one can still make some general remarks. From the results (\ref{w22}) and (\ref{w23}) it is clear that the Casimir energy is, generally, divergent. However, since none of the terms in (\ref{w23}) depend on the variable $a$, the resulting Casimir force acting on the piston acquires contributions only from the finite part in (\ref{w22}) and is, hence, finite.
This section is concluded by discussing two specific examples, namely ${\cal N} =\reals$ and ${\cal N}=\reals^2$. In these cases the continuous spectrum resulting from the unrestricted dimensions can be integrated to obtain the following spectral zeta function densities
\beq
\zeta _{I\times \reals} (s) = \frac{\Gamma \left( s-\frac 1 2 \right)}{\sqrt {4\pi} \Gamma (s) } \zeta_I \left( s- \frac 1 2 \right)\;,
\eeq
and
\beq
\zeta_{I\times \reals^2} (s) = \frac 1 {4\pi (s-1)} \zeta _I (s-1)\;.
\eeq
Using the known meromorphic structure of $\zeta _I (s)$ it is then easily verified that
\beq
F_{Cas} ^{I\times \reals} = - \frac 1 {8\pi} \frac \partial {\partial a} \zeta_I ' (-1)
= \frac 1 {8\pi } \int\limits_m^\infty dk (k^2-m^2) \frac \partial {\partial a} \frac \partial {\partial k} \ln u_{ik} (L)\;,
\eeq
and
\beq
F_{Cas} ^{I\times \reals^2}= \frac 1 {12\pi} \frac \partial {\partial a} \mbox{FP }\zeta_I \left(-\frac 3 2\right)= \frac 1 {12\pi } \int\limits_m^\infty dk (k^2-m^2)^{3/2} \frac \partial {\partial a} \frac \partial {\partial k} \ln u_{ik} (L)\;,
\eeq
where, again, $\mbox{FP}$ denotes the finite part of the Laurent series expansion. Numerical results for the above expressions can be obtained by following the same procedure described in the previous section.
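To make the procedure concrete, the following Python sketch (not part of the original paper) evaluates $F_{Cas}^{I\times \reals}$ for an illustrative Gaussian piston potential. The initial value problem $u''=(k^2+V(x))u$ with $u(0)=0$, $u'(0)=1$, the Gaussian profile, the finite-difference steps, and the momentum cutoff are all assumptions made for this example:

```python
import math

def u_at_L(k, V, L, n=1000):
    """RK4 integration of u'' = (k^2 + V(x)) u with u(0) = 0, u'(0) = 1."""
    h = L / n
    x, u, up = 0.0, 0.0, 1.0
    def rhs(x, u, up):
        return up, (k * k + V(x)) * u
    for _ in range(n):
        a1, b1 = rhs(x, u, up)
        a2, b2 = rhs(x + h / 2, u + h / 2 * a1, up + h / 2 * b1)
        a3, b3 = rhs(x + h / 2, u + h / 2 * a2, up + h / 2 * b2)
        a4, b4 = rhs(x + h, u + h * a3, up + h * b3)
        u += h / 6 * (a1 + 2 * a2 + 2 * a3 + a4)
        up += h / 6 * (b1 + 2 * b2 + 2 * b3 + b4)
        x += h
    return u  # for V = 0 this reproduces sinh(k L) / k

def d2_lnu_da_dk(k, a, L, sigma=0.1, h=1e-2):
    """Mixed finite difference of ln u_{ik}(L) w.r.t. piston position a and k."""
    def lnu(kk, aa):
        # illustrative Gaussian "piston" potential centred at a (an assumption)
        V = lambda x: math.exp(-((x - aa) / sigma) ** 2)
        return math.log(u_at_L(kk, V, L))
    return (lnu(k + h, a + h) - lnu(k + h, a - h)
            - lnu(k - h, a + h) + lnu(k - h, a - h)) / (4 * h * h)

def casimir_force(a, L=1.0, m=0.0, kmax=20.0, nk=40):
    """Trapezoidal quadrature of F = (1/8 pi) int_m^kmax dk (k^2 - m^2) d_a d_k ln u."""
    dk = (kmax - m) / nk
    ks = [m + i * dk for i in range(nk + 1)]
    vals = [(k * k - m * m) * d2_lnu_da_dk(k, a, L) for k in ks]
    return (sum(vals) - 0.5 * (vals[0] + vals[-1])) * dk / (8 * math.pi)
```

For $V=0$ the integration reproduces the exact result $u_{ik}(L)=\sinh(kL)/k$, which serves as a sanity check, and the $k$-integral is truncated at a finite cutoff since the integrand decays rapidly with $k$.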
When ${\cal N}$ is either $\mathbb{R}$ or $\mathbb{R}^{2}$ the Casimir force on pistons modeled by the type of potentials
described in Section \ref{Sec:Examples} is qualitatively similar to the force found in one dimension for the same type of potentials.
Since in this case the plots of the force as a function of the position $a$ resemble the ones in Figures \ref{Figure1Pot1Fcas}-\ref{Figure5Pot3Fcas}(b), for brevity, they are not included here.
\section{Conclusions}
In this paper a method is developed to study the Casimir energy for massive scalar fields confined in finite volumes under the influence of smooth background potentials.
The pivotal point of our method relies on rewriting the eigenvalues of the relevant {\it boundary value problem} as solutions to a transcendental equation related to an equivalent {\it initial value problem}; see Equations (\ref{2.5}) and (\ref{2.6}). The starting point of our approach is the representation of the spectral zeta function associated with our models in terms of a complex integral. The analytic continuation of $\zeta(s)$ to a neighborhood of $s=-1/2$ was then achieved by adding and subtracting a suitable number of asymptotic terms from the integral representation. Although the form of the potential has been left unspecified, standard WKB techniques have allowed us to effectively compute the asymptotic expansions needed for the analytic continuation. The obtained analytic continuation of the spectral zeta function is, then, used in order to compute the Casimir force acting on a piston modeled by a smooth potential with compact support. In this case, it is found that while the Casimir energy is, in general, divergent the Casimir force on the piston is well defined. This can be immediately understood by noticing that the divergent terms in the energy
are independent of the position of the piston.
In the framework of pistons modeled by potentials it is found that the Casimir force can only be computed numerically once a potential has been specified.
For this reason, several types of Gaussian potentials have been considered and plots of the Casimir force on the piston as a function of the position $a$ have been provided.
We would like to stress, at this point, that in our examples Gaussian potentials have been chosen for illustrative purposes only.
In fact, the results obtained in this work are completely general and are valid for {\it any} smooth potential with compact support.
The numerical results obtained for the Casimir force in the various examples are consistent with our physical intuition, a fact that improves confidence in the presented analysis.
This work can naturally be continued by considering the effect that different types of boundary conditions, such as Neumann, Robin, or mixed, have on the Casimir force. This analysis would follow the lines presented here without any major technical complications. In fact, the boundary conditions determine uniquely the constants $A^{+}$ and $A^{-}$ in the asymptotic expansion of the functions in (\ref{w3}). Different types of boundary conditions will lead to different expressions for $A^{+}$ and $A^{-}$ but will keep the form of the asymptotic expansion (\ref{w6}) unchanged. The analytic continuation of the spectral zeta function would, then, proceed in the same way as presented in this work.
In addition, of particular interest is to study higher dimensional pistons modeled by potentials when the additional Kaluza-Klein manifold $\mathscr{N}$ allows for the explicit knowledge of the eigenvalue associated with $\Delta_{\mathscr{N}}$. Along these lines the authors are currently in the process of analyzing spherically symmetric and cylindrically symmetric configurations where angular momentum sums introduce additional technical and numerical complications.
Cryptocosma is a genus of moths of the family Crambidae. It contains only one species, Cryptocosma perlalis, which is found in Brazil, Suriname and Panama.
References
Acentropinae
Crambidae genera
Taxa named by Julius Lederer
Hemerodromia burdicki is a species of fly described by James David Macdonald in 1998. Hemerodromia burdicki belongs to the genus Hemerodromia and the family Empididae (dance flies). No subspecies are listed in the Catalogue of Life.
References
Dance flies
burdicki
'use strict';
// add the jquery
window.jQuery = window.$ = require('jquery');
// add system-tray
require('./js/system-tray.js');
// add the window menus
// just for testing purposes
require('./js/menus.js');
// add bootstrap scripts
require('./node_modules/bootstrap/dist/js/bootstrap.min.js');
// jexcel
require('./node_modules/jexcel/dist/js/excel-formula.min.js');
require('./node_modules/jexcel/dist/js/jquery.csv.min.js');
require('./node_modules/jexcel/dist/js/jquery.jcalendar.js');
require('./node_modules/jexcel/dist/js/jquery.jexcel.js');
require('./node_modules/jquery-ui-dist/jquery-ui.min.js');
// sweetalert
require('./node_modules/sweetalert/dist/sweetalert.min.js');
// main script
require('./js/main.js');
require('./js/settings.js');
# Two point charges +8q and +q are located at x = 0 and x = L respectively. The location of a point...

## Question:

Two point charges +8q and +q are located at x = 0 and x = L respectively. The location of a point on the x-axis at which the net electric field due to these two point charges is zero is

(a) L/4

(b) 2L

(c) 4L

(d) 8L

## Electric Field Due to a Stationary Charge:

Consider a stationary particle with charge $q$ C kept at position vector $\overrightarrow{x}$. The electric field it produces at a point with position vector $\overrightarrow{p}$ is
$$\overrightarrow{E} = \frac{q (\overrightarrow{p} - \overrightarrow{x})}{4\pi \epsilon_0 |\overrightarrow{p}-\overrightarrow{x}|^3}$$

Charge +8q is kept at x = 0, and charge +q is kept at x = L. Since both of these charges are positive, the electric field due to each points away from its position.

For x < 0, both charges give a field in the -x direction, so the field cannot be zero there. Similarly, for x > L the field cannot be zero. Therefore, the field must vanish at some x between 0 and L.

If this point did not lie on the x-axis, the fields from the two particles would each have an x- and a y-component; the x-components would be opposite, but the y-components would point in the same direction. Hence, for the field to be zero, the point must lie on the x-axis.

Let the point have coordinates (x, 0). The magnitude of the field due to the first particle is
$$E_1 = \frac{8q}{4\pi\epsilon_0 (x-0)^2} = \frac{8q}{4\pi\epsilon_0 x^2}$$
and the magnitude of the field due to the second particle is
$$E_2 = \frac{q}{4\pi\epsilon_0 (L-x)^2}$$

At the zero-field point the two magnitudes are equal:
$$E_1 = E_2\\ \frac{8q}{4\pi\epsilon_0 x^2} = \frac{q}{4\pi\epsilon_0 (L-x)^2}\\ \frac{8}{x^2} = \frac{1}{(L-x)^2}\\ 8(L-x)^2 = x^2\\ 2\sqrt2 (L-x) = x\\ x = \frac{2\sqrt2 L}{2\sqrt2+1}$$

Therefore, at x = $\frac{2\sqrt2 L}{2\sqrt2 +1}$, the net electric field is zero.
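The algebra above can be double-checked numerically; the short Python snippet below (an illustrative addition, not part of the original page) confirms that the two field magnitudes cancel at $x = 2\sqrt2 L/(2\sqrt2+1)$:

```python
import math

L = 1.0  # choose L = 1 without loss of generality
x = 2 * math.sqrt(2) * L / (2 * math.sqrt(2) + 1)  # claimed zero-field point

# Net field along +x, up to the common factor q / (4 * pi * eps0):
# +8q at x = 0 pushes right, +q at x = L pushes left.
E = 8 / x ** 2 - 1 / (L - x) ** 2
print(abs(E) < 1e-9)  # -> True: the two contributions cancel
```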
Q: How to show an "Application Closing..." message in Swing during the last JFrame dispose? While searching for a better method to exit a Swing application, weighing System.exit(0) against dispose(), I found a very good answer Here.
What I want to add is this: when I call dispose(), the current window gets disposed, but the JVM takes a few more moments to check for other open frames and threads before it exits. I want to show this short interval through a dialog saying: Application is closing...
How can I achieve this?
A: There is a class WindowUtilities in TUS on sourceforge that I wrote that ALMOST does what you want. You might want to take a look at it and see if you can adapt it for your purposes
AFGE TSA
The American Federation of Government Employees, Local 1260, AFL-CIO, began July 1, 2013.
After breaking off from a much larger local due to conflicts of size, then-AFGE Local 1234's Women's and Fair Practices Coordinator Victor Payes and Executive Vice President Bobby Orozco Jr. decided that it was time for a change. After having served on the Executive Board for Local 1234, which encompassed 40+ airports spanning three (3) states (AZ, CA, & HI), both Victor and Bobby realized the need for intimate and specific attention to their home airport of LAX and surrounding airports. As a result, the change was made from covering over 40 airports in three states to covering twelve (12) airports in one state.
The airports which are covered under the AFGE Local 1260 in Southern California are: Bob Hope Airport (BUR)--Burbank; McClellan-Palomar Airport (CRQ)--Carlsbad; Imperial County Airport (IPL)--Imperial; Los Angeles International Airport (LAX)--Los Angeles; Long Beach Airport (LGB)--Long Beach; Ontario International Airport (ONT)--Ontario; Palm Springs International Airport (PSP)--Palm Springs; San Diego International Airport (SAN)--San Diego; Santa Barbara Municipal Airport (SBA)--Santa Barbara; San Luis Obispo County Regional Airport (SBP)--San Luis Obispo; Santa Maria Public Airport (SMX)--Santa Maria; and John Wayne Airport (SNA)--Santa Ana.
AFGE Local 1260 is contained within District 12 (AZ, CA, HI, & NV) under the leadership of National Vice President George McCubbin III. AFGE District 12 falls under the Federation (AFGE National) and answers directly to current National Vice President of Women's and Fair Practices Jeremy Lannan, National Secretary-Treasurer Dr. Everett Kelly, and National President J. David Cox.
AFGE Local 1260 currently has over 1,900 dues-paying members. AFGE Local 1260 is committed to its membership and it is our goal to serve you, the member, in a manner that is efficient and professional. Therefore, it is our primary goal to ensure that the membership has all that it needs to function as a member of AFGE Local 1260.
Please refer to the "Contact Us" page of this Web site in order to allow us to serve you.
Thank you for your membership, service, and dedication.
admin@afge1260.org
OUR EXECUTIVE BOARD
Bobby Orozco Jr., M.S., MFHD
Gilbert Vasquez
David Chiv
Vice President LGB
Vice President Ontario-Palm Springs
Victor Payes
Secretary Treasurer
Danielle Hollis
Vice President BUR
Wendy Delozier
Vice President Central Coast
Sean Root
Vice President San Diego & Imperial
Samantha Mitchell
Vice President LAX
Ray Alarcon
Vice President SNA
Ron Gerber
Director of Human & Civil Rights
Erich Schmidt
Director of Political & Legislative Affairs
© 2021 AFGE TSA Local 1260, AFL-CIO
I have a lot of very good friends. I like being able to care about people without having to concern myself with whether it's technically supposed to be a romantic relationship, or a sexual relationship, or anything like that. (In point of fact I'm aromantic asexual.) There's a couple interesting words in the Queer Community about this, actually, although I'm not sure I can use them.
I don't just up and forget people. I drift. That's just a thing that happens. I don't stop caring, though. So here's a pinky promise with a Prospitian carapace.
/**************************************************************************
* FILE NAME
*
* remote_device.c
*
* COMPONENT
*
* OpenAMP Stack
*
* DESCRIPTION
*
* This file provides services to manage the remote devices.It also implements
* the interface defined by the virtio and provides few other utility functions.
*
*
**************************************************************************/
#include <string.h>
#include <openamp/rpmsg.h>
#include <openamp/remoteproc.h>
#include <metal/utilities.h>
#include <metal/alloc.h>
#include <metal/atomic.h>
#include <metal/cpu.h>
/* Macro to initialize vring HW info */
#define INIT_VRING_ALLOC_INFO(ring_info,vring_hw) \
(ring_info).vaddr = (vring_hw).vaddr; \
(ring_info).align = (vring_hw).align; \
(ring_info).num_descs = (vring_hw).num_descs
/* Local functions */
static int rpmsg_rdev_init_channels(struct remote_device *rdev);
/* Ops table for virtio device */
virtio_dispatch rpmsg_rdev_config_ops = {
rpmsg_rdev_create_virtqueues,
rpmsg_rdev_get_status,
rpmsg_rdev_set_status,
rpmsg_rdev_get_feature,
rpmsg_rdev_set_feature,
rpmsg_rdev_negotiate_feature,
rpmsg_rdev_read_config,
rpmsg_rdev_write_config,
rpmsg_rdev_reset
};
/**
* rpmsg_memb_match
*
* This internal function checks if the contents in two memories matches byte
* by byte. This function is needed because memcmp() or strcmp() does not
* always work across different memories.
*
* @param ptr1 - pointer to memory
* @param ptr2 - pointer to memory
* @param n - number of bytes to compare
*
* @return 0 if the contents in the two memories matches, otherwise -1.
*/
static int rpmsg_memb_match(const void *ptr1, const void *ptr2, size_t n)
{
size_t i;
const unsigned char *tmp1, *tmp2;
tmp1 = ptr1;
tmp2 = ptr2;
for (i = 0; i < n; i++, tmp1++, tmp2++) {
if (*tmp1 != *tmp2)
return -1;
}
return 0;
}
/**
* rpmsg_rdev_init
*
* This function creates and initializes the remote device. The remote device
* encapsulates virtio device.
*
* @param proc - pointer to hil_proc
* @param rdev - pointer to newly created remote device
* @param role - role of the other device, Master or Remote
* @param channel_created - callback function for channel creation
* @param channel_destroyed - callback function for channel deletion
* @param default_cb - default callback for channel
*
* @return - status of function execution
*
*/
int rpmsg_rdev_init(struct hil_proc *proc,
struct remote_device **rdev, int role,
rpmsg_chnl_cb_t channel_created,
rpmsg_chnl_cb_t channel_destroyed, rpmsg_rx_cb_t default_cb)
{
struct remote_device *rdev_loc;
struct virtio_device *virt_dev;
struct proc_shm *shm;
int status;
if (!proc)
return RPMSG_ERR_PARAM;
/* Initialize HIL data structures for given device */
if (hil_init_proc(proc))
return RPMSG_ERR_DEV_INIT;
/* Create software representation of remote processor. */
rdev_loc = (struct remote_device *)metal_allocate_memory(sizeof(struct remote_device));
if (!rdev_loc) {
return RPMSG_ERR_NO_MEM;
}
memset(rdev_loc, 0x00, sizeof(struct remote_device));
metal_mutex_init(&rdev_loc->lock);
rdev_loc->proc = proc;
rdev_loc->role = role;
rdev_loc->channel_created = channel_created;
rdev_loc->channel_destroyed = channel_destroyed;
rdev_loc->default_cb = default_cb;
/* Restrict the ept address - zero address can't be assigned */
rdev_loc->bitmap[0] = 1;
/* Initialize the virtio device */
virt_dev = &rdev_loc->virt_dev;
virt_dev->device = proc;
virt_dev->func = &rpmsg_rdev_config_ops;
if (virt_dev->func->set_features != RPMSG_NULL) {
virt_dev->func->set_features(virt_dev, proc->vdev.dfeatures);
}
if (rdev_loc->role == RPMSG_REMOTE) {
/*
* Since device is RPMSG Remote so we need to manage the
* shared buffers. Create shared memory pool to handle buffers.
*/
shm = hil_get_shm_info(proc);
rdev_loc->mem_pool =
sh_mem_create_pool(shm->start_addr, shm->size,
RPMSG_BUFFER_SIZE);
if (!rdev_loc->mem_pool) {
/* avoid leaking rdev_loc on this error path */
metal_free_memory(rdev_loc);
return RPMSG_ERR_NO_MEM;
}
}
if (!rpmsg_rdev_remote_ready(rdev_loc)) {
/* avoid leaking resources allocated above */
if (rdev_loc->mem_pool)
sh_mem_delete_pool(rdev_loc->mem_pool);
metal_free_memory(rdev_loc);
return RPMSG_ERR_DEV_INIT;
}
/* Initialize endpoints list */
metal_list_init(&rdev_loc->rp_endpoints);
/* Initialize channels for RPMSG Remote */
status = rpmsg_rdev_init_channels(rdev_loc);
if (status != RPMSG_SUCCESS) {
/* avoid leaking resources allocated above */
if (rdev_loc->mem_pool)
sh_mem_delete_pool(rdev_loc->mem_pool);
metal_free_memory(rdev_loc);
return status;
}
*rdev = rdev_loc;
return RPMSG_SUCCESS;
}
/**
* rpmsg_rdev_deinit
*
* This function un-initializes the remote device.
*
* @param rdev - pointer to remote device to deinit.
*
* @return - none
*
*/
void rpmsg_rdev_deinit(struct remote_device *rdev)
{
struct metal_list *node;
struct rpmsg_channel *rp_chnl;
struct rpmsg_endpoint *rp_ept;
while(!metal_list_is_empty(&rdev->rp_channels)) {
node = rdev->rp_channels.next;
rp_chnl = metal_container_of(node, struct rpmsg_channel, node);
if (rdev->channel_destroyed) {
rdev->channel_destroyed(rp_chnl);
}
if ((rdev->support_ns) && (rdev->role == RPMSG_MASTER)) {
rpmsg_send_ns_message(rdev, rp_chnl, RPMSG_NS_DESTROY);
}
/* Delete default endpoint for channel */
if (rp_chnl->rp_ept) {
rpmsg_destroy_ept(rp_chnl->rp_ept);
}
_rpmsg_delete_channel(rp_chnl);
}
/* Delete name service endpoint */
metal_mutex_acquire(&rdev->lock);
rp_ept = rpmsg_rdev_get_endpoint_from_addr(rdev, RPMSG_NS_EPT_ADDR);
metal_mutex_release(&rdev->lock);
if (rp_ept) {
_destroy_endpoint(rdev, rp_ept);
}
metal_mutex_acquire(&rdev->lock);
rdev->rvq = 0;
rdev->tvq = 0;
if (rdev->mem_pool) {
sh_mem_delete_pool(rdev->mem_pool);
rdev->mem_pool = 0;
}
metal_mutex_release(&rdev->lock);
hil_free_vqs(&rdev->virt_dev);
metal_mutex_deinit(&rdev->lock);
metal_free_memory(rdev);
}
/**
* rpmsg_rdev_get_chnl_from_id
*
* This function returns channel node based on channel name. It must be called
* with mutex locked.
*
* @param stack - pointer to remote device
* @param rp_chnl_id - rpmsg channel name
*
* @return - rpmsg channel
*
*/
struct rpmsg_channel *rpmsg_rdev_get_chnl_from_id(struct remote_device *rdev,
char *rp_chnl_id)
{
struct rpmsg_channel *rp_chnl;
struct metal_list *node;
metal_list_for_each(&rdev->rp_channels, node) {
rp_chnl = metal_container_of(node, struct rpmsg_channel, node);
if (!rpmsg_memb_match(rp_chnl->name, rp_chnl_id,
sizeof(rp_chnl->name))) {
return rp_chnl;
}
}
return RPMSG_NULL;
}
/**
* rpmsg_rdev_get_endpoint_from_addr
*
* This function returns endpoint node based on src address. It must be called
* with mutex locked.
*
* @param rdev - pointer remote device control block
* @param addr - src address
*
* @return - rpmsg endpoint
*
*/
struct rpmsg_endpoint *rpmsg_rdev_get_endpoint_from_addr(struct remote_device *rdev,
unsigned long addr)
{
struct rpmsg_endpoint *rp_ept;
struct metal_list *node;
metal_list_for_each(&rdev->rp_endpoints, node) {
rp_ept = metal_container_of(node,
struct rpmsg_endpoint, node);
if (rp_ept->addr == addr) {
return rp_ept;
}
}
return RPMSG_NULL;
}
/*
* rpmsg_rdev_notify
*
* This function checks whether remote device is up or not. If it is up then
* notification is sent based on device role to start IPC.
*
* @param rdev - pointer to remote device
*
* @return - status of function execution
*
*/
int rpmsg_rdev_notify(struct remote_device *rdev)
{
struct virtio_device *vdev = &rdev->virt_dev;
hil_vdev_notify(vdev);
return RPMSG_SUCCESS;
}
/**
* rpmsg_rdev_init_channels
*
* This function is only applicable to RPMSG remote. It obtains channel IDs
* from the HIL and creates RPMSG channels corresponding to each ID.
*
* @param rdev - pointer to remote device
*
* @return - status of function execution
*
*/
int rpmsg_rdev_init_channels(struct remote_device *rdev)
{
struct rpmsg_channel *rp_chnl;
struct proc_chnl *chnl_info;
int num_chnls, idx;
metal_list_init(&rdev->rp_channels);
if (rdev->role == RPMSG_MASTER) {
chnl_info = hil_get_chnl_info(rdev->proc, &num_chnls);
for (idx = 0; idx < num_chnls; idx++) {
rp_chnl =
_rpmsg_create_channel(rdev, chnl_info[idx].name,
0x00, RPMSG_NS_EPT_ADDR);
if (!rp_chnl) {
return RPMSG_ERR_NO_MEM;
}
rp_chnl->rp_ept =
rpmsg_create_ept(rp_chnl, rdev->default_cb, rdev,
RPMSG_ADDR_ANY);
if (!rp_chnl->rp_ept) {
return RPMSG_ERR_NO_MEM;
}
rp_chnl->src = rp_chnl->rp_ept->addr;
}
}
return RPMSG_SUCCESS;
}
/**
* check if the remote is ready to start RPMsg communication
*/
int rpmsg_rdev_remote_ready(struct remote_device *rdev)
{
struct virtio_device *vdev = &rdev->virt_dev;
uint8_t status;
if (rdev->role == RPMSG_MASTER) {
while (1) {
status = vdev->func->get_status(vdev);
/* Busy wait until the remote is ready */
if (status & VIRTIO_CONFIG_STATUS_NEEDS_RESET) {
rpmsg_rdev_set_status(vdev, 0);
hil_vdev_notify(vdev);
} else if (status & VIRTIO_CONFIG_STATUS_DRIVER_OK) {
return true;
}
metal_cpu_yield();
}
} else {
return true;
}
/* Never come here */
return false;
}
/**
*------------------------------------------------------------------------
* The rest of the file implements the virtio device interface as defined
* by the virtio.h file.
*------------------------------------------------------------------------
*/
int rpmsg_rdev_create_virtqueues(struct virtio_device *dev, int flags, int nvqs,
const char *names[], vq_callback * callbacks[],
struct virtqueue *vqs_[])
{
struct remote_device *rdev;
struct vring_alloc_info ring_info;
struct virtqueue *vqs[RPMSG_MAX_VQ_PER_RDEV];
struct proc_vring *vring_table;
void *buffer;
struct metal_sg sg;
int idx, num_vrings, status;
(void)flags;
(void)vqs_;
rdev = (struct remote_device *)dev;
/* Get the vring HW info for the given virtio device */
vring_table = hil_get_vring_info(&rdev->proc->vdev, &num_vrings);
if (num_vrings > nvqs) {
return RPMSG_ERR_MAX_VQ;
}
/* Create virtqueue for each vring. */
for (idx = 0; idx < num_vrings; idx++) {
INIT_VRING_ALLOC_INFO(ring_info, vring_table[idx]);
if (rdev->role == RPMSG_REMOTE) {
metal_io_block_set(vring_table[idx].io,
metal_io_virt_to_offset(vring_table[idx].io,
ring_info.vaddr),
0x00,
vring_size(vring_table[idx].num_descs,
vring_table[idx].align));
}
status =
virtqueue_create(dev, idx, (char *)names[idx], &ring_info,
callbacks[idx], hil_vring_notify,
rdev->proc->sh_buff.io,
&vqs[idx]);
if (status != RPMSG_SUCCESS) {
return status;
}
}
//FIXME - a better way to handle this , tx for master is rx for remote and vice versa.
if (rdev->role == RPMSG_MASTER) {
rdev->tvq = vqs[0];
rdev->rvq = vqs[1];
} else {
rdev->tvq = vqs[1];
rdev->rvq = vqs[0];
}
if (rdev->role == RPMSG_REMOTE) {
sg.io = rdev->proc->sh_buff.io;
sg.len = RPMSG_BUFFER_SIZE;
for (idx = 0; ((idx < rdev->rvq->vq_nentries)
&& ((unsigned)idx < rdev->mem_pool->total_buffs / 2));
idx++) {
/* Initialize TX virtqueue buffers for remote device */
buffer = sh_mem_get_buffer(rdev->mem_pool);
if (!buffer) {
return RPMSG_ERR_NO_BUFF;
}
sg.virt = buffer;
metal_io_block_set(sg.io,
metal_io_virt_to_offset(sg.io, buffer),
0x00,
RPMSG_BUFFER_SIZE);
status =
virtqueue_add_buffer(rdev->rvq, &sg, 0, 1,
buffer);
if (status != RPMSG_SUCCESS) {
return status;
}
}
}
return RPMSG_SUCCESS;
}
unsigned char rpmsg_rdev_get_status(struct virtio_device *dev)
{
struct hil_proc *proc = dev->device;
struct proc_vdev *pvdev = &proc->vdev;
struct fw_rsc_vdev *vdev_rsc = pvdev->vdev_info;
if (!vdev_rsc)
return -1;
atomic_thread_fence(memory_order_seq_cst);
return vdev_rsc->status;
}
void rpmsg_rdev_set_status(struct virtio_device *dev, unsigned char status)
{
struct hil_proc *proc = dev->device;
struct proc_vdev *pvdev = &proc->vdev;
struct fw_rsc_vdev *vdev_rsc = pvdev->vdev_info;
if (!vdev_rsc)
return;
vdev_rsc->status = status;
atomic_thread_fence(memory_order_seq_cst);
}
uint32_t rpmsg_rdev_get_feature(struct virtio_device *dev)
{
return dev->features;
}
void rpmsg_rdev_set_feature(struct virtio_device *dev, uint32_t feature)
{
dev->features |= feature;
}
uint32_t rpmsg_rdev_negotiate_feature(struct virtio_device *dev,
uint32_t features)
{
(void)dev;
(void)features;
return 0;
}
/*
* Read/write a variable amount from the device specific (ie, network)
* configuration region. This region is encoded in the same endian as
* the guest.
*/
void rpmsg_rdev_read_config(struct virtio_device *dev, uint32_t offset,
void *dst, int length)
{
(void)dev;
(void)offset;
(void)dst;
(void)length;
return;
}
void rpmsg_rdev_write_config(struct virtio_device *dev, uint32_t offset,
void *src, int length)
{
(void)dev;
(void)offset;
(void)src;
(void)length;
return;
}
void rpmsg_rdev_reset(struct virtio_device *dev)
{
(void)dev;
return;
}
Meet the lucky couple who got engaged during the St. Patrick's Day Parade in Pittsburgh
It involved Irish music, a shamrock-bedecked bus, and hordes of excited onlookers.
This fairytale chariot comes with shamrocks.
Courtesy of Ryan McArdle
Rossilynne Culgan
Mar. 17, 2018, 4:40 p.m.
In what may be the most Pittsburgh engagement ever, Ryan McArdle proposed to Robyn Pawlos today during the St. Patrick's Day Parade with the help of a Port Authority bus.
McArdle (yes, you probably recognize his last name from McArdle Roadway), usually drives the 44 bus route for Port Authority. But today, he drove the Port Authority's tricked out St. Patrick's bus along the parade route, making just one stop: On Grant Street, Downtown, where his future wife watched the parade with their family and friends.
Donning an Irish tweed cap, he stopped the bus at Grant Street and Third Avenue, hopped out and got on one knee with a diamond ring in front of Pawlos who said she was "in total shock."
"He got off the bus and got down on one knee, and I was crying and everybody's crying," she said. "I definitely said yes — 110 percent."
After taking a moment to compose herself, she wrapped her groom-to-be in a hug before he continued on the parade route.
Tweet from @PGHtransit
No, she didn't ride the rest of the route with him, she said, jokingly: "I didn't have my bus fare there! I couldn't get on the bus."
Both McArdle, 39, and Pawlos, 34, have Irish heritage, so St. Patrick's Day is a special holiday for them, and they always watch the parade from the same spot along Grant Street at Third Avenue.
"It's always been a special day in my family," he said. "Starting with me and my grandfather going to the parade as young kids, and it became a part of my tradition."
For about six weeks, he had been hatching his plan for a St. Patrick's Day engagement, planning to walk in the parade with a sign. But then yesterday, he heard there was going to be a Port Authority bus in the parade, and it needed a driver.
"I went to my supervisor and said 'I will drive the bus as long as I can put my sign on it,'" he said.
Permission was quickly granted, and McArdle worked with a friend to affix the sign to the green shamrock-bedecked bus, which played Irish music from its speakers. It certainly attracted attention on the parade route, with excited onlookers very interested in the bus and its sign in particular.
This is the first year Port Authority has driven a bus in the parade. The agency's new CEO Katharine Kelleman is committed to being "more active and involved in the community and that means participating in events like the St. Patrick's Day Parade," a spokesman said.
"People were asking 'Did she say yes?!,' and I'm like, 'She's down on Third!'" McArdle said. "Everybody was asking and taking pictures, and I felt like I was driving on a cloud."
The Port Authority's bus was one of the last floats in the parade on a 30-degree day, and as it finally neared, a friend suggested Pawlos stand for a photo with the bus approaching in the background. Then, somebody shouted, "look at that sign!," so she turned around and saw the surprise everybody else knew was coming.
"Not a clue — right over my head. Everybody else knew," she said. "Everybody knew except me."
And though it was "freezing out, and we are waiting and waiting on this bus," Pawlos said, true love is worth the wait.
"I would wait for that any day of the week," she said.
Wedding plans will be in the works soon for the Overbrook couple, but for now, they're going to celebrate with their loved ones at the family bar, McArdle's Pub in the South Side, on what's become a very lucky St. Patrick's Day.
"It's definitely Pittsburgh," the bride-to-be said. "And it's definitely Irish."
Celebrating at McArdle's Pub.
Katharine Eagan Kelleman, Port Authority
Pittsburgh food in June 2019: What's opening and what's closing
Sip (and snap) Instagrammable coffee, try some new wine, and indulge in old world pizza.
By Rossilynne Culgan
· May. 31, 2019
Peculiar Pittsburgh
In Pittsburgh art, beauty and fries are in the eye of the beholder
This is a story of a lasting — albeit unintentional — testament to this city's greatest vice. A window into our french-fried souls.
By Colin Deppen
Nominate Pittsburgh rising stars for our third Who's Next: Technology class
Tell us by 10 a.m. on Monday, June 17.
Like hockey? You have Pittsburgh to thank for that.
Long before the Penguins, Pittsburgh rooted for the Keystones, the Professionals, the Victorias, and the Bankers.
What to do this week in Pittsburgh: May 27-June 2, 2019
Listen to jazz, learn Tai Chi, sip tea with a drag queen, and get a psychic reading.
The big list of winners in Pittsburgh's May 2019 primary election
Several shakeups amid low voter turnout.
What to do this week in Pittsburgh: May 20-26, 2019
Dance the salsa, rock out at KayaFest, and ask a date to adult Prom.
The Procrastinator's Guide to Pittsburgh's May 21 primary
Democracy in last-minute action.
Explore the ultimate Pittsburgh bucket list with this new book
How many have you checked off?
Who's Next: Animal Advocates; 10 young Pittsburghers helping animals
Animals love our latest Who's Next class. We think you will, too.
Write poetry, learn to garden, and party like it's 1999.
Jeff Goldblum — yes, that Jeff Goldblum — is the new voice of Carnegie Science Center's Buhl Planetarium
Star power among the planetarium's stars.
In the Steel City, a blast furnace becomes the setting for Shakespeare
It's a "once-in-a-lifetime" show.
Siren: Sheetz is debuting a coffee-flavored beer
Take that, Wawa.
The 14 best Pittsburgh apartments for rent in May 2019
Larry David artwork, a spiral staircase, a gilded mansion — it's all up for grabs this month.
What to do this week in Pittsburgh: May 6-12, 2019
This week is all about music.
Running for those who cannot, Pittsburgh marathoners pay tribute to Tree of Life victims
"It's a symbolic run. It's going to be very emotional."
Pittsburgh food in May 2019: What's opening and what's closing
New right now: Eggplant parm, arepas, and martinis.
Is this Pittsburgh pizza actually the best in America? We tried a slice …
Find the Mee-Maw pizza at Caliente starting May 1.
· Apr. 30, 2019
Nominate Pittsburgh up-and-comers for our 2019 Who's Next: Education class
This Who's Next class is in session.
More Incline
\section{Introduction}
The T2K collaboration has recently reported the reactor-angle range \cite{Abe:2011sj}
\begin{equation}\label{exp}
0.087 (0.100)\le \sin\theta_{13}\le 0.275 (0.306),
\end{equation}
with a best fit value of $\sin \theta_{13}=0.17\,(0.19)$ for the normal (inverted) hierarchy in the
neutrino masses. Such a result must be taken only as a hint, since it stands at the 2$\sigma$ level.
We nevertheless consider the implications of this important indication in this paper.
In the last decade, the tri-bimaximal (TB) mixing pattern introduced by Harrison {\it et al.} in
2002\,\cite{Harrison:2002er}
\begin{equation}
U_{\text{TB}}=
\left(
\begin{array}{ccc}
\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}& 0\\
-\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{3}} & -\frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{6}} &\frac{1}{\sqrt{3}} & \frac{1}{\sqrt{2}}\\
\end{array}
\right),
\end{equation}
has been used as a guide for the flavor problem in neutrino physics. However, if the T2K
result is confirmed, it will have a strong impact on this point of view. In fact, most of the models
predicting tri-bimaximal mixing at the leading order are compatible
only with small values of the reactor angle. In a generic model, the three mixing angles receive
corrections of the same order. Departures of the solar angle from its trimaximal value are at
most of $\mathcal{O}$($\lambda_C^2$), where
$\lambda_C\approx 0.2$. Therefore the deviation of the reactor angle can be of order
$\lambda_C^2 \approx 0.04$ only \cite{Altarelli:2010gt}, which is about half of the lower bound
in\,(\ref{exp}). This is only an estimate and each model must be considered case by case; we
nevertheless expect that most of them are on the border of validity, if not excluded completely.
It is therefore important to search for models
with large $\theta_{13}$, maximal atmospheric mixing angle and trimaximal\footnote{The word
``trimaximal'' has been used with a different meaning in previous studies \cite{trimaximal}. Here we mean
$\sin^2\theta_{12} = 1/3$ by a trimaximal solar angle.} solar angle. Recently, some works fitting the T2K
result have been presented \cite{large13}. Several models based on discrete flavor symmetries predicting a large reactor mixing angle existed before the T2K data; for an incomplete list see \cite{Hirsch:2007kh}, and for a classification of flavor-symmetry models according to their predictions for the reactor angle see \cite{Albright:2009cn}.
In this paper, we study the possibility of obtaining a large reactor angle together with tri-bimaximal
values of the solar and atmospheric mixing angles. The lepton mixing matrix with such a mixing
pattern was first proposed by King in \cite{King:2009qt} and called Tri-bimaximal-reactor (TBR)
mixing. Using this mixing matrix, we find the structure of the deviations of the neutrino mass
matrix from its TB texture that leads to TBR mixing. We then show that such a particular deviation
of the neutrino mass matrix can arise in a model with $S_4$ flavor symmetry.
The paper is organized as follows. In section \ref{LR} we discuss the conditions for obtaining a
mass matrix with maximal atmospheric mixing angle, trimaximal solar mixing angle and a non-zero
reactor mixing angle within the T2K region. In section \ref{model}, we present a model based on
$S_4$, the group of permutations of four objects, in which the neutrino mass matrix takes the
particular form discussed in section \ref{LR}. Finally, we discuss the phenomenology of the model
and conclude in section \ref{pheno}.
\section{Large Reactor Tri-Bimaximal mixing and neutrino mass matrix}
\label{LR}
In this section, we study the structure of the neutrino mass matrix (in the diagonal basis of the
charged leptons) that gives maximal atmospheric angle $\theta_{23}= \pi/4$, trimaximal
solar angle $\sin\overline{\theta}_{12}= 1/\sqrt{3}$ and an arbitrary reactor angle
$\theta_{13}=\lambda$. In the standard PDG \cite{Nakamura:2010zzi} parametrization,
the lepton mixing matrix with the above values of mixing angles is given by \cite{King:2009qt}
\begin{equation}\label{LRTB}
U_{\text{TBR}}=R_{23} (\frac{\pi}{4})\,R_{13}(\lambda) \,R_{12}(\overline{\theta}_{12})=
\left(
\begin{array}{ccc}
\sqrt{\frac{2}{3}}&\frac{1}{\sqrt{3}}& \lambda\\
-\frac{1}{\sqrt{6}}+\frac{\lambda}{\sqrt{3}} &\frac{1}{\sqrt{3}}+\frac{\lambda}{\sqrt{6}} &
-\frac{1}{\sqrt{2}}\\
-\frac{1}{\sqrt{6}}-\frac{\lambda}{\sqrt{3}} &\frac{1}{\sqrt{3}}-\frac{\lambda}{\sqrt{6}} &
\frac{1}{\sqrt{2}}\\
\end{array}
\right)+\mathcal{O}(\lambda^2).
\end{equation}
We do not consider CP violation in the lepton sector and, for simplicity, assume that the above
parameters are real. The neutrino mass matrix diagonalized by\,(\ref{LRTB}) is
given by
\begin{equation}\label{mLRTB}
m_\nu^{\text{TBR}}=U_{\text{TBR}}\cdot m_\nu^{\text{diag}} \cdot U_{\text{TBR}}^T= m_\nu^{TB}+\delta
m_\nu
\end{equation}
where $m_\nu^{\text{diag}}$ is the diagonal matrix of the neutrino mass eigenvalues $m_1$,
$m_2$ and $m_3$. This leads to the following structure of the neutrino mass matrix
\begin{equation}\label{mntbm}
m_\nu^{\text{TB}}=
\left(
\begin{array}{ccc}
2y-x & x&x\\
x & y+z&y-z\\
x & y-z&y+z
\end{array}
\right),
\end{equation}
where $x=(m_2-m_1)/3$, $y=(m_1+2m_2)/6$ and $z=m_3/2$ and
\begin{equation}\label{deviation}
\delta m_\nu=\lambda
\left(
\begin{array}{ccc}
0 & \alpha_1& -\alpha_1\\
\alpha_1 & \beta_1&0\\
-\alpha_1 & 0& -\beta_1
\end{array}
\right)+
\lambda^2 \left(
\begin{array}{ccc}
\gamma & \alpha_2 & \alpha_2\\
\alpha_2 & \beta_2&-\beta_2\\
\alpha_2 & -\beta_2& \beta_2
\end{array}
\right)+
\sum_{n\ge 3} \lambda^n\left(
\begin{array}{ccc}
0 & \alpha_n& (-1)^n\alpha_n\\
\alpha_n & 0&0\\
(-1)^n\alpha_n & 0& 0
\end{array}
\right),
\end{equation}
with $\alpha_1=-(x-2y+2z)/\sqrt{2}$, $\beta_1=\sqrt{2}x$, $\alpha_2=-x/2$, $\beta_2=-(x-2y+z)/2$
and $\gamma=x-2y+2z$. Note that $\beta_2$ can be reabsorbed into the TB term $m_\nu^{\text{TB}}$.
The above form of neutrino mass matrix predicts maximal atmospheric mixing angle and trimaximal
solar mixing angle if all the terms with all powers of $\lambda$ are taken into account.
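This decomposition can be checked numerically. The sketch below (with illustrative mass values, not a fit to data) builds $m_\nu^{\text{TBR}}$ from the exact rotations of eq.\,(\ref{LRTB}), subtracts the TB part of eq.\,(\ref{mntbm}), and compares the leading deviation with the coefficients $\alpha_1$ and $\beta_1$ quoted above:

```python
import numpy as np

# Rotation conventions chosen so that R23(pi/4) R13 R12 reproduces eq. (3).
def R23(t):
    return np.array([[1, 0, 0],
                     [0, np.cos(t), -np.sin(t)],
                     [0, np.sin(t),  np.cos(t)]])

def R13(t):
    return np.array([[ np.cos(t), 0, np.sin(t)],
                     [ 0,         1, 0        ],
                     [-np.sin(t), 0, np.cos(t)]])

def R12(t):
    return np.array([[ np.cos(t), np.sin(t), 0],
                     [-np.sin(t), np.cos(t), 0],
                     [ 0,         0,         1]])

# Illustrative mass eigenvalues in eV (normal ordering), NOT a fit to data
m1, m2, m3 = 0.005, 0.010, 0.050
x, y, z = (m2 - m1) / 3, (m1 + 2 * m2) / 6, m3 / 2

lam = 0.15                                   # sin(theta_13)
U = R23(np.pi / 4) @ R13(np.arcsin(lam)) @ R12(np.arcsin(1 / np.sqrt(3)))

m_TBR = U @ np.diag([m1, m2, m3]) @ U.T      # exact TBR mass matrix
m_TB = np.array([[2 * y - x, x,     x    ],
                 [x,         y + z, y - z],
                 [x,         y - z, y + z]])
dm = m_TBR - m_TB                            # the deviation delta m_nu

alpha1 = -(x - 2 * y + 2 * z) / np.sqrt(2)
beta1 = np.sqrt(2) * x
# dm[0,1]/lam and dm[1,1]/lam match alpha1 and beta1 up to O(lambda) terms
```

The entries $(\delta m_\nu)_{12}$ and $(\delta m_\nu)_{13}$ come out opposite in sign up to $\mathcal{O}(\lambda^2)$, reflecting the $\mu$-$\tau$ antisymmetry of the leading term.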
If one truncates the series in eq.\,(\ref{deviation}) at $n < 3$, the neutrino mass matrix then
implies
\begin{itemize}
\item (A) negligible deviations from maximality in the atmospheric mixing angle;
\item (B) small deviation from trimaximality in the solar mixing angle;
\item (C) the prediction $m_{ee} \propto \lambda^2$ for the neutrinoless double beta decay amplitude.
\end{itemize}
The prediction (C) is evident from eq.\,(\ref{deviation}), and we verify (A) and (B) numerically in
section \ref{pheno}. We observe that the leading $\mathcal{O}(\lambda)$ structure of the deviation
$\delta m_\nu$ in eq.\,(\ref{deviation}) is $\mu$-$\tau$ antisymmetric,
see\,\cite{Grimus:2006jz}\footnote{Note that the $\mathcal{O}(\lambda)$ structure of the deviation
$\delta m_\nu$ in eq.\,(\ref{deviation}) is similar to the one found by T. Araki in
Ref.~\cite{large13}, where (in contrast to our case) the solar angle is not fixed to the
trimaximal value.}. Therefore a possible flavor symmetry leading to the neutrino mass matrix texture
(\ref{mLRTB}) must contain the group $S_2$ of $\mu$-$\tau$ permutations and must be
compatible with tri-bimaximal mixing in the unperturbed limit. One possible flavor symmetry with
these features is $S_4$, which contains $S_2$ as a subgroup and leads to tri-bimaximal
mixing\,\cite{Lam:2008sh}.
\section{The Model}
\label{model}
We assume an $S_4$ (see appendix) flavor symmetry together with an extra abelian $Z_N$ symmetry that
separates the charged lepton sector from the neutrino sector, as is usual in models for TB mixing;
see for instance \cite{Altarelli:2010gt}. In order to keep the model as simple as possible and to
make its main features clear, we do not enter into the details of the particular
$Z_N$ symmetry required in this model. Our purpose is to show that the neutrino mass matrix
(\ref{mLRTB}), with the structure given by (\ref{mntbm}) and (\ref{deviation}), can be obtained from
a symmetry principle. We assume that the light neutrino masses arise from both type-I and type-II
seesaw and introduce only one right-handed neutrino. The matter content of our model is given in
table\,\ref{tab1}.
\begin{table}[h!]
\begin{tabular}{|c|c|c|c|c||c|c||c|c|}
\hline
& $L$ & $l_R$&$\nu_R$ & $h$ & $\Delta_{}$ & $\phi_{}$ & $\varphi_l$ & $\xi_l$ \\
\hline
$SU_L(2)$ & 2 & 1 & 1 & 2 & 3 & 1 &1&1\\
\hline
$S_4$ & $3_1$ & $3_1$ & $1_1$ & $1_1$ & $3_1$ & $3_1$ & $2$&$1_1$\\
\hline
\end{tabular}
\caption{Matter content of the model giving TB mixing at the leading order}\label{tab1}
\end{table}
In the scalar sector, we have one $SU_L(2)$ triplet $\Delta$ and one singlet $\phi$ in the
neutrino sector, both transforming as $3_1$ of $S_4$. In the charged lepton sector we have two
electroweak singlets $\varphi_l$ and $\xi_l$, transforming as a doublet and a singlet of $S_4$,
respectively. As already mentioned, the two sectors can be separated by introducing an
abelian $Z_N$ symmetry under which $l_R$, $\varphi_l$ and $\xi_l$ are charged while the other
fields are $Z_N$ singlets. The Yukawa interactions of the model are
\begin{eqnarray}
-\mathcal{L}_l &=& \frac{1}{\Lambda} y_1 (\overline{L}l_R)_{1_1}h\xi_l + \frac{1}{\Lambda} y_2
(\overline{L}l_R)_2h\varphi_l+h.c.\\
-\mathcal{L}_\nu &=& y_a LL\Delta + \frac{y_b}{\Lambda} (\overline{L}
\phi)_{1_1}\tilde{h}\nu_R+\frac{1}{2}M\nu^c\nu^c+h.c.
\end{eqnarray}
where $\Lambda$ is an effective scale. We assume the following $S_4$ alignment for the vacuum
expectation values (vevs) of the scalar fields:
\begin{eqnarray}
\vev{\Delta^0}=v_\Delta (1,1,1)^T,\quad \vev{ \phi}=v_\phi (0,1,-1)^T,\quad \vev{\varphi} =
(v_1,v_2)^T,
\end{eqnarray}
where $v_1\ne v_2$. Using the product rules given in appendix A, one can easily check that the
charged lepton mass matrix is diagonal and that the charged lepton masses can be fitted in terms of
three free parameters, $y_1$, $v_1$ and $v_2$; see \cite{Morisi:2011ge} for details.
The type-II seesaw gives a contribution to the neutrino mass matrix with vanishing diagonal entries
and equal off-diagonal entries, since it arises from the product of three $S_4$ triplets. Since
we have introduced only one right-handed neutrino, the Dirac neutrino mass matrix is a column vector
$m_D\sim (0,1,-1)^T$, and the light neutrino mass matrix from the seesaw relation is given by
\begin{equation}\label{mntbm2}
m_\nu^{\text{type-I}}= \frac{1}{M}m_D\,m_D^T\sim
\left(
\begin{array}{ccc}
0 & 0&0\\
0& 1&-1\\
0&-1&1
\end{array}
\right)
\end{equation}
Considering both the type-I and type-II contributions, we obtain a light neutrino mass
matrix that can be diagonalized by the TB mixing matrix \cite{Dutta:2009bj}\footnote{In
\cite{Dutta:2009bj}, a structure similar to (\ref{mntbm2}) has been obtained through type-II
seesaw only, also with an $S_4$ symmetry.}
\begin{equation}\label{mn0}
m_\nu^{\text{TB}} =
\left(
\begin{array}{ccc}
0&a&a\\
a&b&a-b\\
a&a-b&b
\end{array}
\right).
\end{equation}
The mass eigenvalues of the above matrix are $m_1=-a$, $m_2=2a$ and $m_3=-a+2b$, where
$a= y_a v_\Delta$ and $b=y_b^2 v_h^2 v_\phi^2/(\Lambda^2 M)$ with $v_h=\vev{h^0}$. This neutrino mass matrix is compatible with the normal hierarchy only and predicts a vanishing neutrinoless double beta decay amplitude, $m_{ee}=0$.
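As a quick cross-check, one can verify numerically that the TB mixing matrix diagonalizes this texture with the eigenvalues quoted above (the values of $a$ and $b$ below are illustrative, not a fit to data):

```python
import numpy as np

a, b = 0.01, 0.03                 # illustrative values in eV, not a fit
m = np.array([[0, a,     a    ],
              [a, b,     a - b],
              [a, a - b, b    ]])

# Tri-bimaximal mixing matrix of eq. (2)
U_TB = np.array([[ np.sqrt(2 / 3), 1 / np.sqrt(3),  0             ],
                 [-1 / np.sqrt(6), 1 / np.sqrt(3), -1 / np.sqrt(2)],
                 [-1 / np.sqrt(6), 1 / np.sqrt(3),  1 / np.sqrt(2)]])

diag = U_TB.T @ m @ U_TB          # expected: diag(-a, 2a, -a + 2b)
```

The vanishing $(1,1)$ entry makes $m_{ee}=0$ explicit, and with $b > a > 0$ the eigenvalues satisfy $|{-a}| < |2a| < |{-a+2b}|$, i.e. a normal ordering.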
In order to reproduce deviations like those in eq.\,(\ref{deviation}) in the neutrino mass matrix,
we introduce in the scalar sector one Higgs triplet $\Delta_d$ transforming as a doublet
under $S_4$ and one electroweak singlet $\phi_d$ transforming as a triplet $3_1$ under $S_4$.
With the inclusion of these fields, the Yukawa Lagrangian $\mathcal{L}_\nu$ also contains
the terms
\begin{equation}\label{lag2}
-\mathcal{L}_\nu\supset y_\beta LL\Delta_d+ \frac{y_\alpha}{\Lambda}
(\overline{L}\phi_d)_{1_1} \tilde{h}\nu_R+h.c.
\end{equation}
We assume that $\Delta_d$ and $\phi_d$ acquire vevs along the following directions:
\begin{equation}\label{all2}
\vev{\Delta_d^0}=v_d (1,0)^T,\quad \vev{\phi_d}=u_d(1,0,0)^T.
\end{equation}
Here we also assume that $y_{\alpha,\beta}\ll y_{a,b}$. This can be realized by assuming that
$\Delta_d$ and $\phi_d$ are charged under some extra abelian symmetry such as $Z_N$ or $U_{FN}(1)$.
After electroweak symmetry breaking, and after integrating out the right-handed neutrino, eq.\,(\ref{lag2}) gives the following contribution to the neutrino mass matrix:
\begin{equation}\label{eff}
\frac{y_by_\alpha v_h^2}{\Lambda^2 M}(\nu \phi)_{1_1}(\nu \phi_{\text{d}})_{1_1}+
\frac{y_\alpha^2 v_h^2}{\Lambda^2 M}(\nu \phi_{\text{d}})_{1_1}(\nu \phi_{\text{d}})_{1_1}.
\end{equation}
The second term in eq.\,(\ref{eff}) is smaller than the first, since we have assumed $y_\alpha \ll y_b$. In particular, assuming $y_b\sim 1$ and $y_\alpha\sim\lambda$, the first term is proportional to $\lambda$ and the second to $\lambda^2$.
The extra contributions to the neutrino mass matrix from the type-I seesaw are then
\begin{equation}
\delta m_\nu^{\text{type-I}}\sim
c_1\lambda \left(
\begin{array}{ccc}
0 & 1& -1\\
1 & 0&0\\
-1 & 0& 0
\end{array}
\right)+
c_2\lambda^2 \left(
\begin{array}{ccc}
1 & 0& 0\\
0 & 0&0\\
0 & 0& 0
\end{array}
\right).
\end{equation}
where $c_1$ and $c_2$ are coefficients of order ${\mathcal O}(1)$. From the extra type-II seesaw
term in eq.\,(\ref{lag2}), and using the vev alignments in (\ref{all2}), the additional
contribution to the neutrino mass matrix is proportional to
$\nu_2\nu_2-\nu_3\nu_3$; therefore the contribution to the neutrino mass matrix coming from
the type-II seesaw is
\begin{equation}
\delta m_\nu^{\text{type-II}}
\sim \left(
\begin{array}{ccc}
0 & 0& 0\\
0 & 1&0\\
0 & 0& -1
\end{array}
\right).
\end{equation}
Putting all these results together, the structure of the deviation of the neutrino mass matrix
can be written as
\begin{equation}\label{dmn}
\delta m_\nu=
\left(
\begin{array}{ccc}
\gamma' & \alpha'& -\alpha'\\
\alpha' & \beta'&0\\
-\alpha' & 0& -\beta'
\end{array}
\right),
\end{equation}
where $\alpha'=y_by_\alpha v_h^2 v_\phi u_d/(\Lambda^2 M)$, $\beta'=y_\beta v_d$,
$\gamma'=y_\alpha^2 v_h^2 v_\phi u_d/(\Lambda^2 M)$.
The deviation obtained in our model is equal to the neutrino mass matrix deviation in
eq.\,(\ref{deviation}) truncated at $\lambda^2$, with $\alpha_2=0$. This difference does not
significantly modify the predictions of a maximal atmospheric angle and a trimaximal solar angle.
In the next section, we study the phenomenological implications of our neutrino mass texture.
\section{Phenomenology}
\label{pheno}
Combining eq. (\ref{mn0}) and eq. (\ref{dmn}), the resulting neutrino mass matrix in our model is
\begin{equation}\label{last}
m_\nu =
\left(
\begin{array}{ccc}
\gamma'&a+\alpha'&a-\alpha'\\
a+\alpha'&b+\beta'&a-b\\
a-\alpha'&a-b&b-\beta'
\end{array}
\right).
\end{equation}
As mentioned earlier, we assume $y_a, y_b\sim {\mathcal O}(1)$ and $y_\alpha, y_\beta \sim
{\mathcal O}(\lambda)$, which implies the hierarchy $a,b \gg \alpha', \beta' \gg
\gamma'$ among the elements of the above structure. Since the $(m_\nu)_{11}$ entry is $\gamma' \sim
\mathcal{O}(\lambda^2)$, the neutrinoless double beta decay amplitude is $m_{ee}\propto \lambda^2$,
and for small values of $\lambda$, as in the unperturbed case, only the normal neutrino mass
hierarchy can be fitted. Since the neutrino mass matrix\,(\ref{last}) obtained from the model is
equivalent to the matrix\,(\ref{mLRTB}) up to corrections of $\mathcal{O}$($\lambda^2$), it is
diagonalized by the mixing matrix\,(\ref{LRTB}) up to $\mathcal{O}$($\lambda^2$) corrections. Note
that the neutrino mass matrix\,(\ref{last}) obtained in the model has 5 free parameters, while the
derived structure in eq.\,(\ref{mLRTB}) has 4 real parameters ($x$, $y$, $z$ and $\lambda$). We fix
the free parameters of eq.\,(\ref{last}) in terms of the parameters of eq.\,(\ref{mLRTB}) by
comparing the two structures at each order in $\lambda$.
Comparing the leading order expressions of the neutrino mass matrix in eq.\,(\ref{mn0}) and
eq.\,(\ref{mntbm}), we restrict our parameters to be
\begin{equation}\label{relat0}
x=2y;~a=2y;~b=y+z.
\end{equation}
Further comparing the higher order terms in $\lambda$, we obtain the relations
\begin{equation}\label{relat1}
\alpha' = -\sqrt{2}z \lambda;~\beta' = 2\sqrt{2}y\lambda;~\gamma'= 2z\lambda^2.
\end{equation}
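These relations can be cross-checked numerically: building the mass matrix of eq.\,(\ref{last}) with the substitutions above and diagonalizing it should return $\sin\theta_{13}\simeq\lambda$, $\sin^2\theta_{12}\simeq 1/3$ and $\theta_{23}\simeq\pi/4$ up to $\mathcal{O}(\lambda^2)$ corrections. A sketch (with illustrative values of $y$, $z$ and $\lambda$, not the values of the scan below) is the following:

```python
import numpy as np

# Illustrative values (not a fit): y, z set the TB part, lam = sin(theta_13)
y, z, lam = 0.005, 0.025, 0.15
a, b = 2 * y, y + z                          # leading-order relations
ap = -np.sqrt(2) * z * lam                   # alpha'
bp = 2 * np.sqrt(2) * y * lam                # beta'
gp = 2 * z * lam**2                          # gamma'

m = np.array([[gp,     a + ap, a - ap],
              [a + ap, b + bp, a - b ],
              [a - ap, a - b,  b - bp]])

w, V = np.linalg.eigh(m)                     # ascending: ~(-a, 2a, -a + 2b)

s13 = abs(V[0, 2])                           # sin(theta_13)
s12sq = V[0, 1]**2 / (1 - V[0, 2]**2)        # sin^2(theta_12)
t23 = abs(V[1, 2] / V[2, 2])                 # tan(theta_23)

print(s13, s12sq, t23)                       # expect ~lam, ~1/3, ~1
```

Here the mixing angles are extracted from the eigenvector matrix in the standard parametrization, since the charged lepton mass matrix is diagonal.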
Note that this is not the most general case of our model; nevertheless, we want to point out that
even with a maximal atmospheric mixing angle and a trimaximal solar mixing angle it is
possible to obtain a large reactor mixing angle. We also expect negligible deviations from the
tri-bimaximal values of the solar and atmospheric mixing angles, since our model
predicts $\alpha_2=0$ when compared with eq.\,(\ref{deviation}) and generates terms only up to
$\mathcal{O}$($\lambda^2$). We analyze such deviations by randomly varying $y$, $z$ and $\lambda$
with the constraints that the squared mass differences and the mixing angles lie in the observed
(3$\sigma$) ranges \cite{Schwetz:2011qt}. The results of our analysis are shown in
figure \ref{figt}. It is evident from figure \ref{figt} that, for the restricted parameter space
specified above, the model allows a large $\theta_{13}$ with negligible deviations of the
atmospheric and solar mixing angles from their maximal and trimaximal values, respectively. We also
check the predictions for the neutrinoless double beta decay rate in our model for the restricted
parameter space specified above and find that the region 4.5 meV $<m_{\nu1}<$ 5.8 meV and 0.5 meV $<
|m_{\beta \beta}|<$ 3.5 meV is allowed for values of $\theta_{13}$ within the 2$\sigma$
limits indicated by T2K.
\begin{figure}[h!]
\includegraphics[width=8cm]{fig1.eps}
\hspace{0.5cm}
\includegraphics[width=8cm]{fig2.eps}
\caption{The figure in the left side shows the allowed region for
$\theta_{13}~\mbox{vs.}~\theta_{23}$. The figure in the right side shows the allowed region for
$\theta_{13}~\mbox{vs.}~\theta_{12}$. The horizontal continuous (dashed) lines represent the best
fit value (2$\sigma$ deviations) of T2K for the reactor neutrino mixing angle.}
\label{figt}
\end{figure}
\vskip10.mm
In summary, we have found the structure of the deviation of the neutrino mass matrix from the
well known TB pattern such that the lepton mixing matrix has a maximal
atmospheric mixing angle and a trimaximal solar mixing angle together with an arbitrarily large
reactor angle. The deviation must be approximately $\mu$-$\tau$ antisymmetric. This fact suggests
that the flavor symmetry could be a permutation symmetry containing the $S_2$ ($\mu$-$\tau$
exchange) subgroup. $S_3$ is too small, since it does not give TB mixing; the smallest permutation
group with this property is $S_4$. We provide a candidate model based on $S_4$ in which, in the
unperturbed limit, the neutrino mass matrix is of TB form. Then, by adding extra scalar fields, we
show the possibility of generating deviations from the TB pattern that give a large $\theta_{13}$
in agreement with the T2K result, a maximal atmospheric mixing angle and a trimaximal solar mixing
angle, in good agreement with neutrino data.
\section{Acknowledgments}
This work was supported by the Spanish MICINN under grants FPA2008-00319/FPA and MULTIDARK
CSD2009-00064 (Consolider-Ingenio 2010 Programme), by Prometeo/2009/091 (Generalitat Valenciana),
by the EU Network grant UNILHC PITN-GA-2009-237920. S. M. is supported by a Juan de la Cierva
contract. E. P. is supported by CONACyT (Mexico). K. M. P. is grateful to IFIC, Universitat de
Val{\`e}ncia for hospitality and support.
\begin{appendix}
\section{$S_4$ product rules}
In the basis where the generators of $S_4$ are real, the product rules are the following (see \cite{Lam:2008sh}):\\
\begin{center}\parbox{2.5in}{\begin{center}
for $2$
\begin{eqnarray}\nonumber
&a_1 a^{\prime}_1 + a_2 a^{\prime}_2 \sim 1_1,&\\ \nonumber
&-a_1 a^{\prime}_2 + a_2 a^{\prime}_1 \sim 1_2,&\\ \nonumber
&\left( \begin{array}{c} a_1 a^{\prime}_2 + a_2 a^{\prime}_1 \\ a_1 a^{\prime}_1 - a_2 a^{\prime}_2 \end{array} \right) \sim 2,&
\end{eqnarray}
\end{center}
}\end{center}
\parbox{3in}{\begin{center}
for $3_1$
\begin{eqnarray}\nonumber
&\sum \limits _{j=1} ^{3} b_j b^{\prime}_j \sim 1_1,&\\ \nonumber
&\left( \begin{array}{c} \frac{1}{\sqrt{2}} (b_2 b^{\prime}
_2 - b_3 b^{\prime}_3) \\ \frac{1}{\sqrt{6}} (-2 b_1 b^{\prime}_1 + b_2 b^{\prime}_2 + b_3 b^{\prime}_3) \end{array} \right) \sim 2,& \\ \nonumber
&\left( \begin{array}{c} b_2 b^{\prime}_3 + b_3 b^{\prime}_2 \\ b_1 b^{\prime}_3 + b_3 b^{\prime}_1\\ b_1
b^{\prime}_2 + b_2 b^{\prime}_1 \end{array} \right) \sim
3_1 \; , \;\; \left(
\begin{array}{c} b_3 b^{\prime}_2 - b_2 b^{\prime}_3 \\ b_1 b^{\prime}_3 - b_3 b^{\prime}_1 \\ b_2 b^{\prime}_1 -
b_1 b^{\prime}_2 \end{array} \right) \sim 3_2,&
\end{eqnarray}
\end{center}}
\parbox{3in}{\begin{center}
for $3_2$
\begin{eqnarray} \nonumber
&\sum \limits _{j=1} ^{3} c_j c^{\prime}_j \sim 1_1,&\\ \nonumber
&\left( \begin{array}{c} \frac{1}{\sqrt{2}} (c_2 c^{\prime}
_2 - c_3 c^{\prime}_3) \\ \frac{1}{\sqrt{6}} (-2 c_1 c^{\prime}_1 + c_2 c^{\prime}_2 + c_3 c^{\prime}_3) \end{array} \right) \sim 2,& \\ \nonumber
&\left( \begin{array}{c} c_2 c^{\prime}_3 + c_3 c^{\prime}_2 \\ c_1 c^{\prime}_3 + c_3 c^{\prime}_1\\ c_1
c^{\prime}_2 + c_2 c^{\prime}_1 \end{array} \right) \sim 3_1\; , \;\;\left(
\begin{array}{c} c_3 c^{\prime}_2 - c_2 c^{\prime}_3 \\ c_1 c^{\prime}_3 - c_3 c^{\prime}_1 \\ c_2 c^{\prime}_1 -
c_1 c^{\prime}_2 \end{array} \right) \sim 3_2& .
\end{eqnarray}
\end{center}}\\
For $2 \times 3_1$:
\parbox{2.5in}{
\begin{eqnarray} \nonumber
\left( \begin{array}{c} a_2 b_1 \\ -\frac{1}{2}(\sqrt{3} a_1 b_2 + a_2
b_2)\\ \frac{1}{2} (\sqrt{3} a_1 b_3 - a_2 b_3) \end{array}
\right) \sim 3_1\\ \nonumber
\left( \begin{array}{c} a_1 b_1 \\ \frac{1}{2}(\sqrt{3} a_2 b_2 - a_1
b_2)\\ -\frac{1}{2} (\sqrt{3} a_2 b_3 + a_1 b_3) \end{array}
\right) \sim 3_2
\end{eqnarray}}
\parbox{1in}{and for $2 \times 3_2$}
\parbox{2in}{
\begin{eqnarray} \nonumber
\left( \begin{array}{c} a_1 c_1 \\ \frac{1}{2}(\sqrt{3} a_2 c_2 - a_1
c_2)\\ -\frac{1}{2} (\sqrt{3} a_2 c_3 + a_1 c_3) \end{array}
\right) \sim 3_1\\ \nonumber
\left( \begin{array}{c} a_2 c_1 \\ -\frac{1}{2}(\sqrt{3} a_1 c_2 + a_2
c_2)\\ \frac{1}{2} (\sqrt{3} a_1 c_3 - a_2 c_3) \end{array}
\right) \sim 3_2 .
\end{eqnarray}}
\begin{center}
\hspace{-2.8in}
For $3_1\times 3_2$
\begin{eqnarray} \nonumber
& \sum \limits _{j=1} ^{3} b_j c_j \sim 1_2&\\ \nonumber
&\left( \begin{array}{c} \frac{1}{\sqrt{6}} (2 b_1 c_1 - b_2 c_2 - b_3
c_3) \\ \frac{1}{\sqrt{2}} (b_2 c
_2 - b_3 c_3) \end{array} \right) \sim 2& \\ \nonumber
&\left( \begin{array}{c} b_3 c_2 - b_2 c_3 \\ b_1 c_3 - b_3 c_1 \\ b_2 c_1 -
b_1 c_2 \end{array} \right) \sim 3_1\; , \;\;\left(
\begin{array}{c} b_2 c_3 + b_3 c_2 \\ b_1 c_3 + b_3 c_1\\ b_1
c_2 + b_2 c_1 \end{array} \right) \sim 3_2& .
\end{eqnarray}
\end{center}
\end{appendix}
Shumar Gewog (Dzongkha: ཤུ་མར་) is a gewog (village block) in Pemagatshel District, Bhutan.
Shumar is one of the Gewogs in Pemagatshel Dzongkhag. It is the largest Gewog in the Dzongkhag, with more than 800 households and more than 11 villages. The Gewog headman is Gup Lepo, who was elected by a yes/no vote because he was the only contestant. Shumar Gewog is the most developed in the Dzongkhag, as the Dzongkhag headquarters is located in the Gewog. Shumar Village, comprising about 90 households, is the largest village in the Gewog, and the Gewog is thus named Shumar.
Darchung is a small village under Shumar Village.
References
External links
https://web.archive.org/web/20100503060847/http://www.pemagatshel.gov.bt/gewogDetail.php?id=40
Gewogs of Bhutan
Pemagatshel District
{"url":"https:\/\/math.stackexchange.com\/questions\/573964\/prove-that-relation-r-on-a-set-of-functions-is-an-equivalence-relation","text":"# Prove that relation $R$ on a set of functions is an equivalence relation\n\nLet set $S$ be the set of all functions $f:\\mathbb{Z_+} \\rightarrow \\mathbb{Z_+}$. Define a realtion $R$ on $S$ by $(f,g)\\in R$ iff there is a constant $M$ such that $\\forall n (\\frac{1}{M} < \\frac{f(n)}{g(n)}<M).$ Prove that $R$ is an equivalence relation and that there are infinitely mane equivalence classes.\n\nAttempt: Would it work if I define $f$ as $f_k(n)=kn$ and g as $g_k(n)=(M-k)n$ where $M>k$. Then, $$\\forall n ((\\frac{1}{M} < \\frac{f(n)}{g(n)}<M)=(\\frac{1}{M} < \\frac{k}{M-k} <M))$$ is true as long as $M>k$.\n\nSo, $R$ is reflexive: $(f,f) \\in R$ $$\\forall n ((\\frac{1}{M} < \\frac{f(n)}{f(n)}<M)=(\\frac{1}{M} < 1 <M)), \\space M>1$$\n\n$R$ is symmetric: $(f,g)\\in R \\Rightarrow (g,f)\\in R$ $$\\forall n ((\\frac{1}{M} < \\frac{f(n)}{g(n)}<M)=(\\frac{1}{M} < \\frac{k}{M-k} <M))$$ $$\\forall n ((\\frac{1}{M} < \\frac{g(n)}{f(n)}<M)=(\\frac{1}{M} < \\frac{M-k}{k} <M))$$\n\nfor $M>k$.\n\n$R$ is transitive: $(f,g)\\in R \\wedge (g,h) \\in R \\Rightarrow (f,h)\\in R$ $$\\frac{f(n)}{g(n)} \\in R, \\space \\frac{g(n)}{h(n)} \\in R \\Rightarrow \\frac{f(n)g(n)}{h(n)g(n)}=\\frac{f(g)}{h(n)}\\Rightarrow \\forall n ((\\frac{1}{M} < \\frac{f(n)}{h(n)}<M)$$\n\nThen, $R$ is an equivalence relation.\n\n\u2022 You haven\u2019t really addressed the question. You must first show that the relation $R$ that you\u2019ve been given is reflexive, symmetric, and transitive. Then you must explicitly identify infinitely many different equivalence classes of the relation. \u2013\u00a0Brian M. Scott Nov 20 '13 at 0:27\n\u2022 @BrianM.Scott I edited the post showing how it is an equivalence relation. I want to make sure I define f and g correctly or do I need to define them at all. 
\u2013\u00a0Koba Nov 20 '13 at 0:49\n\u2022 For proving that $R$ is an equivalence relation you shouldn\u2019t be defining any $f$ or $g$: you should be proving things about arbitrary elements of $S$. Now that you\u2019ve shown what you\u2019re thinking, let me write up an answer. \u2013\u00a0Brian M. Scott Nov 20 '13 at 0:53\n\nYour proof of reflexivity would be right if you expressed it a little more clearly. Let $f\\in S$ be arbitrary. Then for all $n\\in\\Bbb Z_+$ we have $$\\frac12<\\frac{f(n)}{f(n)}<2\\;,$$ so $\\langle f,f\\rangle\\in R$, and $R$ is reflexive. (Or course I could have used any real number greater than $1$ for my $M$, but it\u2019s best to be explicit.)\n\nFor symmetry you want to assume that $\\langle f,g\\rangle\\in R$ and show that $\\langle g,f\\rangle$ is therefore necessarily in $R$ as well. Since $\\langle f,g\\rangle\\in R$, you know that there is a constant $M$ such that\n\n$$\\frac1M<\\frac{f(n)}{g(n)}<M\\tag{1}$$\n\nfor each $n\\in\\Bbb Z_+$. Taking reciprocals in $(1)$, we see that\n\n$$M>\\frac{g(n)}{f(n)}>\\frac1M$$\n\nand hence that $\\langle g,f\\rangle\\in R$, as desired.\n\nFor transitivity you must assume that $f,g,h\\in S$ are such that $\\langle f,g\\rangle\\in R$ and $\\langle g,h\\rangle\\in R$ and somehow prove that $\\langle f,h\\rangle\\in R$. You know that there are constants $M$ and $N$ such that\n\n$$\\frac1M<\\frac{f(n)}{g(n)}<M\\qquad\\text{and}\\qquad\\frac1N<\\frac{g(n)}{h(n)}<N\\tag{2}$$\n\nfor all $n\\in\\Bbb Z_+$, and you want to use $M$ and $N$ somehow to find a constant $K$ such that\n\n$$\\frac1K<\\frac{f(n)}{h(n)}<K$$\n\nfor all $n\\in\\Bbb Z_+$. Your idea of looking at the product $\\dfrac{f(n)}{g(n)}\\cdot\\dfrac{g(n)}{h(n)}$ is a good one, but you have to carry it out correctly; make use of the inequalities in $(2)$.\n\nFor the last part of the question, consider the functions $f_k:\\Bbb Z_+\\to\\Bbb Z_+:n\\mapsto n^k$.\n\n\u2022 Thanks. Very clear answer. 
\u2013\u00a0Koba Nov 20 '13 at 3:57\n\u2022 @Koba: You\u2019re welcome. \u2013\u00a0Brian M. Scott Nov 20 '13 at 4:00\n\nYou are given that $(f,g) \\in R$ if and only if $\\exists M \\forall n > 0 ~.~ 1\/M < f(n)\/g(n) < M$. In order to show that $R$ is an equivalence relation over $\\mathbb{Z}^+ \\times \\mathbb{Z}^+$, you need to show that $R$ is reflexive, symmetric, and transitive over that domain. I will outline a proof that $R$ is an equivalence relation, leaving the proof that $R$ has infinitely many equivalence classes to you. If we take any $M > 1$, it is clear that $1\/M < f(n)\/f(n)$ and $f(n)\/f(n) < M$ for all $n > 0$. Thus, $R$ is reflexive. To show symmetry, we need to show that if $(f,g) \\in R$ then $(g,f) \\in R$. If $(f,g) \\in R$, then there is a constant $M$ such that $1\/M < f(n)\/g(n)$ and $f(n)\/g(n) < M$ for all $n > 0$. Taking reciprocals, our assumption yields $g(n)\/f(n) < M$ and $1\/M < g(n)\/f(n)$ for all $n < 0$. So $R$ is symmetric. Finally, to show transitivity we assume that $(f,g) \\in R$ and $(g,k) \\in R$, so that there is are constants $M$ and $N$ such that $1\/M < f(n)\/g(n) < M$ and $1\/N < g(n)\/k(n) < N$ for all $n> 0$. But $1\/M < f(n)\/g(n)$ implies that $g(n)\/Mk(n) < f(n)\/k(n)$ and $1\/N < g(n)\/k(n)$ for all $n > 0$. Similarly, $f(n)\/g(n)$ implies that $f(n)\/k(n) < Mg(n)\/k(n)$ and $g(n)\/k(n) < N$ for all $n > 0$. Thus, there is a constant $L$ such that $1\/L < f(n)\/k(n) < L$ or all $n > 0$. Hence, the relation $R$ is transitive. Since $R$ is reflexive, symmetric, and transitive it is an equivalence relation.\n\n\u2022 Awesome. Now, I understand everything. Thank you. 
\u2013\u00a0Koba Nov 20 '13 at 3:58","date":"2019-08-20 18:09:54","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9405107498168945, \"perplexity\": 64.93165478960522}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-35\/segments\/1566027315558.25\/warc\/CC-MAIN-20190820180442-20190820202442-00532.warc.gz\"}"}
| null | null |
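The transitivity step above hinges on the fact that the product of the two bounds serves as the new constant: multiplying $1/M < f(n)/g(n) < M$ by $1/N < g(n)/h(n) < N$ gives $1/(MN) < f(n)/h(n) < MN$, so $K = MN$ works. A quick numerical sanity check of this bound, using some hypothetical sample functions, can be sketched in Python:

```python
# Sample functions (hypothetical, chosen so the bounds below hold)
f = lambda n: 2 * n
g = lambda n: n + 1
h = lambda n: 3 * n

M, N = 3, 4        # constants witnessing (f,g) in R and (g,h) in R
K = M * N          # the product works as the constant for (f,h)

for n in range(1, 10_000):
    assert 1 / M < f(n) / g(n) < M
    assert 1 / N < g(n) / h(n) < N
    # multiplying the two chains of inequalities gives the transitivity bound
    assert 1 / K < f(n) / h(n) < K
```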
The Col·legi Fuster is a building in Santa Coloma de Gramenet (Barcelonès) protected as a Bé Cultural d'Interès Local (cultural asset of local interest).

Description

It is a building between party walls, with a ground floor, one upper floor and a flat roof. A side alley gives access to the interior space of the block, where the sports facilities are located.

The façade features four large round-arched balcony openings that form two continuous balconies with wrought-iron railings. A tiled eave supported by corbels stands out.

References

Heritage monuments of Santa Coloma de Gramenet
Buildings in Santa Coloma de Gramenet
Woman wins award for best drama at Saudi film festival
Saudi filmmakers and actors pose for a group picture with their awards on the last day of the Saudi Film Festival at Saudi culture center in the City of Dammam, some 400 km east of the capital Riyadh, on Feb. 24, 2015. (AFP)
AFP, Riyadh Wednesday, 25 February 2015
A female Saudi film-maker won an award for best drama at the Saudi Film Festival, the chief juror said Wednesday, hailing a higher quality of entries despite the kingdom's cinema ban.
The five-day festival was only the second in seven years, and aired films at an arts and cultural center in the Gulf coast city of Dammam.
At the awards ceremony on Tuesday night, Hana al-Omair took the Golden Palm Tree prize for her drama "Complain," said Abdullah al-Eyaf, the head of the festival jury.
It tells the story of a hospital worker who lodges a complaint against a colleague, an act symbolizing everything wrong in her life.
Another woman, Shahad Ameen, won second prize in the drama category for "Eye & Mermaid," a fantasy about a girl who discovers her father has tortured a mermaid to extract beautiful black pearls.
Mohanna Abdullah took third place for his film "Adam's Ant," the story of a prisoner who tries to befriend an ant in his cell.
In 2013 the film "Wadjda," by Saudi female film-maker Haifaa Al-Mansour, became the country's first to be listed as a candidate for a foreign-language Oscar, although it did not make the final shortlist.
At this year's Saudi Film Festival, the Golden Palm Tree for best documentary went to Faisal al-Otaibi for "Grand Marriage." It recounts a two-week wedding ceremony taking place in the archipelago nation of the Comoros.
In the student category, Mohammed al-Faraj also earned one of the golden stylized palm tree trophies for "Lost," a documentary about stateless people living in Saudi Arabia.
Abbas al-Hayek took top prize for best unproduced script.
Eyaf and his jury selected the winning productions from among more than 60 entrants.
Eyaf, himself a prize-winning film-maker, said Saudi Arabia itself has emerged a winner "for having all this talent."
Last Update: Wednesday, 25 February 2015 KSA 14:46 - GMT 11:46
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 9,491
|
var React = require('react');
var _ = require('lodash');
var moment = require('moment');
function dateForItem(item) {
if (item.created_date) {
var datetime = moment(new Date(item.created_date.replace(' ', 'T')));
return datetime.isValid() ? datetime.format('ll h:mm A') : null;
}
}
var BrowseQueries = React.createClass({
clickCallback: function(event) {
this.props.clickCallback(event);
},
buildList: function() {
var listElements = this.props.listItems.map(_.bind(function(listItem, index) {
var isSelected = this.props.selectedIndex === index;
var classes = isSelected ? 'active' : null;
var createdAt;
var datetime = dateForItem(listItem);
if (datetime) {
createdAt = (
<p className="date pull-right">
<span className="icon glyphicon glyphicon-time"></span>
{datetime}
</p>
);
}
var isCachedText = listItem.refresh_rate > 0 ? 'Cached' : '';
return (
<li className={classes} key={index} data-id={listItem.id} onClick={this.clickCallback}>
<h5 className="name">{listItem.metadata.display_name ? listItem.metadata.display_name : 'Query not named'}</h5>
<div className="metadata clearfix">
<p className="date pull-left">{isCachedText}</p>
{createdAt}
</div>
</li>
);
}, this));
return (
<ul ref="list" className="interactive-list">
{listElements}
</ul>
);
},
fieldChanged: function(event) {
var newState = {};
newState[event.target.name] = event.target.value;
this.setState(newState);
},
getDefaultProps: function() {
return {
listItems: [],
clickCallback: null,
selectedIndex: null,
notice: null,
emptyContent: null
};
},
render: function() {
var emptyContent = this.props.listItems.length ? null: this.props.emptyContent;
var listItems = this.buildList();
return (
<section className="query-pane-section browse-queries">
{this.props.notice}
{listItems}
{emptyContent}
</section>
);
}
});
module.exports = BrowseQueries;
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 1,355
|
/* Host-side stub ("sham") implementations of the SD-card, touch-button and
 * flash routines, used when building without the real MCU hardware. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>
#include "sham.h"
#include "commander.h"
#include "flags.h"
static char sd_filename[64] = {0};
static char sd_text[512] = {0};
void delay_ms(int delay)
{
(void) delay;
}
uint8_t sd_write(const char *f, uint16_t f_len, const char *t, uint16_t t_len,
uint8_t replace)
{
(void) replace;
memset(sd_filename, 0, sizeof(sd_filename));
memset(sd_text, 0, sizeof(sd_text));
snprintf(sd_filename, sizeof(sd_filename), "%.*s", f_len, f);
snprintf(sd_text, sizeof(sd_text), "%.*s", t_len, t);
commander_fill_report("sd_write", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
return STATUS_SUCCESS;
}
char *sd_load(const char *f, uint16_t f_len)
{
static char text[512];
memcpy(text, sd_text, 512);
commander_fill_report("sd_load", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
if (!strncmp(sd_filename, f, f_len)) {
return text;
}
return NULL;
}
uint8_t sd_list(void)
{
commander_fill_report("sd_list", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
if (sd_filename[0]) {
commander_fill_report("backup", sd_filename, STATUS_SUCCESS);
}
return STATUS_SUCCESS;
}
uint8_t sd_erase(void)
{
commander_fill_report("sd_erase", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
memset(sd_filename, 0, sizeof(sd_filename));
memset(sd_text, 0, sizeof(sd_text));
return STATUS_SUCCESS;
}
uint8_t touch_button_press(int long_touch)
{
(void) long_touch;
commander_fill_report("touchbutton", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
return STATUS_TOUCHED;
}
void touch_button_parameters(uint16_t timeout, uint16_t threshold)
{
(void)timeout;
(void)threshold;
commander_fill_report("touchbutton", FLAG_ERR_NO_MCU, STATUS_SUCCESS);
}
uint8_t flash_read_unique_id(uint32_t *serial, uint32_t len)
{
memset(serial, 1, len);
return 0; // success
}
|
{
"redpajama_set_name": "RedPajamaGithub"
}
| 2,751
|
Q: How to configure fields and views in Drupal 7?
Drupal 7.x: how do I configure fields and views (and other modules if required) to get a JavaScript image viewer like the samples below?
http://www.amazon.co.uk/gp/product/images/B000TER4HO/
http://www.amazon.co.uk/gp/product/images/B00126INHI/
A:
* If the images are field values of a node (recommended): http://drupal.org/project/field_slideshow
Create an image field with Unlimited values and set up the field's display on admin/structure/types/manage/NODE_TYPE/display. (The Slideshow option has some options.)
* If the images are separate nodes connected to the product node somehow: http://drupal.org/project/views_slideshow
You'll need to create a view that loads the image nodes for this, probably with a relationship to the current node.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 5,790
|
Q: Unable to upload firmware to a new motherboard
(source: https://3dprinting.stackexchange.com/questions/14771/unable-to-upload-firmware-to-a-new-motherboard)
I bricked my Tevo Tarantula's controller board, and I've decided to just replace it rather than unbrick it because they are relatively cheap. I recently bought a new MKS GEN L v1.0 board, but I've been unable to flash new firmware onto it. Every time I go to upload the firmware, I get an error just as it begins to upload, saying:
"failed to send command to serial port does not exist or is not connected"
avrdude: ser_send(): write error: sorry no info avail
avrdude: stk500_send(): failed to send command to serial port
avrdude: ser_recv(): read error: The handle is invalid.
avrdude: stk500v2_getsync(): timeout communicating with programmer
Any ideas of what the issue could be? I've tried both of the USB ports on my computer and using a USB 2.0 hub (I believe my computer's ports are both USB 3.0). I've also made sure that I had the correct port selected in Marlin (1.1.8.13).
I also think it's worth mentioning that my bricked board and my new board appear differently in the Device Manager (the original post included screenshots of both Device Manager entries).
The new board also seems to "cut out" when I first connect it to my computer. In the Device Manager, my computer will indicate that an unknown device is connected, then it will quickly disconnect and disappear, only to reconnect and reappear as described above.
Do you guys think there is a hardware issue with the motherboard? Thanks for your help, this is giving me quite the headache!
Comments:
* What is the new motherboard's type? (Nov 11 '20)
* It's a Kookye MKS GEN L v1.0 from Amazon. It has an LPC1768. (Nov 11 '20)
* amazon.com/gp/product/B07Y1PPWVC/... (Nov 11 '20)
* What drivers is Windows using for the new USB connection? (Mick, Nov 11 '20)
* It's using a Microsoft driver, 10.0.19041.1. (Nov 11 '20)
A: Your motherboard is not an MKS GEN L v1.0, it's an MKS SGEN L (unfortunately, a confusingly similar naming scheme). Your board is actually a 32-bit board, and must therefore be flashed with Marlin 2.0, built for the 32-bit board. The firmware is then updated by placing it on the SD card and restarting the board, as explained in the Marlin documentation.
A: Your new board may have a counterfeit FT232R USB-to-serial interface chip, and the Windows update channel has installed hobbled FTDI drivers that won't work with counterfeit chips. The use of counterfeit FT232R chips is very common with budget 3D printer controllers, and FTDI are trying to discourage their use. Because of this, a lot of manufacturers have switched to using the CH340 chip, which does not suffer from this problem, and it looks like your old board used a CH340 chip. Try deleting the device and its drivers, and then installing the Windows setup executable from the following website:
https://www.ftdichip.com/Drivers/VCP.htm
The 2.12.28 drivers will work with counterfeit chips.
Comments:
* Thanks Mick! I will try that tomorrow. Should I install the VCP, D2xx, or the D3xx? (Nov 11 '20)
* Try the D3xx drivers. The device needs to be presented as a virtual COM port. (Mick, Nov 11 '20)
| null | null |
Taken from his latest 'I O U 2' series, "Vultures" is incredibly explosive, yet at times wildly introspective. Being able to capture that dichotomy of emotion and translate it into a textured 3-minute track is where Lido thrives the most. Shot deep in the pureness of nature, the music video really helps portray the rawness in the record. A gentle dance between monstrous, fiery emotions and how their release can be incredibly cathartic.
Lido shares of the recent project, "The first part (I O U) represents my interactions and experiences following the conclusion of 'Everything'. It's songs about family and friends. The second part (I O U 2) is about me finding my voice again. Both metaphorically and literally including my unprocessed vocals in my songs for the first time in a very long time." And that's what makes his music resonate with so many listeners. Hard-hitting drops that deliver a monstrous sound yet an elegant R&B style vocals that cradle your soul. He continues "It's (I O U 2) made up of reflections on the 'Everything' experience, finding myself and facing the last bit of darkness left after the dust had settled and I had regained my perspective and the rest of the album contains some very intimate tracks, each track injected with a heavy dose of rawness."
Entirely self-taught, Lido has already gained accolades for producing chart-topping hits alongside artists like Chance The Rapper, Jaden Smith, Banks, Alt-J, Bastille, Halsey, and Disclosure to name a few, Lido is quickly becoming one of the most sought-after young producers in the game. He also co-produced Towkio's debut album WWW alongside industry heavy-weight Rick Rubin. 2018 has already seen Lido produce tracks for Chicago rapper Towkio including his breakout single "Symphony," which the two performed last February on the Tonight Show with Jimmy Fallon.
|
{
"redpajama_set_name": "RedPajamaC4"
}
| 450
|
Q: Show that the Borel $\sigma$-algebra on $\mathbb{R}$ is generated by the compact sets of $\mathbb{R}$ (Solution Critique) I am teaching myself measure theory over the summer, and would appreciate some feedback on this basic result, since this type of reasoning is quite new to me.
Solution:
By Heine-Borel, the compact sets of $\mathbb{R}$ are exactly the closed bounded subsets. Let $\mathcal{K}(\mathbb{R})$ denote the $\sigma$ algebra generated by the compact sets. The Borel $\sigma$ algebra on $\mathbb{R}$, $\mathcal{B}(\mathbb{R})$, is generated by the open subsets of $\mathbb{R}$, and therefore the closed subsets of $\mathbb{R}$. Additionally, $\mathcal{B}(\mathbb{R})$ is generated by all the sets of the form $(-\infty,t]$ for real $t$. Clearly, the compact sets are contained in the collection of closed sets, so $\mathcal{K}(\mathbb{R}) \subset \mathcal{B}(\mathbb{R})$. Now, each set of the form $(-\infty,t]$ is the union of countably many closed bounded sets, so the $\sigma$ algebra generated by these sets will be contained in $\mathcal{K}(\mathbb{R})$. In other words $\mathcal{B}(\mathbb{R}) \subset \mathcal{K}(\mathbb{R})$, and the result follows.
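The countable-union step at the end can be written out explicitly: for every real $t$,

```latex
(-\infty,\, t] \;=\; \bigcup_{n=1}^{\infty} \,[\,t-n,\; t\,],
```

and each $[t-n,\,t]$ is closed and bounded, hence compact by Heine-Borel, so $(-\infty,t]$ lies in $\mathcal{K}(\mathbb{R})$.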
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 3,011
|
Q: Terraform `import` for `aws_route_table` Wants to Delete Routes from State File Afterwards I'm having some trouble whilst importing existing AWS routing tables into Terraform. They import, and their routes are recorded in the state file, but running plan or apply afterwards always wants to delete those routes, even if they are also defined in Terraform.
I define an existing AWS routing table in Terraform like this:
resource "aws_route_table" "public_staging" {
vpc_id = "${aws_vpc.staging.id}"
route {
cidr_block = "${aws_vpc.management.cidr_block}"
vpc_peering_connection_id = "${aws_vpc_peering_connection.management_to_staging.id}"
}
route {
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.staging.id}"
}
tags {
Name = "public staging (igw)"
environment = "staging"
}
}
Then import it like this; terraform import aws_route_table.public_staging rtb-abc123.
Which outputs:
aws_route_table.public_staging: Importing from ID "rtb-abc123"...
aws_route_table.public_staging: Import complete!
Imported aws_route_table (ID: rtb-abc123)
Imported aws_route (ID: r-rtb-abc123123456)
Imported aws_route (ID: r-rtb-abc123654321)
Imported aws_route_table_association (ID: rtbassoc-qwert765)
Imported aws_main_route_table_association (ID: rtbassoc-asdf9876)
aws_route.public_staging: Refreshing state... (ID: r-rtb-abc123123456)
aws_route_table.public_staging: Refreshing state... (ID: rtb-abc123)
aws_route.public_staging-1: Refreshing state... (ID: r-rtb-abc123654321)
aws_route_table_association.public_staging: Refreshing state... (ID: rtbassoc-qwert765)
aws_main_route_table_association.public_staging: Refreshing state... (ID: rtbassoc-asdf9876)
When then running terraform plan, Terraform wants to delete all the aws_route resource states it generated in the state file and create the route table we just imported:
Terraform will perform the following actions:
- aws_route.public_staging
- aws_route.public_staging-1
+ aws_route_table.public_staging
...
I've also tried defining the routes separately, outside of the aws_route_table resource and attaching them to the routing table by ID, like this:
resource "aws_route" "management_to_staging" {
route_table_id = "${aws_route_table.public_management.id}"
cidr_block = "${aws_vpc.staging.cidr_block}"
vpc_peering_connection_id = "${aws_vpc_peering_connection.management_to_staging.id}"
}
The only thing that will result in a no-change state is if I run the import on the routing table, also define the routes outside of the routing table (as aws_route resources), and then go in and manually change the generated names in the state file to those I've defined in the tf file. However, I believe this would not actually work on a fresh run, since the routes defined in the aws_route_table, and those defined as separate aws_route resources, would conflict.
EDIT:
The most likely explanation, as far as I can see, is that on importing, Terraform quite happily imports the routes inside the route table, but then on plan, it expects them to be declared explicitly using aws_route resources.
Problem with that is; you can't import aws_route resources, so you can never have your current infrastructure state match your terraform state.
I think the reason explicitly declaring them afterwards doesn't work either is that the state file records imported routes differently when it gets them from an import aws_route_table ... command than when it generates them from an apply with explicit aws_route definitions.
And now I'm out of breath.
A: Running in to this problem too. Since it does not look like it is feasible to fully import routing tables, I am going to create new ones via terrform and then change the associations to point at the new tables. This seems the easiest way forward towards having Terraform manage all resources.
A: You should try declaring your routes on a separate resource aws_route. A nested route object inside aws_route_table resource works the same as a separate aws_route, but the latter may be imported. Check the documentation.
Try it like this.
resource "aws_route_table" "public_staging" {
vpc_id = "${aws_vpc.staging.id}"
tags {
Name = "public staging (igw)"
environment = "staging"
}
}
resource "aws_route" "public_staging_r0" {
route_table_id = aws_route_table.public_staging.id
cidr_block = "${aws_vpc.management.cidr_block}"
vpc_peering_connection_id = "${aws_vpc_peering_connection.management_to_staging.id}"
}
resource "aws_route" "public_staging_r1" {
route_table_id = aws_route_table.public_staging.id
cidr_block = "0.0.0.0/0"
gateway_id = "${aws_internet_gateway.staging.id}"
}
Then, import the aws_route_table normally and each aws_route as below
terraform import aws_route.<route_id> <route_table_id>_<cidr_block>
terraform import aws_route.public_staging_r0 rtb-abc123_0.0.0.0/0
Important: as per the documentation, do not use a separate aws_route resource together with nested routes
NOTE on Route Tables and Routes:
Terraform currently provides both a standalone Route resource and a Route Table resource with routes defined in-line. At this time you cannot use a Route Table with in-line routes in conjunction with any Route resources. Doing so will cause a conflict of rule settings and will overwrite rules.
|
{
"redpajama_set_name": "RedPajamaStackExchange"
}
| 8,384
|
A&A, 509 (2010) A105
Volume 509, January 2010
Looking for the first galaxies: lensing or blank fields?
A. Maizy1 - J. Richard2 - M. A. De Leo3 - R. Pelló1 - J. P. Kneib4
1 - Laboratoire d'Astrophysique de Toulouse-Tarbes, CNRS, Université de Toulouse, 14 Av. Edouard-Belin, 31400 Toulouse, France
2 - Institute for Computational Cosmology, Department of Physics, Durham University, South Road, Durham, DH1 3LE, UK
3 - Instituto de Astronomía, UNAM, Apartado Postal 70-264, 04510 México DF, Mexico
4 - OAMP, Laboratoire d'Astrophysique de Marseille, UMR 6110, Traverse du Siphon, 13012 Marseille, France
Received 11 February 2009 / Accepted 26 October 2009
Context. The identification and study of the first galaxies remains one of the most exciting topics in observational cosmology. The determination of the best possible observing strategies is a very important choice in order to build up a representative sample of spectroscopically confirmed sources at high-z ( ), beyond the limits of present-day observations.
Aims. This paper is intended to precisely adress the relative efficiency of lensing and blank fields in the identification and study of galaxies at .
Methods. The detection efficiency and field-to-field variance are estimated from direct simulations of both blank and lensing fields observations. Present known luminosity functions in the UV are used to determine the expected distribution and properties of distant samples at for a variety of survey configurations. Different models for well known lensing clusters are used to simulate in details the magnification and dilution effects on the backgound distant population of galaxies.
Results. The presence of a strong-lensing cluster along the line of sight has a dramatic effect on the number of observed sources, with a positive magnification bias in typical ground-based ``shallow'' surveys ( ). The positive magnification bias increases with the redshift of sources and decreases with both depth of the survey and the size of the surveyed area. The maximum efficiency is reached for lensing clusters at . Observing blank fields in shallow surveys is particularly inefficient as compared to lensing fields if the UV LF for LBGs is strongly evolving at . Also in this case, the number of sources expected at the typical depth of JWST ( ) is much higher in lensing than in blank fields (e.g. a factor of 10 for ). All these results have been obtained assuming that number counts derived in clusters are not dominated by sources below the limiting surface brightness of observations, which in turn depends on the reliability of the usual scalings applied to the size of high-z sources.
Conclusions. Blank field surveys with a large field of view are needed to prove the bright end of the LF at , whereas lensing clusters are particularly useful for exploring the mid to faint end of the LF.
Key words: gravitational lensing: strong - galaxies: high-redshift - galaxies: luminosity function, mass function - galaxies: clusters: general
Constraining the abundance of z>7 sources remains an important challenge of modern cosmology. Recent WMAP results place the first building blocks of galaxies at redshifts , suggesting that reionization was an extended process (Dunkley et al. 2009). Distant star-forming systems could have been responsible for a significant part of the cosmic reionization. Considerable advances have been made during the last years in the observation of the early Universe with the discovery of galaxies at 6-7, close to the end of the reionization epoch (e.g. Stanway et al. 2004; Kodaira et al. 2003; Kneib et al. 2004; Cuby et al. 2003; Zheng et al. 2009; Iye et al. 2006; Hu et al. 2002; Bouwens et al. 2006; Bradley et al. 2008; Bouwens et al. 2004a), and the first prospects up to (Stark et al. 2007; Bouwens et al. 2008; Pelló et al. 2004; Bouwens et al. 2009a; Richard et al. 2008,2006).
High-z surveys are mainly based on two different and complementary techniques: the dropout (Lyman- Break) identification, which is an extrapolation of the drop-out technique used for Lyman-Break galaxies (LBGs, Steidel et al. 1999) to higher redshifts (e.g. Richard et al. 2008; Bouwens et al. 2006; Richard et al. 2006), and the narrow-band (NB) imaging aimed at detecting Lyman emitters (LAEs, e.g. Cuby et al. 2007; Taniguchi et al. 2005; Kashikawa et al. 2006; Iye et al. 2006; Willis et al. 2006). Using the former technique, Bouwens et al. (2008) found a strong evolution in the abundance of galaxies between -8 and -4, the SFR density beeing much smaller at very high-z up to the limits of their survey, in particular towards the bright end of the luminosity function (LF). A strong evolution is also observed in the number density of sources detected with NB techniques, which seems to be much smaller at than in the interval (Cuby et al. 2007; Iye et al. 2006; Willis et al. 2008).
Both dropout and NB approaches require a subsequent spectroscopic confirmation of the selected candidates. For now approximately ten galaxies beyond are known with secure spectroscopic redshifts (Kodaira et al. 2003; Cuby et al. 2003; Taniguchi et al. 2005; Iye et al. 2006; Hu et al. 2002). All samples beyond this redshift are mainly supported by photometric considerations (Bouwens et al. 2004b; Bradley et al. 2008; Kneib et al. 2004; Richard et al. 2006; Bouwens et al. 2006; Richard et al. 2008). This situation is expected to dramatically improve in the near future with the arrival of a new generation of multi-object spectrographs in the near-IR, such as MOIRCS/Subaru, Flamingos2/Gemini-S ( 2009), or EMIR/GTC ( 2012), with well suited field of view, spectral resolution and sensitivity. These forthcoming facilities should provide spectroscopic confirmation for a large number of z>7 candidates identified from deep photometric surveys, and the first characterization of the physical properties of these sources (e.g. IMF, stellar populations, fraction of AGN, ...).
The aim of this paper is to determine the best possible observing strategies in order to build up a representative sample of spectroscopically confirmed galaxies. The photometric pre-selection of high-z candidates could be achieved either in blank fields or in lensing clusters. This later technique, also first referred to as the ``gravitational telescope'' by Zwicky, has proven highly successful in identifying a large fraction of the most distant galaxies known today thanks to magnifications by typically 1-3 magnitudes in the cluster core (e.g. Ellis et al. 2001; Bradley et al. 2008; Kneib et al. 2004; Bradac et al. 2009; Zheng et al. 2009; Hu et al. 2002). The presence of a strong lensing cluster in the surveyed field introduces two opposite effects on number counts as compared to blank fields. In one hand, gravitational magnification increases the number of faint sources by improving the detection towards the faint end of the LF. On the other hand, the reduction of the effective surface by the same magnification factor leads to a dilution in observed counts. The global positive/negative magnification bias obviously depends on the slope of the number counts, as pointed out by Broadhurst et al. (1995).
This paper addresses the relative efficiency of surveys conducted on blank and lensing fields as a function of the relevant parameters, namely the redshift of the sources, the redshift and properties of the lensing clusters and the survey characteristics (i.e. area, photometric depth ...). This calculation requires a detailed simulation of observations using lensing models, and realistic assumptions for the properties of background sources according to present-day observational results, in particular for the luminosity function and typical sizes of z>7 galaxies.
The paper is organized as follows. In Sect. 2 we describe the simulations performed in order to determine the relative detection efficiency for high-z sources, both in lensing and blank fields. Section 3 presents the results, in particular the detection efficiency achieved as a function of redshift for both sources and lensing clusters, together with a discussion on the influence of lensing cluster properties and field-to-field variance. A discussion is presented in Sect. 4 on the relative efficiency as a function of survey parameters, and a comparison between simulations and present surveys. Conclusions are given in Sect. 5.
Throughout this paper, we adopt a concordance cosmological model, with , , and . All magnitudes are given in the AB system. Conversion values between Vega and AB systems for the filters used in this paper are typically , 1.41 and 1.87 in J, H and K respectively, with .
2 Simulations of lensing and blank field observations
2.1 Simulation parameters
This section describes the ingredients used in the simulations to implement different assumptions that would affect our efficiency in detecting high redshift galaxies. There are three important aspects to be considered in the comparison between lensing and blank fields. The first one is the LF and typical sizes of sources. The second one concerns the properties of the lensing clusters, in particular their mass distribution and redshift. The third one is related to the survey parameters, namely the photometric depth and the size of the field. All these aspects are discussed in this section.
Table 1: Summary of the parameters included in our simulations. For each entry, we give the range of values explored and reference to the relevant publication.
2.1.1 Source properties
These simulations are focused on the detection of sources in the redshift range 6<z<12, a relevant domain for spectroscopic follow-up with near-infrared instruments. The lower limit of this redshift domain overlaps with current photometric surveys measuring the LF at (e.g. Bouwens et al. 2007). However, the LF is still largely unconstrained beyond because of the lack of spectroscopic confirmation of photometric samples and the relatively small size of the surveyed volumes.
The abundance of background sources at these redshifts is given by the luminosity function $\phi(L)$, with L the rest-frame UV luminosity at 1500 Å. $\phi(L)$ is the most basic description of the galaxy population from an observer point of view. We adopt a parametrization based on the analytical Schechter function (Schechter 1976):

$$\phi(L)\,{\rm d}L \;=\; \phi^{*} \left(\frac{L}{L^{*}}\right)^{\alpha} \exp\!\left(-\frac{L}{L^{*}}\right) \frac{{\rm d}L}{L^{*}}.$$

The slope at faint luminosities $\alpha$, the characteristic luminosity $L^{*}$ and the normalization factor $\phi^{*}$ have been constrained by several photometric surveys targeting LBG at high redshift ( ) (Ouchi et al. 2004; Beckwith et al. 2006; Bouwens et al. 2007; Bouwens et al. 2008; McLure et al. 2009; Henry et al. 2009). Three different representative cases are being discussed in our simulations, basically exploring our present knowledge (or lack of knowledge) of the overall shape of the LF at (Table 1):
(a) an ``optimistic'' scenario where LBGs show no evolution from , with the LF parameters as determined by Beckwith et al. (2006). Indeed, the LF at found by these authors displays the same shape as for (Steidel et al. 1999), but a 3 times smaller normalization factor (but see, e.g., McLure et al. 2009);
(b) a constant LF based on robust measurements by Bouwens et al. (2007) at in the Hubble UDF, but using the more recent fit parameters from Bouwens et al. (2008). As compared to model (a), this LF exhibits a turnover towards the bright end;
(c) the evolutionary LF recently proposed by Bouwens et al. (2008), which includes an important dimming of L* with increasing redshift. This LF represents the ``pessimistic'' case with respect to case (a), with very few high-luminosity galaxies.
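As an illustration of how these parametrizations enter the simulated number counts, the sketch below evaluates a Schechter LF and integrates it above a luminosity limit. The parameter values are illustrative placeholders, not the fitted values from the surveys cited above.

```python
import math

def schechter(L, phi_star, L_star, alpha):
    """Schechter LF: phi(L) = (phi*/L*) (L/L*)^alpha exp(-L/L*), per unit L."""
    x = L / L_star
    return (phi_star / L_star) * x ** alpha * math.exp(-x)

def counts_above(L_min, phi_star, L_star, alpha, L_max=None, steps=20000):
    """Number density of sources brighter than L_min (trapezoidal integration).

    The exponential cutoff makes the integrand negligible well above L*,
    so truncating at L_max = 50 L* is a safe approximation.
    """
    if L_max is None:
        L_max = 50.0 * L_star
    h = (L_max - L_min) / steps
    total = 0.5 * (schechter(L_min, phi_star, L_star, alpha)
                   + schechter(L_max, phi_star, L_star, alpha))
    for i in range(1, steps):
        total += schechter(L_min + i * h, phi_star, L_star, alpha)
    return total * h

if __name__ == "__main__":
    # Illustrative values only: phi* in Mpc^-3, luminosities in units of L*.
    phi_star, L_star, alpha = 1.0e-3, 1.0, -1.7
    n_bright = counts_above(1.0, phi_star, L_star, alpha)
    n_faint = counts_above(0.1, phi_star, L_star, alpha)
    # With a steep faint-end slope, most sources lie well below L*:
    print(n_faint / n_bright)
```

With a faint-end slope approaching -2, the integral is dominated by sub-L* galaxies, which is precisely the population that gravitational magnification brings above the detection limit.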
The size of the sources is a relevant parameter in this study, given the finite resolution of instruments, and the fact that gravitational magnification preserves surface brightness. High redshift galaxies are expected to be very small at z>7, typically on the sky (e.g. Barkana & Loeb 2000). Recent observations of photometric candidates support this idea (Bouwens et al. 2008; Oesch et al. 2009). This means that a large fraction of lensed sources should remain spatially unresolved in ground-based surveys, even with a strong gravitational magnification (hereafter ) of 10. The high resolution capability of JWST is clearly needed for resolving such faint galaxies. In the present simulations and for detection purposes, we consider all sources at z>6 as spatially unresolved. However, galaxy morphology and image sampling are important when discussing the efficiency of surveys based on space facilities, as discussed in Sect. 4.2.
2.1.2 Lensing effects
Fig. 1: Magnification maps for the three clusters used in this study (from left to right: A1689, A1835 and AC114). The global size of each image corresponds to the largest FOV considered, whereas the blue and red squares correspond to the two smaller FOVs (see Sect. 2.1.3). Black contours represent different magnification regimes, with values increasing from 0.5 to 3 mag towards the cluster center.
The present simulations address the effect of lensing by a foreground galaxy cluster. Several well-studied examples of lensing clusters are used in order to evaluate the influence of different mass distributions on the final results. Reference lensing clusters usually display several multiply-imaged systems with redshift measurements, allowing us to model their lensing properties accurately. Lensing clusters considered in these simulations have been previously used as gravitational telescopes to search for high redshift dropouts and LAEs. We take advantage of this situation to perform a direct comparison between our estimates and available observations. Finally, we selected clusters with different redshifts, total mass distributions and morphologies, because all of these factors are susceptible to affect the way they magnify background galaxies.
We selected three clusters satisfying the previous criteria: Abell 1689, Abell 1835 and AC114. Abell 1689 is one of the most spectacular gravitational telescopes, having the largest Einstein radius observed to date (~45″). Both optical dropout (Bradley et al. 2008) and Lyman-α emitter (Stark et al. 2007) candidates have been reported in the background of this cluster. Abell 1835 and AC114 are both massive, X-ray luminous clusters, previously used in our deep near-infrared survey for high redshift dropouts with VLT/ISAAC (Richard et al. 2006). Finally, these three clusters constitute the sample used by the ZEN2 survey for LAEs in narrow-band images (Willis et al. 2008).
We used the most recent mass models available for the reference clusters to derive the magnification maps (see Table 1), although the simulation results are found to be weakly sensitive to modeling details. Each lensing cluster has been modeled in a similar way using the public lensing software Lenstool, including the new MCMC optimization method (Jullo et al. 2007) providing Bayesian estimates of each parameter derived from the model.
The structure of mass models is given by a sum of individual dark matter subcomponents of two different types: large scale components, reproducing the cluster-scale behavior of dark matter, and small scale potentials centered on each cluster galaxy, reproducing the effect of substructure. Each lensing potential is parametrized by a Pseudo-Isothermal Elliptical Mass Distribution model (PIEMD, Kassiola & Kovner 1993), with a projected mass density given by:

\[ \Sigma(\rho) = \frac{\sigma_0^2}{2G}\,\frac{r_{\rm cut}}{r_{\rm cut}-r_{\rm core}} \left( \frac{1}{\sqrt{r_{\rm core}^2+\rho^2}} - \frac{1}{\sqrt{r_{\rm cut}^2+\rho^2}} \right), \qquad (1) \]

\[ \rho^2 = \left(\frac{x-x_c}{1+\epsilon}\right)^2 + \left(\frac{y-y_c}{1-\epsilon}\right)^2, \qquad (2) \]

where $(x_c, y_c)$ stands for the central position with respect to the BCG (cD or central Bright Cluster Galaxy), $\epsilon$ is the ellipticity, $\sigma_0$ is the central velocity dispersion and $(r_{\rm core}, r_{\rm cut})$ are two characteristic radii. For each lensing potential, the orientation of the x and y axes is given by the position angle $\theta$. The total mass of this profile is proportional to $\sigma_0^2\,r_{\rm cut}$. The PIEMD parametrization, easily linked to the observed geometry of elliptical lensing galaxies, has been widely used to model the strong lensing properties of massive clusters (Smith et al. 2005; Richard et al. 2007; Limousin et al. 2007).
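As a cross-check of the profile's behavior, the projected PIEMD density can be evaluated numerically. The sketch below assumes the standard dPIE/PIEMD convention (a core-flattened profile truncated at r_cut), with G = 1 units; it is not the Lenstool implementation itself:

```python
import math

def sigma_piemd(rho, sigma0, r_core, r_cut, g=1.0):
    """Projected PIEMD mass density at elliptical radius rho (G = 1 units)."""
    pref = sigma0 ** 2 / (2.0 * g) * r_cut / (r_cut - r_core)
    return pref * (1.0 / math.sqrt(r_core ** 2 + rho ** 2)
                   - 1.0 / math.sqrt(r_cut ** 2 + rho ** 2))

def elliptical_radius(x, y, eps):
    """Elliptical radius for ellipticity eps, axes aligned with x/y
    (position angle omitted in this sketch)."""
    return math.sqrt((x / (1.0 + eps)) ** 2 + (y / (1.0 - eps)) ** 2)
```

The profile is flat within r_core, isothermal-like in between, and falls off steeply beyond r_cut, which keeps the total mass finite.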
Fig. 2: Typical error $\Delta\mu$ in the magnification factor, as a function of the magnification, for Abell 1835. The solid curve gives the statistical error derived from the MCMC model, while the dashed curve gives the systematic error between two choices of parametrization (see text for details). The thick solid curve is the quadratic sum of both errors, used later in the calculation. The vertical line represents the conservative upper limit adopted for $\mu$.
A good approximation of the angular radius of the critical line, corresponding to maximum magnification in a flat universe, is given by the Einstein radius $\theta_E$:

\[ \theta_E = 4\pi\,\frac{\sigma^2}{c^2}\,\frac{D_{LS}}{D_{OS}}, \qquad (3) \]

where $D$ stands for the angular-diameter distance between the observer (O), the lens (L) and the source (S). Source and cluster redshifts are, respectively, $z_S$ and $z_L$.
The value of $\theta_E$ provides a fair estimate of the extension of the strongly magnified area in the image plane. This value quantifies the power of a gravitational telescope to magnify background sources. Equation (3) shows that, for a given source redshift $z_S$, $\theta_E$ depends on $\sigma$ and on the cluster redshift $z_L$.
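The Einstein-radius scaling can be evaluated numerically. The sketch below assumes $\theta_E = 4\pi(\sigma/c)^2 D_{LS}/D_{OS}$ in a flat ΛCDM cosmology; the fiducial values H0 = 70 km/s/Mpc and Ωm = 0.3 are our choices for illustration:

```python
import math

C_KM_S = 299792.458          # speed of light [km/s]
H0, OM, OL = 70.0, 0.3, 0.7  # fiducial flat LCDM (our assumption)

def comoving_distance(z, steps=2000):
    """Comoving distance [Mpc] by trapezoidal integration of dz'/E(z')."""
    dz = z / steps
    total = 0.0
    for i in range(steps):
        z1, z2 = i * dz, (i + 1) * dz
        e1 = math.sqrt(OM * (1.0 + z1) ** 3 + OL)
        e2 = math.sqrt(OM * (1.0 + z2) ** 3 + OL)
        total += 0.5 * (1.0 / e1 + 1.0 / e2) * dz
    return C_KM_S / H0 * total

def angular_diameter_distance(z1, z2):
    """D_A(z1, z2) [Mpc] in a flat universe."""
    return (comoving_distance(z2) - comoving_distance(z1)) / (1.0 + z2)

def einstein_radius_arcsec(sigma_km_s, z_lens, z_source):
    """theta_E = 4 pi (sigma/c)^2 D_LS/D_OS, converted to arcseconds."""
    d_ls = angular_diameter_distance(z_lens, z_source)
    d_os = angular_diameter_distance(0.0, z_source)
    theta = 4.0 * math.pi * (sigma_km_s / C_KM_S) ** 2 * d_ls / d_os
    return math.degrees(theta) * 3600.0
```

For a sigma ~ 1400 km/s cluster at z_L ~ 0.2 lensing a z = 9 source, this gives theta_E of a few tens of arcseconds, in order-of-magnitude agreement with the value quoted for A1689.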
For the three clusters mentioned before, there is a significant variation in redshift ($z_L \simeq 0.18$-0.31) and in $\sigma$ (taken from the mass models and reported in Table 1), Abell 1689 being more massive and less distant than AC114, for instance. We explored a wider range of cluster redshifts in our simulations, producing fiducial lensing clusters by adjusting $z_L$ between 0.1 and 0.8 in the three cases, assuming no evolution in the cluster properties. This is clearly an over-simplistic and conservative assumption, as massive clusters of galaxies undergo a dynamical evolution during cluster assembly at high redshift.
The relevance of the MCMC approach to cluster mass modeling is that it provides meaningful statistical errors on the magnification factors. Figure 2 illustrates the typical errors in the magnification at a given position of the lensing field, in the case of the lensing cluster Abell 1835. Similar errors are found for the two other clusters, as all of them have 5 multiple systems constraining independent regions of the cluster cores, the majority of them having spectroscopic redshifts. For reasonable magnification factors (up to ~3 mag), this error is always smaller than 0.1 mag (i.e. ~10% relative error in flux). For larger magnification factors, corresponding to the vicinity of the critical lines, the error can reach much higher values. The systematic errors in the magnification factors, due to the choice of parametrization when building the lensing model, can be estimated for Abell 1835, which has been modelled by Richard et al. (2009, submitted) using both PIEMD profiles and Navarro-Frenk-White (NFW, Navarro et al. 1997) profiles for the cluster-scale mass distributions. The comparison of magnifications from both models, at a given position, gives an estimate of the systematic error, which dominates at large $\mu$ (Fig. 2), reaching typical values of ~0.3 mag. We adopted a conservative upper limit on $\mu$ to avoid singularities in the magnification determination. This is justified by the finite resolution of instruments and the limited knowledge of the precise location of the critical lines at such high z. The affected area is not significant once averaged over the entire field of view. Nevertheless, the quadratic sum of the statistical and systematic errors in the magnification is later used to derive errors on the number density calculations when looking at lensing fields.
2.1.3 Survey simulations
In addition to cluster and source properties, the main ingredients to consider in the simulations are the following:
the typical field of view (FOV) of near-IR instruments on 8-10 m class telescopes and space facilities. The former typically range between a few and ~10 arcmin on a side (e.g. EMIR/GTC, Hawk-I/VLT). The latter are usually smaller (e.g. NICMOS/HST, JWST or WFC3-IR/HST). Figure 1 presents the comparison between these typical FOV values and the magnification regimes found in lensing clusters. The references for the different instruments used in the simulations are presented in Table 1;
the limiting magnitudes of present near-IR surveys, based on ground-based and space observations, tailored to select LBGs at these redshifts. The former are typically shallow (see Sect. 2.1.1), whereas the latter could reach much deeper limits with JWST (see Sect. 3.5 and 4).
The shallow magnitude limit achieved in ground-based observations should allow us to detect galaxies with a UV continuum corresponding to a few L* at these redshifts, whereas the typical depth of JWST surveys should reach well below L*.
Table 2: Characteristics of the images used to produce the foreground-object masks for each cluster field.
We can relate the UV luminosities of high redshift galaxies to the expected Lyman-α emission line by converting $L_{\rm UV}$ into a star formation rate SFR using the calibration from Kennicutt (1998):

\[ {\rm SFR}\ [M_\odot\,{\rm yr}^{-1}] = 1.4\times10^{-28}\ L_{\nu,{\rm UV}}\ [{\rm erg\,s^{-1}\,Hz^{-1}}]. \qquad (4) \]

The expected Lyman-α luminosity produced at equilibrium can be written as:

\[ L_{{\rm Ly}\alpha} = f_{{\rm Ly}\alpha}\ c_{{\rm Ly}\alpha}\ {\rm SFR}, \qquad (5) \]

where $f_{{\rm Ly}\alpha}$ is the escape fraction of Lyman-α photons and $c_{{\rm Ly}\alpha}$ the Lyman-α production rate per unit of star formation. Assuming no reddening, typical values for $c_{{\rm Ly}\alpha}$ are of the order of $10^{42}$ ergs s$^{-1}$ per $M_\odot\,{\rm yr}^{-1}$ (Schaerer 2002). We use these scaling relations when discussing the detectability of Lyman-α in lensing fields.
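These scaling relations can be chained directly. The coefficients below are our assumptions for illustration (the Kennicutt 1998 UV coefficient 1.4e-28, and a Case-B-like c_Lya = 1.1e42 erg/s per Msun/yr), not necessarily the exact values adopted in the paper:

```python
def sfr_from_uv(l_nu_uv):
    """Kennicutt (1998): SFR [Msun/yr] = 1.4e-28 * L_nu(UV) [erg/s/Hz]."""
    return 1.4e-28 * l_nu_uv

def lyman_alpha_luminosity(sfr, f_esc=1.0, c_lya=1.1e42):
    """L(Lya) [erg/s] = f_esc * c_lya * SFR; c_lya is an assumed
    Case-B-like production rate per unit SFR."""
    return f_esc * c_lya * sfr
```

Lowering the escape fraction f_esc scales the predicted line luminosity linearly, which is the dominant uncertainty when translating UV-continuum detections into Lyman-alpha detectability.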
2.2 Implementation
We explicitly compute the expected number counts N(z,m0) of sources at the redshift z brighter than a limiting magnitude m0 by a pixel-to-pixel integration of the (magnified) source plane as a function of redshift, using the sources and lensing cluster models described in the previous subsections. Number counts are integrated hereafter within a redshift slice around z, unless otherwise indicated. With respect to a blank field, the magnification pushes the limit of integration to fainter magnitudes, whereas the dilution effect reduces the effective volume by the same factor.
An important effect to take into account in cluster fields is light contamination coming from the large number of bright cluster galaxies, which reduces the surface area reaching the maximum depth, and consequently prevents the detection of faint objects, especially in the vicinity of the cluster center. This contamination effect can be as high as 20% of the total surface (Richard et al. 2006), whereas it is almost negligible in blank field surveys.
We created bright-object masks by measuring the central position ($x_c$, $y_c$) and shape parameters (a, b, $\theta$) of galaxies in the three cluster fields, each object being approximated by an ellipse in this process. We used SExtractor (Bertin & Arnouts 1996) in combination with reasonably deep and wide ground-based images available from the ESO archive (larger than the FOVs used in these simulations). The characteristics of these images are summarized in Table 2. They were reduced using standard IRAF routines. The image mask M(x,y) produced is the superposition of the ellipses of all objects in the photometric catalog, where pixels belonging to object domains were flagged. Ellipses correspond to isophotes above the background sky. In other words, only images lying on empty regions have been included in the lensed samples, thus providing a lower limit for detections in lensing fields. The fractional area covered by foreground galaxies ranges between 6% and 12%, depending on the cluster as well as the size and central position of the field of view (Table 2). The largest hidden area corresponds to the smallest field of view centered on the cluster (JWST-like). NICMOS pointings are even smaller, but they are centered on the critical lines in our study and avoid the crowded central regions of the cluster. In blank fields, this value does not exceed 3-4%. In the next sections, this correction is taken into account in the calculations of number counts, both in blank and lensing fields.
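The mask construction reduces to flagging pixels inside rotated ellipses. A minimal stand-alone sketch (the actual masks were built from SExtractor catalogs, not this toy code):

```python
import math

def ellipse_mask(nx, ny, ellipses):
    """Boolean mask: True inside any ellipse (xc, yc, a, b, theta_deg)."""
    mask = [[False] * nx for _ in range(ny)]
    for xc, yc, a, b, theta in ellipses:
        c, s = math.cos(math.radians(theta)), math.sin(math.radians(theta))
        for y in range(ny):
            for x in range(nx):
                dx, dy = x - xc, y - yc
                u = (dx * c + dy * s) / a    # coordinate along the major axis
                v = (-dx * s + dy * c) / b   # coordinate along the minor axis
                if u * u + v * v <= 1.0:
                    mask[y][x] = True
    return mask

def masked_fraction(mask):
    """Fraction of the field hidden by foreground objects."""
    flat = [p for row in mask for p in row]
    return sum(flat) / len(flat)
```

Summing the flagged fraction over the field directly yields the 6-12% hidden-area correction discussed above.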
Including the object mask M(x,y), number counts N(z,m0) are given by the following expression:

\[ N(z,m_0) = \sum_{x,y}\ \left[1 - M(x,y)\right]\ \frac{C_v(x,y,z)}{\mu(x,y)}\ \int_{L_{\rm lim}(z,m_0)/\mu(x,y)}^{\infty} \phi(L)\ {\rm d}L, \qquad (6) \]

where $\mu(x,y)$ is the magnification induced by the lensing field, $C_v(x,y,z)$ is the covolume associated with a single spatial resolution element (pixel), $L_{\rm lim}(z,m_0)$ is the luminosity corresponding to the limiting magnitude $m_0$ at redshift z, and (L*, $\alpha$, $\phi^*$) are the parameters of the LF.
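A minimal sketch of this pixel-by-pixel integration, with the magnification deepening the luminosity limit while diluting the covolume per pixel (hypothetical toy maps and function names, not the actual simulation code):

```python
def lensed_counts(mu_map, mask, covol_pixel, l_lim, n_brighter):
    """
    Pixel-by-pixel lensed number counts.
    mu_map:      2D list of magnification factors mu(x, y)
    mask:        2D list, True where foreground cluster light blocks detection
    covol_pixel: comoving volume element associated with one unlensed pixel
    l_lim:       survey luminosity limit in the absence of lensing
    n_brighter:  callable, cumulative LF n(>L)
    """
    total = 0.0
    for mu_row, blocked_row in zip(mu_map, mask):
        for mu, blocked in zip(mu_row, blocked_row):
            if blocked:
                continue
            # magnification deepens the limit (l_lim/mu) but dilutes volume (1/mu)
            total += (covol_pixel / mu) * n_brighter(l_lim / mu)
    return total
```

With a steep cumulative LF (e.g. n(>L) proportional to L^-2) a uniformly magnified field out-counts a blank one, while a shallow LF (n(>L) proportional to L^-0.5) is depleted, which is the magnification-bias behaviour quantified in Sect. 2.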
A conservative upper limit on $\mu$ was adopted in the vicinity of the critical lines in order to avoid singularities in the magnification/dilution determination. This is justified by the finite resolution of instruments and the limited knowledge of the precise location of the critical lines at such high z.
When exploring the impact of cluster redshift, we assumed no evolution in the physical parameters $\sigma_0$, $r_{\rm core}$ and $r_{\rm cut}$ of individual potentials, thus keeping the total mass of the cluster constant in this process. Variations in $z_L$ with respect to the original redshift of the cluster produce a geometrical effect on the central positions ($x_i$, $y_i$) of each PIEMD potential i, measured from a reference position ($x_0$, $y_0$) which coincides with the center of the BCG:

\[ (x_i, y_i) \rightarrow \frac{D_A(z_{L,{\rm ref}})}{D_A(z_L)}\ (x_i - x_0,\ y_i - y_0), \qquad (7) \]

where $D_A(z)$ is the angular-diameter distance to redshift z. Similarly, we produced fiducial masks at different cluster redshifts, based on the reference mask at $z_{L,{\rm ref}}$ and adjusting the parameters of the galaxy ellipses by applying the same scaling relations to ($x_c$, $y_c$). The sizes a and b were scaled by the same distance ratio.
As discussed above, lensing introduces two opposite trends in the observed sample as compared to blank fields: gravitational magnification by a factor $\mu$, increasing the depth and thus the number of detectable sources, and reduction of the effective surface by the same factor, thus leading to a dilution in the expected counts. This effect was first studied by Broadhurst et al. (1995).
If we consider, for a given redshift z, the cumulative abundance n(>L,z) of sources (per unit solid angle and per redshift bin) with a luminosity greater than L, the magnification bias will change the observed counts depending on $\mu$ according to

\[ n'(>L,z) = \frac{1}{\mu}\ n\!\left(>\frac{L}{\mu},\ z\right) = \mu^{\alpha-1}\ n(>L,z), \qquad (8) \]

where $\alpha$ is the logarithmic slope of n(L,z), assuming that this function is well represented by a power law in this interval of luminosities ($\alpha = -{\rm d}\ln n/{\rm d}\ln L$). The effect on number counts is as follows:

if $\alpha > 1$, the number counts will increase with respect to a blank field, and

if $\alpha < 1$, there will be an opposite trend, i.e. a depletion in number counts.
With increasing depth, the effective slope $\alpha$ will decrease to a greater or lesser extent depending on the LF, the FOV (because it determines the mean $\mu$) and the redshift of the sources, eventually leading to a depletion in number counts in lensing fields as compared to blank fields. With these simple considerations, we expect lensing clusters to be more efficient than blank fields in relatively shallow surveys.
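For a pure power law n(>L) proportional to L^-alpha, the lensed-to-blank count ratio reduces to mu^(alpha-1), which makes the sign of the bias explicit:

```python
def bias_ratio(mu, alpha):
    """Lensed-to-blank ratio of cumulative counts for n(>L) ~ L^-alpha."""
    return mu ** (alpha - 1.0)

# alpha > 1: enhancement; alpha = 1: no net effect; alpha < 1: depletion
for alpha in (2.0, 1.0, 0.5):
    print("alpha = %.1f  ratio = %.3f" % (alpha, bias_ratio(10.0, alpha)))
```

Since deeper surveys probe the flatter faint end of the LF (smaller effective alpha), the same cluster can turn from a positive to a negative bias as the limiting magnitude increases, as discussed in Sect. 4.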
The efficiency of using lensing clusters as gravitational telescopes to find high-z galaxies can be quantified with simple assumptions taking advantage of the properties of the sources explained in Sect. 2. In this section, we discuss the results obtained by exploring the relevant intervals in the parameter space. We present a comparison between the number counts expected in lensing and blank fields, as a function of source redshift and for different LFs. The influence of lensing cluster properties and redshift is also studied, as well as the expected field to field variance.
3.1 The influence of the field of view
Here we discuss the influence of the FOV in the simulations for typical surveys. The influence of the limiting magnitude will be discussed in Sect. 4. Three different FOVs are considered here:

a large (``EMIR-like'') aperture;

an intermediate (``JWST-like'' or ``WFC3/HST'') aperture;

a small (NICMOS/HST) aperture.
In the last case, the FOV is centered along the critical lines in order to achieve the highest mean magnification (Fig. 1).
The limiting magnitude corresponds to a luminosity ranging between L*(z=6) and 3L*-5L* at redshifts up to 10. The cluster model corresponds to AC114, but the results are qualitatively the same with the other models. Figure 3 displays the relative gain in number counts between lensing and blank fields as a function of source redshift, for the three values of the FOV mentioned above, and for the three LFs adopted in the present simulations.
The largest gain is obtained for the smallest FOV, as expected from geometrical considerations, because the mean magnification decreases with increasing FOV, and in this case given the shallow depth of the survey. For a given FOV, the difference between lensing and blank field results strongly depends on the shape of the LF. Hence, the comparison between lensing and blank field number counts is likely to yield strong constraints on the LF, provided that field-to-field variance is small enough. This issue is addressed in Sect. 3.5. In the following subsections, we adopt the largest (``EMIR-like'') FOV unless otherwise indicated.
Fig. 3: Relative gain in number counts between lensing and blank fields as a function of the source redshift, for the three fields of view: the largest in black (solid line), the intermediate one in blue (dotted line) and the smallest in red (dot-dashed line), from bottom to top. The three panels, from left to right, correspond to the three LFs (a), (b) and (c) respectively.
Fig. 4: Comparison between the expected number counts of galaxies in a typical FOV, down to the limiting magnitude, per redshift bin, in a blank field (dashed lines) and in the field of a lensing cluster (solid lines; from left to right: A1689, A1835 and AC114). Expected counts are obtained by integration of the three luminosity functions (a), (b) and (c), from top to bottom. The limit of one source detected in the field of view is indicated by a horizontal line to guide the eye.
3.2 Lensing versus blank field efficiency
In this section, we study the effects of lensing clusters on source counts, using lensing models for the three reference clusters. We compute the expected number of sources brighter than $m_0$, the typical apparent magnitude reached in ground-based near-IR surveys. The comparison between the expected number counts of galaxies in a typical FOV, down to $m_0$, per redshift bin, in a blank field and in the field of a strong lensing cluster is presented in Fig. 4 on a logarithmic scale.
We also estimate the error on number counts due to the uncertainties on the magnification factors (Sect. 2.1.2). The choice of the LF has no influence on the following results. Field-to-field variance dominates the error budget whatever the regime. Statistical and systematic errors on the lensing models are smaller but not negligible, as their contribution is less sensitive than field-to-field variance to the number of objects. In particular, they reach a substantial fraction of the error budget when the number of detected sources is relatively high (i.e. when field-to-field variance is relatively small), and become comparatively unimportant when the number of sources is small (e.g. for LF(c) at high redshift, for any FOV).
As shown in Fig. 4, the presence of a strong lensing cluster has a dramatic effect on the observed number of sources, with a positive magnification effect. Strong lensing fields are a factor of 2 to 10 more efficient than blank fields for the most optimistic LF (a), the gain increasing for the LFs (b) and (c), reaching a factor of 10 to 100 at the highest redshifts. A positive magnification bias is observed, increasing with the redshift of the sources, and also increasing from optimistic to pessimistic values of the LF. This trend is indeed expected given the steep shape of the LF around the typical luminosity limits achieved in ground-based ``shallow'' surveys.
Quantitatively (cf. Table 3 and Fig. 4), if the LF for LBGs were nearly constant from z ~ 6 to 12, we could always detect at least one object over the redshift range of interest: up to 7-10 sources at the lower end of this range, and between 0.7 and 1 galaxy at the highest redshifts, in a lensing field. Even in a blank field, at least one LBG could be found in such a large field of view out to relatively high redshift. With more realistic (pessimistic) values of the LF (e.g. Bouwens et al. 2008, 2006), blank fields are particularly inefficient as compared to lensing fields. The size of the surveyed area would need to increase by at least a factor of 10 in order to reach a number of detections similar to the one achieved in a lensing field, and this factor increases with redshift. Note however that, given a limiting (apparent) magnitude, blank and lensing fields do not explore the same intrinsic luminosities (see also Sect. 4).
As seen in Fig. 4 and Table 3, there are also some differences between the results found for the three lensing clusters, although they are smaller than the differences between lensing and blank fields for a given LF. The number of expected sources behind A1689 is a factor of two to three larger (increasing with redshift) than in the other clusters for the realistic LFs (b) and (c), whereas the difference is only marginal for LF (a). The influence of lensing properties is studied in Sect. 3.4.
From the results above, it seems that lensing fields allow us to detect a larger number of sources based on their UV continuum, with some cluster to cluster differences. This result is essentially due to the shape of the LF. For magnitude limited samples selected within a given field of view, the positive magnification bias increases with the redshift of the sources and decreases with both the depth of the survey and the size of the surveyed area. The last trend is purely geometric, as discussed in the previous section. The differential behaviour between blank and lensing regimes strongly depends on the shape of the LF. The comparison between blank and lensing field observations could be of potential interest in constraining the LF, provided that field-to-field variance is sufficiently small. This issue is addressed in the following sections.
Table 3: Total number of objects expected within the reference FOV, down to the limiting magnitude, for the three LFs adopted in these simulations.
3.3 Redshift of the lensing cluster
Fig. 5: Same as Fig. 4, but using different assumptions for the redshift of the clusters, with $z_L$ in [0.1, 0.8] and a 0.1 step (for more details see Sect. 3.3).
The redshift of the lensing cluster is a potentially important parameter when defining an ``ideal'' sample of gravitational telescopes. Based on geometrical considerations, we expect the magnification bias to decrease with cluster redshift ($z_L$) after reaching a maximum efficiency at some point, depending on the cluster properties and the size of the surveyed field. The field of view considered here is typically a few square arcminutes, essentially including the region around the critical lines where the magnification factors are the highest. Below, we study the impact of $z_L$ on the magnification bias.
Using the non-evolution assumption presented in Sect. 2.2, we computed the expected number counts for the three reference models (A1689, A1835 and AC114) with cluster redshifts ranging between z=0.1 and 0.8, with a 0.1 step. A finer step was used in the z=0.1-0.3 interval, in order to refine the sampling around the maximum value. We use the same depth and field size as in the previous section. The effect of cluster redshift is clearly seen in Fig. 6, which represents the number of objects as a function of cluster redshift (for the three reference models), at a fixed source redshift of z=8.
The global effect of $z_L$ on number counts, as a function of the source redshift, is displayed in Fig. 5. This figure compares directly to Fig. 4 in the previous section. Table 4 presents the value of $z_L$ which corresponds to a maximum in the expected number counts at z=8. This value depends slightly on the source redshift and the LF: in addition to the variation when changing the LF, the optimum $z_L$ also increases with the source redshift, by up to +0.05 towards the highest redshifts. The search efficiency for distant galaxies in lensing fields is maximised when using clusters at low redshift ($z_L \sim 0.1$-0.3). Although the field of view considered here is relatively large for near-IR surveys and close to present-day cameras, it is the limiting factor at low $z_L$, where an increasing fraction of the strong-magnification area is lost with decreasing $z_L$. Also, in this regime, the field of view concentrates on the central region of the cluster, where bright cluster galaxies mask an important fraction of the strong-magnification area. The high magnification region represents an increasingly small percentage of the field with increasing $z_L$. Number counts in this regime asymptotically tend towards a limiting value with increasing $z_L$ (Fig. 5), which still represents a substantial gain with respect to a blank field of the same size. The non-evolution assumption for cluster properties has a weak effect on this conclusion. Indeed, clusters far from relaxation will be even less effective as gravitational lenses in the strongest magnification regime. The results obtained are hence an optimistic upper limit on number counts for realistic clusters beyond the optimal regime.
Fig. 6: Expected number of objects as a function of the cluster redshift, for a fixed source redshift (z=8) and with the same depth and field size as in the previous section. Three cluster models are displayed: A1689 (green dotted line), A1835 (blue dot-dashed line) and AC114 (red solid line). Panels from left to right display LFs (a), (b) and (c) respectively.
Fig. 7: Histograms showing the percentage of the surface (over the reference FOV) as a function of the magnification, for different cluster redshifts ($z_L$ = 0.1, 0.2, 0.3, 0.8), using the same color codes for the three clusters as in Fig. 6 (A1689: green dotted line, A1835: blue dot-dashed line, AC114: red solid line).
3.4 Influence of lensing cluster properties
In this section we focus on the differences between lensing cluster properties and their influence on the expected source counts. As seen in the previous sections, A1689-like clusters are expected to be more efficient irrespective of the cluster redshift. To understand this effect, we study the magnification regimes for a reference source plane at fixed high redshift. The distribution of magnification regimes in the image plane varies from cluster to cluster. The histograms in Fig. 7 represent the percentage of the image plane (for the reference FOV) as a function of the associated magnification. To perform this calculation, cluster redshifts were standardized to identical values for a better understanding of the phenomenon. As seen in the figure, A1689 shows a different regime at high magnification as compared to the other clusters: the fraction of the surface affected by strong magnification is several times higher in A1689 than in A1835 and AC114, depending on $z_L$. Nevertheless, this difference between clusters tends to fade with increasing cluster redshift due to projection effects, the fraction of highly magnified pixels becoming smaller with respect to the whole FOV. We also note that the AC114 and A1835 models have a similar behaviour, with minor differences (A1835-like clusters being more effective at very low $z_L$, while the AC114 model is more efficient at intermediate $z_L$).
Table 4: Redshift of the cluster which maximizes the number of objects detected at z=8, for the three LFs (a), (b) and (c), from top to bottom.
Another way of understanding this phenomenon is presented in Fig. 8, where the effective covolume sampled over the reference FOV is traced as a function of the effective magnitude, for a magnitude-limited survey. Magnification in lensing fields provides an enhanced depth for a magnitude-limited survey, at the price of an effective (lensing-corrected) covolume that decreases with increasing effective depth. The behavior of A1689 in Fig. 8 illustrates the situation for this particularly efficient cluster, which allows us to probe intrinsically fainter sources with a relatively modest observational investment. Except for some particularly efficient lensing clusters (such as A1689), most lensing fields should behave in the same way as A1835 or AC114.
3.5 Field to field variance
In this section, we address the expected field-to-field variance affecting our previous results, in order to estimate its impact in blank and lensed fields. We used two different approaches: the two-point correlation function estimate proposed by Trenti & Stiavelli (2008), and a pencil beam tracer through the Millennium simulation.
The first estimate is based on the method implemented by Trenti & Stiavelli (2008). This calculation of the cosmic variance is based on the two-point correlation function of the sample (Peebles 1993). The field-to-field variance is given by

\[ \sigma_v^2 = \frac{1}{V^2} \int_V\!\!\int_V \xi(|\mathbf{r}_1-\mathbf{r}_2|)\ {\rm d}V_1\ {\rm d}V_2, \qquad (9) \]
where V represents the volume of the survey.
Fig. 8: Effective (lensing-corrected) covolume sampled at z=6.5-7.5 by each cluster, as a function of the effective magnitude limit, for a magnitude-limited survey over the reference FOV. The three clusters are displayed with the same colors and line codes as in Fig. 6.
Table 5: Number counts and field-to-field variance, calculated with the correlation function both in blank and lensing fields, for z = 6, 7 and 8 within the reference FOV, for a shallow survey.
We focus on the z = 6-8 redshift interval, using the present ``shallow'' survey parameters (see Sect. 4.1), both in blank and lensing fields. We use the same parameters as in Sect. 3.2, i.e. the typical FOV and limiting magnitude.
We define the total fractional error on the counts N, following Trenti & Stiavelli (2008) (this is the so-called field-to-field standard deviation, often improperly called ``cosmic variance''), as:

\[ v_r = \frac{\sqrt{N + \sigma_v^2 N^2}}{N} = \sqrt{\frac{1}{N} + \sigma_v^2}. \qquad (10) \]
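Assuming the total fractional error combines a Poisson term 1/N with the clustering term sigma_v^2 in quadrature (our reading of the Trenti & Stiavelli prescription), it can be evaluated as:

```python
import math

def fractional_error(n_expected, sigma_v):
    """Total fractional error on counts: Poisson (1/N) plus cosmic
    variance sigma_v^2, combined in quadrature."""
    return math.sqrt(1.0 / n_expected + sigma_v ** 2)
```

For N = 100 and negligible clustering the error is 10%; for very large N it saturates at sigma_v, which is why deep surveys remain limited by clustering rather than by shot noise.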
Results are presented in Table 5 for the three LFs considered, and for the three typical clusters used in the simulations. We note an important field-to-field variance at such a shallow limiting magnitude, either in blank or lensing fields, due to the small number counts previously derived (see Table 3). Nevertheless, the variance is smaller behind gravitational telescopes, with the same differential trends between the three clusters as mentioned before, i.e. A1689 exhibits a stronger magnification bias (see Sect. 3.4) than the other two clusters, which have a similar behavior. Moreover, with increasing source redshift, the expected number counts decrease, leading to a larger field-to-field variance.
The second estimate is based on the Millennium simulation, carried out by the Virgo Consortium and described in detail in Springel et al. (2005) and Lemson & Virgo Consortium (2006). The simulation follows N = 2160^3 particles of mass $8.6\times10^8\ h^{-1}\,M_\odot$ within a comoving box of $500\ h^{-1}$ Mpc on a side. The cosmological model is a $\Lambda$CDM model with small differences with respect to the cosmological parameters adopted in Sect. 1, but without impact on the final results. These cosmological parameters are consistent with recent determinations from the combined analysis of the 2dFGRS and 3rd year WMAP data (Sanchez & Baugh 2006). Given its high resolution and large volume, the Millennium simulation allows us to follow in sufficient detail the formation history of a representative sample of high redshift galaxy environments. With these prescriptions and a realistic beam tracer, we can study the field-to-field variations in the number counts of star forming galaxies at the epoch of interest.
Our pencil beam tracer is similar to the one developed by Kitzbichler & White (2006). We trace through the simulation box a parallelepiped whose base is given by the reference field of view in comoving units, and whose depth is a fixed comoving depth. The variation of the angular distance with redshift across the selection window was properly taken into account. The simulation box is more than 2000 times larger than the effective volume probed by the FOV at z=6. We carried out 10 000 Monte Carlo realizations of the beam-tracing procedure, randomly varying the initial position of the beam, in order to calculate the typical number counts of galaxies and the associated standard deviation in the field of view under the same hypotheses.
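The beam-tracing Monte Carlo can be sketched as follows, with a toy galaxy catalog standing in for the Millennium outputs (illustration only; the function and variable names are ours):

```python
import random

def pencil_beam_counts(galaxies, box, beam, n_trials=1000, seed=42):
    """
    Monte Carlo pencil-beam counts in a periodic box.
    galaxies: list of (x, y, z) comoving positions, all in [0, box)
    beam:     (wx, wy, depth) comoving beam dimensions
    Returns (mean, standard deviation) of counts over randomly placed beams.
    """
    rng = random.Random(seed)
    wx, wy, depth = beam
    counts = []
    for _ in range(n_trials):
        x0, y0, z0 = (rng.uniform(0.0, box) for _ in range(3))
        n = 0
        for x, y, z in galaxies:
            # periodic wrapping of the beam across the box boundaries
            if ((x - x0) % box) < wx and ((y - y0) % box) < wy \
               and ((z - z0) % box) < depth:
                n += 1
        counts.append(n)
    mean = sum(counts) / len(counts)
    var = sum((c - mean) ** 2 for c in counts) / len(counts)
    return mean, var ** 0.5
```

A clustered catalog returns a standard deviation in excess of the Poisson expectation sqrt(mean), which is precisely the field-to-field contribution this method is designed to capture; a uniform catalog stays close to Poisson.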
Although this procedure is well suited to determining the field-to-field variance, several studies on this topic suggest an overprediction of the abundance of massive galaxies at high redshift in the Millennium simulation (e.g. Kitzbichler & White 2007). For this reason, we consider this second approach as a cross-check yielding a lower limit on the field-to-field variance. The results obtained from the Millennium simulation are displayed in Table 6. They are in fair agreement with those obtained with the first method.
Field-to-field variance on number counts obviously depends on the depth of the survey. In order to compare our results with existing photometric surveys, we calculated the number counts of sources in blank and lensing fields (here AC114) with the evolving LF(c), for several deeper magnitude limits (down to AB = 29.0), in our reference field of view and using the same parameters as in Sect. 3.2 (see Table 7). The correlation function was used to derive the cosmic variance. The total fractional error ($v_r$) strongly decreases with increasing photometric depth, as expected given the increasing number of sources detected in such a large FOV (at the deepest limit, the total number of sources is ~1000 times larger than in the ``shallow'' survey), both in blank and lensing fields. The fractional error appears slightly larger in lensing than in blank fields at z=6, but this effect reverses with increasing source redshift. These estimates for the blank field can be compared to present-day surveys. For instance, the field-to-field variation obtained by Bouwens et al. (2006) for a single deep ACS pointing is 35%. Using the same observational constraints (FOV, depth, ...), our simulations yield a slightly smaller but fairly compatible value.
Table 6: Number counts for and field to field uncertainties (vr) calculated from the Millennium simulation in a blank field, for different source redshifts.
Table 7: Field to field variance for 3 different magnitude limits: , 28.0 and 29.0, in a blank field and lensing field (behind AC114) for the LF(c).
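The total fractional error on number counts discussed above can be approximated — an assumption of this sketch, not the paper's exact treatment — by adding the Poisson term 1/sqrt(N) and the relative cosmic variance in quadrature (e.g. in the spirit of Trenti & Stiavelli 2008):

```python
import math

def total_fractional_error(n_sources, sigma_cv):
    """Total fractional uncertainty on number counts: Poisson shot noise
    (1/sqrt(N)) and the relative cosmic variance sigma_cv, added in
    quadrature.  The exact cosmic-variance term depends on the
    correlation function integrated over the survey volume."""
    poisson = 1.0 / math.sqrt(n_sources)
    return math.sqrt(poisson ** 2 + sigma_cv ** 2)

# With many more sources, the Poisson term becomes negligible and the
# total error converges towards the cosmic-variance floor.
shallow = total_fractional_error(10, 0.30)
deep = total_fractional_error(10000, 0.30)
```

This reproduces the qualitative behaviour of Table 7: deeper surveys detect many more sources, so the total fractional error tends towards the cosmic-variance floor.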
4.1 Survey parameters and efficiency
As discussed in Sect. 2.1.3, the FOV and the limiting magnitude are two important survey parameters used in these simulations. The influence of the FOV for a fixed limiting magnitude strongly depends on the shape of the LF. The highest ratio in number counts between lensing and blank fields can be achieved with the smallest FOV due to simple geometrical considerations. This section specifically addresses the evolution on the survey efficiency in lensing and blank fields as a function of the limiting magnitude.
For these purposes, we use the same approach as in Sect. 3.2 to derive number counts within a FOV in blank fields and behind lensing clusters. AC114 is used here as a representative lensing cluster. Figure 9 displays the expected number counts as a function of the redshift of sources, for different depths ( and 29.0). An opposite trend between blank and lensing fields appears, depending once again on the LF and on the redshift of sources. With increasing limiting magnitude, the efficiency of the survey towards a foreground cluster diminishes and becomes lower than in blank fields, leading to a negative magnification bias at the faintest limiting magnitudes (e.g. for LF(a) between , for LF(b) between and for LF(c) beyond ). This trend, however, is highly sensitive to the FOV. In particular, the negative magnification bias appears towards the typical magnitudes achieved by space facilities (JWST). Figure 10 displays the same results as in Fig. 9 but for a FOV (JWST-like). The main characteristics remain broadly unchanged; the general trends are simply exacerbated, with the inversion occurring at shallower depths.
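The opposite trends between lensing and blank fields can be reproduced with a toy model — illustrative parameters only, not the simulation used in the paper. For a magnification mu, the effective luminosity limit drops to L_lim/mu while the source-plane solid angle is diluted by 1/mu, so the lensed-to-blank count ratio for a Schechter LF is N(>L_lim/mu) / (mu N(>L_lim)): the bias is positive where the cumulative counts are steep (bright limits) and negative where the faint-end slope makes them flat.

```python
import math

def schechter_cumulative(L_min, L_star, phi_star, alpha, n_steps=20000, x_max=50.0):
    """Cumulative Schechter number density N(>L_min), integrating
    phi* x^alpha exp(-x) dx with x = L/L* by the midpoint rule."""
    x_min = L_min / L_star
    dx = (x_max - x_min) / n_steps
    total = 0.0
    for i in range(n_steps):
        x = x_min + (i + 0.5) * dx
        total += x ** alpha * math.exp(-x) * dx
    return phi_star * total

def magnification_bias(L_lim, mu, L_star, phi_star, alpha):
    """Ratio of lensed to blank-field counts above L_lim: the limit is
    lowered to L_lim/mu while the solid angle is diluted by 1/mu."""
    lensed = schechter_cumulative(L_lim / mu, L_star, phi_star, alpha) / mu
    blank = schechter_cumulative(L_lim, L_star, phi_star, alpha)
    return lensed / blank
```

With alpha = -1.6 and mu = 5, a bright limit of 2 L* gives a ratio well above unity, while a faint limit of 0.05 L* gives a ratio below unity — the negative magnification bias of deep surveys.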
Lensing and blank field surveys do not explore the same intrinsic luminosities, as shown in Figs. 11 and 12. These figures compare the expected number density of sources as a function of their intrinsic UV luminosity (or equivalent SFR) for different limiting magnitudes ranging from to 29.0. In the case of lensing fields, two different results are given, depending on the FOV around the cluster center. In this particular case, the source redshift is arbitrarily fixed to , assuming a strongly evolving LF(c), and the lensing cluster is AC114.
In summary, the number of z>8 sources expected at the typical depth of JWST ( ) is much higher in lensing than in blank fields if the UV LF is rapidly evolving with redshift (LF(c)), as suggested by Bouwens et al. (2008). The trend should be the opposite if the LF remains unchanged between and 8. Lensing clusters are the only way to study the faintest building blocks of galaxies, with typical to . On the contrary, wide field surveys covering 10^3 to 10^4 arcmin^2 are needed to set reliable constraints on the brightest part of the LF at , i.e. for galaxies with .
Figure 9: Expected number counts of objects as a function of the redshift of sources in a FOV, for different limiting magnitudes 26.0, 27.0, 28.0 and 29.0, from bottom to top respectively. This calculation is provided both in blank (dotted line) and lensing fields (solid line) (here AC114) and for the three LFs (from right to left: a) in red, b) in blue and c) in black).
Figure 10: The same as Fig. 9 but for a FOV. Some differences appear in comparison with Fig. 9. For example, the total numbers of high-z galaxies expected behind lensing clusters (solid lines) and in the field (dashed lines) are much larger at low limiting magnitude ( ), but this phenomenon is reversed for deeper surveys ( ) (see text for details).
4.2 Influence of galaxy morphology and image sampling
Gravitational magnification (e.g. in the tangential direction) induces an elongation of images along the shear direction while preserving the resolution in the perpendicular direction and the surface brightness of high redshift galaxies. All the comparisons between lensing and blank fields in our simulations assumed that observations were conducted with the same instrument setup in terms of FOV and spatial sampling, and under the same observational conditions, in particular the same limiting surface brightness and PSF. However, when comparing magnitude-limited samples in lensing and blank fields, it is worth discussing the influence of galaxy morphology and image sampling on the present results. In particular, the evolution in the surface brightness of high redshift sources may hinder the search efficiency in clusters if, for instance, number counts in clusters were dominated by sources below the limiting surface brightness.
As explained in Sect. 2.1.1, all the previous results have been obtained assuming that galaxies at z>7 are compact as compared to spatial sampling. Indeed, high redshift sources are expected to be very small, typically on the sky, based on cosmological simulations (e.g. Barkana & Loeb 2000), in such a way that the high resolution capability of JWST is needed for resolving such faint galaxies. Recent observations of LBG candidates in the HUDF fully support this idea (Bouwens et al. 2008; Oesch et al. 2009; Bouwens et al. 2009b). In a recent paper, Oesch et al. (2009) measured the average intrinsic size of LBGs to be kpc. These galaxies are found to be extremely compact, with very little evolution in their half-light radii between and 7, roughly consistent with galaxies having constant comoving sizes, at least within the observed luminosity domain 0.1-1 L*(z=3). Smaller physical sizes are expected for higher redshift and/or intrinsically fainter galaxies, based on the scaling of the dark matter halo mass or the disk circular velocity (Mo et al. 1998). This differential trend is actually observed between the bright ( 0.3-1 L*) and the faint ( 0.12-0.3 L*) samples of Oesch et al. (2009).
Figure 11: Cumulative surface density of sources as a function of their intrinsic UV luminosity, in blank fields (blue solid line) and in lensing fields with FOV 5 arcmin2 (JWST-like, red solid line), for different photometric depths ranging from shallow ( ) to deep ( ) surveys and a strongly evolving LF(c). The source redshift is arbitrarily fixed at z=8, with .
Figure 12: Cumulative surface density of sources as a function of their intrinsic UV luminosity and SFR, in blank fields (blue solid line) and in lensing fields with FOV (red dotted line) and (JWST-like, red dashed line), for different photometric depths ranging from shallow ( ) to deep ( ) surveys and a strongly evolving LF(c). The mean magnification over the whole field is used to derive the lensing points; the true distribution is displayed in Fig. 11. The source redshift is arbitrarily fixed at z=8, with . The conversion from absolute magnitude to SFR is provided in Sect. 2.1.3 using the calibrations from Kennicutt (1998).
If all high-z galaxies exhibit the same compact and uniform morphology, the effective mean surface brightness of a lensed galaxy will be brighter or fainter with respect to a blank field galaxy with the same apparent magnitude depending on the spatial resolution (in practice, the instrumental PSF). The majority of lensed sources should remain spatially unresolved in their width in seeing-limited ground-based surveys, and even in their tangential direction up to a gravitational magnification . Hence, the apparent surface brightness of a lensed source is actually brighter than that of a blank field galaxy of similar apparent magnitude (by roughly mags for a spatially unresolved galaxy). This situation is typically found in the ``shallow and wide'' near-IR surveys discussed above (e.g. for the FOV), where lensing clusters are particularly efficient.
On the contrary, for a fixed apparent magnitude, the effective mean surface brightness of a lensed galaxy is expected to become fainter with respect to a blank field galaxy when the image resolution is similar to or better than its (lensed maximum) half-light radius, reaching mags in the worst case. This situation is typically expected in the ``deep and narrow'' near-IR surveys with space facilities. In practice, the best spatial resolution presently achieved with HST/WFC3 in the near-IR is , reaching with JWST/NIRCam, i.e. the typical size of the brightest LBG candidates presently identified. Therefore, the majority of lensed sources should remain spatially unresolved in their width. A lensed source entering the apparent-magnitude limited sample because of its magnification also has a smaller physical size, by a factor of (assuming a constant M/L scaling with the halo mass) or (assuming a constant M/L scaling with the halo circular velocity), leading to an apparent increase in its surface brightness with respect to blank-field observations of the same galaxy. Given the spatial resolution achieved with HST and JWST, this intrinsic-size effect tends to compensate the image dilution described above, in such a way that the actual surface brightness of the lensed galaxy should be close to the surface brightness of a blank field galaxy of similar apparent magnitude.
For the reasons explained above, and to the best of present knowledge, we do not expect the apparent-magnitude limited number counts derived in clusters to be strongly biased by sources below the limiting surface brightness, provided that the usual scalings apply to the size of high-z sources.
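The two limiting regimes discussed above can be made quantitative with a simple helper (a sketch; the precise values quoted in the text are not reproduced here): a source that stays unresolved under a magnification mu gains 2.5 log10(mu) magnitudes in apparent surface brightness, since its flux rises by mu while the image area stays fixed at the PSF footprint, whereas a fully resolved image conserves surface brightness.

```python
import math

def sb_gain_mag(mu, resolved=False):
    """Surface-brightness gain, in magnitudes, of a lensed image of
    magnification mu relative to an unlensed galaxy of the same
    apparent magnitude.  Unresolved images gain 2.5 log10(mu); fully
    resolved images conserve surface brightness (gain 0)."""
    if resolved:
        return 0.0
    return 2.5 * math.log10(mu)
```

Partially resolved images fall between these two limits, depending on how much of the tangential stretch is resolved by the PSF.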
4.3 Comparison with current survey results
We have compared our simulation results to recent observations looking for high-z LBGs. For instance, the discovery of a bright lensed galaxy by Bradley et al. (2008), with (intrinsic ), in a FOV survey around A1689 is in fair agreement with our expectations. Indeed, given the survey characteristics and including 100% variance for , we expect between 0.2 and 0.8 such bright objects in this lensing field, if the LF remains constant between and 8 (LF(a)). In case of a strongly evolving LF(c), the expected number of sources in this survey is 0.12 (i.e. ranging between 0 and 0.5 with 200% variance), making the discovery of this bright source particularly fortunate. Our results for lensing fields are also consistent with the number of LBGs found by Richard et al. (2008) down to the depth of their survey, using LF(b) or (c). Quantitatively, Richard et al. (2008) detected 5 sources with 12 pointings over 6 clusters. With our simulations, objects with a variance of are expected with the LF(c) model. We also compared our results with the surface density of candidates in the deep near-IR data behind clusters obtained by Bouwens et al. (2009a). Bouwens et al. (2009a) found a surface density of arcmin-2 with a typical NICMOS3 FOV. With the strongly evolving LF(c) and the same survey characteristics used in our simulations, we expect a surface density of 0.01 arcmin-2 behind a typical cluster such as AC114, with a variance of . This result shows relatively good agreement when field to field variance is taken into account.
4.4 Lyman Break versus NB searches
In this section, we discuss the relative efficiency of blank and lensing fields for the detection of LAEs, based either on NB surveys or on the spectroscopic follow-up of LBGs at z>6. Although the observational effort required to select candidates using the dropout technique seems relatively cheap as compared to the NB approach, the two approaches are complementary, as emphasized by the fact that many objects found through Lyα emission remain weak or undetected in the continuum (e.g. Rhoads & Malhotra 2001; Kodaira et al. 2003; Cuby et al. 2003; Taniguchi et al. 2005). A quantitative comparison between the properties of LAEs and LBGs at within the same volume should provide important information on the Lyα transmission, SFR and other properties of these high-z galaxies.
Since the pioneering Large Area Lyman Alpha Survey (LALA, Rhoads & Malhotra 2001; Rhoads et al. 2003), different NB surveys in blank fields have provided interesting galaxy samples in the interval, e.g. the large sample of Lyα emitters at by Hu et al. (2004), the z=6.17 and 6.53 galaxies found respectively by Cuby et al. (2003) and Rhoads et al. (2004), the two galaxies detected by Kodaira et al. (2003), and the galaxy at redshift z=6.96 found by Iye et al. (2006). In the latter case, which should be representative of samples, the authors used a combination of NB imaging at 8150 Å (SuprimeCam) and broad-band photometry in the optical bands to select candidates for a subsequent spectroscopic follow-up with DEIMOS/Keck. Their confirmation rate is relatively high (18 sources out of 26 candidates), leading to 0.03 sources/arcmin2 per redshift bin . Similar results are reported by Kashikawa et al. (2006). All these sources have strong Lyα fluxes (a few 10^-17 erg cm^-2 s^-1) and display broad Lyα lines ( km s^-1). A strong evolution is found in the number density of LAEs at with respect to the interval (Iye et al. 2006; Willis et al. 2006; Cuby et al. 2007; Sobral et al. 2009).
Figure 13: Cumulative surface density of observed sources as a function of their Lyα luminosity ( FOV, , redshift of sources fixed at 6.6 for the LF(c)). The density is calculated in blank (blue solid line) and in lensing fields (red solid line) for different limiting magnitudes (from right to left: 25.5, 26.0, 27.0, 28.0 and 29.0). Dashed lines display number counts corrected by the transmission value (dashed red line in lensing fields and dashed blue line in blank fields). For comparison, raw number counts extracted from the spectroscopic sample of LAEs by Kashikawa et al. (2006) are also given (black solid line). The number density derived from Iye et al. (2006) at z=6.96 is also indicated, together with the corresponding error bars (see text for details). As in Fig. 12, the magnification used to derive the lensing points is averaged over the entire field.
The number of LAEs expected within a sample of LBGs at can be estimated using the distribution of Lyα equivalent widths derived for the spectroscopic sample of LBGs at by Shapley et al. (2003), assuming no evolution in the population of LAEs with respect to LBGs. This simplistic scaling should be sufficient for the needs of the simulation. We introduce a factor , defined below, which can be linked to the Lyα transmission as follows:
where L1500 is the UV monochromatic luminosity at 1500 Å, is the Lyα luminosity and is the Lyα equivalent width. With this simple assumption, the average value of the Lyα equivalent width is 10 , corresponding to . This value can be used to derive a rough estimate of the expected number density of LAEs from a population of LBGs. In addition, the number density is also corrected to take into account the fraction of the LBG sample displaying Lyα in emission.
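The exact expression used in the paper is elided in the source text; the sketch below assumes the standard scaling L_Lyα = W_Lyα × L_λ(Lyα), with the continuum density at Lyα extrapolated from L1500 with a power-law slope β (f_λ ∝ λ^β; β = -2 corresponds to a flat f_ν continuum). Parameter names are illustrative.

```python
def lyman_alpha_luminosity(l_lambda_1500, ew_lya, beta=-2.0,
                           lam_lya=1215.67, lam_uv=1500.0):
    """Lya line luminosity (erg/s) from the UV continuum density at
    1500 A, l_lambda_1500 (erg/s/A), and the rest-frame equivalent
    width ew_lya (A).  The continuum is extrapolated from 1500 A to
    the line assuming f_lambda proportional to lambda**beta."""
    continuum_at_lya = l_lambda_1500 * (lam_lya / lam_uv) ** beta
    return ew_lya * continuum_at_lya
```

For a flat f_λ continuum (β = 0), a 10 Å equivalent width simply gives a line luminosity ten times the monochromatic continuum luminosity at 1500 Å.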
Figure 13 displays the cumulative number counts of sources at integrated from the LF(c) as a function of the Lyα luminosity, scaled according to the UV luminosities (cf. Sect. 2.1.1), in the typical FOV, together with a comparison with observations in a similar redshift domain ( for the Kashikawa spectroscopic sample of LAEs and Iye et al. 2006 at z = 6.96).
The number density of LBGs at z=7 (with , close to the bandwidth of NB surveys) ranges between 0.001 (LF(c)) and 0.02 (LF(a)) sources/arcmin2 for a survey limited to , depending on the LF. Lensing clusters improve these numbers by a factor ranging between 6 (for LF(c)) and 2 (for LF(a)). In case of a deep survey limited to , the number densities reach 1 (LF(c)) to 2 (LF(a)) sources/arcmin2. In this case, there is a negative magnification bias of the order of 20%. These numbers, obtained with a simplistic model, are between a factor of 10 (for bright sources) and a few (for faint sources) smaller than the number densities obtained by Kashikawa et al. (2006) for their spectroscopic sample. With increasing redshift (see Fig. 9), for instance at z=9 with the strongly evolving LF(c), no sources can be detected in a shallow survey limited to , and for a deeper survey ( ) a minimum surveyed area of 3 arcmin2 is needed to obtain 1 source in a blank field. In a lensing field with the LF(c), these number densities reach 0.002 for and 0.32 sources/arcmin2 for . The relatively low efficiency of lensing clusters with NB techniques in the domain has recently been confirmed by the results of Willis et al. (2008).
The preselection of candidates in lensing fields has two main advantages with respect to blank fields. In the shallow ( ) regime, there is an increase by a factor of 8-10 in the number of sources detected and a moderate gain in depth for a given exposure time (i.e. 0.5 mag at ). In the deep-survey regime ( ), there is a gain in intrinsic depth for a number of candidates that remains essentially constant (i.e. a 0.8 mag gain at ). The relative efficiency of lensing with respect to blank field counts in Fig. 12 depends on the FOV. The two predictions approach each other with increasing FOV in lensing surveys, and the trend goes in the opposite direction for smaller FOVs. This trend is the same for both LBGs and LAEs. To explore the bright end of the LF, blank field surveys with a large FOV are needed, whereas lensing clusters are particularly useful for exploring the faint end of the LF. This trend is further discussed in the next section.
Table 8: Field to field variance.
4.5 Towards the ideal survey: constraining the luminosity function of high-z sources
All present photometric surveys aimed at constraining the UV LF at z>7, either space or ground-based, are still dramatically small in terms of effective surface. Wide and deep optical+near-IR surveys in lensing and blank fields are needed to set strong constraints on the LF and on the star-formation density at z>7. An important issue is the combination between photometric depth and surveyed area which is needed to identify a representative number of photometric candidates, or to reach a significant non-detection limit in order to constrain the LF of z>7 sources.
There are three different aspects to consider when designing an ``ideal'' survey aiming at constraining the LF: the depth and the area of the survey, and the corresponding field to field variance. In order to address these issues, we have computed the expected field to field variance corresponding to lensing and blank field surveys, for different survey configurations (area and depth). A summary of these results is given in Table 8 for different number of lensing clusters, and for two representative depths in the H-band (i.e. a ``shallow'' survey with , and a ``deep'' survey with ) assuming a strongly evolving LF(c) in all cases. This table complements the results given in Tables 5 and 7 for blank and lensing fields as a function of depth. In all cases, we use AC114 as a reference for lensing clusters.
Regarding field-to-field variance in number counts, results are expected to be similar in blank and lensing fields for a relatively wide FOV ( 40-50 arcmin2; see Sect. 3.5 and Table 7). As shown in Table 8, a deep lensing survey using 10 clusters should be able to reach a variance 20% on sources at , irrespective of the actual LF. This value is better than present-day photometric surveys in blank fields, typically reaching 30-35% for (e.g. Bouwens et al. 2008), which in turn is rather close to what could be achieved in a single lensing cluster for .
A different survey strategy consists of increasing the number of lensing clusters at a shallow limiting magnitude. In this case, a few tens of lensing clusters (typically between 10 and 50, depending on the LF) are needed to reach a variance of 30% at . Note that the difference in exposure time between the shallow and deep surveys reported in Table 8 is a factor of 100, and that 10 pointings on blank fields are needed to reach the same number of sources as in a single ``shallow'' lensing field.
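To first order, the gain from stacking several cluster pointings in Table 8 follows from treating them as independent fields — an assumption of this sketch, not the paper's full variance calculation: the cosmic-variance term shrinks as 1/sqrt(n_fields) and the Poisson term with the total number of sources.

```python
import math

def combined_fractional_error(n_fields, counts_per_field, sigma_cv_single):
    """Fractional error on the summed counts over n_fields independent
    pointings: Poisson noise on the total counts plus the single-field
    cosmic variance reduced by sqrt(n_fields), added in quadrature."""
    total = n_fields * counts_per_field
    poisson = 1.0 / math.sqrt(total)
    cosmic = sigma_cv_single / math.sqrt(n_fields)
    return math.sqrt(poisson ** 2 + cosmic ** 2)
```

For example, 10 pointings with 20 sources each and a 60% single-field variance combine to roughly 20%, in line with the deep 10-cluster configuration discussed above.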
In the case of a strongly evolving LF(c), photometric surveys should reach a minimum depth of to achieve fair statistics on sources using a lensing cluster (Table 7). In this case we expect between 20 (z=9, 40% variance) and 8 (z=10, 40% variance) sources per lensing cluster in a 40 arcmin2 FOV. The efficiency is a factor of 10 smaller at in blank fields. Fair statistics at should require a minimum depth close to both in lensing and blank fields.
Constraining the LF of star forming galaxies at should require the combination of blank and lensing field observations. This is illustrated for instance in Figs. 11 and 12 for an example at . A survey towards a lensing cluster has several advantages. It increases the total number of sources available for spectroscopic follow up, and it helps extending the sample towards the faint edge of the LF and towards the highest possible limits in redshift. On the other hand, blank fields are needed to achieve fair statistics on the bright edge of the LF. Thus an ``ideal'' survey should combine both blank and lensing fields. Given the numbers presented in previous sections, a blank field used for these purposes should be a factor ranging between 10 and 100 times larger than a lensing field (depending on the redshift domain, photometric depth, and actual LF) in order to efficiently complete the survey towards L>L*. This should be possible with the new upcoming surveys, such as the WIRCAM ultra deep survey (WUDS) at CFHT ( 400 arcmin2, with ), UKIDSS-UDS ( 2700 arcmin2, with ) or Ultra-Vista ( 2600 arcmin2, with , ). The optimum number of lensing fields ranges between 10-20 (for studies with ``shallow'' photometry) to a few (for ``deep'' surveys targeting sources).
We have evaluated the relative efficiency of lensing clusters with respect to blank fields in the identification and study of galaxies. The main conclusions of this study are given below.
For magnitude-limited samples of LBGs at , the magnification bias increases with the redshift of sources and decreases with both the depth of the survey and the size of the surveyed area. Given the typical near-IR FOV in lensing fields, the maximum efficiency is reached for clusters at , with maximum cluster-to-cluster differences ranging between 30 and 50% in number counts, depending on the redshift of sources and the LF.
The relative efficiency of lensing with respect to blank fields strongly depends on the shape of the LF, for a given photometric depth and FOV. The comparison between lensing and blank field number counts is likely to yield strong constraints on the LF.
The presence of a strong-lensing cluster along the line of sight has a dramatic effect on the observed number of sources, with a positive magnification effect in typical ground-based ``shallow'' surveys ( ). The positive magnification bias increases with the redshift of sources, and also from optimistic to pessimistic values of the LF. In case of a strongly evolving LF at , as proposed by Bouwens et al. (2008), blank fields are particularly inefficient as compared to lensing fields. For instance, the size of the surveyed area in ground-based observations would need to increase by a factor of 10 in blank fields with respect to a typical 30-40 arcmin2 survey in a lensing field in order to reach the same number of detections at , and this merit factor increases with redshift. All these results have been obtained assuming that number counts derived in clusters are not dominated by sources below the limiting surface brightness of observations, which in turn depends on the reliability of the usual scalings applied to the size of high-z sources.
Ground-based ``shallow'' surveys are dominated by field-to-field variance reaching 30 to 50% in number counts between and 8 in a unique 30-40 arcmin2 lensing field survey (or in a 400 arcmin2 blank field), assuming a strongly evolving LF.
The number of z>8 sources expected at the typical depth of JWST ( ) is much higher in lensing than in blank fields if the UV LF is rapidly evolving with redshift (i.e. a factor of 10 at with ).
Blank field surveys with a large FOV are needed to probe the bright edge of the LF at , whereas lensing clusters are particularly useful to explore the mid to faint end of the LF.
We are grateful to D. Schaerer, A. Hempel, J.F. Le Borgne and E. Egami for useful comments. We acknowledge financial support from the European Commission's ALFA-II programme through its funding of the Latin-America European Network for Astrophysics and Cosmology (LENAC). This work was also supported by the French Centre National de la Recherche Scientifique, the French Programme National de Cosmologie (PNC) and Programme National de Galaxies (PNG). J. R. acknowledges support from a EU Marie-Curie fellowship. This work received support from the Agence Nationale de la Recherche bearing the reference ANR-09-BLAN-0234-01.
Appenzeller, I., Fricke, K., Fürtig, W., et al. 1998, The Messenger, 94, 1
Barkana, R., & Loeb, A. 2000, ApJ, 531, 613
Beckwith, S. V. W., Stiavelli, M., Koekemoer, A. M., et al. 2006, AJ, 132, 1729
Bertin, E., & Arnouts, S. 1996, A&AS, 117, 393
Bouwens, R. J., Illingworth, G. D., Thompson, R. I., et al. 2004a, ApJ, 606, L25
Bouwens, R. J., Thompson, R. I., Illingworth, G. D., et al. 2004b, ApJ, 616, L79
Bouwens, R. J., Illingworth, G. D., Blakeslee, J. P., & Franx, M. 2006, ApJ, 653, 53
Bouwens, R. J., Illingworth, G. D., Franx, M., & Ford, H. 2007, ApJ, 670, 928
Bouwens, R. J., Illingworth, G. D., Bradley, L. D., et al. 2009a, ApJ, 690, 1764
Bouwens, R. J., Illingworth, G. D., Oesch, P. A., et al. 2009b, ApJ, 708, L69
Bradac, M., Treu, T., Applegate, D., et al. 2009, ApJ, 706, 1201
Bradley, L. D., Bouwens, R. J., Ford, H. C., et al. 2008, ApJ, 678, 647
Broadhurst, T. J., Taylor, A. N., & Peacock, J. A. 1995, ApJ, 438, 49
Cuby, J.-G., Le Fèvre, O., McCracken, H., et al. 2003, A&A, 405, L19
Cuby, J.-G., Hibon, P., Lidman, C., et al. 2007, A&A, 461, 911
Dunkley, J., Komatsu, E., Nolta, M. R., et al. 2009, ApJS, 180, 306
Ellis, R., Santos, M. R., Kneib, J.-P., & Kuijken, K. 2001, ApJ, 560, L119
Garzón, F., Abreu, D., Barrera, S., et al. 2007, Rev. Mex. Astron. Astrofis. Conf. Ser., 29, 12
Henry, A. L., Siana, B., Malkan, M. A., et al. 2009, ApJ, 697, 1128
Hu, E. M., Cowie, L. L., McMahon, R. G., et al. 2002, ApJ, 568, L75
Hu, E. M., Cowie, L. L., Capak, P., et al. 2004, AJ, 127, 563
Iye, M., Ota, K., Kashikawa, N., et al. 2006, Nature, 443, 186
Jullo, E., Kneib, J.-P., Limousin, M., et al. 2007, New J. Phys., 9, 447
Kashikawa, N., Shimasaku, K., Malkan, M. A., et al. 2006, ApJ, 648, 7
Kassiola, A., & Kovner, I. 1993, in Liege International Astrophysical Colloquia, Vol. 31, ed. J. Surdej, D. Fraipont-Caro, E. Gosset, S. Refsdal, & M. Remy, 571
Kennicutt, R. C. 1998, ApJ, 498, 541
Kitzbichler, M. G., & White, S. D. M. 2006, MNRAS, 366, 858
Kitzbichler, M. G., & White, S. D. M. 2007, MNRAS, 376, 2
Kneib, J.-P., Ellis, R. S., Santos, M. R., & Richard, J. 2004, ApJ, 607, 697
Kodaira, K., Taniguchi, Y., Kashikawa, N., et al. 2003, PASJ, 55, L17
Lemson, G., & the Virgo Consortium 2006 [arXiv:astro-ph/0608019]
Limousin, M., Richard, J., Jullo, E., et al. 2007, ApJ, 668, 643
McLure, R. J., Cirasuolo, M., Dunlop, J. S., Foucaud, S., & Almaini, O. 2009, MNRAS, 395, 2196
Mo, H. J., Mao, S., & White, S. D. M. 1998, MNRAS, 295, 319
Moorwood, A. F. 1997, in SPIE Conf. Ser. 2871, ed. A. L. Ardeberg, 1146
Moorwood, A., Cuby, J.-G., & Lidman, C. 1998, The Messenger, 91, 9
Natarajan, P., Kneib, J.-P., Smail, I., & Ellis, R. S. 1998, ApJ, 499, 600
Navarro, J. F., Frenk, C. S., & White, S. D. M. 1997, ApJ, 490, 493
Oesch, P. A., Bouwens, R. J., Carollo, C. M., et al. 2009, ApJ, 709, L21
Ouchi, M., Shimasaku, K., Okamura, S., et al. 2004, ApJ, 611, 660
Peebles, P. J. E. 1993, Physics Today, 46, 87
Pelló, R., Schaerer, D., Richard, J., Le Borgne, J.-F., & Kneib, J.-P. 2004, A&A, 416, L35
Rhoads, J. E., & Malhotra, S. 2001, ApJ, 563, L5
Rhoads, J. E., Dey, A., Malhotra, S., et al. 2003, AJ, 125, 1006
Rhoads, J. E., Xu, C., Dawson, S., et al. 2004, ApJ, 611, 59
Richard, J., Pelló, R., Schaerer, D., Le Borgne, J.-F., & Kneib, J.-P. 2006, A&A, 456, 861
Richard, J., Kneib, J.-P., Jullo, E., et al. 2007, ApJ, 662, 781
Richard, J., Stark, D. P., Ellis, R. S., et al. 2008, ApJ, 685, 705
Rieke, M. J., Kelly, D., & Horner, S. 2005, in SPIE Conf. Ser. 5904, ed. J. B. Heaney, & L. G. Burriesci, 1
Sanchez, A. G., & Baugh, C. M. 2006, in Cosmic Frontiers, ASP Conf. Ser., 379, 8 [arXiv:astro-ph/0612743]
Schaerer, D. 2002, A&A, 382, 28
Schechter, P. 1976, ApJ, 203, 297
Shapley, A. E., Steidel, C. C., Pettini, M., & Adelberger, K. L. 2003, ApJ, 588, 65
Smith, G. P., Kneib, J.-P., Smail, I., et al. 2005, MNRAS, 359, 417
Sobral, D., Best, P. N., Geach, J. E., et al. 2009, MNRAS, 398, L68
Springel, V., White, S. D. M., Jenkins, A., et al. 2005, Nature, 435, 629
Stanway, E. R., Bunker, A. J., McMahon, R. G., et al. 2004, ApJ, 607, 704
Stark, D. P., Ellis, R. S., Richard, J., et al. 2007, ApJ, 663, 10
Steidel, C. C., Adelberger, K. L., Giavalisco, M., Dickinson, M., & Pettini, M. 1999, ApJ, 519, 1
Taniguchi, Y., Ajiki, M., Nagao, T., et al. 2005, PASJ, 57, 165
Thompson, R. I. 1998, BAAS, 30, 1326
Trenti, M., & Stiavelli, M. 2008, ApJ, 676, 767
Willis, J., Courbin, F., Kneib, J.-P., & Minniti, D. 2006, New Astron. Rev., 50, 70
Willis, J. P., Courbin, F., Kneib, J.-P., & Minniti, D. 2008, MNRAS, 384, 1039
Zheng, W., Bradley, L. D., Bouwens, R. J., et al. 2009, ApJ, 697, 1907
... EMIR/GTC: http://www.ucm.es/info/emir/
... Lenstool: http://www.oamp.fr/cosmology/lenstool
Constraining the population of star-forming galaxies with deep near-IR images of lensing clusters
Optical dropout galaxies lensed by the cluster A2667
The bright end of the luminosity function at z ~ 9
A&A 542, L31 (2012)
Faint end of the z ∼ 3–7 luminosity function of Lyman-alpha emitters behind lensing clusters observed with MUSE
A&A 628, A3 (2019)
The ALMA Frontier Fields Survey — IV. Lensing-corrected 1.1 mm number counts in Abell 2744, MACS J0416.1–2403 and MACS J1149.5+2223
|
{
"redpajama_set_name": "RedPajamaCommonCrawl"
}
| 7,535
|
Mary McNair Mathews (1834–1903) was a Nevada historian. Her memoir and early chronicle of life in Virginia City, Nevada, Ten Years in Nevada or Life on the Pacific Coast, was published in 1880.
She was born in Livingston County, New York in 1834. Widowed early, and learning of her brother's death in Virginia City, she sold her hoop skirt factory and left for Nevada in 1869 with her young son. Once in Virginia City, she ran a laundry business, a school, a boardinghouse, and a soup kitchen; she invested in stocks.
Mathews returned to New York in 1878, published her memoir two years later, and then headed back to the west to be with her son. She died in Ukiah, California in 1903.
In 2009, a chautauqua of her life was staged at the Dangberg Home Ranch Historic Park in Minden, Nevada.
Kenji Miyazawa (宮沢 賢治 or 宮澤 賢治, Miyazawa Kenji, 27 August 1896 – 21 September 1933) was a Japanese poet and author of children's literature from the town of Hanamaki in Iwate Prefecture. He was also a teacher of agricultural science, a vegetarian, a cellist, a devout Buddhist, and a utopian social activist.
His best-known works are Ginga Tetsudō no Yoru (Night on the Galactic Railroad), Kaze no Matasaburō (Matasaburō of the Wind), Sero hiki no Gōshu (Gauche the Cellist), Futago no hoshi (The Twin Stars), and Tsuchigami to kitsune (The Earth God and the Fox).
After reading the Lotus Sutra, Kenji converted to Nichiren Buddhism and joined Kokuchūkai, a nationalist organization of Nichiren Buddhism. His faith and social attitudes estranged him from his wealthy family, and especially from his father. Nevertheless, not long after his death, his family converted to Nichiren Buddhism as well. He founded the Rasu Chijin Kyōkai (Rasu Farmers Association, 羅須地人協会) to improve the situation of peasant farmers in Iwate Prefecture.
He had a command of Esperanto and translated several of his poems into that language.
He died of pneumonia in 1933. As a poet he was almost unknown during his lifetime and became famous only after his death; a great surge of interest in his work came in the mid-1990s. In 1982 a museum devoted to his life and work was opened in his hometown. Many of his stories for children have been adapted as anime, and many of his tanka and free-verse poems, translated into many languages, remain popular to this day.
Life
He was born in the town of Hanamaki in Iwate Prefecture, the eldest son of a wealthy family of pawnbrokers. His parents were devout believers of the Pure Land sect, as was customary among farmers in the area. From 1898 his father organized regular meetings at which monks and Buddhist thinkers preached, and Kenji and his younger sister took part in these meetings from an early age. The region of his youth was an impoverished rice-growing area, and he grew up amid his family's worries about social status and how to make money.
From his youth he was an enthusiastic student of natural science. As a teenager he discovered an interest in poetry when he came under the influence of the local poet Takuboku Ishikawa. After finishing lower secondary school he helped out in his father's pawnshop. By 1918 he was composing tanka poems and had already written two stories for children. In upper secondary school, after reading the Lotus Sutra, he converted to the Hokke sect, which later caused disputes with his father. In 1918 he graduated from the Morioka Higher School of Agriculture and Forestry (盛岡高等農林学校 Morioka Kōtō Nōrin Gakkō, today the Faculty of Agriculture of Iwate University). That year he also became a vegetarian. Being a bright student, he obtained a position as a special research student in geology, and he began to take an interest in research on soils and artificial fertilizers. Later that year he went with his mother to Tokyo to see his sister Toshi, who had fallen ill while studying at Japan Women's University (日本女子大学 Nihon Joshi Daigaku). He returned home after his sister recovered at the beginning of the following year.
Because of his differing religious views and his distaste for the family pawnbroking business, in January 1921 he decided to leave Hanamaki and set out for Tokyo. There he joined Tanaka Chigaku's Kokuchūkai organization and spent several months preaching Nichiren Buddhism in the streets. After eight months in Tokyo he took up writing children's stories again, this time very prolifically, influenced by another monk, Takachiyo Chiyō, who discouraged him from the priesthood and convinced him that his faith would be served best if he tried to express it through his profession.
He then returned to Hanamaki because of the recurrence of his sister's illness and became a teacher at an agricultural school there. On 27 November 1922 his sister Toshi succumbed to her illness at the age of 24. The loss of his beloved sister was a shock from which Kenji never fully recovered. On the day of her death he composed three poems under the collective title Musei Dōkoku (無声慟哭).
He worked as a teacher of agricultural science at the Hanamaki Agricultural High School (花巻農学校). Thanks to loans and support from a nattō manufacturer, in April 1924 he was able to publish a poetry collection entitled Haru to Shura (Spring and Asura, 春と修羅). In December of the same year he also published, at his own expense, his collection of children's stories Chūmon no ōi ryōriten (The Restaurant of Many Orders, 注文の多い料理店). Although neither met with any great commercial success (they were largely ignored), they attracted the attention of the poets Kōtarō Takamura and Shinpei Kusano, who admired his work and introduced him to the literary world.
As a teacher he was seen by his students as very enthusiastic, if slightly eccentric, since he insisted that learning was best done through direct experience. He often took his students outside the school, not for drills or practice but simply for pleasant walks in the hills and fields. He also had them perform plays that they wrote themselves.
In 1926 he gave up his teaching post and became a farmer, so that he could use his theoretical knowledge of agricultural science to help other farmers in the impoverished northeastern region. He also introduced the other farmers to the basics of cultural life, such as music, poetry, and anything else he thought might improve their lives. With his gramophone he acquainted them with classical music, playing works by Beethoven, Schubert, Wagner, and Debussy. In August 1926 he founded the Rasu Chijin Kyōkai (Rasu Farmers Association, 羅須地人協会). He introduced new agricultural techniques and hardier strains of rice.
In his family's secluded house, where he was living at the time, he taught agronomy to young men from nearby farming families. The Rasu Chijin Kyōkai also took part in literary readings, theater plays, music, and other cultural events. It was dissolved after two years, in 1928, when Japan began to turn toward militarism.
Not all the farmers appreciated his efforts, however. Some regarded him as a man from the city who was merely playing at being a farmer, and others complained that Kenji's fertilizers did not have the desired effect. Kenji advocated natural fertilizers, while many preferred Western chemical miracles, and even when those failed, many still blamed Kenji. His standing may also have suffered from the fact that he remained partly economically dependent on his father, to whom many farmers became indebted after a poor harvest, while his defection to the Lotus sect rubbed further salt into the wound, since the farmers in this area, like his father, were adherents of the Pure Land sect.
In 1926 he learned Esperanto and attempted to translate several of his Japanese poems into it. These translations were published in 1953, long after his death.
He showed only scant interest in love or sex, whether in his personal life or in his literary work. Kenji's close friend Tokuya Seki wrote of him that he died a virgin.
Illness and death
Kenji fell ill in the summer of 1928, and by the end of the year the illness had developed into acute pneumonia. His strict vegetarianism did not allow him the more nourishing diet that his worsening health required. For many years he suffered from pleurisy, which often incapacitated him for months at a time. His health improved more or less enough for him to carry out consulting work at a quarry in 1931. The improvement was only brief, however: in September of that year, during a visit to Tokyo, the pneumonia returned and he had to go back to his hometown.
In the autumn of 1933 his illness had seemingly improved enough for him to watch a local Shinto procession from the door of his house. A group of local farmers took the opportunity and drew him into an hour-long discussion about fertilizers. Apparently exhausted by the long conversation with the farmers, Kenji died the following day. On his deathbed he asked his father to print and distribute a thousand copies of the Lotus Sutra. He was buried by his family at the family temple Anjōji, but after the family converted to Nichiren Buddhism in 1951, his remains were moved to the Nichiren temple Shinshōji.
package com.march.dev.app.fragment;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;
import android.support.annotation.NonNull;
import android.support.annotation.Nullable;
import android.support.v4.app.Fragment;
import android.view.LayoutInflater;
import android.view.View;
import android.view.ViewGroup;
import com.march.dev.mvp.view.FragmentView;
/**
 * Created by march on 16/7/1.
 * Fragment base class, mainly responsible for:
 * 0. separating out logic; not exposed externally
 * 1. forwarding the loading lifecycle
 * 2. fragment lazy-loading related logic
 */
public abstract class MvpFragment extends Fragment {
private FragmentView mViewDelegate;
public abstract FragmentView getViewDelegate();
@Override
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);
if (mViewDelegate != null) {
mViewDelegate.onActivityResult(requestCode, resultCode, data);
}
}
@Override
public void onAttach(Context activity) {
super.onAttach(activity);
if (mViewDelegate != null) {
mViewDelegate.onAttach(activity);
}
}
@Override
public void onCreate(Bundle savedInstanceState) {
super.onCreate(savedInstanceState);
mViewDelegate = getViewDelegate();
if (mViewDelegate == null) {
mViewDelegate = new FragmentView() {
@Override
public int getLayoutId() {
return 0;
}
};
}
mViewDelegate.bind(this);
mViewDelegate.onCreate(savedInstanceState);
}
@Override
public View onCreateView(LayoutInflater inflater, ViewGroup container, Bundle savedInstanceState) {
if (mViewDelegate != null) {
return mViewDelegate.onCreateView(inflater, container, savedInstanceState);
}
return null;
}
@Override
public void onViewCreated(View view, @Nullable Bundle savedInstanceState) {
super.onViewCreated(view, savedInstanceState);
}
@Override
public void onActivityCreated(@Nullable Bundle savedInstanceState) {
super.onActivityCreated(savedInstanceState);
if (mViewDelegate != null) {
mViewDelegate.onActivityCreated(savedInstanceState);
}
}
@Override
public void onStart() {
super.onStart();
if (mViewDelegate != null)
mViewDelegate.onStart();
}
@Override
public void onResume() {
super.onResume();
if (mViewDelegate != null)
mViewDelegate.onResume();
}
@Override
public void onPause() {
super.onPause();
if (mViewDelegate != null)
mViewDelegate.onPause();
}
@Override
public void onStop() {
super.onStop();
if (mViewDelegate != null)
mViewDelegate.onStop();
}
@Override
public void onDestroyView() {
super.onDestroyView();
if (mViewDelegate != null)
mViewDelegate.onDestroyView();
}
@Override
public void onDestroy() {
super.onDestroy();
if (mViewDelegate != null)
mViewDelegate.onDestroy();
}
@Override
public void onDetach() {
super.onDetach();
if (mViewDelegate != null) {
mViewDelegate.onDetach();
}
}
@Override
public void onHiddenChanged(boolean hidden) {
super.onHiddenChanged(hidden);
if (mViewDelegate != null) {
mViewDelegate.onHiddenChanged(hidden);
}
}
@Override
public void onRequestPermissionsResult(int requestCode, @NonNull String[] permissions, @NonNull int[] grantResults) {
super.onRequestPermissionsResult(requestCode, permissions, grantResults);
if (mViewDelegate != null)
mViewDelegate.onRequestPermissionsResult(requestCode, permissions, grantResults);
}
public boolean onBackPressed() {
if (mViewDelegate != null)
return mViewDelegate.onBackPressed();
return false;
}
}
\section{Introduction}
\label{sec:intro}
Inferring the physical properties of galaxies from observations of the spectral energy distribution (SED) of their emitted light is one of the cornerstones of modern extragalactic astronomy. At the heart of this endeavor is stellar population synthesis (SPS): predictive models for galaxy SEDs that fold together the initial stellar mass function, star formation and metallicity enrichment histories, stellar evolution calculations and stellar spectral libraries, phenomenological dust and gas models, black hole activity etc., to predict the spectrum of a galaxy given some input physical parameters associated with each model component. SPS modeling has a rich history, with a plethora of parameterizations of varying complexity available (see \citealp{conroy2013} and references therein).
The computational bottleneck in both inferring galaxy properties from observations and simulating catalogs under SPS models, is running the SPS models themselves. Forward-simulating upcoming Stage IV galaxy surveys will demand $\sim10^{10}$ SPS evaluations per catalog simulation. For data analysis, inferring\footnote{e.g., Markov Chain Monte Carlo sampling.} of order ten SPS model parameters for a single galaxy (given some photometric or spectroscopic data) typically requires $\sim10^5-10^6$ SPS model evaluations. If inference is then to be performed for a large sample of galaxies, the number of SPS evaluations and associated computational demands quickly become prohibitive. For recent context, \citet{leja2019} analyzed $\sim 6\cdot 10^4$ galaxies under a 14-parameter SPS model, with a total cost of $1.5$ million CPU hours\footnote{For added context, the CPU time for the \citet{leja2019} analysis would cost around twenty-thousand USD from Amazon Web Services (estimated in 2019).}. With upcoming surveys such as the Dark Energy Spectroscopic Instrument (DESI; \citealp{levi2013,aghamousa2016,aghamousa2018}) posing the challenge of analyzing millions of galaxy spectra, the need to address the bottleneck posed by SPS is clear and urgent.
There are two principal ways of reducing the cost of inference and simulation under SPS models: speeding up individual SPS computations, and (in the case of inference) reducing the number of SPS computations required to obtain robust inferences per galaxy. In this paper we present neural network emulators for SPS spectra and photometry that gain leverage on both fronts. For galaxy spectra, our emulation framework uses principal component analysis (PCA) to construct a basis for galaxy SEDs, and then trains a neural network on a set of generated SPS spectra to learn the PCA basis coefficients as a function of the SPS model parameters. For photometry, we train a neural network to learn the magnitudes directly (for some set of band passes) as a function of the SPS parameters. The result in both cases is a compact neural network representation of the SPS model that is both fast to evaluate, accurate, and has analytic and readily-computable derivatives, thus making it amenable to efficient gradient-based optimization and inference methods (e.g., Hamiltonian Monte Carlo sampling). Furthermore, calling the emulators from a GPU is straightforward, enabling an additional order-of-magnitude speed-up when evaluating many SPS models in parallel.
We demonstrate and validate the emulator on two SPS models\footnote{Implemented with the SPS code \textsc{fsps} \citep{conroy2009, conroy2010} with python bindings \textsc{python-fsps} \citep{foreman2014}.}: one relatively simple eight-parameter model targeting upcoming DESI observations (for which we emulate spectra), and the more flexible 14-parameter Prospector-$\alpha$ model from the recent \citet{leja2019} analysis (for which we emulate both spectra and photometry). For both models, we show that the emulator is able to deliver percent-level accuracy over broad parameter prior and wavelength ranges, and gives a factor $\sim 10^3-10^4$ speed-up over direct SPS model evaluation. Use of gradient-based inference methods enabled by the emulators will provide further reductions in the cost of inference under SPS models.
The structure of this paper is as follows: In \S \ref{sec:emulation} we outline the emulation framework. In \S \ref{sec:fomo}--\ref{sec:prospector} we validate the spectrum emulator on two SPS model parameterizations. In \S \ref{sec:photometry} we validate the photometry emulator for the Prospector-$\alpha$ model. We discuss the implications for current and future studies in \S \ref{sec:discussion}.
\section{SPECULATOR: Emulating stellar population synthesis}
\label{sec:emulation}
\begin{figure*}
\centering
\includegraphics[width = 17.5cm]{speculator.pdf}
\caption{Schematic of the PCA neural network emulator set-up. A dense neural network parameterizes the PCA basis coefficients as a function of the SPS model parameters (i.e., taking SPS parameters as input and predicting the basis coefficients). These basis coefficients are then multiplied by their respective PCA basis functions and summed to give the predicted spectrum.}
\label{fig:speculator}
\end{figure*}
In this section we describe the framework developed for fast emulation of SPS spectra (\S \ref{sec:emu}) and photometry (\S \ref{sec:emu_phot}). Some background knowledge of PCA and neural networks is assumed in this section; see e.g., \citet{bishop2006} for a comprehensive and pedagogical review. For previous work on representing spectra as interpolations over PCA bases, see \cite{czekala2015,kalmbach2017}.
\subsection{Notation}
\label{sec:sps}
We will denote galaxy SEDs by $l(\lambda; \boldsymbol{\theta})\equiv l_\lambda$ (luminosity per unit wavelength) and log SEDs by $L_\lambda \equiv\mathrm{ln}\,l_\lambda$, for wavelength $\lambda$ and SPS model parameters $\boldsymbol{\theta}$. Photometric fluxes, denoted by $f_b(\boldsymbol{\theta})$, for a given band-pass $b$ with filter $W_b(\lambda)$ and SPS parameters $\boldsymbol{\theta}$, are given by
\begin{align}
f_b(\boldsymbol{\theta}) = \frac{1}{g^\mathrm{AB}4\pi (1+z)d_L^2(z)}\int_0^\infty l(\lambda/(1+z); \boldsymbol{\theta})\,W_b(\lambda)d\lambda,
\end{align}
where $g^\mathrm{AB}$ is the AB flux normalization, $d_L(z)$ the luminosity distance for redshift $z$, and the filter is assumed to be normalized to unity, $\int W_b(\lambda)d\lambda = 1$. The associated apparent magnitudes are denoted by $m_b(\boldsymbol{\theta})$.
The goal of emulation is to find an efficient representation for the galaxy spectra $l_\lambda(\boldsymbol{\theta})$ or photometry $\{m_b(\boldsymbol{\theta})\}$ as a function of the SPS model parameters that is as fast as possible to evaluate, whilst maintaining accuracy.
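As a concrete numerical illustration of the flux integral above, the sketch below evaluates Eq. (1) with simple trapezoidal quadrature. The function names are ours (not part of any SPS code), the filter is assumed to be tabulated on the observed-frame wavelength grid, and $g^\mathrm{AB}$, $d_L$ and the SED are taken in mutually consistent units:

```python
import numpy as np

def trapz(y, x):
    # Trapezoidal quadrature (written out to avoid NumPy version differences).
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def band_flux(lam, l_rest, filt, z, d_L, g_ab=1.0):
    """Numerical version of Eq. (1): flux through a band-pass.

    lam    : observed-frame wavelength grid carrying the filter
    l_rest : callable giving the rest-frame SED l(lambda)
    filt   : filter transmission W_b on `lam`, normalised to unit area
    z, d_L : redshift and luminosity distance (consistent units assumed)
    """
    integrand = l_rest(lam / (1.0 + z)) * filt
    return trapz(integrand, lam) / (g_ab * 4.0 * np.pi * (1.0 + z) * d_L**2)

# Flat SED l = 2 through a unit-normalised top-hat filter at z = 0, d_L = 1:
lam = np.linspace(4000.0, 5000.0, 2001)
top_hat = np.ones_like(lam)
filt = top_hat / trapz(top_hat, lam)
flux = band_flux(lam, lambda l: np.full_like(l, 2.0), filt, z=0.0, d_L=1.0)
# flux -> 2 / (4 pi) = 1 / (2 pi)
```

With a flat SED and a unit-normalised filter, the integral reduces to the SED amplitude, which makes the $4\pi(1+z)d_L^2$ prefactor easy to check by hand.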
\subsection{Emulation of galaxy spectra}
\label{sec:emu}
\subsubsection{Parameterization considerations}
There are a couple of simplifications to the SED-emulation problem set-up that will make emulation significantly easier.
We will emulate the rest-frame SEDs only, redshifting (analytically) afterwards as needed. This is motivated by the fact that the emulator is contingent on finding a compact PCA basis for galaxy SEDs; constructing this basis is greatly simplified when working in the rest-frame only, i.e., without requiring that the basis can capture arbitrary stretches in wavelength. Meanwhile, emulating rest-frame SEDs only does not reduce functionality, since redshifted spectra can be obtained straightforwardly (and exactly) from the rest-frame SEDs.
Redshifting involves three transformations on the emulated rest-frame SEDs: a stretch by $\lambda\rightarrow\lambda/(1+z)$, re-scaling by $[(1+z)d_L(z)^2]^{-1}$, and adjusting the age of the Universe at the lookback time for a given redshift, $t_\mathrm{age}(z)$, so that the age of the stellar population is consistent with that lookback time. Therefore, $t_\mathrm{age}(z)$ must be included in the list of SPS parameters $\boldsymbol{\theta}$.
Similarly, we fix the total stellar-mass, $M$, for the emulated spectra to $1\,\mathrm{M}_\odot$ and scale the mass analytically afterwards as required (the total stellar-mass formed $M$ enters as a simple normalization of the SED). Hence, a galaxy spectrum for a given redshift $z$, total stellar-mass formed $M$, and SPS model parameters $\boldsymbol{\theta}$ can be obtained from the corresponding emulated rest-frame unit stellar-mass SED $l(\lambda ; \boldsymbol{\theta})$ as
\begin{align}
\label{redshift}
l(\lambda;\boldsymbol{\theta}, M, z) \rightarrow l(\lambda/(1+z);\boldsymbol{\theta})\vert_{t_\mathrm{age}(z)}&\, \frac{1}{(1+z)d_\mathrm{L}(z)^2}\, M.
\end{align}
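The transformation in Eq. \eqref{redshift} amounts to one line of code. In the sketch below the callable and its name are illustrative only, and $t_\mathrm{age}(z)$ is assumed to have already been folded into the SPS parameters of the rest-frame SED:

```python
import numpy as np

def redshift_sed(lam_obs, rest_sed, z, d_L, mass=1.0):
    """Apply Eq. (2): map an emulated rest-frame, unit-stellar-mass SED
    to the observed frame for redshift z, luminosity distance d_L and
    total stellar mass `mass` (t_age(z) is handled upstream, via theta)."""
    return rest_sed(lam_obs / (1.0 + z)) * mass / ((1.0 + z) * d_L**2)

# With an identity "SED" the transform can be checked by hand:
val = redshift_sed(np.array([1000.0]), lambda l: l, z=1.0, d_L=2.0, mass=3.0)
# (1000 / 2) * 3 / (2 * 4) = 187.5
```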
\subsubsection{PCA neural network emulator framework}
A schematic overview of the PCA network emulator framework described below is given in Figure \ref{fig:speculator}, for reference throughout this section.
To build an emulator for a given SPS model parameterization, we begin by generating a training set of $N_\mathrm{train}$ galaxy SEDs $\{(L_\lambda, \boldsymbol{\theta})_1, (L_\lambda, \boldsymbol{\theta})_2, \dots, (L_\lambda, \boldsymbol{\theta})_{N_\mathrm{train}}\}$ under the target SPS model, by drawing SPS parameters from the prior and computing the associated SEDs.
From this training set, we construct a basis $\{q_{\lambda,\,i}\}$ for the SEDs by performing a PCA decomposition of the training spectra, and taking the first $N_\mathrm{pca}$ principal components as basis vectors. The number of PCA components retained is chosen such that the resulting PCA basis is comfortably able to recover the model SEDs at the desired accuracy (i.e., $\ll 1\%$ if we want to ensure that the errors associated with the PCA basis are a small fraction of the total error budget).
With the PCA basis $\{q_{\lambda,\,i}\}$ in hand, we model the (log) SED as a linear combination of the PCA basis functions,
\begin{align}
L_\lambda(\boldsymbol{\theta}) = \sum_{i=1}^{N_\mathrm{pca}}\alpha_i(\boldsymbol{\theta})\,q_{\lambda,\,i},
\end{align}
where the vector of coefficients $\boldsymbol{\alpha}(\boldsymbol{\theta})$ are some unknown (non-linear) functions of the SPS parameters $\boldsymbol{\theta}$. The remaining step, then, is to learn some convenient parametric model $\hat{\boldsymbol\alpha}(\boldsymbol{\theta};\vect{w})$ (with parameters $\vect{w})$ for the basis coefficients $\boldsymbol{\alpha}(\boldsymbol{\theta})$ as a function of the SPS parameters.
We parameterize the basis coefficients as a function of the model parameters by a dense fully-connected neural network with $n$ hidden layers, with $\{h_1, h_2,\dots,h_n\}$ hidden units and non-linear activation functions $\{a_1, a_2,\dots,a_n\}$ respectively, i.e.,
\begin{align}
\label{nn}
\hat{\boldsymbol\alpha}(\boldsymbol{\theta}; \vect{w}) &= a_n(\mathbf{W}_n\vect{y}_{n-1} + \vect{b}_n), \nonumber \\
\vect{y}_{n-1} &= a_{n-1}(\mathbf{W}_{n-1}\vect{y}_{n-2} + \vect{b}_{n-1}) \nonumber \\
&\;\vdots \nonumber \\
\vect{y}_1 &= a_1(\mathbf{W}_1\boldsymbol\theta + \vect{b}_1).
\end{align}
The weight matrices and bias vectors for each network layer are denoted by $\mathbf{W}_k\in\mathbb{R}^{h_k\times h_{k-1}}$ and $\vect{b}_k\in\mathbb{R}^{h_k}$; we use $\vect{w} = \{\mathbf{W}_k,\vect{b}_k\}$ as shorthand for the full set of weights and biases of the whole network, and $\mathbf{y}_k$ denotes the output from layer $k$.
Finally, to train the emulator we optimize the network parameters $\vect{w}$ by minimizing the loss function,
\begin{align}
-\mathrm{ln}\,U(\vect{w} ; \{\boldsymbol{\theta}, \boldsymbol\alpha\}) = \frac{1}{N_\mathrm{train}} \sum_{m=1}^{N_\mathrm{train}} | \boldsymbol\alpha_m - \hat{\boldsymbol\alpha}(\boldsymbol{\theta}_m; \vect{w}) |^2,
\end{align}
where $\{\boldsymbol\alpha_m\}$ are the PCA basis coefficients for the SEDs $\{L_\lambda\}$ in the training set, and $\boldsymbol{\theta}_m$ the corresponding SPS model parameters for those training set members. This loss function is just the mean square error between neural network predicted and true PCA basis coefficients over the training set.
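To make the loss concrete, the following deliberately simplified toy reduces the network to a single linear layer and fits exactly linear coefficients by full-batch gradient descent on the mean-square coefficient error (the real emulator instead trains the multi-layer network of Eq. \eqref{nn} with a stochastic gradient optimizer):

```python
import numpy as np

rng = np.random.default_rng(3)
n_train, n_params, n_pca = 256, 8, 4

# Toy training set: SPS parameters and "true" PCA coefficients
# (made exactly linear here so that a one-layer model can fit them).
theta = rng.normal(size=(n_train, n_params))
alpha = theta @ rng.normal(size=(n_params, n_pca))

# One linear layer trained by full-batch gradient descent on the MSE loss.
W = np.zeros((n_params, n_pca))
losses = []
for _ in range(200):
    resid = theta @ W - alpha
    losses.append(np.mean(np.sum(resid**2, axis=1)))   # the loss above
    W -= 0.1 * (2.0 / n_train) * theta.T @ resid       # exact gradient step
```

The recorded loss should fall monotonically toward zero, since the toy target is realizable by the model.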
The emulator model is succinctly summarized by
\begin{align}
\label{emulator}
\hat{\mathbf{L}}(\boldsymbol{\theta}) = \mathbf{Q}\,\hat{\boldsymbol\alpha}(\boldsymbol{\theta} ; \vect{w}),
\end{align}
where $\hat{\mathbf{L}}(\boldsymbol{\theta}) = (\hat{L}_{\lambda,1}(\boldsymbol{\theta}), \hat{L}_{\lambda,2}(\boldsymbol{\theta}), \dots, \hat{L}_{\lambda,{N_\lambda}}(\boldsymbol{\theta}))$ is the emulated SED for parameters $\boldsymbol{\theta}$, $Q_{\lambda i} = q_{\lambda,i}$ is the set of basis functions, and $\hat{\boldsymbol\alpha}(\boldsymbol{\theta} ; \vect{w})$ is given by Eq. \eqref{nn}. The neural network emulator is specified entirely by the set of matrices and non-linear activation functions $\{\mathbf{W}_k, \vect{b}_k, \mathbf{Q}, a_k\}$. Calculating an emulated SPS model spectrum using Eqs. \eqref{emulator} and \eqref{nn} is hence reduced to a series of linear matrix operations, and passes through simple non-linear (e.g., tanh) activation functions. Furthermore, the neural network in Eq. \eqref{nn} is straightforwardly differentiable (by the chain rule), so derivatives of the model spectra with respect to the SPS parameters are readily available. We highlight that implementation of the trained emulator using Eqs. \eqref{nn} and \eqref{emulator} is simple, so incorporating the trained emulator into existing (or future) analysis codes should be straightforward.
In the limit of a large PCA basis, large training set, and complex neural network architecture, the emulator described above can represent any (deterministic) SPS model to arbitrary precision. However, the power of this emulation framework comes from the fact that -- as we will demonstrate in the following sections -- a relatively small PCA basis and neural network architecture can achieve percent-level precision over broad parameter ranges, even for relatively complex SPS parameterizations. It is this fact that allows the emulator to achieve such significant speed ups.
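Evaluating the emulator, Eqs. \eqref{nn} and \eqref{emulator}, is then just a short chain of matrix products. The sketch below uses a single hidden layer with tanh standing in for the activation adopted later, and untrained random weights in place of a trained $\vect{w}$ and PCA basis $\mathbf{Q}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n_params, hidden, n_pca, n_lam = 8, 64, 10, 200

# Untrained stand-ins for the trained network weights and PCA basis.
W1, b1 = rng.normal(size=(hidden, n_params)), np.zeros(hidden)
W2, b2 = rng.normal(size=(n_pca, hidden)) / hidden, np.zeros(n_pca)
Q = rng.normal(size=(n_lam, n_pca))

def emulate_log_sed(theta):
    """Forward pass: theta -> PCA coefficients -> emulated log-SED."""
    y1 = np.tanh(W1 @ theta + b1)   # hidden layer
    alpha_hat = W2 @ y1 + b2        # predicted basis coefficients
    return Q @ alpha_hat            # emulated log-SED, shape (n_lam,)

L_hat = emulate_log_sed(np.zeros(n_params))
```

With zero biases, a zero input maps to a zero log-SED, which gives a trivial sanity check; the point of the sketch is that the cost of an evaluation is fixed by the architecture and independent of the training-set size.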
\subsubsection{Discussion}
The use of neural networks in this context is solely as a convenient parametric model for an unknown function that we want to learn, in a situation where the dimensionality is too high to make direct interpolation efficient. Neural networks have a number of useful features that make them well-suited to this sort of emulation task. The universal approximation theorem tells us that a neural network with a single hidden layer and finite number of nodes can approximate any continuous function on compact subsets of $\mathbb{R}^n$ under some mild assumptions about the activation function \citep{csaji2001}. Their derivatives can be computed efficiently (by backpropagation), making for efficient training. Once trained, they are straightforward and fast to evaluate, and importantly the computational cost of evaluation is fixed ahead of time and independent of the size of the training set (in contrast to Gaussian processes\footnote{For use of PCA and Gaussian processes in a similar context, see \citet{czekala2015}.}, where the cost of evaluation na\"{i}vely scales as $N^3$ with the training set size).
In this study we show that relatively simple dense fully-connected network architectures are able to perform well in the context of SPS emulation. However, for more complex SPS models than those considered here, or where fidelity requirements are very high, more sophisticated architectures may prove beneficial (for more discussion see \S \ref{sec:discussion}).
We note that training an emulator on a given SPS parameterization is performed over some pre-determined prior ranges for the parameters. Care should be taken to train the emulator over well-chosen priors in the first instance, since emulated SEDs outside of the pre-determined prior ranges of the training set should not be expected to be reliable.
\subsection{Emulation of galaxy photometry}
\label{sec:emu_phot}
\begin{figure}
\centering
\includegraphics[width = 7.5cm]{photulator.pdf}
\caption{Schematic of the emulator set-up for photometry under SPS models; the magnitudes (for some chosen set of band-passes) as a function of the SPS model parameters are parameterized as a dense fully-connected neural network (c.f., Eq. \eqref{nn2}).}
\label{fig:photulator}
\end{figure}
For applications where photometry rather than spectra are the primary target, it makes sense to emulate the photometry directly, i.e., learn a compact model for the fluxes or magnitudes for some set of filters, as a function of the SPS parameters. Emulating photometry presents a simpler problem than emulating spectra: the number of bands of interest is typically $\mathcal{O}(10)$ (or fewer), so no basis construction or dimensionality reduction is necessary.
To emulate photometry for some set of band-passes $\{b_1, b_2,\dots, b_k\}$ under a given SPS model, we parameterize the magnitudes $\mathbf{m}(\boldsymbol{\theta}) = (m_{b_1}(\boldsymbol{\theta}), m_{b_2}(\boldsymbol{\theta}),\dots,m_{b_k}(\boldsymbol{\theta}))$ by a dense fully-connected neural network, i.e. (Figure \ref{fig:photulator}),
\begin{align}
\label{nn2}
\hat{\mathbf{m}}(\boldsymbol{\theta}; \vect{w}) &= a_n(\mathbf{W}_n\vect{y}_{n-1} + \vect{b}_n), \nonumber \\
\vect{y}_{n-1} &= a_{n-1}(\mathbf{W}_{n-1}\vect{y}_{n-2} + \vect{b}_{n-1}) \nonumber \\
&\;\vdots \nonumber \\
\vect{y}_1 &= a_1(\mathbf{W}_1\boldsymbol\theta + \vect{b}_1),
\end{align}
where $\hat{\mathbf{m}}(\boldsymbol{\theta}; \vect{w})$ denotes the neural network emulated photometry. As before, the weight matrices and bias vectors for each network layer are denoted by $\mathbf{W}_k\in\mathbb{R}^{h_k\times h_{k-1}}$ and $\vect{b}_k\in\mathbb{R}^{h_k}$; we use $\vect{w} = \{\mathbf{W}_k,\vect{b}_k\}$ as shorthand for the full set of weights and biases of the whole network.
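Because no basis construction is needed, the photometric emulator is simply the network forward pass. The sketch below (again with untrained, illustrative weights) shows the batched evaluation that makes emulating photometry for many galaxies at once cheap, which is what the parallel/GPU use case in \S \ref{sec:intro} exploits:

```python
import numpy as np

rng = np.random.default_rng(4)
n_params, hidden, n_bands, n_gal = 14, 32, 5, 1000

# Untrained stand-ins for a trained photometric emulator's weights.
W1, b1 = rng.normal(size=(n_params, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(size=(hidden, n_bands)), np.zeros(n_bands)

def emulate_magnitudes(theta):
    """Batched forward pass: theta of shape (n_gal, n_params) maps to
    apparent magnitudes of shape (n_gal, n_bands)."""
    return np.tanh(theta @ W1 + b1) @ W2 + b2

mags = emulate_magnitudes(rng.normal(size=(n_gal, n_params)))
```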
\subsection{Activation function choice for neural SPS emulation}
\label{sec:activation}
We find that SPS spectra and photometry as functions of the model parameters are mostly smooth, but exhibit some non-smooth features. In particular, the behavior as a function of stellar and gas metallicity parameters exhibits discontinuous changes in gradient. When considering neural network architecture choices for SPS emulation, it is therefore advantageous to choose activation functions that are able to capture both smooth features and sharp gradient changes; well-chosen activation functions will allow us to achieve higher fidelity emulation with smaller (faster) network architectures.
To this end, we adopt activation functions of the following form,
\begin{align}
\label{activation}
a(\mathbf{x}) = \left[\boldsymbol{\gamma} + (1+e^{-\boldsymbol\beta\odot\mathbf{x}})^{-1}(1-\boldsymbol{\gamma})\right]\odot\mathbf{x},
\end{align}
where $\boldsymbol{\gamma}$ and $\boldsymbol\beta$ are included as additional free parameters of the network to be trained alongside the network weights and biases. This activation function is able to cover smooth features (small $\beta$) and sharp changes in gradient (as $\beta\rightarrow\infty$). In experiments, we find that activation functions of this form outperform other popular neural network activation choices for the SPS emulation problem (including tanh, sigmoid, ReLU and leaky-ReLU; see \citealp{Nwankpa2018} for recent trends in activation function choice). Non-linear activation functions of the form Eq. \eqref{activation} are hence adopted throughout this work.
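A minimal \textsc{numpy} implementation of Eq. \eqref{activation}, with $\beta$ and $\gamma$ fixed rather than trained (for illustration only), makes the two limiting regimes explicit:

```python
import numpy as np

def custom_activation(x, beta, gamma):
    """Eq. (activation): a(x) = [gamma + sigmoid(beta*x) * (1 - gamma)] * x.
    In the emulator, beta and gamma are per-unit trainable parameters;
    they are fixed scalars here for illustration."""
    sig = 1.0 / (1.0 + np.exp(-beta * x))
    return (gamma + sig * (1.0 - gamma)) * x

x = np.linspace(-5.0, 5.0, 11)

# Large beta, gamma = 0: approaches ReLU, i.e. a sharp gradient change at 0.
relu_like = custom_activation(x, beta=50.0, gamma=0.0)
assert np.allclose(relu_like, np.maximum(x, 0.0), atol=1e-6)

# beta = 0: the sigmoid term is 1/2 everywhere, so the activation is linear
# (a smooth feature), with slope gamma + (1 - gamma)/2.
linear = custom_activation(x, beta=0.0, gamma=0.2)
assert np.allclose(linear, 0.6 * x)
```

The trainable $\beta$ thus interpolates continuously between a purely linear unit and a ReLU-like unit, which is what allows a single architecture to capture both smooth spectra and the discontinuous gradient behavior seen in the metallicity directions.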
\subsection{Target accuracy for SPS emulation}
Whilst a great deal of progress has been made in reducing modeling uncertainties associated with stellar population synthesis, some fundamental uncertainties remain (e.g., the effect of binaries and rotation on the ionizing photon production from massive stars \citealp{choi2017}; for a review of SPS model uncertainties see \citealp{conroy2013}). When analyzing galaxies under SPS models it is therefore common practice to assume an error floor of $2$--$5\%$ on the SEDs or photometry, to account for the theoretical SPS model uncertainties (e.g., \citealp{leja2019}). On the observational side, for photometry it is also common practice to put an error floor (typically 5\%) on the measured fluxes to account for systematic uncertainties in the photometric calibration (e.g., \citealp{Muzzin2013a, Chevallard2016, Pacifici2016, Belli2019, Carnall2019}).
This context provides a natural accuracy target for SPS emulation (for both spectra and photometry): $\lesssim 5\%$ accuracy, or, $\ll 5\%$ if we want to ensure the emulator error is a small fraction of the total error budget. Whilst this covers a range of use cases, we note that for analysis of high S/N spectra under very complex SPS models, the fidelity requirements may be more like $\ll 1\%$ (see \S \ref{sec:discussion} for discussion).
\section{Validation I: DESI model spectra}
\label{sec:fomo}
In this section, we demonstrate and validate the emulator on a relatively simple eight-parameter SPS parameterization. The model is outlined in \S \ref{sec:fomo_model}, the emulator set-up described in \S \ref{sec:fomo_emulator}, and validation tests and performance discussed in \S \ref{sec:fomo_validation}-\ref{sec:fomo_performance}.
\subsection{Model and priors}
\label{sec:fomo_model}
\begin{figure*}
\centering
\includegraphics[width = 17.5cm]{fomo_sfh_zh.pdf}
\caption{Basis functions for the star formation history (left) and metallicity history (right) for the DESI model (see \S \ref{sec:fomo_model}). The SFH basis functions are normalized such that the total mass formed is one solar mass. The metallicity components are unnormalized, but the values refer to the mass fraction in metals ($Z_\odot$ = 0.019).}
\label{fig:fomo_sfh_zh}
\end{figure*}
Our first model (hereafter, the DESI model) is motivated by upcoming analyses of large numbers of optical, low signal-to-noise (S/N) spectra being collected by current and future surveys. The specifics of the model presented in this section are targeted at the analysis of low-redshift spectra for the upcoming DESI Bright Galaxy Survey \citep[BGS;][]{aghamousa2016}. The BGS will be a flux-limited
survey that will target ${\gtrsim}10$ million galaxies with
$z \lesssim 0.45$ over $14,000~{\rm deg}^2$. It will
measure spectra over a wavelength range between $360$ to
$980\mathrm{nm}$ with a resolution $R = \lambda / \Delta \lambda$ between 2000 and 5500, depending on the wavelength. Individual spectra will have a median S/N of $\sim2-3$
per pixel. The key features and free parameters of the model, and associated prior ranges, are as follows.
We model the star-formation and chemical enrichment histories as a function of lookback time as linear combinations of a set of pre-computed basis functions (Figure \ref{fig:fomo_sfh_zh}). The shape and number of basis functions were determined by applying a non-negative matrix factorization to the star-formation and chemical enrichment histories of galaxies above $10^9$ M$_\odot$ in the Illustris simulation \citep{Vogelsberger2014}. We sought to construct a basis with the minimal number of components that would reconstruct the history of galaxies, and therefore their optical spectra, to an accuracy dictated by the typical DESI S/N. In practice, the chosen basis has a dependence on the optical colours of the galaxies. The basis used here is an indicative example of what will be used to analyse DESI spectra; further details are given in Tojeiro et al. (in prep).
The star formation history\footnote{i.e., stellar mass formed per unit time, $[\mathrm{M}_\odot \mathrm{Gyr^{-1}}]$.} for a galaxy at redshift $z$ is implemented as a linear combination of four SFH basis functions $\{s^\mathrm{SFH}_i(t)\}$ (shown in Figure \ref{fig:fomo_sfh_zh})
\begin{align}
\mathrm{SFH}(t; t_\mathrm{age}(z)) = \sum_{i=1}^4\beta^\mathrm{SFH}_i\,\frac{s^\mathrm{SFH}_i(t)}{\int_0^{t_\mathrm{age}(z)}s^\mathrm{SFH}_i(t)dt},
\end{align}
where the SFH basis coefficients $\{\beta^\mathrm{SFH}_i\}$ are free parameters of the model, the basis functions are normalized to unity over the age of the Universe at the lookback time of the galaxy $t_\mathrm{age}(z)$, and time runs from $0$ to $t_\mathrm{age}(z)$. We train the emulator over a flat-Dirichlet prior for the basis coefficients, i.e., a uniform prior over all combinations of basis coefficients under the constraint that $\sum_{i=1}^4\beta^\mathrm{SFH}_i=1$ (ensuring that the total SFH is normalized to unity for the emulated spectra).
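Sampling this prior and building a normalized SFH can be sketched as follows; the exponential basis functions below are illustrative stand-ins for the Illustris-derived basis of Figure \ref{fig:fomo_sfh_zh}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Flat-Dirichlet prior over the four SFH basis coefficients: uniform on the
# simplex sum_i beta_i = 1, i.e. Dirichlet(1, 1, 1, 1).
beta_sfh = rng.dirichlet(np.ones(4), size=1000)
assert np.allclose(beta_sfh.sum(axis=1), 1.0)

# Illustrative basis functions on a lookback-time grid (NOT the Illustris
# NMF basis), each normalized to unit integral over [0, t_age] as in the
# SFH equation above:
t_age = 13.7  # Gyr
t = np.linspace(0.0, t_age, 500)
dt = t[1] - t[0]
raw = np.stack([np.exp(-t / tau) for tau in (0.5, 2.0, 5.0, 12.0)])
s = raw / (raw.sum(axis=1) * dt)[:, None]  # unit-integral normalization

# SFH(t) = sum_i beta_i s_i(t)/int s_i ; total mass formed is 1 M_sun.
sfh = beta_sfh[0] @ s
assert abs(sfh.sum() * dt - 1.0) < 1e-9
```

Because every draw integrates to $1\,\mathrm{M}_\odot$ by construction, the emulated spectra can later be rescaled to any stellar mass by a simple multiplicative factor.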
The metallicity enrichment history (ZH) is similarly parameterized as a linear combination of two basis functions $\{s^\mathrm{ZH}_i(t)\}$ (shown in Figure \ref{fig:fomo_sfh_zh})
\begin{align}
\mathrm{ZH}(t) = \sum_{i=1}^2\gamma^\mathrm{ZH}_i\,s^\mathrm{ZH}_i(t),
\end{align}
where again the ZH basis coefficients $\{\gamma^\mathrm{ZH}_i\}$ are free parameters of the model, and time runs from $0$ to $t_\mathrm{age}(z)$. We take uniform priors for the ZH basis coefficients, $\gamma^\mathrm{ZH}_i\in[6.9\times 10^{-5}, 7.33\times 10^{-3}]$.
Dust attenuation is modelled using the \citet{calzetti2000} attenuation curve, with the optical depth $\tau_\mathrm{ISM}$ as a free parameter with a uniform prior $\tau_\mathrm{ISM}\in[0,3]$.
The eight model parameters, their physical meanings, and associated priors are summarized in Table \ref{tab:fomo}.
\begin{table*}
\centering
\scalebox{0.95}{
\begin{tabularx}{\textwidth}{ccc}
\toprule
Parameter & Description & Prior \tabularnewline
\hline
$\beta^\mathrm{SFH}_1,\,\beta^\mathrm{SFH}_2,\,\beta^\mathrm{SFH}_3,\,\beta^\mathrm{SFH}_4$ & Star formation history basis function coefficients & flat-Dirichlet\tabularnewline
$\gamma^\mathrm{ZH}_1,\,\gamma^\mathrm{ZH}_2$ & Metallicity enrichment history basis function coefficients & Uniform $[6.9\times 10^{-5}, 7.3\times 10^{-3}]$\tabularnewline
$t_\mathrm{age}$ & Age of Universe at lookback-time of the galaxy & Uniform $[9.5, 13.7]\,\mathrm{Gyr}$\tabularnewline
&&(equivalent to $0 < z < 0.4$)\tabularnewline
$\tau_\mathrm{ISM}$ & Dust optical depth (\citealp{calzetti2000} attenuation model) & Uniform $[0, 3]$\tabularnewline
\hline
\end{tabularx}}
\caption{Summary of SPS model parameters and their respective priors for the DESI model (\S \ref{sec:fomo_model}).}
\label{tab:fomo}
\end{table*}
\subsection{Emulation}
\label{sec:fomo_emulator}
We generated a training and validation set of $5\times 10^5$ and $10^5$ SEDs respectively, for model parameters drawn from their respective priors (see Table \ref{tab:fomo}) and covering the wavelength range $200$ to $1000\,\mathrm{nm}$.
The PCA basis was constructed by performing a PCA decomposition of all of the training SEDs\footnote{Performing a PCA decomposition over large training sets can be memory intensive. Here we used \textsc{scikit-learn}'s ``incremental PCA", which constructs a PCA basis while only processing a few training samples at a time, keeping the memory requirements under control.}. We choose the number of PCA components to keep in the basis such that the basis is able to describe the validation SEDs to $\ll 1\%$ accuracy over the whole wavelength range and parameter volume. Figure \ref{fig:fomo_pca_variance} shows the fractional error distribution of the validation spectra represented in the PCA basis with $20$ components retained; the $20$ component basis is able to describe the SEDs to $\lesssim 0.5\%$ accuracy over the whole wavelength and parameter prior range. Note that the PCA basis is constructed for log SEDs, but accuracy in Figure \ref{fig:fomo_pca_variance} is displayed in linear space.
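The memory-bounded basis construction described in the footnote can be sketched with \textsc{scikit-learn}'s \texttt{IncrementalPCA}; the random low-rank matrix below is a synthetic stand-in for the grid of training log SEDs:

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(2)

# Stand-in for the training log SEDs, shape (n_train, n_wave); the real
# spectra live on a fixed wavelength grid and the basis is built in log space.
n_train, n_wave, n_pca = 2000, 400, 20
log_seds = rng.normal(size=(n_train, 8)) @ rng.normal(size=(8, n_wave))

# Incremental PCA bounds memory by consuming mini-batches of spectra
# rather than holding the full training set in a single decomposition.
ipca = IncrementalPCA(n_components=n_pca, batch_size=500)
ipca.fit(log_seds)

# Basis-adequacy check (cf. Fig. fomo_pca): project onto the basis,
# invert, and compare reconstruction against the originals.
coeffs = ipca.transform(log_seds)
recon = ipca.inverse_transform(coeffs)
assert coeffs.shape == (n_train, n_pca)
assert ipca.explained_variance_ratio_.sum() > 0.999
```

For the real spectra the analogous check is the fractional-error distribution of Figure \ref{fig:fomo_pca_variance}, evaluated on held-out validation SEDs rather than the training set.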
The PCA basis coefficients are parameterized by a dense neural network with two hidden layers of $256$ hidden units, with non-linear activation functions (Eq. \eqref{activation}) on all except the output layer, which has linear activation. The network is implemented in \textsc{tensorflow} \citep{tensorflow2015-whitepaper} and trained with the stochastic gradient descent optimizer \textsc{adam} \citep{kingma2014adam}. Overfitting is mitigated by early-stopping\footnote{The training set is split $9:1$ into training and validation subsets; the networks are trained by minimizing the loss for the training subset only, but the loss for the validation subset is tracked during training. Overfitting is observed when the validation loss stops improving, whilst the training loss continues to decrease. Training is terminated when the loss of the validation set ceases to improve over $20$ training epochs.}.
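The early-stopping scheme of the footnote, written out generically (the training step and validation losses below are synthetic placeholders, not the \textsc{tensorflow} training loop):

```python
import numpy as np

def train_with_early_stopping(step, val_loss, max_epochs=1000, patience=20):
    """Generic early stopping: train epoch by epoch, and stop once the
    validation loss has failed to improve for `patience` epochs.

    step():     runs one epoch of optimizer updates on the training subset.
    val_loss(): returns the current loss on the held-out validation subset.
    """
    best, best_epoch = np.inf, 0
    for epoch in range(max_epochs):
        step()
        loss = val_loss()
        if loss < best:
            best, best_epoch = loss, epoch
        elif epoch - best_epoch >= patience:
            break  # validation loss stalled: stop to avoid overfitting
    return best, epoch

# Toy run: the validation loss improves for 50 epochs, then plateaus.
history = [1.0 / (1 + e) if e < 50 else 0.02 for e in range(1000)]
it = iter(history)
best, stopped_at = train_with_early_stopping(lambda: None, lambda: next(it))
assert stopped_at == 69  # terminated 20 epochs after the last improvement
```

The same logic is available off the shelf as the \texttt{EarlyStopping} callback in \textsc{tensorflow}/Keras.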
Network training is performed on a Tesla K80 GPU\footnote{Freely available with Google Colab \url{https://colab.research.google.com/}.} and takes of the order of a few minutes for the network architecture and training set described above; the computational cost of building the emulator is overwhelmingly dominated by performing the direct SPS computations (using \textsc{fsps}) to generate the training set ($\sim$10 hours compared to minutes).
\subsection{Results and validation}
\label{sec:fomo_validation}
For validating the trained emulator, we generated $10^5$ SEDs for model parameters drawn from the prior, and compared the emulated and exact SPS spectra for this validation set. The results are summarized in Figure \ref{fig:fomo_sed_accuracy}. The upper panels show typical, low and extreme case performance of the emulator, taken as the $50$th, $99$th, and $99.9$th percentiles of the mean (absolute) fractional error per SED (over the full wavelength range). The bottom left panel shows the $68$, $95$, $99$ and $99.9$ percent intervals of the fractional error as a function of wavelength, and the bottom right panel shows the cumulative distribution of the mean (absolute) fractional error for the validation samples (over the wavelength range). Note that the emulator is trained on the PCA coefficients of log SEDs, but accuracy is shown in Figure \ref{fig:fomo_sed_accuracy} in linear space.
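The summary statistics in Figure \ref{fig:fomo_sed_accuracy} reduce to percentiles of the fractional error array; a sketch, with synthetic spectra standing in for the \textsc{fsps}/emulator pairs:

```python
import numpy as np

rng = np.random.default_rng(5)

# Stand-ins for exact and emulated SEDs over a validation set,
# shape (n_sed, n_wave); real values come from fsps and the emulator.
true = np.abs(rng.normal(1.0, 0.1, size=(1000, 200))) + 0.5
emul = true * (1.0 + 0.003 * rng.normal(size=true.shape))

# Fractional error per wavelength, and mean absolute fractional error
# per SED (averaged over the wavelength range).
frac_err = emul / true - 1.0
mean_abs = np.abs(frac_err).mean(axis=1)

# "Typical", "low" and "extreme case" SEDs: the 50th/99th/99.9th
# percentiles of the per-SED mean absolute fractional error (top panels).
typical, low, extreme = np.percentile(mean_abs, [50, 99, 99.9])
assert typical <= low <= extreme

# Per-wavelength error intervals (bottom-left panel), e.g. 68% and 95%:
bands = np.percentile(frac_err, [16, 84, 2.5, 97.5], axis=0)
assert bands.shape == (4, 200)
```

The cumulative distribution in the bottom-right panel is simply the empirical CDF of \texttt{mean\_abs}.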
The emulator is accurate at the $<1\%$ level over the full wavelength range for $>99\%$ of the SEDs in the validation set. A small fraction (less than one percent) of validation samples have errors at the few-percent level at the shortest wavelengths. We note that this small number of ``outliers" occur where the recent star formation history turns on/off and the SEDs are very sensitive to the most-recent SFH coefficients. Whilst even in these cases the emulator errors are acceptable, they may be further improved by re-parameterization of the SFH, or better sampling of the prior volume in this part of parameter space.
There are two distinct sources of emulator error: the adequacy of the PCA basis, and the accuracy of the neural network in learning the PCA basis coefficients as functions of the SPS parameters. Comparing Figures \ref{fig:fomo_pca_variance} and \ref{fig:fomo_sed_accuracy} (bottom left), we see that the error budget in this case is dominated by the neural network rather than the PCA basis. Accuracy could hence be further improved with a larger neural network architecture (accompanied by a larger training set if necessary), at the cost of some reduction in the performance gain (since a larger network will be more expensive to evaluate).
\begin{figure}
\includegraphics[width = 8.5cm]{fomo_pca}
\caption{Validation of the PCA basis for the DESI model (\S \ref{sec:fomo}). Shown are the central 95\% (red), 99\% (salmon) and 99.9\% (grey) intervals of the fractional errors on the DESI model spectra represented in the basis of the first $20$ PCA components. The $20$ PCA component basis is able to describe the model spectra to $\lesssim 0.5\%$ accuracy over the whole wavelength range and parameter volume.}
\label{fig:fomo_pca_variance}
\end{figure}
\subsection{Computational performance}
\label{sec:fomo_performance}
With the network architecture described above (\S \ref{sec:fomo_emulator}), we find that the trained emulator is able to generate predicted SEDs a factor of $10^4$ faster than direct SPS computation with \textsc{fsps} on the same (CPU) architecture.
Implementation in \textsc{tensorflow} allows the emulator to automatically be called from a GPU, allowing for easy exploitation of GPU-enabled parallelization. Generating $10^6$ emulated SEDs takes around $\sim2\,\mathrm{s}$ on a Tesla K80 GPU, compared to $\sim 0.2\,\mathrm{s}$ per direct SPS computation on an Intel i7 CPU; an overall effective factor of $10^5$ speed-up.
When inferring SPS model parameters from galaxy observations, additional performance gains are expected from the use of gradient-based inference and optimization methods that are enabled by the emulator (which has readily available derivatives). We leave investigation of these extra gains to future work.
\begin{figure*}
\centering
\includegraphics[width = 17.5cm]{fomo_sed_accuracy}
\includegraphics[width = 17cm]{fomo_cdf}
\caption{Validation of the emulator for the DESI model (\S \ref{sec:fomo}). Top figure: ``typical", ``low" and ``extreme case" accuracy of the emulated SEDs from a validation set of $10^5$ spectra generated with parameters drawn from the prior (Table \ref{tab:fomo}). These cases correspond to the 50th, 99th and 99.9th percentiles of the mean (absolute) fractional error between emulated and true SED (over the wavelength range). Bottom left: 68 (dark red), 95 (red), 99 (salmon) and 99.9 (grey) percentiles of the fractional emulator error as a function of wavelength. Bottom right: cumulative distribution (blue) and 68 (dark red), 95 (red), 99 (salmon) and 99.9 (grey) percentiles of the mean (absolute) fractional errors (over the wavelength range). We see that the emulator is broadly accurate to $\lesssim 1\%$, with a small fraction (less than one percent) of validation samples having errors at the few-percent level or more at the lower end of the wavelength range.}
\label{fig:fomo_sed_accuracy}
\end{figure*}
\section{Validation II: Prospector-$\alpha$ spectra}
\label{sec:prospector}
In this section we demonstrate and validate the spectrum emulator on a more flexible 14-parameter SPS parameterization -- the Prospector-$\alpha$ model \citep{leja2017, leja2018, leja2019}. The model is outlined in \S \ref{sec:prospector_model}, the emulator set-up described in \S \ref{sec:prospector_emulator}, and validation tests and results discussed in \S \ref{sec:prospector_validation}-\ref{sec:prospector_performance}.
\subsection{Model and priors}
\label{sec:prospector_model}
The Prospector-$\alpha$ model includes a non-parametric star formation history, a two-component dust attenuation model with a flexible attenuation curve, variable stellar and gas-phase metallicity, dust emission powered via energy balance, and emission from a dusty AGN torus. Nebular line and continuum emission is generated using CLOUDY \citep{ferland2013} model grids from \citet{byler2017}. MIST stellar evolution tracks and isochrones are assumed \citep{choi2016, dotter2016}, based on MESA \citep{paxton2010, paxton2013, paxton2015}.
The model has been tested and calibrated on local galaxies \citep{leja2017, leja2018}, and recently used to analyze a sample of $\sim 60,000$ galaxies from the 3D-HST photometric catalog \citep{skelton2014} over $0.5<z<2.5$ \citep{leja2019}. The model is described in detail in \citet{leja2017, leja2018, leja2019}; we review the salient features, model parameters and associated priors below. A summary of model parameters and priors is given in Table \ref{tab:prospector}.
The star formation history is modelled as piece-wise constant, with seven time bins spaced as follows. Two bins are fixed at $[0,30]$ $\mathrm{Myr}$ and $[30,100]$ $\mathrm{Myr}$ to capture recent SFH. A third bin is placed at the other end at $[0.85, 1]\,t_\mathrm{age}$, where $t_\mathrm{age}$ is the age of the Universe at the lookback time of the galaxy, to model the oldest star formation. The remaining four bins are spaced equally in logarithmic time between $100$ $\mathrm{Myr}$ and $0.85\,t_\mathrm{age}$. The six ratios of the logarithmic star formation rate (SFR) in adjacent SFH bins $\{r_\mathrm{SFH}^i\}$ are included as free model parameters. Following \citet{leja2017, leja2018, leja2019} we take independent Student's-$t$ priors on the log SFR ratios (see Table \ref{tab:prospector}). This prior is chosen to allow similar transitions in the SFR as seen in the Illustris hydrodynamical simulations \citep{vogelsberger2014a, vogelsberger2014b, torrey2014, diemer2017}, although care is taken to ensure a wider range of models is allowed than is seen in those simulations.
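A sketch of sampling the log-SFR-ratio prior follows; we implement the $|r_\mathrm{SFH}^i|\leq 5$ bound by rejection sampling, which is one reading of the ``clipped'' Student's-$t$ prior in Table \ref{tab:prospector} (hard truncation of the draws is an alternative):

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_log_sfr_ratios(n, sigma=0.3, nu=2, clip=5.0, rng=rng):
    """Draw n samples of the six log-SFR ratios from independent
    Student's-t priors (scale sigma, nu degrees of freedom), with the
    |r| <= clip bound enforced by rejection."""
    out = np.empty((n, 6))
    filled = 0
    while filled < n:
        draw = sigma * rng.standard_t(nu, size=(n - filled, 6))
        keep = draw[np.all(np.abs(draw) <= clip, axis=1)]
        out[filled:filled + len(keep)] = keep
        filled += len(keep)
    return out

r = sample_log_sfr_ratios(1000)
assert r.shape == (1000, 6)
assert np.abs(r).max() <= 5.0
```

With $\sigma=0.3$ the rejection rate is negligible, so this is essentially as fast as unconstrained sampling.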
A single stellar metallicity is assumed for all stars in a galaxy. The observed stellar mass-stellar metallicity relationship from $z = 0$ Sloan Digital Sky Survey (SDSS) data \citep{gallazzi2005} is used to motivate the metallicity prior. For a given stellar-mass, the stellar-metallicity prior is taken to be a truncated normal with limits\footnote{Set by the range of the MIST stellar evolution tracks.} $-1.98 < \mathrm{log}(Z/Z_\odot) < 0.19$, mean set to the \citet{gallazzi2005} $z = 0$ relationship, and standard deviation taken to be twice the observed scatter about the $z=0$ relationship (to allow for potential redshift evolution in the mass-metallicity relation).
As discussed in \S \ref{sec:emulation} we fix the integral normalization of the SFH to $1\,\mathrm{M}_\odot$ for the spectra in the training set, and stellar-mass can then be set by adjusting the normalization of the emulated spectra. However, because in this case the metallicity prior is taken to be mass-dependent, we first sample the total stellar-mass formed from a log-uniform prior from $10^7\mathrm{M}_\odot$ to $10^{12.5}\mathrm{M}_\odot$ (for the purpose of sampling from the metallicity prior correctly), and then renormalize the spectra to $1\,\mathrm{M}_\odot$ afterwards when training the emulator.
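The two-step sampling (mass first, then metallicity given mass) might look as follows. The linear mean relation below is an illustrative stand-in for the \citet{gallazzi2005} relation -- not a fit to it -- and the scatter value is likewise assumed; only the truncation limits are the MIST bounds quoted above:

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(3)

def sample_logzsol(log_mass, rng):
    """Draw log(Z/Z_sun) from a truncated normal whose mean tracks a
    mass-metallicity relation. The linear `mean` is an illustrative
    placeholder for the Gallazzi et al. (2005) z=0 relation; the limits
    are the MIST grid bounds."""
    lo, hi = -1.98, 0.19
    mean = np.clip(0.3 * (log_mass - 10.5), lo, hi)  # placeholder relation
    sigma = 2.0 * 0.15  # twice an (assumed) observed scatter
    # scipy's truncnorm takes standardized bounds (a, b):
    a, b = (lo - mean) / sigma, (hi - mean) / sigma
    return truncnorm.rvs(a, b, loc=mean, scale=sigma, random_state=rng)

# Mass is drawn first (log-uniform), then metallicity given mass:
log_mass = rng.uniform(7.0, 12.5, size=1000)
logz = np.array([sample_logzsol(m, rng) for m in log_mass])
assert logz.min() >= -1.98 and logz.max() <= 0.19
```

Each sampled spectrum is then renormalized back to $1\,\mathrm{M}_\odot$ before entering the training set, as described above.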
Gas-phase metallicity is decoupled from the stellar metallicity and allowed to vary (uniformly) between $-2 < \mathrm{log}(Z_\mathrm{gas}/Z_\odot) < 0.5$.
Dust is modelled with two components -- birth cloud and diffuse dust screens -- following \citet{charlot2000} (see \citealp{leja2017} for details). The birth cloud ($\tau_1$) and diffuse ($\tau_2$) optical depths are free model parameters, with truncated normal priors: $\tau_2\sim \mathcal{N}(0.3, 1)$ with limits $\tau_2\in[0,4]$, and $\tau_1 / \tau_2 \sim\mathcal{N}(1, 0.3)$ with limits $\tau_1/\tau_2\in[0,2]$. The power law index of the \citet{calzetti2000} attenuation curve for the diffuse component is also included as a free model parameter, with a uniform prior $n\in[-1, 0.4]$.
AGN activity is modelled as described in \citet{leja2018}, with the fraction of the bolometric luminosity from the AGN $f_\mathrm{AGN}$ and optical depth of the AGN torus $\tau_\mathrm{AGN}$ as free parameters with log-uniform priors $\mathrm{ln}\,f_\mathrm{AGN}\in[\mathrm{ln}(10^{-5}), \mathrm{ln}(3)]$ and $\mathrm{ln}\,\tau_\mathrm{AGN}\in[\mathrm{ln}(5), \mathrm{ln}(150)]$ respectively.
The model parameters, their physical meanings, and associated priors are summarized in Table \ref{tab:prospector}.
\begin{table*}
\centering
\scalebox{0.95}{
\begin{tabularx}{\textwidth}{ccc}
\toprule
Parameter & Description & Prior \tabularnewline
\hline
$M$ & Total stellar-mass formed & Log-Uniform $[10^7, 10^{12.5}]\mathrm{M}_\odot$\tabularnewline
${r_\mathrm{SFH}^1,\dots,r_\mathrm{SFH}^6}$ & Ratio of log-SFR between adjacent bins & Clipped Student's-$t$: $\sigma=0.3$, $\nu=2$, $|r_\mathrm{SFH}^i| \leq 5$\tabularnewline
$t_\mathrm{age}$ & Age of Universe at the lookback-time of galaxy & Uniform $[2.6, 13.7]\,\mathrm{Gyr}$, ($0 < z < 2.5$)\tabularnewline
$\tau_2$ & Diffuse dust optical depth & Truncated normal $\mu = 0.3,\,\sigma=1$, min=0, max=4\tabularnewline
$\tau_1$ & Birth-cloud optical depth & Truncated normal in $\tau_1/\tau_2$\tabularnewline
&&$\mu = 1,\,\sigma=0.3$, min=0, max=2\tabularnewline
$n$ & Index of \citet{calzetti2000} dust attn. curve & Uniform $[-1, 0.4]$\tabularnewline
$\mathrm{ln}\,(Z_\mathrm{gas}/Z_\odot)$ & Gas phase metallicity & Uniform $[-2, 0.5]$\tabularnewline
$f_\mathrm{AGN}$ & Fraction of bolometric luminosity from AGN & Log-Uniform $[10^{-5}, 3]$\tabularnewline
$\tau_\mathrm{AGN}$ & Optical depth of AGN torus & Log-Uniform $[5, 150]$\tabularnewline
$\mathrm{ln}\,(Z/Z_\odot)$ & Stellar metallicity & Truncated normal with $\mu$ and $\sigma$ from \tabularnewline
&&\citet{gallazzi2005} mass-metallicity relation (see \S \ref{sec:prospector}),\tabularnewline
&&limits min=-1.98, max=0.19 \tabularnewline
$z$ & Redshift & Uniform $[0.5, 2.5]$ \tabularnewline
\hline
\end{tabularx}}
\caption{Summary of SPS model parameters and their respective priors for the Prospector-$\alpha$ model (\S \ref{sec:prospector_model}). Note that for emulating spectra under this model (\S \ref{sec:prospector}), generated training spectra are computed in the rest-frame (but over a range of values for $t_\mathrm{age}$), and renormalized such that they correspond to $M=1\mathrm{M}_\odot$ (see \S \ref{sec:emulation} for motivation). When emulating photometry under this model (\S \ref{sec:photometry}), $M$ and $z$ are kept as free parameters to be emulated over.}
\label{tab:prospector}
\end{table*}
\subsection{Emulation}
\label{sec:prospector_emulator}
We generated a training and validation set of $2\times 10^6$ and $10^5$ SEDs respectively\footnote{We used a larger training set for the Prospector-$\alpha$ compared to the DESI model, owing to the larger parameter space. Training set sizes for both models were chosen so that they could be generated in $\lesssim$ days and achieved percent-level accuracy upon validation. For more discussion on optimization of training set sizes see \S \ref{sec:discussion}.}, for model parameters drawn from the prior (see Table \ref{tab:prospector}) and covering the wavelength range $100\,\mathrm{nm}$ to $30\,\mu\mathrm{m}$ (using the SPS code \textsc{fsps}).
For emulating higher-dimensional SPS models over very broad wavelength ranges, such as this case, it is advantageous to split the emulation task into a number of wavelength sub-ranges, which can be stitched together afterwards. Here, we will emulate $100-400\,\mathrm{nm}$ (UV), $400-1100\,\mathrm{nm}$ (optical-NIR) and $1100\,\mathrm{nm}-30\,\mu\mathrm{m}$ (IR) separately. We find in experiments that without splitting into wavelength sub-ranges, more PCA components are required (in total) to achieve the same consistent accuracy across the full wavelength range. Furthermore, from the perspective of training the neural networks, emulating relatively smaller PCA bases (for each wavelength sub-range) represents an easier learning task compared to emulating a single large ($>100$ component) basis. This means that relatively smaller networks can be used for each sub-range, requiring less training data and being faster to evaluate once trained. We do not find any evidence for discontinuities in the emulated spectra at the boundaries between wavelength regions (within the accuracy of the emulator at the boundaries; Figure \ref{fig:prospector_sed_accuracy}).
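The stitching itself is trivial once each sub-range has its own emulator; a sketch, with constant-output placeholders standing in for the trained networks (a real emulator maps $\boldsymbol\theta$ to PCA coefficients and then to a log SED):

```python
import numpy as np

# Wavelength grids for the three sub-ranges (nm), matching the split in
# the text; the grid resolutions here are illustrative.
grids = {
    "uv": np.linspace(100, 400, 300, endpoint=False),
    "opt_nir": np.linspace(400, 1100, 700, endpoint=False),
    "ir": np.linspace(1100, 30000, 2000),
}

def stitch(emulators, theta):
    """Evaluate one emulator per wavelength sub-range and concatenate
    the predicted log SEDs into a single spectrum."""
    return np.concatenate([emulators[k](theta)
                           for k in ("uv", "opt_nir", "ir")])

# Placeholder emulators: each returns a constant log SED on its grid.
emulators = {k: (lambda g: (lambda theta: np.full(g.size, theta.sum())))(g)
             for k, g in grids.items()}

theta = np.ones(14)
log_sed = stitch(emulators, theta)
assert log_sed.size == sum(g.size for g in grids.values())
```

Since the sub-grids share boundaries but not samples, the concatenated spectrum has no duplicated wavelengths; any residual mismatch at the seams is bounded by the emulator accuracy there (Figure \ref{fig:prospector_sed_accuracy}).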
The PCA basis was constructed as before by performing a PCA decomposition of all of the training SEDs (for the three wavelength ranges separately), and the number of PCA components retained was chosen such that the resulting basis is able to capture the (validation) SEDs with $\lesssim 1\%$ level accuracy. Figure \ref{fig:prospector_pca_variance} shows the distribution of errors on the validation SEDs for the PCA basis with $50$ components for UV, and $30$ components for optical-NIR and IR respectively. This basis is sufficient to describe the SEDs to $\lesssim 1\%$ over the full wavelength range and parameter volume. The errors can be reduced further by increasing the size of the PCA basis, but are sufficient for our current purposes. Note that the PCA basis was constructed for log SEDs, but accuracy is shown in Figure \ref{fig:prospector_pca_variance} in linear space.
The basis coefficients for each wavelength range are parameterized by a dense neural network with three hidden layers of $256$ hidden units, with non-linear activation functions (Eq. \eqref{activation}) on all hidden layers, and linear activation on the output. Network implementation and training follows exactly as described in \S \ref{sec:fomo_emulator}.
\begin{figure*}
\includegraphics[width = 18cm]{prospector_pca_split}
\caption{Validation of the PCA basis for the Prospector-$\alpha$ model (\S \ref{sec:prospector}). Shown are the central 95\% (red), 99\% (salmon) and 99.9\% (grey) intervals for the fractional errors on the $10^5$ validation spectra represented in the basis of the first 50, 30 and 30 PCA components for UV, optical-NIR and IR wavelength ranges respectively. The basis is able to capture the Prospector-$\alpha$ model spectra to $\lesssim 1\%$ accuracy over the entire wavelength and parameter ranges.}
\label{fig:prospector_pca_variance}
\end{figure*}
\subsection{Results and validation}
\label{sec:prospector_validation}
Similarly to the DESI model, for validating the trained emulator we generated $10^5$ SEDs for model parameters drawn from the prior, and compared the emulated and exact SPS spectra for this validation set. The results are summarized in Figure \ref{fig:prospector_sed_accuracy}. The upper panels show typical, low and extreme case performance of the emulator, taken as the $50$th, $99$th, and $99.9$th percentiles of the mean (absolute) fractional error per SED (over the full wavelength range). The bottom left panel shows the $68$, $95$, $99$ and $99.9$ percent intervals of the fractional error as a function of wavelength, and the bottom right panel shows the cumulative distribution of the mean (absolute) fractional error for the validation samples (over the full wavelength range). Note that the emulator is trained on the PCA coefficients of log SEDs, but accuracy is shown in Figure \ref{fig:prospector_sed_accuracy} in linear space.
The emulator has typical fractional SED errors (68th percentile) at the $\ll 1\%$ level over the full wavelength range and parameter volume. $99.9\%$ of validation samples are accurate to better than $2\%$ down to $200\mathrm{nm}$, below which the accuracy steadily degrades with tails out to $\sim6\%$ at the lowest wavelengths (100$\mathrm{nm}$).
\subsection{Computational performance}
\label{sec:prospector_performance}
For the Prospector-$\alpha$ model, with the network architecture described in \S \ref{sec:prospector_emulator} the emulator is able to generate predicted SEDs a factor of $10^3$ faster (per wavelength range) than direct SPS computation with \textsc{fsps} on the same CPU architecture.
For applications where parallel SPS evaluations can be leveraged, the emulator can be called on a GPU without any additional development overhead. Generating $10^6$ emulated SEDs takes around $\sim 2\,\mathrm{s}$ on a Tesla K80 GPU, compared to $\sim 0.05\,\mathrm{s}$ per \textsc{fsps} call on an Intel i7 CPU; an overall factor of $10^4$ effective speed-up per SPS evaluation.
We leave investigation of additional performance gains enabled by the use of gradient based optimization and inference methods to future work.
\begin{figure*}
\centering
\includegraphics[width = 17.5cm]{prospector_sed_accuracy_stack}
\includegraphics[width = 17cm]{prospector_cdf_stack}
\caption{Validation of the emulator for the Prospector-$\alpha$ model (\S \ref{sec:prospector}). Top figure: ``typical", ``low" and ``extreme case" accuracy for the emulated SEDs from a validation set of $10^5$ spectra generated with parameters drawn from the prior. These cases correspond to the 50th, 99th, and 99.9th percentiles of the mean (absolute) fractional error between the emulated and true SED (over the wavelength range). The displayed fractional errors (middle row) are faded out where the SEDs $\rightarrow0$. Bottom left: 68 (dark red), 95 (red), 99 (salmon) and 99.9 (grey) percentiles of the fractional emulator error as a function of wavelength. Bottom right: cumulative distribution and 68th (dark red), 95th (red), 99th (salmon) and 99.9th (grey) percentiles of the mean (absolute) fractional errors on the SEDs (over the full wavelength range). Typical errors (68\%) are sub-percent across the whole wavelength range. 99.9\% of samples are accurate to $< 2\%$ over most of the wavelength range, with the tails of the error distribution extending out to $\sim6\%$ at the shortest wavelengths.}
\label{fig:prospector_sed_accuracy}
\end{figure*}
\section{Validation III: Prospector-$\alpha$ photometry}
\label{sec:photometry}
In this section we demonstrate and validate direct emulation of photometry on the same Prospector-$\alpha$ model as considered in the previous section (see \S \ref{sec:prospector_model} and Table \ref{tab:prospector} for the model and parameters).
For this demonstration, we emulate the $24$ bands associated with the AEGIS field for the 3D-HST photometric catalog \citep{skelton2014}, supplemented by Spitzer/MIPS $24\mu$m fluxes from \citet{whitaker2014}. This is motivated by the recent \citet{leja2019} analysis of the 3D-HST galaxies using the Prospector-$\alpha$ model. The 24 bands are as follows (shown in Figure \ref{fig:bands}): CFHTLS $ugriz$ \citep{erben2009}, CANDELS F606W, F814W, F125W, F160W \citep{grogin2011,Koekemoer2011}, NMBS J1, J2, J3, H1, H2, K \citep{whitaker2011}, WIRDS J, H, Ks \citep{bielby2012}, 3D-HST F140W \citep{brammer2012}, SEDS $3.6\mu$m and $4.5\mu$m \citep{ashby2013}, EGS $5.8\mu$m and $8.0 \mu$m \citep{barmby2008}, and Spitzer/MIPS $24\mu$m \citep{whitaker2014}.
In contrast to spectrum emulation in \S \ref{sec:prospector} where only rest-frame unit-mass SEDs were emulated (and mass and redshift adjusted afterwards as required), when emulating photometry we keep both mass $M$ and redshift $z$ as free parameters to be emulated over. Recall also that for photometry we will emulate the apparent magnitudes directly (\S \ref{sec:emu_phot}); there is no need for an intermediate (PCA) basis construction step in this case.
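For reference, the training magnitudes follow from pushing each model SED through the filter curves. A minimal sketch of the standard photon-counting AB magnitude, using a synthetic top-hat filter in place of the survey-specific curves of Figure \ref{fig:bands}:

```python
import numpy as np

def ab_magnitude(nu, f_nu, throughput):
    """Photon-counting AB magnitude of a spectrum through one bandpass:
    m = -2.5 log10( int f_nu T dnu/nu / int T dnu/nu ) - 48.60,
    with f_nu in erg s^-1 cm^-2 Hz^-1. On a uniform nu grid the dnu
    factors cancel between numerator and denominator, so plain sums
    suffice here."""
    num = np.sum(f_nu * throughput / nu)
    den = np.sum(throughput / nu)
    return -2.5 * np.log10(num / den) - 48.60

# Sanity check: a flat-spectrum ~3631 Jy source has m_AB = 0 in any band.
nu = np.linspace(1e14, 1e15, 2000)               # Hz
f_nu = np.full_like(nu, 3631.0e-23)              # 3631 Jy in cgs units
box = ((nu > 3e14) & (nu < 4e14)).astype(float)  # synthetic top-hat filter
assert abs(ab_magnitude(nu, f_nu, box)) < 1e-3
```

In practice this integration is handled internally by \textsc{fsps} when generating the training photometry; the emulator then learns the resulting magnitudes directly as functions of $\boldsymbol\theta$ (including $M$ and $z$).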
\subsection{Emulation}
We generated a training and validation set of $2\times 10^6$ and $1\times 10^5$ SEDs and associated photometry, for model parameters drawn from the prior (see Table \ref{tab:prospector}). We parameterized the apparent magnitudes for each band individually by a dense neural network with four hidden layers of 128 units each, with non-linear activation functions (Eq. \eqref{activation}) on all but the output layer, which has linear activation.
Network implementation and training follows exactly as described in \S \ref{sec:fomo_emulator}.
\subsection{Results and validation}
\begin{figure*}
\centering
\includegraphics[width = 18cm]{bands.pdf}
\caption{The filters for the 24 bands emulated (for the Prospector-$\alpha$ model) in \S \ref{sec:photometry}, spanning the wavelength range $300\,\mathrm{nm}$ to $24\,\mu\mathrm{m}$.}
\label{fig:bands}
\end{figure*}
The performance of the emulator is summarized in Figure \ref{fig:mag_accuracy}, which shows the frequency density (black) and 95 (red), 99 (salmon) and 99.9\% (grey) intervals of the emulator errors over the validation set, for all $24$ emulated bands.
Across the board, the standard deviations of the error distributions are $<0.01$ magnitudes. For the majority of bands, 99.9\% of validation samples are accurate to better than $\lesssim 0.02$ magnitudes, and better than $\lesssim 0.04$ in the worst cases. In applications where an error floor of $0.05$ magnitudes is adopted due to SPS modeling and/or photometric calibration systematics, the emulator errors will make up a modest fraction of the total error budget.
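The quoted intervals are simply central percentile ranges of the per-band error distributions over the validation set. A minimal sketch, with a synthetic Gaussian error sample standing in for the real emulated-minus-true magnitudes:

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-in for emulator-minus-truth residuals over a 10^5-sample
# validation set (real errors come from comparing emulated and
# directly computed magnitudes).
errors = rng.normal(0.0, 0.005, size=100_000)

for level in (95, 99, 99.9):
    lo, hi = np.percentile(errors, [50 - level / 2, 50 + level / 2])
    print(f"{level}% of errors in [{lo:+.4f}, {hi:+.4f}] mag")
```

The same computation, applied per band to the true residuals, produces the intervals shaded in Figure \ref{fig:mag_accuracy}.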
\subsection{Computational performance}
We find that with the neural network architecture described above, the emulator is able to predict photometry a factor of $2\cdot 10^3$ faster (per band) than direct SPS computation for the Prospector-$\alpha$ model, with an additional order of magnitude speed-up when calling the emulator from the GPU. We find in experiments that larger network architectures give further improvements in accuracy, at the cost of some computational performance, and leave further optimization of network architectures for this problem to future work.
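Speed-up factors of this kind are straightforward to measure with a small wall-clock harness; the sketch below uses a deliberately expensive linear-algebra call as a stand-in for direct SPS computation and a single matrix-vector product as a stand-in for an emulator forward pass (both are illustrative placeholders, not the actual codes).

```python
import time
import numpy as np

def per_call_time(fn, n_calls):
    """Average wall-clock time per call of fn."""
    t0 = time.perf_counter()
    for _ in range(n_calls):
        fn()
    return (time.perf_counter() - t0) / n_calls

W = np.random.default_rng(2).normal(size=(128, 128))
x = np.ones(128)
direct = lambda: [np.linalg.eigvals(W) for _ in range(5)]  # "slow SPS" stand-in
emulated = lambda: x @ W                                   # "emulator" stand-in

speedup = per_call_time(direct, 5) / per_call_time(emulated, 1000)
print(f"speed-up factor ~{speedup:.0f}x")
```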
\begin{figure*}
\centering
\includegraphics[width = 18cm]{mag_validation.pdf}
\caption{Frequency densities (black) and 95 (red), 99 (salmon) and 99.9 (grey) percent intervals of the errors on the emulated apparent magnitudes for the 24 bands considered (\S \ref{sec:photometry}), over the $10^5$ samples in the validation set. For the chosen neural network architecture (\S \ref{sec:photometry}), the emulator is able to deliver percent-level accuracy across the board, with 99.9\% of validation samples being accurate to $\lesssim 0.02$ magnitudes for most bands, and $\lesssim 0.04$ in the worst cases.}
\label{fig:mag_accuracy}
\end{figure*}
\section{Discussion and conclusions}
\label{sec:discussion}
SPS emulation offers a factor $\sim 10^3-10^4$ speed-up over direct SPS computation, whilst delivering percent-level accuracy over broad parameter and wavelength ranges. Parallel SPS evaluations can be further leveraged by calling the emulator from a GPU, giving an overall speed-up factor of $10^4-10^5$ compared to direct SPS evaluations on a CPU (for the models considered). In addition to the direct speed-up of SPS calls, the emulated SEDs and photometry come with readily accessible derivatives (with respect to the SPS model parameters), enabling the use of gradient-based inference and optimization methods; this is expected to reduce the number of SPS evaluations required when analyzing galaxy spectra or photometry under SPS models. The implications of the speed-up are clear: analyses that previously required significant high-performance computing investment could now be performed on a laptop, and previously intractable analyses of large populations of galaxies will now be tractable. For context, the $\sim1.5$ million CPU hour analysis of \citet{leja2019} could now be performed in $\sim$days on $16$-cores, and leveraging the gradients for inference is expected to give additional orders-of-magnitude improvement on top of that (e.g., \citealp{seljak2019}). Similarly, the computational cost associated with SPS evaluation when forward-simulating large surveys will be radically reduced.
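The gradients referred to above come from backpropagation through the emulator network, and can be checked against finite differences. A toy sketch, with a one-layer stand-in for the emulated magnitude (the analytic gradient of $\tanh(\theta\cdot W)$ at $\theta=0$ is $W$, which the finite-difference estimate should recover):

```python
import numpy as np

def numerical_grad(f, theta, eps=1e-6):
    """Central finite-difference gradient of a scalar function of theta."""
    g = np.zeros_like(theta)
    for i in range(theta.size):
        dp, dm = theta.copy(), theta.copy()
        dp[i] += eps
        dm[i] -= eps
        g[i] = (f(dp) - f(dm)) / (2 * eps)
    return g

# Toy stand-in for an emulated magnitude as a function of SPS parameters;
# a real emulator exposes exact gradients via backpropagation instead.
W = np.random.default_rng(3).normal(size=5)
emulator = lambda theta: float(np.tanh(theta @ W))

grad = numerical_grad(emulator, np.zeros(5))
print(grad)
```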
Whilst the specific SPS models presented in this paper were motivated by analysis of photometry and low S/N spectra respectively, another promising area for emulation is SPS models designed to fit high S/N, high resolution galaxy spectra. These models are often computationally expensive ($\sim$1 minute per SPS evaluation) and are thus particularly attractive candidates for speed-up by emulation. However, the model dimensionality and required precision can be demanding. For the simple case of quiescent galaxies, state-of-the-art models have up to $\sim$40 parameters which control components such as the initial mass function, individual elemental abundances, as well as detailed models of continuum line spread functions (e.g., \citealt{conroy12,conroy18}). The systematic residuals for such models are on the order of 1\%, so in practice an emulator would need to reproduce thousands of pixels to sub-percent-level accuracy. Star-forming galaxies bring additional challenges, notably nebular emission -- photoionisation codes can have hundreds of parameters controlling hundreds of emission lines \citep{ferland17}, of which each emission line in principle could have its own line spread function. Although the model complexity and fidelity requirements are higher for this use case, because these models are so much more expensive one has considerably more leeway in using larger and more sophisticated neural network architectures, whilst still potentially achieving significant computational speed-up.
Another avenue that SPS emulation opens up is Bayesian hierarchical analysis of large galaxy populations under SPS models, i.e., jointly inferring the physical properties of individual galaxies in a sample along with the intrinsic (prior) distribution of galaxy characteristics. The high-dimensional inference tasks associated with such analyses typically requires gradient-based inference algorithms, such as Hamiltonian Monte Carlo sampling, which will be made substantially easier with emulated SPS.
There are a number of areas where the neural network emulation framework presented here can be improved upon. Firstly, we did not go to great lengths to optimize the neural network architectures to deliver the optimal trade-off between accuracy and speed-up. Once the training sets have been generated, training the emulator networks is sufficiently cheap that a search over network architectures (including more sophisticated architecture types) to deliver the best performance is computationally feasible.
Regarding basis construction for galaxy spectra, we have shown that PCA is effective for a range of applications. However, for complex SPS models or where fidelity requirements are very high, alternative basis constructions such as non-negative matrix factorization (NMF) in linear flux \citep{hurley2014,lovell2019}, or non-linear representation construction with autoencoders, may prove more powerful.
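The PCA basis construction referred to here amounts to an SVD of the mean-subtracted training matrix, followed by projection onto the leading components. A self-contained sketch on toy "spectra" built from three smooth latent components (a stand-in for real training SEDs):

```python
import numpy as np

rng = np.random.default_rng(4)
# Toy training set: 500 spectra over 200 wavelength pixels, drawn from a
# 3-dimensional latent space plus small noise.
lam = np.linspace(0, 1, 200)
components = np.stack([np.sin(2 * np.pi * k * lam) for k in (1, 2, 3)])
spectra = rng.normal(size=(500, 3)) @ components \
    + 0.01 * rng.normal(size=(500, 200))

# PCA via SVD of the mean-subtracted training matrix.
mean = spectra.mean(axis=0)
U, S, Vt = np.linalg.svd(spectra - mean, full_matrices=False)
basis = Vt[:3]                       # keep the leading 3 principal components

coeffs = (spectra - mean) @ basis.T  # project onto the basis...
recon = coeffs @ basis + mean        # ...and reconstruct
rms = np.sqrt(np.mean((recon - spectra) ** 2))
print(f"rms reconstruction error: {rms:.4f}")
```

An emulator then predicts the low-dimensional `coeffs` rather than the full spectrum; NMF or an autoencoder would replace only the basis-construction step in this pipeline.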
The other area where some additional effort could give substantial improvements is intelligently sampling the parameter space when building the training set. In this study, little focus was given to optimizing parameter space sampling and training set size; training set sizes were simply chosen so that they could be generated in $\lesssim$ days and deliver percent-level accuracy in the trained emulators. However, it is clearly advantageous to use online learning to optimally sample the parameter space on-the-fly in conjunction with the emulator training (see e.g., \citealp{rogers2019,alsing2019}). This approach has the benefits that it both enables more optimal sampling of the parameter space, and by generating the training set synchronously with training, the size of the training set required to achieve a given accuracy target can be determined on-the-fly (i.e., training and acquisition of training data can be stopped when the accuracy reaches the desired threshold).
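A minimal acquisition rule of this kind adds new training points where the current emulator performs worst. The sketch below is a deliberately simple illustration (the distance-based error model is a hypothetical stand-in for a real validation error or acquisition function such as those in the cited works):

```python
import numpy as np

rng = np.random.default_rng(6)

def acquire(candidate_theta, error_fn, n_new=10):
    """Pick the candidate parameter points where the current emulator
    is estimated to be worst, to add to the training set."""
    errs = np.array([error_fn(t) for t in candidate_theta])
    worst = np.argsort(errs)[-n_new:]
    return candidate_theta[worst]

# Toy error model: estimated error grows with distance from the
# nearest existing training point.
train_theta = rng.uniform(-1, 1, size=(50, 2))
candidates = rng.uniform(-1, 1, size=(500, 2))
error_fn = lambda t: np.min(np.linalg.norm(train_theta - t, axis=1))

new_points = acquire(candidates, error_fn)
print(new_points.shape)
```

Iterating acquisition and retraining until a validation target is met gives the on-the-fly stopping criterion described above.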
For inference applications when the emulator error cannot safely be assumed to be a negligible fraction of the total error budget, it will be desirable to have some quantification of the emulator uncertainties that can be folded into the likelihood function. This can be achieved within the neural network paradigm by using Bayesian neural networks: performing posterior inference over the network weights given the training data (and some priors), hence providing posterior predictive distributions over the output SEDs or photometry rather than simple point estimates. This sophistication comes at the cost of having to perform many forward passes through the network to obtain an emulator error estimate at a given set of SPS parameters.
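A cheap approximation to the Bayesian-network idea is Monte Carlo dropout: keeping dropout active at prediction time and averaging many stochastic forward passes to get a predictive mean and spread. The sketch below illustrates the mechanism on a random one-hidden-layer network (weights and sizes are illustrative, not a trained emulator):

```python
import numpy as np

rng = np.random.default_rng(5)

def mc_dropout_predict(x, W1, b1, W2, b2, p_drop=0.1, n_samples=100):
    """Many stochastic forward passes with dropout give a predictive
    mean and spread -- a crude emulator-uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W1.shape[1]) > p_drop         # drop hidden units
        h = np.tanh(x @ W1 + b1) * mask / (1 - p_drop)  # inverted dropout
        preds.append(h @ W2 + b2)
    preds = np.array(preds)
    return preds.mean(axis=0), preds.std(axis=0)

W1 = rng.normal(0, 0.3, (8, 64)); b1 = np.zeros(64)
W2 = rng.normal(0, 0.3, (64, 1)); b2 = np.zeros(1)
mean, std = mc_dropout_predict(np.ones(8), W1, b1, W2, b2)
print(mean, std)
```

This makes concrete the cost noted above: each uncertainty estimate requires `n_samples` forward passes rather than one.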
The emulation code -- \textsc{speculator} -- is publicly available at \url{https://github.com/justinalsing/speculator}.
\acknowledgments
We thank Benjamin Joachimi and Fran\c{c}ois Lanusse for useful discussions. JA and HVP were partially supported by the research project grant ``Fundamental Physics from Cosmological Surveys" funded by the Swedish Research Council (VR) under Dnr 2017-04212. HVP, BL and DJM acknowledge the hospitality of the Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. JL is supported by an NSF Astronomy and Astrophysics Postdoctoral Fellowship under award AST-1701487. This work was also partially supported by a grant from the Simons Foundation, and partially enabled by funding from the UCL Cosmoparticle Initiative.
Ghost Hunters of Connecticut
USS Salem Quincy MA
The third USS Salem (CA-139) is a Des Moines-class heavy cruiser, formerly commissioned in the United States Navy. She was the world's last all-gun heavy cruiser to enter commission, and is open to the public as a museum ship in Quincy, Massachusetts.
Construction and shakedown
Salem was laid down on 4 July 1945 by the Bethlehem Steel Co.'s Fore River Shipyard, Quincy, Mass.; launched on 25 March 1947; sponsored by Miss Mary G. Coffey; and commissioned on 14 May 1949, Captain J. C. Daniel in command. Her main battery held the world's first automatic 8" guns and were the first 8" naval guns to use cased ammunition instead of shell and bag loading.
After a visit to Salem, Mass., on 4 July 1949, Salem underwent three months of shakedown at Guantanamo Bay, Cuba, between July and October 1949, followed by post-shakedown repairs at the Boston Navy Yard. She then made two cruises to Guantanamo in November and December 1949, and participated in maneuvers with the Atlantic Fleet in early 1950.
Salem departed the east coast on 3 May 1950; and, on 17 May, relieved Newport News (CA-148) as flagship of the 6th Fleet in the Mediterranean. During this, the first of seven deployments to the Mediterranean as fleet flagship, Salem visited ports in Malta, Italy, France, Greece, Turkey, Lebanon, and Algeria, and participated in training exercises. On 22 September, she was relieved by Newport News and returned to the United States.
After three weeks at Boston, Salem joined the Atlantic Fleet for maneuvers; and, on 3 January 1951, sailed for six weeks of intensive gunnery training at Guantanamo. She completed her training off Bermuda; and, on 20 March, sailed for the Mediterranean to relieve Newport News as 6th Fleet flagship. On 19 September, she was relieved by Des Moines (CA-134) and returned to the United States for four months of overhaul at Boston.
Salem sailed on 1 February 1952 for refresher training at Guantanamo and returned to Boston on 29 March for brief repairs. On 19 April, she sailed for her third Mediterranean deployment, relieving Newport News at Algiers on 28 April. Besides the normal port calls and exercises, Salem participated in Exercise "Beehive II," which involved units of the United States, British, Italian, French, and Greek navies. She was relieved once again by Des Moines on 29 September and arrived at Boston on 9 October.
After four months of local operations, Salem sailed for Guantanamo Bay on 24 January 1953 for training. Returning to Boston on 27 February, she sailed for the Mediterranean on 17 April and again relieved Newport News as flagship. Her fourth deployment was marked by Exercise "Weldfest" and by emergency relief work after the 1953 Ionian Earthquake which devastated the Ionian Islands. Salem was the first American ship to arrive on the scene, and provided relief supplies and assistance from 13 August until her own stocks ran low four days later. Relieved by Des Moines as flagship on 9 October, she returned to Boston on 24 October and entered the shipyard for overhaul.
On 6 February 1954, Salem sailed again for Guantanamo Bay and returned on 7 April after refresher training. She left Boston on 30 April; and, on arrival in the Mediterranean on 12 May, again assumed duties as 6th Fleet flagship. Relieved by Des Moines at Lisbon on 22 September, she returned to Boston on 29 September. In October and November 1954, she participated in war games with the Atlantic Fleet.
Between 19 January and 22 February 1955, Salem made her annual cruise to Guantanamo Bay for training. After a two-week reserve training cruise, the cruiser sailed for the Mediterranean on 2 May and relieved Newport News on 19 May. During this, her sixth deployment, she participated in a NATO exercise and a Franco-American naval exercise, with Under Secretary of the Navy Thomas S. Gates embarked as observer. Salem departed Barcelona on 23 September and returned to Boston on 2 October 1955 for a four-month overhaul.
The cruiser left Boston on 16 February 1956 for training at Guantanamo in preparation for a 20-month cruise as "permanent" flagship of the Commander, 6th Fleet with homeport at Villefranche-sur-Mer. She returned to Boston on 5 April and sailed for the Mediterranean on 1 May. While she was at sea, the Suez Crisis broke out; and she was diverted to Rhodes in the Eastern Mediterranean where she joined the fleet on 14 May and assumed her flagship duties. She remained in the eastern Mediterranean until mid-June and returned when fighting broke out on 30 October. In April and August 1957, the 6th Fleet, by its presence in the eastern Mediterranean, twice showed United States support for the government of Jordan threatened by subversion. The cruiser departed the Mediterranean on 26 June 1958 and arrived at Norfolk on 4 July.
Our Investigation
Our findings tell us it is haunted and worth the trip.
Hymedesmia trichoma is a species of sponge described by William Lundbeck in 1910. Hymedesmia trichoma belongs to the genus Hymedesmia and the family Hymedesmiidae. No subspecies are listed in the Catalogue of Life.
Sources
Horn and siliceous sponges
trichoma