Q: Ignoring NaN in a dataframe I want to find the unique elements in a column of a dataframe which has missing values. I tried this: df[Column_name].unique(), but it returns nan as one of the elements. What can I do to just ignore the missing values?
A: Try calling .dropna() right before your call to .unique(). A working example:

import pandas as pd
import numpy as np

df = pd.DataFrame({'col1': np.random.randint(0, 10, 12)})
df.loc[2] = np.nan
df.loc[5] = np.nan

df['col1'].unique()
### example output (values vary with the random draw):
### array([ 4., 0., nan, 8., 1., 3., 2., 6.])

df['col1'].dropna().unique()
### example output: array([ 4., 0., 8., 1., 3., 2., 6.])
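As a side note, if only the count of distinct non-missing values is needed, Series.nunique() skips NaN by default (its dropna parameter defaults to True). A minimal sketch with a fixed toy column (the values here are arbitrary example data):

```python
import numpy as np
import pandas as pd

# toy column with two missing entries (arbitrary example values)
df = pd.DataFrame({'col1': [4.0, 0.0, np.nan, 8.0, np.nan, 1.0]})

# dropna() removes NaN before unique() sees it
vals = df['col1'].dropna().unique()
assert not np.isnan(vals).any()

# nunique() ignores NaN by default (dropna=True)
assert df['col1'].nunique() == 4
```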
\section{Introduction}
\label{sec:intro}
In his landmark paper \cite{shannon48}, Shannon explicitly excluded semantic aspects from his framework of information theory, saying ``these semantic aspects of communication are irrelevant to the engineering problem.'' This exclusion has been followed throughout the development of the core of information theory; see, e.g., \cite{cover06}. Nevertheless, efforts to characterize semantic aspects of messages and incorporate them into information processing and transmission have been pursued since the inception of information theory; see, e.g., \cite{carnap53} \cite{floridi04} \cite{bao11} \cite{juba11} for a few representative works that cover an extensive range of issues in this context.
In this work, we propose an information theoretic model, motivated by the consideration of semantic information, which has gained much interest recently in the development of 5G and beyond wireless systems \cite{popovski19} \cite{kountouris20}. Instead of seeking a task-independent universal characterization of semantic information, which still appears elusive, we argue that, for many applications the semantic aspects of information correspond to the accomplishment of certain inference goals. So by semantic information, we actually mean that there exists some intrinsic state (i.e., ``feature'') embedded in the sensed extrinsic observation (i.e., ``appearance''), and the interest of the destination is not merely the extrinsic observation, but also the intrinsic state. Hence, if we consider an information theoretic characterization of such a ``semantic'' information source, the task of coding is to efficiently encode the extrinsic observation so that the decoder can infer both the intrinsic state and the extrinsic observation, subject to fidelity criteria on both, simultaneously.
As related topics, the information bottleneck \cite{tishby99} \cite{goldfeld20} and the privacy funnel \cite{makhdoumi14} \cite{shkel21} are, in a certain sense, dual concepts, and both place constraints in terms of information measures. Task-based compression has been tackled mainly from the perspective of quantizer design \cite{shlezinger19}. It has been demonstrated that steering the design goal according to the task leads to performance benefits compared with conventional task-agnostic approach, a conclusion in line with what we advocate in our work. The perception-distortion tradeoff \cite{blau19} imposes an additional constraint on the probability distribution of the reproduction. None of these related works proposes to decompose the information source into intrinsic and extrinsic parts as in our work, let alone investigate the joint behavior of them. In \cite{kipnis21}, a similar intrinsic state-extrinsic observation model is studied, but the encoder is designed based on the marginal distribution of the extrinsic observation only.
We describe the proposed problem formulation in Section \ref{sec:formulation}. We then recognize the proposed problem as a lossy source coding problem with two distortion constraints, one of which is with respect to the unobservable intrinsic state and its reproduction. This problem thus can be cast as an instance of the so-called ``indirect rate-distortion problem'' \cite{dorushin62} \cite{wolf70} \cite{berger71} \cite{witsenhausen80}. We present the corresponding rate-distortion function in Section \ref{sec:tradeoff}. We then investigate several case studies when the intrinsic state and the extrinsic observation are jointly Gaussian, and when the intrinsic state is Bernoulli with conditionally Gaussian extrinsic observations, in Section \ref{sec:case:gaussian} and Section \ref{sec:case:classification}, respectively. Finally, we conclude the paper in Section \ref{sec:conclusion}.
\section{Problem Formulation}
\label{sec:formulation}
The mathematical problem formulation is as follows; see Figure \ref{fig:semantic-model} for an illustration. We describe a memoryless information source as a tuple of random variables, $(\rvs, \rvx)$ with joint probability distribution $p(s, x)$ in product alphabet $\mathcal{S} \times \mathcal{X}$. We interpret $\rvs$ as the intrinsic state, which captures the ``semantic'' aspect of the source and is not observable, and $\rvx$ as the extrinsic observation of the source, which captures the ``appearance'' of the source to an observer.
For a length-$n$ independent and identically distributed (i.i.d.) sequence from the source, $(\rvs^n, \rvx^n)$, a source encoder $f_n$ of rate $R$ is a mapping that maps $\rvx^n$ into an index $\rvw$ in $\{1, 2, \ldots, 2^{nR}\}$, and a corresponding decoder $g_n$ is a mapping that maps $\rvw$ into a pair $(\hat{\rvs}^n, \hat{\rvx}^n)$ taking values in the product alphabet $\hat{\mathcal{S}} \times \hat{\mathcal{X}}$. We consider two distortion metrics, $d_\mathrm{s}(s, \hat{s}): \mathcal{S} \times \hat{\mathcal{S}} \mapsto \mathbb{R}^+ \cup \{0\}$ that models the semantic distortion, and $d_\mathrm{a}(x, \hat{x}): \mathcal{X} \times \hat{\mathcal{X}} \mapsto \mathbb{R}^+ \cup \{0\}$ that models the appearance distortion, respectively. So the block-wise distortions are
\begin{eqnarray}
d_\mathrm{s}(s^n, \hat{s}^n) = \frac{1}{n} \sum_{i = 1}^n d_\mathrm{s}(s_i, \hat{s}_i),\\
d_\mathrm{a}(x^n, \hat{x}^n) = \frac{1}{n} \sum_{i = 1}^n d_\mathrm{a}(x_i, \hat{x}_i),
\end{eqnarray}
respectively.
\begin{figure}[t]
\centering
\includegraphics[width=2.5in]{semantic-model.eps}
\caption{Illustration of system model.}
\label{fig:semantic-model}
\end{figure}
For example, the intrinsic state may be categorical, as the label for certain classification task, and the extrinsic observation may be an image or video clip whose content reflects and depends upon the intrinsic state. In applications, a remote viewer may be interested in the extrinsic observation (i.e., image or video clip) itself, whereas another remote pattern classifier may instead be interested in inferring the intrinsic state (i.e., the label) from the encoded extrinsic observation.\footnote{Note that in general the intrinsic state is not a deterministic function of, and hence cannot be perfectly recovered from, the extrinsic observation; see, e.g., \cite[Chap. 2 and 3]{shalev-shwartz}.}
We say that a rate-distortion triple $(R, D_\mathrm{s}, D_\mathrm{a})$ is achievable if there exists a sequence of encoders $\{f_n\}$ and decoders $\{g_n\}$ at rate $R$ such that as $n$ grows without bound, the expected distortions satisfy
\begin{eqnarray}
\label{eqn:semantic-distortion-constraint}
\lim_{n \rightarrow \infty} \mathbf{E} d_\mathrm{s}(\rvs^n, \hat{\rvs}^n) &\leq& D_\mathrm{s},\\
\lim_{n \rightarrow \infty} \mathbf{E} d_\mathrm{a}(\rvx^n, \hat{\rvx}^n) &\leq& D_\mathrm{a}.
\end{eqnarray}
The boundary of the set of all achievable rate-distortion triples is defined as the state-observation rate-distortion function (SORDF).
\section{Characterization of SORDF}
\label{sec:tradeoff}
The following theorem characterizes the SORDF.
\begin{thm}
\label{thm:rate-distortion function}
The SORDF of the problem setup considered in Section \ref{sec:formulation} is
\begin{eqnarray}
\label{eqn:rate-distortion function}
R(D_\mathrm{s}, D_\mathrm{a}) &=& \min I(\rvx; \hat{\rvs}, \hat{\rvx}),\\
\mbox{s.t.}\quad \mathbf{E} \hat{d}_\mathrm{s}(\rvx, \hat{\rvs}) &\leq& D_\mathrm{s},\\
\mathbf{E} d_\mathrm{a}(\rvx, \hat{\rvx}) &\leq& D_\mathrm{a},\\
\label{eqn:d-hat}
\mbox{where}\;\hat{d}_\mathrm{s}(x, \hat{s}) &=& \frac{1}{p(x)} \sum_{s \in \mathcal{S}} p(s, x) d_\mathrm{s}(s, \hat{s}).
\end{eqnarray}
\end{thm}
\textit{Proof:} The SORDF (\ref{eqn:rate-distortion function}) is basically a combination of the indirect rate-distortion function \cite{dorushin62} \cite{wolf70} \cite[Chap. 3, Sec. 5]{berger71} \cite{witsenhausen80} and the rate-distortion function with multiple distortion constraints \cite[Sec. VII]{elgamal82} \cite[Prob. 7.14]{csiszar} \cite[Prob. 10.19]{cover06}. Hence we only give a sketch of its proof.
A general and unified approach to the indirect rate-distortion function, as adopted in \cite{witsenhausen80}, is first showing that the one-shot expected distortion $\mathbf{E} d_\mathrm{s}(\rvs, \hat{\rvs})$ is equivalent to $\mathbf{E} \hat{d}_\mathrm{s}(\rvx, \hat{\rvs})$, and then invoking a tensorization argument to extend the one-shot equivalence to block codes. Here we combine these two steps, to show that for an arbitrary encoder-decoder pair, the original semantic distortion constraint (\ref{eqn:semantic-distortion-constraint}) is equivalent to
\begin{eqnarray}
\lim_{n \rightarrow \infty} \mathbf{E} \hat{d}_\mathrm{s}(\rvx^n, \hat{\rvs}^n) &\leq& D_\mathrm{s},
\end{eqnarray}
where $\hat{d}_\mathrm{s}(x^n, \hat{s}^n) = \frac{1}{n} \sum_{i = 1}^n \hat{d}_\mathrm{s} (x_i, \hat{s}_i)$. To prove this, consider an arbitrary encoder-decoder pair, for which it holds that
\begin{eqnarray}
\mathbf{E} d_\mathrm{s}(\rvs^n, \hat{\rvs}^n) &=& \sum_{s^n, \hat{s}^n} p(s^n, \hat{s}^n) d_\mathrm{s}(s^n, \hat{s}^n)\nonumber\\
&=& \sum_{s^n, x^n, \hat{s}^n} p(s^n, x^n, \hat{s}^n) d_\mathrm{s}(s^n, \hat{s}^n)\nonumber\\
&\stackrel{(a)}{=}& \sum_{s^n, x^n, \hat{s}^n} p(s^n | x^n) p(x^n, \hat{s}^n) d_\mathrm{s}(s^n, \hat{s}^n)\nonumber\\
&=& \sum_{x^n, \hat{s}^n} p(x^n, \hat{s}^n) \sum_{s^n} p(s^n | x^n) d_\mathrm{s}(s^n, \hat{s}^n)\nonumber\\
&\stackrel{(b)}{=}& \sum_{x^n, \hat{s}^n} p(x^n, \hat{s}^n) \frac{1}{n} \sum_{i = 1}^n \hat{d}_\mathrm{s}(x_i, \hat{s}_i),
\end{eqnarray}
where (a) is due to the Markov chain relationship $\rvs^n \leftrightarrow \rvx^n \leftrightarrow \hat{\rvs}^n$, and (b) is due to the i.i.d. property of $(\rvs^n, \rvx^n)$ and the definition of $\hat{d}_\mathrm{s}$ in (\ref{eqn:d-hat}).
Subsequently, the problem is reduced into a standard lossy source coding problem with multiple distortion constraints \cite[Sec. VII]{elgamal82} \cite[Prob. 7.14]{csiszar} \cite[Prob. 10.19]{cover06}, and the SORDF (\ref{eqn:rate-distortion function}) follows from standard achievability and converse proof techniques. $\Box$
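The one-shot reduction used in steps (a)-(b), replacing the unobservable $d_\mathrm{s}(\rvs, \hat{\rvs})$ by the surrogate $\hat{d}_\mathrm{s}(\rvx, \hat{\rvs})$, can be checked numerically on a small discrete example. The alphabet sizes, joint distribution, and test channel below are arbitrary choices for illustration, not taken from the paper:

```python
import random

random.seed(1)
S, X, SHAT = 2, 3, 2                      # |S|, |X|, |S-hat| (arbitrary)

# random joint pmf p(s, x)
p_sx = [[random.random() for _ in range(X)] for _ in range(S)]
tot = sum(map(sum, p_sx))
p_sx = [[v / tot for v in row] for row in p_sx]

d_s = [[0.0, 1.0], [1.0, 0.0]]            # Hamming distortion on (s, s_hat)
p_x = [sum(p_sx[s][x] for s in range(S)) for x in range(X)]

# surrogate distortion d_hat_s(x, s_hat) = E[d_s(s, s_hat) | x]
d_hat = [[sum(p_sx[s][x] * d_s[s][sh] for s in range(S)) / p_x[x]
          for sh in range(SHAT)] for x in range(X)]

# arbitrary test channel p(s_hat | x); s <-> x <-> s_hat is Markov by construction
q = [[0.3, 0.7], [0.5, 0.5], [0.9, 0.1]]

lhs = sum(p_sx[s][x] * q[x][sh] * d_s[s][sh]
          for s in range(S) for x in range(X) for sh in range(SHAT))
rhs = sum(p_x[x] * q[x][sh] * d_hat[x][sh]
          for x in range(X) for sh in range(SHAT))
assert abs(lhs - rhs) < 1e-12             # E d_s(s, s_hat) = E d_hat_s(x, s_hat)
```

The two expectations agree exactly for any channel depending on $x$ alone, which is precisely the Markov chain property invoked in step (a).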
Similar to rate-distortion functions with a single distortion constraint, we have the following properties of $R(D_\mathrm{s}, D_\mathrm{a})$.
\begin{prop}
\label{prop:properties}
\begin{enumerate}
\item $R(D_\mathrm{s}, D_\mathrm{a})$ is monotonically nonincreasing with $D_\mathrm{s}$ and $D_\mathrm{a}$.
\item $R(D_\mathrm{s}, D_\mathrm{a})$ is jointly convex with respect to $(D_\mathrm{s}, D_\mathrm{a})$.
\item For any rate $R \geq 0$, the contour set of $(D_\mathrm{s}, D_\mathrm{a})$ such that $R(D_\mathrm{s}, D_\mathrm{a}) \leq R$ is convex.
\end{enumerate}
\end{prop}
The proof of the first two properties is exactly the same as that for standard rate-distortion functions \cite{berger71, cover06}, and the third property is an immediate corollary of the second property.
\section{Case Study: Jointly Gaussian Model}
\label{sec:case:gaussian}
Consider the case where $\rvs$ and $\rvx$ are jointly Gaussian vectors with zero mean and covariance matrix
\begin{eqnarray}
\begin{bmatrix}
\mathbf{\Sigma}_\rvs & \mathbf{\Sigma}_{\rvs \rvx} \\
\mathbf{\Sigma}_{\rvs \rvx}^T & \mathbf{\Sigma}_\rvx
\end{bmatrix},
\end{eqnarray}
and the distortion metrics are squared error, as $d_\mathrm{s}(s, \hat{s}) = \|s - \hat{s}\|^2$ and $d_\mathrm{a}(x, \hat{x}) = \|x - \hat{x}\|^2$ respectively.
Note that conditioned upon $x$, $\rvs$ is conditionally Gaussian as $\rvs|x \sim \mathcal{N}\left(\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} x, \mathbf{\Sigma}_\rvs - \mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \mathbf{\Sigma}_{\rvs \rvx}^T \right)$. So the equivalent semantic distortion metric is
\begin{eqnarray}
&&\hat{d}_\mathrm{s}(x, \hat{s}) = \mathbf{E}_{\rvs|x} \|\rvs - \hat{s}\|^2\nonumber\\
&=& \mathrm{tr} \left[\mathbf{\Sigma}_\rvs - \mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \mathbf{\Sigma}_{\rvs \rvx}^T\right] + \|\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} x - \hat{s}\|^2,
\end{eqnarray}
where the first trace term is exactly the minimum mean-squared error (MMSE) of estimating $\rvs$ upon observing $\rvx$, denoted as $\mathsf{mmse}$ in the sequel. The SORDF hence becomes
\begin{eqnarray}
\label{eqn:SORDF-Gaussian-general}
R(D_\mathrm{s}, D_\mathrm{a}) &=& \min I(\rvx; \hat{\rvs}, \hat{\rvx}),\\
\label{eqn:SORDF-Gaussian-general-Ds}
\mbox{s.t.}\quad \mathbf{E} \|\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx - \hat{\rvs}\|^2 &\leq& D_\mathrm{s} - \mathsf{mmse},\\
\label{eqn:SORDF-Gaussian-general-Da}
\mathbf{E} \|\rvx - \hat{\rvx}\|^2 &\leq& D_\mathrm{a}.
\end{eqnarray}
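The $\mathsf{mmse}$ term entering the equivalent constraint can be sanity-checked by Monte Carlo simulation. The sketch below (scalar case, with arbitrary parameter values and a fixed seed) draws $(\rvs, \rvx)$ pairs and compares the empirical squared error of the estimator $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx$ with the closed form $\mathbf{\Sigma}_\rvs - \mathbf{\Sigma}_{\rvs \rvx}^2 / \mathbf{\Sigma}_\rvx$:

```python
import math
import random

random.seed(0)
sig_s, sig_x, sig_sx = 1.0, 2.0, 0.8        # arbitrary valid covariances
c = sig_sx / sig_x                          # MMSE estimator coefficient
mmse = sig_s - sig_sx**2 / sig_x            # closed-form MMSE

n = 200_000
xs = [random.gauss(0.0, math.sqrt(sig_x)) for _ in range(n)]
# s | x ~ N(c x, mmse), consistent with the stated conditional distribution
ss = [c * x + random.gauss(0.0, math.sqrt(mmse)) for x in xs]

emp = sum((s - c * x) ** 2 for s, x in zip(ss, xs)) / n
assert abs(emp - mmse) < 0.02               # empirical error matches the trace term
```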
\subsection{$R(\infty, D_\mathrm{a})$ and $R(D_\mathrm{s}, \infty)$}
\label{subsec:Gaussian-degenerated}
Without the semantic distortion constraint, i.e., $D_\mathrm{s} = \infty$, $R(\infty, D_\mathrm{a})$ is the well-known rate-distortion function for a vector Gaussian source; alternatively, without the appearance distortion constraint, i.e., $D_\mathrm{a} = \infty$, $R(D_\mathrm{s}, \infty)$ is given by the following result, which generalizes the case studied in \cite{wolf70} to jointly Gaussian vectors.
\begin{prop}
\label{prop:gaussian-semantic-only}
For the jointly Gaussian source model, $R(D_\mathrm{s}, \infty) = R_1(D_\mathrm{s} - \mathsf{mmse})$, where $R_1(D)$ is the rate-distortion function for $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx$ under the squared error distortion metric.
\end{prop}
\textit{Proof:} We consider an estimate-and-compress scheme which first transforms the source observation $\rvx$ into $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx$, and then encodes the transformed source observation under mean squared error distortion constraint $D_\mathrm{s} - \mathsf{mmse}$. The resulting achievable rate $R_1(D_\mathrm{s} - \mathsf{mmse})$ hence constitutes an upper bound of $R(D_\mathrm{s}, \infty)$.
To show that the scheme described above is indeed optimal, consider any $\hat{\rvs}$ jointly distributed with $\rvx$, satisfying the distortion constraint. Note that $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx \leftrightarrow \rvx \leftrightarrow \hat{\rvs}$ constitute a Markov chain. So according to the data processing inequality, $I(\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx; \hat{\rvs}) \leq I(\rvx; \hat{\rvs})$. Since this holds for any $\hat{\rvs}$, it holds when $I(\rvx; \hat{\rvs}) = R(D_\mathrm{s}, \infty)$, and thus
\begin{eqnarray*}
R_1(D_\mathrm{s} - \mathsf{mmse}) \leq I(\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \rvx; \hat{\rvs}) \leq I(\rvx; \hat{\rvs}) = R(D_\mathrm{s}, \infty),
\end{eqnarray*}
thereby completing the proof. $\Box$
\subsection{Scalar Case}
\label{subsec:Gaussian-scalar}
Then we consider the evaluation of $R(D_\mathrm{s}, D_\mathrm{a})$ for the special case where both $\rvs$ and $\rvx$ are scalar. Hence $\mathbf{\Sigma}_\rvs, \mathbf{\Sigma}_\rvx$ and $\mathbf{\Sigma}_{\rvs \rvx}$ are all scalar-valued, and $\mathsf{mmse} = \mathbf{\Sigma}_\rvs - \mathbf{\Sigma}_{\rvs \rvx}^2/\mathbf{\Sigma}_\rvx$. We have the following result regarding its SORDF.
\begin{prop}
\label{prop:SORDF-scalar-Gaussian}
For the jointly Gaussian source model where both $\rvs$ and $\rvx$ are scalar, its SORDF is given by
\begin{eqnarray}
\label{eqn:SORDF-scalar-Gaussian}
R(D_\mathrm{s}, D_\mathrm{a}) &=& \frac{1}{2} \max\left\{\left(\log \frac{\mathbf{\Sigma}_\rvx}{D_\mathrm{a}}\right)^+,\right.\nonumber\\
&&\left. \left(\log \frac{\mathbf{\Sigma}_{\rvs \rvx}^2}{\mathbf{\Sigma}_\rvx (D_\mathrm{s} - \mathsf{mmse})}\right)^+ \right\},
\end{eqnarray}
for $D_\mathrm{s} > \mathsf{mmse}$, $D_\mathrm{a} > 0$, where $(x)^+$ denotes $\max\{x, 0\}$.
\end{prop}
\textit{Proof:} To show the converse, we note that $R(D_\mathrm{s}, D_\mathrm{a})$ (\ref{eqn:SORDF-Gaussian-general}) is lower bounded by both $\min I(\rvx; \hat{\rvs})$ under (\ref{eqn:SORDF-Gaussian-general-Ds}) and $\min I(\rvx; \hat{\rvx})$ under (\ref{eqn:SORDF-Gaussian-general-Da}). So the first term in the max operand of (\ref{eqn:SORDF-scalar-Gaussian}) is due to the standard Gaussian rate-distortion function, and the second term is due to Proposition \ref{prop:gaussian-semantic-only}.
To show the achievability, we consider two situations. First, if $D_\mathrm{a}/\mathbf{\Sigma}_\rvx^2 \geq (D_\mathrm{s} - \mathsf{mmse})/\mathbf{\Sigma}_{\rvs \rvx}^2$, we let $(\rvx, \hat{\rvs})$ be generated so as to solve the standard Gaussian rate-distortion problem subject to constraint (\ref{eqn:SORDF-Gaussian-general-Ds}) and hence achieve
\begin{eqnarray}
I(\rvx; \hat{\rvs}) = \frac{1}{2} \left(\log \frac{\mathbf{\Sigma}_{\rvs \rvx}^2}{\mathbf{\Sigma}_\rvx (D_\mathrm{s} - \mathsf{mmse})} \right)^+.
\end{eqnarray}
We further let $\hat{\rvx} = (\mathbf{\Sigma}_\rvx / \mathbf{\Sigma}_{\rvs \rvx}) \hat{\rvs}$, which then satisfies the constraint (\ref{eqn:SORDF-Gaussian-general-Da}), and leads to $I(\rvx; \hat{\rvs}, \hat{\rvx}) = I(\rvx; \hat{\rvs})$ because $\hat{\rvx} \leftrightarrow \hat{\rvs} \leftrightarrow \rvx$ form a Markov chain. Alternatively, if $D_\mathrm{a}/\mathbf{\Sigma}_\rvx^2 < (D_\mathrm{s} - \mathsf{mmse})/\mathbf{\Sigma}_{\rvs \rvx}^2$, we let $(\rvx, \hat{\rvx})$ be generated so as to solve the standard Gaussian rate-distortion problem subject to constraint (\ref{eqn:SORDF-Gaussian-general-Da}), and let $\hat{\rvs} = (\mathbf{\Sigma}_{\rvs \rvx}/\mathbf{\Sigma}_\rvx) \hat{\rvx}$. These then satisfy constraints (\ref{eqn:SORDF-Gaussian-general-Ds}) and (\ref{eqn:SORDF-Gaussian-general-Da}), and achieve
\begin{eqnarray}
I(\rvx; \hat{\rvs}, \hat{\rvx}) = I(\rvx; \hat{\rvx}) = \frac{1}{2} \left(\log \frac{\mathbf{\Sigma}_\rvx}{D_\mathrm{a}}\right)^+.
\end{eqnarray}
Putting these two situations together establishes the achievability. $\Box$
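Proposition \ref{prop:SORDF-scalar-Gaussian} is easy to evaluate numerically. The sketch below (rates in nats; parameter values arbitrary) implements the closed form and checks two basic properties, monotonicity in $D_\mathrm{s}$ and zero rate under loose constraints:

```python
import math

def scalar_sordf(sig_s, sig_x, sig_sx, Ds, Da):
    """Closed-form R(Ds, Da) for scalar jointly Gaussian (s, x), in nats."""
    mmse = sig_s - sig_sx**2 / sig_x        # requires Ds > mmse
    pos = lambda t: max(t, 0.0)
    r_app = pos(math.log(sig_x / Da))                        # appearance branch
    r_sem = pos(math.log(sig_sx**2 / (sig_x * (Ds - mmse)))) # semantic branch
    return 0.5 * max(r_app, r_sem)

r_tight = scalar_sordf(1.0, 1.0, 0.8, Ds=0.5, Da=0.3)
r_loose = scalar_sordf(1.0, 1.0, 0.8, Ds=0.6, Da=0.3)
assert r_loose <= r_tight                   # nonincreasing in D_s
assert scalar_sordf(1.0, 1.0, 0.8, 2.0, 2.0) == 0.0   # both constraints slack
```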
The interpretation of Proposition \ref{prop:SORDF-scalar-Gaussian} is rather straightforward. Since $\rvs$ and $\rvx$ are both scalar, their ``directions'' are degenerate and the goals of reproducing them can be viewed as perfectly ``aligned''. In the achievability proof, the first situation arises when $D_\mathrm{s}$ is small, i.e., when reproducing $\rvs$ is more demanding than reproducing $\rvx$, and the second situation arises when the opposite is true. In both situations, however, note that $\hat{\rvx}$ and $\hat{\rvs}$ are proportional to each other with the same proportionality coefficient.
\subsection{Vector Case}
\label{subsec:Gaussian-vector}
In this subsection we evaluate the SORDF for the special vector case where $\rvs$ is scalar and $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1}$ coincides with one of the eigenvectors of $\mathbf{\Sigma}_\rvx$, and leave the general vector case to Appendix. Consider the following model:
\begin{eqnarray}
\label{eqn:aligned-Gaussian-case}
\rvx = \mathbf{1}_m \rvs + \rvz,
\end{eqnarray}
where $\rvs \sim \mathcal{N}(0, \sigma_\rvs^2)$, $\mathbf{1}_m$ is a length-$m$ all-one vector, and $\rvz \sim \mathcal{N}(\mathbf{0}, \sigma_\rvz^2 \mathbf{I})$. Denote the MMSE by $\mathsf{mmse} = \frac{\sigma_\rvs^2 \sigma_\rvz^2}{m \sigma_\rvs^2 + \sigma_\rvz^2}$ and set $\alpha = \frac{m \sigma_\rvs^2 + \sigma_\rvz^2}{\sqrt{m} \sigma_\rvs^2}$. We have the following result.
\begin{prop}
\label{prop:aligned-Gaussian-case}
For the jointly Gaussian source model (\ref{eqn:aligned-Gaussian-case}), its SORDF is given by:
- if $D_{\mathrm{s}} \geq \mathsf{mmse}$ and $m \alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) \leq D_{\mathrm{a}} \leq \alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) + ( m - 1 ) \sigma_\rvz^2$,
\begin{eqnarray}
&&R ( D_{\mathrm{s}} , D_{\mathrm{a}} )
= \frac{1}{2} \log \left(
\frac{m \sigma_\rvs^2 + \sigma_\rvz^2}
{\alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} )}
\right)\nonumber\\
&& \quad + \frac{m - 1}{2} \log \left(
\frac{( m - 1 ) \sigma_\rvz^2}
{D_{\mathrm{a}} - \alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} )}
\right) ;
\end{eqnarray}
- if $0 \le D_{\mathrm{a}} < m \sigma_\rvz^2$ and $\alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) \ge D_{\mathrm{a}} / m$,
\begin{eqnarray}
R ( D_{\mathrm{s}} , D_{\mathrm{a}} ) = \frac{1}{2} \log \left(
\frac{m^{2} \sigma_\rvs^2 + m \sigma_\rvz^2}
{D_{\mathrm{a}}}
\right) + \frac{m - 1}{2}
\log \left( \frac{m \sigma_\rvz^2}{D_{\mathrm{a}}} \right) ;
\end{eqnarray}
- if $0 \le \alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) < m \sigma_\rvs^2 + \sigma_\rvz^2$ and $D_{\mathrm{a}} > \alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) + ( m - 1 ) \sigma_\rvz^2$,
\begin{eqnarray}
R ( D_{\mathrm{s}} , D_{\mathrm{a}} )
= \frac{1}{2} \log \left(
\frac{m \sigma_\rvs^2 + \sigma_\rvz^2}
{\alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} )}
\right) ;
\end{eqnarray}
- if $m \sigma_\rvz^2 \le D_{\mathrm{a}} < m \sigma_\rvs^2 + m \sigma_\rvz^2$ and $\alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) \ge D_{\mathrm{a}} - ( m - 1 ) \sigma_\rvz^2$,
\begin{eqnarray}
R ( D_{\mathrm{s}} , D_{\mathrm{a}} )
= \frac{1}{2} \log \left(
\frac{m \sigma_\rvs^2 + \sigma_\rvz^2}
{D_{\mathrm{a}} - ( m - 1 ) \sigma_\rvz^2}
\right) ;
\end{eqnarray}
- if $D_{\mathrm{a}} \ge m \sigma_\rvs^2 + m \sigma_\rvz^2$ and $\alpha^{2} ( D_{\mathrm{s}} - \mathsf{mmse} ) \ge m \sigma_\rvs^2 + \sigma_\rvz^2$,
\begin{eqnarray}
R ( D_{\mathrm{s}} , D_{\mathrm{a}} )
= 0 .
\end{eqnarray}
\end{prop}
\textit{Proof:} We give an outline of the proof. The key observation is that $\mathbf{\Sigma}_{\rvs \rvx} \mathbf{\Sigma}_\rvx^{-1} \propto \mathbf{b}_1 = \frac{1}{\sqrt{m}} \mathbf{1}_m$ is a unit-norm eigenvector of $\mathbf{\Sigma}_\rvx$, associated with the eigenvalue $m\sigma_\rvs^2 + \sigma_\rvz^2$. The remaining $m - 1$ unit-norm eigenvectors of $\mathbf{\Sigma}_\rvx$ are
\begin{eqnarray}
\mathbf{b}_i = \left[\underbrace{\frac{1}{\sqrt{i(i - 1)}}, \ldots, \frac{1}{\sqrt{i(i - 1)}}}_{i - 1}, -\sqrt{\frac{i - 1}{i}}, \underbrace{0, \ldots, 0}_{m - i}\right]^T,
\end{eqnarray}
for $i = 2, \ldots, m$, all associated with the identical eigenvalue $\sigma_\rvz^2$. So $\mathbf{B} = \left[\mathbf{b}_1, \ldots, \mathbf{b}_m\right]^T$ is an orthogonal matrix that decorrelates $\rvx$. The SORDF problem (\ref{eqn:SORDF-Gaussian-general})-(\ref{eqn:SORDF-Gaussian-general-Da}) can then be equivalently rewritten as
\begin{eqnarray}
R(D_\mathrm{s}, D_\mathrm{a}) &=& \min I(\mathbf{B}\rvx; \alpha \hat{\rvs}, \mathbf{B} \hat{\rvx}),\\
\mbox{s.t.}\quad \mathbf{E} (\mathbf{b}_1^T \rvx - \alpha \hat{\rvs})^2 &\leq& \alpha^2 (D_\mathrm{s} - \mathsf{mmse})\\
\sum_{i = 1}^m \mathbf{E}(\mathbf{b}_i^T \rvx - \mathbf{b}_i^T \hat{\rvx})^2 &\leq& D_\mathrm{a}.
\end{eqnarray}
Note that the $m$ elements of $\mathbf{B} \rvx$ are now decorrelated and hence mutually independent (being jointly Gaussian), and therefore the minimization of $I(\mathbf{B}\rvx; \alpha \hat{\rvs}, \mathbf{B} \hat{\rvx})$ can be decoupled and converted into a distortion allocation problem similar to that for the standard parallel Gaussian reverse waterfilling \cite{cover06}. The resulting optimization problem becomes
\begin{eqnarray}
R(D_\mathrm{s}, D_\mathrm{a}) &=& \min_{(D_1, D_2, \ldots, D_m) \in \mathcal{A}(D_\mathrm{s}, D_\mathrm{a})} \left[R\left(\frac{D_1}{m \sigma_\rvs^2 + \sigma_\rvz^2}\right)\right. \nonumber\\
&&\left.+ \sum_{i = 2}^m R\left(\frac{D_i}{\sigma_\rvz^2}\right)\right],\\
\mathcal{A}(D_\mathrm{s}, D_\mathrm{a}) &=& \left\{(D_1, D_2, \ldots, D_m): D_1 \leq \alpha^2 (D_\mathrm{s} - \mathsf{mmse}), \right.\nonumber\\
&& \left.\sum_{i = 1}^m D_i \leq D_\mathrm{a}, D_i \geq 0, \forall i \right\},
\end{eqnarray}
where $R(x) = \frac{1}{2} \left(\log \frac{1}{x}\right)^+$ for $x > 0$. Solving this optimization problem, we obtain the SORDF as presented in Proposition \ref{prop:aligned-Gaussian-case}. $\Box$
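The final distortion-allocation step can also be solved numerically by scanning the water level, which gives a cross-check of the closed-form expressions. The sketch below (rates in nats; $m = 2$ and $\sigma_\rvs^2 = \sigma_\rvz^2 = 1$ are arbitrary choices) compares the scan against the first branch of Proposition \ref{prop:aligned-Gaussian-case}:

```python
import math

def rate_term(d_over_eig):
    # R(x) = (1/2) (log 1/x)^+, in nats
    return 0.5 * max(math.log(1.0 / d_over_eig), 0.0)

def sordf_by_allocation(eigs, d1_cap, Da, steps=20_000):
    """Minimize sum_i R(D_i / eig_i) s.t. D_1 <= d1_cap, sum_i D_i <= Da.
    For a fixed water level theta the optimal allocation has the form
    D_i = min(eig_i, theta), with D_1 additionally capped at d1_cap."""
    best = float('inf')
    top = max(eigs)
    for k in range(1, steps + 1):
        theta = k * top / steps
        alloc = [min(e, theta) for e in eigs]
        alloc[0] = min(alloc[0], d1_cap)
        if sum(alloc) <= Da:
            best = min(best, sum(rate_term(d / e) for d, e in zip(alloc, eigs)))
    return best

m, ss, sz = 2, 1.0, 1.0
mmse = ss * sz / (m * ss + sz)
alpha2 = ((m * ss + sz) / (math.sqrt(m) * ss)) ** 2
Ds, Da = mmse + 0.1, 1.2                     # lies in the tradeoff region
closed = 0.5 * math.log((m * ss + sz) / (alpha2 * (Ds - mmse))) \
       + 0.5 * (m - 1) * math.log((m - 1) * sz / (Da - alpha2 * (Ds - mmse)))
numeric = sordf_by_allocation([m * ss + sz, sz], alpha2 * (Ds - mmse), Da)
assert abs(numeric - closed) < 1e-2
```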
According to Proposition \ref{prop:aligned-Gaussian-case}, the $(D_\mathrm{s}, D_\mathrm{a})$-plane is divided into five regions. This is illustrated in Figure \ref{fig:regions}. The SORDF $R(D_\mathrm{s}, D_\mathrm{a})$ and its contour plot are illustrated in Figure \ref{fig:SORDF-vector-Gaussian}. The region where $D_\mathrm{s}$ and $D_\mathrm{a}$ exhibit a tradeoff is clearly indicated by the two slanted-line boundaries in the contour plot: in that region, if we encode $\rvx$ without regard to $\rvs$, then extra distortion on $\rvs$ will be incurred, and vice versa.
\begin{figure}[t]
\centering
\includegraphics[width=2.7in]{regions.eps}
\caption{Illustration of the five regions of $(D_\mathrm{s}, D_\mathrm{a})$-plane.}
\label{fig:regions}
\end{figure}
\begin{figure}[th]
\centering
\begin{minipage}{4.2cm}
\includegraphics[width=4.6cm]{threeD.eps}
\end{minipage}
\begin{minipage}{4.2cm}
\includegraphics[width=4.6cm]{contour.eps}
\end{minipage}
\caption{The SORDF $R(D_\mathrm{s}, D_\mathrm{a})$ (left) and its contour plot (right).}
\label{fig:SORDF-vector-Gaussian}
\end{figure}
\section{Case Study: Classification}
\label{sec:case:classification}
Consider the case where $\rvs$ is a binary state, i.e., a Bernoulli random variable uniformly distributed on $\{0, 1\}$. The extrinsic observation $\rvx$ is conditionally Gaussian, as
\begin{eqnarray}
\label{eqn:source-classification}
\rvx \sim \mathcal{N}(A, \sigma^2), \quad \mbox{if}\; \rvs = 0;\nonumber\\
\rvx \sim \mathcal{N}(-A, \sigma^2), \quad \mbox{if}\; \rvs = 1.
\end{eqnarray}
So the marginal distribution of $\rvx$ is a Gaussian mixture. We adopt a Hamming distortion between $\rvs$ and $\hat{\rvs}$, i.e., $d_\mathrm{s}(s, \hat{s}) = 0$ if $s = \hat{s}$ and $1$ otherwise; and a squared error distortion between $\rvx$ and $\hat{\rvx}$, i.e., $d_\mathrm{a}(x, \hat{x}) = (x - \hat{x})^2$.
For this source model, we can obtain its $R(D_\mathrm{s}, \infty)$ in the following result.
\begin{prop}
\label{prop:SORDF-classification}
For the source model (\ref{eqn:source-classification}), we have
\begin{eqnarray}
R(D_\mathrm{s}, \infty) = 1 - \frac{1}{2}\int_{-\infty}^\infty \left[N^+(x) + N^-(x)\right] h_2(g(x)) \mathrm{d}x,
\end{eqnarray}
for $Q(A/\sigma) \leq D_\mathrm{s} \leq 1/2$, and $R(D_\mathrm{s}, \infty) = 0$ for $D_\mathrm{s} > 1/2$, where
\begin{eqnarray}\label{eqn:optimal-g}
g(x) = \left[1 + \exp\left(\lambda \frac{1 - e^{-2Ax/\sigma^2}}{1 + e^{-2Ax/\sigma^2}}\right)\right]^{-1},
\end{eqnarray}
wherein $\lambda < 0$ is chosen so as to satisfy
\begin{eqnarray}\label{eqn:optimal-lambda}
\int_{-\infty}^\infty \left[N^+(x) - N^-(x)\right] g(x) \mathrm{d}x = 1 - 2 D_\mathrm{s}.
\end{eqnarray}
Here we denote by $N^+(x)$ and $N^-(x)$ the probability density functions of $\mathcal{N}(A, \sigma^2)$ and $\mathcal{N}(-A, \sigma^2)$, respectively, and $h_2(t)$ is the binary entropy function, $h_2(t) = - t\log_2 t - (1 - t) \log_2 (1 - t)$, for $0 \leq t \leq 1$.
\end{prop}
\textit{Proof:} The expression of $R(D_\mathrm{s}, \infty)$ is obtained by solving $\min I(\rvx; \hat{\rvs})$, subject to the constraint of
\begin{eqnarray}
\mathbf{E} \hat{d}_\mathrm{s} (\rvx, \hat{\rvs}) \leq D_\mathrm{s},
\end{eqnarray}
by optimizing the conditional probability $g(x) = \mathrm{Pr}(\hat{s} = 0 | x)$, where the expectation $\mathbf{E} \hat{d}_\mathrm{s} (\rvx, \hat{\rvs})$ can be further evaluated as
\begin{eqnarray}
\frac{1}{2} \int_{-\infty}^\infty \left[N^-(x) g(x) + N^+(x) (1 - g(x))\right] \mathrm{d}x.
\end{eqnarray}
Note that due to the symmetry in the model, the optimal $g(x)$ should satisfy $g(x) + g(-x) = 1$, $\forall x$, and consequently the resulting $\hat{\rvs}$ is uniform Bernoulli. This property is satisfied by (\ref{eqn:optimal-g}). $\Box$
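Two structural facts used in the proof can be verified numerically: the symmetry $g(x) + g(-x) = 1$ of (\ref{eqn:optimal-g}) (note that the bracketed ratio is just $\tanh(Ax/\sigma^2)$), and the identity $\mathbf{E} \hat{d}_\mathrm{s} = \frac{1}{2} - \frac{1}{2} \int (N^+ - N^-) g \, \mathrm{d}x$ linking the distortion to the constraint (\ref{eqn:optimal-lambda}). The sketch below uses an arbitrary trial multiplier $\lambda = -2$ (the paper instead tunes $\lambda$ to meet a distortion target) and simple trapezoidal quadrature:

```python
import math

A, SIGMA, LAM = 1.0, 1.0, -2.0       # LAM < 0 is an arbitrary trial value

def g(x):
    # (1 - e^{-2Ax/s^2}) / (1 + e^{-2Ax/s^2}) = tanh(A x / s^2)
    return 1.0 / (1.0 + math.exp(LAM * math.tanh(A * x / SIGMA**2)))

def npdf(x, mu):
    return math.exp(-(x - mu)**2 / (2 * SIGMA**2)) / math.sqrt(2 * math.pi * SIGMA**2)

assert abs(g(1.3) + g(-1.3) - 1.0) < 1e-12   # symmetry: s-hat is uniform

# composite trapezoidal rule on [-8, 8] (tails are negligible for A = SIGMA = 1)
n, a, b = 4000, -8.0, 8.0
xs = [a + (b - a) * i / n for i in range(n + 1)]
h = (b - a) / n

def trapz(f):
    vals = [f(x) for x in xs]
    return h * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# E d_hat_s computed directly ...
lhs = trapz(lambda x: 0.5 * (npdf(x, -A) * g(x) + npdf(x, A) * (1.0 - g(x))))
# ... and via the identity behind the constraint on lambda
rhs = 0.5 - 0.5 * trapz(lambda x: (npdf(x, A) - npdf(x, -A)) * g(x))
assert abs(lhs - rhs) < 1e-4
```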
The conditional probability $g(x)$ as given by (\ref{eqn:optimal-g}) can be interpreted as a soft weighting of the posterior belief regarding $\rvs$ upon observing $\rvx$; see Figure \ref{fig:SORDF-classification-only}. Statistically, observing $x \gg 0$ strongly suggests $\rvs = 0$, and thus $g(x)$ is large, while observing $x \ll 0$ leads to the opposite; in the least informative case of $x \approx 0$, we have $g(x) \approx 1/2$. A noteworthy consequence revealed by Proposition \ref{prop:SORDF-classification} is that the naive scheme of performing locally optimal (Bayesian) classification and encoding the binary classification result is suboptimal (indicated as ``Local classification + Compression'' in Figure \ref{fig:SORDF-classification-only}), except in the extreme case of $D_\mathrm{s} = Q(A/\sigma)$. This is different from the jointly Gaussian case in Section \ref{sec:case:gaussian}, where Proposition \ref{prop:gaussian-semantic-only} (see also \cite{wolf70}) indicates that the estimate-and-compress scheme is optimal.
\begin{figure}[th]
\centering
\begin{minipage}{4.2cm}
\includegraphics[width=4.6cm]{gaussClas-gFunctions.eps}
\end{minipage}
\begin{minipage}{4.2cm}
\includegraphics[width=4.6cm]{gaussClas-srdf.eps}
\end{minipage}
\caption{$g(x)$ (left) and the corresponding $R(D_\mathrm{s}, \infty)$ (right), $A = \sigma^2 = 1$, $Q(A/\sigma) = 0.1587$.}
\label{fig:SORDF-classification-only}
\end{figure}
Based upon Proposition \ref{prop:SORDF-classification}, we have the following achievability result.
\begin{prop}
\label{prop:classification-upper-bound}
For the source model (\ref{eqn:source-classification}), we have
\begin{eqnarray}\label{eqn:classification-sordf}
R(D_\mathrm{s}, D_\mathrm{a})\!\!\!\! &\leq& \!\!\!\!\!\!\!\!\!\!\min_{D \in \left[Q\left(\frac{A}{\sigma}\right), D_\mathrm{s}\right]} \!\!\! \left\{R(D, \infty) + \frac{1}{2} \left(\log \frac{\eta}{D_\mathrm{a}}\right)^+\right\},\\
\eta &=& \int_{-\infty}^\infty (x - \gamma)^2 \left[N^+(x) + N^-(x)\right] g_D(x) \mathrm{d}x,\nonumber\\
\gamma &=& \int_{-\infty}^\infty x \left[N^+(x) + N^-(x)\right] g_D(x) \mathrm{d}x\nonumber,
\end{eqnarray}
where $g_D(x)$ is given by (\ref{eqn:optimal-g}) satisfying (\ref{eqn:optimal-lambda}) whose right hand side is now replaced by $1 - 2D$.
\end{prop}
\textit{Proof:} Here we give an outline of a coding scheme that leads to the proof of Proposition \ref{prop:classification-upper-bound}. We first apply Proposition \ref{prop:SORDF-classification} to encode $\rvx$ into $\hat{\rvs}$ at rate $R(D, \infty)$ so as to satisfy the semantic distortion constraint $D_\mathrm{s}$, noting that $D \in [Q(A/\sigma), D_\mathrm{s}]$. Then, conditioned upon $\hat{\rvs}$, we encode $\rvx - \mathbf{E} [\rvx | \hat{\rvs}]$ into $\tilde{\rvx}$ using an i.i.d. Gaussian codebook ensemble with mean squared error distortion constraint $D_\mathrm{a}$, which can be successfully accomplished at rate $\frac{1}{2} \left(\log \frac{\eta}{D_\mathrm{a}}\right)^+$ \cite[Thm. 3]{lapidoth97}. Finally, the decoder reproduces $\hat{\rvx} = \tilde{\rvx} + \mathbf{E} [\rvx | \hat{\rvs}]$. Since the aforementioned scheme applies to any $D \in [Q(A/\sigma), D_\mathrm{s}]$, optimizing $D$ leads to (\ref{eqn:classification-sordf}). $\Box$
Figure \ref{fig:SORDF-classification-bound} displays the achievable upper bound of $R(D_\mathrm{s}, D_\mathrm{a})$ in (\ref{eqn:classification-sordf}). For comparison, we also plot $(1/2) \left[\log \left(\sigma^2/D_\mathrm{a}\right)\right]^+$, which corresponds to the rate-distortion function under the ideal scenario where both the encoder and the decoder know $\rvs$ perfectly, and $(1/2) \left[\log \left((A^2 + \sigma^2)/D_\mathrm{a}\right)\right]^+$, which corresponds to the naive scheme that directly encodes $\rvx$ subject to the squared error distortion with an i.i.d. Gaussian codebook ensemble.
\begin{figure}[th]
\centering
\includegraphics[width=2.7in]{gaussClas-bound1.eps}
\caption{Achievable upper bound of $R(D_\mathrm{s}, D_\mathrm{a})$ in (\ref{eqn:classification-sordf}), $A^2/\sigma^2 = 10$.}
\label{fig:SORDF-classification-bound}
\end{figure}
\section{Conclusion}
\label{sec:conclusion}
We have provided a general rate-distortion framework for characterizing an information source that can be modeled as a tuple of an intrinsic state and an extrinsic observation. Two issues are particularly relevant for the application of this framework --- first, developing efficient numerical algorithms for computing the SORDF for general sources, and second, estimating the SORDF when only finite training data of the intrinsic state-extrinsic observation pair is available.
\section*{Acknowledgement}
The work of J.~Liu and W.~Zhang was supported in part by the National Key Research and Development Program of China under Grant 2018YFA0701603 and the Key Research Program of Frontier Sciences of CAS under Grant QYZDY-SSW-JSC003, and the work of H.~V.~Poor was supported in part by the U.S. National Science Foundation under Grant CCF-1908308.
using namespace std;
//---------------------------------------------------------------------------
// inc(counter [, step [, limit]]): read the value stored in the engine under
// the name args[1], add step (default 1), clamp from above at limit if given,
// and write the result back.  atoi() yields 0 for a missing or non-numeric
// value, so an unset counter starts from zero.
string KIS_inc::Function(const vector<string>& args)
{
    if(args.size()<2) return("");
    int diff=1;
    if(args.size()>=3) {
        diff=atoi(args[2].c_str());
    }
    int counter=atoi(KisEngine->Engine()->FindFirst(args[1]).c_str())+diff;
    if(args.size()>=4) {
        int limit=atoi(args[3].c_str());
        if(counter>limit) counter=limit;
    }
    KisEngine->Engine()->InsertAfterClear(args[1],IntToString(counter));
    return("");
}
//---------------------------------------------------------------------------
// dec(counter [, step [, limit]]): mirror image of inc() -- subtract step
// (default 1) and clamp from below at limit if given.
string KIS_dec::Function(const vector<string>& args)
{
    if(args.size()<2) return("");
    int diff=1;
    if(args.size()>=3) {
        diff=atoi(args[2].c_str());
    }
    int counter=atoi(KisEngine->Engine()->FindFirst(args[1]).c_str())-diff;
    if(args.size()>=4) {
        int limit=atoi(args[3].c_str());
        if(counter<limit) counter=limit;
    }
    KisEngine->Engine()->InsertAfterClear(args[1],IntToString(counter));
    return("");
}
//---------------------------------------------------------------------------
\section{Introduction}
Quantum mechanics goes to great lengths to ensure that the wavefunctions are singlevalued. This means discarding terms in the solution to the Schr\"odinger equation that either blow up at the origin or diverge at infinity. Solutions of second-order differential equations which are rational lead to multivaluedness, and great efforts were spent, in the late nineteenth century, to uniformize the solutions so as to render them singlevalued. However, multivaluedness is not a stigma, and will explain numerous phenomena from the interaction of polarized beams to the Aharonov-Bohm effect. In this paper we treat multivaluedness from the theory of automorphic functions.
If a vector is parallel-transported around a closed curve it may not necessarily return as the same vector it started as. The effect is known as holonomy, and has been attributed to positive Gaussian curvature~\cite{ONeill}. Holonomy also occurs when we solve a Fuchsian differential equation as a power series and take the analytic continuation around a regular singular point. We will, in general, not get back the solution we started with but one that differs from it by a phase factor.
We will show that geometric phase is a manifestation of periodicity with respect to a group of motions of the tessellations of a disc, or half-plane, by lunes or curvilinear triangles, depending on whether the Fuchsian differential equation has two or three regular singular points, respectively. The solutions of a Fuchsian equation with only two regular singular points reduce to elementary functions, while those of one with three regular singular points do not, but can instead be expressed as a beta integral.
Differential equations containing only regular singular points, like the hypergeometric equation, have very little to do with the equations of mathematical physics~\cite{Gray}. Although the latter equations have a regular singular point at the origin they possess an essential singularity at infinity that prevents the solution from diverging at infinity. The regular singular point at the origin has linearly independent solutions, which are powers of the radial coordinate whose exponents are determined by the roots of the indicial equation. Their quotient is an automorphic function, whose inverse is a periodic function, that will undergo a linear-fractional transformation and tessellate the plane with lunes, or curvilinear triangles. Quantum mechanics eliminates one of the solutions on the basis that it blows up at the origin. However, this depends on the roots of the indicial equation.
Because of a finite value of the kinetic energy, the other singular point at infinity is an essential singularity. The solutions are exponentially rising and decaying functions of the radial coordinate. In order that the wavefunction be finite and singlevalued, the rising solution is excluded. The essential singularity arises as a coalescence of two regular singular points, and is analogous to the behavior of an automorphic function in the immediate neighborhood of limit points of the group of motions which tessellate the half-plane or principal circle. Therefore, if we allow for the multivaluedness of the Schr\"odinger equation, its solutions will behave like automorphic functions far from the limit points on the boundary when we consider the limit of zero kinetic energy.
In the next three sections, through the discussion of the phasor angle, the Pancharatnam phase of polarized light beams, and the Aharonov-Bohm phase, we will show that geometric phase requires positive Gaussian curvature so that the ratio of the area of a curvilinear triangle to its angular excess is constant. Periodicity with respect to a group of motions tessellate the half-plane, or disc, which are natural boundaries upon which reside essential singularities. Periodicity requires at least two regular singular points, and the elliptic motion is a rotation. Non-integral values of quantum numbers are required in order that the group not reduce to the identity, corresponding to the equivalence class of null paths. These do not represent particles, whose quantum numbers must be integers, but, rather, are to be associated with resonances.
We then discuss \lq centripetal attraction\rq, for which the angular momentum varies over a continuous range of non-positive, and non-integral values. The quotient of the solution to the differential equation will take on each value only once in the lune, which is the fundamental region. This forms a dichotomy with quantum mechanics, where the angular momenta are discrete and space is continuous. We conclude the paper by reconstructing the original Schr\"odinger equation: for negative kinetic energy the essential singularity is an exponential function, while for positive kinetic energy it is a circular function. As long as the kinetic energy vanishes, the Schr\"odinger equation, even in the presence of a potential, can be reduced to a Fuchsian form with multiple space scales.
\section{Phasor and the Construction of an Essential Singularity}
The linear-fractional transform,
\begin{equation}
w=\frac{az+b}{cz+d},\label{eq:Mobius}
\end{equation} guarantees that the fundamental region will have the same number of poles and zeros, where $a,b,c,d$ are constants such that $ad-bc=1$. The difference between the number of zeros, $n$, and the number of poles, $p$, is given by
\begin{equation}
\frac{1}{2\pi i}\oint_{\mathcal{C}}\frac{f^{\prime}(z)}{f(z)}dz=n-p,\label{eq:n-p}
\end{equation}
where the contour $\mathcal{C}$ encloses all zeros and poles. Setting $f(z)=w$, with $w$ given by \eqref{eq:Mobius} we find
\begin{equation}
\frac{1}{2\pi i}\oint_{\mathcal{C}}\left(\frac{1}{z+b/a}-\frac{1}{z+d/c}\right)dz=0.\label{eq:0}
\end{equation}
The multipole moment of order $m$ is given by
\begin{equation}
\frac{1}{2\pi i}\oint_{\mathcal{C}}z^m\frac{f^{\prime}(z)}{f(z)}dz.\label{eq:multi}
\end{equation}
The multipole moments are the analogs of essential singularities~\cite{Daniels}. Since equation \eqref{eq:multi} vanishes for an automorphic function, there can be no concentration of \lq charges\rq, which are analogs of zeros and poles, so that \eqref{eq:0} expresses charge neutrality.
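As a numerical illustration of \eqref{eq:0} (a sketch with an arbitrarily chosen unimodular quadruple $a,b,c,d$, not from the paper), the contour integral of $f^{\prime}/f$ around a circle enclosing both the zero $-b/a$ and the pole $-d/c$ vanishes:

```python
import cmath
import math

a, b, c, d = 2.0, 1.0, 1.0, 1.0            # ad - bc = 1

def logderiv(z):
    # f'/f for the Moebius map: simple zero at -b/a minus simple pole at -d/c
    return 1.0 / (z + b / a) - 1.0 / (z + d / c)

# trapezoidal quadrature around a circle enclosing z = -1/2 and z = -1
center, radius, N = -0.75 + 0.0j, 1.0, 4096
total = 0.0 + 0.0j
for m in range(N):
    z0 = center + radius * cmath.exp(2j * math.pi * m / N)
    z1 = center + radius * cmath.exp(2j * math.pi * (m + 1) / N)
    total += 0.5 * (logderiv(z0) + logderiv(z1)) * (z1 - z0)

n_minus_p = total / (2j * math.pi)
print(abs(n_minus_p))                      # ~ 0: the one zero cancels the one pole
```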
Real values of the coefficients in \eqref{eq:Mobius} will have the zero fall on the real axis. The contour in the $z$-plane for \eqref{eq:Mobius} is a circle passing through the pole at $-d/c$, and zero $-b/a$, as shown in Fig.~\ref{fig:contour}. The phase $\delta$ at point $P$ is the difference between the angle $\beta$ and the exterior angle $\alpha$~\cite{Daniels},
\begin{equation}
\delta=\beta-\alpha. \label{eq:phasor}
\end{equation}
The lines of constant phase are circles which pass through $-b/a$ and $-d/c$.
The crucial, and new, point is to realize that by adding $\delta$ to both sides of \eqref{eq:phasor}, and adding and subtracting $\pi$ on the right-hand side give
\begin{equation}
2\delta=\delta+\beta+(\pi-\alpha)-\pi\ge0. \label{eq:anglex}
\end{equation}
The right-hand side is precisely the angle excess of a spherical triangle. We will soon appreciate that the phasor \eqref{eq:phasor} is the complementary angle to the Pancharatnam phase, \eqref{eq:Pan-bis}, to be discussed in the next section.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.60\textwidth]{contour.jpeg}
\caption{The contour is a circle passing through the pole at $-d/c$, and the zero $-b/a$.}
\label{fig:contour}
\end{figure}
The three angles of the triangle in Fig.~\ref{fig:contour}, $\delta=\lambda\pi$, $\beta=\mu\pi$, and $\pi-\alpha=\gamma\pi$, correspond to three \emph{regular\/} singular points, which by a linear-fractional transformation can be placed at $0$, $1$, and $\infty$. The simplest Fuchsian differential equation whose solutions do not reduce to elementary rational functions is one with three singular points. With $\beta$ at the origin, $\pi-\alpha$ at $1$, the phasor $\delta$ will be found at $\infty$.
The automorphic function,
\begin{equation}
w=\int^{z}z^{\mu-1}(1-z)^{\gamma-1}\;dz, \label{eq:beta}
\end{equation}
is a beta integral, and satisfies the Fuchsian differential equation of second-order:
\begin{equation}
w^{\prime\prime}=\left(\frac{\mu-1}{z}+\frac{1-\gamma}{1-z}\right)w^{\prime}, \label{eq:Fuchs}
\end{equation}
where the prime stands for differentiation with respect to $z$. The value of the third angle, $\delta$ at $\infty$, can be determined from the Schwarzian deterivative,
\[
\{w,z\}=\frac{1-\mu^2}{2z^2}+\frac{1-\gamma^2}{(1-z)^2}-\frac{2(1-\gamma)(1-\mu)}{2z(z-1)}. \]
Equating the numerator of the last term with the canonical form~\cite{Lehner},
\[\gamma^2+\mu^2-\lambda^2-1=-2(1-\gamma)(1-\mu),\]
we find
\begin{equation}
\lambda=\pm(\gamma+\mu-1). \label{eq:pm}
\end{equation}
The negative sign will give the Euclidean result,
\begin{equation}
\pi=\delta+\pi-\alpha+\beta, \label{eq:Euclid}
\end{equation}
which is the \emph{negative\/} of the phasor, \eqref{eq:phasor}, while the positive root in \eqref{eq:pm} will give the correct phasor, \eqref{eq:phasor}. This proves that the phasor belongs to spherical geometry, and not to Euclidean geometry.
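The computation leading to \eqref{eq:pm} can be verified symbolically; the sketch below assumes the sympy library and enters the logarithmic derivative $w^{\prime\prime}/w^{\prime}$ of the beta integrand by hand:

```python
import sympy as sp

z, mu, gam = sp.symbols('z mu gamma', positive=True)

# w' = z**(mu-1) * (1-z)**(gam-1), hence w''/w' = (mu-1)/z - (gam-1)/(1-z)
u = (mu - 1) / z - (gam - 1) / (1 - z)
schwarzian = sp.together(sp.diff(u, z) - u**2 / 2)   # {w,z} = (w''/w')' - (w''/w')**2/2

claimed = (1 - mu**2) / (2 * z**2) + (1 - gam**2) / (2 * (1 - z)**2) \
    - 2 * (1 - gam) * (1 - mu) / (2 * z * (z - 1))
print(sp.simplify(schwarzian - claimed))             # 0

# matching the numerator of the last term against the canonical form
lam_sq = mu**2 + gam**2 - 1 + 2 * (1 - gam) * (1 - mu)
print(sp.expand(lam_sq - (gam + mu - 1)**2))         # 0, i.e. lambda = +/-(gamma + mu - 1)
```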
\section{Pancharatnam's Phase for Polarized Light}
Berry~\cite{Berry} claims that Pancharatnam's phase~\cite{Pan} is one-half the solid angle subtended by a geodesic triangle on the Poincar\'e sphere. Even before defining the Pancharatnam phase, a relation to an interior solid angle can be ruled out, since all deductions are made on the surface of the Poincar\'e sphere with absolutely no knowledge of the interior angles or points that the sphere encompasses~\cite{Shurcliff}. Moreover, any shape on the surface of the sphere with the same area subtends the same solid angle, so a geodesic triangle is in no way singled out. In contrast, we will show that the complementary angle found by Pancharatnam is equal to half the area of a spherical triangle, given by the angle excess.
Pancharatnam considers a polarized beam $C$ to be separated into two beams in states of polarization $A$ and $B$, whose phase difference is the complementary angle to $\delta$. In reference to the phasor \eqref{eq:phasor}, $\delta$ will be equal to the difference in the internal angle $\angle ACB$ and the exterior angle $\angle ABC^{\prime}$,
\begin{equation}
\delta=\angle ACB-\angle ABC^{\prime}, \label{eq:Pan}
\end{equation}
as shown in Fig.~\ref{fig:Poincare}. Expressing the exterior angle in terms of the interior angle, and adding $\delta=\angle BAC$ to both sides of \eqref{eq:Pan}, result in
\begin{equation}
2\delta=\angle BAC+\angle ACB+\angle ABC-\pi. \label{eq:Pan-bis}
\end{equation}
Equation \eqref{eq:Pan-bis} expresses twice the phase difference between the two beams in terms of the area of a spherical triangle given by its angle excess.
Actually, Pancharatnam defines $\delta=\angle CAB$ as the phase difference which he expresses in terms of the triangle colunar to $\triangle ACB$, namely $\triangle AC^{\prime}B$. This is to say the angle,
\begin{equation}
\angle C^{\prime}AB=\angle AC^{\prime}B-\angle ABC, \label{eq:delta}
\end{equation}
is the phasor, \eqref{eq:phasor}, being the difference between the opposite internal angle and the external angle of the third angle of the spherical triangle. Adding the angle $\angle C^{\prime}AB$ to both sides of \eqref{eq:delta}, and adding and subtracting $\pi$ on the right-hand side yield:
\begin{equation}
2\angle C^{\prime}AB=\angle C^{\prime}AB+\angle AC^{\prime}B+\angle ABC^{\prime}-\pi. \label{eq:delta-bis}
\end{equation}
The right-hand side of \eqref{eq:delta-bis} is the area of the triangle $\triangle C^{\prime}AB$, and replacing the left-hand side by its complementary angle gives
\begin{equation}
\delta=\angle CAB=\pi-\mbox{\small{$\frac{1}{2}$}}\left(\angle C^{\prime}AB+\angle AC^{\prime}B+\angle ABC^{\prime}-\pi\right), \label{eq:5.a}
\end{equation}
which is eqn (5.a) of Pancharatnam~\cite{Pan}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.45\textwidth]{Poincare.jpg}
\caption{The phase $\angle C^{\prime}AB$ is determined by the angle excess of the triangle $\triangle BAC$ colunar to $\triangle C^{\prime}AB$. As $B\rightarrow C$ the two beams will have opposite phases, while as $B\rightarrow C^{\prime}$, which is the opposite state of polarization to $C$, the phase difference will vanish.}
\label{fig:Poincare}
\end{figure}
As $B\rightarrow C$, the phase $\angle C^{\prime}AB\rightarrow\pi$, and the beams will have opposite phases. This is analogous to the coalescence of the zero and pole to form a multipole. Alternatively, as $B\rightarrow C^{\prime}$, the opposite state of polarization to $C$, the beams in the state of polarization $A$ and $B$ will have zero phase difference.
Pancharatnam then asks what happens when the split component $B$ tends to the opposite polarized state $A^{\prime}$ of the other polarized component $A$. As $B\rightarrow A^{\prime}$ and $\delta\rightarrow\Delta$, the latter will be given in terms of the area of the lune cut out by the great circles $AC_0A^{\prime}$ and $AC^{\prime}A^{\prime}$, which is $2\angle C_0AC^{\prime}$. Hence,
\begin{equation}
\Delta=\pi-\angle C_0AC^{\prime}=\angle C_0AC, \label{eq:lune}
\end{equation}
is half the area of the lune formed from the great circles $AC_0A^{\prime}$ and $ACA^{\prime}$. When the area vanishes, the beams will have opposite phases, $\Delta=\pi$. Fig.~\ref{fig:Poincare} also illustrates Pancharatnam's observation that the emergent state of polarization $C$ can be obtained from the incident state of polarization $C_0$ when polarized light passes through a birefringent medium, which can be viewed as a rotation of the Poincar\'e sphere through an angle $\Delta$ in the counterclockwise direction about the $AA^{\prime}$ axis.
\section{The Aharonov-Bohm Effect}
The fringe shift in a field-free, but multivalued, region due to a non-vanishing vector potential was predicted by Ehrenberg and Siday~\cite{Siday}, and rediscovered by Aharonov and Bohm~\cite{AR} a decade later. Ehrenberg and Siday found it strange that an optical phenomenon would be caused by a flux, instead of a \emph{change\/} in the flux. Aharonov and Bohm insisted on the multivaluedness of the region in which the beams are travelling.
Consider the Schr\"odinger equation with a vector potential, $\mathbf{A}$,
\begin{equation}
i\hbar\frac{\partial\psi}{\partial t}=\frac{1}{2m}\left(\mathbf{p}-\frac{e}{c}\mathbf{A}\right)^2\psi. \label{eq:Schrodinger}
\end{equation}
We want to see how close \eqref{eq:Schrodinger} comes to a Fuchsian equation. It becomes one when the phase transform,
\begin{equation}
\psi\longrightarrow e^{-(i/\hbar)Et}\psi, \label{eq:trans-1a}
\end{equation}
is introduced into \eqref{eq:Schrodinger} and the Hamiltonian, $H$, is replaced by $H-E$, which does not \lq\lq produce a trivial, computable phase change in the solution of [\eqref{eq:Schrodinger}]\rq\rq~\cite{Simon}. The reason why it is not trivial is because the constant $E$ would bring in higher-order poles in the indicial equation and introduce an essential singularity into the Schr\"odinger equation [cf. eqn \eqref{eq:hydrogen-tris} below]. As we shall show in the last section, the elimination of $E$ is a necessary condition to keep all singular points regular in the Schr\"odinger equation, \eqref{eq:Schrodinger}.
The radial Schr\"odinger equation then reduces to
\begin{equation}
\psi^{\prime\prime}+P\psi^{\prime}+Q\psi=0, \label{eq:Schrodinger-bis}
\end{equation}
where the prime denotes differentiation with respect to the radial coordinate, $r$, and
\begin{align}
P&=-2\frac{ie}{\hbar c}A \label{eq:P}\\
Q&=-\left(\frac{ie}{\hbar c}A^{\prime}+\frac{e^2}{\hbar^2c^2}A^2\right).\label{eq:Q}
\end{align}
With a change in the unknown $\psi\longrightarrow k\psi$, \eqref{eq:Schrodinger-bis} becomes
\begin{equation}
\psi^{\prime\prime}+\left(P+2\frac{k^{\prime}}{k}\right)\psi^{\prime}+\left(Q+P\frac{k^{\prime}}{k}+\frac{k^{\prime\prime}}{k}\right)\psi=0. \label{eq:Yoshida}
\end{equation}
If $k$ satisfies \eqref{eq:Schrodinger-bis}, the coefficient of $\psi$ vanishes in \eqref{eq:Yoshida}. Rather, if the coefficient of $\psi^{\prime}$ vanishes, $P+2k^{\prime}/k=0$, \eqref{eq:Yoshida} reduces to
\begin{equation}
\psi^{\prime\prime}+I\psi=0, \label{eq:normal}
\end{equation}
where
\begin{equation}
I=Q-\mbox{\small{$\frac{1}{4}$}} P^2-\mbox{\small{$\frac{1}{2}$}} P^{\prime}=Q+P\frac{k^{\prime}}{k}+\frac{k^{\prime\prime}}{k}, \label{eq:Schwarzian}
\end{equation}
is half the Schwarzian derivative. Equation \eqref{eq:normal} is known as the normal form of the equation.
Equations with the same normal form are said to be equivalent, and $I$ is their invariant~\cite{Ince}. However, for the Schr\"odinger equation, \eqref{eq:Schrodinger-bis}, with coefficients \eqref{eq:P} and \eqref{eq:Q}, the invariant \eqref{eq:Schwarzian} vanishes identically. Therefore, \eqref{eq:Schrodinger-bis} is weakly equivalent to $\psi^{\prime\prime}=0$~\cite{Yoshida}, and there would be no invariant in the Aharonov-Bohm effect. Any function that has a vanishing Schwarzian derivative must be a linear-fractional transformation. And because a non-vanishing Schwarzian derivative is curvature~\cite{Tab}, we can conclude that \eqref{eq:Schrodinger} is not the correct equation to derive the Aharonov-Bohm effect~\cite{Wu}.
In fact, Aharonov and Bohm~\cite{AR} consider the wave equation outside the magnetic field region,
\begin{equation}
\left[\frac{\partial^2}{\partial r^2}+\frac{1}{r}\frac{\partial}{\partial r}+\frac{1}{r^2}\left(\frac{\partial}{\partial\vartheta}-i\alpha\right)^2+k^2\right]\psi=0, \label{eq:AR}
\end{equation}
where $\mathbf{k}$ is the wave vector of the incident particle, $\alpha=-e\phi/hc$, and $\phi$ is the total magnetic flux inside the circuit. By introducing the phase transformation
\[
\psi\longrightarrow e^{im\vartheta}\psi,\]
in \eqref{eq:AR} we can select a spherically symmetric solution by setting the magnetic quantum number $m=0$. Equation \eqref{eq:AR} then reduces to the equation solved by Tamm, whose solution is a Bessel function for $k^2>0$. According to Wu and Yang~\cite{Wu}, it has no meaningful solution if $k^2\le0$. However, it is precisely the equality $k^2=0$ that allows \eqref{eq:AR} to be transformed into the Fuchsian differential equation,
\begin{equation}
\psi^{\prime\prime}+\frac{1-(2\alpha)^2}{4r^2}\psi=0, \label{eq:AR-bis}
\end{equation}
provided $2\alpha<1$. According to Wu and Yang, the origin of this term is a monopole in the expression for the angular momentum,
\[\mathbf{L}=\mathbf{r}\times(\mathbf{p}-e\mathbf{A})-\frac{2\alpha\mathbf{r}}{r},\]
but the condition $\ell(\ell+1)\ge(2\alpha)^2$ would prevent the formation of a lune. We will now show that their conclusion, \lq\lq the monopole does not possess strings of singularities in the field around it\rq\rq, is inaccurate, since analytic continuation about a regular singular point gives rise to a geometric phase.
Equation \eqref{eq:AR-bis} is valid about the singular point at the origin as well as the singular point at infinity. This can easily be shown by substituting $r=1/z$ in \eqref{eq:AR-bis} to get
\[\psi^{\prime\prime}+\frac{2}{z}\psi^{\prime}+\frac{1-(2\alpha)^2}{4z^2}\psi=0.\]
Then the substitution $\psi\rightarrow\psi/z$ will bring it into exactly the same form as \eqref{eq:AR-bis}. This shows that the fixed points at $r=0$ and $r=\infty$ are symmetrical.
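The symmetry of the two fixed points can be confirmed symbolically (sympy assumed): substituting $\psi = \chi/z$ into the equation about $z=0$ leaves an equation for $\chi$ of the same form as \eqref{eq:AR-bis}:

```python
import sympy as sp

z, alpha = sp.symbols('z alpha', positive=True)
chi = sp.Function('chi')
C = (1 - (2 * alpha)**2) / 4

# the equation about z = 0, evaluated on psi = chi(z)/z
psi = chi(z) / z
lhs = sp.diff(psi, z, 2) + (2 / z) * sp.diff(psi, z) + C / z**2 * psi

# z * lhs reduces to chi'' + C*chi/z**2, the same form as about r = 0
target = sp.diff(chi(z), z, 2) + C / z**2 * chi(z)
print(sp.simplify(z * lhs - target))       # 0
```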
The two independent solutions to \eqref{eq:AR-bis} are:
\begin{equation}
\psi_1=r^{\mbox{\small{$\frac{1}{2}$}}(1+2\alpha)}\hspace{60pt}\mbox{and}\hspace{60pt}\psi_2=r^{\mbox{\small{$\frac{1}{2}$}}(1-2\alpha)}. \label{eq:AR-soln}
\end{equation}
Since \eqref{eq:AR-soln} is multivalued, one solution would have to be rejected to preserve the singlevaluedness of the Schr\"odinger wavefunction. The quotient of the two solutions, \eqref{eq:AR-soln}, will undergo a linear-fractional transformation since any two independent solutions are linear combinations of any other pair of solutions. Analytic continuation about the origin, or infinity, will not give back the solution we started with. So by solving \eqref{eq:AR-bis} we have found functions automorphic with respect to a group of rotations. The group tessellates the upper half-plane, or disc, by lunes, of the form shown in Fig.~\ref{fig:lune}, where $r=0$ and $r=\infty$ correspond to the angular points of the lune.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.50\textwidth]{lune.jpeg}
\caption{Two circular arcs intersect at an angle $2\alpha\pi$.}
\label{fig:lune}
\end{figure}
Two circular arcs that cut out the lune intersect at an angle $2\alpha\pi$. The area of the lune is $4\pi\alpha$. In terms of the phasor, the phase angle would be half this area, while Pancharatnam gives the phase as the complementary angle. Since we want the phase to vanish with the magnetic flux intensity, we choose the former and get
\begin{equation}
\delta=2\pi\alpha=\frac{2\pi e\phi}{hc}=\frac{e\phi}{\hbar c}=\frac{e}{\hbar c}\oint\mathbf{A}\cdot d\mathbf{r}. \label{eq:AR-phase}
\end{equation}
The phase factor,
\begin{equation}
\psi=e^{2\pi i\alpha}, \label{eq:factor}
\end{equation}
is the change in the wave function during a circuit of the solenoid. Equation \eqref{eq:AR-phase} says that when $\phi$ is an odd multiple of a fluxon, $hc/2e$, the two beams (one bypasses the toroidal magnet and the other passes through its hole) should exhibit a (maximum) phase difference of $\pi$ (mod $2\pi)$, i.e.,
\[\frac{2\nu+1}{2}2\pi\equiv\pi \hspace{30pt}(\,\mod 2\pi)\hspace{30pt}\nu=0,\pm1,\pm2\ldots\] This is what is seen in the interferogram that results from combining the beam with a coherent reference beam that avoids the magnetic field~\cite{Bate}. It is seen that integral quantization of the phase eliminates the phase factor, \eqref{eq:factor}, altogether.
Denote by $\lfloor\alpha^{-1}\rfloor$ Gauss' bracket, which indicates the largest integer not exceeding $\alpha^{-1}$. Then $\varepsilon=e^{2\pi i/\lfloor\alpha^{-1}\rfloor}$ is an elliptic generator with period $\lfloor\alpha^{-1}\rfloor$. In other words, there will be $\lfloor\alpha^{-1}\rfloor$ distinct branches, or $\lfloor\alpha^{-1}\rfloor$ \lq steps\rq\ in the \lq spiral staircase\rq. The different branches are $g_n=\varepsilon^{n}g_0$, where $n=0,1,2,\ldots,\lfloor\alpha^{-1}\rfloor-1$ are the winding numbers. Each step can be regarded as a covering space corresponding to a particular branch of the multivalued function. In particular, for destructive interference of the beams, $\lfloor\alpha^{-1}\rfloor=2$, so that there is a single branch, and the surface is simply connected.
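The phase \eqref{eq:AR-phase} and the branch count can be illustrated numerically; the sketch below works in natural units with the fluxon taken as $hc/2e$ and is not part of the paper:

```python
import cmath
import math

hbar = e = c = 1.0                             # natural units
h = 2 * math.pi * hbar
fluxon = h * c / (2 * e)                       # hc/2e

def phase(phi):
    # delta = e*phi/(hbar*c), reduced modulo 2*pi
    return (e * phi / (hbar * c)) % (2 * math.pi)

# odd multiples of a fluxon give the maximal shift pi
print([phase((2 * nu + 1) * fluxon) / math.pi for nu in range(4)])

def branches(alpha):
    # the floor(1/alpha) distinct branches g_n = eps**n g_0
    period = math.floor(1 / alpha)
    eps = cmath.exp(2j * math.pi / period)
    return [eps**n for n in range(period)]

print(len(branches(0.4)))                      # 2 branches, the destructive case
```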
\section{Attractive Angular Momentum}
Many of the equations of mathematical physics can be transformed into Fuchsian differential equations at vanishing kinetic energy. Consider the spherical Bessel equation,
\begin{equation}
\left(\frac{1}{r^2}\frac{d}{dr}r^2\frac{d}{dr}-\frac{\ell(\ell+1)}{r^2}+k^2\right)\psi=0, \label{eq:Bessel}
\end{equation}
which can be transformed into \eqref{eq:normal}
where
\begin{equation}
I=k^2-\frac{\ell(\ell+1)}{r^2}. \label{eq:I}
\end{equation}
The Bessel equation, \eqref{eq:Bessel}, has a regular singular point at $r=0$, and an essential singularity at $r=\infty$. This can be seen by making the substitution $z=1/r$, and noting that the coefficient of $\psi$ has higher-order poles at $z=0$ [cf. eqn \eqref{eq:Bessel-bis} below].
The indicial equation at the regular singular point, $r=0$, has two independent solutions:
\begin{equation}
\psi_1=r^{\ell+1}\hspace{60pt}\mbox{and}\hspace{60pt} \psi_2=r^{-\ell}. \label{eq:soln-Bessel}
\end{equation}
The second solution $\psi_2$ is ordinarily discarded on the basis that it blows up at the origin. This makes $\psi$ singlevalued. The quotient of the two solutions,
\begin{equation}
s=\psi_1/\psi_2=r^{\lambda}, \label{eq:quotient}
\end{equation} has a multivalued nature, and is automorphic with respect to a group of rotations that will tessellate the half-plane, or disc, with lunes, if and only if $k^2=0$. There can be no constant terms appearing in \eqref{eq:I}, or the Schwarzian derivative [cf. eqn \eqref{eq:Schwarz-Bessel} below].
When $k^2\neq0$, there will be an essential singularity at $r\rightarrow\infty$. We may study this singularity by making the substitution $z=1/r$, and as $z\rightarrow0$, \eqref{eq:Bessel} will reduce to
\begin{equation}
\psi^{\prime\prime}+\frac{2}{z}\psi^{\prime}+\frac{k^2}{z^4}\psi=0. \label{eq:Bessel-bis}
\end{equation}
The solution to \eqref{eq:Bessel-bis} gives an essential singularity,
\begin{equation}
\psi=\sin(k/z), \label{eq:essential-sin}
\end{equation}
at $z=0$ which consists of a pole of infinite order. It is the limit point of two sequences of zeros, one on the positive real, and the other on the negative real, axis~\cite{Daniels}. Since the integrand of \eqref{eq:n-p} is
\begin{equation}
\frac{f^{\prime}(z)}{f(z)}=-\frac{k}{z^2}\cot\frac{k}{z}=-\frac{1}{z}+\frac{k^2}{3z^3}+\frac{k^4}{45z^5}+\cdots, \label{eq:n-p-bis}
\end{equation}
and introducing it into \eqref{eq:multi} shows that it has a \lq charge\rq\ of $-1$, a vanishing dipole moment, a quadrupole moment of $k^2/3$, a hexadecapole moment of $k^4/45$, etc.
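The expansion \eqref{eq:n-p-bis} and the moments quoted above can be reproduced symbolically (sympy assumed); writing $w=1/z$, the logarithmic derivative becomes $-kw^2\cot(kw)$, and the moment of order $m$ is the coefficient of $w^{m+1}$:

```python
import sympy as sp

k, w = sp.symbols('k w', positive=True)

# f = sin(k/z); with w = 1/z, f'/f = -(k/z**2)*cot(k/z) = -k*w**2*cot(k*w)
ser = sp.expand(sp.series(-k * w**2 * sp.cot(k * w), w, 0, 7).removeO())
print(ser)                     # -w + k**2*w**3/3 + k**4*w**5/45, up to term ordering

# moment of order m = coefficient of z**(-(m+1)), i.e. of w**(m+1)
moments = [ser.coeff(w, m + 1) for m in range(5)]
print(moments)                 # charge -1, quadrupole k**2/3, hexadecapole k**4/45
```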
The automorphic function, $s$, has the Schwarzian derivative,
\begin{equation}
\{s,r\}=\frac{1-\lambda^2}{2r^2}=2I, \label{eq:Schwarz-Bessel}
\end{equation}
only in case of vanishing kinetic energy, $k^2=0$, where $\lambda=2\ell+1$. As we have already shown, the indicial equations will then be identical about $r=0$ and $r=\infty$, thereby reducing the second singular point from an essential to a regular one. This is necessary insofar as the analytic continuation of the solution about a single singular point will not give back the solution that we started with; the product of the analytic continuations about the two singular points, however, will give back the original solution. In other words, the group of rotations needs at least two generators whose product is the identity. In the case of two singular points, the generators will be inverses of one another. This is Riemann's condition for the \lq\lq periodicity of the function\rq\rq~\cite{Gray}, and the group generated by these matrices is the \lq monodromy group\rq, a term coined by Jordan.
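Both the Schwarzian \eqref{eq:Schwarz-Bessel} and the indicial solutions \eqref{eq:soln-Bessel} admit a short symbolic check (sympy assumed):

```python
import sympy as sp

r, lam, ell = sp.symbols('r lamda ell', positive=True)

# {s, r} for s = r**lam, via the logarithmic derivative s''/s' = (lam - 1)/r
u = (lam - 1) / r
schw = sp.diff(u, r) - u**2 / 2
print(sp.simplify(schw - (1 - lam**2) / (2 * r**2)))      # 0

# r**(ell+1) and r**(-ell) both solve psi'' - ell*(ell+1)*psi/r**2 = 0
for psi in (r**(ell + 1), r**(-ell)):
    print(sp.simplify(sp.diff(psi, r, 2) - ell * (ell + 1) / r**2 * psi))   # 0
```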
When the two poles are regular, a simply closed circuit in the counterclockwise direction about $r=0$, described by the monodromy matrix,
\begin{equation}
\mathbb{S}_0=\begin{pmatrix} e^{2\pi i\ell} &0\\
0 & e^{-2\pi i\ell} \end{pmatrix}, \label{eq:mono}
\end{equation}
must be accompanied by a counterclockwise circuit about the other singular point at $r=\infty$,
\begin{equation}\mathbb{S}_{\infty}=\begin{pmatrix} e^{-2\pi i\ell} &0 \\
0 & e^{2\pi i\ell} \end{pmatrix}, \label{eq:mono-bis}
\end{equation}
in order that Riemann's condition must be fulfilled:
\begin{equation}
\mathbb{S}_0\mathbb{S}_\infty=\mathbb{I}, \label{eq:Riemann}
\end{equation} so that the motions form a group, the monodromy group. Periodicity results in a multivalued function only for non-integral values of $\ell$. Integral values would reduce the monodromy matrices, \eqref{eq:mono} and \eqref{eq:mono-bis}, to the identity matrix, and destroy the tessellations of the half-plane, or disc, by lunes. This is the condition for constructive interference, which is no longer possible when the singular point at infinity becomes an essential singularity. The presence of an essential singularity destroys the periodicity with respect to the group.
The existence of a lune formed from two circular arcs with angle $\lambda\pi$ implies that $\lambda\le1$, or, equivalently $\ell\in[-\mbox{\small{$\frac{1}{2}$}},0]$. The centripetal repulsion $\ell(\ell+1)$ has now become \lq centripetal attraction\rq, $\ell(\ell+1)<0$.
The Bessel differential equation, \eqref{eq:Bessel}, thus becomes identical to the Aharonov-Bohm equation, \eqref{eq:AR-bis}. The automorphic function $s=\psi_1/\psi_2$ can be written more generally as
\begin{equation}
S=\frac{as+b}{cs+d}, \label{eq:Mobius-bis}
\end{equation}
which gives a conformal representation of the $S$-lune upon the $s$-half plane. Inside the lune, which is the fundamental region, the automorphic function will take on any value only once. Thus, the linear-fractional transformation, \eqref{eq:Mobius-bis}, will transform two circles cutting at angle, $\lambda\pi$, into any two others intersecting at the same angle. This result has been known since the time of Kirchhoff~\cite{K}.
Thus, space and angular momentum have switched roles: the former is discontinuous while the latter is continuous in the interval $\ell\in[-\mbox{\small{$\frac{1}{2}$}},0]$. The geometric phase is now half the area of the lune, $\delta=(2\ell+1)\pi$. For $\ell=-\mbox{\small{$\frac{1}{2}$}}$ the regular and irregular solutions, \eqref{eq:soln-Bessel}, coalesce, and the phase vanishes. At the other extreme, $\ell=0$, and the phase, $\delta=\pi$, in which the area of the lune becomes the area of a hemisphere, and the Schwarzian derivative, \eqref{eq:Schwarz-Bessel}, vanishes. The differential equation \eqref{eq:Bessel} becomes weakly equivalent to $\psi^{\prime\prime}=0$ so that there is no invariant~\cite{Yoshida}, exactly as in the case of the Schr\"odinger equation \eqref{eq:Schrodinger}.
\section{Reconstruction of the Schr\"odinger Equation}
For Fuchsian automorphic functions, accumulation, or limit, points occur on the principal circle or real axis of the half-plane~\cite{Ford}. Not all points on the boundary need be limit points of the group. If the automorphic function is not a constant, each limit point of the group is an essential singularity of the function. The behavior of an automorphic function at a limit point is analogous to the behavior of the Schr\"odinger equation in the immediate neighborhood of the point at infinity. We first establish the form of the essential singularity in the case of negative kinetic energy,~\footnote{For positive kinetic energy the essential singularity is given by \eqref{eq:essential-sin}.} and then show that the Schr\"odinger equation can be reduced to Fuchsian form even in the presence of a potential at infinity provided the kinetic energy vanishes.
Consider the radial Schr\"odinger equation for the \emph{bound\/} states of the hydrogen atom,
\begin{equation}
\psi^{\prime\prime}-\left[\frac{\ell(\ell+1)}{r^2}-\left(\frac{\gamma}{r}-\frac{1}{4}\right)\right]\psi=0, \label{eq:hydrogen}
\end{equation}
where the parameter $\gamma=1/kr_B$, and $r_B$ is the Bohr radius. As $r\rightarrow0$, \eqref{eq:hydrogen}
becomes
\begin{equation}
\psi^{\prime\prime}+\frac{1-\lambda^2}{4r^2}\psi=0, \label{eq:hydrogen-bis}
\end{equation}
which has two independent solutions, \eqref{eq:soln-Bessel}.
As $r\rightarrow\infty$, \eqref{eq:hydrogen} reduces to
\begin{equation}
\psi^{\prime\prime}+\frac{2}{z}\psi^{\prime}-\frac{1}{4z^4}\psi=0, \label{eq:hydrogen-tris}
\end{equation}
when the transformation $r=1/z$ is made. The two independent solutions are
\begin{equation}
\psi_1=e^{-1/2z}\hspace{60pt}\mbox{and}\hspace{60pt}\psi_2=e^{1/2z}. \label{eq:soln-hydrogen}
\end{equation}
On the condition that $\psi$ must remain bounded, as $r\rightarrow\infty$, or $z\rightarrow0$, the second solution in \eqref{eq:soln-hydrogen} is eliminated. The solution to \eqref{eq:hydrogen} is given as a product of the first solutions in \eqref{eq:soln-Bessel} and \eqref{eq:soln-hydrogen} multiplied by the associated Laguerre polynomials.
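For completeness, the reduction leading to \eqref{eq:hydrogen-tris} is just the chain rule for $r=1/z$,
\[
\frac{d}{dr}=-z^{2}\frac{d}{dz},\hspace{40pt}
\frac{d^{2}}{dr^{2}}=z^{4}\frac{d^{2}}{dz^{2}}+2z^{3}\frac{d}{dz},
\]
applied to the large-$r$ limit of \eqref{eq:hydrogen}, $d^{2}\psi/dr^{2}-\mbox{\small{$\frac{1}{4}$}}\psi=0$; dividing the result by $z^{4}$ gives \eqref{eq:hydrogen-tris}.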
The transcendental function,
\begin{equation}
f(z)=\psi_2/\psi_1=e^{1/z}, \label{eq:f}
\end{equation}
has an essential singularity at $z=0$, corresponding to $r=\infty$. It can be considered as the limit of a rational function with a pole of order $n$ at $z=0$ and a zero of order $n$ at $z=-1/n$~\cite{Daniels}. The ratio,
\begin{equation}
\lim_{n\rightarrow\infty}\frac{(z+1/n)^n}{z^n}=\lim_{n\rightarrow\infty}(1+1/nz)^n=e^{1/z}, \label{eq:f-bis}
\end{equation}
has a finite limit coinciding with a transcendental function.
This occurs on the principal circle, or the positive real axis of the half-plane.\footnote{Points at infinity can be transformed to the principal circle by the linear-fractional transformation,
\[U(z)=\frac{iz+1}{z+i}.\]} The essential singularity thus consists of the merger of a pole of infinite order at $z=0$ and a zero of infinite order at $z=0^{-}$. Introducing \eqref{eq:f} into the multipole moment \eqref{eq:multi} shows that the only non-vanishing moment is $m=1$, so that the essential singularity has a dipole moment of $-1$. This permits us to interpret poles and zeros as opposite charges~\cite{Daniels}.
If equation \eqref{eq:hydrogen-bis} has two singular points $r=0$ and $r=\infty$ there are no limit points of the group of motions that separate the plane~\cite{Ford}. By transforming the singular point at infinity into an essential singularity, where an infinite number of poles will cluster, we introduce a boundary, either a principal circle or real axis. The transform involves introducing the kinetic energy which is represented by the last term in \eqref{eq:hydrogen}. The essential singularity has a dipole moment, which is related to a bound state, such as in the Schr\"odinger equation for the hydrogen atom, \eqref{eq:hydrogen}, in contrast to an unbound state as in Bessel's equation, \eqref{eq:Bessel}, which has an infinite number of moments.
Let us look for a solution to \eqref{eq:hydrogen} of the Fuchsian type, $\psi(r)=r^{\ell+1}\varphi(r)$. Then $\varphi(r)$ will be the solution to
\begin{equation}
\varphi^{\prime\prime}+2\frac{(\ell+1)}{r}\varphi^{\prime}+\left(\frac{\gamma}{r}-\mbox{\small{$\frac{1}{4}$}}\right)\varphi=0. \label{eq:confluent}
\end{equation}
Introducing the Euler operator, $D=rd/dr$~\cite{SK}, \eqref{eq:confluent} can be reduced to the Fuchsian form:
\begin{equation}
D(D+\lambda)\varphi=-r\left(\gamma-\mbox{\small{$\frac{1}{4}$}} r\right)\varphi. \label{eq:confluent-bis}
\end{equation}
The resonances, or roots of the left-hand side of the equation, are $0$ and $-\lambda$. This confirms that for small $r$, the solution should behave as $r^{-\lambda}$ [cf. eqn \eqref{eq:quotient}]. The stable manifold is parameterized by $\gamma$, the coefficient of the attractive Coulombian potential.
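The reduction rests on the Euler-operator identity,
\[
D(D+\lambda)=r^{2}\frac{d^{2}}{dr^{2}}+(1+\lambda)\,r\frac{d}{dr},
\]
so that multiplying \eqref{eq:confluent} by $r^{2}$ and matching the coefficient of the first-derivative term requires $1+\lambda=2(\ell+1)$, i.e., $\lambda=2\ell+1$.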
Solving \eqref{eq:confluent-bis} recursively, we get the power expansion
\[
\varphi=r^{-\lambda}\left\{1+\frac{\gamma}{\lambda-1}r+\frac{1}{2(\lambda-2)}\left(\frac{\gamma^2}{\lambda-1}-\frac{1}{4}\right)r^2+\cdots\right\},\]
or in terms of our original wavefunction,
\begin{equation}
\psi=r^{-\ell}\left\{1+\frac{\gamma}{2\ell}r+\frac{1}{2(2\ell-1)}\left(\frac{\gamma^2}{2\ell}-\frac{1}{4}\right)r^2+\cdots\right\}.\label{eq:psi-soln}
\end{equation}
The idea of such power series solution is the same as Frobenius's \lq trick\rq\ of considering logarithms as limiting cases of powers. Logarithmic solutions are admissible and occur when the roots of the indicial equation are equal. Equation \eqref{eq:psi-soln} shows that it is an analytic function which has a branch pole of order $-\ell$ at $r=0$.
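For completeness, inserting $\varphi=r^{-\lambda}\sum_{k\geq0}a_{k}r^{k}$ into \eqref{eq:confluent-bis} gives the two-term recursion
\[
k(k-\lambda)\,a_{k}=-\gamma\,a_{k-1}+\mbox{\small{$\frac{1}{4}$}}\,a_{k-2},
\hspace{40pt}a_{0}=1,\quad a_{-1}=0,
\]
whose first two steps reproduce the coefficients of the expansion above.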
When we apply the same procedure to the fixed point at infinity by setting $r=1/z$, we get
\begin{equation}
D(D-\lambda)\varphi=-\frac{1}{z}\left(\gamma-\frac{1}{4z}\right)\varphi, \label{eq:confluent-tris}
\end{equation}
which is not an equation of the Fuchsian type. At vanishing kinetic energy, \eqref{eq:confluent-tris} can be reduced to a Fuchsian type of differential equation by a transcendental change of variables,
\[R=e^{-1/z}.\]
Introducing two radial coordinates, $R_0=R$ and $R_1=R\ln R$~\cite{SK}, \eqref{eq:confluent-tris} can be brought into the form:
\begin{equation}
\mathcal{D}\left(\mathcal{D}+\lambda\right)\varphi=\gamma\frac{R_1}{R_0}\varphi, \label{eq:confluent-iv}
\end{equation}
where the two-space scale operator, $\mathcal{D}=R_1\partial/\partial R_0$.
There is an analogy between the essential singularity at infinity of differential equations, like \eqref{eq:Bessel} and \eqref{eq:Schrodinger}, and the limit point of a group, which is also an essential singularity~\cite{Ford}. The essential singularities of the group are the essential singularities of the automorphic function. The limit points either lie along the real axis in the half-plane, or on the principal circle. When an automorphic function is subjected to the linear-fractional substitutions of the group, these substitutions fill the half-plane or principal circle with fundamental regions that do not overlap and leave no lacunae. However, in the immediate vicinity of a limit point, the automorphic function assumes any number of different values. The fundamental regions tend to cluster in infinite number about points on the principal circle or on the real axis. Thus, \emph{the behavior of the automorphic function at a limit point on the boundary is analogous to the confluence of two poles in a differential equation to produce an essential singularity at infinity\/}.
Q: Compare values from two csv files, append value to text file

I have three files. I want to compare the columns containing fruit; for those that match, I want to append the matching fruit to the Append.txt file and then sort ascending.
test1.csv
CustID,Name,Count,Item,Date
23,Smith,8,apples,08/12/2010
1,Jones,8,banana,03/26/2009
15,Miller,2,cookie dough,03/27/2009
6,Fisher,8,oranges,06/09/2011
test2.csv
FRUIT,Amount,Aisle
oranges,1,1
apples,1,1
pears,1,1
Append.txt
Fruit,Total,Aisle
cherries,1,1
dates,2,1
grapes,5,1
kiwis,2,2
peaches,2,2
plums,1,1
watermelon,1,2
Code:
import csv

# Iterate through both reader1 and reader2, compare common row, and append
# matching column data to test.txt in its matching column
with open("C:\\Test\\Append.txt", 'a') as f:
    reader1 = csv.reader(open("C:\\Test\\test1.csv", 'rb'), delimiter=',')
    row1 = reader1.next()
    reader2 = csv.reader(open("C:\\Test\\test2.csv", 'rb'), delimiter=',')
    row2 = reader2.next()
    if (row1[3] == row2[0]):
        print "code to append data from row1[0] to test.txt row[0] goes here"
f.close()
exit
print "code to sort test.txt ascending on column[0] goes here"
My initial script will not work. After examining it, I can see that the code only compares row 1 with row 1, row 2 with row 2, etc., but I really want it to compare all rows (row 1 with row 1, row 1 with row 2, row 2 with row 1, row 2 with row 2, etc.). After running the main script, the test files can be populated with no records or up to 5 records. The Append file can be either empty or have hundreds of records. Using python 2.7.
I am also unsure as to how to sort the file in ascending order when done.
A: Use sets. Read the two CSV files first and collect just the fruits from the rows.
Then use set intersections to find all fruit that the two files have in common, add these to the fruit from the Append.txt file, sort, and write all fruit back to the file.
import csv

# collect the fruits of both CSV files
with open('c:/Test/test1.csv', 'rb') as test1:
    reader = csv.reader(test1)
    next(reader, None)  # ignore header
    test1_fruit = set(row[3] for row in reader)

with open('c:/Test/test2.csv', 'rb') as test2:
    reader = csv.reader(test2)
    next(reader, None)  # ignore header
    test2_fruit = set(row[0] for row in reader)

# Read all the fruit from Append
with open("C:/Test/Append.txt", 'r') as append:
    fruit = set(line.strip() for line in append if line.strip())

# add all fruit that are in both test1 and test2
fruit |= test1_fruit & test2_fruit

# write out a sorted list
with open("C:/Test/Append.txt", 'w') as append:
    append.write('\n'.join(sorted(fruit)))
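The snippet above writes back bare fruit names; if you also need to keep the Total and Aisle columns of Append.txt, a variant that parses it as CSV could look like this (shown in Python 3 syntax; the default Total/Aisle of 1 for newly added fruit is an assumption):

```python
import csv

def merge_fruit(test1_lines, test2_lines, append_lines):
    """Return the Append rows plus any fruit present in both test
    files, sorted ascending by fruit name, as full CSV rows."""
    reader = csv.reader(test1_lines)
    next(reader, None)                      # skip header
    test1_fruit = set(row[3] for row in reader)

    reader = csv.reader(test2_lines)
    next(reader, None)
    test2_fruit = set(row[0] for row in reader)

    reader = csv.reader(append_lines)
    header = next(reader)
    rows = {row[0]: row for row in reader}  # fruit name -> full row

    for name in test1_fruit & test2_fruit:
        # assumption: a fruit new to Append.txt gets Total=1, Aisle=1
        rows.setdefault(name, [name, '1', '1'])

    return [header] + [rows[name] for name in sorted(rows)]
```

Call it with the three open file objects and write the result back with csv.writer.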
Q: Pure isometries are unitarily equivalent to a shift (Wold decomposition). Do the corresponding intertwining relations also hold for the adjoints?

Let $T \in L(H)$ be an isometry on a Hilbert space. Further assume that $T$ is pure, i.e. $T^{*m} \xrightarrow {m \rightarrow \infty} 0$ in the strong operator topology, or equivalently, there is no unitary part in the Wold decomposition of $T$. In other words, by the Wold decomposition Theorem, there is a Hilbert space $D$ and a unitary $V: H \rightarrow H^2(\mathbb{D}, D)$ such that $VT = M_z V$ holds.
(Here, $H^2(\mathbb{D}, D)$ denotes the $D$-valued Hardy space on $\mathbb{D}$ and $$M_z: H^2(\mathbb{D}, D) \rightarrow H^2(\mathbb{D}, D), (M_zf)(z) = z f(z)$$ the shift operator on it)
Is it true that we also have $VT^* = M_z^* V$ in this case?
A: Yes, because $V$ is a unitary. This allows you to write the intertwining relation as
$$
VTV^*=M_z.
$$
Now you can take adjoints and multiply by $V$ on the right.
\section{Experimental Setting}
\subsection{Data preprocessing}
Translation pairs were batched together by approximate sequence length. Each training batch contained a set of translation pairs with approximately 32000 source tokens.\footnote{This can be reached by using several GPUs or by accumulating the gradients for several batches and then making an update.}
\subsection{Model parameters}
\paragraph{Transformer (encoder-decoder).} We follow the setup of the Transformer base model~\cite{attention-is-all-you-need}. More precisely, the number of layers in the encoder and in the decoder is $N=6$. We employ $h = 8$ parallel attention layers, or heads. The dimensionality of input and output is $d_{model} = 512$, and the inner layer of the feed-forward networks has dimensionality $d_{ff}=2048$. We use regularization as described in~\cite{attention-is-all-you-need}.
\paragraph{Transformer (decoder).} The difference from the previous model is that the decoder has 12 layers.
\paragraph{LSTM (encoder-decoder)} is a single-layer GNMT~\cite{wu2016googles} with the input and output dimensionality of 512 and hidden sizes of 1024.
\subsection{Optimizer}
The optimizer we use is the same as in~\cite{attention-is-all-you-need}.
We use the Adam optimizer~\cite{adam-optimizer} with $\beta_1 = 0{.}9$, $\beta_2 = 0{.}98$ and $\varepsilon = 10^{-9}$. We vary the learning rate over the course of training, according to the formula:
$$
l_{rate}=scale\cdot \min(step\_num^{-0.5},\; step\_num\cdot warmup\_steps^{-1.5})
$$
We use $warmup\_steps = 16000$, $scale=4$.
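Written out as code (a sketch with our own variable names, not the actual training script), the schedule is:

```python
def transformer_lrate(step, scale=4, warmup_steps=16000):
    # Linear warmup for the first warmup_steps updates,
    # then inverse-square-root decay.
    return scale * min(step ** -0.5, step * warmup_steps ** -1.5)

peak = transformer_lrate(16000)  # the two terms meet at step == warmup_steps
```

The rate grows linearly up to step $16000$ and decays as $step\_num^{-0.5}$ afterwards.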
\section{Monotonicity of Alignments}
To measure how the relative ordering of words in the source and target sentences changes during training, we use two different scores: fuzzy reordering score~\cite{talbot-etal-2011-lightweight} and Kendall tau distance. We evaluate both scores for two permutations of the source sentence, $\sigma_1$ and $\sigma_2$, where $\sigma_1$ is the trivial monotonic alignment and $\sigma_2$ -- the alignment inferred for the generated translation.
\paragraph{Fuzzy Reordering Score} aligns each word in $\sigma_1$ to an instance of itself in $\sigma_2$ taking the first unmatched instance of the word if there is more than one. If $C$ is the number of chunks of contiguously aligned words and $M$ is the number of words in the source sentence, then the fuzzy reordering score is computed as
\begin{equation}
FRS(\sigma_1, \sigma_2) = 1 - \frac{C-1}{M-1}.
\label{eq:fuzzy_reorderin_score}
\end{equation}
This metric assigns a score between 0 and 1, where 1 indicates that the two reorderings are identical. Intuitively, $C-1$ is the number of times a reader would need to jump in order to read the reordering $\sigma_1$ in the order proposed by $\sigma_2$. A larger fuzzy reordering score indicates more monotonic alignments.
\paragraph{Kendall tau distance} counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more dissimilar the two lists are. Kendall tau distance is also called \textit{bubble-sort distance} since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list. We evaluate the normalized distance, i.e. for a list of length $n$ it is normalized by $\frac{n(n-1)}{2}$. The normalized score is between 0 and 1, where 0 indicates that the two reorderings are identical.
\paragraph{Differences between the scores.} While the first score counts only the number of chunks of contiguously aligned words, the second one takes into account only how distant the changes are. For example, let us consider two reorderings: $(2, 1, 4, 3, 6, 5)$
and $(4, 5, 6, 1, 2, 3)$. While for the fuzzy reordering score the least monotonic reordering is the first (more jumps for a reader), for the Kendall tau score -- the second (requires more permutations to reorder). As we will see in Section~\ref{sect:monotonicity}, results for the two scores are similar.
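Both scores admit a compact implementation; the following is our own minimal sketch (not the evaluation code of the cited papers), applied to the two example reorderings:

```python
from itertools import combinations

def fuzzy_reordering_score(perm):
    # 1 - (C-1)/(M-1), where C is the number of chunks of
    # contiguously aligned words.
    chunks = 1 + sum(1 for a, b in zip(perm, perm[1:]) if b != a + 1)
    return 1 - (chunks - 1) / (len(perm) - 1)

def kendall_tau_distance(perm):
    # Normalized number of pairwise disagreements with the
    # monotonic (identity) ordering.
    n = len(perm)
    inversions = sum(1 for i, j in combinations(range(n), 2)
                     if perm[i] > perm[j])
    return inversions / (n * (n - 1) / 2)

print(fuzzy_reordering_score((2, 1, 4, 3, 6, 5)))  # 0.0 (6 chunks)
print(fuzzy_reordering_score((4, 5, 6, 1, 2, 3)))  # 0.8 (2 chunks)
print(kendall_tau_distance((2, 1, 4, 3, 6, 5)))    # 0.2 (3 inversions)
print(kendall_tau_distance((4, 5, 6, 1, 2, 3)))    # 0.6 (9 inversions)
```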
\paragraph{Our setting.} We take sentences of at least 2 words for the fuzzy reordering score and at least 10 tokens for the Kendall tau distance.
\section{Transformer Training Stages}
Figure~\ref{fig:lrp_ende} shows the abstract stages for En-De, Figures~\ref{fig:lm_scores_ende}-\ref{fig:ende_frs} provide the results from Section~\ref{sect:transformer_training_stages} for the other language pair (En-De).
\begin{figure}[t!]
\centering
{\includegraphics[scale=0.31]{pict/ende_tr_lrp_both.png}}
\vspace{-2ex}
\caption{Left: contribution of source, right: entropy of source contributions. En-De. Vertical lines separate the stages.}
\vspace{-2ex}
\label{fig:lrp_ende}
\end{figure}
\begin{figure}[t!]
\centering
{\includegraphics[scale=0.34]{pict/ende_tr_lm_scores_both_no_y.png}}
\caption{KenLM scores. Left: 5-gram model, all training stages; right: different models, the first stage. Horizontal lines show the scores for the references. En-De.}
\vspace{-2ex}
\label{fig:lm_scores_ende}
\end{figure}
\begin{figure}[t!]
\centering
{\includegraphics[scale=0.3]{pict/ende_tr_freq_proportion_both_with_side_bbox.png}}
\caption{Proportion of tokens of different frequency ranks in model translations. En-De.}
\vspace{-2ex}
\label{fig:freq_ranks_ende}
\end{figure}
\begin{figure}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.19]{pict/ende_tr_bleu.png}}
\
\subfloat[]
{\includegraphics[scale=0.28]{pict/ende_tr_accuracy_side_bbox.png}}
\vspace{-1ex}
\caption{(a) BLEU score; (b) token-level accuracy (the proportion of cases where the correct next token is the most probable choice). WMT En-De.}
\vspace{-2ex}
\label{fig:ende_bleu_acc}
\end{figure}
\begin{figure}[t!]
\centering
\subfloat[]
{\ \ \includegraphics[scale=0.20]{pict/ende_tr_frs_thin_arrow.png}}
\quad
\subfloat[]
{\includegraphics[scale=0.20]{pict/ende_tr_kendall_thin_arrow.png}}
\vspace{-1ex}
\caption{(a) fuzzy reordering score (for references: 0.5), (b) Kendall tau distance (for references: 0.08); WMT En-De. The arrows point in the direction of less monotonic alignments (more complicated reorderings).}
\vspace{-2ex}
\label{fig:ende_frs}
\end{figure}
\section{Other Models}
Figure~\ref{fig:other_models_ende_appendix} is a version of Figure~\ref{fig:other_models_ende}a from the main text, but with the scores for all three models. Figure~\ref{fig:other_models_enru} provides corresponding results for the other language pair (En-Ru). Note that in Figure~\ref{fig:other_models_enru}b the reordering score for the LSTM model stops earlier: this is because the LSTM model converges earlier than the other models.
\begin{figure}[t!]
\centering
{\includegraphics[scale=0.23]{pict/ende_lm_scores_all.png}}
\caption{Target-side LM scores (5-gram); En-De.}
\label{fig:other_models_ende_appendix}
\end{figure}
\begin{figure}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.23]{pict/enru_lm_scores_all.png}}
\ \
\subfloat[]
{\includegraphics[scale=0.23]{pict/enru_frs_all.png}}
\caption{(a) target-side LM scores, (b) fuzzy reordering score (for references: 0.6); WMT En-Ru.}
\vspace{-2ex}
\label{fig:other_models_enru}
\end{figure}
\section{Practical Applications}
\subsection{Experimental Setting}
\paragraph{Model.} The model is the version of the vanilla NAT by~\citet{gu2018nonautoregressive} re-implemented by~\citet{Zhou2020Understanding}. Namely, instead of modeling fertility as described in the original paper, \citet{Zhou2020Understanding} monotonically copy the encoder embeddings to the input of the decoder. We used the code released by~\citet{Zhou2020Understanding}.\footnote{ \url{https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive_translation}}
\paragraph{Training.} For all experiments, we follow the setting by~\citet{Zhou2020Understanding}. Note that in their work, training NAT models required 32 GPUs. In our setting, we ensured the same batch size by accumulating gradients for several batches (in \texttt{fairseq}, this is done using the \texttt{--update-freq} option).
\paragraph{NAT Inference.} Following previous work, for this vanilla NAT model we use a straightforward decoding algorithm which simply picks the \texttt{argmax} at every position.
\section{Introduction}
In the last couple of decades, the two main machine translation paradigms have been statistical and neural MT. Statistical MT (SMT) decomposes the translation task into several components (e.g., lexical translation probabilities, alignment probabilities, target-side language model, etc.) which are learned separately and then combined in a translation model. Differently, neural MT (NMT) models the entire translation process with a single neural network that is trained end-to-end.
Although joint training of all the components is one of the obvious NMT strengths, this is also one of its challenging aspects. While SMT models different competences with distinct model components and, therefore, can easily validate and/or improve each of them, NMT acquires these competences within the same network over the course of training. Even though previous work shows how to improve some of the competences in NMT, e.g., by using lexical translation probabilities, phrase memories, target-side LM, alignment information~(\citealp{arthur-etal-2016-incorporating,He2016ImprovedNM,tang2016neural,wang-etal-2017-translating,zhang-etal-2017-prior,dahlmann-etal-2017-neural,Gulcehre2015,Glehre2017OnIA,He2016ImprovedNM,Sriram2017ColdFusion,dahlmann-etal-2017-neural,stahlberg-etal-2018-simple,mi-etal-2016-supervised,liu-etal-2016-neural,chen2016guided,alkhouli-etal-2016-alignment,alkhouli-ney-2017-biasing,park-tsvetkov-2019-learning,Song2020AlignmentEnhancedTF} among others), it is still not clear how and when NMT acquires these competences during training. For example, are there any stages where NMT focuses on different aspects of translation, e.g., fluency (agreement on the target side) or adequacy (i.e.\ connection to the source), or does it improve everything at the same rate? Does it learn word-by-word translation first and more complicated patterns later, or is there a different behavior? This is especially interesting in light of a recent work analyzing how NMT balances the two different types of context: the source and prefix of the target sentence~\cite{voita2021analyzing}. As it turns out, changes in NMT training are non-monotonic and form several distinct stages (e.g., stages changing direction from decreasing influence of source to increasing), which hints that the NMT training consists of stages with qualitatively different changes.
In this paper, we try to understand what happens in these stages by analyzing translations generated at different training steps. Specifically, we focus on the aspects related to the three core SMT components: target-side language modeling, lexical translation, and reordering. We find that during training, NMT focuses on these aspects in the specified above order.
Intuitively, it starts by hallucinating frequent n-grams and sentences in the target language, then comes close to word-by-word translation, and finally learns more complicated reordering patterns. We confirm these findings for several models, LSTM and Transformer, and different modeling paradigms, encoder-decoder and decoder-only, i.e.\ LM-style machine translation where a left-to-right language model is trained on the concatenation of source and target sentences.
Finally, we show how such an understanding of the training process can be useful in practice. Namely, we note that during a large part of training, a model's quality (e.g. BLEU and token-level predictive accuracy) changes little, but reordering becomes more complicated. This means that by using different training checkpoints, we can get high-quality translations of varying complexity, which is useful in settings where data complexity matters.
For example, guiding teacher model selection for distillation in non-autoregressive machine translation (NAT) can improve the quality of a vanilla NAT model by more than 1 BLEU.
Our contributions are as follows:
\begin{itemize}
\item we show that during training, NMT undergoes the following three stages:
\begin{itemize}
\setlength\itemsep{0em}
\item[$\circ$] target-side language modeling;
\item[$\circ$] learning how to use source and approaching word-by-word translation;
\item[$\circ$] refining translations, visible by increasingly complex reorderings, but almost invisible to standard metrics (e.g.\ BLEU).
\end{itemize}
\item we confirm our finding for different models and modeling paradigms;
\item we explain how our analysis can be useful in practice and, as an example, show how it can improve a non-autoregressive NMT model.
\end{itemize}
\section{Training Stages: The Two Viewpoints}
In this section, we introduce two points of view on the NMT training process. The first one comes from previous work showing distinct stages in NMT training. These stages are formed by looking at a model's internal workings and changes in the way it balances source and target information when forming a prediction. The second point of view is from this work: we take model translations at different training steps and look at some of their aspects mirroring, in a way, core SMT components.
While these two points of view are complete opposites (one sees only the model's innermost workings, the other -- only its output), only taken together they can fully describe the training process. We start from the first, abstract, stages, then show how these inner processes look on the outside and conclude with one of the immediate practical applications of our analysis~(Section~\ref{sect:practical_application}).
\subsection{The Abstract Viewpoint: Relative Token Contributions to NMT Predictions}
\begin{figure}[t!]
\centering
{\includegraphics[scale=0.30]{pict/enru_tr_lrp_both.png}}
\caption{Contribution of source and entropy of source contributions. En-Ru. Vertical lines separate the stages.}
\vspace{-2ex}
\label{fig:lrp_enru}
\end{figure}
The `abstract' stages come from our previous work measuring how NMT balances the two different types of context: the source and prefix of the target sentence~\cite{voita2021analyzing}. We adapt one of the attribution methods, Layerwise Relevance Propagation~\cite{bach2015pixel}, to the Transformer, and show how to evaluate the proportion of each token's influence for a given prediction. Then these relative token influences are used to evaluate the total contribution of the source (by summing up contributions of all source tokens) or to see whether the token contributions are more or less focused (by evaluating the entropy of these contributions).
Among other things, \citet{voita2021analyzing} look at how the total source contribution and the entropy of source contributions change during training. We repeated these experiments for WMT14 En-Ru and En-De.\footnote{Using the released code: \url{https://github.com/lena-voita/the-story-of-heads}.} Figure~\ref{fig:lrp_enru} confirms previous observations: the training process is non-monotonic with several distinct stages, e.g. stages changing direction from decreasing influence of source to increasing.
These results suggest that during training, NMT undergoes stages of qualitatively different changes. For example, a decreasing and then increasing influence of the source likely indicates that the model first learns to rely on the target prefix more (i.e.\ to focus on target-side language modeling) and only after that focuses on the connection to the source (i.e.\ adequacy rather than fluency). While these hypotheses are reasonable, to confirm them we have to look not only at how model predictions are formed but also at the predictions themselves.
\subsection{The Practical Viewpoint: Model Translations}
In this viewpoint, we are interested in changes in model output, i.e. translations. We measure:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item[$\circ$] target-side language modeling scores;
\item[$\circ$] translation quality;
\item[$\circ$] monotonicity of alignments.
\end{itemize}
Note that these characteristics are related to three core components of the traditional SMT models: target-side language model, translation model, and reordering model. Although we are mainly interested in NMT models and, except for the language modeling scores, do not measure the quality of the corresponding SMT components directly, this relation to SMT is important. While machine translation is now mostly neural, it is still not clear how (e.g., in which order) those competences which used to be modelled with distinct components are now learned jointly within a single neural network.
\section{Experimental Setting}
\label{sect:experimental_setting}
\subsection{Models, Data and Preprocessing}
\paragraph{Models.} We consider three models:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item[$\circ$] Transformer encoder-decoder;
\item[$\circ$] LSTM encoder-decoder;
\item[$\circ$] Transformer decoder (LM-style NMT).
\end{itemize}
For the first model, we follow the setup of the Transformer base~\cite{attention-is-all-you-need}. LSTM encoder-decoder is a single-layer GNMT~\cite{wu2016googles}. The last model is the Transformer decoder trained as a left-to-right language model. In training, the model receives concatenated source and target sentences separated by a token-delimiter; in inference, it receives only the source sentence and the delimiter and is asked to continue generation.
\paragraph{Datasets.} We use the WMT news translation shared task for English-German and English-Russian: for En-De, WMT 2014 with 5.8m sentence pairs, for En-Ru~-- 2.5m sentence pairs (parallel training data excluding UN and Paracrawl). Since our observations are similar for both languages, in the main text we show figures for one of them and in the appendix~-- for the other.
\paragraph{Preprocessing.} The data is lowercased and encoded using BPE~\cite{sennrich-bpe}. We use separate source and target vocabularies of about 32k tokens for encoder-decoder models, and a joint vocabulary of about 50k tokens for LM-style models.
For each experiment, we randomly choose 2/3 of the dataset for training and use the remaining 1/3 as a held-out set for analysis~(see Section~\ref{sect:introduce_reordering_score}).
More details on hyperparameters, preprocessing, and training can be found in the appendix.
\subsection{Target-Side LM Scores}
For each of the models, we train 2-, 3-, 4- and 5-gram
KenLM~\cite{heafield-2011-kenlm}\footnote{\url{https://github.com/kpu/kenlm}} language models on target sides of the corresponding training data (segmented with BPE). We report KenLM scores for the translations of the development sets.
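KenLM requires trained model files; as a self-contained illustration of the underlying idea (translations built from frequent target-side n-grams score higher), a toy add-one-smoothed bigram scorer might look as follows (our own sketch, not the models used in the paper):

```python
import math
from collections import Counter

def train_bigram_lm(corpus):
    """Toy add-one-smoothed bigram LM over whitespace tokens,
    a stand-in for the KenLM models used here."""
    unigrams, bigrams, vocab = Counter(), Counter(), set()
    for sent in corpus:
        toks = ["<s>"] + sent.split() + ["</s>"]
        vocab.update(toks)
        unigrams.update(toks[:-1])           # history counts
        bigrams.update(zip(toks, toks[1:]))
    V = len(vocab)

    def score(sent):
        toks = ["<s>"] + sent.split() + ["</s>"]
        return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
                   for a, b in zip(toks, toks[1:]))
    return score

score = train_bigram_lm(["the cat sat", "the cat ran"])
```

A frequent word order then receives a higher log-probability than a shuffled one, e.g. \texttt{score("the cat sat") > score("sat cat the")}.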
\begin{figure*}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.32]{pict/enru_tr_lm_scores_both_no_y.png}}
\quad\quad
\subfloat[]
{\includegraphics[scale=0.32]{pict/enru_tr_freq_proportion_both_with_side_bbox.png}}
\caption{(a) KenLM scores (horizontal dashed lines are the scores for the references); (b) proportion of tokens of different frequency ranks in model translations. En-Ru.}
\label{fig:lm_and_freq_ranks}
\end{figure*}
\begin{figure*}[t!]
{\includegraphics[scale=0.24]{pict/lm_stage_example_de.png}}
\caption{Translations at different steps during training. En-De.}
\label{fig:lm_stage_examples_de}
\end{figure*}
\subsection{Monotonicity of Alignments}
\label{sect:introduce_reordering_score}
To measure how the relative ordering of words in the source and its translation changes during training, we use two different scores used in previous work~\cite{burlot-yvon-2018-using,Zhou2020Understanding}. We evaluate the scores for two permutations of the source: the trivial monotonic alignment and the alignment inferred for the generated translation.
\paragraph{Fuzzy Reordering Score}~\cite{talbot-etal-2011-lightweight} counts the number of chunks of contiguously aligned words and, intuitively, it is based on the number of times a reader would need to jump in order to read one reordering in the order proposed by the other. The score is between 0 and 1, where a larger score indicates more monotonic alignments.
\paragraph{Kendall tau distance}~\cite{kendal-tau} is also called \textit{bubble-sort distance} since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list. We evaluate the normalized distance: it is between 0 and 1, where 0 indicates the monotonic alignment.
The main difference between the scores is that the first one takes into account only the number of jumps, while the second also considers their distance. For a formal description of the scores and their differences, see the appendix.
\paragraph{Our setting.} For each of the considered model checkpoints, we obtain datasets where the sources come from the held-out 1/3 of the original dataset, and targets are their translations. For these datasets, we infer alignments using \texttt{fast\_align}~\cite{dyer-etal-2013-simple}\footnote{\url{https://github.com/clab/fast_align}}.
\section{Transformer Training Stages}
\label{sect:transformer_training_stages}
In this section, we discuss the standard encoder-decoder Transformer. In the next section, we mention differences with several other models.
We first analyze the results for each of the three competences and then characterize the stages based on these practical observations. In all figures, we show the abstract stages with vertical lines to link the results to the changes in token contributions.
\subsection{Target-Side Language Modeling}
Figure~\ref{fig:lm_and_freq_ranks}a shows changes in the language modeling scores. We see that most of the change happens in the very beginning: the scores go up and peak much higher than that of the references. This means that the model generates sentences with very frequent n-grams rather than diverse texts similar to references. Indeed, Figure~\ref{fig:lm_and_freq_ranks}a (right) shows that for a part of the training (from 1k to 2k iterations), the scores for simpler models (e.g., 2-gram) are higher than for the more complicated ones (e.g., 5-gram). This means that generated translations tend to consist of frequent words and bigrams, but larger subsequences are not necessarily fluent.
To illustrate this, we show how translations of one of the sentences evolve at the beginning of training~(Figure~\ref{fig:lm_stage_examples_de}). As expected, the translations first evolve from repetitions of frequent tokens to frequent bigrams and trigrams, and finally to longer frequent phrases. To make this more clear, we also show the proportion of tokens of different frequency ranks in generated translations~(Figure~\ref{fig:lm_and_freq_ranks}b). First (iterations 0-500), all generated tokens are from the top-10 most frequent tokens, then only from the top-50, and only later do less frequent tokens start to appear. From Figure~\ref{fig:lm_stage_examples_de} we see that this happens when the source comes into play: tokens related to the source become woven into translations. Overall, this evolution from using short target-side contexts to longer ones and, subsequently, to using the source relates to works in computer vision discussing `shortcut features'~\cite{DBLP:journals/corr/abs-2004-07780}, as well as differences in the progression of extracting `easy' and `difficult' features during training~\cite{NEURIPS2020_71e9c662}.
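The frequency-rank proportions of Figure~\ref{fig:lm_and_freq_ranks}b can be computed with a sketch like the following. The function names and the whitespace tokenization are our own illustrative assumptions, not the paper's exact implementation.

```python
from collections import Counter

def frequency_ranks(corpus_tokens):
    """Map each token to its training-corpus frequency rank (1 = most frequent)."""
    counts = Counter(corpus_tokens)
    return {tok: r for r, (tok, _) in enumerate(counts.most_common(), start=1)}

def topk_proportion(translations, ranks, k):
    """Proportion of generated tokens whose frequency rank is within the top k.
    Unseen tokens are treated as having infinite rank."""
    tokens = [t for sent in translations for t in sent.split()]
    hits = sum(1 for t in tokens if ranks.get(t, float("inf")) <= k)
    return hits / len(tokens)
```

Sweeping `k` over, e.g., 10, 50, and 500 for checkpoints along training reproduces the kind of curves shown in the figure.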
Note also that model translations converge to higher LM scores than references (Figure~\ref{fig:lm_and_freq_ranks}a). This is expected: compared to references, beam search translations are simpler in various aspects, e.g. they are simpler syntactically, contain fewer rare tokens and less reordering~\cite{burlot-yvon-2018-using,pmlr-v80-ott18a,Zhou2020Understanding}, and lead to more confident token contributions inside the model~\cite{voita2021analyzing}. For language models more generally, beam search texts are also less surprising than human ones~\cite{Holtzman2020The}.
To summarize, the beginning of training is mostly devoted to target-side language modeling: we see huge changes in the LM scores (Figure~\ref{fig:lm_and_freq_ranks}a), and the model hallucinates frequent n-grams (Figure~\ref{fig:lm_stage_examples_de}). This agrees with the abstract stages shown in Figure~\ref{fig:lrp_enru}: in the first stage, the total contribution of the source substantially decreases. This means that in the trade-off between information coming from the source and the target prefix, the model gives more and more priority to the prefix.
\subsection{Translation Quality}
\label{sect:main_translation_quality}
\begin{figure}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.19]{pict/enru_tr_bleu.png}}
\
\subfloat[]
{\includegraphics[scale=0.28]{pict/enru_tr_accuracy_side_bbox.png}}
\vspace{-1ex}
\caption{(a) BLEU score; (b) token-level accuracy (the proportion of cases where the correct next token is the most probable choice). WMT En-Ru.}
\vspace{-2ex}
\label{fig:enru_bleu_acc}
\end{figure}
Figure~\ref{fig:enru_bleu_acc}a shows the BLEU score on the development set during training. For a more fine-grained analysis, we also plot token-level predictive accuracy separately for target token frequency groups~(Figure~\ref{fig:enru_bleu_acc}b). We see that both the BLEU score and accuracy become large very fast: e.g., after the first 20k iterations (25$\%$ of the training process), the scores are already good. Interestingly, the accuracy for frequent tokens reaches its maximum value (the score of the converged model) very quickly. This agrees with our previous observations in Figures~\ref{fig:lm_stage_examples_de} and~\ref{fig:lm_and_freq_ranks}b: at the beginning of training, the model generates frequent tokens more readily than rare ones. Figure~\ref{fig:enru_bleu_acc}b further confirms this: the accuracy for rare tokens improves more slowly than for the rest.
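A minimal sketch of the per-frequency-group accuracy computation (our own illustrative code; the bucket boundaries and names are assumptions, not the paper's exact setup):

```python
def accuracy_by_bucket(pred_tokens, ref_tokens, ranks, buckets):
    """Token-level accuracy split by reference-token frequency rank.
    `pred_tokens` are the model's most probable next tokens,
    `ref_tokens` the corresponding correct tokens, and `buckets`
    a list of (lo, hi) inclusive rank ranges."""
    stats = {b: [0, 0] for b in buckets}  # bucket -> [correct, total]
    for pred, ref in zip(pred_tokens, ref_tokens):
        r = ranks.get(ref, float("inf"))
        for lo, hi in buckets:
            if lo <= r <= hi:
                stats[(lo, hi)][1] += 1
                if pred == ref:
                    stats[(lo, hi)][0] += 1
    return {b: (c / t if t else None) for b, (c, t) in stats.items()}
```

Evaluating this at several checkpoints, with buckets such as top-100 vs. rarer tokens, gives curves of the kind shown in Figure~\ref{fig:enru_bleu_acc}b.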
What is not clear is what happens during the last half of the training (iterations from 40k to 80k): the BLEU score improves only by 0.4, accuracy does not seem to change noticeably even for rare tokens, the proportion of generated tokens of different frequency ranks converges even earlier (Figure~\ref{fig:lm_and_freq_ranks}b), and patterns in token contributions also do not change much~(Figure~\ref{fig:lrp_enru}). This is what we are about to find out in the next section.
\begin{figure}[t!]
\centering
\subfloat[]
{\ \ \includegraphics[scale=0.20]{pict/enru_tr_frs_thin_arrow.png}}
\quad
\subfloat[]
{\includegraphics[scale=0.20]{pict/enru_tr_kendall_thin_arrow.png}}
\vspace{-1ex}
\caption{(a) fuzzy reordering score (for references: 0.6), (b) Kendall tau distance (for references: 0.06); WMT En-Ru. The arrows point in the direction of less monotonic alignments (more complicated reorderings).}
\vspace{-2ex}
\label{fig:enru_frs}
\end{figure}
\begin{figure*}[t!]
\subfloat[En-De]
{\includegraphics[scale=0.24]{pict/example_reordering_de.png}}
\\
\subfloat[En-Ru]
{\includegraphics[scale=0.24]{pict/example_reordering_ru.png}}
\vspace{-1ex}
\caption{Translations at different training steps. Same-colored chunks are approximately aligned to each other.}
\vspace{-2ex}
\label{fig:reordering_stage_examples}
\end{figure*}
\subsection{Monotonicity of Alignments}
\label{sect:monotonicity}
While it is known that, compared to references, beam search translations have more monotonic alignments~\cite{burlot-yvon-2018-using,Zhou2020Understanding}, it is not clear how monotonicity of alignments changes during model training. We show changes in the two reordering scores in Figure~\ref{fig:enru_frs}.\footnote{Note that we evaluate the scores starting not from the very beginning of training but after at least 6k updates. This is because evaluating monotonicity of alignments makes sense only when translations are reasonable.}
We can say that during the second half of the training,
the model is slowly refining translations and, among the three competences we look at, the most visible changes are due to more complicated (i.e. less monotonic) reorderings. For example, as we already mentioned above, during this part of the training none of the scores we looked at so far changes much, whereas changes in both reordering scores are very substantial. The change in the fuzzy reordering score is only half as large as during the preceding stage. Moreover, the alignments keep changing and become less monotonic even after both BLEU and token-level accuracy (i.e. the metric that matches the model's training objective) have converged, i.e. after iteration 80k (Figure~\ref{fig:enru_frs}).
Overall, we interpret this refinement stage as the model slowly learning to reduce interference from the source text (typical for human translation~\cite{Volansky2015OnTF} and exacerbated even more in NMT~\cite{toral-2019-post}): it learns to apply complex reorderings to more closely follow typical word order in the target language. This means that while language modeling improves more prominently during the first training stage, there is a long tail of less frequent and more nuanced patterns that the model learns later.
Another example of such nuanced changes in translation not detected with standard metrics is context-aware NMT.
Previous work has criticized using BLEU as a stopping criterion, showing that even when a model has converged in terms of BLEU, it continues to improve in terms of agreement with context~\cite{voita-etal-2019-good}.
To illustrate changes during this last stage, we show two examples in Figure~\ref{fig:reordering_stage_examples}. On average, the translations at the beginning of the last stage tend to have the same word order as the corresponding source sentences: the alignments are highly monotonic. Formally, the similarity to the word-by-word translation is seen from the very low Kendall tau distance after 6k-14k training iterations (Figure~\ref{fig:enru_frs}b): this means that a very small number of swaps is needed to transform the trivial monotonic translation into the one produced by the model. Interestingly, at this point, some undertranslation errors can be explained via failures to perform a complex reordering.
In the example in Figure~\ref{fig:reordering_stage_examples}b, the phrase `axis configuration' cannot be translated into Russian preserving the same word order, which makes the model omit the translation of `configuration'.
\subsection{Characterizing Training Stages}
\label{sect:characterizing_training_stages}
To summarize, the NMT training process can be described as undergoing the following three stages:
\begin{itemize}
\setlength\itemsep{-0.2em}
\item[$\circ$] target-side language modeling;
\item[$\circ$] learning how to use the source and coming close to a word-by-word translation;
\item[$\circ$] refining translations, visible by an increase in complexity of the reorderings and almost invisible by standard evaluation (e.g. BLEU).
\end{itemize}
While the borders of these practical stages are not as strictly defined as the abstract ones with the changes of monotonicity in contribution graphs~(Figure~\ref{fig:lrp_enru}), these two points of view on the training process mirror each other very well. From the abstract point of view with token contributions, the model first starts to form its predictions based more on the prefix and ignores the source, then source influence increases quickly, then very little is going on~(Figure~\ref{fig:lrp_enru}). From the practical point of view with model translations, the model first hallucinates frequent tokens, then phrases, then sentences (mirrors source contributions going down), then quickly improves translation quality (mirrors source contribution going up), then little is going on according to the standard scores, but alignments become noticeably less monotonic. As we see, both points of view show the same kinds of processes from different perspectives: from the inside and the outside of the model.
\section{Other NMT Models}
In this section, we compare different architectures within the same encoder-decoder framework (Transformer vs LSTM), and different frameworks with the Transformer architecture (encoder-decoder vs decoder-only).
Overall, we find that all models follow the behavior described in Section~\ref{sect:characterizing_training_stages}; here we discuss some of their differences.
\paragraph{Transformer vs LSTM.} As might be expected from the low BLEU scores (Table~\ref{tab:bleu_scores}), LSTM translations are simpler than the Transformer ones. We see that they are less surprising according to the target-side language modeling scores (Figure~\ref{fig:other_models_ende}a\footnote{Note that in Figure~\ref{fig:other_models_ende}a, only the scores of the encoder-decoder models can be compared because of differences in model vocabulary (see Section~\ref{sect:experimental_setting}). In the appendix, we show scores for all three models.}) and have more monotonic alignments (Figure~\ref{fig:other_models_ende}b). Regarding the latter, it is not clear whether this is because of the lower model capacity or because LSTM has an inductive bias towards more monotonic alignments; we leave this to future work.
\paragraph{Encoder-decoder vs decoder-only.} Table~\ref{tab:bleu_scores} shows that decoder-only (LM-style) NMT is not much worse than the standard encoder-decoder model, especially in the higher-resource setting (e.g., En-De). However, the decoder-only model has much simpler reordering patterns compared to the standard Transformer: its reordering scores are very close to the much weaker LSTM model (Figure~\ref{fig:other_models_ende}b). One possible explanation is that the bidirectional nature of Transformer's encoder facilitates learning more complicated reorderings.
\begin{table}[t!]
\centering
\begin{tabular}{lcc}
\toprule
\bf model & \bf En-Ru & \bf En-De \\
\cmidrule{1-3}
Transformer (enc-dec) & 35{.}93 & 28{.}18 \\
LSTM (enc-dec) & 30{.}14 & 24{.}03\\
Transformer-LM (dec) & 34{.}16 & 26{.}76 \\
\bottomrule
\end{tabular}
\caption{BLEU scores: \texttt{newstest2014} for En-Ru and \texttt{newstest2017} for En-De.}
\vspace{-2ex}
\label{tab:bleu_scores}
\end{table}
\begin{figure}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.23]{pict/ende_lm_scores_encdec.png}}
\ \
\subfloat[]
{\includegraphics[scale=0.23]{pict/ende_frs_all.png}}
\vspace{-1ex}
\caption{(a) target-side LM scores (5-gram), (b) fuzzy reordering score (for references: 0.5); WMT En-De.}
\vspace{-2ex}
\label{fig:other_models_ende}
\end{figure}
\section{Practical Implications}
\label{sect:practical_application}
We showed that during a large part of the training, the translation quality (e.g., BLEU) changes little, but the alignments become less monotonic. Intuitively, the translations become more complicated while their quality remains roughly the same.
One way to directly apply our analysis is to consider tasks and settings where data properties such as regularity and/or simplicity are important. For example, in neural machine translation,
higher monotonicity of artificial sources was hypothesized to be a facilitating factor for back-translation~\cite{burlot-yvon-2018-using}; additionally, complexity of the distilled data is crucial for sequence-level distillation in non-autoregressive machine translation~\cite{Zhou2020Understanding}. Such examples are not limited to machine translation: in emergent languages,
languages with higher `regularity' bring learning speed advantages for communicating neural agents~\cite{Ren2020Compositional}.
In this section, we consider non-autoregressive NMT, and leave the rest to future work.
\subsection{Non-Autoregressive Machine Translation}
Non-autoregressive neural machine translation~(NAT) \citep{gu2018nonautoregressive} differs from traditional NMT in the way it generates target sequences: instead of the standard approach where target tokens are predicted step-by-step by conditioning on the previous ones, NAT models predict the whole sequence simultaneously. This is possible only under the assumption that the output tokens are independent of each other, which is unrealistic for natural language.
Fortunately, while this independence assumption is unrealistic for real references, it might be more plausible for simpler sequences, e.g. artificially generated translations. That is why targets for NAT models are usually not references but beam search translations of the standard autoregressive NMT (which, as we already mentioned above, are simpler than references in many aspects).
This is called \textit{sequence-level knowledge distillation}~\cite{kim-rush-2016-sequence}, and it is currently one of the de-facto standard parts of the NAT training pipelines~(\citet{gu2018nonautoregressive,lee-etal-2018-deterministic,ghazvininejad-etal-2019-mask} to name a few).
Recently~\citet{Zhou2020Understanding} showed that the quality of a NAT model strongly depends on the complexity of the distilled data, and changing this complexity can improve the model. Since distilled data consists of translations from a standard autoregressive teacher, our analysis gives a very simple way of modifying the complexity of this data. While usually a teacher is a fully converged model, we propose to use as teachers intermediate checkpoints during training. Since during a large part of training, NMT quality (e.g., BLEU) changes little, but the alignments become less monotonic, earlier checkpoints can produce simpler and more monotonic translations. We hypothesize that these translations are more suitable as targets for NAT models, and we confirm this with the experiments.
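One simple way to operationalize the checkpoint choice is the following heuristic (our own illustration, not a procedure from the paper): among checkpoints whose BLEU score is within a tolerance of the best, prefer the one whose translations are most monotonic, i.e., have the highest fuzzy reordering score.

```python
def pick_teacher(stats, bleu_tolerance=1.0):
    """Pick a teacher checkpoint for distillation.
    `stats` is a list of dicts with keys 'step', 'bleu' and 'frs'
    (fuzzy reordering score; higher = more monotonic alignments).
    Among checkpoints within `bleu_tolerance` BLEU of the best one,
    return the checkpoint with the simplest (most monotonic) reorderings."""
    best = max(s["bleu"] for s in stats)
    candidates = [s for s in stats if s["bleu"] >= best - bleu_tolerance]
    return max(candidates, key=lambda s: s["frs"])
```

In our En-De experiments this kind of trade-off selects an early checkpoint (around 40k updates) whose BLEU is close to the converged model's but whose distilled data is much simpler.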
\subsection{Setting}
Following previous work~\cite{Zhou2020Understanding}, we train the same NAT model on their preprocessed dataset\footnote{We used the code and the data from \url{https://github.com/pytorch/fairseq/tree/master/examples/nonautoregressive_translation}.} and vary only distilled targets.
\paragraph{Model.} The model is the re-implemented by~\citet{Zhou2020Understanding} version of the vanilla NAT by~\citet{gu2018nonautoregressive}. For more details, see appendix.
\paragraph{Dataset.} The dataset is WMT14 English-German (En-De) with newstest2013 as the validation set and newstest2014 as the test set, and BPE vocabulary of 37,000. We use the preprocessed dataset and the vocabularies released by~\citet{Zhou2020Understanding}.
\paragraph{Distilled targets.} The teacher is the standard Transformer-base from~\texttt{fairseq}~\cite{fairseq}. For the baseline distilled dataset, we use the fully converged model (in this case, the model after 200k updates). For other datasets, we use earlier checkpoints.
\paragraph{Evaluation.} We average the last 10 checkpoints.
\subsection{Experiments}
Figure~\ref{fig:nat_results}c shows the BLEU scores for NAT models trained with distilled data obtained from different teacher's checkpoints; the baseline is the fully converged model (200k iterations). We see that by taking an earlier checkpoint, after 40k iterations, we improve NAT quality by 1{.}1 BLEU. For this checkpoint, the teacher's BLEU score is not much lower than that of the final model~(Figure~\ref{fig:nat_results}a), but the reorderings are much simpler~(a higher fuzzy reordering score in Figure~\ref{fig:nat_results}b).
To vary the complexity of the distilled data, \citet{Zhou2020Understanding} proposed to apply either Born-Again networks~(BANs)~\cite{pmlr-v80-furlanello18a} or mixture-of-experts~(MoE)~\cite{pmlr-v97-shen19c}. Unfortunately, MoE is rather complicated and requires careful hyperparameter tuning~\cite{pmlr-v97-shen19c}, and BANs are time- and resource-consuming. They involve training the AT model till convergence and then translating the training data to get a distilled dataset; this is repeated for several iterations (e.g., 5-7), each time training on the most recently generated dataset. Compared to these methods, our approach is extremely simple and does not require a lot of computational resources (e.g., instead of fully training the AT teacher several times as in BANs, our approach requires only partially training one AT teacher).
Note that in this work,
we provide these experiments mainly to illustrate how our analysis can be useful in the settings where data complexity matters and, therefore, limit ourselves to only using different teacher checkpoints. Future work, however, can investigate possible combinations with other approaches. For example, to further improve quality, our method can be combined with the Born-Again networks while still requiring fewer resources due to only partial training of the teachers.
\begin{figure}[t!]
\centering
\subfloat[]
{\includegraphics[scale=0.25]{pict/practical_at_bleu.png}}
\subfloat[]
{\includegraphics[scale=0.25]{pict/practical_frs.png}}
\subfloat[]
{\includegraphics[scale=0.11]{pict/practical_nat_with_baseline.png}}
\vspace{-1ex}
\caption{(a) BLEU score of the AT Transformer-base (teacher for distillation); (b) fuzzy reordering score for the distilled training data obtained from checkpoints of the AT teacher; (c) BLEU scores for the vanilla NAT model trained on different distilled data.}
\vspace{-2ex}
\label{fig:nat_results}
\end{figure}
\section{Additional Related Work}
\label{sect:related_work}
Other work connecting neural and traditional approaches include modeling modifications, such as modeling coverage and/or fertility~\cite{tu-etal-2016-modeling,mi-etal-2016-coverage,cohn-etal-2016-incorporating,feng-etal-2016-improving} and several other modifications~\cite{zhang-etal-2017-improving,stahlberg-etal-2017-neural,huang2018towards}, analysis of the relation between attention and word alignments~\cite{ghader-monz-2017-attention}, and word alignment induction from NMT models~\cite{li-etal-2019-word,garg-etal-2019-jointly,Song2020TowardsBW,zenkel-etal-2020-end,chen-etal-2020-accurate}.
Previous analysis of NMT learning dynamics
include analyzing how the trainable parameters affect an NMT model~\cite{zhu2020understanding} and looking at the speed of learning specific discourse phenomena in context-aware NMT~\cite{voita-etal-2019-good,voita-etal-2019-context}.
\section{Conclusions}
We analyze how NMT acquires different competences during training and look at the competences related to three core SMT components. We find that NMT first focuses on learning target-side language modeling, then improves translation quality approaching word-by-word translation, and finally learns more complicated reordering patterns. We show that such an understanding of the training process can be useful in settings where data complexity matters and illustrate this for non-autoregressive MT; other tasks can be considered in future work. Additionally, our results can contribute to the discussion of (i) `easy' and `difficult' task-relevant features, including `shortcut features', and (ii) the limitations of the BLEU score.
\section*{Acknowledgments} We would like to thank the anonymous reviewers for their comments. Lena is supported by the Facebook PhD Fellowship. Rico Sennrich acknowledges support of the Swiss National Science Foundation (MUTAMUR; no. 176727). Ivan Titov acknowledges support of the European Research Council (ERC StG BroadSem 678254), Dutch National Science Foundation (VIDI 639.022.518) and EU Horizon 2020 (GoURMET, no. 825299).
Bunk rooms climb the social ladder – Lowell Sun
By Robert F. Williams Last updated Sep 8, 2021
The bunk bed, born two centuries ago as a measure of austerity, now lives in a certain splendor. For country houses, luxury hotels and yachts, architects and designers dress up its basic components — the posts, balustrades and ladders — with wood, bright colors, playful cutouts and gingerbread trim.
When it comes to rooms with custom bunk beds, "there's always an element of whimsy," said Kara Miller, an interior designer based in Jupiter, Fla., who has carved filigree bunk beds based on Chinese Chippendale precedents. When customers start planning new homes, she said, bedroom windows, doors and closets are positioned up front to leave cubic feet available for bunk beds.
"You can let your imagination run wild with them," she added.
The trend has been partly attributed to COVID-19. As withdrawing from company may become necessary, some owners want to be prepared to hole up in a cheerful and comforting place that can accommodate groups of people not necessarily willing to share mattresses. Liz Caan, an interior designer in Newton, Mass., said her bunk room customers say, in effect, "We want to be able to sleep a billion people."
Bunk beds by Kara Miller.Credit… Carmel Brantley
The owners report that rooms with bunk beds free up floor space while generating a sense of camaraderie, and that the openings between the rooms allow communication and climbing.
"My grandchildren love getting in and out of these portholes," said Margaret Condit, whose Maryland oceanfront home designed by Purple Cherry Architects has caramel-colored trim on the portholes of her white bunk beds.
"Everyone says they want to sleep in this room," she said.
Nostalgia also helps motivate new stacked commissions. David Williams, a marketing and investment manager based in Annapolis, Md., grew up and attended college sleeping in bunk beds. Purple Cherry Architects designed double bedrooms with bunk beds for one of his homes, each with a U of six stacked beds topped with clapboard siding and painted off-white. He described the sets as "definitely a fun thing and a great creator of space". For his grandchildren and the following generations, he added: "I hope these bunks will be a part of their history, as they are for me."
The shape of the furniture, however, is not as historic as it seems. Natalie Larson, an expert on bedding history in Williamsburg, Va., said there were records of bunk beds installed in the early 19th century in prison rooms, railroad cars and military barracks. Stacked berths have also been used in submarines, military ships, schools, summer camps, concentration camps, and bomb shelters. Combining archaeological evidence, wills and inventories, among other sources, Larson said she found virtually no evidence that bunk beds were used for residential or hospitality purposes until the 20th century. Homeowners have long preferred lower, more portable beds, which could be taken apart or moved to multipurpose rooms and sold in times of trouble. And hotel or tavern owners would simply have guests share mattresses.
As the popularity of the bunk bed room increases, a few complicating factors have arisen. Changing bedding requires strength and agility when maneuvering around tight turns and atop ladders.
"It's exhausting trying to figure out how to make these beds," Williams said.
Children should be supervised to avoid injury from jumps or falls from the bunks. Aging knees and hips, among other parts of the body, can be ill-suited for spending the night on upper floors.
A reporter recently decided to see how well her limbs and sense of balance, after nearly six decades of wear and tear, would tolerate the dorm experience. At the Arlo SoHo hotel in Manhattan, she climbed along walnut-colored bed frames connected by black ladders and columns of pipes. Her joints didn't protest, and from an upper bunk she enjoyed a sense of superiority, with a bird's-eye view of old industrial buildings.
Cordell Nelson, the hotel's general manager, reminisced as he showed her around the building. He said, "I always wanted a bunk bed when I was a kid."
This article originally appeared in https://www.nytimes.com/2021/09/02/style/bunk-beds.html The New York Times.
Conorhynchos is a monotypic genus of ray-finned fishes in the family of long-whiskered catfishes (Pimelodidae).
Species
Conorhynchos conirostris (Valenciennes, 1840)
Pimelodidae
The ancient city of Rome is home to some of the most famous works of architecture and art in the world. There are so many things to do in Rome and so much to learn. Rome is widely considered the birthplace of the western world, which makes it a very popular travel destination so be prepared to navigate through crowds of tourists.
The Vatican is the official residence of the Catholic Pope. If you come on a Sunday and you arrive early enough you might be able to see the Pope deliver mass in Saint Peter Square. The lines are usually pretty long so make sure you arrive early and bring plenty of water and snacks.
The Pantheon is incredibly well preserved and a must see if you come to Rome. The two thousand year old ancient Roman building is the world's largest unreinforced concrete dome.
The Roman Colosseum is nearly 2000 years old and it is unmatched as the largest amphitheater in the Roman Empire. When I think of Rome I think of the Colosseum, and so do many other tourists, so if you want to take the tour, plan to arrive early to avoid the long lines.
The ancient Roman Forum was once the center of Roman public life. Many public speeches, criminal trials and even elections have taken place here.
The Spanish Steps is a great place to sit and relax after a long day of sightseeing. Get yourself a traditional Italian gelato and watch the world go by.
The Fontana di Trevi (Trevi Fountain) is probably the most famous of all of Rome's attractions. The site is always crowded, so plan to come early in the morning or super late at night. (The Trevi Fountain looks really cool at night.) Don't forget to throw some coins in, one for love or two to return to Rome someday.
If you want to save money, stay away from the center of Rome.
Gelato is much sweeter than the ice cream I'm used to and the choice of flavors is pretty wild.
from mainserver import MainServer

# Entry point: instantiate the main server and start it.
master = MainServer()
master.start_server()
Q: How to get a black and white map from ggmap? If I use a state name as the first argument like so:
map <- ggmap::get_map("Louisiana", zoom=3, maptype = "toner-background", source="stamen")
then I get the black and white map that I want like so:
But if I enter the first argument with longitude and latitude coordinates like so:
map <- ggmap::get_map(c(left=-120, bottom=-65, right=5, top=70), zoom = 3, maptype = "toner-background", source="stamen")
Then I always get a map like this:
The other parameters seem to have no effect.
My goal is to call this function with coordinates like the second line of code and get a black and white map like in the first case.
A: This issue of getting a Google Terrain map when you wanted a Stamen map is one that I saw recently in another context. Instead of using the generic get_map function, try the specific get_stamenmap like this:
library(ggmap)
ggmap(get_stamenmap(c(left=-120, bottom=-65, right=5, top=70), zoom = 3,
maptype = "toner-background"))
A: If you've already downloaded the colored images, you have to use force = TRUE argument
E.g.,
map = get_stamenmap(your-map, maptype = "toner-background", color="bw", force=T)
We offer a premium publisher's network with titles divided into verticals based on interest and/or different audience groups.
We use a cutting-edge ad serving technology that allows us to use specific audience targeting, thanks to our DMP in-house solutions.
A team of professionals constantly monitors the performance of campaigns to ensure the best results.
Masalia nubila is a species of moth described by George Francis Hampson in 1903. Masalia nubila belongs to the genus Masalia and the family Noctuidae (owlet moths). No subspecies are listed in the Catalogue of Life.
Sources
Noctuidae
nubila
Die Sägekäfer, wissenschaftlicher Name Heteroceridae, sind eine Familie der Käfer innerhalb der Überfamilie Byrrhoidea. Sie sind weltweit verbreitet. Insgesamt gibt es ca. 300 Arten in 15 Gattungen, von denen drei Gattungen mit 21 Arten in Mitteleuropa vorkommen. Larven und Imagines leben an vegetationsfreien Gewässerufern (semiaquatisch).
Characteristics
These are small beetles of about 1 to 8 millimeters body length. The body is somewhat elongated oval, slightly flattened, and covered with dense, fine hair. The females are usually larger and more compactly built than the males (sexual dimorphism). The beetles are dark colored, generally with a distinct pattern of red or yellow spots and dots. Striking, and responsible for the German name, is the structure of the antennae. These are nine- or eleven-segmented and short. All segments except the first three form an externally serrated (toothed) club. The head is directed forward (prognathous) with large mandibles projecting beyond the head; compound eyes are present but relatively small.
The prothorax is shorter than wide and transversely oval. On the ventral side a projection (prosternal process) extends backward, where it engages a groove of the mesothorax. The legs are usually modified into digging legs, the tibiae somewhat widened with long rows of spines on the outer side. The tarsi are narrow and delicately built, with five segments. The elytra are elongated with parallel lateral margins and cover the abdomen completely. As far as is known, all species are capable of flight.
On the abdomen six sternites (the second through seventh) are visible; the second and third sternites are fused. The posterior sternites bear long, backward-directed hairs.
Larvae
The larvae are elongated, cylindrical, and about 2 to 11 millimeters long. They are predominantly whitish with dark brown, plate-like sclerites on the dorsal side and a brown head. The head is directed forward and somewhat flattened; the labium and maxillae form a shovel-shaped, forward-projecting complex. The head bears very short, three-segmented antennae and five larval eyes (stemmata) on each side. The five-segmented legs, especially the forelegs, are modified into digging legs. The larva possesses open spiracles on the mesothorax and abdomen and is air-breathing.
Way of life
Beetles and larvae live in the shore zone of bodies of water. They dig long, winding, tunnel-like galleries in unvegetated, moist sand or mud. The tunnels are horizontal and lie just below the surface, so that they are often visible from above as winding lines. They are usually branched. The beetles dig the tunnels by scraping the loose material up and back with the forelegs while simultaneously pushing the body forward. The tunnels are feeding galleries, not dwelling burrows or nests; they are not permanently inhabited. Under unfavorable conditions, above all when the sand becomes too dry, the beetles dig their way to the surface and walk or fly to a new spot. The beetles can also take off extraordinarily quickly when disturbed, which makes catching them not entirely easy.
Besides the shores of pools and rivers, the animals also occur in tidal flats and marshes along the sea coast, but they avoid direct contact with salt water and, during storm surges, leave their habitat to seek shelter farther inland. When flooded, the animals stay dry thanks to their water-repellent (hydrophobic) hair.
The mode of feeding is not fully understood. The larvae, and to some extent also the adults, simply eat the entire substrate and then digest its organic components. Unicellular algae, especially diatoms, are said to be important for the nutrition of the adults in particular.
Systematics
The Heteroceridae belong to a family group within the Byrrhoidea that comprises exclusively families with aquatic larvae. This group has been described as "Dryopoidea". They are divided into two subfamilies:
Heterocerinae
Elythomerinae
The following genera have been recorded in Europe:
Micilus. The only European species is Micilus murinus.
Heterocerus
Augylus
Literature
Sergio A. Vanin, Cleide Costa, Sergio Ide, Rolf G. Beutel: 18.6 Heteroceridae. In: Rolf G. Beutel & Richard A. Leschen (eds.): Handbook of zoology. Volume IV. Arthropoda: Insecta. Part 38. Coleoptera. Volume 1: Morphology and systematics, Archostemata, Adephaga, Myxophaga, Polyphaga partim. Berlin, New York: Walter de Gruyter.
P. Aguilera, A. Mascagni, I. Ribera (1998): The family Heteroceridae MacLeay, 1825 (Coleoptera, Dryopoidea) in the Iberian peninsula and the Balearic Islands. Miscellania Zoologica 21(l):75-100.
References
External links
Identification key at Käfer Europas (in German)
Sagekafer | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 450 |
Smart Drug Smarts
#045: Axon: an App for the Nootropics Community
Neuro-Tech,
Smart Drugs,
http://media.blubrry.com/smartdrugsmarts/content.blubrry.com/smartdrugsmarts/SDS045.mp3
Spoiler Alert June, 2016
Update: We have made the sad decision to pull the iOS mobile app from the iTunes App Store and focus our software development efforts on web applications that won't discriminate against our Android-using friends. The strong majority of our users have always come to us over web platforms, and we're redoubling our efforts in that direction. Thank you to everyone who used the mobile app during the almost two years that it was active.
In a shocking break from convention, Jesse interviews himself about the soon-to-be-released mobile app Axon, aimed at the nootropics-loving community in general, and Smart Drug Smarts fans in particular. Within a few days of the publication of this episode, Axon should have made it past Apple's gatekeepers and be freely available for public download.
This episode also features what is probably the longest-ever (and possibly most attention-worthy) Ruthless Listener-Retention Gimmick in the history of Smart Drug Smarts.
Key Terms Mentioned
Dr. Terry Wahls' Live Seminar (Sept. 20-21, 2014) [No longer active]
Brain Tissue "Doughnuts" Aid Research – Lab Simulation
Axon for iOS (link will be updated as soon as Apple approves the app)
Holographic Universe Experiment Begins (Symmetry Magazine)
Smart Drug Smarts' Suggestion Box
Host Jesse: So, I'm here with Jesse Lawler, the creator of the mobile app Axon, which is going to be coming out soon on the iOS platform. Jesse, tell us about Axon – what is it and why did you do it?
Evil Genius Inventor Jesse: Okay, so Axon is a mobile app for Smart Drug Smarts podcast listeners, although hopefully it can appeal to even people who don't necessarily listen to the podcast but are interested in nootropics. What we're doing right now is it's a way of downloading all the past plus current episodes of the podcast and then also sort of being able to skip around and jump to the audio points you want, sort of based on the audio bookmarks that we've been doing in the past few episode posts. And also, you can do text-based searches for topics if you want to look something up in the Smart Drug Smarts library. So that's sort of element number one – just access to the podcasts the way that people haven't had so far. Then probably the next most interesting thing that we're letting people do is actually voting on what they want included in future versions of the app. We're going to be doing continuous upgrades basically from this point forwards, always going to be building on something new, so we've got five choices in there right now – the users can sort of weigh in and give priority order of votes on what they want to see next.
Host Jesse: Okay, so why do you call this thing Axon? If it's the Smart Drug Smarts app, why not just call it Smart Drug Smarts?
Evil Genius Inventor Jesse: Well, you know, that was sort of the default choice. The Smart Drug Smarts podcast logo is the icon for the app, but it turns out that – at least for Apple – you can only fit a certain number of characters underneath those little icons, and Smart Drug Smarts was just too long. I didn't want to have like the middle of it cut out or replaced with a "dot dot dot", so I was trying to think of something brain-related and kind of cool to call it. I was really surprised that actually the word Axon was available, because it's a cool space-age sounding word. People can spell it easily and nobody had taken it yet. Just in case if anybody is not clear on what an axon is – it's that long spindly part of a nerve fiber.
Host Jesse: So you said iOS first for this one?
Evil Genius Inventor Jesse: Yeah, we're doing it for Apple first. I'm kind of an Apple guy. Within our company we do both mobile apps for iOS and Android and web applications and all that stuff, but figured we'd build it first on iOS and then clone the Android version off of that. So now that we've got the first version submitted to the Apple app store, we're going to start cloning version 1.0 for Android quite soon, but don't have a hard and fast deadline for that one just yet.
Host Jesse: So, the voting that you talked about earlier, can people do that? Can anybody do that? I log in, I download it and I vote. What's keeping me from doing a bazillion different votes?
Evil Genius Inventor Jesse: Well the voting is tied to user names. I guess somebody who wanted to game the system could easily create a bunch of fake users. We're making people create an account before they can vote, so it's sort of one user one vote. Although we're not doing anything terribly technically sophisticated to keep people from creating multiple accounts. So somebody who cares that much can probably get an undue influence in the further development of the Axon app. We're probably flattering ourselves to think that anybody is going to care quite that much to try to tip the voting on what we do next on a free app. Especially because all the things that we're thinking about for the potential next features are things that we're going to be getting to eventually. Just sort of a question of what order we attack them in. That said, I really do want to be responsive to what people are voting for. I'm very interested to see what the audience members and app users really want to see next.
Host Jesse: Okay, so as far as that goes, what are the said five choices?
Evil Genius Inventor Jesse: Okay five features that we're offering are:
1) A nootropic data look-up. So basically sort of an almanac of information about different nootropics. Kind of a high-level overview with citations that could go off to different places.
2) Related to that would be sort of a 'Build My Stack' (my nootropics stack) where people could kind of keep track of what they're doing, how much of it and on what days. Especially for those of us who are cycling different chemicals, so we don't build up a tolerance to any one thing. Getting that organized and potentially being able to collect that data and share it with other people.
3) Then there's sort of the general thing of "quantified self" stuff. Smart Drug Smarts listeners will remember that six or nine months ago we did an episode with Sebastian Marshall, talking about quantified self. Basically I'd like to support that within the app, give people the ability to ping themselves at random times throughout the day, take a quick measurement of where they're at – mood, cognition, alertness, all that stuff – and then cross-reference that with data about what they might have done that day, whether it's nootropics-wise, exercise-wise, the amount of sleep they got the previous night. Over a period of months, start to give people a data-driven metric of which inputs to their body are having, at least anecdotally, an effect on their brain and the way they're feeling.
4) Then finally the obligatory Smart Drug Smarts suggestion box. We've basically got that on the web now. I'd like to build that into the app too. So if you're like, "Ooh, ooh!" sitting in the subway or wherever and you come up with an idea for something that you'd like to have on a Smart Drug Smarts episode, you can quickly drop us a recommendation.
5) I finally wanted to see about getting the game Dual n-Back. This has gotten so much buzz in the past couple of years. I think there might even be open-source implementations of that, so it might just be a matter of lifting that from the open-source site and dropping it into the app. Either way it probably wouldn't be too difficult to actually code that. It's not really a complex game to program, it's just a complex game to play. It's not exactly a nootropic, but obviously it's going to be of interest to people that are interested in cognitive enhancement.
So those are the five things and what we're doing is that we're allowing people to assign up to ten points to those five things and then sort of tabulate all the votes. I'm not exactly sure what date we'll set in mind to kind of look at all the votes and make a decision moving forwards but probably pretty soon. I'd say by mid to late September we'll see how people's voting goes and then make a big decision for what our next programming push will be.
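(A quick aside on that last point: the core bookkeeping of dual n-back really is simple. The Python sketch below is purely illustrative — it is not the Axon implementation, and all names are made up — but it shows the whole game logic in a few lines: a random stream of position/letter pairs, and two yes/no match judgments per turn.)

```python
import random

POSITIONS = range(9)   # cells of a 3x3 grid (visual stream)
LETTERS = "CHKLQRST"   # spoken letters (audio stream)

def make_trials(length, seed=None):
    """Random stream of (position, letter) pairs, one shown per turn."""
    rng = random.Random(seed)
    return [(rng.choice(POSITIONS), rng.choice(LETTERS))
            for _ in range(length)]

def nback_matches(trials, n):
    """For each trial i >= n, report whether the position and/or the letter
    repeats what was shown n turns earlier -- the two simultaneous
    judgments the player has to make in dual n-back."""
    out = []
    for i in range(n, len(trials)):
        pos_match = trials[i][0] == trials[i - n][0]
        letter_match = trials[i][1] == trials[i - n][1]
        out.append((i, pos_match, letter_match))
    return out

def score(responses, truth):
    """Fraction of correct yes/no judgments across both streams."""
    correct = sum((rp == tp) + (rl == tl)
                  for (rp, rl), (_, tp, tl) in zip(responses, truth))
    return correct / (2 * len(truth))
```

That's the entire game state; everything else is UI and timing, which is exactly why the game is hard to play but easy to build.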
Host Jesse: Sounds pretty cool. So for the cheapskates out there in the audience, is this totally free?
Evil Genius Inventor Jesse: Yes. Axon is going to be a 100% free download. I think the games and stuff, they have in-app purchases but not really planning on doing that. If we come up with some super-awesome informative something that we feel like would be a good in-app purchase thing, I'm not opposed to doing that in the future, but basically the goal right now is to see what people like, get people to use it. I'd like to get as much information as we can about what different nootropics people are taking. What types of stacks they're having success with and build it in something where we've got a data-set which is going to be useful to a large group of people. Obviously there's always going to be some real variance in how diligent people are in filling out data about what they're taking and how frequently they're taking it and kind of keeping those things up to date. It's like some people log in to Facebook a couple of times every day and their Facebook page is very very accurate. And other people they maybe log on and check out what's going on every couple of weeks. But I think we can make Axon be something that will be useful for both those types of users. People that really want to dig deep in to it and use it for keeping track of "Oh my fish-oil tablets are going to run out on this day and I need to order a new Vitamin D3 at the same time, let me consolidate those orders or something" vs people who are maybe going to want it because it will ping them when new podcast episodes are out and that's kind of the extent of it. So I would really like to make this the go-to app for nootropic fans. But part of that is going to be a community guided process on what people are really into and we'll just do the coding to get it there.
This Week in Neuroscience: What is a brain doughnut and why should you be interested?
Dr. Terry Wahls' live seminar on her dietary Wahls protocol.
Who is the "mystery man" developing a ground-breaking mobile app for the Nootropics community?
What is this mobile app and why was it created?
The story behind the name – Axon.
Supported platforms and future plans.
Five features being offered in Axon.
How much does it cost, you ask? Well, it's going to be 100% free.
Get clued in to the Simulation Hypothesis and ask yourself a simple question – Are you "real", or not?
Brain Health,
Sci + Society,
#028: Mark Divine on Mental Toughness and the Navy SEALs
#043: Is Cognitive Enhancement Ethical?
#023: Dr. Felipe Fregni and Transcranial Direct Current Stimulation
Haha, very clever self-interview Jesse! It was so believable I forgot you were interviewing yourself for a few minutes
Jesse Lawler says:
Yeah, the "phone effect" on the audio worked really well, didn't it? I was quite pleased. 🙂 Thanks!
LOL. Love your podcasts. This is hilarious.
Thanks John! Glad you liked. This was fun — I liked this episode, also liked the episode with the "Speak-n-Spell" interview — may have another one like that coming up soon. 😉
Nevan says:
Would like to recommend/suggest an interview with Dr William Walsh, author of the book Nutrient Power.
Like your shows and caliber of the subjects interviewed.
Thanks Nevan. I just grabbed the book on Kindle Unlimited. Got a flight coming up and will see about making this my airplane reading. I appreciate the tip! | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 7,204 |
layout: page
title: Resources
permalink: /resources/
---
## Templates
* [getbootstrap.com](http://getbootstrap.com)
* [Bootswatch.com](http://bootswatch.com)
* [startbootstrap.com](http://startbootstrap.com) - excellent
* [wrapbootstrap.com](http://wrapbootstrap.com)
* [bootstrapzero.com](http://www.bootstrapzero.com)
* [ThemeForest](http://themeforest.net/?ref=clickor)
## Snippets
* [bootsnipp.com](http://bootsnipp.com) - snippets to get ideas
* [Onsen UI](http://components.onsen.io/patterns) - web components for Mobile Devices
* [Mobile Angular UI](http://mobileangularui.com/) - Front End Framework using Bootstrap + Angular
## HTML5 Showcase
* [HTML 5 rocks](http://www.html5rocks.com/en/resources) - Ideas and resources for HTML5
## Fonts
* [Google Fonts](https://www.google.com/fonts)
* [What font! Web font](https://www.myfonts.com/fonts/kbrankin/tumbly/webfont_preview.html) - [WhatTheFont](https://www.myfonts.com/WhatTheFont/) to retrieve font from image
* [WhatTheFont!](https://www.myfonts.com/WhatTheFont/) - find font from image
* [Cheatsheet for Glyphicons and Font-awesome](http://fontawesome.bootstrapcheatsheets.com/#home)
* [Simple Line icons](https://github.com/thesabbir/simple-line-icons)
## Angular
* [Angular UI](http://angular-ui.github.io/bootstrap/) - Bootstrap components written in pure AngularJS by the AngularUI Team
* [Angular strap](http://mgcrea.github.io/angular-strap/) - AngularJS 1.2+ native directives for Bootstrap 3.
## Patterns
* [Subtle Patterns](http://subtlepatterns.com/)
* [Color palettes](http://www.colourlovers.com/)
## Images
* [Gratisography](http://gratisography.com/) - free images
## Online Image Editors
* [Online-Image Editor](http://www.online-image-editor.com/) - supports transparency ([here](http://www.online-image-editor.com/help/transparency))
* [Lunapic](http://www170.lunapic.com/) - for negative of images (supports PNG)
* [CutMyPic.com](http://www.cutmypic.com/) - image cropping
## Misc
* [Generate Favicon](http://favicon-generator.org/) - 16x16
* [Draw favicon](http://www.favicon.cc/)
* [Color picker of w3 School](http://www.w3schools.com/tags/ref_colorpicker.asp)
* [Combine PDFs](http://www.pdfconvertonline.com/add-pdf-watermark.html)
* [PDF buddy](https://www.pdfbuddy.com/) - but need account
* [Video Hive](http://videohive.net/) - for videos
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,133 |
\section{The status of the $1/N_c$ expansion method}
Large $N_c$ QCD, or alternatively the $1/N_c$ expansion method, proposed by 't Hooft \cite{HOOFT}
and implemented by Witten \cite{WITTEN}, became a valuable tool to study baryon properties
in terms of the parameter $1/N_c$, where $N_c$ is the number
of colors.
According to Witten's intuitive picture, a baryon containing $N_c$ quarks
is seen as a bound state in an average self-consistent potential of a Hartree type
and the corrections to the Hartree approximation are of order $1/N_c$.
Ten years after 't Hooft's work, Gervais and Sakita \cite{Gervais:1983wq}
and independently Dashen and Manohar in 1993 \cite{DM} derived a set of consistency conditions for the pion-baryon
coupling constants which imply that the large $N_c$ limit of QCD
has an exact contracted SU(2$N_f$)$_c$ symmetry
when $N_c \rightarrow \infty $, $N_f$ being the number
of flavors.
For ground state baryons the SU(2$N_f$) symmetry is broken by
corrections proportional to $1/N_c$
\cite{Dashen:1994qi,Jenkins:1998wy}.
Analogous to s-wave baryons, consistency conditions which constrain the strong couplings
of excited baryons to pions were derived in Ref. \cite{Pirjol:1997bp}.
These consistency conditions predict the equality between pion couplings to excited states
and pion couplings to s-wave baryons. These predictions are consistent with the nonrelativistic
quark model.
A few years later, in the spirit of the Hartree approximation
a procedure for constructing large $N_c$ baryon wave functions
with mixed symmetric spin-flavor parts has been proposed
\cite{Goity:1996hk} and an operator analysis was performed for $\ell$ = 1
baryons \cite{Carlson:1998vx}.
It was proven that, for such states,
the SU($2N_f$) breaking occurs at order $N^0_c$, instead of $1/N_c$, as it is the case for ground and also symmetric
excited states $[56, \ell^+]$ (for the latter see Refs. \cite{Goity:2003ab,Matagne:2004pm}).
This procedure has been extended to positive parity nonstrange baryons belonging to the $[70, \ell^+]$ with $\ell$ = 0 and 2
\cite{Matagne:2005gd}. In addition, in Ref. \cite{Matagne:2005gd}, the dependence of the contribution of the linear term in $N_c$, of the spin-orbit
and of the spin-spin terms in the mass formula was presented as a function of the excitation energy
or alternatively in terms of the band number $N$.
Based on this analysis an impressive global compatibility between the $1/N_c$ expansion and the quark model results
for $N$ = 0, 1, 2 and 4 \cite{Semay:2007cv} was found
(for a review see Ref. \cite{Buisseret:2008tq}).
More recently the $[70,1^-]$ multiplet was reanalyzed by using an exact wave function, instead of the
Hartree-type wave function, which allowed to keep control of the Pauli principle at any stage
of the calculations \cite{Matagne:2006dj}. The novelty was that the isospin-isospin term, neglected previously
\cite{Carlson:1998vx}, becomes as dominant in $\Delta$ resonances as the spin-spin term in $N^*$ resonances.
The purpose of this work is to analyze the compatibility between the $1/N_c$ expansion method in the so-called
${\it quark-shell ~picture}$ and the ${\it resonance~ or~ scattering~ picture}$ defined in the framework of chiral soliton models.
Details can be found in Ref. \cite{Matagne:2011sn}.
\section{Negative parity baryons}\label{se:excit}
If an excited baryon belongs to a symmetric $[\bf{56}]$-plet
the three-quark system can be treated similarly to the ground state
in the flavour-spin degrees of freedom, but one has to take into
account the presence of an orbital excitation in the space
part of the wave function \cite{Goity:2003ab,Matagne:2004pm}.
If the baryon state is described by
a mixed symmetric representation, $[\bf{70}]$ in SU(6)
notation, the treatment becomes more complicated.
In particular, the
resonances up to 2 GeV belong to $[{\bf 70},1^-]$, $[{\bf 70},0^+]$ or
$[{\bf 70},2^+]$ multiplets and beyond to 2 GeV to $[{\bf 70},3^-]$, $[{\bf 70},5^-]$, etc.
In the following we adopt the standard way to study the $[\bf{70}]$-plets
which, as already mentioned, is related to the Hartree approximation \cite{Goity:1996hk}.
An excited baryon is described by a symmetric core plus
an excited quark coupled to this core, see \emph{e.g.}
\cite{Carlson:1998vx,Matagne:2005gd,Goity:2002pu,Matagne:2006zf}.
The core is treated in a way similar to that of the ground state.
In this method each SU($2N_f$) $\times$ O(3) generator is separated
into two parts
\begin{equation}\label{CORE}
S^i = s^i + S^i_c; ~~~~T^a = t^a + T^a_c; ~~~ G^{ia} = g^{ia} + G^{ia}_c;
~~~ \ell^i = \ell^i_q + \ell^i_c,
\end{equation}
where $s^i$, $t^a$, $g^{ia}$ and $\ell^i_q$ are the excited
quark operators and
$S^i_c$, $T^a_c$, $G^{ia}_c$ and $\ell^i_c$ the corresponding core operators.
\subsection{The quark-shell picture}
In the quark-shell picture we use the procedure of Ref. \cite{COLEB1}, equivalent to that of
Ref. \cite{Pirjol:2003ye}, later extended in Ref. \cite{COLEB2}.
We start from the leading-order Hamiltonian
including operators up to order $\mathcal{O}(N^0_c)$ which has the following form
\begin{equation}\label{TOY}
H = c_1 \ \1 + c_2 \ell \cdot s + c_3 \frac{1}{N_c}\ell^{(2)} \cdot g \cdot G_c
\end{equation}
This operator is defined in the spirit of a Hartree picture (mean field)
where the matrix elements of the first term are proportional to $ N_c$ on all baryons \cite{WITTEN}.
The spin-orbit term $\ell \cdot s$
which is a one-body operator and the third term - a two-body operator containing the tensor
$\ell^{(2)ij}$ of O(3) - have matrix elements of order $\mathcal{O}(N^0_c)$. The neglect of $1/N_c$
corrections in the $1/N_c$ expansion makes sense for the comparison with the scattering picture
in the large $N_c$ limit, described in the following section.
One can see that the Hamiltonian (\ref{TOY}) reproduces
the characteristic $N_c$ scaling for
the excitation energy of baryons which is $N^0_c$ \cite{WITTEN}.
\subsubsection{The nucleon case}
In large $N_c$ the color part of the wave function is antisymmetric so that the
orbital-spin-flavor part must be symmetric to satisfy the Pauli principle. A quanta of orbital excitation requires
the orbital part to be mixed symmetric, the lowest state having the partition $[N_c-1,1]$.
We have the following $[N_c-1,1]$ spin-flavor ($SF$) states which form a symmetric state
with the orbital $\ell$ = 3 state of partition $[N_c - 1,1]$
\begin{enumerate}
\item
$\left[N_c - 1, 1\right]_{SF} = \left[\frac{N_c+1}{2}, \frac{N_c - 1}{2}\right]_{S} \times \left[\frac{N_c+1}{2}, \frac{N_c - 1}{2}\right]_{F} $, $N_c \geq 3$ \\
with $S = 1/2$ and $J = 5/2, 7/2$
\item
$\left[N_c - 1, 1\right]_{SF} = \left[\frac{N_c+3}{2}, \frac{N_c - 3}{2}\right]_{S} \times \left[\frac{N_c+1}{2}, \frac{N_c - 1}{2}\right]_{F} $, $N_c \geq 3$ \\
with $S = 3/2$ and $J = 3/2, 5/2, 7/2, 9/2$.
\end{enumerate}
They give rise to matrices of a given $J$ either $2 \times 2$ or $1 \times 1$ depending on the
multiplicity of $J$. States of symmetry $[N_c - 1, 1]_{SF}$ with
$S = 5/2$, like for $\Delta$ (see below), which together with $\ell = 3$ could give rise to $J = 11/2$,
are not allowed for $N$, by inner products of the permutation group
\cite{Stancu:1991rc}. Therefore the experimentally observed resonance $N(2600) I_{11/2}$ should belong to the $N = 5$ band ($\ell$ = 5).
For $N_c$ = 3 the above states correspond to the $^28$ and $^48$ multiplets of SU(2) $\times$ SU(3) respectively.
\subsubsection{The $\Delta$ case}
In this case the Pauli principle allows the following states
\begin{enumerate}
\item
$\left[N_c - 1, 1\right]_{SF} = \left[\frac{N_c+1}{2}, \frac{N_c - 1}{2}\right]_{S} \times \left[\frac{N_c+3}{2}, \frac{N_c - 3}{2}\right]_{F} $, $N_c \geq 3$ \\
with $S = 1/2$ and $J = 5/2, 7/2$,
\item
$\left[N_c - 1, 1\right]_{SF} = \left[\frac{N_c+3}{2}, \frac{N_c - 3}{2}\right]_{S} \times \left[\frac{N_c+3}{2}, \frac{N_c - 3}{2}\right]_{F} $, $N_c \geq 5$ \\
with $S = 3/2$ and $J = 3/2, 5/2, 7/2, 9/2$,
\item
$\left[N_c - 1, 1\right]_{SF} = \left[\frac{N_c+5}{2}, \frac{N_c - 5}{2}\right]_{S} \times \left[\frac{N_c+3}{2}, \frac{N_c - 3}{2}\right]_{F} $, $N_c \geq 7$ \\
with $S = 5/2$ and $J = 1/2, 3/2, 5/2, 7/2, 9/2, 11/2$.
\end{enumerate}
As above, they indicate the size of a matrix of fixed $J$ for the Hamiltonian (\ref{TOY}). For example,
the matrix of $\Delta_{5/2}$ is 3$\times$3, because all three
states can have $J = 5/2$.
For $N_c = 3$ the first state belongs to the $^210$ multiplet.
The other two types of states do not appear in the real world with $N_c = 3$.
Note that both for $N_J$ and $\Delta_J$ states the size of a given matrix equals the multiplicity of the corresponding
state indicated in Table 1 of Ref. \cite{COLEB2} for $\ell = 3$.
The Hamiltonian (\ref{TOY}) is diagonalized in the bases defined above. Let us denote the eigenvalues either by
$m^{(i)}_{N_J}$ or $m^{(i)}_{\Delta_J}$ with $i$ = 1, 2 or 3, depending on how many eigenvalues are at a fixed $J$.
The Hamiltonian has analytical solutions, all eigenvalues being linear functions in the coefficients $c_1$, $c_2$ and $c_3$.
It is remarkable that the 18 available eigenstates with $\ell$ = 3 fall into three degenerate multiplets,
like for $\ell$ = 1. If the degenerate masses are denoted by $m'_2$, $m_3$ and $m_4$
we have
\begin{equation}\label{mass3}
m'_2 = m^{(1)}_{\Delta_{1/2}} = m^{(1)}_{N_{3/2}} = m^{(1)}_{\Delta_{3/2}} = m^{(1)}_{N_{5/2}} = m^{(1)}_{\Delta_{5/2}} = m^{(1)}_{\Delta_{7/2}} ,
\end{equation}
\begin{equation}\label{mass4}
m_3 = m^{(2)}_{\Delta_{3/2}} = m^{(2)}_{N_{5/2}} = m^{(2)}_{\Delta_{5/2}} = m^{(1)}_{N_{7/2}} = m^{(2)}_{\Delta_{7/2}} = m^{(1)}_{\Delta_{9/2}},
\end{equation}
\begin{equation}\label{mass5}
m_4 = m^{(3)}_{\Delta_{5/2}} = m^{(2)}_{N_{7/2}} = m^{(3)}_{\Delta_{7/2}} = m^{(1)}_{N_{9/2}} = m^{(2)}_{\Delta_{9/2}}
= m^{(1)}_{\Delta_{11/2}},
\end{equation}
where
\begin{equation}
m'_2 = c_1 N_c - 2 c_2 - \frac{3}{4} c_3,
\end{equation}
\begin{equation}
m_3 = c_1 N_c - \frac{1}{2} c_2 + \frac{15}{16} c_3,
\end{equation}
\begin{equation}
m_4 = c_1 N_c + \frac{3}{2} c_2 - \frac{5}{16} c_3.
\end{equation}
The notation $m'_2$ is used to distinguish this eigenvalue from $m_2$ of Ref. \cite{COLEB1}.
In the following subsection we shall see that the scattering picture gives an identical pattern
of degeneracy in the quantum numbers, but the resonance mass is not quantitatively defined.
Therefore only a qualitative compatibility can be established.
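As a purely combinatorial cross-check (a side calculation in Python, outside the paper's formalism), one can enumerate the Pauli-allowed $(S, J)$ couplings for $\ell = 3$ quoted above and confirm that the 18 quark-shell states exactly exhaust the three six-member degenerate multiplets of Eqs.~(\ref{mass3})--(\ref{mass5}); the spin lists below simply encode the allowed $S$ values from the text ($S = 5/2$ forbidden for $N$, allowed for $\Delta$).

```python
from fractions import Fraction as F

ELL = 3  # orbital excitation of the N = 3 band

# Pauli-allowed quark spins S for the [N_c - 1, 1] spin-flavor states
SECTORS = {"N": [F(1, 2), F(3, 2)],
           "Delta": [F(1, 2), F(3, 2), F(5, 2)]}

def couple(ell, s):
    """Total angular momenta J from coupling orbital ell with spin s."""
    j = abs(ell - s)
    while j <= ell + s:
        yield j
        j += 1

# multiplicity of each (sector, J) state in the quark-shell basis
mult = {}
for sector, spins in SECTORS.items():
    for s in spins:
        for j in couple(ELL, s):
            mult[(sector, j)] = mult.get((sector, j), 0) + 1

# the three degenerate towers m'_2, m_3, m_4 as listed in the text
TOWERS = [
    [("Delta", F(1, 2)), ("N", F(3, 2)), ("Delta", F(3, 2)),
     ("N", F(5, 2)), ("Delta", F(5, 2)), ("Delta", F(7, 2))],
    [("Delta", F(3, 2)), ("N", F(5, 2)), ("Delta", F(5, 2)),
     ("N", F(7, 2)), ("Delta", F(7, 2)), ("Delta", F(9, 2))],
    [("Delta", F(5, 2)), ("N", F(7, 2)), ("Delta", F(7, 2)),
     ("N", F(9, 2)), ("Delta", F(9, 2)), ("Delta", F(11, 2))],
]

tower_count = {}
for tower in TOWERS:
    for key in tower:
        tower_count[key] = tower_count.get(key, 0) + 1
```

The state-by-state multiplicities of the towers reproduce those of the quark-shell basis, i.e. the 18 states split $6 + 6 + 6$ with no state left over.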
\subsection{The meson-nucleon scattering picture}
Here we are concerned with nonstrange baryons, as above, and look for a degeneracy pattern in the resonance picture.
The starting point in this analysis are the linear relations
of the S matrices $S^{\pi}_{LL'RR'IJ}$ and $S^{\eta}_{LRJ}$ of $\pi$ and $\eta$ scattering off
a ground state baryon in terms of $K$-amplitudes. They are given by the following equations \cite{COLEB1,COLEB2}
\begin{equation}\label{pi}
S^{\pi}_{LL'RR'IJ} = \sum_K ( - 1)^{R'-R} \sqrt{(2R+1)(2R'+1)} (2K+1)
\left\{\begin{array}{ccc}
K& I & J \\
R' & L' & 1
\end{array}\right\}
\left\{\begin{array}{ccc}
K& I & J \\
R & L & 1
\end{array}\right\}
s^{\pi}_{KLL'},
\end{equation}
and
\begin{equation}\label{eta}
S^{\eta}_{LRJ} = \sum_K \delta_{KL}\delta(LRJ) s^{\eta}_{K},
\end{equation}
where $s^{\pi}_{KLL'}$ and $s^{\eta}_{K}$ are the reduced amplitudes.
The notation is as follows. For $\pi$ scattering $R$ and $R'$ are the spin of the incoming and outgoing baryons
respectively ($R$ =1/2 for $N$ and $R$ = 3/2 for $\Delta$), $L$ and $L'$ are the partial wave angular momentum of the
incident and final $\pi$ respectively (the orbital angular momentum $L$ of $\eta$ remains unchanged),
$I$ and $J$ represent the total isospin and total angular momentum
associated to a given resonance
and $K$ is the
magnitude of the ${\it grand}$ ${\it spin}$ $\vec{K} = \vec{I} + \vec{J}$.
The $6j$ coefficients imply four triangle rules $\delta(LRJ)$, $\delta(R1I)$, $\delta(L1K)$ and
$\delta(IJK)$.
These equations were first derived in the context of the chiral soliton model
\cite{HAYASHI,MAPE}
where
the mean-field breaks the rotational and isospin symmetries, so that $J$ and $I$ are not
conserved but the ${\it grand}$ ${\it spin}$ $K$ is conserved and excitations can be labelled by $K$.
These relations are exact in large $N_c$ QCD and are independent of any model assumption.
The meaning of Eq. (\ref{pi}) is that there are more amplitudes $S^{\pi}_{LL'RR'IJ}$ than there are $s^{\pi}_{KLL'}$
amplitudes. The reason is that the $I J$ as well as the $R R'$ dependence is contained only in the geometrical
factor containing the two $6j$ coefficients.
Then, for example, in the $\pi N$ scattering, in order for a resonance to occur in one channel there
must be a resonance in at least
one of the contributing amplitudes $s^{\pi}_{KLL'}$. But as $s^{\pi}_{KLL'}$ contributes
in more than one channel, all these channels resonate at the same energy and this implies degeneracy
in the excited spectrum. From the chiral soliton model there is no reason to suspect degeneracy
between different $K$ sectors.
From the meson-baryon scattering relations (\ref{pi}) and (\ref{eta})
three sets of degenerate states have been found for $\ell$ = 1 orbital
excitations \cite{COLEB1}.
There is a clear correspondence
between these sets and
the three towers of states \cite{COLEB1,Pirjol:2003ye}
of the excited quark picture provided by the
symmetric core + excited quark scheme \cite{Carlson:1998vx}.
They correspond to $K = 0, 1$ and 2 in the resonance picture.
But the resonance picture also provides a $K = 3$ due to the amplitude
$s^{\pi}_{322}$.
As this amplitude is different from the other $s^{\pi}_{KL'L}$, in Ref. \cite{COLEB1}
it was interpreted as belonging to the $N = 3$ band.
Here we extend the work of Ref. \cite{COLEB1,COLEB2} to $\ell = 3$ excited states which
belong to the $N = 3$ band.
The partial wave amplitudes of interest and their expansion
in terms of $K$-amplitudes from Eqs.~(\ref{pi}) and (\ref{eta}) can be found in Tables I-III of
Ref. \cite{Matagne:2011sn}. They correspond
to $L = L' = 2$, $L = L' = 4$ and $L = L' = 6$ respectively.
From those tables one can infer the following degenerate towers of states
with their contributing amplitudes
\begin{eqnarray}
\Delta_{1/2}, \; \; \; N_{3/2} , \; \; \; \Delta_{3/2} , \; \; \; N_{5/2} , \; \; \; \Delta_{5/2} , \; \; \; \Delta_{7/2} , \; \; \;
&~&
(s_{222}^\pi, s_{2}^\eta) , \label{s2p}\\
\Delta_{3/2} , \; \; \; N_{5/2} , \; \; \; \Delta_{5/2} , \; \; \; N_{7/2} , \; \; \;
\Delta_{7/2} , \; \; \; \Delta_{9/2} ,
&~& (s_{3 2 2}^\pi, s_{3 4 4}^\pi) , \label{s1}\\
\Delta_{5/2} , \; \; \; N_{7/2} , \; \; \; \Delta_{7/2} , \; \; \; N_{9/2} , \; \; \; \Delta_{9/2} ,
\; \; \; \Delta_{11/2} ,
&~& ( s_{4 4 4}^\pi, s_{4}^\eta ) , \label{s2} \\
\Delta_{7/2} , \; \; \; N_{9/2} , \; \; \; \Delta_{9/2} ,
\; \; \; \Delta_{11/2} , \; \;
&~&
(s^\pi_{5 4 4 }, s^\pi_{5 6 6 }), \; \; \label{s3} \\
\Delta_{9/2}, \; \; \;
\Delta_{11/2} ,
&~& (s^\pi_{6 6 6 }, s_{6}^\eta ) \label{s4}
\end{eqnarray}
associated to $K = 2, 3, 4, 5$ and 6 respectively.
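The quantum numbers in these towers follow from the four triangle rules quoted above. As a quick numerical cross-check (a sketch, not part of the original derivation), one can enumerate the allowed $\pi N$ and $\pi\Delta$ channels for a given grand spin $K$:

```python
from fractions import Fraction

def triangle(a, b, c):
    """Triangle rule delta(a b c): |a-b| <= c <= a+b with integer spacing."""
    return abs(a - b) <= c <= a + b and (a + b - c) % 1 == 0

# pi-baryon channels: N has R = I = 1/2, Delta has R = I = 3/2
BARYONS = [("N", Fraction(1, 2), Fraction(1, 2)),
           ("Delta", Fraction(3, 2), Fraction(3, 2))]

def tower(K, Ls):
    """States fed by the reduced amplitudes s^pi_{KLL} for a given grand spin K.

    Applies the triangle rules delta(L1K), delta(R1I), delta(LRJ), delta(IJK)
    quoted in the text.
    """
    states = set()
    for L in Ls:
        if not triangle(L, 1, K):
            continue  # no amplitude s^pi_{KLL} for this K and L
        for name, R, I in BARYONS:
            if not triangle(R, 1, I):
                continue
            J = abs(L - R)
            while J <= L + R:  # delta(LRJ)
                if triangle(I, J, K):
                    states.add((name, J))
                J += 1
    return sorted(states)

# K = 2 with L = L' = 2 reproduces the tower of Eq. (\ref{s2p}):
# Delta_{1/2}, N_{3/2}, Delta_{3/2}, N_{5/2}, Delta_{5/2}, Delta_{7/2}
for name, J in tower(2, [2]):
    print(f"{name}_{{{J}}}")
```

Running `tower(3, [2, 4])` and `tower(4, [4])` similarly reproduces the quantum numbers of the towers (\ref{s1}) and (\ref{s2}).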
We can compare the towers (\ref{s2p})-(\ref{s4})
with the quark-shell model results of (\ref{mass3})-(\ref{mass5}).
The first observation is that the agreement of
(\ref{s2p}) ($K = 2$) with (\ref{mass3}), of
(\ref{s1}) ($K = 3$) with (\ref{mass4}) and of (\ref{s2}) ($K = 4$) with (\ref{mass5})
is perfect regarding the quantum numbers.
Second, we note that the resonance picture can have poles with $K = 5, 6$,
which generate the towers (\ref{s3}) and (\ref{s4}). These have no counterpart
in the quark-shell picture for $\ell = 3$.
This poses no problem, however, because the poles with $K = 5, 6$ can belong to a higher band,
namely $N = 5$ ($\ell = 5$), without spoiling the compatibility.
Comparing these results with those of Ref. \cite{COLEB2}, one can conclude that
a common $K = 2$ can be associated to both $\ell = 1$ and $\ell = 3$. For this value of $K$
the triangle rule $\delta(K \ell 1)$ proposed in Ref. \cite{COLEB2} is satisfied.
The quark-shell picture brings, however, more information than the resonance picture,
because it implies an energy dependence through $\ell$, which
measures the orbital excitation. Note that $m'_2$ is different from $m_2$ of $\ell = 1$
\cite{COLEB1,Pirjol:2003ye}. Because in the resonance
picture they stem from the same amplitude $s^{\pi}_{222}$, one expects this
amplitude to possess two poles at two distinct energies, in order to maintain compatibility.
Thus the number of poles of the reduced amplitudes $s^{\pi}_{KLL}$ remains an open question.
We anticipate that a similar situation will appear for every value of $K$
associated to two distinct values of $\ell$, satisfying the $\delta(K \ell 1)$ rule, for example, for
$K$ = 4 which is common to $\ell$ = 3 and $\ell$ = 5.
\section{Conclusions}\label{se:concl}
We have compared two alternative pictures for baryon resonances consistent with
the large $N_c$ limit of QCD and found that the two pictures are compatible for $\ell$ = 3
excited states, as was the case for $\ell$ = 1. The quark-shell picture is practical and successful
in describing known resonances
and in predicting other members of the excited octets and decuplets. But the extended symmetry
SU(2$N_f$) $\times$ O(3), where O(3) is essential to include orbital excitations, does not have
a direct link to large $N_c$.
On the other hand, the scattering picture is close to experimental analysis, but it is not clear where
the pole positions should lie. It is, however, very encouraging that the two pictures give
sets of degenerate states with identical quantum numbers when one works at order
$\mathcal{O}(N^0_c)$. This is a qualitative proof that the spin-flavor picture is valid
and useful for baryon phenomenology.
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,026 |
Q: Infinitely many prime divisors of $f(a)$ Let $f(x)\in \mathbb{Z}[x]$ be a non constant polynomial with integer coefficients. Show that as $a$ varies over the integers, the set of divisors of $f(a)$ includes infinitely many primes...
To be frank, I have no idea where to start...
The trivial case is when the constant term of $f(x)$ is zero.
Then $f(x)=x(a_nx^{n-1}+\cdots+a_1)$, so $p$ divides $f(p)$ for all primes $p$...
Other than this I have no idea...
Please give only hints..
A: Let $f(x)=a_nx^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0\in \mathbb Z[x]$.
If $a_0=0$ it is evident that $p$ divides $f(p)$ for arbitrary primes, so we may assume $a_0\ne 0$. Assume, for contradiction, that there are only finitely many prime divisors $p_1,p_2,p_3,\ldots,p_N$ of the values $f(k)$, $k\in \mathbb Z$, and form the product $P=p_1p_2p_3\cdots p_N$.
We have
$$f(a_0P)=a_0\left(a_na_0^{n-1}P^n+a_{n-1}a_0^{n-2}P^{n-1}+\cdots+a_2a_0P^2+a_1P+1\right)$$
The factor $$a_na_0^{n-1}P^n+a_{n-1}a_0^{n-2}P^{n-1}+\cdots+a_2a_0P^2+a_1P+1$$ is congruent to $1$ modulo each $p_i$, so no prime divisor of it can be one of $p_1,p_2,p_3,\ldots,p_N$. Unless this factor equals $\pm 1$ (which can be avoided by replacing $P$ with a suitably large multiple of itself), it has a prime divisor outside the list. This is a contradiction.
A: We will show that for any $H$, however huge, there is a prime $p\gt H$ that divides some $f(a)$.
As you observed, we can assume that $f(x)$ has non-zero constant term $a_0$. Consider $f(a_0x)=a_0(1+xg(x))$ for some $g(x)\in\mathbb Z[x]$.
The equations $1+xg(x)=1$ and $1+xg(x)=-1$ have only finitely many solutions. Let $N\ge H$ be chosen so that $N!$ is greater than any of these solutions. Then $1+N!g(N!)$ cannot be $1$ or $-1$, so is divisible by some prime $p$.
If $p\le N$, then $p$ divides $N!$ so $p$ cannot divide $1+N!g(N!)$. Thus $p\gt N\ge H$. Since $p$ divides $1+N!g(N!)$ it follows that $f(a_0N!)\equiv 0\pmod{p}$.
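As a purely numerical illustration (not a proof) of the statement, with an arbitrarily chosen polynomial $f(x)=x^2+x+1$: the set of primes dividing some value $f(a)$ keeps growing as $a$ runs over a larger range.

```python
def prime_factors(n):
    """Set of prime divisors of |n|, by trial division."""
    n = abs(n)
    factors = set()
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.add(d)
            n //= d
        d += 1
    if n > 1:
        factors.add(n)
    return factors

def f(x):
    return x**2 + x + 1  # arbitrary example polynomial

seen = set()
for a in range(1, 200):
    seen |= prime_factors(f(a))
print(len(seen))  # dozens of distinct primes already appear in this small range
```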
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,523 |
URL: https://mijn.bsl.nl/parenting-programs-for-the-prevention-of-child-physical-abuse-re/12197810?fulltextView=true

04-04-2017 | Issue 3/2017 | Open Access

# Parenting Programs for the Prevention of Child Physical Abuse Recurrence: A Systematic Review and Meta-Analysis

Journal: Clinical Child and Family Psychology Review > Issue 3/2017
Authors: Kristina Vlahovicova, G. J. Melendez-Torres, Patty Leijten, Wendy Knerr, Frances Gardner

Important note: The original version of this article was revised; a modification has been made in the section Effects of Interventions: the p value should be p = 0.043.

## Introduction

Child physical abuse is defined as the intentional use of physical force against a child, including hitting, beating, kicking, shaking, biting, scalding, burning, poisoning, and suffocating, often performed under the guise of discipline or punishment (WHO 2006). Worldwide cross-sectional surveys estimate that nearly one in four adults report experiencing physical abuse as children (WHO 2014; Butchart and Mikton 2014). Recent data from Egypt, India and the Philippines indicate that, in these countries, 26, 36 and 21% of parents, respectively, report hitting children with an object as a form of punishment (WHO 2014). Estimates of violence against children, which includes moderate to severe physical abuse, find that a minimum of 64% of 2–17-year-old children in Asia, 56% in Northern America, 50% in Africa, 34% in Latin America, and 12% in Europe experienced some form of violence in the last year (Hillis et al. 2016). These prevalence rates are not only high – they are also likely to be underestimates, as measurement errors, stigma and social normativity tend to mask the true magnitude of the problem (Finkelhor et al. 2014; Townsend and Rheingold 2013; Cicchetti and Toth 2005).
Physical violence in particular is rarely reported and largely hidden: prevalence of physical abuse is over 75 times higher when assessed with victims' self-reports rather than official reports (Stoltenborgh et al. 2013); and only the most severe cases tend to come to the attention of Child Protection authorities, if such authorities exist in the community at all.

The societal burden of child physical abuse is exorbitant – the lifetime economic cost for all new cases of abuse in one calendar year in the US has been estimated at $124 billion (Fang et al. 2012). Global estimates of the cost of this type of abuse in particular are not yet available, but a recent economic evaluation of the damage of violence against children (combining physical, psychological and sexual abuse only) has set the figure at $7 trillion, or up to 8% of global GDP (Pereznieto et al. 2014). The consequences of child physical abuse are costly, numerous, and severe – physical injury, disability, poor cognitive and socio-emotional outcomes, behavioral and mental health problems throughout the lifespan, perpetuation of abuse cycles, and even death are linked to having experienced abuse as a child (Gilbert et al. 2009; Gershoff 2010; Holmes et al. 2005; Repetti et al. 2002; Runyon et al. 2004; UNICEF 2006). While milder forms of physical abuse might have impairing consequences, there is an established dose–response relationship between experience of physical abuse in childhood and poor outcomes – that is, the most severe and persistent experiences of physical abuse are associated with the poorest outcomes (Norman et al. 2012). It is also known that violence breeds more violence, even across generations – children who have experienced physical abuse are most at risk of re-experiencing it (Hindley et al. 2006); and parents with a history of abuse during childhood are twice as likely to be reported to CPS for child maltreatment (Widom et al. 2015).
It is therefore of vital importance to find effective interventions to prevent the recurrence of child physical abuse and break this cycle of violence.

Parenting programs are one such intervention. They are aimed at improving the quality of the parent–child relationship and preventing re-abuse by changing parenting attitudes, practices, and skills, as well as reducing parent–child conflict, coerciveness and parenting stress, improving parental psychosocial functioning, improving family dynamics and reducing child behavior problems (Barlow et al. 2006a; Montgomery et al. 2009). These interventions are generally based on Attachment Theory (Bowlby 1969), Learning Theory (Skinner 1950), and/or Social Learning Theory principles (Bandura 1971), though the latter informs most parenting interventions aimed to reduce child abuse. Most central in this is Patterson's (1982) coercion hypothesis, which states that abuse might result from a repeating pattern of coercive parent–child interactions in which both the parent and the child escalate their violent behavior (Brinkmeyer and Eyberg 2003). On the side of the parent, the escalating coercive behavior springs from a belief that their child is defiant and unresponsive to less harsh forms of discipline. As children comply, parents may incorrectly believe that this strategy – and no other – works, and they therefore continue to use it (Crouch and Behl 2001). Parenting programs intend to break this cycle by promoting parental sensitivity, modifying parental attitudes, changing parental attributions, teaching adequate disciplining techniques, and increasing the use of positive parenting skills.

Prior reviews have found parenting programs to be promising strategies for reducing recurrence of child physical abuse. Four reviews in particular inspire the current review and meta-analysis. Barlow et al. (2006b) conducted a high-quality systematic review of individual- and group-based parenting programs to prevent child physical abuse and neglect recidivism, and reduce risk factors associated with re-abuse. This synthesis of RCTs revealed that, overall, parenting programs are a promising treatment strategy for preventing new incidents of abuse in families with a history of physical abuse – but not neglect. Furthermore, parenting programs were found effective in reducing risk factors associated with re-abuse when delivered to families with suspected or substantiated history of abuse. However, because their search resulted in a highly heterogeneous and limited set of trials, authors elected against conducting a meta-analysis of outcomes.

Another systematic review focusing specifically on corporal punishment (i.e., physical pain applied to correct or punish a child's behavior) was conducted in Brazil (Santini and Williams 2016). It found 18 studies using different methodologies to evaluate the effectiveness of parenting programs to reduce corporal punishment, with all studies reporting medium to large reductions (d = .54–2.17). Nevertheless, the authors of this review also opted against conducting a meta-analysis at the time due to insufficient trial-level data reported by the included studies.

New evidence from the last decade has provided additional trials with enough clinical homogeneity to justify meta-analysis. Chen and Chan (2015) conducted an updated review of parenting programs for the treatment of child abuse, and attempted a meta-analysis of abuse recurrence outcomes (among others), finding that parenting programs successfully reduced substantiated and self-reported child maltreatment reports (d = .208).
Parenting programs were also found to reduce risk factors – specifically ineffective parenting – and enhance protective factors such as endorsement of appropriate child-rearing attitudes, positive parenting, and parent–child interaction. However, given the clinical diversity of the intervention modalities included in their models, their meta-analyses also exhibited high degrees of statistical heterogeneity (I² = 75.6; p < .001), suggesting a need for a more tailored approach to understanding intervention modalities.

A systematic review and meta-analysis published in 2015 (Euser et al. 2015) identified 23 RCTs that tested the effect of 20 different programs (including but not limited to parenting programs) on child maltreatment prevention and/or reduction (including but not limited to physical abuse). It found a small but significant effect in favor of treatment (d = .13, 95% CI [0.05, 0.21]), but again, statistical heterogeneity was too high to indicate the true effect of these programs (Q = 56.06, p < .01). Additionally, trim-and-fill analysis of publication bias found that, after adjusting the results of 9 studies with small sample sizes, the pooled effect was greatly diminished (d = 0.02, 95% CI [−0.06, 0.11]), suggesting publication bias favoring the publication of smaller studies with significant findings.

Prior reviews suggest the potential effectiveness of indicated parenting programs to prevent child physical re-abuse. However, the body of evidence to date has not been large enough or evaluated with sufficient rigor to corroborate these findings. Additionally, the only meta-analyses that have been conducted (i.e., Chen and Chan 2015; Euser et al. 2015) suffered from problems resulting from a scope too wide and a level of heterogeneity too high to produce results with substantive value.
The importance of conducting this review thus springs from two necessities: (1) to provide an up-to-date synthesis of the research on child physical re-abuse prevention using parenting programs, and (2) to overcome the methodological limitations that prior reviews have encountered by narrowing the scope of this review only to those interventions strictly based on SLT to enable combination of trial outcomes into a meta-analysis with less heterogeneity, thus producing a valuable reading of the cumulative evidence. The value of meta-analysis lies in its ability to estimate a mean effect of the interventions, thus providing a helpful basis by which to understand how effective programs could be when implemented in practice settings, and the degree to which new programs offer a meaningful advantage over existing interventions. Moreover, many reviews of complex interventions – such as parenting programs to reduce re-abuse – focus on a diversity of programs united by a similar theory of change (Bonell et al. 2016). This is an analytically helpful approach as it focuses on testing the underlying principles, which are thought to make interventions effective. In this study, we provide a broad test as to whether a theory of change, when implemented in the form of parenting interventions, has the potential to reduce recurrence of child maltreatment. Particularly, we focus on behavioral parenting programs (as opposed to non-behavioral programs, which might focus on transforming attitudes and attributions) to be better able to ascertain the effect of programs in this particular intervention modality without injecting problematic clinical heterogeneity in our collection of trials.

## Methods

### Criteria for Trial Inclusion and Exclusion

RCTs and quasi-experiments featuring a high-quality statistical matching technique to simulate randomization (e.g., Propensity Score Matched designs) were acceptable for inclusion in this review.
Participants had to be parents (i.e., mothers, fathers, or other primary caregivers) of children aged 0–18, who have a suspected or substantiated report of child physical abuse. Both suspected and substantiated reports were acceptable for inclusion, as there is little difference between these groups in regards to their risk of recidivism (Drake et al. 2003; Kohl et al. 2009). Maltreatment history had to be supported by either (a) a police report, child protection referral, or other official agency report, (b) the self-report of an abusive parent or abused child, or (c) an above-threshold score in standardized instruments used for detection of child physical abuse, such as the Parent–Child Conflict Tactics Scale (CTS), the Child Maltreatment Interview Schedule (CMIS, one item on physical abuse), the ISPCAN Child Abuse Screening Tool (I-CAST), and the Alabama Parenting Questionnaire (APQ, corporal punishment, physical punishment, and minor/severe assault subscales). To ensure sufficient homogeneity to enhance comparability, behavioral parenting programs mostly based on SLT were selected for inclusion. Active and passive control conditions were acceptable – i.e., placebo, treatment as usual, alternative treatment, and wait-list controls.

As for outcomes, we focused particularly on physical abuse, minding that different forms of abuse tend to co-occur (Jones et al. 2008; Oates and Bross 1995; Manly 2005). Samples showing multiple forms of abuse were included, provided that there was physical abuse present or suspected in at least 15% of the sample. This is an arbitrary threshold set by Oates and Bross (1995) and adopted by this review to maintain consistency with earlier literature, as well as to maximize the amount of studies that meet inclusion criteria. In cases where the amount of participants in the sample suspected of or reported for physical abuse was not stated, first authors were contacted to request more information.
If the communication was unsuccessful, the trials were excluded, to err on the side of conservatism.

The primary outcomes sought in this review were reports of child physical abuse recidivism, including re-report with police or child welfare/protection agencies, and/or self-report by parent or child. When official reports of recidivism were available (as opposed to self-reports by parents or children), they were prioritized. This is because official reports tend to be complete sets of data, available for all participants regardless of treatment completion status or attrition.

Additionally, proxy measures of physical abuse recurrence were considered acceptable indicators of re-abuse in the absence of direct re-abuse reports. These secondary measures include harsh parenting, physical punishment, and above-threshold scores in the standardized measures of child physical abuse that validly and reliably identify physical abuse occurrence: the PCTS, CMIS, APQ, and ICAST.

Although this review is solely interested in re-abuse outcomes, trials that did not collect re-abuse outcomes but met all other inclusion criteria were included in the final set of studies. In accordance with the Cochrane Handbook (Sect. 14.2.3), the presence or absence of outcomes is not a sufficient criterion to exclude studies from a systematic review (Higgins and Green 2011). Thus, outcomes related to changes in parent and child variables that are not indicative of abuse recidivism (such as parenting stress, or child problem behavior) are included but not synthesized, as this is not the focus of this review.

### Search Methods for Identifying Trials

#### Electronic Searches

Nine databases were searched to identify published studies from inception to April 10, 2015: MEDLINE, PsycINFO, EMBASE, PubMed, Cochrane Central Library, Campbell Library, ERIC, Sociological Abstracts, Social Service Abstracts, and CINAHL.
The search string initially designed and adapted for use in other databases was:

((exp child abuse/) OR ((exp physical abuse/) AND (baby OR babies OR child* OR toddler* OR minor* OR adolescen* OR teen*)) OR (exp child abuse reporting/) OR (exp child discipline/) OR ((exp protective services/) AND (baby OR babies OR child* OR toddler* OR minor* OR adolescen* OR teen*)) OR (abusive head trauma) OR ((physical*) AND (maltreat* OR abus* OR mistreat*) AND (baby OR babies OR infan* OR child* OR toddler* or adolescen* OR teen* OR minor*)) OR ((intent* AND injur*) AND (baby OR babies OR infan* OR child* OR toddler* or adolescen* OR teen* OR minor*)) OR (corporal punishment ADJ3 (baby OR babies OR infan* OR child* OR toddler* or adolescen* OR teen* OR minor*))) AND ((exp Parent Training/) OR ((exp child-rearing practices/ OR exp parent child relations/ OR exp parental role/) AND (program* OR train* OR educat* OR promot* OR intervent* OR group* OR skill* OR support*)) OR ((mother* OR father* OR famil* OR caregiver* OR parent*) ADJ3 (program* OR train* OR educat* OR promot* OR intervent* OR group* OR skill* OR support*)))

This search favored sensitivity to capture all relevant studies on child physical abuse, regardless of level of prevention. This is because, judging by prior reviews (e.g., Chen and Chan 2015), it is not uncommon for participants at different levels of risk to be combined in the same trials.
No methodological filters were applied to ensure that records were not missed due to poor reporting.

#### Grey Literature Searches

Trials for inclusion were also searched in the following clearinghouse websites: Child Welfare Information Gateway, Center for the Study and Prevention of Violence, National Clearing House of Families and Youth, California Evidence-Based Clearing House for Child Welfare, Child Welfare League of America, ChildTrends, Children and Families Research Center, and the Violence Against Children: United Nations Secretary General's Study. Dissertations were included as long as they were captured by the database searches, and full-text papers could be retrieved either online or by contacting the author. Furthermore, twelve of the authors of the final set of included trials were contacted to identify ongoing or unpublished trials, of which eight replied with clarifications.

### Data Collection and Analysis

#### Selection of Studies

The first and second authors independently conducted the selection of studies for inclusion in this review in three stages. An initial title scan was conducted. Subsequently, the abstracts of seemingly relevant titles were scanned to determine whether they met the inclusion criteria. Finally, full-text copies of papers that appeared to meet criteria were reviewed. Uncertainties related to the appropriateness of studies for inclusion were resolved in consultation with co-authors.

#### Assessment of Risk of Bias in Included Studies

Critical appraisal of included studies was conducted. An adapted version of the Cochrane Risk of Bias Tool (Higgins and Green 2011) was used to assess the methodological robustness of studies.
All of the dimensions of trials assessed by this tool (random sequence generation, allocation concealment, blinding, and reporting) were ranked as either "high risk," "low risk," or "unclear."

#### Measures of Treatment Effects

Outcome data were presented as Cohen's d effect sizes (Cohen 1969), if enough data were provided by authors in trial reports (i.e., means and standard deviations for continuous data, or count of new incidents and sample sizes of groups for dichotomous data).

We recalculated effectiveness associated with dichotomous outcomes as risk differences (also known as absolute risk reduction). This had several benefits, most importantly that unlike odds ratios and risk ratios, risk differences may be more readily interpretable as the absolute change in risk of an outcome, which may be more relevant from a policy and practice perspective. When risk differences were combined in a meta-analysis, we sensitivity-tested our findings as risk ratios as well. We also converted risk differences into the "number needed to treat" by taking the inverse of the risk difference (i.e., 1/RD). The number needed to treat is the number of families who would need to receive the intervention in order to prevent one incident of re-abuse (Higgins and Green 2011).

#### Unit of Analysis Issues

Some trials had multiple relevant treatment arms (e.g., parent–child interaction therapy and enhanced parent–child interaction therapy), and others had multiple relevant outcomes (e.g., parent self-report of re-abuse and child self-report of re-abuse). When these trials were included in the meta-analysis, the treatment arms (or outcomes) were combined. That is, the participants of the relevant treatment arms were added together (as were their respective counts of recidivist participants), and the multiple outcomes were averaged to produce one single measure of outcome.
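A minimal sketch of this arm-combining and of the risk-difference/NNT conversion described above; the event counts are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical (re-abuse events, n) counts -- NOT data from any included trial.
treatment_arms = [(5, 40), (7, 45)]   # e.g., two similar variants of one program
control_events, control_n = 18, 50

# Combine similar treatment arms by adding event counts and sample sizes.
t_events = sum(e for e, _ in treatment_arms)
t_n = sum(n for _, n in treatment_arms)

risk_treatment = t_events / t_n
risk_control = control_events / control_n

rd = risk_treatment - risk_control   # risk difference (negative favors treatment)
nnt = 1 / abs(rd)                    # families to treat to prevent one incident

print(f"RD = {rd:.3f}, NNT = {nnt:.1f}")
```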
This is a reasonable course of action whenever treatment arms are similar versions of the same intervention and both treatment arms observe effects in the same direction (Higgins and Green 2011, Sect. 16.5.4).

#### Dealing with Missing Data

Missing data and dropouts were assessed for each included study, and the review reports the number of participants who have been included in the final analysis as a proportion of all participants in each study.

#### Assessment of Heterogeneity

To expose statistical heterogeneity in the meta-analysis (Higgins and Green 2011) we calculated the I² index, which indicates the amount of variability in the intervention effects. An I² index larger than .5 (i.e., 50%) indicates that caution should be exerted in making substantive inferences about the results of the meta-analysis. Due to the small number of studies included, we did not conduct any subgroup analyses (i.e., exploration of the effect of participant characteristics, or other contextual factors).

#### Data Synthesis

Abuse recidivism data are typically presented in two ways: recurrence of physical abuse can be expressed either as event data (i.e., presence or absence of re-abuse in a given time period), as time-to-event data (i.e., time to re-abuse incident), or both. Risk differences relating to re-abuse event data (both official re-reports and self-reports of recidivism by parents/children) were synthesized statistically using a random-effects meta-analysis model to account for heterogeneity.

Time-to-event data were not included in the meta-analysis. Although, theoretically, event and time-to-event data could be combined, not enough information was provided in trial reports to combine them without making ad hoc assumptions.
Additionally, time-to-event data could not be meta-analyzed separately, as they were presented inconsistently (e.g., as nonparametric tests, such as log rank, or as hazard ratios), without enough information to facilitate conversion to one common unit, as variance and error were often missing from reports. Individual participant data would be required to compute these missing factors.

Finally, all other outcomes (harsh parenting, physical punishment, and scores on standardized abuse detection measures) were reported as standard mean differences with 95% confidence intervals whenever means and standard deviations were reported in the included trials.

## Results

### Results of the Search

Online database searches yielded 8869 results. Searching trial registries and clearinghouse websites, contact with authors, and hand searching prior reviews added 424 hits. In total, after de-duplication, 6168 records were captured (see Fig. 1). The process of scanning records for eligibility was conducted in three stages by the first author and repeated independently by the second author. Initially, all titles were scanned and excluded if they held no direct relevance to this review (i.e., titles unrelated to child abuse, non-English language publications, and publications not concerned with child abuse treatment such as case studies, prevalence studies, risk factors analyses, descriptive studies, observational data reports, and evaluations of assessment tools). Then, abstracts of the remaining records were scanned and excluded if they (a) were not related to child abuse prevention, (b) were not RCTs or statistically controlled designs, (c) were not testing parenting programs, and (d) were not concerned with the indicated prevention of child physical abuse. In the third stage, full-text articles of the remaining 121 records were closely examined. An eligibility form was created to streamline and standardize this process.
Inclusion ambiguities were resolved in collaboration with co-authors. After the sorting process was completed, 14 studies remained eligible and were included in this review (Table 1).

Table 1: Characteristics of included studies

| First author (year) | Design | Intervention name | Comparison group | Child age | Dose | Setting | Re-abuse effect size |
|---|---|---|---|---|---|---|---|
| Brunk (1987) | RCT | Parent training | Multi-systemic therapy | | 6 weeks | Clinic | n/a |
| Chaffin (2004) | RCT (stratified) | PCIT and EPCIT | Standard community group | 4–12 | 12–14 sessions over 6 months | Clinic | RR = 0.57 [0.35, 0.95] |
| Chaffin (2011) | 2 × 2 RCT | PCIT + SM | TAU | 2.5–12 | 12–14 sessions over 6 months | Clinic | RR = 1.03 [0.69, 1.55] |
| Chaffin et al. (2012) | RCT | SafeCare | Home visitation without SC components | 0–12 | Weekly for approx. 6 months | Center | HR = 0.74–0.83 |
| Egan (1983) | RCT | Child Management Program | TAU (case management) | | 6 weeks | | n/a |
| Hughes and Gottlieb (2004) | RCT | Incredible Years | Wait-list control | 3–8 | 8 weekly 2-h sessions | Center | n/a |
| Jouriles (2010) | RCT | Project Support | TAU | 3–8 | 1.5 h weekly for 8 months | Home | RR = 0.21 [0.03, 1.63] |
| Kolko (1996) | RCT | Individual child- and parent-CBT | Family therapy + community services | 6–13 | 16 weeks | Clinic/Home | RR = 0.40 [0.17, 0.96] |
| MacMillan (2005) | RCT | Home visitation (nurses) | TAU | 0–13 | 2 years | Home | RR = 0.77 [0.51, 1.14] |
| Mast (2014) | RCT | I-inTERACT | Internet resource comparison | 3–9 | Weekly for 6 months | Online | n/a |
| Runyon (2010) | RCT | Combined parent–child CBT | Parent-only CBT | 7–13 | 16 weeks | Clinic | SMD = 0.01 [−0.50, 0.52] |
| Swenson (2010) | RCT | STEP-TEEN | Multi-systemic therapy | 10–17 | 7 weeks | Center | RR = 2.10 [0.40, 10.84] |
| Terao (1999) | RCT | PCIT | Family preservation | | 12–14 sessions over 6 months | Home | n/a |
| Wolfe (1981) | RCT | Child management program | Wait-list control | 2–10 | 6 weeks | Clinic/Home | RR = 0.33 [0.02, 7.14] |

(E)PCIT = (enhanced) parent–child interaction therapy, SM = self-motivation, SC = SafeCare, CBT = cognitive behavioral therapy, TAU = treatment as usual, SMD = standard mean difference, RR = risk ratio, CI = confidence interval, APQ = Alabama Parenting Questionnaire, HR = hazard ratio

### Excluded Studies

The main reasons for exclusion were that (a) the trials did not use an RCT or statistically controlled design, (b) the population was not at least 15% physically abusive, (c) the intervention did not qualify as parent training or did not contain a majority of parenting content, and (d) the level of prevention was not indicated, but rather selective or universal (Table 2).

Table 2: Characteristics of excluded studies

| First author, year | Reason for exclusion |
|---|---|
| Armstrong, 1999 | Not indicated level of prevention |
| Barlow, 2007 | Not indicated level of prevention |
| Barlow, 2013 | Not indicated level of prevention |
| Barnes, 2013 | Not indicated level of prevention |
| Barth, 2006 | Not >51% parenting intervention |
| Bernard, 2010 | <15% sample physically abusive parents |
| Bigelow, 2000 | Not RCT or statistically matched control |
| Byrne, 2012 | Not RCT or statistically matched control |
| Casanueva, 2008 | Not >51% parenting content in intervention |
| Chaffin, 2001 | Not RCT or statistically matched control |
| Chaffin, 2009 | Testing motivational component effect on retention |
| Chaffin, 2012b | Sample of this trial overlaps with Chaffin et al. 2012a |
| Christoffersen, 2009 | Not RCT or statistically matched control |
| Cicchetti, 2006 | Not indicated level of prevention; <15% physically abusive |
| Constantino, 2001 | Not indicated level of prevention |
| Dawe, 2007 | Not indicated level of prevention; <15% physically abusive |
| Denicola, 1980 | Not RCT or statistically matched control |
| Dubowitz, 2009 | Not >51% parenting content in intervention |
| Duggan, 2004a | Not indicated level of prevention |
| Duggan, 2004b | Not indicated level of prevention |
| Duggan, 2007 | Not indicated level of prevention |
| Eckenrode, 2000 | Not indicated level of prevention |
| Edwards-Gaura, 2003 | Not indicated level of prevention |
| Fantuzzo, 2007 | Not >51% parenting content in intervention |
| Fergusson, 2005 | Not indicated level of prevention |
| Fergusson, 2006 | Not indicated level of prevention |
| Fergusson, 2013 | Not indicated level of prevention |
| Fetsch, 1999 | Not RCT or statistically matched control |
| Fraser, 2000 | Not indicated level of prevention |
| Gavlick, 2003 | Not indicated level of prevention; <15% physically abusive |
| Gershater-Molko, 2003 | Mixture of selective and indicated level of prevention |
| Green, 2014 | Not indicated level of prevention |
| Guterman, 2013 | Not indicated level of prevention |
| Hakman, 2009 | Not RCT or statistically matched control |
| Hall, 2004 | Not RCT or statistically matched control |
| Harder, 2005 | Not RCT or statistically matched control |
| Harnett, 2008 | Not RCT or statistically matched control |
| Horton, 2013 | Not indicated level of prevention |
| Horwitz, 2010 | Not RCT or statistically matched control |
| Hughes, 2002 | Sample of this trial overlaps with Hughes and Gottlieb 2004 |
| Hulburt, 2013 | Inadequate control group (not maltreating population) |
| Irueste-Montes, 1988 | Not RCT or statistically matched control |
| Kim, 2008 | Not indicated level of prevention |
| Knox, 2011 | Not indicated level of prevention |
| Lanier, 2014 | Not RCT or statistically matched control |
| Lau, 2011 | Not indicated level of prevention |
| LeCroy, 2011 | Not indicated level of prevention |
| Letarte, 2010 | Not indicated level of prevention; <15% physically abusive |
| Linares, 2006 | Not RCT or statistically matched control |
| Lind, 2014 | No report of % sample physically abused |
| Lober, 1984 | Not RCT or statistically matched control |
| Lowell, 2014 | Not indicated level of prevention |
| Luthar, 2007 | Not indicated level of prevention |
| Lutzker, 1987 | Not RCT or statistically matched control |
| Maher, 2011 | Not RCT or statistically matched control |
| Maher, 2012 | Not RCT or statistically matched control |
| Meezan, 1998 | Not >51% parenting content in intervention |
| Moss, 2011 | Intervention not aimed at modifying parenting abusive practices |
| Nese, 2014 | Not RCT or statistically matched control |
| Olds, 1997 | Not indicated level of prevention |
| Polinsky, 2010 | Not RCT or statistically matched control |
| Ramquist, 2010 | Not RCT or statistically matched control |
| Reynolds, 2003 | Inadequate control group (not maltreating population) |
| Rivara, 1985 | Not RCT or statistically matched control |
| Runyan, 2009 | Not indicated level of prevention |
| Saldana, 2015 | <15% sample physically abusive parents |
| Scott, 2012 | Not RCT or statistically matched control |
| Self-Brown, 2012 | Not RCT or statistically matched control |
| Shaeffer, 2013 | Not RCT or statistically matched control |
| Smith, 1984 | Not RCT or statistically matched control |
| Sprang, 2009 | Not indicated level of prevention |
| Stronach, 2013a | <15% sample physically abusive parents |
| Stronach, 2013b | No report of % sample physically abused |
| Thomas, 2011 | No report of % sample physically abused |
| Thomas, 2012 | Not RCT or statistically matched control |
| Toth, 2002 | Intervention not aimed at modifying parenting abusive practices |
| Timmer, 2005 | Not RCT or statistically matched control |
| Timmer, 2006 | Not RCT or statistically matched control |
| Walker, 2008 | Intervention does not qualify as parenting |
| Wolfe, 1980 | Not RCT or statistically matched control |

### Description of Included Studies

#### Study Designs

All 14 included studies were RCTs. One of the trials was a cluster-randomized trial (Chaffin et al. 2012, randomized at the Child Protection Service or CPS agency level), one was a 3-arm trial (Chaffin et al. 2004, comparing PCIT vs. enhanced PCIT vs. community standard), one was a 4-arm trial (Egan 1983, comparing parenting programs vs. stress management vs. parenting + stress management vs. wait-list control), and two were 2 × 2 stratified trials (Chaffin et al. 2011, randomized first to orientation group type and then to intervention type; Chaffin et al. 2012, randomized first to intervention type and then to coached vs. un-coached implementation).

#### Populations

All studies comprised a minimum of 15% physically abusive parents, bar one (Chaffin et al. 2012), which included mostly neglecting families and only 14% physically abusive families, but was still included in the review as it only missed this criterion by 1% and met all other inclusion criteria. Seven trials included exclusively physically abusive parents; in the others, the proportion ranged between 23% and 63%. The number of participants in each study ranged substantially, from 26 to 2176.

#### Interventions

The 14 trials evaluated 8 different SLT-based behavioral parent training programs. The content of the programs was reasonably similar, with a shared focus on teaching and practicing parenting skills and child management strategies to break cycles of coerciveness in parent–child interaction, although some programs also included modules on child health and safety practices (e.g., Chaffin et al. 2012). While most programs ran weekly sessions with a similar duration (between 1 and 2 h per session), the total duration of each program varied greatly, with 6 of the programs running for 4–8 months (Chaffin et al. 2004, 2011, 2012; Terao 1999; Jouriles et al. 2010; Mast et al. 2014; Kolko 1996; Runyon et al. 2010), some running for only 8 weeks (Hughes and Gottlieb 2004; Swenson et al. 2010; Egan 1983; Brunk et al. 1987; Wolfe et al. 1981), and one for over 2 years (MacMillan et al. 2005). Programs were delivered either individually, to groups, or both; and fully or partially delivered in the home (Jouriles et al. 2010; Kolko 1996; MacMillan et al. 2005; Wolfe et al. 1981; Terao 1999), healthcare or other clinics (Chaffin et al. 2004, 2011; Brunk et al. 1987; Kolko 1996; Wolfe et al. 1981; Runyon et al. 2010), community centers (Chaffin et al. 2012; Hughes and Gottlieb 2004; Swenson et al. 2010), and online (Mast et al. 2014). One trial did not report delivery setting (Egan 1983). The size of the samples also varied tremendously, with the smallest trial including only 26 participants (Hughes and Gottlieb 2004) while the largest included data on almost 2200 families (Chaffin et al. 2012).

#### Comparison Groups

Three trials used wait-list control groups (Egan 1983; Hughes and Gottlieb 2004; Wolfe et al. 1981). The wait period varied across trials: Egan (1983) had a 6-week wait-list, Wolfe et al. (1981) an 8-week one, and Hughes and Gottlieb (2004) only offered treatment after 4 months. All three trials offered usual agency services during the wait period. Five trials used "service-as-usual" or "treatment-as-usual" controls (Chaffin et al. 2004, 2011, 2012; Jouriles et al. 2010; MacMillan et al. 2005). Yet, what was offered as usual treatment differed greatly between trials, some of which would be best classified as alternative treatments. In the Chaffin et al. (2004, 2011) trials, "service-as-usual" controls were offered a non-SLT parenting group program. In Chaffin et al. (2012), "service-as-usual" included a behavioral skills training that resembled the treatment intervention in content, but not in structure, dose, or delivery format. In Jouriles et al. (2010), the control conditions varied from nothing to a parenting program alternative treatment. MacMillan et al. (2005) considered "treatment-as-usual" providing control parents with child physical abuse caseworkers, assessment of recidivism risk, education about parenting, and referrals to other services. The Terao (1999) trial offered a "family preservation group" alternative, comprising a range of services that did not include parent training.

Alternative treatments that were used as control groups included family therapy (FT; Kolko 1996), multi-systemic therapy (MST; Brunk et al. 1987; Swenson et al. 2010), and Internet resources (IRC; Mast et al. 2014), none of which is primarily based on SLT or focuses mainly on parenting training instruction. Only CPC–CBT (i.e., Combined Parent–Child Cognitive Behavioral Therapy, the comparison treatment in Runyon et al. 2010) is based on the same theory as the interventions. Yet it was considered a valid alternative, since, reportedly, the amount of parent training was small compared to the parent-only version of the treatment (P-CBT).

#### Risk of Bias in Included Studies

The summary chart (Fig. 3) gives an overview of the quality of the evidence included in this review. Notably, three trials reported unsuccessful randomization (Chaffin et al. 2011; Brunk et al. 1987; Runyon et al. 2010); none of the trials bar one (MacMillan et al. 2005) detailed the method used for allocation concealment; trial attrition was between 2 and 23%; only four trials reported intention-to-treat analysis (Chaffin et al. 2012; Kolko 1996; MacMillan et al. 2005; Swenson et al. 2010); and none of the trials blinded participants or research personnel to treatment assignment, although blinding is virtually impossible to achieve in a trial of a psychosocial intervention. Lastly, five trials had small sample sizes and limited power to detect effects (Jouriles et al. 2010; Kolko 1996; Mast et al. 2014; Runyon et al. 2010; Wolfe et al. 1981).

### Effects of Interventions

#### Primary Review Outcomes: Re-abuse

Seven trials collected event data of official re-reports to CPS or similar agencies (Chaffin et al. 2004, 2011, 2012; Jouriles et al. 2010; MacMillan et al. 2005; Swenson et al. 2010; Wolfe et al. 1981). Parent and child self-reports of the number of new abuse incidents were collected from one trial only (Kolko 1996), in which parents and children separately ranked from 1 to 4 the severity and frequency of the use of force and infliction of injury in the early and late stages of the intervention. The authors of the trial dichotomized these answers to presence or absence of at least one incident of abuse during the course of the intervention.

#### Meta-Analysis: Risk of Re-abuse in Active Versus Treatment as Usual Trials

Of these seven trials, we meta-analyzed risk differences for four trials comparing manualized interventions against treatment as usual, and measuring outcomes via re-reports or referrals to CPS (Jouriles et al. 2010; Chaffin et al. 2004, 2011; MacMillan et al. 2005; see Fig. 2). On the whole, the absolute reduction in risk of recidivism was 11 percentage points and was statistically significant (RD = −0.11, p = 0.043, 95% CI [−0.22, −0.004]). Another way of understanding these results is that about nine families would need to be treated to prevent one incident of re-abuse. Heterogeneity was notable, but not necessarily large (I² = 28.9%). When we conducted sensitivity analyses as risk ratios, findings were no longer significant (RR = 0.76, 95% CI [0.54, 1.07], I² = 38.4%).

#### Narrative Synthesis: Risk of Re-abuse in Active Versus Active Trials

An additional three trials (Kolko 1996; Wolfe et al. 1981; Swenson et al. 2010) compared included parenting interventions against another active intervention, but we did not meta-analyze these as the comparators would have been too clinically heterogeneous to be interpretable, and each of the three trials measured re-abuse in a different way. When the relevant, SLT-oriented parenting programs were compared against the other active treatment arms (e.g., family preservation groups, family therapy, multi-systemic therapy), effects were inconsistent. Two studies yielded non-significant risk differences: Wolfe et al. (1981) (RD = −0.125, 95% CI [−0.411, 0.161]) and Swenson et al. (2010) (RD = 0.050, CI [−0.058, 0.158]), whereas Kolko (1996) showed a significant positive effect when compared against specific family therapy (RD = −0.350, CI [−0.647, −0.054]).

#### Narrative Synthesis: Time to Re-abuse Recidivism

Three trials provided data on the amount of time before a new recidivism episode (time-to-event data). Chaffin et al. (2004) found that PCIT significantly delayed re-abuse when compared to the standard community group condition (log rank = 6.2, p = 0.02; unit = days). Furthermore, although PCIT delayed time to re-abuse better than the Enhanced PCIT condition in which ancillary services were also offered, the comparison between EPCIT and the community group condition did not approach significance (log rank = 2.3, p = 0.13).

In a different study of PCIT, Chaffin et al. (2011) found longer survival for the PCIT with self-motivation orientation group relative to PCIT without self-motivation (hazard ratio = 0.11, p < .05; unit = days), to service-as-usual with self-motivation (hazard ratio = 0.10, p < .05), and to service-as-usual without self-motivation (HR = 0.20).

Lastly, results from Chaffin et al. (2012) showed a longer time to re-abuse for the intervention (SafeCare) over a 6-year follow-up period when compared to a different home-visitation intervention (hazard ratio = 0.74–0.83). Coaching did not make a significant difference to these effects (Fig. 3).

#### Secondary Review Outcomes: Harsh Parenting and Physical Punishment

Runyon et al. (2010) collected scores on the APQ (corporal punishment subscale) but did not find a significant difference between P-CBT and CPC–CBT in terms of corporal punishment (N_int = 26, M_int-post = 4.47, SD = 2.07 vs. N_ctrl = 34, M_ctrl-post = 4.44, SD = 2.1; d = 0.01, 95% CI [−0.50, 0.52]). Since the confidence interval crosses the point of no effect (i.e., 0), these results are statistically non-significant.

Swenson et al. (2010) collected scores for the physical aggression, minor assault, and severe assault subscales of the CTS. The authors did not provide means and standard deviations, but they reported the significance level of between-group differences and the standardized mean difference (SMD), expressed as Cohen's d. Physical aggression (as reported by youth) differed significantly between STEP-TEEN and MST groups, favoring MST (p < 0.01, d = 0.21). Minor assault (as reported by youth) also differed significantly between groups in favor of MST (p < 0.01, d = 0.14), as did severe assault (as reported by youth; p < 0.01, d = 0.54).

Jouriles et al. (2010) also collected CTS scores from the corporal punishment subscale. Results strongly favored Project Support versus service-as-usual at the post-intervention mark (N_int = 17, M_int-post = 0.87, SD = 0.93 vs. N_ctrl = 15, M_ctrl-post = 1.64, SD = 1.04; p < 0.05, d = 0.86, 95% CI [0.15, 1.53]).

#### Secondary Review Outcomes: Other Parent-Related Outcomes

Other parent-related outcomes collected in the trials that were not indicative of recidivism were: child abuse potential (Chaffin et al. 2004; MacMillan et al. 2005; Terao 1999), child-rearing attitudes (MacMillan et al. 2005), hospitalizations related to maltreatment (MacMillan et al. 2005), out-of-home placements (Swenson et al. 2010), observation measures of negative and/or positive parenting behaviors (Chaffin et al. 2004, 2011; Egan 1983; Hughes and Gottlieb 2004; Jouriles et al. 2010; Mast et al. 2014), parent mental health (Egan 1983; Jouriles et al. 2010; Swenson et al. 2010), parent autonomy support (Hughes and Gottlieb 2004), family relations and functioning (Brunk et al. 1987; Egan 1983; Kolko 1996; MacMillan et al. 2005), parenting stress (Brunk et al. 1987), parent locus of control (Jouriles et al. 2010), parent anger (Kolko 1996), and social support (Brunk et al. 1987; MacMillan et al. 2005; Swenson et al. 2010). We did not synthesize these further as they are not directly predictive of re-abuse or abusive behaviors.

## Discussion

This review was conducted to strengthen our understanding of the effectiveness of SLT-based behavioral parenting programs for preventing child physical abuse recurrence. Methodologically, it overcomes several important challenges encountered in prior reviews (e.g., Barlow et al. 2006b; Chen and Chan 2015), by including evidence from the last decade, selecting trials for inclusion with stringent criteria, and conducting an informative meta-analysis featuring limited statistical and clinical heterogeneity. The results of this review suggest that behavioral parenting programs are modestly but significantly effective strategies for reducing hard markers of recidivism in physically abusive families.
Our meta-analysis found recidivism to be 11% lower for CPS-referred families who received SLT-based behavioral parenting training. While this figure is modest, it is important to recognize its magnitude given the complicated nature of child welfare systems and the multiple high risks to which referred families tend to be exposed. Granted, more extensive and better-quality research is needed to understand the effectiveness of this intervention modality, and thus establish its effectiveness more robustly. While we were only able to include four studies in the meta-analysis, a better-powered analysis may also have been able to examine not only whether this intervention modality is effective, but also the differences between specific interventions that might make them more or less effective.

A few limitations of this review must be highlighted. First, the included trials were conducted exclusively in the US or Canada. This is not uncommon in the field of child maltreatment: in a systematic review of reviews by Mikton and Butchart (2009), it was established that 90% of trials of child maltreatment interventions were conducted in high-income countries. Orienting future systematic reviews to include trials in languages other than English might help ensure that research from other settings is captured, thus reducing the possibility that geographic homogeneity is an artifact of the search criteria. On the other hand, this review could serve as a starting point for a regional analysis of program effectiveness in this region. In that case, search criteria should be expanded to include child neglect, seeing as it is the most common reason for CPS reports in this region.

Second, only half of the included trials had a follow-up assessment, of which only 14% followed participants for more than 6 months. Only one notably strong trial (Chaffin et al. 2012) had a longitudinal design, with a 6-year follow-up period.
Longer follow-up periods in other similar trials would be necessary to understand the long-term effects of parenting programs.

Third, some decisions made during the selection of studies for inclusion might have introduced bias in this review. For instance, one of the included trials (Chaffin et al. 2012) barely met participant inclusion criteria: the proportion of physically abusive parents was 14% instead of the set minimum of 15%. However, given that only 1% was missing in this instance, an exception was made. Another exception was made for the MacMillan et al. (2005) trial, where the number of physically abusive parents was not reported, but the overall quality of the trial and perfect fit with other inclusion criteria prompted its exceptional inclusion. Future reviews should revisit the conceptual framework for setting the threshold for inclusion at 15%, considering the low reporting rates for this specific type of abuse.

Lastly, while the statistical heterogeneity in the meta-analysis was low, the clinical heterogeneity present in the set of included studies might need to be carefully considered. The interventions grouped under the umbrella category "parenting programs" included a diversity of components, dosages, delivery settings, and other elements. Nonetheless, this variability is advantageous for the purposes of exploring the effectiveness of the theory of change (i.e., the underlying principles of SLT-based programs), as opposed to any one particular intervention modality or program. This said, when the evidence base is large enough, future reviews should include subgroup analyses so as to better understand how intervention and participant characteristics might be influencing the observed effect. For instance, it would be interesting to explore the differential effects that parent training might have on different types of families (e.g., families with substance abuse issues, single-parent families).
Meta-analyses of pooled individual-level data from trials on parenting programs could elucidate differences between types of participants in subsequent synthesis efforts.

Future research should also focus on understanding how parenting programs work and how their effectiveness can be improved, by exploring the specific mechanisms through which programs reduce or prevent child maltreatment. This is because parenting interventions are complex intervention packages that include multiple interacting components related to parenting knowledge, principles, and skills (Kaehler et al. 2016). Knowing which core components are driving effectiveness can help optimize interventions by making them briefer, more effective and cost-effective, and improving implementation, reach, uptake, replicability, and sustainability of effects (Elliott and Mihalic 2004; Leijten et al. 2015; Glasziou et al. 2008; Linnan and Steckler 2002). Bentovim and Elliott (2014) initiated the important task of identifying core components of parenting interventions by employing a "distillation and matching" technique on a few selected RCTs that found parenting training effective for the treatment of physical abuse recidivism. Methodologies such as meta-analysis of components (e.g., Kaminski et al. 2008) could also be used in this context to systematically and retrospectively explore which intervention components are related to the strongest effect sizes.

This review ought to be replicated and updated as more and better-quality evidence becomes available. However, at present, it is defensible to conclude that targeting the parent–child relationship through SLT-based behavioral parenting programs can be an effective treatment for preventing recurrence of child physical abuse, at least in a North American context.

## Funding

This work was supported by a grant from the UBS Optimus Foundation to Frances Gardner (PI), Patty Leijten, and G.J. Melendez-Torres. Additionally, G.J. Melendez-Torres was part-supported by the National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care West Midlands. This paper presents independent research, and the views expressed are those of the authors and not necessarily those of the UBS Optimus Foundation, NHS, the NIHR, or the Department of Health.

## Compliance with Ethical Standards

### Conflict of Interest

None of the authors of this review have any conflicts of interest to declare. Furthermore, because this research did not involve human subjects, it did not require consent or assent forms that needed approval by an ethics committee. Nonetheless, this review was approved by the University of Oxford Social Policy and Intervention Department's ethics committee (i.e., DREC) in the spring of 2015.
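As an illustrative aside on the meta-analysis reported above: the pooled risk difference and the "about nine families treated per incident prevented" figure follow mechanically from inverse-variance pooling of per-trial risk differences. The sketch below is a generic fixed-effect (inverse-variance) scheme, not the review's exact model, and the trial counts in the demo are hypothetical placeholders, not data from the included studies.

```python
import math

def risk_difference(events_t, n_t, events_c, n_c):
    """Risk difference (treatment minus control) and its variance for one trial."""
    p_t, p_c = events_t / n_t, events_c / n_c
    rd = p_t - p_c
    var = p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c
    return rd, var

def pool_inverse_variance(effects):
    """Fixed-effect pooled estimate, 95% CI, and I^2 for (estimate, variance) pairs."""
    weights = [1.0 / var for _, var in effects]
    pooled = sum(w * est for w, (est, _) in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    ci = (pooled - 1.96 * se, pooled + 1.96 * se)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(w * (est - pooled) ** 2 for w, (est, _) in zip(weights, effects))
    df = len(effects) - 1
    i_squared = max(0.0, (q - df) / q) if q > 0 else 0.0
    return pooled, ci, i_squared

def number_needed_to_treat(rd):
    """Families to treat to prevent one event: NNT = 1 / |RD|."""
    return round(1.0 / abs(rd))

if __name__ == "__main__":
    # Hypothetical (events_t, n_t, events_c, n_c) counts for four trials.
    trials = [(8, 50, 14, 50), (20, 110, 30, 108), (5, 40, 9, 42), (60, 400, 90, 410)]
    effects = [risk_difference(*t) for t in trials]
    pooled, ci, i2 = pool_inverse_variance(effects)
    print(f"pooled RD = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), I^2 = {i2:.1%}")
    # A pooled RD of -0.11, as reported in the review, gives an NNT of about 9.
    print("NNT for RD = -0.11:", number_needed_to_treat(-0.11))
```

A random-effects model (e.g., DerSimonian–Laird) would add a between-trial variance term to the weights; the fixed-effect version above is shown only because it is the simplest inverse-variance scheme.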
It's a secret. But one thought leader, Mark Hughes, has leaked it. You can find it in his book Buzzmarketing. So I'm doing the same thing to show you how to get a job with Yahoo.
To get people talking about you, you create a buzz. People start to talk about you. Eventually people start coming back at you. Mark Hughes tells the story of the little town of Halfway in Oregon. Using his approach, he got everyone talking, after he persuaded the town it would be a great idea to earn some free publicity. The idea was put in place: the town renamed itself Half.com. The web publicity worked its magic (or so Mark tells us).
Later, he pulled together his experience in buzzmarketing, and came up with six ways to get people talking about your idea, and therefore about you.
Now I'm going to leak the secret. It's not even sneaky, because I've done it in a win-win way (I hope). I've added to the buzz about Mark Hughes as a leader we deserve, and maybe tested out if it attracts some folk to Leaders we Deserve. Get it? The secret is to give away a secret. That's how the web works. To them that give away, shall it be given.
So what's the secret of getting a job at Yahoo?
Another student, a PhD in something on the hard side of quantum physics, taught me about buzzmarketing. Which is why I'm writing this post. I understand he's inviting Yahoo on to campus, and after that Mark Hughes. (Sorry, that's Mark Hughes the buzz marketer, not the football manager.)
He's a really cool, buzzy guy for a PhD. Maybe he's figured out how to become employed, maybe with Yahoo. Then, to load the bases, he tells everyone to turn up in business dress. I ask you: what are the chances Yahoo don't rate formal dress? I think he'll be up there, but not frocked up in business gear. That's another secret for getting a job with Yahoo.
package de.intarsys.cwt.font;
/**
* An abstract font description object.
*
*/
public interface IFont {
/**
* The font family name
*
* @return The font family name
*/
public String getFontFamilyName();
/**
* The font name. This may deviate from the postscript font name for
* TrueType fonts.
*
* @return The font name.
*/
public String getFontName();
/**
* The canonical font name.
*
* @return The canonical font name.
*/
public String getFontNameCanonical();
/**
* The postscript font name.
*
* @return The postscript font name.
*/
public String getFontNamePostScript();
/**
* The referenced {@link IFontProgram}.
*
* @return The referenced {@link IFontProgram}.
*/
public IFontProgram getFontProgram();
/**
* The font style.
*
* @return The font style.
*/
public FontStyle getFontStyle();
/**
* The font type. This is for example "TrueType" or "Type1".
*
* @return The font type.
*/
public String getFontType();
}
\section{Introduction}
Finding the ground state configurations of a complex energy landscape is a long-standing computational challenge \cite{wales2006potential}. Short of brute-force enumeration, random search algorithms such as simulated annealing can anneal Markov chains to the global minimum as the simulation temperature approaches zero \cite{kirkpatrick1983optimization}. However, in cases where many interacting degrees of freedom result in highly rugged energy landscapes, conventional methods suffer from a low probability of overcoming energy barriers, and the chain may get stuck in local minima \cite{papadimitrou1982combinatorial, landau2014guide, WOLFF199093}.
The classical methods for searching energy landscapes are devised to work for general problems. Yet, many scientific problems often present themselves via an \textit{ensemble} of energy landscapes with similar underlying patterns, with interactions arising from a single or handful of governing equations. Examples include the energy landscapes of organic molecules built out of chemical building blocks, where potential energies are obtained by solving Schr\"{o}dinger equation, or the space of protein structures from interactions of individual amino acids. In an ensemble setting, we hypothesize that there exist {\sl system-specific} sampling rules \cite{wolff1989collective, wang1990cluster, houdayer2001cluster} that make it possible to traverse these particular energy landscapes more efficiently than classical methods. These rules can be learned from examples of energy minima calculated with classical methods for small problems.
Here we demonstrate this approach in the context of a model problem that defines a natural ensemble. We construct Ising spin glasses \cite{mezard1987spin, edwards1975theory}, where the interaction matrix $J$ is a \textit{structured random} matrix, chosen from protein contact maps. Given the large database of natural proteins \cite{berman2000protein} and the distinctive contact pattern of a folded protein \cite{nelson2008lehninger}, protein contact map data gives an ideal ensemble for testing whether interaction rules encoded in $J$'s are consistent across varying system sizes.
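For concreteness, the objective can be stated in the standard Edwards--Anderson form (one common sign convention):
\begin{equation}
E(\mathbf{s}) \;=\; -\sum_{i<j} J_{ij}\, s_i s_j, \qquad s_i \in \{-1, +1\},
\end{equation}
where $J_{ij}$ is nonzero only for residue pairs $(i,j)$ in contact in the protein map, and a ground state is any $\mathbf{s}^{\ast} = \operatorname{arg\,min}_{\mathbf{s}} E(\mathbf{s})$.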
In recent years, several machine learning techniques have been applied to sample spin configurations of the Ising model. The list includes but is not limited to simple regression \cite{liu2017self}, restricted Boltzmann machines \cite{huang2017accelerated}, reinforcement learning \cite{bojesen2018policy}, autoregressive models \cite{wu2019solving, mcnaughton2020boosting}, and normalizing flow models \cite{hartnett2020self}. The goal of these works is to estimate the Boltzmann distribution of a given problem so that the learned model can either completely replace, or be used as a proposal distribution for, Markov Chain Monte Carlo simulation. However, these schemes do not consider learning with $J$ instances of varying sizes.
Instead we recast spin glass energy minimization, a well known NP-hard problem \cite{barahona1982computational}, as a node classification problem in graph theory, and employ a graph neural network (GNN) \cite{wu2020comprehensive} to parametrize the mapping from a $J$ to the corresponding ground state configuration. We generate the set of most probable configurations from the GNN model to predict low-lying configurations of an energy landscape. If this configuration set misses a ground state configuration, we show that simulated annealing starting from a configuration in this set can search for the ground state configuration more efficiently. The schematic of this strategy is described in Fig.~\ref{fig:intro}.
We further test the utility of the GNN model by constraining the size of $J$---where size refers to the number of amino acids---in the training set and testing the trained model on larger $J$'s. As we increase the size limit of the training-set $J$'s from 30 to 500, the model's test performance quickly reaches a level comparable to that obtained with a size limit of 800. We also show that the model trained on $J$'s with size less than 800 can predict configurations whose energies are much lower than those found by simulated annealing for $J$'s with size around 3000.
\setcounter{figure}{0}
\begin{figure*}
\centering
\begin{adjustbox}{center}
\includegraphics[width=2\columnwidth]{figs/schematics}
\end{adjustbox}
\caption{Schematic of model formulation and ground state prediction. Binary matrices $J$ obtained from protein structures define a set of Ising model Hamiltonians. The resulting potential energy landscapes are similar since the $J$'s carry the connectivity patterns of natural protein folds. We train a graph neural network on the $\sigma_{min}$ found from simulated annealing. The model aggregates the nearest-neighbor information for all spins at each layer; thus an $L$-layer model can account for $L$-hop neighborhood information. As the model learns the rule of local interaction, it predicts a configuration which, if not already the ground state, can be improved by simple configuration enumeration and Monte Carlo sampling.}
\label{fig:intro}
\end{figure*}
We begin by constructing an ensemble of Hamiltonians for which the underlying potential energy landscapes have similar patterns.
For simplicity, we consider Ising Hamiltonians of the form
\begin{equation}
\mathcal{H}(\sigma) = - \frac{1}{2} \sum_{i,j}^N J_{ij} \sigma_i \sigma_j + h \sum_i^N \sigma_i, \quad h = \frac{\sum J_{ij}}{2N}
\label{eq:one}
\end{equation}
where both the coupling and field terms depend on an interaction matrix $J$, with $J_{ij}, \sigma_i \in \{0, 1\}$. The field is chosen to prevent all ground state configurations from collapsing to the trivial ground state of all 1's. Within this formulation, obtaining an energy landscape ensemble requires specifying an ensemble of $J$ matrices that are random yet share distinct patterns. Since $J_{ij}$ is binary, this structured randomness of $J$ must be encoded in the spin connectivity.
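For concreteness, Eq.~\ref{eq:one} can be evaluated directly from $J$ and a configuration. The NumPy sketch below uses a hypothetical 3-spin $J$ (not drawn from the protein ensemble) to illustrate the $\sigma_i \in \{0,1\}$ convention and the field $h = \sum J_{ij}/2N$:

```python
import numpy as np

def ising_energy(J, sigma):
    """Energy of Eq. (1): H = -1/2 * sum_ij J_ij s_i s_j + h * sum_i s_i,
    with h = sum(J) / (2N) and s_i in {0, 1}."""
    N = len(sigma)
    h = J.sum() / (2 * N)
    return -0.5 * sigma @ J @ sigma + h * sigma.sum()

# Toy 3-spin example: only spins 0 and 1 interact.
J = np.array([[0, 1, 0],
              [1, 0, 0],
              [0, 0, 0]])
sigma = np.array([1, 1, 0])
# h = 2/6; coupling term = -1, field term = 2/3
print(ising_energy(J, sigma))  # ≈ -1/3
```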
In this work, we use protein contact maps to construct the $J$ ensemble. Since proteins are characterized by distinct secondary structures, together with non-local contacts, protein contact maps define a set of structured random connectivity matrices. We downloaded the first subunit of all protein structure files deposited in the Protein Data Bank \cite{rcsbPDB} to ensure the $J$'s do not have a distinct block-diagonal structure due to the presence of multiple domains. Hence the connectivity features in our $J$ ensemble solely originate from the pattern of intra-domain folding, referred to as {\sl secondary} and {\sl tertiary} protein structure. We excluded proteins with missing spatial information or whose chain is shorter than 20 residues or longer than 800 residues. We additionally added the two largest subunit structures, with chain lengths of 3661 and 2814, for the size-generalizability experiments. From these files, we generated contact maps by setting $J_{ij}$ to 1 if the distance between the two corresponding amino acid residues is less than 8~\AA, and 0 otherwise \cite{monastyrskyy2014evaluation}. From this procedure, we obtained 64563 different contact maps, excluding the two large cases, to define our $J$ ensemble. We emphasize that the spin configurations derived from the energy function of Eq.~\ref{eq:one} have no relation to amino acid sequences; our intent here is not to make predictions about proteins {\sl per se}, but to use the regularity of protein structures to define a natural ensemble.
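A minimal sketch of the contact-map construction, assuming each residue is reduced to a single representative coordinate (which atom is used is not specified here) and applying the 8~\AA{} cutoff:

```python
import numpy as np

def contact_map(coords, cutoff=8.0):
    """Binary J from residue coordinates: J_ij = 1 if the pair is
    closer than `cutoff` angstroms; the diagonal is zeroed."""
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    J = (d < cutoff).astype(int)
    np.fill_diagonal(J, 0)
    return J

# Three residues on a line, 5 A apart: only adjacent pairs are in contact.
coords = np.array([[0.0, 0, 0], [5.0, 0, 0], [10.0, 0, 0]])
print(contact_map(coords))  # [[0 1 0], [1 0 1], [0 1 0]]
```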
For all $J$'s except the two largest, we ran simulated annealing for each $J$ starting from 100 random initial configurations, and selected the annealed configuration with the lowest energy as its purported ground state configuration $\sigma_{min}$. The annealing schedule was optimized such that simulated annealing always finds the ground state configurations of $J$'s with size smaller than 30, which we identified by brute-force enumeration of all configurations. For the two largest $J$'s, we decreased the cooling rate and increased the number of equilibration steps at each temperature to account for the enlarged configuration space, and ran 30 randomly initialized simulated annealings. Further simulation details are discussed in the Supplemental Material \footnote{See supplemental material at ...}. Since finding the global energy minimum of an Ising spin glass in the $2^N$ configuration space is NP-hard, we settled for this repeated annealing scheme and assume $\sigma_{min}$ closely approximates the actual ground state configuration. From all pairs of $J$ and $\sigma_{min}$, 6400 pairs were randomly selected as the validation set, another 6400 pairs as the test set, and the remaining 51763 pairs as the training set. For the first size-generalizability experiment, we used the same test set but sub-selected from the training set the pairs whose $J$'s are smaller than certain size cutoffs to make \textit{small-$J$} training sets. For the second size-generalizability experiment, we used the entire training set and tested on the two large $J$'s.
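A single-spin-flip Metropolis annealer for Eq.~\ref{eq:one} can be sketched as below. The schedule parameters are illustrative placeholders rather than the paper's tuned schedule, and the incremental energy update assumes a zero diagonal in $J$ (true for contact maps):

```python
import numpy as np

def anneal(J, T0=2.0, Tmin=0.01, cooling=0.95, sweeps=20, rng=None):
    """Single-spin-flip Metropolis annealing for Eq. (1).
    Returns the best configuration seen and its energy."""
    rng = np.random.default_rng() if rng is None else rng
    N = len(J)
    h = J.sum() / (2 * N)
    sigma = rng.integers(0, 2, N)
    E = -0.5 * sigma @ J @ sigma + h * sigma.sum()
    best, best_E = sigma.copy(), E
    T = T0
    while T > Tmin:
        for _ in range(sweeps * N):
            i = rng.integers(N)
            ds = 1 - 2 * sigma[i]          # +1 if flipping 0->1, -1 if 1->0
            dE = ds * (h - J[i] @ sigma)   # assumes J[i, i] == 0
            if dE < 0 or rng.random() < np.exp(-dE / T):
                sigma[i] += ds
                E += dE
                if E < best_E:
                    best, best_E = sigma.copy(), E
        T *= cooling
    return best, best_E
```

Seeding `sigma` with a model prediction instead of a random draw (and lowering `T0`) gives the seeded variant used later in the text.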
Our prediction task is to learn the mapping from $J{\to}\sigma_{min}$.
This can be cast as a node classification problem in graph theory, and hence we parametrize the mapping with a graph neural network. Given a graph, an $L$-layer model generates an expressive feature embedding for node $\sigma_i$ by aggregating the features of all $L$-hop neighbors of $\sigma_i$, as shown in Fig.~\ref{fig:intro}, and uses this embedding to classify globally whether each node shall be turned on or off. To allow generalization of the mapping across $J$'s with different sizes and structures, we chose a message passing framework \cite{scarselli2008graph, gilmer2017neural} with an attention mechanism \cite{vaswani2017attention, velickovic2017graph}, instead of Laplacian-based convolution methods \cite{defferrard2016convolutional, kipf2016semi}, which require a constant graph structure.
The inputs to the graph neural network are the adjacency matrix $J$ and node features, which are initially the node degree and the field strength $h$ from Eq.~\ref{eq:one}. At each layer, the network updates node features by first applying a standard nonlinear transformation---expanding the feature dimension from 2 to $F$---then calculating attention coefficients $\alpha_{ij}$ to find the relative importance of a neighbor node $j$ to node $i$, and taking a weighted sum of the neighbor nodes' features using these coefficients. To capture more information from neighbors, this process is repeated $K$ times with different sets of weights and newly computed $\alpha_{ij}^k$ to produce $K{\times}F$ features for each node. The features of node $i$ are then reduced to a probability $P(\sigma_i{=}1)$ in the final layer for node classification. The functional forms of the operations are detailed in the Supplemental Material \cite{Note1}. The final model used in this work consists of six layers.
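The exact layer definitions live in the Supplemental Material; as an illustration only, a generic single-head graph-attention aggregation in the spirit of the cited GAT work \cite{velickovic2017graph}---not necessarily the paper's exact layer---can be sketched as:

```python
import numpy as np

def attention_layer(J, H, W, a):
    """One single-head graph-attention aggregation:
    alpha_ij = softmax_j LeakyReLU(a . [W h_i || W h_j]) over j in N(i)
    (self-loop included), output_i = sum_j alpha_ij * W h_j."""
    Z = H @ W                               # (N, F) transformed features
    N = len(J)
    A = J + np.eye(N, dtype=int)            # include self-connection
    out = np.zeros_like(Z)
    for i in range(N):
        nbrs = np.flatnonzero(A[i])
        e = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)     # LeakyReLU
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()                # softmax over the neighborhood
        out[i] = alpha @ Z[nbrs]
    return out
```

With a zero attention vector the coefficients become uniform, so the layer reduces to a neighborhood mean---a useful sanity check on the implementation.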
The model's predicted configuration, $\hat{\sigma}$, is then obtained by choosing the greater of the two node classification probabilities, $\texttt{argmax}[P(\sigma_i{=}0) \ P(\sigma_i{=}1)]$. However, this point estimate does not take full advantage of the learned embedding. The scheme is especially problematic for nodes with probabilities around 0.5, because the non-argmax configurations would have been almost as likely. Therefore, we generate a set of the top most probable configurations from the configuration probability output of the model, giving broader coverage of the low-lying region of the energy landscape. To obtain $M$ such configurations, we pick the $\log_2{M}$ nodes whose $P(\sigma{=}1)$ are closest to 0.5, and rank all permuted configurations of those nodes according to their combined probabilities.
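Treating the node probabilities as independent, the top-$M$ enumeration can be sketched as follows (the ranking by total log-probability is our reading of the combined-probability rule, and probabilities are assumed to lie strictly between 0 and 1):

```python
import numpy as np
from itertools import product

def top_configs(p1, M):
    """M most probable configurations under independent node
    probabilities p1 = P(sigma_i = 1): fix confident nodes by argmax,
    permute the ~log2(M) nodes closest to 0.5, rank by log-probability."""
    p1 = np.asarray(p1, dtype=float)        # assumed strictly in (0, 1)
    base = (p1 >= 0.5).astype(int)          # argmax point estimate
    k = int(np.ceil(np.log2(M)))
    uncertain = np.argsort(np.abs(p1 - 0.5))[:k]
    configs = []
    for bits in product([0, 1], repeat=len(uncertain)):
        s = base.copy()
        s[uncertain] = bits
        logp = np.sum(np.log(np.where(s == 1, p1, 1 - p1)))
        configs.append((logp, s))
    configs.sort(key=lambda t: -t[0])
    return [s for _, s in configs[:M]]
```

In the paper's pipeline, the energies of these $M$ configurations are then evaluated and the lowest-energy one becomes $\hat{\sigma}_{top}$.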
A GNN model trained on the entire training set correctly predicted $\sigma_{min}$ for 1700 of the 6400 $J$'s in the held-out test set. To further quantify the model's performance, we investigate the following two metrics: the accuracy, defined as the ratio of correctly predicted nodes in $\hat{\sigma}$ to the total number of nodes, and the energy difference $\Delta E$, defined as the energy gap between a predicted configuration and the true ground state. Fig.~\ref{fig:two}(a) shows that the prediction accuracy decreases, while the energy difference increases, with increasing size of $J$. The average accuracy and $\Delta E$ across the entire ensemble are $0.978$ and 2.79, respectively, owing to the size distribution of $J$ being skewed towards small $J$'s (Supplemental Material Fig.~S1 \cite{Note1}). Since energy histograms of small $J$'s obtained via complete configuration enumeration are peaked at positive energies, and negative-energy configurations occur in the far-left tail (Supplemental Material Fig.~S2 \cite{Note1}), predicting configurations with such small energy differences is surprising. We emphasize again that the model does not evaluate the energy function of Eq.~\ref{eq:one} to optimize a configuration. This suggests the GNN model has learned a generalizable node feature transformation for this particular class of energy landscapes simply by comparing its predicted configurations to known ground state configurations.
\begin{figure}
\centering
\begin{adjustbox}{center}
\includegraphics[width=\columnwidth]{figs/Fig2}
\end{adjustbox}
\caption{(a) Test set performance as a function of the size of $J$, measured in accuracy (blue) and in $\Delta E$ (red). Each point in the curve reports the average value over all $J$'s within a size window of 100. Accuracy is the fraction of correctly classified nodes, and $\Delta E$ measures the difference between $E(\hat{\sigma})$ and $E(\sigma_{min})$. (b) Histogram of the classification probability for predictions whose accuracy is above 0.97 (blue) and below 0.7 (orange). Inset shows the fraction of misclassified nodes among \textit{confident} nodes as a function of the threshold probability imposed to select those nodes.}
\label{fig:two}
\end{figure}
\begin{table*}
\caption{\label{tab:table1} Summary of the size-generalizability experiment on the test set. Accuracy, energy offset, and the number of ground state matches in the 6400 test set $J$'s are reported for the GNN's prediction $\hat{\sigma}$, the lowest-energy configuration of the top most probable set $\hat{\sigma}_{top}$, and seeded annealing $\hat{\sigma}_{anneal}$.}
\begin{ruledtabular}
\begin{tabular}{cc|ccc|ccc|ccc|c}
\multicolumn{2}{c}{Training set} &
\multicolumn{3}{c}{$\hat{\sigma}$} & \multicolumn{3}{c}{$\hat{\sigma}_{top}$} & \multicolumn{3}{c}{$\hat{\sigma}_{anneal}$} \\
\cline{3-11}
size cutoff & \# $J$'s & $\hat{\sigma}{=}\sigma_{min}$ & acc. & $\Delta E$ & $\hat{\sigma}_{top}{=}\sigma_{min}$ & acc. & $\Delta E$ & $\hat{\sigma}_{anneal}{=}\sigma_{min}$ & acc. & $\Delta E$ & \# $\sigma_{min}$ found \\
\hline
30 & 560 & 14 & 0.866 & 23.84 & 150 & 0.880 & 17.22 & 2824 & 0.926 & 3.68 & 2988 \\
40 & 1301 & 65 & 0.901 & 11.05 & 260 & 0.909 & 8.68 & 2756 & 0.923 & 3.64 & 3081 \\
50 & 1839 & 388 & 0.944 & 5.15 & 1079 & 0.954 & 3.00 & 2383 & 0.932 & 2.68 & 3850 \\
100 & 7319 & 511 & 0.953 & 4.12 & 1265 & 0.962 & 2.23 & 2272 & 0.932 & 2.74 & 4048\\
200 & 24589 & 784 & 0.964 & 3.20 & 1398 & 0.972 & 1.65 & 2222 & 0.942 & 1.85 & 4404 \\
300 & 36130 & 1442 & 0.972 & 2.79 & 1638 & 0.977 & 1.32 & 1454 & 0.938 & 1.93 & 4534\\
400 & 43190 & 1497 & 0.974 & 2.47 & 1687 & 0.980 & 1.18 & 1503 & 0.943 & 1.45 & 4687\\
500 & 47631 & 1519 & 0.976 & 2.36 & 1738 & 0.981 & 1.12 & 1463 & 0.947 & 1.28 & 4720 \\
800 & 51763 & 1700 & 0.978 & 2.31 & 1673 & 0.983 & 1.13 & 1417 & 0.953 & 1.07 & 4790\\
\end{tabular}
\end{ruledtabular}
\end{table*}
Fig.~\ref{fig:two}(b) shows the averaged histogram of the node classification probability $P(\sigma{=}1)$ for high-accuracy configurations in blue and for low-accuracy configurations in orange. A striking feature is that most nodes in both cases are predicted with high certainty, as evinced by the peaks at both ends. In addition, the histogram of low-accuracy configurations shows more nodes in the middle, indicating that the model's prediction accuracy may be directly related to the node classification probability $P(\sigma)$. We thus set a threshold probability $P_{thr}$ to select nodes with low uncertainty, where $P(\sigma_i{=}1) \geq P_{thr}$ or $P(\sigma_i{=}1) < 1{-}P_{thr}$, and calculated the error rate among these nodes as $P_{thr}$ is varied. As shown in the inset of Fig.~\ref{fig:two}, the number of misclassified nodes among such nodes goes down as we increase the threshold. This in turn confirms that most misclassifications indeed occur among \textit{uncertain} nodes in the middle region of the histogram.
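The threshold analysis of the inset amounts to the following computation, sketched here with made-up probabilities and labels:

```python
import numpy as np

def confident_error_rate(p1, truth, p_thr):
    """Fraction of misclassified nodes among 'confident' nodes, i.e.
    those with P(sigma=1) >= p_thr or P(sigma=1) < 1 - p_thr."""
    p1, truth = np.asarray(p1), np.asarray(truth)
    confident = (p1 >= p_thr) | (p1 < 1 - p_thr)
    pred = (p1 >= 0.5).astype(int)
    wrong = (pred != truth) & confident
    return wrong.sum() / confident.sum()

p1 = [0.95, 0.9, 0.6, 0.4, 0.05]
truth = [1, 0, 1, 1, 0]
# Nodes 0, 1, 4 are confident at p_thr = 0.8; node 1 is wrong.
print(confident_error_rate(p1, truth, 0.8))  # ≈ 1/3
```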
\begin{figure}
\centering
\begin{adjustbox}{center}
\includegraphics[width=\columnwidth]{figs/Fig3}
\end{adjustbox}
\caption{Number of sampling steps taken to reach the ground state configuration for simulated annealing launched from a random configuration (blue) and from the lowest-energy configuration of the top most probable configurations (orange). Each point reports the average value from 10 trials for random annealing and 5 trials for seeded annealing. We also include the minimum over the 5 trials of the seeded annealing experiment (green).}
\label{fig:three}
\end{figure}
Given that there are only a handful of uncertain nodes, the set of top most probable configurations can account for most permutations of their node configurations, because the first few nodes to be changed are those with $P(\sigma_i{=}1) {\approx} 0.5$. This enumerated set allows for coverage of the configuration space around the model's initial prediction. We enumerated the top 1000 most probable configurations for each $J$ in the test set to cover the 10 most uncertain nodes, since $1000 {\approx} 2^{10}$. We then calculated the energies of these configurations and picked the lowest-energy configuration as an improved prediction of the model, $\hat{\sigma}_{top}$. From this procedure, we additionally found the ground state configurations of 1673 $J$'s. This improvement of $\hat{\sigma}$ by configuration enumeration suggests that the uncertain nodes contain frustrated nodes to which the configuration energy is highly sensitive and, thus, that the GNN model has an implicit representation of energy in the node embedding.
For the half of the test set where our model missed the ground state configurations, the predicted configurations still have small energy differences relative to the ground state configurations. We exploited this by running 5 simulated annealings with $\hat{\sigma}_{top}$ as the starting configuration for each remaining $J$. Since we are now annealing from a low-lying point in the energy landscape, the starting temperature of the annealing should be concurrently decreased to prevent the chain from sampling arbitrarily high energy states. We used the temperature value at which the energy trajectory of sampled states drifts up to slightly higher energies at the beginning, allowing for initial exploration of the energy landscape \cite{Note1}. The $\hat{\sigma}_{top}$-seeded simulated annealing found the ground state configurations of an additional 1417 $J$'s, with about two orders of magnitude reduction in the average number of sampling steps, as shown in Fig.~\ref{fig:three}. In about 20\% of cases, the minimum number of sampling steps over the 5 trials was only a few hundred, as only one or two nodes were misclassified in $\hat{\sigma}_{top}$. In total, we found ground state configurations for 75\% of the test set $J$'s. This seeded simulated annealing result shows that the predicted configuration falls in the vicinity of $\sigma_{min}$, often close enough that simulated annealing can locate $\sigma_{min}$. Given the top most probable configurations, we could also run simulated annealing from other configurations, or perform parallel tempering \cite{swendsen1986replica, earl2005parallel} with multiple configurations, to account for the possibility of $\hat{\sigma}_{top}$ falling in a basin too far from the one containing $\sigma_{min}$.
\begin{figure}
\centering
\begin{adjustbox}{center}
\includegraphics[width=\columnwidth]{figs/Large_J_edit}
\end{adjustbox}
\caption{Comparison of randomly initialized simulated annealing and the GNN model predictions on $J$'s with size 3661 (left) and 2814 (right). Shown in blue are the energies of random initial configurations. Due to the prohibitively large configuration space, we had to lengthen the annealing schedule to get the Markov chain to anneal down below -200. The GNN model's predictions, shown as green diamonds, beat this dedicated simulated annealing effort. Configuration enumeration and seeded simulated annealing reach down further.}
\label{fig:four}
\end{figure}
To test the size generalizability of the GNN model, we first trained models on eight \textit{small-$J$} training sets with increasing size cutoffs and tested them on the existing test set. As shown in Table~\ref{tab:table1}, the test set accuracy and energy difference roughly reach those of the original model trained on the entire training set when the size cutoff for $J$ is above 300. Note that the number of matches between the annealed configurations $\hat{\sigma}_{anneal}$ and $\sigma_{min}$ decreases as the size cutoff increases, because most $\sigma_{min}$'s are already recovered through model predictions. In all cases, configuration enumeration and seeded simulated annealing improve upon the initial model predictions. If $\hat{\sigma}$, $\hat{\sigma}_{top}$, and $\hat{\sigma}_{anneal}$ are considered together, the GNN model provides comparable performance even at a size cutoff of 200. The local interaction pattern of 6-hop neighborhoods in a protein shorter than 100 amino acids should be similar enough to that of a much longer chain. It is thus likely that the relatively poor performance with training sets with size cutoffs below 100 is simply due to a limited amount of available data.
To test this hypothesis in a more practical use-case setting, we tested the original model on the $J$'s of size 3661 and 2814. For the $J$ of size 3661, the model predicted $\hat{\sigma}$ with energy -443, and configuration enumeration further improved the energy to -447, whereas the lowest energy found from 30 randomly initialized simulated annealing runs was -432. For the $J$ of size 2814, we obtained energies of -366 for $\hat{\sigma}$ and -370 for $\hat{\sigma}_{top}$, whereas randomly initialized simulated annealing only reached down to -362. As in the previous analysis, we launched simulated annealing from $\hat{\sigma}_{top}$ and obtained annealed configurations with energies -463 and -391 for the $J$'s of size 3661 and 2814, respectively. Fig.~\ref{fig:four} highlights the efficiency of the GNN model over randomly initialized simulated annealing.
Our work shows that it is indeed possible to use an ensemble of energy landscapes with known ground state configurations to train a neural network to deduce the ground state configurations of similar energy landscapes. On our model problem, we found the ground state configurations deterministically for 50\% of the held-out test set $J$'s and stochastically for an additional 25\%, through a graph neural network, top configuration enumeration, and seeded simulated annealing. Although this number may appear modest, we emphasize that all configurations predicted by the model were extremely low-lying configurations, often in the vicinity of the ground state configurations. Since the loss function does not include other local minima---or, for that matter, the energy function itself---we believe that such an informed prediction is possible only if the learned node feature embedding of the GNN correctly captures the local interaction rules encoded in the $J$ interaction matrices, and hence the topological undulation of the configuration space.
In addition, we showcased the practical utility of the GNN model with size-generalizability experiments. The GNN model predicted configurations that could not be reached by naive simulated annealing with random initial guesses, and we were able to improve them further by combining the enumeration scheme with seeded simulated annealing. The GNN model therefore presents an appealing method for producing extremely good initial guesses for a class of energy landscape problems where the governing physics is local.
In future work, we will apply this framework to a variety of problems where the discovery of global minima would have technological consequences.
\begin{acknowledgments}
We thank Lucy Colwell for suggesting protein contact maps as a model system for ensembles of energy landscapes, and for helpful discussions. We also thank Yohai Bar-Sinai, Carl Goodrich, Mor Nitzan, Mobolaji Williams and Jong Yeon Lee for helpful discussions. This work is supported by the Office of Naval Research through the grant N00014-17-1-3029, as well as the Simons Foundation.
\end{acknowledgments}
TAGS: HCM City
Young singer and songwriter Jack from HCM City was honoured at the 25th annual Asian Television Awards (ATA) in Singapore.
Many universities in Việt Nam are developing free massive open online courses (MOOCs) in a bid to contribute to a learning society accessible to everyone in the community.
Prime Minister Nguyễn Xuân Phúc has asked the Ministry of Transport to work with localities to take old motor vehicles that do not meet circulation standards off the roads.
City allocates land for physical training and sports facilities
HCM City will give priority to land in areas such as Thủ Đức City, Cần Giờ and Bình Chánh districts to develop physical facilities, according to the city's People's Committee.
Propzy to support start-ups
Proptech company Propzy is offering to provide technical support to start-ups from this year.
Experts in Hà Nội yesterday pointed out many opportunities for Vietnamese and Indian enterprises in Việt Nam's pharmaceutical industry.
Awakening the sleeping beauty on the banks of the Vam Co Dong River
Like a beautiful princess who slept for many years in her own dream by the Vam Co River and then suddenly awakened one day, Tay Ninh has become a must-visit destination for many tourists seeking adventure in Viet Nam.
Mondelez Kinh Đô launches Tết campaign
The snack company Mondelez Kinh Đô Vietnam recently kicked off the Tết (Lunar New Year) season 2021 campaign, bringing new products to the market.
Sabeco launches Tết CSR programmes
The Saigon Beer - Alcohol - Beverage Corporation (SABECO) has recently launched "Tết Gắn Kết", a Tết corporate social responsibility (CSR) programme that represents the next phase of the "Rise with Việt Nam" programme.
City strengthens HIV prevention efforts, hopes to end transmission by 2030
HCM City will continue to strengthen HIV prevention efforts with tests for early detection and treatment among high-risk groups to help achieve the national goal of eradicating the spread of the disease by 2030.
THE EXTENSION OF INVITATION FOR COOPERATION
/* Read the number of resource types and the initial availability
 * vector from "initial.data"; allocates *available on the heap. */
void init_resources(unsigned int * numTypes, unsigned int ** available) {
    unsigned int i;
    FILE * file;

    file = fopen("initial.data", "r");
    if(file == NULL) {
        perror("fopen");
        exit(1);
    }
    fscanf(file, "%u", numTypes);
    *available = (unsigned int *) malloc(sizeof(unsigned int) * (*numTypes));
    if(*available == NULL) {
        fprintf(stderr, "Error allocating available resource vector of size %u.\n", *numTypes);
        fclose(file);
        exit(1);
    }
    /* One availability count per resource type. */
    for(i = 0; i < *numTypes; i++) {
        fscanf(file, "%u", *available + i);
    }
    fclose(file);
    printf("Number of resource types = %u\n", *numTypes);
}

/* Pretty-print a message: sender, serial number, return queue id,
 * decoded message type, and the resource vector. */
void display_msg(msgbuf_t * msgbuf, unsigned int numTypes) {
    unsigned int i;

    printf("\tClient id: %d\n", msgbuf->request.sender);
    if(msgbuf->request.inReply > 0)
        printf("\tSerial number: %u\n", msgbuf->request.inReply);
    else
        printf("\tSerial number: %u\n", msgbuf->request.serialNum);
    printf("\tReturn address: queueID %u\n", msgbuf->request.retAddr);
    printf("\tMessage type: %d-", (int) msgbuf->mtype);
    /* Decode the numeric message type into a human-readable label. */
    switch(msgbuf->mtype) {
        case 1:
            printf("Request");
            break;
        case 2:
            printf("Release");
            break;
        case 3:
            printf("Register");
            break;
        case 11:
            printf("Release All");
            break;
        case 4:
            printf("Request granted");
            break;
        case 5:
            printf("Request denied: unsafe");
            break;
        case 6:
            printf("Request denied: excessive");
            break;
        case 12:
            printf("Request denied: Unavailable");
            break;
        case 7:
            printf("Release successful");
            break;
        case 8:
            printf("Release failed");
            break;
        case 9:
            printf("Registration accepted");
            break;
        case 10:
            printf("Registration denied");
            break;
        default:
            printf("UNKNOWN");
    }
    printf("\n");
    printf("\tresources: { ");
    for(i = 0; i < numTypes; i++) {
        printf("%u, ", msgbuf->request.resourceVector[i]);
    }
    printf("}\n");
}
% Source: kocolosk/thesis, ch4-04-data-selection.tex

\subsection{Run and Event Selection}

This thesis analyzes collisions of polarized protons recorded by the STAR experiment during the years 2005 and 2006. A total of $11.5~pb^{-1}$ of longitudinally polarized data were recorded in $\sim$1800 STAR runs. The STAR run serves as the basic unit of quality assurance; if a run fails any part of the QA procedure the entire run is excluded from further analysis. The QA metrics considered in this analysis are listed in Table~\ref{tab:qa-metrics}. A run is rejected if its value for any metric falls more than 3$\sigma$ away from the mean of that metric's distribution. In addition, any runs shorter than 2 minutes are automatically discarded. Table~\ref{tab:dataset-luminosities} records the fraction of runs and integrated luminosity that survive this procedure.

The global track DCA metric is of particular interest. Figure~\ref{fig:dca-before} shows the initial time-series distribution of this metric for the 2006 data. The spike around run index 625 corresponds to a purge of the P10 gas in the TPC. The drift velocity in the TPC was varying rapidly during this period, and the standard procedure for recording drift velocity entries in the Calibrations DB for each laser run proved insufficient to track the variation. A second analysis of the laser data was able to extract additional drift velocity measurements and parameterize the exponential drop in drift velocity during the five day period following the purge, as shown in Figure~\ref{fig:dv-fit}. Finally, Figure~\ref{fig:dca-after} shows the track DCA distributions after the recalibration of the drift velocity.

\begin{table}
\centering
\begin{tabular}{|c|}
\hline
Quality Assurance Metrics \\
\hline
$z$ position of event vertex \\
BBC coincidence timebin \\
global track DCA to the primary vertex \\
number of jets per event \\
number of towers per jet \\
number of tracks per jet \\
jet $p_T$ \\
jet tower $p_T$ \\
jet track $p_T$ \\
\hline
\end{tabular}
\caption{Distributions analyzed in the QA procedure.}
\label{tab:qa-metrics}
\end{table}

\begin{figure}
\subfloat[][Before DV Recalibration]{
  \includegraphics[width=0.5\textwidth]{figures/dca-before}
  \label{fig:dca-before}
}
\subfloat[][After DV Recalibration]{
  \includegraphics[width=0.5\textwidth]{figures/dca-after}
  \label{fig:dca-after}
}
\caption{Global track DCA distributions as a function of time before and after the drift velocity recalibration. The horizontal green lines in (a) indicate the 3$\sigma$ cut before recalibration.}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/dv-fit}
\caption{Parameterization of additional drift velocity measurements allowing for fine-granularity tracking of the TPC drift velocity in the days following the P10 gas purge.}
\label{fig:dv-fit}
\end{figure}

Events in the runs surviving QA are selected for analysis if a) the event fired a jet patch trigger, b) the spin states of both beams have been successfully identified for the bunch crossing, and c) the event vertex position established from the BBCs is within a selection window.

\begin{table}
\centering
\begin{tabular}{|c|cc|}
\hline
Time Period & Runs & $\mathcal{L}^{-1} (pb^{-1})$ \\
\hline
2005/04/17 - 2005/06/24 & 739/1387 & 2.22/3.77 \\ % ppProduction running
% 2006/03/12 - 2006/04/06 & 0/447 & 0.00/2.45 \\ % ppProduction (Long1)
2006/05/12 - 2006/06/05 & 297/464 & 5.59/7.74 \\ % ppProductionLong (Long2)
\hline
\end{tabular}
\caption{Datasets analyzed in this work. Each cell lists the ratio of accepted data to recorded data for the period in question.}
\label{tab:dataset-luminosities}
\end{table}

\subsection{Pion Identification}

Charged pions are identified from the subset of primary tracks in each event having at least 25 fit points, a distance of closest approach (DCA) to the primary vertex of no more than 1 centimeter, a pseudorapidity magnitude less than 1.0, and a transverse momentum greater than 2.0 GeV/c. The first three cuts select high quality tracks. The transverse momentum cut is not necessary from an experimental perspective, but an $A_{LL}$ analysis of low momentum pions offers limited physics insights and the analysis can proceed more efficiently if these very common particles are not included. The determination of the PID acceptance window is discussed in Section~\ref{sec:pid} and the window boundaries are listed in Table~\ref{tbl:pid-selection-windows}. Figure~\ref{fig:pid-accept-window} highlights the characteristic relativistic rise of the MIP distribution for the accepted charged pion tracks.

\begin{table}
\centering
\begin{tabular}{|c|c|}
\hline
Criterion & Efficiency \\
\hline
$|\eta| < 1.0$ & 0.94 \\
at least 25 fit points & 0.95 \\
$|DCA|$ of associated global track $<$ 1.0 cm & 0.96 \\
\hline
\end{tabular}
\caption{Quality cuts imposed on the high-$p_T$ primary tracks before PID selection.}
\end{table}

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/dEdx_p}
\caption{Energy loss per unit path length versus momentum for tracks produced by identified charged pions.}
\label{fig:pid-accept-window}
\end{figure}

\subsection{Jet-Pion Correlations}

In the 2006 data analysis events are accepted only if they contain a reconstructed jet with an uncorrected $p_T$ between 10 and 30 GeV/c, a pseudorapidity between -0.7 and 0.9, and an electromagnetic energy fraction not greater than 0.92. Furthermore, the difference in azimuth between the jet axis and the center of a jet patch above the trigger threshold must be no more than $36^\circ$. Multiple jets in an event can satisfy these ``trigger jet'' cuts. Charged pions satisfying the track quality and PID cuts described in the preceding section are compared against the list of trigger jets. If a charged pion is separated from a trigger jet by at least 2.0 radians in azimuth it is considered to be an ``away-side'' pion and is accepted for analysis. Figure~\ref{fig:dphi} plots the azimuthal distribution of charged pions relative to trigger jets in the 2006 analysis. The data show good agreement with Monte Carlo simulations.

\begin{figure}
\centering
\includegraphics[width=0.7\textwidth]{figures/dphi}
\caption{Azimuthal distribution of charged pions relative to the trigger jet axis in the 2006 dataset. The black circles represent data, the red lines fully reconstructed Monte Carlo. Pions with $|\Delta \phi| > 2.0$ are accepted for analysis.}
\label{fig:dphi}
\end{figure}

The data are binned as a function of $z$, defined as the ratio of the away-side pion $p_T$ and the trigger jet $p_T$. Figure~\ref{fig:meanpt} shows that the jet $\langle p_T \rangle$ is approximately constant as a function of $z$, and thus that the charged pion $\langle p_T \rangle$ increases linearly with $z$. Again, the data are modeled well by STAR's Pythia+GEANT simulations.

\begin{figure}
\centering
\includegraphics[width=1.0\textwidth]{figures/meanpt}
\caption{Comparison of the $\langle p_T \rangle$ values for jets and charged pions in each $z$ bin. The data show good agreement with fully reconstructed Pythia+GEANT events that pass a simulation of the BJP2 trigger.}
\label{fig:meanpt}
\end{figure}
Q: How to extract a sub-string in the format hh:mm:ss.sss from any string? For example, I have strings (not of fixed size):
"mynewtime10:20:13.458atcertainplace"
"hertimeatthatplace11:20:55.12nocomment"
The time is at a different position (index) in each string.
A: Maybe use the following regex:
([0-9:.]+)
The string you are searching for is in group 1.
A: Try this code; I hope it will work:
using System;
using System.Text.RegularExpressions;

static void Main(string[] args)
{
    string str = "mynewtime10:20:13.458atcertainplace";
    string patt = @"([0-9:.]+)";
    Regex rgx = new Regex(patt); // IgnoreCase is unnecessary for a digit/punctuation pattern
    MatchCollection matches = rgx.Matches(str);
    if (matches.Count > 0)
    {
        Console.WriteLine("{0} ({1} matches):", str, matches.Count);
        foreach (Match match in matches)
            Console.WriteLine(" " + match.Value);
    }
    Console.ReadLine();
}
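One caveat: the character class `[0-9:.]+` will also match any stray run of digits elsewhere in the string. A stricter pattern that anchors on the hh:mm:ss.fff shape avoids that. A quick sketch of the idea, shown in Python for brevity (the same pattern string should work unchanged with .NET's Regex class):

```python
import re

# Anchor on the hh:mm:ss.fff shape instead of any run of digits/colons/dots,
# so unrelated numbers in the input cannot produce a false match.
TIME_PATTERN = re.compile(r"\d{1,2}:\d{2}:\d{2}\.\d+")

def extract_time(text):
    """Return the first hh:mm:ss.fff substring, or None if absent."""
    match = TIME_PATTERN.search(text)
    return match.group(0) if match else None

print(extract_time("mynewtime10:20:13.458atcertainplace"))    # 10:20:13.458
print(extract_time("hertimeatthatplace11:20:55.12nocomment")) # 11:20:55.12
```

The fractional-seconds part is left as `\d+` because the examples show both three digits (.458) and two digits (.12).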
Q: How do you save an Excel file to a SharePoint Team Site Documents library? I'm trying to save an Excel file to a SharePoint Team Site Documents library and keep getting errors. Has anyone done this successfully and is willing to share how to do it, please?
Sub Macro2()
Dim UserName As String
UserName = Environ("Username")
ActiveWorkbook.SaveAs Filename:= _
"https://engineeringinspection.sharepoint.com/sites/Enginspire_SharedFolder/Shared%20Documents/UserName" _
, FileFormat:=xlOpenXMLWorkbookMacroEnabled, CreateBackup:=False
End Sub
Note: I have not used the UserName variable in the save attempt above, in order to simplify the Save action.
Macron wants to see new political cooperation in Europe
Tony Walters May 11, 2022
For one year, EU citizens and politicians met regularly at the Conference on the Future of Europe. The conference has now concluded with a report containing over 300 proposals in nine different policy areas. They range from free dental care for children (which is not an EU issue) to controversial proposals such as changing EU decision-making. For example, the conference proposes that the European Parliament should be able to initiate referendums in the Union. The conference also wants to abolish the requirement for member states to agree unanimously on issues related to foreign policy and taxation – one country should not be able to block the other EU states with a veto.
Such proposals would require the EU to amend its treaty (the EU's equivalent of the Swedish constitution).
A majority in the European Parliament supports amending the treaty. European Commission President Ursula von der Leyen also wants to change it: she wants to remove the unanimity requirement to speed up the EU's decision-making process. In this she has the support of the recently re-elected French President Emmanuel Macron.
Among the other European Union members, the appetite is smaller. On Monday, 13 countries, including Sweden, issued a statement saying, for example, that now is not the time for the European Union to devote its political energy to discussing treaty changes.
French President Emmanuel Macron.
Photo: Ludovic Marin/AFP
France holds the presidency of the Council of the European Union this semester. It was Emmanuel Macron who launched the idea of a conference on the future in 2019. In view of today's uncertain geopolitical situation, he argued in his speech at the closing ceremony that there is a need for another political cooperation organisation in Europe, a kind of "EU light".
Macron proposes that such cooperation take place in areas like energy, transport and security, and include democracies in Europe that, for example, want to join the European Union or have left it (the United Kingdom).
He made it clear that the EU wants a close relationship with Ukraine, but that the process of joining the EU takes several years or even decades. Then there must be faster alternatives, according to Macron.
What about the conference's proposals? Speaking at the ceremony in Strasbourg, European Commission President Ursula von der Leyen said the Commission is already working on several of them.
European Union Commission President Ursula von der Leyen.
"For example, within the next few weeks and months, we will be making proposals to repair landscaped areas and reduce waste from paper packaging," she said.
Ursula von der Leyen also said that the Commission is working to ban the import into the European Union of goods produced with forced labour.
The Commission promised to come back with more proposals in September, based on the conference's report.
Thilde Karlsson is studying to become a pre-school teacher and is one of 24 Swedes randomly selected to participate in a future European Union conference.
Photo: Hanna Franzen
One of those who spoke in Strasbourg during the closing ceremony was Thilde Karlsson, from Fågelmara outside Karlskrona. She sat on a citizens' panel that discussed climate issues. In her speech, she called for a bolder climate policy and a fairer European Union, where age, residence, gender, religion, political preferences and so on are not grounds for discrimination.
The European Union must be more than an economic union, and member states should show more solidarity with one another. We are a family and we must act as such in times of crisis, said Thilde Karlsson, Ursula von der Leyen, Emmanuel Macron, António Costa (Prime Minister of Portugal) and others.
Facts: the Conference on the Future of Europe
In the run-up to the 2019 European Parliament elections, Emmanuel Macron proposed a conference on the future of the European Union.
The European Commission, the European Parliament and the Council of Ministers agreed on the format of the conference.
The National and European Citizens' Committees made recommendations in various policy areas to a General Assembly.
The General Assembly is made up of representatives of the three EU institutions as well as national parliaments, individual citizens, social partners and others.
The General Assembly's plenary session proposes more than 300 actions in nine different policy areas.
namespace Google.Cloud.PubSub.V1.Snippets
{
// [START pubsub_v1_generated_SubscriberServiceApi_CreateSnapshot_sync_flattened_resourceNames]
using Google.Cloud.PubSub.V1;
public sealed partial class GeneratedSubscriberServiceApiClientSnippets
{
/// <summary>Snippet for CreateSnapshot</summary>
/// <remarks>
/// This snippet has been automatically generated and should be regarded as a code template only.
/// It will require modifications to work:
/// - It may require correct/in-range values for request initialization.
/// - It may require specifying regional endpoints when creating the service client as shown in
/// https://cloud.google.com/dotnet/docs/reference/help/client-configuration#endpoint.
/// </remarks>
public void CreateSnapshotResourceNames()
{
// Create client
SubscriberServiceApiClient subscriberServiceApiClient = SubscriberServiceApiClient.Create();
// Initialize request argument(s)
SnapshotName name = SnapshotName.FromProjectSnapshot("[PROJECT]", "[SNAPSHOT]");
SubscriptionName subscription = SubscriptionName.FromProjectSubscription("[PROJECT]", "[SUBSCRIPTION]");
// Make the request
Snapshot response = subscriberServiceApiClient.CreateSnapshot(name, subscription);
}
}
// [END pubsub_v1_generated_SubscriberServiceApi_CreateSnapshot_sync_flattened_resourceNames]
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 1,543 |
mersenneforum.org: PrimeNet gives same ECM task multiple times?

2018-05-25, 05:41  #1
Lexicographer
Mar 2018, Shenzhen, China

PrimeNet gives same ECM task multiple times?

Issue #1: I've asked PrimeNet for a trial factoring assignment for M1277 on the Manual Testing page and it gave me an ECM assignment, which I did not plan to run... But fine.

Issue #2: I think initially PrimeNet's assignment looked more like this:

Quote: 150 curves, B1=800000000, B2=80000000000

But in the list of my current assignments it somehow transformed into:

Quote: 150 curves, B1=999999999, B2=99999999900

Issue #3: After giving that task to my CPU running mprime, I noticed from the complete exponent status report (with full ECM history) that other people already completed both assignments from above many times over the course of this decade. All of the entries below are NF-ECM, 150 curves, B1=800000000, B2=80000000000:

Quote: 2018-03-04 supercat; 2017-12-04 Yizhe Huang; 2017-10-11 Terçariol, C. A. S.; 2017-08-16 Diophantus; 2017-04-04 Ducho_YYZ; 2017-01-03 ch3cooh; 2016-08-24 bayanne; 2016-06-17 fireredd; 2016-06-15 Moonwalker89; 2016-04-17 dh1; 2016-04-04 Mario Tranquillini; 2016-03-31 cpp1701; 2016-02-10 ssateneth; 2016-02-10 Smoker88; 2015-05-15 MadPoo; 2013-11-11 Oliver Kruse; 2013-09-26 Aseroberto; 2013-06-14 yooper; 2012-12-28 SRJ2877; 2011-04-11 Bdot; 2011-02-14 James Heinrich

And likewise (all entries NF-ECM, 150 curves, B1=999999999, B2=99999999900):

Quote: 2018-01-31 Terçariol, C. A. S.; 2017-02-27 Diophantus; 2016-02-16 Fastspiky; 2015-04-28 XZT

Isn't it a massive waste of processing power to give the same assignment to multiple users again and again? I thought the point of PrimeNet is to coordinate people's effort, not to make them run the same task over and over.

2018-05-25, 06:18  #2
GP2
Sep 2003

Quote (Originally Posted by Lexicographer): Issue #1: I've asked PrimeNet for a trial factoring assignment for M1277 on the Manual Testing page and it gave me an ECM assignment

There is no point doing any further trial factoring of this exponent. Indeed, there was no point taking it to 65 bits as someone did last year. That was a wasted effort.

Trial factoring can only search for relatively small factors because the difficulty goes up by a factor of two each time you increment the bit size, so once you hit a certain point you have to try other methods, like P−1 or ECM. And enough ECM has been done on this exponent by now to demonstrate that the smallest factor must be many, many, many, many times too large to be found by trial-factoring. Here "must" is probabilistic rather than absolutely proven, but the odds are beyond astronomical.

Quote (Originally Posted by Lexicographer): Isn't it a massive waste of processing power to give the same assignment to multiple users again and again? I thought the point of PrimeNet is to coordinate people's effort, not to make them run the same task over and over.

It's not the same task. Each ECM curve has a different (random) sigma. It's like throwing multiple darts at a dartboard, hoping that one will eventually hit a bullseye.

By contrast, redoing P−1 testing with the same B1 and B2 parameters really would be duplicating the same work over again.

Last fiddled with by GP2 on 2018-05-25 at 06:47

2018-05-25, 07:14  #3
Lexicographer
Mar 2018, Shenzhen, China

My bad. Thanks for explaining. I missed the fact that there is a random parameter involved... Though I still think it would be nice if the PrimeNet manual testing page did not assign tasks of a different type than I asked for.

As for this exponent: mfaktc, which I planned to use, doesn't even support trial factoring for exponents lower than 100K, nor non-prime exponents for some reason. (I wanted to overcome the 100K limit by trial factoring M163456 = M(2^7 · 1277), which is a multiple of M1277, and looking for any new factors between 2113601438322189019 < 2^65 and 2^80 < 1227156720026097481648213, both of which are consecutive known factors of M163456.)

Last fiddled with by Lexicographer on 2018-05-25 at 07:22

2018-05-25, 13:15  #4
ATH (Einyen)
Dec 2003, Denmark

If you look at this ECM report for M1277: https://www.mersenne.org/report_ecm/ you can see ECM up to the 60 digit level is done and almost half of the 65 digit level. That means the chance of a missed 60 digit factor is well below 1/e ~ 37%, since the half-completed 65 digit level also further rules out missed 60 digit factors. The chance of a missed 55 digit factor is much, much lower again, and so close to 0% that it would be a miracle if one was found. A 55 digit factor is ~ 2^183, so you can see why there is no point trial factoring between 2^65 and 2^80.

2018-05-25, 14:06  #5
Dubslow ("Bunslow the Bold")
Jun 2011

Quote (Originally Posted by Lexicographer): Though I still think it would be nice if the PrimeNet manual testing page did not assign tasks of a different type than I asked for.

PrimeNet also won't assign pointless work, now or in the future. If you want to do pointless work, I suggest setting the nearest bundle of cash on fire.

So it can either 1) assign nothing at all, with an error message, which is admittedly probably the better choice, or 2) assign a similar worktype with the same purpose, which is what it did. If you want factors of M1277, that's how you get factors. (Either that or pay several thousand dollars to the right people to run SNFS on it.)

Quote (Originally Posted by Lexicographer): As for this exponent: mfaktc, which I planned to use, doesn't even support trial factoring for exponents lower than 100K, nor non-prime exponents for some reason.

mfaktc doesn't support nonprime exponents because the basic rules which it relies on don't apply to nonprime exponents (or rather only apply to the parts which are prime exponents), namely that if p is prime, then factors of 2^p − 1 are of the form 2kp + 1 for some integer k.

2018-05-25, 14:59  #6
GP2
Sep 2003

Quote (Originally Posted by Lexicographer): (I wanted to overcome the 100K limit by trial factoring M163456 = M(2^7 · 1277), which is a multiple of M1277, and looking for any new factors between 2113601438322189019 < 2^65 and 2^80 < 1227156720026097481648213, both of which are consecutive known factors of M163456.)

As ATH mentioned above, enough ECM has been done on M1277 to make us fairly sure that there can be no factors smaller than about 2^183. The probability that there could be an undiscovered factor of size 2^80 or smaller is about the same as the odds of winning a hundred million dollars in the lottery not just once but many times in a row.

2018-05-25, 17:00  #7
kriesel ("TF79LL86GIMPS96gpu17")
Mar 2017, US midwest

Quote (Originally Posted by Dubslow): (Either that or pay several thousand dollars to the right people to run SNFS on it.)

Somehow that sounded sort of sinister. Like:

(In the shadows of a dark alley late at night, somewhere in a US midwestern city, wet pavement from the recent rain making it darker, and dimly lit by a shabby warehouse's security illumination, the part that's working, anyway. The sort of neighborhood featuring chain link fence topped by barbed wire, and rebar grilles over dirty windows.) Two shady looking characters in trenchcoats and fedoras approached each other, casting furtive glances in all directions, as if concerned about being followed, and otherwise obviously trying to appear up to nothing in particular, meeting halfway between the security lights. Where it's darkest.

"Hi Slim, thanks for coming. I hear you got connections that can get things done, no-nonsense, for a price. I want you to arrange a factoring hit on M1277 for me. I can make it worth their while. Two grand now to get things rolling, three more when I get proof the deed's done. And a couple g's extra if it's done by the end of next month."

Slim says, "Joe, make it 3, 3 and 2, and M1277's history by Labor Day for sure. We'll need your public PGP key and an email address for sending proof it's done (from an anonymous throwaway account) and that the balance is due. The encrypted message will be in the second least significant bit of each byte of a cat video posted online. We'll send you the URL."

Joe replies, "Deal. Meet back here 11-12 days after the email is sent, same time of day. Wait a sec, I have the down payment now." Joe pulls out from the left inside coat pocket an envelope stuffed with old worn unmarked non-sequentially numbered small bills, counts out $3000, stuffs the rest in his right coat pocket, and hands Slim the $3000 in the envelope. "I anticipated the key and email request. It's written inside the envelope."

"Make it 7-8 days. Our guys don't like to be kept waiting for their money. It starts tomorrow. You're really on top of the details, almost like you've arranged this sort of thing before. See you by summer's end. Maybe we can, uh, do business again sometime." Slim pockets the envelope and drifts off casually into the developing fog.

Joe heads off in the opposite direction, upbeat. Joe ruminated as he walked back to his car through the lingering puddles. Now it was just a matter of time, and raising a few more grand. No telling what they might do if he couldn't pay the balance promptly, but he was sure he wouldn't like it at all.

The ironic part of dealing with the dark underside of the nearby major college math department was that they certainly didn't need the money. What they made in 100% profit from black market math jobs like this, using the free labor of students and grad students, and occasionally assigning academic staff, and with free hardware paid for by alumni, paled in comparison to what they did in their free time: counting cards at casinos for fun, Fourier analysis in the stock market, planted encrypted steganographic leaks in lottery systems, etc. Plus they'd probably publish a paper or two a year subsidized and originated in black market math. Add in legit consulting gigs, and their nominal salaries were in the noise, almost roundoff.

But they had consciously made friends in the criminology department, who knew people who knew people that would do anything for a surprisingly low price, including send people to the ER or morgue. Not the sort of people you want to anger or disappoint.

Nope, don't want to mess with people with tenure and connections. Better tap a few friends with a grudge against M1277 for some contributions, and sell a small asset or two soon. Joe had heard rumors that some of the math guys had rather large botnets, aggressively winning systems over from some of the world's largest spammers (thanks to a friend in the Comp Sci department), and top-shelf highly parallelized distributed computing code for using the bot systems. The CompSci guy had provided extraordinary antitrojan advances, and was rumored to have even better ones that he kept undisclosed for his personal use, and had certainly refused numerous offers from the NSA. So that email from Slim's crowd could be just days away.

M1277 was toast, very soon. That made Joe smile as he reached his car. Until he saw two flat tires. Apparently M1277 had friends, who didn't mind breaking a few rules or laws either, and they were onto him. No cell phone signal. What lighting there was went out. Joe was shocked to think he might himself soon be "factored".

2018-05-25, 19:00  #8
Lexicographer
Mar 2018, Shenzhen, China

Thanks guys. I had already found another fresh topic about M1277 which explained most of this to me, but thanks for repeating. And thanks for explaining why mfaktc requires exponents to be prime. I still think PrimeNet's manual testing page should not assign anything if an assignment of a certain type is unavailable or doesn't make sense.

2018-05-25, 22:52  #9
Mark Rose ("/X\('-')/X\")
Jan 2013

Quote (Originally Posted by kriesel): [the full story quoted above, snipped]

Brilliant :D
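Dubslow's point about why mfaktc requires prime exponents — that for prime p any factor q of 2^p − 1 satisfies q = 2kp + 1 — is easy to check numerically on a small Mersenne number. A sketch in Python (the exponent 11 is chosen only because 2^11 − 1 = 2047 = 23 × 89 is the classic composite example):

```python
def trial_factor_mersenne(p, k_max=1000):
    """Trial-factor 2^p - 1 (p prime) over candidates of the form 2kp + 1."""
    m = 2 ** p - 1
    factors = []
    for k in range(1, k_max + 1):
        q = 2 * k * p + 1
        while m % q == 0:          # divide out repeated factors
            factors.append(q)
            m //= q
    if m > 1:
        factors.append(m)          # leftover cofactor
    return factors

# 2^11 - 1 = 2047 = 23 * 89; note 23 = 2*1*11 + 1 and 89 = 2*4*11 + 1.
print(trial_factor_mersenne(11))   # [23, 89]
```

This candidate restriction is exactly why trial factoring of a Mersenne number with prime exponent is so much cheaper than blind trial division: only 1 in roughly 2p integers needs testing.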
Kamienica Cywińskich in Chełmno – a house located at 2 Rycerska Street, on a corner plot adjoining the Market Square (Rynek).
The house was built in the second half of the 13th century, which makes it one of the oldest surviving secular buildings in Chełmno. The Gothic perimeter walls date from that period. In the 14th century, outbuildings were added on the east side. In 1570 the house was rebuilt in the Renaissance style with funding from Melchior Cywiński. Fragments of sculptural decoration set into the façade come from this rebuilding, presumably from two portals: a triangular pediment with a scene of the Annunciation, Melchior Cywiński's foundation inscription and two coats of arms (one of which is Puchała); a semicircular tympanum with a scene of the Adoration of the Magi; and two mascarons, now set into the wall on either side of the entrance. Inside, wooden Renaissance ceilings with profiled beams have survived. At the beginning of the 19th century the house was remodelled in the Classicist style, and in 1889 the interiors were altered.
Bibliography
Katalog zabytków sztuki w Polsce, vol. XI, Województwo bydgoskie, part 4, Powiat chełmiński, pp. 79–80.
Q: How to find the similarity (as a percentage) between two float[] arrays in C#. I need to compute the similarity between two float[] arrays of the same length and return it as a float representing a percentage. How would you do this in C#?
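"Similarity" between two numeric arrays has no single definition, so any answer has to pick an interpretation. One common choice is cosine similarity scaled to a percentage. A sketch of that interpretation, shown in Python for brevity (porting to C# is a direct loop over the two arrays):

```python
import math

def similarity_percent(a, b):
    """Cosine similarity between two equal-length float sequences, as 0-100.

    Returns 0.0 for a zero-length vector; anti-parallel vectors come out
    negative, which a caller may want to clamp depending on the use case.
    """
    if len(a) != len(b):
        raise ValueError("arrays must have the same length")
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return 100.0 * dot / (norm_a * norm_b)

print(similarity_percent([3.0, 4.0], [3.0, 4.0]))  # 100.0
```

If the intent is instead "how close are the values element by element", a normalized distance (e.g. 100 minus the mean absolute difference scaled by the value range) would be the better fit; the question as posed does not say which is wanted.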
\section{Introduction}
\label{sect:intro}
Superfluidity in nuclei is a nearly 60-year-old problem. However, a
satisfactory microscopic description of the phenomenon continues to
remain a challenge as the problem is marred by uncertainties in the
input interactions, both at the few body level and the medium
corrections. The possibility of neutron superfluidity was already
pointed out around 1960~\cite{Migdal1959,Ginzburg1964}. The
observational confirmation began with the discovery of
pulsars~\cite{Bell1968}, their connection to rotating neutron
stars~\cite{Gold1968} and the subsequent observation of glitches in
the period of rotation of these pulsars. Rotating neutron stars are
almost perfect clocks with a period of rotation that increases very
slowly with time. However, sometimes the period of rotation suddenly
decreases, followed by long relaxation times (over years) before it
returns to its pre-glitch value. Such glitches can be explained if one
allows for the existence of a superfluid phase in the inner crust of
the star through the mechanism of vortex
unpinning~\cite{Baym1969,Pines1985} (maybe one needs also
superfluidity in the core \cite{Andersson2012}). Further, the
existence of a superfluid state is crucial to explain the
observational data on
cooling~\cite{Yakovlev2004,Page2009,Anderson1975}.
The two-body interaction between two neutrons has attractive
components and while it is not sufficient to produce a bound
di-neutron state in free space, in the presence of other neutrons this
attraction leads to Cooper instability leading to the existence of a
superfluid phase with $s$-wave pairing, which typically exists in the
inner crust of neutron stars. The $NN$ interaction is attractive in
the spin triplet state as well that leads to $p$-wave pairing, and
such a phase is assumed to exist at higher densities in the outer
layers of the core of the star.
In addition to the physics of neutron star crusts, pairing plays a
crucial role in finite nuclei as well by contributing to extra
binding, for example the extra binding leading to an energy gap in
even-even nuclei compared to the quasi-particle spectrum of odd-$A$
nuclei or the even-odd staggering in binding
energy~\cite{Bohr1958,BohrMottelson1}. Close to the drip lines, large
even-odd-staggering has been observed in isotopes of C, Ne and
Mg~\cite{Fang2004,Ozawa2001,Hagino2011}.
In the literature, several extensive reviews already exist on the
subject of neutron star physics and superfluidity in both finite and
infinite systems~\cite{Chamel2008,Dean2003,Haskell2018,Sedrakian2019}. In the
present special topics issue, we aim to give a short overview of the
status of $s$-wave pairing, in particular screening and beyond-BCS
crossover effects, and of the outstanding questions of $p$-wave pairing.
\section{Singlet pairing}
\label{sect:s-wave}
\subsection{BCS gap equation}
\label{subsect:s-wave-bcs}
In the case of an attractive interaction between fermions, the filled
Fermi sea becomes unstable with respect to the formation of Cooper
pairs. The starting point to study pairing is the BCS theory, where
the gap or the critical temperature is given by the BCS gap equation,
which in the $s$-wave spin-singlet ($^1S_0$) channel is given by~\cite{Schrieffer}:
\begin{equation}
\Delta(k) = -\frac{1}{\pi} \int_0^\infty dk^\prime \, k^{\prime \, 2}
\, V(k, k') \frac{\Delta(k')
\tanh\left(\frac{E(k')}{2T}\right)}{E(k')},
\label{eq:BCS_eqn_sing}
\end{equation}
where $\Delta(k)$ is the momentum dependent gap, $V(k,k')$ is the
matrix element of the $s$-wave neutron-neutron ($nn$) interaction,
$E(k) = \sqrt{\xi^2(k) + \Delta^2(k)}$ is the quasi-particle energy
with $\xi(k) = \varepsilon(k)-\mu$ and $\varepsilon(k) = k^2/(2 m^*)$,
$m^*$ is the neutron effective mass, $T$ the temperature and $\mu$ the
chemical potential (including the mean-field energy shift). The
critical temperature $T_c$ is the highest temperature at which there
is a non-trivial solution for Eq.~\eqref{eq:BCS_eqn_sing}. At $T=T_c$,
the gap in $E(k')$ can be neglected and as a result
Eq.~\eqref{eq:BCS_eqn_sing} becomes a linear eigenvalue equation. In
the weak-coupling limit where $\Delta(k_{\text{F}}) \ll \mu$, the gap at zero
temperature is related to the BCS transition temperature by $T_c =
0.57 \, \Delta_{T = 0}(k_{\text{F}})$. In the case of neutron matter, this
formula is a good approximation at all values of $\mu$, because the Fermi
surface remains rather well defined. To simplify
the notation, we will from now on write $\Delta = \Delta(k_{\text{F}})$.
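To make the structure of \Eq{eq:BCS_eqn_sing} concrete, the following Python sketch solves the zero-temperature gap equation for a toy separable interaction $V(k,k')=-g\,u(k)u(k')$ with a Gaussian form factor; the coupling, cutoff, and units ($\hbar=m=k_{\text{F}}=1$) are invented for illustration and this is not the $\ensuremath{V_{\text{low}\,k}}$ calculation discussed below. It also evaluates the exact weak-coupling ratio $T_c/\Delta_{T=0}=e^\gamma/\pi\approx 0.57$ quoted above.

```python
import numpy as np

# Toy separable pairing interaction V(k,k') = -g u(k) u(k') with a
# Gaussian form factor u(k); units hbar = m = k_F = 1, so mu = 0.5.
# For a separable V the gap is Delta(k) = Delta0 u(k), and the T = 0
# gap equation reduces to one nonlinear equation for Delta0:
#   1 = (g/pi) * int dk k^2 u(k)^2 / sqrt(xi(k)^2 + Delta0^2 u(k)^2).
g, Lam, mu = 1.0, 3.0, 0.5          # illustrative values only
k = np.linspace(1e-4, 10.0, 4000)
dk = k[1] - k[0]
u = np.exp(-(k / Lam) ** 2)
xi = 0.5 * k**2 - mu

def gap_rhs(delta0):
    """Right-hand side of the separable T=0 gap equation."""
    E = np.sqrt(xi**2 + (delta0 * u) ** 2)
    return g / np.pi * np.sum(k**2 * u**2 / E) * dk

# gap_rhs decreases monotonically with Delta0, so bisection is safe.
lo, hi = 1e-6, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if gap_rhs(mid) > 1.0 else (lo, mid)
delta0 = 0.5 * (lo + hi)

# Exact weak-coupling BCS ratio T_c / Delta(T=0) = e^gamma / pi,
# i.e. the "T_c = 0.57 Delta" relation quoted in the text.
bcs_ratio = np.exp(np.euler_gamma) / np.pi
```

With these illustrative parameters the gap comes out of order $\mu$, i.e., far from weak coupling; the point of the sketch is only the fixed-point structure of the equation, not realistic numbers.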
In our
calculations we mostly use the renormalization-group (RG) based interactions,
$\ensuremath{V_{\text{low}\,k}}$~\cite{Bogner2007} and $\ensuremath{V_{\text{srg}}}$~\cite{Bogner2010}. Each has an inherent scale ($\Lambda$ for $\ensuremath{V_{\text{low}\,k}}$ and $\lambda$ for $\ensuremath{V_{\text{srg}}}$) that sets the decoupling scale between the
low and high momenta. Such a scale is arbitrary and observables should
be independent of this scale.
Within the simplest BCS approximation, i.e., employing the free-space
$nn$ interaction $V^0(k, k')$ and the free neutron mass $m^* =
m$, any realistic $nn$ interaction that reproduces the two-body
neutron phase shifts yields the same BCS gap \cite{Hebeler2007}.
However, uncertainties arise already at the BCS level as soon as the
effective mass $m^*\ne m$ is included, since this affects the density
of states $N_0 = m^* k_{\text{F}}/\pi^2$, where $k_{\text{F}} = (3\pi^2 n)^{1/3}$ is the
Fermi momentum with $n$ the number density. Recent quantum Monte Carlo
(QMC) calculations \cite{Buraczynski2019} found that the neutron
effective mass drops only moderately with increasing density,
similar to what one gets with effective Gogny forces
\cite{Decharge1980,Chappert2008}, while effective Skyrme interactions
of the Saclay-Lyon family \cite{Chabanat1998} predict a stronger drop,
in contrast to those of the Bruxelles-Montreal family
\cite{Chamel2009,Goriely2010} which predict a slightly increasing effective
mass. In particular at higher densities beyond $k_{\text{F}} \approx 0.8\;
\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ (corresponding to number densities above $0.017\;\text{fm}^{-3}$ or mass
densities above $2.9\cdot10^{13}\;\text{g}/\text{cm}^3$) where the BCS
gap is maximum, the gap depends very sensitively on the density of
states, and therefore the different effective masses lead to
sizable uncertainties as can be seen in \Fig{fig:BCSgap}.
\begin{figure}
\begin{center}
\includegraphics[scale=0.72]{figs/DeltaBCS.eps}
\end{center}
\caption{BCS pairing gaps $\Delta = \Delta(k_{\text{F}})$ obtained from
\Eq{eq:BCS_eqn_sing} using the $\ensuremath{V_{\text{low}\,k}}$ interaction (with a cutoff
of $\Lambda = 2\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$) \cite{Bogner2007,Bogner2010}, as functions
of $k_{\text{F}}$. The differences between the results are only due to the
different effective masses $m^*$ used in the calculations: free
mass ($m^* = m$, black solid line), effective masses from recent
auxiliary-field diffusion Monte-Carlo calculations
\cite{Buraczynski2019} using the AV8$'$+UIX interaction (red
filled circles) and the chiral N2LO interaction (green empty
circles), and effective masses from three different Skyrme
parametrizations: SLy4 \cite{Chabanat1998} (blue long dashes),
BSk19 and BSk21 \cite{Goriely2010} (purple short dashes and
turquoise dots, respectively).}
\label{fig:BCSgap}
\end{figure}
At lower densities, the effective mass is close to the free one, and
the gap is less sensitive to it. In this region, the uncertainties
come mainly from corrections beyond the BCS approximation. These will
be addressed in the following subsections.
\subsection{Screening corrections}
\label{subsect:screening}
It is well known that corrections beyond the BCS approximation due to
density and spin-density fluctuations that the neutrons create in the
surrounding medium are very important. Such corrections are called
medium polarization or screening effects, since they are analogous to
the screening of the Coulomb interaction. They can be taken into
account in the gap equation by adding the induced interaction to the
bare $nn$ interaction, so that,
\begin{equation}
V(k,k^\prime) = V^0(k,k^\prime)+V^{(a)}(k,k^\prime)+V^{(b)}(k,k^\prime)\,,
\label{eq:V0ab}
\end{equation}
where the induced interactions, $V^{(a)}$ and $V^{(b)}$, are as seen
in \Fig{fig:vind}. In this figure, diagram (a) allows for one
particle-hole (ph) bubble insertion while diagram (b) sums the ph
bubble series (random-phase approximation, RPA, represented by wavy
lines).
\begin{figure}
\begin{center}
\includegraphics[scale=0.57, clip = true]{figs/diagram-bare.pdf}
\hspace*{0.2in}
\includegraphics[scale=0.57, clip = true]{figs/diagram-a.pdf}
\hspace*{0.2in}
\includegraphics[scale=0.57, clip = true]{figs/diagram-b.pdf}
\hspace*{0.2in}
\includegraphics[scale=0.57, clip = true]{figs/diagram-a-iterated.pdf}
\end{center}
\caption{In the medium, the bare pairing interaction (leftmost
diagram) is modified by the screening corrections (a) and
(b). Diagram (a$'$) illustrates the resummation of ladders in the
3p1h vertices of diagram (a) implicitly assumed in the derivation
of the GMB result \cite{Gorkov1961}.}
\label{fig:vind}
\end{figure}
In these diagrams, the interaction $\tilde{V}$ shown by the dotted lines
is meant to be antisymmetrized, $\langle 12|\tilde{V}|34\rangle =
\langle 12|V|34\rangle-\langle 12|V|43\rangle$, i.e., it includes also
the exchange graphs which are not drawn.
There have been many attempts to calculate the induced interactions in
the
literature~\cite{Wambach1993,Schulze1996,Shen2003,Shen2005,Cao2006,Ramanan2018,Urban2020}. In particular,
the earlier calculations \cite{Wambach1993,Schulze1996} found an
extremely strong suppression of the gap. However, since the work by
Cao et al.~\cite{Cao2006} a consensus seems to emerge that the gap is
not too strongly reduced. This is shown in \Fig{fig:screen-QMC} which
summarizes more recent screening and QMC results.
\begin{figure}
\begin{center}
\includegraphics[scale=0.72]{figs/screen-QMCa.eps}
\includegraphics[scale=0.72]{figs/screen-QMCb.eps}
\end{center}
\caption{(a) Screening and QMC results for the gap in neutron matter
as a function of the Fermi momentum $k_{\text{F}}$. The blue dashes and red
dots are the final results of the screening calculations of Cao et
al. \cite{Cao2006} and of our own work \cite{Urban2020},
respectively. The turquoise triangles, purple points, and green
squares are QMC results of Gandolfi et al. \cite{Gandolfi2008},
Abe and Seki \cite{Abe2009}, and Gezerlis and Carlson
\cite{Gezerlis2010}, respectively. For comparison, the BCS result
$\Delta_{\text{BCS}}$ obtained without effective mass is shown as the
black line (same as in \Fig{fig:BCSgap}). (b) Same data as in (a)
but normalized to $\Delta_{\text{BCS}}$. The GMB result
\cite{Gorkov1961} is shown as the black star.}
\label{fig:screen-QMC}
\end{figure}
In \Fig{fig:screen-QMC}(b) we also show the result
$\Delta/\Delta_{\text{BCS}}=(4e)^{-1/3}\approx 0.45$ (black star) obtained
long ago by Gor'kov and Melik-Barkhudarov (GMB) \cite{Gorkov1961},
which should become valid in the limit $|k_{\text{F}} a_{nn}| \ll 1$, with
$a_{nn} \approx -18.5\;\text{fm}$ the $nn$ scattering length.
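The GMB suppression factor is a closed-form number and easy to check (a trivial sketch):

```python
import math

# Gor'kov-Melik-Barkhudarov suppression of the weak-coupling gap:
# Delta / Delta_BCS = (4e)^(-1/3), quoted in the text as ~0.45.
gmb_factor = (4.0 * math.e) ** (-1.0 / 3.0)
```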
It is seen that the two screening calculations
\cite{Cao2006,Urban2020} still do not quite agree with each other. We
will come back to a more detailed discussion of these calculations
below. The QMC calculations, which are supposed to be, up to numerical
limitations, exact solutions of the many-body problem, show only a
moderate suppression. The gaps of \Refe{Gandolfi2008} (turquoise
triangles) were obtained with the auxiliary-field diffusion Monte
Carlo technique using the Argonne V8$'$ $nn$ interaction (AV8$'$) and the
Urbana IX three-body force (UIX); they are not significantly reduced
compared to $\Delta_{\text{BCS}}$ up to $k_{\text{F}}\sim 0.6\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, but the error
bars are huge. The purple points of \Refe{Abe2009} were obtained within
a method based on the discretization of the Hamiltonian on a lattice
(determinantal quantum Monte Carlo). The interaction used in this
calculation is much simpler, as it includes only the leading and
next-to-leading orders (NLO) of pionless effective field theory (EFT),
and is only valid at low momenta, i.e., low densities. These gaps are
reduced by an almost constant factor of about $0.6-0.7$ compared to
$\Delta_{\text{BCS}}$ (see \Fig{fig:screen-QMC}(b)). The almost perfect
agreement of these results with the red dotted curve is probably
accidental. A similar behavior was found in \Refe{Gezerlis2010} using
the AV4 interaction within the variational and subsequent Green's
function Monte-Carlo method. At very low densities, these results tend
(within the error bars) towards the GMB limit. According to
\Refe{Gezerlis2010}, the discrepancy between \Refs{Gandolfi2008} and
\cite{Gezerlis2010} might be due to the less optimized wave function
used in \Refe{Gandolfi2008}.
Let us now discuss in some more detail the screening calculations. In
\Fig{fig:screen-low}
\begin{figure}
\begin{center}
\includegraphics[scale=0.72]{figs/screen-low-a.eps}
\includegraphics[scale=0.72]{figs/screen-low-b.eps}
\end{center}
\caption{Results for the screened gap in neutron matter as a
function of the Fermi momentum $k_{\text{F}}$ at different steps of the
screening calculations of \Refe{Cao2006} (a) and of our own
calculations \cite{Ramanan2018,Urban2020} (b). See text for
details.}
\label{fig:screen-low}
\end{figure}
we display again the ratios of screened gaps to our reference curve
$\Delta_{\text{BCS}}$ which is the BCS gap with the free neutron mass (black
solid line in \Fig{fig:BCSgap}), including the results obtained at
intermediate steps on the way to the final results. Figure
\ref{fig:screen-low}(a) summarizes the neutron-matter results of
\Refe{Cao2006}. In that work, the 3p1h vertices $\tilde{V}$ (dotted
lines in \Fig{fig:vind}) are the Br\"uckner G matrix. Up to the
projection on the $^1S_0$ wave, diagram (a) can be schematically
written as
\begin{equation}
V^{(a)} = \frac{\pi}{2}\sum_{\vek{p}\sigma}\tilde{V}\,
\frac{n(\vek{p}-\frac{\vek{q}}{2})-n(\vek{p}+\frac{\vek{q}}{2})}
{\varepsilon(\vek{p}+\frac{\vek{q}}{2})-\varepsilon(\vek{p}-\frac{\vek{q}}{2})}\,
\tilde{V}\,,
\end{equation}
where we have omitted all momentum and spin labels of $\tilde{V}$. Here,
$\vek{p}$ and $\sigma$ are the momentum and spin labels that are summed
over in the ph loop and $\vek{q} = \vek{k}-\vek{k}'$ is the momentum transfer. The
occupation numbers can be safely approximated by step functions
$n(\vek{p})=\theta(k_{\text{F}}-|\vek{p}|)$. Also, as it is usually done, the
static approximation is made, i.e., the energy transfer in the ph
bubble is neglected. To simplify this complicated expression, the
authors of \Refe{Cao2006} replaced $\tilde{V}$ by its average value
$\langle\tilde{V}\rangle$, where the averaging is done around the Fermi surface, so that it can be taken out of the sum,
which then gives
\begin{equation}
V^{(a)} = -\frac{\pi}{2} \langle\tilde{V}\rangle^2\,\Pi^0(q/k_{\text{F}})\,,
\label{Va-schematic}
\end{equation}
where $\Pi^0(\tilde{q})$ is the static Lindhard function
[$\Pi^0(0,\tilde{q})$ in Eq.~(12.46b) of \cite{FetterWalecka}, with
$m$ replaced by $m^*$] with $\tilde{q} = q/k_{\text{F}}$. The subsequent
projection of $V^{(a)}$ on the $s$ wave finally amounts to averaging
the Lindhard function over the angle between $\vek{k}$ and $\vek{k}'$, i.e.,
over $q$ in the range $|k-k'| \le q \le k+k'$. Since $\Pi^0 < 0$, the
induced interaction $V^{(a)}$ is repulsive and therefore reduces
(screens) the bare interaction $V^{0}$. The solution of the gap
equation with $V^0+V^{(a)}$ is shown as the purple dash-dot line in
\Fig{fig:screen-low}(a). We see that the screening disappears at low
densities, which is easily understood since $\Pi^0 \propto k_{\text{F}}$ in
\Eq{Va-schematic}.
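The static Lindhard function entering \Eq{Va-schematic} and its $s$-wave angle average can be evaluated directly. The sketch below works in units of the density of states $N_0$ and, as an assumption for the average, takes $k=k'=k_{\text{F}}$, so that the angle average runs over $0 \le q \le 2k_{\text{F}}$:

```python
import numpy as np

def lindhard_static(x):
    """Dimensionless static Lindhard function L(x), with x = q/(2 k_F),
    defined such that Pi^0(q) = -N0 * L(x), N0 = m* k_F / pi^2.
    Limits: L(0) = 1 and L(1) = 1/2."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.full_like(x, 0.5)                 # value at x = 1
    m = np.abs(x - 1.0) > 1e-10                # avoid 0 * log(inf) at x = 1
    out[m] = 0.5 + (1.0 - x[m]**2) / (4.0 * x[m]) * np.log(
        np.abs((1.0 + x[m]) / (1.0 - x[m])))
    return out

# s-wave average for k = k' = k_F: averaging over the angle between
# k and k' gives <L> = 2 * int_0^1 x L(x) dx  (q from 0 to 2 k_F).
xs = np.linspace(1e-6, 1.0, 2000)
avg_L = 2.0 * np.sum(xs * lindhard_static(xs)) * (xs[1] - xs[0])
```

Since $L$ decreases from 1 to 1/2 on the averaging interval, the $s$-wave average lies between these limits.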
The next step is to include also diagram (b) of \Fig{fig:vind}. Using
the Landau approximation, the residual ph interaction is approximated
as ${\cal V} = f_0 + g_0 \, \bm{\sigma}_1 \cdot \bm{\sigma}_2$, with
$f_0$ and $g_0$ the Landau parameters, and thereby the RPA series can
be separately summed in the $S = 0$ and $S = 1$ channels, where $S$
denotes the total spin of the ph excitation. Then, the inclusion of
diagram (b) modifies \Eq{Va-schematic} to
\begin{equation}
V^{(a)}+V^{(b)} = -\frac{\pi}{2} \langle\tilde{V}\rangle^2
\Big(\frac{3}{2}\Pi_{S=1}-\frac{1}{2}\Pi_{S=0}\Big)\,,
\label{Vab-schematic}
\end{equation}
with
\begin{equation}
\Pi_{S=0} = \frac{\Pi^0}{1-f_0\Pi^0}\,,\qquad
\Pi_{S=1} = \frac{\Pi^0}{1-g_0\Pi^0}\,.
\end{equation}
Notice that, in dilute neutron matter, $g_0 > 0$ and $f_0 <
0$. Together with $\Pi^0 < 0$, this implies that, with increasing
density, the RPA enhances the attractive $S=0$ contribution in
\Eq{Vab-schematic} while it reduces the repulsive $S=1$
contribution. The net effect of diagram (b) is therefore that the gap
(green solid line in \Fig{fig:screen-low}(a)) is much less screened
than with diagram (a) only.
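The sign structure of \Eq{Vab-schematic} can be illustrated numerically. With $f_0<0$ and $g_0>0$ (the numerical values below are invented for the sketch, not taken from any Skyrme functional), the RPA-dressed combination is weaker than the bare bubble, i.e., diagram (b) reduces the screening:

```python
# Check that the RPA combination (3/2) Pi_{S=1} - (1/2) Pi_{S=0} is
# weaker than the bare bubble Pi^0. Pi and the Landau parameters are
# in units of the density of states N0; values are illustrative only.
pi0 = -0.8           # bare static particle-hole bubble (negative)
f0, g0 = -0.3, 0.4   # attractive spin-independent / repulsive spin channel

pi_s0 = pi0 / (1.0 - f0 * pi0)   # RPA-dressed S=0 bubble (enhanced)
pi_s1 = pi0 / (1.0 - g0 * pi0)   # RPA-dressed S=1 bubble (reduced)

combo_ab = 1.5 * pi_s1 - 0.5 * pi_s0   # enters V^(a) + V^(b)
combo_a = pi0                          # f0 = g0 = 0 limit: diagram (a) only
```

For larger $|f_0|$ and $g_0$ the combination can even change sign, which is the anti-screening behavior of the Landau approximation mentioned below.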
To obtain the final result of \Refe{Cao2006}, another effect was taken
into account. Namely, the energy dependence of the self-energy
$\Sigma(k,\omega)$ computed in Br\"uckner theory (Fig.~1(b)
of~\cite{Baldo2000}) leads to a reduction of the quasiparticle weight
$Z(k) = 1/(1-\partial\Sigma/\partial\omega)$. This effect can be
accounted for by introducing a factor of $Z(k)Z(k')$ on the right-hand
side of the gap equation (\ref{eq:BCS_eqn_sing}), which then yields
the final result shown in \Fig{fig:screen-QMC} and in
\Fig{fig:screen-low}(a) as the blue dashed lines.
In spite of the reasonable agreement with the QMC results at high density,
the increase of the gap at low densities ($k_{\text{F}} \lesssim 0.27 \, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$) looks
somewhat suspicious. Furthermore, besides the approximation of $\tilde{V}$ by its
average $\langle\tilde{V}\rangle$ mentioned above, \Refe{Cao2006} used
the Babu-Brown theory \cite{Babu1973} to determine the Landau
parameters in a self-consistent way, with the aim to avoid the
liquid-gas instability in low-density symmetric nuclear
matter. However, the validity of this argument may be questioned,
since the liquid-gas instability in symmetric matter is physical and
does exist.
For these reasons, the screening problem was reconsidered by the
authors in \Refs{Ramanan2018} and \cite{Urban2020}. As input $nn$
interaction, $V^0(k, k')$ in Eq.~\eqref{eq:V0ab}, as well as in the
antisymmetrized 3p1h vertices, we use $\ensuremath{V_{\text{low}\,k}}$. For the ph interaction in the RPA, as
well as in the calculation of the effective mass $m^*$, we use for
simplicity a phenomenological Skyrme energy-density functional (SLy4
in the present example). No further approximations are made, and in
particular, the full momentum dependence of the 3p1h vertices is taken
into account when summing over the loop momenta.
Starting with diagram (a) in \Fig{fig:vind}, computed with the
$\ensuremath{V_{\text{low}\,k}}$ interaction obtained for a common choice of the cutoff
$\Lambda = 2\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, one obtains the gap shown in
\Fig{fig:screen-low}(b) as the black dashed line. As already observed
in~\cite{Shen2005,Cao2006}, the screening vanishes and the BCS result
is recovered in the limit $k_{\text{F}}\to 0$, in contradiction to the GMB
result. In fact, the GMB result \cite{Gorkov1961} is also based on
diagram (a) (since all other diagrams can be neglected in the limit
$k_{\text{F}} a\to 0$), but with a subtle difference: In the 3p1h vertices,
one has to use the scattering length (i.e., the full T matrix), and
not just the bare interaction $V$ used in diagram (a). This amounts to
implicitly summing ladders to all orders in the 3p1h vertices, as
shown in diagram (a$'$).
Making use of the RG flow of the $\ensuremath{V_{\text{low}\,k}}$ interaction, a simple way to
solve this problem was suggested in \Refe{Ramanan2018}. First, notice
that, when decreasing the cutoff $\Lambda$, the RG flow guarantees
that the scattering length $a_{nn}$ remains constant by increasing the
matrix elements of the interaction as $V \approx
(m/a-2m\Lambda/\pi)^{-1}$. In this way, the interaction becomes more
and more perturbative in the sense that the Born term is already a
good approximation to the full T matrix. Second, the RG evolution of
$\ensuremath{V_{\text{low}\,k}}$ leaves the BCS gap independent of the cutoff $\Lambda$, as
long as $\Lambda\gtrsim 2.5 k_{\text{F}}$. So, it is preferable to scale the
cutoff with $k_{\text{F}}$, using at each density the lowest permissible cutoff
$\Lambda = 2.5 k_{\text{F}}$. Calculating diagram (a) of \Fig{fig:vind} with
this prescription, one obtains the result shown in \Fig{fig:screen-low}(b)
as the purple dash-dotted line, which indeed reproduces the GMB result
(black star) in the limit $k_{\text{F}}\to 0$.
At higher densities, the RPA corrections [\Fig{fig:vind}(b)] become
important. In the case that the ph interaction is of the Skyrme type,
it is rather straightforward to resum the RPA bubble series
exactly~\cite{Urban2020,Garcia1992,Pastore2015}. This gives our final
result shown as the red dotted lines in \Figs{fig:screen-QMC} and
\ref{fig:screen-low}(b). As discussed above, the inclusion of
\Fig{fig:vind}(b) strongly reduces the screening effect of diagram
(a).
If we use in diagram (b) instead of the full RPA the Landau
approximation, as it was done in
\Refs{Shen2005,Cao2006,Ramanan2018,Ding2016}, we obtain the green
solid line shown in \Fig{fig:screen-low}(b). Comparing this result with
the red dotted line, one concludes that the Landau approximation is
only valid for $k_{\text{F}} \lesssim 0.4\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$. Beyond this density, it
overestimates the effect of the RPA and, for $k_{\text{F}}>0.7\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, it
even predicts anti-screening (i.e., the gap is enhanced) because of
the large values of the Landau parameters. Anti-screening was already
found long ago in \Refe{Schulze1996}, but only at much higher
densities ($k_{\text{F}}\gtrsim1.3\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$). The strong anti-screening effect
found in \cite{Ramanan2018} within the Landau approximation at higher
density is absent or strongly suppressed within the full RPA
calculation \cite{Urban2020}.
So far we have concentrated only on the low-density region with
$k_{\text{F}}<0.9\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$. At higher densities, as we have seen in
\Sec{subsect:s-wave-bcs}, the effective mass $m^*$ leads to large
uncertainties. Similarly, for the screening diagram (b), uncertainties
arise from the Landau parameters $f_0$ and $g_0$ and more generally,
if one goes beyond the Landau approximation, from the ph residual
interactions. Hence, in \cite{Urban2020}, we repeated the calculations
with a couple of different Skyrme parametrizations. All screened
results shown in \Fig{fig:screen-high}(a)
\begin{figure}
\begin{center}
\includegraphics[scale = 0.72]{figs/Delta-SLy-BSk.eps}
\includegraphics[scale = 0.72]{figs/DingRios.eps}
\end{center}
\caption{Behaviour of the singlet gap $\Delta$ versus $k_{\text{F}}$. (a)
Results of \Refe{Urban2020} with (thick lines) and without (thin
lines) medium polarization corrections for different Skyrme
parameterizations used in the effective mass and in the RPA bubble
summation. (b) Results of \Refe{Ding2016} including effects of
medium polarization (=long-range correlations, LRC) as well as
short-range correlations (SRC).}
\label{fig:screen-high}
\end{figure}
(thick lines) were computed with the full RPA and with the density
dependent cutoff $\Lambda=2.5 \,k_{\text{F}}$ for $k_{\text{F}}<0.8\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, while we kept
$\Lambda=2\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ constant for $k_{\text{F}}\geq 0.8\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ since this
cutoff gives the correct BCS gap in the whole density range, and with
larger values of $\Lambda$ the advantage of the soft $\ensuremath{V_{\text{low}\,k}}$ interactions would be
lost. Surprisingly, when screening is included, the dependence on the
choice of the Skyrme interaction is weaker than without screening. In
particular, for all the considered Skyrme forces the maximum of the
screened gap lies now between $2.3$ and $2.5\;\text{MeV}$.
These results can be compared with a calculation based on the
self-consistent Green's function theory \cite{Ding2016}. Here, the
energy and momentum dependent single-particle self-energy
$\Sigma(k,\omega)$ is computed in ladder approximation, whereby all
propagators are themselves dressed ones. This approach accounts
automatically for the short-range correlations created by the
realistic (hard) $nn$ interactions, but not for screening, which
corresponds to long-range correlations. In \Refe{Ding2016},
screening was in fact only included in an approximate way, by adding
$V^{(a)}+V^{(b)}$ using the same approximations as in \Refe{Cao2006}
(see above). The results, obtained with three different bare $nn$
interactions (AV18, CDBonn, and the chiral N3LO interaction) are shown
in \Fig{fig:screen-high}(b). As long as only screening is
included (red, green and blue points), the maximum of the gap is again
about $2.5\;\text{MeV}$, but the density where it tends to zero is clearly
higher than in our screening calculations
(\Fig{fig:screen-high}(a)). However, one should keep in mind that for
the momentum transfers needed in this density region neither the
Landau approximation nor the full Skyrme ph interaction can be
considered to be reliable.
The effect of short-range correlations is closely related to the $Z$
factors included in the gap equation in \Refe{Cao2006}. However, the
treatment becomes somewhat more sophisticated when, as in
\cite{Ding2016}, the full spectral functions are used instead of just
the quasiparticle peak. Taking into
account the short-range correlations in addition to the screening
(black, purple, and turquoise points in \Fig{fig:screen-high}(b)), the maximum
gap is further
reduced to $\approx 1.8\;\text{MeV}$. Another observation is that also
the density where the $^1S_0$ gap goes to zero is reduced. Apparently this effect is important and should be studied also at lower densities,
along with a more complete treatment of the screening. Short-range
correlations can also be included via the correlated basis
function method that once again leads to a suppression of the BCS gap~\cite{Pavlou2017},
but this technique will not be discussed in this short review.
\subsection{BCS-BEC Crossover}
\label{subsect:bcs-bec}
The BCS-BEC crossover has attracted a lot of attention in the last two
decades, especially because of its experimental realization in ultracold
trapped atoms. In these experiments, one can change the interatomic
interaction by varying the magnetic field, in such a way that the
system passes continuously from a BCS superfluid in the case of weakly
attractive interactions, through a resonance where the scattering
length $a$ diverges (unitary limit), to a Bose-Einstein condensate
(BEC) of bound dimers. For recent reviews emphasizing the analogies
between ultracold atoms and nuclear and neutron matter, see
\cite{CalvaneseStrinati2018,Ohashi2020}.
Of course, in nuclear systems, the interaction cannot be changed. In
this case, the crossover can be realized with changing density. Very
dilute symmetric nuclear matter will form a BEC of deuterons which,
with increasing density, goes continuously over into a BCS state with
$pn$ Cooper pairs \cite{Baldo1995}. In neutron matter, however, a
BCS-BEC crossover does not exist, because there is no bound dineutron
state. But the $nn$ scattering length is unusually large in the $s$
wave, signalling a nearly bound state. Hence, the Cooper pairs in
dilute neutron matter have a relatively small size (coherence length),
comparable to the average distance between particles
\cite{Matsuo2006,Margueron2007}.
In this case, similar to the situation when there is a true bound
state, the temperature $T^*$ where pairs dissociate can be higher than
the superfluid critical temperature $T_c$ where the pairs undergo
Bose-Einstein condensation. This can be seen in \Fig{fig:TstarTc}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.72]{figs/TstarTc.eps}
\end{center}
\caption{QMC phase diagram of \Refe{Abe2009} displaying the critical
temperature $T_c$ (blue squares) and the pair dissociation
temperature $T^*$ (red points) in units of the Fermi energy
$E_F=k_{\text{F}}^2/(2m)$ versus $k_{\text{F}}$. For comparison, the critical
temperatures one would get from the BCS relation $T_c =
0.57\Delta(T=0)$ are shown, too (green triangles).}
\label{fig:TstarTc}
\end{figure}
which shows the QMC results of \Refe{Abe2009} for $T^*$ and $T_c$ as
functions of $k_{\text{F}}$. For better visibility of the low-density
results, we have divided $T^*$ and $T_c$ by the Fermi energy $E_F =
k_{\text{F}}^2/(2m)$. The region between $T_c$ and $T^*$ is called the
pseudogap phase because, although there is no true gap, there exists a
suppression of the level density at $\omega = 0$ (energy measured
relative to the chemical potential $\mu$) because of the energy needed
to break a pair.
In the pseudogap region, it is usually a bad approximation to compute
the density from the uncorrelated occupation numbers
\begin{equation}
n_{\text{free}} = 2\int \frac{d^3k}{(2\pi)^3} f(\xi(\vek{k}))\,,
\label{eq:rho_free}
\end{equation}
where the factor of $2$ accounts for the spin degeneracy and $f(\xi) =
1/(e^{\xi/T}+1)$ is the Fermi function. Taking into account the
density corresponding to the correlated pairs is crucial to get the
correct result for $T_c$ in the BEC limit. This is done by the
Nozi\`eres-Schmitt-Rink (NSR) approach \cite{Nozieres1985}, which writes
\begin{equation}
n = n_{\text{free}} + n_{\text{corr}}\,.
\label{eq:NSR_decomp}
\end{equation}
The correlated density $n_{\text{corr}}$ is calculated to first order in
the self-energy $\Sigma$ (in the imaginary time
formalism~\cite{FetterWalecka}),
\begin{equation}
n_{\text{corr}} = 2 \int\! \frac{d^3k}{(2\pi)^3} \frac{1}{\beta}
  \sum_{\omega_n} \big(\mathcal{G}_0(\vek{k}, i\omega_n)\big)^2
  [\Sigma(\vek{k}, i\omega_n) - \re \Sigma(\vek{k},\xi(\vek{k}))]\,,
\label{eq:rhocorr}
\end{equation}
where $\omega_n$ are the Matsubara frequencies and $\mathcal{G}_0$ is the
uncorrelated single-particle Green's function. The self-energy
$\Sigma$ is calculated within the ladder approximation as shown in
\Fig{fig:feyn}(a) and (b).
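The uncorrelated density of \Eq{eq:rho_free} is a simple one-dimensional integral. The sketch below evaluates it with a numerically stable form of the Fermi function (units $\hbar=m=1$; grid and temperature are illustrative) and checks the $T\to 0$ limit $n_{\text{free}}\to k_{\text{F}}^3/(3\pi^2)$ for $\mu=k_{\text{F}}^2/2$:

```python
import numpy as np

def n_free(mu, T, kmax=10.0, npts=20000):
    """Uncorrelated density n_free = (1/pi^2) int dk k^2 f(xi(k)),
    with xi = k^2/2 - mu (units hbar = m = 1, spin factor 2 included).
    The Fermi function is written via tanh to avoid exp overflow:
    1/(exp(x) + 1) = (1 - tanh(x/2))/2."""
    k = np.linspace(0.0, kmax, npts)
    xi = 0.5 * k**2 - mu
    f = 0.5 * (1.0 - np.tanh(xi / (2.0 * T)))
    return np.sum(k**2 * f) * (k[1] - k[0]) / np.pi**2

kF = 1.0
n_cold = n_free(mu=0.5 * kF**2, T=0.01)   # T << E_F = k_F^2 / 2
n_T0 = kF**3 / (3.0 * np.pi**2)           # exact T = 0 value
```

In the NSR scheme, the same $\mu$ is then readjusted so that $n_{\text{free}}+n_{\text{corr}}$, rather than $n_{\text{free}}$ alone, reproduces the given density.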
\begin{figure}
\begin{center}
\includegraphics[angle = 0, width = 10cm, clip = true]{figs/crossover-diag.eps}
\end{center}
\caption{Diagrams for the T matrix (a), the self energy (b), and the
thermodynamic potential (c) in ladder approximation.}
\label{fig:feyn}
\end{figure}
In the original NSR paper \cite{Nozieres1985}, the correlated density
is obtained as the derivative with respect to $\mu$ of the
thermodynamic potential represented by diagram (c) in \Fig{fig:feyn},
which is equivalent to keeping the self-energy only to first order in
\Eq{eq:rhocorr} \cite{CalvaneseStrinati2018}. However, the subtraction
of the on-shell self energy $\Sigma(\vek{k},\xi(\vek{k}))$ in
\Eq{eq:rhocorr} is absent in the original NSR approach. It is
necessary as this term is already taken into account in $\mathcal{G}_0$ via
the quasiparticle
energy~\cite{Zimmermann1985,Schmidt1990,Jin2010,Ramanan2013}.
The correlated density was calculated in~\cite{Ramanan2013} using the
$\ensuremath{V_{\text{low}\,k}}$ interaction. In order to accommodate the non-local
interaction, the authors expressed the correlated density in the basis
that diagonalizes $V \bar{G}^{(2)}_0$, where $\bar{G}^{(2)}_0$ is the
two-particle retarded Green's
function. In~\cite{Ramanan2018,Urban2020} the bare interaction was
augmented by the induced interaction as discussed in
\Sec{subsect:screening}. A similar calculation using a separable
interaction instead of $\ensuremath{V_{\text{low}\,k}}$, but without screening corrections,
was done in \cite{Tajima2019}. In all these calculations, the
subtraction term was approximated by the first-order Hartree-Fock
(HF) self-energy. For a detailed comparison of different subtraction
prescriptions, see \cite{Durel2020}.
\begin{figure}
\begin{center}
\includegraphics[angle = 0, scale = 0.35, clip = true]{figs/Tc_vs_kF_medium_corrections_compare.pdf}
\end{center}
\caption{Critical temperature $T_c$ with (red) and without (black)
screening corrections, as a function of $k_F$ computed with
(dashes) and without (solid lines) the NSR correction to the
density, for two different Skyrme parametrizations used in the
calculation of the effective mass and of the screening corrections
$V^{(a)}+V^{(b)}$ \cite{Urban2020}.}
\label{fig:NSR-compare}
\end{figure}
Fig.~\ref{fig:NSR-compare} shows the effect of including the
correlated density on the density dependence of the transition
temperature \cite{Urban2020}. The black lines include the effective
mass $m^*$, computed with SLy4 (left panel) and BSk19 (right panel),
while the red lines include also the screening effects
$V^{(a)}+V^{(b)}$ (calculated with the same Skyrme force as
$m^*$). For the solid lines, $k_{\text{F}}$ was computed with $n_{\text{free}}$, while
for the dashed lines, $k_{\text{F}}$ was computed with the NSR density
$n_{\text{free}}+n_{\text{corr}}$. We note that the effect of screening overwhelms
the NSR correction and hence dominates the change in the transition temperature.
In particular, with screening included, the NSR effect is even smaller
than with the bare interaction.
The smallness of the NSR effect is consistent with the fact that the
QMC critical temperature $T_c$ of \Refe{Abe2009} satisfies well the
BCS relation $T_c\approx 0.57 \Delta(T=0)$ as can be seen in
\Fig{fig:TstarTc}. Also, the pseudogap computed in \cite{Durel2020} is
very small. Therefore, it is surprising that the temperatures $T^*$ up
to which pair correlations survive in \Refe{Abe2009} can be quite far
above $T_c$.
\section{Triplet pairing}
\label{sect:pwave}
Pairing in the triplet channel is supposed to occur at much higher
densities, say, $k_{\text{F}} \gtrsim \, 1.3 \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$
(corresponding to number densities $n\gtrsim 0.07 \;\text{fm}^{-3}$ or mass
densities $\rho \gtrsim 1.2\cdot 10^{14} \text{g}/\text{cm}^3$),
and hence occurs in the outer layers of the neutron star core.
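The correspondence between these three ways of quoting the density follows from $n = k_{\text{F}}^3/(3\pi^2)$ for a single-species Fermi gas; a quick check (neutron mass $\approx 1.675\times 10^{-24}\,$g):

```python
import numpy as np

M_N_GRAMS = 1.6749e-24   # neutron mass in g
FM_TO_CM = 1.0e-13       # 1 fm = 1e-13 cm

def number_density(k_fermi):
    """Neutron number density in fm^-3 for Fermi momentum k_fermi in fm^-1."""
    return k_fermi**3 / (3.0 * np.pi**2)

def mass_density(k_fermi):
    """Mass density in g/cm^3."""
    n_cm3 = number_density(k_fermi) / FM_TO_CM**3   # fm^-3 -> cm^-3
    return n_cm3 * M_N_GRAMS

kF = 1.3  # fm^-1, the onset of triplet pairing quoted in the text
print(number_density(kF))  # ~0.074 fm^-3
print(mass_density(kF))    # ~1.2e14 g/cm^3
```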
The evidence for pairing in the spin-triplet channel at high densities
comes from the fact that for momenta $\gtrsim 1.3 \, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, the
attraction in this channel gets stronger, resulting in
positive two-body phase shifts (see left panel of
Fig.~\ref{fig:phase-shift-triplet}),
\begin{figure}
\includegraphics[scale = 0.27, clip = true]{figs/phase_shift_av18_n3lo_compare.eps} \hspace*{0.1in}
\includegraphics[scale = 0.25, clip = true]{figs/triplet_gaps_av18_n3lo.eps}
\caption{Phase shifts and mixing in the $^3PF_2$ channel compared with the
  experimental phase shifts of Arndt et al.~\cite{Arndt1997}. Beyond
  lab energies of $\sim 150 \;\text{MeV}$, the phase shifts from
  AV18 and N3LO do not agree with the experimental phase
  shifts. This is reflected in a model-dependent gap at the BCS
  level. It should be noted that the N3LO results for $k_{\text{F}}$ beyond
  $2.5\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ become unreliable as the chiral cutoff $\Lambda \sim
  3.0\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$.}
\label{fig:phase-shift-triplet}
\end{figure}
until it becomes the most attractive channel that supports pairing at
high densities~\cite{Takatsuka1992}. In the spin-triplet channel,
due to the tensor force, the $l = 1$
and $l = 3$ partial waves are coupled, with total angular momentum $J
= 2$, and it is denoted as $^3P_2-^3F_2 \equiv \,^3PF_2$. The zero-temperature
BCS gap is obtained by solving the angle-averaged gap equation~\cite{Baldo1998}
that couples the $l = J \pm 1$ states and is written as,
\begin{equation}
\Delta_l(k) = -\sum_{l'} \frac{(-1)^{(l - l^\prime)/2}}{\pi}\int_{0}^{\infty}
q^2dq V_{ll'}(k,q) \frac{\Delta_{l'}(q)}{E(q)},
\label{eq:gap_coup}
\end{equation}
where $E(q) = \sqrt{\xi^2(q) +
D^2(q)}$ and $\xi(q) = \varepsilon(q) - \mu$. Further, the overlap
between the different partial waves is ignored and $D^2(q) =
\Delta_1^2(q) + \Delta_3^2(q)$~\cite{Takatsuka1992,Baldo1998}. The
validity of the angle-averaging approximation was confirmed
in~\cite{Khodel2001}.
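The structure of \Eq{eq:gap_coup} can be illustrated with a deliberately simplified toy model: a single channel with a separable Gaussian attraction in arbitrary units (the coupling, form factor, and chemical potential below are hypothetical and not fit to any $NN$ interaction). For a separable $V(k,q) = -g\,w(k)w(q)$ the gap takes the form $\Delta(k) = \Delta_0\, w(k)$, and the gap equation collapses to a one-dimensional self-consistency condition that can be solved by bisection:

```python
import numpy as np

# Toy single-channel BCS gap equation (arbitrary units), illustrating the
# structure of Delta(k) = -(1/pi) Int q^2 dq V(k,q) Delta(q)/E(q).
# For V(k,q) = -g w(k) w(q), Delta(k) = D0 w(k) and the equation becomes
# 1 = (g/pi) Int q^2 w(q)^2 / sqrt(xi^2 + D0^2 w^2) dq.

g = 2.0                     # coupling strength (hypothetical)
mu = 1.0                    # chemical potential (hypothetical)
q = np.linspace(1e-4, 10.0, 4001)
w = np.exp(-q**2 / 4.0)     # Gaussian form factor (hypothetical)
xi = q**2 - mu              # free spectrum measured from mu

def integrate(f):
    # simple trapezoidal rule on the fixed q grid
    return np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(q))

def rhs(D0):
    E = np.sqrt(xi**2 + (D0 * w)**2)
    return (g / np.pi) * integrate(q**2 * w**2 / E)

# rhs(D0) is decreasing in D0 and diverges logarithmically as D0 -> 0,
# so a nonzero gap exists for any g > 0 (the weak-coupling BCS instability).
lo, hi = 1e-8, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if rhs(mid) > 1.0:
        lo = mid
    else:
        hi = mid
gap = 0.5 * (lo + hi)
print(gap)
```

The realistic calculations cited above solve the coupled $l = J \pm 1$ system with momentum-dependent kernels instead, but the fixed-point structure is the same.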
However, pairing in this channel is plagued by uncertainties as the
input free-space two-body interactions, which are the starting point
for the BCS gap equation, are not phase shift
equivalent~\cite{Ding2016,Baldo1998,Maurizio2014,Srinivas2016,Drischler2016,Zuo2008,Papakonstantinou2017}.
This is seen in \Fig{fig:phase-shift-triplet}(a), where the phase
shifts and mixing angle are compared against the experimental phase
shift~\cite{Arndt1997} for two representative
realistic interactions, the phenomenological interaction, AV18~\cite{Wiringa1995} and the chiral
interaction at N3LO~\cite{Entem2003}, as a function of
lab energies. From \Fig{fig:phase-shift-triplet}(a), it is seen that
beyond lab energies of $\approx 150 \, \;\text{MeV}$, the agreement is rather
poor. These discrepancies result in model dependent gaps already at
the BCS level as seen in \Fig{fig:phase-shift-triplet}(b).
While~\cite{Baldo1998} used realistic interactions to track the model
dependence at the BCS level in the triplet pairing gaps, the input
interactions used in~\cite{Maurizio2014,Srinivas2016,Drischler2016}
are the modern $NN$ interactions obtained via chiral perturbation theory
at N3LO, which are further softened by the RG running~\cite{Bogner2010}.
The similarity renormalization group (SRG) interactions ($\ensuremath{V_{\text{srg}}}$) are very useful in
studying the gaps in the spin triplet channel. For a given bare
interaction, such as AV18~\cite{Wiringa1995} or
N3LO~\cite{Entem2003}, the SRG evolution preserves the phase shifts at all
energies (unlike $\ensuremath{V_{\text{low}\,k}}$ which preserves the phase shifts only for $k < \Lambda$)
and hence the variation of the gap as a function of the
SRG evolution scale $\lambda$ quantifies the missing $3N$ force and medium corrections
and has nothing to do with the inequivalence of the phase
shifts~\cite{Srinivas2016}. In a complementary study, the authors
of~\cite{Maurizio2014}, analysed the dependence of the gap on the
chiral cutoff when the N3LO interactions were used as inputs, which
highlights the differences in dealing with the two pion exchange
interaction term (see Fig.~9 in~\cite{Maurizio2014}).
The correlations beyond the BCS approximation correct both the
quasiparticle spectrum and the particle-particle vertex that enter
the gap equation. The first order Hartree-Fock self-energy
is given by,
\begin{equation}
\Sigma^{(1)}(k) = \int \,\frac{d^3k^\prime}{(2\pi)^3} \,
n_{\vek{k}^\prime} \sum_{l,S,J} 2 \pi (2J+1) \ip{q}{V_{SllJ}|q} (1
- (-1)^{l+S+1}),
\label{eq:self-energy_eqn}
\end{equation}
where $n_{\vek{k}} = \theta(k_{\text{F}} - k)$ is the Fermi-Dirac
distribution at zero temperature and $q = \vert \vek{k} - \vek{k}^\prime
\vert/2$. The HF self-energy changes the free quasiparticle
spectrum to $\varepsilon(k) = k^2/(2 m) + \Sigma^{(1)}(k)$, and
$k_{\text{F}}/m^* = [d\varepsilon(k)/dk]_{k = k_{\text{F}}}$ relates the effective mass
to the self-energy. When $m^* < m$, the density of states near the
Fermi surface decreases and hence one can expect a suppression of
pairing and therefore, smaller gaps.
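The extraction of $m^*$ from the self-energy can be made concrete with a hypothetical quadratic model for $\Sigma^{(1)}(k)$ (not a realistic HF result); a positive curvature gives $m^* < m$, the case discussed above:

```python
import numpy as np

# Effective mass from a model self-energy via k_F/m* = [d eps/dk]_{k=k_F},
# with eps(k) = k^2/(2m) + Sigma(k). Sigma is a hypothetical quadratic fit.

m = 1.0
kF = 1.7
c = 0.12                        # curvature of the model self-energy (hypothetical)
sigma = lambda k: -0.5 + c * k**2

def effective_mass(kF, h=1e-5):
    eps = lambda k: k**2 / (2 * m) + sigma(k)
    deriv = (eps(kF + h) - eps(kF - h)) / (2 * h)   # central difference
    return kF / deriv

mstar = effective_mass(kF)
print(mstar, m / (1 + 2 * m * c))   # numeric vs analytic: m* = m/(1 + 2 m c)
```

For a quadratic $\Sigma$ the derivative definition gives the momentum-independent result $m^* = m/(1+2mc)$, which the central difference reproduces.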
\Fig{fig:effective-mass-3pf2-gaps}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.3]{figs/srg_em_500_self_energy.eps}
\end{center}
\caption{Medium effects: Comparing the gaps with free single
particle spectrum (lines) and effective mass (symbols) for the
AV18 interaction. The blue dots are the results of Baldo et
al.~\cite{Baldo1998} and the squares show the gaps computed with $\ensuremath{V_{\text{srg}}}$ for $\lambda
= 2.5\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ (green filled squares) and $\lambda = 2.0 \, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$
(red empty squares) respectively.}
\label{fig:effective-mass-3pf2-gaps}
\end{figure}
shows the gaps with both the free single particle spectrum (lines) and
with the effective mass $m^*$ (symbols - circles and squares). The
black solid line is the bare interaction and the dashed lines and
dash-dotted lines are the gaps obtained from the SRG evolved
interactions for $\lambda = 2.0\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ and $2.5\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$
respectively. The filled circles are the results for the gap with
effective mass calculated from the Br{\"u}ckner Hartree-Fock (BHF) by
Baldo et al.~\cite{Baldo1998}. The filled and empty squares are the gaps with
the first-order effective mass calculated using
\Eq{eq:self-energy_eqn} for $\lambda = 2.5\, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ and $2.0\,
\ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$ respectively. When compared with the free single particle spectrum, the
inclusion of the effective mass, both BHF and first order, reduces the gaps, due
to the suppression of the density of states at $k_{\text{F}}$. However, the gaps
are more suppressed with a first order effective mass compared to the effective mass from BHF. This should be expected at high
densities, as a first order calculation of the self-energy is
insufficient. It is interesting to note that the dependence
on $\lambda$ is dramatically lessened with an effective mass when compared to the corresponding
free spectrum result. Lowering $\lambda$ makes the SRG evolved
interaction more attractive and hence increases the effective
mass. However, including an effective mass reduces the BCS gap. The
dramatic decrease in the $\lambda$ dependence arises due to a
compensation between these two effects.
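The suppression mechanism can be sketched with the weak-coupling estimate $\Delta \sim W\exp[-1/(N_0|V|)]$, where $N_0 = m^* k_{\text{F}}/(2\pi^2)$ is the density of states per spin at the Fermi surface. The prefactor $W$ and coupling $V$ below are hypothetical, so only the ratio of gaps for two effective masses is meaningful:

```python
import numpy as np

# Exponential sensitivity of the gap to the density of states at the
# Fermi surface: Delta ~ W * exp(-1/(N0 |V|)), N0 = m* k_F/(2 pi^2).

def gap_estimate(mstar, kF=1.7, V=1.2, W=10.0):
    N0 = mstar * kF / (2 * np.pi**2)   # density of states per spin
    return W * np.exp(-1.0 / (N0 * V))

g_free = gap_estimate(mstar=1.0)
g_eff  = gap_estimate(mstar=0.8)   # m*/m = 0.8: fewer states at the Fermi surface
print(g_eff / g_free)              # < 1: the gap is exponentially suppressed
```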
The three-body force is expected to play a crucial role for pairing in the
triplet channel and in fact enhances the
gap~\cite{Maurizio2014,Srinivas2016,Drischler2016,Papakonstantinou2017,Zhou2004,Hebeler2010}.
These forces have been calculated microscopically using
semi-phenomenological
interactions~\cite{Zuo2008,Papakonstantinou2017,Zhou2004,Li2008} as
well as using chiral EFT, where the $3N$ terms first enter at
N2LO~\cite{Hebeler2010,Holt2010}. Fig.~\ref{fig:triplet-3N}
\begin{figure}
\begin{center}
\includegraphics[scale = 0.3,clip = true]{figs/srg_EM_500_3N_cutoff.eps}
\end{center}
\caption{Triplet gaps including $3N$ interactions.}
\label{fig:triplet-3N}
\end{figure}
shows the triplet gap including the $3N$ interaction, which is usually
incorporated as a density dependent $2N$ interaction. While
in~\cite{Maurizio2014}, the density dependent $2N$ interaction is
generated from an in-medium chiral $3N$ force,
\Refs{Srinivas2016,Drischler2016} use an effective density dependent
$2N$ interaction from the $3N$ chiral interactions at N2LO~\cite{Hebeler2010}. In addition,~\cite{Drischler2016} also
considers the $3N$ contributions from N3LO. In \Fig{fig:triplet-3N}, the area shaded in gray
between the solid red lines represents the variations in the gap due
to the uncertainties in the low-energy constants~\cite{Drischler2016}
as well as the three-body cutoff, while the black dashed lines
represent the effect of varying the three-body cutoff after fixing the chiral
low-energy constants~\cite{Srinivas2016}. The green hatched region
between the dash-dotted green lines represents the gaps obtained with $3N$
interactions at N3LO with the associated uncertainties in the low-energy
constants~\cite{Drischler2016}. It is worth noting that the spin
triplet gaps are extremely sensitive to the three-body force compared
to the spin singlet
gap~\cite{Drischler2016,Papakonstantinou2017,Hebeler2010}. In the
$^1S_0$ channel the corrections to the gap enter only at higher densities,
while in the $^3PF_2$ channel, the effects of the three-body interaction on
the gap are dramatic.
As discussed already in \Sec{subsect:screening}, beyond-BCS correlations (short- and long-range) lead to important modifications of the gap. The literature on including these
medium effects for the $p$-wave is rather sparse, with some recent attempts by Dong
et al.~\cite{Dong2013} and Ding et al.~\cite{Ding2016}. The authors in~\cite{Dong2013} calculate the quasiparticle weight $Z$ as was done in~\cite{Cao2006} for the singlet
channel (see \Sec{subsect:screening}). The quasiparticle weight
was calculated both with and without the inclusion of a three-body force. The
presence of the $Z$ factor suppresses the gaps by an entire order of magnitude as well as
shrinks the density region where the gaps exist. In~\cite{Ding2016}, the
short-range correlations are taken into account via the self-consistent Green's function
techniques, extrapolated to zero temperature. In addition, the screening corrections as
in \cite{Cao2006} (see \Sec{subsect:screening}) have been extended to the $p$-wave. In
this case it seems that the screening enhances the gap (antiscreening)
while the short-range correlations suppress it, with the net effect of strongly
reducing the gaps compared to the BCS results.
While pairing in the triplet channel is an important ingredient to
describe the physics of the neutron star core, much remains to be
explored due to the uncertainties in the free-space interactions.
\section{Conclusions}
In this brief review we discussed the state of the art concerning the $s$ and $p$-wave
pairing in pure neutron matter. In the $s$-wave, which is most relevant at low densities,
the gap as a function of density seems to be under control. QMC and the most recent many-body calculations agree that the gap, before reaching a maximum of $\sim 2 - 2.5 \;\text{MeV}$
at $k_{\text{F}} \sim 0.8 \, \ifmmode\;\text{fm}^{-1}\else~fm$^{-1}$\fi$, follows the behavior of the BCS gap reduced by a factor of $0.6$
to $0.7$, except at extremely low densities (of purely academic interest) where the GMB
(reduction by a factor of $0.45$) limit is reproduced. Effects of BCS-BEC crossover on the
critical temperature seem to be very weak. Beyond the maximum of the gap,
there are uncertainties that come from medium effects such as effective mass, screening and
short-range correlations. In addition, there are other factors such as $3N$
forces that are important at higher densities, which we have not discussed here.
In the $p$-wave, which is supposed to be dominant at high densities, even at the BCS
level, the gaps have large uncertainties. The inclusion
of short- and long-range correlations seems to reduce the gaps, while the $3N$ force
enhances them. However, since the $p$-wave is always in the extremely weak-coupling limit,
the gaps exhibit exponential sensitivity to the details of the interactions and the
approximations. At densities corresponding to the neutron star core, it would be more
realistic to consider asymmetric matter with a finite proton fraction. This might
completely change the conclusions through the $nnp$ $3N$ interactions.
\vspace*{0.1in}
\noindent \textbf{Acknowledgment:} The authors acknowledge support from Collaborative Research Program of IFCPAR/CEFIPRA, Project number: 6304-4.
\vspace*{0.1in}
\noindent \textbf{Author contribution statement:} Both authors contributed equally to this work.
%% This example is adapted from the "Business Cards for Programmers/Developers" example at
% https://www.overleaf.com/latex/templates/business-cards-for-programmers-slash-developers/wrwgsnnmxwyg.
% Instructions are at the end of the file.
\documentclass[11pt,a4paper]{memoir}
\setstocksize{55mm}{85mm} % UK Stock size
\setpagecc{55mm}{85mm}{*}
\settypeblocksize{45mm}{75mm}{*}
\setulmargins{5mm}{*}{*}
\setlrmargins{5mm}{*}{*}
\usepackage{xcolor}
\usepackage{datatool}
%% The "database" is a comma-separated values (CSV) file.
%% The first line should contain the column headers, without space characters, e.g.
%% Name,JobTitle,Department
%%
%% If a field value contains a comma, then the field value needs to be surrounded with double quotes, e.g.
%% John Smith,Lecturer,"School of Science, Mathematics and Engineering"
%%
%% Spreadsheet applications can usually export such a .csv file.
%%
%% If field values are expected to contain LaTeX special characters like $, &, then use \DTLloadrawdb{data}.csv instead.
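%% For example, a minimal data.csv consistent with the column-to-command mapping used below might look as follows (all values hypothetical):

```csv
Name,Subtitle,Division,Employer,JobTitle,Speciality1,Speciality2,Speciality3,Web,Email1,Email2,Mobile,GPG
Ada Lovelace,Analytical Engines,"Research, Computing",Example Ltd,Developer,LaTeX,Python,Git,www.example.com,ada@example.com,ada@home.example,+44 1234 567890,0123 4567 89AB CDEF
```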
\DTLloaddb{namelist}{data.csv}
\setheadfoot{0.1pt}{0.1pt}
\setheaderspaces{1pt}{*}{*}
\usepackage{fontspec}
\setmainfont[]{EBGaramond12-Regular.ttf}
\checkandfixthelayout[fixed]
\pagestyle{empty}
\usepackage{pstricks}
\usepackage{auto-pst-pdf,pst-barcode}
%% These packages only for typesetting the instructions
\usepackage{listings}
\usepackage{enumitem}
\begin{document}
%% For each line in namelist (which was loaded from data.csv),
%% output the following text (with mailmerged field values)
\DTLforeach{namelist}{%
%% Map each column header in your .csv file to a command
\Name=Name,%
\Subtitle=Subtitle,%
\Division=Division,%
\Employer=Employer,%
\JobTitle=JobTitle,%
\SpecialityOne=Speciality1,%
\SpecialityTwo=Speciality2,%
\SpecialityThree=Speciality3,%
\Web=Web,%
\EmailOne=Email1,%
\EmailTwo=Email2,%
\Mobile=Mobile,%
\GPG=GPG%
}{%%% Start designing your output text!
%%% Mailmerged field values can be inserted using the commands
%%% you've just mapped above.
\begin{Spacing}{0.75}%
\noindent
\textbf{\Name}\\[3pt]
\tiny\mbox{}\Subtitle \hfill {\color{gray}\Division{} / \Employer}\\
\rule{\textwidth}{.3mm}\\
\begin{minipage}[t]{30mm}
\vspace{-0mm}%
\begin{pspicture}(25mm,25mm)
% The MECARD format is used to exchange contact information. More information at:
% https://www.nttdocomo.co.jp/english/service/developer/make/content/barcode/function/application/addressbook/index.html
\psbarcode{MECARD:N:\Name;URL:\Web;EMAIL:\EmailOne;TEL:\Mobile;NOTE:GPG key fingerprint: \GPG;}{eclevel=L width=1 height=1}{qrcode}
\end{pspicture}
\end{minipage}
\hspace{1mm}
\begin{minipage}[t]{42mm}
\vspace{-0mm}%
\begin{flushleft}
{\scriptsize
\begin{Spacing}{1}%
\textbf{\JobTitle}\\
\hspace{5mm}\mbox{}\SpecialityOne\\
\hspace{5mm}\mbox{}\SpecialityTwo\\
\hspace{5mm}\mbox{}\SpecialityThree \vspace{2mm}\\
\end{Spacing}
}
{\tiny
\begin{tabular}{@{}r@{\hspace{2mm}}l}
{\color{gray}web} & \Web\\
{\color{gray}email} & \EmailOne\\
{\color{gray}email} & \EmailTwo\\
{\color{gray}mobile} & \Mobile\\
\end{tabular}
\vspace*{2mm}
}
\end{flushleft}
\end{minipage}
\rule{74mm}{0mm}\\
\texttt{\fontsize{2.84mm}{3.55mm}\selectfont \GPG} % GPG KEY ID
\end{Spacing}
\clearpage
}
%% Comment out this line when typesetting for final output
\input{instructions}
\end{document}
\section{Introduction}\label{sec:intro}
A \emph{residuated binar} is an algebra ${\mathbf A}=(A,\wedge,\vee,\cdot,\backslash,\slash)$, where $(A,\wedge,\vee)$ is a lattice, $\cdot$ is a binary operation on $A$, and for all $x,y,z\in A$,
$$x\cdot y\leq z \iff x\leq z\slash y \iff y\leq x\backslash z.$$
A \emph{residuated semigroup} is a residuated binar for which $\cdot$ is associative, and a residuated binar possessing an identity element $e$ for $\cdot$ is called \emph{unital}. An expansion of a unital residuated semigroup by a constant designating the identity is called a \emph{residuated lattice} \cite{GJKO}. All of the aforementioned algebras satisfy the distributive laws\footnote{Here and throughout, to reduce the need for parentheses we assume that $\cdot$ has priority over $\backslash,\slash$, which in turn have priority over $\wedge,\vee$. We also write $x\cdot y$ as $xy$.}
\begin{equation}\label{eq:fj}
x (y\vee z) = x y\vee x z \tag*{$(\cdot\vee)$}
\end{equation}
\begin{equation}\label{eq:jf}
(x\vee y) z = x z\vee y z \tag*{$(\vee\cdot)$}
\end{equation}
\begin{equation}\label{eq:lm}
x\backslash (y\wedge z) = x\backslash y\wedge x\backslash z \tag*{$(\backslash\wedge)$}
\end{equation}
\begin{equation}\label{eq:mr}
(x\wedge y)\slash z = x\slash z\wedge y\slash z \tag*{$(\wedge\slash)$}
\end{equation}
\begin{equation}\label{eq:rj}
x\slash (y\vee z) = x\slash y\wedge x\slash z \tag*{$(\slash\vee)$}
\end{equation}
\begin{equation}\label{eq:jl}
(x\vee y)\backslash z = x\backslash z\wedge y\backslash z \tag*{$(\vee\backslash)$}
\end{equation}
However, in general neither lattice distributivity nor any of the equations
\begin{equation}\label{eq:fm}
x (y\wedge z) = x y\wedge x z \tag*{$(\cdot\wedge)$}
\end{equation}
\begin{equation}\label{eq:mf}
(x\wedge y) z = x z\wedge y z \tag*{$(\wedge\cdot)$}
\end{equation}
\begin{equation}\label{eq:lj}
x\backslash (y\vee z) = x\backslash y\vee x\backslash z \tag*{$(\backslash\vee)$}
\end{equation}
\begin{equation}\label{eq:jr}
(x\vee y)\slash z = x\slash z\vee y\slash z \tag*{$(\vee\slash)$}
\end{equation}
\begin{equation}\label{eq:ml}
(x\wedge y)\backslash z = x\backslash z\vee y\backslash z \tag*{$(\wedge\backslash)$}
\end{equation}
\begin{equation}\label{eq:rm}
x\slash (y\wedge z) = x\slash y\vee x\slash z \tag*{$(\slash\wedge)$}
\end{equation}
hold in these algebras.
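These laws can be explored by brute force on finite models. The sketch below builds a small residuated (in fact commutative) binar — the four-element \L{}ukasiewicz chain $\{0,1,2,3\}$ with $x\cdot y=\max(0,x+y-3)$, residuals computed directly from the defining property — and verifies residuation together with the six laws that always hold. On a chain the remaining six laws happen to hold as well, so exhibiting failures requires non-chain models such as those of Section \ref{sec:countermodels}:

```python
from itertools import product

# Brute-force check of residuation and the six guaranteed distributive laws
# on the 4-element Lukasiewicz chain {0,1,2,3} with x.y = max(0, x+y-3).
# The residuals are computed as x\z = max{y : x.y <= z}, z/y = max{x : x.y <= z}.

A = range(4)
join, meet = max, min
mult = lambda x, y: max(0, x + y - 3)
under = lambda x, z: max(y for y in A if mult(x, y) <= z)   # x \ z
over  = lambda z, y: max(x for x in A if mult(x, y) <= z)   # z / y

# Residuation law: x.y <= z  <=>  x <= z/y  <=>  y <= x\z
assert all((mult(x, y) <= z) == (x <= over(z, y)) == (y <= under(x, z))
           for x, y, z in product(A, A, A))

laws = {
    "(.v)":  lambda x, y, z: mult(x, join(y, z)) == join(mult(x, y), mult(x, z)),
    "(v.)":  lambda x, y, z: mult(join(x, y), z) == join(mult(x, z), mult(y, z)),
    "(\\^)": lambda x, y, z: under(x, meet(y, z)) == meet(under(x, y), under(x, z)),
    "(^/)":  lambda x, y, z: over(meet(x, y), z) == meet(over(x, z), over(y, z)),
    "(/v)":  lambda x, y, z: over(x, join(y, z)) == meet(over(x, y), over(x, z)),
    "(v\\)": lambda x, y, z: under(join(x, y), z) == meet(under(x, z), under(y, z)),
}
for name, law in laws.items():
    assert all(law(x, y, z) for x, y, z in product(A, A, A)), name
print("all six guaranteed laws hold")
```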
If $t$ is a term in the language of residuated binars (or residuated semigroups), then the \emph{opposite of $t$} is the term $t^{\op}$ defined recursively as follows. For $x$ a variable, set $x^{\op}=x$, and if $s$ and $t$ are terms then set \mbox{$(s\cdot t)^{\op}=t^{\op}\cdot s^{\op}$}, \mbox{$(s\slash t)^{\op}=t^{\op}\backslash s^{\op}$}, $(s\backslash t)^{\op}=t^{\op}\slash s^{\op}$, $(s\wedge t)^{\op}=t^{\op}\wedge s^{\op}$, and $(s\vee t)^{\op}=t^{\op}\vee s^{\op}$ (and $e^{\op}=e$ in the presence of a multiplicative identity $e$). The opposite of an equation $s=t$ is defined by $(s=t)^{\op}=(s^{\op} = t^{\op})$. \emph{Mirror duality} for residuated binars provides that an equation $\epsilon$ holds in the variety of all residuated binars if and only if $\epsilon^{\op}$ does as well. If $\Sigma\cup\{\epsilon\}$ is a set of equations in the language of residuated binars and $\Sigma^{\op} = \{\sigma^{\op} : \sigma\in\Sigma\}$, then $\Sigma\models\epsilon$ holds in the variety of residuated binars if and only if $\Sigma^{\op}\models \epsilon^{\op}$ holds. Observe that \ref{eq:fm}$^{\op}$, \ref{eq:lj}$^{\op}$, and \ref{eq:ml}$^{\op}$ are respectively \ref{eq:mf}, \ref{eq:jr}, and \ref{eq:rm}.
In the presence of a multiplicative identity $e$, left and right prelinearity
\begin{equation}\label{eq:lp}
e\leq x\backslash y\vee y\backslash x \tag*{$(lp)$}
\end{equation}
\begin{equation}\label{eq:rp}
e\leq x\slash y\vee y\slash x \tag*{$(rp)$},
\end{equation}
have a connection to the six nontrivial distributive laws given above. In particular, \cite[Proposition 6.10]{BT2003} shows that in residuated lattices satisfying $e$-distributivity
\begin{equation}\label{eq:ed}
(x\vee y)\wedge e = (x\wedge e)\vee (y\wedge e),\tag*{$(ed)$}
\end{equation}
the equations \ref{eq:lp}, \ref{eq:ml}, and \ref{eq:lj} are pairwise equivalent, as are the equations \ref{eq:rp}, \ref{eq:rm}, and \ref{eq:jr}. Because \ref{eq:lp} and \ref{eq:rp} axiomatize semilinear residuated lattices (i.e., those that are subdirect products of totally-ordered residuated lattices) under appropriate technical hypotheses (see \cite{BT2003}), this provides one explanation of the well-known fact that all six nontrivial distributive laws hold in semilinear residuated lattices. However, a residuated lattice may satisfy all six nontrivial distributive laws even though it is not semilinear (this is the case, e.g., in lattice-ordered groups).
The dependencies among the six nontrivial distributive laws are more complicated in the absence of a multiplicative identity. Sections \ref{sec:implications} and \ref{sec:countermodels} provide a complete description of the dependencies among the nontrivial distributive laws under the hypothesis of lattice distributivity, both for residuated binars and residuated semigroups. Section \ref{sec:additional properties} provides some additional implications among the distributive laws in unital residuated binars, and in the presence of lattice complements. We conclude in Section \ref{sec:open problems} by proposing some open problems.
\section{Implications among the nontrivial distributive laws}\label{sec:implications}
A residuated binar with a distributive lattice reduct may be associated with its \emph{frame}. The frame of a lattice-distributive residuated binar $\mathbf A$ may be obtained by taking the poset of prime filters of the lattice reduct of $\mathbf A$ and endowing it with a ternary relation $R$ defined by
$$R(F,G,H) \iff F\subseteq G\cdot H,$$
where $G\cdot H = \{xy : x\in G, y\in H\}$ is the complex product of $G$ and $H$. Observe that the ternary relation $R$ on the frame of a residuated binar is antitone in its first coordinate and isotone in its second and third coordinates.
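For a finite distributive lattice the frame can be computed directly. The sketch below (using the \L{}ukasiewicz product on the four-element chain as a convenient toy model) enumerates the prime filters, builds $R$ from the complex product, and checks the antitonicity in the first coordinate noted above:

```python
from itertools import product

# Computing the frame of a small lattice-distributive residuated binar:
# prime filters of the 4-element chain {0,1,2,3}, with R(F,G,H) iff F is
# contained in the complex product G.H, for x.y = max(0, x+y-3).

A = list(range(4))
mult = lambda x, y: max(0, x + y - 3)

def is_filter(F):
    if not F or set(F) == set(A):                          # proper and nonempty
        return False
    up = all(y in F for x in F for y in A if y >= x)       # upward closed
    closed = all(min(x, y) in F for x in F for y in F)     # closed under meet
    return up and closed

def is_prime(F):
    # x v y in F implies x in F or y in F
    return all(max(x, y) not in F or (x in F or y in F)
               for x, y in product(A, A))

subsets = [frozenset(s for s in A if (mask >> s) & 1) for mask in range(16)]
primes = [F for F in subsets if is_filter(F) and is_prime(F)]
# On a chain, the prime filters are exactly the up-sets of nonzero elements.
assert sorted(map(sorted, primes)) == [[1, 2, 3], [2, 3], [3]]

def R(F, G, H):
    return F <= frozenset(mult(x, y) for x in G for y in H)

# R is antitone in its first coordinate (w.r.t. inclusion):
for F, G, H, F2 in product(primes, repeat=4):
    if R(F, G, H) and F2 <= F:
        assert R(F2, G, H)
print("frame computed:", len(primes), "prime filters")
```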
Satisfaction of either of the identities \ref{eq:lj} and \ref{eq:jr} has significant consequences for the frame of a lattice-distributive residuated binar \cite{FP2018}, and the nontrivial distributive laws may be profitably analyzed from the point of view of frames. In fact, for lattice-distributive residuated binars, each of the distributive laws introduced in the previous section may be rendered in terms of an equivalent first-order condition on the corresponding frames by application of ALBA \cite{CP2012}. For instance, the identity \ref{eq:jr} is equivalent to the condition that for all $x,y,p,q,j$,
$$[R(x,j,p) \;\&\; R(y,j,q)]\implies \exists z [x,y\leq z \;\&\; (R(z,j,p) \text{ or } R(z,j,q))].$$
On the other hand, \ref{eq:ml} is equivalent to the condition that for all $x,y,p,q,j$,
$$[R(p,x,j) \;\&\; R(q,y,j)]\implies \exists z [z\leq x,y\;\&\; (R(p,z,j)\text{ or }R(q,z,j))],$$
whereas \ref{eq:lj} is equivalent to the condition that for all $x,y,p,q,j$,
$$[R(x,p,j) \;\&\; R(y,q,j)]\implies \exists z [x,y\leq z\;\&\; (R(z,p,j)\text{ or }R(z,q,j))].$$
\begin{proposition}\label{prop:jr and ml implies lj frame}
Let $\mathbf A$ be a residuated binar with a distributive lattice reduct. If $\mathbf A$ satisfies both \ref{eq:jr} and \ref{eq:ml}, then $\mathbf A$ also satisfies \ref{eq:lj}.
\end{proposition}
\begin{proof}
Suppose that both \ref{eq:jr} and \ref{eq:ml} hold. We use the equivalent frame conditions to verify \ref{eq:lj}, so suppose that $x,y,p,q,j$ are points in the frame of $\mathbf A$ such that $R(x,p,j)$ and $R(y,q,j)$. By the frame condition for \ref{eq:ml} there exists $z'$ with $z'\leq p,q$ and one of $R(x,z',j)$ or $R(y,z',j)$. Suppose first that $R(x,z',j)$ holds. Then $R(x,z',j)$ and $R(y,q,j)$, and by monotonicity and $z'\leq q$ we have $R(x,q,j)$ and $R(y,q,j)$. Using the frame condition for \ref{eq:jr} we obtain $z$ such that $x,y\leq z$ and $R(z,q,j)$. On the other hand, if $R(y,z',j)$ holds then $R(y,z',j)$ and $R(x,p,j)$. Monotonicity and $z'\leq p$ then give $R(y,p,j)$ and $R(x,p,j)$, and by the frame condition for \ref{eq:jr} there exists $z$ with $x,y\leq z$ and $R(z,p,j)$. In either case, there exists $z$ with $x,y\leq z$ and either $R(z,p,j)$ or $R(z,q,j)$, which completes the proof.
\end{proof}
Other results of this kind may be discovered by appealing to equivalent conditions on frames. However, an entirely algebraic treatment is also possible. The next lemma is an important step in this.
\begin{lemma}\label{lem:four variables}
Each of the following gives a pair of identities that are equivalent in residuated binars.
\begin{enumerate}
\item \ref{eq:fm} and $xz\wedge yw\leq (x\vee y)(z\wedge w)$.
\item \ref{eq:mf} and $xz\wedge yw\leq (x\wedge y)(z\vee w)$.
\item \ref{eq:lj} and $(x\vee y)\backslash (z\vee w) \leq x\backslash z \vee y\backslash w$.
\item \ref{eq:jr} and $(z\vee w)\slash (x\vee y) \leq z\slash x\vee w\slash y$.
\item \ref{eq:ml} and $(x\wedge y)\backslash (z\wedge w)\leq x\backslash z\vee y\backslash w$.
\item \ref{eq:rm} and $(z\wedge w)\slash (x\wedge y)\leq z\slash x\vee w\slash y$.
\end{enumerate}
\end{lemma}
\begin{proof}
We prove (1) and (3); (2) and (4) follow by a symmetric argument, and (5) and (6) follow by a proof similar to (3) and (4).
For (1), note that if $xz\wedge yw\leq (x\vee y)(z\wedge w)$ holds then by instantiating $y=x$ we obtain $xz\wedge xw\leq x(z\wedge w)$. The reverse inequality follows from the isotonicity of multiplication, so \ref{eq:fm} holds. Conversely, if \ref{eq:fm} holds then we have $xz\wedge yw\leq (x\vee y)z\wedge (x\vee y)w = (x\vee y)(z\wedge w)$.
For (3), taking $y=x$ in the inequality $(x\vee y)\backslash (z\vee w)\leq x\backslash z\vee y\backslash w$ gives \mbox{$x\backslash (z\vee w)\leq x\backslash z\vee x\backslash w$.} The reverse inequality holds because $\backslash$ is isotone in its numerator, whence \ref{eq:lj} holds. For the converse, note that \ref{eq:lj} implies $(x\vee y)\backslash (z\vee w)=(x\vee y)\backslash z\vee (x\vee y)\backslash w\leq x\backslash z\vee y\backslash w$, where the last step follows because $\backslash$ is antitone in its denominator.
\end{proof}
\begin{theorem}\label{thm:algebraic implications}
Let $\mathbf A$ be a residuated binar with a distributive lattice reduct. Then:
\begin{enumerate}
\item If $\mathbf A$ satisfies both \ref{eq:jr} and \ref{eq:ml}, then $\mathbf A$ also satisfies \ref{eq:lj}.
\item If $\mathbf A$ satisfies both \ref{eq:lj} and \ref{eq:rm}, then $\mathbf A$ also satisfies \ref{eq:jr}.
\item If $\mathbf A$ satisfies both \ref{eq:fm} and \ref{eq:jr}, then $\mathbf A$ also satisfies \ref{eq:rm}.
\item If $\mathbf A$ satisfies both \ref{eq:mf} and \ref{eq:lj}, then $\mathbf A$ also satisfies \ref{eq:ml}.
\item If $\mathbf A$ satisfies both \ref{eq:ml} and \ref{eq:fm}, then $\mathbf A$ also satisfies \ref{eq:mf}.
\item If $\mathbf A$ satisfies both \ref{eq:rm} and \ref{eq:mf}, then $\mathbf A$ also satisfies \ref{eq:fm}.
\end{enumerate}
\end{theorem}
\begin{proof}
We provide proofs for (1) and (5); (2) and (6) follow by mirror duality. The others follow similarly.
For (1), suppose that $u\leq (x\vee y)\backslash (z\vee w)$. Then by residuation we get $x,y\leq x\vee y\leq (z\vee w)\slash u$, and by \ref{eq:jr} we have $x\leq z\slash u\vee w\slash u$ and also $y\leq z\slash u\vee w\slash u$. Observe that $x= x\wedge (z\slash u \vee w\slash u)$ and $y= y\wedge (z\slash u \vee w\slash u)$, and by distributivity we obtain that $x=x_1\vee x_2$ and $y=y_1\vee y_2$, where
$$x_1=x\wedge (z\slash u),$$
$$x_2=x\wedge (w\slash u),$$
$$y_1=y\wedge (z\slash u),$$
$$y_2=y\wedge (w\slash u).$$
Note that
$$x_1\leq z\slash u\implies u\leq x_1\backslash z\leq (x_1\wedge y_2)\backslash z,$$
$$x_2\leq w\slash u\implies u\leq x_2\backslash w\leq (x_2\wedge y_1)\backslash w,$$
$$y_1\leq z\slash u\implies u\leq y_1\backslash z\leq (x_2\wedge y_1)\backslash z,$$
$$y_2\leq w\slash u\implies u\leq y_2\backslash w\leq (x_1\wedge y_2)\backslash w.$$
Hence we get that $u\leq (x_1\wedge y_2)\backslash (z\wedge w)\leq x_1\backslash z\vee y_2\backslash w$ and likewise \mbox{$u\leq (x_2\wedge y_1)\backslash (z\wedge w)\leq x_2\backslash z\vee y_1\backslash w$}. Also, $u\leq x_1\backslash z\leq x_1\backslash z\vee y_1\backslash w$ and $u\leq y_2\backslash w\leq x_2\backslash z\vee y_2\backslash w$. This implies that:
\begin{align*}
u &\leq (x_1\backslash z\vee y_2\backslash w)\wedge (x_2\backslash z\vee y_1\backslash w)\wedge (x_1\backslash z\vee y_1\backslash w)\wedge (x_2\backslash z\vee y_2\backslash w)\\
&= ((x_2\backslash z\wedge x_1\backslash z)\vee y_1\backslash w)\wedge ((x_1\backslash z\wedge x_2\backslash z)\vee y_2\backslash w)\\
&= (x_1\backslash z\wedge x_2\backslash z)\vee (y_1\backslash w\wedge y_2\backslash w)\\
&= (x_1\vee x_2)\backslash z \vee (y_1\vee y_2)\backslash w\\
&= x\backslash z\vee y\backslash w.
\end{align*}
This proves that $(x\vee y)\backslash (z\vee w)\leq x\backslash z\vee y\backslash w$, whence (1) follows by Lemma \ref{lem:four variables}(3).
To prove (5), suppose that $(x\wedge y)(z\vee w)\leq u$. By residuating and \ref{eq:ml}, we obtain $z,w\leq z\vee w\leq (x\wedge y)\backslash u = x\backslash u\vee y\backslash u$. Define
$$z_1=z\wedge (x\backslash u),$$
$$z_2=z\wedge (y\backslash u),$$
$$w_1=w\wedge (x\backslash u),$$
$$w_2=w\wedge (y\backslash u),$$
and note that by the distributivity of the lattice reduct we have $z=z_1\vee z_2$ and $w=w_1\vee w_2$. This provides
$$z_1\leq x\backslash u \implies xz_1\leq u,$$
$$z_2\leq y\backslash u \implies yz_2\leq u,$$
$$w_1\leq x\backslash u \implies xw_1\leq u,$$
$$w_2\leq y\backslash u\implies yw_2\leq u,$$
whence from the isotonicity of multiplication and the middle two items above, we obtain that $y(z_2\wedge w_1)\leq u$ and $x(z_2\wedge w_1)\leq u$. This provides that $(x\vee y)(z_2\wedge w_1)=x(z_2\wedge w_1)\vee y(z_2\wedge w_1)\leq u$, and from the assumption \ref{eq:fm} and Lemma \ref{lem:four variables}(1) we conclude that $xz_2\wedge yw_1\leq u$. Now note that
\begin{align*}
xz\wedge yw &= x(z_1\vee z_2)\wedge y(w_1\vee w_2)\\
&= (xz_1\vee xz_2)\wedge (yw_1\vee yw_2)\\
&= (xz_1\wedge yw_1)\vee (xz_1\wedge yw_2)\vee (xz_2\wedge yw_1)\vee (xz_2\wedge yw_2)\\
&\leq u,
\end{align*}
where the third equality above follows from lattice distributivity. It follows that $xz\wedge yw\leq (x\wedge y)(z\vee w)$, so \ref{eq:mf} follows by Lemma \ref{lem:four variables}(2). This gives (5).
\end{proof}
The implications articulated in Theorem \ref{thm:algebraic implications} are described by the directed graph in Figure \ref{fig:implications}. Each pair of identities given on the left-hand side (respectively, right-hand side) of the graph jointly implies its common successor on the right-hand side (respectively, left-hand side). Note that these consequences are hidden in the special case of $e$-distributive residuated lattices addressed in \cite{BT2003}, where, taken individually, \ref{eq:ml} and \ref{eq:lj} are equivalent, as are \ref{eq:jr} and \ref{eq:rm}.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.4]
\tikzset{vertex/.style = {shape=circle,draw,fill=white,inner sep=1.5pt, minimum size=2em}}
\tikzset{edge/.style = {->,> = latex'}}
\node[vertex] (a) at (0,8) {$\vee\slash$};
\node[vertex] (b) at (0,4) {$\wedge\backslash$};
\node[vertex] (c) at (0,0) {$\cdot\wedge$};
\node[vertex] (d) at (8,8) {$\backslash\vee$};
\node[vertex] (e) at (8,4) {$\slash\wedge$};
\node[vertex] (f) at (8,0) {$\wedge\cdot$};
\draw[edge] (4,7) to (d);
\draw[edge] (4,3) to (e);
\draw[edge] (4,-1) to (f);
\draw[edge] (4,9) to (a);
\draw[edge] (4,5) to (b);
\draw[edge] (4,1) to (c);
\draw (a) to (4,7);
\draw (b) to (4,7);
\draw (a) to (4,3);
\draw (c) to (4,3);
\draw (b) to (4,-1);
\draw (c) to (4,-1);
\draw (d) to (4,9);
\draw (e) to (4,9);
\draw (d) to (4,5);
\draw (f) to (4,5);
\draw (e) to (4,1);
\draw (f) to (4,1);
\end{tikzpicture}
\end{center}
\caption{Dependencies among the nontrivial distributive laws.}
\label{fig:implications}
\end{figure}
\section{The poset of subvarieties}\label{sec:countermodels}
The class of residuated binars with distributive lattice reducts forms a finitely-based variety $\sf RB$, and the implications announced in Theorem \ref{thm:algebraic implications} entail some inclusions among the subvarieties of $\sf RB$ determined by the nontrivial distributive laws. We will show that these are all of the inclusions among such subvarieties, completely describing the subposet of the subvariety lattice of $\sf RB$ whose elements are axiomatized (modulo the theory of $\sf RB$) by any collection of the nontrivial distributive laws. The same analysis holds for residuated semigroups as well.
\begin{figure}\label{fig:lattice reducts}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,fill=white,inner sep=1.5pt}}
\tikzset{edge/.style = {-,> = latex'}}
\node[vertex,label=left:$\top$] (b) at (0,-1) {};
\node[vertex,label=left:$a$] (c) at (-1,-2) {};
\node[vertex,label=right:$b$] (d) at (1,-2) {};
\node[vertex,label=left:$\bot$] (e) at (0,-3) {};
\draw[edge] (b) to (c);
\draw[edge] (b) to (d);
\draw[edge] (c) to (e);
\draw[edge] (d) to (e);
\end{tikzpicture}\hspace{0.25 in}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,fill=white,inner sep=1.5pt}}
\tikzset{edge/.style = {-,> = latex'}}
\node[vertex,label=left:$\top$] (a) at (0,0) {};
\node[vertex,label=left:$c$] (b) at (0,-1) {};
\node[vertex,label=left:$a$] (c) at (-1,-2) {};
\node[vertex,label=right:$b$] (d) at (1,-2) {};
\node[vertex,label=left:$\bot$] (e) at (0,-3) {};
\draw[edge] (a) to (b);
\draw[edge] (b) to (c);
\draw[edge] (b) to (d);
\draw[edge] (c) to (e);
\draw[edge] (d) to (e);
\end{tikzpicture}\hspace{0.25 in}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,fill=white,inner sep=1.5pt}}
\tikzset{edge/.style = {-,> = latex'}}
\node[vertex,label=left:$\top$] (b) at (0,-1) {};
\node[vertex,label=left:$a$] (c) at (-1,-2) {};
\node[vertex,label=right:$b$] (d) at (1,-2) {};
\node[vertex,label=left:$c$] (e) at (0,-3) {};
\node[vertex,label=left:$\bot$] (f) at (0,-4) {};
\draw[edge] (b) to (c);
\draw[edge] (b) to (d);
\draw[edge] (c) to (e);
\draw[edge] (d) to (e);
\draw[edge] (e) to (f);
\end{tikzpicture}
\end{center}
\caption{Labeled Hasse diagrams for the lattice reducts of $\mathbf A_1$, $\mathbf A_2$, $\mathbf A_3$ (left), $\mathbf A_4$, $\mathbf A_5$ (middle) and $\mathbf A_6$ (right).}
\end{figure}
\begin{proposition}\label{prop:no other implications}
Theorem \ref{thm:algebraic implications} gives the only implications among the six nontrivial distributive laws modulo the theory of residuated binars. The same holds for residuated semigroups.
\end{proposition}
\begin{proof}
For each $i\in\{1,2,3,4,5,6\}$ we define a residuated binar $\mathbf A_i$. The lattice reduct of each $\mathbf A_i$ is given in Figure \ref{fig:lattice reducts}. We provide operation tables for $\cdot$ in each $\mathbf A_i$ below; the operation tables for $\backslash$ and $\slash$ are uniquely determined by these in each case. For $\mathbf A_1$, $\mathbf A_2$, and $\mathbf A_3$:
$$\begin{array}{c|cccc}
\cdot&\bot&a&b&\top\\\hline
\bot&\bot&\bot&\bot&\bot\\
a&\bot&\bot&\bot&\bot\\
b&\bot&\bot&\top&\top\\
\top&\bot&\bot&\top&\top\\
\end{array}
\qquad
\begin{array}{c|cccc}
\cdot&\bot&a&b&\top\\\hline
\bot&\bot&\bot&\bot&\bot\\
a&\bot&\bot&\bot&\bot\\
b&\bot&a&b&\top\\
\top&\bot&a&b&\top\\
\end{array}
\qquad
\begin{array}{c|cccc}
\cdot&\bot&a&b&\top\\\hline
\bot&\bot&\bot&\bot&\bot\\
a&\bot&\bot&a&a\\
b&\bot&\bot&b&b\\
\top&\bot&\bot&\top&\top\\
\end{array}$$
For $\mathbf A_4$, $\mathbf A_5$, and $\mathbf A_6$:
$$\begin{array}{c|ccccc}
\cdot&\bot&a&b&c&\top\\\hline
\bot&\bot&\bot&\bot&\bot&\bot\\
a&\bot&\top&\bot&\top&\top\\
b&\bot&b&\bot&b&b\\
c&\bot&\top&\bot&\top&\top\\
\top&\bot&\top&\bot&\top&\top\\
\end{array}
\hspace{0.08 in}
\begin{array}{c|ccccc}
\cdot&\bot&a&b&c&\top\\\hline
\bot&\bot&\bot&\bot&\bot&\bot\\
a&\bot&\top&b&\top&\top\\
b&\bot&\bot&\bot&\bot&\bot\\
c&\bot&\top&b&\top&\top\\
\top&\bot&\top&b&\top&\top\\
\end{array}
\hspace{0.08 in}
\begin{array}{c|ccccc}
\cdot&\bot&a&b&c&\top\\\hline
\bot&\bot&\bot&\bot&\bot&\bot\\
a&\bot&a&\bot&\bot&a\\
b&\bot&\bot&b&\bot&b\\
c&\bot&\bot&\bot&\bot&\bot\\
\top&\bot&a&b&\bot&\top\\
\end{array}$$
Direct calculation verifies that:
\begin{itemize}
\item $\mathbf A_1\models$ \ref{eq:rm}, \ref{eq:ml}, \ref{eq:mf}, \ref{eq:fm} and $\mathbf A_1\not\models$ \ref{eq:lj}, \ref{eq:jr}.
\item $\mathbf A_2\models$ \ref{eq:lj}, \ref{eq:ml}, \ref{eq:mf}, \ref{eq:fm} and $\mathbf A_2\not\models$ \ref{eq:jr}, \ref{eq:rm}.
\item $\mathbf A_3\models$ \ref{eq:jr}, \ref{eq:rm}, \ref{eq:mf}, \ref{eq:fm} and $\mathbf A_3\not\models$ \ref{eq:lj}, \ref{eq:ml}.
\item $\mathbf A_4\models$ \ref{eq:jr}, \ref{eq:lj}, \ref{eq:rm}, \ref{eq:fm} and $\mathbf A_4\not\models$ \ref{eq:ml}, \ref{eq:mf}.
\item $\mathbf A_5\models$ \ref{eq:jr}, \ref{eq:lj}, \ref{eq:ml}, \ref{eq:mf} and $\mathbf A_5\not\models $ \ref{eq:rm}, \ref{eq:fm}.
\item $\mathbf A_6\models$ \ref{eq:jr}, \ref{eq:lj}, \ref{eq:rm}, \ref{eq:ml} and $\mathbf A_6\not\models $ \ref{eq:fm}, \ref{eq:mf}.
\end{itemize}
Let $\epsilon\in\{$\ref{eq:jr}, \ref{eq:lj}, \ref{eq:rm}, \ref{eq:ml}, \ref{eq:mf}, \ref{eq:fm}$\}$. Then there exists a unique implication listed in Theorem \ref{thm:algebraic implications} having $\epsilon$ as its consequent. Let $\epsilon_1,\epsilon_2$ be the identities in the antecedent of the aforementioned implication. Then the above countermodels show that if $\epsilon\notin\Sigma\subseteq\{$\ref{eq:jr}, \ref{eq:lj}, \ref{eq:rm}, \ref{eq:ml}, \ref{eq:mf}, \ref{eq:fm}$\}$ and $\epsilon_1\notin\Sigma$ or $\epsilon_2\notin\Sigma$, then $\epsilon$ is not entailed by $\Sigma$.
Note that each $\mathbf A_i$, $i\in\{1,2,3,4,5,6\}$, is an associative residuated binar. The result therefore holds for residuated semigroups as well.
\end{proof}
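The ``direct calculation'' invoked in the proof above is a finite, mechanical check, so it can be replicated by machine. The following Python script (ours, not part of the paper) rebuilds $\mathbf A_1$--$\mathbf A_3$ on the four-element diamond lattice, recomputes the residuals from the multiplication tables, verifies the residuation equivalences, and reports which distributive laws fail. The law names are ad hoc labels for the six shapes (e.g.\ \texttt{join\_over} for $(x\vee y)\slash z=x\slash z\vee y\slash z$, \texttt{under\_join} for $x\backslash(y\vee z)=x\backslash y\vee x\backslash z$); matching these shapes to the tags \ref{eq:jr}, etc., is our reading of the bullet list.

```python
from itertools import product

# Elements of the four-element "diamond" lattice: bot < a, b < top, a and b incomparable.
BOT, A, B, TOP = range(4)
E = (BOT, A, B, TOP)

def leq(x, y):
    return x == y or x == BOT or y == TOP

def join(x, y):
    ubs = [z for z in E if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in E if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

# Multiplication tables from the paper (rows = left factor, in the order bot, a, b, top).
MUL = {
    "A1": ((0, 0, 0, 0), (0, 0, 0, 0), (0, 0, 3, 3), (0, 0, 3, 3)),
    "A2": ((0, 0, 0, 0), (0, 0, 0, 0), (0, 1, 2, 3), (0, 1, 2, 3)),
    "A3": ((0, 0, 0, 0), (0, 0, 1, 1), (0, 0, 2, 2), (0, 0, 3, 3)),
}

def greatest(s):
    """Greatest element of a subset of E, or None if no greatest element exists."""
    tops = [u for u in s if all(leq(v, u) for v in s)]
    return tops[0] if tops else None

def residuals(mul):
    r"""Compute x\z and z/x as the largest solutions of x.u <= z and u.x <= z,
    checking the residuation equivalences on the way."""
    under = {(x, z): greatest([u for u in E if leq(mul[x][u], z)]) for x in E for z in E}
    over = {(z, x): greatest([u for u in E if leq(mul[u][x], z)]) for z in E for x in E}
    for x, u, z in product(E, repeat=3):
        assert leq(mul[x][u], z) == leq(u, under[(x, z)])  # x.u <= z  iff  u <= x\z
        assert leq(mul[u][x], z) == leq(u, over[(z, x)])   # u.x <= z  iff  u <= z/x
    return under, over

def law_status(mul):
    """Decide which of the six nontrivial distributive-law shapes hold."""
    under, over = residuals(mul)
    laws = {
        "join_over":  lambda x, y, z: over[(join(x, y), z)] == join(over[(x, z)], over[(y, z)]),
        "over_meet":  lambda x, y, z: over[(x, meet(y, z))] == join(over[(x, y)], over[(x, z)]),
        "under_join": lambda x, y, z: under[(x, join(y, z))] == join(under[(x, y)], under[(x, z)]),
        "meet_under": lambda x, y, z: under[(meet(x, y), z)] == join(under[(x, z)], under[(y, z)]),
        "mul_meet":   lambda x, y, z: mul[x][meet(y, z)] == meet(mul[x][y], mul[x][z]),
        "meet_mul":   lambda x, y, z: mul[meet(x, y)][z] == meet(mul[x][z], mul[y][z]),
    }
    return {name: all(f(*t) for t in product(E, repeat=3)) for name, f in laws.items()}

for name, mul in MUL.items():
    failed = sorted(k for k, ok in law_status(mul).items() if not ok)
    print(name, "fails:", failed)
```

Running the script prints `A1 fails: ['join_over', 'under_join']`, `A2 fails: ['join_over', 'over_meet']`, and `A3 fails: ['meet_under', 'under_join']`, in agreement with the bullet list; the same loop also confirms (by the asserts) that each table really is residuated, and a one-line check shows each $\mathbf A_i$ is associative, as used below.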
The left-hand side of Figure \ref{fig:subvariety poset} gives the Hasse diagram of the poset of subvarieties of $\sf RB$ determined by the six nontrivial distributive laws. The coatoms in this diagram are subvarieties axiomatized modulo $\sf RB$ by a single nontrivial distributive law, and the atoms are subvarieties axiomatized by one of the four-element subsets of $\{$\ref{eq:jr}, \ref{eq:lj}, \ref{eq:rm}, \ref{eq:ml}, \ref{eq:mf}, \ref{eq:fm}$\}$ satisfied in one of the models $\mathbf A_i$ given in the proof of Proposition \ref{prop:no other implications}.
The meets in this diagram correspond to intersection of subvarieties, but in general the joins do not correspond to joins in the lattice of subvarieties.
The same diagram describes the corresponding subvariety poset for residuated semigroups since the models ${\mathbf A}_i$, $i\in\{1,2,3,4,5,6\}$, are associative.
When $\cdot$ is commutative in a residuated binar $\mathbf A$, the two residuals satisfy $x\backslash y = y\slash x$ for all $x,y\in A$ and therefore $\backslash$ and $\slash$ coincide. In this event, \ref{eq:lj} is equivalent to \ref{eq:jr}, \ref{eq:ml} is equivalent to \ref{eq:rm}, and \ref{eq:fm} is equivalent to \ref{eq:mf}. The poset of subvarieties axiomatized by the three pairwise independent nontrivial distributive laws is pictured on the right-hand side of Figure \ref{fig:subvariety poset}. The correctness of this diagram can be verified by observing that the models ${\mathbf A}_1$ and ${\mathbf A}_6$ are commutative. Since they are also associative, the same diagram describes the subvariety poset for commutative residuated semigroups.
\begin{figure}\label{fig:subvariety poset}
\begin{center}
\begin{tikzpicture}[xscale=.5, yscale = .5,
every node/.style={circle, draw, fill=white, inner sep=1.5pt},
t/.style={rectangle,draw=white,fill=white,inner sep=0pt},
g/.style={draw,fill=white,inner sep=1.5pt}
]
\draw(6,0)node{}--(1,2)node{}--(0,4)node{}--(1,6)node{}--(2,8)
--(3,6)node{}--(4,4)node{}--(5,2)node{}--(6,4)node{}--(7,6)node{}
--(8,8)--(9,6)node{}--(10,4)node{}--(11,2)node{}--(6,0)node{}
--(3,2)node{}--(2,4)node{}--(1,6)node{}--(0,8)--(3,5)node{}
--(3,2)node{}--(4,4)node{}--(5,6)node{}--(6,8)--(7,6)node{}
--(8,4)node{}--(9,2)node{}--(10,4)node{}--(11,6)node{}--(10,8)
--(7,5)node{}--(7,2)node{}--(6,0)node{}--(5,2)node{}--(5,5)node{}
--(2,8)node{\scriptsize$\wedge\!\backslash$}--(5,10)node{}--(4,8)--(3,6)node{}--(2,4)node{}
--(1,2)node{}--(7,5)node[g]{}--(4,8)node{\scriptsize$\backslash\!\vee$}--(5,6)node{}--(6,4)node{}
--(7,2)node{}--(8,4)node{}--(9,6)node{}--(10,8)node{\scriptsize$\cdot\wedge$}--(5,10)[]node{}
--(8,8)node{\scriptsize$\slash\!\wedge$}--(5,5)node[g]{}--(11,2)node{}(3,5)node{}--(9,2)node{}
--(6,0)node{}(3,5)node[g]{}--(6,8)node{\scriptsize$\vee\!\slash$}--(5,10)node{}--(0,8)node{\scriptsize$\wedge\cdot$}
(11,2)--(20,0.5)node{}--(17,-2.5)node{}--(14,0.5)node{}--(14,3.5)node{}
--(17,6.5)node{}--(20,3.5)node{}--(20,0.5)node{}--(17,3.5)node{}--(14,0.5)node{}
(5,5)--(17,3.5)--(17,6.5)--(5,10)
(6,0)--(17,-2.5)
(5,2)--(14,0.5)
(11,6)--(20,3.5)
(5,6)--(14,3.5)
(0,4)--(11,5.9)
(0,7.8)--(11,6)
(0,4)--(11,2)
(5,10.7)node[t]{$\mathsf{RB}$}
(17,7.2)node[t]{$\mathsf{CRB}$};
\end{tikzpicture}
\end{center}
\caption{The poset of subvarieties determined by the nontrivial distributive laws in varieties of residuated binars ${\sf RB}$ and commutative residuated binars ${\sf CRB}$.}
\end{figure}
\section{Identity elements, complements, and prelinearity}\label{sec:additional properties}
We say that a residuated binar is \emph{complemented} if its lattice reduct is complemented, and \emph{Boolean} if its lattice reduct is a Boolean lattice. A unital residuated binar is called \emph{integral} if it satisfies the identity $x\leq e$, where $e$ is the multiplicative identity.\footnote{This usage of \emph{integral} is typical in the study of residuated lattices, and we caution that it conflicts with the common usage in the theory of relation algebras.}
Boolean (unital) residuated binars are called $(u)r$-algebras in \cite{JJR1995}.
Note that if $\cdot$ and $\wedge$ coincide in a residuated binar $\mathbf A$, then $\mathbf A$ is term-equivalent to a Brouwerian algebra (i.e., to the bottom-free reduct of a Heyting algebra). If additionally $\mathbf A$ is a Boolean residuated binar, then $\mathbf A$ is (term-equivalent to) a Boolean algebra.
The presence of complements and an identity element in a residuated binar can have a profound impact on whether it satisfies any of the six nontrivial distributive laws; a stark example is provided by the following lemma.
\begin{lemma}\label{lem:integral implies Boolean}
Let ${\bf A}$ be a unital complemented residuated binar. If ${\bf A}$ is integral, then $\wedge$ and $\cdot$ coincide.
\end{lemma}
\begin{proof}
Since $\mathbf A$ is integral, we have $x\cdot y \leq x\wedge y$ for all $x,y\in A$. In particular, $x\cdot x'\leq x\wedge x' = \bot$ for any $x\in A$, where $x'$ is a complement of $x$. On the other hand, since the identity element $e$ is the greatest element of ${\bf A}$, we also have $x\vee x'=e$ for any $x\in A$. Multiplying by $x$ and using \ref{eq:fj}, we obtain $x=x\cdot e = x\cdot(x\vee x') = x^2 \vee x\cdot x'= x^2\vee\bot = x^2$. Thus ${\bf A}$ is idempotent, whence for any $x,y\in A$, $x\wedge y =(x\wedge y)\cdot (x\wedge y)\leq x\cdot y\leq x\wedge y$, i.e., $x\cdot y = x\wedge y$.
\end{proof}
Thus the only complemented integral residuated binars are Boolean algebras, which satisfy all six nontrivial distributive laws as well as lattice distributivity. Satisfaction of nontrivial distributive laws also often forces integrality in this setting.
\begin{lemma}\label{lem:integral}
Let $\mathbf A$ be a unital residuated binar. If $e$ has a complement $e'$ and $\mathbf A$ satisfies any one of the distributive laws \ref{eq:fm}, \ref{eq:mf}, \ref{eq:ml}, \ref{eq:rm}, then $\mathbf A$ is integral.
\end{lemma}
\begin{proof}
We prove the result for \ref{eq:fm} and \ref{eq:ml}. The result follows for \ref{eq:mf} and \ref{eq:rm} by a symmetric argument.
First, suppose that $\mathbf A$ satisfies \ref{eq:fm}. Then:
\begin{align*}
e' &= e\cdot e'\\
&\leq \top\cdot e'\\
&= \top\cdot e' \wedge \top\\
&= \top\cdot (e'\wedge e)\\
&= \top\cdot\bot\\
&= \bot
\end{align*}
where the last equality uses the identity $x\cdot\bot=\bot$, which holds in all residuated binars. It follows that $e=e\vee\bot=e\vee e'=\top$.
Second, suppose that $\mathbf A$ satisfies \ref{eq:ml}. Note that:
\begin{align*}
\top &= \bot\backslash\bot\\
&= (e\wedge e')\backslash \bot\\
&= (e\backslash\bot)\vee (e'\backslash\bot)\\
&= \bot\vee (e'\backslash\bot)\\
&= e'\backslash\bot,
\end{align*}
giving $\top\leq e'\backslash\bot$, and by residuation $e'\cdot\top\leq\bot$. As $e\leq\top$ and $\cdot$ is isotone, we get $e'\cdot e\leq e'\cdot\top\leq\bot$. Therefore $e'\leq\bot$, so $e'=\bot$, yielding again $e=\top$ and completing the proof.
\end{proof}
Combining the previous two lemmas gives the following result.
\begin{corollary}\label{cor:complemented to Boolean}
Let $\mathbf A$ be a complemented unital residuated binar. If $\mathbf A$ satisfies any one of the distributive laws \ref{eq:fm}, \ref{eq:mf}, \ref{eq:ml}, \ref{eq:rm}, then $\mathbf A$ is a Boolean algebra.
\end{corollary}
\begin{proof}
Since $\mathbf A$ is complemented, $e$ has a complement. Lemma \ref{lem:integral} then gives that $\mathbf A$ is integral, and so by Lemma \ref{lem:integral implies Boolean} it follows that $\mathbf A$ is a Boolean algebra.
\end{proof}
\begin{lemma}
Let $\mathbf A$ be a unital Boolean residuated binar. If $\mathbf A$ satisfies any one of the distributive laws \ref{eq:fm}, \ref{eq:mf}, \ref{eq:lj}, \ref{eq:jr}, \ref{eq:ml}, or \ref{eq:rm}, then $\mathbf A$ is integral, and hence is a Boolean algebra.
\end{lemma}
\begin{proof}
Corollary \ref{cor:complemented to Boolean} settles the claim if $\mathbf A$ satisfies any of \ref{eq:fm}, \ref{eq:mf}, \ref{eq:ml}, or \ref{eq:rm}. We therefore prove the claim for $\mathbf A$ satisfying \ref{eq:lj}; it will follow if $\mathbf A$ satisfies \ref{eq:jr} by a symmetric argument. Suppose that $\mathbf A$ satisfies \ref{eq:lj}, and note that $e\leq\top$ implies $\top\backslash e'\leq e\backslash e'=e'$. By \ref{eq:lj} and the isotonicity of $\backslash$ in its numerator, we have:
\begin{align*}
\top &= \top\backslash\top\\
&= \top\backslash (e\vee e')\\
&= \top\backslash e\vee \top\backslash e'\\
&\leq \top\backslash e\vee e'.
\end{align*}
Hence $\top\backslash e\vee e' = \top$, so $(\top\backslash e)'\wedge e =\bot$. Because $\wedge$ has a residual $\to$ in any Boolean residuated binar, we get $e\leq(\top\backslash e)'\to \bot=(\top\backslash e)''=\top\backslash e$. By residuating with respect to $\cdot$, we obtain that $\top\leq e$, and hence $\top = e$.
\end{proof}
\begin{corollary}
In a unital Boolean residuated binar each of the identities \ref{eq:fm}, \ref{eq:mf}, \ref{eq:lj}, \ref{eq:jr}, \ref{eq:ml}, and \ref{eq:rm} is logically equivalent to the other five.
\end{corollary}
The two prelinearity equations \ref{eq:lp} and \ref{eq:rp} are not expressible in the absence of a multiplicative identity $e$, but for unital residuated binars they enjoy a connection to the nontrivial distributive laws even in the absence of associativity. In particular, inspection of the proofs offered in \cite{BT2003} verifies that in a unital residuated binar satisfying
$$(x\vee y)\wedge e = (x\wedge e)\vee (y\wedge e),$$
each of \ref{eq:rm} and \ref{eq:jr} implies \ref{eq:lp}, and each of \ref{eq:ml} and \ref{eq:lj} implies \ref{eq:rp}. Without associativity, the converse implications fail. To see this, we may define a five-element residuated binar $\mathbf A_7$ whose lattice reduct is pictured in Figure \ref{fig:prelinearity}. The multiplication $\cdot$ on $\mathbf A_7$ is given in the following table:
$$\begin{array}{c|ccccc}
\cdot&\bot&a&b&e&\top\\\hline
\bot&\bot&\bot&\bot&\bot&\bot\\
a&\bot&a&\bot&a&e\\
b&\bot&\bot&b&b&\top\\
e&\bot&a&b&e&\top\\
\top&\bot&a&\top&\top&\top\\
\end{array}$$
The residuals $\backslash$ and $\slash$ are determined uniquely by the above table as well, and with these operations we have $\mathbf A_7\models$ \ref{eq:lp}, \ref{eq:rp}, but each of \ref{eq:rm}, \ref{eq:jr}, \ref{eq:ml}, and \ref{eq:lj} fail in $\mathbf A_7$. Note also that $\mathbf A_7\not\models$ \ref{eq:fm}, \ref{eq:mf}, whence prelinearity does not entail either of the latter distributive laws.
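As with the earlier countermodels, the claims about $\mathbf A_7$ amount to finite checks. The sketch below (ours, not part of the paper; the shape-based law names are ad hoc labels as before) encodes the five-element lattice $\bot<a,b<e<\top$ and the table above, verifies that $e$ is a two-sided multiplicative identity and that the binar is residuated, and confirms that all six nontrivial distributive-law shapes fail. The prelinearity identities \ref{eq:lp} and \ref{eq:rp} themselves are not re-checked here, since their exact form is fixed earlier in the paper.

```python
from itertools import product

# Five-element lattice of A7: bot < a, b < e < top, with a and b incomparable.
BOT, A, B, EE, TOP = range(5)
ELEMS = (BOT, A, B, EE, TOP)
ORDER = ({(x, x) for x in ELEMS} | {(BOT, y) for y in ELEMS}
         | {(x, TOP) for x in ELEMS} | {(A, EE), (B, EE)})

def leq(x, y):
    return (x, y) in ORDER

def join(x, y):
    ubs = [z for z in ELEMS if leq(x, z) and leq(y, z)]
    return next(z for z in ubs if all(leq(z, w) for w in ubs))

def meet(x, y):
    lbs = [z for z in ELEMS if leq(z, x) and leq(z, y)]
    return next(z for z in lbs if all(leq(w, z) for w in lbs))

# Multiplication table of A7 (rows = left factor, order: bot, a, b, e, top).
MUL7 = ((0, 0, 0, 0, 0),
        (0, 1, 0, 1, 3),
        (0, 0, 2, 2, 4),
        (0, 1, 2, 3, 4),
        (0, 1, 4, 4, 4))

def greatest(s):
    tops = [u for u in s if all(leq(v, u) for v in s)]
    return tops[0] if tops else None

# Residuals and the residuation equivalences.
under = {(x, z): greatest([u for u in ELEMS if leq(MUL7[x][u], z)]) for x in ELEMS for z in ELEMS}
over = {(z, x): greatest([u for u in ELEMS if leq(MUL7[u][x], z)]) for z in ELEMS for x in ELEMS}
for x, u, z in product(ELEMS, repeat=3):
    assert leq(MUL7[x][u], z) == leq(u, under[(x, z)])  # residuation for \
    assert leq(MUL7[u][x], z) == leq(u, over[(z, x)])   # residuation for /

# e is a two-sided multiplicative identity.
assert all(MUL7[EE][x] == x and MUL7[x][EE] == x for x in ELEMS)

laws = {
    "join_over":  lambda x, y, z: over[(join(x, y), z)] == join(over[(x, z)], over[(y, z)]),
    "over_meet":  lambda x, y, z: over[(x, meet(y, z))] == join(over[(x, y)], over[(x, z)]),
    "under_join": lambda x, y, z: under[(x, join(y, z))] == join(under[(x, y)], under[(x, z)]),
    "meet_under": lambda x, y, z: under[(meet(x, y), z)] == join(under[(x, z)], under[(y, z)]),
    "mul_meet":   lambda x, y, z: MUL7[x][meet(y, z)] == meet(MUL7[x][y], MUL7[x][z]),
    "meet_mul":   lambda x, y, z: MUL7[meet(x, y)][z] == meet(MUL7[x][z], MUL7[y][z]),
}
failing = sorted(n for n, f in laws.items()
                 if not all(f(*t) for t in product(ELEMS, repeat=3)))
print("A7 fails all six shapes:", failing)
```

Running the script confirms that every one of the six shapes has a counterexample in $\mathbf A_7$, so the only properties left to check by hand are the two prelinearity identities.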
\begin{figure}
\begin{center}
\begin{tikzpicture}
\tikzset{vertex/.style = {shape=circle,draw,fill=white,inner sep=1.5pt}}
\tikzset{edge/.style = {-,> = latex'}}
\node[vertex,label=left:$\top$] (a) at (0,0) {};
\node[vertex,label=left:$e$] (b) at (0,-1) {};
\node[vertex,label=left:$a$] (c) at (-1,-2) {};
\node[vertex,label=right:$b$] (d) at (1,-2) {};
\node[vertex,label=left:$\bot$] (e) at (0,-3) {};
\draw[edge] (a) to (b);
\draw[edge] (b) to (c);
\draw[edge] (b) to (d);
\draw[edge] (c) to (e);
\draw[edge] (d) to (e);
\end{tikzpicture}
\end{center}
\caption{Hasse diagram for the lattice reduct of $\mathbf A_7$.}
\label{fig:prelinearity}
\end{figure}
\section{Open problems}\label{sec:open problems}
Lattice distributivity is a key ingredient in the known proofs of Theorem \ref{thm:algebraic implications}, whether purely algebraic or by equivalent frame conditions. We do not know whether any of the implications announced hold in all residuated binars (without assuming lattice distributivity), nor do we know whether any of these implications fail in this more general setting.
When present, a multiplicative identity element plays a decisive role in shaping the connection between the nontrivial distributive laws. Known characterizations of when a residuated binar may be embedded in a unital residuated binar crucially involve terms of the form $x\backslash x$ and $x\slash x$ (see \cite{B1999,JJR1995}), and we conjecture that conditions involving terms of this form may provide a more satisfying account of the role of a multiplicative identity in this context. In particular, it would be interesting to identify analogues of prelinearity in the non-unital setting and explicate their connection to the nontrivial distributive laws and semilinearity.
\bibliographystyle{plain}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 1,877 |
Eugène Marie Rendu (10 January 1824, Paris – 27 February 1902, Paris) was a French politician, official, and writer of a clerical orientation, and a figure in education.
Biography
He was the son of the educator Ambroise Rendu. He received an academic degree in literature, after which he undertook a long educational journey through Italy. From 1848 he contributed to Lacordaire's newspaper l'Ère nouvelle. He then joined the Ministry of Education, where he took part in drafting the Education Law of 1850. In 1851 he became an inspector of primary education, and in 1857 head of the department of primary education in the ministry. In 1860 he was appointed inspector-general of public instruction. Under the Second Empire he made unsuccessful attempts to be elected to parliament; eventually he served as a deputy for the department of Seine-et-Oise from 20 February 1876 to 25 June 1877, aligning himself with the right. He failed to win re-election in 1877, and again sought a seat in 1885 and then in 1889.
He was known as an ardent supporter of universal and compulsory schooling, but an opponent of secular schools and an advocate of church education. Works of his authorship: «Manuel de l'enseignement primaire» (many editions), «Sur l'obligation de l'enseignement» (1840), «Conditions de la paix dans les Etats romains» (1849), «Commentaire théorique et administrant de la loi sur l'enseignement» (1850), «La souveraineté pontificale et l'Italie» (1862), «Guide des écoles primaires» (1861), «L'instruction primaire devant l'assemblée» (1873).
Notes
Bibliography
Dictionnaire des parlementaires français de 1789 à 1889 (Adolphe Robert et Gaston Cougny).
External links
Article in La Grande Encyclopédie
Politicians of France
French writers
Alumni of the École Nationale des Chartes
"redpajama_set_name": "RedPajamaWikipedia"
} | 4,472 |
Three majestic lamp decorations can fit your majestic bathroom vanity. As this set consists of three masterfully crafted light bulbs, you'll get a majestic bathroom.
"redpajama_set_name": "RedPajamaC4"
} | 9,617 |
Hammenhög is a locality in the Swedish municipality of Simrishamn, situated in Skåne County. It has nearly 908 inhabitants.
Physical geography
It lies on the plain of Skåne in the Österlen area.
Attractions
In Vallby stands Glimmingehus, the best-preserved medieval castle in Scandinavia.
External links
Hammenhog
"redpajama_set_name": "RedPajamaWikipedia"
} | 9,381 |
13 more COVID-19 deaths and 309 more cases confirmed in Ontario
Kayla Goodfield Multi-Platform Writer, CTV News Toronto
Published Monday, April 6, 2020 10:31AM EDT Last Updated Monday, April 6, 2020 4:30PM EDT
TORONTO -- Ontario health officials have confirmed 309 new cases of COVID-19 in the province, including 13 more deaths.
The new patients announced on Monday morning bring the total number of confirmed cases of the novel coronavirus in Ontario to 4,347, including 132 deceased patients.
Nine of the deaths were between the ages of 40 and 59, 48 of them were between the ages of 60 and 79 and 75 of them were 80 years of age or older.
Speaking at a news conference on Monday afternoon, Ontario's Associate Chief Medical Officer of Health Dr. Barbara Yaffe said that 589 infected patients are currently in hospital. Of those patients, 216 of them are being treated in an intensive care unit and 160 of those 216 patients have been placed on a ventilator to assist with breathing.
Of all the cases in the province, 451 of them are among healthcare workers – 10.4 per cent.
There are currently an additional 329 people under investigation for the virus. That number is significantly down from last week when thousands of people were awaiting tests results. Health officials across the province have been working to clear a backlog of tests since then.
"We have, as you know, basically gotten rid of the backlog," Yaffe said. "We have tested almost 79, 000 people in Ontario. In the last 24 hours, we've processed about 3,500 people's tests."
Last month, Yaffe said that the number of daily tests is expected to rise, with a goal of 19,000 tests a day by mid-April.
Quick facts on all COVID-19 patients in Ontario:
12.4 per cent of all patients have been hospitalized at one point
46 outbreaks have been reported in long-term care homes in the province
47.3 per cent of all patients in the province are male and 52.2 per cent are female – 24 cases did not specify male or female gender
2.5 per cent of all patients are 19 years of age or younger
26.8 per cent of all patients are between the ages of 20 and 39
10.5 per cent of all patients are 80 years of age or older
Public health units in the Greater Toronto Area account for 51.9 per cent of all cases in the province
19.8 per cent of all patients had travelled in the 14 days prior to becoming ill
13 per cent of all patients had contact with a previously confirmed case
19.3 per cent of all patients had community exposure
47.9 per cent of all patients had exposure information listed as pending
What to do if you think you have symptoms of COVID-19
The number of resolved cases in the province currently sits at 1,624.
To date, more than 78,000 people have been tested for the virus across the province.
Daily breakdown: COVID-19 outbreak in Ontario
There are no specific treatments for the virus and there is no vaccine that protects against it.
Symptoms of the virus, which can include fever, cough and shortness of breath, are similar to other respiratory infections.
The Ontario government's website advises those experiencing symptoms of COVID-19 to contact their primary health care provider or Telehealth Ontario.
Health minister suggests calling doctor with COVID-19 concerns
Ontario Health Minister Christine Elliott is recommending that residents who have COVID-19-related health questions to contact their family physician before calling Telehealth Ontario.
The minister's comments came during a news conference Monday afternoon when she was asked about residents who said they had been waiting two to three days to hear from a nurse after calling the phone line.
"The volume has decreased somewhat but waiting three days is not acceptable," Elliott said. "We want to get that down to 24 hours."
Elliott said that for people who don't want to wait to speak to a health professional via Telehealth, they should reach out to their doctor. Some family physicians have the ability to meet with patients virtually, Elliott said, and will be able to provide advice based on that patient's medical history.
"They can now give that information to them in a much more timely manner."
Speaking with reporters on Monday afternoon, Ontario's Chief Medical Officer of Health Dr. David Williams echoed that sentiment and said that services such as Telehealth are still experiencing large volumes of calls.
"They would be even better prepared because they would have the chart right in front of them. That's why I think the minister said at this time as well call your family physician. They are ready to receive those calls."
Telehealth is still available for people who have questions about COVID-19 or who may not have a family physician.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,038 |
\section{Introduction}
\noindent
Although popular within the functional renormalization group (FRG) community,
the standard formulation
of the FRG approach
\cite{Wet1,Wet2} (for recent reviews of the method see, for example, \cite{DCEMPTW})
suffers from a very serious gauge dependence problem
of the effective average action, both within perturbation theory \cite{LSh}
and at the non-perturbative level,
where it is found as a solution to the flow equation; this makes
impossible any physical interpretation of results obtained in gauge theories with
the help of the effective average action \cite{Lav-2020,Lav-yad}.
Recently, alternative methods have been proposed \cite{AMNS-1} and
studied \cite{Lav-alt} in comparison with \cite{LSh}, in which the regulators, being
essential tools of the FRG, are considered as sources for composite operators.
This differs from the standard introduction of external sources
for composite operators
\cite{CJT} (generalizations to the case of gauge theories
have been proposed in \cite{LO-1989,Lav-tmph,LOR-jmp}),
where the sources are ordinary functions of space-time coordinates,
whereas the regulators
are, in general, differential operators \cite{Reuter}. In contrast with the standard FRG \cite{Wet1,Wet2},
the alternative methods \cite{AMNS-1,Lav-alt} lead to 2PI effective average
actions which have good properties within perturbation theory. Unfortunately,
they remain unacceptable at the non-perturbative level, because solutions to the corresponding
flow equations depend on the gauge at any value of the IR parameter $k$, including the fixed
points \cite{Lav-alt}.
In the present paper we extend the reformulation of the FRG proposed in
\cite{LSh} for Yang-Mills fields to the case of gravity theories and
study the basic properties of the corresponding flow equation.
The paper is organized as follows.
In Sec. 2 the properties of the effective action on an arbitrary background metric
for gravity theories within
the standard quantization scheme \cite{FP} are presented.
In Sec. 3 the gauge dependence problem
of the 2PI effective average action for quantum gravity within perturbation
theory is discussed.
In Sec. 4 the derivation of an alternative flow equation for the
effective action with composite operators and the study of its $k$-dependence are given.
In Sec. 5 the gauge dependence of the alternative flow equation
is investigated.
Finally, in Sec. 6 the results obtained in the paper are discussed.
The DeWitt's condensed notations \cite{DeWitt} are used.
The functional derivatives with respect
to fields and sources
are considered as right and left correspondingly. The left
functional derivatives with respect to
fields are marked by special symbol $"\rightarrow"$.
Arguments of any functional are enclosed in square brackets
$[\;]$,
and arguments of any function are enclosed in parentheses, $(\;)$.
The symbol $F_{,i}[\phi,...]$ means the
right derivative of $F[\phi,...]$ with respect to field $\phi^i$.
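To illustrate the condensed notations, a contraction such as $J\phi$ is understood
to include an integration over the space-time coordinates together with a summation
over all discrete indices,
\beq
J\phi=\int dx\; J_i(x)\phi^i(x).
\eeq
This expansion is given here only as a reminder and is not written out
explicitly below.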
\section{Effective action for gravity theories}
\noindent
We start with an arbitrary initial action $S_0[g]$ of the metric tensor
$g=\{g_{\mu\nu}\}$.\footnote{Standard examples are Einstein gravity,
$S_0[g]=\kappa^{-2}\int dx \sqrt{-{\rm det}g}\;\!R$,
and $R^2$ gravity,
$S_0[g]=\int dx \sqrt{-{\rm det}g}\;(\lambda_1 R^2+
\lambda_2R^{\mu\nu}R_{\mu\nu}+\kappa^{-2}R)$.}
We suppose the invariance of $S_0[g]$,
\beq
\label{A1}
\delta_{\xi}S_0[g]=0,
\eeq
under general coordinate transformations which infinitesimally take the
form of gauge transformations of $g_{\mu\nu}$
\beq
\label{A2}
\delta_{\xi} g_{\mu\nu}=-\pa_{\sigma}g_{\mu\nu}\xi^{\sigma}-
g_{\mu\sigma}\pa_{\nu}\xi^{\sigma}-g_{\sigma\nu}\pa_{\mu}\xi^{\sigma}=
R_{\mu\nu\sigma}(g)\xi^{\sigma},
\eeq
where $\xi^{\sigma}$ are arbitrary functions of the space-time coordinates and
$R_{\mu\nu\sigma}(g)$ are the gauge generators satisfying a closed
and irreducible gauge algebra (for a detailed description see \cite{BLRNSh}),
\beq
\label{A3}
[\delta_{\xi_1},\delta_{\xi_2}]g_{\mu\nu}=\delta_{\xi_3}g_{\mu\nu},\quad
\xi^{\sigma}_3=F^{\sigma}_{\alpha\beta}\xi^{\beta}_2\xi^{\alpha}_1=
\xi^{\sigma}_1\pa_{\alpha}\xi^{\alpha}_2-
\xi^{\sigma}_2\pa_{\alpha}\xi^{\alpha}_1.
\eeq
In (\ref{A3}) $F^{\sigma}_{\alpha\beta}$ are the structure coefficients of the gauge
algebra; they do not depend on the fields $g_{\mu\nu}$
and have a universal form for any gravity
theory.
At the quantum level one operates with the action
\beq
\label{A4}
S[\phi,{\bar g}]=S_0[h+{\bar g}]+S_{gh}[\phi,{\bar g}]+S_{gf}[\phi,{\bar g}],
\eeq
appearing in the Faddeev-Popov method \cite{FP}.
Here the decomposition of $g$, $g=h+{\bar g}$,
into a background metric ${\bar g}=\{{\bar g}_{\mu\nu}\}$ and a
quantum fluctuation $h=\{h_{\mu\nu}\}$ is used.
In (\ref{A4}) $\phi^i=(h_{\alpha\beta},B^{\alpha},C^{\alpha},{\bar
C}^{\alpha})$ is the set of quantum fields, $C^{\alpha},{\bar
C}^{\alpha}$ are the ghost and antighost fields, $B^{\alpha}$ are
auxiliary Nakanishi-Lautrup fields introducing the gauge-fixing functions
$\chi_{\alpha}({\bar g},h)$. A standard choice of $\chi_{\alpha}({\bar g}, h)$
corresponding to the background field
gauge condition in linear gauges \cite{Barv} reads
\beq
\label{A5}
\chi_{\alpha}({\bar g}, h)={\cal F}^{\mu\nu}_{\alpha}({\bar g})h_{\mu\nu}, \quad
{\cal F}^{\mu\nu}_{\alpha}({\bar g})=
-{\bar g}^{\mu\sigma}\big(a\delta^{\nu}_{\alpha}
{\bar \nabla}_{\sigma}+b\delta^{\nu}_{\sigma}{\bar \nabla}_{\alpha}\big ),
\eeq
where ${\bar \nabla}_{\sigma}$ is the covariant derivative constructed with the help of
${\bar g}_{\mu\nu}$, and $a,b$ are constants. The de Donder gauge condition
corresponds to the case $a=1$, $b=1/2$.
$S_{gh}[\phi,{\bar g}]$ is the ghost action,
\beq
\label{A6}
S_{gh}[\phi,{\bar g}]=\int dx\sqrt{-{\rm det}{\bar g}}\;
{\bar C}^{\alpha}G_{\alpha}^{\beta\gamma}({\bar g},h)
R_{\beta\gamma\sigma}({\bar g}+h)C^{\sigma},
\eeq
with the notation
\beq
\label{A7}
G_{\alpha}^{\beta\gamma}({\bar g},h)=
\frac{\delta\chi_{\alpha}({\bar g},h)}{\delta h_{\beta\gamma}}\;,
\eeq
and $S_{gf}[\phi,{\bar g}]$ is the gauge fixing action
\beq
\label{A8}
S_{gf}[\phi,{\bar g}]=\int dx \sqrt{-{\rm det}{\bar g}}\;
B^{\alpha}\chi_{\alpha}({\bar g},h).
\eeq
For any admissible choice of gauge fixing functions
$\chi_{\alpha}({\bar g},h)$ the action
(\ref{A4}) is invariant under global supersymmetry (BRST symmetry)
\cite{BRS1,T}, \!\!\!
\footnote{The gravitational BRST transformations were introduced in
\cite{DR-M,Stelle,TvN}. For a more compact presentation of the BRST transformations we
use the notation $\delta_B$ instead of $\delta_{BRST}$.}
\beq
\label{A9}
\delta_B h_{\mu\nu}=R_{\mu\nu\alpha}({\bar g}+h)C^{\alpha}\Lambda,\quad
\delta_B B^{\alpha}=0,\quad
\delta_B C^{\alpha}=-C^{\sigma}\pa_{\sigma} C^{\alpha}\Lambda,\quad
\delta_B {\bar C}^{\alpha}=B^{\alpha}\Lambda,
\eeq
where $\Lambda$ is a constant Grassmann parameter.
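Let us recall a well-known property of the BRST transformations (\ref{A9}), used
implicitly in the gauge-dependence analysis below: their nilpotency, $\delta_B^2=0$.
In the antighost sector this is immediate,
\beq
\delta_{B_2}\delta_{B_1}{\bar C}^{\alpha}=
\delta_{B_2}\big(B^{\alpha}\Lambda_1\big)=
\big(\delta_{B_2}B^{\alpha}\big)\Lambda_1=0,
\eeq
where $\Lambda_1$, $\Lambda_2$ are the Grassmann parameters of the two successive
transformations; in the $h_{\mu\nu}$- and ghost sectors the same result follows from
the gauge algebra (\ref{A3}) after a careful tracking of Grassmann signs.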
The generating functional of Green functions, $Z=Z[J,{\bar g}]$, is constructed in the form of a
functional integral
\beq
\label{A10}
Z[J,{\bar g}]=\int D\phi
\exp\Big\{\frac{i}{\hbar}\big(S[\phi,{\bar g}]+J\phi\big)\Big\}=
\exp\Big\{\frac{i}{\hbar}W[J,{\bar g}]\Big\},
\eeq
where $W=W[J,{\bar g}]$ is the generating functional of connected Green functions and
$J=\{J_i\}$ is the set of external sources to fields $\phi=\{\phi^i\}$. The
generating functional of vertex functions (effective action),
$\Gamma=\Gamma[\Phi, {\bar g}]$, is defined through the Legendre transform of $W$,
\beq
\label{A11}
\Gamma[\Phi, {\bar g}]=W[J,{\bar g}]-J\Phi,\quad
\Phi^i=\frac{\delta W}{\delta J_i}, \quad \frac{\delta\Gamma}{\delta\Phi^i}=-J_i
\eeq
and can be found as a solution to the following functional integro-differential equation,
\beq
\label{A12}
\exp\Big\{\frac{i}{\hbar}\Gamma[\Phi,{\bar g}]\Big\}=
\int D\phi
\exp\Big\{\frac{i}{\hbar}\Big(S[\Phi+\phi,{\bar g}]-
\frac{\delta\Gamma[\Phi, {\bar g}]}{\delta\Phi}\phi\Big)\Big\}.
\eeq
The standard approach (perturbation theory) of quantum field theory to finding
a solution to Eq. (\ref{A12}) is based on the
loop expansion
\beq
\Gamma[\Phi,{\bar g}]=S[\Phi,{\bar g}]+\hbar \Gamma_1[\Phi,{\bar g}]+...
\eeq
where
\beq
\Gamma_1[\Phi,{\bar g}]=-i{\rm Tr}\ln (S^{(2)}[\Phi,{\bar g}]), \quad
S^{(2)}[\Phi,{\bar g}]=\frac{\delta^2 S[\Phi,{\bar g}]}{\delta\Phi\;\delta\Phi},
\eeq
is the one-loop approximation, and the ellipsis stands for higher-order
loop contributions.
The generating functionals $Z,W,\Gamma$ depend on the gauge, but due to the BRST symmetry
this gauge dependence has a very special form: the variation $\delta\Gamma$
under an infinitesimal change of the gauge-fixing functions,
$\chi\rightarrow\chi+\delta\chi$, obeys the property
\beq
\label{A13}
\delta_{\chi}\Gamma\Big|_{\frac{\delta\Gamma}{\delta\Phi}=0}=0,
\eeq
i.e. it does not depend on the gauge when considered on the extremals.
For the first time the gauge dependence of the effective action
for gravity theories in the form
(\ref{A13}) was described in \cite{LR} (for earlier descriptions of the gauge dependence
of the effective action in gauge theories see the papers \cite{J,Niel,LT-1981a,LT-1981b,VLT}).
This fact allows one to state the gauge independence of the $S$-matrix
thanks to the equivalence theorem \cite{KT}.
Among other important properties, very useful in practical calculations
within the background field method \cite{DeW,AFS,Abbott}, the functionals
\beq
\label{A14}
Z[{\bar g}]=Z[J=0,{\bar g}],\quad
W[{\bar g}]=W[J=0,{\bar g}],\quad
\Gamma[{\bar g}]=\Gamma[\Phi=0, {\bar g}]
\eeq
are covariant functionals with respect to ${\bar g}$,
\beq
\label{A15}
\delta_{\xi}Z[{\bar g}]=0,\quad
\delta_{\xi}W[{\bar g}]=0,\quad
\delta_{\xi}\Gamma[{\bar g}]=0,
\eeq
and do not depend on the gauge,
\beq
\label{A16}
\delta_{\chi}Z[{\bar g}]=0,\quad
\delta_{\chi}W[{\bar g}]=0,\quad
\delta_{\chi}\Gamma[{\bar g}]=0,
\eeq
as direct consequences of the BRST invariance of the action
$S[\phi,{\bar g}]$ \cite{LavSh-QG}.
\section{Gauge dependence of modified effective average action}
\noindent
The effective average action of the FRG \cite{Wet1,Wet2}
is ill-defined perturbatively in the case of Yang-Mills
theories \cite{LSh,Lav-yad} and gravity theories \cite{BLRNSh} because of
the gauge dependence
of the effective average action on-shell. To improve
the situation for Quantum Gravity we apply
the background field method \cite{DeW,AFS,Abbott}
\footnote{For recent developments of the background field
method see \cite{GLSh}.}
and the formulation of the effective action with
composite operators \cite{CJT,LO-1989,Lav-tmph} to construct a modified
effective average action
in the form first used in the
case of Yang-Mills theories in \cite{LSh}.
By adding a scale-dependent regulator action, $S_k$,
quadratic in the quantum fields, the FRG modifies the
behavior of the propagators of the quantum fields in the
IR and UV regions \cite{Wet1,Wet2}.
In the case of Quantum Gravity the scale-dependent
regulator action takes the form
\cite{Reuter}
\beq
\nonumber
S_{k}[\phi,{\bar g}]&=&\int \!dx \sqrt{-{\rm det}{\bar g}}
\;\Big[\frac{1}{2}h_{\mu\nu}R^{(1)\mu\nu\;\!\alpha\beta}_{k}
({\bar g})h_{\alpha\beta}+
{\bar C}^{\alpha}R^{(2)}_{k \;\alpha\beta}({\bar g})C^{\beta}\Big]\equiv\\
\label{B1}
&\equiv& \int \!dx \sqrt{-{\rm det}{\bar g}}\Big({\cal L}^{(1)}_k(h,{\bar g})+
{\cal L}^{(2)}_k(C,{\bar C},{\bar g})\Big),
\eeq
where $R^{(1)\mu\nu\;\!\alpha\beta}_{k}({\bar g})$,
$R^{(2)}_{k \;\alpha\beta}({\bar g})$
are regulators with properties
\beq
\label{B2}
R^{(1)\mu\nu\;\alpha\beta}_{k}({\bar g})=
R^{(1)\;\!\alpha\beta\;\!\mu\nu}_{k}({\bar g}),\quad
\lim_{k\rightarrow 0}R^{(1)\mu\nu\;\alpha\beta}_{k}({\bar g})=0,\quad
\lim_{k\rightarrow 0}R^{(2)}_{k\; \alpha\beta}({\bar g})=0.
\eeq
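For orientation, a typical regulator profile satisfying (\ref{B2})
(given here only as an illustration, since the analysis below does not
rely on any particular choice) is the optimized, Litim-type cutoff,
which in flat momentum space reads
\beq
R_k(p^2)=(k^2-p^2)\,\theta(k^2-p^2),
\eeq
so that modes with $p^2<k^2$ acquire an effective mass of order $k$,
while $R_k\rightarrow 0$ as $k\rightarrow 0$.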
On quantum level the FRG operates with the full action
\beq
\label{B3}
S_{Wk}[\phi,{\bar g}]=S[\phi,{\bar g}]+S_{k}[\phi,{\bar g}],
\eeq
where $S[\phi,{\bar g}]$ is defined in (\ref{A4}) - (\ref{A8}).
The action $S_{Wk}[\phi,{\bar g}]$ (\ref{B3}) is not invariant
under the BRST transformations (\ref{A9}),
which leads to the gauge dependence
problem within the FRG for Quantum Gravity \cite{BLRNSh}.
As already mentioned above,
to improve the situation we propose,
following \cite{LSh}, the generating functional of Green functions
$Z_k=Z_k[J,{\bar g},\Sigma]$
in the form
\beq
\label{B4}
Z_k=\int D\phi
\exp\Big\{\frac{i}{\hbar}\big(S[\phi,{\bar g}]+J\phi+
\Sigma{\cal L}_k(\phi,{\bar g})\big)\Big\},
\eeq
where $\Sigma=(\Sigma_1,\Sigma_2)$ are external sources to composite fields
${\cal L}_k(\phi,{\bar g})=({\cal L}^{(1)}_k(h,{\bar g}),
{\cal L}^{(2)}_k(C,{\bar C},{\bar g}))$ and $J=\{J_i\}$ are external
sources to fields
$\phi=\{\phi^i\}$. Let us note that from
the definition of the generating functional $Z_k$ (\ref{B4})
and the properties of the regulators (\ref{B2})
it immediately follows that the functional $Z_k$ coincides
with the standard generating functional of Green functions in the limit $k\rightarrow 0$.
The same statement is valid for the corresponding $S$-matrices.
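Explicitly, since ${\cal L}_k(\phi,{\bar g})\rightarrow 0$ as $k\rightarrow 0$
due to the properties of the regulators (\ref{B2}), one has
\beq
\lim_{k\rightarrow 0}Z_k[J,{\bar g},\Sigma]=Z[J,{\bar g}]
\eeq
for any value of the sources $\Sigma$, with $Z[J,{\bar g}]$ defined in (\ref{A10}).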
Now we are going to study the dependence of the functional
$Z_k[J,{\bar g},\Sigma]$ (\ref{B4})
on the gauge. To simplify the presentation it is useful to rewrite
the action $S[\phi,{\bar g}]$
in the form
\beq
\label{B5}
S[\phi,{\bar g}]=S_0[h+{\bar g}]+\Psi[\phi,{\bar g}]
{\hat R}[\phi,{\bar g}]
\eeq
with the help of gauge fixing functional $\Psi[\phi,{\bar g}]$
\beq
\label{B6}
\Psi[\phi,{\bar g}]=\int dx \sqrt{-{\rm det}{\bar g}}
\;{\bar C}^{\alpha}\chi_{\alpha}({\bar g},h),
\eeq
containing all information about gauge fixing,
and with the generator of BRST transformations (\ref{A9})
\beq
\label{B7}
{\hat R}[\phi,{\bar g}]=\int dx\;
\frac{\overleftarrow{\delta}}{\delta\phi^i}R^i(\phi, {\bar g}),\quad
R^i(\phi, {\bar g})=\big(R_{\mu\nu\sigma}({\bar g}+h)C^{\sigma},\; 0\;,
-C^{\sigma}\pa_{\sigma} C^{\alpha}, B^{\alpha}\big).
\eeq
Consider an infinitesimal variation of the gauge-fixing functions,
$\chi_{\alpha}({\bar g},h)\;\rightarrow\;\chi_{\alpha}({\bar g},h)+
\delta\chi_{\alpha}({\bar g},h)$, which causes the variation of the gauge-fixing functional,
$\Psi[\phi,{\bar g}]\;\rightarrow \Psi[\phi,{\bar g}]+\delta\Psi[\phi,{\bar g}]$.
Let us temporarily introduce the notations $S_{\Psi}[\phi,{\bar g}]=S[\phi,{\bar g}]$
and $Z_{k\Psi}=Z_k$
to stress the essential dependence of $S[\phi,{\bar g}]$ and $Z_k$ on the gauge-fixing procedure.
In the functional integral
\beq
\label{B8}
Z_{k\Psi+\delta\Psi}=\int D\phi
\exp\Big\{\frac{i}{\hbar}\big(S_{\Psi}[\phi,{\bar g}]+
\delta\Psi[\phi,{\bar g}]{\hat R}[\phi,{\bar g}]+J\phi+
\Sigma{\cal L}_k(\phi,{\bar g})\big)\Big\},
\eeq
we make the change of integration variables
in the form of BRST transformations,
but with the constant parameter $\Lambda$ replaced by the functional
\beq
\label{B9}
\Lambda[\phi,{\bar g}]=\frac{i}{\hbar}\delta\Psi[\phi,{\bar g}].
\eeq
Taking into account the corresponding Jacobian
\beq
\label{B10}
J[\phi,{\bar g}]=
\exp\Big\{-\frac{i}{\hbar}\delta\Psi[\phi,{\bar g}]{\hat R}[\phi,{\bar g}]\Big\},
\eeq
and, omitting the subscript $\Psi$, we obtain the following equation
\beq
\label{B11}
\delta Z_k
=\frac{i}{\hbar}\big(J_i+
\Sigma{\cal L}_{k,i}(\widehat{q},{\bar g})
\big)R^i(\widehat{q},{\bar g})
\delta\Psi[\widehat{q},{\bar g}]\;Z_k
\eeq
describing the gauge dependence of the functional $Z_k=Z_k[J,{\bar g},\Sigma]$.
In (\ref{B11}) the notations
\beq
\label{B12}
{\cal L}_{k,i}(\phi,{\bar g})=
\frac{\pa{\cal L}_{k}(\phi,{\bar g})}{\pa \phi^i},\quad
\widehat{q}^{\;i}=-i\hbar\frac{\delta}{\delta J_i}
\eeq
are used.
From (\ref{B11}) there follows the important statement that the gauge dependence
of $Z_k[J,{\bar g},\Sigma]$ disappears when the external sources are switched off,
$J_i=\Sigma_1=\Sigma_2=0$.
In terms of the generating functional of connected Green functions,
$W_k=W_k[J,{\bar g},\Sigma]=-i\hbar\ln Z_k[J,{\bar g},\Sigma]$,
the relation (\ref{B11}) takes the form
\beq
\label{B13}
\delta W_k=\big(J_i+
\Sigma{\cal L}_{k,i}(\widehat{Q}_k,{\bar g})
\big) R^i(\widehat{Q}_k,{\bar g})
\delta\Psi[\widehat{Q}_k,{\bar g}]\;\cdot 1,
\eeq
where
\beq
\label{B14}
\widehat{Q}^{\;i}_k=\widehat{q}^{\;i}+\frac{\delta W_k}{\delta J_i}.
\eeq
The modified effective average action, $\Gamma_k=\Gamma_k[\Phi_k,{\bar g},F_k]$,
is introduced through the double Legendre transform of $W_k$
\beq
\label{B15}
&&\Gamma_k[\Phi_k,{\bar g},F_k]=W_k[J,{\bar g},\Sigma]-J_i\Phi_k^i-
\Sigma_{\ell}\big({\cal L}^{(\ell)}_k(\Phi_k,{\bar g})+\hbar F_k^{\ell}\big),\\
\label{B16}
&&\Phi_k^i=\frac{\delta W_k}{\delta J_i},\quad \hbar
F_k^{\ell}=\frac{\delta W_k}{\delta \Sigma_{\ell}}- {\cal
L}^{(\ell)}_k\Big(\frac{\delta W_k}{\delta J},{\bar g}\Big),\;\;
\ell =1,2.
\eeq
From (\ref{B15}), (\ref{B16}) it follows
\beq
\label{B17} \frac{\delta\Gamma_k}{\delta\Phi_k^i}=-J_i-\Sigma_{\ell}
{\cal L}^{(\ell)}_{k,i}(\Phi_k,{\bar g}),\quad
\frac{\delta\Gamma_k}{\delta F_k^{\ell}}=-\hbar\Sigma_{\ell}.
\eeq
The modified effective average action satisfies the following functional
integro-differential equation
\beq
\label{B15a}
\!\!\!\exp\Big\{\!\frac{i}{\hbar}\Big(\Gamma_k\!
-\!\frac{\delta\Gamma_k}{\delta F_k}F_k\Big)\!\Big\}\!=\!\!
\int \!\!D\phi
\exp\Big\{\!\frac{i}{\hbar}\Big(\!S[\Phi_k\!+\!\phi,{\bar g},F_k]\!-\!
\frac{\delta\Gamma_k}{\delta\Phi_k}\phi-\!
\frac{1}{2}\frac{\delta\Gamma_k}{\delta(\hbar F_k)}{\cal L}^{(2)}_k(\Phi_k,{\bar g})
\phi\phi\Big)\!\Big\},
\eeq
where
\beq
{\cal L}^{(2)}_k(\Phi_k,{\bar g})=
\frac{\delta^2 {\cal L}_k(\Phi_k,{\bar g})}{\delta\Phi_k\delta\Phi_k}.
\eeq
In what follows the gauge dependence of $\Gamma_k$ (\ref{B15}) is analyzed
from the point of view of solutions to equation (\ref{B15a}), which
can in principle be found perturbatively in the form of loop expansions,
which in this case have their own specific features
due to the fact that the resulting equations have
the form of Clairaut-type equations (for a detailed discussion see \cite{LM}).
Let us introduce
the full sets of fields ${\cal F}_k^{A}$ and sources ${\cal
J}_{A}$ according to
\beq
\label{B18}
{\cal F}_k^{A}=(\Phi_k^i,\hbar F_k^{\ell})
\,,\qquad {\cal J}_{ A}=(J_i,\hbar\Sigma_{\ell}).
\eeq
From the condition of solvability of equations (\ref{B17}) with
respect to the sources \ $J$ \ and \ $\Sigma$, it follows that
\beq
\label{B19}
\frac{\de {\cal F}_k^{
C}({\cal J})}{\de {\cal J}_{B}}\,\,
\frac{\overrightarrow{\de}{\cal J}_{
A}({\cal F}_k)}{\de{\cal F}_k^{ C}} \,=\,\de^{B}_{\;A}\,.
\eeq
One can express \ ${\cal J}_{A}$ \ as a function of
the fields in the form
\beq
\label{B20}
{\cal J}_{A}
\,=\,\Big(-\frac{\de\Ga_k}{\de\Phi_k^i}\,-\,
\frac{\de\Ga_k}{\de F_k^{\ell}}\,\frac{\de {\cal L}^{(\ell)}_k(\Phi_k,{\bar g})}{\de\Phi_k^i}
\,,\,\,-\,
\frac{\de\Ga_k}{\de F_k^{\ell}}\Big)
\eeq
and, therefore,
\beq
\label{B21}
(G^{''-1}_k)^{AC}(G^{''}_k)_{{CB}}=\de_{\;B}^{A},\qquad
\frac{\overrightarrow{\delta}{\cal J}_{B}({\cal F}_k)}{\delta{\cal F}_k^{A}}
= -(G^{''}_k)_{{AB}}\,,\qquad
\frac{\delta {\cal F}_k^{B}({\cal J})}{\delta {\cal J}_{A}}=
-(G^{''-1}_k)^{AB}\,.
\eeq
Taking into account that due to the Legendre transform
\beq
\label{B22}
\delta W_k=\delta\Gamma_k,
\eeq
the equation (\ref{B13}) in terms of the modified effective average action,
$\Gamma_k=\Gamma_k[\Phi_k,{\bar g},F_k]$, can be rewritten as
\beq
\label{B23}
\delta\Gamma_k=-\Big(\frac{\delta\Gamma_k}{\delta\Phi_k^i}+
\frac{1}{\hbar}\frac{\delta\Gamma_k}{\delta F_k}
\big({\cal L}_{k,i}(\Phi_k,{\bar g})-{\cal L}_{k,i}(\widehat{\Phi}_k,{\bar g})\big)
\Big) R^i(\widehat{\Phi}_k,{\bar g})
\delta\Psi[\widehat{\Phi}_k,{\bar g}]\;\cdot 1,
\eeq
where
\beq
\label{B24}
\widehat{\Phi}_k^i=\Phi^i_k+i\hbar (G^{''-1}_k)^{iB}
\frac{\overrightarrow{\delta}}{\delta {\cal F}_k^{B}}\;.
\eeq
From (\ref{B23}), (\ref{B24}) it follows
\beq
\label{B25}
\delta\Gamma_k\Big|_{\frac{\delta\Gamma_k}{\delta {\cal F}_k}=0}=0
\eeq
i.e. the gauge independence of the modified effective average action calculated on its
extremals. This very important property of $\Gamma_k$ is established within the standard
perturbation theory accepted in Quantum Field Theory for the evaluation of functional integrals.
In turn, the FRG is considered as a non-perturbative approach to quantum field theories,
in which the effective average action should be found as a solution to the flow equation.
Quite recently \cite{Lav-2020} it was proved that the effective average action
depends on the gauge at every value
of the IR parameter $k$, including the fixed points, making impossible the physical interpretation
of any results obtained in the standard formulation of the FRG for gauge theories.
One meets the same drawback in the reformulation of the method based
on the 2PI effective action, when regulators are considered as sources to composite fields
\cite{AMNS-1,Lav-alt}. In the next section we are going to introduce
an alternative flow equation and to study its $k$-dependence.
\section{Alternative flow equation and $k$-dependence}
\noindent
The flow equation in the FRG is the basic relation
describing the dependence of the effective average action on the IR
parameter $k$. Let us derive an alternative flow equation for the
2PI effective action (\ref{B15}). To do this we start by
differentiating the functional
$Z_k=Z_k[J,{\bar g},\Sigma]$ (\ref{B4})
with respect to $k$. Taking into account that only the quantities
${\cal L}_{k}(\widehat{q},{\bar g})$ depend on $k$, through the regulators $R_{k}({\bar g})$,
we obtain
the flow equation for the functional $Z_k$
\beq
\label{C1}
\pa_k Z_k=\frac{i}{\hbar}
\Sigma\;\pa_k{\cal L}_{k}(\widehat{q},{\bar g})Z_k.
\eeq
In terms of the generating functional of connected Green functions
$W_k=W_k[J,{\bar g},\Sigma]=-i\hbar\ln Z_k$, the relation
(\ref{C1}) takes the form
\beq
\label{C2}
\pa_k W_k=
\Sigma\;\pa_k{\cal L}_{k}(\widehat{Q}_k,{\bar g})\cdot 1.
\eeq
In terms of the modified effective average action,
$\Gamma_k=\Gamma_k[\Phi_k,{\bar g},F_k]$,
the relation
(\ref{C2}) takes the form
\beq
\label{C3}
\pa_k\Gamma_k=-\frac{\delta\Gamma_k}{\hbar\delta F_k}
\;\pa_k{\cal L}_{k}(\widehat{\Phi}_k,{\bar g})\cdot 1.
\eeq
From (\ref{C3}) it follows
\beq
\label{C4}
\pa_k\Gamma_k\Big|_{\frac{\delta\Gamma_k}{\delta F_k}=0}=0.
\eeq
That is, the modified effective average action does not depend on
the IR parameter $k$ when it is considered on the extremals,
\beq
\label{C5}
\frac{\delta\Gamma_k}{\delta F^{\ell}_k}=0, \quad \ell=1,2.
\eeq
This fact gives hope that calculations with the modified effective average action
at the fixed points may have a physical meaning, in contrast with the standard FRG.
In the next section we will study the gauge dependence
problem for the alternative flow equation.
\section{Gauge dependence of the alternative flow equation}
\noindent
Taking into account that the functions ${\cal L}^{(1)}_{k}(h,{\bar g})$ and
${\cal L}^{(2)}_{k}(C,{\bar C},{\bar g})$ do not depend on the gauge,
from (\ref{C1}) it follows that the
gauge dependence of the alternative flow equation for the functional
$Z_k$ is described by the equation
\beq
\label{D1}
\delta\big(\pa_k Z_k\big)=\frac{i}{\hbar}
\Sigma\;\pa_k{\cal L}_{k}(\widehat{q},{\bar g})\delta Z_k,
\eeq
or, using (\ref{B11}), by
\beq
\label{D2}
\delta\big(\pa_k Z_k\big)=\Big(\frac{i}{\hbar}\Big)^2
\Sigma\;\pa_k{\cal L}_{k}(\widehat{q},{\bar g})\big(J_i+
\Sigma{\cal L}_{k,i}(\widehat{q},{\bar g})\big)R^i(\widehat{q},{\bar g})
\delta\Psi[\widehat{q},{\bar g}]\;Z_k.
\eeq
We find the unexpected fact that the gauge dependence of the flow equation
disappears already when the external sources to composite fields are switched off,
$\Sigma_1=\Sigma_2=0$. The gauge variation of the alternative flow equation for the
functional $W_k$ reads
\beq
\label{D3}
\delta\big(\pa_k W_k\big)=
\frac{i}{\hbar}\Big(\Sigma\;\pa_k{\cal L}_{k}(\widehat{Q}_k,{\bar g})-
\pa_k W_k\Big)\delta W_k,
\eeq
or, equivalently,
\beq
\label{D4}
\delta\big(\pa_k W_k\big)=
\frac{i}{\hbar}\Sigma\big(\pa_k{\cal L}_{k}(\widehat{Q}_k,{\bar g})-
\pa_k{\cal L}_{k}(\widehat{Q}_k,{\bar g})\cdot 1\big)\big(J_i+
\Sigma{\cal L}_{k,i}(\widehat{Q}_k,{\bar g})\big)
R^i(\widehat{Q}_k,{\bar g})
\delta\Psi[\widehat{Q}_k,{\bar g}]\cdot 1.
\eeq
In terms of the modified effective average action the gauge dependence of the
alternative flow equation can be presented as
\beq
\label{D5}
\delta\big(\pa_k \Gamma_k\big)=
\frac{i}{\hbar}\Big(-\frac{\delta\Gamma_k}{\hbar\delta F_k}
\pa_k{\cal L}_{k}(\widehat{\Phi}_k,{\bar g})-\pa_k \Gamma_k\Big)\delta \Gamma_k,
\eeq
or as
\beq
\nonumber
&&\delta\big(\pa_k \Gamma_k\big)=
-\frac{i}{\hbar^2}\frac{\delta\Gamma_k}{\delta F_k}
\big(\pa_k{\cal L}_{k}(\widehat{\Phi}_k,{\bar g})-
\pa_k{\cal L}_{k}(\widehat{\Phi}_k,{\bar g})\cdot 1\big)\times\\
\label{D6}
&&\qquad\times\Big(\frac{\delta\Gamma_k}{\delta\Phi_k^i}+
\frac{1}{\hbar}\frac{\delta\Gamma_k}{\delta F_k}\big({\cal
L}_{k,i}(\Phi_k,{\bar g})-{\cal L}_{k,i}(\widehat{\Phi}_k,{\bar g})\big)\Big)
R^i(\widehat{\Phi}_k,{\bar g})
\delta\Psi[\widehat{\Phi}_k,{\bar g}]\;\cdot 1.
\eeq
From
(\ref{D5}), (\ref{D6}) it follows
\beq
\label{D7}
\delta\big(\pa_k \Gamma_k\big)\Big|_{\frac{\delta\Gamma_k}{\delta
F_k}=0}=0.
\eeq
Therefore, the flow equation is gauge independent on extremals,
\beq
\label{D8}
\frac{\delta\Gamma_k}{\delta F^{\ell}_k}=0,
\eeq
and, in addition, it does not depend on the IR parameter $k$ there.
These facts give the possibility of a consistent application of the proposed
quantization procedure in gauge theories to obtain physically meaningful results.
It is interesting to note that the gauge independence of $\Gamma_k$
found as a solution
to the flow equation is expected with the use of only a part of
the equations of motion (\ref{D8}). In turn,
the independence of the flow equation of the IR parameter $k$
on the extremals (\ref{D8}) can be considered as a signal of the
$k$-independence of $\Gamma_k$ at the fixed points.
\section{Summary}
\noindent
We have studied the basic properties of the flow equation in gravity theories
within a reformulation of
the standard FRG \cite{Wet1,Wet2}
in which, following the ideas of \cite{LSh}, a 2PI effective action
with composite fields being the densities of the regulator action is
introduced. In contrast with the standard FRG \cite{Wet1,Wet2} and the
alternative approaches \cite{AMNS-1,Lav-alt}, the proposed reformulation
leads to a 2PI effective average action which possesses the standard properties
of gauge dependence in perturbation theory
and satisfies an alternative flow equation that is both $k$-
and gauge independent on the extremals. Speaking about possible reformulations of the
standard FRG, it is necessary to mention the papers \cite{Morris1,Morris2},
where the quantization procedure is based on a gauge invariant regularization
of an initial classical action, which guarantees the BRST symmetry and the gauge independence
of $S$-matrix elements. Unfortunately, the absence of an explicit procedure to achieve the gauge
invariance of the regularized initial action does not allow one to consider this approach
as a consistent quantization method \cite{Lav-RG-BV}.
Therefore, at the moment we state that the proposed reformulation of the standard
FRG should be considered
as a non-perturbative quantization of gauge theories which successfully passes
the gauge dependence tests.
\section*{Acknowledgments}
\noindent
The work is supported by the Ministry of Education of the Russian Federation,
project FEWF-2020-0003.
\begin{thebibliography}{99}
\addtolength{\itemsep}{-8pt}
\bibitem{Wet1}
C. Wetterich, {\it Average action and the renormalization group
equation}, Nucl. Phys. {\bf B352} (1991) 529.
\bibitem{Wet2}
C. Wetterich, {\it Exact evolution equation for the effective
potential}, Phys. Lett. {\bf B301} (1993) 90.
\bibitem{DCEMPTW}
N. Dupuis, L. Canet, A. Eichhorn, W. Metzner, J.M. Pawlowski, M. Tissier, N. Wschebor.
{\it The nonperturbative functional renormalization group and its applications},
arXiv:2006.04853 [cond-mat.stat-mech].
\bibitem{LSh}
P.M.~Lavrov, I.L.~Shapiro,
{\it On the Functional Renormalization Group approach for Yang-Mills
fields,} JHEP {\bf 1306} (2013) 086.
\bibitem{Lav-2020}
P.M. Lavrov,
{\it BRST, Ward identities, gauge dependence and FRG},
arXiv:2002.05997 [hep-th].
\bibitem{Lav-yad}
P.M. Lavrov,
{\it Gauge dependence of effective average action},
Phys. Atom. Nucl. {\bf 83} (2020) 1011.
\bibitem{AMNS-1}
E. Alexander, P. Millington, J. Nursey, P.M. Safin,
{\it An alternative flow equation for the functional renormalization group},
Phys. Rev. {\bf D100} (2019) 101702.
\bibitem{Lav-alt}
P.M. Lavrov,
{\it Gauge dependence of alternative flow equation for the functional
renormalization group},
Nucl. Phys. {\bf B957} (2020) 115107.
\bibitem{CJT}
J.M. Cornwall, R. Jackiw, E. Tomboulis,
{\it Effective action for composite operators},
Phys. Rev. {\bf D10} (1974) 2428.
\bibitem{LO-1989}
P.M. Lavrov, S.D. Odintsov,
{\it The gauge dependence of the effective action of composite fields
in general gauge theories},
Sov. J. Nucl. Phys. {\bf 50} (1989) 332
(Yad. Fiz. {\bf 50} (1989) 536).
\bibitem{Lav-tmph}
P.M. Lavrov, {\it Effective action for composite fields in
gauge theories},
Theor. Math. Phys. {\bf 82} (1990) 282 (Teor. Mat. Fiz. {\bf 82} (1990) 402).
\bibitem{LOR-jmp}
P.M. Lavrov, S.D. Odintsov, A.A. Reshetnyak,
{\it Effective action of composite fields for
general gauge theories in BLT covariant formalism},
J. Math. Phys. {\bf 38} (1997) 3466.
\bibitem{Reuter}
M. Reuter, {\it Nonperturbative evolution equation for Quantum
Gravity}, Phys. Rev. {\bf D57} (1998) 971.
\bibitem{DeWitt} B.S. DeWitt,
{\it Dynamical theory of groups and fields},
(Gordon and Breach, 1965).
\bibitem{BLRNSh}
V.F. Barra, P.M. Lavrov, E.A. dos Reis, T. de Paula Netto, I.L.
Shapiro, {\it Functional renormalization group approach and gauge
dependence in gravity theories},
Phys. Rev. {\bf D101} (2020) 065001.
\bibitem{FP}
L.D. Faddeev, V.N. Popov,
{\it Feynman diagrams for the Yang-Mills field},
Phys. Lett. {\bf B25} (1967) 29.
\bibitem{Barv}
A.O. Barvinsky, D. Blas, M. Herrero-Valea, S.M. Sibiryakov, C.F. Steinwachs,
{\it Renormalization of gauge theories in the background-field approach},
JHEP {\bf 1807} (2018) 035.
\bibitem{BRS1}
C. Becchi, A. Rouet, R. Stora,
{\it The abelian Higgs Kibble Model, unitarity of the $S$-operator},
Phys. Lett. {\bf B52} (1974) 344.
\bibitem{T}
I.V. Tyutin,
{\it Gauge invariance in field theory and statistical
physics in operator formalism}, Lebedev Inst. preprint
N 39 (1975).
\bibitem{DR-M}
R. Delbourgo, M. Ramon-Medrano, {\it Supergauge theories and dimensional regularization},
Nucl. Phys. {\bf 110} (1976) 467.
\bibitem{Stelle}
K.S. Stelle, {\it Renormalization of higher derivative quantum gravity},
Phys. Rev. {\bf D16} (1977) 953.
\bibitem{TvN}
P.K. Townsend, P. van Nieuwenhuizen, {\it BRS gauge and ghost field
supersymmetry in gravity and supergravity}, Nucl. Phys. {\bf B120} (1977) 301.
\bibitem{LR}
P.M. Lavrov, A.A. Reshetnyak,
{\it One loop effective action for Einstein gravity in special background gauge},
Phys. Lett. {\bf B351} (1995) 105.
\bibitem{J}
R. Jackiw,
{\it Functional evaluation of the effective potential},
Phys. Rev. {\bf D9} (1974) 1686.
\bibitem{Niel}
N.K. Nielsen,
{\it On the gauge dependence of spontaneous symmetry
breaking in gauge theories},
Nucl. Phys. {\bf B101} (1975) 173.
\bibitem{LT-1981a}
P.M. Lavrov, I.V. Tyutin, {\it
On structure of renormalization in gauge theories},
Sov. J. Nucl. Phys. {\bf 34} (1981) 156 (Yad. Fiz. {\bf 34} (1981) 277).
\bibitem{LT-1981b}
P.M. Lavrov, I.V. Tyutin, {\it On generating functional Of vertex functions
in the Yang-Mills theories},
Sov. J. Nucl. Phys. {\bf 34} (1981) 474 (Yad. Fiz. {\bf 34} (1981) 850).
\bibitem{VLT}
B.L. Voronov, P.M. Lavrov, I.V. Tyutin,
{\it Canonical transformations and the gauge dependence
in general gauge theories},
Sov. J. Nucl. Phys. {\bf 36} (1982) 292 (Yad. Fiz. {\bf 36} (1982) 498).
\bibitem{KT}
R.E. Kallosh, I.V. Tyutin, {\it The equivalence theorem and gauge
invariance in renormalizable theories}, Sov. J. Nucl. Phys. {\bf 17}
(1973) 98 (Yad. Fiz. {\bf 17} (1973) 190).
\bibitem{DeW} B.S. DeWitt, \textit{Quantum theory of gravity. II. The
manifestly covariant theory}, Phys. Rev. \textbf{162} (1967) 1195.
\bibitem{AFS}
I.Ya. Arefeva, L.D. Faddeev, A.A. Slavnov, \textit{Generating
functional for the $S$-matrix in gauge theories},
Theor. Math. Phys. \textbf{21} (1975) 1165
(Teor. Mat. Fiz. \textbf{21} (1974) 311).
\bibitem{Abbott}
L.F. Abbott, {\it The background field method beyond one loop},
Nucl. Phys. {\bf B185} (1981) 189.
\bibitem{LavSh-QG}
P.M. Lavrov, I.L. Shapiro,
{\it Gauge invariant renormalizability of quantum gravity},
Phys. Rev. {\bf D100} (2019) 026018.
\bibitem{GLSh}
B.L. Guacchini, P.M. Lavrov, I.L. Shapiro,
{\it Background field method for nonlinear gauges},
Phys. Lett. {\bf B797} (2019) 134882.
\bibitem{LM}
P.M. Lavrov, B.S. Merzlikin,
{\it Legendre transformations and Clairaut-type equations},
Phys. Lett. {\bf B756} (2016) 188.
\bibitem{Morris1}
T.R. Morris, {\it Quantum gravity, renormalizability and
diffeomorphism invariance}, SciPost Phys. {\bf 5} (2018) 040.
\bibitem{Morris2}
Y. Igarashi, K. Itoh, T.R. Morris, {\it BRST in the exact renormalization
group}, Prog. Theor. Exp. Phys. (2019).
\bibitem{Lav-RG-BV}
P.M. Lavrov,
{\it RG and BV-formalism}, Phys. Lett. {\bf B803} (2020) 135314.
\end{thebibliography}
\end{document}
Celebrating—and studying—Jane Addams
Today is the birthday of Jane Addams, founder of Chicago's Hull House, where Lake Forest College students volunteered in the 1890s. Addams, whose nephew was an 1893 grad and whose brother-in-law taught at the College, recruited Lake Forest students for service projects.
Fast-forward 125 years: Summer Richter Scholars Evangeline Bero '20 and Hakob Parsamyan '20 brought Addams' work into the limelight one citation at a time. Together with Associate Professor of Politics James Marquardt, they worked on building a website that focuses on her international peace advocacy.
Click here to read an interview with Bero and Parsamyan.
\section{Introduction}
Our paper \cite{OO}
is a sequel to a
series by the first author \cite{TGC.I, TGC.II},
which compactified both the
moduli space of compact Riemann surfaces $M_{g} (g\ge 2)$ and that of
principally polarized abelian varieties $A_{g}$.
In each case,
as we actually expect an analogue for any moduli of general polarized
K\"ahler-Einstein varieties with non-positive scalar curvatures,
we introduce and study two similar (non-variety) compactifications
of the moduli space $\mathcal{M}$, which we denote by
$\overline{\mathcal{M}}^{\rm GH}$ and
$\overline{\mathcal{M}}^{\rm T}$.
The former $\overline{\mathcal{M}}^{\rm GH}$ is the Gromov-Hausdorff
compactification with respect to
\textit{rescaled} K\"ahler-Einstein metrics
of \textit{fixed diameters} and the latter ``tropical geometric compactification''
$\overline{\mathcal{M}}^{\rm T}$
should dominate
the former $\overline{\mathcal{M}}^{\rm GH}$
as its boundary $\partial\overline{\mathcal{M}}^{\rm T}$
encodes more structure of the
Gromov-Hausdorff limits (collapses) than just the distance structure.
For a precise definition of $\overline{\mathcal{M}}^{\rm GH}$ we employ the same definition as \cite[\S 2.3]{TGC.I}, \cite[\S 2.2]{TGC.II}.\footnote{However, its compactness is unknown at least to the authors in higher dimensional negative scalar curvature case. }
For $\overline{\mathcal{M}}^{\rm T}$, we have a
case by case definition for only particular classes of varieties.
Here, we recall the structure theorem of
$\overline{A_{g}}^{\rm GH}$ from \cite[Theorems 2.1, 2.3 and Corollary 2.5]{TGC.II}.
\if 0
\begin{Thm}[{\cite{TGC.I},\cite{TGC.II}}]\label{TGC.Mg.review}
$M_{g}$ can be explicitly compactified
as $\overline{M_{g}}^{\rm GH}$ (resp., $\overline{M_{g}}^{\rm T}$) whose boundaries
parametrize metrized graph $\Gamma$ (resp., metric graph $\Gamma$ with
$w\colon V(\Gamma):=\{\text{vertices of }\Gamma\}\to \ensuremath{\mathbb{Z}}_{\ge 0}$) which satisfy
certain explicit conditions.
$\overline{M_{g}}^{\rm T}$ dominates $\overline{M_{g}}^{\rm GH}$ by a continuos map
preserving $M_{g}$.
Both boundaries $\partial \overline{M_{g}}^{\rm T}$ and
$\partial \overline{M_{g}}^{\rm GH}$ are naturally stratified
by finite group quotients of open simplices of dimension at most $3g-4$.
For any algebraic morphism from a punctured curve $f\colon C\setminus \{p\}\to M_{g}$,
$\lim_{q\to p} f(q)\in \overline{M_{g}}^{\rm T}$ exists and
such limits, where $f$ runs, form a finite subset of the whole $\partial \overline{M_{g}}^{\rm T}$.
\end{Thm}
\fi
\begin{Thm}[{\cite{TGC.II}}]\label{TGC.Ag.review}
$A_{g}$ can be explicitly compactified as $\overline{A_{g}}^{\rm GH}$
whose boundary parametrizes all flat (real) tori $\ensuremath{\mathbb{R}}^{i}/\ensuremath{\mathbb{Z}}^{i}$ of diameter
$1$ where $1\le i\le g$.
Once we attach the rescaled flat K\"ahler metric
in the principal polarization with diameter $1$ to each
abelian variety, the parametrization of metric spaces on
whole $\overline{A_{g}}^{\rm GH}$ is continuous with
respect to the Gromov-Hausdorff distance.
\if
For any algebraic morphism from a punctured curve $f\colon C^{o}=C\setminus \{p\}\to A_{g}$,
with trivial Raynaud extension, $\lim_{q\to p} f(q)\in \overline{A_{g}}^{\rm T}$ exists and
the set of possible such limits form a dense subset of $\partial \overline{A_{g}}^{\rm T}$ consists of points with rational coordinates.
(The Raynaud extension triviality assumption
is removed as one consequence of \cite{OO}.)
\fi
\end{Thm}
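As an elementary illustration of the diameter normalization in Theorem~\ref{TGC.Ag.review} (the following brute-force check is ours and purely expository): the flat torus $\mathbb{R}^{i}/\mathbb{Z}^{i}$ with its standard metric has diameter $\sqrt{i}/2$, realized at the half-diagonal point $(1/2,\dots,1/2)$, so rescaling by $2/\sqrt{i}$ produces the diameter-$1$ representatives appearing on the boundary.

```python
import itertools
import math

def torus_distance(x):
    # Distance from the origin to x in the flat torus R^n / Z^n:
    # the length of the shortest representative of x modulo Z^n.
    return math.sqrt(sum(min(t % 1.0, 1.0 - t % 1.0) ** 2 for t in x))

def torus_diameter(n, steps=20):
    # Brute-force the diameter over a grid; by homogeneity of the torus
    # it suffices to maximize the distance from the origin.
    grid = [i / steps for i in range(steps + 1)]
    return max(torus_distance(p) for p in itertools.product(grid, repeat=n))

# The farthest point from the origin is (1/2, ..., 1/2),
# so diam(R^n / Z^n) = sqrt(n) / 2.
for n in (1, 2, 3):
    assert abs(torus_diameter(n) - math.sqrt(n) / 2) < 1e-9
```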
In the above case,
we simply set $\overline{A_{g}}^{\rm T}:=\overline{A_{g}}^{\rm GH}$.
On the other hand, in the analogue for $M_{g}$ \cite{TGC.I},
we distinguish $\overline{M_{g}}^{\rm GH}$ and $\overline{M_{g}}^{\rm T}$, where
the boundaries of $\overline{M_{g}}^{\rm GH}$
(resp., $\overline{M_{g}}^{\rm T}$)
parametrize metrized graphs (resp., metrized graphs with integer weights on
the vertices). We refer the details to \cite{TGC.I}.
Our paper \cite{OO}
contains the following.
\begin{enumerate}
\item \label{MS.Sat.part} We first apply the Morgan-Shalen type compactification
for
general Hermitian locally symmetric spaces and identify it with one of the
Satake compactifications (\cite{Sat1}, \cite{Sat2}).
\item
We partially prove that the boundary of the Satake compactification of
the type which appears in \eqref{MS.Sat.part} parametrizes
collapses of abelian varieties and
Ricci-flat K3 surfaces.
This gives a generalisation of some results in
\cite{GW}, \cite{Tos}, \cite{GTZ1}, \cite{GTZ2}, \cite{TZ}
for the K3 surface case.
For instance, a proof of the conjecture of Kontsevich-Soibelman
\cite[Conjecture 1]{KS} (see also Gross-Wilson \cite[Conjecture 6.2]{GW}),
which is related to the Strominger-Yau-Zaslow mirror symmetry \cite{SYZ}, for the case of
K3 surfaces directly follows from our description of collapsing.
We also give a conjecture for higher dimensional hyperK\"ahler varieties.
\end{enumerate}
\noindent
Now we move on to a more detailed description.
\section{General Hermitian symmetric domain}
Let $\mathbb{G}$ be a reductive algebraic group over $\mathbb{Q}$,
$G=\mathbb{G}(\mathbb{R})$,
$K$ one of its maximal compact subgroups,
and $D:=G/K$, which we suppose to have a Hermitian symmetric domain structure.
We moreover assume $D$ is irreducible so that $G$ is simple as a Lie group.
Suppose that $\Gamma$ is an arithmetic subgroup of $\mathbb{G}(\mathbb{Q})$,
which acts on $D$.
Hence we can consider the Hermitian locally symmetric space $\Gamma\backslash D$.
Satake \cite{Sat1}, \cite{Sat2}
constructed compactifications of Riemannian locally symmetric spaces $G/K$
associated to irreducible projective representations $\tau\colon G\to
PGL(\mathbb{C})$ satisfying certain conditions. They are
stratified as:
\[\overline{\Gamma\backslash D}^{\rm Sat, \tau}
=\Gamma\backslash D\sqcup \bigsqcup_{P}
(\Gamma\cap Q(P))\backslash M_{P}/(K\cap M_{P}).\]
Here, $P$ runs over all the $\mu(\tau)$-connected rational parabolic subgroups,
$P=N_{P}A_{P}M_{P}$ denotes the Langlands decomposition, and
$Q(P)$ is the $\mu(\tau)$-saturation of $P$.
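In the simplest example (included here only for orientation): for $G=SL_{2}(\mathbb{R})$, $\Gamma=SL_{2}(\mathbb{Z})$, $D=\mathbb{H}$, there is a single $\Gamma$-conjugacy class of rational parabolic (Borel) subgroups, each boundary stratum $(\Gamma\cap Q(P))\backslash M_{P}/(K\cap M_{P})$ is a point, and the stratification reduces to the one-cusp compactification
\[\overline{\Gamma\backslash \mathbb{H}}=\Gamma\backslash \mathbb{H}\sqcup \{\mathrm{pt}\}\]
of the modular curve.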
We are particularly interested in the case when $\tau$ is the adjoint
representation $\tau_{\rm ad}$.
On the other hand,
given any toroidal compactification \cite{AMRT}
for $\Gamma\backslash D$, we can apply the Morgan-Shalen type compactification
to it as in \cite[Appendix]{TGC.II} (following \cite{MS, BJ16}).
The Morgan-Shalen type compactification
$\overline{\Gamma\backslash D}^{\rm MSBJ}$ obtained in this way
is independent of the cone decomposition
for the toroidal compactification \cite[A.13, A.14]{TGC.II}.
We now compare these two compactifications.
\begin{Thm}\label{MS.Sat}
Let $\Gamma\backslash D$ be a locally Hermitian symmetric space.
Consider its toroidal compactification
and the associated (generalised) Morgan-Shalen compactification
$\overline{\Gamma\backslash D}^{\rm MSBJ}$.
Then this is homeomorphic to the Satake compactification
$\overline{(\Gamma\backslash D)}^{\rm Sat,\tau_{\rm ad}}$
for the adjoint representation $\tau_{\rm ad}$ of $G$.
\end{Thm}
In the following we make an ``elementary'' but important observation
on a rationality phenomenon of the limits along a one-parameter holomorphic
family, which we expect to fit well with the recent approach to extending
the theta functions in \cite{GS.JDG} etc.
\begin{Prop}\label{MS.lim}
Suppose $U\subset \overline{U}^{\rm hyb}(\mathcal{X})$ is a Morgan-Shalen-Boucksom-Jonsson compactification associated to an arbitrary dlt stacky pair $(\mathcal{X},\mathcal{D})$ with boundary coefficients $1$
(\cite{TGC.II}), with $\mathcal{U}:=\mathcal{X}\setminus \mathcal{D}$
and coarse moduli space $\mathcal{U}\to U$. Then any holomorphic morphism
$\Delta^{*}:=\{z\in \mathbb{C}\mid 0<|z|<1\}\to \mathcal{U}$ which extends to
$\Delta:=\{z\in \mathbb{C}\mid |z|<1\}\to \mathcal{X}$ induces a continuous map
$\Delta\to \overline{U}^{\rm hyb}(\mathcal{X})$, i.e., the limit exists.
Furthermore, such possible limits in $\Delta(\mathcal{D})$ are
characterized as points with rational coordinates.
\end{Prop}
\begin{Cor}[corollary to Theorem~\ref{MS.Sat} and Proposition~\ref{MS.lim}]
Take an arbitrary holomorphic map $f\colon \Delta^*\to \Gamma\backslash D$,
which extends to a map to a toroidal compactification of $\Gamma\backslash D$.
Then $f$ also extends to a map $\Delta\to
\overline{\Gamma\backslash D}^{\rm Sat,\tau_{ad}}$
where $0$ is sent to a point with rational coordinates, i.e.,
a point in the dense subset
$(C(F)\cap U(F)\otimes \mathbb{Q})/\mathbb{Q}_{>0}\subset C(F)/\mathbb{R}_{>0}$.
\end{Cor}
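To illustrate Proposition~\ref{MS.lim} in the simplest case (a toy example of ours, not taken from \cite{OO}): let $\mathcal{X}=\mathbb{C}^{2}$ with $\mathcal{D}$ the union of the two coordinate axes, so $\mathcal{U}=(\mathbb{C}^{*})^{2}$. For a holomorphic arc $f(z)=(z^{a}u(z),\, z^{b}v(z))$ with $a,b\in\mathbb{Z}_{>0}$ and $u,v$ nonvanishing on $\Delta$, we have
\[
\left(\frac{\log|f_{1}(z)|}{\log|z|},\ \frac{\log|f_{2}(z)|}{\log|z|}\right)\longrightarrow (a,b)\qquad (z\to 0),
\]
so the limit in $\overline{U}^{\rm hyb}(\mathcal{X})$ is the point of the dual intersection complex $\Delta(\mathcal{D})$ with projectivized coordinates $[a:b]$, which are indeed rational.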
This is partially proved
in the case of $A_{g}$
in \cite{TGC.II} by using degeneration data in
\cite{FC90}.
\begin{Rem}
Although we assume that $G$ is simple in this section,
our Morgan-Shalen type compactification
construction \cite[Appendix]{TGC.II} still works for non-simple $G$.
Thus, our construction also gives a new Satake-type compactification
for non-simple $G$, e.g., of the Hilbert modular varieties.
\end{Rem}
\section{Abelian varieties case}
We identify our tropical geometric compactification $\overline{A_g}^{\rm T}$ (\cite{TGC.II})
of $A_g$ with the adjoint type Satake compactification.
\begin{Thm}\label{Ag.TGC.Satake.MS}
There are canonical homeomorphisms between the three compactifications
\[\overline{A_g}^{\rm T} \cong \overline{A_g}^{\rm Sat,\tau_{\rm ad}}\cong \overline{A_g}^{\rm MSBJ},\]
extending the identity on $A_g$.
\end{Thm}
The second canonical homeomorphism is a special case of Theorem \ref{MS.Sat} and
the first is essentially reduced to matrix computations.
In \cite{OO}, we also give a purely moduli-theoretic
re-explanation
of the structure theory of one-parameter degenerations of
abelian
varieties in \cite{Mum72.AV}, \cite{FC90},
after the above Theorem \ref{Ag.TGC.Satake.MS}, as follows.
\begin{Thm}
Take a holomorphic maximally degenerating family of principally polarized
abelian varieties
$\pi\colon (\mathcal{X},\mathcal{L})\to \Delta$.
Consider the rescaled Gromov-Hausdorff limit $B(\mathcal{X},\mathcal{L})$ of
diameter $1$ as in Theorem \ref{TGC.Ag.review} (\cite{TGC.II})
and its discrete Legendre transform
$\check{B}(\mathcal{X},\mathcal{L})$ (\cite{GS11}, \cite{KS}).
Then we can naturally enhance the underlying integral affine structure of
$\check{B}(\mathcal{X},\mathcal{L})$
to a $K$-affine structure (in the sense of \cite[\S 7.1]{KS})
via the data of $\pi$.
Furthermore, such a $K$-affine structure recovers $\pi$
up to the equivalence relation generated by
base change (replacing $t$ by $t^{a}$ with $a\in \mathbb{Q}_{>0}$).
\end{Thm}
\section{Moduli of Algebraic K3 surfaces}
\subsection{Satake compactification}
\label{K3.Sat.sec}
Let $\mathcal{F}_{2d}$ be the moduli space of
polarized K3 surfaces of degree $2d$ possibly with ADE singularities.
Its structure is known as follows. Let
$\Lambda_{\rm K3}:=E_{8}(-1)^{\oplus 2}\oplus U^{\oplus 3}$ be the K3 lattice
and fix a primitive vector $\lambda_{2d}$ with $(\lambda_{2d},\lambda_{2d})=2d$
and $\Lambda_{2d}:=\lambda_{2d}^{\perp}$.
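The lattices just introduced have signatures $(3,19)$ (for $\Lambda_{\rm K3}$) and $(2,19)$ (for $\Lambda_{2d}$); the following small computation (ours, purely illustrative) verifies this, using that for $\lambda_{2d}=e+df$ in a hyperbolic summand $U=\langle e,f\rangle$ one has $\lambda_{2d}^{\perp}\simeq \langle -2d\rangle\oplus U^{\oplus 2}\oplus E_{8}(-1)^{\oplus 2}$.

```python
from fractions import Fraction

def signature(gram):
    # Diagonalize a symmetric integer matrix over Q by simultaneous
    # row and column operations (congruence), then count diagonal signs.
    n = len(gram)
    A = [[Fraction(x) for x in row] for row in gram]
    for i in range(n):
        if A[i][i] == 0:
            for j in range(i + 1, n):
                if A[i][j] != 0:
                    # choose t so the new (i,i) entry 2t*A[i][j] + t^2*A[j][j]
                    # is nonzero
                    t = 1 if 2 * A[i][j] + A[j][j] != 0 else 2
                    for k in range(n):
                        A[i][k] += t * A[j][k]
                    for k in range(n):
                        A[k][i] += t * A[k][j]
                    break
        if A[i][i] == 0:
            continue
        for j in range(i + 1, n):
            c = A[j][i] / A[i][i]
            for k in range(n):
                A[j][k] -= c * A[i][k]
            for k in range(n):
                A[k][j] -= c * A[k][i]
    diag = [A[i][i] for i in range(n)]
    return sum(1 for v in diag if v > 0), sum(1 for v in diag if v < 0)

def direct_sum(*blocks):
    # Block-diagonal Gram matrix of an orthogonal direct sum of lattices.
    n = sum(len(b) for b in blocks)
    G = [[0] * n for _ in range(n)]
    off = 0
    for b in blocks:
        for r in range(len(b)):
            for c in range(len(b)):
                G[off + r][off + c] = b[r][c]
        off += len(b)
    return G

U = [[0, 1], [1, 0]]  # hyperbolic plane
# E8 Cartan matrix, Bourbaki numbering (node 2 attached to node 4):
edges = [(1, 3), (3, 4), (4, 5), (5, 6), (6, 7), (7, 8), (2, 4)]
E8 = [[2 if r == c else 0 for c in range(8)] for r in range(8)]
for i, j in edges:
    E8[i - 1][j - 1] = E8[j - 1][i - 1] = -1
E8neg = [[-x for x in row] for row in E8]  # E8(-1), negative definite

K3 = direct_sum(E8neg, E8neg, U, U, U)  # Lambda_K3
assert signature(K3) == (3, 19)

d = 7  # an illustrative polarization degree
L2d = direct_sum([[-2 * d]], U, U, E8neg, E8neg)  # lambda_{2d}^perp
assert signature(L2d) == (2, 19)
```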
The complex manifold
$$\Omega(\Lambda_{2d}):=\{[w]\in \mathbb{P}(\Lambda_{2d}\otimes \mathbb{C})\mid
(w,w)=0,\ (w,\bar{w})>0\}.$$
has two connected components.
We choose one component and denote it by $\mathcal{D}_{\Lambda_{2d}}$.
Let $O(\Lambda_{\rm K3})$ denote the group of automorphisms
of the lattice $\Lambda_{\rm K3}$
preserving the bilinear form, and
set
\begin{align*}
\tilde{O}(\Lambda_{2d}):=\{g|_{\Lambda_{2d}} : g\in O(\Lambda_{\rm K3}),\,
g(\lambda_{2d})=\lambda_{2d}\}.
\end{align*}
The group $\tilde{O}(\Lambda_{2d})$ naturally acts on $\Omega(\Lambda_{2d})$.
We define $\tilde{O}^{+}(\Lambda_{2d})$ to be
the index two subgroup of $\tilde{O}(\Lambda_{2d})$
consisting of the elements preserving each connected component
of $\Omega(\Lambda_{2d})$.
Then it is well-known that
\begin{align*}
\mathcal{F}_{2d}
\simeq
\tilde{O}^{+}(\Lambda_{2d})\backslash \mathcal{D}_{\Lambda_{2d}}
\simeq
\tilde{O}(\Lambda_{2d})\backslash \Omega(\Lambda_{2d}).
\end{align*}
Let $\overline{\mathcal{F}_{2d}}^{{\rm Sat},\tau_{\rm ad}}$
(or simply $\overline{\mathcal{F}_{2d}}^{{\rm Sat}}$ in our papers)
be the Satake compactification of $\mathcal{F}_{2d}$
corresponding to the adjoint representation of $O(2,19)$.
It decomposes as
\[\overline{\mathcal{F}_{2d}}^{{\rm Sat}}=
\mathcal{F}_{2d}\sqcup \bigcup_{l} \mathcal{F}_{2d}(l)
\sqcup \bigcup_{p} \mathcal{F}_{2d}(p),\]
where $l$ runs over one-dimensional isotropic
subspaces of $\Lambda_{2d}\otimes \mathbb{Q}$,
and $p$ runs over two-dimensional isotropic subspaces of
$\Lambda_{2d}\otimes \mathbb{Q}$.
Also, we simply define the tropical geometric compactification of
$\mathcal{F}_{2d}$ as this $\overline{\mathcal{F}_{2d}}^{\rm Sat}$.
The boundary component $\mathcal{F}_{2d}(l)$ is given as
\[
\mathcal{F}_{2d}(l)
= \{v\in (l^{\perp}/l) \otimes \mathbb{R} \mid (v,v)>0\}/\sim.
\]
Here $v \sim v'$ if $g\cdot v=c v'$ for some $g\in \tilde{O}^{+}(\Lambda_{2d})$
and $c\in \mathbb{R}^{\times}$.
We have $\mathcal{F}_{2d}(l)=\mathcal{F}_{2d}(l')$
if $g\cdot l=l'$ for some $g\in \tilde{O}^{+}(\Lambda_{2d})$
and $\mathcal{F}_{2d}(l)\cap\mathcal{F}_{2d}(l')=\emptyset$ if otherwise.
Since $(l^{\perp}/l) \otimes \mathbb{R}$ has signature $(1,18)$,
there is an isomorphism
\begin{multline*}
\{v\in (l^{\perp}/l) \otimes \mathbb{R} \mid (v,v)>0\}/\mathbb{R}^{\times}\\
\simeq O(1,18)/ O(1)\times O(18)
\end{multline*}
and hence $\mathcal{F}_{2d}(l)$
is an arithmetic quotient of $O(1,18)/O(1)\times O(18)$.
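As an elementary dimension count (included for the reader's convenience): $\dim_{\mathbb{R}} O(1,18)=\tbinom{19}{2}=171$ and $\dim_{\mathbb{R}}\bigl(O(1)\times O(18)\bigr)=\tbinom{18}{2}=153$, so each boundary component $\mathcal{F}_{2d}(l)$ has real dimension $171-153=18$, while $\mathcal{F}_{2d}$ itself is an arithmetic quotient of the $19$-dimensional (complex) type~IV domain attached to the signature-$(2,19)$ lattice $\Lambda_{2d}$.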
The other component $\mathcal{F}_{2d}(p)$ is a point
and $\mathcal{F}_{2d}(p)=\mathcal{F}_{2d}(p')$
if and only if $g\cdot p=p'$
for some $g\in \tilde{O}^+(\Lambda_{2d})$.
Therefore, if we take representatives of $l$ and $p$
from each equivalence class, we get a finite decomposition:
\[\overline{\mathcal{F}_{2d}}^{{\rm Sat}}=
\mathcal{F}_{2d}\sqcup \bigsqcup_{l} \mathcal{F}_{2d}(l)
\sqcup \bigsqcup_{p} \mathcal{F}_{2d}(p).\]
\subsection{Tropical K3 surfaces}\label{trop.K3.1}
In our paper, what we mean by a \textit{tropical polarized K3 surface}
is a topological space $B$
homeomorphic to the sphere $S^{2}$, with an
affine structure away from a certain finite set of points ${\rm Sing}(B)$,
together with a metric $g$ which is Monge-Amp\`ere
with respect to the affine structure on $B\setminus {\rm Sing}(B)$.
Studies of such objects as tropical versions of
K3 surfaces were pioneered in the well-known papers of Gross-Wilson \cite{GW} and
Kontsevich-Soibelman \cite{KS}.
Here we assign such a tropical K3 surface to each point in
a boundary component $\mathcal{F}_{2d}(l)$ as follows.
Let $l$ be an oriented one-dimensional isotropic
subspace of $\Lambda_{2d}\otimes \mathbb{Q}$.
Write $e$ for the primitive element of $l$ such that
$\mathbb{R}_{> 0} e$ agrees with the orientation of $l$.
Take a vector $v\in (l^{\perp}/l)\otimes \mathbb{R}$ such that $(v,v)>0$.
Write $[e,v]$ for the corresponding point in $\mathcal{F}_{2d}(l)$.
Then there exists a (not necessarily projective) K3 surface $X$
and a marking
$\alpha_{X}\colon H^2(X,\mathbb{Z})\to \Lambda_{\rm K3}$ with
\begin{itemize}
\item $\alpha_X(H^{2,0})\subset \mathbb{R}\lambda+\sqrt{-1}\mathbb{R}v$,
\item $\alpha_{X}^{-1}(e)$ is in the closure of
the K\"ahler cone.
\end{itemize}
The pair $(X,\alpha_{X})$ is unique up to isomorphism.
Let $L$ be a line bundle on $X$ such that $\alpha_{X}([L])=e$.
Then we get an elliptic fibration $f:X\to B(\simeq \mathbb{P}^1)$.
Take a holomorphic volume form $\Omega$ on $X$ such that
$\alpha_X([\mathop{\mathrm{Re}}\nolimits \Omega])=\lambda$.
The map $f$ is a Lagrangian fibration with respect to
the symplectic form $\mathop{\mathrm{Re}}\nolimits \Omega$.
Hence it gives an affine manifold structure
on $B\setminus\Delta$,
where $\Delta$ denotes the finite set of singular points.
Similarly, the imaginary part $\mathop{\mathrm{Im}}\nolimits \Omega$ gives another affine manifold
structure on $B\setminus\Delta$.
We endow the base space $B$ with
the McLean metric (\cite{ML}),
where we regard $f$ as a special Lagrangian fibration after hyperK\"ahler rotation.
A straightforward calculation shows that this coincides with the
``special K\"ahler metric'' $g_{\it sp}$ introduced and studied
in \cite{DW96, Hit, Freed99}, which appears as the metric on $\mathbb{P}^1$
in \cite{GTZ2}. We rescale the metric to make its diameter $1$
and denote the resulting tropical K3 surface by $\Phi_{\rm alg}([e,v])$.
\begin{Rem}
Recall the concepts of the \textit{class of metric} (\textit{metric class})
and the \textit{radiance obstruction}
of Mong\'e-Amp\'ere manifolds $B$ with singularities. They are
introduced in \cite{KS} and discussed in \cite{GS.logI} in more details.
We denote them by
$k(B)\in H^1(B, i_*\tilde{\Lambda}^{\vee}\otimes \mathbb{R})$ and
$c(B)\in H^1(B,i_*\Lambda)$, respectively.
Here, $\Lambda$ is the
affine structure regarded as a $\mathbb{Z}^{\dim(B)}$-local system in the tangent
bundle $T(B\setminus \Delta)$, $(-)^{\vee}$ denotes the
dual local system, and $\tilde{\Lambda}^{\vee}$ is the local system of affine
functions. In particular, we naturally have a morphism of local systems
$f\colon \tilde{\Lambda}^{\vee}\to \Lambda^{\vee}$ which induces
$f_*\colon H^1(B, i_*\tilde{\Lambda}^{\vee})\to H^1(B, i_*\Lambda^{\vee})$.
It is also natural to slightly modify the definition of the metric class
and extract its ``linear'' part $f_* k(B)$.
Then
it naturally recovers the data $\overline{v}\in (e^{\perp}\otimes \mathbb{R}/\mathbb{R}e)$,
i.e., we have
$f_*k(\Phi_{\rm alg}([e,v]))=[v],$
under the natural identification
$H^1(\Phi_{\rm alg}([e,v]), i_*\Lambda^{\vee}\otimes \mathbb{R})\hookrightarrow
(e^{\perp}\otimes \mathbb{R}/\mathbb{R} e)$
which comes from the Leray spectral sequence applied to the elliptic fibration
$X\twoheadrightarrow \Phi_{\rm alg}([e,v])$ in \S \ref{trop.K3.1}.
Our results in \cite{TGC.II} and Theorem~\ref{Ag.TGC.Satake.MS} for $A_{g}$
can be re-interpreted similarly (but with weight $1$).
\end{Rem}
\begin{Rem}
\textit{Yuto Yamamoto} \cite{Yam} has interesting ongoing work
which seems to be related to ours,
in which he constructs a sphere with an integral affine structure
from the tropicalization of an anticanonical hypersurface
in a toric Fano 3-fold, and computes its radiance obstruction.
\end{Rem}
\subsection{Gromov-Hausdorff collapse of K3 surfaces}
For a point in $\mathcal{F}_{2d}$
we have a corresponding polarized K3 surface $(X,L)$,
equipped with a natural Ricci-flat metric.
For $[e,v]\in \mathcal{F}_{2d}(l)$ we defined
$\Phi_{\rm alg}([e,v])$ in the previous subsection.
For a point in $\mathcal{F}_{2d}(p)$
we assign a (one-dimensional) segment,
which we denote by $\Phi_{\rm alg}(\mathcal{F}_{2d}(p))$.
Let us normalize these metric spaces so that their diameters are one.
We thus obtain a map
$\Phi_{\rm alg}\colon \overline{\mathcal{F}_{2d}}^{{\rm Sat}}
\to \{\text{compact metric spaces with diameter one}\}$.
Here, we equip the right-hand side (the target space) with the Gromov-Hausdorff distance
and denote it by ${\it CMet}_{1}$.
\begin{Conj}\label{K3.Main.conjecture}
The map
\[\Phi_{\rm alg}\colon \overline{\mathcal{F}_{2d}}^{{\rm Sat}}
\to {\it CMet}_{1}\]
given above is continuous.
\end{Conj}
We would like to simply set
the tropical geometric compactification of $\mathcal{F}_{2d}$ as
$\overline{\mathcal{F}_{2d}}^{{\rm T}}:=\overline{\mathcal{F}_{2d}}^{{\rm Sat}}$.
Indeed, if Conjecture~\ref{K3.Main.conjecture} holds,
we get a continuous map $\overline{\mathcal{F}_{2d}}^{{\rm Sat}}\to
\overline{\mathcal{F}_{2d}}^{\rm GH}$ and we also observe that
each $\mathcal{F}_{2d}(l)$ encodes the affine structure of the limit tropical K3
surface as well. (This answers a question posed to the first author by Prof.\ B.~Siebert in 2016,
regarding whether one can associate a tropical affine structure to the limit of any
collapsing \textit{sequence}.)
So far, we have partially confirmed the conjecture.
The case of ($A_{1}$-singular flat) Kummer surfaces, with $3$-dimensional moduli,
is easily reduced to \cite{TGC.II}.
More generally, we have proved the following.
In particular, Conjecture~\ref{K3.Main.conjecture} holds
at least away from finitely many points.
\begin{Thm}\label{K3.Main.Conjecture.18.ok}The map
$\Phi_{\rm alg}$ is continuous on $\overline{\mathcal{F}_{2d}}^{\rm Sat}\setminus (\bigcup_{p}\mathcal{F}_{2d}(p))$.
It is continuous also
when restricted to the boundary $\partial{\overline{\mathcal{F}_{2d}}^{\rm Sat}}
=\overline{\mathcal{F}_{2d}}^{\rm Sat}\setminus \mathcal{F}_{2d}$.
\end{Thm}
The proof of the former half of the statements
involves some symmetric space theory,
hyperK\"ahler geometry,
algebraic geometry of moduli, and a priori analytic estimates.
The estimates heavily
depend on \cite{Tos,GW,GTZ1,GTZ2,TZ}
and their extensions.
One nontrivial part of the extension
is, for instance, to make many of the $C^{2}$-estimates in \textit{op.\ cit.},
following the methods of \cite{Yau},
locally uniform with respect to a family of elliptic K3 surfaces
even along degenerations to orbifolds.
During our work, we learnt that
\textit{Kenji Hashimoto, Yuichi Nohara, Kazushi Ueda} \cite{HNU}
also studied the Gromov-Hausdorff collapses along a certain $2$-dimensional
subvariety of $\mathcal{F}_{2d}$, i.e., the moduli
of $E_{8}^{\oplus 2}\oplus U(\oplus \langle -2 \rangle)$-polarized K3 surfaces.
Moreover, a result of Hashimoto and Ueda \cite{HU} implies that
the restriction of $\Phi_{\rm alg}$ to the boundary
is a generically two-to-one map.
We appreciate their generous discussions with us.
Theorem \ref{K3.Main.Conjecture.18.ok} (resp., Conjecture \ref{K3.Main.conjecture}) combined with
Proposition~\ref{MS.lim} determines the Gromov-Hausdorff limits of Type III (resp., Type II) one-parameter families of
Ricci-flat algebraic K3 surfaces, which solves a conjecture of
Kontsevich-Soibelman \cite[Conjecture 1]{KS}, Todorov, and Gross-Wilson (cf., e.g., \cite[Conjecture 6.2]{Gross})
in the K3 surface case.
In the next section, we discuss collapsing of
general K\"ahler K3 surfaces, which are not necessarily algebraic.
\section{Moduli of K\"ahler K3 surfaces}
It is known
(cf.\ \cite{Tod}, \cite{Looi}, \cite{KT})
that the moduli space
of all Einstein metrics on K\"ahler K3 surfaces
(including orbifold metrics)
again has the structure of a Riemannian locally symmetric space:
$$O(\Lambda_{\rm K3})\backslash SO_{0}(3,19)/
(SO(3)\times SO(19)),$$
which we denote by $\mathcal{M}_{\rm K3}$.
An enriched version encoding also complex structures of the K3 surfaces is
$$\ensuremath{\mathbb{R}}_{>0}\times (O(\Lambda_{\rm K3})\backslash SO_{0}(3,19)/
(SO(2)\times SO(19))).$$
Roughly speaking,
this is a union of K\"ahler cones of ADE K3 surfaces with marking
of the minimal resolutions.
Thus we can again compare a
Satake compactification of $\mathcal{M}_{\rm K3}$
with the Gromov-Hausdorff compactification.
Inside the Satake compactification for the adjoint representation,
we consider an open locus (a partial compactification of $\mathcal{M}_{\rm K3}$)
$\mathcal{M}_{\rm K3} \sqcup
\mathcal{M}_{\rm K3}(a), $
where $\mathcal{M}_{\rm K3}(a)$ denotes
the $36$-dimensional boundary stratum
corresponding to an isotropic rational line
$l=\mathbb{Q} e$ in $\Lambda_{\rm K3}\otimes \mathbb{Q}$,
with primitive integral generator $e$,
which is unique up to $O(\Lambda_{\rm K3})$.
Then for each point $p=\langle e, v_{1},v_2 \rangle$ in the stratum
$\mathcal{M}_{\rm K3}(a)$, we consider
the marked (possibly ADE) K3 surface $X_{p}$ with period $\langle v_{1},v_2\rangle$. It is known that there is an elliptic K3 surface structure on
$X_p$ with the fiber class $e$. We then define
$\Phi(p)$ as its base, biholomorphic to $\mathbb{P}^1$, with the McLean metric,
which depends only on $\langle v_1,v_2\rangle$.
Similarly to the projective case (Theorem~\ref{K3.Main.Conjecture.18.ok}),
\cite{OO} proves the following for the non-algebraic situation:
\begin{Thm}\label{K3.Main.conjecture2}
The map
$$
\Phi\colon
\mathcal{M}_{\rm K3} \sqcup
\mathcal{M}_{\rm K3}(a) \to {\it CMet}_{1}$$
given above is continuous.
Here, we equip the right-hand side with the Gromov-Hausdorff topology.
\end{Thm}
In \cite{OO}, we further explicitly
define an extension to the whole Satake compactification
$\Phi\colon \overline{\mathcal{M}_{\rm K3}}^{\rm Sat}\to {\it CMet}_{1}$,
and conjecture that this is still continuous with respect to the
Gromov-Hausdorff topology.
For the boundary strata other than $\mathcal{M}_{\rm K3}(a)$,
we assign flat tori $\ensuremath{\mathbb{R}}^i/\ensuremath{\mathbb{Z}}^i\ (i=1,2,3)$
modulo $(-1)$-multiplication. We show that $\Phi$ restricted to the closure of the locus which
parametrizes $\ensuremath{\mathbb{R}}^4/\ensuremath{\mathbb{Z}}^4$ modulo $\pm 1$, which includes those boundary strata, is continuous.
Furthermore,
we also prove that the restriction of $\Phi$ to
the closure of $\mathcal{M}_{\rm K3}(a)$ is continuous
by using Weierstrass models.
\section{Higher dimensional case}
We expect that our results for K3 surfaces naturally extend to
higher dimensional compact hyperK\"ahler manifolds. Let us focus on
algebraic case in this notes. We set up as follows.
Fix any connected moduli space $M$ of polarized $2n$-dimensional irreducible holomorphic
symplectic manifolds $(X,L)$ whose second cohomology
$H^{2}(X,\ensuremath{\mathbb{Z}})$ is isomorphic (as a lattice)
to $\Lambda$. By
\cite{Ver, Mark} (\cite[Theorem 3.7]{GHS}),
it is a Zariski open subset of a Hermitian locally symmetric space of orthogonal type $\Gamma\backslash \mathcal{D}_{M}$.
Then (a rough version of) our
conjecture for the algebraic case (in \cite{OO}) is as follows:
\begin{Conj}
There
is a continuous map $\Psi$ (called the ``geometric realization map'')
from the Satake compactification $(M\subset)\overline{\Gamma\backslash \mathcal{D}_M}^{\rm Sat,\tau_{ad}}$
with respect to the adjoint representation
to the Gromov-Hausdorff compactification of $M$, extending the identity map on $M$.
The $(b_{2}(X)-4)$-dimensional boundary strata of $\overline{\Gamma\backslash \mathcal{D}_M}^{\rm Sat,\tau_{ad}}$ parametrize via $\Psi$ the projective space
$\mathbb{P}^{n}$ with special K\"ahler metrics in the sense of \cite{Freed99}
and the
metric spaces parametrized by the
$0$-dimensional cusps are all homeomorphic to the closed ball of dimension $n$.
\end{Conj}
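For example (standard Betti numbers, recalled only for orientation): for $X$ of K3$^{[n]}$-type ($n\ge 2$) one has $b_{2}(X)=23$, so the strata in the conjecture are $19$-dimensional, while for generalized Kummer type $b_{2}(X)=7$ and they are $3$-dimensional; the K3 surface case $b_{2}=22$ recovers the $18$-dimensional boundary components of \S\ref{K3.Sat.sec}.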
At the time of writing this note,
the authors have only succeeded in proving that $(M\subset )\Gamma\backslash \mathcal{D}_M$
is the moduli of polarized symplectic varieties with continuous (non-collapsing) weak Ricci-flat K\"ahler metrics,
and in making some progress on the necessary algebro-geometric preparations, in particular for
the case of K3$^{[n]}$-type.
\begin{Rem}[Calabi-Yau case]
In \cite{OO}, we also propose an extension
of Conjecture~\ref{K3.Main.conjecture} for general Calabi-Yau varieties under some technical conditions,
although there is much less evidence in that case.
\end{Rem}
\textbf{Acknowledgement}
We are grateful for the opportunities to talk on \cite{OO}
in various countries and cities. The first was a
talk by the first author at a Clay conference held at Oxford in September
2016, when
Theorems~\ref{K3.Main.Conjecture.18.ok} and \ref{K3.Main.conjecture2}
were only partially proved and claimed;
their confirmation in the form of this note has taken a long time.
In particular, we thank
Kenji Hashimoto, Shouhei Honda, Radu Laza, Daisuke Matsushita, Shigeru Mukai, Yoshinori Namikawa,
Bernd Siebert, Cristiano Spotti, Song Sun,
Yuichi Nohara, Kazushi Ueda, and Ken-ichi Yoshikawa
for helpful discussions. There are plans for lecture series by the first author on this topic during the next fall semester in Nagoya and Tokyo.
The first author is partially supported by JSPS Grant-in-Aid (S) No.\ 16H06335
and Grant-in-Aid for Early-Career Scientists No.\ 18K13389.
The second author is partially supported by JSPS KAKENHI Grant No.\ 16K17562.
https://quantumcomputing.meta.stackexchange.com/questions/92/request-to-change-the-sample-question-and-answers-on-the-tour-page-to-something

Request to change the sample question and answers on the Tour page to something more relevant to this site

On the current tour page of the site, the example question and answer are completely irrelevant to this site.

The sample question is:

"How to prevent unicorns from eating daisies

I love the unicorns who hang out behind my kitchen, but they do tend to eat rather a lot of the daisies. What can I do about this?"

The sample (accepted) answer is:

"The easiest solution is to spray the daisies lightly with corn syrup using a standard vegetable oil sprayer. It won't hurt the flowers, but unicorns hate the sickly-sweet smell and will avoid it at all costs!"

I request the dev team to have a look into this issue and change the sample question/answer to something more relevant. Maybe add one of the most upvoted questions on this site as a sample, like this one: Can a Turing machine simulate a quantum computer?

4 Answers

The system will select questions (or Moderators can do so) once the site has sufficient activity for that feature to work. I don't recall the exact criteria, but when I try to pick questions to replace those placeholders, the system reports

"There are currently no viable question candidates for the About page"

In essence, there is not yet enough activity on this site to supplant the placeholders on the tour page. Keep working on your content. I don't think we are that far off.

• I see. Thank you for the quick response! – Sanchayan Dutta Mar 24 '18 at 18:57
• Ok, the 'quantum simulating' question has my vote, since 1) it is a decent question and 2) there is a short, but not bad answer and there is a clearly better and longer answer (hence 'best answers rise to the top' is clear) – Discrete lizard Mar 24 '18 at 20:14
• @RobertCartaino It has been a few days. Perhaps you could check again if there are questions that can replace the unicorns? – Discrete lizard Apr 1 '18 at 13:07
• @RobertCartaino in addition to what Discrete lizard said: We already have more questions than other public-beta sites and they have some non-unicorn questions already. – MEE Apr 7 '18 at 10:27
• Choosing a question for the Tour page says: "There are currently no viable question candidates for the About page." meta.stackexchange.com/questions/163947/… – Robert Cartaino Apr 7 '18 at 11:42
• @RobertCartaino is there an update regarding a change of the question on the tour page? – MEE May 17 '18 at 14:15

Yes, this seems to be a good idea, but I would propose a different example question, because most other sites show short questions with short answers on their tour page.

Short, high-scoring questions with short accepted answers are, for example:

(Just looked at the first page of questions sorted by votes)

Yeah, we should do this ASAP! Experts might not like unicorns, and as we aren't actually discussing unicorns, we might chase away experts that happen to hate unicorns!

And the question has been automatically selected as "What are reliable references on analytical and/or numerical studies of threshold theorems under faulty quantum error correction?"

It's possible (but certainly not necessary) that we could manually change that in the future if people so wished, once (if) other suitable questions get asked and answered. Apparently, the question as well as two answers to that question need to be less than or equal to 400 characters to be suitable, and there's a SE Data Explorer query for this.

Thanks to @Sherif F. for writing the question and @DaftWullie and @Dripto Debroy for writing the answers!
https://crypto.stackexchange.com/questions/202/should-we-mac-then-encrypt-or-encrypt-then-mac/31514

# Should we MAC-then-encrypt or encrypt-then-MAC?

Most of the time, when some data must be encrypted, it must also be protected with a MAC, because encryption protects only against passive attackers. There are some nifty encryption modes which include a MAC (EAX, GCM...) but let's assume that we are doing old-style crypto, so we have a standalone encryption method (e.g. AES with CBC chaining and PKCS#5 padding) and a standalone MAC (e.g. HMAC with SHA-256). How should we assemble the encryption and the MAC?

• MAC-then-Encrypt: Compute the MAC on the cleartext, append it to the data, and then encrypt the whole? (That's what TLS does)
• Encrypt-and-MAC: Compute the MAC on the cleartext, encrypt the cleartext, and then append the MAC at the end of the ciphertext? (That's what SSH does)
• Encrypt-then-MAC: Encrypt the cleartext, then compute the MAC on the ciphertext, and append it to the ciphertext? (In that case, we do not forget to include the initialization vector (IV) and the encryption method identifier into the MACed data.)

The first two options are often called "MAC-then-encrypt" while the third is "encrypt-then-MAC". What are the arguments for or against either?

• I've heard the second method most commonly referred to as encrypt-and-mac. – Stavros Korokithakis Oct 12 '13 at 14:35
• I am a bit perplexed by the fact that this question seems highly related to crypto.stackexchange.com/questions/5458/…, but has diametrically opposed answers... – Clément Mar 24 '14 at 12:41
• @Clément: the confusion comes from the widespread (but wrong) habit of calling MACs "signatures". In fact, MACs and signatures are very different things used in very different contexts.
Sign-then-encrypt protocols also use a distinct encryption key for each message, which nullifies all padding oracle attacks; and the signature is meant to serve as proof (e.g. in a trial), so it MUST be applied to the plaintext message. In MAC+encrypt contexts, the same symmetric key is often reused, and there is no "proof" requirement. – Thomas Pornin Apr 3 '14 at 10:56
• @Clément one difference is that here the secrets are shared, while in the public-key setting, anyone knowing the public key of the recipient can do the encryption part well. – npouillard Jul 2 '15 at 8:29
• Moxie Marlinspike says, 'When it comes to designing secure protocols, I have a principle that goes like this: if you have to perform any cryptographic operation before verifying the MAC on a message you've received, it will somehow inevitably lead to doom'. See moxie.org/blog/the-cryptographic-doom-principle – mti2935 Dec 20 '17 at 18:01

I'm assuming you actually know all of this better than I do. Anyway, this paper neatly summarizes all these approaches, and what level of security they do or don't provide. I shall paraphrase it in English, rather than mathematical notation, as I understand it.

• Encrypt-then-MAC:

• Provides integrity of the ciphertext. Assuming the MAC shared secret has not been compromised, we ought to be able to deduce whether a given ciphertext is indeed authentic or has been forged; for example, in public-key cryptography anyone can send you messages. EtM ensures you only read valid messages.
• Plaintext integrity.
• If the cipher scheme is malleable we need not be so concerned, since the MAC will filter out this invalid ciphertext.
• The MAC does not provide any information on the plaintext since, assuming the output of the cipher appears random, so does the MAC.
In other words, we haven't carried any structure from the plaintext into the MAC.
• MAC-then-Encrypt:

• Does not provide any integrity on the ciphertext, since we have no way of knowing until we decrypt the message whether it was indeed authentic or spoofed.
• Plaintext integrity.
• If the cipher scheme is malleable it may be possible to alter the message to appear valid and have a valid MAC. This is a theoretical point, of course, since practically speaking the MAC secret should provide protection.
• Here, the MAC cannot provide any information on the plaintext either, since it is encrypted.
• Encrypt-and-MAC:

• No integrity on the ciphertext again, since the MAC is taken against the plaintext. This opens the door to some chosen-ciphertext attacks on the cipher, as shown in section 4 of Breaking and provably repairing the SSH authenticated encryption scheme: A case study of the Encode-then-Encrypt-and-MAC paradigm.
• The integrity of the plaintext can be verified.
• If the cipher scheme is malleable, the contents of the ciphertext could well be altered, but on decryption, we ought to find the plaintext is invalid. Of course, any implementation error that can be exploited in the decryption process has been by that point.
• May reveal information about the plaintext in the MAC. Theoretical, of course, but a less than ideal scenario. This occurs if the plaintext messages are repeated and the MACed data does not include a counter (it does in the SSH 2 protocol, but only as a 32-bit counter, so you should take care to re-key before it overflows).

In short, Encrypt-then-MAC is the most ideal scenario. Any modifications to the ciphertext that do not also have a valid MAC can be filtered out before decryption, protecting against any attacks on the implementation. The MAC cannot, also, be used to infer anything about the plaintext.
MAC-then-Encrypt and Encrypt-and-MAC both provide different levels of security, but not the complete set provided by Encrypt-then-MAC.

• Please also note the "padding oracle attack" in the answer from Thomas. – Maarten Bodewes Nov 29 '11 at 20:55
• For personal/future reference: Encrypt-then-MAC = encrypt the plaintext, MAC the ciphertext + IV, then append it to the ciphertext. MAC-then-Encrypt = MAC the plaintext, append the MAC to the plaintext, then encrypt it all. Encrypt-and-MAC = encrypt and MAC the plaintext, then append the MAC onto the ciphertext. – MD Kieran Jun 25 '13 at 21:23
• Could you perhaps comment on crypto.stackexchange.com/questions/5458/… ? It seems to be closely related, but with diametrically opposed answers... – Clément Mar 24 '14 at 12:42
• @Clément it is a good point, although I don't think you'd use a MAC for identity verification... but there are definitely those who disagree that encrypt-then-MAC is the best solution and their arguments are very, very valid too. – user46 Mar 30 '14 at 19:21
• @Clément I have had a go at explaining the difference via a separate question, see crypto.stackexchange.com/q/15485/46 – feel free to ask for clarification there :) – user46 Apr 9 '14 at 9:18

@Ninefingers answers the question quite well; I just want to add a few details.

Encrypt-then-MAC is the mode which is recommended by most researchers. Mostly, it makes it easier to prove the security of the encryption part (because, thanks to the MAC, a decryption engine cannot be fed with invalid ciphertexts; this yields automatic protection against chosen-ciphertext attacks) and also avoids any trouble to confidentiality from the MAC (since the MAC operates on the encrypted text, it cannot reveal anything about the plaintext, regardless of its quality).
Note that the padding oracle attacks, which have been applied in the field to ASP.NET, are chosen-ciphertext attacks.

Ferguson and Schneier, in their book Practical Cryptography, have argued the opposite: that MAC-then-encrypt (or MAC-and-encrypt) is the "natural" order and that encrypt-then-MAC is overly complex. The sore point of encrypt-then-MAC is that you have to be careful about what you MAC: you must not forget the initialization vector, or (in case the protocol allows algorithm flexibility) the unambiguous identifier for the encryption algorithm; otherwise, the attacker could change either, inducing a plaintext alteration which would be undetected by the MAC. To prove their point, Ferguson and Schneier describe an attack over an instance of IPsec in which the encrypt-then-MAC was not done properly.

So while encrypt-then-MAC is theoretically better, it is also somewhat harder to get right.

• It really depends on the priorities of your application, though :) Whenever authentication is the key point, use AtE, and whenever secrecy is paramount, use EtA :) I.e. whenever it's mostly about safety and security, go for AtE, and whenever it's mostly about secrecy and security, go for EtA :) – yeoman Oct 30 '16 at 20:31
• Plus, if you're going for AtE, please use a stream cipher because padding and AtE don't go together very well :) – yeoman Oct 30 '16 at 20:42
• And never ever even consider using the same key for A & E 😬 – yeoman Oct 30 '16 at 20:51
• "you must not forget the initialization vector, or ... the unambiguous identifier for the encryption algorithm" – while implementing, it occurred to me that one has to be mindful as to how to do this: the combined binary string has to be unique (i.e. the mapping injective)! Plain concatenation of binary strings may not have this property if more than one component is of variable length. I opted for ASN-DER.
– Raphael Jul 13 '17 at 15:05
• So to sum it all up, the zero-day paranoids will do MAC-encrypt-MAC. Or even encrypt-MAC-encrypt-MAC. – Pacerier Oct 24 '17 at 8:37

Hugo Krawczyk has a paper titled The Order of Encryption and Authentication for Protecting Communications (or: How Secure Is SSL?). It identifies 3 types of combining authentication (MAC) with encryption:

1. Encrypt then Authenticate (EtA), used in IPsec;
2. Authenticate then Encrypt (AtE), used in SSL;
3. Encrypt and Authenticate (E&A), used in SSH.

It proves that EtA is the secure way to use, and both AtE and E&A are subject to attacks, unless the encryption method is either in CBC mode or is a stream cipher.

The abstract says everything; I emphasized important parts by bolding them:

We study the question of how to generically compose symmetric encryption and authentication when building "secure channels" for the protection of communications over insecure networks. We show that any secure channels protocol designed to work with any combination of secure encryption (against chosen plaintext attacks) and secure MAC must use the encrypt-then-authenticate method. We demonstrate this by showing that the other common methods of composing encryption and authentication, including the authenticate-then-encrypt method used in SSL, are not generically secure. We show an example of an encryption function that provides (Shannon's) perfect secrecy but when combined with any MAC function under the authenticate-then-encrypt method yields a totally insecure protocol (for example, finding passwords or credit card numbers transmitted under the protection of such protocol becomes an easy task for an active attacker).
The same applies to the encrypt-and-authenticate method used in SSH.

On the positive side, we show that the authenticate-then-encrypt method is secure if the encryption method in use is either CBC mode (with an underlying secure block cipher) or a stream cipher (that XORs the data with a random or pseudorandom pad). Thus, while we show the generic security of SSL to be broken, the current practical implementations of the protocol that use the above modes of encryption are safe.

• CBC decrypt before authentication in an online protocol is secure? What about padding oracle attacks? Or do they explicitly specify that you need to verify the MAC in the last block before unpadding? – Maarten Bodewes Nov 29 '11 at 21:04
• Note that OpenSSH also supports E-t-M modes now (can be selected by limiting the hmacs): stribika.github.io/2015/01/04/secure-secure-shell.html – eckes Jan 6 '15 at 17:00
• This list is not exhaustive though; there are cipher modes that provide authenticated encryption, i.e. with no need for a separate hash algorithm (e.g. Galois/Counter Mode). I came up with several own schemes but this is still early experimentation. One of them e.g. is called multi-pass CBC (which cyclically applies CBC several times between standard cipher rounds), which appears to resist not only attacks on same-key CBC-MAC but also padding oracle attacks, but is impractical for large messages. It will be a while before (and if) I publish. – Arne Vogel Jun 20 '19 at 14:56
• @ArneVogel: Sure there are other possibilities. Yet the OP is asking about the order of combining encryption and MAC. – M.S. Dousti Jun 21 '19 at 11:07

Although there are already many answers here, I wanted to strongly advocate AGAINST MAC-then-encrypt. I fully agree with Thomas' first half of the answer, but completely disagree with the second half. The ciphertext is the ENTIRE ciphertext (including IV etc.), and this is what must be MACed.
This is granted.

However, if you MAC-then-encrypt in the straightforward way, then you are completely vulnerable to padding-oracle attacks. By the "straightforward way", what I mean is that you call the "decrypt" function, and afterwards the "MAC verify". However, if you get an error in the decrypt function, then you return this straight away, as a padding error. You have now just got a full-blown padding oracle attack and you are dead. You can now hack the API and give a single error message only, but the time it takes to return the error has to be the same whether it's a MAC error or a padding error. If you think that this is easy, then look at the Lucky13 attack on SSL. It's really, really, really hard (and much harder than just MACing all of the ciphertext).

The argument by Schneier and Ferguson for MAC-then-encrypt has no formal basis at all. The definition of authenticated encryption is met by encrypt-then-MAC and is NOT met by MAC-then-encrypt. Furthermore, most implementations of MAC-then-encrypt are actually completely vulnerable to padding oracle attacks and so are actually broken in practice. Don't do this!

Having said all of the above, my recommendation is to not use any of this. You should be using GCM or CCM today (GCM is much faster, so use it as long as you are sure that your IV won't repeat). A combined authenticated-encryption scheme, with a single API call, and now you won't get in trouble.

• You advocate against MAC-then-encrypt, and then recommend using CCM, which is exactly a MAC-then-encrypt scheme. Isn't it a contradiction? – Penghe Geng Sep 26 '16 at 17:08
• CCM is a mode of encryption with a stand-alone proof of security, and this distinguishes it from MAC-then-encrypt. It should not have the pitfalls of MAC-then-encrypt. The only problem that can arise is a bad implementation.
I am certainly willing to concede that there is more chance of a bad implementation in a mode like this, but then again everything can be badly implemented, so I'm not sure it should be a factor. – Yehuda Lindell Sep 27 '16 at 0:08
• IV-related question: if a mode that does not use an IV (like CTR) is used, should the nonce be used in the MAC computation somehow, or will it be computed only from the plaintext in this case? – Jolinar Aug 11 '18 at 11:53
• The counter or IV must be included in the MAC. It is part of the ciphertext. – Yehuda Lindell Aug 12 '18 at 3:31
• And if the IV/nonce is derived from a salt (and password using a KDF), what should be part of the MAC? The salt, the derived IV, both, or does it not matter? Thanks – Jolinar Aug 12 '18 at 20:11

Moxie Marlinspike calls it, in his article http://www.thoughtcrime.org/blog/the-cryptographic-doom-principle/, the doom principle:

if you have to perform any cryptographic operation before verifying the MAC on a message you've received, it will somehow inevitably lead to doom.

He also demonstrates two attacks which are possible because of trying to decrypt before checking the MAC.

To summarize: "Encrypt Then Authenticate" is the way to go.

• As-is, I have a hard time finding a reason to upvote this answer because your answer is close to a link-only answer. Can you elaborate a bit on what you are quoting? For example: can you explain "why" it is a problem to be able to decrypt a message before checking authentication, and why Moxie says it will "inevitably lead to doom" if you MAC-then-encrypt? That would certainly make your answer more valuable… after all, the question clearly asks "What are the arguments for or against either?" I can't really see you're providing arguments. Instead, you merely point to and quote a site.
– e-sushi Apr 3 '14 at 10:49
• @e-sushi Agreed – it remains that this is one of the best accessible treatments of the subject. – user2398029 Apr 8 '14 at 0:51
• It is worth noting that this principle rules out applying the MAC to the plaintext regardless of whether the MAC is later encrypted or not. The principle itself is intuitively sound because the sooner you can discard a message with an invalid MAC, the less code can be targeted with corrupted inputs. One just has to not fall into the trap of assuming that just because the message carries a valid MAC, there is no way it could possibly be used to exploit buffer overflows or other vulnerabilities. – kasperd Aug 12 '14 at 20:21

I think Encrypt-then-MAC does not deliver plaintext integrity, but only ciphertext integrity. If the MAC over the ciphertext is OK but we then use the wrong key to decrypt (for whatever reason), then the recipient receives a plaintext that the sender did not send and did not vouch for. If this can happen, this is a violation of plaintext integrity.

So, Encrypt-then-MAC is only secure if you can somehow be sure that decryption won't use the wrong key, and that any other processing/decoding done to the ciphertext after checking the MAC is completely correct. This is a somewhat fragile aspect of Encrypt-then-MAC, and one reason why Ferguson and Schneier advocate against Encrypt-then-MAC.

• I edited the answer to more clearly express the point Josef was trying to make. Personally, I think the answer is fine (I upvoted it). – D.W. Apr 5 '14 at 1:08
• I'll respond below, but this threat of using the wrong key to decrypt is really strange. If I verify the MAC using the wrong key, then it will also be broken and rejected. Encrypt-then-MAC should certainly be used.
– Yehuda Lindell Jul 1 '15 at 16:16
• @YehudaLindell, the safest for the paranoid is still MAC-encrypt-MAC – Pacerier Oct 24 '17 at 8:07

The really important thing is: do not use encrypt-and-MAC. The other two you can debate, but both are at least theoretically sound – one might just practically be better than the other. Encrypt-and-MAC falls apart for a very simple reason, though: the MAC is not meant to keep the plaintext secret.

The MAC is based on the plaintext. Authentication is not designed to obscure the plaintext. A MAC, therefore, provides some information about the plaintext used to make it.

The not-quite-appropriate-but-easy-to-understand example is a checksum. If we have a nine-digit number as plaintext and a one-digit checksum, and ship it with the first nine digits encrypted but the checksum not, the checksum is going to help me learn things about the first nine digits of plaintext. If I can somehow find out eight of the nine digits, I can use the checksum to find out what the last digit is. There might be a lot of other things I can do with that checksum that ruin the integrity of the first nine digits.

So, as a recap: do not use encrypt-and-MAC. Otherwise, whatever, you're good.

• "the MAC is not meant to preserve the integrity of the plaintext" would be just as good a reason to avoid MAC-then-encrypt too. – user991 Dec 4 '14 at 4:13
• No – because the MAC and plaintext are both encrypted. Once you decrypt the ciphertext, you're not worried about the integrity anymore; you've decrypted it. – Daniel Dec 4 '14 at 4:15
• ... Yes we are, otherwise there would be no point to any of these constructions. – user991 Dec 4 '14 at 4:53
• I suspect you meant "confidentiality". – user991 Dec 6 '14 at 5:07
• @Pacerier – if you're doing it right, yes.
"Encrypt-and-mac" refers to the stupid strategy of encrypting the plaintext, MACing the plaintext, and then putting the plaintext MAC at the end of the ciphertext. – Daniel Nov 3 '17 at 17:59

There is no property of a MAC that states that information about the input should not be leaked. As such, you should encrypt the message first, then apply a MAC. This way, even if the MAC leaks information, all that is leaked is ciphertext.

• Or MAC-then-encrypt, so that the MAC can't leak information, because it can't be read by any attacker. – Daniel Dec 4 '14 at 4:09
• @Daniel, so how do you address the doom principle by Moxie? – Pacerier Oct 24 '17 at 8:11

Besides the security benefits of encrypt-then-MAC that many other answers have mentioned, there's a performance benefit. Checking the MAC first on the receiving end allows you to reject forged messages without doing the work to decrypt them. Bernstein mentions this in http://cr.yp.to/snuffle/design.pdf (in the section "Should the stream be independent of the plaintext?").

• That's such a tiny performance boost, given that the vast, vast majority of messages will be correct and will go through two steps anyway, that this benefit can be all but ignored, IMHO. – Penghe Geng Sep 26 '16 at 17:14
• @xiaobai I think the idea is that it makes it (slightly) harder for an attacker to DoS you in some (niche) situations. In a DoS attack, all of the packets flooding your server might be failing to authenticate, and the rate at which your server can drop them might matter. – Jack O'Connor Sep 28 '16 at 3:53
• Just note that the "performance benefit" is not going to be at all relevant if encrypt-MAC itself were insecure. – Pacerier Oct 24 '17 at 8:41

If you look at the paper "Tweakable Block Ciphers" by Moses Liskov, Ronald L.
Rivest, and David Wagner, published in Advances in Cryptology – Crypto 2002, Proceedings, 2442, section 4.3, Tweakable Authenticated Encryption (TAE), the MAC is computed over the plaintext, appended to the plaintext, and encrypted along with the plaintext. They then supply a proof of their Theorem 3: "If E is a secure tweakable block cipher, then E used in TAE mode will be unforgeable and pseudorandom".

In order to provide message integrity, a hash or message authentication code (MAC) is used. Sometimes, encryption and integrity are used together, as:

1. Encrypt-then-MAC: provides ciphertext integrity, but no plaintext integrity;
2. MAC-then-encrypt: provides plaintext integrity, but no ciphertext integrity; and
3. Encrypt-and-MAC: provides plaintext integrity, but no ciphertext integrity.

Encrypt-then-MAC is the most secure mode, as any changes to the ciphertext can be filtered out before decryption using a valid MAC code, and this protects the messages against any modification attacks. However, a combination of encryption and MAC, such as Galois/Counter Mode (GCM), which combines the counter mode of encryption with Galois mode of authentication, or Counter with Cipher Block Chaining (CBC)-MAC (CCM), which combines CBC-MAC with the counter mode of encryption, is preferred due to the security strength.

• Re "but no plaintext integrity", doesn't ciphertext integrity assert plaintext integrity? – Pacerier Oct 24 '17 at 8:45

Reading all of this leads me to think that the best solution would be:

## MAC-then-Encrypt-then-MAC

bringing guarantees on both the plaintext and the ciphertext.

I fully agree both are important:

• MAC-then-Encrypt if your plaintext is not structured and does not permit confirming its integrity without a MAC
• Encrypt-then-MAC for the reasons provided in other answers, especially to avoid decrypting bad data

• Could you justify -1 please?
– lalebarde Feb 8 '19 at 17:00

In many applications, only part of the data (m) is encrypted, and some so-called Additional Authenticated Data (AAD, usually some header data including the nonce) a is only authenticated but not encrypted.

Here is my argument: when AAD is used, Authentication-then-Encryption provides an additional layer of protection for the AAD compared to Encryption-then-Authentication, so one may argue it could be more secure in certain usages.

When AAD a is used, if we use Encryption-then-Authentication, we will get:

## E(m) + A(a + E(m))

as the scheme, which means we encrypt m first, then compute the MAC over a concatenated with E(m), and append that MAC to the result. Notice how a is protected by only one layer of cryptographic operation, the MAC operation A.

And if we use Authentication-then-Encryption, we will get:

## E(m + A(a + m))

which means we first MAC the concatenated a and m, then concatenate the resulting MAC code with m, and then do the encryption. Notice a is effectively protected by two layers of cryptographic operations, both A and E.

Now suppose the authentication method is somehow broken and the encryption is not, which is not that far-fetched since some MAC algorithms (like HMAC-MD5) have indeed been found weak; then a will be fully exposed to tampering when using Encryption-then-Authentication. The same cannot be said for Authentication-then-Encryption.

### Update on 2016-09-27:

I agree with some of the top comments that applying a cipher multiple times doesn't always lead to better security, so I retracted that statement. But it actually is not relevant to my main point that AtE provides an additional layer of security, since we are not applying the same cipher to the same data twice in these A/E schemes.

• There is a good reason why we use 3DES rather than 2DES, don't you think?
And if AES is broken, don't you think we'd lose confidentiality whichever encryption scheme we use (with about every cryptographic device having to be redesigned)? – Maarten Bodewes Sep 26 '16 at 20:35
• "If we agree applying a cipher multiple times is more secure than once" – cryptographers don't agree with this. At most, you can increase the strength of the keys, but that's only an issue if key size is an issue, as with DES's 56 bits. AES uses a minimum of 128 bits; key size isn't an issue. (It might be if quantum computing delivers, but then we'd just switch to 256-bit keys.) Is there anything in your answer that doesn't depend on the invalid argument that multiple encryptions are a good thing? – Gilles 'SO- stop being evil' Sep 26 '16 at 20:52
• @MaartenBodewes That reason doesn't apply to algorithms with sufficiently large keys that make brute-force attacks infeasible; you just use double encryption to increase the effective number of rounds and make it more resistant to cryptanalysis. – CodesInChaos Sep 26 '16 at 20:54
• @CodesInChaos Yeah, my reason to make this answer invalid would only be correct if the reason of Gilles to make this answer incorrect would not apply. Unfortunately, two wrongs.... – Maarten Bodewes Sep 26 '16 at 21:59
• @MaartenBodewes A critical security system needs to consider the worst scenario and whether the system has a graceful downgrade rather than a total disaster. If AES is broken, yes, we will lose confidentiality, but if we can still keep authentication, that will be better than nothing. For a lot of applications, authentication is actually more important than confidentiality.
– Penghe Geng Sep 27 '16 at 14:59
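The consensus in the answers above is encrypt-then-MAC, with the MAC covering the nonce/IV together with the ciphertext, and with the tag verified in constant time before any decryption (the "doom principle"). A minimal sketch of that composition using only Python's standard library; the HMAC-derived keystream below is a toy stand-in for a real cipher, an assumption made purely so the example is self-contained — in practice you would use AES-GCM or another vetted AEAD mode:

```python
import hashlib
import hmac
import os


def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    # Toy CTR-style keystream built from HMAC-SHA256.
    # Illustration only -- NOT a vetted cipher.
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]


def encrypt_then_mac(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in
               zip(plaintext, keystream(enc_key, nonce, len(plaintext))))
    # The MAC covers nonce + ciphertext, as the answers above insist:
    # forgetting the nonce/IV would let an attacker alter the plaintext
    # undetected.
    tag = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    return nonce + ct + tag


def decrypt(enc_key: bytes, mac_key: bytes, blob: bytes) -> bytes:
    nonce, ct, tag = blob[:16], blob[16:-32], blob[-32:]
    expected = hmac.new(mac_key, nonce + ct, hashlib.sha256).digest()
    # Verify the MAC in constant time *before* any decryption work.
    if not hmac.compare_digest(expected, tag):
        raise ValueError("MAC check failed")
    return bytes(c ^ k for c, k in
                 zip(ct, keystream(enc_key, nonce, len(ct))))
```

Note the two independent keys: as one comment warns, the encryption key and the MAC key must never be the same. A forged or tampered message is rejected by `compare_digest` before the cipher ever runs, which is exactly the property the encrypt-then-MAC advocates rely on.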
I do not care if South Africa wins the Test match from here. I will be one of the happier souls if India were to win the Test from here or an unlikely draw - if weather were to intervene the proceedings of the first Test match between India and South Africa.
Since its re-admission to international cricket, India has toured the 'Rainbow Nation' six times, including the current tour. Barring the previous series (which was levelled 1-1), India has lost all the tours and is yet to win an ODI series over there. What will be the outcome of this tour? We know surely what happened in the ODI series.
I will neither be talking more about the shorter formats nor predicting how the series will unwrap in the days to come; all I want to share is the way I feel about this Indian team after having watched the first two days of the Test match in this current series.
A young team led by M S Dhoni - who, incidentally, is captaining in his 50th Test (only 14 have managed this feat to date) - wins the toss and elects to bat against the number one team in the world. My mind goes back to the Headingley Test match in 2002, when India chose to bat first in overcast conditions with the series 1-0 down. India won the Test match despite playing in seamer-friendly conditions, levelling the series in emphatic fashion. That was a brave decision!
For the first time (since the time I started to watch cricket) there will be no Sachin Tendulkar, Rahul Dravid, VVS Laxman, Sourav Ganguly, Mohammed Azharuddin, Virender Sehwag or Gautam Gambhir in the Test line-up. These names resonate with my sense of belonging to cricket - and now we have a team whose highest run-getter is MS Dhoni. This team will go down as one of the most inexperienced line-ups in as long as I can remember. Yet there is no sense of panic, and that is partially due to low expectations.
Day one saw the Indian openers batting first; they hung around for a while and got out without making any impact. Then came the partnership of the Test match so far (from an Indian perspective) between Pujara and Virat Kohli. They occupy the #3 and #4 positions respectively in the line-up - positions whose previous incumbents were the top two run-getters for India of all time and among the top three (as I write) in the history of the game.
During the playing days of Dravid and Tendulkar, the third-wicket partnership had always been a crucial phase - one which contributed to many of the recoveries after a poor start and many a time gave momentum to an innings, capitalising on a good opening partnership. The essence was there - although I do not want to compare them in one-to-one terms; the feeling of security was to be seen - controlled aggression meets the soothing influence. And then came the run out - what a pity!
India managed to score 280 and probably could have scored more - I do not wish to get into the details. Twenty short of three hundred was all this new team could manage. Still, that is a lot better than some of the scores the team had posted when it last toured outside the subcontinent (Australia in 2011-12 and England in 2011).
I watched India bowl - and saw the South Africans running away with the game after good initial spells from the pace bowling trio of Zaheer Khan, Ishant Sharma and Shami. With 120 runs on the board for the loss of a solitary wicket, the Proteas were in a commanding position going into tea.
Then came the intense spells from the trio, suitably rewarded for some good bowling which reduced them to 145/6. It was all about the Indian pace attack; they did a tremendous job of damaging the backbone of South Africa's never-ending list of batsmen.
Bouncebackability - a term coined by English football club manager Iain Dowie - comes to my mind. The word is apt for the way the Indians forged ahead in spite of losing the openers while batting, and for the manner in which they took wickets midway through the South African innings.
The match at the end of day two is well balanced. Unless the weather intervenes, this Test will produce a result, with both teams having a fair chance to win.
Historically, this is one of the few away grounds where India is yet to lose a Test match. In 1992, the Indian batsmen fought it out on the last day to draw the Test match; the 1997 match saw stellar performances from the duo of Rahul Dravid and Sourav Ganguly, with victory denied only by poor weather on the final day and stiff resistance from Daryll Cullinan. In 2006, India won the Test match - their first Test win in South Africa.
What will happen this time? I am no astrologer - and I will be thrilled if India were to win and maintain their unbeaten record at this ground. A quote from the movie 'Rocky' comes to my mind - "I was nobody. But that don't matter either, you know? 'Cause I was thinkin', it really don't matter if I lose this fight. It really don't matter if this guy opens my head, either. 'Cause all I wanna do is go the distance."
Going the distance is what I expect from this Indian team - a team of new guys a little short on Test match experience.
Posted by Sports Imitates Life at 23:03
Labels: Cricket, India, Johannesburg, MS Dhoni, South Africa, Sports Imitates Life, Test Cricket
Dhyanchand or Sachin Tendulkar - Bharat Ratna Debate
Yesterday, I was part of an interesting conversation in a WhatsApp group chat - a group of four friends in different locations discussing the merits of Sachin Tendulkar being awarded the highest civilian honour, the 'Bharat Ratna'.
Though no one disputed that the man deserved the award, all of us did express our surprise at him being awarded on the day of his retirement. Personally, I am a huge fan of his, and I would have waited at least five or ten years before bestowing this honour on him.
Now that he has been awarded - thereby becoming the first sportsperson to be recognised with the top civilian honour - another topic came up. Why isn't Major Dhyanchand recognised for all his hockey achievements?
How many of us remember Major Dhyanchand? A lot do, but not as many as remember Sachin Tendulkar, and that is the modern-day truth. I try to make sense of why Dhyanchand's legacy is caught in a maze of illusion when compared with Tendulkar's.
A friend of mine once described the India in which the legend of Tendulkar took birth. It was a time when people had few TV sets, manufactured by a handful of companies. There was no satellite television, and national television had one channel for the whole of India, customised depending on which region you belonged to. Cricket was edging out field hockey slowly by the day, and Tendulkar accelerated that process.
That was the India I was born into, and by the time I was barely six, Tendulkar had made his debut; before I was eight years of age, he had excelled in Pakistan, England, New Zealand and Australia.
Every country loves to have its own set of heroes in any field. The fundamental difference lies in the nature of its countrymen and their reactions. Cricket became the preferred sport, and Tendulkar became the hero and much more.
TV sets were on the rise, and soon there was cable television with multiple channels - the people of India could witness an Indian taking on the best teams across the world and excelling. Everyone could see Tendulkar bat, bowl or even field, and he was widely appreciated. There were famous Indian cricketers from the past, but none reached out to the common man the way Sachin did.
Ardent sports fans always found ways to keep in touch with the best sports stories. Which individual or team stories can one think of - purely in the Indian sports context of the 90s? Viswanathan Anand taking on Garry Kasparov for the world title and Leander Paes winning the bronze medal at the Atlanta Olympics come to mind immediately. Where was hockey, our national sport? Lost in past glory and refusing to accept the present.
There were performances by other athletes here and there - but none matched the consistency of Tendulkar. Mind you, he was still in his 20s at the turn of the millennium, and his aura had reached grand proportions.
Commerce was on the rise in the 90s, which resulted in the creation of 'Brand Tendulkar' - a story in itself. One cannot fault an individual if he is getting a raise in his pay because of his performance. His personal life is held up as an example for a lot of families in India.
Then came the darkest hour of cricket - the match-fixing scandal. The bulk of the senior Indian cricketers were exposed, and of the few who came out clean, Sachin Tendulkar was hailed as a saint. The year 2000 was crucial for Indian cricket and for world cricket in general, with some cricket fans choosing the dark side of cynicism over a new hope.
There was dirt all around when the exposé took place, and Indian cricket had to rebuild its image. Cricket in general needed a fresh start. Sachin Tendulkar, along with Sourav Ganguly, Rahul Dravid, Anil Kumble, Javagal Srinath and VVS Laxman, took up the mantle and took Indian cricket to new heights. Their performance was one of the reasons cricket became, and still is, one of the most heavily invested-in properties in India. Unlike other sports federations in India, development was taken seriously, and Indian cricket has never been healthier at the grassroots level.
When you talk about Sachin Tendulkar, he is beyond the statistical world. His personality is strong enough to rise above the petty fights over who is the best cricketer in the world. What does one achieve by being the best? Will he escape death? Will he cease to be a human being? I am curious to know what one gains from being the best. Cricketing-wise, he has been a more complete player than most cricketers in the history of the game. On a personal front, though, he is in a stage of infancy without cricket.
What is his next set of challenges?
To understand his two teens at home; to help his wife in day-to-day matters; to start a new career in the development of sports; to take up politics; to become an entrepreneur; or to become the recluse he wished to be - the options are plenty.
Opinions will always be divided no matter what, and when it comes to Tendulkar both sides of the argument attract tremendous attention. As a cricketer he had to cope with simple expectations multiplied by a countless ocean of people from different backgrounds and cultures; he was expected to help India win matches and score runs every match. Now what are the new expectations?
He has had a flawless professional and personal life thus far - but he knows that with each passing day the responsibilities will only increase. This time it will not be as a player; it will be life outside the cricket field.
People will continue to have expectations of him, more than of any athlete in the history of sports. The 22 yards and the cricket field provided the perfect refuge from all the pressures - it was the one place where he felt at ease, irrespective of the opposition he faced.
As he prepares to lead the life of an ex-cricketer, he is bestowed with the Bharat Ratna, and with it comes scrutiny of another level. Such is the life of this persona that he can never lead a simple life. His own talent, extraordinary abilities and discipline have made him lead this uncommon life in a country of common men.
Give me a worthy man to honour and I shall find a cynic who thinks otherwise. And now the debate of Dhyanchand versus Sachin Tendulkar is becoming a battle of egos.
Who would benefit from Dhyanchand winning the award? What has been done to Dhyanchand's legacy in the name of National Sports Day in India? What is happening to Indian hockey?
Yes, Indian hockey has won eight Olympic gold medals since the team made its debut in Amsterdam in 1928: six gold medals on the trot, with the first three involving Dhyanchand - including the famous victory at the 1936 Berlin Olympics in front of Hitler and the Nazi regime.
What is its relevance now? The last time India won a hockey gold medal, they were helped by the Cold War, which saw many countries boycott the USSR-led Moscow Games. That was in 1980, when my parents didn't even know each other. It is part of history.
I love history, but history to me is convoluted. It does not give me the right answers to my questions - it always leaves me with unexplained situations, and on top of it all it narrates countless myths when it comes to specific people or events.
We like metaphors because they are soothing, appealing, poetic and dramatic compared to simple reports or bland narrations. I, like many, get lost in the metaphors created by a few writers. For Tendulkar there is plenty of written and visual evidence; for Major Dhyanchand, sadly, very few saw him play and very little has been written about him.
The Union Government of India instituted the Dhyanchand Lifetime Achievement Award in 2002. The National Stadium, which hosted the inaugural Asian Games in 1951, was later renamed the Major Dhyanchand Hockey Stadium in New Delhi. National Sports Day, which falls on August 29th each year, also happens to be his birthday.
How many of us remember where he is from? The town he hails from is better known for a woman patriot by the name of Laxmi Bai. Who can accurately prove where he was born? Do people know he was an ex-serviceman who served in the Army before independence and for a decade after it?
Major Dhyanchand's statue in Jhansi
The famed myth of the four hands and four hockey sticks of this hockey wizard remains a mystery, though it is very much part of Indian sports folklore. He played for pride, because he mostly played in the pre-independence era. In independent India, it was all about helping Indian hockey, unlike in his playing days.
He was short of money towards the end of his life and went unrecognised by the nation and at some of the tournaments he attended. He died of liver cancer in 1979 in a general ward at AIIMS. Is that the way one treats an icon, never mind a potential Bharat Ratna candidate?
Indian hockey and its decline over the years have not helped to elevate Dhyanchand's legacy. The best way to honour him, in my view, would be to make the national sport a sport for the nation. Make it a governmental priority through the Sports Ministry - the BCCI does not need any help from government funds. India has a lot of space for team sports outside cricket.
Any institution is governed by politics, and this aspect is magnified when it concerns the Government. Governmental awards over the years have always had political influence in some way or the other. People have different opinions on the same topic, and awards are no different.
Why the reluctance to award Dhyanchand the top honour of India all these days? Why blame Sachin Tendulkar if he is being awarded instead of Dhyanchand? Why should Tendulkar step in and say who deserves the award or not? How does their legacy diminish by not receiving this award?
Dhyanchand's autobiography 'Goal' starts with the line, "Needless to say I am a common man." Looking at the legacy and upbringing of Tendulkar, he too is a common man. Their achievements on the sports field are uncommon. Their names have been, are and will be exploited and used as a means to spark debates of all kinds.
Will it silence the debate once and for all if both these sportsmen were awarded jointly? I guess not...
Labels: Bharat Ratna, Cricket, Dhyanchand, Field Hockey, Sachin Tendulkar, Sports Imitates Life
OFFLINE & ONLINE CRICKET
I left India for my Master's studies in 2009. The course involved sports, but when it came to cricket, it seemed as though it were an alien sport.
Except for a few Indians in my batch, there was absolutely zero interest in cricket, and we were the most recognisable Indians there - more so than Sachin Tendulkar. This is where it hit me, and quite hard. I was in Switzerland, not in India.
It was on a Sunday morning that year, on my way back to my room, that I got a message from my friend. That was the time when I had an ordinary mobile phone with no internet - no tablets or smartphones. I had to rely on a Wi-Fi connection if I were to be connected online. There was cricket being played, and it involved India, but nowhere did I see the buzz, nor could I find enthusiastic people to discuss it with.
I checked the scores on Cricinfo only to find Tendulkar had hit 163 before he left the field retired hurt. How could I have missed it? I cursed myself, and in a state of desperation, I found a remedy. A friend of mine, a cricket enthusiast himself, gave me the link to a website where I could watch the highlights. I watched them once, twice and a few more times.
When the Indians played New Zealand in the 1st Test at Hamilton, I stayed awake till about 4 am watching Tendulkar construct a brilliant innings of 160. It felt different, as I had never watched an innings of his live on a laptop. Earlier, in India, I had to wake up early to watch a live match from New Zealand; now I had to stay up late.
Over the course of the year, I watched him score a match-winning hundred against Sri Lanka and make a mammoth effort against Australia. His 175 at Hyderabad reminded me of the Desert Storm innings when he plundered the Australian attack.
I started watching cricket on my laptop regularly; it reminded me of the time when I used to watch and follow cricket with my family, friends or even strangers. Be it at someone's place, at work or even on a street, cricket was followed religiously, and that was the buzz I was missing. Cricket was such a wonderful ice-breaker that I started missing the conversations about this sport. Where were the opinionated minds?
After a year of staying and studying in Switzerland, I was back in India on an assignment. I was working late one evening when I got to know about Sachin's double hundred, the first in ODIs. I was sweating it out one afternoon when I saw on the scoreboard that he had scored a Test double hundred against Sri Lanka; I was busy running around different parts of Delhi when he made another double hundred against Australia; and I was chatting with my friend on a cold evening, with no TV cable connection, when he scored that 50th Test hundred against South Africa at Centurion. He went on to score one more at Cape Town, and I missed that too. To sum it up, I had missed the best phase of Tendulkar's cricketing career in the 21st century. How could I? Why did I not watch all the matches, just like in the old times?
A couple of weeks before I was to get married, the 2011 World Cup had started. I was in Bangalore and didn't want to miss the chance of watching a match live. After struggling for close to six hours, my cousin and I managed to buy ourselves a ticket each. Tendulkar scored 120 off 115 deliveries - his 98th international hundred. It was my first World Cup match and, as it stands, the last time I would watch Sachin score a hundred live at the ground.
I didn't watch him take that single at Nagpur which gave him his 99th hundred. Like many others, I too waited for him to score his 100th hundred. It wasn't to be in the World Cup, nor when he toured England, nor when the West Indies toured India, nor when the Indians toured Down Under.
I was in Jaipur working for the IPL, and a meeting was scheduled to discuss the preparations for the upcoming tournament. As we went inside the meeting room, there were about 4-5 gentlemen representing Rajasthan cricket glued to the TV. Sachin was batting and was close to the landmark.
My heart wanted to stay and watch him score that hundred, while my professional head wanted me to go ahead with the meeting. There was a visible reluctance among many to proceed, and even my manager wanted to stay back, as he didn't want to ruin the joy of watching Sachin get to his 100th hundred. In fact, he too was keen to watch the proceedings on TV. He was an ex-cricketer himself and knew his statistics quite well.
The next 15-20 minutes went by, and finally the moment arrived. It was not one of his best hundreds, and Tendulkar would agree. But the burden was lifted. A huge sigh of relief - and what was next for this cricketer? Since then he has announced his retirement from both ODIs and the T20 format. Now, he is bowing out of the game in less than ten days' time.
I was not in Kolkata, nor was I to be in Mumbai for his 200th Test. As it was four years ago, I caught all the action on the laptop, and by this time I have made a few friends here in Switzerland with whom I can talk about cricket.
Life as a cricket fan is a lot easier these days, thanks to the internet and smartphones. I can follow live scores, catch the highlights, and watch live cricket or even archived videos.
As I prepare to watch his final few moments in international cricket, I know for sure that, irrespective of where I reside and what's going on in my life, a glimpse of a Tendulkar innings will remind me of those random memories of mine associated with cricket.
I dip my feet into the lake and the water gushes away. I am not the same person anymore, for the water which touched my feet is long gone, replaced by a fresh stream. Cricket will not be the same to me.
Labels: Cricket, Indian Cricket, Sachin Tendulkar, Sports Imitates Life, Test Cricket
Lost Track: Circuits of the Yore XVII - Sebring International Raceway, United States Grand Prix
Jack Brabham pushing his Cooper-Climax to the finish line en route to claiming his first world championship
This year Austin will host its second championship race since making its debut last year. The Circuit of the Americas thereby became the tenth venue in the United States to host an F1 race - the most by any one country, remarkable when you consider that F1 is not really a commercial winner in this part of the world. Barring Watkins Glen, Long Beach and Indianapolis, the rest of the circuits came with a lot of promise that was to be short-lived. How long will the current track survive?
In this edition of Lost Track, we go back a little over fifty years to the 50s, when F1 in its world championship guise first raced in North America.
On 13th September 1959, Stirling Moss scored an emphatic victory in his privately entered Cooper-Climax, courtesy of the R.R.C. Walker Racing Team. This was his second straight win for Rob Walker's team, and crucially the victory gave Stirling Moss, along with Tony Brooks, a mathematical chance of winning the championship, which at that point the resilient Jack Brabham was leading. Moss was geared up - but he had to wait three months for the final round of the championship.
Since its inception, the F1 World Championship had always had a round held in the United States of America in the form of the Indianapolis 500. Though the points counted towards the championship, rarely did any non-US drivers take part. In fact, no one from outside the States took part for the first nine years running.
The 1959 title contenders - Jack Brabham, Tony Brooks and Stirling Moss - had three months to plan and prepare for this momentous occasion. Sebring Raceway, located in Florida, was chosen as the venue to host the F1 drivers and teams from across the globe.
That year was significant for many other reasons too. The most relevant and important was the introduction of rear-engined F1 cars. This idea was that of the visionary John Cooper - an innovation which instantly made him an auto racing legend and changed the way modern cars were built at the top level. Jack Brabham benefitted immensely from this revolutionary design; though it was not a dominant performance, it still gave him a lead of 5.5 points going into the final round.
Sebring is well known even today for its endurance races. Remember the 12 Hours of Sebring? The track included part of a former military base which had been used to train the US Army Air Forces during World War II.
The track owes its racing avatar to Alec Ulmann, who brought his love for automobiles to the United States when he emigrated from Russia. When the local racers were looking for a place to race, he organised the airbase at Sebring, Florida, as the race track. The inaugural 12 Hours of Sebring was held in 1952; it became quite popular and was one of the considerations taken into account when Sebring was later chosen to host the first Formula One Grand Prix in the United States. The race was initially lined up for a day after the endurance event in March; however, with logistical issues, the F1 event was postponed to December to become the final round of the season.
Stirling Moss, the driver in form, took pole position ahead of Jack Brabham and Tony Brooks. Brooks was later pushed to 4th after it was discovered quite late that the American Harry Schell had the third-fastest time. Despite caustic protests, mainly by Ferrari, there was no change in the order, and Brooks was to start from the second row in 4th place.
The race was also significant because, for the first time, most of the European cars were on show in an F1 competition held in the United States. This wasn't an ordinary car show or exhibition. F1 was the world's premier racing event, and comparisons were drawn between the European machinery and the American style of racing.
A man with a mission, Moss, having finished second in the championship in recent years, was hoping for a victory, with Brabham finishing outside second place. It was a daunting task, considering both raced similarly configured Cooper-Climax cars, though for different teams. Moss was out of the blocks quickly at the start of the race, but his dream of becoming world champion came to a halt on lap 6 when he retired with a transmission failure.
Unless Brooks were to win with Brabham finishing third or lower, the title was very much Brabham's to lose. Brabham took over the lead after Moss's retirement and led the race until about 500 yards before the finish. There his car halted; he got out and started pushing his Cooper-Climax - which was permitted in those times - and managed to cross the finish line in fourth.
His team-mate, the young Kiwi Bruce McLaren, was the winner and became the youngest F1 race winner (if you exclude Troy Ruttman's Indy 500 victory, which counted towards the championship).
First-time winner Bruce McLaren greeted by one of the models
Second place was taken by Moss's team-mate Maurice Trintignant, and Tony Brooks crucially came in third. Brooks finished as the runner-up of the 1959 championship, overtaking Moss in the overall standings.
His fourth-place finish was enough to give Jack Brabham his first crown. Cooper-Climax also became the first non-manufacturer to win the Constructors' Championship - significant considering it gave rise to the 'Garagistes', mainly from Britain, who were to play a prominent role in the evolution of the sport.
The race was exciting - but it was a financial disaster for the organiser and promoter, Alec Ulmann. Compared with the audiences that had witnessed the endurance races held previously on the same track, the turnout was appalling. In addition, there was a small problem a few teams faced post-race: the cheques issued to the winners bounced. To save the name and face of American racing, Charles Moran and Briggs Cunningham, two big names in American racing circles, personally covered the expenses to the tune of $15,000 to make amends.
Sebring turned out to be a one-off event for F1. In 1960, the same promoter moved the race to Riverside Raceway in California.
In today's scenario, the costs of hosting an F1 event are high, and unless Americans accept F1 alongside the other forms of motorsport, I am afraid the Circuit of the Americas will be abandoned just like its predecessors. Does the United States need F1, or is it the other way around?
Track Photo Courtesy - allf1.info
Labels: 1959, Bruce McLaren, Circuit of Americas, Cooper Climax, F1, Formula One, Indianapolis, Jack Brabham, John Cooper, Rob Walker, Sebring, Sports Imitates Life, Stirling Moss, Tony Brooks, USA
Abu Dhabi: Where it all Began - The Success Story of Red Bull and Vettel
Image Courtesy - Telegraph.co.uk
After clinching his fourth consecutive drivers' world championship, Sebastian Vettel is undoubtedly the king of F1, at least for the moment. There might be a couple of drivers currently more talented than him; however, they will have to wait, or come up with something extraordinary, to beat him or even come close to him.
Fernando Alonso came close last year and in 2010 with his Ferrari - reliable, but lacking the final punch so badly needed to beat Vettel and his Red Bull, or whatever fancy names he calls his cars. Mark Webber, his team-mate, had his best chance to win in 2010 and since then has never looked set to beat Vettel, while Raikkonen excelled in the few opportunities where the Lotus looked good.
The race now shifts to the Middle East - to Abu Dhabi. A twilight race in this Emirati capital welcomes the new world champion, albeit a familiar face who has previously won here twice, including the inaugural race in 2009. The latter of his two victories is a significant one - the one which wrote the most defining chapter in the legacy of Vettel's racing career and gave him the momentum to move to another level.
It was the evening of 14th November 2010; four drivers came into the season finale with a chance, at least a mathematical one, of winning the drivers' title: Fernando Alonso with 246 points, Mark Webber with 238, Vettel with 231 and Hamilton with 222 - a record in itself. Never had F1 seen a four-way battle for the top spot. Ferrari and Alonso were confident, and so was Webber at Red Bull. It was only Vettel's second year with Red Bull and he was not yet the senior driver of the team, while Hamilton had nothing really to lose.
In one of the pre-race press conferences, Vettel was asked: "You are leading the race, Webber is in second place, Alonso in third and the race would be finishing. What would be your behaviour, Vettel?" The wunderkind from Germany smiled, paused and answered calmly: "I was asked a similar question at the last race. It is just Thursday, guys; if we ever get to that situation, we'll see."
All the title contenders were placed in the top five after qualifying. Vettel and Hamilton occupied the front row, while Alonso and Webber were to start from the second and third rows respectively.
The race started, and by the end of lap one the safety car had been called to slow the pace while the track marshals removed the wreckage of Michael Schumacher's Mercedes and Liuzzi's Force India. The race resumed its usual pace after lap five, with the top five being Vettel, Hamilton, Button, Alonso and Webber in that order. Alonso was set to win the championship if the race finished in that order. He looked poised to do what Raikkonen had done in 2007: win the championship with Ferrari in his first year.
Drama unfolded on lap 11 when Mark Webber was called into the pits. Why? He was stuck behind Alonso, and this way he could make up some ground and challenge for the lead - or was there some other reason?
Ferrari's race strategists, keeping a close eye on Webber, planned to counter this with a strategy of their own. Alonso was the fastest driver on the circuit before he was called in to pit on lap 15. Was the call a move to keep Webber in check, or to replace Alonso's degrading softer tyres? Did they take a good look at the other drivers on track and their strategies before calling him in? Surely they would have factored all this in, considering it was going to be a one-stop strategy?
Alonso rejoined the track in 12th position, ahead of Webber - the pit strategy had worked. Crucially, though, he was now behind a long chain of cars that had already pitted and would not be required to pit again. On lap 24 Vettel pitted, and the stop was pitch-perfect, reminiscent of Ferrari's ruthless stop at Suzuka in 2000 which gave Michael the championship.
Vettel came out ahead of Kobayashi and Kubica with a clear track in front of him. Hamilton overtook the Sauber and the Renault in pursuit of victory, while Alonso was stuck behind the other Renault of Petrov. With each passing lap, one could see the disappointment on the faces of the Ferrari fans, their crew, and Andrea Stella in particular - Alonso's race engineer, who could be heard delivering motivational messages lap after lap on the radio. It was just one of those days when things could go all wrong, and they did for the team from Maranello. Those despairing faces dressed in red looked ever more hopeless, and all they could hope for was some retirements at the front.
On the other hand, the Red Bull team was anxious, knowing Webber's chances had dwindled and that Vettel's victory would mean nothing unless Alonso finished outside the top five. They waited with fingers crossed.
Vettel crossed the line and won the race by 10 seconds. He was not announced as World Champion immediately; instead he was asked to hold on until they could confirm the finishing order. It was looking good, so Vettel waited patiently while he heard out the messages from his race engineer on the radio: "Hamilton P2, Button P3, there's another two cars coming through turns 15 and 16, Rosberg P4, Kubica P5 and... Der Meister!"
Tears were all I could sense as I heard Vettel react to becoming World Champion. Yes, he was the World Champion, and it was unbelievable. He led the championship for the first time that season - and what a day to have done it.
As the German national anthem played, my mind could only think of that Sunday evening in Japan ten years before this race, when a German by the name of Michael Schumacher was in tears of joy after winning his 3rd drivers' championship, the first of his five with Ferrari. Now his junior had arrived on the big stage.
Vettel in his younger days with his hero Michael Schumacher
In 2008, it was Hamilton who had become the youngest World Champion, and now the world was to see another youngster claim the throne. Since that day, Vettel has gone on to win three more titles.
On 3rd November 2013, Vettel will race as a four-time World Champion on the Yas Island track. He will be fully aware that it was the day on this track which gave him the momentum to surge ahead and stamp his authority.
Next year, with the rule changes, the return of turbo engines and Ferrari fielding a powerful driver line-up, it promises to be an exciting season. Will Vettel be crowned for the fifth time come Abu Dhabi next year? I don't know, and honestly even Red Bull doesn't know. What they do know is that it is all theirs to lose. But for now, they will race in Abu Dhabi knowing this is where it all began - the legacy of Vettel and of Red Bull in particular.
Labels: 2010, Abu Dhabi, Alonso, F1, Ferrari, Hamilton, Michael Schumacher, Raikkonen, Red Bull, Sports Imitates Life, Vettel, Webber, World Champion
Vakifbank Istanbul wins their first FIVB Women's Club World Championships
At the end of five days of intense volleyball, the 7th Women's Club World Championships came to a close yesterday in Zurich, Switzerland. Saalsporthalle, a well-known multi-purpose stadium in Zurich, was roped in as the host venue this year. Since the re-introduction of this tournament in 2010, this was the first occasion it was held outside of Qatar. Prior to 2010, the championships were held thrice, in 1991, 1992 and 1994, after which the event was discontinued.
Apart from the host team Voléro Zürich, there were teams from Africa (Kenya Prisons), Asia (Guangdong Evergrande, China), South America (Unilever Vôlei, Brazil), Iowa Ice, representing the North, Central America and Caribbean Volleyball Confederation (NORCECA), and the reigning European champions Vakifbank Istanbul.
Voléro Zürich and Vakifbank Istanbul remained unbeaten in their respective pools A and B while Guangdong Evergrande and Unilever Vôlei finished second behind Zurich and Istanbul clubs.
The two semi-finals turned out to be very one sided. In the first semi-final between Vakifbank and Guangdong there was a close fight in the first set which Vakifbank eventually won 28-26. The next two were relatively easy as Vakifbank cruised to their second finals.
In the second semi-final though the hosts had the local support, the flair and talent came from the Brazilian side. After a closely fought first set, Voléro Zurich surrendered in the next two sets. The best they could hope for was to fight for the 3rd place.
And the hosts started off well, claiming the first set. The next three sets saw an outstanding display of commitment from the Asian champions as they claimed the bronze spot, thereby becoming the first Asian team to achieve a top-three finish at this competition.
If one looks at the history of this championship, which began in 1991, there have been three Brazilian winners, all of them different clubs. Unilever were playing their first final, while things were slightly different for their rivals. After a straight-sets loss in 2011, Vakifbank were playing their second final in three years. Their form throughout the year had been outstanding and they looked good to claim their first world club title.
Jovana Brakocevic, a Serbian national team player and a rock star for the Turkish club, was a dominant force for the entire match. The tall Serbian collected points in all three scoring skill categories. Her spike, block and high jump serve were thrilling to watch from the seats, but not so pleasant if you were a Brazilian fan or part of a team which had no clue most of the time. She finished the match with 23 points, including 3 blocks and one service ace.
Vakifbank Istanbul won the finals comfortably after a tight second set (25-23, 27-25 and 25-16). Jovana Brakocevic was rightly named the Most Valuable Player (MVP) of the tournament. The winners were awarded USD 200,000 while the runners-up and 3rd place team got USD 110,000 and USD 60,000 respectively.
The 2013 Women's Club World Championship Dream Team went this way:
1st Best Outside Spiker: Kenia Carcaces (Voléro Zurich)
2nd Best Outside Spiker: Kirdar Sonsirma Gözde (Vakifbank Istanbul)
1st Best Middle Blocker: Christiane Fürst (Vakifbank Istanbul)
2nd Best Middle Blocker: Carol (Unilever Vôlei)
Best Libero: Yuko Sano (Voléro Zurich)
Best Setter: Jingsi Shen (Guangdong Evergrande)
Best Opposite Spiker: Sarah Pavan (Unilever Vôlei)
Most Valuable Player: Jovana Brakocevic (Vakifbank Istanbul)
Labels: FIVB, Saalsporthalle, Sports Imitates Life, Unilever Volei, Vakifbank Istanbul, Volero Zürich, Volleyball, Women's Club World Championships, Zurich
It was more than a drizzle. It was pouring last morning. I wondered why. A gamut of coloured leaves lay on the street, and the rain was washing them away. This is what I saw out of the window of my drawing room. Just as I sipped the last bit of my ginger honey tea, I heard a beep. My throat was giving me a hard time and the hot beverage had a somewhat soothing effect.
I had to barely walk a couple of feet to pick my phone up. I had a notification and it read "Sachin Tendulkar to retire after his 200th Test" courtesy NDTV breaking news.
I quickly got on to my twitter feed and checked what's happening. I knew this might have happened, but was more interested in the source. It was the BCCI who had made this announcement on behalf of Sachin Tendulkar.
The tributes started pouring in left, right and centre. A few expressed relief while the majority expressed their loss of a connection to childhood - the constant he has been in cricket to many. What did I do?
Words are like perceptions and I read plenty of them. All sorts of people put in their views – logical, cynical, sadistic, critical, dramatic, cerebral, statistical, purist, fanatic, emotional and human. I was amazed, and not surprised at the same time, to see everyone put in their two cents on this topic. A few poured their hearts out while others wrote whatever they felt. Frankly, I didn't want to reflect on this decision of Sachin Tendulkar. I didn't want to. I was just reading one after the other.
As I occupied myself reading all this, every now and then my mind went back to those laminated picture books I have of Tendulkar (3 to 4 of those big books). They are still stacked in my room in India and remain my prized possession.
And then, I got reminded of the way I played cricket as a kid. What made me love this game to this day? Is it because the game itself was so attractive or was I influenced to take up this game?
How old was I? Let me remember: seven, six, five or even younger than that when I first picked up a bat or a ball. Our house was located away from the city centre and so I didn't have the luxury of having too many friends. There were only a few of us (4 in all), all of the same age (what a lovely coincidence). We started playing cricket on the streets, as having a proper ground was unimaginable in those days. With occasional tips from elders, we were mostly on our own to understand this game and play it, a challenge which we relished.
Around that time, a little phenomenon in Indian cricket was making his mark in international cricket. He was young and so were we. So it was an instant connection, a bond which became stronger by the day. I started playing cricket everywhere – on the roads, inside the house and any place which was sufficient to put bat to ball. It didn't matter – My life was occupied with cricket, obsessed with it which made me think school and academics were extra-curricular activities.
Outside my family circle, he has been a constant throughout my life and so that connection is what's being broken. I am now all grown up and understand life in a much better way than I did previously. He gave me immense joy, made me shed tears, made me go into a frenzy, made me go mad, made me frustrated, gave me pride, gave me confidence, inspired me, made me obsessed, made me a thinker, made me a believer, made me a guy who goes after his dreams, and it goes on........
Who is this guy? God, no; Demi-god, no; superhuman with magical powers, no; ordinary human with extreme talent, no – To me he is a kid who extended his childhood beyond the conventions of its definition. As we grow old, we get distracted by innumerable things than a child would. As an adult, I believe there is a kid in us and for Sachin Tendulkar I feel it was always the opposite. To me the association with cricket started with Tendulkar. He was a fellow kid like me with whom I could connect to whenever he played cricket.
Kids move on to the next toy or next set of challenges only when the next toy is attractive or when they are bored with the existing toy. I believe Tendulkar has reached that phase in his childhood where playing cricket no longer gave him that fun it once did. He made his retirement call to move on with his life and let the adult in him take over from now on. If cricket were to be his most favourite toy, he has utilised and played with it more than one can imagine. He will play out the final two tests as an adult, fully aware that his childhood days are now over.
A big chunk of my childhood tree has been etched out. The kid in me has lost a link. Now they will be replaced by memories of Sachin Tendulkar and his cricket playing days. I will move on, going about life the usual way with interesting things happening around and with me comes all those wonderful days of the past, recollecting my life, remembering the times when I did everything I could to just watch him play.
Posted by Sports Imitates Life at 11:34
Labels: Cricket, India, Sachin Tendulkar, Sports Imitates Life
My Two Cricketing Idols - Sachin Tendulkar and Rahul Dravid
I had completed my Engineering studies and was now a corporate. A few months later, in December 2006, the Indian cricket team were touring South Africa and part of their tour was a solitary T20 match. It was India's first international T20 match and at the end of it, they emerged victorious. It was Sachin Tendulkar's first and, as it turned out, his only T20 international. At the time of the first T20 World Cup in 2007, the trio of Tendulkar, Dravid and Ganguly had opted out of T20 cricket internationally, as they felt it was best suited for youngsters. The rest turned out to be a historic moment in the evolution of present-day cricket. MS Dhoni led his young team to the title, which changed the course of cricket's future - the birth of the Indian Premier League and the successive T20 leagues around the world.
It was the summer of 2008 when Indian television and stadium-goers got a custom-made cricket event which involved international cricketers spread across eight franchises or cities in India. Sachin Tendulkar represented his home city 'Mumbai' while Rahul Dravid turned out in red and gold colours for 'Bangalore'. This year the IPL completed six seasons, and if I look back on that night of 18th April 2008, I was celebrating my mother's birthday with relatives and friends at home while the IPL carnival was not so far off from my place in Bangalore. For the first time, Indian viewers were to be divided on a city basis for their most worshipped sport. I am a Bangalorean and my cricketing idols were Sachin Tendulkar and Rahul Dravid. I decided not to support anyone, and I still maintain the habit of picking my favourites on match day, or on how I felt. C'est la vie for me when it comes to T20 cricket.
Around the fourth season of the IPL, I found myself in a situation where I was donning the outfit of the IPL central management team which operated the tournament. It was a dream for most youngsters, cricket fans and game maniacs to be working on a job that involved cricket and cricketers. By that time, I had lost my innocence as a fan and looked at my idols in a different way. I became averse to the idea of clicking photographs with them, and more so when it involved my revered cricketers (God knows how many of my close friends and relatives I have denied). I was still a kid at heart when it came to these two cricketers or when it came to supporting them. Just that I had become a more silent kid than the naughty one I used to be. I felt I was different, and if I ever got to meet them in person, I knew I would not be like any other fan. Believe me, it was different.
Looking back, I was thrilled when Sachin Tendulkar greeted me, shook hands and gave an autograph penned using his right hand (he is a left-handed writer) in a local cricket match and quite a similar euphoria when I met Rahul Dravid for the first time after winning a competition and second time at a game. I was a kid back then, the one who had his dreams fulfilled by these two cricketers. No they were not just cricketers, they were super-heroes to me.
And a few years later I met them as a professional. A lot had changed in my life – I was married by this time and yet I could not stop admiring these two cricketers. Yes, I was watching less live cricket than I used to, and yet was managing to follow the missed action through highlights, Cricinfo and other media. Cricket was not just a passion, it was my work too.
Yesterday, both Sachin Tendulkar and Rahul Dravid played out their final limited-overs game, or should I say in coloured clothing. While Rahul Dravid has retired from all forms of the game internationally, Tendulkar continues to be a player in the longer version of the game (Test cricket) for India. While I am amazed and intrigued at the journey and accolades Tendulkar has been able to achieve, I am inspired by the course and journey Rahul Dravid endured. Sachin Tendulkar won his last T20 international for India, his last One Day International for India (including a World Cup), his last IPL match for Mumbai Indians (including the trophy) and his last Champions League T20 match, again for Mumbai Indians (including the trophy). Even if he doesn't play another Test for some reason or the other, he would still have the feat of having won the last Test match he played for India.
On the other hand, Rahul Dravid has not won a World Cup; he was part of the losing team on the occasion of his last Test, his last ODI and his last T20, all for India, as well as his last IPL match and his last Champions League T20 match with Rajasthan Royals.
Rahul Dravid will not play competitive cricket anymore and I am a grown up boy to understand his decision better than I would have few years ago. He will be missed but I am sure his family would not complain about this retirement. Personally, it was a warming experience to work with the same franchise Dravid captained and something which I cherish for a long time to come. The journey outweighs the destination and one such epitome to that is Rahul Dravid's career.
Sachin Tendulkar has played 24 years of international cricket. I know he is not at his best at the moment and I also know he knows his cricket much better than I do. Is he destroying his legacy by not being at his best or is it a tale of perseverance and dedication to one's skill? Frankly, it doesn't matter to me. His effect on cricket lovers and to the world cricket has been enormous and a mighty positive one.
So on that note, I will cherish this period of dusk on the greatest cricketer I have witnessed in my lifetime. I was a five year old kid when he first played international cricket (1989) wearing the whites and he will end his playing career someday wearing whites. Among my list of childhood idols across all sports, he remains the last man standing.
Image Courtesy: internationalreporter.com
Labels: Champions League T20, Cricket, India, IPL, Mumbai Indians, ODI, Rahul Dravid, Rajasthan Royals, Sachin Tendulkar, Sports Imitates Life, T20, Test Cricket
Afghanistan - The New Messengers of Sport
The Manuka Oval in the capital city of Australia will be part of a certain country's history. The seventh match of the 2015 ICC Cricket World Cup, which will be played on Feb 18th 2015, features Bangladesh against another Asian team. No, it is not India, nor Pakistan, nor Sri Lanka, the three strong pillars of Asian cricket.
Not so long ago, this country was at unrest and it still is due to conflicts of different nature and security being at the top of this. However, when it comes to cricket they have made significant progress and now they are making their debut at the World stage. Welcome to the 50 over World Cup bandwagon 'Afghanistan'.
They had earlier qualified for the T20 World Cup in 2010 and repeated the feat in 2012. This had inspired a lot of youngsters to take up the sport in the post-Taliban era. I hope this news acts as a catalyst for the population of Afghanistan, and more so for the youngsters.
Cricket and its origins in the Afghan provinces date back to the time of British rule in the mid 19th century. Unlike in India and Pakistan, the legacy of cricket in the Afghan regions was short-lived, and it was not until the end of the previous millennium that a cricket board was formed. While sports were placed under a ban during Taliban rule, cricket escaped such a ban and was the only exception.
This act of deliberate omission by the Taliban was crucial for the development of the sport; it paved the way for the national team to become a member of the International Cricket Council (2001) and subsequently the Asian Cricket Council (2003). In twelve years' time they have progressed, and the sky is the limit for the future.
The fraternity of the sporting world must celebrate what Afghanistan has achieved. To put together a team of individuals of different mindsets is never easy, especially when you have to constantly worry about your life. No international matches are currently played in Afghanistan due to ongoing security issues. They have a domestic championship, a tournament contested by a little more than twenty provinces. They play their home international matches at Sharjah, United Arab Emirates, and the bulk of their cricket stadiums in Afghanistan are under construction. The Afghanistan Cricket Board has big plans to build a stadium in every province of the country and hopes to see international cricket return to home territory. They are currently placed 12th out of the 14 teams which will participate in the multi-country tournament.
In a political world where one is judged by one's passport, such heroics from the people of a country will go a long way in changing its image. In the recently published Henley & Partners Visa Restrictions Index (a global ranking of countries based on the freedom of travel of their citizens), Afghanistan was placed at the bottom of the list (93rd) with a score of 28, meaning Afghan citizens can travel to only 28 countries without a visa. And now they will be travelling to Australia and New Zealand to play the signature event of cricket, with a visa of course.
The last paragraph had nothing to do with cricket or sports in general, at least not directly. However, repeated performances on the sporting stage will give the youth a chance to imbibe the qualities of their heroes and thereby a chance at a more peaceful future. I believe you don't need great plans to make a sports project work in conflict-affected areas; all you need is to provide the basic infrastructure to play, and the lives of such players will automatically be taken care of. That to me is the power of having sports in one's life. It is not about being the best in the world; it is all about making an effort to be the best one can become. Sports are one such medium in life. Today, Afghanistan has become the new messengers of the sports industry.
Catch more on the background of growth of cricket in Afghanistan through this documentary
Labels: 2001, 2003, 2010, 2012, 2015 World Cup, ACC, Afghanistan, Australia, Canberra, Cricket, ICC, India, International Cricket Council, New Zealand, Pakistan, Sports, Sports Imitates Life, Srilanka, Taliban
Lost Track: Circuits of the Yore XVI - Pedralbes, Spanish Grand Prix
Last month I visited Barcelona. It was my first time in Spain and I loved it. It was a short stay of three days in one of the beautiful and happening cities of the world, and this abrupt stop was memorable nevertheless. I recall the crowded street of La Rambla, the Mediterranean seaside, the monumental Sagrada Familia, the colossal 'Camp Nou' – abode of FC Barcelona, Poble Espanyol and the Olympic Stadium which was also the venue for the Montjuïc race track. There were many other memories too, like the colourful water fountain in its glory at night, Arc de Triomf, random Tapas joints, an introduction to Gazpachos and an unforgettable dinner at the rooftop restaurant of Vila Olimpica.
Amongst all this, I also went around the streets of a relatively busy locality called 'Pedralbes'. Famous for its monastery ('white stones', as the name translates from Catalan), it was also the first place in Spain to draw the likes of Fangio, Ascari and the rest of the 1950s Formula 1 drivers. It was a street circuit, a quick one where cars could reach speeds in excess of 300 km/h. The roads were wide, slightly grand and featured the city's broad corners.
With the driver's championship hanging in the balance, the final event of the 1951 season was to culminate at Pedralbes, which was making its F1 debut. Which driver would it be? Alfa Romeo, having won the previous year, looked good with their driver Juan Manuel Fangio, who led the championship at the start of this race. Ferrari, on the other hand, pinned their hopes on their star driver Alberto Ascari to overcome the two-point deficit and win the driver's title. The job was half done with Ascari taking pole and Fangio coming in second.
Crowds gathered in good numbers to watch this thriller unfold. Both drivers were pumped up to win their maiden F1 driver's title. And so the race started. Engine-wise, Alfa Romeo and Ferrari were evenly matched for speed. But it was the tyre choice that was going to be decisive. Ferrari opted for 16-inch rear tyres while Alfa Romeo went for 18-inch. This difference of 2 inches turned out to be a big disadvantage for Ferrari. They soon found their cars struggling with grip issues and tyres losing their tread rapidly. Ascari suffered the most, and his championship hopes now rested solely on Fangio retiring and on him taking 2 points or more.
Fangio went on to win the first of his five world titles. Ascari could manage only fourth. After two successful seasons in F1, Alfa Romeo announced their withdrawal from the 1952 season onwards, owing to finances, or rather the lack of them. In 1952 and 1953, the Spanish Grand Prix was replaced by the Dutch Grand Prix. Pedralbes was back for the 1954 season in place of the Zandvoort track in the Netherlands.
As in 1951, Pedralbes again hosted the ninth and last race of the season. This time there was no such pre-race drama. Fangio was already World Champion coming into this round and, now driving for Mercedes, could race without any title pressure. Barring the two races he drove for Maserati, Fangio won four races with Mercedes.
Ascari was a double World Champion by this time and repeated his feat of 1951 by taking pole position at this 6.3 km circuit. He was racing for Lancia, and they had brought in their 90-degree V8 engine as part of their chassis for this race. The pace was there to be seen - fastest practice lap, pole position and the fastest lap of the race. By the end of nine laps, both Lancia-driven cars were out of the race. Luigi Villoresi retired on lap 2, struck by brake problems, and seven laps later his teammate Ascari's race and season ended due to clutch problems. The fastest car didn't last the distance.
Mike Hawthorn, who went on to win his solitary World Championship in 1958, won this race for Ferrari. The win was made easier by the leakage issues Fangio had to deal with as he lost oil towards the end of the race. The duel between Hawthorn and Fangio never reached its climax as a result of this unfortunate incident. Fangio lost second position and finished third. This third position is quite significant: out of his 52 entries in F1, he won a Bradmanesque 24 times, came second 10 times, retired 10 times, failed to qualify (DNQ) once, finished outside the top three 6 times, and this result in Spain was the sole 3rd place of his F1 career.
The year 1955 is considered a black year for motorsport. The Le Mans disaster of 1955 was catastrophic, and the sport became a lot stricter as a result of this tragedy. Pedralbes was one of the casualties of the aftermath of what happened at Le Mans. Stringent rules meant Pedralbes was out of the calendar. It never made any significant attempt to win back its place in F1. However, Spain did host and continues to host F1 races, albeit it had to wait another 13 years.
Now all that remains of Pedralbes are the streets and long stretches of road which once, rather twice, had some of the fastest road cars on them, with drivers accelerating, changing gears and braking at will. Looking at the roads, it was tough for me to visualise the events that took place nearly 60 years ago. There is a tramway in the middle of these roads, a freeway very close by, and a few corners from the original Pedralbes circuit are still retained. The memories, though, remain, and unfortunately I couldn't get hold of any elderly gentleman or lady who had witnessed those events.
Labels: 1951, 1954, Alberto Ascari, Alfa Romeo, Barcelona, Catalan, F1, Ferrari, Juan Manuel Fangio, Lancia, Mike Hawthorn, Montjuïc, Pedralbes, Spain, Spanish Grand Prix, Sports Imitates Life
package org.apache.solr.client.solrj.util;
public interface Cancellable {
void cancel();
}
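Since `Cancellable` is a single-method (functional) interface, callers can implement it with a lambda. A minimal, self-contained usage sketch follows; it is not part of the Solr source tree, and the demo class name and the `AtomicBoolean` flag are illustrative assumptions:

```java
// Sketch only: a Cancellable that flips a shared flag, as an
// in-flight task might poll. Not from the Solr codebase.
import java.util.concurrent.atomic.AtomicBoolean;

public class CancellableDemo {
    // Local copy of the single-method interface so this sketch compiles alone.
    interface Cancellable {
        void cancel();
    }

    public static void main(String[] args) {
        AtomicBoolean running = new AtomicBoolean(true);

        // A request handle can expose cancellation as a lambda over shared state.
        Cancellable handle = () -> running.set(false);

        // ... a long-running task would check running.get() in its loop ...
        handle.cancel();
        System.out.println(running.get()); // prints false
    }
}
```

One design consequence of keeping the interface to a single `void cancel()` method is exactly this: any existing handle, future, or flag can be adapted to it with a one-line lambda.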
Ctenophorus spinodomus, commonly known as the Eastern Mallee dragon, is a species of agamid lizard occurring in New South Wales and South Australia.
Agamid lizards of Australia
spinodomus
Endemic fauna of Australia
Reptiles described in 2019
Taxa named by Ross Allen Sadlier
Taxa named by Donald J. Colgan
Taxa named by Harold Cogger
package org.apache.camel.spring.cloud.zookeeper;
import org.apache.camel.cloud.ServiceDefinition;
import org.apache.camel.impl.cloud.DefaultServiceDefinition;
import org.springframework.cloud.zookeeper.discovery.ZookeeperServer;
import org.springframework.core.convert.converter.Converter;
public final class ZookeeperServerToServiceDefinition implements Converter<ZookeeperServer, ServiceDefinition> {
@Override
public ServiceDefinition convert(ZookeeperServer source) {
return new DefaultServiceDefinition(
source.getId(),
source.getHost(),
source.getPort(),
source.getInstance().getPayload().getMetadata()
);
}
}
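The converter above is a straight field-by-field adapter from a discovery-server record to Camel's flat service definition. A dependency-free sketch of the same mapping idea is shown below; the Spring Cloud and Camel types are stood in by local records, so all names here (`Server`, `ServiceDefinition`, `ConverterSketch`) are assumptions, not the real APIs:

```java
// Sketch only: stand-in types mimicking the ZookeeperServer ->
// ServiceDefinition adaptation, with no Spring or Camel dependencies.
import java.util.Map;

public class ConverterSketch {
    // Stand-in for the discovery-side record (ZookeeperServer).
    record Server(String id, String host, int port, Map<String, String> metadata) {}

    // Stand-in for the Camel-side value (DefaultServiceDefinition).
    record ServiceDefinition(String name, String host, int port, Map<String, String> metadata) {}

    // Mirrors the field-by-field mapping the converter performs.
    static ServiceDefinition convert(Server source) {
        return new ServiceDefinition(source.id(), source.host(), source.port(), source.metadata());
    }

    public static void main(String[] args) {
        ServiceDefinition def = convert(new Server("orders", "10.0.0.5", 8080, Map.of("zone", "eu")));
        System.out.println(def.name() + ":" + def.port()); // prints orders:8080
    }
}
```

The point of such an adapter is that the discovery layer and the routing layer never see each other's types; only the converter knows both.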
package screen;
import java.awt.Color;
import java.awt.Font;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import java.util.ArrayList;
import java.util.List;
import javax.swing.BoxLayout;
import javax.swing.JButton;
import javax.swing.JComboBox;
import javax.swing.JFrame;
import javax.swing.JLabel;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JSeparator;
import javax.swing.JTable;
import javax.swing.JTextField;
import javax.swing.ListSelectionModel;
import javax.swing.SpringLayout;
import javax.swing.border.LineBorder;
import javax.swing.table.DefaultTableModel;
import component.JTextFieldLimit;
import dto.ComboItem;
import dto.UserDto;
import enums.EditMode;
import enums.UserType;
import services.UserService;
import util.ComponentUtil;
import util.Constants;
import util.MessageUtil;
import util.StringUtil;
public class S_UserMaster {
// User's Frame
private JFrame frmUserMaster;
// Parent's Frame
private JFrame frmParent;
// Panel SearchArea
private JPanel searchPanel;
// Panel RegisterArea
private JPanel registerPanel;
// Combobox UserType
@SuppressWarnings("rawtypes")
private JComboBox cbxUserType;
// Table info
private JTable tblInfo;
// TextField UserName
private JTextField txtUserName;
// TextField Password
private JTextField txtPassword;
// TextField NickName
private JTextField txtNickName;
// Button Back
private JButton btnBack;
// Data List
private List<UserDto> userDtoList;
// Deleted Data List
private List<Long> deletedList;
// Editing data
private UserDto editingUserDto;
// Edit mode
private EditMode editMode;
/**
* Create User screen.
*
* @param frmParent
*/
public S_UserMaster(JFrame frmParent) {
this.frmParent = frmParent;
// Initialize Components
initComponents();
// Initialize combobox
initComboBox();
// Initialize screen
initScreen();
}
/**
* Initialize the contents of the frame.
*/
private void initComponents() {
frmUserMaster = new JFrame();
frmUserMaster.setResizable(false);
frmUserMaster.setTitle("User Master");
frmUserMaster.setBounds(100, 100, 600, 610);
frmUserMaster.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
frmUserMaster.getContentPane().setLayout(new BoxLayout(frmUserMaster.getContentPane(), BoxLayout.X_AXIS));
JPanel panel = new JPanel();
frmUserMaster.getContentPane().add(panel);
SpringLayout sl_panel = new SpringLayout();
panel.setLayout(sl_panel);
// Create SearchPanel
searchPanel = new JPanel();
sl_panel.putConstraint(SpringLayout.NORTH, searchPanel, 0, SpringLayout.NORTH, panel);
sl_panel.putConstraint(SpringLayout.WEST, searchPanel, 0, SpringLayout.WEST, panel);
sl_panel.putConstraint(SpringLayout.SOUTH, searchPanel, 400, SpringLayout.NORTH, panel);
sl_panel.putConstraint(SpringLayout.EAST, searchPanel, 0, SpringLayout.EAST, panel);
panel.add(searchPanel);
// Initialize SearchArea.
initSearchArea();
// Create RegisterPanel
registerPanel = new JPanel();
sl_panel.putConstraint(SpringLayout.NORTH, registerPanel, 400, SpringLayout.NORTH, panel);
sl_panel.putConstraint(SpringLayout.WEST, registerPanel, 0, SpringLayout.WEST, panel);
sl_panel.putConstraint(SpringLayout.SOUTH, registerPanel, 0, SpringLayout.SOUTH, panel);
sl_panel.putConstraint(SpringLayout.EAST, registerPanel, 0, SpringLayout.EAST, panel);
panel.add(registerPanel);
// Initialize RegisterArea.
initRegisterArea();
}
/**
* Initialize SearchArea.
*/
@SuppressWarnings({ "serial", "unchecked", "rawtypes" })
private void initSearchArea() {
// Create SpringLayout
SpringLayout sl_panel_1 = new SpringLayout();
searchPanel.setLayout(sl_panel_1);
// ScrollPane
JScrollPane scrollPane = new JScrollPane();
sl_panel_1.putConstraint(SpringLayout.NORTH, scrollPane, 45, SpringLayout.NORTH, searchPanel);
sl_panel_1.putConstraint(SpringLayout.WEST, scrollPane, 40, SpringLayout.WEST, searchPanel);
sl_panel_1.putConstraint(SpringLayout.SOUTH, scrollPane, 345, SpringLayout.NORTH, searchPanel);
sl_panel_1.putConstraint(SpringLayout.EAST, scrollPane, 540, SpringLayout.WEST, searchPanel);
searchPanel.add(scrollPane);
// TableInfo
tblInfo = new JTable();
tblInfo.setModel(new DefaultTableModel(
new Object[][] {
},
new String[] {
"User Name", "User Type", "Nick Name"
}
) {
Class[] columnTypes = new Class[] {
String.class, String.class, String.class
};
@Override
public Class getColumnClass(int columnIndex) {
return columnTypes[columnIndex];
}
@Override
public boolean isCellEditable(int rowIndex, int columnIndex) {
return false;
}
});
tblInfo.getColumnModel().getColumn(0).setPreferredWidth(100);
tblInfo.getColumnModel().getColumn(1).setPreferredWidth(100);
tblInfo.getColumnModel().getColumn(2).setPreferredWidth(400);
tblInfo.setEnabled(true);
tblInfo.setColumnSelectionAllowed(false);
tblInfo.setCellSelectionEnabled(false);
tblInfo.setSelectionMode(ListSelectionModel.SINGLE_SELECTION);
tblInfo.setRowSelectionAllowed(true);
tblInfo.setBorder(new LineBorder(new Color(0, 0, 0)));
tblInfo.getTableHeader().setBackground(new Color(153, 255, 204));
tblInfo.getTableHeader().setFont(new Font("default", Font.BOLD, 13));
scrollPane.setViewportView(tblInfo);
// Initialize SearchArea's button
initSearchAreaButton();
}
/**
* Initialize SearchArea's button
*/
private void initSearchAreaButton() {
SpringLayout sl_panel = (SpringLayout) searchPanel.getLayout();
// Button New
JButton btnNew = new JButton("New");
btnNew.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actNew();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnNew, 360, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnNew, 40, SpringLayout.WEST, searchPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnNew, 390, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnNew, 120, SpringLayout.WEST, searchPanel);
searchPanel.add(btnNew);
// Button Modify
JButton btnModify = new JButton("Update");
btnModify.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actModify();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnModify, 360, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnModify, 130, SpringLayout.WEST, searchPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnModify, 390, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnModify, 210, SpringLayout.WEST, searchPanel);
searchPanel.add(btnModify);
// Button Delete
JButton btnDelete = new JButton("Delete");
btnDelete.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actDelete();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnDelete, 360, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnDelete, 220, SpringLayout.WEST, searchPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnDelete, 390, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnDelete, 300, SpringLayout.WEST, searchPanel);
searchPanel.add(btnDelete);
// Button Register
JButton btnRegister = new JButton("Register");
btnRegister.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actRegister();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnRegister, 360, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnRegister, 350, SpringLayout.WEST, searchPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnRegister, 390, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnRegister, 450, SpringLayout.WEST, searchPanel);
searchPanel.add(btnRegister);
// Button Back
btnBack = new JButton("Back");
btnBack.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actBack();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnBack, 360, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnBack, 460, SpringLayout.WEST, searchPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnBack, 390, SpringLayout.NORTH, searchPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnBack, 540, SpringLayout.WEST, searchPanel);
searchPanel.add(btnBack);
}
/**
* Initialize RegisterArea.
*/
@SuppressWarnings("rawtypes")
private void initRegisterArea() {
// Create SpringLayout
SpringLayout sl_panel = new SpringLayout();
registerPanel.setLayout(sl_panel);
// Line separator
JSeparator separator = new JSeparator();
separator.setForeground(Color.BLACK);
sl_panel.putConstraint(SpringLayout.NORTH, separator, 0, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, separator, 10, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, separator, 2, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, separator, -10, SpringLayout.EAST, registerPanel);
registerPanel.add(separator);
// Label lblUserName
JLabel lblUserName = new JLabel("User Name");
sl_panel.putConstraint(SpringLayout.NORTH, lblUserName, 20, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, lblUserName, 40, SpringLayout.WEST, registerPanel);
registerPanel.add(lblUserName);
// TextField txtUserName
txtUserName = new JTextField();
sl_panel.putConstraint(SpringLayout.NORTH, txtUserName, 20, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, txtUserName, 130, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, txtUserName, 430, SpringLayout.WEST, registerPanel);
txtUserName.setDocument(new JTextFieldLimit(20));
registerPanel.add(txtUserName);
// Label lblPassword
JLabel lblPassword = new JLabel("Password");
sl_panel.putConstraint(SpringLayout.NORTH, lblPassword, 50, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, lblPassword, 40, SpringLayout.WEST, registerPanel);
registerPanel.add(lblPassword);
// TextField txtPassword
txtPassword = new JTextField();
sl_panel.putConstraint(SpringLayout.NORTH, txtPassword, 50, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, txtPassword, 130, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, txtPassword, 430, SpringLayout.WEST, registerPanel);
txtPassword.setDocument(new JTextFieldLimit(100));
registerPanel.add(txtPassword);
// Label lblUserType
JLabel lblUserType = new JLabel("User Type");
sl_panel.putConstraint(SpringLayout.NORTH, lblUserType, 80, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, lblUserType, 40, SpringLayout.WEST, registerPanel);
registerPanel.add(lblUserType);
// Combobox cbxUserType
cbxUserType = new JComboBox();
sl_panel.putConstraint(SpringLayout.NORTH, cbxUserType, 80, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, cbxUserType, 130, SpringLayout.WEST, registerPanel);
registerPanel.add(cbxUserType);
// Label lblNickName
JLabel lblNickName = new JLabel("Nick Name");
sl_panel.putConstraint(SpringLayout.NORTH, lblNickName, 110, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, lblNickName, 40, SpringLayout.WEST, registerPanel);
registerPanel.add(lblNickName);
// TextField NickName
txtNickName = new JTextField();
sl_panel.putConstraint(SpringLayout.NORTH, txtNickName, 110, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, txtNickName, 130, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, txtNickName, 430, SpringLayout.WEST, registerPanel);
txtNickName.setDocument(new JTextFieldLimit(20));
registerPanel.add(txtNickName);
// Initialize RegisterArea's button
initRegisterAreaButton();
}
/**
* Initialize RegisterArea's buttons
*/
private void initRegisterAreaButton() {
SpringLayout sl_panel = (SpringLayout) registerPanel.getLayout();
// Button OK
JButton btnOK = new JButton("OK");
btnOK.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actOK();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnOK, 140, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnOK, 220, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnOK, 170, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnOK, 300, SpringLayout.WEST, registerPanel);
registerPanel.add(btnOK);
// Button Cancel
JButton btnCancel = new JButton("Cancel");
btnCancel.addActionListener(new ActionListener() {
@Override
public void actionPerformed(ActionEvent e) {
actCancel();
}
});
sl_panel.putConstraint(SpringLayout.NORTH, btnCancel, 140, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.WEST, btnCancel, 310, SpringLayout.WEST, registerPanel);
sl_panel.putConstraint(SpringLayout.SOUTH, btnCancel, 170, SpringLayout.NORTH, registerPanel);
sl_panel.putConstraint(SpringLayout.EAST, btnCancel, 390, SpringLayout.WEST, registerPanel);
registerPanel.add(btnCancel);
}
/**
* Initialize combobox
*/
@SuppressWarnings("unchecked")
private void initComboBox() {
// UserType
for (UserType userType : UserType.values()) {
ComboItem item =
new ComboItem(
userType.getCode(), userType.getLabel());
cbxUserType.addItem(item);
}
}
/**
* Initialize screen
*/
private void initScreen() {
// Show data
showData();
// Enable SearchArea
enableSearchArea();
}
/**
* Show table data
*/
private void showData() {
userDtoList = new ArrayList<UserDto>();
deletedList = new ArrayList<Long>();
// Search all User
userDtoList = UserService.searchUser(null);
// Edit table's data
editTableData(null);
}
/**
* Click button New
*/
private void actNew() {
// Initial edit data
editingUserDto = new UserDto();
// Enable RegisterArea
enableRegisterArea();
editMode = EditMode.NEW;
}
/**
* Click button Update
*/
private void actModify() {
// Check: is row selecting?
int selectedIndex = tblInfo.getSelectedRow();
if (selectedIndex < 0) {
MessageUtil.showInfoMessage(frmUserMaster, Constants.MSG_NOT_CHOOSE_ROW);
return;
}
// Edit RegisterArea
editingUserDto = userDtoList.get(selectedIndex).copy();
txtUserName.setText(editingUserDto.getUserName());
txtPassword.setText(editingUserDto.getPassword());
ComponentUtil.selectItem(cbxUserType, editingUserDto.getUserType());
txtNickName.setText(editingUserDto.getNickName());
// Enable RegisterArea
enableRegisterArea();
editMode = EditMode.UPDATE;
}
/**
* Click button Delete
*/
private void actDelete() {
// Check: is row selecting?
int selectedIndex = tblInfo.getSelectedRow();
if (selectedIndex < 0) {
MessageUtil.showInfoMessage(frmUserMaster, Constants.MSG_NOT_CHOOSE_ROW);
return;
}
// Show confirm message
if (MessageUtil.showConfirmMessage(frmUserMaster, Constants.MSG_DELETE_CONFIRM)) {
// Validate delete data
if (!validateDeleteData()) {
return;
}
// Add deleted data into deletedList
Long userId = userDtoList.get(selectedIndex).getUserId();
if (userId != null) {
deletedList.add(userId);
}
// Remove deleted data
userDtoList.remove(selectedIndex);
// Edit table's data
editTableData(null);
}
}
/**
* Click button Register
*/
private void actRegister() {
// Show confirm message
if (MessageUtil.showConfirmMessage(frmUserMaster, Constants.MSG_REGISTER_CONFIRM)) {
// Register all User
if (!UserService.registerAll(userDtoList, deletedList)) {
MessageUtil.showErrorMessage(frmUserMaster, Constants.MSG_REGISTER_FAIL);
return;
}
// Show success message
MessageUtil.showInfoMessage(frmUserMaster, Constants.MSG_REGISTER_SUCCESS);
// Refresh Screen
initScreen();
}
}
/**
* Click button Back
*/
private void actBack() {
// Show confirm message
if (MessageUtil.showConfirmMessage(frmUserMaster, Constants.MSG_CANCEL_CONFIRM)) {
frmUserMaster.dispose();
frmParent.setVisible(true);
}
}
/**
* Click button OK
*/
private void actOK() {
// Validate register data
if (!validateRegisterData()) {
return;
}
ComboItem selectedUserType = (ComboItem) cbxUserType.getSelectedItem();
// Edit SelectedData
editingUserDto.setUserName(txtUserName.getText());
editingUserDto.setPassword(txtPassword.getText());
editingUserDto.setUserType(StringUtil.cnvToString(selectedUserType.getValue()));
editingUserDto.setNickName(txtNickName.getText());
editingUserDto.setIsChange(true);
// Edit Data List
Integer selectedIndex = null;
if (EditMode.NEW.equals(editMode)) {
// Mode New
userDtoList.add(editingUserDto);
} else {
// Mode Modify
selectedIndex = tblInfo.getSelectedRow();
userDtoList.set(selectedIndex, editingUserDto);
}
// Edit table's data
editTableData(selectedIndex);
// Clear RegisterArea
clearRegisterArea();
// Enable SearchArea
enableSearchArea();
}
/**
* Click button Cancel
*/
private void actCancel() {
// Show confirm message
if (MessageUtil.showConfirmMessage(frmUserMaster, Constants.MSG_CANCEL_CONFIRM)) {
// Clear RegisterArea
clearRegisterArea();
// Enable SearchArea
enableSearchArea();
}
}
/**
* Edit table's data
*
* @param selectedIndex
*/
private void editTableData(Integer selectedIndex) {
// Clear data
DefaultTableModel tableModel = (DefaultTableModel) tblInfo.getModel();
tableModel.setRowCount(0);
// Edit table's data from List
for (UserDto userDto : userDtoList) {
// Edit row data
List<Object> rowData = new ArrayList<Object>();
rowData.add(userDto.getUserName());
rowData.add(UserType.getUserType(userDto.getUserType()));
rowData.add(userDto.getNickName());
// Add row
tableModel.addRow(rowData.toArray());
}
// Select row
if (selectedIndex != null) {
tblInfo.setRowSelectionInterval(selectedIndex, selectedIndex);
}
}
/**
* Validate register data
*
* @return Boolean True:OK / False:Error
*/
private Boolean validateRegisterData() {
// User Name
if (StringUtil.isNullOrEmpty(txtUserName.getText())) {
MessageUtil.showErrorMessage(frmUserMaster, "Please input [User Name].");
return false;
}
// Password
if (StringUtil.isNullOrEmpty(txtPassword.getText())) {
MessageUtil.showErrorMessage(frmUserMaster, "Please input [Password].");
return false;
}
return true;
}
/**
* Validate delete data
*
* @return Boolean True:OK / False:Error
*/
private Boolean validateDeleteData() {
return true;
}
/**
* Clear RegisterArea
*/
private void clearRegisterArea() {
txtUserName.setText("");
txtPassword.setText("");
cbxUserType.setSelectedIndex(0);
txtNickName.setText("");
editMode = null;
}
/**
* Enable SearchArea
*/
private void enableSearchArea() {
ComponentUtil.enableComponents(searchPanel, true);
ComponentUtil.enableComponents(registerPanel, false);
btnBack.setEnabled(true);
}
/**
* Enable RegisterArea
*/
private void enableRegisterArea() {
ComponentUtil.enableComponents(searchPanel, false);
ComponentUtil.enableComponents(registerPanel, true);
btnBack.setEnabled(true);
// Focus User Name
txtUserName.requestFocus();
}
/**
* @return the frmUserMaster
*/
public JFrame getFrame() {
return frmUserMaster;
}
} | {
"redpajama_set_name": "RedPajamaGithub"
} | 8,033 |
Please fill in the form below to contact us or have us add you to our mailing list.
Please, if you move, let the Pioneers know directly so we can update our mailing list. We waste a lot of money in postage and mailing costs for each Reunion on returned newsletters. Even if you inform the Council, you will still need to tell us as they don't share their mailing information. - Thank You! | {
"redpajama_set_name": "RedPajamaC4"
} | 8,541 |
Neil D. Opdyke (February 7, 1933 – April 7, 2019) was an American geologist.
He was the Distinguished Professor Emeritus in the Department of Geological Sciences at the University of Florida in Gainesville, Florida, United States. He was previously with the Lamont-Doherty Geological Observatory of Columbia University, including a stint as Director. He was well known for his groundbreaking research in the 1950s on paleoclimate and continental drift, with Keith Runcorn, and later in Africa and Australia with Mike McElhinny and others. Back in the U.S. in the mid-1960s, he worked on the documentation of magnetic reversals in deep-sea sediments, which led to proof of the Vine–Matthews–Morley hypothesis, the governing paradigm for marine magnetic anomalies.
In 1969, Opdyke and Ken Henry used marine core data for a convincing test of the GAD (geocentric axial dipole) hypothesis that is central to the use of paleomagnetism in continental reconstruction. Opdyke's work with Nick Shackleton in 1973 marked the beginning of the integration of oxygen isotope stratigraphy and magnetostratigraphy that has led to current methods of tuning timescales. He pioneered magnetic stratigraphy in terrestrial (non-marine) sediments and produced some of the most impressive records, notably from Pakistan and the southwestern United States. These studies led to a vastly improved time frame for vertebrate evolution and allowed the documentation of mammal migration.
Research interests
Paleomagnetism and its application to tectonics and magnetostratigraphy.
Paleoclimatology and paleogeography of the Phanerozoic.
Education
B.A., Columbia University, 1955
D.Sc., University of Newcastle upon Tyne, 1982
Ph.D., Durham University, England, 1958
Memberships and distinctions
European Geosciences Union Petrus Peregrinus Medal 2008 for pioneering work in magnetic stratigraphy of marine and continental sediments and its contribution to our understanding of the history of the magnetic field and its geological applications.
National Academy of Sciences, 1996
American Academy of Arts and Sciences, 1998
Geological Society of America, fellow
American Association for the Advancement of Science, Fellow
American Geophysical Union, Fellow
American Geophysical Union John Adam Fleming Medal 1996
References
External links
Oral history interview transcript with Neil D. Opdyke on 17 March 1997, American Institute of Physics, Niels Bohr Library & Archives - Session I
Oral history interview transcript with Neil D. Opdyke on 11 July 1997, American Institute of Physics, Niels Bohr Library & Archives - Session II
1933 births
2019 deaths
American geologists
Columbia College (New York) alumni
Columbia University faculty
University of Florida faculty
Fellows of the American Geophysical Union
Fellows of the American Association for the Advancement of Science
Fellows of the Geological Society of America
Members of the United States National Academy of Sciences
Alumni of King's College, Newcastle | {
"redpajama_set_name": "RedPajamaWikipedia"
} | 3,118 |
{"url":"https:\/\/www.pinecone.io\/learn\/class-activation-maps\/","text":"# How to Explain ConvNet Predictions Using Class Activation Maps\n\nHave you ever used deep learning to solve computer vision tasks? If so, you probably trained a convolutional neural network (ConvNet or CNN) for tasks such as image classification and visual question answering.\n\nIn practice, ConvNets are often viewed as black boxes that take in a dataset and give a task-specific output: predictions in image classification, captions in image captioning, and more. For example, in image classification, you\u2019ll optimize the model for prediction accuracy.\n\nBut how do you know which parts of the image the network was looking at when it made a prediction? And how do you go from black box to interpretable models?\n\nAdding a layer of explainability to ConvNets can be helpful in applications such as medical imaging for disease prognosis. For example, consider a classification model trained on medical images, namely, brain scans and X-rays, to predict the presence or absence of a medical condition. Ensuring that the model is using the relevant parts of the images for its predictions makes it more trustworthy than a black box model with a high prediction accuracy.\n\nClass activation maps can help explain the predictions of a ConvNet. Class activation maps, commonly called CAMs, are class-discriminative saliency maps. While saliency maps give information on the most important parts of an image for a particular class, class-discriminative saliency maps help distinguish between classes.\n\nIn this tutorial, you\u2019ll learn how class activation maps (CAM) and their generalizations, Grad-CAM and Grad-CAM++, can be used to explain a ConvNet. 
You\u2019ll then learn how to generate class activation maps in PyTorch.\n\nLet\u2019s begin!\n\n## Class Activation Maps Explained\n\nIn general, a ConvNet consists of a series of convolutional layers, each consisting of a set of filters, followed by fully connected layers.\n\nActivation maps indicate the salient regions of an image for a particular prediction. Class activation map (CAM) uses a global average pooling (GAP) layer after the last convolutional layer. Let\u2019s understand how this works.\n\nGAP Layer After the Last CONV Layer (Image by the author)\n\nIf there are n filters in the last convolutional layer, then there are n feature maps. The activation map for a particular output class is the weighted combination of all the n feature maps.\n\nSo how do we learn these weights?\n\nStep 1: Apply global average pooling to each of the feature maps.\n\nThe average value of all pixels in a feature map is its global average. Here\u2019s an example of how global average pooling works. The qualifier global means that the average is computed over all pixel locations in the feature map.\n\nHow GAP Works - An Example (Image by the author)\n\nAfter computing the global average for each of the feature maps, we\u2019ll have n scalars, $k_1, k_2, \u2026, k_n$. Let\u2019s call them GAP outputs.\n\nFrom Feature Maps to Scalars through GAP (Image by the author)\n\nStep 2: The next step is to learn a linear model from these GAP outputs onto the class labels. For each of the N output classes, we should learn a model with weights $w_1, w_2,\u2026,w_n$. Therefore, we\u2019ll have to learn N linear models in all.\n\nLinear Models from GAP Output onto the Class Labels (Image by the author)\n\nStep 3: Once we\u2019ve obtained the n weights for each of the N classes, we can weight the feature maps to generate the class activation maps. 
Therefore, different weighted combinations of the same set of feature maps give the class activation maps for the different classes.\n\nClass Activation Maps as Weighted Combinations of Feature Maps (Image by the author)\n\nMathematically, the class score for an output class c in the CAM model is given by:\n\n\\begin{align} y^c = \\sum_{k} {w_{k}}^c \\frac{1}{Z}\\sum_{i}\\sum_{j} {A_{ij}}^k \\text{ }\\text{ }(1)\\\\\nA_{ij}^k: \\text{ }pixel\\text{ } at\\text{ } location\\text{ } (i,j)\\text{ } in\\text{ } the\\text{ } k-th\\text{ } feature\\text{ } map\\\\\nZ: total\\text{ }number\\text{ }of\\text{ }pixels\\text{ }in\\text{ }the\\text{ }feature\\text{ }map\\\\\n{w_k}^c: weight\\text{ }of\\text{ }the\\text{ }k-th\\text{ }feature\\text{ }map\\text{ }for\\text{ }class \\text{ }c \\end{align}\n\nEven though we need to train N linear models to learn the weights, CAM does not require a backward pass through the network again. A backward pass through the layers of the network is more expensive than learning a linear mapping.\n\nCAM uses the inherent localization capability of the convolutional layers, so the activation maps can be generated without any positional supervision on the location of the target in the image.\n\n### Limitations of CAM\n\nUsing class activation maps involves the overhead of learning N linear models to learn the weights $w_1, w_2,\u2026, w_n$ for each of the N classes. Training a ConvNet is a computationally intensive task in itself. This overhead can be a limiting factor when both n, the number of filters in the last convolutional layer, and N, the number of output classes, are especially large.\n\nThe introduction of the global average pooling (GAP) layer after the last convolutional layer imposes a restriction on the ConvNet architecture. Though CAM is helpful in explaining the predictions in an image classification task, it cannot be used for computer vision tasks such as visual question answering (VQA). 
As explained, the GAP layer outputs are scalars that are global averages of the preceding convolutional layer\u2019s feature maps. There is no known performance degradation for image classification. However, this requirement for the GAP layer after the convolutional layers may be too restrictive for tasks like VQA.\n\n## How Gradient-Weighted Class Activation Maps Work\n\nAs mentioned, the key limitation of CAM is the overhead of learning the weights for linear mapping. Gradient-weighted class activation map (Grad-CAM) is a generalization to CAM that overcomes this limitation.\n\nLet\u2019s start by making a simple substitution in the equation for output class score $y^c$ in CAM.\n\n\\begin{align} Let\\text{ }F^k = \\frac{1}{Z}\\sum_{i}\\sum_{j} {A_{ij}}^k \\\\\nSubstituting\\text{ }F^k\\text{ }in\\text{ }eqn(1), y^c = \\sum_{k} {w_{k}}^cF^k\\\\\n\\end{align}\n\nNext, let\u2019s compute the derivative of the output class score with respect to the pixels $A_{i,j}$ in the feature map.\n\n\\begin{align} \\frac{\\partial{y^c}}{\\partial{F^k}} = {w_{k}}^c\\text{ }(2)\\\\\n\\frac{\\partial{y^c}}{\\partial{F^k}} = \\frac{\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}}{\\frac{\\partial{F^k}}{\\partial{{A_{ij}}^k}}}\\\\\n\\frac{\\partial{F^k}}{\\partial{{A_{ij}}^k}} = \\frac{1}{Z}\\\\\n\\frac{\\partial{y^c}}{\\partial{F^k}} = \\frac{\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}}{\\frac{1}{Z}}\\\\\n\\frac{\\partial{y^c}}{\\partial{F^k}} = \\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}.{Z}\\text{ }(3)\\\\\nFrom \\text{ }(2) \\text{ }and\\text{ } (3),\\text{ } we \\text{ }have,\\\\\n\\frac{\\partial{y^c}}{\\partial{F^k}} = \\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}.{Z} = {w_{k}}^c \\end{align}\n\nSumming the above quantities over all the pixels in the feature map, we have the following:\n\n\\begin{align} \\sum_{i}\\sum_{j}{w_{k}}^c = \\sum_{i}\\sum_{j}\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}.{Z}\\\\\n{Z}.{w_{k}}^c = 
{Z}.\\sum_{i}\\sum_{j}\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}\\\\\n\nAs seen in the above equation, the weights $w_k$ evaluate to the gradient of the output score with respect to the kth feature map. This means there\u2019s no need to retrain N linear models to learn the weights!\n\nWe\u2019ve summed over all pixel locations (i,j). Adding the normalization factor 1\/Z back in, we get:\n\n\\begin{align} {w_{k}}^c = \\frac{1}{Z}\\sum_{i}\\sum_{j}\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}} \\end{align}\n\nIn essence, Grad-CAM uses the global average of the gradients flowing into the feature maps of the last convolutional layer.\n\nHow Grad-CAM Works (Image by the author)\n\nTo retain only the positive correlations in the final activation map, we apply the ReLU function on the weighted combination of feature maps.\n\nReLU function: f(x) = ReLU(x) = x if x >= 0 and 0 otherwise. The ReLU function filters all the negative inputs and passes the positive inputs as they are.\n\nGiven that the gradients of the output with respect to the feature maps identify salient patches in the image, what do negative gradients signify?\n\n\\begin{align} {w_{k}}^c = \\frac{1}{Z}\\sum_{i}\\sum_{j}-\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}} \\end{align}\n\nUsing negative gradients in the weights will give those patches in the image that adversarially affect a particular prediction. For example, in an image containing a cat and a dog, if the target class is cat, then the pixel patch corresponding to the dog class affects prediction.\n\nGrad-CAM Counterfactual Explanations (Image Source: arxiv)\n\nTherefore, by identifying and removing these patches from the images, we can suppress the adversarial effect on prediction. As a result, the confidence of prediction increases.\n\nEven though Grad-CAM provides activation maps with good target localization, it fails to capture certain minute details. 
Pixel-space gradient visualization techniques, which were used in earlier approaches to explainability, can provide more granular information on which pixels have the most influence.\n\nTo obtain a detailed activation map, especially to understand misclassifications among similar classes, we can use guided backpropagation in conjunction with Grad-CAM. This approach is called guided Grad-CAM.\n\nThe concept of guided backpropagation was introduced in [2]. Given a feedforward neural network, the influence of an input x_j on a hidden layer unit h_i is given by the gradient of h_i with respect to x_j. This gradient can be interpreted as follows:\n\n\u2022 a zero-valued gradient indicates no influence,\n\u2022 a positive gradient indicates a significant positive influence, and\n\u2022 a negative gradient indicates negative influence.\n\nSo to understand the fine-grained details, we only backpropagate along the path with positive gradients. Since this approach uses information from higher layers during the backprop, it\u2019s called guided backpropagation.\n\n\u2022 Given that we have the gradients of the output score with respect to the feature maps, Grad-CAM uses these gradients as the weights of the feature maps. This eliminates the need to retrain N models to explain the ConvNet\u2019s prediction.\n\u2022 As we have the gradients of the task-specific output with respect to the feature maps, Grad-CAM can be used for all computer vision tasks such as visual question answering and image captioning.\n\nWhen there are multiple occurrences of the target class within a single image, the spatial footprint of each of the occurrences is substantially lower. 
Grad-CAM fails to provide convincing explanations under such \u201clow spatial footprint\u201d conditions.\n\nGrad-CAM++ provides better localization when the targets have a low spatial footprint in the images.\n\nLet\u2019s start by reviewing the equation for the Grad-CAM weights.\n\n\\begin{align} {w_{k}}^c = \\frac{1}{Z}\\sum_{i}\\sum_{j}\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}} \\end{align}\n\nFrom the above equation, we see that Grad-CAM scales all pixel gradients $\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}$ by the same factor 1\/Z. This means that each pixel gradient has the same significance in generating the final activation map. However, in images where the target has a low spatial footprint, the pixel gradients that actually help the prediction should have greater significance.\n\nTo achieve this, Grad-CAM++ proposes the following:\n\n\u2022 The pixel gradients that are important for a particular class should be scaled by a larger factor, and\n\u2022 The pixel gradients that do not contribute to a particular class prediction should be scaled by a smaller factor.\n\nMathematically, this can be expressed as:\n\n\\begin{align} {w_{k}}^c = \\sum_{i}\\sum_{j}\\alpha_{ij}^{kc}ReLU\\left(\\frac{\\partial{y^c}}{\\partial{{A_{ij}}^k}}\\right) \\end{align}\n\nLet\u2019s parse what $\\alpha_{ij}^{kc}$ means.\n\n\u2022 $\\alpha^{kc}$ denotes the values of \u03b1 for the k-th feature map corresponding to the output class c.\n\u2022 $\\alpha_{ij}^{kc}$ is the value of \u03b1 at pixel location (i,j) for the k-th feature map corresponding to the output class c.\n\nApplying the ReLU function on the gradients ensures that only the gradients that have a positive contribution to the class prediction are retained.\n\nWorking out the math like we did for Grad-CAM, the values of $\\alpha_{ij}$ can be given by the following closed-form expression:\n\n\\begin{align} \\alpha_{ij}^{kc} = 
\section{Introduction}
In software engineering practice, the later a bug is discovered and fixed, the more costly it is for the project~\cite{kumar2017software}. However, due to limited resources and an increasing number of reported defects, it is not possible to fix all the bugs before the software is released. Therefore, practitioners frequently face the decision of which bugs to resolve now and which to defer to the next release.
Bug prioritization \textcolor{red}{and triage} tasks mainly depend on the quality of the reported bugs while keeping in mind that not all reports are consistent. Previous studies discussed the evidence for the mismatch between developers' and users' understanding of the bugs~\cite{Bettenburg2008, umer2019}. Moreover, bug severity information is not reliable since 51\% of the duplicate reported bugs have inconsistent severity labels~\cite{tian2016}. Data on the bug fixing time is not reliable either, that is, it does not indicate the exact amount of working hours on a specific bug in a continuous manner~\cite{Shirin2020}.
Unlike many subjective characteristics of the bugs, blocking bugs are determined by a developer in the phase of defect resolution. In a typical flow of a bug report, an end-user or a developer reports a bug or an issue. Subsequently, a triager assigns it to a developer, or a developer claims its possession. Ultimately, after they find a resolution for the bug, it is verified by another developer and gets closed. However, in the case of a blocking bug, the process is interrupted~\cite{Garcia2018}. Blocking bugs have higher complexity than non-blocking bugs, require more time to get fixed, are associated with larger codebases (in terms of lines of code (LOC)), and are also hard to predict~\cite{Garcia2018, Goyal2017, Wang2018}. As the number of blocking bugs increases, resource planning becomes a tedious task, and developers defer many bugs to a later release. The accumulation of lingering bugs\textemdash the bugs that are reported but not resolved in the current release\textemdash both degrades the software quality and increases the maintenance cost~\cite{Akbarinasaji2017}. Therefore, understanding the influence of the bug dependency graph (BDG) \textcolor{red}{together with other bug features} on software maintenance is essential.
\textcolor{red}{A common approach to bug triage and prioritization is to apply different machine learning algorithms and report the performance of the resulting bug assignment or prioritization~\cite{uddin2017, alenezi2013efficient,anvik2006should,xuan2017automatic}. However, in most cases, these studies do not consider the effect of bug dependency in their suggested policy. Moreover, the effect of a suggested algorithm needs to be explored at the exact time a bug is assigned or prioritized. For instance, if a bug is assigned to a developer who has previous experience with a component but is overwhelmed with previously assigned tasks, the algorithm should be able to automatically propose an alternative assignee for the open bug. Without a machine that regenerates the exact characteristics of the open bugs and the available developers, it is infeasible to propose such a realistic decision. Accordingly, we propose the modular Wayback Machine, which regenerates past events for any given timestamp and can easily be adopted by researchers to investigate the performance of their proposed bug triage or prioritization algorithms. }
Another important missing link in these previous studies is to recognize the actual situation in the real world as a baseline. It is critical to know how the issue tracking system (ITS) evolves in terms of complexity, as it enables practitioners to automate the decision-making process and to trace back the actual decisions in the bug triage process. \textcolor{red}{The idea of the Wayback Machine comes from the digital archive of the World Wide Web, via which we can explore the status and content of web pages at previous timestamps~\footnote{\url{https://archive.org/web/}}. To this end, we construct a Wayback machine with which practitioners are able to explore the past events in an ITS.} Besides, we simulate an extensive list of \textcolor{red}{prioritization and triage strategies over a BDG to see whether the proposed event-regenerator machine can provide insights that were not possible beforehand.} Moreover, we consider using a discrete-event system simulation approach and evaluate the performance of the models using ten different metrics. Accordingly, our research questions are two-fold: first, understanding and rebuilding the history of the issue tracking system; and second, \textcolor{red}{checking the validity of the Wayback machine through exploring prioritization and triage strategies. We note that these strategies can be substituted with any bug prioritization or triage algorithm in the modular Wayback Machine. Thus}, we structure our study along the following five research questions, divided into two categories:
\begin{RQquestion}
\textbf{RQ1a: How do open-source software systems evolve in terms of bug reports, dependency, and lingering bugs?}
\end{RQquestion}
\begin{RQanswer}
We explore the past events in the ITS through a novel Wayback machine. Given the mined data from any ITS, this machine provides us with the information related to the bugs at any timestamp in the past. Hence, we may query different characteristics of the bugs and explore the reason behind each bug prioritization decision in the past, e.g., what kinds of bugs we had and why a developer chose to resolve a specific bug over others. We demonstrate the number of bug reports, the evolution of the BDG, and their effect on the lingering bugs. Our findings show that not all projects are the same, and not all triagers follow the same or a similar procedure to assign a bug. Depending on how accurately the dependency between bugs is determined and how significant the effect of this dependency is for triagers, we may reach a different graph complexity and a different likelihood of future lingering bugs.
\end{RQanswer}
\begin{RQquestion}
\textbf{RQ1b: How do the characteristics of the resolved bugs change over time?}
\end{RQquestion}
\begin{RQanswer}
We further explore the importance of the bug dependencies for triagers. We analyze a series of observed sequences through the Wayback machine to see how triagers regard the degree and depth of a bug when prioritizing it. Our findings illustrate \textcolor{red}{that in some issue tracking systems, the dependency of the bugs is mainly disregarded and loses its importance. On the other hand, in the issue tracking systems where the bug dependency practice is taken seriously, the principal role of depth and degree is noticeable when comparing their averages for solved and postponed bugs.} Whenever triagers consider those characteristics, they end up with a less complicated dependency graph, meaning a lower probability of lingering bugs in the long run.
\end{RQanswer}
\begin{RQquestion}
\textbf{RQ2a: How do different bug prioritization strategies perform in terms of evolutionary metrics?}
\end{RQquestion}
\begin{RQanswer}
After creating the Wayback machine to review past prioritization decisions, we explore different strategies and compare their performance with the actual case. \textcolor{red}{The main aim of the RQ2s is to validate the accuracy of the created Wayback Machine through different machine learning and rule-based approaches. To this end, we first define evolutionary metrics (e.g., the depth and degree of the BDG, and the deviation from the actual assignment). These metrics cannot be obtained through static use of machine learning algorithms \textemdash i.e., training a model on tabular information and reporting the performance without time consideration. Our observations through the simulation experiments indicate that there is no inconsistency between external indices and rule-based strategies, meaning that the Wayback Machine performs as expected.}
\end{RQanswer}
\begin{RQquestion}
\textbf{RQ2b: How do different bug triage strategies perform in terms of evolutionary metrics?}
\end{RQquestion}
\begin{RQanswer}
\textcolor{red}{We further explore the performance of well-established bug triage algorithms. We add a bug triage module to the Wayback machine and compare it with the actual bug assignment. Moreover, we report the performance of those algorithms based on the evolutionary metrics together with the traditional, static accuracy-related metrics. The results of the experiment illustrate }
\end{RQanswer}
We organized the rest of the paper as follows. Section~\ref{sec:background} briefly discusses the relevant literature on bug prioritization, triage and dependency graphs, which is followed by methodology and dataset description in Section~\ref{sec:research-methodology}. Section~\ref{sec:Wayback} explores past decisions of triagers via a proposed Wayback machine. \textcolor{red}{Section \ref{sec:results} investigates the impact of the strategies that take into account the evolutionary characteristics of the ITS in past triage and prioritization decisions.} Finally, Section \ref{sec:threats} describes the limitations and threats to validity, and Section \ref{sec:conclusion} concludes the paper.
\section{Research Methodology}\label{sec:research-methodology}
We examine the evolution of the bugs in software repository to help the understanding of bug prioritization. For this purpose, we use reported bug information extracted from the ITS of three open-source projects, namely Mozilla, Eclipse, and LibreOffice, covering a period of 10 years from January 2010 to December 2019. We construct a BDG based on the daily reported bugs (nodes) and daily blocking information (arcs). A BDG is a directed acyclic graph that does not contain any loop in terms of blocking information, i.e., a bug cannot block another bug and be blocked by the same bug.
We track BDG's evolution through complexity metrics, \textit{depth} ($\theta$) of a node defined as the longest directed path between the given node and other nodes in the graph, the \textit{degree} ($\delta$) of a node that is the number of its outgoing arcs, the number of nodes ($n$), and the number of arcs ($m$) in a graph. Accordingly, the maximum degree and depth of a graph cannot exceed $n-1$. As we sort all the information chronologically, we start adding or removing nodes and arcs at each timestamp and measuring the changes in metrics from time $t$ to time $t+1$. The information uncovers the evolution of the BDG in the project.
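To make these definitions concrete, the following minimal Python sketch (with hypothetical bug IDs, not taken from the studied projects) computes $\delta$, $\theta$, $n$, and $m$ from an adjacency list of blocking arcs:

```python
# Minimal sketch: complexity metrics of a bug dependency graph (BDG).
# Arcs point from a blocking bug to the bug it blocks; the graph is a DAG.

def degree(arcs, node):
    """delta: number of outgoing arcs of `node`."""
    return len(arcs.get(node, []))

def depth(arcs, node, _memo=None):
    """theta: length of the longest directed path ending at `node`."""
    if _memo is None:
        _memo = {}
    if node in _memo:
        return _memo[node]
    parents = [u for u, vs in arcs.items() if node in vs]
    _memo[node] = 0 if not parents else 1 + max(depth(arcs, p, _memo) for p in parents)
    return _memo[node]

def graph_metrics(arcs, nodes):
    n = len(nodes)                            # number of nodes
    m = sum(len(vs) for vs in arcs.values())  # number of arcs
    return n, m

# Hypothetical snapshot at time t: bug 1 blocks bugs 2 and 3; bug 2 blocks bug 3.
nodes = {1, 2, 3, 4}
arcs = {1: [2, 3], 2: [3]}
print(graph_metrics(arcs, nodes))        # (4, 3)
print(degree(arcs, 1), depth(arcs, 3))   # 2 2
```

A full implementation would cache the depths and update only the bugs affected by each event rather than recomputing them from scratch.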
To accurately trace back the history of the actual software project, we also incorporate bug report attributes such as the number of comments a bug receives and its severity and priority in the BDG. \textcolor{red}{We further use these attributes and create Machine Learning algorithms and rule-based approaches to validate the Wayback machine in a controlled experiment.} The historical data of Bugzilla for Mozilla, Eclipse JDT, and LibreOffice projects indicates many solo defects that neither block nor are blocked by other bugs, whereas, in the same project, many densely connected sub-graphs gradually accumulate. Our evolutionary model, referred to as a Wayback machine, can trace back to the time when each of these sub-graphs developed. It provides a clear insight into the exact time when \textcolor{red}{an inappropriate prioritization/triage} resulted in either lingering bugs or an unbalanced network. We further simulate the network's behavior using different bug prioritization strategies and compare them in terms of various \textcolor{red}{evolutionary metrics}.
\textcolor{red}{\subsection{Motivating example}}
\textcolor{red}{There are three important aspect of bug prioritization and bug triage that are overlooked in many studies: bug dependency, time, and decision outcome. Here we describe why covering each in defect studies is of importance.}
\paragraph{Bug dependency}
\textcolor{red}{Figure~\ref{fig:BDG} shows the dependency graph of the bugs, $b_i \in \{b_1, b_2, \dots, b_9\}$. In this example, $b_1$ and $b_2$ are blocking bugs for $b_4$, meaning that the blocked bug cannot be solved unless its parent nodes are fixed. In a sparse BDG, we may observe a plethora of solo bugs (e.g., see $b_5$ and $b_9$), which neither block nor are blocked by others. On the other hand, having many blocked bugs in the system may postpone the bug fixing process and impose lingering bugs on the system~\cite{Shirin2020}. If a triager disregards the dependency of the bugs while prioritizing them, he/she may arrive at a decision that is infeasible in practice. Other important factors in the BDG are its number of subgraphs and its bugs' depth and degree. Figure~\ref{fig:BDG} has 4 subgraphs, $\mathcal{S} = \{[1,2,3,4,6],[5],[7,8],[9]\}$. Also, $b_6$ has the highest depth, 2, and $b_1$ has the highest degree, 2. The degree shows the number of directly blocked bugs, and the depth reflects the chain of parents and grandparents of a bug in the graph. A higher depth may postpone a bug's fixing time due to the high number of its ancestors. Accordingly, we closely track the dependency of the bugs during the bug prioritization process.}
\begin{figure}[!htb]
\centering
\begin{tikzpicture}[b/.style={circle,draw,execute at begin node={$b_{#1}$},
alias=b-#1,label={[rectangle,draw=none,overlay,alias=l-#1]right:{$[s_{#1},c_{#1}]$}}}]
\node[matrix of nodes,column sep=1em,row sep=2em]{
& & |[b=1]|& & |[b=2]| & & &|[b=7]|\\
& |[b=3]|& & |[b=4]| & & & & |[b=8]| \\
|[b=5]|& & |[b=6]| & & & &|[b=9]| &\\
};
\path[-stealth] foreach \X/\Y in {1/3,3/6,1/4,4/6,2/4,7/8} {(b-\X) edge (b-\Y)};
\path (l-7.east);
\end{tikzpicture}
\caption{A typical BDG}
\label{fig:BDG}
\end{figure}
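The values stated above can be verified programmatically; the following plain-Python sketch encodes the arcs of Figure~\ref{fig:BDG} and recovers the degree of $b_1$, the depth of $b_6$, and the four subgraphs:

```python
# Arcs of the example BDG: (blocking bug, blocked bug).
arcs = [(1, 3), (3, 6), (1, 4), (4, 6), (2, 4), (7, 8)]
nodes = range(1, 10)  # b1 .. b9

# Degree: number of outgoing arcs per node.
out_deg = {n: sum(1 for u, _ in arcs if u == n) for n in nodes}

def depth(n):
    """Longest directed path ending at node n."""
    parents = [u for u, v in arcs if v == n]
    return 0 if not parents else 1 + max(depth(p) for p in parents)

# Weakly connected subgraphs via union-find over the undirected arcs.
parent = {n: n for n in nodes}
def find(x):
    while parent[x] != x:
        x = parent[x]
    return x
for u, v in arcs:
    parent[find(u)] = find(v)
subgraphs = len({find(n) for n in nodes})

print(out_deg[1], depth(6), subgraphs)  # 2 2 4
```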
\paragraph{Time} \textcolor{red}{Another important factor in bug prioritization is time. Most previous bug prioritization studies that use bug history without simulation do not consider the evolutionary nature of the ITS. For instance, if a model decides to solve bug $i$ prior to bug $j$ at time $t$, this decision should be made while the information of bugs $i$ and $j$, as well as of all other bugs, is consistent with time $t$. The severity of bug $i$, $s_i$, evolves over time. Therefore, if we propose an approach that uses severity as a feature affecting bug prioritization, this severity should be the exact severity of the bug at time $t$. Moreover, the bug might be blocked by another one in the future while not being blocked at time $t$; we need to consider the exact dependency at solving time. This can be generalized to any other evolutionary feature of a bug. Lastly, when prioritizing a bug, we need to know the exact list of open bugs at that time.}
\paragraph{Decision outcome} \textcolor{red}{We cannot prioritize or triage all the available bugs without considering their opening, closing, and reopening status. Merely achieving a high accuracy in bug assignment or prioritization does not mean that a model is applicable to the real world. Assume that we assign bug $b_i$ to developer $d_j$ at time $t$. This assignment may be considered accurate as the developer has previous experience with bugs of the same type/component. However, the developer may be overloaded with previously assigned bugs and unable to claim possession of a new bug at time $t$. In such a case, a second developer who is fairly knowledgeable in the field can start working on the new bug to avoid bug accumulation in the ITS. Therefore, knowing the schedule and current load of developers becomes significant. Accordingly, we define a set of evolutionary metrics, e.g., the number of overdue bugs, that capture the real impact of a decision at each timestamp. We also check the assignment time of the developers and compare each strategy with the actual case to see whether the strategy mimics the real world.}
\textcolor{red}{Accordingly, any proposed algorithm should be evaluated on a stable past-event regenerator that captures the evolutionary history of the bugs. The ITS Wayback Machine, coded in Python, serves this purpose through its modular structure. Different defect prioritization or triage algorithms can be installed on it, while the machine uses the chronological data and produces the visual and tabular outputs. }
\subsection{Data collection}
We use bug data information from Bugzilla, an issue tracking system for open-source software applications. The dataset was originally extracted from the Mozilla, Eclipse, and LibreOffice ITSs and contains the bugs reported for these projects between January 2010 and December 2019. We note that LibreOffice was forked in 2010 from OpenOffice, and its first reported bug was in August 2010. To collect the raw data from the repository, we used the Bugzilla REST API to extract both the general information of all bugs and the history of all metadata changes for each bug~\footnote{\url{https://wiki.mozilla.org/Bugzilla:REST_API}}. The collected information includes creation time, current status, type, severity, priority, the number of comments, resolution, and component. On the other hand, the evolutionary information is not obtainable via the general information of a bug. Consequently, we extract the formal relationship between the bugs by mining the metadata of their change history, along with their timestamps. These relationships take the form of duplication and blocking.
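As an illustration of the extraction step, the sketch below parses a response shaped like the Bugzilla REST API's \texttt{GET /rest/bug} payload; the concrete values are hand-made stand-ins, not real Mozilla data, and field availability should be checked against the API documentation:

```python
# Sketch: parsing a Bugzilla REST response (shape of GET /rest/bug?id=...).
# The payload below is a hand-made stand-in, not real project data.
sample = {
    "bugs": [{
        "id": 123456,
        "creation_time": "2015-03-01T10:00:00Z",
        "status": "NEW",
        "severity": "normal",
        "priority": "P3",
        "component": "General",
        "blocks": [123460],          # bugs this bug blocks
        "depends_on": [123450, 123451],  # bugs blocking this bug
    }]
}

def extract_record(payload):
    """Keep only the fields used to build the evolutionary database."""
    bug = payload["bugs"][0]
    keys = ("id", "creation_time", "status", "severity",
            "priority", "component", "blocks", "depends_on")
    return {k: bug[k] for k in keys}

record = extract_record(sample)
print(record["id"], len(record["depends_on"]))  # 123456 2
```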
We examine both blocking and blocked bugs to see whether their creation was before or after 2010. If a blocking or dependent bug was created before that time, we re-mine all its information and add the ``old'' bug to the current database since they could affect the time to solve the corresponding bugs. Therefore, our database captures a full picture of bug dependency, whether it belongs to the targeted dates or earlier. For older bugs, we ignore the blocking information among themselves; however, we consider their dependency effects on targeted bugs between 2010 and 2020.
Next, we construct an evolutionary database. This database includes any change in the reported bugs along with their timestamps. Typically, these data cannot be obtained merely from bugs' information, and it requires mining bugs' history as well. While extracting historical data from Bugzilla, we obtain both missing and contradictory information. We handle the problem by combining the information of duplicate bugs and their historical metadata changes. Lastly, we sort the events' logs by their timestamps and design a database that includes bugs' information in chronological order.
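Sorting the merged event logs is straightforward because ISO-8601 timestamps sort lexicographically; a minimal sketch with hypothetical timestamps and bug IDs:

```python
# Sketch: building the evolutionary database by sorting event logs chronologically.
# Timestamps and bug IDs are hypothetical.
events = [
    {"time": "2015-03-05T09:00:00", "bug": 2, "status": "resolved"},
    {"time": "2015-03-01T10:00:00", "bug": 1, "status": "introduced"},
    {"time": "2015-03-02T12:00:00", "bug": 2, "status": "introduced"},
    {"time": "2015-03-03T08:30:00", "bug": 1, "status": "blocks", "other": 2},
]

# ISO-8601 strings sort lexicographically, so a plain sort on `time` suffices.
evolutionary_db = sorted(events, key=lambda e: e["time"])
print([e["status"] for e in evolutionary_db])
# ['introduced', 'introduced', 'blocks', 'resolved']
```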
\subsection{Descriptive analysis}
Table~\ref{tab:bug_info} shows the most relevant information regarding the extracted datasets. The number of publicly available bugs reported to Bugzilla between 2010 and 2020 for Mozilla, Eclipse, and LibreOffice is 100,475, 16,228, and 70,168, respectively. We choose these different projects for their diversities in terms of bugs' attributes. After mining those bugs, we encounter some older bugs that block or are blocked by target bugs. We extract the information of the bugs older than 2010 if they are related to the target bugs. A complete report of their priority, severity, number of comments, and blocking information is provided in the table as well.
\begin{table}[!ht]
\centering
\caption{Information related to the bugs extracted from Bugzilla for Mozilla, Eclipse, and LibreOffice projects\label{tab:bug_info}}%
\resizebox{\linewidth}{!}{
\begin{tabular}{lrrrrr}
\toprule
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{Mozilla}} & \multicolumn{2}{c}{\textbf{Eclipse}} & \multicolumn{1}{c}{\textbf{LibreOffice}} \\
\cline{2-3}\cline{4-5}\cline{6-6}
\textbf{Bug information} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{01/01/2010 -}\\ \textbf{31/11/2019}\\ \textbf{Targeted bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{09/06/1999 -} \\ \textbf{31/11/2009}\\ \textbf{Older bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{01/01/2010 -}\\ \textbf{31/11/2019}\\ \textbf{Targeted bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{09/06/1999 -} \\ \textbf{31/11/2009}\\ \textbf{Older bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{03/08/2010 -} \\ \textbf{31/11/2019}\\ \textbf{All bugs}\end{tabular}} \\
\midrule
\# of bugs & 100,475 & 12,944 &
16,228 & 114 &
70,168 \\
\midrule
Dependency info &&&&& \\
\quad \# of blocked bugs & 13,856 & 6,862 &
1,428 & 41 &
1,576 \\
\quad \# of blocking bugs & 29,021 & 11,415
& 2,236 & 97 &
23,734\\
\midrule
Priority info &&&&& \\
\quad P1 & 6,737 & 1,165 & 47 & -- & 517 \\
\quad P2 & 2,720 & 815 & 132 & 4 & 2,150\\
\quad P3 & 6,880 & 1,485 & 15,811 & 98 & 62,590\\
\quad P4 & 693 & 211 & 76 & 1 & 3,792\\
\quad P5 & 4,449 & 529 & 162 & 11 & 1,119\\
\quad Missing & 78,996 & 8,739 & 0 & 0 & 0\\
\midrule
Severity info &&&&& \\
\quad blocker & 204 & 64 & 169 & 1 & 494 \\
\quad critical & 3,782 & 360 & 308 & 1 & 2,919\\
\quad major & 4,556 & 325 & 1,104 & 9 & 5,885\\
\quad normal & 88,443 & 11,976 & 11,384 & 38 & 46,147\\
\quad minor & 2,426 & 167 & 753 & 3 & 4,763\\
\quad trivial & 1,019 & 52 & 214 & 1 & 1,366\\
\quad enhancement & 45 & 0 & 2,296 & 61 & 8,594\\
\midrule
Number of Comments &&&&& \\
\quad mean & 8.1 & NA & 7.89 & NA & 8.5\\
\quad median & 4.0 & NA & 5.0 & NA & 6.0 \\
\quad standard deviation & 16.69 & NA & 9.6 & NA & 8.7\\
\bottomrule
\end{tabular}}
\end{table}
Priority comes from either the bug's assignee or the project lead. Generally, the bugs are triaged based on their priority, where P1 refers to the most important bugs, whereas P5 corresponds to the least important bugs. The priority of bugs may change during the bug resolution process. For instance, when a developer observes that a bug takes excessive time to be solved, they assign it a lower priority and start working on another one. We note that, in Mozilla, 78.6\% of the bugs are not assigned a priority level; on the other hand, in Eclipse and LibreOffice, most of the bugs are assigned the medium level of P3, and the variation in priority is negligible. These observations are consistent with previous studies claiming that both ``priority'' and ``severity'' are unreliable factors~\citep{Shirin2020}.
Also, the person who reports a bug (i.e., the reporter) sets the severity to reflect how much it affects the user. To some extent, the reporter could overestimate this severity, and thus, it might need a revision from a developer. If users continually report bugs with incorrect severity, they will damage their reputation and, in the long run, get less attention. Therefore, a new user may tend to set the highest possible severity, making the severity level unreliable. Bugzilla has a limit of ``Normal'' severity level for regular users, and a higher severity can be assigned only by contributors, developers, leaders, or admins.
Furthermore, the severity differentiates between a bug and an enhancement report. Not all severity levels are accessible to regular users. Table~\ref{tab:bug_info} indicates that most of the bugs receive the ``Normal'' severity, the highest accessible level for ordinary users. Lastly, the number of comments below a bug report is an indicator of engagement of users and developers in the bug solving process. The bug triage relies upon the bug comments; however, some noisy comments may affect this decision~\citep{Xuan2012}.
More descriptive details of the data is provided on GitHub\footnote{\url{https://github.com/HadiJahanshahi/WaybackMachine}}.
\section{Wayback Machine}\label{sec:Wayback}
Using the ITS information, we created an evolutionary database in which all bugs are sorted by their events' timestamps. The events include ``introduced'', ``resolved'', ``blocks'', ``depends on'', and ``reopened''. We ignore other events such as ``new'', ``verified'', or unimportant updates. Afterward, our event-based Wayback machine gets updated whenever it observes a new event in the system. Whenever a user reports a new bug, it is added to the BDG with its full information retrieved from the Bugzilla ITS. If a bug blocks or depends on a new bug, then we update the BDG by adding a new arc from the blocking bug to the blocked one. If a bug is resolved, we remove it from the BDG; however, we keep track of its information in a separate dataset, called the ``resolved dataset.'' Using the ``resolved dataset,'' we can add the bug back to the BDG with its full dependency information in case of reopening.
As recalculating BDG information per event has a high complexity, we only update the information of the affected bugs. For instance, if a bug is linked to other bugs and it is resolved in this timestamp, we update depth and degree information of those bugs in the same subgraph. Using our Wayback machine, we may retrieve the BDG information at any given time. Algorithm~\ref{alg:Wayback_Machine} shows how the ITS Wayback machine works.
\begin{algorithm}[!ht]
\SetKwData{Ev}{Evolutionary Database}\SetKwData{BDG}{BDG}\SetKwData{Solv}{Solved bugs tracker} \SetKwData{Resolved}{Resolved dataset}
\SetKwData{DB}{$\mathscr{DB}$}
\KwData{\Ev with $K$ events, information of the bugs extracted from Bugzilla (\DB)}
\KwResult{Daily monitoring of bug dependency graph evolution}
initialization;\\
\emph{\BDG = $\emptyset$}\\
\emph{\Solv = $\emptyset$}\\
\emph{\Resolved = $\emptyset$}\\
Sort \Ev by the changes' timestamps
\BlankLine
\For{$i \in \{1,\hdots,K\}$}{
\begin{algorithmic}
\IF{\Ev$[i][\text{`status'}] == \text{introduced}$}
\STATE Add bug info to \BDG using \DB
\STATE Start solving time of the bug
\ELSIF{\Ev$[i][\text{`status'}] \in \text{[blocks, depends on]}$}
\STATE Add a directed arc from blocking to blocked bug in \BDG
\ELSIF{\Ev$[i][\text{`status'}] == \text{resolved}$}
\STATE Remove the bug from \BDG and add it to \Resolved
\STATE Update solving time of the bug
\ELSIF{\Ev$[i][\text{`status'}] == \text{reopened}$}
\STATE Remove the bug from \Resolved and add it back to \BDG
\STATE Update solving time of the bug
\ENDIF
Update \Solv in case any bug is reopened, resolved, or introduced. \\
Update the graph information of the bugs that are affected at event $i$.
\end{algorithmic}
}
\caption{Wayback Machine}
\label{alg:Wayback_Machine}
\end{algorithm}
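For illustration, the event-replay loop of Algorithm~\ref{alg:Wayback_Machine} can be sketched in a few lines of Python. The event names and dictionary keys below are simplifying assumptions, not the exact schema of our database.

```python
from collections import defaultdict

def replay(events):
    """Replay time-sorted ITS events and maintain the bug dependency graph.

    `events` is an iterable of dicts with keys 'time', 'bug', 'status', and
    (for dependency events) 'other' -- a hypothetical, simplified schema.
    """
    bdg = defaultdict(set)          # bug id -> set of bugs it blocks
    open_bugs, resolved = set(), {}
    for ev in sorted(events, key=lambda e: e["time"]):
        bug, status = ev["bug"], ev["status"]
        if status == "introduced":
            open_bugs.add(bug)
        elif status == "blocks":        # `bug` blocks ev["other"]
            bdg[bug].add(ev["other"])
        elif status == "depends on":    # ev["other"] blocks `bug`
            bdg[ev["other"]].add(bug)
        elif status == "resolved":
            open_bugs.discard(bug)
            resolved[bug] = bdg.pop(bug, set())  # keep arcs for reopening
        elif status == "reopened":
            open_bugs.add(bug)
            bdg[bug] |= resolved.pop(bug, set())
    return bdg, open_bugs, resolved
```

As in the algorithm, resolved bugs leave the graph but their dependency information is retained so that a reopened bug re-enters the BDG with its arcs.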
\section{\textcolor{red}{Findings}}
\subsection{Actual case observations}
\textcolor{red}{In this section, we present the results of our empirical study that answer two main research questions. More specifically, we analyze the evolution of the bugs in the ITS and explore the effect of different bug prioritization and triage strategies. We characterize the bug dependency and its impact on lingering bugs during the evolution of three open-source software systems. We further investigate the actual evolutionary performance of well-established bug prioritization and triage strategies using the Wayback machine.}
\begin{RQquestion}
\textbf{RQ1a: How do open-source software systems evolve in terms of bug reports, dependency, and lingering bugs?}
\end{RQquestion}
\textcolor{red}{The line plots in Figure~\ref{fig:number_of_bugs_and_arcs} show the actual number of bugs, and the area plots show the number of arcs (i.e., bug dependencies) in each project during the last decade}. Those values are extracted on a monthly basis from the Wayback machine, considering all new bug reports and bug fixes. We extract dependencies from each bug's history and use the exact date when the dependency is determined. There are significant differences between the projects. \textcolor{red}{LibreOffice (Figure~\ref{fig:LibOffice_num}) has the lowest number of arcs among the three projects. In this graph, we exclude Meta bugs \textemdash i.e., tracking bugs used to associate reports with useful data. When we asked LibreOffice developers about the difference, they mentioned that they do not add dependencies as frequently as is done in other projects. Therefore, bug dependency becomes a less important factor in the case of LibreOffice. Developers in Mozilla (Figure~\ref{fig:Mozilla_num}) record bug dependencies throughout the project's life span. Therefore, in the following research question, we investigate whether these dependencies influence the bug prioritization/triage process.}
In the last period, the ratios of open bugs to the number of bug reports are $15\%$, $20\%$, and $28\%$ for Mozilla, LibreOffice, and Eclipse, respectively, which suggests a significantly higher rate of lingering bugs in the Eclipse project. Although Eclipse has only 16,342 bug reports, it contains 4,643 unresolved reports at the end of the period. This observation indicates that the number of arcs is not the only factor behind lingering bugs; there might be a shortage of developers, or the bugs in the Eclipse project may need more time to be resolved.
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_num}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_num}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_num}
\end{subfigure}
~\caption{The number of nodes and arcs in bug dependency graph for Mozilla, Eclipse, and LibreOffice projects \textit{(x-axis corresponds to the year and y-axis corresponds to the bug and dependency counts)}.}
\label{fig:number_of_bugs_and_arcs}
\end{figure*}
Figure~\ref{fig:depth_degree} shows the degree and depth evolution of all three projects. \textcolor{red}{In the atypical case of LibreOffice, we observe a decrease followed by stability after 2015; the average depth and degree there are also much smaller, as shown in Figure~\ref{fig:LibOffice_depth_degree}. After 2017, developers in LibreOffice defined many Meta bugs, but we ignored them as they are not real blocking bugs and act as a clustering mechanism. On the other hand, the general trend of the degree and depth of the bugs in the Mozilla project is ascending until 2016 and descending afterwards, whereas those of the Eclipse project remain at almost the same level with some seasonal fluctuation. Therefore, we conclude that, in terms of graph complexity, each project has its own characteristics that cannot be generalized to other cases.}
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_depth_degree}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_depth_degree}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_depth_degree}
\end{subfigure}
~\caption{The monthly evolution of mean depth and degree of BDG for Mozilla, Eclipse, and LibreOffice projects \textit{(x-axis corresponds to the year and y-axis corresponds to the mean depth and degree)}.}
\label{fig:depth_degree}
\end{figure*}
\begin{RQquestion}
\textbf{RQ1b: How do the characteristics of the resolved bugs change over time?}
\end{RQquestion}
To address this research question, we compare the characteristics of the resolved and open bugs to infer the notion behind the actual bug prioritization process. We are mainly interested in graph-related indices, e.g., the degree and depth of the bugs. By comparing the actual decisions made during the past decade, we explore whether bug triagers consider dependency information in bug prioritization. \textcolor{red}{Our main focus is the training phase \textemdash from 2018 to 2020.}
Figure~\ref{fig:degree_of_solv} juxtaposes the degree of the bugs that are solved with that of the ones that are postponed, i.e., remain open. Such a comparison provides a clear picture \textcolor{red}{of whether bug triagers prioritize bugs based on their dependency.} We show the average degree of the fixed bugs as an area plot and the average degree of the open bugs as a line graph. If we take the area plot as an upper bound of the line plot, we may conclude that, on average, the triagers prioritize the bugs with a higher degree. In Figures~\ref{fig:Mozilla_deg_solv} and \ref{fig:Eclipse_deg_solv}, \textcolor{red}{the grey region almost always contains the black line, meaning that, on average, the degree of the solved bugs is greater than that of the postponed bugs. We use a one-tailed paired t-test with a significance level of 0.05 to check the validity of our observation. For both projects, with a p-value close to zero, we reject the null hypothesis.} Hence, triagers indirectly consider the dependency while addressing open bugs. In the special case of LibreOffice, where we have a very sparse BDG (Figure~\ref{fig:LibOffice_deg_solv}), \textcolor{red}{we do not observe such behavior. The area plot is almost always zero, meaning that the blocking effect is not considered an important factor here.}
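The paired comparison above can be reproduced with standard-library Python. The input lists below are illustrative monthly mean degrees, not our measured data; the resulting statistic would then be compared against the critical value of Student's $t$ distribution with $n-1$ degrees of freedom.

```python
import math
from statistics import mean, stdev

def paired_t(solved_deg, open_deg):
    """One-tailed paired t statistic for H1: mean(solved) > mean(open).

    Returns the t statistic computed on the paired differences; a large
    positive value supports rejecting the null hypothesis.
    """
    d = [s - o for s, o in zip(solved_deg, open_deg)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Illustrative monthly mean degrees (hypothetical numbers):
t = paired_t([3, 4, 5, 6], [2, 3, 3, 5])  # t = 5.0 for these inputs
```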
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_deg_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_deg_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_deg_solv}
\end{subfigure}
~\caption{The comparison of the monthly degree of the bugs in the BDG and of the fixed bugs \textit{(the area plot shows the degree of fixed bugs, whereas the lines indicate the degree of the remaining bugs in the graph)}.}
\label{fig:degree_of_solv}
\end{figure*}
Figure~\ref{fig:depth_of_solv} contrasts the average depth of fixed and open bugs in the three projects. In the Mozilla and Eclipse projects, the depth of the open bugs is mostly smaller than that of the fixed bugs, i.e., the black line stays within the area under the grey curve. \textcolor{red}{We observe behavior of the LibreOffice project similar to that explained for Figure~\ref{fig:degree_of_solv}, and our conclusion remains identical: blocking bugs become important if and only if the blocking information is constantly recorded and the BDG is not sparse. We do not see any direct relationship with lingering bugs in this case. We recommend that, when automating the bug triage and bug prioritization process, researchers consider dependency together with other bug attributes; prioritizing based only on bug dependency cannot be generalized~\citep{Shirin2020}}.
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_depth_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_depth_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_depth_solv}
\end{subfigure}
~\caption{The comparison of the monthly depth of the bugs in the BDG and of the fixed bugs \textit{(the area plot shows the depth of fixed bugs, whereas the lines indicate the depth of the remaining bugs in the graph)}.}
\label{fig:depth_of_solv}
\end{figure*}
To further analyze the importance of the BDG in prioritization process, we simulate the triagers' tasks in the following research questions.
\subsection{Wayback machine as prioritizer and triager}
We model the actual bug tracking system via a discrete-event system simulation and explore different strategies in the same environment to obtain a valid comparison among them. The timestamps of the bug reports and their dependency information are adopted exactly from the ITS. \textcolor{red}{Therefore, it is more a past-event regenerator than a simulator. The event regenerator, which we call the Wayback Machine,} is run for all the reports between 2010 and 2020. Figure~\ref{fig:simulation_scheme} illustrates a complete flowchart of the decisions and the Wayback Machine logic. At each timestamp $t$, we check whether it is time to fix bug $b^d$ assigned to developer $d$. If there exists a bug to be solved and no other bug blocks it, we fix it and remove it from the BDG; otherwise, we assign its blocking bug to developer $d$. In other words, since we cannot solve the selected bug due to its dependency, we substitute it with its parent. If we have no bug to solve on a given day, we only update the current bugs' blocking information with any newly discovered dependency. We also continue adding new bugs based on their actual report times, expanding the BDG. On a daily basis, we update the BDG evolutionary metrics to keep track of the BDG evolution. In our Wayback Machine, we assume that developers cannot work on more than one bug at a time.
\begin{figure*}[!ht]
\centering
\centerline{\includegraphics[width=0.9\linewidth]{imgs/Graph_simulation.pdf}}
\caption{Framework of Bug prioritization through BDG simulation.}
\label{fig:simulation_scheme}
\end{figure*}
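The substitution step described above, replacing a blocked bug with one of its open parents, can be sketched as follows. The `blockers` mapping and the deterministic `min` choice among open parents are simplifying assumptions for illustration.

```python
def pick_fixable(candidate, blockers, open_bugs):
    """Follow blocking arcs upward until reaching a bug with no open blocker.

    `blockers` maps a bug to the set of bugs it depends on (hypothetical
    structure); a blocked bug is replaced by one of its open parents.
    A visited set guards against dependency cycles.
    """
    seen = set()
    while True:
        open_blockers = blockers.get(candidate, set()) & open_bugs
        if not open_blockers or candidate in seen:
            return candidate            # fixable, or cycle guard triggered
        seen.add(candidate)
        candidate = min(open_blockers)  # deterministic choice of a parent
```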
In our discrete-event Wayback Machine, the events take the form of a new bug report entering the system, the assignment of a prioritized bug, the discovery of a bug dependency, and, finally, the fixing of bugs. At each timestamp, we may have some resolved bugs, some newly reported bugs, some bug dependencies recently discovered by developers, or some bug prioritization tasks. \textcolor{red}{When, based on an algorithm, we prioritize a bug over others, we assign it to the most appropriate developer. Since we only explore the prioritization accuracy in the prioritization task, we assume all bug assignments are done the same as in the actual case.}
\subsubsection{Data preprocessing}
After collecting the data and building the database, we apply the following steps to prepare the data for the simulation.
\begin{itemize}
\item We remove duplicate bugs, and whenever a duplicate bug has more information than the original one (e.g., dependency or general information), we merge its information into the original bug's record.
\item Dependency information of older bugs is kept if and only if it is related to the targeted bugs.
\item A few enhancement reports are eliminated from the database as they do not represent real defects in the system.
\item We did not have access to some of the bugs through the REST API as a basic user; hence, we did not include their information.
\item As there are many lingering bugs that remain unresolved in the system, we decided to disregard these cases as outliers.
\end{itemize}
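As an illustration of the first preprocessing step, merging a duplicate report into its original can be sketched as follows. The field names and the `dup_of` mapping are hypothetical, not the actual Bugzilla schema.

```python
def merge_duplicates(bugs, dup_of):
    """Fold duplicate reports into their originals.

    `bugs` maps a bug id to a dict of fields; `dup_of` maps a duplicate id
    to its original id (both hypothetical structures). Dependency sets are
    united, and fields missing in the original are copied from the duplicate.
    """
    for dup, orig in dup_of.items():
        if dup not in bugs or orig not in bugs:
            continue
        for key, value in bugs[dup].items():
            if key == "depends_on":
                bugs[orig]["depends_on"] = bugs[orig].get("depends_on", set()) | value
            elif key not in bugs[orig] or bugs[orig][key] in (None, ""):
                bugs[orig][key] = value   # fill a missing/empty field
        del bugs[dup]                     # drop the duplicate record
    return bugs
```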
\subsubsection{Validation metrics}
We define various metrics to compare different prioritization and triage strategies. These metrics include static metrics \textemdash e.g., assignment accuracy \textemdash and evolutionary metrics \textemdash e.g., the percentage of overdue bugs. Evolutionary metrics cannot be reported unless a Wayback Machine is used: they are tied to the time when a bug is assigned or prioritized and consider either the developers' workload at assignment time or the status of the other open bugs in the system. In order to capture the real effect of a triage or prioritization decision, we need to consider time as well. The metrics used in our experiments are as follows.
\begin{itemize}
\item \textbf{The number of solved bugs} represents the total number of fixed bugs during the given period. In practice, developers attempt to keep the number of open bugs in the system as low as possible. Therefore, they assign higher priority to bugs that are more crucial and/or easier and faster to solve.
\item \textbf{The number of arcs} reports the number of dependencies in a software system. The higher the number of arcs is, the more complex the system will be. Therefore, triagers try not to let the system be overwhelmed by an increasing number of dependent bugs.
\item \textbf{Max degree centrality} is the normalized version of the max degree, divided by $(n-1)$, where $n$ is the total number of bugs in the graph at the end of the day. Centrality is regarded as a better index when comparing graphs with different numbers of nodes~\cite{zinoviev2018}. If the blocking degree of a bug increases, triagers will encounter obstacles when assigning the blocked bugs. Therefore, it is important to assign and solve bugs with a high degree.
\item \textbf{Max depth centrality} is the maximum depth of the graph divided by $(n-1)$, where $n$ is the total number of bugs in the graph at the end of the day. The maximum depth itself is defined as the longest path in the directed graph at the end of a given period of time. An increase in the depth of the bug dependency graph indicates a longer sequence of bugs waiting for one another to be solved. Therefore, triagers should encourage developers to solve bugs in a way that the ITS does not encounter such a lengthy succession of events. One expected contingency of a higher depth is substantial defect debt in the system due to lingering bugs.
\item \textbf{Mean subgraphs' depth} is defined based on the depth of each subgraph of the BDG. A subgraph $S$ of a graph $G$ is a graph whose sets of nodes and arcs are subsets of those of $G$. This variable is crucial in our analysis as it implies the number of high-depth subgraphs in the BDG. Unlike max depth centrality, it accentuates the complexity of the subgraphs within a graph rather than the total depth of the graph.
\item \textbf{Mean severity} is the average severity of the bugs in the graph at the end of the day. We expect the algorithm to solve the more severe bugs each day; therefore, a lower mean severity is desirable. The severity assessment can be partial; however, we cannot ignore its importance from the developers' point of view. Hence, we utilize it to monitor the performance of different bug prioritization strategies.
\item \textbf{The number of comments} reflects users' and developers' attention toward a bug in its life cycle. We use the average number of comments on all the remaining bugs in the BDG. Important bugs with a higher number of comments are more likely to be solved faster~\citep{bug_count}.
\item \textbf{Authority and hub} are the scores first introduced by \citet{HITS-Kleinberg} in the Hyperlink-Induced Topic Search (HITS) algorithm. The algorithm is an extension of centrality to directed graphs. Hubs are defined as nodes pointing to many important nodes, and authorities are those important nodes. Therefore, good authorities are pointed to by many good hubs and vice versa~\cite{hubs-authorities}. This circular relation converges iteratively. First, we initialize both the hub vector ($\mathbf{h}$) and the authority vector ($\mathbf{a}$) to $\mathbf{u} =\big(1,1,\dots,1\big)$. At each iteration, we update the weights as follows:
\begin{equation}
\mathbf{a} = A^{\intercal} \mathbf{h}; \;\; \mathbf{h} = A \mathbf{a}
\end{equation}
where $A$ is the adjacency matrix. Afterward, a normalization step is applied so that both vectors become unit vectors. After a sufficient number of iterations, the authority and hub vectors converge to the principal eigenvectors of $A^{\intercal}A$ and $AA^{\intercal}$, respectively.
In our case, the authority score of a node in a dependency graph indicates the node's blocking score, whereas the hub score entails the node's blocked score. The desired output of a strategy is to minimize the maximum authority and hub score of the BDG since the bugs with high authority and hub are crucial in terms of blocking effect.
We report both the average and maximum authority and hub scores of the BDG at the end of each day for a given period. We expect to have the least mean and maximum hub or authority score for the best strategy (solving critical bugs first).
\item \textbf{Harmonic centrality} is the harmonic mean of all shortest distances between nodes (i.e., the $n\times(n-1)$ distances between ordered pairs of distinct nodes). Thus, the harmonic centrality of node $x_i$ is defined as
\begin{equation}
c_H(x_i) = \frac{1}{n-1}\sum_{j \ne i}{\frac{1}{d(x_i,x_j)}}
\end{equation}
where $n$ is the number of nodes in the graph, $d(x_i,x_j)$ is the shortest distance between nodes $i$ and $j$, and $\nicefrac{1}{d(x_i,x_j)}$ is taken as 0 when there is no path from node $i$ to node $j$~\cite{rochat2009closeness}. Note that in a star graph, the maximum sum of inverse distances is obtained by the central node and equals $(n-1)$; hence, after normalization, $c_H(x_i)$ lies between 0 and 1 \textemdash 0 for isolated nodes and 1 for a node connected to every other node. Harmonic centrality is a critical factor in the bug dependency graph because it indicates the complexity and density of the graph. In the actual bug triage process, a triager needs to incorporate complexity reduction into their decision.
\end{itemize}
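As a concrete illustration of the last two metrics, the HITS scores and the normalized harmonic centrality can be computed with a short power-iteration and BFS sketch in Python. The adjacency representation (a dict mapping a node to the set of its successors) and the choice of measuring distances along outgoing arcs are simplifying assumptions for illustration.

```python
import math
from collections import deque

def hits(adj, iters=50):
    """Power iteration for HITS on a directed graph {node: set(successors)}.

    Returns (hub, authority) score dicts, each normalized to a unit vector.
    """
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    h = {n: 1.0 for n in nodes}
    a = dict(h)
    for _ in range(iters):
        # a = A^T h ; h = A a, as in the update equations above
        a = {n: sum(h[u] for u in nodes if n in adj.get(u, ())) for n in nodes}
        h = {n: sum(a[v] for v in adj.get(n, ())) for n in nodes}
        for vec in (a, h):                      # normalize to unit length
            norm = math.sqrt(sum(x * x for x in vec.values())) or 1.0
            for n in vec:
                vec[n] /= norm
    return h, a

def harmonic(adj, node):
    """Normalized harmonic centrality: mean of 1/d(node, j) over j != node."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    dist, q = {node: 0}, deque([node])
    while q:                                    # BFS along outgoing arcs
        u = q.popleft()
        for v in adj.get(u, ()):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return sum(1 / d for n, d in dist.items() if n != node) / (len(nodes) - 1)
```

In a star graph the center attains the maximum normalized score of 1, matching the bound discussed above.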
\subsection{Bug prioritization strategies}
In practice, triagers may use a combination of factors, such as validity, reproducibility, severity, priority, and even customer pressure, to choose an appropriate bug to fix. In some cases, they may decide based on the blocking effect of a bug. Thus, we define a comprehensive list of strategies, including graph-related ones (i.e., using features coming from the BDG) and severity-based ones, as follows:
\begin{enumerate}
\item \textbf{Maximum degree}: This strategy first solves bugs with a higher out-degree, i.e., the higher number of bugs it blocks. In this paper, we take ``degree'' as the out-degree of a bug. This strategy is crucial as it retards the growth of many bugs with a high blocking degree.
\item \textbf{Maximum depth}: This strategy prioritizes bugs with a higher depth. The depth of a bug in a directed graph is the maximum shortest path from the bug to any other bug in the graph. In some cases, a bug may have a small degree while being the root of a lengthy chain of bugs. Prioritizing such bugs is vital for the stability of an ITS.
\item \textbf{Maximum sum of degree and depth}: This strategy selects the bug with the highest sum of its degree and depth. \citet{Shirin2020} take this as a potential, unbiased factor in bug prioritization.
\item \textbf{Maximum severity}: This strategy chooses bugs with the highest severity first. This approach might be controversial due to the lack of an objective assessment of severity scores; however, we keep it as an alternative to the existing strategies.
\item \textbf{Maximum sum of degree and severity}: It prioritizes the bugs with the highest sum of degree and severity, combining the developers' viewpoint with the bug dependency complexity.
\item \textbf{Children's degree}: Not only is the degree of a bug a sign of its blocking power, but the blocking degree of the bugs that it blocks is also an indicator of its significance. Therefore, we consider both the degree of a bug and the degrees of its inheritors; that is, we augment the degree of a bug by adding the discounted degrees of its children. A recursive function computes the score by accumulating, for the bug and each of its descendants $i$, the term
\begin{equation}
\frac{1}{\exp{L_i}} \times \text{degree}_i
\end{equation}
where $L_i$ is the level of bug $i$, which starts at 0 for the original bug and increases by one as we move down level by level towards its children and grandchildren. This decreasing weight emphasizes the degrees of the bugs that are closer to the original bug.
\item \textbf{Children's severity}: If a bug blocks some critical and severe bugs, the bug is still important even if it is trivial by itself. The importance of a bug is thus related to both its own severity and the severity of the bugs it blocks, so we increase the severity of a bug by the discounted severities of the ones it blocks, following the same notion as the Children's degree strategy. This strategy therefore incorporates both the manually assigned severity and the complexity of the graph, i.e., the mutual impacts of the bugs.
\item \textbf{Random}: This strategy is considered a naive baseline and corresponds to selecting the candidate bug randomly. We use this policy to show how well the other strategies perform compared to a random selection.
\end{enumerate}
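To make the exponential discounting in the Children's degree strategy concrete, here is a minimal Python sketch. The data structures (`children`, `degree`) and the cycle guard are our assumptions for illustration, not the exact implementation used in the experiments.

```python
import math

def childrens_degree(bug, children, degree, level=0, _seen=None):
    """Recursive Children's degree score: the bug's own degree plus the
    exponentially discounted degrees of its descendants.

    `children` maps a bug to the bugs it blocks; `degree` maps a bug to its
    out-degree (both hypothetical structures). `level` is the distance from
    the original bug, used as the discounting exponent.
    """
    _seen = _seen if _seen is not None else set()
    if bug in _seen:                  # guard against dependency cycles
        return 0.0
    _seen.add(bug)
    score = degree[bug] / math.exp(level)
    for child in children.get(bug, ()):
        score += childrens_degree(child, children, degree, level + 1, _seen)
    return score
```

The Children's severity strategy follows the same recursion with severities in place of degrees.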
\section{Simulation Results} \label{sec:results}
\begin{table}[!ht]
\caption{The result of different algorithms for bug prioritization}
\resizebox{\linewidth}{!}{
\begin{tabular}{cl>{\columncolor[HTML]{EFEFEF}}r rrr|rrr|r}
\toprule
& & \multirow{1}{*}{\textbf{Actual}} & \multicolumn{3}{c|}{\textbf{Rule-based}} & \multicolumn{3}{c|}{\textbf{Machine Learning}} & \multirow{1}{*}{\textbf{Random}} \\
& & & \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\ \{depth + degree\}\end{tabular}}
& \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\Priority\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\ Severity\end{tabular}} & \textbf{Cost-oriented} & \textbf{\begin{tabular}[c]{@{}c@{}} Estimated\\ Priority\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Cost \& Priority\\ Consideration\end{tabular}} & \\
\midrule
\multirow{3}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{EclipseJDT}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Solved Bugs\end{tabular}} & 2,753 & 1,423 & 1,525 & 1,481 & 1,752 & 1,851 & 1,814 & 973 \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & - & (127, 483, 813) & (141, 450, 934) & & & & & \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & & & & & & & & \\
\hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{LibreOffice}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Solved Bugs\end{tabular}} & & & & & & & & \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & & & & & & & & \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & & & & & & & & \\
\hline
\multirow{5}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{Mozilla}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Solved Bugs\end{tabular}} & & & & & & & & \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & & & & & & & & \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & & & & & & & & \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{The result of different algorithms for bug triage}
\resizebox{0.92\linewidth}{!}{
\begin{tabular}{cl >{\columncolor[HTML]{EFEFEF}}r rrr|r}
\toprule
& \textbf{} & \textbf{Actual} & \textbf{CBR} & \textbf{CosTriage} & \textbf{DeepTriage} & \textbf{Random} \\
\midrule
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{EclipseJDT}}}}} & \textbf{Mean Fixing Time} & 6.0 & 7.9 & 7.5 & 7.7 & 8.3 \\
& \textbf{The Number of Assigned Developers} & 15 & 19 & 19 & 19 & 21 \\
& \textbf{Task Concentration$(\mu,\sigma)$} & (83.4, 93.7) & (65.8, 112.0) & (65.8, 108.5) & (72.1, 102.2) & (52.5, 88.3) \\
& \textbf{Assignment Accuracy} & 97.7 & 95.5 & 94.0 & 96.7 & 38.1 \\
& \textbf{Percentage of Overdue Bugs} & 66.0 & 82.2 & 79.6 & 78.3 & 89.3 \\
& \textbf{Infeasible Assignment w.r.t. the BDG} & 5.4 & 6.0 & 5.8 & 6.3 & 5.9 \\
\hline
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{LibreOffice}}}}} & \textbf{Mean Fixing Time} & 3.3 & 2.1 & 1.8 & 1.7 & 2.3 \\
& \textbf{The Number of Assigned Developers} & 57 & 22 & 21 & 23 & 23 \\
& \textbf{Task Concentration} & (27.5, 68.9) & (71.3, 224.5) & (74.7, 253.2) & (70.7, 218.4) & (36.1, 73.7)\\
& \textbf{Assignment Accuracy} & 91.7 & 99.1 & 99.3 & 99.4 & 43.3 \\
& \textbf{Percentage of Overdue Bugs} & 35.9 & 77.1 & 80.8 & 76.2 & 81.3\\
& \textbf{Infeasible Assignment w.r.t. the BDG} & 0.1 & 0.1 & 0.1 & 0.1 & 0.2 \\
\hline
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{Mozilla}}}}} & \textbf{Mean Fixing Time} & 7.0 & 7.2 & 6.6 & 7.1 & 8.6 \\
& \textbf{The Number of Assigned Developers} & 137 & 74 & 85 & 80 & 115 \\
& \textbf{Task Concentration} & (27.0, 49.5) & (50.1, 204.0) & (43.6, 187.0) & (41.7, 192.3) & (21.5, 42.3) \\
& \textbf{Assignment Accuracy} & 72.7 & 60.2 & 59.0 & 62.1 & 15.5 \\
& \textbf{Percentage of Overdue Bugs} & 69.8 & 80.1 & 77.6 & 78.5 & 82.6 \\
& \textbf{Infeasible Assignment w.r.t. the BDG} & 9.4 & 9.0 & 8.8 & 9.8 & 11.2 \\
\bottomrule
\end{tabular}
}
\end{table}
We explore the performance of different strategies for bug prioritization in the long term. The practical aim of this experiment is to see how different policies can facilitate bug prioritization in the long run. They also serve as a baseline for future studies in choosing proper attributes of bug reports. Afterwards, we derive a denser, more complicated graph from the actual BDG and investigate the impact of a higher number of bug dependencies on the performance of various bug prioritization strategies. Finally, we contrast the performance of those policies with the actual bug prioritization.
\subsection{Analysis with data from bug repositories}\label{sec:resultsEntireDataSet}
We utilize the bug reports extracted from Bugzilla related to the Security component of the Firefox product, the Calc component of LibreOffice, and the whole dataset of Eclipse JDT. Then, we examine the performance of different policies in terms of bug prioritization. In the Security component of Mozilla, only four developers have solved at least one bug per month on average; therefore, we assume that there exist four active developers (assignees) in the system to whom the bugs can be assigned. LibreOffice and Eclipse had 15 and 21 active developers during the same period. Finding the optimal number of developers is still an open question. We assume each developer can work on only one bug at a time, and whenever they fix a bug, the system assigns a new bug to them. We repeat the process three times for all strategies and report the average indices to avoid any bias due to randomization.
\begin{RQquestion}
\textbf{RQ2a: How do different bug prioritization strategies perform in terms of external indices?}
\end{RQquestion}
Each of the external indices is linked to at least one of the strategies; these indices serve as ground truth to validate our model. For instance, since one of the methods chooses the bugs with the highest degree, we expect it to be superior in terms of the maximum degree of the BDG.
\begin{table}[!ht]
\centering
\caption{The external indices to measure the effect of adopting different strategies for bug prioritization. The numbers in gray rows represent the actual case, whereas others show the percentage (\%) relative improvement. \label{tab:strategies_security_external}}
\resizebox{\linewidth}{!}{
\begin{tabular}{clrrrrrr}
\toprule
\textbf{Project} & \textbf{Strategy} & \textbf{\begin{tabular}[c]{@{}r@{}}\# of\\fixed\\bugs\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}avg. \#\\ of arcs\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}avg. of\\ max\\degree\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}avg. of\\ max\\depth\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}avg.\\ subgraph\\depth\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}avg. of\\ avg.\\ severity\end{tabular}} \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{Mozilla}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual (reference)} & \cellcolor[HTML]{EFEFEF}\textbf{1652.0} & \cellcolor[HTML]{EFEFEF}74.2 & \cellcolor[HTML]{EFEFEF}2.3 & \cellcolor[HTML]{EFEFEF}3.0 & \cellcolor[HTML]{EFEFEF}0.4 & \cellcolor[HTML]{EFEFEF}3.2 \\
& \textbf{Max degree (\%)} & -10.8 & \textbf{97.6} & \textbf{75.5} & \textbf{80.3} & \textbf{95.3} & -0.2 \\
& \textbf{Max depth (\%)} & -10.3 & 97.5 & 73.8 & 79.2 & 94.7 & -0.3 \\
& \textbf{Max severity (\%)} & -9.7 & 36.3 & -6.6 & 9.0 & -64.7 & \textbf{10.4} \\
& \textbf{Children's degree (\%)} & -11.0 & 97.5 & 75.2 & 80.1 & 95.0 & -0.4 \\
& \textbf{Children's severity (\%)} & -9.6 & \textbf{97.6} & 72.9 & 78.5 & 95.0 & \textbf{10.3} \\
& \textbf{Max \{degree + depth\} (\%)} & -11.3 & 97.5 & 74.1 & 79.5 & \textbf{95.3} & -0.5 \\
& \textbf{Max \{degree + severity\} (\%)} & -8.9 & 97.1 & 70.8 & 76.8 & 93.7 & 10.5 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -11.3 & 25.0 & -14.0 & 8.2 & -89.7 & 0.1 \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{LibreOffice}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual (reference)} & \cellcolor[HTML]{EFEFEF}\textbf{12448.0} & \cellcolor[HTML]{EFEFEF}312.0 & \cellcolor[HTML]{EFEFEF}1.8 & \cellcolor[HTML]{EFEFEF}1.6 & \cellcolor[HTML]{EFEFEF}0.4 & \cellcolor[HTML]{EFEFEF}2.2 \\
& \textbf{Max degree (\%)} & -7.3 & \textbf{99.9} & 91.4 & 90.2 & 99.0 & -20.2 \\
& \textbf{Max depth (\%)} & -7.3 & \textbf{99.9} & 91.6 & \textbf{90.9} & 99.0 & -20.2 \\
& \textbf{Max severity (\%)} & -6.0 & 99.1 & 60.5 & 56.5 & 96.2 & \textbf{63.1} \\
& \textbf{Children's degree (\%)} & -7.3 & \textbf{99.9} & \textbf{91.8} & 90.8 & \textbf{99.2} & -20.0 \\
& \textbf{Children's severity (\%)} & -6.0 & 98.8 & 59.9 & 55.5 & 95.4 & \textbf{63.3} \\
& \textbf{Max \{degree + depth\} (\%)} & -7.3 & \textbf{99.9} & 90.3 & 89.2 & 99.0 & -19.8 \\
& \textbf{Max \{degree + severity\} (\%)} & -6.0 & 99.7 & 81.9 & 80.9 & 98.5 & 62.6 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -56.4 & 1.4 & -8.6 & -15.0 & 58.6 & -25.1 \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{EclipseJDT}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual (reference)} & \cellcolor[HTML]{EFEFEF}11055.0 & \cellcolor[HTML]{EFEFEF}116.6 & \cellcolor[HTML]{EFEFEF}3.6 & \cellcolor[HTML]{EFEFEF}3.6 & \cellcolor[HTML]{EFEFEF}0.1 & \cellcolor[HTML]{EFEFEF}2.3 \\
& \textbf{Max degree (\%)} & -0.6 & 97.7 & 75.1 & 75.5 & \textbf{95.9} & -13.9 \\
& \textbf{Max depth (\%)} & -0.2 & 97.9 & 75.4 & 75.8 & \textbf{95.9} & -13.6 \\
& \textbf{Max severity (\%)} & 12.6 & -12.1 & -6.5 & 5.5 & -179.5 & 77.2 \\
& \textbf{Children's degree (\%)} & 0.6 & \textbf{98.1} & \textbf{75.5} & \textbf{76.5} & \textbf{95.9} & -13.9 \\
& \textbf{Children's severity (\%)} & \textbf{13.1} & -9.1 & -5.6 & 5.9 & -175.3 & \textbf{77.7} \\
& \textbf{Max \{degree + depth\} (\%)} & -0.1 & 97.8 & 75.2 & 75.8 & \textbf{95.9} & -13.7 \\
& \textbf{Max \{degree + severity\} (\%)} & 11.6 & 11.3 & 40.5 & 26.5 & -100.0 & 76.1 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -50.9 & -327.3 & -87.2 & -19.7 & -272.6 & -10.6 \\
\bottomrule
\end{tabular}
}
\end{table}
Table~\ref{tab:strategies_security_external} shows the performance of each strategy based on the aforementioned external indices. We use the data obtained from the ITS through our Wayback machine (referred to as ``Actual'') as a reference and show the relative improvement of other strategies. To compute the relative improvement of indices whose larger or smaller values are desirable, we use $\frac{\hat{x}-x}{x}$ or $\frac{x-\hat{x}}{x}$, respectively; where $x$ is the actual value and $\hat{x}$ is the value obtained using a certain policy.
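For concreteness, this computation can be sketched as a small helper (the function name and the sample values below are ours, chosen only to illustrate the formula):

```python
def relative_improvement(actual, simulated, larger_is_better=True):
    """Percentage improvement of a simulated strategy over the actual value.

    For indices whose larger values are desirable we use (x_hat - x) / x;
    for indices whose smaller values are desirable, (x - x_hat) / x.
    """
    x, x_hat = actual, simulated
    diff = (x_hat - x) if larger_is_better else (x - x_hat)
    return 100.0 * diff / x

# Avg. max degree (smaller is better): actual 2.3, hypothetical strategy 0.56
print(round(relative_improvement(2.3, 0.56, larger_is_better=False), 1))  # 75.7
```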
Unlike the \textit{max severity} and \textit{random} strategies, the others consider the graph's topology when deciding which bug to solve next. Therefore, metrics related to the number of arcs or to the degree and depth of the graph are markedly inferior for those two strategies. Regarding the number of fixed bugs, the actual case is the optimal one, since developers employ an ensemble of strategies to prioritize bugs. Among the suggested strategies, \textit{children's severity} is the closest to the actual case and even outperforms it for Eclipse. One probable explanation is that the higher the severity of a bug, the shorter its fixing time; therefore, the bugs chosen with this strategy are solved faster than the others. Concerning the topology of the BDG, all graph-related strategies significantly improve on the actual case. For instance, \textit{max degree} reduces the average maximum degree of the graph by 75.5\%, 91.4\%, and 75.1\% in Mozilla, LibreOffice, and Eclipse, respectively. As anticipated, the strategies that take severity into account yield the lowest severity of the BDG in the long run, i.e., they do not let the severity of the BDG accumulate excessively. These observations agree with our presumptions about the external indices.
Interestingly, we find that no single policy is the best for all cases. While \textit{max degree} outperforms the others in Mozilla, \textit{children's degree} is the best for both LibreOffice and Eclipse. Depending on the needs of the system, triagers may adopt a sensible policy at each time step; under such policies, we would not expect the rapid accumulation of reported bugs observed in LibreOffice. Moreover, in all projects, graph-related strategies almost halve the number of dependencies and the graph complexity. Consequently, we recommend that triagers give higher priority to bugs with higher depth and degree to reduce the complexity of the ITS.
We repeat the process for different numbers of developers; however, the relative performance of the strategies remains the same.
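To make the comparison concrete, each rule-based strategy above can be viewed as a scoring function over the open bugs of the BDG, with random tie-breaking; a minimal sketch (the field names and the exact tie-breaking rule are our assumptions, not the paper's implementation):

```python
import random

def next_bug(open_bugs, strategy, rng=None):
    """Pick the next bug to fix under a given scoring strategy.

    open_bugs: list of dicts with (assumed) keys
      'degree'   - number of outgoing (blocking) arcs in the BDG
      'depth'    - length of the longest blocking chain below the bug
      'severity' - numeric severity level
    Ties are broken uniformly at random.
    """
    rng = rng or random.Random(0)
    score = {
        'max_degree':          lambda b: b['degree'],
        'max_depth':           lambda b: b['depth'],
        'max_severity':        lambda b: b['severity'],
        'max_degree_depth':    lambda b: b['degree'] + b['depth'],
        'max_degree_severity': lambda b: b['degree'] + b['severity'],
        'random':              lambda b: 0,
    }[strategy]
    best = max(score(b) for b in open_bugs)
    return rng.choice([b for b in open_bugs if score(b) == best])

bugs = [{'id': 1, 'degree': 3, 'depth': 2, 'severity': 4},
        {'id': 2, 'degree': 1, 'depth': 4, 'severity': 5}]
print(next_bug(bugs, 'max_degree')['id'])    # 1
print(next_bug(bugs, 'max_severity')['id'])  # 2
```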
\textbf{RQ2b: How do different bug prioritization strategies perform in terms of internal indices?}
After validating all strategies, we examine their performance in terms of internal indices. These indices are not used by any of the predefined strategies during the experiment, so they do not overlap with the models' parameters.
\begin{table}[!ht]
\centering
\caption{The internal indices to measure the effect of adopting different strategies for bug prioritization. The numbers in gray rows represent the actual case, whereas the others show the percentage (\%) relative improvement. \label{tab:strategies_security_internal}}%
\resizebox{\linewidth}{!}{
\begin{tabular}{clrrrr}
\toprule
\textbf{Project} & \textbf{Strategy} & \textbf{\begin{tabular}[c]{@{}r@{}}Avg.\\ comments\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Avg. hub\\ ($10^{3}$)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Avg. authority\\ ($10^{3}$)\end{tabular}} & \textbf{\begin{tabular}[c]{@{}r@{}}Max harmonic \\ centrality ($10^{3}$)\end{tabular}} \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{Mozilla}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}12.1} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}36.9} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}4.7} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}38.8} \\
& \textbf{Max degree (\%)} & 2.9 & \textbf{81.8} & \textbf{17.7} & \textbf{83.4} \\
& \textbf{Max depth (\%)} & 1.0 & 79.9 & 9.2 & 81.5 \\
& \textbf{Max severity (\%)} & -9.9 & -94.7 & -97.4 & -103.4 \\
& \textbf{Children's degree (\%)} & -3.2 & 81.1 & 16.2 & 82.8 \\
& \textbf{Children's severity (\%)} & 1.6 & 79.5 & 2.3 & 81.5 \\
& \textbf{Max \{degree + depth\} (\%)} & \textbf{3.0} & 81.1 & 13.8 & 82.9 \\
& \textbf{Max \{degree + severity\} (\%)} & 1.5 & 76.2 & -13.7 & 79.2 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -13.8 & -115.9 & -74.4 & -129.9 \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{LibreOffice}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}11.4} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}14.7} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}0.8} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}27.0} \\
& \textbf{Max degree (\%)} & 11.0 & 87.7 & 81.4 & 92.8 \\
& \textbf{Max depth (\%)} & 11.0 & 87.9 & 82.0 & 92.9 \\
& \textbf{Max severity (\%)} & 10.9 & 52.5 & 31.3 & 77.8 \\
& \textbf{Children's degree (\%)} & 11.0 & \textbf{88.4} & \textbf{84.3} & \textbf{93.1} \\
& \textbf{Children's severity (\%)} & 9.8 & 49.8 & 16.1 & 73.4 \\
& \textbf{Max \{degree + depth\} (\%)} & \textbf{11.2} & 87.8 & 80.0 & 92.8 \\
& \textbf{Max \{degree + severity\} (\%)} & 10.9 & 79.2 & 69.7 & 89.8 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -14.8 & 42.2 & -1.1 & 58.7 \\
\midrule
\parbox[t]{2mm}{\multirow{9}{*}{\rotatebox[origin=c]{90}{\textbf{EclipseJDT}}}} & \cellcolor[HTML]{EFEFEF}\textbf{Actual} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}\textbf{7.6}} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}3.5} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}1.1} & \multicolumn{1}{r}{\cellcolor[HTML]{EFEFEF}4.5} \\
& \textbf{Max degree (\%)} & -13.6 & 62.7 & 5.2 & 79.1 \\
& \textbf{Max depth (\%)} & -12.6 & 63.3 & 6.7 & 79.0 \\
& \textbf{Max severity (\%)} & -14.0 & -129.6 & -125.5 & -196.7 \\
& \textbf{Children's degree (\%)} & -13.7 & \textbf{64.5} & \textbf{11.9} & 79.1 \\
& \textbf{Children's severity (\%)} & -14.8 & -130.5 & -132.6 & -196.8 \\
& \textbf{Max \{degree + depth\} (\%)} & -13.5 & 63.6 & 7.8 & \textbf{79.2} \\
& \textbf{Max \{degree + severity\} (\%)} & -12.0 & -134.7 & -101.2 & -168.3 \\
\multirow{-9}{*}{} & \textbf{Random (\%)} & -18.2 & -67.1 & 28.9 & -128.0 \\
\bottomrule
\end{tabular}
}
\end{table}
Table~\ref{tab:strategies_security_internal} shows the evaluation of the different strategies based on internal indices. The \textit{max \{degree + depth\}} strategy also solves the bugs with the highest user attention, i.e., the highest number of comments. This strategy is effective for Mozilla and LibreOffice, whereas it is less effective for the Eclipse project; in Eclipse, the current policy appears to internally consider the discussions for each bug, as it is the best in terms of the average number of comments on open bugs. Defining the objective as minimizing the average authority and hub scores of the nodes and their maximum harmonic centrality, \textit{children's degree} is the best-performing approach for LibreOffice and Eclipse, whereas \textit{max degree} outperforms the others in Mozilla.
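These internal indices can be computed with standard graph routines (e.g., networkx's `hits` and `harmonic_centrality`); the dependency-free sketch below follows the textbook definitions, under our assumption that arcs point from blocker to blocked bug:

```python
def hits(adj, iters=50):
    """Power-iteration HITS scores for a digraph given as {node: [successors]}."""
    nodes = set(adj) | {v for vs in adj.values() for v in vs}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: 0.0 for n in nodes}
        for u, vs in adj.items():
            for v in vs:
                auth[v] += hub[u]          # authority: pointed at by good hubs
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: 0.0 for n in nodes}
        for u, vs in adj.items():
            for v in vs:
                hub[u] += auth[v]          # hub: points at good authorities
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth

def harmonic_centrality(adj, node):
    """Sum of 1/d(u, node) over all nodes u that can reach `node` (BFS on reversed arcs)."""
    rev = {}
    for u, vs in adj.items():
        for v in vs:
            rev.setdefault(v, []).append(u)
    dist, frontier, total = {node: 0}, [node], 0.0
    while frontier:
        nxt = []
        for v in frontier:
            for u in rev.get(v, []):
                if u not in dist:
                    dist[u] = dist[v] + 1
                    total += 1.0 / dist[u]
                    nxt.append(u)
        frontier = nxt
    return total

# Two blockers (1 and 3) pointing at the same blocked bug (2):
hub, auth = hits({1: [2], 3: [2]})
print(auth[2] > auth[1])                         # True
print(harmonic_centrality({1: [2], 3: [2]}, 2))  # 2.0
```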
In terms of internal indices, we again observe a significant improvement over the reference point. In practice, this implies that considering the topological space that arises from bugs and their inter-dependencies reduces the complexity of the ITS and, consequently, leaves fewer lingering bugs in the long run. Moreover, we repeat the experiment for different numbers of developers, and the observation generalizes. Our findings are consistent with the study of \citet{Shirin2020}, in which the resulting depth and degree of the BDG were considered as the reward for the POMDP model. They concluded that, in the products they investigated, the development team does not currently prioritize bugs with respect to their blocking effects. Similarly, we observe that whenever practitioners disregard the effect of blocking bugs, they encounter a significant number of blocked bugs and, in turn, lingering bugs in the system.
\subsection{Analysis with synthetic data}\label{sec:resultsSynthetic}
To further examine the effect of graph complexity on the strategies, we design a synthetic graph based on the real dataset. We apply this analysis to only one of our projects, assuming that the process generalizes; accordingly, we select the Bugzilla project with the largest number of bugs as our target.
\textbf{RQ2c: How do bug prioritization strategies perform in a dense bug dependency graph with a high number of blocking bugs?}
Table~\ref{tab:simulation_characteristic} shows different attributes of the reported bugs in that project based on their severity. We eliminate duplicate bugs to obtain a better understanding of the BDG. Consequently, we construct a new graph based on the actual characteristics of the bug reports, with a small amendment to the number of dependencies. To increase the complexity of the graph, we modify the number of arcs going out of each node using the formula below
\begin{equation*}
n^{\prime} = n \times \gamma + \delta
\end{equation*}
where $n$ is the current number of outgoing arcs, $\gamma$ is the expansion factor (in our case, $\gamma = 3$), and $\delta$ is a random intercept selected from the set $\{-2,-1,0,1,2\}$.
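A sketch of this densification step (clipping at zero is our addition, since a negative intercept could otherwise produce a negative arc count):

```python
import random

def densify(out_degrees, gamma=3, deltas=(-2, -1, 0, 1, 2), seed=0):
    """Apply n' = n * gamma + delta to every node's outgoing-arc count."""
    rng = random.Random(seed)
    return [max(0, n * gamma + rng.choice(deltas)) for n in out_degrees]

# Each count is roughly tripled, plus a small random intercept.
print(densify([0, 1, 2, 5]))
```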
\begin{table}[!ht]
\centering
\caption{Characteristics of the bugs extracted from Firefox, including the percentage of bugs at different severity levels, the number of dependencies, and the estimated solving time. \label{tab:simulation_characteristic}}%
\resizebox{0.7\linewidth}{!}{
\begin{tabular}{lrrrrrr}
\toprule
\multirow{2}{*}{\textbf{severity}} & \multicolumn{1}{c}{\multirow{2}{*}{\textbf{\%}}} & \multicolumn{2}{c}{\textbf{\# of blocks}} & \multicolumn{2}{c}{\textbf{\# of depends on}} & \multicolumn{1}{c}{\textbf{Time to solve}} \\
& \multicolumn{1}{c}{} & \multicolumn{1}{c}{\textbf{$\mu$}} & \multicolumn{1}{c}{\textbf{$\sigma$}} & \multicolumn{1}{c}{\textbf{$\mu$}} & \multicolumn{1}{c}{\textbf{$\sigma$}} & \multicolumn{1}{c}{\textbf{$\mu$}} \\
\midrule
\textbf{Blocker} & 0.41 & 0.714 & 1.254 & 0.143 & 0.378 & 1 day 12:21:23 \\
\textbf{Critical} & 5.49 & 0.817 & 3.210 & 2.032 & 16.296 & 5 days 20:17:14 \\
\textbf{Major} & 6.43 & 0.477 & 1.309 & 0.165 & 0.553 & 5 days 14:21:06 \\
\textbf{Normal} & 85.3 & 1.017 & 1.827 & 4.167 & 44.139 & 5 days 12:54:49 \\
\textbf{Minor} & 2.13 & 1.139 & 3.523 & 1.083 & 5.823 & 8 days 13:06:09 \\
\textbf{Trivial} & 0.24 & 0.750 & 1.500 & 0.000 & 0.000 & 3 days 05:16:17 \\
\bottomrule
\end{tabular}
}
\end{table}
Considering the new, denser network of bugs, we utilize the simulation pipeline and fix the reported bugs based on the strategies mentioned above. The rate of incoming bug reports and all their attributes remain intact; the only change is the increased density.
After analyzing the performance of the different strategies, we note that although their relative performance remains the same, the synthetic analysis clearly separates graph-related strategies from random or severity-based ones. In other words, in a dense BDG, we recommend using graph-related strategies in lieu of traditional approaches. Our findings are consistent with previous studies that emphasize the importance of graph-related measures in bug prioritization~\cite{Shirin2020}. More details on the numeric outputs are reported in \ref{sec:syntheticdata}.
\section{Threats to validity} \label{sec:threats}
\subsection{Construct Validity}
In this study, we evaluate different strategies in terms of not only external indices but also internal ones; that is, we ensure that our measurements are independent of the selected strategies. Unlike previous studies that report methods' performance using a limited number of measurements (e.g., see \citep{Shirin2020}), we define ten indices in this study. From the perspective of the reliability of measures, we explore internal indices to overcome the issue of correlated measures. Moreover, this work proposes a list of strategies to determine the best policy under particular circumstances.
As some strategies in our simulation involve randomness, i.e., they randomly choose one bug in case of ties, we repeat all experiments three times and report the results based on their average performance. We expect this iterative process to address the issue of random heterogeneity of subjects.
\subsection{External Validity}
In our simulation, we rely on the data extracted from three different open-source projects, with and without minor modifications. Moreover, we choose well-established projects with different natures--i.e., Firefox, Eclipse, and LibreOffice--covering the past decade, to alleviate generalizability issues. We also consider the evolution of the bug reports instead of static snapshots of the system. We simplify our models by discarding some attributes, e.g., the bug description and the number of CC'ed developers. We plan to expand the study by including different attributes of bug reports and creating a comprehensive evolutionary graph. Nonetheless, replication of our study using a different ITS, e.g., industrial data or proprietary products, may prove fruitful. We use the actual bug prioritization obtained from the ITS as the baseline; since, to the best of our knowledge, no other study has considered the simulation of bug prioritization, we forgo a comparison with other works.
\subsection{Internal Validity}
The BDG is extracted from three Bugzilla ITSs using the REST API. However, some bug reports might have been deleted from the repository or have restricted access for normal users; our analysis applies to the bugs that are open to the public. Furthermore, since the recorded fixing time proves to be unreliable, we estimate it using a Gaussian distribution around the median bug fixing time. Therefore, all fixing times reported in the simulation part are estimated times to solve bugs. This assumption is not expected to impact the final decision since it remains identical across strategies.
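The fixing-time estimation can be sketched as follows (the standard deviation and the lower truncation are our assumptions; the text specifies only a Gaussian around the median):

```python
import random

def estimated_fix_time(median_days, sd_days=1.0, rng=None):
    """Estimated time to solve a bug: a Gaussian sample centred on the
    median fixing time, truncated below so it stays positive."""
    rng = rng or random.Random(0)
    return max(0.1, rng.gauss(median_days, sd_days))

# With sd_days=0 the estimate collapses to the median itself.
print(estimated_fix_time(5.5, sd_days=0.0))  # 5.5
```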
\section{Related work} \label{sec:background}
Bug prioritization is vital in software systems as it affects the maintenance budget of software, scheduled releases and enhancements, and even the image of a brand in the eyes of end-users. The developers typically use manual examination and intuitive judgment in the process of bug triage. \citet{valdivia2016} reports that there is no specific bug prioritization strategy on which developers agree during the bug fixing process.
Bug triaging involves different processes such as designating an appropriate developer with relevant knowledge to resolve a bug, analyzing the time to fix a bug, specifying which bug needs to be solved immediately and which one does not, and finally finding duplicate bug reports~\citep{uddin2017}. Therefore, manual implementation of such an arduous process requires considerable time and resources in large and open-source software systems, making this task error-prone. A considerable amount of research aims to alleviate this issue through the automation of the entire triaging process. For instance, researchers approach the problem of duplicate bug detection using text retrieval techniques or more complex learning-based methods, including additional bug information~\citep{Chaparro2019, hindle2016, EBRAHIMI2019, hindle2019}. On the other hand, several other studies focused on automatic or semi-automatic bug triage models to either select the bug which should be solved next or choose an appropriate developer to solve it~\citep{Shirin2020, Yang2014, Xia2017, Umer2018,Zhang2017,guo2020, Xuan2012}.
In terms of bug triaging, different machine learning approaches, such as classification, clustering, information retrieval, and reinforcement learning, have been adopted. \citet{Yang2014} suggested a method for semi-automatic bug triage and severity prediction. They utilized topic modeling, namely Latent Dirichlet Allocation (LDA), to determine the topic to which an arriving bug belongs. Then, they extracted a list of candidate assignees based on the selected topic and used bug attributes to rank appropriate developers. Similarly, \citet{Xia2017} proposed an extensible topic model based on the LDA approach, the multi-feature topic model (MTM), which computes the affinity of a developer to a new bug report using the history of the bugs that the developer has fixed. \citet{Umer2018} studied the effect of emotion analysis of the summary attribute of bug reports on bug prioritization. Specifically, they computed the emotion value of each bug report and assigned it a priority level of P1 to P5. Moreover, they reported a high correlation ($r=0.405$) between the emotion and the priority of bug reports. \citet{guo2020} applied natural language processing, combining a Word2vec representation of the bug summary with a convolutional neural network (CNN).
\citet{Shirin2020} pointed to a different concern for bug prioritization, noting that bug priority and severity can be both subjective and misleading. They focused on the mutual impact of bugs by using a dependency graph. Although a few other studies consider graph-based analysis for software evolution~\citep{Bhattacharya2012}, \citet{Shirin2020}'s work differs from those in incorporating the uncertainty in the ITS. More specifically, they proposed a partially observable bug dependency graph, where the dependencies between the bugs are not fully observable beforehand and are revealed as the bugs are resolved, and defined its depth and degree as crucial factors affecting a bug's priority. They solved their POMDP model using Monte Carlo simulation and compared its performance against baseline policies. On the other hand, their work lacks internal performance indices that would allow comparing different policies. Our study differs from \citet{Shirin2020}'s in that we define a pool of strategies for bug prioritization and compare them using various datasets and under different circumstances. We note that, for a fair comparison between approaches, the measurements should be independent of the strategy used to prioritize the bugs. Moreover, we consider a variety of strategies to cover different bug prioritization policies. We also create a novel Wayback machine that enables practitioners to compare their suggested approaches with the actual events in the ITS.
\section{Conclusion} \label{sec:conclusion}
Previous studies showed that the bug dependency graph (BDG) is a reliable source for decision-makers in defect prioritization~\cite{Shirin2020, Bhattacharya2012-2}. In this work, we extend those observations to three actual projects and draw a comparable baseline for policy evaluation. The evolutionary graph of historical data is neither subjective nor misleading; rather, it is a valid reference for practitioners.
Our work on open-source data indicates the impact of the dependency graph's complexity on lingering bugs, which requires further validation on proprietary software. Our findings show that there is no single remedy addressing all the predefined expectations in the bug triage process. Nevertheless, policies based on \textit{children's degree} or simply \textit{the maximum degree} outperform the others in a dense bug dependency graph. When the graph includes a multiplicity of solo bugs, other attributes of bug reports, together with network-related ones, need to be considered to achieve the best performance.
Accordingly, we recommend monitoring the bug dependency graph's evolution to gain an in-depth understanding of the consequences of each decision made during the bug prioritization/resolution process. A Wayback machine enables practitioners to have a complete understanding of the system's evolution at each timestamp; it can serve as the basis of an as-is machine learning algorithm and for the evaluation of recommender engines. A relevant avenue for future research would be to combine the effect of blocking bugs with other factors, such as developers' effort and the cost of bugs, and to train a deep learning algorithm to predict the relative importance of each prioritization decision.
Our primary objective in this longitudinal study is to demonstrate the current status of the system and the sequential decisions of the developers in these projects, facilitating the exploration of different bug prioritization strategies. This paper investigates the history of the BDG and compares it against different rule-based strategies. For practitioners, it highlights the importance of bug dependencies in bug prioritization and facilitates the comparison of any strategy with the actual decision-making process. Finally, we recommend considering the evolutionary behaviour of the issue tracking system instead of snapshots of past events; a simulation study is helpful for this purpose.
\section*{Supporting Information}
To make the work reproducible, we publicly share our originally extracted dataset of one-decade bug reports, scripts, and analysis on \href{https://github.com/HadiJahanshahi/WaybackMachine}{\textcolor{blue}{GitHub}}.
\bibliographystyle{elsarticle-num-names}
\section{Introduction}
In software engineering practice, the later a bug is discovered and fixed, the more costly it becomes for the project~\cite{kumar2017software}. However, due to limited resources and an increasing number of reported defects, it is not feasible to fix all the bugs before each software release. Therefore, practitioners frequently face the decision of which bugs to resolve now and which to defer to the next release.
Bug prioritization \textcolor{black}{and triage} tasks mainly depend on the quality of the reported bugs, yet not all reports are consistent. Previous studies discussed the evidence for the mismatch between developers' and users' understanding of the bugs~\cite{Bettenburg2008, umer2019}. Moreover, bug severity information is not reliable, since 51\% of duplicate reported bugs have inconsistent severity labels\textcolor{black}{, which is expected}~\cite{tian2016}. Data on the bug fixing time is not reliable either; that is, it does not indicate the exact number of working hours spent on a specific bug in a continuous manner~\cite{Shirin2020}.
Unlike many subjective characteristics of the bugs, blocking bugs are determined by a developer in the defect resolution phase. In a typical flow of a bug report, an end-user or a developer reports a bug or an issue. Subsequently, a triager assigns it to a developer, or a developer claims its ownership. Ultimately, after a resolution is found for the bug, it is verified by another developer and gets closed. However, in the case of a blocking bug, this process is interrupted~\cite{Garcia2018}. Blocking bugs are more complex than non-blocking bugs, require more time to get fixed, are associated with a larger codebase (in terms of lines of code (LOC)), and are also hard to predict~\cite{Garcia2018, Goyal2017, Wang2018}. As the number of blocking bugs increases, resource planning becomes a tedious task, and developers defer many bugs to a later release. The accumulation of lingering bugs~\textendash~bugs reported but not resolved in the current release~\textendash~degrades software quality and increases the maintenance cost~\cite{Akbarinasaji2017}. Therefore, understanding the influence of the bug dependency graph (BDG) \textcolor{black}{together with other bug features} on software maintenance is essential.
\textcolor{black}{A common approach to bug triage and prioritization is to use different machine learning algorithms and evaluate the performance of the bug assignment or bug prioritization~\cite{uddin2017, alenezi2013efficient,anvik2006should,xuan2017automatic}. However, in most cases, previous studies did not consider the effect of bug dependency in their recommended policies. Moreover, it is important to explore the impact of the algorithm at the exact time a bug is assigned or prioritized. For instance, if, at time $t$, a bug is assigned to a developer who has previous experience with a component but is busy with other assigned tasks, the algorithm should automatically propose an alternative developer for the open bug. However, without a simulator that regenerates the exact characteristics of the open bugs and available developers at time $t$, it might not be feasible to propose a practical solution. Accordingly, we propose the modular Wayback Machine, which regenerates past events for any given timestamp and might easily be adopted by researchers to investigate the performance of their proposed bug triage or prioritization algorithms.}
Another important missing link in these previous studies is the recognition of the actual real-world situation as a baseline. It is critical to know how the \textcolor{black}{content of} the issue tracking system (ITS) evolves in terms of complexity, as it enables practitioners to automate the decision-making process and to trace back the actual decisions in the bug triage process. \textcolor{black}{The idea of the Wayback Machine comes from the digital archive of the World Wide Web, via which we can explore the status and content of webpages at previous timestamps\footnote{\url{https://archive.org/web/}}. To this end, we construct a Wayback Machine with which practitioners may explore past events in the ITS.} In addition, we simulate an extensive list of \textcolor{black}{prioritization and triage strategies over a BDG to see whether the proposed event-regenerator machine can reveal evolutionary aspects of decisions that were not explored in previous studies.} Moreover, we use a discrete-event system simulation approach and evaluate the performance of the models using \textcolor{black}{both traditional metrics (e.g., the assignment accuracy) and evolutionary metrics (e.g., the task concentration on developers)}. Accordingly, our research questions are two-fold: first, understanding and rebuilding the history of the issue tracking system; and second, \textcolor{black}{checking the validity of the Wayback Machine through exploring prioritization and triage strategies. We note that these strategies can be substituted with any bug prioritization or triage algorithm in the modular Wayback Machine. Thus}, we structure our study along the following research questions, divided into two categories:
\begin{RQquestion}
\textbf{RQ1a: How do open-source software systems evolve in terms of the number of bug reports, bug dependencies, and lingering bugs?}
\end{RQquestion}
\begin{RQanswer}
We explore the past events in the ITS through a novel Wayback Machine. Given the extracted data from any ITS, this machine provides us with bugs' status in any timestamp. Hence, we may query different characteristics of the bugs and explore the reason behind each bug prioritization decision in the past, e.g., what kinds of bugs we had and why a developer chose to resolve a specific bug over others. We demonstrate the number of bug reports, the evolution of BDG, and their effect on the lingering bugs.
\end{RQanswer}
\begin{RQquestion}
\textbf{RQ1b: How do the characteristics of the resolved bugs change over time?}
\end{RQquestion}
\begin{RQanswer}
We further explore the importance of bug dependencies for triagers. We analyze a series of observed sequences through the Wayback Machine to see how triagers regard a bug's \textcolor{black}{severity, priority,} degree, and depth when prioritizing it. Our findings illustrate \textcolor{black}{that in some issue tracking systems, the dependency of the bugs is largely disregarded, or some developers are even unaware of it, so dependency loses its importance. On the other hand, in the ITSs where bug dependency practice is taken seriously, the principal role of depth and degree is noticeable when comparing their averages for solved and postponed bugs. We also found that although severity and priority levels are known to be subjective, the average severity and priority of the fixed bugs are higher than those of the open bugs in the ITS.}
\end{RQanswer}
\begin{RQquestion}
\textbf{\textcolor{black}{RQ2a: How do different bug prioritization strategies perform in terms of evolutionary metrics?}}
\end{RQquestion}
\begin{RQanswer}
After creating the Wayback Machine to review past prioritization decisions, we explore different prioritization strategies and compare their performance with the actual case. \textcolor{black}{The main aim of the RQ2 questions is to validate the proposed Wayback Machine as a way to prioritize bugs via different machine learning and rule-based approaches. To this end, we define evolutionary metrics for the first time (e.g., the depth and degree of the BDG and the deviation from the actual assignment). These metrics cannot be reported through a static use of machine learning algorithms \textendash i.e., training a model on tabular information and reporting its performance without time consideration. We then evaluate different rule-based and machine learning algorithms for bug prioritization purposes.}
\end{RQanswer}
\begin{RQquestion}
\textbf{\textcolor{black}{RQ2b: How do different bug triage strategies perform in terms of evolutionary metrics?}}
\end{RQquestion}
\begin{RQanswer}
\textcolor{black}{We further explore the performance of well-established bug triage algorithms. We equip the Wayback Machine with a bug triage module, which can compare existing triage algorithms from the literature with the actual bug assignment. Moreover, we report the performance of those algorithms based on both evolutionary and traditional metrics, i.e., static accuracy-related metrics.}
\end{RQanswer}
We organized the rest of the paper as follows. \textcolor{black}{Section~\ref{sec:research-methodology} presents the methodology, motivating example, and dataset description. Section~\ref{sec:Wayback} briefly explores the notion behind the Wayback Machine. Section~\ref{sec:findings} investigates the impact of different prioritization and triage strategies that take into account the evolutionary characteristics of the ITS. It reports the performance of the models based on both traditional and evolutionary metrics. Finally, Section~\ref{sec:threats} describes the limitations and threats to validity, followed by Section~\ref{sec:background}, which briefly discusses the relevant literature on bug prioritization, triage, and dependency graphs, and Section~\ref{sec:conclusion}, which concludes the paper.}
\section{Research Methodology}\label{sec:research-methodology}
We examine the evolution of the bugs in the software repositories to help the understanding of the bug prioritization and triage process. For this purpose, we use reported bug information extracted from the ITS of three open-source projects, namely Mozilla, Eclipse, and LibreOffice, covering ten years from January 2010 to December 2019. We construct a BDG based on the daily reported bugs (nodes) and daily blocking information (arcs). A BDG is a directed acyclic graph that does not contain any loop in terms of blocking information, i.e., a bug cannot block another bug and be blocked by the same bug simultaneously.
We track the BDG's evolution through complexity metrics, e.g., the \textit{depth} ($\theta$) of a node, defined as the longest directed path between the given node and other nodes in the graph, the \textit{degree} ($\delta$) of a node, i.e., the number of its outgoing arcs, the number of nodes ($n$), and the number of arcs ($m$) in a graph. Accordingly, the maximum degree and depth of a graph cannot exceed $n-1$. As we sort all the information chronologically, we add or remove nodes and arcs at each timestamp and measure the changes in the metrics from time $t$ to time $t+1$. This information uncovers the evolution of the BDG in the project. More details about the BDG are given in Section~\ref{sec:motivating_example}.
To accurately trace back the history of the actual software project, we also incorporate bug report attributes such as bugs' title, description, severity, and priority. \textcolor{black}{We further use these attributes and create machine learning algorithms and rule-based approaches to validate the Wayback Machine in a controlled experiment.} Also, we simulate the network's behavior using different bug prioritization and triage strategies and compare them in terms of various \textcolor{black}{traditional and evolutionary metrics}.
\subsection{\textcolor{black}{Motivating example}}\label{sec:motivating_example}
\textcolor{black}{The Wayback Machine makes it possible to evaluate and observe the evolution of a project as it records the events in the ITS and generates evolutionary statistics such as the number of reported/fixed bugs, their severity, priority, depth, and degree, together with information on the developers' load. These statistics are all time-dependent and may change from one release to another. Accordingly, we list three essential aspects of bug prioritization and triage decisions that are overlooked in many studies: bug dependency, time, and decision outcome. Here we discuss the importance of covering each of them in bug/defect prioritization/triage studies. We note that the Wayback Machine covers these aspects in its design.}
\paragraph{Bug dependency}
\textcolor{black}{Figure~\ref{fig:BDG} shows the dependency graph of the bugs, $b_i \in \{b_1, b_2, \dots, b_9\}$, with their associated severity, $s_i$, and fixing time, $c_i$. Nodes show the bugs, and arcs show their dependencies as determined by developers. In this example, $b_1$ and $b_2$ are blocking bugs for $b_4$, meaning that the blocked bug cannot be solved unless its parent nodes are fixed. In a sparse BDG, we may observe a plethora of solo bugs (e.g., see $b_5$ and $b_9$), which neither block nor are blocked by others. On the other hand, having many blocked bugs in the system may postpone the bug fixing process and impose lingering bugs on the system~\cite{Shirin2020}. If triagers disregard the dependency of the bugs while prioritizing them, they may arrive at a decision that is infeasible in practice and might cause delays in bug resolution times. The other important factors in a BDG are its number of subgraphs and its bugs' depth and degree. In this paper, we refer to out-degree simply as \textit{degree}. Figure~\ref{fig:BDG} has 4 subgraphs, $\mathcal{S} = \{[1,2,3,4,6],[5],[7,8],[9]\}$. Also, $b_6$ has the highest depth value of 2, and $b_1$ has the highest degree value of 2. The degree shows the number of blocked bugs, and the depth indicates the number of ancestor levels (parents and grandparents) of a bug in the graph. A higher depth may postpone a bug's fixing time because of its many ancestors. Accordingly, we closely track the dependency of the bugs during the bug triage process.}
The historical data of Bugzilla for Mozilla, Eclipse JDT, and LibreOffice projects indicates many solo bugs, whereas, in the same projects, some densely connected sub-graphs gradually accumulate. Our evolutionary model, Wayback Machine, can trace back to when each of these sub-graphs developed. It provides a clear insight into the exact time when \textcolor{black}{an inappropriate prioritization/triage} resulted in either lingering bugs or an unbalanced network.
\begin{figure}[!htb]
\centering
\begin{tikzpicture}[b/.style={circle,draw,execute at begin node={$b_{#1}$},
alias=b-#1,label={[rectangle,draw=none,overlay,alias=l-#1]right:{$[s_{#1},c_{#1}]$}}}]
\node[matrix of nodes,column sep=1em,row sep=2em]{
& & |[b=1]|& & |[b=2]| & & &|[b=7]|\\
& |[b=3]|& & |[b=4]| & & & & |[b=8]| \\
|[b=5]|& & |[b=6]| & & & &|[b=9]| &\\
};
\path[-stealth] foreach \X/\Y in {1/3,3/6,1/4,4/6,2/4,7/8} {(b-\X) edge (b-\Y)};
\path (l-7.east);
\end{tikzpicture}
\caption{\textcolor{black}{A typical BDG, with severity ($s_i$) and fixing time ($c_i$) for each bug $b_i$.}}
\label{fig:BDG}
\end{figure}
We note that, while dependency information is available in software repositories (e.g., Bugzilla), only a few other studies have considered dependency as an important factor when designing bug prioritization and triage algorithms. Accordingly, our study also contributes to a better understanding of the role of dependency information in bug prioritization and triage.
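The metrics on the example BDG of Figure~\ref{fig:BDG} can be reproduced with a short, self-contained sketch (plain Python, no graph library; the edge list mirrors the arcs in the figure, and depth is computed as the number of ancestor levels, consistent with $b_6$ having depth 2):

```python
# Sketch of the BDG metrics from Figure 1. Edge (u, v) means bug u blocks bug v.
edges = [(1, 3), (3, 6), (1, 4), (4, 6), (2, 4), (7, 8)]
nodes = range(1, 10)  # bugs b1..b9; b5 and b9 are solo bugs

children = {n: [] for n in nodes}
parents = {n: [] for n in nodes}
for u, v in edges:
    children[u].append(v)
    parents[v].append(u)

def degree(n):
    """Out-degree: number of bugs directly blocked by bug n."""
    return len(children[n])

def depth(n, _memo={}):
    """Longest directed path ending at bug n (number of ancestor levels)."""
    if n not in _memo:
        _memo[n] = 0 if not parents[n] else 1 + max(depth(p) for p in parents[n])
    return _memo[n]

def subgraphs():
    """Weakly connected components of the BDG (ignoring arc direction)."""
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        comp, stack = [], [n]
        while stack:
            x = stack.pop()
            if x in seen:
                continue
            seen.add(x)
            comp.append(x)
            stack.extend(children[x] + parents[x])
        comps.append(sorted(comp))
    return comps
```

On this graph, \texttt{degree(1)} and \texttt{depth(6)} both equal 2, and \texttt{subgraphs()} returns the four components of $\mathcal{S}$.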
\paragraph{Time} \textcolor{black}{Another major factor in bug triage is time. Most studies on bug prioritization and triage that use bug history without simulation do not consider the evolutionary nature of the ITS~\citep{uddin2017, zaidi2020applying, alazzam2020automatic, park2011costriage}. For instance, if a model recommends solving bug $i$ prior to bug $j$ at time $t$, this recommendation should be made while the information of bugs $i$ and $j$, and of all other bugs, is consistent with time $t$. The severity of bug $i$, $s_i$, changes over time. Therefore, if severity is used as a feature affecting bug prioritization, it should be the exact severity of the bug at time $t$. Moreover, a bug might not be blocked by another bug at time $t$ but may become blocked in future time steps. That is, we need to consider the exact dependencies at the time of solving the bug. This logic generalizes to any other evolutionary feature of a bug. Lastly, when prioritizing a bug, it is important to know the exact list of open bugs at that time.}
\paragraph{Decision outcome} \textcolor{black}{We cannot prioritize or triage all the available bugs without considering their opening, closing, and re-opening statuses. That is, high accuracy in bug assignment or prioritization alone does not guarantee that a model can be applied in the real world. For instance, assume that we assign bug $b_i$ to developer $d_j$ at time $t$. This assignment may be considered accurate as the developer has previous experience with bugs of the same type/component. However, the developer might be overloaded with previously assigned bugs and unable to claim possession of a new bug at time $t$. In such a case, a second developer who is fairly knowledgeable in the field can start working on the new bug to avoid bug accumulation in the ITS. Therefore, knowing the schedules and current loads of the developers might be very important. Accordingly, we define a set of evolutionary metrics, e.g., the number of overdue bugs, that capture the real impact of a decision at each timestamp. We also check the assignment times of the developers and compare each strategy with the actual case to see whether the strategy mimics the real world. We note that all bug prioritization and triage algorithms in the literature may benefit from a stable past-event regenerator that captures the evolutionary history of the bugs. The ITS Wayback Machine, coded in Python, serves this purpose through its modular structure. Different bug prioritization or triage algorithms can be integrated into it, while the machine uses the chronological data and produces visual and tabular outputs, giving more comprehensive insights into the decision outcomes.}
\textcolor{black}{\subsection{Current bug prioritization and triage practice in Bugzilla projects}
A newly reported bug in the Bugzilla ITS has an ``UNCONFIRMED'' status until it is validated. A developer starts ``preparation'' steps, i.e., searching for bugs according to their expertise, checking their information and metadata, and finding possible duplicate bugs. After that phase, they try to reproduce the bug. If they confirm a bug based on its reproducibility, its status changes to ``NEW'' and it becomes ready for the prioritization and assignment phase. Mostly in bug triage meetings, developers review open bugs and evaluate whether each bug is worth fixing, when it should be fixed, and who should work on it. Although the prioritization might be subjective, the QA team members need to be consistent in determining bug priorities and have a clear flowchart for setting the priority level.
They might also flag a bug as ``UNCONFIRMED'', ``NEEDINFO'', or ``INVALID'' if a defect runs short of information or they fail to verify it. In OSS systems, in the case of critical bugs, the bug assignment is done by highlighting bugs and CCing potential developers. Therefore, a developer may claim possession of a verified bug rather than being formally assigned to it. Nevertheless, the practice of a triager assigning a bug to a developer is another way of triaging bugs in OSS\footnote{See triage for Bugzilla in \href{https://firefox-source-docs.mozilla.org/bug-mgmt/policies/triage-bugzilla.html}{Mozilla}, \href{https://wiki.documentfoundation.org/QA/BugTriage}{LibreOffice}, and \href{https://wiki.eclipse.org/SWT/Devel/Triage}{Eclipse} projects}.}
\subsection{Data collection}
We use bug data information from Bugzilla, an ITS for open-source software applications. The dataset is originally extracted from the Mozilla, Eclipse, and LibreOffice ITSs and contains reported bugs for the projects between January 2010 and December 2019. We note that LibreOffice was forked in 2010 from OpenOffice, and its first reported bug was in August 2010. \textcolor{black}{According to the Bugzilla website\footnote{\href{https://www.bugzilla.org/installation-list}{https://www.bugzilla.org/installation-list}}, these projects are among the top-8 highlighted ``Free Software Projects'' and have a clear explanation of how to extract information from their repositories using the API. There are many other candidate projects, e.g., Linux distribution projects; however, we choose these three since they are diverse and well-established in terms of graph complexity, number of reported bugs, and number of developers.} To collect the raw data from the repository, we use the Bugzilla REST API to extract both the general information of all bugs and the history of all metadata changes for each bug\footnote{\href{https://wiki.mozilla.org/Bugzilla:REST_API}{https://wiki.mozilla.org/Bugzilla:REST\_API}}. The collected information includes the creation time, current status, type, severity, priority, title and description, resolution, assignee, and component. On the other hand, the evolutionary information is not obtainable via the general information of a bug. Consequently, we extract the formal relationships between the bugs by considering the metadata of their change histories, along with their timestamps. These relationships take the form of duplication and blocking.
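As a concrete illustration, the two kinds of requests can be built as follows (a minimal sketch: the endpoint paths follow the Bugzilla REST documentation, while the field selection and the example bug id are purely illustrative):

```python
# Minimal sketch of querying one bug's general info and its change history
# via the Bugzilla REST API. Only URL construction is shown; the optional
# main-guard performs a live request against bugzilla.mozilla.org.
import json
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "https://bugzilla.mozilla.org/rest"

def bug_info_url(bug_id, fields=("id", "creation_time", "status", "severity",
                                 "priority", "summary", "component")):
    # include_fields restricts the response to the attributes we collect
    query = urlencode({"include_fields": ",".join(fields)})
    return f"{BASE}/bug/{bug_id}?{query}"

def bug_history_url(bug_id):
    # The history endpoint returns every metadata change with its timestamp,
    # which is what the evolutionary database is built from.
    return f"{BASE}/bug/{bug_id}/history"

if __name__ == "__main__":
    with urlopen(bug_info_url(35)) as r:  # 35 is an arbitrary old bug id
        print(json.loads(r.read())["bugs"][0]["status"])
```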
We examine both blocking and blocked bugs to see whether their initiation was before or after 2010. If a blocking or dependent bug was created before that time, we again extract all its information and add the ``old'' bug to the current database since they could affect the time to solve the corresponding bugs. Therefore, our database captures a full picture of bug dependency, whether it belongs to the targeted dates or earlier. For older bugs, we ignore the blocking information among themselves; however, we consider their dependency effects on targeted bugs between 2010 and 2020.
Next, we construct an evolutionary database. This database includes every change to the reported bugs along with its timestamp. Typically, these data cannot be obtained merely from the bugs' general information; they also require extracting the bugs' histories. While extracting historical data from Bugzilla, we encounter both missing and contradictory information. We handle the problem by combining the information of duplicate bugs and their historical metadata changes. Lastly, we sort the event logs by their timestamps and design a database that includes the bugs' information in chronological order.
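The chronological ordering step can be illustrated with a toy example (the record layout below is a hypothetical simplification of our database rows):

```python
# Toy illustration of assembling the evolutionary database: every change
# becomes a (timestamp, bug_id, event, value) record, and the records are
# sorted chronologically before being replayed by the Wayback Machine.
from datetime import datetime

raw_events = [
    ("2010-03-02T10:00:00", 4, "blocks", 6),
    ("2010-01-15T09:30:00", 4, "introduced", None),
    ("2010-05-20T17:45:00", 4, "resolved", None),
]

evolutionary_db = sorted(
    raw_events,
    key=lambda rec: datetime.strptime(rec[0], "%Y-%m-%dT%H:%M:%S"),
)
# The first event is now the bug report itself, the last its resolution.
```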
\subsection{Descriptive analysis}\label{sec:descriptive_analysis}
Table~\ref{tab:bug_info} shows the most relevant information regarding the extracted datasets. The number of publicly available bugs reported to Bugzilla between 2010 and 2020 for Mozilla, Eclipse, and LibreOffice is 100,475, 16,228, and 70,168, respectively. \textcolor{black}{We choose these projects for their diversity in terms of the number of reported bugs, the number of bug dependencies, and the ratio of open bugs to total reported bugs}. After extracting those bugs, we encounter some older bugs that block or are blocked by the target bugs. We extract the information of bugs older than 2010 if they are related to the target bugs. Therefore, our database includes the targeted bugs between 2010 and 2020 as well as related older bugs from before 2010. A complete report of their priority, severity, number of comments, and blocking information is provided in the table as well.
\begin{table}[!ht]
\centering
\caption{Information related to the bugs extracted from Bugzilla for Mozilla, Eclipse, and LibreOffice projects\label{tab:bug_info}}%
\resizebox{\linewidth}{!}{
\begin{tabular}{lrrrrr}
\toprule
\multicolumn{1}{c}{} & \multicolumn{2}{c}{\textbf{Mozilla}} & \multicolumn{2}{c}{\textbf{Eclipse}} & \multicolumn{1}{c}{\textbf{LibreOffice}} \\
\cline{2-3}\cline{4-5}\cline{6-6}
\textbf{Bug information} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{01/01/2010 -}\\ \textbf{31/11/2019}\\ \textbf{Targeted bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{09/06/1999 -} \\ \textbf{31/11/2009}\\ \textbf{Older bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{01/01/2010 -}\\ \textbf{31/11/2019}\\ \textbf{Targeted bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{09/06/1999 -} \\ \textbf{31/11/2009}\\ \textbf{Older bugs}\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}\textbf{03/08/2010 -} \\ \textbf{31/11/2019}\\ \textbf{All bugs}\end{tabular}} \\
\midrule
\# of bugs & 100,475 & 12,944 &
16,228 & 114 &
70,168 \\
\midrule
Dependency info &&&&& \\
\quad \# of blocked bugs & 13,856 & 6,862 &
1,428 & 41 &
1,576 \\
\quad \# of blocking bugs & 29,021 & 11,415
& 2,236 & 97 &
23,734\\
\midrule
Priority info &&&&& \\
\quad P1 & 6,737 & 1,165 & 47 & 0 & 517 \\
\quad P2 & 2,720 & 815 & 132 & 4 & 2,150\\
\quad P3 & 6,880 & 1,485 & 15,811 & 98 & 62,590\\
\quad P4 & 693 & 211 & 76 & 1 & 3,792\\
\quad P5 & 4,449 & 529 & 162 & 11 & 1,119\\
\quad Missing & 78,996 & 8,739 & 0 & 0 & 0\\
\midrule
Severity info &&&&& \\
\quad blocker & 204 & 64 & 169 & 1 & 494 \\
\quad critical & 3,782 & 360 & 308 & 1 & 2,919\\
\quad major & 4,556 & 325 & 1,104 & 9 & 5,885\\
\quad normal & 88,443 & 11,976 & 11,384 & 38 & 46,147\\
\quad minor & 2,426 & 167 & 753 & 3 & 4,763\\
\quad trivial & 1,019 & 52 & 214 & 1 & 1,366\\
\quad enhancement & 45 & 0 & 2,296 & 61 & 8,594\\
\midrule
Number of comments &&&&& \\
\quad mean & 8.1 & NA & 7.89 & NA & 8.5\\
\quad median & 4.0 & NA & 5.0 & NA & 6.0 \\
\quad standard deviation & 16.69 & NA & 9.6 & NA & 8.7\\
\bottomrule
\end{tabular}}
\end{table}
Priority comes from either the bug's assignee or the project lead. Generally, the bugs are triaged based on their priority, where P1 refers to the most significant bugs, whereas P5 corresponds to the least important bugs. The priority of bugs may change during the bug resolution process. For instance, when a developer observes that a bug takes excessive time to be solved, they assign a lower priority and start working on another one. We note that in Mozilla, 78.6\% of the bugs are not assigned a priority level; on the other hand, in Eclipse and LibreOffice, most of the bugs are assigned the medium level of P3, and the variation in priority is negligible. These observations are consistent with previous studies claiming that both ``priority'' and ``severity'' are unreliable factors~\citep{Shirin2020}.
Also, the person who reports a bug (i.e., the reporter) sets the severity to reflect how much it affects the user. To some extent, the reporter could overestimate this severity, and thus, it might need a revision from a developer. If users continually report bugs while assigning incorrect severities, they damage their reputation and, in the long run, get less attention. Therefore, a new user may tend to set the highest possible severity, making the severity level subjective. Bugzilla limits regular users to the ``Normal'' severity level; higher severities can be assigned only by contributors, developers, leaders, or admins.
Furthermore, the severity differentiates between a bug and an enhancement report. Not all severity levels are accessible to regular users. Table~\ref{tab:bug_info} indicates that most of the bugs receive the ``Normal'' severity, the highest accessible level for ordinary users. Lastly, the number of comments below a bug report is an indicator of the engagement of users or developers in the bug solving process. The bug triage relies upon the bug comments; however, some noisy comments may affect this value~\citep{Xuan2012}. \textcolor{black}{Therefore, we do not use the number of comments in our prioritization or triage tasks.}
\section{Wayback Machine mechanism}\label{sec:Wayback}
Using the ITS information, we create a \textcolor{black}{past-event regenerator that requires an} evolutionary database in which all bugs are sorted by their events' timestamps. The events include ``introduced'', ``resolved'', ``closed'', ``blocks'', ``depends on'', and ``reopened''. We ignore other events such as ``new'', ``verified'', or unimportant updates. Afterward, our event-based Wayback Machine is updated whenever a new event occurs in the system. If a user reports a new bug, it is added to the BDG with its full information retrieved from the Bugzilla ITS. If a bug blocks or depends on a new bug, we update the BDG by adding a new arc from the blocking bug to the blocked one. If a bug is resolved, we remove it from the BDG; however, we keep track of its information in a separate dataset, called the ``resolved dataset.'' Using that, we can add the bug back to the BDG with its dependency information in the case of reopening.
As recalculating BDG information per event has a high complexity, we only update the information of the affected bugs. For instance, if a bug is linked to other bugs and is resolved in this timestamp, we update the depth and degree information of those bugs in the same subgraph. Using our Wayback Machine, we may retrieve the BDG information at any given time. Algorithm~\ref{alg:Wayback_Machine} shows how the ITS Wayback Machine works.
\begin{algorithm}[!ht]
\SetKwData{Ev}{Evolutionary Database}\SetKwData{BDG}{BDG}\SetKwData{Solv}{Solved bugs tracker} \SetKwData{Resolved}{Resolved dataset}
\SetKwData{DB}{$\mathscr{DB}$}
\KwData{\Ev with $K$ events, information of the bugs extracted from Bugzilla (\DB)}
\KwResult{Daily monitoring of bug dependency graph evolution}
initialization;\\
\emph{\BDG = $\emptyset$}\\
\emph{\Solv = $\emptyset$}\\
\emph{\Resolved = $\emptyset$}\\
Sort \Ev by the changes' timestamps
\BlankLine
\For{$i \in \{1,\hdots,K\}$}{
\begin{algorithmic}
\IF{\Ev$[i][\text{`status'}] == \text{introduced}$}
\STATE Add bug info to \BDG using \DB
\STATE Start solving time of the bug
\ELSIF{\Ev$[i][\text{`status'}] \in \text{[blocks, depends on]}$}
\STATE Add a directed arc from blocking to blocked bug in \BDG
\ELSIF{\Ev$[i][\text{`status'}] == \text{resolved}$}
\STATE Remove the bug from \BDG and add it to \Resolved
\STATE Update solving time of the bug
\ELSIF{\Ev$[i][\text{`status'}] == \text{reopened}$}
\STATE Remove the bug from \Resolved and add it back to \BDG
\STATE Update solving time of the bug
\ENDIF
Update \Solv in case we have a reopened, resolved, or introduced bug. \\
Update the graph information of the bugs that are affected by event $i$.
\end{algorithmic}
}
\caption{Wayback Machine}
\label{alg:Wayback_Machine}
\end{algorithm}
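Algorithm~\ref{alg:Wayback_Machine} can be condensed into a runnable Python sketch. The event schema (dictionaries with timestamp, status, bug, and blocked fields) is a hypothetical simplification of our evolutionary database:

```python
# Condensed sketch of the Wayback Machine event loop (Algorithm 1).
def wayback(events):
    bdg = {}        # open bugs: bug id -> set of bugs it blocks
    resolved = {}   # the "resolved dataset": closed bugs and their arcs
    opened_at = {}  # solving-time tracker (start/restart timestamps)
    for ev in sorted(events, key=lambda e: e["timestamp"]):
        b, status = ev["bug"], ev["status"]
        if status == "introduced":
            bdg[b] = set()                    # node enters the BDG
            opened_at[b] = ev["timestamp"]    # start solving time
        elif status in ("blocks", "depends on"):
            # arc oriented from blocking bug to blocked bug
            bdg.setdefault(b, set()).add(ev["blocked"])
        elif status == "resolved":
            resolved[b] = bdg.pop(b, set())   # move node out of the BDG
        elif status == "reopened":
            bdg[b] = resolved.pop(b, set())   # restore node and its arcs
            opened_at[b] = ev["timestamp"]    # update solving time
    return bdg, resolved
```

Replaying four events (bug 1 introduced, bug 4 introduced, 1 blocks 4, 1 resolved) leaves bug 4 open in the BDG while bug 1 sits in the resolved dataset with its arc preserved for a possible reopening.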
We model the actual bug tracking system via a discrete-event system simulation and explore the triage and prioritization decisions in the same environment. The timestamps of the bug reports and their dependency information are taken directly from the ITS. \textcolor{black}{Therefore, the mechanism is more of a past-event regenerator than a simulator. The event regenerator, which we call the Wayback Machine,} is run for all the reports between 2010 and 2020. \textcolor{black}{Figure~\ref{fig:simulation_scheme} illustrates a simplified version of the Wayback Machine together with its inputs and outputs. We sort the events in chronological order. The events include new bug reports, blocking information of the bugs, assignment information, bug reopenings, bug resolution or closing times, and new comments in the system. The event list can be further expanded to include CC'ed people and changes in the severity or priority of the bugs. Moreover, we separately utilize the bugs' and developers' tabular information as the other two model inputs. The modular Wayback Machine comprises three segments, namely the update centre, an optional customized triage or prioritization module, and the report centre.}
\begin{figure*}[!ht]
\centering
\centerline{\includegraphics[width=\linewidth]{imgs/Wayback_Machine.png}}
\caption{\textcolor{black}{The modular Wayback Machine with comprehensive reports as a way to evaluate different prioritization and triage algorithms.}}
\label{fig:simulation_scheme}
\end{figure*}
\textcolor{black}{In the update centre, the actual historical events are run one at a time according to their timestamp.} At each timestamp $t$, we check if bug $b_i^d$ assigned to developer $d$ should be solved. If there exists any bug to be solved and no other bug blocks it, we fix and remove it from the BDG. If we do not have any bug to solve at this timestamp, we may update new blocking, reopening, closing, assigning, or fixing information. We also continue adding new bugs based on their actual report times, which expands the BDG.
\textcolor{black}{In the last module, we track all the changes that the whole ecosystem undergoes. By default, the report centre records the changes on a daily basis; however, the granularity of the recording times can be manually changed by the user. The reports have three main parts: the major changes to the BDG, the detailed updates of the fixed and postponed bugs, and the developers' schedules together with their lists of assigned bugs. These comprehensive metrics are recorded and presented as the output at the end of the testing phase.}
\section{Bug prioritization and triage tasks}
\textcolor{black}{Bug prioritization determines the priority of a bug according to its severity and urgency. Bug triage, although related to bug prioritization, also involves inspecting a bug, understanding its content, prioritizing it, and finally assigning it to a proper developer using a variety of bug features~\cite{Hooimeijer2007,alazzam2020automatic}. In the Wayback Machine, we may use the optional triage or prioritization module and implement a new related algorithm; accordingly, the actual triage/assignment decisions will be substituted by the ones proposed by this module (see Figure~\ref{fig:simulation_scheme}). Hence, we may observe how the BDG evolves if we replace the actual assignment decisions with those of the proposed algorithm. As such, the Wayback Machine provides a practical perspective on the performance of a suggested prioritization/triage model.}
In the triage process, we assume that developers cannot work on more than one bug at the same time. \textcolor{black}{Although this is a strong assumption, it is compatible with previous studies~\cite{Kashiwa2020, jahanshahi2021dabt} and is based on the fact that we are not aware of the exact schedules of the developers. We also presume that when we prioritize a bug over others based on an algorithm, we assign it to the most appropriate developer. Therefore, as we only investigate prioritization accuracy and not assignment accuracy in the prioritization task, we assume all bug assignments are made to the right developer. Nevertheless, in bug triage, the model decides on the assigned developer, and the above assumption only holds for the prioritization task.}
\subsection{Data preprocessing}
After collecting the data and building the database, we take the following steps to prepare the data for the Wayback Machine.
\begin{itemize}
\item We remove duplicate bugs, and whenever a duplicate bug has more information than the original one (e.g., dependency information or general information), we merge its information with the original bug's information. \textcolor{black}{This is similar to developers' practice in the ITS and the study by~\citet{Shirin2020}.}
\item Dependency information of older bugs is kept if and only if it \textcolor{black}{affects} the targeted bugs.
\item ``Enhancement'' reports are eliminated from the database as they do not represent a real defect in the system \textcolor{black}{(see~\citep{Shirin2020,Kashiwa2020}). We identify enhancements according to the bugs' last recorded status.}
\item Some of the bugs were not accessible through REST API as a basic user. Hence, their information is not included.
\item As there are many lingering bugs in the system that remain unresolved, we disregard these cases since bugs with extraordinary fixing times are considered outliers in the system~\citep{jahanshahi2021dabt}.
\end{itemize}
\textcolor{black}{\paragraph{Feasible bug prioritization/triage cases} Not all bugs are feasible for prioritization/assignment. We clean the data step by step and report the results only for the feasible bugs. Feasible bugs should
\begin{itemize}
\item have the resolved status by the end of 2019;
\item be solved by active developers \textendash i.e., developers whose bug fix number is higher than the interquartile range (IQR) of bug fix numbers of all developers;
\item have the exact assignment date (in some cases, the assignment date is not recorded in the history of the bugs, and we exclude those bugs);
\item have acceptable fixing time \textendash i.e., their fixing time should be smaller than $\text{Q3} + (1.5 \times \text{IQR}) $, where $\text{Q3}$ and $\text{IQR}$ are the third quartile and interquartile range of the bug fixing time, respectively.
\end{itemize}
We take the number of active developers as 28, 86, and 124 for EclipseJDT, LibreOffice, and Mozilla, respectively. The cleaning process is similar to that of~\citet{Kashiwa2020} and \citet{jahanshahi2021dabt}.}
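The fixing-time rule above can be sketched as follows (toy data; in the actual cleaning, the quartiles are computed over each project's fixing times):

```python
# Keep a bug only if its fixing time is below Q3 + 1.5 * IQR.
# Quartiles via statistics.quantiles (default "exclusive" method).
import statistics

fixing_times = [1, 2, 2, 3, 3, 4, 5, 6, 7, 8, 40]  # days; 40 is a lingering bug
q1, _, q3 = statistics.quantiles(fixing_times, n=4)
cutoff = q3 + 1.5 * (q3 - q1)
feasible = [t for t in fixing_times if t < cutoff]  # drops the outlier (40)
```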
\subsection{Performance metrics}
\textcolor{black}{We define various metrics to compare different prioritization and triage strategies. These metrics include static metrics (e.g., assignment accuracy) and evolutionary metrics (e.g., the percentage of overdue bugs). Note that evolutionary metrics cannot easily be reported unless a Wayback Machine is used. That is, they relate to the time when a bug is assigned or prioritized, considering either the developers' workloads at the assignment time or the status of other open bugs in the system. To incorporate the impact of a triage or prioritization decision, we need to consider time-related measures as well. The complete set of metrics used in our experiments with various bug prioritization and triage strategies is as follows.}
\begin{itemize}
\item \textcolor{black}{\textbf{The Number of Assigned Bugs} represents the total number of assigned bugs during the testing phase. In practice, developers attempt to keep the number of open bugs in the system as low as possible. Therefore, they assign higher priority to the bugs that are more critical and/or easier and/or faster to solve. The number of assigned bugs consists of the feasible bugs assigned by a specific method during the testing period~\citep{Kashiwa2020,Shirin2020}.}
\item \textcolor{black}{\textbf{(Early, On-time, Late) Prioritization} indicates how many of the prioritized bugs are early, on-time, or late compared to actual assignments. It shows whether a prioritization strategy follows a similar pattern as the actual case.}
\item \textcolor{black}{\textbf{Assigning Time Divergence}, similar to the previous metric, shows the standard deviation of the prioritization times compared to the actual case. A smaller value of this metric is desirable.}
\item \textcolor{black}{\textbf{Mean Fixing Time} illustrates the average fixing time of a bug. As the fixing time of a bug is defined based on the developer to whom it is assigned, this factor shows how a triage algorithm considers fixing time~\citep{Kashiwa2020}.}
\item \textcolor{black}{\textbf{The Number of Assigned Developers} is of importance as it can be useful to see how many developers are selected by a triage algorithm during the testing phase~\citep{Kashiwa2020}.}
\item \textcolor{black}{\textbf{Task Concentration} among developers shows how fair the assignment distribution is among them. Previous studies~\cite{Kashiwa2020, park2011costriage} indicate that some algorithms overspecialize, i.e., they assign all the bugs to a few expert developers. Therefore, a smaller task concentration indicates a better distribution among developers.}
\item \textcolor{black}{\textbf{Assignment Accuracy} is significant as it helps in understanding how well a triage algorithm mimics the actual case. An accurate assignment is defined as assigning bug $b$ from component $c$ to developer $d$, who has previous experience in addressing bugs of type $c$~\citep{mani2019deeptriage,park2011costriage}.}
\item \textcolor{black}{\textbf{Percentage of Overdue Bugs} determines how many bugs cannot be fixed before the next release. This metric can be computed only if we regenerate past events. If we assign more bugs to a developer than they can handle, those bugs are more likely to become overdue. Therefore, proper timing in the assignment is necessary~\citep{Kashiwa2020, jahanshahi2021dabt}.}
\item \textcolor{black}{\textbf{Infeasible Assignment with respect to the BDG} shows the percentage of the assigned bugs that had a blocking bug. These are infeasible assignments and need to be postponed until the parent bug is resolved. This evolutionary metric also requires the Wayback Machine as it relies on the information related to the time when a dependency is found and the fixing time of the blocking bug~\citep{ jahanshahi2021dabt}.}
\end{itemize}
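Two of these metrics can be computed from an assignment log as in the following sketch. The operationalizations here (prior-component experience for accuracy, the busiest developer's share for concentration) are illustrative simplifications of the verbal definitions above:

```python
# Illustrative computation of two triage metrics from an assignment log.
from collections import Counter

def assignment_accuracy(assignments, experience):
    """assignments: list of (developer, component); experience: developer ->
    set of components they have fixed before. An assignment is accurate if
    the developer has prior experience with the bug's component."""
    ok = sum(1 for dev, comp in assignments if comp in experience.get(dev, set()))
    return ok / len(assignments)

def task_concentration(assignments):
    """Share of all assignments going to the single busiest developer
    (lower values indicate a fairer distribution)."""
    loads = Counter(dev for dev, _ in assignments)
    return max(loads.values()) / len(assignments)
```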
\subsection{Bug prioritization strategies}
In practice, triagers may use a combination of factors, such as validity, reproducibility, severity, priority, and even customer pressure to choose an appropriate bug to fix. In some cases, they may also decide based on the blocking effect of a bug. \textcolor{black}{Thus, we define a list of prioritization strategies, including the graph-related (i.e., using features coming from the BDG), severity- and priority-based, and machine learning ones, as follows. We note that any other prioritization strategy can be added to the modular Wayback Machine.}
\begin{enumerate}
\item \textcolor{black}{ \textbf{Maximum sum of degree and depth}: This strategy selects the bug with the highest sum of its degree and depth. We take ``degree'' as the out-degree of a bug. Also, the depth of a bug in a directed graph is the maximum shortest path from the bug to any other bugs in the graph. \citet{Shirin2020} take this as a potential, unbiased factor in bug prioritization.}
\item \textcolor{black}{\textbf{Maximum priority}: This rule-based strategy chooses the bug that has the highest priority among other open bugs. In case of ties, it chooses one high-priority bug arbitrarily. Therefore, we repeat the experiment with this strategy and take the average performance. As we explored the importance of priority in RQ1b, we decide to keep it as an option to examine its similarity to the actual case.}
\item \textbf{Maximum severity}: This strategy chooses bugs with the highest severity first. This approach might be controversial due to the lack of objective assessment of the severity scores; however, we keep this strategy as an alternative approach to the existing ones\textcolor{black}{, as discussed in RQ1b.}
\item \textcolor{black}{\textbf{Cost-oriented strategy}: It computes the fixing time of a bug based on the Latent Dirichlet Allocation (LDA) similar to that of \citet{park2011costriage}. Specifically, we cluster bugs using the LDA algorithm and compute the average bug fixing times per topic/cluster. Accordingly, we prioritize the bugs that have the least estimated fixing time, i.e., cost.}
\item \textcolor{black}{\textbf{Estimated Priority}: We predict the priority using support vector machine (SVM) after converting the textual information of the bug to numeric values using TF-IDF~\citep{kanwal2012bug}. We train our model on the TF-IDF output of bugs' titles and descriptions given their current priority levels. Accordingly, given a new bug report, the model can predict its priority level. The bugs with the highest estimated priority are selected at each timestamp.}
\item \textcolor{black}{\textbf{Cost and Priority Consideration}: We consider both previous strategies. To this end, we first normalize the estimated fixing time $c_i$ and estimated priority $p_i$ of bug $i$ to the range of 1 to 5. Then, we choose the bugs based on the following formula:}
\begin{equation*}
\big(\alpha \cdot \frac{p_i}{\max_i\{p_i\}}\big) + \big( (1-\alpha) \cdot \frac{\nicefrac{1}{c_i}}{\nicefrac{1}{\min_i\{c_i\}}} \big).
\end{equation*}
\textcolor{black}{We set the $\alpha$ level to 0.5 as a control parameter. Therefore, we give the same importance to the priority and fixing cost. The bug with the highest aggregate value will be selected.}
\item \textbf{Random}: This approach is considered a naive baseline and corresponds to selecting the candidate bug randomly. We use this strategy to show how well other strategies perform compared to a random selection\textcolor{black}{, and we do not recommend using such a naive approach in practice, as it cannot address the bug prioritization task.}
\end{enumerate}
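The ``cost and priority consideration'' strategy reduces to a simple scoring rule once the estimates are available. A minimal sketch, assuming precomputed `priority` and `cost` dictionaries (standing in for the SVM and LDA outputs described above) rather than the actual models:

```python
def cost_priority_score(priority, cost, alpha=0.5):
    """Combined score: alpha * p_i/max(p) + (1-alpha) * (1/c_i)/(1/min(c)).
    `priority` and `cost` map bug ids to the estimated priority and
    estimated fixing time (hypothetical precomputed inputs)."""
    p_max = max(priority.values())
    c_min = min(cost.values())
    return {b: alpha * priority[b] / p_max + (1 - alpha) * c_min / cost[b]
            for b in priority}

def pick_next_bug(priority, cost, alpha=0.5):
    """Select the open bug with the highest combined score."""
    scores = cost_priority_score(priority, cost, alpha)
    return max(scores, key=scores.get)
```

With $\alpha = 0.5$, a cheap medium-priority bug can outrank an expensive high-priority one, which is exactly the trade-off the formula encodes.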
\subsection{Bug triage strategies}
\textcolor{black}{While prioritization techniques explore the order in which the bugs should be addressed, in the triage process we also consider the assignment of the bugs to proper developers in a timely manner. We consider different well-established bug triage algorithms, together with the actual case. However, as the Wayback Machine is a modular past-event regenerator, any other triage algorithm can be applied in the same context and be compared with these baselines. The source code and all datasets are available on our GitHub\footnote{\url{https://github.com/HadiJahanshahi/WaybackMachine}}.}
\begin{enumerate}
\item \textcolor{black}{\textbf{CBR}: Content-Based Recommendation (CBR) aims to assign a bug to the most appropriate developer through analyzing its content, i.e., its summary and description~\citep{anvik2006should}. This method converts bug titles and descriptions to numeric vectors and uses assigned developers as the labels. Previous studies show that SVM has the best performance for this classification task, and we use the same approach here~\citep{Lin2009, anvik2006should}.}
\item \textcolor{black}{\textbf{DeepTriage}: DeepTriage builds on the observation that a bag-of-words representation such as TF-IDF is unable to capture the semantics of the text and loses the order of the words~\citep{mani2019deeptriage}. Therefore, using a deep learning algorithm together with a word embedding, e.g., word2vec or paragraph vectors, can alleviate the issue. Accordingly, we re-implement the algorithm using the Wayback Machine and report its performance through our novel evolutionary metrics.}
\item \textcolor{black}{\textbf{CosTriage}: In the cost-aware recommendation system, not only the accuracy of the assignment but also its fixing cost is of importance~\citep{park2011costriage}. Accordingly, it combines CBR with a collaborative filtering recommender (CF) and builds developer profiles to estimate the approximate fixing time of each bug type. Bug types are determined by the LDA using summary and description. The trade-off between accuracy and fixing time can be formulated as
\begin{equation*}
\big(\alpha \frac{s_i^d}{\max_d\{s_i^d\}}\big) + \big( (1-\alpha) \frac{\nicefrac{1}{c_i^d}}{\nicefrac{1}{\min_d\{c_i^d\}}} \big),
\end{equation*}
where $s_i^d$ is the suitability of bug $i$ when assigned to developer $d$, $c_i^d$ is the estimated fixing time coming from the CF for bug $i$ when assigned to developer $d$, and $\alpha$ is a control parameter~\citep{park2011costriage}. The suitability is estimated by the SVM similar to CBR. In this study, we set the value of 0.5 for $\alpha$; however, the Wayback Machine can dynamically change it.}
\item \textcolor{black}{\textbf{Random}: This naive strategy randomly assigns a candidate bug to a developer. While using this strategy, we repeat the experiment 5 times and report the average performance. We acknowledge that a naive rule-based approach cannot address the bug triage task; we use it only as a baseline and not as a proposed way to address the problem.}
\end{enumerate}
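To make the content-based idea behind CBR concrete, the toy sketch below assigns a bug to the developer whose past reports look most similar to the new one. Plain bag-of-words cosine similarity stands in for the TF-IDF + SVM pipeline of the actual implementation, and the history format is hypothetical.

```python
import math
from collections import Counter, defaultdict

def _cosine(a, b):
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(c * b.get(t, 0) for t, c in a.items())
    na = math.sqrt(sum(c * c for c in a.values()))
    nb = math.sqrt(sum(c * c for c in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cbr_assign(new_text, history):
    """Assign the new report to the developer with the most similar
    past reports. `history` is a list of (developer, report_text)
    pairs; real CBR trains an SVM on TF-IDF vectors instead."""
    profiles = defaultdict(Counter)
    for dev, text in history:
        profiles[dev].update(text.lower().split())
    query = Counter(new_text.lower().split())
    return max(profiles, key=lambda d: _cosine(query, profiles[d]))
```

CosTriage then re-weights such suitability scores against per-developer cost estimates using the same trade-off formula shown for CBR above.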
\section{\textcolor{black}{Results}}\label{sec:findings}
\textcolor{black}{In this section, we evaluate the proposed Wayback Machine in two ways. First, we investigate the ability of the simulator to provide practical information related to past prioritization and triage decisions. This includes exploring the number of bugs and dependencies, together with the depth, degree, severity, and priority of the open bugs compared to the fixed bugs over time. Second, we assess the ability of the Wayback Machine to incorporate prioritization and triage algorithms. We report the performance of those algorithms considering the evolutionary nature of the ITS.}
\subsection{Evaluating the history of the ITS}\label{sec:RQ1}
\textcolor{black}{In this subsection, we present the results of our empirical study that answer two main research questions. More specifically, we analyze the evolution of the bugs in the ITS and explore the effect of different bug prioritization and triage strategies. We characterize bug dependency and its impact on lingering bugs during the evolution of three open-source software systems. We further investigate the actual evolutionary performance of well-established bug prioritization and triage strategies using the Wayback Machine.}
\begin{RQquestion}
\textbf{RQ1a: How do open-source software systems evolve in terms of the number of bug reports, bug dependencies, and lingering bugs?}
\end{RQquestion}
\textcolor{black}{The line plot in Figure~\ref{fig:number_of_bugs_and_arcs} shows the actual number of bugs, and the area plot shows the number of arcs (i.e., bug dependencies) in each project during the last decade}. We extract dependencies from the bug's history and use the exact date when the dependency is determined. We observe significant differences between the projects. \textcolor{black}{LibreOffice (Figure~\ref{fig:LibOffice_num}) has the lowest number of arcs among these projects. In this graph, we exclude meta bugs -- i.e., tracking bugs used to associate reports with useful data. We note that LibreOffice has very few reported dependencies. In fact, our interviews with LibreOffice developers confirmed this observation: they mentioned that dependencies are not reported as frequently in LibreOffice as in other projects. Therefore, bug dependency, in the case of LibreOffice, becomes a less important factor in triage and prioritization decisions. Developers in Mozilla (Figure~\ref{fig:Mozilla_num}) record bug dependencies throughout the project lifespan. Therefore, in the following research question, we investigate whether these dependencies influence the bug prioritization/triage process.}
In the last period, the ratios of open bugs to the number of bug reports are $15\%$, $20\%$, and $28\%$ for Mozilla, LibreOffice, and Eclipse, respectively, which suggests a significantly higher rate of lingering bugs in the Eclipse project. Although Eclipse has only 16,342 bug reports, it contains 4,643 unresolved reports at the end of the period. This observation indicates that the number of arcs is not the only factor in lingering bugs. That is, there might be a shortage of developers, or the bugs in the Eclipse project might require more time to be resolved, or there might be a higher number of fastidious contributors reporting bugs that are less important and can be postponed.
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_num}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_num}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/n_arcs_bugs_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_num}
\end{subfigure}
~\caption{The number of nodes and arcs in bug dependency graph for Mozilla, LibreOffice, and Eclipse projects \textit{($x$-axis corresponds to the year and $y$-axis corresponds to the monthly bug and dependency counts; $y$-axis range differs for each project.)}.}
\label{fig:number_of_bugs_and_arcs}
\end{figure*}
Figure~\ref{fig:depth_degree} shows the degree and depth evolution of all three projects. \textcolor{black}{In the atypical case of LibreOffice, we observe that after an initial spike, the depth and degree of the bugs become stable and approach the value of 0.01 after 2015. The average depth and degree are also much smaller in LibreOffice, as shown in Figure~\ref{fig:LibOffice_depth_degree}. After 2017, developers in LibreOffice introduced a large number of meta bugs; however, we ignore these bugs as they are not real blocking bugs and rather act as a clustering mechanism to group similar bugs. On the other hand, the general trend of the degree and depth of the bugs in the Mozilla project is ascending until 2016 and descending afterward, whereas those for the Eclipse project remain almost at the same level with some seasonal fluctuation. Therefore, we conclude that, in terms of graph complexity, each project has its own characteristics that cannot be generalized to other cases.}
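The two graph features tracked here can be computed directly on the BDG's adjacency structure. A small sketch, assuming `adj[b]` lists the bugs that bug `b` blocks (its out-neighbours):

```python
from collections import deque

def degree_and_depth(adj, bug):
    """Out-degree and depth of `bug` in the BDG. Depth is the length
    of the longest shortest path from `bug` to any reachable bug,
    found with a breadth-first search (0 if the bug blocks nothing)."""
    degree = len(adj.get(bug, []))
    dist = {bug: 0}
    queue = deque([bug])
    while queue:
        u = queue.popleft()
        for v in adj.get(u, []):
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return degree, max(dist.values())
```

Averaging these two values over all open bugs each month yields the curves plotted in Figure~\ref{fig:depth_degree}.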
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_Mozilla.pdf}
\caption{Mozilla} \label{fig:Mozilla_depth_degree}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_LibreOffice.pdf}
\caption{LibreOffice} \label{fig:LibOffice_depth_degree}
\end{subfigure}
\begin{subfigure}[t]{.32\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/depth_degree_EclipseJDT.pdf}
\caption{Eclipse} \label{fig:Eclipse_depth_degree}
\end{subfigure}
~\caption{The monthly evolution of mean depth and degree of BDG for Mozilla, LibreOffice, and Eclipse projects \textit{($x$-axis corresponds to the year and $y$-axis corresponds to the mean depth and degree; $y$-axis range differs for each project.)}.}
\label{fig:depth_degree}
\end{figure*}
\begin{RQquestion}
\textbf{RQ1b: How do the characteristics of the resolved bugs change over time?}
\end{RQquestion}
To address this research question, we compare the characteristics of the resolved and open bugs to infer the notion behind the actual bug prioritization process. We are mainly interested in graph-related indices (e.g., degree and depth of the bugs) \textcolor{black}{together with severity and priority}. While comparing the actual decisions over time, we explore whether bug triagers consider dependency information, priority, and severity in bug prioritization. \textcolor{black}{Our main focus is the training phase \textendash from 2018 to 2020. We assume that a triaged/fixed bug has higher priority than deferred/unresolved bugs.}
Figure~\ref{fig:degree_of_solv} juxtaposes the degree and depth of the bugs that are solved with those of postponed bugs --i.e., those that remained open. Such a comparison provides a clear picture \textcolor{black}{of whether bug triagers prioritize bugs based on their dependency.} We show the average degree of the fixed bugs as an area plot and the average degree of the open bugs as a line graph. If we take the area plot as an upper bound of the line plot, we may conclude that, on average, the triagers prioritize the bugs with a higher degree. In Figures~\ref{fig:Mozilla_deg_solv} and \ref{fig:Eclipse_deg_solv}, \textcolor{black}{the grey region almost always contains the black line, meaning that, on average, the degree of solved bugs is greater than that of the postponed bugs. We use a one-tailed paired t-test with a significance level of 0.05 to check the validity of our observation. The null hypothesis is that the true degree/depth mean difference between fixed and unfixed bugs is equal to zero. For both projects, with a p-value of $4.3 \times 10^{-10}$, we reject the null hypothesis.} Hence, triagers indirectly consider the dependency while addressing open bugs. In the special case of LibreOffice, where the BDG is very sparse (Figure~\ref{fig:LibOffice_deg_solv}), \textcolor{black}{we do not observe such behavior. The area plot is almost always zero, meaning that the blocking effect is not considered to be an important factor here.}
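The significance check above can be reproduced with a standard paired t statistic over the monthly means. A self-contained sketch (the sample values in the test are hypothetical; with SciPy available, `scipy.stats.ttest_rel` returns the p-value directly):

```python
import math
from statistics import mean, stdev

def paired_t(fixed, open_):
    """One-tailed paired t statistic for H1: mean(fixed) > mean(open),
    with the monthly mean degrees/depths paired by month. Returns
    (t, degrees of freedom); reject H0 at the 0.05 level when t
    exceeds the one-tailed critical value for the given df."""
    diffs = [f - o for f, o in zip(fixed, open_)]
    t = mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
    return t, len(diffs) - 1
```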
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_Mozilla.pdf}
\caption{Mozilla (degree)} \label{fig:Mozilla_deg_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_LibreOffice.pdf}
\caption{LibreOffice (degree)} \label{fig:LibOffice_deg_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/degree_vs_solved_EclipseJDT.pdf}
\caption{Eclipse (degree)} \label{fig:Eclipse_deg_solv}
\end{subfigure} \\
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_Mozilla.pdf}
\caption{Mozilla (depth)} \label{fig:Mozilla_depth_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_LibreOffice.pdf}
\caption{LibreOffice (depth)} \label{fig:LibOffice_depth_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ2/depth_vs_solved_EclipseJDT.pdf}
\caption{Eclipse (depth)} \label{fig:Eclipse_depth_solv}
\end{subfigure}
~\caption{The comparison of the monthly depth and degree of the bugs in BDG and fixed bugs \textit{(the area plot shows the degree/depth of fixed bugs, whereas blue lines indicate the degree/depth of remaining bugs in the graph; $y$-axis range differs for each project.)}.}
\label{fig:degree_of_solv}
\end{figure*}
Regarding the average depth of fixed and open bugs, in the Mozilla and Eclipse projects, the depth of the open bugs is mostly smaller than that of fixed bugs --i.e., the black line lies within the area under the grey curve. \textcolor{black}{We also observe similar behavior for the LibreOffice project, as explained for its degree, and our conclusion remains identical: blocking bugs become important if and only if the blocking information is consistently recorded and the BDG is not sparse. We do not see any direct relationship with lingering bugs in this case. We find that, in automating the bug triage and bug prioritization process, researchers consider dependency together with other bug attributes; prioritization based only on bug dependency cannot be generalized~\cite{Shirin2020}.}
\textcolor{black}{While the subjectivity of priority and severity can be of concern, the question of whether developers consider these subjective features in their prioritization and triage process can be answered using our proposed Wayback Machine. Specifically, we explore the evolution of severity and priority in the ITS by comparing the mean severity and priority of the fixed bugs with those of the open bugs. Figure~\ref{fig:prio_sever_of_solv} shows the average priority and severity of the fixed bugs as the grey area and those of the open bugs as the black line. First, we observe no significant change in the priority or severity level of the open bugs in all three projects. At the same time, we find that the average priority and severity of the fixed bugs are almost always higher than those of the open ones. Accordingly, we note that although these features are subjective, they are still used in practice in the triage process. On the other hand, we see that in Mozilla, priority seems to be a more significant factor than severity, whereas in the other projects the reverse may be true. Referring to Table~\ref{tab:bug_info}, we note the many missing values for the priority level in Mozilla, which we treat as the lowest level. Consequently, many of Mozilla's open bugs do not have a priority level, and the average priority level of the open bugs is close to zero. For the other two projects, however, the priority level is around three, i.e., the default value.}
\begin{figure*}[!ht]
\centering
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/priority_vs_solved_Mozilla.pdf}
\caption{Mozilla (priority)} \label{fig:Mozilla_prio_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/priority_vs_solved_LibreOffice.pdf}
\caption{LibreOffice (priority)} \label{fig:LibOffice_prio_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/priority_vs_solved_EclipseJDT.pdf}
\caption{Eclipse (priority)} \label{fig:Eclipse_prio_solv}
\end{subfigure} \\
\medskip
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/severity_vs_solved_Mozilla.pdf}
\caption{Mozilla (severity)} \label{fig:Mozilla_sever_solv}
\end{subfigure}\quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/severity_vs_solved_LibreOffice.pdf}
\caption{LibreOffice (severity)} \label{fig:LibOffice_sever_solv}
\end{subfigure} \quad
\begin{subfigure}[t]{.31\textwidth}
\centering\includegraphics[width=\textwidth]{imgs/RQ1/severity_vs_solved_EclipseJDT.pdf}
\caption{Eclipse (severity)} \label{fig:Eclipse_sever_solv}
\end{subfigure}
~\caption{The comparison of the monthly priority and severity of the bugs in BDG and fixed bugs \textit{(the area plot shows the priority/severity of fixed bugs, whereas blue lines indicate the priority/severity of remaining bugs in the graph; $y$-axis range differs for each project.)}.}
\label{fig:prio_sever_of_solv}
\end{figure*}
\textcolor{black}{We find degree, depth, priority, and severity as important factors in the triage process; however, their significance may vary from one project to another.} To further analyze the importance of the BDG in the prioritization process, we simulate the triagers' tasks in the subsequent research questions.
\subsection{Evaluating the bug prioritization and triage algorithms} \label{sec:results}
\begin{RQquestion}
\textbf{\textcolor{black}{RQ2a: How do different bug prioritization strategies perform in terms of evolutionary metrics?}}
\end{RQquestion}
\textcolor{black}{In this research question, we investigate the prioritization module of the Wayback Machine. This module can be utilized by researchers to apply their proposed bug prioritization techniques. Here, we implement six different prioritization methods together with random prioritization and the actual decisions of the developers. Any other method can be incorporated and compared with these scenarios. The Wayback Machine generates different metrics, three of which are shown here: the number of assigned bugs; the number of early, on-time, and late prioritizations; and the divergence of the assigning times from the actual case. Note that the second and third metrics, which we call evolutionary, can best be reported by an event regenerator that rebuilds the exact environment of the prioritization time.}
\textcolor{black}{We consider the assignment time of a bug as its relative importance. Specifically, we record how many times proposed prioritization strategies can assign a bug on the same day of its actual assignment. Whenever a feasible bug is assigned, we run the model to see whether it is able to prioritize the same bug over other open bugs. The same-day assignment is called ``on-time'', and the rest are defined as ``early'' or ``late''.}
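This early/on-time/late labeling is just a comparison of simulated and actual assignment days. A minimal sketch with hypothetical day values:

```python
def classify_timing(simulated_day, actual_day):
    """'on-time' if the strategy assigns the bug on its actual
    assignment day; otherwise 'early' or 'late'."""
    if simulated_day == actual_day:
        return 'on-time'
    return 'early' if simulated_day < actual_day else 'late'

def timing_summary(pairs):
    """Count (early, on-time, late) over (simulated, actual) day
    pairs -- the triple reported per prioritization strategy in
    Table~\\ref{tab:prioritization}."""
    labels = [classify_timing(s, a) for s, a in pairs]
    return (labels.count('early'), labels.count('on-time'),
            labels.count('late'))
```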
\textcolor{black}{We explore the performance of different strategies on bug prioritization in the long term. The practical aim of this experiment is to see how the Wayback Machine can facilitate reporting bug prioritization performance in the regenerated, actual environment. We also aim to contrast the performance of different policies against the actual bug prioritization. Here, we assume that the time a bug is assigned is its prioritization time. Therefore, we examine whether a bug prioritized by a specific method has a similar assigning/prioritizing time to the actual prioritization; the assignment is thus considered a proxy for prioritization. We repeat the process for all strategies three times and report the average performance values to avoid any bias due to randomization. Table~\ref{tab:prioritization} shows the prioritization performance of the different methods for each project. ``Estimated priority'' and ``cost \& priority consideration'' have the most same-day assignments, i.e., the highest similarity to the actual case. Estimating priority and fixing cost from the textual information of a bug thus seems the most valid strategy to mimic the real cases. Interestingly, ``estimated priority'' yields far more on-time assignments than the ``maximum priority'' method. As the ML algorithm predicts the priority of a bug, it considers its relative priority given the textual information. Therefore, as the priority level is not determined for many bugs (see Table~\ref{tab:bug_info}), the model can estimate their priority levels based on the known priorities. Moreover, the combination of estimated priority and fixing cost considers both the important and the fast-to-resolve bugs, allowing the strategy to better predict the priority of a bug. These results show the capability of the Wayback Machine to objectively evaluate different prioritization strategies.}
\begin{table}[!ht]
\caption{Summary results for different bug prioritization strategies}
\label{tab:prioritization}
\resizebox{\linewidth}{!}{
\begin{tabular}{cl>{\columncolor[HTML]{EFEFEF}}r rrr|rrr|r}
\toprule
& & \textbf{Actual} & \multicolumn{3}{c|}{\textbf{Rule-based}} & \multicolumn{3}{c|}{\textbf{Machine Learning}} & \textbf{Random} \\ \cline{4-9}
& & & \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\ \{depth + degree\}\end{tabular}}
& \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\Priority\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}} Maximum\\ Severity\end{tabular}} & \textbf{Cost-oriented} & \textbf{\begin{tabular}[c]{@{}c@{}} Estimated\\ Priority\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Cost \& Priority\\ Consideration\end{tabular}} & \\
\midrule
\multirow{3}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{EclipseJDT}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Assigned Bugs\end{tabular}} & 1,251 & 1,251 & 1,251 & 1,251 & 1,251 & 1,251 & 1,251 & 1,251 \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & (0, 1251, 0) & (810, 104, 337) & (972, 1, 278) & (821, 93, 337) & (970, 2, 279) & (349, \textbf{413}, 489) & (367, 358, 526) & (897, 21, 333) \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & 0 & 278 & 270 & 241 & 272 & 251 & 243 & 267 \\
\hline
\multirow{3}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{LibreOffice}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Assigned Bugs\end{tabular}} & 1,570 & 1,570 & 1,570 & 1,570 & 1,570 & 1,570 & 1,570 & 1,570 \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & (0, 1570, 0) & (1188, 4, 378) & (1009, 75, 486) & (1022, 83, 465) & (1190, 1, 379) & (377, 363, 830) & (363, \textbf{370}, 837) & (1100, 331, 759) \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & 0 & 185 & 185 & 154 & 186 & 159 & 156 & 177 \\
\hline
\multirow{5}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{Mozilla}}}}} & \textbf{\begin{tabular}[c]{@{}l@{}}The number of \\ Assigned Bugs\end{tabular}} & 3,697 & 3,697 & 3,697 & 3,697 & 3,697 & 3,697 & 3,697 & 3,697 \\
& \textbf{\begin{tabular}[c]{@{}l@{}}(Early, On-time, Late) \\ Prioritization\end{tabular}} & (0, 3697, 0) & (2661, 319, 717) & (690, 764, 2243) & (3064, 59, 574) & (3162, 10, 525) & (761, 820, 2116) & (776, \textbf{861}, 2060) & (2845, 78, 774) \\
& \textbf{\begin{tabular}[c]{@{}l@{}}Assigning Time\\ Divergence\end{tabular}} & 0 & 126 & 162 & 122 & 123 & 146 & 143 & 135 \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{RQquestion}
\textbf{\textcolor{black}{RQ2b: How do different bug triage strategies perform in terms of evolutionary metrics?}}
\end{RQquestion}
\textcolor{black}{Using the triage module of the Wayback Machine, we implement three bug triage approaches, namely Content-Based Recommendation, CosTriage, and DeepTriage. We compare them against the actual and random cases. We report six different metrics for this process to examine the evolutionary performance of these well-established models. We aim to investigate their average fixing time, task concentration on developers, accuracy in assigning bugs to proper developers, percentage of overdue bugs, and infeasibility of assignments due to the blocking effect.}
\textcolor{black}{The triage process is similar to that of \citet{Kashiwa2020} and \citet{jahanshahi2021dabt}. We triage once a day and assign open bugs to available developers according to the triage algorithm. As CBR, CosTriage, and DeepTriage do not consider the developers' available schedules, the number of assigned bugs may exceed a developer's total capacity. Therefore, the Wayback Machine is uniquely suitable for showing the task concentration on developers, since it reports both the assignment accuracy and the number of tasks assigned to each developer. In the original studies, assignment accuracy was the main concern, as in many traditional bug triage papers. The Wayback Machine, however, reveals the possibility of overdue bugs caused by overwhelming experienced developers with a torrent of assigned bugs.}
\textcolor{black}{Table~\ref{tab:bugtriage} shows the evaluation of the different triage strategies based on the evolutionary metrics. To have a fair comparison, we estimate the bug fixing time for all methods using the LDA method, as suggested by \citet{Kashiwa2020} and \citet{park2011costriage}. CosTriage, which considers the fixing time in its formulation, expectedly achieves a better average fixing time than the other approaches. There is no significant difference in the number of assigned developers among the three algorithms; however, in the case of LibreOffice and Mozilla, they assign bugs to a smaller number of developers, i.e., they overspecialize. Accordingly, they concentrate many tasks on a few top developers. The accuracy of an assignment is computed as assigning a bug to a developer who has previous experience in the same component~\cite{park2011costriage}. Using an LSTM network with an attention mechanism enhances the prediction of proper developers. Since these methods concentrate tasks on a smaller number of developers, a high percentage of overdue bugs is expected. Hence, \citet{Kashiwa2020}'s work on release-aware bug triaging may address this issue. Finally, the Wayback Machine reports the infeasible assignment cases due to the blocking effect (see Table~\ref{tab:bugtriage}). This information is beneficial to practitioners since, by definition, blocked bugs should be fixed after the blocking bugs are fixed~\citep{jahanshahi2021dabt}.}
\begin{table}[!ht]
\centering
\renewcommand{\arraystretch}{1.3}
\caption{\textcolor{black}{Summary results for different bug triage algorithms}}
\label{tab:bugtriage}
\resizebox{\linewidth}{!}{
\begin{tabular}{cl >{\columncolor[HTML]{EFEFEF}}r rrr|r}
\toprule
& \textbf{} & \textbf{Actual} & \textbf{CBR} & \textbf{CosTriage} & \textbf{DeepTriage} & \textbf{Random} \\
\midrule
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{EclipseJDT}}}}} & \textbf{Mean Fixing Time} & \textbf{6.0} & 7.9 & 7.5 & 7.7 & 8.3 \\
& \textbf{The Number of Assigned Developers} & 15 & 19 & 19 & 19 & 21 \\
& \textbf{Task Concentration $(\mu \pm\sigma)$} & $83.4\pm93.7$ & $65.8\pm112.0$ & $65.8\pm108.5$ & $72.1\pm102.2$ & $57.5\pm88.3$ \\
& \textbf{Assignment Accuracy} & 97.7 & 95.5 & 94.0 & \textbf{96.7} & 38.1 \\
& \textbf{Percentage of Overdue Bugs} & \textbf{66.0} & 82.2 & 79.6 & 78.3 & 89.3 \\
& \textbf{Infeasible Assignment w.r.t. the BDG} & \textbf{5.4} & 6.0 & 5.8 & 6.3 & 5.9 \\
\hline
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{LibreOffice}}}}} & \textbf{Mean Fixing Time} & 3.3 & 2.1 & \textbf{1.8} & 1.9 & 2.3 \\
& \textbf{The Number of Assigned Developers} & 57 & 22 & 21 & 23 & 23 \\
& \textbf{Task Concentration $(\mu\pm\sigma)$} & $27.5\pm68.9$ & $71.3\pm224.5$ & $74.7\pm253.2$ & $70.7\pm218.4$ & $66.1\pm173.7$\\
& \textbf{Assignment Accuracy} & 91.7 & 99.1 & 99.3 & \textbf{99.4} & 43.3 \\
& \textbf{Percentage of Overdue Bugs} & \textbf{35.9} & 77.1 & 80.8 & 76.2 & 81.3\\
& \textbf{Infeasible Assignment w.r.t. the BDG} & \textbf{0.1} & 0.1 & 0.1 & 0.1 & 0.2 \\
\hline
\multirow{6}{*}{{\rotatebox[origin=c]{90}{\scshape{\textbf{Mozilla}}}}} & \textbf{Mean Fixing Time} & 7.0 & 7.2 & \textbf{6.6} & 7.1 & 8.6 \\
& \textbf{The Number of Assigned Developers} & 137 & 74 & 85 & 80 & 115 \\
& \textbf{Task Concentration $(\mu\pm\sigma)$} & $27.0\pm49.5$ & $50.1\pm204.0$ & $43.6\pm187.0$ & $41.7\pm192.3$ & $31.5\pm42.3$ \\
& \textbf{Assignment Accuracy} & 72.7 & 60.2 & 59.0 & \textbf{62.1} & 15.5 \\
& \textbf{Percentage of Overdue Bugs} & \textbf{69.8} & 80.1 & 77.6 & 78.5 & 82.6 \\
& \textbf{Infeasible Assignment w.r.t. the BDG} & 9.4 & 9.0 & \textbf{8.8} & 9.8 & 11.2 \\
\bottomrule
\end{tabular}
}
\end{table}
\textcolor{black}{Without considering the evolutionary nature of the reported bugs in the ITS, reporting the accuracy of a bug triage model might be misleading. Therefore, the Wayback Machine provides a tool for researchers to explore the other impacts their proposed models may have on the whole ecosystem.}
\section{Threats to validity} \label{sec:threats}
The threats to the validity of our study are as follows.
\paragraph{Construct Validity}
\textcolor{black}{We report the model performance based on the train-test split, where the train set consists of the data from 2010 to 2018, and the test set period is taken as 2018 and 2019. However, the ITS is evolving, and some definitions may change when we split the data in this way. For instance, some active developers in 2012 may become inactive in 2019 and leave the system. Moreover, introducing new features to the software produces new bugs that do not exist in the history. We disregard developers who have been inactive for the past two years or whose activities have been reduced significantly. Additionally, a rolling train-test split strategy could alleviate this issue. However, we rely on the common practice and definitions from previous studies, and we take the same approach for all strategies to make them comparable~\cite{Kashiwa2020}. Furthermore, we consider the changes in all attributes during the life-cycle of a bug. For instance, whenever a dependency is found, we add it to the bug's attributes rather than using only the last status of the bug. Nevertheless, changes in the severity level are not directly extractable from the bug history. Therefore, we leave exploring how changes in the perception of severity impact bug prioritization and triage outcomes to future research.}
\textcolor{black}{In this study, we regenerated past events in the ITS of three projects. We further applied different prioritization and triage algorithms used in the literature. For each, we defined some assumptions. However, we acknowledge that those assumptions might be strong and that bug prioritization/triage, as a multifaceted problem, cannot be handled by simple, naive approaches; nevertheless, we used the same preprocessing steps and assumptions applied in the literature. Moreover, we evaluated different strategies in terms of evolutionary metrics, e.g., the number of overdue bugs, together with traditional metrics, e.g., the assignment accuracy. That is, we make sure to include a complete list of metrics that other researchers can use when reporting their models' performance. Nevertheless, the Wayback Machine can easily incorporate more metrics based on the study objectives. Regarding the evolution in the severity levels of a bug, we need to mine the textual information in each bug's discussions. Changes in the severity level are not directly recorded in the bug's history. We plan to extend our work by incorporating the dynamics of severity levels during a bug's lifespan.}
\paragraph{External Validity}
In our simulation, we rely on the data extracted from three different open-source projects, with and without some minor modifications. Moreover, we choose well-established projects with different natures \textendash i.e., Firefox, Eclipse, and LibreOffice \textendash over the past decade. \textcolor{black}{Nonetheless, replication of our study using a different ITS, e.g., industrial data or proprietary products, would prove fruitful.} We also consider the evolution of the bug reports instead of static snapshots of the system. We simplify our models by discarding some attributes, e.g., the number of CC'ed developers or the contents of comments. We plan to expand the study by including different attributes of bug reports and creating a more comprehensive evolutionary machine. We used the actual bug prioritization obtained from the ITS as the baseline, and since, to the best of our knowledge, no other study has considered the simulation of bug prioritization or triage, we incorporated other works according to our defined mechanism.
\textcolor{black}{As some strategies in our experiment have randomness in their process, i.e., they randomly choose a bug in the case of ties, we reiterate all experiments three times and report the results based on their average performance. We expect this iterative process to address the issue of random heterogeneity of subjects.}
\paragraph{Internal Validity}
The BDG is extracted from three Bugzilla ITS using the REST API. However, some bug reports might be deleted from the repository or have limited access to normal users. Our analysis applies to the bugs that are open to the public. \textcolor{black}{Furthermore, we estimate fixing time using the formulation proposed by~\citet{park2011costriage}, that is, $\textit{fixing date} - \textit{assignment date} + 1$. Nevertheless, we acknowledge that the exact solving time for a bug cannot be determined beforehand.} Therefore, all reported fixing times in the simulation part are estimated times to solve bugs. This assumption does not affect the final comparison of strategies, since it remains identical across all of them.
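As a concrete reading of this estimate, here is a minimal sketch of the date arithmetic (the function name and example dates are illustrative, not taken from the paper's artifact):

```python
from datetime import date

def estimated_fixing_time(assignment_date: date, fixing_date: date) -> int:
    """Estimated fixing time in days, following the formulation
    attributed to Park et al.: fixing date - assignment date + 1."""
    return (fixing_date - assignment_date).days + 1

# A bug assigned and resolved on the same day counts as one day of work,
# so the estimate never collapses to zero.
print(estimated_fixing_time(date(2019, 3, 4), date(2019, 3, 4)))   # -> 1
print(estimated_fixing_time(date(2019, 3, 4), date(2019, 3, 11)))  # -> 8
```

Because the same estimate is applied to every strategy, any bias it introduces cancels out in the comparison.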
\section{Related work} \label{sec:background}
Bug prioritization \textcolor{black}{and triage are} vital in software systems as they affect the maintenance budget of software, scheduled releases and enhancements, and even the image of a brand in the eyes of end-users. The developers typically use manual examination and intuitive judgment in the process of bug triage. \citet{valdivia2016} reports that there is no specific bug prioritization strategy on which developers agree during the bug fixing process.
Bug triaging involves different processes such as designating an appropriate developer with relevant knowledge to resolve a bug, analyzing the time to fix a bug, specifying which bug needs to be solved immediately and which one does not, and finding duplicate bug reports~\citep{uddin2017}. Therefore, manual implementation of such an arduous process requires considerable time and resources in large and open-source software systems, making this task error-prone. A considerable amount of research aims to alleviate this issue through the automation of the entire triaging process. For instance, researchers approach the problem of duplicate bug detection using text retrieval techniques or more complex learning-based methods, including additional bug information~\citep{Chaparro2019, hindle2016, EBRAHIMI2019, hindle2019}. On the other hand, several other studies focused on automatic or semi-automatic bug triage models to either select the bug which should be solved next or choose an appropriate developer to solve it~\citep{jahanshahi2021dabt, Umer2018,Zhang2017,guo2020}.
In terms of bug triaging, different machine learning approaches, such as classification, integer programming, information retrieval, and reinforcement learning, were adopted. \textcolor{black}{\citet{park2011costriage}, referring to the over-specialization of content-based recommendation (CBR), considered both accuracy and fixing cost in their formulation. They combined CBR with a collaborative filtering recommender (CBCF). They used the Latent Dirichlet Allocation (LDA) approach to enhance the quality of the CBCF method.} \citet{Yang2014} suggested a method for semi-automatic bug triage and severity prediction. They utilized topic modeling, e.g., LDA, to determine the topic to which an arriving bug belongs. Then, they extracted a list of candidate assignees based on the selected topic and used bug attributes to rank appropriate developers. Similarly, \citet{Xia2017} proposed an extensible topic model based on the LDA approach, the multi-feature topic model (MTM), which computes the affinity of a developer to a new bug report using the history of the bugs that the developer has ever fixed. \textcolor{black}{\citet{Kashiwa2020} used an integer programming (IP) formulation to address overdue bugs. They also improved on previous works by setting a limit on developers' capacity to solve bugs simultaneously. \citet{jahanshahi2021dabt} used our proposed Wayback Machine and improved \citet{Kashiwa2020}'s work by adding a constraint on bug dependency. They further reduced the fixing time by changing the IP objective function and embedding the fixing cost there. Our contribution to the literature includes a past-event regenerator that facilitates performance reporting for triage models, the incorporation of several important methods from the literature along with evolutionary performance metrics, and a comparison of the results with the actual sequence of historic decisions.}
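To give a feel for the capacity limits used in these IP-based triage formulations, the following toy sketch assigns each bug greedily to its best-scoring developer with remaining slack. It is not the actual integer program from the cited works; the developer names, affinity scores, and capacities are invented for illustration:

```python
def assign_with_capacity(bugs, suitability, capacity):
    """Greedy stand-in for capacity-constrained triage: each bug goes to
    the highest-affinity developer who is not yet at capacity."""
    load = {dev: 0 for dev in suitability}
    assignment = {}
    for bug in bugs:
        candidates = [d for d in suitability if load[d] < capacity[d]]
        # Assumes total capacity covers all bugs; a real model would
        # defer or queue bugs once every developer is saturated.
        best = max(candidates, key=lambda d: suitability[d][bug])
        assignment[bug] = best
        load[best] += 1
    return assignment

suitability = {"alice": {"b1": 0.9, "b2": 0.8, "b3": 0.7},
               "bob":   {"b1": 0.2, "b2": 0.6, "b3": 0.5}}
capacity = {"alice": 1, "bob": 2}
print(assign_with_capacity(["b1", "b2", "b3"], suitability, capacity))
# -> {'b1': 'alice', 'b2': 'bob', 'b3': 'bob'}
```

Without the capacity limit, every bug would go to alice; the limit spreads work across developers at some cost in raw affinity, which is exactly the accuracy-versus-workload trade-off such formulations target.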
Regarding bug prioritization, \citet{Umer2018} studied the effect of emotion analysis of the summary attribute of bug reports on bug prioritization. Specifically, they computed the emotion-value of each bug report and assigned each a priority level of P1 to P5. Moreover, they reported a high correlation ($r=0.405$) between emotion and the priority of bug reports. \citet{guo2020} utilized natural language processing with a Word2vec representation of bug summaries and implemented a convolutional neural network (CNN). \citet{Shirin2020} pointed to a different concern for bug prioritization, noting that bug priority and severity can be both subjective and misleading. They focused on the mutual impact of bugs by using a dependency graph. Although a few other studies consider a graph-based analysis for software evolution~\citep{Bhattacharya2012}, \citet{Shirin2020}'s work differs from those in terms of incorporating the uncertainty in the ITS. More specifically, they proposed a partially observable bug dependency graph, where the dependencies between the bugs are not fully observable beforehand and are revealed as the bugs are resolved, and defined its depth and degree as crucial factors affecting a bug's priority. They solved their POMDP model using Monte Carlo simulation and compared their performance against the baseline policies. On the other hand, their work lacks an internal performance index that would allow them to compare different policies. \textcolor{black}{Our contribution to the bug prioritization literature includes a comprehensive list of evolutionary and traditional metrics for reporting the performance of any prioritization or triage algorithm. We also consider a list of rule-based and machine learning strategies to cover different bug prioritization policies. Moreover, the novel Wayback Machine enables practitioners to compare their suggested approaches with the actual practice recorded in the ITS.
Unlike previous works, we consider evaluating prioritization and triage algorithms through reconstructing the exact ecosystem at the time a decision is made. Therefore, instead of extracting bug attributes and using a stable CSV file to estimate bugs' priority level or the assigned developer, we rely on an evolving system that considers the exact bug attributes at each timestamp and shows the real impact of the prioritization or triage decisions.}
\section{Conclusion} \label{sec:conclusion}
Previous studies showed that the bug dependency graph (BDG) is a reliable source for decision-makers in defect prioritization and triage tasks~\cite{Shirin2020, jahanshahi2021dabt, Bhattacharya2012-2}. \textcolor{black}{In this work, we design a Wayback Machine that regenerates past events related to bug reports in ITSs while considering the BDG. A detailed implementation of the Wayback Machine requires tackling three challenges. First, it needs to consider different elements of the ITS, such as users, bugs, developers, and the BDG. Second, it should be designed in a modular format to facilitate adopting any prioritization and triage algorithm. Accordingly, it can be utilized by other researchers to have a complete performance report of their prioritization and triage approaches. Most importantly, the simulator (i.e., the Wayback Machine) should comprehensively reproduce past prioritization/triage decisions and provide insight into their impacts on different system components.}
\textcolor{black}{Our work on open-source data indicates the importance of using a history regenerator that is able to implement proposed bug prioritization and triage algorithms, considering the whole ecosystem rather than applying them in a vacuum. We first explore the history of the events and the evolutionary characteristics of the bugs, e.g., severity, priority, depth, and degree. We compare the features of the resolved bugs with those remaining open during the same period. Our observations reveal the importance of bug dependency in projects with well-reported blocking effects. Moreover, we find that priority and severity, although subjective, are still significant factors in the triage process. }
\textcolor{black}{We extend our past-event regenerator, called Wayback Machine, to a mechanism that is able to integrate any bug prioritization or triage model. We embed some bug prioritization (e.g., rule-based and machine learning algorithms) and bug triage algorithms (e.g., CBR, CosTriage, and DeepTriage) into the Wayback Machine. Currently, the model tracks the algorithms' performance using evolutionary and traditional metrics through their life cycle. The machine requires bugs' information and history together with developers' information as inputs and produces detailed analysis for the given training and testing phase. Researchers may employ the Wayback Machine to have an easy-to-use evaluation tool for reporting the performances of their proposed models.}
\textcolor{black}{To validate the Wayback Machine, we utilize the data extracted from three OSS systems in Bugzilla. Our prioritization and triage experiments demonstrate novel perspectives on the performance of the models. For instance, we observe that most models ignore bug dependencies during their triage phase. Moreover, the models overspecialize and assign tasks to a few highly experienced developers. In doing so, they increase their accuracy while ignoring the fact that the high number of bugs reported to the ITS requires an extensive list of developers to address them. Thus, we further explore the fairness of the task distribution and its impact on overdue bugs. These findings would not have been easily achievable without regenerating the exact ecosystem at decision time.}
Our primary objective in this longitudinal study is to demonstrate the current status of the system and sequential decisions of the developers in these projects to facilitate exploring different bug prioritization \textcolor{black}{and triage} strategies.
\textcolor{black}{For practitioners, it highlights the importance of the history of the ITS in bug prioritization and triage. It also facilitates the comparison of any strategy with the actual decision-making process.} In the end, we recommend considering the evolutionary behavior of the ITS instead of snapshots of the past events, and a simulation study would be helpful for this purpose.
\section*{Supporting Information}
To make the work reproducible, we publicly share our originally extracted dataset of one decade of bug reports, scripts, and analyses on \href{https://github.com/HadiJahanshahi/WaybackMachine}{\textcolor{black}{GitHub}}.
\bibliographystyle{elsarticle-num-names}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 9,598 |
Kendall Coyne Schofield debuts as guest hockey analyst, encounters some mansplaining
By Abbi Matheson, Globe Staff, January 31, 2019, 8:46 a.m.
US Women's National Team forward Kendall Coyne Schofield at the NHL All-Star weekend. (Ben Margot/Associated Press)
Kendall Coyne Schofield, US women's hockey national team forward and Northeastern graduate, was on the other side of the glass Wednesday night when she appeared as a guest analyst on NBC Sports' "Wednesday Night Hockey".
On the show, Coyne Schofield encountered some mansplaining from cohost Pierre McGuire during his "Inside the Glass" segment in the first period of the Pittsburgh Penguins-Tampa Bay Lightning game at PPG Paints Arena.
Mansplaining is defined by Merriam-Webster as explaining "something to a woman in a condescending way that assumes she has no knowledge about the topic."
After a brief introduction, McGuire told Coyne Schofield which side of the ice the teams would be on. He then asked Coyne Schofield what she was expecting out of the game, and added "we're paying you to be an analyst, not to be a fan tonight."
Coyne Schofield, the two-time Olympic medalist, didn't blink an eye and rolled right into her thoughts on each team going into the game.
The incident sparked some condemnation online as athletes and women in sports media said it was a reflection of the challenges women face in male-dominated fields.
I guarantee that Pierre didn't say that to Brian Boucher once in his first runs as an analyst. No one told Paul Bissonnette that when he made his color debut. Treating Coyne as if she's a kidcaster is embarrassing to watch.
— Catherine Silverman (@catmsilverman) January 31, 2019
I was mad earlier in the season when random guys were tweeting at me (a hockey writer) NHL rules like I didn't know them. Some people told me I overreacted.
No. THIS is why. The culture of assuming women don't know the sports they're an expert of. Treating like lesser.
— Marisa Ingemi (@Marisa_Ingemi) January 31, 2019
Pierre McGuire, a great example of how far the NHL, broadcast, and sporting world has to go in respecting female athletes and reporters.
— Ashley stretch Johnston (@strettyit) January 31, 2019
In a statement Thursday afternoon, McGuire expressed regret for the words he chose and said he had the "utmost respect" for Coyne Schofield.
"I've known Kendall for years and have had the privilege of covering her as a member of Team USA at the past two Winter Olympics. We were all thrilled to have her join our coverage last night, but at times my excitement got the better of me and I should have chosen my words better," he said. "I have the utmost respect for Kendall as a world-class player, analyst of the game, and role model."
Coyne Schofield's appearance follows her history-making participation in the National Hockey League's All Star Skills Competition on Jan. 25, where she became the first woman to compete in the tournament. She participated in the Fastest Skater competition, clocking in a time of 14.346 seconds — less than a second slower than the event's three-time winner.
She took to Twitter on Thursday afternoon, calling the past week "one of the most incredible weeks of my life" and expressing support for McGuire while acknowledging the feelings of viewers.
"I've known Pierre McGuire for years. I know he respects me as a hockey player, a woman, and a friend and that is why I didn't think twice about our on-air exchange when it happened," she wrote, and went on to say that, upon reviewing the video, she understood why some found it inappropriate.
"While I wish it came out differently, I know Pierre doesn't question my hockey knowledge. But, to be honest, that's not what's important. What IS important is for every young girl reading this to know that it doesn't matter what anyone thinks of my hockey knowledge — because I do not doubt my hockey knowledge. I didn't need a gold medal to come to that conclusion. I needed to believe in myself," she said.
Christina Prignano of the Globe staff contributed. Abbi Matheson can be reached at abbi.matheson@globe.com. Follow her on Twitter at @AbbiMatheson | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,696 |
Revolution in Honduras and American Business
The Quintessential "Banana Republic"
Honduras is the "standard" for a "banana republic," having been O. Henry's model. This collection details not only the political and financial machinations of the fruit companies but also the graft and corruption of the national government, the American banking community's loans, the U.S. government's response, and the various aborted popular/revolutionary uprisings. The largest single group of records relates to Honduran political affairs, pertaining chiefly to the turbulent political situation and almost continuous revolutionary activity in Honduras. Included are discussions of boundary disputes and border troubles with El Salvador, Nicaragua, and Guatemala and revolutionary movements originating from these neighboring countries and from Mexico; German activities in Honduras in World War I; landing of U.S. Marines to protect U.S. citizens during revolutions; cases of alleged violation of neutrality laws, and shipment of arms and munitions to Honduras from the U.S.; the participation of Sumner Welles in a conference to mediate the revolution in 1924; and presidential campaigns and elections. Another large group of records relates to financial affairs and concerns such matters as the proposed adjustment of the Honduran debt by the United States, loan negotiations and agreements between the Republic of Honduras and the J. P. Morgan Co. and other banking groups in the U.S., refunding of the internal debt of Honduras, settlement of Honduras' foreign debt, and loans to the Government of Honduras by various fruit companies.
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,852 |
*** Official HELLBOY 2: The Golden Army Discussion Thread
Discussion in 'Movies' started by DavidPla, Feb 10, 2007.
Message #1 of 58 Feb 10, 2007
DavidPla Cinematographer
Guillermo del Toro is back to direct, but the sequel has moved from Columbia Pictures to Universal Pictures. del Toro has mentioned how he'll have much more freedom now with the studio switch. The release date has been set for August 1, 2008.
(Mod note - Release date was moved up to July 11, 2008)
Andy Sheets Cinematographer
Hopefully it'll be an improvement over the first movie, which I do like very much but featured a lot of X-Men type aspects to the presentation of the BPRD that don't fit the spirit of Mignola's comics.
Chuck Mayer Lead Actor
Chuck Mayer
I hope his recent success allows him more room for Hellboy aspects...I think it will. I'm looking forward to this
Hey buddy...did you just see a real bright light?
Rakesh.S Second Unit
hellboy was horrible...i doubt i'll be watching this one.
DonRoeber Screenwriter
I enjoyed the first quite a bit, being a fan of both Mignola and del Toro. Looking forward to the second.
Luckily, right at that moment, an unconscious Argentinean fell through my roof.
He was quickly joined by a dwarf dressed as a nun.
JonZ Lead Actor
Did anyone watch the animated film? Story by Mignola and with the film actors doing the voices.
Yeah, the animated film was good. More faithful to the comics in its take on the characters and Mignola's style, probably because Mignola seemed to have a more direct hand in it. Has some pacing problems and Liz's character design isn't to my tastes but nothing too seriously wrong. I'm very much looking forward to the sequel (and hopefully a third movie if things go right).
Message #8 of 58 Dec 19, 2007
Trailer is coming out tomorrow online.
I walked into a comic book store for the first time in prob 6 or 7 months on monday.
Picked up the new Hellboy story as well as the last 2 TPBs Ive missed.
Looking forward to the trailer.
Message #10 of 58 Dec 19, 2007
John Doran Screenwriter
from that picture, it looks as though they're including johann in the film; i wonder if they'll have roger the homonculus, too...that'd be pretty cool.
oh, and i should also point out that the BPRD stories are excellent, too, and well worth reading - especially the black flame and the universal machine.
fere libenter homines id quod volunt credunt
" i wonder if they'll have roger the homonculus, too...that'd be pretty cool."
He can be seen in the background of the first film.
http://moviesmovies.ign.com/movies/v..._qtlowwide.mov
http://moviesmovies.ign.com/movies/v...qthighwide.mov
Thanks Dave.
Russell G Fake Shemp
I'm looking forward to this. I liked the first film, but like most first films based on comics, it seemed too preoccupied with setting up everything. After the 2 cartoons, this could be pretty good, the toons were great.
My wallet cries me to sleep!
See what I'm watching on Letterboxd!
This post kills threads!
I love Hellboy, and liked the first film alright. I think the teaser for HB2 is fairly dazzling, so color me pretty excited. All of the exposition nonsense is out of the way...on to the craziness
Message #16 of 58 Jul 8, 2008
Thi Them Producer
Everybody seems focused on the other comic book movie coming out, and this one is being neglected.
It's getting really great reviews. I haven't seen the first, but the visuals make me want to see this sequel.
~T
Yea, it seems the word of mouth is really great for this although I think it might just get lost in the shuffle at the Box Office. With The Dark Knight next weekend, I can't see this sticking around for too long especially since the first film was a small hit but by no means anything fairly large. Were DVD sales that good?
I'm not one of those guys who are so worried about a movie's gross that they must be stockholders in the company but I'm betting that Hellboy II won't do very well at the box office. That doesn't mean that I think it'll be a bad movie, I just think it'll be one comic book movie too many for the general public to really get into (especially when it's not from the only two comic companies that the public has heard of).
Message #19 of 58 Jul 10, 2008
Claire Panke Second Unit
Man, I hope you're wrong but fear you're right. I'm looking forward to this one more than TDK, because I'm always eager to see anything GDT directs.
Ray H Producer
I don't see it lighting the box office on fire, but I think it could fare decently. Maybe open around $25 - 30 mill. Maybe end up doing about what the first film did. The main problem is that the first movie didn't really connect with audiences (heck, it didn't even make that much money to really warrant a sequel!) so I don't think it's got that much of a built-in base aside from fans of the comic.
I was thinking about seeing this tomorrow morning, but Saturday will probably be more like it.
"Here's looking at you, kid."
My DVD/Blu-ray Collection
My Letterboxd Page | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 483 |
What was to be Maharajji's final day at Kainchi was spent in darshan, kirtan, and prayers. Both Indian and Western devotees were gathered. Maharajji was asking after everyone at the temple and elsewhere. Twice he put one of his Indian devotees into samadhi and brought him out of it by throwing his blanket over the man's head. At one point he said to those gathered, "He is your guru. He is young and I am old. He will live and I will die!" Everyone laughed. He then had the Westerners sing to Hanuman. There were tears in his eyes. The Indian women did arti before him, and one and all received a tilak upon the forehead.
Then he went to bathe and eat and hinted that he was leaving for four or five days. When he came out of his room he went to the temple and paused before the murti of Hanuman, holding his hands together in pranam silently for two or three minutes. Again he stopped and honored each of the murtis at the temple in turn. While crossing the bridge out of the temple compound he met an old devotee who was a photographer. Maharajji gave him an old photo and told him to copy it and distribute it freely. He instructed that the daily feeding be stopped and the Mothers taken to Nainital. Then he said softly, "Today, I am released from Central Jail forever." As he approached the car that was to take him to the station, the blanket slipped from his shoulders to the ground. A devotee tried to put it back on, but Maharajji said, "Leave it. One should be attached to nothing." Others folded it and placed it in the car.
Just at the moment when he sat in the car, an old woman arrived from the nearby village of Bhowali. Maharajji said, "Ma, I've been waiting for you." He touched her on the head and said, "I'm going." He was gay and full of humor.
The driver of the car was another old and trusted devotee. He reports that during the ride to the railway station, he became aware that Maharajji s feet had become extremely big. "I was afraid," he said.
Maharajji kept saying to him, "What is destiny? What is going to happen? Tomorrow we don't even know." They got to the station early for the train, so they sat in the car for two hours. Maharajji pointed out a beautiful rainbow and said, "Look at that natural beauty. How beautiful is God's creation, man can never make anything so beautiful."
Tickets had been purchased to Agra for him and for Ravi, a young devotee. On the train Maharajji did not close his eyes all night and kept waking the devotee and saying, "I'm not tired, talk with me." Ravi asked him to drink the milk which the Mothers had sent in a thermos but the milk had turned bad. "Throw it out," Maharajji said, "Throw the thermos out, too." Ravi didn't want to, but Maharajji did so himself, saying, "Throw it out, I will not need it anymore." He spoke of many things and many people through the night. He said, "I've come on earth only for the spreading of dharma."
When they reached Agra, Maharajji jumped from the train while Ravi trailed behind with the baggage. Instead of following the platform, Maharajji jumped from it easily, crossing six sets of tracks and jumping up on the main platform. Ravi caught up with him at the ticket-taker who had stopped Maharajji for his ticket. Then Maharajji bargained with various rickshaw drivers: one wanted three rupees (about thirty cents), which Maharajji argued was too much. Finally a price was fixed and they set out, only Maharajji knowing the way. En route, Maharajji pointed out a house and said, "Their son has gone to America and the family feels very sad. Sons don't serve their fathers anymore." When they arrived at the house, he told Ravi to give to the rickshaw driver the milk bucket filled with Ganga water that Maharajji always carried with him. Again he said, "Have no attachment for anything."
Except for one hour when Maharajji went to see a heart specialist (he had complained of pains in his chest), he remained at S's house from 6:00 A.M. to 9:00 P.M. that evening. The specialist said that Maharajji's heart was fine and that he just needed rest. At 9:00 P.M. he left for the station to meet the train that would take him back up to the foot of the mountains at Kathgodam. He was accompanied by young Ravi and another devotee, D. After some time he told Ravi to go and sit in the next compartment. Ravi went there but was thought to be a thief by the occupants, who yanked the chain and had the train stopped. Ravi was taken up and placed in the police van that was a part of the train. Ravi persuaded the police to ask Maharajji at the next station if Ravi was with him. Maharajji was very loving to Ravi and said, "We'll get off at Mathura and I'll make a call to the D.I.G. (Deputy Inspector General) and set things straight." At Mathura, not far from Agra, they got off the train. Some people bowed to him. He then sat down on the steps of the station after leaning against the outdoor latrine. D went to get a taxi, while R waited with Maharajji.
Maharajji then lay on the steps and began convulsing. His eyes were closed and his body was cold and sweating. D fed him some pills and Maharajji said, "Turn off the lights." He asked for water and to be taken to nearby Vrindaban. He was carried by stretcher to the taxi and laid across the back seat. During the ride to Vrindaban, Maharajji seemed unconscious for most of the way, though now and then he mumbled things they could not understand. They took him to the emergency room at the hospital. In the hospital the doctor gave him injections and placed an oxygen mask over his face. The hospital staff said that he was in a diabetic coma but that his pulse was fine. Maharajji roused and pulled the oxygen mask off his face and the blood pressure measuring band from his arm, saying, "Bekar (useless)." Maharajji asked for Ganga water. As there was none, they brought him regular water. He then repeated several times, "Jaya Jagadish Hare" (Hail to the Lord of the Universe)," each time in a lower pitch. His face became very peaceful, all signs of pain disappeared. He was dead. No one at the hospital had recognized him. The hospital staff left the room. Ravi and D carried Maharajji out and placed the body in a taxi and took it to the Hanuman temple (It was about 1:15 on the morning of September 11th.)
– Excerpt from Miracle of Love: Stories about Neem Karoli Baba compiled by Ram Dass
| {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 8,231 |
Q: How to resolve exception attempting to load java class in DB2 Java Stored procedure? I have created a Java SQLJ program and finished creating the packages and the jar. When attempting to call the procedure, I am facing an error like:
ATTEMPTING TO LOAD JAVA CLASS s1.S1Sal FROM JAR COEDB.S1SAL
Here, in s1.S1Sal, s1 is the package name and S1Sal is the class name defined in the Java program.
Error
Run: SYSPROC.S1NUM(DECIMAL(4, 0))
{call SYSPROC.S1NUM(?)}
USER-DEFINED ROUTINE SYSPROC.S1NUM ENCOUNTERED AN EXCEPTION ATTEMPTING TO LOAD JAVA CLASS s1.S1Sal FROM JAR COEDB.S1SAL. ORIGINAL EXCEPTION: (none). SQLCODE=-20212, SQLSTATE=46103, DRIVER=4.22.29
Run of routine failed.
- Roll back completed successfully.
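A: One common cause of SQLCODE -20212 (SQLSTATE 46103) is a mismatch between the installed jar id, the EXTERNAL NAME clause of the procedure, and the package path of the class inside the jar. As a hedged sketch only (the file path, the method name salaryProc, and the parameter style below are assumptions, not taken from the question), the jar and procedure are typically registered along these lines:

```sql
-- Install the jar under the jar id COEDB.S1SAL (path and jar file name are assumptions)
CALL sqlj.install_jar('file:/tmp/S1Sal.jar', 'COEDB.S1SAL');

-- EXTERNAL NAME must be 'jar-id:package.Class.method'; the class file inside
-- the jar must really sit under the s1/ directory (i.e. package s1)
CREATE PROCEDURE SYSPROC.S1NUM (IN EMPNO DECIMAL(4,0))
  LANGUAGE JAVA
  PARAMETER STYLE JAVA
  EXTERNAL NAME 'COEDB.S1SAL:s1.S1Sal.salaryProc';
```

If the class was compiled into the jar without its s1/ package directory, the same ATTEMPTING TO LOAD JAVA CLASS error is raised, so checking the jar layout with `jar tf S1Sal.jar` is a quick first step.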
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 5,715 |
A few days ago, researchers from the Penn State College of Medicine published a new study on vaping that will most certainly bring hope to those of you who are trying to quit smoking. The document starts by discussing the numerous medical advantages of the e-cigarette while underlining some mistaken beliefs. They are especially concerned about a theory which is often exploited by anti-tobacco lobbies. According to this theory, vaping could be a gateway to nicotine dependence.
The Penn State scientists strongly disagree with this claim. They state on the contrary that vaping will reduce the urges of smokers. And also the number of daily occasions when they feel like smoking.
The researchers analyzed the answers of 32,320 participants under the direction of Guodong Liu.
Among the participants, there were 3,586 vapers who fit the survey criteria, of whom 5% consider themselves exclusive users of e-cigarettes. Of these 5%, 93% were former smokers.
Vapers consider themselves less dependent on the electronic cigarette than on traditional tobacco products. They also consider that the urge to vape is less intense than the urge to smoke that they experienced before.
Finally, they find it easier to refrain from vaping in public spaces than it is to refrain from smoking.
All the participants of the study were considered addicted due to their regular use. But the main author of the study, Guodong Liu, who is an associate professor in Public Health Sciences, stated that the results indicated clearly that "e-cigarettes are addictive, but not at the same level as traditional cigarettes".
The Penn State College of Medicine team plans to continue its research with further analysis of e-cigarette users' dependency and the evolution of e-cigarette use. | {
"redpajama_set_name": "RedPajamaC4"
} | 7,578 |
The Ripple Effect: Free Dance Performance in Dublin OH
Tuesday, September 13, 2011 12:00 AM by irishattitude
This guest post is by David S. Guion, Executive Director of the Dublin Arts Council.
Have you ever dreamt of flying? This fascination with flying has helped inspire choreographer Keely Shaffer-Glenn to create Gravity's Ripple III, a site-specific dance performance on the sloping riverfront grounds of Dublin Arts Council. Keely's talented and agile dancers will incorporate balloons and paper airplanes as they encourage the audience to come along for the ride. Challenging the slope of the hill and the weight of gravity, the dancers will guide the audience through journeys of flight and fancy.
All are welcome to experience a dress rehearsal on Friday, Sept. 16 at 11 a.m. at Dublin Arts Council, 7125 Riverside Dr., in Dublin. Dancers and non-dancers of all ages are encouraged to attend. We encourage young children and their families to picnic on the grounds after the performance and enjoy the scenery.
You're also invited to two free performances of this original work, Friday Sept. 16 and Saturday Sept. 17 at 6:30 p.m. at Dublin Arts Council. The performance is approximately 40 minutes in length and will be followed by a Q&A session with the choreographer and the dancers. Sunset and nature will surely play a feature role in the performance.
Gravity's Ripple was first presented in 2009, and continues to receive supportive audience feedback. Most people have never attended a contemporary dance performance before – and certainly not outside. Their enthusiasm has encouraged us to continue this unique project.
The project is offered free of charge through an array of collaborations and financial supporters. Gravity's Ripple III is presented by Dublin Arts Council and OhioDance in collaboration with The Ohio State University Department of Dance and Ohio Department of Education Division of the Arts with additional support provided by Ohio Arts Council and the Target Foundation.
Come fly with us! To learn more about Dublin Arts Council, please visit us at www.dublinarts.org
~Post by David S. Guion, Ph.D., Executive Director, Dublin Arts Council
Categories: Arts in Dublin
Author: irishattitude | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 3,105 |
{"url":"https:\/\/mathematica.stackexchange.com\/questions\/206340\/square-brackets-in-axis-label\/206342","text":"# Square brackets in axis label\n\nWithin the Plot[] environment I'd like to label one of the axes as\n\n$$r\\,\\,[a_0]$$\n\nwith square brackets to denote that $$a_0$$ is the unit. How can I achieve this? So far I've tried\n\nText[Style[ToExpression[\"r [a_0]\", TeXForm, HoldForm]]\n\nbut this outputs\n\n$$r\\,\\,(a_0)$$.\n\nSo, the subscript comes out correctly but not the brackets.\n\nUse a StringForm:\nPlot[x, {x, 0, 1},\nAxesLabel -> {Text[StringForm[\"r[]\", Subscript[a, 0]]], Automatic}]\n\nIt is alternatively possible to enter the subscripts directly inside the quoted string (without using StringForm).","date":"2020-04-02 23:18:32","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 3, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8409661650657654, \"perplexity\": 4486.658496504444}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-16\/segments\/1585370508367.57\/warc\/CC-MAIN-20200402204908-20200402234908-00309.warc.gz\"}"} | null | null |
{"url":"https:\/\/math.stackexchange.com\/questions\/2227059\/find-the-number-of-3-letter-words-that-can-be-made-with-letters-in-alphabetical","text":"# Find the number of 3-letter words that can be made with letters in alphabetical order.\n\nConsider the first ten letters of the alphabet $\\{A,B,C...J\\}$ and consider any three letter sequence a word. How many three letter 'words' can be constructed from this set in which all the letters are different and in which the letters are in alphabetical order.\n\nIt is easy to see that there are 720 words that can be made from three different letters. I have been informed that imposing the second constraint reduces this to 120 words, but it is not clear to me why exactly $\\frac{1}{6}$ of the of the first set are in alphabetical order. Any argument that clarifies this relationship would be greatly appreciated.\n\nThe answer is ${10\\choose 3} = 120$, because you must choose distinct letters, and for any set of three distinct letters you only get to construct one word based on alphabetical order.\n\n\u2022 The answer that you provided isn't initially intuitive but is quite simple and helpful. \u2013\u00a0Matthew Anderson Apr 10 '17 at 4:03\n\nIf you have any three different letters, and you represent the 'lowest' letter (i.e. 
the one that comes first in the alphabet) as L, the 'highest' as H, and the one on the middle as M, then you have 6 possibilities, only the first one of which is in alphabetical order:\n\nLMH\n\nLHM\n\nMLH\n\nMHL\n\nHLM\n\nHML\n\nSo: 1 out of 6","date":"2021-05-08 19:55:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8239415287971497, \"perplexity\": 171.89442076549753}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243988923.22\/warc\/CC-MAIN-20210508181551-20210508211551-00498.warc.gz\"}"} | null | null |
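The counts in the row above can be verified directly; a small Python check (illustrative only) confirms that of the 720 ordered three-letter words over ten letters, exactly C(10,3) = 120 are in alphabetical order, i.e. 1 out of 6:

```python
from itertools import permutations
from math import comb

letters = "ABCDEFGHIJ"
words = list(permutations(letters, 3))                  # all words with distinct letters
ordered = [w for w in words if list(w) == sorted(w)]    # strictly alphabetical ones
print(len(words), len(ordered), comb(10, 3))            # 720 120 120
```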
package co.infinum.ava;
import android.app.Activity;
import android.content.Context;
import android.view.View;
import android.view.ViewGroup;
import android.widget.ArrayAdapter;
import java.lang.reflect.Method;
import java.util.ArrayList;
/**
* Adapter that can be injected with views using AbstractViewHolder.
*
* Created by ivan on 12/15/13.
*/
public class AbstractViewAdapter<T> extends ArrayAdapter<T> {
protected static final String INJECTOR_CLASS_NAME_SUFIX = "$$AdapterInjector";
protected static final String INJECTOR_METHOD_NAME = "inject";
/**
* Factory for creating abstract view holders.
*/
protected AbstractViewHolder.Factory<T> abstractViewFactory;
/**
* Used to store current abstract view holder.
*/
private AbstractViewHolder<T> abstractViewHolder;
/**
* Creates new AbstractViewAdapter.
*
* @param context
* @param abstractViewFactory
* @param items
*/
public AbstractViewAdapter(Context context, AbstractViewHolder.Factory<T> abstractViewFactory, ArrayList<T> items) {
super(context, 0, items);
this.abstractViewFactory = abstractViewFactory;
}
@Override
public View getView(int position, View convertView, ViewGroup parent) {
if (convertView == null) {
abstractViewHolder = abstractViewFactory.createView(getContext());
convertView = abstractViewHolder.updateView(getItem(position));
convertView.setTag(abstractViewHolder);
} else {
abstractViewHolder = (AbstractViewHolder<T>) convertView.getTag();
abstractViewHolder.updateView(getItem(position));
}
return convertView;
}
public static void injectAdapters(Activity activity) {
String injectorClassName = activity.getClass().getName() + INJECTOR_CLASS_NAME_SUFIX;
try {
Object injectorObject = Class.forName(injectorClassName).newInstance();
Method method = getActivityInjectMethod(injectorObject, activity);
method.invoke(injectorObject, activity);
} catch (Exception e) {
e.printStackTrace();
}
}
public static void injectAdapters(Object object, View rootView) {
String injectorClassName = object.getClass().getName() + INJECTOR_CLASS_NAME_SUFIX;
try {
Object injectorObject = Class.forName(injectorClassName).newInstance();
Method method = getObjectInjectMethod(injectorObject, object);
method.invoke(injectorObject, object, rootView);
} catch (Exception e) {
e.printStackTrace();
}
}
/**
* Returns method for activity injection.
*
* @param injectorObject
* @param activity
* @return
* @throws NoSuchMethodException
*/
private static Method getActivityInjectMethod(Object injectorObject, Activity activity) throws NoSuchMethodException {
Class[] parameterTypes = new Class[] {
activity.getClass()
};
return injectorObject.getClass().getMethod(INJECTOR_METHOD_NAME, parameterTypes);
}
/**
* Returns method for object injection.
*
* @param injectorObject
* @param object
* @return
* @throws NoSuchMethodException
*/
private static Method getObjectInjectMethod(Object injectorObject, Object object) throws NoSuchMethodException {
Class[] parameterTypes = new Class[] {
object.getClass(),
View.class
};
return injectorObject.getClass().getMethod(INJECTOR_METHOD_NAME, parameterTypes);
}
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 8,640 |
Camden Printworks has been printing with a purpose in Camden, New Jersey since 1990.
We believe in location, diversity, and impact.
Our location allows us to be good neighbors in a city that American "progress" has left behind. We create jobs and bring business to America's poorest city. We believe that small business is one of the keys to getting Camden back on its feet.
Our diverse staff are valued, respected, and fairly compensated. We believe that having a team made up of people from different racial, religious, and economic backgrounds makes Camden Printworks a place of incredible connection.
Our impact on our customers, our block, our planet, and ourselves inspires us to always do better. We are pushing the boundaries of what's possible with screenprinting with an eye towards positive impact. That's why we specialize in water-based inks and fairly-made goods. That's why we offer internships to print majors. That's why our customers keep coming back again and again. | {
"redpajama_set_name": "RedPajamaC4"
} | 9,140 |
Automatic and semi-automatic pipeline CNC threading machines are devices used to produce the thread designs found in pipes used for a variety of applications. The threads allow the pipe to be properly screwed and connected to the component for which it is used. For instance, if a pipe is used for the purpose of water supply, then the thread helps in connecting the pipe to the main water tank.
Semi-automatic and automatic PVC pipeline CNC threading machines help to maintain uniformity while cutting the threads in the pipes. Threading just one end of a pipe is not deemed adequate in most cases, and for optimal results both ends of the pipe need to be threaded. In such circumstances, semi-automatic and automatic pipe CNC threading machines can be highly beneficial. Using these machines, one can get the same kind of threading at both ends of the piping systems. These machines can also be used to create customised thread designs that are more user-specific in nature.
Most pipe manufacturing companies produce piping systems in bulk, and these machines can help in cutting threads very quickly and easily.
The automatic and semi-automatic CNC pipe threading machines not just help in carving out threads at pipe ends, but they are also useful for cutting the pipes into pieces of any required length.
While the earlier models of automatic and semi-automatic PVC CNC pipe threading machines were slow in their operations, advancements in technology have resulted in the development of machines that can not only produce threads and cuts of high quality but can also be used for bulk production.
As the maintenance costs for semi-automatic and automatic PVC pipe threading CNC machines are very low, people can use them for years with little or no technical glitches. Additionally, these machines offer a high rate of production at a low cost, which helps in saving manufacturing expenses.
There are a variety of pipe threading CNC machines currently available on the market, which offer users plenty of options. Both small and large models of automatic and semi-automatic pipe CNC threading machines are available; each of these models caters to a wide variety of purposes. The machines are not just easy to transport or store but also quite convenient to use.
"redpajama_set_name": "RedPajamaC4"
} | 7,341 |
Q: Display different cursor when over text (NOT element) When the default cursor hovers over text, it changes to a caret-shaped (I-beam) cursor. However, it only does so when directly over the text: everywhere else inside the element, it will not display that specific cursor.
As an example, here's how I want it to work:
Is it possible to get that to work?
A: It seems that there's currently no way to apply separate styles (here, cursors) to only text nodes using CSS. However a similar effect can be achieved using <span>s:
CSS:
span{
cursor:url("/images/cursorTextSelect.png") ,default;
}
HTML:
<span>This example text fills a line</span>
<br>
<span>Some more example text that fills another line.</span>
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,640 |
\section{Introduction}
\label{sec:Introduction}
Drone technology is developing at a breath-taking pace. From toys for hobbyists, it has now reached a state where both the private and public sectors can rely on drones as tools to provide value-added services to users \cite{canis2015unmanned}. The low cost and short mobilisation time of drones are two major drivers behind their adoption. Checking the structural integrity of a building, for example, is much cheaper with drones than with other aerial platforms, such as helicopters, or with high-rise cranes. Similarly, drones may be deployed faster than police helicopters -- a potentially attractive feature required by law-enforcement agencies.
Most likely, however, drones would not operate individually or in isolation; individual drones may form part of a fleet managed by the same organisation, where mission objectives are achievable only if fleet participants cooperate. Moreover, airspaces may contain drones from other organisations, where individual drones and fleets must negotiate to operate safely. Another complexity is that the airspace might be governed by a local government authority, which may use an automated management system to handle a dynamic and congested airspace containing large numbers of UAVs.
\subsection{Context}
\label{sec:Context}
There are multiple ways in which UAV fleets can be managed; for example, all decisions might be taken by the fleet management authority (also referred to as control centre). This requires drones to communicate information to the ground fleet management system, potentially in real-time, and to quickly react to any received instructions. For certain situations, as discussed previously, drone fleets are likely to act autonomously and should make in-flight decisions without requiring explicit permission from a fleet management control system on the ground.
Individual drones, with limited computation power, may not be suited for autonomous behaviour if it is based on Artificial Intelligence (AI) techniques: limited computational and storage resources, along with real-time decision requirements, pose serious challenges.
To overcome this limitation, drone fleets may behave like swarms, where AI algorithms designed for the Swarm Intelligence paradigm can be applied. We refer to Fleets of Drones (FoD) that use swarm intelligence as `Swarms of Drones (SoD)'; the rest are referred to as `FoD'. An important consideration in swarm intelligence is the nature of the swarm and the relationship between swarm objects, particularly the concept of individual objects entering and leaving the swarm. This is crucial for a SoD, which might be a static, dynamic or hybrid fleet. In a static fleet, all drones remain together for the whole duration of the mission. In dynamic fleets, individual drones may enter and leave the fleet as necessary or as required by their mission. Lastly, hybrid fleets have part of the fleet behaving statically, with the rest acting as a dynamic fleet.
\subsection{Challenges and Problem Statement}
\label{sec:ChallengesAndProblemStatement}
For a SoD, there are core operations that are necessary for successful mission completion. These include flight control, flight routing, obstacle avoidance, regulation conformance (regarding airspace usage), and self-protection (including safety-critical operations) with respect to physical integrity and cybersecurity. All of these operations are highly time-sensitive. Collectively, we refer to these time-sensitive operations as `Mission-Critical Operations' (MCO). The listed operations are loosely related to the three main challenges discussed below.
\subsubsection{Swarm Intelligence}
\label{sec:SwarmIntelligence}
Swarm intelligence takes inspiration from the collective behaviour of natural systems, such as swarms of insects; these systems are inherently decentralised and often have the ability to self-organise.
The ability of a swarm of insects to perform certain tasks emerges out of the interaction of simple and quasi-identical agents, which act asynchronously due to the lack of a central control \cite{Beni2005}.
Algorithms based on swarm intelligence principles, like ant colony optimisation, bee-inspired algorithms and particle-swarm optimisation, are used in optimisation problems that are static or change over time.
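As a concrete illustration of the paradigm, the following minimal particle-swarm optimisation sketch (plain Python, not taken from any cited work; the objective function and parameter choices are assumptions) minimises a simple objective purely through the interaction of simple agents sharing a best-known position:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle-swarm optimisation of f over [-5, 5]^dim."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]           # each particle's personal best position
    gbest = min(pbest, key=f)[:]          # swarm-wide best position (shared state)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):   # update personal, then global, best
                pbest[i] = pos[i][:]
                if f(pos[i]) < f(gbest):
                    gbest = pos[i][:]
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere)
print(sphere(best))
```

There is no central controller beyond the shared best position; each particle acts on local information, which is the sense in which such algorithms map naturally onto decentralised drone fleets.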
In distributed robotics systems, a swarm or fleet of autonomous agents may operate in remote locations with little or no control by a human operator. Swarm robotics uses large numbers of autonomous and situated robots with local sensing and communication capabilities \cite{Brambilla2013}.
A swarm of robots offers certain advantages over the use of a single one.
Due to the large number of robots, the workload can be distributed across the swarm, and multiple tasks can be worked on simultaneously \cite{Barca2013}.
It further offers distributed sensing capabilities and an increased robustness to failure, by eliminating the single point of failure, as demonstrated in the Swarmanoid project \cite{Dorigo2013}.
The swarm's behaviour is often optimised using evolutionary algorithms; for instance, researchers have successfully evolved a swarm's ability to adapt to unknown environments \cite{Urzelai2001,Bredeche2009}, its resilience to failure \cite{Millard2014a}, and the planning and following of formation patterns \cite{Saska2014,Dorigo2013}.
\subsubsection{Security, Safety and Privacy}
\label{sec:SecrutiyAndPrivacy}
Any flying asset -- drones in this case -- can be a potential target, whether it is part of a group or individual: an attacker may seek to compromise its current state or to access the data it contains. These two elements raise the challenges of how to implement the security of the asset so that its current state cannot be compromised \cite{javaid2012cyber,gupta2016survey}, and how the data can be protected in a manner that does not violate any privacy requirements \cite{Gregory2013}. Similar issues regarding intentional hacking and signal jamming, accountability of security issues, management/enforcement of airspace restrictions, and concerns over privacy and intrusiveness were detailed in a report to the US Congress \cite{Elias2012}. Furthermore, concerns over the security, safety and privacy of drones flying in cities are shared not just by national governments -- the US Congress passed legislation covering UAV development and integration in civilian airspace (PUBLIC LAW 112–95—FEB. 14, 2012) -- but also by the general public \cite{Chang:2017,Lidynia2017,Cavoukian2012}. This is alongside a number of companies trialling the deployment of drones as part of their on-demand services, e.g. Amazon for Prime Delivery \cite{amazondrone2016}.
Individual drones and FoD, therefore, require strong assurances in terms of security, safety and privacy. There are multiple options by which the assurances and countermeasures can be built for FoD. One option is to opt for a set of static policies defined before the FoD commences its mission. This is useful if the FoD operates in a static environment with fixed and predictable behaviour; creating fixed policies for MCOs is an obvious choice here. In reality, however, drones operating in the wild\footnote{Wild: Environment that is not under the control of the drone operators/owners.} may face scenarios that were not considered previously by drone owners and operators. For this reason, in this paper, we put forward the proposal of designing SoDs based on swarm intelligence. All of the MCOs would have a deep foundation in swarm intelligence and the potential to collaboratively learn, evolve and decide the best course of action when operating in the wild, without depending on a ground fleet management system.
\subsubsection{Performance and Energy Consumption}
\label{sec:PerformanceEncergyConsumption}
Drones are resource-constrained pieces of equipment; individual aircraft are heavily impaired by limited processing capabilities and severe battery or fuel constraints. It is widely acknowledged that drone power consumption, whether a thermal or electrical engine is used, is a major issue\cite{serge:DBLP:conf/iros/AbdillaRB15}. These energy constraints influence all the parameters of the drone system, as well as the mission itself. The impact of MCOs on energy management is important: on one hand, the MCOs impact the energy consumption -- for instance, because of encryption algorithms\cite{serge:article} -- but the power management must also take MCOs into account, e.g.\ reserving enough energy to ensure prompt responses to critical external stimuli that require quick (and energy demanding) route changes. Using a swarm is beneficial to ensure that the overall energy burden is shared, e.g. sharing processing loads (see below), to maintain continuous flight and mission success.
Regarding performance in terms of computing power, even though the technology is evolving very quickly, the processors embedded on small drones (which usually constitute swarms) are not the most efficient. It should be noted that this lack of computational power is also linked to the power management issue (see above): the more powerful a processor, the more power it requires. Therefore, to achieve a significant level of performance, load sharing (in addition to highly tuned algorithms) is required. Load can be shared between the drones of the swarm themselves or between the drones and some external system (a bigger drone or even a ground system). For instance, in terms of image processing, mosaicking is often used \cite{serge:doi:10.1117/1.JRS.10.016030}. It consists of taking several photos of a given area and then assembling them to build a global picture that can thereafter be processed depending on the situation at hand. Such a process can be shared among the individual drones of the swarm, dispatching the load across them \cite{serge:unknown,serge:chaumette:hal-01391871}.
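As a toy illustration of such load sharing (an assumed round-robin scheme, not the one used in the cited works), the tiles of a mosaicked image can be dispatched over the drones of the swarm as follows:

```python
def dispatch_tiles(n_tiles, drone_ids):
    """Assign image tiles to drones round-robin; returns {drone_id: [tile indices]}."""
    assignment = {d: [] for d in drone_ids}
    for tile in range(n_tiles):
        assignment[drone_ids[tile % len(drone_ids)]].append(tile)
    return assignment

print(dispatch_tiles(7, ["d1", "d2", "d3"]))
# {'d1': [0, 3, 6], 'd2': [1, 4], 'd3': [2, 5]}
```

A real scheduler would also weigh per-drone battery level and current processing load, but the round-robin split already caps each drone's share at ceil(n_tiles / n_drones) tiles.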
The two challenges described in this section crucially depend on swarm intelligence, which, in turn, impacts power consumption and computational capabilities. All challenges should thus be considered in a holistic approach.
\subsection{Contributions}
\label{sec:Contributions}
This paper makes three main contributions to further the discussion on the management of autonomous and independent FoDs:
\begin{enumerate}
\item A rationale supporting application of drone fleets and potential impact of building a SoD.
\item A conceptual architecture for the SoD, its different variants based on the swarm (enrolment) structures and collaboration models.
\item Finally, charting the open issues that impact SoD in general but specifically the security, safety and privacy of SoD.
\end{enumerate}
\section{Related Work}
\label{sec:RelatedWork}
In this section, we discuss the related literature from three aspects.
\subsection{Swarm Robotics - Experiencing, Learning and Adaptation}
\label{sec:SwarmRobotics_ExperiencingLearningAndAdaptation}
Swarm intelligence is not a new concept for FoD. Existing literature \cite{wei2013agent,purta2013multi,Madey2014,7303086} has already explored different uses of swarm intelligence in the context of FoD, especially in the case of internal swarm communication and route planning. However, in the related literature, it is difficult to find a case where swarm intelligence is proposed for all operations, ranging from flight control to cybersecurity -- as is the case in this paper.
The swarm intelligence paradigm has been used to optimise and control single UAVs:
In \cite{Ross2013}, the authors achieved single-vehicle autonomous path planning by learning from a small number of examples.
In \cite{Wang2016}, three-dimensional path planning for a single drone was achieved using a bat-inspired algorithm to determine suitable points in space, with B-spline curves applied to improve the smoothness of the path.
In \cite{Couceiro2013}, the authors introduced and validated a decentralised architecture for search and rescue missions in ground-based robot groups of different sizes, which considers limited communication with a command centre and employs distributed communication.
In \cite{Pugh2006}, distributed unsupervised learning was demonstrated in a swarm of robots using a particle-swarm optimisation algorithm, accounting for limited communication capabilities amongst members of the swarm but requiring \emph{a priori} knowledge of the terrain.
In \cite{Saska2014}, the authors achieved area coverage for surveillance in a FoD using visual relative localisation to keep formation autonomously.
In \cite{Vasarhelyi2014}, the authors explored the use of the swarm intelligence paradigm to control formation flight and stabilisation through the use of GPS and locally shared information.
In \cite{Madey2014}, the authors investigated the use of a communication middleware and a rule-based system to command and control an otherwise autonomous FoD.
\begin{figure*}[htbp]
\hfill
\centering\raisebox{-0.6\height}{\centering\includegraphics[width=.49\linewidth]{SingleUAV-withSensors.pdf}}
\hfill
\centering\raisebox{-0.5\height}{\centering\includegraphics[width=.49\linewidth]{OneLevelFleet-withSensors.pdf}}
\hfill
\caption{A single drone versus a one-level drone fleet}
\label{fig:single-vs-fleet}
\end{figure*}
\subsection{Security, Safety and Privacy of Fleet of Drones}
\label{sec:SecurityAndPrivacyOfFleetOfDrones}
Since the emergence of drones, different papers have proposed solutions to secure them and their communications: either (a) a lone drone communicating with a GCS (Ground Control Station) or with other devices, or (b) communication inside a FoD.
As stated in~\cite{AkramTrustCom2016d}, the attack vectors to consider are either the capture of a drone to mount physical or logical attacks, or attacks through its communication capabilities -- all of which might be conducted by a highly sophisticated adversary.
\subsubsection{Individual Drones}
For individual drones controlled by a GCS, the authors of~\cite{Steinmann2016ICNS} proposed a protocol to secure communication and to ensure that sensing data is not easily accessible to an attacker. While this proposal provides the deniability property, which helps to deal with privacy issues (i.e. a GCS is not able to prove to other parties from which drone a message was received), it is expensive to implement in terms of computation and thus energy consumption.
In a similar context, a more efficient proposal, relying only on lightweight primitives, was made by the authors of~\cite{Blazy2017ICNS} to establish a secure channel protocol when the GCS is in the communication range of the drone. Their proposal ensures confidentiality and privacy protection of collected data; if a drone is captured, the data cannot be accessed by an adversary.
In~\cite{Won2015AsiaCCS}, the authors proposed a secure communication protocol between drones and smart objects based on an efficient Certificateless Signcryption Tag Key Encapsulation Mechanism using ECC, which addresses the issue of drone capture.
In~\cite{mtita2017dasc}, the authors used drones to perform efficient inventory and search operations over RFID-tagged assets using the lightweight, secure and privacy-preserving serverless protocols defined in~\cite{mtita2016efficient}, while guaranteeing the privacy of the tags and of the secrets when a drone is captured (i.e. compromised).
In~\cite{Akram-ICNS2017a}, the authors proposed a secure and trusted channel protocol to enable communication between a drone and the sensors of Aircraft Wireless Networks (AWNs), retrieving collected data while ensuring its confidentiality.
\subsubsection{Drones in Fleet}
In the HAMSTER (HeAlthy, Mobility and Security based data communication archiTEctuRe) solution for unmanned vehicles~\cite{Pigatto2016}, the authors presented a security framework and cryptographic schemes without specifically discussing secure channel protocols or the issue of captured drones.
In~\cite{Maxa2016DASC}, the authors proposed a secure reactive routing protocol, called SUAP, for a fleet of drones. The proposal is effective at detecting and preventing routing attacks, e.g. wormhole and blackhole attacks, but it considers neither an adversary with a high attack potential nor the issue of captured drones.
In~\cite{AkramTrustCom2016d}, the authors proposed to address an adversary with a high attack potential by adding a secure element to each drone of the fleet. Based on this architecture, in~\cite{Akram2017wistp} they proposed a secure and trusted channel protocol to establish a secure channel between communicating drones and to provide security assurance that each drone is in a secure and trusted state.
\subsection{Performance and Energy Consumption}
\label{sec:PerformanceAndEnergyConsumptionStateOfTheArt}
Energy management is addressed at two different levels: refuelling (fuel or batteries) capabilities and in-flight/mission power consumption optimisation. Regarding refuelling,
research and experiments are being conducted to provide mechanisms to reload/refuel during flight. Standard avionic procedures are being used or adapted, but more original approaches are also explored, such as solar panels, laser power beaming or ad hoc hosts \cite{serge:abebe2017drone}. Regarding in-flight power consumption optimisation, the drone's (internal) supervision algorithms (those that control the sensors, the IMU, the autopilot, \emph{etc.}) are studied, as well as the algorithms used to implement the missions. For instance, flight path management, sense \& avoid, \emph{etc.}, can highly impact power consumption.
Computational load sharing is addressed mainly by the computing community rather than the electronics community. Still, strong relationships are required with the situation-management community so as to determine the information that is really required for the decision process \cite{serge:cummings2007operator}, thereby avoiding the burden, and thus the processing load, of handling potentially useless data.
It should also be noted that managing and organising a swarm induces an overhead in terms of computational power and energy consumption, since it requires additional communication and management of swarm-related data (location of drones, proximity, RSSI, \emph{etc.}). Even though power consumption has been addressed in some work (see for instance \cite{serge:DBLP:conf/syscon/BrustAT16}), it remains an issue to consider. Moreover, security- and safety-related features also impact power consumption and computational load, and it is necessary to take these into account.
\section{Fleet of Drones - Why?}
\label{sec:FleetOfDrones}
\begin{figure*}[htbp]
\centering
\includegraphics[width=0.90\linewidth]{MultiLevelFleets-withSensors.pdf}
\caption{Multi-level drone fleets}
\label{fig:multi-level-fleets}
\end{figure*}
Why should a FoD be preferred to a single, powerful drone? As illustrated in Fig.~\ref{fig:single-vs-fleet}, one of the first advantages of a fleet is that it is composed of several smaller drones that can be equipped with different sensors or other equipment, providing redundancy that helps to tolerate a certain degree of failure. In addition, a multitude of drones can cover a larger geographic area than a single one, and in a smarter way: only the drones whose capabilities are useful to the mission need be sent to specific areas while the other drones perform other tasks, whereas a single drone would have to attend each location itself. A FoD can also take advantage of the network its members form to maintain communication with the GCS when obstacles lie on the path between some drones and the GCS, by simply relaying messages through neighbouring drones that are able to reach the GCS.
Small drones are also interesting because they are stealthier and less noisy than a big drone, which is of interest for both military and civilian applications.
Last, but not least, small drones can be less expensive than a large drone due to their mass production, and they can be safer for civilian applications since their lower weight reduces the damage in the event of a crash.
However, large drones and their smaller counterparts should not be set in opposition, since they can be used in a complementary way; for example, in a multi-level fleet, larger drones can serve as relays (they can also be seen as cluster heads) for the smaller drones, enabling communication with a GCS or with other FoDs. In Figure~\ref{fig:multi-level-fleets}, only a one-level fleet is depicted.
It is worth noting that the FoDs presented so far always receive commands from the GCS. Of course, they can have a certain degree of autonomy, but a standalone swarm of drones, illustrated in Figure~\ref{fig:drone-swarm}, acting like a swarm of animals or insects, can be regarded as highly desirable by researchers and operators. Indeed, once the mission is given, a SoD no longer needs to be driven by a GCS, making it autonomous and stealthier.
\begin{figure}[htbp]
\includegraphics[width=\linewidth]{StandaloneUAVSwarm-withSensors.pdf}
\caption{A standalone swarm of drones}
\label{fig:drone-swarm}
\end{figure}
\subsection{Fleet of Drones - Commercial, Civilian and Military Need}
\label{sec:FleetOfDronesNeed}
Depending on the end-users, different architectures of FoDs can be envisioned.
In a commercial context, FoDs can be shared by several stakeholders to decrease the cost of each having its own fleet. For instance, one can imagine FoDs spread over countries performing missions on demand for stakeholders who pay according to their usage. The missions can be of different types: delivery (assets bought in online shops, pizza, drugs, etc.), monitoring of fields or herds for the agriculture sector, or surveillance of buildings. The main requirement is good cost effectiveness: the fleets should be shareable between several stakeholders, with fair usage and in a dynamic way, so that drones of one fleet can join drones of other fleets to achieve the objectives of the missions. For such an applicative context, a certain degree of autonomy can be useful, but essentially a WAN communication infrastructure, e.g. an LTE network, will be used to control the FoDs.
In a civilian context, authorities require safe and reliable FoDs for rescue operations (fire detection in forests, searching for missing people after an avalanche or an earthquake, etc.), for smart-city scenarios (detection of traffic offences, pursuit of criminals, etc.), and to monitor major infrastructures (nuclear plants, oil and gas pipelines, power lines, water reserves, airports, railways, etc.) and borders. In this context, most applications can use a WAN communication infrastructure to control the FoD, in addition to the drone-to-drone communication inside the FoD, to exchange data and potentially control commands to achieve the mission. One scenario where communication infrastructures might no longer exist is after a disaster (earthquake, tsunami, hurricane, large-scale terrorist attack) that has destroyed them. Authorities can also use FoDs to fight against a malevolent drone flying in a forbidden area by capturing it.
In a military context, the requirements are stealth, protection of mission data (flight plans) and of the data collected (positions of interest), and adaptability to adverse conditions when deployed in the field. For stealth, SoDs are best, since they do not use wide-range communication, which minimises the potential for detection by adversaries. A SoD adapts to adverse conditions, and to the failure or destruction of some drones, in order to fulfil the mission. In any case, a FoD or SoD is more resilient than a single drone for most applications in this context.
It is worth noting that security and privacy protection are requirements shared by all contexts.
\subsection{Use Cases}
\label{sec:UseCases}
This section presents three use cases for FoDs and SoDs.
\subsubsection{Rescue Operation in Remote Areas}
\label{sec:RescueOperationInRemoteAreas}
FoDs and SoDs can be used in several rescue operations in remote areas where an event -- an earthquake, for instance -- has made access difficult or dangerous for emergency services. They can also be used to set up a communication network dedicated to emergency staff for data exchange, or to restore public networks (like 3G/4G) to enable victims and other people to communicate with their families or the emergency services.
A FoD can also be used after an avalanche to cover a wider area than human rescuers when searching for victims. In addition, such small drones can fly closer to the ground than a big drone, which can be more efficient at detecting signs of life. For instance, when looking for people at sea after a plane crash, FoDs can cover a larger area than conventional means (helicopters, planes and ships). For such scenarios, energy autonomy is an issue to deal with.
\subsubsection{Facility Surveillance and Fault Detection}
\label{sec:FacilitySurveillanceAndFaultDetection}
As mentioned previously, FoDs can be used for the surveillance of wide areas to detect abnormal events.
For instance, drones equipped with multiple sensors (e.g. thermal) can detect fires in natural parks or forests by covering a wide area, and they can also fight a fire with a dedicated embedded payload to extinguish the flames before they grow.
For buildings and any large infrastructure requiring high security, FoDs can provide an additional security level by adding a third dimension to the defence against intrusion or degradation: the drones help to detect intruders using their embedded sensors and cameras. They can also provide a fourth dimension by fighting intruders promptly, potentially at the price of their own destruction. For instance, shortly before the time of writing, a drone costing a few hundred dollars landed on the deck of HMS Queen Elizabeth without anyone raising the alarm. One can imagine that one or several drones of a FoD protecting this ship would have tried to capture and/or destroy the intruding drone, sacrificing themselves if required.
For border surveillance, FoDs can improve cost efficiency by avoiding continuous human patrols or wall construction.
\subsubsection{Data Collection}
\label{sec:DataCollection}
FoDs can also help to collect data from wireless sensor network (WSN) nodes that are not permanently connected to a sink with an internet connection (for instance, a WSN requiring stealth because it operates on an adversary's terrain, or individual sensors positioned on the ground that do not form a network in order to save energy). In such applications, the drones of the fleet act as mobile sinks to retrieve the collected data. This kind of use of FoDs can arise in smart-city scenarios where, to avoid crowding the radio-frequency spectrum, sensor nodes disseminated in the city may only emit with very low power, requiring the recipient to be very close to collect the data -- which such small drones can do.
For data collection tasks, the drones of a fleet can be used for inventories of RFID-tagged assets if they are equipped with RFID readers, but also of livestock in wide areas using cameras.
Finally, a basic and common data collection scenario with drones, which can be extended to FoDs, is ground image capture for different purposes, such as military (to find points of interest, e.g. enemy positions) and agriculture (to identify areas requiring watering, or those requiring treatment against a disease).
\section{Swarm of Drones - Technology Perspective}
\label{sec:SwarmOfDronesTechnology}
In this section, we discuss a conceptual architecture that can be deployed for a SoD.
\subsection{Generic Architecture for SoD}
\label{sec:PotentialArchitecturesOfFleets}
The conceptual architecture shows the set of operations in two different contexts: 1) how they are stacked on a single drone, i.e. operations that are specific (or individual) to a drone and how they relate to the drone's other operations, and 2) how other operations are actually collaborative, with the swarm deciding rather than individual drones. Figure~\ref{fig:ConceptionArchitectureofSoD} shows the conceptual architecture. The architecture is divided into three layers, with some duplication across layers; the rationale for these is discussed in the subsequent sections.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{ConceptualArchitecture.pdf}
\caption{Conceptual architecture of SoD}
\label{fig:ConceptionArchitectureofSoD}
\end{figure}
\subsubsection{Drone Abstraction}
\label{sec:ApplicationAbstraction}
This abstraction layer is focused on single-drone operations, preserving the drone as an individual entity, and includes:
\begin{enumerate}[label=D\arabic*]
\item Flight Management: This operation ensures that the drone remains in the air as required by the mission. The flight management features can be semi-static or partly dependent on \ref{itm:F-FM}, and can only be modified, in unique circumstances, if required by \ref{itm:S-CDF}. \label{itm:D-FM}
\item Navigation: This involves airborne movements within the respective SoD, in relation to the external environment, the flight route, the airspace authority's injunctions and \ref{itm:D-ODA}. Each drone has a static set of rules for its navigation activity; however, these can be superseded by the collaborative decision-making process of the SoD. \label{itm:D-Nav}
\item Power and Performance Efficiency: This operation continually monitors the drone's power and performance metrics and, at set intervals, notifies the swarm. If the power and performance efficiency has severely degraded, the potentially conflicting self-preservation (\ref{itm:D-SP}) and SoD preferences (\ref{itm:S-SSP} and \ref{itm:S-CDF}) kick in; based on the severity and the mission requirements, the drone can either disengage from the mission or take the altruistic route. \label{itm:D-FE}
\item Service Level Maintenance: Similar to \ref{itm:D-FE}, this operation looks at the entirety of the services a drone offers as part of the SoD. If there are delays or difficulties in fulfilling the obligations required by the SoD, it notifies the SoD (\ref{itm:S-MA} and \ref{itm:S-CL}) so that adequate mitigation can be applied via \ref{itm:F-PPM}, \ref{itm:S-CE} and \ref{itm:S-CDF}. \label{itm:D-SLM}
\item Object Detection and Avoidance: This operation has two aspects: first, to detect an obstacle approaching the flight path, and second, to avoid collision. These actions are dual in nature due to the time criticality of this operation. In the first option, the decision might be taken by a drone individually, but even in this case it would notify the rest of the swarm. In the second option, the obstacle is detected not by the drone itself but by another swarm member, and the drone takes adequate measures to avoid it pre-emptively. \label{itm:D-ODA}
\item Individual Mission Objectives: At the time of swarm construction (section \ref{sec:FleetConstruction}), the SoD owner uploads the objectives of the mission. These objectives are specified at two levels: individual drones and the fleet (\ref{itm:F-MO}). This information details the criticality of the mission and the responsibilities of individual drones and of the fleet as a whole, and it is used by \ref{itm:D-SP}, \ref{itm:S-SSP} and \ref{itm:S-MA}. \label{itm:D-IMO}
\item Security, Safety and Privacy Measures: This operation monitors the security, safety and privacy features at the level of the individual drone. The baseline rule set can be pre-defined, either based solely on the SoD owner's design or as an autonomously evolved formulation (from a baseline) built on the collaborative knowledge of all SoD flights (carried out by the respective SoD or by other SoDs in the past). If a situation arises that an individual drone has not encountered before, it can raise it to the SoD to take a collaborative decision (\ref{itm:S-CDF}). \label{itm:D-SSPM}
\item Self-Preservation: Depending upon the criticality of the mission, the role of the drone and the analysis from \ref{itm:S-MA}, a drone might opt for a selfish attitude, preserving its operational integrity over the requirements of the SoD, or opt for the altruistic approach. In the latter, an individual drone might decide to sacrifice its operational integrity for the success of the mission or, in corner conditions, to uphold the ethical principles (\ref{itm:S-EP}). \label{itm:D-SP}
\end{enumerate}
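As an illustration of how the self-preservation operation (D8) might weigh its options, consider the following sketch; the thresholds, names and return values are hypothetical assumptions of ours, not part of the architecture:

```python
def self_preservation_decision(battery_pct, mission_criticality, is_essential_role):
    """Illustrative D8 rule: returns 'continue', 'disengage' or 'altruism'.

    battery_pct:         remaining battery, 0-100 (threshold of 30 is made up).
    mission_criticality: 0.0 (routine) .. 1.0 (critical).
    is_essential_role:   True if no other swarm member can cover this role.
    """
    if battery_pct > 30:
        return "continue"    # healthy: keep flying as planned
    if mission_criticality >= 0.8 and is_essential_role:
        return "altruism"    # sacrifice operational integrity for the mission
    return "disengage"       # leave the mission and preserve the drone

# A degraded drone in an essential role on a critical mission opts for altruism.
decision = self_preservation_decision(battery_pct=20, mission_criticality=0.9,
                                      is_essential_role=True)
```

In a real deployment this rule would of course be informed by the mission assessment service rather than fixed thresholds.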
\subsubsection{Fleet Abstraction}
\label{sec:Task/MissionAbstraction}
This abstraction layer bridges the decisions taken by individual drones on their own and the course of action stipulated in the mission brief from the SoD owner, with input from the swarm abstraction layer in case an unexpected situation is encountered in the wild.
\begin{enumerate}[label=F\arabic*]
\item Flight Management: The operation that manages the flight operations of the SoD as per the pre-defined mission brief. The flight management operation is mission focused and pragmatic: depending on the situation, it opts either for the pre-defined plan or for \ref{itm:S-CDF}.\label{itm:F-FM}
\item Airspace Policy Management: This function of the SoD remains in constant communication with the airspace controller and with other drones operating in the same space, in order to comply with the regulations stipulated in the respective region. When making decisions, whether by an individual drone or by the SoD as a whole, the SoD consults this function and abides by the airspace regulations.\label{itm:F-APM}
\item Navigation: This function manages the airborne movements of the fleet as a whole, based on \ref{itm:F-APM}, and follows the feeds of \ref{itm:F-FRM}. \label{itm:F-NV}
\item Flight Route Management: The route planner for the fleet as a whole, triggered by either \ref{itm:F-APM} or \ref{itm:F-CDA}, while remaining within the airspace regulations (\ref{itm:F-APM}).\label{itm:F-FRM}
\item Object Detection and Avoidance: Logically, at the fleet abstraction layer this is part of \ref{itm:F-FRM}, but it is discussed separately. It depends on detection by individual drones, notification of the population of the SoD, and the formulation of a potential fleet-level avoidance strategy, based on the analysis results of \ref{itm:S-CDF}.\label{itm:F-ODA}
\item Mission Objectives: Manages the fleet-wide mission objectives that are configured at the point of swarm construction (discussed later). This function assists multiple functions during a normal flight; however, in unique situations the swarm abstraction layer takes over to make adequate modifications for the successful completion, or the abortion, of the mission.\label{itm:F-MO}
\item Congestion Detection and Avoidance: Based on the drones' sensors and/or external feeds such as airspace traffic broadcasts, this function identifies potential congestion on the selected route of the mission. Upon detection, it can notify the flight management (\ref{itm:F-FM}) to take adequate action. \label{itm:F-CDA}
\item Secure Communication: Manages the set-up and maintenance of secure communication channels between drones in the SoD and with external entities. \label{itm:F-SC}
\item Trust Establishment and Verification: Depending upon the type of SoD (section \ref{sec:TypesOfFleets}), this function establishes the trust relationships between drones in the SoD and with external entities. \label{itm:F-TEV}
\item Policy Consolidation and Harmonisation: Depending upon the type of SoD (section \ref{sec:TypesOfFleets}), either all drones abide by a single policy (covering airspace regulations, ethical principles and swarm participation guidelines) or they have different, sometimes conflicting, policies. When a drone enrols into a SoD, this operation verifies whether the enrolling drone is compatible with the baseline policy of the SoD. \label{itm:F-PCH}
\item Power and Performance Management: Computation and power are two scarce resources for the SoD. This operation continuously monitors the state of individual drones and performs load balancing to achieve the maximum contribution from each member of the SoD.\label{itm:F-PPM}
\end{enumerate}
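The load balancing performed by the power and performance management function (F11) could, for example, follow a simple greedy heuristic; this sketch is illustrative only, with made-up names and energy units:

```python
def balance_tasks(tasks, drones):
    """Greedy load balancing: assign each task (cost in energy units) to the
    drone with the most remaining battery, depleting it as we go.

    tasks:  list of energy costs.
    drones: dict of drone_id -> remaining battery (energy units).
    Returns a dict of drone_id -> list of assigned task costs.
    """
    remaining = dict(drones)
    assignment = {d: [] for d in drones}
    for cost in sorted(tasks, reverse=True):         # place heavy tasks first
        best = max(remaining, key=remaining.get)     # most battery left
        assignment[best].append(cost)
        remaining[best] -= cost
    return assignment

# Two drones share four tasks; the better-charged drone absorbs more work.
plan = balance_tasks([5, 3, 2, 2], {"d1": 10, "d2": 6})
```

A production scheduler would also weigh communication cost and role criticality, but the greedy skeleton conveys the idea of contribution-proportional assignment.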
\begin{figure*}[htbp]
\centering
\centering\includegraphics[width=0.90\linewidth]{SwarmConstruction.pdf}
\caption{Swarm of drones construction process -- pre- and post-mission activities.}
\label{fig:SoDConstruction}
\end{figure*}
\subsubsection{Swarm Abstraction}
\label{sec:SwarmAbstraction}
This abstraction layer is the foundation of the SoD proposal. The services in this layer, like those of the other abstraction layers, run continuously on the individual drones. This layer has a baseline knowledge: a collection of knowledge accumulated over all the SoD flights managed by the SoD owner/operator. Therefore, the learning, evaluation and decision formulation performed during a single mission become part of the collaborative knowledge to improve all future missions.
\begin{enumerate}[label=S\arabic*]
\item Swarm Community Management: This service manages the drones participating in the SoD, their contributions and also detects any potential free-riders. \label{itm:S-SCM}
\item Security and Privacy: Deals with the unique situations encountered by the SoD that are specific to security and privacy preservation.\label{itm:S-SP}
\item Safety and Self-Preservation: Similarly to \ref{itm:S-SP}, this service deals with the safety and self-preservation of individual drones and of the SoD as a whole. \label{itm:S-SSP}
\item Ethical Principles: The set of ethical principles defined by the drones' owner. The \ref{itm:S-CDF} service takes these principles into account when making decisions.\label{itm:S-EP}
\item Mission Assessment: The swarm health feeds collected by \ref{itm:S-SCM} are used by the mission assessment service to predict the failure or success of the whole mission. This prediction is useful for the challenging decision of whether to continue the mission or abort it. Furthermore, this analysis feeds into the choice between aborting the mission and taking an altruistic\footnote{Altruistic decision: sacrificing a few members of the SoD to achieve the overall mission objectives.} decision. \label{itm:S-MA}
\item Collaborative Learning: The core module that continuously learns from the different feeds being shared in the SoD. It should be emphasised that the learning process is also collaborative, as each drone might not have the resources to perform it entirely by itself. \label{itm:S-CL}
\item Collaborative Evaluation: Based on the learning, the SoD evaluates a situation collaboratively to see whether a precedent exists in the collaborative knowledge; if not, the collective makes a decision autonomously (\ref{itm:S-CDF}). \label{itm:S-CE}
\item Collaborative Decision Formulation: The decision formulation service, which requires collaboration from the SoD participants to reach a decision, either based on existing knowledge or via a trial-and-error strategy. The decision taken and its success are recorded along with the situation parameters, for post-mission evaluation and inclusion in the collaborative knowledge management (further discussed in section \ref{sec:FleetConstruction}). \label{itm:S-CDF}
\item Collaborative Knowledge Management: One of the objectives of the SoD is to accumulate the knowledge from every mission into a single collaborative knowledge base that can then be part of every subsequent mission -- exploring as many permutations as possible of the scenarios a SoD can encounter in the field.\label{itm:S-CKM}
\end{enumerate}
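As a toy illustration of collaborative decision formulation (S8), a decision could be reached through a confidence-weighted vote over the proposals of individual drones; the option names and weights below are invented for the example:

```python
from collections import defaultdict

def collaborative_decision(votes):
    """Confidence-weighted vote.

    votes: list of (drone_id, option, confidence) tuples.
    Returns the option with the highest total confidence.
    """
    totals = defaultdict(float)
    for _drone, option, confidence in votes:
        totals[option] += confidence
    return max(totals, key=totals.get)

# Four drones weigh in on a route change; "reroute" wins with 1.2 vs 1.0.
decision = collaborative_decision([
    ("d1", "reroute", 0.9),
    ("d2", "continue", 0.4),
    ("d3", "reroute", 0.3),
    ("d4", "continue", 0.6),
])
```

In the full architecture, the weights would come from the collaborative learning and evaluation services rather than being fixed per drone.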
\subsection{Fleet Construction}
\label{sec:FleetConstruction}
Based on the conceptual model, the first step is the formation of the SoD at the pre-mission stage and its deformation at the post-mission stage. The process of fleet construction therefore consists of two parts, pre-mission and post-mission, which are discussed in this section.
At the pre-mission stage, the fleet construction process begins with the formulation of a mission with a set of objectives. The mission control unit generates a mission brief that includes the mission objectives, airspace regulations, ethical principles, security and privacy policies, organisation commitments, baseline configuration (for the first mission), and collaborative knowledge. The mission brief is then communicated to the ground flight management system (GFMS). This system selects the drones from the inventory that will participate in the mission, based on the mission requirements, drone availability and organisation preferences. Once the set of drones is selected, the GFMS uploads the mission brief to them. The selected drones then establish secure communication channels among themselves within the SoD. Once all drones are connected and the GFMS has given permission to commence, the SoD initiates the mission.
After the completion of the mission, upon the return of the SoD participants to the base, the GFMS connects with each drone to download the mission logs, the learning/evaluation matrix and any material that can contribute to the collaborative knowledge. The GFMS communicates this information to the mission control centre, which analyses the mission debriefing information and improves the collaborative knowledge. Figure~\ref{fig:SoDConstruction} shows the fleet construction process with both pre- and post-mission activities.
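For illustration, the mission brief exchanged at the pre-mission stage could be represented by a simple structure; the field names below mirror the items listed above, but the type and example values are otherwise our own assumption:

```python
from dataclasses import dataclass, field

@dataclass
class MissionBrief:
    """Illustrative pre-mission brief, mirroring the items in the text."""
    objectives: list
    airspace_regulations: list
    ethical_principles: list
    security_policies: list
    baseline_configuration: dict = field(default_factory=dict)
    collaborative_knowledge: dict = field(default_factory=dict)

# A hypothetical brief as the GFMS might upload it to each selected drone.
brief = MissionBrief(
    objectives=["survey sector A"],
    airspace_regulations=["max altitude 120 m"],
    ethical_principles=["no overflight of crowds"],
    security_policies=["AES-256 secured channel"],
)
```

Serialising such a structure per drone would also give the post-mission debriefing a natural place to attach logs and learning material.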
\subsection{Types of Swarm of Drones}
\label{sec:TypesOfFleets}
In this section, we discuss three types of SoDs that can be potentially deployed depending upon the target environment and situation.
\begin{figure*}[htbp]
\hfill
\centering\raisebox{-0.4\height}{\centering\includegraphics[width=.49\linewidth]{StaticDronesSwarm.pdf}}
\hfill
\centering\raisebox{-0.5\height}{\centering\includegraphics[width=.49\linewidth]{DynamicDronesSwarm.pdf}}
\hfill
\caption{Static swarm-of-drones versus dynamic swarm-of-drones}
\label{fig:static-vs-dynamic}
\end{figure*}
\subsubsection{Static SoD}
\label{sec:Static}
The most basic type of SoD is the static SoD. In this formation, the members of the swarm are pre-selected at the pre-mission stage. During the flight, no new members can enrol, as the collective is locked at the point of mission commencement. Secure communication, mutual trust and collaboration are set up by the GFMS of the SoD owner. Any drone, whether it belongs to the respective SoD owner or not, is treated as an entity external to the SoD during the flight. Figure~\ref{fig:static-vs-dynamic} shows the static SoD in comparison with the dynamic SoD discussed in the next section.
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{HybridDronesSwarm.pdf}
\caption{A hybrid swarm of drones}
\label{fig:hybrid-drone-swarm}
\end{figure}
\subsubsection{Dynamic SoD}
\label{sec:Dynamic}
In contrast to a static SoD, a dynamic SoD is open to the inclusion of new members, along with existing members leaving the swarm, at any point in time: pre-mission and/or during the mission. Such a SoD can either be a closed dynamic SoD, which only allows the enrolment of new drones from the same organisation, or an open dynamic SoD, which allows the enrolment of drones from third-party organisations. In either case, the challenges of secure communication, mutual trust and collaboration are unique in comparison to a static SoD.
\subsubsection{Hybrid SoD}
\label{sec:Hybrid}
This variant of SoD combines the static and dynamic SoDs into a single collaborative unit. At its core is a static SoD that behaves like one in all of its operations. This core, however, is open to allowing other drones to join the swarm, thus creating an extended swarm that behaves like a dynamic SoD. Note that the core swarm takes high priority in any collaborative learning, evaluation and decision. The extended swarm can be viewed as drones that join the static SoD and provide a service to the core swarm in return for some fair exchange. Members of the extended swarm can leave the collective at any stage. Figure~\ref{fig:hybrid-drone-swarm} shows the hybrid SoD construction.
\subsection{SoD Collaboration Models}
\label{sec:TypesOfSwarmDronesCollaborationModels}
In this section we discuss three variants of collaboration models for the SoD.
\begin{enumerate}
\item Centralised: In this collaboration model, there is a powerful master drone in the SoD that collects all the feeds from individual drones and assists the swarm in computing and agreeing on decisions.
\item Decentralised: In this model, there is no single master drone, but instead a small subset of powerful drones that collect the feeds from their neighbouring drones; these powerful drones then perform the collaborative learning, evaluation and decision making.
\item Distributed (Peer-Oriented): In this model, each participant of the SoD has a more or less equal role in all collaborative learning, evaluation and decision-making processes. Note that the activity load is distributed among the population based on the drones' individual capabilities, their current performance and power resources, and the criticality of their unique features to the overall mission.
\end{enumerate}
\section{Open Challenges of Swarm of Drones}
\label{sec:OpenChallenges/Problems}
In this section, we select a short list of open problems of the SoD. They are grouped into two categories: 1) Security, Privacy and Trust related, and 2) Performance and Energy Consumption related. The implication and importance of these open issues are listed in Table~\ref{tab:ImpOpenChallengeCombination}, represented by the number of $\blacksquare$ symbols; the higher the number, the more crucial the open issue is to the success of the SoD.
\subsection{Security, Privacy and Trust Related}
\label{sec:SecurityPrivacyAndTrustRelated}
\begin{enumerate}[label=SP\arabic*]
\item Swarm Authentication, Attestation and Secure Communication: SoDs have to negotiate with external entities that include the airspace controllers and other UAVs (including SoDs). Besides this, for dynamic and hybrid types of SoDs, the inclusion of new drones and de-listing of ones that leave the swarm is another challenge. This open issue does not impact the static SoD as much as it does the other two types.
\item Fair Exchange Services Architecture for Swarm of Drones: When drones participate in a group supporting swarm intelligence-based mechanisms to create a SoD, individual drones should find this synergy worthwhile. Participation should not tax them to the extent that a solo mission would be comparatively less costly, nor should it have a worse impact on the performance and energy of the drone than not participating in the SoD at all. Fair exchange becomes even more relevant in the case of dynamic and hybrid SoDs: as swarms would potentially be changing during flight, the benefits of joining a swarm have to be clear and verifiable -- hence the use of fair-exchange mechanisms.
\item Collaborated Cybersecurity Deterrence Mechanism: This open issue concerns how swarm intelligence can be deployed to provide a wide range of countermeasures -- protecting the individual drone and the SoD as a whole. This is still an open issue and potentially the most crucial element of the SoD proposal.
\item Detecting the Mole and Free-Riders in the Swarm: This open issue relates to SP2; however, it focuses on detecting free-riders in the SoD. A free-rider is a drone in the SoD that does not contribute its fair share and becomes a burden on the rest of the drones in the SoD.
\end{enumerate}
\subsection{Performance and Energy Consumption}
\label{sec:PerformanceAndEnergyConsumptionRelatedChallenges}
\begin{enumerate}[label=PE\arabic*]
\item Balancing the Cybersecurity with Performance and Energy Consumption: we have seen in section \ref{sec:PerformanceEncergyConsumption} that the computational power and the energy consumption of a drone are highly impacted by several factors. Among these, it is clear that MCOs (including critical event response capabilities), algorithms, and data management (ciphering, for instance) are of utmost importance. All these aspects must thus be balanced against the cybersecurity issues in a holistic approach.
\item Graceful Degradation -- Altruism versus Selfish-Survival: Graceful degradation is of course part of the intrinsic management system of each drone.
Still, when drones are combined as a swarm, it should no longer be considered only at the level of a single drone but at the level of the swarm as a whole. Indeed, depending on the mission, it must be decided whether it is more important for each drone to save its own energy/computational power (selfish approach) or whether cooperating (and thus sharing energy consumption/computational load) is more appropriate to ensure the success of the mission (altruist approach).
\end{enumerate}
\begin{table}[tp]
\centering
\caption{Importance of Open Challenge/Problem to Combination of SoD Types and Collaboration Models.}
\label{tab:ImpOpenChallengeCombination}
\resizebox{0.968\columnwidth}{!}{%
\begin{tabular}{@{}ccccccc@{}}
\toprule
& \textbf{SP1} & \textbf{SP2} & \textbf{SP3} & \textbf{SP4} & \textbf{PE1} & \textbf{PE2} \\ \midrule
\multicolumn{7}{c}{\cellcolor[HTML]{EFEFEF}\textbf{Static SoD}} \\
\multicolumn{1}{l}{\textbf{Centralised}} & $\blacksquare\bl\square\sq\square$ & $\blacksquare\square\sq\square\sq$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\square\sq\square\sq$& $\blacksquare\bl\blacksquare\square\sq$& $\blacksquare\bl\blacksquare\bl\square$\\
\multicolumn{1}{l}{\textbf{Decentralised}} & $\blacksquare\bl\blacksquare\square\sq$ &$\blacksquare\square\sq\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\square\sq\square\sq$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\multicolumn{1}{l}{\textbf{Distributed}} & $\blacksquare\bl\blacksquare\square\sq$ & $\blacksquare\square\sq\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$& $\blacksquare\square\sq\square\sq$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\multicolumn{7}{c}{\cellcolor[HTML]{EFEFEF}\textbf{Dynamic SoD}} \\
\multicolumn{1}{l}{\textbf{Centralised}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\square\sq\square$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\square$ \\
\multicolumn{1}{l}{\textbf{Decentralised}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\multicolumn{1}{l}{\textbf{Distributed}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\multicolumn{7}{c}{\cellcolor[HTML]{EFEFEF}\textbf{Hybrid SoD}} \\
\multicolumn{1}{l}{\textbf{Centralised}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\square\sq\square$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\bl\square$ \\
\multicolumn{1}{l}{\textbf{Decentralised}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\multicolumn{1}{l}{\textbf{Distributed}} & $\blacksquare\bl\blacksquare\bl\square$ & $\blacksquare\bl\blacksquare\square\sq$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ & $\blacksquare\bl\blacksquare\bl\blacksquare$ \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Conclusion}
\label{sec:Conclusion}
The potential for having an independent and autonomous set of drones, whether we call them FoD or SoD, is gradually becoming necessary, especially with the increased complexities of making real-time decisions by individual or fleets of drones in an overcrowded and regulated airspace. For such an eventuality, the application of swarm intelligence in the UAV domain is only a natural progression of the field. In this paper, we put forward the rationale for drone fleets, the need for them to be independent and autonomous in the wild, and the application of swarm intelligence. We also explained a conceptual architecture that integrates swarm intelligence as a core function: rather than being merely restricted to a single function like traffic management, it manages and controls a wide range of functions and decisions at the individual drone and fleet level. To support this conceptual architecture, we have also listed different types and collaboration models of SoD, along with open issues whose solutions are crucial to the success of the application of swarm intelligence as a core function of the FoDs.
\bibliographystyle{IEEEtran}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 958 |
Q: ListView's equivalent to Column's mainAxisSize in Flutter I'm looking to clean up my widget tree. To replicate a ListView, I currently use Padding > SingleChildScrollView > Column with a MainAxisSize.min property. I would like to use ListView instead, but I run into an issue with buttons: the buttons always use the max width (the entire screen width) when I want to specify a size (e.g. 1/3 of the screen width). I have to use those three widgets to be able to size a button properly, unless there's a widget that I don't know of that would help me.
Here's an example with the ListView widget:
And an example without a ListView widget:
A: Try wrapping your button with a Center() widget
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 6,686 |
Q: TFIDF separate for each label Using TfidfVectorizer (sklearn), how can I obtain a word ranking based on tf-idf score for each label separately? I want the word frequency for each label (positive and negative).
relevant code:
vectorizer = TfidfVectorizer(sublinear_tf=True, max_df=0.5,stop_words='english',use_idf=True, ngram_range =(1,1))
features_train = vectorizer.fit_transform(features_train).todense()
features_test = vectorizer.transform(features_test).todense()
feature_names = vectorizer.get_feature_names()
for i in range(len(features_test)):
    first_document_vector = features_test[i]
    df_t = pd.DataFrame(first_document_vector.T, index=feature_names, columns=["tfidf"])
    df_t.sort_values(by=["tfidf"], ascending=False).head(50)
A: This will give you positive, neutral, and negative sentiment analysis for each row of comments in a field of a dataframe. There is a lot of preprocessing code, to get things cleaned up, filter out stop-words, do some basic charting, etc.
import pickle
import re
import nltk
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from nltk.corpus import stopwords
df = pd.read_csv('C:\\your_path\\test_dataset.csv')
print(df.shape)
# let's experiment with some sentiment analysis concepts
# first we need to clean up the stuff in the independent field of the DF we are workign with
df['body'] = df[['body']].astype(str)
df['review_text'] = df[['review_text']].astype(str)
df['body'] = df['body'].str.replace(r'\d+', '')
df['review_text'] = df['review_text'].str.replace(r'\d+', '')
# get rid of special characters
df['body'] = df['body'].str.replace(r'[^\w\s]+', '')
df['review_text'] = df['review_text'].str.replace(r'[^\w\s]+', '')
# get rid fo double spaces
df['body'] = df['body'].str.replace(r'\^[a-zA-Z]\s+', '')
df['review_text'] = df['review_text'].str.replace(r'\^[a-zA-Z]\s+', '')
# convert all case to lower
df['body'] = df['body'].str.lower()
df['review_text'] = df['review_text'].str.lower()
# It looks like the language in body and review_text is very similar (2 fields in dataframe). let's check how closely they match...
# seems like the tone is similar, but the text is not matching at a high rate...less than 20% match rate
import difflib
body_list = df['body'].tolist()
review_text_list = df['review_text'].tolist()
body = body_list
reviews = review_text_list
s = difflib.SequenceMatcher(None, body, reviews).ratio()
print ("ratio:", s, "\n")
# filter out stop words
# these are the most common words such as: "the", "a", and "is".
from nltk.corpus import stopwords
english_stopwords = stopwords.words('english')
print(len(english_stopwords))
text = str(body_list)
# split into words
from nltk.tokenize import word_tokenize
tokens = word_tokenize(text)
# convert to lower case
tokens = [w.lower() for w in tokens]
# remove punctuation from each word
import string
table = str.maketrans('', '', string.punctuation)
stripped = [w.translate(table) for w in tokens]
# remove remaining tokens that are not alphabetic
words = [word for word in stripped if word.isalpha()]
# filter out stop words
from nltk.corpus import stopwords
stop_words = set(stopwords.words('english'))
words = [w for w in words if not w in stop_words]
print(words[:100])
# plot most frequently occurring words in a bar chart
# remove unwanted characters, numbers and symbols
df['review_text'] = df['review_text'].str.replace("[^a-zA-Z#]", " ")
#Let's try to remove the stopwords and short words (<2 letters) from the reviews.
from nltk.corpus import stopwords
stop_words = stopwords.words('english')
# function to remove stopwords
def remove_stopwords(rev):
rev_new = " ".join([i for i in rev if i not in stop_words])
return rev_new
# remove short words (length < 3)
df['review_text'] = df['review_text'].apply(lambda x: ' '.join([w for w in x.split() if len(w)>2]))
# remove stopwords from the text
reviews = [remove_stopwords(r.split()) for r in df['review_text']]
# make entire text lowercase
reviews = [r.lower() for r in reviews]
#Let's again plot the most frequent words and see if the more significant words have come out.
# NOTE: freq_words() is a small plotting helper not defined in this snippet; it tallies
# word counts (e.g. with nltk.FreqDist) and draws a bar chart of the top n terms.
freq_words(reviews, 35)
###############################################################################
###############################################################################
# Tf-idf is a very common technique for determining roughly what each document in a set of
# documents is "about". It cleverly accomplishes this by looking at two simple metrics: tf
# (term frequency) and idf (inverse document frequency). Term frequency is the proportion
# of occurrences of a specific term to total number of terms in a document. Inverse document
# frequency is the inverse of the proportion of documents that contain that word/phrase.
# Simple, right!? The general idea is that if a specific phrase appears a lot of times in a
# given document, but it doesn't appear in many other documents, then we have a good idea
# that the phrase is important in distinguishing that document from all the others.
# Starting with the CountVectorizer/TfidfTransformer approach...
# convert fields in datframe to list
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
cvec = CountVectorizer(stop_words='english', min_df=1, max_df=.5, ngram_range=(1,2))
cvec
# Calculate all the n-grams found in all documents
from itertools import islice
cvec.fit(body_list)
list(islice(cvec.vocabulary_.items(), 20))
len(cvec.vocabulary_)
# Let's take a moment to describe these parameters as they are the primary levers for adjusting what
# feature set we end up with. First is "min_df" or mimimum document frequency. This sets the minimum
# number of documents that any term is contained in. This can either be an integer which sets the
# number specifically, or a decimal between 0 and 1 which is interpreted as a percentage of all documents.
# Next is "max_df" which similarly controls the maximum number of documents any term can be found in.
# If 90% of documents contain the word "spork" then it's so common that it's not very useful.
# Initialize the vectorizer with new settings and check the new vocabulary length
cvec = CountVectorizer(stop_words='english', min_df=.0025, max_df=.5, ngram_range=(1,2))
cvec.fit(body_list)
len(cvec.vocabulary_)
# Our next move is to transform the document into a "bag of words" representation which essentially is
# just a separate column for each term containing the count within each document. After that, we'll
# take a look at the sparsity of this representation which lets us know how many nonzero values there
# are in the dataset. The more sparse the data is the more challenging it will be to model
cvec_counts = cvec.transform(body_list)
print('sparse matrix shape:', cvec_counts.shape)
print('nonzero count:', cvec_counts.nnz)
print('sparsity: %.2f%%' % (100.0 * cvec_counts.nnz / (cvec_counts.shape[0] * cvec_counts.shape[1])))
# get counts of frequently occurring terms; top 20
occ = np.asarray(cvec_counts.sum(axis=0)).ravel().tolist()
counts_df = pd.DataFrame({'term': cvec.get_feature_names(), 'occurrences': occ})
counts_df.sort_values(by='occurrences', ascending=False).head(20)
# Now that we've got term counts for each document we can use the TfidfTransformer to calculate the
# weights for each term in each document
transformer = TfidfTransformer()
transformed_weights = transformer.fit_transform(cvec_counts)
transformed_weights
# we can take a look at the top 20 terms by average tf-idf weight.
weights = np.asarray(transformed_weights.mean(axis=0)).ravel().tolist()
weights_df = pd.DataFrame({'term': cvec.get_feature_names(), 'weight': weights})
weights_df.sort_values(by='weight', ascending=False).head(20)
# FINALLY!!!!
# Here we are doing some sentiment analysis, and distilling the 'review_text' field into positive, neutral, or negative,
# based on the tone of the text in each record. Also, we are filtering out the records that have <.2 negative score;
# keeping only those that have >.2 negative score. This is interesting, but this can contain some non-intitive results.
# For instance, one record in 'review_text' literally says 'no issues'. This is probably positive, but the algo sees the
# word 'no' and interprets the comment as negative. I would argue that it's positive. We'll circle back and resolve
# this potential issue a little later.
import nltk
nltk.download('vader_lexicon')
nltk.download('punkt')
from nltk.sentiment.vader import SentimentIntensityAnalyzer
sid = SentimentIntensityAnalyzer()
df['sentiment'] = df['review_text'].apply(lambda x: sid.polarity_scores(x))
def convert(x):
if x < 0:
return "negative"
elif x > .2:
return "positive"
else:
return "neutral"
df['result'] = df['sentiment'].apply(lambda x:convert(x['compound']))
# df.groupby(['brand','result']).size()
# df.groupby(['brand','result']).count()
# x = df.groupby(['review_text','brand'])['result'].value_counts(normalize=True)  # superseded by the line below
x = df.groupby(['brand'])['result'].value_counts(normalize=True)
y = x.loc[(x.index.get_level_values(1) == 'negative')]
print(y[y>0.2])
Result:
brand result
ABH negative 0.500000
Alexander McQueen negative 0.500000
Anastasia negative 0.498008
BURBERRY negative 0.248092
Beats negative 0.272947
Bowers & Wilkins negative 0.500000
Breitling Official negative 0.666667
Capri Blue negative 0.333333
FERRARI negative 1.000000
Fendi negative 0.283582
GIORGIO ARMANI negative 1.000000
Jan Marini Skin Research negative 0.250000
Jaybird negative 0.235294
LANCÔME negative 0.500000
Longchamp negative 0.271605
Longchamps negative 0.500000
M.A.C negative 0.203390
Meaningful Beauty negative 0.222222
Polk Audio negative 0.256410
Pumas negative 0.222222
Ralph Lauren Polo negative 0.500000
Roberto Cavalli negative 0.250000
Samsung negative 0.332298
T3 Micro negative 0.224138
Too Faced negative 0.216216
VALENTINO by Mario Valentino negative 0.333333
YSL negative 0.250000
Feel free to skip things you find to be irrelevant, but as-is, the code does a fairly comprehensive NLP analysis.
Also, take a look at these two links.
https://www.analyticsvidhya.com/blog/2018/02/the-different-methods-deal-text-data-predictive-python/
https://towardsdatascience.com/fine-grained-sentiment-analysis-in-python-part-1-2697bb111ed4
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 7,851 |
Tag Archives: Sci-Fi Channel
Season 4 of Wynonna Earp returns July 26 on CTV Sci-Fi Channel
June 26, 2020 Greg David
CTV Sci-Fi Channel, along with partners SYFY, IDW Entertainment, SEVEN24 Films, and Cineflix Media announced today that Season 4 of fan-favourite MADE-in-Canada series WYNONNA EARP premieres Sunday, July 26 at 9 p.m. ET on CTV Sci-Fi. The announcement, delivered via a Buzzfeed exclusive that featured a live appearance by cast and Showrunner Emily Andras, in a virtual "Happy Hour", also revealed first-look photos from Season 4, available for download here, along with a new Season 4 trailer. Click here for a sneak peek at the upcoming season of the supernatural hit series.
Due to the enforced production break caused by the COVID-19 pandemic, the first six episodes culminate in a mid-season finale airing on Sunday, Aug. 30 at 9 p.m. ET on CTV Sci-Fi. Production on the final six episodes of the fourth season is slated to resume later this summer in Calgary.
Additionally, WYNONNA EARP will join the panel lineup for Comic-Con@Home, the virtual program for San Diego Comic-Con that runs from July 23-26. The panel is expected to include special appearances by Emily Andras, Melanie Scrofano, Tim Rozon, Dominique Provost-Chalkley, Katherine Barrell, and Varun Saranga.
Created by Emily Andras, WYNONNA EARP stars Canadian actor Melanie Scrofano as the titular demon hunter. In Season 4, the infamous Earp Curse is broken, and witty and wild demon hunter Wynonna Earp would love to be celebrating with cold whisky and hot donuts. Too bad she has to rescue everyone she loves, save the town of Purgatory, and take on her most diabolical, Earp-hating enemy yet — all without her trustworthy gun, Peacemaker. And that's just Monday…
Sci-Fi Channel, Wynonna Earp
Link: Killjoys gave us the gift of one of science fiction's most thoughtful shows
March 12, 2020 Greg David
From Princess Weekes of The Mary Sue:
With Johnny and Dutch's friendship, it never makes that awkward turn into the romantic, but it is always shown to be the most important relationship in the series. They are a family, and that is a bond that is even more important than romance. Continue reading.
Killjoys, Sci-Fi Channel, Space
Featured, Killjoys
Killjoys' showrunner Adam Barken: "F—k yeah, we won"
September 21, 2019 Greg David 2 Comments
Spoiler alert: Do not continue reading until you have watched the series finale of Killjoys, "Last Dance," written by show creator Michelle Lovretta.
And, just like that, the final episode of Killjoys has come to a close. Personally, I loved the way it ended, with our three heroes—Dutch, D'avin and Johnny—getting ready to kick some alien butt one last time. Zeph reunited with Pip. Pree and Gared together and off on their own adventures. The Lady defeated.
And while the door closes on the final episode, Michelle Lovretta's script certainly left things open for more. We spoke to showrunner Adam Barken about this wild ride and the possibility of more stories.
Your job has been done for a while now. Has it been kind of weird watching these last episodes air and knowing that there isn't another season of Killjoys?
Adam Barken: It's been really nice, to be honest, but yeah, it's weird. But at the same time, it's been nice to be able to watch this without the … oftentimes before the panic of 'Oh God, what are we going to do next? And how are we going to do it?' And also just knowing that we're headed towards an ending.
At what point did you know how that final frame was going to be of our three heroes all together again stepping out with their guns?
AB: Although we didn't know the exact, 'OK, they're running up with their guns to shoot up a bunch of aliens.' The details of that we didn't know. But we knew pretty early on, like before we even started breaking Season 4 that this show was going to end with these three together.
We also knew that there might be some change in the situation, we knew that we wanted to pay off this idea that Johnny had wanted to go a different path in his life. So we weren't sure: is it just going to be Dutch and D'avin in a ship? And maybe Johnny's with Clara. There were options, but we knew the vibe at the end of it. In my mind, I don't know if this was what Michelle thought, but as a Star Trek nerd, my mind was kind of on the end of The Undiscovered Country, with Kirk saying, 'The second star on the right and straight on until morning.'
Just that vibe of on we go to the next adventure. Michelle and I had talked right at the very beginning and one of the first questions we asked was, 'What would a final season or final two seasons look like?' We both said, 'Does anybody die?' And we both kind of simultaneously I think had the feeling of, 'No they don't.'
There was nothing in our DNA that wanted to do it. There was nothing in the character stories in the same way that say, Pawter, who really was a character who Michelle, I think, will say was created with a sacrifice in mind. Her story that way with Dutch, D'avin, and Johnny, it did not feel like that sort of sacrifice was necessary. It didn't feel like it fit. It didn't feel like the show we wanted, we were making.
We wanted a show that at the end felt like, 'Fuck yeah, they won.' And they're going to keep going and it'll be in a different situation. There's a reason why we're ending here. The trio is going to split apart, but in this one moment we're still going to see that thing that we love, seeing them together, and we know that in the future they will get back together every once in a while and go kick some ass. And that's the vibe we wanted to leave on.
You left this wide open for, maybe, five years down the line reuniting for an exclusive on Crave or something like that.
AB: Sure, yeah, absolutely. With Dutch's story, it took her from where you started at the very beginning saying, 'I'm a Killjoy because I don't take sides. I don't take bribes. I don't get involved. I am a central part of something that I believe in. I have a family, I have a people, I have a community and I accept it, and I will fight for it forever.' So that's the journey for her. In a way, it's the strongest arc in the series. So that's why it begins where it does. That's why it ends where it does. And that's why it felt like the right place to go out. But that doesn't require anybody to die. It doesn't require there to be a tragic moment at the end. There's no need for that because what was achieved for her was this positive thing.
Pip returned. Did you want to have a happy ending for Zeph?
AB: Yeah, yeah. And a happy ending for Pip. This was one of the interesting things about taking over a show, running it, still wanting and needing Michelle to be there as my partner. We would definitely give and take and go back and forth on things. And one of them was when I said to her, 'I think we really need to kill Pip, and I think that sacrifice going to really resonate. I think it's going to really help us with Zeph. I think it's going to be really median, good stuff.' And she agreed with that but said, 'Yeah, but I don't want Pip dead at the end.' So that's where I can say, 'Well, OK, what do you got?'
And she came back with, 'He was in a pod,' and honestly it was on the board a long time and I just kept laughing going, 'I don't know how you're going to sell it, but if anybody could you will.' Then, sure enough, the script came in and the minute I read the scene I was like, 'Yeah, OK that works.' And if it's wish fulfillment, I think by the end we earned it, and that's fine because who doesn't want to see Pip back? And who doesn't want to see Zeph happy?
Is there a favourite character or character that you're most proud of because of their growth? For me it was D'avin.
AB: Oh yeah, absolutely loved seeing D'avin. I mean I think all the characters are super fun. In a way, I think about it more in terms of relationships and dynamics. I would say the one that was for me, because it was the most unexpected and yet paid off in so many wonderful dividends, was the Pree and Gared story.
You play around with these characters, you put different people together and see what happens. And there was just this moment back in Season 2 that Michelle was watching, where there was the wonderful Gavin Fox in as Gared. He was just supposed to be the jerky guy who keeps trying to take over things and failing and gets a knife in the hand at the end.
She just saw this moment between Tom and Gavin where she thought, 'I think Pree likes him.' As soon as she said that, we said, 'Oh, that's interesting.' And we just started exploring it, and thanks to those actors, the more we did it the better it went.
By the end I was just really happy with, proud of, and excited by all the things that we were able to do with that couple, and the things we were able to put them through. And it still has us, and then the audience, cheering for them to be together. I think the ending we gave them feels really great. So I think that's probably my favourite discovery.
What did you think of Killjoys' series finale? Who were your favourite characters and relationships? Let me know in the comments below!
Adam Barken, Killjoys, Michelle Lovretta, Sci-Fi Channel, Space, Syfy | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,219 |
Q: Python Regex: remove underscores and dashes except if they are in a dict of string substitutions I'm pre-processing a string. I have a dictionary of 10k string substitutions (e. g. "John Lennon": "john_lennon"). I want to replace all other punctuation with a space.
The problem is some of these string substitutions contain underscores or hyphens, so I want to replace punctuation (except full stops) with spaces unless the word is contained in the keys of this dict. I also want to do it in one Regex expression since the text corpus is quite large and this could be a bottleneck.
So far, I have:
import re
input_str = "John Lennon: a musician, artist and activist."
# multi_words: the ~10k-entry substitution dict, e.g. {"John Lennon": "john_lennon", ...}
multi_words = dict((re.escape(k), v) for k, v in multi_words.items())
pattern = re.compile("|".join(multi_words.keys()))
output_str = pattern.sub(lambda m: multi_words[re.escape(m.group(0))], input_str)
This replaces all strings using the keys in a dict. Now I just need to also remove punctuation in the same pass. This should return "john_lennon a musician artist and activist."
A: You could handle the punctuation you would like to remove like the entries in your dictionary:
pattern = re.compile("|".join(multi_words.keys()) + r'|_|-')
and
multi_words['_'] = ' '
multi_words[re.escape('-')] = ' '  # escape the key so the lookup in the sub() callback matches
Then these occurrences are treated like your key words.
But let me remind you that your code only works for a certain set of regular expressions. If you have the pattern foo.*bar in your keys and that matches a string like foo123bar, you will not find the corresponding value to the key by passing foo123bar through re.escape() and then searching in your multiword dictionary for it.
I think the whole escaping you do should be removed and the code should be commented to make clear that only fixed strings are allowed as keys, not complex regular expressions matching variable inputs.
A: You can add punctuations (excluding full stop) in a character set as part of the items to match, and then handle punctuations and dict keys separately in the substitution function:
import re
import string
punctuation = string.punctuation.replace('.', '')
pattern = re.compile("|".join(multi_words.keys())+
"|[{}]".format(re.escape(punctuation)))
def func(m):
    m = m.group(0)
    if m in string.punctuation:
        return ''
    return multi_words[re.escape(m)]

output_str = pattern.sub(func, input_str)
print(output_str)
# john_lennon a musician artist and activist.
A: You could do it by adding one more alternative to the constructed regex which matches a single punctuation character. When the match is processed, a match not in the dictionary can be replaced with a space, using dictionary's get method. Here, I use [,:;_-] but you probably want to replace other characters.
Note: I moved the call to re.escape into the construction of the regex to avoid having to call it on every match.
import re
input_str = "John Lennon: a musician, artist and activist."
pattern = re.compile("|".join(map(re.escape, multi_words.keys())) + "|[,:;_-]+")
output_str = pattern.sub(lambda m: multi_words.get(m.group(0), ' '), input_str)
A: You may use a regex like (?:alt1|alt2...|altN)|([^\w\s.]+) and check if Group 1 (that is, any punctuation other than .) was matched. If yes, replace with an empty string:
pattern = re.compile(r"(?:{})|([^\w\s.]+)".format("|".join(multi_words.keys())))
output_str = pattern.sub(lambda m: "" if m.group(1) else multi_words[re.escape(m.group(0))], input_str)
See the Python demo.
A note about _: if you need to remove it as well, use r"(?:{})|([^\w\s.]+|_+)" because [^\w\s.] (matching any char other than word, whitespace and . chars) does not match an underscore (a word char) and you need to add it as a separate alternative.
Note on Unicode: if you deal with Unicode strings, in Python 2.x, pass re.U or re.UNICODE modifier flag to the re.compile() method.
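Pulling the last approach together into a runnable whole: `multi_words` itself is never shown in this excerpt, so the dictionary below is an assumed stand-in with raw (unescaped) keys, and `re.escape` is applied only while building the pattern.

```python
import re

# Assumed stand-in for the question's multi_words dict (not shown above).
multi_words = {"John Lennon": "john_lennon"}

input_str = "John Lennon: a musician, artist and activist."

# Alternation of the escaped phrases, plus a capture group for any other
# punctuation run (anything that's not a word char, whitespace or a period).
pattern = re.compile(
    r"(?:{})|([^\w\s.]+)".format("|".join(map(re.escape, multi_words.keys())))
)

def repl(m):
    if m.group(1):                      # stray punctuation -> remove it
        return ""
    return multi_words[m.group(0)]      # known phrase -> replace it

output_str = pattern.sub(repl, input_str)
print(output_str)  # john_lennon a musician artist and activist.
```

As the note above says, add `|_+` as a further alternative inside the group if underscores should be stripped as well.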
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 8,892 |
Q: Set TextView value inside a thread in Android I am getting a crash under the following circumstances. I am running a thread in the following way:
Thread t = new Thread(){
public void run() {
text.setText("hello");
}
};
t.start();
The crash occurs when I try to set the value of a TextView from my XML layout (the reference to text is already available).
Am I doing something fundamentally wrong? Kindly point out where I am going wrong.
A: You can only access user interface components from the UI thread.
Android has a few things to make this easy, such as the method runOnUiThread and the class AsyncTask.
For more reading see Painless Threading and Processes and Threads in the Android documentation.
A: You should access Android UI toolkit widgets only from the UI thread. Read http://developer.android.com/resources/articles/painless-threading.html.
A: Use the Handler class, and check its documentation for the other relevant methods:
Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        text.setText("hello");
    }
};

Thread t = new Thread() {
    public void run() {
        mHandler.sendEmptyMessage(0);
    }
};
t.start();
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 4,949 |
The redesigned 2018 Honda Fit is the perfect blend of sporty style and fantastic efficiency all in one unique package. With aggressive exterior design and the impressive Eco Assist™ System, you'll turn heads while zooming by the pumps at the same time. Once inside the incredibly spacious interior, you'll find yourself surrounded by plenty of versatility, room for five to sit comfortably and an impressive second-row Magic Seat® giving you the power to easily adjust your space to match your needs. Additionally, you'll love the fantastic list of advanced safety features, including the available Honda Sensing® Feature Suite keeping you safe everywhere you go behind the wheel. It's time to fit some more fun into your everyday drive. Come experience more for yourself when you test-drive a 2018 Honda Fit in Morristown, TN, at Honda Morristown, also serving customers throughout the greater Knoxville, TN, area.
When your 2018 Honda Fit needs maintenance or repairs, look no further than our service department where our professionally-trained technicians are equipped to handle all your service needs. Whether you need a simple oil change or major repairs, you can trust the quality service you'll receive here at our dealership.
If you'd like to purchase or lease the 2018 Honda Fit in Morristown, TN, stop by Honda Morristown at 4190 W Andrew Johnson Highway, Morristown, TN 37814 for a test-drive today. We look forward to serving our customers from Morristown, TN, and throughout the greater Knoxville, TN, area.
*Options listed are based on the EX-L model shown in image. | {
"redpajama_set_name": "RedPajamaC4"
} | 5,703 |
{"url":"https:\/\/www.physicsforums.com\/threads\/partial-derivative.728919\/","text":"# Partial derivative\n\n1. Dec 17, 2013\n\n### Niles\n\n1. The problem statement, all variables and given\/known data\nHi\n\nSay I have a function $f(x(t), t)$. I am not 100% sure of the difference between\n$$\\frac{df}{dt}$$\nand\n$$\\frac{\\partial f}{\\partial t}$$\nIs it correct that the relation between these two is (from the chain rule)\n$$\\frac{df}{dt} = \\frac{\\partial f}{\\partial t} + \\frac{\\partial f}{\\partial x}\\frac{dx}{dt}$$\n\n2. Dec 17, 2013\n\n### LCKurtz\n\nIt is easy to be confused by the ambiguity of $\\frac{\\partial f}{\\partial t}$ symbol. If you write the expression instead as $f(u,v)$ where $u = x(t),~v=t$ you would write$$\\frac{df}{dt} = f_u\\frac {du}{dt} + f_v\\frac{dv}{dt}=f_u\\frac{dx}{dt}+f_v\\cdot 1$$You wouldn't normally talk about $\\frac{\\partial f}{\\partial t}$ as though $f$ depended on another variable also. But as the chain rule gives, you need the partials of $f$ with respect to each of its arguments. 
If you understand that $\\frac{\\partial f}{\\partial x}$ and $\\frac{\\partial f}{\\partial t}$ in this setting mean the partials of $f$ with respect to its first and second arguments, you should be OK.","date":"2017-08-19 04:02:17","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 2, \"mathjax_display_tex\": 2, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8466253876686096, \"perplexity\": 221.19138036504248}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-34\/segments\/1502886105297.38\/warc\/CC-MAIN-20170819031734-20170819051734-00656.warc.gz\"}"} | null | null |
Create GPX route data from Google Maps
# URL
* PC: http://330k.github.io/gmapgpx/create_gpx_route.html
* Smartphone: http://330k.github.io/gmapgpx/create_gpx_route_mobile.html
# Usage
1. visit one of the above URLs in your browser
2. input the origin and destination
3. click the "Calculate Route" button
4. click the "Add Elevation" button if you need elevation data
5. select the number of reduced points
6. click the "Download" button
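As a rough illustration of what steps 5 and 6 produce (this is a sketch, not this tool's actual code): "reducing points" amounts to downsampling the route polyline, and the result is serialized as a GPX `<rte>` element. In Python's standard library:

```python
import xml.etree.ElementTree as ET

def reduce_points(points, n):
    """Keep at most n evenly spaced points, always retaining both endpoints."""
    if n >= len(points) or n < 2:
        return list(points)
    step = (len(points) - 1) / (n - 1)
    return [points[round(i * step)] for i in range(n)]

def to_gpx_route(points, name="route"):
    """Serialize (lat, lon) pairs as a minimal GPX 1.1 <rte> document."""
    gpx = ET.Element("gpx", version="1.1", creator="sketch")
    rte = ET.SubElement(gpx, "rte")
    ET.SubElement(rte, "name").text = name
    for lat, lon in points:
        ET.SubElement(rte, "rtept", lat=str(lat), lon=str(lon))
    return ET.tostring(gpx, encoding="unicode")

route = [(35.0 + i * 0.001, 139.0) for i in range(100)]  # 100 raw points
reduced = reduce_points(route, 10)
print(len(reduced))  # 10
print(to_gpx_route(reduced)[:36])
```

Real GPX files also carry the `xmlns` attribute on the root element and, after step 4, an `<ele>` child on each point; those are omitted here for brevity.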
| {
"redpajama_set_name": "RedPajamaGithub"
} | 7,338 |
{"url":"https:\/\/minutemath.com\/pre-algebra\/add-and-subtract-fractions-with-different-denominators\/","text":"4.5 Add and Subtract Fractions with Different Denominators\n\nThe topics covered in this section are:\n\n4.5.1 Find the Least Common Denominator\n\nIn the previous section, we explained how to add and subtract fractions with a common denominator. But how can we add and subtract fractions with unlike denominators?\n\nLet\u2019s think about coins again. Can you add one quarter and one dime? You could say there are two coins, but that\u2019s not very useful. To find the total value of one quarter plus one dime, you change them to the same kind of unit\u2014cents. One quarter equals\u00a0$25$\u00a0cents and one dime equals\u00a0$10$\u00a0cents, so the sum is\u00a0$35$\u00a0cents. See the figure below.\n\nSimilarly, when we add fractions with different denominators we have to convert them to equivalent fractions with a common denominator. With the coins, when we convert to cents, the denominator is\u00a0$100$.\u00a0Since there are\u00a0$100$\u00a0cents in one dollar,\u00a0$25$\u00a0cents is $\\frac{25}{100}$ and $10$ cents is $\\frac{10}{100}$. So we add $\\frac{25}{100} + \\frac{10}{100}$ to get $\\frac{35}{100}$, which is $35$ cents.\n\nYou have practiced adding and subtracting fractions with common denominators. Now let\u2019s see what you need to do with fractions that have different denominators.\n\nFirst, we will use fraction tiles to model finding the common denominator of\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$.\n\nWe\u2019ll start with one\u00a0$\\frac{1}{2}$\u00a0tile and\u00a0$\\frac{1}{3}$\u00a0tile. 
We want to find a common fraction tile that we can use to match\u00a0both\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$\u00a0exactly.\n\nIf we try the\u00a0$\\frac{1}{4}$\u00a0pieces,\u00a0$2$\u00a0of them exactly match the\u00a0$\\frac{1}{2}$\u00a0piece, but they do not exactly match the\u00a0$\\frac{1}{3}$\u00a0piece.\n\nIf we try the $\\frac{1}{5}$ pieces, they do not exactly cover the $\\frac{1}{2}$ piece or the $\\frac{1}{3}$ piece.\n\nIf we try the $\\frac{1}{6}$ pieces, we see thatexactly $3$ of them cover the $\\frac{1}{2}$ piece, and exactly $2$ of them cover the $\\frac{1}{3}$ piece.\n\nIf we were to try the $\\frac{1}{12}$ pieces, they would also work.\n\nEven smaller tiles, such as\u00a0$\\frac{1}{24}$\u00a0and\u00a0$\\frac{1}{48}$,\u00a0would also exactly cover the\u00a0$\\frac{1}{2}$\u00a0piece and the\u00a0$\\frac{1}{3}$\u00a0piece.\n\nThe denominator of the largest piece that covers both fractions is the\u00a0least common denominator (LCD)\u00a0of the two fractions. So, the least common denominator of\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$\u00a0is\u00a0$6$.\n\nNotice that all of the tiles that cover\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$\u00a0have something in common: Their denominators are common multiples of\u00a0$2$\u00a0and\u00a0$3$,\u00a0the denominators of\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$.\u00a0The least common multiple (LCM) of the denominators is\u00a0$6$,\u00a0and so we say that\u00a0$6$\u00a0is the least common denominator (LCD) of the fractions\u00a0$\\frac{1}{2}$\u00a0and\u00a0$\\frac{1}{3}$.\n\nLEAST COMMON DENOMINATOR\n\nThe\u00a0least common denominator (LCD)\u00a0of two fractions is the least common multiple (LCM) of their denominators.\n\nTo find the LCD of two fractions, we will find the LCM of their denominators. We follow the procedure we used earlier to find the LCM of two numbers. 
We only use the denominators of the fractions, not the numerators, when finding the LCD.\n\nExample 1\n\nFind the LCD for the fractions $\\frac{7}{12}$ and $\\frac{5}{18}$.\n\nSolution\n\nTo find the LCD of two fractions, find the LCM of their denominators. Notice how the steps shown below are similar to the steps we took to find the LCM.\n\nHOW TO: Find the least common denominator (LCD) of two fractions.\n\n1. Factor each denominator into its primes.\n2. List the primes, matching primes in columns when possible.\n3. Bring down the columns.\n4. Multiply the factors. The product is the LCM of the denominators.\n5. The LCM of the denominators is the LCD of the fractions.\n\nExample 2\n\nFind the least common denominator for the fractions $\\frac{8}{15}$ and $\\frac{11}{24}$.\n\nSolution\n\nTo find the LCD, we find the LCM of the denominators.\n\nFind the LCM of\u00a0$15$\u00a0and $24$.\n\nThe LCM of $15$ and $24$ is $120$. So, the LCD of $\\frac{8}{15}$ and $\\frac{11}{24}$ is $120$.\n\n4.5.2 Convert Fractions to Equivalent Fractions with the LCD\n\nEarlier, we used fraction tiles to see that the LCD of\u00a0$\\frac{1}{4}$ and $\\frac{1}{6}$ is $12$.\u00a0We saw that three\u00a0$\\frac{1}{12}$\u00a0pieces exactly covered\u00a0$\\frac{1}{4}$\u00a0and two\u00a0$\\frac{1}{12}$\u00a0pieces exactly covered\u00a0$\\frac{1}{6}$,\u00a0so\n\n$\\large \\frac{1}{4} = \\frac{3}{12}$ and $\\large \\frac{1}{6} = \\frac{2}{12}$.\n\nWe say that\u00a0$\\frac{1}{4}$ and $\\frac{3}{12}$\u00a0are equivalent fractions and also that\u00a0$\\frac{1}{6}$ and $\\frac{2}{12}$\u00a0are equivalent fractions.\n\nWe can use the Equivalent Fractions Property to algebraically change a fraction to an equivalent one. Remember, two fractions are equivalent if they have the same value. 
The Equivalent Fractions Property is repeated below for reference.\n\nEQUIVALENT FRACTIONS PROPERTY\n\nIf $a,b,c$ are whole numbers where $b \\neq 0, c \\neq 0$, then\n\n$\\large \\frac{a}{b} = \\frac{a \\cdot c}{b \\cdot c}$ and $\\large \\frac{a \\cdot c}{b \\cdot c} = \\frac{a}{b}$\n\nTo add or subtract fractions with different denominators, we will first have to convert each fraction to an\u00a0equivalent\u00a0fraction with the LCD. Let\u2019s see how to change\u00a0$\\frac{1}{4}$ and $\\frac{1}{6}$\u00a0to equivalent fractions with denominator\u00a0$12$\u00a0without using models.\n\nExample 3\n\nConvert $\\frac{1}{4}$ and $\\frac{1}{6}$ to equivalent fractions with denominator $12$, their LCD.\n\nSolution\n\nWe do not reduce the resulting fractions. If we did, we would get back to our original fractions and lose the common denominator.\n\nHOW TO: Convert two fractions to equivalent fractions with their LCD as the common denominator.\n\n1. Find the LCD.\n2. For each fraction, determine the number needed to multiply the denominator to get the LCD.\n3. Use the Equivalent Fractions Property to multiply both the numerator and denominator by the number you found in Step 2.\n4. Simplify the numerator and denominator.\n\nExample 4\n\nConvert $\\frac{8}{15}$ and $\\frac{11}{24}$ to equivalent fractions with denominator $120$, their LCD.\n\nSolution\n\n4.5.3 Add and Subtract Fractions with Different Denominators\n\nOnce we have converted two fractions to equivalent forms with common denominators, we can add or subtract them by adding or subtracting the numerators.\n\nHOW TO: Add or subtract fractions with different denominators.\n\n1. Find the LCD.\n2. Convert each fraction to an equivalent form with the LCD as the denominator.\n3. Add or subtract the fractions.\n4. Write the result in simplified form.\n\nExample 5\n\nAdd: $\\frac{1}{2} + \\frac{1}{3}$.\n\nSolution\n\nRemember, always check to see if the answer can be simplified. 
Since\u00a0$5$\u00a0and\u00a0$6$\u00a0have no common factors, the fraction\u00a0$\\frac{5}{6}$\u00a0cannot be reduced.\n\nExample 6\n\nAdd: $\\frac{1}{2} \u2013 (- \\frac{1}{4})$.\n\nSolution\n\nOne of the fractions already had the least common denominator, so we only had to convert the other fraction.\n\nExample 6\n\nAdd: $\\frac{7}{12} + \\frac{5}{18}$.\n\nSolution\n\nBecause\u00a0$31$\u00a0is a prime number, it has no factors in common with\u00a0$36$.\u00a0The answer is simplified.\n\nWhen we use the Equivalent Fractions Property, there is a quick way to find the number you need to multiply by to get the LCD. Write the factors of the denominators and the LCD just as you did to find the LCD. The \u201cmissing\u201d factors of each denominator are the numbers you need.\n\nThe LCD,\u00a0$36$,\u00a0has\u00a0$2$\u00a0factors of\u00a0$2$\u00a0and\u00a0$2$\u00a0factors of\u00a0$3$.\n\nTwelve has two factors of\u00a0$2$,\u00a0but only one of\u00a0$3$\u2014so it is \u2018missing\u2018 one\u00a0$3$.\u00a0We multiplied the numerator and denominator of\u00a0$\\frac{7}{12}$\u00a0by\u00a0$3$\u00a0to get an equivalent fraction with denominator\u00a0$36$.\n\nEighteen is missing one factor of\u00a0$2$\u2014so you multiply the numerator and denominator\u00a0$\\frac{5}{18}$\u00a0by\u00a0$2$\u00a0to get an equivalent fraction with denominator\u00a0$36$.\u00a0We will apply this method as we subtract the fractions in the next example.\n\nExample 7\n\nSubtract: $\\frac{7}{15} \u2013 \\frac{19}{24}$.\n\nSolution\n\nExample 7\n\nAdd: $- \\frac{11}{30} + \\frac{23}{42}$.\n\nSolution\n\nIn the next example, one of the fractions has a variable in its numerator. 
We follow the same steps as when both numerators are numbers.\n\nExample 8\n\nAdd: $\\frac{3}{5} + \\frac{x}{8}$.\n\nSolution\n\nWe cannot add\u00a0$24$\u00a0and\u00a0$5x$\u00a0since they are not like terms, so we cannot simplify the expression any further.\n\n4.5.4 Identify and Use Fraction Operations\n\nBy now in this chapter, you have practiced multiplying, dividing, adding, and subtracting fractions. The following table summarizes these four fraction operations. Remember: You need a common denominator to add or subtract fractions, but not to multiply or divide fractions\n\nSUMMARY OF FRACTION OPERATIONS\n\nFraction multiplication:\u00a0Multiply the numerators and multiply the denominators.\n\n$\\large \\frac{a}{b} \\cdot \\frac{c}{d} = \\frac{ac}{bd}$\n\nFraction division:\u00a0Multiply the first fraction by the reciprocal of the second.\n\n$\\large \\frac{a}{b} \\div \\frac{c}{d} = \\frac{a}{b} \\cdot \\frac{d}{c}$\n\nFraction addition:\u00a0Add the numerators and place the sum over the common denominator. If the fractions have different denominators, first convert them to equivalent forms with the LCD.\n\n$\\large \\frac{a}{c} + \\frac{b}{c} = \\frac{a+b}{c}$\n\nFraction subtraction:\u00a0Subtract the numerators and place the difference over the common denominator. If the fractions have different denominators, first convert them to equivalent forms with the LCD.\n\n$\\large \\frac{a}{c} \u2013 \\frac{b}{c} = \\frac{a-b}{c}$\n\nExample 9\n\nSimplify:\n\n1. $- \\frac{1}{4} + \\frac{1}{6}$\n2. $- \\frac{1}{4} \\div \\frac{1}{6}$\nSolution\n\nFirst we ask ourselves, \u201cWhat is the operation?\u201d\n\nPart 1. The operation is addition.\n\nDo the fractions have a common denominator? No.\n\nPart 2. The operation is division. We do not need a common denominator.\n\nExample 9\n\n1. $\\frac{5x}{6} \u2013 \\frac{3}{10}$\n2. $\\frac{5x}{6} \\cdot \\frac{3}{10}$\nSolution\n\nPart 1. The operation is subtraction. 
The fractions do not have a common denominator.\n\nPart 2. The operation is multiplication; no need for a common denominator.\n\n4.5.5 Use the Order of Operations to Simplify Complex Fractions\n\nIn Multiply and Divide Mixed numbers and Complex Fractions, we saw that a complex fraction is a fraction in which the numerator or denominator contains a fraction. We simplified complex fractions by rewriting them as division problems. For example,\n\n$\\large \\frac{\\frac{3}{4}}{\\frac{5}{8}} = \\frac{3}{4} \\div \\frac{5}{8}$\n\nNow we will look at complex fractions in which the numerator or denominator can be simplified. To follow the order of operations, we simplify the numerator and denominator separately first. Then we divide the numerator by the denominator.\n\nHOW TO: Simplify complex fractions.\n\n1. Simplify the numerator.\n2. Simplify the denominator.\n3. Divide the numerator by the denominator.\n4. Simplify if possible.\n\nExample 10\n\nSimplify: $\\frac{( \\frac{1}{2} )^{2}}{ 4+3^{2} }$.\n\nSolution\n\nExample 11\n\nSimplify: $\\frac{\\frac{1}{2} + \\frac{2}{3}}{\\frac{3}{4} \u2013 \\frac{1}{6}}$.\n\nSolution\n\n4.5.6 Evaluate Variable Expressions with Fractions\n\nWe have evaluated expressions before, but now we can also evaluate expressions with fractions. Remember, to evaluate an expression, we substitute the value of the variable into the expression and then simplify.\n\nExample 12\n\nEvaluate $x+ \\frac{1}{3}$ when\n\n1. $x=- \\frac{1}{3}$\n2. $x=- \\frac{3}{4}$.\nSolution\n\nPart 1. To evaluate $x+ \\frac{1}{3}$ when $x=- \\frac{1}{3}$, substitute $- \\frac{1}{3}$ for $x$ in the expression.\n\nPart 2. 
To evaluate $x+ \\frac{1}{3}$ when $x=- \\frac{3}{4}$, we substitute $- \\frac{3}{4}$ for $x$ in the expression.\n\nExample 13\n\nEvaluate $y- \\frac{5}{6}$ when $y=- \\frac{2}{3}$.\n\nSolution\n\nWe substitute $- \\frac{2}{3}$ for $y$ in the expression\n\nExample 14\n\nEvaluate $2x^{2} y$ when $x= \\frac{1}{4}$ and $y=- \\frac{2}{3}$.\n\nSolution\n\nSubstitute the values into the expression. In $2x^{2} y$, the exponent applies only to $x$.\n\nExample 15\n\nEvaluate $\\frac{p+q}{r}$ when $p=-4$, $q=-2$, and $r=8$.\n\nSolution\n\nWe substitute the values into the expression and simplify.","date":"2021-09-29 03:22:58","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9013208150863647, \"perplexity\": 545.9437875838601}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780061350.42\/warc\/CC-MAIN-20210929004757-20210929034757-00137.warc.gz\"}"} | null | null |
7 way round trailer wiring diagram new 7 wire trailer plug diagram 7 way round trailer wiring diagram new 7 wire trailer plug diagram best elegant 5 pin. 4 pin trailer wiring diagram for light on plug and 7 wire 5 pin trailer plug wiring diagram with on 4. 6 pin wire diagram wiring diagram progresif 6 way wire harness diagram wiring diagram 6 pin wire diagram 6 pin wire diagram. Wiring diagram for 7 pin flat trailer plug wiring diagrams wiring diagram for 7 pin flat trailer plug 9 pin trailer plug wiring diagram 2018 5. Volvo 7 pin round trailer plug wiring diagram wiring diagram write volvo 7 pin round trailer plug wiring diagram wiring diagram 7wire trailer wiring diagram with brakes volvo 7 pin round trailer plug wiring diagram.
Trailer wiring diagram besides usb cable wiring diagram on 7 pin 5 pin plug wiring diagram wiring diagramtowbar wiring diagram 7 pin download wiring diagram. 5 pin trailer plug wiring diagram fresh wiring diagram for 7 pin 5 pin trailer plug wiring diagram fresh wiring diagram for 7 pin round trailer plug 2019. 60 unique 4 pin 5 wire trailer wiring diagram images wsmceorg new wiring diagram for car trailer 5 pin trailer plug wiring diagram 7 wire car electrical. Wiring diagram for 5 pin trailer plug shahsramblings wiring diagram for 5 pin trailer plug simple 5 pin round trailer plug wiring diagram fresh.
7 pin trailer plug wiring diagram >>> check this useful article by 7 pin trailer plug wiring diagram >>> check this useful article by going to the link at the image cingadvice. 7 way trailer plug in wiring diagram trailer wiring diagram 5 pin trailer plug wiring diagram australia reference 7 wire trailer 7 way trailer plug in. Wiring guides. 5 pin trailer plug wiring diagram australia fresh 7 wire trailer 5 pin trailer plug wiring diagram australia fresh 7 wire trailer plug diagram best elegant 5 pin wiring in fonar. 5 pin trailer wiring harness diagram fresh pin ke wiring for custom 5 pin trailer wiring harness diagram fresh pin ke wiring for custom wiring diagram •. 5 pin trailer wiring diagram narva weick in webtor for plug 5 pin trailer wiring diagram narva weick in webtor for plug.
Narva trailer plug wiring diagram trailer wiring diagram 5 pin narva trailer plug wiring diagram trailer wiring diagram 5 pin 235x150 narva trailer plug wiring. Trailer light plug wiring diagram reference wiring diagram for 5 pin trailer light plug wiring diagram reference wiring diagram for 5 pin flat trailer plug save trailer. 5 pin flat trailer plug wiring diagram automotive good of 7 6 4 full 5 pin flat trailer plug wiring diagram automotive good of 7 6 4 full size fantastic free trail in at 5 pin trailer wiring diagram. 6 pin vehicle plug wiring diagram index listing of wiring diagrams 6 pin vehicle side wiring diagram electrical wiring diagram symbols5 7 pin trailer plug wiring diagram. Wiring diagram for 5 pin trailer plug valid 7 pin flat wiring 5 wiring diagram for 5 pin trailer plug valid 7 pin flat wiring 5 wire trailer wiring diagram. | {
"redpajama_set_name": "RedPajamaC4"
} | 4,604 |
Q: Creating a template for a Writer-based calendar (diary) I'm currently working on a calendar for the new year – a daily planner, a diary! Something for GTD (Getting things done) based on a diary with planner options. It is created in OpenOffice or LibreOffice.
The questions are:
*
*How do I get the months that are shown in the bottom to the footer of the layout?
*How can I manage to get the months for the whole year?
The months are shown below. For each week in each month we have a little overview that shows:
*
*the current month
*the previous month
*the following month
Note: This is an example for the month of March. How can I do this for the whole year? It is pretty difficult to create the calendar at the bottom.
By the way, there is more information that goes into the calendar. I do an import of the data.
I import 365 texts called "Losungen", which are biblical notes and verses. See here:
There's no problem with the import of all this data. The only thing is: how do I create the calendar "images" at the bottom?
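For question 2, one way to mass-produce the mini-month overviews (a sketch of my own, not from the thread; the year below is arbitrary) is to generate the previous/current/next month grids as fixed-width text with Python's standard calendar module, since OpenOffice/LibreOffice can run Python macros:

```python
import calendar

def mini_months(year, month):
    """Return text calendars for the previous, current and next month."""
    cal = calendar.TextCalendar(firstweekday=0)  # weeks start on Monday
    blocks = []
    for offset in (-1, 0, 1):
        m = month + offset
        y = year + (m - 1) // 12   # roll over into the adjacent year
        m = (m - 1) % 12 + 1
        blocks.append(cal.formatmonth(y, m))
    return blocks

prev, cur, nxt = mini_months(2014, 3)  # e.g. March, as in the screenshot
print(cur)
```

Each returned block is plain monospaced text, so it can be pasted (or written by a macro) into the three small footer frames week by week.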
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 2,922 |
Q: How to fetch unique field values (song_type) from Core Data I would like to know how to fetch unique field values (song_type) from Core Data.
i have the below code for that
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Songs" inManagedObjectContext:myManagedObjectContext];
NSFetchRequest *fetchRequest = [[NSFetchRequest alloc]init];
[fetchRequest setPropertiesToFetch:[NSArray arrayWithObject:@"song_type"]];
[fetchRequest setEntity:entity];
NSError *aError;
NSMutableArray *fetchedResultsArray = [[myManagedObjectContext executeFetchRequest:fetchRequest error:&aError]mutableCopy];
if(! fetchedResultsArray ){
NSLog(@"EEEERRROR");
}
else {
NSLog(@"fetched %d", [fetchedResultsArray count]);
}
[fetchedResultsArray release];
[fetchRequest release];
i am getting the below error
Terminating app due to uncaught exception
'NSInvalidArgumentException', reason: 'Invalid keypath song_type
passed to setPropertiesToFetch:'
A: Set the following two properties and you are done.
fetchRequest.returnsDistinctResults = YES;
fetchRequest.resultType = NSDictionaryResultType;
Hope this helps.
EDIT (note: the original code throws because setPropertiesToFetch: is called before setEntity:, so the string key path cannot be resolved against an entity; setting the entity first and passing the NSPropertyDescription itself avoids this):
NSFetchRequest *fetchRequest= [[NSFetchRequest alloc]init];
NSEntityDescription *entity = [NSEntityDescription entityForName:@"Songs" inManagedObjectContext:myManagedObjectContext];
//Take properties dictionary
NSDictionary *entityProperties = [entity propertiesByName];
[fetchRequest setEntity:entity];
[fetchRequest setReturnsDistinctResults:YES];
[fetchRequest setPropertiesToFetch:[NSArray arrayWithObject:[entityProperties objectForKey:@"song_type"]]];
NSArray * result = [myManagedObjectContext executeFetchRequest:fetchRequest error:nil];
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,020 |
Daily Current Affairs 6th & 7th May 2018
Govt Jobs Portal, 2:51:00 AM
RENOWNED MARATHI SINGER ARUN DATE DIES
Veteran Marathi singer Arun Date, known for his popular song "Shukratara", has died in Mumbai.
i. Date, famous for non-film 'bhavgeet' (lyrical poetry) songs, had crooned numbers like "Shukratara" and "Ya Janmavar", which became very popular among people.
INTERNATIONAL NO DIET DAY (INDD) – MAY 6
International No Diet Day (INDD) was observed all over the world.
It is observed every year on 6th May. Its symbol is a light blue ribbon.
ISRAEL WITHDRAWS FROM RACE FOR UN SECURITY COUNCIL SEAT
Israel announced its withdrawal from the contest for a non-permanent seat at the UN Security Council (UNSC) for the 2019-2020 term.
Israel has been competing with Germany and Belgium for one out of the two seats assigned to the Western European and Others Group.
The UNSC has five permanent members and 10 non-permanent members who are elected for a two-year term.
INDIA SIGNS 200 MILLION US DOLLAR LOAN DEAL WITH WORLD BANK FOR NATIONAL NUTRITION MISSION (POSHAN ABHIYAAN)
The deal was signed for 315 districts across all States/UTs.
This loan will help India to reach its goal of reducing stunting in children of the age 0-6 years from 38.4% to 25% by 2022.
The POSHAN (PM's Overarching Scheme for Holistic Nourishment) Abhiyaan was launched by the Prime Minister on 8th March 2018 at Jhunjhunu, Rajasthan.
POSHAN Abhiyaan's main objective is gradual increase of the interventions supported by the World Bank assisted Integrated Child Development Services (ICDS) Systems Strengthening and Nutrition Improvement Project (ISSNIP) to all districts in India for a 3-year period.
THE BORDER ROADS ORGANISATION TO CELEBRATE ITS 58TH RAISING DAY ON MAY 7
Border Roads Organisation (BRO) will celebrate its Raising Day on May 7 every year.
About Border Roads Organisation (BRO)
It develops and maintains road networks in India's border areas and friendly neighbouring countries.
♦ Director General : Lt. Gen.Harpal Singh
♦ Headquarters : New Delhi
♦ Motto : Creates, Connects and Cares 'Shramena Sarvam Sadhyam'
♦ Founder : Jawaharlal Nehru | {
"redpajama_set_name": "RedPajamaCommonCrawl"
} | 9,476 |
\section{Introduction}
\label{sec:introduction}
Fix a field $\mathbbm{k}$ and an element $q\neq 1,0\in \mathbbm{k}$. Let
$e$ be the multiplicative order of $q$. In this paper, we
discuss isomorphisms between two different families of algebras
constructed from this data.
One of these families is
ultimately descended from Erich Hecke, though it is a rather distant
descent. It's not clear he would recognize these particular progeny.
The other family is of a more recent vintage. While the first hint of
its existence was the nilHecke algebra acting on the cohomology of the
complete flag variety, it was not written in full generality until the
past decade in work of Khovanov, Lauda and Rouquier.
In the spirit of
other families in representation theory, one can think of the Hecke family
as being {\it trigonometric} and the KLR family as {\it rational}. However, a common phenomenon
in mathematics is the existence of an isomorphism between
trigonometric and rational versions of an object after suitable completion; the
``ur-isomorphism'' of this type is between the associated graded of
the K-theory of a manifold and its cohomology. Such an isomorphism
has been given for completions of non-degenerate and degenerate affine
Hecke algebras by Lusztig in \cite{Lusgraded}. Another similar
isomorphism is given in \cite{GTL} for Yangians and quantum affine
algebras. In this paper, we will define isomorphisms with a similar
flavor between the algebras in the Hecke and KLR families.
In both cases, these families have somewhat complicated family
trees. Like blood types, there are two complementary ways that they can
become complicated. The simplest case, our analogue of blood type O,
is the affine Hecke algebra (on the Hecke side) and the KLR algebra of
$\hat{A}_e/A_\infty$ (on the KLR side). The two complications we can
add are like the type A and type B antigens on our red blood cells.
Since ``type A/B'' already has established connotations in
mathematics, we will instead call these types W and F:
\begin{itemize}
\item algebras with the type W complication are ``weighted'': these
include affine Schur algebras (on the Hecke side) and weighted KLR
algebras.
\item algebras with the type F complication are
``framed'': these include cyclotomic Hecke algebras and the
$\hat{A}_e/A_\infty$ tensor product categorifications from
\cite{Webmerged}. These are analogs of the passage from Lusztig to
Nakajima quiver varieties.
\item finally, both of these complications can be present
simultaneously, giving type WF. The natural object which appears in
the Hecke family is the category $\mathcal{O}$ of a Cherednik algebra of $\mathbb{Z}/e\mathbb{Z}\wr
S_m$, though in a guise in which it has not been seen previously. On
the KLR side, the result is a steadied quotient of a weighted KLR
algebra for the Crawley-Boevey quiver of a dominant weight of type
$\hat{A}_e/A_\infty$.
\end{itemize}
Our main theorem is that in each type, there are completions of these
Hecke- and KLR-type algebras that are isomorphic.
Since a great number of different algebras of representation theoretic
interest appear in this picture, it can be quite difficult to keep
them all straight. For the convenience of the reader, we give a table
here, placing all the algebras and categories which appear in this picture in their
appropriate type. Note that many of the items listed below (such as
Ariki-Koike algebras, or cyclotomic $q$-Schur and quiver Schur
algebras) are not the most general family members of that type, but
rather special cases. We'll ultimately focus on the category of
representations of a given algebra, so we have not
distinguished between Morita equivalent algebras. \smallskip
\centerline{
\begin{tabular}{c|l|l}
Type& KLR side & Hecke side \\ \hline
O&\parbox[c][1.5em][c]{0.39\textwidth}{KLR algebra \cite{KLI,Rou2KM}
}& affine Hecke algebra $H_m(\mathsf{q})$ (Thm. \ref{type-O-Hecke})\\
W&
\parbox[c][3em][c]{0.39\textwidth}{weighted KLR algebra
\cite{WebwKLR}, \\ quiver Schur algebra \cite{SWschur}} & affine
$q$-Schur algebra (Thm. \ref{waha-Schur})\\
F& \parbox[c][4.5em][c]{0.39\textwidth}{cyclotomic KLR algebras \cite{KLI},\\ algebras $T^{\underline{\boldsymbol{\la}}}$
categorifying tensor products for type $\hat{A}_e/A_\infty$ \cite{Webmerged}}
& \parbox[c][2.8em][c]{0.49\textwidth}{ cyclotomic Hecke (Ariki-Koike)
algebras (Prop.~\ref{prop:cyclo-Hecke}), \\ category $\mathcal{O}$
for $\mathfrak{gl}_N$ ($e=\infty$) (\cite[9.11]{Webmerged})}\\
WF& \parbox[c][4.5em][c]{0.47\textwidth}{reduced steadied quotients
$T^\vartheta$ categorifying Uglov Fock spaces \cite{WebRou}, \\
cyclotomic quiver Schur algebras \cite{SWschur}}& \parbox[c][2.8em][c]{0.49\textwidth}{ category $\mathcal{O}$
for a Cherednik algebra with $\mathbb{Z}/e\mathbb{Z}\wr
S_m$ (\cite[Thm. A]{WebRou}), cyclotomic $q$-Schur algebras (Prop. \ref{cqs-morita})}
\end{tabular}}
For the KLR side, the membership of these
algebras is manifest by definition (with the exception of the quiver
Schur algebras, which follows from \cite[\ref{w-qSchur}]{WebwKLR}). For the Hecke side, we have
listed the result in this paper or another which gives the relation.
\subsubsection*{Type O} We'll first consider the simplest case of
this isomorphism, which is quite close to ones given in
\cite{BKKL,Rou2KM}. This is not the most direct approach, since the
algebras of types O, W, and F are all special cases of the algebras of
type WF. However, to illustrate our techniques it seems more sensible
to build from easy to hard rather than the other way around. The two algebras we consider are:
\begin{itemize}
\item the affine Hecke algebra $\mH_h$ of
$S_n$ with parameter $qe^h$, considered as a $\mathbbm{k}[[h]]$-algebra. Note
that this deformation only makes sense if $\mathbbm{k}$ has characteristic 0.
\item the KLR algebra $R_h$ of rank $n$ for
$\sllhat$ attached to the polynomials $Q_{i+1,i}(u,v)=u-v+h$, also
considered over $\mathbbm{k}[[h]]$.
\end{itemize}
These algebras categorify the algebra $U^+(\mathfrak{sl}_\infty)$ or $U^+(\mathfrak{\widehat{sl}}_e)$.
The algebras $\mH_h$ and $R_h$ are defined in \cite[(4.1--5)]{BKKL} and
\cite[(1.6--15)]{BKKL} respectively; here we consider them with
the addition of an $h$-adic deformation. This deformation is very
important since it allows us to compare affine Hecke algebras with $q$
at a root of unity with those for generic $q$. For KLR algebras, this
corresponds to comparing KLR algebras for $\widehat{A}_e$ and
$A_\infty$.
\begin{theorem}
There is a $\mathbbm{k}[[h]]$-algebra isomorphism $\hmH_h\cong \hR_h$.
\end{theorem}
The characteristic 0 assumption may look peculiar to experts in the
field; the Hecke algebra over a field of characteristic $p$ has
similar deformations coming from deforming the parameter $q$ (though
$e^h$ does not make sense here). However, some more general
deformation of the KLR algebra may be needed to encompass these
deformations. Of course, it's hard to rule out the possibility of an
isomorphism, and there is no obvious obstruction to the existence of
one. Since our primary applications will be to Hecke algebras and
related structures of characteristic 0, this hypothesis is no problem
for us. In general, we'll work in parallel with the undeformed Hecke
algebra (and related structures) in arbitrary characteristic, and with
the exponentially deformed Hecke algebra in characteristic 0.
One isomorphism between type O completions was implicitly constructed by Brundan and
Kleshchev in \cite{BKKL} and for a related localization by Rouquier in
\cite[\S 3.2.5]{Rou2KM} for $h=0$. Unfortunately, it is not clear how
to extend these isomorphisms to the deformed case, so instead we
construct an isomorphism which is different even after the
specialization $h=0$. We will also generalize this theorem in a small
but useful way: in fact there is a natural class of completions of the Hecke
algebra that correspond to the KLR algebra for a larger Lie
algebra $\mathfrak{G}_U$; here we consider an arbitrary finite subset $U\subset \mathbbm{k}\setminus\{0\}$, given a graph
structure connecting $u$ and $u'$ if $qu=u'$.
The most important case is when $U$ is the $e$th roots of unity, so
$U$ is an $e$-cycle, but having a more general statement will be
useful in an analysis of the category $\mathcal{O}$ for a cyclotomic rational
Cherednik algebra given in \cite{WebRou}. This result gives an
alternate approach (and graded version) of the theorem of Dipper and
Mathas \cite{DMmorita} that Ariki-Koike algebras for arbitrary parameters are Morita
equivalent to a product of such algebras with $q$-connected parameters.
While it would be easy to
deduce this more general case from Brundan and Kleshchev's result, we find it a bit cleaner
to give a direct proof.
This isomorphism still has a similar flavor to those previously
defined; in brief, we use a general power series of the form
$1+y+\cdots$ (in particular $e^y$) where
Brundan and Kleshchev or Rouquier use $1+y$. The choice of $e^y$ matches better with
the other rational/trigonometric isomorphisms we mentioned, but
matching the deformation is a higher consideration for us than this philosophy.
We also introduce techniques that allow an easier account of
Brundan-Kleshchev type isomorphisms, which we would argue make them
more conceptual in nature. Our method, which is a variation on that
used by Rouquier in \cite[\S 3.2.5]{Rou2KM}, is to construct an isomorphism between
completions of the polynomial representations of $\mH_h$ and $R_h$,
and then to match the operators given by these algebras. This
requires considerably less calculation than confirming the relations
of the algebras themselves. It has the considerable advantage of
easily generalizing to other types.
\subsubsection*{Type W}
The first variation we introduce is ``weightedness.'' This is a
similar change of framework in both the Hecke and KLR families, though
it is not easy to see from the usual perspective on the Hecke algebra. This algebra
can be considered as the span of strand diagrams with the number of strands equal to
the rank of the algebra, and a crossing corresponding to $T_i+1$ or
$T_i-q$, depending on conventions. In this framework, we can
introduce a generalization of the Hecke algebra which allows ``action
at a distance'' where certain interactions between strands occur at a
fixed distance from each other rather than when they cross. To see
the difference between these, compare the relations
(\ref{Hecke-1}--\ref{Hecke-triple}) with
(\ref{nilHecke-2}--\ref{eq:triple-point-2}). We have already
introduced this concept in the KLR family as {\bf weighted KLR
algebras} \cite{WebwKLR}, but the idea of incorporating it into the
Hecke algebra seems to be new.
The main result in this case is
that we obtain a graded KLR type presentation of the affine Schur
algebra, though we should note that it is considerably more convenient to
use a Morita equivalent algebra.
When $e=\infty$, these algebras are Morita equivalent to the type O
algebras, and thus they still categorify the algebra
$U^+(\mathfrak{sl}_\infty)$. When $e <\infty$, the category of
representations is larger, and corresponds to the passage from
$U^+(\mathfrak{\widehat{sl}}_e)$ to
$U^+(\mathfrak{\widehat{gl}}_e)$.
\subsubsection*{Type F}
The second variation we'll consider is ``framing.'' This is also a
fundamentally graphical operation, accomplished by including red lines, which
then interplay with those representing our original Hecke algebra.
This case is closely related to the extension from Hecke algebras to
cyclotomic Hecke algebras and parabolic category $\mathcal{O}$ of type A.
These algebras lead to categorifications of tensor products of simple
representations.
In the KLR family, these are precisely the tensor
product algebras introduced in \cite{Webmerged}; in the Hecke family,
these algebras do not seem to have appeared in precisely this form
before, though they appear naturally as endomorphisms of projectives
over cyclotomic Hecke algebras.
In particular, we show that our isomorphism and deformation are also compatible with
deformations of cyclotomic quotients.
For a fixed multiset $\{Q_1,\dots, Q_\ell\}$ of elements of $U$, there are cyclotomic quotients of both
$\mH_0$ and $R_0$, which Brundan and Kleshchev construct an
isomorphism between. We can deform this cyclotomic quotient with
respect to variables $\mathbf{z}=\{z_{j}\}$.
For $\mH_h$, consider the
deformed cyclotomic quotient (also called {\bf Ariki-Koike algebra}) attached to the
polynomial
\mbox{$C(A)= \prod_{i=1}^\ell
(A-Q_ie^{-z_i}).$}
\begin{definition}
The deformed cyclotomic quotient $\mH^{Q_\bullet}_{h,\mathbf{z}}$ is the quotient
of the base extension $\mH_h\otimes_\mathbbm{k}\K[[\mathbf{z}]]$ by the
2-sided ideal generated by $C(X_1)$.
\end{definition}
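As a quick sanity check on this deformation (our own verification, not part of the argument), one can confirm in a computer algebra system that $C(A)$ specializes at $\mathbf{z}=0$ to the undeformed polynomial $\prod_i(A-Q_i)$, and compute its first-order terms; a minimal SymPy sketch for $\ell=2$ (all variable names are our own):

```python
# Sketch (ours): the deformed cyclotomic polynomial for l = 2,
#   C(A) = (A - Q1*exp(-z1)) * (A - Q2*exp(-z2)),
# recovers the undeformed polynomial at z = 0, and its first-order
# term in z1 is Q1*(A - Q2).
import sympy

A, Q1, Q2, z1, z2 = sympy.symbols('A Q1 Q2 z1 z2')
C = (A - Q1*sympy.exp(-z1)) * (A - Q2*sympy.exp(-z2))

# specialization at z = 0 is the usual cyclotomic polynomial
assert sympy.expand(C.subs({z1: 0, z2: 0}) - (A - Q1)*(A - Q2)) == 0
# first-order coefficient in z1
assert sympy.expand(sympy.diff(C, z1).subs({z1: 0, z2: 0}) - Q1*(A - Q2)) == 0
```

In particular, to first order the deformation moves the root $Q_1$ to $Q_1(1-z_1)$.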
For $R_h$, the corresponding quotient is given by an additive
deformation of the roots. For each $u\in U$, we have a
polynomial $c_u(a)=\prod_{Q_j=u}(a-z_{j})$.
\begin{definition}
The deformed cyclotomic quotient $R^{Q_\bullet}_{h;\mathbf{z}}$ is a quotient of
the base extension $R_h\otimes_\mathbbm{k}\K[[\mathbf{z}]]$ by the ideal
generated by $c_{u_1}(y_1)e_\mathbf{u}$ for every length $n$ sequence
$\mathbf{u}\in U^n$.
\end{definition}
For the usual indexing of cyclotomic quotients by dominant weights,
this is a deformation of the algebra attached in \cite{KLI} to a
dominant weight $\lambda$ of $\mathfrak{G}_U$ satisfying
\[\alpha_u^\vee(\lambda)=\#\{i\in [1,\ell]\mid Q_i=u\}.\]
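For example (the choice of parameters here is ours, for concreteness): if $\ell=3$ with $Q_1=Q_2=u$ and $Q_3=u'$ for distinct $u,u'\in U$, the corresponding dominant weight $\lambda$ satisfies

```latex
\[\alpha_u^\vee(\lambda)=2,\qquad \alpha_{u'}^\vee(\lambda)=1,\qquad
\alpha_w^\vee(\lambda)=0\quad\text{for }w\in U\setminus\{u,u'\},\]
```

that is, $\lambda=2\Lambda_u+\Lambda_{u'}$ as a sum of fundamental weights of $\mathfrak{G}_U$.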
\begin{theorem}
The isomorphism $\hmH_h\cong \hR_h$ induces an isomorphism of
$\mathbbm{k}[[h,\mathbf{z}]]$-algebras $\mH^{Q_\bullet}_{h,\mathbf{z}}\cong R^{Q_\bullet}_{h;\mathbf{z}}$.
\end{theorem}
\subsubsection*{Type WF}
Our final goal, the algebras incorporating both these modifications,
is the least likely to be familiar to readers. The category of
representations over these algebras is equivalent to the category
$\mathcal{O}$ for a rational Cherednik algebra for $\mathbb{Z}/\ell\mathbb{Z} \wr S_n $, as
we show in \cite{WebRou}. In certain cases, these algebras are also
Morita equivalent to cyclotomic $q$-Schur algebras.
The isomorphism between the two families in
this case will prove key in the results of \cite{WebRou}, proving the
conjecture of Rouquier identifying decomposition numbers in this
category $\mathcal{O}$ with parabolic Kazhdan-Lusztig polynomials. This
construction is also of some independent interest as a
categorification of Uglov's higher level Fock space, introduced in
\cite{Uglov}. In \cite{WebRou}, we will show that several natural,
but hard-to-motivate structures on the Fock space arise from these algebras.
\section*{Acknowledgements}
\label{sec:acknowledgements}
Many thanks to Stephen Griffeth for pointing out several small errors
in an earlier version of the paper, and to Peng Shan, Michaela
Varagnolo, \'Eric Vasserot, Liron Speyer and Christopher Bowman-Scargill for useful discussions.
\section{Type O}
\subsection{Hecke algebras}
We will follow the conventions of \cite{BKKL} concerning Hecke
algebras; we only consider the non-degenerate case, since we have no
motivation for changing Brundan and Kleshchev's formalism in the
degenerate case, where there is no parameter to deform.
Our basic object is $\mH_h$, the affine Hecke algebra. Fix an element
$q\in \mathbbm{k}\setminus\{0,1\}$; let $e$ be the multiplicative order of $q$
(which may be $\infty$). One of the key points where we wish to
generalize Brundan and Kleshchev's approach is that we wish to allow
infinitesimal generalizations of the parameter in the Hecke algebra.
That is, we let $d(h)=1+d_1h+\cdots$ be a formal power series in
$\mathbbm{k}[[h]]$, and let $\mathsf{q}=qd(h)$.
In practice, we'll only be interested
in the cases $d(h)=1$ and $d(h)=\exp(h)$ (which requires $\operatorname{char}(\mathbbm{k})=0$).
\excise{Throughout, we'll let $\operatorname{hexp}(h)$ denote the Artin-Hasse exponential in
$\mathbbm{k}$, that is, the formal power series
\[\operatorname{hexp}(h)=\prod_{\operatorname{char}(\mathbbm{k})\nmid n}(1-t^n)^{-\mu(n)/n}=
\begin{cases}
\exp(h) & \operatorname{char}(\mathbbm{k})=0\\
\exp(h+h^p/p+h^{p^2}/p^2+\cdots ) & \operatorname{char}(\mathbbm{k})=p
\end{cases}
\]
The second formula we give looks strange, since neither the power
series $\exp$ or the power series we apply it to has denominators
coprime to $p$, but their composition (calculated in $\mathbb{Q}$) does, and
thus makes sense in $\mathbbm{k}$. The important property this power series
has is that the power series $\operatorname{hexp}(x)\operatorname{hexp}(y)$ is divisible by
$\operatorname{hexp}(x+y)$, with the quotient given by a power series in $x$ and $y$
with integral coefficients (of course, when $\operatorname{char}(\mathbbm{k})=0$, this
quotient is $1$). }
The algebra $\mH_h$ is generated by $\{X_1^{\pm 1},\dots,X_n^{\pm 1}
\}\cup \{T_1,\dots, T_{n-1}\}$ with the relations:
\begin{align*}
X_r^{\pm 1} X_s^{\pm 1}&= X_s^{\pm 1} X_r^{\pm 1}& T_r^2&=(\mathsf{q}-1)T_r
+\mathsf{q}\\
T_rX_rT_r&=\mathsf{q} X_{r+1} & T_rT_{r+1}T_r&=T_{r+1}T_r T_{r+1}\\
T_rX_s&= X_sT_r\qquad (r\neq s,s+1)&T_rT_s&=T_sT_r\qquad (r \neq
s\pm 1)
\end{align*}
In this paper, we'll rely heavily on a diagrammatic visualization of
this algebra.
\begin{definition}
Let a {\bf type O diagram} be a collection of curves in
$\mathbb{R}\times [0,1]$ with each curve mapping diffeomorphically to
$[0,1]$ via the projection to the $y$-axis. Each curve is allowed
to carry any number of squares or formal inverses of squares. We assume that these curves have no
triple points or tangencies, that no squares lie on crossings, and we consider these diagrams up to isotopies that
preserve these conditions.
\end{definition}
As usual, we can compose these by taking $ab$ to be the diagram where
we place $a$ on top of $b$ and attempt to match up the bottom of $a$
and top of $b$. If the number of strands is the same, the result is
unique up to isotopy, and if it is different, we formally declare the
result to be $0$.
The {\bf type O affine Hecke algebra} is the quotient of the
span of these diagrams over $\mathbbm{k}[[h]]$ by the relations:
\newseq
\begin{equation*}\subeqn\label{Hecke-1}
\begin{tikzpicture}[scale=.7,baseline]
\draw[very thick,green!50!black](-3,0) +(-1,-1) -- +(1,1); \draw[very thick,green!50!black](-3,0) +(1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(-1,1) ;
\node at (-1.5,0){$-$}; \draw[very thick,green!50!black](0,0) +(-1,-1) -- +(1,1); \draw[very thick,green!50!black](0,0) +(1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}
+(-1,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[scale=.7,baseline]
\draw[very thick,green!50!black](-3,0) +(-1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}+(1,1); \draw[very thick,green!50!black](-3,0) +(1,-1) -- +(-1,1);
\node at (-1.5,0){$-$}; \draw[very thick,green!50!black](0,0) +(-1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(1,1); \draw[very thick,green!50!black](0,0) +(1,-1) -- +(-1,1)
; \node at (2,0){$=$}; \draw[very
thick,green!50!black](4,0) +(-1,-1) -- node[midway,fill=green!50!black,inner
sep=3pt]{}+(-1,1); \draw[very
thick,green!50!black](4,0) +(0,-1) -- +(0,1); \node
at (5,0){$-\,\,\mathsf{q}$}; \draw[very thick,green!50!black](7,0) +(-1,-1) -- +(-1,1)
; \draw[very thick,green!50!black](7,0) +(0,-1) --
node[midway,fill=green!50!black,inner sep=3pt]{} +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{Hecke-2}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw(-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) ;
\end{tikzpicture}\hspace{4mm}
= (1+\mathsf{q})\hspace{5mm}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-2.8,-1)--(-1.2,1); \draw (-2.8,1)--(-1.2,-1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{Hecke-triple}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(-1,1); \draw
(-3,0) +(-1,-1) -- +(1,1) ; \draw
(-3,0) +(0,-1) .. controls +(-1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}-\hspace{4mm}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (1,0) +(1,-1) -- +(-1,1)
; \draw (1,0) +(-1,-1) -- +(1,1)
; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}
= \mathsf{q} \hspace{4mm} \begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(1,1); \draw
(-3,0) +(-1,-1) -- +(0,1) ; \draw
(-3,0) +(0,-1) -- +(-1,1);
\end{tikzpicture}\hspace{4mm}-\mathsf{q} \hspace{4mm} \begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(-1,-1) -- +(-1,1); \draw
(-3,0) +(1,-1) -- +(0,1) ; \draw
(-3,0) +(0,-1) -- +(1,1);
\end{tikzpicture}
\end{equation*}
It may not be immediately clear what the value of this graphical
presentation is. However, this perspective
will lead us to generalizations of the affine Hecke algebra which we
call types W, F, and WF.
\begin{theorem}\label{type-O-Hecke}
The algebra $\hmH_h$ is isomorphic to the type O Hecke
algebra via the map sending $T_r+1$ to the crossing of the $r$th and
$r+1$st strands, and $X_r$ to the square on the $r$th strand, as shown below:
\begin{equation}\label{Hecke-gens}
\tikz[baseline]{
\node[label=below:{$X_j$}] at (0,0){
\tikz[very thick,xscale=1.2,green!50!black]{
\draw (-.5,-.5)-- (-.5,.5);
\draw (.5,-.5)-- (.5,.5) node [midway,fill=green!50!black,inner
sep=2.5pt]{};
\draw (1.5,-.5)-- (1.5,.5);
\node at (1,0){$\cdots$};
\node at (0,0){$\cdots$};
}
};
\node[label=below:{$T_j+1$}] at (4.5,0){
\tikz[very thick,xscale=1.2, green!50!black]{
\draw (-.5,-.5)-- (-.5,.5);
\draw (.1,-.5)-- (.9,.5);
\draw (.9,-.5)-- (.1,.5);
\draw (1.5,-.5)-- (1.5,.5);
\node at (1,0){$\cdots$};
\node at (0,0){$\cdots$};
}
};
}\vspace{-2mm}
\end{equation}
\end{theorem}
\begin{proof}
\newseq
We'll freely use the relations given in \cite[\S 4]{BKKL}. The
equations (\ref{Hecke-1}--\ref{Hecke-triple}) become the relations:
\begin{align*}
X_r(T_r+1)-(T_r+1)X_{r+1}
&=T_rX_{r+1}+(1-\mathsf{q})X_{r+1}+X_r-T_rX_{r+1}-X_{r+1} \\
&=X_r-\mathsf{q} X_{r+1}\\
X_{r+1}(T_r+1)-(T_r+1)X_{r}
&=T_rX_{r}+(\mathsf{q}-1)X_{r+1}+X_{r+1}-T_rX_{r}-X_{r}\\
&=\mathsf{q} X_{r+1}-X_r\\
(T_r+1)^2&=T_r^2+2T_r+1\\
&=(\mathsf{q}-1)T_r+\mathsf{q}+2T_r+1\\
&=(1+\mathsf{q})(T_r+1)
\end{align*}
\begin{align*}
(T_r+1)(T_{r+1}+1)(T_r+1)-(T_{r+1}+1)(T_{r}+1)(T_{r+1}+1)&=
T_r^2+T_r-T_{r+1}^2 -T_{r+1}\\
&=\mathsf{q}(T_r-T_{r+1})
\end{align*}
Similarly, one can easily derive the relations of the affine Hecke algebra
from the diagrammatic ones given above. This shows that we have an
isomorphism.\excise{
Of course, we can just check the relations directly for the
$-$-isomorphism as well, but it is easier to recall that there is an
anti-automorphism of the Hecke algebra sending
$T_r\mapsto -qT_{n-r-1}^{-1}$ and $X_r\mapsto q^{2r-2} X_{n-r}$. One
can easily check that this sends
$T_r+1\mapsto
-T_{n-r-1}+q$.}
\end{proof}
Note that if we instead sent the element $T_i-\mathsf{q}$ to the crossing, we
would obtain quite similar, but subtly different relations:
\begin{equation*}\subeqn\label{qHecke-1}
\begin{tikzpicture}[scale=.7,baseline,green!50!black]
\draw[very thick](-3,0) +(-1,-1) -- +(1,1); \draw[very thick](-3,0) +(1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(-1,1) ;
\node at (-1.5,0){$-$}; \draw[very thick](0,0) +(-1,-1) -- +(1,1); \draw[very thick](0,0) +(1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}
+(-1,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[scale=.7,baseline,green!50!black]
\draw[very thick](-3,0) +(-1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}+(1,1); \draw[very thick](-3,0) +(1,-1) -- +(-1,1);
\node at (-1.5,0){$-$}; \draw[very thick](0,0) +(-1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(1,1); \draw[very thick](0,0) +(1,-1) -- +(-1,1)
; \node at (2,0){$=$}; \draw[very
thick](4,0) +(-1,-1) -- +(-1,1); \draw[very
thick](4,0) +(0,-1) -- node[midway,fill=green!50!black,inner
sep=3pt]{}+(0,1); \node[black]
at (5,0){$-\,\,\mathsf{q}$}; \draw[very thick](7,0) +(-1,-1) -- node[midway,fill=green!50!black,inner sep=3pt]{}+(-1,1)
; \draw[very thick](7,0) +(0,-1) --
+(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{qHecke-2}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw(-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) ;
\end{tikzpicture}\hspace{4mm}
= -(1+\mathsf{q})\hspace{5mm}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-2.8,-1)--(-1.2,1); \draw (-2.8,1)--(-1.2,-1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{qHecke-triple}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(-1,1); \draw
(-3,0) +(-1,-1) -- +(1,1) ; \draw
(-3,0) +(0,-1) .. controls +(-1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}-\hspace{4mm}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (1,0) +(1,-1) -- +(-1,1)
; \draw (1,0) +(-1,-1) -- +(1,1)
; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}
= \mathsf{q} \hspace{4mm} \begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(1,1); \draw
(-3,0) +(-1,-1) -- +(0,1) ; \draw
(-3,0) +(0,-1) -- +(-1,1);
\end{tikzpicture}\hspace{4mm}-\mathsf{q} \hspace{4mm} \begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(-1,-1) -- +(-1,1); \draw
(-3,0) +(1,-1) -- +(0,1) ; \draw
(-3,0) +(0,-1) -- +(1,1);
\end{tikzpicture}
\end{equation*}
Our first task is to describe the completions that are of interest to
us. Consider a finite subset
$U\subset \mathbbm{k}\setminus\{0\}$; as before, we endow this with a graph structure by adding an edge
from $u$ to $u'$ if $u'=qu$. Note that for $U$
chosen generically there will simply be no edges, and that under this
graph structure $U$ will always be a union of segments and cycles with $e$
nodes (if $e<\infty$).
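To illustrate the graph structure (an example of our choosing, not from the text), take $\mathbbm{k}=\mathbb{F}_7$ and $q=3$, which has multiplicative order $e=6$; then $U=\mathbbm{k}\setminus\{0\}$ is a single $6$-cycle, as described above. The following sketch checks this directly with exact modular arithmetic:

```python
# Illustrative sketch (ours): over k = F_7 the element q = 3 has
# multiplicative order e = 6, and U = k \ {0}, with an edge from u to q*u,
# forms a single 6-cycle.
p, q = 7, 3
U = list(range(1, p))
step = {u: (q * u) % p for u in U}   # the edge u -> q*u

# follow the edges starting at 1; we should return after exactly e = 6 steps
cycle = [1]
while (nxt := step[cycle[-1]]) != 1:
    cycle.append(nxt)

assert sorted(cycle) == U   # the cycle visits every vertex of U
assert len(cycle) == 6      # ...and has length e = 6
```

For generic $U$ (no ratio of elements equal to $q$) the same construction produces a graph with no edges at all.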
Consider the ideal $\mathcal{I}$ in $\mathbbm{k}[X_i^{\pm 1}][[h]]$ generated by $h$
and $\prod_{u\in U}(X_i-u)$.
We thus have a nested sequence of ideals in $\mH_h$ given by
$\mathcal{J}_n=\mH_h\cdot \mathcal{I}^n \cdot\mH_h$.
\begin{definition}
The topological algebra $\hmH_h$ is the completion of $\mH_h$ with
respect to the sequence $\mathcal{J}_n$.
\end{definition}
The element $X_i$ acts on each quotient
$\mH_0/\mathcal{J}_n$ with minimal polynomial dividing $\prod_{u\in
U}(X_i-u)^n$ and thus with
spectrum $U$; each such quotient is the sum of the simultaneous stable kernels
of the elements $X_i-u_i$ for $(u_1,\dots, u_n)\in U^n$. Thus,
$\hmH_h$ is the sum of the simultaneous stable kernels of these elements,
interpreted topologically.
Consider a vector $\mathbf{u}=(u_1,\dots, u_n)\in U^n$;
if we let
\[e_{\mathbf{u}}\hmH_h=\{ g\in \hmH_h \mid \text{for each $m$},
(X_j-{u_j})^Ng\in \mathcal{J}_m\text{ for $N\gg 0$}\},\]
then we have that $\hmH_h=\bigoplus_{\mathbf{u}\in U^n}e_{\mathbf{u}}\hmH_h$. Since
the projection to one of these subspaces commutes with right
multiplication, there is an idempotent $e_{\mathbf{u}}\in \hmH_h$ whose
action gives
this projection (as the notation suggests).
The Hecke algebra $\mH$ has a natural signed polynomial representation $\mathcal{P}^-$;
this is generated by an element $\text{\textswab{1}}$ subject to the relation
$T_i \text{\textswab{1}}=-\text{\textswab{1}}$. One can calculate that the action of $T_i$ on
$F \text{\textswab{1}}$
for any Laurent polynomial $F$ is given by
\[T_i F\text{\textswab{1}}= -F^{s_i}\text{\textswab{1}}+ (1-\mathsf{q})
X_{i+1}\frac{F^{s_i}-F}{X_{i+1}-X_i}\text{\textswab{1}}.\] Thus, we have that
\[(T_i+1) F\text{\textswab{1}} =\frac{X_i-\mathsf{q} X_{i+1}}{X_{i+1}-X_i} (F^{s_i}-F)\text{\textswab{1}}.\]
The Hecke algebra acts faithfully on this
representation, so we can identify the affine Hecke algebra with a
subalgebra of operators on $\mathcal{P}^-$.
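As a sanity check (ours, not needed for the faithfulness claim), one can verify in a computer algebra system that the operator $(T_i+1)$ given by this formula satisfies the quadratic relation (\ref{Hecke-2}) and the dot-sliding relation (\ref{Hecke-1}) on sample polynomials. A minimal SymPy sketch, with two variables standing in for $X_i,X_{i+1}$ (all names below are our own):

```python
# Sketch (ours): check that (T_i + 1) F = (X_i - q X_{i+1})/(X_{i+1}-X_i) (F^{s_i}-F)
# satisfies (T_i+1)^2 = (1+q)(T_i+1) and X_i(T_i+1) - (T_i+1)X_{i+1} = X_i - q X_{i+1}.
import sympy

X1, X2, q = sympy.symbols('X1 X2 q')

def swap(F):
    # the simple reflection s_i, exchanging X_i and X_{i+1}
    return F.subs({X1: X2, X2: X1}, simultaneous=True)

def T1(F):
    # action of T_i + 1 on F in the signed polynomial representation
    return sympy.cancel((X1 - q*X2) / (X2 - X1) * (swap(F) - F))

F = X1**3 + 2*X1*X2 + 5*X2**2   # an arbitrary test polynomial
# quadratic relation: (T_i+1)^2 = (1+q)(T_i+1)
assert sympy.expand(T1(T1(F)) - (1 + q)*T1(F)) == 0
# dot-sliding relation: X_i(T_i+1)F - (T_i+1)(X_{i+1}F) = (X_i - q X_{i+1})F
assert sympy.expand(X1*T1(F) - T1(X2*F) - (X1 - q*X2)*F) == 0
```

Both identities hold for arbitrary Laurent polynomials $F$, as one sees by the computation $G^{s_i}-G=(1+\mathsf{q})(F^{s_i}-F)$ for $G=(T_i+1)F$.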
Similarly, there is an unsigned polynomial representation $\mathcal{P}^+$
generated by an element $\text{\textswab{1}}^+$ satisfying $T_i\text{\textswab{1}}^+=\text{\textswab{1}}^+$.
The action of $\mH$ in this case is given by the formula
\[T_iF\text{\textswab{1}}^+=\mathsf{q} F^{s_i}\text{\textswab{1}}^++(1-\mathsf{q})X_{i+1}\frac{F^{s_i}-F}{X_{i+1}-X_i}\text{\textswab{1}}^+\]
so we have that
\[(T_i-\mathsf{q})F\text{\textswab{1}}^+=\frac{X_{i+1}-\mathsf{q} X_i}{X_{i+1}-X_i}( F^{s_i}-F)\text{\textswab{1}}^+.\]
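Analogously (again a check of our own), the operator $(T_i-\mathsf{q})$ in the unsigned representation squares to $-(1+\mathsf{q})$ times itself, matching the sign in (\ref{qHecke-2}); a SymPy sketch:

```python
# Sketch (ours): in the unsigned polynomial representation,
#   (T_i - q) F = (X_{i+1} - q X_i)/(X_{i+1} - X_i) * (F^{s_i} - F)
# squares to -(1+q) times itself.
import sympy

X1, X2, q = sympy.symbols('X1 X2 q')

def Tq(F):
    # action of T_i - q on F in the unsigned polynomial representation
    Fs = F.subs({X1: X2, X2: X1}, simultaneous=True)
    return sympy.cancel((X2 - q*X1) / (X2 - X1) * (Fs - F))

F = X1**2*X2 + 3*X1 + X2**4   # an arbitrary test polynomial
assert sympy.expand(Tq(Tq(F)) + (1 + q)*Tq(F)) == 0
```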
Of course, we can complete either of these representations as well to arrive at
$\hcP^- :=\hmH_h \otimes_{\mH_h}\mathcal{P}^-$; this representation remains faithful
after completion. The space $e_{\mathbf{u}}\hcP^-$ is isomorphic to
$\mathbbm{k}[[(X_1-{u_1}),\dots, (X_n-{u_n}),h]]$ via the action map on
$e_{\mathbf{u}}\text{\textswab{1}}$.
\excise{
\section*{Degenerate affine Hecke algebras}
We also wish to consider the case of a degenerate affine Hecke
algebra as mentioned above. This is the algebra $\mathsf{H}_m$ generated over by
$t_1,\dots, t_{m-1},$ $x_1,\dots,x_m$ with relations:
\[t_i^2=1\qquad t_it_{i\pm 1}t_i=t_{i\pm 1}t_it_{i\pm 1}
\qquad t_it_j=t_jt_i \,\, (i\neq j\pm 1)\]
\[x_ix_j=x_jx_i\qquad t_ix_it_i=x_{i+1}-t_i \qquad x_it_j=t_ix_j
\,\,(i\neq j,j+1). \] Note that unlike in the non-degenerate case,
there is no parameter which we can deform, and thus no need for the
extra parameter $h$. Instead, its role could be played (if $\mathbbm{k}$ has
characteristic $p$) by replacing $\mathbbm{k}$ with its ring of Witt vectors
(that is, the $p$-adic integers $\mathbb{Z}_p$ if $\mathbbm{k}=\mathbb{F}_p$).
Just as in the non-degenerate case, we fix a set $U\subset \mathbbm{k}$, but
now we must endow this with a different graph structure, given by
connecting $u$ and $u+1$ if both lie in $U$. Having chosen $U$, we
have a similar completion where we require the eigenvalues of $x_i$ to
live in $U$. We let $\widehat{\mathsf{H}}_m$ be this completion.
The dAHA also has a polynomial representation given by \[ t_if(x_1,\dots, x_m)=
-f^{s_i}-\frac{f^{s_i}-f}{x_{i+1}-x_i},\] which we can complete just
as in the non-degenerate case. Thus, \[ (t_i+1)f(x_1,\dots, x_m)=
(1+x_i-x_{i+1})\frac{f^{s_i}-f}{x_{i+1}-x_i}.\]
}
\subsection{KLR algebras}
We wish to define a similar completion of the KLR algebra $R_h$ for the
graph $U$. We use the conventions of Brundan and Kleshchev, but we
record the relations we need here for the sake of completeness and to
match our slightly more general context. The algebra $R_h$ is
generated over $\mathbbm{k}[h]$
by elements $\{e(\mathbf{u})\}_{\mathbf{u}\in U^n}\cup \{y_1,\dots, y_n\}\cup \{\psi_1,\dots, \psi_{n-1}\}$
subject to the relations:
\begin{align*}
e(\mathbf{u}) e(\Bv) &= \delta_{\mathbf{u},\Bv} e(\mathbf{u});
\hspace{53mm}{\sum_{\mathbf{u} \in U^n}} e(\mathbf{u}) = 1;\\
y_r e(\mathbf{u}) &= e(\mathbf{u}) y_r;
\hspace{53mm}\psi_r e(\mathbf{u}) = e(\mathbf{u}^{s_r}) \psi_r;\\
y_r y_s &= y_s y_r;\\
\psi_r y_s &= y_s \psi_r\hspace{61.4mm}\text{if $s \neq r,r+1$};\\
\psi_r \psi_s &= \psi_s \psi_r\hspace{60.8mm}\text{if $s\neq r\pm 1$};\\
\psi_r y_{r+1} e(\mathbf{u})
&=
\begin{cases}
(y_r\psi_r+1)e(\mathbf{u}) &\hbox{if $u_r=u_{r+1}$},\\
y_r\psi_r e(\mathbf{u}) \hspace{48mm}&\hbox{if $u_r\neq u_{r+1}$};
\end{cases}\\
y_{r+1} \psi_re(\mathbf{u}) &=
\begin{cases}
(\psi_r y_r+1) e(\mathbf{u})
&\hbox{if $u_r=u_{r+1}$},\\
\psi_r y_r e(\mathbf{u}) \hspace{48mm}&\hbox{if $u_r\neq u_{r+1}$};\\
\end{cases}\\
\psi_r^2e(\mathbf{u}) &=
\begin{cases}
0 \hspace{61mm}&\text{if $u_r = u_{r+1}$},\\
e(\mathbf{u})&\text{if $u_r \neq q^{\pm 1}u_{r+1},u_{r+1}$},\\
(y_{r+1}-y_r+h)e(\mathbf{u})&\text{if $u_r = q^{-1}u_{r+1}, q\neq -1$},\\
(y_r - y_{r+1}+h)e(\mathbf{u})&\text{if $u_r = qu_{r+1}, q\neq -1$},\\
(y_r - y_{r+1}+h)(y_{r+1} - y_{r}+h) e(\mathbf{u})&\text{if $u_r=-u_{r+1}, q=-1$};
\end{cases}
\end{align*}\begin{align*}
\psi_{r}\psi_{r+1} \psi_{r} e(\mathbf{u})
&=
\begin{cases}
(\psi_{r+1} \psi_{r} \psi_{r+1} +1)e(\mathbf{u})&\text{if $u_r = u_{r+2}=q^{-1}u_{r+1}, q\neq -1$},\\
(\psi_{r+1} \psi_{r} \psi_{r+1} -1)e(\mathbf{u})&\text{if $u_r = u_{r+2}=qu_{r+1}, q\neq -1$},\\
\big(\psi_{r+1} \psi_{r} \psi_{r+1} -2y_{r+1} +y_r+y_{r+2}\big)e(\mathbf{u})
&\text{if $u_r = u_{r+2}=-u_{r+1}, q= -1$},\\
\psi_{r+1} \psi_{r} \psi_{r+1} e(\mathbf{u})&\text{otherwise}.
\end{cases}
\end{align*}
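In the case $u_r=u_{r+1}$, these relations are modeled on divided difference operators. Assuming the standard polynomial representation of the KLR algebra (see \cite{KLI}; not recalled in this section), in which $\psi_r$ acts on an idempotent with $u_r=u_{r+1}$ by the divided difference operator, a short SymPy sketch (names ours) checks two of the relations above:

```python
# Sketch under an assumption (the polynomial representation from the
# literature, not recalled here): for u_r = u_{r+1}, psi_r acts by the
# divided difference operator. We check psi_r^2 e(u) = 0 and
# psi_r y_{r+1} e(u) = (y_r psi_r + 1) e(u).
import sympy

y1, y2 = sympy.symbols('y1 y2')

def psi(F):
    # divided difference operator: (F^{s_r} - F)/(y_r - y_{r+1})
    Fs = F.subs({y1: y2, y2: y1}, simultaneous=True)
    return sympy.cancel((Fs - F) / (y1 - y2))

F = y1**3 + 4*y1*y2**2 + y2   # an arbitrary test polynomial
assert sympy.expand(psi(psi(F))) == 0                 # psi_r^2 = 0
assert sympy.expand(psi(y2*F) - (y1*psi(F) + F)) == 0  # psi_r y_{r+1} = y_r psi_r + 1
```

The first identity holds because $\psi(F)$ is always a symmetric polynomial, on which the divided difference vanishes.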
\newseq
Just as in the Hecke case, there is a graphical
presentation for the KLR algebra. Since this is covered in \cite{KLI}
and numerous other sources, we'll just record the relations here for convenience:
\begin{equation*}\subeqn\label{first-QH}
\begin{tikzpicture}[scale=.8,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$v$}; \fill (-4.5,.5) circle (3pt);
\node at (-2,0){=}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$v$}; \fill (.5,-.5) circle (3pt);
\node at (4,0){unless $u=v$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn
\begin{tikzpicture}[scale=.8,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$v$}; \fill (-3.5,.5) circle (3pt);
\node at (-2,0){=}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$v$}; \fill (-.5,-.5) circle (3pt);
\node at (4,0){unless $u=v$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{nilHecke-1}
\begin{tikzpicture}[scale=.8,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$u$}; \fill (-4.5,.5) circle (3pt);
\node at (-2,0){$-$}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$u$}; \fill (.5,-.5) circle (3pt);
\node at (2,0){$=$};
\end{tikzpicture}
\begin{tikzpicture}[scale=.8,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$u$}; \fill (-4.5,-.5) circle (3pt);
\node at (-2,0){$-$}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$u$}; \fill (.5,.5) circle (3pt);
\node at (2,0){$=$}; \draw[very thick](4,0) +(-1,-1) -- +(-1,1)
node[below,at start] {$u$}; \draw[very thick](4,0) +(0,-1) --
+(0,1) node[below,at start] {$u$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{black-bigon}
\begin{tikzpicture}[very thick,scale=.8,baseline]
\draw (-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
node[below,at start]{$u$}; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) node[below,at start]{$v$};
\end{tikzpicture}=\quad
\begin{cases}
0 & u=v\\
\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};
\end{tikzpicture} & u\notin \{v,qv,q^{-1}v\}\\
\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill (2,0) circle (4pt);
\end{tikzpicture}-\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill (1,0) circle (4pt);
\end{tikzpicture}+h \begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};
\end{tikzpicture}& u=q^{-1}v,q\neq -1\\
\begin{tikzpicture}[very thick,baseline=-3pt,scale=.6]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill (1,0) circle (4pt);
\end{tikzpicture}-\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill (2,0) circle (4pt);
\end{tikzpicture}+h \begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};
\end{tikzpicture}& u=qv,q\neq -1\\
- \Bigg(\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill
(1,0) circle (4pt); \node at (.5,0) {$2$};
\end{tikzpicture}\Bigg)+2 \Bigg(\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill (2,0) circle (4pt);\fill (1,0) circle (4pt);
\end{tikzpicture}\Bigg)- \Bigg(\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\fill
(2,0) circle (4pt); \node at (2.5,0) {$2$}; \end{tikzpicture}\Bigg)+h^2 \Bigg(\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (2,0) +(0,-1) -- +(0,1) node[below,at start]{$v$};
\draw (1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};
\end{tikzpicture}\Bigg)& u=-v,q=-1
\end{cases}
\end{equation*}
\begin{equation*}\subeqn\label{triple-dumb}
\begin{tikzpicture}[very thick,scale=.8,baseline=-3pt]
\draw (-2,0) +(1,-1) -- +(-1,1) node[below,at start]{$w$}; \draw
(-2,0) +(-1,-1) -- +(1,1) node[below,at start]{$u$}; \draw
(-2,0) +(0,-1) .. controls +(-1,0) .. +(0,1) node[below,at
start]{$v$}; \node at (-.5,0) {$-$}; \draw (1,0) +(1,-1) -- +(-1,1)
node[below,at start]{$w$}; \draw (1,0) +(-1,-1) -- +(1,1)
node[below,at start]{$u$}; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1) node[below,at start]{$v$}; \end{tikzpicture}=\quad
\begin{cases}
\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (6.2,0)
+(1,-1) -- +(1,1) node[below,at start]{$w$}; \draw (6.2,0)
+(-1,-1) -- +(-1,1) node[below,at start]{$u$}; \draw (6.2,0)
+(0,-1) -- +(0,1) node[below,at
start]{$v$}; \end{tikzpicture}& u=w=qv,q\neq -1\\
-\begin{tikzpicture}[very thick,scale=.6,baseline]
\draw (6.2,0)
+(1,-1) -- +(1,1) node[below,at start]{$w$}; \draw (6.2,0)
+(-1,-1) -- +(-1,1) node[below,at start]{$u$}; \draw (6.2,0)
+(0,-1) -- +(0,1) node[below,at
start]{$v$}; \end{tikzpicture}& u=w=q^{-1}v,q\neq -1\\
-\begin{tikzpicture}[very thick,scale=.6,baseline=-3pt]
\draw (0,0)
+(1,-1) -- +(1,1) node[below,at start]{$w$}; \draw (0,0)
+(-1,-1) -- +(-1,1) node[below,at start]{$u$}; \draw (0,0)
+(0,-1) -- +(0,1) node[below,at
start]{$v$}; \fill
(-1,0) circle (4pt); \end{tikzpicture}-\begin{tikzpicture}[very thick,scale=.6,baseline]
\draw (0,0)
+(1,-1) -- +(1,1) node[below,at start]{$w$}; \draw (0,0)
+(-1,-1) -- +(-1,1) node[below,at start]{$u$}; \draw (0,0)
+(0,-1) -- +(0,1) node[below,at
start]{$v$}; \fill
(1,0) circle (4pt); \end{tikzpicture}& u=w=-v,q= -1\\
\end{cases}
\end{equation*}
We let $I$ be the ideal in $\mathbb{C}[y_1,\dots,y_n,h]$ generated by $h$ and $y_i$ for
all $i$; consider the ideals $J_n=R_h\cdot I^n \cdot R_h$. Let $\hR_h$ be the completion of $R_h$
with respect to the system of ideals $J_n$.
The algebra $R_h$ also has a natural polynomial representation, defined
by Rouquier
in \cite[\S 3.2]{Rou2KM} and Khovanov and Lauda in \cite[\S 2.3]{KLI}.
This representation is generated
by a single element $\mathbbm{1}$, with the relations
\[ \psi_ke_{\mathbf{u}}\mathbbm{1}=
\begin{cases}
0 & u_k=u_{k+1}\\
(y_{k+1}-y_k+h)e_{\mathbf{u}^{s_k}}\mathbbm{1} & u_k=qu_{k+1}\\
e_{\mathbf{u}^{s_k}}\mathbbm{1} & u_k\neq u_{k+1},qu_{k+1}.
\end{cases}
\]
Just as in the Hecke algebra, the action of $\psi_k$ on arbitrary
polynomials can be written in terms of Demazure operators. For a
polynomial $f\in \mathbb{C}[[h]][y_1,\dots, y_n]$, we can describe the action
as
\[ \psi_k f e_{\mathbf{u}}\mathbbm{1}=
\begin{cases}
\displaystyle \frac{f^{s_k}-f}{y_{k+1}-y_k} e_{\mathbf{u}}\mathbbm{1}& u_k=u_{k+1}\\
(y_{k+1}-y_k+h)f^{s_k}e_{\mathbf{u}^{s_k}}\mathbbm{1} & u_k=qu_{k+1}\\
f^{s_k} e_{\mathbf{u}^{s_k}}\mathbbm{1} & u_k\neq u_{k+1},qu_{k+1}.
\end{cases}
\]
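When $u_k=u_{k+1}$, the operator above is the classical divided-difference (Demazure) operator, and the facts used repeatedly below, $\partial_k^2=0$ and the dot-slide identity $\partial_k\circ y_k=y_{k+1}\circ\partial_k+1$, are easy to verify symbolically. The following sketch (not from the paper; it assumes sympy, with two variables standing in for $y_k,y_{k+1}$) checks both on a sample polynomial:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')

def demazure(f):
    """Divided-difference operator (f^{s} - f)/(y2 - y1),
    where s swaps y1 and y2."""
    fs = f.subs({y1: y2, y2: y1}, simultaneous=True)
    return sp.cancel((fs - f) / (y2 - y1))

f = y1**3*y2 + 2*y1*y2**2        # an arbitrary test polynomial

# partial^2 = 0: the output of demazure is symmetric in y1, y2
assert demazure(demazure(f)) == 0

# dot-slide: partial(y1 * f) = y2 * partial(f) + f
assert sp.expand(demazure(y1*f) - (y2*demazure(f) + f)) == 0
```

Both identities hold for any polynomial substituted for `f`, not just the sample above.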
As in the Hecke algebra case, we let $\widehat{P}\cong \hR_h\otimes_{R_h}P$
be the completion of this polynomial representation.
\subsection{Isomorphisms}
Let $b(h)\in 1+h+h^2\mathbbm{k}[[h]]$ be a formal power series; if $d(h)=e^h$, we
assume that $b(h)=e^h$.
Our approach will match Brundan and Kleshchev's if we choose
$b(h)=1+h$.
Let $\gamma_p\colon \widehat{\mathcal{P}}^-\to \widehat{P}$ be the vector space
isomorphism\footnote{Here the subscript is not a parameter, but
distinguishes this map from an isomorphism of algebras we'll define later.} defined by
\[ \gamma_p((u_1^{-1}X_1)^{a_1}\cdots (u_n^{-1}X_n)^{a_n}e_{\mathbf{u}})=
b(y_1)^{a_1}\cdots b(y_n)^{a_n}e_{\mathbf{u}}.\]
In particular, the operator of multiplication by $X_i$ on $e_{\mathbf{i}} \widehat{\mathcal{P}}^-$ is sent to
multiplication by $u_ib(y_i)$.
Just as in Brundan and Kleshchev, it will be convenient for us to
use different generators for $\hmH_h$. Let \[\Phi_r:=
T_r+\sum_{\mathbf{u}\text{ s.t. }u_r\neq
u_{r+1}}\frac{1-\mathsf{q}}{1-X_rX_{r+1}^{-1}}e_{\mathbf{u}}+\sum_{\mathbf{u}\text{ s.t. }u_r=
u_{r+1}}e_{\mathbf{u}}\]
Let \[\varphi_r(y_r,y_{r+1})=\frac{u_rb(y_r)-\mathsf{q}
u_{r+1}b(y_{r+1})}{u_{r+1}b(y_{r+1}) -u_rb(y_r)}\]
and \[A^{\mathbf{u}}_{r}=
\begin{cases}
\displaystyle
\varphi_r(y_r,y_{r+1})(y_{r+1} -y_r)
& u_r =u_{r+1}\\
\displaystyle \frac{\varphi_r(y_r,y_{r+1})}{y_{r+1}-y_r+h}&
u_r=qu_{r+1}, d(h)=e^h\\
\displaystyle \frac{\varphi_r(y_r,y_{r+1})}{y_{r+1}-y_r}&
u_r=qu_{r+1}, d(h)=1\\
\displaystyle \varphi_r(y_r,y_{r+1})&
u_r\neq u_{r+1}, qu_{r+1}.
\end{cases}
\]
\begin{proposition}\label{O-isomorphism}
The isomorphism $\gamma_p$ induces an isomorphism $\gamma\colon \hmH_h\cong
\hR_h$ such that \[\gamma(X_r)=\sum_{\mathbf{u}}u_rb(y_r)e_{\mathbf{u}} \qquad \gamma(\Phi_r)=\sum_{\mathbf{u}}
A_r^{\mathbf{u}}\psi_re_{\mathbf{u}}\]
which intertwines these two representations, if either $d(h)=1$ (and
$b(h)$ is arbitrary) or
$d(h)=b(h)=e^h$.
\end{proposition}
\begin{proof}
Denote the action of $S_n$ on $U^n$ by $\mathbf{u}\mapsto \mathbf{u}^s$ for $s\in
S_n$; as usual, we let $s_i=(i, i+1)$.
We
can easily calculate that
\begin{align*}
\Phi_r e_{\mathbf{u}} \text{\textswab{1}} &=
\begin{cases}
\displaystyle \frac{X_r-\mathsf{q} X_{r+1}}{X_{r+1}-X_r}
e_{\mathbf{u}^{s_r}} \text{\textswab{1}}& u_r\neq u_{r+1}\\
0 & u_r= u_{r+1}
\end{cases}\\
\Phi_r (X_{r+1}-X_r) e_{\mathbf{u}}\text{\textswab{1}}&=
\begin{cases}
(\mathsf{q} X_{r+1}-X_r)e_{\mathbf{u}^{s_r}}\text{\textswab{1}} & u_r\neq u_{r+1}\\
2(\mathsf{q} X_{r+1}-X_r)e_{\mathbf{u}}\text{\textswab{1}} & u_r= u_{r+1}
\end{cases}
\end{align*}
Using the commutation of $\Phi_r$ with symmetric Laurent polynomials
in the $X_i^{\pm 1}$'s, we obtain a general form of action of this
operator on an
arbitrary Laurent polynomial $F\in \mathbbm{k}[h,X_1^{\pm 1},\dots, X_n^{\pm
1}]$. We let $F^{s_r}(X_1,\dots,X_n)=F(X_1,\dots,X_{r+1},X_r,\dots, X_n)$.
\begin{equation}
\Phi_r F(X_1,\dots,X_n) e_{\mathbf{u}} \text{\textswab{1}} =
\begin{cases}
\displaystyle \frac{X_r-\mathsf{q} X_{r+1}}{X_{r+1}-X_r}F^{s_r}
e_{\mathbf{u}^{s_r}} \text{\textswab{1}}& u_r\neq u_{r+1}\\
\displaystyle\frac{X_{r}- \mathsf{q} X_{r+1}}{X_{r+1}-X_r}(F^{s_r}-F) e_{\mathbf{u}}
\text{\textswab{1}} & u_r= u_{r+1}
\end{cases}\label{eq:1}
\end{equation}
Now, consider how this operator acts if we intertwine with the
isomorphism $\gamma_p$; substituting into the formulas \eqref{eq:1},
we obtain that for a power series $f\in \mathbbm{k}[[h,y_1,\dots, y_n]]$,
\[\gamma(\Phi_r) f(y_1,\dots, y_n)e_{\mathbf{u}}\mathbbm{1} =\begin{cases}
\displaystyle \frac{u_r(1+b(y_r))-qd(h) u_{r+1}(1+b(y_{r+1}))}{u_{r+1}(1+b(y_{r+1})) -u_r(1+b(y_r))}f^{s_r}
e_{\mathbf{u}^{s_r}} \mathbbm{1}& u_r\neq u_{r+1}\\
\displaystyle \frac{1-qd(h)+b(y_r)-qd(h)b(y_r)}{b(y_{r+1}) -b(y_r)}
( f^{s_r}-f )e_{\mathbf{u}}\mathbbm{1}& u_r= u_{r+1}
\end{cases}\]
Note that:
\begin{itemize}
\item $\varphi_r(y_r,y_{r+1})$ is an invertible element of
$\mathbbm{k}[[h,y_r,y_{r+1}]]$ if and only if $u_r\neq qu_{r+1},u_{r+1}$.
\item If
$u_r= qu_{r+1}$ and
$d(h)=1$, then it is
just \[\varphi_r(y_r,y_{r+1})=(y_r-y_{r+1})\frac{\beta(y_r,y_{r+1})}{q^{-1} -1+q^{-1}b(y_{r+1}) -b(y_r)}.\]
The fraction is an invertible power series, since both the numerator
and denominator have non-zero constant terms.
Similarly, if $u_r= qu_{r+1}$, $d(h)=e^h$, and $b(h)=e^h-1$, then
\[\varphi_r(y_r,y_{r+1})= \frac{e^{y_r}-e^{y_{r+1}+h}}{q^{-1} -1+q^{-1}e^{y_{r+1}} -e^{y_r}}=(y_r-y_{r+1}-h)\frac{\beta(y_r,y_{r+1}+h)}{q^{-1} -1+q^{-1}e^{y_{r+1}} -e^{y_r}}.\]
\item if $u_r=u_{r+1}$, then \[\varphi_r(y_r,y_{r+1})=\frac{1-qd(h)+b(y_r)-qd(h)b(y_r)}{b(y_{r+1}) -b(y_r)}=\frac{1}{y_{r+1} -y_r}\frac{1-qd(h)+b(y_r)-qd(h)b(y_r)}{\beta(y_{r+1} ,y_r)}.\]
\end{itemize}
Thus,
we immediately obtain that $A^{\mathbf{u}}_{r}\psi_r e_{\mathbf{u}}=\Phi_r e_{\mathbf{u}}$ as operators on the completed polynomial representation.
Since $A^\mathbf{u}_r$ is invertible, this immediately shows that the image
of $\hR_h$ lies in that of $\hmH_h$ and {\it vice versa}. Thus, we
obtain an induced isomorphism between these algebras.
\end{proof}
\excise{
As mentioned in the introduction, this isomorphism induces
isomorphisms between appropriate deformed cyclotomic quotients.
\begin{proposition}
The isomorphism $\gamma$ induces an isomorphism of
$\mathbbm{k}[[h,\mathbf{z}]]$-algebras $\mH^\leftarrow_{h;\mathbf{z}}\cong R^\leftarrow_{h;\mathbf{z}}$.
\end{proposition}
\begin{proof}
By
definition, \[\gamma((X_1-u_1e^{-z_{u_1;j}})e_{\mathbf{u}})=u_1(e^{-y_1}-e^{-z_{u_1;j}})=-e^{-z_{u_1;j}}B(-y_1+z_{u_1;j})
(y_1-z_{u_1;j}).\]
Since $-e^{z_{u_1;j}}B(-y_1+z_{u_1;j})$ is invertible, this shows that
$C_{u_1}(X_1)e_{\mathbf{u}}$ generates
the ideal corresponding under $\gamma$ to that generated by $c_{u_1}(y_1)e_{\mathbf{u}}$ for every $\mathbf{u}$. Since
$C_j(X_1)e_{\mathbf{u}}$ is invertible in $e_{\mathbf{u}}\hmH_{h,\mathbf{z}}e_{\mathbf{u}}$ when
$j\neq u_1$, this shows that the cyclotomic ideals in $\hmH_{h,\mathbf{z}}$
and $\hR_{h;\mathbf{z}}$ coincide. Thus, we have the desired isomorphism.
\end{proof}}
\section{Type W}
\label{sec:weight-gener}
These results can be generalized a bit further to include not just KLR
algebras but also weighted KLR algebras, a generalization introduced
by the author in \cite{WebwKLR}.
Fix a real number $\ck\neq 0$.
\begin{definition}
A {\bf type W diagram} is a diagram like a type O
diagram defined above, with the addition that we draw a dashed line $\ck$
units to the right of each strand, which we call a {\bf ghost}, and require that there are no
triple points or tangencies involving any combination of strands or
ghosts. We also only consider these equivalent if they are related by
an isotopy that avoids these tangencies and double points.
The {\bf type W affine Hecke algebra} (WAHA) $\EuScript{W}_\mathscr{B}$ for some collection $\mathscr{B}$
of finite subsets $B_i\subset \mathbb{R}$ is the $\mathbbm{k}[[h]]$-span of all type W Hecke diagrams such that the endpoints of the strands on the lines
$y=0$ and $y=1$ form a set in $\mathscr{B}$, modulo the relations:
\begin{equation*}\subeqn\label{nilHecke-2}
\begin{tikzpicture}[scale=.7,baseline,green!50!black]
\draw[very thick](-3,0) +(-1,-1) -- +(1,1); \draw[very thick](-3,0) +(1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(-1,1) ;
\node at (-1.5,0){$-$}; \draw[very thick](0,0) +(-1,-1) -- +(1,1); \draw[very thick](0,0) +(1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}
+(-1,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[scale=.7,baseline]
\draw[very thick,green!50!black](-3,0) +(-1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}+(1,1); \draw[very thick,green!50!black](-3,0) +(1,-1) -- +(-1,1);
\node at (-1.5,0){$-$}; \draw[very thick,green!50!black](0,0) +(-1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(1,1); \draw[very thick,green!50!black](0,0) +(1,-1) -- +(-1,1)
; \node[black] at (2,0){$=$}; \draw[very
thick,green!50!black](4,0) +(-1,-1) -- +(-1,1); \draw[very
thick,green!50!black](4,0) +(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{NilHecke3}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw(-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) ;
\end{tikzpicture}\hspace{4mm}
= 0\qquad \qquad
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(-1,1); \draw
(-3,0) +(-1,-1) -- +(1,1) ; \draw
(-3,0) +(0,-1) .. controls +(-1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[very thick,scale=.9,baseline,green!50!black]
\draw (1,0) +(1,-1) -- +(-1,1)
; \draw (1,0) +(-1,-1) -- +(1,1)
; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}
\end{equation*}
\[ \subeqn\label{green-ghost-bigon1}
\begin{tikzpicture}[very thick,xscale=1.3,baseline=25pt,green!50!black]
\draw (1,0) to[in=-90,out=90] (1.5,1) to[in=-90,out=90] (1,2)
;
\draw[dashed] (1.5,0) to[in=-90,out=90] (1,1) to[in=-90,out=90] (1.5,2);
\draw (2.5,0) to[in=-90,out=90] (2,1) to[in=-90,out=90] (2.5,2);
\node[black] at (3,1) {$=$};
\draw (3.7,0) --node[midway,fill,inner sep=3pt]{} (3.7,2)
;
\draw[dashed] (4.2,0) to (4.2,2);
\draw (5.2,0) -- (5.2,2);
\node[black] at (5.6,1) {$-\mathsf{q}$};
\draw (6.2,0) -- (6.2,2);
\draw[dashed] (6.7,0)-- (6.7,2);
\draw (7.7,0) -- node[midway,fill,inner sep=3pt]{} (7.7,2);
\end{tikzpicture}
\]
\[ \subeqn\label{green-ghost-bigon2}
\begin{tikzpicture}[very thick,xscale=1.3,baseline=25pt,green!50!black]
\draw[dashed] (1,0) to[in=-90,out=90] (1.5,1) to[in=-90,out=90] (1,2)
;
\draw(1.5,0) to[in=-90,out=90] (1,1) to[in=-90,out=90] (1.5,2);
\draw (2,0) to[in=-90,out=90] (2.5,1) to[in=-90,out=90] (2,2);
\node[black] at (3,1) {$=$};
\draw[dashed] (3.7,0) --(3.7,2)
;
\draw (4.2,0) to node[midway,fill,inner sep=3pt]{} (4.2,2);
\draw (4.7,0) -- (4.7,2);
\node[black] at (5.6,1) {$-\mathsf{q}$};
\draw[dashed] (6.2,0) -- (6.2,2);
\draw (6.7,0)-- (6.7,2);
\draw (7.2,0) -- node[midway,fill,inner sep=3pt]{} (7.2,2);
\end{tikzpicture}
\]
\begin{equation*}\label{eq:triple-point-1}\subeqn
\begin{tikzpicture}[very thick,xscale=1.5,baseline,green!50!black]
\draw[dashed] (-3,0) +(.4,-1) -- +(-.4,1);
\draw[dashed] (-3,0) +(-.4,-1) -- +(.4,1);
\draw (-2,0) +(.4,-1) -- +(-.4,1); \draw
(-2,0) +(-.4,-1) -- +(.4,1);
\draw (-3,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);\node[black] at (-1,0) {=}; \draw[dashed] (0,0) +(.4,-1) -- +(-.4,1);
\draw[dashed] (0,0) +(-.4,-1) -- +(.4,1);
\draw (1,0) +(.4,-1) -- +(-.4,1); \draw
(1,0) +(-.4,-1) -- +(.4,1);
\draw (0,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\node[black] at (2.1,0) {$-\mathsf{q}$};
\draw (4,0)
+(.4,-1) -- +(.4,1); \draw (4,0)
+(-.4,-1) -- +(-.4,1);
\draw[dashed] (3,0)
+(.4,-1) -- +(.4,1); \draw[dashed] (3,0)
+(-.4,-1) -- +(-.4,1);
\draw (3,0)
+(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{eq:triple-point-2}\subeqn
\begin{tikzpicture}[very thick,xscale=1.5,baseline,green!50!black]
\draw (-3,0) +(.4,-1) -- +(-.4,1);
\draw (-3,0) +(-.4,-1) -- +(.4,1);
\draw (-2,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);
\draw[dashed] (-3,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);\node[black] at (-1,0) {$=$}; \draw (0,0) +(.4,-1) -- +(-.4,1);
\draw (0,0) +(-.4,-1) -- +(.4,1);
\draw[dashed] (0,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\draw (1,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\node[black] at (2,0)
{$+$};
\draw (3,0)
+(.4,-1) -- +(.4,1); \draw (3,0)
+(-.4,-1) -- +(-.4,1);
\draw[dashed] (3,0)
+(0,-1) -- +(0,1); \draw (4,0)
+(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\end{definition}
By convention, we'll let $e_{B}$ be the diagram with vertical lines at
$x=b$ for $b\in B$, and use $X_i$ to represent the square on the $i$th
strand from the left.
\begin{proposition}\label{W-poly}
The WAHA $\EuScript{W}_{\mathscr{B}}$ for a set $\mathscr{B}$ has a polynomial representation
\[P_{\mathscr{B}} := \oplus_{B\in \mathscr{B}}\mathbbm{k}[[h]][Y_1^{\pm 1},\dots,
Y_{|B|}^{\pm 1}]\]
defined by the rule that
\begin{itemize}
\item Each crossing of the $r$th and $(r+1)$st strands acts by the Demazure operator \[\partial_r(F)=
\frac{F^{s_r}-F}{Y_{r+1}-Y_{r}}.\]
\item
A crossing between the $r$th strand and a ghost of the $s$th strand acts
by
\begin{itemize}
\item the identity if $\ck <0$ and the
strand is NE/SW or $\ck >0$ and the strand is NW/SE,
\item the operator of multiplication by $Y_r-qY_s$ if $\ck <0$ and
the strand is NW/SE or $\ck >0$ and the strand is NE/SW
\end{itemize}
\item A square on the $r$th strand acts by the multiplication operator
$Y_r$.
\end{itemize}
\end{proposition}
\begin{proof}
The equations (\ref{nilHecke-2}--\ref{NilHecke3}) are the usual
relations satisfied by multiplication and Demazure operators. The
equations (\ref{green-ghost-bigon1}--\ref{green-ghost-bigon2}) are clear from
the definition of the operators for ghost/strand crossings. Finally,
the relations (\ref{eq:triple-point-1}--\ref{eq:triple-point-2}) are
calculations with Demazure operators similar to those which are standard
for triple points in various KLR calculi.
For example, assuming $\ck<0$ for (\ref{eq:triple-point-1}), the
LHS is \[\partial_s\circ (Y_r-qY_s)=(Y_r-qY_{s+1})\circ\partial_s-q\]
using the usual twisted Leibniz rule for Demazure operators; this is
the RHS, so we are done. On the other hand,
(\ref{eq:triple-point-2}) follows in a similar way from the equation
\[(Y_r-qY_s)\circ\partial_r = \partial_r \circ (Y_{r+1}-qY_s)+1.\] This
completes the proof.
\end{proof}
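The two operator identities invoked in this proof can also be checked by direct symbolic computation. The sketch below (illustrative variable names; sympy is assumed) verifies both, with $Y_r$ a spectator variable for the first identity and $Y_s$ for the second:

```python
import sympy as sp

q, Yr, Yr1, Ys, Ys1 = sp.symbols('q Y_r Y_r1 Y_s Y_s1')

def demazure(f, a, b):
    """Divided-difference operator for the transposition swapping a and b."""
    fs = f.subs({a: b, b: a}, simultaneous=True)
    return sp.cancel((fs - f) / (b - a))

f = Yr**2*Ys + q*Ys1**3 + Yr*Ys*Ys1          # arbitrary test polynomial

# identity for (eq:triple-point-1):
#   d_s((Y_r - q Y_s) f) = (Y_r - q Y_{s+1}) d_s(f) - q f
lhs = demazure((Yr - q*Ys)*f, Ys, Ys1)
assert sp.simplify(lhs - ((Yr - q*Ys1)*demazure(f, Ys, Ys1) - q*f)) == 0

g = Yr**2 + Yr1*Ys + q*Yr*Yr1                # another test polynomial

# identity for (eq:triple-point-2):
#   (Y_r - q Y_s) d_r(g) = d_r((Y_{r+1} - q Y_s) g) + g
lhs2 = (Yr - q*Ys)*demazure(g, Yr, Yr1)
assert sp.simplify(lhs2 - (demazure((Yr1 - q*Ys)*g, Yr, Yr1) + g)) == 0
```

Both identities follow from the twisted Leibniz rule $\partial(ab)=\partial(a)b+a^s\partial(b)$ and hold for arbitrary polynomials `f`, `g`.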
\begin{proposition}
The type W Hecke algebra $\EuScript{W}_{\mathscr{B}}$ has a basis over $\mathbbm{k}[[h]]$ given by the products
$e_{B}D_wX_1^{a_1}\cdots X_n^{a_n}e_{B'}$ for $w\in S_n$ and $(a_1,\dots,
a_n)\in \mathbb{Z}^n$; here $D_w$ is an arbitrarily chosen diagram which
induces the permutation $w$ on the endpoints at $y=0$ when they are
associated to the endpoint at the top of the same strand, and in which no pair of
strands or ghosts cross twice.
\end{proposition}
\begin{proof}
This proof follows many similar ones in KLR theory. These elements
are linearly independent because the elements $D_w$ span the action
of $\mathbbm{k}[S_n]$ after extending scalars to the fraction field of
rational functions, since $D_w=f_w w+\sum_{v<w} f_v v$ for some
rational functions $f_v$ with $f_w\neq 0$. Thus our proposed basis
is linearly independent over $\mathbbm{k}$ in this scalar extension, so must
have been linearly independent before.
Now we need only show that they span. Using relation
(\ref{nilHecke-2}), we can assume that all squares are at the bottom
of the diagram.
Furthermore, any two choices of the diagram $D_w$ differ via a series
of isotopies and triple points, so relations
(\ref{NilHecke3},\ref{eq:triple-point-1},\ref{eq:triple-point-2}) show
that these diagrams differ by diagrams with fewer crossings between
strands and ghosts. Thus, we
need only show that any diagram with a bigon can be written as a sum
of diagrams with fewer crossings.
Now, assume we have such a bigon. We may assume that it has no
smaller bigons inside it. In this case, we can shrink the bigon,
using the relations
(\ref{NilHecke3},\ref{eq:triple-point-1},\ref{eq:triple-point-2})
whenever we need to move a strand through the top and bottom of the
bigon or a crossing out through its side. Thus, we can ultimately
assume that the bigon is empty, and apply the relations (\ref{NilHecke3}--\ref{green-ghost-bigon2}).
\end{proof}
Choose $\mathscr{O}=\{B_s=\{s,2s,3s,\dots, ns\}\}$ for $s$ some real
number with $s\gg |\ck|$. For every type O diagram on $n$ strands,
we can choose an isotopy representative such that the endpoints of the
diagram are precisely $B_s$ at both $y=0$ and $y=1$. Furthermore, we
can choose this representative so that if we think of it as a type W
diagram and add ghosts, no strand is between a crossing
of strands and the corresponding ghost crossing. Obviously we can do
this for individual crossings, and any diagram can be factored into
these.
\begin{theorem}\label{wdHecke}
This embedding induces an isomorphism between the WAHA $\EuScript{W}_{\mathscr{O}}$ and
the honest affine Hecke algebra
$\mH_h$:
\begin{itemize}
\item If $\ck <0$, this isomorphism sends a single crossing to
$T_i+1$. That is, the diagrams satisfy the relations
(\ref{Hecke-1}--\ref{Hecke-triple}).
\item If $\ck>0$, this
isomorphism sends a single crossing to $T_i-q$. That is, the diagrams satisfy the relations
(\ref{qHecke-1}--\ref{qHecke-triple}).
\end{itemize}
The polynomial representation defined above
is intertwined by this map with the polynomial representation of
$\mH_h$ if $\ck <0$ and the signed polynomial representation if $\ck>0$.
\end{theorem}
This theorem shows that if we view type O diagrams as type W diagrams
where $\ck$ is sufficiently small that we cannot distinguish
between a strand and its ghost\footnote{Perhaps this will be easier if
you take off your glasses.}, then the relations
(\ref{Hecke-1}--\ref{Hecke-triple}) will be consequences of (\ref{nilHecke-2}--\ref{eq:triple-point-2}).
\begin{proof}
We'll consider the case where $\ck <0$. We have that $T_i+1$ is sent to the diagram
\begin{equation*}
\begin{tikzpicture}[very thick,xscale=1.5,baseline]
\draw (-2.5,0) +(.7,-1) -- +(-.7,1);
\draw (-2.5,0) +(-.7,-1) -- +(.7,1);
\draw[dashed] (-3,0) +(.7,-1) -- +(-.7,1);
\draw [dashed] (-3,0) +(-.7,-1) -- +(.7,1);
\end{tikzpicture}
\end{equation*}
which is sent by the polynomial representation of the type W affine
Hecke algebra to $(Y_r-\mathsf{q} Y_{r+1}) \circ \partial_r$. That is, we
have $T_iF=-F^{s_r}+(1-\mathsf{q})Y_{r+1}\partial_r(F)$. Since
$\EuScript{W}_{\mathscr{B}}$ acts faithfully on its polynomial representation,
this shows that we have a map of the Hecke algebra to the WAHA;
since the diagram $D_w$ and the polynomials in the squares are in
the image of this map, the map is surjective. Since it becomes an
isomorphism after extension of scalars to the fraction field of the
squares, and both algebras are free modules over Laurent polynomials
in the squares, it must also be injective.
The case $\ck>0$ follows similarly.
\end{proof}
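As a sanity check on the $\ck<0$ case, one can verify symbolically that the operator $F\mapsto(Y_r-\mathsf{q} Y_{r+1})\partial_r(F)-F$ appearing in this proof satisfies the quadratic relation $(T_i+1)(T_i-\mathsf{q})=0$ expected of $T_i$. A sympy sketch (not from the paper; two variables stand in for $Y_r,Y_{r+1}$):

```python
import sympy as sp

q, Y1, Y2 = sp.symbols('q Y1 Y2')

def demazure(f):
    """Divided-difference operator for the swap Y1 <-> Y2."""
    fs = f.subs({Y1: Y2, Y2: Y1}, simultaneous=True)
    return sp.cancel((fs - f) / (Y2 - Y1))

def T(f):
    """Candidate action of T_i: F -> (Y_r - q Y_{r+1}) d_r(F) - F."""
    return sp.expand((Y1 - q*Y2)*demazure(f) - f)

f = Y1**3 + q*Y1*Y2**2 + 7
# quadratic Hecke relation (T_i + 1)(T_i - q) = 0,
# i.e. T^2 = (q - 1) T + q as operators
assert sp.simplify(T(T(f)) - (q - 1)*T(f) - q*f) == 0
```

The relation holds identically in `f`; this is the standard Demazure--Lusztig computation.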
Thus, the WAHA is a ``larger'' algebra than the affine Hecke
algebra. The category of representations of the affine Hecke algebra is
a quotient category of that of the WAHA, though in some cases, this
quotient will be an equivalence.
For any composition $\mathbf{k}=(k_1,\dots, k_n)$ of $m$, we have an associated
quasi-idempotent $\epsilon_{\mathbf{k}}=\sum_{w\in S_{\mathbf{k}}}T_w$, the symmetrizer for the associated Young
subgroup $S_{\mathbf{k}}$. If $\mathbf{k}=(1,\dots, 1)$, then $\epsilon_{\mathbf{k}}=1$.
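For $\mathbf{k}=(2)$ this quasi-idempotent property is easy to check directly: assuming the quadratic relation $(T+1)(T-\mathsf{q})=0$ in the Hecke algebra of $S_2$, the element $\epsilon=1+T$ satisfies $\epsilon T=\mathsf{q}\epsilon$ and hence $\epsilon^2=(1+\mathsf{q})\epsilon$. A small sympy sketch (illustrative, reducing modulo the quadratic relation):

```python
import sympy as sp

q, T = sp.symbols('q T')
rel = T**2 - (q - 1)*T - q        # Hecke relation (T + 1)(T - q) = 0

def reduce_mod(expr):
    """Reduce a polynomial in T modulo the quadratic Hecke relation."""
    return sp.rem(sp.expand(expr), rel, T)

eps = 1 + T                        # epsilon_k for k = (2): sum of T_w over S_2

assert sp.simplify(reduce_mod(eps*T) - sp.expand(q*eps)) == 0        # eps T = q eps
assert sp.simplify(reduce_mod(eps**2) - sp.expand((1 + q)*eps)) == 0  # eps^2 = (1+q) eps
```

This also recovers the eigenvalue computation used later: $\epsilon(T+1)=(\mathsf{q}+1)\epsilon$.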
The {\bf affine Schur algebra} $\mathcal{S}_h(n,m)$ for our purposes is the algebra
defined by \[\mathcal{S}_h(n,m):= \operatorname{End}\Big(\bigoplus_{|\mathbf{k}|=m}\mH
\epsilon_{\mathbf{k}}\Big)=\bigoplus_{|\mathbf{k}|=|\mathbf{k}'|=m}\epsilon_{\mathbf{k}}\mH
\epsilon_{\mathbf{k}'},\] where the sums are over $n$-part compositions of $m$.
Just as the most important property of the affine Hecke algebra is
that it acts naturally on $M\otimes V^{\otimes n}$ for any finite dimensional
$U_{\mathsf{q}}(\mathfrak{g})$-modules $M,V$ using universal R-matrices and Casimir
operators, the algebra $\mathcal{S}_h(n,m)$ naturally acts on \[\bigoplus
_{|\mathbf{k}|=m}M\otimes \operatorname{Sym}^{ k_1}V\otimes \cdots\otimes
\operatorname{Sym}^{k_n}V.\]
Furthermore, the algebra $\mathcal{S}_h(n,m)$ has a natural faithful polynomial representation given by $\mathcal{P}_{\mathcal{S}}:=\bigoplus_{|\mathbf{k}|=m}
\epsilon_{\mathbf{k}}\mathcal{P}$.
If we replace $\epsilon_{\mathbf{k}}$ by the anti-symmetrizing idempotent
$\epsilon_{\mathbf{k}}^-=\sum_{w\in S_{\mathbf{k}}} (-q)^{\ell(w)}T_w$, then we obtain the signed
Schur algebra $\mathcal{S}_h^-(n,m)$, which instead acts on \[\bigoplus
_{|\mathbf{k}|=m}M\otimes \iwedge{ k_1}V\otimes \cdots\otimes
\iwedge{k_n}V.\]
The affine Schur algebra has a diagrammatic realization much like the
affine Hecke algebra. For each composition $\mu=(\mu_1,\dots, \mu_n)$
of $m$, we let $C_\mu=\{i\epsilon+js\mid 0\leq i< \mu_j\}$ for some
fixed $0< \epsilon \ll |\ck| \ll s$, and let $\mathscr{C}$ be the
collection of these sets. In the type W affine Hecke algebra
$\EuScript{W}_{\mathscr{C}}$, we have an idempotent $e'_\mu$ which on each
group in $[js,js+\mu_j\epsilon]$ traces out the primitive idempotent
in the nilHecke algebra which acts as $y_1^{\mu_j-1}y_2^{\mu_j-2}\cdots
y_{\mu_j-1}\partial_{w_0}$ in the polynomial representation. Let
$e'=\sum_\mu e'_\mu$ be the sum of these idempotents over $n$-part
compositions of $m$.
\begin{theorem}\label{waha-Schur}
If $\ck<0$, we have an isomorphism of algebras $e' \EuScript{W}_{\mathscr{C}} e'\cong \mathcal{S}_h(n,m)$
which induces an isomorphism of representations $e'
P_{\mathscr{C}}\cong \mathcal{P}_{\mathcal{S}_h(n,m)}$. Similarly, if $\ck>0$, we have an
isomorphism of algebras $e' \EuScript{W}_{\mathscr{C}} e'\cong \mathcal{S}_h^-(n,m)$.
\end{theorem}
\begin{proof}
First, consider the case $\ck<0$. Consider the idempotent $e_{B_s}$ in $e' \EuScript{W}_{\mathscr{C}} e'$. This
satisfies $e_{B_s}\EuScript{W}_{\mathscr{C}}e_{B_s}\cong \mH_h$ by Theorem \ref{wdHecke}.
Thus, $e'e_{C_\mu}\EuScript{W}_{\mathscr{C}}e_{B_s}$ is naturally a right module
over $\mH_h$. We wish to show that it is isomorphic to $\epsilon_\mu
\mH_h$. Consider the diagram $e_{C_\mu}D_ee_{B_s}$. Acting on the
right by $T_i+1$ with $(i,i+1)\in S_{\mu_1}\times \cdots \times S_{\mu_p}$ gives $e_{C_\mu}D_ee_{B_s}(T_i+1)=(q+1)
e_{C_\mu}D_ee_{B_s}$, since
\begin{equation*}
\begin{tikzpicture}[very thick,xscale=1.5,baseline,green!50!black]
\draw (.7,-1) to [out=135,in=-90] (-.6,.3) to
[out=90,in=-135] (-.1,1) ;
\draw (-.7,-1)to [out=45,in=-90] (.6,.3) to
[out=90,in=-45] (.1,1);
\draw[dashed] (.2,-1) to [out=135,in=-90] (-1.1,.3) to
[out=90,in=-135] (-.6,1) ;
\draw [dashed] (-1.2,-1) to [out=45,in=-90] (.1,.3) to
[out=90,in=-45] (-.4,1) ;
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[xscale=1.5,baseline,green!50!black]
\draw[very thick](-3,0) +(-.7,-1) -- +(.2,1); \draw[very thick](-3,0) +(.7,-1) -- node[pos=.9,fill=green!50!black,inner
sep=3pt]{}+(-.2,1)
; \draw[very thick,dashed](-3.5,0) +(-.7,-1) -- +(.2,1);
\draw[very thick,dashed](-3.5,0)+(.7,-1) -- +(-.2,1) ;
\node[black] at (-1.8,0){$-\mathsf{q}$}; \draw[very thick](0,0) +(-.7,-1) -- node[pos=.9,fill=green!50!black,inner sep=3pt]{}+(.2,1); \draw[very thick](0,0) +(.7,-1) --
+(-.2,1);
\draw[very thick,dashed](-.5,0) +(-.7,-1) -- +(.2,1);
\draw[very thick,dashed] (-.5,0) +(.7,-1) -- +(-.2,1);
\end{tikzpicture}
\end{equation*}
Applying (\ref{nilHecke-2}), the RHS is equal to $1+q$ times the
identity, plus diagrams with a crossing at top, which are killed by $e'$. This shows that
$e'e_{C_\mu}D_ee_{B_s}$ is invariant. Thus, we have a map of
$\epsilon_\mu \mH_h\to e'e_{C_\mu}\EuScript{W}_{\mathscr{C}}e_{B_s}$ sending
$\epsilon_\mu \mapsto e'e_{C_\mu}D_ee_{B_s}$. This map must be
surjective, since every $ e'e_{C_\mu}D_we_{B_s}$ is in its image, and
comparing ranks over the fraction field $\mathbbm{k}(X_1,\dots, X_n)$, we see that it
must be injective as well. Thus, the action of $e' \EuScript{W}_{\mathscr{C}} e'$
on $e' \EuScript{W}_{\mathscr{C}} e_{B_s}$ defines a map $e' \EuScript{W}_{\mathscr{C}} e'\to
\mathcal{S}_h$. After extending scalars to the fraction field $\mathbbm{k}[[h]](X_1,\dots,
X_n)$, this map becomes an isomorphism; both rings are just smash
products $\mathbbm{k}[S_n]\#\mathbbm{k}[[h]](X_1,\dots, X_n)$. Thus, the map $e' \EuScript{W}_{\mathscr{C}} e'\to
\mathcal{S}_h$ is injective.
On the other hand, note that the element $e'e_{C_\mu}D_we_{C_{\mu'}}e'$
for $w$ any shortest double coset representative is sent to the element
$\phi_w=\sum_{w'\in S_{\mu}w S_{\mu'}}T_{w'}$ plus elements in
$\mathbbm{k}[[h]][X^{\pm 1}_1,\dots, X^{\pm 1}_n]\phi_v$ for $v$ shorter in Bruhat
order. Since the elements $\phi_w X^{a_1}_1\cdots X^{a_n}_n$ give a basis of
$\mathcal{S}_h$ by \cite[2.2.2]{GreenSchur}, the fact that these are in the image shows that this map is
surjective.
When $\ck >0$, the argument is quite similar, but with $T_i-q$
replacing $T_i+1$ and using the $\ck>0$ version of Theorem \ref{wdHecke}.
\end{proof}
We'll prove in Corollary \ref{affine-schur-morita} that the idempotent
$e'$ induces a Morita equivalence.
Thus, from the perspective of the Hecke side, introducing the type W
relations is an alternate way of understanding the affine Schur
algebra. On the other hand, the author has incorporated similar ideas
into the theory of KLR algebras, by introducing {\bf weighted KLR
algebras} \cite{WebwKLR}.
\begin{definition}
Let $W$ be the weighted KLR algebra attached to the graph $U$; that
is, $W$ is the quotient of the $\mathbbm{k}[h]$-span of weighted KLR diagrams
(as defined in \cite[\ref{w-diagram-def}]{WebwKLR}) by the relations
(note that these relations are drawn with $\ck<0$): \newseq
\begin{equation*}\subeqn\label{dots-1}
\begin{tikzpicture}[scale=.45,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$v$}; \fill (-4.5,.5) circle (5pt);
\node at (-2,0){=}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$v$}; \fill (.5,-.5) circle (5pt);
\node at (4,0){for $u\neq v$};
\end{tikzpicture}\end{equation*}
\begin{equation*}\label{dots-2}\subeqn
\begin{tikzpicture}[scale=.45,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$u$}; \fill (-4.5,.5) circle (5pt);
\node at (-2,0){=}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$u$}; \fill (.5,-.5) circle (5pt);
\node at (2,0){+}; \draw[very thick](4,0) +(-1,-1) -- +(-1,1)
node[below,at start] {$u$}; \draw[very thick](4,0) +(0,-1) --
+(0,1) node[below,at start] {$u$};
\end{tikzpicture}\qquad
\begin{tikzpicture}[scale=.45,baseline]
\draw[very thick](-4,0) +(-1,-1) -- +(1,1) node[below,at start]
{$u$}; \draw[very thick](-4,0) +(1,-1) -- +(-1,1) node[below,at
start] {$u$}; \fill (-4.5,-.5) circle (5pt);
\node at (-2,0){=}; \draw[very thick](0,0) +(-1,-1) -- +(1,1)
node[below,at start] {$u$}; \draw[very thick](0,0) +(1,-1) --
+(-1,1) node[below,at start] {$u$}; \fill (.5,.5) circle (5pt);
\node at (2,0){+}; \draw[very thick](4,0) +(-1,-1) -- +(-1,1)
node[below,at start] {$u$}; \draw[very thick](4,0) +(0,-1) --
+(0,1) node[below,at start] {$u$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{strand-bigon}\subeqn
\begin{tikzpicture}[very thick,scale=.8,baseline]
\draw (-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
node[below,at start]{$u$}; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) node[below,at start]{$u$}; \node at (-.5,0)
{=}; \node at (0.4,0) {$0$}; \node at (1.5,.05) {and};
\end{tikzpicture}
\hspace{.4cm}
\begin{tikzpicture}[very thick,scale=.8 ,baseline]
\draw (-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
node[below,at start]{$u$}; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) node[below,at start]{$v$}; \node at (-.5,0)
{=};
\draw (1.8,0) +(0,-1) -- +(0,1) node[below,at start]{$v$}; \draw
(1,0) +(0,-1) -- +(0,1) node[below,at start]{$u$};\node at (3.3,0){for $u\neq v$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{ghost-bigon1}\subeqn
\begin{tikzpicture}[very thick,xscale=1.25 ,yscale=.8,baseline]
\draw (1,-1) to[in=-90,out=90] node[below, at start]{$u$}
(1.5,0) to[in=-90,out=90] (1,1) ; \draw[dashed] (1.5,-1)
to[in=-90,out=90] (1,0) to[in=-90,out=90] (1.5,1); \draw
(2.5,-1) to[in=-90,out=90] node[below, at start]{$v$} (2,0)
to[in=-90,out=90] (2.5,1);
\node at (3,0) {=};
\end{tikzpicture}
\begin{cases}
\begin{tikzpicture}[very thick,xscale=1.25 ,yscale=.8,baseline]
\draw (3.7,-1) --
(3.7,1) node[below, at start]{$u$} ; \draw[dashed] (4.2,-1) to
(4.2,1); \draw (5.2,-1) -- (5.2,1) node[below, at start]{$v$};
\end{tikzpicture} & \text{for $u\neq qv$}\\
\begin{tikzpicture}[very thick,xscale=1.25,yscale=.8,baseline]
\draw (3.7,-1) --
(3.7,1) node[below, at start]{$u$} ; \draw[dashed] (4.2,-1) to
(4.2,1); \draw (5.2,-1) -- (5.2,1) node[below, at start]{$v$}
node[midway,fill,inner sep=2.5pt,circle]{}; \node at (5.75,0)
{$-$};
\draw (6.2,-1) -- (6.2,1) node[below, at start]{$u$}
node[midway,fill,inner sep=2.5pt,circle]{}; \draw[dashed]
(6.7,-1)-- (6.7,1); \draw (7.7,-1) -- (7.7,1) node[below, at
start]{$v$};
\node at (8.25,0)
{$+h$};
\draw (8.7,-1) -- (8.7,1) node[below, at start]{$u$}; \draw[dashed]
(9.2,-1)-- (9.2,1); \draw (10.2,-1) -- (10.2,1) node[below, at
start]{$v$};
\end{tikzpicture} & \text{for $u= qv$}
\end{cases}
\end{equation*}
\begin{equation*}\label{ghost-bigon1a}\subeqn
\begin{tikzpicture}[very thick,xscale=1.25 ,yscale=.8,baseline]
\draw (1.5,-1) to[in=-90,out=90] node[below, at start]{$u$}
(1,0) to[in=-90,out=90] (1.5,1) ; \draw[dashed] (1,-1)
to[in=-90,out=90] (1.5,0) to[in=-90,out=90] (1,1); \draw
(2,-1) to[in=-90,out=90] node[below, at start]{$v$} (2.5,0)
to[in=-90,out=90] (2,1);
\node at (3,0) {=};
\end{tikzpicture}
\begin{cases}
\begin{tikzpicture}[very thick,xscale=1.25 ,yscale=.8,baseline]
\draw (4.7,-1) --
(4.7,1) node[below, at start]{$u$} ; \draw[dashed] (4.2,-1) to
(4.2,1); \draw (5.2,-1) -- (5.2,1) node[below, at start]{$v$};
\end{tikzpicture} & \text{for $u\neq qv$}\\
\begin{tikzpicture}[very thick,xscale=1.25,yscale=.8,baseline]
\draw (4.7,-1) --
(4.7,1) node[below, at start]{$u$} ; \draw[dashed] (4.2,-1) to
(4.2,1); \draw (5.2,-1) -- (5.2,1) node[below, at start]{$v$}
node[midway,fill,inner sep=2.5pt,circle]{}; \node at (5.75,0)
{$-$};
\draw (6.7,-1) -- (6.7,1) node[below, at start]{$u$}
node[midway,fill,inner sep=2.5pt,circle]{}; \draw[dashed]
(6.2,-1)-- (6.2,1); \draw (7.2,-1) -- (7.2,1) node[below, at
start]{$v$};
\node at (7.75,0)
{$+h$};
\draw (8.7,-1) -- (8.7,1) node[below, at start]{$u$}; \draw[dashed]
(8.2,-1)-- (8.2,1); \draw (9.2,-1) -- (9.2,1) node[below, at
start]{$v$};
\end{tikzpicture} & \text{for $u= qv$}
\end{cases}
\end{equation*}
\begin{equation*}\subeqn\label{triple-boring}
\begin{tikzpicture}[very thick,scale=1 ,scale=.8,baseline]
\draw (-3,0) +(1,-1) -- +(-1,1) node[below,at start]{$w$}; \draw
(-3,0) +(-1,-1) -- +(1,1) node[below,at start]{$u$}; \draw
(-3,0) +(0,-1) .. controls +(-1,0) .. +(0,1) node[below,at
start]{$v$}; \node at (-1,0) {=}; \draw (1,0) +(1,-1) -- +(-1,1)
node[below,at start]{$w$}; \draw (1,0) +(-1,-1) -- +(1,1)
node[below,at start]{$u$}; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1) node[below,at start]{$v$};
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn \label{eq:triple-point1}
\begin{tikzpicture}[very thick,xscale=1.1,yscale=.8,baseline]
\draw[dashed] (-3,0) +(.4,-1) -- +(-.4,1); \draw[dashed] (-3,0)
+(-.4,-1) -- +(.4,1); \draw (-1.5,0) +(.4,-1) -- +(-.4,1)
node[below,at start]{$v$}; \draw (-1.5,0) +(-.4,-1) -- +(.4,1)
node[below,at start]{$w$}; \draw (-3,0) +(0,-1) .. controls
+(-.5,0) .. +(0,1) node[below,at start]{$u$};\node at (-.75,0)
{$-$}; \draw[dashed] (0,0) +(.4,-1) -- +(-.4,1); \draw[dashed]
(0,0) +(-.4,-1) -- +(.4,1); \draw (1.5,0) +(.4,-1) -- +(-.4,1)
node[below,at start]{$v$}; \draw (1.5,0) +(-.4,-1) -- +(.4,1)
node[below,at start]{$w$}; \draw (0,0) +(0,-1) .. controls
+(.5,0) .. +(0,1) node[below,at start]{$u$}; \node at (2.25,0)
{$=$};
\end{tikzpicture}
\begin{cases}
\begin{tikzpicture}[very thick,xscale=1.1,yscale=.8,baseline]
\draw (4.5,0) +(.4,-1) -- +(.4,1) node[below,at
start]{$v$}; \draw (4.5,0) +(-.4,-1) -- +(-.4,1) node[below,at
start]{$w$}; \draw[dashed] (3,0) +(.4,-1) -- +(.4,1);
\draw[dashed] (3,0) +(-.4,-1) -- +(-.4,1); \draw (3,0) +(0,-1)
-- +(0,1) node[below,at start]{$u$};
\end{tikzpicture}& \text{if $v=w=qu$}\\
0 & \text{unless $v=w=qu$}
\end{cases}
\end{equation*}
\begin{equation*}\subeqn\label{eq:KLRtriple-point2}
\begin{tikzpicture}[very thick,xscale=1.1,yscale=.8,baseline]
\draw[dashed] (-3,0) +(0,-1) .. controls +(-.5,0) .. +(0,1) ;
\draw (-3,0) +(.4,-1) -- +(-.4,1) node[below,at start]{$v$};
\draw (-3,0) +(-.4,-1) -- +(.4,1) node[below,at start]{$u$};
\draw (-1.5,0) +(0,-1) .. controls +(-.5,0) .. +(0,1)
node[below,at start]{$w$};\node at (-.75,0) {$-$}; \draw (0,0)
+(.4,-1) -- +(-.4,1) node[below,at start]{$v$}; \draw (0,0)
+(-.4,-1) -- +(.4,1) node[below,at start]{$u$}; \draw[dashed]
(0,0) +(0,-1) .. controls +(.5,0) .. +(0,1); \draw (1.5,0)
+(0,-1) .. controls +(.5,0) .. +(0,1) node[below,at
start]{$w$}; \node at (2.25,0) {$=$}; \end{tikzpicture}
\begin{cases}
\begin{tikzpicture}[very thick,xscale=1.1,yscale=.8,baseline]\draw (3,0) +(.4,-1) --
+(.4,1) node[below,at start]{$v$}; \draw (3,0) +(-.4,-1) --
+(-.4,1) node[below,at start]{$u$}; \draw[dashed] (3,0) +(0,-1)
-- +(0,1);\draw (4.5,0) +(0,-1) -- +(0,1) node[below,at
start]{$w$};
\end{tikzpicture}& \text{if $w=qu=qv$}\\
0 & \text{unless $w=qu=qv$}
\end{cases}.
\end{equation*}
\end{definition}
Note that this algebra is homogeneous with wKLR diagrams given their
usual grading and $h$ given grading $2$. \begin{proposition}\label{W-KLR-poly}
The wKLR algebra $W_{\mathscr{D}}$ for a collection $\mathscr{D}$ has a polynomial representation
\[P_{\mathscr{D}} := \oplus_{D\in \mathscr{D}}\mathbbm{k}[h,y_1,\dots,
y_{|D|}]\]
defined by the rule that
\begin{itemize}
\item Each crossing of the $r$th and $(r+1)$st strands acts by the Demazure operator \[\partial_r(f)=
\frac{f^{s_r}-f}{y_{r+1}-y_{r}}.\]
\item
A crossing between the $r$th strand and a ghost of the $s$th strand acts
by
\begin{itemize}
\item the identity if $\ck <0$ and the
strand is NE/SW or $\ck >0$ and the strand is NW/SE,
\item the multiplication operator of $y_s-y_r+h$ if $\ck <0$ and
the strand is NW/SE or $\ck >0$ and the strand is NE/SW
\end{itemize}
\item A square on the $r$th strand acts by the multiplication operator
  $y_r$.
\end{itemize}
\end{proposition}
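For illustration (a standard computation with Demazure operators, not
specific to this algebra), we have
\[\partial_r(y_r)=\frac{y_{r+1}-y_{r}}{y_{r+1}-y_{r}}=1,\qquad
\partial_r(y_{r+1})=-1,\qquad \partial_r(f)=0\text{ whenever }f^{s_r}=f,\]
together with the twisted Leibniz rule
$\partial_r(fg)=\partial_r(f)g^{s_r}+f\,\partial_r(g)$. Since
$\partial_r(f)$ is always $s_r$-invariant, this gives $\partial_r^2=0$.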
Fix a set $U\subset \mathbbm{k}\setminus\{0\}$.
\begin{definition}
  We let $\widehat{W}$ be the completion of the weighted KLR algebra
$W$ for $U$
with respect to the grading; since $h$ has degree 2, this completion
is naturally a complete $\mathbbm{k}[[h]]$-module. For any collection $\mathscr{D}$, we let
$W_{\mathscr{D}},\widehat{W}_{\mathscr{D}}$ be the sum of images of
the idempotents corresponding to loadings on a set of points in $\mathscr{D}$.
We let $\widehat{\EuScript{W}}_{\mathscr{D}}$ for any set
$\mathscr{D}$ denote
the completion of the algebra $\EuScript{W}_{\mathscr{D}}$ by the ideals
  generated by $\prod_{u\in U}(X_i-u)^N$ for $N=1,2,3,\dots$.
\end{definition}
Let $\mathbf{i}$ be a loading in
the sense of \cite{WebwKLR}, that is, a finite subset $D=\{d_1,\dots, d_n\}$
of $\mathbb{R}$ with $d_1<\cdots <d_n$,
together with a map $\mathbf{i}\colon D\to U$.
In the algebra $\widehat{\EuScript{W}}_{\mathscr{D}}$, we have an idempotent
$\epsilon_{\mathbf{i}}$ projecting to the stable kernel of $X_j-\mathbf{i}(d_j)$.
We represent $\epsilon_{\mathbf{i}}$ as a type W
diagram, with the strands labeled by the elements
$u_j=\mathbf{i}(d_j)$.
\begin{theorem}\label{W-isomorphism}
There is an isomorphism $\gamma\colon
\widehat{\EuScript{W}}_{\mathscr{D}}\to \widehat{W}_{\mathscr{D}}$ such
that $\gamma(X_r)=\sum_{\mathbf{u}}u_rb({y_r})e_{\mathbf{u}}$,
\newseq
\[\subeqn\label{crossing-match}
\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);} \epsilon_{\mathbf{u}}\mapsto
\begin{cases}
\displaystyle\frac{1}{u_{r+1}b(y_{r+1})-u_rb(y_{r})}(\psi_r-1) e_{\mathbf{u}} & u_r\neq u_{r+1}\\
\displaystyle \frac{y_{r+1}-y_r}{u_{r+1}(b(y_{r+1})-b(y_{r}))}\psi_r e_{\mathbf{u}}& u_r=u_{r+1}
\end{cases}
\]
\[\subeqn\label{ghost-match}
\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}\epsilon_{\mathbf{u}} \mapsto
\begin{cases}
      \displaystyle \big(u_rb(y_r)-\mathsf{q} u_sb(y_s)\big)\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);} e_{\mathbf{u}}&
u_r\neq qu_s\\
\displaystyle \frac {u_rb(y_r)-\mathsf{q} u_sb(y_s)}{y_{s}-y_{r}}\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}e_{\mathbf{u}}& u_r=qu_s, d(h)=1\\
\displaystyle \frac {u_rb(y_r)-\mathsf{q} u_sb(y_s)}{y_{s}-y_{r}+h}\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}e_{\mathbf{u}}& u_r=qu_s,d(h)=e^h
\end{cases}
\qquad \qquad\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw [densely dashed]
(.2,-.1) -- (-.2,.3);} \mapsto \tikz[baseline,very thick,scale=1.5]{\draw (.2,.3) --
(-.2,-.1); \draw [densely dashed]
(.2,-.1) -- (-.2,.3);} \]
\end{theorem}
\begin{proof}
  This follows from comparing the polynomial representations. The map
  $\gamma_p$ on completed polynomial representations satisfies
  $\gamma_p(X_r)=\sum_{\mathbf{u}}u_re^{y_r}e_{\mathbf{u}}$, so we need only
  consider how the basic diagrams of the WAHA act on the polynomial
  representation.
We have that \[\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);} \cdot
f\epsilon_{\mathbf{u}}=\frac{f^{s_r}-f}{y_{r+1}-y_r}\epsilon_{\mathbf{u}}\]
If $u_r\neq u_{r+1}$, then $\psi_r\cdot
f\epsilon_\mathbf{u}=f^{s_r}\epsilon_\mathbf{u}$ and $u_{r+1}b(y_{r+1})-u_rb(y_r)$
is invertible, so the appropriate case of (\ref{crossing-match}) holds. If $u_r= u_{r+1}$, then
$\frac{y_{r+1}-y_r}{u_{r}(b(y_{r+1})-b(y_r))}$ is invertible if $d(h)=1$
  and $\frac{y_{r+1}-y_r+h}{u_{r}(b(y_{r+1})-b(y_r))}$ is invertible if $d(h)=e^h$, so the
  formula is clear.
Now, we turn to (\ref{ghost-match}). We find that $\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}\cdot f\epsilon_{\mathbf{u}}=(u_rb(y_r)-\mathsf{q} u_sb(y_s) )f\epsilon_{\mathbf{u}}$. The first case of the isomorphism (\ref{ghost-match}) thus
follows directly from the polynomial representation of the wKLR algebra
  given in Proposition \ref{W-KLR-poly}. The remaining cases of
  (\ref{ghost-match}) are similarly clear.
\end{proof}
\excise{
As before, we can complete the algebra $\mathcal{S}_h$
and its polynomial representation by applying the same constructions
to $\hmH$ and $\hcP^-$. We denote these completions by $\mathcal{\widehat S}_h$ and $\hcP^-_\mathcal{S}$.
In $\mathcal{\widehat S}_h$, we have idempotents attached to vector
compositions in the sense of \cite{SWschur}, lists of dimension
vectors whose entries sum to $n$. While, the elements $X_i$ do not
preserve the image of $\epsilon_{\mathbf{k}}$, the Laurent polynomials in them
invariant under $S_{\mathbf{k}}$ do. In the case where $\mathbf{k}=(k)$, this is
just the center of the Hecke algebra.
In this case, for each dimension vector of the quiver $\Gamma$, we
can consider the stable kernel of $f(X_\bullet)-f(u_1,\dots,
u_k)$, for each $S_k$-symmetric polynomial $f$. We let $e_{\partial}$
be the idempotent projecting to this space.
More generally, for each vector composition $\cod$, we can consider
the image under the inclusion of the Young Hecke subalgebra of
$e_{\cod}=e_{\partial_1}\otimes \cdots \otimes e_{\partial_m}$.
Note that the idempotent $\epsilon_{\mathbf{k}}$ has the effect of
``sticking'' together the sequences of nodes in the idempotent
$e_{\mathbf{u}}$. If we let $\cod(\mathbf{u};\mathbf{k})$ be the vector composition whose
first piece is $\alpha_{u_1}+\cdots + \alpha_{u_{k_1}}$, second piece is
$\alpha_{u_{k_1+1}}+\cdots +\alpha_{u_{k_2}}$, etc. Then, we have that
$e_{\mathbf{u}}\epsilon_{\mathbf{k}}=\epsilon_{\mathbf{k}}e_{\cod(\mathbf{u};\mathbf{k})}$.
Thus, we obtain a family of idempotents in $\mathcal{\widehat S}_h$ which by abuse of
notation we also denote $e_{\cod}\in \epsilon_{\mathbf{k}}\hmH \epsilon_{\mathbf{k}}$
In particular, we have a map $- \cdot e_{\cod'}\colon \mathcal{S}_he_{\cod}
\to \mathcal{S}_he_{\cod'}$ whenever $\cod'$ is a merge of $\cod$ in the sense
of \cite[\S 3]{SWschur}}
The reader will note that the image of the idempotent $e'$ under
this isomorphism is not homogeneous. On abstract grounds, there
must exist a homogeneous idempotent $e''$ with isomorphic image. Let
us give a description of one such, which is philosophically quite
close to the approach of \cite{SWschur}.
Choose an arbitrary order
on the elements of $U$. The idempotent $e_\mu'$ for a composition
$\mu$ is replaced by the sum of contributions from a list of
multi-subsets $Z_i$ of $U$ such that $|Z_i|=\mu_i$. There's a
loading corresponding to these subsets, which we'll denote
$\mathbf{i}_{Z_*}$. The underlying subset is $C_{\mu}$ as defined before;
the points associated to the $j$th part at $x=js+\epsilon,\dots,
js+\mu_j\epsilon$ are labeled with the elements of $Z_j$ in our
fixed order. Finally, $e''_{Z_*}$ is the idempotent on this loading
that acts on each group of strands with the same label in $U$ and attached
to the same part of $\mu$ with a fixed homogeneous primitive
  idempotent in the nilHecke algebra, for example, one that acts as
  $y_1^{k-1}y_2^{k-2}\cdots y_{k-1}\partial_{w_0}$ in the polynomial
representation. Consider the sum $e''$ of the idempotents $e''_{Z_*}$ over all
$p$-tuples of multi-subsets.
The idempotent $e''$ has
isomorphic image to $e'$, since $We''$ is a sum of projectives for
each composition $\mu$ whose
$(\mu_1!\cdots\mu_p!)$-fold direct sum is $We_{C_\mu}$.
Thus, the algebra $e''We''$ is graded and isomorphic to the Schur algebra.
It would be interesting to make this isomorphism a bit more
explicit, but we will leave that to other work.
\section{Type F}
Now let us turn to our other complication, analogous to that which
appeared in
\cite{Webmerged}:
\begin{definition}
A {\bf type F$_1$ Hecke diagram} is an affine Hecke diagram
with a vertical red line inserted at $x=0$. The diagram must avoid
tangencies and triple points with this strand as well, and only
allow isotopies that preserve these conditions.
\end{definition}
We decorate this red strand with a multisubset
$Q_\bullet=\{Q_1,\dots, Q_\ell\}\subset U$ and let
$\PQ_i=Q_ie^{-z_i}$. To distinguish from other uses of the letter, we
let $\mathsf{e}_k(\mathbf{z})$ be the degree $k$ elementary symmetric
function in an alphabet $\mathbf{z}$.
\begin{definition}
  Let the {\bf type F$_1$ affine Hecke algebra} $\tilde{\EuScript{F}} (\mathsf{q},\PQ_{\bullet})$ be the algebra generated over
$\mathbbm{k}[[h,\mathbf{z}]]$ by type F$_1$ Hecke diagrams with $m$ strands modulo the
relations (\ref{qHecke-1}--\ref{qHecke-triple}) and the
relations: \newseq
\begin{equation*}\label{qHcost}\subeqn
\begin{tikzpicture}[very thick,baseline,scale=.7]
\draw [wei] (-1.8,0) +(0,-1) -- +(0,1);
\draw[green!50!black](-1.2,0) +(0,-1) .. controls +(-1.6,0) ..
+(0,1); \node at (-.3,0) {=}; \draw [green!50!black] (2.2,0)
+(0,-1) -- node[midway, fill=green!50!black,inner
sep=2.5pt,label=right:{$\ell$}]{}+(0,1); \draw[wei] (1.2,0)
+(0,-1) -- +(0,1); \node at (4.3,0) {$+\,\mathsf{e}_1(-\PQ_\bullet)$};
\draw[green!50!black] (6.8,0) +(0,-1) -- node[midway,
fill=green!50!black,inner
sep=2.5pt,label=right:{$\ell-1$}]{}+(0,1); \draw [wei] (5.8,0)
+(0,-1) -- +(0,1); \node at (8.8,0) {$+$}; \node at (9.6,-.07)
{$\cdots$}; \node at (11.35,0) {$+\mathsf{e}_{\ell}(-\PQ_\bullet)$};
\draw[green!50!black] (13.8,0) +(0,-1) -- +(0,1); \draw [wei]
(12.8,0) +(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
  That is, on the RHS, we have the product $p_{\PQ}(X_j)=(X_j-\PQ_1)\cdots
  (X_j-\PQ_\ell)$, where the green strand shown is the $j$th, and
\begin{equation*}\label{qred-triple}\subeqn
\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1); \draw[green!50!black](.5,-1)
to[out=90,in=-30] (-.5,1); \draw[green!50!black](-.5,-1)
to[out=30,in=-90] (.5,1);
\end{tikzpicture}- \begin{tikzpicture}[very
thick,baseline=-2pt,scale=.7] \draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}
=\sum_{i=1}^\ell\sum_{a+b=i-1}\mathsf{e}_{\ell-i}(-\PQ_\bullet)\cdot \Bigg(\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt,label=right:{$b$}]{} (.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt,label=left:{$a+1$}]{} (-.5,1);
\end{tikzpicture}-\mathsf{q} \begin{tikzpicture}[very
thick,baseline=-2pt,scale=.7] \draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt,label=right:{$b+1$}]{}
(.5,1); \draw[green!50!black](-.5,-1) to[out=90,in=-90]
node[midway, fill=green!50!black,inner
sep=2.5pt,label=left:{$a$}]{} (-.5,1);
\end{tikzpicture}\Bigg).
\end{equation*}
  The RHS can alternatively be written as $(X_i-\mathsf{q}
X_{i+1})\frac{p_{\PQ}(X_i)-p_\PQ(X_{i+1})}{X_i-X_{i+1}}$.
\end{definition}
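For orientation (this specialization is our own, derived directly from
the relations): when $\ell=1$, we have $p_\PQ(X_j)=X_j-\PQ_1$, so
(\ref{qHcost}) says the bigon equals a single square minus $\PQ_1$ times
the identity diagram, and the right-hand side of (\ref{qred-triple})
collapses to
\[(X_i-\mathsf{q} X_{i+1})\frac{p_\PQ(X_i)-p_\PQ(X_{i+1})}{X_i-X_{i+1}}
=X_i-\mathsf{q} X_{i+1},\]
since the difference quotient of a linear polynomial is $1$.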
We'll continue to use our convention of letting $X_r$ denote the sum
of all straight-line diagrams with a square on the $r$th green strand from
the left (ignoring red strands).
Given ${\mathscr{D}}$ a collection of subsets of $\mathbb{R}$, we'll let $\tilde{\EuScript{F}}_{\mathscr{D}}
(\mathsf{q},\PQ_{\bullet}),{\EuScript{F}}_{\mathscr{D}} (\mathsf{q},\PQ_{\bullet})$
denote the subalgebras of $\tilde{\EuScript{F}}
(\mathsf{q},\PQ_{\bullet}),{\EuScript{F}} (\mathsf{q},\PQ_{\bullet})$ spanned by
diagrams whose tops and bottoms lie in the set ${\mathscr{D}}$.
Let $e_i$ be an arbitrarily fixed idempotent in $\tilde{\EuScript{F}} (\mathsf{q},\PQ_{\bullet})$ given by
$i$ strands left of the red strand and $m-i$ right of it; let
$\mathscr{D}^\circ$ be the collection of the corresponding
sets. Since any idempotent is isomorphic to one of these by a
straight-line diagram, enlarging $\mathscr{D}^\circ$ will give a
Morita equivalent algebra. Let
$\tilde{P}_m$ be the free $S[X_1^{\pm 1},\dots, X_m^{\pm 1}]$-module generated
by elements $f_p$ for $p=0,\dots, m$.
\begin{proposition}\label{tilde-poly}
The algebra $\tilde{\EuScript{F}}_{\mathscr{D}^\circ} (\mathsf{q},\PQ_{\bullet})$ has a polynomial
representation that sends
\begin{itemize}
\item $e_i$ to the identity on the submodule generated by $f_i$.
\item $X_i$ to the multiplication operator and \[(T_i+1)\cdot
F(X_1,\dots,X_m)f_p\mapsto (X_i-\mathsf{q} X_{i+1})\frac{F^{s_i}-F}{X_{i+1}-X_i}f_p.\]
  \item the crossing of a green strand from positive to negative $x$-position to the identity
\[F(X_1,\dots,X_m)f_i\mapsto
F(X_1,\dots,X_m)f_{i+1},\] and the opposite crossing to
\[F(X_1,\dots,X_m)f_i\mapsto
p_\PQ(X_i)F(X_1,\dots,X_m)f_{i-1}.\]
\end{itemize}
\end{proposition}
\begin{proof}
This is a standard computation with Demazure operators.
\end{proof}
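As a consistency check (our computation, assuming the quadratic relation
among (\ref{qHecke-1}--\ref{qHecke-triple}) takes the standard form
$(T_i-\mathsf{q})(T_i+1)=0$): write $\pi_i$ for the operator by which
$T_i+1$ acts and $G=\frac{F^{s_i}-F}{X_{i+1}-X_i}$. The function $G$ is
$s_i$-invariant, so
\[\pi_i^2(F)=(X_i-\mathsf{q} X_{i+1})\,
\frac{(X_{i+1}-\mathsf{q} X_{i})-(X_i-\mathsf{q} X_{i+1})}{X_{i+1}-X_i}\,G
=(1+\mathsf{q})\,\pi_i(F),\]
which is exactly $(T_i+1)^2=(1+\mathsf{q})(T_i+1)$, i.e.\
$(T_i-\mathsf{q})(T_i+1)=0$.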
Now, we can allow several red
lines at various values of $x$, each of which carries a multiset of
values in $U$. For the sake of notation, we'll still denote the
multiset given by all such labels as $\{Q_1,\dots, Q_\ell\}$, with a strand with the label
$Q_{i}$ at $x$-value
$\vartheta_i$. So, the situation we had previously considered was
$\vartheta_i=0$ for all $i$.
\begin{definition}
  A {\bf type F Hecke diagram} is an affine Hecke diagram
  with vertical red lines inserted at $x=\vartheta_i$. The diagram must avoid
tangencies and triple points with these strands as well, and only
allow isotopies that preserve these conditions.
Let the {\bf type F affine Hecke algebra} $\tilde{\EuScript{F}}^\vartheta
(\mathsf{q},\PQ_\bullet)$ be the algebra generated over $\mathbbm{k}[[h,\mathbf{z}]]$ by
type F Hecke diagrams for $\vartheta$ with $m$ strands modulo the
local relations (\ref{qHecke-1}--\ref{qHecke-triple}) and
  (\ref{qHcost}--\ref{qred-triple}).
\end{definition}
These algebras have a polynomial representation using the same maps
attached to basic diagrams as Proposition \ref{tilde-poly}, but now
with idempotents, and thus copies of Laurent polynomials, indexed by weakly increasing
functions $\nu\colon [1,\ell]\to [0,m]$ with $\nu(i)$ giving the number of green strands to
the left of the $i$th red strand. As before, any two idempotents
corresponding to $\nu$ are isomorphic by straight-line diagrams.
These affine type F algebras have ``finite-type'' quotients. In other
contexts, these have been called ``steadied'' or ``cyclotomic'' quotients.
\begin{definition}
The {\bf type F Hecke algebra} $\EuScript{F}^\vartheta
(\mathsf{q},\PQ_\bullet)$ is the quotient of $\tilde{\EuScript{F}}^\vartheta
(\mathsf{q},\PQ_\bullet)$ by the 2-sided ideal generated by $e_{B}$ for
every set $B$ possessing an element $b\in B$ with $b<\vartheta_i$
for all $i$.
\end{definition}
Pictorially, the idempotents $e_B$ we kill possess a green strand
which is left of all the red strands. In \cite{Webmerged}, the corresponding ideal for KLR
algebras is called the {\bf violating ideal} and we will use the same
terminology here.
Given ${\mathscr{D}}$ a collection of subsets of $\mathbb{R}$, we'll let $\tilde{\EuScript{F}}^\vartheta_{\mathscr{D}}
(\mathsf{q},\PQ_{\bullet}),{\EuScript{F}}^\vartheta_{\mathscr{D}} (\mathsf{q},\PQ_{\bullet})$
denote the subalgebras of $\tilde{\EuScript{F}}^\vartheta
(\mathsf{q},\PQ_{\bullet}),{\EuScript{F}}^\vartheta (\mathsf{q},\PQ_{\bullet})$ spanned by
diagrams whose tops and bottoms lie in the set ${\mathscr{D}}$.
\begin{proposition}\label{prop:cyclo-Hecke}
The cyclotomic affine Hecke algebra $\mH^{Q_\bullet}_{h,\mathbf{z}}$ for the parameters $\{\PQ_1,\dots,
\PQ_\ell\}$ is isomorphic to the type F$_1$ Hecke algebra
$\EuScript{F}_{\mathscr{D}^\circ} (\mathsf{q},\PQ_{\bullet})$.
\end{proposition}
\begin{proof}
If we let $e$ be the idempotent given by green lines at $x=1,\dots,
  m$, then we see by Theorem \ref{wdHecke} that there is a map from the
  affine Hecke algebra sending $X_i$ and $T_i+1$ to diagrams as in
(\ref{Hecke-gens}). Applying
  (\ref{qHcost}) at the leftmost strand shows that $p_\PQ(X_1)$ lies
  in the violating ideal, and thus equals $0$ in $\EuScript{F}(\mathsf{q},\PQ_{\bullet})$. On the other hand,
the affine Hecke algebra acts faithfully on its signed polynomial
representation which factors through the representation of
Proposition \ref{tilde-poly}. Thus, we need only show that the preimage of
  the violating ideal under this map lies in the cyclotomic ideal. As in the proof of
\cite[3.16]{Webmerged}, the relations (\ref{qHcost},\ref{qred-triple})
allow us to reduce to the case where only a single green strand
passes into the left half of the plane. In this case, we gain a
factor of $p_\PQ(X_1)$, showing that this is in the cyclotomic
ideal.
\end{proof}
The type F algebras in the KLR family were introduced in
\cite{Webmerged}.
Let $o_1=\min(\vartheta_i)$, and
$o_j=\min_{\vartheta_i>o_{j-1}}(\vartheta_i)$; these are the distinct real
numbers that occur as $\vartheta_i$, listed in increasing order.
Consider the sequence $\lambda_j=\sum_{\vartheta_i=o_j}\omega_{Q_i}$ of
dominant weights for $\mathfrak{g}_U$, and let $S_{u,j}=\{s\in
[1,\ell]\mid \vartheta_s=o_j, u=Q_s\}$.
In
\cite[\ref{m-T-def}]{Webmerged}, we defined algebras
$T^{\underline{\boldsymbol{\la}}},\tilde{T}^{\underline{\boldsymbol{\la}}}$ attached to this list of weights. These
cannot match $\widehat{\tilde{\EuScript{F}}^\vartheta}
(\mathsf{q},\PQ_\bullet),\EuScript{F}^\vartheta
(\mathsf{q},\PQ_\bullet)$ since they are not naturally modules over
$\mathbbm{k}[[h,\mathbf{z}]]$; however, we will recover them when we set
$h=z_1=\cdots=z_\ell=0$. Instead, we should consider deformed
versions of these algebras $\tilde{T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z}),T^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$
introduced in \cite[\S\ref{w-sec:relat-tens-prod}]{WebwKLR} based on
the canonical deformation of weighted KLR algebras. As usual, we'll
let $y_r$ denote the sum of all straight line Stendhal diagrams with
a dot on the $r$th strand.
\begin{definition}
We let $\tilde{T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$ be the quotient of the algebra
freely spanned over $\mathbbm{k}[h,\mathbf{z}]$ by Stendhal diagrams (as defined
in \cite[\S\ref{m-sec:stendhal-diagrams}]{Webmerged}), with the local
relations (\ref{first-QH}--\ref{triple-dumb}) and the
relations\newseq
\begin{equation*}\label{cost}\subeqn
\begin{tikzpicture}[very thick,scale=.8,baseline=1.2cm]
\draw (-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
node[below,at start]{$u$}; \draw[wei] (-1.2,0) +(0,-1)
.. controls +(-1.6,0) .. +(0,1) node[below,at
      start]{$\lambda_j$}; \node at (-.3,0) {$=p_{u,j}$};
\node[scale=1.5] at (.5,0) {$\Bigg($}; \node[scale=1.5] at
(3.5,0) {$\Bigg)$}; \draw[wei] (2.8,0) +(0,-1) -- +(0,1)
      node[below,at start]{$\lambda_j$}; \draw (1.2,0) +(0,-1) -- +(0,1)
node[below,at start]{$u$}; \fill (1.2,0) circle (3pt);
\draw[wei] (-2.8,3) +(0,-1) .. controls +(1.6,0) .. +(0,1)
      node[below,at start]{$\lambda_j$}; \draw (-1.2,3) +(0,-1)
.. controls +(-1.6,0) .. +(0,1) node[below,at start]{$u$};
\node at (-.3,3) {$=p_{u,j}$};\node[scale=1.5] at (.5,3)
{$\Bigg($}; \node[scale=1.5] at (3.5,3) {$\Bigg)$}; \draw
(2.8,3) +(0,-1) -- +(0,1) node[below,at start]{$u$};
\draw[wei] (1.2,3) +(0,-1) -- +(0,1) node[below,at
      start]{$\lambda_j$}; \fill (2.8,3) circle (3pt);
\end{tikzpicture}\qquad
p_{u,j}(y)=\prod_{s\in S_{u,j}}(y-z_{s})
\end{equation*}
\begin{equation*}\subeqn\label{dumb}
\begin{tikzpicture}[very thick,baseline=2.85cm,scale=.8]
\draw[wei] (-3,3) +(1,-1) -- +(-1,1); \draw (-3,3) +(0,-1)
.. controls +(-1,0) .. +(0,1); \draw (-3,3) +(-1,-1) --
+(1,1); \node at (-1,3) {=}; \draw[wei] (1,3) +(1,-1) --
+(-1,1); \draw (1,3) +(0,-1) .. controls +(1,0) .. +(0,1);
\draw (1,3) +(-1,-1) -- +(1,1); \end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{red-dot}
\begin{tikzpicture}[very thick,baseline,scale=.8]
\draw(-3,0) +(-1,-1) -- +(1,1); \draw[wei](-3,0) +(1,-1) --
+(-1,1); \fill (-3.5,-.5) circle (3pt); \node at (-1,0) {=};
\draw(1,0) +(-1,-1) -- +(1,1); \draw[wei](1,0) +(1,-1) --
+(-1,1); \fill (1.5,.5) circle (3pt);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{red-triple}\subeqn
\begin{tikzpicture}[very thick,baseline=-2pt,scale=.8]
      \draw [wei] (0,-1) -- node[below,at start]{$\lambda_j$} (0,1);
\draw(.5,-1) to[out=90,in=-30] node[below,at start]{$u$}
(-.5,1); \draw(-.5,-1) to[out=30,in=-90] node[below,at
start]{$v$} (.5,1);
\end{tikzpicture}- \begin{tikzpicture}[very thick,baseline=-2pt,scale=.8]
      \draw [wei] (0,-1) --node[below,at start]{$\lambda_j$} (0,1);
\draw(.5,-1) to[out=150,in=-90] node[below,at start]{$u$}
(-.5,1); \draw(-.5,-1) to[out=90,in=-150] node[below,at
start]{$v$} (.5,1);
\end{tikzpicture}
    =\delta_{u,v}\sum_{p=1}^{\alpha_u^\vee(\lambda_j)}\sum_{a+b=p-1}\mathsf{e}_{\alpha_u^\vee(\lambda_j)-p}\big(\{-z_s\mid
s\in S_{u,j}\}\big) \cdot \Bigg(\begin{tikzpicture}[very thick,baseline=-2pt,scale=.8]
\draw [wei] (0,-1) -- (0,1);
\draw(.5,-1) to[out=90,in=-90] node[midway,circle,
fill=black,inner sep=2pt,label=right:{$b$}]{} (.5,1);
\draw(-.5,-1) to[out=90,in=-90] node[midway,circle,
fill=black,inner sep=2pt,label=left:{$a$}]{} (-.5,1);
\end{tikzpicture}\Bigg).
\end{equation*}
The algebra $T^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$ is the quotient of
$\tilde{T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$ by violating diagrams as defined in
\cite[\ref{m-viol-def}]{Webmerged}.
\end{definition}
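As a sanity check (our observation, easily derived from the definitions):
since $\alpha_u^\vee(\omega_{Q_s})=\delta_{u,Q_s}$, the pairing of
$\alpha_u^\vee$ against the $j$th dominant weight equals
\[\#\{s\mid \vartheta_s=o_j,\ Q_s=u\}=|S_{u,j}|.\]
In particular, when $S_{u,j}=\emptyset$, the polynomial $p_{u,j}$ is the
empty product $1$ and the sum on the right-hand side of
(\ref{red-triple}) is empty, so relations (\ref{cost}) and
(\ref{red-triple}) say that a strand labeled $u$ passes the $j$th red
strand freely.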
This algebra is graded with Stendhal diagrams given their usual
grading and both $h$ and $\mathbf{z}$ having degree 2.
As in types O and W, we have natural completions. We let $\widehat{\tilde{\EuScript{F}}^\vartheta}
(\mathsf{q},\PQ_\bullet)$ be the completion of $\tilde{\EuScript{F}}^\vartheta
(\mathsf{q},\PQ_\bullet)$ by the ideals generated by $\prod_{u\in
U}(X_i-u)^N$ for $N=1,2,3,\dots$ and let
$\widehat{\tilde{T}^{\underline{\boldsymbol{\la}}}}(h,\mathbf{z})$ be the completion of $\tilde{T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$
with respect to degree. Since $h$ and $\mathbf{z}$ both have positive
degree, the result is a complete module over $\mathbbm{k}[[h,\mathbf{z}]]$.
For every loading, we have an associated function $\kappa$, with
$\kappa(k)$ equal to
the number of black strands to the left of $o_k$, and a sequence
$(u_1,\dots, u_n)$ given by the eigenvalues we've attached to each
black strand. We let $e_{\mathbf{u},\kappa}$ be the idempotent associated to this
data in $\tilde{T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$ and by extension in
$\widehat{\tilde{T}^{\underline{\boldsymbol{\la}}}}(h,\mathbf{z})$ and ${T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$.
\begin{theorem}\label{F-isomorphism}
We have an isomorphism $\widehat{\tilde{\EuScript{F}}^\vartheta}
(\mathsf{q},\PQ_\bullet)\cong \widehat{\tilde{T}^{\underline{\boldsymbol{\la}}}}(h,\mathbf{z})$ which induces an
isomorphism $\EuScript{F}^\vartheta
(\mathsf{q},\PQ_\bullet)\cong {T}^{\underline{\boldsymbol{\la}}}(h,\mathbf{z})$, given by
\[\epsilon_{\mathbf{u},\kappa}\mapsto e_{\mathbf{u},\kappa}\qquad\qquad X_r\mapsto\sum_{\mathbf{u},\kappa}u_rb({y_r})e_{\mathbf{u},\kappa}\qquad \qquad \tikz[baseline,very thick,scale=1.5, green!50!black]{\draw [wei] (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);} \mapsto\tikz[baseline,very
thick,scale=1.5]{\draw [wei] (.2,.3) -- (-.2,-.1); \draw (.2,-.1) --
(-.2,.3);}\]
\vspace{-3mm}
\[\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw [wei] (-.2,.3) --
(.2,-.1); \draw
(-.2,-.1) -- (.2,.3);} \epsilon_{\mathbf{u},\kappa} \mapsto
\frac{\displaystyle\prod_{\vartheta_s=o_k}(u_rb({y_r-z_s})-\PQ_s)}{\displaystyle\prod_{s\in
S_{u_r,j}} (y_r-z_s)}\, \tikz[baseline,very thick,scale=1.5]{\draw [wei] (-.2,.3) --
(.2,-.1); \draw
(-.2,-.1) -- (.2,.3);} e_{\mathbf{u},\kappa}
\qquad \qquad \tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);}\, e_{\mathbf{u},\kappa}\mapsto
A_r^{\mathbf{u}}\tikz[baseline,very thick,scale=1.5]{\draw (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);} e_{\mathbf{u},\kappa} \]
where the leftmost green/black strand shown is the $r$th from the
left, and the red strand shown is the $j$th from the left.
\end{theorem}
\begin{proof}
This is easily worked out by comparing the completed polynomial representation
of $\tilde{T}^{\underline{\boldsymbol{\la}}}$ given in \cite[\ref{m-action}]{Webmerged} for
\[P_{uv}(a,b)=
\begin{cases}
b-a+h & u=qv\\
1 & u\neq qv
\end{cases}
\] and that for $\tilde{\EuScript{F}}^\vartheta (\mathsf{q},\PQ_\bullet)$.
Since all generators and relations involve at most one red line, we
can assume that $\ell=1$, and use the representation of Proposition
\ref{tilde-poly} for the Hecke side. That diagrams with only green
strands have actions that match is just Theorem \ref{O-isomorphism}.
  Thus, we only need to check that the crossing of green and red strands is
  intertwined with a crossing of red and black strands. Since we have
only one red strand, we have that
$\prod_{\vartheta_s=o_k}(u_re^{y_r}-\PQ_s)=p_{\PQ_\bullet}(u_re^{y_r})$.
Thus, comparing the representation of Proposition \ref{tilde-poly} with
the obvious $\mathbbm{k}[h,\mathbf{z}]$-deformation of the action in
\cite[\ref{m-action}]{Webmerged} yields the result.
\end{proof}
\section{Type WF}
\label{sec:antigens-ab}
Finally, we consider these two complications jointly. As mentioned
before, these are unlikely to be familiar algebras for the reader, but
these results will ultimately be
useful in understanding category $\mathcal{O}$ of rational Cherednik algebras
in \cite{WebRou}.
\begin{definition}
A {\bf type WF Hecke diagram} is a type W Hecke diagram
with vertical red lines inserted at $x=\vartheta_i$. The diagram must avoid
tangencies and triple points between any combination of these
strands, green strands and ghosts, and only
allow isotopies that preserve these conditions.
Let the {\bf type WF affine Hecke algebra} $\EuScript{\widetilde{WF}}^\vartheta(\mathsf{q},\PQ_{\bullet})$ be the
$\mathbbm{k}[[h,\mathbf{z}]]$-algebra generated by
type WF Hecke diagrams modulo the relations
(\ref{nilHecke-2}--\ref{eq:triple-point-2},\ref{qHcost}) and \newseq
\excise{\newseq
\begin{equation*}\subeqn\label{PC-1}
\begin{tikzpicture}[scale=.7,baseline,green!50!black]
\draw[very thick](-3,0) +(-1,-1) -- +(1,1); \draw[very thick](-3,0) +(1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(-1,1) ;
\node[black] at (-1.5,0){$-$}; \draw[very thick](0,0) +(-1,-1) -- +(1,1); \draw[very thick](0,0) +(1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}
+(-1,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[scale=.7,baseline,green!50!black]
\draw[very thick](-3,0) +(-1,-1) -- node[pos=.2,fill=green!50!black,inner sep=3pt]{}+(1,1); \draw[very thick](-3,0) +(1,-1) -- +(-1,1);
\node[black] at (-1.5,0){$-$}; \draw[very thick](0,0) +(-1,-1) --
node[pos=.8,fill=green!50!black,inner sep=3pt]{} +(1,1); \draw[very thick](0,0) +(1,-1) -- +(-1,1)
; \node[black] at (2,0){$=$}; \draw[very
thick](4,0) +(-1,-1) -- +(-1,1); \draw[very
thick](4,0) +(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\subeqn\label{PC2}
\begin{tikzpicture}[very thick,scale=.9,yscale=.8,baseline,green!50!black]
\draw(-2.8,0) +(0,-1) .. controls +(1.6,0) .. +(0,1)
; \draw (-1.2,0) +(0,-1) .. controls
+(-1.6,0) .. +(0,1) ;
\end{tikzpicture}\hspace{4mm}
= 0\qquad \qquad
\begin{tikzpicture}[very thick,scale=.9,yscale=.8,baseline,green!50!black]
\draw (-3,0) +(1,-1) -- +(-1,1); \draw
(-3,0) +(-1,-1) -- +(1,1) ; \draw
(-3,0) +(0,-1) .. controls +(-1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}=\hspace{4mm}
\begin{tikzpicture}[very thick,scale=.9,yscale=.8,baseline,green!50!black]
\draw (1,0) +(1,-1) -- +(-1,1)
; \draw (1,0) +(-1,-1) -- +(1,1)
; \draw (1,0) +(0,-1) .. controls
+(1,0) .. +(0,1);
\end{tikzpicture}\hspace{4mm}
\end{equation*}
\[ \subeqn\label{PCbigon}
\begin{tikzpicture}[very thick,xscale=1.3,yscale=.8,baseline=25pt,green!50!black]
\draw (1,0) to[in=-90,out=90] (1.5,1) to[in=-90,out=90] (1,2)
;
\draw[dashed] (1.5,0) to[in=-90,out=90] (1,1) to[in=-90,out=90] (1.5,2);
\draw (2.5,0) to[in=-90,out=90] (2,1) to[in=-90,out=90] (2.5,2);
\node[black] at (3,1) {=};
\draw (3.7,0) --node[midway,fill,inner sep=3pt]{} (3.7,2)
;
\draw[dashed] (4.2,0) to (4.2,2);
\draw (5.2,0) -- (5.2,2);
\node[black] at (5.6,1) {$-\mathsf{q}$};
\draw (6.2,0) -- (6.2,2);
\draw[dashed] (6.7,0)-- (6.7,2);
\draw (7.7,0) -- node[midway,fill,inner sep=3pt]{} (7.7,2);
\end{tikzpicture}
\]
\[ \subeqn\label{PC-bigon2}
\begin{tikzpicture}[very thick,xscale=1.3,yscale=.8,baseline=25pt,green!50!black]
\draw[dashed] (1,0) to[in=-90,out=90] (1.5,1) to[in=-90,out=90] (1,2)
;
\draw(1.5,0) to[in=-90,out=90] (1,1) to[in=-90,out=90] (1.5,2);
\draw (2,0) to[in=-90,out=90] (2.5,1) to[in=-90,out=90] (2,2);
\node[black] at (3,1) {=};
\draw[dashed] (3.7,0) --(3.7,2)
;
\draw (4.2,0) to node[midway,fill,inner sep=3pt]{} (4.2,2);
\draw (4.7,0) -- (4.7,2);
\node[black] at (5.6,1) {$-\mathsf{q}$};
\draw[dashed] (6.2,0) -- (6.2,2);
\draw (6.7,0)-- (6.7,2);
\draw (7.2,0) -- node[midway,fill,inner sep=3pt]{} (7.2,2);
\end{tikzpicture}
\]
\begin{equation*}\label{PC-triple-point-1}\subeqn
\begin{tikzpicture}[very thick,xscale=1.5,yscale=.8,baseline,green!50!black]
\draw[dashed] (-3,0) +(.4,-1) -- +(-.4,1);
\draw[dashed] (-3,0) +(-.4,-1) -- +(.4,1);
\draw (-2,0) +(.4,-1) -- +(-.4,1); \draw
(-2,0) +(-.4,-1) -- +(.4,1);
\draw (-3,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);\node at (-1,0) {=}; \draw[dashed] (0,0) +(.4,-1) -- +(-.4,1);
\draw[dashed] (0,0) +(-.4,-1) -- +(.4,1);
\draw (1,0) +(.4,-1) -- +(-.4,1); \draw
(1,0) +(-.4,-1) -- +(.4,1);
\draw (0,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\node[black] at (2.1,0) {$-\mathsf{q}$};
\draw (4,0)
+(.4,-1) -- +(.4,1); \draw (4,0)
+(-.4,-1) -- +(-.4,1);
\draw[dashed] (3,0)
+(.4,-1) -- +(.4,1); \draw[dashed] (3,0)
+(-.4,-1) -- +(-.4,1);
\draw (3,0)
+(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{PC-triple-point-2}\subeqn
\begin{tikzpicture}[very thick,xscale=1.5,yscale=.8,baseline,green!50!black]
\draw (-3,0) +(.4,-1) -- +(-.4,1);
\draw (-3,0) +(-.4,-1) -- +(.4,1);
\draw (-2,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);
\draw[dashed] (-3,0) +(0,-1) .. controls +(-.5,0) .. +(0,1);\node[black] at (-1,0) {=}; \draw (0,0) +(.4,-1) -- +(-.4,1);
\draw (0,0) +(-.4,-1) -- +(.4,1);
\draw[dashed] (0,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\draw (1,0) +(0,-1) .. controls +(.5,0) .. +(0,1);
\node[black] at (2,0)
{$+$};
\draw (3,0)
+(.4,-1) -- +(.4,1); \draw (3,0)
+(-.4,-1) -- +(-.4,1);
\draw[dashed] (3,0)
+(0,-1) -- +(0,1); \draw (4,0)
+(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}
\begin{equation*}\label{PCcost}\subeqn
\begin{tikzpicture}[very thick,baseline,xscale=1.5,yscale=.8]
\draw [wei] (-1.8,0) +(0,-1) -- +(0,1);
\draw[green!50!black](-1.2,0) +(0,-1) .. controls +(-1.6,0) .. +(0,1);
\node at (-.5,0) {=};
\draw [green!50!black] (1.5,0) +(0,-1) -- node[midway,
fill=green!50!black,inner sep=2.5pt]{}+(0,1);
\draw[wei] (.8,0) +(0,-1) -- +(0,1);
\node at (2.8,0) {$-\PQ_\bullet$};
\draw[green!50!black] (4.6,0) +(0,-1) -- +(0,1);
\draw [wei] (3.9,0) +(0,-1) -- +(0,1);
\end{tikzpicture}
\end{equation*}}
\excise{ \begin{equation*}\label{PC-red-triple}\subeqn
\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-30] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}- \begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}
=-\PQ_\bullet \Bigg(\,\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-90] (.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt]{} (-.5,1);
\end{tikzpicture}\,-\mathsf{q} \,\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt]{} (.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-90] (-.5,1);
\end{tikzpicture}\,\Bigg)
\end{equation*}}
\begin{equation*}\label{PCsmart-red-triple}\subeqn
\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1); \draw[green!50!black](.5,-1)
to[out=90,in=-30] (-.5,1); \draw[green!50!black](-.5,-1)
to[out=30,in=-90] (.5,1);
\end{tikzpicture}- \begin{tikzpicture}[very
thick,baseline=-2pt,scale=.7] \draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}
=\sum_{i=1}^\ell\sum_{a+b=i-1}\mathsf{e}_{\ell-i}(-\PQ_\bullet)\cdot \Bigg(\begin{tikzpicture}[very thick,baseline=-2pt,scale=.7]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt,label=right:{$b$}]{} (.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-90] node[midway,
fill=green!50!black,inner sep=2.5pt,label=left:{$a$}]{} (-.5,1);
\end{tikzpicture}\Bigg).
\end{equation*}
\begin{equation*}\label{PC-dumb-red-triple}\subeqn
\begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black,dashed](.5,-1) to[out=90,in=-30] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black,dashed](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}\qquad \qquad \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=90,in=-30] (-.5,1);
\draw[green!50!black,dashed](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black,dashed](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}\qquad \qquad \begin{tikzpicture}[very thick,yscale=.8,baseline=-2pt]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black,dashed](.5,-1) to[out=90,in=-30] (-.5,1);
\draw[green!50!black,dashed](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,yscale=.8,baseline=-2pt]
\draw [wei] (0,-1) -- (0,1);
\draw[green!50!black,dashed](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[green!50!black,dashed](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}
\end{equation*}
\end{definition}
Note that relation (\ref{qred-triple}) is {\it not} true in this
algebra. As before, we should think of type F diagrams as type WF
diagrams with $\ck$ so small that we cannot see that the ghost and
strand are separate. Using this approach, we can see that relation
(\ref{qred-triple}) for a strand and a ghost together is a consequence
of (\ref{ghost-bigon1}) and (\ref{PCsmart-red-triple}), much as in
Theorem \ref{wdHecke}.
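The elementary symmetric polynomial coefficients in (\ref{PCsmart-red-triple}) can be packaged as a divided difference. The following identity is a standard computation, stated here as a gloss; writing $x,y$ for the dots on the two green strands and taking $p_{\PQ_\bullet}(u)=\prod_{s=1}^\ell(u-\PQ_s)$ are our notational assumptions.

```latex
% Writing the dots on the two green strands as commuting variables x, y
% (our convention), the right-hand side of (\ref{PCsmart-red-triple})
% assembles into a divided difference of p_{\PQ_\bullet}:
\[
\sum_{i=1}^{\ell}\,\sum_{a+b=i-1} \mathsf{e}_{\ell-i}(-\PQ_\bullet)\, x^a y^b
\;=\; \frac{p_{\PQ_\bullet}(x)-p_{\PQ_\bullet}(y)}{x-y},
\qquad\text{since}\qquad
p_{\PQ_\bullet}(u)=\sum_{i=0}^{\ell} \mathsf{e}_{\ell-i}(-\PQ_\bullet)\,u^i .
\]
```

This follows from the expansion $\frac{x^i-y^i}{x-y}=\sum_{a+b=i-1}x^ay^b$ applied term by term.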
We call an idempotent {\bf unsteady} if the strands can be
divided into two groups with a gap $>|\ck|$ between them
and all red strands in the right-hand group, and {\bf steady} otherwise.
Thus, the idempotents shown in (\ref{steady}) are steady, and those in
(\ref{unsteady}) are unsteady.
\newseq
\[\subeqn\label{steady}\tikz[baseline=-2pt,very thick, xscale=3,green!50!black] {
\draw (.05,-.5) -- (0.05,.5);
\draw[dashed] (-.55,-.5) -- (-.55,.5);
\draw[dashed] (-.1,-.5) -- (-.1,.5);
\draw (.5,-.5) -- (.5,.5);
\draw[wei](.35,-.5) -- (.35,.5);
}
\hspace{3cm} \tikz[baseline=-2pt,very thick, xscale=3,green!50!black] {
\draw (1.25,-.5) -- (1.25,.5);
\draw[dashed] (.65,-.5) -- (.65,.5);
\draw[dashed] (-.1,-.5) -- (-.1,.5);
\draw (.5,-.5) -- (.5,.5);
\draw[wei](.35,-.5) -- (.35,.5);
}\]
\[\subeqn\label{unsteady}\tikz[baseline=-2pt,very thick, xscale=3,green!50!black] {
\draw (-.25,-.5) -- (-0.25,.5);
\draw[dashed] (-.85,-.5) -- (-.85,.5);
\draw[dashed] (-.1,-.5) -- (-.1,.5);
\draw (.5,-.5) -- (.5,.5);
\draw[wei](.35,-.5) -- (.35,.5);
} \hspace{3cm}\tikz[baseline=-2pt,very thick, xscale=3,green!50!black] {
\draw (-.25,-.5) -- (-0.25,.5);
\draw[dashed] (-.85,-.5) -- (-.85,.5);
\draw[dashed] (-.4,-.5) -- (-.4,.5);
\draw (.2,-.5) -- (.2,.5);
\draw[wei](.35,-.5) -- (.35,.5);
}\]
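To make the definition concrete, here is the smallest case, spelled out under our reading of the definition; this example is ours, not from the text.

```latex
% Smallest case (our illustrative example): a single green strand at
% x = x_1 and a single red strand at x = \vartheta_1.  The only way to
% split the strands into two groups with all red strands on the right
% is \{green\} \,|\, \{red\}, with gap \vartheta_1 - x_1; so the
% idempotent
\[
e \text{ is unsteady} \iff \vartheta_1 - x_1 > |\ck|,
\]
% i.e.\ it is unsteady exactly when the gap between the green strand
% and the red strand exceeds the ghost shift $|\ck|$.
```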
\begin{definition}
Let the {\bf type WF Hecke algebra}
$\EuScript{{WF}}^\vartheta(\mathsf{q},\PQ_{\bullet})\cong \PC^\vartheta$
be the quotient of
$\EuScript{\widetilde{WF}}^\vartheta(\mathsf{q},\PQ_{\bullet})$ by the
ideal generated by all unsteady idempotents.
\end{definition}
The name ``pictorial
Cherednik algebra'' refers to the fact that the representation
category of this algebra when $\mathbbm{k}=\mathbb{C}$ and we set $h=z_i=0$ is
equivalent to the category
$\mathcal{O}$ over a Cherednik algebra for the group $\mathbb{Z}/\ell\mathbb{Z}\wr S_m$ for
certain parameters.
It is also useful as a common generalization of all
the algebras we have considered so far.
Given ${\mathscr{D}}$ a collection of subsets of $\mathbb{R}$, we'll let $\widetilde{\EuScript{WF}}_{\mathscr{D}}^\vartheta
(\mathsf{q},\PQ_{\bullet}),{\EuScript{WF}}_{\mathscr{D}}^\vartheta (\mathsf{q},\PQ_{\bullet})$
denote the subalgebras of $\widetilde{\EuScript{WF}}^\vartheta
(\mathsf{q},\PQ_{\bullet}),{\EuScript{WF}}^\vartheta (\mathsf{q},\PQ_{\bullet})$ spanned by
diagrams whose tops and bottoms lie in the set ${\mathscr{D}}$.
As in earlier cases, the algebra $\EuScript{\widetilde{WF}}^\vartheta(\mathsf{q},\PQ_{\bullet})$
is equipped with a polynomial representation using the rules of
Proposition \ref{W-poly} for diagrams only involving green strands and
Proposition \ref{tilde-poly} for basic diagrams involving red and
green strands.
We can extend Theorem \ref{wdHecke} to this setting. As before, let
$\mathscr{O}=\{B_s=\{s,2s,3s,\dots, ns\}\}$ for $s$ some real
number with $s\gg |\ck|,|\vartheta_i|$.
\begin{theorem}\label{wfHecke}
There is an isomorphism of $\EuScript{ {WF}}^\vartheta_\mathscr{O}
(\mathsf{q},\PQ_{\bullet})$ to the cyclotomic affine Hecke algebra
$\mH^{Q_\bullet}_{h,\mathbf{z}}$ for the
parameters $\{\PQ_1,\dots, \PQ_\ell\}$.
\end{theorem}
\begin{proof}
First, since $s\gg |\vartheta_i|$, all strands start and end to the
right of all red strands. Thus, using the relations, every diagram
can be written in terms of diagrams that remain to the right of all
red strands, and we obtain a surjective map from the type W affine
Hecke algebra $\EuScript{W}_\mathcal{O}$ onto $\EuScript{ {WF}}^\vartheta_\mathscr{O}
(\mathsf{q},\PQ_{\bullet})$.
By Theorem \ref{wdHecke}, we can identify $\EuScript{W}_\mathcal{O}$ with the usual
affine Hecke algebra $\hmH$.
Now consider a diagram
where the first strand starts at $(s,0)$, goes linearly to
$(-s,\nicefrac 12)$ then back to $(s,1)$, while all others remain
straight. This diagram factors through an unsteady idempotent, since the horizontal slice at
$y=\nicefrac{1}2$ is unsteadied by the leftmost strand. By the relation (\ref{qHcost}), this diagram is equal to $\prod_{i=1}^\ell
(X_1-\PQ_i)$ which thus lies in the kernel of the map of the affine
Hecke algebra to $\EuScript{ {WF}}^\vartheta_\mathscr{O}
(\mathsf{q},\PQ_{\bullet})$.
As in the proof of Proposition \ref{prop:cyclo-Hecke}, we can easily check that
the diagram discussed above generates the kernel so $\EuScript{
{WF}}^\vartheta_\mathscr{O} (\mathsf{q},\PQ_{\bullet})$ is isomorphic to
this cyclotomic quotient.
\end{proof}
There is also a version of this theorem relating the type WF Hecke
algebras to cyclotomic Hecke algebras. Assume that the parameters
$\vartheta_i$ are ordered with $\vartheta_1<\dots <\vartheta_\ell$.
Fix a set $\Lambda$ of $\ell$-multicompositions of $m$ which is an
upper order ideal in dominance order. We'll be interested in the
cyclotomic $q$-Schur algebra $\mathscr{S}(\Lambda)$ attached to the
data $(\mathsf{q},\PQ^\bullet)$ defined by Dipper, James and Mathas
\cite[6.1]{DJM}; let $\mathscr{S}^-(\Lambda)$ be the signed version of
this algebra defined using signed permutation modules.
Let $r$ be
the maximum number of parts of one of the components of $\xi\in
\Lambda$. Choose constants $\epsilon \ll \ck$ and $s$ so
that \[|\ck|+m\epsilon < s < \min_{k\neq
n}(|\vartheta_k-\vartheta_n|/r);\] of course, this is only possible
if $r|\ck| <|\vartheta_k-\vartheta_n|$ for all $k\neq n$. In this
case, we associate to every multicomposition $\xi\in \Lambda$ a subset $E_\xi$
that consists of the points $\vartheta_p+i\epsilon+js$ for every
$1\leq j\leq \xi^{(p)}_i$.
Consider the different ways of filling the diagram of a multipartition
$\nu$ with the points $(i,j,p)$ in the diagram of $\xi\in \Lambda$. One can
easily see that:
\begin{lemma}
The filling that replaces $(i,j,p)$ by
$\vartheta_p+i\epsilon+js$ is an $E_\xi$-tableau if the filling by
$j_p$ is a semi-standard tableau, increasing weakly along columns and strictly
along rows if $\kappa>0$ and {\it vice versa} if $\kappa<0$. In
fact, this gives a $\xi!:=\prod \xi^{(p)}_k!$-to-$1$ map from
$E_\xi$-tableaux to semi-standard tableaux of type $\xi$. \hfill\hfill \mbox{$\Box$}\medskip\newline
\end{lemma}
As in
Section \ref{sec:weight-gener}, there is an idempotent diagram
$e'_\xi$ on this
subset, where on the strands with $x$-value in $[
\vartheta_p+js,\vartheta_p+js+\epsilon\mu^{(p)}_j]$ we act by the idempotent
$y^{\mu_j-1}_1\cdots y_{\mu_j-1}\partial_{w_0}$. Let
$e_\Lambda=\sum_{\xi\in \Lambda} e_\xi'$. Let $\mathscr{D}$ be any
collection of $m$-element subsets containing $E_\xi$ for all $\xi\in \Lambda$.
\begin{proposition}\label{cqs-morita}
We have an isomorphism $\mathscr{S}(\Lambda)\cong e_\Lambda \EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})e_\Lambda $ if $\ck<0$, and $\mathscr{S}^-(\Lambda)\cong e_\Lambda \EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})e_\Lambda $ if $\ck>0$. If $\Lambda$ contains all
$\ell$-multipartitions of $m$, this subalgebra is Morita equivalent to $\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})$ via the obvious bimodule.
\end{proposition}
\begin{proof}
For $t\gg 0$, we have that $e_{D_{t,m}}\EuScript{
{WF}}^\vartheta_\mathscr{D} (\mathsf{q},\PQ_{\bullet}) e_{D_{t,m}}$ is the cyclotomic
Hecke algebra $\mH^{Q_\bullet}_{h,\mathbf{z}} $ by Theorem \ref{wfHecke}.
Thus, we have that $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e_\Lambda $ is a bimodule over $\mH^{Q_\bullet}_{h,\mathbf{z}}$
and the algebra $e_\Lambda \EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})e_\Lambda $.
The dimension of $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_\xi$ is $1/\xi! $ times the
dimension of $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta
(\mathsf{q},\PQ_{\bullet}) e_{E_\xi}$. Thus, by
\cite[\ref{r-th:cellular}]{WebRou}, it is equal to $1/\xi!$ times the
number of pairs of tableaux of the same shape, one standard and one of
type $E_{\xi}$. That is to say, it is the number of pairs of tableaux of
the same shape, one standard and one
semi-standard of type $\xi$. This is the same as the dimension of the
permutation module, so it suffices to construct a surjective map from
the bimodule $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_\xi$ to or from the corresponding permutation
module $P_\xi$.
Let $q_\xi$ be the diagram that linearly
interpolates between $D_{t,m}$ and $E_\xi$, times $e'_\xi$ on the
right. We'll concentrate on the case where $\kappa<0$. The same argument as the proof of Theorem \ref{waha-Schur}
shows that $(T_i-q)q_\xi=0$ if the $i$th and $i+1$st strands lie in
one of the segments $[
\vartheta_p+js,\vartheta_p+js+\epsilon\mu^{(p)}_j]$ in $E_\xi$. If
$\kappa>0$, we instead see that $(T_i+1)q_\xi=0$. Note
that $q_\xi$ generates $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e_\Lambda $ as a left module.
If $\xi^{(p)}=\emptyset$ for $p<\ell$, then this shows that sending
$m_\xi\mapsto q_\xi$ induces a map of $P_\xi$
to $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_\xi$, which is surjective since $q_{\xi}$
generates. Thus, we have an isomorphism in this case.
For an arbitrary $\xi$, let $\xi^\circ$ be the multicomposition where
$(\xi^\circ)^{(p)}=\emptyset$ for $p<\ell$, and $(\xi^\circ)^{(\ell)}$
is the concatenation of $\xi^{(p)}$ for all $p$. We have a natural
map $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_\xi\to e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_{\xi^\circ}$ given by the straight-line
diagram interpolating between $\xi$ and $\xi^\circ$. Applying
relation (\ref{qHcost}) many times, we find that this map sends
\[q_{\xi}\mapsto\prod_{j\leq|\xi^{(1)}|+\cdots +|\xi^{(k-1)}|}
(L_j-Q_k)q_{\xi^\circ}.\] The submodule of $P_{\xi^\circ}$
generated by this element is a copy of $P_{\xi}$, thus we have a
surjective map $e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e'_\xi\to P_{\xi}$. As we argued above,
dimension considerations show that this is an isomorphism.
We have from \cite[\ref{r-lem:-1-faithful}]{WebRou} that the map
\begin{equation}
e_\Lambda \EuScript{ {WF}}^\vartheta
(\mathsf{q},\PQ_{\bullet})e_\Lambda \to \operatorname{End}_{\mH^{Q_\bullet}_{h,\mathbf{z}}}(e_{D_{t,m}}\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet}) e_\Lambda )\label{eq:2}
\end{equation}
is injective. Applying \cite[\ref{r-th:cellular}]{WebRou} again, the
dimension of $e_\Lambda \EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})e_\Lambda $ is equal to the number of pairs of
semi-standard tableaux of the same shape and (possibly different) type
in $\Lambda$. Thus, the dimension coincides with $\dim
\mathscr{S}(\Lambda)$. This shows that the injective map \eqref{eq:2} must be
an isomorphism.
Finally, we wish to show that the bimodules $e_\Lambda \EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})$ and $\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})e_\Lambda $ induce a Morita equivalence. For this, it
suffices to show that no simple $\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})$-module is killed by $e_\Lambda $. If this were the
case, $\EuScript{ {WF}}^\vartheta_\mathscr{D}
(\mathsf{q},\PQ_{\bullet})$ would have strictly more simple modules than the
cyclotomic $q$-Schur algebra. However, in
\cite[\ref{r-th:cellular}]{WebRou}, we show that this algebra is
cellular with the number of cells equal to the number of
$\ell$-multipartitions of $m$. By \cite[6.16]{DJM}, this is the
number of simples over $\mathscr{S}(\Lambda)$ as well.
\end{proof}
This also allows us to show:
\begin{theorem}\label{affine-schur-morita}
The idempotent $e'$ induces a Morita equivalence between the affine
Schur algebra $\mathcal{S}_h$ and the type W affine Hecke algebra
$\EuScript{W}_{\mathscr{B}}$ for any set $\mathscr{B}$ containing
${\mathscr{C}}$.
\end{theorem}
\begin{proof}
Since the algebra $\mathcal{S}_h$ is Noetherian, if
$\EuScript{W}_{\mathscr{B}}e' \EuScript{W}_{\mathscr{B}}\neq
\EuScript{W}_{\mathscr{B}}$, then there is at least one simple
module over $\EuScript{W}_{\mathscr{B}}/\EuScript{W}_{\mathscr{B}}e'
\EuScript{W}_{\mathscr{B}}$, which is killed by $e'$. Any central
element of $ \EuScript{W}_{\mathscr{B}}$ must act by scalars on
this module. Since $\EuScript{W}_{\mathscr{B}}$ is of finite rank
over its center, this simple module is finite
dimensional. Thus, $X_1$ acting on this simple module has a minimal
polynomial, and this simple module factors through the map to a type
WF Hecke algebra $\EuScript{WF}^\vartheta$ where we choose
$\vartheta_i\ll \vartheta_{i+1}$ for all $i$, and $\vartheta_\ell\ll
0$.
By Proposition \ref{cqs-morita}, the identity of
$\EuScript{WF}^\vartheta$ can be written as a sum of diagrams factoring through the idempotent
$e_{\xi}'$ at $y=\nicefrac 12$. Using the relations
(\ref{qHcost},\ref{qred-triple},\ref{PCsmart-red-triple},\ref{PC-dumb-red-triple}) allows us to
isotope the diagram without changing the slice at $y=\nicefrac 12$.
Once all the strands are to the right of all red lines, this slice
at $y=\nicefrac 12$ will be the idempotent $e_{\xi^\circ}'$, which
lies in $e'
\EuScript{W}_{\mathscr{B}}e'$. This shows that $\EuScript{W}_{\mathscr{B}}e' \EuScript{W}_{\mathscr{B}}=
\EuScript{W}_{\mathscr{B}}$, proving the Morita equivalence.
\end{proof}
There is also a KLR algebra in type WF.
\begin{definition}
A {\bf WF KLR diagram} is a wKLR diagram
with vertical red lines inserted at $x=\vartheta_i$. The diagram must avoid
tangencies and triple points between any combination of these
strands, green strands and ghosts, and only
allow isotopies that preserve these conditions.
The type WF KLR algebra $\tilde{\dalg}^\vartheta$ is the algebra
generated by these diagrams over $\mathbbm{k}[h,\mathbf{z}]$ modulo the local
relations (\ref{dots-1}--\ref{eq:KLRtriple-point2},\ref{cost}--\ref{red-triple}) and \begin{equation}\label{KLR-dumb-red-triple}
\begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw[dashed](.5,-1) to[out=90,in=-30] (-.5,1);
\draw(-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw[dashed](.5,-1) to[out=150,in=-90] (-.5,1);
\draw(-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}\qquad \qquad \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8,]
\draw [wei] (0,-1) -- (0,1);
\draw(.5,-1) to[out=90,in=-30] (-.5,1);
\draw[dashed](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,baseline=-2pt,yscale=.8]
\draw [wei] (0,-1) -- (0,1);
\draw(.5,-1) to[out=150,in=-90] (-.5,1);
\draw[dashed](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}\qquad \qquad \begin{tikzpicture}[very thick,yscale=.8,baseline=-2pt]
\draw [wei] (0,-1) -- (0,1);
\draw[dashed](.5,-1) to[out=90,in=-30] (-.5,1);
\draw[dashed](-.5,-1) to[out=30,in=-90] (.5,1);
\end{tikzpicture}= \begin{tikzpicture}[very thick,yscale=.8,baseline=-2pt]
\draw [wei] (0,-1) -- (0,1);
\draw[dashed](.5,-1) to[out=150,in=-90] (-.5,1);
\draw[dashed](-.5,-1) to[out=90,in=-150] (.5,1);
\end{tikzpicture}.
\end{equation} This is a
weighted KLR algebra for the Crawley-Boevey graph of $U$ for the
highest weight $\leftarrow$.
The steadied quotient of $ \dalg^\vartheta$ is the quotient of
$\tilde{\dalg}^\vartheta$ by the 2-sided ideal generated by all
unsteady idempotents.
\end{definition}
As with the other algebras we've introduced, the algebra
$\tilde{\dalg}^\vartheta$ has a natural polynomial representation,
defined in \cite[\ref{w-prop:action}]{WebwKLR}. Now, let $\mathbf{u}$ be a
loading on a set $D\in \mathscr{D}$, that is, a map $D\to U$. Let
$u_1,\dots,u_m$ be the values of $\mathbf{u}$ read from left to right.
Attached to such data, we have an idempotent $e_{\mathbf{u}}$ in $
\tilde{\dalg}^\vartheta_{\mathscr{D}}$ and another $\epsilon_{\mathbf{u}}$
in
$\widetilde{\EuScript{{WF}}}^\vartheta_{\mathscr{D}}(\mathsf{q},\PQ_{\bullet})
$ given by projection to the stable kernel of $X_r-u_r$ for all $r$.
\begin{theorem}\label{thm:Hecke-KLR}
We have isomorphisms of $\mathbbm{k}[h,\mathbf{z}]$-algebras
\[\EuScript{{WF}}^\vartheta_{\mathscr{D}}(\mathsf{q},\PQ_{\bullet})\cong
\dalg^\vartheta_{\mathscr{D}}\qquad \widetilde{\EuScript{{WF}}}^\vartheta_{\mathscr{D}}(\mathsf{q},\PQ_{\bullet})\cong
\tilde{\dalg}^\vartheta_{\mathscr{D}}\] which send
\[\epsilon_{\mathbf{u}}\mapsto e_{\mathbf{u}}\qquad\qquad X_r\mapsto\sum_{\mathbf{u}}u_rb(y_r)e_{\mathbf{u}}\qquad \qquad \tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw [wei]
(.2,-.1) -- (-.2,.3);} \mapsto\tikz[baseline,very
thick,scale=1.5]{\draw (.2,.3) -- (-.2,-.1); \draw [wei] (.2,-.1) --
(-.2,.3);}\]
\vspace{-3mm}
\[\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw [wei] (-.2,.3) --
(.2,-.1); \draw
(-.2,-.1) -- (.2,.3);} \epsilon_{\mathbf{u}} \mapsto
\frac{\displaystyle\prod_{\vartheta_s=o_k}(u_rb({y_r-z_s})-\PQ_s)}{\displaystyle\prod_{s\in
S_{u_r,j}} (y_r-z_s)}\, \tikz[baseline,very thick,scale=1.5]{\draw [wei] (-.2,.3) --
(.2,-.1); \draw
(-.2,-.1) -- (.2,.3);} e_{\mathbf{u}}
\]
\[\subeqn\label{crossing-match2}
\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw
(.2,-.1) -- (-.2,.3);} \epsilon_{\mathbf{u}}\mapsto
\begin{cases}
\displaystyle\frac{1}{u_{r+1}b(y_{r+1})-u_rb(y_{r})}(\psi_r-1) e_{\mathbf{u}} & u_r\neq u_{r+1}\\
\displaystyle \frac{y_{r+1}-y_r}{u_{r+1}(b(y_{r+1})-b(y_{r}))}\psi_r e_{\mathbf{u}}& u_r=u_{r+1}
\end{cases}
\]
\[\subeqn\label{ghost-match2}
\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}\epsilon_{\mathbf{u}} \mapsto
\begin{cases}
\displaystyle u_rb(y_r)-\mathsf{q} u_sb(y_s)\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);} e_{\mathbf{u}}&
u_r\neq qu_s\\
\displaystyle \frac {u_rb(y_r)-\mathsf{q} u_sb(y_s)}{y_{s}-y_{r}}\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}e_{\mathbf{u}}& u_r=qu_s, d(h)=1\\
\displaystyle \frac {u_rb(y_r)-\mathsf{q} u_sb(y_s)}{y_{s}-y_{r}+h}\tikz[baseline,very thick,scale=1.5]{\draw[densely dashed]
(-.2,-.1)-- (.2,.3); \draw
(.2,-.1) -- (-.2,.3);}e_{\mathbf{u}}& u_r=qu_s,d(h)=e^h
\end{cases}
\qquad \qquad\tikz[baseline,very thick,scale=1.5, green!50!black]{\draw (.2,.3) --
(-.2,-.1); \draw [densely dashed]
(.2,-.1) -- (-.2,.3);} \mapsto \tikz[baseline,very thick,scale=1.5]{\draw (.2,.3) --
(-.2,-.1); \draw [densely dashed]
(.2,-.1) -- (-.2,.3);} \]
where the solid strand shown is the $r$th (and $r+1$st in the first line),
and the ghost is associated to the $s$th from the left.
\end{theorem}
\begin{proof}
That this map sends unsteady idempotents to unsteady idempotents is
clear, so we need only show that we have an isomorphism $\widetilde{\EuScript{{WF}}}^\vartheta_{\mathscr{D}}(\mathsf{q},\PQ_{\bullet})\cong
\tilde{\dalg}^\vartheta_{\mathscr{D}}$. As usual, we check this by
comparing polynomial representations. The comparison for
diagrams involving no red strands is covered by the isomorphism of Theorem
\ref{W-isomorphism} and for crossings with red strands is checked in Theorem \ref{F-isomorphism}.
\end{proof}
Just as in Section \ref{sec:weight-gener}, this isomorphism does not
immediately grade the cyclotomic $q$-Schur algebra, since the
idempotent from Proposition \ref{cqs-morita} does not have homogeneous
image. One can, however,
define a homogeneous idempotent $e''$ with isomorphic image. As
before, $e''$ will be a sum over $\ell$-ordered lists of multi-subsets
of $U$
whose size gives a multi-composition in $\Lambda$. Each of these
contributes the idempotent where the points connected to the part
$\mu_i^{(s)}$ are labeled with the multi-subset, in increasing order,
with a primitive idempotent in the nilHecke algebra acting on the
groups with the same label.
Note that in the level-one case, a graded version of the $q$-Schur
algebra was defined in \cite{Arikiq}. This grading was uniquely
determined by its compatibility with the Brundan-Kleshchev grading on
the Hecke algebra, so our algebra must match up to graded Morita
equivalence with that of \cite[3.17]{Arikiq}. \excise{
\begin{lemma}\label{lem:cell-hw}
The category $A\operatorname{-mod}$ is highest weight with standard modules given
by the cell modules if for every $\xi\in \mathcal{P}$, there is
some $\sS,\mathsf{T}$ with $a_{\sS,\mathsf{T}}^\xi $ a unit.
\end{lemma}
Various versions of this theorem are standard in the literature on
cellular algebras, but only in the case where the base ring is a field.
\todo{proper citation}
\begin{proof}
Let us prove this statement by induction on the size of $\mathcal{P}$. Note
that if $\xi$ is maximal in $\mathcal{P}$, then the vectors $C_{\sS,\mathsf{T}}^\xi$
span an ideal $J$ in $A$, such that the quotient is a cellular algebra
for the data $(\mathcal{P}\setminus \{\xi\},M,C,*)$, where the later three
symbols denote the induced structure of $A/J$. Furthermore, all the
cell modules for cells in $\mathcal{P}\setminus \{\xi\}$ are pullbacks of cell
modules under this map (since $C_{\sS,\mathsf{T}}^\xi$ acts trivially on
them by definition).
Now, we need to check the conditions of \cite[4.11]{RouqSchur}:
\begin{enumerate}
\item The cell modules are free over $S$ by definition.
 \item We wish to show that $\operatorname{End}(\Delta_\xi)\cong S$ for every cell
module $\Delta_\xi$. By induction, we need only show this for
$\xi$ maximal. In this case, we have assumed there is some
$\sS,\mathsf{T}$ such that $e=(a_{\sS,\mathsf{T}}^\xi )^{-1}C_{\sS,\mathsf{T}}^\xi$ is
an idempotent, such that $\Delta_\xi\cong Ae$ and
\[\operatorname{End}(\Delta_\xi)\cong eAe=S\cdot
\{C_{\sS,\mathsf{T}}^\xi\}\cong S.\]
 \item We wish to show that if $\operatorname{Hom}(\Delta_{\xi},\Delta_\nu)\neq 0$
    then $\nu\leq \xi$. The image of
$C_{\sS,\mathsf{T}}^\xi$ with $a_{\sS,\mathsf{T}}^\xi $ a unit must generate
$\Delta_\xi$. Thus, we have that $C_{\sS,\mathsf{T}}^\xi$ acts nontrivially
on $\Delta_\nu$. But this is only possible if $\nu\leq \xi$.
 \item We wish to show that any module $M$ satisfying
    $\operatorname{Hom}(\Delta_\xi,M)=0$ for all $\xi$ is 0. If $\xi$ is a maximal
cell, then we have an attached idempotent, and either $eM\neq 0$ or
$eM=0$. In the former case, we thus have induced maps
$\Delta_\xi=Ae\to M$ which are non-zero. In the latter case, we
have that the ideal $J=AeA$ kills $M$, so $M$ factors through the
    smaller cellular algebra $A/J$. By induction, we can assume that
$M$ has a non-zero map from a cell module over this quotient, or
$M=0$.
\item Finally, we wish to show that $\Delta_\xi$ receives a map from
a projective with cell filtered kernel. Actually, we'll show that
every cell module receives such a map from the algebra $A$ itself. Of course, if $\xi$ is
maximal, then $\Delta_\xi$ is itself projective. Furthermore, note
that the map $A\to \Delta_\xi$ induced by multiplication by $e$ is
surjective, and its kernel is $A(1-e)$; this projective maps
surjectively to $A/J$, which is cell filtered, and the kernel is
isomorphic to several copies of $Ae$ itself.
By induction, every other standard module receives a surjective map from
$A/J$ with cell filtered kernel. Pulling this map back along $A\to A/J$ yields a map that is again
surjective, and the kernel is
filtered by the same standard pieces as before, and then by $J\cong
\Delta_\xi^{\operatorname{rank}(\Delta_\xi)}$. This completes the proof.
\end{enumerate}
This shows that indeed we have a highest weight category.
\end{proof}}
| {
"redpajama_set_name": "RedPajamaArXiv"
} | 3,772 |
Q: Why does PHP need so much memory for multiple inserts into a MySQL DB? I am trying to insert data from a 15 MB CSV file (over 200,000 rows) into MySQL.
I'm using CodeIgniter and inserting the data row by row.
I have set memory_limit = 400M, yet I can insert only about 100,000 rows before receiving the following error:
Fatal error: Allowed memory size of xxxx bytes exhausted
I wonder how this works, and what exactly allocates over 400 MB of memory when the file is only 15 MB and I insert the data row by row, always overwriting the $data array with the new record?
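A: A likely culprit in CodeIgniter is that the database class keeps an in-memory log of every executed query by default, so 100,000+ inserts accumulate even though your own $data array is overwritten; setting $this->db->save_queries = FALSE; (and committing in batches rather than per row) usually fixes it. The general constant-memory pattern — stream the file and never accumulate per-row state — is sketched below in Python using only the stdlib csv and sqlite3 modules; the table, column, and file names are hypothetical placeholders for your schema:

```python
import csv
import sqlite3

def load_csv(conn, path, batch_size=1000):
    """Stream a CSV into the database with bounded memory: only one row
    is in flight at a time, and the transaction is committed in batches
    so no growing per-row state is retained anywhere."""
    cur = conn.cursor()
    inserted = 0
    with open(path, newline="") as f:
        for row in csv.reader(f):
            # Hypothetical table/columns; adjust to your schema.
            cur.execute("INSERT INTO items (a, b) VALUES (?, ?)", row[:2])
            inserted += 1
            if inserted % batch_size == 0:
                conn.commit()  # flush the transaction periodically
    conn.commit()
    return inserted
```

The same idea carries over to PHP: read with fgetcsv() inside a loop, disable query logging, and commit in batches instead of once per row.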
| {
"redpajama_set_name": "RedPajamaStackExchange"
} | 9,788 |
{"url":"https:\/\/www.witch.be\/2005\/04\/21\/34819\/","text":"Cthulhu was deliciously absurd tonight\u2026 Did you know that, according to one of the players, Jezus drank coffee ? When we stated that coffee was originally from America and only brought to Europe some time after 1492, he remarked that in earlier times the world was one continent. I was scribbling when that remark was made, and misunderstood what was said, the sentence thus altered: \u201cYes, but everything used to be incontinent.\u201d\nNeedless to say we had quite some laughs this evening.\n\nOh, and my motorbike rocks !","date":"2021-10-22 16:25:35","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8305584192276001, \"perplexity\": 3824.323850192687}, \"config\": {\"markdown_headings\": true, \"markdown_code\": false, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-43\/segments\/1634323585516.51\/warc\/CC-MAIN-20211022145907-20211022175907-00099.warc.gz\"}"} | null | null |
import { moveDownIcon, moveUpIcon } from 'app/shell/icons';
import AppIcon from 'app/shell/icons/AppIcon';
import clsx from 'clsx';
import { useSelect } from 'downshift';
import React, { CSSProperties, ReactNode, useEffect, useRef, useState } from 'react';
import styles from './Select.m.scss';
import { usePopper } from './usePopper';
export interface Option<T> {
key: string;
content: ReactNode;
disabled?: boolean;
value?: T;
}
interface Props<T> {
className?: string;
/** Hide the selected option from the dropdown */
hideSelected?: boolean;
disabled?: boolean;
/** Sets the max width for the button. */
maxButtonWidth?: number;
/**
* Sets the max width for the dropdown.
*
* If 'button' is used the two things can happen:
* 1. If maxButtonWidth is set it will use that as the max width.
* 2. If maxButtonWidth is undefined it will calculate the width
* of the button dynamically and use that to set the max width.
*/
maxDropdownWidth?: number | 'button';
value?: T;
options: Option<T>[];
/** Optional override for the button content */
children?: ReactNode;
onChange(value?: T): void;
}
/**
* A Select menu, which maintains a current value and a dropdown to choose
* another value. A replacement for HTML's <select> element. This is a
* controlled component.
*
* @see Dropdown for a menu of commands
* @see MultiSelect for multiple-item selector
*/
export default function Select<T>({
className,
disabled,
maxButtonWidth,
maxDropdownWidth,
options: items,
onChange,
value,
hideSelected,
children,
}: Props<T>) {
const {
isOpen,
getToggleButtonProps,
getMenuProps,
highlightedIndex,
getItemProps,
selectedItem,
} = useSelect({
items,
selectedItem: items.find((o) => o.value === value),
itemToString: (i) => i?.key || 'none',
onSelectedItemChange: ({ selectedItem }) => onChange(selectedItem?.value),
});
const buttonRef = useRef<HTMLButtonElement>(null);
const menuRef = useRef<HTMLElement>(null);
const [dropdownWidth, setDropdownWidth] = useState<number | undefined>(() =>
typeof maxDropdownWidth === 'number' ? maxDropdownWidth : undefined
);
usePopper({
contents: menuRef,
reference: buttonRef,
placement: 'bottom-start',
offset: 2,
});
  useEffect(() => {
    if (maxDropdownWidth === 'button' && dropdownWidth === undefined && buttonRef.current) {
      // Minus 2 because the menu has a thicker outline than the button border (2px vs 1px)
      const width =
        maxButtonWidth !== undefined
          ? maxButtonWidth
          : buttonRef.current.getBoundingClientRect().width - 2;
      setDropdownWidth(width);
    }
  }, [dropdownWidth, maxButtonWidth, maxDropdownWidth]);

  // Throw only after every hook has run, so the hook call order stays
  // stable across renders (Rules of Hooks).
  if (!selectedItem) {
    throw new Error('value must correspond to one of the provided options');
  }
let buttonStyle: CSSProperties | undefined;
let dropdownStyle: CSSProperties | undefined;
if (maxButtonWidth !== undefined) {
buttonStyle = {
maxWidth: maxButtonWidth,
};
}
if (dropdownWidth !== undefined) {
dropdownStyle = {
maxWidth: dropdownWidth,
};
}
return (
<div className={className}>
<button
type="button"
style={buttonStyle}
className={children ? undefined : styles.button}
{...getToggleButtonProps({
ref: buttonRef,
disabled,
})}
>
{children ?? (
<>
{selectedItem.content}{' '}
<AppIcon icon={isOpen ? moveUpIcon : moveDownIcon} className={styles.arrow} />
</>
)}
</button>
<div
{...getMenuProps({ ref: menuRef })}
className={clsx(styles.menu, { [styles.open]: isOpen })}
>
<div style={dropdownStyle}>
{isOpen &&
items.map(
(item, index) =>
!(hideSelected && item.value === value) && (
<div
className={clsx(styles.menuItem, {
[styles.highlighted]: highlightedIndex === index,
[styles.disabled]: item.disabled,
})}
key={item.key}
{...getItemProps({
item,
index,
disabled: item.disabled,
})}
>
{item.content}
</div>
)
)}
</div>
</div>
</div>
);
}
| {
"redpajama_set_name": "RedPajamaGithub"
} | 291 |
@interface WGHomeSliderUserHeaderView ()
{
JHImageView *_bgImageView;
JHImageView *_imageView;
JHLabel *_nameLabel;
JHImageView *_accessoryView;
JHButton *_scanButton;
JHButton *_messageCenterButton;
}
@end
@implementation WGHomeSliderUserHeaderView
- (void)loadSubviews {
[super loadSubviews];
_bgImageView = [[JHImageView alloc] initWithFrame:CGRectMake(0, 0, kWGSideBarWidth, [WGHomeSliderUserHeaderView height])];
_bgImageView.image = [UIImage imageNamed:@"home_slider_header"];
[self addSubview:_bgImageView];
float width = kAppAdaptWidth(140);
NSArray *imageArray = @[@"home_slider_scan", @"home_slider_messageCenter"];
NSArray *titleArray = @[kStr(@"Slider_Mine_Scan"), kStr(@"Slider_Mine_MessageCenter")];
for (int num = 0; num < titleArray.count; ++num) {
JHButton *item = [[JHButton alloc] initWithFrame:CGRectMake(num * width, 0, width, kAppAdaptHeight(48))];
[item setImage:[UIImage imageNamed:imageArray[num]] forState:UIControlStateNormal];
[item setTitle:[NSString stringWithFormat:@" %@", titleArray[num]] forState:UIControlStateNormal];
[item setTitleColor:kWhiteColor forState:UIControlStateNormal];
item.titleLabel.font = kAppAdaptFont(14);
[item addTarget:self action:@selector(touchItemButton:) forControlEvents:UIControlEventTouchUpInside];
item.tag = num;
[self addSubview:item];
}
float y = kAppAdaptHeight(48 + 18);
_imageView = [[JHImageView alloc] initWithFrame:CGRectMake((kWGSideBarWidth - kAppAdaptWidth(80)) / 2, y, kAppAdaptWidth(80), kAppAdaptWidth(80))];
// _imageView.layer.cornerRadius = kAppAdaptWidth(24);
// _imageView.layer.masksToBounds = YES;
// _imageView.layer.borderColor = kWhiteColor.CGColor;
// _imageView.layer.borderWidth = kAppAdaptHeight(3);
_imageView.userInteractionEnabled = YES;
_imageView.contentMode = UIViewContentModeScaleAspectFill;
[self addSubview:_imageView];
[_imageView addSingleTapGestureRecognizerWithTarget:self action:@selector(handleLogin:)];
_nameLabel = [[JHLabel alloc] initWithFrame:CGRectMake(0, _imageView.maxY + kAppAdaptHeight(5), kWGSideBarWidth, kAppAdaptHeight(20))];
_nameLabel.font = kAppAdaptFont(14);
_nameLabel.textColor = kWhiteColor;
_nameLabel.userInteractionEnabled = YES;
_nameLabel.textAlignment = NSTextAlignmentCenter;
[self addSubview:_nameLabel];
[_nameLabel addSingleTapGestureRecognizerWithTarget:self action:@selector(handleLogin:)];
_accessoryView = [[JHImageView alloc] initWithFrame:CGRectMake(_nameLabel.maxX + kAppAdaptWidth(10), _imageView.y + kAppAdaptHeight(18), kAppAdaptWidth(8), kAppAdaptHeight(14))];
_accessoryView.image = [UIImage imageNamed:@"home_slider_arr"];
[self addSubview:_accessoryView];
}
- (void)handleLogin:(UIGestureRecognizer *)recognizer {
if (self.onLogin) {
self.onLogin();
}
}
- (void)touchItemButton:(JHButton *)sender {
if (sender.tag == 0) {
if (self.onScan) {
self.onScan();
}
}
else {
if (self.onMessageCenter) {
self.onMessageCenter();
}
}
}
- (void)showWithData:(JHObject *)data {
_imageView.image = [UIImage imageNamed:[WGApplication sharedApplication].userAvatar];
if ([WGApplication sharedApplication].isLogined) {
_nameLabel.text = [WGApplication sharedApplication].userName;
_accessoryView.hidden = NO;
float y = kAppAdaptHeight(48);
_imageView.frame = CGRectMake(kAppAdaptWidth(16), y, kAppAdaptWidth(48), kAppAdaptWidth(48));
_nameLabel.frame = CGRectMake(_imageView.maxX + kAppAdaptWidth(10), _imageView.y, kAppAdaptWidth(170), _imageView.height);
_nameLabel.textAlignment = NSTextAlignmentLeft;
_accessoryView.frame = CGRectMake(_nameLabel.maxX + kAppAdaptWidth(10), _imageView.y + kAppAdaptHeight(18), kAppAdaptWidth(8), kAppAdaptHeight(14));
}
else {
_accessoryView.hidden = YES;
_nameLabel.text = kStr(@"Slider_Mine_UnRegister");
float y = kAppAdaptHeight(48 + 18);
_imageView.frame = CGRectMake((kWGSideBarWidth - kAppAdaptWidth(80)) / 2, y, kAppAdaptWidth(80), kAppAdaptWidth(80));
_nameLabel.frame = CGRectMake(0, _imageView.maxY + kAppAdaptHeight(5), kWGSideBarWidth, kAppAdaptHeight(20));
_nameLabel.textAlignment = NSTextAlignmentCenter;
}
_bgImageView.height = [WGHomeSliderUserHeaderView height];
}
+ (CGFloat)height {
if ([WGApplication sharedApplication].isLogined) {
return kAppAdaptHeight(112);
}
else {
return kAppAdaptHeight(200);
}
}
@end
| {
"redpajama_set_name": "RedPajamaGithub"
} | 2,039 |
{"url":"https:\/\/handwiki.org\/wiki\/Ellipsoid_packing","text":"# Ellipsoid packing\n\nIn geometry, ellipsoid packing is the problem of arranging identical ellipsoid throughout three-dimensional space to fill the maximum possible fraction of space. The currently densest known packing structure for ellipsoid has two candidates, a simple monoclinic crystal with two ellipsoids of different orientations[1] and a square-triangle crystal containing 24 ellipsoids[2] in the fundamental cell. The former monoclinic structure can reach a maximum packing fraction around $\\displaystyle{ 0.77073 }$ for ellipsoids with maximal aspect ratios larger than $\\displaystyle{ \\sqrt{3} }$. The packing fraction of the square-triangle crystal exceeds that of the monoclinic crystal for specific biaxial ellipsoids, like ellipsoids with ratios of the axes $\\displaystyle{ \\alpha:\\sqrt{\\alpha}:1 }$ and $\\displaystyle{ \\alpha \\in (1.365,1.5625) }$. Any ellipsoids with aspect ratios larger than one can pack denser than spheres.","date":"2022-11-30 10:03:35","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.34801650047302246, \"perplexity\": 1708.6597230889513}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": 
\"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2022-49\/segments\/1669446710734.75\/warc\/CC-MAIN-20221130092453-20221130122453-00120.warc.gz\"}"} | null | null |
\section{Introduction}
Fractional Schr\"odinger equations are derived from the path integral over L\'{e}vy trajectories. It can be applied, for example, to describe the orbits radius for hydrogen-like atoms. (For more details of physical background, see, for example, \cite{La:FSE} and the references therein.)
We study the fractional nonlinear Schr\"odinger equation of form
\begin{equation}\label{e:evolutionEqn}
i\varepsilon \frac{\partial\psi}{\partial t}=(-\varepsilon^2\Delta)^s\psi+V(x)\psi-|\psi|^{p-1}\psi \quad\mbox{in }\mathbf R^n,
\end{equation}
where $\varepsilon$ is a small positive constant which is corresponding to the Planck constant, $(-\Delta)^s$, $0<s<1$, is the fractional Laplacian, $V(x)$ is a potential function, and $p>1$.
We shall look for the so-called standing wave solutions which are of form
\begin{equation*}\label{e:standingwave}
\psi(x,t)=e^{(i/\varepsilon)Et}v(x),
\end{equation*}
where $v$ is a real-valued function depending only on $x$ and $E$ is some constant in $\mathbf R$. The function $\psi$ solves (\ref{e:evolutionEqn}) provided the standing wave $v(x)$ satisfies
\begin{equation}\label{e:stationally}
(-\varepsilon^2\Delta)^sv+(V(x)+E)v-|v|^{p-1}v=0\quad\mbox{in } \mathbf R^n.
\end{equation}
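Indeed, inserting the ansatz into (\ref{e:evolutionEqn}) and using
\begin{equation*}
i\varepsilon \frac{\partial}{\partial t}\left(e^{(i/\varepsilon)Et}v(x)\right)=-E\,e^{(i/\varepsilon)Et}v(x),
\end{equation*}
we may cancel the factor $e^{(i/\varepsilon)Et}$ and arrive at $-Ev=(-\varepsilon^2\Delta)^sv+V(x)v-|v|^{p-1}v$, which is exactly (\ref{e:stationally}).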
In what follows, we assume that $E=1$ and $p$ is subcritical. That is, we will study the following equation:
\begin{equation}\label{e:main-equation}
\varepsilon^{2s}(-\Delta)^{s}u+u+V(x)u=|u|^{p-1}u,\quad u\in H^s(\mathbf R^n),
\end{equation}
where $0<s<1$, and $1<p<\frac{n+2s}{n-2s}$ for $n>2s$, and, $1<p<\infty$ for $n\le2s$.
In quantum mechanics, when $\varepsilon$ tends to zero, the existence and multiplicity of solutions to (\ref{e:main-equation}) is of importance. We will find multiple solutions $u_{\varepsilon}$ of (\ref{e:main-equation}) that concentrate near some point $x_0\in \mathbf R^n$ as $\varepsilon\to 0$. By this we mean that, for all $x\in \mathbf R^n\setminus \{x_0\}$, $u_{\varepsilon}(x)\to 0$ as $\varepsilon\to 0$. Such kind of solutions are so-called semiclassical standing waves or spike pattern solutions.
When $s=1$, Equation (\ref{e:main-equation}) is a classical nonlinear Schr\"odinger equation and the existence of semiclassical standing wave solutions was established by Floer and Weinstein \cite{FW:NWPCS}, and then Oh \cite{Oh:CMP89, Oh:CMP90}. There is a large amount of research on this subject in the past two decades. We refer, for example, to the (far from complete) list of papers \cite{ABC:ARMA97, AMS:MRNSE, AMN:SPEE, BL:MPEPS, BL:EPSSE, DF:LMP, BW:SWCF, G:NSPS, Li:SPEE, DW:CCNSE, CL:TMNA97, DF:MA02, G:CPDE96, KW:ADE00, R:ZAMP92, W:CMP93, MMM:SSE} and the references therein.
When $s\in (0,1)$, the existence of semiclassical solutions to Equation (\ref{e:main-equation}) was obtained by D\'{a}vila, del Pino and Wei \cite{DDW:CSWFSE}, and Chen and Zheng \cite{CZ:CPFSE}. Precisely, by a Lyapunov-Schmidt reduction, \cite{DDW:CSWFSE} proved that if $V$ is a sufficiently smooth positive function with non-degenerate critical points $\xi_1,\xi_2,\cdots,\xi_k$ and satisfies some degree conditions around these points, then there exists a solution of (\ref{e:main-equation}) concentrating at these $k$ critical points. (See \cite{CZ:CPFSE} for the case $k=1$ with more technical conditions.) Further, in \cite{FaMaVa14:GSCP}, Fall, Mahmoudi and Valdinoci proved that if there exist semiclassical solutions to (\ref{e:main-equation}) as $\varepsilon\to 0$, then the concentration points must be critical points of $V$.
Moreover, we should mention that the concentration phenomena for fractional Schr\"odinger equations on bounded domain with Dirichlet condition were investigated by D\'{a}vila, del Pino, Dipierro and Valdinoci \cite{DaDPDiVa14}.
In this paper, we mainly investigate existence and multiplicity of semiclassical standing wave solutions to Equation (\ref{e:main-equation}) when $V$ has non-isolated critical points. More precisely, we have the following theorem.
\begin{theorem}\label{t:main}
Let $0<s<1$, $n> 4-4s$. Suppose that $V$ is a non-negative function in $C^3_b(\mathbf R^n)$ with a non-degenerate smooth compact critical manifold $M$. Then for $\varepsilon>0$ small, Equation (\ref{e:main-equation}) has at least $l(M)$ solutions concentrating near points of $M$.
\end{theorem}
Here $l(M)$ denotes the cup length of $M$ (see Section \ref{ss:abstract} below) and $$C^3_b(\mathbf R^n)=\{v\in C^3(\mathbf R^n) \mid \partial^{J}v \mbox{ is bounded on }\mathbf R^n \mbox{ for all } |J|\le 3\}.$$
The non-degeneracy of a critical manifold is in the sense of Bott \cite{Bo:AM57}. Precisely, we say that a critical manifold $M$ of $V$ is non-degenerate if, for every $x\in M$, the kernel of $D^2V(x)$ equals $T_x M$.
\begin{remark}
When $s=1$, the result of this theorem was obtained by Ambrosetti, Malchiodi and Secchi \cite{AMS:MRNSE}.
\end{remark}
\begin{remark}
Since the unique positive solution (up to translation) to the standard equation decays as $1/(1+|x|^{n+2s})$ (see for example \cite{FS:URSFL, FQT:PSNFS} or Theorem \ref{t:unda} below), we should technically assume that $0<s<1$ and $n> 4-4s$ to make some necessary integrals convergent (see the proof of Lemma \ref{l:vxz} below). Based on our observation, this assumption is essential since the decay estimate of the unique standard solution is optimal. We should also note that when $s\to 1$, there is no restriction on the dimension $n$. This is the same as the classical case $s=1$.
\end{remark}
\begin{remark}
Note that the assumption $V\ge 0$ on $\mathbf R^n$ is not essential. In fact, a similar argument as in Section \ref{sb:cuf} below implies that the condition $\inf (1+V)>0$ is sufficient. Without loss of generality, in what follows we assume that $V(0)=0$ for simplicity.
\end{remark}
Our proof relies on a singular perturbation argument as in \cite{AMS:MRNSE}. More precisely, by the change of variable $x\to \varepsilon x$, Equation (\ref{e:main-equation}) becomes
\begin{equation}\label{e:change}
(-\Delta)^{s}u+u+V(\varepsilon x)u=|u|^{p-1}u.
\end{equation}
Solutions of (\ref{e:change}) are the critical points $u\in H^s(\mathbf R^n)$ of the functional
\begin{equation}
f_{\varepsilon}(u)=f_0(u)+\frac{1}{2}\int_{\mathbf R^n}V(\varepsilon x)u^2dx,
\end{equation}
where
\begin{equation}\label{e:f0u}
f_0(u)=\frac{1}{2}\|u\|_s^2-\frac{1}{p+1}\int_{\mathbf R^n}|u|^{p+1}dx.
\end{equation}
Here $\|\cdot\|_s$ denotes the norm in $H^s(\mathbf R^n)$.
We should note that $f_{\varepsilon }\in C^2(H^s(\mathbf R^n))$.
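Explicitly, for $u,\varphi\in H^s(\mathbf R^n)$,
\begin{equation*}
Df_{\varepsilon}(u)[\varphi]=\langle u,\varphi\rangle_s+\int_{\mathbf R^n}V(\varepsilon x)u\varphi\,dx-\int_{\mathbf R^n}|u|^{p-1}u\varphi\,dx,
\end{equation*}
so $Df_{\varepsilon}(u)=0$ if and only if $u$ is a weak solution of (\ref{e:change}).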
We will find the solutions of (\ref{e:change}) near the solutions of
\begin{equation}\label{e:ss}
(-\Delta)^{s}u+u+V(\varepsilon \xi)u=|u|^{p-1}u,
\end{equation}
for some $\xi\in \mathbf R^n$ to be fixed. The solutions of (\ref{e:ss}) are critical points of the following functional
\begin{equation}\label{f:xi}
F_{\varepsilon, \xi}(u)=f_0(u)+\frac{1}{2}V(\varepsilon \xi)\int_{\mathbf R^n}u^2dx.
\end{equation}
Since (\ref{f:xi}) has a term of $V$, $F_{\varepsilon, \xi}$ inherits the topological features of the critical manifold $M$ of $V$. Therefore, if we consider $f_{\varepsilon}$ as a perturbation of $F_{\varepsilon,\xi}$, multiple solutions to (\ref{e:change}) will be found by a multiplicity theorem from \cite{Ch:IDMT} (see Theorem \ref{t:abstract} below).
Nevertheless, a direct application of the arguments in \cite{AMS:MRNSE} to our problem is impossible. There are two reasons which make our proof much more complicated. Firstly, unlike the Laplacian $-\Delta$, the fractional Laplacian $(-\Delta)^s$, $0<s<1$, is nonlocal. For this reason, when $0<s<1$, the classical local techniques as in $s=1$ case (see \cite{AMS:MRNSE}) can not be used any more. For instance, instead of using the classical method in \cite{AMS:MRNSE} which depends on the locality of $-\Delta$ essentially, we employ a functional analysis approach to prove the invertibility of $D^2f_{\varepsilon}$ (see Section \ref{s:inertibility} below). Secondly, the standard solution $U$ to unperturbed fractional Schr\"odinger equation ($V\equiv 0$ in Equation (\ref{e:main-equation})) decays only as $1/(1+|x|^{n+2s})$ (see Section \ref{ss:se} below), especially it does not decay exponentially as in $s=1$ case. Therefore, to ensure the necessary functions in certain Sobolev spaces on $\mathbf R^n$ and to recover the estimates for Lyapunov-Schmidt reduction, we need more detailed and involved analysis than the classical case (see Section \ref{s:estimates}, \ref{s:inertibility} and \ref{s:reduction} below).
Our paper is organized as follows. In Section \ref{s:preliminaries}, we recall the notation for fractional Sobolev spaces and some basic properties of the standard equation, obtained in \cite{FL:UNGS, FS:URSFL, FQT:PSNFS}. Moreover, we formulate the functional corresponding to Equation (\ref{e:main-equation}) and construct the critical manifold of the functional (\ref{f:xi}). In Section \ref{s:estimates}, some useful estimates are established for further reference. In Section \ref{s:inertibility}, we prove the invertibility of the linearized operator at points on the critical manifold of $F_{\varepsilon,\xi}$. In Section \ref{s:reduction}, we apply the Lyapunov-Schmidt reduction method to our functional. In Section \ref{s:proof}, we complete the proof of Theorem \ref{t:main}.
\section{Invertibility}\label{s:inertibility}
In this section, we will discuss the invertibility of $D^2f_{\varepsilon}(z_{\xi})$ on $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$.
Here $T_{z_{\xi}}Z^{\varepsilon}$ is the tangent space to $Z^{\varepsilon}$ at $z_{\xi}$, and $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$ is the orthogonal complement of $T_{z_{\xi}}Z^{\varepsilon}$ in $H^s(\mathbf R^n)$.
Let
\begin{equation*}
\mathcal L_{\varepsilon,\xi}:(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}\to (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}
\end{equation*}
be the tangent operator of $Df_{\varepsilon}$ restricted on $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$, that is, on $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$,
\begin{equation*}
\langle \mathcal L_{\varepsilon,\xi}v,w\rangle_s=D^2f_{\varepsilon}(z_{\xi})[v,w].
\end{equation*}
The main aim of this section is to prove the following result which implies that $\mathcal L_{\varepsilon,\xi}$ is invertible on $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$.
\begin{prop}\label{p:invertible}
Given $\bar\rho>0$, there exists $\bar\varepsilon>0$ such that, for all $|\xi|\le\bar\rho$ and $0<\varepsilon< \bar\varepsilon$, it holds that
\begin{equation*}
|\langle \mathcal L_{\varepsilon,\xi}v,v\rangle_s|\ge C\|v\|_s^2, \quad \forall v\in (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s},
\end{equation*}
where $C>0$ is a constant only depending on $\bar\rho$ and $\bar\varepsilon$.
\end{prop}
Note that
\begin{equation*}
T_{z_{\xi}}Z^{\varepsilon}={\rm span}\{\partial_{\xi_1}z_{\xi},\cdots,\partial_{\xi_n}z_{\xi}\}.
\end{equation*}
By Lemma \ref{l:pxe}, we know that $\partial_{\xi_i}z_{\xi}$ is close to $-\partial_{x_i}z_{\xi}$ in $H^s(\mathbf R^n)$ when $\varepsilon\to 0$ and $|\xi|\le\bar\rho$. For convenience, we define
\begin{equation}\label{e:mxv}
K_{\varepsilon,\xi}={\rm span}\{z_{\xi},\partial_{x_1}z_{\xi},\cdots,\partial_{x_n}z_{\xi}\}.
\end{equation}
To prove Proposition \ref{p:invertible}, we need some lemmas.
\begin{lemma}
$z_{\xi}$ is a critical point of $F_{\varepsilon,\xi}$ with Morse index one.
\end{lemma}
\begin{proof}
Since
\begin{equation}\label{e:D2F}
D^2F_{\varepsilon,\xi}(z_{\xi})[z_{\xi},z_{\xi}]=-(p-1)\int_{\mathbf R^n}z_{\xi}^{p+1}dx<0,
\end{equation}
the operator $D^2F_{\varepsilon,\xi}(z_{\xi})$ has at least one negative eigenvalue. For the proof that the Morse index of $z_{\xi}$ is exactly one,
see Section 3 in \cite{FS:URSFL}.
\end{proof}
\begin{lemma}\label{l:lzx}
Let $\bar\rho>0$, there exist $\varepsilon_0>0$ and a constant $C_1>0$ such that, for all $0<\varepsilon<\varepsilon_0$ and all $|\xi|\le\bar\rho$, it holds
\begin{equation*}
\langle \mathcal L_{\varepsilon,\xi}z_{\xi},z_{\xi}\rangle_s\le -C_1<0.
\end{equation*}
\end{lemma}
\begin{proof}
A direct calculation yields
\begin{equation}\label{e:lf}
\langle \mathcal L_{\varepsilon,\xi}z_{\xi},z_{\xi}\rangle_s=D^2F_{\varepsilon,\xi}(z_{\xi})[z_{\xi},z_{\xi}]+\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z_{\xi}^2dx.
\end{equation}
By (\ref{e:D2F}) and (\ref{e:dz}),
\begin{eqnarray*}
D^2F_{\varepsilon,\xi}(z_{\xi})[z_{\xi},z_{\xi}]&=&-(p-1)\int_{\mathbf R^n}z_{\xi}^{p+1}dx\\
&=&-(p-1)\int_{\mathbf R^n}|b(\varepsilon\xi)U(a(\varepsilon\xi )(x-\xi))|^{p+1}dx\\
&=&-(p-1)[b(\varepsilon \xi)]^{p+1}[a(\varepsilon \xi)]^{-n}\int_{\mathbf R^n}U^{p+1}(x)dx.
\end{eqnarray*}
From the definition of $a, b$ (see (\ref{d:a}) (\ref{d:b})) and $V(0)=0$, we have that,
for any fixed $\bar\rho>0$, there exists $\varepsilon_1>0$ small enough such that when $|\xi|\le\bar\rho$ and $0<\varepsilon<\varepsilon_1$, it holds
\begin{equation}\label{e:abr}
a(\varepsilon\xi)\in [\frac{1}{2},2] \quad \mbox{and}\quad b(\varepsilon\xi)\in [\frac{1}{2},2].
\end{equation}
Since $U$ is the unique solution (up to translation),
\begin{equation*}
\int_{\mathbf R^n}U^{p+1}(x)dx
\end{equation*}
is a constant. Therefore there is a positive constant $C_0$ such that
\begin{equation}\label{e:DFC}
D^2F_{\varepsilon,\xi}(z_{\xi})[z_{\xi},z_{\xi}]\le -C_0<0.
\end{equation}
From Lemma \ref{l:vxz}, the second term on the right-hand side of (\ref{e:lf}) satisfies
\begin{eqnarray*}
&&\left|\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z_{\xi}^2dx\right|\\
&\le&\int_{\mathbf R^n}(\varepsilon|\nabla V(\varepsilon\xi)\cdot (x-\xi)|+\varepsilon^2|D^2V(\eta)|\cdot |x-\xi|^2)z_{\xi}^2dx\\
&=&\varepsilon\int_{\mathbf R^n}|\nabla V(\varepsilon\xi)\cdot (x-\xi)|z_{\xi}^2dx+\varepsilon^2\int_{\mathbf R^n}|D^2V(\eta)|\cdot |x-\xi|^2z_{\xi}^2dx.
\end{eqnarray*}
Here $\eta$ is some point in $\mathbf R^n$.
Since $V\in C^3_b(\mathbf R^n)$, we have that
\begin{equation*}
|\nabla V(\varepsilon\xi)\cdot (x-\xi)|\le C|x-\xi|,
\end{equation*}
and
\begin{equation*}
|D^2V(\eta)|\cdot |x-\xi|^2\le C|x-\xi|^2.
\end{equation*}
Then by the definition of $z_{\xi}$,
\begin{eqnarray*}
&&\int_{\mathbf R^n}|\nabla V(\varepsilon\xi)\cdot (x-\xi)|z_{\xi}^2dx\\
&\le&C\int_{\mathbf R^n}|x-\xi||b(\varepsilon\xi)U(a(\varepsilon\xi )(x-\xi))|^2dx\\
&\le& C [b(\varepsilon\xi)]^2[a(\varepsilon\xi )]^{-n-1}\int_{\mathbf R^n}\frac{|x-\xi|}{(1+|x-\xi|)^{2n+4s}}dx.
\end{eqnarray*}
Taking $|\xi|\le\bar\rho$ and $\varepsilon<\varepsilon_1$ as in (\ref{e:abr}), we obtain that there exists a positive constant $C_2$ such that
\begin{equation*}
\int_{\mathbf R^n}|\nabla V(\varepsilon\xi)\cdot (x-\xi)|z_{\xi}^2dx<C_2.
\end{equation*}
A similar argument yields
\begin{equation*}
\int_{\mathbf R^n}|D^2V(\eta)|\cdot |x-\xi|^2z_{\xi}^2dx
\le C_3\int_{\mathbf R^n}\frac{|x-\xi|^2}{(1+|x-\xi|)^{2n+4s}}dx\le C_4.
\end{equation*}
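Both of the last integrals are indeed finite under our standing assumptions: passing to polar coordinates, for $k\in\{1,2\}$,
\begin{equation*}
\int_{\mathbf R^n}\frac{|x-\xi|^k}{(1+|x-\xi|)^{2n+4s}}dx
=c_n\int_0^\infty\frac{r^{n-1+k}}{(1+r)^{2n+4s}}dr<\infty,
\end{equation*}
since the integrand decays like $r^{-(n+4s+1-k)}$ at infinity and $n+4s>4>k$ by the assumption $n>4-4s$.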
Therefore, when $|\xi|\le\bar\rho$ and $\varepsilon<\varepsilon_1$,
\begin{equation}\label{e:vv}
\left|\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z_{\xi}^2dx\right|\le C_2\varepsilon+C_3\varepsilon^2.
\end{equation}
Then there is an $\varepsilon_0<\varepsilon_1$ such that, when $\varepsilon<\varepsilon_0$,
\begin{equation}\label{e:cvv}
C_2\varepsilon+C_3\varepsilon^2<\frac{C_0}{2}.
\end{equation}
From (\ref{e:DFC}), (\ref{e:vv}), (\ref{e:cvv}), we have
\begin{equation*}
\langle \mathcal L_{\varepsilon,\xi}z_{\xi},z_{\xi}\rangle_s\le -\frac{C_0}{2}<0.
\end{equation*}
This completes the proof.
\end{proof}
\begin{lemma}\label{l:d2f0i}
Let $\bar\rho>0$. There exists $\varepsilon_2>0$ small such that, for all $0<\varepsilon<\varepsilon_2$ and $|\xi|\le\bar\rho$, it holds
\begin{equation*}
D^2f_0(z_{\xi})[\phi,\phi]\ge C_2\|\phi\|_s^2,\quad \mbox{ for }\phi\in K_{\varepsilon,\xi}^{\perp_s},
\end{equation*}
where $C_2$ is a positive constant only depending on $\varepsilon_2$ and $\bar\rho$.
\end{lemma}
If this lemma does not hold, then there exist a sequence $(\varepsilon_j,\xi_j)\to (0,\bar\xi)$ in $\mathbf R^+\times B_{\bar\rho}\subset\mathbf R^+\times \mathbf R^n$ and a sequence $\phi_j\in K_{\varepsilon_j,\xi_j}^{\perp_s}$
such that
\begin{equation}\label{e:pps1}
\|\phi_j\|_{s}=1,
\end{equation}
and
\begin{equation}\label{e:d2f0pj}
D^2f_0(z_{\xi_j})[\phi_j,\phi_j]\to 0,\quad\mbox{ as } j\to \infty.
\end{equation}
Since $\{\phi_j\}$ is bounded in $H^s(\mathbf R^n)$, we may assume (passing to a subsequence) that $\phi_j$ converges weakly to some $\phi_{\infty}$ in $H^s(\mathbf R^n)$.
\begin{lemma}\label{l:pipk}
It holds that
\begin{equation*}
\phi_{\infty}\in K_{0,\bar\xi}^{\perp_s}.
\end{equation*}
\end{lemma}
\begin{proof}
Rewrite
\begin{eqnarray*}
\partial_{x_i}z_{\xi_j}&=&\partial_{x_i}U(x-\bar\xi)+\partial_{x_i}[b(\varepsilon_j\xi_j)U(a(\varepsilon_j\xi_j )(x-\xi_j))-U(x-\bar\xi)]\\
&:=&\partial_{x_i}U(x-\bar\xi)+\psi_j.
\end{eqnarray*}
By the definition of $a(\xi)$ and $b(\xi)$ (see (\ref{d:a}) and (\ref{d:b})), it holds that
\begin{eqnarray*}
\|\psi_j\|_s\to 0,\quad \mbox{as }j\to\infty.
\end{eqnarray*}
From $\phi_j \in K_{\varepsilon_j,\xi_j}^{\perp_s}$, it holds that
\begin{eqnarray*}
0=\langle \phi_j,\partial_{x_i}z_{\xi_j}\rangle_s
= \langle \phi_j,\partial_{x_i}U(\cdot-\bar\xi)\rangle_s+\langle\phi_j,\psi_j\rangle_s\to \langle \phi_{\infty},\partial_{x_i}U(\cdot-\bar\xi)\rangle_s.
\end{eqnarray*}
That is, $\phi_{\infty}\perp \partial_{x_i}U(\cdot-\bar\xi)$. Similarly, we have that $\phi_{\infty}\perp U(\cdot-\bar\xi)$. Therefore, we obtain $\phi_{\infty}\in K_{0,\bar\xi}^{\perp_s}$. This completes the proof.
\end{proof}
Let
\begin{equation*}
\mathcal L_j:H^s(\mathbf R^n)\to H^s(\mathbf R^n)
\end{equation*}
be the operator given by
\begin{equation*}
\langle \mathcal L_j \phi,\psi\rangle_s=D^2f_0(z_{\xi_j})[\phi,\psi],\quad \mbox{ for }\phi,\psi\in H^s(\mathbf R^n),
\end{equation*}
and let
\begin{equation*}
\mathcal L_{\infty}:H^s(\mathbf R^n)\to H^s(\mathbf R^n)
\end{equation*}
be the operator defined by
\begin{equation*}
\langle \mathcal L_{\infty} \phi,\psi\rangle_s=D^2f_0(U(\cdot-\bar\xi))[\phi,\psi],\quad \mbox{ for }\phi,\psi\in H^s(\mathbf R^n).
\end{equation*}
We now have the following lemma.
\begin{lemma}
We have that $\phi_{\infty}=0$.
\end{lemma}
\begin{proof}
By (\ref{e:pps1}) and (\ref{e:d2f0pj}), we get that
\begin{equation*}
\langle \mathcal L_{j}\phi_j,\phi_j\rangle_s=\|\phi_j\|_s^2-p\int_{\mathbf R^n}z_{\xi_j}^{p-1}\phi_j^2dx\to 0,
\end{equation*}
and then
\begin{equation*}
p\int_{\mathbf R^n}z_{\xi_j}^{p-1}\phi_j^2dx\to 1.
\end{equation*}
Hence, from the definition of $z_{\xi}$ (see Section \ref{sb:cuf}), we obtain that
\begin{equation}\label{e:upi}
p\int_{\mathbf R^n}U^{p-1}(x-\bar\xi)\phi_j^2dx\to 1.
\end{equation}
Moreover, estimate
\begin{eqnarray*}
&&\left|\int_{\mathbf R^n}U^{p-1}(x-\bar\xi)(\phi_j^2-\phi_{\infty}^2)dx\right|\\
&\le& \left(\int_{\mathbf R^n}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\right)^{\frac{1}{2}}\|\phi_j+\phi_{\infty}\|_0\notag\\
&\le&C\left(\int_{\mathbf R^n}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\right)^{\frac{1}{2}}.\notag
\end{eqnarray*}
Let $B_r(\bar\xi)$ be the ball centered at $\bar\xi$ with radius $r$. Then
\begin{eqnarray}\label{e:bpjpi}
&&\int_{\mathbf R^n}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\notag\\
&=&\left(\int_{B_r(\bar\xi)}+\int_{\mathbf R^n\setminus B_r(\bar\xi)}\right)U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx.
\end{eqnarray}
For any $\epsilon>0$, there exists $r(\epsilon)>0$ such that if $r>r(\epsilon)$, then $U^{2(p-1)}(x-\bar\xi)<\epsilon$ for all $x\in \mathbf R^n\setminus B_r(\bar\xi)$. Thus,
\begin{equation*}
\left|\int_{\mathbf R^n\setminus B_r(\bar\xi)}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\right|\le \epsilon\|\phi_j(x)-\phi_{\infty}(x)\|_0^2.
\end{equation*}
We now estimate the other term in (\ref{e:bpjpi}). Let $\chi$ be a smooth function satisfying
\begin{equation*}
\chi(x)=\left\{\begin{array}{ll}
1, & \mbox{for }x\in B_r(\bar\xi), \\
0, & \mbox{for } x\in \mathbf R^n\setminus B_{r+1}(\bar\xi).
\end{array}\right.
\end{equation*}
Then $\{\chi\phi_j\}$ is a bounded sequence in $H^{s}(B_{r+1}(\bar\xi))$. Therefore, there exists a function $\eta\in H^{s}(B_{r+1}(\bar\xi))$ such that, up to a subsequence, $\chi\phi_j\rightharpoonup \eta$. Since the embedding $H^{s}(B_{r+1}(\bar\xi))\hookrightarrow L^2(B_{r+1}(\bar\xi))$ is compact, we have $\chi\phi_j\to \eta$ in $L^2(B_{r+1}(\bar\xi))$. Then
$$\phi_j|_{B_r(\bar\xi)}=\chi\phi_j|_{B_r(\bar\xi)}\to \eta|_{B_r(\bar\xi)}, \quad \mbox{ in }L^2(B_r(\bar\xi)).$$
Since $\phi_j\rightharpoonup \phi_{\infty}$ in $L^2(B_r(\bar\xi))$, we obtain that
\begin{equation}\label{e:slpi}
\phi_j\to \phi_{\infty} \quad \mbox{in } L^2(B_r(\bar\xi)).
\end{equation}
It follows that
\begin{equation*}
\left|\int_{B_r(\bar\xi)}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\right|\to 0,\quad \mbox{as }j\to \infty.
\end{equation*}
Since $\epsilon$ is arbitrary, we have that
\begin{equation*}
\int_{\mathbf R^n}U^{2(p-1)}(x-\bar\xi)|\phi_j(x)-\phi_{\infty}(x)|^2dx\to 0.
\end{equation*}
This yields that
\begin{equation}\label{e:upjpi0}
\int_{\mathbf R^n}U^{p-1}(x-\bar\xi)(\phi_j^2-\phi_{\infty}^2)dx\to 0.
\end{equation}
From (\ref{e:upi}) and (\ref{e:upjpi0}), we get that
\begin{equation*}
p\int_{\mathbf R^n}U^{p-1}(x-\bar\xi)\phi_{\infty}^2dx= 1.
\end{equation*}
On the other hand, since $\phi_j\rightharpoonup\phi_{\infty}$ in $H^s(\mathbf R^n)$ and $|\langle \phi_j,\phi_{\infty}\rangle_s|\le\|\phi_j\|_s\|\phi_{\infty}\|_s=\|\phi_{\infty}\|_s$ for every $j$, we have that
\begin{equation*}
\|\phi_{\infty}\|_s^2=\lim_{j\to\infty}\langle \phi_j,\phi_{\infty}\rangle_s\le\|\phi_{\infty}\|_s.
\end{equation*}
It follows that
\begin{equation*}
\|\phi_{\infty}\|_s\le 1.
\end{equation*}
Therefore, we obtain that
\begin{equation*}
\langle \mathcal L_{\infty}\phi_{\infty},\phi_{\infty}\rangle_s=\|\phi_{\infty}\|_s^2-p\int_{\mathbf R^n}U^{p-1}(x-\bar\xi)\phi_{\infty}^2dx\le 0.
\end{equation*}
By Theorem \ref{t:unda}, Remark \ref{r:lce} and Lemma \ref{l:pipk}, it holds that
\begin{equation*}
\langle \mathcal L_{\infty}\phi_{\infty},\phi_{\infty}\rangle_s\ge C\|\phi_{\infty}\|_s^2,
\end{equation*}
where $C$ is a positive constant.
Then we have that
\begin{equation*}
\|\phi_{\infty}\|_s=0.
\end{equation*}
This completes the proof.
\end{proof}
\begin{proof}[Proof of Lemma \ref{l:d2f0i}]
Note that $z_{\xi_j}^{p-1}$ decays uniformly to $0$ at infinity, since $0<\varepsilon_j<\bar\varepsilon$ and $|\xi_j|\le \bar\rho$. Then, for any $\epsilon>0$, there exists a sufficiently large $r_0>0$ such that, for all $r>r_0$, $|z_{\xi_j}^{p-1}(x)|<\epsilon$ when $x\in \mathbf R^n\setminus B_r$. Therefore, from (\ref{e:slpi}) and $\phi_{\infty}=0$, we have that
\begin{eqnarray*}
\left|\int_{\mathbf R^n}z_{\xi_j}^{p-1}|\phi_j|^2dx\right|
&\le& C\int_{B_r}|\phi_j|^2dx+\epsilon\int_{\mathbf R^n\setminus B_r}|\phi_j|^2dx\\
&\le& C\int_{B_r}|\phi_j|^2dx+\epsilon\|\phi_j\|_s^2\le C\int_{B_r}|\phi_j|^2dx+\epsilon,
\end{eqnarray*}
and the first term on the right-hand side tends to $0$ as $j\to\infty$.
Since $\epsilon$ is arbitrary, we have that
\begin{equation*}
\left|\int_{\mathbf R^n}z_{\xi_j}^{p-1}|\phi_j|^2dx\right|\to 0, \quad \mbox{as }j\to \infty.
\end{equation*}
Moreover, by (\ref{e:pps1}), it holds that
\begin{equation*}
D^2f_0(z_{\xi_j})[\phi_j,\phi_j]=\|\phi_j\|_s^2-p\int_{\mathbf R^n}z_{\xi_j}^{p-1}|\phi_j|^2dx\to 1,
\end{equation*}
whereas (\ref{e:d2f0pj}) states that $D^2f_0(z_{\xi_j})[\phi_j,\phi_j]\to 0$. This is a contradiction, which proves Lemma \ref{l:d2f0i}.
\end{proof}
\begin{lemma}\label{l:df2kp}
Let $\bar\rho>0$. There exists $\varepsilon_3>0$ small such that for all $0<\varepsilon<\varepsilon_3$ and $|\xi|\le\bar\rho$, it holds
\begin{equation*}
D^2f_{\varepsilon}(z_{\xi})[\phi,\phi]\ge C_3\|\phi\|_s^2,\quad \mbox{ for }\phi\in K_{\varepsilon,\xi}^{\perp_s},
\end{equation*}
where $C_3$ is a positive constant only depending on $\varepsilon_2$ and $\bar\rho$.
\end{lemma}
\begin{proof}
By the nonnegativity of $V$ and Lemma \ref{l:d2f0i}, we have that, for all $0<\varepsilon<\varepsilon_2$, $|\xi|\le\bar\rho$ and $\phi\in K_{\varepsilon,\xi}^{\perp_s}$,
\begin{eqnarray*}
D^2f_{\varepsilon}(z_{\xi})[\phi,\phi]&=&D^2f_{0}(z_{\xi})[\phi,\phi]+\int_{\mathbf R^n}V(\varepsilon x)\phi^2dx\\
&\ge&D^2f_{0}(z_{\xi})[\phi,\phi]\ge C_2\|\phi\|_s^2.
\end{eqnarray*}
Letting $\varepsilon_3=\varepsilon_2$ and $C_3=C_2$, we obtain the result.
\end{proof}
\begin{proof}[Proof of Proposition \ref{p:invertible}]
Let $\bar\varepsilon=\varepsilon_2$. From Lemma \ref{l:lzx}, Lemma \ref{l:df2kp}, Lemma \ref{l:pxe} and (\ref{e:mxv}), we have that, for all $|\xi|\le\bar\rho$ and $0<\varepsilon<\bar\varepsilon$,
\begin{equation*}
|\langle \mathcal L_{\varepsilon,\xi}v,v\rangle_s|\ge C\|v\|_s^2, \quad \forall v\in (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s},
\end{equation*}
where $C>0$ is a constant only depending on $\bar\rho$ and $\bar\varepsilon$. This completes the proof.
\end{proof}
\section{Preliminaries}\label{s:preliminaries}
In this section, we recall some results on fractional Laplacian, fractional Sobolev spaces and some uniqueness, non-degeneracy and decay results for solutions to the standard Schr\"odinger equations.
\subsection{Fractional Laplacian and fractional order Sobolev spaces}
For further references, we recall some basic facts involving fractional Laplacian and fractional order Sobolev spaces. For more details, see, for example, \cite{Ad:SS}, \cite{Sh:POST}, \cite{NPV:HG}, \cite{Caffarelli&Silvestre07}.
Mathematically, $(-\Delta)^s$ is defined as
$$
(-\Delta)^s u = C(n, s)\mbox{P.V.} \int_{\mathbf{R}^n}\frac{u(x) - u(y)}{|x - y|^{n + 2s}}dy = C(n, s)\lim_{\delta\to 0^+}\int_{\mathbf R^n\setminus B_{\delta}(x)}\frac{u(x) - u(y)}{|x - y|^{n + 2s}}dy.
$$
Here P.V. stands for `in the principal value sense' and $C(n, s) = \pi^{-(2s + n/2)}\frac{\Gamma(n/2 + s)}{\Gamma(-s)}$ is a normalization constant.
It is well known that $(-\Delta)^s$ on $\mathbf{R}^{n}$ with $s\in (0, 1)$ is a nonlocal operator.
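Equivalently, $(-\Delta)^s$ acts in Fourier variables as multiplication by $|\zeta|^{2s}$, i.e. $(-\Delta)^su=\mathcal F^{-1}\bigl(|\zeta|^{2s}\mathcal Fu\bigr)$. As an illustrative aside (not part of the analysis below), this spectral characterization can be checked numerically on a periodic grid; the function names here are our own:

```python
import numpy as np

def frac_laplacian(u, s, period):
    """Spectral (-Delta)^s of a periodic sample u via the symbol |k|^{2s}."""
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=period / n)  # discrete frequencies
    return np.real(np.fft.ifft(np.abs(k) ** (2 * s) * np.fft.fft(u)))

period = 2 * np.pi
x = np.linspace(0, period, 256, endpoint=False)
u = np.sin(3 * x)

# s = 1 recovers the classical Laplacian: -u'' = 9 sin(3x)
assert np.allclose(frac_laplacian(u, 1.0, period), 9 * np.sin(3 * x))

# semigroup property: (-Delta)^{s/2} applied twice equals (-Delta)^s
v = frac_laplacian(frac_laplacian(u, 0.35, period), 0.35, period)
assert np.allclose(v, frac_laplacian(u, 0.7, period))
```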
When $s\in (0, 1)$, the space $H^{s}(\mathbf{R}^{n}) = W^{s, 2}(\mathbf{R}^n)$ is defined by
\begin{eqnarray*}
H^{s}(\mathbf{R}^{n})& = &\left\{u\in L^2(\mathbf{R}^n): \frac{|u(x) - u(y)|}{|x - y|^{\frac{n}{2} + s}}\in L^{2}(\mathbf{R}^n\times\mathbf{R}^n)\right\}\\
& = & \left\{u\in L^2(\mathbf{R}^n): \int_{\mathbf{R}^n}(1 + |\zeta|^{2s})|\mathcal{F}u(\zeta)|^2d\zeta < +\infty\right\}
\end{eqnarray*}
and the inner product is
\begin{eqnarray*}
\langle u,v\rangle_{s} & := &\int_{\mathbf{R}^n}uvdx + \int_{\mathbf{R}^n}\int_{\mathbf{R}^n}\frac{(u(x) - u(y))(v(x)-v(y))}{|x - y|^{n + 2s}}dxdy.
\end{eqnarray*}
Let
$$
[u]_{s} := [u]_{H^{s}(\mathbf{R}^{n})} = \left(\int_{\mathbf{R}^n}\int_{\mathbf{R}^n}\frac{|u(x) - u(y)|^2}{|x - y|^{n + 2s}}dxdy\right)^\frac{1}{2}
$$
be the Gagliardo (semi) norm of $u$. The following identity yields the relation between the fractional operator $(-\Delta)^s$ and the fractional Sobolev space $H^{s}(\mathbf{R}^{n})$,
$$
[u]_{H^{s}(\mathbf{R}^{n})} = C\left(\int_{\mathbf{R}^n}|\zeta|^{2s}|\mathcal{F}u(\zeta)|^2d\zeta\right)^{\frac{1}{2}} = C\|(-\Delta)^{\frac{s}{2}}u\|_{L^2(\mathbf{R}^n)}
$$
for a suitable positive constant $C$ depending only on $s$ and $n$.
When $s > 1$ is not an integer, we write $s = m + \sigma$, where $m$ is an integer and $\sigma\in (0, 1)$. In this case the space $H^{s}(\mathbf{R}^{n})$
consists of those equivalence classes of functions $u\in H^{m}(\mathbf{R}^{n})$ whose distributional derivatives $D^{J} u$, with $|J| = m$, belong to $H^{\sigma}(\mathbf{R}^{n})$, namely
\begin{eqnarray*}
H^{s}(\mathbf{R}^{n}) = \left\{u\in H^{m}(\mathbf{R}^{n}): D^{J} u\in H^{\sigma}(\mathbf{R}^{n}) \mbox{\,\,for any\,\,}J \mbox{\,\,with\,\,} |J| = m\right\}
\end{eqnarray*}
and this is a Banach space with respect to the norm
\begin{eqnarray*}
\|u\|_{s} := \|u\|_{H^{s}(\mathbf{R}^{n})} = \left(\|u\|^2_{H^{m}(\mathbf{R}^{n})} + \displaystyle\sum_{|J| = m}\|D^{J} u\|^2_{H^{\sigma}(\mathbf{R}^{n})}\right)^\frac{1}{2}.
\end{eqnarray*}
Clearly, if $s = m$ is an integer, the space $H^{s}(\mathbf{R}^{n})$ coincides with the usual Sobolev space $H^{m}(\mathbf{R}^{n})$. Consistent with this notation, we denote the norm of $L^2(\mathbf R^n)$ by $\|\cdot\|_0$.
For a general domain $\Omega$, the space $H^s(\Omega)$ can be defined similarly.
Concerning the Sobolev embeddings and their compactness, one has the following results.
\begin{theorem}\cite{Ad:SS}\label{l:embedding}
Let $\Omega$ be a domain with smooth boundary in $\mathbf{R}^n$. Let $s > 0$, then
\begin{enumerate}
\item[(a)]If $n > 2s$, then $H^{s}(\Omega)\hookrightarrow L^{r}(\Omega)$ for $2\leq r \leq 2n/(n - 2s)$;
\item[(b)]If $n = 2s$, then $H^{s}(\Omega)\hookrightarrow L^{r}(\Omega)$ for $2\leq r < \infty$.
\end{enumerate}
\end{theorem}
\begin{theorem}\cite{Sh:POST}\label{l:compactness}
Let $s>s'$ and $\Omega$ be a bounded domain with smooth boundary in $\mathbf R^n$. Then the embedding operator
\begin{equation*}
i_s^{s'}:H^s(\Omega)\to H^{s'}(\Omega)
\end{equation*}
is compact.
\end{theorem}
\subsection{Some results for the standard equation}\label{ss:se}
We recall some basic properties of the solutions to the following equation
\begin{equation}\label{e:standard}
(-\Delta)^{s}u+u-|u|^{p-1}u=0.
\end{equation}
The solutions of (\ref{e:standard}) are the critical points of $f_0$ given by (\ref{e:f0u}).
The non-degeneracy of the standard solution to Equation (\ref{e:standard}) has been investigated in many works. For our purpose, we recall the following theorem. (For more results and details on this topic, see, for example, \cite{FS:URSFL}, \cite{FL:UNGS}, \cite{FQT:PSNFS}, \cite{FaVa13:UNPS} and the references therein.)
\begin{theorem}\label{t:unda}
There exists a unique solution (up to translation) $U\in H^{2s+1}(\mathbf R^n)$ to (\ref{e:standard}) such that
\begin{equation*}\label{e:uday}
\frac{C_1}{1+|x|^{n+2s}}\le U(x)\le \frac{C_2}{1+|x|^{n+2s}}, \quad \mbox{ for }\,x\in\mathbf R^n,
\end{equation*}
with some constants $0< C_1\le C_2$. Moreover, the linearized operator $L_0$ at $U$ is non-degenerate, that is, its kernel is given by
\begin{equation*}
{\rm ker} L_0={\rm span}\{\partial_{x_1}U,\cdots,\partial_{x_n}U\}.
\end{equation*}
\end{theorem}
\begin{remark}\label{r:pu}
By Lemma C.2 of \cite{FS:URSFL}, $\nabla U$ satisfies
\begin{equation*}\label{e:ps}
|\nabla U(x)|\le\frac{C}{1+|x|^{n+2s}},
\end{equation*}
for some constant $C$.
\end{remark}
\begin{remark}\label{r:lce}
The non-degeneracy of $L_0$ yields the coercivity estimate as follows:
\begin{equation*}
\langle L_0\phi,\phi\rangle_0 \ge C\|\phi\|_s^2 \quad \mbox{ for } \phi\perp K,
\end{equation*}
where $C$ is a positive constant, and $K$ is a suitably chosen $(n+1)$-dimensional subspace. For example, we can choose $K={\rm span}\{\phi_{-1},\partial_{x_1}U,\cdots,\partial_{x_n}U\}$ with $\phi_{-1}$ being the linear ground state of $L_0$. For more details, see \cite[Section 3]{FS:URSFL}.
\end{remark}
\subsection{Critical points of $F_{\varepsilon,\xi}$}\label{sb:cuf}
Let
\begin{equation}\label{d:a}
a=a(\xi)=(1+V(\xi))^{\frac{1}{2s}}
\end{equation}
and
\begin{equation}\label{d:b}
b=b(\xi)=[1+V(\xi)]^{\frac{1}{p-1}}.
\end{equation}
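The exponents in (\ref{d:a}) and (\ref{d:b}) are forced by scaling. Assuming, as the definitions of $a$ and $b$ suggest, that (\ref{e:ss}) is the frozen-coefficient equation $(-\Delta)^su+(1+V(\xi))u=u^p$, a two-line computation (using only that $U$ solves (\ref{e:standard})) verifies this:

```latex
% For u(x) = bU(ax), the fractional Laplacian scales as
\begin{align*}
(-\Delta)^s u(x) &= b\,a^{2s}\,[(-\Delta)^sU](ax)
                  = b\,a^{2s}\,\bigl(U^p(ax) - U(ax)\bigr),\\
(-\Delta)^s u + (1+V(\xi))\,u
  &= b\,a^{2s}\,U^p(ax) + b\,\bigl(1 + V(\xi) - a^{2s}\bigr)\,U(ax).
\end{align*}
```

Matching the right-hand side with $u^p=b^pU^p(ax)$ forces $a^{2s}=1+V(\xi)$ and $b^{p-1}=a^{2s}$, which are exactly (\ref{d:a}) and (\ref{d:b}).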
Then $bU(ax)$ solves (\ref{e:ss}). Set
\begin{equation}\label{e:dz}
z^{\varepsilon\xi}=b(\varepsilon\xi)U(a(\varepsilon\xi )x)
\end{equation}
and
\begin{equation*}
Z^{\varepsilon}=\left\{z^{\varepsilon\xi}(x-\xi)\,|\,\xi\in\mathbf R^n\right\}.
\end{equation*}
Therefore, every point in $ Z^{\varepsilon}$ is a critical point of (\ref{f:xi}) or, equivalently, a solution to Equation (\ref{e:ss}).
For simplicity, we will set $z=z_{\xi}=z_{\varepsilon,\xi}=z^{\varepsilon\xi}(x-\xi)$.
\section{Some estimates}\label{s:estimates}
In this section, we prove some useful estimates for future reference. From now on, $C$ denotes various positive constants which may change from line to line.
\begin{lemma}\label{l:pxe}
Let $\bar\rho>0$. For $\varepsilon$ sufficiently small and $|\xi|\le \bar\rho$,
there holds
\begin{equation}\label{e:pxpxs}
\partial_{\xi_i}\left[z^{\varepsilon\xi}(x-\xi)\right]=-\partial_{x_i}z^{\varepsilon\xi}(x-\xi)+O(\varepsilon), \quad\mbox{ in }H^s(\mathbf R^n).
\end{equation}
\end{lemma}
\begin{proof}
A direct calculation gives
\begin{eqnarray*}
&&\partial_{\xi_i}z^{\varepsilon\xi}(x-\xi)=\partial_{\xi_i}\left[b(\varepsilon\xi)U(a(\varepsilon\xi)(x-\xi))\right]\\
&=&\varepsilon[\partial_{\xi_i}b](\varepsilon\xi)U(a(\varepsilon\xi)(x-\xi))+\varepsilon b(\varepsilon\xi)[\partial_{\xi_i}a](\varepsilon\xi)[\nabla U](a(\varepsilon\xi)(x-\xi))\cdot(x-\xi)\\
&&-a(\varepsilon\xi)b(\varepsilon\xi)[\partial_{x_i}U](a(\varepsilon\xi)(x-\xi)):=Z_1+Z_2+Z_3.
\end{eqnarray*}
Note that
\begin{equation*}
Z_3=-a(\varepsilon\xi)b(\varepsilon\xi)[\partial_{x_i}U](a(\varepsilon\xi)(x-\xi))=-\partial_{x_i}z^{\varepsilon\xi}(x-\xi).
\end{equation*}
By the definitions of $a$ and $b$ and the assumptions on $V$, we have that
$$|a(\varepsilon\xi)|\le C,\quad |b(\varepsilon\xi)|<C,\quad \left|[\partial_{\xi_i} a](\varepsilon\xi)\right|\le C, \quad \left|[\partial_{\xi_i}b](\varepsilon\xi)\right|<C$$
for some constant $C$. Moreover, the assumptions on $V$ give
\begin{equation*}\label{e:avx}
|a(\varepsilon\xi)|\ge 1,\quad |b(\varepsilon\xi)|\ge 1.
\end{equation*}
Therefore, from $\partial_{x_i}U(\cdot-\xi)\in H^s(\mathbf R^n)$, we have $Z_3\in H^s(\mathbf R^n)$.
By $U\in H^s(\mathbf R^n)$, it holds that
\begin{equation}\label{e:z1}
\|Z_1\|_s=O(\varepsilon)\|[\partial_{\xi_i}b](\varepsilon\xi)U(a(\varepsilon\xi)(\cdot-\xi))\|_s=O(\varepsilon).
\end{equation}
Since $\partial_{\xi_i}z^{\varepsilon\xi}\in H^s(\mathbf R^n)$ and $Z_1,\,Z_3\in H^s(\mathbf R^n)$, we have that $Z_2$ is also in $H^s(\mathbf R^n)$.
It follows that $[\nabla U](a(\varepsilon\xi)(\cdot-\xi))\cdot(\cdot-\xi)\in H^s(\mathbf R^n)$. So, we obtain that $[\nabla U](\cdot-\xi)\cdot(\cdot-\xi)\in H^s(\mathbf R^n)$. Again, by the property of $a$, it holds that
\begin{equation}\label{e:z2}
\|Z_2\|_s=O(\varepsilon).
\end{equation}
From (\ref{e:z1}) and (\ref{e:z2}), we have (\ref{e:pxpxs}). This completes the proof.
\end{proof}
\begin{lemma}\label{l:vxz}
Given $\bar\rho>0$ and small $\bar\varepsilon>0$, we have that, if $|\xi|\le \bar\rho$ and $0<\varepsilon<\bar\varepsilon$, then
\begin{equation*}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)|^2 z_{\xi}^2dx\le C (\varepsilon^2|\nabla V(\varepsilon\xi)|^2+\varepsilon^4),
\end{equation*}
and
\begin{equation*}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)|^2 |\partial_{x_i}z_{\xi}|^2dx\le C (\varepsilon^2|\nabla V(\varepsilon\xi)|^2+\varepsilon^4).
\end{equation*}
\end{lemma}
\begin{proof}
Since $V\in C^3_b(\mathbf R^n)$, we have $|\nabla V(x)|\le C$ and $|D^2V(x)|\le C$; hence it holds that
\begin{equation*}
|V(\varepsilon x)-V(\varepsilon \xi)|\le \varepsilon |\nabla V(\varepsilon \xi)|\cdot |x-\xi|+C\varepsilon^2 |x-\xi|^2.
\end{equation*}
Therefore,
\begin{eqnarray*}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2z_{\xi}^2dx&\le& C\varepsilon^2|\nabla V(\varepsilon \xi)|^2
\int_{\mathbf R^n}|x-\xi|^2z^2_{\xi}(x-\xi)dx\\
&&+C\varepsilon^4\int_{\mathbf R^n}|x-\xi|^4z^2_{\xi}(x-\xi)dx.
\end{eqnarray*}
By the definition of $z_{\xi}$,
\begin{eqnarray*}
\int_{\mathbf R^n}|x-\xi|^2z^2_{\xi}(x-\xi)dx&=&b^2(\varepsilon\xi)\int_{\mathbf R^n}|y|^2U^2(a(\varepsilon \xi)y)dy\\
&=&a^{-n-2}b^2\int_{\mathbf R^n}|y'|^2U^2(y')dy'.
\end{eqnarray*}
Using Theorem \ref{t:unda}, we obtain
\begin{equation*}
\int_{\mathbf R^n}|y'|^2U^2(y')dy'\le C\int_{\mathbf R^n}\frac{|y'|^2}{(1+|y'|)^{2n+4s}}dy'\le C.
\end{equation*}
Since we assume $n> 4-4s$, it follows that
\begin{eqnarray*}
\int_{\mathbf R^n}|x-\xi|^4z^2_{\xi}(x-\xi)dx&\le& C\int_{\mathbf R^n}\frac{|x-\xi|^4}{(1+|x-\xi|)^{2n+4s}}dx\\
&\le& C\int_{\mathbf R^n}\frac{1}{(1+|x-\xi|)^{2n+4s-4}}dx\le C.
\end{eqnarray*}
Therefore, we get
\begin{equation}\label{e:vxvxi}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2z_{\xi}^2dx\le C(\varepsilon^2|\nabla V(\varepsilon\xi)|^2+\varepsilon^4).
\end{equation}
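As an illustrative aside, the integrability threshold $n>4-4s$ used above can be spot-checked numerically: in radial coordinates the last integral reduces, up to the surface measure of $S^{n-1}$, to the Beta integral $\int_0^{\infty}r^{n-1}(1+r)^{-(2n+4s-4)}dr=B(n,\,n+4s-4)$, which is finite precisely when $n+4s-4>0$, i.e. $n>4-4s$. The helper below is our own sketch, not part of the proof:

```python
import math

def radial_integral(n, s, steps=200000, R=2000.0):
    # Midpoint rule for the radial integral
    #   \int_0^R r^{n-1} (1+r)^{-(2n+4s-4)} dr,
    # which converges as R -> infinity exactly when n > 4 - 4s.
    h = R / steps
    return sum(
        ((i + 0.5) * h) ** (n - 1)
        * (1 + (i + 0.5) * h) ** (-(2 * n + 4 * s - 4)) * h
        for i in range(steps)
    )

# For n = 3, s = 1/2 the exact value is B(3, 1) = Gamma(3)Gamma(1)/Gamma(4) = 1/3.
n, s = 3, 0.5
exact = math.gamma(n) * math.gamma(n + 4 * s - 4) / math.gamma(2 * n + 4 * s - 4)
assert abs(radial_integral(n, s) - exact) < 1e-2
```

The truncation at $R=2000$ introduces a tail error of order $R^{-1}$ here, which is why the tolerance is coarse.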
For the second estimate, we have
\begin{eqnarray*}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2|\partial_{x_i}z_{\xi}|^2dx&\le& C\varepsilon^2|\nabla V(\varepsilon \xi)|^2
\int_{\mathbf R^n}|x-\xi|^2|\partial_{x_i}z_{\xi}(x-\xi)|^2dx\\
&&+C\varepsilon^4\int_{\mathbf R^n}|x-\xi|^4|\partial_{x_i}z_{\xi}(x-\xi)|^2dx.
\end{eqnarray*}
By Remark \ref{r:pu}, $|\partial_{x_i}z_{\xi}(x-\xi)|\le \frac{C}{1+|x-\xi|^{n+2s}}$. Then an argument similar to the proof of (\ref{e:vxvxi}) gives
\begin{equation*}
\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)|^2 |\partial_{x_i}z_{\xi}|^2dx\le C (\varepsilon^2|\nabla V(\varepsilon\xi)|^2+\varepsilon^4).
\end{equation*}
This completes the proof.
\end{proof}
\begin{lemma}\label{l:dgss}
Given $\bar\rho>0$ and small $\bar\varepsilon>0$, it holds that, for $|\xi|\le \bar\rho$ and $0<\varepsilon<\bar\varepsilon$,
\begin{equation}\label{e:dg}
\|Df_{\varepsilon}(z_{\xi})\|_s\le C(\varepsilon|\nabla V(\varepsilon\xi)|+\varepsilon^2),
\end{equation}
for some constant $C$.
\end{lemma}
\begin{proof}
Rewrite
\begin{equation*}
f_{\varepsilon}(u)=F^{\varepsilon \xi}(u)+\frac{1}{2}\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))u^2dx.
\end{equation*}
Since $z_{\xi}$ is a critical point of $F^{\varepsilon \xi}$, we get
\begin{eqnarray*}
\langle Df_{\varepsilon}(z_{\xi}),v\rangle_s &=& \langle DF^{\varepsilon \xi}(z_{\xi}),v\rangle_s+\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))z_{\xi}v dx\\
&=&\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))z_{\xi}v dx.
\end{eqnarray*}
By the H\"{o}lder inequality, we have
\begin{eqnarray*}
|\langle Df_{\varepsilon}(z_{\xi}),v\rangle_s|^2&\le& \|v\|_{0}^2\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2z_{\xi}^2dx\\
&\le&\|v\|_s^2\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2z_{\xi}^2dx.
\end{eqnarray*}
Then Lemma \ref{l:vxz} implies (\ref{e:dg}).
\end{proof}
\section{Proof of the main theorem}\label{s:proof}
In this section, we shall prove the main theorem by a classical perturbation result.
\subsection{A multiplicity result by perturbation}\label{ss:abstract}
Let $M\subset\mathbf R^n$ be a non-empty set. We denote by $M_{\delta}$ its $\delta$-neighbourhood.
The cup length $l(M)$ of $M$ is defined by
\begin{equation*}
l(M)=1+\sup\{k\in \mathbf N\,\mid\, \exists \alpha_1,\cdots,\alpha_k\in \check{H}^*(M)\setminus\{1\},\ \alpha_1\cup\cdots\cup\alpha_k\ne 0\}.
\end{equation*}
If no such class exists, we set $l(M)=1$. Here $\check{H}^*(M)$ is the Alexander cohomology of $M$ with real coefficients and $\cup$ denotes the cup product. For example, $l(S^k)=2$ for the sphere $S^k$, while $l(T^k)=k+1$ for the $k$-dimensional torus $T^k$.
Assume that $V$ has a smooth manifold $M$ of critical points. According to Bott \cite{Bo:AM57}, we say that $M$ is a non-degenerate critical manifold for $V$ if every $x\in M$ is a critical point of $V$ and the nullity of every $x\in M$ equals the dimension of $M$.
Now we recall a classical perturbation result. For more details, see Theorem 6.4 of Chapter II in \cite{Ch:IDMT}.
\begin{theorem}\label{t:abstract}
Let $h\in C^2(\mathbf R^n)$ and $\Sigma\subset \mathbf R^n$ be a smooth compact non-degenerate critical manifold of $h$. Let $W$ be a neighbourhood of $\Sigma$ and let $g\in C^1(\mathbf R^n)$. Then, if $\|h-g\|_{C^1(\overline W)}$ is sufficiently small, the function $g$ has at least $l(\Sigma)$ critical points in $W$.
\end{theorem}
\subsection{Proof of Theorem \ref{t:main}}
With the preliminary considerations of the sections above, we now prove Theorem \ref{t:main} by the abstract perturbation theorem above.
\begin{proof}[Proof of Theorem \ref{t:main}]
Fix $\bar\rho>0$ such that $M\subset B_{\bar \rho}$. Since $M$ is a non-degenerate smooth critical manifold of $V$, it is a non-degenerate critical manifold of $C_1(1+V)^{\theta}$ as well. To use Theorem \ref{t:abstract}, we define
\begin{equation*}
h(\xi)=C_1(1+V(\xi))^{\theta},
\end{equation*}
and
\begin{equation*}
g(\xi)=\Phi_{\varepsilon}\left(\frac{\xi}{\varepsilon}\right).
\end{equation*}
Set $\Sigma=M$. Fix a $\delta$-neighbourhood $M_{\delta}$ of $M$ such that $M_{\delta}\subset B_{\bar\rho}$ and the only critical points of $V$ in $M_{\delta}$ are those of $M$. Let $W=M_{\delta}$. From Proposition \ref{p:prv} and Remark \ref{r:prv}, the function $\Phi_{\varepsilon}(\cdot/\varepsilon)$ converges to $h(\cdot)$ in $C^1(\overline W)$ as $\varepsilon\to 0$. Then Theorem \ref{t:abstract} yields the existence of at least $l(M)$ critical points of $g$ for $\varepsilon$ sufficiently small.
Let $\xi_k\in M_{\delta}$ be any of those critical points. Then $\xi_k/\varepsilon$ is a critical point of $\Phi_{\varepsilon}$ and Proposition \ref{p:dgw} implies that
\begin{equation*}
u_{\varepsilon,\xi_k}(x)=z_{\xi_k}\left(x-\frac{\xi_k}{\varepsilon}\right)+w(\varepsilon,\xi_k)
\end{equation*}
is a critical point of $f_{\varepsilon}$ and hence a solution of Equation (\ref{e:change}). Thus
\begin{equation*}
u_{\varepsilon,\xi_k}\left(\frac{x}{\varepsilon}\right)\simeq z_{\xi_k}\left(\frac{x-\xi_k}{\varepsilon}\right)
\end{equation*}
is a solution of Equation (\ref{e:main-equation}).
Each such $\xi_k$ converges, up to a subsequence, to some point $\xi_k^*\in \overline{M_{\delta}}$ as $\varepsilon\to 0$. By Proposition \ref{p:prv}, we have that $\xi_k^*$ is a stationary point of $V$. Then the choice of $M_{\delta}$ implies that $\xi_k^*\in M$. That is, $u_{\varepsilon,\xi_k}(x/\varepsilon)$ concentrates near a point of $M$. This completes the proof.
\end{proof}
\section*{Acknowledgments}
We are grateful to the anonymous referees for useful comments and suggestions. This work was supported by National Natural Science Foundation of China (No. 11401521) and Zhejiang Provincial Natural Science Foundation of China (LQ13A010003).
\section{Lyapunov-Schmidt Reduction}\label{s:reduction}
In this section, we will prove that the problem of finding critical points of $f_{\varepsilon}$ can be reduced to finding critical points of an auxiliary finite-dimensional functional.
\subsection{Auxiliary finite dimensional functional}
Let $P_{\varepsilon,\xi}$ be the orthogonal projection onto $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$. Our aim is to find a point $w\in (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$ satisfying
\begin{equation}\label{e:pdg0}
P_{\varepsilon,\xi}Df_{\varepsilon}(z_{\xi}+w)=0.
\end{equation}
By expansion, we have that
\begin{equation*}
Df_{\varepsilon}(z_{\xi}+w)=Df_{\varepsilon}(z_{\xi})+D^2f_{\varepsilon}(z_{\xi})[w]+\mathcal R(z_{\xi},w).
\end{equation*}
Here the map $\mathcal R(z_{\xi},w)$ is given by
\begin{equation*}
\begin{array}{ccccl}
\mathcal R(z_{\xi},w) & : & H^s & \to & \mathbf R \\
\, & \, & v & \mapsto & \int_{\mathbf R^n}R(z_{\xi},w)vdx,
\end{array}
\end{equation*}
where
\begin{equation*}
R(z_{\xi},w)=-(|z_{\xi}+w|^{p-1}(z_{\xi}+w)-|z_{\xi}|^{p-1}z_{\xi}-p|z_{\xi}|^{p-1}w).
\end{equation*}
\begin{lemma}\label{l:rw1w2}
For all $w_1,w_2\in B_1\subset H^{s}(\mathbf R^n)$, it holds that
\begin{equation*}
\|\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)\|_s\le C\max\{\|w_1\|_{s}^{\sigma},\|w_2\|_{s}^{\sigma}\}\|w_2-w_1\|_{s},
\end{equation*}
where $\sigma=\min\{1,p-1\}$ and $C$ is a constant independent of $w_1$ and $w_2$. Here $B_1$ is the unit ball in $H^{s}(\mathbf R^n)$.
\end{lemma}
\begin{proof}
For all $v\in H^s(\mathbf R^n)$,
\begin{eqnarray*}
&&\left|[\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)](v)\right|\\
&\le&\int_{\mathbf R^n}\left||z_{\xi}+w_2|^{p-1}(z_{\xi}+w_2)-|z_{\xi}+w_1|^{p-1}(z_{\xi}+w_1)-p|z_{\xi}|^{p-1}(w_2-w_1)\right|\,|v|dx\notag\\
&\le&p\int_{\mathbf R^n}\left||z_{\xi}+w_1+\theta_1(w_2-w_1)|^{p-1}-|z_{\xi}|^{p-1}\right||w_2-w_1|\,|v|dx.\notag
\end{eqnarray*}
Here $\theta_1\in[0,1]$.
For $1<p\le 2$,
\begin{eqnarray*}
\left|[\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)](v)\right|&\le& p\int_{\mathbf R^n}|w_1+\theta_1(w_2-w_1)|^{p-1}|w_2-w_1|\,|v|dx\\
&\le& C\int_{\mathbf R^n}(|w_1|+|w_2|)^{p-1}|w_2-w_1|\,|v|dx\\
&\le& C (\|w_1\|_{L^{p+1}}^{p-1}+\|w_2\|_{L^{p+1}}^{p-1})\|w_2-w_1\|_{L^{p+1}}\|v\|_{L^{p+1}}.
\end{eqnarray*}
By the Sobolev embedding theorem (Theorem \ref{l:embedding}), we have that
\begin{equation*}
H^{s}(\mathbf R^n)\hookrightarrow L^{p+1}(\mathbf R^n).
\end{equation*}
Therefore, we obtain that
\begin{equation*}
\left|[\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)](v)\right|\le C (\|w_1\|_{s}^{p-1}+\|w_2\|_{s}^{p-1})\|w_2-w_1\|_{s}\|v\|_{s}.
\end{equation*}
For $2<p<\frac{n+2s}{n-2s}$ (if $2<\frac{n+2s}{n-2s}$), it holds that
\begin{eqnarray*}
&&\left|[\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)](v)\right|\\
&\le&C\int_{\mathbf R^n}|z_{\xi}+\theta_2(w_1+\theta_1(w_2-w_1))|^{p-2}|w_2-w_1|^2|v|dx\\
&\le&C\|z_{\xi}+\theta_2(w_1+\theta_1(w_2-w_1))\|_{L^{p+1}}^{p-2}\|w_2-w_1\|_{L^{p+1}}^2\|v\|_{L^{p+1}},
\end{eqnarray*}
where $\theta_2\in[0,1]$.
Similarly, by the Sobolev embedding theorem, we have that
\begin{equation*}
\left|[\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)](v)\right|\le C(\|z_{\xi}\|_{s}+\|w_1\|_{s}+\|w_2\|_{s})^{p-2} \|w_2-w_1\|_{s}^2\|v\|_s.
\end{equation*}
Therefore, we have
\begin{equation*}
\|\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)\|_{s}\le C\max(\|w_1\|_{s}^{\sigma},\|w_2\|_{s}^{\sigma})\|w_2-w_1\|_{s},
\end{equation*}
where $\sigma=\min\{1,p-1\}$. This completes the proof.
\end{proof}
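The $1<p\le 2$ case above uses the elementary inequality $\bigl||a+b|^{p-1}-|a|^{p-1}\bigr|\le|b|^{p-1}$, which follows from the subadditivity of $t\mapsto t^{p-1}$ on $[0,\infty)$ together with $\bigl||a+b|-|a|\bigr|\le|b|$. A randomized sanity check (illustrative only; the helper name is ours):

```python
import random

# Check | |a+b|^q - |a|^q | <= |b|^q for 0 < q <= 1 (here q = p - 1 with
# 1 < p <= 2): t -> t^q is subadditive on [0, inf) and ||a+b| - |a|| <= |b|.
def holds(q, trials=2000, seed=1):
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.uniform(-10.0, 10.0), rng.uniform(-10.0, 10.0)
        lhs = abs(abs(a + b) ** q - abs(a) ** q)
        if lhs > abs(b) ** q + 1e-12:
            return False
    return True

assert holds(0.5)   # p = 3/2
assert holds(1.0)   # p = 2
```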
\begin{cor}\label{l:Row}
It holds that $\|\mathcal R(z_{\xi},w)\|_s= O(\|w\|_{s}^{1+\sigma})$, where $\sigma=\min\{1,p-1\}$.
\end{cor}
\begin{proof}
Choosing $w_1=0$ and $w_2=w$ in Lemma \ref{l:rw1w2}, we find that
\begin{eqnarray*}
\|\mathcal R(z_{\xi},w)\|_s&\le& C\|w\|_{s}^{1+\sigma}.
\end{eqnarray*}
\end{proof}
From the definition of $\mathcal L_{\varepsilon,\xi}$, Equation (\ref{e:pdg0}) becomes
\begin{equation}\label{e:lpr}
\mathcal L_{\varepsilon,\xi}w+P_{\varepsilon,\xi}Df_{\varepsilon}(z_{\xi})+P_{\varepsilon,\xi}\mathcal R(z_{\xi},w)=0,\quad \mbox{for }w\in (T_{z_{\xi}}Z)^{\perp_s}.
\end{equation}
By Proposition \ref{p:invertible}, we know that $\mathcal L_{\varepsilon,\xi}$ is invertible on $(T_{z_{\xi}}Z)^{\perp_s}$. Denote its inverse by $\mathcal L_{\varepsilon,\xi}^{-1}$. Then Equation (\ref{e:lpr}) is equivalent to
\begin{equation*}
w=N_{\varepsilon,\xi}(w).
\end{equation*}
Here
\begin{equation*}
N_{\varepsilon,\xi}(w)=-\mathcal L_{\varepsilon,\xi}^{-1}(P_{\varepsilon,\xi}Df_{\varepsilon}(z_{\xi})+P_{\varepsilon,\xi}\mathcal R(z_{\xi},w)).
\end{equation*}
\begin{lemma}\label{l:nbd}
There is a small ball $B_{\delta}\subset (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$ such that $N_{\varepsilon,\xi}$ maps $B_{\delta}$ to itself if $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le \bar\rho$.
\end{lemma}
\begin{proof}
Using Lemma \ref{l:dgss}, we obtain
\begin{equation}\label{e:nexw}
\|N_{\varepsilon,\xi}(w)\|_{s}\le C(\varepsilon|\nabla V(\varepsilon\xi)|+O(\varepsilon^2))+O(\|w\|_{s}^{1+\sigma}).
\end{equation}
Then there is a small positive constant $\delta$ such that $N_{\varepsilon,\xi}$ maps $B_{\delta}\subset (T_{z_{\xi}}Z)^{\perp_s}$ to itself if $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le \bar\rho$.
\end{proof}
\begin{lemma}\label{l:ncm}
For all $w_1,w_2\in B_1\subset H^{s}(\mathbf R^n)$, we have that
\begin{equation*}
\|N_{\varepsilon,\xi}(w_2)-N_{\varepsilon,\xi}(w_1)\|_{s}\le C \max(\|w_1\|_{s}^{\sigma},\|w_2\|_{s}^{\sigma})\|w_2-w_1\|_{s},
\end{equation*}
where $C$ is a constant independent of $w_1$ and $w_2$, and $\sigma=\min\{1,p-1\}$.
\end{lemma}
\begin{proof}
Compute
\begin{eqnarray*}
\|N_{\varepsilon,\xi}(w_2)-N_{\varepsilon,\xi}(w_1)\|_{s}&=&\|-\mathcal L_{\varepsilon,\xi}^{-1}P_{\varepsilon\xi}(\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1))\|_{s}\\
&\le&C\|\mathcal R(z_{\xi},w_2)-\mathcal R(z_{\xi},w_1)\|_{s}.
\end{eqnarray*}
Then by Lemma \ref{l:rw1w2}, we have that
\begin{equation*}
\|N_{\varepsilon,\xi}(w_2)-N_{\varepsilon,\xi}(w_1)\|_{s}\le C\max(\|w_1\|_{s}^{\sigma},\|w_2\|_{s}^{\sigma})\|w_2-w_1\|_{s},
\end{equation*}
where $\sigma=\min\{1,p-1\}$. This completes the proof.
\end{proof}
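Lemmas \ref{l:nbd} and \ref{l:ncm} are exactly the hypotheses of the Banach fixed point theorem for $N_{\varepsilon,\xi}$ on $B_{\delta}$: a forcing term of size $O(\varepsilon)$ plus a superlinear, small-Lipschitz remainder. The mechanism can be caricatured by a one-dimensional toy iteration (purely illustrative; the map and constants below are invented and are not the actual operator):

```python
# Toy model of w = N(w): here N(w) = eps + 0.25 * w**2 plays the role of
# -L^{-1}(P Df(z) + P R(z, w)), with a forcing of size O(eps) (as in
# Lemma l:dgss) and a remainder of size O(|w|^{1+sigma}) (as in Cor. l:Row).
def fixed_point(N, w0=0.0, tol=1e-14, max_iter=200):
    w = w0
    for _ in range(max_iter):
        w_next = N(w)
        if abs(w_next - w) < tol:
            return w_next
        w = w_next
    raise RuntimeError("iteration did not converge")

eps = 1e-3
w_star = fixed_point(lambda w: eps + 0.25 * w ** 2)
assert abs(w_star - (eps + 0.25 * w_star ** 2)) < 1e-12  # a genuine fixed point
assert abs(w_star) <= 2 * eps  # the fixed point is O(eps), cf. Remark r:ws
```

The iteration contracts because $|N'(w)|=w/2$ is small on the ball, mirroring the Lipschitz bound of Lemma \ref{l:ncm}.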
\begin{prop}\label{p:dgw}
For $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le\bar\rho$, there exists a unique $w=w(\varepsilon,\xi)\in (T_{z_{\xi}}Z)^{\perp_s}$ such that
$Df_{\varepsilon}(z_{\xi}+w)\in T_{z_{\xi}}Z$, and $w(\varepsilon,\xi)$ is of class $C^1$. Moreover, the functional
$\Phi_{\varepsilon}(\xi)=f_{\varepsilon}(z_{\xi}+w(\varepsilon,\xi))$ has the same regularity as $w$ and satisfies:
\begin{equation*}
\nabla\Phi_{\varepsilon}(\xi_0)=0\quad\Rightarrow\quad Df_{\varepsilon}(z_{\xi_0}+w(\varepsilon,\xi_0))=0.
\end{equation*}
\end{prop}
\begin{proof}
From Lemmas \ref{l:nbd} and \ref{l:ncm}, the map $N_{\varepsilon,\xi}$ is a contraction on $B_{\delta}$ for $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le\bar\rho$. Then there exists a unique $w$ such that $w=N_{\varepsilon,\xi}(w)$. For fixed $\varepsilon$, define
\begin{equation*}
\Xi_{\varepsilon}:(\xi,w)\to P_{\varepsilon,\xi}Df_{\varepsilon}(z_{\xi}+w).
\end{equation*}
Applying the Implicit Function Theorem to $\Xi_{\varepsilon}$, we have that $w(\varepsilon,\xi)$ is $C^1$ with respect to $\xi$. Then using a standard argument in \cite{ABC:ARMA97, AB:VPM}, we obtain that the critical points of $\Phi_{\varepsilon}(\xi)=f_{\varepsilon}(z_{\xi}+w(\varepsilon,\xi))$ give rise to critical points of $f_{\varepsilon}$.
\end{proof}
In what follows, we use the simple notation $w$ to denote $w(\varepsilon,\xi)$ which is obtained in Proposition \ref{p:dgw}.
\begin{remark}\label{r:ws}
By Equation (\ref{e:nexw}), it follows that
\begin{equation*}
\|w\|_{s}\le C(\varepsilon|\nabla V(\varepsilon\xi)|+\varepsilon^2),
\end{equation*}
where $C>0$.
\end{remark}
\begin{lemma}\label{l:nws}
The following inequality holds:
\begin{equation*}
\|\nabla_{\xi}w\|_{s}\le C\left(\varepsilon|\nabla V(\varepsilon\xi)|+O(\varepsilon^2)\right)^{\sigma},
\end{equation*}
where $C>0$ and $\sigma=\min\{1,p-1\}$.
\end{lemma}
\begin{proof}
By (\ref{e:lpr}) and Proposition \ref{p:dgw}, we have that, for all $v\in (T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$,
\begin{equation}\label{e:Lwe}
\langle \mathcal L_{\varepsilon,\xi}w,v\rangle_s+\langle Df_{\varepsilon}(z_{\xi}), v\rangle_s+\langle \mathcal R(z_{\xi},w),v\rangle_s=0.
\end{equation}
Since $DF^{\varepsilon \xi}(z_{\xi})=0$, Equation (\ref{e:Lwe}) becomes
\begin{multline*}
\langle w,v\rangle_s+\int_{\mathbf R^n}V(\varepsilon x)w v dx-p\int_{\mathbf R^n}z_{\xi}^{p-1}wvdx+\int_{\mathbf R^n}[V(\varepsilon x)-V(\varepsilon \xi)]z_{\xi}vdx\\
+\int_{\mathbf R^n}R(z_{\xi},w)vdx=0.
\end{multline*}
Hence
\begin{eqnarray}\label{e:wjlr}
&&\langle \partial_{\xi_j}w,v\rangle_s+\int_{\mathbf R^n}V(\varepsilon x) (\partial_{\xi_j}w) v dx-p\int_{\mathbf R^n}z_{\xi}^{p-1}(\partial_{\xi_j}w)vdx\\
&&-p(p-1)\int_{\mathbf R^n}z_{\xi}^{p-2}(\partial_{\xi_j}z) wvdx+\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))(\partial_{\xi_j}z) vdx
\notag\\
&&-\varepsilon(\partial_{x_j}V)(\varepsilon \xi)\int_{\mathbf R^n}zvdx
-\int_{\mathbf R^n}(R_z\partial_{\xi_j}z+R_w\partial_{\xi_j}w)vdx=0.\notag
\end{eqnarray}
Set $\hat {\mathcal L}=\mathcal L_{\varepsilon,\xi}-\mathcal{R}_w$, where $\langle \mathcal{R}_w v_1,v_2\rangle=\int_{\mathbf R^n}R_{w}v_1v_2dx.$ Since $R_w\to 0$ as $w\to 0$ and $\mathcal L_{\varepsilon,\xi}$ is invertible on $(T_{z_{\xi}}Z^{\varepsilon})^{\perp_s}$, $\hat {\mathcal L}$ is also invertible for $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le\bar\rho$. From (\ref{e:wjlr}), it holds that
\begin{multline*}
\langle\hat {\mathcal L} \partial_{\xi_j}w,v\rangle=p(p-1)\int_{\mathbf R^n}z_{\xi}^{p-2}(\partial_{\xi_j}z) wvdx-\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))(\partial_{\xi_j}z) vdx\\
+\varepsilon(\partial_{x_j}V)(\varepsilon \xi)\int_{\mathbf R^n}zvdx
+\int_{\mathbf R^n}R_z\partial_{\xi_j}zvdx=T_1+T_2+T_3+T_4.
\end{multline*}
Next, we shall estimate each term on the right-hand side of the equation above.
By Theorem \ref{t:unda} and Remark \ref{r:pu}, it holds that, for $1<p\le 2$,
\begin{eqnarray*}
|T_1|&=&p(p-1)\left|\int_{\mathbf R^n}z_{\xi}^{p-2}(\partial_{\xi_j}z) wvdx\right|\\
&\le &C\int_{\mathbf R^n}(1+|x|^{n+2s})^{2-p}\,\frac{1}{1+|x|^{n+2s}}|wv|dx\\
&\le&C\int_{\mathbf R^n}\frac{1}{(1+|x|^{n+2s})^{p-1}}|wv|dx\\
&\le& C\int_{\mathbf R^n}|wv|dx\le C\|w\|_0\|v\|_0\le C\|w\|_{s}\|v\|_s,
\end{eqnarray*}
and, for $2<p<\frac{n+2s}{n-2s}$ (if $2<\frac{n+2s}{n-2s}$),
\begin{eqnarray*}
\left|\int_{\mathbf R^n}z_{\xi}^{p-2}(\partial_{\xi_j}z) wvdx\right|&\le &C\int_{\mathbf R^n}\frac{1}{(1+|x|^{n+2s})^{p-1}}|wv|dx\\
&\le&C\|w\|_{s}\|v\|_s.
\end{eqnarray*}
Therefore, we have that
\begin{equation*}
|T_1|\le C\|w\|_{s}\|v\|_s.
\end{equation*}
Since $0<\varepsilon<\bar\varepsilon$ and $|\xi|\le \bar \rho$, by Lemma \ref{l:vxz} we have
\begin{eqnarray}\label{e:t2}
|T_2|&=&\left|\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon \xi))(\partial_{\xi_j}z) vdx\right|\\
&\le&\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)||\partial_{\xi_j}z| |v|dx\notag\\
&\le&\left(\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon \xi)|^2|\partial_{\xi_j}z|^2dx\right)^{\frac{1}{2}}\|v\|_0\notag\\
&\le& C \varepsilon|\nabla V(\varepsilon\xi)|\|v\|_s.\notag
\end{eqnarray}
Then we obtain that
\begin{equation*}
|T_2|\le C\varepsilon|\nabla V(\varepsilon\xi)| \|v\|_s.
\end{equation*}
Estimating the third term, we have
\begin{multline*}
|T_3|=\varepsilon\left|(\partial_{x_j}V)(\varepsilon \xi)\int_{\mathbf R^n}zvdx\right|
\le\varepsilon|(\nabla V)(\varepsilon \xi)|\|z\|_0\|v\|_0
\le \varepsilon|(\nabla V)(\varepsilon \xi)|\|v\|_s.
\end{multline*}
It remains to estimate the final term.
A direct computation yields
\begin{eqnarray*}
|T_4|&=&\left|\int_{\mathbf R^n}R_z\partial_{\xi_j}zvdx\right|\le \int_{\mathbf R^n}|R_z| |\partial_{\xi_j}z||v|dx\\
&\le&C\int_{\mathbf R^n}\left||z_{\xi}+w|^{p-1}-|z_{\xi}|^{p-1}\right|\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\\
&&+C\int_{\mathbf R^n}|z_{\xi}|^{p-2}\cdot|\partial_{\xi_j}z_{\xi}|\cdot|w|\cdot|v|dx.
\end{eqnarray*}
Then, for $1<p\le 2$,
\begin{eqnarray*}
&&\int_{\mathbf R^n}\left||z_{\xi}+w|^{p-1}-|z_{\xi}|^{p-1}\right|\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\\
&\le&C\int_{\mathbf R^n}|w|^{p-1}\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\\
&\le& C\|w\|_{L^{p+1}}^{p-1}\|\partial_{\xi_j}z_{\xi}\|_{L^{p+1}}\|v\|_{L^{p+1}}\le C\|w\|_{s}^{p-1}\|v\|_{s},
\end{eqnarray*}
and, for $2<p<\frac{n+2s}{n-2s}$ (if $2<\frac{n+2s}{n-2s}$),
\begin{eqnarray*}
&&\int_{\mathbf R^n}\left||z_{\xi}+w|^{p-1}-|z_{\xi}|^{p-1}\right|\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\\
&\le& \int_{\mathbf R^n}(p-1)|z_{\xi}+\theta_3w|^{p-2}|w|\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\\
&\le& C\|z_{\xi}+\theta_3w\|_{L^{p+1}}^{p-2}\|\partial_{\xi_j}z_{\xi}\|_{L^{p+1}}\|w\|_{L^{p+1}}\|v\|_{L^{p+1}}\le C\|w\|_{s}\|v\|_{s}.
\end{eqnarray*}
Here $\theta_3\in[0,1]$.
Then we have that
\begin{equation*}
\int_{\mathbf R^n}\left||z_{\xi}+w|^{p-1}-|z_{\xi}|^{p-1}\right|\cdot|\partial_{\xi_j}z_{\xi}|\cdot|v|dx\le C\|w\|_{s}^{\sigma}\|v\|_{s},
\end{equation*}
where $\sigma=\min\{1,p-1\}$.
Furthermore, we estimate
\begin{eqnarray*}
&&\int_{\mathbf R^n}|z_{\xi}|^{p-2}\cdot|\partial_{\xi_j}z_{\xi}|\cdot|w|\cdot|v|dx\\
&\le&C\int_{\mathbf R^n}\left(\frac{1}{(1+|x-\xi|)^{n+2s}}\right)^{p-1}\cdot|w|\cdot|v|dx\\
&\le& C\|w\|_0\|v\|_0\le C\|w\|_{s}\|v\|_s.
\end{eqnarray*}
Therefore, we obtain
\begin{equation*}
|T_4|\le C\|w\|_{s}^{\sigma}\|v\|_{s},
\end{equation*}
where $\sigma=\min\{1,p-1\}$.
Summarizing the estimates for $T_1,T_2,T_3,T_4$, we get
\begin{equation*}
\|\hat{\mathcal L} \partial_{\xi_j}w\|_s\le C(\varepsilon|\nabla V(\varepsilon\xi)|+\|w\|_{s}^{\sigma}).
\end{equation*}
Then by Remark \ref{r:ws}, it holds that
\begin{equation*}
\|\hat{\mathcal L} \partial_{\xi_j}w\|_s\le C(\varepsilon|\nabla V(\varepsilon\xi)|+O(\varepsilon^2))^{\sigma}.
\end{equation*}
Thus, we finally obtain
\begin{equation*}
\|\nabla_{\xi}w\|_{s}\le C(\varepsilon|\nabla V(\varepsilon\xi)|+O(\varepsilon^2))^{\sigma}.
\end{equation*}
This completes the proof.
\end{proof}
\subsection{Analysis of $\Phi_{\varepsilon}(\xi)$}
In this subsection, we shall expand $\Phi_{\varepsilon}(\xi)$. By the definition, we have that
\begin{eqnarray*}
\Phi_{\varepsilon}(\xi)&=&\frac{1}{2}\|z_{\xi}+w(\varepsilon,\xi)\|_s^2+\frac{1}{2}\int_{\mathbf R^n}V(\varepsilon x)(z_{\xi}+w(\varepsilon,\xi))^2dx\\
&&-\frac{1}{p+1}\int_{\mathbf R^n}|z_{\xi}+w(\varepsilon,\xi)|^{p+1}dx.
\end{eqnarray*}
Since $(-\Delta)^sz_{\xi}+z_{\xi}+V(\varepsilon\xi)z_{\xi}=z_{\xi}^p$, it holds that
\begin{equation*}
\langle z_{\xi},w\rangle_{s}=-V(\varepsilon\xi)\int_{\mathbf R^n}z_{\xi}wdx+\int_{\mathbf R^n}z_{\xi}^pwdx.
\end{equation*}
Therefore, we can rewrite
\begin{eqnarray*}
\Phi_{\varepsilon}(\xi)&=&\left(\frac{1}{2}-\frac{1}{p+1}\right)\int_{\mathbf R^n}z^{p+1}dx+\frac{1}{2}\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z^2dx\\
&&+\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))zwdx+\frac{1}{2}\int_{\mathbf R^n}V(\varepsilon x)w^2dx\\
&&+\frac{1}{2}\|w\|_s^2-\frac{1}{p+1}\int_{\mathbf R^n}\left(|z+w|^{p+1}-z^{p+1}-(p+1)z^pw\right)dx.
\end{eqnarray*}
By the definition of $z(x)$ (see Subsection \ref{sb:cuf}), $z(x)=b(\varepsilon\xi)U(a(\varepsilon\xi )x)$ where $a(\varepsilon \xi)=(1+V(\varepsilon\xi))^{\frac{1}{2s}}$ and $b(\varepsilon \xi)=(1+V(\varepsilon\xi))^{\frac{1}{p-1}}$. Then we have that
\begin{equation*}
\int_{\mathbf R^n}z^{p+1}dx=C_0(1+V(\varepsilon\xi))^{\theta},
\end{equation*}
where $C_0=\int_{\mathbf R^n}U^{p+1}dx$ and $\theta=\frac{p+1}{p-1}-\frac{n}{2s}$. Let $C_1=\left(\frac{1}{2}-\frac{1}{p+1}\right)C_0$. Then
\begin{equation*}
\Phi_{\varepsilon}(\xi)=C_1(1+V(\varepsilon\xi))^{\theta}+\Gamma_{\varepsilon}(\xi)+\Psi_{\varepsilon}(\xi),
\end{equation*}
where
\begin{equation*}
\Gamma_{\varepsilon}(\xi)=\frac{1}{2}\int_{\mathbf R^n}[V(\varepsilon x)-V(\varepsilon\xi)]z^2dx+\int_{\mathbf R^n}[V(\varepsilon x)-V(\varepsilon\xi)]zwdx
\end{equation*}
and
\begin{eqnarray*}
\Psi_{\varepsilon}(\xi)&=&\frac{1}{2}\int_{\mathbf R^n}V(\varepsilon x)w^2dx+\frac{1}{2}\|w\|_s^2\\
&&-\frac{1}{p+1}\int_{\mathbf R^n}\left[|z+w|^{p+1}-z^{p+1}-(p+1)z^pw\right]dx.\notag
\end{eqnarray*}
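For completeness, the scaling identity for $\int_{\mathbf R^n}z^{p+1}\,dx$ used above can be verified directly with the substitution $y=a(\varepsilon\xi)x$:
\begin{align*}
\int_{\mathbf R^n}z^{p+1}\,dx
&= b(\varepsilon\xi)^{p+1}\int_{\mathbf R^n}U\bigl(a(\varepsilon\xi)x\bigr)^{p+1}\,dx
 = b(\varepsilon\xi)^{p+1}\,a(\varepsilon\xi)^{-n}\int_{\mathbf R^n}U(y)^{p+1}\,dy\\
&= (1+V(\varepsilon\xi))^{\frac{p+1}{p-1}}\,(1+V(\varepsilon\xi))^{-\frac{n}{2s}}\,C_0
 = C_0\,(1+V(\varepsilon\xi))^{\theta},
\end{align*}
in agreement with $\theta=\frac{p+1}{p-1}-\frac{n}{2s}$.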
\begin{lemma}
We have the following estimate:
\begin{equation*}
|\nabla \Psi_{\varepsilon}(\xi)|\le C \|w\|_s( \|w\|_s^{\sigma}+\|\nabla_{\xi}w\|_s).
\end{equation*}
\end{lemma}
\begin{proof}
A direct calculation yields, for $j=1,2,\cdots,n$,
\begin{multline}\label{e:psif}
\left|\partial_{\xi_j}\left( \frac{1}{2}\int_{\mathbf R^n}V(\varepsilon x)w^2dx+\frac{1}{2}\|w\|_s^2\right)\right|=\left|\int_{\mathbf R^n}V(\varepsilon x)w\partial_{\xi_j} wdx+\langle w,\partial_{\xi_j} w\rangle_s \right|\\
\le C(\|w\|_{0}\|\partial_{\xi_j}w\|_{0}+\|w\|_s\|\partial_{\xi_j}w\|_s)\le C(\|w\|_s\|\partial_{\xi_j}w\|_s).
\end{multline}
Estimate
\begin{eqnarray*}
&&\left|\partial_{\xi_j}\left(\frac{1}{p+1}\int_{\mathbf R^n}\left(|z+w|^{p+1}-z^{p+1}-(p+1)z^pw\right)dx\right)\right|\\
&=&\left|\int_{\mathbf R^n}\left(|z+w|^{p}(\partial_{\xi_j}z+\partial_{\xi_j}w)-z^p(\partial_{\xi_j}z+\partial_{\xi_j}w)-pz^{p-1}w\partial_{\xi_j}z\right)dx\right|\\
&=&\left|\int_{\mathbf R^n}\left(p|z+\theta_4w|^{p-1}w(\partial_{\xi_j}z+\partial_{\xi_j}w)-pz^{p-1}w\partial_{\xi_j}z\right)dx\right|\\
&=&\left|\int_{\mathbf R^n}\left(pw(|z+\theta_4w|^{p-1}-z^{p-1})\partial_{\xi_j}z+pw|z+\theta_4w|^{p-1}\partial_{\xi_j}w\right)dx\right|.
\end{eqnarray*}
Here $\theta_4\in[0,1]$.
Then, for $1<p\le 2$,
\begin{eqnarray*}
&&\left|\int_{\mathbf R^n}\left(pw(|z+\theta_4w|^{p-1}-z^{p-1})\partial_{\xi_j}zdx\right)\right|\\
&\le&\int_{\mathbf R^n}\left|pw^p\partial_{\xi_j}z\right|dx\le C\|\partial_{\xi_j}z\|_{L^{p+1}}\|w\|_{L^{p+1}}^p\le C\|w\|_s^p,
\end{eqnarray*}
and, for $2<p<\frac{n+2s}{n-2s}$ (if $2<\frac{n+2s}{n-2s}$),
\begin{eqnarray*}
&&\left|\int_{\mathbf R^n}\left(pw(|z+\theta_4w|^{p-1}-z^{p-1})\partial_{\xi_j}zdx\right)\right|\\
&\le& \int_{\mathbf R^n}\left|p(p-1)w^2|z+\theta_5w|^{p-2}\partial_{\xi_j}z\right|dx\\
&\le& C\|\partial_{\xi_j}z\|_{L^{p+1}}\,\|z+\theta_5w\|_{L^{p+1}}^{p-2}\,\|w\|_{L^{p+1}}^2\\
&\le& C\|\partial_{\xi_j}z\|_s\,\|z+\theta_5w\|_{s}^{p-2}\,\|w\|_{s}^2\le C\|w\|_{s}^2.
\end{eqnarray*}
Here $\theta_5\in [0,1]$.
Therefore,
\begin{equation*}
\left|\partial_{\xi_j}\left(\frac{1}{p+1}\int_{\mathbf R^n}\left(|z+w|^{p+1}-z^{p+1}-(p+1)z^pw\right)dx\right)\right|\le C\|w\|_s^{1+\sigma}.
\end{equation*}
Moreover,
\begin{equation*}
\left|\int_{\mathbf R^n}pw|z+\theta_4w|^{p-1}\partial_{\xi_j}wdx\right|\le C\|z+\theta_4w\|_{L^{p+1}}^{p-1}\|w\|_{L^{p+1}}\|\partial_{\xi_j}w\|_{L^{p+1}}
\le C\|w\|_{s}\|\partial_{\xi_j}w\|_{s}.
\end{equation*}
Therefore, we have that
\begin{equation*}
|\nabla \Psi_{\varepsilon}(\xi)|\le C \|w\|_s\left( \|w\|_s^{\sigma}+\|\nabla_{\xi}w\|_s\right).
\end{equation*}
This completes the proof.
\end{proof}
\begin{lemma}
It holds
\begin{equation}\label{e:gvz}
|\nabla \Gamma_{\varepsilon}(\xi)|\le C\varepsilon^{1+\sigma}.
\end{equation}
\end{lemma}
\begin{proof}
Compute
\begin{eqnarray*}
&&\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z^2dx\\
&=&\varepsilon\int_{\mathbf R^n}\nabla V(\varepsilon\xi)\cdot(x-\xi)z^2dx\\
&&+\varepsilon^2\int_{\mathbf R^n}D^2V(\varepsilon\xi+\theta_6\varepsilon(x-\xi))[x-\xi,x-\xi]z^2dx\\
&=&\varepsilon\int_{\mathbf R^n}\nabla V(\varepsilon\xi)\cdot y\,z^2(y)dy\\
&&+\varepsilon^2\int_{\mathbf R^n}D^2V(\varepsilon\xi+\theta_6\varepsilon(x-\xi))[x-\xi,x-\xi]z^2dx\\
&=&\varepsilon^2\int_{\mathbf R^n}D^2V(\varepsilon\xi+\theta_6\varepsilon(x-\xi))[x-\xi,x-\xi]z^2dx,
\end{eqnarray*}
where $\theta_6\in[0,1]$; the first-order term vanishes since, after the change of variables $y=x-\xi$, the integrand is odd in $y$ (recall that $z$ is radially symmetric about $\xi$).
Since $V\in C^3_b(\mathbf R^n)$, it holds that
\begin{eqnarray}\label{e:vzn1}
&&\left|\partial_{\xi_j}\left(\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))z^2dx\right)\right|\\
&=&\varepsilon^2\left|\partial_{\xi_j}\left(\int_{\mathbf R^n}D^2V(\varepsilon\xi+\theta_6\varepsilon(x-\xi))[x-\xi,x-\xi]z^2dx\right)\right|
\le C\varepsilon^2.\notag
\end{eqnarray}
Estimate
\begin{eqnarray*}
&&\left|\partial_{\xi_j}\int_{\mathbf R^n}[V(\varepsilon x)-V(\varepsilon\xi)]zwdx\right|\\
&\le&\varepsilon|\nabla V(\varepsilon\xi)|\int_{\mathbf R^n}|zw|dx+\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)||\partial_{\xi_j}z||w|dx\\
&&+\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)||z||\partial_{\xi_j}w|dx\\
&\le&\varepsilon|\nabla V(\varepsilon\xi)|\|w\|_0+\left(\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)|^2|\partial_{\xi_j}z|^2dx\right)^{\frac{1}{2}}\|w\|_0\\
&&+\left(\int_{\mathbf R^n}|V(\varepsilon x)-V(\varepsilon\xi)|^2|z|^2dx\right)^{\frac{1}{2}}\|\partial_{\xi_j}w\|_0.
\end{eqnarray*}
Thus by Lemma \ref{l:vxz}, Remark \ref{r:ws} and Lemma \ref{l:nws}, we have that
\begin{equation}\label{e:vzn2}
\left|\nabla\left(\int_{\mathbf R^n}(V(\varepsilon x)-V(\varepsilon\xi))zwdx\right)\right|\le C\varepsilon(\varepsilon+\|w\|_s+\|\nabla_{\xi} w\|_s)\le C \varepsilon^{1+\sigma}.
\end{equation}
Therefore, from Estimates (\ref{e:vzn1}) and (\ref{e:vzn2}), Equation (\ref{e:gvz}) holds.
\end{proof}
Let $\alpha(\varepsilon,\xi)=\theta C_1(1+V(\varepsilon\xi))^{\theta-1}$, where $\theta=\frac{p+1}{p-1}-\frac{n}{2s}$. Then, summarizing all the conclusions above, we obtain the following proposition.
\begin{prop}\label{p:prv}
It holds
\begin{equation*}
\nabla\Phi_{\varepsilon}(\xi)=\alpha(\varepsilon,\xi)\,\varepsilon \nabla V(\varepsilon\xi)+\varepsilon^{1+\sigma}\varpi_{\varepsilon}(\xi),
\end{equation*}
where $\varpi_{\varepsilon}(\xi)$ is a bounded function and $\sigma=\min\{1,p-1\}$.
\end{prop}
\begin{remark}\label{r:prv}
Using a similar argument, we can prove that
\begin{equation*}
\Phi_{\varepsilon}(\xi)=C(1+V(\varepsilon\xi))^{\theta}+\gamma_{\varepsilon}(\xi),
\end{equation*}
where $C>0$, $\theta=\frac{p+1}{p-1}-\frac{n}{2s}$ and $|\gamma_{\varepsilon}(\xi)|\le C(\varepsilon |\nabla V(\varepsilon\xi)|+\varepsilon^2)$.
\end{remark}
{"url":"http:\/\/gsocspecfun.blogspot.com\/2017\/08\/","text":"## luned\u00ec 28 agosto 2017\n\n### Final Resume\n\nSummary During the GSoC I worked on different special functions that needed to be improved or implemented from scratch. Discussing with my mentors and the community, we decided that my work should be pushed on a copy of the scource code of Octave on my repository [1] and then I should have work with different bookmarks for each function I had to work on. When different functions happened to be related (e.g. gammainc and gammaincinv), I worked on these on the same bookmark. I present now a summary and the bookmarks related to the functions.\n\n## Incomplete gamma function\n\nbookmark: gammainc\nfirst commit: d1e03faf080b\nlast commit: 107dc1d24c1b\nremoved files:\/libinterp\/corefcn\/gammainc.cc, \/liboctave\/external\/slatec-fn\/dgami.f, \/liboctave\/external\/slatec-fn\/dgamit.f, \/liboctave\/external\/slatec-fn\/gami.f, \/liboctave\/external\/slatec-fn\/gamit.f, \/liboctave\/external\/slatec-fn\/xdgami.f, \/liboctave\/external\/slatec-fn\/xdgamit.f, \/liboctave\/external\/slatec-fn\/xgmainc.f, \/liboctave\/external\/slatec-fn\/xsgmainc.f\nmodified files: NEWS, \/doc\/interpreter\/arith.txi, \/libinterp\/corefcn\/module.mk, \/liboctave\/external\/slatec-fn\/module.mk, \/liboctave\/numeric\/lo-specfun.cc, \/scripts\/specfun\/module.mk\n\n### Summary of the work\n\nOn this bookmark I worked on the incomplete gamma function and its inverse.\nThe incomplete gamma function gammainc had both missing features (it were missed the \"scaled\" options) and some problem of inaccurate result type (see bug # 47800). Part of the work was already been done by Marco and Nir, I had to finish it. We decided to implement it as a single .m file (gammainc.m) which call (for some inputs) a subfunction written in C++ (__gammainc_lentz__.cc).\nThe inverse of the incomplete gamma function was missing in Octave (see bug # 48036). 
I implemented it as a single .m file (gammaincinv.m) which uses a Newton method.

## Bessel functions

bookmark: bessel
first commit: aef0656026cc
last commit: e9468092daf9
modified files: /liboctave/external/amos/README, /liboctave/external/amos/cbesh.f, /liboctave/external/amos/cbesi.f, /liboctave/external/amos/cbesj.f, /liboctave/external/amos/cbesk.f, /liboctave/external/amos/zbesh.f, /liboctave/external/amos/zbesi.f, /liboctave/external/amos/zbesj.f, /liboctave/external/amos/zbesk.f, /liboctave/numeric/lo-specfun.cc, /scripts/specfun/bessel.m

### Summary of the work

In this bookmark I worked on the Bessel functions.
There was a bug reporting NaN as output when the argument $x$ was too large in magnitude (see bug #48316). The problem came from the Amos library, which refuses to compute the output in such cases. I started by "unlocking" this library, so that the output is computed even when the argument is over the limit set by the library. Then I compared the results with other libraries (e.g. Cephes [2], the GNU Scientific Library [3] and the C++ special function library [4]) and with some implementations of my own.
In the end, I found that the "unlocked" Amos routines were the best ones to use, so we decided to keep them (in the "unlocked" form), modifying the error variable to report the loss of accuracy.

## Incomplete beta function

bookmark: betainc
first commit: 712a069d2860
last commit: e0c0dd40f096
removed files: /libinterp/corefcn/betainc.cc, /liboctave/external/slatec-fn/betai.f, /liboctave/external/slatec-fn/dbetai.f, /liboctave/external/slatec-fn/xbetai.f, /liboctave/external/slatec-fn/xdbetai.f
modified files: /libinterp/corefcn/module.mk, /liboctave/external/slatec-fn/module.mk, /liboctave/numeric/lo-specfun.cc, /liboctave/numeric/lo-specfun.h, /scripts/specfun/module.mk, /scripts/statistics/distributions/betainv.m, /scripts/statistics/distributions/binocdf.m

### Summary of the work

In this bookmark I worked on the incomplete beta function and its inverse.
The incomplete beta function lacked the "upper" version and had reported bugs on input validation (see bug #34405) and inaccurate results (see bug #51157). We decided to rewrite it from scratch. It is now implemented as a single .m file (betainc.m) which performs the input validation; the output is then computed using a continued fraction evaluation, done by a C++ function (__betainc_lentz__.cc).
The inverse was present in Octave but lacked the "upper" version (since it was missing in betainc itself). The function is now written as a single .m file (betaincinv.m) which implements a Newton method whose initial guess is computed by a few steps of the bisection method.

## Integral functions

bookmark: expint
first commit: 61d533c7d2d8
last commit: d5222cffb1a5
modified files: /doc/interpreter/arith.txi, /libinterp/corefcn/module.mk, /scripts/specfun/expint.m, /scripts/specfun/module.mk

### Summary of the work

In this bookmark I worked on the exponential integral, sine integral and cosine integral.
I had already rewritten the exponential integral before the GSoC. Here I just moved the Lentz algorithm to an external C++ function (__expint_lentz__.cc), in accordance with gammainc and betainc. I also modified the exit criterion for the asymptotic expansion, using [5] (pages 1--4) as reference.
The functions sinint and cosint were present only in the symbolic package of Octave; a numerical implementation was missing in the core. I wrote them as .m files (sinint.m and cosint.m). Both codes use the series expansion near the origin and relations with expint for the other values.

## To do

There is still room for improvement for some of the functions I wrote. In particular, gammainc can be improved in accuracy for certain pairs of values, and I would like to make a template version of the various Lentz algorithms in C++ so as to avoid code duplication in the functions.
In October I will start a PhD in Computer Science, still here in Verona. This will allow me to stay in contact with my mentor Marco Caliari, so that we can keep working on these aspects.

[1] https://bitbucket.org/M_Ginesi/octave
[2] http://www.netlib.org/cephes/
[3] https://www.gnu.org/software/gsl/
[4] http://en.cppreference.com/w/cpp/numeric/special_math
[5] N. Bleistein and R.A. Handelsman, "Asymptotic Expansions of Integrals", Dover Publications, 1986.

## Saturday, 19 August 2017

### Integral functions

During the last week I made a few modifications to expint.m and wrote sinint.m and cosint.m from scratch. All the work done can be found in the bookmark expint of my repository.

## expint

As I mentioned before, I rewrote expint.m from scratch before the GSoC.
During the last week I moved the Lentz algorithm to a .cc function (in order to stay consistent with the implementations of gammainc and betainc) and added a few tests.

## sinint

The sinint function is present in the symbolic package, but a numerical implementation is missing in the core.
The sine integral is defined as $$\text{Si} (z) = \int_0^z \frac{\sin(t)}{t}\,dt.$$ To compute it we use the series expansion $$\text{Si}(z) = \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{(2n+1)(2n+1)!}$$ when the modulus of the argument is smaller than 2. For bigger values we use the following relation with the exponential integral $$\text{Si}(z) = \frac{1}{2i} (E_1(iz)-E_1(-iz)) + \frac{\pi}{2},\quad |\text{arg}(z)| < \frac{\pi}{2}$$ and the following symmetry relations $$\text{Si}(-z) = -\text{Si}(z),$$ $$\text{Si}(\bar{z}) = \overline {\text{Si}(z)}.$$ The function is written as a single .m file.

## cosint

Like sinint, cosint is present in the symbolic package, but a numerical implementation is missing in the core.
The cosine integral is defined as $$\text{Ci} (z) = -\int_z^\infty \frac{\cos(t)}{t}\,dt.$$ An equivalent definition is $$\text{Ci} (z) = \gamma + \log z + \int_0^z \frac{\cos t - 1}{t}\,dt.$$ To compute it we use the series expansion $$\text{Ci}(z) = \gamma + \log z + \sum_{n=1}^\infty \frac{(-1)^n z^{2n}}{(2n)(2n)!}$$ when the modulus of the argument is smaller than 2.
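The two series above (intended for small arguments) can be sketched in Python as follows; this is a hypothetical illustration with my own function names and tolerances, not the Octave code:

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def si(z, tol=1e-16):
    """Sine integral via its Maclaurin series (intended for |z| < 2)."""
    # Si(z) = sum_{n>=0} (-1)^n z^(2n+1) / ((2n+1) (2n+1)!)
    term = z            # (-1)^n z^(2n+1) / (2n+1)!  at n = 0
    total = term        # term / (2n+1) with 2n+1 = 1
    n = 1
    while abs(term) > tol:
        term *= -z * z / ((2 * n) * (2 * n + 1))
        total += term / (2 * n + 1)
        n += 1
    return total

def ci(z, tol=1e-16):
    """Cosine integral via its series (intended for 0 < z < 2)."""
    # Ci(z) = gamma + log z + sum_{n>=1} (-1)^n z^(2n) / ((2n) (2n)!)
    total = EULER_GAMMA + math.log(z)
    term = 1.0          # (-1)^n z^(2n) / (2n)!  at n = 0
    n = 1
    while True:
        term *= -z * z / ((2 * n - 1) * (2 * n))
        total += term / (2 * n)
        if abs(term) < tol:
            break
        n += 1
    return total
```

Note that the odd symmetry $\text{Si}(-z)=-\text{Si}(z)$ holds automatically here, since the series contains only odd powers.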
For bigger values we use the following relation with the exponential integral $$\text{Ci}(z) = -\frac{1}{2} (E_1(iz)+E_1(-iz)),\quad |\text{arg}(z)| < \frac{\pi}{2}$$ and the following symmetry relations $$\text{Ci}(-z) = \text{Ci}(z) -i\pi,\quad 0<\text{arg}(z)<\pi,$$ $$\text{Ci}(\bar{z}) = \overline{\text{Ci}(z)}.$$ As for sinint, cosint is written as a single .m file.

## Saturday, 12 August 2017

### betaincinv

The inverse of the incomplete beta function was present in Octave, but without the "upper" option (since it was missing in betainc itself). We decided to rewrite it from scratch using a Newton method, as for gammaincinv (see my post on it if you are interested).
To make the code numerically more accurate, we decide which version ("lower" or "upper") to invert depending on the inputs.
First we compute the trivial values (0 and 1). Then the remaining entries are divided into two sets: those that will be inverted with the "lower" version, and those that will be inverted with the "upper" one. In both cases, we perform 10 iterations of the bisection method and then a Newton method.
The implementation (together with the new implementation of betainc) can be found in my repository, bookmark "betainc".

## Wednesday, 2 August 2017

### betainc

The betainc function has two bugs reported: #34405 on the input validation and #51157 on inaccurate results. Moreover, it is missing the "upper" version, which is present in MATLAB.

# The function

The incomplete beta function ratio is defined as $$I_x(a,b) = \dfrac{B_x(a,b)}{B(a,b)},\quad 0\le x \le 1,\,a>0,\,b>0,$$ where $B(a,b)$ is the classical beta function and $$B_x(a,b)=\int_0^x t^{a-1}(1-t)^{b-1}\,dt.$$ In the "upper" version the integral goes from $x$ to $1$.
To compute this we will use the fact that $$\begin{array}{rcl} I_x(a,b) + I_x^U(a,b) &=& \dfrac{1}{B(a,b)}\left( \int_0^x t^{a-1}(1-t)^{b-1}\,dt + \int_x^1 t^{a-1}(1-t)^{b-1}\,dt\right)\\ &=&\dfrac{1}{B(a,b)}\int_0^1 t^{a-1}(1-t)^{b-1}\,dt\\ &=&\dfrac{B(a,b)}{B(a,b)}\\ &=&1 \end{array}$$ and the relation $$I_x(a,b) + I_{1-x}(b,a) = 1,$$ so that $$I_x^U(a,b) = I_{1-x}(b,a).$$

# The implementation

Even though it is possible to obtain a Taylor series representation of the incomplete beta function, it seems not to be used. Indeed the MATLAB help cites only the continued fraction representation present in the "Handbook of Mathematical Functions" by Abramowitz and Stegun: $$I_x(a,b) = \dfrac{x^a(1-x)^b}{aB(a,b)}\left(\dfrac{1}{1+} \dfrac{d_1}{1+} \dfrac{d_2}{1+}\ldots\right)$$ with $$d_{2m+1} = -\dfrac{(a+m)(a+b+m)}{(a+2m)(a+2m+1)}x$$ and $$d_{2m} = \dfrac{m(b-m)}{(a+2m-1)(a+2m)}x,$$ which seems to be the same strategy used by the GSL. To be more precise, this continued fraction is computed directly when $$x<\dfrac{a-1}{a+b-2};$$ otherwise, the computed fraction is used to evaluate $I_{1-x}(b,a)$, and then one uses the fact that $$I_x(a,b) = 1-I_{1-x}(b,a).$$ In my implementation I use a continued fraction present in the "Handbook of Continued Fractions for Special Functions" by Cuyt, Petersen, Verdonk, Waadeland and Jones, which is more complicated but converges in fewer steps: $$\dfrac{B(a,b)I_x(a,b)}{x^a(1-x)^b} = \mathop{\huge{\text{K}}}_{m=1}^\infty \left(\dfrac{\alpha_m(x)}{\beta_m(x)}\right),$$ where $$\begin{array}{rcl} \alpha_1(x) &=&1,\\ \alpha_{m+1}(x) &=&\dfrac{(a+m-1)(a+b+m-1)(b-m)m}{(a+2m-1)^2}x^2,\quad m\geq 1,\\ \beta_{m+1}(x) &=&a + 2m + \left( \dfrac{m(b-m)}{a+2m-1} - \dfrac{(a+m)(a+b+m)}{a+2m+1} \right)x,\quad m\geq 0. \end{array}$$ This is most useful when $$x\leq\dfrac{a}{a+b};$$ thus, the continued fraction is computed directly when this condition is satisfied, while it is used to evaluate $I_{1-x}(b,a)$ otherwise.
The function is now written as a .m file, which checks the validity of the inputs and splits them into the values which need to be rescaled and those which do not. Then the continued fraction is computed by an external .c function. Finally, the .m file recovers $I_x(a,b)$.

# betaincinv

The next step will be to write the inverse. It was already present in Octave, but is missing the upper version, so it has to be rewritten.
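To make the continued-fraction recipe concrete, here is a hypothetical Python sketch of $I_x(a,b)$ based on the Abramowitz and Stegun coefficients $d_n$ quoted above, evaluated with a modified Lentz iteration. Note that I use the common switch point $x < (a+1)/(a+b+2)$ rather than the thresholds quoted in the post; all names and tolerances are mine:

```python
import math

def _beta_cf(x, a, b, tol=1e-15, max_iter=500):
    """Modified Lentz evaluation of 1 + d1/(1 + d2/(1 + ...)) (A&S 26.5.8)."""
    tiny = 1e-300
    f, c, d = 1.0, 1.0, 0.0
    for n in range(1, max_iter + 1):
        m = n // 2
        if n % 2 == 1:   # n = 2m + 1
            dn = -(a + m) * (a + b + m) * x / ((a + 2 * m) * (a + 2 * m + 1))
        else:            # n = 2m
            dn = m * (b - m) * x / ((a + 2 * m - 1) * (a + 2 * m))
        d = 1.0 + dn * d
        if abs(d) < tiny:
            d = tiny
        d = 1.0 / d
        c = 1.0 + dn / c
        if abs(c) < tiny:
            c = tiny
        delta = c * d
        f *= delta
        if abs(delta - 1.0) < tol:
            break
    return f

def betainc(x, a, b):
    """Regularized (lower) incomplete beta function I_x(a, b), 0 <= x <= 1."""
    if x == 0.0 or x == 1.0:
        return x
    # log of x^a (1-x)^b / B(a, b)
    log_pre = (a * math.log(x) + b * math.log1p(-x)
               + math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b))
    if x < (a + 1.0) / (a + b + 2.0):
        return math.exp(log_pre) / (a * _beta_cf(x, a, b))
    # otherwise use I_x(a, b) = 1 - I_{1-x}(b, a)
    return 1.0 - math.exp(log_pre) / (b * _beta_cf(1.0 - x, b, a))
```

The identity $I_x(a,b)+I_{1-x}(b,a)=1$ from the post gives a convenient consistency check for the sketch.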
{"url":"https:\/\/dml.cz\/handle\/10338.dmlcz\/147454","text":"# Article\n\nFull entry | PDF \u00a0 (0.3 MB)\nKeywords:\nDiophantine equation $x^{n} + y^{n} = \\lowercase {n!} z^{n}$; Diophantine equation $x^{3} + y^{3} = \\lowercase {3!} z^{3}$; unsolved problems; number theory.\nSummary:\nIn p.~219 of R.K. Guy's \\emph {Unsolved Problems in Number Theory}, 3rd edn., Springer, New York, 2004, we are asked to prove that the Diophantine equation $x^{n} + y^{n} = \\lowercase {n!} z^{n}$ has no integer solutions with $n\\in \\mathbb {N_{+}}$ and $n>2$. But, contrary to this expectation, we show that for $n = 3$, this equation has infinitely many primitive integer solutions, i.e.~the solutions satisfying the condition $\\gcd (x, y, z)=1$.\nReferences:\n[1] Elkies, N. D.: Wiles minus epsilon implies Fermat. Elliptic Curves, Modular Forms & Fermat's Last Theorem, 1995, 38-40, Ser. Number Theory, I, Internat. Press, Cambridge MA.. MR\u00a01363494\n[2] Erd\u00f6s, P., Obl\u00e1th, R.: \u00dcber diophantische Gleichungen der form $n! = x^p \\pm y^p$ and $n! \\pm m! = x^p$. Acta Litt. Sci. Szeged, 8, 1937, 241-255,\n[3] Guy, R. K.: Unsolved Problems in Number Theory. 2004, Springer Science+Business Media, Inc., New York, Third Edition.. MR\u00a02076335 | Zbl\u00a01058.11001\n[4] Ribet, K.: On modular representations of Gal($\\overline {\\mathbb Q}\\setminus \\mathbb {Q}$) arising from modular forms. Invent. 
Math., 100, 1990, 431-476, DOI\u00a010.1007\/BF01231195 | MR\u00a01047143\n\nPartner of","date":"2021-05-15 12:00:14","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9114200472831726, \"perplexity\": 3372.6430372132454}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-21\/segments\/1620243991801.49\/warc\/CC-MAIN-20210515100825-20210515130825-00339.warc.gz\"}"} | null | null |
When is College Football Championship Game in 2017?
The 2017 College Football Championship Game is on Monday, January 9th.
What is College Football Championship Game?
The College Football Championship Game, which replaced the BCS National Championship, is the third and final game of the College Football Playoff (CFP) system that began in 2014. The winner of this game is crowned the Football Bowl Subdivision (FBS) national champion. Under this system, the top four teams play in two semifinal games to determine which two teams will compete in the championship game.
Six bowl games, including the Rose, Sugar, Orange, Cotton, Fiesta, and Peach Bowls, form a three-year rotation for hosting the semifinal games. The rotation pairs are Rose/Sugar, Orange/Cotton, and Fiesta/Peach. These six bowl games are known as the "New Year's Six" because they are all played on two consecutive days that usually include New Year's Day.
The two winners of the semifinal games compete in the College Football Championship Game. This game occurs on the first Monday that is at least six days after the semifinal games, which are typically played on January 1st. The host of the championship game is chosen through a bidding process similar to that for the Super Bowl or the NCAA Final Four (basketball). The winner of the game receives the College Football Playoff National Championship Trophy, which is sponsored by Dr. Pepper.
This new system does not use computer rankings, as the BCS did, but rather a committee of thirteen experts to rank and seed the teams. It was developed out of a growing desire to find an alternative to the BCS after the 2003 and 2004 seasons ended in controversy. At a minimum, the College Football Playoff system will run through the 2025 season and will be aired on the ESPN networks for those twelve years.
Ihsahn (born Vegard Sverre Tveitan in Notodden on 10 October 1975) is a Norwegian composer, keyboardist, guitarist and vocalist. He is best known for his work with the black metal band Emperor. He played with Thou Shalt Suffer until 4 March 2006, and with Peccatum together with his wife Ihriel (pseudonym of Heidi S. Tveitan), who is the sister of the keyboardist and vocalist of the band Leprous, which once served as Ihsahn's backing band and today pursues a career independent of him, although he still collaborates with them occasionally, having also produced the album Coal (2013). He currently devotes himself to his solo career. In his earliest appearances he used the pseudonym Ygg. He is currently endorsed by Ibanez and Line 6.
Discography
Emperor
1994 - In the Nightside Eclipse
1997 - Anthems to the Welkin at Dusk
1999 - IX Equilibrium
2001 - Prometheus: The Discipline of Fire & Demise
Solo
2006 - The Adversary
2008 - angL
2010 - After
2012 - Eremita
2013 - Das Seelenbrechen
2016 - Arktis
2018 - Àmr
Thou Shalt Suffer
2000 - Somnium
Hardingrock
2007 - Grimen
Peccatum
1999 - Strangling from Within
2000 - Amor Fati
2004 - Lost in Reverie
\section{Introduction}
Until about a decade ago detailed analysis of the photospheric and
wind properties of O-type stars was limited to about 40 to 50 stars
divided over the Galaxy and the Magellanic Clouds (see e.g.\
\citealt{puls96}; see also \citealt*{repolust04}). The reason that at
that time only such a limited number of objects had been investigated
is related in part to the fact that considerable effort was directed
towards improving the physics of the non-local thermodynamic
equilibrium (non-LTE) model atmospheres used to analyse massive
stars. Notable developments have been the improvements in the atomic
models \citep[e.g.][]{becker92}, shock treatment \citep{pauldrach01},
clumping \citep{hillier91, hillier99}, and the implementation of line
blanketing \citep[e.g.][]{hubeny95, hillier98, pauldrach01}. To study
the effects of these new physics a core sample of ``standard'' O-type
stars has been repeatedly re-analysed. A second reason, that is at
least as important, is the complex, and time and CPU intensive nature
of these quantitative spectroscopic analyses. Typically, at least a
six dimensional parameter space has to be probed, i.e.\ effective
temperature, surface gravity, helium to hydrogen ratio, atmospheric
microturbulent velocity, mass-loss rate, and a measure of the
acceleration of the transonic outflow. Rotational velocities and
terminal outflow velocities can be determined to considerable accuracy
by means of external methods such as rotational (de-) convolution
methods \citep[e.g.][]{howarth97} and SEI-fitting of P-Cygni lines
\citep[e.g.][]{groenewegen89}, respectively. To get a good spectral
fit it typically requires tens, sometimes hundreds of models per
individual star.
In the last few years the field of massive stars has seen the
fortunate development that the number of O-type stars that have been
studied spectroscopically has been doubled \citep[e.g.][]{crowther02,
herrero02, bianchi02, bouret03, hillier03, garcia04, martins04,
massey04, evans04}. The available data set of massive O- and early
B-type stars has recently {\em again} been doubled, mainly through the
advent of multi-object spectroscopy. Here we explicitly mention the
{\em VLT-FLAMES Survey of Massive Stars} \citep{evans05} comprising
over 100 hours of VLT time. In this survey multi-object spectroscopy
using the {\em Fibre Large Array Multi-Element Spectrograph} (FLAMES)
has been used to secure over 550 spectra (of which in excess of 50 are
spectral type O) in a total of seven clusters distributed over the
Galaxy and the Magellanic Clouds.
This brings within reach different types of studies that so far could
only be attempted with a troublingly small sample of stars. These
studies include establishing the mass loss behaviour of Galactic stars
across the upper Hertzsprung-Russell diagram, from the weak winds of
the late O-type dwarfs (of order $10^{-8}$~\mbox{$M_{\sun}{\rm yr}^{-1}$}) to the very strong
winds of early O-type supergiants (of order $10^{-5}$~\mbox{$M_{\sun}{\rm yr}^{-1}$});
determination of the mass-loss versus metallicity dependence in the
abundance range spanned by Small Magellanic Cloud to Galactic stars;
placing constraints on the theory of massive star evolution by
comparing spectroscopic mass determinations and abundance patterns
with those predicted by stellar evolution computations, and the study
of (projected) spatial gradients in the mass function of O- and B-type
stars in young clusters, as well as such spatial gradients in the
initial atmospheric composition of these stars.
To best perform studies such as listed above not only requires a large
set of young massive stars, it also calls for a robust, homogeneous
and objective means to analyse such datasets using models that include
state-of-the-art physics. This essentially requires an automated
fitting method. Such an automated method should not only be fast, it
must also be sufficiently flexible to be able to treat early-type
stars with widely different properties (e.g. mass-loss rates that
differ by a factor of $10^{3}$). Moreover, it should apply a well
defined fitting criterion, such as a $\chi^{2}$ criterion, allowing it to
work in an automated and reproducible way.
To cope with the dataset provided by the {\em VLT-FLAMES Survey} and
to improve the objectivity of the analysis, we have investigated the
possibility of automated fitting. Here we present a robust, fast, and
accurate method to perform automated fitting of the continuum
normalized spectra of O- and early B-type stars with stellar winds
using the fast performance stellar atmosphere code {\sc fastwind}\
\citep{puls05} combined with a genetic algorithm based fitting method.
This first implementation of an automated method should therefore be
seen as an improvement over the standard ``by eye'' method, and not as
a replacement of this method. The improvement lies in the fact that
with the automated method large data sets (tens or more stars),
spanning a wide parameter space, can be analysed in a repeatable and
homogeneous way. It does not replace the ``by eye'' method as our
automated fitting method still requires a by eye continuum
normalization as well as a human controlled line selection. This
latter should address the identification and exclusion of lines that
are not modeled (i.e.\ blends), as well as introduce information on
lacking physics and/or possible or potential problems in the model
atmosphere code. Future implementations of an automated fitting
method may use the absolute spectrum, preferably over a broad
wavelength range. This would eliminate the continuum rectification
problem, however, it will require a modeling of the interstellar
extinction. In this way one can work towards a true replacing of the
``by eye'' method by an automated approach.
In Sect.~\ref{sec:auto_fit} we describe the genetic algorithm method
and implementation, and we provide a short r\'{e}sum\'{e} of the
applied unified, non-LTE, line-blanketed atmosphere code {\sc fastwind}\ --
which is the only code to date for which the method described here is
actually achievable (in the context of analysing large data sets). To
test the method we analyse a set of 12 early type spectra in
Sect.~\ref{sec:spec-analysis}. We start with a re-analysis of a set of
seven stars in the open cluster \object{Cyg~OB2} that have been studied
by \cite{herrero02}. The advantage of focusing on this cluster is that
it has been analysed with a previous version of {\sc fastwind}, allowing
for as meaningful a comparison as is possible, while still satisfying
our preference to present a state-of-the-art analysis. The analysis of
Cyg~OB2\ has the added advantage that all stars studied are
approximately equidistant. To test the performance of our method
outside the parameter range offered by the Cyg~OB2\ sample we have
included an additional five well-studied stars with either low density
winds and/or very high rotational velocities. In
Sect.~\ref{sec:errors} we describe our error analysis method for the
multidimensional spectral fits obtained with the automated method. A
systematic comparison of the obtained parameters with previously
determined values is given in Sect.~\ref{sec:comp}. Implications of
the newly obtained parameters on the properties of massive stars are
discussed in Sect.~\ref{sec:implic}. In the last section we give our
conclusions.
\section{Automated fitting using a genetic algorithm}
\label{sec:auto_fit}
\subsection{Spectral line fitting as an optimization problem}
\label{sec:opt_prob}
Spectral line fitting of early-type stars is an optimization problem
in the sense that one tries to maximize the correspondence between a
given observed spectrum and a synthetic spectrum produced by a stellar
atmosphere model. Formally speaking one searches for the global
optimum, i.e.\ best fit, in the parameter space spanned by the free
parameters of the stellar atmosphere model by minimizing the
differences between the observed and synthesized line profiles.
Until now the preferred method to achieve this minimization has been
the so called fitting ``by eye'' method. In this method the best fit
to the observed spectrum of a certain object is determined in an
iterative manner. Starting with a first guess for the model parameters
a spectrum is synthesized. The quality of the fit to the observed
spectrum is determined, as is obvious from the methods name, by an
inspection by eye. Based on what the person performing the fit sees,
for instance, whether the width of the line profiles are reproduced
correctly, combined with his/her experience and knowledge of the
model and the object, the model parameters are modified and a new
spectrum is synthesized. This procedure is repeated until the quality
of the fit determined by eye cannot be increased anymore by modifying
the model parameters.
It can be questioned whether a fit constructed with the fitting ``by
eye'' method corresponds to the best fit possible, i.e.\ the global
optimum. Reasons for this are, {\em i)} the restricted size of
parameter space that can be investigated, both in terms of number of
free parameters as well as absolute size of the parameter domain that
can be investigated with high accuracy, {\em ii)} the limited number of
free parameters that are changed simultaneously, and {\em iii)} biases
introduced by judging the quality of a line fit by eye. The importance
of the first point lies in the fact that in order to assure that the
global optimum is found, a parameter space that is as large as
possible should be explored with the same accuracy for all parameters
in the complete parameter space. If this is not the case the solution
found will likely correspond to a local optimum.
The argument above becomes stronger in view of the second point.
Spectral fitting is a multidimensional problem in which the line
profile shapes depend on all free parameters simultaneously, though to
a different extent. Consequently, the global optimum can only be
found if all parameters are allowed to vary at the same time. The use
of fit diagrams \cite[e.g.][]{kudritzki78, herrero92} does not resolve
this issue. These diagrams usually only take variations in \mbox{$T_{\rm eff}$}\ and
\mbox{$\log{{g}}$}\ into account, neglecting the effects of other parameters, like
microturbulence \cite[e.g.][]{smith98, villamariz00} and mass loss
(e.g.\ Fig.~5 of \citealt{mokiem04}), on the line profiles.
The last point implies that, strictly speaking, fitting ``by eye''
cannot work in a reproducible way. There is no uniform well defined
method to judge how well a synthetic line profile fits the data by
eye. More importantly, it implies that there is no guarantee that the
synthetic line profiles selected by the eye, correspond to the
profiles which match the data the best. This predominantly increases
the uncertainty in the derivation of those parameters that very
sensitively react to the line profile shape, like for instance the
surface gravity.
The new fitting method presented here does not suffer from the
drawbacks discussed above. It is an automated method capable of global
optimization in a multi-dimensional parameter space of arbitrary size
(Sect.~\ref{sec:fit-param}). As it is automated, it does not require
any human intervention in finding the best fit, avoiding potential
biases introduced by ``by eye'' interpretations of line profiles. The
method described here consists of two main components. The first
component is the non-LTE stellar atmosphere code
{\sc fastwind}. Section~\ref{sec:FastWind} gives an overview of the
capabilities of the code and the assumptions involved. The second
component is the genetic algorithm (GA) based optimizing routine
{\sc pikaia}\ from \cite{charbonneau95}, which is responsible for
optimizing the parameters of the {\sc fastwind}\ models. For the technical
details of this routine and more information on GAs we refer to the
cited paper and references therein. Here we will suffice with a short
description of GAs and a description of the GA implementation with
respect to optimization of spectral fits.
\subsection{The genetic algorithm implementation}
Genetic algorithms represent a class of heuristic optimization
techniques, which are inspired by the notion of evolution by means of
natural selection \citep{darwin1859}. They provide a method of solving
optimization problems by incorporating this biological notion in a
numerical fashion. This is achieved by evolving the global solution
over subsequent generations starting from a set of randomly guessed
initial solutions, so called individuals. Selection pressure is
imposed in between generations based on the quality of the solutions,
their so called fitness. A higher fitness implies a higher probability
the solution will be selected for reproduction. Consequently, only a
selected set of individuals will pass on their ``genetic material'' to
subsequent new generations.
To create the new generations discussed above GAs require a
reproduction mechanism. In its most basic form this mechanism consists
of two genetic operators. These are the crossover operator, simulating
sexual reproduction, and the mutation operator, simulating copying
errors and random effects affecting a gene in isolation. An important
benefit of these two operators is the fact that they also introduce
new genetic material into the population. This allows the GA to
explore new regions of parameters space, which is important in view of
the existence of local extremes. When the optimization runs into a
local optimum, these two operators, where usually mutation has the
strongest effect, allow for the construction of individuals outside of
this optimum, thereby allowing it to find a path out of the local
optimum. This capability to escape local extremes, consequently,
classifies GAs as global optimizers and is one of the reasons they
have been applied to many problems in and outside astrophysics
\citep[e.g.][]{metcalfe00, gibson98}.
Using an example we can further illustrate the GA optimization
technique. Let us assume that the optimization problem is the
minimization of some function $f$. This function has $n$ variables,
serving as the genetic building blocks, spanning a $n$ dimensional
parameter space. The first step in solving this problem is to create
an initial population of individuals, which are sets of $n$
parameters, randomly distributed in parameter space. For each of these
individuals the quality of their solution is determined by simply
calculating $f$ for the specific parameter values. Now selection
pressure is imposed and the fittest individuals, i.e.\ those that
correspond to the lowest values of $f$, are selected to construct a
new generation. As the selected individuals represent the fittest
individuals from the population, every new generation will consist of
fitter individuals, leading to a minimization of $f$, thereby, solving
the optimization problem.
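The scheme just described can be made concrete with a bare-bones sketch in Python. This is a toy illustration, not the actual {\sc pikaia}\ routine used in this work, and all function and parameter names are ours; it minimizes a function $f$ through selection, one-point crossover and mutation:

```python
import random

def toy_ga(f, bounds, pop_size=50, n_gen=100, mut_rate=0.1):
    """Minimize f over the box `bounds` with a bare-bones genetic algorithm."""
    n = len(bounds)
    # first generation: individuals randomly distributed in parameter space
    pop = [[random.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    for _ in range(n_gen):
        pop.sort(key=f)                         # fitter = lower f
        parents = pop[:pop_size // 2]           # selection pressure
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n) if n > 1 else 0
            child = a[:cut] + b[cut:]           # one-point crossover
            for i, (lo, hi) in enumerate(bounds):
                if random.random() < mut_rate:  # mutation: resample the gene
                    child[i] = random.uniform(lo, hi)
            children.append(child)
        pop = parents + children                # parents survive unchanged
    return min(pop, key=f)

# usage: recover the minimum of a 2-D paraboloid at (1, -2)
random.seed(0)
best = toy_ga(lambda p: (p[0] - 1.0) ** 2 + (p[1] + 2.0) ** 2,
              bounds=[(-5.0, 5.0), (-5.0, 5.0)], n_gen=200)
```

Because the parent pool is carried over unchanged, the best solution found so far is never lost, while mutation keeps injecting new genetic material that lets the population escape local optima.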
With the previous example in mind we can explain our implementation of
the GA for solving the optimization problem of spectral line fitting,
with the following scheme. We start out with a first generation of a
population of {\sc fastwind}\ models randomly distributed in the free
parameter space (see Sect.~\ref{sec:fit-param}). For each of these
models it is determined how well an observed spectrum is fitted by
calculating the reduced chi squared, \mbox{$\chi_{{\rm red}, i}^2$}, for each of the fitted
lines $i$. The fitness $F$, of a model is then defined as the inverted
sum of the \mbox{$\chi_{{\rm red}, i}^2$}'s, i.e.
\begin{equation}
\label{eq:fitns}
F \equiv \left(\sum_i^N \chi_{{\rm red}, i}^2\right)^{-1}~,
\end{equation}
where $N$ corresponds to the number of lines evaluated. The fittest
models are selected and a new generation of models is constructed
based on their parameters. From this generation the fitnesses of the
models are determined and again from the fittest individuals a new
generation is constructed. This is repeated until $F$ is maximized,
i.e.\ a good fit is obtained.
In terms of quantifying the fit quality Eq.~(\ref{eq:fitns}) does not
represent a unique choice. Other expressions for the fitness
criterion, for instance, the sum of the inverted \mbox{$\chi_{{\rm red}, i}^2$}'s of the
individual lines, or the inverted \mbox{$\chi_{\rm red}^2$}\ of all the spectral points
evaluated, also produce the required functionality of an increased
fitness with an increased fit quality. We have chosen this particular
form based on two of its properties. Firstly, the evaluation of the
fit quality of the lines enter into the expression individually,
ensuring that, regardless of the number of points in a certain line,
all lines are weighted equally. This also allows for weighting
factors for individual lines, which express the quality with which the
stellar atmosphere synthesizes these lines (cf.\
Sect.~\ref{sec:line-scheme}). Secondly, using the inverted sum of the
\mbox{$\chi_{{\rm red}, i}^2$}'s instead of the sum of the inverted \mbox{$\chi_{{\rm red}, i}^2$}'s avoids having
a single line, which is fitted particularly well, to dominate the
solution. Instead the former form demands a good fit of all lines
simultaneously.
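Equation~(\ref{eq:fitns}) is straightforward to implement. The sketch below also includes the optional per-line weighting factors mentioned above; the function names, and the use of the six free fit parameters in the reduced chi-squared, are our illustrative choices:

```python
import numpy as np

def reduced_chi2(obs, model, sigma, n_free=6):
    """Reduced chi-squared of one line profile, with n_free fitted parameters."""
    chi2 = np.sum(((obs - model) / sigma) ** 2)
    return chi2 / (len(obs) - n_free)

def fitness(lines, n_free=6, weights=None):
    """F = 1 / sum_i w_i * chi2_red,i  (Eq. 1, extended with optional
    per-line weights, e.g. to down-weight the He I singlet lines).

    `lines` is a list of (observed, model, sigma) arrays, one per line,
    so each line enters the sum once regardless of its number of points.
    """
    if weights is None:
        weights = [1.0] * len(lines)
    total = sum(w * reduced_chi2(o, m, s, n_free)
                for w, (o, m, s) in zip(weights, lines))
    return 1.0 / total
```

Note how inverting the sum, rather than summing inverted terms, prevents a single well-fitted line from dominating: a large \mbox{$\chi_{{\rm red}, i}^2$}\ for any line drives the whole fitness down.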
\subsection{Parallelization of the genetic algorithm}
The ability of global optimization of GAs comes at a price. Finding
the global minimum requires the calculation of many generations. In
Sect.~\ref{sec:formal-tests} we will show that for the spectra studied
in this paper, the evaluation of more than a hundred generations is
needed to assure that the global optimum is found. For a typical
population size of $\sim$70 individuals, this comes down to the
calculation of $\sim$7000 {\sc fastwind}\ models. With a modern 3~GHz
processor a single {\sc fastwind}\ model (aiming at the analysis of
hydrogen and helium lines) can be calculated within five to ten
minutes. Consequently, automated fitting on a sequential computer
would be unworkable.
To overcome this problem, parallelization of the {\sc pikaia}\ routine is
necessary. This parallelization is inspired by the work of
\cite{metcalfe03}. Consequently, our parallel version is very similar
to the version of these authors. The main difference between the two
versions, is an extra parallelization of the so called elitism option
in the reproduction schemes (see \citeauthor{metcalfe03}). This was
treated in a sequential manner in the \citeauthor{metcalfe03}
implementation and has now been parallelized as well.
Due to the strong inherent parallelism of GAs, the parallel version of
our automated fitting method scales very well with the number of
processors used. Test calculations showed that for configurations in
which the population size is an integer multiple of the number of
processors the sequential overhead is negligible. Consequently, the
runtime scales directly with the inverse of the number of
processors, thus enabling the automated fitting of spectra.
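The reason a GA parallelizes so cleanly is that the fitness evaluations within one generation are mutually independent. The actual implementation parallelizes the {\sc pikaia}\ routine with message passing across processors, following \cite{metcalfe03}; the thread-based sketch below, with a trivial stand-in for the expensive {\sc fastwind}\ evaluation, only illustrates the work division:

```python
from multiprocessing.dummy import Pool  # thread-based Pool, portable sketch only;
                                        # the real code uses message passing (MPI)

def evaluate_model(params):
    # stand-in for computing one FASTWIND model and scoring its line fits;
    # in reality this takes five to ten minutes per model
    return 1.0 / (1.0 + sum(p * p for p in params))

def evaluate_generation(population, n_workers=4):
    """Score all individuals of one generation concurrently.

    Since every model is independent, the workload divides cleanly: when
    the population size is an integer multiple of n_workers, no worker
    idles and the runtime scales as 1/n_workers.
    """
    with Pool(n_workers) as pool:
        return pool.map(evaluate_model, population)
```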
\subsection{The non-LTE model atmosphere code {\sc fastwind}}
\label{sec:FastWind}
For modeling the optical spectra of our stars we use the latest
version of the non-LTE, line-blanketed atmosphere code {\sc fastwind}\ for
early-type stars with winds. For a detailed description we refer to
\cite{puls05}. Here we give a short overview of the assumptions made
in this method. The code has been developed with the emphasis on a
{\em fast performance} (hence its name), which makes it currently the
best suited (and realistically only) model for use in this kind of
automated fitting methods.
{\sc fastwind}\ adopts the concept of ``unified model atmospheres'',
i.e.\ including both a pseudo-hydrostatic photosphere and a transonic
stellar wind, assuring a smooth transition between the two. The
photospheric density structure follows from a self-consistent solution
of the equation of hydrostatic equilibrium and accounts for the actual
temperature stratification and radiation pressure. The temperature
calculation utilizes a flux-correction method in the lower atmosphere
and the thermal balance of electrons in the outer atmosphere (with a
lower cut-off at $T_{\rm min} = 0.5 \mbox{$T_{\rm eff}$}$). In the photosphere the
velocity structure, $v(r)$, corresponds to quasi-hydrostatic
equilibrium; outside of this regime, in the region of the sonic
velocity and in the super-sonic wind regime it is prescribed by a
standard $\beta$-type velocity law, i.e.
\begin{equation}
v(r) = \mbox{$v_{\infty}$} \left( 1 - \frac{r_{\circ}}{r} \right)^{\beta}~,
\end{equation}
where \mbox{$v_{\infty}$}\ is the terminal velocity of the wind. The parameter
$r_{\circ}$ is used to assure a smooth connection, and $\beta$ is a
measure of the flow acceleration.
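As a concrete illustration, the $\beta$-law can be evaluated as below. Here $r_{\circ}$ is fixed by the simple matching condition $v(\mbox{$R_{\star}$}) = v_0$ for some photospheric connection velocity $v_0$; this is a common choice, but not necessarily identical to the smooth-connection condition applied inside {\sc fastwind}:

```python
def beta_law(r, v_inf, beta, r_star, v0):
    """Beta-type wind velocity law v(r) = v_inf * (1 - r0/r)**beta.

    r0 is chosen so that v(r_star) = v0, i.e. the wind connects to the
    photosphere at velocity v0 (an assumed matching condition).
    r is in the same units as r_star; velocities share one unit (e.g. km/s).
    """
    r0 = r_star * (1.0 - (v0 / v_inf) ** (1.0 / beta))
    return v_inf * (1.0 - r0 / r) ** beta
```

For large $r$ the law approaches \mbox{$v_{\infty}$}, and a larger $\beta$ yields a slower, more gradual acceleration of the outflow.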
The code distinguishes between {\em explicit} elements (in our case
hydrogen and helium) and {\em background} elements (most importantly:
C, N, O, Ne, Mg, Si, S, Ar, Fe, Ni). The explicit elements are used as
diagnostic tools and are treated with high precision, i.e.\ by detailed
atomic models and by means of {\em co-moving-frame} transport for the
line transitions. The \ion{H}{i}\ and \ion{He}{ii}\ model atoms consist of 20 levels
each; the \ion{He}{i}\ model includes levels up to and including $n = 10$,
where levels with $n \ge 8$ have been packed. The background ions are
included to allow for the effects of line-blocking (treated in an
approximate way by using suitable means for the corresponding line
opacities) and line-blanketing. Occupation numbers and opacities of
both the explicit and the most abundant background ions are
constrained by assuming statistical equilibrium. The only difference
between the treatment of these types of ions is that for the
background ions the Sobolev approximation is used in describing the
line transfer (accounting for the actual illumination radiation
field).
Abundances of the background elements are taken from the solar values
provided by \citet[][and references therein]{grevesse98}. The He/H
ratio is not fixed and can be scaled independently from the background
element abundances.
A comparison between the optical H and He lines as synthesized by
{\sc fastwind}\ and those predicted by the independent comparison code
{\sc cmfgen}\ \citep{hillier98} show excellent agreement, save for the
\ion{He}{i}\ singlet lines in the temperature range between 36\,000 and
41\,000~K for dwarfs and between 31\,000 and 35\,000~K for
supergiants, where {\sc cmfgen}\ predicts weaker lines. We give account of
this discrepancy, and therefore of an increased uncertainty in the
reproduction of these lines, by introducing weighting factors, which
for the \ion{He}{i}\ singlets of stars in these ranges are lower (cf.\
Sect.~\ref{sec:line-scheme}).
\begin{table*}[t]
\caption{Input parameters of the formal test models (``In'' column)
and parameters obtained with the automated fitting method by fitting
synthetic data created from these models (``Out'' column). Results
were obtained by evolving a population of 72 {\sc fastwind}\ models over
200 generations.}
\label{tab:form-tests}
\begin{center}
\begin{tabular}{lrrrrrrrrr}
\hline\\[-9pt] \hline \\[-7pt]
& Set A& Search & & Set B & Search & & Set C& Search \\[2pt]
& In & range & Out & In & range & Out & In & range & Out \\[1pt]
\hline \\[-9pt]
Spectral type & O3~I & & & O5.5~I& & & B0~V & \\[3.5pt]
\mbox{$T_{\rm eff}$}\ [kK] & 45.0 & [42, 47] & 45.0 & 37.5 & [35, 40] & 37.6 & 30.0 & [28, 34] & 29.9 \\[3.5pt]
\mbox{$\log{{g}}$}\ [\mbox{cm\,s$^{-2}$}] & 3.80 & [3.5, 4.0] & 3.84 & 3.60 & [3.3, 3.9] & 3.57 & 4.00 & [3.7, 4.3] & 3.95 \\[3.5pt]
\mbox{$R_{\star}$}\ [\mbox{$R_{\sun}$}] & 17.0 & & & 20.0 & & & 8.0 & & \\[3.5pt]
$\log \mbox{$L_{\star}$}$ [\mbox{$L_{\sun}$}] & 6.03 & & - & 5.85 & & - & 4.67 & & - \\[3.5pt]
\mbox{$v_{\rm turb}$}\ [\mbox{km\,s$^{-1}$}] & 5.0 & [0, 20] & 5.9 & 10.0 & [0, 20] & 9.7 & 15.0 & [0, 20] & 14.8 \\[3.5pt]
\mbox{$Y_{\rm He}$} & 0.15 & [0.05, 0.30] & 0.15 & 0.10 & [0.05, 0.30] & 0.10 & 0.10 & [0.05, 0.30] & 0.10 \\[3.5pt]
\mbox{$\dot{M}$}\ [$10^{-6}$\mbox{$M_{\sun}{\rm yr}^{-1}$}] & 10.0 & [1.0, 20.0] & 9.3 & 5.0 & [1.0, 10.0] & 5.3 & 0.01 & [0.001, 0.2] & 0.008\\[3.5pt]
$\beta$ & 1.20 & [0.5, 1.5] & 1.18 & 1.00 & [0.5, 1.5] & 0.99 & 0.80 & [0.5, 1.5] & 0.93 \\[3.5pt]
\mbox{$v_{\infty}$}\ [\mbox{km\,s$^{-1}$}] & 2500 & & - & 2200 & & - & 2000 & & - \\[3.5pt]
\mbox{$v_{\rm r}\sin i$}\ [\mbox{km\,s$^{-1}$}] & 150 & & - & 120 & & - & 90 & & - \\[1pt]
\hline
\end{tabular}
\end{center}
\end{table*}
\subsection{Fit parameters}
\label{sec:fit-param}
The main parameters which will be determined from a spectral fit using
{\sc fastwind}\ are the effective temperature \mbox{$T_{\rm eff}$}, the surface gravity
$g$, the microturbulent velocity \mbox{$v_{\rm turb}$}, the helium over hydrogen
number density \mbox{$Y_{\rm He}$}, the mass loss rate \mbox{$\dot{M}$}\ and the exponent of the
beta-type velocity law $\beta$. These parameters span the free
parameter space of our fitting method. The stellar radius, \mbox{$R_{\star}$}, is
not a free parameter as its value is constrained by the absolute
visual magnitude \mbox{$M_{V}$}. To calculate \mbox{$R_{\star}$}\ we adopt the procedure
outlined in \cite{kudritzki80}, i.e.
\begin{equation}
5\log R/R_{\sun} = 29.57 - (\mbox{$M_{V}$} - V)~,
\end{equation}
where $V$ is the visual flux of the theoretical model given by
\begin{equation}
-2.5 \log \int_0^\infty F_\lambda S_\lambda d\lambda~.
\end{equation}
In the above equation $S_\lambda$ is the $V$-filter function of
\cite{matthews63} and $F_\lambda$ is the theoretical stellar
flux. Note that as \mbox{$R_{\star}$}\ is an input parameter, $F_\lambda$ is not
known before the {\sc fastwind}\ model is calculated. Therefore, during the
automated fitting we approximate $F_\lambda$ by a black body radiating
at $T=0.9\mbox{$T_{\rm eff}$}$ \citep[cf.][]{markova04}. After the fit is completed
we use the theoretical flux from the best fit model to calculate the
non approximated stellar radius. Based on this radius we rescale the
mass loss rate using the invariant wind-strength parameter $Q$
\citep{puls96, dekoter97}
\begin{equation}
Q = \frac{\mbox{$\dot{M}$}}{\left(\mbox{$v_{\infty}$} \mbox{$R_{\star}$}\right)^\frac{3}{2}}~.
\end{equation}
The largest difference between the approximated and final stellar
radius for the objects studied here, is $\sim$2 percent. The
corresponding rescaling in \mbox{$\dot{M}$}\ is approximately three percent.
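The radius determination and the subsequent rescaling of the mass-loss rate can be sketched as follows. Since \mbox{$v_{\infty}$}\ is unchanged between the approximate and final model, keeping $Q$ invariant reduces to $\mbox{$\dot{M}$} \propto \mbox{$R_{\star}$}^{3/2}$; the function names are ours:

```python
import math

def radius_from_mv(m_v, v_theory):
    """Stellar radius in solar units from 5 log(R/Rsun) = 29.57 - (M_V - V),
    where v_theory is the synthetic V magnitude of the model flux."""
    return 10.0 ** ((29.57 - (m_v - v_theory)) / 5.0)

def rescale_mdot(mdot_old, r_old, r_new):
    """Rescale the mass-loss rate at fixed wind-strength parameter
    Q = Mdot / (v_inf * R)^(3/2); with v_inf fixed this gives
    Mdot_new = Mdot_old * (r_new / r_old)**1.5."""
    return mdot_old * (r_new / r_old) ** 1.5
```

A two percent change in radius, the largest found here, thus translates into roughly a three percent change in \mbox{$\dot{M}$}, consistent with the numbers quoted above.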
The projected rotation velocity, \mbox{$v_{\rm r}\sin i$}, and terminal velocity of the
wind are not treated as free parameters. The value of \mbox{$v_{\rm r}\sin i$}\ is
determined from the broadening of weak metal lines and the width of
the \ion{He}{i}\ lines. For \mbox{$v_{\infty}$}\ we adopt values obtained from the study of
ultraviolet (UV) resonance lines, or, if not available, values from
calibrations are used.
Our fitting method only requires the size of the free parameter domain
to be specified. For the objects studied in this paper we keep the
boundaries between which the parameters are allowed to vary, fixed for
\mbox{$v_{\rm turb}$}, \mbox{$Y_{\rm He}$}\ and $\beta$. The adopted ranges, respectively, are [0,
20] \mbox{km\,s$^{-1}$}, [0.05, 0.30] and [0.5, 1.5]. The boundaries for \mbox{$T_{\rm eff}$}\ are
set based on the spectral type and luminosity class of the studied
object. Usually the size of this range is set to approximately
5000~K. The \mbox{$\log{{g}}$}\ range is delimited so that the implied stellar mass
lies between reasonable boundaries. For instance for the B1\,I star
Cyg~OB2~\#2 the adopted \mbox{$T_{\rm eff}$}\ range together with its absolute visual
magnitude imply a possible range in \mbox{$R_{\star}$}\ of [11.5:12.0]\mbox{$R_{\sun}$}. For
the automated fit we set the minimum and maximum \mbox{$\log{{g}}$}\ to 3.1 and
3.8, respectively, which sets the corresponding mass range that will
be investigated to [5.0:25.2]~\mbox{$M_{\sun}$}. For the mass loss rate we adopt a
conservative range of at least one order of magnitude. As an example, for
the analysis of Cyg~OB2~\#2 we adopted lower and upper boundaries of
$4\times 10^{-8}$ and $2\times 10^{-6}\,\mbox{$M_{\sun}{\rm yr}^{-1}$}$, respectively.
\begin{figure*}[t]
\centering
\resizebox{14cm}{!}{\includegraphics{param_form_B.ps}}
\caption{Evolution of the best fitting model parameters for formal
test B. From the 200 generation run only the first 75 generations
are shown. For this specific data set the location of the global
optimum is found within 50 generations. This is indicated by the
highest fitness found during the run, which is shown as a grey
dashed line and is scaled to the right vertical axis. The fitness is
normalized with respect to the fitness of the model used to create
the synthetic data (the data being this model plus noise).}
\label{fig:param_form}
\end{figure*}
\subsection{Formal tests of convergence}
\label{sec:formal-tests}
Before we apply our automated fitting method to real spectra, we first
test whether the method is capable of global optimization. For this we
perform convergence tests using synthetic data. The main goal of these
tests is to determine how well and how fast the input parameters, used
to create the synthetic data, can be recovered with the method. The
speed with which the input parameters are recovered, i.e.\ the number
of generations needed to find the global optimum, can then be used to
determine how many generations are needed to obtain the best fit for a
real spectrum. In other words, when the fit has converged to the
global optimum.
Three synthetic datasets, denoted by A, B and C, were created with the
following procedure. First, line profiles of Balmer hydrogen lines and
helium lines in the optical blue and H$\alpha$\ in the red calculated by
{\sc fastwind}\ were convolved with a rotational broadening profile. Table
\ref{tab:form-tests} lists the parameters of the three sets of models
as well as the projected rotational velocity used. A second
convolution with a Gaussian instrumental profile was applied to obtain
a spectral resolution of 0.8~\AA\ and 1.3~\AA\ for, respectively, the
H$\alpha$\ line and all other lines. These values correspond to the minimum
resolution of the spectra fitted in Sect.~\ref{sec:spec-analysis}.
Finally, Gaussian distributed noise, corresponding to a signal to
noise value of 100, was added to the profiles. Dataset A represents
an O3~I star with a very dense stellar wind
($\mbox{$\dot{M}$}=10^{-5}\,\mbox{$M_{\sun}{\rm yr}^{-1}$}$), while set B is that of an O5.5~I with a
more typical O-star mass loss. The last set C is characteristic for a
B0~V star with a very tenuous wind of only $10^{-8}\,\mbox{$M_{\sun}{\rm yr}^{-1}$}$.
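The degradation steps applied to the synthetic profiles can be sketched as below. Rotational broadening, which is applied first, is omitted for brevity; the function name and the equidistant wavelength grid are our assumptions:

```python
import numpy as np

def degrade_profile(wave, flux, fwhm, snr, seed=None):
    """Convolve a line profile with a Gaussian instrumental profile of the
    given FWHM (same units as `wave`, which is assumed equidistant), then
    add Gaussian noise corresponding to the given signal-to-noise ratio.
    """
    step = wave[1] - wave[0]
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0))) / step  # FWHM -> sigma, pixels
    x = np.arange(-int(4 * sigma) - 1, int(4 * sigma) + 2)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()                    # conserve the total flux
    smeared = np.convolve(flux, kernel, mode="same")
    rng = np.random.default_rng(seed)
    return smeared + rng.normal(0.0, 1.0 / snr, size=flux.size)
```

For H$\alpha$\ one would use a FWHM of 0.8~\AA\ and for the blue lines 1.3~\AA, with ${\rm S/N} = 100$, matching the quoted minimum resolution and noise level of the observed spectra.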
From the synthetic datasets we fitted nine lines, three hydrogen,
three neutral helium and three singly ionized helium lines,
corresponding to the minimum set of lines fitted for a single object
in Sect.~\ref{sec:spec-analysis}. The fits were obtained by evolving a
population of 72 {\sc fastwind}\ models over a course of 200
generations. In this test and throughout the remainder of the paper we
use {\sc pikaia}\ with a dynamically adjustable mutation rate, with the
minimum and maximum mutation rate set to the default values (see
\citealt{charbonneau95b}). Selection pressure, i.e.\ the weighting of
the probability that an individual will be selected for reproduction based
on its fitness, was also set to the default value.
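The overall scheme (rank-weighted parent selection, crossover, elitism, and a mutation rate that adapts between fixed bounds according to how converged the population is) can be illustrated with a minimal sketch. This is not {\sc pikaia}\ itself; the adaptation heuristic and all numerical constants below are simplified assumptions.

```python
import numpy as np

def evolve(fitness, n_par, n_pop=72, n_gen=150, pm=(0.005, 0.25), rng=None):
    """Minimal GA sketch: real-coded genes in [0, 1], rank-weighted
    parent selection, one-point crossover, elitism, and a mutation rate
    adapting within [pm_min, pm_max] depending on population spread."""
    rng = np.random.default_rng(0) if rng is None else rng
    pop = rng.random((n_pop, n_par))
    rate = pm[0]
    best = None
    for gen in range(n_gen):
        fit = np.array([fitness(ind) for ind in pop])
        order = np.argsort(fit)[::-1]            # sort best-first
        pop, fit = pop[order], fit[order]
        if best is None or fit[0] > best[1]:
            best = (pop[0].copy(), fit[0])
        # adapt mutation: raise it when the population has converged,
        # lower it while the population is still diverse
        spread = fit[0] - np.median(fit)
        rate = np.clip(rate * (0.5 if spread > 0.05 * abs(fit[0]) else 2.0),
                       pm[0], pm[1])
        # rank-based selection probabilities (higher rank -> more likely)
        p = np.arange(n_pop, 0, -1, dtype=float)
        p /= p.sum()
        new = [pop[0].copy()]                    # elitism: keep the best
        while len(new) < n_pop:
            i, j = rng.choice(n_pop, size=2, p=p)
            cut = rng.integers(1, n_par) if n_par > 1 else 0
            child = np.concatenate([pop[i][:cut], pop[j][cut:]])
            mask = rng.random(n_par) < rate
            child[mask] = rng.random(mask.sum())  # uniform mutation
            new.append(child)
        pop = np.array(new)
    return best
```

In the actual analysis the `fitness` callback would run a {\sc fastwind}\ model for the (rescaled) parameter vector and return the inverse weighted $\chi^2$; here any scalar objective works.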
Table~\ref{tab:form-tests} lists the parameter ranges in
which the method was allowed to search, i.e.\ the minimum and maximum
values allowed for the parameters of the {\sc fastwind}\ models. As \mbox{$v_{\infty}$}\
and \mbox{$v_{\rm r}\sin i$}\ are not free parameters these were set equal to the input
values.
In all the three test cases the automated method was able to recover
the global optimum. Table~\ref{tab:form-tests} lists the parameters of
the best fit models obtained by the method in the ``Out''
columns. Compared to the parameters used to create the synthetic data,
there is very good agreement. Moderate differences (at the 15--20\%
level) are found for \mbox{$v_{\rm turb}$}\ recovered from dataset A and for the wind
parameters $\beta$ and \mbox{$\dot{M}$}\ recovered from dataset C. This was to be
expected. In the case of the wind parameters the precision with which
information about these parameters can be recovered from the line
profiles decreases with decreasing wind density
\citep[e.g.][]{puls96}. Still, the precision with which the wind
parameters are recovered for the weak wind data set C, is remarkable.
A similar reasoning applies for the microturbulent velocity recovered
from data set A. For low values of the microturbulence, i.e.\ $\mbox{$v_{\rm turb}$}
< v_{\rm th}$, thermal broadening will dominate over broadening due to
microturbulence. This decreases the precision with which this
parameter can be recovered from the line profiles. Realizing that in
case of this dataset for helium $v_{\rm th}~\approx~14~\mbox{km\,s$^{-1}$}$, again,
the precision with which \mbox{$v_{\rm turb}$}\ is recovered, is impressive.
To illustrate how quickly and how well the input parameters are
recovered Fig.~\ref{fig:param_form} shows the evolution of the fit
parameters during the fit of synthetic dataset B. Also shown, as a
grey dashed line, is the fitness of the best fitting model found,
during the run. This fitness is normalized with respect to the fitness
of the model used to create the synthetic data (the data being the
combination of this model and noise). Note that the final maximum
normalized fitness found by the method exceeds 1.0, which is due to
the added noise allowing a further fine-tuning of the parameters by
the GA-based optimization. As can be seen in this figure the method
modifies multiple parameters simultaneously to produce a better
fit. This allows for an efficient exploration of parameter space and,
more importantly, it allows for the method to actually find the global
optimum.
In the case of dataset B finding the global optimum required only
a few tens of generations ($\sim$30). For the other two datasets all
save one parameter were well established within this number of
generations. To establish the very low value of \mbox{$v_{\rm turb}$}\ in dataset A
and the very low \mbox{$\dot{M}$}\ in dataset C required $\sim$100
generations. We will adopt 150 generations to fit the spectra in
Sect.~\ref{sec:spec-analysis}. One reason, obviously, is to safeguard
that the global optimum is found. A second reason, however,
is that it assures that the errors on the model parameters that we
determine are meaningful (i.e.\ it assures that the error on the error
is modest).
We consider a formal test such as the one performed above to be part
of the analysis of any set of observed spectra, as the exact number of
generations required is, in principle, a function of e.g.\ the
signal-to-noise ratio and the spectral resolution. Also, special
circumstances may play a role, such as potential nebular contamination
(in which case the impact of removing the line cores from the fit
procedure needs to be assessed).
\begin{table*}
\caption{Basic parameters of the early type stars studied
here. Spectral types are taken from \cite{massey91},
\cite{walborn72, walborn73} and \cite{conti71}. Blue and red
resolution, respectively, correspond to the region between
$\sim$4000 and $\sim$5000~\AA\ and the region around H$\alpha$.}
\label{tab:data}
\begin{center}
\begin{tabular}{llcccrc}
\hline\\[-9pt] \hline \\[-7pt]
Star & Spectral & \mbox{$M_{V}$} & Blue & Red & \multicolumn{1}{c}{\mbox{$v_{\rm r}\sin i$}} & \mbox{$v_{\infty}$}\\[2pt]
& Type & & resolution [\AA] & resolution [\AA] & [\mbox{km\,s$^{-1}$}] & [\mbox{km\,s$^{-1}$}] \\[1pt]
\hline\\[-9pt]
\object{Cyg~OB2~\#7} & O3~If$^*$ & $-5.91$ & 0.6 & 0.8 & 105 & 3080\\[3.5pt]
\object{Cyg~OB2~\#11} & O5~If$^+$ & $-6.51$ & 1.3 & 0.8 & 120 & 2300\\[3.5pt]
\object{Cyg~OB2~\#8C} & O5~If & $-5.61$ & 1.3 & 0.8 & 145 & 2650\\[3.5pt]
\object{Cyg~OB2~\#8A} & O5.5~I(f) & $-6.91$ & 0.6 & 0.8 & 130 & 2650\\[3.5pt]
\object{Cyg~OB2~\#4} & O7~III((f)) & $-5.44$ & 1.3 & 0.8 & 125 & 2550\\[3.5pt]
\object{Cyg~OB2~\#10} & O9.5~I & $-6.86$ & 0.6 & 0.8 & 95 & 1650\\[3.5pt]
\object{Cyg~OB2~\#2} & B1~I & $-4.64$ & 0.6 & 0.8 & 50 & 1250\\[3.5pt]
\object{\HD15629} & O5~V((f)) & $-5.50$ & 0.6 & 0.8 & 90 & 3200\\[3.5pt]
\object{\HD217086} & O7~Vn & $-4.50$ & 0.6 & 0.8 & 350 & 2550\\[3.5pt]
\object{10~Lac} & O9~V & $-4.40$ & 0.6 & 0.6 & 35 & 1140\\[3.5pt]
\object{\mbox{$\zeta$~Oph}} & O9~V & $-4.35$ & 0.6 & 0.8 & 400 & 1550\\[3.5pt]
\object{\mbox{$\tau$~Sco}} & B0.2~V & $-3.10$ & 0.2 & 0.2 & 5 & 2000\\[1pt]
\hline
\end{tabular}
\end{center}
\end{table*}
\section{Spectral analysis of early-type stars}
\label{sec:spec-analysis}
In this section we apply our fitting method to seven stars in the open
cluster Cyg~OB2, previously analysed by \cite{herrero02} and five
``standard'' early-type stars, 10~Lac, \mbox{$\tau$~Sco}, \mbox{$\zeta$~Oph}, \HD15629\ and
\HD217086, previously analysed by various authors.
\subsection{Description of the data}
Table~\ref{tab:data} lists the basic properties of the data used for
the analysis. All spectra studied have a S/N of at least 100. The
spectral resolution of the data in the blue (regions between
$\sim$4000 and $\sim$5000~\AA) and the red (region around H$\alpha$) is
given in Tab.~\ref{tab:data}.
The optical spectra of the stars in Cyg~OB2\ were obtained by
\cite{herrero99} and \cite{herrero00}. Absolute visual magnitudes of
the Cyg~OB2\ objects were adopted from \cite{massey91}, and correspond
to a distance modulus of 11.2\mbox{$^{\rm m}$}. Note that for object \#8A Tab.~7
in \citeauthor{massey91} contains an incorrect $V_0$ value of
4.08\mbox{$^{\rm m}$}. This should have been 4.26\mbox{$^{\rm m}$}, consistent with the absorption
given in this table and the visual magnitude in their Tab.~2. For
\mbox{$v_{\rm r}\sin i$}\ the values determined by \cite{herrero02} are used, with the
exception of objects \#8A and \#10, for which we found that the \ion{He}{i}\
and metal lines are somewhat better reproduced if we adopt \mbox{$v_{\rm r}\sin i$}\
values that are higher by $\sim$35\% and $\sim$10\%, respectively. Terminal
flow velocities of the wind have been derived from UV spectra
obtained with the {\em Hubble Space Telescope}
\citep[cf.][]{herrero01}. Data of \HD15629, \HD217086 and \mbox{$\zeta$~Oph}\ are
from \cite{herrero92} and \cite{herrero93}. For \mbox{$M_{V}$}, \mbox{$v_{\infty}$}\ and
\mbox{$v_{\rm r}\sin i$}\ values given by \cite{repolust04} are adopted. The distances
to these objects are based on spectroscopic parallaxes, except for
\mbox{$\zeta$~Oph}\ which has a reliable {\em Hipparcos} distance
\citep{schroder04}.
The spectrum of 10~Lac\ was obtained by \cite{herrero02}. The
absolute visual magnitude of this star is from \cite{herrero92}. For
\mbox{$v_{\infty}$}\ we adopted the minimum value, which is approximately equal to
the escape velocity at the stellar surface of this object. For the
projected rotational velocity we adopt 35 \mbox{km\,s$^{-1}$}.
The blue spectrum of \mbox{$\tau$~Sco}\ is from \cite{kilian92}. The red region
around H$\alpha$\ was observed by \cite{zaal99}. For \mbox{$\tau$~Sco}\ we also adopt
the {\em Hipparcos} distance. This distance results in an absolute
visual magnitude which is rather large for the spectral type of this
object, but is in between the \mbox{$M_{V}$}\ adopted by \cite{kilian92} and
\cite{humphreys78}. For the projected rotational velocity a value of 5
\mbox{km\,s$^{-1}$}\ was adopted.
\subsection{Lines selected for fitting and weighting scheme}
\label{sec:line-scheme}
For the analysis {\sc fastwind}\ will fit the hydrogen and helium spectrum
of the investigated objects. Depending on the wavelength range of the
available data, these lines comprise for hydrogen the Balmer
lines H$\alpha$, H$\beta$, H$\gamma$\ and H$\delta$; for \ion{He}{i}\ the singlet lines at 4387 and
4922\,\AA, the \ion{He}{i}\ triplet lines at 4026, which is blended with
\ion{He}{ii}, 4471 and 4713\,\AA; and finally for \ion{He}{ii}\ the lines at 4200,
4541 and 4686\,\AA.
For an efficient and reliable use of the automated method we have to
incorporate into it the expertise that we have developed in the
analysis of OB stars. The method has to take into account that some
lines may be blended or that they cannot be completely reproduced by
the model atmosphere code for whatever reason. An example is the
so-called ``generalized dilution effect'' \citep{voels89}, present in
the \ion{He}{i}\,$\lambda$4471 line in late-type supergiants, which is
still lacking an explanation.
\begin{table}
\caption{Line weighting scheme adopted for different spectral
types and luminosity classes for the objects fitted in this
paper. Late, mid and early spectral type correspond to,
respectively, [O2--O5.5], [O6--O7.5] and [O8--B1]. The weights are
implemented in the fitness definition according to
Eq.~(\ref{eq:fitns_weights}) and have values of 1.0, 0.5 and 0.25
in case of h, m and l, respectively.}
\label{tab:line-weights}
\begin{center}
\begin{tabular}{lccccccc}
\hline\\[-9pt] \hline \\[-7pt]
& \multicolumn{3}{c}{Dwarfs}
& & \multicolumn{3}{c}{Supergiants} \\[2pt]
& Late & Mid & Early & & Late & Mid & Early\\[1pt]
\hline \\[-9pt]
H Balmer & h & h & h & & h & h & h \\[3.5pt]
\ion{He}{i}\ singlets & h & l & l & & h & l & l \\[3.5pt]
\ion{He}{i}\ 4026 & h & h & h & & h & h & h \\[3.5pt]
\ion{He}{i}\ 4471 & h & h & h & & l & m & h \\[3.5pt]
\ion{He}{i}\ 4713 & h & h & h & & h & h & h \\[3.5pt]
\ion{He}{ii}\ 4686 & h & m & m & & m & m & m \\[3.5pt]
\ion{He}{ii}\ 4541 & h & h & h & & h & h & h \\[3.5pt]
\ion{He}{ii}\ 4200 & m & m & m & & m & m & m \\[1pt]
\hline
\end{tabular}
\end{center}
\end{table}
To that end we have divided the stars in two classes (``dwarfs'' and
``supergiants'', following their luminosity class
classification\footnote{For the one giant in our sample, Cyg~OB2~\#4,
we have adopted the line weighting scheme for dwarfs.}), and three
groups in each class (following spectral types). We have then a total
of six stellar groups, and have assigned the spectral lines different
weights depending on their behaviour in each stellar group. This
behaviour represents the expertise from years of ``by eye'' data
analysis that is being translated to the method. Three different
weights are used: high, assigned to lines that are very reliable for
the analysis; medium; and low. The implementation of these weights
into the fitness definition is given by
\begin{equation}
\label{eq:fitns_weights}
F \equiv \left(\sum_i^N w_i \chi_{{\rm red}, i}^2\right)^{-1}~,
\end{equation}
where the parameter $w_i$ corresponds to the weight of a specific
line.
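A direct transcription of this fitness definition, together with the h/m/l weight values of the line scheme, might look as follows (the function names are our own):

```python
import numpy as np

# weight values from the line scheme: high / medium / low
WEIGHT = {"h": 1.0, "m": 0.5, "l": 0.25}

def reduced_chi2(obs, model, sigma, n_free=6):
    """Reduced chi-squared of a single line profile, for n_free fitted
    parameters (six in the analysis of this paper)."""
    obs, model = np.asarray(obs), np.asarray(model)
    return np.sum(((obs - model) / sigma) ** 2) / (obs.size - n_free)

def fitness(chi2_red, weights):
    """F = 1 / sum_i w_i * chi2red_i; `weights` may hold the 'h'/'m'/'l'
    labels or numeric values directly."""
    w = np.array([WEIGHT.get(x, x) for x in weights], dtype=float)
    return 1.0 / np.sum(w * np.asarray(chi2_red))
```

Because the weighted $\chi^2$ values are summed before inversion, down-weighting a problematic line reduces its pull on the optimum without removing its information entirely.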
\begin{table*}
\caption{Results obtained for the investigated early type stars
using GA optimized spectral fits. The spectra were fitted by
evolving a population of 72 {\sc fastwind}\ models over a course of 150
generations. Spectroscopic masses \mbox{$M_{\rm s}$}\ are calculated with the
gravities corrected for centrifugal acceleration $\log g_{\rm
c}$. Evolutionary masses \mbox{$M_{\rm ev}$}\ are from \cite{schaller92}. The error
bars on the derived parameters are given in Tab.~\ref{tab:errors}
and are discussed in Sect.~\ref{sec:errors}.}
\label{tab:fit-results}
\begin{center}
\begin{tabular}{lccccccrclcc}
\hline\\[-9pt] \hline \\[-7pt]
Star & \mbox{$T_{\rm eff}$} & \mbox{$\log{{g}}$} & \mbox{$\log{{g}}_{\rm c}$} & \mbox{$R_{\star}$} & $\log \mbox{$L_{\star}$}$ & \mbox{$Y_{\rm He}$}
& \multicolumn{1}{c}{\mbox{$v_{\rm turb}$}} & \mbox{$\dot{M}$} & \multicolumn{1}{c}{$\beta$} & \mbox{$M_{\rm s}$} & \mbox{$M_{\rm ev}$}\\[2pt]
& [kK] & [\mbox{cm\,s$^{-2}$}] & [\mbox{cm\,s$^{-2}$}] & [\mbox{$R_{\sun}$}] & [\mbox{$L_{\sun}$}] & & \multicolumn{1}{c}{[\mbox{km\,s$^{-1}$}]} & [\mbox{$M_{\sun}{\rm yr}^{-1}$}] &
& [\mbox{$M_{\sun}$}] & [\mbox{$M_{\sun}$}]\\[1pt]
\hline \\[-9pt]
Cyg~OB2~\#7 & 45.8 & 3.93 & 3.94 & 14.4 & 5.91 & 0.21 & 19.9 & 9.98$\cdot10^{-6}$ & 0.77 & 65.1 & 67.8\\[3.5pt]
Cyg~OB2~\#11 & 36.5 & 3.62 & 3.63 & 22.1 & 5.89 & 0.10 & 19.8 & 7.36$\cdot10^{-6}$ & 1.03 & 75.9 & 55.6\\[3.5pt]
Cyg~OB2~\#8C & 41.8 & 3.73 & 3.74 & 13.3 & 5.69 & 0.13 & 0.5 & 3.37$\cdot10^{-6}$ & 0.85 & 36.0 & 49.2\\[3.5pt]
Cyg~OB2~\#8A & 38.2 & 3.56 & 3.57 & 25.6 & 6.10 & 0.14 & 18.3 & 1.04$\cdot10^{-5}$ & 0.74 & 89.0 & 74.4\\[3.5pt]
Cyg~OB2~\#4 & 34.9 & 3.50 & 3.52 & 13.7 & 5.40 & 0.10 & 18.9 & 8.39$\cdot10^{-7}$ & 1.16 & 22.4 & 32.5\\[3.5pt]
Cyg~OB2~\#10 & 29.7 & 3.23 & 3.24 & 29.9 & 5.79 & 0.08 & 17.0 & 2.63$\cdot10^{-6}$ & 1.05 & 56.0 & 45.9\\[3.5pt]
Cyg~OB2~\#2 & 28.7 & 3.56 & 3.57 & 11.3 & 4.88 & 0.08 & 16.5 & 1.63$\cdot10^{-7}$ & 0.80$^{1)}$ & 17.0 & 18.7\\[3.5pt]
\HD15629 & 42.0 & 3.81 & 3.82 & 12.6 & 5.64 & 0.10 & 8.6 & 9.28$\cdot10^{-7}$ & 1.18 & 37.8 & 47.4\\[3.5pt]
\HD217086 & 38.1 & 3.91 & 4.01 & 8.30 & 5.11 & 0.09 & 17.1 & 2.09$\cdot10^{-7}$ & 1.27 & 25.7 & 28.5\\[3.5pt]
10~Lac\ & 36.0 & 4.03 & 4.03 & 8.27 & 5.01 & 0.09 & 15.5 & 6.06$\cdot10^{-8}$ & 0.80$^{1)}$ & 26.9 & 24.9\\[3.5pt]
\mbox{$\zeta$~Oph} & 32.1 & 3.62 & 3.83 & 8.9 & 4.88 & 0.11 & 19.7 & 1.43$\cdot10^{-7}$ & 0.80$^{1)}$ & 19.5 & 20.3\\[3.5pt]
\mbox{$\tau$~Sco} & 31.9 & 4.15 & 4.15 & 5.2 & 4.39 & 0.12 & 10.8 & 6.14$\cdot10^{-8}$ & 0.80$^{1)}$ & 13.7 & 16.0\\[1pt]
\hline
\end{tabular}
\end{center}
$^{1)}$ assumed fixed value
\end{table*}
Table~\ref{tab:line-weights} gives the weights assigned to each line in each
stellar group. We will only briefly comment on the low or medium
weights. \ion{He}{i}\ singlets are assigned a low weight for mid-type stars
because of the singlet differential behaviour found between {\sc fastwind}\
and {\sc cmfgen}\ \citep{puls05}, while they are very weak for early-type
stars. In these two cases therefore we prefer to rely on the triplet
\ion{He}{i}\,$\lambda$4471 line. To this line, however, a low weight is
assigned for late-type supergiants because of the above-mentioned
dilution effect.
\ion{He}{ii}\,$\lambda$4686 is only assigned a medium weight (except for late
type dwarfs), as this line is not always completely consistent with
the mass-loss rates derived from H$\alpha$. \ion{He}{ii}\,$\lambda$4200 is
sometimes blended with \ion{N}{iii}\,$\lambda$4200, and sometimes it
is not completely consistent with the rest of the \ion{He}{ii}\ lines. \ion{He}{i}\
and \ion{He}{ii}\ lines at 4026~\AA\ do overlap, but for both lines we find a
consistent behaviour.
The highest weight is therefore given to the Balmer lines plus the
\ion{He}{ii}\,$\lambda$4541 and the \ion{He}{i}/\ion{He}{ii}\ 4026 lines, which define the
He ionization balance with \ion{He}{i}\,$\lambda$4471 or the singlet \ion{He}{i}\
lines. Note however that, as discussed above, all lines fit
simultaneously in a satisfactory way for our best fitting models.
\subsection{Fits and comments on the individual analysis}
In the following we will present the fits that were obtained by the
automated method for our sample of 12 early type stars, and comment on
the individual analysis of the objects. Listed in
Tab.~\ref{tab:fit-results} are the values determined for the six free
parameters investigated and quantities derived from these.
\subsubsection{Analysis of the Cyg~OB2\ stars}
The Cyg~OB2\ objects studied here were previously analysed by
\citet[hereafter \citetalias{herrero02}]{herrero02}. We opted to
reanalyse these stars (to test our method) as these stars have equal
distances and have been analysed in a homogeneous way using (an
earlier version of) the same model atmosphere code. In
Sect.~\ref{sec:comp} we will systematically compare our results with
those obtained by \citetalias{herrero02}. Here, we will incidentally
discuss the agreement if this turns out to be relatively poor, or if
the absolute value of a parameter seems unexpected and we want to test
possible causes for the discrepancy.
\paragraph{Cyg~OB2~\#7}
The best fit obtained with our automated fitting method for Cyg~OB2~\#7
is shown in Fig.~\ref{fig:cob2_7_lines}. For all hydrogen lines
fitted, including H$\delta$\ not shown here, and all \ion{He}{ii}\ lines the fits
are of very good quality. Note that given the noise level the fits of
the \ion{He}{i}\ lines are also acceptable.
Interesting to mention is the manner in which the \ion{He}{i}\ and \ion{He}{ii}\
blend at 4026~\AA\ is fitted. At first sight, i.e.\ ``by eye'', it
seems that the fit is of poor quality, as the line wings of the
synthetic profile runs through ``features'' which might be attributed
to blends of weak photospheric metal lines. However, the broadest of
these features have a half maximum width of $\sim$70~\mbox{km\,s$^{-1}$}, which is
much smaller than the projected rotational velocity of
105~\mbox{km\,s$^{-1}$}. Consequently, these features are dominated by pure noise.
Compared to the investigation of \citetalias{herrero02} we have
partial agreement between the derived parameters. The mass loss rate,
\mbox{$T_{\rm eff}$}\ and to a lesser degree $\beta$ agree very well. For \mbox{$\log{{g}}$}\ and
the helium abundance we find, however, large differences. The \mbox{$\log{{g}}$}\
value obtained here is $\sim$0.2~dex larger, which results in a
spectroscopic mass of 65.1~\mbox{$M_{\sun}$}. A value which is in good agreement
with the evolutionary mass of 67.8~\mbox{$M_{\sun}$}.
The helium abundance needed to fit this object is 0.21, which is
considerably lower than the value obtained by \citetalias{herrero02},
who found an abundance ratio of 0.31. This large value still
corresponds to a strong helium surface enrichment. An interesting
question we need to address, is whether this is a real enrichment and
not an artifact that is attributable to a degeneracy effect of \mbox{$T_{\rm eff}$}\
and \mbox{$Y_{\rm He}$}. The latter can be the case, as no \ion{He}{i}\ lines are present in
the optical spectrum of Cyg~OB2~\#7. This issue can be resolved with
our fitting method by refitting the spectrum with a {\em helium
abundance fixed} at a lower value than previously obtained. If \mbox{$T_{\rm eff}$}\
and \mbox{$Y_{\rm He}$}\ are truly degenerate this would again yield a good fit,
however for a different \mbox{$T_{\rm eff}$}.
Shown as dotted lines in Fig.~\ref{fig:cob2_7_lines} are the results
of refitting Cyg~OB2~\#7 with a helium abundance fixed at the solar
value. For this lower \mbox{$Y_{\rm He}$}\ a \mbox{$T_{\rm eff}$}\ that is lower by $\sim$2.1~kK was
obtained. This was to be expected as for this temperature regime
\ion{He}{iii}\ is the dominant ionization stage. When consequently \mbox{$Y_{\rm He}$}\ is
reduced a reduction of the temperature is required to fit the \ion{He}{ii}\
lines. The reduction of \mbox{$T_{\rm eff}$}\ obtained is the maximum for which still
a good fit of the hydrogen lines is possible and the \ion{He}{i}\ lines do
not become too strong. More importantly, in
Fig.~\ref{fig:cob2_7_lines} it is shown that even with this large
reduction of \mbox{$T_{\rm eff}$}\ the \ion{He}{ii}\ lines cannot be fitted. This implies
that \mbox{$T_{\rm eff}$}\ and \mbox{$Y_{\rm He}$}\ are not degenerate and the obtained helium
enrichment is real.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_7_lines.ps}}
\caption{Comparison of the observed line profiles of Cyg~OB2~\#7 with
the best fit obtained by the automated fitting method (dashed
lines). Note that the \ion{He}{ii}\ line at 6527.1~\AA\ is not included in
the fit and, therefore, disregarded by the automated
method. The horizontal axis gives the wavelength in \AA. Vertical axes
give the continuum normalized flux and are scaled differently for
each line. In this figure the dotted lines correspond to a fit
obtained for a helium abundance fixed at 0.1. See text for further
comments.}
\label{fig:cob2_7_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_11_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#11.}
\label{fig:cob2_11_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_8c_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#8C. Shown with a dotted line for H$\alpha$\ and
\ion{He}{ii}~$\lambda$4686 are the line profiles of a model with a 0.05~dex
lower \mbox{$\dot{M}$}, which ``by eye'' fits the core of H$\alpha$. See text for
further comments.}
\label{fig:cob2_8C_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_8a_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#8A. The dotted lines correspond to a model with a \mbox{$\dot{M}$}\
higher by 0.04 dex. This mass loss rate was obtained by fitting the
best fit model, found by the automated method, ``by eye'' to the
H$\alpha$\ core. Even though the fit obtained with the higher \mbox{$\dot{M}$}\
results in a fit of H$\alpha$\ which is more pleasing to the eye in the
line core, this higher mass loss rate does not describe this object
the best. This can be seen best from the reduced fit quality of the
other hydrogen Balmer lines and the severe mismatch of
\ion{He}{i}~$\lambda$4471. See text for further comments.}
\label{fig:cob2_8A_lines}
\end{figure*}
\paragraph{Cyg~OB2~\#11}
Figure \ref{fig:cob2_11_lines} shows the fit to Cyg~OB2~\#11. In
general all lines are reproduced correctly. There is a slight
underprediction of the cores of H$\gamma$\ and \ion{He}{ii}~$\lambda$4541, a
problem that was also pointed out by \cite{herrero92} and
\citetalias{herrero02}. Possibly this is due to too much filling in of
the predicted profiles by wind emission. Part of the
\ion{He}{ii}~$\lambda$4541 discrepancy might be related to problems in the
theoretical broadening functions \citep[see][]{repolust05}.
The parameters obtained for this object, with exception of \mbox{$\dot{M}$}, are
in agreement with the parameters derived by \citetalias{herrero02}.
With our automated method a mass loss rate lower by $\sim$0.1~dex was
obtained. Note that due to this lower value the behaviour of this
object in terms of its modified wind momentum
(cf. Sect.~\ref{sec:wind-param}) is in better accord with that of the
bulk of the stars investigated in this paper.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_4_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#4.}
\label{fig:cob2_4_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_10_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#10. The emission feature in the core of H$\alpha$\ was not
included in the fit. A subsequent test which did include this
feature in the fit yielded the same parameters except for a small
increase of \mbox{$\dot{M}$}\ with 0.04~dex.}
\label{fig:cob2_10_lines}
\end{figure*}
\paragraph{Cyg~OB2~\#8C}
The best fit for Cyg~OB2~\#8C is shown in Fig.~\ref{fig:cob2_8C_lines}.
Again, with exception of the mass loss rate, the parameters we obtain
for this object are in good agreement with the findings of
\citetalias{herrero02}. We do find a small helium abundance
enhancement, whereas \citetalias{herrero02} found a solar value.
To fit the P~Cygni type profile of \ion{He}{ii}\ at 4686~\AA, the automated
method used a \mbox{$\dot{M}$}\ which, compared to these authors, was higher by
approximately 0.15 dex. This higher value for the mass loss rate
results in a H$\alpha$\ profile which, at first sight, looks to be filled in
too much by wind emission. To assess whether this could correspond to
a significant overestimation of the mass loss rate, we lowered \mbox{$\dot{M}$}\
in the best fit model by hand until the core of H$\alpha$\ was fitted. In
Fig.~\ref{fig:cob2_8C_lines} the resulting line profiles are shown as
a dotted line for H$\alpha$\ and \ion{He}{ii}~$\lambda$4686, which for this fit are
the lines which visibly reacted to the change in mass loss rate. To
obtain this fit ``by eye'' of the H$\alpha$\ core, a reduction of \mbox{$\dot{M}$}\
with merely 0.05 dex was required, showing that the mass loss rate was
not overestimated by the automated method. Note that for this lower
mass loss rate the fit of the \ion{He}{ii}~$\lambda$4686 becomes
significantly poorer.
\paragraph{Cyg~OB2~\#8A}
\cite{debecker04} report this to be an O6\,I and O5.5\,III binary
system, therefore the derived parameters, in particular the
spectroscopically determined mass, should be taken with care. However,
as this paper also aims to test automated fitting we did pursue the
comparison of this object with \citetalias{herrero02}, who also
treated the system assuming it to be a single star.
We obtained a good fit for all lines except for the problematic
\ion{He}{ii}~$\lambda$4686 line. The best fit is shown in
Fig.~\ref{fig:cob2_8A_lines}. Again the H$\alpha$\ core is not fitted
perfectly. To determine how significant this small discrepancy is, we
fitted the H$\alpha$\ core in a similar manner as for Cyg~OB2~\#8C. To obtain
a good fit ``by eye'' we find that \mbox{$\dot{M}$}\ has to be increased by 0.04
dex, indicating the extreme sensitivity of H$\alpha$\ to \mbox{$\dot{M}$}\ in this
regime. The profiles corresponding to the increased mass loss rate
model are shown in Fig.~\ref{fig:cob2_8A_lines} as dotted lines. It is
clear that not only the ``classical'' wind lines react strongly to
\mbox{$\dot{M}$}. All synthetic hydrogen Balmer line profiles show significant
filling in due to wind emission for an increased mass loss,
deteriorating the fit quality. Also the \ion{He}{i}~$\lambda$4471 line shows
a decrease in core strength which is comparable to the decrease in the
H$\alpha$\ core. This reconfirms that in order to self-consistently
determine the mass loss rate all lines need to be fitted
simultaneously. Therefore, a small discrepancy in the H$\alpha$\ core
between the observed and synthetic line profile should not be
considered a decisive reason to reject a fit.
Except for \mbox{$Y_{\rm He}$}\ the obtained parameters agree with the results of
\citetalias{herrero02} within the errors given by these
authors. Similar to Cyg~OB2~\#8C we find a small helium enhancement.
\paragraph{Cyg~OB2~\#4}
The final fit to the spectrum of Cyg~OB2~\#4 is presented in
Fig.~\ref{fig:cob2_4_lines}. We obtained good fits for all lines, with
exception of the helium singlet at 4922~\AA, for which the core is
predicted too strong. However, recall that for this spectral type we
assigned a relatively low weight to this line, for reasons explained
in Sect.~\ref{sec:line-scheme}.
The parameters obtained from the fit agree well with the values of
\citetalias{herrero02}, with exception of $\beta$, for which we find a
value higher by $\sim$0.2. Note that \citetalias{herrero02} used a
fixed value for $\beta$ to obtain their fit, whereas in this case the
automated method self-consistently derived this parameter.
The spectroscopic mass implied by the obtained \mbox{$\log{{g}}$}\ value is
significantly smaller than the evolutionary mass of
Cyg~OB2~\#4. However, within the error bars (Sect.~\ref{sec:errors})
the two masses agree with each other.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{cob2_2_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
Cyg~OB2~\#2. Shown with dotted lines for H$\alpha$\ is the line profile of
the best fit model with a \mbox{$\dot{M}$}\ lower by a factor of 3. See text
for further comments.}
\label{fig:cob2_2_lines}
\end{figure*}
\paragraph{Cyg~OB2~\#10}
In the final fit for this object, shown in
Fig.~\ref{fig:cob2_10_lines}, there are two problematic lines. First,
for the \ion{He}{ii}~$\lambda$4686 line the core is predicted too
strong. Even though compared to \citetalias{herrero02} the situation
has improved considerably, the current version of {\sc fastwind}\ still has
difficulties predicting this line. Second, the predicted
\ion{He}{i}~$\lambda$4471 is too weak. Possibly this is connected to the
generalized dilution effect, for which we refer to \cite{repolust04}
for a recent discussion.
In Fig.~\ref{fig:cob2_10_lines} we also see that the H$\alpha$\ core of
Cyg~OB2~\#10 exhibits an emission feature. For this analysis we assumed
that it was nebular and, consequently, excluded it from the fit. To
test what the effect would be if this assumption was incorrect, a fit
was made with this feature included in the profile. It turned out that
the only parameter which was affected in this test was \mbox{$\dot{M}$}, which
showed a small increase of 0.04~dex.
\paragraph{Cyg~OB2~\#2}
For Cyg~OB2~\#2 the automated method could not self-consistently
determine $\beta$. Therefore, we fixed its value at a theoretically
predicted $\beta=0.8$ \citep[cf.][]{pauldrach86}. In
Fig.~\ref{fig:cob2_2_lines} the best fit is shown. We obtained good
fits for all lines. However, in the case of \ion{He}{i}~$\lambda$4471 we do
see a small underprediction of the forbidden component at 4469~\AA,
which is likely related to incorrect line-broadening functions.
For \mbox{$\log{{g}}$}\ and \mbox{$\dot{M}$}\ the obtained fit parameters differ considerably
from the findings of \citetalias{herrero02}. We first focus on mass
loss for which we obtain the relatively low rate of $1.63 \times
10^{-7}$~\mbox{$M_{\sun}{\rm yr}^{-1}$}\, with an error bar in the logarithm of this value of
$-0.15$ and $+0.12$ dex (see Tab.~\ref{tab:errors}), given the quoted
value of $\beta$. Our \mbox{$\dot{M}$}\ value is approximately a factor two
higher than the mass loss rate obtained by \citetalias{herrero02}.
These authors noted that it was not possible to well constrain the
mass loss rate of such a weak wind. Given the relatively modest errors
indicated by our automated fitting method, we conclude that at least
in principle our technique allows to determine mass loss rates of
winds as weak as that of Cyg~OB2~\#2. We have added the phrase ``in
principle'' as it assumes knowledge of $\beta$ and a very reliable
continuum normalization, which in this case is different from the one
used by \citetalias{herrero02} for the \ion{He}{ii}~4541, \ion{He}{ii}~4686 and
H$\alpha$\ lines. If this cannot be assured, then systematic errors may
dominate over the characteristic fitting error and the mass loss may
be much less well constrained. Assuming the continuum location to be
reliable, we illustrate the sensitivity of the spectrum to mass loss
rates of $\sim$$10^{-7}$ \mbox{$M_{\sun}{\rm yr}^{-1}$}\ by reducing the mass loss by a
factor of three. The H$\alpha$\ profile of this reduced mass loss model is
shown in Fig.~\ref{fig:cob2_2_lines} as a dotted line. Comparison of
these two cases shows that for winds of order $10^{-7}$ \mbox{$M_{\sun}{\rm yr}^{-1}$}\ the
line still contains considerable \mbox{$\dot{M}$}\ information. Interestingly, if
we would not take into consideration the line core of H$\alpha$\ in our
fitting method, we still recover the quoted mass loss to within three
percent. We note that for our higher mass loss this object appears to
behave well in the wind momentum luminosity relation (see
Sect.~\ref{sec:wind-param}), whereas \citetalias{herrero02} signal a
discrepancy when using their estimated \mbox{$\dot{M}$}\ value.
The \mbox{$\log{{g}}$}\ value obtained in this study is 0.36~dex larger than the
value obtained ``by eye'' by \citetalias{herrero02}. Judging from the
very good fits obtained, there is no indication that the automated fit
overestimated the gravity. The spectroscopic mass of 17.0~\mbox{$M_{\sun}$}\
implied by the larger \mbox{$\log{{g}}$}\ value is also in good agreement with the
evolutionary mass of Cyg~OB2~\#2, which is 18.7~\mbox{$M_{\sun}$}.
\subsubsection{Analysis of well studied dwarf OB-stars}
We have also reanalysed five well known and well studied dwarf
OB-stars, sampling the range of O spectral sub-types, in order to
probe a part of parameter space that is not well covered by the
Cyg~OB2\ stars. \HD217086 and \mbox{$\zeta$~Oph}, for instance, are fast rotators
with \mbox{$v_{\rm r}\sin i$}\ = 350 and 400 \mbox{km\,s$^{-1}$}, respectively (see also
Tab.~\ref{tab:data}). 10~Lac\ is a slow rotator, and \mbox{$\tau$~Sco}\ is a
very slow rotator. The latter two stars also feature very low mass
loss rates; moreover, the actual \mbox{$\dot{M}$}\ values of these stars are much
debated \cite[see][]{martins04}. \HD15629 is selected because it
appears relatively normal.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{hd15629_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for \HD15629.}
\label{fig:hd15629_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{hd217086_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for \HD217086.}
\label{fig:hd217086_lines}
\end{figure*}
\paragraph{\HD15629}
Apart from a slight over-prediction of the core strength in
\ion{He}{ii}~$\lambda$4200, a very good fit was obtained for this object. The
final fit is presented in Fig.~\ref{fig:hd15629_lines}. This object
has recently been studied by \citet[hereafter
\citetalias{repolust04}]{repolust04}. Compared to the parameters
obtained by these authors, we find good agreement except for \mbox{$T_{\rm eff}$},
\mbox{$\dot{M}$}\ and $\beta$. Note that in contrast to that study we do not find
a helium deficiency. However, the difference of 0.02 with respect to
the solar value obtained here is within the error quoted by
\citetalias{repolust04}.
The difference in wind parameters can be explained by the value of
$\beta=0.8$ assumed by \citetalias{repolust04}. Our self-consistently
derived value is $\beta = 1.18$. As the effect of $\beta$ on the
spectrum is connected to the mass loss rate through the velocity law
and the continuity equation, the lower \mbox{$\dot{M}$}\ obtained with the
automated method is explained.
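The coupling referred to here follows from the standard $\beta$-type
velocity law together with the equation of continuity,
\[
v(r) = v_\infty \left( 1 - \frac{R_\star}{r} \right)^{\beta},
\qquad
\dot{M} = 4\pi\, r^2 \rho(r)\, v(r) ,
\]
so that a larger $\beta$ (i.e.\ a slower wind acceleration) yields a
higher density in the H$\alpha$-forming region for a given \mbox{$\dot{M}$};
matching the same observed profile therefore requires a lower mass
loss rate. (We quote the simplest form of the $\beta$-law here; the
parametrisation implemented in {\sc fastwind}\ may differ in detail.)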
The 1.5~kK increase of \mbox{$T_{\rm eff}$}\ compared to \citetalias{repolust04} can
be attributed to the improved fit quality and the increase in \mbox{$\log{{g}}$}\
of 0.1~dex. An increase in \mbox{$\log{{g}}$}\ implies an increase in electron
density, resulting in an increase in the recombination rate. The
strength of both the \ion{He}{i}\ and \ion{He}{ii}\ lines depends on this rate, as
the involved levels are mainly populated through recombination.
Consequently, as \ion{He}{iii}\ is the dominant ionization stage in the
atmosphere of \HD15629 the strength of the \ion{He}{i}\ and \ion{He}{ii}\ lines will
increase when the recombination rate increases. To compensate for this
increase in line strength an increase in \mbox{$T_{\rm eff}$}, decreasing the
ionization fractions of \ion{He}{i}\ and \ion{He}{ii}, is necessary.
\paragraph{\HD217086}
With a projected rotational velocity of 350~\mbox{km\,s$^{-1}$}\ this object can be
considered to be a fast rotator, and our analysis of this object will
show how well the automated method can handle large \mbox{$v_{\rm r}\sin i$}. In
Fig.~\ref{fig:hd217086_lines} the best fit obtained with our method is
presented. We find that the large projected rotational velocity does
not pose any problem for the method, i.e.\ the fit quality of all the
lines fitted is very good.
With respect to the obtained parameters, again, these can be compared
to the work of \citetalias{repolust04}. In this comparison we find
considerable differences for \mbox{$T_{\rm eff}$}\ and \mbox{$\log{{g}}$}\ and a small difference
for \mbox{$Y_{\rm He}$}. The effective temperature found by the automated method is
2.1~kK higher. This is a significant increase, but when the \mbox{$\log{{g}}$}\
value obtained here is considered, this can be explained in a similar
manner as the \mbox{$T_{\rm eff}$}\ increase of \HD15629.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{tenlac_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for 10~Lac.}
\label{fig:tenlac_lines}
\end{figure*}
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{zoph_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for
\mbox{$\zeta$~Oph}.}
\label{fig:zoph_lines}
\end{figure*}
The best fit is obtained with a \mbox{$\log{{g}}$}\ value that is 0.29~dex higher
than the value from \citetalias{repolust04}. Judging from the line
profiles in Fig.~\ref{fig:hd217086_lines} there is no evidence for an
overestimation of \mbox{$\log{{g}}$}. This higher \mbox{$\log{{g}}$}\ removes the discrepancy
with the calibration of \cite{markova04} found by
\citetalias{repolust04} (see Fig.~17 in \citetalias{repolust04}). We
also note that, similar to Cyg~OB2~\#2, the increased \mbox{$\log{{g}}$}\ implies a
spectroscopic mass which agrees well with the evolutionary mass of
\HD217086 (cf.\ Tab.~\ref{tab:fit-results}). This is not the case for
the value determined by \citetalias{repolust04}, which points to a
clear discrepancy.
The considerable helium abundance enhancement found by
\citetalias{repolust04} is not reproduced by the automated
method. Even though this object is a rapid rotator, our fit indicates
a normal, i.e.\ solar, helium abundance.
\paragraph{10~Lac}
As in the case of Cyg~OB2~\#2, and for the remaining objects, the
wind is too weak to self-consistently determine $\beta$. Therefore,
again a value of $\beta=0.8$ was assumed.
The photospheric parameters obtained for 10~Lac\ agree very well
with the results of \citetalias{herrero02}. The best fit to the
observed spectrum is shown in Fig.~\ref{fig:tenlac_lines}. Whereas
\citetalias{herrero02} find that the mass loss rate cannot be
constrained and only an upper limit of $10^{-8}$ \mbox{$M_{\sun}{\rm yr}^{-1}$}\ is found,
the automated method was able to self-consistently determine \mbox{$\dot{M}$}\ at
$6 \times 10^{-8}$ \mbox{$M_{\sun}{\rm yr}^{-1}$}, though with large error bars (see
Tab.~\ref{tab:errors}). Our error bar indicates that \mbox{$\dot{M}$}\ may be an
order of magnitude lower, i.e.\ it may still be consistent with the
\citetalias{herrero02} result.
\begin{figure*}
\centering
\resizebox{17.5cm}{!}{\includegraphics{tau_sco_lines.ps}}
\caption{Same as Fig.~\ref{fig:cob2_7_lines}, however for \mbox{$\tau$~Sco}.}
\label{fig:tausco_lines}
\end{figure*}
Various other authors have determined the mass loss rate of 10~Lac\
using different methods. These determinations range from $2\times
10^{-7}$ \citep{howarth89} down to $2\times 10^{-9}~\mbox{$M_{\sun}{\rm yr}^{-1}$}$
\citep{martins04}. Consequently, compared to these independent
determinations no conclusive answer can be given to the question of
whether the \mbox{$\dot{M}$}\ derived from the optical spectrum is correct. We
conclude that the mass loss rate of 10~Lac\ is anomalously low when
placed in context with the other dwarf stars studied here. For
instance, the dwarfs \mbox{$\zeta$~Oph}\ and \HD217086, which have luminosities
that are, respectively, lower and higher by $\sim$0.1~dex, both
exhibit a mass loss rate higher by several factors. In
Sect.~\ref{sec:wind-param} we will discuss this further in terms of
the wind-momentum luminosity relation.
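The quantity underlying that comparison is the modified wind momentum,
\[
D_{\rm mom} = \dot{M}\, v_\infty \left( R_\star / R_{\sun} \right)^{1/2} ,
\]
which radiation-driven wind theory predicts to scale with luminosity
approximately as $\log D_{\rm mom} = x \log ( L_\star / L_{\sun} ) +
\log D_{0}$; we quote the standard definition here for reference.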
\paragraph{\mbox{$\zeta$~Oph}}
The large \mbox{$v_{\rm r}\sin i$}\ of 400~\mbox{km\,s$^{-1}$}\ posed no problem for obtaining a good
fit. In Fig.~\ref{fig:zoph_lines} the best fit for \mbox{$\zeta$~Oph}\ is
presented. With the exception of the helium abundance, the comparison
with the results of \citetalias{repolust04} yields very good
agreement. Note that the mass loss rate obtained by these authors is
an upper limit, whereas in this study \mbox{$\dot{M}$}\ could be derived
self-consistently. With respect to \mbox{$Y_{\rm He}$}\ we do not find any evidence
for a significant overabundance of helium, in agreement with
\cite{villamariz05}.
\paragraph{\mbox{$\tau$~Sco}}
The best fit for \mbox{$\tau$~Sco}\ is presented in Fig.~\ref{fig:tausco_lines}.
All lines, including H$\delta$, which is not shown here, are reproduced
accurately. The photospheric parameters we obtained can be compared to
the work of \cite{schonberner88} and \cite{kilian91}, who both studied
\mbox{$\tau$~Sco}\ using plane-parallel models. \citeauthor{kilian91} found
\mbox{$T_{\rm eff}$}=31.7~kK and \mbox{$\log{{g}}$}=4.25, whereas \citeauthor{schonberner88}
obtained \mbox{$T_{\rm eff}$}=33.0~kK and \mbox{$\log{{g}}$}=4.15. The difference in \mbox{$T_{\rm eff}$}\
between the two studies is explained by the fact that in the latter
analysis no line blanketing was included in the models. Therefore, we
prefer to compare our \mbox{$T_{\rm eff}$}\ to the former investigation, with which it
agrees very well. In terms of the gravity we find good agreement with the
second study. The value obtained by \citeauthor{kilian91} seems rather
high. Given the almost perfect agreement between the synthetic line
profiles and the observations in Fig.~\ref{fig:tausco_lines}, the
reason for this discrepancy is unclear. On a side note, more recently
\cite{repolust05} analysed the infrared spectrum of this object. Their
findings do confirm our lower value, but they could not confirm the
enhanced helium abundance we find, owing to a lack of observed infrared
\ion{He}{ii}\ lines.
In the recent literature the mass loss rate usually adopted for
\mbox{$\tau$~Sco}\ is $9\times10^{-9}$~\mbox{$M_{\sun}{\rm yr}^{-1}$}, which is considerably smaller
than the $6.14\times10^{-8}$~\mbox{$M_{\sun}{\rm yr}^{-1}$}\ obtained in this study. However,
the former mass loss rate is an average value determined by
\cite{dejager88}, based on the mass loss rates independently found by
\cite{gathier81} and \cite{hamann81a}. Based on the UV resonance
lines, these two studies, respectively, determined \mbox{$\dot{M}$}\ to be
$7.4\times10^{-8}$ and $1.3\times10^{-9}$~\mbox{$M_{\sun}{\rm yr}^{-1}$}. So, they differ by
more than a factor of 50. The mass loss rate obtained with the
automated method is in reasonable agreement with that obtained by
\citeauthor{gathier81}. Our higher value is also supported by the
study of the infrared spectrum of \mbox{$\tau$~Sco}\ by \cite{repolust05} who
find $\mbox{$\dot{M}$} \simeq 2\times10^{-8}~\mbox{$M_{\sun}{\rm yr}^{-1}$}$. Detailed fitting of Br$\alpha$\
will likely clarify this issue.
\begin{figure*}
\centering
\resizebox{17cm}{!}{\includegraphics{error_an.eps}}
\caption{{\it Panel a:} fit diagram of \mbox{$T_{\rm eff}$}\ and \mbox{$\log{{g}}$}\ for
10~Lac. {\it Panel b:} Fitness as a function of \mbox{$T_{\rm eff}$}\ for $\mbox{$\log{{g}}$}\
= 4.0$. {\it Panel c:} Fitness distribution of the models calculated
during the fitting run of the automated method. {\it Panel d:}
Distribution of \mbox{$T_{\rm eff}$}\ in the models located within the global
optimum. The maximum variation of \mbox{$T_{\rm eff}$}\ within the global optimum,
which corresponds to the error estimate of this parameter, is
$\sim$900~K.}
\label{fig:error-an}
\end{figure*}
\section{Error analysis}
\label{sec:errors}
Here we will introduce our method of estimating errors on the
parameters derived with the automated method. This method is based on
properties of the distribution of the fitnesses of the models in
parameter space, which may seem conceptually different from classical
approaches of defining error bars (and in a sense it is). However, we
will demonstrate for the case of 10\,Lac that our error definition is
very comparable to what is routinely done in fit diagram approaches.
\subsection{Fit diagrams}
In a fit diagram method the error bar on \mbox{$T_{\rm eff}$}\ and \mbox{$\log{{g}}$}\ is derived
by investigating the simultaneous behaviour of these two parameters.
In panel {\it a} of Fig.~\ref{fig:error-an} the fit diagram of 10\,Lac
is presented adopting for all other parameters (save \mbox{$T_{\rm eff}$}\ and \mbox{$\log{{g}}$})
the best fit values obtained in our automated fitting. This diagram
was constructed by calculating a grid of {\sc fastwind}\ models in the
\mbox{$T_{\rm eff}$}-\mbox{$\log{{g}}$}\ plane, and evaluating for every line for every \mbox{$T_{\rm eff}$}\
which model, i.e.\ \mbox{$\log{{g}}$}, fits this line the best. The location where
the resulting fit curves intersect, corresponds to the best fit. This
best fit yields \mbox{$T_{\rm eff}$}\ = 36\,000~K and \mbox{$\log{{g}}$}\ = 4.0. Note that this
result was obtained without the use of our automated method. The error
can now be estimated from the dispersion of the fit curves
around this location. In panel {\it a} of Fig.~\ref{fig:error-an} this
is indicated by a box around the best fit location. The corresponding
error estimates are 1000~K in \mbox{$T_{\rm eff}$}\ and 0.1~dex in \mbox{$\log{{g}}$}.
The method described above cannot be applied to our automated fitting
method for two reasons. First, as we have defined the fit quality
according to Eq.~(\ref{eq:fitns}), this definition of fitness
compresses the fit curves of all individual lines in the fit diagram
to a single curve. In Fig.~\ref{fig:error-an} this curve is shown as a
thick dashed line. Although the curve runs through the best fit point,
no information about the dispersion of the solutions around this point
can be derived from it. The second reason lies in the multidimensional
character of the problem of line fitting. If one wanted to properly
estimate the error taking this multidimensionality into account, a
fit diagram should be constructed with a dimension equal to the
number of free parameters evaluated. In the case of our fits this
translates to the construction of a six-dimensional fit diagram.
\subsection{Optimum width based error estimates}
Even though we have argued that fit diagrams cannot be used with our
fitting method, it is possible to construct an error estimate which is
analogous to the use of these diagrams and does take the
multidimensionality of the problem into account. This can be done by
first realizing that the error box shown in Fig.~\ref{fig:error-an}
essentially is a {\em measure of the width of the optimum in parameter
space}, i.e.\ it defines the region in which models are located which
approximately have {\em the same fit quality}. This is illustrated in
panel {\it b}. There we show the one-dimensional fitness function in
the \mbox{$T_{\rm eff}$}-\mbox{$\log{{g}}$}\ plane for $\mbox{$\log{{g}}$}=4.0$. Indicated with dashed lines is
the error in \mbox{$T_{\rm eff}$}\ estimated using the fit diagram of
10~Lac. Confined between these lines is the region which corresponds
to the optimum as defined by the error box in panel {\it
a}. Consequently, the difference between maximum and minimum fitness
in this region defines the width of the optimum. Returning to the
general case, we can now invert the reasoning and state that the error
estimate for a given parameter is equal to the maximum variation of
this parameter in the group of best fitting models, i.e.\ the models
located within the error box. Consequently, by defining a group of
best fitting models in the automated fitting method, the {\em error
estimates for all free parameters} can be determined.
\begin{table*}
\caption{Error estimates for fit parameters obtained using the
automated fitting method and parameters derived from these. Denoted
by {\sc nd} are errors in \mbox{$v_{\rm turb}$}\ that reach up to the maximum
allowed value of \mbox{$v_{\rm turb}$}\ and, therefore, are formally not
defined. Uncertainties in the fit parameters result from the optimum
width based error estimates method. See text for details and
discussion.}
\label{tab:errors}
\begin{center}
\begin{tabular}{lccccclclll}
\hline\\[-9pt] \hline \\[-7pt]
Star & $\Delta$\mbox{$T_{\rm eff}$} & $\Delta$\mbox{$\log{{g}}_{\rm c}$} & $\Delta$\mbox{$R_{\star}$} & $\Delta \log \mbox{$L_{\star}$}$ & $\Delta$\mbox{$Y_{\rm He}$}
& \multicolumn{1}{c}{$\Delta$\mbox{$v_{\rm turb}$}} & $\Delta\log$ \mbox{$\dot{M}$} & \multicolumn{1}{c}{$\Delta$$\beta$} & \multicolumn{1}{c}{$\Delta$\mbox{$M_{\rm s}$}} & \multicolumn{1}{c}{$\Delta$\mbox{$M_{\rm ev}$}}\\[2pt]
& [kK] & [\mbox{cm\,s$^{-2}$}] & [\mbox{$R_{\sun}$}] & [\mbox{$L_{\sun}$}] & & \multicolumn{1}{c}{[\mbox{km\,s$^{-1}$}]} & [\mbox{$M_{\sun}{\rm yr}^{-1}$}] &
& \multicolumn{1}{c}{[\mbox{$M_{\sun}$}]} & \multicolumn{1}{c}{[\mbox{$M_{\sun}$}]}\\[1pt]
\hline \\[-9pt]
Cyg~OB2~\#7 & $^{-1.0}_{+1.5}$ & $^{-0.08}_{+0.06}$ & $\pm$0.7 &
$\pm$0.07 & $^{-0.02}_{+0.03}$ & \hspace{6pt}$^{-14.9}_{\rm +ND}$ & $^{-0.05}_{+0.03}$ & $^{-0.04}_{+0.09}$ & \hspace{4pt}$^{-15}_{+12}$ & \hspace{4pt}$^{-7 }_{+7}$\\[3.5pt]
Cyg~OB2~\#11 & $^{-0.6}_{+0.4}$ & $^{-0.07}_{+0.13}$ & $\pm$1.1 &
$\pm$0.05 & $^{-0.01}_{+0.03}$ & \hspace{6pt}$^{-4.0}_{\rm +ND}$ & $^{-0.03}_{+0.06}$ & $^{-0.05}_{+0.02}$ & \hspace{4pt}$^{-15}_{+27}$ & \hspace{4pt}$^{-3 }_{+4}$\\[3.5pt]
Cyg~OB2~\#8C & $^{-1.3}_{+1.1}$ & $^{-0.10}_{+0.14}$ & $\pm$0.7 & $\pm$0.07 & $^{-0.02}_{+0.04}$ & \hspace{6pt}$^{-0.2}_{+10.9}$ & $^{-0.07}_{+0.04}$ & $^{-0.05}_{+0.10}$ & \hspace{4pt}$^{-10}_{+14}$ & \hspace{4pt}$^{-4 }_{+4}$\\[3.5pt]
Cyg~OB2~\#8A & $^{-0.4}_{+1.7}$ & $^{-0.05}_{+0.13}$ & $\pm$1.3 &
$\pm$0.09 & $^{-0.04}_{+0.04}$ & \hspace{6pt}$^{-17.7}_{\rm +ND}$ & $^{-0.07}_{+0.03}$ & $^{-0.04}_{+0.11}$ & \hspace{4pt}$^{-15}_{+32}$ & \hspace{4pt}$^{-10}_{+8}$\\[3.5pt]
Cyg~OB2~\#4 & $^{-0.3}_{+1.5}$ & $^{-0.04}_{+0.21}$ & $\pm$0.7 &
$\pm$0.09 & $^{-0.02}_{+0.03}$ & \hspace{6pt}$^{-3.0}_{\rm +ND}$ & $^{-0.10}_{+0.05}$ & $^{-0.05}_{+0.21}$ & \hspace{4pt}$^{-3 }_{+15}$ & \hspace{4pt}$^{-3 }_{+3}$\\[3.5pt]
Cyg~OB2~\#10 & $^{-0.8}_{+1.0}$ & $^{-0.12}_{+0.16}$ & $\pm$1.5 &
$\pm$0.07 & $^{-0.02}_{+0.03}$ & \hspace{6pt}$^{-7.0}_{\rm +ND}$ & $^{-0.13}_{+0.08}$ & $^{-0.15}_{+0.19}$ & \hspace{4pt}$^{-19}_{+26}$ & \hspace{4pt}$^{-4 }_{+4}$\\[3.5pt]
Cyg~OB2~\#2 & $^{-0.8}_{+1.2}$ & $^{-0.14}_{+0.13}$ & $\pm$0.6 & $\pm$0.08 & $^{-0.01}_{+0.03}$ & \hspace{6pt}$^{-2.3}_{+2.4}$ & $^{-0.15}_{+0.12}$ & - & \hspace{4pt}$^{-7 }_{+6 }$ & \hspace{4pt}$^{-1 }_{+2}$\\[3.5pt]
\HD15629 & $^{-0.3}_{+0.7}$ & $^{-0.05}_{+0.07}$ & $\pm$1.9 & $\pm$0.12 & $^{-0.01}_{+0.03}$ & \hspace{6pt}$^{-8.4}_{+7.6}$ & $^{-0.13}_{+0.10}$ & $^{-0.10}_{+0.27}$ & \hspace{4pt}$^{-13}_{+14}$ & \hspace{4pt}$^{-5 }_{+7}$\\[3.5pt]
\HD217086 & $^{-0.5}_{+0.9}$ & $^{-0.08}_{+0.07}$ & $\pm$1.2 & $\pm$0.13 & $^{-0.02}_{+0.02}$ & \hspace{6pt}$^{-4.9}_{+2.9}$ & $^{-0.12}_{+0.18}$ & $^{-0.25}_{+0.16}$ & \hspace{4pt}$^{-10}_{+10}$ & \hspace{4pt}$^{-3 }_{+3}$\\[3.5pt]
10~Lac\ & $^{-0.9}_{+0.8}$ & $^{-0.12}_{+0.13}$ & $\pm$1.7 & $\pm$0.17 & $^{-0.02}_{+0.02}$ & \hspace{6pt}$^{-3.8}_{+4.1}$ & $^{-0.98}_{+0.39}$ & - & \hspace{4pt}$^{-16}_{+16}$ & \hspace{4pt}$^{-2 }_{+4}$\\[3.5pt]
\mbox{$\zeta$~Oph} & $^{-0.7}_{+0.7}$ & $^{-0.05}_{+0.16}$ & $\pm$1.3 &
$\pm$0.13 & $^{-0.02}_{+0.04}$ & \hspace{6pt}$^{-6.2}_{\rm +ND}$ & $^{-0.28}_{+0.15}$ & - & \hspace{4pt}$^{-7 }_{+11}$ & \hspace{4pt}$^{-2 }_{+2}$\\[3.5pt]
\mbox{$\tau$~Sco} & $^{-0.8}_{+0.5}$ & $^{-0.14}_{+0.09}$ & $\pm$0.5 & $\pm$0.09 & $^{-0.02}_{+0.04}$ & \hspace{6pt}$^{-2.2}_{+2.4}$ & $^{-0.99}_{+0.22}$ & - & \hspace{4pt}$^{-6 }_{+4 }$ & \hspace{4pt}$^{-1 }_{+1}$\\[3.5pt]
\hline
\end{tabular}
\end{center}
\end{table*}
We define the group of best fitting models as the group of models that
lie within the global optimum. Put differently, the width of the
global optimum in terms of fitness defines the group of best fitting
models. Identifying and, consequently, measuring this width is
facilitated by the nature of the GA, i.e.\ selected reproduction,
incorporated in our fitting method. Due to this selected reproduction
the exploration through parameter space results in a mapping of this
space in which regions of high fit quality, i.e.\ the regions around
local optima and the global optimum, are sampled more
intensively. Consequently, if we rank all models of all generations
calculated during a fitting run according to their fitness, the
resulting distribution will peak around the locations of the
optima. In the case of the global optimum the width of this peak,
extending down from the maximum fitness found, is, analogously to the
width of the error box used in a fit diagram, a direct measure of the
width of the optimum. Consequently, this width depends on the quality
of the data, i.e.\ it will be broader or narrower for, respectively,
low and high signal-to-noise ratios, and on the degeneracy between the fit
parameters. Therefore, {\em the error estimates of the individual
parameters are equal to the maximum variations of these parameters for
all models contained in the peak corresponding to the global optimum}.
In panel {\it c} of Fig.~\ref{fig:error-an} the distribution of the
models according to their fitness calculated during the fitting run of
10~Lac\ using the automated method is shown. The fitnesses are
normalized with respect to the highest fitness and only the top half
of the distribution is shown. In this distribution two peaks are
clearly distinguishable. The most pronounced peak is located at $F
\approx 0.9$ and corresponds to the region around the global
optimum. A second peak, corresponding to a region around a secondary
optimum, is located at $F \approx 0.83$. To derive the error on the
fit parameters we estimate the total width of the global optimum for
10~Lac\ to be $\sim$0.15\footnote{In general this is not a fixed
number. Considering all programme stars we find the width of the
global optimum to be within the range $\sim$0.1 to $\sim$0.2.}, i.e.\
the range of $F=0.85...1.0$ corresponds to the width of the
optimum. In panel {\it d} of Fig.~\ref{fig:error-an} we show the
resulting distribution of \mbox{$T_{\rm eff}$}\ of the models within this global
optimum. In this figure we see that the maximum variation, hence the
error estimate, is $\sim$900~K, which is in good agreement with the
value derived using the fit diagram of 10~Lac. For \mbox{$\log{{g}}$}\ we
find an error estimate of $\sim$0.1~dex, likewise very similar to
the value obtained with this diagram. The exact values as well as
error estimates for all fit parameters of all objects are given in
Tab.~\ref{tab:errors}. It is important to note that our error analysis
method also allows for an error estimate of parameters to which the
spectrum does not react strongly. For 10~Lac\ this is clearly the
case for the mass loss rate, for which we find large error bars.
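For illustration, the optimum-width procedure can be sketched as
follows (a minimal sketch operating on an invented model population;
the function and variable names are our own and not those of the
actual implementation):

```python
def optimum_width_errors(models, param, width=0.15):
    """Estimate the error on `param` from the models inside the global optimum.

    `models` is a list of dicts, each with a 'fitness' entry and one entry
    per free parameter.  Fitnesses are normalized to the best model; every
    model whose normalized fitness lies within `width` of the maximum is
    taken to belong to the global optimum, and the error estimate is the
    maximum variation of `param` within that group.
    """
    f_max = max(m['fitness'] for m in models)
    optimum = [m for m in models if m['fitness'] / f_max >= 1.0 - width]
    values = [m[param] for m in optimum]
    return min(values), max(values)

# Toy population whose fitness peaks near Teff = 36 kK (values invented)
models = [
    {'fitness': 1.00, 'teff': 36.0},
    {'fitness': 0.95, 'teff': 36.4},
    {'fitness': 0.88, 'teff': 35.5},
    {'fitness': 0.70, 'teff': 33.0},  # below the cut: outside the optimum
]
lo, hi = optimum_width_errors(models, 'teff')
print(lo, hi)  # prints: 35.5 36.4, i.e. a maximum variation of ~0.9 kK
```

The width of 0.15 adopted above corresponds to the value estimated for
10~Lac; as stated in the footnote, it varies between $\sim$0.1 and
$\sim$0.2 over the programme stars.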
\subsection{Derived parameters}
In Tab.~\ref{tab:errors} the errors on the derived parameters were
calculated based on the error estimates of the fit parameters. Here we
will elaborate on their derivation.
The error in the stellar radii is dominated by the uncertainty in the
absolute visual magnitude. In the case of the Cyg~OB2\ objects we adopt
these to be 0.1\mbox{$^{\rm m}$}, following the work of \cite{massey91}. For
\HD15629, \HD217086 and \mbox{$\zeta$~Oph}\ we use the uncertainty of
0.3\mbox{$^{\rm m}$}\ as given by \citetalias{repolust04}. The distances to 10~Lac\ and
\mbox{$\tau$~Sco}\ were measured by Hipparcos. Therefore, for these two objects
we adopt the errors based on these measurements, which are 0.4 and
0.2\mbox{$^{\rm m}$}, respectively. Together with the uncertainty in \mbox{$T_{\rm eff}$}\ the
uncertainty in \mbox{$R_{\star}$}\ is calculated according to Eq.~(8) of
\citetalias{repolust04}, where we used the largest absolute
uncertainty in \mbox{$T_{\rm eff}$}\ for a given object.
To correct the surface gravity for centrifugal forces, a
correction following \cite{herrero92} was applied to the gravity
determined from the spectral fits. This corrected value is given in
Tab.~\ref{tab:fit-results}. As shown by \citetalias{repolust04} this
correction has a non-negligible effect on the error in the resulting
\mbox{$\log{{g}}_{\rm c}$}. Consequently, we used their estimate to calculate the total
error estimate of \mbox{$\log{{g}}_{\rm c}$}\ as given in Tab.~\ref{tab:errors}. Using
this error together with the uncertainty in \mbox{$R_{\star}$}\ the resulting
uncertainty in the spectroscopic mass was calculated.
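As an illustration of this correction and the spectroscopic mass
derived from it, consider the following sketch (our schematic
rendering of the \cite{herrero92} approach, with invented stellar
parameters; the exact prescription used in the analysis may differ in
detail):

```python
import math

G = 6.674e-8       # gravitational constant [cgs]
R_SUN = 6.957e10   # solar radius [cm]
M_SUN = 1.989e33   # solar mass [g]

def log_g_centrifugal(log_g_spec, vsini_kms, r_rsun):
    """Add the mean centrifugal acceleration (v sin i)^2 / R to the
    spectroscopic gravity and return the corrected log g_c."""
    g_cent = (vsini_kms * 1.0e5) ** 2 / (r_rsun * R_SUN)
    return math.log10(10.0 ** log_g_spec + g_cent)

def spectroscopic_mass(log_gc, r_rsun):
    """Spectroscopic mass in solar units from M = g_c R^2 / G."""
    return 10.0 ** log_gc * (r_rsun * R_SUN) ** 2 / (G * M_SUN)

# Invented example: a fast rotator with log g = 3.7, v sin i = 350 km/s,
# R = 9 R_sun; the correction raises log g_c by roughly 0.14 dex here.
log_gc = log_g_centrifugal(3.7, 350.0, 9.0)
print(round(log_gc, 2), round(spectroscopic_mass(log_gc, 9.0), 1))
```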
For the calculation of the uncertainty in the stellar luminosity, we
consistently adopted the largest absolute error in \mbox{$T_{\rm eff}$}. The
resulting $\Delta \log \mbox{$L_{\star}$}$ as well as the uncertainty in \mbox{$T_{\rm eff}$}\
have an effect on the evolutionary mass. We have estimated errors for
this quantity using the error box spanned by $\Delta \log \mbox{$L_{\star}$}$ and
$\Delta \log \mbox{$T_{\rm eff}$}$.
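Since the luminosity scales as $L_\star \propto R_\star^{2}\,T_{\rm
eff}^{4}$, this propagation can be sketched as follows (a schematic
illustration with invented input values, not the exact pipeline code):

```python
import math

LN10 = math.log(10.0)

def delta_log_l(r, dr, teff, dteff):
    """Propagate radius and effective-temperature uncertainties into the
    luminosity: L ~ R^2 Teff^4 gives
        Delta log L = 2 Delta log R + 4 Delta log Teff,
    adopting the largest absolute uncertainties as described in the text."""
    dlog_r = dr / (r * LN10)        # Delta log R from Delta R
    dlog_t = dteff / (teff * LN10)  # Delta log Teff from Delta Teff
    return 2.0 * dlog_r + 4.0 * dlog_t

# Invented example: R = 10 +/- 1.7 R_sun and Teff = 36000 +/- 900 K
print(round(delta_log_l(10.0, 1.7, 36000.0, 900.0), 2))  # prints: 0.19
```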
\section{Comparison with previous results}
\label{sec:comp}
\begin{figure}[t]
\centering
\resizebox{8.8cm}{!}{
\includegraphics{teff_comp.ps}}
\caption{Comparison of the effective temperatures obtained using
automated fits (horizontal axis) and ``by eye'' fits. On the
vertical axis the ratio of automated relative to ``by eye''
temperature determination is given. The dashed lines correspond to a
four percent error usually adopted for ``by eye'' determined
values.}
\label{fig:teff_comp}
\end{figure}
In this section we will compare the results obtained with our
automated fitting method with those from ``by eye'' fits (relevant
references to the comparison studies are given in the previous
section). This does not constitute a one-to-one comparison of the
automated and ``by eye'' approach as this would require the use of
identical model atmosphere codes as well as the same set of spectra,
moreover, with identical continuum normalization. Potential
differences can therefore not exclusively be attributed to the less
bias-sensitive automated fitting method. However, as we have applied
our method to a sizeable sample of early-type stars, its automated
nature does ensure that it is the most homogeneous study to
date, i.e.\ one free of at least some of the biases involved in
conventional analyses.
\subsection{Effective temperature}
In Fig.~\ref{fig:teff_comp} a comparison of the effective temperatures
determined in this study with \mbox{$T_{\rm eff}$}\ values obtained with ``by eye''
fits, is presented. Indicated with dashed lines are the four percent
errors usually adopted for ``by eye'' fitted spectra. With the
exception of the outlier \HD217086 at 38.1~kK, the agreement is very
good and no systematic trend is visible. From this plot we can
conclude that the \mbox{$T_{\rm eff}$}\ obtained with the automated fit is at least
as reliable as the temperatures determined in the conventional way.
\subsection{Gravities}
\label{sec:gravities}
In many cases the gravity obtained with the automated procedure is
significantly higher than the values obtained with the conventional
``by eye'' fitted spectra. This is shown in Fig.~\ref{fig:logg_comp},
where we show as a function of the gravities obtained in this study
the differences with the ``by eye'' determined values. Indicated with
dashed lines in this figure is the 0.1~dex error in \mbox{$\log{{g}}$}\ that is
often assigned to a ``by eye'' fitting of the hydrogen Balmer
line wings. It is important to note that this plot shows that there is
no obvious trend in the differences, i.e.\ no systematic increase as a
function of \mbox{$\log{{g}}$}\ appears to be present.
\begin{figure}[t]
\centering \resizebox{8.8cm}{!}{ \includegraphics{logg_comp.ps}}
\caption{Gravities obtained with automated fits (horizontal axis)
are compared to gravities determined from ``by eye'' fits. The
vertical axis gives the difference of the logarithm of the two
gravity determinations. Indicated by dashed lines are the 0.1 dex
errors usually adopted for gravities determined ``by eye''.}
\label{fig:logg_comp}
\end{figure}
It is clear, however, that there are three outliers for which previous
gravity determinations yield values that are at least 0.2 dex
lower. These are, in order of increasing gravity (as determined in this
study): Cyg~OB2~\#2, Cyg~OB2~\#7 and \HD217086. For all three cases,
previous spectroscopic mass determinations result in values that are
about a factor of two less than the corresponding evolutionary masses.
One reason for these discrepant gravity values can be traced to a
difference between automated and ``by eye'' fitting. In ``by eye''
fitting, it is customary to prevent the theoretical line flux in the
wings of Balmer lines -- specifically that at the position in the
observed line wing where the profile curvature is maximal -- from
falling below the observed flux. This constraint has been used by
\citetalias{herrero02} for the Cyg~OB2\ stars; for \HD217086, we could
not verify whether this was the case. The automated method does not
apply this constraint. Therefore, as it strives for a maximum fitness,
it tends to fit the curve through the noise in the signal as much as
possible. This yields a higher gravity.
A second reason is connected to the multidimensional nature of the
optimization problem. ``By eye'' fitting may not find the optimum fit,
as in general it cannot deal simultaneously and sufficiently
adequately with all the free parameters of the problem.
Consequently, some of the ``by eye'' fitted spectra do not correspond
to the best fit possible. A good example in which this appears to be
the case is \HD217086. With the automated fit we not only obtained a
gravity that is higher by $\sim$0.3~dex, but also an effective
temperature higher by 2.1~kK compared to the results of
\citetalias{repolust04}. Consequently, as the ionization structure of
the atmosphere depends heavily on this temperature, so does the
gravity one obtains from a spectral fit at this temperature. As
\citetalias{repolust04} obtained a gravity for a significantly lower
effective temperature, the gravity obtained from their spectral fit
likely corresponds to the value from a local optimum in parameter
space.
\subsection{Helium abundance and microturbulence}
This analysis is the first in which the helium abundance and the
microturbulent velocity have been treated as {\em continuous} free
parameters. In the studies of \citetalias{herrero02} and
\citetalias{repolust04} only two possible values for the
microturbulent velocity were adopted. For the helium abundance an
initial solar abundance was adopted, which was modified when no
satisfactory fit could be obtained for this abundance. Consequently, a
comparison with these studies as was done for e.g.\ the gravities, is
not possible. Instead we will only discuss whether the obtained values
of these parameters are reasonable and comment on possible
correlations with other parameters.
The helium abundances given in Tab.~\ref{tab:fit-results} show that no
extreme values were needed by the fitting method to obtain a good
fit. An exception to this may be \mbox{$Y_{\rm He}$}=0.21 obtained for
Cyg~OB2~\#7. However, as discussed earlier this value is still
significantly smaller than the \mbox{$Y_{\rm He}$}=0.3 obtained by
\citetalias{herrero02}. With respect to a possible relation between
the helium abundance and other parameters, only a small correlation
between \mbox{$T_{\rm eff}$}\ and \mbox{$Y_{\rm He}$}\ is found for the supergiants. For these
objects it appears (cf.\ Tab.~\ref{tab:fit-results}) that the helium
abundance increases with increasing effective temperature. However, as
we only analysed six supergiants, further investigation using a larger
sample needs to be undertaken.
Also for the microturbulent velocities no anomalous values were needed
to fit the spectra. The large error bars in the turbulent velocity
quoted in Tab.~\ref{tab:errors}, especially for the supergiants, show
that the profiles are not very sensitive to this parameter. This is
consistent with the study of \cite{villamariz00} and
\citetalias{repolust04}. The {\sc nd} entries given for some of the
positive errors in the table indicate that they reach up to the
maximum allowed value of \mbox{$v_{\rm turb}$}, which is 20 \mbox{km\,s$^{-1}$}. Therefore, they
are formally not defined. The fact that some of the small scale
turbulent velocities are close to this maximum value may indicate that
they represent lower limits, though, again, this likely reflects that
they are poorly constrained.
No correlation of the microturbulence with any of the other parameters
is found, in particular not between \mbox{$v_{\rm turb}$}\ and \mbox{$\log{{g}}$}, nor between \mbox{$v_{\rm turb}$}\ and
\mbox{$Y_{\rm He}$}. Various authors have hinted at such correlations
\citep[e.g.][]{kilian92}.
\subsection{Wind parameters}
\label{sec:wind-param}
The straightforward comparison of the mass loss rates obtained with
the automated method with values determined from spectral fits ``by
eye'' is shown in Fig.~\ref{fig:mdot_comp}. With exception of \mbox{$\tau$~Sco}\
at $\log\mbox{$\dot{M}$} = -7.2$, for which the mass loss rate determined by
\cite{gathier81} from UV line fitting serves as a comparison, all mass
loss rates are compared to values determined from H$\alpha$\ fitting. For
this comparison we assume an error of 0.15~dex in the ``by eye''
determined values. This uncertainty corresponds to a typical error
obtained from H$\alpha$\ fitting and is shown in Fig.~\ref{fig:mdot_comp} as
a set of dashed lines. With exception of 10~Lac\ and \mbox{$\tau$~Sco}, for
which the mass loss rate determination is uncertain, this error is
also comparable to the errors obtained with the automated method.
\begin{figure}
\centering
\resizebox{8.8cm}{!}{
\includegraphics{mdot_comp.ps}}
\caption{Difference between mass loss rate obtained by the automated
  method (given by the horizontal axis) and values determined by eye.
A typical 0.15~dex error is indicated by the dashed lines. The two
  outliers at $\log \mbox{$\dot{M}$} \simeq -7.2$ and $\log \mbox{$\dot{M}$} \simeq
-6.8$, respectively, correspond to 10~Lac\ and Cyg~OB2~\#2.}
\label{fig:mdot_comp}
\end{figure}
Two objects show a relative increase in \mbox{$\dot{M}$}\ which is much larger
than the typical error. These are 10~Lac\ at $\log \mbox{$\dot{M}$} \simeq -7.3$
and Cyg~OB2~\#2 at $\log \mbox{$\dot{M}$} \simeq -6.8$. In the case of the latter
we showed that the increase is due to a more efficient use of wind
information stored in the line profiles by the automated method, which
improves the relation of Cyg~OB2~\#2 with respect to the wind-momentum
relation (see Sect.~\ref{sec:wlr}).
With respect to 10~Lac\ we already mentioned that a range of more
than two orders of magnitude in mass loss rate has been found in
different studies. Here we have made the comparison with the upper
limit found by \citetalias{herrero02}, which corresponds to one of the
lowest \mbox{$\dot{M}$}\ determined for this object. Had we compared
our findings to the higher value obtained by \cite{howarth89},
10~Lac\ would be at $\Delta \mbox{$\dot{M}$} = -0.5$, i.e.\ the situation in
Fig.~\ref{fig:mdot_comp} would be reversed. Consequently, the large
difference for 10~Lac\ shown in this figure cannot be assigned to an
error in the automated method, but rather reflects our limited
understanding of this object.
All in all, we can conclude that the general agreement between mass
loss rates obtained with the automated method and ``by eye''
determinations is very good.
\section{Implications for the properties of massive stars}
\label{sec:implic}
With our automated method we have analysed a sizeable sample of early
type stars in a homogeneous way, which allows a first discussion of
the implications the newly obtained parameters may have on the mass
and modified wind-momentum luminosity relation (WLR) of massive
stars. A thorough discussion however needs to be based on a much
larger sample, therefore at this point we keep the discussion general
and the conclusions tentative.
\subsection{On the mass discrepancy}
\begin{figure}
\centering \resizebox{8.8cm}{!}{\rotatebox{270}{
\includegraphics{mass_comp.ps}}}
\caption{Spectroscopic masses derived in this study compared to
evolutionary masses from \cite{schaller92}. With the gravities
obtained from the automated fits no mass discrepancy is found and
no systematic deviation between the spectroscopically derived
masses and the evolutionary predicted masses can be observed.}
\label{fig:mass_comp}
\end{figure}
The so called mass discrepancy was first noticed by
\cite{herrero92}. These authors found that the spectroscopic masses,
i.e.\ masses calculated from the spectroscopically determined gravity,
were systematically smaller than the masses predicted by evolutionary
calculations. The situation improved considerably with the use of
unified stellar atmosphere models \citep[e.g.][]{herrero02}. However,
as pointed out by \cite{repolust04}, for stars with masses lower than
50~\mbox{$M_{\sun}$}\ a milder form of the mass discrepancy still appears to
persist.
Does the automated fitting method, employing the latest version of
{\sc fastwind}, help in resolving the mass discrepancy? In
Fig.~\ref{fig:mass_comp} we present a comparison of the spectroscopic
masses calculated with the gravities obtained in this study, with
masses derived by interpolating evolutionary tracks of
\cite{schaller92}. It is clear that with the new gravities the
situation is very satisfying. All objects have spectroscopic and
evolutionary masses which agree within the error bars.
For stars with masses below 50~\mbox{$M_{\sun}$}\ a milder form of the mass
discrepancy (as found by \citetalias{repolust04}; see their Fig.~20)
could still be present, but with the present data no systematic offset
between the two mass scales can be discerned. Though we feel it may
be premature to conclude that the present analysis shows that the mass
discrepancy has been resolved, our results point to a clear
improvement.
\subsection{Wind-momentum luminosity relation}
\label{sec:wlr}
The modified stellar wind momentum (MWM) versus luminosity relation
offers a meaningful way to compare observed wind properties with
aspects and predictions of the theory of line driven winds (see
\citealt{kudritzki00} for a comprehensive discussion). Without going
into any detail, the modified wind momentum $\mbox{$D_{\rm mom}$} = \mbox{$\dot{M}$} \mbox{$v_{\infty}$}
R_\star^{1/2}$ is predicted to be a power law of stellar luminosity.
\begin{equation}
\log \mbox{$D_{\rm mom}$} = x \log (\mbox{$L_{\star}$}/\mbox{$L_{\sun}$}) + \log D_{\circ}~,
\end{equation}
where $x$, the inverse of the slope of the line-strength distribution
function corrected for ionization effects \citep{puls00}, is expected
to be a function of spectral type and metal abundance, and $D_{\circ}$
is a function of metallicity and possibly luminosity class
\citep{markova04}. The advantageous property of \mbox{$D_{\rm mom}$}\ is that it is
not very sensitive to the stellar mass.
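To make the quantity concrete, the modified wind momentum can be
evaluated directly from its definition. The following is purely our
own illustrative sketch (rounded constants, invented stellar
parameters), with $R_\star$ expressed in solar radii inside the square
root, consistent with the units used in the figures of this paper:

```javascript
// Illustrative sketch (not code from this paper): evaluate
//   D_mom = Mdot * v_inf * sqrt(R/Rsun)
// in cgs units (g cm s^-2) from commonly used input units.

const MSUN_G = 1.989e33; // solar mass in grams (rounded)
const YEAR_S = 3.156e7;  // seconds per year (rounded)

function logDmom(mdotMsunYr, vinfKms, rRsun) {
  const mdot = (mdotMsunYr * MSUN_G) / YEAR_S; // mass loss rate in g/s
  const vinf = vinfKms * 1e5;                  // terminal velocity in cm/s
  return Math.log10(mdot * vinf * Math.sqrt(rRsun));
}
```

For invented but typical O-star values, `logDmom(1e-6, 2000, 15)`
comes out near 28.7, of the order of the values plotted in
wind-momentum diagrams.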
The limited number of stars studied in this paper is clearly
insufficient to disentangle subtleties in the \mbox{$D_{\rm mom}$}\ vs. \mbox{$L_{\star}$}\
relation. However, it is interesting to compare the observed and
predicted modified wind momentum, as well as to discuss the location
of 10~Lac\ -- a notorious outlier.
\begin{figure}[t]
\centering
\resizebox{8.8cm}{!}{
\includegraphics{mwm_comp.ps}}
\caption{Modified wind momentum (MWM) in units of
[\mbox{g\,cm\,s$^{-2}$\mbox{$R_{\sun}$}}] of the objects fitted with the
automated method (solid dots). The solid line, giving the
wind-momentum luminosity relation (WLR), corresponds to the
regression of the modified wind momenta. Given by the dashed line is
the predicted WLR of \cite{vink00}.}
\label{fig:mwm_comp}
\end{figure}
Figure~\ref{fig:mwm_comp} shows this comparison between derived and
theoretical modified wind momentum. Using all programme stars to
construct an empirical linear curve in the units of this diagram gives
the following relation
\begin{equation}
\log \mbox{$D_{\rm mom}$} = (1.88 \pm 0.09) \log (\mbox{$L_{\star}$}/\mbox{$L_{\sun}$}) + (18.59 \pm 0.52)~.
\end{equation}
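The regression behind such a relation is an ordinary least-squares fit
in the $(\log L, \log D_{\rm mom})$ plane. As an illustration only
(this is not the code used in this work, and the input below is
synthetic), the fit can be written as:

```javascript
// Minimal ordinary least-squares fit of
//   log Dmom = x * log(L/Lsun) + log D0
// points: array of [logL, logDmom] pairs.
function fitWLR(points) {
  const n = points.length;
  let sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (const [x, y] of points) {
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  const slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
  const intercept = (sy - slope * sx) / n;
  return { x: slope, logD0: intercept };
}
```

Feeding it points lying exactly on a line with slope 1.88 and
intercept 18.59 returns those coefficients, as expected.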
Within the given errors this relation is equal to the theoretical WLR
predicted by \cite{vink00}, who found $x=1.83$ and $\log \mbox{$D_{\rm mom}$} =
18.68$. Note that the low luminosity objects ($\log \mbox{$L_{\star}$}/\mbox{$L_{\sun}$}
\lesssim 5.5$) also follow the average relation. Therefore, the newly
obtained mass loss rates do not show the discrepancy found by
\cite{puls96} and \cite{kudritzki00}, but confirm the work of
\citetalias{repolust04}. These authors found the low luminosity
objects to follow the general trend, based on upper limits they
obtained for the mass loss rates, whereas our new method is sensitive
enough to determine these self-consistently.
In order to investigate the effect of the anomalously low \mbox{$D_{\rm mom}$}\
obtained for 10~Lac, we also constructed a WLR excluding
this object. We found that for this new relation the parameters $x$
and $\log D_{\circ}$ changed by only $\sim$0.02 and $\sim$0.01,
respectively, reflecting the large error bars found for this object.
Previous investigations by \cite{markova04} and
\citetalias{repolust04} have found the WLR to be a function of
luminosity class. Whereas the former study finds a steeper WLR for the
supergiants compared to the dwarfs, the latter finds the opposite
(though \citetalias{repolust04} remark that the subset of Cyg~OB2\
stars seems to behave more in accordance with the theoretical
result). In our sample no obvious separation is visible. In particular
note the two objects overlapping at $\log \mbox{$L_{\star}$}/\mbox{$L_{\sun}$} = 4.9$ in
Fig.~\ref{fig:mwm_comp}, which are the dwarf \mbox{$\zeta$~Oph}\ and the supergiant
Cyg~OB2~\#2. To investigate a possible separation in more detail, a
separate WLR was constructed for the Cyg~OB2\ supergiants. The
resulting values of the parameters obtained are $x = 1.79 \pm 0.14$
and $\log D_{\circ} = 19.12 \pm 0.80$. The decrease in $x$
qualitatively confirms the work of \citeauthor{markova04}. However,
our sample might well be too small, from a statistical point of view,
to firmly conclude whether a real separation exists. Therefore, this
question has to be postponed until we have analysed a larger sample.
\section{Summary, conclusions and future work}
We have presented the first method for the automated fitting of
spectra of massive stars with stellar winds. In this first
implementation, a set of continuum-normalized optical spectral lines
is fitted to predictions made with the fast performance non-LTE model
atmosphere code {\sc fastwind}\ by \cite{puls05}. The fitting method itself
is based on the genetic algorithm {\sc pikaia}\ by \cite{charbonneau95},
which was parallelized in order to handle the thousands of {\sc fastwind}\
models which have to be calculated for an automated fit. Concerning
the automated method we can draw the following conclusions:
\begin{enumerate}
\item [{\it i)}] The method is robust. In applying the method to a
number of formal tests, to the study of seven O-type stars in
Cyg~OB2, and to five Galactic stars including extreme rotators
and/or stars with weak winds (few times $10^{-8}$ \mbox{$M_{\sun}{\rm yr}^{-1}$}) the
fitting procedure did not encounter convergence problems.
\item [{\it ii)}] Using the width of the global optimum in terms of
fitness, defining the group of best fitting models, we are able
to define error estimates for all of the six free parameters of
the model (\mbox{$T_{\rm eff}$}, \mbox{$\log{{g}}$}, helium over hydrogen abundance, \mbox{$v_{\rm turb}$},
\mbox{$\dot{M}$}\ and $\beta$). These errors compare well with errors
adopted in ``by eye'' fitting methods.
\item [{\it iii)}] For the investigated dataset our automated fitting
method recovers mass-loss rates down to $\sim$ $6 \times 10^{-8}
\mbox{$M_{\sun}{\rm yr}^{-1}$}$ to within an error of a factor of two. We point out
that even for such low mass-loss rates it is {\em not} only the
        core of the hydrogen H$\alpha$\ line that serves as a mass-loss
        diagnostic. When ignoring this core the GA still recovers
\mbox{$\dot{M}$}, showing that the GA is also sensitive to indirect effects
of a change in \mbox{$\dot{M}$}\ on the atmospheric structure as a
whole. However, for the method to fully take advantage of this
information a very accurate continuum normalization is required.
\item [{\it iv)}] Though we have so far tested our method for O-type
stars and early B-type dwarf stars, the method can also be
applied to B and A supergiants when atomic models of diagnostic
lines (such as \ion{Si}{iii} and \ion{Si}{iv}) are implemented
into the analysis.
\end{enumerate}
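As an aside, the genetic-algorithm optimization underlying these
conclusions can be caricatured in a few dozen lines. The following toy
sketch is entirely ours (it is {\em not} {\sc pikaia}, and every
numerical choice in it is invented); it evolves a population of
parameter vectors toward the minimum of a chi-squared-like merit
function, i.e.\ toward maximum fitness:

```javascript
// Toy genetic algorithm (our own sketch, not pikaia):
// chi2   - merit function to minimize, taking a parameter vector
// bounds - array of [lo, hi] pairs, one per free parameter
function geneticFit(chi2, bounds, { popSize = 60, generations = 120 } = {}) {
  const rand = (lo, hi) => lo + Math.random() * (hi - lo);
  let pop = Array.from({ length: popSize }, () =>
    bounds.map(([lo, hi]) => rand(lo, hi)));

  for (let g = 0; g < generations; g++) {
    pop.sort((a, b) => chi2(a) - chi2(b));        // fittest first
    const parents = pop.slice(0, Math.floor(popSize / 2)); // keep best half
    const children = [];
    while (parents.length + children.length < popSize) {
      const p1 = parents[Math.floor(Math.random() * parents.length)];
      const p2 = parents[Math.floor(Math.random() * parents.length)];
      const child = p1.map((v, i) => {
        let c = Math.random() < 0.5 ? v : p2[i];  // uniform crossover
        if (Math.random() < 0.2) {                // mutation, clipped to bounds
          const [lo, hi] = bounds[i];
          c = Math.min(hi, Math.max(lo,
            c + 0.05 * (hi - lo) * (Math.random() - 0.5)));
        }
        return c;
      });
      children.push(child);
    }
    pop = parents.concat(children);
  }
  pop.sort((a, b) => chi2(a) - chi2(b));
  return pop[0]; // best-fitting parameter vector found
}
```

A real implementation adds many refinements (chromosome encoding,
selection schemes, adaptive mutation rates; see
\citealt{charbonneau95}), but the select-crossover-mutate loop is the
essence of the method.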
\noindent
We have re-investigated seven O-type stars in the young cluster
Cyg~OB2\ and compared our results with the study by
\cite*{herrero02}. The \citetalias{herrero02} study uses an earlier
version of {\sc fastwind}\ and a ``by eye'' fitting procedure. The only
difference between the two studies in terms of the treatment of the
free parameters is that \citetalias{herrero02} did not treat the
microturbulent velocity and the helium over hydrogen abundance ratio
as continuous free parameters. For the former they adopted only two
possible values; for the latter an initial solar abundance was
adopted, which was modified only when no satisfying fit could be
obtained with it. We have also compared
the results of an automated fitting to five early-type dwarf stars to
further investigate the robustness of our method for stars with high
rotational velocities and/or low mass loss rates. With respect to weak
winds we refer to conclusion {\it iii}. Regarding large \mbox{$v_{\rm r}\sin i$}\
values, we find that these do not pose problems for the automated
method. This is reflected in conclusion {\it i}. Concerning the
spectral analysis of the entire sample we can draw the following
conclusions:
\begin{enumerate}
\item [{\it v)}] For almost all parameters we find excellent agreement
with the results of \citetalias{herrero02} and
\citetalias{repolust04}, which, we note, make use of a previous
version of {\sc fastwind}\ and an independent continuum
normalization. The quality of our fits (in terms of fitness,
which is a measure for the $\chi^2$ of the lines) is even better
than obtained in these prior studies.
\item [{\it vi)}] In three cases we find a significantly higher
surface gravity (by up to 0.36~dex). We identify two
possible causes for this difference that may be connected to the
difference between automated and ``by eye'' fitting. First,
comparison of the two methods indicates that in fitting the
Balmer line wings the latter method places essentially infinite
        weight on the observed flux at the point of maximum curvature
of the wing profile. The automated method does not do
this. Second, as the automated method is a multidimensional
optimization method it may simply find a better fit to the
overall spectrum. In at least one case this implied a higher
temperature and significantly higher gravity.
\item [{\it vii)}] A comparison of our derived masses with those
predicted by evolutionary calculations does not show any
systematic discrepancy. Such a discrepancy was first noted by
        \cite{herrero92}, though it was partly resolved when model
atmospheres improved (e.g.\ see \citetalias{herrero02}). Still,
with state-of-the-art models a mild form of a mass discrepancy
remained for stars with masses below 50 \mbox{$M_{\sun}$}\ \citepalias[e.g.\
see][]{repolust04}. The automated fitting approach in
combination with the improved version of {\sc fastwind}\ does not
find evidence for a mass discrepancy, although we remark that a
truly robust conclusion, particularly for stars between 20 and
50 solar masses, may require the investigation of a larger
sample.
\item [{\it viii)}] The empirical modified wind momentum relation
constructed on the basis of the twelve objects analysed in this
        study agrees to within the error bars with the theoretical MWM
        relation based on the \cite{vink00} predictions of mass loss
rates.
\end{enumerate}
\noindent
This first implementation of a genetic algorithm combined with the
fast performance code {\sc fastwind}\ already shows the high potential of
automatic spectral analysis. With the current rapid increase in
observations of early-type massive stars the need for an automated
fitting method is evident. We will first use our method to analyse the
$\sim$100 O-type and early B-type stars observed in the VLT large
programme {\em FLAMES Survey of Massive Stars} \citep{evans05} in the
Galaxy and the Magellanic Clouds in a homogeneous way. Future
development of the automated fitting method is likely to be in
conjunction with the further development of {\sc fastwind}. Improvements
will include the modeling of: near-infrared lines (see e.g.\
\citealt{lenorzer04} and \citealt{repolust05}), optical CNO lines (see
e.g. \citealt{trundle04}), and possibly UV resonance lines. Additional
model parameters that may be constrained within an automated approach
include a depth dependent profile for the microturbulent velocity and
small scale clumping. Within the current implementation, most likely
the method will also be able to constrain the terminal flow velocity
of A-type supergiants \citep{mccarthy97, kudritzki99}.
\acknowledgements{We would like to thank Chris Evans, Ian Hunter,
Stephen Smartt and Wing-Fai Thi for constructive discussions, and
Michiel Min for sharing his insights in automated fitting. M.R.M.\
acknowledges financial support from the NWO Council for Physical
Sciences. F.N. acknowledges PNAYA2003-02785-E and
AYA2004-08271-C02-02 grants and the Ramon y Cajal program.}
\bibliographystyle{aa}
Q: Jquery Countdown - Hide Multiple TD I'm working with JQUERY.COUNTDOWN - and I have a problem:
Is it possible to work with MULTIPLE INSTANCES and HIDE a TABLE ROW at the end of the countdown?
I have a table, and each row has a countdown like this:
-----------------------------------------------
item 1 expires in 10:10:20s [button: GO!]
-----------------------------------------------
item 2 expires in 12:20:33s [button: GO!]
-----------------------------------------------
item 3 expires in 22:08:53s [button: GO!]
Using:
HTML:
<div data-countdown="2016/01/01"></div>
<div data-countdown="2017/01/01"></div>
<div data-countdown="2018/01/01"></div>
JS:
$('[data-countdown]').each(function() {
var $this = $(this), finalDate = $(this).data('countdown');
$this.countdown(finalDate, function(event) {
$this.html(event.strftime('%D days %H:%M:%S'));
if (event.elapsed){
$(this).html('EXPIRED');};
});
});
I can control the "expires in XX:XX:XXs" text and change it to "EXPIRED", but there is a "GO!" button for each item, and this button remains visible.
So I need to hide the whole row.
I worked with the script and I can hide each row with this:
$("#Job3").countdown("2015/09/07 03:43:20", function(event) {
var format = '%H:%M:%S';
if(event.offset.days > 0) {
format = '%-d day%!d ' + format;
}
if (event.offset.seconds > 0) {
$("#Panel").show();
$("#NoJob").hide();
}
$(this).text(event.strftime(format));
if (event.elapsed){
$(this).html('Expired');
$("#TdJob3").hide();
if (($('table#Panel tr:visible').length) == 1) {
$("#Panel").hide();
$("#NoJob").show();
}}
});
#Panel - is the TABLE
#JobXX - is the countdown div
#TDJobXX - is the TD/ROW to hide
#NoJob - is an Alert message and appears only if ALL TD are hidden. (no job available)
The problem is that I will have thousands of rows, so creating a rule for each row is not feasible.
I need a solution that works with multiple instances and hides each row at the end of its countdown.
Any idea?
I don't know jQuery/JS very well, but I read all the docs and created the solution for a single countdown. :D
A: You can use jQuery's slideUp() to hide the rows whose timers have expired.
Since your target element for the timer is the div with id Job1, which is inside the <tr>, you need to traverse upwards to the <tr> and call slideUp() on it, like this, to hide the complete row.
JS CODE:
$(this).closest('tr').slideUp();
Ref: jQuery .slideUp()
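For completeness, the row-hiding decision itself needs no per-row rules. Below is a framework-free sketch of that logic (the helper names are ours, not part of jquery.countdown); the jQuery layer would call such helpers from the countdown's update events and then run slideUp() on expired rows:

```javascript
// Plain-JS sketch of the multi-row logic: each row carries its own
// deadline, so no per-row rule is needed.
// rows: [{ id, deadline }] with deadlines as millisecond timestamps.

// Compute the expiry state of every row at time `now`.
function rowStates(rows, now) {
  return rows.map(function (row) {
    return { id: row.id, expired: now >= row.deadline };
  });
}

// The "#NoJob" alert should be shown only when every row has expired.
function showNoJobPanel(states) {
  return states.every(function (s) { return s.expired; });
}
```

With jquery.countdown itself, the generic wiring is the `$('[data-countdown]').each(...)` loop from the question, extended with `$(this).closest('tr').slideUp()` inside the `event.elapsed` branch, followed by a check of `$('table#Panel tr:visible').length` to decide whether to show the "no job" panel.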
# How to find an antiderivative

The derivative operator takes an expression and returns its derivative. Antidifferentiation runs in the opposite direction: given some expression, we want to find what it could be the derivative of.

A function F is called an antiderivative of f on an interval I if F'(x) = f(x) for all x in I. For example, F(x) = 10x + 4 is an antiderivative of f(x) = 10.

To find the antiderivative of a rational fraction such as (1 + x + x^2)/x, a calculator uses its decomposition into simple elements (partial fractions). Online antiderivative calculators return the result quickly and can also evaluate definite integrals, such as integrating from 1 to 4.
## The power rule

The general antiderivative of f(x) = x^n (for n ≠ -1) is

F(x) = x^(n + 1)/(n + 1) + C,

where C is an arbitrary constant. When you find an indefinite integral, you always add a "+ C" (called the constant of integration) to the solution. That is because there is no single answer: the antiderivatives of a function are the set of all vertical translations of any one of them.

Example: the most general antiderivative of f(x) = x^(-3) is F(x) = x^(-2)/(-2) + C = -1/(2x^2) + C. Likewise, the antiderivative of 2x is x^2 + C.

One simple method of solving is to guess and check: make a guess and check whether its derivative gives back the original function.

The learning objectives behind these exercises are:

4.10.1 Find the general antiderivative of a given function.
4.10.2 Explain the terms and notation used for an indefinite integral.
4.10.3 State the power rule for integrals.
4.10.4 Use antidifferentiation to solve simple initial-value problems.

## The substitution method

1. Set u equal to the argument of the main function.
2. Take the derivative of u with respect to x.
3. Solve for dx.
4. Make the substitutions.
5. Antidifferentiate by using the simple reverse rule.
6. Substitute the original argument (for example x^2) back in for u, coming full circle.

## Worked examples

* Antiderivative of (3/x^2) - (2/x^3): by the power rule, -3/x + 1/x^2 + C.
* Antiderivative of cos^2(x): using the identity cos^2(x) = (1 + cos 2x)/2, the result is (1/4)sin(2x) + (1/2)x + C.
* Antiderivative of 24 cos 24x: sin(24x) + C. Check your answers by differentiation.
* Antiderivative of ln x: use integration by parts, ∫u dv = uv - ∫v du, with u = ln x and dv = dx, which gives x ln x - x + C.
* Antiderivative of arctan x: first recall the derivative of arctan, d/dx arctan x = 1/(1 + x^2); integration by parts then gives x arctan x - (1/2)ln(1 + x^2) + C.

## An initial-value problem

Find the particular antiderivative satisfying f''(x) = 24x^3 - 6, f(0) = 1, f'(0) = -3.

Antidifferentiating once gives f'(x) = 6x^4 - 6x + C. To find C, use the other information given about f': plugging x = 0 into f'(x) gives f'(0) = 6(0)^4 - 6(0) + C = -3, so C = -3 and

f'(x) = 6x^4 - 6x - 3.

Antidifferentiating again gives f(x) = (6/5)x^5 - 3x^2 - 3x + D, and f(0) = 1 fixes D = 1.

The same idea underlies motion problems: since acceleration is the derivative of velocity, velocity is the antiderivative of acceleration. If you know the acceleration for all time and the starting velocity, you can figure out the velocity for all time.

On a graphing calculator, definite integrals can be evaluated with fnInt( (found under MATH); the syntax is fnInt(f(x), x, lower limit, upper limit). The TI-89 can also find antiderivatives symbolically.
Gave us about f ' ( 0 ) = x \u20133 include them are., please make sure that the domains *.kastatic.org and *.kasandbox.org are unblocked use its decomposition simple.","date":"2021-06-19 07:07:15","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.7179771661758423, \"perplexity\": 871.0031296677404}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-25\/segments\/1623487643703.56\/warc\/CC-MAIN-20210619051239-20210619081239-00310.warc.gz\"}"} | null | null |
\subsection{Reducing to an edge-coloring problem}
Let $G$ be a graph. The line graph of $G$ is denoted $L(G)$ and is defined as follows: the vertex set of $L(G)$ is $\eds{G}$, and $u,v\in \eds{G}$ are adjacent in $L(G)$ if and only if they share an end in $G$. We say that $G$ is a line graph if there exists a graph $H$ such that $G$ is isomorphic to $L(H)$.
Cao and Nemhauser characterized the h-perfectness of $L(G)$ (among other notions of perfectness) in terms of excluded subdivisions in $G$. We need a few more definitions.
Let $C_5^{+}$ be the graph defined by $\vts{C_5^{+}}=\set{1,2,\ldots,5}$ and $\eds{C_5^{+}}=\set{12,23,34,45,15,13}$. In other words, $C_5^{+}$ corresponds to the 5-circuit to which we added the chord $13$.
A graph $H$ is a subdivision of $G$ if it is obtained by replacing every edge $e$ with a path $P_e$ joining the ends of $e$, such that the paths $P_e$ ($e\in\eds{G}$) are pairwise internally disjoint. Furthermore, it is a totally odd subdivision if each $P_e$ has odd length.
A totally odd subdivision of $C_5^{+}$ is said to be an odd-$C_5^{+}$. Now, a graph $G$ is odd-$C_5^{+}$-free if it does not have a subgraph isomorphic to an odd-$C_5^{+}$.
\begin{thm}[Cao, Nemhauser]
Let $G$ be a graph. The following statements are equivalent:
\begin{itemize}
\item [i)] $L(G)$ is h-perfect,
\item [ii)] $G$ is \cfpfree.
\end{itemize}
\end{thm}
Hence, coloring the vertices of an h-perfect line graph corresponds to coloring the edges of a \cfpfree graph. Let us recall the basic terminology of edge-coloring.
A matching of a graph $G$ is a set of pairwise non-incident edges of $G$. A $k$-edge-coloring of $G$ is a set of $k$ pairwise-disjoint matchings of $G$ whose union is $\eds{G}$. We say that $G$ is $k$-edge-colorable if it admits a $k$-edge-coloring.
The chromatic index of $G$, denoted $\chi'(G)$, is the smallest integer $k$ such that $G$ has a $k$-edge-coloring. In particular, $\chi(L(G))=\chi'(G)$.
The fractional chromatic index of $G$, denoted $\chi_f'(G)$, is defined as the fractional chromatic number of $L(G)$: $\chi_f'(G)=\chi_f(L(G))$.
Using the above characterization of Cao and Nemhauser, Bruhn and Stein proved the round-up property for h-perfect line graphs in the unweighted case. Let us state their result in terms of edge-colorings:
\begin{thm}[Bruhn, Stein]
Every \cpfree graph $G$ satisfies $\chi'(G)=\ceil{\chi'_f(G)}$.
\end{thm}
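As a sanity check, consider the 5-circuit $C_5$, which contains no odd-$C_5^{+}$ (such a subgraph has at least six edges). Each of the five maximum matchings of $C_5$ has two edges and every edge belongs to exactly two of them, so assigning weight $1/2$ to each of these matchings shows $\chi'_f(C_5)\leq 5/2$; conversely, a matching of $C_5$ contains at most two of its five edges, whence $\chi'_f(C_5)\geq 5/2$. Therefore:
\[ \chi'(C_5)=3=\ceil{5/2}=\ceil{\chi'_f(C_5)}. \]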
Now, let us see how a problem of weighted coloring can be formulated as an edge-coloring problem.
Let $G$ be a graph and $c\in\zset_+^{\eds{G}}$ (recall that the vertices of $L(G)$ are the edges of $G$).
It is straightforward to check that $\chi(L(G),c)=\chi'(G_c)$, where $G_c$ denotes the graph obtained from $G$ by replacing each edge $e$ with $c(e)$ parallel copies of $e$.
Our proof of the weighted case does not use Seb\H{o}'s lemma and relies entirely on edge-colorings. The first step is to reformulate the statement in terms of edge-colorings, using the theorem of Cao and Nemhauser above.
\subsection{The proof}
We actually prove a slightly stronger result, theorem \myrefwp{thm-main} below.
Let $G$ be a graph. The multiplicity $\mu_G(e)$ of an edge $e$ of $G$ is the number of edges which are parallel to $e$ (including $e$).
If $\ecol$ is an edge-coloring of $G$, we will always identify its colors with the corresponding matchings of $G$. A matching $M$ \emph{covers} a vertex $v$ of $G$ if some edge of $M$ is incident with $v$; otherwise, $M$ \emph{misses} $v$.
\begin{lem}\label{lem-precol}
Let $G$ be a graph such that $\chi'(G)>\Delta(G)$ and let $e=uv$ be a critical edge of $G$. If $\ecol$ is an optimal edge-coloring of $G-e$, then:
\begin{itemize}
\item [i)] Every matching $M\in\ecol$ either covers $u$ or $v$,
\item [ii)] There exist distinct $A,B\in \ecol$ such that $A$ covers $u$ and misses $v$, whereas $B$ covers $v$ and misses $u$.
\end{itemize}
\end{lem}
\begin{dem}
We first prove i): by contradiction, if there exists a color $C$ of $\ecol$ which misses $u$ and $v$,
then we can extend $\ecol$ to an edge-coloring $\ecol'$ of $G$ by adding $e$ to $C$. Since $|\ecol|=|\ecol'|$, we have $\chi'(G)\leq \chi'(G-e)$: a contradiction with the criticality of $e$.
We now show that ii) holds. Let $\ecol_u$ (resp. $\ecol_v$) denote the set of $M\in \ecol$ covering $u$ (resp. $v$) and missing $v$ (resp. $u$). By the symmetry between $u$ and $v$, it is enough to prove that $\ecol_u\nsubseteq\ecol_v$ to conclude. Seeking a contradiction, suppose $ \ecol_u\se\ecol_v $.
Since $\ecol_u$ and $\ecol_v$ are disjoint by definition, this inclusion forces $\ecol_u=\vn$. Using i), every color of $\ecol$ then covers $v$; as the colors of $\ecol$ partition $\eds{G-e}$ and each of them contains exactly one edge incident with $v$, we get:
\[ \chi'(G-e)=|\ecol|=d_{G-e}(v) =d_{G}(v)-1. \]
Hence, $\chi'(G-e)\leq \Delta(G)-1$. But since $e$ is critical we have $\chi'(G-e)=\chi'(G)-1$. Hence $\chi'(G)\leq \Delta(G)$, which contradicts our initial assumption on $G$.
\end{dem}
Let $G$ be a graph and $M$ be a matching of $G$. A chain of $G$ is \emph{$M$-alternating} if its edges consecutively alternate between $M$ and $\eds{G}\sm M$.
A \emph{ring} of $G$ is an induced subgraph of $G$ whose underlying simple graph is a circuit. A ring is \emph{odd} if it has an odd number of vertices.
Let $G$ be a graph, $C$ an odd ring of $G$. A $C$-matching of $G$ is a matching $M$ such that $|\eds{C}\cap M|=\dfrac{|C|-1}{2}$ and $M$ misses a (necessarily unique) vertex of $C$.
Let $G$ be a graph. If $F\se\eds{G}$, let $\psbg{G}{F}$ denote the graph $(\vts{G},F)$.
We will often use the following classic exchange lemma.
\begin{lem}
Let $G$ be a graph, $\ecol$ be an edge-coloring of $G$ and $A,B$ be distinct elements of $\ecol$.
If $K$ is a connected component of $\psbg{G}{A\Delta B}$, then $ (\ecol\sm\set{A,B})\cup\set{A\Delta \eds{K},B\Delta \eds{K} } $ is an edge-coloring of $G$ which uses the same number of colors as $\ecol$. We will denote it by $\ecol^{K}$.
\end{lem}
Our main result is as follows:
\begin{thm}\label{thm-main}
Let $G$ be a graph such that $\chi'(G)>\Delta(G)$.
If $e$ is a critical edge of $G$, then one of the following statements holds:
\begin{itemize}
\item [i)] $e$ is contained in an odd-$C_5^+$ of $G$,
\item [ii)] $e$ is contained in an odd ring $R$ of $G$ such that:
\[ |\eds{R}|= (\chi'(G)-1)r+1, \]
where $r=\dfrac{|\vts{R}|-1}{2}$.
\end{itemize}
\end{thm}
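Before proving the theorem, let us illustrate case ii) on a small example: let $G$ be the triangle in which each edge is replaced by $t\geq 1$ parallel edges. Any two edges of $G$ are incident, hence $\chi'(G)=|\eds{G}|=3t>2t=\Delta(G)$ and every edge of $G$ is critical. Since $G$ has only three vertices, it contains no odd-$C_5^{+}$, and indeed $G$ itself is an odd ring $R$ with $r=1$ and:
\[ |\eds{R}|=3t=(\chi'(G)-1)\cdot r+1. \]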
\begin{dem}
Let us assume that $e$ is not contained in an odd-$C_5^+$ of $G$. We will prove ii).
Let $u$ and $v$ be the ends of $e$, $H:=G-e$ and $\ecol$ be an optimal edge-coloring of $H$. Notice that we are under the assumptions of \myref{lem-precol}.
By its second point, there exist distinct matchings $A,B\in\ecol$ such that $A$ covers $u$ and misses $v$, whereas $B$ covers $v$ and misses $u$.
Consider the connected component $P$ of $u$ in the graph $\psbg{H}{A\Delta B}$. It is either a chain or a circuit, but $B$ misses $u$ so it must be a chain.
If $v\notin\vts{P}$, then $(\ecol\sm\set{A,B})
\cup\set{A\Delta \eds{P},B\Delta\eds{P}}$ is an optimal edge-coloring of $H$ in which the color $A\Delta\eds{P}$ misses both $u$ and $v$. It can therefore be extended to an edge-coloring of $G$ with $\chi'(G-e)$ colors by adding $e$ to $A\Delta\eds{P}$: a contradiction with the criticality of $e$.
Hence $P$ is a $uv$-chain and the graph $L:=P+e$ is an odd circuit of $G$. It is also induced: if $f$ is a chord of $L$, then $L+f$ forms an odd-$C_5^+$ containing $e$, which contradicts our initial assumption on $e$.
Hence $L$ is contained in an odd ring $R$ of $G$. Let $r=(|R|-1)/2$. We claim that:
\begin{center}
\emph{$R$ contains exactly $r$ edges of each color $M$ of $\ecol$.}
\end{center}
Let us immediately show how this statement allows us to end the proof of the theorem: every edge of $R$ except $e$ belongs to one of the color classes of $\ecol$ and these are disjoint.
Thus, the claim implies $|E(R)|=|\ecol| r+1=\chi'(G-e)\cdot r+1$, and the equality of the theorem follows from $\chi'(G-e)=\chi'(G)-1$, which holds because $e$ is critical.
We now turn to the proof of the claim above.
Let $M\in\ecol$.
If $M$ is either $A$ or $B$, then $M$ has $r$ edges in $R$ because $|\vts{L}|=|\vts{R}|=2r+1$ and the edges of $P$ alternate between $A$ and $B$.
So let us henceforth assume that $M\notin\set{A,B}$.
Using the symmetry between $u$ and $v$, we may assume (without loss of generality) that $M$ covers $u$.
Let $K$ be the component of $u$ in $\psbg{G}{M\Delta B}$. The graph $K$ is a chain since $B$ is an $R$-matching of $G$ which misses $u$.
We claim that:
\begin{center}
\emph{the graph $K\cap R$ is an even chain.}
\end{center}
Indeed, if $K\cap R$ has more than one connected component, then $K$ contains an ear $P'$ of $L$ with both its end-edges in $M$. Hence $P'$ is an odd ear of $L$ (the edges of $K$ alternate between $M$ and $B$) and $L+P'$ forms an odd-$C_5^{+}$ of $G$ containing $e$: a contradiction. So $K\cap R$ is a chain.
Now, $B$ is an $R$-matching, so the end of $K\cap R$ other than $u$ (if it exists) must be covered by $B$ in $K\cap R$. Therefore $K\cap R$ has even length, and this ends the proof of the claim above. It follows that:
\[ |M'\cap \eds{R}|=|M\cap \eds{R}|, \]
where $M':=M\Delta \eds{K}$.
We now prove that $M'$ has exactly $r$ edges in $R$.
Let $B'=B\Delta \eds{K}$ and $\ecol'=(\ecol\sm\set{M,B})\cup\set{M',B'}$. Using the exchange lemma above, $\ecol'$ is an edge-coloring of $G-e$ which uses $|\ecol|=\chi'(G-e)$ colors; moreover, $M'$ misses $u$ (the $M$-edge of $u$ lies in $K$, and $B$ misses $u$). So $M'$ must cover $v$: we could otherwise extend $\ecol'$ to an edge-coloring of $G$ by adding $e$ to $M'$, which would contradict $\chi'(G-e)<\chi'(G)$.
Let $K'$ be the component of $v$ in $\psbg{G}{M'\Delta A}$. By construction, $A$ is an $R$-matching of $G$ which misses $v$.
Hence, we can repeat the argument of the claim above to show that $Q:=K'\cap R$ is a chain.
If $u\notin \vts{K'}$, then let $M''=M'\Delta \eds{K'}$, $A'=A\Delta \eds{K'}$ and $\ecol''=(\ecol'\sm\set{M',A})\cup\set{M'',A'}$.
As above, we apply the exchange lemma to show that $\ecol''$ is an edge-coloring of $G-e$ which uses $\chi'(G-e)$ colors. However, $M''$ misses both $u$ and $v$, so no edge of $M''$ is incident with $e$. It follows that $\ecol''$ can be extended to an edge-coloring of $G$ by adding $e$ to $M''$, which again contradicts $\chi'(G-e)<\chi'(G)$.
Hence, $u$ must belong to $K'$. Since $uv\notin M'$ (because $M'$ misses $u$), the only way for $Q$ to be a chain is that it coincides with the underlying simple graph of $R-e$. Therefore $|M'\cap \eds{R}|=|M'\cap \eds{Q}|=r$, because $Q$ is $M'$-alternating and of even length.
\end{dem}
For every graph $H$, let $\kappa(H)=\ceil{\chi'_f(H)}$. It is an easy exercise to show that $\kappa(H_1)\leq \kappa(H_2)$ whenever $H_1$ is a subgraph of $H_2$.
A graph $G$ is rounding-critical if $\chi'(G)>\kappa(G)$ and $\chi'(G-e)=\kappa(G-e)$ for every $e\in\eds{G}$.
\begin{thm}
Every edge of a rounding-critical graph is contained in an odd-$C_5^{+}$.
\end{thm}
\begin{dem}
Let $G$ be a rounding-critical graph. If $\eds{G}=\vn$, then the result is trivial. So let us assume that $|\eds{G}|\geq 1$ and let $e\in\eds{G}$. The edge $e$ is critical: indeed, $\chi'(G-e)=\kappa(G-e)\leq\kappa(G)<\chi'(G)$. Hence:
\[ \kappa(G-e)=\chi'(G-e)=\chi'(G)-1\geq \kappa(G)\geq\kappa(G-e). \]
So the inequalities above are equalities. In particular: $\kappa(G-e)=\kappa(G)$.
Seeking a contradiction, suppose that $e$ does not belong to an odd-$C_5^{+}$. By theorem \myrefwp{thm-main}, there exists an odd ring $R$ of $G$ such that $|E(R)|=(\chi'(G-e))r+1$, where $r=(|R|-1)/2$. Hence:
\[ \kappa(G-e)=\kappa(G)\geq \dfrac{|E(R)|}{r}=\chi'(G-e)+\dfrac{1}{r}>\chi'(G-e), \]
which contradicts $\chi'(G-e)=\kappa(G-e)$.
\end{dem}
The following corollary is a straightforward consequence of the theorem above.
\begin{thm}
If $G$ is a graph without an odd-$C_5^{+}$, then $\chi'(G)=\kappa(G)$.
\end{thm}
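Let us observe that the exclusion of odd-$C_5^{+}$'s cannot be dropped. The Petersen graph $P$ is cubic and bridgeless, so $\chi'_f(P)=3$ (by Edmonds' description of the matching polytope) and thus $\kappa(P)=3$, whereas $\chi'(P)=4$:
\[ \chi'(P)=4>3=\ceil{\chi'_f(P)}=\kappa(P). \]
Accordingly, $P$ contains an odd-$C_5^{+}$: take the outer 5-circuit together with the path of length 3 joining two of its vertices at distance 2 through the inner vertices.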
\subsection{Proof of theorem \ref{thm-ecol-form}}
An \emph{odd ring} of a graph $H$ is an induced subgraph of $H$ whose underlying simple graph is an odd circuit (which can be of length 3).
Let:
\[ \Gamma'(H)=\max\set{ \frac{2}{|\vts{R}|-1}|\eds{R}|\colon \text{$R$ is an odd ring of $H$} }. \]
The \emph{degree} of a vertex $v$ of $H$, denoted $d_H(v)$, is the number of edges of $H$ incident with $v$ (since $H$ can have multiple edges, $d_H(v)$ can be different from the number of neighbors of $v$ in $H$).
Let $\Delta(H)$ denote the largest degree of a vertex of $H$.
Let $e\in\eds{H}$. The graph $H-e$ is defined by $\vts{H-e}=\vts{H}$ and $\eds{H-e}=\eds{H}\sm\set{e}$ (notice that the other edges parallel to $e$ (if any) are not deleted).
The edge $e$ is \emph{critical} if $\chi'(H-e)<\chi'(H)$ (that is, $\chi'(H-e)=\chi'(H)-1$).
The main ingredient of the proof of theorem \ref{thm-ecol-form} is the following "concentration" lemma (whose proof is postponed to part 3.3):
\begin{lem}\label{thm-ben-crit}
Let $H$ be a graph such that $\chi'(H)>\Delta(H)$ and let $e\in\eds{H}$.
If $e$ is critical and is not an edge of an odd-$C_5^{+}$ of $H$, then there exists an odd ring $R$ of $H$ such that $e\in\eds{R}$ and:
\[ |\eds{R}|= r\cdot\chi'(H-e)+1, \]
where $r=\frac{|\vts{R}|-1}{2}$.
\end{lem}
We need one more result on the fractional chromatic index of a graph. In section 5, we will see that equality actually holds in the statement i) below for every \cfpfree graph. As a byproduct, we will obtain a new formula for the chromatic index of these graphs. We defer this formula for the sake of clarity, since only the lower bound is needed in the proof of theorem \ref{thm-ecol-form}.
\begin{prop}\label{prop-basic-propty}
Let $H$ be a graph. The following statements hold:
\begin{itemize}
\item[i)] $\chi_f'(H)\geq \max(\Delta(H),\Gamma'(H))$,
\item[ii)] for every subgraph $K$ of $H$, we have $\chi_f'(K)\leq\chi_f'(H)$.
\end{itemize}
\end{prop}
\begin{dem}
Let $\mathcal{M}(H)$ be the set of all matchings of $H$. By the duality theorem of linear programming, we have:
\[ \chi_f'(H)=\max\set{\sum_{e\in\eds{H}}x_e
\colon x\in\qset_+^{\eds{H}};
\, \sum_{e\in M}x_e\leq 1,\,\text{for every $M\in\mathcal{M}(H)$}}. \]
Let $v\in\vts{H}$ and let $R$ be an odd ring of $H$. Let $\delta(v)$ denote the set of edges of $H$ incident with $v$ and, for $F\se\eds{H}$, let $\chi^{F}$ denote the incidence vector of $F$.
Clearly, each matching of $H$ contains at most one edge of $\delta(v)$ and at most $\frac{|\vts{R}|-1}{2}$ edges of $R$. Hence, both $\chi^{\delta(v)}$ and $\frac{2}{|\vts{R}|-1}\chi^{\eds{R}}$ are feasible solutions of the linear program above, and this implies i).
Statement ii) follows from the fact that any optimal solution $x$ of the linear program above for $\chi_f'(K)$ can be extended to a feasible solution of the program for $\chi_f'(H)$, by setting $x_e=0$ for every $e\in\eds{H}\setminus\eds{K}$.
\end{dem}
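For instance, if $H$ is the 5-circuit $C_5$, then $\Delta(H)=2$ while $H$ itself is an odd ring, so $\Gamma'(H)=\frac{2}{4}\cdot 5=\frac{5}{2}$ and statement i) gives:
\[ \chi'_f(C_5)\geq\max\left(2,\tfrac{5}{2}\right)=\tfrac{5}{2}. \]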
We are now ready to prove theorem \ref{thm-ecol-form} (the proof of lemma \ref{thm-ben-crit} being the subject of the next part):
\begin{dem}[of theorem \ref{thm-ecol-form}]
For every graph $G$, let $\kappa(G)$ denote $\ceil{\chi_f'(G)}$.
Seeking a contradiction, let $H$ be an \cfpfree graph with $\chi'(H)\neq\kappa(H)$ and choose $|\eds{H}|$ minimum. We actually have $\chi'(H)>\kappa(H)$ (since $\chi'(G)\geq\kappa(G)$ clearly holds for every graph $G$).
By proposition \ref{prop-basic-propty}, $\chi'(H)>\Delta(H)$.
Let $e\in\eds{H}$ and put $H'=H-e$. We have $\chi'(H')=\kappa(H')$ because of the minimality of $H$.
Besides, proposition \ref{prop-basic-propty}.ii) implies that $\kappa(H')\leq\kappa(H)$, so $e$ must be a critical edge of $H$.
Since $H$ is \cfpfree, lemma \ref{thm-ben-crit} can be applied to $H$ and $e$.
Hence, $H$ has an odd ring $R$ such that $e\in\eds{R}$ and $|\eds{R}|=r\cdot\chi'(H')+1$, where $r=\frac{|\vts{R}|-1}{2}$.
By proposition \ref{prop-basic-propty}.i), $\kappa(H)\geq \frac{|\eds{R}|}{r}$.
Therefore, $\kappa(H)>\chi'(H')=\chi'(H)-1$, and since both sides are integers, $\kappa(H)\geq\chi'(H)$: this contradicts $\chi'(H)>\kappa(H)$.
\end{dem}
\section{Introduction}
\input{intro}
\section{Basic definitions and properties}
\input{part1}
\section{H-perfect line-graphs}
The purpose of this section is to prove theorem \ref{thm-ben-line-irp}.
In part 3.1, we state an edge-coloring result (theorem \ref{thm-ecol-form}) and show that it easily implies theorem \ref{thm-ben-line-irp} using a result of Cao and Nemhauser (which is a direct consequence of Edmonds' description of the matching polytope).
Part 3.2 is devoted to the proof of this edge-coloring statement. It relies on an auxiliary result (lemma \ref{thm-ben-crit}) whose proof is postponed to part 3.3.
\subsection{Reduction to an edge-coloring result}
\input{linegraphs}
\subsection{Proof of lemma \ref{thm-ben-crit}}
\input{proofline}
\section{T-perfect claw-free graphs}
Our purpose is to prove theorem \ref{thm-ben-cft-irp}. The first part gives an outline of the proof and hopefully clarifies why a new approach is needed compared to the unweighted case. The proofs of the two main lemmas are postponed to sections 4.2 and 4.3.
\subsection{How the proof works}
\input{clawfree}
\subsection{Proof of lemma \ref{lem-reduc}}
\input{proofclawfree}
\subsection{Proof of lemma \ref{lem-small-diamond}}
\input{proofclawfree2}
\section{Minmax formulae and algorithmic remarks}
\input{explicitform}
\input{conclusion}
\bibliographystyle{alpha}
Q: My bash script doesn't print the flags

I am not sure what is wrong with my bash script, as it doesn't print the given flags nor echo them within the case statement:
while getopts ":a:b:p:u" opts;
do
    case $opts in
        a) echo got an A flag;;
        b) echo got an B flag;;
        u) user=$OPTARGS echo $user;;
        p) pass=$OPTARGS echo $pass;;
        ?) echo I don\'t know what flag is this;;
    esac
done

echo user: $user pass: $pass
This is how I have called it:
bash-4.3$ ./functionexample.sh -p 123 -u mona
A: I got it fixed thanks to help from the IRC bash channel: the optstring needs a trailing colon after every option letter that takes an argument (so ":a:b:p:u:"), the variable set by getopts is $OPTARG (not $OPTARGS), and each assignment needs a ; before the echo:

while getopts ":a:b:p:u:" opts;
do
    case $opts in
        a) echo got an A flag;;
        b) echo got an B flag;;
        u) user=$OPTARG; echo $user;;
        p) pass=$OPTARG; echo $pass;;
        ?) echo I don\'t know what flag is this;;
    esac
done

echo user: $user pass: $pass
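To make the two fixes easy to test in isolation, here is a minimal self-contained variant of the same pattern (the function name `parse_opts` and the reduced flag set are mine, not from the original script). Every option letter that takes an argument is followed by a colon in the optstring, and `getopts` stores the argument in `$OPTARG`; the leading `:` enables silent error handling so the `?` and `:` cases can report problems themselves:

```shell
#!/bin/bash
# Hypothetical helper: parse -u <user> and -p <pass> into global variables.
parse_opts() {
  local opt OPTIND   # local OPTIND lets the function be called repeatedly
  while getopts ":u:p:" opt; do
    case $opt in
      u) user=$OPTARG;;
      p) pass=$OPTARG;;
      \?) echo "unknown flag: -$OPTARG";;
      :) echo "flag -$OPTARG requires an argument";;
    esac
  done
}

parse_opts -p 123 -u mona
echo "user: $user pass: $pass"
```

Running it prints `user: mona pass: 123`.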
A: This should work:

while getopts ":a:b:p:u:" opts
do
    case $opts in
        a) echo "got an A flag";;
        b) echo "got an B flag";;
        u) user="$OPTARG"
           # double quote the variables to prevent globbing and word splitting
           echo "$user"
           ;;
        p) pass="$OPTARG"
           # Passwords can begin with whitespace: if you don't double quote
           # "$pass" when echoing, word splitting discards it.
           echo "$pass"
           ;;
        ?) echo "I don't know what flag is this"
           # Double quoting the message also avoids escaping the single quote.
           ;;
    esac
done

Note that, besides the quoting, your optstring was missing the colon after u (it takes an argument) and the variable set by getopts is $OPTARG, not $OPTARGS.
Q: Wubi installation program for 14.04 reported a "Permission Denied" error

I downloaded ubuntu-14.04.1-desktop-amd64 and followed the instructions to install it onto my PC with 4 GB RAM and 114 GB of free disk space, running Windows 7 Professional. After I ran the wubi.exe installation program, it reported a permission denied error, and I couldn't find the log file in:

c:\users\00013851\appdata\local\temp\wubi-14.04-rev286.log

There's no such directory path. I have tried the 32-bit version but faced the same error.