\section{Introduction} We construct the brane representation for the reduction of gauge theory dualities from $4$D to $3$D. This analysis was started in~\cite{Amariti:2015yea}, where it was shown how to translate the dimensional reduction of dualities in terms of \D{}-- and \NS{}--branes in \tIIA supergravity. Here we elaborate on this picture, finding an algebraic description of the superpotential involved in the dimensional reduction which incorporates the generalization from unitary to real gauge groups. The brane picture gives a unified treatment of various dualities involving different gauge groups and matter content. It also allows a further reduction to pure $3$D dualities, via a double-scaling limit on the relative positions of some \D{}--branes and the radius of the circle. In this process an extra sector is created in the magnetic theories, reproducing the gauge theory duality in the pure $3$D limit in the brane picture. \bigskip Recent insight into the structure of supersymmetric field theories has been obtained by relating results in different dimensions. One example is the similarity between the electric-magnetic duality discovered by Seiberg in~\cite{Seiberg:1994pq} for four-dimensional \textsc{sqcd} and the three-dimensional dualities studied in~\cite{Karch:1997ux,Aharony:1997gp}. The dualities can indeed be connected by dimensional reduction, as discussed in~\cite{Aharony:2013dha} (see also~\cite{Niarchos:2012ah}). It turns out that as a necessary intermediate step of this reduction one needs to consider the duality on $\mathbb{R}^3 \times S^1$. At scales lower than the inverse radius of $S^1$ this gives rise to a new, effective, $3$D duality. The presence of the circle is manifest through the contributions of \ac{kk} monopoles. The limit to pure $3$D dualities, recovering \emph{e.g.\ }the results of~\cite{Karch:1997ux,Aharony:1997gp}, depends on the details of the gauge and matter content~\cite{Aharony:2013dha,Aharony:2013kma}.
\bigskip In this paper we study the brane construction of this reduction, giving a physical origin for the differences in the pure $3$D limit. We obtain the effective $3$D dualities by T--duality, where Euclidean \D1--branes reproduce the non-perturbative effects of the \ac{kk} monopoles. Equivalently these effects are captured by an algebraic formulation in terms of S--dual \F1--strings. Configurations creating the monopole superpotential are classified by affine Dynkin diagrams, and this superpotential is an affine Toda potential. As depicted in Figure~\ref{fig:new-orientifold}, we take the pure $3$D limit in the electric theory by moving some flavor branes to the mirror point \(x_3^\circ \) while sending the T--dual radius to infinity. The magnetic dual is obtained by an \ac{hw} transition, generating an additional gauge theory at $x^\circ_3$. This gauge theory is described by a sector of interacting singlets. It is necessary for reproducing the pure $3$D limit of the dualities. The brane construction is quite general and can also be applied to theories with real gauge groups and tensor matter, which require some extra treatment in the field theory analysis~\cite{Aharony:2013dha,Aharony:2013kma}. In brane language these theories are obtained by including orientifold planes. Orientifolds are straightforwardly incorporated in our picture. When the theory is put on the circle, a second orientifold plane is generated after T--duality at the mirror point $x^\circ_3$~\cite{Hanany:2000fq,Hanany:2001iy}. In the \acs{hw} transition the orientifold, carrying \D-brane charge, modifies the rank of the gauge groups both at $x_3=0$ and at $x_3=x^\circ_3$. By considering various brane realizations with orientifolds we recover the $3$D dualities with orthogonal or symplectic gauge groups and those with tensor matter~\cite{Aharony:1997gp,Karch:1997ux,Giveon:2008zn,Niarchos:2008jb, Orlando:2010uu,Orlando:2010aj,Willett:2011gp,Benini:2011mf,Kapustin:2011vz,Kim:2013cma,Aharony:2014uya}.
\begin{figure} \centering \includegraphics{NewOrientifold.pdf} \caption{Geometry of the compact direction. The possible orientifolds are depicted in red color. LHS: \tIIA circle of radius $r$. RHS: T--dual circle of radius $\alpha'/r$. The black arrowheads indicate the motion of the \D{}--branes to the mirror point \(x_3 = x_3^\circ \equiv \pi \alpha'/r \).} \label{fig:new-orientifold} \end{figure} The plan of this article is as follows. In Section~\ref{sec:review}, we recap the reduction of $\mathcal{N}=1$ dualities from $4$ to $3$ dimensions. In Section~\ref{sec:general}, we summarize the brane realization of the dimensional reduction. We discuss the generation of an affine Toda potential on the Coulomb branch variables when the theories are studied on a circle and its relation to the second orientifold appearing in the T--dual picture. Moreover, we explain the double-scaling limit and the reduction to pure three-dimensional dualities. In Section~\ref{sec:symplectic} we apply the brane picture to symplectic gauge theories with fundamental, antisymmetric and adjoint matter. In Section~\ref{sec:unitary-antisymmetric} we study the brane setup of unitary gauge groups with antisymmetric flavor. In Section~\ref{sec:orthogonal} we study orthogonal gauge groups; our analysis is at the level of the local properties and the gauge algebra. We conclude in Section~\ref{ref:conclusions} by outlining some open questions. \section{Brane reduction of dualities} \subsection{Remarks on the 4D/3D reduction} \label{sec:review} In this subsection we review some general field theoretical aspects of the reduction of \(\mathcal{N} = 1\) $4$D dualities to $\mathcal{N}=2$ dualities in $3$D. For a more complete review see Section~2.1 of~\cite{Amariti:2015yea} and the original work~\cite{Aharony:2013dha}. Here we just recall a few aspects which are important for our analysis. Connecting pairs of dual theories in $4$D to corresponding pairs in $3$D via dimensional reduction requires some care.
A consistent reduction has been obtained by studying the $4$D theory on a circle of finite radius $r$, where a non-perturbative superpotential is generated from \ac{kk} monopoles on $S^1$. We refer to this superpotential as $W_\eta$ in the rest of this paper. In order to preserve the $4$D duality in the dimensional reduction limit $r\rightarrow 0$, one needs to consider, in some cases, an RG flow triggered by real masses of order $\frac{1}{r}$. Furthermore, it has been argued in~\cite{Aharony:2013dha} that while some electric quarks are integrated out, one sometimes has to consider the magnetic theory in a particular vacuum. This magnetic vacuum corresponds to a large vev for the scalar field in the vector multiplet. The details of the reduction -- involving the superpotential $W_\eta$, the real mass flow and the non-trivial vacua -- depend on the nature of the gauge group and the matter content. We refer the reader to the papers~\cite{Aharony:2013dha,Aharony:2013kma,Csaki:2014cwa,Nii:2014jsa,Amariti:2013qea} for more details and explicit examples. Let us note that the $4$D dualities on a finite circle, with non-perturbative superpotentials $W_\eta$, give rise to new, effectively three-dimensional dualities, which are interesting in their own right. \bigskip To sum up, the aspects important for the forthcoming analysis are the superpotential $W_\eta$, the real mass flow and the associated vacuum structure in the magnetic theory upon shrinking the circle to zero size. In the following we describe the brane construction of this reduction. We provide an algebraic description of $W_\eta$ in terms of the gauge group structure. This allows for a generalization from unitary to real gauge groups. We also find the brane description of the real mass flow and the vacuum structure.
\begin{table} \centering \begin{tabular}{lcccccccccc} \toprule & 0 & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9 \\ \midrule \D4 & $\times$ & $\times$ & $\times$ & $\times$ & & & $\times$ & & & \\ \D6 & $\times$ & $\times$ & $\times$ & $\times$ & & & & $\times$ & $\times$ & $\times$ \\ \D6' & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & & $\times$ & & \\ \NS{} & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & & & & \\ \NS' & $\times$ & $\times$ & $\times$ & $\times$ & & & & & $\times$ & $\times$ \\ \(\O4^\pm\) & $\times$ & $\times$ & $\times$ & $\times$ & & & $\times$ & & & \\ \(\O6^\pm\) & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & $\times$ & & $\times$ & & \\ \bottomrule \end{tabular} \caption{Brane setup for the realization of the gauge theories of interest in this paper.} \label{tab:O4+_branes} \end{table} \subsection{The general strategy} \label{sec:general} In this section we first summarize the brane engineering of theories with four supercharges on $\mathbb{R}^3 \times S^1$. In the second part we provide a brane picture for reducing $4$D dualities to $3$D. There, for the sake of being explicit, we focus on the example of $U(N)$ gauge theories and fundamental matter. The analysis of real groups and tensor matter follows analogously and is the subject of Sections~\ref{sec:symplectic}, \ref{sec:unitary-antisymmetric} and~\ref{sec:orthogonal}. A brane description of theories on $\mathbb{R}^3 \times S^1$ is found \emph{e.g.\ }in~\cite{Amariti:2015yea}. Let us give a brief summary. The four-dimensional gauge theory is engineered by a \tIIA brane system of \D4--branes stretched between \NS{}-- and \NS'--branes as denoted in Table~\ref{tab:O4+_branes}. A dimensional reduction of field theories can be reproduced in this picture by T--dualizing along one compact, space-like dimension (say $x_3$).
There is a compact Coulomb branch (\textsc{cb}) which is parameterized by the scalars $\sigma_i$ in the vector multiplet and the dual photons $\phi_i$, where $i=1,\dots,\rank(G)$. The vev of the scalars $\sigma_i$, and hence the position on the \textsc{cb}, corresponds to the positions of the \D3--branes in $x_3$. In this configuration the \D3--branes repel each other. The force can be understood in terms of Euclidean \D1--branes stretched between the \NS~and the \D3--branes. These \D1--branes map in the field theory to monopoles, which generate an \ac{ahw} superpotential for the \textsc{cb} coordinates. The force of this superpotential maps to the repulsive force between branes. Note that there is a special Euclidean \D1--brane when the configuration is compact, as depicted in the left side of Figure~\ref{fig:Dynkin-An} (there the S--dual scenario with \F1--strings instead of \D1--branes is shown). This special \D1--brane connects the $1$st and the $N$th \D3--brane ``on the rear side'' of the compact $x_3$. In field theory this corresponds to an additional term in the superpotential. As mentioned in Section~\ref{sec:review}, a crucial role in the dimensional reduction of dualities is played by the superpotential $W_\eta$, appearing at finite circle radius. In the brane picture it is reproduced precisely by this special, winding \D1--brane. In the literature the ``regular'' \D1--branes are usually referred to as \ac{bps} monopoles and the ``special'' ones as \ac{kk} monopoles. \bigskip This concludes the summary of engineering gauge dynamics from branes as given in~\cite{Amariti:2015yea}. Now we want to discuss how the superpotential $W_\eta$ is given in terms of gauge group data. Stable configurations of branes, possibly in the presence of orientifolds, are in one to one correspondence with the Dynkin diagram of the gauge group. The Dynkin diagrams, in turn, are in one to one correspondence with the possible superpotentials $W_\eta$.
\begin{itemize} \item The fundamental (in the sense of~\cite{Weinberg:1979zt,Weinberg:1982ev}) \acs{bps} monopoles are labeled by the simple co-roots of the Lie co-algebra. For unitary gauge groups $G$ this corresponds to placing the \(i\)--th \D1--brane between the \(i\)--th and the \((i+1)\)--th \D3--brane (for $i=1,\dots,\rank(G)$). It is useful to study the S--dual configurations where \D1--branes become \F1--strings. In this picture the \D3--branes are still distributed on the circle and connected by \F1--strings, as depicted in the upper left corner of Figure~\ref{fig:Dynkin-An}. We now exploit the crucial fact that the spectrum of the allowed \acs{bps} \F1--strings is given by the simple roots of the corresponding Lie algebra~\cite{Garland:1988bv,Hanany:2001iy}. For a unitary gauge group the simple roots of the $A_N$ series correspond to $\sigma_i-\sigma_{i+1}$, \emph{i.e.} to the difference between the positions of two consecutive \D3--branes. We can include real gauge groups by adding orientifold planes (see Appendix~\ref{sec:orientifolds} for details). The allowed spectrum of \F1--strings is again given by the corresponding Dynkin diagrams, classified by the \(B_N, C_N\) and \(D_N\) series. Summing the contributions from the \acs{bps} monopoles we obtain the superpotential on the Coulomb branch, finding a Toda potential for the associated Lie algebra~\cite{Davies:2000nw} \begin{equation} \label{eq:W-BPS} W(\Sigma)_{\text{BPS}} \equiv \sum_{i=1}^{\rank(G)} \frac{2}{\alpha_i^2} \exp[\alpha_i^* \cdot \Sigma] \; . \end{equation} \item The picture incorporates very naturally the \acs{kk} monopoles due to the compact direction. It turns out that the extra \F1--string, which winds around the circle connecting, for unitary $G$, the $1$st and the $N$th \D3--brane, can be accounted for by extending the Dynkin diagram to its \emph{affine} version. This is depicted in Figure~\ref{fig:Dynkin-An}.
Summing the contributions from the \acs{bps} and the \acs{kk} monopoles we obtain the superpotential on the compact Coulomb branch, finding an affine Toda potential for the associated affine algebra~\cite{Davies:2000nw} \begin{equation} \label{eq:W-afffine} W(\Sigma) = W(\Sigma)_{\text{BPS}} + W(\Sigma)_{\text{KK}} \equiv \sum_{i=1}^{\rank(G)} \frac{2}{\alpha_i^2} \exp[\alpha_i^* \cdot \Sigma] + \frac{2 \eta}{\alpha_0^2} \exp[\alpha_0^* \cdot \Sigma] \; , \end{equation} where $\Sigma= \sigma/e_3^2+ i \phi$, $\alpha_i$ are the simple roots and $\alpha^*_i$ are the associated co-roots. The extra simple root $\alpha_0$ is associated to the \ac{kk} monopole, and the corresponding contribution $W(\Sigma)_{\text{KK}}$ to \eqref{eq:W-afffine} is identified with $W_\eta$ in field theory. \item After this general recipe let us spell out some details for the example of unitary gauge groups. Here we have the affine algebra $\widetilde A_N$, where the extra simple root is associated to the combination $\sigma_N-\sigma_1$ (see Figure~\ref{fig:Dynkin-An}). \begin{figure} \centering \includegraphics[width=14cm]{DynkinAn.pdf} \caption{Branes and Dynkin diagrams for \(A_N\) and \(\widetilde A_N\). The left column shows the S--dual configuration of \F1--strings stretched between \D3--branes. In the right column we depict the corresponding \(A_N\) and \(\widetilde A_N\) Dynkin diagrams.
After compactification a new string appears between \(\sigma_1\) and \(\sigma_N\) and corresponds to the affine node (in blue in the brane cartoon and in the Dynkin diagram).} \label{fig:Dynkin-An} \end{figure} For $SU(N)$ theories\footnote{In the $U(N)$ case the same result holds but the last (affine) root splits into two terms $Y_{+} =e^{\sigma_1/e_3^2+i\phi_1}$ and $Y_{-}=e^{-\sigma_{N}/e_3^2-i \phi_N}$.} the superpotential \eqref{eq:W-afffine}, associated to the $\widetilde A_N$ diagram, is \begin{equation} \label{etaWSU} W = \sum_{i=1}^{N-1} \frac{1}{Y_i} + \eta Y_N \; , \end{equation} where $Y_i = e^{\Sigma_i-\Sigma_{i+1}}$. The last term in \eqref{etaWSU} explicitly breaks the $U(1)_A$ symmetry in the three-dimensional field theory. This symmetry is associated to the rotation in the $(4,5)$ plane in the brane picture. The geometric realization of the breaking of this symmetry for compact $x_3$ has been discussed in~\cite{Amariti:2015yea}. \item Next we want to discuss real gauge groups. Here we outline a few aspects; the proper analysis is given in the next sections. They are realized by including \O3 or \O5 planes. Let us first discuss the configurations with \O3 planes on $\mathbb{R}^3 \times S^1$. As reviewed in Appendix~\ref{sec:orientifolds} the four differently charged orientifolds \O3$^+$, \O3$^-$, $\widetilde \text{\O3}^+$ and $\widetilde \text{\O3}^-$ project a unitary group to $Sp(2N)$, $SO(2N)$, $Sp(2N)$ and $SO(2N+1)$ respectively. In the presence of a compact direction orientifolds come in pairs, and here\footnote{For simplicity we do not distinguish between \O3$^+$ and $\widetilde \text{\O3}^+$ planes. } we have six different such pairs~\cite{Hanany:2001iy}. Brane configurations with $(\O3^-,\O3^-)$, $(\widetilde \O3^-,\O3^-)$ and $(\O3^+,\O3^+)$ are associated to affine Dynkin diagrams. The other pairs $(\O3^{+},\O3^{-})$, $(\O3^{+},\widetilde \O3^{-})$ and $(\widetilde \O3^{-}, \widetilde \O3^{-})$ correspond to twisted affine Dynkin diagrams.
In this paper we are interested only in the ``affine'' pairs, since those are obtained by a T--duality from \tIIA configurations with \O4--planes. A similar discussion holds with $\O6$--planes, while the effect of the orientifold charge on the projection is exchanged: $\O5^+$ is associated to an $SO(N)$ and $\O5^-$ to an $Sp(2N)$ gauge group. The pairs $(\O5^\pm,\O5^\pm)$ are obtained by T--duality from a \tIIA configuration with $\O6^{\pm}$ planes. \end{itemize} \bigskip Finally, we include matter fields. As usual, we can couple them to the four-dimensional gauge theory by adding stacks of \D6--branes in the \tIIA setup, as shown in Table~\ref{tab:O4+_branes}. In the T--dual frame they become \D5--branes. \bigskip This concludes the discussion of the brane configurations for the dynamics of gauge theories on $\mathbb{R}^3 \times S^1$. Let us now turn to dualities and how their dimensional reduction can be understood in this picture. Here we highlight the steps following the example of $U(N)$ gauge theories with fundamental matter; in the next sections we apply them to more general cases. The brane construction of Seiberg dualities in $4$D is well understood: it boils down to an \ac{hw} transition in the \tIIA configuration. In the \ac{hw} transition, every time a \D6--brane crosses a non-parallel fivebrane, a \D4--brane is generated. This brane creation mechanism is necessary for charge conservation, preserving the so-called linking number~\cite{Hanany:1996ie}. Moving the entire stack of flavor \D6--branes from one side of the \NS~to the other amounts to changing the brane configuration of the electric gauge theory to the configuration describing the magnetic one. For example unitary \textsc{sqcd} with $F+k$ flavors, \emph{i.e.\ } $F+k$ pairs of fields in the fundamental and anti-fundamental representation of $U(N)$, is engineered by a stack of $N$ \D4--branes stretched between an \NS~and an \NS', with $F+k$ \D6--branes on top as denoted in Table~\ref{tab:O4+_branes}.
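The dictionary between \F1--strings and (affine) simple roots used above can be made concrete in a few lines of code. The following is a minimal numerical sketch for the $A$--series; the helper name \texttt{simple\_roots\_A} and the labels are ours, purely for illustration:

```python
import numpy as np

def simple_roots_A(n):
    """Simple roots of su(n) (the A-series) in an orthonormal basis:
    alpha_i = e_i - e_{i+1}, i.e. one F1-string stretched between the
    i-th and the (i+1)-th D3-brane."""
    e = np.eye(n)
    return [e[i] - e[i + 1] for i in range(n - 1)]

n = 5
roots = simple_roots_A(n)
# The winding F1-string closes the circle: the affine root is minus the
# sum of the simple roots, i.e. it connects the n-th and the 1st brane.
affine = -sum(roots)
e = np.eye(n)
assert np.allclose(affine, e[n - 1] - e[0])
# Each (affine) simple root contributes one exponential term
# exp[alpha . Sigma] to the (affine) Toda superpotential on the
# Coulomb branch, one term per fundamental monopole.
print(len(roots) + 1, "monopole terms for su(%d) on the circle" % n)
```

The assertion simply checks the statement in the text that the affine node is associated to the combination $\sigma_N - \sigma_1$.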
The magnetic dual is obtained by swapping the \NS~with the \NS', changing the number of \D4--branes in the stack to $F+k-N$. From the $4$D brane picture we can obtain the one describing Seiberg duality on $\mathbb{R}^3 \times S^1$ by a T--duality as described earlier in this section. The T--dual of the configuration describing the $4$D electric theory maps to the brane configuration of the effective $3$D electric theory with $W_\eta$, and analogously the T--dual of the $4$D magnetic brane configuration maps to the brane setup of the magnetic theory with $W_\eta$. In our example of unitary \textsc{sqcd} we obtain the \tIIB configuration with a stack of $N$ \D3 and $F+k$ \D5--branes at finite radius of $x_3$. This brane setup describes the electric theory with $U(N)$ gauge group, the superpotential $W_\eta$, and $F+k$ flavors. The magnetic dual configuration has $F+k-N$ \D3 and $F+k$ \D5--branes; it describes a $U(F+k-N)$ gauge theory with the superpotential $W_\eta$, $F+k$ flavors and $(F+k)^2$ gauge singlets. \bigskip Ultimately we want to obtain pure $3$D dualities in this brane picture. In order to reproduce the construction in~\cite{Aharony:2013dha} we move, as depicted in Figure~\ref{fig:new-orientifold}, some flavor branes of the electric theory to the mirror point of the T--dual circle. In field theory this corresponds to giving a mass $\sim \mathcal{O}(\frac{\alpha'}{r})$ to the associated matter fields. Taking the radius $r$ to zero then corresponds to the double-scaling limit mentioned in the introduction and used in the next sections. The magnetic dual can be obtained by an \ac{hw} transition swapping the \NS--branes in the \tIIB configuration. This is equivalent to a double-scaling limit in the magnetic theory, which we had obtained by T--duality from \tIIA, where the flavor \D5--branes each drag a \D3--brane to the mirror point. In field theory it hence reproduces the higgsing of the theory.
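The rank bookkeeping of the \ac{hw} transition used above can be summarized in a schematic helper. The function name and the sign convention are ours; the orientifold enters as an effective flavor-brane charge, $-4$ for the cases relevant in this paper:

```python
def magnetic_rank(n_colors, n_flavors, orientifold_charge=0):
    """Schematic Hanany-Witten bookkeeping: each flavor brane crossing a
    non-parallel fivebrane creates one color brane, so the magnetic stack
    carries (effective flavor branes) - (electric color branes).  An
    orientifold contributes like `orientifold_charge` extra flavor branes."""
    return n_flavors + orientifold_charge - n_colors

# Unitary sqcd: U(N) with F+k flavors is dual to U(F+k-N).
N, F, k = 3, 5, 2
assert magnetic_rank(N, F + k) == F + k - N

# With an orientifold of effective charge -4: Sp(2N) with 2F
# fundamentals is dual to Sp(2(F-N-2)), cf. the orientifold
# discussion in the next sections.
assert magnetic_rank(2 * N, 2 * F, -4) == 2 * (F - N - 2)
```

This is only the counting of branes, not of the singlet sector; the latter is discussed below in field theory terms.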
More explicitly, in the example of unitary \textsc{sqcd}, we move one stack of $k$ \D5--branes clockwise and another one counterclockwise on the circle, until they reconnect at the mirror point $x^\circ_3$. When taking the limit $r \rightarrow 0$ the \D5--branes at $x^\circ_3$ correspond to a set of massive fields, which do not contribute to the low energy theory. On the other hand, in the magnetic theory, the \D5--branes at $x^\circ_3$ do give rise to massless states, contributing to the low energy dynamics. The reason is that the \ac{hw} transition creates $k$ \D3--branes at $x^\circ_3$. In terms of field theory, this corresponds to an extra $U(k)$ gauge theory with $k$ massless fundamental flavors and $k^2$ massless singlets. The singlets interact with the flavor fields through a superpotential, as the \D5 and the \NS' are parallel. In all cases studied in this paper the extra gauge sector in the magnetic theory at $x_3 = x_3^\circ$ can be described as a theory of interacting gauge singlets. Furthermore, this extra sector always interacts with the monopoles of the magnetic theory at $x_3 = 0$, and the interaction involves some of the gauge singlets describing the theory at $x_3 = x_3^\circ$. Next we discuss the example of unitary gauge groups with fundamental flavor; other examples will be described in the next sections. In the example of \textsc{sqcd} the extra sector corresponds to a $U(2k)$ gauge theory with $2k$ flavors $f$ and $\tilde f$, a singlet $L$ with $4k^2$ components and the superpotential $W=L f \tilde f$. This theory is ``mirror'' dual~\cite{Aharony:1997bx} to a set of singlets $M$, $L$ and $V_\pm$, where $M$ is identified with the meson $M \equiv f \tilde f$ and the singlets $V_\pm$ are identified with the monopoles of the $U(2k)$ sector, $V_+ \equiv e^{\Sigma_1}$ and $V_- \equiv e^{-\Sigma_{2k}}$, where $\Sigma_i$ is the $i$th \textsc{cb} coordinate of the $U(2k)$ gauge theory.
The mirror theory is interacting, with superpotential $W=L M + V_+ V_- \det M$. In the \textsc{ir} this superpotential is set to zero by the equations of motion. We have just argued that the $U(2k)$ gauge theory at $x_3=x_3^\circ$ is effectively described by a theory of interacting gauge singlets. Now we want to come back to the brane picture of the $3$D duality. There is an interaction between this $U(2k)$ sector and the magnetic $U(F-N)$ gauge theory at $x_3 = 0$. This interaction can be studied, as in the beginning of this section, by \D1--branes stretching between the two stacks of \D3--branes. More explicitly, they describe a repulsive force between the $1$st \D3--brane at $x_3 = 0$ and the $(2k)$th \D3--brane at $x_3^\circ$, and analogously a repulsive force between the $(F-N)$th \D3--brane at $x_3=0$ and the $1$st \D3--brane at $x_3^\circ$. In field theory language this force is manifest through the superpotential\footnote{This superpotential corresponds to an \ac{ahw} superpotential due to the higgsing of the magnetic gauge theory.} \begin{align} W= y_+ V_- + y_- V_+ \, , \label{eq:W-Aharony} \end{align} where $y_+ = e^{\Sigma_1}$ and $y_- = e^{-\Sigma_{F-N}}$ and $\Sigma_i$ is the $i$th \textsc{cb} coordinate of the $U(F-N)$ gauge theory at $x_3 =0$. Note that the superpotential \eqref{eq:W-Aharony} survives in the mirror dual description of the $U(2k)$ gauge theory. Indeed, the monopoles of the $U(2k)$ sector, which interact with the monopoles of the $U(F-N)$ sector, are exactly those which are identified with the singlets $V_+$ and $V_-$ under mirror symmetry. The interaction \eqref{eq:W-Aharony} can be seen as generating the relations $y_\pm = 0$ on the chiral ring of the magnetic theory with gauge group $U(F-N)$, as in the Aharony duality. Indeed, given their interaction with the monopoles $y_\pm$ of the magnetic theory, the singlets $V_+$ and $V_-$ have the natural interpretation as monopoles of the electric theory.
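The chiral-ring relations quoted above follow from a one-line computation: the F--term conditions of \eqref{eq:W-Aharony} with respect to the singlets $V_\pm$ read \begin{equation} \frac{\partial W}{\partial V_-} = y_+ = 0 \; , \qquad \frac{\partial W}{\partial V_+} = y_- = 0 \; , \end{equation} setting the magnetic monopoles $y_\pm$ to zero on the chiral ring, exactly as required by the Aharony duality.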
In this sense we have recovered a brane description of the dynamics of Aharony duality. \bigskip In the rest of this paper we illustrate the generality of this picture, considering the effects of orientifold planes. Their \D-brane charge modifies the standard brane creation effect in the \ac{hw} transition. Nevertheless, the extra sector at $x^{\circ}_3$ in the magnetic theory remains mirror dual to singlets. \section{Sp(2N) theories} \label{sec:symplectic} In this section we discuss the reduction of the duality for $Sp(2N)$ with $2F$ fundamentals. An $Sp(2N)$ gauge theory with $2 F$ fundamentals% \footnote{In some cases the flavor symmetry is $SU(F)^2$ instead of $SU(2F)$ and we have $F$ pairs of fundamentals and anti-fundamentals. We will denote this possibility as having $F$ flavors.} and global symmetry $SU(2F) \times U(1)_A \times U(1)_R $ without superpotential is dual to an $Sp(2(F-N-2))$ gauge theory with $2F$ dual fundamentals $q$ and a meson $M$ with superpotential $W= M q q $. This duality was first presented in~\cite{Intriligator:1995ne}. The $U(1)_A$ symmetry is anomalous at the quantum level in four dimensions. We present the global charges associated to this symmetry because it is a genuine quantum symmetry in the three-dimensional case. The field content is given in Table~\ref{tab:field-content-Sp}. \begin{table} \centering \begin{tabular}{cccccc} \toprule & \(Sp(2N)\) & \(Sp(2 \widetilde N)\) & \(SU(2F)\) & \(U(1)_A\) & \(U(1)_R\) \\ \midrule \(Q\) & \(2N\) & \(1\) & \(2F\) & \(1\) & \(1-(N+2)/F\) \\ \(q\) & \(1\) & \(2 {\widetilde N}\) & \( 2 \overline F\) & \(-1\) & \((N+2)/F\) \\ \(M\) & \(1\) & \(1\) & \(F(2F-1)\) & \(2\) & \(2-2(N+2)/F\) \\ \bottomrule \end{tabular} \caption{Field content for the \( Sp(2N) \) gauge theory with global \( SU(2F) \times U(1)_A \times U(1)_R\) symmetry.} \label{tab:field-content-Sp} \end{table} \subsection{Brane description} There are two ways to represent this theory, by either considering an $\O4^{+}$--plane or an $\O6^{-}$--plane.
These two constructions give rise to similar theories, which differ in the representation of the matter fields under the global symmetries. When modifying the theory (by allowing larger numbers of \NS{}--branes) the two constructions give rise to different two-index matter fields, adjoint or antisymmetric. We will study both possibilities. \begin{itemize} \item In the $\O4^{+} $--plane case, the brane setup is summarized in~Table~\ref{tab:O4+_branes} and Figure~\ref{fig:O4+_branes}. In this case we consider a stack of \(2N\) \D4--branes and an $\O4^{+} $--plane stretched between an \NS{} and an \NS'--brane. We also consider \(2F\) \D6--branes on the \NS'--brane. At the brane level this theory has an $SO(2F)$ global symmetry, while on the field theory side there is the enhancement to $SU(2F)$. This is similar to the usual doubling of the global symmetry for unitary gauge groups. The dual theory is obtained by an \ac{hw} transition that exchanges the \NS{} and the \NS'--branes. In the presence of an orientifold the linking number is modified. For example an $\O4^{+}$ has to be treated as a stack of $-4$ \D6--branes~\cite{Elitzur:1997hc}. After the transition we obtain the dual picture, in which the net number of \D4--branes is $2(F-N-2)$. \begin{figure} \centering \includegraphics{ElectricPS.pdf} \caption{Brane cartoon for the realization of an \(Sp(2N)\) theory in the electric phase with an \O4--plane.} \label{fig:O4+_branes} \end{figure} \item A similar theory can be constructed by using an $\O6^{-}$--plane. Consider two \NS--branes, $2N$ \D4s, $2F$ \D6's and an $\O6^{-}$--plane as in Figure~\ref{fig:O6-_plane_duality_4D}(a). If all the \NS--branes are parallel, the system has $\mathcal{N}=2$ supersymmetry. The orientifold projects the $SU(2N)$ gauge group to $Sp(2N)$ (where, as usual, $Sp(2)\simeq SU(2)$). The theory has an $SU(F)$ global symmetry with $F$ flavors. We expect this symmetry to be enhanced to $SU(F)^2$.
Here we rotate the \NS--branes and the \D6'--branes by an angle $\theta$ as in Figure~\ref{fig:O6-_plane_duality_4D}(a), and we have two stacks of $\NS_{\pm \theta}$ and $\D6_{\pm \theta}$. For generic angles the $\mathcal{N}=2$ adjoint is massive. If $\theta=\pi/2$ the orientifold is parallel to the $\NS_{\pm \theta}$--branes and this field is massless and has to be considered in the low-energy spectrum. We will come back to this configuration later. This model (for $\theta \neq \pi/2$) has a dual description as discussed above. In this case the $\O6^{-}$ behaves like a stack of $- 4$ \D6--branes in the \ac{hw} transition. The brane picture becomes the one shown in Figure~\ref{fig:O6-_plane_duality_4D}(b) where the dual gauge group is again $Sp(2(F-N-2))$. \begin{figure} \centering \includegraphics{SPO6.pdf} \caption{Brane cartoon for the realization of an \(Sp(2N)\) theory in the (a) electric and (b) magnetic phase with an \O6--plane.} \label{fig:O6-_plane_duality_4D} \end{figure} \end{itemize} \subsection{Dimensional reduction} \subsubsection{\O3--planes} Let us begin with the reduction of the duality with an $\O4^{+}$--plane. The three-dimensional system is obtained by compactifying the $x_3$--direction and T--dualizing. The \NS{}--branes remain invariant while the orientifold becomes a pair of ($\O3^+,\O3^+$)--planes. We can study the properties of the Coulomb branch by looking at the spectrum of \acs{bps} \F1--strings as explained above. \begin{figure} \centering \includegraphics{DynkinCn.pdf} \caption{Dynkin and affine Dynkin diagrams and spectrum of \acs{bps} \F1--strings associated to the fundamental monopoles for $Sp(2N)$ theories in the linear case and on the circle. The affine root is represented in blue on the affine Dynkin diagram and in the brane cartoon.
} \label{fig:F1Cn} \end{figure} In this case the superpotential can be read off from the top half of Figure~\ref{fig:F1Cn}, \begin{equation} W = \sum_{i=1}^{N-1} \frac{2}{Y_i} + \frac{1}{Y_N} \; , \end{equation} where $Y_i = e^{(\sigma_i-\sigma_{i+1})/e_3^2+i(\phi_i-\phi_{i+1})}$ and $Y_N=e^{2(\sigma_N/e_3^2+i \phi_N)}$. The extra root in the affine case is proportional to the variable $Y_0 = e^{2(\sigma_1/e_3^2+i \phi_1)}$ (shown in blue on the bottom half of Figure~\ref{fig:F1Cn}) and it gives the superpotential \begin{equation} W_\eta = \eta \eu^{2 \sigma_1/{e_3^2} + 2 i \phi_1} \; . \end{equation} The same result is obtained after the \ac{hw} transition. \bigskip Now we want to flow to the Aharony duality. We start by considering $2(F+1)$ \D5--branes in the electric theory. We rotate two \D5--branes along the circle and reconnect them on the other side. Since the \D5s intersect the \NS{}--brane in this configuration, there are no massless fields in this extra sector. If we take the $r \rightarrow 0$ limit on this configuration, we obtain an $Sp(2N)$ theory with $2F$ fundamentals. Next we turn to the dual theory. In this case, if we perform an \ac{hw} transition, there are $2(F-N-1)$ \D3s at the origin. On the other side of the circle, the two \D3s created by the \D5 crossing the \NS{}--brane are destroyed by the extra orientifold plane located there. This example shows one of the general aspects of our analysis. In principle it is not necessary to reconnect the \D5--branes at $x_3^\circ$. For example for unitary gauge groups the double scaling was realized by putting the \D5--branes at $x_3 < x_3^\circ$~\cite{Amariti:2015yea}. Here, to avoid the orientifold creating a negative number of \D3--branes in the \ac{hw} transition, we have to reconnect the \D5s at $x_3^\circ$. In the rest of the paper we will always follow this strategy. The final configuration is represented in Figure~\ref{fig:Aharony-flow-Sp-O3}.
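The relative factors $2/Y_i$ versus $1/Y_N$ in the superpotential at the beginning of this subsection simply track the root lengths through the coefficients $2/\alpha_i^2$ of \eqref{eq:W-BPS}. A minimal numerical check (the helper name is ours):

```python
import numpy as np

def simple_roots_C(n):
    """Simple roots of sp(2n) (the C-series) in an orthonormal basis:
    alpha_i = e_i - e_{i+1} (short) for i < n, alpha_n = 2 e_n (long)."""
    e = np.eye(n)
    roots = [e[i] - e[i + 1] for i in range(n - 1)]
    roots.append(2 * e[n - 1])
    return roots

roots = simple_roots_C(4)
coeffs = [2 / np.dot(a, a) for a in roots]
# The short roots all carry the same coefficient, twice that of the
# long root: this is the 2/Y_i versus 1/Y_N pattern of the C_N
# superpotential on the Coulomb branch.
assert all(c == coeffs[0] for c in coeffs[:-1])
assert coeffs[0] == 2 * coeffs[-1]
```

The same check applies verbatim to the affine root $Y_0$, which is long as well and therefore enters $W_\eta$ with the same relative weight as $Y_N$.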
\begin{figure} \centering \includegraphics[width=12cm]{scalingSPn-crop.pdf} \caption{Flow to the Aharony duality for \(Sp\) in the \O3 configuration.} \label{fig:Aharony-flow-Sp-O3} \end{figure} In this case, even if there is no gauge symmetry, an extra meson arising from the \D5--brane remains massless. This suggests that we cannot simply decouple this sector before considering the effect of this massless field in the ordinary dual gauge theory. In fact in this case the two \D5--branes attract the \D3--branes labeled by $\sigma_1$ and $-\sigma_1$ in the dual gauge sector. This attractive force is reflected in the scale-matching relation between $Y_1$ and the meson $M_{2F+1,2F+2}$. It corresponds to the superpotential interaction \begin{equation} W = \widetilde \eta y_{low} M_{2F+1,2F+2} \; , \end{equation} \emph{i.e.} the low-energy description of the superpotential $W_\eta$. In the large-mass limit the effect of this interaction has to be considered. This reproduces the field theory expectation: the dual theory is an $Sp(2(F-N-1))$ theory with $2F$ fundamentals, an antisymmetric meson $M$ and superpotential \begin{equation} W = M q q + y Y \; , \end{equation} where we identify the broken component of the electric singlet \(M\), which parametrizes a direction in the dual Higgs branch, with the electric monopole \(Y\), which parametrizes the Coulomb branch of the electric phase. This is commonly the case when dealing with mirror symmetry, and in fact the electric singlet describes the Higgs branch of the dual phase, \emph{i.e.} the Coulomb branch of the electric theory. \subsubsection{\O5--planes} Also in the case of the \O6--plane realization one can reduce the duality to three dimensions by compactifying the $x_3$--direction. After T--duality the \tIIB system contains a pair of \O5--planes and describes a theory with the same superpotential $W_\eta$ as above. By considering $F+2$ flavors and integrating out two of them we recover the usual Aharony duality.
At the brane level this is obtained by introducing $F+2$ $\D5_{\pm \theta}$--branes. We introduce real masses as in the construction with the \O3--plane. The orientifold identification is however different: in this case we have a unitary symmetry. Moving a pair $\D5_{\pm \theta}$ along $x_3$ gives a mass to one flavor. \begin{figure} \centering \includegraphics[width=.8\textwidth]{scalingSPnO5-crop.pdf} \caption{Flow to the Aharony duality for \(Sp\) in the \O5 picture.} \label{fig:Aharony-flow-Sp-O5} \end{figure} One can flow to the Aharony duality by taking the double scaling limit as above (see Figure~\ref{fig:Aharony-flow-Sp-O5}). Let us explain the duality in this case. First we move the \D5--branes in the $x_3$--direction, assigning the real masses. We reconnect them on the other side of the circle, at $x_3 = x_3^\circ$, where the second orientifold plane is located. The extra sector does not have massless degrees of freedom, and we can take the $r \rightarrow 0$ limit in this case. We obtain a three-dimensional $Sp(2N)$ theory with $2F$ flavors. Now we can turn to the dual picture, by exchanging the $\NS{}_{\pm \theta}$--branes. \D3s are created when the branes cross each other. While at the origin the orientifold cancels two \D3s every time an \NS--brane crosses it, the net effect on the \D3s at $x_3 = x_3^\circ $ is the absence of branes in the gauge theory. The final configuration is reproduced in Figure~\ref{fig:Aharony-flow-Sp-O5}. Like in the case with \O3--planes, here we have an extra sector with massless singlets (coming from the original mesons). The $r \rightarrow 0$ limit has to be taken by considering the effect of this sector on the $Sp(2(F-N-1))$ theory. This is the same mechanism introduced above: the superpotential $W_\eta$ is absorbed in a scale matching, the meson couples with the magnetic monopoles, and in the final three-dimensional dual theory the extra interaction between the electric and magnetic monopoles takes place.
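The double-scaling limit used here can be made quantitative. As a sketch (factors of $2\pi$ depend on conventions, and the mass formula is the standard open-string estimate, not derived here), the real masses are set by the \D5--brane positions on the T--dual circle:

```latex
% Real masses from D5-brane positions on the T-dual circle (schematic).
% The T-dual radius is r' = alpha'/r, and the mirror orientifold sits at
% the antipodal point x_3^o = pi r'.
\begin{equation}
  m_f \sim \frac{x_3^{(f)}}{2\pi\alpha'} \,, \qquad
  x_3^{\circ} = \pi r' = \frac{\pi \alpha'}{r} \,.
\end{equation}
% Reconnecting a pair of D5-branes at the mirror point therefore assigns
\begin{equation}
  m \sim \frac{x_3^{\circ}}{2\pi\alpha'} = \frac{1}{2r}
  \;\longrightarrow\; \infty
  \qquad \text{as } r \to 0 \,,
\end{equation}
% i.e. the r -> 0 limit automatically implements the large real-mass limit:
% this is the double scaling discussed in the text.
```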
\subsection{Generalizations} \subsubsection{Sp(2N) with antisymmetric matter} An $Sp(2N)$ gauge theory with $F$ flavors $Q$ and $\widetilde Q$ and an antisymmetric field $A$, with superpotential \(W = \Tr A^{k+1}\), is dual to an $Sp(2(k(F-2)-N))$ gauge theory with $F$ flavors $q$ and $\widetilde q$, an antisymmetric $a$ and superpotential \(W = \Tr a^{k+1} + \sum_{j=0}^{k-1} M_{k-j-1} q a^j \widetilde q\), where $M_j = Q A^{j} \widetilde Q$ is the generalized meson, with $j=0,\dots,k-1$. This duality was first presented in~\cite{Intriligator:1995ff}. In this case we consider two stacks of $k$ $\NS{}_{\pm \theta}$--branes. Separating them along the directions \(4\) and \(5\) breaks the gauge symmetry and corresponds to a polynomial superpotential deformation for the antisymmetric field $A$. The electric theory is broken to \begin{equation} Sp(2 N) \rightarrow \prod_{i=1}^{k} Sp(2 r_i) \;, \end{equation} while the magnetic one becomes \begin{equation} Sp(2 \widetilde N) \rightarrow \prod_{i=1}^{k} Sp(2 \widetilde r_i) \;, \end{equation} where $\widetilde r_i = F- r_i - 2$ (see Figure~\ref{fig:Sp-antisymmetric}). \begin{figure} \centering \includegraphics{SpAdj-brane.pdf} \caption{Electric and magnetic sides of the duality for \(Sp(2N)\) gauge theories with antisymmetric matter.} \label{fig:Sp-antisymmetric} \end{figure} In this case we can perform the reduction on each sector. The bare monopoles associated to each $Sp(2 r_i)$ factor recombine, through the scale matching relation, with the antisymmetric field when the superpotential deformations are turned off. This corresponds to recombining the $\NS_{\pm \theta}$--branes. The theory on the circle can be further reduced to an Aharony-like duality by integrating out some matter fields. In the three-dimensional case we refer to the antisymmetric representation discussed in~\cite{Kapustin:2011vz}, obtained by combining the irreducible antisymmetric with a singlet.
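The breaking pattern above follows from the standard analysis of the deformed superpotential; the following is a sketch in our own notation (the couplings $\lambda_j$ label the deformation):

```latex
% Polynomial deformation of the antisymmetric superpotential (schematic).
\begin{equation}
  W = \sum_{j=1}^{k} \frac{\lambda_j}{j+1} \Tr A^{\,j+1} \,, \qquad
  W'(a) = \sum_{j=1}^{k} \lambda_j\, a^{\,j}
        = \lambda_k \prod_{i=1}^{k} (a - a_i) \,,
\end{equation}
% with k roots a_i (one of which may sit at the origin). Each eigenvalue
% pair of <A> sits at one of the roots; a block proportional to the
% symplectic form on 2 r_i colors preserves Sp(2 r_i), so
\begin{equation}
  Sp(2N) \;\rightarrow\; \prod_{i=1}^{k} Sp(2 r_i) \,, \qquad
  \sum_{i=1}^{k} r_i = N \,.
\end{equation}
% Turning the lambda_j off recombines the NS-branes and the sectors.
```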
If we consider $(F+2)$ flavors and integrate out two of them in each sector we arrive in the dual at a $ \prod_{i=1}^{k} Sp(2 (F-r_i-1)) $ gauge theory. After reconnecting the branes, the dual theory is $ Sp(2 (k(F-1)-N)) $. As a check we consider $(F+2K)$ flavors and flow to a known duality. Integrating out $2K$ flavors after assigning them the same large real mass generates a \ac{cs} term. We arrive at the duality of Kapustin, Kim and Park~\cite{Kapustin:2011vz} between\footnote{The \ac{cs} levels have an extra factor of two because of the normalization of the generators in the Lie algebra.} $Sp(2N)_{2K}$ and $Sp(2 (k(F+|K|-1)-N))_{-2K}$. \subsubsection{Sp(2N) with adjoint matter} An $Sp(2N)$ gauge theory with $2F$ fundamentals and an adjoint field $X$, with superpotential \( W = \Tr X^{2(k+1)}\), is dual to an $Sp(2((2k+1) F-N-2))$ gauge theory with $2F$ fundamentals, an adjoint $Y$ and superpotential \begin{equation} \label{eq:dualadjsp} W = \Tr Y^{2(k+1)} + \sum_{j=0}^{2k} M_{2k-j} q Y^j q \;, \end{equation} where $Y$ is in the adjoint of the dual group and $M_j = Q X^j Q$. This duality was first presented in~\cite{Leigh:1995qp}. The electric theory is represented by $2 N$ \D4--branes and an $\O4^-$--plane stretched between $2k+1$ \NS{}--branes and one \NS'. In addition, there are $2F$ \D6--branes on the \NS{}--branes. By separating the \NS{}--branes along the $(45)$--plane we have a polynomial deformation in the adjoint $X$. In a generic vacuum the adjoint $X$ acquires a vacuum expectation value. At the matrix level there is a rank-$2r_0$ sector with zero vev, which gives rise to an $Sp(2r_0)$ gauge group. The other $k$ sectors of rank $r_i$, where the vev of the adjoint is non-zero, give rise to $U(r_i)$ factors. The ranks are chosen such that $\sum_{i=0}^{k} r_i = N$. This higgsing corresponds to separating the $\D4$--branes along the directions $4$ and $5$ in the brane picture, as in Figure~\ref{fig:Sp-adjoint}.
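The vacuum structure just described can be summarized as follows. This is a sketch in our conventions (couplings $\lambda_j$ are our own labels), using the fact that the eigenvalues of the $Sp(2N)$ adjoint come in opposite pairs:

```latex
% Deformed adjoint superpotential and generic vev (schematic).
\begin{equation}
  W = \sum_{j=1}^{k+1} \frac{\lambda_j}{2j} \Tr X^{2j} \,, \qquad
  W'(x) = x \sum_{j=1}^{k+1} \lambda_j\, x^{2(j-1)}
        \;\propto\; x \prod_{i=1}^{k} \big( x^2 - a_i^2 \big) \,.
\end{equation}
% Eigenvalues of <X> sit at 0 (2 r_0 of them) or at the pairs +- a_i
% (2 r_i of them), giving the breaking used in the text:
\begin{equation}
  \langle X \rangle =
  \operatorname{diag}\big( 0_{2r_0},\, a_1\, \sigma_3 \otimes \mathbf{1}_{r_1},
  \,\dots,\, a_k\, \sigma_3 \otimes \mathbf{1}_{r_k} \big)
  \;\Rightarrow\;
  Sp(2N) \rightarrow Sp(2r_0) \times \prod_{i=1}^{k} U(r_i) \,.
\end{equation}
```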
Eventually, in a generic vacuum, the gauge group is broken as \begin{equation} Sp(2N) \rightarrow Sp(2 r_0) \times \prod_{i=1}^{k} U( r_i) \end{equation} in the electric theory and as \begin{equation} Sp(2\widetilde N) \rightarrow Sp(2 (F-r_0-2)) \times \prod_{i=1}^{k} U(2F-r_i) \end{equation} in the magnetic theory. At the brane level this dual description is obtained by first separating the \NS{}--branes, then performing the \ac{hw} transition and eventually reconnecting them (see Figure~\ref{fig:Sp-adjoint}). \begin{figure} \centering \includegraphics{SpAdjointBranes.pdf} \caption{Electric and magnetic sides of the duality for \(Sp(2N)\) gauge theories with adjoint matter.} \label{fig:Sp-adjoint} \end{figure} In the case of $Sp(2N)$ gauge theories with $2F$ fundamentals and adjoint matter with superpotential $W=\Tr X^{2(k+1)}$ one can perform the reduction in the $Sp(2r_0)$ and in the $U(r_i)$ sectors separately. In each sector, a superpotential $W_\eta$ is generated. By reconnecting the branes and using the scale-matching relation one can identify the bare monopoles of the $Sp(2r_0) \times \prod_i U( r_i)$ theory with the dressed monopoles of the $Sp(2N)$ theory. In the dual case the situation is similar. First one dualizes each sector, obtaining $Sp(2(F-r_0-2))\times \prod_i U(2F-r_i)$, then reconnects the branes and eventually uses the scale matching relation to recover the duality without the polynomial deformation in the adjoint field. We can flow to the Aharony-like duality. Let us consider $2F+2$ \D5--branes in the electric phase. The dual gauge group is broken to \begin{equation} Sp(2 \widetilde r_0) \times \prod_{i=1}^{k} U(\widetilde r_{i}+2)\;, \end{equation} where $\widetilde r_0 = F-r_0-1$ and $\widetilde r_i = 2F-r_i$. In the brane description, we move two \D5--branes in each sector and perform the \ac{hw} transition. The dual gauge group at $x_3=0$ becomes \begin{equation} Sp(2 (F-r_0-1)) \times \prod_{i=1}^{k} U(2F - r_i)\;.
\end{equation} This follows by imposing, in the field theory description, the vacuum structure that preserves the duality. By joining the \NS{}--branes back it becomes $Sp(2 ((2k+1) F-N-1))$. As a check we flow to a known duality. We consider $2(F+K)$ fundamentals and integrate out $2K$ of them, generating a \ac{cs} term. One obtains the duality of Kapustin, Kim and Park~\cite{Kapustin:2011vz}, between an $Sp(2N)_{2K}$-- and an $Sp(2((2k+1)(F+|K|)-N-1))_{-2K}$ gauge theory, with superpotential as in~\cite{Kapustin:2011vz}. \section{U(N) groups and antisymmetric matter} \label{sec:unitary-antisymmetric} For unitary groups with tensor matter there are two main cases: antisymmetric and symmetric tensors. We refer the reader to~\cite{Intriligator:1995ax}, where these dualities were first presented. Here we focus on the antisymmetric case, for which one has the following dualities: \begin{itemize} \item An $SU(N)$ gauge theory with an antisymmetric tensor $A$ and its conjugate $\widetilde A$, with superpotential \begin{align} \label{asW} W= \Tr(A \widetilde A )^{2} \end{align} and $F$ flavors, is dual to an $SU(3F-N-4)$ gauge theory with superpotential \begin{align} W=\Tr (a \widetilde a)^{2} + M_1 q \widetilde q + M_0 q \widetilde a a \widetilde q + P q \widetilde a q + \widetilde P \widetilde q a \widetilde q \; , \end{align} where $a,\widetilde a$ are the dual antisymmetric fields, $q, \widetilde q$ the dual quarks and the mesons are \begin{align} P = Q \widetilde A Q \; , && \widetilde P = \widetilde Q A \widetilde Q \; , && M_0 = Q \widetilde Q \; , && M_1 = Q \widetilde A A \widetilde Q \; . \end{align} \item We can also consider the superpotential \begin{equation} \label{asW2} W= \Tr(A \widetilde A )^{2}+ A \widetilde Q \widetilde A Q + (Q \widetilde Q)^2 \end{equation} in the electric case. The \(SU(N)\) gauge theory is dual to an $SU(2 F-N-4)$ gauge theory with superpotential \begin{equation} W=\Tr (a \widetilde a)^{2} + q \widetilde a \widetilde q a +(q \widetilde q)^2 \,.
\end{equation} This duality can be obtained from the previous one by a Higgs mechanism: the extra deformation appears as a linear perturbation in a dual meson. After higgsing we obtain (\ref{asW2}) from (\ref{asW}), and the dual rank is modified accordingly. \item The discussion can be generalized to the superpotential $W = \Tr(A \widetilde A)^{k+1}$. In this case one can break the gauge group by adding a polynomial superpotential in $(A\widetilde A)^{j}$. By turning this superpotential off one then finds a generalized \ac{kss} duality with dual rank $\widetilde N= (2k+1)F-N-4$ for the generalization of (\ref{asW}) and $\widetilde N= 2k F-N-4$ for the generalization of (\ref{asW2}). \end{itemize} \subsection{Brane description} The brane realization of these models was worked out in~\cite{Landsteiner:1997ei,Csaki:1998mx}. All cases in this family can be summarized in the brane cartoon in Figure~\ref{fig:branes-unitary-tensor-matter}. \begin{figure} \centering \includegraphics[width=.8\textwidth]{Tensors-crop.pdf} \caption{Brane cartoon summarizing all the constructions of unitary gauge theories with tensor matter.} \label{fig:branes-unitary-tensor-matter} \end{figure} In order to understand the action of the orientifold we start by discussing a configuration with three \NS--branes without the \O6--plane. The theory is an $\mathcal{N}=2$ quiver with two unitary nodes, connected by a pair of bifundamentals and adjoints (see Figure~\ref{fig:quiver-unitary-tensor-matter}). At each node there are $F$ flavors. This configuration and its generalization to $\mathcal{N}=1$ were extensively studied in~\cite{Brodie:1997sz}. Adding the orientifold plane, the two nodes are identified and projected to a single $U(N)$ gauge node. The matter fields are identified as well and there are two possibilities, corresponding to the different signs of the orientifold projection: the pair $(A ,\widetilde A)$ or $(S,\widetilde S)$. Here we focus on the case with $(A ,\widetilde A)$.
Now we can break to $\mathcal{N}=1$ by rotating the external \NS{}--branes: rotating the left and right \NS--brane by an angle $\theta$ (\emph{resp.} $-\theta$) corresponds to introducing a mass term \(\mu(\theta_{\pm} )\) proportional to \(\tan (\theta_{\pm})\) for the adjoints in the $\mathcal{N}=2$ vector multiplet. Integrating out the massive adjoints we obtain the superpotential $W = \Tr (A \widetilde A)^2$. If the rotation angle is $\pi/2$, the adjoint is infinitely massive and the superpotential vanishes. More generally, we can consider two stacks of $k$ $\NS{}_{\pm \theta}$--branes, obtaining the superpotential $W = \Tr( A\widetilde A)^{k+1}$. The flavor branes can be added in two ways. In the first case one can add two stacks of \D6s parallel to the orientifold and to the \NS{}--brane, one on the left and one on the right. In the second case one can rotate the stack of \D6s on the left (right) to a stack of $\D6_{\theta}$ ($\D6_{-\theta}$). In the first case we have to add the term $Q \widetilde A \widetilde Q A + (Q \widetilde Q)^2$ to the superpotential. In the second case, the flavor branes are parallel to the $\NS_{\pm \theta}$--branes and the quartic terms for the fundamentals are absent. The two configurations with \D6 or \D6$_{\pm \theta}$--branes lead to different dual ranks in the \ac{hw} picture. \begin{figure} \centering \includegraphics{quiverUN.pdf} \caption{Orientifold projection of the \(A_2\) quiver realizing a U(N) theory with tensor matter.} \label{fig:quiver-unitary-tensor-matter} \end{figure} \bigskip The Seiberg duality can be studied in terms of brane motions. It is convenient to describe the motion at first without the orientifold and then add the projection at the end. The starting theory has two unitary gauge groups connected by a pair of bifundamentals and extra flavors. The Seiberg-dual phase is obtained by a cascading process: first we dualize one gauge group, then the other, and finally the first one again.
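The relation between the rotation angle and the tensor superpotential can be made explicit. The following is a sketch: the coupling $h$ and the precise $\theta$-dependence of $\mu$ are our own parametrization of the standard rotated-brane story.

```latex
% Integrating out the massive adjoint after the NS-brane rotation (schematic).
\begin{equation}
  W = h\, \Phi\, A \widetilde A + \frac{\mu(\theta)}{2} \Tr \Phi^2 \,, \qquad
  \mu(\theta) \propto \tan\theta \,.
\end{equation}
% The F-term of Phi gives Phi = -(h/mu) A Atilde; substituting back,
\begin{equation}
  W_{\text{eff}} = - \frac{h^2}{2\,\mu(\theta)}\, \Tr \big( A \widetilde A \big)^2
  \;\longrightarrow\; 0
  \qquad \text{as } \theta \to \pi/2 \,,
\end{equation}
% consistent with the statement that the superpotential disappears when the
% adjoint becomes infinitely massive. With k NS-branes on each side the
% analogous manipulation produces the Tr (A Atilde)^{k+1} interaction.
```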
In terms of branes it corresponds to exchanging the first two \NS{}--branes, then the last two and then the first two again. Before this exchange, it is convenient to move the \D6--branes. We have to distinguish the two situations, where we have either two stacks of $\D6_{\pm \theta}$ or two stacks of \D6s parallel to the central brane. \begin{itemize} \item In the first case, the $\D6_\theta$--branes first cross the \NS{}--brane and then the $\NS{}_{-\theta}$--brane. Both times the crossing generates a stack of \D4--branes. The same operation has to be performed on the $\D6_{-\theta}$--branes. In this case there are $2$ \D4--branes ending on each $\D6_{\pm \theta}$. The S--rule is not violated because one stack of \D4s is attached to an \NS{}--brane and the other to an $\NS_{\mp \theta}$. If we interchange the positions of the \NS--branes we obtain the dual picture. The reduction of this duality has been studied in~\cite{Amariti:2015yea} from the brane perspective. At this point we can consider the effect of the $\O6^-$ orientifold on the central \NS{}--brane. The following happens: \begin{enumerate} \item the gauge group is projected from $SU(N) \times SU(N)$ to $SU(N)$; \item the bifundamentals connecting the gauge groups become the tensor matter fields; \item the two flavor groups are identified. \end{enumerate} At the level of the duality the orientifold carries the charge of $- 4$ \D6--branes. By carefully considering the orientifold charge in each transition we end up with the $SU(3F - N - 4)$ gauge theory as expected. \item In the second case the \D6--branes are parallel to the \NS{}--brane. We move the \D6s on the left of the \NS{}--brane towards the $\NS{}_\theta$--branes and the others in the opposite direction. Once they cross the $\NS_{\pm \theta}$--branes, each stack of \D6s generates \(F\) \D4--branes. After this motion the duality works as in the case above. By carefully adding the orientifold charge, the dual $SU(2F-N-4)$ gauge theory is recovered.
\end{itemize} One can also study the duality with \(k\) $\NS_{\pm \theta}$--branes. In this case one first separates these branes along the direction orthogonal to the plane that they occupy in $(4589)$ and then studies the duality in each sector separately. By reconnecting the branes the expected dualities are recovered. \subsection{Dimensional reduction} Now we compactify $x_3$ and T--dualize along this direction. We consider the $U(N)$ case, where the baryonic symmetry is gauged. On the T--dual circle, the theory develops a superpotential of the form \begin{equation} \label{anp} W = \eta Y_+ Y_-\;, \end{equation} where $Y_+=e^{\sigma_1/e_3^2+i \phi_1}$ and $Y_-=e^{-(\sigma_{N}/e_3^2+i \phi_{N})}$. This can be understood from the brane picture as follows: there are two sets of \D3s, one connecting the \NS$_{\theta}$ and \NS{}--branes and the other connecting the \NS$_{-\theta}$ and \NS{}--branes. On each stack a superpotential $W_\eta$ is generated by the Euclidean \D1--branes. The two contributions are identified by the orientifold, and one finally obtains (\ref{anp}). Now we want to investigate the dual phase. As discussed above there are two possible situations: the \D5--branes are parallel to the \NS{}--branes \emph{or} to the $\NS_{\pm \theta}$--branes. In the first case $\widetilde N = 2 F-N-4$, while in the second case we have $\widetilde N = 3 F-N-4$. On the circle, the extra superpotential \begin{equation} \label{anp2} W = \eta' y_+ y_- \end{equation} is generated. Here $y_+=e^{\widetilde \sigma_1/{\widetilde e_3}^2+i \widetilde \phi_1}$ and $y_-=e^{-(\widetilde \sigma_{\widetilde N}/{\widetilde e_3}^2+i \widetilde \phi_{\widetilde N})}$. \begin{figure} \centering \includegraphics[width=.8\textwidth]{SUAs3d-crop.pdf} \caption{$U(N)$ gauge theory with antisymmetric matter, electric theory.
(a) and (b) show the case without superpotential (\D5--branes parallel to $\NS_{\pm \theta}$), (c) and (d) show the case with superpotential (\D5--branes parallel to the \NS{}--branes).} \label{fig:lSUAs3d} \end{figure} Figures~\ref{fig:lSUAs3d}~(a) and~\ref{fig:lSUAs3d}~(b) show the brane cartoon of the electric theory in the case without the extra superpotential. The \NS{}--branes are drawn in black, the \D5s in green, the orientifold plane is orange and the \D3s are red. In Figures~\ref{fig:lSUAs3d}~(c) and~\ref{fig:lSUAs3d}~(d) we represent the case with the superpotential turned on. Now we want to flow to the theory without the superpotential $W_\eta$. We consider the case with $F+2$ \D5--branes (green) in each sector, and assign a large positive real mass to one flavor and a large negative real mass to a second one. We rotate one pair of $\D5_{\pm \theta}$ clockwise on the circle and another pair counterclockwise. Finally, we reconnect the pairs at $x_3 = x_3^\circ$, where the second orientifold is placed. Now we can proceed as above: we interchange the \NS{}--branes and arrive at the dual configuration. Finally, we obtain the setup in Figure~\ref{fig:32F}. \begin{figure} \centering \begin{tabular}{cc} \includegraphics[width=.7\textwidth]{3FscalingA-crop.pdf}\\ \includegraphics[width=.7\textwidth]{2FscalingA-crop.pdf} \end{tabular} \caption{Aharony-like duality for models with antisymmetric matter.} \label{fig:32F} \end{figure} These pictures represent the Aharony-like duality for the models with antisymmetric matter. The extra sectors are dualized to singlets, as done in~\cite{Amariti:2015yea} for the $U(N)$ \textsc{sqcd}. The extra singlets that are generated interact with the monopoles of the magnetic theory, and they are identified with the Coulomb branch variables of the electric theory. This can be explicitly verified on the field theory side.
The duality now involves a $U(3F-N-2)$ gauge group in the case where the superpotential is $W = \Tr(A \widetilde A)^2$. One can add the extra deformation $(Q \widetilde Q)^2 + A \widetilde Q \widetilde A Q$ (corresponding to rotating the \D5--branes as in Figure~\ref{fig:lSUAs3d}~(c)). In the dual phase this deformation enforces a Higgs flow to the $U(2F-N-2)$ theory, and it exactly corresponds to the expected dual, after dualizing the extra sectors and considering the \ac{ahw} superpotential. This confirms the validity of our rules and of our picture. We can reproduce the same story by considering $k$ external $\NS_{\pm\theta}$--branes. In this case we can separate the $\NS_{\pm\theta}$--branes, generating a polynomial superpotential $W \simeq \sum_j \lambda_j (A \widetilde A)^{j}$. By breaking the gauge group in the decoupled sectors we can apply the same rules as above and reconstruct the dual theory. Finally, we obtain the dual ranks $U((2k+1) F-N-2k)$ and $U(2 k F - N-2k)$. As a final check, we can flow to the case with \ac{cs} terms. In this case we reproduce the duality between the $U(N)_{K}$ theory with $F$ flavors and the dual $U((2k+1)(F+K)-N-2k)_{-K}$ studied in~\cite{Kapustin:2011vz}. \section{Orthogonal gauge groups} \label{sec:orthogonal} In this section we discuss orthogonal gauge groups. An $SO(N)$ gauge theory with $2F$ fundamental vectors and global symmetry $SU(2F) \times U(1)_A \times U(1)_R$ without superpotential is dual to an $SO(2F-N+4)$ gauge theory with $2F$ fundamental vectors $q$ and a meson in the (conjugate) symmetric representation of the global $SU(2F)$ with superpotential $W= M q q $. This duality was first presented in~\cite{Intriligator:1995id}. The field content is given in Table~\ref{tab:field-content-SO}.
\begin{table} \centering \begin{tabular}{cccccc} \toprule & \(SO(N)\) & \(SO(\widetilde N)\) & \(SU(2F)\) & \(U(1)_A\) & \(U(1)_R\) \\ \midrule \(Q\) & \(N\) & \(1\) & \(2F\) & \(1\) & \(1-(N-2)/F\) \\ \(q\) & \(1\) & \(\widetilde N\) & \(\overline{2F}\) & \(-1\) & \((N-2)/F\) \\ \(M\) & \(1\) & \(1\) & \(F(2F+1)\) & \(2\) & \(2-2(N-2)/F\) \\ \bottomrule \end{tabular} \caption{Field content for the \( SO(N) \) gauge theory with global \(SU(2F) \times U(1)_A \times U(1)_R \) symmetry.} \label{tab:field-content-SO} \end{table} \subsection{Aspects of field theory} One can associate three distinct gauge groups to the Lie algebra $so(N)$, as discussed in~\cite{Aharony:2013hda} where they were called $SO(N)_{\pm}$ and $\Spin(N)$. In four dimensions the different choices depend on the spectrum of line defects, while in three dimensions they depend on the monopole charges in the dual algebra. The Coulomb branch variables associated to the $so(N)$ algebra are \begin{equation} Y_i = e^{(\sigma_i-\sigma_{i+1})/{e_3^2}+ i (\phi_i-\phi_{i+1} )} \; , \quad \quad i=1,\dots, N-1 \end{equation} and \begin{equation} \begin{cases} Y_{N} = e^{(\sigma_{N-1}+\sigma_N)/e_3^2+i(\phi_{N-1}+\phi_N)} & \text{\(N\) even} \;,\\ Y_{N} = e^{2 \sigma_N/e_3^2+2 i \phi_N} & \text{\(N\) odd} \;. \end{cases} \end{equation} At finite radius there is also a superpotential $W_\eta = \eta Z$ from the \ac{kk} monopoles~\cite{Davies:2000nw,Aharony:2011ci,Aharony:2013kma}, where $Z = Y_1 \prod_{i=2}^{N-2} Y_{i}^2 Y_{N-1} Y_{N}$ in the even case and $Z = Y_1 \prod_{i=2}^{N-1} Y_{i}^2 Y_{N}$ in the odd case. The two expressions finally boil down to $Z = e^{(\sigma_1+\sigma_2)/e_3^2+ i (\phi_1+\phi_2)} $.
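The statement that both expressions reduce to the same $Z$ can be checked by telescoping the exponents. With the convention $Y_i = \eu^{(\sigma_i-\sigma_{i+1})/e_3^2 + i(\phi_i-\phi_{i+1})}$, as in the symplectic case above, one finds (we display only the $\sigma$-part; the phases $\phi$ work identically):

```latex
% Telescoping check of Z = Y_1 (prod Y_i^2) Y_{N-1} Y_N in the even (D-type)
% case; sigma-exponents only.
\begin{equation}
  (\sigma_1 - \sigma_2)
  + 2 \sum_{i=2}^{N-2} (\sigma_i - \sigma_{i+1})
  + (\sigma_{N-1} - \sigma_N) + (\sigma_{N-1} + \sigma_N)
  = \sigma_1 + \sigma_2 \,.
\end{equation}
% In the odd (B-type) case, with Y_N = exp(2 sigma_N / e_3^2 + 2 i phi_N),
\begin{equation}
  (\sigma_1 - \sigma_2)
  + 2 \sum_{i=2}^{N-1} (\sigma_i - \sigma_{i+1})
  + 2 \sigma_N
  = \sigma_1 + \sigma_2 \,,
\end{equation}
% reproducing Z = exp((sigma_1 + sigma_2)/e_3^2 + i (phi_1 + phi_2))
% in both cases.
```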
In the presence of matter fields this superpotential still contributes to the theory, but there is a difference with respect to the symplectic and unitary cases: the superpotential $W_\eta$ does not completely lift the Coulomb branch, parametrized by $Y_{\Spin}= e^{2 \sigma_1/e_3^2+2 i \phi_1}$ in the $\Spin(N)$ case and $Y= e^{\sigma_1/e_3^2+i \phi_1}$ in the $SO(N)$ case. There are three possible dualities: $\Spin(N) \leftrightarrow SO(\widetilde N)_{-}$, $SO(N)_- \leftrightarrow \Spin(\widetilde N)$ or $SO(N)_+ \leftrightarrow SO(\widetilde N)_{+}$, where in each case $\widetilde N=F-N+2$. It is possible to reduce the $4$D dualities to $3$D dualities by considering the limit $r \rightarrow 0$, \emph{i.e.} $\eta \rightarrow 0$, without adding real masses. This is possible because of the presence of a Coulomb branch. A region near the origin of the moduli space on the electric side of $SO(N)_+$ corresponds in the dual to the region $\widetilde Y =i/\sqrt{ \widetilde \eta}$. This breaks the gauge symmetry to $SO(F-N+2) \times SO(2)$. This last sector in the \textsc{ir} is described by its Coulomb branch variable interacting with the monopole of the unbroken sector through an \ac{ahw} superpotential. Similarly, one obtains a duality between $SO(N)$ and $\Spin(N)$ theories in pure $3$D. At the local level, the $O(N)$ duality studied in~\cite{Benini:2011mf,Aharony:2011ci} is recovered. \subsection{Brane description} \begin{figure} \centering \includegraphics{DynkinBn.pdf} \caption{Dynkin and affine Dynkin diagrams and spectrum of \acs{bps} \F1--strings associated to the fundamental monopoles for $SO(2N+1)$ theories (\(B_N\) algebra). The orientifold at the mirror point \(x_3 = x_3^\circ\) is an \(\O3^-\), while the one at \(x_3 = 0\) is an \(\widetilde \O3^-\).
For this reason the Dynkin diagram of \(\widetilde B_N\) does not have a \(\setZ_2\) symmetry.} \label{fig:Dynkin-Bn} \centering \includegraphics{DynkinDn.pdf} \caption{Dynkin and affine Dynkin diagrams and spectrum of \acs{bps} \F1--strings associated to the fundamental monopoles for $SO(2N)$ theories (\(D_N\) algebra). The affine root is in blue, the root due to the orientifold at \(x_3 = 0\) in teal. Both orientifolds are \(\O3^-\) and the affine Dynkin diagram has a \(\setZ_2\) symmetry.} \label{fig:Dynkin-Dn} \end{figure} Now we turn to the brane picture. We will limit the discussion to the study of the local properties, without focusing on the difference between the $\Spin(N)$ and the $SO(N)_\pm$ cases. We will comment on the possibility of extending the analysis to the global properties of the gauge group in the conclusions. At the brane level these theories are obtained in two different ways. In one case, we put the $\O4^{-}$ ($\widetilde \O4^{-}$) on $N=2n$ ($N=2n+1$) \D4--branes stretched between the \NS{}--brane and the \NS'--brane. This theory has an $Sp(2F)$ global symmetry and we expect that this symmetry is enhanced to $SU(2F)$. In the second case we consider two $\NS{}_{\pm\theta}$--branes connected by a stack of \D4--branes intersecting the $\O6^+$--plane symmetrically with respect to the $\NS{}_{\pm \theta}$--branes. We can distinguish between the even $N=2n$ case and the odd $N=2n+1$ case; this corresponds to the parity of the number of \D4--branes. In this case the global symmetry is $SU(F)$ and we expect it to be enhanced to $SU(F)^2$. In the T--dual \tIIB description there is an \O3--plane between the two $\NS_{\pm \theta}$--branes and the \(2 F\) $\D5_{\pm \theta}$--branes. The gauge theory lives on the $N$ \D3s extended along $x_6$. First we study the generation of the superpotential in the Coulomb branch in the case of a pure gauge theory.
Then we discuss the new duality obtained on the circle and finally we reproduce the $3$D limit studied in~\cite{Aharony:2013kma}. The three-dimensional theory on the circle has two possible orientifolds, $\O3^-$ or $\widetilde \O3^-$. In the first case we have to consider an even number of \D3--branes, while in the second case an odd number. We can study in both cases the generation of the superpotential in terms of the Coulomb branch variables. The superpotential on the Coulomb branch is obtained in terms of the spectrum of the allowed \acs{bps} \F1--strings in the presence of the orientifold, as discussed in Section~\ref{sec:general}. In the orthogonal case we can represent the two different possibilities, $\O3^-$ or $\widetilde \O3^-$, with the $D_N$ and the $B_N$ series respectively (see Figure~\ref{fig:Dynkin-Dn} and Figure~\ref{fig:Dynkin-Bn}). The extra superpotential corresponds in both cases to the extra term $Z = e^{(\sigma_1+\sigma_2)/e_3^2+ i (\phi_1+\phi_2)}$. This term arises because crossing from one half of the circle to the other identifies two eigenvalues, and it adds the superpotential $W_\eta = \eta Z$. Finally, we have \begin{align} W_{SO(2n)} = \sum_{i=1}^{r_G} \frac{1}{Y_i} + \eta Z \; , && W_{SO(2n+1)} = \sum_{i=1}^{r_G-1} \frac{1}{Y_i} +\frac{2}{Y_{r_G}} + \eta Z \;. \end{align} When we consider the \D5--branes there is an unlifted direction in the Coulomb branch, corresponding to the term $e^{2 \sigma_1/e_3^2}$. We consider $F+2$ \D5--branes on each \NS{}--brane and take the pure $3$D limit. In the electric theory we are left with a pure $3$D $so(N)$ theory with $2F$ flavors. In the dual theory the situation is more intricate. At $x_3=0$ there is an $so(2F-N+2)$ theory with superpotential $W= M q q $. At $x_3=x_3^\circ$ there is an $so(4)$ gauge theory with two fundamentals. It can be dualized to a singlet $Y$, interacting with the $so(2F-N+2)$ sector through an \ac{ahw} superpotential. This interaction is $W = y Y$ where $y$ is the magnetic monopole.
By interpreting $Y$ as the electric monopole acting as a singlet in the magnetic theory we arrive at the expected duality. \subsubsection{An alternative limit} Unlike the unitary and symplectic cases, here the pure $3$D limit can be obtained without any real mass flow~\cite{Aharony:2013kma}. The reason is that the region $x_3\simeq 0$ of the Coulomb branch in the electric theory corresponds to the region $x_3 \simeq x_3^\circ$ in the magnetic one. An $so(2)$ gauge theory is created at $x_3^\circ$ in the magnetic theory, and the pure $3$D limit can be taken directly, preserving the duality. In the brane description we consider the \D3s in the electric theory at the origin, while in the magnetic theory the orientifold automatically generates a pair of \D3--branes at $x_3^\circ$. The dual gauge theory becomes $so(2F-N+2) \times so(2)$. The final configuration is in Figure~\ref{fig:scalingO(n)}. \begin{figure} \centering \includegraphics[width=.7\textwidth]{scalingOn-crop.pdf} \caption{Two \D5--branes reconnect at the mirror point of the circle.} \label{fig:scalingO(n)} \end{figure} The $so(2)$ sector is dual to a singlet $Y$, which interacts with the $so(2F-N+2)$ sector through an \ac{ahw} superpotential. Again, in the pure $3$D case, $Y$ has the same quantum numbers as the electric monopole. \bigskip It is possible to study the case with \O6--planes as well. In this case the discussion parallels the symplectic case, and we do not report the whole analysis. The mirror orientifold is created on the circle and the extra sectors can be studied with the usual brane techniques. One can also study the cases with tensor matter, by adding $k$ \NS'--branes in the case with an \O4--plane and $k$ $\NS_{\pm \theta}$--branes for the cases with the \O6--planes. Moreover, one can consider the cases with \O6--planes and an extra \NS{}--brane; this leads to unitary theories with symmetric matter, and the discussion follows that in Section~\ref{sec:unitary-antisymmetric}.
In all these cases new examples of three-dimensional dualities can be worked out. We conclude by observing that the known three-dimensional case studied in~\cite{Kim:2013cma} can be recovered from these dualities. \section{Conclusions} \label{ref:conclusions} In this paper we completed the analysis, started in~\cite{Amariti:2015yea}, of the reduction of four-dimensional dualities to three dimensions via brane constructions. We have shown that this picture captures the relevant properties of the reduction of the duality on $\mathbb{R}^3 \times S^1$. By T--duality the Coulomb branch on the circle is correctly described, after separating the \D3--branes in the compact direction, by an affine Toda potential for the \F1--strings in an S-dual frame. When considering real groups or tensor matter fields, a crucial role is played by the behavior of the orientifold under T--duality. A second orientifold plane is generated at the opposite point on the T--dual circle. We have shown that it is necessary to consider the physics at this mirror point when taking the three-dimensional limit. This limit is a double scaling on the real masses and the radius. The masses correspond to the positions of certain \D5--branes (and in the magnetic phases also \D3--branes). By reconnecting the branes at the mirror point, we obtain a unified scenario for studying the reduction of four-dimensional dualities that admit a \tIIA description. The construction provides an algorithmic way to obtain many new three-dimensional dual pairs from their four-dimensional parents, as we have discussed in this article. \bigskip The construction presented here is generic for $4$D dualities that can be described by \tIIA brane systems, and several extensions are possible. \emph{E.g.\ }one could apply the reduction to \tIIA setups involving chiral matter and orientifolds like the ones studied in~\cite{Hanany:1997sa,Hanany:1997tb,Hanany:1999sj}. 
It would be interesting to study the spectrum of line defects and their connection to dualities in the brane picture. The relations of line defects to global properties of the gauge groups have been pointed out in~\cite{Aharony:2013hda}, and there are various implications for the duality involving the orthogonal algebras $so(n)$~\cite{Aharony:2013kma}. It should be possible to distinguish between ``$\Spin(N)$'' and ``$SO(N)_\pm$'' (in the language of~\cite{Aharony:2013kma}) also in the brane setup, \emph{e.g.\ }following the discussion in~\cite{Moore:2014gua}. More explicitly, in the \tIIB description, one can separate the \D3--branes, studying configurations of semi-infinite (electric) \D1--branes and (magnetic) \F1--strings with endpoints on the \D3--branes. In the presence of an orientifold, this analysis should give rise to the distinction between $\Spin(N)$ and $SO(N)_\pm$. We leave this problem for future investigation. Another interesting extension of our analysis involves the pairs of orientifolds associated with \emph{twisted} affine Dynkin diagrams. These cases do not descend from a compactification of a \tIIA background, and they do not represent a four-dimensional theory. However, they do correspond to well-defined theories on $\mathbb{R}^3 \times S^1$. One might expect to obtain new Seiberg-like dualities corresponding to these configurations. By assigning suitable real masses one may even expect to obtain new purely three-dimensional dualities, without four-dimensional parents. It would be interesting to investigate further in this direction. Let us comment on the generation of the monopole charges. The axial $U(1)_A$, anomalous in four dimensions, is broken by the \ac{kk} monopole superpotential at finite radius. However, rotating the \D5s on the circle partially breaks the non-abelian flavor symmetry and generates the axial symmetry. The massless singlets located at $x_3 = x_3^\circ$ are charged under this symmetry and survive the pure $3$D limit. 
They correspond to the monopoles of the electric theory and, at the same time, their $U(1)_A$ charge is imposed by the original global symmetry. This observation explains the relation pointed out in~\cite{Benna:2009xd}, between the equations governing the cancellation of the anomalies in $4$D and those governing the monopole charges in $3$D. It is possible to reproduce our results when reducing the four-dimensional superconformal index~\cite{Romelsberger:2005eg,Kinney:2005ej} to the three-dimensional partition function~\cite{Jafferis:2010un,Hama:2010av}. That was done for the case of \textsc{sqcd} in~\cite{Aharony:2013dha} and in the presence of adjoint matter in~\cite{Amariti:2014iza} \footnote{ Another example, involving matter in the antisymmetric representation, appeared in~\cite{Gahramanov:2013pca}.}. One should consider the identities summarized in~\cite{Spiridonov:2009za} and obtain new identities for the three-dimensional dualities. A possible strategy for this calculation is the \ac{kk} reduction of the one-loop determinants while shifting some fugacities of the global and local symmetries. This reproduces the double-scaling limit discussed in this paper. One should check that the surviving zero modes remove the possible divergent contributions found in~\cite{Aharony:2013kma}. \section*{Acknowledgments} \begin{small} The authors would like to thank Alberto Zaffaroni, Francisco Morales, Jan Troost, and Massimo Bianchi for discussions and comments. \noindent A.A. is funded by the European Research Council (\textsc{erc}-2012-\textsc{adg}\_20120216) and acknowledges support by \textsc{anr} grant 13-\textsc{bs}05-0001. D.F. is \textsc{frs-fnrs} Charg\'e de Recherches. He acknowledges support by the \textsc{frs-fnrs}, by \textsc{iisn} - Belgium through conventions 4.4511.06 and 4.4514.08, by the Communaut\'e Fran\c{c}aise de Belgique through the \textsc{arc} program and by the \textsc{erc} through the SyDuGraM Advanced Grant. C.K. 
acknowledges support by \textsc{anr} grant 12-\textsc{bs}05-003-01 and by Enhanced Eurotalents, which is co-funded by \textsc{cea} and the European Commission. The work of S.R. is supported by the Swiss National Science Foundation (\textsc{snf}) under grant number \textsc{pp}00\textsc{p}2\_157571/1. \noindent D.O. and S.R. would like to thank the Kavli \textsc{ipmu} for hospitality during the final stages of this work. \end{small}
Pan Pacific London Debuts Yabu Pushelberg handled the interior design Sep 2, 2021 | Branded, Hotel, Hotels, Industry News, Interiors, News, Openings Located on Liverpool Street, Pan Pacific London is a juxtaposition of old and new London architecture. Bishopsgate Plaza encapsulates not only a 43-story bronze tower encompassing Pan Pacific London and private Sky Residence apartments, but also the 144-year-old Devonshire House featuring designer shops, a destination restaurant, and a contemporary cocktail bar. A landscaped public plaza connects these two cultural hubs. The hotel, including its 237 accommodations, event spaces, and public areas, has been created by design duo Yabu Pushelberg, who bring their signature style to the hotel, sparked by the fusion of Southeast Asian vibrancy and the refined elegance of traditional British design. Modern lines and artistic flair run throughout the hotel's public spaces, while guestrooms offer a sense of peace and calm with curved walls and neutral color palettes. Photo: Courtesy of Pan Pacific Hotels Group
The Music Bank Chart is a record chart on the South Korean KBS television music program Music Bank. Every week, the show awards the best-performing single on the chart in the country during its live broadcast. In 2019, 38 singles achieved a number one on the chart and 28 music acts were awarded first-place trophies. "Boy with Luv" by BTS won for 7 weeks and is the song with the most wins of the year. The song also acquired the highest point total on the April 26 broadcast with a score of 13,007. Chart history Notes References 2019 in South Korean music South Korea Music Bank Lists of number-one songs in South Korea
We have an app that is live in the app store (View [login to view URL] to learn about it and view the app store listing) It is working well, but some users are reporting a strange bug that we need to resolve. 1. Install the app and give permissions (needs to access contacts but does not store them). 2. Add a contact and add a custom tag. 3. From the home screen, please find that contact by Date. Try to find by Tag also. Do you see the attached error message? Hello, I'm a senior React Native developer, as well as Objective-C. I have read your description and understand you want a bugfix. Can we discuss details? Thanks.
Q: gson,fromJson returning null. Android I'm really stuck on this, after searching for a long time. I have a Json string which is called response. Gson gson = new Gson(); SearchResult searchResult = gson.fromJson(response, SearchResult.class); searchResult is always null. Here is the SearchResult class: public class SearchResult implements Serializable{ public SearchResult() { } private List<MyMarkerResponse> myMarkersList; public List<MyMarkerResponse> getSearch() { return myMarkersList; } public void setSearch(List<MyMarkerResponse> search) { myMarkersList = search; } Here is the json response: {"myMarkerList":[{"notes":"now","_id":5096363633147904,"latitude":51.52753303816573,"longitude":-0.15742387622594833,"title":"home"},{"notes":"","_id":5137355874762752,"latitude":51.46299731478184,"longitude":-0.015837103128433228,"title":""},{"notes":"","_id":5167132077719552,"latitude":44.890621596087136,"longitude":42.57036656141281,"title":"cat"},{"notes":"new place","_id":5631986051842048,"latitude":65.7773746361831,"longitude":107.60726854205132,"title":"hello"},{"notes":"new place","_id":5692462144159744,"latitude":65.7773746361831,"longitude":107.60726854205132,"title":"hello"},{"notes":"","_id":5720147234914304,"latitude":51.51407752981666,"longitude":-0.12392342090606688,"title":""},{"notes":"place","_id":5730082031140864,"latitude":61.10287229393116,"longitude":88.51000271737576,"title":"new"},{"notes":"","_id":5749328048029696,"latitude":23.44142818293187,"longitude":20.003406554460526,"title":""},{"notes":"now","_id":5769015641243648,"latitude":51.52753303816573,"longitude":-0.15742387622594833,"title":"home"}]} Here is MyMarkerResponse class: public class MyMarkerResponse implements Serializable{ public MyMarkerResponse() { } private String notes; private Long _id; private double latitude; private double longitude; private String title; public Long get_id() { return _id; } public void set_id(Long _id) { this._id = _id; } public double getLatitude() { return 
latitude; } public void setLatitude(double latitude) { this.latitude = latitude; } public double getLongitude() { return longitude; } public void setLongitude(double longitude) { this.longitude = longitude; } public String getTitle() { return title; } public void setTitle(String title) { this.title = title; } public String getNotes() { return notes; } public void setNotes(String notes) { this.notes = notes; } } When I call searchResult.getSearch() I get this error: java.lang.NullPointerException: Attempt to invoke interface method 'java.util.Iterator java.util.List.iterator()' on a null object reference If you could help, please!! A: Your SearchResult POJO is not correct for the JSON that you expect to deserialize. Specifically, the main JSON object contains just one property/attribute/key called "myMarkerList". So your POJO should likewise contain one property of type list/array. You need to change it into: public class SearchResult { private List<MyMarkerResponse> myMarkerList; public List<MyMarkerResponse> getSearch() { return myMarkerList; } public void setSearch(List<MyMarkerResponse> search) { this.myMarkerList = search; } } UPDATED: I realized that the code itself is correct, you just need to use a different fromJson function (the one that takes the Type as second parameter): Gson gson = new Gson(); Type searchResultType = new TypeToken<SearchResult>() {}.getType(); SearchResult searchResult = gson.fromJson(response, searchResultType); Then your code should work as you expected. Please give it a try and let me know if this helps. A: My friend, use this site: JSON schema to POJO. If you are using Gson in a POJO you need annotations. Select Source type: JSON and Annotation Style: Gson. Paste your example JSON response and generate it. A: You've misspelled myMarkerList in the POJO. Currently you have: private List<MyMarkerResponse> myMarkersList; Check your spelling (remove the 's').
\section{Introduction} \label{sec:intro} Throughout the modern history of nuclear and particle physics, measurements of structure functions in high-energy lepton-nucleon scattering have played a pivotal role. The demonstration of structure function scaling in early deep-inelastic scattering (DIS) experiments at SLAC in the late 1960's \cite{DIS} established the reality of quarks as elementary constituents of protons and neutrons --- a feat recognized by the award of the 1990 Nobel Prize in Physics to Friedman, Kendall and Taylor. This paved the way to the development of quantum chromodynamics (QCD) as the theory of strong nuclear interactions, and its subsequent confirmation several years later through the discovery of logarithmic scaling violations in structure functions \cite{Violation}. The interpretation of structure functions in terms of quark and gluon (or parton) momentum distributions resulted in the emergence of a remarkably simple and intuitive picture of the nucleon \cite{Feynman}, allowing a vast amount of scattering data to be described in terms of a few universal functions --- the parton distribution functions (PDFs). At leading order (LO) in $\alpha_s$, the $F_2$ structure function of the proton, for example, could be simply represented as a charge squared-weighted sum of PDFs, \begin{equation} F_2 = \sum_q e_q^2\, x (q + \bar q) = {4\over 9} x (u+\bar u) + {1\over 9} x (d+\bar d) + {1\over 9} x (s+\bar s) + \cdots\, , \end{equation} where $q$ and $\bar q$ are the quark and antiquark momentum distribution functions, usually expressed as functions of the momentum fraction $x$ of the nucleon carried by the parton, at a scale given by the momentum transfer squared $Q^2$. Over the ensuing decades concerted experimental DIS programs at SLAC, CERN, DESY and Fermilab have provided a detailed mapping of the PDFs over a large range of kinematics, with $Q^2$ and $x$ spanning several orders of magnitude. 
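The charge-squared weighting in the leading-order expression for $F_2$ above can be made concrete with a short numerical sketch. The $(q+\bar q)$ shapes below are toy, valence-like assumptions chosen only for illustration, not fitted parton distributions.

```python
# LO structure function: F2 = sum_q e_q^2 * x * (q + qbar).
# The (q + qbar) shapes are toy assumptions, not fitted PDFs.

EQ2 = {"u": 4.0 / 9.0, "d": 1.0 / 9.0, "s": 1.0 / 9.0}  # quark charges squared

def toy_q_plus_qbar(x):
    """Toy (q + qbar) momentum distributions with a valence-like large-x falloff."""
    return {
        "u": 5.0 * x**0.5 * (1.0 - x) ** 3,
        "d": 3.0 * x**0.5 * (1.0 - x) ** 4,
        "s": 0.4 * (1.0 - x) ** 7,
    }

def f2_lo(x):
    """Charge-squared weighted flavor sum, as in the LO formula for F2."""
    q = toy_q_plus_qbar(x)
    return x * sum(EQ2[f] * q[f] for f in EQ2)
```

By construction the toy $F_2$ vanishes as $x \to 1$, mimicking the suppression of partons at large momentum fraction discussed in the text.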
To manage the ever increasing number of data sets, from not only inclusive DIS but also other high energy processes such as Drell-Yan, $W$-boson and jet production in hadronic collisions at Fermilab, sophisticated global fitting efforts were developed \cite{CTEQ,MSTW,Alekhin,NNPDF} that now include perturbative corrections calculated to next-to-leading order (or higher) in the strong coupling constant $\alpha_s$. With the increasing energies available at facilities such as the DESY $ep$ collider HERA (and in future the Large Hadron Collider at CERN), PDF studies turned their attention primarily to exploring the region of very small $x$ (down to $x \sim 10^{-6}$), where the structure of the nucleon is dominated by its sea quark and gluon distributions. However, while opening the door to exploration of phenomena such as saturation and $Q^2$ evolution in new kinematic regimes, one can argue whether DIS at low $x$ measures the intrinsic structure of the nucleon or the hadronic structure of the virtual photon, $\gamma^*$. Because the virtual photon in the DIS process can fluctuate into $q\bar q$ pairs whose coherence length $\lambda \sim 1/M x$ becomes large at small $x$, DIS at $x \ltorder 0.1$ really probes the $\gamma^* N$ interaction rather than the structure of the nucleon or the $\gamma^*$ separately. At high $x$, in contrast, the virtual photon is point-like and unambiguously probes the structure of the nucleon \cite{Levy}. Moreover, despite the impressive achievements over the past 4 decades, there are still some regions of kinematics where our knowledge of structure functions and PDFs remains unacceptably poor, with little progress made since the 1970's. A striking example is the region of large $x$ ($x \gtorder 0.5$), where most of the momentum is carried by valence quarks, and sea quarks and gluons are suppressed. 
Here the valence quark PDFs can be more directly related to quark models of hadron structure; however, the rapidly falling cross sections have made precision measurements extremely challenging. Another example is the pre-asymptotic region dominated by nucleon resonances, where data on the individual transverse and longitudinal cross sections at intermediate and high values of $Q^2$ are either nonexistent or have large uncertainties. With the availability of continuous, high luminosity electron beams at the CEBAF accelerator, the first decade of experiments at Jefferson Lab has seen a wealth of high-quality data on unpolarized structure functions of the nucleon, penetrating into the relatively unexplored large-$x$ domain and the transition region between resonances and scaling. The new data in the resonance region confirmed in spectacular fashion the phenomenon of quark-hadron duality in the proton $F_2$ structure function, and revealed intriguing details about the workings of duality in a number of other observables. The impact of this has been a re-evaluation of the applicability of perturbative QCD to structure functions at low $Q^2$, and has allowed a much larger body of data to be used in global PDF analyses \cite{CTEQ6X}. Jefferson Lab has also set a new standard in the determination of Rosenbluth-separated longitudinal and transverse structure functions, which eliminates the need for model-dependent assumptions that have plagued previous extractions of structure functions from cross section data. On the theoretical front, the region of large $x$ and low $Q^2$ brings to the fore a number of issues which complicate structure function analysis, such as $1/Q^2$ suppressed target mass and higher twist corrections, and nuclear corrections when scattering from nuclear targets. Controlling these corrections requires more sophisticated theoretical tools to be developed, and has motivated theoretical studies, many of which are still ongoing. 
It has also paved the way towards the 12~GeV experimental program, in which structure functions will be measured to very high $x$ in the DIS region, addressing some long-standing questions about the behavior of PDFs as $x \to 1$. In the next section we summarize the kinematics and formalism relevant for inclusive lepton--nucleon scattering, including the key results from the operator product expansion. Measurements of the proton $F_2^p$ structure functions and their moments are reviewed in Sec.~\ref{sec:F2p}, together with their role in the verification of quark-hadron duality. Data on the deuteron $F_2^d$ structure function are presented in Sec.~\ref{sec:F2d}, and the extraction from these of the neutron structure function $F_2^n$ is discussed in Sec.~\ref{sec:F2n}. Section~\ref{sec:FL} reviews new measurements of the longitudinal structure function $F_L$, while Sec.~\ref{sec:other} surveys results from semi-inclusive pion production. Finally, in Sec.~\ref{sec:outlook} we describe the impact that the Jefferson Lab data have had on our understanding of nucleon structure in a global context, and briefly outline prospects for future measurements in the 12~GeV era. \section{Formalism} \label{sec:form} \subsection{Kinematics} \label{ssec:kin} Because of the small value of the electromagnetic fine structure constant, $\alpha = e^2/4\pi$, the inclusive scattering of an electron from a nucleon, $e(k) + N(p) \to e'(k') + X$, can usually be approximated by the exchange of a single virtual photon, $\gamma^* (q)$, where $q = k - k'$. In terms of the laboratory frame incident electron energy $E$, the scattered electron energy $E'$, and the scattering angle $\theta$, the photon virtuality is given by $-q^2 \equiv Q^2 = 4EE'\sin^2\theta/2$, where the electron mass has been neglected. 
The invariant mass squared of the final hadronic state $X$ is $W^2 = (p+q)^2 = M^2 + 2 M \nu - Q^2 = M^2 + Q^2(1-x)/x$, where $M$ is the nucleon mass, $\nu = E-E'$ is the energy transfer, and $x = Q^2/2M\nu$ is the Bjorken scaling variable. In the one photon exchange approximation the spin-averaged cross section for inclusive electron-nucleon scattering in the laboratory frame can be written as \begin{equation} { d^2\sigma \over d\Omega dE' } = { \alpha^2 \over Q^4 } {E' \over E} L_{\mu\nu} W^{\mu\nu}, \label{eq:sig_tens} \end{equation} where the leptonic tensor $L_{\mu\nu} = 2 \left( k_\mu k'_\nu + k'_\mu k_\nu - g_{\mu\nu} k \cdot k' \right)$, and using constraints from Lorentz, gauge and parity invariance the hadronic tensor $W^{\mu\nu}$ can in general be written as \begin{equation} M W^{\mu\nu} = \left( - g^{\mu\nu} + { q^\mu q^\nu \over q^2} \right) F_1(x,Q^2) + \left( p^\mu - { p\cdot q \over q^2 } q^\mu \right) \left( p^\nu - { p\cdot q \over q^2 } q^\nu \right) { F_2(x,Q^2) \over p\cdot q }\, . \label{eq:Wmunu} \end{equation} The structure functions $F_1$ and $F_2$ are generally functions of two variables, but become independent of the scale $Q^2$ in the Bjorken limit, in which both $Q^2$ and $\nu \to \infty$ with $x$ fixed. At finite values of $Q^2$ a modified scaling variable is more appropriate \cite{Nachtmann,Greenberg}, \begin{equation} \xi = {2x \over 1 + \rho}\, ,\ \ {\rm with}\ \rho = {|\bm{q}| \over \nu} = \sqrt{1 + {Q^2/\nu^2}}\ , \label{eq:xi} \end{equation} which tends to $x$ in the Bjorken limit. 
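The kinematic relations above, together with the Nachtmann variable of Eq.~(\ref{eq:xi}), can be checked with a short numerical sketch; the beam settings used are illustrative choices, not actual experiment kinematics.

```python
import math

M = 0.9383  # proton mass in GeV (rounded)

def dis_kinematics(E, Eprime, theta):
    """Lab-frame DIS invariants for e + N -> e' + X (electron mass neglected).

    E, Eprime in GeV; theta in radians.
    """
    Q2 = 4.0 * E * Eprime * math.sin(theta / 2.0) ** 2  # photon virtuality
    nu = E - Eprime                                     # energy transfer
    x = Q2 / (2.0 * M * nu)                             # Bjorken variable
    W2 = M**2 + 2.0 * M * nu - Q2                       # invariant mass squared of X
    return Q2, nu, x, W2

def nachtmann_xi(x, Q2):
    """xi = 2x / (1 + rho), with rho^2 = 1 + Q2/nu^2 = 1 + 4 M^2 x^2 / Q2."""
    rho = math.sqrt(1.0 + 4.0 * M**2 * x**2 / Q2)
    return 2.0 * x / (1.0 + rho)
```

At finite $Q^2$ one finds $\xi < x$, while $\xi \to x$ in the Bjorken limit, as stated in the text.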
In terms of cross sections for absorbing helicity $\pm 1$ (transverse) and helicity 0 (longitudinal) photons, $\sigma_T$ and $\sigma_L$, the cross section can be written as \begin{equation} \label{eq:gameps} { d^2\sigma \over d\Omega dE' } = \Gamma \left( \sigma_T(x,Q^2) + \epsilon\, \sigma_L(x,Q^2) \right), \label{eq:sig_phot} \end{equation} where $\Gamma = (\alpha/2 \pi^2 Q^2) (E'/E) K/(1-\epsilon)$ is the flux of transverse virtual photons, with the factor $K = \nu (1-x)$ in the Hand convention \cite{Hand}, and \begin{equation} \epsilon = \left[ 1 + 2\left( 1+\frac{\nu^2}{Q^2} \right) \tan^2\frac{\theta}{2} \right]^{-1} \end{equation} is the relative flux of longitudinal virtual photons. Equating Eqs.~(\ref{eq:sig_tens}) and (\ref{eq:sig_phot}), the structure functions can be written in terms of the photoabsorption cross sections as \begin{eqnarray} \label{eq:f1sigt} F_1(x,Q^2) &=& { K \over 4\pi^2\alpha } M \sigma_T(x,Q^2)\, , \\ \label{eq:f2sigs} F_2(x,Q^2) &=& { K \over 4\pi^2\alpha } { \nu \over (1 + \nu^2/Q^2) } \left[ \sigma_T(x,Q^2) + \sigma_L(x,Q^2) \right], \end{eqnarray} which reveals that $F_1$ is related only to the transverse virtual photon coupling, while $F_2$ is a combination of both transverse and longitudinal couplings. One can also define a purely longitudinal structure function $F_L$, \begin{equation} F_L = \rho^2 F_2 - 2xF_1 = 2xF_1\, R\, , \label{eq:fl} \end{equation} where $R = \sigma_L/\sigma_T$ is the ratio of longitudinal to transverse cross sections. The separation of the unpolarized structure functions into longitudinal and transverse parts from cross section measurements can be accomplished via the Rosenbluth, or longitudinal-transverse (LT), separation technique \cite{Rosen}, by making measurements at two or more values of $\epsilon$ for fixed $x$ and $Q^2$. 
Fitting the reduced cross section $\sigma/\Gamma$ linearly in $\epsilon$ yields $\sigma_T$ (and therefore $F_1$) as the intercept, while the ratio $R$ is obtained from the slope. Note that $F_2$ can only be extracted from cross sections either by measuring at $\epsilon = 1$ or by performing LT separations. At typical Jefferson Lab kinematics the contribution of $F_L$ to $F_2$ can be significant. The above discussion assumed the dominance of the one-photon exchange amplitude in describing the neutron current electron scattering cross section. In principle there are additional contributions arising from the exchange of a $Z$ boson, and in particular the interference between $\gamma^*$ and $Z$ exchange. The interference is in fact very relevant for parity-violating electron scattering, discussed elsewhere in this volume in connection with extractions of strange electromagnetic form factors from parity-violating asymmetries. \subsection{Operator product expansion} \label{ssec:ope} The theoretical basis for describing the $Q^2$ dependence of structure functions in QCD is Wilson's operator product expansion (OPE) \cite{OPE}. The quantities most directly amenable to a QCD analysis are the {\it moments} of structure functions, the $n$-th moments of which are defined as \begin{eqnarray} M_1^{(n)}(Q^2) &=& \int_0^1 dx\ x^{n-1} F_1(x,Q^2)\, ,\ \ \ \ \ M_{2,L}^{(n)}(Q^2)\ =\ \int_0^1 dx\ x^{n-2} F_{2,L}(x,Q^2)\, . \label{eq:mom_def} \end{eqnarray} As will become relevant in the discussion of duality in Sec.~\ref{sec:F2p} below, at large $Q^2 \gg \Lambda_{\rm QCD}^2$ the moments can be expanded in powers of $1/Q^2$, with the coefficients in the expansion given by matrix elements of local operators corresponding to a certain {\em twist}, $\tau$, defined as the mass dimension minus the spin, $n$, of the operator. 
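The LT-separation procedure just described is simply a linear fit in $\epsilon$; the sketch below performs it on synthetic reduced cross sections (the numerical values are invented for illustration, not measured data).

```python
# Rosenbluth (LT) separation: fit sigma/Gamma = sigma_T + eps * sigma_L
# linearly in eps at fixed (x, Q2); intercept -> sigma_T, slope -> sigma_L.

def lt_separation(eps, reduced):
    """Ordinary least-squares line through the points (eps_i, (sigma/Gamma)_i)."""
    n = len(eps)
    se, sy = sum(eps), sum(reduced)
    see = sum(e * e for e in eps)
    sey = sum(e * y for e, y in zip(eps, reduced))
    sigma_L = (n * sey - se * sy) / (n * see - se * se)  # slope
    sigma_T = (sy - sigma_L * se) / n                    # intercept
    return sigma_T, sigma_L, sigma_L / sigma_T           # last entry is R

# Synthetic data below are generated from sigma_T = 10, sigma_L = 2 (R = 0.2),
# measured at three epsilon settings, mimicking a Rosenbluth scan.
```

With exact (noise-free) inputs the fit returns the generating values, illustrating why at least two well-separated $\epsilon$ points at fixed $(x, Q^2)$ are needed.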
For the $n$-th moment of $F_2$, for instance, one has the expansion \begin{eqnarray} M_2^{(n)}(Q^2) &=& \sum_{\tau=2,4\ldots}^\infty { A_\tau^{(n)}(\alpha_s(Q^2)) \over Q^{\tau-2} }\, ,\ \ \ \ n = 2, 4, 6 \ldots \label{eq:MnOPE} \end{eqnarray} where $A_\tau^{(n)}$ are the matrix elements with twist $\leq \tau$. As the argument suggests, the $Q^2$ dependence of the matrix elements can be calculated perturbatively, with $A_\tau^{(n)}$ expressed as a power series in $\alpha_s(Q^2)$. For twist two, the coefficients $A_2^{(n)}$ are given in terms of matrix elements of spin-$n$ operators, $A_2^{(n)} p^{\mu_1} \cdots p^{\mu_n} + \cdots = \langle p | \bar\psi \gamma^{\{\mu_1} iD^{\mu_2} \cdots\, iD^{\mu_n\}} \psi | p \rangle$, where $\psi$ is the quark field, $D^\mu$ is the covariant derivative, and the braces $\{ \cdots \}$ denote symmetrization of indices and subtraction of traces. \begin{figure}[t] \center\includegraphics[width=12cm]{fig1.ps} \caption{{\bf (a)}~Leading-twist (``handbag'') contribution to the structure function, {\bf (b)}~higher-twist (``cat's ears'') four-quark contributions, {\bf (c)}~higher-twist quark-gluon interactions.} \label{fig:diag} \end{figure} The leading-twist terms correspond to diagrams such as in Fig.~1~(a), in which the virtual photon scatters incoherently from a single parton. The higher-twist terms in Eq.~(\ref{eq:MnOPE}) are proportional to higher powers of $1/Q^2$ whose coefficients are matrix elements of local operators involving multi-quark or quark-gluon fields, such as those depicted in Fig.~1~(b) and (c). The higher twists therefore parametrize long-distance multi-parton correlations, which can provide clues to the dynamics of quark confinement. The additional terms (referred to as the ``trace terms'') in the twist-two matrix elements involve structures such as $p^2 g^{\mu_i \mu_j}$ and are thus suppressed by powers of $p^2/Q^2 \sim Q^2/\nu^2$. 
While negligible in the Bjorken limit, at finite $Q^2$ these give rise to the so-called {\it target mass corrections} (TMCs), and are important in the analysis of Jefferson Lab data at large values of $x$. Because their origin is in the same twist-two operators that give rise to structure function scaling, TMCs are formally twist-two effects and are kinematical in origin. Inverting the expressions for the full moments including the trace terms, the resulting target mass corrected $F_2$ structure function in the OPE is given by \cite{GP,TMC} \begin{equation} F_2^{\rm TMC}(x,Q^2) = {x^2 \over \xi^2 \rho^3} F_2^{(0)}(\xi,Q^2) + {6 M^2 x^3 \over Q^2 \rho^4} \int_\xi^1 du\, \left( 1 + {2 M^2 x \over Q^2 \rho} (u-\xi) \right) {F_2^{(0)}(u,Q^2) \over u^2}\, , \label{eq:ope} \end{equation} where $F_2^{(0)}$ is the structure function in the $M^2/Q^2 \to 0$ limit. Similar expressions are found for the $F_1$ and $F_L$ structure functions \cite{GP,TMC}. One should note, however, that the OPE result for TMCs to structure functions is not unique; in the collinear factorization approach, for example, in which parton distributions are formulated {\it a priori} in momentum space, different expressions for TMCs arise \cite{CF}. While both formalisms give the same results in the Bjorken limit, the differences between these at finite $Q^2$ can be seen as representing an inherent prescription dependence and systematic uncertainty in the analysis of structure functions at low $Q^2$. 
\section{Proton $F_2$ structure function} \label{sec:F2p} \begin{figure}[ht] \begin{center} \includegraphics[width=8.1cm]{fig2a.eps} \includegraphics[width=6.9cm]{fig2b.eps} \caption{\label{fig:f2} ({\it Left}) Proton $F_2^p$ structure function data from Hall~C \cite{LIANG,SIMONA,e00002}, SLAC \cite{Whitlow}, and NMC \cite{f2nmc} at $Q^2 = 0.5, 1.5, 3$ and 5.5~GeV$^2$, compared with a fit \cite{ResFit} to the transverse and longitudinal resonance cross sections from photoproduction to $Q^2 = 9$~GeV$^2$ (solid), and a global fit \cite{f2allm} to DIS data (dashed). ({\it Right}) Proton $F_2^p$ from CLAS in Hall~B at $Q^2=0.775$~GeV$^2$ (stars) \cite{Osi03}, compared to earlier Hall~C data (open circles) \cite{IOANA}.} \end{center} \end{figure} Measurements of the proton $F_2^p$ structure function have been taken at Jefferson Lab over a range of kinematics, from $Q^2$ as low as 0.1~GeV$^2$ and below (to study the transition to the photoproduction point, $Q^2=0$) and up to $Q^2 = 8$~GeV$^2$ (to study the large-$x$ behavior and quark-hadron duality). At the larger $Q^2$ values the high luminosity provided by the CEBAF accelerator has allowed significant improvement in the statistical precision of high-$x$ measurements over all previous experiments. In addition, with the HMS spectrometer in Hall~C well understood, LT separated cross sections have been measured with better than 1.6\% systematic point-to-point uncertainties and typically less than 1.8\% normalization uncertainties. Precision $F_2^p$ spectra extracted from cross sections measured in Hall~C \cite{LIANG,SIMONA,e00002} are shown in Fig.~\ref{fig:f2}~(left) as a function of $x$ for several $Q^2$ values ($Q^2 = 0.5, 1.5, 3$ and 5.5~GeV$^2$), together with previous SLAC \cite{Whitlow} and NMC \cite{f2nmc} data at lower $x$. 
The data have been bin-centered to the common $Q^2$ values shown for all measurements within the range of 20\% of the central value, utilizing a fit \cite{f2allm} to the DIS data and a global fit \cite{ResFit} to Jefferson Lab resonance region data. In addition to the Hall~C measurements, there now also exists a large body of $F_2^p$ data from Hall~B covering a significant range of kinematics, which is afforded by the large acceptance of the CLAS spectrometer. An example of the $F_2^p$ spectrum extracted from CLAS is shown in Fig.~\ref{fig:f2}~(right) at $Q^2 = 0.775$~GeV$^2$. In Table~\ref{tab:jlab_exps} we present a complete list of all unpolarized inclusive and semi-inclusive measurements on the proton and deuteron performed at Jefferson Lab through 2010, including their current status. \begin{table}[th] \begin{center} \caption{Listing of Jefferson Lab unpolarized inclusive and semi-inclusive electron-nucleon scattering experiments.} \label{tab:jlab_exps} \begin{tabular}{l c c l c l} \hline \hline & & & & & \\ Experiment & Hall & Target & Observable & Reference & Status\\ & & & & & \\ \hline & & & & & \\ E94-110 & C & $p$ & $R$ in resonance region & \cite{LIANG,94elast,94nonlin,94baldin} & data taken in 1999, \\ & & & & & analysis completed \\ & & & & & \\ E99-118 & C & $p,d$ & nuclear dependence of & \cite{e99118} & data taken in 2000, \\ & & & $R$ at low $Q^2$ & & analysis completed \\ & & & & & \\ CLAS & B & $p,d$ & inclusive cross sections & \cite{Osi03,Osi06} & e1/e2 run periods \\ & & & & & \\ E00-002 & C & $p,d$ & $F_2$ at low $Q^2$ & \cite{e00002} & data taken in 2003, \\ & & & & & analysis completed, \\ & & & & & pub. 
in progress \\ & & & & & \\ E00-108 & C & $p,d$ & semi-inclusive $\pi^\pm$ & \cite{Navasardyan,Mkrtchyan} & data taken in 2003, \\ & & & electroproduction & & analysis completed \\ & & & & & \\ E00-116 & C & $p,d$ & inclusive resonance & \cite{SIMONA} & data taken in 2003, \\ & & & region cross sections & & analysis completed \\ & & & at intermediate $Q^2$ & & \\ & & & & & \\ E02-109 & C & $d$ & $R$ in resonance region & \cite{e02109} & data taken in 2005, \\ & & & & & analysis in progress \\ & & & & & \\ E03-012 & B & $d (n)$ & neutron $F_2^n$ {\it via} & \cite{BONUS} & data taken in 2005, \\ (BoNuS) & & & spectator tagging & & analysis completed, \\ & & & & & pub. in progress \\ & & & & & \\ E06-009 & C & $d$ & $R$ in resonance region & \cite{e06009} & data taken in 2007, \\ & & & \& beyond: extension of & & analysis in progress \\ & & & E02-109 to $Q^2=4$~GeV$^2$ & & \\ & & & & & \\ \hline\hline \end{tabular} \end{center} \end{table} \subsection{Quark-hadron duality} \label{ssec:qhd} The proton $F_2^p$ data in Fig.~\ref{fig:f2} illustrate the intriguing phenomenon of quark-hadron duality, which relates structure functions in the nucleon resonance and DIS regions. First observed by Bloom and Gilman \cite{BG} in the early inclusive SLAC data (hence also referred to as ``Bloom-Gilman duality''), the structure functions in the resonance region are found on average to equal the structure functions measured in the ``scaling'' region at higher $W$. The resonance data oscillate around the scaling curve and slide along it with increasing $Q^2$, as seen in Fig.~\ref{fig:f2}~(left). The early $F_2^p$ data from SLAC were extracted from cross sections assuming a fixed value for $R$ (= 0.18), and with a scaling curve parametrizing the limited data available in the early 1970's \cite{Mil72}. 
Since the original measurements, the $F_2^p$ structure function has become one of the best studied quantities in lepton scattering, with data from laboratories around the world contributing to a global data base spanning over five orders of magnitude in both $x$ and $Q^2$. With the advent of the Jefferson Lab data, precise $F_2^p$ measurements now also exist in the resonance region up to $Q^2 \approx 8$~GeV$^2$, allowing many new aspects of duality to be quantified for the first time \cite{MEK}. \begin{figure} \begin{center} \includegraphics[height=6.5cm]{fig3a.eps}\ \ \includegraphics[height=6.5cm]{fig3b.eps} \caption{Purely transverse {\it (left)} and longitudinal {\it (right)} proton structure functions $2xF_1$ and $F_L$ in the resonance region \cite{LIANG} (triangles), compared with earlier data from SLAC (squares). The curves are leading twist structure functions computed at NNLO from Alekhin (dashed) \cite{Ale03} and MRST \cite{MRST04} with (solid) and without (dotted) target mass corrections. The prominent resonance regions ($\Delta$, $S_{11}$, $F_{15}$) are indicated by the arrows along the abscissa.} \label{fig:F_L} \end{center} \end{figure} While the early duality studies considered only the $F_2$ structure function \cite{BG}, Jefferson Lab experiments have in addition revealed the presence of duality in other observables. For example, Fig.~\ref{fig:F_L} shows new LT-separated data from Jefferson Lab experiment E94-110 for the proton transverse ($F_1^p$) and longitudinal ($F_L^p$) structure functions in the nucleon resonance region \cite{LIANG}. LT-separated data from SLAC, predominantly in the DIS region, are also shown for comparison \cite{das94}. Where they refer to the same kinematic values, the Jefferson Lab and SLAC data are in excellent agreement, providing confidence in the achievement of the demanding precision required of this type of experiment. In all cases, the resonance and DIS data merge smoothly with one another in both $x$ and $Q^2$. 
The availability of leading twist PDF-based global fits \cite{CTEQ,MSTW,Alekhin,NNPDF,CTEQ6X} allows comparison of the resonance region data with leading twist structure functions at the same $x$ and $Q^2$. The resonance data on the $F_1$ and $F_L$ structure functions are also found to oscillate around the perturbative QCD (pQCD) curves, down to $Q^2$ as low as 0.7~GeV$^2$. Because most of the data lie at large values of $x$ and small $Q^2$, it is vital for tests of duality to account for the effects of kinematical target mass corrections \cite{GP}, which give large contributions as $x \to 1$ \cite{TMC}. This is clear from Fig.~\ref{fig:F_L}, where the data are compared with leading twist structure functions computed from PDFs to next-to-next-to-leading order (NNLO) accuracy from Alekhin \cite{Ale03} and MRST \cite{MRST04,MRST98}. The latter are shown with (solid) and without (dotted) target mass corrections, and clearly demonstrate the importance of subleading $1/Q^2$ effects at large $x$. In particular, TMCs give the additional strength at large $x$ observed in the data, which would be significantly underestimated by the leading twist functions without TMCs. The phenomenological results raise the question of how a scaling structure function can be built up entirely from resonances, each of whose contributions falls rapidly with $Q^2$ \cite{IJMV}. A number of studies using various models have demonstrated how sums over resonances can indeed yield a $Q^2$ independent function (see Ref.~\cite{MEK} for a review). The key observation is that while the contribution from each individual resonance diminishes with $Q^2$, with increasing energy new states become accessible whose contributions compensate in such a way as to maintain an approximately constant strength overall.
At a more microscopic level, the critical aspect of realizing the suppression of the higher twists is that at least one complete set of even and odd parity resonances must be summed over for duality to hold \cite{CI}. For an explicit demonstration of how this cancellation takes place in the SU(6) quark model and its extensions, see Refs.~\cite{CI,CM03,CM09}. \subsection{Structure function moments} \label{ssec:mom} The degree to which quark-hadron duality holds can be more precisely quantified by computing integrals of the structure functions over $x$ in the resonance region at fixed $Q^2$ values, $\int_{x_{\rm res}}^{x_{\rm th}}\; dx \; F_2(x,Q^2)$, where $x_{\rm th}$ corresponds to the pion production threshold at fixed $Q^2$, and $x_{\rm res} = Q^2/(W_{\rm res}^2 - M^2 + Q^2)$ indicates the $x$ value at the same $Q^2$ where the traditional delineation between the resonance and DIS regions at $W = W_{\rm res} \equiv 2$~GeV is made. These integrals can then be compared to the corresponding integrals of the structure functions fitted to the higher-$W$, deep-inelastic data, at the same $Q^2$ and over the same interval of $x$. The early phenomenological findings \cite{IOANA} suggested that the integrated strength of the resonance structure functions above $Q^2 \approx 1$~GeV$^2$ was indeed very similar to that in the deep-inelastic region, including in each of the individual prominent resonance regions. In this section we explore the duality between the resonance and deep-inelastic structure functions in the context of QCD moments. According to De~Rujula, Georgi and Politzer \cite{DGP}, one can formally relate the appearance of quark-hadron duality to the suppression of higher twist matrix elements in the QCD moments of the structure functions \cite{OPE}.
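The kinematics of these resonance-region integrals are simple to check numerically. The following Python sketch computes the $x$ limits from $Q^2$ and the $W$ boundaries and integrates over the resonance region; it is illustrative only, and the power-law `toy_F2` is an invented stand-in, not one of the structure function fits cited in the text.

```python
import numpy as np

M, MPI = 0.9383, 0.1396  # nucleon and pion masses (GeV)

def x_of_W(W, Q2):
    """Bjorken x corresponding to invariant mass W (GeV) at fixed Q2 (GeV^2)."""
    return Q2 / (W**2 - M**2 + Q2)

def resonance_integral(F2, Q2, W_res=2.0, n=2000):
    """Integral of F2(x,Q2) over the resonance region W_th <= W <= W_res,
    i.e. over x from x_res up to the pion-threshold value x_th."""
    x_res = x_of_W(W_res, Q2)    # lower x limit, at the W = 2 GeV boundary
    x_th = x_of_W(M + MPI, Q2)   # upper x limit, at the inelastic threshold
    x = np.linspace(x_res, x_th, n)
    return np.trapz(F2(x, Q2), x)

# toy valence-like shape, for illustration only (not one of the cited fits)
toy_F2 = lambda x, Q2: x**0.7 * (1.0 - x)**3
```

At $Q^2 = 1$~GeV$^2$ the threshold value `x_of_W(M + MPI, 1.0)` reproduces the $x_{\rm th} \simeq 0.78$ quoted later in the text.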
Namely, if certain moments of structure functions are observed to be independent of $Q^2$, as implied by duality, then from Eq.~(\ref{eq:MnOPE}) the moments must be dominated by the leading, $Q^2$ independent term, with the $1/Q^{\tau-2}$ higher twist terms suppressed. Duality is then synonymous with the suppression of higher twists, which in partonic language corresponds to the suppression, or cancellation, of interactions between the scattered quark and the spectator system such as those illustrated in Fig.~1~(b) and (c). Conversely, if the moments display power-law $Q^2$ dependence, then this implies violation of duality; moreover, if the violation is not overwhelming, the $Q^2$ dependence of the data can be used to extract information on the higher twist matrix elements \cite{JiUnrau}. \begin{figure}[hb] \includegraphics[width=10cm]{fig4.eps} \begin{centering} \caption{Total $n=2$ moments of the proton $F_2^p$ {\it (top)}, $F_1^p$ {\it (center)} and $F_L^p$ {\it (bottom)} structure functions determined from global fits to existing DIS data and Jefferson Lab resonance region data \cite{Chr04}, compared with moments computed from the leading twist PDFs from MRST at NNLO \cite{MRST04}.} \label{fig:f2ht} \end{centering} \end{figure} The first determination \cite{Arm01} of the $F_2^p$ moments from Jefferson Lab data was made utilizing structure functions measured in Hall~C \cite{IOANA}, while a later evaluation \cite{Osi03} included the large body of additional data from CLAS. More recently, the extraction of $F_2^p$ has been further enhanced by LT-separated data from Hall~C, shown in Fig.~\ref{fig:f2} along with the fit \cite{ResFit} to the LT separated cross sections. The $n=2$ moments for the proton $F_2^p$, $F_1^p$, and $F_L^p$ structure functions are shown in Fig.~\ref{fig:f2ht} versus $Q^2$, as determined from integrating this fit. For $F_2^p$ they are found to be in very good agreement with the earlier measurements. 
Also shown is the leading-twist contribution calculated from the MRST parameterization \cite{MRST04}, corrected for target mass effects \cite{GP}. One of the most striking features of the results in Fig.~\ref{fig:f2ht} is that the elastic-subtracted moments exhibit the same $Q^2$ dependence as the PDF fits down to $Q^2 \approx 1$~GeV$^2$. Even with the elastic contribution included, which vanishes in the Bjorken limit and is hence pure higher twist, there is excellent agreement between the resonance and DIS data for $Q^2 \gtorder 2$~GeV$^2$. Until very recently \cite{Alekhin,CTEQ6X}, this fact was not widely appreciated or utilized in global PDF fitting efforts \cite{CTEQ,MSTW}, which typically impose cuts on data of $Q^2 \gtorder 4$~GeV$^2$ and $W^2 \gtorder 12$~GeV$^2$. While the OPE provides a systematic approach to identifying and classifying higher twists, it does not reveal {\it why} these are small or {\it how} duality is realized globally and locally. To further explore the {\it local} aspects of duality within a perturbative QCD context, a ground-breaking new approach using ``truncated'' moments of structure functions, developed recently by Forte {\it et al.} \cite{Forte} and extended by Kotlorz \& Kotlorz \cite{Kotlorz}, was applied by Psaker {\it et al.} \cite{Ales} to Jefferson Lab data. The virtue of truncated moments is that they obey a set of $Q^2$ evolution equations similar to those for the PDFs themselves, which therefore enables a rigorous connection to be made between local duality and QCD. The approach allows one to quantify the higher twist content of various resonance regions, and determine the degree to which individual resonances are dominated by the leading twist components.
Defining the $n$-th truncated moment ${\cal M}_n$ of a PDF $q(x,Q^2)$ between $x_{\rm min}$ and $x_{\rm max}$ as \begin{eqnarray} {\cal M}_n(x_{\rm min},x_{\rm max},Q^2) &=& \int_{x_{\rm min}}^{x_{\rm max}} dx\; x^{n-1}\ q(x,Q^2)\ , \label{eq:trunc_def} \end{eqnarray} the evolution equations for the truncated moments can be written as \begin{eqnarray} \frac{d{\cal M}_n}{dt} & = & \frac{\alpha_s(Q^2)}{2\pi} \left( P'_n \otimes {\cal M}_n \right)\ ,\ \ \ \ t = \ln\left(Q^2/\Lambda_{\rm QCD}^2\right)\, . \label{eq:trunc_evol} \end{eqnarray} The symbol $\otimes$ denotes the Mellin convolution of the truncated moment and the ``splitting function'' $P'_n$, which is related to the usual DGLAP evolution splitting function $P$ \cite{DGLAP} by $P'_n(z,\alpha_s(Q^2)) = z^n\ P(z,\alpha_s(Q^2))$. The extent to which nucleon structure function data in specific regions in $x$ (or $W$) are dominated by leading twist can be determined by constructing empirical truncated moments and evolving them to different $Q^2$. Deviations of the evolved moments from the experimental data at the new $Q^2$ then reveal any higher twist contributions in the original data. A next-to-leading order (NLO) analysis \cite{Ales} of data on the proton $F_2^p$ structure function from Jefferson Lab covering a range in $Q^2$ from 1~GeV$^2$ to $\approx$~6~GeV$^2$ revealed intriguing behavior of the higher twists for different nucleon resonance regions. Assuming that $F_2^p$ data beyond a large enough $Q^2$ (taken to be $Q^2 = Q_0^2 = 25$~GeV$^2$ in Ref.~\cite{Ales}) are dominated by leading twist, the truncated moments were computed at $Q_0^2$ and evolved to lower $Q^2$. Note that the truncated moments are computed over the range $W_{\rm th} \leq W \leq W_{\rm max}$, where $W_{\rm th} = M + m_\pi$ is the inelastic threshold. At $Q^2 = 1$~GeV$^2$ this corresponds to the integration range $x_{\rm min} \leq x \leq x_{\rm th}$, where $x_{\rm th} = \left[ 1 + m_\pi (m_\pi+2M)/Q^2\right]^{-1} \simeq 0.78$.
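Equation~(\ref{eq:trunc_def}) is straightforward to evaluate numerically. The Python sketch below uses a toy valence-like input whose shape and normalization are invented for illustration (not taken from any fit); opening the truncation limits to $(0,1)$ recovers the ordinary Mellin moment, which for the chosen toy form is the Beta function $B(n+1/2,4)$.

```python
import numpy as np

def truncated_moment(q, n, x_min, x_max, Q2, npts=4000):
    """Truncated moment M_n(x_min, x_max, Q2): the integral of
    x**(n-1) * q(x, Q2) over [x_min, x_max], by the trapezoid rule."""
    x = np.linspace(x_min, x_max, npts)
    return np.trapz(x**(n - 1) * q(x, Q2), x)

# toy valence-like input; shape and normalization invented for illustration
q_toy = lambda x, Q2: x**0.5 * (1.0 - x)**3

M2_full = truncated_moment(q_toy, 2, 1e-6, 1.0, 1.0)  # ~ B(2.5, 4) = 0.0277
M2_res = truncated_moment(q_toy, 2, 0.40, 0.78, 1.0)  # a "resonance region" slice
```

In an analysis along the lines of Ref.~\cite{Ales}, such truncated moments of the data would be compared with the same truncation of an evolved leading twist input, the difference quantifying the higher twist content of that $W$ interval.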
\begin{figure}[ht] \begin{center} \includegraphics[scale=0.3]{fig5a.eps}\hspace*{0.5cm} \includegraphics[scale=0.3]{fig5b.eps} \end{center} \caption{{\bf (a)} Ratio of the ${\cal M}_2$ truncated moments of the data to the leading twist + TMC (solid), and data to leading twist without TMC (dashed) at $Q^2 = 1$~GeV$^2$, as a function of $W_{\rm max}$. {\bf (b)} $Q^2$ dependence of the fractional higher twist (HT) contribution to the $n=2$ truncated moment data, for various intervals in $W$.} \label{fig:trunc} \end{figure} The ratio of the truncated moments of the data to the leading twist is shown in Fig.~\ref{fig:trunc}(a) as a function of $W_{\rm max}$ at $Q^2 = 1$~GeV$^2$, with and without target mass corrections. Including the effects of TMCs, the leading twist moment differs from the data by $\sim 15\%$ for $W_{\rm max} > 1.5$~GeV. To quantify the higher twist content of the specific resonance regions, and at different values of $Q^2$, several intervals in $W$ are considered: $W_{\rm th}^2 \leq W^2 \leq$~1.9~GeV$^2$ ($\Delta(1232)$ or first resonance region); $1.9 \leq W^2 \leq 2.5$~GeV$^2$ ($S_{11}(1535)$ or second resonance region); and $2.5 \leq W^2 \leq 3.1$~GeV$^2$ ($F_{15}(1680)$ or third resonance region). The higher twist contributions to ${\cal M}_2$ in these regions are shown in Fig.~\ref{fig:trunc}(b) as ratios to moments of the data. The results indicate deviations from leading twist behavior of the entire resonance region data (filled circles in Fig.~\ref{fig:trunc}(b)) at the level of $\ltorder 15\%$ for all values of $Q^2$ considered, with significant $Q^2$ dependence for $Q^2 \ltorder 4$~GeV$^2$. The strong $Q^2$ dependence of the higher twists is evident here in the change of sign around $Q^2 = 2$~GeV$^2$, with the higher twists going from $\approx -10\%$ at $Q^2 = 1$~GeV$^2$ to $\approx 10$--15\% for $Q^2 \sim 3-6$~GeV$^2$. 
At larger $Q^2$ the higher twists are naturally expected to decrease, once the leading twist component of the moments begins to dominate. Interestingly, the magnitude of the higher twist contributions in the $\Delta$ region (diamonds) is smallest, decreasing from $\approx -15\%$ of the data at $Q^2=1$~GeV$^2$ to values consistent with zero at larger $Q^2$. The higher twists are largest, on the other hand, for the $S_{11}$ region (squares), where they vary between $\approx -15\%$ of the data at $Q^2=1$~GeV$^2$ and 20--25\% at $Q^2 \sim 5$~GeV$^2$. Combined, the higher twist contribution from the first two resonance regions (dotted curve) is $\ltorder 15\%$ in magnitude for all $Q^2$. The rather dramatic difference between the $\Delta$ and the $S_{11}$ may, at least in part, be due to the choice of the delineation point of $W^2 = 1.9$~GeV$^2$. A lower $W^2$ choice, for instance, would lower the higher twist content of the $S_{11}$ at large $Q^2$, while raising that of the $\Delta$. However, this $W^2$ choice corresponds to the local minimum between these two resonances in the inclusive spectra, and is the one most widely utilized. The higher twist content of the $F_{15}$ region (open circles) is similar to the $S_{11}$ at low $Q^2$, but decreases more rapidly for $Q^2 > 3$~GeV$^2$. The higher twist content of the first three resonance regions combined (dashed curve) is $\ltorder 15$--20\% in magnitude for $Q^2 \leq 6$~GeV$^2$. Integrating up to $W_{\rm max}^2 = 4$~GeV$^2$ (filled circles), the data on the $n=2$ truncated moment are found to be leading twist dominated at the level of 85--90\% over the entire $Q^2$ range. The results in Fig.~\ref{fig:trunc} can be compared with quark model expectations \cite{CI,CM03}, which predict systematic deviations of resonance data from local duality.
Assuming dominance of magnetic coupling, the proton data are expected to overestimate the DIS function in the second and third resonance regions due to the relative strengths of couplings to odd-parity resonances; the positive higher twists observed in Fig.~\ref{fig:trunc}(b) for $Q^2 \gtorder 2$~GeV$^2$ indeed support these predictions. \section{Deuteron $F_2$ measurements} \label{sec:F2d} Together with hydrogen, inclusive lepton scattering from deuterium targets has provided an extensive data base of $F_2$ structure function measurements over a large range of kinematics. While a significant quantity of $F_2^d$ data was collected from experiments at CERN and SLAC, the quasi-elastic and nucleon resonance regions, especially at low and moderate $Q^2$ ($\approx$ few GeV$^2$), were only mapped precisely with the advent of Jefferson Lab data \cite{IOANA,Osi06}. Similarly, Jefferson Lab contributed precision $F_2^d$ data in the region of $x > 1$ \cite{Arr01}, of relevance for constructing higher moments of deuteron structure functions. Experiments in Hall~C \cite{IOANA,Arr01} have provided $F_2^d$ data in select regions of $x$ and $Q^2$, while inclusive cross section measurements in CLAS have covered a continuous two-dimensional region over the entire resonance region up to $Q^2 = 6$~GeV$^2$. This combination is rather useful for determining moments of $F_2$. In such extractions one usually assumes that the ratio $R$ for the deuteron is similar to that for the proton at scales $Q^2$ of a few GeV$^2$ \cite{Tva07}. New measurements which will test this assumption will be reviewed in Sec.~\ref{sec:FL}. \begin{table}[ht] \caption{Lowest two moments (for $n=2$ and 4) of the isovector $F_2$ structure function.
Experimental results for $Q^2 \approx 4$~GeV$^2$ are compared with lattice calculations extrapolated to the chiral limit.\\} \begin{center} \begin{tabular}{|c|c|c|c|} \hline \hline $n$ & Niculescu {\it et al.} \cite{NAEK06} & Osipenko {\it et al.} \cite{OMSKR06} & Detmold {\it et al.} \cite{DMT02} \\ & (Hall C) & (Hall B) & (lattice) \\ \hline 2 & 0.049(17) & 0.050(9) & 0.059(8) \\ 4 & 0.015(3) & 0.0094(16) & 0.008(3) \\ \hline \hline \end{tabular} \label{tab:momcomp} \end{center} \end{table} The results of the moment analyses are shown in Table~\ref{tab:momcomp}, expressed as the isovector (proton minus neutron) combination, and compared with isovector moments from lattice QCD \cite{DMT02,Det01}. For simplicity the neutron moments here are defined as the difference between the deuteron and proton moments --- see, however, Sec.~\ref{ssec:F2n_extract} below. The $n=2$ moments from the Hall~B \cite{OMSKR06} and Hall~C \cite{NAEK06} analyses agree well with each other, and with the lattice extraction, which includes the effects of pion loops and the intermediate $\Delta$(1232) resonance in the chiral extrapolation. For the $n=4$ moment the comparison between the Hall~B and Hall~C results shows a slight discrepancy, which may be reduced once precision high-$x$ data at $Q^2 \sim 4$~GeV$^2$ are included in the extractions. Analysis of the $Q^2$-dependence of the deuteron $F_2$ moments in Ref.~\cite{Osi06} suggests a partial cancellation of different higher twist contributions entering in the OPE with different signs, which is one of the manifestations of quark-hadron duality. The slow variation with $Q^2$ of the structure function moments, down to $Q^2 \approx 1$~GeV$^2$, was also found in the analysis of proton data \cite{Arm01}, where such cancellations were found to be mainly driven by the elastic contribution.
Furthermore, by comparing the proton and (nuclear corrected) neutron structure function moments, the higher twist contributions were found to be essentially isospin independent \cite{Osi07}. This suggests the possible dominance of $ud$ correlations over $uu$ and $dd$ in the nucleon, and implies higher twist corrections that are consistent with zero in the isovector $F_2$ structure function. More recently, high precision deuteron cross sections in the resonance region have been measured in Hall~C \cite{e02109,e06009} with the aim of providing LT separated deuteron structure functions of comparable precision and kinematic coverage to those performed for the proton. These higher precision LT separated data will allow for a significant further reduction in the uncertainties in the current nonsinglet $F_2$ extractions. \section{Neutron $F_2$ structure function} \label{sec:F2n} A complete understanding of the valence quark structure of the nucleon requires knowledge of both its $u$ and $d$ quark distributions. While the $u$ distribution is relatively well constrained by measurements of the proton $F_2^p$ structure function, in contrast the $d$ quark distribution is poorly determined due to the lack of comparable data on the neutron structure function $F_2^n$. The absence of free neutron targets makes it necessary to use light nuclei such as deuterium as effective neutron targets, and one must therefore deal with the problem of extracting neutron information from nuclear data. \subsection{Neutron structure from inclusive $F_2$ data} \label{ssec:F2n_extract} In standard global PDF analyses, sensitivity to the $d$-quark from charged lepton scattering is primarily provided by the neutron in the deuteron. Usually the neutron $F_2^n$ structure function is extracted by subtracting the deuteron and proton structure function data assuming that nuclear corrections are negligible. 
At large $x$, however, the ratio of the deuteron to free nucleon structure functions is predicted to deviate significantly from unity \cite{MST,KPW,KP06,KMK}, which can have significant impact on the behavior of the extracted neutron structure function at large $x$ \cite{CTEQ6X,MT}. Even when nuclear effects are considered, there exist practical difficulties with extracting information on the free neutron from nuclear data, especially in the nucleon resonance region, where resonance structure is largely smeared out by nucleon Fermi motion. A recent analysis \cite{MKMK} used a new method \cite{KMK} to extract $F_2^n$ from $F_2^d$ and $F_2^p$ data, in which nuclear effects are parameterized via an additive correction to the free nucleon structure functions, in contrast to the more common multiplicative method \cite{BODEK} which fails for functions with zeros or with non-smooth data. In the standard impulse approximation approach to nuclear structure functions, the deuteron structure function can be written as a convolution \cite{KP06,KMK} \begin{eqnarray} F_2^d(x,Q^2) &=& \sum_{N=p,n} \int dy\ f_{N/d}(y,\rho)\ F_2^N(x/y,Q^2)\, , \label{eq:conv_def} \end{eqnarray} where $f_{N/d}$ is the light-cone momentum distribution of nucleons in the deuteron (or ``smearing function''), and is a function of the momentum fraction $y$ of the deuteron carried by the struck nucleon, and of the virtual photon ``velocity'' $\rho$ (see Eq.~(\ref{eq:xi})). The smearing function encodes the effects of the deuteron wave function, accounting for nuclear Fermi motion and binding effects, as well as kinematical finite-$Q^2$ corrections. Although not well constrained, nucleon off-shell effects have also been studied \cite{MST,KPW,KP06}; their influence appears to be small compared with the errors on the existing data, except at very large $x$. 
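Equation~(\ref{eq:conv_def}) reduces, at fixed $Q^2$, to a one-dimensional integral that can be prototyped in a few lines. In the Python sketch below, a narrow Gaussian stands in for a realistic smearing function $f_{N/d}$ (the $\rho$ dependence and off-shell corrections are ignored), and the nucleon inputs are invented shapes; the point is only the mechanics of the convolution.

```python
import numpy as np

y = np.linspace(0.7, 1.3, 601)                 # light-cone momentum fraction grid
f_y = np.exp(-0.5 * ((y - 1.0) / 0.04) ** 2)   # crude Gaussian stand-in for f_{N/d}
f_y /= np.trapz(f_y, y)                        # normalize to unit baryon number

def smear(F2N, x):
    """One term of the convolution: int dy f(y) F2_N(x/y), with support x/y < 1."""
    xy = x / y
    vals = np.where(xy < 1.0, F2N(np.clip(xy, 0.0, 1.0)), 0.0)
    return np.trapz(f_y * vals, y)

# toy nucleon inputs (illustrative shapes, not fits to data)
F2p = lambda x: x**0.7 * (1.0 - x)**3
F2n = lambda x: 0.6 * x**0.7 * (1.0 - x)**3

F2d = lambda x: smear(F2p, x) + smear(F2n, x)  # per-deuteron sum over N = p, n
```

Because the toy $f(y)$ is narrow and unit normalized, the smeared $F_2^d$ at moderate $x$ stays within a few percent of the unsmeared $F_2^p + F_2^n$, while any structure sharper than the width of $f(y)$, such as resonance peaks, is washed out.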
\begin{figure}[ht] \begin{center} \vspace*{1cm} \includegraphics[width=7.5cm,height=5cm]{fig6a.eps}\hspace*{0.5cm} \includegraphics[width=7.5cm,height=5cm]{fig6b.eps} \end{center} \caption{{\it (Left)} Neutron $F_2^n$ structure function extracted from inclusive deuteron and proton data at $Q^2=1.7$~GeV$^2$ \cite{MKMK}, together with the reconstructed $F_2^d$. {\it (Right)} Neutron structure function extracted from the BoNuS experiment \cite{BONUS}, compared with the Bosted/Christy model of the neutron $F_2^n$ \cite{ResFitd} and the corresponding Christy/Bosted parametrization for $F_2^p$ \cite{ResFit}.} \label{fig:F2n} \end{figure} In Fig.~\ref{fig:F2n} we illustrate the results of a typical extraction of $F_2^n$ from Jefferson Lab $F_2^d$ and $F_2^p$ data at $Q^2 = 1.7$~GeV$^2$. The proton data show clear resonant structure at large $x$, which is mostly washed out in the deuteron data. The resulting neutron $F_2^n$ is shown after two iterations of the procedure with an initial guess of $F_2^n=F_2^p$. Clear neutron resonance structure is visible in the first ($\Delta$) and second resonance regions, at $x \sim 0.75$ and 0.55, respectively, with some structure visible also in the third resonance region at $x \sim 0.45$, albeit with larger errors. The deuteron $F_2^d$ reconstructed from the proton and extracted neutron data via Eq.~(\ref{eq:conv_def}) indicates the relative accuracy and self-consistency of the extracted $F_2^n$. Of course it is not possible to avoid nuclear model dependence in the inversion procedure, and some differences in the extracted $F_2^n$ will arise using different models for the smearing functions $f_{N/d}$. To remove, or at least minimize, the model dependence in the extracted free neutron structure, several methods have been proposed, such as utilizing inclusive DIS from $A=3$ mirror nuclei \cite{Afnan,Salme,MARATHON}, and semi-inclusive DIS from a deuteron with spectator tagging \cite{BONUS,MSS}. 
In addition, experiments involving weak interaction probes \cite{PVDIS,SOLID,HM} can provide information on the flavor separated valence quark distributions directly. In the next section we discuss in detail one of these methods for determining the free neutron structure, namely the BoNuS experiment at Jefferson Lab \cite{BONUS}. \subsection{Tagged neutron structure functions} \label{ssec:F2n_bonus} To overcome the absence of free neutron targets, the Hall~B BoNuS (Barely Off-shell NeUtron Structure) experiment has measured inclusive electron scattering on an almost free neutron using the CLAS spectrometer and a recoil detector to tag low momentum protons. The protons are tracked in a novel radial time projection chamber \cite{bonus-nim} utilizing gas electron multiplier foils to amplify the proton ionization in a cylindrical drift region filled with a mixture of helium and dimethyl ether, with the proton momentum determined from the track curvature in a solenoidal magnetic field. Slow backward-moving spectator protons are tagged with momenta as low as 70~MeV in coincidence with the scattered electron in the reaction $D(e,e' p_s)X$. This technique ensures that the electron scattering took place on an almost free neutron, with its initial four-momentum inferred from the observed spectator proton. Cutting on spectator protons with momenta between 70 and 120~MeV and laboratory angles greater than 120~degrees limits the contributions from final state interactions and off-shell effects to less than several percent of the extracted neutron cross section. While inclusive scattering from deuterium results in resonances which are significantly broadened (often to the point of being unobservable), determination of the initial neutron momentum allows for a dramatic reduction in the Fermi smearing effects and results in reconstructed resonance widths comparable to the inclusive proton measurements.
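The tagging logic can be made concrete with a short kinematics calculation. The Python sketch below is an illustrative impulse-approximation estimate, not the BoNuS analysis: the struck neutron four-momentum is inferred from the tagged spectator as $p_n = p_D - p_s$ for a deuteron at rest, and the invariant mass $W^*$ of the hadronic final state follows from $p_n$ and the virtual photon $(q_0, \vec{q})$.

```python
import numpy as np

MD, MP = 1.8756, 0.9383  # deuteron and proton masses (GeV)

def w_star(ps, q0, qv):
    """Invariant mass W* of the state produced on the struck neutron, whose
    four-momentum is inferred from the tagged spectator proton with the
    deuteron at rest: p_n = p_D - p_s (spectator taken on shell)."""
    ps, qv = np.asarray(ps, float), np.asarray(qv, float)
    Es = np.sqrt(MP**2 + ps @ ps)
    En, pn = MD - Es, -ps          # off-shell neutron energy and 3-momentum
    E, p = En + q0, pn + qv
    return np.sqrt(E**2 - p @ p)

# a backward 70 MeV spectator tag with a modest virtual photon along +z
Wst = w_star([0.0, 0.0, -0.070], 1.0, [0.0, 0.0, 1.2])
```

With zero spectator momentum and no photon, $W^*$ reduces to $M_D - M_p$, the slightly off-shell rest energy of the bound neutron.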
In addition, the large CLAS acceptance for the scattered electron allowed for the tagged neutron cross section to be measured over a significant kinematic range in both $W^2$ and $Q^2$ at beam energies of 4.2~GeV and 5.3~GeV. BoNuS $F_2^n$ data extracted at a beam energy of 5.3~GeV and $Q^2 = 1.7~\rm GeV^2$ are shown in Fig.~\ref{fig:F2n} (right panel) for $x$ values from pion production threshold through the resonance region and into the DIS regime. This will allow for the first time an unambiguous study of the inclusive neutron resonance structure functions. An extension of BoNuS has been approved \cite{BONUS12} to run at a beam energy of 11~GeV after the energy upgrade of the CEBAF accelerator. The kinematic coverage will allow the extraction of the ratio of neutron to proton structure functions $F_2^n/F_2^p$ to $x$ as large as 0.8, and the corresponding large-$x$ $d$ to $u$ parton distribution ratio. Other quantities which BoNuS will make it possible to measure include the elastic neutron form factor, quark-hadron duality on the neutron, semi-inclusive DIS and resonance production channels, hard exclusive reactions such as deeply virtual Compton scattering or deeply virtual meson production from the neutron, as well as potentially the inclusive structure function of a virtual pion. \section{Longitudinal structure function $F_L$} \label{sec:FL} The unpolarized inclusive proton cross section contains two independent structure functions. While $F_2^p$ has been measured to high precision over many orders of magnitude in both $x$ and $Q^2$, measurements of the longitudinal structure function $F_L^p$ (and the ratio $R$) have been significantly more limited in both precision and kinematic coverage. This is due in part to the challenges inherent in performing LT separations, which typically require point-to-point systematic uncertainties in $\epsilon$ to be smaller than 2\% to obtain uncertainties on $F_L$ of less than 20\%.
While the inclusive cross section is proportional to $2xF_1 + \epsilon F_L$, the $F_2$ structure function $\sim 2xF_1 + F_L$, so that $F_2$ is only proportional to the cross section for $\epsilon=1$. At high $Q^2$ the scattering of longitudinal photons from spin-1/2 quarks is suppressed, and in the parton model one expects $F_L$ (and $R$) to vanish as $Q^2 \to \infty$. At low $Q^2$, however, $F_L$ is no longer suppressed, and could be sizable, especially in the resonance region and at large $x$. On the other hand, $F_L$ is dominated by the gluon contribution at small $x$, where new measurements from HERA \cite{herafl} have shown that it continues to rise. In the kinematic range of Jefferson Lab $F_L$ has been found to be typically 20\% of the magnitude of $F_2$, which is consistent with earlier SLAC measurements where the kinematic regions overlap. An extensive program of LT separations has been carried out in Hall~C, including measurements of the longitudinal strength in the resonance region for $0.3 < Q^2 < 4.5$~$\rm GeV^2$ for both proton \cite{LIANG} and deuteron \cite{e02109,e06009} targets. The Jefferson Lab experiments listed in Table~\ref{tab:jlab_exps} for which LT separations have been performed for the proton or deuteron are E94-110, E99-118, E00-002, E02-109, and E06-009. The Jefferson Lab data complement well the previous results at smaller $x$ from SLAC and NMC in this $Q^2$ region, and improve dramatically on the few measurements that existed below $Q^2 = 8$~GeV$^2$ in this $x$ region, which had typical errors on $R$ and $F_L$ of 100\% or more --- see Fig.~\ref{fig:rd-rp} (left). The recent precision LT separated measurements of proton cross sections \cite{LIANG} have allowed for the first time detailed duality studies in all of the unpolarized structure functions and their moments. The results of the proton separated structure functions in the resonance region were presented in Fig.~\ref{fig:F_L} in Sec.~\ref{sec:F2p}.
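The $\epsilon$ dependence just described is the basis of the Rosenbluth-type separation: at fixed $(x, Q^2)$ the reduced cross section is a straight line in $\epsilon$, whose intercept and slope give the transverse ($2xF_1$) and longitudinal ($F_L$) pieces. The Python sketch below uses invented numbers purely to illustrate the fit; a real analysis must also propagate the point-to-point systematic uncertainties through this step.

```python
import numpy as np

def lt_separate(eps, sigma_r):
    """Straight-line fit sigma_r = 2xF1 + eps * F_L; returns (2xF1, F_L)."""
    slope, intercept = np.polyfit(eps, sigma_r, 1)
    return intercept, slope

# synthetic pseudo-data at three epsilon points (invented numbers,
# with F_L roughly 20% of F_2 as quoted in the text)
two_xF1_true, FL_true = 0.30, 0.06
eps = np.array([0.2, 0.5, 0.9])
sigma_r = two_xF1_true + eps * FL_true

two_xF1, FL = lt_separate(eps, sigma_r)
```

Because $F_L$ enters only through the slope, a small point-to-point error in the cross sections across a limited $\epsilon$ range translates into a much larger fractional error on $F_L$, which is why the separations demand percent-level systematics.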
Although significant resonant strength is observed in $F_L$ (or $R$), evidence of duality is nonetheless observed in this structure function, along with $F_2$ and $F_1$. In addition to the proton data, Jefferson Lab experiment E02-109 measured the LT separated $F_2$ and $F_L$ structure functions of the deuteron, in the same $W^2$ and $Q^2$ ranges, and with the same high precision as E94-110 did for the proton. This will allow quantitative studies of duality in both the longitudinal and transverse channels for the deuteron. If duality holds well for both the proton and neutron separately, it will hold to even better accuracy for the deuteron since the Fermi motion effects intrinsically perform some of the averaging over the resonances. However, if duality does not hold for the LT separated neutron structure functions, this should be observable in the deuteron data, and will thus provide a critical test for models of duality. In addition to the resonance region, measurements of the inclusive longitudinal proton and deuteron structure have also been performed at lower $x$ (higher $W^2$) and lower $Q^2$. While the longitudinal strength is significant at $Q^2$ of several GeV$^2$, the proton $F_L$ structure function is constrained by current conservation to behave, for fixed $W$, as $F_L \sim Q^4$ for $Q^2 \to 0$. However, even with the new Jefferson Lab data, which extend down to $Q^2 = 0.15$~GeV$^2$ \cite{TvaskisLT}, the $Q^2$ at which this behavior sets in has not yet been observed. Another interesting test provided by the E99-118 data is whether the relative longitudinal contribution to the cross section embodied in $R$ is different in the deuteron and proton at these low $Q^2$ values. While the higher $Q^2$ data from SLAC and NMC exhibit no significant difference in the deuteron and proton $R$, the Jefferson Lab results shown in Fig.~\ref{fig:rd-rp} (right) suggest a possible suppression of $R$ in the deuteron relative to the proton for $Q^2 < 1$~GeV$^2$. 
Although this suppression is consistent with the two lowest $Q^2$ data points from SLAC, the uncertainties are dominated by systematic errors and the combined significance of the effect is still less than 2~$\sigma$. Conclusive experimental evidence for the possible suppression of $R$ in deuterium at low $Q^2$ will likely be provided when the analysis of additional data from E00-002 is finalized in the very near future. \begin{figure}[ht] \begin{center} \includegraphics[width=7cm,height=7.5cm]{fig7a.eps} \includegraphics[width=8.5cm,height=8.5cm]{fig7b.eps} \end{center} \caption{{\it (Left)} Sample LT separations from Jefferson Lab experiment E94-110 \cite{LIANG}. {\it (Right)} Difference between the ratios $R$ in deuterium and hydrogen versus $Q^2$, from E99-118 \cite{TvaskisLT}, compared with previous NMC and SLAC measurements.} \label{fig:rd-rp} \end{figure} \section{Semi-inclusive deep inelastic scattering} \label{sec:other} In addition to the traditional $F_2$ and $F_L$ structure function observables, a number of other processes have been studied at Jefferson Lab over the past decade, with potentially important consequences for our understanding of the workings of QCD at low energy. In this section we focus on semi-inclusive pion electroproduction. An analysis of the semi-inclusive process $e\, N \to e\, h\, X$, where a hadron $h$ is detected in the final state in coincidence with the scattered electron, has recently been made using data from Hall~C in the resonance--scaling transition region \cite{Navasardyan,Mkrtchyan}. One of the main motivations for studying semi-inclusive meson production is the promise of flavor separation via tagging of specific mesons in the final state. In the valence quark region a produced $\pi^+$ ($\pi^-$) meson, for example, primarily results from scattering off a $u$ ($d$) quark in the proton.
The semi-inclusive cross section at LO in $\alpha_s$ is given by a simple product of quark distribution and quark $\to$ hadron fragmentation functions, \begin{eqnarray} { d\sigma \over dx dz } &\sim&\ \sum_q e_q^2\, q(x)\, D_q^h(z)\ \equiv\ {\cal N}_N^h(x,z)\, . \label{eq:semi-parton} \end{eqnarray} Here the fragmentation function $D_q^h(z)$ gives the probability for a quark $q$ to fragment to a hadron $h$ with a fraction $z = p_h \cdot p / q \cdot p = E_h / \nu$ of the quark's (or virtual photon's) laboratory frame energy. Although at LO the scattering and particle production mechanisms are independent, higher order pQCD corrections give rise to non-factorizable terms, which involve convolutions of the PDFs and fragmentation functions with hard coefficient functions \cite{AEMP}. For hadrons produced collinearly with the virtual photon, the invariant mass $W^\prime$ of the undetected hadronic system $X$ at large $Q^2$ can be written \cite{ACW} $W^{\prime 2} \approx M^2 + Q^2(1-z)(1-x)/x$, where the hadron mass is neglected with respect to $Q^2$. In the elastic limit, $z \to 1$, the hadron carries all of the photon's energy (with $W^\prime \to M$), so $z$ is also referred to as the ``elasticity''. While formally the LO factorized expression for the cross section (\ref{eq:semi-parton}) may be valid at large $Q^2$, at finite $Q^2$ there are important corrections arising from the finite masses of the target and produced hadron. One can show, however, that the LO factorization holds even at finite $Q^2$, provided the parton distribution and fragmentation functions are expressed in terms of generalized scaling variables \cite{AHM}, $q(x) D_q^h(z) \to q(\xi_h) D_q^h(\zeta_h)$, where $\zeta_h = (z_h \xi / 2 x) (1 + \sqrt{1 - 4 x^2 M^2 m_{h\perp}^2/z^2 Q^4})$ and $\xi_h = \xi (1 + m_h^2/\zeta_h Q^2)$, with $m_{h\perp}^2 = m_h^2 + p_{h\perp}^2$. 
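The generalized variables above are straightforward to evaluate numerically. In the sketch below (our illustration, not code from Ref.~\cite{AHM}) $\xi$ is taken to be the Nachtmann variable $\xi = 2x/(1+\sqrt{1+4x^2M^2/Q^2})$, which is an assumption about the notation here; the massless limit recovers $(\xi_h,\zeta_h) \to (x,z)$:

```python
import math

def nachtmann_xi(x, q2, M):
    # xi = 2x / (1 + sqrt(1 + 4 x^2 M^2 / Q^2))
    return 2.0 * x / (1.0 + math.sqrt(1.0 + 4.0 * x * x * M * M / q2))

def scaling_vars(x, z, q2, mh, pT, M=0.938):
    """Generalized variables (xi_h, zeta_h) for semi-inclusive production
    of a hadron of mass mh and transverse momentum pT (valid while the
    square-root argument stays positive)."""
    xi = nachtmann_xi(x, q2, M)
    mhT2 = mh * mh + pT * pT  # m_{h,perp}^2
    zeta = (z * xi / (2.0 * x)) * (
        1.0 + math.sqrt(1.0 - 4.0 * x * x * M * M * mhT2 / (z * z * q2 * q2)))
    xih = xi * (1.0 + mh * mh / (zeta * q2))
    return xih, zeta

# massless limit: (xi_h, zeta_h) -> (x, z)
print(scaling_vars(0.32, 0.55, 2.3, mh=0.0, pT=0.0, M=0.0))
# pion at typical Hall C kinematics: shifts at the percent level
print(scaling_vars(0.32, 0.55, 2.3, mh=0.1396, pT=0.0))
```

At pion-production kinematics typical of the Hall~C data ($x = 0.32$, $Q^2 = 2.3$~GeV$^2$), both shifts are at the few-percent level, with $\zeta_h < z$ and $\xi_h < x$.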
Not surprisingly, these effects become large at large $x$ and $z$ when $Q^2$ is small; however, for heavier produced hadrons such as kaons or protons, significant effects can also arise at small values of $z$ \cite{AHM}. The validity of the factorized hypothesis in Eq.~(\ref{eq:semi-parton}) relies on the existence of a sufficiently large gap in rapidity $\eta = \ln\left[ (E_h-p_h^z)/(E_h+p_h^z) \right] / 2$ to allow a clean separation of the current fragmentation region (hadrons produced from the struck quark) from the target fragmentation region (hadrons produced from the spectator quark system). At high energies a gap of $\Delta\eta \approx 2$, which is typically required for a clean separation \cite{Berger}, can be achieved over a large range of $z$; at low energies, however, this can only be reached at larger values of $z$. On the other hand, at fixed $x$ and $Q^2$ the large-$z$ region corresponds to resonance dominance of the undetected hadronic system $X$ (corresponding to small $W^\prime$), so that the factorized description in terms of partonic distributions must eventually break down. It is vital therefore to establish empirically the limits beyond which the simple $x$ and $z$ factorization of Eq.~(\ref{eq:semi-parton}) is no longer valid. It is intriguing in particular to observe whether $W^\prime$ can play a role analogous to $W$ for duality in inclusive scattering, when the undetected hadronic system $X$ is dominated by resonances $W^\prime \ltorder 2$~GeV. In terms of hadronic variables the fragmentation process can be described through the excitation of nucleon resonances, $N^*$, and their subsequent decays into pions (or other mesons) and lower-lying resonances, $N'^*$. 
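The boundary of the resonance-dominated regime quoted above ($W^\prime \lesssim 2$~GeV) can be mapped out directly from the $W^{\prime 2}$ relation. A numerical sketch (ours; hadron mass neglected, as in the approximation quoted above):

```python
M = 0.938  # nucleon mass (GeV)

def wprime2(x, z, q2):
    # W'^2 ~ M^2 + Q^2 (1 - z)(1 - x)/x   (hadron mass neglected)
    return M * M + q2 * (1.0 - z) * (1.0 - x) / x

def z_at_wprime(x, q2, wp=2.0):
    # elasticity z above which W' falls below wp
    return 1.0 - (wp * wp - M * M) * x / (q2 * (1.0 - x))

# typical Hall C kinematics: x = 0.32, Q^2 = 2.3 GeV^2
print(z_at_wprime(0.32, 2.3))   # -> ~0.36
print(wprime2(0.32, 1.0, 2.3))  # -> M^2: the elastic limit z -> 1
```

At these kinematics the undetected system falls below $W^\prime = 2$~GeV already for $z \gtrsim 0.36$, so essentially the whole elasticity range of the Hall~C measurement lies in the resonance region.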
The hadronic description must be rather elaborate, however, as the production of fast outgoing pions in the current fragmentation region at high energy requires nontrivial cancellations of the angular distributions from various decay channels \cite{IJMV,CI,WMsidis}, \begin{eqnarray} {\cal N}_N^h(x,z) &=& \sum_{N'^*} \left| \sum_{N^*} F_{\gamma^* N \to N^*}(Q^2,W^2)\ {\cal D}_{N^* \to N'^* h}(W^2,W'^2)\ \right|^2 \label{eq:dual_sidis} \end{eqnarray} where $F_{\gamma^* N \to N^*}$ is the $N \to N^*$ transition form factor, which depends on the photon virtuality and the mass of the excited nucleon ($W = M_{N^*}$), and ${\cal D}_{N^* \to N'^* h}$ is a function representing the decay $N^* \to N'^* h$. A dedicated experiment (E00-108) to study duality in $\pi^\pm$ electroproduction was performed in Hall~C \cite{Navasardyan,Mkrtchyan}, in which a 5.5~GeV electron beam was scattered from proton and deuteron targets at $Q^2$ between 1.8 and 6.0 GeV$^2$, for $0.3 \le x \le 0.55$, with $z$ in the range $0.35 - 1$. From the deuterium data the ratio of unfavored to favored fragmentation functions $D^-/D^+$ was constructed, where $D^+$ corresponds to a pion containing the struck quark ({\em e.g.}, $\pi^+$ from a struck $u$ or $\bar d$ quark), while $D^-$ describes the fragmentation of a quark not contained in the valence structure of the pion ({\em e.g.}, a $d$ quark for the $\pi^+$). Since at moderate $x$ the dependence on PDFs cancels, the fragmentation function ratio is approximately given by $D^-/D^+ = (4 - {\cal N}_d^{\pi^+}/{\cal N}_d^{\pi^-})/ (4 {\cal N}_d^{\pi^+}/{\cal N}_d^{\pi^-} - 1)$, where ${\cal N}_d^\pi$ is the yield of produced pions in Eq.~(\ref{eq:dual_sidis}).
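At LO the extraction of $D^-/D^+$ from the deuteron yield ratio is a one-line inversion of the squared charge weights $e_u^2 : e_d^2 = 4 : 1$. A sketch (ours; the full analysis of course retains the complete kinematic dependence):

```python
def d_ratio(r):
    """Unfavored/favored fragmentation ratio D^-/D^+ from the measured
    deuteron yield ratio r = N_d^{pi+} / N_d^{pi-}, assuming the PDF
    dependence cancels: D^-/D^+ = (4 - r) / (4 r - 1)."""
    return (4.0 - r) / (4.0 * r - 1.0)

print(d_ratio(4.0))  # -> 0.0 : only favored fragmentation contributes
print(d_ratio(1.0))  # -> 1.0 : equal pi+ and pi- yields
```

The two limiting cases bracket the behavior seen in the data: the measured ratio falls from near unity at small $z$ toward zero as $z \to 1$.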
\begin{figure}[htb] \begin{center} \includegraphics[height=4.5cm]{fig8.eps} \caption{\label{fig:dminusdplus} The ratio of unfavored to favored fragmentation functions $D^-/D^+$ as a function of $z$ extracted from deuterium data, for $x = 0.32$ \cite{Navasardyan}.} \end{center} \end{figure} The Jefferson Lab data for $D^-/D^+$ from E00-108 are shown in Fig.~\ref{fig:dminusdplus} as a function of $z$ at fixed $x = 0.32$ and $Q^2 = 2.3$~GeV$^2$ \cite{Navasardyan}, and compared with earlier HERMES data at higher energies \cite{HERMES}. Despite the different energies, there is good overall agreement between the two measurements, even though the Jefferson Lab data sit slightly higher. Furthermore, the $D^-/D^+$ ratio extracted from the Jefferson Lab data shows a smooth dependence on $z$, which is quite remarkable given that the data cover the full resonance region, $0.88 < W^{\prime 2} < 4.2$~GeV$^2$. This strongly suggests a suppression or cancellation of the resonance excitations in the $\pi^+/\pi^-$ cross section ratio, and hence in the fragmentation function ratio. Similar cancellations between resonances naturally arise in quark models, such as those discussed by Close {\it et al.} \cite{CI,CM09} for the $\gamma N \to \pi^\pm N'^*$ reaction. The pattern of constructive and destructive interference, which was a crucial feature of the appearance of duality in inclusive structure functions, is also repeated in the semi-inclusive case when one sums over the states $N'^*$. Moreover, the smooth behavior of the fragmentation function ratio $D^-/D^+$ in Fig.~\ref{fig:dminusdplus} can be qualitatively understood from the relative weights of the matrix elements for $\pi^+$ production, which are always 4 times larger than those for $\pi^-$ production. In this case the resonance contributions to this ratio cancel exactly, leaving behind only the smooth background as would be expected at high energies.
This may account for the striking lack of resonance structure in the resonance region fragmentation functions in Fig.~\ref{fig:dminusdplus}. \section{Outlook} \label{sec:outlook} \subsection{Impact of Jefferson Lab data} \label{ssec:impact} The first decade of unpolarized structure function measurements at Jefferson Lab has had significant impact on our understanding of nucleon structure, both for leading twist parton distributions and for the resonance--scaling transition and related studies of quark-hadron duality. With most of the data concentrated in the low-$W$ region in the $Q^2 \sim$~few GeV$^2$ range, the greatest influence on the global data base has naturally been at large $x$. \begin{figure}[ht] \begin{center} \vspace*{0.5cm} \includegraphics[scale=0.55]{fig9a.eps}\hspace{-0cm} \includegraphics[scale=0.5]{fig9b.eps} \end{center} \caption{{\it (Left)} CTEQ6X fit for $u$ and $d$ quark PDFs, normalized to the earlier CTEQ6.1 fit \cite{CTEQ6.1}. The vertical lines show the approximate values of $x$ above which PDFs are not directly constrained by data. The error bands correspond to $\Delta\chi^2=1$. {\it (Right)} Relative errors on $u$ and $d$ quark PDFs, normalized to the relative errors in the reference fit.} \label{fig:cteq6x} \end{figure} Recently a new global PDF analysis was performed \cite{CTEQ6X}, exploring the possibility of reducing the uncertainties at large $x$ by relaxing the constraints on the kinematics over which data are included in the fit. The data sets combined proton and deuteron DIS structure functions from Jefferson Lab, SLAC and CERN (NMC) with new $ep$ collider data from HERA, as well as new Drell-Yan, $W$~asymmetry and jet cross sections from $pp$ and $pd$ collisions at Fermilab.
The new fit (referred to as ``CTEQ6X'') allowed for a significant increase in the large-$x$ data set ({\em e.g.}, a factor of two more DIS data points) by incorporating data for $W^2 > 3$~GeV$^2$ and $Q^2 > 1.69$~GeV$^2$, lower than in the standard global fits \cite{MSTW,CTEQ6.1} which typically use $W^2 > 12.25$~GeV$^2$ and $Q^2 > 4$~GeV$^2$. The new analysis also systematically studied the effects of target mass corrections (TMCs) and higher twist contributions, and realistic nuclear corrections for deuterium data. Results from the CTEQ6X fit are shown in Fig.~\ref{fig:cteq6x}~(left) for the $u$ and $d$ quark PDFs, normalized to the earlier CTEQ6.1 fit, which had no nuclear or subleading $1/Q^2$ corrections applied. The biggest change is the $\sim 30$--40\% suppression of the $d$ quark at $x \sim 0.8$, which is found to be stable with respect to variations in the $W^2$ and $Q^2$ cuts, provided both TMC and higher twist corrections are included. The effect of the expanded data base is more dramatically illustrated in Fig.~\ref{fig:cteq6x}~(right), which shows the relative $u$ and $d$ PDF errors for a range of $W$ and $Q^2$ cuts (``cut0'' being the standard cut, ``cut3'' the least restrictive (CTEQ6X) cut, and intermediate cuts ``cut1'' and ``cut2''), normalized to those of a reference fit with ``cut0'' and no nuclear or subleading corrections. The result is a reduction of the errors by up to 40--60\% at $x \gtorder 0.7$, which will have a profound impact on applications of PDFs in high energy processes, such as those at the LHC, as well as in constraining low-energy models of quark distributions. \subsection{Future prospects} \label{ssec:future} Uncertainties in PDFs will be further reduced with the availability of data at even larger $x$ and $Q^2$ from Jefferson Lab after its 12~GeV energy upgrade, which will determine the $d$ quark distribution up to $x \sim 0.8$ with minimal theoretical uncertainties associated with nuclear corrections.
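The kinematic reach behind these cut choices, both for the CTEQ6X fit and for the 12~GeV program, follows from $W^2 = M^2 + Q^2(1-x)/x$: at a given $Q^2$, the $W^2$ cut directly sets the largest $x$ admitted to the fit. A quick sketch (ours) with the cut values quoted above:

```python
M = 0.938  # proton mass (GeV)

def w2(x, q2):
    # DIS invariant mass squared at Bjorken x and photon virtuality Q^2
    return M * M + q2 * (1.0 - x) / x

def xmax(q2, w2cut):
    # largest x admitted at this Q^2 by the cut W^2 > w2cut
    return q2 / (q2 + w2cut - M * M)

q2 = 4.0  # GeV^2
print(xmax(q2, 12.25))  # standard cut -> ~0.26
print(xmax(q2, 3.0))    # CTEQ6X cut   -> ~0.65
```

Relaxing the cut from $W^2 > 12.25$ to $W^2 > 3$~GeV$^2$ thus extends the reach at $Q^2 = 4$~GeV$^2$ from $x \approx 0.26$ to $x \approx 0.65$, which is the origin of the factor-of-two increase in DIS data points quoted above.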
The planned experiments include BoNuS12 \cite{BONUS12}, which will extend to larger $x$ the earlier measurements of low-momentum, backward protons in semi-inclusive scattering from deuterium (Sec.~\ref{ssec:F2n_bonus}); the MARATHON experiment \cite{MARATHON}, which plans to extract $F_2^n/F_2^p$ from the ratio of $^3$He to $^3$H structure functions, in which the nuclear corrections cancel to within $\sim 1\%$ \cite{Afnan,Salme}; and the program of parity-violating DIS measurements on hydrogen \cite{SOLID}, which will be sensitive to a new combination of $d/u$ in the proton, free of nuclear corrections. In the resonance region, experiment E12-10-002 \cite{E12-10-002} will extend proton and deuteron structure function measurements up to $Q^2 \sim 17$~GeV$^2$ and enable tests of quark-hadron duality over a much larger kinematic range, ultimately providing stronger constraints on large-$x$ PDFs. Finally, a new avenue for exploring nucleon structure at 12~GeV will be opened up with semi-inclusive meson production experiments \cite{E12-06-104}, which will test the factorization of scattering and fragmentation subprocesses needed for a partonic interpretation of semi-inclusive cross sections. A successful program of semi-inclusive measurements tagging specific mesons in the final state would allow unprecedented access to the flavor dependence of PDFs in previously unexplored regions of kinematics. \ack We thank R.~Ent and C.~E.~Keppel for their contributions to this review in the early stages of its development. This work was supported by the U.S. Department of Energy under Contract No. DE-FG02-03ER41231, and DOE contract No. DE-AC05-06OR23177, under which Jefferson Science Associates, LLC operates Jefferson Lab. \section*{References}
\section{Introduction}{ One of the most important efforts of particle and nuclear physics is to understand the structure of the nucleon from first principles. Despite being studied for many years, there are still several unknown aspects of the nucleon structure, such as the origin of its mass, the charged radii, and the distribution of its spin among its constituents. Lattice QCD (LQCD) offers an ideal method for \textit{ab initio} calculations which can be used to study the properties of fundamental particles numerically. The lattice formulation is a non-perturbative tool that allows the study of phenomena at the hadronic scale, where perturbation theory fails. Parton distribution functions (PDFs) are important quantities that give insights into the nucleon structure. For example, to leading twist, they represent how the probability density of finding a specific parton in a hadron depends on the hadron's momentum and spin. PDFs are defined on the light cone and therefore cannot be computed directly in Euclidean space. A way to access PDFs from lattice QCD is to calculate moments of PDFs, which are related to the original PDFs through the operator product expansion. These moments can also be calculated from deep inelastic scattering (DIS) experiments through phenomenological analysis, which can be compared to the LQCD results \cite{Abdel-Rehim}. In the framework of this work we compute nucleon matrix elements of a variety of local bilinear operators (with up to one covariant derivative). This includes the scalar, vector, axial and tensor charges, as well as the one-derivative vector, axial and tensor operators. In these proceedings we present results only for the nucleon axial and tensor charges, and the first moments of the unpolarized and helicity distribution functions.
It is crucial to extract reliable estimates of the aforementioned quantities from lattice QCD, as they can serve as benchmarks of the lattice techniques (e.g., the axial charge), and even provide input in the analyses of experimental data (e.g., the tensor charge~\cite{Lin:2017stx}). In particular, the axial charge $g_A$ gives information about the chiral structure of a particle, and its value for the nucleon at the chiral limit is used as an input for chiral effective theories. It has been thoroughly studied in chiral effective theories \cite{chi-eff} and is well measured experimentally in $\beta$-decay experiments. In addition, $g_A$ is an ideal candidate for a benchmark quantity because it can be calculated directly at zero momentum transfer, unlike the anomalous magnetic moment of the nucleon, for example, which must be extrapolated from finite momentum transfer calculations. The tensor charge $g_T$ is an interesting quantity that may have implications for physics beyond the Standard Model. It is necessary for setting bounds on novel beyond-the-Standard-Model tensor interactions in ultra-cold neutron decays \cite{gT}. Unlike $g_A$, its value is not well known experimentally, with only limits on its value coming from radiative pion decay $\pi\rightarrow e\nu \gamma$.
There are experiments at Jefferson Lab which use polarized $^3$He/proton targets with the goal of increasing the experimental accuracy of $g_T$ by an order of magnitude \cite{jlab}, making a theoretical calculation especially timely.} \section{Calculation and Simulation Details}{ \subsection{Matrix elements} \begin{figure} \vspace*{-0.5cm} \hspace{.5cm} \begin{minipage}{.45\linewidth} \vspace{1.25cm} \includegraphics[scale=0.35]{Images/3pt_connected.png} \end{minipage} \hspace{.5cm} \begin{minipage}{.45\linewidth} \includegraphics[scale=0.35]{Images/3pt_disconnected.png} \end{minipage} \caption{Diagrams of connected (left) and disconnected (right) contributions to the three-point functions.} \label{fig:3ptDiagrams} \end{figure} \vspace{1mm} The calculation is based on nucleon matrix elements $\langle N(p)|{\cal O}_\Gamma|N(p)\rangle$, where the inserted operator, ${\cal O}_\Gamma$, is a local bilinear operator. To obtain these matrix elements, we calculate the three-point functions which are represented by the diagrams in Figure~\ref{fig:3ptDiagrams}. In the connected diagram (left) the current couples to the quark fields of the nucleon interpolating field, while in the disconnected diagram (right) the quarks of the current form a closed loop and interact with the nucleon via gluon exchange. In this study, we focus on isovector quantities that receive contributions only from the connected diagram (the disconnected contributions cancel out up to cut-off effects). For $g_A$ and $g_T$, ${\cal O}_\Gamma$ is the ultra-local axial-vector and tensor operator, respectively: \\[0.5ex] \begin{equation} \displaystyle{\cal O}^\mu_{A^a} = \bar{q}\gamma_5\gamma^\mu\frac{\tau^a}{2} q\>, \quad {\cal O}^{\mu\nu}_{T^a} =\bar{q}\sigma^{\mu\nu}\frac{\tau^a}{2}q\>. \end{equation} where $\bar{q}=(\bar{u},\bar{d})$.
For $\langle x\rangle_q$ and $\langle x\rangle_{\Delta q}$, the three-point functions are calculated using the one-derivative vector and axial-vector operators, namely: \\[0.5ex] \begin{equation} {\cal O}_{V^a}^{\mu\nu} = \bar{q}\gamma^{\{\mu} \buildrel \leftrightarrow \over D\raise-1pt\hbox{}^{\nu\}} \frac{\tau^a}{2}q, \quad {\cal O}_{A^a}^{\mu\nu} = \bar{q}\gamma^{\{\mu} \buildrel \leftrightarrow \over D\raise-1pt\hbox{}^{\nu\}} \gamma_5 \frac{\tau^a}{2}q\,. \end{equation} The curly brackets represent a symmetrization over the index pair and a subtraction of the trace. The presence of the derivative in the operators leads to a reduction of the signal-to-noise ratio, and larger statistics are required to extract $\langle x\rangle_q$ and $\langle x\rangle_{\Delta q}$ at the same statistical accuracy as the charges. The matrix elements of the ultra-local operators at zero momentum transfer give the nucleon charges ($g_A{\equiv}G_A(0)$ and $g_T{\equiv}A_{T10}(0)$) via the continuum decomposition: \begin{equation} \langle N(p,s')|\mathcal{O}^\mu_{A}|N(p,s)\rangle = i \bar{u}_N(p,s') \Bigl[\frac{1}{2}G_A(0)\gamma^\mu\gamma_5\Bigr] u_N(p,s)\,, \label{eq:OpA} \end{equation} \begin{equation} \langle N(p,s')|\mathcal{O}^{\mu\nu}_{T}|N(p,s)\rangle = \bar{u}_N(p,s') \Bigl[\frac{1}{2} A_{T10}(0)\sigma^{\mu\nu} \Bigr] u_N(p,s)\,.\label{eq:OpT} \end{equation} Equivalently, the matrix elements for the one-derivative operators at zero momentum transfer lead to: \begin{equation} \langle N(p,s^\prime)| {\cal O}_{V}^{\mu\nu}|N(p,s)\rangle = \bar{u}_N(p,s^\prime)\Bigl[ \frac{1}{2}\langle x\rangle_q \gamma^{\{\mu}p^{\nu\}} \Bigr] u_N(p,s)\,, \label{eq:oneD_V} \end{equation} \begin{equation} \langle N(p,s^\prime)| {\cal O}_{A}^{\mu\nu}|N(p,s)\rangle = i \bar u_N(p,s^\prime)\Bigl[ \frac{1}{2} \langle x\rangle_{\Delta q}\gamma^{\{\mu}p^{\nu\}}\gamma^5 \Bigr] u_N(p,s).
\label{eq:oneD_A} \end{equation} For simplicity, we use the generic symbol $q$ to represent the quark combination in Eqs.~(\ref{eq:oneD_V}) and (\ref{eq:oneD_A}), but in these proceedings we only consider the isovector case $q=u-d$. \subsection{Lattice Setup} The results presented in these proceedings are obtained from three different ensembles of the twisted-clover action $$ S_F\left[\chi,\overline{\chi},U \right]= a^4\sum_x \overline{\chi}(x)\left(D_W[U] + m_{\rm cr} + i \mu_l \gamma_5\tau^3 - \frac{1}{4}c_{\rm SW}\sigma^{\mu\nu}\mathcal{F}^{\mu\nu}[U] \right) \chi(x), $$ with quarks tuned to maximal twist. This action gives automatic ${\cal O}(a)$ improvement and does not require any operator improvement, simplifying the renormalization procedure \cite{tm}. Table~\ref{tab:simParams} shows the parameters used for each ensemble, listed from the smallest to the largest volume. The pion mass of each ensemble is near its physical value, thus avoiding a chiral extrapolation that may lead to uncontrolled uncertainties. The statistics are up to 24,000 measurements for cB211.72.64 and 12,656 for cA2.90.64 (at their largest separation).
{\small \renewcommand{\arraystretch}{1.2} \renewcommand{\tabcolsep}{18pt} \begin{table}[ht] \centering \begin{tabular}{c|c|c|c|c|c} \hline\hline \multicolumn{6}{c}{Parameters}\\ \hline & $N_f$ & \hspace*{-0.2cm}Volume\hspace*{-0.2cm} & a (fm) & \hspace*{-0.4cm}$m_\pi$ (MeV)\hspace*{-0.4cm} & \hspace*{-0.4cm} $t_s/a$ \hspace*{-0.4cm} \\ \hline \hspace*{-0.4cm} cA2.90.48\hspace*{-0.4cm} & \hspace*{-0.4cm}$2$\hspace*{-0.4cm} & \hspace*{-0.2cm}$48^3{\times}96$\hspace*{-0.2cm} & $0.094$ & $130$ & 10, 12, 14\\ \hspace*{-0.4cm} cB211.72.64 \cite{Nf211}\hspace*{-0.4cm} & \hspace*{-0.4cm}$2{+}1{+}1$\hspace*{-0.4cm} & \hspace*{-0.2cm}$64^3{\times}128$\hspace*{-0.2cm} & $0.082$ & $137$ & 12, 14, 16, 18, 20 \\ \hspace*{-0.4cm} cA2.90.64\hspace*{-0.4cm} & \hspace*{-0.4cm}$2$\hspace*{-0.4cm} & \hspace*{-0.2cm}$64^3{\times}128$\hspace*{-0.2cm} & $0.094$ & $130$ & 12, 14, 16 \\\hline\hline \end{tabular} \caption{Parameters for the ensembles used in this work, and source-sink separation $t_s/a$.} \label{tab:simParams} \end{table} \noindent In order to extract the matrix elements we calculate the two- and three-point functions given by \begin{equation} G_{\rm 2pt}({\bf 0}, t_s) = \sum_{{\bf x}_s}\Gamma^4_{\beta\alpha}\langle J_\alpha({\bf x}_s,t_s)\bar{J}_\beta({\bf x}_0,t_0)\rangle \end{equation} \begin{equation} G^{\mu_1,...,\mu_n}_{\rm 3pt}(\Gamma^\nu, {\bf p}, t_s, t_{\rm i}) = \sum_{{\bf x}_s,{\bf x}_{\rm i}}\> e^{-i({\bf x}_s-{\bf x}_0)\cdot{\bf p}}\> \Gamma^\nu_{\beta\alpha}\langle J_\alpha({\bf x}_s,t_s) {\cal O}_\Gamma^{\mu_1,...,\mu_n} ({\bf x}_{\rm i}, t_{\rm i})\bar{J}_\beta({\bf x}_0,t_0)\rangle, \end{equation} respectively, where $x_0$, $x_i$, and $x_s$ are the source, insertion, and sink coordinates.
$\Gamma^\nu$ is the projection matrix $ \Gamma^4 = \frac{1}{4}(\mathbb{1}+\gamma_4), \ \Gamma^k = \Gamma^4i\gamma_5\gamma_k$, and the proton interpolation operators are $ J_{\alpha}(x) = \epsilon^{abc}u_{\alpha}^a(x)[u^{\top b}(x) C\gamma_5 d^c(x)]$, where $a$, $b$, and $c$ are color indices. We then form the ratio: \begin{equation} R(\Gamma^\nu,t_s,t_i)= \frac{G_{\rm 3pt}(\Gamma^\nu,{\bf 0},t_s,t_i) }{G_{\rm 2pt}({\bf 0}, t_s)} \label{eq:ratio} \end{equation} which is simplified due to the zero momentum transfer. The information on the nucleon charges can be extracted from fits of the ratio in Eq.~(\ref{eq:ratio}) using different techniques. We focus here on single-state ({\it plateau}) and {\it two-state} fits in order to examine ground-state dominance. Results from the summation method are not shown here, as they carry large statistical uncertainties leading to inconclusive results. \medskip \noindent {\it Plateau method:} In this method, the current insertion time ($t_i$) dependence of the ratio is fitted to a constant, assuming single-state dominance. The fit includes values of $t_i$ far from $t_0$ and $t_s$, so that contamination due to excited states is suppressed. The excited-state effects are studied by varying $t_s$, and convergence is expected at large $t_s$. \medskip \noindent {\it Two-state method:} Instead of fitting the ratio to a constant value, alternatively one fits including the first excited state.
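A minimal numerical version of the plateau method (ours; the synthetic data, fit window, and uncorrelated weights are illustrative only, not the analysis actually used) fits the ratio of Eq.~(\ref{eq:ratio}) to a constant over the central insertion times:

```python
import numpy as np

def plateau_fit(ti, R, dR, ts, skip):
    """Weighted constant fit of the ratio R(t_i) on skip <= t_i <= ts - skip
    (uncorrelated least squares -- a sketch only)."""
    m = (ti >= skip) & (ti <= ts - skip)
    w = 1.0 / dR[m] ** 2
    c = np.sum(w * R[m]) / np.sum(w)
    return c, 1.0 / np.sqrt(np.sum(w))

# synthetic ratio with exponential excited-state "bending" near source and sink
ts, dE, gA = 16, 1.0, 1.25
ti = np.arange(1, ts)
R = gA + 0.3 * (np.exp(-dE * ti) + np.exp(-dE * (ts - ti)))
c, dc = plateau_fit(ti, R, np.full(R.shape, 0.01), ts, skip=5)
print(c)  # close to the input 1.25 once the edges are excluded
```

With the regions near the source and the sink excluded, the fitted constant reproduces the input value even in the presence of the exponential bending at the edges.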
In this case, the two- and three-point functions are written as \begin{eqnarray} G_{2pt}(\vec{p},t_s) &=& c_0(\vec{p}) e^{-E_0(\vec{p}) t_s} + c_1(\vec{p}) e^{-E_1(\vec{p}) t_s}, \\[2ex] G_{3pt}(\vec{p}\,',\vec{p},t_s,t_i) &=& A_{00}(\vec{p}\,',\vec{p}) e^{-E_0(\vec{p}\,')(t_s-t_i)-E_0(\vec{p})t_i} + A_{01}(\vec{p}\,',\vec{p}) e^{-E_0(\vec{p}\,')(t_s-t_i)-E_1(\vec{p})t_i} \nonumber \\ &+& A_{10}(\vec{p}\,',\vec{p}) e^{-E_1(\vec{p}\,')(t_s-t_i)-E_0(\vec{p})t_i} + A_{11}(\vec{p}\,',\vec{p}) e^{-E_1(\vec{p}\,')(t_s-t_i)-E_1(\vec{p})t_i}, \label{Eq:Thrp_tsf} \end{eqnarray} where $E_0(\vec{p})$ and $E_1(\vec{p})$ are the energies of the ground and first excited states with total momentum $\vec{p}$. The two- and three-point functions are fitted simultaneously, which involves twelve fitting parameters. The matrix element of interest, $\mathcal{M}$, is then ${\cal M}=\frac{A_{00}(\vec{p}\,',\vec{p})}{\sqrt{c_0(\vec{p}\,') c_0(\vec{p})}}\,$, and here we focus on zero momentum transfer, for which $A_{10} {=} A_{01}$. \section{Results} \subsection{Nucleon Charges} In Fig.~\ref{fig:gA} (left) we show the ratio of Eq.~(\ref{eq:ratio}) for $g_A$ as a function of $t_i$ using the ensemble cB211.72.64 (see Table~\ref{tab:simParams}). For clarity purposes we only show the two largest separations $t_s$ for the plateau (red circles and blue triangles), while the two-state fit is shown with a green band using all possible separations. We find agreement among the shown data within statistical uncertainties. In the right plot, we collect the final values of $g_A$ for all ensembles (cA2.90.48 (red circles), cB211.72.64 (blue triangles), and cA2.90.64 (green squares)) as a function of the source-sink separation. With the blue horizontal band we show the two-state fit curve for cB211.72.64 and its convergent value. The corresponding two-state fit for cA2.90.64 will be performed in the near future, once statistics are increased and larger separations $t_s$ are added.
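The structure of the two-state parametrization in Eq.~(\ref{Eq:Thrp_tsf}) can be illustrated on synthetic data (all parameters below are invented for illustration): at zero momentum transfer the midpoint ratio approaches ${\cal M} = A_{00}/c_0$ as $t_s$ grows, which is what allows the combined fit to work at separations where the plateau is still drifting.

```python
import math

# invented two-state parameters (lattice units): energies, overlaps,
# and transition amplitudes with A01 = A10 at zero momentum transfer
E0, E1, c0, c1 = 0.5, 1.0, 1.0, 0.4
A00, A01, A11 = 1.2, 0.3, 0.1

def g2pt(ts):
    return c0 * math.exp(-E0 * ts) + c1 * math.exp(-E1 * ts)

def g3pt(ts, ti):
    return (A00 * math.exp(-E0 * (ts - ti) - E0 * ti)
            + A01 * math.exp(-E0 * (ts - ti) - E1 * ti)
            + A01 * math.exp(-E1 * (ts - ti) - E0 * ti)
            + A11 * math.exp(-E1 * (ts - ti) - E1 * ti))

for ts in (10, 20, 40):
    print(ts, g3pt(ts, ts // 2) / g2pt(ts))  # approaches A00/c0 = 1.2
```

At $t_s = 10$ (in lattice units) the midpoint ratio still overshoots ${\cal M}$ at the few-percent level, while by $t_s = 40$ the excited-state contamination is negligible.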
\begin{figure} \vspace*{-.3cm} \hspace*{-.65cm} \includegraphics[scale=0.29]{Images/gA_plat.pdf} \hspace*{-.75cm} \includegraphics[scale=0.29]{Images/gA.pdf} \caption{Left: the ratio for $g_A$ for cB211.72.64 at $t_s{=}18a$ (red circles) and $t_s{=}20a$ (blue triangles). The green horizontal band is the two-state fit value. Right: plateau values for $g_A$ for cA2.90.48 (red circles), cB211.72.64 (blue triangles), and cA2.90.64 (green squares). The two-state fit curve and its convergent value for cB211.72.64 are also included. In both plots, the black horizontal band is the global fit of experimental data \cite{wp}, and the error bars show statistical uncertainties only. } \label{fig:gA} \end{figure} \noindent In the right panel of Fig.~\ref{fig:gA}, it can be seen that $g_A$ rises as $t_s$ increases for cA2.90.48 and cB211.72.64, which shows that excited state contamination is affecting the values at lower $t_s$. Although the values agree between ensembles, it is not apparent that the cA2.90.64 data are rising with increased $t_s$, and thus it is not yet conclusive whether there are volume or quenching effects, as cA2.90.48 and cA2.90.64 only share two data points. New data are currently being produced at higher source-sink separations, as well as higher statistics for existing points. This will reveal whether this behavior is observed because excited states are not being suppressed as quickly as for the other ensembles or simply because statistical noise is obscuring the increase. The data point at $t_s\sim 1.6$ fm is in agreement with the experimental value, and the two-state fit value for cB211.72.64 is in very good agreement with experiment. The results for $g_T$ are presented in Fig.~\ref{fig:gT}, in the same format as the right plot of Fig.~\ref{fig:gA}. Unlike the case of $g_A$, $g_T$ decreases for larger $t_s$ values, showing effects due to excited state contamination.
However, we do not find strong volume or quenching effects for $g_T$ at these lattice volumes, as all three ensembles agree within statistical uncertainty. \vspace*{-0.375cm} \begin{figure} \centering \includegraphics[scale=0.265]{Images/gT.pdf} \vspace*{-0.2cm} \caption{Plateau values for $g_T$. Notation on the ensembles is the same as in the right plot of Fig.~\ref{fig:gA}. } \label{fig:gT} \end{figure} \subsection{First moments of unpolarized and helicity PDFs} Fig.~\ref{fig:PDFmoments} shows results for the first moment of the unpolarized ($\langle x \rangle_{u-d}$) and helicity ($\langle x \rangle_{\Delta u-\Delta d}$) distribution functions. Similar to the previous figures, we include results from all three ensembles, in the same notation as the right panel of Fig.~\ref{fig:gA}. The phenomenological fits of experimental data are shown as horizontal bands. Both plots show a general decrease of the values as $t_s$ increases. However, $\langle x \rangle_{u-d}$ for cA2.90.48 appears to decrease more with $t_s$, especially at $t_s\sim 1.5$~fm. This behavior is likely due to volume rather than quenching effects, since the two larger ensembles with different numbers of sea quark flavors, cB211.72.64 and cA2.90.64, are in good agreement. Other than the stronger excited state suppression in the cA2.90.48 $\langle x \rangle_{u-d}$ data, there are no strong volume or quenching effects on these moments. In both plots, the lattice data approach the phenomenological values as $t_s$ increases, with $\langle x \rangle_{\Delta u-\Delta d}$ overlapping at $t_s\sim 1.6$~fm, while $\langle x \rangle_{u-d}$ is expected to reach agreement at even higher separations. \begin{figure} \hspace*{-.65cm} \includegraphics[scale=0.29]{Images/avgX.pdf} \hspace*{-.75cm} \includegraphics[scale=0.29]{Images/hel.pdf} \caption{ Results for $\langle x \rangle_{u-d}$ (left) and $\langle x \rangle_{\Delta u-\Delta d}$ (right). Notation is the same as in the right plot of Fig.~\ref{fig:gA}.
The horizontal bands in the two plots are phenomenological fits of experimental data \cite{wp}. } \label{fig:PDFmoments} \end{figure} \section{Conclusion and Outlook} In these proceedings we present results on the connected contributions to the nucleon charges and the first moments of the unpolarized and helicity PDFs, using two physical pion mass ensembles of different volumes, one with two dynamical quarks in the sea ($N_f{=}2$) and one also with a strange and a charm quark in the sea ($N_f{=}2+1+1$). We employ twisted mass fermions with a clover term and Iwasaki gluons. Results from this work are compared to values obtained on another twisted mass ensemble with a smaller volume, with the aim of a preliminary assessment of volume and quenching effects. Excited states on each ensemble are suppressed by increasing the source-sink separation, seeking convergence between single- and two-state fits. We find non-negligible volume and/or quenching effects in $g_A$. In particular, the cA2.90.64 results for this quantity show no apparent decrease in excited-state contamination over the different source-sink separations. Within our present errors, we cannot clearly resolve finite volume effects for $\langle x \rangle_{u-d}$, but we see a trend towards the phenomenological value for increasing source-sink separation. For the quantities which can be compared to experimental measurements---$g_A$, $\langle x \rangle_{u-d}$, and $\langle x \rangle_{\Delta u-\Delta d}$---our results approach the experimental values as the source-sink separation is increased, and we find agreement between theory and experiment at larger source-sink separations for $g_A$ and $\langle x \rangle_{\Delta u-\Delta d}$. Our study will be extended with increased statistics and an analysis at larger values of the source-sink separation, in order to assess and eliminate systematic uncertainties.
In the near future we anticipate two more ensembles of $N_f{=}2{+}1{+}1$ twisted mass fermions at the same physical volume as one of the ensembles presented in this work ($N_f{=}2{+}1{+}1$, $64^3{\times}128$) and a smaller lattice spacing. These ensembles will be used for an extrapolation to the continuum limit, which has never been performed for simulations at the physical point. \section*{\it Acknowledgments:} We want to thank all members of the ETMC for a fruitful collaboration. This work used computational resources from the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number TG-PHY170022. The authors gratefully acknowledge the Gauss Centre for Supercomputing e.V. (www.gauss-centre.eu) for funding this project by providing computing time on the GCS Supercomputer SuperMUC at Leibniz Supercomputing Centre (www.lrz.de). This work was supported by a grant from the Swiss National Supercomputing Centre (CSCS) under project ID s702. This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No. 642069. S. B. was supported by this program. M.C. and C.L. acknowledge financial support by the U.S. National Science Foundation under Grant No. PHY-1714407. C.L. acknowledges support from the U.S. Department of Energy, Office of Science, Office of Nuclear Physics, contract no. DE-AC02-06CH11357.
Chapter 435: The Scariest Scenario So Far "Brother, what should we do now?" Li Jiu and Ma Wei had not experienced something like this before, so the fact that they had not fainted on the spot was a miracle. There were still two minutes left until midnight, and the blood threads mixed with the red liquid, so one could not distinguish one from the other. Things only got worse. The blood threads were used to stabilize the dead bodies inside the walls. When the threads loosened and moved, the whole central hub shook like there was an earthquake. "Come in here first." Chen Ge pulled Ma Wei and Li Jiu into the room. He stood alone at the door, holding the handle. Midnight was coming, and the outside corridors had undergone various changes. The dead bodies that were sewn into the walls seemed to have lost some limitation. Arms fell from the ceiling, and they shook with the whole scenario. If this gets moved to the Haunted House, I doubt anyone would survive it. When Chen Ge moved his gaze, the eyes of the dead bodies suddenly opened! The dead opened their eyes‽ The eyes of the dead bodies were different from those of normal people. They had no pupils, or rather, the pupils had completely dissolved, and their eyes were a layer of something yellowish-brown. Thankfully, Chen Ge had a greater threshold for fear than most, and he could still maintain eye contact with the bodies calmly. However, what happened next caused even Chen Ge to panic. More eyes opened in the dark. These were victims of the ghost stories society, and even now, they were still part of the society. The faces woke up from their slumber. Their faces twisted, and their bodies lost the humanoid shape. Their necks were turned at weird angles as they looked at Chen Ge. This was a hard scene to describe. Endless twisted arms and elongated necks reached toward Chen Ge. The caved-in heads opened their jaws as they moved toward him. Chen Ge's back was covered with cold sweat.
He tried his best to stay calm, and that was because his courage had been honed from completing the missions given by the black phone over the past two months. If he had seen this before he received the black phone, then he probably would have fainted. Is this what a complete three-star scenario looks like? His body stepped backward subconsciously. Chen Ge gripped the hammer, and it gave him a sense of security. When there was one minute left until midnight, the whole hub felt like it had come alive. All of the bodies had been awoken. The walls collapsed as the dead bodies crawled out from it. There were even cadavers that fell out from the ceiling. Their bodies were sewn together by red threads, and most of the body parts were attached even though they looked like they were falling apart. Chen Ge now understood how scary the society's lair was, and he had a new understanding of Doctor Gao, who was behind all of this. As the chairperson of the ghost stories society, to be able to come up with the treatment methods for so many mental patients and murderous madmen, Doctor Gao was truly the scariest existence. In the day, he was the best psychologist in Jiujiang, concerned about his patients, a flawless man. But at night, he stayed with the cadavers and used the victims' bodies to build an underground lab. This contrasting lifestyle somehow existed within the same man. The scariest thing was that he had lived this life for five years, and in these five years, no one had suspected him. "How did he do it?" The cadavers rushed at them in waves. It was impossible for them to leave. Chen Ge could only retreat into the room and lock the door from the inside. "Come with me, don't ask anything. Whenever I order you to do something, just do it." The cadavers seemed to be afraid of this door-they did not dare get too close. However, the bodies at the back pushed them forward, and the death masks would imprint themselves on the door. "Don't just stand there, come on!" 
The steel door creaked noisily; Chen Ge had no idea how long it could last. He returned to the innermost room and, staring at the time on his phone, stood before the door quietly. Midnight finally arrived. Blood bloomed on the wooden door like roses. The heavy stench of blood leaked from behind the door, and it soon dyed the whole door red. Ma Wei and Li Jiu had not seen something like this before. What they had experienced that night stunned them. Their brains were running on autopilot, and all they knew then was to follow Chen Ge. "What I'm saying next is very important, so listen closely. You have two choices-either you follow me through that door or stay here and await your deaths," Chen Ge said seriously. He picked up the agitated white cat and used the hammer to push the door open. The smell of blood swallowed them like a wave. Ma Wei and Li Jiu dry heaved from the smell. Their faces were white, but they still followed closely behind Chen Ge. "Since you're willing to take this risk with me, I shall give you another reminder." Chen Ge pointed at the half-open blood door. "Based on my understanding of the blood door, if there is no one holding the door open, it will close on its own after one minute, and it can only be opened after twenty-four hours. You'd better be prepared." Having been inside the door before, Chen Ge knew that only the door-pusher could control the door. The door-pusher in Coffin Village was the ghost in the well, and the door-pusher at the Third Sick Hall was Men Nan. Neither of them meant to harm Chen Ge, so after the mission was completed, they had helped Chen Ge open the door to let him return to the real world. However, this time was different. He now stood in direct opposition to the chairperson. After Chen Ge entered the door, no matter the result, the opponent would not open the door and let him leave. Therefore, Chen Ge would need to wait until the following midnight to leave.
If this is just to avoid the danger, there shouldn't be too much of a problem. The world behind the door is scary, but the door-pusher, Doctor Gao, is not in Jiujiang. This is just like how the Third Sick Hall was after losing Men Nan-the level of danger will be greatly lowered. When Chen Ge was ruminating, the steel door outside collapsed. The blood vessels crawled on the ground, and the cadavers crawled into the room. Without wasting any time, Chen Ge led Li Jiu and Ma Wei into the door. I properly prepare every time I go on a Trial Mission, but even so, accidents cannot be avoided. Chen Ge looked at the dead bodies outside, and his eyes were filled with complicated emotions. The dead bodies controlled by blood threads were different from Specters. Even if he unleashed all of his employees, it would have been pointless. Perhaps that is Doctor Gao's aim, and that's why he did all of this. Chen Ge realized another limitation of Specters, but he soon recovered. Normal Specters might not do anything to dead bodies, but a Red Specter could be the exception. If I had enough Red Specters, I would have no reason to be afraid of these cadavers. Chen Ge was never overconfident, but he would not give up easily. In this Trial Mission, he had found himself a new target.
<?php require_once 'PHPUnit/Framework.php'; require_once 'PHPUnit/Util/Filter.php'; PHPUnit_Util_Filter::addFileToFilter(__FILE__, 'PHPUNIT'); if (!class_exists('PHPUnit_Framework_ExpectationFailedException', FALSE)) { /** * Exception for expectations which failed their check. * * The exception contains the error message and optionally a * PHPUnit_Framework_ComparisonFailure which is used to * generate diff output of the failed expectations. * * @category Testing * @package PHPUnit * @author Sebastian Bergmann <sb@sebastian-bergmann.de> * @copyright 2002-2009 Sebastian Bergmann <sb@sebastian-bergmann.de> * @license http://www.opensource.org/licenses/bsd-license.php BSD License * @version Release: @package_version@ * @link http://www.phpunit.de/ * @since Class available since Release 3.0.0 */ class PHPUnit_Framework_ExpectationFailedException extends PHPUnit_Framework_AssertionFailedError { protected $comparisonFailure; protected $description; protected $customMessage; public function __construct($description, PHPUnit_Framework_ComparisonFailure $comparisonFailure = NULL, $message = '') { $this->description = $description; $this->comparisonFailure = $comparisonFailure; $this->customMessage = $message; if (!empty($message)) { $description .= "\n" . $message; } parent::__construct($description); } public function getComparisonFailure() { return $this->comparisonFailure; } public function getDescription() { return $this->description; } public function getCustomMessage() { return $this->customMessage; } } } ?>
Open Enrollment and 2018 Benefits Plan Design Information Now Available Monday, October 2, 2017, By Jaclyn D. Grosso faculty and staff, human resources, open enrollment Open Enrollment, the annual period when University faculty, staff and other eligible individuals make their benefit choices for the coming year, begins Monday, Oct. 30, and continues through Friday, Nov. 10. This is the only time of year when participants may elect or change coverage for many benefits, unless they experience a qualifying life event. As previously announced, Excellus BlueCross BlueShield has been selected to administer the medical plans for Syracuse University, effective Jan. 1, 2018. As an Upstate New York-based provider with offices in DeWitt, Excellus was selected as part of a comprehensive benefits review process to ensure employees and retirees have access to an excellent set of options from the highest-quality plans. In an Oct. 2 email to the campus community, Andrew R. Gordon, senior vice president and chief human resources officer, shared information and updates about the Syracuse University benefits package for 2018. Some important highlights include the following: There will be no increase in employee per-paycheck contributions for any University benefit plans for 2018. Members enrolled in SUBlue and SUOrange will not be required to obtain referrals to see in-network providers. Copays for certain medical services will increase by $10. This is the first such increase in four years. An annual in-network deductible will be applied to SUBlue and SUOrange. All copays and deductibles are listed on the University's Open Enrollment website, including detailed information provided in our Frequently Asked Questions resource guide. There will be no changes to existing deductibles or coinsurance for SUPro. SUPro has no referral requirement. OptumRx will continue to provide prescription drug coverage.
The University's dental and vision plans will have no copayment, coinsurance or contribution increases. In these plans, 2018 is the second year of a two-year commitment, so the only change participants can make is to enroll or remove dependents from the plan, unless they experience a qualifying life event. Employee contributions for dependent life, supplemental life and AD&D, and voluntary long-term disability insurance will have no rate increases or plan changes. Free will preparation services are included when employees enroll in supplemental life insurance. Information sessions and office hours are available during Open Enrollment and can be found here. In addition, on-campus office hours will be held twice per week this fall and into 2018 by an experienced representative from Excellus to assist members with any personal transition questions. All of the relevant information will be posted online at openenrollment.syr.edu as it becomes available. Employees who have any immediate questions about Syracuse University benefits can contact the HR Service Center by phone at 315.443.4042 or by email at hrservice@syr.edu.
\section{Introduction} \label{intro} The European Space Agency (ESA) has started planning the next cycle of space missions by establishing the long-term ESA Science Program Voyage 2050. This follows the current plan (Cosmic Vision, extending up to 2035) and is the framework in which ESA space missions from 2035 up to 2050 will be defined. In order to keep the Science Program a bottom-up process and gather inputs from the scientific community about the science themes that should be covered by the Voyage 2050 planning cycle, ESA has issued a call for White Papers in March 2019. In this paper we discuss five science cases, spanning different fields of astrophysics, presenting scientific challenges that can be addressed with a space-based mission in 2035 -- 2050, conducting spectroscopic observations in the ultraviolet (UV), a wavelength regime that is not accessible from the ground. \begin{itemize} \item By detecting the intergalactic medium in emission it will be possible to directly unveil the cosmic web, whose existence is predicted by current theories of structure formation. This will enable studies of the exchange of baryons between galaxies (and quasars) and their surroundings, unveiling how the halo gas contributes to the evolution of galaxies and what mechanisms drive the galaxy angular momentum build-up through cosmic time. Finally, multiple detections of the intergalactic medium in emission will provide measurements of the UV background, a critical quantity used in simulations of galaxy formation, yet still poorly constrained in observations (Section \ref{subsec:cgm_fab}, \ref{subsec:cgm_celine}). \item Observations of the neutral gas distribution (by mapping the Lyman-$\alpha$ emission) in low-redshift galaxy cluster members will clarify the efficiency with which ram-pressure stripping removes the gas from galaxies and the role of the environment in quenching star formation. 
These observations will be crucial to understand how and when the red sequence of galaxies is assembled in dense environments. These observations will be key in interpreting high redshift observations, where currently the Lyman-$\alpha$ is more easily accessible (Section \ref{subsec:ram_pressure}). \item By observing statistical samples of supernovae in the UV it will be possible to characterize the progenitor population of core-collapse supernovae, providing the initial conditions for explosion models and allowing the community to progress in the understanding of the explosion mechanism of stars, as well as the final stages of stellar evolution (Section \ref{subsec:transients}). \item By targeting populations of accreting white dwarfs in globular clusters it will be possible to constrain the evolution and fate of these stars and investigate the properties of the most compact systems with the shortest orbital periods which are expected to be the brightest low-frequency gravitational wave sources. The possibility will also be explored that accreting white dwarfs are progenitors of Type Ia supernovae, which are fundamental sources to constrain cosmological distances and the current models for dark energy (Section \ref{subsec:binaries}). \end{itemize} A UV-optimized telescope (wavelength range $\lambda \sim $ 90 - 350 nm), equipped with a panoramic integral field spectrograph with a large field of view (FoV $\sim$ 1 $\times$ 1 arcmin$^2$), with medium spectral (R $= 4000$) and spatial ($\sim$ 1" -- 3") resolution will allow the community to simultaneously obtain spectral and photometric information of the targets, and tackle the science questions presented in this paper (Section \ref{sec:performances}). The information-rich nature of the datasets provided by such an integral field spectrograph will represent a resource with considerable legacy value for the scientific community. 
Additionally, these observations will open up completely new areas of the parameter space, allowing the proposed mission to have a great potential for serendipitous discoveries. In the coming years, when most of the new large facilities such as the Extremely Large Telescope (ELT) and the James Webb Space Telescope (\textit{JWST}) will focus on the infrared (IR) wavelength range, and the Hubble Space Telescope (\textit{HST}) will not be operational anymore, a mission in the UV with the capability of observing spectroscopically large areas of the sky will be unique. In synergy with the Atacama Large Millimeter Array (ALMA), the ELT instruments, and Square Kilometer Array (SKA), but also with other space-based missions such as the Advanced Telescope for High Energy Astrophysics (\textit{Athena}) and the Laser Interferometer Space Antenna (\textit{LISA}) it will allow us to push further our current understanding of the Universe. We present the main science themes to be addressed in the coming decades in Section \ref{sec:sc}; we compare the characteristics of the proposed straw-man mission and instrument with other UV space missions in Section \ref{sec:uniqueness} and its synergies with future facilities in Section~\ref{sec:synergies}; finally in Section \ref{sec:performances} we sketch the high-level technical characteristics of the proposed instrument and address potential technological challenges. \section{Science cases} \label{sec:sc} \subsection{Unveiling large-scale structures in emission at $z\lesssim1.7$} \label{subsec:cgm_fab} The current paradigm of large-scale structure formation predicts the presence of an intricate net of gaseous filaments connecting galaxies (e.g., \citealt{White1987}, \citealt{Bond1996}). 
The existence of this cosmic web, also known as intergalactic medium (IGM; \citealt{meiksin09}), is until now confirmed only indirectly by observations of the large-scale structures traced with galaxy surveys at low redshift and by studies of the Lyman-$\alpha$ (Ly$\alpha$) forest in absorption against background quasars. Direct imaging of the cosmic web, probing its properties and evolution through cosmic history, will represent a major breakthrough for cosmology. Directly detecting the IGM in this way, is predicted to be very challenging (surface brightness in Ly$\alpha$ predicted to be SB$_{\rm Ly\alpha}\sim10^{-19}-10^{-20}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$; \citealt{GW96,Bertone2012,Witstok2019}) because of the expected low densities for such gas ($n_{\rm H}\lesssim0.01$~cm$^{-2}$) and the budget of ionizing photons in the ultraviolet background (UVB; e.g., \citealt{hm12}). A direct detection of the IGM appears to be so far elusive even with top-notch current facilities on 10m class telescopes (e.g., \citealt{Gallego2018,Wisotzki2018}), such as the Multi Unit Spectroscopic Explorer (MUSE; \citealt{Bacon2010}) and the Keck Cosmic Web Imager (KCWI; \citealt{Morrissey2012}). Ground-based instruments can target the Ly$\alpha$ emission only above the atmospheric cut-off ($z\gtrsim1.7$) and thus fight against a strong cosmological surface brightness dimming that scales as $(1+z)^{-4}$. Astronomers have tried to bypass these limitations by searching the IGM signal around quasars. A quasar is expected to act as a flashlight, photoionizing the surrounding medium out to large distances. The ionized gas would then recombine, emitting as the main product hydrogen Ly$\alpha$ photons in copious amounts (e.g. \citealt{Rees1988, hr01}). 
The Ly$\alpha$ glow around quasars should then be boosted up to SB$_{\rm Ly\alpha}>10^{-19}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$, and therefore be within the reach of state-of-the-art instruments (\citealt{Cantalupo2005,kollmeier10}). \begin{figure} \centering \includegraphics[width=\textwidth]{Fig1_NEW} \caption{An example of current large-scale structures detected in Ly$\alpha$ around quasars with VLT/MUSE at high redshift: the enormous Ly$\alpha$ nebula (ELAN) around the $z=3.164$ quasar SDSSJ~1020+1040 (figure adapted from \citealt{FAB2018}). {\bf (A)} ``optimally-extracted'' Ly$\alpha$ surface brightness map obtained after subtraction of the quasar point-spread-function and continuum. The black contours indicate the isophotes corresponding to a signal-to-noise ratio of ${\rm S/N}=2,\,4,\,10,\,20,\,30,\,50$, and $100$. This image reveals an extremely bright nebula (SB$_{\rm Ly\alpha}\sim 10^{-17}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$) extending on the NW side of the quasar. Additional four strong Ly$\alpha$ emitters (diagonal crosses) are associated with the quasar, and the nebular emission. Two of these sources have been spectroscopically confirmed as AGN, making this system the third known quasar triplet at high $z$. {\bf (B)} flux-weighted velocity-shift map with respect to the systemic redshift of the quasar obtained from the first-moment of the flux distribution. A velocity shear between the SE and NW portion of the nebula is evident. The transition region is referred to as the ``Boundary''. {\bf (C)} velocity dispersion map obtained from the second-moment of the flux distribution. Regions of higher dispersion ($\sigma_{\rm Ly\alpha}\approx430$~km~s$^{-1}$) are visible in proximity of the three AGN, but overall the Ly$\alpha$ nebula shows quiescent kinematics ($\sigma_{v} < 270$~km~s$^{-1}$). 
{\bf (D)} Each cut-out image (same size as A, B, and C) shows the surface brightness map of the ELAN within a 3.75~\AA\ layer ($3\times$ MUSE sampling) in the wavelength range 5058~\AA~$\lesssim \lambda \lesssim$~5084~\AA\ (from left to right). In all of the panels (A, B, C, D) the large white cross indicates the position of the quasar prior to PSF subtraction. Currently, similar large-scale emission cannot be probed at low $z$ ($z\lesssim 1.7$) because of the absence of appropriate facilities.} \label{fig:cgm_Fabrizio} \end{figure} Following this idea, several works targeted extended Ly$\alpha$ emission around high-redshift quasars to constrain the physical properties of the diffuse gas phases out to intergalactic scales around individual objects (\citealt{TPW_2017}, \citealt{HuCowie1987,heckman91a,Moller2000b,Weidinger04,Weidinger05,Christensen2006,cantalupo14,martin14a,fab+16,Farina2017,Farina2019}). For example, at $z\sim3$, it is now possible with MUSE to easily ($\sim 1$~hour on source; surface brightness limit of SB$_{\rm Ly\alpha}^{1~{\rm arcsec^2}}\sim10^{-18}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$) uncover the emission within 50 projected kpc from the targeted quasar, and to detect it up to distances of $\sim 80$~projected kpc (\citealt{Borisova2016,FAB2019}). Notwithstanding these achievements when targeting individual quasars, it is evident that detections of diffuse emission at intergalactic distances ($>100$~kpc) at high $z$ are favored when additional active companions are present in close proximity (\citealt{hennawi+15,FAB2019,FAB2018}), or much more sensitive observations are conducted ($>10$~hours). 
Recent studies in the literature started to show new approaches in unveiling the IGM emission, passing from the observations of individual quasars to (i) short (\citealt{Cai2018, FAB2019b}) or extremely long integrations ($\gg 40$~hours; \citealt{Lusso2019}) of multiple high-redshift quasars, or overdensities hosting quasars (\citealt{Cai2016}), and (ii) stacking of ultra deep observations of several galaxies (\citealt{Gallego2018,Wisotzki2018,Leclercq2020}). At face value, the detection of large-scale gas in emission still relies on the presence of active galaxies. In spite of the aforementioned difficulties in observing the IGM, state of the art integral field unit (IFU) spectrographs, by pushing the sensitivity of observations to SB$_{\rm Ly\alpha}\sim10^{-19}-10^{-20}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$, started to open new opportunities for the study of the gas distributed over large scales at high redshift. To study the large scale structures at the more accessible low-$z$ Universe, a new-generation space mission optimized for UV observations is needed to complement the ongoing efforts at high redshift. By routinely detecting the IGM in emission at $z \lesssim 1.7$, this instrument will allow us to achieve the following science goals: \begin{itemize} \item{Connect the dots at low redshift: test our current view of the matter distribution at low $z$ by detecting the cosmic web in emission (through rest-frame UV emission lines) surrounding galaxies and quasars. 
This is crucial in our understanding of structure formation as we can currently only rely on galaxies as tracers of the distribution of large-scale structures at low $z$ \citep{Malavasi2017}.} \item{Directly study the properties (e.g., density, metallicity) of the IGM in emission, complementing the information acquired from studies of the Ly$\alpha$ forest.} \item{Provide measurements of the UV background at low $z$ thanks to multiple detection of the IGM in emission, building up independent constraints from the statistics of the IGM in absorption. The UV background is a critical quantity used in all simulations and models of structure formation, but it is still poorly constrained observationally. Its precise determination is key for our understanding of galaxy formation and evolution (e.g., \citealt{KhaireSrianand20192019}, \citealt{Faucher-Giguere2020}).} \item{Study of the gas kinematics within the large-scale structures surrounding galaxies, allowing a direct characterization of the galaxy angular momentum build-up through cosmic time.} \end{itemize} \subsubsection{The need for space-based UV observations} Recent works show that the most efficient and effective way to detect emission extending to hundreds of kpc scales around high-$z$ quasars and galaxies is the use of wide-field IFU instruments (e.g., Figure \ref{fig:cgm_Fabrizio}; \citealt{FAB2018}). Space-based, wide-field spectroscopic observations in the wavelength range $\lambda \sim 90 - 350$ nm will allow us to achieve our scientific goals. Specifically, a space-based instrument is needed to observe the Ly$\alpha$ transition at low $z$ ($z\lesssim1.7$), where the cosmological surface brightness dimming is less severe. Assuming similar properties for the gas, it will be possible to target sources with $16\times$ lower surface brightness at $z \sim 0.5$ with respect to $z\sim3$. 
Furthermore, a wide field of view (1 $\times$ 1 arcmin$^2$, which corresponds to hundreds of kpc at $z \lesssim 1.7$) is needed to efficiently image the large-scale structures that subtend large areas on the sky. As we are interested in detecting large scale structures with very low surface brightness, large pixels (e.g. $\sim 1$~arcsec) are preferred to enhance sensitivity (as in e.g. KCWI; \citealt{Morrissey2012}). \subsection{Probing the emission of the Circumgalactic Medium around galaxies} \label{subsec:cgm_celine} Understanding the complex mechanisms regulating galaxy formation is one of the main questions today in cosmology and astrophysics. The question of how galaxies gather gas to sustain star formation is of particular interest, as it sheds light on the fact that the star formation rate (SFR) has been declining from $z \sim 2$ while diffusely distributed hydrogen is still the dominant component for the total baryonic mass budget (as compared to hydrogen in stars, \citealt{Madau2014}). The outflowing and accreting gas interacts around galaxies on scales up to hundreds of kpc (the Circumgalactic Medium, CGM). Studying the CGM is fundamental to understand the cosmic baryon cycle (\citealt{Steidel2010}, \citealt{Shull2014}, \citealt{TPW_2017}, \citealt{Peroux2020}) and it will provide key constraints on the question of galaxy formation and evolution. Absorption spectroscopy has already provided insights on the distribution and the chemical composition of the CGM gas from a statistical point of view, given that typically only one line of sight per galaxy can be studied due to the scarcity of background quasars in the vicinity of galaxies (\citealt{Noterdaeme2012}, \citealt{Pieri2014}, \citealt{Quiret2016}, \citealt{Rahmani2016}, \citealt{Krogager2017}, \citealt{Augustin2018}, \citealt{Hamanowicz2020}). Therefore, mapping the CGM in emission is the important next step to reach a full understanding of these complex regions. 
\begin{figure} \centering \includegraphics[width=0.7\textwidth]{lya_z07} \caption{Example of a mock Ly$\alpha$ CGM halo around a $z = 0.7$ galaxy, from RAMSES hydrodynamical cosmological simulations \citep{Augustin2019}. The Ly$\alpha$ line is at 206 nm~ and 1" $\sim$ 7 kpc at the redshift of this source. The CGM surface brightness levels shown in this map will be within reach of the proposed UV instrument.} \label{fig:cgm_celine} \end{figure} \subsubsection{The need for space-based UV observations} To achieve the science goals presented above it is key to detect and map different lines (e.g. Ly$\alpha$, CIV, OVI, CVI, OVIII) arising from the CGM of low-redshift galaxies. The signal (surface brightness) scales with $(1+z)^{-4}$ so that lower redshift observations are considerably easier in turn requiring space-born UV facilities to measure rest-frame UV lines. IFU-like capabilities are key for obtaining maps and kinematic reconstructions of the gas in the halos of galaxies. A field of view of $\sim$ 1 $\times$ 1 arcmin$^2$ will cover most of the CGM region of a galaxy at $z\sim1$. Modest spatial resolution (to increase sensitivity) and spectral resolution (R $\sim$ 4000) are sufficient. Such a program will be complementary to similar high-redshift projects that make use of extremely large ground-based telescopes (i.e. ELT/HARMONI). An example of mock Ly$\alpha$ CGM halo at $z=0.7$ from dedicated RAMSES cosmological hydrodynamical simulations is shown in Figure \ref{fig:cgm_celine} \citep{Augustin2019}. \subsection{Ram-pressure stripping and quenching in galaxy clusters} \label{subsec:ram_pressure} The existence of a well defined separation between massive, red, early-type, quiescent galaxies and blue, late-type, actively star-forming objects is a key leverage for the current modeling of galaxy formation and evolution. 
Nowadays, ``normal'' galaxies are thought to assemble their mass through a secular process of star formation in a relatively steady state, forming a ``Main Sequence'' up to high redshift (e.g. \citealt{Daddi2007}, \citealt{Speagle2014}) and following a tight gas-star formation rate (SFR) density relation (``KS'' relation, \citealt{Schmidt1959}, \citealt{Kennicutt1998}). Deviations from this dynamic equilibrium may occur in short starbursting events (typically associated with major mergers, \citealt{Sanders1996}) or due to the cessation of star formation (``quenching''). Both these deviations from equilibrium are poorly understood: why do galaxies suddenly ignite the formation of thousands of stars per year? Why do they stop forming stars? The deviations from the Main Sequence and the KS relation might be connected: the merger of gas-rich objects may first result in a burst of star formation, followed by a drop of the SFR and the subsequent quenching of the galaxy. The results of this process might be the compact, quiescent galaxies observed at $z \lesssim 2$ (e.g. \citealt{Cimatti2008}, \citealt{Toft2014}). However, it is still debated what is the mechanism responsible for stopping star formation and how galaxies are maintained quiescent for several Gyrs. An additional piece of the puzzle is the correlation between galaxy color (and morphology) with both local environment and stellar mass (\citealt{Dressler1980}, \citealt{Bell2004}, \citealt{Peng2010}). The environmental dependence seems to indicate that not only internal, but also external, physical processes play a role in shaping the star formation of galaxies at all cosmic ages (\citealt{Boselli2006}, \citealt{Blanton2009}). Internal mechanisms such as feedback from supernovae and active galactic nuclei (AGN, e.g. 
\citealt{Hopkins2012}, \citealt{Croton2006}), dynamical stabilization (\citealt{Martig2009}), and gravitational heating \citep{Johansson2009} are deemed responsible for suppressing and quenching star formation at all densities. Environmental processes, in the form of ram-pressure or viscous stripping and tidal interactions \citep{Gunn1972}, together with the consumption of the galaxy's gas reservoir by star formation without further replenishment from the cosmic web (because the accumulated hot plasma induces shocks and heats up the infalling gas; e.g. \citealt{Larson1980}, \citealt{Dekel2006}, \citealt{Peng2015}), seem to be key players in shaping the observed color bimodality within galaxy clusters and groups, especially at the faint end of the galaxy luminosity function. In the local Universe, the smoking gun of the role of ram-pressure stripping in quenching star formation is the disturbed gas content of galaxy cluster members. Radio surveys at 21 cm revealed the HI deficiency of spiral galaxies in overdense environments, especially in the vicinity of the cluster core (\citealt{Haynes1985}), and more recently a molecular gas deficiency has also been reported (\citealt{Fumagalli2009}, \citealt{Boselli2014}). Furthermore, these galaxies often show disturbed gas morphologies and tails visible both in H$\alpha$ (Figure \ref{fig:ram_pressure_anita}) and in the UV continuum (e.g. \citealt{Gavazzi1985}, \citealt{Fumagalli2014}, \citealt{Fossati2016}). However, at high redshift ($1 \lesssim z \lesssim 1.5$) there is still no consensus on the influence of the environment on galaxies' gas content (\citealt{Aravena2012}, \citealt{Dannerbauer2017}, \citealt{Coogan2018}). This is mainly because detections are either lacking or limited to the most gas-rich members. Galaxies undergoing ram-pressure stripping are ideal laboratories to constrain the efficiency with which gas can be removed and star formation is quenched.
Observations of galaxy clusters and groups at different redshifts are therefore key to understanding how and when the red sequence of galaxies is assembled in dense environments. \begin{figure} \centering \includegraphics[width=0.9\textwidth]{fig_ram_pressure} \caption{Example of a local cluster member undergoing ram-pressure stripping observed with VLT/MUSE: ESO137-001 at $z = 0.016$ \citep{Fumagalli2014}. \textbf{Left}: HST/ACS image in the F475W filter with, superposed, the MUSE field of view at the two locations targeted by the observations. \textbf{Right}: RGB color image obtained by combining images extracted from the MUSE data cube in three wavelength intervals ($\lambda = 500 - 600$~nm for the B channel, $\lambda = 600 - 700$~nm for the G channel, and $\lambda = 700 - 800$~nm for the R channel). A map of the H$\alpha$ flux is overlaid in red using a logarithmic scale, revealing the extended gas tail that originates from the high-velocity encounter of ESO137-001 with the intra-cluster medium.} \label{fig:ram_pressure_anita} \end{figure} \subsubsection{The need for space-based UV observations} To study quenching mechanisms in dense environments and the origin of the galaxy red sequence, a space-based instrument with a large field of view ($\sim$ 1 $\times$ 1 arcmin$^2$), exquisite sensitivity, and a spectral coverage $\lambda \sim 90 - 350$~nm is needed. This part of the spectrum includes the Ly$\alpha$ emission line at redshift $z \lesssim 1.7$. Ly$\alpha$ traces the neutral gas, which is expected to be the bulk of the material stripped by ram pressure, and is predicted to be $> 10\times$ brighter than H$\alpha$ (e.g. \citealt{Scarlata2009}). Mapping the Ly$\alpha$ emission with a wide-field spectrograph would allow us to trace stripped gas out to large distances from the galaxy ($\sim$ 500 kpc at $z = 1$) and to unveil gas tidal tails $\gtrsim 10\times$ fainter than the H$\alpha$ ones, therefore targeting also the less massive cluster members.
The detection of other emission lines (e.g. CIII, OI, HeII, MgII) would allow us to constrain the density, temperature, and metallicity of the ionized gas. The comparison of the Ly$\alpha$ and H$\alpha$ fluxes would also give information about the powering mechanisms of Ly$\alpha$ (e.g. shocks, star formation, cooling), which is still an open issue. Are cluster members mainly fast rotators, as expected if their star formation has been quenched by the interaction with the intra-cluster medium? Or are more violent interactions \citep{Toloba2011} responsible for the stripping of their gas? A resolution of $R \sim 4000$ would enable us to probe the kinematics of stars and gas simultaneously, allowing such questions to be thoroughly addressed. The stripped gas is expected to be mainly in the cold atomic phase, but it could be heated and change phase once it interacts with the hot gas confined within the potential well of the cluster. Complementary multi-frequency observations will allow us to reach a comprehensive picture of these phenomena: the molecular phase is detectable through CO (ALMA), SKA will target the cold HI gas at 21 cm, the hot gas phase is visible in the X-rays (\textit{Athena}), while the ionized gas phase will be probed by ELT instruments. \subsection{Supernovae} \label{subsec:transients} Core-collapse supernovae are the most common form of death for massive stars \citep{li_nearby_2011}. These events are the end stage of massive stellar evolution and are responsible for the chemical enrichment of their host galaxies. While many supernovae have been discovered and observed, the characteristics of the stars that explode are still a mystery. Observationally it has been established that massive stars ($>8M_\odot$) end their lives as core-collapse supernovae.
However, theoretical models have not succeeded in reproducing the observed explosion properties and, more importantly, cannot self-consistently explode the star without an artificially induced ignition \citep{burrows_colloquium:_2013}. The progenitors of core-collapse supernovae have been positively identified for a handful of events \citep{smartt_death_2009,smartt_observational_2015}. The progenitors' characteristics are one of the major open questions that must be addressed in order to advance our understanding of the explosion mechanism of stars, as well as of the final stages of stellar evolution. The most robust technique for identifying progenitors is with high-resolution images taken before the explosion. This technique has been successfully employed a handful of times, using primarily \textit{HST} images taken before the events \citep{smartt_detection_2004,maund_yellow_2011,van_dyk_supernova_2017}. Some core-collapse supernovae seem to originate from red supergiant stars, but there is also an example of a Type IIb supernova originating from a yellow hypergiant \citep{maund_yellow_2011, bersten_type_2012}. Two limitations prevent this technique from significantly advancing the field in the future: first, an image of the region prior to the explosion is needed; second, it is necessary to wait for $5-10$ years until the supernova fades to below the progenitor's luminosity in order to confirm that the star terminally exploded and disappeared. Even when the second criterion is satisfied, some doubt remains, as the star could be enshrouded in high-extinction dust. Given these limitations, pre-explosion imaging is unlikely to increase the number of identified progenitors by orders of magnitude in the future. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{fig} \caption{Constraints on radius and ejecta velocity (energy per unit mass of the explosion). In the inset we show a simulated light curve as it would appear in UVOT-UVW2 and SDSS-r.
The simulation is of an $R=10^{13}$ cm progenitor observed with $\sim1$ day cadence. Note the faster rise of the UV light curve. The contours represent 68\% and 95\% of the probability of the fit to the UV and visual bands separately. The UV is clearly more sensitive to the progenitor's radius than visual bands.} \label{fig:shock_cooling_Adam} \end{figure} The solution is to use semi-analytic models which have been developed in recent years \citep{waxman_grb_2007,nakar_early_2010,rabinak_early_2011,sapir_uv/optical_2017}. These models relate the observed light curve of a core-collapse supernova to the progenitor's radius and the velocity of the ejecta (energy per unit mass of the explosion). They can be applied to light curves of events which do not have pre-explosion images, and therefore have the potential to increase by orders of magnitude the number of identified progenitors. The hot ejecta ($>10^4$ K) cools rapidly during the first days after the explosion. Because at UV and optical wavelengths the blackbody spectrum is in the Rayleigh-Jeans regime, the UV light curve evolves more rapidly than at longer optical wavelengths (e.g. R band). Shock-cooling models can only be applied to data taken during the first few days after the explosion, therefore the rapid evolution of the UV light curve is critical for constraining the progenitor's properties. \citet{rubin_exploring_2017} showed that visual-band wavelengths cannot constrain the progenitor's radius meaningfully, but that UV coverage at moderate cadence ($0.5-1$ day) can constrain the progenitor's radius to 20\%. With a significant number of supernovae observed in the UV, it will be possible to measure the progenitor population of core-collapse supernovae. These measurements will provide the initial conditions for explosion models, and will provide benchmarks against which to design and test these simulations. UV spectroscopy will also play a major role in understanding the surface composition of the progenitors.
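The faster UV evolution can be illustrated with a toy cooling-photosphere calculation (a schematic sketch, not the calibrated models of \citealt{rabinak_early_2011}): for a photospheric temperature falling as $t^{-1/2}$ and a linearly expanding radius, the band flux $\propto R^2 B_\lambda(T)$ peaks days earlier in the UV than in the optical. All numbers below (initial temperature, radius, expansion velocity, band wavelengths) are illustrative assumptions, not fitted values.

```python
import math

H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # SI: Planck, speed of light, Boltzmann

def planck(lam, T):
    """Blackbody spectral radiance B_lambda(T) (arbitrary normalisation)."""
    return 1.0 / (lam**5 * (math.exp(H * C / (lam * KB * T)) - 1.0))

def toy_flux(t_day, lam, T0=3.0e4, R0=1.0e13, v=1.0e9):
    """Band flux ~ R(t)^2 * B_lambda(T(t)) for an expanding, cooling photosphere.
    T ~ t^(-1/2) and R = R0 + v*t (cgs) are schematic, illustrative choices."""
    T = T0 * t_day ** -0.5
    R = R0 + v * t_day * 86400.0
    return R**2 * planck(lam, T)

times = [0.25 * k for k in range(1, 41)]    # 0.25 to 10 days after explosion
uv  = [toy_flux(t, 200e-9) for t in times]  # ~UVW2-like band
opt = [toy_flux(t, 620e-9) for t in times]  # ~r-like band

t_peak_uv  = times[uv.index(max(uv))]       # UV peaks after ~2-3 days
t_peak_opt = times[opt.index(max(opt))]     # optical band is still rising later
```

In this sketch the UV flux peaks days before the optical one, and the UV-to-optical flux ratio declines monotonically as the photosphere cools, which is why early, high-cadence UV points are the ones that pin down the progenitor's radius.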
It has been shown that many supernovae have circumstellar material surrounding the star from before the explosion \citep{gal-yam_wolf-rayet-like_2014,yaron_confined_2017,khazov_flash_2016}. At the moment of first light, this material becomes highly ionized by the tremendous luminosity and temperature of the first photons of the explosion. After a timescale of hours to days, the bulk of the supernova ejecta sweeps up this material. During this window strong, highly ionized emission lines have been observed (e.g. OIII, HeII). These can be related back to the surface composition of the progenitor star in the last period before it exploded. \citet{groh_early-time_2014} showed that the most informative lines are in the UV, e.g. CIV $\lambda1548-1551$, HeII $\lambda1640$, and NIV $\lambda1718$. \subsubsection{The need for space-based UV observations} To achieve our science goals we will need photometry of magnitude 20 sources with a cadence of 0.5 -- 1 day. By simultaneously obtaining the spectra of our targets (with magnitudes down to 20) with a signal-to-noise ratio of 10, we will measure the temporal evolution of the UV flux, and the strength and equivalent widths of the transient emission lines CIV $\lambda1548-1551$, HeII $\lambda1640$, and NIV $\lambda1718$, to achieve the science goals mentioned above (Figure \ref{fig:shock_cooling_Adam}). \subsection{Accreting white dwarfs in globular clusters: testing the models of compact binary evolution} \label{subsec:binaries} Accreting white dwarfs (WDs), namely binaries in which a WD accretes from a main-sequence star or a degenerate companion, are a strategic tool to probe the physical properties of the Universe and to test fundamental physical theories. The thermonuclear ignition of WDs following the interaction with a binary companion results in Type~Ia supernovae (SNe\,Ia), which are fundamental yardsticks to constrain cosmological distance scales \citep{candles} and the existence of dark energy \citep{riess,perlmutter}.
Moreover, the most compact systems with orbital periods below one hour are among the brightest known low-frequency gravitational wave sources and will be used to verify the performance of the space-based gravitational wave mission \emph{LISA} and to calibrate the detector for future gravitational wave source discoveries \citep{Thomas+2018}. It is therefore critical to understand the evolution and final fate of accreting WDs. Currently, there are several significant discrepancies between the predictions of population synthesis models and the observed properties of accreting WDs. In particular: (i) the interplay between the angular momentum loss mechanisms driving the evolution of accreting WDs in different orbital period regimes is poorly understood \citep[e.g.][]{Schreiber+2016,Belloni+2020}; (ii) the evolutionary path followed by the most compact systems is unknown \citep[e.g.][]{Green+2018}; (iii) the final fate of accreting WDs and the pathway leading to SN\,Ia explosions is still not clear \citep[see e.g.][for a review]{Maoz+2012}. Solving these discrepancies is essential before the theoretical models can be sensibly applied to more complex binaries, such as black hole and neutron star binaries, X-ray transients, and SN\,Ia progenitors. Accreting WDs are predicted to be numerous in globular clusters ($\simeq 100-200$ per cluster, \citealt{Ivanova+2006,Knigge2012}). Globular clusters (GCs) are therefore a unique laboratory in which to carry out statistical binary population synthesis studies for populations of accreting WDs with known distance, metallicity, and age \citep{Knigge2012,Belloni+2016}. By observing populations of accreting WDs in GCs we will explore the following science cases: \begin{itemize} \item Constrain the rates of orbital angular momentum loss. To test the predictions of the current models, measurements of the angular momentum loss rates at different orbital periods ($P_\mathrm{orb}$) are needed.
However, the long timescale over which the orbital period changes typically prevents direct measurements from being obtained. A proxy for the mean mass accretion rate and, in turn, for the angular momentum loss rate is the WD effective temperature ($T_\mathrm{eff}$, \citealt{TowBild, Bildsten+2006}), as it is determined by the compressional heating of the accreted material \citep{Sion1995,Townsley+2004}. While several thousand accreting WDs are known, accurate temperatures have been measured for only $\simeq 80$ systems, obtained from ultraviolet \textit{HST} observations \citep{Bildsten+2006,Dean_and_Boris2009, Pala}. In this sample, only systems with a period $70\,\mathrm{min} < P_\mathrm{orb} < 150\,\mathrm{min}$ show an angular momentum loss in agreement with model predictions, while all the others show discrepancies of more than one order of magnitude with the theory (Fig.~\ref{fig:binaries_Anna}). \item The formation of the most compact systems. Two main populations of accreting WDs are known: (1) systems with main-sequence and brown-dwarf donors (cataclysmic variables, CVs), with orbital periods in the range $60\,$min\,$ \lesssim P_\mathrm{orb} \lesssim 2\,$d, and (2) systems with helium stars or helium-core WDs as donors (AM\,CVn stars), with orbital periods $P_\mathrm{orb} \lesssim 60\,$min. The formation of AM\,CVn stars is still poorly understood and evolutionary models currently predict different formation scenarios \citep{Nelemans+2010}. In particular, it has been suggested that AM\,CVn stars could descend from CVs in which the donor is nuclear evolved, i.e. enriched in CNO-processed material. To observationally constrain the different models it is key to study the chemical composition (e.g. N/O and N/C ratios) of the donor \citep{Nelemans+2010}.
Establishing the fraction of CVs with evolved donors could provide an upper limit on the number of AM\,CVn stars that are expected to form through this channel \citep[see e.g.][]{Pala+2020}, thus yielding a valuable observational test for the formation of these compact systems. \item Accreting WDs and SN\,Ia progenitors. Both CVs and AM\,CVns are intimately connected to the formation of SNe\,Ia. In fact, CVs with nuclear evolved donors undergo a phase of stable hydrogen shell burning during which the WD mass grows (possibly to the Chandrasekhar limit), and AM\,CVns accrete He-rich material from their degenerate donors, potentially triggering a SN\,Ia via the double-detonation mechanism \citep{Bildsten+2007,Fink+2010}. Determining the masses of these systems through a fit of the UV spectrum provides firm statistical constraints for the above scenarios and could finally confirm the potential of these systems as SN\,Ia progenitors. The rotation rates of accreting WDs are also a critical parameter in the pathway leading to SN\,Ia explosions. Absorption features from the photosphere of the WD accretor, which are only cleanly detected in the UV, can be used to measure the rotational velocity and determine whether the WD is spun up by accretion, allowing the WD to exceed the Chandrasekhar limit without triggering the SN explosion. \end{itemize} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{binaries_Anna} \caption{Effective temperatures of short-period (cyan) and long-period (green) CV WDs, AM\,CVns (pink) and systems with evolved donors (blue) \citep{Bildsten+2006,Dean_and_Boris2009,Pala}. The WD $T_\mathrm{eff}$ (right y-axis) directly translates into a measurement of the average mass accretion rate ($\langle\dot{M}\rangle$, left y-axis), which is a measurement of the angular momentum loss rate in the system.
While the observations of short-period CVs (cyan) agree reasonably well with the theoretical evolutionary tracks \citep{Bildsten+2006,Yungelson2008,Nelemans+2010,Pala}, long-period CVs (green), AM\,CVns (pink) and systems with evolved donors (blue) are poorly studied, and the few systems observed uncover major discrepancies between observations and models.} \label{fig:binaries_Anna} \end{figure} \subsubsection{The need for space-based UV observations} The population of accreting WDs in GCs provides a statistically significant sample of systems spanning the whole orbital period distribution and will allow us to carry out stringent tests of the current models of compact binary evolution \citep[e.g.][]{Knigge+2002}. Accreting WDs have typical temperatures $T_\mathrm{eff} \gtrsim 10\,000\,\mathrm{K}$ and their spectral energy distribution peaks in the ultraviolet. In other wavelength domains, the emission is dominated by the disc (in the optical) or by a boundary layer at the WD surface (in the X-rays). Space-based ultraviolet spectroscopy is therefore the only method to access and fully characterize the physical properties of these stars \citep[e.g.][]{Szkody2002,Boris2006,BDPav,Pala}. Moreover, the majority of main-sequence and red giant stars are cool ($T_\mathrm{eff} < 7000$~K) and their emission peaks in the optical and/or in the near-infrared. For this reason, in the UV wavelength range GCs appear vastly less crowded than in the optical (see e.g. fig.~1 from \citealt{Knigge+2002}). The proposed UV IFU would provide the possibility to (i) deblend the denser environments and study those sources that would be inaccessible to optical studies owing to crowding and (ii) obtain, in one shot, a spectrum of all the detectable UV sources in the field of view.
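The step from a measured $T_\mathrm{eff}$ to a mean accretion rate can be sketched with the commonly quoted compressional-heating scaling of \citet{Townsley+2004}, $T_\mathrm{eff} \simeq 1.7\times10^4\,\mathrm{K}\,(\langle\dot{M}\rangle/10^{-10}\,M_\odot\,\mathrm{yr}^{-1})^{1/4}\,(M_\mathrm{WD}/0.9\,M_\odot)$. The exact coefficient depends on the WD mass and composition, so the snippet below is an order-of-magnitude estimator under that assumed scaling, not a substitute for the full atmosphere-model fit.

```python
def mdot_from_teff(teff_k, m_wd=0.9):
    """Estimate the secular mean accretion rate (Msun/yr) from the WD
    effective temperature (K), inverting the schematic scaling
    Teff ~ 1.7e4 K * (Mdot / 1e-10 Msun/yr)**0.25 * (Mwd / 0.9 Msun)."""
    return 1e-10 * (teff_k / (1.7e4 * (m_wd / 0.9))) ** 4

# A ~15,000 K WD in a short-period CV maps to a few x 1e-11 Msun/yr,
# while a ~30,000 K WD above the period gap implies ~1e-9 Msun/yr.
mdot_cool = mdot_from_teff(15_000)   # ~6e-11 Msun/yr
mdot_hot  = mdot_from_teff(30_000)   # ~1e-9  Msun/yr
```

The steep fourth-power dependence is what makes $T_\mathrm{eff}$ such a sensitive probe: the order-of-magnitude discrepancies between models and observations in Fig.~\ref{fig:binaries_Anna} correspond to temperature offsets that are easily measurable in a UV spectral fit.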
From the knowledge of the GC distance, the WD $T_\mathrm{eff}$ can be measured via a spectral fit to the ultraviolet data with synthetic atmosphere models, thus providing a direct measurement of the angular momentum loss rate in the system \citep{Dean_and_Boris2009}. The spectral fit to ultraviolet data also provides the WD photospheric abundances, which reflect the composition of the donor star, via the detection of strong emission and absorption resonance lines of C, N, O and Si (e.g. N{\sc v} 1240\,\AA, C{\sc iv} 1549\,\AA, and O{\sc i} 1150, 1302\,\AA; \citealt{Boris+2003, Morales+2003,Sion+2006}) which are not accessible in the optical. A single ultraviolet spectrum thus provides information on both the WD accretor and the donor star, offering a unique insight into the composition of the donor and the prior evolution of the system. Such a fit also yields accurate mass determinations, thus providing firm statistical constraints for the double-detonation scenario and the mass growth in CVs, finally confirming the potential of these systems as SN\,Ia progenitors. Finally, rotation rates of accreting WDs have so far been measured only in a handful of systems \citep[e.g.][]{sionetal94-1, longetal04-1}. A resolving power of $\simeq 4000$ is sufficient to measure rotation rates $v \sin i \gtrsim 100\,$km/s, thus providing robust observational constraints on the response of WDs to the accretion of mass and angular momentum. \section{Comparison with other space instruments} \label{sec:uniqueness} A wide-field, IFU-like instrument on board a space telescope optimized for UV observations ($\lambda = 90 - 350$ nm) will allow the community to simultaneously obtain photometric and spectroscopic observations with exquisite sensitivity in a wavelength regime that is not accessible from the ground, making it a unique instrument to tackle the scientific questions presented in Section \ref{sec:sc} as well as many other topics.
There are currently only two spectrographs on board \textit{HST} that cover a wavelength range similar to the one that we propose (COS and STIS); one such spectrograph was launched on the stratospheric balloon FIREBall, and two new ones are currently being proposed to be on board CETUS and the \textit{LUVOIR} telescope (LUMOS). In the following, we discuss why these spectrographs are not suitable for achieving the science cases described in Section \ref{sec:sc}. \textbf{\textit{HST}/COS}: the wavelength coverage of this spectrograph ($\lambda = $ 90 - 320 nm) is similar to the one that we propose and it was designed to obtain spectroscopy of faint point-like sources (e.g. stars, quasars) with a resolving power R $\sim$ 1500 -- 24000 \citep{cos2012}. However, the extremely small field of view of COS ($2.5''$ diameter) prevents observations of large patches of the sky and the possibility of observing extended objects, a critical requirement for all science cases presented in this document. Some of our science questions overlap with those that motivated COS, but the way we want to address them is substantially different. As an example, the question of whether the cosmic web exists and how baryons from galaxies interact with the surrounding medium can be tackled by COS by studying absorption spectra of intergalactic gas (e.g. the Ly$\alpha$ forest, highly ionized absorption lines such as OVI, NV), whereas we propose to address this issue by studying the intergalactic medium in emission, through the detection of Ly$\alpha$ halos extended across hundreds of kpc (and possibly other rest-frame UV emission lines). Similarly, COS is suitable to study white dwarfs, cataclysmic variables, and binary stars in the Milky Way. However, the instrument that we propose will open up the new possibility to target these sources in globular clusters, dramatically increasing the statistics and efficiency of the observations.
\textbf{\textit{HST}/STIS}: this spectrograph and imaging camera covers the far- and near-UV wavelength range ($\lambda = $ 115 -- 310 nm, \citealt{stis1998}). Spatially resolved observations can be performed through slitless spectroscopy, but the resolving power (R $\leq 2500$) is too low to achieve our science goals (R $\gtrsim 4000$ is needed). Furthermore, the low throughput of STIS ($\lesssim$ 10\%) only allows observation of the brightest targets (an A0V star at the limiting magnitude V = 20.6 can be observed with a signal-to-noise ratio of 10 in 1 hour on source) and is not enough to reach the faint surface brightness levels required to achieve our goals (e.g. SB$_{\rm Ly\alpha}\sim10^{-19}-10^{-20}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$, see Section \ref{subsec:cgm_fab}). \textbf{FIREBall-2}: this is a multi-object spectrograph operating in the UV and flying on a stratospheric balloon at an altitude of 40 km \citep{Lemaitre2019}. It follows the experiments started with FIREBall-1, aimed at detecting the faint and diffuse emission of the intergalactic medium. It is equipped with a fiber IFU (300 fibers of $8''$ each). Part of the scientific motivation for FIREBall overlaps with the one proposed here (Section \ref{subsec:cgm_fab}), but FIREBall only observes a limited wavelength range (200 - 210 nm) and targets the Ly$\alpha$ emission line at $z \sim 0.6 - 0.7$. We propose to cover a larger wavelength range ($\sim$ 90 -- 350 nm), pushing the observations of Ly$\alpha$ up to $z \sim 1.7$ and starting to bridge the local and high-redshift Universe. Furthermore, the field of view of FIREBall-2 is not contiguous (i.e. the fibers need to be placed in separate positions) and the spatial ($\sim 4''$) and spectral (R $\sim$ 2000) resolutions are not enough to achieve our science goals. The overall throughput of the instrument on FIREBall is $\sim$ 5\%, which is not enough to reach the low surface brightness levels needed by our science goals.
Finally, since this instrument works on a stratospheric balloon, its operations strongly depend on weather conditions and aeronautical rules, and observations can only be performed for a few consecutive nights each time. The possibilities to get the long exposures needed to probe faint emission are therefore limited. These kinds of operational conditions cannot sustain a large demand from the astronomical community, but are key for testing new UV technologies and for space-validating sub-components in a zero-gravity environment. The FIREBall team has also proposed to NASA to fund a new mission, ISTOS \citep{Martin2011}, designed to study the faint UV emission from the circum- and inter-galactic medium, leveraging in particular the development of new detectors with improved capabilities (i.e. higher quantum efficiency and lower noise thanks to the use of EMCCDs). \textbf{\textit{LUVOIR}/LUMOS} and \textbf{CETUS}: these are both UV multi-object spectrographs that have been proposed for future missions. LUMOS will cover the far-UV to visible wavelength range ($\lambda = 100$ -- 1000 nm), is currently under study, and is a candidate instrument for the future \textit{LUVOIR} space mission \citep{France2017}. CETUS instead is a Probe Mission Concept that NASA has selected for consideration \citep{Kendrick2019}. It will enable parallel observations by the UV multi-object spectrograph ($\lambda \sim 180 - 350$ nm) and the near-UV/far-UV camera ($\lambda = 115 - 400$ nm), which will operate simultaneously but with separate fields of view. Thanks to their large fields of view ($2' \times 2'$ LUMOS, and $17' \times 17'$ CETUS) both these instruments will be able to observe hundreds of targets at once. However, the field of view covered by LUMOS and CETUS, as opposed to the IFU instrument that we are proposing, will not be contiguous.
A 100\% coverage of a large field of view to a high depth is critical to efficiently achieve our science goals, such as contiguously mapping the extended emission of the cosmic web and galaxy halos, the extended tidal tails around galaxy cluster members, and globular clusters that host populations of accreting white dwarfs. A discontiguous field of view drastically reduces the efficiency of the observations and the discovery potential. Additionally, compared to a multi-object spectrograph that can only perform pointed observations, the IFU instrument that we propose will provide thousands of spectra with a single pointing and will therefore have a unique potential for serendipitous discoveries. Finally, LUMOS is designed to observe at the diffraction limit and CETUS will have a spatial resolution $\lesssim 0.3''$, conditions that would prevent us from achieving the extremely low surface brightness levels needed by our science cases. Additionally, the \textbf{MESSIER surveyor}, a small UV ($\lambda = 150 - 1000$ nm) space mission designed to explore the Universe down to very low surface brightness levels ($\sim 34 - 37$ mag arcsec$^{-2}$), is under development and has recently received initial funding from the French Space Agency CNES \citep{Valls-Gabaud2017}. Although some of the science drivers of this surveyor overlap with those proposed in Section \ref{sec:sc} (e.g. the detection of the cosmic web in emission), MESSIER will be an imager equipped with a set of filters, whereas we are proposing an IFU instrument that will allow the community to simultaneously obtain photometric and spectroscopic information on the targets. With such an IFU the community could follow up spectroscopically the most promising targets imaged by facilities like MESSIER. Having access to spectroscopy down to such low surface brightness will enlarge the parameter space that can be explored and greatly increase the discovery potential.
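The resolving powers quoted in this comparison translate directly into kinematic and sampling terms via $\Delta v \approx c/R$ and $\Delta\lambda = \lambda/R$; a minimal check, assuming the R = 4000 and 0.5~\AA-per-bin figures used throughout this document:

```python
C_KM_S = 2.998e5  # speed of light [km/s]

def velocity_resolution(R):
    """Velocity width of one resolution element for resolving power R."""
    return C_KM_S / R

def resolution_element_angstrom(lam_nm, R):
    """Width of one resolution element, in Angstrom, at wavelength lam_nm."""
    return lam_nm * 10.0 / R

dv_proposed = velocity_resolution(4000)   # ~75 km/s: resolves v sin i >~ 100 km/s
dv_stis     = velocity_resolution(2500)   # ~120 km/s: too coarse for WD rotation
dlam_lya    = resolution_element_angstrom(206, 4000)  # ~0.5 A at Ly-alpha, z ~ 0.7
```

At R = 4000 one resolution element near 206 nm is about 0.5~\AA, consistent with the spectral sampling adopted for the proposed instrument, while R $\leq$ 2500 smears out the $v \sin i \gtrsim 100$ km/s rotational broadening targeted by the accreting-WD science case.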
\section{Synergies} \label{sec:synergies} In the next two decades all the major facilities are expected to operate at red, IR, and radio wavelengths (e.g. \textit{JWST}, ELT, SKA). A window on the UV will therefore offer a complementary view at shorter wavelengths and will have strong synergies with these facilities. It will be critical to conduct follow-up observations in the UV regime, at a time when \textit{HST} will not be operational anymore. An IFU-like instrument operating in the UV will also provide targets that can then be followed-up with the ELT (e.g. the gas stripped from galaxy cluster members observed in Ly$\alpha$ can be observed in [OII] and H$\alpha$ with ELT/HARMONI providing information on the ionized gas). Furthermore most of the science questions presented here have synergies with future major facilities such as SKA at longer wavelengths (e.g. to detect the HI gas stripped in cluster members and the cosmic web around galaxies) and will benefit from targets identified by \textit{Euclid} and the Rubin Observatory in large field imaging. There are also synergies with ALMA, should it still be operational (e.g. the gas stripped by cluster members might be detected both in the neutral phase through Ly$\alpha$ emission with our proposed instrument and in the molecular phase with ALMA). Finally synergies with \textit{Athena} and \textit{LISA} will enhance the discovery potential (e.g. \textit{Athena} will unveil the hot gas phase stripped by cluster members and present in the circumgalactic medium visible in the X-rays, while \textit{LISA} will detect the gravitational waves produced by the most compact accreting white dwarfs with short orbital periods). \section{Technical concept} \label{sec:performances} In this Section, we present some key design areas for a system compliant with the proposed science theme. We notice that a dedicated space mission is not strictly required and an instrument hosted on another future spacecraft may suffice. 
Finally, we highlight that a system study has not been performed, as this goes beyond the scope of the proposal. Critical technologies are highlighted via a brainstorming process and discussions with relevant experts. Given the main characteristics of the telescope (Table \ref{tab:tech}), a dedicated space mission would fit within an ESA M-class mission. \subsection{Type of Telescope and Instrument} Current space-based facilities are optimized to detect point-like sources such as stars or distant galaxies, rather than extended low surface brightness targets \citep{Trujillo2016}. To reach the surface brightness of $\sim 10^{-19}$ erg s$^{-1}$ cm$^{-2}$ arcsec$^{-2}$ required by our science cases, a telescope with a diameter of $\sim$ 0.9 -- 1.1 m and relatively fast optics (F-number $\sim$ F/5 -- F/15) is needed \citep{Valls-Gabaud2017}. Ideally, the F-number would be even faster, but that would require technologies that currently have a very low technology readiness level (TRL), such as curved detectors. A UV integral field unit (IFU) is the only available option to achieve the science goals of the previous Sections. Part of the technology that has already been developed for the ground-based Multi Unit Spectroscopic Explorer (MUSE) on the Very Large Telescope (VLT), and that is currently being upgraded for BlueMUSE ($\lambda = $ 350 -- 600 nm, \citealt{Richard2019}), can be adapted for this project. Insights into the performance of IFUs in space can also be obtained from \textit{JWST}/NIRSpec \citep{Deshpande2018}, once launched, although NIRSpec will operate at IR wavelengths. The proposed IFU extends the capabilities of this type of instrument toward bluer wavelengths, $\lambda = $ 90 -- 350~nm, not observable from the ground, targeting different science cases. It has a single instrumental setup (i.e. fixed spectral and spatial setup), which simplifies the overall fore-optics.
A rapid response mode is necessary to carry out observations of transients (Section \ref{subsec:transients}). We summarize in Table \ref{tab:tech} the main characteristics of the instrument, together with their reasonable ranges. \begin{table} \centering \caption{Main (range of) parameters of the telescope and instrument.} \renewcommand{\arraystretch}{1.3} \label{tab:tech} \begin{tabular}{c|c} \toprule Aperture & 0.9 - 1.1 m \\ \hline F-number & F/5 - F/15 \\ \hline Field of view & 1 $\times$ 1 arcmin$^2$ \\ \hline \hline Wavelength range & 90 -- 350 nm (preferred) \\ & 100 -- 300 nm (minimum requirement) \\ \hline Spectral resolution & average R = 4000 \\ \hline Spectral sampling & 0.5 \AA~ per spectral bin \\ \hline Spatial sampling & 0.5" $\times$ 1" per spaxel \\ \hline Spatial resolution (FWHM) & 2" -- 3" \\ \hline Throughput & $\gtrsim$ 25\% over the whole wavelength range \\ \bottomrule \end{tabular} \end{table} \subsection{Mass} Due to the complexity and extent of the optics, IFUs tend to be heavy instruments, an important constraint for a space mission. The mass will therefore be one of the main design drivers. Section \ref{subsec:detectors} proposes a possible way to greatly simplify the optics; that solution, however, has a low TRL and requires further development. \subsection{Thermal Control} Another design driver is the thermal environment, as is often the case for optics in space. On one hand, cryogenic temperatures are required to maximize the performance of the detectors. On the other hand, it is preferable to perform UV observations with the optics at a temperature above 260~K to prevent absorption on the reflective surfaces, given that UV systems are sensitive even to mono-layers of contaminants \citep{LuvoirRep}.
An operating temperature in the range 270~K -- 290~K is recommended because it reduces the risks linked to launch shocks and facilitates mission development: testing, alignment, and mirror polishing and figuring can be performed on the ground without cooling facilities, with fewer iterations and ultimately lower complexity. Wavefront stability requires the temperature of the optics to be controlled to within $10^{-2}$ K. A sun-synchronous orbit would be preferable to avoid eclipses and to operate in a more stable environment. However, this is not a strict constraint, and UV observations have been performed extensively by other telescopes (e.g. \textit{HST}) on different orbits. \subsection{Vibrations Control} The wavefront stability also calls for isolation and damping of vibration sources. Dedicated precision mechanisms on the optics or micronewton thrusters, following the heritage of Gaia and LISA Pathfinder \citep{LTP}, are potential disturbance avoidance solutions \citep{NASAmicrovib}. In fact, they remove the need for reaction wheels, which are strong sources of disturbance during operation. \subsection{Materials} In line with decades of warm optics design, the traditional choices for the mirror material are Zerodur\texttrademark~ and ULE\texttrademark~ thanks to their high stability. Silicon Carbide (SiC) is also an option: while it poses some challenges for surface finishing and requires more stringent thermal control, it has significant advantages in terms of controllability thanks to its superior strength. A set of actuators (e.g. piezo-stacks) annealed in the SiC substrate allows wavefront errors to be compensated under several load conditions, including on-ground testing in 1-g. Space-based instruments that are currently operational at UV wavelengths have a throughput $< 25$\% in the wavelength range $\lambda =$ 200 -- 400 nm (e.g. the WFC3 UVIS filters on board \textit{HST}).
To achieve our goals a throughput $\gtrsim 25$\% is needed. Such performance is currently hindered by the drop in reflectivity below 110 nm of the typical mirror coatings (e.g. silver and gold) \citep{LuvoirRep}. Aluminium has in principle a good reflectivity down to wavelengths of $\sim$ 100 nm, but it has a known tendency to oxidize. There are, however, promising studies on this subject (\citealt{Balasubramanian2017}, \citealt{Quijada}) that point to the use of protective layers to prevent aluminum oxidation from degrading the performance. \subsection{Detectors} \label{subsec:detectors} To achieve the proposed science goals we need to reach very low surface brightness objects. As an example, to detect the Ly$\alpha$ emission from the circumgalactic medium we need to observe SB$_{\rm Ly\alpha}\sim10^{-19}{\rm\,erg\,s^{-1}\,cm^{-2}\,arcsec^{-2}}$. Given the characteristics of the proposed telescope and instrument (Table \ref{tab:tech}), this translates to a detection threshold of $\sim 0.2 - 1$ e$^-$ pixel$^{-1}$ hr$^{-1}$. Currently the main limitation in reaching such low detection thresholds is the readout noise of the detectors ($\sim$ 2 -- 3 e$^-$ for both CMOS and CCDs). However, there are ongoing studies focused on enhancing UV detector performance and reducing the readout noise. On the CCD side, EMCCDs that use avalanche gain on the readout are currently available. This technology has been tested in sub-orbital flights on FIREBall (\citealt{Hamden2016,Hamden2019}). In the CMOS detector market, 8k $\times$ 8k CMOS devices already exist, although they are not yet optimized for the UV and they still have a relatively high readout noise. A study to develop a CMOS detector with readout noise below 0.15 e$^-$ is currently in progress, with results from the simulation of the silicon circuit showing single-photon sensitivity (\citealt{Stefanov2020}).
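As a sanity check on this threshold, one can estimate the photon rate implied by the target surface brightness from the parameters in Table \ref{tab:tech}. The sketch below is a rough order-of-magnitude estimate only: the aperture, throughput, observed wavelength, and the assumption that the signal is summed over one $\sim 2''$ resolution element ($\approx 4$ arcsec$^2$) are illustrative values of ours, and detector QE, dark current, and sky background are ignored.

```python
import math

# Illustrative values drawn from the Table ranges (assumptions, not a design)
D_cm       = 100.0     # aperture diameter, cm (1 m)
throughput = 0.25      # assumed end-to-end throughput
sb         = 1e-19     # target Ly-alpha SB, erg s^-1 cm^-2 arcsec^-2
omega      = 4.0       # arcsec^2, one ~2" resolution element (assumed binning)
lam_cm     = 120e-7    # observed wavelength: 120 nm, in cm

area = math.pi * (D_cm / 2.0) ** 2                # collecting area, cm^2
e_photon = 6.626e-27 * 2.998e10 / lam_cm          # photon energy hc/lambda, erg
rate_s = sb * area * omega * throughput / e_photon
rate_hr = rate_s * 3600.0                         # detected photons per hour
print(f"detected rate ~ {rate_hr:.2f} e-/hr per resolution element")
```

With these assumptions the estimate lands near the lower end of the quoted $\sim 0.2 - 1$ e$^-$ pixel$^{-1}$ hr$^{-1}$ range, illustrating why sub-electron readout noise is the driving detector requirement.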
Another promising technological development is in black silicon nano-structures with self-induced junctions, allowing an effective QE above 100\% without external amplification in the UV wavelength range (\citealt{Garin2019}). A combination of these technologies would lead to a powerful new detector optimized for low-noise UV observations. An alternative to building an optical IFU with traditional detectors is the use of an imaging camera together with a ``spectrometer on a chip'' detector based on MKID technology. These detectors measure the energy of every photon that hits the detector, and are already in use for the DARKNESS spectrometer \citep{Meeker2018}. MKIDs have no dark current or readout noise in the traditional sense, as they measure thousands to millions of quasiparticles for every incoming photon. However, they are subject to quasiparticle fluctuations, limiting the precision with which the energy of the photons can be measured and, in turn, the achievable spectral resolution. In fact, the spectral resolution that they can currently achieve is only $R < 100$, but theoretically it could be much higher, and there are already proposals for using this technology in space \citep{Rausher2016}. This technology, however, trades decreased optics complexity for increased cryogenic complexity, as MKIDs require a superconducting layer and therefore extreme cryogenic temperatures ($\approx$1~K). \section{Conclusions} An entire new window on the Universe is opened by exploring its UV-range with high spatial and spectral resolution. We have showcased several different science questions to be addressed in the time frame of Voyage 2050, and highlighted how our approach will ensure ample opportunities for serendipitous discoveries. The ideal instrument to answer the proposed scientific questions is a UV-optimized, wide-field IFU. It could be hosted by a future telescope or alternatively by a new M-size mission and it will be the only UV IFU in space.
Designing and building such a complex system will encourage technological development in several areas, including materials science, data acquisition and reduction, UV detector technology, and micro-vibration mitigation. Finally, by focusing on the UV wavelength range during and after an epoch in which the largest available facilities operate at red and IR wavelengths (e.g. \textit{JWST}, ELT), this instrument will open up a new discovery space and will be well suited to work in synergy with the major observatories now in development, such as \textit{Athena}, the ELT, and SKA. \begin{acknowledgements} We are grateful to J. Kosmalski, J. Spyromilio, Chian-Chou Chen, and F. Lelli for useful discussions and inputs. \end{acknowledgements} \section*{Conflict of interest} The authors declare that they have no conflict of interest. \bibliographystyle{spbasic}
Ted Arnott was 10 years old when the family house in Arthur, Ont., caught fire one Saturday night. The village fire chief was at home just down the street, next door to the welding shop that he ran, and he joined the rest of the volunteer fire department in putting out the blaze. More than three decades later, Mr. Arnott is trying to pay back the favour. For three years, the Progressive Conservative MPP has been fighting to protect "double-hatters" -- the estimated 600 firefighters who work in big-city departments but also do occasional duty with the mostly volunteer brigades in the small communities in which they live. It's one of those quixotic crusades that make you think the best of backbench legislators who work ceaselessly for an issue that draws few headlines. Mr. Arnott, who represents the constituency of Waterloo Wellington, gets little official backing for his fight against efforts by the 10,000-member Ontario Professional Fire Fighters Association to eradicate double-hatting. When the Tories were in office, they refused to sanction legislation he wanted to introduce and so it came forward as a private member's bill. It was defeated on third reading in a free vote. He tried again just before the 2003 election and commenced a third effort last April with yet another private member's bill. The prospects for success aren't much better this time, but hope springs eternal in a backbencher's breast. Mr. Arnott, who has been an MPP since 1990, came to the issue in March, 2002, when a fire chief in his area told him that double-hatters were being forced to resign by the OPFFA. He said he came to believe that communities unable to afford a full-time fire department are being put at risk when professional firefighters are barred from lending their expertise to them. He was also offended the union was trying to restrict the freedom of people to use their own time to volunteer. 
All fire departments have roots in voluntarism and this cross-fertilization between big-city departments and their small-town kin was overlooked for years until 1992, when the OPFFA passed a prohibition against double-hatters, many of whom are actually paid for their time either by the hour or by an annual honorarium. About 200 have quit their double-hat jobs in recent years. About 160 municipalities have subsequently backed Mr. Arnott. The union, too, argues the case for public safety. It says full-time employees who take on part-time responsibilities can put their co-workers at risk by returning to work without having had a refreshing break. It argues as well that double-hatters increase their exposure to cancer-causing chemicals, which muddies the issue of compensation claims. The Liberals also opposed Mr. Arnott, but once in office they initially looked ready to do something. Last April, Community Safety Minister Monte Kwinter said it was "unacceptable" that any Ontarian would be put at risk because of the dispute and promised, "If I can't mediate this, then I will bring in legislation." These days, he seems to be wearying of the whole thing. "This is the 10th time the Official Opposition has asked me a question about double-hatters, and the answers are basically the same," he moaned in the legislature in December. The promise of legislation seems to have evaporated and been replaced by a pledge to take the counsel of the Ontario Fire Marshal. The thing is, Fire Marshal Bernard Moyle says he has told Mr. Kwinter that the exodus of double-hatters from small-town brigades is having an effect. "There's no question certainly that fire safety is being eroded in some communities, and I certainly have expressed my views," he said. Mr. Moyle, who double-hatted for seven years, says it's not a crisis yet but "my concern is that if it picks up, it could be a serious situation." In other words, don't count out Ted Arnott just yet.
## 13.3 Law of constant composition (ESADW)

In any given chemical compound, the elements always combine in the same proportion with each other. This is the law of constant composition.

The law of constant composition states that, in any particular chemical compound, all samples of that compound are made up of the same elements in the same proportion or ratio. For example, any water molecule is always made up of two hydrogen atoms and one oxygen atom in a $2:1$ ratio. If we look at the relative masses of oxygen and hydrogen in a water molecule, we see that about $89\%$ of the mass of a water molecule is accounted for by oxygen and the remaining $11\%$ is the mass of hydrogen. This mass ratio is the same for any water molecule.

This does not mean that hydrogen and oxygen always combine in a $2:1$ ratio to form $\text{H}_2\text{O}$. Multiple proportions are possible. For example, hydrogen and oxygen may combine in different proportions to form $\text{H}_2\text{O}_2$ rather than $\text{H}_2\text{O}$. In $\text{H}_2\text{O}_2$, the $\text{H}:\text{O}$ ratio is $1:1$ and the mass ratio of hydrogen to oxygen is $1:16$. This is the same for any molecule of hydrogen peroxide.

## Law of constant composition (experiment)

### Aim

To investigate the ratio in which compounds combine.

### Apparatus

0.1 mol·dm$^{-3}$ silver nitrate ($\text{AgNO}_3$)

0.1 mol·dm$^{-3}$ sodium chloride ($\text{NaCl}$)

0.1 mol·dm$^{-3}$ lead nitrate ($\text{Pb(NO}_3)_2$)

0.1 mol·dm$^{-3}$ sodium iodide ($\text{NaI}$)

0.1 mol·dm$^{-3}$ iron(III) chloride ($\text{FeCl}_3$)

0.1 mol·dm$^{-3}$ sodium hydroxide ($\text{NaOH}$)

9 large test tubes

3 propettes

### Method

Reaction 1: Prepare three test tubes with 5 mL, 10 mL and 15 mL of silver nitrate respectively. Using a clean propette, add 5 mL of sodium chloride to each one and observe what happens.

Reaction 2: Prepare three test tubes with 5 mL, 10 mL and 15 mL of lead nitrate respectively. Using a clean propette, add 5 mL of sodium iodide to each one and observe what happens. Write a balanced equation for this reaction.

Reaction 3: Prepare three test tubes with 5 mL, 10 mL and 15 mL of sodium hydroxide respectively. Add 5 mL of iron(III) chloride to each one and observe what happens.

### Discussion and conclusion

Regardless of the amount of reactants added, the same products, with the same compositions, are formed (i.e. the precipitate observed in the reactions). However, if the reactants are not added in the correct ratios, there will be unreacted reactants remaining in the final solution, along with the products formed.

### Volume relationships in gases (ESADX)

In a chemical reaction between gases, the relative volumes of the gases in the reaction are present in a ratio of small whole numbers if all the gases are at the same temperature and pressure. This relationship is also known as Gay-Lussac's law.

For example, in the reaction between hydrogen and oxygen to produce water, two volumes of $\text{H}_2$ react with one volume of $\text{O}_2$ to produce two volumes of water vapour:

$$2\text{H}_2\text{(g)} + \text{O}_2\text{(g)} \rightarrow 2\text{H}_2\text{O(g)}$$

In the reaction to produce ammonia, one volume of nitrogen gas reacts with three volumes of hydrogen gas to produce two volumes of ammonia gas:

$$\text{N}_2\text{(g)} + 3\text{H}_2\text{(g)} \rightarrow 2\text{NH}_3\text{(g)}$$
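These mass proportions follow directly from the atomic masses. The short sketch below (the atomic-mass values and the helper function are our own illustration, not part of the original text) reproduces the numbers quoted above:

```python
# Standard atomic masses in g/mol
ATOMIC_MASS = {'H': 1.008, 'O': 15.999}

def mass_percent(formula):
    """Mass percent of each element in a compound.

    formula: dict mapping element symbol to atom count,
    e.g. {'H': 2, 'O': 1} for water.
    """
    total = sum(ATOMIC_MASS[el] * n for el, n in formula.items())
    return {el: 100.0 * ATOMIC_MASS[el] * n / total for el, n in formula.items()}

water = mass_percent({'H': 2, 'O': 1})       # ~88.8% O, ~11.2% H by mass
peroxide = mass_percent({'H': 2, 'O': 2})
h_to_o = peroxide['O'] / peroxide['H']       # mass of O per unit mass of H, ~15.9
```

The oxygen fraction of water comes out near $89\%$, and the $\text{H}:\text{O}$ mass ratio in hydrogen peroxide near $1:16$, matching the text.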
\section{Introduction} It is well-known that mixing by an incompressible flow enhances diffusion in many contexts. This is demonstrated, for instance, by the fact that the effective diffusivity of a periodic incompressible flow is always larger than diffusion in the absence of a flow~\cite{FannP}, or that the principal eigenvalue $\mu_u$ of the problem \begin{equation}\label{aug251} \left\{ \begin{array}{cl} -\triangle\phi+u\cdot\nabla\phi=\mu_u\phi & \text{ $\phi>0$ in $\Omega$},\\ \phi=0 & \text{ on $\partial\Omega$} \end{array}\right. \end{equation} is never smaller than the corresponding eigenvalue $\mu_0$ of \eqref{aug251} with $u\equiv 0$. Classes of flows which are most effective in enhancing diffusion have been studied both on bounded and unbounded domains, and their characterizations have been provided in~\cites{CKRZ, Zla2D}. On the other hand, it was observed in~\cite{BKJS} that an incompressible flow may actually slow down diffusion in the following sense. Consider the explosion problem $$ \begin{array}{cl} -\triangle\phi+u\cdot\nabla\phi=\lambda e^{\phi} & \text{ in $\Omega$},\\ \phi=0 & \text{ on $\partial\Omega$.} \end{array} $$ There exists $\lambda_*(u)$ such that this problem has a solution for all $\lambda\leq\lambda_*(u)$ and no solution for $\lambda>\lambda_*(u)$ (see \cites{CR,JL,KK} for $u\equiv 0$ and~\cite{BKNR} for $u\not\equiv 0$). Surprisingly, it was shown {\it numerically} in~\cite{BKJS} that in a long rectangle there are incompressible flows with $\lambda_*(u)<\lambda_*(0)$. This means that addition of a flow (which typically increases $\lambda_*$ due to mixing) can sometimes instead promote the creation of hotspots and inhibit their interaction with the cold boundary $\partial\Omega$. The present paper is a step toward mathematical understanding of this diffusion slowdown effect of certain incompressible flows.
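A toy one-dimensional version of \eqref{aug251} already illustrates the enhancement: for a constant drift $u$ on $(0,1)$ (constant fields are the divergence-free fields in one dimension; the tangency condition has no $1$D analogue and is ignored here), the substitution $\phi=e^{ux/2}\psi$ gives $\mu_u=\pi^2+u^2/4\geq\pi^2=\mu_0$. The finite-difference computation below is our own illustrative sketch, not part of the argument.

```python
import numpy as np

def principal_eigenvalue(u, N=400):
    """Smallest eigenvalue of -phi'' + u phi' on (0,1) with Dirichlet
    boundary conditions, via central differences on N interior points."""
    h = 1.0 / (N + 1)
    A = (np.diag(np.full(N, 2.0 / h**2))
         + np.diag(np.full(N - 1, -1.0 / h**2 + u / (2 * h)), 1)
         + np.diag(np.full(N - 1, -1.0 / h**2 - u / (2 * h)), -1))
    return np.linalg.eigvals(A).real.min()

mu0 = principal_eigenvalue(0.0)   # ~ pi^2
mu4 = principal_eigenvalue(4.0)   # ~ pi^2 + 4
```

For any constant drift the computed principal eigenvalue exceeds the drift-free value, in line with $\mu_u=\pi^2+u^2/4$.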
We consider the problem \begin{equation}\label{eqnEExitTime} \left\{ \begin{array}{cl} - \triangle \tau^u+\displaystyle u \cdot \nabla \tau^u = 1 &\text{in $\Omega$,}\\ \displaystyle \tau^u = 0 & \text{on $\partial \Omega$} \end{array} \right. \end{equation} on a smooth bounded domain $\Omega \subset \mathbb{R}^n$, with $u(x)$ an incompressible flow on $\Omega$ (i.e., $\nabla\cdot u\equiv 0$) which is tangential to $\partial\Omega$ (i.e., $u\cdot\hat n\equiv 0$ on $\partial\Omega$, with $\hat n$ the outward normal to $\partial\Omega$). Physically, the solution $\tau^u(x)$ is the expected exit time from $\Omega$ of the random process \[ dX_t=-u(X_t)\,dt+\sqrt{2}\,dB_t,\qquad X_0=x, \] modeling the motion of a diffusing particle advected by the flow $u$. Although one might think that the expected exit time is always decreased by the addition of an incompressible flow due to improved mixing, this need not be the case. Our first result shows that in any bounded simply connected domain in $\mathbb{R}^2$ which is not a disk, there are (regular) incompressible flows which \textit{increase} the maximum of the expected exit time of $X_t$ from $\Omega$. \begin{theorem}\label{thmWorseOnNonDisk} Let $\Omega \subset \mathbb{R}^2$ be a bounded simply connected domain with a $C^1$ boundary which is not a disk. Then there exists a $C^1$, divergence free vector field $v:\Omega \to \mathbb{R}^2$ tangential to $\partial\Omega$ such that $\norm{\tau^v}_{L^\infty(\Omega)} > \norm{\tau^0}_{L^\infty(\Omega)}$. \end{theorem} {\it Remark.} We note that the incompressible flows are the natural class to study in this context. Indeed, if one considers general $v$ (not necessarily divergence free), then it is easy to show that $\norm{\tau^v}_{L^\infty(\Omega)}$ can be made arbitrarily large by, for instance, taking $v(x)=A(x_0-x)$ with $x_0\in\Omega$ and $A$ sufficiently large.
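To get numerical intuition for \eqref{eqnEExitTime}, one can discretize it on the unit square with a model cellular flow $u=\nabla^\perp\psi$, $\psi=A\sin(\pi x_1)\sin(\pi x_2)$ (this particular flow and the code below are our own illustration, not part of the proofs). The computed $\norm{\tau^u}_{L^\infty}$ can then be compared with $\norm{\tau^0}_{L^\infty}\approx 0.0737$ for the square and with the ball bound $1/4\pi\approx 0.0796$ from Theorem~\ref{T.1.1}.

```python
import numpy as np
from scipy.sparse import lil_matrix, csr_matrix
from scipy.sparse.linalg import spsolve

def max_exit_time(A, N=80):
    """Max of tau solving -Lap(tau) + u.grad(tau) = 1 on the unit square,
    u = perp-grad of A sin(pi x) sin(pi y); central differences, zero BC."""
    h = 1.0 / (N + 1)
    x = np.linspace(h, 1.0 - h, N)
    X, Y = np.meshgrid(x, x, indexing="ij")
    u1 = -A * np.pi * np.sin(np.pi * X) * np.cos(np.pi * Y)   # -psi_y
    u2 =  A * np.pi * np.cos(np.pi * X) * np.sin(np.pi * Y)   #  psi_x
    M = lil_matrix((N * N, N * N))
    for i in range(N):
        for j in range(N):
            k = i * N + j
            M[k, k] = 4.0 / h**2
            if i + 1 < N: M[k, k + N] = -1.0 / h**2 + u1[i, j] / (2 * h)
            if i > 0:     M[k, k - N] = -1.0 / h**2 - u1[i, j] / (2 * h)
            if j + 1 < N: M[k, k + 1] = -1.0 / h**2 + u2[i, j] / (2 * h)
            if j > 0:     M[k, k - 1] = -1.0 / h**2 - u2[i, j] / (2 * h)
    return spsolve(csr_matrix(M), np.ones(N * N)).max()

t0  = max_exit_time(0.0)    # ~ 0.0737, the square's torsion-function maximum
t20 = max_exit_time(20.0)
```

In all our runs both values stay below the ball bound $1/4\pi$, as Theorem~\ref{T.1.1} requires.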
\smallskip On a disk however, no (incompressible) stirring will increase this expected exit time beyond the one for $u\equiv 0$. In fact we prove in any dimension that the $L^p$-norm of the expected exit time can never be larger than that from a disk of equal volume with $u\equiv 0$. \begin{theorem}\label{T.1.1} Let $\Omega\subset \mathbb{R}^n$ be a bounded domain with a $C^1$ boundary and $v:\Omega \to \mathbb{R}^n$ a $C^1$ divergence free vector field tangential to $\partial \Omega$. Then for any $p\in[1,\infty]$, $$ \norm{\tau^v}_{L^p(\Omega)} \leq \norm{\tau^{0,D}}_{L^p(D)} $$ where $D \subset \mathbb{R}^n$ is a ball with the same Lebesgue measure as $\Omega$, and $\tau^{0,D}$ is the solution of~\eqref{eqnPoisson} on $D$ with $u \equiv 0$. \end{theorem} {\it Remark.} If $D$ is a ball with Lebesgue measure $V$ and center $0$, then $\tau^{0,D}$ is given explicitly by the formula $$ \tau^{0,D}(x) = \frac{1}{2n}\left[ \left(\frac{V}{\Gamma_n}\right)^\frac{2}{n}-\abs{x}^2 \right], $$ with $\Gamma_n$ the Lebesgue measure of the unit ball in $\mathbb{R}^n$.
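One can check the remark's formula symbolically; the sketch below (our verification, in dimension $n=3$ for concreteness) confirms that $\tau^{0,D}$ satisfies $-\triangle\tau^{0,D}=1$ and vanishes on the boundary sphere $\abs{x}=(V/\Gamma_n)^{1/n}$.

```python
import sympy as sp

n = 3                                    # any fixed dimension works the same way
x = sp.symbols(f"x0:{n}")
V, Gam = sp.symbols("V Gamma_n", positive=True)
r2 = sum(xi**2 for xi in x)
tau = ((V / Gam) ** sp.Rational(2, n) - r2) / (2 * n)

lap = sum(sp.diff(tau, xi, 2) for xi in x)         # Laplacian of tau

r = sp.symbols("r", positive=True)
tau_radial = ((V / Gam) ** sp.Rational(2, n) - r**2) / (2 * n)
boundary_value = tau_radial.subs(r, (V / Gam) ** sp.Rational(1, n))
```

Here `lap` simplifies to $-1$ and `boundary_value` to $0$, as claimed.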
Theorems~\ref{thmWorseOnNonDisk} and \ref{T.1.1} are a first step in this direction. The present paper is organized as follows. In Section~\ref{proofs} prove our main results, Theorems~\ref{thmWorseOnNonDisk} and~\ref{T.1.1}. Our proof of Theorem~\ref{thmWorseOnNonDisk} involves a variational principle, Proposition~\ref{ppnCriticalPt}, the proof of which is somewhat technical and therefore postponed to Section~\ref{ptproof}. This variational principle leads to an interesting PDE for the critical points of the expected exit time functional. We discuss properties of these critical points and provide some numerical examples in Section~\ref{maximizer}. \subsection*{Acknowledgment} This work was supported in part by NSF grants of the authors. AZ was also supported by an Alfred P.~Sloan Research Fellowship. GI was also partially supported by the Center for Nonlinear Analysis. \section{Proofs of the main results}\label{proofs \subsection{Proof of Theorem~\ref{thmWorseOnNonDisk} For a given incompressible $C^3$ flow $u$ tangential to $\partial\Omega$, consider the family of Poisson problems \begin{equation}\label{aug181} \left\{\begin{array}{cl} -\triangle\tau^{Au}+Au\cdot\nabla\tau^{Au}=1 & \text{ in $\Omega$},\\ \tau^{Au}=0 & \text{ on $\partial\Omega$} \end{array}\right. \end{equation} with $A>0$. Let $\psi$ be the {\it stream function} of $u$, that is, $\psi:\Omega\to\mathbb{R}$ is $C^4$ and such that $\psi(\partial\Omega)=0$ and $u=\nabla^\perp\psi=(-\partial_{2} \psi, \partial_{1} \psi)$. It is well-known~\cites{Freidlin,WF} that if all critical points of $\psi$ are non-degenerate and no two of them lie on the same level set of $\psi$, then the functions $\tau^{Au}$ converge uniformly to a limit $\bar\tau^u$ which is constant on the level sets of $\psi$ and satisfies an asymptotic Freidlin problem on the Reeb graph of the function $\psi$. 
If $\Omega$ is simply connected and $\psi$ has a single (non-degenerate) critical point (in which case either $\psi>0$ or $\psi<0$ on $\Omega$, and we will assume without loss the former), then we have the explicit formula \begin{equation}\label{aug182} \bar \tau^u(y) = - \int_0^{\psi(y)} \frac {\abs{\Omega_{\psi,h}} } {\int_{\Omega_{\psi,h}} \triangle\psi \, dx} \, dh. \end{equation} Here and elsewhere we let $\Omega_{\psi,h}= \{x\in\Omega\,\big|\,\psi(x)>h\}$, the $h$-super-level set of $\psi$. Notice that $\bar \tau^u$ is just a reparametrization of $\psi$. As we prove in Proposition~\ref{ppnFreidlinProblem} below, the formula \eqref{aug182} holds also when the single critical point of $\psi$ is degenerate. We will start by considering only flows with the above property. That is, $u$ is $C^3$ and such that the stream function $\psi$ only has a single critical point in $\Omega$ (which is simply connected and $\psi>0$ on $\Omega$). In particular, all super-level sets of $\psi$ are simply connected and $\psi$ attains a single maximum $\psi(x_0)=M>0$. Moreover, any $h\in(0,M)$ is a regular value of $\psi$ and $\partial\Omega_{\psi,h}$ is a $C^4$ Jordan curve. Assume now that for some $C^3$ incompressible flow $w$ the function $\tau^w$ has a single critical point and let $\psi=\tau^w$ (which is $C^4$) and $u=\nabla^\perp\psi$. Thus $\psi$ solves \begin{equation}\label{aug183} \left\{\begin{array}{cl} -\triangle\psi+w\cdot\nabla\psi=1 & \text{ in $\Omega$},\\ \psi=0 & \text{on $\partial\Omega$}. \end{array}\right. \end{equation} Integrating this over $\Omega_{\psi,h}$ and using incompressibility of $w$ (the advection term drops out because $\int_{\Omega_{\psi,h}} w\cdot\nabla\psi \, dx = \oint_{\partial\Omega_{\psi,h}} \psi \, w\cdot\hat n \, d\sigma = h \oint_{\partial\Omega_{\psi,h}} w\cdot\hat n \, d\sigma = h\int_{\Omega_{\psi,h}} \nabla\cdot w \, dx = 0$, since $\psi\equiv h$ on $\partial\Omega_{\psi,h}$), we obtain \[ \abs{\Omega_{\psi,h}}= -\int_{\Omega_{\psi,h}} \triangle\psi \, dx \] for any $h\in(0,M)$. This together with (\ref{aug182}) implies that $\bar\tau^u \equiv\psi$. That is, such solutions $\psi$ to the Poisson problem \eqref{aug183} solve the Freidlin problem for themselves.
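As a concrete check of \eqref{aug182}, let $\Omega$ be the unit disk and $\psi=\tau^0=(1-|x|^2)/4$; then $\triangle\psi\equiv-1$, the ratio in the integrand of \eqref{aug182} is identically $-1$, and the formula returns $\bar\tau^u=\psi$, in agreement with the preceding paragraph. The grid computation below (our own illustration) carries out the level-set bookkeeping literally, evaluating $\bar\tau^u$ at the maximum of $\psi$.

```python
import numpy as np

N = 400
x = np.linspace(-1.0, 1.0, N)
X, Y = np.meshgrid(x, x, indexing="ij")
dA = (x[1] - x[0]) ** 2
inside = X**2 + Y**2 < 1.0
psi = np.where(inside, (1.0 - X**2 - Y**2) / 4.0, 0.0)
lap_psi = np.where(inside, -1.0, 0.0)        # Laplacian of psi on the disk

# midpoint rule in h over (0, max psi) = (0, 1/4)
dh = 0.25 / 200
levels = dh / 2 + dh * np.arange(200)
area = np.array([(psi > h).sum() * dA for h in levels])
lap_int = np.array([lap_psi[psi > h].sum() * dA for h in levels])
bar_tau_max = np.sum(-area / lap_int) * dh   # the level-set formula at max(psi)
```

Since the ratio equals $-1$ at every level, `bar_tau_max` reproduces $\max\psi=1/4$ up to floating point.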
We are particularly interested in the case $w=0$, with $\psi=\tau^0$ solving \begin{equation}\label{aug184} \left\{\begin{array}{cl} -\triangle\tau^0=1 & \text{ in $\Omega$},\\ \tau^0=0 & \text{ on $\partial\Omega$} \end{array}\right. \end{equation} and $u_0=\nabla^\perp\tau^0$. Notice that then $\tau^0$ also solves \begin{equation}\label{aug186} \left\{\begin{array}{cl} -\triangle\tau^0+Au_0\cdot\nabla\tau^0=1 & \text{ in $\Omega$},\\ \tau^0=0 & \text{ on $\partial\Omega$.} \end{array}\right. \end{equation} for any $A\in\mathbb{R}$ and so $\tau^0=\bar\tau^{u_0}$. Let us therefore assume, for now, that $\Omega$ is such that $\tau^0$ has a single critical point. We now assume that for any incompressible flow $u$ on $\Omega$ we have $\norm{\tau^u}_{L^\infty}\le \norm{\tau^0}_{L^\infty}$. In particular, $\norm{\bar\tau^u}_{L^\infty}\le \norm{\bar\tau^{u_0}}_{L^\infty}$ for each $u$ whose stream function has a single critical point. We will now show that this is the case only when $\Omega$ is a disc, thus proving Theorem~\ref{thmWorseOnNonDisk} for all $\Omega$ such that $\tau^0$ has a single critical point. The key ingredient of our proof is that for all ``infinite amplitude'' expected exit times $\bar\tau^u$, we have a variational principle which gives an explicit equation satisfied by the critical points (and thus the maximiser) of the functional $I(\psi)=\|\bar\tau^{\nabla^\perp\psi}\|_{L^\infty}$ (with $\psi$ having a single critical point). We set up the variational principle as follows. Let $v$ (the ``direction'' of our variation) be any $C^4$ vector field tangential to $\partial \Omega$. Let $X$ be the flow (in the dynamical systems sense) given by $$ \frac{dX_\epsilon}{d\epsilon} = v\circ X_\epsilon, \qquad X_0 = \text{Id}. $$ Given a stream function $\psi$ with a single critical point, we perturb it by composing it with the flow $X_\epsilon$. 
Let $\psi^\epsilon = \psi \circ X_\epsilon$, $u^\epsilon = \grad^\perp \psi^\epsilon$, and $\bar \tau^\epsilon = \bar \tau^{u^\epsilon}$. Notice that $\psi^\epsilon$ is $C^4$ and again has a single critical point (the maximum) $x_0^\epsilon$. Then $\bar\tau^\epsilon$ also attains its maximum at $x_0^\epsilon$ due to \eqref{aug182}, so the variation of $I$ in direction $v$ is \begin{equation}\label{eqnVariationTau} V(\psi, v ) = \left.\frac{d}{d\epsilon} \bar \tau^\epsilon(x_0^\epsilon) \right\rvert_{\epsilon = 0}. \end{equation} We say that $\psi$ is a critical point of $I$ if for all $C^4$ (not necessarily divergence free) vector fields $v$ tangential to $\partial \Omega$, we have $V(\psi, v )=0$. Clearly any $\psi$ (with a single critical point) which maximises $I$ is a critical point of $I$. So our aim is to prove that $\tau^0$ is not a critical point of $I$ unless $\Omega$ is a disc (assuming for now that $\tau^0$ has a single critical point). As mentioned earlier, the proof of this fact rests on obtaining an explicit equation for critical points of $I$. We can now do this by a direct computation using the Freidlin-Wentzel theory~\cites{WF,Freidlin}. \begin{proposition}\label{ppnCriticalPt} Let $\Omega\subset\mathbb{R}^2$ be a bounded simply connected domain with a $C^1$ boundary and let $\psi>0$ be a $C^4$ stream function on $\Omega$ with $\psi(\partial\Omega)=0$ and a single critical point.
Then $\psi$ is a critical point of the functional $I$ if and only if $\phi = \bar \tau^{\grad^\perp \psi}$, the solution of the Freidlin problem~\eqref{aug182} with stream function $\psi$, also solves \begin{equation}\label{eqnCriticalPt} -2\triangle \phi(x) = 1 + {\abs{\nabla \phi(x)}^2} \int_{\partial\Omega_{\phi,\phi(x)}} \frac{d\sigma}{\abs{\nabla \phi}} {\displaystyle\left(\int_{\partial\Omega_{\phi,\phi(x)}} \abs{\nabla \phi} \, d\sigma \right)^{-1}} \end{equation} \end{proposition} We postpone the proof of Proposition~\ref{ppnCriticalPt} to Section~\ref{ptproof}, but make two remarks before proceeding. \begin{remark*} One can also write down an explicit PDE \eqref{eqnCriticalPtVarphi} for the stream function $\psi$. This PDE, however, is somewhat more complicated, and we find it more convenient to work with~\eqref{eqnCriticalPt} involving the reparametrization $\phi$ of $\psi$. \end{remark*} \begin{remark*} Assume that a $C^3$ flow $w$ maximizes $\|\tau^w\|_{L^\infty}$ and $\tau^w$ has a single critical point. The argument following \eqref{aug183} above then shows $\tau^w\equiv \bar\tau^{\nabla^\perp\tau^w}$, so $\tau^w$ is a critical point of $I$ and solves \eqref{eqnCriticalPt}. \end{remark*} Let now $\psi=\tau^0$ have a single critical point $x_0\in\Omega$ and assume that $\psi$ is a critical point of $I$. Recall that $\psi=\bar\tau^{u_0}=\bar\tau^{\nabla^\perp \psi}$, so Proposition~\ref{ppnCriticalPt} implies that $\psi$ solves~\eqref{eqnCriticalPt}. Since $-\triangle \psi = 1$, we obtain \begin{equation}\label{eqnReducesToEikonal} \abs{\nabla \psi(x)}^2 \int_{\partial\Omega_{\psi,\psi(x)}} \frac{d\sigma}{\abs{\nabla \psi}}\left(\displaystyle\int_{\partial\Omega_{\psi,\psi(x)}} \abs{\nabla \psi} \, d\sigma \right)^{-1} =1, \end{equation} immediately showing that $\abs{\nabla \psi}$ must be constant on the level sets of $\psi$.
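For the radial solution on a disk of radius $R$, $\psi=(R^2-|x|^2)/4$, the identity \eqref{eqnReducesToEikonal} can be verified directly: on the level circle of radius $r$ we have $\abs{\nabla\psi}=r/2$, the two boundary integrals equal $4\pi$ and $\pi r^2$, and the left-hand side is $(r^2/4)\cdot 4\pi/(\pi r^2)=1$. A symbolic confirmation (our own check):

```python
import sympy as sp

r, R = sp.symbols("r R", positive=True)
psi = (R**2 - r**2) / 4
g = sp.Abs(sp.diff(psi, r))          # |grad psi| = r/2 on the circle of radius r
perimeter = 2 * sp.pi * r
# left-hand side of the identity on the level circle of radius r
lhs = sp.simplify(g**2 * (perimeter / g) / (g * perimeter))
```

`lhs` simplifies to $1$, so the radial profile satisfies \eqref{eqnReducesToEikonal} on every level set.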
Thus $\psi$ solves the eikonal equation $\abs{\nabla \psi(x)}=g(\psi(x))$ with $g$ equal to zero at the maximum of $\psi$ and positive elsewhere. It is well known that a solution of such an equation does not have interior singularities only if $\Omega$ is a disk and $\psi$ is radial~\cite{BBI}. In our situation this can be seen as follows. After reparametrization we may assume that $g\equiv1$, and $\psi$ attains its maximum at $x_0$. This introduces a singularity at $x_0$, so let us suppose that $\psi$ does not have other interior singularities. Since the level sets of $\psi$ are connected, and the maximum is isolated, for any $\epsilon>0$ we can find a wavefront (a level set of $\psi$) that is contained in a disk of radius $\epsilon>0$ centered at $x_0$. By compactness, this wavefront is a positive distance $\epsilon' \in (0, \epsilon)$ away from $x_0$. Absence of singularities now implies that we can evolve this level set, and the circles of radius $\epsilon$ and $\epsilon'$ ``outward'' by the eikonal equation (with $g\equiv 1$). Then each level set of $\psi$ obtained by this evolution lies entirely within distance $\epsilon - \epsilon' < \epsilon$ from a circle. As $\epsilon>0$ is arbitrary we conclude that level sets of $\psi$ have to be circles. Since $ \psi(\partial\Omega) =0$, we have that $\Omega$ is a disk and $\psi$ radial. Thus we have proved that if $\tau^0$ has a single critical point, then it does not maximize $I$ when $\Omega$ is not a disk. Since the claim of Theorem~\ref{thmWorseOnNonDisk} for a disk follows from Theorem~\ref{T.1.1} (which we will prove shortly), we are left with considering the case of $\Omega$ such that $\tau^0$ has more than one critical point. We will use the following claim to reduce this to the previous case. \begin{lemma} For any bounded simply connected $\Omega\subset\mathbb{R}^2$ with a $C^1$ boundary, the set of maxima of $\tau^0$ is discrete.
\end{lemma} \begin{proof} Let $M = \| \tau^0\|_{L^\infty}$, let $\mathcal D = \{x\,|\,\tau^0(x)=M\}$, and suppose $\mathcal D$ is not discrete. Since $\mathcal D$ is a positive distance from $\partial\Omega$, it has an accumulation point $x_0$ inside $\Omega$. Assume without loss of generality that a sequence $x_n\in\mathcal D$ converges to $x_0$ in the direction of the $x$-axis: $(x_n-x_0)/|x_n-x_0|\to (1,0)$. Thus $\partial_x\tau^0(x_0)=\partial_x^2\tau^0(x_0)=0$, so $\partial_y^2\tau^0(x_0)=-1$ and the analytic implicit function theorem shows that there is a real analytic curve $\mathcal C$ containing $x_0$ on which $\partial_y\tau^0=0$ (since $\tau^0$ is real analytic). It then follows that $x_n\in\mathcal C$ for all large $n$, and real analyticity of $\tau^0|_{\mathcal C}$ now shows $\mathcal C\subset \mathcal D$. So $\mathcal D$ contains an analytic curve which cannot end inside $\Omega$ (by the previous argument) and must also stay away from $\partial \Omega$ (by $M>0$). This means that such a curve must be closed. But then $\tau^0 > M$ inside the region enclosed by this curve (which is a subset of $\Omega$), a contradiction. \end{proof} We now return to the proof of Theorem~\ref{thmWorseOnNonDisk} for general $\tau^0$ and assume that the zero flow maximizes $\|\tau^u\|_{L^\infty}$ among all incompressible flows $u$ on $\Omega$. We will reduce the problem to the previous case by showing that then the same is true for a connected component of $\Omega_{\tau^0,h}$ containing a maximum of $\tau^0$, for all $h$ sufficiently close to $M = \linfnorm{\tau^0}$. We introduce some notation and make this precise below. Let $\tau^0(x_0)=M$ for some $x_0\in\Omega$ and denote by $\Omega_h$ the connected component of $\Omega_{\tau^0,h}$ containing $x_0$.
For any $h$ and incompressible $C^3$ vector field $w$ tangential to $\partial\Omega_h$, define $Q_{\Omega_h}(w) = \limsup_{A\to\infty} \norm{\tau^{Aw}_{\Omega_h}}_{L^\infty(\Omega_h)}$ (where $\tau^{Aw}_{\Omega_h}$ satisfies~\eqref{eqnEExitTime} with $\Omega = \Omega_h$ and $u = Aw$). Finally, choose $h_0<M$ sufficiently close to $M$ so that $\bar\Omega_{h_0}$ contains no critical points of $\tau^0$ besides $x_0$. \begin{lemma}\label{lemaug212} Assume that for all $C^3$ incompressible vector fields $u$ tangential to $\partial \Omega$, we have $\norm{\tau^0}_{L^\infty(\Omega)} \geq Q_{\Omega}(u)$. Then for any $C^3$ incompressible vector field $w$ tangential to $\partial \Omega_{h_0}$, we have $\norm{\tau^0-h_0}_{L^\infty(\Omega_{h_0})} \geq Q_{\Omega_{h_0}}(w)$. \end{lemma} Momentarily postponing the proof of Lemma~\ref{lemaug212}, note that $\tau^0 - h_0$ is the expected exit time of Brownian motion, starting at $x$, from $\Omega_{h_0}$. That is, $\tau^0 - h_0$ is the solution of~\eqref{eqnEExitTime} with $\Omega = \Omega_{h_0}$, and $u = 0$. Thus Theorem~\ref{thmWorseOnNonDisk} for $\Omega_{h_0}$ (which we have already proved) shows that $\Omega_{h_0}$ is a disk of some radius, say $R$, and $\tau^0$ is radial in it. Since $\tau^0(x)+\tfrac 14|x-x_0|^2$ is harmonic in $\Omega$ and radial near $x_0$, it must be constant in $\Omega$. Since $\tau^0(\partial\Omega)=0$ we have that $\Omega$ is a disk, completing the proof of Theorem~\ref{thmWorseOnNonDisk}. It only remains to prove Proposition~\ref{ppnCriticalPt} and Lemma~\ref{lemaug212}. Proposition~\ref{ppnCriticalPt} is proved in Section~\ref{ptproof}, and we prove Lemma~\ref{lemaug212} below. \begin{proof}[Proof of Lemma~\ref{lemaug212}] The proof is based on the more general observation that changing any stream function near its maximum does not affect the asymptotic $A\to\infty$ behavior of the solution of~\eqref{aug181} away from the maximum. We make this precise below.
Let $\psi$ be any $C^4$ function in $\Omega$ with $\psi(\partial\Omega)=0$ and let $x_0,M,h_0,\Omega_h$ be defined as above, with $\psi$ in place of $\tau^0$ (we will eventually choose $\psi = \tau^0$). For some $h_1\in(h_0,M)$ let $\psi'$ be a $C^4$ function such that $\psi(x)=\psi'(x)$ for $x\in\Omega\setminus\Omega_{h_1}$, and denote $u=\nabla^\perp \psi$, $u'=\nabla^\perp \psi'$. Let $\tau_A$ and $\tau_A'$ solve \begin{equation}\label{aug213} \left\{\begin{array}{cl} -\triangle\tau_A+Au\cdot\nabla\tau_A=1 & \text{ in $\Omega$},\\ \tau_A=0 & \hbox{ on $\partial\Omega$,} \end{array}\right. \end{equation} and \begin{equation}\label{aug214} \left\{\begin{array}{cl} -\triangle\tau_A'+Au'\cdot\nabla\tau_A'=1 &\text{ in $\Omega$},\\ \tau_A'=0 &\text{ on $\partial\Omega$.} \end{array}\right. \end{equation} We will first show \begin{equation}\label{aug216} \norm{\nabla\tau_A-\nabla\tau_A'}_{L^2(\Omega\setminus\Omega_{h_0})}\to0\text{ as }A\to+\infty, \end{equation} which, as mentioned earlier, says that perturbations of the stream function near $x_0$ do not affect the asymptotic $A\to\infty$ behavior away from $x_0$. To prove~\eqref{aug216}, let $\phi_A=\tau_A-\tau_A'$ so that for any $h\in[h_0,h_1]$ we have \begin{equation}\label{eqnPoisson11} \left\{\begin{array}{cl} -\triangle\phi_A+A u \cdot \nabla \phi_A = 0 &\text{in }\Omega \setminus \Omega_h,\\ \phi_A = 0 & \text{ on } \partial \Omega,\\ \displaystyle \int_{\partial \Omega_h} (\nabla \phi_A \cdot \hat n) \, d\sigma =0, \end{array}\right. \end{equation} where the third equation is obtained by integrating the difference of \eqref{aug213} and \eqref{aug214} over $\Omega_h$, and using $u \cdot \hat n = u' \cdot \hat n = 0$ on $\partial \Omega_h$. Multiplying (\ref{eqnPoisson11}) by $\phi_A$ and integrating by parts, we obtain: \[ \int_{\Omega \setminus \Omega_h} \abs{\nabla \phi_A}^2= -\int_{\partial \Omega_h}\phi_A (\nabla \phi_A \cdot \hat n )\, d \sigma.
\] Combining the flux condition in \eqref{eqnPoisson11} with the last equality, we obtain: \[ \int_{\Omega \setminus \Omega_h} \abs{\nabla \phi_A}^2 dx = -\int_{\partial \Omega_h}(\phi_A-\tilde{\phi}_A) \nabla \phi_A \cdot \hat n \, d \sigma, \] where \[ \tilde{\phi}_A\iftextstyle|\else\Big|\fi_{\partial \Omega_h} = \frac{1}{\abs{\partial \Omega_h}}\int_{\partial \Omega_h}\phi_A \, d\sigma \] is the streamline-averaged $\phi_A$. Integrating this identity for $h\in[h_0, h_1]$ we obtain: \begin{align*} &(h_1-h_0)\int_{\Omega \setminus \Omega_{h_0}} \abs{\nabla \phi_A}^2 \leq \int_{h_0}^{h_1} \left( \int_{\Omega \setminus \Omega_h} \abs{\nabla \phi_A}^2 dx \right) \, dh\\ &=-\int_{h_0}^{h_1}\int_{\partial \Omega_h}(\phi_A-\tilde{\phi}_A) (\nabla \phi_A \cdot \hat n ) \, d\sigma \, dh \leq \int_{h_0}^{h_1}\int_{\partial \Omega_h}\abs{\phi_A-\tilde{\phi}_A}~\abs{\nabla\phi_A} \, d\sigma \, dh\\ &= \int_{\Omega_{h_0} \setminus \Omega_{h_1}}\!\!\!\!\! \abs{\phi_A-\tilde{\phi}_A}~\abs{\nabla \phi_A}~\abs{\nabla \psi} \, dx \leq C\norm{ \nabla \phi_A}_{L^2(\Omega)} \left(\int_{\Omega_{h_0} \setminus \Omega_{h_1}} \!\!\!\!\!\abs{\phi_A-\tilde{\phi}_A}^2 dx\right)^{1/2}\!\!\!\!. \end{align*} Multiplying \eqref{aug213} by $\tau_A$, \eqref{aug214} by $\tau_A'$, and integrating over $\Omega$, we obtain the uniform bound $\norm{ \nabla \tau_A}_{L^2(\Omega)}, \norm{ \nabla \tau_A'}_{L^2(\Omega)} \leq C$. Hence $\norm{ \nabla \phi_A}_{L^2(\Omega)} \leq C$ and it follows that \begin{multline}\label{aug218} \int_{\Omega \setminus \Omega_{h_0}} \abs{\nabla \phi_A}^2 \leq C\norm{\phi_A-\tilde{\phi}_A}_{L^2(\Omega_{h_0} \setminus \Omega_{h_1})} \\ \leq C\left(\norm{\tau_A-\tilde{\tau}_A}_{L^2(\Omega_{h_0} \setminus \Omega_{h_1})}+ \norm{\tau_A'-\tilde{\tau}_A'}_{L^2(\Omega_{h_0} \setminus \Omega_{h_1})}\right). \end{multline} We claim now that the right side of~\eqref{aug218} tends to zero as $A \to \infty$.
Indeed, multiplying \eqref{aug213} by $u\cdot\nabla\tau_A$, integrating, using incompressibility of $u$, and the fact that $u\cdot \hat n=0$ on $\partial\Omega$ gives \begin{eqnarray*} &&A\int_\Omega(u\cdot\nabla\tau_A)^2dx=\int_\Omega (u\cdot\nabla\tau_A)\triangle\tau_A dx =-\int_\Omega\pdr{\tau_A}{x_j}\pdr{}{x_j}\left(u\cdot\nabla\tau_A\right)dx\\ &&=-\int_\Omega\pdr{\tau_A}{x_j}\pdr{u_m}{x_j} \pdr{\tau_A}{x_m} dx\leq C\int_\Omega \abs{\nabla\tau_A}^2\leq C. \end{eqnarray*} As $\abs{u}$ is strictly positive in $\Omega_{h_0} \setminus \Omega_{h_1}$, it follows that \begin{equation} \label{2.222} \norm{\tau_A-\tilde\tau_A}_{L^2(\Omega_{h_0} \setminus \Omega_{h_1})}\to 0 \end{equation} as $A\to+\infty$. The argument for $\tau_A'$ is identical, completing the proof of \eqref{aug216}. In order to improve the $\dot H^1(\Omega\setminus\Omega_{h_0})$ bound (\ref{aug216}) to a bound in $L^\infty(\Omega\setminus\Omega_{h_0})$ we simply note that, given any $\epsilon>0$ and $A>A_0$, using (\ref{2.222}) we may find a streamline $\partial\Omega_{h'}$ with $h'\in(h_0,h_1)$ arbitrarily close to $h_0$ so that \[ \|\tau_A-\tilde\tau_A\|_{L^2(\partial\Omega_{h'})}+\|\tau_A'-\tilde\tau_A'\|_{L^2(\partial\Omega_{h'})} +\|\nabla\tau_A-\nabla\tau_A'\|_{L^2(\partial\Omega_{h'})}<\epsilon. \] It follows that \[ \|\tau_A-\tilde\tau_A\|_{L^\infty(\partial\Omega_{h'})}+ \|\tau_A'-\tilde\tau_A'\|_{L^\infty(\partial\Omega_{h'})} <C\epsilon, \] and, in addition, $|\tilde\tau_A-\tilde\tau_A'|_{\partial\Omega_{h'}}<C\epsilon$ because of (\ref{aug216}) and since $\tau_A=\tau_A'=0$ on $\partial\Omega$. Finally, since $\tau_A$ and $\tau_A'$ satisfy the same equation outside of $\Omega_{h_1}$, the maximum principle implies that $|\tau_A-\tau_A'|<C\epsilon$ in $\Omega\setminus\Omega_{h'}$, and in particular in $\Omega\setminus\Omega_{h_0}$. Now assume that $\psi=\tau^0$ maximizes $\|\tau^u\|_{L^\infty}$ (then $u_0=\nabla^\perp\tau^0$ maximizes $Q_\Omega$), but suppose, towards a contradiction, that the conclusion of the lemma fails.
Then there exists a $C^4$ stream function $\psi'$ on $\Omega$, equal to $\tau^0$ on $\Omega\setminus\Omega_{h_0}$, such that for $w=\nabla^\perp\psi'$ (when restricted to $\Omega_{h_0}$), \begin{equation}\label{aug2110} M=\norm{\tau^0}_{L^\infty}<h_0+Q_{\Omega_{h_0}}(w). \end{equation} We can assume that $\psi'$ has a single critical point in $\Omega_{h_0}$ because $\tau^0$ does, as do all the perturbations $\psi^\epsilon$ considered in the proof of Proposition~\ref{ppnCriticalPt}. Moreover, we can assume $\psi'\equiv\tau^0$ on $\Omega\setminus\Omega_{h_1}$ for some $h_1>h_0$ because it is sufficient to consider such perturbations in that proof (see the remark after the proof of Lemma \ref{lmaCriticalPtUnconstrained}). Then the previous argument shows $\norm{\tau^{Aw}-\tau^0}_{L^\infty(\Omega_{h_0-\epsilon}\setminus\Omega_{h_0+\epsilon})}\to 0$ as $A\to+\infty$. In particular, $\tau^{Aw}>h_0-\delta$ on $\partial\Omega_{h_0}$ for all large $A$, with $\delta=(h_0+Q_{\Omega_{h_0}}(w)-M)/2$. This means that $\tau^{Aw}>h_0-\delta+\tau^{Aw}_{\Omega_{h_0}}$ on $\Omega_{h_0}$ by the maximum principle. But then \[ M\ge Q_\Omega(w) \ge h_0-\delta+ Q_{\Omega_{h_0}}(w) =M+\delta >M, \] a contradiction. This finishes the proof. \end{proof} \subsection{Proof of Theorem~\ref{T.1.1} } \label{ss2.2} We can assume that $v$ is sufficiently smooth (and approximate general $v$ with smooth ones). Let us denote $\tau=\tau^v$ and $\Omega_h=\Omega_{\tau,h}$. Then by Sard's theorem the set $\mathcal A$ of regular values of $\tau$ has full measure. Thus $\partial\Omega_h$ is a finite union of sufficiently smooth compact manifolds without boundary for each $h\in \mathcal A$ (moreover, $\mathcal A$ is then open because $\tau\in C^2(\Omega)$). Let $\Omega^*$ and $\tau^*$ be the symmetric rearrangements of $\Omega$ and $\tau$.
That is, $\Omega^*$ is the ball with volume $\abs{\Omega}=V$ centered at the origin and $\tau^*:\Omega^*\to \mathbb{R}_+$ is the non-increasing radial function such that the ball $\Omega^*_h = \{ x\in\Omega^* \,|\, \tau^*(x)>h \}$ satisfies $\abs{\Omega^*_h}=\abs{\Omega_h}$ for each $h\in \mathbb{R}$ (with $\Omega_h$ as above). Let now $h\in \mathcal A$. The isoperimetric inequality gives \begin{equation} \label{1.1} \abs{\partial\Omega^*_h} \leq \abs{\partial\Omega_h}, \end{equation} with equality precisely when $\Omega_h$ is a ball. Since $v$ is divergence-free and $\tau$ is constant on $\partial\Omega_h$, we have \begin{equation} \label{1.2} \int_{\partial \Omega_h} \abs{\nabla \tau} d\sigma = -\int_{\partial \Omega_h} \frac{\partial \tau}{\partial \nu} d\sigma= \int_{\Omega_h} (-\triangle \tau +v\cdot \nabla \tau)dx = \abs{\Omega_h}=\abs{\Omega^*_h}. \end{equation} Finally, the co-area formula yields \begin{equation} \label{1.3} -\int_{\partial\Omega_h} \frac{1}{ \abs{\nabla \tau}} \, d\sigma = \frac \partial{\partial h} \abs{\Omega_h} = \frac \partial{\partial h} \abs{\Omega^*_h} = -\int_{\partial\Omega^*_h} \frac{1}{ \abs{\nabla \tau^*}} \, d\sigma. \end{equation} Thus by \eqref{1.1} and the Schwarz inequality, \[ \int_{\partial \Omega^*_h} \abs{\nabla \tau^*} \, d\sigma \int_{\partial\Omega^*_h} \frac{1}{ \abs{\nabla \tau^*}} \,d\sigma = \abs{\partial\Omega^*_h}^2 \leq \abs{\partial\Omega_h}^2 \leq \int_{\partial \Omega_h} \abs{\nabla \tau} \,d\sigma \int_{\partial\Omega_h} \frac{1}{ \abs{\nabla \tau}} \,d\sigma. \] In view of \eqref{1.2} and \eqref{1.3} we obtain \[ \int_{\partial\Omega^*_h} \abs{\nabla \tau^*} \,d\sigma \leq \int_{\partial\Omega_h} \abs{\nabla \tau} \, d\sigma = \abs{\Omega^*_h}, \] with equality precisely when $\Omega_h$ is a ball and $\abs{\nabla \tau}=-\partial \tau/\partial \nu$ is constant on $\partial\Omega_h$.
So if $\gamma(\abs{x})=\tau^*(x)$ and $\rho=(V/\Gamma_n)^{1/n}$ is the radius of $\Omega^*$, then with $\Sigma_n=n\Gamma_n$ the surface of the unit sphere, \[ \gamma(\rho)=0 \qquad \text{and} \qquad 0\leq -\gamma'(r)\leq \frac{\Gamma_n r^n}{\Sigma_n r^{n-1}} = \frac r n \] when $\gamma(r)\in \mathcal A$. Since $\mathcal A$ has full measure and $\gamma$ is continuous, we have \begin{equation} \label{1.4} \gamma(r)\leq \frac{\rho^2-r^2}{2n} = \tilde \gamma(r) \end{equation} for all $r\in[0,\rho]$, with $\gamma\equiv \tilde \gamma$ precisely when all $\Omega_h$ are balls and $\tau$ is radial (thus so is $v(x)\cdot x$, hence $v(x)\cdot x\equiv 0$ since $v$ is divergence-free). Now \eqref{1.4} gives \[ \abs{\Omega_h}=\abs{\Omega^*_h}=\abs{\{x\in\Omega^* \,|\, \gamma(\abs{x})>h \}} \leq \abs{\{x\in\Omega^* \,|\, \tilde\gamma(\abs{x})>h \}} \] and the claim follows. \hfill $\Box$ \section{Properties of the Maximizer}\label{maximizer} We start by proving \eqref{aug182}. \begin{proposition}\label{ppnFreidlinProblem} Let $\psi>0$ be a $C^4$ stream function on a bounded simply connected domain $\Omega\subset\mathbb{R}^2$ with a single critical point and let $u = \grad^\perp \psi$. Then $\tau^{Au} \to \bar \tau^u$ uniformly on $\Omega$, where $\bar\tau^u$ is given by \begin{equation}\label{eqnFreidlinProblem} \bar \tau^u(y) = - \int_0^{\psi(y)} \frac {\abs{\Omega_{\psi,h} }} {\int_{\Omega_{\psi,h}} \triangle\psi \, dx} \, dh. \end{equation} \end{proposition} \begin{proof} Assume first that the maximum of $\psi$ is non-degenerate and let $\sup \psi=M>0$. It is then proved in \cite{BKNR} that, as $A \to \infty$, the functions $\tau^{Au}$ converge uniformly on $\Omega$ to $\bar\tau^u$ with $\bar\tau^u(y)=\bar\tau(\psi(y))$, where $\bar\tau$ solves the effective problem \begin{equation}\label{fr-3} \left\{ \begin{aligned} &-\frac{1}{T(h)}\frac{d}{dh}\left(p(h)\frac{d\bar \tau}{dh}\right)=1,\\ &\bar \tau(0)=0 \text{ and $\bar \tau$ is bounded on $(0,M)$} \end{aligned} \right.
\end{equation} on the interval $(0,M)$, with the coefficients \begin{equation}\label{fr-one-coeff} T(h)=\int_{\partial \Omega_{\psi,h}}\frac{d\sigma}{\abs{\nabla\psi}}, \quad p(h)=\int_{\partial \Omega_{\psi,h}}{\abs{\nabla\psi}} \, d\sigma. \end{equation} By Green's formula and~(\ref{fr-one-coeff}), \[ p(h)= \int_{\partial \Omega_{\psi,h}}{\abs{\nabla\psi}} \,d\sigma = -\int_{ \partial \Omega_{\psi,h}}{\nabla \psi \cdot \hat n} \, d\sigma =-\int_{\Omega_{\psi,h} }\triangle \psi\, dx, \] where we have used $\nabla \psi \cdot \hat n = -\abs{\nabla\psi}$ on $\partial\Omega_{\psi,h}$. By the co-area formula, \[ \int_h^{M} T(r) \, dr = \int_{\Omega_{\psi,h} } \, dx, \] and thus~\eqref{fr-3} reduces to \begin{equation}\label{fr-3__} \bar \tau'(h)= \frac{\int_h^{M} T(r) \, dr + C}{ p(h)} = \frac{\abs{\Omega_{\psi,h}} + C}{-\int_{\Omega_{\psi,h} }\triangle \psi \,dx}. \end{equation} Non-degeneracy of the maximum of $\psi$ shows that $\tfrac 1{h-M} \int_{\Omega_{\psi,h} }\triangle \psi \,dx$ stays bounded away from zero and infinity as $h\uparrow M$. Boundedness of $\bar\tau$ then forces $C=0$, completing the proof of the non-degenerate case. If the maximum of $\psi$ is degenerate, we let $\psi_n$ be a $C^4$ stream function with a single non-degenerate critical point which agrees with $\psi$ on $\Omega\setminus \Omega_{\psi,M-(1/n)}$. The proof of Lemma \ref{lemaug212}, with $u=\nabla^\perp\psi$ and $u'=u_n=\nabla^\perp\psi_n$, shows that $\tau^{Au}-\tau^{Au_n}\to 0$ as $A\to\infty$, uniformly on $\Omega\setminus\Omega_{\psi,M-(2/n)}$. But \[ \int_{\Omega_{\psi,h} }\triangle \psi \,dx = -\int_{\partial \Omega_{\psi,h}}{\abs{\nabla\psi}} \,d\sigma \] shows that $\bar\tau^u$ and $\bar\tau^{u_n}$ coincide on $\Omega\setminus\Omega_{\psi,M-(1/n)}$, so $\tau^{Au}\to \bar\tau^u$ as $A\to\infty$ uniformly on $\Omega\setminus\Omega_{\psi,M-(2/n)}$.
The result now follows by taking $n\to\infty$ and noticing that for large $n$ and large $A$, the oscillation of $\tau^{Au}$ on $\Omega_{\psi,M-(2/n)}$ has to be small thanks to the small oscillation of $\tau^{Au}$ on $\partial\Omega_{\psi,M-(2/n)}$, the small diameter of $\Omega_{\psi,M-(2/n)}$, and the maximum principle. \end{proof} We are presently unable to analytically prove existence of solutions to~\eqref{eqnCriticalPt}. The structure of the nonlinear term in~\eqref{eqnCriticalPt} lends itself naturally to some a priori estimates. These, however, are not strong enough to prove existence, mainly because they do not seem to provide any form of compactness. \begin{proposition}\label{ppnApriori} Let $\phi$ be a $C^4$ solution of~\eqref{eqnCriticalPt} with a single critical point and $\phi = 0$ on $\partial\Omega$. Then \begin{enumerate} \item $\linfnorm{\phi} \leq \displaystyle\frac{\abs{\Omega}}{4\pi}$. \item For any Borel function $f$, $$ \int_{\Omega_{\phi,h}} f(\phi) \, \abs{\triangle \phi} \, dx = \int_{\Omega_{\phi,h}} f(\phi) \, dx, $$ and, in particular, $\displaystyle\int_\Omega \abs{\triangle \phi} \, dx= \abs{\Omega}$. \item If $\tau$ satisfies $-\triangle \tau = 1$ in $\Omega$ with $\tau = 0$ on $\partial \Omega$, then $$ \int_\Omega \abs{\triangle \phi - \triangle \tau} < \abs{\Omega}. $$ \end{enumerate} \end{proposition} These estimates do give us some insight as to the nature of classical solutions to~\eqref{eqnCriticalPt}. For instance, the first two assertions give $L^\infty(\Omega)$ and $H^{1}(\Omega)$ bounds on $\phi$, while the third is an explicit upper bound on the distance between a classical solution of~\eqref{eqnCriticalPt} and the exit time of the Brownian motion from $\Omega$. \begin{proof} The second assertion follows by multiplying~\eqref{eqnCriticalPt} by $f(\phi)$ and using the co-area formula. As a consequence, for any $h>0$ we have the identity \[ \abs{\Omega_{\phi,h}}=\int_{\partial\Omega_{\phi,h}}\abs{\nabla\phi} \, d\sigma.
\] Then (1) follows by a rearrangement argument as in the proof of Theorem~\ref{T.1.1}. For the last claim, note that $$ 2(\triangle \tau(x) - \triangle \phi(x)) = \frac{\abs{\nabla \phi(x)}^2}{\displaystyle\int\limits_{\partial\Omega_{\phi,\phi(x)}} \abs{\nabla \phi} \, d\sigma } \;\int\limits_{\partial\Omega_{\phi,\phi(x)}} \frac{1}{\abs{\nabla \phi}} \, d\sigma - 1. $$ By the co-area formula, the integral over $\Omega$ of the first term is exactly $\abs{\Omega}$. Since that term is non-negative, the strict inequality in (3) follows. \end{proof} Since an analytical proof of existence of solutions for~\eqref{eqnCriticalPt} is at present out of reach, we turn our attention to numerics. As boundary integrals are problematic to compute numerically, it is more convenient to work with the equation \begin{equation}\label{eqnCriticalPtLn} -2\triangle \phi(x) = 1 - \nabla \phi(x) \cdot \nabla \ln \abs{\bigl.\Omega_{\phi(x)}}, \end{equation} which is equivalent to~\eqref{eqnCriticalPt}. Surprisingly, an iteration scheme of the form \begin{gather*} -\triangle \phi_0 = 1, \quad \phi_0\iftextstyle|\else\Big|\fi_{\partial\Omega} = 0\\ -2 \triangle \phi_{n+1}(x) = 1 - \nabla \phi_n(x) \cdot \nabla \ln \abs{ \bigl. \{\phi_n \geq \phi_n(x) \} }, \quad \phi_{n+1} \iftextstyle|\else\Big|\fi_{\partial \Omega} = 0 \end{gather*} does not always converge. For certain domains, it turns out that numerically $\linfnorm{\phi_n} \to \infty$ as $n \to \infty$, which is clearly not representative of the solution of~\eqref{eqnCriticalPt} as it violates the last assertion in Proposition~\ref{ppnApriori}. On the other hand, an iteration scheme of the form \begin{gather} -\triangle \phi_0 = 1, \quad \phi_0\iftextstyle|\else\Big|\fi_{\partial\Omega} = 0\\ -2 \triangle \phi_{n+\frac{1}{2}}(x) = 1 - \nabla \phi_n(x) \cdot \nabla \ln \abs{ \bigl.
\{\phi_n \geq \phi_n(x) \} }, \quad \phi_{n+\frac{1}{2}}\iftextstyle|\else\Big|\fi_{\partial\Omega}=0\\ \label{eqnIterScheme3} \phi_{n+1}\iftextstyle|\else\Big|\fi_{\phi_{n+\frac{1}{2}} = h_0} = - \int_0^{h_0} \frac{\abs{\bigl\{\phi_{n+\frac{1}{2}} \geq h\bigr\} }}{\int_{\{\phi_{n+\frac{1}{2}} = h\}} \frac{\partial \phi_{n+\frac{1}{2}}}{\partial \hat n} \, d\sigma} \, dh \end{gather} does converge rapidly to a numerical solution of~\eqref{eqnCriticalPtLn}. In fact, \eqref{eqnIterScheme3} can be replaced by \begin{equation*}\tag{$\text{\ref{eqnIterScheme3}}$'} -\triangle \phi_{n+1} + A \grad^\perp \phi_{n+\frac{1}{2}} \cdot \nabla \phi_{n+1} = 1 \end{equation*} for some large, fixed $A$, which produces better numerical results. Figure~\ref{fgrMaximisersAndExitTimes} shows contour plots of the solution to~\eqref{eqnCriticalPt} in two different domains. For comparison, the expected exit time $\tau^0$ of Brownian motion from the domain is shown alongside each plot of $\phi$. \begin{figure}[thb] \subfigure[Maximizer $\psi$]{ \includegraphics[width=5cm]{dom1-psi.eps} } \quad \subfigure[Expected exit time $\tau^0$]{ \includegraphics[width=5cm]{dom1-tau0.eps} }\\ \subfigure[Maximizer $\psi$]{ \includegraphics[width=5cm]{dom2-psi.eps} } \quad \subfigure[Expected exit time $\tau^0$]{ \includegraphics[width=5cm]{dom2-tau0.eps} }\\ \caption{Maximizers and expected exit times from two different domains.}\label{fgrMaximisersAndExitTimes} \end{figure} We are unable to prove convergence of these numerical schemes, just as we cannot establish existence of solutions of~\eqref{eqnCriticalPt}. However, one immediate observation from Figure~\ref{fgrMaximisersAndExitTimes} is that the level sets of $\phi$ become circular near the maximum. Indeed, for any classical solution of~\eqref{eqnCriticalPt}, this must be the case. \begin{proposition} Let $\phi$ be a smooth solution of~\eqref{eqnCriticalPt} and assume that $\phi$ attains a local maximum at $(0,0)$. Then $\partial_{xx} \phi(0) = \partial_{yy} \phi(0)$.
\end{proposition} \begin{proof} We will show that if $\phi$ is any smooth function which attains a maximum at $0$, then the last term in~\eqref{eqnCriticalPtLn} is continuous near $0$ if and only if $\partial_{xx} \phi(0) = \partial_{yy} \phi(0)$. This immediately implies the proposition. Assume first that the Hessian of $\phi$ at $0$ is not degenerate (in this case assuming $\phi \in C^3$ will be enough for the proof). We rotate our coordinate frame and assume without loss of generality that $$ \phi(x,y) = M - \frac{x^2}{a^2} - \frac{y^2}{b^2} + c_3(x,y) $$ where $c_3(x,y)$ is some function involving only third order or higher terms. Now for any $\epsilon > 0$, define $f(\epsilon)$ by \begin{equation} f(\epsilon) = \abs{ \Omega_{\phi, M - \epsilon} } = \abs{\left\{ \frac{x^2}{a^2} + \frac{y^2}{b^2} \leq \epsilon + c_3(x,y) \right\}} \label{eqnFofEpsilon} = \pi a b \epsilon + O(\epsilon^2), \end{equation} whence \begin{equation} \label{eqnLnFPrime} \frac{d}{d\epsilon} \ln (f(\epsilon)) = \frac{1}{\epsilon} + O(1). \end{equation} Thus \begin{align*} \nabla \phi(x,y) \cdot \nabla \ln \abs{ \Omega_{\phi, \phi(x,y)}} &= -\abs{\nabla \phi(x,y)}^2 \left( \frac{1}{M - \phi(x,y)} + O(1) \right)\\ &= -\frac{\abs{\nabla \phi(x,y)}^2}{ M - \phi(x,y)} + O(1)\abs{\nabla \phi(x,y)}^2. \end{align*} The second term on the right is certainly continuous at $0$, since $\nabla \phi(0) = 0$. The first term is continuous at $0$ if and only if $a = b$. In the case that the Hessian of $\phi$ is degenerate at $0$, the above proof works with minor modifications. Using a higher order Taylor approximation of $\phi$, the right hand side of \eqref{eqnFofEpsilon} becomes $c_1 \epsilon^{c_2}$ for constants $c_1 > 0$ and $c_2 = \frac{1}{2} + \frac{1}{n}$ with $n \geq 3$. Now replacing ${1}/{\epsilon}$ with ${c_2}/{\epsilon}$ in~\eqref{eqnLnFPrime}, the remainder of the proof is unchanged.
\end{proof} \section{Proof of Proposition~\ref{ppnCriticalPt}}\label{ptproof First, we obtain an expression for $V(\psi,v)$. Let $\Omega_h=\Omega_{\psi,h}$ and $\Omega_h^\epsilon = \Omega_{\psi^\epsilon, h } = X_\epsilon ^{-1}(\Omega_h)$. \begin{lemma}\label{lmaVariationTau Let $M=\sup\psi$, then the variation~\eqref{eqnVariationTau} is \begin{multline}\label{eqnDDEpsilon} V(\psi, v ) = \int_0^M \frac{1}{\left(\int_{\partial\Omega_h} \frac{\partial \psi}{\partial \hat n} \, d\sigma\right)^2} \Biggl[ \abs{\Omega_h}\int_{\partial \Omega_h} \left( \frac{\partial}{\partial \hat n} \left(v \cdot \nabla \psi \right) - \triangle \psi\, v \cdot \hat n \right)\, d\sigma +\\ + \left(\int_{\partial \Omega_h} \frac{\partial \psi}{\partial \hat n} \, d\sigma \right) \left( \int_{\partial \Omega_h} v \cdot \hat n \, d\sigma\right)\Biggr] \, dh. \end{multline} \end{lemma \iffalse\begin{remark If $-\triangle \psi = \alpha$, then~\eqref{eqnDDEpsilon} reduces to \begin{equation}\label{eqnDDEpsilonTau0} \left.\frac{d}{d\epsilon} \bar \tau^\epsilon(x_0^\epsilon) \right\rvert_{\epsilon = 0} = \int_0^1 \frac{1}{\alpha^2 \abs{\Omega_h}} \int_{\partial \Omega_h} \frac{\partial}{\partial \hat n} \left(v\cdot \nabla \psi\right) \, d\sigma \, dh \end{equation} since \begin{multline*} \abs{\Omega_h}\int_{\partial \Omega_h} \triangle \psi\, v \cdot \hat n \, d\sigma = -\alpha \abs{\Omega_h} \int_{\partial \Omega_h} v \cdot \hat n \, d\sigma\\ = \left( \int_{\Omega_h} \triangle \psi \, dx \right) \int_{\partial \Omega_h} v \cdot \hat n \, d\sigma = \left( \int_{\partial \Omega_h} \frac{\partial \psi}{\partial \hat n} \, d\sigma \right) \int_{\partial \Omega_h} v \cdot \hat n \, d\sigma \end{multline*} \end{remark} \f \begin{proof This follows from Proposition~\ref{ppnFreidlinProblem}. Note that \begin{multline}\label{eqnDDEpsilonAbsOmegah} \left. \frac{d}{d\epsilon} \right\rvert_{\epsilon = 0} \abs{\Omega_h^\epsilon} = \left. 
\frac{d}{d\epsilon} \right\rvert_{\epsilon = 0} \int_{\Omega_h^\epsilon} dx = \left. \frac{d}{d\epsilon} \right\rvert_{\epsilon = 0} \int_{\Omega_h} \abs{\det \nabla X_\epsilon ^{-1}} \, dx \\ = -\int_{\Omega_h} \grad \cdot v dx = -\int_{\partial \Omega_h} v \cdot \hat n \, d\sigma, \end{multline} and \begin{align} \nonumber \left.\frac{d}{d\epsilon}\right\rvert_{\epsilon = 0} \int_{\partial \Omega_h^\epsilon} \frac{\partial \psi^\epsilon}{\partial \hat n} \, d\sigma &= \left.\frac{d}{d\epsilon}\right\rvert_{\epsilon = 0} \int_{\Omega_h^\epsilon} \triangle \psi^\epsilon \, dx \\ \nonumber &= \left.\frac{d}{d\epsilon}\right\rvert_{\epsilon = 0} \int_{\Omega_h} \left(\triangle \psi^\epsilon\right) \circ X_\epsilon^{-1} \, \abs{\det \nabla X_\epsilon^{-1}} \, dx \\ \nonumber &= \int_{\Omega_h} \Bigl[ \triangle \left( v \cdot \nabla \psi \right) - v \cdot \nabla \triangle \psi - \left(\grad \cdot v\right) \triangle \psi \Bigr] \, dx \\ \label{eqnDDEpsilonDPhiDn} &= \int_{\partial \Omega_h} \frac{\partial}{\partial \hat n} \left(v \cdot \nabla \psi\right) \, d\sigma - \int_{\partial \Omega_h} (\triangle \psi) v \cdot \hat n \, d\sigma. \end{align} Thus, using~\eqref{eqnFreidlinProblem} and equations~\eqref{eqnDDEpsilonAbsOmegah}--\eqref{eqnDDEpsilonDPhiDn}, we are done. \end{proof} Before proving Proposition~\ref{ppnCriticalPt}, we require a lemma. \begin{lemma}\label{lmaCriticalPtUnconstrained} A $C^4$ stream function $\psi$ (with a single critical point) is a critical point of the functional $I$ if and only if it solves \begin{equation}\label{eqnCriticalPtVarphi} \nabla F_\psi \cdot \nabla \psi + 2 F_\psi \triangle \psi - G_\psi = 0, \end{equation} where $F_\psi$ and $G_\psi$ are defined by \begin{eqnarray*} G_\psi(x) = \left( \int\limits_{\partial\Omega_{\psi(x)}} \frac{\partial \psi}{\partial \hat n} \, d\sigma \right)^{-1} \quad\text{and}\quad F_\psi(x) = \abs{\bigl.\Omega_{\psi(x)}} G_\psi(x)^2.
\end{eqnarray*} \end{lemma} \begin{proof} With $F_\psi,G_\psi$ as above, equation~\eqref{eqnDDEpsilon} reduces to \begin{equation}\label{eqnVFG} V(\psi, v) = \int_0^M F_\psi \int\limits_{\partial\Omega_{h}} \left[ \frac{\partial}{\partial \hat n} \left( v \cdot \nabla \psi \right) - \triangle \psi \, v \cdot \hat n \right] \, d\sigma \, dh + \int_{0}^M G_\psi \int\limits_{\partial\Omega_{h}} v \cdot \hat n \, d\sigma \, dh. \end{equation} By the co-area formula, we have, first, \begin{equation}\label{aug242} \int_{0}^M G_\psi \int\limits_{\partial\Omega_{h}} v \cdot \hat n \, d\sigma \, dh = \int_0^M \int\limits_{\partial\Omega_{h}} (-G_\psi\, v \cdot \nabla \psi) \, \frac{d\sigma}{\abs{\nabla \psi}} \, dh = -\int_\Omega G_\psi \, v\cdot \nabla \psi \, dx , \end{equation} second, \begin{equation}\label{eqnVFGline1term2} \int_0^M F_\psi \int\limits_{\partial\Omega_{h}} \left[- \triangle \psi \, v \cdot \hat n \right] \, d\sigma \, dh = \int_\Omega F_\psi \, \triangle \psi \, (v \cdot \nabla \psi) \, dx , \end{equation} and, finally, \begin{align} \nonumber \int_0^M F_\psi \int\limits_{\partial\Omega_{h}} \frac{\partial}{\partial \hat n} \left( v \cdot \nabla \psi \right) \, d\sigma \, dh &= -\int_\Omega F_\psi \, \nabla( v \cdot \nabla \psi ) \cdot \nabla \psi \, dx \\ \label{eqnVFGline1term1} &= \int_\Omega (v\cdot\nabla \psi) \left( \nabla F_\psi \cdot \nabla \psi + F_\psi \triangle \psi \right)\, dx . \end{align} In the last equality we used the identity $$ (v\cdot\nabla \psi) \left( \nabla F_\psi \cdot \nabla \psi + F _\psi \triangle \psi \right) +F_\psi \, \nabla( v \cdot \nabla \psi ) \cdot \nabla \psi = \grad \cdot \left[ F _\psi \, (v \cdot \nabla \psi) \, \nabla \psi \right] $$ and the fact that $F _\psi \, (v \cdot \nabla \psi) \, \nabla \psi = 0$ on $\partial \Omega$.
Using~\eqref{aug242}--\eqref{eqnVFGline1term1}, expression~\eqref{eqnVFG} becomes
$$
V(\psi, v) = \int_\Omega (v\cdot \nabla \psi) \left[\nabla F_\psi \cdot \nabla \psi + 2 F_\psi \triangle \psi - G_\psi \right] \, dx .
$$
Thus $V(\psi, v) = 0$ for all $C^4$ functions $v$ which vanish on $\partial \Omega$ if and only if equation~\eqref{eqnCriticalPtVarphi} holds.
\end{proof}

Notice that the same conclusion is obtained if we ask $V(\psi, v)=0$ only for all $v$ compactly supported inside $\Omega$.

\begin{proof}[Proof of Proposition~\ref{ppnCriticalPt}]
Note that the variation $V(\psi, v)$ in~\eqref{eqnVariationTau} depends only on the geometry of the level sets of the stream function $\psi$ and thus is invariant under reparametrizations. Thus if $\psi$ is a solution of~\eqref{eqnCriticalPtVarphi}, then for any monotone function $f$, $f\circ\psi$ is also a solution of~\eqref{eqnCriticalPtVarphi}.

Note that $\phi = \bar \tau^{\grad^\perp \psi}$, the solution of the Freidlin problem~\eqref{aug182} with stream function $\psi$, is only a reparametrization of the level sets of $\psi$. Thus to prove Proposition~\ref{ppnCriticalPt} we only need to show that if $\psi$ solves~\eqref{eqnCriticalPtVarphi}, then $\phi$ solves~\eqref{eqnCriticalPt}.

Note that since $\phi$ solves the Freidlin problem~\eqref{aug182}, we have
\begin{equation}\label{eqnFriedlinConstraint}
\abs{\Omega_{\phi,h}} = \int\limits_{\partial\Omega_{\phi,h}} \abs{\nabla \phi} \, d\sigma,
\end{equation}
and so $F_\phi = -G_\phi$.
We also have
\begin{align*}
\nabla F_\phi(x) &= -\nabla G_\phi(x) = G_\phi(x)^2\, \nabla\left( \int\limits_{\partial\Omega_{\phi,\phi(x)}} -\abs{\nabla \phi} \, d\sigma\right)\\
&= -G_\phi(x)^2\, \nabla \abs{\bigl.\Omega_{\phi,\phi(x)}} = -G_\phi(x)^2 \, \nabla \left( \int\limits_{\phi(x)}^M \int\limits_{\partial\Omega_{\phi,h}} \frac{1}{\abs{\nabla \phi}} \, d\sigma \, dh \right)\\
&= -G_\phi(x)^2 \, \nabla \phi (x) \, \int\limits_{\partial\Omega_{\phi,\phi(x)}} \frac{1}{\abs{\nabla \phi}} \, d\sigma,
\end{align*}
and using this in~\eqref{eqnCriticalPtVarphi} immediately yields~\eqref{eqnCriticalPt}.
\end{proof}

We remark that any solution to~\eqref{eqnCriticalPt} is automatically a solution to the Freidlin problem with itself as stream function (i.e.\ satisfies~\eqref{eqnFriedlinConstraint}). Indeed, integrating~\eqref{eqnCriticalPt} over $\Omega_{\phi,h_0}$ and using the co-area formula gives
$$
-2\int\limits_{\Omega_{\phi, h_0}} \triangle \phi \, dx = \abs{\bigl.\Omega_{\phi,h_0}} + \int_{h_0}^M \int\limits_{\partial\Omega_{\phi,h}} \abs{\nabla \phi}^2 \frac{d\sigma}{\abs{\nabla \phi}} \int\limits_{\partial\Omega_{\phi,h}} \frac{d\sigma}{\abs{\nabla \phi}} \left(\,\int\limits_{\partial\smash{\Omega_{\phi,h}}} \abs{\nabla \phi} \, d\sigma\right)^{-1} \, dh,
$$
and hence
$$
2 \int\limits_{\partial\Omega_{\phi,h_0}} \abs{\nabla \phi} \, d\sigma = \abs{\bigl.\Omega_{\phi,h_0}} + \int_{h_0}^M \int\limits_{\partial\Omega_{\phi,h}} \frac{1}{\abs{\nabla \phi}} \, d\sigma \, dh = 2 \abs{\bigl.\Omega_{\phi, h_0}},
$$
showing~\eqref{eqnFriedlinConstraint} is satisfied.
\end{document}
include_attribute 'apache2'

default['graphite']['version'] = '0.9.12'
default['graphite']['twisted_version'] = '11.1'
default['graphite']['password'] = 'change_me'
default['graphite']['chef_role'] = 'graphite'
default['graphite']['url'] = 'graphite'
default['graphite']['url_aliases'] = []
default['graphite']['listen_port'] = 80
default['graphite']['base_dir'] = '/opt/graphite'
default['graphite']['doc_root'] = '/opt/graphite/webapp'
default['graphite']['storage_dir'] = '/opt/graphite/storage'
default['graphite']['timezone'] = 'America/Los_Angeles'
default['graphite']['django_root'] = '@DJANGO_ROOT@'
default['graphite']['encrypted_data_bag']['name'] = nil
default['graphite']['install_type'] = 'package'
default['graphite']['package_names'] = {
  'whisper' => {
    'package' => 'whisper',
    'source' => 'https://github.com/graphite-project/whisper/zipball/master'
  },
  'carbon' => {
    'package' => 'carbon',
    'source' => 'https://github.com/graphite-project/carbon/zipball/master'
  },
  'graphite_web' => {
    'package' => 'graphite-web',
    'source' => 'https://github.com/graphite-project/graphite-web/zipball/master'
  }
}

#
# graphite_web
#
default['graphite']['web']['debug'] = 'False'
default['graphite']['web']['bitmap_support'] = true
default['graphite']['web']['admin_email'] = 'admin@org.com'
default['graphite']['web']['cluster_servers'] = []
default['graphite']['web']['carbonlink_hosts'] = []
default['graphite']['web']['memcached_hosts'] = ['127.0.0.1:11211']
default['graphite']['web']['database']['NAME'] = node['graphite']['storage_dir'] + '/graphite.db'
default['graphite']['web']['database']['ENGINE'] = 'django.db.backends.sqlite3'
default['graphite']['web']['database']['USER'] = ''
default['graphite']['web']['database']['PASSWORD'] = ''
default['graphite']['web']['database']['HOST'] = ''
default['graphite']['web']['database']['PORT'] = ''
default['graphite']['web']['ldap']['SERVER'] = ''
default['graphite']['web']['ldap']['BASE_USER'] = ''
default['graphite']['web']['ldap']['BASE_PASS'] = ''
default['graphite']['web']['ldap']['USER_QUERY'] = '(sAMAccountName=%s)'
default['graphite']['web']['ldap']['SEARCH_BASE'] = ''
default['graphite']['web']['auth']['REMOTE_USER_AUTH'] = false
default['graphite']['web']['auth']['LOGIN_URL'] = '/account/login'
default['graphite']['web']['email']['BACKEND'] = 'django.core.mail.backends.smtp.EmailBackend'
default['graphite']['web']['email']['HOST'] = 'localhost'
default['graphite']['web']['email']['PORT'] = '25'
default['graphite']['web']['email']['HOST_USER'] = ''
default['graphite']['web']['email']['HOST_PASSWORD'] = ''
default['graphite']['web']['email']['USE_TLS'] = false

default['graphite']['web_server'] = 'apache'
default['graphite']['create_user'] = false
default['graphite']['graph_templates'] = [
  {
    'name' => 'default',
    'background' => 'black',
    'foreground' => 'white',
    'majorLine' => 'white',
    'minorLine' => 'grey',
    'lineColors' => 'blue,green,red,purple,brown,yellow,aqua,grey,magenta,pink,gold,rose',
    'fontName' => 'Sans',
    'fontSize' => '10',
    'fontBold' => 'False',
    'fontItalic' => 'False'
  }
]

case node['graphite']['web_server']
when 'apache'
  default['graphite']['user_account'] = node['apache']['user']
  default['graphite']['group_account'] = node['apache']['group']
when 'uwsgi'
  default['graphite']['user_account'] = 'graphite'
  default['graphite']['group_account'] = 'graphite'
end

default['graphite']['ssl']['enabled'] = false
default['graphite']['ssl']['cipher_suite'] = 'ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP'
default['graphite']['ssl']['certificate_file'] = '/etc/ssl/server.crt'
default['graphite']['ssl']['certificate_key_file'] = '/etc/ssl/server.key'
default['graphite']['apache']['basic_auth']['enabled'] = false
default['graphite']['apache']['basic_auth']['file_path'] = "#{node['graphite']['doc_root']}/htpasswd"
default['graphite']['apache']['basic_auth']['user'] = nil
default['graphite']['apache']['basic_auth']['pass'] = nil
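These defaults are meant to be tuned from a role or wrapper rather than edited in place. A minimal role sketch (the role name and the chosen values below are hypothetical; only the attribute keys come from the file above):

```ruby
# Hypothetical role: override a few of the cookbook defaults above.
# Setting 'web_server' => 'uwsgi' also flips user_account/group_account
# to 'graphite' via the case statement at the end of this attributes file.
name 'graphite_server'
description 'Graphite host served by uWSGI'
run_list 'recipe[graphite]'
override_attributes(
  'graphite' => {
    'listen_port' => 8080,
    'timezone'    => 'UTC',
    'web_server'  => 'uwsgi'
  }
)
```

Role `override_attributes` take precedence over the cookbook `default` level, so these values are already visible when this attributes file is evaluated.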
\section{Introduction}\label{sec-intro}

The famous Rogers-Ramanujan identities assert that
\begin{align}\label{RR}
\sum_{n=0}^\infty \frac{q^{n^2}}{(q;q)_n}=\frac{1}{(q,q^4;q^5)_\infty}, \quad \sum_{n=0}^\infty \frac{q^{n(n+1)}}{(q;q)_n}=\frac{1}{(q^2,q^3;q^5)_\infty}.
\end{align}
Here and throughout this paper, we assume that $|q|<1$ for convergence and use the standard $q$-series notation
\begin{align}
(a;q)_0:=1, \quad (a;q)_n:=\prod\limits_{k=0}^{n-1}(1-aq^k), \quad (a;q)_\infty :=\prod\limits_{k=0}^\infty (1-aq^k), \\
(a_1,\cdots,a_m;q)_n:=(a_1;q)_n\cdots (a_m;q)_n, \quad n\in \mathbb{N}\cup \{\infty\}.
\end{align}
These two sum-product identities have fascinating combinatorial interpretations, and they have stimulated a great deal of research on finding similar identities. One famous work in this direction is Slater's list \cite{Slater}, which contains 130 such identities, for example
\begin{align}
\sum_{n=0}^\infty \frac{q^{2n^2}}{(q;q)_{2n}}&=\frac{1}{(q^2,q^3,q^4,q^5,q^{11},q^{12},q^{13},q^{14};q^{16})_\infty}, \\
\sum_{n=0}^\infty \frac{q^{2n(n+1)}}{(q;q)_{2n+1}}&= \frac{1}{(q,q^4,q^6,q^7,q^9,q^{10},q^{12},q^{15};q^{16})_\infty}.
\end{align}
Identities similar to \eqref{RR} are called Rogers-Ramanujan type identities. It is natural to consider multi-sum Rogers-Ramanujan type identities. For example, the Andrews-Gordon identity (see \cite{Andrews1974,Gordon1961}), which is a generalization of \eqref{RR}, states that for any positive integer $k>1$ and $1\leq i \leq k$,
\begin{align}
&\sum_{n_{k-1}\geq n_{k-2}\geq \cdots \geq n_1\geq 0} \frac{q^{n_1^2+n_2^2+\cdots+n_{k-1}^2+n_i+n_{i+1}+\cdots +n_{k-1}}}{(q;q)_{n_{k-1}-n_{k-2}}(q;q)_{n_{k-2}-n_{k-3}}\cdots (q;q)_{n_2-n_1} (q;q)_{n_1}} \nonumber \\
&=\frac{(q^i,q^{2k+1-i},q^{2k+1};q^{2k+1})_\infty}{(q;q)_\infty}. \label{AG}
\end{align}
Bressoud \cite{Bressoud1980} provided an even-modulus analog of this identity. In a series of works (see e.g.
\cite{Lepowsky-Wilson,Lepowsky-Wilson-1985}), Lepowsky and Wilson developed a Lie-theoretic approach to establishing Rogers-Ramanujan type identities. In particular, they showed that the Rogers-Ramanujan identities, the Andrews-Gordon identity and Bressoud's identity are closely related to the affine Kac-Moody Lie algebra $A_1^{(1)}$. This motivated people to find similar identities by studying other Lie algebras. See the books \cite{Lost2,Sills-book} for more historical background.

In recent years, Kanade and Russell \cite{KR-2019} searched for Rogers-Ramanujan type identities related to level 2 characters of the affine Lie algebra $A_9^{(2)}$, and they conjectured a number of such identities. Let
\begin{align}
F(u,v,w)&:=\sum_{i,j,k\geq 0} \frac{(-1)^kq^{3k(k-1)+(i+2j+3k)(i+2j+3k-1)}u^iv^jw^k}{(q;q)_i(q^4;q^4)_j(q^6;q^6)_k}, \\
G(u,v,w)&:=\sum_{i,j,k\geq 0}\frac{q^{(i+2j+3k)(i+2j+3k-1)/2+j^2}u^iv^jw^k}{(q;q)_i(q^2;q^2)_j(q^3;q^3)_k}.
\end{align}
Some of their conjectural identities are
\begin{align}
F(q,1,q^3)&=\frac{(q^3;q^{12})_\infty}{(q,q^2;q^4)_\infty}, \label{KR-conj-1} \\
F(q,q,q^6)&=\frac{1}{(q^3;q^4)_\infty (q,q^8;q^{12})_\infty}, \label{KR-conj-2} \\
G(q,q^2,q^4)&=\frac{1}{(q;q^3)_\infty (q^3,q^6,q^{11};q^{12})_\infty}, \label{KR-conj-3} \\
G(q^2,q^4,q^5)&=\frac{1}{(q^2;q^3)_\infty (q^3,q^6,q^7;q^{12})_\infty}. \label{KR-conj-4}
\end{align}
Five of their conjectural identities on $F(u,v,w)$, as well as the identities \eqref{KR-conj-3} and \eqref{KR-conj-4} on $G(u,v,w)$, were confirmed by Bringmann, Jennings-Shaffer and Mahlburg \cite{BSM}. Later, using an integral method, Rosengren \cite{Rosengren} gave proofs of all nine conjectural identities on $F(u,v,w)$.

Since there are numerous Rogers-Ramanujan type identities in the literature and some of them have similar shapes, it is more convenient to group some of them together.
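Before turning to this grouping, a quick sanity check (our elaboration, not taken from the sources cited above): the smallest case $k=2$ of the Andrews-Gordon identity \eqref{AG} does reduce to \eqref{RR},

```latex
\begin{align*}
\sum_{n_1\geq 0}\frac{q^{n_1^2+n_1}}{(q;q)_{n_1}}
 &=\frac{(q,q^4,q^5;q^5)_\infty}{(q;q)_\infty}
  =\frac{1}{(q^2,q^3;q^5)_\infty} && (i=1),\\
\sum_{n_1\geq 0}\frac{q^{n_1^2}}{(q;q)_{n_1}}
 &=\frac{(q^2,q^3,q^5;q^5)_\infty}{(q;q)_\infty}
  =\frac{1}{(q,q^4;q^5)_\infty} && (i=2),
\end{align*}
```

where the second equality in each chain uses $(q;q)_\infty=(q,q^2,q^3,q^4,q^5;q^5)_\infty$ to cancel three of the five residue classes modulo $5$.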
Following \cite{Wang}, we shall call an identity of the shape
\begin{align}\label{type-defn}
\text{finite sum of } \quad \sum_{(i_1,\cdots,i_k)\in S}\frac{(-1)^{t(i_1,\cdots,i_k)}q^{Q(i_1,\cdots,i_k)}}{(q^{n_1};q^{n_1})_{i_1}\cdots (q^{n_k};q^{n_k})_{i_k}}=\prod\limits_{ (a,n)\in P} (q^{a};q^n)_\infty^{r(a,n)}
\end{align}
a Rogers-Ramanujan type identity of {\it index} $(n_1,n_2,\cdots,n_k)$. Here $t(i_1,\cdots,i_k)$ is an integer-valued function, $Q(i_1,\cdots,i_k)$ is a rational polynomial in the variables $i_1,\cdots,i_k$, $n_1,\cdots, n_k$ are positive integers with $\gcd(n_1,n_2,\cdots,n_k)=1$, $S$ is a subset of $\mathbb{Z}^k$, $P$ is a finite subset of $\mathbb{Q}^2$ and $r(a,n)$ is an integer-valued function.

With this notion, we see that the identities \eqref{KR-conj-1} and \eqref{KR-conj-2} are of index $(1,4,6)$, while \eqref{KR-conj-3} and \eqref{KR-conj-4} are of index $(1,2,3)$. There are some other identities similar to \eqref{KR-conj-1}--\eqref{KR-conj-4} in the literature. First, we can find some identities involving double sums of index $(1,2)$, $(1,3)$ and $(1,4)$. For instance, analytical forms of two conjectural partition identities of Capparelli \cite{Capparelli} were given in the work of Kanade and Russell \cite{KR-2019} as well as the work of Kur\c{s}ung\"{o}z \cite{Kursungoz}. These two identities are both of index $(1,3)$, and one of them is
\begin{align}\label{Capparelli-eq}
\sum_{i,j\geq 0}\frac{q^{2i^2+6ij+6j^2}}{(q;q)_i(q^3;q^3)_j}&=\frac{1}{(q^2,q^3,q^9,q^{10};q^{12})_\infty}.
\end{align}
Kur\c{s}ung\"{o}z \cite{Kursungoz} also found four identities of index $(1,4)$. Five conjectural identities of index $(1,3)$ were presented in \cite[Conjecture 6.1]{Kursungoz-AnnComb}, such as
\begin{align}
\sum_{i,j\geq 0}\frac{q^{i^2+3j^2+3ij}}{(q;q)_i(q^3;q^3)_j}=\frac{1}{(q,q^3,q^6,q^8;q^9)_\infty}. \label{K-conj-1}
\end{align}
They are based on the work of Kanade and Russell \cite{KR-2015} and so far remain open.
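To see how the template \eqref{type-defn} is read in practice, here is the data for Capparelli's identity \eqref{Capparelli-eq} spelled out (an illustration on our part; the notation is that of the definition above):

```latex
\[
k=2,\qquad (n_1,n_2)=(1,3),\qquad t(i,j)\equiv 0,\qquad Q(i,j)=2i^2+6ij+6j^2,
\]
\[
S=\mathbb{Z}_{\geq 0}^2,\qquad P=\{(2,12),(3,12),(9,12),(10,12)\},\qquad r(a,n)\equiv -1,
\]
```

so the "finite sum" consists of a single term, the product side is $(q^2,q^3,q^9,q^{10};q^{12})_\infty^{-1}$, and the identity has index $(1,3)$.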
Andrews \cite{Andrews2019} and Takigiku and Tsuchioka \cite{Takigiku-2019} provided some identities of index $(1,2)$, which can be proved by summing over one of the indices first and then summing over the other. Uncu and Zudilin \cite{Uncu-Zudilin} presented two identities of index $(1,2)$ and mentioned that they can be explained as instances of Bressoud's identities \cite{Bressoud1979}. Berkovich and Uncu \cite{Berkovich} proved an identity of index $(1,3)$. In 2021, Andrews and Uncu \cite{Andrews-Uncu} proved an identity of index $(1,3)$ and further conjectured that \cite[Conjecture 1.2]{Andrews-Uncu}
\begin{align}\label{AU-conj}
\sum_{i,j\geq 0}\frac{(-1)^jq^{3j(3j+1)/2+i^2+3ij+i+j}}{(q;q)_i(q^3;q^3)_j}=\frac{1}{(q^2,q^3;q^6)_\infty}.
\end{align}
This was first proved by Chern \cite{Chern} and then by Wang \cite{Wang}. Through the integral method, Wang \cite{Wang} also provided new proofs of some other double sum Rogers-Ramanujan type identities of indexes $(1,2)$, $(1,3)$ and $(1,4)$.

As for identities involving triple sums or quadruple sums, besides the Kanade-Russell identities of indexes $(1,2,3)$ and $(1,4,6)$ such as \eqref{KR-conj-1}--\eqref{KR-conj-4}, there are other known identities of indexes $(1,1,6)$, $(1,2,2)$, $(1,2,3)$, $(1,1,1,2)$, $(1,2,2,4)$ and $(1,2,3,4)$. For example, Rosengren \cite[Eq.\ (5.3a)]{Rosengren} proved an identity of index $(1,1,6)$. Kanade and Russell \cite{KR-2019} presented four conjectural identities of index $(1,2,3,4)$. Takigiku and Tsuchioka \cite{Takigiku} proved some identities of indexes $(1,2,2)$ and $(1,2,2,4)$, which are related to the principal characters of the level 5 and level 7 standard modules of the affine Lie algebra $A_2^{(2)}$.
For example, they proved that \cite[Theorem 1.3]{Takigiku}
\begin{align}
&\sum_{i,j,k\geq 0}\frac{q^{\binom{i}{2}+8\binom{j}{2}+10\binom{k}{2}+2ij+2ik+8jk+i+4j+5k}}{(q;q)_i(q^2;q^2)_j(q^2;q^2)_k} \nonumber \\
&=\frac{1}{(q,q^3,q^4,q^5,q^7,q^9,q^{11},q^{13},q^{15},q^{16},q^{17},q^{19};q^{20})_\infty}.
\end{align}
Recently, Mc Laughlin \cite{Laughlin} applied Rosengren's method in \cite{Rosengren} to derive some new Rogers-Ramanujan type identities, including the following one of index $(1,2,3)$:
\begin{align}\label{Laughlin123}
\sum_{i,j,k\geq 0} \frac{(-1)^j q^{(3k+2j-i)(3k+2j-i-1)/2+j(j-1)-i+6j+6k}}{(q;q)_i(q^2;q^2)_j(q^3;q^3)_k}=\frac{(-1;q)_\infty (q^{18};q^{18})_\infty}{(q^3;q^3)_\infty (q^9;q^{18})_\infty}.
\end{align}
Note that in \cite{Laughlin}, such identities are called identities of Kanade-Russell type. In the course of finding generalizations of Capparelli's first partition identity, Dousse and Lovejoy \cite[Eqs.\ (2.6),(2.7)]{Dousse-Lovejoy} proved the following identity of index $(1,1,1,2)$:
\begin{align}\label{DL1112}
\sum_{i,j,k,l\geq 0} \frac{a^{i+l}b^{j+l}q^{\binom{i+j+k+2l+1}{2}+\binom{i+1}{2}+\binom{j+1}{2}+l}}{(q;q)_i(q;q)_j(q;q)_k(q^2;q^2)_l}=(-q;q)_\infty (-aq^2,-bq^2;q^2)_\infty.
\end{align}

Motivated by the above works, in this paper we will use the integral method to establish some Rogers-Ramanujan type identities of the following indexes:
$$(1,1),(1,2), (1,1,1), (1,1,2), (1,1,3), (1,2,2), (1,2,3), (1,2,4).$$
Most of our results are new. Some of them contain additional parameters and thus indicate infinite families of Rogers-Ramanujan type identities.
For instance, we prove that (see Theorems \ref{thm-11-general} and \ref{thm-R-3}) \begin{align} \sum_{i,j\geq 0} \frac{u^{i-j}q^{\binom{i}{2}+\binom{j+1}{2}+a\binom{j-i}{2}}}{(q;q)_i(q;q)_j}&=\frac{(-uq^a,-q/u,q^{a+1};q^{a+1})_\infty}{(q;q)_\infty}, \label{intro-eq-J-3}\\ \sum_{i,j,k\geq0}\frac{(-1)^{i+j}b^{-i+j}c^{i-j+k}q^{(i^{2}+(i-j+2k)^{2}-2i+3j-2k)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}}&=\frac{(-q,bq^{2}/c;q)_{\infty}(bq,c/b;q^{2})_{\infty}} {(b^{2}q^{2}/c;q^{2})_{\infty}}. \end{align} Some of the identities we discovered are quite surprising. For example, we find that for any $u\in \mathbb{C}$ (see Theorems \ref{thm-4112-3} and \ref{thm-123}), \begin{align}\label{intro-eq-4112-3} \sum_{i,j,k\geq0}\frac{(-1)^{i+j}u^{i+3k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}&=\frac{(u^{2};q)_{\infty}(q,-u^{2};q^{2})_{\infty}}{(-u^{6};q^{6})_{\infty}}, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{(i-2j+3k)/2}u^{i+k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}} {(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} &=\frac{(q;q^{2})_{\infty}(-u^{2};q^{3})_{\infty}} {(u^{2};q^{6})_{\infty}}. \end{align} A rough look at these identities will let us doubt their correctness. From the expression of each identity, it is expected that the left side will be a power series in $q^{1/4}$. But it turns out that it is a power series in $q$, as the right side indicates. The rest of this paper is organized as follows. In Section \ref{sec-pre} we collect some useful $q$-series formulas which will be used to derive our identities. In Sections \ref{sec-double} and \ref{sec-triple} we present and prove identities involving double sums and triple sums, respectively. Finally, we give some concluding remarks in Section \ref{sec-concluding} including a new proof of \eqref{DL1112} via the integral method. \section{Preliminaries}\label{sec-pre} Throughout this paper we will denote $\zeta_n=e^{2\pi i/n}$. 
First, we need Euler's $q$-exponential identities \begin{align}\label{Euler} \sum_{n=0}^\infty \frac{z^n}{(q;q)_n}=\frac{1}{(z;q)_\infty}, \quad \sum_{n=0}^\infty \frac{q^{\binom{n}{2}} z^n}{(q;q)_n}=(-z;q)_\infty, \quad |z|<1. \end{align} These two identities are corollaries of the $q$-binomial theorem \begin{align}\label{q-binomial} \sum_{n=0}^\infty \frac{(a;q)_n}{(q;q)_n}z^n=\frac{(az;q)_\infty}{(z;q)_\infty}, \quad |z|<1. \end{align} We also need the Jacobi triple product identity \begin{align}\label{Jacobi} (q,z,q/z;q)_\infty=\sum_{n=-\infty}^\infty (-1)^nq^{\binom{n}{2}}z^n. \end{align} We recall the basic hypergeometric series $${}_r\phi_s\bigg(\genfrac{}{}{0pt}{} {a_1,\dots,a_r}{b_1,\dots,b_s};q,z \bigg):=\sum_{n=0}^\infty \frac{(a_1,\dots,a_r;q)_n}{(q,b_1,\dots,b_s;q)_n}\Big((-1)^nq^{\binom{n}{2}} \Big)^{1+s-r}z^n.$$ For a Laurent series $f(z)=\sum_{n=-N}^\infty a(n)z^n$, we shall use $[z^n]f(z)$ to denote the coefficient of $z^n$. That is, $[z^n]f(z)=a(n)$. We recall the following simple fact \begin{align}\label{int-constant} \oint_K f(z) \frac{dz}{2\pi iz}=[z^0]f(z), \end{align} where $K$ is a positively oriented and simple closed contour such that all poles of $f(z)$ are outside $K$. This fact will be used frequently but usually without mention. There are two steps in using the integral method to prove Rogers-Ramanujan type identities: \begin{itemize} \item \textbf{Step 1.} Express the sum side as a finite sum of integrals of some infinite products. \item \textbf{Step 2.} Evaluate each of these integrals. \end{itemize} The first step is quite straightforward. In the proofs of all the Rogers-Ramanujan type identities appeared in \cite{Rosengren}, \cite{Wang} and this paper, this step will be done by the use of \eqref{Euler} and \eqref{Jacobi}. The main difficulty lies in the second step. 
In the book \cite[Sections 4.9 and 4.10]{GR-book}, calculations of the integral $$\oint_K \frac{(a_1z,\cdots,a_Az,b_1/z,\cdots,b_B/z;q)_\infty}{(c_1z,\cdots,c_Cz,d_1/z,\cdots,d_D/z;q)_\infty}z^{m}\frac{dz}{2\pi iz} $$ are given. Here $m$ is an integer, $K$ is a deformation of the (positively oriented) unit circle so that the poles of $1/(c_1z,\cdots,c_Cz;q)_\infty$ lie outside the contour and the origin and poles of $1/(d_1/z,\cdots,d_D/z;q)_\infty$ lie inside the contour. Throughout this paper, all the integral paths will be chosen in this way and we will omit them from the integral symbol. We will not need these general calculations. Instead, we recall some known formulas which will suffice to establish our multi-sum Rogers-Ramanujan type identities. First, from \cite[Eq.\ (4.10.8)]{GR-book} we find that when $|a_1a_2a_3|<|c_1c_2c_3|$, \begin{align}\label{GR41010} &\oint \frac{(a_{1}z,a_{2}z,a_{3}z,b_{1}/z;q)_{\infty}} {(c_{1}z,c_{2}z,c_{3}z,d_{1}/z;q)_{\infty}}\frac{dz}{2\pi iz} \\ & = \frac{(a_{1}d_{1},a_{2}d_{1},a_{3}d_{1},b_{1}/d_{1};q)_{\infty}} {(q,c_{1}d_{1},c_{2}d_{1},c_{3}d_{1};q)_{\infty}} \times{}_4\phi _3\left( \begin{gathered} c_{1}d_{1},c_{2}d_{1},c_{3}d_{1},qd_{1}/b_{1}\\ a_{1}d_{1},a_{2}d_{1},a_{3}d_{1} \end{gathered} ;q,b_{1}/d_{1} \right). 
\nonumber \end{align} From \cite[Eq.\ (4.11.2), (4.11.3)]{GR-book} we find \begin{align} \oint \frac{(cz/\beta,qz/c\alpha,c\alpha/z,q\beta/cz;q)_{\infty}}{(az,bz,\alpha/z,\beta/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(ab\alpha\beta,c,q/c,c\alpha/\beta,q\beta/c\alpha;q)_{\infty}}{(a\alpha,a\beta,b\alpha,b\beta,q;q)_{\infty}}, \label{GR4112} \end{align} \begin{align} &\oint \frac{(\delta z,qz/\gamma,\gamma/z,\gamma z/\alpha\beta,q\alpha\beta/\gamma z;q)_{\infty}} {(az,bz,cz,\alpha/z,\beta/z;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &= \frac{(\gamma /\alpha,q\alpha/\gamma ,\gamma/\beta,q\beta/\gamma,\delta/a,\delta/b,\delta/c;q)_{\infty}} {(a\alpha,a\beta,b\alpha,b\beta,c\alpha,c\beta,q;q)_{\infty}}, \label{GR4113} \end{align} where $\delta=abc\alpha\beta$, $abc\alpha\beta\gamma\neq 0$ and $$a\alpha,a\beta,b\alpha,b\beta,c\alpha,c\beta \neq q^{-n}, \quad n=0,1,2,\dots.$$ Clearly, \eqref{GR4112} follows from \eqref{GR4113} after letting $c\rightarrow 0$. Next, we recall some identities in Rosengren's work \cite{Rosengren}. From \cite[Eq.\ (3.2)]{Rosengren} we know that when $\alpha_1\alpha_2=\beta_1\beta_2\beta_3$, \begin{align}\label{R32} \oint \frac{(\alpha_1z,\alpha_2z,qz,1/z;q)_\infty}{(\beta_1z,\beta_2z,\beta_3z;q)_\infty}\frac{\diff z}{2\pi iz}=\frac{(\beta_1,\alpha_1/\beta_1;q)_\infty}{(q;q)_\infty}{}_2\phi_1\bigg(\genfrac{}{}{0pt}{}{\alpha_2/\beta_2,\alpha_2/\beta_3}{\beta_1};q,\frac{\alpha_1}{\beta_1}\bigg). \end{align} From the proof of \cite[Proposition\ 3.2]{Rosengren}, we conclude that \begin{align}\label{Prop32-proof} \oint \frac{(abz,cz,qz/t,t/z;q)_{\infty}}{(az,bz,cz/t,d/z;q)_{\infty}}\frac{dz}{2\pi iz}=\frac{(abd,dq/t,t,c;q)_{\infty}}{(q,ad,bd,cd/t;q)_{\infty}} {}_3\phi _2\left( \begin{gathered} a,b,cd/t\\ c,abd \end{gathered} ;q,t \right). \end{align} Using the above formulas in Step 2, we can convert the sum-side of our Rogers-Ramanujan type identities to a ${}_r\phi_s$ series. 
Then to complete Step 2, it remains to evaluate this ${}_r\phi_s$ series. Here we recall the $q$-Gauss summation formula \cite[(\uppercase\expandafter{\romannumeral2}. 8)]{GR-book} \begin{align}\label{q-Gauss} {}_2\phi_1\bigg(\genfrac{}{}{0pt}{}{a,b}{c};q,\frac{c}{ab} \bigg)=\frac{(c/a,c/b;q)_\infty}{(c,c/ab;q)_\infty}, \end{align} the Bailey-Daum summation formula \cite[(\uppercase\expandafter{\romannumeral2}. 9)]{GR-book} \begin{align}\label{BD} {}_2\phi_1\bigg(\genfrac{}{}{0pt}{} {a,b}{aq/b};q,-\frac{q}{b} \bigg)=\frac{(-q;q)_\infty (aq,aq^2/b^2;q^2)_\infty}{(aq/b,-q/b;q)_\infty} \end{align} and the $q$-Dixon summation formula \cite[(\uppercase\expandafter{\romannumeral2}.13)]{GR-book} \begin{align}\label{II13} {}_4\phi _3\left( \begin{gathered} a,-qa^{1/2},b,c\\ -a^{1/2},aq/b,aq/c \end{gathered} ;q, \frac{qa^{1/2}}{bc} \right) =\frac{(aq,qa^{1/2}/b,qa^{1/2}/c,aq/bc;q)_{\infty}} {(aq/b,aq/c,qa^{1/2},qa^{1/2}/bc;q)_{\infty}}. \end{align} \section{Identities involving double sums}\label{sec-double} In this section, we present some identities involving double sums of indexes $(1,1)$ and $(1,2)$. \subsection{Identities of index $(1,1)$} \begin{theorem}\label{thm-R-1} We have \begin{align} \sum_{i,j\geq0}\frac{(-1)^{i+j}u^{i}v^{j}q^{((i-j)^{2}-i-j)/2}}{(q;q)_{i}(q;q)_{j}}= \frac{(u,v;q)_{\infty}}{(uv/q;q)_{\infty}}. \label{eq-R-1} \end{align} \end{theorem} Note that the identity \eqref{eq-R-1} is symmetric in $u$ and $v$. \begin{proof} Setting $a=c=0$ in \eqref{Prop32-proof}, we deduce that \begin{align} (q;q)_{\infty}\oint \frac{(qz/t,t/z;q)_{\infty}}{(bz,d/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(dq/t,t;q)_{\infty}}{(bd;q)_{\infty}} \sum_{n\geq0}\frac{(b;q)_{n}}{(q;q)_{n}}t^{n} =\frac{(dq/t,bt;q)_{\infty}} {(bd;q)_{\infty}}, \end{align} where for the last equality we used \eqref{q-binomial}. 
Now by \eqref{Euler} and \eqref{Jacobi}, \[ \begin{split} LHS&=\oint \sum_{i,j\geq0}\sum_{k= -\infty}^{\infty}\frac{(bz)^{i} (d/z)^{j} (-t/z)^{k} q^{(k^{2}-k)/2}}{(q;q)_{i}(q;q)_{j}} \frac{dz}{2\pi iz}\\ &=\sum_{i,j\geq0}\frac{(-1)^{i+j}b^{i}d^{j}t^{i-j}q^{((i-j)^{2}-i+j)/2}}{(q;q)_{i}(q;q)_{j}}. \end{split} \] Here we used \eqref{int-constant} for the second equality. This proves the desired identity after replacing $bt$ by $u$, and $dq/t$ by $v$. \end{proof} We can also prove Theorem \ref{thm-R-1} by the following way. \begin{proof}[Second proof of Theorem \ref{thm-R-1}] Summing over $i$ first using \eqref{Euler} and then applying \eqref{q-binomial}, we have \begin{align*} &\sum_{i,j\geq0}\frac{(-1)^{i+j}u^{i}v^{j}q^{((i-j)^{2}-i-j)/2}}{(q;q)_{i}(q;q)_{j}}=\sum_{j\geq 0} \frac{(-v)^{j}q^{(j^2-j)/2}}{(q;q)_j} \sum_{i\geq 0}\frac{(-uq^{-j})^{i}q^{(i^2-i)/2}}{(q;q)_i} \nonumber \\ &=\sum_{j\geq 0} \frac{(uq^{-j};q)_\infty (-v)^jq^{(j^2-j)/2}}{(q;q)_j} =(u;q)_\infty \sum_{j\geq 0}\frac{(uv/q)^{j}(q/u;q)_j}{(q;q)_j} \nonumber \\ &=\frac{(u,v;q)_\infty }{(uv/q;q)_\infty}. \qedhere \end{align*} \end{proof} Setting $u=-q$, $v=-q^{1/2}$ and $u=-q$, $v=-q$ in Theorem \ref{thm-R-1}, we obtain \begin{align} \sum_{i,j\geq 0}\frac{q^{((i-j)^{2}+i)/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{1}{(q^{1/2};q)_{\infty}^{2}}, \label{eq-thm3.1-cor-1} \\ \sum_{i,j\geq 0}\frac{q^{((i-j)^{2}+i+j)/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{(q^{2};q^{2})_{\infty}^{2}}{(q;q)_{\infty}^{3}}.\label{eq-thm3.1-cor-1.1} \end{align} \begin{theorem}\label{thm-4112-2} We have \begin{equation}\label{eq-4112-2} \sum_{i,j\geq0}\frac{(-1)^{i+j}u^{i}q^{(i-j)^{2}}}{(q^{2};q^{2})_{i}(q^{2};q^{2})_{j}} =\frac{(u;q)_{\infty}(q;q^{2})_{\infty}}{(u;q^{2})_{\infty}^{2}}. 
\end{equation} \end{theorem} \begin{proof} Setting $c=q^{1/2}$, $a=-b$ and $\alpha=-\beta$ in \eqref{GR4112}, then multiplying both sides by $(q^{2};q^{2})_{\infty}$, we obtain by \eqref{Euler} and \eqref{Jacobi} that the left side of \eqref{GR4112} becomes \begin{align*} LHS&=(q^{2};q^{2})_{\infty}\oint \frac{(qz^{2}/\alpha^{2},q\alpha^{2}/z^{2};q^{2})_{\infty}} {(a^{2}z^{2},\alpha^{2}/z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz}\\ &=\oint \sum_{i,j\geq0}\sum_{k= -\infty}^{\infty}\frac{(a^{2}z^{2})^{i} (\alpha^{2}/z^{2})^{j} (-q\alpha^{2}/z^{2})^{k}q^{k^{2}-k}}{(q^{2};q^{2})_{i}(q^{2};q^{2})_{j}} \frac{dz}{2\pi iz}\\ &= \sum_{i,j\geq0}\frac{(-1)^{i+j}a^{2i}\alpha^{2i}q^{(i-j)^{2}}}{(q^{2};q^{2})_{i}(q^{2};q^{2})_{j}}, \end{align*} and the right side of \eqref{GR4112} becomes \begin{align*} RHS=\frac{(a^{2}\alpha^{2};q)_{\infty}(q;q^{2})_{\infty}}{(a^{2}\alpha^{2};q^{2})_{\infty}^{2}}. \end{align*} This proves the theorem after replacing $\alpha^2 a^2$ by $u$. \end{proof} For example, if we set $u=-q$, $u=-q^{3/2}$ or $u=-q^2$ in the above theorem and replace $q$ by $q^2$ in the second assignment, we obtain \begin{align} \sum_{i,j\geq0}\frac{(-1)^{j}q^{(i-j)^{2}+i}}{(q^{2};q^{2})_{i}(q^{2};q^{2})_{j}}&=\frac{(q;q^{2})_{\infty}^{2}}{(q^{2};q^{4})_{\infty}^{2}}, \\ \sum_{i,j\geq0}\frac{(-1)^{j}q^{2(i-j)^{2}+3i}}{(q^{4};q^{4})_{i}(q^{4};q^{4})_{j}}&= \frac{(q^2,q^{10};q^{8})_{\infty}(q^{3};q^{4})_{\infty}}{(q^{5};q^{4})_{\infty}}, \\ \sum_{i,j\geq0}\frac{(-1)^{j}q^{(i-j)^{2}+2i}}{(q^{2};q^{2})_{i}(q^{2};q^{2})_{j}}&=\frac{(q,q^{2},q^{6};q^{4})_{\infty}}{(q^{5};q^{4})_{\infty}}. \end{align} \begin{theorem}\label{thm-T11} We have \begin{align} \sum_{i,j\geq0}\frac{(-1)^{i+j}q^{(i-j)^{2}/2}(q^{j}-q^{i+1/2})}{(q;q)_{i}(q;q)_{j}} &=\frac{(q^{1/2};q)_{\infty}^{2}} {(q;q)_{\infty}}, \label{T11-2}\\ \sum_{i,j\geq0}\frac{q^{(i-j)^{2}/2}(q^{j}+q^{i+1/2})}{(q;q)_{i}(q;q)_{j}} &=\frac{(q;q^{2})_{\infty}} {(q^{2};q^{2})_{\infty}(q^{1/2};q)_{\infty}^{2}}. 
\label{T11-3} \end{align} \end{theorem} \begin{proof} From \eqref{GR41010} and \eqref{II13} we have \begin{align}\label{Eq14} &\oint \frac{(-a^{1/2}z,a^{1/2}qz,abz,b/z;q)_{\infty}} {(az,-a^{1/2}qz,a^{1/2}z,1/z;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ & = \frac{(-a^{1/2},a^{1/2}q,ab,b;q)_{\infty}} {(q,a,-a^{1/2}q,a^{1/2};q)_{\infty}} {}_4\phi _3\left( \begin{gathered} a,-a^{1/2}q,a^{1/2},q/b\\ -a^{1/2},a^{1/2}q,ab \end{gathered} ;q,b \right) \nonumber \\ &=\frac{(-a^{1/2},aq,a^{1/2}b,a^{1/2}b;q)_{\infty}} {(a^{1/2},a,-a^{1/2}q,a^{1/2}q;q)_{\infty}}. \end{align} Let $a=q^{2}$ in \eqref{Eq14}. We obtain \begin{align}\label{Eq15} \oint \frac{(-qz,bq^{2}z,b/z;q)_{\infty}} {(-q^{2}z,qz,1/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(-q,q^{3},bq,bq;q)_{\infty}} {(q,q^{2},-q^{2},q^{2};q)_{\infty}}. \end{align} Setting $b=q^{-1/2}$ in \eqref{Eq15} and multiplying both sides by $(q;q)_\infty$, we see that its left side becomes \begin{align*} &(q;q)_{\infty} \oint \frac{(-qz,q^{3/2}z,1/q^{1/2}z;q)_{\infty}} {(-q^{2}z,qz,1/z;q)_{\infty}}\frac{dz}{2\pi iz} \\ &=\oint (1+qz)\sum_{i,j\geq0}\frac{(qz)^{i}(1/z)^{j}}{(q;q)_{i}(q;q)_{j}} \sum_{k= -\infty}^{\infty}(-q^{1/2}z)^{-k}q^{(k^{2}-k)/2}\frac{dz}{2\pi iz} \\ &=\sum_{i,j\geq0}\frac{(-1)^{i+j}q^{(i-j)^{2}/2}(q^{j}-q^{i+1/2})}{(q;q)_{i}(q;q)_{j}}, \end{align*} and its right side becomes \begin{align*} RHS=\frac{(-q,q^{3},q^{1/2},q^{1/2};q)_{\infty}} {(q^{2},-q^{2},q^{2};q)_{\infty}} =\frac{(q^{1/2};q)_{\infty}^{2}} {(q;q)_{\infty}}. \end{align*} This proves \eqref{T11-2}. Similarly, setting $b=-q^{-1/2}$ in \eqref{Eq15} and applying \eqref{Euler} and \eqref{Jacobi}, we obtain \eqref{T11-3}. \end{proof} Note that if we set $b=-1$ in \eqref{Eq15}, then we obtain \eqref{eq-thm3.1-cor-1.1}. \begin{rem}\label{rem-sec3} Similar to the second proof of Theorem \ref{thm-R-1}, Theorems \ref{thm-4112-2} and \ref{thm-T11} can also be proved by summing over one of the indices first. We omit these proofs.
\end{rem} Now we present another set of Rogers-Ramanujan type identities of index $(1,1)$. These identities are proved by repeated use of the Jacobi triple product identity, and we do not need to calculate any ${}_r\phi_s$ series. \begin{theorem}\label{thm-11-general} We have \begin{align} \sum_{i,j\geq 0} \frac{u^{i-j}q^{\binom{i}{2}+\binom{j+1}{2}+a\binom{j-i}{2}}}{(q;q)_i(q;q)_j}=\frac{(-uq^a,-q/u,q^{a+1};q^{a+1})_\infty}{(q;q)_\infty}. \end{align} \end{theorem} \begin{proof} By the Jacobi triple product identity, we have \begin{align*} &(q;q)_\infty (q^a;q^a)_\infty \oint (uz,q/uz;q)_\infty (z,q^a/z;q^a)_\infty \frac{dz}{2\pi iz} \nonumber \\ &=\oint \sum_{i,j=-\infty}^\infty (-uz)^i q^{\binom{i}{2}} (-z)^jq^{a\binom{j}{2}}\frac{dz}{2\pi iz} \nonumber \\ &=\sum_{i=-\infty}^\infty u^iq^{(a-1)i/2}q^{(a+1)i^2/2} \nonumber \\ &=(-uq^a,-q/u,q^{a+1};q^{a+1})_\infty. \end{align*} By \eqref{Euler} and \eqref{Jacobi}, the left side of this identity can also be written as \begin{align*} LHS&=(q;q)_\infty \oint \sum_{i,j\geq 0}\frac{(-uz)^iq^{\binom{i}{2}}}{(q;q)_i}\cdot \frac{(-q/uz)^jq^{\binom{j}{2}}}{(q;q)_j}\cdot \sum_{k=-\infty}^\infty (-z)^k q^{a\binom{k}{2}}\frac{dz}{2\pi iz} \nonumber \\ &=(q;q)_\infty\sum_{i,j\geq 0}\frac{u^{i-j}q^{\binom{i}{2}+\binom{j+1}{2}+a\binom{j-i}{2}}}{(q;q)_i(q;q)_j}. \end{align*} This proves the desired identity. \end{proof} Replacing $q$ by $q^{m_1}$ and setting $a=m_2/m_1$ and $u=\pm q^{n}$, where $m_1,m_2>0$ and $n\in \mathbb{R}$, we obtain the following corollary. 
\begin{corollary}\label{cor-Jacobi-add-1} We have \begin{align} &\sum_{i,j\geq 0}\frac{q^{((m_{1}+m_{2})(i^{2}+j^{2})-2m_{2}ij+(2n-m_{1}+m_{2})(i-j))/2}}{(q^{m_{1}};q^{m_{1}})_{i}(q^{m_{1}};q^{m_{1}})_{j}} \nonumber \\ &=\frac{(-q^{m_{1}-n},-q^{m_{2}+n},q^{m_{1}+m_{2}};q^{m_{1}+m_{2}})_{\infty}} {(q^{m_{1}};q^{m_{1}})_{\infty}}, \label{eq-J-1} \\ &\sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{((m_{1}+m_{2})(i^{2}+j^{2})-2m_{2}ij+(2n-m_{1}+m_{2})(i-j))/2}}{(q^{m_{1}};q^{m_{1}})_{i}(q^{m_{1}};q^{m_{1}})_{j}} \nonumber \\ &=\frac{(q^{m_{1}-n},q^{m_{2}+n},q^{m_{1}+m_{2}};q^{m_{1}+m_{2}})_{\infty}} {(q^{m_{1}};q^{m_{1}})_{\infty}}. \label{eq-J-2} \end{align} \end{corollary} As examples, if we set $(m_1,m_2,n)=(1,3,-1)$ in \eqref{eq-J-1}, we obtain $$\sum_{i,j=0}^\infty \frac{q^{2(i^2+j^2)-3ij}}{(q;q)_i(q;q)_j}=\frac{(-q^2,-q^2,q^4;q^4)_\infty}{(q;q)_\infty}.$$ Setting $(m_1,m_2,n)$ as $(3,4,0)$, $(3,4,1)$ or $(3,4,2)$ in \eqref{eq-J-2}, we obtain \begin{align} \sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{(7i^{2}+7j^{2}-8ij+i-j)/2}}{(q^{3};q^{3})_{i}(q^{3};q^{3})_{j}}&=\frac{(q^{3},q^{4},q^{7};q^{7})_{\infty}}{(q^{3};q^{3})_{\infty}}, \\ \sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{(7i^{2}+7j^{2}-8ij+3i-3j)/2}}{(q^{3};q^{3})_{i}(q^{3};q^{3})_{j}}&= \frac{(q^{2},q^{5},q^{7};q^{7})_{\infty}}{(q^{3};q^{3})_{\infty}}, \\ \sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{(7i^{2}+7j^{2}-8ij+5i-5j)/2}}{(q^{3};q^{3})_{i}(q^{3};q^{3})_{j}}&= \frac{(q,q^{6},q^{7};q^{7})_{\infty}}{(q^{3};q^{3})_{\infty}}. \end{align} \begin{theorem}\label{thm-J-3} We have \begin{align}\label{eq-thm-J-3} &\sum_{i,j\geq0}\frac{(-1)^{i+j}u^{i-j}q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}} \\ &=\frac{(u^{-1}q^{2a},uq^{2a+1},q^{4a+1};q^{4a+1})_{\infty}+ (uq^{2a},u^{-1}q^{2a+1},q^{4a+1};q^{4a+1})_{\infty}}{(q;q)_{\infty}}. 
\nonumber \end{align} \end{theorem} \begin{proof} By the Jacobi triple product identity, we have \begin{align*} &(q;q)_{\infty}(q^{a};q^{a})_{\infty}\oint (uz^{2},1/uz^{2};q)_{\infty}(q^{a/2}z,q^{a/2}/z;q^{a})_{\infty} \frac{dz}{2\pi iz}\\ &= \oint (1-uz^{2}) \sum_{i,j=-\infty}^{\infty}(-1/uz^{2})^{i}q^{(i^{2}-i)/2}(-q^{a/2}z)^{j}q^{a(j^{2}-j)/2} \frac{dz}{2\pi iz} \\ &= \oint \Big(\sum_{i,j=-\infty}^{\infty}(-1/uz^{2})^{i}q^{(i^{2}-i)/2}(-q^{a/2}z)^{j}q^{a(j^{2}-j)/2} \\ &\quad -uz^{2}\sum_{i,j=-\infty}^{\infty}(-1/uz^{2})^{i}q^{(i^{2}-i)/2}(-q^{a/2}z)^{j}q^{a(j^{2}-j)/2} \Big)\frac{dz}{2\pi iz} \\ &=\sum_{i=-\infty}^{\infty} \big((-1)^{i}u^{-i}q^{((4a+1)i^{2}-i)/2}+(-1)^{i}u^{-i}q^{((4a+1)i^{2}+i)/2}\big) \\ &=(u^{-1}q^{2a},uq^{2a+1},q^{4a+1};q^{4a+1})_{\infty}+ (uq^{2a},u^{-1}q^{2a+1},q^{4a+1};q^{4a+1})_{\infty}. \end{align*} Here the third equality follows since in the first sum only the terms with $j=2i$ contribute to the integral, and in the second sum only the terms with $j=2i-2$ contribute. We have also replaced $i$ by $i+1$ in the outcome of the integral of the second sum. By \eqref{Euler} and \eqref{Jacobi}, we see that the left side of the above identity is \begin{align*} LHS&=(q;q)_{\infty}\oint \sum_{i,j\geq0}\sum_{k= -\infty}^{\infty}\frac{(-uz^{2})^{i}q^{(i^{2}-i)/2} (-1/uz^{2})^{j} q^{(j^{2}-j)/2} (-q^{a/2}/z)^{k}q^{a(k^{2}-k)/2}}{(q;q)_{i}(q;q)_{j}} \frac{dz}{2\pi iz}\\ &=(q;q)_{\infty} \sum_{i,j\geq 0}\frac{(-1)^{i+j}u^{i-j}q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}. \end{align*} This proves the theorem. \end{proof} If we set $u=\pm 1$, $q^{2a}$ and $q^{2a+1}$ in Theorem \ref{thm-J-3}, we obtain the following corollary.
\begin{corollary}\label{cor-J-4} We have \begin{align}\label{eq-J-3} \sum_{i,j\geq0}\frac{(-1)^{i+j}q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{2(q^{2a},q^{2a+1},q^{4a+1};q^{4a+1})_{\infty}}{(q;q)_{\infty}}, \\ \sum_{i,j\geq0}\frac{q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{2(-q^{2a},-q^{2a+1},q^{4a+1};q^{4a+1})_{\infty}}{(q;q)_{\infty}}, \\ \sum_{i,j\geq0}\frac{(-1)^{i+j}q^{2a(i-j)}q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{(q,q^{4a},q^{4a+1};q^{4a+1})_\infty}{(q;q)_\infty}, \\ \sum_{i,j\geq0}\frac{(-1)^{i+j}q^{(2a+1)(i-j)}q^{(i^{2}-i+j^{2}-j+4a(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{(q^{-1},q^{4a+2},q^{4a+1};q^{4a+1})_\infty}{(q;q)_\infty}. \end{align} \end{corollary} Setting $a=2$ and $a=3$ in the first two identities in Corollary \ref{cor-J-4}, we obtain \begin{align} \sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{(i^{2}-i+j^{2}-j+8(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&= \frac{2(q^{4},q^{5},q^{9};q^{9})_{\infty}}{(q;q)_{\infty}}, \\ \sum_{i,j\geq 0}\frac{(-1)^{i+j}q^{(i^{2}-i+j^{2}-j+12(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{2(q^{6},q^{7},q^{13};q^{13})_{\infty}}{(q;q)_{\infty}}, \\ \sum_{i,j\geq 0}\frac{q^{(i^{2}-i+j^{2}-j+8(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&= \frac{2(-q^{4},-q^{5},q^{9};q^{9})_{\infty}}{(q;q)_{\infty}}, \\ \sum_{i,j\geq 0}\frac{q^{(i^{2}-i+j^{2}-j+12(i-j)^{2})/2}}{(q;q)_{i}(q;q)_{j}}&=\frac{2(-q^{6},-q^{7},q^{13};q^{13})_{\infty}}{(q;q)_{\infty}}. \end{align} \subsection{Identities of index $(1,2)$} \begin{theorem}\label{thm-R-5} We have \begin{align} \sum_{i,j\geq0}\frac{(-1)^{i}u^{i+j}q^{i^2+2ij+2j^2-i-j}}{(q;q)_{i}(q^{2};q^{2})_{j}}=(u;q^{2})_{\infty}, \label{eq-R-5a} \\ \sum_{i,j\geq0}\frac{(-1)^{i} u^{i+2j}q^{i^2+2ij+2j^2-i-j}}{(q;q)_{i}(q^{2};q^{2})_{j}}=(u;q)_{\infty}. 
\label{eq-R-5b} \end{align} \end{theorem} \begin{proof} Setting $\alpha_{1}=\beta_{2}$ in \eqref{R32} and using \eqref{q-binomial}, we deduce that \begin{align}\label{eq2.1} \oint \frac{(\beta_{1}\beta_{3}z,qz,1/z;q)_{\infty}}{(\beta_{1}z,\beta_{3}z;q)_{\infty}}\frac{dz}{2\pi iz}&=\frac{(\beta_1,\beta_2/\beta_1;q)_\infty}{(q;q)_\infty} \sum_{n=0}^\infty \frac{(\beta_1\beta_3/\beta_2;q)_n}{(q;q)_n}\left(\frac{\beta_2}{\beta_1}\right)^n \nonumber \\ &=\frac{(\beta_{1},\beta_{3};q)_{\infty}}{(q;q)_{\infty}}. \end{align} Setting $\beta_{1}=-\beta_{3}$ in \eqref{eq2.1}, we obtain \begin{align}\label{L-constant} (q;q)_{\infty}\oint \frac{(-\beta_{1}^{2}z,qz,1/z;q)_{\infty}}{(\beta_{1}^{2}z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} = (\beta_{1}^{2};q^{2})_{\infty}. \end{align} By \eqref{Euler} and \eqref{Jacobi}, we see that its left side is \begin{align*} LHS&=\oint \sum_{i,j\geq0}\sum_{k= -\infty}^{\infty}\frac{(\beta_{1}^{2}z)^{i}q^{(i^{2}-i)/2} (\beta_{1}^{2}z^{2})^{j} (-1/z)^{k}q^{(k^{2}-k)/2} }{(q;q)_{i}(q^{2};q^{2})_{j}} \frac{dz}{2\pi iz}\\ &=\sum_{i,j\geq 0}\frac{(-1)^{i}\beta_{1}^{2i+2j}q^{(i^{2}+(i+2j)^{2}-2i-2j)/2}}{(q;q)_{i}(q^{2};q^{2})_{j}}. \end{align*} This proves \eqref{eq-R-5a} after replacing $\beta_1^2$ by $u$. Replacing $q$ by $q^{2}$ in \eqref{eq2.1} and setting $\beta_{3}=\beta_{1}q$, we obtain \begin{align*} (q^{2};q^{2})_{\infty}\oint \frac{(\beta_{1}^{2}qz,q^{2}z,1/z;q^{2})_{\infty}}{(\beta_{1}z;q)_{\infty}}\frac{dz}{2\pi iz} = (\beta_{1};q)_{\infty}. \end{align*} By \eqref{Euler} and \eqref{Jacobi}, we see that its left side is \begin{align*} LHS&=\oint \sum_{i,j\geq 0} \sum_{k= -\infty}^{\infty}\frac{(\beta_{1}z)^{i} (-\beta_{1}^{2}qz)^{j}q^{j^{2}-j} (-1/z)^{k}q^{k^{2}-k} }{(q;q)_{i}(q^{2};q^{2})_{j}} \frac{dz}{2\pi iz}\\ &=\sum_{i,j\geq 0}\frac{(-1)^{i}\beta_{1}^{i+2j}q^{j^{2}+(i+j)^{2}-i-j}}{(q;q)_{i}(q^{2};q^{2})_{j}}. \end{align*} This proves \eqref{eq-R-5b} after replacing $\beta_1$ by $u$. 
\end{proof} For example, if we set $u=q$ and $q^{2}$ in \eqref{eq-R-5a}, we obtain \begin{align} \sum_{i,j\geq 0}\frac{(-1)^{i}q^{i^{2}+2ij+2j^2}}{(q;q)_{i}(q^{2};q^{2})_{j}}&=(q;q^{2})_{\infty}, \label{add-12-1}\\ \sum_{i,j\geq 0}\frac{(-1)^{i}q^{i^{2}+2ij+2j^2+i+j}}{(q;q)_{i}(q^{2};q^{2})_{j}}&=(q^{2};q^{2})_{\infty}. \label{add-12-2} \end{align} If we set $u=q$ and $-q$ in \eqref{eq-R-5b}, we obtain \begin{align} \sum_{i,j\geq 0}\frac{(-1)^{i}q^{i^{2}+2ij+2j^{2}+j}}{(q;q)_{i}(q^{2};q^{2})_{j}}&= (q;q)_{\infty}, \label{add-12-3} \\ \sum_{i,j\geq 0}\frac{q^{i^{2}+2ij+2j^{2}+j}}{(q;q)_{i}(q^{2};q^{2})_{j}}&=\frac{1}{(q;q^{2})_{\infty}}. \label{add-12-4} \end{align} Note that \eqref{add-12-4} recovers \cite[Eq.\ (1.20)]{Wang} and hence \eqref{eq-R-5b} can be viewed as a generalization of it. \begin{rem} The identity \eqref{eq-R-5a} can also be deduced from the following identity in Lovejoy's work \cite[Eq.\ (1.7)]{Lovejoy2006}: \begin{align}\label{Lovejoy-constant-eq} [z^0]\frac{(-azq,-zq,-1/z;q)_\infty}{(-aqz^2;q^2)_\infty}=(-aq;q^2)_\infty. \end{align} Indeed, after setting $aq=-\beta_1^2$ and replacing $z$ by $-z$, we see that this identity is equivalent to \eqref{L-constant}. Lovejoy \cite{Lovejoy2006} also provided a partition interpretation to \eqref{Lovejoy-constant-eq} and hence the identity \eqref{eq-R-5a} can also be explained as a partition identity. \end{rem} \section{Identities involving triple sums}\label{sec-triple} In this section, we will establish Rogers-Ramanujan type identities involving triple sums. \subsection{Identities of index $(1,1,1)$} \begin{theorem}\label{thm-R-4} We have \begin{align}\label{eq-111} \sum_{i,j,k\geq0}\frac{(-1)^{j+k}\beta_{1}^{i+j}\beta_{3}^{i+k}q^{(i^{2}+(i+j+k)^{2}-2i-j-k)/2}}{(q;q)_{i}(q;q)_{j}(q;q)_{k}}=(\beta_{1},\beta_{3};q)_{\infty}. \end{align} \end{theorem} \begin{proof} Recall the identity \eqref{eq2.1}. 
By \eqref{Euler} and \eqref{Jacobi}, we see that its left side is \begin{align*} LHS&=\frac{1}{(q;q)_{\infty}}\oint \sum_{i,j,k\geq0}\sum_{l= -\infty}^{\infty}\frac{(-\beta_{1}\beta_{3}z)^{i}q^{(i^{2}-i)/2} (\beta_{1}z)^{j} (\beta_{3}z)^{k} (-1/z)^{l}q^{(l^{2}-l)/2}}{(q;q)_{i}(q;q)_{j}(q;q)_{k}} \frac{dz}{2\pi iz}\\ &=\sum_{i,j,k\geq0}\frac{(-1)^{j+k}\beta_{1}^{i+j}\beta_{3}^{i+k}q^{(i^{2}+(i+j+k)^{2}-2i-j-k)/2}}{(q;q)_{i}(q;q)_{j}(q;q)_{k}}. \end{align*} This proves the theorem. \end{proof} For example, if we set $\beta_{1}=-q^{1/4}$, $\beta_{3}=-q^{1/2}$ and replace $q$ by $q^4$, we obtain \begin{align} \sum_{i,j,k\geq0}\frac{q^{2i^{2}+2(i+j+k)^{2}-i-j}}{(q^4;q^4)_{i}(q^4;q^4)_{j}(q^4;q^4)_{k}}= \frac{(q^4;q^{8})_{\infty}}{(q;q^4)_{\infty}(q^{6};q^{8})_{\infty}}. \end{align} \begin{rem}\label{rem-111} The identity \eqref{eq-111} appeared in Lovejoy's work \cite{Lovejoy2017} and therein is viewed as a generalization of a partition theorem of Schur. See Section \ref{sec-concluding} for more discussion. \end{rem} \subsection{Identities of index $(1,1,2)$} \begin{theorem}\label{thm-R-3} We have \begin{align} \sum_{i,j,k\geq0}\frac{(-1)^{i+j}b^{-i+j}c^{i-j+k}q^{(i^{2}+(i-j+2k)^{2}-2i+3j-2k)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}}=\frac{(-q,bq^{2}/c;q)_{\infty}(bq,c/b;q^{2})_{\infty}} {(b^{2}q^{2}/c;q^{2})_{\infty}}. \end{align} \end{theorem} \begin{proof} Setting $a=0,t=-c/b$ and $d=-q/c$ in \eqref{Prop32-proof}, by \eqref{BD} we have \begin{align} & (q;q)_{\infty}\oint \frac{(cz,-bqz/c,-c/bz;q)_{\infty}}{(b^{2}z^{2};q^{2})_{\infty}(-q/cz;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ & = \frac{(bq^{2}/c^{2},-c/b,c;q)_{\infty}}{(-bq/c,bq/c;q)_{\infty}} {}_2\phi _1\left( \begin{gathered} b,bq/c\\ c \end{gathered} ;q,-c/b \right) \nonumber \\ &=\frac{(-q,bq^{2}/c^{2};q)_{\infty}(bq,c^{2}/b;q^{2})_{\infty}} {(b^{2}q^{2}/c^{2};q^{2})_{\infty}}. 
\end{align} By \eqref{Euler} and \eqref{Jacobi}, its left side is \begin{align*} LHS&=\oint \sum_{i,j,k\geq0}\sum_{l= -\infty}^{\infty}\frac{(-cz)^{i}q^{(i^{2}-i)/2} (-q/cz)^{j} (b^{2}z^{2})^{k} (c/bz)^{l}q^{(l^{2}-l)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}} \frac{dz}{2\pi iz} \\ &=\sum_{i,j,k\geq0}\frac{(-1)^{i+j}c^{2i-2j+2k}b^{-i+j}q^{(i^{2}+(i-j+2k)^{2}-2i+3j-2k)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}}. \end{align*} Replacing $c^2$ by $c$, we prove the theorem. \end{proof} Setting $(b,c)=(q^{1/2},q^2)$, $(-q^{1/2},q^2)$ and $(q^{1/2},q)$ and replacing $q$ by $q^2$, we obtain \begin{align} \sum_{i,j,k\geq 0}\frac{(-1)^{i+j}q^{i^{2}+(i-j+2k)^{2}+i+2k}}{(q^2;q^2)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&= \frac{(q;q^2)_{\infty}(q^{3};q^{4})_{\infty}^{2}}{(q^2;q^{4})_{\infty}^{2}}, \\ \sum_{i,j,k\geq 0}\frac{q^{i^{2}+(i-j+2k)^{2}+i+2k}}{(q^2;q^2)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&= \frac{(q^{6};q^{8})_{\infty}^{2}}{(q;q^2)_{\infty}(q^2;q^{4})_{\infty}(q^{3};q^{4})_{\infty}^{2}}, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{i+j}q^{i^{2}+(i-j+2k)^{2}-i+2j}}{(q^2;q^2)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&= \frac{(q,q^3;q^2)_{\infty}}{(q^2;q^2)_{\infty}}. \end{align} \begin{theorem}\label{thm-4112-1} We have \begin{align}\label{eq-4112-1} \sum_{i,j,k\geq0}\frac{(-1)^{i}c^{2i-j+2k}d^{j}q^{(i^{2}+(i-j+2k)^{2}-2i+j-2k)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}}=\frac{(-d q/c;q)_{\infty}(c^{2};q^{2})_{\infty}}{(d^{2};q^{2})_{\infty}}. \end{align} \end{theorem} \begin{proof} Setting $\beta=-\alpha$ and $a=q/c\alpha$ in \eqref{GR4112}, we obtain \begin{align*} (q;q)_{\infty}\oint \frac{(-cz/\alpha,-q\alpha/cz,c\alpha/z;q)_{\infty}}{(bz;q)_{\infty}(\alpha^{2}/z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} =\frac{(-b\alpha q/c;q)_{\infty}(c^{2};q^{2})_{\infty}}{(\alpha^{2}b^{2};q^{2})_{\infty}}. 
\end{align*} By \eqref{Euler} and \eqref{Jacobi} we see that its left side is \begin{align*} LHS&=\oint \sum_{i,j,k\geq 0}\sum_{l= -\infty}^{\infty}\frac{(-c\alpha/z)^{i}q^{(i^{2}-i)/2} (bz)^{j} (\alpha^{2}/z^{2})^{k} (cz/\alpha)^{l}q^{(l^{2}-l)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}} \frac{dz}{2\pi iz}\\ &= \sum_{i,j,k\geq0}\frac{(-1)^{i}c^{2i-j+2k}\alpha^{j}b^{j}q^{(i^{2}+(i-j+2k)^{2}-2i+j-2k)/2}}{(q;q)_{i}(q;q)_{j}(q^{2};q^{2})_{k}}. \end{align*} This proves the theorem after replacing $\alpha b$ by $d$. \end{proof} For example, if we replace $q$ by $q^4$ and set $(c,d)=(q^2,q)$ or $(q^2,q^3)$, we obtain \begin{align} \sum_{i,j,k\geq0}\frac{(-1)^{i}q^{2i^{2}+2(i-j+2k)^{2}+j}}{(q^{4};q^{4})_{i}(q^{4};q^{4})_{j}(q^{8};q^{8})_{k}}&= \frac{(q^{4},q^{6};q^{8})_{\infty}}{(q^{2},q^{3},q^{7};q^{8})_{\infty}}, \\ \sum_{i,j,k\geq0}\frac{(-1)^{i}q^{2i^{2}+2(i-j+2k)^{2}+3j}}{(q^{4};q^{4})_{i}(q^{4};q^{4})_{j}(q^{8};q^{8})_{k}}&= \frac{(q^{4},q^{10};q^{8})_{\infty}}{(q^{5},q^{6},q^{9};q^{8})_{\infty}}. \end{align} \subsection{Identities of index $(1,1,3)$} \begin{theorem}\label{thm-R-6} We have \begin{align}\label{eq-R-6} \sum_{i,j,k\geq0}\frac{(-1)^{k}u^{2i+j+3k}q^{(i^{2}+j^{2}+(i+j+3k)^{2}-2i-2j-3k)/2}}{(q;q)_{i}(q;q)_{j}(q^{3};q^{3})_{k}}=\frac{(u^{3};q^{3})_{\infty}}{(u;q)_{\infty}}. \end{align} \end{theorem} \begin{proof} Setting $\beta_{1}=\zeta_3 u,\beta_{3}=\zeta_3^{2}u$ in \eqref{eq2.1}, we obtain \begin{align*} (q;q)_{\infty}\oint \frac{(u^{2}z,uz,qz,1/z;q)_{\infty}}{(u^{3}z^{^{3}};q^{3})_{\infty}}\frac{dz}{2\pi iz} = \frac{(u^{3};q^{3})_{\infty}}{(u;q)_{\infty}}. 
\end{align*} By \eqref{Euler} and \eqref{Jacobi}, we see that its left side is \begin{align*} LHS&=\oint \sum_{i,j,k\geq0}\sum_{l= -\infty}^{\infty}\frac{(-u^{2}z)^{i}q^{(i^{2}-i)/2} (-uz)^{j}q^{(j^{2}-j)/2}(u^{3}z^{3})^{k} (-1/z)^{l}q^{(l^{2}-l)/2} }{(q;q)_{i}(q;q)_{j}(q^{3};q^{3})_{k}} \frac{dz}{2\pi iz}\\ &=\sum_{i,j,k\geq0}\frac{(-1)^{k}u^{2i+j+3k}q^{(i^{2}+j^{2}+(i+j+3k)^{2}-2i-2j-3k)/2}}{(q;q)_{i}(q;q)_{j}(q^{3};q^{3})_{k}}. \end{align*} This proves \eqref{eq-R-6}. \end{proof} Setting $u=q$, $q^{1/3}$, $q^{2/3}$ or $q^{1/2}$ in \eqref{eq-R-6} and replacing $q$ by $q^2$ or $q^3$ when necessary, we obtain \begin{align} \sum_{i,j,k\geq 0}\frac{(-1)^{k}q^{(i^{2}+j^{2}+(i+j+3k)^{2}+2i+3k)/2}}{(q;q)_{i}(q;q)_{j}(q^{3};q^{3})_{k}}&=\frac{1}{(q,q^{2};q^{3})_{\infty}}, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{k}q^{3(i^{2}+j^{2}+(i+j+3k)^{2})/2-(2i+4j+3k)/2}}{(q^3;q^3)_{i}(q^3;q^3)_{j}(q^{9};q^{9})_{k}}&=\frac{(q^3;q^{9})_{\infty}}{(q;q^3)_{\infty}}, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{k}q^{3(i^{2}+j^{2}+(i+j+3k)^{2})/2+(2i-2j+3k)/2}}{(q^3;q^3)_{i}(q^3;q^3)_{j}(q^{9};q^{9})_{k}}&= \frac{(q^{6};q^{9})_{\infty}}{(q^{2};q^3)_{\infty}}, \\ \sum_{i,j,k\geq0}\frac{(-1)^{k}q^{i^{2}+j^{2}+(i+j+3k)^{2}-j}}{(q^2;q^2)_{i}(q^2;q^2)_{j}(q^{6};q^{6})_{k}}&= \frac{1}{(q,q^5;q^{6})_{\infty}}. \end{align} \subsection{Identities of index $(1,2,2)$} \begin{theorem}\label{thm-122} We have \begin{align} \sum_{i,j,k\geq0}\frac{(-1)^{j}q^{i+j^{2}+2j+(i+j-k)^{2}}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{2};q^{2})_{k}} &=\frac{(q^{2};q^{2})_{\infty}(q^4;q^4)_\infty^2} {(q;q)_{\infty}^{2}}, \\ \sum_{i,j,k\geq0}\frac{(-1)^{j}q^{j^{2}+j+k}(q^{(i+j-k)^{2}}+q^{(i+j-k+1)^{2}})}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{2};q^{2})_{k}} &=\frac{(q^{2};q^{2})_{\infty}^7} {(q;q)_{\infty}^{4} (q^4;q^4)_\infty^2}. \end{align} \end{theorem} \begin{proof} Let $b=-q/a^{1/2}$ in \eqref{Eq14}. 
We obtain \begin{align} \oint \frac{(-a^{1/2}z,a^{1/2}qz,-q/a^{1/2}z;q)_{\infty}} {(az,a^{1/2}z,1/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(-a^{1/2},aq,-q,-q;q)_{\infty}} {(a^{1/2},a,-a^{1/2}q,a^{1/2}q;q)_{\infty}}. \end{align} When $a=q$, we have \begin{align*} (q;q)_{\infty} \oint \frac{(-q^{1/2}z,q^{3/2}z,-q^{1/2}/z;q)_{\infty}} {(qz,q^{1/2}z,1/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(-q^{1/2},q^{2},-q,-q;q)_{\infty}} {(q^{1/2},-q^{3/2},q^{3/2};q)_{\infty}}. \end{align*} Replacing $q$ by $q^2$, simplifying the denominator of the integrand using \begin{align}\label{eq-simplify} (q^2z,qz;q^2)_\infty=(qz;q)_\infty\end{align} and applying \eqref{Euler} and \eqref{Jacobi}, we obtain the first identity. Let $b=-q^{1/2}/a^{1/2}$ in \eqref{Eq14}. We obtain \begin{align} &\oint \frac{(-a^{1/2}z,a^{1/2}qz,-a^{1/2}q^{1/2}z,-q^{1/2}/a^{1/2}z;q)_{\infty}} {(az,-a^{1/2}qz,a^{1/2}z,1/z;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &=\frac{(-a^{1/2},aq,-q^{1/2},-q^{1/2};q)_{\infty}} {(a^{1/2},a,-a^{1/2}q,a^{1/2}q;q)_{\infty}}. \end{align} When $a=q$, we have \begin{align*} (q;q)_{\infty} \oint (1+q^{1/2}z)\frac{(q^{3/2}z,-qz,-1/z;q)_{\infty}} {(q^{1/2}z,qz,1/z;q)_{\infty}}\frac{dz}{2\pi iz} =\frac{(q^{2};q)_{\infty}(-q^{1/2};q)_{\infty}^{3}} {(q^{1/2};q)_{\infty}(q^{3};q^{2})_{\infty}}. \end{align*} Replacing $q$ by $q^{2}$, simplifying the denominator of the integrand using \eqref{eq-simplify} and applying \eqref{Euler} and \eqref{Jacobi}, we obtain the second identity. \end{proof} \subsection{Identities of index $(1,2,3)$} \begin{theorem}\label{thm-4112-3} We have \begin{equation}\label{eq-4112-3} \sum_{i,j,k\geq0}\frac{(-1)^{i+j}u^{i+3k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}=\frac{(u^{2};q)_{\infty}(q,-u^{2};q^{2})_{\infty}}{(-u^{6};q^{6})_{\infty}}. 
\end{equation} \end{theorem} \begin{proof} Setting $c=q^{1/2}$, replacing $\alpha$ by $\zeta_2\alpha$, setting $\beta=-\zeta_2 \alpha$, $a=d\zeta_3,b=d\zeta_3^{2}$ in \eqref{GR4112}, and then multiplying both sides by $(q^{2};q^{2})_{\infty}$, we see that the left side of \eqref{GR4112} becomes \begin{align} LHS&=(q^{2};q^{2})_{\infty}\oint \frac{(-qz^{2}/\alpha^{2},-q\alpha^{2}/z^{2};q^{2})_{\infty}(dz;q)_{\infty}} {(d^{3}z^{3};q^{3})_{\infty}(-\alpha^{2}/z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &=\oint \sum_{i,j,k\geq0}\sum_{l= -\infty}^{\infty}\frac{(-dz)^{i}q^{(i^{2}-i)/2} (-\alpha^{2}/z^{2})^{j} (d^{3}z^{3})^{k} (q\alpha^{2}/z^{2})^{l}q^{l^{2}-l}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} \frac{dz}{2\pi iz} \nonumber \\ &= \sum_{i,j,k\geq0}\frac{(-1)^{i+j}\alpha^{i+3k}d^{i+3k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}-S, \qquad \label{eq-S} \end{align} where \begin{align*} S=& \oint \sum_{i,j,k\geq 0}\sum_{m= -\infty}^{\infty}\frac{(-dz)^{i}q^{(i^{2}-i)/2} (-\alpha^{2}/z^{2})^{j} (d^{3}z^{3})^{k} }{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} \\ & \times (q\alpha^{2}/z^{2})^{(2m+1)/2}q^{(2m+1)^{2}/4-(2m+1)/2} \frac{dz}{2\pi iz} \end{align*} corresponds to the case when $l=(i-2j+3k)/2$ is not an integer, i.e., $l=(2m+1)/2$ with $m\in \mathbb{Z}$. Now we convert the integrand in the expression of $S$ back to infinite products. We have \begin{align*} S&=\alpha q^{1/4}(q^2;q^2)_\infty \oint \frac{(-q^2\alpha^2 /z^{2},-z^2/\alpha^{2};q^{2})_{\infty}(dz;q)_{\infty}} {(d^{3}z^{3};q^{3})_{\infty}(-\alpha^{2}/z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} \\ &=\alpha q^{1/4}(q^2;q^2)_\infty \oint \frac{(-z^2/\alpha^{2};q^{2})_{\infty}(dz;q)_{\infty}} {(d^{3}z^{3};q^{3})_{\infty}(1+\alpha^2/z^2)}z^{-1}\frac{dz}{2\pi iz} \\ &=\alpha^{-1} q^{1/4}(q^2;q^2)_\infty \oint \frac{(-z^2q^2/\alpha^2;q^2)_\infty (dz;q)_\infty}{(d^3z^3;q^3)_\infty} z \frac{dz}{2\pi iz} \\ &=0.
\end{align*} Here the last equality follows since $$[z^0] \frac{(-z^2q^2/\alpha^2;q^2)_\infty (dz;q)_\infty}{(d^3z^3;q^3)_\infty} z=0.$$ Note that the right side of \eqref{GR4112} (after multiplication by $(q^{2};q^{2})_{\infty}$) is \begin{align}\label{eq-S-2} RHS=\frac{(d^{2}\alpha^{2};q)_{\infty}(q,-d^{2}\alpha^{2};q^{2})_{\infty}}{(-d^{6}\alpha^{6};q^{6})_{\infty}}. \end{align} Combining \eqref{eq-S} and \eqref{eq-S-2}, replacing $d\alpha$ by $u$, we obtain the desired identity. \end{proof} If we set $u$ as $q^{1/2}$ or $q$ in Theorem \ref{thm-4112-3}, we obtain \begin{align} \sum_{i,j,k\geq0}\frac{(-1)^{i+j}q^{(i^{2}+3k)/2+(i-2j+3k)^{2}/4}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}&=(q;q)_{\infty}(q^{3};q^{6})_{\infty}(q^{2},q^{10};q^{12})_{\infty}, \\ \sum_{i,j,k\geq0}\frac{(-1)^{i+j}q^{(i^{2}+i+6k)/2+(i-2j+3k)^{2}/4}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}&= \frac{(q^{2};q)_{\infty}(q;q^{2})_{\infty}}{(q^{2},q^{10};q^{12})_{\infty}}. \end{align} \begin{theorem}\label{thm-123} We have \begin{align} \sum_{i,j,k\geq 0}\frac{(-1)^{(i-2j+3k)/2}u^{i+k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}} {(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} =\frac{(q;q^{2})_{\infty}(-u^{2};q^{3})_{\infty}} {(u^{2};q^{6})_{\infty}}. 
\end{align} \end{theorem} \begin{proof} Setting $b=\zeta_3 a,c=\zeta_3^{2}a,\alpha=-\beta$, $\gamma=q^{1/2}\alpha$ and $\delta=-a^3\alpha^2$ in \eqref{GR4113}, after multiplying both sides by $(q^2;q^2)_\infty$, we see that its left side is \begin{align} LHS=&(q^{2};q^{2})_{\infty}\oint \frac{(-a^{3}\alpha^{2}z;q)_{\infty} (qz^{2}/\alpha^{2},q\alpha^{2}/z^{2};q^{2})_{\infty}} {(a^{3}z^{3};q^{3})_{\infty}(\alpha^{2}/z^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &=\oint \sum_{i,j,k\geq 0}\frac{(a^{3}\alpha^{2}z)^{i}q^{(i^2-i)/2}(\alpha^{2}/z^{2})^{j}(a^{3}z^{3})^{k}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} \sum_{l= -\infty}^{\infty}(-q\alpha^{2}/z^{2})^{l}q^{l^{2}-l}\frac{dz}{2\pi iz} \nonumber \\ &=\sum_{i,j,k\geq 0}\frac{(-1)^{(i-2j+3k)/2}a^{3i+3k}\alpha^{3i+3k}q^{(i^{2}-i)/2+(i-2j+3k)^{2}/4}} {(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}}-S, \label{add-S} \end{align} where \begin{align*} S&=\oint \sum_{i,j,k\geq 0}\frac{(a^{3}\alpha^{2}z)^{i}q^{(i^2-i)/2}(\alpha^{2}/z^{2})^{j}(a^{3}z^{3})^{k}}{(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} \nonumber \\ &\quad \times \sum_{m= -\infty}^{\infty}(-q\alpha^{2}/z^{2})^{(2m+1)/2}q^{(2m+1)^{2}/4-(2m+1)/2}\frac{dz}{2\pi iz} \\ &=\zeta_2\alpha q^{1/4}(q^2;q^2)_\infty \oint\frac{(-a^{3}\alpha^{2}z;q)_{\infty} (z^{2}/\alpha^{2},q^{2}\alpha^{2}/z^{2};q^{2})_{\infty}} {(a^{3}z^{3};q^{3})_{\infty}(\alpha^{2}/z^{2};q^{2})_{\infty}} z^{-1}\frac{dz}{2\pi iz} \\ &=-\zeta_2\alpha^{-1} q^{1/4}(q^2;q^2)_\infty \oint \frac{(-a^{3}\alpha^{2}z;q)_{\infty} (q^{2}z^{2}/\alpha^{2};q^{2})_{\infty}} {(a^{3}z^{3};q^{3})_{\infty}}z\frac{dz}{2\pi iz} \\ &=0. \end{align*} The right side of \eqref{GR4113} (after multiplication by $(q^2;q^2)_\infty$) is \begin{align} RHS=\frac{(q;q^{2})_{\infty}(-a^{6}\alpha^{6};q^{3})_{\infty}} {(a^{6}\alpha^{6};q^{6})_{\infty}}. \label{add-S-2} \end{align} Combining \eqref{add-S} and \eqref{add-S-2} and replacing $a^3\alpha^3$ by $u$, we obtain the desired identity.
\end{proof} Setting $u$ as $-\zeta_2 q^{3/2}$ and $q^{3/2}$ in Theorem \ref{thm-123}, we obtain \begin{align} \sum_{i,j,k\geq0}\frac{(-1)^{i+j}q^{(i^{2}+2i+3k)/2+(i-2j+3k)^{2}/4}} {(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} &=(q;q^{2})_{\infty}(q^{3};q^{6})_{\infty}^{2}(q^{12};q^{12})_{\infty}, \\ \sum_{i,j,k\geq0}\frac{(-1)^{(i-2j+3k)/2}q^{(i^{2}+2i+3k)/2+(i-2j+3k)^{2}/4}} {(q;q)_{i}(q^{2};q^{2})_{j}(q^{3};q^{3})_{k}} &=\frac{(q,q^{5};q^{6})_{\infty}}{(q^{3};q^{6})_{\infty}}. \end{align} \subsection{Identities of index $(1,2,4)$} \begin{theorem} We have \begin{align} \sum_{i,j,k\geq 0} \frac{(-1)^{k}q^{(i+j+2k)(i+j+2k-1)+j+2k^2}u^{i+2j+4k}}{(q;q)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&=(-u;q)_\infty, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{j}q^{(i+j+2k)(i+j+2k-1)+2i+3j+2k^2+6k}}{(q;q)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&=\frac{(q^4,q^{12},q^{16};q^{16})_\infty}{(q^2;q^2)_\infty}, \\ \sum_{i,j,k\geq 0}\frac{(-1)^{j}q^{(i+j+2k)^2+2k^2}}{(q;q)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}&=\frac{(q^8;q^8)_\infty^2}{(q^2;q^2)_\infty (q^{16};q^{16})_\infty}. \end{align} \end{theorem} \begin{proof} Let \begin{align} &H(u,v,w)=H(u,v,w;q) \nonumber \\ &:=\sum_{i,j,k\geq 0}\frac{u^{i}v^{j}(-w)^{k}q^{(i+j+2k)(i+j+2k-1)+2k(k-1)}}{(q;q)_{i}(q^2;q^2)_{j}(q^4;q^4)_{k}}. \end{align} We have by \eqref{int-constant} that \begin{align*} H(u,v,w)=\oint\sum_{i=0}^\infty \frac{(uz)^{i}}{(q;q)_{i}}\sum_{j=0}^\infty \frac{(vz)^{j}}{(q^2;q^2)_{j}}\sum_{k=0}^\infty \frac{(-wz^2)^{k}q^{2k(k-1)}}{(q^4;q^4)_{k}}\sum_{l=-\infty}^\infty (1/z)^{l}q^{l(l-1)} \frac{\diff z}{2\pi iz}. 
\end{align*} Hence by \eqref{Euler} and \eqref{Jacobi}, \begin{align*} H(u,v,w)&=(q^2;q^2)_\infty \oint \frac{(wz^2;q^4)_\infty (-1/z,-q^2z;q^2)_\infty}{(uz;q)_\infty (vz;q^2)_\infty} \frac{\diff z}{2\pi iz} \\ &=(q^2;q^2)_\infty \oint \frac{(w^{1/2}z,-w^{1/2}z,-1/z,-q^2z;q^2)_\infty}{(uz,uzq,vz;q^2)_\infty} \frac{\diff z}{2\pi iz} \\ &=(q^2;q^2)_\infty \oint \frac{(w^{1/2}z,-w^{1/2}z,1/z,q^2z;q^2)_\infty}{(-uz,-uzq,-vz;q^2)_\infty} \frac{\diff z}{2\pi iz}. \end{align*} Here for the last line we have replaced $z$ by $-z$. When $w=u^2vq$, we can apply \eqref{R32} with $(\alpha_1,\alpha_2,\beta_1,\beta_2,\beta_3)=(w^{1/2},-w^{1/2},-v,-u,-uq)$ to deduce that \begin{align}\label{124F-exp} H(u,v,w;q)=(-v,-w^{1/2}/v;q^2)_\infty \cdot {}_2\phi_1 \bigg(\genfrac{}{}{0pt}{}{w^{1/2}/u,w^{1/2}/uq}{-v};q^2,-w^{1/2}/v \bigg). \end{align} We now specialize the choices of $(u,v,w)$ so that the ${}_2\phi_1$ series becomes a nice infinite product. By \eqref{124F-exp} and the $q$-Gauss summation \eqref{q-Gauss}, we have \begin{align*} H(u,u^2q,u^4q^2)&=(-u^2q,-1;q^2)_\infty \cdot {}_2\phi_1 \bigg(\genfrac{}{}{0pt}{}{uq,u}{-u^2q};q^2,-1 \bigg) \\ &=(-u^2q,-1;q^2)_\infty \cdot \frac{(-u,-uq;q^2)_\infty}{(-u^2q,-1;q^2)_\infty} \\ &=(-u;q)_\infty. \end{align*} By \eqref{124F-exp} and the Bailey-Daum summation \eqref{BD}, we get \begin{align*} H(q^2,-q^3,-q^8)&=(q^3,\zeta_2 q;q^2)_\infty \cdot {}_2\phi_1 \bigg(\genfrac{}{}{0pt}{}{\zeta_2 q^2,\zeta_2q}{q^3};q^2,\zeta_2 q \bigg) \\ &=(q^3,\zeta_2 q;q^2)_\infty \cdot \frac{(-q^2;q^2)_\infty (\zeta_2q^4,-\zeta_2q^4;q^4)_\infty}{(q^3,\zeta_2q;q^2)_\infty} \\ &=\frac{(q^4,q^{12},q^{16};q^{16})_\infty}{(q^2;q^2)_\infty}.
\end{align*} Similarly, by \eqref{124F-exp} and \eqref{BD} we get \begin{align*} H(q,-q,-q^4)&=(q,\zeta_2q;q^2)_\infty \cdot {}_2\phi_1\bigg(\genfrac{}{}{0pt}{}{\zeta_2,\zeta_2q}{q};q^2,\zeta_2q \bigg) \\ &=(q,\zeta_2q;q^2)_\infty \cdot \frac{(-q^2;q^2)_\infty (\zeta_2q^2,-\zeta_2q^2;q^4)_\infty}{(q,\zeta_2q;q^2)_\infty} \\ &=\frac{(q^8;q^8)_\infty^2}{(q^2;q^2)_\infty(q^{16};q^{16})_\infty}. \qedhere \end{align*} \end{proof} \section{Concluding Remarks}\label{sec-concluding} We give several remarks before closing this paper. First, though here we only discuss identities involving double sums and triple sums, the integral method can also be applied to deduce identities with more summation indexes. For example, we can use the integral method to give a new proof to the identity \eqref{DL1112}, which is of index $(1,1,1,2)$. \begin{proof}[Proof of \eqref{DL1112}] Using (\ref{Euler}), (\ref{Jacobi}) and (\ref{int-constant}), we have \begin{align} \label{Lovejoy1} & \sum_{i,j,k,l\geq 0} \frac{a^{i+l}b^{j+l}q^{\binom{i+j+k+2l+1}{2}+\binom{i+1}{2}+\binom{j+1}{2}+l}}{(q;q)_i(q;q)_j(q;q)_k(q^2;q^2)_l} \nonumber\\ &=(q;q)_\infty \oint \frac{(-aqz,-bqz,-z,-q/z;q)_{\infty}} {(z;q)_{\infty}(abqz^{2};q^{2})_{\infty}}\frac{dz}{2\pi iz} \nonumber\\ &=(q;q)_\infty \oint \frac{(-aqz,-bqz,-z,-q/z;q)_{\infty}} {(z,(abq)^{1/2}z,-(abq)^{1/2}z;q)_{\infty}}\frac{dz}{2\pi iz}. \end{align} Using (\ref{q-binomial}), we have \begin{align} \label{bi1} \frac{(-bdq^{n+1};q)_\infty }{(dq^{n};q)_\infty } =\sum_{m\geq 0} \frac{(-bq;q)_m}{(q;q)_m}(dq^{n})^{m}. \end{align} Using Heine's transformations of ${}_2\phi_1$ series \cite[(\uppercase\expandafter{\romannumeral3}. 
3)]{GR-book} \begin{align} {}_2\phi_1\bigg(\genfrac{}{}{0pt}{}{a,b}{c};q,z \bigg)=\frac{(abz/c;q)_\infty}{(z;q)_\infty}{}_2\phi_1\bigg(\genfrac{}{}{0pt}{}{c/a,c/b}{c};q,abz/c \bigg), \end{align} we deduce that \begin{align} \label{he1} &\sum_{n\geq 0} \frac{((abq)^{1/2}d,-(abq)^{1/2}d;q)_n}{(q,-adq;q)_n}(-q^{m+1}/d)^{n} \nonumber \\ &=\frac{(-bq^{m+1};q)_\infty }{(-q^{m+1}/d;q)_\infty }\sum_{n\geq 0} \frac{(-(aq/b)^{1/2},(aq/b)^{1/2};q)_n}{(q,-adq;q)_n}(-bq^{m+1})^{n}. \end{align} By (\ref{GR41010}), we have \begin{align} \label{Lovejoy2} &(q;q)_\infty\oint \frac{(-aqz,-bqz,-z,-q/z;q)_{\infty}} {(z,(abq)^{1/2}z,-(abq)^{1/2}z,d/z;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &=\frac{(-adq,-bdq,-d,-q/d;q)_{\infty}} {(d,(abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{(d,(abq)^{1/2}d,-(abq)^{1/2}d;q)_n}{(q,-adq,-bdq;q)_n}(-q/d)^{n} \nonumber \\ &=\frac{(-adq,-d,-q/d;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{((abq)^{1/2}d,-(abq)^{1/2}d;q)_n}{(q,-adq;q)_n}(-q/d)^{n} \frac{(-bdq^{n+1};q)_{\infty}} {(dq^{n};q)_{\infty}} \nonumber \\ &=\frac{(-adq,-d,-q/d;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{m\geq 0} \frac{(-bq;q)_m}{(q;q)_m}d^{m} \nonumber \\ &\quad \quad \quad \quad \quad \times \sum_{n\geq 0} \frac{((abq)^{1/2}d,-(abq)^{1/2}d;q)_n}{(q,-adq;q)_n}(-q^{m+1}/d)^{n} \quad \text{(by \ref{bi1})} \nonumber \\ &= \frac{(-adq,-d,-q/d;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{(-(aq/b)^{1/2},(aq/b)^{1/2};q)_n}{(q,-adq;q)_n}(-bq)^{n} \nonumber \\ &\quad \quad \quad \times \sum_{m\geq 0} \frac{(-bq;q)_m}{(q;q)_m}(dq^{n})^{m} \frac{(-bq^{m+1};q)_\infty }{(-q^{m+1}/d;q)_\infty } \quad \text{(by \ref{he1})} \nonumber \\ &=\frac{(-adq,-d,-bq;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{(-(aq/b)^{1/2},(aq/b)^{1/2};q)_n}{(q,-adq;q)_n}(-bq)^{n} \nonumber \\ &\qquad \qquad \qquad \times \sum_{m\geq 0} \frac{(-q/d;q)_m}{(q;q)_m}(dq^{n})^{m} \nonumber \\ 
&=\frac{(-adq,-d,-bq;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{(aq/b;q^{2})_n}{(q,-adq;q)_n}(-bq)^{n} \frac{(-q^{n+1};q)_\infty }{(dq^{n};q)_\infty } \nonumber \\ &=\frac{(-adq,-d,-bq,-q;q)_{\infty}} {((abq)^{1/2}d,-(abq)^{1/2}d;q)_{\infty}} \sum_{n\geq 0} \frac{(aq/b;q^{2})_n}{(q^{2};q^{2})_n(-adq;q)_n}(-bq)^{n} \frac{1}{(dq^{n};q)_\infty } . \end{align} Letting $d\rightarrow 0$ on both sides of (\ref{Lovejoy2}), we have \begin{align}\label{Lovejoy3} &(q;q)_\infty \oint \frac{(-aqz,-bqz,-z,-q/z;q)_{\infty}} {(z,(abq)^{1/2}z,-(abq)^{1/2}z;q)_{\infty}}\frac{dz}{2\pi iz} \nonumber \\ &=(-bq,-q;q)_{\infty}\sum_{n\geq 0} \frac{(aq/b;q^{2})_n}{(q^{2};q^{2})_n}(-bq)^{n}\nonumber \\ &= \frac{(-bq,-q;q)_{\infty}(-aq^{2};q^{2})_{\infty}}{(-bq;q^{2})_{\infty}} \nonumber \\ &=(-q;q)_{\infty}(-aq^{2},-bq^{2};q^{2})_{\infty}. \end{align} Combining (\ref{Lovejoy1}) and (\ref{Lovejoy3}), we obtain (\ref{DL1112}). \end{proof} Second, there might exist partition or Lie theoretic interpretations for the identities we proved. This deserves further investigation. As promised in Remark \ref{rem-111}, we give below a brief discussion on partition interpretations of the identity \eqref{eq-111}. Here we follow closely the lines in \cite{Lovejoy2017}. Recall that a partition $\pi$ of $n$ is a nonincreasing sequence $\pi=(\lambda_1,\lambda_2,\dots,\lambda_s)$ of positive integers which sum up to $n$, i.e., $$n=\lambda_1+\lambda_2+\cdots+\lambda_s, \quad \lambda_1\geq \lambda_2\geq \cdots \geq \lambda_s\geq 1.$$ Let $T(u,v,n)$ be the number of bipartitions $(\pi_1,\pi_2)$ of $n$ such that the partition $\pi_1$ (resp.\ $\pi_2$) consists of $u$ (resp.\ $v$) distinct parts, respectively. Then clearly, we have \begin{align}\label{T-gen} \sum_{n=0}^\infty T(u,v,n)a^ub^vq^n=(-aq,-bq;q)_\infty. 
\end{align} To generalize and refine Schur's partition theorem, Alladi and Gordon \cite{Alladi-Gordon-1993,Alladi-Gordon-1995} introduced a new kind of three-colored partitions. As in \cite{Lovejoy2017}, we color the positive integers by three colors $a,b$ and $ab$ with the order that $$ab<a<b.$$ Now the integers are ordered as $$1_{ab}<1_a<1_b<2_{ab}<2_a<2_b<\cdots.$$ Let $S(u,v,n)$ be the number of three-colored partitions of $n$ with no parts $1_{ab}$, $u$ parts colored $a$ or $ab$, $v$ parts colored $b$ or $ab$, and satisfying the difference conditions described in the matrix $$A=\bordermatrix{% & a & b & ab \cr a & 1 & 2 &1 \cr b & 1 & 1 &1 \cr ab & 2 & 2 & 2 }.$$ Here the entry $(x,y)$ gives the minimal difference between the parts $\lambda_i$ of color $x$ and $\lambda_{i+1}$ of color $y$. Alladi and Gordon proved that \begin{align}\label{Alladi-Gordon-eq} \sum_{u,v,n\geq 0} S(u,v,n)a^ub^vq^n=(-aq,-bq;q)_\infty. \end{align} Through combinatorial arguments, Lovejoy \cite[Eq.\ (2.3)]{Lovejoy2017} proved that \begin{align}\label{eq-Lovejoy-gen} \sum_{u,v,n\geq 0} S(u,v,n)a^ub^vq^n=\sum_{i,j,k\geq 0}\frac{a^iq^i}{(q;q)_i} \frac{b^jq^j}{(q;q)_j}\frac{(ab)^kq^kq^{\binom{k+1}{2}}}{(q;q)_k}q^{\binom{i+j+k}{2}}. \end{align} After simplifying the sums by the $q$-Chu-Vandermonde summation, Euler's identities \eqref{Euler} and the $q$-binomial identity \eqref{q-binomial}, Lovejoy obtained \eqref{Alladi-Gordon-eq}. Clearly, combining \eqref{Alladi-Gordon-eq} and \eqref{eq-Lovejoy-gen}, we get \eqref{eq-111}. Together with \eqref{T-gen}, we see that \eqref{eq-111} is equivalent to the partition identity $$S(u,v,n)=T(u,v,n).$$ Once we convert our identities to partition identities like the above one, it will be quite interesting to find bijective proofs for them. Finally, we want to emphasize the advantage of the integral method. It allows us to prove the identities in this paper in a uniform manner. Of course, it is possible to give different proofs to our theorems. 
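In fact, identities like \eqref{eq-111} are easy to spot-check numerically by truncating the sums at small $|q|$. The following Python snippet is an illustrative check only (not part of the paper); the sample point $q=0.2$, $a=0.7$, $b=0.5$ and the truncation depth are arbitrary choices:

```python
from math import comb

def qpoch(x, q, n):
    """Finite q-Pochhammer symbol (x; q)_n."""
    p = 1.0
    for m in range(n):
        p *= 1.0 - x * q**m
    return p

q, a, b = 0.2, 0.7, 0.5   # sample point with |q| < 1
N = 40                    # truncation depth; terms decay like q^C(i+j+k, 2)

# Triple sum on the right-hand side of \eqref{eq-Lovejoy-gen}
lhs = sum(
    a**i * q**i / qpoch(q, q, i)
    * b**j * q**j / qpoch(q, q, j)
    * (a * b)**k * q**k * q**comb(k + 1, 2) / qpoch(q, q, k)
    * q**comb(i + j + k, 2)
    for i in range(N) for j in range(N) for k in range(N)
)

# Infinite product (-aq, -bq; q)_infinity, truncated far beyond double precision
rhs = qpoch(-a * q, q, 200) * qpoch(-b * q, q, 200)

print(abs(lhs - rhs))  # agreement to near machine precision
```

The truncation error is of order $q^{\binom{N}{2}}$ and is completely negligible at this sample point.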
As discussed in Remarks \ref{rem-sec3}--\ref{rem-111}, one may prove some of the theorems using approaches such as summing over one of the indices first. Compared with the other methods, the integral method has the advantage that it tells us how the identities are constructed, and the calculations involved are streamlined. \subsection*{Acknowledgements} We thank Jeremy Lovejoy for some valuable comments, especially for bringing the works \cite{Dousse-Lovejoy,Lovejoy2006,Lovejoy2017} to our attention. We are also grateful to Chuanan Wei for helpful comments on the presentation of Corollaries \ref{cor-Jacobi-add-1} and \ref{cor-J-4}. This work was supported by the National Natural Science Foundation of China (12171375).
# Babbalu Inc. just paid a dividend of $1 per share. Babbalu expects to increase its annual...

## Question:

Babbalu Inc. just paid a dividend of $1 per share. Babbalu expects to increase its annual dividend by 20% per year for the next two years and by 15% per year for the following two years. Starting in year 5, Babbalu is expected to pay a constant annual dividend of $3 a share. What is the current value of this stock if the required rate of return is 12%?

a. $17.71

b. $18.97

c. $20.50

d. $21.08

## Dividend Growth Model:

According to the dividend growth model, the price of a stock depends on the expected dividend per share, the expected growth rate of the dividend, and the required rate of return. The higher the growth rate or the lower the required return, the higher the stock price.

We can use the dividend discount model to compute the price. According to this model, the price of a stock is the discounted present value of future dividends, i.e.,

• {eq}\dfrac{1*(1+20\%)}{(1 + 12\%)} + \dfrac{1*(1 + 20\%)^2}{(1 + 12\%)^2} + \dfrac{1*(1 + 20\%)^2*(1 + 15\%)}{(1 + 12\%)^3} + \dfrac{1*(1 + 20\%)^2*(1 + 15\%)^2}{(1 + 12\%)^4} + \dfrac{3}{12\%(1 + 12\%)^4}\\ = 4.61 + 15.89\\ = 20.50 {/eq}
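The arithmetic above can be reproduced in a few lines; this is an illustrative script (variable names are my own, not from the original answer):

```python
r = 0.12   # required rate of return
d0 = 1.00  # dividend just paid

# Dividends for years 1-4: 20% growth for two years, then 15% for two years
divs = [d0 * 1.20,
        d0 * 1.20**2,
        d0 * 1.20**2 * 1.15,
        d0 * 1.20**2 * 1.15**2]
pv_growth = sum(d / (1 + r)**t for t, d in enumerate(divs, start=1))

# Constant $3 dividend from year 5 on: a perpetuity valued at year 4,
# then discounted back to today
pv_terminal = (3.00 / r) / (1 + r)**4

price = pv_growth + pv_terminal
print(round(price, 2))  # 20.5 -> answer (c) $20.50
```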
He served as European Commissioner for Enlargement and European Neighbourhood Policy in the Barroso II Commission from 9 February 2010 to 31 October 2014.

Education

Füle studied at Charles University in Prague and at the Moscow State Institute of International Relations.

Diplomatic and political activity

Füle was a member of the Communist Party of Czechoslovakia from 1982 to 1989. Since 1990 he has been affiliated with the Czech Social Democratic Party, which is a member of the Party of European Socialists.

Diplomatic career

Füle held various diplomatic posts on behalf of the Ministry of Foreign Affairs of the Czech Republic. He was part of the Czech Republic's permanent mission to the United Nations from 1990 to 1995. In 1998 he was appointed Czech ambassador to Lithuania and also served as the NATO contact point for that country. In 2000 Füle was appointed Deputy Minister of Defence, but he then returned to the diplomatic career as his country's ambassador to the United Kingdom and subsequently as Permanent Representative of the Czech Republic to NATO.

Political career

After a spell as Deputy Minister of Defence, Füle was Minister for European Affairs in the government led by Jan Fischer between 2009 and 2010.

European Commissioner

Füle was put forward as the Czech member of the European Commission after the Czech political parties failed to agree on a shared candidate and after some other possible candidates declared themselves unavailable. Following the outbreak of the uprisings in the countries of North Africa in the first months of 2011, Füle managed the process of in-depth revision of the European Neighbourhood Policy towards the Mediterranean countries. He concluded his mandate on 31 October 2014. Johannes Hahn succeeded him in the post.

Personal life

Füle is married and has three children.

Honours

Notes

Other projects

External links

Fule
Barroso Commission I
Barroso Commission II
/* This Source Code Form is subject to the terms of the Mozilla Public
 * License, v. 2.0. If a copy of the MPL was not distributed with this
 * file, You can obtain one at http://mozilla.org/MPL/2.0/. */
/*
 * $Id: camellia.h,v 1.2 2012/04/25 14:49:43 gerv%gerv.net Exp $
 */

#ifndef _CAMELLIA_H_
#define _CAMELLIA_H_ 1

#define CAMELLIA_BLOCK_SIZE 16          /* bytes */
#define CAMELLIA_MIN_KEYSIZE 16         /* bytes */
#define CAMELLIA_MAX_KEYSIZE 32         /* bytes */
#define CAMELLIA_MAX_EXPANDEDKEY (34*2) /* 32bit unit */

typedef PRUint32 KEY_TABLE_TYPE[CAMELLIA_MAX_EXPANDEDKEY];

typedef SECStatus CamelliaFunc(CamelliaContext *cx, unsigned char *output,
                               unsigned int *outputLen,
                               unsigned int maxOutputLen,
                               const unsigned char *input,
                               unsigned int inputLen);

typedef SECStatus CamelliaBlockFunc(const PRUint32 *subkey,
                                    unsigned char *output,
                                    const unsigned char *input);

/* CamelliaContextStr
 *
 * Values which maintain the state for Camellia encryption/decryption.
 *
 * keysize     - the number of key bits
 * worker      - the encryption/decryption function to use with this context
 * iv          - initialization vector for CBC mode
 * expandedKey - the round keys in 4-byte words
 */
struct CamelliaContextStr {
    PRUint32 keysize; /* bytes */
    CamelliaFunc *worker;
    PRUint32 expandedKey[CAMELLIA_MAX_EXPANDEDKEY];
    PRUint8 iv[CAMELLIA_BLOCK_SIZE];
};

#endif /* _CAMELLIA_H_ */
A SCANDAL HAS been brewing over the past few days involving social media giant Facebook and the data-driven political campaign firm Cambridge Analytica, which was part of Trump's 2016 election campaign push.

It's a complex story, but one that's becoming more and more important, mostly because it concerns people's Facebook data. It's because of Cambridge Analytica, for example, that Facebook shares tumbled by 5% in early trading today.

It's also raised a debate about how far voters can, or should, be influenced, as Cambridge Analytica is a company that offers its services to those who want to "change audience behaviour" in the context of elections, referendums, and other important political events.

What's kicked all this off?

Some 270,000 people downloaded the app, allowing Kogan to access information such as the city listed on their profile, or content they had "liked".

"However, the app also collected the information of the test-takers' Facebook friends, leading to the accumulation of a data pool tens of millions-strong," the Observer reported.

Wylie said that the information taken included status updates, comments, likes and sometimes even people's private messages. "It is a full-service propaganda machine," he said.

According to the New York Times and Britain's Observer, the company took information to help them design software to predict and influence voters' choices at the ballot box.

Facebook responded by suspending the account of Cambridge Analytica. Also suspended were the accounts of its parent organisation, Strategic Communication Laboratories, as well as those of University of Cambridge psychologist Aleksandr Kogan and Wylie.

The incident is being called the tech giant's biggest-ever data breach.

The EU said the data breach was "horrifying" and Europe must do everything to protect its own citizens. 
EU Justice Commissioner Vera Jourova said she would seek clarification from Facebook about the reports in the Observer and the New York Times when she visits the US this week.

Who has worked with CA?

The company has worked on political campaigns in countries including Kenya, Colombia, and India, according to the Guardian. The company's CEO Alexander Nix has denied that his firm worked for campaigners for the Brexit leave vote, although this has been hotly disputed by the co-founder of Leave.EU, Arron Banks.

Thomas Borwick, who used to work for CA, set up his own company Kanto, which was hired by an Irish anti-abortion group, according to the Times (Ireland edition). Kanto was also linked to the Brexit Leave campaign.

Reality Check: Cambridge Analytica uses client and commercially and publicly available data; we don't use or hold any Facebook data. When we learned GSR sold us Facebook data that it shouldn't have done, we deleted it all – system wide audit to verify.

GSR is short for Global Science Research, whose co-founder Joseph Chancellor is working for Facebook as an in-house psychologist. More on them from the Guardian here.

CA did not use any Facebook data for the 2016 Trump campaign. Advertising is not coercive; people are smarter than that.

It added that Obama's 2008 campaign was "famously" data-driven, pioneered microtargeting in 2012, "talking to people specifically based on the issues they care about".
Cortinarius violaceofuscus is a species of fungus that was first described by Cooke & Massee, and received its current name from George Edward Massee in 1904. Cortinarius violaceofuscus belongs to the genus Cortinarius and the webcap family (Cortinariaceae). No subspecies are listed in the Catalogue of Life.

Sources

Webcaps
violaceofuscus
# Math Help - Max{x,y}

1. ## Max{x,y}

$\forall x,y\in\mathbb{R}$

$\text{max}\{x,y\}=\frac{1}{2}\left(x+y+|x-y|\right)$

I know

$\text{max}\{|x|-|y|,|y|-|x|\}\leq |x-y|$

2. What is the question? Show the first equality? Treat the cases $x\leq y$ and $y\leq x$.

3. Yes.

4. Usually one would consider non-overlapping cases, but I guess the overlapping cases here would work.

Case 1: Suppose x $\leqslant$ y. What does this imply? If you don't recall what this implies, I'll say that addition on R satisfies commutation, monotonicity, an equality for inverses, and an equality for neutrals. In other words, for all x, y, z, c on R the following properties for + hold:

If x $\leqslant$ y, then (x+c) $\leqslant$ (y+c) (monotonicity)
(x+y)=(y+x) (commutation)
(x+(-x))=0 (inverse)
x+0=x (neutrality)

Since you started with x $\leqslant$ y, do you see which property you want to use here first? Do you see what element (function of an element, depending on how you look at it) of R you want to select for "c"? If you do, I think you'll infer what I've hinted at. Then check your definition of absolute value. So in this case, abs(x-y) must equal... what? Then replace what abs(x-y) must equal in this case with abs(x-y) in the maximum formula.

Case 2: Suppose x>y. Now I'll point out that for all x, y, c in R, if x>y, then (x+c)>(y+c). Now using a similar technique, along with your definition of absolute value, you can show something about the absolute value in this case.

Then you might want to indicate that the cases exhaust R and the result consequently follows.

If the above doesn't give you enough clues, I'll happily try and spell out details here.
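Before writing out the case proof, the identity itself is easy to spot-check numerically. A quick throwaway script (integer inputs are used so that the halving is exact in floating point):

```python
import random

def max_via_abs(x, y):
    # max{x, y} = (x + y + |x - y|) / 2
    return 0.5 * (x + y + abs(x - y))

random.seed(1)
for _ in range(10_000):
    x = random.randint(-1000, 1000)
    y = random.randint(-1000, 1000)
    # For integers, x + y + |x - y| = 2*max{x, y} exactly, so no rounding occurs
    assert max_via_abs(x, y) == max(x, y)
print("identity holds on all sampled pairs")
```

Of course a numerical check is not a proof; it just confirms the target before doing the case analysis.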
Sinbad Jr. and his Magic Belt is a series of five-minute cartoons that originally aired in first-run syndication between 1965 and 1966. Produced by Hanna-Barbera for the American International Television division of American International Pictures, they were shown during local children's television programming. The series is partially lost, with only a handful of surviving episodes left. Sinbad Jr. and Salty will both appear in the HBO Max series Jellystone!

Plot

Sinbad Jr. (voiced by Dal McKennon and Tim Matheson) is the teenage son of Sinbad, the famous sailor, and he traveled the world in his single-masted sailboat seeking adventure and wrongs to right, fighting such villains as the Bluto-like, big, black-bearded Blubbo and the mad doctor Rotcoddam ("mad doctor" spelled backwards). Sinbad Jr. gained the strength of 50 men whenever he tightened his magic belt, causing the diamond-shaped buckle to flash like lightning and temporarily transform him into a mighty muscleman. Sinbad Jr.'s first mate was his feisty and funny feathered friend Salty the Parrot (voiced by Mel Blanc).

Production

The series was conceived by Sam Singer's production company in 1960, with Dal McKennon voicing the title role. Singer Studio produced the initial episodes for the Trans-Arts Company, but the deal fell through. American International Pictures, which had released the film The Magic Voyage of Sinbad, held rights to the "Sinbad" trademark for screen works. AIP's television division eventually negotiated an agreement under which Hanna-Barbera would produce the series, with Tim Matheson replacing McKennon. The final release includes episodes produced by both studios.

Sinbad Jr. and His Magic Belt premiered in first-run syndication on Sept. 11, 1965. The 102 five-minute shorts aired first-run through 1966 within children's television programming. It was renamed Sinbad Jr., the Sailor out of deference to the 1962 Toei Studios feature-length cartoon, Adventures of Sinbad. 
The rights to the series were later acquired by MGM's subsidiary Orion Pictures (whose own holdings include the AIP library).

Theme music

The cartoon's theme song, composed by Ted Nichols, is a variation on the children's song "Sailing, Sailing (Over the Bounding Main)" that was written in 1880 by Godfrey Marks, a pseudonym of British organist and composer James Frederick Swift (1847–1931). A later version of the theme song has a jazzier beat.

Episodes

Each daily package consisted of three five-minute cartoons.

See also

List of works produced by Hanna-Barbera Productions
List of Hanna-Barbera characters

References

External links

1965 American television series debuts
1966 American television series endings
1960s American animated television series
American children's animated action television series
American children's animated adventure television series
American children's animated fantasy television series
Films based on Sinbad the Sailor
Television series by Hanna-Barbera
Television series by MGM Television
First-run syndicated television programs in the United States
Men's Dating

BBQ Guru™ Helps Couples Take the Guesswork Out of Creating Grill-Inspired Date Nights

Khaya Caine

The Short Version: For years, BBQ Guru has provided everyday cooks with the tools they need to become champion barbecuers. The company's devices allow people to easily control their food's cooking temperature in charcoal and wood-burning grills. For couples who love to barbecue, BBQ Guru, aka Bob Trudnak, makes planning a romantic dinner simple, and he encourages conversation by keeping you from having to constantly check on your food. Customers flock to the site for supplies and tips on keeping their meals at the perfect temperature and letting the grill do the work.

Bob Trudnak's passion for barbecue began at the tender age of 7. The second his father placed him in charge of putting charcoal in the grill, he knew he was hooked. For Bob, barbecuing wasn't just about eating delicious, flavorful food — although that was a nice perk. It was more about enjoying the company of family and friends and the connections they made. Barbecue brought people together.

"We looked forward to it every Sunday as a real sense of sharing and community for me," Bob said. "I knew from an early age that this is what I wanted my life to be like."

After Bob graduated from college, he began working for inventor "Shotgun Fred" Pirkle in Texas. Fred loved barbecue just as much as Bob did and taught him all of the intricacies of making real Southern barbecue. That led the duo to develop their own products and even start a barbecue competition. Bob immersed himself in his passion for making memorable barbecue. His hard work didn't go unnoticed, and Fred brought Bob into the fold on a side project, which would eventually evolve into BBQ Guru.

Bob Trudnak's passion for barbecue led him to design innovative products with BBQ Guru.

"I fell in love with food competitions, and now BBQ is my life," Bob said. 
"Besides being the Pit Master, Brand Ambassador, and Marketing Director for BBQ Guru, my wife and I own a BBQ catering business — Hav'N A BBQ — and we sell our own sauces and rubs." BBQ Guru makes controls for grills and smokers that keep barbecue at the optimal temperature throughout the cooking process. That means no opening the lid to see if the meat is overcooking while you're making a meal for a date. "With the help of our barbecue accessories, people can automatically and consistently regulate air flow to maintain a steady temperature throughout the cook," Bob said. "This not only creates an effective and easier way to BBQ, but it also gives you better results. By turning your grill into a 'set-it-and-forget-it' machine, you no longer have to worry about controlling the heat or hovering around your cooker." Anyone can make perfect barbecue, with less hassle, and spend more time with family — or a special someone. Award-Winning Temperature-Control Devices Make Entertaining Easy Everyone knows that great barbecue doesn't just happen, but BBQ Guru's products can make the process seem effortless. The company knows that smoking meats and slow-cooking barbecue can be time-consuming and require plenty of work and patience. Which is why its innovations allow customers to have the best of both worlds — next-level barbecue with more fun in the process. When BBQ Guru developed its technology, it opened the door for anyone to give traditional BBQ a try — as a hobby or a lifestyle. Its automatic temperature control devices — PartyQ, DigiQ DX2, and CyberQ Cloud — keep barbecue at the optimal cooking temperature. One customer found his device to be extremely accurate. "It kept the pit temperature spot on… and beeped me when the internal temperature met its mark. Easy to set up… easy to put away," said Larry in a testimonial on the site. 
"Using the right equipment, along with an automatic temperature controller, takes the guesswork out of BBQ and allows time for fun activities," Bob said. "For example, you can watch a movie on the patio, open up a bottle of wine, and enjoy some appetizers while your dinner is cooking." Many couples have found that BBQ Guru's temperature controls make wonderful gifts for their significant others. Lisa D. found that her husband was a huge fan of the DigiQ. "I bought (the DigiQ) for my husband for Valentine's Day and he loves, loves, loves it!!! The food always comes out perfect!!" she said. Trusted grilling and barbecue companies, like AmazingRibs.com, also praise BBQ Guru's temperature controls. "You'll wonder how you lived without one of these after using it." Trusted Recipes Keep the Romance Sizzling When it comes to planning the perfect date night, BBQ Guru has become a trusted resource for helping couples keep the spark alive. Bob said couples can start with smoked or grilled meat and make an exciting night out of it whether it's a special occasion or a romantic date night in. Couples can spend more time talking instead of checking the grill by using BBQ Guru products. The website has recipes that range from tacos to seafood to pasta dishes, that are great for couples to cook together on a date night at home. Bob highlighted Grilled Shrimp Scampi over Linguini, Sausage and Scallop Skewers with a Honey Pomegranate Glaze, and Pulled Pork Tacos with Red Jalapeno BBQ Slaw as a few of his favorite recipes. Not only are they fun and flavorful, but they're also great for dividing up tasks and assembling as a team. "Once your meat is done cooking, you can even make a little competition," Bob said. "For example, if you're making tacos, you and your partner can each secretly choose four ingredients and see who builds the better tasting taco." 
Tips & Tricks Help Beginners Cook Like Pros Bob's passion is sharing the joy of barbecue, and he holds cooking classes throughout the US and Europe. He loves teaching people how to sharpen their skills and provides special tips and tricks that make barbecuing easier. "I try to explain cooking by making it simple. Anyone can do it. I've seen people create great dishes who never thought they could," he said. "I have also taught many people how to cook competitively. One of the greatest feelings is when they come up to me after winning an award at a contest and say, 'Thank you for teaching me how to do this.' Their smiles are priceless. It really warms my heart." Bob remembers being a beginner, so he always finds time to encourage those new to barbecue to be patient with themselves and ask questions. That is why BBQ Guru created a private online community, ShareMyCook.com. The website is a fun place where CyberQ Cloud owners can remotely track cooking temperatures and time, and share recipes and cooks with friends and family through social media. ShareMyCook.com offers mouth-watering recipes perfect for your next date night. "If you are the slightest bit skeptical about getting into the hobby of backyard BBQ, open your mind," Bob said. "Try our tips, tricks, and methods, and you will be pleasantly surprised how easy and fun it is to create wonderful foods on your grill and smoker. Our website and YouTube channel are full of resources, educational videos, and recipes to help you create better BBQ." According to BBQ Guru, maintaining low and even temperatures is key to great tasting barbecue. Bob said that one of the most common outdoor mistakes is repeatedly opening and closing the grill's lid. "As tempting as it is to see what's happening underneath, fiddling with the lid releases heat and causes temperature fluctuations," Bob said. 
"If you continuously check to see if your food is done, you are risking the chance of wild temperature fluctuations and tough, dried, or burnt meat." BBQ Guru is Developing Even More State-of-the-Art Products BBQ Guru is focused on upgrading its current product line to stay at the leading edge of the market. Bob said that the company wants to ensure its customers always have a pleasant experience. "Our wheels are constantly spinning to develop new products that we believe will help bring convenience and fun to outdoor cooking," he said. As the Pit Master and marketing specialist behind the brand, Bob shows no signs of slowing down. He has won more than 250 awards in barbecue contests all over the world. But, even with all of the awards he's won, Bob said he feels his biggest accomplishment was teaching his 17-year-old daughter and her two best friends how to cook competition barbecue. "They went on to win three Grand Championships and two Reserve Championships in the following five years," Bob said. "Teaching them how to not only cook well but also how to work as a team and solve problems was very important to me." It's this love of family and dedication to exquisite barbecue that has allowed BBQ Guru to remain an industry leader and satisfy customers all over the world. As a true romantic and people-lover at heart, Khaya Caine is happy to bring her expertise as a Parent and Community Engagement Specialist and Certified Professional Dating Coach to DatineAdvice.com. Khaya honed her sharp communication and writing skills as a journalist and copy editor at a daily newspaper. And, on DatingAdvice.com, it's Khaya's goal to help inform today's singles of the solutions that make finding love in the modern world a reality.
Q: ApiRTC: Is there a way to get a call duration, or hang up a call at a specific duration or time? I am working on a project that includes real-time communication through ApiRTC. I am looking for a solution to destroy a call after a specific duration, or at a specific time, for example at 6 PM. Is there any way to get the call duration and, based on that, destroy (hang up) a call?

A: This can be managed in your application by starting a timer when you start the call.
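The scheduling logic itself is independent of ApiRTC (which is a JavaScript API). As a language-agnostic sketch, here is the idea in Python, where `hangup` stands for whatever callback actually terminates the call in your application (a hypothetical placeholder, not an ApiRTC function):

```python
import threading
from datetime import datetime, timedelta

def schedule_hangup_at(hour, minute, hangup):
    """Schedule `hangup` to run at the next occurrence of hour:minute local time."""
    now = datetime.now()
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += timedelta(days=1)  # that time already passed today; use tomorrow
    delay = (target - now).total_seconds()
    t = threading.Timer(delay, hangup)
    t.start()
    return t  # keep the handle so the timer can be cancelled if the call ends early

# For a fixed duration instead of a wall-clock time, threading.Timer(seconds, hangup)
# started when the call starts is enough; the elapsed time since that start is the
# call duration.
```

Remember to cancel the timer (`t.cancel()`) if the call is hung up manually first.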
Broccoli City Fest Recap: Uplifting the Community Through Healthier Living and Dope Music

By: Vance Brinkley · Category: Events · April 27, 2015

CHECK OUT HIGHLIGHTS FROM THIS YEAR'S BROCCOLI CITY FEST IN DC

If you weren't in Southeast DC, you weren't at the place to be. Broccoli City Fest was in session, and the coolest artists in the local area came out to perform. The local festival featured a strong lineup with Erykah Badu and Thundercat headlining, along with acts Joey Bada$$, Kaytranada, and Jaden & Willow Smith earlier in the day. There was a good mix of DMV singers, rappers, and bands. Virginia artist Kali Uchis supplied some chill vibes, while fellow local duo Sunny & Gabe made a dope performance. D.R.A.M. also came out to perform his hit single "Cha Cha".

Southeast DC rapper Lightshow turnt up on stage Saturday

Broccoli City Fest isn't all about music though; it also aims to inform the local community about a healthier, more active lifestyle. There were several different vendors, activities, and performances that promoted that message too. Juice bars, healthy food trucks, even outdoor bike machines: BCFest had a ton of things to do to keep you active while you wait for the next artist. Erykah Badu ended the night with a stellar performance via DJ set. Bass player Thundercat supported Badu's set.

Broccoli City Festival may be growing stronger in the District, but there's talk that it will soon expand. According to the organization's website, the festival will make its way to the West Coast, Los Angeles to be exact. The website has a brief statement about the new move to LA: "Always at the Epic Center of Trends, some say all new trends start in LA and blow eastward. The City of Angels is the West Coast Home for Broccoli City Festival 2015."

Although the rain made a few things difficult halfway into the event, BCFest was definitely what the District needed. The city has been overlooked in the past for unique performances from artists. 
However, with festivals like this and Trillectro, a cultural shift is taking place in the local arts scene. It's for the better. In a city where gentrification is changing the culture that originally filled the streets of DC, Broccoli City Fest is a sign that it's still alive. What makes it more influential is that it's in its home, the community, and that's far more organic and healthy for locals than any fruit or vegetable. Check out the official recap video for Broccoli City Fest 2015 below. Tracks used in the video are "At All" and Janet Jackson's "If" Remix, all made by Kaytranada. Posted in Events, Features. Tagged: BC Fest, Broccoli City Fest, D.R.A.M., exclusive, feature, Jaden & Willow Smith, Joey BADA$$, Kali Uchis, Kaytranada, recap, Sunny & Gabe, Video
\section{Introduction} In the upcoming years the general purpose experiments ATLAS and CMS at the CERN LHC will collect a large amount of data which will be used to perform precision studies of various quantities. Among them are certainly the properties of the Higgs boson, in particular its couplings to the other particles of the Standard Model. Important quantities in this context are the production cross sections and partial decay rates of the Higgs boson. The dominant production process is via gluon fusion followed by vector boson fusion and the so-called Higgs-strahlung process $pp\to VH$ ($V=Z,W$) which is the subject of the current paper. Although $pp\to VH$ has a much smaller cross section it is a promising channel to observe, e.g., if the Higgs boson decays to a $b\bar{b}$ pair once substructure techniques are applied~\cite{Butterworth:2008iy}. The leading order (LO) cross section is obtained from the Drell-Yan process for the production of a virtual gauge boson $V^\star$ and its subsequent decay into $VH$. Next-to-next-to-leading order QCD corrections to this channel have been computed in Refs.~\cite{Brein:2003wg,Brein:2011vx,Ferrera:2011bk,Ferrera:2013yga,Ferrera:2014lca} and electroweak corrections have been considered in Refs.~\cite{Ciccolini:2003jy,Denner:2011id}. QCD corrections up to NNLO and electroweak corrections up to NLO for the total cross section have been implemented in the program {\tt vh@nnlo}~\cite{Brein:2012ne}. In Ref.~\cite{Kniehl:1990iva} the loop-induced production channel $gg\to ZH$ has been computed at leading order. NLO QCD corrections have been computed in Ref.~\cite{Altenkamp:2012sx} in the heavy top quark limit which significantly simplifies the calculation. They are also implemented in {\tt vh@nnlo}~\cite{Brein:2012ne}. Note that the NLO corrections to $gg\to ZH$ are formally N$^3$LO contributions to $pp\to ZH$. 
However, due to the numerical importance of the gluon-induced process it is worthwhile to compute $gg\to ZH$ to NLO accuracy. In this paper we study the effect of a finite top quark mass. At LO exact results are available. However, at NLO the occurring integrals are highly nontrivial and their evaluation is beyond straightforward application of current multi-loop techniques. We investigate the mass effects by expanding the amplitudes for large $m_t$. This approximation is not valid in all phase space regions. However, it provides an estimate of the numerical size of the power-suppressed terms and thus of the quality of the effective-theory result. Furthermore, it constitutes an important reference for a future exact result since we observe a good convergence of the partonic cross sections below the top quark pair threshold. We only consider the $gg$ channel; similar techniques can also be applied to the loop-induced contributions of the $qg$ and $q\bar{q}$ channels which are, however, numerically much smaller~\cite{Altenkamp:2012sx}. In our calculation we do not consider decays of the final-state $Z$ boson. Similar to $gg\to ZH$ also the process $gg\to HH$ is mediated by heavy quark loops. NLO and NNLO corrections have been considered in a series of papers~\cite{Dawson:1998py,Grigo:2013rya,deFlorian:2013jea,Shao:2013bz,Maltoni:2014eza,Grigo:2014jma,Grigo:2015dia,deFlorian:2015moa,Degrassi:2016vss,deFlorian:2016uhr} applying various approximations. Recently the exact NLO corrections became available~\cite{Borowka:2016ehy,Borowka:2016ypz}. The comparison to the approximations shows sizeable differences for the total cross section and the Higgs transverse momentum distribution. However, reasonable agreement between the exact and the in $1/m_t$-expanded results is found for the Higgs pair invariant mass ($m_{HH}$) distribution for not too large values of $m_{HH}$ if the approximated result is re-scaled with the exact LO cross section. 
Note that the region between the production threshold and the top quark threshold corresponds to about 100~GeV in the case of $HH$ and to about 135~GeV in the case of $ZH$ production, which makes the heavy-top expansion more interesting for the latter. Top quark mass effects have also been computed for the related process $gg\to ZZ$. In Ref.~\cite{Melnikov:2015laa} $1/m_t^2$ corrections have been computed at NLO, and interference effects have been considered in~\cite{Campbell:2016ivq}. In the latter reference Pad\'e approximation and conformal mapping have been applied to improve the validity of the expansion in $1/m_t$. The remainder of the paper is organized as follows: In Section~\ref{sec::LO} we briefly discuss the LO cross section and compare the in $1/m_t$ expanded and exact results. In Section~\ref{sec::part} we present our findings for the partonic NLO cross section. In particular, we identify the approximation procedure which leads to promising hadronic results, the subject of Section~\ref{sec::hadr_nlo}. We summarize our results in Section~\ref{sec::concl}. \section{\label{sec::LO}$gg\to ZH$ at LO} Sample Feynman diagrams contributing to the LO cross section are shown in Fig.~\ref{fig::diag} (a) and (b). There are triangle contributions where the final-state $Z$ and Higgs bosons are produced via an $s$-channel $Z$ or $\chi$ boson exchange. Both bottom and top quarks can be present in the loop. In the case of the box diagrams the Higgs boson couples directly to the quark running in the loop and thus only internal top quarks are present since we neglect the bottom Yukawa coupling. The effect of a finite bottom quark mass on the LO cross section is at the per mille level.
\begin{figure}[t] \centering \begin{tabular}{cccc} \includegraphics[width=0.2\textwidth]{LOtri.eps} & \includegraphics[width=0.2\textwidth]{LObox.eps} & \includegraphics[width=0.2\textwidth]{NLOtri.eps} & \includegraphics[width=0.2\textwidth]{NLObox.eps} \\ (a)&(b)&(c)&(d) \end{tabular} \\ \begin{tabular}{ccc} \includegraphics[width=0.1\textwidth]{NLOred.eps}& \includegraphics[width=0.2\textwidth]{NLOreal1.eps}& \includegraphics[width=0.2\textwidth]{NLOreal2.eps} \\ (e)&(f)&(g) \end{tabular} \caption{\label{fig::diag}Sample Feynman diagrams contributing to $gg\to ZH$ at LO and NLO. Solid, wavy, dashed and curly lines denote quarks, $Z$ and Higgs bosons, and gluons, respectively. Internal wavy lines can also represent Goldstone bosons.} \end{figure} In the heavy-$m_t$ approximation the diagrams with internal top quarks reduce to vacuum integrals. The massless triangle diagrams are computed with the help of simple form factor-type integrals which can be expressed in terms of $\Gamma$ functions (see, e.g., Appendix~A of Ref.~\cite{Smirnov:2012gma}). We perform the calculation for general $R_\xi$ gauge and check that the gauge parameter $\xi_Z$ present in the $Z$ and $\chi$ boson propagators drops out in the result for the cross section. In fact, it cancels between the diagrams with top and bottom quark triangles and a neutral Goldstone boson or a $Z$ boson in the $s$ channel. Note that for special choices of $\xi_Z$ the calculation can be significantly simplified. For example, in Landau gauge the massless triangle contribution with virtual $Z$ boson vanishes~\cite{Altenkamp:2012sx}. Note that due to Furry's theorem there is no contribution from the vector coupling of the $Z$. Altogether there are 16 LO Feynman diagrams, all of which are individually finite. We compute the LO amplitudes both in an expansion for large top quark mass including terms up to order $1/m_t^{8}$, and without applying any approximation and keeping the full top quark mass dependence.
In the latter case we have reduced the tensor integrals to scalar three- and four-point integrals which are evaluated using the {\tt LoopTools} library~\cite{Hahn:1998yk,vanOldenborgh:1989wn}. We want to mention that in the limit $m_t\to\infty$ the calculation is significantly simplified. In particular, all top quark triangle contributions with a coupling of the $Z$ boson vanish. For the numerical results we use the following input values~\cite{PDG} \begin{eqnarray} M_Z &=& 91.1876~\mbox{GeV} \,, \nonumber\\ M_W &=& 80.385~\mbox{GeV} \,, \nonumber\\ M_H &=& 125~\mbox{GeV} \,, \nonumber\\ G_\mu &=& 1.16637\cdot 10^{-5}~\mbox{GeV}^{-2} \,, \nonumber\\ M_t &=& 173.21~\mbox{GeV} \,, \end{eqnarray} where $M_t$ is the top quark pole mass. To obtain our numerical results we follow Ref.~\cite{Altenkamp:2012sx} and use the so-called $G_\mu$ scheme where the electromagnetic coupling constant $\alpha$ and the weak mixing angle ($s_W\equiv \sin\theta_W$) are defined via \begin{eqnarray} c_W^2 &=& 1-s_W^2 \,\,=\,\, \frac{M_W^2}{M_Z^2} \,\,\approx\,\, 0.77710 \,, \nonumber\\ \alpha &=& \frac{\sqrt{2}G_\mu M_W^2 s_W^2}{\pi} \,\,\approx\,\, 0.0075623 \,. \end{eqnarray} Our default PDF set is \verb|PDF4LHC15_nlo_100_pdfas|~\cite{Butterworth:2015oua} which we use to compute both the LO and NLO cross sections. For the strong coupling constant we use the value provided by \verb|PDF4LHC15_nlo_100_pdfas| which is given by \begin{eqnarray} \alpha_s(M_Z) &=& 0.118\,. \end{eqnarray} For the implementation of the PDFs we use version 6.1.6 of the {\tt LHAPDF} library~\cite{Buckley:2014ana} (see {\tt https://lhapdf.hepforge.org/}) which also provides the running of $\alpha_s$ from $M_Z$ to the chosen renormalization scale $\mu_R$. Our default choice for the latter and for the factorization scale $\mu_F$ is the invariant mass of the $ZH$ system \begin{eqnarray} \mu_0^2 = (p_H+p_Z)^2 \,. \end{eqnarray} If not stated otherwise we choose $\sqrt{s_H}=14$~TeV for the hadronic center-of-mass energy.
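As a quick numerical cross-check of the $G_\mu$-scheme relations above, the quoted derived values of $c_W^2$ and $\alpha$ can be reproduced directly from the input parameters (a minimal sketch for illustration; the paper's actual computations are done in the {\tt ggzh} program):

```python
import math

# Input values quoted above (G_mu scheme)
M_Z = 91.1876      # GeV
M_W = 80.385       # GeV
G_mu = 1.16637e-5  # GeV^-2

# Derived weak mixing angle and electromagnetic coupling
c_W2 = M_W**2 / M_Z**2                              # cos^2(theta_W)
s_W2 = 1.0 - c_W2                                   # sin^2(theta_W)
alpha = math.sqrt(2.0) * G_mu * M_W**2 * s_W2 / math.pi

print(f"c_W^2 = {c_W2:.5f}")   # ~0.77710
print(f"alpha = {alpha:.7f}")  # ~0.0075623
```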
\begin{figure}[t] \centering \includegraphics[width=\textwidth]{part_lo.eps} \caption{\label{fig::LO_exact_vs_exp}LO $gg\to ZH$ partonic cross section as a function of the partonic center-of-mass energy $\sqrt{s}$. The exact result is shown as black solid line and the expansion terms including $1/m_t^0$, \ldots, $1/m_t^8$ terms (note that the $1/m_t^2$ term vanishes) are represented by (blue) dashed lines where shorter-dashed lines include higher order power corrections. The dash-dotted (red) line represents the $[2/2]$ Pad\'e result, see text.} \end{figure} In Fig.~\ref{fig::LO_exact_vs_exp} we compare the partonic cross section of the exact (black solid line) and expanded results (blue dashed lines, see caption for details). One observes a continuous improvement of the large-$m_t$ approximations below the top quark pair threshold which is at $\sqrt{s}\approx 346$~GeV. However, the characteristic behaviour at threshold and the drop of the cross section for large values of $\sqrt{s}$ cannot be reproduced. We pick up the idea of Ref.~\cite{Campbell:2016ivq}\footnote{In contrast to~\cite{Campbell:2016ivq} we apply the Pad\'e approximation at the level of differential cross sections and not at the level of the amplitudes. Furthermore, we refrain from performing a conformal mapping since in our case the gain is marginal.} and use the expansion terms to construct the $[2/2]$ Pad\'e approximant, see (red) dash-dotted line in Fig.~\ref{fig::LO_exact_vs_exp}. One observes that the Pad\'e result approximates reasonably well the exact curve up to $\sqrt{s}\approx 346$~GeV which is indicated by the vertical dashed line. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{hadr_mzh_lo.eps} \caption{\label{fig::hadr_mzh_lo}Hadronic LO $gg\to ZH$ cross section as a function of $m_{ZH}^{\rm cut}$, the cut on the invariant mass of the $Z$-Higgs system. The exact result is shown in black. 
The dashed (blue) curves correspond to the expanded results (see caption of Fig.~\ref{fig::LO_exact_vs_exp} for more details) and the $[2/2]$ Pad\'e approximation is shown as dash-dotted (red) curve.} \end{figure} In Fig.~\ref{fig::hadr_mzh_lo} we show the hadronic cross section for $gg\to ZH$ as a function of the cut on the invariant mass of the $Z$-Higgs system using the same conventions as in Fig.~\ref{fig::LO_exact_vs_exp}. We observe a rapid convergence of the $1/m_t$ expansion (blue dashed curves) for $m_{ZH}^{\rm cut}\lsim 350$~GeV and a good approximation of the exact result (solid, black) by the Pad\'e curve (dash-dotted, red). By construction, for large values of $m_{ZH}^{\rm cut}$ the total cross section is reproduced. It is interesting to note that for $m_{ZH}^{\rm cut}\lsim 346$~GeV the cross section amounts to about {\num a quarter} of the total cross section. For this value of $m_{ZH}^{\rm cut}$ the Pad\'e result yields {\num 16.1}~fb, which is very close to the exact result ({\num 17.0}~fb). On the other hand, the infinite top mass approach only gives {\num 11.0}~fb. For a collision energy of $\sqrt{s_H}=8$~TeV we obtain for the total hadronic cross section $\sigma_{\rm H,LO}^{\rm (exact)}={\num 16.0}$~fb, which agrees impressively well with the result obtained from the effective-theory approximation: ${\num 15.8}$~fb. Since the partonic cross sections have completely different shapes (cf. solid and long-dashed curves in Fig.~\ref{fig::LO_exact_vs_exp}) this agreement has to be considered accidental. In fact, for $\sqrt{s_H}=14$~TeV we have $\sigma_{\rm H,LO}^{\rm (exact)}={\num 61.8}$~fb whereas the infinite-$m_t$ approximation gives ${\num 80.5}$~fb. \section{\label{sec::part}Partonic NLO corrections} Sample Feynman diagrams contributing to the real and virtual NLO corrections can be found in Fig.~\ref{fig::diag}. In our calculation we apply standard techniques.
In particular, the one- and two-loop integrals are reduced to master integrals using the program {\tt FIRE}~\cite{Smirnov:2014hma}; the resulting master integrals can be found in Refs.~\cite{Gehrmann:2005pd,Ellis:2007qk}. For the isolation of the soft and collinear infrared divergences we follow Ref.~\cite{Frixione:1995ms}, which allows one to compute differential cross sections. Although we consider top quark mass effects, we express our final result in terms of $\alpha_s$ defined in the five-flavour theory. We write the partonic cross section to NLO accuracy in the form \begin{eqnarray} \sigma_{\rm NLO} &=& \sigma_{\rm LO}^{\rm (exact)} + \delta \sigma_{\rm NLO}^{\rm (approx)} + \delta \sigma_{\rm NLO}^{\rm (virt,red)} \,, \label{eq::sig_part} \end{eqnarray} where results for the LO cross section have already been discussed in Section~\ref{sec::LO}. $\delta \sigma_{\rm NLO}^{\rm (virt,red)}$ is the contribution from the reducible diagrams where two quark triangles are connected by a gluon in the $t$ or $u$ channel, see Fig.~\ref{fig::diag}(e) for a sample Feynman diagram. In Ref.~\cite{Altenkamp:2012sx} the effective-theory result for the corresponding differential cross section is given, which is obtained by considering the interference with the LO amplitude. We confirm the analytic expression of~\cite{Altenkamp:2012sx} and add power-suppressed terms up to order $1/m_t^8$. Furthermore, we have computed this contribution exactly keeping the full top mass dependence. For the numerical results which we present in Section~\ref{sec::hadr_nlo} the exact expression is used. In this section we discuss $\delta \sigma_{\rm NLO}^{\rm(approx)}$.
We define the NLO approximation by factoring out the exact LO cross section multiplied by the ratio of the in $1/m_t$ expanded NLO and LO contribution: \begin{eqnarray} \delta \sigma_{\rm NLO}^{\rm (approx)} &=& \sigma_{\rm LO}^{\rm (exact)} \,\,\frac{ \delta\sigma_{\rm NLO}^{({\rm exp}-n)} } { \sigma_{\rm LO}^{({\rm exp}-n)} } \,, \label{eq::sigma_NLO} \end{eqnarray} where ``${\rm exp}-n$'' means that the corresponding quantity contains expansion terms up to order $1/m_t^n$. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{part_nlo.eps} \caption{\label{fig::par_NLO}NLO partonic cross section as a function of $\sqrt{s}$. The expansion terms including $1/m_t^0$, \ldots, $1/m_t^8$ terms are represented by (blue) dashed lines where shorter-dashed lines include higher order power corrections. The dash-dotted (red) line represents the $[2/2]$ Pad\'e result. Approximations based on Eqs.~(\ref{eq::sigma_NLO}) and~(\ref{eq::nlo_fact2}) are shown as yellow and brown curves, respectively. In both cases we either include only the leading top quark mass corrections (long-dashed curves) or corrections up to order $1/m_t^8$ (short-dashed curves).} \end{figure} In Fig.~\ref{fig::par_NLO} we show as (blue) dashed lines the quantities $\delta\sigma_{\rm NLO}^{({\rm exp}-n)}$ and as (red) dashed-dotted line the $[2/2]$ Pad\'e approximant as a function of the partonic center-of-mass energy $\sqrt{s}$. We observe a similar behaviour as at LO (cf. Fig.~\ref{fig::LO_exact_vs_exp}). In particular, it can not be expected that meaningful NLO approximations are obtained for large values of $\sqrt{s}$ from these expansion terms. However, based on observations at LO we expect that the Pad\'e result provides a reasonable approximation below $\sqrt{s}\approx 346$~GeV. 
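The rescaling in Eq.~(\ref{eq::sigma_NLO}) is a simple multiplicative reweighting of the exact LO cross section. A schematic sketch (the function name and the numbers are illustrative placeholders, not results from the paper) could read:

```python
def nlo_rescaled(sigma_lo_exact, delta_nlo_expanded, sigma_lo_expanded):
    """Eq. (5)-type approximation: factor out the exact LO cross section
    and multiply by the ratio of the 1/m_t-expanded NLO contribution and
    the 1/m_t-expanded LO cross section (both truncated at the same order n)."""
    return sigma_lo_exact * delta_nlo_expanded / sigma_lo_expanded

# Illustrative toy numbers only (not taken from the paper):
sigma_lo_exact = 0.040  # exact LO at some sqrt(s)
sigma_lo_exp = 0.050    # LO expanded to order 1/m_t^n
delta_nlo_exp = 0.025   # NLO correction expanded to order 1/m_t^n

print(nlo_rescaled(sigma_lo_exact, delta_nlo_exp, sigma_lo_exp))  # ~0.02
```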
In Fig.~\ref{fig::par_NLO} we also show as (yellow) long- and short-dashed curves the quantity $\delta \sigma_{\rm NLO}^{\rm (approx)}$ with $n=0$ and $8$ (the curves for $n=2,4,6$ lie in between and are not shown for clarity). The shape is now dictated by the LO cross section and has a well-behaved high-energy limit. For $\sqrt{s} < 346$~GeV the two curves are close together, however above the top threshold the $n=8$ curve is significantly higher. As an alternative to Eq.~(\ref{eq::sigma_NLO}) we consider an approach where the exact LO result is factored at the differential level, i.e., before the integration over phase space. Schematically we write \begin{eqnarray} \int {\rm d}{\rm PS}_2 \, \left|{\cal M}_{\rm LO}^{\rm (exact)}\right|^2 \, \frac{ \left|{\cal M}^{({\rm exp}-n)}_{\rm NLO}\right|^2 } { \left|{\cal M}^{({\rm exp}-n)}_{\rm LO}\right|^2 } \,, \label{eq::nlo_fact2} \end{eqnarray} where ``${\rm d}{\rm PS}_2$'' indicates that we use this kind of factorization for the two-particle phase space contributions. The contribution from the three-particle phase space (which is numerically small) is added in the infinite top quark mass approximation. The integrand of Eq.~(\ref{eq::nlo_fact2}) is better behaved than the one for $\delta\sigma_{\rm NLO}^{({\rm exp}-n)}$ in Eq.~(\ref{eq::sigma_NLO}), which might lead to better approximations for the total cross section. However, below the top quark pair threshold we only expect small differences between Eqs.~(\ref{eq::sigma_NLO}) and~(\ref{eq::nlo_fact2}). Fig.~\ref{fig::par_NLO} shows $\delta \sigma_{\rm NLO}^{\rm (approx)}$ as obtained from Eq.~(\ref{eq::nlo_fact2}) for $n=0$ and $8$ as brown dashed lines. Note that the $n=0$ curve lies almost on top of the yellow curve (which is based on Eq.~(\ref{eq::sigma_NLO})). This is because the two-particle phase space contributions to the squared matrix elements are proportional to the LO result. Moreover the three-particle contribution is small. 
As before, the $n=0$ and $n=8$ curves are close together below the top threshold and significant deviations are observed above. \section{\label{sec::hadr_nlo}Numerical results for hadronic cross sections} Numerical results for the LO cross section have already been discussed in Section~\ref{sec::LO}. At NLO we write in analogy to Eq.~(\ref{eq::sig_part}) \begin{eqnarray} \sigma_{\rm H,NLO} &=& \sigma_{\rm H,LO}^{\rm (exact)} + \delta \sigma_{\rm H,NLO}^{\rm (approx)} + \delta \sigma_{\rm H,NLO}^{\rm (virt,red)} \,. \label{eq::sig_hadr} \end{eqnarray} For the construction of $\delta \sigma_{\rm H,NLO}^{\rm (approx)}$ we consider three possibilities: (i) we either use the in $1/m_t$ expanded partonic results; (ii) we construct an approximation using Eq.~(\ref{eq::sigma_NLO}) (where the partonic cross sections are replaced by their hadronic counterparts), or (iii) we utilize the differential approach of Eq.~(\ref{eq::nlo_fact2}). The latter option is only applied to the total cross section. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{hadr_mzh_nlo.eps} \caption{\label{fig::hadr_mzh_nlo}NLO contribution $\delta \sigma_{\rm H,NLO}^{\rm (approx)}$ to the hadronic cross section as a function of $m_{ZH}^{\rm cut}$. The dashed (blue) curves contain expansion terms up to order $1/m_t^8$ and the dash-dotted (red) curve represents the Pad\'e result. The long-dashed (yellow) curve is based on Eq.~(\ref{eq::sigma_NLO}) with $n=0$.} \end{figure} Fig.~\ref{fig::hadr_mzh_nlo} shows the $m_{ZH}^{\rm cut}$ dependence of the NLO contribution $\delta \sigma_{\rm H,NLO}^{\rm (approx)}$. We concentrate on the region below the top quark threshold where approximations are valid. For large values of $m_{ZH}^{\rm cut}$ one obtains the total cross section which is briefly discussed below. The (blue) dashed curves are obtained from the asymptotically expanded results and the dash-dotted (red) curve is obtained from the $[2/2]$ Pad\'e approximation. 
The general picture is similar to the one at the partonic level. In particular, one observes a good convergence for $m_{ZH}^{\rm cut}\lesssim 350$~GeV and one can expect that the Pad\'e result provides a good approximation to the unknown exact result. Note that for $m_{ZH}^{\rm cut}=346$~GeV the large-$m_t$ approximation gives {\num 13~fb}, whereas the Pad\'e result leads to {\num 21~fb}, which corresponds to an increase of more than 50\%. The total cross section for $m_{ZH}^{\rm cut}=346$~GeV amounts to about {\num a quarter} of the total cross section computed in the infinite top quark mass approximation (see also below). The dashed yellow curve in Fig.~\ref{fig::hadr_mzh_nlo} is based on Eq.~(\ref{eq::sigma_NLO}). It is obtained from the $m_{ZH}^{\rm cut}$-dependence of the exact LO result multiplied by the ratio of the NLO and LO total cross sections taken in the infinite top quark mass approximation. Below $m_{ZH}^{\rm cut}\lesssim 350$~GeV this result and the Pad\'e curve lie basically on top of each other. Very similar results are also obtained if the ratio of the $m_{ZH}^{\rm cut}$-dependent NLO and LO total cross sections is considered in the effective theory limit. For reasons of clarity the corresponding curve is not shown in Fig.~\ref{fig::hadr_mzh_nlo}. We refrain from showing the $m_{ZH}^{\rm cut}$ dependence for $\delta \sigma_{\rm H,NLO}^{\rm (virt,red)}$ since this contribution is numerically small. It is negative and amounts to about {\num 1\%} of $\delta \sigma_{\rm NLO}^{\rm (approx)}$. However, it is included in the discussion of the total cross section below. Note that the infinite top quark mass approximation of $\sigma_{\rm H,NLO}^{\rm (virt,red)}$ is off by a factor two.
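The $[2/2]$ Pad\'e approximants used in this and the previous section are fixed by the five expansion coefficients (orders $1/m_t^0$ through $1/m_t^8$, i.e. a series in $x=1/m_t^2$). As a sketch of how such an approximant can be built from generic series coefficients (not the paper's actual implementation; illustrated here on the Taylor series of $e^x$, for which the $[2/2]$ Pad\'e is known in closed form):

```python
def pade22(c):
    """Build the [2/2] Pade approximant of the series
    c[0] + c[1]*x + c[2]*x^2 + c[3]*x^3 + c[4]*x^4,
    returning a callable p(x)/q(x) normalized to q(0) = 1."""
    c0, c1, c2, c3, c4 = c
    # Denominator coefficients from requiring the x^3 and x^4 terms to vanish:
    #   c2*q1 + c1*q2 = -c3
    #   c3*q1 + c2*q2 = -c4
    det = c2 * c2 - c1 * c3
    q1 = (c1 * c4 - c2 * c3) / det
    q2 = (c3 * c3 - c2 * c4) / det
    # Numerator coefficients from matching the x^0..x^2 terms:
    p0 = c0
    p1 = c1 + c0 * q1
    p2 = c2 + c1 * q1 + c0 * q2
    return lambda x: (p0 + p1 * x + p2 * x * x) / (1.0 + q1 * x + q2 * x * x)

# Example: the Taylor coefficients of exp(x) reproduce its known [2/2] Pade,
# (1 + x/2 + x^2/12) / (1 - x/2 + x^2/12).
f = pade22([1.0, 1.0, 0.5, 1.0 / 6.0, 1.0 / 24.0])
print(f(0.1))  # close to exp(0.1) = 1.1051709...
```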
\begin{table}[t] \begin{center} {\num \begin{tabular}{c|c|cccc|c} $\sqrt{s_{H}}/\text{GeV}$ & $\sigma_{\rm H,LO}^{\rm (exact)}$ & $\sigma_{\rm H,NLO}^{\rm (exp-0)}$ & $\sigma_{\rm H,NLO}^{\rm (re-scale)}$ & $\sigma_{\rm H,NLO}^{\rm (diff)}$ & $\sigma_{\rm H,NLO}^{[2/2]}$ & NLO scale variation\\ \hline 7 & 11.2 & 23.9 & 24.9 & 26.1 & 26.5 & ${}^{+21\%}_{-21\%}$\\ 8 & 16.0 & 35.2 & 35.4 & 37.2 & 38.8 & ${}^{+20\%}_{-20\%}$\\ 13 & 52.4 & 129 & 113 & 121 & 140 & ${}^{+14\%}_{-17\%}$\\ 14 & 61.8 & 155 & 133 & 142 & 168 & ${}^{+13\%}_{-16\%}$ \end{tabular} } \caption{\label{tab:totalCS} LO and NLO results for the total cross section in fb. In columns 3 to 6 the following NLO contributions are added to the exact LO result: infinite top mass approximation ($\sigma_{\rm H,NLO}^{\rm (exp-0)}$), re-scaled NLO contribution based on Eq.~(\ref{eq::sigma_NLO}) ($\sigma_{\rm H,NLO}^{\rm (re-scale)}$), re-scaled NLO contribution based on Eq.~(\ref{eq::nlo_fact2}) ($\sigma_{\rm H,NLO}^{\rm (diff)}$), and the approximation where the [2/2]-Pad\'e result is used below the top threshold and the infinite top mass approximation above ($\sigma_{\rm H,NLO}^{[2/2]}$). The last column gives the scale uncertainties for $\sigma_{\rm H,NLO}^{[2/2]}$ where $\mu_F=\mu_R$ is varied by $\mu_F/\mu_0\in[1/3,3]$. The NLO cross sections contain $\sigma_{\rm H,NLO}^{\rm (virt,red)}$.} \end{center} \end{table} Table \ref{tab:totalCS} shows the values for the total cross section at LO and for four possible approximations at NLO, see caption for details. The first three approximations treat the top quark as infinitely heavy, whereas the fourth one incorporates the heavy quark effects considered earlier in the form of a $[2/2]$ Pad\'e approximation. Note that we only consider finite top mass corrections for $\sqrt{s}<346~\text{GeV}$. For higher values of $\sqrt{s}$ the infinite top mass limit is applied.
One observes that the finite top mass corrections shift the total cross section upwards; however, the size is well within the scale uncertainties, which are shown for $\sigma_{\rm H,NLO}^{[2/2]}$ in the last column. Similar uncertainties are also obtained for the other approximations. The numerical results discussed in this section and in Section~\ref{sec::part} have been obtained with the help of the program {\tt ggzh} which can be downloaded from~\cite{ggzh}. A brief description of {\tt ggzh} can be found in the appendix. {\tt ggzh} can be used to reproduce the numerical results of Ref.~\cite{Altenkamp:2012sx}. \section{\label{sec::concl}Conclusions} The associated production of a Higgs and $Z$ boson is a promising channel in view of the determination of the Higgs boson couplings, in particular the Yukawa coupling to bottom quarks. We compute top quark mass effects for the loop-induced process $gg\to ZH$ at NLO in QCD by expanding the Feynman amplitudes in the limit of large top quark mass. Our leading term reproduces the results of Ref.~\cite{Altenkamp:2012sx}. It is not expected that the top quark suppressed terms provide a good approximation for large partonic center-of-mass energies. However, we can show that below the production threshold of two top quarks, say for $\sqrt{s}\lesssim350$~GeV, the $1/m_t$-expansion shows a good convergence at NLO. This is strongly supported by the good agreement between the re-scaled NLO approximation using the exact LO cross section and the $[2/2]$ Pad\'e approximation constructed from expansion terms up to $1/m_t^8$. Thus, the corrections computed in this paper provide a good approximation to the $m_{ZH}$ distributions below $\sqrt{s}\lesssim350$~GeV. This region covers about {\num 25\%} of the total cross section. Furthermore, the top mass corrections in this region constitute an important cross check once the exact calculation of the NLO corrections to $gg\to ZH$ is available.
The numerical results presented in this work can be reproduced with the program {\tt ggzh} which is publicly available from~\cite{ggzh}. \section*{Acknowledgements} We are thankful to Kirill Melnikov for enlightening discussions and to Lorenzo Tancredi for providing analytic results for the amplitude $Z\rightarrow ggg$ from Ref.~\cite{Gehrmann:2013vga} which we could compare to our results. We thank Robert Harlander for helping with the comparison to {\tt vh@nnlo}~\cite{Brein:2012ne}. This work is supported by the Deutsche Forschungsgemeinschaft through grant STE~945/2-1. \begin{appendix} \section{Brief description of {\tt ggzh}} Together with this paper we also publish the program {\tt ggzh} which can be downloaded from~\cite{ggzh}. {\tt ggzh} includes all contributions to the process $gg \to ZH$ which are discussed in this paper. {\tt ggzh} is written in {\tt C++}. Before compilation it is necessary to install the libraries CUBA~\cite{Hahn:2004fe}, {\tt LoopTools}~\cite{Hahn:1998yk,vanOldenborgh:1989wn} and {\tt gsl}~\cite{gsl}. The corresponding paths should be inserted in the file {\tt Makefile.local}. Afterwards, {\tt make} starts the compilation. The input file {\tt xsection.cfg} defines the channels which shall be considered. Furthermore, one has to decide whether the partonic or hadronic cross section is considered, which pdf set is used and whether the sum of the considered channels is computed or not. Thus, {\tt xsection.cfg} typically looks as follows \begin{verbatim} active channels: {LO_exact,LO_0} pdf set: PDF4LHC15_nlo_100_pdfas hadronic: true sum channels: false \end{verbatim} {\tt ggzh} outputs partonic cross sections in case \verb|hadronic: false| is chosen. In the sample file the exact LO cross section and the effective-theory result including $1/m_t^0$ terms is computed. Further available channels are \verb|LO_<i>| with $i=2,4,6,8$ for the $1/m_t^i$ contribution and \verb|pade22_LO| for the $[2/2]$ Pad\'e approximation of the LO cross section. 
The $1/m_t^i$ contribution to $\delta \sigma_{\rm NLO}^{\rm (approx)}$ is obtained by summing the channels \verb|NLO_phase2_<i>|, \verb|NLO_phase2eta_<i>| and \verb|NLO_phase3_<i>| ($i=0,2,4,6,8$) and $\delta \sigma_{\rm NLO}^{\rm (virt,red)}$ is implemented in \verb|NLO_reducible_exact|. Results based on the differential factorization of Eq.~(\ref{eq::nlo_fact2}) can be obtained via the channels \verb|NLO_differential_phase2| and \verb|NLO_differential_phase2eta| (remember that Eq.~(\ref{eq::nlo_fact2}) is only applied to two-particle phase space contributions). The parameter \verb|diff_order| in the input file {\tt params.cfg} specifies the expansion depth used for the LO and NLO expressions in~(\ref{eq::nlo_fact2}). The second input file {\tt params.cfg} contains the values for the various input parameters needed for the calculation. It overwrites the default values which are given in {\tt params.def} together with a brief description of the meaning. The package comes with template files which clarify the syntax. {\tt ggzh} is launched by simply calling the executable in the shell \begin{verbatim} > ./ggzh \end{verbatim} All input parameters are repeated in the output and the results for the individual channels are given in the form \begin{verbatim} Calculating hadronic cross-section for channel "LO_exact". Integrating (Vegas) ... Number of integrand evaluations: 1050000 Integration time: 49s. Per iteration: 0.04697ms Result [1/(GeV)^2]: 1.5875696750976581e-10 Error [1/(GeV)^2]: 7.5506640742330014e-13 Result [fbarn]: 61.816682911840118 Error [fbarn]: 0.29400725786852289 Relative error: 0.0047561150812284996 Chi^2 Probability: 0.18458145471778875 #points dropped: 0 Calculating hadronic cross-section for channel "LO_0". Integrating (Vegas) ... Number of integrand evaluations: 1050000 Integration time: 2s.
Per iteration: 0.002444ms Result [1/(GeV)^2]: 2.0665863953725464e-10 Error [1/(GeV)^2]: 1.3007066760509459e-13 Result [fbarn]: 80.468604254996833 Error [fbarn]: 0.050646830445289774 Relative error: 0.00062939864452967408 Chi^2 Probability: 6.004534622038346e-12 #points dropped: 0 \end{verbatim} Besides the total cross section it is also possible to introduce a cut on the invariant mass $m_{ZH}$ which is switched on with \verb|use_inv_mass_cutoff: 1| in the file {\tt params.cfg}. The numerical value for the cut is specified with \verb|inv_mass_cutoff: <m_ZH-value>|. With the help of \verb|use_mt_threshold: 1| one switches on the possibility to use the infinite top mass approximation above the value for $\sqrt{s}$ given by \verb|mt_threshold: <mtthr-value>|. {\tt ggzh} contains the option to vary $\mu_R$ and $\mu_F$ independently. Furthermore, it is possible to choose fixed scales (e.g. $\mu_R=M_H$ or $\mu_R=m_t$) or to identify the scales with the partonic center-of-mass energy. \end{appendix}
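For illustration, a {\tt params.cfg} fragment combining the switches documented above could read as follows (only keys mentioned in this appendix are used; the numerical values are examples, not recommendations):

```
use_inv_mass_cutoff: 1
inv_mass_cutoff: 346
use_mt_threshold: 1
mt_threshold: 346
diff_order: 8
```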
Borderline Long by A + A Cooren for Vertigo Bird is a pendant lamp designed to illuminate large areas, providing efficient but soft lighting. The lamp has two shells connected with Velcro for easy maintenance: neither a manual nor a screwdriver is required to change the light bulb. Together, the two shells form a perfect combination of soft, comfortable direct light and indirect light. The oblong form of the Borderline Long pendant lamp makes it well suited to illuminating large spaces and surfaces.
and the Moonee Valley Legal Service. experiencing relationship breakdown or family violence. To check your eligibility and see what we help with, click here. On 9 December 2018, Grace: West Families Against Domestic Violence are holding their annual event, 'I am her voice!'. Join Safe from Harm lawyers attending to support this forum focused on engaging with youth to raise awareness of ending violence against women with future generations. The Moonee Valley Family Violence forum will be held on 22 November, bringing together service sector workers for a discussion on collective responses to family violence. This year we will hear from keynote speaker Jocelyn Bignold, CEO of McAuley Community Services, followed by a panel discussion moderated by Anthony Kelly, CEO of Flemington Community Legal Centre. The 16 Days of Activism Against Gender-based Violence is approaching! Beginning on the International Day for the Elimination of Violence against Women (25 November) and ending on Human Rights Day (10 December), you can show your support by wearing orange and taking on the 16-day activist challenge. Come along to Bills+ Day on 20 June 2018 at the Flemington Community Centre where Safe from Harm lawyers will be providing free legal information to the community. Safe from Harm has teamed up with local service Grace: West Families Against Domestic Violence to deliver community legal education. Safe from Harm lawyers featured in an article for International Women's Day. Safe from Harm lawyers Nadine Bradilovich and Liz Bitar featured in a Law Institute of Victoria Journal article on the 'New Wave of School Lawyers.' The information on this website is provided as a general guide, and should not be treated as legal advice or as a substitute for legal advice. You should seek full and proper legal advice about your specific matter.
While due care has been taken to ensure the accuracy of the material on this website, Safe from Harm takes no responsibility for any errors contained herein.
An extension to easily create or update any NodeMaterial.

## Usage

### Online method

Call the method `Show` of the `BABYLON.NodeMaterial` class:

```
BABYLON.NodeMaterial.Show({hostElement: document.getElementById("host")});
```

This method will dynamically retrieve the library `nodeEditor.js`, download it and add it to the html page.

### Offline method

If you don't have access to the internet, the node editor should be imported manually in your HTML page:

```
<script src="babylon.nodeEditor.js" />
```
I totally forgot to mention this here. The gold version of the Hot Wheels DeLorean is out in stores, now. This is the second of three planned colours of the DeLorean for the 2010 Hot Wheels Basic Car Line. The first one is, of course, silver.
# A Feasible BFGS Interior Point Algorithm for Solving Strongly Convex Minimization Problems

Abstract: We propose a BFGS primal-dual interior point method for minimizing a convex function on a convex set defined by equality and inequality constraints. The algorithm generates feasible iterates and consists in computing approximate solutions of the optimality conditions perturbed by a sequence of positive parameters $\mu$ converging to zero. We prove that it converges q-superlinearly for each fixed $\mu$ and that it is globally convergent when $\mu\to0$.

Citation: Paul Armand, Jean Charles Gilbert, Sophie Jan-Jégou. A Feasible BFGS Interior Point Algorithm for Solving Strongly Convex Minimization Problems. [Research Report] RR-3500, INRIA. 1998. HAL Id: inria-00073185, version 1.
Sharon Vonne Stone (born March 10, 1958) is an American actress, producer, and former fashion model. She is the recipient of a Primetime Emmy Award and a Golden Globe Award, as well as having received nominations for an Academy Award and two Screen Actors Guild Awards. After modelling in television commercials and print advertisements, she made her film debut as an extra in Woody Allen's comedy-drama Stardust Memories (1980). Her first speaking part was in Wes Craven's horror film Deadly Blessing (1981), and throughout the 1980s, Stone went on to appear in films such as Irreconcilable Differences (1984), King Solomon's Mines (1985), Cold Steel (1987), Action Jackson (1988), and Above the Law (1988). She found mainstream prominence with her part in Paul Verhoeven's science fiction action film Total Recall (1990). Stone became a sex symbol and rose to international recognition when she starred as Catherine Tramell in another Verhoeven film, the erotic thriller Basic Instinct (1992), for which she earned her first Golden Globe Award nomination for Best Actress in a Motion Picture – Drama. She received further critical acclaim with her performance in Martin Scorsese's epic crime drama Casino (1995), garnering the Golden Globe Award and an Academy Award nomination for Best Actress. Stone received two more Golden Globe Award nominations for her roles in The Mighty (1998) and The Muse (1999). Her other notable film roles include Sliver (1993), The Specialist (1994), The Quick and the Dead (1995), Last Dance (1996), Sphere (1998), Catwoman (2004), Broken Flowers (2005), Alpha Dog (2006), Basic Instinct 2 (2006), Bobby (2006), Lovelace (2013), Fading Gigolo (2013), and The Disaster Artist (2017). In 1995, she received a star on the Hollywood Walk of Fame, and in 2005, she was named Officer of the Order of Arts and Letters in France. 
On television, Stone has had notable performances in the miniseries War and Remembrance (1987) and the HBO television film If These Walls Could Talk 2 (2000). She made guest appearances in The Practice (2004), winning the Primetime Emmy Award for Outstanding Guest Actress in a Drama Series, and in Law & Order: Special Victims Unit (2010). Stone has also appeared in the series Agent X (2015), Mosaic (2017), and The New Pope (2019). Description above from the Wikipedia article Sharon Stone, licensed under CC-BY-SA, full list of contributors on Wikipedia. Place of Birth: Meadville, Pennsylvania, USA Also Known As: Sahron Stone, 샤론 스톤, Σάρον Στόουν, Шэрон Стоун Movies List of Sharon Stone The Year of Getting to Know Us (2008) The Mighty (1998) Scissors (1991) Last Dance (1996) A Different Loyalty (2004) Blood and Sand (1989) Tears in the Rain (1988) Border Run (2012) Gods Behaving Badly (2013) Sphere (1998) The Specialist (1994) Picking Up the Pieces (2000) Beautiful Joe (2000) The Muse (1999) Intersection (1994) Largo Winch II (2011) Calendar Girl Murders (1984) The Vegas Strip War (1984) A Golden Boy (2014) Femme (2013) Harlow: The Blonde Bombshell (1993) Into the Night: Portraits of Life and Death (2017) Sunny (2022) Forever Hollywood (1999) Basic Instinct: Sex, Death & Stone (2020) FEMME - Women healing the World (2022) Journey Into Buddhism: Prajna Earth (2007) Broken Flowers (2005) $5 a Day (2008) Images de femmes ou le corset social (2011) Year of the Gun (1991) Where Sleeping Dogs Lie (1991) Simpatico (1999) Streets of Blood (2009) Beyond the Stars (1989) Dior and I (2015) Running Wild (2017) What About Love (2022) Treasury of Children's Stories (1996) Basic Insect (1999) Action Jackson (1988) Police Academy 4: Citizens on Patrol (1987) When a Man Falls (2007) Above the Law (1988) Badlands 2005 (1988) Jewel's Catch One (2017) House of Cardin (2019) That Click (2020) AFI Life Achievement Award: A Tribute to Martin Scorsese (1997) He Said, She Said (1991) No 
Subtitles Necessary: Laszlo & Vilmos (2009) Fading Gigolo (2013) Irreconcilable Differences (1984) Lovelace (2013) Sylvester Stallone Bio. (2005) Mothers and Daughters (2016) Life on the Line (2015) The Sissy Duckling (1999) Mindfulness: Be Happy Now (2015) No Love in the City 3 (2013) Off the Menu: The Last Days of Chasen's (1998) Sunset Strip (2012) The Good, The Bad, and the Beautiful (1996) Hollywood's Greatest Villains (2005) Legends in Light: The Photography of George Hurrell (1995) Harry Benson: Shoot First (2016) CyberWorld (2000) If These Walls Could Talk 2 (2000) The Invocation (2010) This Changes Everything (2019) Rolling Thunder Revue: A Bob Dylan Story by Martin Scorsese (2019) Searching for Debra Winger (2002) The Please Watch the Jon Lovitz Special (1992) Cannes: All Access (2007) The Celluloid Closet (1996) The Disaster Artist (2017) Here Today (2021) Britney Spears: In The Zone (2003) Beyond Boundaries: The Harvey Weinstein Scandal (2018) Stardust Memories (1980) Bobby (2006) Alpha Dog (2006) Jiminy Glick in Lalawood (2004) The Laundromat (2019) D'un film à l'autre (2011) Big Guns Talk: The Story of the Western (1997) Last Action Hero (1993) DC Villains - Catwoman: The Feline Femme Fatale (2021) Killers Kill, Dead Men Die (2007) La Traversée du désir (2009) Catwalk (1995) Bolero: Dance of Life (1981)
'use strict';

const Watcher = require('./watcher').Watcher;

// Device watcher restricted to Bluetooth LE devices.
// The GUID below is the Windows device-enumeration protocol identifier for
// Bluetooth LE; "System.Devices.Aep.CanPair" is requested as an additional
// property so each discovered device reports whether it can be paired.
class BluetoothLEWatcher extends Watcher {
    constructor() {
        super('{bb7bb05e-5972-42b5-94fc-76eaa7084d49}', [
            "System.Devices.Aep.CanPair",
        ]);
    }
}

module.exports = BluetoothLEWatcher;
Clinotanypus claripennis is a species of two-winged fly (Diptera) described by Jean-Jacques Kieffer in 1918. Clinotanypus claripennis belongs to the genus Clinotanypus and the family Chironomidae (the non-biting midges). No subspecies are listed in the Catalogue of Life.

Sources
Q: Library to send SMS messages via IP connected device

I'm looking for a library (win32) to be used in a Delphi project that will enable me to send and receive SMS (text messages) via GSM modem devices connected via Ethernet (listening on an IP address). All the libraries I found until now support devices connected via COM/USB/bluetooth/InfraRed, but none of them support a direct connection via IP (using a COM to IP redirector or virtual COM port is not an option for us). Does anyone know of such a library (or a good classic COM/USB/... AT command lib (GSM07.05 GSM07.07 compatible) that includes full source code so that we can extend it ourselves for straight sending of AT commands via telnet over IP)? Thanks Koen.

A: I'm not aware of a library to do this, but you can check out the source of Kannel. There is a rawtcp or telnet modem type that might help you. Since AT commands are just text, you should be able to have your program establish a TCP session and then issue the commands directly. The exact commands your modem uses may be manufacturer or even model specific, which may be why there are no libraries for this, but they should be listed in the documentation.

A: I think the answer will be 'no'. I can't think of any GSM device that is added to a PC via IP; they're always serial comms, so really you're looking for a library that takes ethernet commands and writes them to the device as serial (effectively a wrapper). I doubt you'll find this: all such wrappers will be written to expose higher-level programming language functionality (eg a C# class or a C lib), and the calls you make will be via those language constructs - not IP. As a result, you'll be best off with a Serial->IP converter. They work remarkably well (we use them to connect a serial device to a computer over distances too long for serial cables) and they're completely transparent.
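The raw-TCP approach from the first answer can be sketched as follows (Python for illustration only; the host and port are placeholders, and a fake "modem" thread that always answers OK stands in for a real device so the snippet is self-contained):

```python
import socket
import threading

def send_at_command(host, port, command, timeout=5.0):
    """Open a raw TCP session to the device and issue one AT command."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall((command + "\r\n").encode("ascii"))
        return sock.recv(1024).decode("ascii").strip()

# Stand-in "modem": accepts one connection and replies OK to any command.
listener = socket.socket()
listener.bind(("127.0.0.1", 0))
listener.listen(1)

def fake_modem():
    conn, _ = listener.accept()
    with conn:
        conn.recv(1024)
        conn.sendall(b"OK\r\n")

threading.Thread(target=fake_modem, daemon=True).start()

reply = send_at_command("127.0.0.1", listener.getsockname()[1], "AT")
print(reply)  # → OK
```

In a real setup the host/port would be the modem's address, and the response handling would need to cope with multi-line replies and ERROR codes.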
Q: How to put JSON data into a store (Dojo)

I have the following JSON data from a URL (http://a.b.c.d:8080/myWS/services/GetIcDetail/IcView):

[
  { "Name": "A", "order": "1", "dir": "In", "pName": "1/2" },
  { "Name": "A", "order": "2", "dir": "Out", "pName": "1/2" },
  { "Name": "CWR8-88", "order": "4", "dir": "Out", "pName": "1/3/OMD" },
  { "Name": "11DPM12", "order": "5", "dir": "In", "pName": "1/5/L1" }
]

How can I put this JSON data into a store in Dojo?

EDIT: The JSON data is formatted and validated.

A: To utilize the module, require it in and create a new instance:

require(["dojo/store/JsonRest"], function(JsonRest){
    var store = new JsonRest({
        target: "/some/resource"
    });
    // Store an object identified by identity
    store.put({ foo: "bar" }, { id: 3 });
    // Remove an object by ID
    store.remove(3);
});
Crumps' Naturals Sweet Potato Strips with Cinnamon 5.6 oz. Sweet Potatoes blended with a pinch of dried Citrus Pulp and Cinnamon. These strips are tough and chewy and break off into smaller pieces making them well suited for all breeds. Ingredients: sweet potato, dried citrus pulp, coconut oil and cinnamon.
Howe Ridge Fire causes more evacuations in Glacier National Park

Above: CL-215 water scooping air tankers working the Howe Ridge Fire August 16, 2018. InciWeb photo.

During the last four days the Howe Ridge Fire has spread almost three miles toward the southwest, and also moved south along the shore of Lake McDonald where it is 7 miles north of West Glacier, Montana. On the north end it is less than half a mile west of the Going-to-the-Sun Road. There are 134 personnel assigned to the 7,835-acre blaze. That is a small number considering its size and the fact that the fire is causing evacuations, has destroyed 27 structures, and is threatening numerous others. Fire officials have not been able to acquire the number of firefighting resources that they need. This is due to reductions in the budgets of the federal land management agencies and competition from the other 55 large wildfires burning across the western states, many of which are also making do with inadequate staffing on their fires.

The red lines represent the perimeter of the Howe Ridge Fire at 12:30 a.m. MDT August 19. The white line was the perimeter on August 15.

Below is a video posted to YouTube August 16 by Justin Bilton. He described it like this: We were camped 2.5 up the North Macdonald Trail when we saw the then small Howe Ridge Fire began to spread from 5 acres to over 2000 in a matter of hours. We hiked back to the car to get out where it was parked at the end of a dead end road. We had just driven this road (safely) 3 hours before to get in and it was our only way out, apart from trying to stay ahead of the fire on foot. After we were stopped by the downed tree, we reversed back through all of this and were rescued by two park employees on a boat. They saved our lives. We were not joyriding through a wildfire.
Very dry weather and record-setting high temperatures in the Glacier National Park area in the last several weeks have dried out the fuels and are causing the fire to spread much more rapidly than is typical for the area. Usually firefighters have days to think about rates of spread and to run fire behavior computer models, but this blaze is shortening those time frames making it difficult, for example, to evacuate the west side of Lake McDonald as quickly as needed. A weather system will bring slightly cooler temperatures, but the frontal passage will increase winds and cause shifts in wind directions. This could significantly affect fire behavior on the southern and western flanks of the fire. Saturday smoke over the fire prevented aircraft from dropping water. Crews are working around structures in the Fish Creek Campground area and along the Inside North Fork Road to reduce fuels and to set up sprinkler systems. Structure protection efforts continue along the north end of Lake McDonald using sprinkler systems around the remaining structures on North Lake McDonald Road. Personnel are installing hoses and sprinklers to minimize potential fire spread towards the Going-to-the-Sun Road. Fire managers will continue to proactively plan for protection of other areas as the fire progresses. The Fish Creek Campground area is now under an evacuation order. Evacuation orders remain in place for the North Lake McDonald road (private residences and the Lake McDonald Ranger Station), Lake McDonald Lodge area (all businesses, employees, and private residences), private residences along the Going-to-the-Sun Road, and Sprague Creek and Avalanche Campgrounds. 
Author: Bill Gabbert. Posted on August 19, 2018 (updated August 20, 2018). Tags: Howe Ridge Fire, Montana.

Howe Ridge Fire burns thousands of acres in Glacier National Park

The fire is on the north end of Lake McDonald north of West Glacier, Montana.

Above: The Howe Ridge Fire at the north end of Lake McDonald, August 12, 2018. NPS photo.

(Originally published at noon August 16, 2018)

The Howe Ridge Fire in Glacier National Park has burned 3,500 acres at the north end of Lake McDonald 8 miles north of West Glacier, Montana. It started August 11 from a lightning strike and is being "managed", or herded around, rather than being fully suppressed. The 78 personnel assigned to the fire are protecting structures and utilizing water drops from air tankers and helicopters to slow the spread where needed. However, on Wednesday fixed wing aircraft were grounded due to heavy smoke.

Structural protection crews worked Wednesday to reduce risk to buildings at the head of Lake McDonald and Kelly's Camp. The time-lapse video of the fire below is very impressive:

The Southwest Area Type 1 Incident Management Team, under the command of John Pierson, is onsite and will be taking over management of the fire at 6:00 a.m. Friday. Mr. Pierson's team is also managing two other fires, the Paola Ridge and Coal Ridge fires.

Map showing the perimeter of the Howe Ridge Fire based on a mapping flight at 10:30 p.m. MDT August 15, 2018.

Area closures and evacuations remain in place. The Going-to-the-Sun Road remains open between St. Mary and Logan Pass. It is closed between the foot of Lake McDonald (near Apgar) and Logan Pass. Apgar Village, Apgar Campground and Fish Creek campground remain open. Most other areas of the park are open.

Howe Ridge Fire, August 12, 2018. NPS photo.

Water scooping air tankers work the Howe Ridge Fire in Glacier National Park. Undated NPS photo.
Author: Bill Gabbert. Posted on August 16, 2018. Tags: Howe Ridge Fire, Montana.
The Mechanization and Modernization (M&M) Agreement of 1960 was an agreement reached between the California longshoremen's unions (the International Longshore and Warehouse Union (ILWU) and the International Longshoremen's Association (ILA)) and the Pacific Maritime Association. This agreement applied to workers on the Pacific Coast of the United States, the West Coast of Canada, and Hawaii. The original agreement was contracted for five years and would be in effect until July 1, 1966.

Origins

Prior to the 1940s, the majority of cargo movement in ports was done by hand and required a large number of skilled workers. Some new technologies were introduced to aid in the movement of shipments, such as rope slings, dollies, forklifts, and even cranes that helped longshoremen take large loads off of ships. However, longshoremen were still needed, as they were skilled at maximizing the space in each container. The methods of cargo movement differed greatly between each port on the Pacific Coast. Depending on the size of the cargo and what was being shipped, many ports required extensive manual labor by dock workers, while others required the use of specialized mechanical cranes to hoist large truck containers off of ships. After World War II, the demand for a more efficient way of loading and unloading cargo brought new technology to ports that would require fewer workers to move shipments.

Provisions

Harry Bridges, then leader of the ILWU, designed the Mechanization and Modernization Agreement, which was signed on October 18, 1960. It distinguished between three classes of longshoremen workers. Depending on the level of worker, each worker was guaranteed a certain set of benefits.

Worker classes

"A" men: These men were fully registered longshoremen and had ILWU membership. They held preference for dispatch in ports and could claim full benefits as mentioned in the M&M Agreement.

"B" men: These men were partially registered to the ILWU.
Although they couldn't claim benefits as mentioned in the M&M Agreement, they could claim benefits mentioned in contracts, such as welfare and vacation benefits. If the "A" men list were to be exhausted, "B" men would be the next group to gain employment preference.

Casual: They are not recognized as being attached to any aspect of the longshoremen industry and would only work peak days when the "A" men and "B" men lists were exhausted. These men could not claim any benefits as mentioned in the contracts or in the M&M Agreement.

Agreements and benefits

The M&M Agreement guaranteed employment security for the basic workforce, which were registered union members ("A" men). If there were to be a notable decline in employment due to modernization, there would be a decline in the employment of "B" men and Casual workers in order to prevent the loss of employment to the basic workforce. However, the ILWU asked that the Agreement also shorten weekly work shifts from 40 hours to 35 hours per week in order to accommodate the basic workforce and maintain equal wages among workers. Employers would be able to introduce new technology and devices that would improve the port's productivity and efficiency, and reduce the amount of labor needed.

Criticisms

Although the M&M Agreement provided employment security for ILWU members, it failed to prevent the Longshoremen's Strike of 1971. The M&M Agreement failed to fully adapt to the introduction of technology and containerization at ports. The introduction of technology greatly reduced the need for labor by up to 90%, and employers preferred to employ permanent workers ("A" men) rather than others, thus creating an imbalance between workers and wages. Also, dock jobs were being placed outside of port areas and jobs were now being disputed between different unions. Meanwhile, employers of longshoremen continued to profit from the reduction of labor.
In July 1971, 12,000 longshoremen walked out on California ports; however, it was deemed a failure since the strike failed to achieve significant economic harm to employers. Without support from Harry Bridges, the ILWU leader, a Los Angeles court ordered all workers back to work. Although Bridges attempted to reconcile with workers by creating a new contract, the contract failed to live up to expectations, and "B" men and Casual workers were unemployed.

Significance and legacy

Since the Mechanization and Modernization Agreement, the ILWU has attempted to bring other unorganized, nonunion waterfront occupations into the union as a means to counteract new technology at ports. The ILWU has since expanded its jurisdiction so as to maintain high employment for its members. Although organization of these occupations was met with employer resistance, the ILWU managed to absorb new jurisdiction. The Mechanization and Modernization Agreement of 1960 provided job security to members, but did not extend its benefits to those outside of the ILWU.
Q: Why does this expression resolve to zero?

int a = 1/2 == 0.25 * 2;

I'm not sure why I'm not seeing this. Am I missing something with precedence?

A: Let's dig in:

int a = 1/2 == 0.25 * 2;

First, 1/2 == 0 (of type int), and 0.25 * 2 == 0.5 (of type double). So does 0 equal 0.5? No. So a receives the value of 0 (FALSE).
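For comparison, the two sub-expressions can be checked directly (a Python sketch, where `//` mimics C's integer division of two ints):

```python
print(1 // 2)                  # C's 1/2: integer division -> 0
print(0.25 * 2)                # floating point -> 0.5
print((1 // 2) == 0.25 * 2)    # -> False, so the C int a gets 0
```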
Q: Address Book Unique Phone Numbers

Is there any library out there that can take Address Book contacts with phone numbers and convert them to the unique phone number with international country calling code? This would involve:

* Converting local phone numbers to international, once given a particular country code.
* Recognising and maintaining existing international numbers.

This is to achieve something similar to what WhatsApp does, by using full phone numbers as usernames and converting users' contacts to full phone numbers to check which of your friends are on WhatsApp.

I am about to attempt it myself, but I feel there will be a lot of work in catering for the various different phone number configurations out there, so any head starts would be useful.
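The two requirements above can be sketched naively (Python; the prefix rules here are simplified illustrations, since real national numbering plans vary widely, so a dedicated phone-number library is advisable in practice):

```python
def to_international(number: str, country_code: str) -> str:
    """Normalize a phone number to +<country><national> form (naive sketch)."""
    digits = "".join(ch for ch in number if ch.isdigit() or ch == "+")
    if digits.startswith("+"):      # already international: keep as-is
        return digits
    if digits.startswith("00"):     # 00 is a common international dial prefix
        return "+" + digits[2:]
    if digits.startswith("0"):      # strip trunk prefix, prepend country code
        return "+" + country_code + digits[1:]
    return "+" + country_code + digits

print(to_international("087 123 4567", "353"))      # → +353871234567
print(to_international("+44 20 7946 0018", "353"))  # → +442079460018
```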
Ernst Peter Burger (September 1, 1906 – October 9, 1975) was a German-American saboteur for Germany during World War II who defected to the United States. A naturalized citizen of the United States who returned to Germany during the Great Depression, Burger was recruited along with seven others by the Abwehr for Operation Pastorius, which sought to sabotage targets in the United States in 1942. However, after being deployed, he and fellow saboteur George John Dasch defected and betrayed the other six agents involved to the Federal Bureau of Investigation. After some litigation, a military tribunal sentenced all eight agents to death, but President Franklin D. Roosevelt commuted Burger's sentence to life in prison. In 1948, President Harry S. Truman granted Burger executive clemency conditional on his deportation to the American occupation zone in Germany, where he died in 1975.

Biography

Born in Augsburg, Burger was a machinist by trade. Burger was a member of the Nazi Party from the age of 17. In 1923, he participated in the Beer Hall Putsch. Burger immigrated to America in 1927 and became a U.S. citizen in 1933. He had lived in the United States for some years, even serving in the Michigan and Wisconsin Army National Guard. During the Depression, Burger returned to Germany, where he rejoined the Nazi Party and became an aide-de-camp to Ernst Roehm, the chief of the Nazi storm troopers. Later, he wrote a paper critical of the Gestapo—a move that earned him seventeen months in a concentration camp. In 1941, Burger was released and conscripted into the Wehrmacht. He served at a POW camp in Berlin, where he guarded Yugoslav and British prisoners. Despite his history as a survivor of a Nazi internment camp and harassment of his wife by Nazi Party members, Burger was recruited by the Abwehr, Nazi Germany's intelligence organization. He took part in Operation Pastorius, a plan by which eight German saboteurs were to be transported by U-boat to the United States.
Burger and the others landed with the intention of damaging United States economic targets. Apprehension, trial, and deportation George John Dasch, another German agent, called Burger into their upper-story hotel room and opened a window, saying they would talk, and if they disagreed, "only one of us will walk out that door—the other will fly out this window." Dasch told him he had no intention of going through with the mission, hated Nazism, and planned to report the plot to the FBI. Burger agreed to defect to the United States immediately. Besides Burger, none of the other German agents knew they were betrayed. Over the next two weeks, Burger and the other six were arrested. FBI Director J. Edgar Hoover made no mention that Dasch had turned himself in, and claimed credit for the FBI for cracking the spy ring. The saboteurs were tried and convicted of espionage. All were sentenced to execution by electrocution; however, Burger's sentence was commuted by President Franklin D. Roosevelt to life in prison and Dasch's to thirty years because of their cooperation. In 1948, President Harry S. Truman granted executive clemency to Dasch and Burger on the condition they be deported to the American occupation zone in Germany. They were not welcomed back in Germany, as they were regarded as traitors who had caused the death of their comrades. Although they had been promised pardons by J. Edgar Hoover in exchange for their cooperation, both men died without ever receiving them. 
External links

FBI Famous Cases
Operation Pastorius entry on German Wikipedia
A trapeze is a short horizontal bar hung by ropes or metal straps from a ceiling support. It is an aerial apparatus commonly found in circus performances. Trapeze acts may be static, spinning (rigged from a single point), swinging or flying, and may be performed solo, double, triple or as a group act. The name of the apparatus reflects the trapezoid shape made by the horizontal bar, ropes and ceiling support.

History

The art of trapeze performance is reported to have been developed by Jules Léotard, a young French acrobat and aerialist, in Toulouse in the mid-1800s. He is said to have used his father's swimming pool to practice. However, the name "trapeze" can be found in books dating as far back as twenty years earlier, before Léotard was born. One such example is George Roland's "An Introductory Course of Modern Gymnastic Exercises", published in 1832. Roland proposes the idea that the trapeze might owe its origin to Colonel Amoros, but ultimately deems the question of origin "unimportant to the present subject". The name was applied in French (trapèze) from the resemblance of the apparatus to a trapezium or irregular four-sided figure.

Types of trapeze

Static trapeze refers to a trapeze act in which the performer moves around the bar and ropes, performing a wide range of movements including balances, drops, and hangs, while the bar itself stays generally static. The difficulty on a static trapeze is making every move look effortless. It is like dance, in that most people of a reasonable level of strength can get onto the trapeze bar for the first time and perform some basic tricks, but an experienced artist will do them with much more grace and style.

Swinging trapeze (or swinging single trapeze) refers to an act performed on a trapeze swinging in a forward–backward motion. The performer builds up swing from a still position, and uses the momentum of the swing to execute the tricks.
Usually tricks on a swinging trapeze are thrown on the peaks of the swing and involve dynamic movements that require precise timing. Most of the tricks begin with the performer sitting or standing on the bar and end with the performer catching the bar in his/her hands or in an ankle hang (hanging by the ankles by bracing them between the rope and the bar). This act requires a great deal of strength, grace, and flexibility. The trapeze bar is weighted and often has cable inside the supporting ropes for extra strength to withstand the dynamic forces of the swing. Flying trapeze refers to a trapeze act where a performer, or "flyer," grabs the trapeze bar and jumps off a high platform, or pedestal board, so that gravity creates the swing. The swing's parts are the "force out" (sometimes called the "cast out") at the far end of the first swing, the beat back and the rise (also known as "seven") as the performer swings back above the pedestal board, and then the trick is thrown at the far end of the second swing. The performer often releases the bar and is caught by another performer, the "catcher," who hangs by their knees on another trapeze, or sometimes on a cradle, which can be either stationary or also swinging. People of any size are able to execute basic trapeze maneuvers. Flying trapeze is generally done over a net, or occasionally over water. However, some smaller rigs, usually created for teaching purposes, use mats instead of a net. In the UK, many outdoor education centres offer an activity known as 'leap of faith'. This activity invites participants to climb to the top of a narrow pole and jump, arms outstretched, to grab a trapeze bar. Similar to the flying trapeze, gravity creates the swing. In this type of activity, participants are attached via rope and harness and an added challenge to get your legs over the trapeze can be included. 
Washington trapeze (also known as head trapeze or heavy trapeze) refers to a variation on static and swinging trapeze where the aerialist performs various headstand skills on the bar, which is typically much heavier than a normal trapeze bar and has a small (about 4-inch round) headstand platform on it. The trapeze is supported by wire cables rather than ropes, and the apparatus will often be lifted and lowered during the act. Dance trapeze (also known as single-point trapeze) refers to a trapeze used by many modern dance companies in aerial dance. The ropes of the trapeze are often both attached to a single swivel, allowing the trapeze to spin in either small or large circles. Double trapeze (also known as the French trapeze) is a variation on the static trapeze, and features two performers working together on the same trapeze to perform figures and bear each other's weight. It can also be performed swinging, in which case the act is called swinging double trapeze. Multiple trapeze refers to a number of different shapes and sizes of trapeze, including double trapeze, triple trapeze and larger multiples designed for use by multiple simultaneous flyers. Shaped trapezes are apparatuses that can take virtually any shape imaginable. Duplex trapeze refers to any trapeze with two layers of hand bars, one on top of the other, such that acrobats can jump from the upper bar and land (or be caught by a catcher) on the lower. Further reading Sharon McCutcheon, Geoff Perrem. Circus in Schools Handbook. Tarook Publishing, 2004. () Hovey Burgess, Judy Finelli. Circus Techniques. Brian Dube, 1989. () Carrie Heller. Aerial Circus Training and Safety Manual. National Writers Press, 2004. () Jayne C. Bernasconi and Nancy E. Smith. Aerial Dance. United States: Human Kinetics, 2008. () View at Google Books Elena Zanzu, M.A. Il Trapezio Oscillante: Storie di Circo nell'Aria. (The Swinging Trapeze: Histories of the Circus in the Air.) Bologna University, Italy, 2004–2005. 
Language: Italian. References External links European Federation of Professional Circus Schools (FEDEC) Desiree Belmarez. "Trapeze Tests Grace and Style." Denver Post. August 24, 2007. The physics of the flying trapeze Bloomington once capital of 'aerial kingdom' – Pantagraph (Bloomington, Illinois newspaper) Fred and Harry Green – "The Flying LaVans" – McLean County Museum of History Circus equipment
\section{Approach} \label{sec:approach} In this section, we describe our approach for discovering executable routine specifications from User Interaction (UI) logs. We adhere to the RPM pipeline proposed by Leno et al.~\cite{lenobise20}, which we implemented in five macro steps (see Figure~\ref{fig:approach}): i) \emph{preprocessing and normalization}; ii) \emph{segmentation}; iii) \emph{candidate routine identification}; iv) \emph{automatability assessment}; v) \emph{routines aggregation}. \begin{figure*}[htb] \centering \includegraphics[scale=0.6]{Approach.pdf} \caption{Outline of the proposed approach} \label{fig:approach} \end{figure*} Our approach takes as input a UI log, which is a chronologically ordered sequence of UIs between a worker and computer-based applications. In this paper, we assume that the applications used by the worker are either spreadsheet management applications or web browser applications. A UI log is usually recorded during the execution of the worker's daily tasks using specialized logging tools, for example, the \emph{Action Logger} tool~\cite{DBLP:conf/bpm/LenoPRDM19}. An example of a UI log is provided in Table~\ref{tab:uiLog}. Each row of Table~\ref{tab:uiLog} captures one UI (e.g., clicking a button or copying the content of a cell). Each UI is characterized by a \emph{timestamp}, a \emph{type}, and a set of \emph{parameters}, or \emph{payload} (e.g., application, button's label and value of a field). The payload of a UI is not standardized, and depends on the UI type and application. Consequently, the UIs recorded in the same log may have different payloads. For example, the payload of UIs performed within a spreadsheet contains information regarding the spreadsheet name and the location of the target cell (e.g., cell row and column). 
In contrast, the payload of the UIs performed in a web browser contains information regarding the webpage URL, the name and identifier of the UI's target HTML element and its value (if any); -- see Table~\ref{tab:uiLog} rows 1 and 2. \input{tables-complete-uiLog-compact} \newpage Our approach analyzes the log to identify and output a collection of \emph{executable routine specifications}. Each routine specification is a pair ($c$, $\Lambda$), where $c$ is a sequence of UIs, or a \emph{candidate routine}, and $\Lambda$ is a set of \emph{data transformation steps}. Each \emph{data transformation step} is a triplet that specifies: i) variables from which the data was read, ii) variables to which the data was written, and iii) a function capturing the data transformation (if any occurs). Such routine specifications can be compiled into software bots that can be deployed on a tool like UiPath,~\footnote{A commercial tool available at www.uipath.com} which would be able to automatically replicate the routine. In the following, we describe step-by-step how we generate a collection of executable routine specifications from an input UI log. \subsection{Preprocessing and Normalization} \label{sec:preprocessing} Before diving into the details of this step, we formally define the concepts of a \emph{user interaction} and \emph{user interaction log}, which we will refer to throughout this and the following sections. \begin{definition}[\textbf{User interaction (UI)}] A \emph{user interaction (UI)} is a tuple $u = (t, \tau, P_{\tau}, Z, \phi)$, where: $t$ is a timestamp; $\tau$ is a UI type; $P_{\tau}$ is a set of parameters, or \emph{payload}; $Z$ is a set of parameter values; and $\phi : P_{\tau} \rightarrow Z$ is a value assignment function. \end{definition} Table~\ref{tab:uiParam} shows UIs and their associated payloads recorded by the Action Logger tool~\cite{DBLP:conf/bpm/LenoPRDM19}. 
The UIs are logically grouped, based on their type, into three groups: \emph{navigation}; \emph{read}; and \emph{write} UIs. We assume that every UI is an \emph{instantiation} of one of the UI types from Table~\ref{tab:uiParam}, with every parameter assigned a specific value. \begin{definition}[\textbf{User interaction log}] A user interaction log $\Sigma$ is a sequence of UIs $\Sigma = \langle u_1, u_2, \dots, u_n \rangle$, ordered by their timestamps, i.e., $u_{i\mid t} < u_{j\mid t}$ for any $i,j$ such that $1 \leq i < j \leq n$. \end{definition} \input{tables-uiParameters} Ideally, UIs recorded in a log should only relate to the execution of the task(s) of interest. However, in practice, a log often also contains UIs that do not contribute to completing the recorded task(s). We can consider such UIs to be \emph{noise}. Examples of noise UIs include a worker browsing the web (e.g., social networking) while executing a task that does not require it, or a worker committing mistakes (e.g., filling a text field with an incorrect value or copying a wrong cell of a spreadsheet). While we cannot detect the former kind of noise without a context-aware noise filter, we can identify the latter type of noise. Given that noise in a log may negatively affect the segmentation step, we attempt to remove it. Specifically, the filter we implemented removes UIs whose effects are overwritten by subsequent UIs, and certain navigation UIs that a software robot would not need to replicate. To identify and remove such UIs, we rely on three search-and-replace rules defined as regular expressions that operate as follows. \begin{itemize} \item[1.] Remove UIs of type \emph{select cell}, \emph{select range}, \emph{select field} (e.g., Table~\ref{tab:uiLog}, rows 2, 4, 7); \item[2.] Remove UIs of type \emph{copy} that are not eventually followed by a UI of type \emph{paste} before another UI of type \emph{copy} occurs (e.g., Table~\ref{tab:uiLog}, row 42); \item[3.]
Remove UIs of type \emph{edit cell}, \emph{edit range}, and \emph{edit field} that are followed by another UI of the same type that targets the same cell or field and overwrites its content before a UI of type \emph{copy} occurs (e.g., Table~\ref{tab:uiLog}, row 22). \end{itemize} We note that, given an unsegmented log, it is impossible to apply the third rule straightforwardly, as removing the first UI of type \emph{edit} (considered redundant) may be an error if the second UI of type \emph{edit} belongs to a successive task execution. Therefore, we postpone the application of the third rule until after the segmentation step. The filtering rules are applied recursively on the log until no more UIs are removed and the log is assumed to be free of \emph{detectable} noise. Devising and applying more sophisticated noise filtering algorithms would probably benefit the approach presented in this study. However, the design of such algorithms is outside the scope of this paper, and we leave it as possible future work. After filtering the log, the vast majority of UIs are unique because they differ in their payloads. Note that even the UIs capturing the same action within the same task execution (or different task executions) would appear different. To discover each task execution recorded in the log, we need to detect all the UIs that, despite having different payloads, correspond to the same action within the same or different task execution(s). Given a UI, its payload can be divided into \emph{data parameters} and \emph{context parameters}. The former store the data values used during the execution of tasks, e.g., the value of text fields or copied content. Consequently, \emph{data parameters} usually have different values in different task executions. In contrast, the latter capture the context in which UIs were performed, e.g., the application and the location within the application.
Therefore, \emph{context parameters} of the same UI within a task are likely to have the same values across different task executions. For example, the payload of a UI of type \emph{copy cell} has the following parameters (see also Table~\ref{tab:uiParam}): \emph{workbook name} (the Excel file name); \emph{worksheet name} (within the Excel file); \emph{cell column} (i.e., the column of the cell in the worksheet that was selected for the UI); \emph{cell row} (i.e., the row of the cell in the worksheet that was selected for the UI); \emph{value} (i.e., current value of the cell selected for the UI); \emph{copied content} (the content copied as the result of the UI). Here, \emph{workbook name}, \emph{worksheet name}, \emph{cell column/row} are \emph{context parameters}, while \emph{copied content} and \emph{value} are \emph{data parameters}. Different context parameters characterize different UI types. For example, a UI of type \emph{click button} performed in a web browser has only these context parameters: \emph{URL}; \emph{name} (i.e., the label of the button); \emph{ID} (of the button, as an element in the HTML page); and \emph{type}. Often, context parameters are determined by the type of UI. To reduce the chance of possible automated misinterpretations, we allow the user to configure the context parameters of various UI types manually. To segment an input UI log, we rely on the context parameters of the UIs. We call a UI whose payload has been reduced to its context parameters a \emph{normalized UI}. \begin{definition}[\textbf{Normalized UI}]\label{def:nui} Given a UI $u = (t, \tau, P_{\tau}, Z, \phi)$, the UI $\bar{u} = (t, \tau, \bar{P_{\tau}}, \bar{Z}, \phi)$ is its normalized version, where $\bar{Z}$ contains only the values of the parameters in $\bar{P_{\tau}}$, where $\bar{P_{\tau}}$ is a set of context parameters. 
\end{definition} Two normalized UIs $u_1 = (t_1, \tau, \bar{P_{\tau}}, \bar{Z_1}, \phi_1)$ and $u_2 = (t_2, \tau, \bar{P_{\tau}}, \bar{Z_2}, \phi_2)$ are \emph{equivalent}, denoted by $u_1 = u_2$ iff $\forall p \in \bar{P_{\tau}} \Rightarrow \phi_1(p) = \phi_2(p)$. A log in which all the UIs have been normalized is a \emph{normalized log}, and we refer to it with the notation $\bar{\Sigma} = \langle \bar{u_1}, \bar{u_2}, \dots, \bar{u_n} \rangle$. Table~\ref{tab:uiLog} and Table~\ref{tab:norm-uilog} show, respectively, a fragment of a log and its normalized version. Intuitively, in a normalized log, the chances that two executions of the same task have the same sequence (or set) of normalized UIs are high because they have only context parameters. We leverage such a characteristic of the normalized log to identify its segments (i.e., start and end of each executed task), and then the routine(s) within the segments. \input{tables-complete-uiLogNorm-compact} \subsection{Segmentation} \label{sec:segmentation} A log may capture long working sessions, where a worker performs multiple instances of one or more tasks. The next step of our approach decomposes the log into \emph{segments} that identify the start and the end of each recorded task in the log. Given a normalized log, we generate its control-flow graph (CFG). A CFG is a graph where each vertex represents a different normalized UI, and each edge captures a directly-follows relation between the two normalized UIs represented by the source and the target vertices of the edge. A CFG has an explicit source vertex representing the first normalized UI recorded in the log. Given a log, the directly-follows relation on UIs is defined as follows.
Given two UIs, $\bar{u}_x, \bar{u}_y \in \bar{\Sigma}$, we say that $\bar{u}_y$ directly-follows $\bar{u}_x$, i.e., $\bar{u}_x \leadsto \bar{u}_y$, iff $\bar{u}_{x\mid t} < \bar{u}_{y\mid t} \wedge \nexists \bar{u}_z \in \bar{\Sigma} \mid \bar{u}_{x\mid t} < \bar{u}_{z\mid t} < \bar{u}_{y\mid t}$. \end{definition} \begin{definition}[\textbf{Control-Flow Graph (CFG)}] Given a normalized log, $\bar{\Sigma} = \langle \bar{u_1}, \bar{u_2}, \dots, \bar{u_n} \rangle$, let $\bar{A}$ be the set of all the normalized UIs in $\bar{\Sigma}$. A Control-Flow Graph (CFG) is a tuple $G = (V, E, \hat{v}, \hat{e})$, where: $V$ is the set of vertices of the graph, each vertex maps one UI in $\bar{A}$; $E \subseteq V \times V$ is the set of edges of the graph, and each $(v_i, v_j) \in E$ represents a directly-follows relation between the UIs mapped by $v_i$ and $v_j$; $\hat{v}$ is the graph \emph{entry vertex}, such that $\forall v \in V \nexists (v, \hat{v}) \in E \wedge \nexists (\hat{v}, v) \in E$; while $\hat{e} = (\hat{v}, v_0)$ is the graph \emph{entry edge}, such that $v_0$ maps $\bar{u_1}$. We note that $\hat{v} \notin V$, and $\hat{e} \notin E$, since they are artificial elements of the graph. \end{definition} It is likely that a CFG is cyclic, since a loop represents the start of a new execution of the task recorded in the log. Indeed, in an ideal scenario, once a task execution ends with a certain UI (a vertex in the CFG), the next UI (i.e., the first UI of the next task execution) should have already been mapped to a vertex of the CFG, and a loop will be generated. In such a case, all the vertices in the loop represent the UIs performed during the execution of the task. If several different tasks are recorded in sequence in the same log, we would observe several disjoint loops in the CFG, while if a task has repetitive subtasks, we would observe nested loops in the CFG.
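As an illustration, the directly-follows edges of a CFG could be assembled from a normalized log along these lines (a minimal Python sketch; the representation of normalized UIs as hashable values and the names used here are our assumptions, not the paper's implementation):

```python
from collections import defaultdict

def build_cfg(normalized_log):
    """Assemble a CFG from a normalized UI log.

    Each normalized UI is assumed to be hashable (e.g., a tuple of its
    type and context-parameter values). An edge (u_x, u_y) records that
    u_y directly-follows u_x somewhere in the log. The artificial entry
    vertex "ENTRY" is kept outside the vertex set, as in the definition.
    """
    vertices = set(normalized_log)
    edges = defaultdict(set)                   # vertex -> set of successors
    for u_x, u_y in zip(normalized_log, normalized_log[1:]):
        edges[u_x].add(u_y)
    entry_edge = ("ENTRY", normalized_log[0])  # artificial entry edge
    return vertices, dict(edges), entry_edge
```

On a log where the same task is executed twice, e.g.\ $\langle a, b, c, a, b, c \rangle$, this produces the edge $c \leadsto a$, i.e., the loop that marks the start of a new task execution.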
\figurename~\ref{fig:cfg} shows the CFG generated from the log captured in Table~\ref{tab:norm-uilog}; note that, for simplicity, we collapsed some vertices, as shown in Figure~\ref{fig:collapsing}. \begin{figure}[htb] \centering \subfloat[Before\label{fig:original}]{ \includegraphics[scale = 0.95]{Subprocess.pdf} } \hspace{1cm} \subfloat[After\label{fig:collapsed}]{ \includegraphics[scale = 0.85]{SubprocessCollapsed.pdf} } \caption{Collapsed vertices in Figure~\ref{fig:cfg}} \label{fig:collapsing} \end{figure} \begin{figure}[htb] \centering \hspace*{-2cm}\includegraphics[scale = 0.8]{CFG.pdf} \caption{Example of a Control-Flow Graph} \label{fig:cfg} \end{figure} Once the CFG is generated, we turn our attention to identifying its back-edges (i.e., its loops). By identifying the CFG back-edges and their UIs, we extract the start and end UIs of the repeated task. These UIs are used to mark the boundaries between task executions. The back-edges of a CFG can be identified by analyzing the CFG Strongly Connected Components (SCCs). Given a graph, an SCC is a subgraph where, for all its pairs of vertices, there exists a set of edges connecting the pair of vertices such that all the sources and targets of these edges belong to the subgraph. \begin{definition}[\textbf{CFG Path}] Given a CFG $G = (V, E, \hat{v}, \hat{e})$, a CFG path is a sequence of vertices $p_{v_1,v_k} = \langle v_1, \dots, v_k \rangle$ such that for each $i \in [1,k-1] \Rightarrow v_i \in V \cup \{ \hat{v} \} \wedge \exists (v_i, v_{i+1}) \in E \cup \{ \hat{e}\}$. \end{definition} \begin{definition}[\textbf{Strongly Connected Component (SCC)}] Given a graph $G = (V, E, \hat{v}, \hat{e})$, a strongly connected component (SCC) of $G$ is a pair $\delta = (\bar{V}, \bar{E})$, where $\bar{V} = \{ v_1, v_2, \dots, v_m \} \subseteq V$ and $\bar{E} = \{ e_1, e_2, \dots, e_k \} \subseteq E$ such that $\forall v_i, v_j \in \bar{V} \exists p_{v_i,v_j} \mid \forall v \in p \Rightarrow v \in \bar{V}$.
Given an SCC $\delta = (\bar{V}, \bar{E})$, we say that $\delta$ is \emph{non-trivial} iff $\left| \bar{V} \right| > 1$. Given a graph $G$, $\Delta_G$ denotes the set of all the non-trivial SCCs in $G$. \end{definition} Algorithm~\ref{alg:beDetection} and Algorithm~\ref{alg:analyseSCC} describe how we identify the SCCs of the CFG. Given a CFG $G = (V,E,\hat{v},\hat{e})$, we first build its dominator tree $\Theta$ (Algorithm~\ref{alg:beDetection}, line~\ref{alg:domTree}), which captures domination relations between the vertices of the CFG. \figurename~\ref{fig:domTree} shows the dominator tree of the CFG in \figurename~\ref{fig:cfg}. \begin{figure}[htb] \centering \includegraphics[scale = 0.9]{DominatorTreeNew.pdf} \caption{Dominator tree} \label{fig:domTree} \end{figure} Then, we discover the set of all non-trivial SCCs ($\Delta_G$) by applying Kosaraju's algorithm \cite{sharir1981strong} and removing the trivial SCCs (Algorithm~\ref{alg:beDetection}, line~\ref{alg:scc}). For each $\delta = (\bar{V}, \bar{E}) \in \Delta_G$, we discover its \emph{header} using the dominator tree (Algorithm~\ref{alg:analyseSCC}, line~\ref{alg:header}). The header of an SCC $\delta$ is a special vertex $\hat{h} \in \bar{V}$, such that $\forall p_{\hat{v},v} \mid v \in \bar{V} \Rightarrow \hat{h} \in p_{\hat{v},v}$, i.e., the \emph{header} $\hat{h}$ (a.k.a. the SCC entry) is the SCC vertex that dominates all the other SCC vertices. Once we have $\hat{h}$, we can identify the back-edges as $(v,\hat{h})$ with $v \in \bar{V}$ (line~\ref{alg:incoming}). Finally, the identified back-edges are stored and removed (lines~\ref{alg:backEdges} and~\ref{alg:edgesSub}) in order to look for nested SCCs and their back-edges by recursively executing Algorithm~\ref{alg:analyseSCC} (line~\ref{alg:recursion}), until no more SCCs and back-edges are found. However, if we detect an SCC that does not have a header vertex (formally, the SCC is irreducible), we cannot identify the SCC back-edges.
In such a case, we collect via a depth-first search of the CFG the edges $(v_x, v_y) \in \bar{E}$ such that $v_y$ is topologically deeper than $v_x$ - we call these edges \emph{loop-edges} of the SCC (line~\ref{alg:loops}). Then, out of all the loop-edges, we store (and remove from the SCC) the one having target and source connected by the longest \emph{simple path} entirely contained within the SCC (lines~\ref{alg:deepestEdge} to ~\ref{alg:removeEdge}). Given the CFG presented in \figurename~\ref{fig:cfg} and its corresponding dominator tree (see \figurename~\ref{fig:domTree}), we identify the SCC that consists of all the vertices except the \emph{entry vertex}. Then, by applying Algorithm~\ref{alg:analyseSCC}, we identify: the SCC header -- \emph{Click Button [New Record]}; and the only back-edge -- (\emph{Click Button [Submit]}, \emph{Click Button [New Record]}), which we save and remove from the SCC. After the removal of this back-edge, we identify the nested SCC that contains edits of \emph{Full Name}, \emph{Date}, and \emph{Phone} fields. Note that this second SCC does not have a header because it is irreducible, due to its multiple entries (\emph{Edit Field [Full Name]} and \emph{Edit Field [Date]}). However, by applying the depth-first search, we identify as candidate loop-edge for removal: (\emph{Edit Field [Phone]}, \emph{Edit Field [Full Name]}). After we remove this edge from the CFG, no SCCs are left, so Algorithm~\ref{alg:analyseSCC} terminates. \input{algorithm1.tex} \input{algorithm2.tex} At this point, we collected all the back-edges of the CFG. Next, we use them to segment the log. We do so by applying Algorithm~\ref{alg:segIdentification}. First, we retrieve all the targets and sources of all the back-edges in the CFG and collect their corresponding UIs (lines~\ref{alg:targets4} and~\ref{alg:sources4}). Each UI mapped onto a back-edge target is an eligible segment starting point (from now on, \emph{segment-start UI}). 
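A condensed Python sketch of the back-edge discovery of Algorithms~\ref{alg:beDetection} and~\ref{alg:analyseSCC}, under simplifying assumptions of ours: SCCs are found with Kosaraju's algorithm, an SCC's header is approximated as its unique entry vertex (reachable from outside the SCC) rather than computed from a dominator tree, and irreducible SCCs are skipped instead of handled with the DFS fallback:

```python
def nontrivial_sccs(vertices, edges):
    """Kosaraju's algorithm; returns the non-trivial SCCs as sets."""
    order, seen = [], set()

    def dfs(v):  # first pass: record finishing order
        seen.add(v)
        for w in edges.get(v, ()):
            if w not in seen:
                dfs(w)
        order.append(v)

    for v in vertices:
        if v not in seen:
            dfs(v)
    reverse = {}  # transposed graph for the second pass
    for v, succs in edges.items():
        for w in succs:
            reverse.setdefault(w, set()).add(v)
    sccs, seen = [], set()
    for v in reversed(order):
        if v in seen:
            continue
        stack, scc = [v], set()
        seen.add(v)
        while stack:
            x = stack.pop()
            scc.add(x)
            for w in reverse.get(x, ()):
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        sccs.append(scc)
    return [s for s in sccs if len(s) > 1]

def find_back_edges(vertices, edges, entry):
    """Collect back-edges by repeatedly peeling SCC headers."""
    edges = {v: set(s) for v, s in edges.items()}   # working copy
    back_edges = []
    work = nontrivial_sccs(vertices, edges)
    while work:
        scc = work.pop()
        # header approximation: the unique SCC vertex enterable from outside
        entries = {v for v in scc
                   if v == entry
                   or any(u not in scc and v in edges.get(u, ())
                          for u in vertices)}
        if len(entries) != 1:
            continue   # irreducible SCC (multiple entries): skipped here
        (header,) = entries
        for v in scc:
            if header in edges.get(v, ()):
                back_edges.append((v, header))
                edges[v].discard(header)            # expose nested SCCs
        work.extend(nontrivial_sccs(scc, {v: edges.get(v, set()) & scc
                                          for v in scc}))
    return back_edges
```

On a CFG with a nested loop, e.g.\ $a \to b \to c \to d \to a$ with the inner cycle $c \to b$, the sketch returns both $(d, a)$ and $(c, b)$ as back-edges.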
A back-edge conceptually captures the end of a task execution, while its target represents the first UI of the next task execution. By applying the same reasoning, each UI mapped onto the source of a back-edge is an eligible segment ending point (hereinafter, \emph{segment-end UI}). Then, we sequentially scan all the UIs in the log (line~\ref{alg:uilogscan4}). When we encounter a segment-start UI (line~\ref{alg:segstart4}), and we are not already within a segment (see line~\ref{alg:notinsegment4}), we create a new segment ($s$, a list of UIs), we append the segment-start UI ($\bar{u}$), and we store it in order to match it with the correct segment-end UI (line~\ref{alg:startsegment41} to~\ref{alg:startsegment42}). Our strategy to detect segments in the log is driven by the following underlying assumption: a specific segment-end UI will be followed by the same segment-start UI so that we can match segment-end and segment-start UIs exploiting back-edge's sources and targets (respectively). If the UI is not a segment-start (line~\ref{alg:nostart4}), we check if we are within a segment (line~\ref{alg:insegment4}) and, if not, we discard the UI, assuming it is noise since it fell between the previous segment-end UI and the next segment-start UI. Otherwise, we append the UI to the current segment, and we check if this UI is a segment-end matching the current segment-start UI (line~\ref{alg:startendmatching4}). If that is the case, we reached the end of the segment, and we add it to the set of segments (line~\ref{alg:segmentcomplete4}); otherwise, we continue reading the segment. \newpage \input{algorithm3.tex} Table~\ref{tab:segments} shows the segment-start and the segment-end UIs (highlighted in green and red, respectively), which delimits two segments within the normalized UI log of our running example (see also Table~\ref{tab:norm-uilog}). 
\input{tables-complete-segmentation-compact} \subsection{Candidates routines identification} \label{sec:candidatesDiscovery} Once the log has been segmented, we move to the identification of the candidate routines. The identification step is based on the CloFast sequence mining algorithm \cite{fumarola2016clofast}. To integrate CloFast in our approach, we have to define the structure of the sequential patterns we want to identify. In this paper, we define a \emph{sequential pattern} within a UI log as a sequence of normalized UIs always occurring in the same order in different segments, yet allowing gaps between the UIs belonging to the pattern. For example, if we consider the following three segments: $\langle u_1, u_y, u_2, u_3 \rangle$, $\langle u_1, u_2, u_x, u_3 \rangle$, and $\langle u_1, u_x, u_2, u_3 \rangle$; they all contain the same sequential pattern that is $\langle u_1, u_2, u_3 \rangle$. Furthermore, we define the \emph{support} of a sequential pattern as the ratio of segments containing the pattern and the total number of segments. We refer to \emph{closed} patterns and \emph{frequent} patterns (relatively to an input threshold) as they are known in the literature. Specifically, a frequent pattern is a pattern that appears in at least a number of occurrences indicated by the threshold, while a closed pattern is a pattern that is not included in another pattern having exactly the same support. By applying CloFast to the log segments, we discover all the \emph{frequent closed} sequential patterns. Some of these patterns may be \emph{overlapping}, which (in our context) means that they share some UIs. 
An example of overlapping patterns is the following, given three segments: $\langle u_1, u_y, u_2, u_3, u_x, u_4 \rangle$, $\langle u_1, u_y, u_2, u_x, u_3, u_4 \rangle$, and $\langle u_1, u_x, u_2, u_3, u_4 \rangle$; $\langle u_1, u_2, u_3, u_4 \rangle$ and $\langle u_1, u_x, u_4 \rangle$ are sequential patterns, but they overlap due to the shared UIs: $u_1$ and $u_4$. In practice, each UI belongs to only one routine, therefore, we are interested in discovering only non-overlapping patterns. For this purpose, we implemented an optimization that we use on top of CloFast. Given the set of patterns discovered by CloFast, we rank them by a pattern quality criterion, and we select the best pattern (i.e., the top one in the ranking). We integrated four pattern quality criteria to select the candidate routines: pattern frequency, pattern length, pattern coverage, and pattern cohesion score~\cite{DBLP:conf/iui/DevL17}. Pattern frequency considers how many times the pattern was observed in different segments. Pattern length considers the length of the patterns. Pattern coverage considers the percentage of the log that is covered by all the pattern occurrences. Finally, pattern cohesion score considers the level of adjacency of the elements inside a pattern. It is calculated as the difference between the pattern length and the median number of gaps between its elements. In other words, cohesion prioritizes the patterns whose UIs appear consecutively without (or with few) gaps while taking into account also the pattern length. For the candidate routine that we identified as the best pattern for a given quality criterion, we collect and remove all its occurrences from the log. An occurrence of a candidate routine is called a \emph{routine instance}. Formally, a routine instance is a sequence of (non-normalized) UIs, e.g., $r = \langle u_1, u_2, u_3, u_4 \rangle$. 
After the removal of all the instances of the best candidate routine from the log, we repeat this identification step until no more candidate routines are identified. At the completion of this step, we obtain a set of candidate routines, referred to as $\mathcal{C}_{\Sigma}$, such that, for each candidate routine $c_i \in \mathcal{C}_{\Sigma}$, we can retrieve the set of its routine instances, referred to as $\mathcal{R}_{c_{i}}$. Considering our running example, with reference to Table~\ref{tab:segments}, assuming that the two routine instances that we identified in the previous step (by detecting their segment-start and segment-end UIs) frequently occur in the original log (a snapshot of which is captured in Table~\ref{tab:uiLog}), and choosing length as a selection criterion, at the end of this step, we would discover two candidate routines, each consisting of 15 normalized UIs (as shown in Table~\ref{tab:segments}). An example of a routine instance for each of the two candidate routines can be easily observed in the original log, Table~\ref{tab:uiLog} rows 1 to 24 and 25 to 49 (excluding the UIs filtered in the first step of our approach). \subsection{Automatability assessment} \label{sec:automatabilityAssessment} The candidate routines in $\mathcal{C}_{\Sigma}$ (and their instances, $\mathcal{R}_{c_{i}}$) that we identified in the previous step represent behavior recorded in the log that frequently repeats itself, thus it is the candidate for automation. However, the fact that a routine is frequently observed in a log is not a sufficient condition to guarantee its automatability. Let us consider the following example; a worker fills in and submits 100 times the same web-form, doing it always with the same sequence of actions but inputting manually-generated data (e.g., received over a phone call or copied from a hard-copy document). 
In such a scenario, although we would identify the filling and submission of the web-form as a candidate routine, we would not be able to automate it because we cannot automatically generate the data in input to the web-forms. On the other hand, if the data in input to the web-forms was copied from another digital document, for example a spreadsheet, we could probably automate the routine. Considering such a context, the next step of our approach is to assess the degree of automatability of the discovered candidate routines. To do so, given a candidate routine $c_i \in \mathcal{C}_{\Sigma}$, we check whether all its UIs are deterministic. We consider a UI to be deterministic if a software robot can replicate its execution. This is possible when: i) the input data of a UI can be determined automatically; or ii) the input data of a UI can be provided as input by the user when deploying the software robot. According to such constraints, we can provide the following rules to check whether a UI is deterministic or not. \begin{itemize} \item[1.] UIs belonging to the \emph{navigation} group (see Table~\ref{tab:uiParam}) are always deterministic because they do not take in input any data; except the \emph{select cell}, \emph{select field}, and \emph{select range} UIs which are removed during the filtering of the log (as described in Section~\ref{sec:segmentation}); \item[2.] UIs belonging to the \emph{read} group are always deterministic because the only input they require is the source of the copied content (e.g., row and column of a cell), which is either constant or can be inputted by the user when deploying the software robot in UiPath; \item[3.] UIs belonging to the \emph{write} group that are of type \emph{click} are always deterministic because they do not take in input any data, except the information regarding the element to be clicked which is always constant for a given candidate routine (by construction); \item[4.] 
UIs belonging to the \emph{write} group that are of type \emph{paste} are always deterministic because they always retrieve data from the same source (i.e., the system clipboard). \item[5.] UIs belonging to the \emph{write} group that are of type \emph{edit} are the only ones that are not always deterministic. In fact, these UIs are deterministic only if it is possible to determine the updated value of the edited elements (e.g., the value of a cell in a spreadsheet or of a text field in the web browser after the UI is executed). Furthermore, it must also be possible to determine the target of the editing, although this is usually constant (if a web element) or can be inputted by the user when deploying the software robot in UiPath. \end{itemize} Algorithm~\ref{alg:automatabilityAssessment} shows how we check these five rules given as input a candidate routine $c_i$ and its routine instances $\mathcal{R}_{c_{i}}$, and how we compose the corresponding routine specification of the input $c_i$. The algorithm starts by initializing the set $E$ as a collection of \emph{edit} UI types (\emph{edit cell}, \emph{edit range}, \emph{edit field}). Then, it iterates over all the normalized UIs in the input $c_i$ by checking their types. If the type of a normalized UI $\bar{u}$ is not in $E$ (line~\ref{alg:checktype}), i.e., one of rules 1 to 4 applies, we add it to the queue $D$, which stores all the deterministic UIs we identified. Otherwise, rule 5 applies. While rules 1 to 4 are simple checks on the UI types, the complexity of rule 5 required us to operationalize it through a separate algorithm, i.e., Algorithm~\ref{alg:checkeditUIs}, which is called within Algorithm~\ref{alg:automatabilityAssessment} (line~\ref{alg:calleditcheck}).
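As a compact illustration, the five rules reduce to a type-based check. The following is a hedged sketch, not the actual implementation: the UI type names are illustrative, and \texttt{edit\_check} stands in for Algorithm~\ref{alg:checkeditUIs}, which needs the routine instances to decide rule 5.

```python
# Sketch of rules 1-5 as a type-based check (illustrative UI type names).
NAVIGATION = {"open_app", "click_link", "get_cell", "get_range"}
READ = {"copy_cell", "copy_range", "copy_field"}
WRITE_CLICK = {"click_button", "click_checkbox"}
WRITE_PASTE = {"paste_into_cell", "paste_into_field"}
EDIT = {"edit_cell", "edit_range", "edit_field"}

def is_deterministic(ui_type, edit_check=lambda: False):
    if ui_type in NAVIGATION | READ | WRITE_CLICK | WRITE_PASTE:
        return True          # rules 1-4: deterministic by construction
    if ui_type in EDIT:
        return edit_check()  # rule 5: delegated to the edit-check algorithm
    return False
```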
Algorithm~\ref{alg:checkeditUIs} returns a pair $(d, \lambda)$, where $d$ is a \emph{boolean} (true if the input normalized UI is deterministic), and $\lambda$ is a \emph{data transformation step} required to automate $\bar{u}$ and therefore available only if $\bar{u}$ is deterministic. Once all the normalized UIs in the input $c_i$ have been checked, Algorithm~\ref{alg:automatabilityAssessment} outputs the \emph{routine specification} of $c_i$, as the pair ($c_i$, $\Lambda$), where $\Lambda$ is the set of all the \emph{data transformation steps} we collected by executing Algorithm~\ref{alg:checkeditUIs} (line~\ref{alg:calleditcheck}). \input{algorithm4.tex} \input{algorithm5.tex} Before moving to the final step of our approach, we describe how Algorithm~\ref{alg:checkeditUIs} verifies whether an input (normalized) UI of type \emph{edit} ($\bar{u}$) is deterministic. In essence, Algorithm~\ref{alg:checkeditUIs} checks whether the value of the element edited by the execution of $\bar{u}$ can be deterministically computed from the UIs observed before $\bar{u}$ (in all the routine instances in $\mathcal{R}_{c_{i}}$). To do so, the algorithm looks for a possible data transformation function to compute the value of the edited element from the payloads of the UIs observed before $\bar{u}$. If such a data transformation function exists, $\bar{u}$ is considered to be deterministic, and the algorithm returns the identified function in the form of a data transformation step (which also includes source(s) and target of the data transformation function). In the following, we walk through Algorithm~\ref{alg:checkeditUIs}. We start by assuming that the UI in input is not deterministic, and we try to prove the opposite. We initialize to false the boolean variable which we will output at the end of the algorithm (line~\ref{alg:setDeterminism}), and we create the necessary data structures (line~\ref{alg:setC} to~\ref{alg:data2}). 
Given the input candidate routine $c_i$ and the normalized UI $\bar{u}$, we extract the index of $\bar{u}$ within $c_i$ (line~\ref{alg:getPosition}). Then, for each routine instance $r \in \mathcal{R}_{c_{i}}$, we do what follows. We get the instance of the normalized UI $\bar{u}$\footnote{We recall that a UI instance contains all the parameters, both context and data ones.} by retrieving the UI of index $n$ from $r$ (line~\ref{alg:getInstance}), and we store this UI ($u_1$) in the set $K$ (line~\ref{alg:addInstance}). We read the payload of $u_1$ to retrieve the target element ($t_1$, line~\ref{alg:getTarget}), where $t_1$ can be the ID of a web browser element or the location of a cell in a spreadsheet. Also, we read the payload of $u_1$ to retrieve the value of the target element after the editing ($o$, line~\ref{alg:getOutput}). We initialize two queues, $S$ (which stands for \emph{sources}) and $I$ (which stands for \emph{inputs}). Queue $S$ stores the ID or location of the (source) element(s) that produced the data used by the \emph{edit} UI instance $u_1$; while queue $I$ stores the data that was used by the \emph{edit} UI instance $u_1$. After this initialization, we iterate over all the UI instances preceding $u_1$ in $r$. Such an iteration goes backward from $u_1$ (position $n$ in $r$) to the first UI instance in $r$ (position 1) -- line~\ref{alg:startMainIteration} to~\ref{alg:stopMainIteration}, unless we identify another UI instance of type \emph{edit} performed on the same target element $t_1$ (see lines~\ref{alg:checkForEdit} to \ref{alg:sameEditTarget}). In the iteration captured between lines~\ref{alg:startMainIteration} and~\ref{alg:stopMainIteration}, we do the following. We store all the preceding UI instances ($u_2$) into the set $\Pi$, alongside the routine instance they belong to (i.e., we store a pair $(r, u_2)$ in $\Pi$). For each encountered $u_2$ of type \emph{paste}, we check its target element and we compare it to the target element of $u_1$.
If they are the same, we again traverse the routine instance backward from the \emph{paste} UI until we find a \emph{copy} UI $u_3$ (line~\ref{alg:startPasteCheck} to~\ref{alg:stopPasteCheck}).\footnote{Our filtering approach, described in Section~\ref{sec:segmentation}, guarantees that there exists a $u_3$ of type \emph{copy} preceding the \emph{paste} UI.} Then, we retrieve the target element of $u_3$ and we append it to queue $S$, and we add the copied value of $u_3$ to queue $I$ (lines~\ref{alg:addSource1} and \ref{alg:addInput1}). For each encountered $u_2$ of type \emph{edit} (line~\ref{alg:checkForEdit}), we check its target element and we compare it to the target element of $u_1$. If they are the same (line~\ref{alg:sameEditTarget}), we push the \emph{target element} of $u_2$ to the front of queue $S$, and we push the \emph{data content} of the target element after the editing performed by $u_2$ to the front of the queue $I$ (lines~\ref{alg:addSource2} and~\ref{alg:addInput2}). When we reach this point, we also stop the iteration over all the UI instances preceding $u_1$, because the value of the target element after performing $u_1$ is obtained from the last \emph{edit} UI performed on the same target element and any other UIs (i.e., \emph{paste} UIs) between $u_2$ and $u_1$. Finally, before moving to the next routine instance (i.e., returning to line~\ref{alg:traverseRoutineInstances}), we store the input data and the output data observed in the current routine instance for the normalized UI $\bar{u}$ in the set $T$, which collects all the input and output data observed for \emph{all} the instances of $\bar{u}$ (see line~\ref{alg:addTransformationExample}).
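The backward scan just described can be sketched as follows. This is a simplified illustration: the UI representation (dictionaries with \emph{type}, \emph{target}, and \emph{value} keys) is an assumption introduced here, not the actual data model.

```python
# Hedged sketch of the backward scan: starting from an edit UI u1 at index n
# in a routine instance, walk backwards collecting the sources and values
# that may have produced u1's output. A paste on the same target is resolved
# to its preceding copy; an earlier edit on the same target closes the scan.
def collect_sources(instance, n):
    u1 = instance[n]
    sources, inputs = [], []
    for i in range(n - 1, -1, -1):
        u2 = instance[i]
        if u2["type"] == "paste" and u2["target"] == u1["target"]:
            # resolve the paste to the copy UI that filled the clipboard
            u3 = next(u for u in reversed(instance[:i]) if u["type"] == "copy")
            sources.append(u3["target"])
            inputs.append(u3["value"])
        elif u2["type"] == "edit" and u2["target"] == u1["target"]:
            sources.insert(0, u2["target"])  # pushed to the front, as in the text
            inputs.insert(0, u2["value"])
            break                            # last edit on the same target: stop
    return sources, inputs, u1["value"]
```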
After performing all the above steps for each routine instance $r \in \mathcal{R}_{c_{i}}$, and collecting all the required data to identify a possible data transformation function into the sets $T, K,$ and $\Pi$, we look for the data transformation function by leveraging two state-of-the-art tools: Foofah~\cite{DBLP:conf/sigmod/JinACJ17} and TANE~\cite{DBLP:journals/cj/HuhtalaKPT99}. First, we try to identify the data transformation function using Foofah, then -- if Foofah fails -- we use TANE. Foofah requires in input two series of data values, one referred to as \emph{input} and one referred to as \emph{output}. We generate the two series from the pairs $(I,O)$ that we collected in $T$, which capture examples of data transformations. From these examples, Foofah tries to synthesize an optimal data transformation function to convert input(s) to output.\footnote{For more details about Foofah refer to~\cite{DBLP:conf/sigmod/JinACJ17}.} We note that we run Foofah under the assumption that the output series is noise- and error-free, i.e., the analyzed data transformations are supposed to be correct. However, Foofah suffers from two limitations: it is inefficient when the size of the input and output series is large; it cannot discover conditional data transformation functions (where different manipulations are applied depending on the input). Hence Foofah cannot deal with heterogeneous data. To address these limitations, we group the data transformation examples into equivalence classes, where each class represents a different structural pattern of the input data. To create these equivalence classes, for each data sample in the input data series, we discover its symbolic representation describing its structural pattern by applying \emph{tokenization}. 
The tokenization that we apply replaces each maximal chained subsequence of symbols of the same type (either digits or letters) with a special token character ($\langle d \rangle+$ or $\langle a \rangle+$, resp.), and leaves any other symbol unaltered. For each equivalence class, we discover a data transformation function by providing to Foofah one randomly selected data transformation example from the equivalence class. The use of equivalence classes allows us to remove the heterogeneity of the input data and to facilitate the application of Foofah, which will operate only on a single data transformation example. If Foofah cannot identify a data transformation function (line~\ref{alg:syntTransDiscoveryEnd}), we turn to TANE, which can discover semantic data transformation functions (also known as \emph{functional dependencies}~\cite{DBLP:journals/cj/HuhtalaKPT99}). TANE requires as input a table where each row contains $n-1$ input data values and an output data value in column $n$ (this is conceptually similar to the input and output series required by Foofah). TANE analyzes each row of such a table to check if there exists any dependency between the values in the first $n-1$ columns and the value in column $n$.\footnote{For more details about TANE refer to~\cite{DBLP:journals/cj/HuhtalaKPT99}.} An example of a semantic data transformation function discovered by TANE would be: if the value of column $i$ is X, then the value of column $n$ is always Y. In our context, the input table for TANE is a table where each row represents the output data observed in all the UIs preceding $\bar{u}$ in a routine instance, and the last element of the row is the output data of the $\bar{u}$ instance in that routine (i.e., the value of the element edited by the execution of $\bar{u}$ in that routine instance).
To build such a table, we require as input all the instances of $\bar{u}$ (which we stored in the set $K$) as well as all the instances of any UI preceding $\bar{u}$ (which we stored in the set $\Pi$). If TANE identifies a semantic data transformation function (line~\ref{alg:depfound}), we set $\bar{u}$ as deterministic (through the boolean $d$), and we compose the data transformation step using the output of TANE (see lines~\ref{alg:tane1} to~\ref{alg:tane2}). Table~\ref{table:dependencyTable} shows an example of the dependency table that we would build from the log captured in Table~\ref{tab:uiLog} (assuming that the full-length UIs log contains nine instances of the routine shown in rows 1 to 24). Given Table~\ref{table:dependencyTable} as input, TANE would identify that the value of the last column (i.e., the type of student, domestic or international) can be deterministically generated by observing the value of column four (i.e., \emph{country of residence}). \input{tables-dependencyExample.tex} If TANE also fails to discover a data transformation function, we are not able to automatically determine the value of the element edited by the execution of $\bar{u}$; consequently, we assume that $\bar{u}$ is not deterministic. Otherwise, we output the data transformation step discovered.
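The tokenization used earlier to form the equivalence classes for Foofah reduces to a few lines. This is a hedged sketch of the rule as stated in the text, not the actual implementation:

```python
import re

# Sketch of the tokenization: each maximal run of digits becomes "<d>+",
# each maximal run of letters becomes "<a>+", any other symbol is kept.
def tokenize(value):
    return re.sub(
        r"[0-9]+|[A-Za-z]+",
        lambda m: "<d>+" if m.group(0)[0].isdigit() else "<a>+",
        value,
    )
```

For instance, both \texttt{2020-01-15} and \texttt{1999-12-31} map to \texttt{<d>+-<d>+-<d>+} and therefore fall into the same equivalence class.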
\begin{figure}[htb] \centering \hspace*{-1cm} \includegraphics[scale = 0.7]{TransformationFunctions.pdf} \caption{Transformation functions discovered from the running example} \label{fig:tfunctions} \end{figure} \input{tables-transformationSteps} Considering our running example, Figure~\ref{fig:tfunctions} shows the data transformation functions discovered by Foofah (t1 to t4) and by TANE (t5) when running Algorithm~\ref{alg:checkeditUIs} on a hypothetical extended version of the UI log in Table~\ref{tab:uiLog}, giving as input the routine shown in rows 1 to 24 (Table~\ref{tab:uiLog}) along with all its instances, and the \emph{edit} UIs at rows 6, 11, 16, 21, 23 (respectively, for identifying the data transformation functions t1 to t5). Each data transformation function shows how input data is turned into output data. Although some rules are intuitive to interpret (e.g., t1 and t5), others may appear slightly cryptic. We refer to the original studies of Foofah~\cite{DBLP:conf/sigmod/JinACJ17} and TANE~\cite{DBLP:journals/cj/HuhtalaKPT99} for an extensive description of the set of rules that the two tools are capable of discovering. Finally, the data transformation functions are integrated into the data transformation steps, which also include the instantiation of the input and the output of the function, as shown in Table~\ref{tab:transSteps}. \subsection{Routines aggregation} \label{sec:routinesAggregation} When a routine can be performed by executing a set of UIs without following a strict order, we may observe multiple execution variants of the same routine in the log. For example, if a worker needs to copy the \emph{first name}, the \emph{family name}, and the \emph{phone number} of a set of customers from a spreadsheet to different web-forms, she may choose to copy the data of each customer in any order (e.g., \emph{first name}, \emph{phone number}, and \emph{family name}, or \emph{family name}, \emph{phone number}, \emph{first name}).
In such a scenario, the UI log would record several different execution variants of the same routine. Routine execution variants do not bring any additional value; rather, they just generate redundancy within the log, leading to the discovery of different routine specifications that would actually execute (once deployed as software bots) the same routine. Considering these routine specifications as duplicates, this final step focuses on their removal. To identify duplicate routine specifications, we start by generating for each routine discovered in the previous step its \emph{data transformation graph}. \begin{definition}[\textbf{Data Transformation Graph}] Given a routine specification ($c_i$, $\Lambda$), its \emph{data transformation graph} is a graph $G_\Lambda = (D_\Lambda , L_\Lambda)$, where: $D_\Lambda $ is the set of vertices of the graph, and each vertex $d \in D_\Lambda$ maps one data transformation step $\lambda \in \Lambda$; $L_\Lambda \subseteq D_\Lambda \times D_\Lambda$ is the set of edges of the graph, and each edge $(d_i, d_j) \in L_\Lambda$ represents a dependency between two data transformation steps capturing the fact that the target of the data transformation step mapped by $d_i$ is (one of) the source(s) of the data transformation step mapped by $d_j$. \end{definition} \figurename~\ref{fig:transformationGraph} shows the data transformation graph of the routine we discovered in the previous step in our running example.
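Following the definition above, a data transformation graph can be represented minimally as a pair of sets. In this sketch, data transformation steps are hypothetical (name, sources, target) triples introduced only for illustration:

```python
# Build the data transformation graph of a routine specification: one vertex
# per data transformation step, and an edge (di, dj) whenever the target of
# step di is among the sources of step dj.
def build_graph(steps):
    vertices = {name for name, sources, target in steps}
    edges = {(a[0], b[0]) for a in steps for b in steps if a[2] in b[1]}
    return vertices, edges
```

Two specifications can then be compared for equality on these vertex and edge sets.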
Data transformation graphs can be used to check whether two routine specifications are equivalent. Two routine specifications ($c_i$, $\Lambda_1$) and ($c_j$, $\Lambda_2$) are equivalent if and only if the following two relations hold: i) their data transformation graphs are the same, i.e., $D_{\Lambda_1}$ = $D_{\Lambda_2}$ and $L_{\Lambda_1}$ = $L_{\Lambda_2}$; ii) their candidate routines $c_i$ and $c_j$ contain the same set of UIs, and all the UIs of type \emph{click button} appear in the same order in both $c_i$ and $c_j$. \begin{figure}[htb] \centering \hspace*{-1cm} \includegraphics[scale = 0.9]{TransformationGraph-Alt.pdf} \caption{Data transformation graph example} \label{fig:transformationGraph} \end{figure} By comparing each pair of routine specifications, we first create sets of equivalent routine specifications, and, for each set, we discard all the routine specifications but one. Ideally, we would like to retain the best routine specification of each set; however, we need to define what it means to be the \emph{best} one. We can select the best routine specification by relying on different quantitative metrics, such as frequency, length, or duration of the candidate routine of a routine specification. For example, we can choose frequency as a selection criterion and retain from each set the routine specification whose candidate routine is the most frequent in the UI log. Intuitively, the most frequent candidate routine represents the common routine execution, so one may be tempted to use that criterion by default. However, the most frequent routine execution is not necessarily the optimal execution. For example, length or duration could represent better selection criteria. Length prioritizes short candidate routines over long ones, assuming that a candidate routine should comprise as few steps as possible. Duration prioritizes execution times over the number of steps.
The duration of a candidate routine can be estimated as the average execution time of each routine instance of the candidate routine that is recorded in the UI log. Note, however, that the duration may not always be reliable since, during the routine execution, the worker might perform activities that do not appear in the log or that are not relevant for the routine execution, thus involuntarily increasing the observed execution time of the routine. For this reason, we implemented a combination of length and frequency to select the best routine specification from each set. Precisely, we use length first and then compare the frequencies of the candidate routines having the same length. \section{Conclusion} \label{sec:conclusion} \medskip This paper presented an approach to discover automatable routines from UI logs. The approach starts by decomposing the UI log into segments corresponding to paths within the connected components of a control-flow graph derived from the log. These paths represent sequences of actions that are repeated multiple times within the event log, possibly with some variations. Once the log is segmented, a noise-resilient sequential pattern mining technique is used to extract frequent patterns that correspond to the candidate routines. Next, the candidate routines are assessed for their amenability to automation. For each routine, a corresponding executable specification is synthesized, which can be compiled into an RPA script. Finally, the approach identifies semantically equivalent routines in order to produce a non-redundant set of automatable routines. The approach has been implemented as an open-source tool, namely Robidium. This article reported on an evaluation of the fitness for purpose and computational efficiency of the proposed approach. The evaluation shows that the approach can rediscover routines injected into synthetic logs, and that it discovers relevant routines in real-life logs.
For most logs, the execution time does not exceed one minute. The only exceptions were logs where we deliberately injected complex data transformations or where the routine instances overlap in the UI log. The proposed approach makes a number of limiting assumptions. First, the effectiveness of the approach is sensitive to noise, e.g.\ clicks that are not related to the routine itself or clicks resulting from user mistakes. In our evaluation, we observed this phenomenon to varying degrees when dealing with real-life logs. In practice, the approach can identify correct routines only if they are frequently observed in the log. Recurring noise affects the accuracy of the results. To address this limitation, we will investigate the use of alternative segmentation and sequential pattern discovery techniques that incorporate noise tolerance mechanisms. Another avenue is to discover sequential patterns using the approach outlined in this article and then to filter out patterns that are \emph{chaotic} in the sense that their occurrence does not affect the probability of other patterns occurring subsequently nor vice-versa. This latter approach has been studied in the context of event log filtering for process mining in~\cite{DBLP:journals/jiis/TaxSA19}. Second, the approach is designed for logs that capture consecutive routine executions. In practice, routine instances may sometimes overlap (cf. the {\sc S2} \ real-life log in the evaluation). A possible avenue to address this limitation is to search for overlapping frequent patterns directly in the unsegmented log, instead of first segmenting it and then finding patterns in the segmented log. This approach has been previously investigated in the context of so-called Local Process Mining (LPM), where the goal is to discover process models capturing frequently repeated (and possibly overlapping) behavior in an unsegmented sequence of events~\cite{LPM}.
When assessing the automatability of a routine, the proposed approach assumes that the values of the edited fields are entirely derived from the (input) fields that are explicitly accessed (e.g., via copy operations) during the routine's execution. Hence, it will fail to identify automatable user interactions in the case where a worker visually reads from a field (without performing a \emph{copy} operation on it) and writes what they see into another field. An avenue for addressing this limitation is to complement the proposed method with optical character recognition techniques over screenshots taken during the UI log recording, so as to be able to detect that some of the outputs of a routine come from fields that have not been explicitly accessed via a copy-to-clipboard operation. Furthermore, the proposed approach is unable to discover conditional behavior, where the transformation function for the target field depends on the value of another field. Consider, for example, a routine that involves copying delivery data. If the delivery country is USA, then the month comes before the day (MM/DD/YYYY), otherwise the day comes before the month. Here, the transformation function depends on a condition of the form ``country = USA'', which the proposed approach is unable to discover. In a similar vein, the proposed approach is able to discover transformations that depend on the structural pattern of the value of the input field(s), but it fails to distinguish patterns that, although having the same syntactic structure, have different semantics. Following the example above, our approach will put both date types into the same equivalence class. Addressing this limitation would require the development of more sophisticated data transformation discovery techniques, beyond the capabilities of Foofah.
Finally, the method to detect if two routines are semantically equivalent assumes that all button clicks in a UI are effectful, meaning that their presence and the order in which they occur affect the outcome of the routine. In practice, some clicks may have no effect on the routine's outcome. For example, some clicks may simply serve to pop up a help box, while others may just serve to move from one page to another in a listing. To address this limitation, we foresee extensions of the proposed method where the alphabet of the UI log is extended with a richer array of actions, and where the routine discovery approach can be configured via a language for the specification of action effects. \medskip\noindent\textbf{Acknowledgments}. The authors thank Stanislav Deviatykh for his help in the prototype implementation. This research is supported by the Australian Research Council (DP180102839) and the European Research Council (project PIX). \section{Evaluation} \label{sec:evaluation} \urldef{\footurla}\url{https://github.com/volodymyrLeno/RPM_Miner} \urldef{\footurlb}\url{https://doi.org/10.6084/m9.figshare.12543587} We implemented our approach as an open-source Java command-line application\footnote{Available at \footurla} and also embedded this in the open-source tool Robidium~\cite{LenoDPRDM20}. Using the command-line application, we conducted a series of experiments to analyze the applicability of our approach in real-life settings. Specifically, we assessed to what extent our approach can rediscover routines that are known to be recorded in the input UI logs, and analyzed whether our approach is able to correctly identify automatable and not automatable user interactions within such routines. Accordingly, we define the following research questions: \begin{itemize} \item \textbf{RQ1.} Does the approach discover candidate routines that are known to exist in a UI log? 
\item \textbf{RQ2.} Does the approach discover automatable routines that are known to be present in a UI log? \end{itemize} \subsection{Datasets} \label{sec:datasets} To answer our research questions, we rely on a dataset of 13 logs. These logs can be divided into three subgroups: artificial logs, real-life logs recorded in a supervised environment, and real-life logs recorded in an unsupervised environment.\footnote{The real-life logs were recorded with the Action Logger tool~\cite{DBLP:conf/bpm/LenoPRDM19}. All the logs are available at \footurlb} Table~\ref{table:datasets} shows the characteristics of these logs. \input{tables-datasets.tex} The artificial logs (CPN1--CPN9) were generated from Colored Petri Nets (CPNs) in \cite{bosco2019}. The CPNs used have increasing complexity, from low (the net used to generate CPN1) to high (the net used for CPN9). The underlying routines are characterized by a varying amount of non-deterministic user interactions injected. They involve simple data transformations, mostly in the form of copy-pasting. The logs generated were originally noise-free and segmented. We removed the segment identifiers to produce unsegmented logs. The \emph{Student Records} ({\sc SR}) and \emph{Reimbursement} ({\sc RT}) logs record the simulation of real-life scenarios. The {\sc SR} \ log simulates the task of transferring students' data from a spreadsheet to a Web form. The {\sc RT} \ log simulates the task of filling reimbursement requests with data provided by a claimant. Each log contains fifty recordings of the corresponding task executed by one of the authors, who followed strict guidelines on how to perform the task. These logs contain little noise, which only accounts for user mistakes, such as filling the form with an incorrect value and performing additional actions to fix the mistake. For both logs, we know how the underlying task was executed, and we treat such information as ground truth when evaluating our approach.
While the routines captured in the logs are fully automatable, they include complex transformations to test the automatability assessment step of the approach. Finally, the \emph{Scholarships} logs ({\sc S1} \ and {\sc S2}) were recorded by two employees of the University of Melbourne who performed the same task. It is the task of processing scholarship applications for international and domestic students. This task mainly consists of students' data manipulation with transfers between spreadsheets and Web pages. Compared to the other logs used in our experiments, we have no a-priori knowledge of how to perform the task at hand (no ground truth). Also, when recording the logs, the University employees were not instructed to perform their task in a specific manner, i.e.,\ they were left free to perform this task as they would normally do when unrecorded. \subsection{Setup} \label{sec:setup} To measure the quality of the discovered candidate routines, we use the Jaccard Coefficient (JC), which captures the level of similarity between discovered and ground truth routines. JC does not penalize the order of the interactions in a routine, which follows from the assumption that a routine could be executed by performing some actions in a different order. The JC between two routines is the ratio $\frac{n}{m}$, where $n$ is the number of user interactions that are contained in both routines, while $m$ is the total number of user interactions present in the two routines. Given the set of discovered routines and the set of ground truth routines, for each discovered routine, we compute its JC with all the ground truth routines and assign the maximum JC to the discovered routine as its quality score. Finally, we assess the overall quality of the discovered routines as the average of the JC of each discovered routine. As the ground truth, we use the segments of the artificial logs and the guidelines given to the author who performed the tasks in {\sc SR} \ and {\sc RT}.
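The scoring just described can be sketched as follows, treating routines as sets of user interactions so that order is not penalized:

```python
# Jaccard coefficient between two routines (intersection over union of their
# user interactions), and the quality score of a discovered routine as its
# best JC over all ground-truth routines.
def jaccard(r1, r2):
    r1, r2 = set(r1), set(r2)
    return len(r1 & r2) / len(r1 | r2)

def quality_score(discovered, ground_truth):
    return max(jaccard(discovered, g) for g in ground_truth)
```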
The JC alone is not enough to assess the quality of the discovered routines, as this measure does not consider the routines we may have missed in the discovery. Thus, we also measure the total coverage to quantify how much log behavior is captured by the discovered routines. We would like to reach high coverage with as few routines as possible. Thus, we prioritize long routines over short ones by measuring the average routine length alongside its coverage. We assess the quality of the automatable routines discovery by measuring precision, recall and F-score. For each discovered routine, we compute the corresponding confusion matrix, where \emph{true positives} (TP) are correctly identified automatable user interactions, \emph{true negatives} (TN) are correctly identified non-automatable user interactions, \emph{false positives} (FP) are the user interactions that were wrongly marked as automatable, and \emph{false negatives} (FN) are the user interactions that were wrongly marked as non-automatable. From the constructed confusion matrix, we calculate precision, recall and F-score as follows: \begin{equation}Precision = \frac{TP}{TP + FP},\end{equation} \begin{equation}Recall = \frac{TP}{TP + FN},\end{equation} \begin{equation}F{\text -}score = 2 \cdot \frac{Precision \cdot Recall}{Precision + Recall}.\end{equation} We report the averages of these metrics for all the discovered routines in the log. We also report the average ratio of automatable user interactions for the routines in the log. The results for the {\sc S1} \ and {\sc S2} \ logs were qualitatively assessed with the help of the University of Melbourne employees who performed the task. Specifically, we asked them to compare the rediscovered routines with the actions they performed while recording. 
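The three metrics above reduce to a few lines, given a routine's confusion matrix with automatable interactions as the positive class:

```python
# Precision, recall and F-score from the per-routine confusion matrix
# (tp, tn, fp, fn as defined in the text; tn is unused by these metrics).
def metrics(tp, tn, fp, fn):
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return precision, recall, f_score
```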
All experiments were conducted on a Windows 10 laptop with an Intel Core i5-5200U CPU 2.20 GHz and 16GB RAM, using cohesion as a routine selection criterion with the minimum support threshold set to 0.1 and the minimum coverage threshold equal to 0.05. \subsection{Results} Table~\ref{table:candidatesIdentification} shows the quality of the discovered routine candidates. Although the synthetic logs only contain the user interactions that belong to routines, we achieved perfect coverage for three logs only, namely CPN1, CPN4 and CPN6. This is because some execution patterns were observed very rarely. Since the {\sc SR} \ and {\sc RT} \ logs contain noise, the coverage cannot be 1 in these two cases. For six out of eleven logs, the discovered routines match the ground truth. Overall, the JC is very high, above 0.95 for all the logs except CPN5. The underlying model of the CPN5 log consists of multiple branches, generating 36 different executions. Since some execution patterns are not frequent enough, we discovered only partial routines. Indeed, for this log we also achieved the lowest coverage (0.84). For the {\sc RT} \ log we found two routines consisting of an identical set of actions. These routines were not merged, though, because they are characterized by different transformation functions. \input{tables-results-candidatesIdentification.tex} Table~\ref{table:automatableResults} shows the quality of the automatable routines discovery. We correctly identified all the automatable and not automatable user interactions for the CPN3, CPN6 and {\sc SR} \ logs. The routines recorded in the CPN3 and {\sc SR} \ logs are fully automatable. Although the {\sc RT} \ log contains automatable routines only, our approach failed to discover some of the underlying transformations, and, therefore, incorrectly marked some interactions as not automatable.
Some of the user interactions of the synthetic logs were wrongly identified as automatable. Although the data values of such interactions can be deterministically computed, the locations of the edited elements were completely random as it was intended in the corresponding models. Thus, in practice, such interactions are not automatable. The routines discovered from the CPN5 log are characterized by the lowest number of automatable user interactions, and we achieved the lowest recall for this log (0.805). Overall, F-score is high, above 0.85 for all the logs, except CPN7 and CPN8. For these logs we also achieved the lowest recall, meaning that some interactions of the corresponding routines were wrongly identified as not automatable. Although in the CPN models used to generate the artificial logs, some of the interactions are not deterministic, they are automatable in the context of the discovered routines. For example, for the CPN9 log we discovered six routines that correspond to the different branches within the model. For all the executions of a branch we use the same data values, and hence, the corresponding user interactions are automatable. \input{tables-results-automatableRoutinesDiscovery.tex} From the {\sc S1}\ log we discovered five fully automatable routines. The first routine consists in manually adding graduate research student applications to the student record in the university's student management system. The application is then assessed, and the student is notified of the outcome. The second routine consists in lodging a ticket to verify possible duplicate applications. When a new application is entered in the system and its data matches an existing application, the new application is temporarily put on hold, and the employee fills in and lodges a ticket to investigate the duplicate. 
The remaining three routines represent exceptional cases, where the employee either executed the first or the second routine in a different manner (i.e.,\ by altering the order of the actions or overlapping routines executions). These routines were not identified as duplicate because they are characterized by different sequences of button clicks. To assess the results, we showed the discovered routines to the employee of the University of Melbourne who recorded the {\sc S1}\ log, and they confirmed that the discovered routines correctly capture their task executions. Also, they confirmed that the last three routines are alternative executions of the first routine.\footnote{Detailed results at \footurlb} While the results from the {\sc S1} \ log were positive, our approach could not discover any correct routine from the {\sc S2} \ log. By analyzing the results, we found out that the employee worked with multiple worksheets at the same time, frequently switching between them for visualization purposes. Such behavior recorded in the log negatively affects the construction of the CFG and its domination tree, ultimately leading to the discovery of incorrect segments and routines. Table~\ref{table:executionTimes} shows the execution time for each step of the approach. As we can see, the most computationally heavy step is the automatability assessment. For all the logs, this step took the largest amount of time, except for the CPN5, {\sc S1}, and {\sc S2} \ logs. While the execution time is still reasonably low for all the artificial logs, it substantially increases for the {\sc SR} \ and {\sc RT} \ logs. In these two logs, the automatability assessment took 99 percent of the total computation time. This is caused by the fact that the underlying transformations in these two logs were very complex, often involving regular expressions or long sequences of manipulations. In contrast, all the transformations in the CPN1-CPN9 logs were simple copy-paste operations. 
Overall, for the synthetic logs, the approach took no more than 42 seconds. The aggregation step required the smallest amount of time. For the CPN1 log, we discovered only one routine, and, therefore, we did not have to apply any aggregation. For the {\sc S1} \ and {\sc S2} logs, the most time-consuming step was segmentation. The CFGs constructed for these logs were very complex, with a high number of loops. This significantly increased the time to identify back-edges in such CFGs and, therefore, the total time of segmentation. \input{tables-results-executionTime.tex} \subsection{Threats to validity} The reported evaluation has a number of threats to validity. First, a potential threat to internal validity is the fact that the context parameters (i.e.\ the attributes in the log that capture the notion of ``user interaction'') were manually selected. These context parameters are required as one of the inputs of the proposed method (in addition to the UI log). To mitigate this threat, the parameters were first selected by each of the two authors of the paper independently, then cross-checked to reach a mutual agreement, and then validated by the other authors based on their understanding of the event logs in question. Another possible threat to internal validity is the limited use of parameter values to configure the approach at hand. To ensure we do not miss any significantly important behavior in the logs, we used very low support and coverage thresholds, equal to 0.1 and 0.05, respectively. A potential threat to external validity is given by the use of a limited number of real-life logs (four). These logs focus on one type of task that can be automated via RPA, namely data transfer. These logs, however, exhibit different characteristics in terms of the complexity of the captured processes and log size. To mitigate this threat, we additionally performed a more extensive evaluation on a battery of artificial logs.
For two real-life logs, we had no information about the underlying processes. Therefore we evaluated the results qualitatively with the workers responsible for their execution. To ensure the full reproducibility of the results, we have released all the logs, both real-life and artificial, used in our experiments. The only exceptions are the {\sc S1} \ and {\sc S2} \ logs as they contain sensitive information. \section{Introduction} \label{sec:intro} Robotic Process Automation (RPA) allows organizations to improve their processes by automating repetitive sequences of interactions between a user and one or more software applications (a.k.a.\ routines). Using this technology, it is possible to automate data entry, data transfer, and verification tasks, particularly when such tasks involve multiple applications. To exploit this technology, organizations need to identify routines that are amenable to automation~\cite{leopold2018identifying}. This can be achieved via interviews, walk-throughs, job shadowing, or by examining documented procedures~\cite{leopold2018identifying}. These approaches are not always cost-efficient in large organizations, as routines tend to be scattered across the process landscape. To tackle this gap, several research studies have proposed techniques to analyze User Interaction (UI) logs in order to discover repetitive routines that are amenable to automation via RPA~\cite{jimenez2019method,bosco2019,gao2019automated,leno2020aaai,DBLP:conf/bpm/AgostinelliLMM20}. However, existing approaches in this space make various assumptions that limit their applicability. First, all of the existing approaches for discovering frequent and/or automatable routines from UI logs assume that the UI log consists of a set of traces (segments) of a task that is presupposed to contain one or more routines. In practice, however, UI logs are not segmented. 
Instead, a recording of a working session consists of a single sequence of actions encompassing many instances of one or more routines, interspersed with other events that may not be part of any routine. Second, most of the existing approaches~\cite{jimenez2019method,bosco2019,gao2019automated} discover frequent routines and/or automatable routines, but they do not produce an executable routine specification. Third, existing approaches do not take into account the fact that the same routine may be performed differently (albeit equivalently) by different workers, or sometimes even by the same worker. In other words, existing approaches may produce redundant routines as output. This article addresses these gaps by presenting an approach to discover automatable routines from unsegmented UI logs. The approach splits the unsegmented UI log into a set of segments, each representing a sequence of steps that appears frequently in the unsegmented UI log. It then applies sequential pattern mining techniques to find candidate routines for automation and evaluates their automatability. For each automatable routine, the approach synthesizes an executable routine specification, which can be compiled into an RPA bot. This bot can then be executed by an RPA tool to replicate the underlying routine automatically. The proposed approach has been implemented as an open-source prototype called Robidium~\cite{LenoDPRDM20}. Using this implementation, we have evaluated the proposed approach on synthetic and real-life UI logs in terms of its execution times and its ability to accurately discover routines from a UI log. This article is an extended and revised version of a conference paper~\cite{DBLP:conf/icpm/LenoADRMP20}. The conference version focused on the discovery of frequently repeated routines from unsegmented UI logs (i.e.\ candidate routines). This article extends this initial approach in two ways.
First, this article presents an approach to post-process the identified candidate routines in order to assess their automatability and, in case a routine is fully automatable, to generate an executable routine specification. Second, this article proposes a method to identify semantically equivalent routines, so as to produce a non-redundant set of automatable routines. This article provides a concrete realization of a high-level architecture for discovering automatable routines from UI logs, sketched in~\cite{lenobise20}. To this end, the article proposes concrete techniques to implement each of the building blocks in~\cite{lenobise20}, except for the UI log recording step, which is documented in~\cite{DBLP:conf/bpm/LenoPRDM19}. The article is structured as follows. Section \ref{sec:related} provides an overview of related work. Section \ref{sec:approach} describes the approach, while Section \ref{sec:evaluation} reports the results of the evaluation. Finally, Section \ref{sec:conclusion} concludes the paper and discusses directions for future work. \section{Related work} \label{sec:related} The problem addressed by this article is referred to as Robotic Process Mining (RPM) in~\cite{lenobise20}. RPM is a family of methods to discover repetitive routines performed by employees during their daily work, and to turn such routines into software scripts that emulate their execution. The first step in an RPM pipeline is to record the interactions between one or more workers and one or more software applications~\cite{DBLP:conf/bpm/LenoPRDM19}. The recorded data is represented as a UI log -- a sequence of user interactions (herein called UIs), such as selecting a cell in a spreadsheet or editing a text field in a form. The UI log may be filtered to remove irrelevant UIs (e.g., misclicks). Next, it may be decomposed into segments (segmentation). The discovered segments are then scanned to identify routines that occur frequently across these segments.
Finally, the resulting frequent routines (a.k.a.\ candidate routines) are analyzed in order to identify those that are automatable and to derive executable routine specifications. In this section, we review previous research related to the three core research challenges of RPM identified in~\cite{lenobise20}: UI log segmentation, discovery of frequent (candidate) routines and discovery of automatable routines. \subsection{UI Log Segmentation} Given a UI log (i.e., a sequence of UIs), segmentation consists in identifying non-overlapping subsequences of UIs, namely \emph{segments}, such that each subsequence represents the execution of a task performed by an employee from start to end. In other words, segmentation searches for repetitive patterns in the UI log. In an ideal scenario, we would observe only one unique pattern (the task execution) repeated a finite number of times. However, in reality, this scenario is unlikely to materialize. Instead, it is reasonable to assume that an employee performing the same task X times would make some mistakes or introduce variance in how the task is performed. The problem of segmentation is similar to periodic pattern mining on time series. While several studies addressed the latter problem over the past decades~\cite{cao2007discovery,zhu2017matrix}, most of them require information regarding the length of the pattern to discover or assume a natural period to be available (e.g., hour, day, week). This makes the adaptation of such techniques to solve the problem of segmentation challenging unless periodicity and pattern length are known a priori. Under the same class of problems, we find web session reconstruction~\cite{spiliopoulou2003framework}, whose goal is to identify the beginning and the end of web navigation sessions in server log data (e.g., streams of clicks and web page navigation).
Methods for session reconstruction are usually based on heuristics that rely on the structural organization of web sites or on time intervals between events. The former approach covers only the cases in which all user interactions are performed in web applications, while the latter approach assumes that users take breaks between two consecutive segments -- in our case, two routine instances. Lastly, segmentation also relates to the problem of correlation of event logs for process mining. In such logs, each event should normally include an identifier of a process instance (case identifier), a timestamp, an activity label, and possibly other attributes. When the events in an event log do not contain explicit case identifiers, they are said to be uncorrelated. Various methods have been proposed to extract correlated event logs from uncorrelated ones. However, existing methods in this field either assume that a process model is given as input~\cite{DBLP:conf/caise/BayomieAE16} or that the underlying process is acyclic~\cite{DBLP:conf/bpm/FerreiraG09}. Both of these assumptions are unrealistic in our setting: a process model is not available since we are precisely trying to identify the routines in the log, and a routine may contain repetition. Recent work on UI log segmentation~\cite{DBLP:conf/icpm/Agostinelli20} proposes to use trace alignment between the logs and the corresponding interaction models to identify the segments. In practice, however, such interaction models are not available beforehand. In this article, we outline a segmentation approach that does not require any models as inputs nor does it require that the user specifies one or more explicit delimiters between segments (e.g.\ that the user specifies that a given symbol X represents the start and/or the end of a segment).
\subsection{Frequent Routine Discovery} Dev and Liu~\cite{DBLP:conf/iui/DevL17} have noted that the problem of routine identification from (segmented) UI logs can be mapped to that of frequent pattern mining, a well-known problem in the field of data mining~\cite{han2007frequent}. Indeed, the goal of routine identification is to identify repetitive (frequent) sequences of interactions, which can be represented as symbols. In the literature, several algorithms are available to mine frequent patterns from sequences of symbols. Depending on their output, we can distinguish two types of frequent pattern mining algorithms: those that discover only exact patterns~\cite{lee2004efficient,ohlebusch2015alphabet} (hence vulnerable to noise), and those that allow frequent patterns to have gaps within the sequence of symbols~\cite{wang2004bide,fumarola2016clofast} (hence noise-resilient). Depending on their input, we can distinguish between algorithms that operate on a collection of sequences of symbols and those that discover frequent patterns from a single long sequence of symbols~\cite{ohlebusch2015alphabet}. The former algorithms can be applied to segmented UI logs, while the latter can be applied directly to unsegmented ones. However, techniques that identify patterns from a single sequence of symbols only scale up when identifying exact patterns. While such approaches discover the frequently repeated routines, they do not analyze whether they are automatable. In other words, these approaches focus on the discovery of the control-flow models instead of executable specifications. The identification of frequent routines from sequences of actions is related to the problem of Automated Process Discovery (APD) \cite{DBLP:journals/tkde/AugustoCDRMMMS19}, which has been studied in the field of process mining. Recent works~\cite{DBLP:conf/bpm/Geyer-Klingeberg18,jimenez2019method} show that RPA can benefit from process mining. 
In particular, the work in \cite{jimenez2019method} proposes to apply traditional APD techniques to discover process models of routines captured in UI logs. However, traditional APD techniques discover control-flow models, while, in the context of RPA, we seek to discover executable specifications that capture the mapping between the outputs and the inputs of the actions performed during a routine. \subsection{Discovery of Automatable Routines} The discovery of automatable sequences of user interactions has been widely studied in the context of Web form and table auto-completion. For example, Excel's Flash Fill feature detects string patterns in the values of the cells in a spreadsheet and uses these patterns for auto-completion~\cite{DBLP:conf/popl/Gulwani11}. However, auto-completion techniques focus on identifying repetitions of keystrokes (sequences of characters). In this article, we look at routines that involve transferring data across fields in one or more applications as well as editing field values. The discovery of data transfer routines that are amenable to RPA automation has been addressed in~\cite{bosco2019}. This latter paper proposes a technique to discover sequences of actions such that the inputs of each action in the sequence (except the first one) can be derived from the data observed in previous actions. However, this technique can only discover perfectly sequential routines, and is hence not resilient to variability in the order of the actions, whereas in reality, different users may perform the actions in a routine in a different order. Another technique for routine identification~\cite{leopold2018identifying} attempts to identify candidate routines from textual documents -- an approach that is suitable for earlier stages of routine identification and could be used to determine which processes or tasks could be recorded and analyzed in order to identify routines.
In \cite{DBLP:conf/bpm/AgostinelliLMM20} the authors present an approach to automatically discover routines from UI logs and automate them in the form of scripts. This approach, however, assumes that all the actions within a routine are automatable. In practice, it is possible that some actions have to be performed manually, and hence cannot be automated. The approach presented in \cite{gao2019automated} aims at extracting rules from segmented UI logs that can be used to fill in forms automatically. However, this approach only discovers branching conditions that specify whether a certain activity has to be performed or not (e.g., checking a box in the form). It focuses only on copy-paste operations and does not identify more complex manipulations. In previous work~\cite{leno2020aaai}, we mapped the problem of discovering routines related to data transfer to the problem of discovering data transformations. In this article, we reuse this idea and extend it to tackle the problem of assessing if and to what extent a frequent (candidate) routine is automatable, and, if so, producing an executable specification.
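Returning to the frequent-routine miners discussed in the previous subsection, the toy function below mines exact contiguous patterns from a collection of segments with a support threshold. It stands in for the exact-pattern miners cited there; gap-tolerant algorithms such as BIDE are considerably more involved:

```python
from collections import Counter


def frequent_exact_patterns(segments, length, min_support):
    """Count exact contiguous patterns (n-grams) of a given length across
    segments and keep those whose relative support meets the threshold.
    A pattern is counted at most once per segment it occurs in."""
    counts = Counter()
    for seg in segments:
        seen = {tuple(seg[i:i + length]) for i in range(len(seg) - length + 1)}
        counts.update(seen)
    n = len(segments)
    return {p: c for p, c in counts.items() if c / n >= min_support}
```

For instance, with segments `['a','b','c']`, `['a','b','d']` and `['x','b','c']` and a 0.5 support threshold, only the bigrams `('a','b')` and `('b','c')` survive.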
Designed by Taylor Sharkey

Student Government expects to exceed budget for formal event

By Meghan Crum | 11/25/2018 | 11:01pm

The University of South Carolina's Student Government will hold a formal event called the Student Government Fall Awards on Nov. 29 at City Art Gallery at 8 p.m. and plans to spend $3,515 on the event — more than $1,000 above its original budget.

Although Student Government says the event is free and open to all students — as is required of other student organizations when they request funds from Student Government for social events — the organization did not advertise the event other than inviting student leaders from certain organizations, failed to send out the RSVP Google Form to the student body, asked attendees to pay a $5 donation for food in advance and booked a venue that can hold a maximum of 300 people.

Student Body Vice President Mills Hayes, who led the planning of the event, said the donation was not necessary to attend.

Originally called the Student Government Ball and then the Student Leader Ball, the Student Government Fall Awards has changed this year. The event was rebranded this year to include student leaders on campus and, according to Student Body President Taylor Wright, to be open and free to all students, unlike past years.

"When we got into office, we didn't think [the Student Government Ball] really was appropriate or really represented what we wanted to get out of the event," Wright said. "So, initially branded as the Student Leader Ball, but I think we kept reflecting and it still didn't quite completely say what the event actually was, so now we're looking more at kind of a Fall Awards."

Wright emphasized the importance of taking the event to an off-campus venue.

"Although we love Russell House, I think we just don't want to have everything in one building. I think you kind of gain something with having a change of scenery," Wright said.
"We're in here every day, so I think it's kind of fun to leave Russell House and get to meet people outside of a professional setting."

Student Government set aside $1,500 in this year's budget for the Fall Awards, a significant cut from last year, when $2,900 was budgeted for the event. However, Student Government has not adhered to this year's budget as it is currently planning to spend $3,515 for the Fall Awards — $2,015 more than cited on the original budget.

Of that $3,515, $1,280 are food costs. As of Nov. 20, donations cover $1,165, leaving Student Government to cover the leftover $115 in food costs. This is the most expensive Student Government formal event in recent years, according to the organization's fiscal records.

In a letter to the Student Senate requesting more money for this year's event, Hayes said the additional funding would make the event "successful and tasteful." She later said in an interview with The Daily Gamecock that the Fall Awards would allow student leaders of various student organizations to communicate with each other and collaborate with the student body.

"Student leaders are first and foremost students," Hayes said. "I think it's really important to recognize your leaders and really encourage them to continue their hard work."

While Student Government exceeded its budget for the Student Government Fall Awards, the Student Senate Finance Committee has allocated a total of $4,207.62 this year to other student organizations' formal events, such as the Methodist Student Network's Winter Ball, the Association of African American Students 50th Anniversary Celebration in the Russell House Ballroom and the Individuals Respecting Identities and Sexualities Fall Formal. All of these events were free and open to all students, as is required in order for money to be allocated.

Editor's Note: The editor-in-chief of The Daily Gamecock was invited to the Student Government Fall Awards and will not attend.
\section*{Results} We first present the structural feature of the locally densely connected groups that causes the inaccuracy of the $k$-shell method in determining the coreness of nodes in dynamic spreading. We then define the diffusion importance of edges and remove the redundant edges. Finally, we validate the improved accuracy of the renewed coreness from the perspective of spreading dynamics. \textbf{Structural feature of locally densely connected group.} We first focus on six real-world networks in which the $k$-shell method fails to identify the core shells because of the existence of core-like groups~\cite{liu2015} (for the identification of core-like groups, see Methods for details). The properties of the studied networks are listed in Table~\ref{tab:basiccharacteristic}.
\begin{figure}[!ht] \begin{center} \epsfig{file=figure1.eps,width=1\linewidth} \caption{\textbf{Illustration of structural feature of the core-like group and the true core.} (a) Core-like group. (b) True core. For the core-like group, core nodes are mutually connected and have very few out-leaving links, while for the true core, core nodes are connected and each of them has many out-leaving links.} \label{figure1} \end{center} \end{figure}
\begin{table*}[!ht] \caption{\textbf{Properties of the real-world networks studied in this work.} Structural properties include number of nodes ($N$), number of edges ($E$), average degree ($\langle k \rangle$), maximum degree ($k_{max}$), degree heterogeneity ($H_{k}=\langle k^{2} \rangle/\langle k \rangle^{2}$), degree assortativity ($r$), clustering coefficient ($C$), maximum $k_S$ index ($k_{Smax}$), epidemic threshold ($\lambda_c$), infection probability used in the SIR spreading in the main text ($\lambda$) (see Methods for details).
For the first six networks, there exist core-like groups, while for the last three networks, there is no core-like group in the network, which we will discuss in the last part.}\centering
\begin{tabular}{ccccccccccc}
\hline \hline
\textbf{Network} & \textbf{$N$} & \textbf{$E$} & \textbf{$\langle k \rangle$} & \textbf{$k_{max}$} & \textbf{$H_{k}$} & \textbf{$r$} & \textbf{$C$} & \textbf{$k_{Smax}$} & \textbf{$\lambda_c$} & \textbf{$\lambda$}\\
\hline
Email &1133 &5451 &9.6 &71 &1.942 &0.078 &0.220 &11 &0.06 &0.08\\
CA-Hep &8638 &24806 &5.7 &65 &2.261 &0.239 &0.482 &31 &0.08 &0.12\\
Hamster &2000 &16097 &16.1 &273 &2.719 &0.023 &0.540 &24 &0.02 &0.04\\
Blog &3982 &6803 &3.4 &189 &4.038 &-0.133 &0.284 &7 &0.08 &0.27\\
PGP &10680 &24340 &4.6 &206 &4.153 &0.240 &0.266 &31 &0.06 &0.19\\
Astro &14845 &119652 &16.1 &360 &2.820 &0.228 &0.670 &56 &0.02 &0.05\\
\hline
Router &5022 &6258 &2.5 &106 &5.503 &-0.138 &0.012 &7 &0.08 &0.27\\
Emailcontact &12625 &20362 &3.2 &576 &34.249 &-0.387 &0.109 &23 &0.01 &0.10\\
AS &22963 &48436 &4.2 &2390 &61.978 &-0.198 &0.230 &25 &0.004 &0.13\\
\hline \hline
\end{tabular}
\label{tab:basiccharacteristic}
\end{table*}
Based on in-depth analysis of the network local structure, we find that the core-like group has a clique-like local structure, as shown in Fig.~\ref{figure1} (a). Most of the nodes in the core-like group have a similar connection pattern. Let's take node $i$ for example. Neighbors of node $i$ are mutually connected, with only one neighbor having a few out-leaving links, i.e., links connecting outside the neighborhood of node $i$. In the $k$-shell decomposition process, node $i$ will be assigned a $k_S$ value equal to its degree. Considering the defining feature of a core in the core-periphery structure~\cite{borgatti1999,rombach2014}, namely that core nodes are densely connected among themselves and well connected to the periphery, we think that the cohesive group shown in Fig.
1 (a) is not a true core, because it is only densely connected within a group but not well connected to the remaining part of the network. When a disease originates from node $i$, most of the infections are limited to the neighborhood of node $i$. As for the true core in Fig. 1 (b), core nodes are well connected and at the same time connect well to the outside of the core. When a disease or rumor originates from node $i$, it is easier to spread to a broad area of the network through neighbors of node $i$ whose links connect to the external parts of $i$'s neighborhood. We take the innermost cores of the networks CA-Hep and Router as examples and visualize their connection patterns with the software Gephi, version 0.8.2~\cite{bastian2009}. We find that the innermost core of CA-Hep, which is the $31$-shell composed of 32 mutually connected nodes, has a structure very similar to the structure shown in Fig. 1 (a), with only five nodes having a small number of links leaving the group, as shown in Fig. S1 (a) in Supporting Information (SI). As for the innermost core of Router, which is the $7$-shell composed of 26 nodes, each node connects well to a large number of nodes that are not in the core-shell, as shown in Fig. S1 (b). Motivated by the structural difference between the core-like group and the true core, we think that the importance of the links of a node $i$ varies depending on the connection pattern of its neighbor nodes (e.g. node $j$): if node $j$ has many connections out-leaving node $i$'s neighborhood, the probability of infecting more nodes increases when the spreading originates from node $i$, and thus the edge linking node $i$ and node $j$ is important for node $i$. On the other hand, if node $j$ has very few or even no out-leaving links from node $i$'s neighborhood, the probability of infecting a large population by node $i$ decreases, and thus the edge linking node $i$ and node $j$ is less important.
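As a reference point for this discussion, the $k$-shell decomposition that assigns the $k_S$ values can be sketched as follows. This is a minimal peeling implementation on an adjacency dictionary, starting from $k=0$ so that isolated nodes would receive $k_S=0$:

```python
def k_shell(adj):
    """Iterative peeling: at stage k, repeatedly remove every node whose
    remaining degree is <= k and assign it k_S = k, then move to k + 1."""
    alive = {v: set(ns) for v, ns in adj.items()}
    ks = {}
    k = 0
    while alive:
        while True:
            peel = [v for v, ns in alive.items() if len(ns) <= k]
            if not peel:
                break
            for v in peel:
                ks[v] = k
                for u in alive.pop(v):
                    if u in alive:
                        alive[u].discard(v)  # degrees drop as nodes are removed
        k += 1
    return ks
```

On a triangle with one pendant node, the pendant receives $k_S=1$ and the triangle nodes $k_S=2$, in line with the degree-peeling intuition used above.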
To confirm the relationship between this structural feature and the spreading behavior on the network, we use the SIR spreading model~\cite{anderson1991} to simulate the spreading process on networks. We record the spreading efficiency of each node, which is the size of the final infected population $M$ when a spreading originates from the node (see Methods for details). Then we study the correlation between the total number of out-leaving links $n_{out}$ of a node, that is, the sum of out-leaving links over all neighbors of the node, and its spreading efficiency $M$. To find out the difference between the core-like group and the true core, we choose two groups of nodes for each network. The first one is the shell that is a core-like group (there may be several core-like groups in the network, and we choose the one with the largest $k_S$ value); the second one is the shell with the highest average spreading efficiency. From Fig.~\ref{figure2} we can see that, in general, nodes in core-like groups (blue squares), which have a relatively low spreading efficiency, have a lower number of out-leaving links than nodes in the highest spreading shell (red circles). It is worth noticing that although most nodes in core-like groups have a relatively low spreading efficiency, there may be some nodes that have a high spreading efficiency, corresponding to some blue nodes in Email and PGP, which also have a relatively high number of out-leaving links. On the other hand, in the highest spreading efficiency shell, there are nodes with relatively low spreading efficiency whose number of out-leaving links is correspondingly low, such as some red nodes in Email and Blog. These observations indicate a positive correlation between the spreading efficiency of a node and the number of out-leaving links through its neighbors.
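A single discrete-time SIR run of the kind used to measure $M$ could look like the sketch below. The exact update rule of the paper (synchronous updates, recovery after one time step) is our assumption, and in practice $M$ would be averaged over many independent runs per seed node, with $\lambda$ taken from Table~\ref{tab:basiccharacteristic}:

```python
import random


def spreading_efficiency(adj, seed, lam, rand=random.random):
    """One SIR realization: the seed starts infected; each infected node
    infects each susceptible neighbour with probability lam, then recovers.
    Returns M, the final size of the ever-infected population."""
    infected, recovered = {seed}, set()
    while infected:
        newly = set()
        for u in infected:
            for v in adj[u]:
                susceptible = (v not in infected and v not in recovered
                               and v not in newly)
                if susceptible and rand() < lam:
                    newly.add(v)
        recovered |= infected  # infected nodes recover after one step
        infected = newly
    return len(recovered)
```

The random source is injectable (`rand`) so that the dynamics can be made deterministic for testing.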
Considering the structural feature of core-like groups and the correlation between the number of out-leaving links and the spreading efficiency of a node, we realize that in locally densely connected structures there exist some links which lead to the formation of a clique-like local structure but contribute little to the spreading process. This causes the failure of the $k$-shell method in accurately determining node coreness and identifying true cores in many real-world networks, from the perspective of spreading efficiency. Next we go a step further to find a way to eliminate the negative effect of these links and improve the accuracy of the $k$-shell method in determining the network core structure. \begin{figure}[!ht] \begin{center} \epsfig{file=figure2.eps,width=1\linewidth} \caption{\textbf{Correlation of spreading efficiency and the number of out-leaving links.} For each network, we present the nodes in the core-like group (blue squares) and in the highest spreading efficiency shell (red circles). A positive correlation between the spreading efficiency and the number of out-leaving links is demonstrated.} \label{figure2} \end{center} \end{figure} \textbf{Defining the diffusion importance for links.} We define the diffusion importance of links in the following way. Consider an edge $e_{ij}$. When a disease spreads along it, there are two possible directions. In one direction, the disease originates from node $i$ and spreads along $e_{ij}$ to node $j$, and then spreads to the other parts of the network through node $j$. We record the number of links of node $j$ connecting outside the nearest neighborhood of node $i$ as $n_{i\rightarrow j}$. In the other direction, the disease originates from node $j$ and spreads along $e_{ji}$ (the same edge as $e_{ij}$, since the edge is undirected) to node $i$, and then spreads through node $i$ to the other parts of the network.
We record the number of links of node $i$ connecting outside the nearest neighborhood of node $j$ as $n_{j\rightarrow i}$. Then the diffusion importance of edge $e_{ij}$ is defined as \begin{equation} D_{ij}=(n_{i\rightarrow j}+n_{j\rightarrow i})/2. \end{equation} This value quantifies the average potential influence of an edge in both directions. Let's take edge $e_{ij}$ in Fig. 1 as an example to calculate the diffusion importance. In Fig. 1 (a), $n_{i\rightarrow j}=0$, which is the number of links of node $j$ that connect outside the neighborhood of node $i$. At the same time, $n_{j\rightarrow i}=0$, which reflects that node $i$ has no links connecting to nodes that are not in the neighborhood of node $j$. Thus $D_{ij}=0$. In Fig. 1 (b), $n_{i\rightarrow j}=3$, $n_{j\rightarrow i}=2$, and thus $D_{ij}=2.5$. In this way, we can calculate the diffusion importance for all edges in the network. When each edge is assigned a diffusion importance, the unweighted graph becomes a weighted graph. The weight on an edge contains information about the potential spreading coverage when a disease spreads along the edge. A general discussion of the weighted network is beyond the scope of this paper and will be explored in future work. Here, we concentrate on identifying links that are less important in the spreading process but lead to a densely connected local structure, which causes the $k$-shell method to fail to accurately determine the coreness of nodes in spreading dynamics. \textbf{Filtering out redundant links and applying the $k$-shell method to obtain a new coreness for nodes.} From the analysis of Fig. 1, we come to the idea that links with low diffusion importance are redundant links, which contribute much to a densely connected local structure and a high $k_S$ for nodes but have a limited diffusion influence. We set a redundant threshold $D_{thr}$ to determine redundant links.
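This definition is straightforward to compute from an adjacency list. The sketch below (our own code, not the authors') reads "the nearest neighborhood of node $i$" as $i$ together with its direct neighbors, and reproduces the worked example, where $n_{i\rightarrow j}=3$ and $n_{j\rightarrow i}=2$ give $D_{ij}=2.5$:

```python
def diffusion_importance(adj, i, j):
    """D_ij = (n_{i->j} + n_{j->i}) / 2 for the undirected edge (i, j)."""
    nbhd_i = adj[i] | {i}          # i's nearest neighborhood (our reading)
    nbhd_j = adj[j] | {j}
    n_i_to_j = sum(1 for u in adj[j] if u not in nbhd_i)
    n_j_to_i = sum(1 for u in adj[i] if u not in nbhd_j)
    return (n_i_to_j + n_j_to_i) / 2

def make_adj(edges):
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    return adj

# A toy graph chosen so the counts match the worked example: node j has 3
# links leaving i's neighborhood, and node i has 2 links leaving j's.
adj = make_adj([("i", "j"),
                ("j", "a"), ("j", "b"), ("j", "c"),
                ("i", "d"), ("i", "e")])
assert diffusion_importance(adj, "i", "j") == 2.5
```

For an edge inside a clique with no links leaving the group, both counts are zero and $D_{ij}=0$, as in the Fig. 1 (a) case.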
If $D_{ij}< D_{thr}$, edge $e_{ij}$ is considered a redundant link. If we use $G=\{V, E\}$ to represent a graph, where $V$ is the set of nodes and $E$ is the set of edges, then the residual network obtained by filtering out redundant links is represented as $G^{\prime}=\{V^{\prime}, E^{\prime}\}$, where $V^{\prime}=V$ and $E^{\prime}\subseteq E$. If all edges in the network have $D_{ij}\geq D_{thr}$, then $E^{\prime}=E$. We first apply the $k$-shell decomposition to the original networks and obtain the coreness of each node, recorded as $k_S^{o}$. Then we identify and filter out the redundant links. Given that filtering out too many edges may destroy the main structure of the network, $D_{thr}$ should not be too large, which would lead to a large proportion of links being identified as redundant. Meanwhile, $D_{thr}$ should not be too small, because redundant links that contribute much to the locally densely connected structure may have a diffusion importance greater than 0 and yet still not be important in a spreading process. We adopt a redundant threshold of $D_{thr}=2$. For a discussion of this threshold, please see SI for details. In this case, edges with $D_{ij}\geq 2$ remain in $G^{\prime}$. We apply the $k$-shell method to $G^{\prime}$ and obtain a renewed coreness for each node, recorded as $k_S^{r}$. We use the imprecision function, which was initially proposed by Kitsak \textit{et al.}~\cite{kitsak2010} and modified by Liu \textit{et al.}~\cite{liu2015}, to compare the accuracy of $k_S^{o}$ and $k_S^{r}$ in determining node coreness in the network. The imprecision function is defined as \begin{equation} \varepsilon(k_S)=1-\frac{M_{core}(k_S)}{M_{eff}(k_S)}, \end{equation} where $k_S$ is the variable ranging from $0$ (for isolated nodes in the residual network) to the maximum $k_S$ value in the network.
$M_{core}(k_S)$ is the average spreading efficiency of nodes with coreness $k_S'\geq k_S$ (nodes in the $k_S$-core), and $M_{eff}(k_S)$ is the average spreading efficiency of the $n$ nodes with the highest spreading efficiency, where $n$ equals the number of nodes in the $k_S$-core. This function quantifies how close the average spreading of nodes in the $k_S$-core is to the optimal spreading. A small $\varepsilon(k_S)$ value means nodes identified as being in core shells have a correspondingly high spreading efficiency. In Fig.~\ref{figure3} we compare the imprecision of $k_S^{o}$ and $k_S^{r}$. The number of shells may be different for the original graph $G$ and the residual graph $G^{\prime}$, so we normalize the shell index $k_S$ by the maximum shell index $k_{Smax}$ in $G$ and $G^{\prime}$ respectively. The imprecision based on $k_S^{r}$ is in general obviously lower than the imprecision based on $k_S^{o}$. For the networks of Email, CA-Hep, Hamster and Blog, the imprecision of $k_S^{o}$ is high for large values of $k_S$, close to or above $0.4$. This means that in these networks nodes identified as core by $k_S^{o}$ are in fact not very influential in a spreading process. In the networks of PGP and Astro, there are sudden jumps in the $k_S^{o}$ imprecision, which correspond to locally densely connected structures that do not exist in the innermost core but exist in some outer shells of the network~\cite{liu2015}. On the contrary, when $k_S^{r}$ is used to determine node coreness, a much lower imprecision is obtained. In all the studied real-world networks, the absolute value of the imprecision function based on $k_S^{r}$ is close to or smaller than 0.1. This means that $k_S^{r}$ is a good indicator of spreading efficiency. After removing the redundant links with low $D_{ij}$ values, the accuracy of the $k$-shell method in determining cores is greatly improved.
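The imprecision function is simple to implement. A sketch (our own, assuming `ks` maps node to coreness and `M` maps node to its measured spreading efficiency):

```python
def imprecision(ks, M, k):
    """epsilon(k_S) = 1 - M_core(k_S) / M_eff(k_S)."""
    core = [M[v] for v in ks if ks[v] >= k]      # nodes in the k_S-core
    n = len(core)
    top = sorted(M.values(), reverse=True)[:n]   # the n best spreaders overall
    return 1 - (sum(core) / n) / (sum(top) / n)

# If the k_S-core contains exactly the best spreaders, the imprecision is 0.
ks = {"a": 3, "b": 2, "c": 1}
M  = {"a": 0.30, "b": 0.20, "c": 0.10}
assert imprecision(ks, M, 3) == 0.0

# If the coreness mis-ranks the best spreader, the imprecision rises.
ks_bad = {"a": 1, "b": 3, "c": 2}
assert round(imprecision(ks_bad, M, 3), 3) == 0.333
```

The same function can be evaluated with either $k_S^{o}$ or $k_S^{r}$ as the `ks` input, which is exactly the comparison made in Fig.~\ref{figure3}.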
\begin{figure}[!ht] \begin{center} \epsfig{file=figure3.eps,width=1\linewidth} \caption{\textbf{The imprecision of $k_S^{o}$ and $k_S^{r}$ as a function of shell index.} $k_S^{o}$ is the coreness obtained from the original network, and $k_S^{r}$ is the coreness obtained from the residual network. Shell index $k_S$ ranges from 0 to $k_{Smax}$ and is normalized by $k_{Smax}$. The imprecision of $k_S^{r}$ is obviously smaller than that of $k_S^{o}$.} \label{figure3} \end{center} \end{figure} In many cases, people are more interested in top ranked nodes, which correspond to leaders in the society. We rank nodes by their coreness $k_S^{o}$ and $k_S^{r}$ respectively and compare the accuracy of the coreness in identifying the most influential spreaders. Results show that the coreness obtained from the residual network is much more accurate than the original coreness in identifying the most influential spreaders. See Fig. S3 in SI for more details. Then we focus on the spreading efficiency of shells. A good partition of the network is supposed to display a concordant trend between the shell index obtained from network topology and the spreading efficiency of that shell. One would expect that shells with large $k_S$ should have a higher spreading efficiency than shells with small $k_S$. We plot the spreading efficiency $M$ of each shell (expressed as the distance $d$ of a shell from the innermost core), where the spreading efficiency of a shell is the average spreading efficiency of nodes in that shell. As shown in Fig.~\ref{figure4}, when $k_S^{r}$ is used, the spreading efficiency of shells in general decreases monotonically with the distance from the innermost core in all studied networks. In the networks of Email, CA-Hep and Blog, the spreading efficiency of each shell and its coreness $k_S^{r}$ are completely concordant. A large $k_S^{r}$ indicates a higher spreading efficiency of the shell.
In the networks of Hamster, PGP and Astro, the spreading efficiency and the coreness $k_S^{r}$ are concordant in most shells. There are a limited number of shells where the trend is not so monotonic; however, the fluctuation in spreading efficiency is relatively small compared to that of $k_S^{o}$. As for $k_S^{o}$, the trend is not as monotonic as for $k_S^{r}$. In other words, the coreness obtained from the residual network predicts the spreading efficiency much more accurately than the original one. \begin{figure}[!ht] \begin{center} \epsfig{file=figure4.eps,width=1\linewidth} \caption{\textbf{Spreading efficiency of a shell and its distance from the innermost core.} $k_S^{o}$ is the coreness obtained from the original network, and $k_S^{r}$ is the coreness obtained from the residual network. $d$ is the distance from the innermost core. $d=0$ corresponds to the innermost core. } \label{figure4} \end{center} \end{figure} \textbf{Comparing with random deletion and other ways of targeted link removal.} Our way of removing redundant links obviously improves the accuracy of the $k$-shell method in determining the influence of nodes in a spreading process. Now we compare the effectiveness of targeting the redundant links with random deletion, as well as with targeting links whose importance is determined by the degree of the nodes on their two ends. To compare with random deletion, we randomly select a set of edges and delete them from the network. The number of edges to be deleted is the same as the number of identified redundant links. Then we apply the $k$-shell decomposition to the residual network and obtain a $k_S$ for each node. We perform the random deletion $50$ times and average the $k_S$ obtained at each realization as the coreness of node $i$, which we record as $k_S^{a}$ to represent random (arbitrary) deletion. A comparison of the imprecision as a function of shell index is shown in Fig. \ref{figure5}.
In most cases, the imprecision of $k_S^{a}$ is very close to that of the $k_S^{o}$ obtained from the original network, and is obviously higher than the imprecision of the $k_S^{r}$ obtained from the residual network. This implies that the core-like groups still exist in the residual network after random deletion of links. Although the imprecision of $k_S^{a}$ is slightly improved in some networks, we think this is because, when the links are selected randomly, there is a chance that a redundant link is selected. A widely used way of determining edge importance is to consider the degree of the nodes on its two ends. The weight (also the importance) of an edge $e_{ij}$ is proportional to the product of $k_{i}$ and $k_{j}$ as $w_{ij}={(k_{i}k_{j})}^{\theta}$, where $k_{i}$ and $k_{j}$ are the degrees of node $i$ and node $j$ respectively~\cite{barrat2004, wang2008, tang2011} and $\theta$ is a tunable parameter. This measure is also strongly correlated with the betweenness centrality of an edge~\cite{holme2002}. We use a parameter $\theta=1$ to determine the edge importance, and remove the edges of small weight from the network to see the effect on the $k$-shell method. The number of edges removed is the same as the number of redundant links identified. We find that the imprecision of the coreness $k_S^{w}$ obtained from the residual network in this way is almost the same as that of the original $k_S^{o}$, as shown in Fig. S4 in SI. The above analysis suggests two points. First, our way of identifying and removing the redundant links is effective in improving the accuracy of the $k$-shell method in profiling the core structure of the network from the perspective of spreading dynamics. Second, the $k$-shell index is robust against random failure, which is consistent with the result in Ref. \cite{kitsak2010}.
In that work, the authors pointed out that the $k$-shell method is robust under random deletion of even up to 50\% of the edges, meaning that the relative ranking of the $k_S$ values for the same nodes in the original network and in the network after random deletion is almost the same. \begin{figure}[!ht] \begin{center} \epsfig{file=figure5.eps,width=1\linewidth} \caption{\textbf{The imprecision of $k_S^{o}$, $k_S^{r}$ and $k_S^{a}$ as a function of shell index.} $k_S^{o}$ is the coreness obtained from the original network, $k_S^{r}$ is the coreness obtained from the residual network and $k_S^{a}$ is the coreness obtained from the network after random deletion of edges. Shell index $k_S$ ranges from 0 to $k_{Smax}$ and is normalized by $k_{Smax}$. The imprecision of $k_S^{r}$ is obviously smaller than that of $k_S^{o}$ and $k_S^{a}$.} \label{figure5} \end{center} \end{figure} \section*{Discussion} Profiling the hierarchical structure of a network is very important in understanding the behaviors on it. The $k$-shell decomposition is a basic method, used in many fields of science, to describe network structure and identify core areas. We study the $k$-core structure of real-world networks and the spreading processes on them. We find that the accuracy of the $k$-shell method in identifying influential spreaders is impacted by locally densely connected groups in the network, which correspond to real-world scenarios such as extensive communication and cooperation within a small group or community. Based on an in-depth analysis of network local structure and motivated by research advances in core-periphery structure, we realize that the links of a node contribute differently to the affected population in a spreading process. For the first time, we define a diffusion importance for each link in the network based on its potential influence in a spreading process.
By filtering out redundant links and then applying the $k$-shell decomposition to the residual graph, we obtain a renewed coreness for nodes. Experimental results show that this renewed coreness is much more accurate in determining the spreading influence of nodes from the core to the periphery. Specifically, the imprecision of the coreness in identifying influential spreaders is greatly reduced. Nodes with a high renewed coreness in general have a higher spreading efficiency than nodes with a low renewed coreness. There are many algorithms that use the $k_S$ index as a global importance measure for ranking nodes. Among them, the iterative resource allocation (IRA) algorithm~\cite{ren2014} greatly enhances the accuracy of centrality measures in ranking node influence by iteratively reallocating resources to each node based on the centrality of its neighbors (see Methods for details). After iteration, the resource of a node becomes stable and is used to rank the node's spreading influence. As above, we filter out the redundant links of $G$, apply the $k$-shell decomposition to the residual graph $G^{\prime}$ to obtain $k_S^{r}$, and then implement the IRA algorithm on $G^{\prime}$. We find that the ranking accuracy is greatly improved, as shown in Fig. S5. The effectiveness of our method in another ranking algorithm~\cite{bae2014}, which defines the neighborhood coreness of node $i$ as $C_{nc}=\sum_{j\in \Gamma(i)}k_S(j)$, where $\Gamma(i)$ is the set of neighbors of node $i$ and $k_S(j)$ is the coreness of node $j$, is shown in SI Fig. S6. We again find a great improvement in the ranking accuracy. As our way of filtering out redundant links works well for networks with locally densely connected structures, one may ask about the performance of $k_S^{r}$ on networks without such local structures.
For the networks of Router, Emailcontact and AS listed in Table 1, in which there is no core-like group and the $k$-shell method works well on the original network, we find that after filtering out redundant links the performance of $k_S^{o}$ and $k_S^{r}$ is nearly exactly the same, implying that there is no negative effect on the $k$-shell method in networks where it already works well. We present the coreness imprecision as a function of shell index and of the percentage of nodes $p$ in SI Fig. S7 and S8 respectively, as well as the spreading efficiency of each shell in Fig. S9. This is again due to the robustness of the $k$-shell method. This feature is meaningful in that our way of filtering out redundant links greatly improves the accuracy of the $k$-shell method in networks where it does not work well, while at the same time it does not impact its performance in networks where it already works well. We also test the effects of filtering out redundant links on other centrality measures, such as degree centrality, betweenness centrality and eigenvector centrality, in ranking nodes' spreading influence. Results show that the ranking performance of the centrality obtained from the residual network remains very close to that of the centrality obtained from the original network. This means the redundant links have little influence on these centrality measures, which is further proof of the redundancy of these links. The identification of redundant links implies that redundancy has an impact on the analysis of network structure. While we only concentrate on its effect on the $k$-shell method and from the perspective of spreading dynamics, the influence of redundant links on other network analyses, such as community partition and network controllability, remains unexplored. This poses two challenges. First, we need to decide which structural features of a network are affected much by redundant links.
Second, how to define the importance of links in the network may depend on the behaviors on it, such as rumor spreading, synchronization and immunization. In addition, while our way of determining the redundant threshold $D_{thr}$ is obtained from simulation experiments, a parameter-free way of identifying the redundant links is worth further exploration. \section*{Methods} \textbf{The $k$-shell decomposition.} The algorithm starts by removing all nodes with degree $k=1$. After removing all nodes with $k=1$, there may appear some nodes with only one link left. We iteratively remove these nodes until there is no node left with $k=1$. The removed nodes are assigned an index $k_S=1$ and are considered to be in the 1-shell. In a similar way, nodes with degree $k\leqslant2$ are iteratively removed and assigned an index $k_S=2$. This pruning process continues removing higher shells until all nodes are removed. Isolated nodes are assigned an index $k_S=0$. As a result, each node is assigned a $k_S$ index, and the network can be viewed as a hierarchical structure from the innermost shell to the periphery shell. \textbf{Identifying core-like groups in real-world networks.} The link entropy of a shell with index $k_S$ is defined~\cite{liu2015} as \begin{equation}\label{entropy} H_{k_S}=-\frac{1}{\ln L}\sum_{k'_S=1}^{k_{Smax}}r_{k_S,k'_S}\ln r_{k_S,k'_S}, \end{equation} where $r_{k_S,k'_S}$ is the average link strength of nodes in the $k_S$-shell to the $k'_S$-shell and $L$ is the number of shells in the network. The link strength of node $i$ to the $k'_S$-shell is the ratio of the number of links originating from node $i$ to the shell with index $k'_S$ to the total number of links of node $i$. Shells which have a relatively low entropy compared with their adjacent shells are usually locally connected core-like groups. \textbf{SIR model}.
We use the susceptible-infected-recovered (SIR) spreading model to simulate the spreading process on networks and obtain the spreading efficiency for each node. In the model, a node has three possible states: $S$ (susceptible), $I$ (infected) and $R$ (recovered). A susceptible individual becomes infected with probability $\lambda$ when it is contacted by an infected neighbor. Infected nodes contact their neighbors and then change to the recovered state with probability $\mu$. For generality we set $\mu=1$. Recovered nodes will neither be infected any more nor infect others, and they remain in the $R$ state until the spreading stops. Initially, a single node is infected and all others are susceptible. Then the disease spreads from the seed node to the others through links. The spreading process stops when there is no infected node in the network. The proportion of recovered nodes $M$ when the spreading stops is considered the spreading capability, or spreading efficiency, of the origin node. We realize the spreading process $100$ times and take the average as the spreading efficiency of a node. As we have found that the infection probability does not change the relative spreading efficiency of nodes, we choose an infection probability $\lambda>\lambda_{c}$, where $\lambda_{c}=\langle k\rangle/(\langle k^{2}\rangle-\langle k\rangle)$ is the epidemic threshold determined from the heterogeneous mean-field method~\cite{castellano2010}. Under this infection probability $\lambda$, the final infected population $M$ is above $0$ and reaches a finite but small fraction of the network size for most origin nodes, in the range of $1\%$-$20\%$~\cite{kitsak2010}. \textbf{Ranking algorithm of IRA}. This algorithm considers that the spreading influence of a node is determined by both its own centrality and its neighbors' centrality~\cite{ren2014}.
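The SIR procedure described above can be sketched as a minimal discrete-time simulation (our own sketch; `adj` maps each node to its neighbor set, and $\mu=1$ so every infected node recovers after one step):

```python
import random

def sir_spread(adj, seed, lam, runs=100):
    """Average final fraction of recovered nodes when spreading starts at
    `seed` (discrete-time SIR with mu = 1)."""
    n = len(adj)
    total = 0.0
    for _ in range(runs):
        infected, recovered = {seed}, set()
        while infected:
            new_inf = set()
            for v in infected:
                for u in adj[v]:
                    if u not in infected and u not in recovered \
                            and random.random() < lam:
                        new_inf.add(u)
            recovered |= infected   # mu = 1: all infected recover this step
            infected = new_inf
        total += len(recovered) / n
    return total / runs

# Sanity check on a star graph: with lam = 1 the hub infects every leaf.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert sir_spread(star, 0, lam=1.0, runs=5) == 1.0
```

In practice one would set `lam` just above the estimated epidemic threshold of the network, as done in the paper.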
In an iterative resource allocation process, the resource of nodes is distributed to their neighbors according to their centrality. The resource node $i$ receives is \begin{equation} I_i(t+1)=\sum_{j\in\Gamma_{i}}R_{j\rightarrow i}(t+1)= \sum_{j\in\Gamma_{i}}(\frac{\theta_i^\alpha}{\sum_{u\in\Gamma_{j}}\theta_u^\alpha}\delta_{ij})I_j(t), \end{equation} where $R_{j\rightarrow i}(t+1)$ is the amount of resource distributed from node $j$ to node $i$ at time $t+1$, and $\Gamma_{i}$ is the set of node $i$'s neighbors. $\theta_i$ is the centrality of node $i$, and $\alpha$ is a tunable parameter to adjust the influence of the centrality. $u$ belongs to the neighborhood $\Gamma_{j}$ of node $j$. $\delta_{ij}=1$ if there is a link between node $i$ and node $j$, otherwise $\delta_{ij}=0$. $I_j(t)$ is the resource held by node $j$ at time step $t$. Initially, each node has a unit resource. The resource of each node becomes stable after several iterations, and the final resources of the nodes are used to rank their spreading influence. The coreness centrality is used here, and $\alpha$ is set to 1.
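A compact sketch of this iteration (our own toy implementation; for the demo we use degree as the centrality $\theta$, whereas the paper uses coreness, and we simply run a fixed number of iterations rather than testing for convergence):

```python
def ira_rank(adj, theta, alpha=1.0, iters=200):
    """Iterative resource allocation (IRA): at each step every node j passes
    its resource to its neighbors i in proportion to theta_i**alpha, which is
    equivalent to the update equation above with delta_ij enforced by adj."""
    I = {v: 1.0 for v in adj}                     # unit resource initially
    for _ in range(iters):
        new = {v: 0.0 for v in adj}
        for j in adj:
            denom = sum(theta[u] ** alpha for u in adj[j])
            for i in adj[j]:
                new[i] += theta[i] ** alpha / denom * I[j]
        I = new
    return I

# Toy non-bipartite graph: a triangle (0, 1, 2) with a pendant node 3 on 0.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
deg = {v: len(adj[v]) for v in adj}               # degree as demo centrality
I = ira_rank(adj, deg)
assert I[0] > I[1] > I[3]   # hub > triangle node > pendant leaf
```

The final values of `I` rank the nodes by their estimated spreading influence, which is how the algorithm is used in the experiments above.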
\textbf{Data sets.} The real networks studied in the paper are: (1) Email (e-mail network of University at Rovira i Virgili, URV)~\cite{guimera2003}; (2) CA-Hep (giant connected component of the collaboration network of arXiv in high-energy physics theory)~\cite{leskovec2012}; (3) Hamster (friendships and family links between users of the website hamsterster.com)~\cite{hamster2014}; (4) Blog (the communication relationships between owners of blogs on the MSN (Windows Live) Spaces website)~\cite{xie2006}; (5) PGP (an encrypted communication network)~\cite{boguna2004}; (6) Astro physics (collaboration network of astrophysics scientists)~\cite{newman2001}; (7) Router (the router-level topology of the Internet, collected by the Rocketfuel Project)~\cite{spring2004}; (8) Email-contact (email contacts at the Computer Science Department of University College London)~\cite{kitsak2010}; (9) AS (Internet at the autonomous system level)~\cite{newmandataas}.
Omphaliodes obscura is a species of moth described by Walker in 1856. Omphaliodes obscura belongs to the genus Omphaliodes and the family Anthelidae. No subspecies are listed in the Catalogue of Life.
G to the izzame T to the izzime. When we last left our Game Time, we were playing the PSB Original Game, Headliners. In particular, we asked you to give us a headline about the historic moon landing. Let's review, shall we? Big A said "Man Walks on Moon: Young Michael Jackson Gets an Idea." We give props for trying to tie in previous games, but we do not give him the win. Luca's headline "Um...Now What?" brought up an interesting point: Why did we go to the moon in the first place? Just because it's cool? Well, I guess it was worth the billions of dollars. But the winner: Tony for his response of "TAKE THAT COMMYS!" Yes, what better way to celebrate an American achievement than sticking it to the Soviets. Indeed, "TAKE THAT COMMYS!" should be a headline for almost anything. Send a message to the godless Communists. Good work, Tony! A few months ago, about ten months ago in fact, we played a game where we gave you a punchline, and you had to think of the set-up line. We thought this week we would play that game again. Here's the punchline, for which you have to think of the question that leads up to it: Because Dracula can't flip pancakes! So, all you have to do is give us a question to which that is the answer. Understand? Here's how ours goes: Why is a spatula better than Dracula? Now, it's your turn. Good luck! Labels: Game Time Luca July 31, 2009 at 7:51 AM Why does Frankenstein always have to make breakfast? Because Dracula can't flip pancakes! Lorenzo July 31, 2009 at 1:50 PM Why did Aunt Jemima marry Uncle Ben? Tony August 2, 2009 at 1:12 PM Pancakes: "Hey, Sausage!" Sausage: "Yes, Pancakes?" Pancakes: "I've got an important question for you, Sausage." Sausage: "Oh, what's that, Pancakes?" Pancakes: "Why is there no Transylvanian Gymnastics Team, Sausage?" Sausage: "I know why!" Pancakes: "You do? Why?" Sausage: "Because Dracula can't flip, Pancakes!" 
Mama Meg August 3, 2009 at 4:36 PM When Dracula applied for a job at McDonald's, why did he have to work the night shift? Because Dracula can't flip hotcakes...and 'cause he's a vampire.
One of the world's most successful fashion designers, Diane von Fürstenberg impressed the fashion world when she introduced her now-iconic "wrap dress" for the working woman in 1972. Elegance, ease, and accessibility have always been the core of her design philosophy, which has allowed her to turn DVF into a global luxury lifestyle brand. In 2005, she became the recipient of the CFDA's Lifetime Achievement Award. During this Illustrator and Photoshop tutorial we'll learn how to prepare a reference image in Photoshop before crossing over to Illustrator to create a striking vector portrait. Then, we'll cross back over into Photoshop and create the final compositions. We'll add drama to the piece by adding scanned-in mixed media elements such as watercolors, hand-drawn shapes, and vector shapes. By the end of this Illustrator and Photoshop training, you'll be able to exhibit many cool tips and tricks in composition and illustration that are used in commercial projects to achieve outstanding results quickly and effectively. Software required: Adobe Illustrator CS6, Adobe Photoshop CS6. Fashion design is the art of applying design and natural beauty to clothing and accessories. It is influenced by cultural and social attitudes and has varied over time and place. Some fashion designers work alone, while others work as part of a team. They try to anticipate clients' desires, given the time required to bring a garment to market. Additionally, if you have two or more objects together in a small scene or even floating about in space, you'll likely want to show that they're interacting. This means showing they're on the same plane and subjecting them to the same style of perspective, as well as overlapping objects, using similar lighting, and using the same design style to illustrate both (unless you're making a purposeful statement or telling a story by not doing so). 
In the Seventies, Halston befriended (and dressed) members of the international jet set, including Bianca Jagger, Liza Minnelli, and Liz Taylor. Dressed in his trademark black turtleneck, he could often be found partying at Studio 54 and enjoying his success with a host of celebrity friends. Licensing deals made him very wealthy, but tragedy lay in the distance…drug addiction and an AIDS diagnosis in 1988 led to his downfall. Unable to cope with the demands of his career, he was fired from his own company…Halston died of AIDS-related complications in 1990. Very interesting list, but I am surprised that Paul Poiret is not on it. He revolutionized and created the modern fashion industry. Although in the end he could not survive in the industry he created, his impact is huge. He was the first in many areas, including freeing women from corsets, using live models, creating a signature perfume, making an entire lifestyle brand, and modern marketing. Illustrate your original design. Think about what look you're trying to create, and represent it down to the last detail. If you're designing a dress, for example, add patterns, ruffles, text, bows, and so on to create a beautiful piece. Focus on the elements of your design that are unique, and include appropriate accessories so that the style you're going for is clear.[1] If you need some fresh ideas or don't know where to start, look up fashion trends on the internet or in magazines for inspiration. In this stage the designer creates a brand name or logo. The philosophy and identity of a range can be continued into promotional activities such as branding and styling. This is very important for promoting and marketing fashion goods. Normally the branding would be a graphic designer's job. An interesting use of language or words can produce visual effects, for example iMac (Internet-ready Macintosh computer), O2 (mobile phone provider), FCUK (French Connection United Kingdom). Most established illustrators have agents. 
Rodgers is represented by Digital Brand Architects, and Morrison is represented in Australia and Asia by Perrott of The Illustration Room. Agents are more responsible for prioritizing work and negotiating contracts than for finding new jobs. Both Morrison and Rodgers say nearly all of their work comes through word of mouth and social media. "[Having an agent] helps a lot in terms of contracts and there's legal terms that I don't understand, but I don't think you have to have it," says Morrison. "I think it is better to have it once you've established yourself and you've grown a little bit on your own." We toyed with it on Vogue.co.uk during my decade as editor of the site from 2005 to 2015, with a shoppable version of the Fashion Illustrated Gallery (founded by William Ling; stocking the work of all the prominent modern illustrators including Downton and Ling's wife Tanya), running alongside an illustrated blog by Downton himself written from the Fumoir - but it didn't get huge traction. In contrast, today illustration generates great engagement, even recently making it into the realms of the still-controversial space of branded content with a campaign of illustrated fashion fairytales that ran across Vogue, GQ and Tatler and surpassed all commercial targets for a month-long campaign within the first 24 hours. Hairstyles, color, and textures can do a lot for an overall design. Different types of hairstyles may be worn by different people for a variety of reasons. Consider the way in which culture and ethnic heritage may affect the types of hairstyles a figure could wear. Not only will you be telling a story about who the person is or where they may be from, but you'll also be setting limits for the hair's movement and style itself. Women who enjoy the artsy style tend to stay away from the traditional 'trends' of the fashion world and love to make a statement with their clothing. 
Oftentimes they will be the creator of their own fashions, designing and creating their own blouses, hats, and jackets. Each artsy style will be different per woman, as everyone has their own idea of what 'art' truly is. That's what makes this particular fashion style so unconventional and interesting.
Linear Algebra December 16, 2009

# MIT Linear Algebra, Lecture 3: Matrix Multiplication and Inverse Matrices

<- previous article next article ->

This is the third post in an article series about MIT's course "Linear Algebra". In this post I will review lecture three on five ways to multiply matrices, inverse matrices, and an algorithm for finding inverse matrices called Gauss-Jordan elimination.

The first lecture covered the geometry of linear equations and the second lecture covered matrix elimination.

Here is lecture three.

## Lecture 3: Matrix Multiplication and Inverse Matrices

Lecture three starts with five ways to multiply matrices.

The first way is the classical way. Suppose we are given a matrix A of size m×n with elements a_ij and a matrix B of size n×p with elements b_jk, and we want to find the product A·B. Multiplying matrices A and B will produce a matrix C of size m×p with elements c_ik = sum over j of a_ij·b_jk.

Here is how this sum works. To find the first element c_11 of matrix C, we sum over the 1st row of A and the 1st column of B. The sum expands to c_11 = a_11·b_11 + a_12·b_21 + a_13·b_31 + ... + a_1n·b_n1. Here is a visualization of the summation:

We continue this way until we find all the elements of matrix C. Here is another visualization of finding c_23:

The second way is to take each column of B, multiply it by the whole matrix A, and put the resulting column in matrix C. The columns of C are combinations of the columns of A. (Remember from the previous lecture that a matrix times a column is a column.)

For example, to get column 1 of matrix C, we multiply A·(column 1 of matrix B):

The third way is to take each row of A, multiply it by the whole matrix B, and put the resulting row in matrix C. The rows of C are combinations of the rows of B.
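The three ways described so far are easy to check numerically. Here is a quick sketch in plain Python (my own code, not from the lecture) that computes the same product all three ways:

```python
# Two small matrices: A is 2x3, B is 3x2, so C = A*B is 2x2.
A = [[1, 2, 3],
     [4, 5, 6]]
B = [[7, 8],
     [9, 10],
     [11, 12]]

m, n, p = len(A), len(B), len(B[0])

# Way 1: the classical dot-product rule, c_ik = sum over j of a_ij * b_jk.
C1 = [[sum(A[i][j] * B[j][k] for j in range(n)) for k in range(p)] for i in range(m)]

# Way 2: each column of C is A times the corresponding column of B
# (so each column of C is a combination of the columns of A).
def mat_vec(M, v):
    return [sum(M_ij * v_j for M_ij, v_j in zip(row, v)) for row in M]

cols = [mat_vec(A, [B[j][k] for j in range(n)]) for k in range(p)]
C2 = [[cols[k][i] for k in range(p)] for i in range(m)]

# Way 3: each row of C is the corresponding row of A times B
# (so each row of C is a combination of the rows of B).
def vec_mat(v, M):
    return [sum(v[j] * M[j][k] for j in range(len(v))) for k in range(len(M[0]))]

C3 = [vec_mat(A[i], B) for i in range(m)]

assert C1 == C2 == C3  # the three pictures organize the same sums differently
```

All three give the same C, which is the whole point: the dot-product picture, the column picture and the row picture are just different ways of organizing the same sums.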
(Again, remember from the previous lecture that a row times a matrix is a row.)

For example, to get row 1 of matrix C, we multiply row 1 of matrix A with the whole matrix B:

The fourth way is to look at the product A·B as a sum of (columns of A) times (rows of B).

Here is an example:

The fifth way is to chop the matrices into blocks and multiply the blocks by any of the previous methods.

Here is an example. Matrix A gets subdivided into four submatrices A1, A2, A3, A4, matrix B gets divided into four submatrices B1, B2, B3, B4, and the blocks get treated like simple matrix elements.

Here is the visualization:

Element C1, for example, is obtained by computing A1·B1 + A2·B3.

Next the lecture proceeds to finding inverse matrices. An inverse of a matrix A is another matrix such that A^-1·A = I, where I is the identity matrix. In fact, if A^-1 is the inverse matrix of a square matrix A, then it's both the left-inverse and the right-inverse, i.e., A^-1·A = A·A^-1 = I.

If a matrix A has an inverse, then it is said to be invertible or non-singular.

Matrix A is singular if we can find a non-zero vector x such that A·x = 0. The proof is easy. Suppose A is not singular, i.e., there exists a matrix A^-1. Then A^-1·A·x = A^-1·0, which leads to the false statement that x = 0 (but x was non-zero). Therefore A must be singular.

Another way of saying that matrix A is singular is to say that the columns of A are linearly dependent (one or more columns can be expressed as a linear combination of the others).

Finally, the lecture shows a deterministic method for finding the inverse matrix. This method is called Gauss-Jordan elimination.
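Here is a small plain-Python sketch of the procedure (my own code, not from the lecture), inverting a matrix by row-reducing the augmented matrix (A|I). The row-swap guard is my addition so the sketch doesn't divide by zero; row exchanges themselves come up in lecture two:

```python
def gauss_jordan_inverse(A):
    """Row-reduce the augmented matrix (A|I) to (I|A^-1) and return A^-1."""
    n = len(A)
    # Build the augmented matrix (A|I) with float entries.
    aug = [[float(A[i][j]) for j in range(n)] +
           [1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for col in range(n):
        # Swap in the row with the largest entry in this column (avoids zero pivots).
        pivot = max(range(col, n), key=lambda r: abs(aug[r][col]))
        if abs(aug[pivot][col]) < 1e-12:
            raise ValueError("matrix is singular, no inverse exists")
        aug[col], aug[pivot] = aug[pivot], aug[col]
        # Scale the pivot row so the pivot becomes 1.
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        # Eliminate this column from every OTHER row, above and below:
        # clearing above the pivot is what makes it Gauss-Jordan, not plain Gauss.
        for row in range(n):
            if row != col:
                factor = aug[row][col]
                aug[row] = [x - factor * y for x, y in zip(aug[row], aug[col])]
    # The right half of the augmented matrix is now A^-1.
    return [row[n:] for row in aug]

# [[2, 1], [7, 4]] has determinant 1; its inverse is [[4, -1], [-7, 2]].
inv = gauss_jordan_inverse([[2, 1], [7, 4]])
assert all(abs(inv[i][j] - [[4, -1], [-7, 2]][i][j]) < 1e-9
           for i in range(2) for j in range(2))
```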
In short, Gauss-Jordan elimination transforms the augmented matrix (A|I) into (I|A^-1) by using only row eliminations.

Please watch the lecture to find out how it works in all the details:

Topics covered in lecture three:

- [00:51] The first way to multiply matrices.
- [04:50] When are we allowed to multiply matrices?
- [06:45] The second way to multiply matrices.
- [10:10] The third way to multiply matrices.
- [12:30] What is the result of multiplying a column of A and a row of B?
- [15:30] The fourth way to multiply matrices.
- [18:35] The fifth way to multiply matrices by blocks.
- [21:30] Inverses for square matrices.
- [24:55] Singular matrices (no inverse matrix exists).
- [30:00] Why can't singular matrices have an inverse?
- [36:20] Gauss-Jordan elimination.
- [41:20] The Gauss-Jordan idea: (A|I) -> (I|A^-1).

Here are my notes of lecture three:

My notes of linear algebra lecture 3 on matrix multiplication and inverses.

Have fun with this lecture! The next post is going to be about the A=LU matrix decomposition (also known as factorization).

PS. This course is taught from the Introduction to Linear Algebra textbook. Get it here:

Computer Science December 14, 2009

# Recursive Regular Expressions

The regular expressions we use in our daily lives are actually not that "regular." Most languages support some kind of extended regular expressions that are computationally more powerful than the "regular" regular expressions as defined by formal language theory.

For instance, the often-used capture buffers add auxiliary storage to regular expressions, which allows them to match an arbitrary pattern repeatedly. Or look-ahead assertions, which allow the regular expression engine to peek ahead before making a decision. These extensions make regular expressions powerful enough to describe some context-free grammars.

The Perl programming language has an especially rich regex engine. One of the engine's features is lazy regular subexpressions. A lazy regular subexpression is written as (??{ code }), where "code" is arbitrary Perl code that gets executed at the moment this subexpression may match.

This allows us to construct something really interesting: we can define a regular expression that has itself in the "code" part. The result is a recursive regular expression!

One of the classical problems that a regular expression can't match is the language 0^n1^n, i.e., a string with a number of zeroes followed by an equal number of ones. Surprisingly, using lazy regular subexpressions this problem becomes tractable!

Here is a Perl regular expression that matches 0^n1^n:

    $regex = qr/0(??{$regex})?1/;

This regular expression matches a 0, followed by itself zero or one time, followed by a 1. If the "itself" part doesn't match, then the string this regular expression matches is 01. If the "itself" part matches, the string it matches is 00($regex)?11, which is 0011 if $regex doesn't match, or 000($regex)?111 if it matches, ..., etc.

Here is a Perl program that matches 0^50000 followed by 1^50000:

    #!/usr/bin/perl

    $str = "0"x50000 . "1"x50000;
    $regex = qr/0(??{$regex})*1/;

    if ($str =~ /^$regex$/) {
        print "yes, it matches"
    } else {
        print "no, it doesn't match"
    }

Now let's look at the Yo Dawg regular expression in the picture above. Can you guess what it does? It matches a fully parenthesized expression such as (foo(bar())baz) or balanced parentheses ((()()())()).

    $regex = qr/
        \(                # (1) match an open paren (
        (                 #     followed by
            [^()]+        # (3) one or more non-paren characters
        |                 #     OR
            (??{$regex})  # (5) the regex itself
        )*                # (6) repeated zero or more times
        \)                # (7) followed by a close paren )
    /x;

Here is how to think about this regular expression. For an expression to be fully parenthesized, it has to start with an open paren, so we match it (point (1) in the regex). It also has to end with a close paren, so we match a close paren at the end (point (7)). Now we have to think about what can be in between the parens. Well, we can either have some text that is neither an open paren nor a close paren (point (3)), OR we can have another fully parenthesized expression (point (5))! And all this may be repeated zero times (point (6)), to match the smallest fully parenthesized expression (), or more times, to match a more complex expression.

Without the /x flag (which allows multiline regexes), it can be written more compactly:

    $regex = qr/\(([^()]+|(??{$regex}))*\)/;

But please don't use these regular expressions in production, as they are too cryptic. Use the Text::Balanced or Regexp::Common Perl modules instead.

And finally, in Perl 5.10 you can use recursive capture buffers instead of lazy code subexpressions to achieve the same result. Here is a regular expression that matches 0^n1^n and uses the recursive capture buffer syntax (?N):

    my $rx = qr/(0(?1)*1)/;

The (?1)* says "match the first group zero or more times," where the first group is the whole regular expression.

You can try to rewrite the regular expression that matches balanced parens as an exercise.

Have fun!

The New Catonmat December 10, 2009

# 50 ideas for the new catonmat.net website

I have been working on the new catonmat.net website for quite some time now, and I have gathered a very long list of ideas of what a modern website should have.
They include ideas from psychology, search engine optimization, social media, programming, and the best web design practices.

Since I started to use GitHub last week, all the new catonmat code will be there. I already created a catonmat repository and I am going to be pushing code there daily.

I love to share my ideas, so here are the first 50 of them. I am going to share more later in the upcoming article series "Designing the new catonmat.net together with me." You should subscribe to my blog here if you are interested in this topic and haven't subscribed yet.

The ideas are in no particular order. If some of them seem fuzzy or unclear, please ask me to clarify in the comments.

## 1. 301 Redirect Table.

The idea is to maintain a table that will be checked against the request URLs. If the request URL is in the table, it gets redirected to a new destination URL with the 301 HTTP header, which forwards the link juice from one URL to another. This is necessary because some forums break URLs into multiple lines and the resulting clickable URL is a 404. If I didn't do a 301 to the new location, this link would be lost. Anyone who clicked it would end up on the 404 error page. Another example is if someone links to a mistyped URL. Anyone who clicks it ends up unsatisfied, at a 404 page. With a 301 table I can quickly fix these problems. Here is a concrete example: someone links to www.catonmat.net/artikle when they wanted to link to www.catonmat.net/article. I'd simply insert an entry to 301 redirect /artikle to /article and everyone's happy.

## 2. 404 Error Log.

I absolutely have to know what pages trigger 404 errors so that I can fix them ASAP with the 301 redirect table. There may be many false alarms, therefore I have to add filters to the 404 error log to ignore some spammy patterns.

## 3. A Great 404 Page.

If someone ends up on a non-existing page, log it as described in #2 and then explain to the visitor why the page he visited was not found. Suggest him or her or it to view the 5 latest post titles, or the 5 most popular posts, or the most popular downloads, or something else (still have to think about it when I implement it).

## 4. Trackbacks On A Separate Page.

Trackbacks are usually displayed among comments. This is very messy. There is no place for them among comments. The idea is to have them on a separate page like www.catonmat.net/post-title/trackbacks. Also add bulk moderation, as spammers love to exploit trackbacks. (Perhaps drop trackbacks altogether, as they are of very little value.)

## 5. A Page With Last Comments.

Currently I am displaying only the 10 last comments on the right sidebar. This is insufficient. I want to see all the comments (like www.reddit.com/comments) on a single page, so that even if I am away from the computer for several days, I can easily navigate through them and reply/hide/edit/delete in place. Perhaps add an RSS feed for the comments page.

## 6. Comment Statistics.

I need to tell my blog readers who the most active people on my website are. This will stimulate people to be more active. Therefore I should add comment statistics with the most popular commentators, and link to their pages without nofollow to give them link juice. Also add statistics of the most commented articles.

## 7. Add Incoming Search Term Statistics On All Pages.

When people come from Google or Yahoo, save the query they used and display it on the page. This way I will always know what terms were searched for on each of the pages, without having to write complex queries.

## 8. Download Statistics.

Currently I have sloppy download statistics. I want nice graphs and I want to see the most popular downloads by day, month, etc. This should be written as a statistics framework, as traffic statistics, article statistics, delicious statistics and download statistics will all have more or less the same graphs.

## 9. Public Statistics Section For del.icio.us Bookmarks.

I want to see how often my posts get bookmarked on del.icio.us. I also want nice graphs for them by day, week, month, etc. I also want a list of the people who bookmarked my posts most often, and the tags they used. Make this public and also very modular, so other people can reuse the code and put it on their sites. Reward the most frequent bookmarkers with links to their sites.

## 10. Insert Beautiful Images To Give Rest To Eyes.

Insert some beautiful landscape images after several paragraphs of text to give rest to the eyes and give the reader positive emotions.

An image from ginnerobot's photostream on Flickr.

## 11. Public Traffic Statistics.

I want super statistics for my website. And make them public. I want to see which is the most popular article today, and which was yesterday. I want nice traffic graphs and trends.

## 12. Twitter Statistics.

Find who's tweeting about catonmat and put all these tweets on a separate page. Find who the most active catonmat tweeter is and make this person stand out. Link to their Twitter profiles.

## 13. Integrate GitHub Directly In Catonmat.

Integrate GitHub, with status updates and friend count, directly in catonmat. Would have to write some kind of a scraper, or use their API if it's usable for this purpose.

## 14. Threaded Comments.

Currently I have linear comments, which are no good. To find whom someone replied to, I have to scroll through the comments. This doesn't make any sense. I have to add threaded comments. This will also engage users in more conversations.

## 15. Better Comment URLs.

I don't like links to comments in the form www.catonmat.net/article/#comment-55. I want them in the form www.catonmat.net/article/comment-55, so that each comment (or thread of comments) is on its own page.
This way, if someone links to a comment, the person will have precisely that comment loaded. Browsers can misbehave in the case of anchor links like #comment-55, and I don't like that.

## 16. Lightweight Syntax For Comments.

There are just a few things comments need:

- Quote someone.
- Emphasize part of a comment in bold or italic.
- Share a code fragment (auto syntax highlight it).

That's it. Nothing else is necessary, no stupid HTML comments.

## 17. Filter Comments By Language.

My website is getting a lot of spam from Russia with comments in Russian. I am sure no one would write comments in Russian on my site. Add a filter to leave just the English comments. All others are spam.

## 18. Gravatars.

Gravatar is at www.gravatar.com. It's a map from emails to JPEGs of user icons. This will make people stand out. Also display the gravatar user icons in the comment statistics (the point above). Having gravatars emotionally associates you with commentators, and the next time you see a gravatar you can predict the nature of the comment (depending on whether you had positive or negative emotions before).

## 19. Add More "Contact Me" Options Near "About Me" On The Right Sidebar.

You want to get yourself out there. If you don't do it, no one will come after you. Add links to Facebook, LinkedIn, Twitter, Plurk, GitHub, FriendFeed, perhaps some other sites. Show the email as an image and add a link to my IRC channel #catonmat on irc.freenode.net. Initially show only Twitter, Facebook and GitHub. Add a clickable arrow-down image that, when clicked, displays all the other contact options.

## 20. Snippets Page.

I have been writing and collecting various programming snippets. I want to have them in a central database on my site. Instead of putting them on some foreign service like GitHub's Gist or a Pastebin of some kind, I want to keep them on my website in my database, so that I can easily modify them in a single place and integrate them within posts.

An image from digital cat's photostream on Flickr.

## 21. Add Revision Control For All The Pages.

Currently if I edit a page, the previous page is lost and I can't see the changes. It's crucial to keep the changes, as sometimes I need to get something from a year ago. I have to add revision control like Wikipedia does. The URL scheme could be www.catonmat.net/article/revisions, which displays all available revisions, and www.catonmat.net/article/revisions/r1/r2, which displays the changes between r1 and r2, but I have to think about it a bit more.

## 22. Create Tiny URLs For Articles On My Own Site.

I don't want to depend on some service that may go down. Make short URLs like http://catonmat.net/abc, where abc is [a-zA-Z0-9]. This will give me 238328 URLs, more than enough. I could even go for something shorter.

## 23. Optimize Catonmat Load Speed To Maximum.

Use the Page Speed FireBug plugin to optimize site loading speed. Page Speed is at http://code.google.com/speed/page-speed/. Things to be optimized include minified JavaScript, minified HTML, gzip-compressed content, maximized caching, and asynchronous loading of Google Analytics and other JS.

## 24. Make The Posts More Available.

Currently the posts are only available as HTML documents. I should try to convert them to PDF and put them on Scribd. I have to think about the consequences, as Scribd may show up in search engines at a higher ranking position than catonmat itself, which would have a drastic impact on the traffic. Saving to PDF has the benefit that it's a single file. If saved as HTML, the browser creates a folder with tons of images. That can't be shared easily and is clutter. Perhaps offer a PDF download for all articles.

## 25. Make Posts Printer Friendly.

Create a nice CSS template for printing articles. At the end of the article, include URLs to all the mentioned resources. Add an option to choose whether to print comments or not. The URL structure could be www.catonmat.net/article/print.

## 26. Social Sharing Buttons.

Add a "Share This" widget and perhaps "Reddit this", "Digg this", "Stumble this", etc., buttons. This should be based on the referrer, since I don't want to show "Reddit this" to a Digg visitor; there is a holy war between Reddit and Digg. Also add a "Tweet this" button somewhere.

## 27. Utilize The New Google Feature Of Displaying Named Anchors In Search Results.

See this post: http://googleblog.blogspot.com/2009/09/jump-to-information-you-want-right-from.html. Some of my posts utilize this (10 Awk Tips, Tricks and Pitfalls), but I need to utilize it more.

## 28. Highlight The Python Code As In SQLAlchemy Documentation.

I like how the code is highlighted in the SQLAlchemy documentation. I am not sure if they are using Pygments or not, but I'll try to make mine exactly the same. Example: http://www.sqlalchemy.org/docs/05/ormtutorial.html

## 29. Create Pagination For Posts/Categories/Tags As On Flickr.com.

I like the style of Flickr's pagination. Got to implement the same on catonmat. Example: http://www.flickr.com/photos/frijole/

## 30. Have Pages Open In A New Window By Default.

I feel that opening links in a new window would keep visitors on the website longer. I haven't tested it, but I will A/B it. Update: This is not a good idea, won't implement it.

An image from paraflyer's photostream on Flickr.

## 31. Investigate What The Various <a rel="..."> Values Do.

There are a bunch of different relations like rel="bookmark", rel="prev", rel="next". These could improve the website navigation greatly. More info here: http://www.w3schools.com/TAGS/att_a_rel.asp

## 32. Perhaps Remove The Article Date Altogether.

I have noticed myself that if I search for something and I find an article from 2004, I want to look for something fresher. Got to A/B test this and see how long people stay on the site.

## 33. Add An IP Ban List.

Sometimes spammers use the same IPs. Rather than iptables -I INPUT --src IP -j REJECT them, just block them at the application level: 404 all pages, or redirect them elsewhere.

## 34. Automatically Translate All Pages To All Languages Via Google Translate.

Sometimes people search for something in their own language and can't find it. Perhaps they don't know the English term and therefore can't find what they wanted. If I automatically translate all pages to all languages, people would end up on my website and find what they were looking for. The URL scheme for this could be www.catonmat.net/xx/..., where xx is the two-letter international language code.

## 35. Try Out Dustin Curtis' Advice On Best Performing Link Texts.

Dustin Curtis did an experiment where he tried various link texts to invite readers to follow him on Twitter. The first version was "I'm on Twitter", which got a 4.70% click-through rate. The last version was "You should follow me on Twitter <here>". This had a 12.81% CTR, which is a massive improvement.

## 36. Answer Questions On The Website Instead Of Over Email.

People ask me a lot of questions over email. I could answer all the questions on my website instead. This way everyone could always find all my answers.

## 37. Add A "FAQ" Section.

I get asked the same questions over and over again over email. For example:

- What do I have to know for a Google interview?
- What books do I read?
- How to learn C++/C/Python/algorithms?
- etc.

Instead of sending the answer over email, I could send these people to the FAQ page, where they could find my latest answers (as they change over time).

## 38. A Knowledge Database.

This idea has the highest priority.
A knowledge database is a section on my website where I can write down everything that I learn each day. This should be accompanied by a desktop application with a hotkey that instantly brings up an input dialog where I can type what I just learned. I had a database like this in 2002-2004 and my knowledge literally went exponential. I wrote out the key facts that I learned each day and could easily locate them as necessary.

## 39. Add A Miniblog For Quick Articles.

Sometimes I have a cool idea or a quick hack that I want to share, but as I am used to writing large and well-thought-out articles, I can't post the quick hacks and my thoughts don't get shared. A miniblog would allow me to share even the smallest thoughts that I have.

## 40. Add More Programming Quotes.

I love various smart programming quotes. I should make them more accessible, and make them searchable by author/text.

An image from paraflyer's photostream on Flickr.

## 41. Integrate LaTeX In My Posts.

As I sometimes write about maths, I need to integrate LaTeX directly in my posts. I should not forget to do SEO on the images it generates: instead of having some ridiculous <img src="latex-generator?q=$\begin{bmatrix}1&2\\3&4\end{bmatrix}$">, I should have it generate an image <img src="matrix.jpg" title="Matrix">. This way people will be able to find my posts via image search if they search for mathematical terms like "Matrix".

## 42. Try Out How The Articles Look With text-align: justify.

I currently have the default left-aligned text. Books and journals have it justified. Not sure how it would look on my website. Have to try it out. A/B it. I read somewhere that this may feel confusing to dyslexic users.

## 43. Have In-Line Code Snippets And Variables Stand Out From The Rest Of The Text.

A nice example of this is GitHub's blog. They have a gray background for constant-width things. See this: http://github.com/blog/530-how-we-made-github-fast

## 44. Have <a> Links Change Background Color On Hover.

I love how mattt.me has done it: http://mattt.me/articles/ I want this.

## 45. Have Only One Category Per Post But Multiple Tags.

A post should have one single category. The category must strictly define the main theme of the article. A post can have multiple tags. Tags define the topics discussed in the post.

## 46. Add Crazy Egg Tracking For The First Month.

Must add Crazy Egg to track how users navigate the site, where they click and what they visit. Optimize based on the results. http://crazyegg.com/

## 47. Add A Job Board.

As my site is getting more and more popular among programmers, it may be a good idea to add a job board. Joel Spolsky made a million $ in a year with job boards. As the popularity of my site increases, I might make a few dollars out of it as well.

## 48. Use StatCounter And Google Analytics.

This is obvious. StatCounter is for real-time data. The free version is limited to statistics of the last 500 visits. Google Analytics is for keeping long-term statistics, with a day of delay in updates. Load them asynchronously: http://googlecode.blogspot.com/2009/12/google-analytics-launches-asynchronous.html

## 49. Form Input Fields And Text Fields Should Change Border Color On Focus.

Users should know which field they are focused on without trying to find the cursor.

## 50. Mandatory Alt Attributes For Images, Title Attributes For Links.

The rationale for this feature is that if images don't get loaded, or if blind people are listening to the content of my posts via a text-to-speech engine, they should know what the image displays. Alt tags also help the search engines classify images. The same goes for title attributes for links.

## 51. Optimize Meta Descriptions For Categories And Tags.

Category and tag pages usually have a meta description of "Posts in <category>". This is unsatisfactory.
I want a description of the category, and if it's missing or too short, I want it to contain some post titles to make it unique. The same for tags: make the meta description "Posts in <tag>: post title 1, post title 2, ...", not exceeding 20 words or so.

An image from dsevilla's photostream on Flickr.

The next post in this series will be about the Python libraries that I use for the new catonmat and the structure of the site. You can also follow the development on GitHub (I started importing code only yesterday, so there is not much yet).

Projects December 08, 2009

# I pushed 30 of my projects to GitHub

Hey everyone, I just pushed 30 of my projects to GitHub. I realized that all the projects were scattered across my blog and there was no central repository. So I took the time to organize them, write documentation, and upload them to GitHub.

I did all of these projects for fun and to learn better programming. You can't become a great programmer if you don't program a lot. The more you program, the more language idioms and constructs you'll learn. You'll learn common patterns that occur frequently in programming, and it will greatly improve your problem solving skills.

These were all relatively small projects and I think I am ready to move to the next level. I have several larger ideas in mind that need to be turned into code. I will post the updates to catonmat.

If you find any of my projects interesting, clone it and start hacking. You can also follow my profile on GitHub. :-)

Here they all are. Enjoy!

## 1. Busy Beaver

Busy beaver is a computer science problem of finding the smallest Turing machine that outputs the most data and eventually halts. This project is an implementation of a Turing machine in Python and C++ that runs the busy beavers. It also comes with a Turing machine tape visualization tool written in Perl.

## 2. Feedburner Graph Generator

Current Feedburner statistics graphs do not look nice. I wrote this Perl program to generate the nice graphs they had in 2008.

## 3. CodingHorror Keyword Analyzer

This is a Perl program that parses public StatCounter data for the codinghorror.com blog and stores the search keywords in an SQLite database.

## 4. Asynchronous DNS Resolver

This is a tiny Python module that does asynchronous DNS resolution with the adns library. My benchmark was 20,000 resolutions per minute on a 512 kbit/s connection.

## 5. Winamp Music Reporter Plugin

This is a Winamp plugin that reports the tracks you are listening to to the digitalpoint forums. Written in C, using the Winamp SDK. The code can be modified to make it report to Facebook or Twitter, or anywhere you wish.

## 6. Bithacks.h

Bithacks.h is a C header file that implements various bit operations.

## 7. Set Operations in Unix Shell

This is an implementation of 14 set operations using only Unix utilities such as sort, uniq, diff, comm, cat, head, tail, awk, and others.

## 8. Hacker Top

Hacker Top is a Unix top-like program written in Python for monitoring Hacker News from the console.

## 9. Reddit Top

Reddit Top is a Unix top-like program written in Python for monitoring Reddit from the console.

## 10. YouTube Downloader in GNU Awk

This is a program written in GNU Awk that downloads YouTube videos. It works well, but please don't take it too seriously. It's proof-of-concept code, written to see what can be done with GNU Awk's networking interface.

## 11. YouTube Downloader in VBScript

This is a program written in VBScript that downloads YouTube videos. I wrote it because when I was a child I did a lot of programming in Visual Basic, and I wanted to remember what it was like.

## 12. YouTube Downloader One-Liner

This is a Perl one-liner that downloads YouTube videos. I wrote it because I love Perl golf.

## 13. YouTube Video Uploader

This is a YouTube video uploader that works without any APIs. It just simulates what a browser would do, taking all the steps to post the video and set the video info. Written in Perl.

## 14. Plurk Translation Tool

This is a GreaseMonkey script that translates Plurks to English. Plurk is like Twitter but more fun and organized. Come be my friend on Plurk; my profile name is pkrumins. Written in JavaScript.

## 15. Command Line Plurker

Plurk from your command line. Written in Perl.

## 16. Find Plurks on Google

This program searches for plurks that were indexed by Google. It outputs URLs to the indexed pages. It's written in Python and uses my xgoogle library.

## 17. Delete Plurks

This is a GreaseMonkey script that adds a "delete" button on individual plurk pages. This way you can delete your or other people's plurks (if it's your thread) directly from the plurk page. Written in JavaScript.

## 19. Reddit Media

The old Reddit Media website that I created in 2007. No longer maintained. It was written in Perl.

## 20. Reddit River

The old Reddit River website for mobile devices. No longer maintained. It was written in Python.

## 21. Digpicz

This is the digpicz.com website that I created back in 2007. It got massive attention back then because Digg didn't have a picture section yet. It was written in Perl.

## 22. Picurls

Picurls.com is a picture aggregator, much like popurls.com but for pics. Currently down for maintenance; it will soon be up again. Written in PHP.

## 23. Bash Vi Editing Mode Cheat Sheet

Bash has two input modes: emacs and vi. This is the vi input/editing mode keyboard shortcut cheat sheet.

## 24. Bash Emacs Editing Mode Cheat Sheet

Bash has two input modes: emacs and vi. This is the emacs input/editing mode keyboard shortcut cheat sheet.

## 25. Bash History Cheat Sheet

This is the bash history cheat sheet. It summarizes everything there is to know about working efficiently with command line history in bash.

## 26. Screen Cheat Sheet

This is the screen terminal emulator cheat sheet. It lists the default keyboard shortcuts for working with screen.

## 27. Perl Special Variable Cheat Sheet

This is the Perl predefined variable cheat sheet. It lists all the variables from perldoc perlvar with a concise description and some example usages. I created it when I was mastering Perl. I enjoy Perl golf and I wanted to know all of the variables.

## 28. Perl pack/unpack and printf/sprintf Cheat Sheet

This is the Perl pack/unpack/printf/sprintf cheat sheet. The pack/unpack cheat sheet is on page one, and it lists the pack/unpack template parameters and what they do. The printf/sprintf cheat sheet is on page two, and it lists the printf/sprintf format specifiers and format attributes.

I created this when I was mastering what could be done with pack/unpack. I added printf/sprintf because I could never remember all the format specifiers.

## 29. Awk Cheat Sheet

The AWK programming language cheat sheet.

## 30. Sed Cheat Sheet

This is the sed cheat sheet. Sed is the Unix stream editor. If you don't know it, you don't know Unix.

## 31. Ed Cheat Sheet

This is the ed cheat sheet. Ed is The Unix Text Editor.

One day when I was learning sed, I got interested in whether it originated from ed, which got me interested in ed itself. I find that cheat sheets are a great way to learn new topics, and therefore I created this cheat sheet.

## 32. The New catonmat.net Website

I just started pushing code to the new catonmat.net repository. It's going to be a state-of-the-art personal website from now on.

I have around 100 ideas for it, and the next big article series on catonmat is going to be "Designing the new catonmat.net together with me." You should subscribe to my blog here if you are interested!

If you have any questions, don't hesitate to ask in the comments!

Linear Algebra December 03, 2009

# MIT Linear Algebra, Lecture 2: Elimination with Matrices

<- previous article next article ->

This is the second post in an article series about MIT's course "Linear Algebra".
In this post I will review lecture two on solving systems of linear equations by elimination and back-substitution. The other topics in the lecture are elimination matrices (also known as elementary matrices) and permutation matrices.\n\nThe first post covered the geometry of linear equations.\n\nOne of my blog readers, Seyed M. Mottaghinejad, had also watched this course and sent me his lecture notes. They are awesome. Grab them here: lecture notes by Seyed M. Mottaghinejad (includes .pdf, .tex and his document class).\n\nOkay, here is the second lecture.\n\n## Lecture 2: Elimination with Matrices\n\nElimination is the way every software package solves equations. If the elimination succeeds it gets the answer. If the matrix A in Ax=b is a \"good\" matrix (we'll see what a good matrix is later) then the elimination will work and we'll get the answer in an efficient way. It's also always good to ask how can it fail. We'll see in this lecture how elimination decides if the matrix A is good or bad. After the elimination there is a step called back-substitution to complete the answer.\n\nOkay, here is a system of equations. Three equations in three unknowns.\n\nRemember from lecture one, that every such system can be written in the matrix form Ax=b, where A is the matrix of coefficients, x is a column vector of unknowns and b is the column vector of solutions (the right hand side). Therefore the matrix form of this example is the following:\n\nFor the elimination process we need the matrix A and the column vector b. The idea is very simple, first we write them down in the augmented matrix form A|b:\n\nNext we subtract rows from one another in such a way that the final result is an upper triangular matrix (a matrix with all the elements below the diagonal being zero).\n\nSo the first step is to subtract the first row multiplied by 3 from the second row. This gives us the following matrix:\n\nThe next step is to subtract the second row multiplied by 2 from the third row. 
This is the final step and produces an upper triangular matrix that we needed:\n\nNow let's write down the equations that resulted from the elimination:\n\nWorking from the bottom up we can immediately find the solutions z, y, and x. From the last equation, z = -10\/5 = -2. Now we put z in the middle equation and solve for y. 2y = 6 + 2z = 6 + 2(-2) = 6 - 4 = 2 => y = 1. And finally, we can substitute y and z in the first equation and solve for x. x = 2 - 2y - z = 2 - 2(1) - (-2) = 2.\n\nWe have found the solution, it's (x=2, y=1, z=-2). The process we used to find it is called the back-substitution.\n\nThe elimination would fail if taking a multiple of one row and adding to the next would produce a zero on the diagonal (and there would be no other row to try to exchange the failing row with).\n\nThe lecture continues with figuring out how to do the elimination by using matrices. In the first lecture we learned that a matrix times a column vector gave us a combination of the columns of the matrix. Similarly, a row times a matrix gives us a combination of the rows of the matrix.\n\nLet's look at our first step of elimination again. It was to subtract 3 times the first row from the second row. This can be expressed as matrix multiplication (forget the column b for a while):\n\nLet's call the matrix on the right E as elimination matrix (or elementary matrix), and give it subscript E21 for making a zero in the resulting matrix at row 2, column 1.\n\nThe next step was twice the second row minus the third row:\n\nThe matrix on the right is again an elimination matrix. Let's call it E32 for giving a zero at row 3, column 2.\n\nBut notice that these two operations can be combined:\n\nAnd we can write E32(E21A) = U. Now remember that matrix operations are associative, therefore we can change the parenthesis (E32E21)A = U. If we multiply (E32E21) we get a single matrix E that we will call the elimination matrix. 
What we have done is express the whole elimination process in matrix language!\n\nNext, the lecture takes a step back and looks at permutation matrices. The question asked is \"what matrix would exchange two rows of a matrix?\" and \"what matrix would exchange two columns of a matrix?\"\n\nWatch the lecture to find the answer to these questions!\n\nTopics covered in lecture two:\n\n\u2022 [00:25] Main topic for today: elimination.\n\u2022 [02:35] A system with three equations and three unknowns:\n\u2022 [03:30] Elimination process. Taking matrix A to U.\n\u2022 [08:35] Three pivots of matrix U.\n\u2022 [10:15] Relation of pivots to determinant of a matrix.\n\u2022 [10:40] How can elimination fail?\n\u2022 [14:40] Back substitution. Solution (x=2, y=1, z=-2).\n\u2022 [19:45] Elimination with matrices.\n\u2022 [21:10] Matrix times a column vector is a linear combination of columns of the matrix.\n\u2022 [22:15] A row vector times a matrix is a linear combination of rows of the matrix.\n\u2022 [23:40] Matrix x column = column.\n\u2022 [24:10] Row x matrix = row.\n\u2022 [24:20] Elimination matrix for subtracting three times row one from row two.\n\u2022 [26:55] The identity matrix.\n\u2022 [30:00] Elimination matrix for subtracting two times row two from row three.\n\u2022 [32:40] E32E21A = U.\n\u2022 [37:20] Permutation matrices.\n\u2022 [37:30] How to exchange rows of a 2x2 matrix?\n\u2022 [37:55] Permutation matrix P to exchange rows of a 2x2 matrix.\n\u2022 [38:40] How to exchange columns of a 2x2 matrix?\n\u2022 [39:40] Permutation matrix P to exchange columns of a 2x2 matrix.\n\u2022 [42:00] Commutative law does not hold for matrices.\n\u2022 [44:25] Introduction to inverse matrices.\n\u2022 [47:10] E-1E = I.\n\nHere are my notes of lecture two:\n\nMy notes of linear algebra lecture 2 on elimination with matrices.\n\nHave fun with this lecture! The next post is going to be either on lectures three and four together or just lecture three. 
Lecture three will touch a bit more on matrix multiplication and then dive into the inverse matrices. Lecture four will cover A=LU matrix decomposition (also called factorization).\n\nPS. This course is taught from Introduction to Linear Algebra textbook. Get it here:","date":"2015-07-29 13:32:20","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.31800132989883423, \"perplexity\": 1969.7456130835858}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2015-32\/segments\/1438042986444.39\/warc\/CC-MAIN-20150728002306-00181-ip-10-236-191-2.ec2.internal.warc.gz\"}"}
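The elimination and back-substitution walk-through in the lecture notes above can be sketched in a few lines of Python. The matrix A and right-hand side b below are reconstructed from the row operations and back-substitution equations quoted in the text (the original equations were images not captured here), so treat them as an assumption; the solution (x=2, y=1, z=-2) matches the lecture.

```python
# Gaussian elimination with back-substitution, following the lecture's
# 3x3 example (A and b reconstructed from the quoted steps; the
# solution should come out to x=2, y=1, z=-2).

def eliminate(A, b):
    """Forward elimination: reduce A to upper triangular U, applying
    the same row operations to b."""
    n = len(A)
    A = [row[:] for row in A]   # work on copies
    b = b[:]
    for col in range(n - 1):
        for row in range(col + 1, n):
            m = A[row][col] / A[col][col]   # multiplier, e.g. 3 for E21
            for k in range(col, n):
                A[row][k] -= m * A[col][k]
            b[row] -= m * b[col]
    return A, b

def back_substitute(U, c):
    """Solve Ux = c working from the bottom row up."""
    n = len(U)
    x = [0.0] * n
    for i in reversed(range(n)):
        s = sum(U[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (c[i] - s) / U[i][i]
    return x

A = [[1.0, 2.0, 1.0],
     [3.0, 8.0, 1.0],
     [0.0, 4.0, 1.0]]
b = [2.0, 12.0, 2.0]

U, c = eliminate(A, b)     # U's last row is [0, 0, 5], c's last entry -10
x = back_substitute(U, c)
print(x)                   # [2.0, 1.0, -2.0]
```

The two inner loops are exactly the E21 and E32 row operations from the notes, applied numerically instead of as matrix multiplications.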
null
null
Veldwijk is a hamlet in the municipality of Bronckhorst in the Dutch province of Gelderland. It lies two kilometres north of Vorden, on the road to Almen. Kasteel De Bramel stands in the hamlet. Geography of Bronckhorst Hamlet in Gelderland
{ "redpajama_set_name": "RedPajamaWikipedia" }
1,981
{"url":"https:\/\/financetrain.com\/normal-distribution","text":"Normal Distribution\n\nThe normal distribution is the well-known bell-shaped curve depicted below. The bell-shaped curve comes from a statistical tendency for outcomes to cluster symmetrically around the mean (or average).\n\nDeviations from the mean are described in terms of standard deviations. In all normal distributions, 68% of outcomes will fall within 1 standard deviation to either side of the mean.\n\nLet\u2019s illustrate the concept of mean and standard deviation with a simple example. My New York subway commute every day is 30 minutes on average, with a standard deviation of 5 minutes.\n\nAssuming a normal distribution for the time it takes me to get to work, this would imply that:\n\n\u2022 68% of the time, I can expect my daily commute to be between 25 minutes and 35 minutes (i.e., the mean of 30 minutes plus or minus 1 standard deviation, or 5 minutes).\n\n\u2022 16% of the time, my commute is less than 25 minutes (because the normal distribution is symmetrical around the mean, I expect this event to occur 16% of the time, or (100%-68%)\/2).\n\n\u2022 16% of the time, my commute is greater than 35 minutes (again, because the normal distribution is symmetrical). Or, in other words, my 84% confidence level worst-case commute is 35 minutes (Only 16% of the time I would expect longer commute).\n\nFrom this example, it makes sense that the more standard deviations we move from the mean, the lower the probability is of such an event occurring. 
For example, a delay of 10 minutes or more (2 standard deviations) only has a 2.5% chance of occurring, compared to a 16% probability of a delay of 5 minutes or more (1 standard deviation).\n\nThe table below relates standard deviations to lower tail probabilities (lower tail probabilities quantify the chance of an event of that magnitude or greater occurring):\n\n Standard Deviations | Lower Tail Probability | Commuting Example\n 1 | 16% | Delay of 5 minutes or more\n 1.28 | 10% | Delay of 6.4 minutes or more\n 1.65 | 5% | Delay of 8.25 minutes or more\n 2 | 2.5% | Delay of 10 minutes or more\n 2.33 | 1% | Delay of 11.65 minutes or more\n\nCharacteristics of Normal Distribution\n\n\u2022 The distribution is characterized by a bell curve which has more weight in the center and tapers off into tails on either side. It takes only two moments, the mean \u00b5 and the variance \u03c32, to describe this function, and it is therefore a parametric function.\n\u2022 The mean tells you about the location of the distribution and the variance gives an idea of how dispersed the values are.\n\u2022 A normal distribution has skewness=0 and kurtosis=3. The excess kurtosis is 0.\n\u2022 A linear combination of 2 or more normally distributed random variables is also normally distributed. 
This means if X and Y are normally distributed then X + Y is also normally distributed.\n\nThe density function for a normal distribution is given using the following formula:","date":"2023-03-24 02:42:30","metadata":"{\"extraction_info\": {\"found_math\": false, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8059405088424683, \"perplexity\": 334.25564881756117}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2023-14\/segments\/1679296945242.64\/warc\/CC-MAIN-20230324020038-20230324050038-00684.warc.gz\"}"}
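The commute example above can be checked numerically. A small sketch using only the standard library (the normal CDF is built from the error function): with mean 30 minutes and standard deviation 5 minutes, the 1- and 2-sigma probabilities reproduce the rounded figures quoted in the text (68%, 16%, 2.5%).

```python
# Numerical check of the commuting example: X ~ Normal(mu=30, sigma=5).
from math import erf, sqrt

def normal_cdf(x, mu, sigma):
    """P(X <= x) for X ~ Normal(mu, sigma), via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

mu, sigma = 30.0, 5.0   # commute: mean 30 min, sd 5 min

within_1sd = normal_cdf(35, mu, sigma) - normal_cdf(25, mu, sigma)
under_25   = normal_cdf(25, mu, sigma)        # commute shorter than 25 min
over_40    = 1 - normal_cdf(40, mu, sigma)    # delay of 10+ minutes

print(round(within_1sd, 4))  # ~0.6827, the "68% within 1 sd" figure
print(round(under_25, 4))    # ~0.1587, the "16% of the time" figure
print(round(over_40, 4))     # ~0.0228, the "2.5% chance" figure
```

The same one-liner with different arguments reproduces each row of the lower-tail table: `1 - normal_cdf(mu + z * sigma, mu, sigma)` for z = 1, 1.28, 1.65, 2, 2.33.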
null
null
Here in Alabama, it seems odd to start preparing for and then planting fall vegetables when it feels like it's 100 degrees outside, but really if you wait for the cool down it will be November and that's too late! So we are planting NOW! We have begun the process of clearing out some of the squash plants that have stopped producing. In their place will go more organic potatoes along with carrots, rutabagas, and radishes. We know you guys want those leafy greens year round and we wish we could provide them. Did you know that in the Summer months, leafy greens are almost non-existent because the heat wipes them out? Sad, but true! There are some heat tolerant varieties, but "tolerance" doesn't always equal survival. Our heat and humidity have proven to be just too much for some plants. One day we hope to extend our growing season with a climate controlled greenhouse, but for now we look forward to planting many different varieties for you to choose from in the Fall. Of course we plan to offer staples like collards, kale, and broccoli. We have been researching, praying, and planning and we decided to try our hand at cauliflower and cabbage. We haven't been successful with these in the past, but we're hoping this is our year! Since we've harvested the meat birds that were pastured in one of our garden areas, we will be turning it over for the Fall. It's imperative between plantings to always add nutrients back into the soil. Good thing the chickens left us plenty of nitrogen! So we're happy about that and we're hoping for a beautiful and bounteous season to come!
{ "redpajama_set_name": "RedPajamaC4" }
592
Roberti Community House (RCH), located in a distressed neighborhood of Waukegan, provides a safe, welcoming place for our neighbors to get to know each other and strengthen connections. Offering many programs for both adults and children, RCH allows our community members to discover new ideas and relationships. Everyone is valued for their natural talents and encouraged to share them with each other, the house and the community. Roberti Community House Afterschool Program offers a safe and welcoming place to all kids in the community every Tuesday and Thursday, 4-5:30. Come once, twice, or every week to continue the connections you make in our community! Come provide community youth with fun activities and meaningful connections!
{ "redpajama_set_name": "RedPajamaC4" }
872
{"url":"https:\/\/math.stackexchange.com\/questions\/4087426\/proof-for-a-theorem-about-relations-beween-convergents-in-continued-fractions","text":"# Proof for a theorem about relations beween convergents in continued fractions\n\nUpon reading about some properties of numerators and denominators in a textbook called Continued Fractions (here, chapter 2.3), I was unable to understand the following transmutation of the expression (circled in red in the image link), and highlighted red here:\n\n2.3 Relations between convergents In this section, we see some properties of the simple continued fractions in terms of the numerators and denominators appearing in the convergents.\n\nTheorem 2.4. If $$p_{n}$$ and $$q_{n}$$ are defined by $$\\begin{array}{l} p_{0}=a_{0}, p_{1}=a_{1} a_{0}+1, p_{n}=a_{n} p_{n-1}+p_{n-2} \\text { for } 2 \\leq n \\\\ q_{0}=1, q_{1}=a_{1}, q_{n}=a_{n} q_{n-1}+q_{n-2} \\text { for } 2 \\leq n \\end{array}$$\n\nthen $$\\left[a_{0}, a_{1}, \\ldots, a_{n}\\right]=\\frac{p_{n}}{q_{n}}$$\n\nProof. The proof proceeds by induction. The base cases are seen to be true by the assumptions given for $$n=0, n=1$$. Let us assume the statement to be true for some $$m$$. Then\n\n$$\\left[a_{0}, a_{1}, \\ldots a_{m-1}, a_{m}\\right]=\\frac{p_{m}}{q_{m}}=\\frac{a_{m} p_{m-1}+p_{m-2}}{a_{m} q_{m-1}+q_{m-2}}$$\n\nHence, we get\n\n$$\\left[a_{0}, a_{1}, \\ldots a_{m-1}, a_{m},\\color{red}{a_{m+1}}\\right]=\\left[a_{0}, a_{1}, \\ldots a_{m-1}, \\color{red}{a_{m}+\\frac{1}{a_{m+1}}}\\right]$$\n\nIn addition to not seeing how the equality in the last line was achieved, I was also under the impression that convergents of continued fractions (the quotients in square bracket notation) must by definition always be integers. The marked transmutation makes the last quotient a fraction.\n\n\u2022 continued fractions are pretty easy to calculate; if you get good at that the notation that now worries you will be transparent. 
To begin, it is just a way to do the extended Euclidean Algorithm. \u2013\u00a0Will Jagy Apr 3 at 1:24\n\u2022 My personal interest is in the relationship between continued fractions and the Pell equation. As a supplemental text, that is on a math sophistication par with the pdf that you referenced, I recommend this pdf. You will have to be careful, however, because the formulas will be different, since the alternate notation of $[a_1, a_2, \\cdots]$ rather than $[a_0, a_1, \\cdots]$ is used. ...see next comment \u2013\u00a0user2661923 Apr 3 at 3:06\n\u2022 For a much more advanced treatment, that I don't recommend that you tackle until after completing the pdf referenced in the previous comment, I recommend this 2nd pdf. Personally, I never went beyond chapter 1 in the 2nd pdf, because that chapter had everything I needed, with respect to the Pell equation. If you do get through these, and you have further interest, you will see many mathSE queries that involve continued fractions or the Pell Equation. In fact, many of the answers of Will Jagy, who you can search on, are on one or both of these topics. \u2013\u00a0user2661923 Apr 3 at 3:10\n\u2022 It is best to put the relevant portions of your linked pdf and image in the body of question. I have voted to close as \"needs details or clarity\". \u2013\u00a0Paramanand Singh Apr 3 at 3:26\n\nThis is an interpretation question, rather than a request for a problem to be solved. Therefore, I personally see no problem answering it, even though the OP has shown no work. 
If I get downvoted, okay.\n\nBecause of the difficulty displaying long continued fractions, I am going to illustrate the OP's question, under the assumption that $$m = 3$$.\n\n$$a_0 +\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{a_3 + \cfrac{1}{a_4}}}}$$\n\ncan be equivalently interpreted as\n\n$$a_0 +\cfrac{1}{a_1+\cfrac{1}{a_2+\cfrac{1}{\{a_3 + \frac{1}{a_4}\}}}}$$\n\nI was also under the impression that convergents of continued fractions (the quotients in square bracket notation) must by definition always be integers.\n\nI am assuming that you intend the coefficients of continued fractions, which are normally expressed as integers, rather than the convergents of continued fractions, which normally have form $$\frac{p_n}{q_n}.$$\n\nWhile it's true that the coefficients are normally computed to be integers, you specifically asked how a specific line in a proof can be algebraically justified. As indicated in my continued fraction examples above, the algebra is justified.\n\nIn my examples, what this means is that\n\n$$[a_0, a_1, a_2, a_3, a_4]$$\n\nis algebraically equivalent to\n\n$$\left[a_0, a_1, a_2, \left(a_3 + \frac{1}{a_4}\right)\right].$$\n\n\u2022 Indeed, I was confused about convergents and coefficients. Skipping over the gratitude 'noises' for superb answer - You've referenced a couple of pdf's that appear to be more detailed and thorough in explanations than what I had found. Would You happen to have a list of Your personal favorite reading resources posted somewhere? At a risk of going way off-topic - I'm interested in software engineering, thus, anything from calculus, abstract algebra, mathematical logic, graph theory, set theory, linear algebra, number theory, numerical methods, probability and statistics. \u2013\u00a0Whyitbenotworkin Apr 4 at 10:42\n\u2022 @Whyitbenotworkin first see my answer to this question. My answer, which will be the one at the bottom, discusses generic book searching tips. 
Beyond that, books that I have used are [1] \"Calculus, Volumes I and II\" (2nd Edition, Tom Apostol, 1966) [2] \"Elementary Number Theory\" (Uspensky and Heaslet, 1939) and [3] \"An Introduction To Complex Function Theory\" (Bruce Palka, 1991). \u2013\u00a0user2661923 Apr 4 at 23:56\n\nhere's an example, including the Bezout equation at the end\n\n$$\\gcd( 479, 231 ) = ???$$\n\n$$\\frac{ 479 }{ 231 } = 2 + \\frac{ 17 }{ 231 }$$ $$\\frac{ 231 }{ 17 } = 13 + \\frac{ 10 }{ 17 }$$ $$\\frac{ 17 }{ 10 } = 1 + \\frac{ 7 }{ 10 }$$ $$\\frac{ 10 }{ 7 } = 1 + \\frac{ 3 }{ 7 }$$ $$\\frac{ 7 }{ 3 } = 2 + \\frac{ 1 }{ 3 }$$ $$\\frac{ 3 }{ 1 } = 3 + \\frac{ 0 }{ 1 }$$ Simple continued fraction tableau:\n$$\\begin{array}{cccccccccccccc} & & 2 & & 13 & & 1 & & 1 & & 2 & & 3 & \\\\ \\frac{ 0 }{ 1 } & \\frac{ 1 }{ 0 } & & \\frac{ 2 }{ 1 } & & \\frac{ 27 }{ 13 } & & \\frac{ 29 }{ 14 } & & \\frac{ 56 }{ 27 } & & \\frac{ 141 }{ 68 } & & \\frac{ 479 }{ 231 } \\end{array}$$ $$479 \\cdot 68 - 231 \\cdot 141 = 1$$\n\n================================================\n\ndifferent, infinite but periodic continued fraction, here for $$\\sqrt {13}$$\n\n$$\\sqrt { 13} = 3 + \\frac{ \\sqrt {13} - 3 }{ 1 }$$ $$\\frac{ 1 }{ \\sqrt {13} - 3 } = \\frac{ \\sqrt {13} + 3 }{4 } = 1 + \\frac{ \\sqrt {13} - 1 }{4 }$$ $$\\frac{ 4 }{ \\sqrt {13} - 1 } = \\frac{ \\sqrt {13} + 1 }{3 } = 1 + \\frac{ \\sqrt {13} - 2 }{3 }$$ $$\\frac{ 3 }{ \\sqrt {13} - 2 } = \\frac{ \\sqrt {13} + 2 }{3 } = 1 + \\frac{ \\sqrt {13} - 1 }{3 }$$ $$\\frac{ 3 }{ \\sqrt {13} - 1 } = \\frac{ \\sqrt {13} + 1 }{4 } = 1 + \\frac{ \\sqrt {13} - 3 }{4 }$$ $$\\frac{ 4 }{ \\sqrt {13} - 3 } = \\frac{ \\sqrt {13} + 3 }{1 } = 6 + \\frac{ \\sqrt {13} - 3 }{1 }$$\n\nSimple continued fraction tableau:\n$$\\begin{array}{cccccccccccccccccccccccc} & & 3 & & 1 & & 1 & & 1 & & 1 & & 6 & & 1 & & 1 & & 1 & & 1 & & 6 & \\\\ \\\\ \\frac{ 0 }{ 1 } & \\frac{ 1 }{ 0 } & & \\frac{ 3 }{ 1 } & & \\frac{ 4 }{ 1 } & & \\frac{ 7 }{ 2 } & & \\frac{ 11 }{ 3 } & & 
\\frac{ 18 }{ 5 } & & \\frac{ 119 }{ 33 } & & \\frac{ 137 }{ 38 } & & \\frac{ 256 }{ 71 } & & \\frac{ 393 }{ 109 } & & \\frac{ 649 }{ 180 } \\\\ \\\\ & 1 & & -4 & & 3 & & -3 & & 4 & & -1 & & 4 & & -3 & & 3 & & -4 & & 1 \\end{array}$$\n\n$$\\begin{array}{cccc} \\frac{ 1 }{ 0 } & 1^2 - 13 \\cdot 0^2 = 1 & \\mbox{digit} & 3 \\\\ \\frac{ 3 }{ 1 } & 3^2 - 13 \\cdot 1^2 = -4 & \\mbox{digit} & 1 \\\\ \\frac{ 4 }{ 1 } & 4^2 - 13 \\cdot 1^2 = 3 & \\mbox{digit} & 1 \\\\ \\frac{ 7 }{ 2 } & 7^2 - 13 \\cdot 2^2 = -3 & \\mbox{digit} & 1 \\\\ \\frac{ 11 }{ 3 } & 11^2 - 13 \\cdot 3^2 = 4 & \\mbox{digit} & 1 \\\\ \\frac{ 18 }{ 5 } & 18^2 - 13 \\cdot 5^2 = -1 & \\mbox{digit} & 6 \\\\ \\frac{ 119 }{ 33 } & 119^2 - 13 \\cdot 33^2 = 4 & \\mbox{digit} & 1 \\\\ \\frac{ 137 }{ 38 } & 137^2 - 13 \\cdot 38^2 = -3 & \\mbox{digit} & 1 \\\\ \\frac{ 256 }{ 71 } & 256^2 - 13 \\cdot 71^2 = 3 & \\mbox{digit} & 1 \\\\ \\frac{ 393 }{ 109 } & 393^2 - 13 \\cdot 109^2 = -4 & \\mbox{digit} & 1 \\\\ \\frac{ 649 }{ 180 } & 649^2 - 13 \\cdot 180^2 = 1 & \\mbox{digit} & 6 \\\\ \\end{array}$$\n\nSomething I programmed recently; this is the Gauss-Lagrange method of chains of reduced forms, here I use the left neighbors. The Pell equation deals with $$x^2 - n y^2,$$ the continued fraction being for $$\\sqrt n.$$ With little extra effort we get the continued fraction for $$\\frac{B + \\sqrt D}{2A},$$ where $$D=B^2 -4AC.$$\n\nI also have it print some 2 by 2 matrices in proper format for pari-gp; in each case below, the product pari calls rt * h * r is something specific related to the original Hessian matrix h\n\nThe \"partial quotients\" of the continued fraction are the absolute values of the \"digits\" I write at the right hand side of each line. 
The output below shows $$\\frac{5 + \\sqrt {597}}{22}$$\n\njagy@phobeusjunior:~\/old drive\/home\/jagy\/Cplusplus$$jagy@phobeusjunior:~\/old drive\/home\/jagy\/Cplusplus$$ .\/indefCycleLeft 11 5 -13\n\n0 form 11 5 -13 epsilon 1\n1 form -7 17 11 Epsilon -2\n2 form 17 11 -7 Epsilon 1\n3 form -1 23 17 Epsilon -23 ambiguous\n4 form 17 23 -1 Epsilon 1 ambiguous\n5 form -7 11 17 Epsilon -2\n6 form 11 17 -7 Epsilon 1\n7 form -13 5 11 Epsilon -1 opposite\n8 form 3 21 -13 Epsilon 7 ambiguous\n9 form -13 21 3 Epsilon -1 ambiguous\n10 form 11 5 -13\n\nform 11 x^2 + 5 x y -13 y^2\n\nminimum was 1rep x = -4 y = 3 disc 597 dSqrt 24\nAutomorph, written on right of Gram matrix:\n-5872 5187\n4389 -3877\nfor Pari\/gp: rt = [ -5872 , 4389 ; 5187 , -3877 ] ; h = [ 22 , 5 ; 5 , -26 ] ; r = [ -5872 , 5187 ; 4389 , -3877 ] ;\n\nopposite Pari\/gp: rt = [ -392 , 293 ; -293 , 219 ] ; h = [ 22 , 5 ; 5 , -26 ] ; r = [ -392 , -293 ; 293 , 219 ] ;\n\n=========================================\njagy@phobeusjunior:~\/old drive\/home\/jagy\/Cplusplus$ \u2022 Do you prefer C++ to Java? \u2013 user2661923 Apr 14 at 17:29 \u2022 @user2661923 I am told by reliable sources that Java is better unless the only concern is speed. However, when I came back to town decades ago, they let me audit a self-paced course in C++, so that is the one I know. \u2013 Will Jagy Apr 14 at 18:41 \u2022 @user2661923 meanwhile, if you are interested in programming this, it is all integer arithmetic. Everything needed is in zakuski.utsa.edu\/~jagy\/indefinite_binary_Buell.pdf with this addition: the definition of \"reduced\" for indefinite$ax^2 + bxy + cy^2$is equivalent to:$ ac < 0 \\; , \\; \\; \\; b > |a+c| $\u2013 Will Jagy Apr 14 at 19:10 \u2022 I deleted my last comment, which I posted before examining the ...Buell pdf that you linked to. That answers all my questions, except 1. 
Do you have an opinion on the C++ facilities of large integers versus the Java facilities, which refer to Java's BigInteger class, which is mentioned here? \u2013 user2661923 Apr 14 at 19:23 \u2022 @user2661923 No opinion. Note Pell$u^2 - Dv^2=1$leads to an automrphiosm matrix$ A = \\left( \\begin{array}{cc} u & Dv \\\\ v & u \\end{array} \\right) $such that$A^T H A= H \\; , \\; \\;$where$H$is the Hessian matrix of the form. The matrix my program reports as an \"automorph\" solves$A^T H A= H \\; , \\; \\;$where this$H$is the Hessian of$11x^2 + 5xy - 13 y^2 $That is, the left column of this$A$represents$11$and the right column$ \\; \\;-13\\$ \u2013\u00a0Will Jagy Apr 14 at 19:30","date":"2021-08-05 18:20:05","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 39, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8474275469779968, \"perplexity\": 531.9360435947724}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 20, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-31\/segments\/1627046156141.29\/warc\/CC-MAIN-20210805161906-20210805191906-00496.warc.gz\"}"}
null
null
Q: I'm making an ASP.NET MVC application and I'm getting the following error when running the project. This is my connection string in web.config: <!--EmployeeContext connection string--> <connectionStrings> <add name="EmployeeContext" connectionString="Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=D:\VisualStudioProjects\C# projects\LocalDBApplication\LocalDBApplication\Database1.mdf;Integrated Security=True" providerName="System.Data.SqlClient" /> </connectionStrings> A: The error message is clear: you have a duplicate connectionStrings section in your config file. A: The error clearly says you have more than one <connectionStrings> section specified. Check your config and remove the duplicates. A: Only one <connectionStrings> section is allowed in a config file. Check your web.config and remove or merge your defined connections.
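As a sketch of the fix the answers describe (reusing the question's own connection entry): keep exactly one <connectionStrings> section in web.config and move every <add> entry into it, deleting the duplicate section.

```xml
<!-- web.config: a single <connectionStrings> section; <add> entries
     from any duplicate section are merged here, and that duplicate
     section is removed entirely. -->
<connectionStrings>
  <add name="EmployeeContext"
       connectionString="Data Source=(LocalDB)\MSSQLLocalDB;AttachDbFilename=D:\VisualStudioProjects\C# projects\LocalDBApplication\LocalDBApplication\Database1.mdf;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```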
{ "redpajama_set_name": "RedPajamaStackExchange" }
6,978
{"url":"https:\/\/socratic.org\/chemistry\/acids-and-bases\/stoichiometry-with-acid-and-base-dissociation","text":"# Stoichiometry with Acid and Base Dissociation\n\nAcid - Base Equilibria | Common Ion Effect & Percent Dissociation.\n\nTip: This isn't the place to ask a question because the teacher can't reply.\n\n1 of 4 videos by Dr. Hayek\n\n## Key Questions\n\nWatch these videos!\n\n#### Explanation:\n\nSince the question is very general and pages could be written to answer this question, I would like to recommend the following videos on different Acid-Base Titration examples.\n\nAcid - Base Equilibria | Strong Acid - Strong Base Titration.\n\nAcid - Base Equilibria | Weak Acid - Strong Base Titration.\n\n\u2022 A very basic intro to stoich...\n\nvideo from: Noel Pauller\n\nStoichiometry is the calculation of relative quantities of reactants and products in chemical reactions. Stoichiometry is founded on the law of conservation of mass where the total mass of the reactants equals the total mass of the products.\n\nC${H}_{4}$ + 2${O}_{2}$ -----> C${O}_{2}$ + 2 ${H}_{2}$O\n\nHere, one molecule of methane reacts with two molecules of oxygen gas to yield one molecule of carbon dioxide and two molecules of water. Stoichiometry measures these quantitative relationships, and is used to determine the amount of products\/reactants that are produced\/needed in a given reaction.\n\nAs per the equation, 1 mole of methane reacts with 2 moles of oxygen to produce 1 mole of carbon dioxide and 2 moles of water.\n\nIf we double the amount of methane and oxygen to 2 moles and 4 moles respectively, the amount of carbon dioxide and water we get is also doubled.\n\nHow many moles of chlorine gas ($C {l}_{2}$) would react with 5 moles of sodium (Na) according to the following chemical equation? (Balance equation.) 
2Na + $C {l}_{2}$ -----> 2NaCl\n\n(b) Using the equation (after it is balanced) above, determine the amount of product that can be produced from 24.7 g Na.\n\nAs per the equation 2 moles of sodium produces one mole of $C {l}_{2}$ , so the amount of $C {l}_{2}$ required is one half the amount of Sodium , Na.\nIf we start with 5 moles of Na , we will need 5 \/ 2 moles or 2.5 moles of $C {l}_{2}$.\n\n(b) As per the equation 2 moles of sodium ( mass 46 g) produces two moles of NaCl (116.8g)\nwe can say 46 g of Na on reaction with sufficient amount of Chlorine produces 116.8 g of NaCl.\n\n1 g of Na on reaction produces (116.8 \/ 46 ) g of NaCl\n\n24.7 g of Na on reaction will produce ( 116.8 \/ 46 ) x 24.7 g of NaCl\n\n62 g of NaCl.\n\n\u2022 This key question hasn't been answered yet.\n\n## Questions\n\n\u2022 \u00b7 7 months ago\n\u2022 \u00b7 7 months ago\n\u2022 8 months ago\n\u2022 \u00b7 10 months ago\n\u2022 \u00b7 10 months ago\n\u2022 \u00b7 11 months ago\n\u2022 \u00b7 11 months ago\n\u2022 \u00b7 11 months ago\n\u2022 \u00b7 11 months ago\n\u2022 \u00b7 11 months ago\n\u2022 \u00b7 1 year ago\n\u2022 \u00b7 1 year ago\n\u2022 \u00b7 1 year ago\n\u2022 \u00b7 1 year ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 2 years ago\n\u2022 \u00b7 3 years ago\n\u2022 \u00b7 3 years ago\n\u2022 \u00b7 3 years ago\n\u2022 3 years ago\n\u2022 \u00b7 3 years ago\n\u2022 \u00b7 3 years ago\n\u2022 \u00b7 3 years ago","date":"2018-03-24 00:23:25","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 9, \"mathjax_inline_tex\": 
1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.5741716027259827, \"perplexity\": 3537.865189331074}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.3, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2018-13\/segments\/1521257649508.48\/warc\/CC-MAIN-20180323235620-20180324015620-00604.warc.gz\"}"}
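The mass calculation in part (b) can be checked with a short script. A minimal Python sketch using the rounded molar masses implied above (Na about 23.0 g/mol, NaCl about 58.4 g/mol; both values carried over from the worked example, not precise atomic weights):

```python
# Stoichiometry for 2 Na + Cl2 -> 2 NaCl, using the rounded molar
# masses from the worked example (Na ~ 23.0 g/mol, NaCl ~ 58.4 g/mol).
MOLAR_MASS_NA = 23.0
MOLAR_MASS_NACL = 58.4

def moles_cl2_needed(moles_na):
    """2 mol Na consume 1 mol Cl2."""
    return moles_na / 2.0

def grams_nacl_from_na(grams_na):
    """Na -> NaCl is a 1:1 mole ratio in the balanced equation."""
    moles_na = grams_na / MOLAR_MASS_NA
    return moles_na * MOLAR_MASS_NACL

print(moles_cl2_needed(5))                 # 2.5 mol Cl2 for 5 mol Na
print(round(grams_nacl_from_na(24.7), 1))  # 62.7 g NaCl from 24.7 g Na
```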
What It Really Means to Be an Entrepreneur

The image that comes into a lot of people's minds when they think of an entrepreneur is someone like Steve Jobs starting Apple with his friend Steve Wozniak from his parents' garage in his early 20s. They obviously built something enormously successful out of nothing, but that's just a single story—one that represents the 1% of entrepreneurs. In my view, entrepreneurs are defined by their actions, not by their intent or by whether you can find them listed on the stock exchange. They include start-up and small business owners, freelancers, contract workers, gig workers, side hustlers, and anyone else taking risks to pursue their dreams.

Related: 3 Relationships That Will Build the Tribe Every Entrepreneur Deserves

You can (and should!) call yourself an entrepreneur if you can say YES! to one of these three business basics:

1. I operate a business. It can be a business of any size and at any stage of development, as long as you're past the "ideation" phase.
2. I'm taking a risk. You could be risking your money, resources, reputation, time, or all of the above. But it's a risk you believe in.
3. I'm making—or on the path to making—money. Money doesn't have to be the primary reason you do what you do, but if you don't intend to make a profit, then you have a hobby, not a business.

If you make money from your store on Etsy, then you are an entrepreneur. You may be a part-time entrepreneur who also has a day job, but an entrepreneur nonetheless! If you're building an app that hasn't made money yet, but that you fully intend to market to customers to make a profit, then you are an entrepreneur. And if you're a techy twenty-something-year-old who just started a business in your garage… you're an entrepreneur, too! It's time to rethink our entrepreneurial stereotypes. Innovation and inspiration can come at any age, whether you're 9, 72, or anywhere in between.
What matters most is the desire to begin something of your own and the belief that it's possible. It's never too late (or too early) to start. Are you ready to start something new? Sign up for the Side Hustle Accelerator and get everything you need to get your side hustle started! A $996 value, yours for only $29!
\section{Introduction} \label{section:layering:intro} Algorithms for the convex layers problem that achieve optimal time and space complexities are arguably complex and discourage implementation. We give a simple $\BigO{n \log n}$-time and linear-space algorithm for the problem, which is optimal. Our algorithm computes four quarter convex layers using a plane-sweep paradigm as the first step. The second step then merges these in $\BigO{n \log n}$ time. \begin{comment} The \emph{convex layers problem}, also known as the ``onion peeling problem,'' can be defined as follows: Given a set of points $P$ in the plane, produce the set of non-intersecting convex polygons that would be constructed by iteratively finding the convex hull of the points left after all points on all previously constructed convex polygons are deleted. One can compute the convex layers of a point set $P$ in the plane by taking the convex hull of $P$ to obtain its first layer $L_1$. These points are then discarded from $P$ and the convex hull of the remaining points is taken to obtain the second layer $L_2$. Those are then discarded and we continue this process until we run out of points. So, in general a point $p$ belongs to layer $L_i$ if it lies on the convex hull of the point set $P \setminus \bigcup_{j<i}\{L_j\}$. \end{comment} Formally, the \emph{convex layers}, $\mathbb{L}(P) = \{L_1, L_2, \cdots, L_k \}$, of a set $P$ of $n \geq 3$ points is a partition of $P$ into $k \leq \lceil n/3\rceil$ disjoint subsets $L_i$, $i = 1, 2, \cdots, k$, called \emph{layers}, such that each layer $L_i$ is the ordered\footnote{That is, the layers are polygons, not sets.} set of vertices of the convex hull of $P \setminus \bigcup_{j<i}\{L_j\}$. Thus, the outermost layer $L_1$ coincides exactly with the convex hull of $P$, $\operatornamewithlimits{conv}(P)$. The \emph{convex layers problem} is to compute $\mathbb{L}(P)$. \begin{comment} A related concept is the notion of the \emph{depth} of a point in a point set.
The \emph{depth} of a point $p\in P$ is the index $i \in [1 \cdots k]$ such that $p \in L_i$. It would appear that the convex layers problem can be defined as the problem of computing the depths of all the points in $P$. This is not quite right, however, as each convex layer must be a cyclically ordered set of the points that have the same depth. \end{comment} \label{section:layering:applications} Convex layers have several applications in various domains, including robust statistics, computational geometry, and pattern recognition. \begin{comment} Convex layers have several applications in various domains, including robust statistics\cite{chazelle85,ps-cgi-90,green79,huber72}, computational geometry \cite{chazelle85,ChazelleGuibasLee85}, and pattern recognition \cite{suk99}. \label{section:layering:other:problems} The convex layers problem is one of a number of \emph{layering problems}. A layering problem uses an appropriate notion of depth to partition a set of objects into subsets, called layers, such that objects in the same layer have a common depth. Examples of layering problems include the upper envelope layers problem \cite{79766,Hershberger92}, the maximal layers problem \cite{356484,1008971,1715908}, and the multi-list layering problem \cite{Dessmark1995}. The rest of this article relates relevant literature (\cref{section:layering:relatedWorks}), presents our contributions (\cref{section:layering:results}), and concludes with a list of open problems (\cref{section:layering:openProblems}). \end{comment} \section{Related Work} \label{section:layering:relatedWorks} A brute-force solution to the convex layers problem is obvious---construct each layer $L_i$ as the convex hull of the set $P\setminus \bigcup_{j < i} L_j$ using some suitable convex hull algorithm. The brute-force algorithm will take $O(kn\log n)$ time, where $k$ is the number of layers. We say this algorithm ``peels off'' a set of points one layer at a time.
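The brute-force peeling just described is easy to realize directly. A minimal illustrative Python sketch, using Andrew's monotone-chain hull as the suitable convex hull algorithm (a sanity-check implementation only, not the algorithm developed in this paper):

```python
# Brute-force convex layers: repeatedly take the hull and discard it.
# Illustrative sketch only -- O(k n log n) overall.

def cross(o, a, b):
    """Positive iff o->a->b is a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def convex_layers(pts):
    """Peel hulls until no points remain."""
    pts, layers = list(set(pts)), []
    while pts:
        hull = convex_hull(pts)
        layers.append(hull)
        hull_set = set(hull)
        pts = [p for p in pts if p not in hull_set]
    return layers
```

For example, a 3-by-3 grid peels into three layers: the four corners, the four edge midpoints, and the center point.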
This \emph{peeling} approach is reminiscent of many convex layers algorithms. Another general approach to this problem is the \emph{plane-sweep} paradigm. One of the earliest works to take the peeling approach is Green and Silverman \cite{green79}. Their algorithm repeatedly invokes quickhull to extract a convex layer at each invocation. This algorithm runs in $O(n^3)$ worst-case time, and $O(n^2\log n)$ expected time. Overmars and van Leeuwen's \cite{Overmars1981166} algorithm for this problem runs in $O(n\log^2 n)$ time. It is based on a fully dynamic data structure for maintaining a convex hull under arbitrary insertions and deletions of points; each of these update operations takes $O(\log^2 n)$ time. Constructing the convex layers then reduces to inserting all the points into the data structure in $O(n\log^2 n)$ time, marking points on the current convex hull, deleting them, and repeating the process for the next layer. Since each point is marked exactly once and deleted exactly once, these steps together take no more than $O(n\log^2 n)$ time. Chazelle's \cite{chazelle85} algorithm for this problem runs in $O(n\log n)$ time and $O(n)$ space, both of which are optimal. A new algorithm is discussed in \cref{section:layering:results}. (Chazelle \cite{chazelle85} used a balanced tree approach, as does our new algorithm, but the information stored in our tree corresponds to a very different set of polygonal chains.) The first algorithm on record that uses the plane-sweep approach is a modification of Jarvis march proposed by Shamos \cite{ps-cgi-90}. The algorithm works by doing a radial sweep, changing the pivot along the way, just as in the Jarvis march, but does not stop after processing all the points. It proceeds with another round of Jarvis march that excludes points found to belong to the convex hull in the previous iteration. This way, the algorithm runs in $O(n^2)$ time.
Nielsen \cite{nielsen96} took advantage of Chan's grouping trick \cite{chan96b} to obtain yet another optimal algorithm for the convex layers problem. Nielsen's algorithm is output-sensitive in that it can be parametrized by the number of layers $l$ to be computed. It runs in $O(n\log H_l)$ time, where $H_l$ is the number of points appearing on the first $l$ layers. \begin{comment} Dalal \cite{dalal2004} showed that the expected number of convex layers for a set of $n$ points uniformly and identically distributed within a smooth region such as a circle is $\Theta(n^{2/3})$. Surprisingly, for a polygonal region the expectation is $\Theta(n/\log n)$. The envelope layers problem \cite{Hershberger92} and the multi-list layering problem \cite{Dessmark1995} have been shown to be $P$-complete. It is still not known whether the convex layers problem belongs to the class $NC$ \cite{atallah90,Hershberger92}. Dessmark et al. \cite{Dessmark1995} reported a reduction of the convex layers problem to the multi-list layering problem, but this reduction does not bring us any closer to resolving the status of the convex layers problem. \end{comment} \section{New Algorithm} \label{section:layering:results} Our algorithm builds four sets of convex layers, each with a distinct direction of curvature. The set of points $P$ must be known ahead of time. For ease of presentation, we will assume below that the points are in general position (no three on a line and no two share a coordinate). Removing this assumption is easy, and our implementation does not make the assumption. Each point's horizontal ranking is precomputed by sorting the points using their $x$-coordinates. A \emph{northwest} monotone convex chain $C = (p_1, p_2, \cdots, p_n)$ has $p_i.x<p_{i+1}.x$, $p_i.y<p_{i+1}.y$, and no point in the chain is above the extended line through $p_i$ and $p_{i+1}$, $1\le i < n$.
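As a sanity check on this definition, the two conditions (strict monotonicity in both coordinates, and no point above any extended segment line) can be tested directly. A hedged Python sketch, with the strict inequalities reflecting the general-position assumption:

```python
def is_northwest_chain(chain):
    """True iff x and y strictly increase along the chain and no point
    lies above the extended line through any consecutive pair.
    For a monotone chain this is equivalent to strictly decreasing
    slopes, i.e. every consecutive turn is clockwise (cross < 0)."""
    for (x1, y1), (x2, y2) in zip(chain, chain[1:]):
        if not (x1 < x2 and y1 < y2):
            return False
    for (ax, ay), (bx, by), (cx, cy) in zip(chain, chain[1:], chain[2:]):
        if (bx - ax) * (cy - by) - (by - ay) * (cx - bx) >= 0:
            return False
    return True
```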
The head (tail) of the chain $C$ is defined as $\operatornamewithlimits{\CALL{head}}(C)=p_1$ ($\operatornamewithlimits{\CALL{tail}}(C)=p_n$). A {\em full} monotone convex chain is formed by augmenting $C = (p_1, p_2, \cdots, p_n)$ with two fictional \emph{sentinel} points, $p_0 = (p_{1}.x, -\infty)$ and $p_{n+1} = (\infty, p_n.y)$. Note that given a full chain $C$, the calls $\operatornamewithlimits{\CALL{head}}(C)$ and $\operatornamewithlimits{\CALL{tail}}(C)$ return $p_1$ and $p_n$ and not the sentinels. The $\infty$'s are symbolic and the operations are overloaded for them. We call the chain \emph{northwest} since it bows outward in that direction. Similarly, we will refer to a chain as \emph{southwest} if it is northwest after rotating the point set 90 degrees clockwise about the origin. \emph{Northeast} and \emph{southeast} are defined analogously. Except in the section on merging below, {\bf all} of our chains will be northwest monotone convex chains, so we will simply call them \emph{chains}. We say a chain $C_1$ \emph{precedes} another chain $C_2$ if $\operatornamewithlimits{\CALL{tail}}(C_1).x < \operatornamewithlimits{\CALL{head}}(C_2).x$. Additionally, we say a line is \emph{tangent} to a chain if it touches the chain and no point in the chain is above the line. Let chain $C_1$ precede chain $C_2$. If $\operatornamewithlimits{\CALL{tail}}(C_1).y < \operatornamewithlimits{\CALL{tail}}(C_2).y$, then a \emph{bridge} between the chains is a two-point chain $(p_1, p_2)$ where $p_1$ is in chain $C_1$, $p_2$ is in chain $C_2$, and the line through $p_1$ and $p_2$ is tangent to both chains. If $\operatornamewithlimits{\CALL{tail}}(C_1).y \ge \operatornamewithlimits{\CALL{tail}}(C_2).y$, then the chain $(\operatornamewithlimits{\CALL{tail}}(C_1), \operatornamewithlimits{\CALL{tail}}(C_2))$ is a \emph{degenerate bridge}. Let $C = (p_0, p_1, p_2, \cdots, p_n, p_{n+1})$ be a full chain.
$C$ \emph{dominates} a point $p$ if $p$ is below the segment $(p_i, p_{i+1})$, for some $1\le i\le n$. A full chain $C$ dominates a full chain $C'$ if $C$ dominates every point of $C'$. The (northwest) \emph{hull} of a point set $P$, or just the \emph{hull chain} of $P$, is the chain of points from $P$ that dominates every other point in $P$. \subsection{Hull Tree Data Structure} The \emph{hull tree} $T$ for the point set $P$ is a binary tree. $T$ is either $\text{\textbf{nil}}$ or has \begin{enumerate*}[label=\emph{\alph*}), before=\unskip{: }, itemjoin={{; }}, itemjoin*={{, and }}]\item A root node that contains the hull chain for $P$ \item A partition of the non-hull points from $P$ by some $x$-coordinate into $P_L$ and $P_R$. The left and right children of the root are the hull trees $T.L$ and $T.R$ for $P_L$ and $P_R$, respectively. \end{enumerate*} The root node contains the fields \begin{enumerate*}[label=\emph{\alph*}), before=\unskip{: }, itemjoin={{; }}, itemjoin*={{, and }}] \item $T.hull$ is the full hull chain \item $T.l$ is a cursor into the hull chain that initially scans rightwards \item $T.r$ is a cursor into the hull chain that initially goes leftwards. \end{enumerate*} The reason for the two cursors is so that we can be explicit about how the hull chain is scanned. Our analysis depends on the claim that a point is only scanned a constant number of times before being deleted from that chain. We will maintain the invariant that $T.l$ is never to the right of $T.r$. (For example, if $T.r$ coincides with $T.l$ and $T.r$ moves left then $T.l$ will be pushed back to maintain the invariant.) As a preprocessing step, the points are sorted by $x$-coordinates and assigned a 0-based integer rank, represented by a $\lceil\log n\rceil$-bit binary string. We will use a perfectly balanced binary tree as the skeleton for our hull tree. It will have $n$ leaves with no gaps from left to right. The skeleton is only implicit in our construction.
The leaves will be associated with the points by rank from left to right. We will use the conventional trick that the path from the root to a leaf is determined by the binary representation of the rank of the leaf, going left on 0 and right on 1. We now specify $P_L$ as the set of points, not in the hull chain, whose rank starts with 0 in binary and $P_R$ is analogous with a leading bit of 1. We will say a point in, say, $P_L$ ``belongs'' to the subtree $T.L$. As hull points are peeled off, the corresponding leaf nodes will become obsolete, but we never recalculate ranks. It follows that the skeleton of the tree will always be of height $\lceil\log n\rceil$, even though, over time, more and more of the bottom of the tree will become vacant. In addition to this invariant, we explicitly mention these other invariants: (a) $T.hull$ is a full northwest monotone convex chain, (b) $T.hull$ is the hull of $P$, (c) $T.l.x \le T.r.x$, and (d) $T.L$ and $T.R$ are the hull trees for $P_L$ and $P_R$. \begin{lemma} The space complexity of a hull-tree $T$ for a set $P$ of $n$ points is $\BigTheta{n}$. \end{lemma} \begin{proof} The skeleton of the binary tree with $n$ leaves clearly has $O(n)$ nodes of constant size. There is also the space for all the lists representing the various hull chains. However, from the definitions of $P_L$ and $P_R$, every point is on exactly one hull chain. This completes the proof. \qed \end{proof} \subsection{Tree Construction} The construction of the hull tree is done by repeated insertions into an initially empty tree. The overall procedure is a plane-sweep algorithm, since the inserted points will have increasing $y$-coordinates. We shall come back to the $\operatornamewithlimits{\CALL{buildTree}}$ routine after first looking into the $\operatornamewithlimits{\CALL{insert}}$ algorithm. \subsubsection{$\operatornamewithlimits{\CALL{insert}}$.} \label{subsection:insert} Algorithm $\operatornamewithlimits{\CALL{insert}}$ is a recursive algorithm. 
Rather than insert a single point at a time, we feel it is more natural to batch several such insertions when we can first link them into a chain. It takes as input a chain $C$ of vertices and a hull tree $T$. Because $\operatornamewithlimits{\CALL{insert}}$ will only be used in a plane-sweep manner, we will be able to assume as a precondition that $C$ is nonempty, no point in $C$ was previously inserted, and no point in $C$ is dominated by the initial hull tree of $T$. \begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{insert}}(C, T)$} \label{alg:insert} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \SetKw{DownTo}{downto} \Input{$C$, a chain of points to be inserted into $T$, \\ $T$, the hull tree for some point set $P$.} \Output{$T$, the hull tree for $P \cup C$.} \If{$T = \text{\textbf{nil}}$}{ Create a root node with $T.hull = C$\; $T.l = \operatornamewithlimits{\CALL{head}}(C); T.r = \operatornamewithlimits{\CALL{tail}}(C) $ \; } \Else{ $(q_l,q_r) = \operatornamewithlimits{\CALL{tangents}}(\operatornamewithlimits{\CALL{head}}(C), \operatornamewithlimits{\CALL{tail}}(C), T)$ \label{alg:insert:else:start} \; $C^\prime = $ the portion of $T.hull$ strictly between $q_l$ and $q_r$\; Replace $C^\prime$ by $C$ within $T.hull$ \; Scan and split $C^\prime$ to create these two chains\; \ \ \ \ \ \ $C_L=\{p\in C^\prime\:|\: p \mbox{ belongs in } T.L\}$; \ \ $C_R=\{p\in C^\prime\:|\: p \mbox{ belongs in } T.R\}$\; $T.L = \operatornamewithlimits{\CALL{insert}}(C_L, T.L)$ \label{alg:insert:else:rec:l} ;\ \ $T.R = \operatornamewithlimits{\CALL{insert}}(C_R, T.R)$ \label{alg:insert:else:rec:r} \;\label{alg:insert:else:end} } \Return $T$ \; \end{algorithm} We specify the procedure $\operatornamewithlimits{\CALL{tangents}}(a_l, a_r, T)$ which assumes $a_l.x < a_r.x$, $a_l.y < a_r.y$, and neither $a_l$ nor $a_r$ is dominated by $T.hull=(h_0, h_1, \ldots , h_k, h_{k+1})$. It returns a pair of points $(q_l,q_r)$, each from $T.hull$.
We require the line through $q_l$ and $a_l$ be a leftward tangent. If $h_1.x<a_l.x$ and $h_1.y<a_l.y$, this is well-defined. Otherwise, we return a degenerate tangent with $q_l=h_0$. Similarly, if $a_r.x<h_k.x$ and $a_r.y<h_k.y$, then $q_r$ defines a rightward tangent with $a_r$. Otherwise, we return $q_r=h_{k+1}$. We sketch the implementation of $\operatornamewithlimits{\CALL{tangents}}$. If the leftward tangent is well-defined, we compute $q_l$ by scanning from the current position of $T.l$. In constant time we can determine if we should scan to the left or to the right. (As is standard we keep track of the changing slopes of lines through $a_l$.) Similarly, if the tangent is well-defined, we scan for $q_r$ using $T.r$. \begin{lemma} \label{lemma:insert:correctness} Algorithm $\operatornamewithlimits{\CALL{insert}}$ correctly inserts $C$ into $T$. \end{lemma} \begin{proof} The proof is by induction on the number of points. The base case, where $T$ is empty, is clear. In general, we only need to establish that the new $T.hull$ is correct; this follows since all the points removed, in $C^\prime$, are dominated by the new hull. By the recursive definition of hull trees the points of $C^\prime$ now belong in either $T.L$ or $T.R$ and are recursively inserted into those trees. \qed \end{proof} \subsubsection{$\operatornamewithlimits{\CALL{buildTree}}$.} \label{section:buildtrees} Given a point set $P$, algorithm $\operatornamewithlimits{\CALL{buildTree}}$ starts by sorting these points by their $x$-coordinates. The 0-based index of a point $p$ in such a sorted order is called its \emph{rank}. As discussed above, a point's rank is used to guide its descent down the hull tree during insertion. 
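The rank-guided descent is the standard bit-routing trick. A small illustrative Python sketch (not the paper's code):

```python
def descent_path(rank, n):
    """Root-to-leaf path for a point of the given x-rank in a perfectly
    balanced skeleton with n leaves: read the rank as a binary string of
    ceil(log2(n)) bits, going left on 0 and right on 1."""
    height = max(1, (n - 1).bit_length())
    bits = format(rank, '0{}b'.format(height))
    return ['L' if b == '0' else 'R' for b in bits]

print(descent_path(5, 16))  # ['L', 'R', 'L', 'R']  (5 = 0101 in 4 bits)
```

A rank whose leading bit is 0 descends left, matching the partition of the non-hull points into $P_L$ and $P_R$ described earlier.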
\begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{buildTree}}(P)$} \label{alg:BuildTree} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$P$, a set of points, $\{p_1, p_2, \dots , p_n\}$.} \Output{$T$, a hull tree built from $P$.} Compute the rank of each point in $P$ by $x$-coordinate \label{buildtree:rank}\; Sort the points in $P$ by increasing $y$-coordinate \; Create an empty hull tree $T$ \label{buildtree:newhulltree}\; \For{each $p$ in order}{ $\operatornamewithlimits{\CALL{insert}}(p, T)$ } \Return $T$ \; \end{algorithm} Recall that the $\operatornamewithlimits{\CALL{insert}}$ procedure expects a hull chain as the first parameter, so the call to $\operatornamewithlimits{\CALL{insert}}$ in $\operatornamewithlimits{\CALL{buildTree}}$ is understood to pass a chain of one vertex. Note that such singleton chains satisfy the preconditions of $\operatornamewithlimits{\CALL{insert}}$. Once all the points have been inserted, the hull tree is returned. \begin{lemma} \label{lemma:insert:tail} Right after a point $p$ is inserted into a hull tree $T$, $\operatornamewithlimits{\CALL{tail}}(T.hull) = p$. \end{lemma} \begin{proof} Since points are inserted into $T$ by increasing $y$-coordinate value, the most recently inserted point must have the largest $y$-coordinate value so far. So it must be in the root hull and cannot have any point after it. \qed \end{proof} \begin{lemma} \label{lemma:buildtree:runtime} Algorithm $\operatornamewithlimits{\CALL{buildTree}}$ constructs a hull tree of a set of $n$ points in $\BigO{n \log n}$ time. \end{lemma} \begin{proof} Clearly the initial steps are within the time bound. It remains only to show that all the invocations of $\operatornamewithlimits{\CALL{insert}}$ take no more than $\BigO{n \log n}$ time. Consider an arbitrary point $p$ inserted into $T$ by $\operatornamewithlimits{\CALL{buildTree}}$. Initially, it goes into $T.hull$ by \cref{lemma:insert:tail}.
In subsequent iterations, the point either stays within its current hull chain or descends one level owing to an eviction from its current hull chain. The cost of all evictions from a chain $C$ is dominated by the right-to-left tangent scan. We consider the number of points we scan past (i.e. not counting the points at the beginning and end of scan). Consider the cursor $T.l$. It scans left to right past points once; if we scan past a point a second time, going right to left, then that point will be in $C^\prime$ and will be evicted from this level. Symmetric observations hold for $T.r$. And a point will be scanned a final time if it is in $C_L$ or $C_R$. Hence we will scan past a point a constant number of times before it is evicted. A call to $\operatornamewithlimits{\CALL{insert}}$ takes constant time every time it is invoked (and it is only invoked when at least one point has been evicted from its parent). In addition $\operatornamewithlimits{\CALL{insert}}$ takes time bounded by the number of points scanned past. Note that any particular point $p$ starts at the root and only moves downward (when evicted) and there are only $O(\log n)$ levels. Hence during the execution of $\operatornamewithlimits{\CALL{buildTree}}$ both the total number of points evicted and the total number of points scanned past is bounded by $O(n\log n)$. \qed \end{proof} \begin{lemma} \label{lemma:insert:runtime} Each point is handled by $\operatornamewithlimits{\CALL{buildTree}}$ in $\BigO{\log n}$ amortized time. \end{lemma} \begin{proof} By \cref{lemma:buildtree:runtime}, the cost of all invocations of $\operatornamewithlimits{\CALL{insert}}$ by algorithm $\operatornamewithlimits{\CALL{buildTree}}$ is $\BigO{n \log n}$, which amortizes to $\BigO{\log n}$ per point. 
\qed \end{proof} \subsection{Hull Peeling} \label{section:hull:peeling} We begin the discussion of hull peeling by examining algorithm $\operatornamewithlimits{\CALL{extractHull}}$, which takes a valid hull tree $T$, extracts from it the root hull chain $T.hull$, and then returns it. \begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{extractHull}}(T)$} \label{alg:extractHull} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$T$, a hull tree for a non-empty point set $P$; let $H$ be the set of points in $T.hull$.} \Output{the hull $h$, and $T$, a hull tree for the point set $P\setminus H$} $h = T.hull$ \; $\operatornamewithlimits{\CALL{delete}}(h, T)$ \; \Return $h$ \; \end{algorithm} The correctness and cost of algorithm $\operatornamewithlimits{\CALL{extractHull}}$ obviously depend on $\operatornamewithlimits{\CALL{delete}}$. $\operatornamewithlimits{\CALL{delete}}$ is called after a subchain has been cut out of the middle of the root hull chain. This can be visualized if we imagine the root hull as a roof: after the middle of the roof has caved in, a left and a right ``overhang'' remain. An overhang might degenerate to just a sentinel point. Algorithm $\operatornamewithlimits{\CALL{delete}}$ itself also depends on other procedures, which we discuss first. \subsubsection{$\operatornamewithlimits{\CALL{below}}$.} We could just connect the two endpoints of the overhangs with a straight line to repair the roof. However, because of the curvature of the old roof, some of the points in $T.L$ or $T.R$ might be above this new straight line. In that case, these points need to move out of their subtrees and join in to form the new root hull chain. Therefore, we will need a Boolean function $\operatornamewithlimits{\CALL{below}}(T,p,q)$ to that end. It returns true iff every point of the root hull is below the line through $p$ and $q$.
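The visibility predicates here all reduce to a single constant-time orientation test. A hedged Python sketch of that primitive (illustrative only, not the paper's code):

```python
def above(p, q, r):
    """True iff r is strictly above the line through p and q, assuming
    p.x < q.x.  Cross product of (q - p) with (r - p): positive means r
    lies to the left of the rightward-directed line, i.e. above it."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0]) > 0
```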
A precondition of $\operatornamewithlimits{\CALL{below}}(T,p,q)$ is that the root hull can rise above the line through $p$ and $q$ only between $p$ and $q$. We also specify the Boolean function $\operatornamewithlimits{\CALL{above}}(p,q,r)$ to be true if point $r$ is above the line passing through $p$ and $q$. This is done with a standard constant-time test. Note that $\operatornamewithlimits{\CALL{below}}$ is quite different from $\operatornamewithlimits{\CALL{above}}$. The functions $\operatornamewithlimits{\CALL{pred}}$ and $\operatornamewithlimits{\CALL{succ}}$ operate on the corresponding hull chain in the obvious way. \begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{below}}(T, p_l, p_r)$} \label{alg:below} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$T$, a hull tree, \\ $p_l$: the rightmost point in the left overhang, \\ $p_r$: the leftmost point in the right overhang } \Output{True iff every point of $T.hull$ is below the line through $p_l$ and $p_r$ } \lIf{$p_l$ or $p_r$ is a sentinel }{\Return false} \If{$\operatornamewithlimits{\CALL{above}}(\operatornamewithlimits{\CALL{pred}}(T.r), T.r, p_l)$ }{ \While{$\lnot\operatornamewithlimits{\CALL{above}}(T.l,\operatornamewithlimits{\CALL{succ}}(T.l),p_r)\text{\textbf{ and }} \lnot\operatornamewithlimits{\CALL{above}}(p_l,p_r,T.l)$}{ $T.l = \operatornamewithlimits{\CALL{succ}}(T.l)$ } \Return $\lnot\operatornamewithlimits{\CALL{above}}(p_l,p_r,T.l)$ \; } \Else{ \While{$\lnot\operatornamewithlimits{\CALL{above}}(\operatornamewithlimits{\CALL{pred}}(T.r),T.r,p_l)\text{\textbf{ and }} \lnot\operatornamewithlimits{\CALL{above}}(p_l,p_r,T.r)$}{ $T.r = \operatornamewithlimits{\CALL{pred}}(T.r)$ } \Return $\lnot\operatornamewithlimits{\CALL{above}}(p_l,p_r,T.r)$ \; } \end{algorithm} \begin{lemma} \label{lemma:below} Algorithm $\operatornamewithlimits{\CALL{below}}$ runs in linear time. Further, if it returns false, either $T.l$ or $T.r$ is above the line through $p_l$ and $p_r$.
\end{lemma} \begin{proof} Recall that one cursor may push the other cursor as it moves. Only one cursor moves. If $T.r$ is too far left to help decide, then $T.l$ moves until it is above the line or we find a tangent. \qed \end{proof} \subsubsection{$\operatornamewithlimits{\CALL{getBridge}}$.} Given two hull trees, where one precedes the other, algorithm $\operatornamewithlimits{\CALL{getBridge}}$ scans the hull chains of the hull trees to find the bridge that connects them. \begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$} \label{alg:getBridge} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$T.L$: a left hull tree of some hull tree $T$, \\ $T.R$: a right hull tree of $T$ } \Output{$p_l, p_r$: the left and right bridge points for $T.L$ and $T.R$ } \lIf{$ T.L = \text{\textbf{nil}}$} {$ p_l = (-\infty, -\infty)$ } \lElse{$p_l = T.L.l$} \If{$T.R = \text{\textbf{nil}} \text{\textbf{ or }} \operatornamewithlimits{\CALL{tail}}(T.L.hull).y > \operatornamewithlimits{\CALL{tail}}(T.R.hull).y $}{$p_r = (+\infty, -\infty)$} \lElse{$p_r = T.R.r$ } \If{$ \operatornamewithlimits{\CALL{above}}(T.L.l, T.R.r, \operatornamewithlimits{\CALL{succ}}(T.L.l))$ }{ $T.L.l = \operatornamewithlimits{\CALL{succ}}(T.L.l)$ \; $(p_l,p_r) = \operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$ \; } \If{$ \operatornamewithlimits{\CALL{above}}(T.L.l, T.R.r, \operatornamewithlimits{\CALL{pred}}(T.L.l))$ }{ $T.L.l = \operatornamewithlimits{\CALL{pred}}(T.L.l)$ \; $(p_l,p_r) = \operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$ \; } \If{$ \operatornamewithlimits{\CALL{above}}(T.L.l, T.R.r, \operatornamewithlimits{\CALL{succ}}(T.R.r))$}{ $T.R.r = \operatornamewithlimits{\CALL{succ}}(T.R.r)$ \; $(p_l,p_r) = \operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$ \; } \If{$\operatornamewithlimits{\CALL{above}}(T.L.l, T.R.r, \operatornamewithlimits{\CALL{pred}}(T.R.r))$}{ $T.R.r = \operatornamewithlimits{\CALL{pred}}(T.R.r)$ \; $(p_l,p_r) = 
\operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$ \; } \Return $(p_l,p_r)$ \; \end{algorithm} \begin{lemma} \label{lemma:getbridge} Given two valid hull trees $T.L$ and $T.R$, $\operatornamewithlimits{\CALL{getBridge}}$ correctly computes the bridge connecting $T.L.hull$ and $T.R.hull$ in time linear in the lengths of those hulls. \end{lemma} \begin{proof} The scan for the left bridge point in $T.L$ is done using its left-to-right cursor $T.L.l$, while the scan for the right bridge point in $T.R$ is done using $T.R$'s right-to-left cursor $T.R.r$. On completion, the two cursors will be pointing to the bridge points. Since each vertex is scanned past at most once, the runtime is $\BigO{|T.L.hull| + |T.R.hull|}$. This completes the proof. \qed \end{proof} \subsubsection{$\operatornamewithlimits{\CALL{delete}}$.} \label{subsection:delete} The general idea of $\operatornamewithlimits{\CALL{delete}}(C,T)$ is that if $C$ is a subchain of $T.hull$ then the procedure will return with the tree $T$ being a valid hull tree for the point set $P\setminus H$, where $H$ is the set of points in $C$. The procedure will be employed during the peeling process, and so $C$ will initially be the entire root hull. The root hull will have to be replaced by moving points up from the subtrees. In fact, the points moved up will be subchains of the root hulls of $T.L$ and $T.R$. Recursively, these subtrees will in turn need to repair their root hulls. We do a case analysis below and find that the only procedure we will need is $\operatornamewithlimits{\CALL{delete}}$. Again we shall employ the analogy of a roof caving in, in the middle. The rebuilding of the ``roof'' starts with identifying the endpoints of the remaining left and right overhangs. These points will be the sentinels if the overhangs are empty. The endpoints $a_l$ and $a_r$ define a line segment, $(a_l, a_r)$, which we shall call the \emph{roof segment}.
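The constant-time test behind $\operatornamewithlimits{\CALL{above}}(p,q,r)$ mentioned earlier is the standard orientation (cross-product sign) check. As an illustrative aside (the Python function and the tuple representation of points are ours, not part of the paper's pseudocode), it can be sketched as:

```python
def above(p, q, r):
    """Return True iff point r lies strictly above the line through p and q.

    Points are (x, y) tuples, with p to the left of q.  The 2-D cross
    product of (q - p) and (r - p) is positive exactly when r lies to
    the left of the directed line from p to q, i.e. above it.
    """
    (px, py), (qx, qy), (rx, ry) = p, q, r
    return (qx - px) * (ry - py) - (qy - py) * (rx - px) > 0


# Sanity checks on the horizontal line through (0, 0) and (2, 0).
assert above((0, 0), (2, 0), (1, 1))       # strictly above
assert not above((0, 0), (2, 0), (1, -1))  # below
assert not above((0, 0), (2, 0), (1, 0))   # collinear points do not count
```

Robustness questions (exact arithmetic for nearly collinear inputs) are outside the scope of this sketch.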
Before continuing, let us examine the dynamics of the points in the root hull of a (sub)tree $T$ during successive invocations of $\operatornamewithlimits{\CALL{delete}}$. Successive calls might cause the roof segment $(a_l,a_r)$ to grow. For each successive roof segment, the root hull of $T$ is queried. There are two phases involved: before the root hull intersects the roof segment, and thereafter. During the first phase, each new roof segment is below the previous one (cf. \cref{fig:phase:1} and \cref{fig:phase:2}), and the root hull is not changed but is queried by a series of roof segments $(a_l,a_r)$. In the second phase, the root hull gives up its subchain from $T.l$ to $T.r$ to its parent (or is extracted). Thereafter, for each new excision, $T.l$ and $T.r$ will move further apart, until each reaches a sentinel. This is shown inductively on the depth of the subtree. In the first phase the cursors ($T.l$ and $T.r$) start at the $\operatornamewithlimits{\CALL{head}}$ and $\operatornamewithlimits{\CALL{tail}}$ of the list and move in response to $\operatornamewithlimits{\CALL{below}}$ queries. Each cursor will move in one direction at first and then, only once, change direction. This is because each subsequent query has $(a_l,a_r)$ moving apart on the parent's convex chain. See \cref{fig:phase:1} and \cref{fig:phase:2}. Now we examine the algorithm more carefully.
\begin{algorithm}[!ht] \caption{$\operatornamewithlimits{\CALL{delete}}(C, T)$} \label{alg:delete} \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{$C$, a chain of points to be deleted from $T.hull$, \\ $T$: a hull tree for point set $P$} \Output{$T$: The updated hull tree for the point set $P\setminus H$} $ a_l = \operatornamewithlimits{\CALL{pred}}(\operatornamewithlimits{\CALL{head}}(C)); \enspace a_r = \operatornamewithlimits{\CALL{succ}}(\operatornamewithlimits{\CALL{tail}}(C))$ \; $\operatornamewithlimits{\mathit{Use}}_L = \lnot\operatornamewithlimits{\CALL{below}}(T.L,a_l,a_r); \enspace \operatornamewithlimits{\mathit{Use}}_R = \lnot\operatornamewithlimits{\CALL{below}}(T.R,a_l,a_r) $ \label{alg:delete:call:getExtremes}\; \lIf {$\operatornamewithlimits{\mathit{Use}}_L$}{$(L_l,L_r) = \operatornamewithlimits{\CALL{tangents}}(a_l,a_r,T.L) $} \lIf {$\operatornamewithlimits{\mathit{Use}}_R$}{$(R_l,R_r) = \operatornamewithlimits{\CALL{tangents}}(a_l,a_r,T.R) $} \Comment{Case 1: Neither subtree used to rebuild the roof} \lCase{$\lnot \operatornamewithlimits{\mathit{Use}}_L \text{\textbf{ and }} \lnot \operatornamewithlimits{\mathit{Use}}_R$}{ nothing } \Comment{Case 2: Only the right subtree used to rebuild the roof} \Case{$\operatornamewithlimits{\mathit{Use}}_R \text{\textbf{ and }} (\lnot \operatornamewithlimits{\mathit{Use}}_L \text{\textbf{ or }} \operatornamewithlimits{\CALL{above}}(a_r,L_l,R_l))$}{ $C_R =$ chain in $T.R.hull$ from $R_l$ to $R_r$, inclusive \; Update $T.hull$ with $C_R$ inserted between $a_l$ and $a_r$\; $\operatornamewithlimits{\CALL{delete}}(C_R, T.R)$ \; } \Comment{Case 3: Only the left subtree used to rebuild the roof} \Case{$\operatornamewithlimits{\mathit{Use}}_L\text{\textbf{ and }} (\lnot \operatornamewithlimits{\mathit{Use}}_R \text{\textbf{ or }} \operatornamewithlimits{\CALL{above}}(R_l,a_l,L_r))$}{ $C_L =$ chain in $T.L.hull$ from $L_l$ to $L_r$, inclusive \; Update $T.hull$ with $C_L$ inserted between $a_l$ and $a_r$\;
$\operatornamewithlimits{\CALL{delete}}(C_L, T.L)$ \; } \Comment{Case 4: Both subtrees used to rebuild the roof} \Case{$\operatornamewithlimits{\mathit{Use}}_L \text{\textbf{ and }} \operatornamewithlimits{\mathit{Use}}_R \text{\textbf{ and }} \operatornamewithlimits{\CALL{above}}(a_l,R_l,L_l)\text{\textbf{ and }}\operatornamewithlimits{\CALL{above}}(L_r,a_r,R_r)$}{ $(q_l,q_r) = \operatornamewithlimits{\CALL{getBridge}}(T.L, T.R)$ \; $C_L =$ chain in $T.L.hull$ from $L_l$ to $q_l$, inclusive \; $C_R =$ chain in $T.R.hull$ from $q_r$ to $R_r$, inclusive \; $D =$ chain from concatenating $C_L$ to $C_R$ \; Update $T.hull$ with $D$ inserted between $a_l$ and $a_r$\; $\operatornamewithlimits{\CALL{delete}}(C_L, T.L)$ \; $\operatornamewithlimits{\CALL{delete}}(C_R, T.R)$ \; } \Return $T$ \; \end{algorithm} \begin{figure} \centering \begin{minipage}{.5\textwidth} \centering \input{images/fig_a.tikz} \caption{Before $a_l$ and $a_r$ move up.} \label{fig:phase:1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \input{images/fig_b.tikz} \caption{After $a_l$ and $a_r$ have moved up. } \label{fig:phase:2} \end{minipage} \end{figure} The rebuilding process breaks into four cases depending on whether any points from $T_L$ and $T_R$ are above the roof segment and hence will be involved in the rebuilding. \setcounter {case} {0} \begin{case} Neither subtree is needed to rebuild the roof.\end{case} This case, depicted in \cref{fig:layers:delete:case1}, is when the deletion of subchain $C$ from $T.hull$ leaves a hull that already dominates all other points. \begin{case} Only the right subtree is needed to rebuild the roof.\end{case} This case, depicted in \cref{fig:layers:delete:case2}, is when $T.hull$ no longer dominates the hull chain in the right subtree. A second subcase is when the left root hull does extend above the $(a_l,a_r)$ segment but is still below the left tangent from the right root hull. 
To maintain the hull tree invariants, a subchain of $T.R.hull$ will have to be extracted and moved up to become part of $T.hull$. In Case 2, only the vertices of $T.R.hull$ that will be moved up are scanned past twice, since points scanned past in phase two are removed from the current hull. \begin{figure}[!ht] \begin{minipage}{.5\textwidth} \centering \input{images/layers_delete_case_1.tikz} \caption{Case 1: $a_l$ can connect to $a_r$.} \label{fig:layers:delete:case1} \end{minipage}% \begin{minipage}{.5\textwidth} \centering \input{images/layers_delete_case_2.tikz} \caption{Case 2: Right subtree involved in rebuilding.} \label{fig:layers:delete:case2} \end{minipage} \end{figure} \begin{case} Only the left subtree is needed to rebuild the roof.\end{case} This case, depicted in \cref{fig:layers:delete:case3}, is the converse of case 2. Again, a second subcase is when the right root hull does extend above the $(a_l,a_r)$ segment but is still below the right tangent from the left root hull. As in the previous case, only the vertices of $T.L.hull$ that will be moved up are scanned twice. \begin{comment} After the call to $\operatornamewithlimits{\CALL{getBridge}}$ in line \cref{alg:delete:call:getBridge} of algorithm $\operatornamewithlimits{\CALL{delete}}$, $T.L$'s left-to-right cursor $T.L.l$ is already positioned on the left bridge point, by Postcondition 3 of algorithm $\operatornamewithlimits{\CALL{getBridge}}$, but its right-to-left cursor $T.L.r$ is still pointing to $\operatornamewithlimits{\CALL{tail}}(T.L.hull)$, not having done any scan so far. The scan for the left tangent point visible to $a_l$ is done by walking forward or backward. The decision of which direction to walk can be done in constant time. If the walk forward toward $T.L.r$ is chosen, then all the points encountered will be encountered for the first time.
However, if the scan is backward toward the head of $T.L.hull$, then any point encountered is a point that will be moved up to join the roof. The scan for the right tangent point, visible to $a_r$ and above the segment $a_la_r$, is done by having $T.L.r$ walk up the chain, until $a_r$ can see no further, at which point the right tangent point has been found. Note that in this walk, all the points that were encountered were seen for the first time. \qed \end{comment} \begin{case} Both subtrees are needed to rebuild the roof.\end{case} In this case, we need to compute two subchains, one from $T.L.hull$ and the other from $T.R.hull$, which are then connected by a bridge to fix the roof. \begin{figure}[!ht] \begin{minipage}{.5\textwidth} \centering \input{images/layers_delete_case_3.tikz} \caption{Case 3: Only left subtree involved. } \label{fig:layers:delete:case3} \end{minipage} \begin{minipage}{.5\textwidth} \centering \input{images/layers_delete_case_4.tikz} \caption{Case 4: Both subtrees involved.} \label{fig:layers:delete:case4} \end{minipage} \end{figure} \begin{lemma} \label{lemma:case4} In Case 4, only the vertices of $T.L.hull$ and $T.R.hull$ that will be moved up to join the roof are scanned twice. \end{lemma} \begin{proof} Recall that after the call to $\operatornamewithlimits{\CALL{getBridge}}$, the two cursors $T.L.l$ and $T.R.r$ are already pointing to the left and right bridge points. The scan for the left tangent point visible to $a_l$ and above the segment $a_la_r$ is done by walking $T.L.l$ forward or backward. The decision of which direction to walk can be done in constant time. If the walk toward $T.L.r$ is chosen, then all the points encountered will be encountered for the first time. However, if the scan is backward toward the head of $T.L.hull$, then any point encountered is one that will be moved up. A symmetrical argument applies on the right.
\qed \end{proof} \begin{theorem} \label{theorem:correctness:delete:cont} Consider a sequence of calls to $\operatornamewithlimits{\CALL{extractHull}}$, starting with $n$ points, until all points have been extracted. The total time amortized over all calls is $\BigO{n\log n}$. \end{theorem} \begin{proof} We assume that we start with a hull tree (built with $\operatornamewithlimits{\CALL{buildTree}}$). The run time is dominated by cursor movement. Each procedure takes constant time (and the number of calls is proportional to the number of chain movements) plus the number of points a cursor has moved past. The above discussion shows that each point is passed over a constant number of times before moving out of a chain. As with $\operatornamewithlimits{\CALL{buildTree}}$, this leads to our result. \qed \end{proof} \subsection{Merge} Recall that our discussion so far has only been for ``northwest'' hull trees, which we now call $T_{NW}$. We rotate the point set 90 degrees and recompute three times, resulting in the four hull trees $T_{NW}, T_{NE}, T_{SE}$, and $T_{SW}$. We will use these to construct the successive convex hulls by peeling. When the points are in general position, the extreme points (topmost, bottommost, rightmost, and leftmost) are on the convex hull. Note that some of these may coincide. Further, it is clear that the chain that connects the leftmost point with the topmost is just the northwest hull chain found at the root of $T_{NW}$. The rest of the convex hull consists of the root hull chains of the other trees. \begin{comment} \Cref{fig:layers:upperhull} shows the upper hull of $T_{NW}$ before it is peeled off. \begin{figure}[ht] \centering \input{images/hulltree.tikz} \caption{An upper hull.} \label{fig:layers:upperhull} \end{figure} \end{comment} Initially all the points are ``unmarked''. When marked, a point is marked in all four trees.
We iteratively perform the following actions to construct each layer \begin{enumerate*}[label=\emph{\alph*}), before=\unskip{: }, itemjoin={{; }}, itemjoin*={{, and }}] \item Retrieve and delete the root hull chain from each of the hull trees \item Remove the marked points from each chain \item Mark the points remaining in each chain \item Concatenate the four chains to form the convex hull for this layer. \end{enumerate*} This process stops when all vertices have been marked, which is when all the points have been deleted from all the trees. Correctness follows from the discussion above. \begin{lemma} Given a set $P$ of $n$ points and the four hull trees of $P$ with the four orientations of $NW, NE, SE$, and $SW$, the $\operatornamewithlimits{\CALL{merge}}$ procedure executes in $\BigO{n\log n}$ time. \end{lemma} \begin{proof} Note that the sum of the lengths of all the chains is $\BigO{n}$. So marking points and removing them later all in all takes linear time. Recall that all the calls to $\operatornamewithlimits{\CALL{delete}}$ altogether take $\BigO{n\log n}$ time. \qed \end{proof} \section{Conclusion} \label{section:layering:openProblems} We have provided a new simple optimal algorithm for the convex layers problem. Detailed pseudocode, space and time complexity results are also given. The pseudocode might appear detailed, but that is only because the approach is simple enough that we can deal with all cases explicitly. However, by using four sets of hulls, we only need to work with monotone chains, which simplifies our case analyses and makes the correctness argument straightforward. The extension to dynamic point sets remains an open problem. \begin{comment} There are other related problems that could benefit from simpler but optimal algorithms. One example is the dynamic convex hull problem, where the convex hull is to be maintained under an arbitrary sequence of insert, delete and query requests.
To date, the most practical algorithm for this problem remains that of Overmars and van Leeuwen \cite{Overmars1981166}. Their algorithm runs in $\BigO{\log^2 n}$ time per update and $\BigO{\log n}$ per query request. Although the work of Brodal and Jacob \cite{brodal2002dynamic} resolved the long-standing open problem in 2002 by achieving the optimal $\BigO{\log n}$ amortized time for update and query requests and optimal $\BigO{n}$ space, their solution depends on data structures that are too intricate and complex to be practical. One line of future work would be to explore simpler and more practical solutions that are nonetheless optimal, either in an amortized sense or in the worst-case. \end{comment} \bibliographystyle{splncs03}
from unittest import mock

import fixtures as fx
import testtools

from nova.tests import fixtures

"""Test request logging middleware under various conditions.

The request logging middleware is needed when running under something
other than eventlet. Because Nova grew up on eventlet and its wsgi
server, our user facing data (the log stream) was a mix of what Nova
was emitting and what eventlet.wsgi was emitting on our behalf. When
running under uwsgi we want to make sure that we have equivalent
coverage.

All these tests use GET / to hit an endpoint that doesn't require the
database setup. We have to do a bit of mocking to make that work.
"""


class TestRequestLogMiddleware(testtools.TestCase):

    def setUp(self):
        super(TestRequestLogMiddleware, self).setUp()

        # this is the minimal set of magic mocks needed to convince
        # the API service it can start on its own without a database.
        mocks = ['nova.objects.Service.get_by_host_and_binary',
                 'nova.objects.Service.create',
                 'nova.utils.raise_if_old_compute']
        self.stdlog = fixtures.StandardLogging()
        self.useFixture(self.stdlog)
        for m in mocks:
            p = mock.patch(m)
            self.addCleanup(p.stop)
            p.start()

    @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit')
    def test_logs_requests(self, emit):
        """Ensure requests are logged.

        Make a standard request for / and ensure there is a log entry.
        """
        emit.return_value = True
        conf = self.useFixture(fixtures.ConfFixture()).conf
        self.useFixture(fixtures.RPCFixture('nova.test'))
        api = self.useFixture(fixtures.OSAPIFixture()).api
        resp = api.api_request('/', strip_version=True)

        # the content length might vary, but the important part is
        # what we log is what we return to the user (which turns out
        # to excitingly not be the case with eventlet!)
        content_length = resp.headers['content-length']
        log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 '
                '"GET /" status: 200 len: %s' % content_length)
        self.assertIn(log1, self.stdlog.logger.output)

        # Verify handling of X-Forwarded-For header, example: load balancer.
        # First, try without setting CONF.api.use_forwarded_for, it should not
        # use the header value.
        headers = {'X-Forwarded-For': '1.2.3.4'}
        resp = api.api_request('/', strip_version=True, headers=headers)
        content_length = resp.headers['content-length']
        log2 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 '
                '"GET /" status: 200 len: %s' % content_length)
        self.assertIn(log2, self.stdlog.logger.output)

        # Now set CONF.api.use_forwarded_for, it should use the header value.
        conf.set_override('use_forwarded_for', True, 'api')
        headers = {'X-Forwarded-For': '1.2.3.4'}
        resp = api.api_request('/', strip_version=True, headers=headers)
        content_length = resp.headers['content-length']
        log3 = ('INFO [nova.api.openstack.requestlog] 1.2.3.4 '
                '"GET /" status: 200 len: %s' % content_length)
        self.assertIn(log3, self.stdlog.logger.output)

    @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit')
    def test_logs_mv(self, emit):
        """Ensure logs register microversion if passed.

        This makes sure that microversion logging actually shows up
        when appropriate.
        """
        emit.return_value = True
        self.useFixture(fixtures.ConfFixture())
        # NOTE(sdague): all these tests are using the
        self.useFixture(
            fx.MonkeyPatch(
                'nova.api.openstack.compute.versions.'
                'Versions.support_api_request_version', True))
        self.useFixture(fixtures.RPCFixture('nova.test'))
        api = self.useFixture(fixtures.OSAPIFixture()).api
        api.microversion = '2.25'
        resp = api.api_request('/', strip_version=True)

        content_length = resp.headers['content-length']
        log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 '
                '"GET /" status: 200 len: %s microversion: 2.25 time:'
                % content_length)
        self.assertIn(log1, self.stdlog.logger.output)

    @mock.patch('nova.api.openstack.compute.versions.Versions.index')
    @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit')
    def test_logs_under_exception(self, emit, v_index):
        """Ensure that logs still emit under unexpected failure.

        If we get an unexpected failure all the way up to the top, we
        should still have a record of that request via the except
        block.
        """
        emit.return_value = True
        v_index.side_effect = Exception("Unexpected Error")
        self.useFixture(fixtures.ConfFixture())
        self.useFixture(fixtures.RPCFixture('nova.test'))
        api = self.useFixture(fixtures.OSAPIFixture()).api
        api.api_request('/', strip_version=True)

        log1 = ('INFO [nova.api.openstack.requestlog] 127.0.0.1 "GET /"'
                ' status: 500 len: 0 microversion: - time:')
        self.assertIn(log1, self.stdlog.logger.output)

    @mock.patch('nova.api.openstack.requestlog.RequestLog._should_emit')
    def test_no_log_under_eventlet(self, emit):
        """Ensure that logs don't end up under eventlet.

        We still set the _should_emit return value directly to prevent
        the situation where eventlet is removed from tests and this
        preventing that.

        NOTE(sdague): this test can be deleted when eventlet is no
        longer supported for the wsgi stack in Nova.
        """
        emit.return_value = False
        self.useFixture(fixtures.ConfFixture())
        self.useFixture(fixtures.RPCFixture('nova.test'))
        api = self.useFixture(fixtures.OSAPIFixture()).api
        api.api_request('/', strip_version=True)

        self.assertNotIn("nova.api.openstack.requestlog",
                         self.stdlog.logger.output)
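For reference, the shape of the middleware these tests exercise can be sketched as a tiny WSGI wrapper. This is an illustrative stand-in with invented names, not Nova's actual `nova.api.openstack.requestlog.RequestLog` implementation:

```python
import logging

LOG = logging.getLogger("requestlog.sketch")


class RequestLogSketch(object):
    """Minimal WSGI middleware logging peer, request line, status and
    response length, in the spirit of the format asserted above."""

    def __init__(self, app):
        self.app = app

    def __call__(self, environ, start_response):
        seen = {}

        def _start_response(status, headers, exc_info=None):
            # Capture what the wrapped app reports back to the client.
            seen['status'] = status.split()[0]
            seen['len'] = dict(
                (k.lower(), v) for k, v in headers).get('content-length', '0')
            return start_response(status, headers, exc_info)

        try:
            return self.app(environ, _start_response)
        finally:
            # Runs even if the app raised, mirroring the "status: 500
            # len: 0" case tested in test_logs_under_exception.
            LOG.info('%s "%s %s" status: %s len: %s',
                     environ.get('REMOTE_ADDR', '-'),
                     environ.get('REQUEST_METHOD', '-'),
                     environ.get('PATH_INFO', '-'),
                     seen.get('status', '500'),
                     seen.get('len', '0'))
```

The real middleware additionally handles microversions, `X-Forwarded-For`, timing, and the eventlet detection that `_should_emit` encapsulates.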
Q: Move Xml Node by xslt

How can I move a selected XML node to the last position among its sibling nodes? Below is a sample XML.

    <Custom>
      <Root name="root1">
        <Folder name="Folder1">
          <Node name="Sample Node">
            <LevelGroup>
              <Level name="1">First Level</Level>
              <Level name="5">Fifth Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
        <Folder name="Folder2">
          <Node name="Node A">
            <LevelGroup>
              <Level name="1">First Level</Level>
              <Level name="2">Second Level</Level>
            </LevelGroup>
          </Node>
          <Node name="Node C">
            <LevelGroup>
              <Level name="4">Fourth Level</Level>
              <Level name="5">Fifth Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
      <Root name="root2">
        <Folder name="FolderA">
          <Node name="Node X">
            <LevelGroup>
              <Level name="1">First Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
      <Root name="root4">
        <Folder name="FolderC">
          <Node name="Node Z">
            <LevelGroup>
              <Level name="1">First Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
    </Custom>

In the above XML there are many "Root" and "Node" elements, and each "Node" has an attribute named "name". If the name attribute value contains the string "Sample Node", then that node has to move to the last position among the matched "Node" elements. How can we achieve this with XSLT?
Below is the expected result:

    <Custom>
      <Root name="root1">
        <Folder name="Folder2">
          <Node name="Node A">
            <LevelGroup>
              <Level name="1">First Level</Level>
              <Level name="2">Second Level</Level>
            </LevelGroup>
          </Node>
          <Node name="Node C">
            <LevelGroup>
              <Level name="4">Fourth Level</Level>
              <Level name="5">Fifth Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
        <Folder name="Folder1">
          <Node name="Sample Node">
            <LevelGroup>
              <Level name="1">First Level</Level>
              <Level name="5">Fifth Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
      <Root name="root2">
        <Folder name="FolderA">
          <Node name="Node X">
            <LevelGroup>
              <Level name="1">First Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
      <Root name="root4">
        <Folder name="FolderC">
          <Node name="Node Z">
            <LevelGroup>
              <Level name="1">First Level</Level>
            </LevelGroup>
          </Node>
        </Folder>
      </Root>
    </Custom>

The change has to happen for

    <Folder name="Folder1">
      <Node name="Sample Node">
        <LevelGroup>
          <Level name="1">First Level</Level>
          <Level name="5">Fifth Level</Level>
        </LevelGroup>
      </Node>
    </Folder>

Thanks in advance.

A: It seems that the element you actually want to move is Folder and not Node. Anyway, I think all you have to do, basically, is use two copy-of lines: one for all those elements without your "sample string" as @name, and one for those with it. So, try the code below:

    <?xml version="1.0" encoding="UTF-8"?>
    <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
      <xsl:template match="Custom">
        <xsl:copy>
          <xsl:apply-templates/>
        </xsl:copy>
      </xsl:template>
      <xsl:template match="Root">
        <xsl:copy>
          <xsl:for-each select="@*">
            <xsl:attribute name="{name()}"><xsl:value-of select="."/></xsl:attribute>
          </xsl:for-each>
          <xsl:copy-of select="Folder[not(descendant::Node[@name='Sample Node'])]"/>
          <xsl:copy-of select="Folder[descendant::Node[@name='Sample Node']]"/>
        </xsl:copy>
      </xsl:template>
    </xsl:stylesheet>
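For completeness, the same reordering can be done without an XSLT processor. A sketch using Python's standard `xml.etree.ElementTree` (the function name is mine, and the equality-based match follows the answer above rather than a "contains" test):

```python
import xml.etree.ElementTree as ET


def push_sample_folders_last(xml_text, marker='Sample Node'):
    """Within each Root, move every Folder containing a Node whose name
    attribute equals `marker` behind its sibling Folders."""
    root = ET.fromstring(xml_text)
    for r in root.findall('Root'):
        folders = r.findall('Folder')
        keep = [f for f in folders
                if not f.findall(".//Node[@name='%s']" % marker)]
        move = [f for f in folders if f not in keep]  # identity comparison
        for f in folders:
            r.remove(f)
        for f in keep + move:
            r.append(f)
    return root
```

Run on the sample document, `root1` ends up with `Folder2` before `Folder1`, matching the expected result.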
\section{Introduction} \label{sec:intro} Since their first appearance in the seminal monograph of \cite{wald1947sequential}, Optimal Stopping Problems (OSPs) have become ubiquitous tools in mathematical finance, stochastic analysis, and mathematical statistics, among many other fields. Particularly, OSPs which are non-homogeneous in time are known to be mathematically challenging and, compared to the time-homogeneous counterpart, the literature addressing this topic is scarce, non-comprehensive, and often heavy on smoothing conditions. Markov bridges are not only time-inhomogeneous processes, but they also fail to meet the common assumption of Lipschitz continuity of the underlying drift (see, e.g., \citet[Chapter 3]{krylov1980controlled}, or \cite{jacka_finite-horizon_1992}), as their drifts explode when time approaches the horizon, thus inherently adding an extra layer of complexity. The first result in OSPs with Markov bridges was given by \cite{shepp_explicit_1969}, who circumvented the complexity of dealing with a Brownian Bridge (BB) by using a time-space transformation that allowed the problem to be reformulated into a more tractable one with a Brownian motion underneath. In the more than fifty years since, the use of Markov bridges in the context of OSPs has been narrowed to extending the result of \cite{shepp_explicit_1969}: \cite{ekstrom_optimal_2009} and \cite{ernst_revisiting_2015} studied alternative solution methods; \cite{ekstrom_optimal_2009} and \cite{de_angelis_optimal_2020} looked at a broader class of gain functions; \cite{glover_optimally_2020} randomized the horizon; while \cite{follmer_optimal_1972}, \cite{leung_optimal_2018}, and \cite{ekstrom_optimal_2020} analyzed the randomization of the bridge's terminal point. In finance, the use of a BB in OSPs has been motivated by several applications.
\cite{boyce_stopping_1970} applied it to the optimal selling of bonds; \cite{baurdoux_optimal_2015} suggested the use of a BB to model mispriced assets that could rapidly return to their fair price, or perishable commodities that become useless after a given deadline; and \cite{ekstrom_optimal_2009} used a BB to model the \textit{stock-pinning} effect, that is, the phenomenon in which the price of a stock tends to be pulled towards the strike price of one of its underlying options with massive trading volumes at the expiration date. While these motivations encourage the investor to rely on a model with added information at the horizon, none of them is exclusive to a BB, its usage being rather driven by tractability issues. Thus, in those same scenarios, other bridge processes could be more appealing than the over-simplistic BB. In particular, we turn our attention to an Ornstein--Uhlenbeck Bridge (OUB) process, since its version without added information, the Ornstein--Uhlenbeck (OU) process, is often the reference model in many financial problems. Indeed, OU processes are a go-to in finance when it comes to modeling assets with prices that fluctuate around a given level. This mean-reverting phenomenon has been systematically observed in a wide variety of markets. A good reference for the theory, applications, and empirical evidence of mean-reverting models is \cite{leung_optimal_2015}. An example is given by the pair trading strategy, which consists of holding a position in one asset as well as the opposite position in another, the two assets known to be correlated in a way that the spread between their prices shows mean reversion. Recently, many authors have tackled pair trading by using an OSP approach with an OU process.
\cite{ekstrom_optimal_2011} found the best time to liquidate the spread in the presence of a stop-loss level; \cite{leung_optimal_2015-1} used a discounted double OSP to compute the optimal buy-low-sell-high strategy in a perpetual frame; and \cite{kitapbayev_optimal_2017} extended that result to a finite horizon and took the viewpoint of investors entering the spread either buying or shorting. In this paper we solve the finite-horizon OSP featuring the identity as the gain function and an OUB as the underlying process. The solution is provided in terms of an non-linear, Volterra-type integral equation. Similarly to \cite{shepp_explicit_1969}, our methodology relies on a time-space change that casts the original problem into an infinite horizon OSP with a Brownian motion as the underlying process. Due to the complexity of our resulting OSP, we use a direct approach to solve it rather than using the common candidate-verification scheme. We then show that one can either apply the inverse transformation to recover the solution of the original OSP or, equivalently, solve the Volterra integral equation reformulated back in terms of OUB. It is worthwhile to highlight that the BB framework is included in our analysis as a limit case. The rest of the paper is structured as follows. Section \ref{sec:formulation} introduces the main problem and some useful notation. In Section \ref{sec:reformulation} we derive the transformed OSP and establish its equivalence to the original one. The most technical part of the paper is relegated to Section \ref{sec:sol_reformulated}, in which we derive the solution of the reformulated OSP. From it, we use the reversed transformation to get back the solution to the original OSP in Section \ref{sec:sol_original}, where we also remark that both a BB and an OUB with general pulling level and terminal time are immediate consequences of our results. 
An algorithm for numerical approximations of the solution is given in Section \ref{sec:numerical_results}, along with a compendium of illustrative cases for different values of the OUB's parameters. Concluding remarks are relegated to Section \ref{sec:conclusions}. \section{Formulation of the problem} \label{sec:formulation} Let $X = \{X_t\}_{t \in [0, 1]}$ be an OUB with terminal value $X_1 = z$, $z\in\mathbb{R}$, and defined in the filtered space $(\Omega, \mathcal{F}, \Pr, \{\mathcal{F}_t\}_{t \in [0, 1]})$. That is, for an OU process $\widetilde{X} = \{\widetilde{X}_t\}_{t \in [0, 1]}$, take $X$ such that $\mathrm{Law}(X, \Pr) = \mathrm{Law}(\widetilde{X}, \widetilde{\Pr}_{z}),$ where $\widetilde{\Pr}_{z} := \mathbb{P}\big(\cdot | \widetilde{X}_1 = z\big)$. It is well known (see, e.g., \cite{Barczy2013Sample}) that $X$ solves the Stochastic Differential Equation (SDE) \begin{align}\label{eq:OUB_SDE} \mathrm{d} X_t = \mu(t, X_t)\mathrm{d} t + \gamma\mathrm{d} B_t,\ \ 0 \leq t \leq 1, \end{align} with $\gamma > 0$ and \begin{align}\label{eq:OUB_drift} \mu(t, x) = \alpha \frac{z - \cosh(\alpha(1 - t)) x}{\sinh(\alpha(1 - t))},\quad \alpha \neq 0. \end{align} Note that we can take $\{\mathcal{F}_t\}_{t \in [0, 1]}$ as the natural filtration of the underlying standard Brownian motion $\{B_s\}_{t \in [0, 1]}$ in \eqref{eq:OUB_SDE}. Consider the finite-horizon OSP \begin{align}\label{eq:OSP_OUB} V(t, x) := \sup_{\tau \leq 1 - t}\Es{X_{t + \tau}}{t, x}, \end{align} where $V$ is the value function and $\mathbb{E}_{t, x}$ represents the expectation under the probability measure $\Pr_{t, x}$ defined as $\Pr_{t, x}(\cdot) := \Pr(\cdot | X_t = x)$. The supremum above is taken under all random times $\tau$ in the underlying filtration, such that $t + \tau$ is a stopping time in $\{\mathcal{F}_t\}_{t \in [0, 1]}$. Henceforth, we will call $\tau$ a stopping time while keeping in mind that $t + \tau$ is the actual stopping time. 
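As a quick sanity check of the drift \eqref{eq:OUB_drift}, note that switching off the noise ($\gamma = 0$) in \eqref{eq:OUB_SDE} leaves an ODE whose solution is pulled to $z$ as $t \to 1$. A numerical sketch (the parameter values $\alpha = 1$, $z = 0.5$, and the step size are arbitrary choices of ours):

```python
import math


def mu(t, x, z=0.5, alpha=1.0):
    """OUB drift: pulls x toward the pinned terminal value z as t -> 1."""
    return alpha * (z - math.cosh(alpha * (1.0 - t)) * x) \
        / math.sinh(alpha * (1.0 - t))


def noiseless_path(x0, z=0.5, alpha=1.0, dt=1e-5):
    """Euler integration of dx = mu dt (i.e. gamma = 0) up to t = 1 - dt."""
    t, x = 0.0, x0
    while t < 1.0 - dt:
        x += mu(t, x, z, alpha) * dt
        t += dt
    return x
```

Regardless of the starting value $x_0$, the integrated path ends within discretization error of $z$, consistent with the bridge being pinned at $X_1 = z$.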
\section{Reformulation of the problem} \label{sec:reformulation} \cite{Barczy2013Sample} provide the following space-time transformed representation for $X$: \begin{align*} X_t &= a_1(t, X_0, z) + a_2(t)B_{\psi(t)}, \end{align*} where the functions $a_1$ and $a_2$ take the form \begin{align*} a_1(t, x, z) := x\frac{\sinh(\alpha(1 - t))}{\sinh(\alpha)} + z\frac{\sinh(\alpha t)}{\sinh(\alpha)}, \quad a_2(t) := \gamma e^{\alpha t}\frac{\kappa(1) - \kappa(t)}{\kappa(1)}, \end{align*} and $\psi:[0, 1)\rightarrow \mathbb{R}_+$ is the time transformation $ \psi(t) := \kappa(t)\kappa(1)/(\kappa(1) - \kappa(t)), $ with $\kappa(t) := (2\alpha)^{-1}(1 - e^{-2\alpha t})$. Notice that $ t = \kappa^{-1}\lrp{\psi(t)\kappa(1)/(\psi(t) + \kappa(1))}, $ where $\kappa^{-1}(s) = -(2\alpha)^{-1}\ln(1 - 2\alpha s)$. The following identities can be easily checked: \begin{align*} a_1(t, x, z) = \lrp{x + z\frac{\psi(t)e^{-\alpha}}{\kappa(1)}}\frac{1}{f\lrp{\frac{\psi(t)e^{-\alpha}}{\kappa(1)}}}, \quad a_2(t) = \frac{\gamma}{f\lrp{\frac{\psi(t)e^{-\alpha}}{\kappa(1)}}}, \end{align*} with \begin{align}\label{eq:f} f(s) := \sqrt{\lrp{e^{\alpha} + s}\lrp{e^{-\alpha} + s}}. \end{align} Therefore, if we set the time change $s = \psi(t)e^{-\alpha}/\kappa(1)$, we get the space change \begin{align} X_t = \frac{X_0 + zs}{f(s)} + \frac{\gamma}{f(s)}B_{s\kappa(1)e^{\alpha}} = \frac{zs + \gamma\sqrt{\kappa(1)e^{\alpha}}\lrp{B_{s} + \frac{X_0}{\gamma\sqrt{\kappa(1)e^{\alpha}}}}}{f(s)}. \label{eq:OUB_to_BM} \end{align} Let $Y = \lrb{Y_s}_{s\geq0}$ be a Brownian motion starting at $Y_0 = y$ under the probability measure $\Pr_{y}$, that is, $\Pro{Y_0 = y}{y} = 1$. Consider the infinite-horizon OSP \begin{align}\label{eq:OSP_BM} W_{c}(s, y) := \sup_{\sigma}\Es{G_{c}(s + \sigma, Y_\sigma)}{y}, \end{align} with gain function \begin{align}\label{eq:gain} G_{c}(s, y) := \frac{cs + y}{f(s)} \end{align} and $c \in \mathbb{R}$.
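The identities for $a_1$ and $a_2$ stated above are also easy to confirm numerically. A sketch (the parameter values are arbitrary choices of ours):

```python
import math

alpha, gamma, z, x = 0.7, 1.3, 0.4, -0.2


def kappa(t):
    return (1.0 - math.exp(-2.0 * alpha * t)) / (2.0 * alpha)


def psi(t):
    return kappa(t) * kappa(1.0) / (kappa(1.0) - kappa(t))


def f(s):
    return math.sqrt((math.exp(alpha) + s) * (math.exp(-alpha) + s))


def a1(t):
    return (x * math.sinh(alpha * (1.0 - t))
            + z * math.sinh(alpha * t)) / math.sinh(alpha)


def a2(t):
    return gamma * math.exp(alpha * t) * (kappa(1.0) - kappa(t)) / kappa(1.0)


for t in (0.1, 0.5, 0.9):
    s = psi(t) * math.exp(-alpha) / kappa(1.0)
    assert abs(a1(t) - (x + z * s) / f(s)) < 1e-9   # first identity
    assert abs(a2(t) - gamma / f(s)) < 1e-9         # second identity
    # kappa(t) recovered from psi(t), as in the inversion formula:
    assert abs(kappa(1.0) * psi(t) / (psi(t) + kappa(1.0)) - kappa(t)) < 1e-12
```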
The operator $\mathbb{E}_{y}$ emphasizes that we are taking the mean with respect to $\Pr_{y}$, and the supremum in \eqref{eq:OSP_BM} is taken over all the stopping times $\sigma$ in the natural filtration of $\lrb{Y_s}_{s\geq0}$. Solving an OSP means giving a tractable expression for the value function and finding a stopping time at which the supremum is attained. Accordingly, we show in the next proposition the equivalence between \eqref{eq:OSP_OUB} and \eqref{eq:OSP_BM} by providing formulae that relate $V$ to $W$, and that map a stopping time that is optimal in the former problem (if it exists) into one optimal in the latter. \begin{proposition}(Time-space equivalence)\label{pr:OSP_equiv}\ \\ Consider the time change $\upsilon:[0,1)\rightarrow \mathbb{R}_+$ such that $\upsilon(t) = \psi(t)e^{-\alpha}/\kappa(1)$. Take $(t, x)\in[0, 1)\times\mathbb{R}$ and set $s = \upsilon(t)$, $c_z := z/(\gamma\sqrt{\kappa(1)e^{\alpha}})$, and $y = c_x := x/(\gamma\sqrt{\kappa(1)e^{\alpha}})$. Then: \begin{enumerate}[label=(\textit{\roman{*}}), ref=(\textit{\roman{*}})] \item \label{pr:OSP_value_equiv} The following equation holds: \begin{align}\label{eq:value_equiv} V(t, x) = \frac{z}{c_z}W_{c_z}\lrp{s, y}. \end{align} \item \label{pr:OSP_OST_equiv} The stopping time $\sigma^*(s, y)$ is optimal in \eqref{eq:OSP_BM} under $\Pr_{y}$ for $c = c_z$ if and only if \begin{align}\label{eq:OST_transform} \tau^*(t, x) := \upsilon^{-1}\lrp{\sigma^*(s, y)} \end{align} is optimal in \eqref{eq:OSP_OUB} under $\Pr_{t, x}$. \end{enumerate} \end{proposition} \begin{proof} \ref{pr:OSP_value_equiv}\ We have already proved this part of the proposition. Indeed, \eqref{eq:value_equiv} follows directly from \eqref{eq:OSP_OUB} and \eqref{eq:OUB_to_BM}--\eqref{eq:gain}. \ref{pr:OSP_OST_equiv}\ Suppose that $\sigma^* = \sigma^*(s, y)$ is optimal in \eqref{eq:OSP_BM} under $\Pr_{y}$ for $c = c_z$.
Assume that there exists a stopping time $\tau' = \tau'(t, x)$ that outperforms $\tau^* = \tau^*(t, x)$ defined in \eqref{eq:OST_transform}, and set $\sigma' = \sigma'(s, y) := \upsilon(\tau')$. Then, by relying on \eqref{eq:OUB_to_BM}, we get that \begin{align*} \Es{G_{c_z}\lrp{s + {\sigma'}, Y_{\sigma'}}}{y} = \Es{X_{t + \tau'}}{t, x} > \Es{X_{t + \tau^*}}{t, x} = \Es{G_{c_z}\lrp{s + {\sigma^*}, Y_{\sigma^*}}}{y}, \end{align*} which contradicts the fact that $\sigma^*$ is optimal in \eqref{eq:OSP_BM}. This proves the \textit{only if} part of the statement. The \textit{if} direction follows by similar arguments. \end{proof} \section{Solution of the reformulated problem: a direct approach} \label{sec:sol_reformulated} In this section we will work out a solution for the OSP \eqref{eq:OSP_BM}. For the sake of brevity and since there is no risk of confusion, throughout the section we will use the notations $W = W_c$ and $G = G_c$, so that \eqref{eq:OSP_BM} can be rewritten as \begin{align}\label{eq:OSP_BM_re} W(s, y) = \sup_{\sigma}\Es{G(s + \sigma, Y_\sigma)}{y}. \end{align} Notice that $0 \leq s/f(s) \leq 1$ and $f(s) \geq \sqrt{1 + s^2}$ for all $s\in\mathbb{R}_+$, $f(0) = 1$, and $f$ is increasing. Hence, the following holds for $M := \Esp{\sup_{0\leq u\leq1}\left|B_u\right|}$ and all $(s, y) \in\mathbb{R}_+\times\mathbb{R}$: \begin{align} \Es{\sup_{u\geq 0 }\left|G\lrp{s + u, Y_u}\right|}{y} &\leq |c| + \Es{\sup_{u\geq 0}\frac{\left|Y_u\right|}{f(u)}}{y} \nonumber \\ &\leq |c| + |y| + \Esp{\sup_{u\geq0}\frac{\left|B_u\right|}{\sqrt{1 + u^2}}} \nonumber \\ &\leq |c| + |y| + M + \Esp{\sup_{u\geq1}\frac{\left|B_u\right|}{\sqrt{1 + u^2}}} \nonumber \\ &= |c| + |y| + M + \Esp{\sup_{u\geq1}\frac{u}{\sqrt{1 + u^2}}\left|B_{1/u}\right|} \nonumber \\ &\leq |c| + |y| + M + \Esp{\sup_{u\geq1} \left|B_{1/u}\right|} = |c| + |y| + 2M, \label{eq:sup_bound} \end{align} where we used the time-inversion property of a Brownian motion in the first equality.
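The elementary properties of $f$ used in the bound above ($f(0) = 1$, $f(s) \geq \sqrt{1 + s^2}$, $0 \leq s/f(s) \leq 1$, and monotonicity) are immediate from \eqref{eq:f}; the following Python snippet (with an arbitrary value of $\alpha$) double-checks them on a grid.

```python
import numpy as np

alpha = 0.8
f = lambda s: np.sqrt((np.exp(alpha) + s) * (np.exp(-alpha) + s))  # eq. (f)

s = np.linspace(0.0, 50.0, 10001)
vals = f(s)
print(vals[0])                                              # f(0) = 1
print(bool(np.all(vals + 1e-12 >= np.sqrt(1 + s**2))))      # f(s) >= sqrt(1 + s^2)
print(bool(np.all((0 <= s / vals) & (s / vals <= 1))))      # 0 <= s/f(s) <= 1
print(bool(np.all(np.diff(vals) > 0)))                      # f is increasing
```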
Therefore, since $M<\infty$ and $G$ is continuous, we get that (see, e.g., Corollary 2.9, Remark 2.10, and equation (2.2.80) in \citealp{goran-optimal}) the first hitting time \begin{align}\label{eq:OST} \sigma^*(s, y) = \inf\lrb{u \geq 0 : (s + u, Y_u)\in \mathcal{D}} \end{align} into the stopping set $\mathcal{D} := \lrb{W = G}$ is optimal for \eqref{eq:OSP_BM_re}. That is, \begin{align}\label{eq:value} W(s, y) = \Es{G\lrp{s + \sigma^*(s, y), Y_{\sigma^*(s, y)}}}{y}. \end{align} After applying Itô's lemma to both \eqref{eq:OSP_BM} and \eqref{eq:value} we get the following alternative representations of~$W$: \begin{align}\label{value_ito} W(s, y) - G(s, y) &= \sup_{\sigma}\Es{\int_0^\sigma \mathbb{L} G\lrp{s + u, Y_u}\,\mathrm{d} u}{y} = \Es{\int_0^{\sigma^*(s, y)} \mathbb{L} G\lrp{s + u, Y_u}\,\mathrm{d} u}{y}, \end{align} where $\mathbb{L} = \partial_t + \frac{1}{2}\partial_{xx}$ is the infinitesimal generator of $\lrb{\lrp{s + u, Y_u}}_{u\geq 0}$. Here and hereafter, $\partial_t$ and $\partial_x$ will stand, respectively, for the differential operator with respect to time and space, while $\partial_{xx}$ is a shorthand for $\partial_x\partial_x$. Notice that $\mathbb{L} G = \partial_t G$. Since many of the proofs rely on the first order partial derivatives of the gain function, we display them next for a quick reference: \begin{align} \partial_t G(s, y) &= \frac{c\lrp{f(s) - sf'(s)} - f'(s)y }{f^2(s)}, \label{eq:G_t} \\ \partial_x G(s, y) &= \frac{1}{f(s)}. \label{eq:G_x} \end{align} To keep track of the initial condition in a way that does not change the underlying probability measure, we introduce the process $Y^y = \lrb{Y_s^y}_{s\geq 0}$ such that \begin{align*} \mathrm{Law}\lrp{\lrb{Y_s^y}_{s\geq0}, \Pr} = \mathrm{Law}\lrp{\lrb{Y_s}_{s\geq 0}, \Pr_y}. \end{align*} Notice that the characterization of the Optimal Stopping Time (OST) in \eqref{eq:OST} is too abstract to work with.
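A finite-difference check of the derivatives \eqref{eq:G_t} and \eqref{eq:G_x} (in Python, with arbitrary values of $\alpha$, $c$, and of the evaluation point) may help the reader confirm the formulas:

```python
import numpy as np

alpha, c = 0.5, -0.3
f = lambda s: np.sqrt((np.exp(alpha) + s) * (np.exp(-alpha) + s))      # eq. (f)
fp = lambda s: (2 * s + np.exp(alpha) + np.exp(-alpha)) / (2 * f(s))   # f'(s)
G = lambda s, y: (c * s + y) / f(s)                                    # gain function

G_t = lambda s, y: (c * (f(s) - s * fp(s)) - fp(s) * y) / f(s)**2      # eq. (G_t)
G_x = lambda s, y: 1 / f(s)                                            # eq. (G_x)

s, y, h = 1.7, 0.9, 1e-6
err_t = abs((G(s + h, y) - G(s - h, y)) / (2 * h) - G_t(s, y))
err_x = abs((G(s, y + h) - G(s, y - h)) / (2 * h) - G_x(s, y))
print(err_t, err_x)  # both negligible
```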
In the next proposition we characterize $\sigma^*(s, y)$ by means of a function called the Optimal Stopping Boundary (OSB), which is the frontier between $\mathcal{D}$ and its complement $\mathcal{C} := \lrb{W > G}$. We also derive some properties about the shape of the OSB which shed light on the geometry of $\mathcal{D}$ and~$\mathcal{C}$. \begin{proposition}[Existence and shape of the optimal stopping boundary]\label{pr:boundary_existence}\ \\ There exists a function $b:\mathbb{R}_+\rightarrow\mathbb{R}$ such that $\mathcal{D} = \lrb{(s, y) : y \geq b(s)}$. Moreover, $c(f(s) - sf'(s))/f(s) < b(s) < \infty$ for all $s\in\mathbb{R}_+$. \end{proposition} \begin{proof} The claimed shape for the stopping set, $\mathcal{D} = \lrb{(s, y) : y \geq b(s)}$, is a straightforward consequence of the fact that both $y \mapsto G(s, y)$ and $y \mapsto Y_s^y$ are increasing for all $s\in\mathbb{R}_+$. We now see that $b(s) > c(f(s) - sf'(s))/f(s)$ for all $s>0$. Fix a pair $(s, y)$ such that \mbox{$\partial_t G(s, y) > 0$}. Then, the continuity of $\partial_t G$ allows us to pick a ball $\mathcal{B}$ such that $(s, y)\in\mathcal{B}$ and \mbox{$\partial_t G > 0$ in $\mathcal{B}$}. After recalling \eqref{value_ito} and setting $\sigma_\mathcal{B}$ as the first exit time of $\lrb{\lrp{s + u, Y_u^y}}_{u\geq 0}$ from $\mathcal{B}$, we get that $$ W(s, y) - G(s, y) \geq \Es{\int_0^{\sigma_\mathcal{B}} \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y} > 0. $$ We conclude then that $(s, y)\in\mathcal{C}$. Finally, the claimed lower bound for $b$ follows after using \eqref{eq:G_t} to realize that $\partial_t G(s, y) > 0$ if and only if $y < c(f(s) - sf'(s))/f(s)$. We now prove $b(s) < \infty$ for all $s>0$. Let $\widetilde{X} = \big\{\widetilde{X}_t\big\}_{t \in [0, 1]}$ be a Brownian bridge (BB) with pinning point $\widetilde{X}_1 = z$. The drift of $\widetilde{X}$ has the form $\widetilde{\mu}(t, x) = (z - x)/(1 - t)$.
Define $m_z:[0, 1)\rightarrow\mathbb{R}$ such that $$ m_z(t) = z\frac{\sinh(\alpha(1 - t)) - \alpha(1 - t)}{\sinh(\alpha(1 - t)) - \alpha(1 - t)\cosh(\alpha(1 - t))}, $$ take $\overline{M}_z := \sup_{t\in[0, 1)}m_z(t) < \infty$ and notice the following relation: $$ X_t \leq m_z(t) + |X_t - m_z(t)| \leq m_z(t) + |\widetilde{X}_t - m_z(t)| \leq \overline{M}_z + |\widetilde{X}_t - \overline{M}_z|. $$ The second inequality holds since, due to the fact that $\mu(t, x) \leq \widetilde{\mu}(t, x)$ if and only if $x \geq m_z(t)$, the drift of the reflection of $X$ with respect to $m_z$ is lower than the drift of the reflection of $\widetilde{X}$ with respect to $m_z$, and therefore we can ensure that, pathwise, the first process is lower than the last one $\Pr$-a.s. \cite[see][Theorem 1.1]{ikeda1977}. The third inequality is straightforward from the definition of $\overline{M}_z$. Therefore, if we consider the OSP $$ \widetilde{V}_{\overline{M}_z}(t, x) = \sup_{\tau \leq 1 - t}\Es{\overline{M}_z + |\widetilde{X}_{t + \tau} - \overline{M}_z|}{t, x}, $$ we are allowed to state that $V \leq \widetilde{V}_{\overline{M}_z}$. If we take a pair $(t, x)\in[0, 1]\times[\overline{M}_z, \infty)$ within the stopping set related to $\widetilde{V}_{\overline{M}_z}$, then $x \leq V(t, x) \leq \widetilde{V}_{\overline{M}_z}(t, x) = x$, meaning that $(t, x)$ lies in the stopping set of $V$. Since it is known that the OSB related to $\widetilde{V}_{\overline{M}_z}$ is finite (actually, this is one of the few cases in which the explicit solution of a finite-horizon OSP is available; see, e.g., Theorem 3.2 in \cite{ekstrom_optimal_2009}), so is the one related to $V$. Then, by means of \eqref{eq:value_equiv}, we conclude that $b$ is bounded from above. \end{proof} We next show that $W$ is Lipschitz continuous in sets of the type $\mathbb{R}_+\times \mathcal{R}$, where $\mathcal{R}$ stands for a compact set in $\mathbb{R}$.
\begin{proposition}[Lipschitz continuity of the value function]\label{pr:W_Lipschitz}\ \\ For any compact set $\mathcal{R}\subset\mathbb{R}$, there exists a constant $L_\mathcal{R} > 0$ such that \begin{align*} \left|W(s_1, y_1) - W(s_2, y_2)\right| \leq L_{\mathcal{R}}\lrp{|s_1 - s_2| + |y_1 - y_2|}, \end{align*} for all $(s_1, y_1), (s_2, y_2) \in \mathbb{R}_+\times\mathcal{R}$. \end{proposition} \begin{proof} Take $(s_1, y_1), (s_2, y_2) \in \mathbb{R}_+\times\mathcal{R}$ and realize that \begin{align*} W(s_1, y_1) - W(s_2, y_2) =&\; \sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_1} - \sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_2} \\ &+ \sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_2} - \sup_{\sigma}\Es{G(s_2 + \sigma, Y_\sigma)}{y_2}. \end{align*} Notice from \eqref{eq:G_t} that there exists a constant $K > 0$ such that \begin{align*} \left|\partial_t G(s, y)\right| \leq K \lrp{1 + \frac{|y|}{f(s)}}. \end{align*} Then, since $|\sup_\sigma a_\sigma - \sup_\sigma b_\sigma|\leq \sup_\sigma|a_\sigma - b_\sigma|$, alongside Jensen's inequality, and \eqref{eq:G_t} and \eqref{eq:G_x}, we get that \begin{align*} \Big|\sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_1}& - \sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_2}\Big| \\ \leq&\; \sup_{\sigma}\Esp{\left|G(s_1 + \sigma, Y_\sigma^{y_1}) - G(s_1 + \sigma, Y_\sigma^{y_2})\right|} \\ =&\; \sup_{\sigma}\Esp{\frac{\left|Y_\sigma^{y_1} - Y_\sigma^{y_2}\right|}{f(s_1 + \sigma)}} = \frac{|y_1 - y_2|}{f(s_1)} \leq |y_1 - y_2|, \end{align*} and \begin{align*} \Big|\sup_{\sigma}\Es{G(s_1 + \sigma, Y_\sigma)}{y_2}& - \sup_{\sigma}\Es{G(s_2 + \sigma, Y_\sigma)}{y_2}\Big| \\ \leq&\; \sup_{\sigma}\Esp{\left|G(s_1 + \sigma, Y_\sigma^{y_2}) - G(s_2 + \sigma, Y_\sigma^{y_2})\right|} \\ =&\; |s_1 - s_2|\sup_{\sigma}\Esp{\left|\partial_t G(\xi, Y_\sigma^{y_2})\right|} \\ \leq&\; |s_1 - s_2|K\lrp{1 + \Esp{\sup_{s\geq0}\frac{\left| Y_s^{y_2}\right|}{f(s)}}}, \end{align*} where $\xi\in\lrp{\sigma + \min\lrb{s_1, s_2}, \sigma + \max\lrb{s_1, s_2}}$ follows from the mean
value theorem. Since we already proved in \eqref{eq:sup_bound} that $\Esp{\sup_{s\geq0}\left| Y_s^{y_2}\right|/f(s)} < \infty$, the Lipschitz continuity of $W$ in $\mathbb{R}_+\times\mathcal{R}$ follows. \end{proof} Beyond Lipschitz continuity, it turns out that the value function attains a higher smoothness away from the boundary. While this assertion is trivial in the interior of the stopping region, where $W = G$, we prove in the next proposition that it also holds in the continuation set. In addition, we show that $\mathbb{L} W$ vanishes in $\mathcal{C}$, which establishes the equivalence between \eqref{eq:OSP_BM_re} and a free-boundary problem. \begin{proposition}[Higher smoothness of the value function and the free-boundary problem]\label{pr:W_smoothness}\ \\ $W\in C^{1, 2}(\mathcal{C})$ and $\mathbb{L} W = 0$ in $\mathcal{C}$. \end{proposition} \begin{proof} The fact that $\mathbb{L}W = 0$ in $\mathcal{C}$ follows from the strong Markov property of $\lrb{\lrp{s + u, Y_u}}_{u\geq0}$; see \citet[Section 7.1]{goran-optimal} for more details. Since $W$ is continuous in $\mathcal{C}$ (see Proposition \ref{pr:W_Lipschitz}) and the coefficients in the parabolic operator $\mathbb{L}$ are smooth enough (it suffices to require local H\"older continuity), then standard theory of parabolic partial differential equations \citep[Section 3, Theorem 9]{friedman1964partial} guarantees that, for an open rectangle $R\subset \mathcal{C}$, the first initial-boundary value problem \begin{subequations} \label{eq:PDE} \begin{align} \mathbb{L} f &= 0 &&\hspace*{-3.5cm} \text{in } R, \label{eq:PDE1} \\ f &= W &&\hspace*{-3.5cm} \text{on } \partial R \label{eq:PDE2} \end{align} \end{subequations} has a unique solution $f\in C^{1, 2}(R)$.
Therefore, we can use Itô's formula on $f(s + u, Y_u)$ at $u = \tau_{R^c}$, that is, the first time $(s + u, Y_u)$ exits $R$, and then take $\Pr_y$-expectation with $(s, y) \in R$, which guarantees the vanishing of the martingale term and yields, together with \eqref{eq:PDE1} and \eqref{eq:PDE2}, the equality $\mathbb{E}_y[W(s + \tau_{R^c}, Y_{\tau_{R^c}})] = f(s, y)$. Finally, due to the strong Markov property, $\mathbb{E}_y[W(s + \tau_{R^c}, Y_{\tau_{R^c}})] = W(s, y)$, and hence $W = f$ in $R$. Since the rectangle $R\subset\mathcal{C}$ was arbitrary, $W\in C^{1, 2}(\mathcal{C})$. \end{proof} Not only does the value function have continuous partial derivatives away from the boundary, but we can also provide relatively explicit forms for those derivatives, as shown in the next proposition. \begin{proposition}[Partial derivatives of the value function]\label{pr:W_t_&_W_x}\ \\ Let $\sigma^* = \sigma^*(s, y)$, for $(s, y)\in\mathcal{C}$, and $a := e^{-\alpha} + e^{\alpha}$. Then, \begin{align}\label{eq:W_t} &\partial_t W(s, y) = \partial_t G(s, y) + \Esp{\int_s^{s + \sigma^*}\partial_{tt} G\lrp{u, Y_{u - s}^y}\, \mathrm{d} u}, \end{align} where \begin{align*} \partial_{tt} G(u, y) = \frac{1}{2f^3(u)}\lrp{ca - 2y - \frac{3\lrp{a + 2u}\lrp{c(au + 2) - (a + 2u)y}}{2f^2(u)}}, \end{align*} and \begin{align}\label{eq:W_x} \partial_x W(s, y) = \Esp{\frac{1}{f(s + \sigma^*)}}. \end{align} \end{proposition} \begin{proof} Take $(s, y)\in\mathcal{C}$ and $\varepsilon > 0$. Due to \eqref{eq:OSP_BM_re} and \eqref{eq:value}, one gets the following for $\sigma^* = \sigma^*(s, y)$: \begin{align*} \varepsilon^{-1}\lrp{W(s, y) - W(s - \varepsilon, y)} &\leq \varepsilon^{-1}\Esp{G(s + \sigma^*, Y_{\sigma^*}^y) - G(s - \varepsilon + \sigma^*, Y_{\sigma^*}^y)}. \end{align*} Hence, by letting $\varepsilon\rightarrow 0$ and recalling that $W\in C^{1,2}(\mathcal{C})$ (see Proposition \ref{pr:W_smoothness}), we get that \begin{align}\label{eq:W_t<} \partial_t W(s, y) &\leq \Esp{\partial_t G(s + \sigma^*, Y_{\sigma^*}^y)} = \partial_t G(s, y) + \Esp{\int_s^{s + \sigma^*}\mathbb{L}\partial_t G(u, Y_{u - s}^y)\, \mathrm{d} u}.
\end{align} In the same fashion we obtain \begin{align*} \varepsilon^{-1}\lrp{W(s + \varepsilon, y) - W(s, y)} &\geq \varepsilon^{-1}\Esp{G(s + \varepsilon + \sigma^*, Y_{\sigma^*}^y) - G(s + \sigma^*, Y_{\sigma^*}^y)}. \end{align*} Thus, by arguing as in \eqref{eq:W_t<} we get the reverse inequality, and therefore \eqref{eq:W_t} follows after computing $\mathbb{L}\partial_t G(u, Y_{u - s}^y) = \partial_{tt} G(u, Y_{u - s}^y)$. To get the analogous result for the space coordinate, notice that \begin{align*} \varepsilon^{-1}\lrp{W(s, y) - W(s, y - \varepsilon)} &\leq \varepsilon^{-1}\Esp{W(s + \sigma^*, Y_{\sigma^*}^y) - W(s + \sigma^*, Y_{\sigma^*}^{y - \varepsilon})} \\ &\leq \varepsilon^{-1}\Esp{G(s + \sigma^*, Y_{\sigma^*}^y) - G(s + \sigma^*, Y_{\sigma^*}^{y - \varepsilon})} \\ &= \Esp{\frac{1}{f(s + \sigma^*)}}, \end{align*} while the same reasoning yields the inequality $\varepsilon^{-1}\lrp{W(s, y + \varepsilon) - W(s, y)} \geq \Esp{1/f(s + \sigma^*)}$, and then, by letting $\varepsilon\rightarrow 0$, we get \eqref{eq:W_x}. \end{proof} So far we have proved that solving \eqref{eq:OSP_BM_re} is equivalent to solving the free-boundary problem \begin{subequations} \label{eq:free-boundary} \begin{align} \mathbb{L} W(s, y) &= 0 &&\hspace*{-3cm} \text{for } y < b(s) , \label{eq:free-boundary1}\\ W(s, y) &> G(s, y) &&\hspace*{-3cm} \text{for } y < b(s), \label{eq:free-boundary2}\\ W(s, y) &= G(s, y) &&\hspace*{-3cm} \text{for } y\geq b(s). \label{eq:free-boundary3} \end{align} \end{subequations} However, an additional condition for the value function on the free boundary is required to guarantee a unique solution. Roughly speaking, that condition consists in smoothly pasting the value and the gain functions with respect to the space coordinate, provided that the optimal boundary is (probabilistically) regular for the underlying process, that is, if after starting at a point $(s, y) \in \partial\mathcal{C}$, the process enters $\mathcal{D}$ immediately $\Pr_y$-a.s.
This type of regularity is proved to hold true for piecewise monotonic and continuous boundaries in \cite{cox_embedding_2015} whenever the underlying process is a recurrent diffusion. In the next proposition we show that the boundary is differentiable with bounded derivative on any closed interval, which implies piecewise monotonicity. The proof is inspired by Theorem 4.3 in \cite{de_angelis_lipschitz_2019}, which states the boundary's Lipschitz continuity for time-homogeneous processes satisfying some regularity conditions. \begin{proposition}[Lipschitz continuity of the optimal stopping boundary]\label{pr:Lipschitz_boundary}\ \\ The function $b$ is differentiable. Moreover, for any closed interval $I := [\underline{s}, \overline{s}]\subset\mathbb{R}_+$, there exists a constant $L_I > 0$ such that \begin{align}\label{eq:|b'|<} |b'(s)| \leq L_I, \end{align} whenever $s\in I$. \end{proposition} \begin{proof} Consider the function $H:I\times\mathbb{R}\rightarrow\mathbb{R}_+$, for a closed interval $I\subset\mathbb{R}_+$, defined as $H(s, y) = W(s, y) - G(s, y)$. Proposition \ref{pr:boundary_existence} entails that $b$ is bounded from below, and thus we can choose a constant $r\in \mathbb{R}$ such that $r < \inf\lrb{b(s) : s\in I}$. Since $I\times\lrb{r}\subset\mathcal{C}$, $H$ is continuous (see Proposition \ref{pr:W_Lipschitz}) and $H|_{I\times\lrb{r}} > 0$. Then, there exists $a > 0$ such that $H(s, r) > a$ for all $s\in I$. Therefore, for all $\delta$ such that $0 < \delta \leq a$, the equation $H(s, y) = \delta$ has a solution in $\mathcal{C}$ for all $s\in I$. Moreover, this solution is unique for each $s$ since $\partial_x H < 0$ in $\mathcal{C}$ (see Proposition \ref{pr:W_t_&_W_x}), and we denote it by $b_\delta(s)$, where $b_\delta:I\rightarrow \mathbb{R}$.
Away from the boundary, $H$ is regular enough to apply the implicit function theorem, which guarantees that $b_\delta$ is differentiable and \begin{align}\label{eq:b_delta'} b_\delta'(s) = -\partial_t H(s, b_\delta(s)) / \partial_x H(s, b_\delta(s)). \end{align} Notice that $b_\delta$ is decreasing in $\delta$ and therefore it converges pointwise to some limit function $b_0$, which satisfies $b_0 \leq b$ in $I$ as $b_\delta < b$ for all $\delta$. Since $H(s, b_\delta(s)) = \delta$ and $H$ is continuous, it follows that $H(s, b_0(s)) = 0$ after taking $\delta\rightarrow 0$, which means that $b_0 \geq b$ in $I$ and hence $b_0 = b$ in $I$. Take $(s, y)\in\mathcal{C}$ such that $y>r$. Set $\sigma^* = \sigma^*(s, y)$ and consider $$ \sigma_r = \sigma_r(s, y) := \inf\lrb{u\geq 0 : \lrp{s + u, Y_u^y} \notin I\times(r, \infty)}. $$ Recalling \eqref{eq:W_t}, it is easy to check that there exists a constant $K_I^{(1)} > 0$ such that \begin{align}\label{eq:H_t<_1} \left|\partial_t H(s, y)\right| \leq K_I^{(1)} m(s, y) \end{align} with $$ m(s, y) := \Es{\int_0^{\sigma^*}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u}{y}. $$ Using the tower property of conditional expectation, alongside the strong Markov property, we get \begin{align} m&(s, y) \nonumber\\ &= \Es{\int_0^{\sigma^*\wedge\sigma_r}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*}\int_{\sigma_r}^{\sigma^*}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u}{y} \nonumber \\ &= \Es{\int_0^{\sigma^*\wedge\sigma_r}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*} \Es{\int_{\sigma_r}^{\sigma_r + \sigma^*\lrp{s + \sigma_r, Y_{\sigma_r}}}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u \Big| \mathcal{F}_{\sigma_r}}{y}}{y} \nonumber \\ &= \Es{\int_0^{\sigma^*\wedge\sigma_r}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*} \Es{\int_{0}^{\sigma^*\lrp{s + \sigma_r, Y_{\sigma_r}}}\frac{1 + \left|Y_u\right|}{f^2(s + \sigma_r + u)}\,\mathrm{d} u}{Y_{\sigma_r}}}{y} \nonumber \\ &= \Es{\int_0^{\sigma^*\wedge\sigma_r}\frac{1 + \left|Y_u\right|}{f^2(s + u)}\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*} m(s + \sigma_r, Y_{\sigma_r})}{y}. \label{eq:m=} \end{align} Notice that, for $c < r < y < b(s)$, $(s + \sigma_r, Y_{\sigma_r}^y) \in \Gamma_s$ on the set $\lrb{\sigma_r \leq \sigma^*}$, with $\Gamma_s := \lrp{(s, \bar{s})\times\lrb{r}} \cup \lrp{\lrb{\bar{s}}\times[r, b(\bar{s}))}$ and $\bar{s} := \sup I$. Hence, the following holds true on the set $\lrb{\sigma_r \leq \sigma^*}$: \begin{align} m\lrp{s + \sigma_r, Y_{\sigma_r}^y} &\leq \sup_{(t, x) \in \Gamma_s}m\left(t, x\right) \nonumber \\ &\leq \sup_{(t, x) \in \Gamma_s} \Es{\int_{0}^{\infty}\frac{1 + \left|Y_u\right|}{f^2(t + u)}\,\mathrm{d} u}{x} \nonumber \\ &\leq \sup_{(t, x) \in \Gamma_s} \lrb{\int_{0}^{\infty}\frac{1 + \left|x\right|}{f^2(t + u)}\,\mathrm{d} u + \int_{0}^{\infty}\frac{\Esp{\left|B_u\right|}}{f^2(t + u)}\,\mathrm{d} u} \nonumber \\ &\leq \int_{0}^{\infty}\frac{1 + \max\lrb{|r|, \left|b(\bar{s})\right|}}{f^2(u)}\,\mathrm{d} u + \int_{0}^{\infty}\sqrt{\frac{2}{\pi}}\frac{\sqrt{u}}{f^2(u)}\,\mathrm{d} u < \infty. \label{eq:m_bound} \end{align} By plugging \eqref{eq:m_bound} into \eqref{eq:m=}, after observing that $\frac{1 + \left|Y_u\right|}{f^2(s + u)} \leq 1 + \max\lrb{|\sup_{s\in I}b(s)|, |r|}$ for all $u \leq \sigma^*\wedge\sigma_r$, and recalling \eqref{eq:H_t<_1}, we obtain the following for some constant $K_I^{(2)} > 0$: \begin{align}\label{eq:H_t<_2} \left|\partial_t H(s, y)\right| \leq K_I^{(2)} \Es{\sigma^*\wedge\sigma_r + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*}}{y}. \end{align} Arguing as in \eqref{eq:m=} and recalling \eqref{eq:G_x} along with \eqref{eq:W_x}, we get that \begin{align} \left|\partial_x H(s, y)\right| &= \Es{\frac{1}{f(s)} - \frac{1}{f(s + \sigma^*)}}{y} = \Es{\int_0^{\sigma^*}-\partial_t(1/f)(s + u)\,\mathrm{d} u}{y} \nonumber \\ &= \Es{\int_0^{\sigma^*\wedge\sigma_r}-\partial_t(1/f)(s + u)\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*} \left|\partial_x H(s + \sigma_r, Y_{\sigma_r})\right|}{y} \nonumber \\ &\geq \Es{\int_0^{\sigma^*\wedge\sigma_r}-\partial_t(1/f)(s + u)\,\mathrm{d} u + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*, \sigma_r < \overline{s} - s}\left|\partial_x H(s + \sigma_r, r)\right|}{y}. \label{eq:H_x>_1} \end{align} Take $\varepsilon > 0$ such that $I\times\lrb{r + \varepsilon}\subset \mathcal{C}$, and consider the stopping time $\sigma_{\varepsilon} = \inf\lrb{u\geq0: Y_u^r > r + \varepsilon}$. Observe that $\sigma^*(s, r) > \sigma_{\varepsilon}$ for all $s\in I$. Then, \begin{align} \left|\partial_x H(s + \sigma_r, r)\right| &\geq \inf_{s\in I} \left|\partial_x H(s, r)\right| = \inf_{s\in I} \Es{\frac{1}{f(s)} - \frac{1}{f(s + \sigma^*(s, r))}}{r} \nonumber \\ &\geq \inf_{s\in I} \Es{\frac{1}{f(s)} - \frac{1}{f(s + \sigma_\varepsilon)}}{r} \nonumber \\ &\geq \inf_{s\in I} \lrp{\frac{1}{f(s)} - \frac{1}{f(s + \varepsilon)}}\Pro{\sigma_\varepsilon > \varepsilon}{r} \nonumber \\ &= \lrp{\frac{1}{f(\overline{s})} - \frac{1}{f(\overline{s} + \varepsilon)}}\Prob{\sup_{u\leq \varepsilon} B_u < \varepsilon} \label{eq:H_x>_2} > 0, \end{align} where we used the fact that $s\mapsto 1/f(s) - 1/f(s + u)$ is decreasing for all $u\geq 0$. After noticing that $-\partial_t(1/f)$ is positive and decreasing, which means that $-\partial_t(1/f)(s + u)\geq -\partial_t(1/f)(\overline{s}) > 0$ \mbox{for all $u\leq \sigma_r$}, and by plugging \eqref{eq:H_x>_2} into \eqref{eq:H_x>_1}, we obtain, for a constant $K_{I, \varepsilon}^{(3)} > 0$, \begin{align} \left|\partial_x H(s, y)\right| \geq K_{I, \varepsilon}^{(3)}\Es{\sigma^*\wedge\sigma_r + \mathbbm{1}\lrp{\sigma_r \leq \sigma^*, \sigma_r < \overline{s} - s}}{y}.
\label{eq:H_x>_3} \end{align} Therefore, using \eqref{eq:H_t<_2} and \eqref{eq:H_x>_3} in \eqref{eq:b_delta'} yields the following bound for some constant $K_{I}^{(4)} > 0$, $y_\delta = b_\delta(s)$, and $\sigma_\delta = \sigma^*(s, y_\delta)$: \begin{align} \left|b_\delta'(s)\right| &\leq K_{I}^{(4)}\frac{\Es{\sigma_\delta\wedge\sigma_r + \mathbbm{1}\lrp{\sigma_r \leq \sigma_\delta}}{y_\delta}}{\Es{\sigma_\delta\wedge\sigma_r + \mathbbm{1}\lrp{\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s}}{y_\delta}} \nonumber \\ &\leq K_{I}^{(4)}\lrp{1 + \frac{\Pro{\sigma_r \leq \sigma_\delta}{y_\delta}}{\Es{\sigma_\delta\wedge\sigma_r + \mathbbm{1}\lrp{\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s}}{y_\delta}}} \nonumber \\ &\leq K_{I}^{(4)}\lrp{1 + \frac{\Pro{\sigma_r \leq \sigma_\delta, \sigma_r = \bar{s} - s}{y_\delta}}{\Es{\sigma_\delta\wedge\sigma_r}{y_\delta}} + \frac{\Pro{\sigma_r \leq \sigma_\delta, \sigma_r < \bar{s} - s}{y_\delta}}{\Es{\mathbbm{1}\lrp{\sigma_r \leq \sigma_\delta, \sigma_r < \overline{s} - s}}{y_\delta}}} \nonumber \\ &\leq K_I^{(4)}\lrp{2 + \frac{\Pro{\sigma_r \leq \sigma_\delta, \sigma_r = \bar{s} - s}{y_\delta}}{\Es{\mathbbm{1}\lrp{\sigma_r\leq\sigma_\delta, \sigma_r = \bar{s} - s}\lrp{\sigma_\delta\wedge\sigma_r}}{y_\delta}}} \nonumber \\ &\leq K_I^{(4)}\lrp{2 + \frac{1}{\bar{s} - s}}. \label{eq:|b_delta'|<} \end{align} If we set $I_\varepsilon = [\underline{s}, \bar{s} - \varepsilon]$ for $\varepsilon > 0$ small enough, then, by relying on \eqref{eq:|b_delta'|<}, we obtain the existence of a constant $L_{I_\varepsilon} > 0$, independent of $\delta$, such that $|b_\delta'(s)| < L_{I_\varepsilon}$ for all $s\in I_\varepsilon$ and $0 < \delta \leq a$. We are thus able to use the Arzelà--Ascoli theorem to guarantee that $b_\delta$ converges to $b$ uniformly on $I_\varepsilon$ as $\delta\rightarrow 0$.
Since $\varepsilon > 0$ and $I$ were arbitrarily chosen, we then conclude that $b$ is anywhere differentiable and \eqref{eq:|b'|<} holds true. \end{proof} Once we have the Lipschitz continuity of the boundary on real bounded sets, this implying piecewise monotonicity, we proceed to illustrate in the following proposition how to obtain the principle of smooth fit, which, as we highlighted before, is required to provide a unique solution to the associated free-boundary problem \eqref{eq:free-boundary1}--\eqref{eq:free-boundary3}. \begin{proposition}[The smooth-fit condition]\label{pr:smooth-fit}\ \\ For all $s\geq 0$, $y\mapsto W(s, y)$ is differentiable at $y = b(s)$. Moreover, $\partial_x W(s, b(s)) = \partial_x G(s, b(s))$. \end{proposition} \begin{proof} Recall that we have already obtained in \eqref{eq:W_x} an explicit form for $\partial_x W$ away from the boundary, namely, $$ \partial_x W(s, y) = \Esp{\frac{1}{f(s + \sigma^*(s, y))}} \ , \quad (s,y) \in \mathcal{C}. $$ The principle of smooth fit is just the validation of such a formula on the boundary points $y = b(s)$, $s\in\mathbb{R}_+$. We have that $\partial_x W(s, b(s)^+) = \partial_x G(s, b(s)) = 1/f(s)$, as $\sigma^*(s, y) = 0$ for all $y \geq b(s)$. By relying on \citet[Corollary 8]{cox_embedding_2015}, alongside the fact that our OSB is piecewise monotonic and continuous, we get that $\sigma^*(s, b(s)^-) = \sigma^*(s, b(s)) = 0\ \Pr$-a.s., and hence the Dominated Convergence Theorem (DCT) entails that $\partial_x W(s, b(s)^-) = 1/f(s) = \partial_x G(s, b(s))$, thus concluding that the smooth-fit condition holds. \end{proof} We are now in the position of getting a tractable characterization of both the value function and the OSB. Propositions \ref{pr:boundary_existence}--\ref{pr:smooth-fit} allow us to use an extension of the Itô's lemma \citep[Lemma A2]{d2020discounted} on the function $W(s + t, Y_t)$ for $t\geq 0$. 
By recalling that $\mathbb{L} W = 0$ on $\mathcal{C}$ and $W = G$ on $\mathcal{D}$, and after taking $\Pr_y$-expectation (which cancels the martingale term), we get \begin{align} W(s, y) &= \Es{W(s + t, Y_t)}{y} - \Es{\int_0^t (\mathbb{L} W)\lrp{s + u, Y_u}\,\mathrm{d} u}{y} \nonumber \\ &= \Es{W(s + t, Y_t)}{y} - \Es{\int_0^t \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq b(s + u)}\,\mathrm{d} u}{y}, \label{eq:pricing_formula_aux} \end{align} where the local-time term does not appear due to the smooth-fit condition. \begin{lemma}\label{lm:W->c} For all $(s, y)\in\mathbb{R}_+\times\mathbb{R}$, $$ \lim_{u\rightarrow\infty}\Es{W(s + u, Y_u)}{y} = c. $$ \end{lemma} \begin{proof} The Markov property of $Y$, together with the fact that both $s\mapsto s/f(s)$ and $s\mapsto f(s)$ are increasing and $s/f(s)\rightarrow 1$ as $s\rightarrow\infty$, implies that \begin{align} \Es{W(s + u, Y_u)}{y} &= \Es{\sup_{\sigma}\Es{G\lrp{s + u + \sigma, Y_\sigma}}{Y_u}}{y} \leq \Es{\Es{\sup_{r\geq 0}G\lrp{s + u + r, Y_r}}{Y_u}}{y} \nonumber \\ &= \Es{\Es{\sup_{r\geq 0}\lrb{c\frac{s + u + r}{f(s + u + r)} + \frac{Y_r}{f(s + u + r)}}}{Y_u}}{y} \nonumber \\ &\leq c\lrp{\mathbbm{1}(c > 0) + \frac{s + u}{f(s + u)}\mathbbm{1}(c \leq 0)} + \Es{\sup_{r\geq 0}\frac{Y_{u+ r}}{f(u + r)}}{y}, \label{eq:W->c_upper_bound} \end{align} and \begin{align} \Es{W(s + u, Y_u)}{y} &\geq \Es{\Es{\inf_{r\geq 0}G\lrp{s + u + r, Y_r}}{Y_u}}{y} \nonumber \\ &\geq c\lrp{\mathbbm{1}(c < 0) + \frac{s + u}{f(s + u)}\mathbbm{1}(c \geq 0)} + \Es{\inf_{r\geq 0}\frac{Y_{u+ r}}{f(s + u + r)}}{y}. 
\label{eq:W->c_lower_bound} \end{align} Notice that \begin{align*} \lim_{u\rightarrow\infty}\Es{\sup_{r\geq 0}\frac{Y_{u+ r}}{f(u + r)}}{y} = \Es{\lim_{u\rightarrow\infty}\sup_{r\geq u}\frac{Y_{r}}{f(r)}}{y} = \Es{\limsup_{u\rightarrow\infty}\frac{Y_{u}}{f(u)}}{y} = 0, \end{align*} where in the first equality we applied the monotone convergence theorem and in the second one we used the law of the iterated logarithm to determine the almost sure limit of the process in the numerator. A similar argument yields \begin{align*} \lim_{u\rightarrow\infty}\Es{\inf_{r\geq 0}\frac{Y_{u+ r}}{f(s + u + r)}}{y} = 0. \end{align*} Thus, we can take $u\rightarrow\infty$ in both \eqref{eq:W->c_upper_bound} and \eqref{eq:W->c_lower_bound} to complete the proof. \end{proof} By taking $t\rightarrow\infty$ in \eqref{eq:pricing_formula_aux} and relying on Lemma \ref{lm:W->c}, we get the following pricing formula for the value function: \begin{align} W(s, y) &= c - \Es{\int_0^\infty (\mathbb{L} W)\lrp{s + u, Y_u}\,\mathrm{d} u}{y} \nonumber \\ &= c - \Es{\int_0^\infty \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq b(s + u)}\,\mathrm{d} u}{y}. \label{eq:pricing_formula} \end{align} We can obtain a more tractable version of \eqref{eq:pricing_formula} by exploiting the linearity of $y\mapsto \partial_t G(s, y)$ (see \eqref{eq:G_t}) as well as the Gaussianity of $Y_u$. Specifically, since $Y_u\sim \mathcal{N}(y, u)$ under $\Pr_y$, we have $\Es{Y_u\mathbbm{1}\lrp{Y_u\geq x}}{y} = \bar{\Phi}((x - y)/\sqrt{u})y + \sqrt{u}\phi((x - y)/\sqrt{u})$, where $\bar{\Phi}$ and $\phi$ denote the survival and the density functions of a standard normal random variable, respectively.
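The truncated-mean identity for Gaussian variables used above can be validated by Monte Carlo; the Python sketch below (with arbitrary values of $y$, $u$, and $x$) compares the closed form with a simulation.

```python
import numpy as np
from math import erf, exp, pi, sqrt

Phi_bar = lambda t: 0.5 * (1 - erf(t / sqrt(2)))    # standard normal survival function
phi = lambda t: exp(-t * t / 2) / sqrt(2 * pi)      # standard normal density

y, u, x = 0.3, 2.0, 1.1
d = (x - y) / sqrt(u)
closed_form = y * Phi_bar(d) + sqrt(u) * phi(d)     # E_y[Y_u 1(Y_u >= x)]

rng = np.random.default_rng(7)
Y = y + sqrt(u) * rng.standard_normal(10**6)        # Y_u ~ N(y, u) under P_y
mc = np.mean(Y * (Y >= x))
print(closed_form, mc)  # the two values agree up to Monte Carlo error
```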
By shifting the integrating variable $s$ units to the right, we get that \begin{align}\label{eq:pricing_formula_refined} W(s, y) &= c - \int_s^\infty \frac{1}{f(u)}\lrp{c\bar{\Phi}_{s, y, u, b(u)} - \frac{(a + 2u)\lrp{(y + cu)\bar{\Phi}_{s, y, u, b(u)} + \sqrt{u - s}\phi_{s, y, u, b(u)}}}{2f^2(u)}} \,\mathrm{d} u, \end{align} where $a = e^{-\alpha} + e^{\alpha}$ and \begin{align*} \bar{\Phi}_{s_1, y_1, s_2, y_2} := \bar{\Phi}\lrp{\frac{y_2 - y_1}{\sqrt{s_2 - s_1}}}, \quad \phi_{s_1, y_1, s_2, y_2} := \phi\lrp{\frac{y_2 - y_1}{\sqrt{s_2 - s_1}}}, \quad y_1, y_2 \in \mathbb{R}, s_2 \geq s_1 \geq 0. \end{align*} Take now $y\downarrow b(s)$ in both \eqref{eq:pricing_formula} and \eqref{eq:pricing_formula_refined} to derive the free-boundary equation \begin{align}\label{eq:free-boundary_eq} G(s, b(s)) &= c - \Es{\int_0^\infty \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq b(s + u)}\,\mathrm{d} u}{b(s)}, \end{align} alongside its more explicit expression \begin{align*} G&(s, b(s)) \\ &= c - \int_s^\infty \frac{1}{f(u)}\lrp{c\bar{\Phi}_{s, b(s), u, b(u)} - \frac{(a + 2u)\lrp{(b(s) + cu)\bar{\Phi}_{s, b(s), u, b(u)} + \sqrt{u - s}\phi_{s, b(s), u, b(u)}}}{2f^2(u)}} \,\mathrm{d} u. \end{align*} It turns out that there exists a unique function $b$ that solves \eqref{eq:free-boundary_eq}, as we state in the next theorem. The proof of such an assertion follows from adapting the methodology used in \citet[Theorem 3.1]{peskir2005ontheamerican}, which addresses the uniqueness of the solution of the free-boundary equation for the American put option on a geometric Brownian motion. \begin{theorem} The integral equation \eqref{eq:free-boundary_eq} admits a unique solution among the class of continuous functions $\beta:\mathbb{R}_+\rightarrow\mathbb{R}$ of bounded variation and such that $\beta(s) > c$ for all $s\in\mathbb{R}_+$.
\end{theorem}
\begin{proof}
Suppose there exists a function $\beta:\mathbb{R}_+\rightarrow \mathbb{R}$ solving the integral equation \eqref{eq:free-boundary_eq}, and define $W^\beta$ as in \eqref{eq:pricing_formula}, but with $\beta$ instead of $b$. We can conclude from \eqref{eq:pricing_formula} that the integrand is twice continuously differentiable with respect to $y$ and, therefore, we can obtain $\partial_x W^\beta$ and $\partial_{xx} W^\beta$ by differentiating inside the integral sign and ensure they are continuous functions on $\mathbb{R}_+\times \mathbb{R}$. Besides, the following expression for $\mathbb{L} W^\beta$ can be easily computed from \eqref{eq:pricing_formula}:
$$
\mathbb{L} W^\beta(s, y) = \partial_t G(s, y)\mathbbm{1}(y \geq \beta(s)).
$$
Define the sets
\begin{align*}
\mathcal{C}_\beta := \lrb{(s, y) \in \mathbb{R}_+\times\mathbb{R} : y < \beta(s)},\ \ \mathcal{D}_\beta := \lrb{(s, y) \in \mathbb{R}_+\times\mathbb{R} : y \geq \beta(s)}.
\end{align*}
It turns out that, on both sets, $W^\beta$ is regular enough to apply the extension of Itô's formula given in Lemma A2 of \cite{d2020discounted}, which yields
\begin{align}\label{eq:pricing_formula_W^beta}
W^\beta(s, y) &= \Es{W^\beta(s + t, Y_t)}{y} - \Es{\int_0^t \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq \beta(s + u)}\,\mathrm{d} u}{y},
\end{align}
where the martingale term is canceled after taking the $\Pr_y$-expectation and the local time term is missing due to the continuity of $\partial_x W^\beta$ on $\partial \mathcal{C}_\beta$. In addition,
\begin{align}\label{eq:G_Ito}
G(s, y) &= \Es{G(s + t, Y_t)}{y} - \Es{\int_0^t \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y}.
\end{align}
Consider the first hitting time $\sigma_{\mathcal{C}_\beta}$ into $\mathcal{C}_\beta$, fix $(s, y)\in\mathcal{D}_\beta$, and notice that $\Pr_y(Y_u \geq \beta(s + u)) = 1$ for all $0 \leq u \leq \sigma_{\mathcal{C}_\beta}$.
Recall that $W^\beta(s, \beta(s)) = G(s, \beta(s))$ for all $s\in\mathbb{R}_+$, as $\beta$ solves \eqref{eq:free-boundary_eq}. Due to the law of the iterated logarithm, the DCT, the fact that $W^\beta$ satisfies \eqref{eq:pricing_formula} with $\beta$ instead of $b$, and recalling \eqref{eq:gain}, we get
$$
\lim_{u\rightarrow\infty}W^\beta(s + u, Y_u) = \lim_{u\rightarrow\infty}G(s + u, Y_u) = c
$$
$\Pr_y$-a.s. for all $y\in\mathbb{R}$. Hence, $W^\beta\big(s + \sigma_{\mathcal{C}_\beta}, Y_{\sigma_{\mathcal{C}_\beta}}\big) = G\big(s + \sigma_{\mathcal{C}_\beta}, Y_{\sigma_{\mathcal{C}_\beta}}\big)$. From \eqref{eq:pricing_formula_W^beta} and \eqref{eq:G_Ito} it follows that
\begin{align*}
W^\beta(s, y) &= \Es{W^\beta\big(s + \sigma_{\mathcal{C}_\beta}, Y_{\sigma_{\mathcal{C}_\beta}}\big)}{y} - \Es{\int_0^{\sigma_{\mathcal{C}_\beta}} \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y} \\
&= \Es{G\big(s + \sigma_{\mathcal{C}_\beta}, Y_{\sigma_{\mathcal{C}_\beta}}\big)}{y} - \Es{\int_0^{\sigma_{\mathcal{C}_\beta}} \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y} \\
& = G(s, y),
\end{align*}
which proves that $W^\beta = G$ on $\mathcal{D}_\beta$. Define now the first hitting time $\sigma_{\mathcal{D}_\beta}$ into $\mathcal{D}_\beta$. Notice that either $\sigma_{\mathcal{D}_\beta} = 0$ for $(s, y)\in\mathcal{D}_\beta$, on which $W^\beta = G$, or $Y_u < \beta(s + u)$ for all $0\leq u < \sigma_{\mathcal{D}_\beta}$. We derive from \eqref{eq:pricing_formula_W^beta} that
\begin{align*}
W^\beta(s, y) &= \Es{W^\beta\lrp{s + \sigma_{\mathcal{D}_\beta}, Y_{\sigma_{\mathcal{D}_\beta}}}}{y} = \Es{G\lrp{s + \sigma_{\mathcal{D}_\beta}, Y_{\sigma_{\mathcal{D}_\beta}}}}{y},
\end{align*}
for all $(s, y)\in\mathbb{R}_+\times\mathbb{R}$, which, after recalling the definition of $W$ in \eqref{eq:OSP_BM}, proves that $W^\beta \leq W$. Take $(s, y)\in \mathcal{D}_\beta\cap\mathcal{D}$ and consider the first hitting time $\sigma_\mathcal{C}$ into the continuation set $\mathcal{C}$.
Since \mbox{$W = G$} on $\mathcal{D}$ and $W^\beta = G$ on $\mathcal{D}_\beta$, by relying on \eqref{eq:pricing_formula}, \eqref{eq:pricing_formula_W^beta}, and the fact that \mbox{$\Pr_y\lrp{Y_u \geq b(s + u)} = 1$} for all $0 \leq u < \sigma_\mathcal{C}$, we get \begin{align*} \Es{W\lrp{s + \sigma_\mathcal{C}, Y_{\sigma_\mathcal{C} }}}{y} &= G(s, y) + \Es{\int_0^{\sigma_\mathcal{C}} \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y}, \\ \Es{W^\beta\lrp{s + \sigma_\mathcal{C}, Y_{\sigma_\mathcal{C}}}}{y} &= G(s, y) + \Es{\int_0^{\sigma_\mathcal{C}} \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq \beta(s + u)}\,\mathrm{d} u}{y}. \end{align*} After recalling that $W^\beta \leq W$, we can merge the two previous equalities into \begin{align*} \Es{\int_0^{\sigma_\mathcal{C}} \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq \beta(s + u)}\,\mathrm{d} u}{y} &\leq \Es{\int_0^{\sigma_\mathcal{C}} \partial_t G\lrp{s + u, Y_u}\,\mathrm{d} u}{y}, \end{align*} which, alongside the fact that $\partial_t G(s, y) < 0$ for all $(s, y)\in\mathcal{D}$ (otherwise we get from \eqref{eq:pricing_formula_aux} that the first exit time from a ball around $(s, y)$ small enough will yield a better strategy than stopping immediately) and the continuity of $\beta$, implies that $b \geq \beta$. Suppose that there exists a point $s\in\mathbb{R}_+$ such that $b(s) > \beta(s)$ and fix $y\in(\beta(s), b(s))$. Consider the stopping time $\sigma^* = \sigma^*(s, y)$ and plug it into both \eqref{eq:pricing_formula} and \eqref{eq:pricing_formula_W^beta} to obtain \begin{align*} \Es{W^\beta\lrp{s + \sigma^*, Y_{\sigma^*}}}{y} &= \Es{G\lrp{s + \sigma^*, Y_{\sigma^*}}}{y}\\ &= W^\beta(s, y) + \Es{\int_0^{\sigma^*} \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq \beta(s + u)}\,\mathrm{d} u}{y} \end{align*} and \begin{align*} \Es{W\lrp{s + \sigma^*, Y_{\sigma^*}}}{y} = \Es{G\lrp{s + \sigma^*, Y_{\sigma^*}}}{y} = W(s, y). 
\end{align*} Thus, since $W^\beta \leq W$, we get \begin{align*} \Es{\int_0^{\sigma^*} \partial_t G\lrp{s + u, Y_u}\mathbbm{1}\lrp{Y_u \geq \beta(s + u)}\,\mathrm{d} u}{y} \geq 0. \end{align*} By using the fact that $y < b(s)$, the continuity of $b$, and the time-continuity of the process $Y$, we can state that $\sigma^* > 0\ \Pr_y$-a.s. Therefore, since $\partial_t G(s, y) < 0$ for all $(s, y)\in\mathcal{D}_\beta$ (the same arguments used to prove that $\partial_t G < 0$ in $\mathcal{D}$ lead to this conclusion) the previous inequality can only stand if $\mathbbm{1}\lrp{Y_u \geq \beta(s + u)} = 0$ for all $0\leq u\leq \sigma^*$, meaning that $b(s + u) \leq \beta(s + u)$ in the same interval, which contradicts the assumption $b(s) > \beta(s)$ due to the continuity of both $b$ and $\beta$. \end{proof} \section{Solution of the original problem and some extensions} \label{sec:sol_original} Recall that the OSPs \eqref{eq:OSP_BM} and \eqref{eq:OSP_OUB} are equivalent, meaning that the value functions and the OSTs of both problems are linked through a homeomorphic transformation. Details on how to actually translate one problem into the other were given in Proposition \ref{pr:OSP_equiv}. It then follows that the stopping time $\tau^*(t, x)$ defined in \eqref{eq:OST_transform} is optimal for \eqref{eq:OSP_BM} and it admits the following alternative representation under~$\Pr_x$: \begin{align}\label{eq:OST_OSB} \tau^*(t, x) = \inf\lrb{u\geq 0 : X_{t + u} \geq \beta(t + u)},\quad \beta(t) = \frac{z}{c_z}G_{c_z}\lrp{s, b(s)}, \end{align} where $\beta$ is the OSB associated to \eqref{eq:OSP_OUB}, and $s = \upsilon(t)$ and $c_z$ are defined in Proposition \ref{pr:OSP_equiv}. We can obtain both $V$ and $\beta$ without requiring the computation of $W$ and $b$. 
Indeed, consider the infinitesimal generator of $\lrb{\lrp{t, X_t}}_{t \in [0, 1]}$, $\mathbb{L}_X$, and set $y=c_x$, $s_\varepsilon = s + \varepsilon$, and $t_\varepsilon = \upsilon^{-1}(s_\varepsilon)$ for $\varepsilon\in\mathbb{R}$. By means of \eqref{eq:value_equiv} and the chain rule, we get that \begin{align*} \frac{z}{c_z}\lrp{\mathbb{L} W_{c_z}}(s, y) :=&\; \lim_{\varepsilon\rightarrow 0}\varepsilon^{-1}\lrp{\Es{\frac{z}{c_z}W_{c_z}\lrp{s_\varepsilon, Y_\varepsilon}}{y} - \frac{z}{c_z}W_{c_z}(s, y)} \\ =&\; \lim_{\varepsilon\rightarrow 0}\varepsilon^{-1}\lrp{\Es{V(t_\varepsilon, X_{t_\varepsilon})}{t, x} - V(t, x)} \\ =&\; \lrp{\mathbb{L}_X V}(t, x)\lrp{\upsilon^{-1}}'(s). \end{align*} Hence, after multiplying both sides of \eqref{eq:pricing_formula_aux} by $z/c_z$, integrating with respect to $\upsilon^{-1}(u)$ instead of $u$, and recalling that $\mathbb{L}_X V(t, x) = 0$ for all $x\leq \beta(t)$ and $V(t, x) = x$ for all $x\geq\beta(t)$, we get the pricing formula \begin{align} V(t, x) &= z - \Es{\int_0^{1-t} (\mathbb{L}_X V)(t + u, X_{t + u})\,\mathrm{d} u}{t, x} \nonumber \\ &= z - \Es{\int_0^{1-t} \mu(t + u, X_{t + u})\mathbbm{1}(X_{t + u} \geq \beta(t + u))\,\mathrm{d} u}{t, x}. 
\label{eq:pricing_formula_original}
\end{align}
In the same fashion as we obtained \eqref{eq:pricing_formula_refined}, we can take advantage of the linearity of $x\mapsto\mu(t, x)$ and the Gaussian marginal distributions of $X$ to come up with the following refined version of \eqref{eq:pricing_formula_original}:
\begin{align}\label{eq:pricing_formula_original_refined}
V(t, x) = z - \int_t^1 K(t, x, u, \beta(u))\,\mathrm{d} u,
\end{align}
where, for $x_1, x_2 \in\mathbb{R}$ and $0\leq t_1 \leq t_2 \leq 1$,
\begin{align}\label{eq:kernel}
K(t_1, x_1, t_2, x_2) := \alpha\frac{z\widetilde{\Phi}_{t_1, x_1, t_2, x_2} - \cosh(\alpha(1 - t_2))(m_{t_2}(t_1, x_1)\widetilde{\Phi}_{t_1, x_1, t_2, x_2} + v_{t_2}(t_1)\widetilde{\phi}_{t_1, x_1, t_2, x_2})}{\sinh(\alpha(1 - t_2))},
\end{align}
with
\begin{align*}
\widetilde{\Phi}_{t_1, x_1, t_2, x_2} := \bar{\Phi}\lrp{\frac{x_2 - m_{t_2}(t_1, x_1)}{v_{t_2}(t_1)}}, \quad \widetilde{\phi}_{t_1, x_1, t_2, x_2} := \phi\lrp{\frac{x_2 - m_{t_2}(t_1, x_1)}{v_{t_2}(t_1)}}
\end{align*}
and
\begin{align*}
m_{t_2}(t_1, x_1) := \Es{X_{t_2}}{t_1, x_1} = \frac{x_1\sinh(\alpha(1 - t_2)) + z\sinh(\alpha (t_2 - t_1))}{\sinh(\alpha(1 - t_1))}, \\
v_{t_2}(t_1) := \sqrt{\Vs{X_{t_2}}{t_1}} = \sqrt{\frac{\gamma^2}{\alpha}\frac{\sinh(\alpha(1 - t_2))\sinh(\alpha (t_2 - t_1))}{\sinh(\alpha(1 - t_1))}}.
\end{align*}
Consequently, by taking $x\downarrow\beta(t)$ in \eqref{eq:pricing_formula_original} (or by directly transforming \eqref{eq:free-boundary_eq} in the same way we obtained \eqref{eq:pricing_formula_original} from \eqref{eq:pricing_formula}), we get the free-boundary equation
\begin{align*}
\beta(t) &= z - \Es{\int_0^{1-t} (\mathbb{L}_X V)(t + u, X_{t + u})\,\mathrm{d} u}{t, \beta(t)} \\
&= z - \Es{\int_0^{1-t} \mu(t + u, X_{t + u})\mathbbm{1}(X_{t + u} \geq \beta(t + u))\,\mathrm{d} u}{t, \beta(t)},
\end{align*}
which may also be expressed as
\begin{align}\label{eq:free-boundary_eq_original_refined}
\beta(t) = z - \int_t^1 K(t, \beta(t), u, \beta(u))\,\mathrm{d} u.
\end{align}
The next three remarks broaden the scope of applicability of the OUB as the underlying model in \eqref{eq:OSP_OUB}. In particular, the first two reveal that setting the terminal time to $1$ and the pulling level (coming from the asymptotic mean of the OU process underneath) to $0$ does not take a toll on generality, while the last one shows that the OSP for the BB arises as a limit case when $\alpha\to0$.
\begin{remark}[OUB with a general pulling level]\label{rmk:general_pulling_level}
Let $\widetilde{X}^\theta = \big\{\widetilde{X}_t^\theta\big\}_{t \in [0, 1]}$ be an OU process satisfying the SDE $\mathrm{d} \widetilde{X}_t^\theta = \alpha(\widetilde{X}_t^\theta - \theta)\,\mathrm{d} t + \gamma\,\mathrm{d} B_t$. That is, $\widetilde{X}^\theta$ is pulled towards $\theta$ with a strength dictated by $\alpha$. Denote by $X^{\theta, z} = \big\{X_t^{\theta, z}\big\}_{t \in [0, 1]}$ the OUB process built on top of $\widetilde{X}^\theta$ and such that $X_1^{\theta,z} = z$. It is easy to check that $X^{\theta, z} = X^{0, z - \theta} + \theta$, whenever $X_0^{0, z - \theta} = X_0^{\theta, z} - \theta$. Denote by $V^{\theta, z}$ and $\beta^{\theta, z}$ the value function and the OSB associated to the OSP \eqref{eq:OSP_OUB} with $X$ replaced by $X^{\theta, z}$.
Then $V^{\theta, z}(t, x) = V^{0, z - \theta}(t, x - \theta) + \theta$ and $\beta^{\theta, z}(t) = \beta^{0, z - \theta}(t) + \theta$.
\end{remark}
\begin{remark}[OUB with a general horizon]
Denote by $X^{\alpha, \gamma, T} = \big\{X_t^{\alpha, \gamma, T}\big\}_{t \in [0, T]}$ an OUB with slope $\alpha$, volatility $\gamma$, and horizon $T$. Likewise, let $V^{\alpha, \gamma, T}$ and $\beta^{\alpha, \gamma, T}$ be the corresponding value function and the OSB. By relying on the scaling property of a Brownian motion, one can easily verify that $X_t^{\alpha r, \gamma, T} = X_{rt}^{\alpha, \gamma r^{-1/2}, rT}$ $\Pr_x$-a.s. for any $r > 0$. Consequently, $V^{\alpha r, \gamma, T}(t, x) = V^{\alpha, \gamma r^{-1/2}, rT}(rt, x)$ and $\beta^{\alpha r, \gamma, T}(t) = \beta^{\alpha, \gamma r^{-1/2}, rT}(rt)$. Thereby, by taking $r = 1/T$, one can derive $V^{\alpha, \gamma, T}$ and $\beta^{\alpha, \gamma, T}$ for any set of values $\alpha$, $\gamma$, and $T$ from the solution of the OSP in \eqref{eq:OSP_OUB}.
\end{remark}
\begin{remark}[BB from an OUB]
To emphasize the dependence on $\alpha$, denote by $X(\alpha)$, $V_\alpha$, and $\beta_\alpha$, respectively, the OUB solving \eqref{eq:OUB_SDE}, the value function in \eqref{eq:value}, and the corresponding OSB. The process $X_t(\alpha)$ has the following integral representation under $\Pr_x$ \citep{Barczy2013Sample}:
\begin{align*}
X_{t} = x\frac{\sinh(\alpha (1 - t))}{\sinh(\alpha)} + z\frac{\sinh(\alpha t)}{\sinh(\alpha)} + \gamma\int_{0}^t \frac{\sinh(\alpha (1 - t))}{\sinh(\alpha (1 - u))}\,\mathrm{d} B_u,
\end{align*}
from which we can conclude, after taking $\alpha\rightarrow 0$ and using the DCT, that $X_t(\alpha) \rightarrow \widetilde{X}_t$ $\Pr_x$-a.s. for all $t\in[0, 1)$, where $\widetilde{X}$ is a BB process with final value $\widetilde{X}_1 = z$.
Then, by applying Theorem 5 from \cite{coquet_convergence_2007} we have that $V_\alpha \rightarrow \widetilde{V}$, and hence $\beta_\alpha \rightarrow \widetilde{\beta}$, as $\alpha\rightarrow 0$, where $\widetilde{V}$ and $\widetilde{\beta}$ are the value function and the OSB related to $\widetilde{X}$.
\end{remark}
\section{Numerical results}
\label{sec:numerical_results}
The free-boundary equation \eqref{eq:free-boundary_eq_original_refined} does not admit a closed-form solution and thus numerical procedures come in handy to compute an approximate boundary. By exploiting the fact that the OSB at a given time $t$ depends only on its shape from $t$ up to the horizon, one can discretize the integral in \eqref{eq:free-boundary_eq_original_refined} by means of a right Riemann sum and, since the terminal value $\beta(1)$ is known, the entire boundary can be computed in a backward form. This method of backward induction is detailed in \citet[Chapter 8]{detemple_american-style_2005} and examples of its implementation can be found, e.g., in \cite{Pedersen02onnonlinear}. Another approach to solve \eqref{eq:free-boundary_eq_original_refined} is by using Picard iterations, that is, by treating \eqref{eq:free-boundary_eq_original_refined} as a fixed-point problem in which the entire boundary is updated in each step. The works of \cite{DETEMPLE2020104807} and \cite{de_angelis_optimal_2020} use this approach to solve the associated Volterra-type integral equation characterizing the OSB. To the best of our knowledge, when it comes to non-linear integral equations arising from OSPs, the convergence of both the Picard scheme and the backward induction technique is numerically checked rather than formally proved. Therefore, we chose to use the Picard scheme, since empirical tests suggested a faster convergence rate while keeping a similar accuracy compared to the backward induction approach.
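As a rough illustration of this fixed-point approach, the scheme detailed next can be condensed into a short Python sketch that iterates a right-Riemann discretization of \eqref{eq:free-boundary_eq_original_refined} with the kernel $K$ from \eqref{eq:kernel}. The mesh size, tolerance, and parameter values ($\alpha = \gamma = 1$, $z = 0$, $N = 50$) are illustrative choices only, and this sketch is not the implementation used for the figures in this section.

```python
import math

def K(t1, x1, t2, x2, alpha, gamma, z):
    # Kernel from eq. (kernel): alpha*(z*Phi - cosh(alpha(1-t2))*(m*Phi + v*phi)) / sinh(alpha(1-t2)),
    # with m and v the conditional mean and standard deviation of X_{t2} given X_{t1} = x1.
    m = (x1 * math.sinh(alpha * (1 - t2)) + z * math.sinh(alpha * (t2 - t1))) \
        / math.sinh(alpha * (1 - t1))
    v = math.sqrt(gamma ** 2 / alpha
                  * math.sinh(alpha * (1 - t2)) * math.sinh(alpha * (t2 - t1))
                  / math.sinh(alpha * (1 - t1)))
    s = (x2 - m) / v
    Phi = 0.5 * math.erfc(s / math.sqrt(2.0))                 # survival function of N(0, 1)
    phi = math.exp(-0.5 * s * s) / math.sqrt(2.0 * math.pi)   # density of N(0, 1)
    return alpha * (z * Phi - math.cosh(alpha * (1 - t2)) * (m * Phi + v * phi)) \
        / math.sinh(alpha * (1 - t2))

def picard_boundary(N=50, tol=1e-3, max_iter=200, alpha=1.0, gamma=1.0, z=0.0):
    # Logarithmically-spaced mesh t_i = ln(1 + i(e - 1)/N), so t_0 = 0 and t_N = 1.
    t = [math.log(1 + i * (math.e - 1) / N) for i in range(N + 1)]
    beta = [z] * (N + 1)  # beta^(0) = z everywhere; beta(1) = z stays fixed
    for _ in range(max_iter):
        new = beta[:]
        for i in range(N):
            # Right Riemann sum over [t_i, 1]; the j = N - 1 piece is dropped
            # because the kernel is not defined at t2 = 1.
            new[i] = z - sum(K(t[i], beta[i], t[j + 1], beta[j + 1], alpha, gamma, z)
                             * (t[j + 1] - t[j]) for j in range(i, N - 1))
        if max(abs(a - b) for a, b in zip(new, beta)) < tol:
            beta = new
            break
        beta = new
    return t, beta

mesh, boundary = picard_boundary()
```

With these illustrative settings the iterates stay above the pinning level $z = 0$ for $t < 1$ and collapse onto it at the horizon, in line with the qualitative behavior of the OSB.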
Define a partition of $[0, 1]$, namely, $0 = t_0 < t_1 < \cdots < t_N = 1$ for $N\in\mathbb{N}$. Given that $\beta(1) = z$, we will initialize the Picard iterations by starting with the constant boundary $\beta^{(0)}:[0, 1] \to \mathbb{R}$ with $\beta^{(0)}\equiv z$. The updating mechanism that generates subsequent boundaries is laid down in the following formula, which results from discretizing the integral in \eqref{eq:free-boundary_eq_original_refined} by using a right Riemann sum:
\begin{align*}
\beta_i^{(k)} = z - \sum_{j = i}^{N - 2} K\lrp{t_i, \beta_i^{(k - 1)}, t_{j + 1}, \beta_{j + 1}^{(k - 1)}}(t_{j + 1} - t_j), \quad k = 1, 2, \dots
\end{align*}
We neglect the $(N-1)$-th addend and allow the sum to run only until $N - 2$ since $K(t, x, 1, z)$ is not well defined, and therefore the last integral piece cannot be included in the right Riemann sum. As the overall integral is finite, the last piece vanishes as $t_{N - 1}$ gets closer to $1$. We chose to stop the fixed-point Picard algorithm after the $m$-th iteration if $m = \min\big\{k > 0: \max_{i = 1, \dots, N}| \beta_i^{(k-1)} - \beta_i^{(k)}| < \varepsilon\big\}$, with $\varepsilon=10^{-4}$. Empirical evidence suggested that the best performance of the algorithm was achieved when using a non-uniform mesh that lets the distances $t_i - t_{i-1}$ decrease smoothly as $i$ increases. In our computations, we used the logarithmically-spaced partition $t_i = \ln\lrp{1 + i(e - 1)/N}$, where $N = 500$ unless otherwise specified. Figures \ref{fig:alpha_change}, \ref{fig:gamma_change}, and \ref{fig:pinning_change} reveal how the OSB's shape is affected by different sets of values for the slope $\alpha$, the volatility $\gamma$, and the anchor point $z$.
\begin{figure}[h!]
\centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeSlope_PinningEqualPullingLevel.pdf} \subcaption{$z = 0, \gamma = 1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeSlope_PinningLowerPullingLevel.pdf} \subcaption{$z = -5, \gamma = 1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeSlope_PinningGreaterPullingLevel.pdf} \subcaption{$z = 5, \gamma = 1$} \end{subfigure} \vspace{6PT} \caption{\small Optimal stopping boundary estimation for different values of $\alpha$. The boundary is pulled towards $0$ with a strength that increases as both $|\alpha|$ (values of $\alpha$ with equal absolute values yield the same boundary) and the residual time to the horizon $1 - t$ get higher. As $\alpha \rightarrow 0$, the boundary estimation is shown to converge towards the OSB of a BB (dashed line), which is known to be $z + L\sqrt{1 - t}$, for $L\approx 0.8399$.} \label{fig:alpha_change} \end{figure} \begin{figure}[h!] \centering \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeVolatility_PinningEqualPullingLevel.pdf} \subcaption{$z = 0, \alpha = 1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeVolatility_PinningLowerPullingLevel.pdf} \subcaption{$z = -5, \alpha = 1$} \end{subfigure} \begin{subfigure}[b]{0.32\textwidth} \includegraphics[width = \textwidth]{img/OSB_OU_ChangeVolatility_PinningGreaterPullingLevel.pdf} \subcaption{$z = 5, \alpha = 1$} \end{subfigure} \vspace{6PT} \caption{\small Optimal stopping boundary estimation for different values of $\gamma$. The boundary exhibits an increasing proportional relationship with respect to $\gamma$.} \label{fig:gamma_change} \end{figure} \begin{figure}[h!] 
\centering
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width = \textwidth]{img/OSB_OUB_ChangePinning_N10.pdf}
\subcaption{$\alpha = \gamma = 1, N = 10$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width = \textwidth]{img/OSB_OUB_ChangePinning_N100.pdf}
\subcaption{$\alpha = \gamma = 1, N = 100$}
\end{subfigure}
\begin{subfigure}[b]{0.32\textwidth}
\includegraphics[width = \textwidth]{img/OSB_OUB_ChangePinning_N500.pdf}
\subcaption{$\alpha = \gamma = 1, N = 500$}
\end{subfigure}
\vspace{6pt}
\caption{\small Optimal stopping boundary estimation for different values of $z$ and $N$. We display $t\mapsto\beta(t) - z$ to allow a clearer comparison across the different values of $z$. As $N$ gets larger the boundary estimation is shown to converge.}
\label{fig:pinning_change}
\end{figure}
The code implementing the boundary computation is available at \cite{OSP_OUB_GitHub}.
\section{Conclusions}
\label{sec:conclusions}
In this paper we solved the finite-horizon OSP for an OUB process with the identity as the gain function. To the best of our knowledge, so far the only Markov bridge addressed by the optimal stopping literature has been the BB and some slight variations of it (see, e.g., \cite{shepp_explicit_1969, follmer_optimal_1972, ekstrom_optimal_2009, ernst_revisiting_2015, leung_optimal_2018, de_angelis_optimal_2020, glover_optimally_2020, ekstrom_optimal_2020, d2020discounted}). Markov bridges are potentially useful in mathematical finance as they allow one to include additional information at some terminal time. Arguing as in \cite{shepp_explicit_1969} for the BB, we worked out the OUB case by coming up with an equivalent OSP having a Brownian motion as the underlying process after time-space transforming the OUB.
Contrary to \cite{shepp_explicit_1969}, the complexity of our problem did not allow us to guess a candidate solution, and we directly characterized the value function and the OSB by means of the pricing formula and the free-boundary equation. However, the equivalence between both OSPs was used only to ease technicalities along the proofs, and it is not needed to compute the solution, since both the pricing formula and the free-boundary equation are also provided in the original formulation. We discussed how to use a Picard iteration algorithm to numerically approximate the OSB and displayed some examples to illustrate how different sets of values for the OUB's parameters govern the shape of the OSB.
\section*{Acknowledgments}
The first and third authors acknowledge the financial support by Spain's Ministry of Science and Innovation through the grant PID2020-116694GB-I00. The research of the first and second authors was also supported by the Community of Madrid through the framework of the multi-year agreement with Carlos III University of Madrid in its line of action ``Excelencia para el Profesorado Universitario'' (EPUC3M13). The second author acknowledges support by grant PGC2018-097284-B-100 by Spain's Ministry of Science, Innovation and Universities. The grant is co-funded with ERDF~funds.
Q: Calculating a new column in pandas dataframe which is a subset of values returns column not found error I'm interested in finding the sum of values in a column creating a new column in the process on a subset of a dataframe meeting some condition. I'm not sure of how to work the sum of a new column from these two as I get an error when I try to access the New column created in the process: import pandas as pd d1={'X':[1,10,100,1000,1,10,100,1000,1,10,100,1000], 'Y':[0.2,0.5,0.4,1.2,0.1,0.25,0.2,0.6,0.05,0.125,0.1,0.3], 'RUN':[1,1,1,1,2,2,2,2,3,3,3,3] } df=pd.DataFrame(d1) for RUNno in (df.RUN.unique()): df1=df.RUN==RUNno #Selects the rows matching RUNno df[df1]['NewColumn']=df[df1]['X']+df[df1]['Y'] #For the selected dataset, calculates the sum of two columns and creates a new column print(df[df1].NewColumn) #Print the contents of the new column I am unable to get df[df1].NewColumn contents as it is unable to identify the Key NewColumn. I'm pretty sure this way of creating new columns works on the standard dataframe df but not sure why it doesn't work on df[df1]. For eg. df['NewColumn']=df['X']+df['Y'] df.NewColumn Would work seamlessly. To update the question, the columns data entries that are added to form the new column are from two different dataframes. 
import pandas as pd
from scipy.interpolate import interp1d

interpolating_functions=dict()

d1={'X':[1,10,100,1000,1,10,100,1000,1,10,100,1000],
    'Y':[0.2,0.5,0.4,1.2,0.1,0.25,0.2,0.6,0.05,0.125,0.1,0.3],
    'RUN':[1,1,1,1,2,2,2,2,3,3,3,3]
   }

d2={'X':[1,10,100,1000,1,10,100,1000,1,10,100,1000],
    'Y':[0.2,0.5,0.4,1.2,0.1,0.25,0.2,0.6,0.05,0.125,0.1,0.3],
    'RUN':[1,1,1,1,2,2,2,2,3,3,3,3]
   }

df=pd.DataFrame(d1)
df2=pd.DataFrame(d2)

for RUNno in (df.RUN.unique()):
    df1=df.RUN==RUNno
    df3=df.RUN==RUNno
    interpolating_functions[RUNno]=interp1d(df2[df3].X,df2[df3].Y)
    df[df1]['NewColumn']=df[df1]['X']+interpolating_functions[RUNno](df2[df3]['X'])
    print(df[df1].NewColumn)

A: Use a custom function with GroupBy.apply that creates the new column and then returns each group - here x:

def func(x):
    #check groups
    print (x)
    #working with the group DataFrame x
    x['NewColumn']=x['X']+x['Y']
    return x

df = df.groupby('RUN').apply(func)
print (df)
       X      Y  RUN  NewColumn
0      1  0.200    1      1.200
1     10  0.500    1     10.500
2    100  0.400    1    100.400
3   1000  1.200    1   1001.200
4      1  0.100    2      1.100
5     10  0.250    2     10.250
6    100  0.200    2    100.200
7   1000  0.600    2   1000.600
8      1  0.050    3      1.050
9     10  0.125    3     10.125
10   100  0.100    3    100.100
11  1000  0.300    3   1000.300

It seems you need loc to select rows by the boolean masks; it is only necessary that the index has the same length in both DataFrames:

for RUNno in (df.RUN.unique()):
    df1=df.RUN==RUNno
    df3=df.RUN==RUNno
    interpolating_functions[RUNno]=interp1d(df2.loc[df3, 'X'], df2.loc[df3,'Y'])
    df.loc[df1, 'NewColumn'] = df.loc[df1, 'X'] + interpolating_functions[RUNno](df2.loc[df3, 'X'])

print (df)
       X      Y  RUN  NewColumn
0      1  0.200    1      1.200
1     10  0.500    1     10.500
2    100  0.400    1    100.400
3   1000  1.200    1   1001.200
4      1  0.100    2      1.100
5     10  0.250    2     10.250
6    100  0.200    2    100.200
7   1000  0.600    2   1000.600
8      1  0.050    3      1.050
9     10  0.125    3     10.125
10   100  0.100    3    100.100
11  1000  0.300    3   1000.300
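As a minimal, self-contained check of the `.loc` approach, using the `d1` data from the question (the expected `NewColumn` values are simply `X + Y` row by row):

```python
import pandas as pd

d1 = {'X': [1, 10, 100, 1000, 1, 10, 100, 1000, 1, 10, 100, 1000],
      'Y': [0.2, 0.5, 0.4, 1.2, 0.1, 0.25, 0.2, 0.6, 0.05, 0.125, 0.1, 0.3],
      'RUN': [1, 1, 1, 1, 2, 2, 2, 2, 3, 3, 3, 3]}
df = pd.DataFrame(d1)

for RUNno in df.RUN.unique():
    mask = df.RUN == RUNno
    # .loc writes into df itself; df[mask]['NewColumn'] = ... writes into a
    # temporary copy, which is why the original code cannot find the key later
    df.loc[mask, 'NewColumn'] = df.loc[mask, 'X'] + df.loc[mask, 'Y']

print(df['NewColumn'].tolist())
```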
How to clean or change the VQ35DE 3.5L V6 engine's air filter in a 3rd generation 2015 to 2018 Nissan Murano. This automotive maintenance tutorial was specifically written to assist owners of the third generation (2015, 2016, 2017, 2018 and probably also the face lifted 2019 & 2020 model years) Nissan Murano SUV in cleaning or changing the engine air filter element for the VQ35DE 3.5 liter V6 motor. Owners of other Nissan and Infiniti vehicles such as the Rogue, Pathfinder, Armada, Versa, Sentra, Altima, Maxima, Leaf, 370Z, GT-R, Frontier, Titan, NV200, Q50, Q70, Q60, QX30, QX50, QX60, QX70 and QX80 may also find these DIY instructions to be helpful. A few compatible replacement filters with their part numbers are as follows: Fram CA4309, Purolator A24278, ACDelco A975C, K&N 33-2031-2, Champion CAP4309, PotAuto MAP 6039, EPAuto GP309 and Premium Guard PA4278. No tools are needed to access and replace the engine air filter. The rectangular shaped black plastic engine air filter box is located to the left of the fuse box, to the right of the brake fluid reservoir and behind the 12V automotive battery. There are two metal latches on the front edge of the air box. Gently flip the metal clips up and off the bottom half of the air box to release the cover. Once the two metal latches are released, gently lift the cover off the top of the air box. If the old air filter is dark grey or black and clogged with dirt, dust, insects, leaves, hair, pollen, soot, sand and other debris, it should be replaced with a new element. I recommend buying the Fram CA4309 engine air filter since it has excellent reviews on Amazon. If your vacuum cleaner has a crevice attachment, clean out any debris or sand in the bottom of the air box. Lower the new filter into the bottom half of the air box with the pleats facing down and the rubber gasket facing up towards you. Line up the tabs on the rear edge of the air box cover with the slots on the bottom half of the air box. 
Push the air box cover back down into place. Flip up the two metal latches and snap them into place over the top half of the air box. Double check that the two clips are secure and that no part of the new filter is visible at every edge. For more, please check out all of my 2015-2018 Nissan Murano DIY Repair & Maintenance Guides.
{-# OPTIONS_GHC -Wall #-} {-# LANGUAGE TupleSections #-} module Algw.Infer where import Algw.Ast import Algw.Type import Algw.Env import State import Data.IORef import Data.Maybe import Control.Monad import qualified Data.Map as M import qualified Data.Set as S makeNewVar :: Infer (Infer TName) makeNewVar = do r <- newIORef 'a' -- return a closure like structure to mock a generator return $ do v <- readIORef r modifyIORef r succ return [v] generalize :: Env -> T -> Scheme generalize env t = let fvs = freeVars t `S.difference` freeVars env -- cause poly type here can only hold single quantified type var in S.fold Poly (Mono t) fvs replaceFreeVars :: Scheme -> Subrule -> T replaceFreeVars (Mono t) s = subst s t replaceFreeVars (Poly _ t) s = replaceFreeVars t s -- just replace quantified type variables by fresh ones to make it monomorphic instantiate :: Infer TName -> Scheme -> Infer T -- each poly type hold single quantified type variable is not really a good design, but just to be compatible with the origin paper -- τ ::= α | ι | τ → τ -- σ ::= τ | ∀α. 
σ instantiate newVar t = let boundVars = allVars t `S.difference` freeVars t -- update quantified type variable with fresh one update acc a = do fresh <- fmap TVar newVar return $ M.insert a fresh acc replace = foldM update M.empty boundVars -- applicative functor in pure (replaceFreeVars t) <*> replace occurs :: TName -> T -> Bool occurs a t = a `S.member` freeVars t makeSingleSubrule :: TName -> T -> Infer Subrule makeSingleSubrule a t | t == TVar a = return emptyRule | occurs a t = error "occurs check fails" | otherwise = return $ M.singleton a t -- find mgu(most general unifier) of two types unify :: T -> T -> Infer Subrule unify TInt TInt = return emptyRule unify TBool TBool = return emptyRule unify (TVar n) t = makeSingleSubrule n t unify t (TVar n) = makeSingleSubrule n t unify (TArrow tl1 tr1) (TArrow tl2 tr2) = do s1 <- unify tl1 tl2 s2 <- subst s1 tr1 `unify` subst s1 tr2 return $ s2 `compose` s1 unify t1 t2 = error $ "types do not unify: " ++ show t1 ++ " vs. " ++ show t2 -- just like assoc in clojure assocEnv :: TName -> Scheme -> Env -> Env assocEnv n v env = M.insert n v $ M.delete n env algw :: Infer TName -> Env -> Expr -> IO (Subrule, T) algw newVar env (EVar name) = (emptyRule,) <$> instantiate newVar t -- pure (emptyRule,) <*> instantiate newVar t is also fine where t = fromMaybe (error $ "unbound variable: " ++ name) $ M.lookup name env {- t <- fmap TVar newVar will work because instance Functor IO where fmap f action = do result <- action return (f result) -} algw newVar env (EAbs name expr) = do fresh <- fmap TVar newVar let env' = assocEnv name (Mono fresh) env (subrule, mono) <- algw newVar env' expr return (subrule, subst subrule fresh `TArrow` mono) algw newVar env (EApp e1 e2) = do (s1, m1) <- algw newVar env e1 (s2, m2) <- algw newVar (subst s1 env) e2 fresh <- fmap TVar newVar s3 <- unify (subst s2 m1) (TArrow m2 fresh) return (s3 `compose` s2 `compose` s1, subst s3 fresh) algw newVar env (ELet name value body) = do (s1, vmono) <- algw 
newVar env value let env' = subst s1 env g = generalize env' vmono env'' = assocEnv name g env' (s2, bmono) <- algw newVar env'' body return (s2 `compose` s1, bmono) -- environment is assumptions at the initial state infer :: Env -> Expr -> IO T infer env expr = do newVar <- makeNewVar (_, t) <- algw newVar env expr return t
2.8: International Investment Position

Learning objectives

1. Learn how to define and interpret a country's international investment position.
2. Understand how the international investment position is updated from year to year.

A country's international investment position (IIP) is like a balance sheet in that it shows the total holdings of foreign assets by domestic residents and the total holdings of domestic assets by foreign residents at a point in time. In the International Monetary Fund's (IMF) financial statistics, these are listed as domestic assets (foreign assets held by domestic residents) and domestic liabilities (domestic assets owned by foreign residents). The financial account balance, whose counterpart is the current account balance, is more like an income statement that shows the changes in asset holdings during the past year. In other words, the financial account balance consists of flow variables since it records changes in the country's asset holdings during the year, while the international asset position of a country consists of stock variables since it records the total value of assets at a point in time.

A country's net international asset position may be in surplus, deficit, or balance. If in surplus, then the value of foreign assets (debt and equity) held by domestic residents exceeds the value of domestic assets held by foreigners. Alternatively, we could say that domestic assets exceed domestic liabilities. This country would then be referred to as a creditor country. If the reverse is true, so that domestic liabilities to foreigners exceed domestic assets, then the country would be called a debtor country.

Asset holdings may consist of either debt obligations or equity claims. Debt consists of IOUs (i.e., I owe you) in which two parties sign a contract agreeing to an initial transfer of money from the lender to the borrower followed by a repayment according to an agreed schedule. The debt contract establishes an obligation for the borrower to repay principal and interest in the future. Equity claims represent ownership shares in potentially productive assets. Equity holdings do not establish obligations between parties, at least not in the form of guaranteed repayments. Once ownership in an asset is transferred from seller to buyer, all advantages and disadvantages of the asset are transferred as well.

Debt and equity obligations always pose several risks. The first risk with debt obligations is the risk of possible default (either total or partial). To the lender, default risk means that the IOU will not be repaid at all, that it will be repaid only in part, or that it is repaid over a much longer period of time than originally contracted. The risk of default to the borrower is that future borrowing will likely become unavailable. The advantage of default to the borrower, of course, is that not all the borrowed money is repaid. The second risk posed by debt is that the real value of the repayments may be different than expected. This can arise because of unexpected inflation or unexpected currency changes. Consider inflation first. If inflation is higher than expected, then the real value of debt repayment (if the nominal interest rate is fixed) will be lower than originally expected. This will be an advantage to the borrower, who repays less in real terms, and a disadvantage to the lender, who receives less in real terms. If inflation turns out to be less than expected, then the advantages are reversed.

Next, consider currency fluctuations. Suppose a domestic resident, who receives income in the domestic currency, borrows foreign currency in the international market. If the domestic currency depreciates, then the value of the repayments in domestic currency terms will rise even though the foreign currency repayment value remains the same. Thus currency depreciations can be harmful to borrowers of foreign currency. A similar problem can arise for a lender. Suppose a domestic resident purchases foreign currency and then lends it to a foreign resident (note that this is the equivalent of saving money abroad). If the domestic currency appreciates, then foreign savings, once cashed in, will purchase fewer domestic goods and the lender will lose.

The risk of equity purchases arises whenever the asset's rate of return is less than expected. This can happen for a number of different reasons. First, if the equity purchases are direct investment in a business, then the return on that investment will depend on how well the business performs. If the market is vibrant and management is good, then the investment will be profitable. Otherwise, the rate of return on the investment could be negative. All the risk, however, is borne by the investor. The same holds true for stock purchases. Returns on stocks may be positive or negative, but it is the purchaser who bears full responsibility for the return on the investment. Equity purchases can suffer from exchange rate risk as well. When foreign equities are purchased, their rate of return in terms of domestic currency will depend on the currency value. If the foreign currency in which assets are denominated falls substantially in value, then the value of those assets falls along with it.

The U.S. International Investment Position

The United States is the largest debtor nation in the world. This means that its international investment position is in deficit and the monetary value of that deficit is larger than that of any other country in the world. The data for the U.S. international investment position in 2008 are available in the U.S. BEA international investment position spreadsheet. (The data are available from the Bureau of Economic Analysis, International Economic Accounts, International Investment Position, at www.bea.gov/international/xls/intinv08_t1.xls.) At market values the preliminary estimate for 2008 is that the U.S. was in debt to the rest of the world in the amount of $3.469 trillion. (Refer to cell I22 in the spreadsheet.) Excluding financial derivatives that refer to interest rate and foreign exchange contracts, the United States was in debt in the amount of −$3.628 trillion (cell I24).

Note that this valuation is the U.S. "net" investment position, meaning that it is the difference between the sum total value of foreign assets owned by U.S. residents (U.S. assets abroad) minus U.S. assets owned by foreigners (foreign-owned assets in the United States). The first of these, U.S. assets abroad, represents our purchases of foreign equities and money we have lent to foreigners. The total value stood at $19.888 trillion in 2008 using market value methods (cell I26). The second, foreign-owned assets in the United States, represents foreign purchases of U.S. equities and money foreigners have lent to us or, equivalently, that we have borrowed. The total in 2008 stood at $23.357 trillion (cell I50).

The size of the U.S. debt position causes worry for some. Thirty years ago the United States had a sizable creditor position. However, as a result of trade deficits run throughout the 1980s and 1990s, the United States quickly turned from a net creditor to a net debtor. The changeover occurred in 1989. In the early 1990s, the size of this debt position was not too large compared to the size of the economy; however, by the late 1990s and early 2000s, the debt ballooned. In 2008, the U.S. debt position stood at 24.6 percent of GDP, which interestingly is down slightly from 24.9 percent of GDP in 2002 despite annual current account deficits since then. The reason for these changes is changes in the valuations of assets, as reflected in stock market prices, real estate price changes, and changes in the exchange rate.

Notice in the 2008 BEA IIP spreadsheet that the investment position is derived from the 2007 position in the following way. First, the current account deficit caused an addition to U.S. external debt of $505 billion (cell D22). Changes in asset prices both here and abroad further increased U.S. external debt by $720 billion (cell E22). This could be because either real estate prices abroad fell by more than in the United States or security prices abroad fell by more than in the United States. Next, there was another increase of $583 billion in external U.S. debt because of changes in exchange rates. In this case, an appreciation of the U.S. dollar increased the values of foreign-held U.S. assets and reduced the value of U.S.-held foreign assets. Finally, U.S. external debt decreased by $479 billion due to other factors that don't neatly fit into the first two categories. (See footnote 2 in the BEA IIP spreadsheet.)

For several reasons, the debt is not a cause for great worry, although it is growing quickly. First, despite its large numerical size, the U.S. international debt position is still less than 25 percent of its annual GDP. Although this is large enough to be worrisome, especially with a trend toward a future increase, it is not nearly as large as some other countries have experienced in the past. In Argentina and Brazil, international debt positions exceeded 60 percent of their GDPs. For some less-developed countries, international debt at times has exceeded 100 percent of their annual GDP.

A second important point is that much of our international obligations are denominated in our own home currency. This means that when international debts (principal + interest) are paid to foreigners, they will be paid in U.S. currency rather than foreign currency. This relieves the U.S. from the requirement to sell products abroad to acquire sufficient foreign currency to repay its debts. Many other countries that have experienced international debt crises have had great problems financing interest and principal repayments, especially when bad economic times make it difficult to maintain foreign sales.

Finally, it is worth noting that, despite the name applied to it, our international "debt" position does not correspond entirely to "debt" in the term's common usage. Recall that debt commonly refers to obligations that must be repaid with interest in the future. Although a sizable share of our outstanding obligations is in the form of debt, another component is in equities. That means some of the money "owed" to foreigners is simply the value of their shares of stock in U.S. companies. These equities either will or will not make money based on the success of the business, but they do not require a formal obligation for repayment in the future.

Key Takeaways

• The IIP measures the difference between the total value of domestic holdings of foreign assets and the value of foreign assets held in the domestic country. If the IIP is negative, we say the country is a debtor country. If the IIP is positive, we say the country is a creditor country.
• Asset holdings include both debt and equities. Debt involves an obligation to repay principal and interest, whereas equities involve either profit or loss to the foreign asset holder.
• The U.S. IIP stands at −$3.5 trillion in 2008, making the United States the largest debtor nation in the world.

Exercise

1. Jeopardy Questions. As in the popular television game show, you are given an answer to a question and you must respond with the question. For example, if the answer is "a tax on imports," then the correct question is "What is a tariff?"

• A complete record of a country's holdings of foreign assets and foreigners' holdings of domestic assets at a point in time.
• The term describing a country whose total domestic assets held abroad exceed total domestic liabilities held by foreigners.
• The term describing a country whose total domestic liabilities held by foreigners exceed total domestic assets held abroad.
• The name for the type of asset that establishes an obligation for the borrower to repay principal and interest in the future.
• The name for the type of asset that represents ownership shares in potentially productive assets.
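The net position and its year-to-year update described above are simple arithmetic. The short Python sketch below reproduces the 2008 figures quoted in the text (figures from the lesson; the cell references are omitted):

```python
# Net IIP = U.S.-owned assets abroad minus foreign-owned assets in the U.S.
# Figures are the 2008 market-value numbers quoted in the text, in trillions
# of U.S. dollars.
us_assets_abroad = 19.888
foreign_assets_in_us = 23.357
net_iip_2008 = us_assets_abroad - foreign_assets_in_us
print(round(net_iip_2008, 3))  # -3.469: negative, so the U.S. is a debtor country

# Year-over-year update, in billions of dollars: the position changes with
# the current account deficit, asset-price changes, exchange-rate changes,
# and a residual "other" term, as described in the text.
change = -505 - 720 - 583 + 479
print(change)  # a net worsening of the position by 1,329 billion
```

Summing the four flow terms in this way is exactly how the stock variable (the IIP) is updated from one year's balance sheet to the next.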
\section{Introduction} Neutrino oscillation experiments rely heavily on predictions from Monte Carlo simulations to infer the parameters of interest from their data. Among the most challenging components of the simulation chain for such experiments is typically a neutrino interaction generator, which predicts the rates of neutrino reactions in detector materials as well as the identities and four-momenta of reactions' outgoing particles. The NOvA experiment, which is a long-baseline neutrino oscillation experiment based at Fermilab in Batavia, IL, currently employs the GENIE generator \cite{genie} (version 2.12.2) to simulate neutrino interactions in its near detector (ND) at Fermilab and its far detector (FD) in Ash River, MN. Historically, generators used by experiments, such as GENIE, have made concessions to the difficult task of predicting interactions with the complex nuclear environment by adopting a ``factorization'' approach. In this scheme, numerous theoretical models for hard-scattering processes from hadrons or quarks at various momentum scales are composed with a relatively simple model for the nuclear dynamics. Though this picture has been sufficient for past work, increasing statistical precision in modern experiments has begun to reveal cracks in the foundation. In particular, dedicated measurements from neutrino scattering experiments (e.g., \cite{miniboone-qe-1, miniboone-qe-2, minerva-lowq3-1, minerva-lowq3-2, minerva-xverse-vars, t2k-incl, t2k-xverse-vars}) have cast considerable doubt on whether the non-interacting relativistic Fermi gas (RFG) nuclear model used by default in contemporary versions of GENIE is viable. Nontrivial uncertainty is associated with both the details of the nuclear model and the hard scattering processes themselves, which rely on approximations where explicit non-perturbative calculations using QCD are untenable. 
While NOvA is designed with the mitigation of these cross section uncertainties in mind, no present or planned experiment is completely insensitive to them. In the following sections we explore the uncertainties noted above, including adjustments to GENIE's model we find we are forced to make by external data, improvements to available theory, and our own ND data. We then discuss the impact they have on NOvA's $\nu_{\mu}$ disappearance and $\nu_{e}$ appearance measurements, given NOvA's design---the detectors are built to be as similar as possible in materials and technologies---and the calorimetric energy reconstruction principle used in the analyses. \section{GENIE 2.12.2 model and adjustments} GENIE 2.12.2's default model divides the total hard-scattering cross section into numerous processes for which independent models exist. The largest ones (and the only ones we will consider here) in charged-current (CC) interactions are, in order of increasing final-state hadronic mass $W$, quasielastic scattering (QE), resonant baryon production (RES), and nonresonant deep inelastic scattering (DIS). In current analyses, NOvA makes adjustments to the axial mass in the QE dipole form factor (setting $M_A = \unit[1.04]{GeV}$ rather than the default $\unit[0.99]{GeV}$) and nonresonant single pion production with $W < \unit[2.0]{GeV}$ (reducing it to 43\% of its nominal value) based on reanalysis of bubble chamber data that these parameters were originally tuned to \cite{bubble-qe, bubble-nonres-reana}. These make relatively small differences in the prediction. 
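For orientation, the QE dipole form factor referred to above is F_A(Q^2) = F_A(0) / (1 + Q^2/M_A^2)^2, with F_A(0) given by the axial coupling (approximately −1.267 in one common sign convention). The snippet below is only an illustration of the size of the M_A adjustment at a representative momentum transfer; it is not GENIE code, and the chosen Q^2 value is arbitrary:

```python
# Dipole axial form factor. FA0 is the axial coupling in one common sign
# convention; it cancels in the ratio below anyway.
def F_A(Q2, M_A, FA0=-1.267):
    return FA0 / (1.0 + Q2 / M_A**2) ** 2

Q2 = 0.5  # GeV^2, a representative (arbitrarily chosen) momentum transfer
ratio = F_A(Q2, M_A=1.04) / F_A(Q2, M_A=0.99)
print(round(ratio, 3))  # a change of several percent at this Q^2
```

The few-percent shift from moving M_A between 0.99 and 1.04 GeV is consistent with the statement that these adjustments make relatively small differences in the prediction.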
A much larger impact comes from the addition of a new hard-scattering process, that of two-nucleon ejection via a meson-exchange current (MEC) process.\cite{nustec-whitepaper} Although this reaction is well known from electron scattering, no contemporary model is able to describe the extant neutrino data\cite{minerva-lowq3-1, minerva-lowq3-2, minerva-xverse-vars, t2k-xverse-vars}, so NOvA has elected to enable the optional ``Empirical MEC'' model in GENIE\cite{katori-empMEC} and tune it to NOvA ND data in energy- and three-momentum transfer $(q_0, |\vec{q}|)$. Comparisons of the default and tuned predictions to NOvA ND data, as well as the uncertainties constructed from alternative tunes, and the outcome of a similar procedure performed by the MINERvA Collaboration on their own data\cite{minerva-mec-tune}, are shown in fig. \ref{fig:MEC tuning}. \begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0.25cm 0 0.25cm},clip]{figures/MEC-tuned+errors-FHC.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0.25cm 0 0.25cm},clip]{figures/MEC-tuned+errors-RHC.pdf} \end{subfigure} \caption{Distributions in visible hadronic energy (event energy deposited in the scintillator less that deposited by the muon) for $\nu_{\mu}$ CC candidates in the NOvA ND. Black points are data; the shaded histogram is the non-MEC prediction, while colored curves represent the shaded component summed with various predictions for MEC: solid black is our central value; dotted curves represent our uncertainties, arising from fits to our data with different assumptions about the non-MEC prediction; solid blue uses MINERvA's tuned MEC prediction referenced in the text.} \label{fig:MEC tuning} \end{figure} As noted above, the most precarious component of the current generator prediction is the nuclear model. NOvA alters the default GENIE 2.12.2 model in several ways to address shortcomings here.
First, there is widespread agreement that long-range interactions of the nuclear potential between nucleons affect QE reactions, significantly suppressing them at low $Q^2$ and mildly enhancing them at higher $Q^2$ relative to the RFG prediction. We adopt the random phase approximation (RPA)-based calculation of the Val\`{e}ncia group \cite{valencia-rpa} parameterized as reweights in $(q_0, |\vec{q}|)$ to the GENIE QE model by R. Gran \cite{gran-rpa} and the associated uncertainties. Measurements of delta resonant production in external data \cite{miniboone-res, minos-qe, minerva-pi-1, minerva-pi-2}, as well as our own ND data, also suggest the presence of nuclear dynamics resulting in a similar suppression at low $Q^2$ relative to the free nucleon prediction as the RPA effect, so we also apply the $Q^2$ parameterization of the RPA effect to RES as a placeholder for whatever the true nuclear effect may be. We take the unmodified RES prediction as an uncertainty variation. \section{Impact on neutrino oscillation measurements} As discussed elsewhere in these Proceedings \cite{erica-nufact}, NOvA uses a calorimetric technique for both $\nu_{\mu}$ and $\nu_e$ energy reconstruction in which neutrino energy is estimated using a function of both lepton and hadronic system energies. Uncertainties in cross section modeling can impact the fidelity of these estimators in a number of different ways: for instance, shifting the balance of energy between the better-resolved leptonic and the more-poorly-resolved hadronic systems in CC events; changing the predicted mean energy that is unseen by the detector (due either to the assumed nuclear binding potential or hadronic energy that escapes as neutrons) and must be added back by the estimator; or adjusting the expected frequency of background processes that have different energy responses than the signal. 
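The calorimetric estimator just described can be caricatured in a few lines. The functional form and coefficients below are invented placeholders for illustration, not NOvA's actual calibration functions:

```python
# Toy calorimetric neutrino-energy estimator: E_nu is reconstructed as a
# function of the muon energy and the visible hadronic energy. The
# coefficients here are invented placeholders, not NOvA's calibration.
def reco_energy(E_mu, E_had_vis, a=1.0, b=1.2, c=0.1):
    # b scales visible hadronic energy up to account for unseen energy
    # (escaping neutrons, binding energy); c adds a small nonlinearity.
    return a * E_mu + b * E_had_vis + c * E_had_vis**2

print(reco_energy(1.0, 0.5))  # 1.0 + 0.6 + 0.025 = 1.625 (GeV)
```

A cross-section shift that moves energy from the leptonic to the hadronic side, or changes the mean unseen energy, changes the coefficients such an estimator should have, which is precisely why these uncertainties feed into the energy scale.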
To mitigate the impact of these uncertainties on the prediction at the FD, NOvA relies on measurements at the ND, which are propagated to predictions for the FD via an ``extrapolation'' procedure. The latter supposes that discrepancies observed between ND simulation and data distributions can be accounted for in the FD prediction by modifying the ND true event rate in bins of true energy, which can then be multiplied by the simulated ratio of the geometric and oscillation effects between the two detectors to yield the FD true rate. This is conveniently expressed as a matrix equation over the energy bins: \begin{equation} \vec{N}_{FD} = \vec{N}_{ND}\ \mathbf{R}\ \mathbf{M}_{ND}\ \mathbf{F}\ \mathbf{P}_{osc}\ \mathbf{M}_{FD}^{-1} \label{eq:fovern} \end{equation} Here, the $\vec{N}_{\alpha}$ are the predicted event yields in bins of reconstructed energy for detector $\alpha$; the diagonal matrix $\mathbf{R}$ contains the bin-by-bin ratios of the observed and predicted ND yields, $R_{ii} = N^{ND}_{\mathrm{data},i} /N^{ND}_{\mathrm{MC},i}$; the $\mathbf{M}_{\alpha}$ are so-called ``migration'' matrices between reconstructed and true energies for detector $\alpha$, from simulation; the diagonal $\mathbf{F}$ is denoted the ``far over near ratio,'' $F/N$, which encodes the predicted effect of the neutrino beam dispersion and the difference in acceptance between the detectors; and the diagonal $\mathbf{P}_{osc}$ applies oscillation probabilities for given oscillation parameters. This approach differs from the strategy sometimes employed by other oscillation experiments in which parameters in the model are fitted to the ND data and propagated to the FD prediction via their fitted covariance matrix. 
While the NOvA strategy is less general (it is only effective when the ND and FD share the same underlying cross section uncertainties, like in NOvA, for instance), it is guaranteed to reproduce the observed ND distribution, even if unknown effects are present in the data that the model cannot account for. The extent to which the $F/N$ method enables calculation of the effect of changes in the cross section model on the FD prediction using ND data can be illustrated with test cases. In such a test, the ND data is replaced by a modified prediction using a designated cross-section change during the calculation of $\mathbf{R}$, resulting in a modified $\mathbf{R}'$. The $\vec{N}_{FD}'$ obtained from applying eq. \ref{eq:fovern} to $\mathbf{R}'$ can then be compared to a different $\vec{N}_{FD}''$ obtained by directly applying the modified cross section model to the FD prediction in simulation. If $\vec{N}_{FD}'$ and $\vec{N}_{FD}''$ coincide, then the extrapolation procedure can perfectly account for the effect of the given cross-section shift using the ND data. If they differ, on the other hand, the residual between $\vec{N}_{FD}''$ (direct FD prediction under shifted model) and $\vec{N}_{FD}'$ (extrapolation of shifted prediction with nominal model) illustrates the fraction of the given shift that is not ``canceled'' (i.e., is left uncorrelated between the two predictions) by the extrapolation procedure. Fig. \ref{fig:extrap MEC unc} shows the comparison resulting from shifts due to two important uncertainties in the MEC model noted above; the extrapolation procedure reduces the original uncertainties of up to 10\% to a few percentage points. 
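To make eq. (1) concrete, here is a three-bin toy version of the extrapolation chain. Every yield and matrix entry below is invented for illustration; none of them are NOvA inputs:

```python
import numpy as np

# Toy three-bin version of eq. (1); all numbers are invented placeholders.
N_nd_mc   = np.array([100.0, 200.0, 150.0])  # simulated ND yields (reco bins)
N_nd_data = np.array([110.0, 190.0, 160.0])  # "observed" ND yields
R = np.diag(N_nd_data / N_nd_mc)             # bin-by-bin data/MC ratios

# Hypothetical reco/true migration matrices for each detector
M_nd = np.array([[0.80, 0.20, 0.00],
                 [0.10, 0.80, 0.10],
                 [0.00, 0.20, 0.80]])
M_fd = np.array([[0.85, 0.15, 0.00],
                 [0.10, 0.80, 0.10],
                 [0.00, 0.15, 0.85]])

F     = np.diag([0.020, 0.025, 0.022])  # hypothetical far/near ratio
P_osc = np.diag([0.30, 0.10, 0.50])     # survival probabilities, fixed params

# Eq. (1): the ND row vector is multiplied through the chain of matrices.
N_fd = N_nd_mc @ R @ M_nd @ F @ P_osc @ np.linalg.inv(M_fd)
print(N_fd)
```

With R, F, and P_osc set to the identity and identical migration matrices in both detectors, the chain returns the ND yields unchanged, which is a convenient sanity check on an implementation.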
\begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0 0 0.3cm},clip]{figures/MEC-extrap-EnuShape-FHC.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0 0 0.3cm},clip]{figures/MEC-extrap-q0q3unc-FHC.pdf} \end{subfigure} \caption{Ratio of predicted FD CC $\nu_{\mu}$ yields $\vec{N}_{FD}'$ (orange) and $\vec{N}_{FD}''$ (purple) for $\pm 1\sigma$ shifts to the nominal in two important uncertainties in the MEC model: its shape as a function of $E_{\nu}$, left, and its shape in $(q_{0},|\vec{q}|)$, right. (See text for the definitions of the $\vec{N}$.) The shaded residual difference between the two predictions corresponds to the uncertainty \textit{not} accounted for by extrapolation.} \label{fig:extrap MEC unc} \end{figure} In fits to the FD data, the extrapolation procedure is used first to correct the nominal FD prediction. Known uncertainties are then accounted for using nuisance parameters constructed from bin-by-bin splines fitted to the difference between shifted predictions $\vec{N}_{FD}''$ and the corrected nominal prediction. The reduction of the cross section impact on uncertainties in the $\nu_{e}$ signal and background predictions due to extrapolation is illustrated in fig. \ref{fig:extrap xsec nue}. Even after extrapolation is applied, however, neutrino cross section uncertainties retain significant influence on the results, together accounting for 35\%, 44\%, and 53\% of the total systematic error budgets for NOvA's $\sin^2(\theta_{23})$, $\Delta m_{32}^2$, and $\delta_{CP}$ measurements, respectively. We anticipate that future continued improvements to cross section modeling, particularly in regard to the nuclear dynamics in QE and RES interactions, the detailed nature of 2p2h, and antineutrino reactions, will be essential as the statistical precision of these measurements improves and systematics begin to limit them. 
\begin{figure}[htb] \centering \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0 0 0.3cm},clip]{figures/nue_syst_xsec_red_fhc_sig.pdf} \end{subfigure} \begin{subfigure}{0.45\textwidth} \includegraphics[width=\textwidth,trim={0 0 0 0.3cm},clip]{figures/nue_syst_xsec_red_fhc_bkg.pdf} \end{subfigure} \caption{Effect of selected cross section uncertainties on the signal (left) and background (right) predictions for the $\nu_e$ appearance measurement, before (blue) and after (red) extrapolation.} \label{fig:extrap xsec nue} \end{figure} \section{Conclusions} NOvA relies on strong internal constraints on cross section uncertainties for its oscillation program derived from the functionally identical detector paradigm and a calorimetric neutrino energy reconstruction technique. In addition, a comprehensive program is underway to ensure that all relevant cross section issues are considered. After the constraint from the ND is applied, cross section uncertainties currently comprise 30-50\% of the systematic budget on the most important oscillation parameter measurements. We look forward to continued development of models and associated systematic treatments in the community, new measurements of cross sections to help constrain them, and ultimately their integration into improved oscillation parameter measurements in NOvA.
Honorary Captain Hawa Singh (16 December 1937 in Umarwas, Haryana – 14 August 2000 in Bhiwani, Haryana) was an Indian Heavyweight boxer who dominated Indian and Asian amateur boxing in his weight class for a decade. He won the Asian Games gold medal in the Heavyweight category in consecutive editions of the games, the 1966 Asiad and the 1970 Asiad, both held in Bangkok, Thailand - a feat unmatched by any Indian boxer to date (August 2008). He won the National Championships in the Heavyweight category a record 11 consecutive times, from 1961 to 1972.

Biography

Hawa Singh Sheoran was born into a Jat family, in what is now Haryana, in 1937. He enrolled in the Indian Army in 1956 and became the champion of the Western Command in 1960 by defeating the defending champion, Mohabbat Singh. He won the National Championships for 11 straight years from 1961 to 1972, winning gold medals at the 1966 Asian Games and the 1970 Asian Games in Bangkok. He was awarded the Arjuna Award, India's highest sporting award, in 1966. A biopic on Hawa Singh will be produced by Kamlesh Singh Kushwaha and Sam Fernandes; the film is written by Junaid Wasi and will be directed by Prakash Nambiar.

After retiring, he took up coaching and was the co-founder of the Bhiwani Boxing Club, which produced a slew of Indian boxers in the 1990s and 2000s, including Olympic medallist Vijender Singh. He was awarded the Dronacharya Award in 1999. He died suddenly in Bhiwani on 14 August 2000 – 15 days before he was to have received the Dronacharya Award. On 4 February 2020, Salman Khan announced a biopic featuring Sooraj Pancholi as Hawa Singh.
Crimes Against Officers

The most common crime against an officer is resisting arrest under Penal Code section 148(a)(1). Penal Code section 148(a)(1) is a misdemeanor. Oftentimes, officers will arrest you for resisting arrest when you are not following their instructions, challenging their authority, or physically resisting their demands. District attorneys will often ask the officers for their opinion about how the case should resolve and whether the client should serve jail time. The problem with officers having charging and sentencing discretion is that similarly situated clients may be treated differently based on which officer they are dealing with.

Penal Code section 148(a)(1): The elements for resisting arrest include the following: 1) the defendant willfully resisted, delayed, or obstructed a peace officer, public officer, or an emergency medical technician (EMT); 2) at the time of the offense, the peace officer, public officer, or EMT was engaged in or attempting to engage in the performance of his or her duties; and 3) the defendant who willfully resisted, delayed, or obstructed knew or reasonably should have known that the victim was a peace officer, public officer, or EMT and that the victim was engaged in the performance of his or her duties.
The prosecution must prove every element beyond a reasonable doubt for you to be found guilty of Penal Code section 148(a)(1). Resisting arrest charges are usually filed when someone is being arrested for a different crime, but resisting arrest cases can be charged on their own as stand-alone offenses. The District Attorney's office may also charge you under Penal Code section 69. Although similar to Penal Code section 148(a)(1), Penal Code section 69 is usually more serious. Penal Code section 69 may be charged as a misdemeanor or a felony (also known as a "wobbler"). Penal Code section 69: The elements for this offense state that any person who attempts, by means of any threat or violence, to deter or prevent an executive officer from performing any duty imposed upon the officer by law, or who knowingly resists the officer by using force or violence while the officer is in performance of his or her duty, is punishable by a fine not exceeding ten thousand dollars ($10,000), or by imprisonment pursuant to subdivision (h) of Section 1170, or in a county jail not exceeding one year, or by both such fine and imprisonment. (b) The fact that a person takes a photograph or makes an audio or video recording of an executive officer while the officer is in a public place, or the person taking the photograph or making the recording is in a place he or she has the right to be, does not constitute, in and of itself, a violation of Penal Code section 69. The District Attorney's office will usually charge Penal Code section 243(b) when they believe that the defendant committed a battery against the officer. Penal Code section 243(b) is a misdemeanor. 
Penal Code section 243(b): The elements for this offense include the following: 1) the defendant committed a battery upon the person of a peace officer, firefighter, search and rescue member or any other person engaged in the performance of his or her duties; 2) at the time of the battery the peace officer, firefighter or any other person was engaged in the performance of his or her duties; and 3) the defendant who used the force or violence knew or reasonably should have known that the other person was: (a) a peace officer, firefighter or other, or (b) engaged in the performance of his or her duties.

A district attorney will charge defendants with a felony under Penal Code section 243(c)(1) when the officer suffers an injury. It is not required that the injury be serious. A felony "strike" may be charged when the officer suffers great bodily injury.

Crimes against officers require experienced attorneys that know how to properly investigate and argue the facts to a jury. These crimes are difficult because the case usually comes down to the officer's word versus the client's word. Unfortunately, some officers will lie in their police reports and on the stand to justify an arrest or the use of excessive force. It is important to hold officers accountable for their actions and to aggressively combat these charges. The attorneys at Hepburn, Hernandez and Jung Trial Attorneys have handled hundreds of cases involving crimes against officers and have a track record of winning these difficult cases. Please contact HHJ Trial Attorneys today if you are charged with a crime against an officer.
6435 Caminito Blythefield, Suite D, La Jolla, CA 92037. 2121 Palomar Airport Road, Suite 300, Carlsbad, CA 92011. Copyright © 2021 Hepburn - Hernandez - Jung. All Rights Reserved. *Attorney advertising: Past results do not guarantee any similar or certain outcomes.
\section{Introduction} 3D immersive experiences can benefit many application scenarios. For example, in an online store one would often like to view products interactively in 3D rather than from discrete view angles. Likewise, in map applications it is desirable to explore the vicinity of street-view-like images beyond the position at which the photograph was taken. This is often not possible because either only 2D imagery exists, or because storing and rendering of full 3D information does not scale. To overcome this limitation we study the problem of interactive view synthesis with 6-DoF view control, taking only a single image as input. We propose a method that can produce a continuous stream of novel views under fine-grained (e.g., 1$^\circ$ step-size) camera control (see \figref{fig:teaser}). \footnotetext[2]{https://ait.ethz.ch/projects/2019/cont-view-synth/} Producing a continuous stream of novel views in \emph{real-time} is a challenging task. To synthesize high-quality images one needs to reason about the underlying geometry. However, with only a monocular image as input the task of 3D reconstruction is severely ill-posed. Traditional image-based rendering techniques do not apply to the real-time monocular setting since they rely on multiple input views and can also be computationally expensive. Recent work has demonstrated the potential of learning to predict novel views from monocular inputs by leveraging a training set of viewpoint pairs \cite{TDB16a,zhou2016view,tvsn_cvpr2017,dosovitskiy2017learning}. This is achieved either by directly synthesizing the pixels in the target view~\cite{TDB16a,dosovitskiy2017learning} or by predicting flow maps to warp the input pixels to the output~\cite{zhou2016view,sun2018multiview}. However, we experimentally show that such approaches are prone to over-fitting to the training views and do not generalize well to free-form non-training viewpoints.
With such methods, if the camera is moved continuously in small increments, the image quality quickly degrades. One possible solution is to incorporate much denser training pairs, but this is not practical for many real applications. Explicit integration of geometry representations such as meshes~\cite{kato2018renderer,DBLP:journals/corr/abs-1901-05567} or voxel grids~\cite{wu20153d,choy20163d,girdhar2016learning,tulsiani2017multi} could be leveraged for view synthesis. However, such representations would limit applicability to settings where the camera orbits a single object. In this paper, we propose a novel learning pipeline that determines the output pixels directly from the source color but forces the network to implicitly reason about the underlying geometry. This is achieved by injecting geometric transformations, including perspective projection, 3D rotations and translations, into an end-to-end trainable network. The latent 3D geometry representation is compact and memory efficient, is meaningful under explicit 3D transformation and can be used to produce geometrically accurate views for both single objects and natural scenes. More specifically, we propose a geometry-aware neural architecture consisting of a 3D transforming autoencoder (TAE) network~\cite{Hinton:2011:TA:2029556.2029562} and subsequent depth-guided appearance warping. In contrast to existing work, which directly concatenates viewpoint parameters with latent codes, we first encode the image into a latent representation which is explicitly rotated and translated in Euclidean space. We then decode the transformed latent code, which is assumed to implicitly represent the 3D geometry, into a depth map in the target view. From the depth map we compute dense correspondences between pixels in the source and target view via perspective projection, and subsequently the final output image via pixel warping. All operations involved are differentiable, allowing for end-to-end training.
Detailed experiments are performed on synthetic objects \cite{chang2015shapenet} and natural images \cite{Geiger2012CVPR}. We assess the image quality, granularity, precision of continuous viewpoint control and implicit recovery of scene geometry qualitatively and quantitatively. Our experiments demonstrate that both components, the TAE and depth-guided warping, drastically improve the robustness and accuracy for continuous view synthesis. In conclusion, our main contributions are: \begin{compactitem} \item We propose the task of continuous view synthesis from monocular inputs under fine-grained view control. \item This goal is achieved via a proposed novel architecture that integrates a transforming encoder-decoder network and depth-guided image mapping. \item Thorough experiments are conducted, demonstrating the efficacy of our method compared to prior art. \end{compactitem} \section{Related Work} \vspace{-0.2cm} \myparagraph{View synthesis with multi-view images.} The task of synthesizing new views given a sequence of images as input has been studied intensely in both the vision and graphics community. Strategies can be classified into those that explicitly compute a 3D representation of the scene~\cite{rematas2017novel, kopf2013image,penner2017soft,debevec1996modeling,he1998layered,seitz2006comparison,zitnick2004high,chaurasia2013depth,ladicky2014pulling}, and those in which the 3D geometry is handled implicitly~\cite{fitzgibbon2005image,matusik2002image,mcmillan1995plenoptic}. Others have deployed full 4D light fields \cite{gortler1996lumigraph, levoy1996light}, albeit at the cost of complex hardware setups and increased computational cost. Recently, deep learning techniques have been applied in similar settings to fill holes and eliminate artifacts caused by the sampling gap, dis-occlusions, and inaccurate 3D reconstructions \cite{flynn2016deepstereo,HPPFDB18,zhou2018stereo,Wang2018-ox,Srinivasan2019-hx,Flynn2019-lk,Mildenhall2019-xk}. 
While improving results over traditional methods, such approaches rely on multi-view input and are hence limited to the same setting. \myparagraph{View synthesis with monocular input.} Recent work leverages deep neural networks to learn a monocular image-to-image mapping between source and target view from data \cite{kulkarni2015deep,TDB16a,dosovitskiy2017learning,zhou2016view,tvsn_cvpr2017,sun2018multiview,rnn_view_synthesis}. One line of work \cite{kulkarni2015deep,TDB16a,dosovitskiy2017learning,olszewski2019tbn} directly generates image pixels. Given the difficulty of the task, direct image-to-image translation approaches struggle with the preservation of local details and often produce blurry images. Zhou et al.~\cite{zhou2016view} estimate flow maps in order to warp source view pixels to their location in the output. Others further refine the results by image completion \cite{tvsn_cvpr2017} or by fusing multiple views \cite{sun2018multiview}. Typically, the desired view is controlled by concatenating latent codes with a flattened viewpoint transform. However, the exact mapping from viewpoint parameters to images is difficult to learn due to sparse training pairs from the continuous viewpoint space. We show experimentally that this leads to a snapping to training views, with image quality quickly degrading under continuous view control. Recent works demonstrate the potential for fine-grained view synthesis, but either are limited to single instances of objects \cite{sitzmann2019deepvoxels} or require additional supervision in the form of depth maps \cite{zhu,liu2018geometry}, surface normals \cite{liu2018geometry} and even light field images \cite{srinivasan2017learning}, which are cumbersome to acquire in real settings. In contrast, our method consists of a fully differentiable network, which is trained with image pairs and associated transformations as sole supervision.
\myparagraph{3D from single image.} Reasoning about the 3D shape can serve as an implicit step of free-form view synthesis. Given the severely under-constrained case of recovering 3D shapes from a single image, recent works have deployed neural networks for this task. They can be categorized by their output representation into mesh \cite{kato2018renderer,DBLP:journals/corr/abs-1901-05567}, point cloud \cite{fan2017point,lin2018learning, insafutdinov18pointclouds}, voxel~\cite{wu20153d, choy20163d,girdhar2016learning,tulsiani2017multi,riegler2017octnet}, or depth map based~\cite{eigen2014depth, zhou2017unsupervised, Tulsiani_2018_ECCV}. Mesh-based approaches are still not accurate enough due to the indirect learning process. Point clouds are often sparse and cannot be directly leveraged to project dense color information into the output image, and voxel-based methods are limited in resolution and in the number and type of objects due to memory constraints. Depth maps become sparse and incomplete when projected into other views due to the sampling gap and occlusions. Layered depth map representations \cite{Tulsiani_2018_ECCV} have been used to alleviate this problem. However, a large number of layers would be necessary, which poses significant hurdles in terms of scalability and runtime efficiency. In contrast to explicit models, our latent 3D geometry representation is compact and memory efficient, is meaningful under explicit 3D transformation and can be used to render dense images. \myparagraph{Deep generative models.} View synthesis can also be seen as an image generation process, which is related to the field of deep generative modelling of images \cite{kingma2013auto, goodfellow2014generative}. Recent models \cite{brock2018large,karras2018style} are able to generate high-fidelity images with diversity in many aspects including viewpoint, shape and appearance, but offer little to no exact control over the underlying parameters.
Disentangling latent factors has been studied in \cite{chen2016infogan, higgins2017beta} to provide control over image attributes. In particular, recent work \cite{zhu2018visual,Nguyen-Phuoc2019-fv} demonstrates inspiring results of viewpoint disentanglement by reasoning about the geometry. Although such methods can be used for view synthesis, the generated views lack consistency and moreover one cannot control which object to synthesize. \section{Method} \figureNetwork Our main contribution is a novel geometry aware network design, shown in \figref{fig:pipeline}, that consists of four components: 3D transforming auto-encoder (TAE), self-supervised depth map prediction, depth map projection and appearance warping. The source view is first encoded into a latent code ($ z = E_{\theta_e}(I_s)$). This latent code $z$ is encouraged by our learning scheme to be meaningful in 3D metric space. After encoding we apply the desired transformation between the source and target to the latent code. The transformed code ($z_T = T_{s \rightarrow t}(z)$) is decoded by a neural network to predict a depth map $D_t$ as observed from the target viewpoint. $D_t$ is projected back into the source view based on the known camera intrinsics $K$ and extrinsics $T_{s \rightarrow t}$, yielding dense correspondences between the target and source views, encoded as dense backward flow map $C_{t \rightarrow s}$. This flow map is used to warp the source view pixel-by-pixel into the target view. Note that attaining backward flow and hence predicting depth maps in the \emph{target} view is a crucial difference to prior work. Forward mapping of pixel values into the target view $I_t$ would incur discretization artifacts when moving between ray and pixel-space, visible as banding after re-projection of the (source view) depth map. The whole network is trained end-to-end with a simple per-pixel reconstruction loss as sole guidance. 
Overall, we want to learn a mapping $M:X\rightarrow Y$, which in our case can be decomposed as: \begin{align} M(I_s) = B(P_{t \rightarrow s}(D_{\theta_d}(T_{s \rightarrow t}(E_{\theta_e}(I_s)))), I_s) = \hat{I}_t, \label{equ:mapping} \end{align} where $B$ is the bi-linear sampling function, $P_{t\rightarrow s}$ is the perspective projection, and $E_{\theta_e},D_{\theta_d}$ are the encoder and decoder networks respectively. This decomposition is an important contribution of our work. By asking the network to predict a depth map $D_t$ in the target view, we implicitly encourage the TAE encoder $E_{\theta_e}$ to produce position predictions for features and the decoder $D_{\theta_d}$ learns to generate features at corresponding positions by rendering the transformed representation from the specified view-angle. \subsection{Transforming Auto-encoder} We take inspiration from recent work \cite{sabour2017dynamic,hinton2018matrix,Worrall2017InterpretableTW,Rhodin} which itself builds upon earlier work by Hinton et al.~\cite{Hinton:2011:TA:2029556.2029562}, that uses encoder-decoder architectures to learn representations that are transformation equivariant, establishing a direct correspondence between image and feature spaces. We leverage such a latent space to model the relationship between viewpoint and implicit 3D shape. To this end, we represent the latent code $z_s$ as vectorized set of points $z_s\in \mathbb{R}^{n\times 3}$, where $n$ is a hyper-parameter. This representation is then multiplied with the ground-truth transformation $T_{s\rightarrow t} = [R|t]_{s\rightarrow t}$ describing the viewpoint change between source view $I_s$ and target view $I_t$ to attain the rotated code $z_t$: { \begin{align} z_t = [R|t]_{s\rightarrow t} \cdot \Tilde{z_s}, \label{equ:transformation} \end{align} } where $\Tilde z_s$ is the homogeneous representation of $z_s$. 
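Concretely, the latent transformation of Eq.~\ref{equ:transformation} amounts to a single rigid motion applied to all $n$ latent points. The following NumPy sketch illustrates this step only; the array shapes and the row-vector convention are our own assumptions, not the paper's implementation:

```python
import numpy as np

def transform_latent(z_s, R, t):
    """Apply the rigid motion [R|t] of Eq. (2) to a latent point set.

    z_s: (n, 3) array of latent 3D points (hypothetical shape),
    R:   (3, 3) rotation matrix, t: (3,) translation.
    Returns z_t of shape (n, 3), the code that is decoded in the target view.
    """
    # Row-vector form of z_t = [R|t] * z_s~ (homogeneous coordinates).
    return z_s @ R.T + t
```

Because the code is transformed with the ground-truth relative pose during training, the encoder is pushed to place latent points at geometrically meaningful 3D locations.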
In this way the network is trained to encode position predictions for features which can then be decoded into images. All functions in the TAE module including encoding, vector reshaping, matrix multiplication and decoding are differentiable and hence amenable to training via backpropagation. \\ \vspace{-0.2cm} \subsection{Depth Guided Appearance Mapping} We decode $z_t$ into 3D shape in the target view, represented as a depth image $D_t$. From $D_t$ we compute the dense correspondence field $C_{t\rightarrow s}$ deterministically via perspective projection $P_{t \rightarrow s}$. The dense correspondences are then used to warp the pixels of the texture (source view) $I_s$ into the target view $\hat{I}_t$. This allows the network to warp the source view into the target view and makes the prediction of target view invariant to the texture of the input, resulting in sharp and detail-preserving outputs. \myparagraph{Establishing correspondences.} The per-pixel correspondences $C_{t\rightarrow s}$ are attained from the depth image $D_t$ in the target view by conversion from the depth map to 3D coordinates $[X,Y,Z]$ and perspective projection: {\footnotesize \begin{align} [X,Y,Z]^T = D_t(x_t,y_t)K^{-1}[x_t, y_t,1]^T \quad \\ \text{and} \quad [x_s,y_s,1]^T \sim KT_{t\rightarrow s}[X,Y,Z,1]^T \quad . \label{equ:projection} \end{align} } where each pixel $(x_t,y_t)$ encodes the corresponding pixel position in the source view $(x_s,y_s)$. Furthermore, $K$ is the camera intrinsic matrix describing normalized focal length along both axes $f_x, f_y$ and image center $c_x, c_y$. Note that only the focal length ratio $f_x/f_y$ as well as image center affect view synthesis, while the absolute scale of the focal length is only important to predict geometry at correct scale. \\ \myparagraph{Warping with correspondences.} With the dense correspondences obtained, we are now able to warp the source view to the target view. This operation propagates texture and local details. 
Since the corresponding pixel positions that are derived from Eq.~\ref{equ:projection} are non-integer, this is done via differentiable bilinear sampling as proposed in \cite{jaderberg2015spatial}: {\footnotesize \begin{align} \begin{split} I_{t}(x_t,y_t) = \sum_{x_s} \sum_{y_s} I_s(x_s,y_s) \text{max}(0, 1-| x_s - C_{x}(x_t,y_t)|)\\ \text{max}(0, 1-|y_s - C_{y}(x_t,y_t)|) . \end{split} \end{align} }% The use of backward flow $C_{t \rightarrow s}$, computed from the predicted depth map $D_t$, makes the approach amenable to gradient based optimization since the gradient of the per-pixel reconstruction loss provides meaningful information to correct erroneous correspondences. The gradients also flow back to provide useful information to the TAE network owing to the fact that the correspondences are computed deterministically from the predicted depth maps. While bearing similarity to \cite{zhou2016view}, we introduce the intermediate step of predicting depth, instead of predicting the correspondences directly. This enforces the network to obey geometric constraints, resolving ambiguous correspondences. \subsection{Training}\label{sec:training} All steps in our network, namely 3D transforming auto-encoder (TAE), self-supervised depth map prediction, depth map projection and appearance warping, are differentiable which enables end-to-end training. Among all modules, only the TAE module contains trainable parameters ($\theta_e,\theta_d$). To train the network only pairs of source and target views and their transformation are required. The network weights are optimized via minimization of the $L_1$ loss between the predicted target view $\hat{I_t}$ and the ground truth $I_t$. {\footnotesize \begin{align} \mathcal{L}_{recon} = \norm{I_t- \hat{I}_{t}}_1 \end{align} } Minimizing this reconstruction loss, the network learns to produce realistic novel views, to predict the necessary flow and depth maps and learn to form a geometrical latent space. 
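The geometric core of the method, back-projecting the target-view depth map (Eq.~\ref{equ:projection}) into source-view coordinates and bilinearly sampling the source image, can be sketched in NumPy as follows. This is a naive, loop-based illustration under assumed array shapes, not the paper's batched, GPU-based implementation:

```python
import numpy as np

def backward_flow(D_t, K, T_ts):
    """Per-pixel source coordinates C_{t->s} from a target-view depth map (Eq. 3).

    D_t:  (H, W) depth map predicted in the target view.
    K:    (3, 3) camera intrinsics, T_ts: (4, 4) target-to-source extrinsics.
    """
    H, W = D_t.shape
    K_inv = np.linalg.inv(K)
    flow = np.zeros((H, W, 2))
    for y in range(H):
        for x in range(W):
            p = D_t[y, x] * (K_inv @ [x, y, 1.0])     # back-project pixel to 3D
            q = K @ (T_ts[:3, :3] @ p + T_ts[:3, 3])  # map into the source frame
            flow[y, x] = q[:2] / q[2]                 # perspective divide
    return flow

def warp(I_s, flow):
    """Bilinear sampling of the source view at the flowed coordinates (Eq. 4)."""
    H, W = flow.shape[:2]
    out = np.zeros((H, W) + I_s.shape[2:])
    for y in range(H):
        for x in range(W):
            xs, ys = flow[y, x]
            x0, y0 = int(np.floor(xs)), int(np.floor(ys))
            for dy in (0, 1):
                for dx in (0, 1):
                    xi, yi = x0 + dx, y0 + dy
                    if 0 <= xi < I_s.shape[1] and 0 <= yi < I_s.shape[0]:
                        w = max(0, 1 - abs(xs - xi)) * max(0, 1 - abs(ys - yi))
                        out[y, x] += w * I_s[yi, xi]
    return out
```

In the actual network these operations would run on GPU tensors, so that gradients of the reconstruction loss flow back through the sampling weights into the depth prediction.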
\section{Experiments} \figureShapeNet We now evaluate our method quantitatively and qualitatively. We are especially interested in assessing image quality, granularity and precision of fine-grained viewpoint control. First, we conduct detailed experiments on synthetic objects, where ground truth for continuous viewpoints is easy to obtain, to numerically assess the reconstruction quality. Notably, we vary the viewpoints in much smaller step-sizes than what is observed in the training data. Second, to evaluate generalizability, we test our system on natural city scenes. In this setting, given an image input, we specify the desired ground-truth camera trajectories along which the system generates novel views. Then we run an existing visual odometry system on these synthesized continuous views to recover the camera trajectory. By comparing the recovered trajectory with the ground truth, we can evaluate the geometrical properties of the synthesized images with respect to granularity and continuous view control. Finally, to better understand the mechanism of our proposed network, we further conduct studies on its two key components, namely depth-guided texture mapping and the transforming auto-encoder. We evaluate the intermediate depth and flow, and qualitatively verify the meaningfulness of the latent space of the TAE. \subsection{Datasets} \vspace{-0.1cm} \noindent We conduct our experiments on two challenging datasets: synthetic objects~\cite{chang2015shapenet} and real natural scenes~\cite{Geiger2012CVPR}. \myparagraph{ShapeNet \cite{chang2015shapenet}} is a large collection of 3D synthetic objects from various categories. Similar to \cite{zhou2016view,tvsn_cvpr2017,sun2018multiview} we choose \textbf{car} and \textbf{chair} to evaluate our method. We use the same train/test split as proposed in \cite{zhou2016view}. For training we render each model from 54 viewpoints with different azimuth and elevation.
The azimuth goes from $0^{\circ}$ to $360^{\circ}$ with a step size of $20^{\circ}$ and the elevation from $0^{\circ}$ to $30^{\circ}$ with a step size of $10^{\circ}$. Each training pair consists of two views of the same instance, with a difference in azimuth within $\pm 40^{\circ}$. \myparagraph{KITTI \cite{Geiger2012CVPR}} is a standard dataset for autonomous driving, containing complex city scenes in uncontrolled environments. We conduct experiments on the KITTI odometry subset, which contains image sequences as well as the global camera poses of each frame. In total there are 18560 images for training and 4641 images for testing. We construct training pairs by randomly selecting a target view among the 10 nearest frames of the source view. The relative transformation is obtained from the global camera poses. \subsection{Metrics} \vspace{-0.1cm} \noindent In our evaluations we report the following metrics: \myparagraph{Mean Absolute Error $L_1$} is used to measure per-pixel value differences between ground truth and the predictions. \myparagraph{Structural SIMilarity (SSIM) Index}\cite{wang2004image} has values in [-1, 1] and measures the structural similarity between the synthesized image and the ground truth. We report SSIM in addition to the $L_1$ loss since it i) gives an indication of perceptual image quality and ii) serves as a further metric that is not directly optimized during training. \myparagraph{Percentage of correctness under threshold $\delta$ (Acc)}. The predicted flow/depth $\hat{y_i}$ at pixel $i$, given ground truth $y_i$, is regarded as correct if $\max(\frac{y_i}{\hat{y_i}},\frac{\hat{y_i}}{y_i}) < \delta$ is satisfied. We count the portion of correctly predicted pixels. Here $\delta=1.05$.
\myparagraph{Rotation error and translation error} are defined as: {\footnotesize \begin{align} &RE = \arccos\Big(\frac{\mathbf{Tr}(\Tilde{R} \cdot R^T )-1}{2}\Big), \quad TE = \arccos\Big( \frac{\Tilde{t} \cdot t}{ \norm{\Tilde{t}}_2 \cdot \norm{t}_2 }\Big) \end{align} }% where $\mathbf{Tr}$ represents the trace of the matrix. \subsection{Comparison with other methods} \vspace{-0.1cm} We compare with several representative state-of-the-art learning-based view synthesis methods. \myparagraph{Tatarchenko et al.}~\cite{TDB16a} treat view synthesis as an image-to-image translation task and generate pixels directly. In their framework the viewpoint is directly concatenated with the latent code. \myparagraph{Zhou et al.}~\cite{zhou2016view} generate flow instead of pixels. The view information is also directly concatenated. \myparagraph{Sun et al.}~\cite{sun2018multiview} combine both pixel generation \cite{TDB16a} and image warping \cite{zhou2016view}. The original implementations of Zhou et al.~\cite{zhou2016view} and Sun et al.~\cite{sun2018multiview} do not support continuous viewpoint input for objects. To allow for continuous input for comparison, we replace their encoded discrete one-hot viewpoint representation with cosine and sine values of the view angles. The same encoder and decoder are used for all comparisons. \figureControl \subsection{ShapeNet Evaluation} \vspace{-0.1cm} \figureResultKITTIImage To test the granularity and precision of viewpoint control, for each test object, given a source view $I_s$, the network synthesizes 80 views around the source view with a step size of $1^\circ$, which is much denser than the step size of $20^\circ$ for training (and much denser than previously reported experiments). In total the test set contains 100,000 view pairs of objects.
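The rotation and translation error metrics defined above reduce to a few lines of NumPy. In this sketch, the \texttt{np.clip} guard against numerical drift outside the domain of $\arccos$ is our own addition:

```python
import numpy as np

def rotation_error(R_est, R_gt):
    """Geodesic angle between two rotations: arccos((Tr(R_est R_gt^T) - 1) / 2)."""
    c = (np.trace(R_est @ R_gt.T) - 1.0) / 2.0
    return np.arccos(np.clip(c, -1.0, 1.0))

def translation_error(t_est, t_gt):
    """Angle between the two (scale-ambiguous) translation directions."""
    c = t_est @ t_gt / (np.linalg.norm(t_est) * np.linalg.norm(t_gt))
    return np.arccos(np.clip(c, -1.0, 1.0))
```

Both errors are angles in radians; the translation error compares directions only, since the translation recovered from an essential matrix is defined up to scale.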
To study the effectiveness of the transformation-aware latent space, we introduce \textbf{Ours (w/o TAE)}, concatenating the viewpoint analogously to \cite{TDB16a,dosovitskiy2017learning,zhou2016view,tvsn_cvpr2017} while still keeping the depth-guided texture mapping process. To evaluate the depth-guided texture mapping process, we introduce \textbf{Ours (w/o depth)}, which directly predicts flow without the depth guidance but does deploy the TAE. \myparagraph{Viewpoint dependent error.} Fig.~\ref{fig:control} plots the L1 reconstruction error over $[-40^\circ,40^\circ]$ for all methods. Note that $0^\circ$ here means no transformation applied to the source view. \emph{Ours} consistently produces lower errors. More importantly, it yields much lower variance between non-training and training views ($\pm40^\circ,\pm20^\circ$ are training views). While previous methods can achieve similar performance to ours at training views, their performance significantly decreases for non-training views. Notably, both of our designs (TAE and depth-based appearance) contribute to the final performance, and the problem of snapping to training views persists with either of the two components discarded (\textbf{Ours (w/o TAE)} and \textbf{Ours (w/o depth)}). Tab.~\ref{tab:shapenet_view} summarizes the average $L_1$ error and SSIM for all generated views between $[-40^\circ,40^\circ]$. In line with Fig.~\ref{fig:control}, our method significantly outperforms previous methods on both car and chair. In addition, both of our ablative methods also perform better than previous methods, demonstrating the effectiveness of both modules. \tableShapeNetView \myparagraph{Qualitative results.} The qualitative results in Fig.~\ref{fig:large_shapenet} confirm the quantitative findings. To demonstrate the capability of continuous viewpoint control, we generate and overlay 80 views with a step size of $1^\circ$ from a single input.
Compared to previous approaches, our method exhibits a spin pattern similar to the ground truth, whereas other methods mostly snap to the fixed training views (Zhou et al.~\cite{zhou2016view}, Sun et al.~\cite{sun2018multiview}). This suggests that overfitting occurs, limiting the granularity and precision of view control. A close look at specific views reveals that previous methods display distortions at non-training views, highlighted in red. The image generated by Tatarchenko et al.~\cite{TDB16a} is blurry. \figureResultKITTISLAM \subsection{KITTI Evaluation}% \vspace{-0.1cm} We now evaluate our method in the more realistic setting of the KITTI dataset. Note that the dataset only contains fairly linear forward motion recorded from a car's dash. This setting is a good testbed for the envisioned application scenarios where one desires to extract 3D information retroactively. \myparagraph{Qualitative results} In \figref{fig:large_kitti2} we show qualitative results for novel views synthesized along a straight camera trajectory: Zhou et al.~\cite{zhou2016view} and Sun et al.~\cite{sun2018multiview} both have difficulty dealing with viewpoints outside of the training setting and produce distorted images, while ours are sharp and geometrically correct. Ours also reproduces the desired motion more faithfully than \cite{zhou2016view} and \cite{sun2018multiview}, which remain stationary.
If the view synthesis approach is geometrically accurate, the visual odometry system should recover the \textit{desired} trajectory. \figref{fig:large_kitti} illustrates one such experiment. The estimated trajectory from ours aligns well with the ground-truth. In contrast, views from \cite{zhou2016view} result in a wrong trajectory and \cite{sun2018multiview} mostly produce straight forward motion, possibly due to overfitting to training trajectories. \myparagraph{Quantitative results.} To evaluate the geometrical properties quantitatively, we generate new views with randomly sampled transformation $T=[R|t]$. We then estimate the relative transformation between the input and the synthesized view $\Tilde{T}=[\Tilde{R}|\Tilde{t}]$ and compare to the ground-truth $T$. This is done by first detecting and matching SURF features \cite{bay2006surf} in both views, and then computing and decomposing the essential matrix. We report the numerical error in Tab.~\ref{tab:kitti_view}. Our method produces drastically lower error in rotation and the translation, indicating accurate viewpoint control. Note that we had to remove \cite{TDB16a} from this comparison since SURF feature detection fails due to the very blurry images. \tableKITTIView \vspace{-0.3cm} \subsection{Depth and Flow Evaluation} \vspace{-0.1cm} The quality of predicted depth map and warping flow is essential to produce geometrically correct views. We evaluate the accuracy of depth and flow prediction with two metrics ($L_1$ and Acc). Tab.~\ref{tab:depth_flow} summarizes results for ShapeNet. Ours achieves the best accuracy in both flow and depth prediction, which directly benefits view synthesis (cf. Tab.~\ref{tab:shapenet_view}). The relative ranking of the ablative baselines furthermore indicates that both the TAE and the depth-guided texture mapping help to improve the flow accuracy. The TAE furthermore guides the depth prediction. 
To illustrate that the reconstructed depth maps are indeed meaningful, we predict depth in different target views and visualize the extracted normal maps, as shown in Fig.~\ref{fig:pcl}. \myparagraph{Discussion} Together these experiments indicate that the proposed self-supervision indeed forces the network to infer the underlying 3D structure (yielding good depth, which is necessary for accurate flow maps) and that it helps the final task without requiring additional labels. \tableDepthFlow \vspace{-0.3cm} \figurePCL \vspace{-0.3cm} \subsection{Latent Space Analysis} \vspace{-0.1cm} To verify that the learned latent space is indeed interpretable and meaningful under geometric transformation, we i) linearly interpolate between latent points of two objects and ii) rotate each interpolated latent point set. These point sets are then decoded into depth maps, visualized as normal maps in the global frame. Fig.~\ref{fig:inter} shows that interpolated samples exhibit a smooth shape transition while the viewpoint remains constant (i). Moreover, rotating the latent points only changes the viewpoint without affecting the shape (ii). \vspace{-0.2cm} \figureInterpolation \vspace{-0.5cm} \subsection{Generalization to unseen data} \vspace{-0.1cm} We find that our model generalizes well to unseen data thanks to the use of depth-based warping. Interestingly, our model trained on $256^2$ images can be directly applied to high-resolution ($1024^2$) images without additional training. The inference process takes 50 ms per frame on a Titan X GPU, allowing for real-time rendering of synthesized views. This enables many appealing application scenarios. For example, our model, trained on ShapeNet only, can be used in an app where downloaded 2D images are brought to life and a user may browse the depicted object in 3D. With a model trained on KITTI, a user may explore a 3D scene from a single image, via generation of free-viewpoint videos or AR/VR content (see \figref{fig:teaser}).
\section{Conclusion} \vspace{-0.1cm} We have presented a novel learning pipeline for continuous view synthesis. At its core lies a depth-based image prediction network that is forced to satisfy explicitly formulated geometric constraints. The latent representation is meaningful under explicit 3D transformation and can be used to produce geometrically accurate views for both single objects and natural scenes. We have conducted thorough experiments on synthetic and natural images and have demonstrated the efficacy of our approach.
Q: Transferring values between Fragments

MainActivity
 -A Fragment
 -B Fragment
 -C Fragment (has 3 sub-fragments)
   -a Fragment
   -b Fragment
   -c Fragment

That's how my project is organized. A RecyclerView exists in each of the a, b, c Fragments, and the number of items generated in the RecyclerView is shown in a TextView of the a, b, c fragments. To see the number of RecyclerView items in the a, b, c Fragments at a glance, I would like to deliver the integer value to the A Fragment.

adapter = new TestAdapter();
recyclerView_kor.setAdapter(adapter);
ArrayList<TestInfo> result = callback.selectAll(); // callback.selectAll() is a lookup of database cumulative values.
adapter.setItems(result);
String countMath = result.size() + "times";
textView_kor.setText(countMath);

AFragment Afragment = new AFragment();
Bundle bundle = new Bundle();
bundle.putString("textView_Math", countMath);
Afragment.setArguments(bundle);

I tried to receive the value by putting the key and the string to deliver into a Bundle, and then reading it by key in the A Fragment like this:

Math_practicenumber_textVIew = rootView.findViewById(R.id.Math_practicenumber_textVIew);
Math_practicenumber_textVIew.setText(getArguments().getString("textView_Math"));

but it didn't work... How do I fix the code?

A: Suppose this is your Fragment B:

val bundle = Bundle()
bundle.putString("textView_Math", countMath)
findNavController(R.id.nav_host_fragment).navigate(
    R.id.fragmentA,
    bundle
)

and in your FragmentA:

private var textName: String = ""

override fun onCreate(savedInstanceState: Bundle?) {
    super.onCreate(savedInstanceState)
    arguments?.let {
        textName = it.getString("textView_Math") ?: ""
    }
}

and last:

Math_practicenumber_textVIew.text = textName

A: Use a ViewModel to communicate between fragments, but please be careful: the ViewModel needs to be a shared ViewModel (scoped to the common parent, not to each fragment).
We knew exercise is good for you, but we now know it can make you live longer. The news comes from Brigham and Women's Hospital, and the National Cancer Institute. Researchers actually figured out how many years of life you can gain by being physically active. This applied to various exercise levels, among people of all ages and body sizes. If someone over 40 adds a "low" amount of exercise, like 75 minutes of brisk walking each week, he or she will gain 1.8 years of life (compared to being inactive). This is the minimum level—and any exercise performed above this will then help people live longer. For instance, if brisk walking is increased to at least 450 minutes per week, the gain will be 4.5 years. Similar patterns were consistent with people of normal weight, the overweight, and the obese. Such interesting facts came from six different studies, the results of which were pooled together. This type of exercise was called "leisure-time physical activity," and it was meant to be anywhere from moderate to vigorous. In other words, it is physical activity that you do for the direct reason of improving your fitness levels. The study comprised more than 650,000 people who were followed for an average of 10 years. There were 82,000 deaths in that time. Participating in a low level of leisure time physical activity of moderate to vigorous intensity (75 minutes of walking per week) was linked with a 19% reduced risk of death. This translates into nearly two years extra life. For those who did about 150 minutes of brisk walking per week (not an extraordinary effort), the gain in life expectancy was about 3.5 years. These benefits were seen in both men and women. For people who are above normal weight, exercising at 150 minutes per week was linked to an amazing 7.2 years of extra life. Clearly, this study reinforces the point that being physically active is unmatched in preventing disease, and helping you live better and live longer. 
It may be particularly interesting for people who are inactive right now, searching for motivation to try and get moving. Living longer might be the motivation you need! Moore, S.C., et al., "Leisure Time Physical Activity of Moderate to Vigorous Intensity and Mortality: A Large Pooled Cohort Analysis," PLoS Med. 2012; 9(11): e1001335. doi:10.1371/journal.pmed.1001335.
Q: Combining output of two java servlets

I'm making a hostel management system with a backend in Java. I made a header servlet that shows the menu, and I want it to be included on every servlet of my project. I have tried using a RequestDispatcher, like this:

RequestDispatcher rd1 = req.getRequestDispatcher("/header");
rd1.include(req, res);

When I put it in some servlet, the output of that servlet is removed (only the output which is placed after this include line) and only the header servlet is displayed. I have overridden both the doGet() and doPost() methods in the header servlet. The following picture shows that my header is working fine. An example servlet is here in which I'm including the header servlet.

import javax.servlet.*;
import javax.servlet.http.*;
import java.io.*;

public class add extends HttpServlet {
    public void doGet(HttpServletRequest req, HttpServletResponse res) throws IOException, ServletException {
        // set the content type before obtaining the writer
        res.setContentType("text/html");
        PrintWriter out = res.getWriter();
        out.println("<html><head><title>Add student</title></head><body>");
        RequestDispatcher rd1 = req.getRequestDispatcher("/header");
        rd1.include(req, res);
        // I want to show the data after this line as well
        out.println("<form method='post' action='addtoDatabase'>");
        out.println("Roll Number : <input type='text' name='roll' placeholder='student Roll Number'><br>");
        out.println("Name : <input type='text' name='studentName' autofocus placeholder='student name'><br>");
        out.println("room number : <input type='text' name='roomNumber' placeholder='Room Number'><br>");
        out.println("Address : <input type='text' name='address' placeholder='Address'><br>");
        out.println("Phone : <input type='text' name='phone' placeholder='03001234567'><br>");
        out.println("<input type='submit' value='Add Student'> ");
        out.println("</form></body></html>");
        out.close();
    }
}
Screenshot of the resulting page

How can I combine the output of both the header and any other servlet in the browser?

A: There is a session scope, and there is also an application scope. What you are trying to achieve is sharing some data between different sessions, so you need application scope — that is, the ServletContext. Please refer to Using application scope variables in java.
This awesome men's bi-fold wallet is made from premium vegan leather. It measures 4.0x3.5x0.5 Inches, and features the standard billfold, 5 card slots, and windowed ID holder. The logos are applied using advanced printing technologies, so the wallet will still look good after years of hard use. This product is Made in the USA by Buckle-Down, Inc. and Officially Licensed by Fall Out Boy.
package com.example.ruby.mygetgps.utils.retrofit;

import com.example.ruby.mygetgps.models.AuthTokenWS;
import com.example.ruby.mygetgps.models.RecordWS;
import com.example.ruby.mygetgps.models.TravelerWS;
import com.example.ruby.mygetgps.models.TripWS;

import java.util.Date;

import retrofit2.Call;
import retrofit2.http.Field;
import retrofit2.http.FormUrlEncoded;
import retrofit2.http.POST;

/**
 * Where all web services are going to be accessed
 */
public interface GetGpsServices {

    // :vehicle_type_id
    @FormUrlEncoded
    @POST(Urls.TRIPS)
    Call<TripWS> uploadTrip(@Field("travel[vehicle_type_id]") int vehicle_type_id);

    // :travel_id, :start_latitude, :start_longitude, :end_latitude, :end_longitude, :speed, :time_registered
    @FormUrlEncoded
    @POST(Urls.RECORDS)
    Call<RecordWS> uploadRecord(@Field("record[travel_id]") int travel_id,
                                @Field("record[start_latitude]") float start_latitude,
                                @Field("record[start_longitude]") float start_longitude,
                                @Field("record[end_latitude]") float end_latitude,
                                @Field("record[end_longitude]") float end_longitude,
                                @Field("record[speed]") float speed,
                                @Field("record[time_registered]") Date time_registered);

    @FormUrlEncoded
    @POST(Urls.TRAVELERS)
    Call<TravelerWS> uploadRegistration(@Field("traveler[first_name]") String firstName,
                                        @Field("traveler[last_name]") String lastName,
                                        @Field("traveler[email]") String email,
                                        @Field("traveler[password]") String password,
@Field("traveler[password_confirmation]") String password_confirmation, @Field("traveler[vehicle_type_id]") int vehicleTypeId); @FormUrlEncoded @POST(Urls.LOGIN) Call<AuthTokenWS> login(@Field("user_login[email]") String email, @Field("user_login[password]") String password); }
\section{Introduction} \noindent In 1982 J. Cuntz obtained a very elegant result about the full free product of unital C*-algebras with one-dimensional representations that leads to a conjectural long exact sequence for amalgamated free products in a general situation \cite{Cu82}. At about the same time, M. Pimsner and D. Voiculescu's computation of the $KK$-theory of some group C*-algebras culminated in the computation of full and reduced crossed products by groups acting on trees \cite{Pi86} (or by the fundamental group of a graph of groups in Serre's terminology). Going beyond the group situation has been difficult, and it relied heavily on various generalizations of Voiculescu's absorption theorem (see \cite{Th03} for the most general results in that direction). Note also that G. Kasparov and G. Skandalis had another proof of Pimsner's long exact sequence when studying KK-theory for buildings \cite{KS91}. \vspace{0.2cm} \noindent Section $2$ is a preliminary section in which we investigate the notion of reduced amalgamated free products of unital C*-algebras $A_1*_BA_2$ in the presence of not necessarily GNS-faithful conditional expectations. The usual reduced version, due to D. Voiculescu, which is obtained by looking at the module over $B$, is often too small. Indeed, when the conditional expectations onto $B$ are both $*$-homomorphisms, Voiculescu's reduced amalgamated free product is isomorphic to $B$ and all the information on $A_1$ and $A_2$ is lost. This is why we consider another reduced amalgamated free product, which we call vertex-reduced, which is obtained by looking at the two modules over $A_1$ and $A_2$ and is an intermediate quotient between the full amalgamated free product and Voiculescu's reduced amalgamated free product.
When the conditional expectations are GNS-faithful, these two reduced amalgamated free products coincide, and when the conditional expectations are $*$-homomorphisms the vertex-reduced amalgamated free product is isomorphic to the fiber sum $A_1\oplus_B A_2$. Hence, even in the extremely degenerate case, the information on $A_1$ and $A_2$ is still contained in the vertex-reduced amalgamated free product. As the vertex-reduced free product is a new construction, we devote some time to establishing its properties. \vspace{0.2cm} \noindent Before proving our long exact sequence in KK-theory we start with an auxiliary and easy result in Section $3$. This result states that the full free product is always K-equivalent to the vertex-reduced free product. In particular, when the conditional expectations are morphisms, we recover exactly Cuntz's result \cite{Cu82}. This result also generalizes and simplifies the previous result obtained by the second author \cite{Ge96}. The proof is very natural, just a rotation trick. While finishing writing this paper, the authors became aware that K. Hasegawa had just obtained the same result in the particular case of GNS-faithful conditional expectations. By a remark of Ueda (\cite{Ue08}), this result also proves the K-equivalence between full and (vertex-)reduced HNN extensions.
Explicitly, if $C$ is any separable C*-algebra, then we have the two $6$-terms exact sequence, $$\begin{array}{ccccc} KK^0(C,B)&\longrightarrow& KK^0(C,A_1)\bigoplus KK^0(C,A_2) &\longrightarrow &KK^0(C,A_1*_B A_2)\\ \uparrow& & & &\downarrow\\ KK^1(C,A_1*_BA_2)& \longleftarrow& KK^1(C,A_1)\bigoplus KK^1(C,A_2) &\longleftarrow& KK^1(C,B) \\ \end{array} $$ and $$\begin{array}{ccccc} KK^0(B,C)&\longleftarrow& KK^0(A_1,C)\bigoplus KK^0(A_2,C) &\longleftarrow& KK^0(A_1*_BA_2,C)\\ \downarrow& & & &\uparrow\\ KK^1(A_1*_BA_2,C)& \longrightarrow& KK^1(A_1,C)\oplus KK^1(A_2,C) &\longrightarrow& KK^1(B,C) \\ \end{array} $$ \noindent Again the HNN extension case follows using the isomorphism with an amalgamated free product. Note that this result greatly simplifies and generalizes the results of Thomsen \cite{Th03} about KK-theory for amalgamated free products which are valid only when the amalgam is finite dimensional. \vspace{0.2cm} \noindent Let us mention some applications. As a direct corollary, we obtain that the amalgamated free product of discrete quantum groups is $K$-amenable if and only if the initial quantum groups are K-amenable. This generalizes the result of Vergnioux \cite{Ve04} which was valid only for amenable discrete quantum groups and this also implies that a graph product of discrete quantum groups (see \cite{CF14}) is $K$-amenable if and only if the initial quantum groups are $K$-amenable. Finally, let us mention that this results will be applied in a future paper to deduce a long exact sequence in KK-theory for fundamental C*-algebras of graph of C*-algebras, generalizing and simplifying the results of Pimsner \cite{Pi86} and, as an application, the results of Fima-Freslon \cite{FF13}. \section{Preliminaries} \subsection{Notations and conventions} All C*-algebras and Hilbert modules are supposed to be separable. 
For a C*-algebra $A$ and a Hilbert $A$-module $H$ we denote by $\mathcal{L}_A(H)$ the C*-algebra of $A$-linear adjointable operators from $H$ to $H$ and by $\mathcal{K}_A(H)$ the sub-C*-algebra of $\mathcal{L}_A(H)$ consisting of $A$-compact operators. For $a\in A$, we denote by $L_A(a)\in\mathcal{L}_A(A)$ the left multiplication operator by $a$. \subsection{Conditional expectations}\label{SectionCE} Let $A$, $B$ be unital C*-algebras and $\varphi\,:\,A\rightarrow B$ be a unital completely positive map (ucp). A \textit{GNS construction} of $\varphi$ is a triple $(K,\rho,\eta)$, where $K$ is a Hilbert $B$-module, $\eta\in K$ and $\rho\,:\, A\rightarrow\mathcal{L}_B(K)$ is a unital $*$-homomorphism such that $K=\overline{\rho(A)\eta\cdot B}$ and $\langle\eta,\rho(a)\eta\rangle=\varphi(a)$ for all $a\in A$. A GNS construction always exists and is unique, up to a canonical isomorphism. Note that, if $B\subset A$ and $E\,:\,A\rightarrow B$ is a conditional expectation then, the Hilbert $B$-submodule $\eta\cdot B$ of $K$, where $(K,\rho,\eta)$ is a GNS construction of $E$, is complemented. Indeed, we have $K=\eta\cdot B\oplus K^\circ$, where $ K^\circ=\overline{\text{Span}}\{\rho(a)\eta\cdot b\,:\,a\in A^\circ\text{ and }b\in B\}$ and $A^\circ=\text{Ker}(E)$. Since $E$ is a conditional expectation onto $B$ we have $bA^\circ\subset A^\circ$ for all $b\in B$. It follows that $\rho(b)K^\circ\subset K^\circ$ for all $b\in B$. Hence, the restriction of $\rho$ to $B$ (and to $K^\circ$) gives a unital $*$-homomorphism $\rho\,:\,B\rightarrow\mathcal{L}_B(K^\circ)$. \vspace{0.2cm} \noindent A conditional expectation is called \textit{GNS-faithful} (or \textit{non-degenerate}) if for a given GNS construction (and hence for all GNS constructions) $(K,\rho,\eta)$, the homomorphism $\rho$ is faithful. In this paper we will consider reduced amalgamated free product with respect to non-necessary GNS-faithful conditional expectations. 
Actually, the degeneracy of the conditional expectations will naturally produce different types of reduced amalgamated free products. This is why we include the next proposition, which is well known to specialists but helps to understand the extreme degenerated case: when $E$ is an homomorphism. We include a complete proof for the convenience of the reader. \begin{proposition}\label{PropDeg1} Let $B\subset A$ be a unital inclusion of unital C*-algebras and $E\,:\,A\rightarrow B$ be a conditional expectation with GNS construction $(K,\rho,\eta)$. The following are equivalent. \begin{enumerate} \item $E$ is an homomorphism. \item $K\simeq B$ as Hilbert $B$-modules. \item $K^\circ=\{0\}$. \end{enumerate} \end{proposition} \begin{proof} Since $K=\eta\cdot B\oplus K^\circ$ the equivalence between $(2)$ and $(3)$ is obvious. \vspace{0.2cm} \noindent$(1)\Rightarrow(3)$. If $E$ is an homomorphism from $A$ to $B$ then, since $E$ is ucp, it is a unital $*$-homomorphism and we have for all $b\in B$ and all $a\in A^\circ$, $$\langle\rho(a)\eta\cdot b,\rho(a)\eta\cdot b\rangle_{K}=b^*\langle\eta\cdot b,\rho(a^*a)\eta\rangle_{K}b=b^*E(a^*a)b=b^*E(a)^*E(a)b=0.$$ \noindent$(3)\Rightarrow(1)$. If $K^\circ=\{0\}$ then, for all $a\in A^\circ$, we have $E(a^*a)=\langle\rho(a)\eta,\rho(a)\eta\rangle_{K}=0$. Hence $$E((a-E(a))^*(a-E(a)))=0=E(a^*a)-E(a^*)E(a)-E(a)^*E(a)+E(a)^*E(a)\quad\text{for all }a\in A.$$ It follows that, for all $a\in A$, we have $E(a^*a)=E(a)^*E(a)$. Hence, the multiplicative domain of the ucp map $E$ is equal to $A$ which implies that $E$ is an homomorphism. \end{proof} \subsection{The full and reduced amalgamated free products} \noindent Let $A_1$, $A_2$ be two unital C*-algebras with a common C*-subalgebra $B\subset A_k$, $k=1,2$ and denote by $A_f$ the full amalgamated free product. To be more precise, we sometimes write $A_f=A_1\underset{B}{*}A_2$. It is well known that the canonical map from $A_k$ to $A_f$ is faithful for $k=1,2$. 
Hence, we will always view $A_1$ and $A_2$ as subalgebras of $A_f$. \vspace{0.2cm} \noindent We will now construct, in the presence of conditional expectations, two different reduced amalgamated free products. One of them, that we call the \textit{edge-reduced amalgamated free product} has been extensively studied and it is called, in the literature, the reduced amalgamated free product. The other one, that we call the \textit{vertex-reduced amalgamated free product}, does not seem to be known, even from specialists. As it will become gradually clear, the vertex-reduced amalgamated free product is actually much more natural than the edge-reduced amalgamated free product. It is an intermediate quotient of the full amalgamated free product and it is isomorphic to the edge-reduced amalgamated free product in the presence of GNS-faithful conditional expectations. This is the reason why it has not appear before in the literature since many authors only consider amalgamated free product in the presence of GNS-faithful conditional expectations. Since the vertex-reduced and the edge-reduced amalgamated free product are the foundations of our proofs we will now explain in great details their constructions. \vspace{0.2cm} \noindent In the sequel, we always assume that, for $k=1,2$, there exists a conditional expectation $E_k\,:\, A_k\rightarrow B$. We write $A_k^\circ=\{a\in A_k\,:\, E_k(a)=0\}$, we denote by $(K_k,\rho_k,\eta_k)$ a GNS construction of $E_k$ and by $K_k^\circ$ the canonical orthogonal complement of $\eta_k\cdot B$ in $K_k$ as explain in section \ref{SectionCE}. Recall that the restriction of $\rho_k$ to $B$ (and to $K_k^\circ$) gives a unital $*$-homomorphism $\rho_k\,:\, B\rightarrow\mathcal{L}_B(K_k^\circ)$. 
\vspace{0.2cm} \noindent We denote by $I$ the subset of $\cup_{n\geq 1}\{1,2\}^n$ defined by $$I=\{(i_1,\dots,i_n)\in\{1,2\}^n\,:\,n\geq 1\text{ and }i_k\neq i_{k+1}\text{ for all }1\leq k\leq n-1\},$$ \noindent Recall that an operator $x\in A_f$ is called \textit{reduced} if $x\neq 0$ and $x$ can be written as $x=a_1\dots a_n$ with $n\geq 1$ and $a_k\in A_{i_k}^\circ-\{0\}$ such that $\underline{i}=(i_1,\dots i_n)\in I$. \subsubsection{The vertex-reduced amalgamated free products} For $\underline{i}=(i_1,\dots,i_n)\in I$, we define a $A_{i_1}$-$A_{i_n}$-bimodule $H_{\underline{i}}$. As Hilbert $A_{i_n}$-module we have: $$H_{\underline{i}}=\left\{\begin{array}{lcl}K_{i_1}\underset{B}{\otimes}K_{i_2}^\circ\underset{B}{\otimes}\dots\underset{B}{\otimes}K_{i_{n-1}}^\circ\underset{B}{\otimes} A_{i_n}&\text{if}&n\geq 3,\\ K_{i_1}\underset{B}{\otimes} A_{i_2}&\text{if}&n=2,\\ A_{i_1}&\text{if}&n=1.\end{array}\right.$$ \noindent The left action of $A_{i_1}$ on $H_{\underline{i}}$ is given by the unital $*$-homomorphism defined by $$\lambda_{\underline{i}}\,:\,A_{i_1}\rightarrow\mathcal{L}_{A_{i_n}}(H_{\underline{i}});\quad\lambda_{\underline{i}}=\left\{\begin{array}{lcl}\rho_{i_1}\underset{B}{\otimes}\text{id}&\text{if}& n\geq 2,\\ L_{A_{i_1}}&\text{if}&n=1.\end{array}\right.$$ \noindent We consider, for $k,l\in\{1,2\}$, the subset $I_{k,l}=\{\underline{i}=(i_1,\dots,i_n)\in I\,:\,i_1=k\text{ and }i_n=l\}$ and the $A_k$-$A_l$-bimodule defined by $$H_{k,l}=\underset{\underline{i}\in I_{k,l}}{\bigoplus}H_{\underline{i}}\quad\text{and}\quad\lambda_{k,l}=\underset{\underline{i}\in I_{k,l}}{\bigoplus}\lambda_{\underline{i}}\,:\, A_k\rightarrow\mathcal{L}_{A_l}(H_{k,l}).$$ \noindent For $k\in\{1,2\}$ we denote by $\overline{k}$ the unique element in $\{1,2\}\setminus\{k\}$. \begin{example}\label{ExDeg1} If, for $k\in\{1,2\}$, $E_k$ is an homomorphism from $A_k$ to $B$ it follows from Proposition \ref{PropDeg1} that $K_k^\circ=\{0\}$. 
Hence, $H_{k,k}=A_k\oplus K_k\underset{B}{\otimes} K_{\overline{k}}^\circ\underset{B}{\otimes} A_k$ and $H_{\overline{k},k}=K_{\overline{k}}\underset{B}{\otimes} A_k$. Note that, since $K_k\simeq B$, we have $H_{k,k}\simeq A_k\oplus K_{\overline{k}}^\circ\underset{B}{\otimes} A_k\simeq K_{\overline{k}}\underset{B}{\otimes} A_k= H_{\overline{k},k}$. Also we have $H_{k,\overline{k}}=K_k\underset{B}{\otimes} A_{\overline{k}}$ and $H_{\overline{k},\overline{k}}=A_{\overline{k}}$. Again, $H_{k,\overline{k}}\simeq A_{\overline{k}}=H_{\overline{k},\overline{k}}$. Actually the isomorphism of Hilbert $A_l$-modules $H_{k,l}\simeq H_{\overline{k},l}$ is true in full generality as explained below. \end{example} \vspace{0.2cm} \noindent For $k,l\in\{1,2\}$ we define a unitary $u_{k,l}\in\mathcal{L}_{A_l}(H_{k,l}, H_{\overline{k},l})$, by the following formula. Let $\underline{i}=(i_1,\dots,i_n)\in I$, with $i_1=k$ and $i_l=l$. For $\xi\in H_{\underline{i}}$ we define $u_{k,l}\xi\in H_{\overline{k},l}$ in the following way. \begin{itemize} \item If $n\geq 2$, write $\underline{i}=(k,\underline{i}')$, where $\underline{i}'=(i_2,\dots,i_n)\in I_{\overline{k},l}$. For $\xi=\rho_{k}(a)\eta_k\otimes\xi'$, with $a\in A_k$ and $\xi'\in H_{\underline{i}'}$, we define $u_{k,l}\xi:=\left\{\begin{array}{lcl}\eta_{\overline{k}}\otimes\xi&\text{if}&E_k(a)=0,\\ \lambda_{\underline{i}'}(a)\xi'&\text{if}&a\in B.\end{array}\right.$ \item If $n=1$ then $k=l$, $\underline{i}=(l)$ and $\xi\in A_l=H_{\underline{i}}$. We define $u_{k,l}\xi:=\eta_{\overline{k}}\otimes \xi$. \end{itemize} It is easy to check that, for all $k,l\in\{1,2\}$, the operator $u_{k,l}$ commutes with the right actions of $A_l$ on $H_{k,l}$ and $H_{\overline{k},l}$ and extends to a unitary operators, still denoted $u_{k,l}$, in $\mathcal{L}_{A_l}(H_{k,l}, H_{\overline{k},l})$ such that $u_{k,l}^*=u_{\overline{k},l}$. 
Moreover, the definition of $u_{k,l}$ implies that, \begin{equation}\label{EqVertexUnitary}u_{k,l}^*\lambda_{\overline{k},l}(b)u_{k,l}=\lambda_{k,l}(b)\quad\text{for all }b\in B. \end{equation} \begin{definition} Let $k\in\{1,2\}$. The \textit{$k$-vertex-reduced amalgamated free product} is the C*-sub-algebra $A_{v,k}\subset\mathcal{L}_{A_k}(H_{k,k})$ generated by $\lambda_{k,k}(A_k)\cup u_{k,k}^*\lambda_{\overline{k},k}(A_{\overline{k}})u_{k,k}\subset\mathcal{L}_{A_k}(H_{k,k})$. To be more precise, we use sometimes the notation $A_{v,k}=A_1\overset{k}{\underset{B}{*}}A_2$. \end{definition} \noindent For a fixed $k\in\{1,2\}$ the relations $(\ref{EqVertexUnitary})$ imply the existence of a unique unital $*$-homomorphism $\pi_k\,:\,A_f\rightarrow A_{v,k}$ such that $\pi_k(a)=\left\{\begin{array}{lcl}\lambda_{k,k}(a)&\text{if}&a\in A_k,\\ u_{k,k}^*\lambda_{\overline{k},k}(a) u_{k,k}&\text{if}&a\in A_{\overline{k}}.\end{array}\right.$ \vspace{0.2cm} \noindent In the sequel we will denote by $\xi_k$ the vector $\xi_k:=1_{A_k}\in A_k\subset H_{k,k}$. We summarize the fundamental properties of $A_{v,k}$ in the following proposition. \begin{proposition}\label{PropkVertexReduced} For all $k\in\{1,2\}$ the following holds. \begin{enumerate} \item The morphism $\pi_k$ is faithful on $A_k$. \item If $E_{\overline{k}}$ is GNS-faithful then $\pi_k$ is faithful on $A_{\overline{k}}$. \item There exists a unique ucp map $\mathbb{E}_k\,:\, A_{v,k}\rightarrow A_k$ such that $\mathbb{E}_k(\pi_k(a))=a$ $\forall a\in A_k$ and $$\mathbb{E}_k(\pi_k(a_1\dots a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced with }n\geq 2\text{ or }n=1\text{ and }a=a_1\in A_{\overline{k}}^\circ.$$ \noindent Moreover, $\mathbb{E}_k$ is GNS-faithful. 
\item For any unital C*-algebra $C$ with unital $*$-homomorphisms $\nu_k\,:\,A_k\rightarrow C$ such that \begin{itemize} \item $\nu_1(b)=\nu_2(b)$ for all $b\in B$, \item $C$ is generated, as a C*-algebra, by $\nu_1(A_1)\cup\nu_2(A_2)$, \item $\nu_k$ is faithful and there exists a GNS-faithful ucp map $E\,:\,C\rightarrow A_k$ such that $E(\nu_k(a))=a$ for all $a\in A_k$ and $$E(\nu_{i_1}(a_1)\dots\nu_{i_n}(a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced with }n\geq 2\text{ or }n=1\text{ and }a=a_1\in A_{\overline{k}}^\circ,$$ \end{itemize} there exists a unique unital $*$-isomorphism $\nu\,:\,A_{v,k}\rightarrow C$ such that $\nu\circ\pi_k(a)=\nu_k(a)$ for all $a\in A_1\cup A_2$. Moreover, $\nu$ satisfies $E\circ\nu=\mathbb{E}_k$. \end{enumerate} \end{proposition} \begin{proof} By definition of $\pi_k$ we have, if $a\in A_k$, $\langle\xi_k,\pi_k(a)\xi_k\rangle=a$. It follows directly that $\pi_k$ is faithful on $A_k$. Moreover, the map $\mathbb{E}_k\,:\,A_{v,k}\rightarrow A_k$, $x\mapsto\langle\xi_k,x\xi_k\rangle$ satisfies $\mathbb{E}_k(\pi_k(a))=a$ $\forall a\in A_k$. 
By the definition of the unitaries $u_{k,l}$ we have, for all $k\in\{1,2\}$ and all reduced operator $x=a_1\dots a_n$ with $a_k\in A_k^\circ$ and $\underline{i}=(i_1,\dots,i_n)\in I$, \begin{equation}\label{EqGNS}\pi_k(a_1\dots a_n)\xi_k=\left\{\begin{array}{lcl}\rho_{i_1}(a_1)\eta_1\otimes\dots\otimes\rho_{i_{n-1}}(a_{n-1})\eta_{n-1}\otimes a_n&\text{if}&i_1= k\text{ and }i_n=k,\\ \eta_{k}\otimes\rho_{i_1}(a_1)\eta_1\otimes\dots\otimes\rho_{i_{n-1}}(a_{n-1})\eta_{n-1}\otimes a_n&\text{if}&i_1\neq k\text{ and }i_n=k,\\ \rho_{i_1}(a_1)\eta_1\otimes\dots\otimes\rho_{i_{n}}(a_{n})\eta_{n}\otimes 1_{A_k}&\text{if}&i_1= k\text{ and }i_n\neq k,\\ \eta_{k}\otimes\rho_{i_1}(a_1)\eta_1\otimes\dots\otimes\rho_{i_{n}}(a_{n})\eta_{n}\otimes 1_{A_k}&\text{if}&i_1\neq k\text{ and }i_n\neq k.\\ \end{array}\right.\end{equation} \vspace{0.2cm} \noindent Hence we have $\mathbb{E}_k(\pi_k(a_1,\dots a_n))=0$ for all $a=a_1\dots a_n\in A_f$ reduced with $n\geq 2$ or $n=1$ and $a=a_1\in A_{\overline{k}}^\circ$. It also follows easily from the previous set of equations that $\overline{\pi_k(A_f)\xi_k\cdot A_k}=H_{k,k}$. Hence the triple $(H_{k,k},\text{id},\xi_k)$ is a GNS construction for $\mathbb{E}_k$. This shows that $\mathbb{E}_k$ is GNS-faithful. Note that the uniqueness statement of the third assertion is obvious since $A_f$ is the linear span of $B$ and the reduced operators. Also, the second statement becomes now obvious since, by the properties of $\mathbb{E}_k$ we have, for all $x\in A_{\overline{k}}$, $\mathbb{E}_k(\pi_k(x))=\mathbb{E}_k(\pi_k(x-E_{\overline{k}}(x)))+\mathbb{E}_k(\pi_k(E_{\overline{k}}(x)))=\pi_k(E_{\overline{k}}(x))$. It follows easily from this equation that $\pi_k$ is faithful on $A_{\overline{k}}$ whenever $E_{\overline{k}}$ is GNS-faithful. Indeed, let $x\in A_{\overline{k}}$ such that $\pi_k(a)=0$. Then, for all $y\in A_{\overline{k}}$ we have $\pi_k(y^*x^*xy)=0$. 
Hence, $\pi_k\circ E_{\overline{k}}(y^*x^*xy)=\mathbb{E}_k\circ\pi_k(y^*x^*xy)=0$ for all $y\in A_{\overline{k}}$. Since $\pi_k$ is faithful on $A_k$ we find $E_{\overline{k}}(y^*x^*xy)=0$, for all $y\in A_{\overline{k}}$. Since $E_{\overline{k}}$ is GNS-faithful we conclude that $x=0$. \vspace{0.2cm} \noindent $(4)$. The proof is a routine. We write the argument for the convenience of the reader. Let $(K,\rho,\eta)$ be the GNS construction of $E$. Since $E$ is GNS-faithful we may and will assume that $\rho=\text{id}$ and $C\subset\mathcal{L}_{A_k}(K)$. By the properties of $\mathbb{E}_k$ and $E$, the map $U\,:\,H_{k,k}\rightarrow K$ defined by, for $x=a_1\dots a_n\in A_f$ reduced with $a_k\in A_{i_k}^\circ$, $U(\pi_k(x)\xi_k):=\nu_{i_1}(a_1)\dots\nu_{i_n}(a_n)\eta$ and, for $x=b\in B$, $U(\pi_k(b)\xi_k)=\nu_1(b)\eta=\nu_2(b)\eta$, is well defined and extends to a unitary $U\in\mathcal{L}_{A_k}(H_{k,k},K)$. By construction, the map $\nu(x):=UxU^*$, for $x\in A_{v,k}$, satisfies the claimed properties. The uniqueness is obvious. \end{proof} \begin{remark} It is known that the canonical homomorphism from $A_k$ to $A_f$ is faithful for $k\in\{1,2\}$ without assuming the existence of conditional expectations from $A_k$ to $B$. However, assertion $(1)$ of Proposition \ref{PropkVertexReduced} gives a very simple proof of this fact, since it shows that the composition of the canonical homomorphism from $A_k$ to $A_f$ with the homomorphism $\pi_k$ is faithful, which implies that the canonical homomorphism from $A_k$ to $A_f$ itself is faithful. \end{remark} \begin{example}\label{ExDeg2} Suppose that, for a given $k\in\{1,2\}$, $E_k$ is an homomorphism. Then, as observed in Example \ref{ExDeg1}, we have $H_{\overline{k},\overline{k}}=A_{\overline{k}}$ (and $\lambda_{\overline{k},\overline{k}}=L_{A_{\overline{k}}}$). 
It follows from the definition of $\pi_{\overline{k}}$ that $$\pi_{\overline{k}}(a)=\left\{\begin{array}{lcl}L_{A_{\overline{k}}}(a)&\text{if}&a\in A_{\overline{k}},\\ 0&\text{if}&a\in A_k^\circ.\end{array}\right.$$ Hence, since $A_f$ the closed linear span of $A_{\overline{k}}$ and the reduced operators and $\pi_{\overline{k}}\,:\,A_f\rightarrow A_{v,\overline{k}}$ is surjective, we find that $A_{v,\overline{k}}=\pi_{\overline{k}}(A_{\overline{k}})$. Moreover, since $\pi_{\overline{k}}$ is faithful on $A_{\overline{k}}$ we conclude that the restriction of $\pi_{\overline{k}}$ to $A_{\overline{k}}$ gives an isomorphism $A_{\overline{k}}\simeq A_{v,\overline{k}}$. \end{example} \begin{definition} The \textit{vertex-reduced amalgamated free product} is the C*-algebra obtained by separation and completion of $A_f$ with respect to the C*-semi-norm $\Vert\cdot\Vert_v$ on $A_f$ defined by $$\Vert x\Vert_v:=\text{Max}\{\Vert\pi_1(x)\Vert,\Vert\pi_2(x)\Vert\}\quad\text{for all }x\in A_f.$$ \end{definition} \noindent We will note it $A_1\overset{v}{\underset{B}{*}} A_2$ or $A_v$ for simplicity in the rest of this section and let $\pi\,:\,A_f\rightarrow A_v$ be the canonical surjective unital $*$-homomorphism. Note that, by construction of $A_v$, for all $k\in\{1,2\}$, there exists a unique unital (surjective) $*$-homomorphism $\pi_{v,k}\,:\,A_v\rightarrow A_{v,k}$ such that $\pi_{v,k}\circ\pi=\pi_k$. We describe the fundamental properties of the vertex-reduced amalgamated free product in the following proposition. We call a family of ucp maps $\{\varphi_i\}_{i\in I}$, $\varphi_i\,:\,A\rightarrow B_i$ GNS-faithful if $\cap_{i\in I}{\rm Ker}(\pi_i)=\{0\}$, where $(H_i,\pi_i,\xi_i)$ is a GNS-construction for $\varphi_i$. From Proposition \ref{PropkVertexReduced} and the definition of $A_v$ we deduce the following result. \begin{proposition}\label{PropVertexReduced} The following holds. \begin{enumerate} \item $\pi$ is faithful on $A_k$ for all $k\in\{1,2\}$. 
\item For all $k\in\{1,2\}$, there is a unique ucp map $\mathbb{E}_{A_k}\,:\,A_v\rightarrow A_k$ such that $\mathbb{E}_{A_k}\circ\pi(a)=a$ for all $a\in A_k$ and all $k\in\{1,2\}$ and, $$\mathbb{E}_{A_k}(\pi(a_1\dots a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced with }n\geq 2\text{ or }n=1\text{ and }a=a_1\in A_{\overline{k}}^\circ.$$ \noindent Moreover, the family $\{\mathbb{E}_{A_1},\mathbb{E}_{A_2}\}$ is GNS-faithful. \item Suppose that $C$ is a unital C*-algebra with $*$-homomorphisms $\nu_k\,:\,A_k\rightarrow C$ such that \begin{itemize} \item $\nu_1(b)=\nu_2(b)$ for all $b\in B$, \item $C$ is generated, as a C*-algebra, by $\nu_1(A_1)\cup\nu_2(A_2)$, \item $\nu_1$ and $\nu_2$ are faithful and, for all $k\in\{1,2\}$, there exists a ucp map $E_{A_k}\,:\,C\rightarrow A_k$ such that $E_{A_k}\circ\nu_k(a)=a$ for all $a\in A_k$ and all $k\in\{1,2\}$ and, $$E_{A_k}(\nu_{i_1}(a_1)\dots\nu_{i_n}(a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced with }n\geq 2\text{ or }n=1\text{ and }a=a_1\in A_{\overline{k}}^\circ,$$ and the family $\{E_{A_1},E_{A_2}\}$ is GNS-faithful. \end{itemize} Then, there exists a unique unital $*$-isomorphism $\nu\,:\,A_{v}\rightarrow C$ such that $\nu\circ\pi(a)=\nu_k(a)$ for all $a\in A_k$ and all $k\in\{1,2\}$. Moreover, $\nu$ satisfies $E_{A_k}\circ\nu=\mathbb{E}_{A_k}$, $k\in\{1,2\}$. \end{enumerate} \end{proposition} \begin{proof} $(1)$. It is obvious since, by Proposition \ref{PropkVertexReduced}, $\pi_k$ is faithful on $A_k$ for $k=1,2$. \vspace{0.2cm} \noindent$(2)$. By Proposition \ref{PropkVertexReduced}, the maps $\mathbb{E}_{A_k}=\mathbb{E}_k\circ\pi_{v,k}$ satisfy the desired properties and it suffices to check that the family $\{\mathbb{E}_{A_1},\mathbb{E}_{A_2}\}$ is GNS-faithful. Let $x_0\in A_f$ be such that $x=\pi(x_0)\in A_v$ satisfies $\mathbb{E}_{A_k}(y^*x^*xy)=0$ for all $y\in A_v$ and all $k\in\{1,2\}$. 
Then, for all $k\in\{1,2\}$ we have $\mathbb{E}_k(y^*\pi_{v,k}(x^*x)y)=0$ for all $y\in A_{v,k}$. Since $\mathbb{E}_k$ is GNS-faithful, this implies that $\pi_{v,k}(x)=\pi_k(x_0)=0$ for all $k\in\{1,2\}$. Hence, $\Vert x\Vert_{A_v}=\text{Max}(\Vert\pi_1(x_0)\Vert,\Vert\pi_2(x_0)\Vert)=0$, so $x=0$. \vspace{0.2cm} \noindent$(3)$. The proof is routine. We include it for the convenience of the reader. Let $(L_k,m_k,f_k)$ be the GNS construction of $E_{A_k}$. By the universal property of $A_{v,k}$, the C*-algebra $C_k$ generated by $m_k( C )\subset\mathcal{L}_{A_k}(L_k)$ is canonically isomorphic to $A_{v,k}$. Hence, in the remainder of the proof we suppose that $C_k=A_{v,k}$ and, by the universal property of $A_f$, we have a unital surjective $*$-homomorphism $\nu_f\,:\,A_f\rightarrow C$ such that $\nu_f\vert_{A_k}=\nu_k$. Note that, by the identification we made, $m_k\circ\nu_f=\pi_{k}$. Hence, by construction of $A_v$, there exists a unique unital (surjective) $*$-homomorphism $\nu'\,:\,C\rightarrow A_v$ such that $\pi_{v,k}\circ\nu'=m_k$ for all $k\in\{1,2\}$. The homomorphism $\nu'$ satisfies all the claimed properties of the inverse of $\nu$, so it suffices to check that $\nu'$ is faithful and to take $\nu=(\nu')^{-1}$. Faithfulness is clear since, by the identity $\pi_{v,k}\circ\nu'=m_k$, $k=1,2$, we have $\text{Ker}(\nu')\subset\text{Ker}(m_1)\cap\text{Ker}(m_2)=\{0\}$, where the last equality holds since the pair $(E_{A_1},E_{A_2})$ is GNS-faithful. \end{proof} \begin{corollary}\label{CorDegVertexRed} If both $E_1$ and $E_2$ are homomorphisms then there is a canonical isomorphism $A_v\simeq A_1\underset{B}{\oplus}A_2$, where $A_1\underset{B}{\oplus}A_2:=\{(a_1,a_2)\in A_1\oplus A_2\,:\, E_1(a_1)=E_2(a_2)\}$. \end{corollary} \begin{proof} We use the universal property of $A_v$ described in Proposition \ref{PropVertexReduced}. Define $\nu_k\,:\,A_k\rightarrow A_1\underset{B}{\oplus}A_2$ by $\nu_1(x)=(x,E_1(x))$ and $\nu_2(y)=(E_2(y),y)$. It is clear that $\nu_1$ and $\nu_2$ are both faithful unital $*$-homomorphisms such that $\nu_1(b)=\nu_2(b)$ for all $b\in B$. Define $E_{A_k}\,:\,A_1\underset{B}{\oplus}A_2\rightarrow A_k$ by $E_{A_1}(a_1,a_2)=a_1$ and $E_{A_2}(a_1,a_2)=a_2$. Then, for all $k\in\{1,2\}$, $E_{A_k}$ is a unital $*$-homomorphism such that $E_{A_k}\circ\nu_k(a)=a$ for all $a\in A_k$. In particular both $E_{A_1}$ and $E_{A_2}$ are conditional expectations and, since ${\rm Ker}(E_{A_1})\cap{\rm Ker}(E_{A_2})=\{0\}$, the family $\{E_{A_1},E_{A_2}\}$ is GNS-faithful. Hence, it suffices to check the condition on the reduced operators. Since $\nu_1(A_1^\circ)=\{(x,0)\,:\,x\in A_1^\circ\}$ and $\nu_2(A_2^\circ)=\{(0,y)\,:\,y\in A_2^\circ\}$, we have $\nu_1(A_1^\circ)\nu_2(A_2^\circ)=\nu_2(A_2^\circ)\nu_1(A_1^\circ)=\{0\}$, so the condition is trivial for reduced words of length $n\geq 2$, and it remains to check it on elements $(a_1,a_2)\in\nu_1(A_1^\circ)\cup \nu_2(A_2^\circ)$, which is obvious. \end{proof} \subsubsection{The edge-reduced amalgamated free product} In this section we show how the construction of the edge-reduced (or, in the literature, simply the reduced) amalgamated free product in full generality is related to the vertex-reduced free product we just defined.
\vspace{0.2cm} \noindent For $\underline{i}\in I$, we consider the $B$-$B$-module $K_{\underline{i}}=K_{i_1}^\circ\underset{B}{\otimes}\dots\underset{B}{\otimes}K_{i_n}^\circ$ as a Hilbert $B$-module with the left action of $B$ given by the unital $*$-homomorphism $\rho_{\underline{i}}\,:\,B\rightarrow\mathcal{L}_B(K_{\underline{i}})$, $\rho_{\underline{i}}(b)=\rho_{i_1}(b)\underset{B}{\otimes}\text{id}$ for all $b\in B$, and we define the Hilbert $B$-bimodule $K=B\oplus\left(\bigoplus_{\underline{i}\in I}K_{\underline{i}}\right)$. \begin{example}If, for some $k\in\{1,2\}$, $E_k$ is a homomorphism then $K=B\oplus K_{\overline{k}}^\circ\simeq K_{\overline{k}}$. Hence, if both $E_1$ and $E_2$ are homomorphisms then $K=B$. \end{example} \begin{proposition} For $k=1,2$ there is an isomorphism between $H_{k,k}\underset{E_k}{\otimes} B$ and $K$, implemented by a unitary $V_k$. Moreover, when we intertwine the representation $\pi_k\otimes 1$ by $V_k$ we obtain the classical representation of the full amalgamated free product on the space $K$. \end{proposition} \begin{proof} Note that, for $\underline{i}=(i_1,\dots,i_n)\in I$ with $i_1=i_n=k$ (hence $n$ is odd) we have, if $n=1$, $H_{\underline{i}}\underset{E_k}{\otimes} B=A_k\underset{E_k}{\otimes} B\simeq K_k\simeq K_k^\circ\oplus B$, and, if $n\geq 3$, $H_{\underline{i}}\underset{E_k}{\otimes} B=K_k\underset{B}{\otimes}\left(K_{\overline{k}}^\circ\underset{B}{\otimes}\dots\underset{B}{\otimes} K_{\overline{k}}^\circ\right)\underset{B}{\otimes} K_k\simeq K_{\underline{i}}\oplus K_{\underline{i}'}\oplus K_{\underline{i}''}\oplus K_{\underline{i}'''}$, where $\underline{i}'=(i_2,\dots,i_n)$, $\underline{i}''=(i_1,\dots,i_{n-1})$ and $\underline{i}'''=(i_2,\dots,i_{n-1})$. Hence the unitary $V_k\,:\,H_{k,k}\underset{E_k}{\otimes} B\rightarrow K$ exists.
It is easy to check that $V_k$ satisfies $V_k(\pi_k(a)\otimes 1)V_k^*=\rho(a)$ for all $a\in A_k$ and all $k\in\{1,2\}$, where $\rho$ is the (classical) reduced free product representation, which we recall below for convenience. For $l\in\{1,2\}$ define $K(l)=B\oplus\left(\underset{\underline{i}\in I,\,i_1\neq l}{\bigoplus} K_{\underline{i}}\right)$ and note that we have a unital $*$-homomorphism $\rho_{l}\,:\,B\rightarrow\mathcal{L}_{B}(K(l))$ defined by $\rho_{l}=\underset{\underline{i}\in I,\,i_1\neq l}{\bigoplus}\rho_{\underline{i}}$. Let $U_l\in\mathcal{L}_{B}(K_l\underset{\rho_{l}}{\otimes} K(l),K)$ be the unitary operator defined by $$\begin{array}{llcl} U_l\,:\,&K_l\underset{\rho_{l}}{\otimes}K(l) &\longrightarrow& K\\ &\eta_l\underset{\rho_l}{\otimes} B &\overset{\simeq}{\longrightarrow}&B\\ &K_l^\circ\underset{\rho_l}{\otimes} B&\overset{\simeq}{\longrightarrow}&K_l^\circ\\ &\eta_l\underset{\rho_l}{\otimes} K_{\underline{i}}&\overset{\simeq}{\longrightarrow}&K_{\underline{i}}\\ &K_l^\circ\underset{\rho_l}{\otimes} K_{\underline{i}}&\overset{\simeq}{\longrightarrow}&K_{(l,\underline{i})}\\ \end{array}$$ where $(l,\underline{i})=(l,i_1,\dots,i_n)\in I$ if $\underline{i}=(i_1,\dots,i_n)\in I$ with $i_1\neq l$. We define the unital $*$-homomorphisms $\lambda_{l}\,:\,\mathcal{L}_B(K_l)\rightarrow\mathcal{L}_{B}(K)$ by $\lambda_{l}(x)=U_l(x\otimes 1)U_l^*$. By definition we have $\lambda_{1}(\rho_1(b))=\lambda_{2}(\rho_2(b))$ for all $b\in B$. It follows that there exists a unique unital $*$-homomorphism $\rho\,:\,A_f\rightarrow \mathcal{L}_{B}(K)$ such that $\rho(a)=\lambda_k(a)$ for $a\in A_k$, for all $k\in\{1,2\}$. \end{proof} \begin{definition} The \textit{edge-reduced} amalgamated free product is the C*-subalgebra $A_e\subset\mathcal{L}_B(K)$ generated by $\lambda_1(A_1)\cup\lambda_2(A_2)\subset\mathcal{L}_B(K)$. When more precision is needed, we use the notation $A_e=A_1\overset{e}{\underset{B}{*}} A_2$.
\end{definition} \begin{example} If, for some $k\in\{1,2\}$, $E_k$ is a homomorphism then $A_e$ is the C*-algebra $\overline{\rho_{\overline{k}}(A_{\overline{k}})}\subset\mathcal{L}_B(K_{\overline{k}})$. If both $E_1$ and $E_2$ are homomorphisms then $A_e\simeq B$. \end{example} \noindent The preceding example shows that the edge-reduced amalgamated free product may forget everything about the initial C*-algebras $A_1$ and $A_2$ in the extreme degenerate case: it only remembers $B$. This shows that, in general, one should consider instead the vertex-reduced amalgamated free product. Indeed, even in the extreme degenerate case, the vertex-reduced amalgamated free product correctly remembers the C*-algebras $A_1$ and $A_2$, as shown in Corollary \ref{CorDegVertexRed}. \vspace{0.2cm} \noindent In the following proposition we recall the properties of $A_e$. The results below are well known when $E_1$ and $E_2$ are GNS-faithful. The proof is similar to the proof of Proposition \ref{PropkVertexReduced} and we leave it to the reader. \begin{proposition}\label{PropEdgeReduced} The following holds. \begin{enumerate} \item $\rho$ is faithful on $B$. \item If $E_k$ is GNS-faithful then $\rho$ is faithful on $A_k$. \item There exists a unique ucp map $\mathbb{E}\,:\, A_{e}\rightarrow B$ such that $\mathbb{E}\circ\rho(b)=b$ for all $b\in B$ and, $$\mathbb{E}(\rho(a_1\dots a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced}.$$ \noindent Moreover, $\mathbb{E}$ is GNS-faithful.
\item For any unital C*-algebra $C$ with unital $*$-homomorphisms $\nu_k\,:\,A_k\rightarrow C$ such that \begin{itemize} \item $\nu_1(b)=\nu_2(b)$ for all $b\in B$, \item $C$ is generated, as a C*-algebra, by $\nu_1(A_1)\cup\nu_2(A_2)$, \item $\nu_1\vert_B=\nu_2\vert_B$ is faithful and there exists a GNS-faithful ucp map $E\,:\,C\rightarrow B$ such that $E\circ\nu_k(b)=b$ for all $b\in B$, $k=1,2$, and, $$E(\nu_{i_1}(a_1)\dots\nu_{i_n}(a_n))=0\text{ for all }a=a_1\dots a_n\in A_f\text{ reduced},$$ \end{itemize} there exists a unique unital $*$-isomorphism $\nu\,:\,A_{e}\rightarrow C$ such that $\nu\circ\rho(a)=\nu_k(a)$ for all $a\in A_k$, $k\in \{1,2\}$. Moreover, $\nu$ satisfies $E\circ\nu=\mathbb{E}$. \end{enumerate} \end{proposition} \begin{proposition} For all $k\in\{1,2\}$ there exists a unique unital $*$-homomorphism $$\lambda_{v,k}\,:\,A_{v,k}\rightarrow A_e\quad\text{such that}\quad\lambda_{v,k}\circ\pi_k=\rho.$$ Moreover, $\lambda_{v,k}$ is faithful on $\pi_k(A_{\overline{k}})$ and, if $E_k$ is GNS-faithful, $\lambda_{v,k}$ is an isomorphism. \end{proposition} \begin{proof} The formula $\lambda_{v,k}(x)=V_k(x\otimes 1)V_k^*$ defines a unital $*$-homomorphism $\lambda_{v,k}\,:\,A_{v,k}\rightarrow A_e$ satisfying $\lambda_{v,k}\circ\pi_k=\rho$. The uniqueness of $\lambda_{v,k}$ is obvious. Let us check that $\lambda_{v,k}$ is faithful on $\pi_k(A_{\overline{k}})$. Suppose that $x\in A_{\overline{k}}$ and $\lambda_{v,k}(\pi_k(x))=0$. Then, for all $y\in A_{\overline{k}}$, we have $\rho(y^*x^*xy)=\lambda_{v,k}(\pi_k(y^*x^*xy))=0$. Hence, $0=\mathbb{E}\circ\rho(y^*x^*xy)=\mathbb{E}\circ\rho(E_{\overline{k}}(y^*x^*xy))=E_{\overline{k}}(y^*x^*xy)$. It follows that $x\in\text{Ker}(\rho_{\overline{k}})$ hence, $\lambda_{\overline{k},k}(x)=\oplus_{\underline{i}\in I_{\overline{k},k}}\rho_{\overline{k}}(x)\otimes 1=0$, which implies that $\pi_k(x)=u_{k,k}^*\lambda_{\overline{k},k}(x)u_{k,k}=0$.
The last statement follows from the universal property of $A_e$ since the ucp map $E_k\circ\mathbb{E}_k\,:\,A_{v,k}\rightarrow B$ is GNS-faithful whenever $E_k$ is GNS-faithful. \end{proof} \noindent In the next proposition, we study some associativity properties between the edge-reduced and the vertex-reduced amalgamated free products. The result is interesting in itself and it will be used to easily obtain ucp radial multipliers on the vertex-reduced amalgamated free product. \begin{proposition}\label{CorVertex-EdgeReduced} Let $A_1,A_2,A_3$ be unital C*-algebras with a common unital C*-subalgebra $B$ and conditional expectations $E_k\,:\,A_k\rightarrow B$. After identification of $A_1$ with a C*-subalgebra of both $A_1\overset{1}{\underset{B}{*}}A_2$ and $A_1\overset{1}{\underset{B}{*}}A_3$, the canonical GNS-faithful ucp maps $A_1\overset{1}{\underset{B}{*}}A_2\rightarrow A_1$ and $A_1\overset{1}{\underset{B}{*}}A_3\rightarrow A_1$ become conditional expectations and, with respect to these GNS-faithful conditional expectations, we have canonical isomorphisms \begin{itemize} \item $\left(A_1\overset{1}{\underset{B}{*}}A_2\right)\overset{e}{\underset{A_1}{*}}\left( A_1\overset{1}{\underset{B}{*}}A_3\right)\simeq A_1\overset{1}{\underset{B}{*}}\left(A_2\overset{e}{\underset{B}{*}}A_3\right)$. \item $\left(A_1\overset{2}{\underset{B}{*}}A_2\right)\overset{e}{\underset{A_2}{*}}\left( A_3\overset{2}{\underset{B}{*}}A_2\right)\simeq \left(A_1\overset{e}{\underset{B}{*}}A_3\right)\overset{2}{\underset{B}{*}}A_2$. \end{itemize} \end{proposition} \begin{proof} We prove the first point. The proof of the second point is similar. We write $\widetilde{A}=A_1\overset{1}{\underset{B}{*}}\left(A_2\overset{e}{\underset{B}{*}}A_3\right)$.
Let $\rho\,:\,A_2\underset{B}{*}A_3\rightarrow A_2\overset{e}{\underset{B}{*}}A_3$ and $\widetilde{\pi}\,:\,A_1\underset{B}{*}\left( A_2\overset{e}{\underset{B}{*}}A_3\right)\rightarrow \widetilde{A}$ be the canonical surjections and $\widetilde{\mathbb{E}}\,:\,\widetilde{A}\rightarrow A_1$ the canonical GNS-faithful ucp map. Define, for $k=1,2$, $\nu_k\,:\,A_k\rightarrow \widetilde{A}$ by $\nu_1=\widetilde{\pi}\vert_{A_1}$ and $\nu_2=\widetilde{\pi}\circ\rho\vert_{A_2}$. By definition, $\nu_1(b)=\nu_2(b)$ for all $b\in B$ and $\nu_1$ is faithful. Let $C$ be the C*-subalgebra of $\widetilde{A}$ generated by $\nu_1(A_1)\cup\nu_2(A_2)$. We claim that there exists a (unique) unital faithful $*$-homomorphism $\nu\,:\,A_1\underset{B}{\overset{1}{*}}A_2\rightarrow \widetilde{A}$ such that $\nu\circ\pi_1\vert_{A_k}=\nu_k$ for $k=1,2$, where $\pi_1\,:\,A_1\underset{B}{*}A_2\rightarrow A_1\underset{B}{\overset{1}{*}}A_2$ is the canonical surjection. By the universal property of the $1$-vertex-reduced amalgamated free product, it suffices to show the following claim, where $E=\widetilde{\mathbb{E}}\vert_C\,:\,C\rightarrow A_1$. \vspace{0.2cm} \noindent\textbf{Claim.}\textit{ The ucp map $E$ is GNS-faithful and satisfies $E\circ\nu_1=\text{id}_{A_1}$ and, for all $a=a_1\dots a_n\in A_f$ reduced with $a_k\in A_{i_k}^\circ$, $E(\nu_{i_1}(a_1)\dots\nu_{i_n}(a_n))=0$ whenever $n\geq 2$ or $n=1$ and $a=a_1\in A_2^\circ$.} \vspace{0.2cm} \noindent\textit{Proof of the claim.} The fact that $E$ vanishes on the reduced operators (not in $A_1^\circ$) is obvious, since $\widetilde{\mathbb{E}}$ satisfies the same property. The only non-trivial property to check is the fact that $E$ is GNS-faithful: indeed, it is not true, in general, that the restriction of a GNS-faithful ucp map to a subalgebra is again GNS-faithful. So suppose that there exists $x\in C$ such that $E(y^*x^*xy)=0$ for all $y\in C$ and let us show that $x$ must be zero.
Since $\widetilde{\mathbb{E}}\,:\,\widetilde{A}\rightarrow A_1$ is GNS-faithful, it suffices to show that $\widetilde{\mathbb{E}}(y^*x^*xy)=0$ for all $y\in\widetilde{A}$. By hypothesis, we know that it is true for all $y\in C$. Since $\widetilde{A}$ is the closed linear span of $\widetilde{\pi}(A_1)$ and $\widetilde{\pi}(z)$, for $z\in A_1\underset{B}{*}\left( A_2\overset{e}{\underset{B}{*}}A_3\right)$ a reduced operator not in $A_1^\circ$, and since $\widetilde{\pi}(A_1)\cup\widetilde{\pi}\circ\rho(A_2)\subset C$, it suffices to show that $\widetilde{\mathbb{E}}(y^*x^*xy)=0$ for $y=\widetilde{\pi}(z)$ and $z=z_1\dots z_n\in A_1\underset{B}{*}\left( A_2\overset{e}{\underset{B}{*}}A_3\right)$ a reduced operator with letters $z_k$ alternating from $A_1^\circ$, $\rho(A_2^\circ)$ and $\rho(A_3^\circ)$ and containing at least one letter in $\rho(A_3^\circ)$. Since one of the $z_k$ is in $\rho(A_3^\circ)$ and $x\in C$ we have, by the property of $\widetilde{\mathbb{E}}$, $\widetilde{\mathbb{E}}(y^*(x^*x-\widetilde{\mathbb{E}}(x^*x))y)=0$. Hence, $\widetilde{\mathbb{E}}(y^*x^*xy)=\widetilde{\mathbb{E}}(y^*\widetilde{\mathbb{E}}(x^*x)y)=\widetilde{\mathbb{E}}(y^*E(x^*x)y)=0$, since $E(x^*x)=0$. \vspace{0.2cm} \noindent\textit{End of the proof of the proposition.} Define, for $k=1,3$, the unital $*$-homomorphisms $\eta_k\,:\,A_k\rightarrow \widetilde{A}$ by $\eta_1=\widetilde{\pi}\vert_{A_1}=\nu_1$ and $\eta_3=\widetilde{\pi}\circ\rho\vert_{A_3}$. Using the universal property of the $1$-vertex-reduced amalgamated free product one can show, using exactly the same arguments we used to construct the homomorphism $\nu$, that there exists a (necessarily unique) unital faithful $*$-homomorphism $\eta\,:\, A_1\underset{B}{\overset{1}{*}}A_3\rightarrow\widetilde{A}$ such that $\eta\circ\pi_1'\vert_{A_k}=\eta_k$ for $k=1,3$, where $\pi_1'\,:\,A_1\underset{B}{*}A_3\rightarrow A_1\underset{B}{\overset{1}{*}}A_3$ is the canonical surjection.
Note that $\nu(b)=\eta(b)$ for all $b\in B$ and $\widetilde{A}$ is generated, as a C*-algebra, by $\nu(A_1\underset{B}{\overset{1}{*}}A_2)\cup\eta(A_1\underset{B}{\overset{1}{*}}A_3)$. Since the GNS-faithful ucp map $\widetilde{\mathbb{E}}\,:\,\widetilde{A}\rightarrow A_1$ obviously satisfies the condition on the reduced operators we may use the universal property of the edge-reduced amalgamated free product to conclude that there exists a canonical $*$-isomorphism $$\left(A_1\overset{1}{\underset{B}{*}}A_2\right)\overset{e}{\underset{A_1}{*}}\left( A_1\overset{1}{\underset{B}{*}}A_3\right)\rightarrow\widetilde{A}.$$ \end{proof} \noindent Using the previous identifications one can prove the following result about completely positive radial multipliers. For $\underline{i}=(i_1,\dots,i_n)\in I$ and $l\in\{1,2\}$ we define the number $$\underline{i}_l=\vert\{s\in\{1,\dots,n\}\,:\, i_s=l\}\vert.$$ \begin{proposition}\label{PropMultipliers} For all $k,l\in \{1,2\}$ and all $0< r\leq 1$ there exists a unique ucp map $\varphi_r\,:\,A_{v,k}\rightarrow A_{v,k}$ such that $\varphi_r(\pi_k(b))=\pi_k(b)$ for all $b\in B$ and, $$\varphi_r(\pi_k(a_1\dots a_n))=r^{\underline{i}_l}\pi_k(a_1\dots a_n)\text{ for all }a_1\dots a_n\in A_f\text{ reduced with }a_k\in A_{i_k}^\circ\text{ and }\underline{i}=(i_1,\dots,i_n).$$ \end{proposition} \begin{proof} We first prove the proposition for $k=1$. We separate the proof into two cases. \vspace{0.2cm} \noindent\textbf{Case 1: $l=2$.} Since $\pi_1$ is faithful on $A_1$, we may and will view $A_1\subset A_{v,1}$. After this identification, the canonical GNS-faithful ucp map $\mathbb{E}_1\,:\,A_{v,1}\rightarrow A_1$ becomes a conditional expectation. Consider the conditional expectation $\tau\otimes \text{id}\,:\, C([0,1])\otimes B\rightarrow B$, where $\tau$ is the integral with respect to the normalized Lebesgue measure on $[0,1]$.
We will also view $A_1\subset A_1\underset{B}{\overset{1}{*}}(C([0,1])\otimes B)$ so that the canonical GNS-faithful ucp map $\widetilde{\mathbb{E}}_1\,:\,A_1\underset{B}{\overset{1}{*}}(C([0,1])\otimes B)\rightarrow A_1$ is a conditional expectation. Define $\widetilde{A}=A_{v,1}\overset{e}{\underset{A_1}{*}}\left(A_1\overset{1}{\underset{B}{*}}(C([0,1])\otimes B)\right)$ with respect to the conditional expectations $\mathbb{E}_1$ and $\widetilde{\mathbb{E}}_1$. Since $\mathbb{E}_1$ and $\widetilde{\mathbb{E}}_1$ are GNS-faithful, the edge-reduced and the $k$-vertex-reduced amalgamated free products coincide for $k=1,2$. Hence, we may and will view $A_{v,1}\subset A_1\overset{1}{\underset{B}{*}}(C([0,1])\otimes B)\subset \widetilde{A}$ and we have a canonical GNS-faithful conditional expectation $\widetilde{\mathbb{E}}\,:\, \widetilde{A}\rightarrow A_{v,1}$. Also, by the first assertion of Proposition \ref{CorVertex-EdgeReduced} we have a canonical identification $\widetilde{A}= A_1\overset{1}{\underset{B}{*}}\widetilde{A}_2$, where $\widetilde{A}_2=A_2\overset{e}{\underset{B}{*}}(C([0,1])\otimes B)$. Let $\widetilde{\rho}_2\,:\,A_2\underset{B}{*} \left(C([0,1])\otimes B\right)\rightarrow \widetilde{A}_2$ be the canonical surjection from the full to the edge-reduced amalgamated free product and $\widetilde{\pi}\,:\,A_1\underset{B}{*}\widetilde{A}_2\rightarrow A_1\overset{1}{\underset{B}{*}}\widetilde{A}_2=\widetilde{A}$ be the canonical surjection from the full to the vertex-reduced amalgamated free product. Fix $t\in\mathbb{R}$ and define the unitary $v_t\in C([0,1])$ by $v_t(x)=e^{2i\pi tx}$. Let $\rho_t=\vert\tau(v_t)\vert^2$ and $u_t=\widetilde{\pi}\circ\widetilde{\rho}_2(v_t\otimes 1_B)\in \widetilde{A}$. Define the unital $*$-homomorphisms $\nu_1=\widetilde{\pi}\vert_{A_1}\,:\, A_1\rightarrow\widetilde{A}$ and $\nu_2\,:\,\widetilde{A}_2\rightarrow\widetilde{A}$ by $\nu_2(x)=u_t\widetilde{\pi}(x)u_t^*$. Note that $\nu_1$ is faithful.
To simplify the notation we put $\widetilde{A}_1:=A_1$. \vspace{0.2cm} \noindent\textbf{Claim.}\textit{ For all $x=x_1\dots x_n\in A_1\underset{B}{*}\widetilde{A}_2$ reduced with $x_k\in \widetilde{A}_{i_k}^\circ$ and $\underline{i}=(i_1,\dots,i_n)$ one has: $$\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n))=\left\{\begin{array}{lcl} \rho_t^{\underline{i}_2}\widetilde{\pi}(x_1\dots x_n)&\text{if}&\widetilde{\pi}(x)\in A_{v,1},\\ 0&\text{if}&\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=0.\end{array}\right.$$} \noindent\textit{Proof of the claim.} Note that $\widetilde{\pi}(x)\in A_{v,1}$ if and only if the letters $x_k$ of $x$ are alternating from $A_1^\circ$ and $\widetilde{\rho}_2(A_2^\circ)$, and $\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=0$ if and only if one of the letters of $x$ comes from $\widetilde{\rho}_2((C([0,1])\otimes B)^\circ)$. We prove the formula by induction on $n$. If $n=1$ we have either $x\in A_1^\circ$, in which case $\widetilde{\mathbb{E}}(\nu_1(x))=\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=\widetilde{\pi}(x)$, or $x\in \widetilde{A}_2^\circ$ and \begin{eqnarray*} \widetilde{\mathbb{E}}(\nu_2(x))&=&\widetilde{\mathbb{E}}(u_t\widetilde{\pi}(x)u_t^*)\\ &=&\widetilde{\mathbb{E}}((u_t-\tau(v_t))\widetilde{\pi}(x)(u_t^*-\overline{\tau(v_t)}))+\tau(v_t)\widetilde{\mathbb{E}}(\widetilde{\pi}(x)(u_t^*-\overline{\tau(v_t)}))\\ &&+\overline{\tau(v_t)}\widetilde{\mathbb{E}}((u_t-\tau(v_t))\widetilde{\pi}(x))+\vert\tau(v_t)\vert^2\widetilde{\mathbb{E}}(\widetilde{\pi}(x))\\ &=&\vert\tau(v_t)\vert^2\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=\rho_t\widetilde{\mathbb{E}}(\widetilde{\pi}(x)). \end{eqnarray*} \noindent Hence, $\widetilde{\mathbb{E}}(\nu_2(x))=\left\{\begin{array}{lcl}\rho_t\widetilde{\pi}(x)&\text{if}&\widetilde{\pi}(x)\in A_{v,1},\\ 0&\text{if}&\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=0.\end{array}\right.$ \noindent This proves the formula for $n=1$. Suppose that the formula holds for a given $n\geq 1$.
Let $x=x_1\dots x_{n+1}$ be reduced with $x_k\in \widetilde{A}_{i_k}^\circ$ and define $x'=x_1\dots x_n$ and $z=\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n)$. Let $\underline{i}=(i_1,\dots ,i_{n+1})$ and $\underline{i}'=(i_1,\dots,i_{n})$. \vspace{0.2cm} \noindent Suppose that $x_{n+1}\in A_1^\circ$. Then $\underline{i}_2=\underline{i}'_2$ and, $$\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n)\nu_{i_{n+1}}(x_{n+1}))=\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n)\widetilde{\pi}(x_{n+1}))=\widetilde{\mathbb{E}}(z)\widetilde{\pi}(x_{n+1}).$$ Hence, if $\widetilde{\pi}(x)\in A_{v,1}$ then also $\widetilde{\pi}(x')\in A_{v,1}$ and we have, by the induction hypothesis, $$\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n)\nu_{i_{n+1}}(x_{n+1}))=\rho_t^{\underline{i}'_2}\widetilde{\pi}(x')\widetilde{\pi}(x_{n+1})=\rho_t^{\underline{i}_2}\widetilde{\pi}(x).$$ If $\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=0$ then also $\widetilde{\mathbb{E}}(\widetilde{\pi}(x'))=0$ and we have, by the induction hypothesis, $\widetilde{\mathbb{E}}(z)=0$, so $\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n)\nu_{i_{n+1}}(x_{n+1}))=0$. \vspace{0.2cm} \noindent Suppose now that $x_{n+1}\in \widetilde{A}_2^\circ$. Then $x_n\in A_1^\circ$ and we have, \begin{eqnarray*} \widetilde{\mathbb{E}}(z\nu_{i_{n+1}}(x_{n+1}))&=&\widetilde{\mathbb{E}}(zu_t\widetilde{\pi}(x_{n+1})u_t^*)\\ &=&\widetilde{\mathbb{E}}(z(u_t-\tau(v_t))\widetilde{\pi}(x_{n+1})(u_t^*-\overline{\tau(v_t)}))+\tau(v_t)\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1})(u_t^*-\overline{\tau(v_t)}))\\ &&+\overline{\tau(v_t)}\widetilde{\mathbb{E}}(z(u_t-\tau(v_t))\widetilde{\pi}(x_{n+1}))+\vert\tau(v_t)\vert^2\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))\\ &=&\vert\tau(v_t)\vert^2\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))=\rho_t\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1})).
\end{eqnarray*} \noindent Hence, if $\widetilde{\pi}(x)\in A_{v,1}$ then also $\widetilde{\pi}(x')\in A_{v,1}$ and $x_{n+1}\in \widetilde{\rho}_2(A_2^\circ)$, so $\widetilde{\pi}(x_{n+1})\in A_{v,1}$ and $\underline{i}_2=\underline{i}_2'+1$. By the preceding computation and the induction hypothesis we find: $$\widetilde{\mathbb{E}}(z\nu_{i_{n+1}}(x_{n+1}))=\rho_t\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))=\rho_t\widetilde{\mathbb{E}}(z)\widetilde{\pi}(x_{n+1})=\rho_t\rho_t^{\underline{i}_2'}\widetilde{\pi}(x')\widetilde{\pi}(x_{n+1})=\rho_t^{\underline{i}_2}\widetilde{\pi}(x).$$ \noindent Finally, suppose that $\widetilde{\mathbb{E}}(\widetilde{\pi}(x))=0$; we need to prove that $\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))=0$. Note that, since $x_n\in A_1^\circ$, we have $z=\nu_{i_1}(x_1)\dots\nu_{i_{n-1}}(x_{n-1})x_n$. If $\widetilde{\mathbb{E}}(\widetilde{\pi}(x'))=0$ then, by the induction hypothesis, $\widetilde{\mathbb{E}}(z)=0$ and $z$ may be written as a sum of reduced operators, each containing at least one letter from $\widetilde{\rho}_2(( C([0,1])\otimes B)^\circ)$ and ending with a letter from $A_1^\circ$. It follows that $z\widetilde{\pi}(x_{n+1})$ may be written as a sum of reduced operators, each containing at least one letter from $\widetilde{\rho}_2(( C([0,1])\otimes B)^\circ)$. Hence, $\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))=0$. If instead $\widetilde{\pi}(x')\in A_{v,1}$ then $x_1,\dots, x_n\in A_1^\circ\cup \widetilde{\rho}_2(A_2^\circ)$ but $\widetilde{\mathbb{E}}(\widetilde{\pi}(x_{n+1}))=0$. It follows that $z=\nu_{i_1}(x_1)\dots\nu_{i_{n-1}}(x_{n-1})x_n$ may be written as a sum of reduced operators ending with a letter from $A_1^\circ$. Hence, $z\widetilde{\pi}(x_{n+1})$ may be written as a sum of reduced operators containing at least one letter from $\widetilde{\rho}_2(( C([0,1])\otimes B)^\circ)$. Hence, $\widetilde{\mathbb{E}}(z\widetilde{\pi}(x_{n+1}))=0$.
\vspace{0.2cm} \noindent\textit{End of the proof of the proposition.} By the claim, $\mathbb{E}_1\circ\widetilde{\mathbb{E}}(\nu_{i_1}(x_1)\dots\nu_{i_n}(x_n))=0$ for all reduced operators $x=x_1\dots x_n\in A_1\underset{B}{*}\widetilde{A}_2$ which are not in $A_1$ and we obviously have $\mathbb{E}_1\circ\widetilde{\mathbb{E}}\circ\nu_1=\text{id}_{A_1}$. Viewing $\widetilde{A}= A_1\overset{1}{\underset{B}{*}}\widetilde{A}_2$ and using the universal property of the vertex-reduced amalgamated free product, we obtain, for all $t\in\mathbb{R}$, a unique unital $*$-isomorphism $\alpha_t\,:\,\widetilde{A}\rightarrow\widetilde{A}$ such that $\alpha_t(\widetilde{\pi}(a))=\widetilde{\pi}(a)$ if $a\in A_1$ and $\alpha_t(\widetilde{\pi}(x))=u_t\widetilde{\pi}(x)u_t^*$ if $x\in A_2\overset{e}{\underset{B}{*}}(C([0,1])\otimes B)$. In particular, it follows from the claim that $\widetilde{\mathbb{E}}\circ\alpha_t\vert_{A_{v,1}}\,:\,A_{v,1}\rightarrow A_{v,1}$, which is a ucp map, satisfies the properties of the map $\varphi_r$ described in the statement of the proposition, with $r=\rho_t=\left\vert\frac{\sin(\pi t)}{\pi t}\right\vert^2$. This concludes the proof. \vspace{0.2cm} \noindent\textbf{Case 2: $l=1$.} The proof is similar. This time, the automorphism $\alpha_t\,:\,\widetilde{A}\rightarrow\widetilde{A}$ is defined, by the universal property, starting with the maps $\nu_1\,:\, A_1\rightarrow\widetilde{A}$ and $\nu_2\,:\,\widetilde{A}_2\rightarrow\widetilde{A}$ defined by $\nu_1(a)=u_t\widetilde{\pi}(a)u_t^*$ and $\nu_2(x)=\widetilde{\pi}(x)$. The remainder of the proof is the same. \vspace{0.2cm} \noindent The proof for $k=2$ is the same, using the second assertion of Proposition \ref{CorVertex-EdgeReduced}.
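For completeness, the explicit value of $\rho_t$ used in Case 1 follows from an elementary computation (a sketch; only the unitary $v_t$ and the normalized Lebesgue integral $\tau$ from the proof are involved):

```latex
% Value of \tau(v_t) for t \neq 0, with \tau the normalized Lebesgue integral on [0,1]:
\tau(v_t)=\int_0^1 e^{2i\pi tx}\,dx
 =\frac{e^{2i\pi t}-1}{2i\pi t}
 =e^{i\pi t}\,\frac{\sin(\pi t)}{\pi t},
\qquad\text{hence}\qquad
\rho_t=\vert\tau(v_t)\vert^2=\left\vert\frac{\sin(\pi t)}{\pi t}\right\vert^2 .
```

Since $t\mapsto\rho_t$ is continuous on $[0,1]$ with $\rho_0=1$ and $\rho_1=0$, every value $r\in(0,1]$ is attained, as required in the statement of Proposition \ref{PropMultipliers}.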
\end{proof} \section{$K$-equivalence between the full and reduced amalgamated free products} \noindent Let $A_1$, $A_2$ be two unital C*-algebras with a common C*-subalgebra $B\subset A_k$, $k=1,2$, and denote by $A_f$ the full amalgamated free product. \vspace{0.2cm} \noindent Let $A:=A_1\overset{v}{\underset{B}{*}} A_2$ be the vertex-reduced amalgamated free product. For $k=1,2$, let $E_{A_k}$ (resp. $E_B$) be the canonical conditional expectation from $A$ to $A_k$ (resp. from $A$ to $B$). We will denote by the same symbol $\mathcal{A}$ the set of reduced operators viewed in $A$ or in $A_f$. Recall that the linear span of $\mathcal{A}$ and $B$ is a dense unital $*$-subalgebra of $A$ (resp. $A_f$). \vspace{0.2cm} \noindent We denote by $\lambda\,:\,A_f\rightarrow A$ the canonical surjective unital $*$-homomorphism which is the identity on $\mathcal{A}$. In this section we prove the following result. \begin{theorem}\label{TheoremKequivalence} $[\lambda]\in {\rm KK}(A_f,A)$ is invertible. \end{theorem} \noindent The following lemma is well known (see \cite[Lemma 3.1]{Ve04}). We include a proof for the convenience of the reader. \begin{lemma}\label{LemmaCE} Let $n\geq 1$, $a_j\in A_{l_j}^{\circ}$ for $1\leq j\leq n$, and let $a=a_1\dots a_n\in A$ be a reduced word. For $k\in\{1,2\}$ one has $$E_{A_k}(a^*a)=E_B(a^*a)\quad\text{whenever}\quad l_n\neq k.$$ \end{lemma} \begin{proof} We prove it for $k=1$ by induction on $n$. The proof for $k=2$ is the same. \vspace{0.2cm} \noindent It is obvious for $n=1$. Suppose that $n\geq 2$, and define $b=E_B(a_1^*a_1)^{\frac{1}{2}}$ and $x=(ba_2)a_3\dots a_n$. One has: $$E_{A_1}(a^*a)=E_{A_1}(a_n^*\dots a_1^*a_1\dots a_n)=E_{A_1}(a_n^*\dots a_2^*E_B(a_1^*a_1)a_2\dots a_n)=E_{A_1}(x^*x)=E_B(x^*x),$$ where we applied the induction hypothesis to get the last equality. Since the same computation gives $E_B(a^*a)=E_B(x^*x)$, this concludes the proof. \end{proof} \noindent We denote by $(H_k,\pi_k,\xi_k)$ (resp.
$(K,\rho,\eta)$) the GNS construction of $E_{A_k}$ (resp. $E_B$). We may and will assume that $A\subset\mathcal{L}_{A_k}(H_k)$ and $\pi_k=\text{id}$. \vspace{0.2cm} \noindent Observe that the Hilbert $A_k$-module $\xi_k.A_k\subset H_k$ is orthogonally complemented, i.e., $H_k=\xi_k.A_k\oplus H_k^{\circ}$, as Hilbert $A_k$-modules, where $H_k^{\circ}$ is the closure of $\{a\xi_k\,:\,a\in A,\,\,E_{A_k}(a)=0\}$. \vspace{0.2cm} \noindent We now define a partial isometry $F_k\in\mathcal{L}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ in the following way. First we put $F_k(\xi_k.a)=0$ for all $a\in A_k$. Then, it follows from Lemma \ref{LemmaCE} that we can define an isometry $F_k\,:\, H_k^{\circ}\rightarrow K\underset{B}{\otimes} A_k$ by the following formula: $$F_k(a_1\dots a_n\xi_k)=\left\{\begin{array}{lcl} \rho(a_1\dots a_n)\eta\underset{B}{\otimes} 1&\text{if}&l_n\neq k\\ \rho(a_1\dots a_{n-1})\eta\underset{B}{\otimes} a_n&\text{if}& l_n=k\end{array}\right.\quad\text{for all}\quad a_1\dots a_n\in A\,\,\text{a reduced operator with}\,\,E_{A_k}(a_1\dots a_n)=0.$$ \noindent Hence, $F_k\in\mathcal{L}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ is a well-defined partial isometry such that $1-F_k^*F_k$ is the orthogonal projection onto $\xi_k.A_k$ and $1-F_kF_k^*$ is the orthogonal projection onto $$(\eta\otimes 1).A_k\oplus\overline{\text{Span}}\{\rho(a_1\dots a_n)\eta\otimes 1\,:\,a=a_1\dots a_n\in A\,\,\text{reduced with}\,\,l_n= k\}.A_k.$$ \noindent In the sequel we will denote by $q_0$ the orthogonal projection of $K$ onto $\eta.B$ and, for $l=1,2$, by $q_l$ the projection on $K$ such that $F_lF_l^*=q_l\underset{B}{\otimes} 1$. It is clear that $1=q_1+q_2+q_0$ and that all these projections commute. Define also $\overline{F}_l=F_l+\theta_{\eta\otimes_B 1, \xi_l}$. It is again clear that $\overline F_l$ is an isometry and $\overline F_l \overline F_l^*=(q_l+q_0)\underset{B}{\otimes}1=(1-q_k)\underset{B}{\otimes}1$ for $k\neq l$. \begin{lemma}\label{LemmaCompactCommutation} For $k=1,2$ the following holds.
\begin{enumerate} \item $\rho(a)F_k=F_ka\in\mathcal{L}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ for all $a\in A_k$. \item ${\rm Im}(\rho(a)F_k-F_ka)\subset(\rho(a)\eta\underset{B}{\otimes}1).A_k\oplus(\eta\underset{B}{\otimes}1).A_k$ for all $a\in A_l^{\circ}$ with $l\neq k$. \item $\rho(x)F_k-F_kx\in\mathcal{K}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ for all $x\in A$. \item $\rho(a)\overline F_k=\overline F_k a$ for all $a\in A_l$ with $l\neq k$, and $\rho(x)\overline F_k-\overline F_kx\in\mathcal{K}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ for all $x\in A$. \end{enumerate} \begin{proof} We prove the lemma for $k=1$. The proof for $k=2$ is the same. \vspace{0.2cm} \noindent $(1)$. When $a\in B$ the commutation is obvious, hence we may and will assume that $a\in A_1^{\circ}$. One has $F_1a\xi_1=0=\rho(a)F_1\xi_1$. Let now $n\geq 1$ and $x=a_1\dots a_n\in A$, $a_k\in A_{l_k}^{\circ}$, be a reduced operator with $E_{A_1}(x)=0$. It suffices to show that $F_1ax\xi_1=\rho(a) F_1x\xi_1$. If $n=1$ we must have $x\in A_2^{\circ}$ and $F_1ax\xi_1=\rho(ax)\eta\otimes 1 =\rho(a)F_1x\xi_1$. Suppose that $n\geq 2$. If $l_1=2$ then $ax$ is reduced and ends with a letter from $A_{l_n}^{\circ}$. It follows that $F_1ax\xi_1=\rho(a)F_1x\xi_1$. If $l_1=1$ then we can write $ax=(aa_1)^{\circ}a_2\dots a_n+ E_B(aa_1)a_2\dots a_n$. Since $a_2\dots a_n$ is reduced and ends with a letter from $A_{l_n}^{\circ}$ we find again that $F_1ax\xi_1=\rho(a)F_1x\xi_1$. \vspace{0.2cm} \noindent $(2)$. Let $a\in A_2^{\circ}$ and put $X_a=(\rho(a)\eta\underset{B}{\otimes}1).A_1\oplus(\eta\underset{B}{\otimes}1).A_1$. We have $F_1a\xi_1=\rho(a)\eta\otimes 1$ and $\rho(a)F_1\xi_1=0$; hence $(\rho(a)F_1-F_1a)\xi_1=-\rho(a)\eta\otimes 1\in X_a$. Let now $n\geq 1$ and $x=a_1\dots a_n\in A$, $a_k\in A_{l_k}^{\circ}$, be a reduced operator with $E_{A_1}(x)=0$. If $n=1$ we must have $x\in A_2^{\circ}$.
It follows that $F_1ax\xi_1=F_1(ax)^{\circ}\xi_1+F_1E_B(ax)\xi_1=\rho((ax)^{\circ})\eta\otimes 1$ and $\rho(a)F_1x\xi_1=\rho(ax)\eta\otimes 1$. Hence, $(\rho(a)F_1-F_1a)x\xi_1=E_B(ax)\eta\otimes 1=(\eta\otimes 1).E_B(ax)\in X_a$. If $n\geq 2$, arguing as in the proof of $(1)$, we see that $F_1ax\xi_1=\rho(a)F_1x\xi_1$. Hence, ${\rm Im}(\rho(a)F_k-F_ka)\subset X_a$. \vspace{0.2cm} \noindent $(3)$. This is clear since $A$ is generated, as a C*-algebra, by $A_1$ and $A_2^\circ$: by assertion $(1)$, $\rho(a)F_k-F_ka=0$ if $a\in A_k$, and the computation of $(2)$ shows that, for $a\in A_2^\circ$, $\rho(a)F_k-F_ka= \theta(a)$, where $\theta(a)$ is the ``rank one'' operator that sends $\zeta \in H_k$ to $-\rho(a)\eta\otimes 1\langle\xi_1,\zeta\rangle_{H_k}$. Hence $\rho(a)F_k-F_ka\in\mathcal{K}_{A_k}(H_k,K\underset{B}{\otimes} A_k)$ for all $a\in A_1\cup A_2^\circ$ and therefore for all $a\in A$. \vspace{0.2cm} \noindent $(4)$. The second part is obvious in view of $(3)$, as $\overline F_1$ is a compact perturbation of $F_1$, so we concentrate on the exact commutation. Let $a\in A_2^{\circ}$. Clearly $\overline F_1a\xi_1=F_1a\xi_1=\rho(a)\eta\otimes 1$ and $\rho(a)\overline F_1\xi_1=\rho(a)\eta\otimes 1$. Let now $n\geq 1$ and $x=a_1\dots a_n\in A$, $a_k\in A_{l_k}^{\circ}$, be a reduced operator with $E_{A_1}(x)=0$. If $n=1$ we must have $x\in A_2^{\circ}$. It follows that $\overline F_1ax\xi_1=F_1(ax)^{\circ}\xi_1+\theta_{\eta\otimes 1, \xi_1}E_B(ax)\xi_1=\rho((ax)^{\circ})\eta\otimes 1+E_B(ax)\eta\otimes 1 $ and $\rho(a)\overline F_1x\xi_1=\rho(a)F_1x\xi_1=\rho(ax)\eta\otimes 1$. If $n\geq 2$, arguing as in the proof of $(1)$, we see that $\overline F_1ax\xi_1=F_1ax\xi_1=\rho(a)F_1x\xi_1=\rho(a)\overline F_1x\xi_1$.
\end{proof} \noindent We define the following Hilbert $A_f$-modules: $$H_m=H_1\underset{A_1}{\otimes}A_f\oplus H_2\underset{A_2}{\otimes}A_f\quad\text{and}\quad K_m=K\underset{B}{\otimes}A_f=\left(K\underset{B}{\otimes}A_k\right)\underset{A_k}{\otimes} A_f,$$ with the canonical representations $\pi\,:\,A\rightarrow\mathcal{L}_{A_f}(H_m)$, $\pi(x)=x\underset{A_1}{\otimes}1_{A_f}\oplus x\underset{A_2}{\otimes}1_{A_f}$ and $\bar\rho\,:\,A\rightarrow\mathcal{L}_{A_f}(K_m)$, $\bar\rho(x)=\rho(x)\underset{B}{\otimes}1_{A_f}$. We consider, for $k=1,2$, the partial isometry $$F_k\underset{A_k}{\otimes}1_{A_f}\in\mathcal{L}_{A_f}(H_k\underset{A_k}{\otimes}A_f,(K\underset{B}{\otimes}A_k)\underset{A_k}{\otimes} A_f).$$ Observe that $F_1\underset{A_1}{\otimes}1_{A_f}$ and $F_2\underset{A_2}{\otimes}1_{A_f}$ have orthogonal images. Indeed, the image of $F_k\underset{A_k}{\otimes}1_{A_f}$ is the closed linear span of $\{\rho(a_1\dots a_n)\eta\underset{B}{\otimes} y\,:\,y\in A_f\,\,\text{and}\,\,a_1\dots a_n\in A\,\,\text{reduced with}\,\,a_n\notin A_{k}^{\circ}\}$. Hence the operator $F\in\mathcal{L}_{A_f}(H_m,K_m)$ defined by $F=F_1\underset{A_1}{\otimes}1_{A_f}\oplus F_2\underset{A_2}{\otimes}1_{A_f}$ is a partial isometry such that $1-FF^*$ is the orthogonal projection onto $(\eta\underset{B}{\otimes} 1_{A_f}).A_f$ and $1-F^*F$ is the orthogonal projection onto $(\xi_1\underset{A_1}{\otimes}1_{A_f}).A_f\oplus(\xi_2\underset{A_2}{\otimes}1_{A_f}).A_f$. In particular $1-F^*F\in\mathcal{K}_{A_f}(H_m)$ and $1-FF^*\in\mathcal{K}_{A_f}(K_m)$. Moreover, it follows from lemma \ref{LemmaCompactCommutation} that $F\pi(x)-\bar\rho(x)F\in\mathcal{K}_{A_f}(H_m,K_m)$ for all $x\in A$. Hence, we get an element $\alpha=[(H_m\oplus K_m,\pi\oplus\bar\rho,F)]\in {\rm KK}(A,A_f)$.
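\noindent For later use, it may help to record in one display the three Kasparov-triple conditions just verified for $\alpha$; this is only a restatement of the facts above (the rank-one identifications use that $\eta\underset{B}{\otimes}1_{A_f}$ and $\xi_k\underset{A_k}{\otimes}1_{A_f}$ are unit vectors, since $\langle\eta,\eta\rangle=\langle\xi_k,\xi_k\rangle=1$):

```latex
% The three conditions making (H_m \oplus K_m, \pi \oplus \bar\rho, F)
% a Kasparov (A, A_f)-bimodule:
\begin{align*}
1-FF^{*} &= \theta_{\eta\otimes 1_{A_f},\,\eta\otimes 1_{A_f}}
            \in \mathcal{K}_{A_f}(K_m),\\
1-F^{*}F &= \theta_{\xi_1\otimes 1_{A_f},\,\xi_1\otimes 1_{A_f}}
            +\theta_{\xi_2\otimes 1_{A_f},\,\xi_2\otimes 1_{A_f}}
            \in \mathcal{K}_{A_f}(H_m),\\
F\pi(x)-\bar\rho(x)F &\in \mathcal{K}_{A_f}(H_m,K_m)
            \qquad\text{for all } x\in A.
\end{align*}
```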
\vspace{0.2cm} \noindent To prove theorem \ref{TheoremKequivalence} it suffices to prove that $\alpha\underset{A_f}{\otimes}[\lambda]=[\text{id}_{A}]$ in ${\rm KK}(A,A)$ and $[\lambda]\underset{A}{\otimes}\alpha=[\text{id}_{A_f}]$ in ${\rm KK}(A_f,A_f)$. We prove the easy part in the next proposition. \begin{proposition}\label{sub-K-equivalence} One has $[\lambda]\underset{A}{\otimes}\alpha=[\text{id}_{A_f}]$ in ${\rm KK}(A_f,A_f)$. \end{proposition} \begin{proof} Observe that $[\lambda]\underset{A}{\otimes}\alpha=[(H_m\oplus K_m,\pi_m\oplus\rho_m,F)]$ where $\pi_m=\pi\circ\lambda\,:\,A_f\rightarrow\mathcal{L}_{A_f}(H_m)$ and $\rho_m=\bar\rho\circ\lambda\,:\,A_f\rightarrow\mathcal{L}_{A_f}(K_m)$. Hence, $[\lambda]\underset{A}{\otimes}\alpha-[\text{id}_{A_f}]$ is represented by the Kasparov triple $(H_m\oplus\widetilde{K}_m,\pi_m\oplus\widetilde{\rho}_m,\widetilde{F})$, where $\widetilde{K}_m=K_m\oplus A_f$ and $\widetilde{\rho}_m(x)=\rho_m(x)\oplus x$, where we view $A_f=\mathcal{L}_{A_f}(A_f)$ by left multiplication. Finally, $\widetilde{F}\in\mathcal{L}_{A_f}(H_m,\widetilde{K}_m)$ is the unitary defined by $$\widetilde{F}(\xi_1\underset{A_1}{\otimes}1_{A_f})=\eta\underset{B}{\otimes}1_{A_f},\quad\widetilde{F}(\xi_2\underset{A_2}{\otimes}1_{A_f})=1_{A_f}\quad\text{and,}$$ $$\widetilde{F}(\xi)=F(\xi)\,\,\text{for all}\,\,\xi\in H_m\ominus\left((\xi_1\underset{A_1}{\otimes}1_{A_f}).A_f\oplus(\xi_2\underset{A_2}{\otimes}1_{A_f}).A_f\right).$$ We collect some computations in the following claim. \vspace{0.2cm} \noindent\textbf{Claim.}\textit{ Let $v\in\mathcal{L}_{A_f}(H_m)$ be the self-adjoint unitary defined by the identity on $H_m\ominus((\xi_1\underset{A_1}{\otimes} 1_{A_f}).A_f\oplus (\xi_2\underset{A_2}{\otimes} 1_{A_f}).A_f)$ and $v(\xi_1\underset{A_1}{\otimes} 1_{A_f})=\xi_2\underset{A_2}{\otimes} 1_{A_f}$, $v(\xi_2\underset{A_2}{\otimes} 1_{A_f})=\xi_1\underset{A_1}{\otimes} 1_{A_f}$. 
One has: \begin{enumerate} \item $\widetilde{F}^*\widetilde{\rho}_m(b)\widetilde{F}=\pi_m(b)$ and $v^*\pi_m(b)v=\pi_m(b)$ for all $b\in B$. \item $\widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}=v^*\pi_m(a)v$ for all $a\in A_1$. \item $\widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}=\pi_m(a)$ for all $a\in A_2$. \end{enumerate}} \noindent\textit{Proof of the claim.} The proof of $(1)$ is obvious and we leave it to the reader. \vspace{0.2cm} \noindent$(2)$. By $(1)$, it suffices to prove $(2)$ for $a\in A_1^{\circ}$. Let $a\in A_1^{\circ}$. On the one hand: $$\widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}\xi_1\underset{A_1}{\otimes}1_{A_f}=\widetilde{F}^*(\rho(a)\eta\underset{B}{\otimes}1_{A_f})=a\xi_2\underset{A_2}{\otimes}1_{A_f}\quad\text{and}\quad \widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}\xi_2\underset{A_2}{\otimes}1_{A_f}=\widetilde{F}^*(a)=\xi_2\underset{A_2}{\otimes}a.$$ \noindent On the other hand: $$v^*\pi_m(a)v\xi_1\underset{A_1}{\otimes}1_{A_f}=v^*(a\xi_2\underset{A_2}{\otimes} 1_{A_f})=a\xi_2\underset{A_2}{\otimes} 1_{A_f}\quad\text{and}\quad v^*\pi_m(a)v\xi_2\underset{A_2}{\otimes}1_{A_f}=v^*(a\xi_1\underset{A_1}{\otimes} 1_{A_f})=\xi_2\underset{A_2}{\otimes} a.$$ \noindent Let now $x=a_1\dots a_n\in A$ be a reduced operator with $a_k\in A_{l_k}^{\circ}$. We prove by induction on $n$ that $\widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}x\xi_k\underset{A_k}{\otimes} 1_{A_f}=v^*\pi_m(a)vx\xi_k\underset{A_k}{\otimes} 1_{A_f}$ for all $k\in\{1,2\}$. Suppose that $n=1$, so $x\in A_1^{\circ}\cup A_2^{\circ}$, and let $k\in\{1,2\}$ be such that $x\notin A_k^{\circ}$ (the case $x\in A_k^{\circ}$ has been done before).
We have: $$\widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}x\xi_k\underset{A_k}{\otimes} 1_{A_f}=\widetilde{F}^*(\rho(ax)\eta\underset{B}{\otimes}1_{A_f})=\left\{\begin{array}{lcl} (ax)^{\circ}\xi_2\underset{A_2}{\otimes} 1_{A_f}+\xi_1\underset{A_1}{\otimes}E_B(ax)&\text{if}&x\in A_{1}^{\circ},\\ ax\xi_1\underset{A_1}{\otimes} 1_{A_f}&\text{if}&x\in A_{2}^{\circ}.\end{array}\right.$$ On the other hand we have: $$v^*\pi_m(a)vx\xi_k\underset{A_k}{\otimes} 1_{A_f}=v^*(ax\xi_k\underset{A_k}{\otimes}1_{A_f})=\left\{\begin{array}{lcl} (ax)^{\circ}\xi_2\underset{A_2}{\otimes} 1_{A_f}+\xi_1\underset{A_1}{\otimes}E_B(ax)&\text{if}&x\in A_{1}^{\circ}\,\,(k=2),\\ ax\xi_1\underset{A_1}{\otimes} 1_{A_f}&\text{if}&x\in A_{2}^{\circ}\,\,(k=1).\end{array}\right.$$ \vspace{0.2cm} \noindent Finally, suppose that $n\geq 2$ and the formula holds for $n-1$. Write $ax=y+z$, where, if $l_1=1$, $y=(aa_1)^{\circ}a_2\dots a_n$ and $z=E_B(aa_1)a_2\dots a_n$ and, if $l_1=2$, $y=ax$ and $z=0$. Observe that, in both cases, $y$ is a reduced operator ending with a letter from $A_{l_n}^\circ$ and $z$ is either $0$ or a reduced operator ending with a letter from $A_{l_n}^\circ$. By the induction hypothesis, we may and will assume that $k\neq l_n$. We have: \begin{eqnarray*} \widetilde{F}^*\widetilde{\rho}_m(a)\widetilde{F}x\xi_k\underset{A_k}{\otimes} 1_{A_f}&=&\widetilde{F}^*(\rho(ax)\eta\underset{B}{\otimes} 1_{A_f})=\widetilde{F}^*(\rho(y)\eta\underset{B}{\otimes} 1_{A_f})+\widetilde{F}^*(\rho(z)\eta\underset{B}{\otimes} 1_{A_f})\\ &=&y\xi_k\underset{A_k}{\otimes} 1_{A_f}+z\xi_k\underset{A_k}{\otimes} 1_{A_f}=ax\xi_k\underset{A_k}{\otimes} 1_{A_f}. \end{eqnarray*} Moreover, \begin{eqnarray*} v^*\pi_m(a)v x\xi_k\underset{A_k}{\otimes} 1_{A_f}&=&v^*(ax\xi_k\underset{A_k}{\otimes} 1_{A_f})=v^*(y\xi_k\underset{A_k}{\otimes} 1_{A_f})+v^*(z\xi_k\underset{A_k}{\otimes} 1_{A_f})\\ &=&y\xi_k\underset{A_k}{\otimes} 1_{A_f}+z\xi_k\underset{A_k}{\otimes} 1_{A_f}=ax\xi_k\underset{A_k}{\otimes} 1_{A_f}.
\end{eqnarray*} \noindent The proof of $(3)$ is similar.\hfill{$\Box$} \vspace{0.2cm} \noindent\textit{End of the proof of proposition \ref{sub-K-equivalence}.} Let $t\in\mathbb{R}$ and define $v_t=\cos(t)+iv\sin(t)\in\mathcal{L}_{A_f}(H_m)$. Since $v=v^*$ is unitary, $v_t$ is a unitary for all $t\in\mathbb{R}$. Moreover, assertion $(1)$ of the Claim implies that $v_t\pi_m(b)v_t^{*}=\pi_m(b)$ for all $b\in B$. It follows from the universal property of $A_f$ that there exists a unique unital $*$-homomorphism $\pi_t\,:\,A_f\rightarrow \mathcal{L}_{A_f}(H_m)$ such that: $$\pi_t(a)=\left\{\begin{array}{lcl} v_t^*\pi_m(a)v_t&\text{if}&a\in A_1,\\ \pi_m(a)&\text{if}&a\in A_2.\end{array}\right.$$ Then the triple $\alpha_t=(H_m\oplus\widetilde{K}_m,\pi_t\oplus\widetilde{\rho}_m,\widetilde{F})$ gives a homotopy between $\alpha_0$, which represents $[\lambda]\underset{A}{\otimes}\alpha-[\text{id}_{A_f}]$, and $\alpha_{\frac{\pi}{2}}$, which is degenerate by the claim. \end{proof} \noindent We finish the proof of theorem \ref{TheoremKequivalence} in the next proposition. \begin{proposition}\label{PropositionKequivalence} One has $\alpha\underset{A_f}{\otimes}[\lambda]=[\text{id}_{A}]$ in ${\rm KK}(A,A)$.
\end{proposition} \begin{proof} Observe that $\alpha\underset{A_f}{\otimes}[\lambda]=[(H_r\oplus K_r,\pi_r\oplus\rho_r,F_r)]$ where $$H_r=H_m\underset{\lambda}{\otimes}A=H_1\underset{A_1}{\otimes}A\oplus H_2\underset{A_2}{\otimes}A\quad\text{and}\quad K_r=K_m\underset{\lambda}{\otimes}A=K\underset{B}{\otimes}A=\left(K\underset{B}{\otimes}A_k\right)\underset{A_k}{\otimes} A,$$ with the canonical representations $\pi_r\,:\,A\rightarrow\mathcal{L}_{A}(H_r)$, $\pi_r(x)=\pi(x)\underset{\lambda}{\otimes}1=x\underset{A_1}{\otimes}1_{A}\oplus x\underset{A_2}{\otimes}1_{A}$ and $\rho_r\,:\,A\rightarrow\mathcal{L}_{A}(K_r)$, $\rho_r(x)=\bar\rho(x)\underset{\lambda}{\otimes}1=\rho(x)\underset{B}{\otimes}1_{A}$, and with the operator $F_r=F\underset{\lambda}{\otimes} 1\in\mathcal{L}_A(H_r,K_r)$. Hence, $\alpha\underset{A_f}{\otimes}[\lambda]-[\text{id}_{A}]$ is represented by the Kasparov triple $(H_r\oplus\widetilde{K}_r,\pi_r\oplus\widetilde{\rho}_r,\widetilde{F}_r)$, where $\widetilde{K}_r=K_r\oplus A$ and $\widetilde{\rho}_r(x)=\rho_r(x)\oplus x$, where we view $A=\mathcal{L}_{A}(A)$ by left multiplication. Finally, $\widetilde{F}_r\in\mathcal{L}_{A}(H_r,\widetilde{K}_r)$ is the unitary defined by $$\widetilde{F}_r(\xi_1\underset{A_1}{\otimes}1_{A})=\eta\underset{B}{\otimes}1_{A},\quad\widetilde{F}_r(\xi_2\underset{A_2}{\otimes}1_{A})=1_{A}\quad\text{and}$$ $$\widetilde{F}_r(\xi)=F_r(\xi)\,\,\text{for all}\,\,\xi\in H_r\ominus\left((\xi_1\underset{A_1}{\otimes}1_{A}).A\oplus(\xi_2\underset{A_2}{\otimes}1_{A}).A\right).$$ The claim in the proof of proposition \ref{sub-K-equivalence} implies the following claim.
\vspace{0.2cm} \noindent\textbf{Claim.}\textit{ Let $u\in\mathcal{L}_{A}(H_r)$ be the self-adjoint unitary defined by the identity on $H_r\ominus((\xi_1\underset{A_1}{\otimes} 1_{A}).A\oplus (\xi_2\underset{A_2}{\otimes} 1_{A}).A)$ and $u(\xi_1\underset{A_1}{\otimes} 1_{A})=\xi_2\underset{A_2}{\otimes} 1_{A}$, $u(\xi_2\underset{A_2}{\otimes} 1_{A})=\xi_1\underset{A_1}{\otimes} 1_{A}$. One has: \begin{enumerate} \item $\widetilde{F}_r^*\widetilde{\rho}_r(b)\widetilde{F}_r=\pi_r(b)$ and $u^*\pi_r(b)u=\pi_r(b)$ for all $b\in B$. \item $\widetilde{F}_r^*\widetilde{\rho}_r(a)\widetilde{F}_r=u^*\pi_r(a)u$ for all $a\in A_1$. \item $\widetilde{F}_r^*\widetilde{\rho}_r(a)\widetilde{F}_r=\pi_r(a)$ for all $a\in A_2$. \end{enumerate}} \noindent Let $t\in\mathbb{R}$ and define the unitary $u_t=\cos(t)+iu\sin(t)\in\mathcal{L}_{A}(H_r)$. Assertion $(1)$ of the Claim implies that $u_t^*\pi_r(b)u_t=\pi_r(b)$ for all $b\in B$. By the universal property of full amalgamated free products, for all $t\in\mathbb{R}$, there exists a unique unital $*$-homomorphism $\pi_t\,:\,A_f\rightarrow \mathcal{L}_{A}(H_r)$ such that: $$\pi_t(a)=\left\{\begin{array}{lcl} u_t^*\pi_r(a)u_t&\text{if}&a\in A_1,\\ \pi_r(a)&\text{if}&a\in A_2.\end{array}\right.$$ Arguing as in the end of the proof of proposition \ref{sub-K-equivalence}, we see that it suffices to show that, for all $t\in[0,\frac{\pi}{2}]$, $\pi_t$ factorizes through $A$, i.e.\ $\ker(\lambda)\subset\ker(\pi_t)$. To do that, we need the following claim. \vspace{0.2cm} \noindent\textbf{Claim.} \textit{For all $t\in\mathbb{R}$ and every reduced operator $a=a_1\dots a_n\in \mathcal{A}$ with $a_k\in A_{l_k}^{\circ}$, one has \begin{enumerate} \item $\pi_t(a)u_t^*(\xi_2\underset{A_2}{\otimes}1_A)=e^{-it}(a\xi_2\underset{A_2}{\otimes}1_A)$ if $l_n=1$ and $\pi_t(a)(\xi_1\underset{A_1}{\otimes}1_A)=a\xi_1\underset{A_1}{\otimes}1_A$ if $l_n=2$.
\item $\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(a) u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A)\rangle=\sin^{2k}(t)a$ where $k=\left\{\begin{array}{ll} \frac{n}{2}&\text{if }n\text{ is even},\\ \frac{n-1}{2}&\text{if }n\text{ is odd and }l_n=1,\\ \frac{n+1}{2}&\text{if }n\text{ is odd and }l_n=2.\end{array}\right.$ \item $\langle \xi_2\underset{A_2}{\otimes} 1_A,\pi_t(a) \xi_2\underset{A_2}{\otimes} 1_A\rangle=\sin^{2k}(t)a$ where $k=\left\{\begin{array}{ll} \frac{n}{2}&\text{if }n\text{ is even},\\ \frac{n+1}{2}&\text{if }n\text{ is odd and }l_n=1,\\ \frac{n-1}{2}&\text{if }n\text{ is odd and }l_n=2.\end{array}\right.$ \end{enumerate}} \noindent\textit{Proof of the claim.} $(1)$ is obvious by induction on $n$ once observed that $u_t\xi=e^{it}\xi$ (and $u_t^*\xi=e^{-it}\xi$) for all $\xi\in H_r\ominus(\xi_1\underset{A_1}{\otimes}1_A.A\oplus\xi_2\underset{A_2}{\otimes}1_A.A)$. \noindent $(2)$. Define, for $a_1\dots a_n\in\mathcal{A}$, $F(a_1,\dots,a_n)=\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(a) u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A)\rangle$. First suppose that $a\in A_1^{\circ}$ then $F(a)=\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),u_t^*\pi_r(a)(\xi_1\underset{A_1}{\otimes} 1_A)\rangle=\langle \xi_1\underset{A_1}{\otimes} 1_A,\xi_1\underset{A_1}{\otimes} a\rangle=a$. Now, let $a=a_1\dots a_n\in\mathcal{A}$ with $n\geq 2$ and $l_n=1$. We have: $$F(a_1,\dots,a_n) =\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(a_1\dots a_{n-1})u_t^*(\xi_1\underset{A_1}{\otimes} a_n)\rangle=F(a_1,\dots,a_{n-1})a_n.$$ Hence, it suffices to show the formula for $l_n=2$. Suppose $a\in A_2^{\circ}$, we have: \begin{eqnarray*} F(a)&=&\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_r(a)u_t^*(\xi_1\underset{A_1}{\otimes} 1_A)\rangle\\ &=&\langle \cos(t)\xi_1\underset{A_1}{\otimes} 1_A-i\sin(t)\xi_2\underset{A_2}{\otimes} 1_A,\cos(t)a\xi_1\underset{A_1}{\otimes} 1_A-i\sin(t)\xi_2\underset{A_2}{\otimes} a\rangle =\sin^2(t)a. 
\end{eqnarray*} Now suppose $a_1a_2\in\mathcal{A}$, with $l_2=2$, $l_1=1$. We have: \begin{eqnarray*} F(a_1,a_2)&=&\langle \xi_1\underset{A_1}{\otimes} 1_A,\pi_r(a_1)u_t\pi_r(a_2)u_t^*(\xi_1\underset{A_1}{\otimes} 1_A)\rangle\\ &=&\langle \xi_1\underset{A_1}{\otimes} 1_A,\pi_r(a_1)u_t(\cos(t)a_2\xi_1\underset{A_1}{\otimes} 1_A-i\sin(t)\xi_2\underset{A_2}{\otimes} a_2)\rangle\\ &=&\langle \xi_1\underset{A_1}{\otimes} 1_A,\cos(t)e^{it}a_1a_2\xi_1\underset{A_1}{\otimes} 1_A-i\cos(t)\sin(t) a_1\xi_2\underset{A_2}{\otimes} a_2+\sin^2(t)\xi_1\underset{A_1}{\otimes} a_1a_2\rangle\\ &=&\sin^2(t)a_1a_2. \end{eqnarray*} Finally, suppose that $n\geq 3$ and $a_1\dots a_n\in\mathcal{A}$ with $l_n=2$. Define $x=a_1\dots a_{n-2}$. We have \begin{eqnarray*} F(a_1,\dots,a_n)&=&\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(x)u_t^*\pi_r(a_{n-1})u_t\pi_r(a_n) u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A)\rangle\\ &=&\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(x)u_t^*\pi_r(a_{n-1})u_t(\cos(t)a_n\xi_1\underset{A_1}{\otimes} 1_A-i\sin(t)\xi_2\underset{A_2}{\otimes} a_n)\rangle \end{eqnarray*} $$ =\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(x)u_t^*(\cos(t)e^{it}a_{n-1}a_n\xi_1\underset{A_1}{\otimes} 1_A-i\cos(t)\sin(t) a_{n-1}\xi_2\underset{A_2}{\otimes} a_n+\sin^2(t)\xi_1\underset{A_1}{\otimes} a_{n-1}a_n)\rangle $$ $$ =\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\cos(t)a_1\dots a_n\xi_1\underset{A_1}{\otimes} 1_A-ie^{-it}\cos(t)\sin(t)a_1\dots a_{n-1}\xi_2\underset{A_2}{\otimes} a_n\rangle$$ $$ +\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\sin^2(t)\pi_t(x)u_t^*\xi_1\underset{A_1}{\otimes} a_{n-1}a_n)\rangle. $$ Hence we find: $$F(a_1,\dots,a_n)=\sin^2(t)\langle u_t^{*}(\xi_1\underset{A_1}{\otimes} 1_A),\pi_t(x)u_t^*\xi_1\underset{A_1}{\otimes} a_{n-1}a_n)\rangle=\sin^2(t)F(a_1,\dots,a_{n-2})a_{n-1}a_n.$$ The result now follows by an obvious induction on $n$. 
The proof of $(3)$ is similar.\hfill{$\Box$} \vspace{0.2cm} \noindent\textit{End of the proof of proposition \ref{PropositionKequivalence}.} Fix $t\in[0,\frac{\pi}{2}]$ and let $A_t$ be the C*-subalgebra of $\mathcal{L}_A(H_r)$ generated by $\pi_t(A_1)\cup\pi_t(A_2)$. Hence, $\pi_t\,:\,A_f\rightarrow A_t$ is surjective. Consider the ucp map $\varphi_t\,:\,A_t\rightarrow A$ defined by $\varphi_t(x)=\frac{1}{2}\left(\langle u_t^*(\xi_1\underset{A_1}{\otimes}1_A),xu_t^*(\xi_1\underset{A_1}{\otimes}1_A)\rangle+\langle \xi_2\underset{A_2}{\otimes}1_A,x\xi_2\underset{A_2}{\otimes}1_A\rangle\right)$ and note that $\varphi_t$ is GNS faithful. Indeed, let $x\in A_t$ be such that $\varphi_t(y^*x^*xy)=0$ for all $y\in A_t$. Then $L\subset\ker(x)$ where \begin{eqnarray*} L&=&\overline{\text{Span}}\left(A_tu_t^*(\xi_1\underset{A_1}{\otimes} 1_A).A\cup A_t(\xi_2\underset{A_2}{\otimes} 1_A).A\right)=\overline{\text{Span}}\left(A_t(\xi_1\underset{A_1}{\otimes} 1_A).A\cup A_t(\xi_2\underset{A_2}{\otimes} 1_A).A \right)\\ &=&\overline{\text{Span}}\left(A_t(\xi_1\underset{A_1}{\otimes} 1_A).A\cup A_tu_t^*(\xi_2\underset{A_2}{\otimes} 1_A).A \right)=H_r, \end{eqnarray*} where we used assertion $(3)$ of the claim for the last equality. Hence $x=0$. Let $A_{v,k}$, for $k=1,2$, be the $k$-vertex-reduced free product, call $i_k$ the natural inclusion of $A$ in $A_{v,k}$, and let $\pi_k=i_k\circ\lambda$ be the natural map from $A_f$ to $A_{v,k}$. Clearly $||x||_A=\max (||i_1(x)||,||i_2(x)||)$ for any $x$ in the vertex-reduced free product $A$. From assertions $(1)$ and $(2)$ of the claim and proposition \ref{PropMultipliers} with $r=\sin^2(t)$, we deduce that for each $k=1,2$ there exist two ucp maps $\psi_1^k$ and $\psi_2^k$ from $A_{v,k}$ to itself such that $i_k(\varphi_t(\pi_t(a)))=\frac{1}{2} (\psi_1^k(\pi_k(a))+\psi_2^k(\pi_k(a)))$ for all $a\in A_f$. Therefore $||\varphi_t(\pi_t(a))||_A \leq \max (||\pi_1(a)||, ||\pi_2(a)||) =|| \lambda(a) ||$ for all $a\in A_f$.
Let us show that $\ker(\lambda)\subset\ker(\pi_t)$. Let $x\in\ker(\lambda)$. Then, for all $y\in A_f$ we have $\lambda(y^*x^*xy)=0$. Therefore $\varphi_t\circ\pi_t(y^*x^*xy)=0$ for all $y\in A_f$. Since $\pi_t$ is surjective, we deduce that $\varphi_t(y^*\pi_t(x)^*\pi_t(x)y)=0$ for all $y\in A_t$. Using that $\varphi_t$ is GNS faithful, we deduce that $\pi_t(x)=0$. \end{proof} \noindent We obtain the following obvious corollary of theorem \ref{TheoremKequivalence} and corollary \ref{CorDegVertexRed}. \begin{corollary}[\cite{Cu82}] If we have conditional expectations $E_k\,:\, A_k\rightarrow B$ which are also unital $*$-homomorphisms, then the canonical surjection $A_1\underset{B}{*} A_2\rightarrow A_1\oplus_B A_2$ is $K$-invertible. \end{corollary} \section{A long exact sequence in $KK$-theory for full amalgamated free products} \noindent Let $A_1$ and $A_2$ be two unital C*-algebras with a common unital C*-subalgebra $B$. We will denote by $i_l$ the inclusion of $B$ in $A_l$ for $l=1,2$. The algebra $A_f$ is the full amalgamated free product. To simplify notation, we will denote by $S$ the algebra $C_0(]-1,1[)$. \vspace{0.2cm} \noindent Let $D$ be the subalgebra of $S\otimes A_f$ consisting of functions $f$ such that $f(]-1,0[)\subset A_1$, $f(]0,1[)\subset A_2$ and $f(0)\in B$. This algebra is of course isomorphic to the cone of $i_1\oplus i_2$ from $B$ to $A_1\oplus A_2$. We call $j$ the inclusion of $D$ in the suspension of $A_f$. \begin{theorem}\label{bigthm} Suppose that there exist unital conditional expectations from $A_l$ to $B$ for $l=1,2$. Then the map $j$, seen as an element $[j]$ of $KK^0(D, S\otimes A_f)$, is invertible. \end{theorem} \noindent The proof of this result will be done in several steps. We will start with the construction of an element $x$ of $KK^1(A_f, D)$. As $KK^1(A_f,D)$ is isomorphic to $KK^0( S\otimes A_f, D)$, this will produce a candidate $y$ for the inverse of $j$.
The proof that $y\otimes_D [j]$ is the identity of the suspension of $A_f$ will use proposition \ref{sub-K-equivalence}. Finally, the proof that $[j]\otimes_{S\otimes A_f}y$ is the identity of $D$ will be done indirectly, by using a short exact sequence for $D$. \subsection{An inverse in KK-theory} In order to present the inverse, we need some additional notations and preliminaries. Let $\kappa_1$ be the inclusion of $C_0(]-1,0[;A_1)$ in $D$ and $\kappa_2$ the inclusion of $C_0(]0,1[;A_2)$ in $D$. There is also $\kappa_0$, the obvious map from $S\otimes B$ to $D$. As $K$ of the preceding section is a $B$-module, we can define $$K_0=(K\otimes S)\otimes_{\kappa_0} D,\,\,K_1=(K\otimes_{i_1}A_1\otimes C_0(]-1,0[))\otimes_{\kappa_1} D\text{ and }K_2=(K\otimes_{i_2}A_2\otimes C_0(]0,1[))\otimes_{\kappa_2} D.$$ \noindent If one defines $I_l$ as the image of $\kappa_l$ in $D$ for $l=1,2$, it is clear that these are ideals in $D$. \begin{lemma}\label{lemmaK} $K_l$ is isomorphic to $\overline{K_0.I_l}$ for $l=1,2$ as a $D$-Hilbert module. \end{lemma} \begin{proof} Let us do it for $l=1$. Indeed, as $I_1=\overline{C_0(]-1,0[).I_1}$ (an approximate unit for $C_0(]-1,0[)$ is also one for $I_1$), it is easy to see that $\overline{K_0.I_1}$ is isomorphic to $\overline{(K\otimes S).C_0(]-1,0[) }\otimes_{\kappa_0} D .I_1$, i.e. $(K\otimes C_0(]-1,0[)) \otimes_{\kappa_0} D .I_1$. Considering that $C_0(]-1,0[;A_1)\otimes_{\kappa_1} D$ is $D.I_1$, one gets that $\overline{K_0.I_1}$ is nothing but $(K\otimes C_0(]-1,0[)) \otimes_{\tilde\kappa_0} C_0(]-1,0[;A_1)\otimes_{\kappa_1} D$, where $\tilde\kappa_0$ is the natural inclusion of $C_0(]-1,0[;B)$ in $C_0(]-1,0[;A_1)$, i.e. $i_1\otimes Id_{C_0(]-1,0[)}.$ Therefore $(K\otimes_{i_1}A_1)\otimes C_0(]-1,0[)$ is $(K\otimes C_0(]-1,0[)) \otimes_{\tilde\kappa_0} C_0(]-1,0[;A_1)$ and $\overline{K_0.I_1}$ is $K_1$.
\end{proof} \noindent We will also need the following lemmas. \begin{lemma}\label{basicprop} \begin{enumerate} \item If $f\in C([-1,1];\mathbb{R})$, then $f$ is a self-adjoint element of $Z(M(D))$; more generally, for any $D$-Hilbert module $\mathcal{E}$, right multiplication by $f$ induces a morphism $\hat f \in Z(\mathcal{L}_D(\mathcal{E}))$, and the map $f\mapsto \hat f$ is an algebra morphism. \item Let $f$ be in $C_0(]-1,0[;\mathbb{R})$. Then $f\in I_1\cap Z(D)$ and right multiplication by $f$ induces a morphism $\hat f$ of $\mathcal{L}_D(K_0,K_1)$ such that $\hat f^* \hat f=\hat{f^2}$ in $\mathcal{L}_D(K_0)$ and $\hat f \hat f^*=\hat{f^2}$ in $\mathcal{L}_D(K_1)$. \item Let $f$ be in $C_0(]0,1[;\mathbb{R})$. Then $f\in I_2\cap Z(D)$ and right multiplication by $f$ induces a morphism $\hat f$ of $\mathcal{L}_D(K_0,K_2)$ such that $\hat f^* \hat f=\hat{f^2}$ in $\mathcal{L}_D(K_0)$ and $\hat f \hat f^*=\hat{f^2}$ in $\mathcal{L}_D(K_2)$. \end{enumerate} \end{lemma} \noindent The first point is clear, and $(2)$ and $(3)$ are also clear in view of lemma \ref{lemmaK}. \begin{lemma}\label{compactprop} \begin{enumerate} \item If $f\in C_0(]-1,1[;\mathbb{R})$, then for any $B$-module $\mathcal{E}$ and $F\in \mathcal{K}_B(\mathcal{E})$, the operator $(F\otimes 1_S)\otimes_{\kappa_0} 1_D\, \hat f$ is a compact operator on $(\mathcal{E}\otimes S)\otimes_{\kappa_0} D$. \item If $f\in C_0(]-1,0[;\mathbb{R})$, then for any $A_1$-module $\mathcal{E}$ and $F\in \mathcal{K}_{A_1}(\mathcal{E})$, the operator $(F\otimes 1_{C_0(]-1,0[)})\otimes_{\kappa_1} 1_D\, \hat f$ is a compact operator on $(\mathcal{E}\otimes C_0(]-1,0[))\otimes_{\kappa_1} D$. \item Similarly for $f\in C_0(]0,1[;\mathbb{R})$ and $A_2$-modules. \end{enumerate} \end{lemma} \begin{proof} Points $(2)$ and $(3)$ are similar to $(1)$.
To do $(1)$, let $F$ be the rank one operator $\theta_{\xi,\eta}$, for vectors $\xi$ and $\eta$ in $\mathcal{E}$, defined by $\theta_{\xi,\eta}(x)= \xi\langle\eta,x\rangle$ for all $x$ in $\mathcal{E}$. Then $(F\otimes 1_S)\otimes_{\kappa_0} 1_D \hat f$ is $\theta_{\xi\otimes f_2\otimes f_2,\eta\otimes f_2\otimes f_2} \hat f_1$, and therefore compact, for any function $f=f_1f_2^4$ with $f_1$ and $f_2$ in $C_0(]-1,1[;\mathbb{R})$. As any function in $C_0(]-1,1[;\mathbb{R})$ can be written in this form (using, for example, the polar decomposition), we get our result. \end{proof} \noindent Define now two functions in $C([-1,1];\mathbb{R})$: $C^+(t)$ is $\cos(\pi t)$ if $t\geq 0$ and $1$ if $t\leq 0$; the function $C^-(t)$ is $\cos(\pi t)$ if $t\leq 0$ and $1$ if $t \geq 0$. Similarly, we have two functions in $S$: $S^+(t)$ is $\sin(\pi t)$ if $t\geq 0$ and $0$ if $t\leq 0$; the function $S^-(t)$ is $\sin(\pi t)$ if $t\leq 0$ and $0$ if $t \geq 0$. Finally, $T$ is the identity function of $C([-1,1];\mathbb{R})$. \noindent With the notation of the first part, we have a natural $D$-module $$H=(H_1\otimes C_0(]-1,0[))\otimes_{\kappa_1} D\oplus (H_2\otimes C_0(]0,1[))\otimes_{\kappa_2} D\oplus (K\otimes S)\otimes_{\kappa_0} D.$$ It is also clear that $H$ is endowed with a natural (left) action of $A_f$, as $H_1$, $H_2$ and $K$ have one. \vspace{0.2cm} \noindent Let $G$ be the operator of $\mathcal{L}_D(H)$ defined in matrix form by $$G=\begin{pmatrix} \widehat{C^-} & 0 & - ((F_1\otimes 1_{C_0(]-1,0[)})^*\otimes_{\kappa_1} 1 )\widehat{S^-} \\ 0 & -\widehat C^+ & ((F_2\otimes 1_{C_0(]0,1[)})^*\otimes_{\kappa_2} 1 )\widehat{S^+}\\ - \widehat{S^-}^*((F_1\otimes 1_{C_0(]-1,0[)})\otimes_{\kappa_1} 1) & \widehat{S^+}^* ((F_2\otimes 1_{C_0(]0,1[)})\otimes_{\kappa_2} 1) & Z\\ \end{pmatrix}$$ where $Z= -\widehat{C^-} (q_1\otimes 1_S)\otimes_{\kappa_0} 1+\widehat{ C^+}(q_2\otimes 1_S)\otimes_{\kappa_0} 1-\widehat{T} (q_0\otimes 1_S)\otimes_{\kappa_0} 1$. Thanks to lemma \ref{basicprop}, $G$ is well-defined.
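\noindent The compactness computations for $G$ below rest on elementary properties of the functions just introduced; they follow directly from the definitions, checking separately on $t\leq 0$ and on $t\geq 0$, and we record them here for later reference:

```latex
% Identities for the cut-off functions C^{\pm}, S^{\pm}, T:
\begin{gather*}
(C^{-})^{2}+(S^{-})^{2}=(C^{+})^{2}+(S^{+})^{2}=1,\qquad
S^{-}S^{+}=0,\qquad T^{2}-1\in C_{0}(]-1,1[),\\
C^{-}(-1)=-1,\quad C^{+}(-1)=1,\quad C^{-}(1)=1,\quad C^{+}(1)=-1,\\
S^{+}(\pm 1)=S^{-}(\pm 1)=0,\qquad T(\pm 1)=\pm 1.
\end{gather*}
```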
Moreover the following holds. \begin{proposition} The operator $G$ satisfies: $G^2-1$ is a compact operator on $H$, and $G$ commutes modulo compact operators with the action of $A_f$. \end{proposition} \begin{proof} Computing $G^2$ one gets as upper left $2\times 2$ corner: $$\begin{pmatrix} \widehat{C^-}^2+F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} \widehat{S^-}^*F_1\otimes_{\kappa_1} 1 & -F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} \widehat{S^+}^* F_2\otimes_{\kappa_2} 1 \\ -F_2^*\otimes_{\kappa_2} 1 \widehat{S^+} \widehat{S^-}^* F_1\otimes_{\kappa_1} 1 & \widehat{C^+}^2+F_2^*\otimes_{\kappa_2} 1 \widehat{S^+} \widehat{S^+}^*F_2\otimes_{\kappa_2} 1\\ \end{pmatrix}$$ As $F_1^*F_1$ is the identity modulo compact operators, using lemma \ref{compactprop} (the function $(S^-)^2$ is in $C_0(]-1,1[)$) one has that $F_1^*\otimes_{\kappa_1} 1 \widehat{(S^-)^2} F_1\otimes_{\kappa_1} 1$ is $\widehat{(S^-)^2}$ modulo compact operators. Recalling also that $F_1^*F_2=0$, one gets that this matrix is the identity modulo compact operators. \noindent Let us now focus on the last column of $G^2$. We get first $-\widehat{C^-} F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} - F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} Z$. As $F_1^*q_1\otimes_{i_1} 1=F_1^*$ and $F_1^*q_2\otimes_{i_1} 1=0$ along with $F_1^*q_0\otimes_{i_1} 1=0$, $F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} Z$ is $-F_1^*\otimes_{\kappa_1} 1 \widehat{S^-}\widehat{C^-}$. The second entry of that column is treated in the same way. Finally the last entry is $\widehat{S^-}^2(F_1F_1^*)\otimes_{\kappa_1} 1+\widehat{S^+}^2(F_2F_2^*)\otimes_{\kappa_2} 1+ \widehat{C^-}^2 (q_1\otimes 1_S)\otimes_{\kappa_0} 1+\widehat{ C^+}^2(q_2\otimes 1_S)\otimes_{\kappa_0} 1+\widehat{T}^2 (q_0\otimes 1_S)\otimes_{\kappa_0} 1$, as $q_0,q_1,q_2$ are commuting projections. But $F_lF_l^*$ is $q_l\otimes_{i_l} 1$ so $\widehat{S^-}^2(F_1F_1^*)\otimes_{\kappa_1} 1$ is $\widehat{S^-}^2 (q_1\otimes 1_S)\otimes_{\kappa_0} 1$.
Hence, as $q_1+q_2+q_0=1$, the last entry is $1+ \widehat{T^2-1} (q_0\otimes 1_S)\otimes_{\kappa_0} 1$. As $T^2-1$ is in $C_0(]-1,1[)$ and $q_0$ is compact, this entry is then $1$ modulo compact operators. \noindent Addressing now the compact commutation with the left action of $A_f$: it is immediate from lemma \ref{compactprop} for every entry of $G$ except $Z$, as $Z$ involves multiplication by functions not in $C_0(]-1,1[)$. So let $a$ be in $A_1$. We need to compute $[Z,\rho(a)\otimes_{\kappa_0} 1]$. But we know that $[q_1,\rho(a)]=0$. As $q_2=1-q_1-q_0$ we get that $[Z,\rho(a)\otimes_{\kappa_0} 1]= -(\widehat{C^+ + T})[q_0,\rho(a)]\otimes_{\kappa_0}1$, which is compact as $C^+ + T$ is a function that vanishes at $-1$ and $1$. The case when $a$ is in $A_2$ is treated in a similar way, hence the compact commutation property is proved for all $a$ in $A_f$. \end{proof} \noindent As a consequence, the couple $(H, G)$ defines an element of $KK^1(A_f, D)$, which we will call $x$ in the sequel. \subsection{K-equivalence} In all the following proofs we will very often use the external tensor product of Kasparov elements. Instead of the traditional notation $\tau_C(x)$ for the tensorisation with the algebra $C$ of an element $x$ in $KK^*(A,B)$, we will write $1_C\otimes x$ for the element in $KK^*(C\otimes A, C\otimes B)$ or $x\otimes 1_C$ for the element in $KK^*(A\otimes C, B\otimes C)$. Of course $B\otimes C$ is (non canonically) isomorphic to $C\otimes B$, but as we will perform this operation several times, the order will matter. Note that we do not specify the tensor norm, as the algebra $C$ we will be using is always nuclear. Also, when $\pi$ is a morphism between $A$ and $B$, we will write $[\pi]$ for the canonical element in $KK^0(A,B)$. We will denote by $b$ the element of $KK^1(\mathbb{C}, S)$ which is defined on the $S$-Hilbert module $S$ itself by the operator $\widehat{T}$. It is well known that $b$ is invertible.
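\noindent As a consistency check on these conventions, one can note (this remark is not used later) that the pair defining $b$ indeed satisfies the Kasparov conditions: $T$ is real-valued and $T^{2}-1$ vanishes at $\pm 1$, so that

```latex
% b = [(S, 1, \widehat{T})] in KK^1(C, S):
\widehat{T}=\widehat{T}^{\,*},\qquad
\widehat{T}^{\,2}-1=\widehat{T^{2}-1}\in S=\mathcal{K}_{S}(S),
```

while the commutator condition is trivial since $\mathbb{C}$ acts by scalars.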
\begin{proposition}\label{subequiv} With the hypothesis of theorem \ref{bigthm}, one has in $KK^1(A_f, A_f\otimes S)$ that $x\otimes_D [j]$ is homotopic to $( 1_{A_f}\otimes b)\otimes_{A_f\otimes S} ([Id_{A_f}]\otimes 1_S)$. \end{proposition} \begin{proof} To prove this, we will choose the representative of $[Id_{A_f}]$ that appears in proposition \ref{sub-K-equivalence} and show that its Kasparov product with $b$ is homotopic to $x\otimes_D [j]$. Call $j_l$, for $l=1,2$, the inclusions of $A_l$ in $A_f$ and $j_0=j_1\circ i_1=j_2\circ i_2$ the inclusion of $B$ in $A_f$. First, it is obvious that $H\otimes_{j} (A_f\otimes S)$ is $H_1\otimes_{j_1}A_f\otimes C_0(]-1,0[)\oplus H_2\otimes_{j_2} A_f\otimes C_0(]0,1[)\oplus K\otimes_{j_0} A_f\otimes S$, which is not quite the same as $(H_1\otimes_{j_1}A_f\oplus H_2\otimes_{j_2} A_f\oplus K\otimes_{j_0} A_f)\otimes S$. We will now construct a homotopy to fix this. \begin{lemma} Consider the following two spaces: $\Delta_1=\{(t,s)\in \mathbb{R}^2, 0\leq s\leq 1, \,-1< t<s\}$ and $\Delta_2=\{(t,s)\in \mathbb{R}^2, 0\leq s\leq 1 ,\,-s< t<1\}$. The Hilbert module $\overline H= H_1\otimes_{j_1}A_f\otimes C_0(\Delta_1)\oplus H_2\otimes_{j_2} A_f\otimes C_0(\Delta_2)\oplus K\otimes_{j_0} A_f\otimes S\otimes C([0,1])$ is endowed with a natural structure of $A_f\otimes S\otimes C([0,1])$-Hilbert module and with a natural left action of $A_f$.
Moreover the operator $$\overline G= \begin{pmatrix} \widehat{C^-}\otimes 1_{C([0,1])} & 0 & - F_1^*\otimes_{j_1} 1\otimes 1_{\Delta_1} \widehat{S^-}\otimes 1_{C([0,1])} \\ 0 & -\widehat C^+ \otimes 1_{C([0,1])}& F_2^*\otimes_{j_2} 1\otimes 1_{\Delta_2} \widehat{S^+}\otimes 1_{C([0,1])}\\ - \widehat{S^-}^*\otimes 1_{C([0,1])} \,F_1\otimes_{j_1} 1\otimes 1_{\Delta_1} & \widehat{S^+}^*\otimes 1_{C([0,1])}\, F_2\otimes_{j_2} 1 \otimes 1_{\Delta_2} & \overline Z\\ \end{pmatrix} $$ with $\overline Z= \widetilde Z\otimes 1_{C([0,1])}$, where $\widetilde Z=-\widehat{C^-} q_1\otimes_{j_0} 1\otimes 1_S+\widehat{ C^+}q_2\otimes_{j_0} 1\otimes 1_S -\widehat{T} q_0\otimes_{j_0} 1\otimes 1_S$, makes the pair $(\overline H, \overline G)$ into an element of $KK^1(A_f, A_f\otimes S\otimes C([0,1]))$ whose evaluation at $s=0$ is $x\otimes_{D} [j]$ and whose evaluation at $s=1$ has $(H_1\otimes_{j_1}A_f\oplus H_2\otimes_{j_2} A_f\oplus K\otimes_{j_0} A_f)\otimes S$ as module and $\widetilde G=\begin{pmatrix} \widehat{C^-} & 0 & - F_1^*\otimes_{j_1} 1\otimes 1_S \widehat{S^-} \\ 0 & -\widehat C^+ & F_2^*\otimes_{j_2} 1\otimes 1_S \widehat{S^+}\\ - \widehat{S^-}^*F_1\otimes_{j_1} 1\otimes 1_S & \widehat{S^+}^* F_2\otimes_{j_2} 1 & \widetilde Z\\ \end{pmatrix}$ as operator.\end{lemma} \begin{proof} As it is a straightforward check, details will be omitted. \end{proof} \noindent Then one easily checks that $\widetilde G$ is a $\widehat{T}\otimes_{A_f} 1$ connection. Indeed, as $H_1\otimes_{j_1}A_f\oplus H_2\otimes_{j_2} A_f$ is of grading $0$ and $K\otimes_{j_0} A_f$ of grading $-1$, one needs to check that, when evaluating at $-1$, $\widetilde G$ does the same thing as $\widehat T$, i.e. is the matrix $\begin{pmatrix}-1&0&0\\ 0&-1&0\\ 0&0&1\\ \end{pmatrix}$ and, when evaluating at $1$, is the opposite matrix. This is indeed the case as $q_1+q_2+q_0=1$. \vspace{0.2cm} \noindent Lastly, one needs the following lemma, in which the operator $F$ of \ref{sub-K-equivalence} appears.
\begin{lemma} The anti-commutator of $\widetilde G$ and $F\otimes 1_S$ is positive modulo compact operators. \end{lemma} \begin{proof} To do this, we decompose $\widetilde G$ into its diagonal and anti-diagonal parts. It is clear that $\begin{pmatrix} \widehat{C^-} & 0 & 0 \\ 0 & -\widehat C^+ & 0\\ 0 & 0 & \widetilde Z\\ \end{pmatrix}$ and $\begin{pmatrix} 0 & 0 & F_1^*\otimes_{i_1} 1 \otimes 1_S \\ 0 & 0 & F_2^*\otimes_{i_2} 1 \otimes 1_S\\ F_1\otimes_{i_1} 1 \otimes 1_S & F_2\otimes_{i_2} 1 \otimes 1_S & 0\\ \end{pmatrix}$ anti-commute modulo compact operators, as we have (modulo compact operators) $q_1F_1=F_1$ and $q_2F_1= q_0 F_1=0$. On the other hand, the anti-commutator with the anti-diagonal part is $$\begin{pmatrix} -2 (F_1^*F_1)\otimes_{j_1} 1\otimes 1_S \widehat{S^-} &0&0\\ 0& 2 (F_2^*F_2)\otimes_{j_2} 1\otimes 1_S \widehat{S^+}&0\\ 0&0&-2 q_1\otimes_{j_0} 1\otimes 1_S\widehat{S^-} +2 q_2 \otimes_{j_0} 1\otimes 1_S \widehat{S^+} \\ \end{pmatrix}$$ As $-S^-$ and $S^+$ are positive functions and $q_1$ and $q_2$ commute, the previous matrix is a diagonal matrix of positive operators, hence positive. \end{proof} \noindent Using the Connes-Skandalis characterization of the Kasparov product, we have established that $\widetilde G$ is a representative of the Fredholm operator for the product $(1_{A_f}\otimes b)\otimes_{A_f\otimes S} ([Id_{A_f}]\otimes 1_S)$. The proposition is thus proven. \end{proof} \noindent We now need the following two lemmas to obtain some information about $[j]\otimes_{A_f\otimes S} (x\otimes 1_S)$ as an element of $KK^1(D,D\otimes S)$. \begin{lemma}\label{lemmaev0} Call $ev_0$ the morphism from $D$ to $B$ that evaluates a function at $0$. Then we have in $KK^1(D, B\otimes S)$ that $[j]\otimes_{A_f\otimes S} ( (x\otimes_D [ev_0])\otimes 1_S)= - [ev_0]\otimes_B(1_B\otimes b)$. \end{lemma} \begin{proof} Let us first describe the left-hand side. The Hilbert module is $K\otimes S$, as the module $(H_1\otimes C_0(]-1,0[))\otimes_{\kappa_1} D\otimes_{ev_0} B$ is $0$.
The left $D$ action is given by $(\rho\otimes 1_S)\circ j$ and the operator is just $(-q_1+q_2)\otimes 1_S$. We can replace this operator with $G_0=(-q_1+q_2)\otimes 1_S-\widehat{T}q_0\otimes 1_S$, as for any $f$ in $D$, $(\rho\otimes 1_S)\circ j(f)\,\widehat{T}q_0\otimes 1_S$ is compact. Note now that the evaluation of $G_0$ at $-1$ is $1-2q_1$ and at $1$ is $2q_2-1$. This enables us to perform a homotopy. Consider the pair $(K\otimes S\otimes C([0,1]), G_0\otimes 1_{C([0,1])})$, where the left action of $D$ is now defined for any $f$ in $D$ and $k\in C(]-1,1[\times[0,1];K)$ as $(f.k)(t,s)=\rho(f(t(1-s))) k(t,s)$. This is still a Kasparov element, as $(G_0^2-1)\otimes 1_{C([0,1])} =(\widehat{(T^2-1)}q_0\otimes 1_S)\otimes 1_{C([0,1])}$ is compact. Also, the commutator of the left action with the operator $G_0\otimes 1$ is compact. Indeed, as $q_0$ is compact, it is only necessary to check that the evaluation at $-1$ or $1$ of any commutator is $0$. But this is true as $[q_1,\rho(A_1)]=0$ and $[q_2,\rho(A_2)]=0$. Therefore $[j]\otimes_{A_f\otimes S} ( (x\otimes_D [ev_0])\otimes 1_S)$ is homotopic to an element of $KK^1(D,B\otimes S)$ which is described by the pair $(K\otimes S, G_0)$, where $D$ acts on $K\otimes S$ through the constant morphism $\rho\circ ev_0$. So it is $[ev_0]\otimes_B z$, with $z$ an element of $KK^1(B,B\otimes S)$ which is only non-trivial on $q_0K\otimes S\simeq B\otimes S$, where $G_0$ acts as $-\widehat{T}$. Thus $z=- 1_B\otimes b$. \end{proof} \noindent Recall that for $l=1,2$, $\kappa_l$ is the inclusion of $A_1\otimes C_0(]-1,0[)$, resp. $A_2\otimes C_0(]0,1[)$, in $D$. To be precise, we will use $\bar\kappa_l$ for the induced map from $A_l\otimes S$ to $D$ via the isomorphism of $C_0(]-1,0[)$ (resp. $C_0(]0,1[)$) with $S$. \begin{lemma}\label{lemma-vertex} For all $l=1,2$, one has $[j_l]\otimes_{A_f} x=([Id_{A_l}]\otimes b)\otimes_{A_l\otimes S} [\bar\kappa_l]\in KK^1(A_l,D)$. \end{lemma} \begin{proof} We will do the lemma for $l=1$.
The element $[j_1]\otimes_{A_f} x$ has the same module and operator as $x$; the only change is that we now consider only the left action of $A_1$. We first perform a compact perturbation of the operator $G$. With the operators $\overline F_l$ defined before \ref{LemmaCompactCommutation}, consider $$G_1=\begin{pmatrix} \widehat{C^-} & 0 & - F_1^*\otimes_{\kappa_1} 1 \widehat{S^-} \\ 0 & -\widehat C^+ & \overline F_2^*\otimes_{\kappa_2} 1 \widehat{S^+}\\ - \widehat{S^-}^*F_1\otimes_{\kappa_1} 1 & \widehat{S^+}^* \overline F_2\otimes_{\kappa_2} 1 & \overline Z\\ \end{pmatrix}$$ where $\overline Z = -\widehat{C^-} (q_1\otimes 1_S)\otimes_{\kappa_0} 1+\widehat{ C^+}(1- q_1\otimes 1_S)\otimes_{\kappa_0} 1$. \noindent As $F_2-\overline F_2$ is compact (see \ref{LemmaCompactCommutation}) and $\overline Z- Z= \widehat{C^+ +T}(q_0\otimes 1_S)\otimes_{\kappa_0} 1$ is compact, since $C^+ +T$ is in $S$, we get the same element of $KK^1(A_1,D)$. Observe now that when evaluating at any positive $t$, $G_1^2$ is the identity, because $\overline F_2$ is an isometry and $\widehat{S^-} F_1\otimes_{\kappa_1} 1$ vanishes, and that for any $t$, $G_1$ commutes exactly with the left action of $A_1$, as $F_1$ and $\overline F_2$ do. \vspace{0.2cm} \noindent We will now construct a homotopy to remove the $[0,1[$ part of our module. Consider the spaces $\Delta_3=\{(t,s)\in\mathbb{R}^2,\, 0<s\leq 1,\, 0<t<s\}$ and $\Delta_4=\{(t,s)\in\mathbb{R}^2,\, 0\leq s\leq 1,\, -1<t<s\}$, which are open in $]-1,1[\times [0,1]$. Hence we also have a natural embedding $\delta_4$ of $C_0(\Delta_4;B)$ in $D\otimes C([0,1])$ and $\delta_3$ of $C_0(\Delta_3;A_2)$ in $D\otimes C([0,1])$. Then $\widetilde H= (H_1\otimes C_0(]-1,0[))\otimes_{\kappa_1} D\otimes C([0,1]) \oplus (H_2\otimes C_0(\Delta_3))\otimes_{\delta_3} D\otimes C([0,1]) \oplus (K\otimes C_0(\Delta_4))\otimes_{\delta_4} D\otimes C([0,1])$ is well defined, and the pair $(\widetilde H, \widetilde G_1)$, where $\widetilde G_1$ is built from $G_1$ in the same way as $\overline G$ was built from $G$, is a Kasparov element in $KK^1(A_1,D\otimes C([0,1]))$.
Indeed, the only thing to check is whether $\widetilde G_1^2$ is the identity modulo compact operators, as $\widetilde G_1$ commutes exactly with the action of $A_1$. But this is true by the previous observation. \vspace{0.2cm} \noindent Therefore $[j_1]\otimes_{A_f} x$ can be represented by the evaluation at $0$ of this Kasparov element. Let us describe it: the module part is $(H_1\oplus K\otimes_{i_1} A_1 )\otimes C_0(]-1,0[)\otimes_{\kappa_1} D$ with the obvious left $A_1$ action, as $(K\otimes C_0(]-1,0[))\otimes_{\kappa_0} D$ is isomorphic to $(K\otimes_{i_1}A_1)\otimes C_0(]-1,0[)\otimes_{\kappa_1} D$. With this identification, the operator is $$E_1=\begin{pmatrix} \widehat{C^-} & - F_1^*\otimes 1_{C_0(]-1,0[)} \otimes_{\kappa_1} 1 \widehat{S^-} \\ - \widehat{S^-}^*F_1\otimes 1_{C_0(]-1,0[)}\otimes_{\kappa_1} 1 & -\widehat{C^-} (q_1\otimes_{i_1} 1\otimes 1_{C_0(]-1,0[)})\otimes_{\kappa_1} 1+(1- q_1 \otimes_{i_1} 1 \otimes 1_{C_0(]-1,0[)})\otimes_{\kappa_1} 1\\ \end{pmatrix}$$ \noindent It is then clear, after identifying $C_0(]-1,0[)$ with $S$, that $[j_1]\otimes_{A_f} x$ is $z\otimes_{A_1} [\bar\kappa_1]$ with $z$ in $KK^1(A_1, A_1\otimes S)$. By recalling that $1-q_1$ commutes with the left action of $A_1$, it is clear that $z$ is represented by the pair $((H_1\oplus q_1K\otimes_{i_1} A_1)\otimes S, \overline E_1)$ with $\overline E_1=\begin{pmatrix} \widehat{C_1} & - F_1^*\otimes 1_{S} \widehat{S_1} \\ - \widehat{S_1}^*F_1\otimes 1_{S} & -\widehat{C_1} (q_1\otimes_{i_1} 1\otimes 1_{S})\\ \end{pmatrix}$ where $C_1$ is the function $\cos(\pi(t/2-1/2))$ and $S_1$ the function $\sin(\pi(t/2-1/2))$. \vspace{0.2cm} \noindent Following the proof of \ref{subequiv}, $z$ is clearly the product $z'\otimes b$, where $z'$ is the element of $KK^0(A_1,A_1)$ given by the module $H_1\oplus q_1K\otimes_{i_1} A_1$, with $H_1$ positively graded, the obvious left action of $A_1$, and the operator $\begin{pmatrix} 0&F_1^*\\ F_1&0\end{pmatrix}$.
We will be finished once we prove that $z'$ is $[Id_{A_1}]$. To do this, we represent $z'\oplus(-[Id_{A_1}])$ by the module $H_1\oplus q_1K\otimes_{i_1} A_1\oplus q_0K\otimes_{i_1} A_1\simeq H_1\oplus (1-q_2) K\otimes_{i_1} A_1$ and the previous operator. But this is a compact perturbation of $\begin{pmatrix} 0&\overline{F_1}^*\\ \overline{F_1}&0\end{pmatrix}$. This last operator is homotopic, via a simple rotation, to $\begin{pmatrix} \overline{F}_1^* \overline{F}_1&0\\ 0&\overline{F}_1\overline{F}_1^* \end{pmatrix}$, hence trivial, as $\overline{F}_1^* \overline{F}_1=1$ and $\overline{F}_1\overline{F}_1^*=1$ modulo compact operators, as observed before \ref{LemmaCompactCommutation}. \end{proof} \vspace{0.2cm} \noindent We are now ready to prove our theorem \ref{bigthm}. \begin{proof} Call $a \in KK^1(S, \mathbb{C})$ the inverse of $b$. The element $y=(1_{A_f}\otimes a)\otimes_{A_f} x$ is an element of $KK^0(A_f\otimes S, D)$. We claim that this is the inverse of $[j]$. Indeed, thanks to \ref{subequiv} we have that $$y\otimes_D [j]=(1_{A_f}\otimes a)\otimes_{A_f} x\otimes_D [j]= (1_{A_f}\otimes a)\otimes_{A_f} (1_{A_f}\otimes b)\otimes_{A_f\otimes S} ([Id_{A_f}]\otimes 1_S).$$ As $a\otimes_{ \mathbb{C}} b=[Id_S]$ we get that $ y\otimes_D [j]= (1_{A_f}\otimes [Id_S])\otimes_{A_f\otimes S}([Id_{A_f}]\otimes 1_S)$ is $[Id_{A_f\otimes S}]$. To prove the reverse equality, we will need a trick that can already be found in \cite{Pi86}. Observe first that for any $l=1,2$, using \ref{lemma-vertex}, \begin{eqnarray*} [\bar\kappa_l]\otimes_D [j]\otimes_{A_f\otimes S} y&=&[j\circ \bar\kappa_l]\otimes_{A_f\otimes S} y=([j_l]\otimes 1_S)\otimes_{A_f\otimes S} (1_{A_f}\otimes a)\otimes_{A_f}x\\ &=&(1_{A_l}\otimes a)\otimes_{A_l}[j_l]\otimes_{A_f}x\\ &=&(1_{A_l}\otimes a)\otimes_{A_l}(1_{A_l}\otimes b)\otimes_{A_l\otimes S}([Id_{A_l}]\otimes 1_S)\otimes_{A_l\otimes S} [\bar\kappa_l]\\ &=& [\bar\kappa_l].
\end{eqnarray*} \noindent We now need to compute $[j]\otimes_{A_f\otimes S} y\otimes_D [ev_0]$. To do this we will use the following lemma. \begin{lemma}\label{lemma-edge} In $KK^1(D\otimes S, A_f\otimes S)$, one has $([j]\otimes_{A_f\otimes S}(1_{A_f}\otimes a))\otimes 1_S= -(1_D\otimes a)\otimes_D [j].$ \end{lemma} \begin{proof} Indeed, $$(1_D\otimes b)\otimes_{D\otimes S}([j]\otimes_{A_f\otimes S}(1_{A_f}\otimes a))\otimes 1_S=[j]\otimes_{A_f\otimes S}(1_{A_f}\otimes(1_S\otimes b)\otimes_{S\otimes S} (a\otimes 1_S)).$$ If $\Sigma$ is the flip automorphism of $S\otimes S$, then clearly $[\Sigma]=-[Id_{S\otimes S}]$ in $KK^0(S\otimes S, S\otimes S)$. As a consequence, $(1_S\otimes b)\otimes_{S\otimes S}(a\otimes 1_S)=-1_S\otimes ( b\otimes_{S} a)=-[Id_S]$. Hence $$(1_D\otimes b)\otimes_{D\otimes S}(([j]\otimes_{A_f\otimes S}(1_{A_f}\otimes a))\otimes 1_S)=-[j].$$ Multiplying both sides by $1_D\otimes a$ gives the result. \end{proof} \vspace{0.2cm} \noindent In view of the lemma and \ref{lemmaev0}, one has: \begin{eqnarray*} ([j]\otimes_{A_f\otimes S} y\otimes_D [ev_0])\otimes 1_S&=&-(1_D\otimes a)\otimes_D ([j]\otimes_{A_f\otimes S} (x\otimes_D [ev_0])\otimes 1_S)\\ &=&+ (1_D\otimes a)\otimes_D [ev_0] \otimes_B (1_B\otimes b)\\ &=&(1_D\otimes a)\otimes_D (1_D\otimes b)\otimes_{D\otimes S}( [ev_0]\otimes 1_S)\\ &=&[ev_0]\otimes 1_S \end{eqnarray*} \noindent As $-\otimes 1_S$ from $KK(B_1,B_2)$ to $KK(B_1\otimes S, B_2\otimes S)$ is an isomorphism for any $B_1$ and $B_2$, we get $[j]\otimes_{A_f\otimes S} y\otimes_D [ev_0]=[ev_0]$. Denote now $q= [Id_D]-[j]\otimes_{A_f\otimes S} y$. As $y\otimes_D [j]=[Id_{A_f\otimes S}]$, $q$ is an idempotent in the ring $KK^0(D, D)$.
On the other hand, $D$ fits into a short exact sequence $$0\rightarrow A_1\otimes S\oplus A_2\otimes S\overset{\bar\kappa_1\oplus \bar\kappa_2}{\longrightarrow} D\overset{ev_0}\longrightarrow B\rightarrow 0.$$ The induced six-term exact sequence for the functor $KK^0(D,-)$ then shows that, as $q\otimes_D [ev_0]=0$, there exist $q_l$ in $KK^0(D,A_l\otimes S)$ for $l=1,2$ such that $q=(q_1\oplus q_2)\otimes_{(A_1\otimes S)\oplus (A_2\otimes S)}([\bar\kappa_1]\oplus [\bar\kappa_2] )$. So $q=q\otimes_D q =(q_1\oplus q_2)\otimes_{(A_1\otimes S)\oplus (A_2\otimes S)}([\bar\kappa_1]\oplus [\bar\kappa_2] )\otimes_D q=0$, because $[\bar\kappa_l]\otimes_D q=0$ for $l=1,2$, as observed before \ref{lemma-edge}. Therefore $[Id_D]= [j]\otimes_{A_f\otimes S} y$ and the K-equivalence between $A_f$ and $D$ is established. \end{proof} \noindent We obtain the following immediate corollary. \begin{corollary} The amalgamated free product of two discrete quantum groups is $K$-amenable if and only if the two initial discrete quantum groups are $K$-amenable. \end{corollary} \begin{proof} One direction is obvious. Let us prove the converse. Let $G_1, G_2, H$ be compact quantum groups and suppose that $\widehat{H}$ is a common discrete quantum subgroup of both $\widehat{G_1},\widehat{G_2}$ and that $\widehat{G_k}$ is $K$-amenable for $k=1,2$. Write, for $k=1,2$, $C_m(G_k), C_m(H)$ for the full C*-algebras and $C(G_k), C(H)$ for the reduced C*-algebras, and view $C_m(H)\subset C_m(G_k)$, $C(H)\subset C(G_k)$, for $k=1,2$. Let $\widehat{G}$ be the amalgamated free product discrete quantum group. One has $C_m(G)=C_m(G_1)\underset{C_m(H)}{*}C_m(G_2)$ and $C(G)=C(G_1)\underset{C(H)}{\overset{e}{*}}C(G_2)$, where the edge-reduced amalgamated free product is taken with respect to the \textit{faithful} Haar states on $C(G_k)$, for $k=1,2$. Let $\lambda_{G_k}\,:\,C_m(G_k)\rightarrow C(G_k)$ be the canonical surjection. By assumption, $\lambda_{G_k}$ is $K$-invertible for $k=1,2$.
Observe that the canonical surjection $\lambda_G\,:\,C_m(G)\rightarrow C(G)$ is given by $\lambda_G=\pi\circ\lambda$, where $\lambda\,:\, C_m(G_1)\underset{C_m(H)}{*}C_m(G_2)\rightarrow C(G_1)\underset{C(H)}{*}C(G_2)$ is the free product of the maps $\lambda_{G_1}$ and $\lambda_{G_2}$ and $\pi\,:\,C(G_1)\underset{C(H)}{*}C(G_2)\rightarrow C(G_1)\underset{C(H)}{\overset{e}{*}}C(G_2)$ is the canonical quotient map. By Theorem \ref{TheoremKequivalence}, $\pi$ is $K$-invertible, and using the exact sequence of the full free product and the five lemma, $\lambda$ is $K$-invertible. Hence $\lambda_G=\pi\circ\lambda$ is $K$-invertible, i.e., $\widehat{G}$ is $K$-amenable. \end{proof}
Joely Fisher Says 'Amazing' Late Father Eddie Fisher and Mom Connie Stevens Influenced Her Own Career

When Joely Fisher was growing up, Malibu was not yet fashionable or expensive. She remembers her mom, actress and singer Connie Stevens, being surprised by a home that stood out from the rest. "She said, 'Who is the [jerk] that has a swimming pool on the beach?' The real estate agent started laughing and said, 'Debbie Reynolds!' " Joely recalls to Closer. Debbie, of course, was the first wife of Joely's father, singer Eddie Fisher. "We lived next door to Debbie for seven years. I called her Momma Deb," says Joely, who tells stories from her Hollywood upbringing in her 2017 memoir, Growing Up Fisher. "She was spectacular, interesting and charming. As much of a broad as my mother is."

It's no surprise that the apple didn't fall far from the tree. After graduation from Boston's Emerson College, Joely, 54, launched her own career as an actress, singer and, lately, a screenwriter and director. "You can't just wait for someone else to say, 'Oh, you are the person that I want for this job,' " she explains. "You have to keep creating avenues for your own expression."

Considering your famous parents, you must have always known you were headed for a showbiz career.
"Oh, sure, since birth. In my memoir, I describe my family as a fishbowl. We have been watched, scrutinized and judged. But as they watched us swim around, we have perfected our stroke! We were like carny people."

Did you travel a lot with your mom as a child?
"We would go on the road with her. It was a survival thing. It wasn't all first-class and limos. She was a single mother working to support the family. My mom called me the gypsy. I knew that this was my life from the very start."

What's your mom, Connie, like at home?
"She doesn't always practice what she preaches, but she is an affectionate, generous, incandescent spirit of a woman. She was nearly taken out by a stroke five years ago. She says, 'I bounced back!' and I was like, 'I don't know if I can describe what you're doing right now as bouncing, Mom! But we are happy to have you alive. You have eight grandchildren, and they adore you.' "

What was the best thing you learned from her?
"I remember when I was young and I had my first screen test. I didn't get the role, and I was just hysterical. And she was like, 'It's a job. It wasn't your role, it was somebody else's role. And it doesn't validate who you are as a person.' She reiterated that over the years — what an incredible human being I am and how proud she is of me and what a great mother I am. And so I guess I took that in."

That's lovely. How about your dad, Eddie? What was your relationship like?
"I didn't grow up with my dad. He never really acted like a dad. But he was amazing, and we did develop a real friendship after I turned 16 and then later in life. I adored him and forgave him. He is gone 11 years."

It's never easy to lose a parent.
"When my father passed away…my mother said, 'Oh, we have to memorialize Eddie,' because she still calls him 'delicious,' even after all these years. So we had a little party with bagels and lox for my little Jewish dad. My sisters and I got up and spoke. By the end, we all had included in our speech that we were his favorite! The reason why we all said that is because at one point in our life, he made us feel that way. He didn't know really how to be a father, but he did adore us all, and he did honor us all in that way."

You also lost your half-sister Carrie Fisher. Do you miss her?
"I miss her desperately. The world doesn't feel right without her. She was the smartest person I know, and she was the funniest. I aspire to leave the same kind of mark."
If you hadn't been an actress and singer, what do you think you would have done?
"When I was younger, I used to say, 'Oh, I would love to have been the cruise director on The Love Boat' — but that's show business, too. Now, I've authored a book. I've written screenplays, I've done Broadway musicals, I've done television, I'm directing. I always joke, I am a butcher, a baker and a candlestick maker, but I really am all those things!"

Are you creating during the pandemic?
"Sure. Since the pandemic started, I've written four pilots for television. And one of them is in development right now. So, eventually, you'll be seeing the fruits of my writing labor on the screen. I also ran for office in my union."

Wow, how did that happen?
"It became very important to me because I saw what's been happening. Health care is tied to our employment. During the pandemic, 12,000 seniors, the icons of [show] business, lost their insurance. I'm ignited by that. I said I have to get into the boardroom and negotiate better contracts for people like my mom."

You're a mom yourself. How old are your kids?
"I have two stepsons, who are 35 and 33. They're both married with two children each, so I'm Glam-ma! I also have a 20-year-old daughter who is living in New York City and working on a television show behind the camera. I have an almost 16-year-old daughter who's in high school, and a 13-year-old. Their names are, from the top down, Cameron, Collin, Skylar, True and Olivia."

This year you participated in Doris Bergman's Emmy gifting suite, which supports the foster children charity Wednesday's Child. Why is this organization important to you?
"I have an adopted child myself, so it resonates with me to support families that are underserved and underrepresented in the world. My concern is always for the next generation. I do want to write, eventually, about adoption and mental health.
Not from a Hollywood standpoint, but [talking about] the mental health of the children of this generation and the pressures they are under."

What do you like most about being the age you are?
"I don't really give a [damn] anymore — just kidding! I think when you get to a certain age, you don't have to try so hard to prove who you are or what you've done. It's like, 'This is who I am. Take it or leave it.' I've tried really hard to be an amazing human, an artist and a good mom. What you see is what you get."

— Reporting by Susan Hornik

For more on this story, pick up the latest issue of Closer magazine, on newsstands now.
Elly Jacoba Meijer (born Hengelo, 16 May 1936) is a painter and draughtswoman from Tubbergen. Her work includes woodcuts, mosaics, gouaches, pen drawings and sgraffiti.

Biography

Her father was a florist and gardener and taught evening classes at the horticultural school. He illustrated his weekly article in the Twentsche Courant with pen drawings. Elly, too, drew plants, flowers and landscapes from childhood on. The growth processes of plants and flowers would later become her source of inspiration. After secondary school (ULO) she trained at the AKI in Enschede. Her teachers included Jan Stroosma, Philip Kouwen and Johan Haanstra. In 1960 she passed her examinations in free painting and monumental art. Since 1951 the Netherlands had had the Percentageregeling beeldende kunst (percentage scheme for visual art) for the decorative embellishment of publicly financed buildings: one and a half percent of the construction cost of a new building was reserved for art. Meijer's early work consists largely of wall-filling sgraffiti and glass mosaics on churches, schools and other public buildings. Because she had graduated cum laude, she was commissioned as early as 1957 to make a 7 m² mosaic around the auditorium doors of the HTS in Enschede. Several commissions in Twente followed. In 1961 she made an artwork for the extended entrance of the main post office in Almelo, consisting of 100,000 broken glass-mosaic tesserae with a total surface of 56 m². The post office was demolished in 2000. Until 1961 she lived successively in Hengelo, Enschede and Almelo. At the AKI she had met her future husband, Ben Griepink. When he was appointed in 1961 as a designer of record sleeves for Philips Phonogram, she moved with him to the Hyacintstraat in Baarn. When she met Hans Knoop, the Utrecht designer of the Maria Koninginkerk, he commissioned her to make a mosaic tabernacle of coloured glass. For the semicircular exterior of the Maria Koninginkerk she made a sgraffito depicting the Parable of the Wise and the Foolish Virgins. The wall was completely restored in 2009. The collaboration with Knoop led to further monumental works in Enschede, Dinxperlo and Losser. In 1971 the family moved to Goor in Twente and later to Het Stift in Weerselo. In that period she still made works in Maartensdijk, Baarn and Bunschoten. Several of her works were later lost to demolition. Until 1966 her style was expressionist; afterwards it became more abstract. In 1980 the monumental work became physically too demanding for her, and she turned to free work such as gouaches and free textile forms in her studio in Tubbergen. As a private commission she painted a harpsichord with 140 flowers, butterflies and birds. She worked as a teacher of drawing, painting and crafts at several primary schools. In 1995 she designed and executed the St. Jorismis of Herman Finkers.

Prizes

In 1959 she won first prize for a design/mural for the Providentia bank in Enschede.

Works (selection)

1957 mosaic - HTS Enschede
1959 sgraffito - post office Almelo
1959 glass mosaic - Onze Lieve Vrouwe school Markelo
1960 mosaic (56 m²) - main post office Almelo
1962 sgraffiti - J.W. Schuiling NV, Groot Handelsgebouw Haven Hengelo
1963 sgraffiti - exterior wall of the Maria Koninginkerk in Baarn
1964 sgraffito - exterior wall of the Heilige Geestkerk in Enschede
1965 glass mosaic - primary school in Utrecht
1965 glass mosaic - Montinischool in Baarn
±1965 mosaic - main entrance of the Twents Carmel College in Enschede (Bonhoeffercollege)
1965 Italian glass mosaic - Het Zonneveld on the former nursery school Zonneveld on the Zandvoortseweg in Baarn
1968 mosaic (2.50 m x 4 m) - between the main entrances of the Sint Maartenkerk in Maartensdijk
1968 sgraffiti and mosaic, tabernacle and altar cross (9 m x 3 m) - Sint Martinuskerk in Losser
1970 mosaic of the municipal coat of arms - facade of the town hall of Baarn, on the reintroduction of the old municipal arms
1974 five mosaics on the theme 'music' - primary school in Bunschoten
1977 mosaic 't Iemenschoer in Hengelo

Exhibitions

1980 graphic work and paintings - galerie Quadraat, Almelo
1986 retrospective - town hall of Weerselo
1988 watercolours and paintings - De Wanne, Beltrum
1988 oil paintings and gouaches - library of Eibergen
1991 oil paintings and gouaches - town hall of Weerselo
1992 group exhibition of textile art - Jaarbeurs Utrecht
1992 free paintings - galerie Prinsengang, Eibergen
1994 oil paintings, watercolours and gouaches - st Ouwenaller
\section{Introduction}\label{sec:intro} Big data poses new challenges not only in storage but in intelligent data analytics as well. Many organisations have the infrastructure to maintain big structured data and need to find methods to efficiently discover patterns and relationships to derive intelligence~\cite{oracle2,oracle-bigdata}. Thus, it would be desirable to be able to construct from big data a representative part that can explain aggregate queries, e.g., why the salaries or the sales of a department are high. AI systems typically make decisions based on the value of a function computed on data's attributes. Several approaches have in common the computation of aggregates over the whole or large subsets of the data, which helps explain patterns and trends of the data. E.g., recommendation systems rank and retrieve items that are more interesting for a specific user by aggregating existing recommendations~\cite{rec2011}. For another example, collaborative filtering computes a function which uses aggregates and a sum over the existing ratings from all users for each product in order to predict the preference of a new user~\cite{collaborate,ers-2008}. User preferences are often described as queries~\cite{journal-aicom09}, e.g., queries that give constraints on item features that need to be satisfied. Another reason for which data analytics seek to explain data is for data debugging purposes. {\em Data debugging}, which is the process that allows users to find incorrect data~\cite{MeliouGNS11,MusluBM13}, is a research direction that is growing fast. Data are collected by various techniques which, moreover, are unknown to and uncontrolled by the user, and thus are often erroneous. Finding which part of the data contains errors is essential for companies and affects a large part of their business. All these applications call for techniques to explain our data. Aggregation is a significant component in all of them.
In this paper we offer a technique that constructs a summary of the data with properties that allow it to be used efficiently to explain much of the data behaviour in aggregate for sums. We refer to this summary as {\em Aggregate Lineage}, since in most applications it represents the source of an aggregate query\footnote{Lineage used to be referred to as ``explain'' in database papers of the late 80's.}. Lineage (a.k.a. provenance) keeps track of where data comes from. Lineage has been investigated for data debugging purposes~\cite{IkedaCFSTW12}. Storing the complete lineage of data can be prohibitively expensive, and storage-saving techniques to eliminate or simplify similar patterns in it are studied in~\cite{ChapmanJR08}. For select-project-join SQL queries, lineage stores the set of {\em all} tuples that were used to compute a tuple in the answer of the query~\cite{BenjellounSHTW08}. This is natural for select-project-join SQL queries, where original attribute values are ``copied'' into attribute values of the answer. However, in an aggregate query the value of the answer is the result of applying an aggregate function over many numerical attribute values. When we want to understand why we get an aggregate answer, it may no longer be important or feasible for lineage to point to all contributing original tuples and their values. We would rather want to compute {\em few} values that can be used to tell us as much as possible about the origin of the result of an aggregate query. Is this at all possible, however, and if it is, what are the limitations? In this paper we initiate an investigation of such questions and, interestingly, we show that useful and practical solutions exist. In particular, we offer a technique that uses randomisation to compute Aggregate Lineage, which is a small representative sample (it is more sophisticated than just a simple random sample) of the data.
This sample has the property of allowing good approximations of a sum query on ad hoc subsets of the data -- we call these {\em test queries}. Test queries are applied to the Aggregate Lineage -- not the whole original data. The test queries which we consider are sum queries over the same aggregated attribute, conditioned on any grouping attributes, depending on which subsets of the data we want to test. We give performance guarantees about the quality of the results of the test queries that show the approximation to be good for test queries with large values (i.e., close to the total sum over the whole set of data). Our performance guarantees hold, with high probability, for any set of queries, even if the number of queries is exponentially large in the size of the lineage. The only restriction is that the queries should be oblivious to the actual Aggregate Lineage. This restriction is standard in all previous work on random representative subsets for the evaluation of aggregate queries and is naturally satisfied in virtually all practical applications. The following example offers a scenario for how Aggregate Lineage can be used in data debugging and demonstrates how some test queries can be defined. \begin{example} \label{ex:first} Suppose that the accounting department of a big company maintains a database with a relation $Salaries$ with hundreds of attributes and millions of tuples. Each tuple in the relation may contain an identifier of an employee stored in attribute $EmplID$, his department stored in attribute $Department$, his annual salary stored in attribute $Sal$, and many more attribute values. Other relations are extracted from this relation, e.g., a relation which contains aggregated data such as the total sum of salaries of all employees. A user is trying to use the second relation for decision making, but he finds that the total sum of salaries is unacceptably high.
He does not have easy access to the original relation, or he does not want to waste time posing time-consuming queries on the original big relation. The error could have several causes (duplication of data in a certain time period, incorrect code that computes salaries in a new department). Thus, e.g., if we could find the total sum of salaries for employees in the toy department during 2009, and see that it is unreasonably high -- still close to the total sum of all employees' salaries -- then we would be able to detect such errors and narrow them down to small (and controllable) pieces of data. In order to do that, we need the capability of posing sum queries restricted to certain parts of the data by using combinations of attributes. This will help the user understand which piece of data is incorrect. We do not know in advance, however, which piece of data the user will want to inquire about, and thus Aggregate Lineage should allow the user to get good approximate answers to whatever queries he wants to try. There are billions of such possible queries, and hence billions of subsets of the data for which we want to compute a good approximation of the sum of salaries. We want Aggregate Lineage to offer this possibility. \end{example} We propose to keep as Aggregate Lineage a small relation under the same schema as the original relation. In order to select which tuples to include, we use value-based sampling with repetition, i.e., weighted random sampling where the probability of selecting each tuple is proportional to its value on the summed attribute. The intuition why this method works is the following. Larger values contribute more to the sum than smaller ones, thus we expect that tuples with larger values should be selected more often than tuples with smaller values. Hence, we could end up with a tuple selected many times in the sample even if it appears only once in the original data.
On the other hand, if there are many tuples with values of moderate size, many of them will be selected in the Aggregate Lineage, so that their total contribution to the approximation of the sum remains significant. \subsection{Our contribution} In our approach, Aggregate Lineage is a small relation with the same schema as the original relation and with the property of offering good approximations to test queries posed on it. To present performance guarantees, we build on Alth\"{o}fer's Sparsification Lemma \cite{sparse94}. In \cite{sparse94}, Alth\"{o}fer shows that the result of weighted random sampling over a probability vector is, with high probability, a sparse approximation of the original vector. This technique has found numerous applications, e.g., in the efficient approximation of Nash equilibria for (bi)matrix games \cite{LMM03}, in the very fast computation of approximate solutions in Linear Programming \cite{LY94}, and in selfish network design \cite{FKS12}. In this paper, we show for the first time that the techniques of \cite{sparse94} are also useful in the context of sum database queries with lineage. Our results show that the Aggregate Lineage that we extract has the following properties (which we describe in technical terms and prove rigorously in Section~\ref{sec:tec}): \begin{itemize} \item Its size is practically independent of the size of the original data. \item It can be used to approximate well all ``large'' sums (i.e., with values close to the total sum) of the aggregated attribute in time that depends only on its size, and thus is almost independent of the size of the original data.
\end{itemize} \section{Computing Aggregate Lineage} \label{sec:com} \begin{figure*} \begin{center} \begin{tabular}{| c |l|} \hline $t[A]$ & value of attribute $A$ in tuple $t$\\ \hline $S$ & sum of all values over attribute $A$\\ \hline $p_t$ & probability that Algorithm Comp-Lineage selects tuple $t$\\ \hline $Fr$ & additional attribute recording the frequency of a tuple in the lineage\\ \hline $L_{R.A}$ & the lineage relation computed by Algorithm Comp-Lineage w.r.t.\ attribute $A$\\ \hline $Q(R.A)$ (Sec. 4) & a sub-sum query computed over original relation $R$ w.r.t.\ attribute $A$\\ \hline $Q'(L_{R.A})$ (Sec. 4)& sub-sum query $Q$ computed over the aggregate lineage relation \\ \hline $I^Q_R$ (Sec. 4)& set of identifiers of the tuples in relation $R$ that satisfy the predicates in query $Q$\\ \hline \end{tabular} \caption{Main symbols used in the paper.} \label{symbols-table} \end{center} \end{figure*} In this section, we present the randomised algorithm Comp-Lineage, which computes Aggregate Lineage in one pass over the data and in time linear in the size $n$ of the original database relation. In Section~\ref{sec:tec} we show that the output of Comp-Lineage is useful to approximate {\em arbitrary} ad-hoc sum test queries in time independent of $n$. We note that our algorithm is agnostic of the specific sum queries that will be approximated by using its output. Suppose that we are given a database with a relation $R$ with $n$ tuples, and that we are given a positive integer $b$ which is the number of tuples we have decided to include in the Aggregate Lineage (in Section~\ref{sec:tec} we explain how to choose $b$ so as to obtain good performance and approximation guarantees). Suppose that $A$ is a numerical attribute of $R$ which takes nonnegative values. Let $S$ be the sum of the values of attribute $A$ over all $n$ tuples. The algorithm is essentially a biased sampling with repetition that selects $b$ tuples from $R$.
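For concreteness, this biased sampling with repetition can be sketched in Python. This is a minimal in-memory illustration under our own naming (tuples as dictionaries, \texttt{attr} the summed attribute); it is not a one-pass streaming implementation.

```python
import random
from collections import Counter

def comp_lineage(tuples, attr, b, rng=None):
    """Sketch of the biased sampling: draw b tuple indices with
    repetition, index i with probability tuples[i][attr] / S, and
    return {index: Fr} recording how often each tuple was drawn."""
    rng = rng or random.Random()
    weights = [t[attr] for t in tuples]  # nonnegative values of attribute A
    picks = rng.choices(range(len(tuples)), weights=weights, k=b)
    return Counter(picks)  # tuple index -> frequency Fr
```

As in the text, tuples with large values dominate the sample and may appear with frequency greater than one, while tuples with tiny values are rarely drawn.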
Each tuple $t$ is selected with probability $p_t=t[A]/S$, where $t[A]$ is the value of attribute $A$ in $t$. It initially collects a bag (a.k.a.\ multiset, which may contain the same element more than once) of tuples (since each tuple may be selected multiple times), which is turned into a set of tuples by adding an extra attribute $Fr$ (for Frequency) which records the number of times each tuple was selected. We denote by $L_{R.A}$ the Aggregate Lineage of relation $R$ with sum attribute $A$. \smallskip {\sc Algorithm Comp-Lineage} {\small {\bf Input:} A relation $R$ with $n$ tuples and positive integer $b<n$. {\bf Output:} An $\mathrm{Aggregate~Lineage}$ relation $L_{R.A}$ with at most $b$ tuples. \begin{itemize} \item Randomly select with repetition one out of the $n$ tuples of $R$ in $b$ trials, where each tuple $t$ is selected with probability $p_t$. \item Form relation $L_{R.A}$ by including all tuples selected above and adding an extra attribute $Fr$ to each tuple to record how many times this tuple was selected. \end{itemize} } We can use the techniques of \cite{EfraimidisS06} for weighted random sampling and efficiently implement our algorithm to run in time linear in the size of the input, either in a parallel/distributed environment or over data streams. Table~\ref{symbols-table} summarizes the main symbols used throughout the paper. \section{Running Example} \begin{example}\label{ex:al} We illustrate Algorithm Comp-Lineage by applying it to Example~\ref{ex:first} with $b=8,852$ and presenting the data and the Aggregate Lineage in Figure~\ref{fig:ex0-sparse}. Note that Figure~\ref{fig:ex0-sparse} only shows the value of the aggregated attribute ($Sal$ in our example); the rest of the tuple is not shown. \begin{figure*} \begin{center} \begin{tabular}{| c |c || c|c|c|c|} \hline $Sal$: & \# of Tuples & Total \# of Tuples & $Fr$ & \# of Tuples with & $Sal$: Values $Fr\cdotp S/b$ \\ O.V.
& in $Salaries$ & in Aggregate Lineage & & $Fr$ & in Aggregate Lineage \\ \hline \hline \multirow{9}{*}{$10^{9}$} & \multirow{9}{*}{$100$}& \multirow{9}{*}{$100$} & $3$ & $5$ & $3\cdotp S/b= 4.41\times 10^{8}$ \\ \cline{4-6} & & & $4$ & $10$ & $4\cdotp S/b=5.87\times10^{8}$\\ \cline{4-6} & & & $5$ & $19$ & $5\cdotp S/b=7.34\times10^{8}$\\ \cline{4-6} & & & $6$ & $14$ & $6\cdotp S/b=8.81\times10^{8}$ \\ \cline{4-6} & & & $7$ & $13$ & $7\cdotp S/b=1.03\times10^{9}$\\ \cline{4-6} & & & $8$ & $15$ & $8\cdotp S/b=1.17\times10^{9}$ \\ \cline{4-6} & & & $9$ & $8$ & $9\cdotp S/b=1.32\times10^{9}$\\ \cline{4-6} & & & $10$ & $12$ & $10\cdotp S/b=1.47\times10^{9}$\\ \cline{4-6} & & & $11$ & $4$ & $11\cdotp S/b=1.62\times10^{9}$ \\ \hline \multirow{4}{*}{$10^{8}$} & \multirow{4}{*}{$1,000$} & \multirow{4}{*}{$497$} & $1$ & $347$ & $ S/b=1.47\times 10^{8}$ \\ \cline{4-6} & & & $2$ & $123$ & $2\cdotp S/b=2.94\times10^{8}$\\ \cline{4-6} & & & $3$ & $20$ & $3\cdotp S/b=4.41\times10^{8}$ \\ \cline{4-6} & & & $4$ & $7$ & $4\cdotp S/b=5.87\times10^{8}$ \\ \hline $10^{7}$ & $10,000$ & $681$ & $1$ & $681$ &$ S/b=1.47\times 10^{8}$ \\ \hline $10^{6}$ & $1,000,000$& $6,809$ & $1$ & $6,809$ & $ S/b=1.47\times 10^{8}$ \\ \hline $10$ & $1,000$& $0$& $0$ & $0$ & $0$ \\ \hline \hline \end{tabular} \caption{Properties of $\mathrm{Aggregate~Lineage}$ $L_{Salaries.Sal}$ for $b=8,852$. The first two columns describe the data. The next three columns describe the Aggregate Lineage relation. The last column shows how we use this lineage to compute sub-sums.} \label{fig:ex0-sparse} \end{center} \end{figure*} The first two columns of Figure~\ref{fig:ex0-sparse} present the data in relation $Salaries$. In order to be able to present many tuples we have chosen a relation with a few values for attribute $Sal$, actually five (i.e., $10^9, 10^8, 10^7,10^6$ and 10) and their Original Values (O.V.) are shown in the first column. 
The second column shows how many tuples in $Salaries$ have these values in $Sal$. Thus, it says, e.g., that there are 100 tuples with value in $Sal$ equal to $10^9$, 1,000 tuples with value in $Sal$ equal to $10^8$ and so on. The third column in Figure~\ref{fig:ex0-sparse} shows how many tuples from $Salaries$ with a specific value in $Sal$ are selected by {\sc Algorithm Comp-Lineage} to be included in the Aggregate Lineage relation. Thus, e.g., all 100 tuples with $Sal=10^9$ were chosen, only 681 tuples with $Sal=10^7$ were chosen, and no tuple with $Sal=10$ was chosen. In order to present the Aggregate Lineage relation $L_{Salaries.Sal}$ in the most illustrative way, we have chosen to partition its tuples into blocks (each block further divided into multiple rows in columns 4, 5 and 6), each block corresponding to one value of $Sal$ in $Salaries$. Thus the first block has 9 rows, the second block has 4 rows and the last three blocks have one row each. This breaking into blocks gives a visualisation of the characteristics of the algorithm. The fourth column stores the extra attribute frequency $Fr$, which tells how many times a certain tuple was selected by the algorithm, and the fifth column stores the number of tuples that were selected that many times. Thus, e.g., the first row says that 5 tuples were selected 3 times each. The ninth row says that 4 tuples from $Salaries$ were selected 11 times each. The blocks give us an intuition of the characteristics of the Aggregate Lineage. The first block corresponds to the largest value of $Sal$, and tuples with this value (i.e., $Sal= 10^9$) contributed quite heavily to the lineage - all 100 tuples with $Sal = 10^9$ were selected multiple times. In more detail, there are 100 tuples with value $Sal= 10^9$. Of those tuples, 5 were added to the bag 3 times each, 10 tuples were added to the bag 4 times each, and so on.
Thus, by considering these 100 tuples, Algorithm Comp-Lineage added to the bag $3\cdotp 5+4\cdotp 10+5\cdotp 19+6\cdotp 14+7\cdotp 13+8\cdotp 15+9\cdotp 8+10\cdotp 12+11\cdotp 4=\,681$ tuples in total. That is to say, each of those 100 tuples contributed on average $6.81$ to the bag. When we get a set out of the bag by using frequencies (to avoid repeating a tuple multiple times), we see that the average frequency per tuple is $6.81$. So, from this first block, the $681$ tuples in the bag of Algorithm Comp-Lineage are transformed to a set of $100$ tuples in Aggregate Lineage with average frequency $6.81$. We can compare it with the average frequency in the second block, which is 0.681 (this is $1\cdotp 347 + 2\cdotp 123 +3\cdotp 20 +4\cdotp 7=681$ divided by 1000 tuples), and see that, in the data of our example, each tuple of the first block contributes more heavily to the lineage. As we will explain in more detail later, this partly shows why the lineage is useful for discovering almost accurately sub-sums that are large compared to the total sum, whereas when a sub-sum is comparatively small, the lineage cannot be used to compute it accurately. The second block did not contribute that heavily but still quite a lot: around half of the tuples with $Sal=10^8$ were selected at least once, and quite a few more than once; in total this block contributed 681 tuples to the bag. The third block contributed moderately. The fourth block is interesting because the value of $Sal$ is very small, only $10^6$, but it contributed quite a lot due to the fact that there are many tuples in $Salaries$ with $Sal=10^6$; thus it contributed almost 85 percent of the tuples in the Aggregate Lineage. Finally, the last column in the figure shows how much each tuple from the Aggregate Lineage contributes to the approximation of sub-sums that are computed by the test queries.
The same tuple may be added to the Aggregate Lineage several times, as recorded in the new attribute $Fr$, and thus, in order to calculate the contribution of a certain tuple, we multiply its frequency in $Fr$ by $S/b$. By doing so, some tuples (e.g., the ones in the fourth block) in our example of Figure~\ref{fig:ex0-sparse} will contribute much more than their actual value in $Sal$. But this is to compensate for the tuples with value close to it (same value in our example) that are not selected to be included in the Aggregate Lineage. In the next section we give the technical details on how Aggregate Lineage can be used in order to approximate sub-sums. \end{example} Note that $\mathrm{Aggregate~Lineage}$ does not assume any knowledge of the query set: i.e., we run the random selection of Algorithm Comp-Lineage only once and compute $\mathrm L_{R.A}$ without assuming anything about the queries. Then, this same relation $\mathrm L_{R.A}$ can be used to help us understand any sub-sum test query, without requiring that the test queries are given beforehand or requiring that the test queries are chosen in any specific fashion (e.g., they do not have to be chosen uniformly at random), as long as the query choice is oblivious to the actual sample computed by $\mathrm{Aggregate~Lineage}$% \footnote{In technical terms, the queries are posed by an \emph{oblivious adversary}, i.e., an adversary that knows how exactly $\mathrm{Aggregate~Lineage}$ works but does not have access to its random choices. The restriction to oblivious adversaries is standard and unavoidable, since if one knows the actual value of $\mathrm L_{R.A}$, he can construct a query that includes only tuples not belonging to $\mathrm L_{R.A}$, for which no meaningful approximation guarantee would be possible.}. We first present the theoretical approximation guarantees and then demonstrate how these guarantees play out for debugging on our running example.
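To make the evaluation step concrete: a test query is answered by scanning only the small lineage relation and adding $Fr\cdotp S/b$ for each lineage tuple that satisfies the query's predicates, exactly as in the last column of Figure~\ref{fig:ex0-sparse}. A minimal Python sketch (names are ours, not from the paper):

```python
def approx_sum(lineage_fr, tuples, predicate, S, b):
    """Approximate a SUM test query over the lineage: add Fr * S/b for
    each sampled tuple (given as {tuple index: Fr}) that satisfies the
    query's predicate."""
    return sum(fr * S / b
               for i, fr in lineage_fr.items()
               if predicate(tuples[i]))
```

Note that the running time depends only on the lineage size, not on the size of the original relation.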
\section{Approximation Guarantees of Test Queries on Aggregate Lineage} \label{sec:tec} In this section we prove the theoretical guarantees of $\mathrm{Aggregate~Lineage}$. Let $R$ be a relation with a nonnegative numerical attribute $A$. We consider SUM queries that ask for the sum of attribute's $A$ values over arbitrary subsets of the tuples in relation $R$. We use tuple identifiers in order to succinctly represent subsets of tuples. Thus, any SUM query defines a set of tuple identifiers for tuples that satisfy its predicates, hence the following formal definitions: \begin{definition}[Exact SUM $Q(R.A)$] \label{def:exact} Let $R$ be a database relation. We attach a tuple identifier to each tuple of $R$. We denote by $I_R$ the set of all identifiers in relation $R$. Given an attribute $A$ in the schema of $R$, we denote by $a_i$ the value of attribute $R.A$ in the tuple with identifier $i$ in $R$. Let $Q$ be a SUM query over $R.A$. We denote by $I^Q_R$ the set of tuple identifiers from $I_R$ for tuples of $R$ that satisfy $Q$'s predicates. The result of a SUM query, $Q(R.A)$, is the summation of the values of $R.A$ over the set of tuples with identifiers that appear in $I^Q_R$, i.e., $Q(R.A)=\sum_{i\in I^Q_R} a_i$. \end{definition} \begin{definition}[Approximated SUM $Q'(L_{R.A})$] \label{def:appr} Let $Q$ be a SUM query over $R.A$ and let $L_{R.A}$ be an $\mathrm{Aggregate~Lineage}$. We attach a tuple identifier to each tuple of $L_{R.A}$. We denote by $I_{L}$ the set of all identifiers in $L_{R.A}$. We denote by $I^Q_L$ the set of tuple identifiers from $I_{L}$ for tuples of $L_{R.A}$ that satisfy $Q$'s predicates (since the set of attributes of $R$ is a subset of the set of attributes of $L_{R.A}$, we have that the predicates of a SUM query $Q$, expressed on attributes of $R$, define $I^Q_{L}$). We denote by $f_i$ the value of attribute $L_{R.A}.Fr$ in the tuple with identifier $i$ in $L_{R.A}$.
The approximated result of SUM query $Q$, denoted by $Q'(L_{R.A})$, is the summation of the values of $L_{R.A}.Fr$ over the set of tuples with identifiers that appear in $I^Q_L$, multiplied by $S/b$, i.e., $Q'(L_{R.A})=\sum_{i\in I^Q_L} f_i\cdotp S/b$. \end{definition} The following theorem provides the performance guarantees for any arbitrary set of $m$ SUM queries computed over the Aggregate Lineage relation in order to serve as approximations of the corresponding SUM queries over the original data. \begin{theorem}\label{th:sparse} Let $R$ be a relation with $n$ tuples having nonnegative values $a_1, \ldots, a_n$ on attribute $A$, and let $S = \sum_{i=1}^n a_i$. Then, for any collection of $m$ SUM queries $Q_1(R.A), \ldots, Q_m(R.A)$ (not known to the algorithm), any $p \in (0, 1)$, and any $\epsilon > 0$, Algorithm Comp-Lineage with input all tuples of $R$ and $b = \lceil \ln{(2m/p)}/(2\epsilon^{2}) \rceil$ derives an $\mathrm{Aggregate~Lineage}$ $L_{R.A}$ such that \( | Q_j(R.A)-Q'_j(L_{R.A})| \leq \epsilon S \), for all $j \in [m]$, with probability at least $1-p$. \end{theorem} \begin{proof} The proof is an adaptation of the proof of Alth\"{o}fer's Sparsification Lemma \cite{sparse94}. For simplicity, we assume, without loss of generality, that the set $I_{R}$ of all tuple identifiers of $R$ in Definition \ref{def:exact} is $I_{R}=\{1,\ldots,n\}$. We define $b$ independent identically distributed random variables $X_1, \ldots, X_b$, which take each value $i \in [n]$ with probability $a_i / S$. Namely, each random variable $X_k$ corresponds to the outcome of the $k$-th trial of Comp-Lineage. For each tuple $i$, its frequency in the sample is $f_i = |\{ k \in [b] : X_k = i\}|$. Let us fix an arbitrary SUM query $Q_j(R.A)$. For each $k \in [b]$, we let $Y^k_j$ be a random variable that is equal to $1$, if $X_k \in I^{Q_j}_{R}$, and $0$, otherwise.
Since the random variable $Y^k_j$ is equal to $1$ with probability $Q_j(R.A)/S$, $\mathrm{I\!E}[Y^k_j] = Q_j(R.A)/S$. We observe that the random variables $\{ Y^k_j \}_{k \in [b]}$ are independent, because the random variables $\{ X_k \}_{k \in [b]}$ are independent. Furthermore, we let $Y_j$ be a random variable defined as \[ Y_j = \frac{1}{b} \sum_{k = 1}^b Y^k_j \] By definition, $Y_j = \sum_{i \in I^{Q_j}_{R}} f_i/b = \sum_{i \in I^{Q_j}_{L}} f_i/b$, and thus we have that $Y_j = Q'_j(L_{R.A})/S$, i.e., $Y_j$ is equal to the approximated result of the SUM query divided by $S$. Also, by linearity of expectation, $\mathrm{I\!E}[Y_j] = Q_j(R.A)/S$. Applying the Chernoff-Hoeffding bound, we obtain that for the particular choice of $b$, with probability at least $1-p/m$, the actual value of $Y_j$ differs from its expectation $Q_j(R.A)/S$ by at most $\epsilon$, which implies that $Q'_j(L_{R.A})$ differs from $Q_j(R.A)$ by at most $\epsilon S$. Formally, by the Chernoff-Hoeffding bound \footnote{We use the following form of the Chernoff-Hoeffding bound (see \cite{Hoef63}): Let $Y^1, \ldots, Y^b$ be random variables independently distributed in $[0, 1]$, and let $Y = \frac{1}{b} \sum_{k=1}^b Y^k$. Then, for all $\epsilon > 0$, $\mathrm{I\!Pr}[|Y - \mathrm{I\!E}[Y]| > \epsilon] \leq 2e^{-2\epsilon^2 b}$, where $e = 2.71\ldots$ is the base of natural logarithms.}, $$ \mathrm{I\!Pr}[|Q'_j(L_{R.A}) - Q_j(R.A)| > \epsilon S] $$ $$= \mathrm{I\!Pr}[|Y_j - Q_j(R.A)/S| > \epsilon] $$ $$\leq 2e^{-2\epsilon^2 b} \leq p/m\,, $$ where the last inequality follows from the choice of $b$. Applying the union bound, we obtain that \[ \mathrm{I\!Pr}[\exists j \in [m] : |Q'_j(L_{R.A}) - Q_j(R.A)| > \epsilon S] \leq p \] which concludes the proof of the theorem. \end{proof} \begin{example} \label{ex:practical} Suppose, in our running example, we want to be able to answer with good approximation $m=10^{6}$ queries. What guarantees does the theorem provide?
The original data have $n\approx10^6$ tuples. Suppose we select the number of tuples in the Aggregate Lineage to be $b\approx 9000$. Then the theorem says that, by setting $\epsilon=0.04$, we can compute any of $10^{6}$ arbitrary queries within $0.04 S$ of its real value with probability $1-10^{-6}$. Thus, if the real exact value of the query $Q_{1}$ is equal to $Q_{1}(Salaries.Sal)=0.4S=S_{1}$ (remember $S$ is the sum over all tuples of relation $R$), then the approximation error will be at most $0.04S= 0.04S_{1}/0.4=0.1S_{1}$. If for another query $Q_{2}$ we have $Q_{2}(Salaries.Sal)=0.8S=S_{2}$, then the approximation error will be at most $0.05S_{2}$, so with high probability we get an answer that is within a fraction $0.05$ of the actual answer. \end{example} {\sl Observations on the practical consequences of Theorem~\ref{th:sparse}}. Examining closely the equation $b=\lceil \ln{(2m/p)}/(2\epsilon^{2}) \rceil$, which gives us an upper bound on the number of tuples in the Aggregate Lineage for $m$ queries and with $p$ and $\epsilon$ guarantees as in its statement, we make the following observations: \begin{itemize} \item The value of $b$ depends on $m$ only logarithmically, hence if we go from $m$ to $m^2$ queries, we only need to (roughly) multiply $b$ by 2 in order to keep the same performance guarantees. Thus it is reasonable to state that, in many practical cases, the number $m$ of queries that can be approximated well can be as large as a polynomial in the size of the data -- even with degree in the order of a few hundreds. \item The value of $b$ does not depend much on $p$ (again only logarithmically) but it depends mainly on $\epsilon$, which controls the approximation ratio (the approximation ratio itself is $\epsilon /\rho$ if the query to be computed has a sum $S'=\rho S$). \end{itemize} \section{A debugging scenario}\label{sec:deb} Here is what a user can do for data debugging when using the Aggregate Lineage we propose.
\begin{itemize} \item He computes sub-sums by filtering some attributes and possibly specific values for these attributes. E.g., what was the sum of salaries of employees in the toy department in Spring 2010, and only for those employees who were hired after 2005. The user devises several such test queries as he sees appropriate, and while he computes them and checks whether the sub-data is ok or suspicious, he devises different test queries to suit the situation. E.g., if he observes an unusually large value, close to the total sum, in the query about employees in the toy department and hired before 2005, then the rest of the queries he devises stay within this department and within the range until 2005, and he tries to further narrow down the wrong part of the data. E.g., now he narrows down to each month and/or to employees that were hired between 2005 and 2007, etc. On the other hand, if he finds the answer satisfactory, then he announces this part of the data correct, therefore stays outside this sub-data, and tries to find some other part of the data that is faulty. The user poses his test queries over the stored small Aggregate Lineage instead of inefficiently using the original big relation. \end{itemize} In the following example we show how using Aggregate Lineage to approximate test queries applies to our running example. \begin{example}\label{ex:deb} We continue our running Example~\ref{ex:al}, where we computed the $\mathrm{Aggregate~Lineage}$ $L_{Salaries.Sal}$. Suppose that we have a SUM test query $Q_{1}$ asking for the sum of the salaries of a subset of the employees of the company, defined by a subset of $EmplID$'s. Let this subset consist of $50$ employees with salary $10^{9}$, $5,000$ employees with salary $10^{7}$ (so half of them) and all $10^{6}$ employees with salary $10^{6}$. We compute the query over $Salaries$ and obtain the exact answer $1.1\times 10^{12}$. In order to use $\mathrm{Aggregate~Lineage}$ to understand our data, we compute $I^{Q_{1}}_{L}$.
The $\mathrm{Aggregate~Lineage}$ has at most $8,852$ tuples. The identifiers of $I^{Q_{1}}_{L}$ define the {\em sub-lineage} of query $Q_{1}$ over $L_{Salaries.Sal}$. The sub-lineage of $Q_{1}$ points to $50$ of the tuples of $L_{Salaries.Sal}$ with original salaries $10^{9}$ and to all $6,809$ tuples with original $Sal$ values $10^{6}$ (cf. Figure \ref{fig:ex0-sparse}). It will also point to some tuples of $L_{Salaries.Sal}$ with $Sal$ values $10^{7}$: on average, query $Q_{1}$ covers half of the $681$ tuples selected in the Aggregate Lineage, but in extreme cases it may include all or none of them. For this reason, it is good practice to run the randomised algorithm more than once and compute a few distinct summaries in order to obtain better results. For instance, we may compute three summaries, use some benchmark sub-queries to define a distance between summaries, discard the most distant summary, and keep one of the others arbitrarily. Note that it is easy to compute the benchmark queries in one pass through the original data, in parallel with computing the lineage. We now use the Aggregate Lineage $L_{Salaries.Sal}$ shown in Figure \ref{fig:ex0-sparse} to approximate the value of the sum answer to $Q_1$. In one extreme case, query $Q_{1}$ includes the $50$ tuples of $L_{Salaries.Sal}$ with salaries $10^{9}$ that have the largest frequencies, and all $681$ selected tuples with salaries $10^{7}$. The approximation $Q'_{1}(L_{Salaries.Sal})$ in this case is $(4\cdotp 11+ 12\cdotp 10 + \ldots+681 +6,809)S/b= 7,935\cdotp S/b=\,1.17\times 10^{12}$. In the other extreme case, $Q_{1}$ includes the tuples with the smallest frequencies and none of the tuples with salaries $10^{7}$ selected in the Aggregate Lineage, yielding the approximation $6,995\cdotp S/b=\,1.03\times 10^{12}$. We see that $Q_{1}$ is well approximated. Of course the approximation bounds are not the same for every SUM query -- we presented the guarantees in Section \ref{sec:tec}.
Another straw man approach would be to select as lineage the $8,852$ tuples with the largest salary values. This method would select all $100$ tuples with salaries $10^{9}$, all $1,000$ tuples with salaries $10^{8}$ and the remaining $7,752$ tuples from the tuples with salaries $10^{7}$. With this approach, query $Q_{1}$ would on average be approximated with the value $50\cdotp10^{9}+ 3,876\cdotp10^{7} \approx 8.8\times 10^{10}$, because it loses all the information about the $10^{6}$ original tuples with salaries $10^{6}$ contributing to the sum. Under another approach, a simple random sample of $8,852$ tuples will select almost all of them from the $10^{6}$ tuples with salaries $10^{6}$. Query $Q_{1}$ will then be approximated with the value $8,852\cdotp10^{6} \approx 8.8\times 10^{9}$. Note, on the other hand, that if all original tuples had the same salaries, then our method would coincide with simple random sampling. \end{example} \section{Discussion} We have focused in our exposition only on a single aggregated attribute (e.g., $Sal$ in our example). This is done for simplicity. Our ideas can be easily extended to include more aggregated attributes, as long as we are willing to keep a distinct aggregate lineage for each attribute. E.g., suppose we also had a $Rev$ (for Revenue) attribute for each employee. In such a case we keep two lineage relations, one for $Sal$ and one for $Rev$. The algorithm to compute them can be thought of as a parallel implementation of two copies of the algorithm Comp-Lineage. We need only one pass through the original data. The only difference is that now, a) we need the two total sums $S_{Sal}$ and $S_{Rev}$, and b) for each tuple $t$, we have two probabilities $p^{Sal}_t$ and $p^{Rev}_t$, the first to be used for the lineage related to attribute $Sal$ and the second to be used for the lineage related to attribute $Rev$.
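As an illustration, the two parallel lineages can be sketched in Python as follows (an in-memory sketch with the attribute names $Sal$ and $Rev$ from the text; a faithful implementation would interleave the two samplers in a single scan of the data):

```python
import random
from collections import Counter

def multi_lineage(tuples, attrs, b, seed=0):
    """Keep one Aggregate Lineage per aggregated attribute (e.g. Sal and
    Rev): an independent b-sized weighted sample, with repetition, is
    maintained for each attribute."""
    rng = random.Random(seed)
    lineages = {}
    for a in attrs:
        weights = [t[a] for t in tuples]   # per-attribute probabilities p^a_t
        picks = rng.choices(range(len(tuples)), weights=weights, k=b)
        lineages[a] = Counter(picks)       # tuple index -> frequency Fr
    return lineages
```

Each lineage is then queried exactly as before, with its own total sum ($S_{Sal}$ or $S_{Rev}$) in the $Fr\cdotp S/b$ rescaling.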
Algorithm Comp-Lineage performs a weighted random sampling which selects with replacement $b$ out of the $n$ tuples of $R$, where the weight $w_{i}$ of the tuple with identifier $i$ is equal to the value of attribute $A$ in this tuple. Using $b$ copies (each copy selects a single element) of the weighted random sampling with reservoir algorithm presented in \cite{EfraimidisS06}, we can implement Comp-Lineage in one pass over $R$, in $O(b n)$ time and $O(b)$ space. This implementation can also be applied to data streams and to settings where the values of $n$ and $S$ are not known in advance. However, the technique in \cite{EfraimidisS06} does not seem to be efficiently parallelizable, at least not in a direct way. Thus the problem of how to efficiently implement our technique in distributed computational environments such as MapReduce remains open. Issues about how to implement sampling in MapReduce are discussed in \cite{Carey12}. Another open problem is how to apply this technique to evolving data~\cite{Ganti02}. In data streams, we assume that the sample is to be computed over the entire data. When data continuously evolve with time, the sample may also change considerably with time. The nature of the sample may vary with both the moment at which it is computed and with the time horizon in which the user is interested. We have not investigated here how to provide this flexibility. \section{Comparison with Synopses for Data} There has been extensive research on approximation techniques for aggregate queries on large databases and data streams. Previous work considers a variety of techniques, including random sampling, histograms, multivalued histograms, wavelets and sketches (see e.g., \cite{CormodeGHJ12} and the references therein for details and applications of those methods).
Most of the previous work on histograms, wavelets, and sketches focuses on approximating aggregate queries on a given attribute $A$ for specific subsets of the data that are known when the synopsis is computed (e.g., the synopsis concerns the entire data stream or a particular subset of the database). Thus, such techniques typically lose the correlation between the approximated $A$ values and the original values of other attributes. For the more general case of multiple queries that can be posed over arbitrary sets of attributes and subsets of the data not specified when the synopsis is computed, those techniques typically lead to an exponential (in the number of other attributes involved) increase in the size of the synopsis (see e.g., \cite{DobraGGR02,DobraGGR04}). In contrast, our approach is far more general and does not focus on approximating queries over specific attributes or subsets of the data. Our algorithm computes a small sample without assuming any knowledge on the set of queries and keeps the association between the sampled $A$ values and all other attributes. Then, we can use the Aggregate Lineage to approximate large-valued sum queries over arbitrary subsets of the data that can be expressed over any set of attributes. The Aggregate Lineage can approximately answer a number of queries exponential in its size. Of course, the queries should be oblivious to the actual Aggregate Lineage (technically, they should be computed by an oblivious adversary), but this technical condition applies to all previously known randomised synopses constructions (see e.g., \cite{CormodeGHJ12}). \section{Conclusions} We have presented a method that computes lineage for aggregate queries by applying weighted sampling. The aggregate lineage can be used to compute arbitrary test aggregate queries on subsets of the original data. 
However, the test queries can be computed with good approximation only if the result of each test query is large enough with respect to the total sum over all the data. The aggregate lineage we compute cannot be used to compute test queries if their result is comparatively small. We give performance guarantees. Parallel implementation on frameworks such as MapReduce is not studied here. The naive approach of parallelizing \cite{EfraimidisS06} would either have to transmit a large amount of data to the compute nodes, or have a makespan linear in $n$. The idea of getting a single (possibly weighted) random sample from a large data set and using it for repeated estimations of a given quantity has appeared before in the context of machine learning and statistical estimation. Techniques such as boosting \cite{Schapire03} and bootstrapping \cite{Efron79,Efron93} are used. In \cite{KleinerTSJ12} the Bag of Little Bootstraps (BLB) is used for the efficient estimation of bootstrap-based quantities in a distributed computational environment.

\section*{Acknowledgements}

This work was supported by the project Handling Uncertainty in Data Intensive Applications, co-financed by the European Union (European Social Fund - ESF) and Greek national funds, through the Operational Program "Education and Lifelong Learning", under the program THALES.

\bibliographystyle{abbrv}
namespace Distribox.CommonLib
{
    using System;
    using System.Collections.Generic;
    using System.IO;
    using System.Linq;
    using System.Text;

    /// <summary>
    /// Config data.
    /// </summary>
    public class ConfigData
    {
        /// <summary>
        /// Gets or sets the listen port.
        /// </summary>
        /// <value>The listen port.</value>
        public int ListenPort { get; set; }

        /// <summary>
        /// Gets or sets the default bandwidth.
        /// </summary>
        public int DefaultBandwidth { get; set; }

        /// <summary>
        /// Gets or sets the default connection speed.
        /// </summary>
        public int DefaultConnectionSpeed { get; set; }

        /// <summary>
        /// Gets or sets the connect period millisecond.
        /// </summary>
        /// <value>The connect period millisecond.</value>
        public int ConnectPeriodMs { get; set; }

        /// <summary>
        /// Gets or sets the file watcher time interval millisecond.
        /// </summary>
        /// <value>The file watcher time interval millisecond.</value>
        public int FileWatcherTimeIntervalMs { get; set; }

        /// <summary>
        /// Gets or sets the root folder.
        /// </summary>
        /// <value>The root folder.</value>
        public string RootFolder { get; set; }

        /// <summary>
        /// Sets the default.
        /// </summary>
        public void SetDefault()
        {
            this.ListenPort = Properties.DefaultListenPort;

            // Use working directory/shared as root folder
            this.RootFolder = Directory.GetCurrentDirectory();

            this.ConnectPeriodMs = Properties.DefaultConnectPeriodMs;
            this.FileWatcherTimeIntervalMs = Properties.DefaultFileWatcherTimeIntervalMs;
            this.DefaultBandwidth = Properties.DefaultBandwidth;
            this.DefaultConnectionSpeed = Properties.DefaultConnectionSpeed;

            CommonHelper.WriteObject(this, Properties.ConfigFileName);
        }
    }
}
Vityaziella is a genus of sponges in the class Hexactinellida.

Species
 Vityaziella renki Tabachnick & Lévi, 1997
\section{Introduction}

Although all fundamental particles and most composite particles either satisfy Bose-Einstein (BE) or Fermi-Dirac (FD) statistics, recent measurements provide the strongest evidence yet that the quasiparticle excitations in the fractional quantum Hall effect (FQHE) obey \emph{fractional exchange statistics} \cite{bartolomei_fractional_2020, nakamura_direct_2020}. Wilczek coined the term anyons for particles that obey fractional exchange statistics~\cite{wilczek_quantum_1982} and these recent experiments demonstrate that the excitations in the FQHE characterized by a $\nu=1/3$ filling factor exhibit a $\theta = \pi/3$ exchange phase, satisfying a prediction made nearly forty years ago~\cite{halperin_statistics_1984, arovas_fractional_1984, wilczek_fractional_1990}. The origin of fractional exchange phases in the FQHE can be traced to a topological peculiarity: for two-dimensional systems with two-body coincidences excluded, the configuration space is not simply-connected~\cite{leinaas_theory_1977}. Not all paths in configuration space that exchange particles are equivalent, and the equivalence classes of possible exchange paths are described by the braid group. Abelian representations of the braid group are characterized by an arbitrary exchange phase $\theta \in [0,2\pi)$, which serves as the fractional exchange statistics parameter~\cite{wu_general_1984, wu_multiparticle_1984, forte_quantum_1992, khare_fractional_2005}. Because of their topological origin, the fractional exchange statistics of these anyons can be `transmuted' into a gauge interaction, i.e.~the charged flux-tube model~\cite{wilczek_magnetic_1982, khare_fractional_2005}. Quasiparticle excitations obeying non-abelian braid group statistics may also exist in the FQHE~\cite{moore_nonabelions_1991, wen_non-abelian_1991, nayak_non-abelian_2008}, but unambiguous confirmation remains experimentally elusive.
Adiabatic exchanges of non-abelian anyons could provide an implementation for fault-tolerant quantum computing \cite{kitaev_fault-tolerant_2003, freedman_topological_2003}. This application, combined with a fundamental interest in understanding topological states of matter, continues to drive interest in engineering novel physical systems that support excitations with non-standard exchange statistics \cite{nayak_non-abelian_2008, alicea_non-abelian_2011, maciazek_non-abelian_2019}. Can the topological approach to exchange statistics in two-dimensional systems be applied to one-dimensional systems? There is debate in the literature. Unlike in two or more dimensions, particles in one-dimensional systems must pass through each other to exchange positions and even short range (or zero-range) interactions have dramatic dynamic and thermodynamic effects~\cite{minguzzi2022strongly}. Given the inevitable intermingling of interactions with exchange in one dimension, can a purely topological exchange phase be separated from the dynamical phase accumulated along the exchange trajectory~\cite{ha_fractional_1995}? Further, for indistinguishable particles, what does ``pass through each other'' even mean? The two-body coincidence is a singular point in configuration space that introduces ambiguity~\cite{bourdeau_when_1992}, so how can one distinguish trajectories in which two particles reflect from those in which they transmit? The standard formulation of topological exchange statistics is not sufficient as it excludes two-body coincidences from configuration space. In one dimension, the removal of two-body coincidences makes particle exchanges impossible, and so the standard formulation of topological exchange statistics allows only trivial representations~\cite{polychronakos_generalized_1999, nayak_non-abelian_2008}. 
To overcome this technical limitation, we extend the standard formulation of topological exchange statistics by treating the configuration space of indistinguishable particles as an \emph{orbifold}~\cite{bourdeau_when_1992, landsman_quantization_2016, ohya_generalization_2021}, a generalization of the idea of a manifold~\cite{thurston_geometry_2002, adem_orbifolds_2007}. Informally, an orbifold is locally equivalent to the linear quotient of a Euclidean space by a finite group and `remembers' that symmetry. This extension allows trajectories that reflect and transmit at two-body coincidences to be topologically distinguished by elements of the \emph{orbifold fundamental group}, even for indistinguishable particles. We present details about orbifolds in Sect.~II, but a simple example of an orbifold is the configuration space for two indistinguishable particles on a line, equal to a plane modulo a reflection, ${\mathbb{R}}^2/S_2$. This quotient identifies indistinguishable configurations $(x_1,x_2)$ and $(x_2,x_1)$ in ${\mathbb{R}}^2$ and has singular locus $x_1 = x_2$. In ${\mathbb{R}}^2$, singular points like $(x,x)$ have a Euclidean neighborhood, but how do they look in the quotient space? One could collapse all identified points to obtain the underlying space, the half-plane $|{\mathbb{R}}^2/S_2|$, as a manifold with boundary~\cite{leinaas_theory_1977}. Alternatively, one could consider ${\mathbb{R}}^2/S_2$ as an orbifold, a two-dimensional space with an internal edge of orbifold singularities on the reflection line. To elucidate the difference, consider the perspective of an ant sitting on the singular locus of ${\mathbb{R}}^2/S_2$. In the half-plane perspective, there is an ``edge of the universe'' which no path can cross. The half-plane is simply-connected and there are no topological exchange statistics. However, in the orbifold perspective, the ant sees itself as sitting in the middle of an infinite plane where the view just happens to be symmetric. 
Paths that cross (transmit) and do not cross (reflect) this internal edge can be distinguished by the local observer. The orbifold fundamental group in this case is the symmetric group $S_2$, giving the possibility for both bosonic and fermionic topological exchange statistics. We first introduced the orbifold extension to the topological approach to exchange statistics in \cite{harshman_anyons_2020}. This analysis was motivated by the observation that, like two-body interactions in two dimensions, three-body interactions in one dimension cause co-dimension two defects in configuration space. Engineering such interactions could be feasible in ultracold atomic gases in optical traps and optical lattices \cite{buchler_three-body_2007, daley_effective_2014, mahmud_dynamically_2014, valiente_three-body_2019}. The orbifold fundamental group arising from hard-core three-body interactions in one dimension is similar to the braid group: it is an infinite discrete group that descends from the symmetric group by breaking one of the generating relations. Further, it is realized by strand diagrams obeying certain crossing rules. We named it the \emph{traid} group by analogy, but mathematicians also had discovered it in other contexts and given it various other names~\cite{cisneros2020alexander}, including the doodle group~\cite{khovanov_doodle_1997, bartholomew_doodles_2016}, the planar braid group~\cite{gonzalez_linear_2021}, and the twin group~\cite{bardakov_structural_2019, Naik20}. We extend these results in Sect.~III and classify all possible topological exchange statistics for distinguishable and indistinguishable particles in one dimension. We consider the only two topological types of path-connected one-dimensional base manifolds, the interval-type (which includes the infinite interval of the real line) and circle, and we analyze all possible topologically non-trivial interactions. 
This includes hard-core two-body interactions, three-body interactions, and one other non-trivial form: a non-local four-body interaction for which pairwise coincidences are allowed, but not pairs of pairwise coincidences. This interaction leads to another descendent of the symmetric group that we call the \emph{fraid} group. It is realized by strand diagrams with non-local relations between the generators. Our results in Sect.~IIIB demonstrate that on the twisted configuration space appropriate for indistinguishable particles on a ring, the possibility exists for non-abelian generalized parastatistics for soft-core particles. We also extend the traid and fraid group (and their combination) to the ring geometry and discuss their `pure' forms that apply to distinguishable particles. Although a topological definition for exchange statistics in one dimension did not previously exist, there is a large literature of one-dimensional models where the particles are `anyons' and/or have `fractional statistics'. 
Continuum models include (references are representative, not exhaustive)~\footnote{Our analysis does not consider the anyon-Hubbard model~\cite{amico_one-dimensional_1998, keilmann_statistically_2011, hao_dynamical_2012, wright_nonequilibrium_2014, greschner_2015, straeter_2016, lange_strongly_2017} because our topological methods do not extend to configuration spaces with discrete topology, although we note that the continuum limit of the anyon-Hubbard model has recently been derived~\cite{bonkoff_bosonic_2021}.}: (1) Leinaas-Myrheim anyons~\cite{leinaas_theory_1977, HANSSON1992559, posske_2017}; (2) Calogero-Sutherland anyon models~\cite{ha_fractional_1995, polychronakos_generalized_1999, SREERANJANI20091176}; (3) hard-core anyon models with $\delta$-type interactions and fractional exchange statistics~\cite{zhu_topological_1996, girardeau_anyon-fermion_2006, delcampo_fermionization_2008}; and (4) Kundu/Lieb-Liniger anyons with non hard-core $\delta$-type interactions and fractional exchange statistics~\cite{kundu_exact_1999, batchelor_one-dimensional_2006, Patu_2007, hao_dynamical_2012, zinner_strongly_2015, posske_2017, stouten_something_2018, patu_correlation_2019, valiente_bose-fermi-2020, bonkoff_bosonic_2021, valiente_universal_2021}. A full survey of these model exceeds the scope of this article (for a brief review see Appendix A of Ref.~\cite{posske_2017}), but a unifying property of these models is that they contain a parameter that interpolates between bosonic and fermionic limits. In Sect.~IV, we give a preliminary analysis of these four classes of continuum models and conclude that any braid-like exchange phases in models (2)-(4) originate from dynamics, i.e., they are non-topological in origin and cannot be absorbed into a statistical gauge interaction. Finally, in the concluding Sect.~V we summarize our results and point out several directions for future work. 
In particular, we highlight two sets of open questions: the mathematical properties and physical consequences of traid and fraid group anyons and the conceptual and mathematical shifts required to formulate the quantum mechanics of indistinguishable particles on orbifolds.

\section{Orbifold approach to topological exchange statistics}

This section presumes that the reader has some familiarity with the topological approach to building a quantum theory on a configuration space ${\mathcal{X}}$. The key idea is that when a configuration space is not simply-connected, then single-valued wave functions defined on the configuration space may not exhaust the set of allowed states. The fundamental group $\pi_1({\mathcal{X}})$ of the configuration space describes the connectivity of ${\mathcal{X}}$ by equivalence classes of loops. The irreducible representations of $\pi_1({\mathcal{X}})$ provide a classification of possible wave functions, including single-valued and multi-valued as well as single-component (abelian) and multi-component (non-abelian). Alternatively, one may work with only single-valued functions by lifting the quantum system to the unique, simply-connected universal cover of the configuration space $\widetilde{{\mathcal{X}}}$. For completeness, we provide a brief overview of these standard results for manifolds ${\mathcal{X}}$ in Appendix A. We also provide a brief review of the main approaches to particle statistics, including exchange statistics and exclusion statistics, in Appendix B. As first demonstrated by Leinaas and Myrheim~\cite{leinaas_theory_1977}, topological exchange statistics beyond FD or BE are possible when the base manifold upon which the particles move is not simply-connected or (in one and two dimensions) when particle interactions create topological defects.
They found the possibility for novel statistics by defining the configuration space of $N$ indistinguishable particles on a base manifold ${\mathcal{M}}$ as the quotient of the configuration space manifold of $N$ distinguishable particles ${\mathcal{X}} = {\mathcal{M}}^N$ by the symmetric group $S_N$ of particle permutations~\cite{laidlaw_feynman_1971, leinaas_theory_1977}: \begin{equation}\label{eq:Q} {\mathcal{Q}} = {\mathcal{X}}/S_N. \end{equation} Sometimes called the \emph{intrinsic} approach to indistinguishability~\cite{bourdeau_when_1992}, taking the quotient removes physically-meaningless particle labels from the mathematical description at the start. Although taking the quotient precludes making passive permutations of particle identity, active particle exchanges are realized by loops in ${\mathcal{Q}}$. Taking the quotient also introduces a natural orbifold structure to ${\mathcal{Q}}$. Orbifolds have singular points where non-trivial local symmetries have topological consequences. Traditionally, these singular points have either been removed from configuration space or trivialized. Instead, we include these points and describe the particle exchanges of indistinguishable particles using the orbifold fundamental group $\pi_1^*({\mathcal{Q}})$. The generalization to $\pi^*_1$ captures the topological impact of singular points with co-dimension $\tilde{d}=1$ or $\tilde{d}=2$ on parallel transport. Loci of singular points with $\tilde{d}=1$ are \emph{internal mirrors} and occur at two-body coincidences in one-dimensional systems. Loci of singular points with $\tilde{d}=2$ are either isolated singularities, called \emph{cone points}, which occur in the case of particles in two dimensions, or \emph{corners} formed by intersections of $\tilde{d}=1$ internal mirrors, which occur in the case of particles in one dimension.
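For particles on the line, the quotient map $p$ and the $S_N$ orbits can be made explicit in a few lines of code (a toy illustration of ours, not from the references): sorting the coordinates picks a canonical representative of each orbit, and orbits through the coincidence locus $\Delta_2$ are visibly smaller.

```python
from itertools import permutations

def forget_labels(x):
    """Forgetful map p: X -> Q = X/S_N for particles on the line.

    Sorting the coordinates chooses one representative per S_N orbit,
    i.e. one point of the underlying space |Q|.
    """
    return tuple(sorted(x))

def orbit(x):
    """Full S_N orbit of a labeled configuration x in X = M^N."""
    n = len(x)
    return {tuple(x[i] for i in s) for s in permutations(range(n))}
```

A generic point has an orbit with $N!$ members, while a point on the coincidence locus such as $(1,1,2)$ has only $3!/2!=3$, signaling the singularity of $p$ there.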
Using the orbifold fundamental group, we find that the relation \begin{equation}\label{eq:gensym} \pi_1^*({\mathcal{Q}}) = S_N({\mathcal{M}}) \end{equation} holds for any path-connected base manifold ${\mathcal{M}}$. Here $S_N({\mathcal{M}})$ is the generalized symmetric group \cite{imbo_identical_1990}: \begin{equation}\label{eq:gensym2} S_N({\mathcal{M}}) \equiv \pi_1({\mathcal{M}})^N \rtimes S_N \equiv \pi_1({\mathcal{M}}) \wr S_N, \end{equation} where $\wr$ denotes the wreath product, a semidirect product in which $S_N$ acts on the normal subgroup $\pi_1({\mathcal{M}})^N$ by permutation of factors \cite{james_representation_1984, harshman_one-dimensional_2016}. For a simply-connected base manifold, $S_N({\mathcal{M}})$ reduces to the symmetric group $S_N$ as expected. The result (\ref{eq:gensym}) was previously derived for $d = \dim{\mathcal{M}} \geq 3$ in Ref.~\cite{imbo_identical_1990}, where exchange statistics given by $S_N({\mathcal{M}})$ are called generalized parastatistics. Extending the result (\ref{eq:gensym}) to particles on base manifolds ${\mathcal{M}}$ with dimensions $d=1$ and $d=2$ requires the orbifold approach that we develop over the rest of this section. The classification of topological exchange statistics in one dimension presented in Sect.~III can be understood without these details on orbifolds and orbifold fundamental groups.

\subsection{Distinguishable particles}

To understand the orbifold structure of the configuration space for indistinguishable particles, first consider \emph{distinguishable} particles moving on a path-connected manifold ${\mathcal{M}}$ with $\dim{{\mathcal{M}}} = d$. The configuration space is \begin{equation}\label{eq:X} {\mathcal{X}} = {\mathcal{M}}^N \equiv \overbrace{{\mathcal{M}} \times \cdots \times {\mathcal{M}}}^{N\ \mathrm{times}}.
\end{equation} The fundamental group of ${\mathcal{X}}$ \begin{equation}\label{eq:piX} \pi_1({\mathcal{X}}) = \pi_1({\mathcal{M}})^N \end{equation} is the $N$-fold direct product of the fundamental group of ${\mathcal{M}}$. The universal cover $\widetilde{{\mathcal{X}}}$ of ${\mathcal{X}}$ factorizes similarly, $\widetilde{{\mathcal{X}}} = (\widetilde{{\mathcal{M}}})^N$. Removing particle coincidences may disrupt this product structure (\ref{eq:piX}) for base manifolds ${\mathcal{M}}$ with dimension $d=1$ or $d=2$. To see this, note that every point in ${\mathcal{X}}$ can be classified by its pattern of coinciding coordinates. For \emph{generic} points $x\equiv \{x_1, \ldots, x_N\} \in {\mathcal{X}}$, all $N$ coordinates are different. In contrast, the set of all points that have at least one pair of coordinates the same is called the coincidence locus $\Delta_2 \subset {\mathcal{X}}$, or sometimes the `fat diagonal' of ${\mathcal{X}}$~\cite{juhasz2018naturality}. Similarly, one can define $\Delta_3 \subset \Delta_2$ as the locus where at least three particle coordinates are the same, $\Delta_{2,2} \subset \Delta_2$ as the locus where there are at least two pairs of coinciding particles, etc. More generally, the space ${\mathcal{X}}$ is stratified by integer partitions of $N$. Each integer partition $[n_1\ldots n_k] \in P_N$ is a collection of positive integers which sum to $N$, typically written in non-increasing order. For example, for $N=5$, the configuration space ${\mathcal{X}}={\mathcal{M}}^5$ is stratified into seven strata labeled by partitions, including the partition $[221]$ corresponding to points with two $2$-particle coincidences and the partition $[41]$ for points with one $4$-particle coincidence. For each partition $[\nu] = [n_1 \ldots n_k]$, we define the stratum ${\mathcal{X}}_{[\nu]} \subset {\mathcal{X}}$ of points with that partition type.
The stratum ${\mathcal{X}}_{[\nu]}$ has $h_{[\nu]} = N!/(n_1!\cdots n_k!)$ path-connected components depending on which coordinates are equal. The closure of each stratum $\overline{{\mathcal{X}}_{[\nu]}}$ is defined by $\sum_{j=1}^k (n_j-1)$ equalities of $d$-dimensional variables, so that ${\mathcal{X}}_{[\nu]}$ has co-dimension $\tilde{d} = \sum_{j=1}^k (n_j-1) d$ within ${\mathcal{X}}$. The fat diagonal $\Delta_2 = \overline{{\mathcal{X}}_{[21\ldots 1]}}$ is the union of all ${\mathcal{X}}_{[\nu]}$ except the generic points in ${\mathcal{X}}_{[1\ldots 1]}$ and its top-dimensional stratum has co-dimension $\tilde{d}=d$. Therefore, for base manifolds ${\mathcal{M}}$ with $d=1$ or $d=2$, removing $\Delta_2$ from ${\mathcal{X}}$ disrupts the connectivity and the fundamental group is no longer given by (\ref{eq:piX}). For $d=1$ base manifolds ${\mathcal{M}}$, the coincidence loci $\Delta_3$ (the closure of ${\mathcal{X}}_{[31\ldots 1]}$) and $\Delta_{2,2}$ (the closure of ${\mathcal{X}}_{[2 2 1 \ldots 1]}$) have co-dimensions $\tilde{d}=2d=2$, as we discuss in detail in Sect.~III. As a final note, when the particles are distinguishable but identical, particle permutations are a symmetry of the Hamiltonian. There is a representation $O:S_N \to \Diff({\mathcal{X}})$ denoted $s\mapsto O_s \in O(S_N)$ that acts as a \emph{passive} coordinate transformation, exchanging factors in the product (\ref{eq:X}). Generic points $x \in {\mathcal{X}}_{[1\ldots 1]}$ have orbits under permutation $\left\{O_s(x) \left| s \in S_N \right. \right\}$ with $N!$ members. For all other partitions $[\nu]$, each point $x\in {\mathcal{X}}_{[\nu]}$ has a non-trivial stabilizer subgroup $H_{x} \subset S_N$, i.e., the subgroup that exchanges coinciding coordinates. Such points have orbits with only $N!/\left|H_{x}\right|$ members.
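The bookkeeping of strata can be checked with a short script (our illustration, implementing the counting formulas quoted above): enumerate the integer partitions of $N$ and compute $h_{[\nu]}$ and the co-dimension $\tilde d$ for each.

```python
from math import factorial

def partitions(n, max_part=None):
    """All integer partitions of n, parts in non-increasing order."""
    if max_part is None:
        max_part = n
    if n == 0:
        return [[]]
    result = []
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            result.append([k] + rest)
    return result

def stratum_data(nu, N, d=1):
    """Components h_[nu] = N!/(n_1! ... n_k!) and co-dimension of X_[nu].

    Implements the text's formulas: h counts the components of the
    stratum, and codim = d * sum_j (n_j - 1) is its co-dimension in X.
    """
    h = factorial(N)
    for n in nu:
        h //= factorial(n)
    codim = d * sum(n - 1 for n in nu)
    return h, codim
```

For $N=5$ there are indeed seven partitions, and for $d=1$ the stratum $[221]$ has co-dimension $\tilde d = 2$ while $[41]$ has $\tilde d = 3$.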
The particular embedding $S_{[\nu]} = S_{n_1}\times \cdots \times S_{n_k}$ into $S_N$ depends on which of the $h_{[\nu]}$ components of ${\mathcal{X}}_{[\nu]}$ contains $x$, and two points of different components will have conjugate stabilizers.

\subsection{Intrinsic approach}\label{subsec:intrinstic}

In the intrinsic approach, physics happens on the indistinguishable particle space ${\mathcal{Q}}$ rather than ${\mathcal{X}}$. The quotient ${\mathcal{Q}} = {\mathcal{X}}/S_N$ defines a `forgetful map' $p:{\mathcal{X}} \to {\mathcal{Q}}$ that erases the particles' identities and sends each point of ${\mathcal{X}}$ to its $S_N$ orbit. For generic points in ${\mathcal{X}}_{[1\ldots 1]}$, the map $p:{\mathcal{X}} \to \mathcal{Q}$ is $N!:1$, but points in the coincidence locus $\Delta_2$ are \emph{singular} under this map. Reversing this, each possible choice of coordinate labels is equivalent to a `lift' of the point $q_0 \in {\mathcal{Q}}$ to one of its representative points $x_0 \in {\mathcal{X}}$ in the orbit. Since particle labels are meaningless for indistinguishable particles, any assignment of labels to coordinates of $q_0$ is a choice of gauge. The diffeomorphisms $O_s$ of ${\mathcal{X}}$ permute the representatives of $q_0$ and therefore the particle labels, so they form a discrete gauge transformation group. From another perspective, these label-permuting symmetries $O_s$ form a generalization of the deck transformations of a covering space in the orbifold category~\cite{Boileau_2003_Three}. In contrast to the passive transformations $O_s$ of ${\mathcal{X}}$, closed loops in ${\mathcal{Q}}$ realize active particle exchanges along continuous paths. However, paths that pass through the singular points $\Delta_2/S_N \subset \mathcal{Q}$ give an ambiguity. Did the particles exchange at the coincidence point or not? Especially in one dimension, the ambiguity between reflection and transmission is essential to understanding exchange statistics.
However, the fundamental group $\pi_1(\mathcal{Q})$ does not see the path ambiguity embodied in these singular points~\cite{bourdeau_when_1992}. There are two standard solutions to the presence of these singular points that avoid using orbifolds. First, one can consider the space \begin{equation}\label{eq:Q2} {\mathcal{Q}}_2 = {\mathcal{X}}_2/S_N = {\mathcal{Q}} - \Delta_2/S_N, \end{equation} where ${\mathcal{X}}_2 = {\mathcal{X}} -\Delta_2$. The configuration space $\mathcal{Q}_2$ has no multi-body coincidences, and without these points the restriction of $p$ that maps ${\mathcal{X}}_2 \to {\mathcal{Q}}_2$ forms a covering space in the usual sense. Therefore ${\mathcal{Q}}_2$ is a manifold and exchange paths are described completely by $\pi_1({\mathcal{Q}}_2)$. Famously, for ${\mathcal{M}} = {\mathbb{R}}^2$, $\pi_1({\mathcal{Q}}_2)$ is the braid group $B_N$ and $\pi_1({\mathcal{X}}_2)$ is the pure braid group $PB_N$. On general $d=2$ surfaces ${\mathcal{M}}$, the fundamental group $\pi_1({\mathcal{Q}}_2)$ gives the generalized braid groups $B_N({\mathcal{M}})$ and pure braid groups $PB_N({\mathcal{M}})$~\cite{birman_braid_1969, thouless_remarks_1985, imbo_identical_1990, einarsson_fractional_1990, hatsugai_braid_1991}. For ${\mathcal{M}} = {\mathbb{R}}^3$, all closed paths which do not permute the particles are homotopically trivial and so $\pi_1({\mathcal{Q}}_2)= S_N$, resulting in normal exchange statistics. Alternatively, one can consider the \emph{underlying space} $|\mathcal{Q}|$ of the orbifold ${\mathcal{Q}}$, which includes the set of singular points $|\Delta_2/S_N|$ but ignores their orbifold structure~\cite{Boileau_2003_Three}. The space $|{\mathcal{Q}}|$ is a manifold with certain degeneracies along $|\Delta_2/S_N|$ such as a boundary or corners. In this case, one uses the usual fundamental group $\pi_1(|{\mathcal{Q}}|)$ to describe particle exchanges.
However, trivializing the topology like this removes the possibility for non-trivial exchange statistics. Returning to the case of $\mathcal{M} = \mathbb{R}^2$, we find $\pi_1(|\mathcal{Q}|) = 1$. This implies the unsatisfactory result that only trivial topological exchange statistics (i.e., only bosons, not even fermions) would be possible for particles moving on the plane unless interactions exclude $\Delta_2/S_N$~\cite{leinaas_theory_1977, wu_general_1984}.

\subsection{Configuration space orbifold}

Instead of either of these approaches, we include the singular points $\Delta_2/S_N$ in the configuration space and consider $\mathcal{Q}$ as an orbifold \cite{thurston_geometry_2002, Boileau_2003_Three}. Orbifolds occur naturally in the context of classifying spaces of objects with symmetries \cite{adem_orbifolds_2007}. The space ${\mathcal{Q}}$ meets the definition of a `good' orbifold because it is the quotient of a manifold by a discrete group acting upon it by diffeomorphisms~\cite{Boileau_2003_Three}. Further, ${\mathcal{Q}}$ is considered `very good' as it actually is the global quotient of a manifold by a {\em finite} group. Points in an orbifold are classified as manifold points if their local symmetry group is trivial and orbifold points if their local symmetry group is non-trivial. Depending on their co-dimension, orbifold points form loci that appear as internal mirrors, cone points or corners, and other higher order singular points in the orbifold. Manifolds form a subclass of orbifolds where all points have trivial local symmetry groups. For the orbifold ${\mathcal{Q}}$, the orbifold singular points are precisely the singular points $\Delta_2/S_N$ of $p: {\mathcal{X}} \to {\mathcal{Q}}$. In the planar example, the orbifold locus of $\mathbb{R}^2/S_2$ has co-dimension $\tilde{d}=1$, forming an internal mirror edge at the line $\Delta_2/S_2$ of two-body coincidences.
The local symmetry group of points on $\Delta_2/S_2 = p({\mathcal{X}}_{[2]})$ is isomorphic to $S_2$; all other points on the half plane $p({\mathcal{X}}_{[11]})$ are manifold points. More generally, the local symmetry group of an orbifold point in ${\mathcal{Q}}$ comes from its location in the stratification of ${\mathcal{X}}$ by partitions, ${\mathcal{X}}_{[\nu]}$. Each point of ${\mathcal{X}}_{[\nu]}$ is invariant under a subgroup of $S_N$ isomorphic to $S_{[\nu]} = S_{n_1} \times \cdots \times S_{n_k}$. The stratification ${\mathcal{X}}_{[\nu]}$ is invariant under the $S_N$ action so the decomposition descends to a similar stratification $p({\mathcal{X}}_{[\nu]})={\mathcal{Q}}_{[\nu]} \subset {\mathcal{Q}}$ where the stabilizers $S_{[\nu]}$ of points $x\in {\mathcal{X}}_{[\nu]}$ become the local symmetry groups of the corresponding point $q=p(x)\in {\mathcal{Q}}_{[\nu]}$. To understand how local symmetries act, first consider the case of a manifold point $q \in {\mathcal{Q}}_{[1\ldots 1]}$. For a sufficiently small neighborhood $U$ of $q$ in $|{\mathcal{Q}}|$ the preimage $p^{-1}(U)$ consists of $N!$ disjoint sets in ${\mathcal{X}}$ all isomorphic to $U$ via $p$. Each connected component $V$ of $p^{-1}(U)$ is a local model for ${\mathcal{Q}}$ at $q$ and corresponds to an assignment of $N$ labels to the $N$ distinct points of ${\mathcal{M}}$ defining $q$, or equivalently, an unambiguous ordering of those points in the product ${\mathcal{X}} ={\mathcal{M}} \times \cdots \times {\mathcal{M}}$. In other words, $p:{\mathcal{X}}_{[1\ldots 1]} \to {\mathcal{Q}}_{[1\ldots 1]}$ is a topological covering map and the local symmetry group of the point $q \in {\mathcal{Q}}$ is trivial. However, an orbifold point $q \in {\mathcal{Q}}_{[\nu]}$, $\nu \neq [1\ldots 1]$, lifts to only $h_{[\nu]} = N!/|S_{[\nu]}|$ points in ${\mathcal{X}}$. Therefore, if $U$ is a small neighborhood of $q$ in $|{\mathcal{Q}}|$, then $p^{-1}(U)$ will have $h_{[\nu]}$ connected components.
Although each connected component $V$ of $p^{-1}(U)$ serves as a local model for the orbifold ${\mathcal{Q}}$ at $q$, they differ from the neighborhoods of manifold points in that they are not isomorphic to $U$. Instead, each component $V$ of $p^{-1}(U)$ comes with an action of $S_{[\nu]}$ so that $U=V/S_{[\nu]}$. The realization of the $S_{[\nu]}$ symmetries by particle-label permutations (i.e. the embedding $S_{[\nu]} \subset S_N$) depends non-trivially on the specific component $V$. Each of the $V$ does not correspond to a canonical choice of particle labels, but instead corresponds to a choice of particle labels up to an $S_{[\nu]}$ ambiguity. For example, take $N=3$ and consider points $q \in {\mathcal{Q}}_{[2 1]}$. These points $q$ with the form $\{a,a,b\}$ have preimages $(a,a,b)$, $(a,b,a)$, and $(b,a,a)$ which have neighborhoods where the label-involution is realized by the permutations $(1,2)$, $(1,3)$, and $(2,3)$, respectively. These are exactly the generators of the stabilizer subgroups of the respective points in ${\mathcal{X}}_{[21]}$ and the resulting copies of $S_2\subset S_3$ are all conjugate.

\subsection{Orbifold fundamental group}\label{subsect:ofg}

The \emph{orbifold fundamental group} $\pi^*_1$ classifies based loops in an orbifold up to continuous deformation, generalizing the usual fundamental group \cite{thurston_geometry_2002, Boileau_2003_Three, harshman_anyons_2020}. Intuitively, an orbifold path $\gamma: I \to {\mathcal{Q}}$ consists of a map $|\gamma|: I \to |{\mathcal{Q}}|$ of the underlying spaces together with compatible lifts to local models on the orbifold. For example, if ${\mathcal{Q}} = \mathbb{R}^2/S_2$, there are two types of paths which touch the singular boundary: one which lifts to the transmitted path in the local model and the other which lifts to the reflected path. These paths realize, and are distinguished by, the two elements of $\pi_1^*(\mathbb{R}^2/S_2)=S_2$.
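The conjugate stabilizers in the $N=3$ example above can be verified directly (a toy check of ours, with permutations written as index tuples):

```python
from itertools import permutations

def stabilizer(point):
    """All permutations s in S_N with (x_{s(1)}, ..., x_{s(N)}) = x.

    For a point of the stratum X_[nu] this returns the copy of S_[nu]
    that permutes only coinciding coordinates, so |H_x| = |S_[nu]| and
    the orbit has N!/|H_x| members.
    """
    n = len(point)
    return [s for s in permutations(range(n))
            if tuple(point[i] for i in s) == point]
```

For $(a,a,b)$, $(a,b,a)$, and $(b,a,a)$ the non-trivial stabilizing transpositions are $(1,2)$, $(1,3)$, and $(2,3)$ respectively: three conjugate copies of $S_2 \subset S_3$, exactly as stated in the text.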
As with $\pi_1$, the choice of base point does not affect the isomorphism type of $\pi_1^*$ for path-connected spaces. Note that for $d\geq 3$ the orbifold singular points have large enough co-dimension $\tilde{d} \geq 3$ that their presence or absence has no effect on the connectivity or the (orbifold) fundamental groups of ${\mathcal{X}}$, ${\mathcal{Q}}$, and $|{\mathcal{Q}}|$. Therefore, each of these (orbifold) fundamental groups is canonically isomorphic with $S_N(\mathcal{M})$. However, for $d=1$ and $d=2$, the orbifold fundamental group `feels' the disruption to connectedness created by the local symmetry groups of orbifold loci. Like the fundamental group, the orbifold fundamental group keeps track of how paths intersect with $\tilde{d}=1$ orbifold loci and wind around $\tilde{d}=2$ loci. Using the orbifold fundamental group, the same relation $\pi_1^*({\mathcal{Q}}) = S_N({\mathcal{M}})$ (\ref{eq:gensym}) holds for any dimension. When $\pi_1({\mathcal{M}})$ is trivial, $S_N({\mathcal{M}})$ reduces to $S_N$, the universal cover of ${\mathcal{Q}}$ is the simply-connected space $\widetilde{{\mathcal{Q}}} = {\mathcal{X}}$, and topological exchange statistics reproduces the results of the symmetrization postulate in any dimension (see Appendix B). For non-trivial $\pi_1({\mathcal{M}})$, the universal covers $\widetilde{{\mathcal{Q}}} = \widetilde{{\mathcal{X}}} = \widetilde{{\mathcal{M}}}^N$ coincide. The topology of the base manifold allows multiple exchange paths supporting the same particle permutation to be distinguished. Equivalently, there are paths that do not exchange particles but are not homotopically trivial. These results can be summarized in a short exact sequence, a linear sequence of groups connected by homomorphisms such that the image of one homomorphism is the kernel of the next, and `short' in the sense that there are five group terms beginning and ending with the trivial group.
The following short exact sequence holds for any path-connected base manifold ${\mathcal{M}}$ of any dimension and relates the fundamental group of ${\mathcal{X}}$ to the orbifold fundamental group of ${\mathcal{Q}}$:
\begin{subequations}\label{eq:ses}
\begin{equation}
1 \to \pi_1({\mathcal{X}}) \to \pi_1^*({\mathcal{Q}}) \to S_N \to 1,
\end{equation}
which specializes to
\begin{equation}
1 \to \pi_1(\mathcal{M})^N \to S_N(\mathcal{M}) \to S_N \to 1.
\end{equation}
\end{subequations}
The proof of (\ref{eq:gensym}) and (\ref{eq:ses}) in Ref.~\cite{imbo_identical_1990} holds when $d \geq 3$, but fails to generalize to the orbifold case. However, we have an alternate proof that employs a distinct but equivalent definition of the orbifold fundamental group. In this definition, as for the usual fundamental group, the orbifold fundamental group is the deck transformation group of the universal cover \cite{thurston_geometry_2002}. Then the short exact sequence can be extracted from a series of covers $\widetilde{{\mathcal{X}}} \to {\mathcal{X}} \to {\mathcal{Q}}$. Here, the covering $\widetilde{{\mathcal{X}}} \to {\mathcal{X}}$ is the usual universal covering of the manifold ${\mathcal{X}}$, which has deck transformation group isomorphic to $\pi_1({\mathcal{X}})$, and the composition $\widetilde{{\mathcal{X}}} \to {\mathcal{Q}}$ is the universal cover of the orbifold ${\mathcal{Q}}$. Since ${\mathcal{X}} \to {\mathcal{Q}}$ comes from a group quotient, the action of the deck transformation group is transitive, and thus the cover is regular. This means that $\pi_1({\mathcal{X}})$ includes into $\pi_1^*({\mathcal{Q}})$ as a normal subgroup with quotient equal to the deck transformation group $S_N$ of the cover ${\mathcal{X}} \to {\mathcal{Q}}$, giving the short exact sequence (\ref{eq:ses}). However, we should warn the reader that we have done something sneaky here.
When passing to the deck transformation interpretation of the orbifold fundamental group, we have changed the object of study! That is, for each cover, there are two groups at work here. One is the group of passive transformations, acting as diffeomorphisms of the covering space, forming a discrete gauge group. In the case of ${\mathcal{X}} \to {\mathcal{Q}}$, it is the group that permutes choices of particle labels and in the case of $\widetilde{{\mathcal{Q}}}=\widetilde{{\mathcal{X}}} \to {\mathcal{Q}}$, it additionally intertwines how the constituent particles have wound around ${\mathcal{M}}$. The other group is the ``point-pushing'' group of orbifold-homotopy loops in the configuration space that realizes active transformations of indistinguishable particles. These groups are isomorphic, which is what allows the proof of (\ref{eq:gensym}) and (\ref{eq:ses}), but their actions on $\widetilde{{\mathcal{Q}}}$ are distinct and commute. In Sect.~\ref{subsect:interval} below, we contrast these actions for the simplest case of indistinguishable particles on a one-dimensional interval.

\section{Application to one dimension}

The orbifold fundamental group (\ref{eq:gensym}) of ${\mathcal{Q}}$ describes how two factors determine the connectedness of configuration space: indistinguishability and the fundamental group of the base manifold. In three dimensions and higher, those are the only two factors that contribute to topological exchange statistics. However, hard-core or singular interactions exclude points from configuration space, and in $d=1$ and $d=2$, excluding points of coincidence alters $\pi_0$ (the set of path-connected components) and $\pi_1$ of configuration space because this locus has co-dimension $\tilde{d} \leq 2$. A co-dimension $\tilde{d}=1$ defect, such as the set of two-body coincidences $\Delta_2$ in one dimension, locally splits configuration space into connected components.
Recall that $\Delta_2$ is formed as the union of sets defined by equations of the form $x_i=x_j$, so for a generic point, the local splitting is into two pieces in the same manner as a point on a line, a line in a plane, or a plane in a three-dimensional space. When the intersections of all two-body coincidences $\Delta_2$ are removed to form ${\mathcal{X}}_2 = {\mathcal{X}} - \Delta_2$, particle exchanges through coincidences become impossible and the particles can be given a consistent order on each element of $\pi_0({\mathcal{X}}_2)$. The removal of codimension $\tilde{d}=2$ defects, which sit in configuration space like a point in a plane or a line in a space, also disrupts the connectivity of configuration space. Paths that wind around the defects lead to new, non-trivial elements of $\pi_1$ that serve as (often non-abelian) winding numbers. Such defects occur as the result of the following few-body coincidences in ${\mathcal{X}}$~\cite{harshman_anyons_2020, harshman_coincidence_2018}:
\begin{enumerate}
\item two-body coincidences $\Delta_2$ defined by $x_j = x_k$ and $y_j = y_k$ in two dimensions,
\item three-body coincidences $\Delta_3$ defined by $x_i = x_j = x_k$ in one dimension, and
\item non-local, partial four-body coincidences $\Delta_{2,2}$ formed by the intersection of two two-body coincidences $x_i = x_j$ and $x_k = x_l$.
\end{enumerate}
The first case famously leads to the braid group and its generalizations (see Sect.~\ref{subsec:intrinstic}). We call (orbifold) fundamental groups that derive from the exclusion of these coincidence loci \emph{strand groups} because, like the braid group, they are generalizations of the symmetric group that can be realized by strand diagrams. However, we note that for the case of $\Delta_{2,2}$, this notion of strand group involves non-local constraints, as we discuss below.
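The three few-body coincidence loci just enumerated can be illustrated by testing which loci a given $d=1$ configuration in ${\mathcal{X}}$ meets. A minimal sketch (the helper name and the sample configurations are ours):

```python
from itertools import combinations

def coincidence_type(x):
    """Return (in Delta_2, in Delta_3, in Delta_{2,2}) for a 1D configuration x."""
    # all coincident pairs x_i = x_j
    pairs = [(i, j) for i, j in combinations(range(len(x)), 2) if x[i] == x[j]]
    in_d2 = len(pairs) > 0
    # three-body coincidence: some position occurs at least three times
    in_d3 = any(sum(1 for xi in x if xi == v) >= 3 for v in set(x))
    # double two-body coincidence: two coincident pairs with disjoint indices
    in_d22 = any(not (set(p) & set(q)) for p, q in combinations(pairs, 2))
    return in_d2, in_d3, in_d22

assert coincidence_type((0.1, 0.4, 0.9, 0.6)) == (False, False, False)
assert coincidence_type((0.1, 0.1, 0.9, 0.6)) == (True, False, False)   # Delta_2
assert coincidence_type((0.1, 0.1, 0.1, 0.6)) == (True, True, False)    # Delta_3
assert coincidence_type((0.1, 0.1, 0.6, 0.6)) == (True, False, True)    # Delta_{2,2}
```

The last case exhibits the non-local character of $\Delta_{2,2}$: the two coincident pairs can be arbitrarily far apart on the line.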
Therefore, for $d=1$ base manifolds $\mathcal{M}$, we define the additional configuration spaces
\begin{eqnarray}\label{eq:tilded2}
&{\mathcal{X}}_3 = \mathcal{M}^N - \Delta_3, \quad {\mathcal{Q}}_3 = {\mathcal{X}}_3/S_N \\
&{\mathcal{X}}_{2,2} = \mathcal{M}^N - \Delta_{2,2}, \quad {\mathcal{Q}}_{2,2} = {\mathcal{X}}_{2,2}/S_N \nonumber\\
&{\mathcal{X}}_{\{3;2,2\}} = \mathcal{M}^N - \Delta_3 \cup \Delta_{2,2}, \quad {\mathcal{Q}}_{\{3;2,2\}} = {\mathcal{X}}_{\{3;2,2\}}/S_N \nonumber
\end{eqnarray}
where the spaces ${\mathcal{X}}_3, {\mathcal{Q}}_3$ excluding three-body coincidences are of interest for $N \geq 3$ and spaces excluding double two-body coincidences are relevant for $N \geq 4$. Because these ${\mathcal{Q}}$-spaces include the single two-body coincidence orbifold locus $\Delta_2/S_N$, we consider them as orbifolds and use $\pi_1^*$. There are only two homotopy types for manifolds in one dimension: the interval type with $\pi_1(\mathcal{M}) = 1$ (a type that includes the infinite interval $\mathbb{R}$) and the circle $S^1$ with $\pi_1(\mathcal{M}) = \mathbb{Z}$. We classify the possible topological exchange statistics for both manifold types below for indistinguishable and distinguishable particles with and without the relevant coincidence loci removed.

\subsection{Particles on interval-type manifolds}\label{subsect:interval}

When ${\mathcal{M}}$ is of the simply connected interval type, the configuration space for distinguishable particles ${\mathcal{X}}$ is also path-connected and simply-connected and there are no non-trivial exchange statistics. For a finite interval, ${\mathcal{X}}$ is an $N$-dimensional hypercube, but without loss of generality we extend to the infinite interval and consider ${\mathcal{M}} = {\mathbb{R}}$ and ${\mathcal{X}} = {\mathbb{R}}^N$. If the particles are distinguishable but identical, then particle permutations are a symmetry of the Hamiltonian.
A permutation $s \in S_N$ is represented by an orthogonal transformation $O_s \in O(S_N) \subset O(N)$ on ${\mathcal{X}} = {\mathbb{R}}^N$; see Refs.~\cite{harshman_one-dimensional_2016a,harshman_one-dimensional_2016} for a pedagogical introduction. The symmetrization postulate uses the representations $O(S_N)$ to decompose the Hilbert space of wave functions on ${\mathcal{X}}$ into symmetric and antisymmetric subspaces (see Appendix B). The intrinsic approach to indistinguishable particles also gives the symmetric group $\pi_1^*({\mathcal{Q}}) = S_N$, as expected, but with a different interpretation in terms of exchange loops. The orbifold fundamental group is generated by $N-1$ pairwise exchanges $\sigma_1$ through $\sigma_{N-1}$ satisfying the relations
\begin{subequations}\label{rel}
\begin{eqnarray}
\sigma_i^2 &=& 1 \label{rel:braid}\\
\sigma_i \sigma_{i+1}\sigma_i &=& \sigma_{i+1} \sigma_i \sigma_{i+1} \label{rel:traid}\\
\sigma_i \sigma_j &=& \sigma_j \sigma_i\ \mbox{for}\ j > i+1 \label{rel:fraid}.
\end{eqnarray}
\end{subequations}
Recall that ${\mathcal{Q}}$ is the space of indistinguishable particles, so the generators $\sigma_i \in \pi_1^*({\mathcal{Q}})$ are not the discrete permutations of labeled particles. Instead, they are continuous paths that actively exchange particle \emph{orderings}, where each configuration of points has a natural particle order determined by the position of the particles in ${\mathbb{R}}$. The interpretation of $\sigma_1$ is that it realizes a loop in ${\mathcal{Q}}$ that exchanges the first and second particles; $\sigma_2$ exchanges the second and third particles, etc. These exchanges are realized as strand diagrams in Fig.~\ref{fig:symrels}.
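The relations (\ref{rel}) can be verified directly in the permutation representation, where each $\sigma_i$ maps to the adjacent transposition of slots $i$ and $i+1$. A minimal sketch (plain Python; the composition convention and $N=5$ are our choices for illustration):

```python
N = 5

def transposition(i, n=N):
    # sigma_i exchanges neighboring slots i and i+1 (0-indexed here)
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def compose(p, q):
    # group law on permutation tuples: apply q, then p
    return tuple(p[q[i]] for i in range(len(p)))

e = tuple(range(N))
sig = [transposition(i) for i in range(N - 1)]

for i in range(N - 1):                       # self-inverse relation
    assert compose(sig[i], sig[i]) == e
for i in range(N - 2):                       # Yang-Baxter relation
    lhs = compose(sig[i], compose(sig[i + 1], sig[i]))
    rhs = compose(sig[i + 1], compose(sig[i], sig[i + 1]))
    assert lhs == rhs
for i in range(N - 1):                       # locality relation
    for j in range(i + 2, N - 1):
        assert compose(sig[i], sig[j]) == compose(sig[j], sig[i])
```

These are exactly the three relations that are selectively broken below to obtain the braid, traid, and fraid groups.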
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{symrels.pdf}
\caption{Depiction of the three types of generator relations (\ref{rel}) for $\pi_1^*({\mathcal{Q}})$ when $N=4$ as strand diagrams, read from the bottom: (a) the self-inverse relation $\sigma_1^2 = 1$ that is broken for the braid group generators; (b) the Yang-Baxter relation $\sigma_1 \sigma_2 \sigma_1 = \sigma_2 \sigma_1 \sigma_2$ that is broken for the traid group generators; (c) the locality relation $\sigma_1 \sigma_3 = \sigma_3 \sigma_1$ that is broken for the fraid group generators.}
\label{fig:symrels}
\end{figure}

We want to emphasize the difference between these two appearances of the symmetric group: (1) the active, continuous particle exchanges $\pi_1^*({\mathcal{Q}}) \sim S_N$ represented as closed loops on ${\mathcal{Q}}$; and (2) the group of passive particle label permutations $O(S_N) \sim S_N$ represented as orthogonal transformations on ${\mathcal{X}}$. We compare the action of these groups on the points of ${\mathcal{X}}_2 = {\mathcal{X}} - \Delta_2$ by restricting the action of $O(S_N)$ from ${\mathcal{X}}$ to ${\mathcal{X}}_2$ and lifting the action of $\pi_1^*({\mathcal{Q}})$ from ${\mathcal{Q}}$ to ${\mathcal{X}}_2$. See Fig.~\ref{fig:deck} for a depiction of these different actions for the case of $N=3$.

\begin{figure*}
\centering
\includegraphics[width=\textwidth]{deck.pdf}
\caption{Graphical comparison of the actions of $O(S_N)\sim S_N$ and $\pi^*_1({\mathcal{Q}}) \sim S_N$ on ${\mathcal{X}}_2$ for $N=3$. (a) The relative configuration space for three distinguishable particles on a line with coordinates $(x_1,x_2,x_3)$. The horizontal coordinate is $z_1 = (x_1 - x_2)/\sqrt{2}$ and the vertical coordinate is $z_2= (x_1 + x_2 - x_3)/\sqrt{6}$.
The black lines are the two-body coincidence locus $\Delta_2$ and they section ${\mathcal{X}}$ into six alcoves ${\mathcal{Y}}_\omega \in \pi_0({\mathcal{X}}_2)$ where the particles have different position orders $x_{\omega_1} < x_{\omega_2} < x_{\omega_3}$. (b) Reflections across each of the three colored lines are orthogonal transformations $O_s$ of ${\mathcal{X}}$ that represent the passive permutation of particle labels. For example, $O_{(12)}$ is a reflection across the vertical line (red) that permutes the labels of particle 1 and particle 2 and $O_{(23)}$ is a reflection across the diagonal line with positive slope (green) that permutes the labels of particles 2 and 3. The double-sided arrows (red) connect points $x_\omega \in p^{-1}(q)$ and indicate how the ${\mathcal{Y}}_\omega \in \pi_0({\mathcal{X}}_2)$ are permuted by $O_{(12)}$. (c) The relative components of the orbifold ${\mathcal{Q}}$ and a based loop $\sigma_1 \in \pi^*_1({\mathcal{Q}})$ starting and ending at manifold point $q$. This path realizes an exchange $\sigma_1$ of the first two particles. (d) The based loop $\sigma_1$ in ${\mathcal{Q}}$ is lifted to six paths in ${\mathcal{X}}_2$ that start at $x_\omega$ and end at $x_{\omega'}$. For visual clarity, half of the path lifts are represented as dashed lines. These six lifts of the element $\sigma_1 \in \pi^*_1({\mathcal{Q}})$ define a map $\tilde{\sigma}_1$ on ${\mathcal{X}}_2$ that permutes the ${\mathcal{Y}}_\omega \in \pi_0({\mathcal{X}}_2)$. }
\label{fig:deck}
\end{figure*}

\subsubsection{Singular two-body interactions}

For particles in one dimension with hard-core two-body interactions, the two-body coincidence locus $\Delta_2$ divides ${\mathcal{X}}$ into $N!$ simply-connected alcoves ${\mathcal{Y}}_\omega \in \pi_0({\mathcal{X}}_2)$. Each alcove ${\mathcal{Y}}_\omega$ is an open subset of ${\mathcal{X}}$ such that its image $p({\mathcal{Y}}_\omega)$ consists entirely of manifold points in ${\mathcal{Q}}_2 \subset {\mathcal{Q}}$.
Each simply-connected component $\mathcal{Y}_\omega$ has the geometry of an open cone on an open simplex on the $N-1$ sphere~\cite{harshman_integrable_2017}. Conversely, each manifold point $q$ in ${\mathcal{Q}}_2$ lifts to $N!$ points $p^{-1}(q)=\left\{x_\omega \mid x_\omega \in {\mathcal{Y}}_\omega, p(x_\omega)=q \right\}$. Each alcove index $\omega = [\omega_1 \omega_2 \ldots \omega_N]$ is a permutation of the set $\{12\ldots N\}$ that indicates the positional ordering of labeled particles in ${\mathbb{R}}$; see Fig.~\ref{fig:deck}. When ${\mathcal{M}}$ is of interval type, sorting in this manner gives a one-to-one correspondence between alcove orderings ${\mathcal{Y}}_\omega$ and permutations in $S_N$. However, as we explore in the next section, when ${\mathcal{M}}=S^1$ we can only order particles cyclically and so this correspondence develops a cyclic ambiguity. For indistinguishable particles, $\mathcal{Q}_2$ is isomorphic to $\mathcal{Y}_\omega$ and the fundamental group $\pi_1({\mathcal{Q}}_2)=1$ is trivial. This provides a topological perspective on the so-called fermionization of hard-core bosons \cite{girardeau_relationship_1960, harshman_infinite_2017}. Because no exchanges are possible, there are no topological exchange statistics that differentiate fermions and bosons from the intrinsic perspective. As we discuss in Sect.~IV below, non-topological exchange statistics that interpolate between bosonic and fermionic solutions (sometimes called Leinaas-Myrheim anyons) have been defined by imposing Robin boundary conditions on $\Delta_2/S_N$ (equivalent to delta-interactions on $\Delta_2 \subset {\mathcal{X}}$) \cite{leinaas_theory_1977, posske_2017, balachandran_classical_1991}.

\subsubsection{Singular few-body interactions}

In contrast to ${\mathcal{Q}}_2$, removing the $\tilde{d}=2$ few-body coincidences (\ref{eq:tilded2}) gives non-trivial topological exchange statistics.
We define the following strand groups:
\begin{subequations}\label{aidgroups}
\begin{eqnarray}
\pi^*_1( \mathcal{Q}_3) &=& T_N \\
\pi^*_1( \mathcal{Q}_{2,2}) &=& F_N \\
\pi^*_1( \mathcal{Q}_{\{3;2,2\}}) &=& W_N
\end{eqnarray}
\end{subequations}
We call these discrete, infinite, non-abelian groups the traid group $T_N$ \cite{harshman_anyons_2020} (aka doodle group, planar braid group, twin group), the fraid group $F_N$, and the free Coxeter group $W_N$ (aka universal Coxeter group \cite{humphreys_reflection_1992}). Like the braid group $B_N$, the strand groups $T_N$, $F_N$ and $W_N$ can be understood as resulting from eliminating generator relations of the symmetric group \cite{harshman_anyons_2020}. Relaxing relation (\ref{rel:braid}), the generators $\sigma_i$ give the braid group $B_N$. Relaxing (\ref{rel:traid}) gives the traid group $T_N$ and relaxing (\ref{rel:fraid}) gives the fraid group $F_N$. Relaxing both (\ref{rel:traid}) and (\ref{rel:fraid}) means that only the self-inverse relations remain and the resulting group is the free Coxeter group $W_N$. Each of these groups admits natural homomorphisms to $S_N$ obtained by reintroducing the lost relations. These relations have a topological basis. When $d\geq 2$, each $\sigma_i$ can be represented by a path that avoids the coincidence locus $\Delta_2$. The loci $\Delta_3$ and $\Delta_{2,2}$ have codimension $2d \geq 4$, so their removal does not affect the fundamental group: any null-homotopy of a loop can be arranged to avoid these sets. However, for $d=2$, the relation $\sigma_i^2=e$ comes from a null-homotopy of a path which must pass through the codimension $2$ locus $\Delta_2$, so the presence of $\Delta_2$ determines the presence of the relation (\ref{rel:braid}). In dimension $d=1$, paths representing $\sigma_i$ cannot be represented disjointly from $\Delta_2$.
However, the loci $\Delta_3$ and $\Delta_{2,2}$ are co-dimension $2d=2$ and the relations (\ref{rel:traid}) and (\ref{rel:fraid}) are induced by null-homotopies that must pass through these sets, respectively. For example, any null-homotopy of the natural representative of $(\sigma_i \sigma_{i+1})^3$ must have a strand diagram with at least one triple point. In other words, the null-homotopy passes through $\Delta_3$. The pure version of each of these groups in (\ref{aidgroups}) is defined by looking at the corresponding distinguishable particle configuration space. For example, we can define the pure traid group as $PT_N = \pi_1({\mathcal{X}}_3)$. Following the argument of Sect.~\ref{subsect:ofg}, we construct the following short exact sequences analogous to (\ref{eq:ses}):
\begin{subequations}\label{paidgroups}
\begin{eqnarray}
&1 \to PT_N \to T_N \to S_N \to 1& \\
&1 \to PF_N \to F_N \to S_N \to 1& \\
&1 \to PW_N \to W_N \to S_N \to 1&.
\end{eqnarray}
\end{subequations}
Some mathematical results for the pure traid group $PT_N$ can be found in~\cite{bardakov_structural_2019, Naik20, Mostovoy20, MostovoyPresentation}. The possible topological exchange statistics for these novel strand groups are classified by the irreducible representations of these groups. The irreducible representations of $T_N$ and $PT_N$ have not received much attention from mathematicians, although the abelian representations of $T_N$ are classified and several non-abelian representations were found in Ref.~\cite{harshman_anyons_2020}. However, there is still not a complete classification of the non-abelian irreducible representations of the braid group 75 years after the group was first described~\cite{artin_theory_1947}, so one should expect that classifying the non-abelian representations of the traid and fraid groups will be a similarly complicated and long-standing mathematical project.
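The homomorphism from a strand group to $S_N$ obtained by reintroducing the lost relations can be made concrete: a word in the generators is evaluated to a permutation, and distinct strand-group elements can land on the same image. A small illustration (the evaluation function and 1-indexed word convention are ours):

```python
def transposition(i, n):
    # adjacent transposition of slots i and i+1 (0-indexed)
    p = list(range(n))
    p[i], p[i + 1] = p[i + 1], p[i]
    return tuple(p)

def image_in_sn(word, n):
    """Evaluate a word in generators sigma_1..sigma_{n-1} (1-indexed) in S_n."""
    p = tuple(range(n))
    for i in word:
        t = transposition(i - 1, n)
        p = tuple(t[p[k]] for k in range(n))
    return p

# In the traid group T_3 the words s1 s2 s1 and s2 s1 s2 are distinct
# elements (the Yang-Baxter relation is broken), yet both map to the
# same permutation in S_3 once the relation is reintroduced:
assert image_in_sn([1, 2, 1], 3) == image_in_sn([2, 1, 2], 3)
# The self-inverse relation survives in T_N, so s1 s1 maps to the identity:
assert image_in_sn([1, 1], 3) == (0, 1, 2)
```

The kernel of this evaluation map is exactly the pure strand group appearing in the corresponding short exact sequence (\ref{paidgroups}).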
\subsection{Particles on rings}

For $\mathcal{M}=S^1$, the distinguishable particle configuration space ${\mathcal{X}} = T^N$ is the $N$-torus $T^N = S^1 \times \cdots \times S^1$ with fundamental group $\pi_1(T^N) = {\mathbb{Z}}^N$. This abelian group of $N$-tuples of integers under addition describes equivalence classes of paths by how many times each particle winds around the ring. It is generated by the $N$ translations $t_i$ of a single particle around the circle. All irreducible representations of ${\mathbb{Z}}^N$ are classified by $N$-tuples of phases $(\phi_1, \cdots, \phi_N)$ with $\phi_i\in [0, 2\pi)$. Only for all $\phi_i=0$ are the wave functions on ${\mathcal{X}}$ single-valued. For identical but distinguishable particles, all phases take the same value $\phi_i = \phi$. For indistinguishable particles, the orbifold fundamental group allows generalized parastatistics given by
\begin{equation}\label{eq:Qring}
\pi^*_1(\mathcal{Q}) = S_N(S^1) = {\mathbb{Z}} \wr S_N = S_N \ltimes {\mathbb{Z}}^N.
\end{equation}
This group is non-abelian even for $N=2$ and in addition to multi-valued, scalar representations~\cite{forte_quantum_1992}, there are multi-valued, multi-component wave functions on the orbifold $\mathcal{Q}$ that realize states with generalized parastatistics. In principle, a complete classification of the irreducible representations of (\ref{eq:Qring}) can be found using the method of induced representations of the normal subgroup ${\mathbb{Z}}^N$~\cite{altmann_induced_1977}.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{affsym.pdf}
\caption{Depictions of six different elements of $S_4(S^1) = {\mathbb{Z}} \wr S_4 = {\mathbb{Z}}^4 \rtimes S_4$ mentioned in the text. In these strand diagrams, the ring has been cut and flattened into a line and the gray dashed lines on either side are the cut.
The elements depicted include: $\sigma_4$, the pairwise exchange of the first and fourth particle (relative to the cut) constructed from $S_N$ generators as in first line of (\ref{eq:sigmaN}); $t_1$, one of the $4$ generators of ${\mathbb{Z}}^4$ subgroup of $S_4(S^1)$; its inverse $t_1^{-1}$; the `around-the-back' operator $\sigma_0$ defined in (\ref{eq:sigma0}) and simplified using $\sigma_i^2 = 1$; the $\zeta$ operators defined in (\ref{eq:zeta}) corresponding to a shift in the center-of-mass by $\pi/2$ around the ring; and $\zeta^{-1}$ its inverse.} \label{fig:affsym} \end{figure} \subsubsection{Twisted subgroup} The group $\pi^*_1(\mathcal{Q}) = S_N(S^1)$ contains the affine symmetric group $\tilde{S}_N$~\cite{humphreys_reflection_1992} (also called the twisted symmetric group \cite{sutherland_beautiful_2004}) as a normal subgroup. To show this, denote the $N-1$ generators of $S_N$ by $\sigma_i$ as in (\ref{rel}) and the $N$ generators of ${\mathbb{Z}}^N$ by $t_i$. The semidirect action of $S_N$ on ${\mathbb{Z}}^N$ in the wreath product (\ref{eq:Qring}) implies the generator relations $t_i \sigma_i = \sigma_i t_{i+1}$ and $t_i \sigma_{j} = \sigma_{j} t_i$ for $j \neq i$ and $j \neq i +1$. In terms of these generators, define the following three elements: \begin{subequations} \begin{eqnarray} \sigma_N &=& \sigma_1 \sigma_2 \ldots \sigma_{N-2} \sigma_{N-1}\sigma_{N-2}\ldots \sigma_2 \sigma_1 \label{eq:sigmaN}\\ &=& \sigma_{N-1} \sigma_{N-2} \ldots \sigma_2 \sigma_1\sigma_2\ldots \sigma_{N-2} \sigma_{N-1} \nonumber\\ \sigma_0 &=& t_1 \sigma_N t_1^{-1} = t_1 t_N^{-1} \sigma_N \label{eq:sigma0}\\ \zeta &=& t_1 \sigma_1 \sigma_2 \ldots \sigma_{N-1} = \sigma_1 \ldots \sigma_{N-1} t_N.\label{eq:zeta} \end{eqnarray} \end{subequations} These elements are depicted as strand diagrams in Fig.~\ref{fig:affsym}. 
The element $\sigma_N$ is the pairwise exchange of the `first' and `last' particles (with respect to some starting angle on the ring) in which they pass through all the particles between them. The element $\sigma_0$ is the pairwise exchange of the first and last particle `around the back', i.e., without crossing the other particles. With the addition of $\sigma_0$, the set of elements $\{\sigma_0, \sigma_1,\ldots,\sigma_{N-1}\}$ satisfies the defining relations for generators of the affine symmetric group $\tilde{S}_N$~\cite{sutherland_beautiful_2004}. This establishes that $\tilde{S}_N \subset S_N(S^1)$. To show $\tilde{S}_N$ is a normal subgroup, consider the element $\zeta$. It shifts all particles one place in the order and generates a subgroup ${\mathbb{Z}}_\zeta \subset S_N(S^1)$ isomorphic to the integers. It also satisfies the relation \begin{equation}\label{eq:zetarel} \zeta \sigma_i = \sigma_{i+1}\zeta \end{equation} and in particular $\zeta \sigma_{N-1} = \sigma_0 \zeta$. The element $\zeta \notin \tilde{S}_N$ acts as an outer automorphism on the generators of $\tilde{S}_N$ (and therefore on all of $\tilde{S}_N$) establishing that $\tilde{S}_N$ is a normal subgroup of $S_N(S^1)$. Therefore, the orbifold fundamental group can be equivalently expressed as \begin{equation}\label{eq:Qringsep} S_N(S^1) = {\mathbb{Z}}_\zeta \ltimes \tilde{S}_N \end{equation} where the semidirect product is specified by (\ref{eq:zetarel}). Further note that $\zeta^N = t_1 t_2 \cdots t_N$ realizes a displacement of all particles one trip around the ring, i.e.~a full displacement of the center-of-mass. One can show that $\zeta^N$ commutes with all elements of $\tilde{S}_N$. \subsubsection{Singular two-body interactions} Now consider ${\mathcal{X}}_2$. The removal of $\Delta_2$ divides the $N$-torus $T^N$ into only $(N-1)!$ sectors $\mathcal{Y}_\omega \in \pi_0({\mathcal{X}}_2)$ where $\omega$ labels a {\em cyclic} order of the particles. 
There is one cyclic order for every coset in the quotient $S_N/C_N$ of the symmetric group by the cyclic group $C_N \cong {\mathbb{Z}}/N \equiv {\mathbb{Z}}_N$. For $N > 2$, hard-core two-body interactions lock the particles into a particular cyclic order $\omega$ and ${\mathcal{X}}_2$ is not path-connected. Spaces with multiple path components may, {\em a priori}, have non-isomorphic fundamental groups for each component. However, within each ordering sector $\mathcal{Y}_\omega$, the fundamental group $\pi_1(\mathcal{Y}_\omega)$ is naturally isomorphic to the integers. We denote this group as $N {\mathbb{Z}}_\zeta$ as it is generated by $\zeta^N$, a full cycle of all the particles around the ring. This ``full rotation'' interpretation also gives a natural isomorphism from $\pi_1(\mathcal{Y}_\omega)$ to the fundamental group of the base space, $\pi_1(S^1)$. As in the interval case, $p: \mathcal{X}_2 \to \mathcal{Q}_2$ is not a connected cover, but unlike the interval case, $p$ is not simply an isomorphism from each $\mathcal{Y}_\omega$ to $\mathcal{Q}_2$. Rather, the restriction of $p$ to $\mathcal{Y}_\omega$ is a connected $N:1$ cover. Each point in $\mathcal{Q}_2$ corresponds to $N$ distinct points in $\mathcal{Y}_\omega$ that differ by a cyclic rotation $c \in C_N$ acting as a deck/gauge transformation of distinguishable particles. The fundamental group $\pi_1(\mathcal{Q}_2)={\mathbb{Z}}_\zeta$ generated by $\zeta$ naturally contains $\pi_1(\mathcal{Y}_\omega) \sim \pi_1(S^1)$ as its $N {\mathbb{Z}}_\zeta$ subgroup generated by $\zeta^N$. These results establish that the corresponding short exact sequence of abelian groups for a connected component ${\mathcal{Y}}_\omega \subset {\mathcal{X}}_2$ and ${\mathcal{Q}}_2$ is
\begin{eqnarray}\label{eq:ses1}
1 \to \pi_1({\mathcal{Y}}_\omega) \to \pi_1({\mathcal{Q}}_2) \to C_N \to 1 \nonumber\\
1 \to N{\mathbb{Z}}_\zeta \to {\mathbb{Z}}_\zeta \to {\mathbb{Z}}/N \to 1.
\end{eqnarray}
Note that the symmetric group $S_N$ no longer appears in this sequence and therefore there are no FD statistics or parastatistics possible. Instead the abelian group $\pi_1({\mathcal{Q}}_2)= {\mathbb{Z}}_\zeta$ has one-dimensional representations characterized by a single phase $\phi$. The geometry of ${\mathcal{Q}}_2$ provides insights into these results for $\pi_1({\mathcal{Q}})$ and $\pi_1({\mathcal{Q}}_2)$. The manifold ${\mathcal{Q}}_2$ is a M\"obius band for the case $N=2$ \cite{leinaas_theory_1977}; see Fig.~\ref{fig:cover2}. For $N=3$, the space ${\mathcal{Q}}_2$ is a kind of Penrose triangle, a torus with an equilateral triangle cross section that makes a $2\pi/3$ twist every rotation; see Fig.~\ref{fig:cover3}. For higher $N$, ${\mathcal{Q}}_2$ is a generalized simplex hyperprism with a twist; e.g., for $N=4$ the cross-section of the hyperprism is a rhombic disphenoid. The ordering sector ${\mathcal{Y}}_\omega$ is an $N$-fold cover of $\mathcal{Q}_2$ that is also a simplex hyperprism, but no longer twisted. The universal cover of ${\mathcal{Q}}_2$ is the product of ${\mathbb{R}}$ and an $(N-1)$-simplex (non-regular for $N \geq 4$) and is useful for solving the Schr\"odinger equation in certain polytopes \cite{turner_quantum_1984, jain_exact_2008}. For ${\mathcal{Q}}$ the corresponding geometries are the same, but the orbifold singularities at the boundaries are included.

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{cover2.pdf}
\caption{Depiction of the universal cover $\widetilde{{\mathcal{Q}}}= \widetilde{{\mathcal{X}}}$ and several useful domains for $N=2$ and ${\mathcal{M}}=S^1$. The black dots are all lifts of the same point in ${\mathcal{Q}}$ and ${\mathcal{X}}$, the black lines are the lifts of the coincidence locus $\Delta_2$ to $\widetilde{{\mathcal{Q}}}$. The red square (a) is one choice for the fundamental domain isomorphic to the torus ${\mathcal{X}} = T^2$.
Each congruent square would then be labeled by a pair of integers in $\pi_1({\mathcal{X}})={\mathbb{Z}}^2$ describing the path to the fundamental domain. The green triangle (b) and purple square (c) are equivalent alternate choices for the fundamental domain isomorphic to the M\"obius band ${\mathcal{Q}}$ or ${\mathcal{Q}}_2$. The blue rectangle (d) is the double cover of ${\mathcal{Q}}$ or ${\mathcal{Q}}_2$. It is the smallest torus $\overline{T}^2 = S^1_\mathrm{rel} \times S^1_\mathrm{com}$ for which there exist separable and single-valued center-of-mass and relative coordinates for indistinguishable particles. Although it has the same area in configuration space, it is not equivalent to the torus ${\mathcal{X}} = T^2$. The yellow square (e) is the double cover of ${\mathcal{X}}$ that is the smallest torus for which single-valued center-of-mass and relative coordinates exist for distinguishable particles.}
\label{fig:cover2}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\columnwidth]{side3.pdf}
\includegraphics[width=\columnwidth]{top3.pdf}
\caption{Two views of a depiction of the universal cover $\widetilde{{\mathcal{Q}}}= \widetilde{{\mathcal{X}}}$ and several useful domains for $N=3$ and ${\mathcal{M}}=S^1$. The black dots are all lifts of the same point in ${\mathcal{Q}}$ and ${\mathcal{X}}$, the black planes in the lower figure are the lifts of the coincidence locus $\Delta_2$ to $\widetilde{{\mathcal{Q}}}$. The five highlighted domains are equivalent to the five domains depicted in Fig.~\ref{fig:cover2}: the red cube is a fundamental domain congruent to ${\mathcal{X}} = T^3$, the green 3-orthoscheme tetrahedron and purple triangular prism are equivalent alternate choices for the fundamental domain isomorphic to the Penrose triangle ${\mathcal{Q}}$ or ${\mathcal{Q}}_2$.
The blue rhombic prism is the minimal separable torus for indistinguishable particles $\overline{T}^3 = T^2_\mathrm{rel} \times S^1_\mathrm{com}$ and the yellow rhombic prism is the minimal separable torus for distinguishable particles.} \label{fig:cover3} \end{figure} \subsubsection{Singular few-body interactions} For co-dimension $\tilde{d}=2$ interactions, we define the following strand groups: \begin{subequations}\label{raidgroups} \begin{eqnarray} \pi^*_1( \mathcal{Q}_3) &=& T_N(S^1) \\ \pi^*_1( \mathcal{Q}_{2,2}) &=& F_N(S^1) \\ \pi^*_1( \mathcal{Q}_{\{3;2,2\}}) &=& W_N(S^1) \end{eqnarray} \end{subequations} Like the interval versions (\ref{aidgroups}), the equivalent groups (\ref{raidgroups}) result from breaking the analogous generator relations, and each of these groups admits a map to $S_N(S^1)$ when the broken relations are reintroduced. The factorizations (\ref{eq:Qring}) and (\ref{eq:Qringsep}) of $S_N(S^1)$ provide two alternate ways to view the groups in (\ref{raidgroups}). For the case of $\pi^*_1( \mathcal{Q}_3)$, the two factorizations are \begin{subequations} \begin{eqnarray} T_N(S^1) & = & T_N \ltimes {\mathbb{Z}}^N \label{eq:traidwreath} \\ &= & {\mathbb{Z}}_\zeta \ltimes \tilde{T}_N \label{eq:twistedtraid}. \end{eqnarray} \end{subequations} In the first factorization, the normal subgroup ${\mathbb{Z}}^N$ of particle translations around the ring is the same as (\ref{eq:Qring}), but $T_N$ replaces $S_N$. The semidirect product in (\ref{eq:traidwreath}) obeys the same relations between the generators $\sigma_i$ and $t_j$ as (\ref{eq:Qring}). Similarly, in the second factorization the `twisted' traid group $\tilde{T}_N$ is defined from $\tilde{S}_N$ by breaking the relation (\ref{rel:traid}) extended to the larger set of generators $\{\sigma_0, \ldots, \sigma_{N-1} \}$. The semidirect product is inferred from the same generator relations as (\ref{eq:Qringsep}).
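The wreath-like factorization in (\ref{eq:traidwreath}) can be made concrete with a small numerical sketch. The plain-Python code below is our own illustration, not drawn from the references: it implements the group law $(t, s)(u, r) = (t + s\cdot u, sr)$ for ${\mathbb{Z}}^N \rtimes S_N$ — the $\pi_1^*({\mathcal{Q}})$ case of (\ref{eq:Qring}), whose generator relations (\ref{eq:traidwreath}) is asserted to share — with the standard action of permutations on winding-number vectors assumed. Permutations are represented as tuples; the traid generators themselves are not modeled.

```python
N = 3  # number of particles (illustrative)

def compose(s, r):
    # permutation composition: apply r first, then s (tuples as maps i -> s[i])
    return tuple(s[r[i]] for i in range(N))

def act(s, t):
    # permutation action on a winding-number vector: (s.t)_{s[i]} = t_i
    out = [0] * N
    for i in range(N):
        out[s[i]] = t[i]
    return tuple(out)

def mul(a, b):
    # semidirect-product law: (t, s)(u, r) = (t + s.u, s o r)
    (t, s), (u, r) = a, b
    su = act(s, u)
    return (tuple(t[i] + su[i] for i in range(N)), compose(s, r))

def inv(a):
    # inverse: (t, s)^(-1) = (-s^(-1).t, s^(-1))
    t, s = a
    s_inv = tuple(sorted(range(N), key=lambda i: s[i]))
    return (act(s_inv, tuple(-x for x in t)), s_inv)
```

Conjugating a pure translation by a permutation simply permutes its winding numbers, which is exactly the statement that ${\mathbb{Z}}^N$ sits inside the product as a normal subgroup.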
Equivalent constructions define the twisted fraid group $\tilde{F}_N$ and the twisted universal Coxeter group $\tilde{W}_N$. In principle, these forms of the factorization provide a way to construct and classify irreducible representations of these novel strand groups on rings from the irreducible representations of strand groups on intervals. Finally, pure versions of (\ref{raidgroups}) can be defined for the fundamental groups of the manifolds ${\mathcal{X}}_3$, ${\mathcal{X}}_{2,2}$ and $\mathcal{X}_{\{3;2,2\}}$ that satisfy short exact sequences like (\ref{paidgroups}). \section{Topological analysis of models with one-dimensional anyons} For some physicists, an abelian anyon model must (by definition) have fractional exchange statistics, i.e., a pairwise exchange of particles transforms the wave function by a phase $\exp(i\theta)$ that interpolates between bosons $\theta=0$ and fermions $\theta = \pi$. Therefore, much previous work on anyons in one dimension either imposes fractional exchange statistics on wave functions (or field operators) at the start~\cite{ha_fractional_1995, zhu_topological_1996, girardeau_anyon-fermion_2006} or derives wave functions or field operators with fractional exchange phases from a Hamiltonian~\cite{kundu_exact_1999, keilmann_statistically_2011}. However, in the previous section, we calculated the orbifold fundamental group (including the fundamental group in the case of ${\mathcal{Q}}_2$) for the configuration space of indistinguishable particles for every possible topologically disruptive interaction on both intervals and rings. For all scenarios, pairwise exchanges were either absent (as in ${\mathcal{Q}}_2$) or square-trivial (for ${\mathcal{Q}}$, ${\mathcal{Q}}_3$, and the rest). In no one-dimensional case is the (orbifold) fundamental group the braid group or any other group with abelian representations that furnish `traditional' fractional exchange statistics.
The necessary conclusion is that if fractional exchange statistics occur in models of particle systems in one dimension, they have a dynamical origin (i.e., they derive from interactions) and not a topological origin. In other words, unlike in two dimensions, fractional exchange statistics do not derive from the configuration space of indistinguishable particles on one-dimensional manifolds even accounting for excluded few-body coincidences. This in no way diminishes their physical or mathematical interest, but it may alter their interpretation and application. To make the contrast clear, first consider the simplest case where `traditional' fractional exchange statistics given by a phase $\theta$ occur: two particles in a plane with two-body coincidences excluded. The configuration space ${\mathcal{Q}}_2 = ({\mathbb{R}}^4 - \Delta_2)/S_2$ can be factored into the product of a plane and a cone with the point at the tip excluded~\cite{leinaas_theory_1977, bourdeau_when_1992}. This space is not simply-connected and its fundamental group is $\pi_1({\mathcal{Q}}_2) = B_2 \cong {\mathbb{Z}}$. The group $B_2$ therefore has abelian representations characterized by $\theta \in [0, 2\pi)$. For $\theta \neq 0$, the wave function considered as a map $\psi: {\mathcal{Q}}_2 \to \mathbb{C}$ is multi-valued. Alternatively, one can define single-valued wave functions on the universal cover $\widetilde{{\mathcal{Q}}_2}$, which is the product of a plane and a half-plane. On the universal cover $\widetilde{{\mathcal{Q}}_2}$, particle exchanges in $\pi_1({\mathcal{Q}}_2)$ are represented by translations. The Hilbert space on $\widetilde{{\mathcal{Q}}_2}$ decomposes into abelian representations of $\pi_1({\mathcal{Q}}_2)$ labeled by the quasi-momentum and $\theta$.
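For later contrast with the one-dimensional results, this familiar two-dimensional structure is easy to state explicitly. The sketch below is a minimal illustration of the abelian representations just described (not of any specific Hamiltonian): the representation $\rho_\theta(w) = e^{i\theta w}$ of $\pi_1({\mathcal{Q}}_2) = B_2 \cong {\mathbb{Z}}$ assigns a phase to each winding (exchange) number $w$, reducing to bosons at $\theta = 0$ and to fermions at $\theta = \pi$.

```python
import cmath

def rho(theta, w):
    # abelian representation of pi_1(Q_2) = B_2 = Z for two anyons in the
    # plane: a loop with exchange winding number w acts on the wave function
    # by the phase exp(i * theta * w)
    return cmath.exp(1j * theta * w)
```

The identity $\rho_\theta(w_1 + w_2) = \rho_\theta(w_1)\,\rho_\theta(w_2)$ is precisely the statement that each $\theta$ labels a one-dimensional unitary representation of ${\mathbb{Z}}$.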
Besides working on ${\mathcal{Q}}_2$ or $\widetilde{{\mathcal{Q}}_2}$, there is a third option: to work with single-valued functions on ${\mathcal{X}}_2$ and incorporate the $\theta$ exchange statistics as a gauge interaction potential for either bosons or fermions~\cite{khare_fractional_2005}. This gauge potential is singular at the particle coincidence, but because that point is removed from configuration space ${\mathcal{Q}}_2$, this singularity presents no difficulties. For $N=2$, the gauge potential is the vector potential of a delta-function `flux tube' at the two-body coincidence~\cite{wilczek_magnetic_1982, wilczek_fractional_1990}. This works for abelian braid group anyons because topological exchange statistics derive from flat connections on fiber bundles over configuration spaces. Therefore, they can be absorbed into a gauge potential on a covering space, and into a trivial gauge on the universal cover~\cite{balachandran_classical_1991}. As a result, free particles with fractional exchange statistics can be modeled with bosons (or fermions) where the statistics are `absorbed' into a statistical gauge interaction. This strategy of absorbing statistics into a gauge potential motivated Kundu~\cite{kundu_exact_1999} to define a one-dimensional anyon model (also called the anyon Lieb-Liniger model~\cite{posske_2017}). Starting from a bosonic model defined on ${\mathcal{X}}$ with singular interactions described by $\delta$, $\delta'$ and double-$\delta$ functions on the $\Delta_2$ and $\Delta_3$ coincidence loci, Kundu performs a density-dependent gauge transformation~\footnote{Although discrete models are outside the scope of this analysis, a similar technique defines the anyon-Hubbard model~\cite{keilmann_statistically_2011}. Density-dependent interactions for bosons are `transmuted' by the gauge transformation into site-dependent phase slips. See also~\cite{bonkoff_bosonic_2021} for the relation of the anyon-Hubbard model to the Kundu/Lieb-Liniger model.}.
The gauge-transformed model retains $\delta$-interactions on the $\Delta_2$ locus, on which the creation and annihilation operators no longer satisfy bosonic relations. The wave function that is constructed on ${\mathcal{X}}$ jumps by $\exp(\pm i \theta)$ across $\Delta_2$. Girardeau likewise defined a model with `phase slips' of $\exp(\pm i \theta)$ obtained from a gauge transformation of a hard-core fermion model, which does not require singular interactions~\cite{girardeau_anyon-fermion_2006}. The anyon Lieb-Liniger model has been shown to possess generalized \emph{exclusion} statistics~\cite{batchelor_one-dimensional_2006}. However, the phase slips $\exp(\pm i \theta)$ on ${\mathcal{X}}$ in either the boson-based Lieb-Liniger model or fermion-based Girardeau model are the result of a particle-label-dependent gauge transformation and do not have the same topological interpretation as fractional \emph{exchange} statistics of indistinguishable particles. For example, in the simplest case of two particles, unlike the braid anyon case described above, there is no infinite family of possible exchange phases multiplying the entire wave function and depending on a winding number. Instead, there is a single phase difference between the two different particle orderings of the same wave function defined on ${\mathcal{X}}$. The non-topological interpretation of Kundu/Lieb-Liniger anyons presented here agrees with several previous analyses that argue that fractional exchange statistics cannot be absorbed into a gauge potential for one-dimensional systems~\cite{aglietti_anyons_1996, valiente_bose-fermi-2020}. In contrast, because of their similar topological origin from excluded co-dimension $\tilde{d}=2$ coincidences, we hypothesize that a `transmutation' from statistics to dynamics is possible for particles obeying traid or fraid exchange statistics; this is an avenue for future research.
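The distinction drawn here — a single ordering-dependent phase rather than a winding-dependent family of phases — can be illustrated with a two-particle toy example. In the sketch below, the Gaussian $\psi_B$ is a hypothetical symmetric wave function chosen only for illustration (not a Lieb-Liniger eigenstate); an order-dependent gauge factor in the style of the Kundu/Girardeau constructions produces the $\exp(\pm i\theta)$ jump across $\Delta_2$ while leaving the probability density on ${\mathcal{X}}$ unchanged.

```python
import cmath, math

theta = 1.1  # hypothetical statistics parameter

def psi_boson(x1, x2):
    # hypothetical symmetric (bosonic) two-particle wave function
    return math.exp(-(x1 - x2) ** 2) * math.exp(-(x1 + x2) ** 2 / 4)

def psi_anyon(x1, x2):
    # order-dependent phase slip: the two particle orderings of the same
    # wave function on X differ by a single phase exp(+/- i*theta)
    sgn = (x1 > x2) - (x1 < x2)
    return cmath.exp(1j * theta * sgn / 2) * psi_boson(x1, x2)
```

Because only the ordering, not a winding number, enters the phase, the gauge factor cancels in $|\psi|^2$ and there is exactly one phase difference between the two orderings.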
Two other models sometimes identified as one-dimensional anyon models also have trivial topological exchange statistics: Leinaas-Myrheim anyons and Calogero-Sutherland anyons. In the Leinaas-Myrheim analysis of one-dimensional particle systems, the model is built on the exchange-trivial underlying space $|{\mathcal{Q}}|$~\cite{leinaas_theory_1977}. To make the Hamiltonian self-adjoint on the domain $|{\mathcal{Q}}|$, boundary conditions must be imposed for the wave function on $\Delta_2$. These self-adjoint extensions are characterized by an `anyon-like' parameter that effectively interpolates between Neumann boundary conditions with bosonic symmetry and Dirichlet boundary conditions with fermionic antisymmetry. The Leinaas-Myrheim model can be lifted to the Lieb-Liniger bosonic model on ${\mathcal{X}}$ with two-body delta-interactions, except the wave functions and all observables are restricted to the underlying space $|{\mathcal{Q}}|$~\cite{posske_2017}. As in the Kundu model, the interpolating statistics here again derive from the dynamics and not the topology, which is trivial. Further, we hypothesize that the Leinaas-Myrheim model on $|{\mathcal{Q}}|$ can also be lifted to ${\mathcal{X}}$ with fermionic antisymmetry or Kundu-like phase shifts using statistical mapping techniques~\cite{cheon_fermion-boson_1999, valiente_bose-fermi-2020, valiente_universal_2021, ohya_generalization_2021, ohya_discrete_2022}. Similarly, in the Calogero-Sutherland model, interactions prevent two-body coincidences and a parameter interpolates between BE and FD exclusion statistics. For indistinguishable particles, the inverse-square interaction is sufficiently singular to exclude $\Delta_2$ and therefore the configuration space is ${\mathcal{Q}}_2$.
Lifting the model to ${\mathcal{X}}_2$, one is free to define arbitrary phases to different orderings of particles and define a Calogero-Sutherland \emph{anyon} model with fractional exclusion statistics and order-dependent phase slips \cite{ha_fractional_1995, polychronakos_generalized_1999, SREERANJANI20091176}. However, since there is no exchange possible, such phases are a gauge symmetry that only has consequences for observables defined on the (non-universal) covering space ${\mathcal{X}}_2$ and as such do not constitute a statistical gauge interaction. \section{Conclusions} To summarize, for indistinguishable particles on a one-dimensional interval we find the following possibilities for the group describing topological exchange statistics: \begin{eqnarray} \pi_1^*({\mathcal{Q}}) &=& S_N \nonumber\\ \pi_1({\mathcal{Q}}_2) &=& 1 \nonumber\\ \pi_1^*({\mathcal{Q}}_3) &=& T_N \nonumber\\ \pi_1^*({\mathcal{Q}}_{2,2}) &=& F_N \nonumber\\ \pi_1^*({\mathcal{Q}}_{\{3;2,2\}}) &=& W_N.\label{interval} \end{eqnarray} The novel strand groups $T_N$, $F_N$ and $W_N$ result when three-body and certain four-body coincidences are removed from the configuration space orbifold ${\mathcal{Q}}$. Unlike the braid group, these co-dimension $\tilde{d}=2$ exclusions preserve the self-inverse property of pairwise exchanges (\ref{rel:braid}), but break the other defining relations of the symmetric group (\ref{rel:traid}) and (\ref{rel:fraid}). These groups provide the possibility of novel abelian and non-abelian anyons, but their irreducible representations have not been classified or explored. For indistinguishable particles on a circle, the underlying topology of the base space $S^1$ gets `mixed up' with the symmetric group in the same way the braid group on (for example) $S^2$ allows different topological exchange statistics than the more familiar braid group on $\mathbb{R}^2$. 
The equivalent groups to (\ref{interval}) expressed in two alternate forms are \begin{eqnarray} \pi_1^*({\mathcal{Q}}) &=& {\mathbb{Z}}^N \rtimes S_N = {\mathbb{Z}}_\zeta \ltimes \tilde{S}_N \nonumber\\ \pi_1({\mathcal{Q}}_2) &=& {\mathbb{Z}}_\zeta \nonumber\\ \pi_1^*({\mathcal{Q}}_3) &=& {\mathbb{Z}}^N \rtimes T_N = {\mathbb{Z}}_\zeta \ltimes \tilde{T}_N \nonumber\\ \pi_1^*({\mathcal{Q}}_{2,2}) &=& {\mathbb{Z}}^N \rtimes F_N = {\mathbb{Z}}_\zeta \ltimes \tilde{F}_N \nonumber\\ \pi_1^*({\mathcal{Q}}_{\{3;2,2\}}) &=& {\mathbb{Z}}^N \rtimes W_N = {\mathbb{Z}}_\zeta \ltimes \tilde{W}_N. \end{eqnarray} The affine strand groups $\tilde{T}_N$, $\tilde{F}_N$ and $\tilde{W}_N$ arise from broken relations of $\tilde{S}_N$ in the same way as the non-affine versions. As far as we know, this is the first time these hyperbolic groups have been identified in a physical system and their group structures and irreducible representations are largely unexplored. Also, except for the case of ${\mathcal{Q}}_2$, all of these groups provide the possibility for novel abelian and non-abelian anyons. In none of these cases does the group giving topological exchange statistics furnish an abelian representation with fractional exchange statistics for an arbitrary $\theta$. Our conclusion agrees with \cite{aglietti_anyons_1996, valiente_bose-fermi-2020} that if a one-dimensional model exhibits fractional exchange statistics, these are of a dynamical origin and cannot be absorbed into a consistent gauge potential. In contrast, because they have a topological origin, the alternate strand groups like $T_N$ and $\tilde{T}_N$ described above should be `transmutable' into a gauge potential, and this looks to be a promising avenue for investigating their phenomenological signatures.
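The relations whose selective breaking defines the strand groups can be checked concretely on the symmetric group itself. The plain-Python sketch below (our own illustration) verifies, for adjacent transpositions $\sigma_i$ of $S_N$, the self-inverse relation $\sigma_i^2 = e$ that the strand groups retain, together with the three-generator relation $\sigma_i\sigma_{i+1}\sigma_i = \sigma_{i+1}\sigma_i\sigma_{i+1}$ and the distant commutation $\sigma_i\sigma_j = \sigma_j\sigma_i$ for $|i-j| \geq 2$ — in the labeling above, (\ref{rel:traid}) and (\ref{rel:fraid}) — which the traid and fraid constructions break.

```python
N = 5  # illustrative number of strands

def transposition(i):
    # adjacent transposition sigma_i, swapping slots i and i+1 (tuple as map)
    s = list(range(N))
    s[i], s[i + 1] = s[i + 1], s[i]
    return tuple(s)

def compose(s, r):
    # apply r first, then s
    return tuple(s[r[i]] for i in range(N))

sigma = [transposition(i) for i in range(N - 1)]
identity = tuple(range(N))
```

In $T_N$ the three-generator relation is no longer imposed, and in $F_N$ the distant commutation is dropped, so these identities hold only after mapping back to the quotient $S_N$.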
Because the traid and fraid groups also derive from co-dimension two topological defects in configuration space, we hypothesize there could be similarities to braid group anyons, for which connections among conformal field theory, fusion rules, and quantum groups are a productive line of research; cf.~\cite{ALVAREZGAUME1990347}. A field-theoretical formulation of these alternate strand groups is certainly required for applications to many-body systems. Are these alternate strand groups feasible to realize physically? Ultracold atoms can be confined to effectively one-dimensional traps, and in principle hard-core three-body interactions can be engineered in cold-atom systems \cite{buchler_three-body_2007, daley_effective_2014, mahmud_dynamically_2014, valiente_three-body_2019}. Density-induced interactions in Floquet-driven lattice models also show promise~\cite{greschner_2015, straeter_2016}. Hard-core three-body interactions would effectively exclude $\Delta_3$ from $\mathcal{Q}$, and dynamical models with three-body interactions have shown signs of novel statistics and other thermodynamic properties \cite{paredes_pfaffian-like_2007, keilmann_statistically_2011, sowinski_criticality_2015, arcila-forero_three-body-interaction_2018}. In Ref.~\cite{harshman_anyons_2020}, we have described and depicted the lowest-energy wave functions obeying abelian traid exchange statistics for three particles in a harmonic trap, but exploring these solutions for more particles and classifying how they transform under discrete symmetries is an ongoing project. In contrast, the paired two-body interactions necessary to exclude $\Delta_{2,2}$ are non-local. A single two-body coincidence would need to prevent other two-body coincidences anywhere in the system. Non-local interactions are typically excluded as fundamental interactions, but as an effective description of long-range interactions in a many-body system they may be of interest.
More generally, because these novel strand groups arise `naturally' as degenerations of the ubiquitous symmetric group, one can imagine that they could emerge in a variety of non-particle model contexts. As a final note, building an `intrinsic' quantum theory for indistinguishable particles directly on the quotient space, without reference to a covering space of distinguishable particles, is an incomplete project that requires further mathematical and conceptual development. Operators that are self-adjoint on ${\mathcal{X}}$, such as the Hamiltonian and the single-particle position and momentum, no longer have that property when restricted to ${\mathcal{Q}}$~\cite{bourdeau_when_1992, balachandran_classical_1991, Gaveau_2012}. For relative momentum and other operators that are not symmetric under particle exchange, self-adjointness cannot be restored and their interpretation does not seem to extend unambiguously to indistinguishable particles. \acknowledgements{We would like to thank Franscesca Ark, Andr\'e Eckardt, Andreas Bock Michelsen, Zachary Morales, Philip Johnson, Maxim Olshanii, Thore Posske, and Thomas Schmidt for useful discussions.}
{ "redpajama_set_name": "RedPajamaArXiv" }
794
{"url":"https:\/\/zbmath.org\/?q=an:1009.22001","text":"# zbMATH \u2014 the first resource for mathematics\n\nMatrix groups. An introduction to Lie group theory. (English) Zbl\u00a01009.22001\nSpringer Undergraduate Mathematics Series. London: Springer. xi, 330 p. (2002).\nThis book is an introduction to Lie group theory with focus on the matrix case. Chapter one presents several standard matrix groups: $$Gl_n(K)$$, $$Sl_n(K)$$, $$O(n)$$, $$SO(n)$$, $$U(n)$$, $$SU(n)$$, the Lorentz groups and symplectic groups. Topological aspects are mentioned. Chapter two deals with the exponential of matrices and the related one-parameter subgroups. The Lie algebra and associated entities is the theme of chapter three. The next two chapters treat various topics on algebras and examples (Clifford algebras, spinor groups, quaternionic groups, automorphism groups of algebras). Chapter six details the Lorentz group. A next part introduces to abstract Lie groups and the differential geometric perspective, homogeneous spaces, the connectivity of matrix groups. A final part introduces to compact connected Lie groups, tori, semi-simple decompositions, the adjoint representation. Chapter twelve presents root systems, Weyl groups and Dynkin diagrams. Exercises are included and hints for the solution to some of them are located at the end. One finds also a bibliography (29 entries) and an index.\nThis book can be recommended to students, making Lie group theory more accessible to them.\n\n##### MSC:\n 22-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to topological groups 57-01 Introductory exposition (textbooks, tutorial papers, etc.) pertaining to manifolds and cell complexes 22E46 Semisimple Lie groups and their representations 17-01 Introductory exposition (textbooks, tutorial papers, etc.) 
pertaining to nonassociative rings and algebras 15A66 Clifford algebras, spinors","date":"2021-09-17 04:10:03","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.24876733124256134, \"perplexity\": 1408.2285011023753}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2021-39\/segments\/1631780054023.35\/warc\/CC-MAIN-20210917024943-20210917054943-00678.warc.gz\"}"}
null
null
package com.ericsson.otp.erlang; import java.io.IOException; import java.net.InetAddress; /** * Factory class used to create client- and server-side transport instances. One * static instance of class implementing this interface is created when program * loaded. Default implementation used is {@link OtpSocketTransportFactory}. * JInterface user can specify custom transport factory implementing this * interface in the following ways: * <dl> * <dt>defining static class as internal to class holding main() method</dt> * <dd>In the systems, where main class can be retrieved with * <code>System.getProperty("sun.java.command")</code>, user can define static * class <b>OtpErlangSystemTuner</b> internal to the main class, providing at * least one static method with the name <b>getOtpTransportFactory</b>, with no * parameters, returning object of class implementing * <b>OtpTransportFactory</b>, for example: * * <pre> * * public class MyMainClass { * * public static class OtpErlangSystemTuner { * ... * public static OtpTransportFactory getOtpTransportFactory() { * return new MyTransportFactory(); * } * } * * public static class MyTransportFactory implements OtpTransportFactory { * ... * } * * public static void main(String[] args) { * ... * } * } * * * </pre> * * </dd> * * <dt>specifying factory class in the system properties</dt> * <dd>User-defined transport factory class may be specified via system property * <b>OtpTransportFactory</b>, for example: * * <pre> * * package com.my.company; * * public static class MyTransportFactory implements OtpTransportFactory { * ... 
* } * </pre> * * In such case program may be run with * -DOtpTransportFactory=com.my.company.MyTransportFactory, or other way of * setting system property <i>before execution of static initializers</i> may be * used.</dd> * </dl> * * @author Dmitriy Kargapolov */ public interface OtpTransportFactory { /** * Create instance of {@link OtpTransport} * * @param addr * host name or IP address string * @param port * port number * @return new socket object * @throws IOException */ public abstract OtpTransport createTransport(String addr, int port) throws IOException; /** * Create instance of {@link OtpTransport} * * @param addr * peer address * @param port * port number * @return new socket object * @throws IOException */ public abstract OtpTransport createTransport(InetAddress addr, int port) throws IOException; /** * Create instance of {@link OtpServerTransport} * * @param port * port number to listen on * @return new socket object * @throws IOException */ public OtpServerTransport createServerTransport(int port) throws IOException; }
{ "redpajama_set_name": "RedPajamaGithub" }
6,811
Tashi Tsering () (; also written Tashi Chirring) is a Nepalese former footballer of Tibetan descent. The defender has played for the Tibet national football team in 1999 and the Nepal national football team in 2005. Club career Tsering has played for Manang Marsyangdi for several years and was noted for his strong performances for the club in the AFC President's Cup 2006. International career Tsering has appeared for Nepal in two qualifiers for the 2010 FIFA World Cup Qualifiers. He also appeared in five matches for Nepal, scoring once, at the 2006 AFC Challenge Cup, where he played in the semi-final as Nepal lost to Sri Lanka on penalties. See also Tibet national football team Tibetan culture References External links TNFA, team 1999-2000 TNFA, team 2001-2002 Shaolin Soccer Foot au Tibet, team 2008 Living people Nepalese footballers Nepal international footballers Manang Marshyangdi Club players Tibetan footballers Tibet international footballers Association football defenders Nepalese people of Tibetan descent 1973 births
{ "redpajama_set_name": "RedPajamaWikipedia" }
8,556
{"url":"https:\/\/readingfeynman.org\/tag\/probability-wave-of-a-photon\/","text":"# The shape and size of a\u00a0photon\n\nImportant post script (PS) \u2013 dated 22 December 2018: Dear readers of this post, this is one of the more popular posts of my blog but \u2212 in the meanwhile \u2212 I did move on, and quite a bit, actually! The analysis below is not entirely consistent: I got many questions on it, and I have been thinking differently as a result. The Q&A below sums up everything: I do think of the photon as a pointlike particle now, and Chapter VIII of my book sums up the photon model. At the same time, if you are really interested in this question \u2013 how should one think of a photon? \u2013 then it\u2019s probably good you also read the original post. If anything, it shows you how easy it is to get confused.\n\nHi Brian \u2013 see section III of this paper:\u00a0http:\/\/vixra.org\/pdf\/1812.0273v2.pdf\n\nFeynman\u2019s classical idea of an atomic oscillator is fine in the context of the blackbody radiation problem, but his description of the photon as a long wavetrain does not make any sense. A photon has to pack two things: (1) the energy difference between the Bohr orbitals and (2) Planck\u2019s constant h, which is the (physical) action associated with one cycle of an oscillation (so it\u2019s a force over a distance (the loop or the radius \u2013 depending on the force you\u2019re looking at) over a cycle time). See section V of the paper for how the fine-structure constant pops up here \u2013 it\u2019s, as usual, a sort of scaling constant, but this time it scales a force. In any case, the idea is that we should think of a photon as one cycle \u2013 rather than a long wavetrain. The one cycle makes sense: when you calculate field strength and force you get quite moderate values (not the kind of black-hole energy concentrations some people suggest). 
It also makes sense from a logical point of view: the wavelength is something real, and so we should think of the photon amplitude (the electric field strength) as being real as well \u2013 especially when you think of how that photon is going to interact or be absorbed into another atom.\n\nSorry for my late reply. It\u2019s been a while since I checked the comments. Please let me know if this makes sense. I\u2019ll have a look at your blog in the coming days. I am working on a new paper on the anomalous magnetic moment \u2013 which is not anomalous as all if you start to think about how things might be working in reality. After many years of study, I\u2019ve come to the conclusion that quantum mechanics is a nice way of describing things, but it doesn\u2019t help us in terms of understanding anything. When we want to understand something, we need to push the classical framework a lot further than we currently do. In any case, that\u2019s another discussion.\n\nJL\n\nOK. Now you can move on to the post itself. \ud83d\ude42 Sorry if this is confusing the reader, but it is necessary to warn him. I think of this post now as still being here to document the history of my search for a \u2018basic version of truth\u2019, as someone called it. [For an even more recent update, see Chapter 8 of my book, A Realist Interpretation of Quantum Mechanics.\n\nOriginal post:\n\nPhotons are weird.\u00a0All elementary particles are weird. As Feynman puts it, in the very first paragraph of his Lectures on Quantum Mechanics\u00a0: \u201cHistorically, the electron, for example, was thought to behave like a particle, and then it was found that in many respects it behaved like a wave. So it really behaves like neither. Now we have given up. We say: \u201cIt is like neither.\u00a0There is one lucky break, however\u2014electrons behave just like light. 
The quantum behavior of atomic objects (electrons, protons, neutrons, photons, and so on) is the same for all, they are all \u201cparticle waves,\u201d or whatever you want to call them. So what we learn about the properties of electrons\u00a0will apply also to all \u201cparticles,\u201d including photons\u00a0of light.\u201d\u00a0(Feynman\u2019s\u00a0Lectures, Vol. III, Chapter 1, Section 1)\n\nI wouldn\u2019t dare to argue with Feynman, of course, but\u2026 What?\u00a0Well\u2026\u00a0Photons\u00a0are like electrons, and then they are not. Obviously not, I\u2019d say.\u00a0For starters, photons do not have mass or charge, and they are also bosons, i.e. \u2018force-carriers\u2019 (as opposed to matter-particles), and so they obey\u00a0very different quantum-mechanical rules, which are referred to as Bose-Einstein statistics. I\u2019ve written about that in other post (see, for example, my post on Bose-Einstein and Fermi-Dirac statistics), so I won\u2019t do that again here. It\u2019s probably sufficient to remind the reader that these rules imply that the so-called Pauli exclusion principle does not apply to them: bosons like to crowd together, thereby occupying the same quantum state\u2014unlike their counterparts, the so-called fermions or matter-particles: quarks (which make up protons and neutrons) and leptons (including electrons and neutrinos), which can\u2019t do that. 
Two electrons, for example, can only sit on top of each other (or be\u00a0very\u00a0near to each other, I should say) if their spins are opposite (so that makes their quantum state different), and there\u2019s no place whatsoever to add a third one because there are only two possible \u2018directions\u2019 for the spin: up or down.\n\nFrom all that I\u2019ve been writing so far, I am sure you have some kind of picture of matter-particles now, and notably of the electron: it\u2019s not really\u00a0point-like, because it has a so-called scattering cross-section (I\u2019ll say more about this later), and we can find it somewhere\u00a0taking into account the Uncertainty Principle, with the probability of finding it at point x at time t given by the absolute square of a so-called \u2018wave function\u2019 \u03a8(x, t).\n\nBut what about the photon? Unlike quarks or electrons, they are really\u00a0point-like, aren\u2019t they? And can we associate them with a psi\u00a0function too? I mean, they have a wavelength, obviously, which is given by the Planck-Einstein energy-frequency relation: E = h\u03bd, with h the Planck constant and \u03bd the frequency of the associated \u2018light\u2019. But an electromagnetic wave is not like a \u2018probability wave\u2019. So\u2026 Do they have a\u00a0de Broglie\u00a0wavelength as well?\n\nBefore answering that question, let me present that \u2018picture\u2019 of the electron once again.\n\nThe\u00a0wave function for electrons\n\nThe electron \u2018picture\u2019 can be represented in a number of ways but one of the more scientifically correct ones\u00a0\u2013 whatever that means \u2013 is that of a spatially confined wave function representing a complex quantity referred to as the probability amplitude. 
The animation below (which I took from Wikipedia) visualizes such wave functions.\u00a0As mentioned above, the wave function is usually represented by the Greek letter psi\u00a0(\u03a8), and it is often referred to as a \u2018probability wave\u2019 \u2013 by bloggers like me, that is \ud83d\ude42 \u2013 but that term is quite misleading. Why?\u00a0You surely know that by now: the wave\u00a0function represents a probability amplitude, not a probability. [So, to be correct, we should say a \u2018probability amplitude wave\u2019, or an \u2018amplitude wave\u2019, but so these terms are obviously too long and so they\u2019ve been dropped and everybody talks about \u2018the\u2019 wave function now, although that\u2019s confusing too, because an electromagnetic wave is a \u2018wave function\u2019 too, but describing \u2018real\u2019 amplitudes, not some weird complex numbers referred to as \u2018probability amplitudes\u2019.]\n\nHaving said what I\u2019ve said above, probability amplitude and probability are obviously related: if\u00a0we take the (absolute) square of the psi function \u2013 i.e. if we take the (absolute) square of all these amplitudes \u03a8(x, t) \u2013 then we get the actual\u00a0probability\u00a0of finding that electron at point x at time t. So then we get the so-called probability density\u00a0functions, which are shown on the right-hand side of the illustration above. [As for the term \u2018absolute\u2019 square, the absolute\u00a0square is the\u00a0squared norm of the associated \u2018vector\u2019. Indeed, you should note that the square of a complex number can be negative as evidenced, for example, by the definition of\u00a0i: i= \u20131. In fact, if there\u2019s only an imaginary part, then its square is always\u00a0negative. 
Probabilities are real numbers between 0 and 1, and so they can't be negative, and so that's why we always talk about the absolute square, rather than the square as such.]

Below, I've inserted another image, which gives a static picture (i.e. one that is not varying in time) of the wave function of a real-life electron. To be precise: it's the wave function for an electron on the 5d orbital of a hydrogen atom. You can see it's much more complicated than those easy things above. However, the idea behind it is the same: we have a complex-valued function varying in space and in time. I took it from Wikipedia and so I'll just copy the explanation here: "The solid body shows the places where the electron's probability density is above a certain value (0.02), as calculated from the probability amplitude." What about these colors? Well… The image uses the so-called HSL color system to represent complex numbers: each complex number is represented by a unique color, with a different hue (H), saturation (S) and lightness (L). Just google if you want to know how that works exactly.

OK. That should be clear enough. I wanted to talk about photons here. So let's go for it. Well… Hmm… I realize I need to talk about some more 'basics' first. Sorry for that.

The Uncertainty Principle revisited (1)

The wave function is usually given as a function in space and time: Ψ = Ψ(x, t). However, I should also remind you that we have a similar function in the 'momentum space': if ψ is a psi function, then the function in the momentum space is a phi function, and we'll write it as Φ = Φ(p, t). [As for the notation, x and p are written with capital letters here and, hence, represent (three-dimensional) vectors.
Likewise, we use a capital letter for psi and phi so we don't confuse them with, for example, the lower-case φ (phi) representing the phase of a wave function.]

The position-space and momentum-space wave functions Ψ and Φ are related through the Uncertainty Principle. To be precise: they are Fourier transforms of each other. Huh? Don't be put off by that statement. In fact, I shouldn't have mentioned it, but then it's how one can actually prove or derive the Uncertainty Principle from… Well… From 'first principles', let's say, instead of just jotting it down as some God-given rule. Indeed, as Feynman puts it: "The Uncertainty Principle should be seen in its historical context. If you get rid of all of the old-fashioned ideas and instead use the ideas that I'm explaining in these lectures—adding arrows for all the ways an event can happen—there is no need for an uncertainty principle!" However, I must assume you're, just like me, not quite used to the new ideas as yet, and so let me just jot down the Uncertainty Principle once again, as some God-given rule indeed :-):

σx·σp ≥ ℏ/2

This is the so-called Kennard formulation of the Principle: it measures the uncertainty about the exact position (x) as well as the momentum (p), in terms of the standard deviation (so that's the σ (sigma) symbol) around the mean. To be precise, the assumption is that we cannot know the real x and p: we can only find some probability distribution for x and p, which is usually some nice "bell curve" in the textbooks. While the Kennard formulation is the most precise (and exact) formulation of the Uncertainty Principle (or uncertainty relation, I should say), you'll often find 'other' formulations.
These 'other' formulations usually write Δx and Δp instead of σx and σp, with the Δ symbol indicating some 'spread' or a similar concept—surely do not think of Δ as a differential or so! [Sorry for assuming you don't know this (I know you do!) but I just want to make sure here!] Also, these 'other' formulations will usually (a) not mention the 1/2 factor, (b) substitute h for ℏ (ℏ = h/2π, as you know, so ℏ is preferred when we're talking things like angular frequency or other stuff involving the unit circle), or (c) put an equality (=) sign in, instead of an inequality sign (≥). Niels Bohr's early formulation of the Uncertainty Principle actually does all of that:

Δx·Δp = h

So… Well… That's a bit sloppy, isn't it? Maybe. In Feynman's Lectures, you'll find an oft-quoted 'application' of the Uncertainty Principle leading to a pretty accurate calculation of the typical size of an atom (the so-called Bohr radius), which Feynman starts with an equally sloppy statement of the Uncertainty Principle, so he notes: "We needn't trust our answer to within factors like 2, π etcetera." Frankly, I used to think that's ugly and, hence, doubted the 'seriousness' of such kind of calculations. Now I know it doesn't really matter, as the essence of the relationship is clearly not a factor of 2, π or 2π. The essence is the uncertainty itself: it's very tiny (and multiplying it with 2, π or 2π doesn't make it much bigger) but so it's there.

In this regard, I need to remind you of how tiny that physical constant ℏ actually is: about 6.58×10⁻¹⁶ eV·s.
So that's a zero followed by a decimal point and fifteen zeroes: only then do we get the first significant digits (6582…). And if 10⁻¹⁶ doesn't look tiny enough for you, then just think about how tiny the electronvolt unit is: it's the amount of (potential) energy gained (or lost) by an electron as it moves across a potential difference of one volt (which, believe me, is nothing much really): if we'd express ℏ in joule, then we'd have to add nineteen more zeroes, because 1 eV = 1.6×10⁻¹⁹ J. As for such phenomenally small numbers, I'll just repeat what I've said many times before: we just cannot imagine such small numbers. Indeed, our mind can sort of intuitively deal with addition (and, hence, subtraction), and with multiplication and division (but to some extent only), but our mind is not made to understand non-linear stuff, such as exponentials indeed. If you don't believe me, think of the Richter scale: can you explain the difference between a 4.0 and a 5.0 earthquake? […] If the answer to that question took you more than a second… Well… I am right. 🙂 [The Richter scale is based on the base-10 exponential function: a 5.0 earthquake has a shaking amplitude that is 10 times that of an earthquake that registered 4.0, and because energy is proportional to the square of the amplitude, that corresponds to an energy release that is 31.6 times that of the lesser earthquake.]

A digression on units

Having said what I said above, I am well aware of the fact that saying that we cannot imagine this or that is what most people say. I am also aware of the fact that they usually say that to avoid having to explain something. So let me try to do something more worthwhile here.

1. First, I should note that ℏ is so small because the second, as a unit of time, is so incredibly large. All is relative, of course.
🙂 For sure, we should express time in a more natural unit at the atomic or sub-atomic scale, like the time that's needed for light to travel one meter. Let's do it. Let's express time in a unit that I shall call a 'meter'. Of course, it's not an actual meter (because it doesn't measure any distance), but I don't want to invent a new word and surely not any new symbol here. Hence, I'll just put apostrophes before and after: so I'll write 'meter' or 'm'. One second equals 3×10⁸ such light-travel 'meters'. When adopting the 'meter' as a unit of time, we therefore get a value for 'ℏ' that is equal to (6.6×10⁻¹⁶ eV·s)(3×10⁸ 'm'/s) ≈ 2×10⁻⁷ eV·'m'. Now, 2×10⁻⁷ is a number that is still too tiny to imagine. But then our 'meter' is still a rather huge unit at the atomic scale: we should take the 'millimicron', aka the 'nanometer' (1 nm = 1×10⁻⁹ m), or – even better because more appropriate – the 'angstrom': 1 Å = 0.1 nm = 1×10⁻¹⁰ m. Indeed, the smallest atom (hydrogen) has a radius of 0.25 Å, while larger atoms will have a radius of about 1 or more Å. Now that should work, shouldn't it? You're right: we get a value for 'ℏ' equal to (2×10⁻⁷ eV·'m')(1×10¹⁰ 'Å'/'m') ≈ 2,000 eV·'Å', or some 200 eV·'nm'. So… What? Well… If anything, it shows ℏ is not a small unit at the atomic or sub-atomic level!
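A quick cross-check of that conversion in code, using slightly more precise values. The product ℏ·c ≈ 1973 eV·Å is a well-known constant, so it serves as a sanity check:

```python
# Cross-checking the unit conversion above. Multiplying hbar (in eV·s) by c
# converts the seconds into light-travel 'meters' of time.
hbar_eV_s = 6.582e-16     # eV·s
c = 2.998e8               # m/s, so 1 second = 2.998e8 'meters' of light-travel time

hbar_eV_m = hbar_eV_s * c          # ~2e-7 eV·'m'
hbar_eV_A = hbar_eV_m * 1.0e10     # ~1973 eV·'A' -- the well-known value of hbar*c
hbar_eV_nm = hbar_eV_m * 1.0e9     # ~197 eV·'nm'

assert 1.9e-7 < hbar_eV_m < 2.1e-7
assert 1950 < hbar_eV_A < 2000
assert 195 < hbar_eV_nm < 200
```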
Hence, we actually can start imagining how things work at the atomic level when using more adequate units.

[Now, just to test your knowledge, let me ask you: what's the wavelength of visible light in angstrom? […] Well? […] Let me tell you: 400 to 700 nm is 4,000 to 7,000 Å. In other words, the wavelength of visible light is quite sizable as compared to the size of atoms or electron orbits!]

2. Secondly, let's do a quick dimensional analysis of that Δx·Δp = h relation and/or its more accurate expression σx·σp ≥ ℏ/2.

A position (and its uncertainty or standard deviation) is expressed in distance units, while momentum… Euh… Well… What? […] Momentum is mass times velocity, so it's kg·m/s. Hence, the dimension of the product on the left-hand side of the inequality is m·kg·m/s = kg·m²/s. So what about this eV·s dimension on the right-hand side? Well… The electronvolt is a unit of energy, and so we can convert it to joules. Now, a joule is a newton-meter (N·m), which is the unit for both energy and work: it's the work done when applying a force of one newton over a distance of one meter. So we now have N·m·s for ℏ, which is nice, because Planck's constant (h or ℏ—whatever: the choice for one of the two depends on the variables we're looking at) is the quantum of action indeed. It's a Wirkung, as they say in German, so its dimension combines both energy as well as time.

To put it simply, it's a bit like power, which is what we men are interested in when looking at a car or motorbike engine. 🙂 Power is the energy spent or delivered per second, so its dimension is J/s, not J·s. However, your mind can see the similarity in thinking here.
Energy is a nice concept, be it potential (think of a water bucket above your head) or kinetic (think of a punch in a bar fight), but it makes more sense to us when adding the dimension of time (emptying a bucket of water over your head is different than walking in the rain, and the impact of a punch depends on the power with which it is being delivered). In fact, the best way to understand the dimension of Planck's constant is probably to also write the joule in 'base units'. Again, one joule is the amount of energy we need to move an object over a distance of one meter against a force of one newton. So one J·s is one N·m·s: (1) a force of one newton acting over a distance of (2) one meter over a time period of (3) one second.

I hope that gives you a better idea of what 'action' really is in physics. […] In any case, we haven't answered the question. How do we relate the two sides? Simple: a newton is an oft-used SI unit, but it's not an SI base unit, and so we should deconstruct it even more (i.e. write it in SI base units). If we do that, we get 1 N = 1 kg·m/s²: one newton is the force needed to give a mass of 1 kg an acceleration of 1 m/s per second. So just substitute and you'll see the dimension on the right-hand side is kg·(m/s²)·m·s = kg·m²/s, so it comes out alright.

Why this digression on units? Not sure. Perhaps just to remind you that the Uncertainty Principle can also be expressed in terms of energy and time:

ΔE·Δt = h

Here there's no confusion in regard to the units on both sides: we don't need to convert to SI base units to see that they're the same: [ΔE][Δt] = J·s.

The Uncertainty Principle revisited (2)

The ΔE·Δt = h expression is not so often used as an expression of the Uncertainty Principle.
I am not sure why, and I don't think it's a good thing. Energy and time are also complementary variables in quantum mechanics, so it's just like position and momentum indeed. In fact, I like the energy-time expression somewhat more than the position-momentum expression because it does not create any confusion in regard to the units on both sides: it's just joules (or electronvolts) and seconds on both sides of the equation. So what?

Frankly, I don't want to digress too much here (this post is going to become awfully long) but, personally, I found it hard, for quite a while, to relate the two expressions of the very same uncertainty 'principle' and, hence, let me show you how the two express the same thing really, especially because you may or may not know that there are even more pairs of complementary variables in quantum mechanics. So, I don't know if the following will help you a lot, but it helped me to note that:

1. The energy and momentum of a particle are intimately related through the (relativistic) energy-momentum relationship. Now, that formula, E² = p²c² + m₀²c⁴, which links energy, momentum and intrinsic mass (aka rest mass), looks quite monstrous at first. Hence, you may prefer a simpler form: pc = Ev/c. It's the same really, as both are based on the relativistic mass-energy equivalence: E = mc² or, the way I prefer to write it: m = E/c². [Both expressions are the same, obviously, but we can 'read' them differently: m = E/c² expresses the idea that energy has an equivalent mass, defined as inertia, and so it makes energy the primordial concept, rather than mass.] Of course, you should note that m is the total mass of the object here, including both (a) its rest mass as well as (b) the equivalent mass it gets from moving at the speed v.
So m, not m₀, is the concept of mass used to define p, and note how easy it is to demonstrate the equivalence of both formulas: pc = Ev/c ⇔ mvc = Ev/c ⇔ E = mc². In any case, the bottom line is: don't think of the energy and momentum of a particle as two separate things; they are two aspects of the same 'reality', involving mass (a measure of inertia, as you know) and velocity (as measured in a particular (so-called inertial) reference frame).

2. Time and space are intimately related through the universal constant c, i.e. the speed of light, as evidenced by the fact that we will often want to express distance not in meter but in light-seconds (i.e. the distance that light travels (in a vacuum) in one second) or, vice versa, express time in meter (i.e. the time that light needs to travel a distance of one meter).

These relationships are interconnected, and the following diagram shows how.

The easiest way to remember it all is to apply the Uncertainty Principle, in both its ΔE·Δt = h as well as its Δp·Δx = h expressions, to a photon. A photon has no rest mass and its velocity v is, obviously, c. So the energy-momentum relationship is a very simple one: p = E/c. We then get both expressions of the Uncertainty Principle by simply substituting E for p, or vice versa, and remembering that time and position (or distance) are related in exactly the same way: the constant of proportionality is the very same. It's c. So we can write: Δx = Δt·c and Δt = Δx/c. If you're confused, think about it in very practical terms: because the speed of light is what it is, an uncertainty of a second in time amounts, roughly, to an uncertainty in position of some 300,000 km (c ≈ 3×10⁸ m/s). Conversely, an uncertainty of some 300,000 km in the position amounts to an uncertainty in time of one second.
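The equivalence claimed in point 1 is easy to verify numerically. A small sketch (electron values rounded; v = 0.6c is an arbitrary choice, not anything the text prescribes):

```python
import math

c = 2.998e8           # speed of light, m/s
m0 = 9.109e-31        # electron rest mass, kg (an illustrative particle)
v = 0.6 * c           # an arbitrary relativistic speed

gamma = 1.0 / math.sqrt(1.0 - (v / c)**2)
m = gamma * m0        # total (relativistic) mass
E = m * c**2          # total energy
p = m * v             # momentum

# The 'monstrous' relation E^2 = p^2*c^2 + m0^2*c^4 ...
lhs = E**2
rhs = (p * c)**2 + (m0 * c**2)**2
assert abs(lhs - rhs) / lhs < 1e-9

# ... and the 'simpler form' p*c = E*v/c are both satisfied.
assert abs(p * c - E * v / c) / (p * c) < 1e-9
```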
That's what the 1-2-3 in the diagram above is all about: please check if you 'get' it, because that's 'essential' indeed.

Back to 'probability waves'

Matter-particles are not the same, but we do have the same relations, including that 'energy-momentum duality'. The formulas are just somewhat more complicated because they involve mass and velocity (i.e. a velocity less than that of light). For matter-particles, we can see that energy-momentum duality not only in the relationships expressed above (notably the relativistic energy-momentum relation), but also in the (in)famous de Broglie relation, which associates some 'frequency' (f) with the energy (E) of a particle or, what amounts to the same, some 'wavelength' (λ) with its momentum (p):

λ = h/p and f = E/h

These two complementary equations give a 'wavelength' (λ) and/or a 'frequency' (f) of a de Broglie wave, or a 'matter wave' as it's sometimes referred to. I am using, once again, apostrophes because the de Broglie wavelength and frequency are a different concept—different from the wavelength or frequency of light, or of any other 'real' wave (like water or sound waves, for example). To illustrate the differences, let's start with a very simple question: what's the velocity of a de Broglie wave? Well… […] So? You thought you knew, didn't you?

1. The mathematically (and physically) correct answer involves distinguishing the group and phase velocity of a wave.
2. The 'easy' answer is: the de Broglie wave of a particle moves with the particle and, hence, its velocity is, obviously, the speed of the particle which, for electrons, is usually non-relativistic (i.e. rather slow as compared to the speed of light).

To be clear on this, the velocity of a de Broglie wave is not the speed of light. So a de Broglie wave is not like an electromagnetic wave at all. They have nothing in common really, except for the fact that we refer to both of them as 'waves'. 🙂

The second thing to note is that, when we're talking about the 'frequency' or 'wavelength' of 'matter waves' (i.e. de Broglie waves), we're talking the frequency and wavelength of a wave with two components: it's a complex-valued wave function, indeed, and so we get a real and an imaginary part when we're 'feeding' the function with some values for x and t.

Thirdly and, perhaps, most importantly, we should always remember the Uncertainty Principle when looking at the de Broglie relation. The Uncertainty Principle implies that we can actually not assign any precise wavelength (or, what amounts to the same, a precise frequency) to a de Broglie wave: if there is a spread in p (and, hence, in E), then there will be a spread in λ (and in f). In fact, I tend to think that it would be better to write the de Broglie relation as an 'uncertainty relation' in its own right:

Δλ = Δ(h/p) = h/Δp and Δf = Δ(E/h) = ΔE/h

Besides underscoring the fact that we have other 'pairs' of complementary variables, this 'version' of the de Broglie equation would also remind us continually of the fact that a 'regular' wave with an exact frequency and/or an exact wavelength (so a Δλ and/or a Δf equal to zero) would not give us any information about the momentum and/or the energy.
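To put a number on the λ = h/p relation before moving on: a sketch for a non-relativistic electron (the speed chosen corresponds roughly to acceleration through ~100 V; treat it as an illustrative value):

```python
h = 6.626e-34         # Planck's constant, J·s
m_e = 9.109e-31       # electron mass, kg

v = 5.9e6             # m/s: ~2% of c, so the non-relativistic p = m*v is fine
p = m_e * v           # momentum
lam = h / p           # de Broglie wavelength, lambda = h/p

# The result is on the order of an angstrom, i.e. comparable to atomic
# sizes -- which is why electrons diffract off crystal lattices.
assert 1e-10 < lam < 2e-10
```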
Indeed, as Δλ and/or Δf go to zero (Δλ → 0 and/or Δf → 0), then Δp and ΔE must go to infinity (Δp → ∞ and ΔE → ∞). That's just the math involved in such expressions. 🙂

Jokes aside, I'll admit I used to have a lot of trouble understanding this, so I'll just quote the expert teacher (Feynman) on this to make sure you don't get me wrong here:

"The amplitude to find a particle at a place can, in some circumstances, vary in space and time, let us say in one dimension, in this manner: Ψ = A·e^i(ωt − kx), where ω is the frequency, which is related to the classical idea of the energy through E = ℏω, and k is the wave number, which is related to the momentum through p = ℏk. [These are equivalent formulations of the de Broglie relations using the angular frequency and the wave number instead of wavelength and frequency.] We would say the particle had a definite momentum p if the wave number were exactly k, that is, a perfect wave which goes on with the same amplitude everywhere. The Ψ = A·e^i(ωt − kx) equation [then] gives the [complex-valued probability] amplitude, and if we take the absolute square, we get the relative probability for finding the particle as a function of position and time. This is a constant, which means that the probability to find [this] particle is the same anywhere." (Feynman's Lectures, I-48-5)

You may say or think: What's the problem here really? Well… If the probability to find a particle is the same anywhere, then the particle can be anywhere and, for all practical purposes, that amounts to saying it's nowhere really. Hence, that wave function doesn't serve the purpose.
In short, that nice Ψ = A·e^i(ωt − kx) function is completely useless in terms of representing an electron, or any other actual particle moving through space. So what to do?

The Wikipedia article on the Uncertainty Principle has this wonderful animation that shows how we can superimpose several waves, one on top of each other, to form a wave packet. Let me copy it below:

So that's the wave we want, indeed: a wave packet that travels through space but which is, at the same time, limited in space. Of course, you should note, once again, that it shows only one part of the complex-valued probability amplitude: just visualize the other part (imaginary if the wave above would happen to represent the real part, and vice versa if the wave would happen to represent the imaginary part of the probability amplitude). The animation basically illustrates a mathematical operation. To be precise, it involves a Fourier analysis or decomposition: it separates a wave packet into a finite or (potentially) infinite number of component waves. Indeed, note how, in the illustration above, the frequency of the component waves gradually increases (or, what amounts to the same, how the wavelength gets smaller and smaller) and how, with every wave we 'add' to the packet, it becomes increasingly localized. Now, you can easily see that the 'uncertainty' or 'spread' in the wavelength here (which we'll denote by Δλ) is, quite simply, the difference between the wavelength of the 'one-cycle wave', which is equal to the space the whole wave packet occupies (which we'll denote by Δx), and the wavelength of the 'highest-frequency wave'. For all practical purposes, they are about the same, so we can write: Δx ≈ Δλ.
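That superposition idea can be mimicked in a few lines of code: add up plane waves with wave numbers spread over an interval around some k₀ and measure how localized the resulting packet is. This is only a rough numerical illustration of the animation, not a rigorous proof:

```python
import numpy as np

# Superpose plane-wave components with wave numbers spread around k0.
# The wider the spread in k (i.e. in momentum), the more localized the packet.
x = np.linspace(-60.0, 60.0, 6001)
dx = x[1] - x[0]
k0 = 2.0

def packet_width(delta_k, n_waves=201):
    """Standard deviation of position for a packet built from plane waves
    with wave numbers in [k0 - delta_k, k0 + delta_k]."""
    ks = np.linspace(k0 - delta_k, k0 + delta_k, n_waves)
    psi = sum(np.exp(1j * k * x) for k in ks)      # add up the component waves
    prob = np.abs(psi)**2
    prob /= prob.sum() * dx                        # normalize the density
    mean = (x * prob).sum() * dx
    return np.sqrt(((x - mean)**2 * prob).sum() * dx)

# Doubling the spread in wave number shrinks the packet's spatial extent.
assert packet_width(1.0) < packet_width(0.5)
```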
Using Bohr's formulation of the Uncertainty Principle, we can see the expression I used above (Δλ = h/Δp) makes sense: Δx = Δλ = h/Δp, so Δλ·Δp = h.

[Just to be 100% clear on terminology: a Fourier decomposition is not the same as the Fourier transform I mentioned when talking about the relation between position and momentum in the Kennard formulation of the Uncertainty Principle, although these two mathematical concepts obviously have a few things in common.]

The wave train revisited

All of what I've said above is the 'correct' interpretation of the Uncertainty Principle and the de Broglie equation. To be frank, it took me quite a while to 'get' that—and, as you can see, it also took me quite a while to get 'here', of course. 🙂

In fact, I was confused, for quite a few years actually, because I never quite understood why there had to be a spread in the wavelength of a wave train. Indeed, we can all easily imagine a localized wave train with a fixed frequency and a fixed wavelength, like the one below, which I'll re-use later. I've made this wave train myself: it's a standard sine and cosine function multiplied with an 'envelope' function generating the envelope. As you can see, it's a complex-valued thing indeed: the blue curve is the real part, and the red curve is the imaginary part.

You can easily make a graph like this yourself. [Just use one of those online graph tools.] This thing is localized in space and, as mentioned above, it has a fixed frequency and wavelength. So all those enigmatic statements you'll find in serious or less serious books (i.e. textbooks or popular accounts) on quantum mechanics saying that "we cannot define a unique wavelength for a short wave train" and/or saying that "there is an indefiniteness in the wave number that is related to the finite length of the train, and thus there is an indefiniteness in the momentum" (I am quoting Feynman here, so not one of the lesser gods) are – with all due respect for these authors, especially Feynman – just wrong. I've made another 'short wave train' below, but this time it depicts the real part of a (possible) wave function only.

Hmm… Now that one has a weird shape, you'll say. It doesn't look like a 'matter wave'! Well… You're right. Perhaps. [I'll challenge you in a moment.] The shape of the function above is consistent, though, with the view of a photon as a transient electromagnetic oscillation. Let me come straight to the point by stating the basics: the view of a photon in physics is that photons are emitted by atomic oscillators. As an electron jumps from one energy level to another, it seems to oscillate back and forth until it's in equilibrium again, thereby emitting an electromagnetic wave train that looks like a transient.

Huh? What's a transient? It's an oscillation like the one above: its amplitude and, hence, its energy, gets smaller and smaller as time goes by. To be precise, its energy level has the same shape as the envelope curve below: E = E₀·e^(–t/τ). In this expression, we have τ as the so-called decay time, and one can show it's the inverse of the so-called decay rate: τ = 1/γ, with γ·E = –dE/dt.
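The decay-time/decay-rate relation is easy to check numerically. A minimal sketch, with arbitrary values for E₀ and τ:

```python
import math

# A transient: energy decaying as E(t) = E0 * exp(-t/tau).
E0, tau = 1.0, 2.0          # arbitrary illustrative values
gamma = 1.0 / tau           # the decay rate is the inverse of the decay time

def E(t):
    return E0 * math.exp(-t / tau)

# Check numerically that gamma * E = -dE/dt, i.e. that the energy is lost
# at a rate proportional to its current value.
t, h = 1.3, 1e-6
dE_dt = (E(t + h) - E(t - h)) / (2 * h)    # central-difference derivative
assert abs(-dE_dt - gamma * E(t)) < 1e-6

# After one decay time, the energy is down to 1/e of its initial value.
assert abs(E(tau) - E0 / math.e) < 1e-12
```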
In case you wonder, check it out on Wikipedia: it's one of the many applications of the natural exponential function—we're talking a so-called exponential decay here indeed, which involves a quantity (in this case, the amplitude and/or the energy) decreasing at a rate that is proportional to its current value, with the coefficient of proportionality being γ. So we write that as γ·E = –dE/dt in mathematical notation. 🙂

I need to move on. All of what I wrote above was 'plain physics', but what I really want to explore in this post is a crazy hypothesis. Could these wave trains above – I mean the wave trains with the fixed frequency and wavelength – possibly represent a de Broglie wave for a photon?

You'll say: of course not! But, let's be honest, you'd have some trouble explaining why. The best answer you could probably come up with is: because no physics textbook says something like that. You're right. It's a crazy hypothesis because, when you ask a physicist (believe it or not, but I actually went through the trouble of asking two nuclear scientists), they'll tell you that photons are not to be associated with de Broglie waves. [You'll say: why didn't you try looking for an answer on the Internet? I actually did but – unlike what I am used to – I got very confusing answers on this one, so I gave up trying to find some definite answer on this question on the Internet.]

However, these negative answers don't discourage me from trying to do some more freewheeling. Before discussing whether or not the idea of a de Broglie wave for a photon makes sense, let's think about mathematical constraints. I googled a bit but I only see one, actually: the amplitudes of a de Broglie wave are subject to a normalization condition.
Indeed, when everything is said and done, all probabilities must take a value between 0 and 1, and they must also all add up to exactly 1. So that's a so-called normalization condition that obviously imposes some constraints on the (complex-valued) probability amplitudes of our wave function.

But let's get back to the photon. Let me remind you of what happens when a photon is being emitted by inserting the two diagrams below, which give the energy levels of the atomic orbitals of electrons.

So an electron absorbs or emits a photon when it goes from one energy level to another, i.e. it absorbs or emits radiation. And, of course, you will also remember that the frequency of the absorbed or emitted light is related to those energy levels. More specifically, the frequency of the light emitted in a transition from, let's say, energy level E₃ to E₁ will be written as ν₃₁ = (E₃ – E₁)/h. This frequency will be one of the so-called characteristic frequencies of the atom and will define a specific so-called spectral emission line.

Now, from a mathematical point of view, there's no difference between that ν₃₁ = (E₃ – E₁)/h equation and the de Broglie equation, f = E/h, which assigns a de Broglie wave to a particle. But, of course, from all that I wrote above, it's obvious that, while these two formulas are the same from a math point of view, they represent very different things. Again, let me repeat what I said above: a de Broglie wave is a matter wave and, as such, it has nothing to do with an electromagnetic wave.

Let me be even more explicit.
A de Broglie wave is not a 'real' wave, in a sense (but, of course, that's a very unscientific statement to make); it's a psi function, so it represents these weird mathematical quantities–complex probability amplitudes–which allow us to calculate the probability of finding the particle at position x or, if it's a wave function for the momentum-space, to find a value p for its momentum. In contrast, a photon that's emitted or absorbed represents a 'real' disturbance of the electromagnetic field propagating through space. Hence, that frequency ν is something very different from f, which is why we use another symbol for it (ν is the Greek letter nu, not to be confused with the v symbol we use for velocity). [Of course, you may wonder how 'real' or 'unreal' an electromagnetic field is but, in the context of this discussion, let me assure you we should look at it as something that's very real.]

That being said, we also know light is emitted in discrete energy packets: in fact, that's how photons were defined originally, first by Planck and then by Einstein. Now, when an electron falls from one energy level in an atom to another (lower) energy level, it emits one – and only one – photon with that particular wavelength and energy. The question then is: how should we picture that photon? Does it also have some more or less defined position in space, and some momentum? The answer is definitely yes, on both accounts:

1. Subject to the constraints of the Uncertainty Principle, we know, more or less indeed, when a photon leaves a source and when it hits some detector.
[And, yes, due to the 'Uncertainty Principle' or, as Feynman puts it, the rules for adding arrows, it may not travel in a straight line and/or at the speed of light – but that's a discussion that, believe it or not, is not directly relevant here. If you want to know more about it, check one or more of my posts on it.]
2. We also know light has a very definite momentum, which I've calculated elsewhere and so I'll just note the result: p = E/c. It's a 'pushing momentum' referred to as radiation pressure, and it's in the direction of travel indeed.

In short, it does make sense, in my humble opinion that is, to associate some wave function with the photon, and then I mean a de Broglie wave. Just think about it yourself. You're right to say that a de Broglie wave is a 'matter wave', and photons aren't matter but, having said that, photons do behave like electrons, don't they? There's diffraction (when you send a photon through one slit) and interference (when photons go through two slits, altogether or – amazingly – one by one), so it's the same weirdness as electrons indeed, and so why wouldn't we associate some kind of wave function with them?

You can react in one of three ways here. The first reaction is: "Well… I don't know. You tell me." Well… That's what I am trying to do here. 🙂

The second reaction may be somewhat more to the point. For example, those who've read Feynman's Strange Theory of Light and Matter could say: "Of course, why not? That's what we do when we associate a photon going from point A to B with an amplitude P(A to B), isn't it?"

Well… No. I am talking about something else here.
Not some amplitude associated with a path in spacetime, but a wave function giving an approximate position of the photon.

The third reaction may be the same as the reaction of those two nuclear scientists I asked: "No. It doesn't make sense. We do not associate photons with a de Broglie wave." But so they didn't tell me why because… Well… They didn't have the time to entertain a guy like me and so I didn't dare to push the question and continued to explore it more in detail myself.

So I've done that, and I thought of one reason why the question, perhaps, may not make all that much sense: a photon travels at the speed of light; therefore, it has no length. Hence, doing what I am doing below, and that's to associate the electromagnetic transient with a de Broglie wave, might not make sense.

Maybe. I'll let you judge. Before developing the point, I'll raise two objections to the 'objection' raised above (i.e. the statement that a photon has no length). First, if we're looking at the photon as some particle, it will obviously have no length. However, an electromagnetic transient is just what it is: an electromagnetic transient. I've seen nothing that makes me think its length should be zero. In fact, if that would be the case, the concept of an electromagnetic wave itself would not make sense, as its 'length' would always be zero. Second, even if – somehow – the length of the electromagnetic transient would be reduced to zero because of its speed, we can still imagine that we're looking at the emission of an electromagnetic pulse (i.e. a photon) using the reference frame of the photon, so that we're traveling at speed c, 'riding' with the photon, so to say, as it's being emitted.
Then we would 'see' the electromagnetic transient as it's being radiated into space, wouldn't we?

Perhaps. I actually don't know. That's why I wrote this post and hope someone will react to it. I really don't know, so I thought it would be nice to just freewheel a bit on this question. So be warned: nothing of what I write below has been researched really, so critical comments and corrections from actual specialists are more than welcome.

The shape of a photon wave

As mentioned above, the answer in regard to the definition of a photon's position and momentum is, obviously, unambiguous. Perhaps we have to stretch whatever we understand of Einstein's (special) relativity theory, but we should be able to draw some conclusions, I feel.

Let me say one thing more about the momentum here. As said, I'll refer you to one of my posts for the detail but, all you should know here is that the momentum of light is related to the magnetic field vector, which we usually never mention when discussing light because it's so tiny as compared to the electric field vector in our inertial frame of reference. Indeed, the magnitude of the magnetic field vector is equal to the magnitude of the electric field vector divided by c = 3×10⁸ m/s, so we write B = E/c. Now, the E here stands for the electric field, so let me use W to refer to the energy instead of E. Using the B = E/c equation and a fairly straightforward calculation of the work that can be done by the associated force on a charge that's being put into this field, we get that famous equation which we mentioned above already: the momentum of a photon is its total energy divided by c, so we write p = W/c. You'll say: so what? Well… Nothing. I just wanted to note we get the same p = W/c equation indeed, but from a very different angle of analysis here.
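A quick back-of-the-envelope sketch of those two relations (W = hf from Planck, then p = W/c), using the 500 THz sodium-light frequency that comes up later in this post:

```python
# Numeric illustration of W = h*f and p = W/c for a single visible photon.
h = 6.626e-34   # Planck's constant (J*s)
c = 2.998e8     # speed of light (m/s)

f = 500e12      # frequency of sodium-like yellow light (Hz)
W = h * f       # photon energy (J)
p = W / c       # photon momentum (kg*m/s)

print(f"W = {W:.3e} J (~{W/1.602e-19:.2f} eV)")  # ~3.3e-19 J, ~2 eV
print(f"p = {p:.3e} kg*m/s")                     # ~1.1e-27 kg*m/s
```

So a single visible photon carries about 2 eV of energy but only ~10⁻²⁷ kg·m/s of momentum, which is one way of seeing why radiation pressure is so hard to notice.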
We didn't use the energy-momentum relation here at all! In any case, the point to note is that the momentum of a photon is only a tiny fraction of its energy (p = W/c), and that the associated magnetic field vector is also just a tiny fraction of the electric field vector (B = E/c).

But so it's there and, in fact, when adopting a moving reference frame, the mix of E and B (i.e. the electric and magnetic field) becomes an entirely different one. One of the 'gems' in Feynman's Lectures is the exposé on the relativity of electric and magnetic fields indeed, in which he analyzes the electric and magnetic field caused by a current, and in which he shows that, if we switch our inertial reference frame for that of the moving electrons in the wire, the 'magnetic' field disappears, and the whole electromagnetic effect becomes 'electric' indeed.

I am just noting this because I know I should do a similar analysis for the E and B 'mixture' involved in the electromagnetic transient that's being emitted by our atomic oscillator. However, I'll admit I am not quite comfortable enough with the physics nor the math involved to do that, so… Well… Please do bear this in mind as I will be jotting down some quite speculative thoughts in what follows.

So… A photon is, in essence, an electromagnetic disturbance and so, when trying to picture a photon, we can think of some oscillating electric field vector traveling through – and also limited in – space. [Note that I am leaving the magnetic field vector out of the analysis from the start, which is not 'nice' but, in light of that B = E/c relationship, I'll assume it's acceptable.]
In short, in the classical world – and in the classical world only of course – a photon must be some electromagnetic wave train, like the one below, perhaps.

But why would it have that shape? I only suggested it because it has the same shape as Feynman's representation of a particle (see below) as a 'probability wave' traveling through – and limited in – space.

So, what about it? Let me first remind you once again (I just can't stress this point enough it seems) that Feynman's representation – and most are based on his, it seems – is misleading because it suggests that ψ(x) is some real number. It's not. In the image above, the vertical axis should not represent some real number (and it surely should not represent a probability, i.e. some real positive number between 0 and 1) but a probability amplitude, i.e. a complex number in which both the real and imaginary part are important. Just to be fully complete (in case you forgot), such a complex-valued wave function ψ(x) will give you all the probabilities you need when you take its (absolute) square, but so… Well… We're really talking a different animal here, and the image above gives you only one part of the complex-valued wave function (either the real or the imaginary part), while it should give you both. That's why I find my graph below much better. 🙂 It's the same really, but so it shows both the real as well as the imaginary part of a wave function.

But let me go back to the first illustration: the vertical axis of the first illustration is not ψ but E – the electric field vector. So there's no imaginary part here: just a real number, representing the strength – or magnitude, I should say – of the electric field E as a function of the space coordinate x. [Can magnitudes be negative? The honest answer is: no, they can't.
But just think of it as representing the field vector pointing in the other way.]

Regardless of the shortcomings of this graph, including the fact we only have some real-valued oscillation here, would it work as a 'suggestion' of what a real-life photon could look like?

Of course, you could try to not answer that question by mumbling something like: "Well… It surely doesn't represent anything coming near to a photon in quantum mechanics." But… Well… That's not my question here: I am asking you to be creative and 'think outside of the box', so to say. 🙂

So you should say 'No!' because of some other reason. What reason? Well… If a photon is an electromagnetic transient – in other words, if we adopt a purely classical point of view – it's going to be a transient wave indeed, and so then it should walk, talk and even look like a transient. 🙂 Let me quickly jot down the formula for the (vertical) component of E as a function of the acceleration of some charge q (this is just Feynman's radiation formula):

E(t) = –q·a(t – r/c)/(4πε₀c²r)

The charge q (i.e. the source of the radiation) is, of course, our electron that's emitting the photon as it jumps from a higher to a lower energy level (or, vice versa, absorbing it). This formula basically states that the magnitude of the electric field (E) is proportional to the acceleration (a) of the charge (with t – r/c the retarded argument). Hence, the suggested shape of E as a function of x as shown above would imply that the acceleration of the electron is (a) initially quite small, (b) then becomes larger and larger to reach some maximum, and then (c) becomes smaller and smaller again to then die down completely.
In short, it does match the definition of a transient wave sensu stricto (Wikipedia defines a transient as "a short-lived burst of energy in a system caused by a sudden change of state") but it's not likely to represent any real transient. So, we can't exclude it, but a real transient is much more likely to look like what's depicted below: no gradual increase in amplitude but big swings initially which then dampen to zero. In other words, if our photon is a transient electromagnetic disturbance caused by a 'sudden burst of energy' (which is what that electron jump is, I would think), then its representation will, much more likely, resemble a damped wave, like the one below, rather than Feynman's picture of a moving matter-particle.

In fact, we'd have to flip the image, both vertically and horizontally, because the acceleration of the source and the field are related as shown below. The vertical flip is because of the minus sign in the formula for E(t). The horizontal flip is because of the minus sign in the (t – r/c) term, the retarded argument: if we add a little time (Δt), we get the same value for a(t – r/c) as we would have if we had subtracted a little distance: Δr = cΔt. So that's why E as a function of r (or of x), i.e. as a function in space, is a 'reversed' plot of the acceleration as a function of time.

So we'd have something like below.

What does this resemble? It's not a vibrating string (although I do start to understand the attractiveness of string theory now: vibrating strings are great as energy storage systems, so the idea of a photon being some kind of vibrating string sounds great, doesn't it?). It's not resembling a bullwhip effect either, because the oscillation of a whip is confined by a different envelope (see below). And, no, it's also definitely not a trumpet.
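For what it's worth, here's a minimal numeric sketch of such a damped wave. The damped-sine shape E(t) = E₀·e^(–t/2τ)·sin(ωt) is purely my assumption, chosen so that the energy (which goes with the amplitude squared) dies out as e^(–t/τ); the numbers are the sodium values used further on in this post:

```python
# A toy damped-oscillator model of the transient described above.
# The functional form is an assumption for illustration, not an
# established photon model.
import math

f = 500e12               # frequency (Hz)
omega = 2 * math.pi * f  # angular frequency (rad/s)
tau = 3.2e-8             # energy decay time (s)

def E_field(t, E0=1.0):
    # big swings initially, then dampening to zero
    return E0 * math.exp(-t / (2 * tau)) * math.sin(omega * t)

# After one decay time tau, the amplitude envelope is down by 1/sqrt(e),
# i.e. the energy (~ amplitude squared) is down by 1/e:
env_ratio = math.exp(-tau / (2 * tau))
print(f"amplitude ratio after tau: {env_ratio:.3f}")    # ~0.607
print(f"energy ratio after tau:    {env_ratio**2:.3f}") # ~0.368 = 1/e
```
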
🙂

It's just what it is: an electromagnetic transient traveling through space. Would this be realistic as a 'picture' of a photon? Frankly, I don't know. I've looked at a lot of stuff but didn't find anything on this really. The easy answer, of course, is quite straightforward: we're not interested in the shape of a photon because we know it is not an electromagnetic wave. It's a 'wavicle', just like an electron.

[…] Sure. I know that too. Feynman told me. 🙂 But then why wouldn't we associate some wave function with it? Please tell me, because I really can't find much of an answer to that question in the literature, and so that's why I am freewheeling here. So just go along with me for a while, and come up with another suggestion. As I said above, your bet is as good as mine. All that I know is that there's one thing we need to explain when considering the various possibilities: a photon has a very well-defined frequency (which defines its color in the visible light spectrum) and so our wave train should – in my humble opinion – also have that frequency. At least for 'quite a while', and then I mean 'most of the time', or 'on average' at least. Otherwise the concept of a frequency – or a wavelength – wouldn't make much sense. Indeed, if the photon has no defined wavelength or frequency, then we could not perceive it as some color (as you may or may not know, the sense of 'color' is produced by our eye and brain, but so it's definitely associated with the frequency of the light). A photon should have a color (in physics, that means a frequency) because, when everything is said and done, that's what the Planck relation is all about.

What would be your alternative?
I mean… Doesn't it make sense to think that, when jumping from one energy level to the other, the electron would initially sort of overshoot its new equilibrium position, to then overshoot it again on the other side, and so on and so on, but with an amplitude that becomes smaller and smaller as the oscillation dies out? In short, if we look at radiation as being caused by atomic oscillators, why would we not go all the way and think of them as oscillators subject to some damping force? Just think about it. 🙂

The size of a photon wave

Let's forget about the shape for a while and think about size. We've got an electromagnetic wave train here. So how long would it be? Well… Feynman calculated the Q of these atomic oscillators: it's of the order of 10⁸ (see his Lectures, I-33-3: it's a wonderfully simple exercise, and one that really shows his greatness as a physics teacher) and, hence, this wave train will last about 10⁻⁸ seconds (that's the time it takes for the radiation to die out by a factor 1/e). To give a somewhat more precise example, for sodium light, which has a frequency of 500 THz (500×10¹² oscillations per second) and a wavelength of 600 nm (600×10⁻⁹ meter), the radiation will last about 3.2×10⁻⁸ seconds. [In fact, that's the time it takes for the radiation's energy to die out by a factor 1/e (i.e. the so-called decay time τ), so the wave train will actually last longer, but the amplitude becomes quite small after that time.]

So that's a very short time but still, taking into account the rather spectacular frequency (500 THz) of sodium light, that still makes for some 16 million oscillations and, taking into account the rather spectacular speed of light (3×10⁸ m/s), that makes for a wave train with a length of, roughly, 9.6 meter. Huh? 9.6 meter!?

You're right.
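Those numbers are easy to re-derive from the decay-time relation τ = Q/ω, which is the standard result for a damped oscillator (Q of the order of 10⁸ and the 500 THz sodium frequency, as above):

```python
# Reproducing the wave-train numbers for the sodium example from Q.
import math

c = 2.998e8   # speed of light (m/s)
f = 500e12    # sodium light frequency (Hz)
Q = 1e8       # quality factor of the atomic oscillator (order of magnitude)

omega = 2 * math.pi * f
tau = Q / omega          # decay time of the radiated energy (s)
n_osc = f * tau          # number of oscillations in one decay time
length = c * tau         # spatial length of the wave train (m)

print(f"tau    = {tau:.2e} s")    # ~3.2e-8 s
print(f"n_osc  = {n_osc:.2e}")    # ~1.6e7, i.e. ~16 million oscillations
print(f"length = {length:.1f} m") # ~9.5 m
```

So with Q of the order of 10⁸ we indeed get a ~3.2×10⁻⁸ s transient, some 16 million oscillations, and a train of nearly ten meter.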
That's an incredible distance: it's like infinity on an atomic scale!

So… Well… What to say? Such length surely cannot match the picture of a photon as a fundamental particle which cannot be broken up, can it? So it surely cannot be right because, if this would be the case, then there surely must be some way to break this thing up and, hence, it cannot be 'elementary', can it?

Well… Maybe. But think it through. First note that we will not see the photon as a 10-meter long string because it travels at the speed of light and, hence, the length contraction effect ensures that its length, as measured in our reference frame – or in whatever 'real-life' reference frame actually, because the speed of light will always be c, regardless of the speeds we mortals could ever reach (including speeds close to c) – is zero.

So, yes, I surely must be joking here but, as far as jokes go, I can't help thinking this one is fairly robust from a scientific point of view. Again, please do double-check and correct me, but all what I've written so far is not all that speculative. It corresponds to all what I've read about it: only one photon is produced per electron in any de-excitation, and its energy is determined by the number of energy levels it drops, as illustrated (for a simple hydrogen atom) below. For those who continue to be skeptical about my sanity here, I'll quote Feynman once again:

"What happens in a light source is that first one atom radiates, then another atom radiates, and so forth, and we have just seen that atoms radiate a train of waves only for about 10⁻⁸ sec; after 10⁻⁸ sec, some atom has probably taken over, then another atom takes over, and so on. So the phases can really only stay the same for about 10⁻⁸ sec.
Therefore, if we average for very much more than 10⁻⁸ sec, we do not see an interference from two different sources, because they cannot hold their phases steady for longer than 10⁻⁸ sec. With photocells, very high-speed detection is possible, and one can show that there is an interference which varies with time, up and down, in about 10⁻⁸ sec." (Feynman's Lectures, I-34-4)

So… Well… Now it's up to you. I am going along here with the assumption that a photon in the visible light spectrum, from a classical world perspective, should indeed be something that's several meters long and packs a few million oscillations. So, while we usually measure stuff in seconds, or hours, or years, and, hence, while we would think that 10⁻⁸ seconds is short, a photon would actually be a very stretched-out transient that occupies quite a lot of space. I should also add that, in light of that number of ten meter, the dampening seems to happen rather slowly!

[…]

First because this type of analysis is not appropriate. […] You think so? Well… I don't know. Perhaps you're right. Perhaps we shouldn't try to think of a photon as being something different than a discrete packet of energy. But then we also know it is an electromagnetic wave. So why wouldn't we go all the way?

Second, I guess you may find the math involved in this post not to your liking, even if it's quite simple and I am not doing anything spectacular here. […] Well… Frankly, I don't care. Let me bulldozer on. 🙂

What about the 'vertical' dimension, the y and the z coordinates in space? We've got this long snaky thing: how thick-bodied is it?

Here, we need to watch our language.
While it's fairly obvious to associate a wave with a cross-section that's normal to its direction of propagation, it is not obvious to associate a photon with the same thing. Not at all actually: as that electric field vector E oscillates up and down (or goes round and round, as shown in the illustration below, which is an image of a circularly polarized wave), it does not actually take any space. Indeed, the electric and magnetic field vectors E and B have a direction and a magnitude in space but they're not representing something that is actually taking up some small or larger core in space.

Hence, the vertical axis of that graph showing the wave train does not indicate some spatial position: it's not a y-coordinate but the magnitude of an electric field vector. [Just to underline the fact that the magnitude E has nothing to do with spatial coordinates: note that its value depends on the unit we use to measure field strength (so that's newton/coulomb, if you want to know), so it's really got nothing to do with an actual position in space-time.]

So, what can we say about it? Nothing much, perhaps. But let me try.

Cross-sections in nuclear physics

In nuclear physics, the term 'cross-section' would usually refer to the so-called Thomson scattering cross-section of an electron (or any charged particle really), which can be defined rather loosely as the target area for the incident wave (i.e. the photons): it is, in fact, a surface which can be calculated from what is referred to as the classical electron radius, which is about 2.82×10⁻¹⁵ m. Just to compare: you may or may not remember the so-called Bohr radius of an atom, which is about 5.29×10⁻¹¹ m, so that's a length that's about 20,000 times longer.
To be fully complete, let me give you the exact value for the Thomson scattering cross-section of an electron: 6.65×10⁻²⁹ m² (note that this is a surface indeed, so we have meter squared as a unit, not meter).

Now, let me remind you – once again – that we should not associate the oscillation of the electric field vector with something actually happening in space: an electromagnetic field does not move in a medium and, hence, it's not like a water or sound wave, which makes molecules go up and down as it propagates through its medium. To put it simply: there's nothing that's wriggling in space as that photon is flashing through space. However, when it does hit an electron, that electron will effectively 'move' (or vibrate or wriggle or whatever you can imagine) as a result of the incident electromagnetic field.

That's what's depicted and labeled below: there is a so-called 'radial component' of the electric field, and I would say: that's our photon! [What else would it be?] The illustration below shows that this 'radial' component is just E for the incident beam and that, for the scattered beam, it is, in fact, determined by the electron motion caused by the incident beam through that relation described above, in which a is the normal component (i.e. normal to the direction of propagation of the outgoing beam) of the electron's acceleration.

Now, before I proceed, let me remind you once again that the above illustration is, once again, one of those illustrations that only wants to convey an idea, and so we should not attach too much importance to it: the world at the smallest scale is best not represented by a billiard ball model. In addition, I should also note that the illustration above was taken from the Wikipedia article on elastic scattering (i.e.
Thomson scattering), which is only a special case of the more general Compton scattering that actually takes place. It is, in fact, the low-energy limit. Photons with higher energy will usually be absorbed, and then there will be a re-emission, but, in the process, there will be a loss of energy in this 'collision' and, hence, the scattered light will have lower energy (and, hence, lower frequency and longer wavelength). But – Hey! – now that I think of it: that's quite compatible with my idea of damping, isn't it? 🙂 [If you think I've gone crazy, I am really joking here: when it's Compton scattering, there's no 'lost' energy: the electron will recoil and, hence, its momentum will increase. That's what's shown below (credit goes to the HyperPhysics site).]

So… Well… Perhaps we should just assume that a photon is a long wave train indeed (as mentioned above, ten meter is very long indeed: not an atomic scale at all!) but that its effective 'radius' should be of the same order as the classical electron radius. So what's that order? If it's more or less the same radius, then it would be in the order of femtometers (1 fm = 1 fermi = 1×10⁻¹⁵ m). That's good because that's a typical length-scale in nuclear physics. For example, it would be comparable with the radius of a proton. So we look at a photon here as something very different – because it's so incredibly long (at least as measured from its own reference frame) – but as something which does have some kind of 'radius' that is normal to its direction of propagation and equal or smaller than the classical electron radius. [Now that I think of it, we should probably think of it as being substantially smaller. Why?
Well… An electron is obviously fairly massive as compared to a photon (if only because an electron has a rest mass and a photon hasn't) and so… Well… When everything is said and done, it's the electron that absorbs a photon, not the other way around!]

Now, that radius determines the area in which it may produce some effect, like hitting an electron, for example, or like being detected in a photon detector, which is just what this so-called radius of an atom or an electron is all about: the area which is susceptible of being hit by some particle (including a photon), or which is likely to emit some particle (including a photon). What it is exactly, we don't know: it's still as spooky as an electron and, therefore, it also does not make all that much sense to talk about its exact position in space. However, if we'd talk about its position, then we should obviously also invoke the Uncertainty Principle, which will give us some upper and lower bounds for its actual position, just like it does for any other particle: the uncertainty about its position will be related to the uncertainty about its momentum, and more knowledge about the former will imply less knowledge about the latter, and vice versa. Therefore, we can also associate some complex wave function with this photon which is – for all practical purposes – a de Broglie wave. Now how should we visualize that wave?

Well… I don't know. I am actually not going to offer anything specific here. First, it's all speculation. Second, I think I've written too much rubbish already.
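As an aside, the two numbers quoted in that cross-section discussion – the classical electron radius and the Thomson cross-section – can be recomputed from the fundamental constants (the 8π/3 factor is the standard Thomson formula):

```python
# Recomputing the classical electron radius and the Thomson scattering
# cross-section from fundamental constants.
import math

e = 1.602e-19      # elementary charge (C)
eps0 = 8.854e-12   # vacuum permittivity (F/m)
m_e = 9.109e-31    # electron mass (kg)
c = 2.998e8        # speed of light (m/s)

# classical electron radius: r_e = e^2 / (4*pi*eps0*m_e*c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)

# Thomson cross-section: sigma_T = (8*pi/3) * r_e^2
sigma_T = (8 * math.pi / 3) * r_e**2

print(f"r_e     = {r_e:.3e} m")       # ~2.82e-15 m
print(f"sigma_T = {sigma_T:.3e} m^2") # ~6.65e-29 m^2
```
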
However, if you're still reading, and you like this kind of unorthodox application of electromagnetics, then the following remarks may stimulate your imagination.

The first thing to note is that we should not end up with a wave function that, when squared, gives us a constant probability for each and every point in space. No. The wave function needs to be confined in space and, hence, we're also talking a wave train here, and a very short one in this case. So… Well… What about linking its amplitude to the amplitude of the field for the photon? In other words, the probability amplitude could, perhaps, be proportional to the amplitude of E, with the proportionality factor being determined by (a) the unit in which we measure E (i.e. newton/coulomb) and (b) the normalization condition.

OK. I hear you say it now: "Ha-ha! Got you! Now you're really talking nonsense! How can a complex number (the probability amplitude) be proportional to some real number (the field strength)?"

Well… Be creative. It's not that difficult to imagine some linkages. First, the electric field vector has both a magnitude and a direction. Hence, there's more to E than just its magnitude. Second, you should note that the real and imaginary part of a complex-valued wave function is a simple sine and cosine function, and so these two functions are the same really, except for a phase difference of π/2. In other words, if we have a formula for the real part of a wave function, we have a formula for its imaginary part as well. So… Your remark is to the point and then it isn't.

OK, you'll say, but then how exactly would you link the E vector with the ψ(x, t) function for a photon? Well… Frankly, I am a bit exhausted now and so I'll leave any further speculation to you.
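That second point – the real and imaginary parts of a complex exponential being the same oscillation, just π/2 apart – is easy to verify numerically (nothing photon-specific here, just the math):

```python
# The real and imaginary parts of exp(i*theta) are a cosine and a sine,
# i.e. the same function shifted by pi/2; and |exp(i*theta)|^2 = 1 always.
import cmath
import math

for theta in [0.0, 0.3, 1.0, 2.5]:
    z = cmath.exp(1j * theta)
    assert abs(z.real - math.cos(theta)) < 1e-12
    assert abs(z.imag - math.sin(theta)) < 1e-12
    # cos(theta) = sin(theta + pi/2): one part determines the other
    assert abs(math.cos(theta) - math.sin(theta + math.pi / 2)) < 1e-12
    assert abs(abs(z)**2 - 1.0) < 1e-12

print("real and imaginary parts are a cosine and a sine, pi/2 apart")
```
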
The whole idea of a de Broglie wave of a photon, with the (complex-valued) amplitude having some kind of 'proportional' relationship to the (magnitude of) the electric field vector, makes sense to me, although we'd have to be innovative about what that 'proportionality' exactly is.

Let me conclude this speculative business by noting a few more things about our 'transient' electromagnetic wave:

1. First, it's obvious that the usual relations between (a) energy (W), (b) frequency (f) and (c) amplitude (A) hold. If we increase the frequency of a wave, we'll have a proportional increase in energy (twice the frequency is twice the energy), with the factor of proportionality being given by the Planck-Einstein relation: W = hf. But if we're talking amplitudes (for which we do not have a formula, which is why we're engaging in those assumptions on the shape of the transient wave), we should not forget that the energy of a wave is proportional to the square of its amplitude: W ∼ A². Hence, a linear increase of the amplitudes results in a quadratic increase in energy (e.g. if you double all amplitudes, you'll pack four times more energy in that wave).

2. Both factors come into play when an electron emits a photon. Indeed, if the difference between the two energy levels is larger, then the photon will not only have a higher frequency (i.e. we're talking light (or electromagnetic radiation) in the upper ranges of the spectrum then) but one should also expect that the initial overshooting – and, hence, the initial oscillation – will also be larger. In short, we'll have larger amplitudes. Hence, higher-energy photons will pack even more energy upfront. They will also have higher frequency, because of the Planck relation.
So, yes, both factors would come into play.

What about the length of these wave trains? Would it make them shorter? Yes. I'll refer you to Feynman's Lectures to verify that the wavelength appears in the numerator of the formula for Q. Hence, higher frequency means shorter wavelength and, hence, lower Q. Now, I am not quite sure (I am not sure about anything I am writing here, it seems) but this may or may not be the reason for yet another statement I never quite understood: photons with higher and higher energy are said to become smaller and smaller, and when they reach the Planck scale, they are said to become black holes.

Hmm… I should check on that. 🙂

Conclusion

So what's the conclusion? Well… I'll leave it to you to think about this. As said, I am a bit tired now and so I'll just wrap this up, as this post has become way too long anyway. Let me, before parting, offer the following bold suggestion in terms of finding a de Broglie wave for our photon: perhaps that transient above actually is the wave function.

You'll say: What!? What about normalization? All probabilities have to add up to one and, surely, those magnitudes of the electric field vector wouldn't add up to one, would they?

My answer to that is simple: that's just a question of units, i.e. of normalization indeed. So just measure the field strength in some other unit and it will come all right.

[…] But… Yes? What? Well… Those magnitudes are real numbers, not complex numbers.

I am not sure how to answer that one, but there are two things I could say:

1. Real numbers are complex numbers too: it's just that their imaginary part is zero.
2. When working with waves, and especially with transients, we've always represented them using the complex exponential function.
For example, we would write a wave function whose amplitude varies sinusoidally in space and time as A·e^(i(ωt − kr)), with ω the (angular) frequency and k the wave number (so that's the wavelength expressed in radians per unit distance).

So, frankly, think about it: where is the photon? It's that ten-meter long transient, isn't it? And the probability to find it somewhere is the (absolute) square of some complex number, right? And then we have a wave function already, representing an electromagnetic wave, for which we know that the energy which it packs is the square of its amplitude, as well as being proportional to its frequency. We also know we're more likely to detect something with high energy than something with low energy, don't we? So… Tell me why the transient itself would not make for a good psi function?

But then what about these probability amplitudes being a function of the y and z coordinates?

Well… Frankly, I've started to wonder if a photon actually has a radius. If it doesn't have a mass, it's probably the only real point-like particle (i.e. a particle not occupying any space) – as opposed to all other matter-particles, which do have mass.

Why?

I don't know. Your guess is as good as mine. Maybe our concepts of amplitude and frequency of a photon are not very relevant. Perhaps it's only energy that counts. We know that a photon has a more or less well-defined energy level (within the limits of the Uncertainty Principle) and, hence, our ideas about how that energy actually gets distributed over the frequency, the amplitude and the length of that 'transient' have no relation with reality.
Perhaps we like to think of a photon as a transient electromagnetic wave, because we're used to thinking in terms of waves and fields, but perhaps a photon is just a point-like thing indeed, with a wave function that's got the same shape as that transient. 🙂

Post scriptum: Perhaps I should apologize to you, my dear reader. It's obvious that, in quantum mechanics, we don't think of a photon as having some frequency and some wavelength and some dimension in space: it's just an elementary particle with energy interacting with other elementary particles with energy, and we use these coupling constants and what have you to work with them. So we don't usually think of photons as ten-meter long transients moving through space. So, when I write that "our concepts of amplitude and frequency of a photon are maybe not very relevant" when trying to picture a photon, and that "perhaps, it's only energy that counts", I actually don't mean "maybe" or "perhaps". I mean: Of course! […] In the quantum-mechanical world view, that is.

So I apologize for, perhaps, posting what may or may not amount to plain nonsense. However, as all of this nonsense helps me to make sense of these things myself, I'll just continue. 🙂 I seem to move very slowly on this Road to Reality, but the good thing about moving slowly is that it will – hopefully – give me the kind of 'deeper' understanding I want, i.e. an understanding beyond the formulas and mathematical and physical models. In the end, that's all that I am striving for when pursuing this 'hobby' of mine. Nothing more, nothing less. 🙂 Onwards!

Some content on this page was disabled on June 17, 2020 as a result of a DMCA takedown notice from Michael A. Gottlieb, Rudolf Pfeiffer, and The California Institute of Technology.
You can learn more about the DMCA here.
There are many things we can say about the high quality of service The ASFOUR Team provides its home seller and home buyer clients, but probably none would be as credible as testimonials from some of our recent clients themselves. So please take a moment and read the testimonials that some of our clients have sent us. We hope they reflect our experience and integrity, as well as make it clear how much we enjoy helping people with their real estate needs.

"We do not know how much to thank you both. You guys did a fantastic job and we sincerely appreciate all of the hard work you did. You guys are awesome!!!"

"While there are many Real Estate Agents out there, rarely do you find ones that exemplify customer service at the level demonstrated by the Asfour Team. During our home search, Sami and Lucy were always prompt to every appointment and on many occasions had previewed the homes for us ahead of time. They provided their observations along the way, but never pressured us to rush into a purchase. When we decided on a home, they presented the offer to the Selling Agent in person and diligently followed it every step of the way to help ensure all problems were resolved in a timely manner. Sami and Lucy treat their clients like friends and family. They care about the outcome of the transaction. Their efforts are customer oriented and sincere. We highly recommend the Asfour Team to anyone looking for excellent Real Estate Agents."

"We met Sami at an open house and were immediately impressed by his knowledge of real estate. We then decided that Sami was the best agent for us. I personally appreciate his straightforward approach, and his ability to communicate so well with people. It has been a great pleasure working with Sami and his wife Lucy, a Realtor and interior design member of The ASFOUR Team. I highly recommend Sami and his Team to anyone considering buying or selling property."

"I have just completed a real estate sale with Sami Asfour. I contacted Sami, as we are both members of the same service club. I have observed Sami's dedication to excellence in all he undertakes. I had a rather unusual real estate sale and needed a knowledgeable representative. Sami far exceeded my expectations. He is, first of all, conscientious and knowledgeable. He is dedicated to "getting it right the first time." I am more than pleased and satisfied with the outstanding service I received. I highly recommend Sami Asfour to anyone seeking real estate services. You will walk away very satisfied and pleased with your experience."
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
// Code generated by Microsoft (R) AutoRest Code Generator.

package com.azure.resourcemanager.network.models;

import com.azure.core.annotation.Fluent;
import com.fasterxml.jackson.annotation.JsonProperty;

/** Recommended actions based on discovered issues. */
@Fluent
public final class TroubleshootingRecommendedActions {
    /*
     * ID of the recommended action.
     */
    @JsonProperty(value = "actionId")
    private String actionId;

    /*
     * Description of recommended actions.
     */
    @JsonProperty(value = "actionText")
    private String actionText;

    /*
     * The uri linking to a documentation for the recommended troubleshooting actions.
     */
    @JsonProperty(value = "actionUri")
    private String actionUri;

    /*
     * The information from the URI for the recommended troubleshooting actions.
     */
    @JsonProperty(value = "actionUriText")
    private String actionUriText;

    /** Creates an instance of TroubleshootingRecommendedActions class. */
    public TroubleshootingRecommendedActions() {
    }

    /**
     * Get the actionId property: ID of the recommended action.
     *
     * @return the actionId value.
     */
    public String actionId() {
        return this.actionId;
    }

    /**
     * Set the actionId property: ID of the recommended action.
     *
     * @param actionId the actionId value to set.
     * @return the TroubleshootingRecommendedActions object itself.
     */
    public TroubleshootingRecommendedActions withActionId(String actionId) {
        this.actionId = actionId;
        return this;
    }

    /**
     * Get the actionText property: Description of recommended actions.
     *
     * @return the actionText value.
     */
    public String actionText() {
        return this.actionText;
    }

    /**
     * Set the actionText property: Description of recommended actions.
     *
     * @param actionText the actionText value to set.
     * @return the TroubleshootingRecommendedActions object itself.
     */
    public TroubleshootingRecommendedActions withActionText(String actionText) {
        this.actionText = actionText;
        return this;
    }

    /**
     * Get the actionUri property: The uri linking to a documentation for the recommended troubleshooting actions.
     *
     * @return the actionUri value.
     */
    public String actionUri() {
        return this.actionUri;
    }

    /**
     * Set the actionUri property: The uri linking to a documentation for the recommended troubleshooting actions.
     *
     * @param actionUri the actionUri value to set.
     * @return the TroubleshootingRecommendedActions object itself.
     */
    public TroubleshootingRecommendedActions withActionUri(String actionUri) {
        this.actionUri = actionUri;
        return this;
    }

    /**
     * Get the actionUriText property: The information from the URI for the recommended troubleshooting actions.
     *
     * @return the actionUriText value.
     */
    public String actionUriText() {
        return this.actionUriText;
    }

    /**
     * Set the actionUriText property: The information from the URI for the recommended troubleshooting actions.
     *
     * @param actionUriText the actionUriText value to set.
     * @return the TroubleshootingRecommendedActions object itself.
     */
    public TroubleshootingRecommendedActions withActionUriText(String actionUriText) {
        this.actionUriText = actionUriText;
        return this;
    }

    /**
     * Validates the instance.
     *
     * @throws IllegalArgumentException thrown if the instance is not valid.
     */
    public void validate() {
    }
}
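For context, here is how such a fluent model is typically used: each `withX` setter returns `this`, so construction reads as a chain. The snippet below is a minimal standalone stand-in for illustration only (`RecommendedActions` and the values are made up; the real type lives in `com.azure.resourcemanager.network.models` and is used the same way):

```java
// Standalone sketch of the fluent-setter pattern used by the generated model above.
// RecommendedActions is a hypothetical stand-in, NOT the real Azure class.
class RecommendedActions {
    private String actionId;
    private String actionText;

    // Fluent setters: assign the field, then return this so calls can be chained.
    RecommendedActions withActionId(String actionId) {
        this.actionId = actionId;
        return this;
    }

    RecommendedActions withActionText(String actionText) {
        this.actionText = actionText;
        return this;
    }

    // Getters follow the same no-prefix naming convention as the generated code.
    String actionId() { return actionId; }
    String actionText() { return actionText; }
}

public class FluentActionsDemo {
    public static void main(String[] args) {
        RecommendedActions actions = new RecommendedActions()
                .withActionId("RestartGateway")                              // chained call 1
                .withActionText("Restart the gateway and retry the probe."); // chained call 2

        if (!"RestartGateway".equals(actions.actionId())) throw new AssertionError();
        System.out.println("actionId=" + actions.actionId()); // prints "actionId=RestartGateway"
    }
}
```

Returning `this` from every setter is what lets Jackson-annotated models like the one above be populated in a single expression instead of four separate statements.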
\section{Introduction} Defects play a central role in Quantum Field Theory. They are valuable tools to study physical configurations far from ideal, e.g., finite size effects, presence of boundaries, interfaces, or impurities, and give insight into how a theory responds to the insertion of a probe. \\ When restricting to an $n$-dimensional Conformal Field Theory (CFT), a codimension-$m$ conformal defect is a lower-dimensional operator which breaks the conformal group to the conformal group in $(n-m)$ dimensions on the defect, plus transformations in the directions orthogonal to the defect. The presence of the defect enlarges the spectrum of observables of the CFT. On the one hand, the set of correlation functions of operators localized on the defect defines a \emph{Defect Conformal Field Theory} (dCFT) that can be analyzed through standard methods of CFTs. On the other hand, local bulk operators acquire a non-trivial VEV near the defect, leading to a new set of CFT data \cite{Billo:2016cpy}. These coefficients are not independent but constrained by crossing symmetry and unitarity in the defect bootstrap equations \cite{Liendo:2012hy,McAvity:1995zd}. In particular, one-dimensional defects have been extensively considered. Physically, they represent worldlines of physical particles and provide a universal language to describe phenomena in condensed matter physics, like the Kondo problem (see \cite{Affleck:1995ge} for a review), or in high energy physics. In the latter case, the preeminent example of a line operator is the Wilson loop operator in gauge theories \cite{Wilson:1974sk}. It serves as an order parameter for phase transitions, e.g., the confinement/deconfinement phase transition. Moreover, the Wilson line spectrum characterizes the form of the gauge group, a piece of information that is not accessible by using only local observables \cite{Aharony:2013hda,Gaiotto:2014kfa}.
In supersymmetric theories, it is natural to consider extensions of Wilson loops that preserve a fraction of the supercharges (BPS operators). They are constructed by adding extra couplings to the matter fields in the Wilson connection. Linear and circular paths can often support BPS Wilson loops that keep maximal superconformal symmetry, namely half of the total supercharges \cite{Zarembo:2002an}. The first example is in 4d $\mathcal{N}=4$ SYM where, in the context of the AdS/CFT correspondence, the BPS Wilson loop was introduced as the CFT dual of the fundamental string \cite{Maldacena:1998im, Rey:1998ik}. BPS Wilson loops in 4d \(\mathcal{N}=4\) Super Yang-Mills provide a natural example of dCFT \cite{Giombi:2017cqn, Cooke:2017qgm, Giombi:2018qox}. Generally, the study of Wilson loops in supersymmetric theories has a broad horizon. In particular setups, we can combine supersymmetric localization and perturbation theory to compute physical observables exactly, such as the Bremsstrahlung function \cite{Pestun:2007rz, Correa:2012at}. This possibility establishes an exciting bridge with integrability, as the same quantity is accessible using integrability-based methods \cite{Correa:2012hh}. With the aim of investigating whether similar features are present in superconformal field theories (SCFTs) defined in different spacetime dimensions, it is natural to explore BPS Wilson operators in the three-dimensional analog of $\mathcal{N}=4$ SYM, namely ABJ(M) theory. This is a class of three-dimensional $\mathcal{N}=6$ Super Chern-Simons-matter theories with gauge group $U(N_1)_k\times U(N_2)_{-k}$ \cite{Aharony:2008ug, Aharony:2008gk}, dual to type IIA string theory in AdS$_4\times \mathbb{CP}_3$ or M-theory in AdS$_4\times S^7/Z_k$. In ABJ(M) theory, the structure of BPS Wilson loops is much richer than in four dimensions due to the possibility of using not only scalar matter but also fermions to build up a supersymmetric loop connection.
We will limit the discussion to the maximally supersymmetric (1/2 BPS) Wilson operator. This observable is naturally defined in terms of a $U(N_1|N_2)$ supermatrix connection, which involves gauge fields and scalars in the diagonal terms, and matter fermions in the anti-diagonal ones \cite{Drukker:2009hy}. The fermionic couplings constitute a defect marginal deformation \cite{Ouyang:2015iza, Ouyang:2015bmy, Mauri:2017whf}, which connects the fermionic loops to the less supersymmetric bosonic Wilson loops that do not involve any fermions \cite{Drukker:2008zx, Berenstein:2008dc, Chen:2008bp}. The exact quantum value of these Wilson loops is accessible via localization, either using the cohomological equivalence between fermionic and bosonic Wilson loops or exploiting a more recent representation involving the background connection \cite{Kapustin:2009kz, Drukker:2019bev, Drukker:2020opf}. In the AdS/CFT context, the 1/2 BPS Wilson loop is dual to the fundamental open string living in AdS$_4\times \mathbb{CP}_3$, with appropriate boundary conditions on the contour at the boundary. The maximally supersymmetric Wilson line provides an example of a conformal line defect with non-trivial boundary conditions induced by the fermions. In this paper, we will focus on the related defect CFT \cite{ Bianchi:2017ozk, Bianchi:2020hsz}\footnote{For the dCFT on the bosonic Wilson loop see \cite{Bianchi:2018scb}. }. This dCFT admits a Lagrangian formulation and a weak coupling limit and is thus amenable to perturbative investigation. Moreover, according to the AdS/CFT dictionary, its strong coupling limit is described by the AdS$_2$ theory for the fluctuations of the fundamental open string in AdS$_4\times \mathbb{CP}_3$. This fact provides a controlled example of non-maximally supersymmetric AdS$_2$/CFT$_1$ correspondence \cite{Giombi:2017cqn}. 
The exploration of this theory was initiated in \cite{Bianchi:2017ozk, Bianchi:2020hsz}, where the main focus was the \emph{displacement multiplet}. The presence of the displacement operator is a universal feature of defects, arising from the non-conservation of the stress-energy tensor in the direction orthogonal to the defect. It describes the response of the defect to contour deformations. The displacement operator is always the top component of a superconformal multiplet and arises from breaking transverse translations. Its correlation functions encode the information on the energy exchanged between the bulk and the defect, usually called the Bremsstrahlung function. Here we continue the investigation of the dCFT defined on the 1/2 BPS fermionic Wilson line, whose local operators are naturally represented as supermatrices. We will be primarily interested in constructing the lowest dimensional multiplets and evaluating their correlation functions at the perturbative level. At weak coupling, in any Lagrangian theory, a multiplet can be built by acting on its superconformal primary (SCP) operator with the supersymmetry charges preserved by the theory. However, unlike its 4d counterpart, the superconnection of the Wilson line is supersymmetry invariant only up to a super-gauge transformation. Following \cite{Bianchi:2020hsz}, we compensate for this residual transformation by dressing the standard supersymmetry with a further super-gauge transformation. We generalize this idea, building this structure on the space of supermatrices. The resulting algebra is called the \emph{covariant} superconformal algebra. We explore its algebraic structure in detail and make contact with the abstract representation theory for the worldline superconformal algebra. Exploiting this covariant formalism, we find, quite surprisingly, that at weak coupling the lightest multiplet living on the defect is a long multiplet whose SCP is classically dimensionless.
It corresponds to the insertion of a constant supermatrix on the Wilson line. Formally, it is the supermatrix that swaps the supertrace and the trace prescriptions in the definition of the Wilson line. Physically, it can be related to the action of transverse rotations on the defect. Since the superconnection involves fermions, one might think that the Wilson line breaks transverse rotations. However, this is not the case since its variation can always be modded out by a gauge transformation \cite{Agmon:2020pde}. Equivalently, we prove that the defect operator produced under transverse rotations is a descendant (a total line derivative) of the constant operator. Strong coupling considerations corroborate the physical relevance of the constant operator. As in the 4d case \cite{Giombi:2017cqn}, in the ABJ(M) theory the displacement multiplet of the 1/2 BPS Wilson line is naturally mapped into the fluctuations around the classical open string solution \cite{Bianchi:2020hsz}. In the 4d case, the lowest dimensional long multiplet is conjectured to be dual to the lightest bound state of fluctuations in the string theory setup. In the same spirit, we conjecture that our constant operator is the CFT dual of the lowest dimensional bound state built out of the fluctuations corresponding to the displacement multiplet. We find a non-trivial agreement with the weak coupling structure. The constant operator at weak coupling is the only possible operator matching the SCP in the holographic representation. We begin by studying the quantum properties of the constant multiplet at weak and strong coupling. First, we provide the explicit construction of the multiplet in terms of the ABJ(M) fundamental fields. We study its correlation functions at weak coupling in standard perturbation theory. Our main result is the evaluation of the anomalous dimension of the constant operator at one loop.
We also borrow the results from the bootstrap analysis of \cite{Bianchi:2020hsz} to get the anomalous dimension at strong coupling. It turns out that the quantum dimension of the constant operator is compatible with an interpolating function between weak and strong coupling. We provide a next-to-leading order computation of the Bremsstrahlung function at weak coupling from the two-point function of the displacement operator, finding agreement with previous results \cite{Lewkowycz:2013laa, Bianchi:2014laa, Bianchi:2017svd, Bianchi:2018bke, Griguolo:2021rke}. \\ In performing perturbative calculations, long-distance divergences associated with the infinite length of the Wilson line arise and need to be regularized. Usually, this is done by cutting the line to a segment, but this produces unwanted terms which mix the IR regulator with the parameter of the UV dimensional regularization in a way that renders the dCFT at the quantum level sensitive to the regularization scheme and therefore ambiguous. To clarify the origin of these terms and find a consistent way to remove them, we compare the results from the dCFT on the line with those of the same dCFT on the maximally supersymmetric circular Wilson loop, where the IR divergences are absent. This comparison allows us to identify a definite procedure to avoid these issues, which eventually seems to correspond to putting extra degrees of freedom at the two edges of the cut-off line.
We also discuss the role of the constant operator in connection with the cohomological equivalence between bosonic and fermionic Wilson loops. We investigate the new multiplet at weak coupling in section \ref{sect:perturbative}, where we exploit a non-trivial Ward identity that arises from the covariant algebra to read its anomalous dimension at one loop directly from the coefficient of the two-point function of its descendant. The regularization prescription that we adopt to tame IR divergences on the line is checked against the computation of the Brehmsstrahlung function at two loops, which is consistent with previous results in the literature. Finally, the realization of the constant operator in terms of the dual bound state is discussed in section \ref{sect:strong}. We summarize the main results and collect insights on future directions, in section \ref{sect:conclusions}. Six appendices complete the paper. They cover technical details on supermatrices, the ABJ(M) theory, and the superconformal algebra on the defect and its representations. A detailed discussion on the regularization of large distance divergences on the line and its comparison with the theory defined on the circle is presented in appendix \ref{sect:cut-off}. \section{$1/2$ BPS defects in ABJ(M) theory}\label{sect:WL} This section briefly reviews the structure and properties of $1/2$ BPS Wilson operators in ABJ(M) theory \cite{Drukker:2009hy}, primarily to fix notations and conventions. Given the ABJ(M) theory associated with the $U(N_1)_k \times U(N_2)_{-k}$ quiver and described by the action in eqs. (\ref{action}, \ref{CS} -- \ref{S4pt}), we consider the fermionic Wilson operator defined as \begin{equation}\label{WL} W[C] =\mathcal P\exp (-i\int_C \mathcal L) \end{equation} The super-connection $\mathcal L$ is given by the following {\em even} supermatrix\footnote{See App. 
\ref{supermatrices} for the basic properties of supermatrices.} in the Lie superalgebra of $U(N_1|N_2)$: \begin{equation}\label{connec} \mathcal L= \begin{pmatrix} A_\mu\dot x^\mu- 2\pi i \frac{\ell}{k}|\dot x|{M_J}^IC_I\bar C^J & i\sqrt{2\pi\frac{\ell}{k}}|\dot x|\eta^\alpha_I\bar\psi^I_\alpha\\ -i\sqrt{2\pi \frac{\ell}{k}}|\dot x|\psi_I^\alpha\bar\eta_\alpha^I& \hat A_\mu\dot x^\mu- 2\pi i \frac{\ell}{k}|\dot x|{\hat M_J}^I\bar C^JC_I \end{pmatrix} \end{equation} In \eqref{connec} $C$ denotes a generic smooth contour in ${\mathbb R}^3$ parametrized as $x^\mu=x^\mu(\tau)$ (we work in Euclidean signature with conventions given in appendix \ref{ABJ(M)}). The super-connection depends on the Chern-Simons level $k$, whereas $\ell$ is an arbitrary parameter that can take values $\pm1$. The quantities $M_{J}^{\ \ I}(\tau)$, $\hat M_{J}^{\ \ I}(\tau)$, $\eta_{I}^{\alpha}(\tau)$ and $\bar{\eta}^{I}_{\alpha}(\tau)$ control the possible local couplings. The latter two, in particular, are considered Grassmann even quantities even though they transform in the spinor representation of the Lorentz group. We shall focus on locally 1/2 BPS operators that possess a local $U(1)\times SU(3)$ $R-$symmetry invariance. Thus the couplings in \eqref{connec} can be taken of the form \begin{equation} \begin{split} \label{cc} &\eta_{I}^{\alpha} (\tau)=n_{I} (\tau)\eta^{\alpha} (\tau),\ \ \ \bar\eta^{I}_{\alpha} (\tau)=\bar n^{I} (\tau) \bar\eta_{\alpha} (\tau),\ \ \ M_{J}^{\ \ I} (\tau)= \widehat M_{J}^{\ \ I} (\tau)=\delta^{I}_{J}-2 n_{J} (\tau) \bar n^{I} (\tau) \end{split} \end{equation} The functional dependence of $n^I(\tau)$ and $\eta_\alpha(\tau)$ on $\tau$ is then determined by requiring that the loop preserves some superconformal transformations (see \cite{Drukker:2009hy,Cardinali:2012ru,Lietti:2017gtc}). 
For generic closed paths, the net result of this subset of transformations on \eqref{WL} can be represented as a field-dependent super-gauge transformation belonging to $u(N_1|N_2)$. However, this super-gauge transformation is not, in general, periodic when $\tau$ spans the contour. Thus, to obtain a BPS quantity, it is not enough to take the super-trace of the line operator, but we have to introduce a twist matrix $\mathcal{T}$ that compensates for the lack of periodicity, i.e.
\begin{equation}
\mathcal{W}=\mathrm{Str}\left( W[C] \mathcal{T}\right)
\end{equation}
See \cite{Cardinali:2012ru} for details. Alternatively, this issue can be cured by introducing a classical background connection along the path, which makes the super-gauge transformation periodic \cite{Drukker:2019bev}. Although it is more elegant from a geometric point of view, this latter approach is less suited for performing perturbative computations, and thus we shall not use it in the following. One can also consider Wilson operators supported on unbounded contours. A typical example is the Wilson line, where the contour is an infinite straight line. In this case, to obtain a BPS operator, we must carefully choose the boundary condition at infinity. This choice is not always unique or unambiguous. In the following, we choose to fix the boundary conditions, and consequently the twist matrix, by requiring that the unbounded contour is obtained as the decompactification limit of a closed path. Our analysis will focus on two types of (conformally equivalent) operators/defects: the infinite straight line and the great circle in $S^2$.

\noindent \paragraph{Linear defect.} This is described by the Wilson operator in \eqref{WL}, with $C$ being the infinite straight line parametrized as $x^\mu = (0,0, s)$, $ -\infty < s < +\infty$.
When the matter couplings are chosen as \begin{equation} {M_J}^I={\hat M_J}^I= \begin{pmatrix} -1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}, \qquad \eta^\alpha_I=\sqrt{2} \begin{pmatrix} 1\\ 0\\ 0\\ 0 \end{pmatrix}_I (1,0)^\alpha,\qquad \bar\eta_\alpha^I=i\sqrt{2}\ (1,0,0,0)^I \begin{pmatrix} 1\\ 0 \end{pmatrix}_\alpha \end{equation} with the fermionic couplings satisfying the conditions \begin{equation} \delta^\beta_\alpha=\frac{1}{2i}(\eta^\beta\bar\eta_\alpha-\eta_\alpha\bar\eta^\beta)\qquad {(\dot x\cdot\gamma)_\alpha}^\beta=\frac{\ell}{2i}|\dot x|(\eta^\beta\bar\eta_\alpha+\eta_\alpha\bar\eta^\beta) \end{equation} the operator preserves half of the supersymmetry charges \cite{Cardinali:2012ru}, {\em i.e.} it defines a $1/2$ BPS linear defect. Using this parametrization and organizing the elementary fields in $SU(3)$ representations (see eq. \eqref{su3breaking} for field redefinitions), we rewrite \begin{equation}\label{connecsu3} \begin{aligned} \mathcal L &= \left( \begin{matrix} A_3&0\\ 0&\hat A_3 \end{matrix} \right) +2\pi i\frac{\ell}{k} \left( \begin{matrix} Z\bar Z-Y_a\bar Y^a&0\\ 0&\bar Z Z-\bar Y^a Y_a \end{matrix} \right) +2\sqrt{\pi\frac{\ell}{k}} \left( \begin{matrix} 0&i\bar\psi_{(1)}\\ \psi^{(1)}&0 \end{matrix} \right)\\ &\equiv \mathcal L_A+\mathcal L_B+\mathcal{L}_F \end{aligned} \end{equation} Since we shall think of the line as the decompactification limit of the circle, we shall select the same twist-matrix of the circle, i.e., $\sigma_3$, (see \cite{Drukker:2009hy,Cardinali:2012ru}). This choice amounts to taking the trace instead of the super-trace of \eqref{WL} and it is also the prescription that leads to an operator that is dual to the 1/2 BPS string configuration in AdS$_3 \times {\mathbb {CP}}^3$ or a 1/2 BPS M2-brane configuration in M-theory \cite{Lietti:2017gtc}. 
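To see explicitly that the $\sigma_3$ twist turns the supertrace into a trace (a one-line check, in our notation): for an even supermatrix $M$ with diagonal blocks $A$ ($N_1\times N_1$) and $D$ ($N_2\times N_2$), using $\mathrm{Str}\,\mathrm{diag}(A,D)=\mathrm{Tr}\, A-\mathrm{Tr}\, D$ and $\sigma_3=\mathrm{diag}(\mathbb{1}_{N_1},-\mathbb{1}_{N_2})$,
\begin{equation}
\mathrm{Str}\left(M \sigma_3\right)=\mathrm{Tr}\, A-\mathrm{Tr}\left(-D\right)=\mathrm{Tr}\, A+\mathrm{Tr}\, D=\mathrm{Tr}\, M
\end{equation}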
With this choice, the vacuum expectation value (VEV) at tree level is given by
\begin{align}
\langle \mathcal{W} \rangle \equiv \langle \Tr[W] \rangle=N_1+N_2 \label{eq:w+}
\end{align}
\paragraph{Circular defect.} A circular defect can be easily obtained by conformally mapping the line onto the circle. This requires first shifting the line in the $(2,3)$-plane by half a unit along the $x^2$-direction, namely taking $x^\mu=(0,\frac{1}{2},s)$, in order to have a completely invertible mapping $\forall s\in\mathbb R$. Then, performing a special conformal transformation generated by the vector $b^\mu=(0,1,0)$, the line-to-circle map reads \cite{Griguolo:2012iq}
\begin{align}
&x^\mu=\bigg(0,\frac{1}{2},s\bigg)\ \mapsto\ x'^\mu=\bigg(0,\ \frac{\frac{1}{4}-s^2}{\frac{1}{4}+s^2},\ \frac{s}{\frac{1}{4}+s^2}\bigg) \equiv (0,\ \cos \tau,\ \sin \tau) \\
&\Lambda(\tau)=\frac{1}{4}+s^2=\frac{1}{4\cos^2\frac{\tau}{2}}
\end{align}
where, in the last equality, we have defined $s\equiv \frac{1}{2}\tan \frac{\tau}{2}$, with $\tau\in (-\pi,\pi)$ the proper time of the circle.
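As an explicit consistency check of this map (our verification), substituting $s=\frac{1}{2}\tan\frac{\tau}{2}$ gives
\begin{equation}
\frac{\frac{1}{4}-s^2}{\frac{1}{4}+s^2}=\frac{1-\tan^2\frac{\tau}{2}}{1+\tan^2\frac{\tau}{2}}=\cos^2\tfrac{\tau}{2}-\sin^2\tfrac{\tau}{2}=\cos\tau\,,\qquad
\frac{s}{\frac{1}{4}+s^2}=\frac{\frac{1}{2}\tan\frac{\tau}{2}}{\frac{1}{4}\sec^2\frac{\tau}{2}}=2\sin\tfrac{\tau}{2}\cos\tfrac{\tau}{2}=\sin\tau
\end{equation}
while $\Lambda(\tau)=\frac{1}{4}+s^2=\frac{1}{4}\sec^2\frac{\tau}{2}=\frac{1}{4\cos^2\frac{\tau}{2}}$, as stated.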
Scalar couplings are not affected by these transformations, while for the fermionic ones we obtain \begin{equation} \eta'^\alpha_I=\sqrt{2}\delta^1_I\left(\begin{matrix}\cos\frac{\tau}{2} & i\sin\frac{\tau}{2}\end{matrix}\right)^\alpha\qquad \bar{\eta}'^I_\alpha=i\sqrt{2}\delta_1^I\left(\begin{matrix}\cos\frac{\tau}{2} \\ -i\sin\frac{\tau}{2}\end{matrix}\right)_\alpha \label{sceltacer} \end{equation} The superconnection on the circle then reads \begin{equation} \resizebox{\hsize}{!}{\(\mathcal L=\left(\begin{matrix} \left(A_3\cos \tau\!-\!A_2\sin\tau\right)-2\pi i\frac{\ell}{k}\left(Z\bar{Z}\!-\!Y_a\bar{Y}^a\right) & -i\sqrt{2\pi\frac{\ell}{k}}\, \eta\bar{\psi} \\ -i\sqrt{2\pi\frac{\ell}{k}}\, \psi\bar{\eta} & \left(\hat{A}_3\cos\tau\!-\!\hat{A}_2\sin\tau\right)-2\pi i \frac{\ell}{k}\left(\bar{Z}Z \!-\!\bar{Y}^aY_a\right) \end{matrix}\right)\)} \label{superconnectioncircle} \end{equation} The antiperiodicity of the fermionic couplings immediately suggests that the twist matrix is $\sigma_3$ and thus $\mathcal{W}=\Tr[W]$. At tree level it evaluates to \eqref{eq:w+}. The role of the parameter \(\ell\) will be clarified in a forthcoming paper \cite{forthcoming}; in the following analysis we set \(\ell=1\). \section{Symmetries of the defect}\label{sect:symmetries} With the aim of studying the one-dimensional SCFTs defined on linear and circular defects, we first discuss in detail the symmetries that the defects inherit from the bulk theory. For simplicity, we restrict to the case of a linear defect; everything can be easily rephrased for circular Wilson loops. The ABJ(M) theory is invariant under the action of the superconformal algebra \(\mathfrak{osp}(6|4)\). The insertion of the defect breaks this symmetry as \[\mathfrak{osp}(6|4)\rightarrow \mathfrak{su}(1,1|3)\oplus \mathfrak{u}(1)_b\] The \(\mathfrak{su}(1,1|3)\) superalgebra contains as the maximal bosonic subalgebra \(\mathfrak{su}(1,1) \times \mathfrak{su}(3) \times \mathfrak{u}(1)_M\). 
Here \(\mathfrak{su}(1,1)\) is the conformal algebra in one dimension, \(\mathfrak{su}(3) \) is the R-symmetry algebra on the defect and the $\mathfrak{u}(1)_M$ abelian factor is generated by a linear combination of the generator of rotations transverse to the defect and a broken generator of the bulk R-symmetry algebra (see eq. \eqref{eq:M}). The fermionic sector is generated by twelve odd generators, six Poincar\'e supercharges \(Q^a, \bar{Q}_a\) and six superconformal generators \(S^a, \bar{S}_a\), where $a=1,2,3$ is a \(\mathfrak{su}(3) \) fundamental index. The residual \(\mathfrak{u}(1)_b\) is generated by the operator in \eqref{eq:B}. This symmetry plays the role of a flavor symmetry for the defect SCFT. For more details on the \(\mathfrak{su}(1,1|3)\) algebra and the classification of its representations, we refer to appendix \ref{sect: su(1,1|3)}. Here, we discuss the covariant realization of the $\mathfrak{su}(1,1|3)$ superconformal algebra induced by the defect. \subsection{Supersymmetry invariance}We begin by studying the behavior of the linear defect introduced in section \ref{sect:WL} under the action of the $\mathfrak{su}(1,1|3)$ supercharges $Q^a, \bar{Q}_a$, and $S^a, \bar{S}_a$, $a=1,2,3$. 
Since the Wilson operator is defined in terms of a $U(N_1|N_2)$ superconnection, SUSY variations are better described in terms of matrix supercharges, defined as \begin{equation}\label{matrixQ} Q^a \rightarrow {\mathbb Q}^a \equiv Q^a {\mathbb 1} = \left( \begin{matrix} Q^a &0\\ 0&- Q^a \end{matrix} \right) \qquad \qquad \bar{Q}_a \rightarrow \bar{\mathbb Q}_a \equiv \bar{Q}_a {\mathbb 1} = \left( \begin{matrix} \bar{Q}_a &0\\ 0&- \bar{Q}_a \end{matrix} \right) \end{equation} \begin{equation}\label{matrixS} S^a \rightarrow {\mathbb S}^a \equiv S^a {\mathbb 1} = \left( \begin{matrix} S^a &0\\ 0&- S^a \end{matrix} \right) \qquad \qquad \quad \bar{S}_a \rightarrow \bar{\mathbb S}_a \equiv \bar{S}_a {\mathbb 1} = \left( \begin{matrix} \bar{S}_a &0\\ 0&- \bar{S}_a \end{matrix} \right) \end{equation} where ${\mathbb 1} $ is the identity in the space of $U(N_1|N_2)$ supermatrices. Here, we have used identity \eqref{left}, taking into account that $Q^a, \bar{Q}_a, S^a, \bar{S}_a$ are grade-1 scalars. In the case of the Poincar\'e supercharges\footnote{Identical definitions hold for the superconformal generators.}, the SUSY variation of a generic supermatrix $T$ is defined as \begin{equation}\label{matrixtransf2} \delta_Q T = [ \theta_a {\mathbb Q}^a, T\} \; , \qquad \quad \bar\delta_Q T = [\bar\theta^a \bar{\mathbb Q}_a, T\} \end{equation} where \( \theta_a, \bar{\theta}^a\) are constant odd parameters and the graded commutators are defined in (\ref{supercomm}). 
The products explicitly read as \begin{equation}\label{matrixtransf} \theta_a {\mathbb Q}^a = \left( \begin{matrix} \theta_aQ^a &0\\ 0& \theta_aQ^a \end{matrix} \right) \qquad \qquad \bar\theta^a \bar{\mathbb Q}_a = \left( \begin{matrix} \bar\theta^a \bar{Q}_a &0\\ 0& \bar\theta^a \bar{Q}_a \end{matrix} \right) \end{equation} According to these definitions, the SUSY variation of the superconnection in \eqref{connecsu3} reads\footnote{For simplicity, here we set $\ell =1$.} \begin{equation} [ {\mathbb Q}^a, \mathcal{L}] =\left(\begin{matrix} -\frac{4\pi i }{k}\bar{\psi}_1 \bar{Y}^a & \; \; 0 \\ 2\sqrt{\frac{\pi}{k}}\left(i D_3 \bar{Y}^a+ i\frac{2\pi}{k}\left(\bar{Y}^a l_B-\hat{l}_B\bar{Y}^a\right)\right) & \; \; \frac{4\pi i}{k}\bar{Y}^a\bar{\psi}_1 \end{matrix}\right) \label{} \end{equation} \begin{equation} [\bar{\mathbb Q}_a , \mathcal{L}] =\left(\begin{matrix} -\frac{4\pi}{k} Y_a \psi^1 & \; \; \; -2\sqrt{\frac{\pi}{k}}\left(i D_3 Y_a + \frac{2\pi i}{k}\left( Y_a \hat{l}_B- l_B Y_a\right)\right) \\ 0& \; \; \; \frac{4\pi}{k}\psi^1 Y_a \end{matrix}\right) \label{} \end{equation} These identities can be rewritten as \cite{Drukker:2009hy, Bianchi:2020hsz} \begin{equation} [ {\mathbb Q}^a, \mathcal{L}] =i\partial_3 {\mathbb G}^a - \left[\mathcal{L},{\mathbb G}^a\right] \equiv i\mathfrak{D}_3 \mathbb{G}^a \qquad [\bar{\mathbb Q}_a , \mathcal{L}] =-i\partial_3\bar{{\mathbb G}}_a +\left[\mathcal{L},\bar{{\mathbb G}}_a\right] \equiv -i\mathfrak{D}_3 \bar{{\mathbb G}}_a \label{eq:QtransfL2} \end{equation} where we have defined \begin{equation}\label{Gmatrices} {\mathbb G}^a = 2 \sqrt{\frac{\pi}{k}} \begin{pmatrix} 0 & 0\\ \bar{Y}^a & 0 \end{pmatrix} \qquad \qquad \bar {\mathbb G}_a = 2\sqrt{\frac{\pi}{k}} \begin{pmatrix} 0 & Y_a \\ 0 & 0 \end{pmatrix} \end{equation} and $\mathfrak{D}_3 = \partial_3 + i [ {\mathcal L}, \cdot \}$. 
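The block structure of \eqref{eq:QtransfL2} can be checked entry by entry. For instance, since ${\mathbb G}^a$ has a single non-vanishing lower-left block, the upper-left entry of $i\mathfrak{D}_3 {\mathbb G}^a = i\partial_3 {\mathbb G}^a - [\mathcal{L}, {\mathbb G}^a]$ can only come from the fermionic entry $\mathcal{L}_{12}=2i\sqrt{\frac{\pi}{k}}\,\bar\psi_1$ of the superconnection \eqref{connecsu3} (at $\ell=1$, with $\bar\psi_{(1)}\equiv\bar\psi_1$), \begin{equation} -\left(\mathcal{L}\,{\mathbb G}^a\right)_{11}=-2i\sqrt{\frac{\pi}{k}}\,\bar\psi_1\cdot 2\sqrt{\frac{\pi}{k}}\,\bar Y^a=-\frac{4\pi i}{k}\,\bar\psi_1\bar Y^a \nonumber \end{equation} which reproduces the upper-left entry of $[{\mathbb Q}^a, \mathcal{L}]$ above, while the lower-right entry is recovered analogously from $\left({\mathbb G}^a\mathcal{L}\right)_{22}=\frac{4\pi i}{k}\,\bar Y^a\bar\psi_1$.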
As a consequence, a generic linear Wilson operator defined on a segment $[s_1,s_2]$ \begin{equation}\label{WL2} W(s_2,s_1) =\mathcal P\exp (-i\int_{s_1}^{s_2} \!\!ds \, \mathcal L(s)) \end{equation} is not invariant under the action of SUSY charges \eqref{matrixQ}, rather it transforms as \begin{align}\label{GW} & \delta W = [ \theta_a{\mathbb Q}^a, W] = G(s_2)W(s_2,s_1)-W(s_2,s_1)G(s_1) \nonumber \\ & \bar{\delta} W = [ \bar{\theta}^a\bar{\mathbb Q}_a, W] = \bar G(s_2)W(s_2,s_1)-W(s_2,s_1)\bar G(s_1) \end{align} where $G \equiv \theta_a {\mathbb G}^a$, $\bar G \equiv -\bar\theta^a \bar {\mathbb G}_a$. However, if we define the {\em covariant} SUSY charges \begin{equation}\label{covsusy} {\mathcal Q}^a \equiv {\mathbb Q}^a - {\mathbb G}^a = \begin{pmatrix} Q^a & 0 \\ -2 \sqrt{\frac{\pi}{k}} \, \bar{Y}^a & - Q^a \\ \end{pmatrix} \quad \quad \bar {\mathcal Q}_a \equiv \bar{\mathbb Q}_a + \bar {\mathbb G}_a = \begin{pmatrix} \bar{Q}_a & 2 \sqrt{\frac{\pi}{k}} \, Y_a \\ 0 & - \bar{Q}_a \\ \end{pmatrix} \end{equation} from \eqref{eq:QtransfL2} we obtain \begin{equation} \label{Ltransf} [ {\mathcal Q}^a, \mathcal{L} ] = i \partial_3 {\mathbb G}^a \qquad \qquad [\bar {\mathcal Q}_a ,\mathcal{L}] = -i \partial_3 \bar {\mathbb G}_a \end{equation} or equivalently \begin{equation} \delta_{\mathcal Q} \mathcal{L} \equiv [\theta_a {\mathcal Q}^a, \mathcal{L}] = i \partial_3 G \; , \qquad \quad \bar\delta_{\mathcal Q} \mathcal{L} \equiv [\bar\theta^a {\mathcal Q}_a, \mathcal{L}] = i \partial_3 \bar G \end{equation} Now, if, in addition, we define the non-local covariant variation \begin{equation} \delta_{12}^{\mathcal Q} \equiv \delta - G(s_2) (\cdot) + (\cdot)G(s_1) \end{equation} from identities \eqref{GW} it follows that the invariance of the Wilson line under covariant transformations reads \begin{equation}\label{WQinvariance} \delta_{12}^{\mathcal Q} W(s_2, s_1) = 0 \end{equation} Similar results are obtained by applying the superconformal $S^a, \bar S_a$ charges to the 
superconnection. Defining the variations $\delta^{\text s} \equiv \lambda_a {\mathbb S}^a$ and $\bar\delta^{\text s} \equiv \bar\lambda^a \bar{\mathbb S}_a$, we obtain that under superconformal transformations the superconnection transforms as \begin{equation} [ {\mathbb S}^a, \mathcal{L}] =i\partial_3 \left(s{\mathbb G}^a\right)-\left[\mathcal{L},s{\mathbb G}^a\right] \equiv i\mathfrak{D}_3 \left(s{\mathbb G}^a\right) \qquad [\bar{\mathbb S}_a , \mathcal{L}] =-i\partial_3\left(s\bar{{\mathbb G}}_a\right) +\left[\mathcal{L},s\bar{{\mathbb G}}_a\right] \equiv -i\mathfrak{D}_3 \left(s\bar{{\mathbb G}}_a\right) \label{eq:QtransfL} \end{equation} with ${\mathbb G}^a$ and $\bar {\mathbb G}_a$ given in \eqref{Gmatrices}. Now, defining covariant superconformal charges as \begin{equation}\label{covsuperconf} {\mathcal S}^a \equiv {\mathbb S}^a - s{\mathbb G}^a = \begin{pmatrix} S^a & 0 \\ -2s \sqrt{\frac{\pi}{k}} \bar{Y}^a & - S^a \\ \end{pmatrix} \quad \quad \bar {\mathcal S}_a \equiv \bar{\mathbb S}_a + s\bar {\mathbb G}_a = \begin{pmatrix} \bar{S}_a & 2s \sqrt{\frac{\pi}{k}} Y_a \\ 0 & - \bar{S}_a \\ \end{pmatrix} \end{equation} and covariant variations, $\delta_{{\mathcal S}} \equiv \lambda_a {\mathcal S}^a = \delta^{\text s} - sG$ and $\bar\delta_{{\mathcal S}}\equiv \bar\lambda^a \bar {\mathcal S}_a = \bar\delta^{\text s} - s\bar G$, a reasoning similar to the one which led to eq. \eqref{WQinvariance} allows us to conclude that the defect is invariant under the following {\em covariant} superconformal transformations \begin{equation}\label{WSinvariance} \delta_{12}^{{\mathcal S}} W(s_2, s_1) = 0 \; , \quad {\rm with} \qquad \delta_{12}^{{\mathcal S}} \equiv \delta^{\text s} - s_2 G(s_2) (\cdot) + (\cdot)\, s_1 G(s_1) \end{equation} \subsection{The covariant $\mathfrak{su}(1,1|3)$ superconformal algebra}\label{sec:cov_alg} From the previous analysis it follows that the correct supersymmetry and superconformal charges which leave the defect invariant are the covariant supercharges \eqref{covsusy} and \eqref{covsuperconf}. 
In order to construct the whole one-dimensional superconformal algebra, we start by evaluating their anticommutators. As we are going to show, the main novelty is that these anticommutators close on a covariantized version of the $\mathfrak{su}(1,1|3)$ superalgebra, where the differential representation of the spacetime bosonic operators is given in terms of supercovariant derivatives \eqref{eq:QtransfL2} taken along the defect. To prove this statement, we start by acting with the covariantized anticommutator $\acomm*{\mathcal{Q}^b}{\bar{\mathcal{Q}}_c}$ on supermatrix local operators defined on the Wilson line, for instance ${\mathbb G}^a$ or $\bar{{\mathbb G}}_a$ in \eqref{Gmatrices}. We obtain \begin{equation} \acomm*{\mathcal{Q}^b}{\bar{\mathcal{Q}}_c} {\mathbb G}^a =-\delta^b_c\left(\partial_3 {\mathbb G}^a+i\comm*{\mathcal{L}}{{\mathbb G}^a}\right) = - \delta^b_c \, \mathfrak{D}_3 {\mathbb G}^a\,, \end{equation} and comparing it with the first identity in \eqref{anticomm} we find that $\mathcal{P} = -\mathfrak{D}_3$. Proceeding in an analogous way, we derive all the other (anti)commutators, and comparing them with the general structure of the $\mathfrak{su}(1,1|3)$ algebra given in appendix \ref{sect: su(1,1|3)}, we find the explicit realization of all the other generators. Details of the derivation and further examples are reported in appendix \ref{app:closure}. Here we simply list the final result: the spacetime generators are given by \begin{equation}\label{covariant_gen} {\mathcal P}=-\mathfrak{D}_3 \; , \qquad {\mathcal D}=-s\mathfrak{D}_3+\Delta\; , \qquad {\mathcal K}=-s^2\mathfrak{D}_3+2s\Delta \end{equation} where $\Delta$ is the scaling dimension\footnote{When this generator acts on supermatrix operators, it has to be thought of as the supermatrix $ \begin{pmatrix} \Delta & 0 \\ 0 & \Delta \end{pmatrix} $.}. 
It is easy to prove that they satisfy the correct $\mathfrak{sl}(2)$ algebraic relations \begin{equation}\label{sl2algebra} [{\mathcal D}, {\mathcal P}]={\mathcal P}\qquad [{\mathcal D},{\mathcal K}]=-{\mathcal K}\qquad [{\mathcal P},{\mathcal K}]=-2{\mathcal D} \end{equation} in agreement with \eqref{su1,1}. Therefore, $\{{\mathcal P}, {\mathcal K}, {\mathcal D} \}$ correctly realize the covariant conformal algebra on the defect. These generators, together with the covariant supercharges \eqref{covsusy}, \eqref{covsuperconf}, the R-symmetry generators and the residual $\mathfrak{u}(1)_M$ symmetry generator \eqref{eq:M} suitably promoted to supermatrices, provide a representation of the $\mathfrak{su}(1,1|3)$ superalgebra on the space of supermatrices. The covariantization of the generators is required in order to make the superconformal algebra compatible with the gauge invariance on the defect generated by its superconnection. The net effect of the covariantization can be thought of as a modification of the supersymmetry generators obtained by adding a gauge transformation, in analogy with the ``gauge-restoring'' gauge transformations that modify SUSY transformations in order to preserve the Wess-Zumino gauge. In section \ref{sec:dSCFT} we are going to use the covariantized supercharges to characterize the supersymmetry properties of the defect SCFT living on the fermionic Wilson line. \section{The defect SCFT} \label{sec:dSCFT} We now study the defect superconformal field theory (dSCFT) generated by local operators ${\mathcal O}$ defined as even/odd $U(N_1|N_2)$ supermatrices\footnote{The same analysis can be straightforwardly performed for the dual theory.} localized on the Wilson line and belonging to a given representation of the covariant superconformal algebra $\mathfrak{su}(1,1|3)$. They can be easily constructed by promoting ABJ(M) fields localized on the defect to supermatrices. 
One example is the ${\mathbb G}^a$ (or the $\bar{{\mathbb G}}_a$) supermatrix \eqref{Gmatrices} entering the covariant realization of the supercharges. Supermatrix local operators are the natural objects on which the action of the supermatrix generators introduced in the previous section is well defined. Local operators on the defect organize themselves into superconformal multiplets of the $\mathfrak{su}(1,1|3)$ algebra. These are generated by the repeated action of ${\mathcal Q}^a, \bar{\mathcal Q}_a$ supercharges on the superconformal primary (SCP), the lowest dimensional operator appearing in the multiplet. A supermultiplet can be decomposed into a finite sum of conformal multiplets, which are generated by the repeated application of the covariant momentum generator ${\mathcal P}$ to superconformal descendants annihilated by ${\mathcal K}$. Supermultiplet components are labeled by quantum numbers $[\Delta, m , j_1,j_2]$, where $\Delta$ is the conformal weight, $m$ the $\mathfrak{u}(1)$ charge associated with the $M$ generator, and $(j_1, j_2)$ are the eigenvalues corresponding to two $\mathfrak{su}(3)$ Cartan generators \cite{Bianchi:2017ozk, Bianchi:2020hsz, Gorini:2020new}. For details we refer to appendix \ref{sect: su(1,1|3)}. \vskip 5pt The physical observables of the dSCFT are correlation functions of supermatrix local operators, defined as \begin{align}\label{eq:def_cor} \frac{ \langle \Tr W(+ \infty,s_n) {\mathcal O}(s_n) W(s_n, s_{n-1}) {\mathcal O} (s_{n-1}) \cdots W(s_2, s_1) {\mathcal O}(s_1) W(s_1,-\infty)\rangle}{\langle W(+\infty, - \infty) \rangle} \end{align} where the insertion of ``Wilson segments'' $W(s_j, s_{j-1})$ ensures manifest gauge invariance. This definition can be easily understood as the expectation value on the (suitably normalized) dressed vacuum $| 0 \rangle \! 
\rangle \equiv W(0,-\infty)|0\rangle$ (with $| 0 \rangle \!\rangle^\dagger \equiv \langle 0|W(+\infty,0)$) of local operators translated along the line by the covariant translation generator ${\mathcal P}= -\mathfrak{D}_3$. In fact, using the explicit expression $\mathfrak{D}_3 = \partial_3 + i [ {\mathcal L} , \cdot \}$, one can easily check that \begin{equation}\label{eq:covtransl} {\mathcal {O}}(0) \; \to \; e^{-s{\mathcal P}} {\mathcal {O}}(0) e^{s{\mathcal P}} = W(0,s) {\mathcal O}(s) W(s,0) \equiv \tilde{\mathcal {O}}(s) \end{equation} where ${\mathcal O}(s)$ is the original operator evaluated at point $s$, whereas we have dubbed $\tilde{\mathcal O}(s)$ the covariantly translated operator. Correlator \eqref{eq:def_cor} can then be rewritten as \begin{equation}\label{eq:covcorr} \langle \! \langle \, {\rm Tr} \, \tilde{\mathcal {O}}(s_n) \tilde{\mathcal O} (s_{n-1}) \cdots \tilde{\mathcal O}(s_1) \, \rangle \! \rangle \end{equation} This suggests a systematic way of constructing local operators of the dSCFT. The defect inherits from the bulk theory local operators localized at the origin. Gauge covariance then requires translating these operators to a point $s$ by acting with the covariantized momentum generator. As a result, the operators get dressed with Wilson segments as in \eqref{eq:covtransl}. It is interesting to investigate how the rest of the covariantized generators act on the defect correlators. To begin with, we consider the action of the covariantized SUSY supercharges defined in \eqref{covsusy}. Taking for simplicity a one-point function and assuming that $\mathbb{G}^a\to 0$ for $s\to\pm\infty$, we find that its covariantized SUSY variation acts as follows\footnote{We use the shorthand notation $W_{s_1,s_2} \equiv W(s_1,s_2)$, in particular $W_{+s} \equiv W(+\infty,s)$, $W_{s-} \equiv W(s,-\infty)$. We also neglect $\Tr$. 
\label{foot:notation}} \begin{equation}\label{Delta} \begin{aligned} & \langle\!\langle \delta_{\mathcal Q} \mathcal \mathcal{\tilde{O}}(s) \rangle\!\rangle \equiv \langle W_{+s}[\delta_{\mathcal Q},\mathcal{O}(s)]W_{s-}\rangle =\langle W_{+s} \big([\delta_Q,\mathcal{O}(s)] - [G(s),\mathcal{O}(s)] \big) W_{s-} \rangle\\ &=\langle W_{+s}(\delta_Q\mathcal{O}(s)) W_{s-} \rangle-\langle W_{+s} G(s) \mathcal{O}(s) W_{s-}\rangle+\langle W_{+s} \mathcal{O}(s) G(s) W_{s-} \rangle\\ &= \langle\delta_Q(W_{+s} \mathcal{O}(s) W_{s-}) \rangle = 0 \end{aligned} \end{equation} where $\delta_Q$ is the ordinary SUSY variation defined in \eqref{matrixtransf2}, and we have used that the ordinary vacuum is killed by $\delta_Q$. This result implies that the dressed vacuum defining $\langle\! \langle \cdot \rangle\! \rangle$ on the l.h.s. of \eqref{Delta} is instead killed by the covariant supercharges ${\mathcal Q}^a$. Correlators \eqref{eq:covcorr} are therefore invariant under the action of covariantized supersymmetry generators, while ordinary SUSY $\delta$-variations would not leave them invariant. This is consistent with the observation that if two supercharges were to close on an ordinary translation generated by $\partial_{3}$, gauge invariance on the defect would be broken. We need to act with supercharges that close on $\mathfrak{D}_3$ to maintain gauge invariance. One can recursively check that the same property holds for any higher-point correlator. Similarly, one can check that correlators \eqref{eq:covcorr} are invariant under the action of the covariantized $\mathcal S$ generators defined in \eqref{covsuperconf}. Therefore, we conclude that the covariantized algebra built above is the correct realization of the $\mathfrak{su}(1,1|3)$ superalgebra on the defect and definition \eqref{eq:covcorr} of correlation functions is consistent with it. 
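The identification of ${\mathcal P}=-\mathfrak{D}_3$ as the generator of translations in \eqref{eq:covtransl} can also be checked infinitesimally. From the path-ordered exponential \eqref{WL2} one has $\partial_s W(0,s)=i\,W(0,s)\,\mathcal{L}(s)$ and $\partial_s W(s,0)=-i\,\mathcal{L}(s)\,W(s,0)$, so that \begin{equation} \partial_s \tilde{\mathcal{O}}(s) = W(0,s)\Big(\partial_3 {\mathcal{O}}(s) + i\,[\mathcal{L}(s), {\mathcal{O}}(s)\}\Big)W(s,0) = W(0,s)\,\mathfrak{D}_3{\mathcal{O}}(s)\,W(s,0) \nonumber \end{equation} i.e. the derivative along the line of the covariantly translated operator is the dressed covariant derivative of the original one, as appropriate for a translation generated by ${\mathcal P}=-\mathfrak{D}_3$.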
\vskip 5pt In order to make the previous discussion more concrete and open the way to evaluating correlators explicitly, we now proceed to the construction of some elementary $\mathfrak{su}(1,1|3)$ supermultiplets of the dSCFT on the Wilson line. For a systematic classification of unitary representations of the $\mathfrak{su}(1,1|3)$ algebra on the rigid line we refer to \cite{Gorini:2020new} (see also appendix \ref{sect: su(1,1|3)} for a brief review). This classification can be easily adapted to the case of the dSCFT without relevant modifications. A major difference arises, instead, in the actual realization of the multiplet components in terms of ABJ(M) elementary fields. This is due to the structural difference between the algebra generators defined on the rigid line and on the Wilson line. As a relevant example, in the next section we construct a new long multiplet living on the Wilson line, which has no analogue either in ABJ(M) or on the rigid line. We also review the construction of the displacement multiplet using the present approach of covariant supercharges realized as supermatrices. \subsection{The lowest dimensional supermultiplet}\label{sect:T} We observe that the covariantized $\mathfrak{su}(1,1|3)$ generators are not just differential operators as in the ordinary case. Rather, they acquire a non-trivial dependence on local fields from the covariantizing terms. This implies that when we look for superconformal primaries generating supermultiplets, we should also enlarge the spectrum to include constant operators. The action of the covariant SUSY charges on constant operators may lead to non-trivial local descendants, originating from multiplication by the covariantizing term. Here, we construct an example of such a multiplet. 
Constant operators can be easily constructed as linear combinations of the ${\mathcal I}, {\mathcal T}$ operators, the natural basis of even $U(N_1|N_2)$ supermatrices, given by\footnote{The particular normalization is chosen for later convenience.} \begin{equation} \quad {\mathcal I} = \begin{pmatrix} \mathds{1}_{N_1} & 0 \\ 0 & \mathds{1}_{N_2} \end{pmatrix} \; , \qquad \qquad {\mathcal T} = \frac12 \begin{pmatrix} -\mathds{1}_{N_1} & 0 \\ 0 & \mathds{1}_{N_2} \end{pmatrix} \end{equation} Now, while covariant SUSY charges \eqref{covsusy} act on ${\mathcal I}$ trivially, this is no longer the case for ${\mathcal T}$. We obtain \begin{equation}\label{eq:Yop} \left[ {\mathcal Q}^a(s) , {\mathcal T} \right] = {\mathbb G}^a(s) \; , \quad \quad \left[ \bar{\mathcal Q}_a(s) , {\mathcal T} \right] = \bar{\mathbb G}_a(s) \qquad a = 1,2,3 \end{equation} where ${\mathbb G}^a(s)$ and $\bar{\mathbb G}_a(s)$ are the supermatrix operators that covariantize the SUSY charges (see eq. \eqref{Gmatrices}). In addition, the constant operator ${\mathcal T}$ satisfies \begin{equation} \left[ {\mathcal S}^a(s) , {\mathcal T} \right] = s \, {\mathbb G}^a \qquad \left[ \bar{\mathcal S}_a(s) , {\mathcal T} \right] = s \, \bar{\mathbb G}_a \end{equation} Therefore, at the origin ($s=0$) it is a superconformal primary (SCP), with quantum numbers $[0,0,0,0]$. Since ${\mathcal T}$ is not annihilated by any ${\mathcal Q}, \bar{\mathcal Q}$ supercharge, it is not protected and, as we show in the next section, it acquires an anomalous dimension. The repeated application of ${\mathcal Q}^a, \bar{\mathcal Q}_a$ generates a whole $\mathfrak{su}(1,1|3)$ long multiplet that we now construct explicitly. We organize the resulting descendant operators in terms of conformal primaries with a specific $M$-charge and R-symmetry representation. This implies that in the derivation of the descendants, we can neglect $\mathcal{P}$-exact terms. 
In other words, we set $\mathcal{P}=-\mathfrak{D}_3=0$ and treat all the Poincar\'e supercharges as anticommuting. The R-symmetry representation of the descendants is determined by considering that $\mathcal{T}$ is an R-symmetry singlet. In contrast, in our conventions, $\bar{\mathcal Q}_a$ belongs to the fundamental representation $\mathbf{3}$ of the $SU(3)$ R-symmetry group, and ${\mathcal Q}^a$ belongs to the antifundamental $\bar{\mathbf{3}}$. Due to the nature of $\mathcal{T}$, some representations are forbidden by the SUSY algebra, particularly those that would correspond to the application of symmetric configurations of supercharges. We organize the results in terms of the \emph{level} of a conformal primary. It is defined as the number of supercharges acting on the SCP. In the present case, taking into account that $\Delta({\mathcal T}) = 0$ and $\Delta({\mathcal Q}^a) = \Delta(\bar{\mathcal Q}_a)=1/2$ (see table \ref{table1}), a conformal primary at level $p$ has dimension $p/2$. \paragraph{Level 1.} Referring to figure \ref{fig:supmultT}, at the first level we find $\bar{\mathbb G}_a$ and ${\mathbb G}^a$. They are superconformal descendants belonging to the fundamental and antifundamental representation of the $SU(3)$ R-symmetry group, respectively. Level 1 operators have quantum numbers $\Delta = 1/2$ and $m({\mathbb G}^a) = 1/2, m(\bar{\mathbb G}_a) = -1/2$. \begin{figure} \centering \includegraphics{document.pdf} \caption{Diamonds corresponding to the \({\mathcal{T}}\) supermultiplet. Arrows pointing towards the left (right) mean the application of one $\bar{\mathcal Q}_a$ (${\mathcal Q}^a$). The left diagram shows how the various components have been named in the main text. The right diagram provides their decomposition in terms of $SU(3)$ irreducible representations.} \label{fig:supmultT} \end{figure} \paragraph{Level 2.} This is obtained by acting with supercharges \eqref{covsusy} on $\bar{\mathbb G}_a$ and ${\mathbb G}^a$. 
According to the representation decomposition \begin{equation} \mathbf{3} \otimes \mathbf{3} =\mathbf{\Bar{3}} \oplus \mathbf{6} \nonumber \end{equation} we expect that acting with ${\mathcal Q}^a$ on ${\mathbb G}^b$, we obtain an $SU(3)$ antifundamental representation and a symmetric tensor one (similarly for its complex conjugate). However, taking into account the SUSY transformations given in appendix \ref{sect: su(1,1|3)}, we explicitly find \begin{equation}\label{eq:QG} \{ \mathcal Q^a , {\mathbb G}^b \} = 2 \sqrt{\frac{\pi}{k}} \epsilon^{abc} \begin{pmatrix} 0 & 0\\ \chi_c^2 & 0 \end{pmatrix} \equiv \epsilon^{abc}\, \bar{\mathbb{H}}_c \qquad \quad \{ \bar{\mathcal Q}_a , \bar{\mathbb G}_b \} = - 2i \sqrt{\frac{\pi}{k}}\epsilon_{abc} \begin{pmatrix} 0 & \bar\chi_2^c\\ 0 & 0 \end{pmatrix} \equiv -i\epsilon_{abc}\, \mathbb{H}^c \end{equation} Only one operator in the (anti)fundamental representation appears while the one in the $\mathbf{6}$ is missing. The reason can be traced back to the fact that due to the anticommuting nature of the supercharges, it is impossible to realize a symmetric tensor in $(a,b)$ by applying ${\mathcal Q}^a, {\mathcal Q}^b$ to ${\mathcal T}$. In other words, a $\mathbf{6}$ symmetric tensor structure with $\Delta =m=1$ cannot be obtained from ABJ(M) elementary fields. Similarly, according to the decomposition \[ \mathbf{3} \otimes \mathbf{\Bar{3}} = \mathbf{1} \oplus \mathbf{8} \] applying ${\mathcal Q}^a$ to $\bar{\mathbb G}_b$ we should find an $SU(3)$ singlet and an adjoint. 
In fact, using SUSY transformations in appendix \ref{sect: su(1,1|3)}, we obtain \begin{align}\label{eq:QY} \{\mathcal Q^a , \bar{\mathbb G}_b \} & = 2 \sqrt{\frac{\pi}{k}}\delta_b^a \begin{pmatrix} 0 & \bar{\psi}_1\\ 0 & 0 \end{pmatrix} - \frac{4\pi}{k} \begin{pmatrix} Y_b \bar Y^a & 0\\ 0 & \bar Y^a Y_b \end{pmatrix} \equiv \delta_b^a \, \mathbb{K} - \mathbb{R}\indices{_b^a} \\ \{ \bar{\mathcal Q}_a , {\mathbb G}^b \} & = 2 \sqrt{\frac{\pi}{k}}\delta_a^b \begin{pmatrix} 0 & 0\\ i \psi^1 & 0 \end{pmatrix} + \frac{4\pi}{k} \begin{pmatrix} Y_a \bar Y^b & 0\\ 0 & \bar Y^b Y_a \end{pmatrix} \equiv \delta_a^b \, \bar{\mathbb{K}} + {\mathbb{R}}\indices{_a^b} \label{eq:barQbarY} \end{align} where we have defined \begin{equation} \mathbb{K}\!=2 \sqrt{\frac{\pi}{k}}\!\begin{pmatrix} -\frac{2}{3}\sqrt{\frac{\pi}{k}}Y_c\bar{Y}^c & \bar{\psi}_1 \\ 0 & -\frac{2}{3}\sqrt{\frac{\pi}{k}}\bar{Y}^cY_c \end{pmatrix}\; , \qquad \bar{\mathbb{K}}\!=2 \sqrt{\frac{\pi}{k}}\! \begin{pmatrix} \frac{2}{3}\sqrt{\frac{\pi}{k}}Y_c\bar{Y}^c & 0 \\ i\psi^1 & \frac{2}{3}\sqrt{\frac{\pi}{k}}\bar{Y}^cY_c \end{pmatrix} \nonumber \end{equation} \begin{equation} {\mathbb{R}}\indices{_a^b}\!=\! \frac{4\pi}{k}\begin{pmatrix} Y_a\bar{Y}^b\!-\!\frac{1}{3} \delta_a^b \, Y_c\bar{Y}^c & 0 \\ 0 & \bar{Y}^bY_a\!-\!\frac{1}{3}\delta_a^b \, \bar{Y}^cY_c \end{pmatrix} \end{equation} Despite the apparent existence of two singlets, one can easily check that \begin{equation}\label{eq:Bdescendant} {\mathbb K} + \bar{\mathbb K} = [{\mathcal P}, {\mathcal T}] \end{equation} is a ${\mathcal T}$ descendant and can be removed from the spectrum. Therefore, at this level we have only one singlet $({\mathbb K} - \bar{\mathbb K})$ plus the adjoint operator ${\mathbb{R}}\indices{_a^b}$ and the two ${\mathbb H}^c, \bar{\mathbb H}_c$ (anti)fundamentals. The adjoint operator would not be present in the non-interacting case (\(k\to \infty\)). The covariantization then acts by turning on states that are absent on the rigid line. 
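Relation \eqref{eq:Bdescendant} can also be verified directly. Since ${\mathcal T}$ is constant, $[{\mathcal P},{\mathcal T}]=-\mathfrak{D}_3{\mathcal T}=-i[\mathcal{L},{\mathcal T}]$, and only the off-diagonal (fermionic) entries of the superconnection \eqref{connecsu3} survive the commutator with ${\mathcal T}$: \begin{equation} -i[\mathcal{L},{\mathcal T}]=\begin{pmatrix} 0 & -i\mathcal{L}_{12} \\ i\mathcal{L}_{21} & 0 \end{pmatrix} =2\sqrt{\frac{\pi}{k}}\begin{pmatrix} 0 & \bar\psi_1 \\ i\psi^1 & 0 \end{pmatrix}={\mathbb K}+\bar{\mathbb K} \nonumber \end{equation} where we used $\mathcal{L}_{12}=2i\sqrt{\frac{\pi}{k}}\,\bar\psi_1$ and $\mathcal{L}_{21}=2\sqrt{\frac{\pi}{k}}\,\psi^1$ (at $\ell=1$), and the diagonal scalar bilinears cancel in the sum ${\mathbb K}+\bar{\mathbb K}$.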
According to the classification of $\mathfrak{su}(1,1|3)$ representations summarized in appendix \ref{sect: su(1,1|3)}, it turns out that all the operators have $\Delta = 1$, whereas $m({\mathbb H}^c) = 1$, $m(\bar{\mathbb H}_c) = -1$ and $m(\mathbb{K}-\bar{\mathbb{K}}) = m(\mathbb{R}\indices{_a^b})=0$. \paragraph{Level 3.} Using the previous arguments, acting with \({\mathcal{Q}}^a\) on \(\bar{\mathbb{H}}_b\) (or \(\bar{\mathcal{Q}}_a\) on \({\mathbb{H}}^b\)) we expect to produce one singlet and one adjoint, with quantum numbers $\Delta= -m = \tfrac32$. The \(\mathbf{8}\) quantum numbers are incompatible with the gauge structure and the anticommuting nature of the supercharges. Thus this state is trivially zero. Instead, for the singlets we obtain \begin{equation} \begin{aligned} &[\bar{\mathcal{Q}}_a,{\mathbb{H}}^{b}]=2\sqrt{\frac{\pi}{k}}\delta_a^{\;\;b}\begin{pmatrix} 0 & \bar{D} Z \\ 0 & 0 \end{pmatrix} \equiv \delta_a^{\;\;b}\;\bar{\mathbb{V}} \\ &\left[\mathcal{Q}^a,\bar{\mathbb{H}}_b\right]=-2i\sqrt{\frac{\pi}{k}}\delta^a_{\;\;b}\begin{pmatrix} 0 & 0 \\ D \bar{Z} & 0 \end{pmatrix} \equiv -i\delta^{a}_{\;\;b} {\mathbb{V}} \end{aligned} \end{equation} These operators appear at the two edges of the diamond in fig. \ref{fig:supmultT}. In order to obtain the ${\mathbb X}$ operator in the middle, we can either act with \(\bar{\mathcal{Q}}\) on the representation \(\mathbf{1} \oplus \mathbf{8}\) (operators \({\mathbb K}-\bar{\mathbb K}, {\mathbb R}\indices{_a^b}\)), or with $\mathcal{Q}$ on the representation \(\mathbf{\Bar{3}}\) (the \({\mathbb H}^a\) operator). 
Compatibility between the two decompositions \begin{equation} \mathbf{3}\otimes\left(\mathbf{1} \oplus \mathbf{8}\right)=\mathbf{3} \oplus \mathbf{3} \oplus \mathbf{\Bar{6}} \oplus \mathbf{15} \quad \qquad \quad \mathbf{\Bar{3}}\otimes \mathbf{\Bar{3}}=\mathbf{3} \oplus \mathbf{\Bar{6}} \end{equation} implies that the additional operator in the $\mathbf{15}$ is trivially zero\footnote{We stress that the two fundamentals in the first decomposition, the one coming from \(\mathbf{3}\otimes\mathbf{1}\) and the one from \(\mathbf{3}\otimes \mathbf{8}\), are the same operator. We can write \begin{align} \bar{Q}_a{\mathbb{R}_b}^c\Big|_{\mathbf{1}}\propto\delta_a^c \bar{Q}_k{\mathbb{R}_b}^k-\delta^c_b \bar{Q}_k{\mathbb{R}_a}^k \nonumber \end{align} and easily observe that $\bar{Q}_k{\mathbb{R}_a}^k\sim \bar{Q}_k Q^k \bar{Q}_a\mathcal{T}$, which up to descendants is the same as $\bar{Q}_a\left(\mathbb{K}-\bar{\mathbb{K}}\right)$.}. In conclusion, applying the explicit SUSY variations to the fields, at level 3 we find one extra fundamental operator ${\mathbb N}^a$ with $\Delta = \tfrac32$ and $m= -\tfrac12$, plus a $\bar{\mathbf{6}}$ operator ${\mathbb X}^{ac}_b $, with the same quantum numbers, given explicitly by \begin{align} &\mathbb{N}^a= 2\sqrt{\frac{\pi}{k}}\begin{pmatrix} \frac{4}{3}\sqrt{\frac{\pi}{k}}\left(\bar\psi_1\bar Y^a\!-\!2\epsilon^{abc}Y_b\chi_c^2\right) \; & 0\\ \; -D_3\bar Y^a\!-\!\frac{2\pi}{3k}\left[\bar Y^a(3Z\bar Z\!+\!Y_k\bar Y^k)\!-\!(3\bar Z Z\!+\!\bar Y^k Y_k)\bar Y^a\right] & -\frac{4}{3}\sqrt{\frac{\pi}{k}}\left(\bar\psi_1\bar Y^a-2\epsilon^{abc}Y_b\chi_c^2\right) \end{pmatrix} \\ & {\mathbb X}^{ac}_b = 2\sqrt{\frac{\pi}{k}}\begin{pmatrix} 2\sqrt{\frac{\pi}{k}}\epsilon^{cak}\left(Y_k\chi_b^2+Y_b\chi_k^2\right)&0\\ \frac{2\pi}{k}\left[\bar Y^c Y_b \bar Y^a\!+\!\frac{1}{2}\delta_b^c\left(\bar Y^aY_k\bar Y^k\!-\!\bar Y^kY_k\bar Y^a\right)\!-\!(c\leftrightarrow a)\right] \; & \; \; 
2\sqrt{\frac{\pi}{k}}\epsilon^{ack}\left(\chi_b^2Y_k\!+\!\chi_k^2Y_b\right) \end{pmatrix} \end{align} Similarly, the $\mathcal{Q}$ action on \(\mathbf{1} \oplus \mathbf{8}\) yields two operators $\bar{\mathbb{N}}_a$, $\bar{\mathbb{X}}^a_{\;\;bc}$ transforming respectively in the $\bar{\mathbf{3}}$ and $\mathbf{6}$. Their field realization reads \begin{align} &\bar{\mathbb{N}}_a=2\sqrt{\frac{\pi}{k}}\begin{pmatrix} \frac{4i}{3}\sqrt{\frac{\pi}{k}}\left(Y_a\psi^1\!-\!2\epsilon_{abc}\bar\chi^c_2\bar Y^b\right) \; & \; D_3 Y_a\!+\!\frac{2\pi}{3k}\left[Y_a(3\bar Z Z\!+\!Y_k\bar Y^k)\!-\!(3 Z \bar Z\!+\! Y_k \bar Y^k)Y_a\right] \\ 0 & -\frac{4i}{3}\sqrt{\frac{\pi}{k}}\left(\psi^1Y_a-2\epsilon_{abc}\bar Y^b\bar\chi^c_2\right) \end{pmatrix}\\ &\bar{\mathbb{X}}_{ac}^b=-2\sqrt{\frac{\pi}{k}}\begin{pmatrix} -2\sqrt{\frac{\pi}{k}}\epsilon_{cak}\left(\bar\chi^k_2\bar Y^b+\bar\chi_2^b\bar Y^k\right)& \frac{2\pi}{k}\left[ Y_c\bar Y^bY_a\!+\!\frac{1}{2}\delta_a^b\left(Y_c\bar Y^kY_k\!-\!Y_k\bar Y^kY_c\right)\!-\!(a\leftrightarrow c)\right] \; \; \\ 0 & \; 2\sqrt{\frac{\pi}{k}}\epsilon_{cak}\left(\bar Y^b\bar\chi^k_2\!+\!\bar Y^k\bar\chi_2^b\right) \end{pmatrix} \end{align} \paragraph{Higher levels.} Starting from level 4, the explicit realization of the operators in terms of elementary fields becomes quite cumbersome and not very instructive. Therefore, here we simply discuss how the various structures emerge and refer to table \ref{table:Tmultiplet} for a summary of the multiplet components and their quantum numbers. Level 4 is obtained by acting on $\mathcal{T}$ either with three $\mathcal{Q}^a$ ($\bar{\mathcal{Q}}_a$) and one $\bar{\mathcal{Q}}_a$ ($\mathcal{Q}^a$) or with two $\mathcal{Q}^a$ and two $\bar{\mathcal{Q}}_a$.
In the former case, the SUSY algebra fixes the only possible state to be of the form $$ \mathbb{W}^a\sim \epsilon^{klm}\bar{\mathcal{Q}}_k\bar{\mathcal{Q}}_l\bar{\mathcal{Q}}_m \mathcal{Q}^a \mathcal{T}\,, \qquad \bar{\mathbb{W}}_a \sim \epsilon_{klm}\mathcal{Q}^k\mathcal{Q}^l\mathcal{Q}^m\bar{\mathcal{Q}}_a \mathcal{T} $$ up to descendants. For the remaining combination of supercharges, the only non-vanishing state comes from $\epsilon_{akl}\epsilon^{bcd}\mathcal{Q}^k\mathcal{Q}^l \bar{\mathcal{Q}}_c\bar{\mathcal{Q}}_d\mathcal{T}$. It can be easily decomposed into $\mathbf{1}\oplus\mathbf{8}$, giving rise to a singlet ${\mathbb F}$ and an adjoint operator $\mathbb{E}_a^{\;\;b}$. Similarly, at level 5 the states are of the form $$ \bar{\mathbb{U}}^a \sim \epsilon^{abc}\epsilon_{klm}\mathcal{Q}^k\mathcal{Q}^l\mathcal{Q}^m\bar{\mathcal{Q}}_b\bar{\mathcal{Q}}_c \mathcal{T}\,, \qquad \mathbb{U}_a\sim \epsilon_{abc}\epsilon^{klm}\bar{\mathcal{Q}}_k\bar{\mathcal{Q}}_l\bar{\mathcal{Q}}_m \mathcal{Q}^b\mathcal{Q}^c \mathcal{T} $$ Finally, the singlet at level 6 comes from the only non-vanishing contraction of the supercharges, namely the one with two epsilon tensors. \begin{table}[h!]
\begin{center} \begin{tabular}{|c|c|c|} \hline Level & irrep & op name \\ \hline\hline \multirow{1}{5em}{0} & $[\mathbf{1}]_0^0$ & ${\mathcal{T}}$ \\ \hline \multirow{2}{5em}{1} & $[\mathbf{3}]_{1/2}^{-1/2}$ & $\mathbb{G}_a$\\ & $[\mathbf{\bar{3}}]_{1/2}^{1/2}$ & $\Bar{\mathbb{G}}^a$\\ \hline \multirow{4}{5em}{2} & $[\mathbf{\bar{3}}]_1^{-1}$ & $\bar{\mathbb{H}}^a$\\ & $[\mathbf{1}]_1^0$ & $\mathbb{K}$\\ & $[\mathbf{8}]_1^0$ & $\mathbb{R}_a^{\;\;b}$\\ & $[\mathbf{3}]_1^1$ & $\mathbb{H}_a$\\ \hline \multirow{4}{5em}{3} & $[\mathbf{1}]_{\frac{3}{2}}^{-\frac{3}{2}}$ & $\mathbb{V}$\\ & $[\mathbf{3}]_{3/2}^{-1/2}$ & $\mathbb{X}_a$\\ & $[\mathbf{\Bar{6}}]_{3/2}^{-1/2}$ & $\Bar{\mathbb{Y}}_{ab}$\\ & $[\mathbf{\bar{3}}]_{3/2}^{1/2}$ & $\bar{\mathbb{X}}^a$\\ & $[\mathbf{6}]_{3/2}^{1/2}$ & $\mathbb{Y}_{ab}$\\ & $[\mathbf{1}]_{3/2}^{3/2}$ & $\bar{\mathbb{V}}$\\ \hline \multirow{4}{5em}{4} & $[\mathbf{\bar{3}}]_2^{-1}$ & $\bar{\mathbb{W}}^a$ \\ & $[\mathbf{1}]_2^0$ & $\mathbb{F}$ \\ & $[\mathbf{8}]_2^0$ & $\mathbb{E}_a^{\,\;b}$\\ & $[\mathbf{3}]_2^1$ & $\mathbb{W}_a$\\ \hline \multirow{2}{5em}{5} & $[\mathbf{3}]_{5/2}^{-1/2}$ & $\mathbb{U}_a$\\ & $[\mathbf{\bar{3}}]_{5/2}^{1/2}$ & $\bar{\mathbb{U}}^a$ \\ \hline \multirow{1}{5em}{6} & $[\mathbf{1}]_3^0$ & $\mathbb{P}$ \\ \hline\hline \end{tabular} \end{center} \caption{The list of operators in the ${\mathcal T}$ supermultiplet with their quantum numbers. We use the notation $[\mathbf{A}]_\Delta^m$, where $\mathbf{A}$ is the irrep of SU(3), $\Delta$ the scaling dimension, and $m$ the eigenvalue of the $U(1)$ generator M.} \label{table:Tmultiplet} \end{table} \vskip 10pt We close this section with a couple of further observations. First, we note that the SCP ${\mathcal T}$, though trivially constant, is not covariantly constant. 
Acting with the covariant momentum ${\mathcal P}$ according to prescription \eqref{eq:covtransl}, we find that under translation along the line, it gets mapped to \begin{equation}\label{eq:Ttilde} {\mathcal T} \rightarrow \tilde{\mathcal T}(s) = W(0,s) {\mathcal T} \, W(s,0) \end{equation} However, since the covariant supercharges commute with the covariant momentum ${\mathcal P}$, identities \eqref{eq:Yop} remain true also for the covariantly translated operators. Using \eqref{eq:covtransl}, they take the form \begin{equation}\label{eq:easycov} \tilde{{\mathbb G}}^a(s) = \left[ {\mathcal Q}^a(s) , \tilde{\mathcal T}(s) \right] \qquad \qquad \tilde{\bar{\mathbb G}}_a(s) = \left[ \bar{\mathcal Q}_a(s) , \tilde{\mathcal T}(s) \right] \end{equation} The further application of covariant supercharges works similarly and leads to constructing the whole supermultiplet at point $s$. We note that, as a consequence of \eqref{eq:Ttilde}, away from the origin the action of the superconformal charges is no longer trivial: $[ {\mathcal S}^a(s) , \tilde{\mathcal T}(s) ]$ is ${\mathcal Q}^a(s)$-exact and $[ \bar{\mathcal S}_a(s), \tilde{\mathcal T}(s) ]$ is $\bar{\mathcal Q}_a(s)$-exact. The second observation arises from comparing ABJ(M) operators localized on the rigid line and those defined on a Wilson line. There is, in fact, a highly non-trivial difference in the nature of the operators they give rise to in the two cases. Let us consider, for instance, the ABJ(M) elementary scalars $Y_a, \bar{Y}^a$, $a=1,2,3$. When localized on the rigid line, they give rise to $1/2$-BPS operators, killed by three of the six Poincar\'e supercharges preserved by the line\footnote{Rigorously speaking, these are not well-defined operators on the line, as they are not gauge invariant. One should rather consider combinations of the form $\Tr (Y_a\bar{Y}^a)$ as the building blocks of the local sector on the line.
However, since gauge invariance does not play any role in the present discussion, we prefer to simplify the presentation by looking directly at $Y_a$.}. As such, they turn out to be the SCP of short multiplets \cite{Gorini:2020new}. For example, in the notations of appendix \ref{sect: su(1,1|3)}, $Y_1$ generates the $\mathcal{B}^{\frac{1}{3},\frac{1}{6}}_{-\frac{1}{2},1,0}$ multiplet. Their scaling dimension is protected against quantum corrections \cite{Gorini:2020new}. Instead, when $Y_a, \bar{Y}^a$ are localized on the Wilson line and promoted to supermatrices, they give rise to the $\bar{\mathbb G}_a$ and ${\mathbb G}^a$ operators, which are killed by only one covariant Poincar\'e supercharge. As discussed above, they are no longer SCPs. Rather, they are the level 1 descendants of ${\mathcal T}$. Moreover, they belong to a long multiplet. Thus, they are expected to develop an anomalous dimension at the quantum level. We will return to this point in section \ref{sect:perturbative}, where we compute their defect two-point function perturbatively. Here we provide a simple algebraic argument that explains why these operators are no longer protected on the Wilson defect. We consider the $\bar{\mathbb G}_1$ operator at the origin and act on it with a particular combination of covariant generators \begin{equation}\label{eq:comb} [-({\mathcal D}+{\mathcal M}) + {{\mathcal R}_1}^1+2{{\mathcal R}_2}^2, \bar{\mathbb G}_1 ] \equiv [ \acomm*{\bar{\mathcal Q}_1-2{\mathcal Q}^2}{{\mathcal S}^1+\bar{\mathcal S}_2}, \bar{\mathbb G}_1] \end{equation} The l.h.s. of this expression gives $-(\Delta - 1/2) \bar{\mathbb G}_1$, whereas evaluating the r.h.s. we obtain $[\bar{\mathcal S}_2, \{ {\mathcal Q}^2 , \bar{\mathbb G}_1 \}]$, which is non-vanishing, as can be easily checked using the SUSY transformations of appendix \ref{susytransfs}. Therefore, identity \eqref{eq:comb} leads to the conclusion that $\Delta(\bar{\mathbb G}_1) \neq 1/2$, i.e. the operator acquires a non-trivial anomalous dimension.
The same argument holds for $\bar{\mathbb G}_2, \bar{\mathbb G}_3$ by suitably changing the linear combination of generators in \eqref{eq:comb}. We note that this result is a direct consequence of the fact that $\bar{\mathbb G}_1$ is annihilated by at most one supercharge. In particular, it is not killed by ${\mathcal Q}^2$. On the rigid line, where instead $ \left[{\mathcal Q}^2 , Y_1 \right] = 0$, the same argument implies that the operator is protected. \subsection{The displacement supermultiplet} The displacement supermultiplet is the $su(1,1|3)$ multiplet containing as its top component the displacement operator, the operator that measures the breaking of translation invariance in the directions orthogonal to the Wilson line. The supermultiplet components have been worked out in \cite{Bianchi:2020hsz} by applying covariant SUSY transformations to the SCP, which in terms of the ABJ(M) elementary fields is given by\footnote{We focus on the $U(N_1|N_2)$ defect theory. A similar construction holds for its dual too.} \begin{equation}\label{eq:Z} {\mathbb{Z}} =2\sqrt{\frac{\pi}{k}} \begin{pmatrix} 0 & Z \\ 0 & 0 \end{pmatrix} \qquad \qquad \bar{\mathbb{Z}}=2\sqrt{\frac{\pi}{k}}\begin{pmatrix} 0 & 0\\ \bar{Z} & 0 \end{pmatrix} \end{equation} where the normalization factor has been chosen for later convenience\footnote{Our definition of the SCP differs from the one in \cite{Bianchi:2020hsz} by the absence of an overall constant spinor. In fact, with our conventions on supermatrices (see appendix A), operator \eqref{eq:Z} has an automatically spinorial (odd) nature.}. These operators have quantum numbers $\Delta = 1/2$ and $m = \pm 3/2$, respectively, and are both R-symmetry singlets. Here, we quickly re-derive the whole supermultiplet by applying the supermatrix version of the SUSY charges introduced in the previous sections. This helps us check the consistency of our covariant generators and, at the same time, fix notations.
Contrary to what happens with the ${\mathbb G}^a, \bar{\mathbb G}_a$ triplets, the singlet operators maintain the same nature whether they are defined on the rigid line or on the Wilson line. In fact, on the 1/2-BPS line the $Z, \bar{Z}$ operators are annihilated by all the $\bar{Q}_a$ and all the $Q^a$, respectively, and therefore they generate the $\mathcal{B}^{0,\frac12}_{\frac32, 0,0}$ and $\mathcal{B}^{\frac12,0}_{-\frac32, 0,0}$ short multiplets \cite{Bianchi:2017ozk, Bianchi:2020hsz}. Studying the action of the covariant supercharges \eqref{covsusy} on the ${\mathbb{Z}}, \bar{\mathbb{Z}}$ operators, it is easy to realize that the same property survives on the Wilson line, that is $\{ \bar{\mathcal Q}_a , \mathbb{Z} \}= \{ \mathcal Q^a , \bar{\mathbb{Z}} \} = 0, \, a=1,2,3$. In this case, covariantization only affects the action of the non-annihilating supercharges. It follows that operators \eqref{eq:Z} are still the superprimaries of the short multiplets $\mathcal{B}^{0,\frac12}_{\frac 32, 0,0}$ and $\mathcal{B}^{\frac12,0}_{-\frac 32, 0,0}$. Consequently, they are expected to be protected from acquiring anomalous dimensions at the quantum level. In section \ref{sect:perturbative} we will give a perturbative confirmation of this expectation. We now construct the whole supermultiplet by acting with the supermatrix covariantized charges. For simplicity, we focus on the supermultiplet generated by ${\mathbb Z}$, but a similar procedure can be easily implemented on $\bar{\mathbb Z}$.
At level 1 we find \begin{equation} \mathbb{O}^a \equiv \{ \mathcal{Q}^a,\mathbb{Z}\}=-2\sqrt{\frac{\pi}{k}}\left(\begin{matrix} 2\sqrt{\frac{\pi}{k}}Z\bar{Y}^a & \bar{\chi}^a_1 \\ 0 & 2\sqrt{\frac{\pi}{k}}\bar{Y}^a Z \end{matrix}\right) \label{eq:operOfromvariation} \end{equation} Acting once more with one \(\mathcal{Q}^a\), at level 2 we obtain \begin{equation} [ \mathcal{Q}^a , \mathbb{O}^b ] =\epsilon^{abc}\mathbb{\Lambda}_c \end{equation} with \begin{equation} \mathbb{\Lambda}_c=2\sqrt{\frac{\pi}{k}}\left(\begin{matrix} 2\sqrt{\frac{\pi}{k}}\left(\epsilon_{cde}\bar{\chi}_1^d \, \bar{Y}^e + Z\chi_c^2\right) & i D Y_c \\ \frac{4\pi}{k}\, \epsilon_{cde}\bar{Y}^d \, Z \bar{Y}^e & 2\sqrt{\frac{\pi}{k}}\left(\epsilon_{cde}\bar{Y}^d\bar{\chi}_1^e-\chi_c^2 Z\right) \end{matrix}\right) \label{} \end{equation} Finally, at level 3 we write \begin{equation} \mathbb{D} \equiv \frac{1}{3!}\epsilon_{abc}\{\mathcal{Q}^a, [ \mathcal{Q}^b, \{\mathcal{Q}^c ,\mathbb{Z}\}]\} =\frac{1}{3}\{\mathcal{Q}^a , \mathbb{\Lambda}_a\} \end{equation} and the displacement operator is explicitly given by \begin{equation} \resizebox{\hsize}{!}{\(\mathbb{D}=i\left(\begin{matrix} \frac{4\pi}{k}\left(ZD\bar{Z}-DY_a\bar{Y}^a+i\bar{\chi}_1^a\chi_a^2\right) & 2\sqrt{\frac{\pi}{k}} D \bar{\psi}_1 \\ 8i\left(\frac{\pi}{k}\right)^{\frac{3}{2}}\left(\bar{Y}^a Z\chi_a^2-\chi_a^2Z\bar{Y}^a+\epsilon_{abc}\bar{Y}^a\bar{\chi}_1^b\bar{Y}^c\right) &\frac{4\pi}{k}\left(D\bar{Z}Z-\bar{Y}^aDY_a-i\chi_a^2\bar{\chi}_1^a\right) \end{matrix}\right)\)} \label{eq:fallingdispl} \end{equation} with the covariant derivative $D$ defined in \eqref{eq:derivatives}. The quantum numbers of these operators are reported in figure \ref{fig:displsuperm}. 
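As a quick cross-check of these quantum numbers (this bookkeeping is ours, but it follows directly from the algebra), recall that each covariant supercharge $\mathcal{Q}^a$ carries $\Delta=\tfrac12$, $m=\tfrac12$ and transforms in the $\bar{\mathbf{3}}$ of $SU(3)$. Acting repeatedly on $\mathbb{Z}$ then gives
\begin{equation*}
[\mathbf{1}]_{1/2}^{3/2} \;\xrightarrow{\;\mathcal{Q}^a\;}\; [\bar{\mathbf{3}}]_{1}^{2} \;\xrightarrow{\;\mathcal{Q}^b\;}\; [\mathbf{3}]_{3/2}^{5/2} \;\xrightarrow{\;\mathcal{Q}^c\;}\; [\mathbf{1}]_{2}^{3}
\end{equation*}
where at the second step $\bar{\mathbf{3}}\otimes\bar{\mathbf{3}}=\mathbf{3}\oplus\bar{\mathbf{6}}$ and the epsilon tensor projects onto the $\mathbf{3}$, while at the last step only the singlet inside $\bar{\mathbf{3}}\otimes\mathbf{3}=\mathbf{1}\oplus\mathbf{8}$ survives, in agreement with the assignments of figure \ref{fig:displsuperm}.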
\begin{figure} \centering \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a) {\(\mathbb{Z} \; [\mathbf{1}]_{1/2}^{3/2}\)}; \vertex[below left= of a] (b) {\(\mathbb{O}^a \; [\bar{\mathbf{3}}]_{1}^{2}\)}; \vertex[below left=of b] (c) {\(\mathbb{\Lambda}_a \; [\mathbf{3}]_{3/2}^{5/2}\)}; \vertex[below left=of c] (d) {\(\mathbb{D} \; [\mathbf{1}]_{2}^{3} \)}; \draw[->] (a) -- (b); \draw[->] (b) -- (c); \draw[->] (c) -- (d); \end{feynman} \end{tikzpicture}\label{fig:displa}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a) {\(\bar{\mathbb{Z}} \; [\mathbf{1}]_{1/2}^{-3/2}\)}; \vertex[below right= of a] (b) {\(\bar{\mathbb{O}}_a \; [\mathbf{3}]_{1}^{-2}\)}; \vertex[below right=of b] (c) {\(\bar{\mathbb{\Lambda}}^a \; [\bar{\mathbf{3}}]_{3/2}^{-5/2}\)}; \vertex[below right=of c] (d) {\(\bar{\mathbb{D}} \; [\mathbf{1}]_{2}^{-3} \)}; \draw[->] (a) -- (b); \draw[->] (b) -- (c); \draw[->] (c) -- (d); \end{feynman} \end{tikzpicture}\label{fig:displb}} \caption{The displacement supermultiplet and its hermitian conjugate.} \label{fig:displsuperm} \end{figure} The barred operators (see fig. \ref{fig:displb}) can be obtained in a similar way acting multiple times with \(\Bar{\mathcal{Q}}_a\) on the superprimary \(\bar{\mathbb{Z}}\). \subsection{Symmetry breaking and defect deformations} One way to generate insertions of local primary operators on the defect is by acting with bulk symmetry generators broken by the defect's presence. This can be easily understood by observing that if we vary the Wilson line with respect to a broken symmetry, at first order in the deformation parameter, we bring down a new local operator $\delta {\cal L}$ according to \begin{equation}\label{eq:wavyline} \frac{\langle (\delta W) \cdots \rangle}{\langle W \rangle} = -i \int ds \, \langle \! \langle \delta {\cal L}(s) \cdots \rangle \! 
\rangle \end{equation} For a generic variation $\delta {\cal L}\equiv [\epsilon U, {\cal L} ]$, where $U$ is any of the broken generators, this identity can be more formally expressed as \begin{equation} [U, W] = \int ds \, {\cal C}(s) \, W \end{equation} where ${\cal C}(s) \equiv [U, -i{\cal L}(s) ]$ is the primary operator inserted on the defect. Many structural theorems follow from this set of identities, together with the algebra of (anti)commutators, which constrain the organization of these operators inside $\mathfrak{su}(1,1|3)$ supermultiplets \cite{Agmon:2020pde}. In particular, the conformal primary operators in the displacement supermultiplet reviewed above are associated with the action of the bulk superconformal generators broken by the Wilson line \cite{Cooke:2017qgm,Bianchi:2017ozk}. To be concrete, eq. \eqref{eq:wavyline} for the broken translations $P_i$, with $i=1,2$, \begin{equation} [P_i, W] = \int ds \, {\mathbb{D}_i}(s) \, W \end{equation} provides an explicit definition for the displacement operator. The corresponding multiplet also includes the operators ${\mathbb \Lambda}_a, \bar{\mathbb \Lambda}^a$ associated with the broken half of the supersymmetries, as well as the ${\mathbb O}^a, \bar{\mathbb O}_a$ operators from the action of the broken $SU(4)/SU(3)$ R-symmetry generators. As a consistency check of our construction, below we verify that the action of transverse translations reproduces precisely the displacement operator in \eqref{eq:fallingdispl} constructed by acting with the covariant generators. Moreover, using the wavy-line formalism, we study the action of the would-be broken $\mathfrak{u}(1)_B$ symmetry and explain why the Wilson line does not actually break it. As a byproduct, we give an alternative motivation to consider $\mathcal{T}$ as a genuine defect operator.
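A quick tally (this counting is ours, refining the coset above by separating off the preserved $U(1)$ inside $SU(4)$) shows that these components match the broken bulk generators one to one:
\begin{equation*}
\underbrace{P_1,\,P_2}_{2}\;\longrightarrow\;\mathbb{D},\,\bar{\mathbb{D}}\,,
\qquad
\underbrace{SU(4)\to SU(3)\times U(1)}_{15-8-1\,=\,6}\;\longrightarrow\;\mathbb{O}^a,\,\bar{\mathbb{O}}_a\,,
\qquad
\underbrace{12\to 6\ \text{supercharges}}_{6\ \text{broken}}\;\longrightarrow\;\mathbb{\Lambda}_a,\,\bar{\mathbb{\Lambda}}^a
\end{equation*}
namely two broken transverse translations, six broken R-symmetry directions and six broken Poincar\'e supercharges, accounting for the $2+6+6$ components at levels $1$, $2$ and $3$ of the two conjugate multiplets in figure \ref{fig:displsuperm}.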
\subsubsection{The wavy-line} Deforming a generic contour as \(x^\mu(s) \to x^\mu(s) +\delta x^\mu(s)\), the variation of the corresponding fermionic Wilson loop at first order in $\delta x^\mu$ leads to the insertion of the displacement operator, whose explicit expression is given by \cite{Bianchi:2017ozk,Cooke:2017qgm} \begin{equation} \delta {\mathcal L}|_{transl} \equiv \tilde{\mathbb{D}}=\delta x^\mu \left(-i\dot{x}^\nu \mathbb{F}_{\mu\nu}+|\dot{x}| \mathcal{D}_{\mu}\mathbb{O}\right)+\frac{\dot{x}\cdot \delta \dot{x} }{|\dot{x}|}\mathbb{O} \label{eq:wavydisplacement} \end{equation} Here we have defined \begin{equation} \mathbb{F}_{\mu\nu}=\left(\begin{matrix} F_{\mu\nu} & 0 \\ 0 & \hat{F}_{\mu\nu} \end{matrix}\right)=\partial_\mu \mathcal{A}_\nu-\partial_\nu \mathcal{A}_\mu+i[\mathcal{A}_\mu,\mathcal{A}_\nu],\qquad\mathcal{A}_\mu = \frac{1}{\sqrt{k}} \left(\begin{matrix} A_\mu & 0 \\ 0 & \hat{A}_\mu \end{matrix}\right) \label{opfinal} \end{equation} and \begin{equation} \mathbb{O}=\left(\begin{matrix} -\frac{2\pi}{k}\tensor{M}{^I_J}C_I\bar{C}^J & \sqrt{\frac{2\pi}{k}}\eta_I\bar{\psi}^I \\ -\sqrt{\frac{2\pi}{k}}\psi_I\bar{\eta}^I & -\frac{2\pi}{k}\tensor{M}{_J^I}\bar{C}^J C_I \end{matrix}\right) \, , \qquad {\rm with} \quad \mathcal{D}_{\mu}\mathbb{O}=\partial_\mu\mathbb{O}+i\left[\mathcal{A}_\mu,\mathbb{O}\right] \label{eq:supercovder} \end{equation} Specializing to the line \(x^\mu(s)=(0,0,s)\), we choose a deformation \(\delta x^\mu(s)=(\epsilon^1(s),\epsilon^2(s),0)\) orthogonal to the defect. 
The general expression for the displacement then reduces to \begin{equation} \tilde{\mathbb{D}}_{line}=\epsilon^k\left(-i\dot{x}^3\mathbb{F}_{k3}+\mathcal{D}_k\mathbb{O}_{line}\right) \equiv \epsilon^k \, \mathbb{D}_k \qquad k=1,2 \label{Dline} \end{equation} and the operator in \eqref{eq:supercovder} reads \begin{equation} \mathbb{O}_{line} =\left(\begin{matrix} \frac{2\pi}{k}\left(Z \bar{Z}-Y_a \bar{Y}^a \right) & 2\sqrt{\frac{\pi}{k}}\bar{\psi}_1 \\ -2i\sqrt{\frac{\pi}{k}}\psi^1 & \frac{2\pi}{k}\left(\bar{Z}Z-\bar{Y}^a Y_a\right) \end{matrix}\right) \label{eq:Ol} \end{equation} In particular, if we now consider the complex combination corresponding to the choice \(\epsilon^k=(1,-i)\) of the deformation parameters\footnote{The hermitian conjugate \(\bar{\mathbb{D}} \equiv \mathbb{D}_1 + i \mathbb{D}_2\) can be obtained by taking the conjugate deformation parameters, i.e. \(\epsilon^k=(1,i)\).} \begin{equation} \mathbb{D} \equiv \mathbb{D}_1 - i \mathbb{D}_2 =-\left(\mathbb{F}_{23}+i\mathbb{F}_{13}\right)+ D\mathbb{O}_{line} \label{eq:displquas} \end{equation} with the $D$ derivative defined in \eqref{eq:derivatives}, and use the equations of motion for the gauge fields and the fermion \(\psi^1\), we can easily prove that this operator coincides with the top component \eqref{eq:fallingdispl} of the displacement multiplet, up to the total covariant derivative \begin{equation} -2i\sqrt{\frac{\pi}{k}} \, \mathfrak{D}_3 \left(\begin{matrix} 0 & 0 \\ \psi^2 & 0 \end{matrix}\right) \label{eq:finaltildeD} \end{equation} This is the expected result. In fact, since the correlator on the r.h.s. of \eqref{eq:wavyline} is integrated along the contour, the operator insertion is always defined up to a total covariant derivative along the defect \cite{Bianchi:2020hsz}. Assuming that the correlators decay quickly enough at infinity, it is not hard to show that \begin{equation}\label{eq:nullvar} \int ds \, \langle \! \langle\mathfrak{D}_3 {\cal O}(s)\cdots \rangle \!
\rangle= \int ds\; \partial_s \langle \! \langle {\cal O}(s)\cdots \rangle \! \rangle=0 \end{equation} where the dots indicate possible insertions of local operators away from $s$. In the present framework, the identification between the operator insertion generated by the ``wavy line'' and the operator in \eqref{eq:fallingdispl} has an even simpler explanation: their difference \eqref{eq:finaltildeD} is a conformal descendant, but the supermultiplet construction of the previous section is blind to descendants. In conclusion, this derivation represents a non-trivial consistency check of the covariant superalgebra constructed in section \ref{sec:cov_alg} and its representations studied in this section. \subsubsection{The $\mathfrak{u}(1)_B$ variation} We now consider the action of the would-be broken generator $B=M_{12}+2i {J_1}^1$ of \eqref{eq:B}. It generates the abelian factor $\mathfrak{u}(1)_B$. Since it is a linear combination of the transverse rotations and one broken R-symmetry generator orthogonal to the preserved $\mathfrak{u}(1)_M$ generator \eqref{eq:M}, one would naively expect it to be broken by the Wilson line. When $\delta_B$ is applied to the Wilson line, the associated $\delta_B{\mathcal L}$ is non-vanishing due to a non-trivial transformation of the fermions \begin{equation}\label{eq:rotation} \delta_B \psi^{(1)} = -i \psi^{(1)}\; , \qquad \delta_B \bar\psi_{(1)} = i\bar\psi_{(1)} \end{equation} According to identity \eqref{eq:wavyline}, this leads to the insertion of the defect operator \begin{equation}\label{eq:Bop} \mathbb{B} = -2 \sqrt{\frac{\pi}{k}} \begin{pmatrix} 0 & \bar{\psi}_{(1)} \\ i\psi^{(1)} & 0 \end{pmatrix} \end{equation} However, it is easy to realize that $\mathbb{B} = -(\mathbb{K} + \bar{\mathbb{K}})$, where $\mathbb{K} + \bar{\mathbb{K}}$ is the descendant \eqref{eq:Bdescendant} appearing at level 2 of the ${\mathcal T}$ supermultiplet. Since it is a total covariant derivative, because of \eqref{eq:nullvar}, its contribution to the r.h.s.
of \eqref{eq:wavyline} vanishes and we eventually obtain that $\delta_B W = 0$. It follows that $B$ is preserved, even in the presence of the Wilson line. This proof that the Wilson line preserves the $U(1)_B$ symmetry provides an alternative to the argument of \cite{Agmon:2020pde}, based on the fact that the non-trivial rotation \eqref{eq:rotation} of the fermions can always be compensated by a gauge transformation. The relation between the two arguments is that $\cal{T}$ is precisely the generator of the gauge transformation of \cite{Agmon:2020pde}. As noticed in \cite{Billo:2016cpy, Agmon:2020pde}, if the transverse rotations are preserved, their action on the defect yields a descendant operator. What is relevant here is that the primary underlying this descendant is precisely $\cal{T}$. This fact provides further evidence that $\cal{T}$ is a building block of the dCFT on the Wilson line. \subsection{The cohomological equivalence revised} The constant operator ${\mathcal T}$ turns out to play an interesting role also in connection with the cohomological equivalence between the bosonic 1/6 BPS and the fermionic 1/2 BPS Wilson lines, discovered in \cite{Drukker:2009hy}. In fact, using the covariant supercharges, it is easy to check that the difference between the fermionic and the bosonic superconnections corresponding to line operators along direction 3 can be written as \begin{equation} \mathcal{L}_{1/2}-\mathcal{L}_{1/6}=\{ \mathcal{Q}^2+\bar{\mathcal{Q}}_2 , \Lambda \} \qquad\text{where}\qquad \Lambda= 2i\sqrt{\frac{\pi}{k}} \begin{pmatrix} 0 & Y_2 \\ -\bar{Y}^2 & 0 \end{pmatrix} = i \left( \bar{\mathbb G}_2 - {\mathbb G}^2 \right) \label{eq:cohom1} \end{equation} with ${\mathbb G}^2, \bar{\mathbb G}_2$ defined in \eqref{Gmatrices}.
Therefore, the \(\Lambda\) operator is a combination of \(\mathcal{T}\) descendants, precisely \begin{equation} \Lambda=i[\bar{\mathcal{Q}}_2-\mathcal{Q}^2 , \mathcal{T}] \end{equation} Inserting this expression in \eqref{eq:cohom1} gives \begin{equation} \mathcal{L}_{1/2}-\mathcal{L}_{1/6}= 2i\left\{\mathcal{Q}^2,[\bar{\mathcal{Q}}_2,\mathcal{T}] \right\} + i\mathfrak{D}_3 {\mathcal T} = 2i\left\{\mathcal{Q}^2,[\bar{\mathcal{Q}}_2,\mathcal{T}] \right\} + i {\mathbb B} \end{equation} where ${\mathbb B}$ is the operator defined in \eqref{eq:Bop}. \section{Ward Identities and perturbative analysis}\label{sect:perturbative} This section discusses the perturbative evaluation of two-point correlation functions of local operators inserted on the Wilson line. Perturbation theory is organized in terms of the couplings $N_1/k, N_2/k$. There is no need to take any planar limit, so the calculations are reliable for any finite $N_1, N_2$, as long as $N_{1,2} \ll k$ holds. At a given order in $1/k$, the contributing Feynman diagrams arise from all possible contractions among the local operators, the powers of the superconnection ${\mathcal L}$ coming from the expansion of the $W$'s, and the action vertices. To begin with, we discuss a set of Ward identities that relate correlation functions of local operators belonging to the same supermultiplet. We specialize these identities to the ${\mathcal T}$ supermultiplet, obtaining useful input for computing its anomalous dimension perturbatively. We look at its one- and two-point functions, discovering a non-trivial mixing with the identity operator, which occurs already at the tree level. Moving to loop order, we first discuss a general prescription for the IR regularization of the infinite line, compatible with its conformal mapping onto the circle. We then apply this prescription to the evaluation of the two-point functions appearing in \eqref{eq:an_dim}, thus finding the anomalous dimension of \(\mathcal{T}\) at one loop.
As a by-product of the $\langle \! \langle {\mathbb G}^a \bar{\mathbb G}_b \rangle \! \rangle$ calculation, we easily obtain the two-point correlator of the displacement superprimary \(\mathbb{Z}\). We discuss the technical mechanism which ensures the protection of \(\mathbb{Z}\), while the protection of ${\mathbb G}^a$ is lost. Finally, as a consistency check, we recover the result for the Bremsstrahlung function from the \(\mathbb{Z}\) correlator up to two loops. \subsection{Ward Identities} The link between primaries and descendants driven by the SUSY charges preserved by the Wilson line leads to super-Ward identities that correlators on the defect must satisfy. This is a well-known fact in any SCFT, but what makes the Ward identities special on the defect is that the covariant supercharges used to build up multiplets carry a non-trivial dependence on the $1/k$ coupling (see eq. \eqref{covsusy}). Therefore, they are responsible for a mixing between loop orders, thus leading to Ward identities peculiar to the dSCFT, as we will now describe. In order to find the general structure of the Ward identities, we consider a primary operator $P^{(n)}$ at level $n$ of a given multiplet. We can take the ${\mathcal T}$ multiplet of figure \ref{fig:supmultT} as a reference example. $P^{(n)}$ can be a single primary or a mixture of primaries, if at level $n$ there is more than one primary with the same $\mathfrak{u}(1)_M$ charge. It may carry $SU(3)$ indices, but we neglect them for simplicity. Now, given the two descendants \begin{equation}\label{eq:descendants} D^{(n+1) \; a} = [ {\mathcal Q}^a, P^{(n)} \} \qquad , \qquad \bar{D}^{(n+1)}_a = [ \bar{\mathcal Q}_a, \bar{P}^{(n)} \} \end{equation} we consider the two-point function $\langle \! \langle D^{(n+1) \; a}(s) \bar{D}^{(n+1)}_a(0) \rangle \! \rangle$.
Expressing the operators as in \eqref{eq:descendants} and using the covariant algebra of section \ref{sec:cov_alg} we obtain the following set of Ward identities \begin{equation}\label{eq:WI} \langle \! \langle D^{(n+1) \; a}(s) \bar{D}^{(n+1)}_a(0) \rangle \! \rangle = - 3 \, \partial_s \langle \! \langle P^{(n)}(s) \bar{P}^{(n)}(0) \rangle \! \rangle - \langle \! \langle D^{(n+2) \; a}_a (s) \bar{P}^{(n)}(0) \rangle \! \rangle \end{equation} where the descendant at level $(n+2)$ is defined as $D^{(n+2) \; a}_b = [ Q^a, \bar{D}^{(n+1)}_b \}$. Useful information can be obtained from identity \eqref{eq:WI} when the last correlator on the r.h.s. is identically vanishing\footnote{In perturbation theory it would be enough for the correlator to vanish up to the order one is interested in.}. In this case, if we write \begin{equation} \langle \! \langle P^{(n)}(s) \bar{P}^{(n)}(0) \rangle \! \rangle = \frac{C_P}{s^{2\Delta_P + 2 \gamma_P }} \qquad , \qquad \langle \! \langle D^{(n+1) \; a}(s) \bar{D}^{(n+1)}_a(0) \rangle \! \rangle = \frac{C_D}{s^{2\Delta_P + 1 + 2 \gamma_P}} \end{equation} the Ward identity reduces to \begin{equation}\label{eq:WI2} C_D = 6(\Delta_P + \gamma_P) \, C_P \end{equation} where $\Delta_P$ is the scaling dimension of $P^{(n)}$ and $\gamma_P$ is the corresponding anomalous dimension. Here we have already considered that the descendant has the same anomalous dimension, as follows from the covariant algebra, particularly because the supercharges have a protected dimension $1/2$. This identity relates the anomalous dimension of the primary to the coefficient of the correlator of the descendant. 
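Explicitly, once the last correlator in \eqref{eq:WI} is dropped, the identity amounts to a simple differentiation of the power law:
\begin{equation*}
-3\,\partial_s \frac{C_P}{s^{2\Delta_P+2\gamma_P}} \;=\; \frac{6(\Delta_P+\gamma_P)\,C_P}{s^{2\Delta_P+1+2\gamma_P}} \;=\; \frac{C_D}{s^{2\Delta_P+1+2\gamma_P}}
\end{equation*}
and matching the coefficients immediately reproduces \eqref{eq:WI2}.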
Expressing these quantities perturbatively as series in $1/k$, \begin{equation} C_P(k) = \sum_{r=0}^{\infty} \frac{c_r}{k^r} \, , \qquad C_D(k) = \sum_{r=0}^{\infty} \frac{d_r}{k^r} \, , \qquad \gamma_P(k)= \sum_{r=1}^{\infty} \frac{\gamma_r}{k^r} \end{equation} at the first few orders, we read \begin{align}\label{eq:WI3} & {\rm Order} \; k^0 \; \; : \quad d_0 = 6 \Delta_P c_0 \nonumber \\ & {\rm Order} \; k^{-1}: \quad \gamma_1 = \frac{d_1}{6c_0} - \Delta_P \frac{c_1}{c_0} \nonumber \\ & {\rm Order} \; k^{-2}: \quad \gamma_2 = \frac{d_2}{6c_0} - \gamma_1 \frac{c_1}{c_0} - \Delta_P \frac{c_2}{c_0} \end{align} These relations further simplify when applied to $P^{(n=0)} \equiv {\mathcal T}$, the lowest dimensional superprimary on the defect with $\Delta_{\mathcal T} = 0$, introduced in section \ref{sect:T}. In this case the descendants are $D^{(1) \, a} = {\mathbb G}^a$, $\bar{D}^{(1)}_a = \bar{\mathbb G}_a$ and $D^{(2) \, a}_b = \delta_b^a {\mathbb K} - {\mathbb R}_b^a$ (see figure \ref{fig:supmultT}). It is easy to see that up to one loop (order $1/k$) one has $\langle \! \langle D^{(2) \, a}_b(s) {\mathcal T}(0) \rangle \! \rangle =0$. Therefore, the Ward identity reduces to (\ref{eq:WI2}, \ref{eq:WI3}) where we set $\Delta_P=0$. In particular, from the first identity in \eqref{eq:WI3}, we read \begin{equation} \langle \! \langle {\mathbb G}^a(s) \bar{\mathbb G}_b(0) \rangle \! \rangle^{(0)} =0 \end{equation} which is consistent with the fact that each operator is already of order $1/\sqrt{k}$. Moreover, the second identity in \eqref{eq:WI3} leads to \begin{equation}\label{eq:an_dim} \gamma({\mathcal T})|_{1L} = \frac16 \frac{{\rm coeff }[\langle \! \langle {\mathbb G}^a(s) \bar{\mathbb G}_a(0) \rangle \! \rangle^{(1)}]}{\langle \! \langle {\mathcal T}(s) \bar{\mathcal T}(0) \rangle \! \rangle^{(0)}} \end{equation} where the numerator means taking the overall coefficient of the two-point function at order $1/k$. 
We note that since the ${\mathbb G}, \bar{\mathbb G}$ operators are already of order $1/\sqrt{k}$, this means taking the overall coefficient of their two-point function at the tree level. Therefore, the tree-level correlator of the descendant measures the anomalous dimension of its superprimary. We will exploit this identity in the next subsection to infer the anomalous dimension of ${\mathcal T}$. \subsection{The constant operator at weak coupling} Considering the constant operator ${\mathcal T}$, it is easy to see that in the ABJ theory ($N_1 \neq N_2$) its one-point function at the tree level is non-vanishing. In fact, \begin{equation} \langle \!\langle \mathcal{T}\rangle \!\rangle^{(0)}= \frac{\langle \Tr\left[W(+\infty, -\infty) \mathcal{T}\right]\rangle}{\langle \Tr W(+\infty, -\infty) \rangle}= -\frac12 \frac{\langle \text{STr} W(+\infty, -\infty) \rangle}{\langle \Tr W(+\infty, -\infty) \rangle} = -\frac{1}{2}\frac{N_1-N_2}{N_1+N_2} \end{equation} This result may signal a non-trivial mixing of ${\mathcal T}$ with the identity operator. From this consideration, it would follow that the correct operator to consider is the linear combination \begin{equation}\label{eq:twistedyo} {\mathcal T}'=\mathcal{T}+\frac{N_1-N_2}{2(N_1+N_2)}\mathbb{1} \end{equation} that satisfies $\langle\!\langle {\mathcal T}'\rangle\!\rangle=0$. This combination does not get any correction at one loop, as the ${\mathcal T}$ one-point function vanishes at this order. However, at higher orders, there is no reason why this pattern should persist. Therefore, we cannot exclude that the coefficient of the linear combination in \eqref{eq:twistedyo} may receive $1/k^2$ corrections. In any case, another problematic aspect of this interpretation is that odd correlation functions of ${\mathcal T}'$ are non-zero already at tree level.
Nevertheless, we observe that the new operator ${\mathcal T}'$ can safely replace ${\mathcal T}$ as the superprimary of the multiplet in figure \ref{fig:supmultT}. Adding the identity operator does not affect the descendant operators' commutation relations. Therefore, identities \eqref{eq:Yop} defining the ${\mathbb G}^a, \bar{\mathbb G}_a$ operators can be safely replaced by \begin{equation} {\mathbb G}^a=\left[\mathcal{Q}^a,{\mathcal{T}}'\right] \; , \quad \quad \bar{\mathbb G}_a=\left[\bar{\mathcal{Q}}_a, {\mathcal{T}}'\right] \qquad a=1,2,3 \label{eq:defdestilde} \end{equation} Having identified the correct operator, we can now determine its anomalous dimension using identity \eqref{eq:an_dim}. First of all, at the tree level, we find \begin{equation}\label{eq:Ttree} \langle\!\langle {\mathcal{T}}'(s) {\mathcal{T}}'(0)\rangle\!\rangle^{(0)}= \frac{N_1 N_2}{(N_1+N_2)^2} \end{equation} For the $\langle\!\langle {\mathbb G}^a(s) \bar{\mathbb G}_a(0)\rangle\!\rangle$ correlator at order $1/k$, a simple calculation leads to \begin{equation}\label{eq:Gtree} \langle\!\langle {\mathbb G}^a(s)\bar{\mathbb G}_a(0)\rangle\!\rangle^{(1)}=\frac{3}{k}\frac{N_1 N_2}{N_1+N_2}\frac{1}{s} \end{equation} Inserting these results into \eqref{eq:an_dim}, we finally obtain \begin{equation}\label{eq:an_dimT} \gamma({\mathcal T}')|_{1L} = \frac{N_1 + N_2}{2k} \end{equation} A similar calculation can be done in the ABJM theory ($N_1=N_2 \equiv N$). In this case there is no apparent mixing and $\langle\!\langle {\mathcal{T}}(s) {\mathcal{T}}(0)\rangle\!\rangle^{(0)}=1/4$. Since the result in \eqref{eq:Gtree} is valid also for $N_1=N_2$, we can still use it in \eqref{eq:an_dim} and find $\gamma({\mathcal T})|_{1L} = N/k$. This is consistent with \eqref{eq:an_dimT} for $N_1=N_2$. 
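For the reader's convenience, we spell out the elementary arithmetic behind \eqref{eq:an_dimT}: inserting the tree-level normalization \eqref{eq:Ttree} and the order-$1/k$ coefficient of \eqref{eq:Gtree} into \eqref{eq:an_dim} gives \begin{equation} \gamma({\mathcal T}')|_{1L} = \frac16 \, \frac{3}{k} \, \frac{N_1 N_2}{N_1+N_2} \left( \frac{N_1 N_2}{(N_1+N_2)^2} \right)^{-1} = \frac{N_1+N_2}{2k} \end{equation} and the ABJM value $N/k$ is recovered by setting $N_1=N_2 \equiv N$.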
Result \eqref{eq:an_dimT} is the one-loop anomalous dimension of the whole ${\mathcal T}$ multiplet in figure \ref{fig:supmultT}, in particular of the $SU(3)$ triplets $\{ {\mathbb G}^a \}, \{ \bar{\mathbb G}_a \}$, which are therefore non-protected operators. It is interesting to recall that these operators, together with the ${\mathbb Z}, \bar{\mathbb Z}$ (anticommuting) scalars in \eqref{eq:Z}, originate from the $SU(4)$ multiplets $C_I, \bar{C}^J, I,J=1, \dots, 4$ of the bulk theory, under decomposition \eqref{su3breaking}. In the parent theory, they combine to form protected, gauge-invariant operators of the form $\Tr{(C_I \bar{C}^J)^n}$, with the trace in $I,J$ removed. Nonetheless, once localized on the line, they meet a completely different fate: the ${\mathbb Z}, \bar{\mathbb Z}$ scalars remain protected, being part of the displacement multiplet, whereas $ {\mathbb G}^a, \bar{\mathbb G}_a$ are no longer protected, being descendants of the non-protected constant ${\mathcal T}'$ operator. From a computational point of view, it would be interesting to understand the mechanism that, on the Wilson line, leads to finite $\langle\!\langle {\mathbb Z} \bar{\mathbb Z} \rangle\!\rangle$ correlators but divergent $\langle\!\langle {\mathbb G} \bar{\mathbb G} \rangle\!\rangle$ ones. We devote the rest of this section to addressing this question, uncovering this mechanism perturbatively at order $1/k^2$. \subsection{Two-loop scalar correlators}\label{sect:corr} We now evaluate the two-point functions \begin{equation} \langle\!\langle {\mathbb Z} \bar{\mathbb Z} \rangle\!\rangle\qquad \text{and}\qquad \langle\!\langle \bar{\mathbb{G}}_a \mathbb G^b \rangle\!\rangle \label{eq:twocorr} \end{equation} on the Wilson line. As already mentioned, we expect only the first correlator to be finite, as the ${\mathbb G}$ operators should acquire an anomalous dimension at the quantum level.
As a by-product, we will also rederive the two-loop Bremsstrahlung function associated with the \(1/2\)-BPS Wilson loop. In fact, this is known to be captured by the coefficient of the displacement two-point function \cite{Correa:2012at}, or equivalently of its $\mathbb{Z}$ superprimary. We begin by evaluating the normalization factor $\langle W \rangle$ in \eqref{eq:def_cor}. At the order we are interested in, it is sufficient to evaluate the Wilson expectation value up to order $1/k$. In the case of a linear defect, the evaluation of $\langle {W}\rangle$ is complicated by the appearance of long-distance singularities associated with the infinite domain of the line integrals. Regularizing such singularities requires introducing a long-distance cut-off that restricts the line integrals to a finite-size segment $(-L,L)$. Moreover, short-distance singularities also appear, which are suitably regularized by using dimensional regularization in $d=3-2\epsilon$ with dimensional reduction \cite{Chen:1992ee,Bianchi:2013zda,Bianchi:2013rma}. How, and in which order, to remove the two regulators is a subtle issue that requires careful analysis. At one loop the Wilson line receives a non-trivial contribution coming from the exchange of a fermion propagator.
Using Feynman rule \eqref{0fermion}, we obtain the following integral\footnote{We use the convention $s_{ij} \equiv s_i - s_j$ for the distance between two points on the line.} \begin{equation} \int_{-L}^{L}ds_1\int_{-L}^{s_1}ds_2\;\frac{1}{s_{12}^{2-2\epsilon}}=-\frac{(2L)^{2\epsilon}}{4\epsilon\left(\frac{1}{2}-\epsilon\right)} \label{eq:intLL} \end{equation} It follows that, including all the factors from the propagator and the traces, up to one loop the defect vacuum-to-vacuum transition amplitude is \begin{equation} \langle {W} \rangle^{(0)+(1)}=(N_1+N_2) - \frac{N_1 N_2}{k} \, \frac{\Gamma\left(\frac{1}{2}-\epsilon\right)}{\pi^{\frac{1}{2}- \epsilon}} \, \frac{\left(2L\mu\right)^{2\epsilon}}{\epsilon} \label{eq:Wpert} \end{equation} where $\mu$ is the mass scale of dimensional regularization. We note that, although we are working in Landau gauge, this result is gauge independent (in contrast to what was observed for the analogous operator in \(\mathcal{N}=4\) SYM \cite{Griguolo:2012iq} and for amplitudes in ABJM theory \cite{Leoni:2010az}). In fact, the longitudinal part of the gauge propagator vanishes on the line, as follows from eq. \eqref{eq:xigauge}. Therefore, no extra gauge-dependent contributions can arise from the exchange of a vector propagator. For finite $L$, expression \eqref{eq:Wpert} is UV divergent, contrary to the expectations based on the BPS nature of the defect. This is due to the appearance of boundary effects induced by the IR regularization, which temporarily destroy the SUSY invariance of the Wilson line. It would be interesting to investigate further how to remove these unwanted contributions for the Wilson line {\em per se}, in particular what the correct renormalization prescription should be and how to safely remove the IR cut-off.
However, since here we are primarily interested in evaluating defect correlators, we study how to cure this problem once we have combined this divergent term with similar terms that are expected to appear in the evaluation of the numerator in \eqref{eq:def_cor}. Expanding the normalization factor $\frac{1}{\langle {W} \rangle^{(0) + (1)}}$, at the order we are interested in, a generic correlator $\langle\!\langle {\mathcal O} \bar{\mathcal O} \rangle\!\rangle$ is given by \begin{eqnarray}\label{eq:correxp} && \hspace{-0.5cm} \left( \langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(1)} + \langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(2)} \right) \times \frac{1}{N_1+N_2} \left( 1 + \frac{1}{k} \, \frac{N_1 N_2}{N_1+N_2} \, \frac{\Gamma\left(\frac{1}{2}-\epsilon\right)}{\pi^{\frac{1}{2}- \epsilon}} \, \frac{\left(2L\mu\right)^{2\epsilon}}{\epsilon} \right) \nonumber \\ &=& \frac{\langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(1)}}{N_1+N_2} \nonumber \\ &~& + \frac{\langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(2)}}{N_1+N_2} + \frac{1}{k} \, \frac{N_1 N_2}{(N_1+N_2)^2} \, \frac{\Gamma\left(\frac{1}{2}-\epsilon\right)}{\pi^{\frac{1}{2}- \epsilon}} \, \frac{\left(2L\mu\right)^{2\epsilon}}{\epsilon} \, \langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(1)} \end{eqnarray} The lowest order corresponds to the first term in this expansion.
Evaluating the numerators for the two correlators \eqref{eq:twocorr}, we find that their $O(1/k)$ expression in the $\epsilon \to 0$ limit reads\footnote{We note that this should correspond to tree level, but due to the particular normalization of the operators, it is already order $1/k$.} \begin{equation} \langle\!\langle \mathbb{Z}\bar{\mathbb{Z}} \rangle\!\rangle^{(1)}=\frac{1}{k}\frac{N_1 N_2}{N_1+N_2}\frac{1}{s} \qquad , \qquad \langle\!\langle \, \bar{\mathbb G}_a{\mathbb G}^b \rangle \!\rangle^{(1)} =\delta_b^a \frac{1}{k}\frac{N_1 N_2}{N_1+N_2}\frac{1}{s} \label{eq:tree} \end{equation} Now we move to order $1/k^2$, that is the last line in \eqref{eq:correxp} where the second term comes from the one-loop result for $\langle W \rangle$ multiplied by results in \eqref{eq:tree}. \begin{figure}[h] \centering \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b) ; \vertex[right=1.5cm of b] (c) ; \vertex[right=1.5cm of c] (d) ; \vertex[right=0.5cm of d] (e); \diagram* {(b) -- [scalar,half left] (c), (b) -- [scalar, half left] (d),}; \draw[fill=gray] (b) circle (3pt); \draw[blue,thick] (a) -- (e); \draw[fill=white] (c) circle (2pt); \draw[fill=white] (d) circle (2pt); \draw[fill=lightgray] (b) circle (3pt); \end{feynman} \end{tikzpicture}\label{scalaraapp}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b) ; \vertex[right=1.5cm of b] (c) ; \vertex[right=1.5cm of c] (d) ; \vertex[right=0.5cm of d] (e); \diagram* {(b) -- [scalar,half left] (c), (c) -- [scalar, half left] (d),}; \draw[blue,thick] (a) -- (e); \draw[fill=lightgray] (c) circle (3pt); \draw[fill=white] (b) circle (2pt); \draw[fill=white] (d) circle (2pt); \end{feynman} \end{tikzpicture}\label{scalarbapp}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b) ; \vertex[right=1.5cm of b] (c) ; \vertex[right=1.5cm of c] (d) ; \vertex[right=0.5cm of d] (e); \diagram* {(b) -- 
[scalar,half left] (d), (c) -- [scalar, half left] (d),}; \draw[thick,blue] (a) -- (e); \draw[fill=lightgray] (d) circle (3pt); \draw[fill=white] (b) circle (2pt); \draw[fill=white] (c) circle (2pt); \end{feynman} \end{tikzpicture}\label{scalarcapp}} \caption{Diagrams with purely bosonic contractions. White bubbles represent the two local operator insertions, whereas the grey one is the bosonic part of the ${\mathcal L}$ superconnection coming from the first order expansion of W. The diagrams take into account all possible path orderings of the operators.} \label{fig:bremscalar1loop} \end{figure} \begin{figure}[h] \centering \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b) ; \vertex[right=1cm of b] (c); \vertex[right=1cm of c] (d); \vertex[right=1cm of d] (e) ; \vertex[right=0.5cm of e] (f) ; \diagram* {(d) -- [scalar,half left] (e), (b) -- [fermion, half left] (c),}; \draw[thick,blue] (a) -- (f); \draw[fill=black] (b) circle (3pt); \draw[fill=black] (c) circle (3pt); \draw[fill=white] (d) circle (2pt); \draw[fill=white] (e) circle (2pt); \end{feynman} \end{tikzpicture}\label{fermionlineaapp}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b); \vertex[right=1cm of b] (c); \vertex[right=1cm of c] (d); \vertex[right=1cm of d] (e); \vertex[right=1cm of e] (f); \diagram* {(b) -- [scalar,half left] (e), (d) -- [fermion, half right] (c)}; \draw[thick,blue] (a) -- (f); \draw[fill=black] (d) circle (3pt); \draw[fill=black] (c) circle (3pt); \draw[fill=white] (b) circle (2pt); \draw[fill=white] (e) circle (2pt); \end{feynman} \end{tikzpicture}\label{fermionlinebapp}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b); \vertex[right=1cm of b] (c); \vertex[right=1cm of c] (d); \vertex[right=1cm of d] (e); \vertex[right=.5cm of e] (f); \diagram* {(b) -- [scalar,half left] (c), (d) -- [fermion, half left] (e)}; \draw[thick,blue] (a) -- (f); 
\draw[fill=black] (d) circle (3pt); \draw[fill=black] (e) circle (3pt); \draw[fill=white] (b) circle (2pt); \draw[fill=white] (c) circle (2pt); \end{feynman} \end{tikzpicture}\label{fermionlinecapp}}\qquad \subfigure[]{\begin{tikzpicture} \begin{feynman} \vertex (a); \vertex[right=0.5cm of a] (b); \vertex[right=1cm of b] (c) ; \vertex[right=1cm of c] (d) ; \vertex[right=1cm of d] (e) ; \vertex[right=.5cm of e] (f) ; \diagram* {(c) -- [scalar,half left] (d), (e) -- [fermion, half right] (b)}; \draw[thick,blue] (a) -- (f); \draw[fill=black] (b) circle (3pt); \draw[fill=black] (e) circle (3pt); \draw[fill=white] (c) circle (2pt); \draw[fill=white] (d) circle (2pt); \end{feynman} \end{tikzpicture}\label{fermionlinedapp}} \caption{Diagrams with fermionic contractions (arrowed lines). White bubbles represent the two local operator insertions, whereas the black ones are the fermions from two ${\mathcal L}_F$ superconnections coming from the second order expansion of W. The diagrams take into account all possible path orderings of the operators.} \label{fig:bremferm1loop} \end{figure} The first term $\langle W {\mathcal O} W \bar{\mathcal O} W \rangle^{(2)}$ receives contributions from two sets of diagrams. Diagrams in figure \ref{fig:bremscalar1loop} come from the first order expansion of the Wilson line and involve contractions of the $\mathbb{Z}$ and $\mathbb{G}$ operators with the scalar part of the superconnection ${\mathcal L}_B$ in eq. \eqref{connecsu3}. The second set of diagrams are depicted in figure \ref{fig:bremferm1loop}. They come from the second order expansion of $W$ and involve self-contractions of two fermionic ${\mathcal L}_F$ terms in eq. \eqref{connecsu3}, times the free propagators $\langle Z \bar{Z} \rangle$ and $\langle \bar{Y}^a Y_b \rangle$, respectively. 
Since the tree-level propagators for $Z$ and the $Y$'s are the same (they all come from propagator \eqref{scalartree} evaluated on the line), it is clear that the diagrammatic contributions in figure \ref{fig:bremferm1loop} are the same for both the correlators \eqref{eq:twocorr}. In contrast, due to the sign difference between the two biscalars appearing in ${\mathcal L}_B$, the diagrams in figure \ref{fig:bremscalar1loop} contribute to the two correlators with opposite signs. Therefore, if we call ${\mathcal B}^{(2)}$ the contributions from diagrams \ref{fig:bremscalar1loop} and ${\mathcal F}^{(2)}$ the ones from diagrams \ref{fig:bremferm1loop}, we can write \begin{eqnarray}\label{eq:BandF} && \langle W \, {\mathbb Z}(s) \, W \, \bar{\mathbb Z}(0) \, W\rangle^{(2)} = {\mathcal F}^{(2)} + {\mathcal B}^{(2)} \\ && \langle W \, \bar{{\mathbb G}}_a(s) \, W \, {\mathbb G}^b(0) \, W \rangle^{(2)} = \delta_a^b \, \left( {\mathcal F}^{(2)} - {\mathcal B}^{(2)} \right) \nonumber \end{eqnarray} We now evaluate ${\mathcal B}^{(2)}$ and ${\mathcal F}^{(2)}$ explicitly. We compute the Feynman integrals corresponding to the diagrams in figures \ref{fig:bremscalar1loop} and \ref{fig:bremferm1loop} by using the IR regulator discussed above, plus dimensional regularization for short-distance divergences. The necessary Feynman rules are listed in appendix \ref{ABJ(M)}. We evaluate one of the two correlators in the expressions \eqref{eq:BandF}. From the diagrams in fig.
\ref{fig:bremscalar1loop} we obtain \begin{align} \ref{scalaraapp}&= \frac{N_1^2 N_2}{k^2} \; \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \,\left(\frac{L}{L+s}\right)^{2\epsilon} \, \frac{s^{4\epsilon-1}}{2\epsilon} \\ \ref{scalarbapp}&=\frac{N_1 N_2^2 }{k^2} \; \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \frac{\Gamma^2(2\epsilon)}{\Gamma(4\epsilon)} \, s^{4\epsilon-1}\\ \ref{scalarcapp}&=\frac{N_1^2 N_2}{k^2} \; \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \left(\frac{L-s}{L}\right)^{2\epsilon} \, \frac{s^{4\epsilon-1}}{2\epsilon} \end{align} We see that these contributions are regular in the limit \(L\to \infty\). Therefore, removing the IR cut-off, they eventually sum up to the following UV-divergent contribution \begin{eqnarray} {\mathcal B}^{(2)}&=& \frac{1}{\epsilon} \; \frac{N_1 N_2}{2k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{\pi^{1-2\epsilon}} \left(N_1 + N_2\frac{\Gamma^2(1+2\epsilon)}{\Gamma(1 + 4 \epsilon)}\right) \, \frac{1}{s^{1-4\epsilon}} \nonumber \\ &\sim& \frac{1}{\epsilon} \; \frac{N_1 N_2(N_1+N_2)}{2k^2} \, \frac{1}{s} + O(\epsilon) \label{eq:bosonic1loop} \end{eqnarray} Now we move to the fermionic contributions.
The double integrals coming from diagrams in figure \ref{fig:bremferm1loop} evaluate to \begin{align} \ref{fermionlineaapp}&=\frac{N_1 N_2^2}{k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \left[-L^{2\epsilon}\right] \, \frac{s^{2\epsilon-1}}{\epsilon}\\ \ref{fermionlinebapp}&=\frac{N_1^2 N_2 }{k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \left[-s^{2\epsilon}\right] \, \frac{s^{2\epsilon-1}}{\epsilon}\\ \ref{fermionlinecapp}&=\frac{N_1 N_2^2}{k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \left[-(L-s)^{2\epsilon}\right] \, \frac{s^{2\epsilon-1}}{\epsilon} \\ \ref{fermionlinedapp}&=\frac{N_1 N_2^2}{k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{2\pi^{1-2\epsilon}} \, \left[L^{2\epsilon}-s^{2\epsilon}-(2L)^{2\epsilon}+(L+s)^{2\epsilon}\right] \, \frac{s^{2\epsilon-1}}{\epsilon} \end{align} and sum up to \begin{equation}\label{eq:calF} \begin{aligned} {\mathcal F}^{(2)} = - \frac{1}{\epsilon} \; \frac{N_1 N_2}{2k^2} \, & \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{\pi^{1-2\epsilon}} \, \frac{1}{s^{1-2\epsilon}}\\ \times &\left( (N_1+N_2) s^{2\epsilon} + N_2\left( (L-s)^{2\epsilon} - (L+s)^{2\epsilon} + (2L)^{2\epsilon} \right)\right) \end{aligned} \end{equation} In this case the $L \to \infty$ limit is not entirely safe as long as $\epsilon \neq 0$. In fact, while the second and the third terms cancel each other in this limit, we are left with a divergent contribution proportional to $(2L)^{2\epsilon}$, which is problematic. However, this is exactly of the same form as the last term in \eqref{eq:correxp} coming from the expansion of the denominator $\langle W \rangle$. Therefore, we have to sum up all the contributions before discussing how to remove the IR regulator.
Focusing for the time being only on the problematic terms, for both correlators we have the following contribution (reinserting the mass scale $\mu$) \begin{equation}\label{eq:sick} - \frac{1}{2k^2}\, \frac{N_1 N_2^2 (N_1-N_2) }{(N_1+N_2)^2} \, \frac{(2L\mu)^{2\epsilon}}{\epsilon} \, \frac{1}{s} + c \, (2L\mu)^{2\epsilon} + O(\epsilon) \end{equation} where $c$ is a UV-finite function of the couplings and the position $s$. We see that the problematic term is eventually proportional to $(N_1-N_2)$, and it vanishes for $N_1=N_2$. It is therefore convenient to split the discussion of the ABJM and ABJ cases. \paragraph{The {\bf $N_1=N_2 \equiv N$} case.} When the defect lives in the ABJM theory, the divergent term in \eqref{eq:sick} vanishes identically. This means that, at least at order $1/k^2$, the divergent one-loop contribution to the Wilson expectation value is needed to cancel exactly a similar term which arises in the evaluation of the correlators in \eqref{eq:BandF}. The rest of expression \eqref{eq:sick} does not present any problem and can be safely removed by sending, for instance, $\epsilon \to 0$ first and then $L \to \infty$. In the ABJM case it can actually be checked that the result is independent of the order of limits. A similar pattern was already encountered in \cite{Bianchi:2017ozk}. Having removed the $(2L)^{2\epsilon}$ terms, from \eqref{eq:bosonic1loop} and \eqref{eq:calF} it is now easy to realize that \begin{equation} {\mathcal F}^{(2)} = -{\mathcal B}^{(2)} + O(\epsilon) \end{equation} Therefore, from eqs. \eqref{eq:BandF} it follows that \begin{eqnarray} && \langle \!
\langle \, {\mathbb Z}(s) \bar{\mathbb Z}(0) \, \rangle\!\rangle^{(2)} = O(\epsilon) \\ && \langle\!\langle \, \bar{{\mathbb G}}_a(s) {\mathbb G}^b(0) \, \rangle\!\rangle^{(2)} = -\delta_a^b \, \frac{N^2}{k^2} \left( \frac{1}{\epsilon } + 4 \log{s}+ 2\gamma +2\log{(4 \pi )} \right) \, \frac{1}{s} + O(\epsilon) \nonumber \end{eqnarray} The first line is perfectly consistent with the expectations: not only is the $\langle \! \langle {\mathbb Z} \bar{\mathbb Z} \rangle\!\rangle $ correlator finite, but its one-loop coefficient is also zero, in agreement with the fact that the Bremsstrahlung function is known to get no corrections at order $1/k^2$ \cite{Lewkowycz:2013laa, Bianchi:2014laa, Bianchi:2017svd}. More interesting is the second line. The appearance of the $1/\epsilon$ divergence signals the necessity of renormalizing the $\mathbb{G}^a$ operators, which consequently acquire an anomalous dimension. It is easy to show that renormalizing the operators as $\mathbb{G}_R^a = Z_{\mathbb G}^{-1} \mathbb{G}^a$ (the same for $\bar{\mathbb{G}}_a$) and applying the usual procedure which, in the minimal subtraction scheme, allows one to read the anomalous dimension from the $1/\epsilon$ pole of $Z_{\mathbb G}$, one finds $\gamma({\mathbb G}^a)|_{1L} = \tfrac{N}{k}$, in agreement with \eqref{eq:an_dimT} for $N_1=N_2 \equiv N$. \paragraph{The {\bf $N_1 \neq N_2$} case.} In the ABJ theory the previous calculations reveal that the $1/\epsilon$ pole in \eqref{eq:sick} proportional to the IR regulator does not vanish. This term, mixing UV and IR divergences, renders the regularization prescriptions ambiguous. In fact, this term is divergent for $L \to \infty$, as long as $\epsilon \neq 0$. On the other hand, if we keep $L$ finite and choose a UV renormalization prescription which completely removes the first term in \eqref{eq:sick}, the dependence on the IR cut-off disappears and one can safely take the $L \to \infty$ limit afterwards.
It follows that the perturbative corrections to the correlators on the line are ambiguous: they depend on the order of the $L \to \infty$ and $\epsilon \to 0$ limits and on the renormalization prescription that we adopt. We fix this ambiguity by choosing a different prescription to regularize the IR divergences in the ABJ case. This regularization is analysed in detail in appendix \ref{sect:cut-off} and basically amounts to conformally mapping the cut-off line onto the cut-off circle to avoid the problematic long-distance behavior. As discussed in the appendix, this new prescription simply amounts to discarding the terms $(L-s)^{2\epsilon}, (L+s)^{2\epsilon}$ and $(2L)^{2\epsilon}$, as they would be cancelled by extra degrees of freedom placed at the two endpoints of the cut-off line\footnote{A different regularization scheme that one might try is the gauge averaging proposed in \cite{Leoni:2010az}.}. Using this prescription, while still employing dimensional regularization to keep UV divergences under control, the result in \eqref{eq:calF} reads \begin{equation} {\mathcal F}^{(2)} = - \frac{N_1 N_2(N_1+N_2)}{2k^2} \, \frac{\Gamma^2\left(\frac{1}{2}-\epsilon\right)}{\pi^{1-2\epsilon}} \, \frac{1}{\epsilon} \, \frac{1}{s^{1 - 4\epsilon}} = - {\mathcal B}^{(2)} + O(\epsilon) \end{equation} Therefore, expanding around $\epsilon = 0$, from eqs.
\eqref{eq:BandF} we finally obtain \begin{eqnarray} && \langle\!\langle {\mathbb Z}(s) \bar{\mathbb Z}(0) \rangle\!\rangle^{(2)} = O(\epsilon) \\ && \langle\!\langle \bar{{\mathbb G}}_a(s) {\mathbb G}^b(0) \rangle\!\rangle^{(2)} = -\delta_a^b \, \frac{N_1 N_2}{k^2} \left( \frac{1}{\epsilon } + 4 \log{s}+ 2\gamma +2\log{(4 \pi )} \right) \, \frac{1}{s} + O(\epsilon) \nonumber \end{eqnarray} Once more, the first correlator is consistent with the absence of $1/k^2$ corrections to the Bremsstrahlung function of the 1/2-BPS Wilson loop, whereas, by renormalizing the second correlator, we obtain the one-loop anomalous dimension of ${\mathbb G}^a$, which agrees with the expression in \eqref{eq:an_dimT}. \section{The constant operator at strong coupling} \label{sect:strong} In this section, we propose a holographic interpretation of the $\mathcal{T}$ multiplet. Our conjecture relies on a similar situation in 4d $\mathcal{N}=4$ SYM. Therefore, we begin by briefly recalling what happens in four dimensions. To this end, we focus on the one-dimensional dCFT defined on the $\frac{1}{2}$-BPS Wilson loop \cite{Rey:1998ik, Maldacena:1998im} of the $\mathcal{N}=4$ SYM theory. The lightest local operators one can consider are the scalars $\Phi^I$, $I=1,\dots, 6$. When localized on the defect, these are the SCPs of two supermultiplets. More precisely, one can choose $\Phi^a$, $a=1,\dots,5$, to be the lowest operators of the displacement multiplet, which is a short multiplet, while $\Phi^6$ generates a long multiplet. In \cite{Giombi:2017cqn}, a holographic description of the dCFT on the Wilson loop has been proposed. Given the minimal surface dual to the straight Wilson line, which defines an $AdS_2$ metric inside the $AdS_5\times S^5$ background \cite{Maldacena:1998im}, the holographic dual of the dCFT is the $AdS_2$ QFT for the transverse fluctuations around the minimal surface, obtained by expanding the worldsheet superstring action in the static gauge.
According to the holographic dictionary, the $\Phi^a$ operators with $a=1, \dots , 5$ are mapped to the fluctuations $y^a$ in the $S^5$ directions, whereas the unprotected $\Phi^6$ scalar is conjectured to be dual to the lightest bound state $y^a y_a$. Since this is the lightest operator exchanged in the OPE $y^a\times y^a$, one can use bootstrap methods to compute the anomalous dimension of the bound state. Here, we generalize this proposal to the ABJM theory. In this case, the $\frac{1}{2}$-BPS Wilson line admits a holographic description in terms of a minimal area superstring worldsheet on $AdS_4\times \mathbb{C}{\rm P}^3$ \footnote{In the ABJM theory, the duals of $\frac{1}{2}$-BPS Wilson operators can be more generally obtained in terms of minimal M2-brane configurations in M-theory on $AdS_4 \times S^7/Z_k$ \cite{Lietti:2017gtc}. They reduce to $AdS_4\times \mathbb{C}{\rm P}^3$ type IIA string solutions in the regime $k \ll N \ll k^5$.}. Following the 4d counterpart, one can consider the $AdS_2$ QFT, which arises from expanding the superstring action on $AdS_4\times \mathbb{C}{\rm P}^3$ in the static gauge around the Wilson line solution. We interpret it as the gravitational dual of the dCFT defined on the $\frac{1}{2}$-BPS Wilson line. The fluctuations transverse to the $AdS_2$ solution are in one-to-one correspondence with the operators in the displacement multiplet \cite{Bianchi:2020hsz}. An important difference with respect to the 4d case is that the SCP $\mathbb{Z}$, being an anticommuting supermatrix operator, corresponds to a {\em fermionic} fluctuation $z$ in the worldsheet theory. At weak coupling, it is tempting to make an analogy between the lightest non-protected operator $\mathcal{T}$ of the dCFT and the non-protected scalar $\Phi^6$ in 4d. 
It is then natural to take inspiration from the 4d duality $\Phi^6 \sim y^a y_a$ to conjecture a duality between the $\mathcal{T}$ excitations and the lightest bound state built from the fluctuations dual to the displacement multiplet, that is \begin{equation} \mathcal{T} \sim z\bar{z} \end{equation} The quantum numbers of the bound state $z\bar{z}$ are $[1,0,0,0]$. While the $\mathfrak{u}(1)_M$ and R-symmetry quantum numbers match those of $\mathcal{T}$, the scaling dimensions are different. However, this may not be a problem since $\mathcal{T}$ is not protected. It is, in fact, conceivable that its quantum dimension, being a function of $k$, $N_1$ and $N_2$, interpolates between the dimension at weak coupling (zero at lowest order) and the one at strong coupling captured by $z\bar{z}$. Indeed, the same pattern occurs in the four-dimensional case. A couple of qualitative arguments can be used to support our conjecture. First of all, the fact that the $\mathcal{T}$ dimension flows in the IR to a larger value is consistent with our perturbative findings. In fact, at weak coupling, we have found a positive anomalous dimension (see \eqref{eq:an_dimT}), which signals an increasing flow towards the IR. Second, for $N_1=N_2 \equiv N$, the anomalous dimension of the $\mathbb{Z}\bar{\mathbb{Z}}$ operator dual to the bound state $z\bar{z}$ has been computed at strong coupling in \cite{Bianchi:2020hsz}, and reads \begin{equation} \Delta_{\mathbb{Z}\bar{\mathbb{Z}}}=1-3\epsilon \end{equation} where $\epsilon \sim {(N/k)}^{-\frac12}$ is the coupling constant. Again, the negative sign of the correction, signaling a decreasing flow towards the UV, agrees with our proposal. \section{Conclusions and perspectives}\label{sect:conclusions} The study of dCFTs defined through supersymmetric Wilson lines in ABJ(M) theory is still in its infancy.
Already the maximal 1/2 BPS case presents peculiarities and unexpected properties, due to the fermionic couplings appearing in its field-theoretical definition. In this paper we have observed the existence of a long multiplet whose highest-weight state is obtained by inserting into the Wilson line a constant supermatrix operator ${\mathcal T}$. We have derived the full supermultiplet exploiting an explicit covariant representation of the preserved supercharges. While the relation between ${\mathcal T}$ and the genuine local operators ${\mathbb G}^a(x)$ might seem an artifact of taking the covariant version of the supercharges, perturbation theory supports our interpretation. In fact ${\mathbb G}^a(x)$ is not protected and acquires at the quantum level the same anomalous dimension as ${\mathcal T}$, suggesting that ${\mathbb G}^a(x)$ is truly a descendant of ${\mathcal T}$. Another piece of evidence for the consistency of our construction comes from strong coupling considerations. If we were not to assume that ${\mathbb G}^a(x), \bar{\mathbb G}_a(x)$ are ${\mathcal T}$ descendants, we could not find any obvious operator corresponding to the $z\bar{z}$ bound state appearing in this regime. It turns out that the quantum dimension of the constant operator ${\mathcal T}$ is compatible with an interpolating function between weak and strong coupling. In this respect, it would certainly be interesting to apply bootstrap techniques to verify our intuition, mimicking the 4d analog \cite{Grabner:2020nis, Ferrero:2021bsb, Cavaglia:2021bnz, Cavaglia:2022qpg}. We have also noted that, even more mysteriously, ${\mathcal T}$ enters the cohomological equivalence between 1/2 BPS and 1/6 BPS Wilson loops. We remark that ``constant'' local operators inserted into Wilson lines were previously considered in the literature.
For example, ``defect changing operators'', which change the scalar coupled to the Wilson loop, have been studied in \cite{Kim:2017sju}, while in \cite{Gabai:2022vri} it has been shown that the holonomy itself gets a contribution from the constant part.

While in the case of ABJM the situation seems quite clear at weak coupling, for $N_1\neq N_2$ we found some subtle and somewhat unexpected effects. Bad terms, proportional to $(N_1-N_2)$, arise in our computations, inducing a strong dependence on the regularization procedure. We adopted a regularization consistent with the same calculation on a circular Wilson loop, finding a reasonable result for the anomalous dimensions. It is certainly worth exploring this last feature more deeply, maybe in connection with the parity properties of ABJ theory. More generally, it would be important to have a clearer picture of the correct way to define the 1/2 BPS Wilson line at the perturbative level, maybe resorting to a well-defined limiting procedure that involves boundary operators connected by the line.

As a final remark, we stress that no computation of four-point functions has been attempted so far for defect operators in the ABJ(M) theory at the perturbative level. It could be useful to have some results in this direction, also to understand the behaviour of ${\mathcal T}$ in the OPE expansion.

\vskip 35pt

\acknowledgments
We thank Lorenzo Bianchi, Diego Correa, Shota Komatsu, Carlo Meneghelli and Guillermo Silva for interesting discussions and useful insights. The work of Luigi Guerrini and Paolo Soresina is supported by Della Riccia Foundation. This work has been supported in part by Italian Ministero dell'Universit\`a e Ricerca (MUR), and by Istituto Nazionale di Fisica Nucleare (INFN) through the ``Gauge Theories, Strings, Supergravity'' (GSS) and ``Gauge and String Theory'' (GAST) research projects.

\newpage
Project acronym DEATHSWITCHING
Project Identifying genes and pathways that drive molecular switches and back-up mechanisms between apoptosis and autophagy
Researcher (PI) Adi Kimchi
Host Institution (HI) WEIZMANN INSTITUTE OF SCIENCE
Call Details Advanced Grant (AdG), LS3, ERC-2012-ADG_20120314
Summary A cell's decision to die is governed by multiple input signals received from a complex network of programmed cell death (PCD) pathways, including apoptosis and programmed necrosis. Additionally, under some conditions, autophagy, whose function is mainly pro-survival, may act as a back-up death pathway. We propose to apply new approaches to study the molecular basis of two important questions that await resolution in the field: a) how the cell switches from a pro-survival autophagic response to an apoptotic response and b) whether and how pro-survival autophagy is converted to a death mechanism when apoptosis is blocked. To address the first issue, we will screen for direct physical interactions between autophagic and apoptotic proteins, using the protein fragment complementation assay. Validated pairs will be studied in depth to identify built-in molecular switches that activate apoptosis when autophagy fails to restore homeostasis. As a pilot case to address the concept of molecular 'sensors' and 'switches', we will focus on the previously identified Atg12/Bcl-2 interaction. In the second line of research we will categorize autophagy-dependent cell death triggers into those that directly result from autophagy-dependent degradation, either by excessive self-digestion or by selective protein degradation, and those that utilize the autophagy machinery to activate programmed necrosis. We will identify the genes regulating these scenarios by whole genome RNAi screens for increased cell survival. In parallel, we will use a cell library of annotated fluorescent-tagged proteins for measuring selective protein degradation.
These will be the starting point for identification of the molecular pathways that convert survival autophagy to a death program. Finally, we will explore the physiological relevance of back-up death mechanisms and the newly identified molecular mechanisms to developmental PCD during the cavitation process in early stages of embryogenesis.

Project acronym FSC
Project Fast and Sound Cryptography: From Theoretical Foundations to Practical Constructions
Researcher (PI) Alon Rosen
Host Institution (HI) INTERDISCIPLINARY CENTER (IDC) HERZLIYA
Summary "Much currently deployed cryptography is designed using more "art" than "science," and most of the schemes used in practice lack rigorous justification for their security. While theoretically sound designs do exist, they tend to be quite a bit slower to run and hence are not realistic from a practical point of view. This gap is especially evident in "low-level" cryptographic primitives, which are the building blocks that ultimately process the largest quantities of data. Recent years have witnessed dramatic progress in the understanding of highly-parallelizable (local) cryptography, and in the construction of schemes based on the mathematics of geometric objects called lattices. Besides being based on firm theoretical foundations, these schemes also allow for very efficient implementations, especially on modern microprocessors. Yet despite all this recent progress, there has not yet been a major effort specifically focused on bringing the efficiency of such constructions as close as possible to practicality; this project will do exactly that. The main goal of the Fast and Sound Cryptography project is to develop new tools and techniques that would lead to practical and theoretically sound implementations of cryptographic primitives. We plan to draw ideas from both theory and practice, and expect their combination to generate new questions, conjectures, and insights.
A considerable fraction of our efforts will be devoted to demonstrating the efficiency of our constructions. This will be achieved by a concrete setting of parameters, allowing for cryptanalysis and direct performance comparison to popular designs. While our initial focus will be on low-level primitives, we expect our research to also have direct impact on the practical efficiency of higher-level cryptographic tasks. Indeed, many of the recent improvements in the efficiency of lattice-based public-key cryptography can be traced back to research on the efficiency of lattice-based hash functions."

Project acronym MQC
Project Methods for Quantum Computing
Researcher (PI) Andris Ambainis
Host Institution (HI) LATVIJAS UNIVERSITATE
Summary "Quantum information science (QIS) is a young research area at the frontier of both computer science and physics. It studies what happens when we apply the principles of quantum mechanics to problems in computer science and information processing. This has resulted in many unexpected discoveries and opened up new frontiers. Quantum algorithms (such as Shor's factoring algorithm) can solve computational problems that are intractable for conventional computers. Quantum mechanics also enables quantum cryptography which provides an ultimate degree of security that cannot be achieved by conventional methods. These developments have generated an enormous interest both in building a quantum computer and exploring the mathematical foundations of quantum information. We will study computer science aspects of QIS. Our first goal is to develop new quantum algorithms and, more generally, new algorithmic techniques for developing quantum algorithms. We will explore a variety of new ideas: quantum walks, span programs, learning graphs, linear equation solving, computing by transforming quantum states. Secondly, we will study the limits of quantum computing.
We will look at various classes of computational problems and analyze what are the biggest speedups that quantum algorithms can achieve. We will also work on identifying computational problems which are hard even for a quantum computer. Such problems can serve as a basis for cryptography that would be secure against quantum computers. Thirdly, the ideas from quantum information can lead to very surprising connections between different fields. The mathematical methods from quantum information can be applied to solve purely classical (non-quantum) problems in computer science. The ideas from computer science can be used to study the complexity of physical systems in quantum mechanics. We think that both of those directions have the potential for unexpected breakthroughs and we will pursue both of them."

Project acronym SUPREL
Project "Scaling Up Reinforcement Learning: Structure Learning, Skill Acquisition, and Reward Shaping"
Researcher (PI) Shie Mannor
Host Institution (HI) TECHNION - ISRAEL INSTITUTE OF TECHNOLOGY
Summary "Learning how to act optimally in high-dimensional stochastic dynamic environments is a fundamental problem in many areas of engineering and computer science. The basic setup is that of an agent who interacts with an environment trying to maximize some long term payoff while having access to observations of the state of the environment. A standard approach to solving this problem is the Reinforcement Learning (RL) paradigm in which an agent is trying to improve its policy by interacting with the environment or, more generally, by using different sources of information such as traces from an expert and interacting with a simulator. In spite of several success stories of the RL paradigm, a unified methodology for scaling-up RL has not emerged to date.
The goal of this research proposal is to create a methodology for learning and acting in high-dimensional stochastic dynamic environments that would scale up to real-world applications well and that will be useful across domains and engineering disciplines. We focus on three key aspects of learning and optimization in high dimensional stochastic dynamic environments that are interrelated and essential to scaling up RL. First, we consider the problem of structure learning. This is the problem of how to identify the key features and underlying structures in the environment that are most useful for optimization and learning. Second, we consider the problem of learning, defining, and optimizing skills. Skills are sub-policies whose goal is more focused than solving the whole optimization problem and can hence be more easily learned and optimized. Third, we consider changing the natural reward of the system to obtain desirable properties of the solution such as robustness, aversion to risk and smoothness of the control policy. In order to validate our approach we study two challenging real-world domains: a jet fighter flight simulator and a smart-grid short term control problem."

Project acronym SURFCOMP
Project Comparing and Analyzing Collections of Surfaces
Researcher (PI) Yaron Lipman
Summary The proposed research program intends to cover all aspects of the problem of learning and analyzing collections of surfaces and apply the developed methods and algorithms to a wide range of scientific data. The proposal has two parts: In the first part of the proposal, we concentrate on developing the most basic operators comparing automatically pairs of surfaces. Although this problem has received a lot of attention in recent years, and significant progress has been made, there is still a great need for algorithms that are both efficient/tractable and come with guarantees of convergence or accuracy.
The main difficulty in most approaches so far is that they work in a huge and non-linear search space to compare surfaces; most algorithms resort to gradient descent from an initial guess, risking finding only a locally optimal solution. We offer a few research directions to tackle this problem based on the idea of identifying EFFICIENT search spaces that APPROXIMATE the desired optimal correspondence. In the second part of the proposal we propose to make use of the methods developed in the first part to perform global analysis of, or learn, collections of surfaces. We put special emphasis on ``real-world'' applications and intend to validate our algorithm on a significant collection, including data-sets such as biological anatomic data-sets and computer graphics' benchmark collections of surfaces. We propose to formulate and construct geometric structures on these collections and investigate their domain specific implications.

Project acronym TENDONTOBONE
Project The mechanisms that underlie the development of a tendon-bone attachment unit
Researcher (PI) Elazar Zelzer
Call Details Starting Grant (StG), LS3, ERC-2012-StG_20111109
Summary We walk, run and jump using the complex and ingenious musculoskeletal system. It is therefore puzzling that although each of its components has been extensively studied, research of the musculoskeleton as an integrated system and, in particular, of its assembly has been scarce. In recent years, studies conducted in my lab have demonstrated the centrality of cross regulation between musculoskeletal tissues in skeletogenesis. These works have provided me with the inspiration for a revolutionary hypothesis on the way tendons connect to bones, along with sufficient preliminary data on which to base it. The critical component in the assembly of the musculoskeleton is the formation of an attachment unit, where a tendon is inserted into a bone. Instead of two tissues that attach to each other, my novel hypothesis suggests that the entire attachment unit originates from a single pool of progenitor cells, which following differentiation diverges to form a tendon attached to cartilage. With the support of the ERC scheme, I will uncover the previously uncharacterized cellular origin of the attachment unit and the genetic program underlying its development.
The attachment unit is a compound tissue, as it is composed of chondrocytes at one end and of tenocytes at the other end. We will investigate the mechanisms that facilitate in situ differentiation of mesenchymal progenitor cells into two distinct cell fates, under one defined niche. In addition, I will identify the contribution of both mechanical stimuli and molecular signals to the development of the attachment unit. The ultimate goal of this program is to provide a complete picture of attachment unit development, in order to promote understanding of musculoskeletal assembly. The acquired knowledge may provide the basis for new therapies for enthesopathies, through tissue engineering or repair.

Project acronym TRANSFORM OPTICS
Project Transformation optics: cloaking, perfect imaging and horizons
Researcher (PI) Ulf Leonhardt
Summary Transformation optics grew out of ideas for invisibility cloaking devices and exploits connections between electromagnetism in media and in geometries. Invisibility has turned from fiction into science since 2006, but is far from being practical yet. Advances in the theory of transformation optics are the key for bringing invisibility closer to practicality. Probably the most important practical application of connections between media and geometries is perfect imaging, the ability to optically transfer images with a resolution not limited by the wavelength. This is because imaging lies at the heart of photolithography, the key technology used for making electronic chips. On the other hand, probably the intellectually most important application of connections between media and geometries lies in the quantum physics of the event horizon, which, for the first time, could be studied in the laboratory. The objective of this proposal is to make significant breakthroughs in (1) moving cloaking from frontier research closer to practicality, (2) turning perfect imaging into a viable technology and (3) demonstrating the quantum physics of the event horizon in the laboratory.
This project is at the cutting edge of a global communal effort in the research of metamaterials. The overarching theme of the project is to make abstract and seemingly fantastic ideas practical, by combining ideas from geometry and general relativity with the latest advances in optical metamaterials and integrated and ultrafast photonics.
Project acronym VSSC
Project Verifying and Synthesizing Software Compositions
Researcher (PI) Shmuel (Mooly) Sagiv
Host Institution (HI) TEL AVIV UNIVERSITY
Summary One of the first things a programmer must commit to in developing any significant piece of software is the representation of the data. In applications where performance or memory consumption is important, this representation is often quite complex: the data may be indexed in multiple ways and use a variety of concrete, interlinked data structures. The current situation, in which programmers either directly write these data structures themselves or use a standard data structure library, leads to two problems: 1: The particular choice of data representation is based on an expectation of what the most common workloads will be; that is, the programmer has already made cost-benefit trade-offs based on the expected distribution of operations the program will perform on these data structures. 2: It is difficult for the programmer to check or even express the high-level consistency properties of complex structures, especially when these structures are shared. This also makes software verification in existing programming languages very hard. We will investigate specification languages for describing and reasoning about program data at a much higher level. The hope is that this can reduce the inherited complexity of reasoning about programs. In tandem, we will check if the high level specifications can be semi-automatically mapped to efficient data representations. A novel aspect of our approach allows the user to define global invariants and a restricted set of high level operations, and only then to synthesize a representation that both adheres to the invariants and is highly specialized to exactly the set of operations the user requires. In contrast, the classical approach in databases is to assume nothing about the queries that must be answered; the representation must support all possible operations.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
567
Source: https://mathshistory.st-andrews.ac.uk/Biographies/Ornstein/

# Donald Samuel Ornstein

Born: 30 July 1934, New York, USA

### Biography

Donald Ornstein is known to his friends and colleagues as Don. His parents were Harry Ornstein and Rose Wisner. He was educated at Swarthmore College, Pennsylvania, and then entered the University of Chicago where he undertook research towards his Ph.D. advised by Irving Kaplansky. He was initiated as a member of the Chicago Chapter of Sigma Xi in autumn 1956. He submitted his thesis Dual Vector Spaces to the University of Chicago and was awarded a Ph.D. in 1957. However, by the time he was awarded his doctorate, Ornstein was already at the Institute for Advanced Study at Princeton where he spent two academic years 1956-58. Following these years when he concentrated on research, he was appointed as an Instructor in Mathematics at the University of Wisconsin, spending two academic years 1958-60 in this post.

Ornstein's first publication appeared in 1959 in the Annals of Mathematics. Not surprisingly, this first paper was based on his Ph.D. thesis and had the same title Dual Vector Spaces. However, his second paper, written jointly with R V Chacon, was on ergodic theory, the topic for which Ornstein is now best known. The paper A general ergodic theorem (1960) contained a proof of a conjecture that had been made by Eberhard Hopf. Two further publications appeared in 1960, namely The differentiability of transition functions and On invariant measure. Both were single authored and, in the second paper, he gave a negative answer to a long-standing question in the theory of measurable transformations. In 1960 Ornstein was appointed as an Assistant professor at Stanford University. He continued to hold a position at Stanford for the rest of his career, being promoted to an associate professor in 1963 and also being a Sloan fellow during 1963-65. During this period he married Shari Richman in 1964; they had two sons and one daughter. He continued as an associate professor at Stanford during 1965-66 and he was promoted to full professor in 1966. During the academic year 1967-68 he was a visiting professor at Cornell University and at New York University's Courant Institute.

The year 1968 was an important one for Ornstein for in that year he began proving a number of exceptionally important results which led to major advances in ergodic theory in the following years. He was invited to write the paper [2] describing these advances and to give the James K Whittemore Lectures in Mathematics given at Yale University describing his breakthroughs. The Yale lectures were published as the monograph Ergodic theory, randomness, and dynamical systems in 1974. Also in 1974 the American Mathematical Society awarded Ornstein their Bôcher Memorial Prize:-

... in recognition of his paper "Bernoulli shifts with the same entropy are isomorphic" (1970).

To understand what Ornstein proved in this paper, we quote from his own description in [1]:-

Ergodic theory, the study of measure-preserving transformations or flows, arose from the study of the long-term statistical behaviour of dynamical systems. Consider, for example, a billiard ball moving at constant speed on a rectangular table with a convex obstacle. The state of the system (the position and velocity of the ball), at one instant of time, can be described by three numbers or a point in Euclidean 3-dimensional space, and its time evolution by a flow on its state space, a subset of 3-dimensional space. The Lebesgue measure of a set does not change as it evolves and can be identified with its probability. One can abstract the statistical properties (e.g., ignoring sets of probability 0) and regard the state-space as an abstract measure space. Equivalently, one says that two flows are isomorphic if there is a one-to-one measure-preserving (probability-preserving) correspondence between their state spaces so that corresponding sets evolve in the same way (i.e., the correspondence is maintained for all time). It is sometimes convenient to discretize time (i.e., look at the flow once every minute), and this is also referred to as a transformation. Measure-preserving transformations (or flows) also arise from the study of stationary processes. The simplest examples are independent processes such as coin tossing. The outcome of each coin tossing experiment (the experiment goes on for all time) can be described as a doubly-infinite sequence of heads H and tails T. The state space is the collection of these sequences. Each subset is assigned a probability. For example, the set of all sequences that are H at time 3 and T at time 5 gets probability $\frac{1}{4}$. The passage of time shifts each sequence to the left (what used to be time 1 is now time 0). (This kind of construction works for all stochastic processes, independence and discrete time are not needed.) The above transformation is called the Bernoulli shift B(½, ½). If, instead of flipping a coin, one spins a roulette wheel with three slots of probability $p_{1}, p_{2}, p_{3}$, one would get the Bernoulli shift B($p_{1}, p_{2}, p_{3}$).

Bernoulli shifts play a central role in ergodic theory, but it was not known until 1958 whether or not all Bernoulli shifts are isomorphic. A N Kolmogorov and Ya G Sinai solved this problem by introducing a new invariant for measure-preserving transformations: the entropy, which they took from Shannon's theory of information. They [used entropy to prove] that not all Bernoulli shifts are isomorphic. The simplest case of the Ornstein isomorphism theorem (1970) states that two Bernoulli shifts of the same entropy are isomorphic.

Paul C Shields, reviewing Ornstein's book Ergodic theory, randomness, and dynamical systems (1974) writes:-

In 1969, the author solved one of the most difficult problems in ergodic theory when he showed that two Bernoulli shifts with the same entropy are isomorphic. This immediately led to many new results about measure-preserving transformations including proofs that many transformations of physical and mathematical interest are just disguised versions of Bernoulli shifts. In these lectures the author gives a clear and thorough discussion of the ideas which are the basis for this growing new branch of ergodic theory. ... Since the publication in 1974 of this book the subject has continued to grow. There is now an extensive theory of isomorphism of flows, group actions (including Ising models), and even communications channels.

Ornstein published other significant papers in 1970 in addition to Bernoulli shifts with the same entropy are isomorphic, for which he received the Bôcher Memorial Prize, although it is clear that it was this paper which was the starting point for all that followed. These other 1970 papers were Factors of Bernoulli shifts are Bernoulli shifts and Imbedding Bernoulli shifts in flows, which contained a generalisation of the results of his prize-winning paper. He continued to produce papers in the following couple of years which pushed understanding of the topic significantly forward: A Kolmogorov automorphism that is not a Bernoulli shift; A K-automorphism with no square root and Pinsker's conjecture; and The isomorphism theorem for Bernoulli flows. The paper [2] gives a very clear overview of this work.

After the publication of this outstanding work, Ornstein was a visiting professor at the Hebrew University in Jerusalem in 1975-76 and at the University of California's Mathematical Sciences Research Institute at Berkeley in 1983-84.

Perhaps the greatest honours that Ornstein has received were his election to the National Academy of Sciences in 1981 and to the American Academy of Arts and Sciences in 1991.

### References

1. D S Ornstein, Ornstein isomorphism theorem, in Michiel Hazewinkel (ed.), Encyclopaedia of Mathematics (Springer-Verlag, Berlin-Heidelberg-New York, 2002). http://eom.springer.de/
2. D S Ornstein, An Application of Ergodic Theory to Probability Theory, The Annals of Probability 1 (1) (1973), 43-58.
3. Donald Samuel Ornstein, The International Who's Who 2004 (Routledge, 2003).
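The entropy invariant described in the biography above can be made concrete in a few lines. The sketch below is an illustration added here, not part of the original page: it computes the Kolmogorov-Sinai entropy of a Bernoulli shift B(p1, ..., pn), which is -sum of p_i * log(p_i).

```javascript
// Entropy of a Bernoulli shift B(p1, ..., pn): -sum p_i * log(p_i).
// Shifts with different entropy are never isomorphic (Kolmogorov-Sinai);
// Ornstein's theorem says equal entropy is already sufficient.
function bernoulliEntropy(probs) {
  return probs.reduce((h, p) => h - p * Math.log(p), 0);
}

const coin = bernoulliEntropy([0.5, 0.5]);          // log 2
const wheel = bernoulliEntropy([1/3, 1/3, 1/3]);    // log 3
// coin !== wheel, so B(1/2,1/2) and B(1/3,1/3,1/3) are not isomorphic.
```

For instance, B(1/2,1/2) has entropy log 2 while B(1/3,1/3,1/3) has log 3, so the two shifts cannot be isomorphic; B(1/4,1/4,1/4,1/4) has entropy log 4 = 2 log 2, the equal-entropy situation that the isomorphism theorem addresses.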
Celebrating 50 years of the charity for the young and homeless

We go behind the scenes of New Horizon, which assists 2500 vulnerable people a year

by Clare Hand, August 8, 2017

Celebrating its 50th anniversary this year. Photo: PR

For half a century, the New Horizon Youth Centre has been working with vulnerable young people, providing them with the information and tools to transform their lives and escape the trap of homelessness. The centre has migrated around central London, running from Soho and Covent Garden at times, but it is now securely settled on Chalton Street. The large, barn-like building acts as a day centre for anyone aged 16-21 who finds themselves in a position of homelessness and in need of both immediate relief and more long-term support.

We popped in one Monday lunchtime to have a chat with Owen Duff, NH's media and communications coordinator. As the chef, who seemed to have a private joke with everyone who walked past, served up a hot lunch, young guys sat at computers, others played guitar warming up for their afternoon of busking, while some popped in and out of counselling sessions. In this hotbed of warmth and activity, we still managed to get a greeting and an offer of a cuppa by almost everyone who walked past.

NH pancake stall, open twice weekly. Photo: Dan Hall

This atmosphere, Owen later tells us, is what makes this place so unique. "There is just such a strong sense of community here," he says, "which really translates across to the young people." The centre offers a kind of one-stop-shop for young people. Services range from offering short-term assistance through providing luggage storage, showers, food and laundry; these are particularly important for people who go to college or are in work but are sleeping rough.
The centre also has an in-house counsellor and an on-site nurse, and provides advice and training on a range of social, psychological and legal issues, as well as having a very strong employment and education team. "In a way, we are supplying the support network that middle class people often have available to them," says Owen. "We're trying to provide the knowledge and back-up for people who just don't have it."

New Horizon also sources emergency accommodation for those who are in work but are unable to fend for themselves on minimum wage. They work with a housing association to provide reduced rent to young people. "On top of this, we do all kinds of workshops for art, health and fitness, and we also have our own music studio, so there's stuff going on all the time."

Bob Hoskins and NH patron Jon Snow at the opening of the Euston centre in 1995. Photo: NH

Owen highlights that their services are available regardless of where the individual is from. "On average, New Horizon assists 2500 young people a year from across London. A lot of the time, the support an individual can receive will be tied to their borough's ability to assist them. Whereas we're a London-wide service: when people fall out of their council's remit they get referred to us, this includes helping young people who are seeking asylum."

The youth centre offers a unique and holistic service that intervenes at this crucial stage, preventing young homeless people from becoming entrenched rough sleepers. "Obviously homelessness is a big area of people's concern, but there are particular issues around young street sleepers," he says. "If we get to them at this early stage then there's a good chance that we can prevent them from becoming trapped in a cycle of homelessness."

New Horizon is located at 68 Chalton St NW1. More info about donating, volunteering and events here.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
7,884
Q: Fabric.js animate sprites and text field

About a month ago I asked how to queue animations for objects in Fabric.js, and @nickvans responded with the code below. Based on this I changed the code and built, with HTML5 drag and drop, an application which allows you to give each object on the canvas its own set of commands, basically creating a moving scene.

My 1st question is: is it possible to also use sprites? So instead of the triangle in the example below it would be an animated sprite that would also change its position. And my 2nd question: is it possible to somehow add a text field that would follow the object during its movement? Something like those comic speech bubbles. Thanks in advance for any tips.

    function startAnimationQueue(animationQueue){ // queues up the animations for the shape
        var runningDuration = 0; // variable that adds up the animationQueue durations
        for (var i=0; i<animationQueue.length; i++){
            var animationDefinition = animationQueue[i];
            // Create a closure around the animationDefiniton so that each setTimeout gets sequential animationDefinition inputs
            var fn = (function(animationDefinition){
                return function(){
                    triangle.animate('left', animationDefinition.left, {duration: animationDefinition.duration, onChange: canvas.renderAll.bind(canvas)});
                    triangle.animate('top', animationDefinition.top, {duration: animationDefinition.duration, onChange: canvas.renderAll.bind(canvas)});
                    // Note: you can animate additional attributes here if you want, just add additional attributes to the objects in
                    // the animationQueue list. You could also have one of those inputs be the object to be animated in case you want
                    // to animate multiple objects using the same queue.
                };
            });

            // Create the timeout that will apply the transformations sequentially
            // If you want you could set the window.setTimeout to a variable that you could destroy if you need
            // to interrupt the animation queue after you create it (but before it finishes)
            window.setTimeout(fn(animationDefinition), runningDuration);

            // set the next setTimeout duration to be .duration later than the previous one
            // this makes the second animation fire after the first one completes
            runningDuration += animationDefinition.duration;
        }
    }

    document.onreadystatechange = function () {
        if (document.readyState == "complete") {
            // I put the canvas init stuff in here because of (I think) a failed race condition or something that caused
            // your original method to fail in Chrome
            window.canvas = new fabric.Canvas('scene');
            window.triangle = new fabric.Triangle({
                width: 30,
                height: 30,
                fill: 'red',
                left: 30,
                top: 0
            });
            window.canvas.add(window.triangle);
            window.canvas.renderAll();

            // Create a list of animations to apply
            var animationQueue = [
                {"left": "+=0", "top": "+=100", "duration": 1000},
                {"left": "+=55", "top": "+=0", "duration": 2000}
            ];

            // Apply the animations in sequence using window.setTimeout
            startAnimationQueue(animationQueue);
        }
    };

and the HTML:

    <!DOCTYPE html>
    <html>
    <head>
        <meta charset="utf-8">
        <title>JS Bin</title>
        <script src="http://cdnjs.cloudflare.com/ajax/libs/fabric.js/1.4.0/fabric.min.js"></script>
    </head>
    <body>
        <canvas id="scene" width="400" height="400"></canvas>
    </body>
    </html>
4,918
Bespoke steel planters were commissioned for the launch of Wardian London – the prime residential development by EcoWorld Ballymore at Marsh Walk on the Isle of Dogs, London E14. The initial planters commissioned, shown here, are used to green the floating Design Cube at Ballymore, the marketing suite for Wardian London. A total of 16nr. planters were supplied, in a variety of configurations: including tree planters up to dims. L 1200 x W 1200 x H 900mm, in addition to a number of large rectangular trough planters of various lengths and profiles [the largest being L 2800 x W 1350 x H 530mm]. All planters were manufactured from 1.5mm thick Zintec Steel, polyester powder coated to RAL 7021 [Black grey]. Just a three minute walk from Canary Wharf, Wardian London will comprise of 624 suites, apartments and penthouses in two towers of 50 and 55 storeys. The architect for Wardian London is Glenn Howells and the developer is EcoWorld Ballymore. The landscape architects for Wardian London are Camlins, and landscaping is at the heart of EcoWorld Ballymore's vision "to create a tranquil haven of nature in the heart of London's new financial centre". Gardens and exotic plants will feature throughout both towers: including a sky garden; an open-air swimming pool within a tropical environment that includes a living wall; a public plaza with sunken gardens; and generous private balconies. Camlins' Huw Morgan said: "Wardian London offers an exceptional opportunity to engage with the natural elements while living in the thriving hub of the city. Green space is one of the most sought after commodities in London and Wardian London capitalises on this with abundant planting of more than a hundred different exotic species of plants and flowers across the development. Creating hidden sanctuaries, inspired by Wardian Cases, was of utmost importance in this project to deliver a tangible retreat from city living." 
IOTA CAD Drawing and Images License Agreement.doc 56.5KB IOTA Project Images - Wardian London, Design Cube at Ballymore.zip 12.53MB Featured Bespoke Material Limitless design scope and a flawless surface finish
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
31
BETHLEHEM – Mother Perpetua (née Laura) Giampietro O.S.B., 69, a member of the Abbey of Regina Laudis, died Feb. 7, 2019, after a long illness. She loved liturgy and Gregorian chant and served as mistress of ceremonies. An accomplished potter, she taught pottery to community members and interns. She was a gifted seamstress, worked in the monastic Infirmary, and started the Abbey's compost department. Mother Perpetua heroically battled early onset Alzheimer's disease for the last ten years of her life. Laura Catherine Antoinette was born in Washington, D.C., the second of 11 children to Daphne and Professor Alexander Giampietro, an internationally-regarded sculptor and ceramic artist who taught at Catholic University of America. Professor and Mrs. Giampietro instilled in their children a love of beauty, striving to make art and music a part of everyday life. Mother Perpetua is survived by her monastic community; her mother, Daphne; her siblings, Matilda, Isabelle and her husband, George, Joseph and his wife, Sharon, Mary and her husband, Larry, Frances, Martha, Charles and his wife, Rosellyn, Rev. Father Anthony C.S.B., Teresa and her husband, Mickey, and Gordon and his wife, Mia; and beloved nieces and nephews. She was predeceased by her father, Alexander. The loving care of Mother by the nurses and staff of the Lutheran Home and Hartford HealthCare at Home-Hospice Care is gratefully acknowledged. Calling hours will be Sunday from 2:30 to 5 p.m. at the Abbey Church Jesu Fili Mariae, 15 Robert Leather Road, Bethlehem, followed by Vespers, and on Monday at 9 a.m. followed by requiem Mass at 10 a.m. and burial in the Abbey cemetery. In lieu of flowers, donations to New Horizons Renovation Project would be gratefully accepted online or c/o The Abbey of Regina Laudis, 273 Flanders Road, Bethlehem, CT 06751. Please visit abbeyofreginalaudis.org, or berginfuneralhome.com for more information and Mother Perpetua's life story.
{ "redpajama_set_name": "RedPajamaC4" }
7,752
Green Jade is a stone that will help you in the manifestation of your dreams into reality. It's considered a very lucky gemstone, usually bringing the peace and harmony in tense or conflicted situations. It will also ensure that you have serenity and stability in the physical, emotional, mental, and spiritual aspect of your life. This stone will show you that when everything in your life is at peace and in balance, it will be so much easier to manifest and achieve your goals. Green Jade will also instill courage and compassion in you. The more wealth and abundance that you receive, the more generous you will also become.
{ "redpajama_set_name": "RedPajamaC4" }
8,993
<?php namespace makallio85\YamlRoute\Test; class YamlRouteTest extends \PHPUnit_Framework_TestCase { /** * Call protected/private method of a class. * * @param object &$object Instantiated object that we will run method on. * @param string $methodName Method name to call * @param array $parameters Array of parameters to pass into method. * * @return mixed Method return. */ protected function _invokeMethod(&$object, $methodName, array $parameters = []) { $reflection = new \ReflectionClass(get_class($object)); $method = $reflection->getMethod($methodName); $method->setAccessible(true); return $method->invokeArgs($object, $parameters); } public function testInit() {} }
{ "redpajama_set_name": "RedPajamaGithub" }
6,364
Paul V. Cornell du Houx Paul Cornell du Houx grew up among several Western countries. At Amherst College, he majored separately in economics and French. His honors thesis identified the aesthetic mysticism in the works of Gustave Flaubert. This led to his early attempts to bring cross-cultural insights to clarify a crisis some economists saw in the utilitarian way mainstream theory was moving. He decided to investigate the marketplace first-hand, rather than take the well-worn academic path that one day would lead the world economy into the Great Recession and now largely unprepared into the 2020 pandemic. Clearly, capitalism has gone begging for something more than money. While looking for answers, Cornell du Houx wrote currency reports for the MSA consultancy newsletter in the London Square Mile, audited companies for PwC, studied law at the Inns of Court, sold computers, and with his patented improvements on an electrical connector got involved in a start-up. Attracted to the succinct form of the ancient sutra, the author began gathering ideas in the late seventies under the Yoganomics portmanteau, written as a conversation piece, in the spirit of his storytelling grandfather, a Kansas farmer with a talent for making up words and combining disparate ideas with comedy. They would not be the only ones to be pleasantly surprised at further plays on the word, economics. Eventually, Cornell du Houx developed the math proposed in Unicycle that lets us read the ethics of natural law within the environment. In 2020, he decided to rewrite Yoganomics accordingly. Somewhere along the line, he wrote What the Farmer Told the Bard, a near-apocalyptic novel involving runes encoded in a Shakespeare monument. In 1991 Ramona and Paul settled with their children in Maine. 
Publishing books, art, and the news magazine Maine Insights led to founding the Solon Center for Research and Publishing, with the mission of helping to build community in Maine and beyond, through words and art, science and music. Gallery Fukurou at 20 Main St., Rockland, Maine, opened to the public in 2018. At the Solon Center, Elected Officials to Protect America (EOPA, ProtectingAmerica.net) works to combat climate change, often focusing on water security, with the help and leadership of military veterans. It is the author's hope that the sense of a deep democracy in nature, which inspired Native American communities and merged with our Founders' Enlightenment vision of natural law, will help bring hearts and minds together in time. Paul's blog on his book, here.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
1,217