--- author: - | Alexandre Popoff\ al.popoff@free.fr\ France title: Towards A Categorical Approach of Transformational Music Theory --- *Abstract:*\ Transformational music theory mainly deals with groups and group actions on sets, which are usually constituted by chords. For example, neo-Riemannian theory uses the dihedral group $D_{24}$ to study transformations between major and minor triads, the building blocks of classical and romantic harmony. Since the advent of neo-Riemannian theory, many generalizations have been proposed, based on other sets of chords, other groups, etc. However, music theory also faces problems, for example when defining transformations between chords of different cardinalities, or for transformations that are not necessarily invertible. This paper introduces a categorical construction of musical transformations based on category extensions using groupoids. This can be seen as a generalization of a previous work which aimed at building generalized neo-Riemannian groups of transformations based on group extensions. The categorical extension construction allows the definition of partial transformations between different set-classes. Moreover, the usual groups of musical transformations can be recovered from these category extensions. Introduction ============ After the pioneering work of David Lewin [@lewin], music theory has seen developments which have relied heavily on the group structure, wherein group elements are seen as operations between set elements which usually represent chords. In neo-Riemannian theory, the classical set of elements was originally constituted by the major and minor chords, and the typical corresponding group of transformations is isomorphic to the dihedral group $D_{24}$ of 24 elements, whether it acts through the famous L, R and P operations, through the transposition and inversion operators [@cohn1; @cohn2; @cohn3; @capuzzo], or through many others (see for example the Schritt-Wechsel group) [@douthett].
Following its application to major/minor triads, generalizations have been actively researched. For example, transformational theory has also been applied to other sets of chords [@straus]. Groups of transformations different from the dihedral one have been proposed. Julian Hook’s UTT group is a much larger group of order 288 and has at its core a wreath product construction [@hook1; @hook2]. Wreath products were also studied by Robert Peck in a more general setting [@peck1]. More recently, Robert Peck also introduced imaginary transformations [@peck2], in which quaternion groups, dicyclic groups and other extraspecial groups appear. A different approach has been undertaken in [@popoff], in an attempt to unify all these different groups, in which generalized neo-Riemannian groups of musical transformations are built as extensions. However, the current group-based transformational theories raise multiple issues. One of them is that they sometimes fail to provide interesting groups of transformations for some sets of chords (an example will be given below). A second one is that transformational theories have also failed to provide a solution to the cardinality problem, namely finding transformations between chords of different cardinalities. Hook [@hook3] introduced another approach, namely cross-type transformations, to circumvent this problem. In this paper, we introduce a categorical approach to musical transformations with the aim of generalizing existing constructions. This work can be viewed as a generalization of the previous work on group extensions, by using groupoids instead of groups, and by building the corresponding groupoid extensions. Note that a categorical approach to music theory has been heavily investigated in the book *The Topos of Music* by G. Mazzola [@mazzola].
Mazzola deplores in particular that *“Although the theory of categories has been around since the early 1940s and is even recognized by computer scientists, no attempt is visible in AST (Atonal Set Theory) to deal with morphisms between pcsets, for example”*. While this paper is rather technical and more mathematically- than musically-oriented, we nevertheless hope that it will provide useful leads for application to music analysis. The first section highlights some of the limitations of current transformational theories based on particular examples. The second section presents the categorical foundations of musical transformations. Finally, the third part explores the relation between the categories constructed in section 2 and the more familiar groups of musical transformations, showing in particular how wreath products are naturally recovered from the category extensions. On some limitations of transformational theories ================================================ Groups of transformations acting on three set-classes ----------------------------------------------------- Consider the pitch-class sets \[0,4,7\], \[0,2,5\] and \[0,4,5\], as represented in Figure \[fig:setClasses\]. These three set-classes will be denoted by M, $\alpha$ and $\beta$ respectively. They have a well-defined root, which can therefore take any value in $\mathbb{Z}_{12}$. In this paper we will denote by $n_t$ a chord of root $n$ and of type $t$. By analogy with the action of the $T/I$ group on the set of major and minor triads, transposition operators $T_i$ can be defined for M, $\alpha$ and $\beta$. The action of these transposition operators is straightforward, as $T_i$ takes a chord $n_t$ to $(n+i)_t$ (all operations are understood modulo 12). There also exist voice-leading transformations $VL$ between these set-classes.
For example, if one represents a chord as an ordered set $(x,y,z)$, where $x$ is the root, we can define the $VL$ transformation as $$VL: \left( \begin{array}{lll}x\\y\\z\end{array} \right) \longmapsto \left( \begin{array}{lll}z+2\\x-1\\y-2\end{array} \right)$$ Using the notation $n_t$ for chords, this transformation is then defined as: $$VL: \left( \begin{array}{lll}n_M\\n_\alpha\\n_\beta\end{array} \right) \longmapsto \left( \begin{array}{lll}(n-3)_\alpha\\(n-5)_\beta\\(n-5)_M\end{array} \right)$$ We can define another voice-leading transformation $VL'$ with a similar action as $$VL': \left( \begin{array}{lll}x\\y\\z\end{array} \right) \longmapsto \left( \begin{array}{lll}z+4\\x+1\\y\end{array} \right)$$ or equivalently: $$VL': \left( \begin{array}{lll}n_M\\n_\alpha\\n_\beta\end{array} \right) \longmapsto \left( \begin{array}{lll}(n+1)_\alpha\\(n-3)_\beta\\(n-3)_M\end{array} \right)$$ ![The action of the voice-leading $VL$ operation on set-classes M, $\alpha$ and $\beta$[]{data-label="fig:setClassesVoiceLeading"}](MAlphaBetaVoiceLeading.pdf) We can notice that $VL^{-3}=VL'^{21}=T_1$. The $VL$ and $VL'$ operations are clearly contextual [@kochavi] since their action on the root depends on the type of the chord on which they act. Since their action switches the type of the chords, they can be seen as “generalized inversions” similar to the $I$ transformations of the $T/I$ group, or the $P$, $L$ or $R$ operations of the $PLR$ group. If we wish to build a group which includes both the transposition operators and these generalized inversions, we will obtain that $\langle T_i,VL\rangle = \langle T_i,VL'\rangle \cong \mathbb{Z}_{36}$, as can be checked with any computational group theory software such as GAP. The construction introduced in [@popoff] aims at building generalized neo-Riemannian groups of musical transformations which include both transposition and inversion operators.
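The relation $VL^{-3}=VL'^{21}=T_1$ can be checked by brute force on all 36 chords. The following sketch (an illustrative implementation, not part of the original text) encodes the chord actions defined above; since $\langle T_i, VL\rangle \cong \mathbb{Z}_{36}$ and $VL$ has order 36, $VL^{-3}=VL^{33}$.

```python
# Hypothetical verification of VL^{-3} = VL'^{21} = T_1 on chords n_t,
# with t in {M, alpha, beta}, all root arithmetic taken modulo 12.
TYPES = ["M", "alpha", "beta"]

def T(i, chord):
    """Transposition operator T_i: n_t -> (n+i)_t."""
    n, t = chord
    return ((n + i) % 12, t)

def VL(chord):
    """VL: n_M -> (n-3)_alpha, n_alpha -> (n-5)_beta, n_beta -> (n-5)_M."""
    n, t = chord
    return {"M": ((n - 3) % 12, "alpha"),
            "alpha": ((n - 5) % 12, "beta"),
            "beta": ((n - 5) % 12, "M")}[t]

def VLp(chord):
    """VL': n_M -> (n+1)_alpha, n_alpha -> (n-3)_beta, n_beta -> (n-3)_M."""
    n, t = chord
    return {"M": ((n + 1) % 12, "alpha"),
            "alpha": ((n - 3) % 12, "beta"),
            "beta": ((n - 3) % 12, "M")}[t]

def iterate(f, k, chord):
    for _ in range(k):
        chord = f(chord)
    return chord

chords = [(n, t) for n in range(12) for t in TYPES]
# VL has order 36, so VL^{-3} = VL^{33}; both it and VL'^{21} equal T_1.
assert all(iterate(VL, 33, c) == T(1, c) for c in chords)
assert all(iterate(VLp, 21, c) == T(1, c) for c in chords)
```

A direct computation also shows $VL^3 = T_{11}$ and $VL'^3 = T_7$, from which both identities follow.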
These groups $G$ are built as extensions of $Z$ by $H$, where $Z$ is the group of transpositions and $H$ can be seen as a group of “formal inversions”. In the present case, $Z$ would be isomorphic to $\mathbb{Z}_{12}$ whereas $H$ would be isomorphic to $\mathbb{Z}_3$ to reflect the inversions between the three different pitch-class sets. If one tries to apply this construction to build a group extension $G$ of simply transitive musical transformations as $$1 \to \mathbb{Z}_{12} \to G \to \mathbb{Z}_{3} \to 1$$ one ends up with only two abelian groups, namely $G=\mathbb{Z}_{12} \times \mathbb{Z}_{3}$ or $G=\mathbb{Z}_{36}$. The reason for this is that $\mathbb{Z}_{12}$ has too few automorphisms (remember that $Aut(\mathbb{Z}_{12}) \cong \mathbb{Z}_2 \times \mathbb{Z}_2$).
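The automorphism claim can be verified directly: $Aut(\mathbb{Z}_{12})$ is realized by multiplication by the units modulo 12, and the short check below (a sketch, not from the original text) shows that every unit has order at most 2, so $\mathbb{Z}_3$ can only act trivially on $\mathbb{Z}_{12}$ and both extensions are abelian.

```python
from math import gcd

# Aut(Z_12) consists of multiplication by the units modulo 12.
units = [a for a in range(1, 12) if gcd(a, 12) == 1]
assert units == [1, 5, 7, 11]          # only four automorphisms

# Each unit squares to 1 mod 12, so Aut(Z_12) is isomorphic to Z_2 x Z_2
# and contains no element of order 3; hence any extension
# 1 -> Z_12 -> G -> Z_3 -> 1 has trivial action and is abelian.
assert all((a * a) % 12 == 1 for a in units)
```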
--- abstract: 'The Gao-Wald theorem related to time delay [@gaowald] assumes that the Null Energy Condition and the Null Generic Condition are satisfied, and that the underlying gravity theory is General Relativity. In the present work it is shown that the Gao-Wald theorem remains true if the space time is null geodesically complete, if the curvature satisfies some reasonable properties stated along the text, and if every null geodesic contains at least two conjugate points. This result may apply to modified theories of gravity and to models violating the Null Energy Condition as well.' author: - 'Juliana Osorio Morales [^1] and Osvaldo P. Santillán [^2]' title: Variations of a theorem due to Gao and Wald --- Introduction ============ Since the introduction of the *Alcubierre bubble* [@alcubierre] or the *Krasnikov tube* [@krasnikov], there has been a growing interest in the notion of time delay in General Relativity as well as in modified theories of gravity [@olum]. The Alcubierre bubble is a space time in which it is possible to make a round trip between two stars $A$ and $B$ separated by a proper distance $D$ in such a way that a fixed observer at the star $A$ measures the proper time for the trip as less than $2D/c$. In fact, this time can be made arbitrarily small. This fact does not indicate that the observers travel faster than light, as they are traveling inside their light cone. The Alcubierre construction employs the fact that, for two comoving observers in an expanding universe, the rate of change of the proper distance with respect to the proper time may be larger than $c$, or much smaller if there is contraction instead of expansion. The Alcubierre space time is Minkowski almost everywhere, except at a bubble around the traveler which endures only for a finite time, designed to make the round trip proper time measured by an observer at the star $A$ as small as possible. Details can be found in [@alcubierre].
A cautionary example can be found in [@olum]. In this reference, a space time which appears to allow time advance was constructed, but it was proven that it is in fact the flat Minkowski metric in unusual coordinates. This suggests that analyzing time advance by simple inspection of the metric may be misleading. However, given a space time that is Minkowski outside a tube or a bubble, such as the Alcubierre or Krasnikov space times, the notion of time delay is well defined. By use of some results due to Tipler and Hawking [@tipler1]-[@tipler3], it can be shown that all these examples violate the Null Energy Conditions at least in some region of the manifold. Recall that the Null Energy Condition states that the matter content energy momentum tensor satisfies $T_{\mu\nu}k^\mu k^\nu\geq 0$ for every null vector $k^\mu$ tangent to any null geodesic $\gamma$. This implies, in the context of General Relativity, that $R_{\mu\nu}k^\mu k^\nu\geq 0$ [@Wald]. On the other hand, the Null Generic Condition means that $k_{[\alpha} R_{\beta]\sigma\delta[\epsilon}k_{\gamma]}k^\sigma k^\delta\neq 0$ for some point in the geodesic $\gamma$. Both conditions together imply that any null geodesic $\gamma(\lambda)$ possesses at least a pair of conjugate points $p$ and $q$, if it is past and future inextendible, see [@Wald Proposition 9.3.7]. These results hold in the context of General Relativity, and should not be extrapolated to modified gravity theories without further analysis. The results just described raise the question of whether time delay could hold in theories which do not violate the Null Energy Conditions. In this context, a theorem due to Gao and Wald [@gaowald] may be relevant. Its statement is the following.\ *Gao-Wald theorem:* Consider a null geodesically complete space time ($M$, $g_{\mu\nu}$) such that the Null Energy and Null Generic Conditions are satisfied.
Then, given a compact region $K$, there exists a compact $K'$ containing $K$ such that for any pair of points $p, q\notin K'$ with $q$ belonging to $J_+(p)-I_+(p)$, no causal curve $\gamma$ connecting both points intersects $K$.\ The Gao-Wald theorem stated above is related to the time advance hypothesis as follows. If it were possible to deform the geometry in a region $K$, similar perhaps to a bubble, in order to produce a time advance, then a fastest null geodesic would enter the region $K$ in order to minimize this time. The theorem states that this is not possible if the Null Energy Conditions and Null Generic Conditions are satisfied in the space time in consideration. However, there is no control over the size of the region $K'$, thus this theorem should be considered only as a weak version of a time advance hypothesis. The aim of the present work is twofold. The first purpose is to show that the Null Energy and Null Generic conditions are not mandatory, nor is working in the context of General Relativity, for the Gao-Wald theorem to be true. It will be shown that the Gao-Wald theorem holds when the following three requirements are satisfied.\ - *First requirement:* The space time ($M$, $g_{\mu\nu}$) is null geodesically complete.\ - *Second requirement:* Every null geodesic possesses at least two conjugate points.\ - *Third requirement:* Consider the set $S$ of pairs $\Lambda_0=$($p_0$, $k_0^\mu$) with $p_0$ a point in $M$ and $k_0^\mu$ a null vector in $TM_{p_0}$ properly normalized (see formula (\[norma\]) below) and defining a null geodesic $\gamma_0$. Then there exists an open set $O$ in $S$ containing $\Lambda_0$ for which the following two properties hold. For every pair $\Lambda=$($p$, $k^\mu$) in $O$, the corresponding geodesic $\gamma_\Lambda(\lambda)$ will possess a conjugate point $q$ to $p$, with $q \in J_+(p)-I_+(p)$.
Furthermore, the map $h: O\to M$ such that $h(\Lambda)=q$ is continuous at $\Lambda_0$.\ The two properties described in the *third requirement* look a bit technical, but the intuition behind them is the following. The first implies that, for a geodesic with two conjugate points $p_0$ and $q_0$, there exists an open set around $p_0$ such that all the points $p$ in the open set will have a conjugate point $q$ with respect to some null geodesic emanating from them. The second part states that the conjugate point $q$ to $p$ will be very close to $q_0$ when $p$ is close to $p_0$ and when the geodesics are, in a very rough sense, “pointing in similar directions”. The second and main purpose of the present work is to prove that the *second requirement* implies the *third one* under some more or less reasonable hypotheses about the curvature of the space time. We feel that this statement may be relevant for extending the Gao-Wald results to more general gravity theories or to models violating the Null Energy Conditions. The organization of the present work is as follows. In section 2 some generalities about conjugate points in generic space times are discussed. In addition, certain topological issues related to the light cones in space times are also presented. At the end, a proof of the *third requirement* when the underlying model is General Relativity with Null Energy and Null Generic Conditions is outlined. This is included for completeness, as this is one of the results to be generalized here. In section 3, some properties of the curvature of the space time are presented, related neither to General Relativity nor to the weak and strong energy conditions, ensuring that the *second requirement* implies the *third requirement*. In section 4 the aforementioned implication is proved explicitly by means of some propositions described in section 3. This section is rather technical.
In section 5, the modified Gao-Wald theorem is proved explicitly, and the possible application of the obtained results is discussed. The *third requirement* in GR with Null Energy and Null Generic Conditions ========================================================================== As discussed above, the Gao-Wald theorem relies on the notion of conjugate points. Thus, it is convenient to recall some basic but important concepts about them, taking into account some standard references [@Wald]-[@penrose]. In addition, at the end of this section, a sketch of the proof of the *third requirement* in the context of GR and with Null Energy and Null Generic Conditions [@gaowald] is included. Null geodesics and conjugate points ----------------------------------- In the present discussion, the space time ($M$, $g_{\mu\nu}$) is assumed to be null geodesically complete and such that there exists a globally defined time like future
--- abstract: 'We investigate the relaxation process of ferromagnetic domains in 2D subjected to the influence of both static disorder of variable strength and weak interactions. The domains are represented by a two-species bosonic mixture of $^{87}$Rb ultracold atoms, such that initially each species lies on the left and right halves of a square lattice. The dynamics of the double domain is followed by describing the two-component superfluid, at mean field level, through the time dependent Gross-Pitaevskii coupled equations, considering values of the intra- and inter-species interaction, reachable in current experimental setups, that guarantee miscibility of the components. A robust analysis for several values of the inter-species interaction leads us to conclude that the presence of structural disorder slows down the relaxation process of the initial ferromagnetic order as the disorder amplitude grows.' author: - 'C. Madroñero' - 'G.A. Domínguez-Castro' - 'L. A.' - 'R. Paredes' --- Introduction ============ The dynamics of magnetic domains has been widely studied [@Thomson]. The origin of this dynamics can be attributed to several factors, for instance, the presence of external drivings such as magnetic or electric fields, the existence of spin-polarized currents inducing the transference of momentum to the domain wall [@Thiaville; @Franke], or the inner dynamics associated with both the interactions between the microscopic constituents as well as the energetic landscape where the constituents move. In the present investigation, we concentrate on analyzing the dynamics of the magnetic domains arising from the last causes. In particular, we focus on the effects of energy disorder in preventing the motion of spin domains.
Motivated by the notable control achieved with large conglomerates of atoms in their quantum degenerate state, and particularly the production of mixtures composed of either Bose condensates in different hyperfine states [@Myatt] or different atomic species [@Modugno; @Thalhammer; @Lercher; @McCarron; @Wang], confined in particular geometries [@Hinds; @Grimm; @Lewenstein; @Gross; @LaRooji], we propose here the design of an [*ultracold atom device*]{} to quantum simulate the decay of magnetization in magnetic domains in disordered square lattices in 2D. Our proposal is based on various experimental situations, previously performed with $^{87}$Rb atoms, intended to explore the many-body localization phenomenon [@Choi; @IBloch]. In particular, in [@Choi], the initial state prepared is a Bose condensate composed of about one hundred atoms confined in a 2D square lattice in its Mott equilibrium state, which is then allowed to evolve in a disordered potential under its own dynamics after suddenly changing an external parameter. Such a quantum quench protocol, planned to track the effects of disorder on the atom flux moving across the 2D lattice, together with the possibility of spatially separating different hyperfine components, are the basis of our proposal to study the dynamics of the ferromagnetic domains, particularly its magnetization decay. As we describe below, in this work we shall consider a two-species Bose condensate as the analog of a double spin domain in which each hyperfine component lies in one of the halves of an inhomogeneous square lattice, thus setting an initial configuration that will evolve in a disordered medium (see Fig. \[Figure1\]). This arrangement, together with a recent study, performed at mean field level, where the effect of disorder is to induce the emergence of spatially localized densities as a function of the disorder magnitude [@Gonzalez], are our starting point to study the dynamics of the double spin domain.
Here we present the results of an extensive set of numerical calculations performed, at the mean-field level through the coupled Gross-Pitaevskii (GP) equations, to describe the evolution in time of the hyperfine spin components spatially separated at $t=0$ and then allowed to evolve under the influence of non-correlated static disorder. Working within the superfluid regime, that is, considering values of the intra-species interaction coupling for which the system is far from the Mott insulating phases (MI), we analyze the evolution of the initial state for different values of the ratio between intra- and inter-species interaction strengths. This work is organized as follows. In section 2, we present the model that we use to describe the relaxation of the ferromagnetic domains under the influence of disorder. Furthermore, we briefly explain the construction of the initial state from which the evolution in time is followed. In section 3 we show the results of our numerical study of the relaxation process of the ferromagnetic domains, as a function of the disorder amplitude and different interaction strengths. Finally, we summarize our findings. Model and initial state preparation {#section2} =================================== The model here proposed to study the persistence of magnetization in definite regions of space is based, as described previously, on a series of experimental designs created with ultracold $^{87}$Rb atoms confined in 2D optical lattices, and their remarkable attribute of generating localized states as a result of both disorder and two-body interactions. Here we concentrate on weakly interacting systems subjected to disorder.
The system under study consists of a mixture of two hyperfine spin components, $|\uparrow \rangle= |F=1,m_F=-1\rangle$ and $|\downarrow \rangle=|F=2,m_F=-2\rangle$, lying in a 2D inhomogeneous square lattice, represented by $V_{ \mathrm{ext}}\left(\vec {r}\right)$. Within the mean-field formalism the wave functions $\Psi_{\uparrow,\downarrow}$ of the two species $|\uparrow \rangle$ and $|\downarrow \rangle$ obey the following effective coupled GP equations: $$\begin{aligned} i\hbar \frac { \partial \Psi _{\uparrow} (\vec {r},t)}{ \partial t } =\left[ H_0(\vec {r}) + g_{\uparrow\uparrow}|\Psi_{\uparrow}|^{2} + g_{\uparrow\downarrow}|\Psi_{\downarrow}|^{2} \right] \Psi_{\uparrow}(\vec{r} ,t)\cr i\hbar \frac { \partial \Psi _{\downarrow} (\vec {r},t)}{ \partial t } =\left[ H_0(\vec {r}) + g_{\downarrow\downarrow}|\Psi_{\downarrow}|^{2} + g_{\downarrow\uparrow}|\Psi_{\uparrow}|^{2} \right] \Psi_{\downarrow}(\vec{r} ,t), \label{coupledGP}\end{aligned}$$ where $H_0(\vec {r})= -\frac { \hbar^ 2 }{2m} \nabla_{\perp}^ 2 +V_{ \mathrm{ext}}\left(\vec {r}\right)$, with $\nabla_\perp^2=\frac{\partial^2}{\partial x^2}+\frac{\partial^2}{\partial y^2}$ the Laplacian operator in $2$D and $m$ the equal mass of the two spin components. The external potential in 2D has the following form: $$\begin{aligned} V_{ \mathrm{ext}}\left(\vec {r}\right)= \frac{1}{2} m \omega_r^2 r^2+V_{0}^\delta \Bigg[ \sin^2 \left({\frac{\pi x}{a}}\right)+ \sin^2 \left({\frac{\pi y}{a}}\right) \Bigg],\end{aligned}$$ where $\vec {r}= x \hat i +y \hat j$, $\omega_{r}$ is the radial harmonic frequency, fixed to a common value used in current experiments, $\omega_{r} = 2\pi\times 50$ Hz, $a$ is the lattice constant, and $V_{0}^\delta= V_{0}(1+{\epsilon_\delta(x,y)})$ is the potential depth at each point $(x,y)$.
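The disordered external potential described above is straightforward to generate on a grid. The following sketch constructs $V_{\mathrm{ext}}$ with a non-correlated random depth $V_0^\delta$; the numerical values (grid size, $V_0$, $\delta$, units) are illustrative assumptions, not parameters from the original text.

```python
import numpy as np

# Illustrative sketch of V_ext(x, y) = (1/2) m w_r^2 r^2
#   + V_0 (1 + eps_delta(x, y)) [sin^2(pi x / a) + sin^2(pi y / a)],
# with eps_delta drawn uniformly from [-delta, delta] at each site.
rng = np.random.default_rng(0)
a = 1.0                    # lattice constant (arbitrary units)
m_omega2 = 1.0             # m * omega_r^2 in the same arbitrary units
V0, delta = 5.0, 0.3       # clean lattice depth and disorder amplitude
L, N = 10.0, 201           # box size and grid points per direction
x = np.linspace(-L / 2, L / 2, N)
X, Y = np.meshgrid(x, x, indexing="ij")

eps = rng.uniform(-delta, delta, size=X.shape)      # epsilon_delta(x, y)
V_lattice = np.sin(np.pi * X / a) ** 2 + np.sin(np.pi * Y / a) ** 2
V_ext = 0.5 * m_omega2 * (X**2 + Y**2) + V0 * (1.0 + eps) * V_lattice
```

Since $\delta < 1$, the random depth $V_0(1+\epsilon_\delta)$ stays positive, so the potential never drops below the harmonic envelope.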
The function $\epsilon_{\delta} (x, y)$ represents non-correlated disorder spanned across space and takes random values in the interval $\epsilon_{\delta} (x, y) \in [-\delta,\delta]$, where $\delta \in [0,1]$ is the disorder amplitude. The random depth $V_{0}^\delta$ mimics the disordered environment introduced by speckle patterns [@Bouyer] and is scaled in units of the recoil energy $E_R= \frac{\hbar^2 k^2}{2m}$, with $k=\pi / a$. Thus, besides the contribution of the harmonic confinement, the potential depth at each point $(x,y)$ is the result of adding/subtracting a random number $\epsilon_{\delta}(x,y)$ to the amplitude of the potential defining the square lattice at zero disorder. Several previous studies have shown that the mean-field approximation describes the main effects of weakly interacting disordered systems [@Ray; @Schulte; @Adhikari; @Kobayashi; @Gonzalez]. The values of the effective interaction couplings $g_{\sigma\sigma'}$ with $\sigma, \sigma' = \{ \uparrow, \
--- abstract: 'We consider a family of manifolds with a class of degenerating warped product metrics $g_\epsilon=\rho(\epsilon,t)^{2a}dt^2 +\rho(\epsilon,t)^{2b}ds_M^2$, with $M$ compact, $\rho$ homogeneous of degree one, $a \le -1$ and $b > 0$. We study the Laplace operator acting on $L^{2}$ differential $p$-forms and give sharp accumulation rates for eigenvalues near the bottom of the essential spectrum of the limit manifold with metric $g_{0}$.' author: - Jeffrey McGowan title: Bounds on Accumulation Rates of Eigenvalues on Manifolds with Degenerating Metrics --- Introduction ============ There are many examples of non-compact manifolds which can be thought of as a ‘limit’ of a sequence of compact manifolds. Particularly nice examples are hyperbolic manifolds in dimensions 2 and 3; the cusp closing theorem of Thurston [@Thurston] then says that every complete, non-compact hyperbolic manifold $M_0$ is the limit of a sequence of hyperbolic manifolds $M_k \to M_0$. Since the Laplacian on $M_0$ has continuous spectrum, one expects the eigenvalues of $M_k$ to accumulate. In dimension 2, Ji, Zworski, and Wolpert ([@Ji; @JiZworski; @Wolpert1; @Wolpert2]) have given bounds for the accumulation rate of eigenvalues near the bottom of the essential spectrum in the hyperbolic case, while in dimension 3 analogous results were obtained by Chavel and Dodziuk ([@ChavelDodziuk]). Dodziuk and McGowan obtained similar results for the Laplacian acting on differential forms ([@DM]). Colbois and Courtois considered convergence of eigenvalues below the bottom of the essential spectrum in a much more general setting [@CC]. The accumulation rate for eigenvalues of the Laplacian on functions for manifolds $N=\tilde{N}\cup (M^n\times I)$ with ’pseudo-hyperbolic’ metrics on $(M^n\times I)$ was given by Judge [@Judge]. Judge also computes the essential spectrum for a more general class of degenerating metrics, and investigates the convergence of eigenfunctions.
We will consider manifolds $N_{\epsilon}=\tilde{N}\cup (M^n\times I)$, $\tilde{N}$ and $M^n$ compact, with $n=\dim(M)$, and a family of metrics $$\label{metric}g_\epsilon=\rho(\epsilon,t)^{2a}dt^2 +\rho(\epsilon,t)^{2b}ds_M^2$$ on $M^n \times I$. Here $\rho = c_1\epsilon +c_2t$, $c_1,c_2>0$, $t\in I=[0,1]$, $a\leq -1$, $b>0$, and $ds_M^2$ is the metric on $M^n$. We identify the boundary of $\tilde{N}$ with $M^{n}\times \{1\}$. These are the metrics discussed by Melrose in [@Melrose] and considered by Judge in [@Judge]. We consider only non-negative values of $t$ with $t \in [0,1]$, which simplifies the statements of the results, although we must then consider manifolds with boundary. We study the accumulation rate for eigenvalues near the bottom of the essential spectrum of the Laplacian acting on both functions and differential forms. Our main results are \[th2\] Suppose $N_\epsilon=\tilde{N}\cup (M^n\times I)$, $\tilde{N}$ and $M^n$ compact, with metric $$g_\epsilon=\rho(\epsilon,t)^{2a}dt^2 +\rho(\epsilon,t)^{2b}ds_M^2$$ on $M^n \times I$, with $\rho$ as above. Let $$R=\int_0^1\rho(\epsilon,s)^a\,ds$$ be the geodesic distance from the boundary $\{0\} \times M^{n}$ of $N_{\epsilon}$ to $\tilde{N}$. Let $\Xi_\epsilon(x^2)$ be the number of eigenvalues of the Laplacian acting on coexact $p$-forms (satisfying absolute boundary conditions on the boundary of $N_{\epsilon}$) in $[\sigma,\sigma + x^2)$, where $\sigma$ is the bottom of the essential spectrum for coexact forms of degree $p$ and $0 < p < n$. Then $$\Xi_\epsilon(x^2) =\frac{dxR}{\pi} + O_x(1)$$ where $d$ is the dimension of the space of harmonic forms of degree $p$ on $M$. This agrees with the results of Judge [@Judge], Chavel and Dodziuk [@ChavelDodziuk] and Dodziuk and McGowan [@DM] in the special cases they considered. \[th1\] Suppose $N_0$ is as above, with $\epsilon = 0$.
Then the essential spectrum of the Laplacian acting on coexact $p$-forms, $0 \leq p \leq n$, on $N_0$ is $$\begin{array}{lcr} \left[\left(\frac{n-2p}{2}\right)^2c_2^2b^2,\infty\right)&\qquad&a=-1\\\\ \left[0,\infty\right)&\qquad&a<-1 \end{array}$$ Note that this agrees with Judge’s results ([@Judge]) for functions when $p=0$, and with Mazzeo and Phillips’ results for the essential spectrum on geometrically finite hyperbolic manifolds ([@MazzeoPhillips], with $c_2=b=1$ and $a=-1$). We have recently learned that these results for the essential spectrum have been obtained independently by Antoci ([@Antoci]). The remainder of the paper is organized as follows. In Section \[geom\] we discuss the geometry of the manifolds under consideration, and rewrite the metric (\[metric\]) in a way which makes the geometry more evident. In Section \[functions\] we illustrate our techniques by computing the essential spectrum and accumulation rates for eigenvalues as $\epsilon \to 0$ in the case of functions ($p=0$). In Section \[upperforms\] we compute the essential spectrum and give lower bounds on the accumulation rate in the $p\neq 0$ case. Finally, in Section \[lowerforms\] we give upper bounds on the accumulation rate for $p \neq 0$, completing the proof of Theorem \[th1\]. We wish to thank Józef Dodziuk for many helpful conversations. We begin with the metrics of [@Melrose]. When $a \leq -1$ such metrics are complete on the limit manifold $N_{0}$. Melrose classifies metrics where $a=-1$, $b=1$ as ’hc’, or hyperbolic cusp metrics, and metrics where $a=-1$, $b=0$ as ’boundary’ metrics, or metrics with cylindrical end. Since we will consider metrics where $a \leq -1$, $b > 0$, we rewrite the metric to make the geometry more evident. Let $\tau$ be the geodesic distance from $t=0$ to $t$; in particular $\tau(1)$ is the geodesic distance from a point $(0,p)$, $p \in M$, to $\tilde{N}$.
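The closed-form expressions for $\tau(t)=\int_0^t(c_1\epsilon+c_2 s)^a\,ds$ in the two cases $a=-1$ and $a<-1$ can be cross-checked against numerical quadrature; differentiation confirms that a single factor $1/(c_2(a+1))$ appears in the second case. The parameter values in this sketch are arbitrary and not from the original text.

```python
import math

# Midpoint-rule check of tau(t) = ∫_0^t (c1*eps + c2*s)^a ds
# against the closed forms for a = -1 and a < -1.
def tau_numeric(t, a, c1, c2, eps, n=200000):
    h = t / n
    return sum(h * (c1 * eps + c2 * (k + 0.5) * h) ** a for k in range(n))

c1, c2, eps, t = 1.0, 2.0, 0.1, 1.0

# a = -1: tau = (1/c2) * ln((c1*eps + c2*t) / (c1*eps))
exact_m1 = math.log((c1 * eps + c2 * t) / (c1 * eps)) / c2
assert abs(tau_numeric(t, -1, c1, c2, eps) - exact_m1) < 1e-6

# a < -1: tau = ((c1*eps + c2*t)^(a+1) - (c1*eps)^(a+1)) / (c2*(a+1))
a_val = -2
exact = ((c1 * eps + c2 * t) ** (a_val + 1)
         - (c1 * eps) ** (a_val + 1)) / (c2 * (a_val + 1))
assert abs(tau_numeric(t, a_val, c1, c2, eps) - exact) < 1e-6
```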
Then $$\label{taudef} \tau = \int_0^t{\rho(\epsilon,s)^a\,ds}= \int_0^t{(c_1\epsilon + c_2s)^a\,ds}$$ and we have two distinct cases, $$\begin{array}{lcc} \tau =\frac{1}{c_2} \ln\left(\frac{{c_1\epsilon +c_2t}}{c_1\epsilon}\right)&\qquad&a=-1\\ \tau = \frac{(c_1\epsilon+c_2t)^{a+1}-(c_1\epsilon)^{a+1}}{c_2(a+1)}&\qquad&a<-1 \end{array}$$ Solving for $t$ and substituting into the metric (\[metric\]) we get $$\label{mymetrics} \begin{array}{lcc} ds^2=d\tau^2+(c_1\epsilon)^{2b}e^{2bc_2\tau}ds_M^2&\qquad&a=-1\\ ds^2=d\tau^2+(c_2(a+1)\tau+(c_1\epsilon)^{a+1})^\frac{2b}{a+1}ds_M^2&\qquad&a<-1 \end{array}$$ which is of the form $ds^2=d\tau^2+f_\epsilon(\tau)ds_M^2$ in both cases. As $\epsilon \to 0$, $\tau\to \infty$, and we have a warped product $I \times_{f_\epsilon} M$, with the length of the interval given by $$\label{rcalc} \tau(1)=\Bigg\{\begin{array}{lr}R = \frac{1}{c_2}\ln\left(\frac
--- abstract: 'Investigating properties of two-dimensional Dirac operators coupled to an electric and a magnetic field (perpendicular to the plane) requires in general unbounded (vector-) potentials. If the system has a certain symmetry, the fields can be described by one-dimensional potentials $V$ and $A$. Assuming that $|A|<|V|$ outside some arbitrarily large ball, we show that absolutely continuous states of the effective Dirac operators spread ballistically. These results are based on well-known methods in spectral dynamics together with certain new Hilbert-Schmidt bounds.' address: - | Josef Mehringer\ Mathematisches Institut\ Ludwig-Maximilians-Universität\ Theresienstra[ß]{}e 39\ D-80333 München, Germany. - | Edgardo Stockmeyer\ Santiago de Chile. author: - Josef Mehringer - Edgardo Stockmeyer title: 'Ballistic dynamics of Dirac particles in electro-magnetic fields' --- Introduction {#introduc} ============ It is well known that Dirac particles suffer from a phenomenon called Klein tunneling. In dimension one, it can be roughly described as follows: If one considers a step potential, for instance $V(x)=V_0$ for $x{\geqslant}0$ and zero otherwise, then massless Dirac particles coming from the left will tunnel through the barrier independently of their energy. As opposed to the classical quantum tunneling there is no exponential damping factor diminishing the probability of finding the particle on the right side of the barrier [@Klein1929; @Thaller].
More generally, one-dimensional massless Dirac particles spread as free particles in the presence of electric fields. This effect has attracted renewed attention due to the isolation of graphene in 2004 (see [@Novoselov2004]), since the low-energy charge carriers of this material can be described by the two-dimensional massless Dirac equation [@castro2009electronic; @F2012; @Fefferman2014]. Indeed, experiments have been carried out to observe Klein tunneling in graphene, confirming some theoretical predictions [@PhysRevLett.98.236803; @PhysRevLett.102.026807; @young2009quantum]. Consider the massless one-dimensional Dirac equation $$\begin{aligned} -{\mathrm{i\,}}\sigma_1\partial_1 +V \quad \mathrm{on}\quad L^2({\mathbb{R}},{\mathbb{C}}^2),\end{aligned}$$ with an electric potential $V\in L^1_{\rm loc}({\mathbb{R}})$, where $\sigma_1$ is the first Pauli matrix. In this case the Klein tunnel effect is not very surprising from the mathematical point of view, since $ -{\mathrm{i\,}}\sigma_1\partial_1 +V $ is unitarily equivalent to the free Dirac operator $-{\mathrm{i\,}}\sigma_1\partial_1$ by means of the transformation $$\begin{aligned} \label{intro1} \exp \left({\mathrm{i\,}}\sigma_1\int_0^xV(s)\mathrm{d}s\right).\end{aligned}$$ However, in the presence of magnetic fields the situation is different. In dimension two it is known that magnetic fields tend to localise Dirac particles, much as in the Schrödinger case (see [@Thaller]). In a previous article we considered the combined electromagnetic effect from a spectral-theoretical point of view [@MehringerStockmeyer2014]. In the present work we investigate this further, focusing on the wave-packet spreading. Consider a two-dimensional Dirac operator coupled to an electro-magnetic field described by electric and magnetic potentials $V$ and ${\bf A}$.
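The unitary equivalence claimed for the transformation (\[intro1\]) can be checked symbolically. The following sketch is ours, not part of the paper; writing $U=\exp({\mathrm{i\,}}\sigma_1\Phi)$ with $\Phi'=V$, it verifies the intertwining identity $(-{\mathrm{i\,}}\sigma_1\partial_x)U\psi=U(-{\mathrm{i\,}}\sigma_1\partial_x+V)\psi$ for an arbitrary spinor $\psi$.

```python
# Check (an illustration, not the paper's code) that U = exp(i*sigma1*Phi),
# with Phi' = V, intertwines -i*sigma1*d/dx + V and the free operator.
import sympy as sp

x = sp.symbols('x', real=True)
V = sp.Function('V', real=True)(x)
Phi = sp.Integral(V, x)                    # formal primitive, so Phi' = V
sigma1 = sp.Matrix([[0, 1], [1, 0]])

# sigma1**2 = 1, so the matrix exponential is cos(Phi) + i*sin(Phi)*sigma1:
U = sp.cos(Phi)*sp.eye(2) + sp.I*sp.sin(Phi)*sigma1

psi = sp.Matrix([sp.Function('f')(x), sp.Function('g')(x)])
lhs = -sp.I*sigma1*(U*psi).diff(x)             # free operator acting on U*psi
rhs = U*(-sp.I*sigma1*psi.diff(x) + V*psi)     # U acting after the full operator

assert (lhs - rhs).applyfunc(sp.expand) == sp.zeros(2, 1)
```

The cancellation uses only $\sigma_1^2=1$ and that $\sigma_1$ commutes with $U$, which is why the same trick fails once a magnetic potential couples through a second Pauli matrix.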
If the field has translational or rotational symmetry the problem can be reduced to the study of a family of Dirac operators on the line or on the half-line, respectively. (Here $V$ and $A$ denote the resulting one-dimensional potentials.) Denote by $h$ one of the members of these families. Our results roughly state the following: Assuming that the function $\psi\not=0$ is of finite energy and that it belongs to the absolutely continuous spectral subspace of $h$, we obtain a lower bound on the Cesàro mean of the time evolution of the $p$-th moment ($p>0$), i.e., there is a constant $C(\psi,p)>0$ such that $$\begin{aligned} \label{d1} \frac{1}{T}\int_0^T \||x|^{p/2} e^{-{\mathrm{i\,}}t h}\psi\|^2\,{\mathrm{d}}t{\geqslant}C(\psi,p)T^p.\end{aligned}$$ Besides certain regularity conditions, the above inequality holds provided $|{A}|<|V|$ outside some arbitrarily large ball (see Theorems \[lastmainthm1\] and \[lastmainthm2\]). As a consequence of the causal behaviour of Dirac particles (see [@Thaller Theorem 8.5]) one has an upper bound of the same type, yielding altogether ballistic dynamics. See the discussion following Corollary \[appl2\]. We remark that if $V$ grows regularly at infinity the spectrum of $h$ is purely absolutely continuous (see the discussion after Remark \[barry\]). The latter is in stark contrast to the behaviour of non-relativistic particles. An important example is when the electric and magnetic fields are asymptotically uniform, in which case $V$ and $A$ grow linearly in the space coordinate. The proofs of bounds of the type (\[d1\]) are based upon the ideas of [@Guarneri1989], [@Combes1993] and [@Last1996]. These results say roughly the following: Let $K\subset {\mathbb{R}}$ be a compact set and $\mathbbm{1}_K$ be the characteristic function supported in $K$. Then, the inequality (\[d1\]) holds if the function $\psi\in \mathbbm{1}_{K}(h) L^2$ belongs to the absolutely continuous subspace of $h$, provided a certain Hilbert-Schmidt bound is verified.
This latter condition demands the following for the product of characteristic functions in space and energy: There is a constant $C_K>0$ such that for all $I\subset {\mathbb{R}}$ compact $$\begin{aligned} \label{intro2} \left\| \mathbbm{1}_{I}(x) \mathbbm{1}_{K}(h)\right\|_{\rm{HS}} {\leqslant}C_K\sqrt{|I|}.\end{aligned}$$ It is easy to check that the required bound is satisfied for the free Dirac operator. For Schrödinger operators with potentials, bounds like (\[intro2\]) are obtained using semigroup properties combined with perturbation theory [@Simon1982]. However, in our case there is no proper semi-group theory and, in addition, when the potentials are allowed to grow at infinity, naive (resolvent) perturbation theory gives estimates where the scaling in $|I|$ depends on the growth rate of $A$ and $V$; that would eventually not deliver (\[intro2\]). This is not surprising since in this case $A$ and $V$ should not be treated as perturbations to the free Hamiltonian. In the case $A=0$ one easily sees that the transformation (\[intro1\]) solves this problem. In this work we provide new estimates of the type (\[intro2\]) for the general case $V, A\not=0$, as long as $A$ is dominated by $V$ in a certain sense. Our approach is to use Lorentz boosts (of non-constant speed) to transform the Hamiltonian to another operator with a magnetic vector potential that vanishes at infinity. We remark that the transformed operator is not going to be symmetric (see Section \[loro\]), since Lorentz boosts are not represented through unitary maps in $L^2$ but only through invertible transformations (see [@Thaller p. 70]). The relation between the original Hamiltonian and the Lorentz-transformed operator is made precise through certain resolvent identities. In the case of operators defined on the real line, bounds like (\[d1\]) follow essentially from (\[intro2\]) and the proof of [@Last1996 Theorem 6.2].
However, for Dirac operators defined on the half-line one should proceed more carefully due to their singularities at zero (cf. Remark \[singatzero\] and the discussion at the beginning of Section \[last\]).\ \ \ [*This article is organised as follows:* ]{} In the next section we state precisely our main results. The definition and basic properties of the one-dimensional Dirac operators used here can be found in Section \[basic\]. In Section \[loro\] we discuss the behaviour of Dirac operators under certain Lorentz boosts of non-constant speed and establish resolvent identities between the original and transformed operators. We then apply the insight of Section \[loro\] to prove Theorems \[mainlemma\] and \[hshk\] in Section \[proofhilbert\]. The dynamical bounds for the half-line operators (Theorem \[lastmainthm2\]) are proven in Section \[last\], where we also establish a local compactness property suitable for our regularity assumptions. In Appendix \[s.a.\] we collect some technical facts concerning self-adjointness, and we also prove \[expres\] there. Finally, in Appendix \[proofappl\] we prove Corollaries \[appl1\] and \[appl2\] about the consequences for two-dimensional Dirac operators.
--- abstract: 'The Bethe-Salpeter equation for the ground state of two fermions exchanging a gauge boson presents divergences in the transverse momentum, even in the ladder approximation projected on the light-front. Gauge theories in the light-front gauge also present the difficulty associated with the instantaneous term of the propagator in a system of fermions interacting through boson exchange. We use a prescription that allows an appropriate description of the singularity of the gauge boson propagator on the light-front.' author: - | B.M.Pimentel$^{a}$, J.H.O.Sales$^{a}$ and Tobias Frederico$^{b}$\ $^{a}$Instituto de Física Teórica-UNESP, 01405-900 São Paulo, Brazil.\ $^{b}$Instituto Tecnológico de Aeronaútica, CTA, 12228-900\ São José dos Campos, Brazil. title: '**Gauge field divergences in the light-front**' --- Light-Front Dynamics: Definition ================================ Beginning from Dirac’s idea [@dirac] of representing the dynamics of the quantum system at light-front times $x^{+}=t+z$, we derive the Green’s function from the covariant propagator that evolves the system from one light-front hyper-surface to another one. The covariant propagator of a free scalar boson is $$S(x^{\mu })=\int \frac{d^{4}k}{\left( 2\pi \right) ^{4}}\frac{ie^{-ik^{\mu }x_{\mu }}}{k^{2}-m^{2}+i\varepsilon }, \label{1}$$ and in terms of light-front variables [@jhs2002], we have $$S(x^{+})=\frac{1}{2}\int \frac{dk^{-}dk^{+}dk^{\perp }}{\left( 2\pi \right) ^{4}}\frac{ie^{\frac{-i}{2}k^{-}x^{+}}}{k^{+}\left( k^{-}-\frac{k_{\perp }^{2}+m^{2}-i\varepsilon }{k^{+}}\right) }. \label{2}$$ The Fourier transform of the single-boson propagator with respect to the light-front time is given by: $$\widetilde{S}(k^{-})=\int dk^{+}dk^{\perp }\frac{i}{k^{+}\left( k^{-}-\frac{k_{\perp }^{2}+m^{2}-i\varepsilon }{k^{+}}\right) }.
\label{3}$$ Fermion Field ============= Let $S_{\text{F}}$ denote the fermion propagator of the covariant theory, $$S_{\text{F}}(x^{\mu })=\int \frac{d^{4}k}{\left( 2\pi \right) ^{4}}\frac{i(\rlap\slash k_{\text{on}}+m)}{k^{2}-m^{2}+i\varepsilon }e^{-ik^{\mu }x_{\mu }}, \label{4}$$ where $\rlap\slash k_{\text{on}}=\frac{1}{2}\gamma ^{+}\frac{(k^{\perp })^{2}+m^{2}}{k^{+}}+\frac{1}{2}\gamma ^{-}k^{+}-\gamma ^{\perp }k^{\perp }$. Using Eq. (\[4\]) and integrating over $k^{-}$, one obtains the light-front fermion propagator $$S_{\text{F}}(x^{+})=\frac{1}{2}\int \frac{dk^{-}dk^{+}dk^{\perp }}{\left( 2\pi \right) ^{4}}\,i\left[ \frac{\rlap\slash k_{\text{on}}+m}{k^{+}\left( k^{-}-\frac{k_{\perp }^{2}+m^{2}-i\varepsilon }{k^{+}}\right) }+\frac{\gamma ^{+}}{2k^{+}}\right] e^{\frac{-i}{2}k^{-}x^{+}}. \label{5}$$ We note that for the fermion field, the light-front propagator differs from the Feynman propagator by an instantaneous propagator. Gauge Boson Propagator ====================== Let $S^{\mu \nu }$ denote the gauge boson propagator, $$S^{\mu \nu }(x^{\mu })=\int \frac{d^{4}k}{\left( 2\pi \right) ^{4}}\frac{ie^{-ik^{\mu }x_{\mu }}}{k^{2}+i\varepsilon }\left[ \frac{-nkg^{\mu \nu }+n^{\mu }k^{\nu }+n^{\nu }k^{\mu }}{nk}\right] , \label{6}$$ where we choose the light-front gauge $A^{+}=0$, $n^{\mu }=(1,0,0,-1)$ and the metric tensor is given in [@Kogut]$.$ The light-front components of (\[6\]) can be written as $S^{+-}=S^{-+}=S^{++}=S^{+\perp }=0$ and $$S^{--}=4\frac{ik^{-}}{k^{+}(k^{2}+i\varepsilon )},\text{ }S^{-\perp }=S^{\perp -}=2\frac{ik^{\perp }}{k^{+}(k^{2}+i\varepsilon )},\text{ }S^{\perp \perp }=-1\frac{i}{k^{2}+i\varepsilon } \label{7a}$$ Interaction in First Order ========================== We consider the fermion-antifermion system in the light-front with one-gauge-boson exchange ($A^{+}=0$), for which the interaction Lagrangian density is given by $$\mathcal{L}_{I}=g\overline{\Psi }_{1}\gamma _{\mu }A^{\mu }\Psi _{1}+g\overline{\Psi }_{2}\gamma _{\nu }A^{\nu }\Psi _{2}.
\label{8}$$ The fermions correspond to the fields $\Psi $ with rest mass $m$ and the exchanged gauge boson to the field $A^{\mu }$ with mass $\mu =0.$ The coupling constant is $g.$ The perturbative correction to the two-body propagator which comes from the exchange of one intermediate virtual boson is $$\begin{aligned} \Delta S_{g^{2}}(x^{+}) &=&\left( ig\right) ^{2}\int d\overline{x}_{1}^{+}d\overline{x}_{2}^{+}S_{k^{\prime }}(x^{+}-\overline{x}_{1}^{+})(\gamma _{\mu })S_{k}(\overline{x}_{1}^{+}) \label{9} \\ &&S^{\mu \nu }(\overline{x}_{2}^{+}-\overline{x}_{1}^{+})S_{p}(x^{+}-\overline{x}_{2}^{+})(\gamma _{\nu })S_{p^{\prime }}(\overline{x}_{2}^{+}). \notag\end{aligned}$$ The intermediate boson propagates during the time interval $\overline{x}_{2}^{+}-\overline{x}_{1}^{+}.$ The labels $k$ and $p$ in the particle propagators indicate the initial states, and $k^{\prime }$ and $p^{\prime }$ the final states. We perform the Fourier transform from $x^{+}$ to $P^{-}$ at fixed total kinematical momenta $P^{+}$, which we choose positive, and $P^{\perp }$. The double integration in $k^{-}$ and $k^{\prime -}$ is performed analytically in Eq. (\[10a\]), $$\begin{aligned} \Delta S_{g^{2}}(P^{-}) &=&\frac{-\left( ig\right) ^{2}i}{(4\pi )^{2}}\int \frac{dk^{-}dk^{\prime ^{-}}}{k^{+}k^{\prime ^{+}}(P^{+}-k^{\prime +})(P^{+}-k^{+})} \\ &&\left\{ \frac{\rlap\slash k_{on}^{\prime }+m}{\left( k^{\prime -}-k_{on}^{\prime -}+\frac{i\varepsilon }{k^{\prime +}}\right) }\right.
\gamma _{-} \frac{\rlap\slash k_{on}+m}{\left( k^{-}-k_{on}^{-}+\frac{i\varepsilon }{k^{+}}\right) } \\ &&\frac{4\left( k^{-}-k^{\prime -}\right) }{(q^{+})^{2}\left( k^{-}-k^{\prime -}-q_{on}^{-}+\frac{i\varepsilon }{q^{+}}\right) }\frac{\rlap\slash p_{on}^{\prime }+m}{\left( p^{\prime -}-p_{on}^{\prime -}+\frac{i\varepsilon }{p^{\prime +}}\right) } \\ && \gamma _{-} \frac{\rlap\slash p_{on}+m}{\left( p^{-}-p_{on}^{-}+\frac{i\varepsilon }{p^{+}}\right) }+\end{aligned}$$ $$\begin{aligned} &&+\frac{\rlap\slash k_{on}^{\prime }+m}{\left( k^{\prime -}-k
--- abstract: 'Cathodoluminescence spectroscopy (CL) allows characterizing light emission in bulk and nanostructured materials and is a key tool in fields ranging from materials science to nanophotonics. Previously, CL measurements focused on the spectral content and angular distribution of emission, while the polarization was not fully determined. Here we demonstrate a technique to access the full polarization state of the cathodoluminescence emission, that is, the Stokes parameters as a function of the emission angle. Using this technique, we measure the emission of metallic bullseye nanostructures and show that the handedness of the structure as well as nanoscale changes in excitation position induce large changes in polarization ellipticity and helicity. Furthermore, by exploiting the ability of polarimetry to distinguish polarized from unpolarized light, we quantify the contributions of different types of coherent and incoherent radiation to the emission of a gold surface and of bulk silicon and gallium arsenide semiconductors. This technique paves the way for in-depth analysis of the emission mechanisms of nanostructured devices as well as macroscopic media.' author: - 'Clara I. Osorio' - Toon Coenen - Benjamin Brenny - Albert Polman - 'A. Femius Koenderink' bibliography: - 'references\_polarimetry.bib' title: 'Angle-resolved cathodoluminescence imaging polarimetry' --- Introduction ============ Among many recent developments in microscopy, optical electron-beam spectroscopy techniques such as cathodoluminescence imaging (CL) have emerged as powerful probes to characterize materials and nanophotonic structures and devices. In CL, one collects light emitted in response to a beam of energetic electrons ($0.1-30$ keV), for example in a scanning electron microscope (SEM). The time-varying evanescent electric field around the electron beam interacts with polarizable matter, creating coherent emission such as transition radiation (TR) [@Adamo_PRL12; @Bashevoy_OE07].
The spot size of the focused electron beam and the extent of the evanescent field about the electron trajectory define the interaction resolution to be below $\sim20$ nm, while the interaction time ($<1$ fs) determines the broadband character of the excitation. Aside from coherent emission, incoherent emission can also be generated both by the primary beam and by slower secondary electrons, which excite electronic transitions in matter [@Abajo_RMP07; @Yacobi]. The relative importance of the coherent and incoherent contributions provides information about the material composition and electronic structure. Spectral analysis of the cathodoluminescence as a function of the electron beam position allows the local characterization of the structure and defects of semiconductors [@Edwards_SemicondSciTech2011; @Sauer_PRL2000; @Ton-That_PRB2012], of the functioning of nanophotonic devices [@Fontcuberta_PRB2009], and the mapping of the optical resonances of plasmonic and metamaterial structures [@Zhu_PRL10]. Recently developed techniques for detection of CL enable the identification of the band structure and Bloch modes of photonic crystals [@Yamamoto_OE09; @Yamamoto_OE11; @Adamo_PRL12; @Ma_JPC14; @Sapienza_NM12], the dispersion of surface plasmons [@losstw; @Bashevoy_OE07], and the directivity and Purcell enhancement of plasmonic nano-antennas [@coenen_NL11; @yamamoto_NL11]. Besides frequency and linear momentum, the vectorial nature of light provides a third degree of freedom rich in information about the physics of light generation and scattering, encoded in the polarization of emitted light. In materials characterization, for instance, polarization gives direct access to the local orientation of emission centers and anisotropies in the host material. In nanophotonics, polarization plays a fundamental role (together with directionality) in determining the interaction between emitters and nanostructures.
Furthermore, it is increasingly recognized that mapping and controlling the polarization of light is key to harnessing the wide range of opportunities offered by metamaterials and metasurfaces. Recent breakthroughs in chirality-enhanced antennas [@Gorodetski_PRL13], photonic topological insulators [@lu14], and the photonic equivalent of the spin-Hall effect [@onoda04; @yin13; @li13; @connor14] indicate the emerging importance of mapping the full polarization properties of nanophotonic structures. Polarization measurements of CL emission, however, have been limited to fully polarized emission and in particular to linearly polarized signals [@coenen_OE12; @Coenen_NC14]. In this letter we introduce a novel technique to access full polarization information in cathodoluminescence spectroscopy. Based on a polarization analysis method previously demonstrated in optical microscopes [@fallet_MEMS11; @arteaga_OE14; @kruk_ACSP14; @Osorio_SR15], we integrate a rotating-plate polarimeter in the detection path of the angle-resolved CL setup. Using the Mueller matrix formalism for the light collection system, we determine the Stokes parameters for CL emission, that is, all parameters required to completely describe the polarization state of the light, which can be polarized, partially polarized or totally unpolarized. We demonstrate the great potential of this new measurement technique by analyzing the angle-resolved polarization state of directional plasmonic bullseye and spiral antennas. Furthermore, exploiting the unique capabilities of CL excitation, we measure the emission from metals and semiconductors. For these materials, we can separate coherent and incoherent emission mechanisms, with further applications in nanoscale materials science. CL Polarimetry ============== ![image](Fig1.pdf){width="70.00000%"} The electron beam excites the sample. An aluminum paraboloid mirror collects and redirects the resulting CL emission out of the SEM. A schematic of the setup is shown in Fig. \[Fig1\](a).
The wave-vector distribution of the CL emission can be retrieved from the CCD image, as every transverse point in the beam corresponds to a unique emission angle, in a procedure analogous to other Fourier imaging techniques [@Lieb_JosaB04; @Kosako_NP10; @curto10; @Aouani_NL11; @Sersic_NJP11; @Belacel_NL13]. Measuring polarization for all emission angles of CL presents several challenges. First, it requires determining the relative phase difference between field components, a task not achievable with only linear polarizers as in Ref. [@coenen_OE12]. Second, the paraboloid mirror performs a non-trivial transformation on the signal as it propagates from the sample to the detector plane. The shape of the mirror introduces a rotation of the vector components of light due to the coordinate transformation and, consequently, a change in the main polarization axes. In this respect, the mirror itself acts as a polarizing optical element [@Bruce_04]. As a function of the angle of incidence, the mirror partially polarizes unpolarized light and transforms linearly polarized light into elliptically polarized light. To address these challenges, we included a rotating-plate polarimeter in the beam path of our CL system, composed of a quarter wave plate (QWP) and a linear polarizer [@Berry_ApplOp77; @Born_Wolf; @Chipman]. Figure \[Fig1\](a) shows the polarizing elements in a schematic of the setup. Depending on their orientation, these two elements act either as a linear polarizer or as a right or left handed circular polarizer. As shown in Fig.
\[Fig1\](b), we measure the intensities $I_j$ transmitted by six different settings of the polarimeter (horizontal, vertical, $45^{\circ}$, $135^{\circ}$, right and left handed circular) in order to determine the Stokes parameters of the light: $$\begin{aligned} \label{stokes_eq} S_0 &=&I_{H}+I_{V} \nonumber\\ S_1 &=& I_{H}-I_{V}\nonumber\\ S_2 &=& I_{45}-I_{135}\nonumber\\ S_3 &=&I_{RHC}-I_{LHC}.\end{aligned}$$ These four parameters are the most general representation of polarization and can be used to retrieve any polarization-related quantity [@Born_Wolf]. The raw polarization-filtered CCD images are projected onto \[$\theta,\varphi$\]-space as indicated in Fig. \[Fig1\](b) using a ray-tracing analysis of the mirror, after which the Stokes parameters in the detection plane are determined. To transform these to Stokes parameters in the sample plane, we determine the Mueller matrix of the light collection system that accounts for the effects of the mirror on the polarization. In addition to the geometrical transformation, the analysis takes into account the Fresnel coefficients of the mirror for $s$- and $p$-polarized light. Due to the 3D shape of the mirror, each element of the Mueller matrix is a function of the emission angle, i.e., there is a Mueller matrix for each emission angle. The supplementary information describes in more detail how the Mueller matrix was calculated and how we benchmark these calculations using fully polarized transition radiation (see Fig. S2). ![image](Fig2.pdf){width="\textwidth"} The Stokes
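For concreteness, Eq. (\[stokes_eq\]) can be evaluated per pixel or per emission angle directly. The snippet below is our own illustrative sketch (the function names and example values are not part of the setup); it also computes the degree of polarization used to separate polarized from unpolarized light.

```python
# Assemble Stokes parameters from six polarimeter settings; works for
# scalars or per-angle NumPy arrays alike. Names are our own.
import numpy as np

def stokes_from_intensities(I_H, I_V, I_45, I_135, I_RHC, I_LHC):
    """Stokes parameters (S0, S1, S2, S3) from six filtered intensities."""
    return (I_H + I_V, I_H - I_V, I_45 - I_135, I_RHC - I_LHC)

def degree_of_polarization(S0, S1, S2, S3):
    """Polarized fraction of the total intensity (0 = unpolarized, 1 = fully)."""
    return np.sqrt(S1**2 + S2**2 + S3**2) / S0

# Example: ideal right-handed circular light of unit intensity.
S = stokes_from_intensities(0.5, 0.5, 0.5, 0.5, 1.0, 0.0)
print(S)                            # (1.0, 0.0, 0.0, 1.0)
print(degree_of_polarization(*S))   # 1.0
```

Because the operations broadcast over arrays, the same two functions apply unchanged to the full $[\theta,\varphi]$-resolved intensity maps, before the per-angle Mueller-matrix correction discussed below is applied.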
--- abstract: 'Explicit current-dependent expressions for anisotropic longitudinal and transverse nonlinear magnetoresistivities are presented and analyzed on the basis of a Fokker-Planck approach for two-dimensional single-vortex dynamics in a washboard pinning potential in the presence of point-like disorder. Graphical analysis of the resistive responses is presented both in the current-angle coordinates and in the rotating current scheme. The model describes nonlinear anisotropy effects caused by the competition of point-like (isotropic) and anisotropic pinning. Nonlinear guiding effects are discussed and the critical current anisotropy is analyzed. With gradually increasing magnitude of the isotropic pinning force, this theory predicts a gradual decrease of the anisotropy of the magnetoresistivities. The physics of the transition from the new scaling relations for anisotropic Hall resistance in the absence of point-like pins to the well-known scaling relations for the point-like disorder is elucidated. This is discussed in terms of a gradual isotropization of the guided vortex motion, which is responsible for the existence in a washboard pinning potential of new (with respect to magnetic field reversal) Hall voltages.' address: - | Institute of Theoretical Physics, National Science Center-Kharkov Institute of Physics and Technology, 61108, Kharkov, Ukraine;\ Kharkov National University, Physical Department, 61077, Kharkov, Ukraine - 'Kharkov National University, Physical Department, 61077, Kharkov, Ukraine' author: - 'Valerij A. Shklovskij' - 'Oleksandr V. Dobrovolskiy' title: 'Influence of Point-like Disorder on the Guiding of Vortices and the Hall Effect in a Washboard Planar Pinning Potential' --- INTRODUCTION ============ The importance of flux-line pinning in preserving the superconductivity in a magnetic field has been generally recognized since the discovery of type-II superconductivity.
Yet the mechanism of flux-line pinning and creep in superconductors (and particularly in the high-*$T_c$* superconductors (HTSC’s)) is still a matter of controversy and great current interest, especially in the cases of strong competition between different types of pins. One of the open issues in the field is the influence of *isotropic* point-like disorder on the vortex dynamics in the *anisotropic* washboard planar pinning potential (PPP) for the case of arbitrary orientation of the transport current with respect to the PPP “channels” where the *guiding of vortices* can be realized. The importance of this issue is substantiated by the ubiquitous presence of point-like pins in those high- and low-*$T_c$* superconductors which were used so far for resistive measurements of the guided vortex motion$^{1-9}$. The first attempt to discuss the influence of isotropic point-like disorder on the guiding of vortices was made by Niessen and Weijsenfeld$^1$ as early as 1969. They studied guided motion *in the flux flow regime* by measuring transverse voltages of cold-rolled sheets of a Nb-Ta alloy for different magnetic fields *H*, transport current densities *J*, temperatures *T*, and different angles $\alpha$ between the rolling and current direction. For the discussion of the results, a simple theoretical model was suggested, based on the assumption that vortex pinning and guiding can be described in terms of an isotropic pinning force ${\bf F}_p^i$ plus a pinning force ${\bf F}_p^a$ with a fixed direction which was perpendicular to the rolling direction. The experimentally observed dependence of the transverse and longitudinal voltages on the magnetic field *in the flux flow regime* as a function of the angle $\alpha$ was in agreement with this model. Unfortunately, in spite of the correct description of the geometry of the motive forces of the problem (see below Fig.
1) it was impossible within the flux flow approach$^1$ to calculate theoretically the *nonlinear* (*J, T*, $\alpha$)-dependences of the average pinning forces $\langle{\bf F}_p^i\rangle$ and $\langle{\bf F}_p^a\rangle$ which determine the experimentally observed cot$\beta(J,T,\alpha)$ dependences. The *nonlinear guiding* problem was first solved exactly only for the washboard PPP (i.e. for ${\bf F}_p^i=0 $) within the framework of the two-dimensional single-vortex stochastic model of anisotropic pinning based on the Fokker-Planck equation with a concrete form of the pinning potential$^{10,11}$. Two physical realizations of the washboard PPP are worth noting. First, in some HTSC’s twins can easily be formed during the crystal growth$^{2-5,8}$. Second, in layered HTSC’s the system of interlayers between parallel *ab*-planes can be considered as a set of unidirectional planar defects which provoke the intrinsic pinning of vortices$^{12}$. Rather simple formulas were derived$^{11}$ for the experimentally observable *nonlinear* even$(+)$ and odd$(-)$ (with respect to the magnetic field reversal) longitudinal and transverse magnetoresistivities $\rho_{\|,\perp}^\pm(j,\theta,\alpha,\varepsilon)$ as functions of the dimensionless transport current density $j,$ dimensionless temperature $\theta,$ and relative volume fraction $0<\varepsilon<1$ occupied by the parallel twin planes directed at an angle $\alpha$ with respect to the current direction. The $\rho_{\|,\perp}^\pm$-formulas were presented as linear combinations of the even and odd parts of the function $\nu(j,\theta,\alpha,\varepsilon)$ which can be considered as the probability of overcoming the potential barrier of the twins$^{11}$; this made it possible to give a simple physical treatment of the nonlinear regimes of vortex motion (see below item II.C).
Besides the appearance of a relatively large even transverse resistivity $\rho_\perp^+$, generated by the guiding of vortices along the channels of the washboard PPP, explicit expressions for *two new nonlinear anisotropic Hall resistivities* $\rho_{||}^-$ *and* $\rho_\perp^-$ were derived and analyzed. The physical origin of these *odd* contributions is the subtle interplay between the even effect of vortex guiding and the odd Hall effect. Both new resistivities tend to zero in the linear regimes of the vortex motion (i.e. in the thermoactivated flux flow (TAFF) and the ohmic flux flow (FF) regimes) and have a bump-like current or temperature dependence in the vicinity of the highly nonlinear resistive transition from the TAFF to the FF. As the new odd resistivities arise due to the Hall effect, their characteristic scale is proportional to the small Hall constant, as for the ordinary odd Hall effect investigated earlier$^{10}$. It was shown$^{11}$ that the appearance of these new odd $\rho_{|| , \bot }^-$ contributions leads to new specific angle-dependent “scaling” relations for the PPP which demonstrate the so-called anomalous Hall behavior in the type-II superconductors. Here we should emphasize that the anomalous behavior of the Hall effect in many high-temperature and in some conventional superconductors in the mixed state remains one of the challenging issues in the vortex dynamics$^{5,12,16}$. The problem at issue includes several remarkable experimental facts: a) the Hall effect sign reversal in the vortex state with respect to the normal state at temperatures near $\emph{T}_c$ and for moderate magnetic fields; b) the Hall resistivity “scaling” relation $\rho_\perp\sim\rho_\| ^ \beta$ exists with $1\leq\beta\leq2$, where $\rho_\perp$ is the Hall resistivity and $\rho_\parallel$ is the longitudinal resistivity; c) the influence of pinning on the “Hall anomaly” and scaling relation.
Assuming that the “bare” Hall coefficient $\alpha_H$ is constant, two different scaling laws have been derived earlier theoretically for different pinning potentials$^{11,17}$. Vinokur et al. have shown$^{17}$ that a scaling law $\rho_\perp=\delta\rho_\parallel^2$ (where $\delta=n\alpha_H c^2/B\Phi_0$ is the Hall conductivity, $n=\pm1$, $c$ is the speed of light, $B$ is the magnetic field and $\Phi_0$ is the magnetic flux quantum) is the general feature of any isotropic vortex dynamics with an average pinning force directed along the average vortex velocity vector. Later it was shown$^{11}$ that for purely anisotropic *a*-pins that create a washboard planar pinning potential, the form of the corresponding “scaling” relation is highly anisotropic because the pinning force for *a*-pins is directed perpendicular to the pinning planes. If $\alpha$ is the angle between the parallel pinning planes and the direction of the current density vector $\mathbf{j}$, then for $\alpha=0$ the scaling law has the form $\rho_\bot=-n(\alpha_H/\eta)\rho_\|$ ($\eta$ is the vortex viscosity) which was interpreted previously$^{11}$ as a scaling law with $\beta=1$, whereas for $\alpha=\pi/2
--- abstract: | We study the finite sample behavior of Lasso-based inference methods such as post double Lasso and debiased Lasso. Empirically and theoretically, we show that these methods can exhibit substantial omitted variable biases (OVBs) due to Lasso not selecting relevant controls. This phenomenon can be systematic in finite samples and occur even when the coefficients are very sparse and the sample size is large and larger than the number of controls. Therefore, relying on the existing asymptotic inference theory can be problematic in empirical applications. We compare the Lasso-based inference methods to modern high-dimensional OLS-based methods and provide practical guidance. bibliography: - 'bibliography.bib' author: - Kaspar Wüthrich - Ying Zhu title: 'Omitted variable bias of Lasso-based inference methods: A finite sample analysis' --- Introduction ============ Researchers are often interested in making statistical inferences on a single parameter (for example, the effect of a treatment or a policy), while controlling for confounding factors. In more and more economic applications, the number of potential control variables ($p$) is becoming large relative to the sample size ($n$), either due to the inherent richness of the data, the desire of researchers to specify flexible functional forms, or both. In such problems, a natural approach is to use the least absolute shrinkage and selection operator (Lasso), introduced by @tibsharini1996regression, to select the relevant controls (i.e., those with nonzero coefficients) and then run OLS with the selected controls. However, this approach has been criticized because, unless the magnitude of the coefficients associated with the relevant controls is very small, it requires these coefficients to be well separated from zero to ensure that Lasso selects them.
This critique has motivated the development of post double Lasso [@belloni2014inference] and debiased Lasso [@javanmard2014confidence; @vandergeer2014asymptotically; @Zhang_Zhang]. The breakthrough in this literature is that it does not require the aforementioned separation condition, and the Lasso not selecting relevant controls yields negligible asymptotic biases under certain conditions on $n$, $p$, and the degree of sparsity. Since their introduction, post double Lasso and debiased Lasso have quickly become the most popular inference methods for problems with many control variables. Given the rapidly growing (asymptotic) theoretical and applied literature on these methods, it is crucial to take a step back and examine the performance of these new procedures in empirically relevant settings as well as to better understand their merits and limitations relative to other alternatives. In particular, there is a misconception that post double Lasso and debiased Lasso are immune to under-selection of the Lasso because they do not require the above-mentioned separation condition. Empirically and theoretically, this paper shows that, in finite samples, under-selection can result in substantial OVBs of these methods and yield invalid inferences. We also compare post double Lasso and debiased Lasso to modern high-dimensional OLS-based inference procedures. Let us consider the linear model $$\begin{aligned} Y_{i} & = & D_{i}\alpha^{*}+X_{i}\beta^{*}+\eta_{i},\label{eq:main-y}\\ D_{i} & = & X_{i}\gamma^{*}+v_{i}.\label{eq:main-d}\end{aligned}$$ Here $Y_{i}$ is the outcome, $D_{i}$ is the scalar treatment variable of interest, and $X_{i}$ is a $(1\times p)$-dimensional vector of additional control variables. In this paper, we study the performance of post double Lasso and the debiased Lasso for estimating and making inferences (e.g., constructing confidence intervals) on the treatment effect $\alpha^{\ast}$.
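For readers unfamiliar with the procedure, post double Lasso can be sketched in a few lines: run Lasso of $Y$ on $X$ and of $D$ on $X$, then run OLS of $Y$ on $D$ and the union of the selected controls. The toy data-generating process, the plain coordinate-descent Lasso, and the rule-of-thumb penalty below are our own illustrative choices, not the authors' implementation.

```python
# Illustrative post double Lasso on a toy version of the model above.
# The DGP, the hand-rolled Lasso, and the penalty rule are our own choices.
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Coordinate-descent Lasso for (1/(2n))*||y - X@b||^2 + lam*||b||_1."""
    n, p = X.shape
    b = np.zeros(p)
    col_ms = (X**2).sum(axis=0) / n          # mean square of each column
    r = y.copy()                             # residual for b = 0
    for _ in range(n_sweeps):
        for j in range(p):
            r += X[:, j] * b[j]              # add back j-th contribution
            rho = X[:, j] @ r / n
            b[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ms[j]
            r -= X[:, j] * b[j]
    return b

rng = np.random.default_rng(0)
n, p, k = 500, 200, 5
beta = np.zeros(p); beta[:k] = 0.5            # relevant controls, outcome eq.
gamma = np.zeros(p); gamma[:k] = 0.5          # relevant controls, treatment eq.
X = rng.standard_normal((n, p))
D = X @ gamma + rng.standard_normal(n)
Y = 1.0*D + X @ beta + rng.standard_normal(n)   # true alpha* = 1.0

lam = 1.1*np.sqrt(2*np.log(p)/n)              # rule-of-thumb penalty level
sel = np.union1d(np.flatnonzero(lasso_cd(X, Y, lam)),
                 np.flatnonzero(lasso_cd(X, D, lam)))

Z = np.column_stack([np.ones(n), D, X[:, sel]])   # OLS of Y on D + union
alpha_hat = np.linalg.lstsq(Z, Y, rcond=None)[0][1]
```

With strong, well-separated coefficients as above, the union contains all relevant controls and `alpha_hat` lands close to the true value; shrinking the control coefficients or their variance toward the penalty scale is exactly the regime in which under-selection, and hence the OVB studied in this paper, appears.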
We present extensive simulation evidence demonstrating that post double Lasso and debiased Lasso can exhibit substantial OVBs relative to the standard deviations due to the Lasso not selecting all the relevant controls. Our simulation results can be summarized as follows. (i) Large OVBs are persistent across a range of empirically relevant settings and can occur even when $n$ is large and larger than $p$, and $k$ is small (e.g., when $n=10000$, $p=4000$, $k=5$). (ii) For the same $(n,p,k)$, noise variances, and magnitude of coefficients, there can be no OVBs at all, small OVBs, or substantial OVBs, depending on the variance of the relevant controls. (iii) When the controls exhibit limited variability, the performance of Lasso-based inference methods can be very sensitive to the choice of the regularization parameters; under sufficient variability, post double Lasso is less sensitive. (iv) There is no simple recommendation for how to choose the regularization parameters. [^4] (v) The OVBs can lead to invalid inferences and under-coverage of confidence intervals. In addition to the simulations, we conduct Monte Carlo studies based on two empirical applications: The analysis of the effect of 401(k) plans on savings by @belloni2017program and the study of the racial test score gap by @fryerlevitt2013. We draw samples of different size from the large original datasets and compare the subsample estimates to the estimates based on the original data. This exercise mimics random sampling from a large super-population. In both applications, we find substantial biases even when $n$ is considerably larger than $p$, and document that the magnitude of the biases varies substantially depending on the regularization choice. The existing (asymptotic) theory provides little insight about the OVBs of the Lasso-based inference methods documented in our simulation studies. 
In terms of formal results, it only implies an upper bound of $\texttt{constant}\cdot\frac{k\log p}{n}$ for the bias. Here the (positive) $\texttt{constant}$ does not depend on $(n,p,k)$ and bears little meaning in the existing theory, which simply assumes $\frac{k\log p}{\sqrt{n}}\rightarrow0$ among other sufficient conditions. [^5] The asymptotic upper bound $\texttt{constant}\cdot\frac{k\log p}{n}$ is only informative about the *least favorable* case and does not answer the following practically relevant questions: (i) When do OVBs arise? (ii) Why can the OVBs be drastically different despite $(n,p,k)$, noise variances, and absolute values of coefficients being the same? (iii) What is the magnitude of OVBs in the *most favorable* cases? (iv) How severe can the OVBs be in finite samples where $\frac{k\log p}{n}$ is not small enough? To answer (i) and (ii), we provide theoretical conditions under which the OVBs occur systematically and establish a novel result on the under-selection of the Lasso. To answer (iii) and (iv), we derive new informative lower and upper bounds on the OVBs of post double Lasso and the debiased Lasso proposed by @vandergeer2014asymptotically. Our analyses are non-asymptotic and allow us to study the OVBs for fixed $(n,p,k)$, but are also informative when $\frac{k\log p}{n}\rightarrow0$ or $\frac{k\log p}{n}\rightarrow\infty$. Our theoretical results reveal that, in finite samples, the OVBs are not just simple linear functions of $\frac{k\log p}{n}$ but depend on $n$, $p$, and $k$ in a more complex way. In one of our results, we derive explicit universal constants, allowing us to compute precise lower bounds and perform “comparative statics” given features of the underlying empirical problems. To the best of our knowledge, this type of lower bound analysis is new in the literature. In contrast to upper bound analyses, it is informative about the most favorable cases and thus the finite sample limitations of Lasso-based inference methods. 
Our results suggest that the OVBs can be substantial relative to the standard deviation derived from the existing theory. As a consequence, the confidence intervals proposed in the literature can exhibit under-coverage. In the main part of the paper, we focus on post double Lasso and present results for the debiased Lasso in the appendix. Post double Lasso consists of two Lasso selection steps: a Lasso regression of $Y_{i}$ on $X_{i}$ and a Lasso regression of $D_{i}$ on $X_{i}$. In the third step, the estimator of $\alpha^{\ast}$, $\tilde{\alpha}$, is obtained from the OLS regression of $Y_{i}$ on $D_{i}$ and the union of the controls selected in the two Lasso steps. For the setup of (\[eq:main-y\])–(\[eq:main-d\]), post double Lasso has a clear advantage over post (single) Lasso OLS. As @belloni2014inference [p.614] put it: “Intuitively, this procedure \[post double Lasso\] works well since we are more likely to recover key controls by considering selection of controls from both equations instead of just considering selection of controls from the single equation”.
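The three-step procedure just described can be sketched in a few lines. The following is a minimal illustrative sketch, not the authors' implementation: the fixed penalty level, the toy data-generating process (a single relevant control, $k=1$, no intercept), and all parameter values are assumptions chosen only to make the under-selection mechanism visible.

```python
# Minimal sketch of post double Lasso (illustrative assumptions throughout:
# fixed penalty level, toy DGP with one relevant control, k = 1).
import numpy as np
from sklearn.linear_model import Lasso

def post_double_lasso(Y, D, X, penalty=0.1):
    """Steps 1-2: Lasso of Y on X and Lasso of D on X; step 3: OLS of Y
    on D and the union of selected controls. Returns the estimate of alpha*."""
    sel = set(np.flatnonzero(Lasso(alpha=penalty).fit(X, Y).coef_))
    sel |= set(np.flatnonzero(Lasso(alpha=penalty).fit(X, D).coef_))
    Z = np.column_stack([D] + [X[:, j] for j in sorted(sel)])
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
    return coef[0]

def simulate(x_scale, reps=200, n=200, p=50, alpha_star=0.5, seed=0):
    """Average estimate over Monte Carlo draws. The relevant control X_1
    enters both equations (beta*_1 = gamma*_1 = 1), so dropping it in both
    Lasso steps produces an omitted variable bias."""
    rng = np.random.default_rng(seed)
    est = []
    for _ in range(reps):
        X = rng.normal(scale=x_scale, size=(n, p))
        D = X[:, 0] + rng.normal(size=n)                    # gamma* = e_1
        Y = alpha_star * D + X[:, 0] + rng.normal(size=n)   # beta*  = e_1
        est.append(post_double_lasso(Y, D, X))
    return float(np.mean(est))

# Low control variability: Lasso tends to drop X_1 in both steps -> upward OVB.
est_low = simulate(x_scale=0.2)
# High control variability: X_1 is reliably selected -> negligible bias.
est_high = simulate(x_scale=2.0)
```

With a low-variance relevant control, both Lasso steps tend to drop it and the averaged estimate of $\alpha^{*}=0.5$ is biased upward; with a high-variance control, it is reliably selected and the bias essentially disappears, consistent with the sensitivity to control variability documented in the simulations above.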
--- abstract: 'Due to its interaction with the virtual electron-positron field in vacuum, the photon exhibits a nonzero anomalous magnetic moment whenever it has a nonzero momentum component transverse to an external constant magnetic field. At low and high frequencies this anomalous magnetic moment behaves as paramagnetic, and at energies near the first threshold of pair creation it has a maximum value greater than twice the electron anomalous magnetic moment. These results might be interesting in an astrophysical and cosmological context.' author: - 'S. Villalba-Chávez$^{\dag\ddag}$' title: 'What Causes the Photon Anomalous Magnetic Moment?' --- It was shown by Schwinger [@Schwinger] in 1951 that electrons acquire an anomalous magnetic moment $\mu^\prime=(\alpha/2\pi)\mu_B$ (where $\mu_B=e\hbar/2m_0c$ is the Bohr magneton) due to radiative corrections in quantum electrodynamics (QED), that is, due to the interaction of the electron with the background virtual photons and electron-positron pairs. We want to show that, due to the interaction with the virtual quanta of vacuum, an anomalous photon magnetic moment also arises. It is obtained from the expression for the photon self-energy in an external constant magnetic field, $\Pi_{\mu\nu}(x,x^{\prime\prime}\vert A^{ext})$, calculated by Shabad [@shabad1; @shabad2] starting from the electron-positron Green function in the Furry picture and using the Schwinger proper time method. This expression was used by Shabad [@shabad2] to investigate the photon dispersion equation in vacuum in the presence of an external magnetic field. 
A strong deviation from the light cone curve was found near the energy thresholds for pair creation, which suggests that the photon propagation behavior in the external classical magnetic field is strongly influenced by the virtual electron-positron pairs of vacuum near these thresholds, showing a behavior similar to that of a massive particle. These phenomena become especially significant near the critical field $B_c=m_0^2/e \sim 4.41 \cdot 10^{13}$ Gauss, where $m_0, e$ are respectively the electron mass and charge. The photon magnetic moment might have astrophysical and cosmological consequences. For instance, photons passing by a strongly magnetized star would experience a shift in addition to the usual gravitational one produced by the star's mass. In the presence of an external field the current vector is non-vanishing, $j(x)_{\mu}=ie\, Tr\,\gamma_{\mu} G(x,x|A^{ext}) \neq 0$, where $G(x,x^{\prime}|A^{ext})$ is the electron-positron Green’s function in the external field. Denoting the total electromagnetic field by $A^t_{\mu}= A^{ext}_{\mu}+A_{\mu}$, the QED Schwinger-Dyson equation for the photon field $A_\mu(x)$, propagating in the external field $A^{ext}_\mu(x)$, is $$\left[\square \eta_{\mu\nu}-\partial_\mu\partial_\nu\right] A^\nu(x)+\int\Pi_{\mu\nu}(x,x^\prime\vert A^{ext}) A^\nu(x^\prime) d^4x^\prime=0,\label{sdpmBF}$$ where $\mu,\nu=1,2,3,4$. The expression (\[sdpmBF\]) is actually the set of Maxwell equations in a neutral polarized vacuum, where the second term corresponds to the approximation of the four-current linear in $A_{\mu}$, the coefficient being the polarization operator $\delta j_\mu (x)/\delta A^t_\nu (x^{\prime\prime})|_{A^t=A^{ext}}=\Pi_{\mu\nu}(x,x^{\prime\prime}\vert A_\mu^{ext})$. 
The external (constant and homogeneous) classical magnetic field is described by $A_\mu^{ext}(x)=\frac{1}{2}F_{\mu\nu}^{ext}x^\nu$, where the electromagnetic field tensor is $F_{\mu \nu}^{ext}=\partial_\mu A_\nu^{ext}-\partial_\nu A_\mu^{ext}=B (\delta_{\mu 1}\delta_{\nu 2}-\delta_{\mu 2}\delta_{\nu 1})$ and $F^*_{\mu \nu}=\frac{i}{2}\epsilon_{\mu \nu \rho \kappa}F^{\rho \kappa}$ is its dual pseudotensor. To understand what follows it is necessary to recall some basic results developed in refs. [@shabad1; @shabad2]. The presence of the constant magnetic field creates, in addition to the photon momentum four-vector $C^{4}_\mu=k_\mu$, three other orthogonal four-vectors, which are four-dimensionally transverse, $k^\mu C^{i}_\mu=0$ for $i=1,2,3$. These are $C^{1}_\mu= k^2 F^2_{\mu \lambda}k^\lambda-k_\mu (kF^2 k)$, $C^{2}_\mu=F^{*}_{\mu \lambda}k^\lambda$, $C^{3}_\mu=F_{\mu \lambda}k^\lambda$. On the light cone we have $C^{4}_\mu C^{4\mu}=k_\mu k^\mu=0$. From these four-vectors one gets three basic independent scalars $k^2$, $kF^2k$, $kF^{*2}k$, which, in addition to the field invariant ${\cal F}=\frac{1}{4}F_{\mu \rho}F^{\rho \mu}=\frac{1}{2}B^2$, form the set of four basic scalars of our problem. In momentum space one can write the eigenvalue equation [@shabad1] $$\Pi_{\mu\nu}(k,k^{\prime\prime}\vert A_\mu^{ext})=\sum_i \pi^{(i)}_{n,n^\prime}\, a^{(i)}_\mu a^{(i)}_\nu/(a^{(i)\lambda }a^{(i)}_\lambda ). \label{2}$$ To each eigenvalue $\pi^{(i)}_{n,n^\prime}$, $i=1,2,3$, there corresponds an eigenvector $a^{(i)}_\mu$. The set $a^{(i)}_\mu$ is obtained by simply normalizing the set of four-vectors $C^{i}_\mu$. ($C^{4}_\mu=k_\mu$ leads to a vanishing eigenvalue due to the four-dimensional transversality property $\Pi_{\mu\nu}(k,k^{\prime\prime}\vert A_\mu^{ext})k^\nu=0$.) 
The solution of the equation of motion (\[sdpmBF\]) can be written as a superposition of eigenwaves, $$A_\mu(k)=\sum_{j=1}^4 \delta(k^2-\pi_j)a_\mu^j(k). \label{3}$$ By considering $a^{(i)}_\mu (x)$ as the electromagnetic four-vector describing the eigenmodes, it is easy to obtain the corresponding electric and magnetic fields of each mode, ${\bf e}^{(i)}= \frac{\partial }{\partial x_0}\vec{a}^{(i)}-\frac{\partial }{\partial {\bf x}}a^{(i)}_0$, ${\bf h}^{(i)}=\nabla\times\vec{a}^{(i)}$ (see [@shabad2]). From now on we specialize to a frame in which $x_3||B$. Then $kF^2k/2\mathcal{F}=-k_{\perp}^2$ and we define $z_1= k^2 + kF^2k/2\mathcal{F}=k_{\parallel}^2-\omega^2$. The previous results (see [@shabad2]) indicate the existence of three dispersion equations with the following structure: $$k^2=\pi^{(i)}\left(z_1,k_{\perp}^2,eB\right),\quad i=1,2,3. \label{egg}$$ The eigenvalues $\pi^{(i)}$ contain only even functions of the external field through the scalars $kF^2k$, $kF^{*2}k$, and $e \sqrt{2 \cal F}=eB$, and can be expressed as a functional expansion in series of even powers of the product $e A_\mu^{ext}$ [@Fradkin]. One can solve (\[egg\]) for $z_1$ in terms of $k_{\perp}^2$. This yields $$\omega^2=\vert\textbf{k}\vert^2+f_i\left(k_{\perp}^2,B\right). \label{eg2}$$ The term $f_i$ contains the interaction of the photon with the virtual $e^{\pm}$ pairs in the external field in terms of the variables $k_{\perp}^2,B$. As shown in [@shabad2], it causes the photon dispersion curve to depart drastically from the light cone near the energy thresholds for free pair creation. We are thus in a position to define an anomalous magnetic moment for the photon as $\mu_\gamma=-\partial \omega/\partial B$. Then $\mu_\gamma$ is a function
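Since $\mu_\gamma$ is defined from the dispersion law (\[eg2\]), implicit differentiation at fixed photon momentum makes it explicit. The following display is our one-line sketch, a straightforward consequence of the equations above (the index $i$ labels the propagation mode):

```latex
% Implicit differentiation of \omega^2 = |\mathbf{k}|^2 + f_i(k_\perp^2, B)
% at fixed photon momentum \mathbf{k}:
2\,\omega\,\frac{\partial\omega}{\partial B} = \frac{\partial f_i}{\partial B}
\qquad\Longrightarrow\qquad
\mu_\gamma \;=\; -\,\frac{\partial\omega}{\partial B}
          \;=\; -\,\frac{1}{2\omega}\,\frac{\partial f_i}{\partial B}.
% Paramagnetic behavior (\mu_\gamma > 0) thus corresponds to
% \partial f_i/\partial B < 0, i.e. a photon energy that decreases
% with increasing field strength.
```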
--- abstract: 'We present 3D magnetohydrodynamic (MHD) numerical simulations of the evolution of self–gravitating and weakly magnetized disks with an adiabatic equation of state. Such disks are subject to the development of both the magnetorotational and gravitational instabilities, which transport angular momentum outward. As in previous studies, our hydrodynamical simulations show the growth of strong $m=2$ spiral structure. This spiral disturbance drives matter toward the central object and disappears when the Toomre parameter $Q$ has increased well above unity. When a weak magnetic field is present as well, the magnetorotational instability grows and leads to turbulence. In that case, the strength of the gravitational stress tensor is lowered by a factor of about 2 compared to the hydrodynamical run and oscillates periodically, reaching very small values at its minimum. We attribute this behavior to the presence of a second spiral mode with higher pattern speed than the one which dominates in the hydrodynamical simulations. It is apparently excited by the high frequency motions associated with MHD turbulence. The nonlinear coupling between these two spiral modes gives rise to a stress tensor that oscillates with a frequency which is a combination of the frequencies of each of the modes. This interaction between MHD turbulence and gravitational instabilities therefore results in a smaller mass accretion rate onto the central object.' author: - 'Sébastien Fromang, Steven A. Balbus, Caroline Terquem and Jean–Pierre De Villiers' bibliography: - 'author.bib' title: 'Evolution of self–gravitating magnetized disks. II' --- Introduction ============ During the first stages of their evolution, protoplanetary disks, for example, are expected to be rather massive because of strong infall from the parent molecular cloud. 
As the disk builds up in mass as a result of the collapse of an envelope, its surface mass density becomes large enough for gravitational instabilities to develop (e.g.,  ). These disks are also believed to be sufficiently ionized, at least over some extended regions, to be coupled to a magnetic field [@gammie96; @sanoetal00; @fromang02]. By modeling the outer parts of disks around quasi-stellar objects (QSOs) as steady, viscous, geometrically thin, and optically thick, @goodmanj03 has argued that they are self-gravitating. More precisely, he predicts self–gravitational instabilities to develop beyond about $10^{-2}$ parsecs from the central object. In addition, it has been suggested that self-gravitating regions of disks around QSOs are likely to be coupled to a magnetic field. The stability of a thin, self-gravitating gas disk is controlled by the Toomre $Q$ parameter [@toomre64]: $$Q=\frac{c_s\kappa}{\pi G \Sigma} \, ,$$ where $c_s$ is the sound speed, $\kappa$ is the epicyclic frequency (see, e.g.,  ), $\Sigma$ is the disk surface mass density and $G$ is the gravitational constant. Gaseous disks are stable when $Q{\raisebox{-.8ex}{$\buildrel{\textstyle>}\over\sim$}}1$. Since analytical predictions of the nonlinear evolution of gravitational instabilities are difficult, there have been a large number of numerical simulations of gravitationally unstable disks. Despite the rather daunting technical problems of combining three-dimensional (3D) hydrodynamic calculations with rapid and accurate Poisson equation solvers, significant progress has been made. To do so, the energetics must be treated crudely, with the focus squarely on purely dynamical behavior. Several authors have investigated the saturation properties of the instability, and have shown that it is capable of transporting significant amounts of mass and angular momentum in a few orbital times. Most of these early studies adopted a polytropic equation of state (EOS). 
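For concreteness, the Toomre criterion above can be evaluated directly from its definition. The disk parameters below (sound speed, radius, surface density, central mass) are arbitrary illustrative values in cgs units, assumed for this sketch and not taken from the simulations in this paper.

```python
import numpy as np

G = 6.674e-8  # gravitational constant [cm^3 g^-1 s^-2]

def toomre_q(c_s, kappa, sigma):
    """Toomre parameter Q = c_s * kappa / (pi * G * Sigma).

    c_s   : sound speed [cm/s]
    kappa : epicyclic frequency [1/s] (kappa = Omega for Keplerian rotation)
    sigma : surface mass density [g/cm^2]
    """
    return c_s * kappa / (np.pi * G * sigma)

# Illustrative protoplanetary-disk numbers (assumed, order of magnitude only):
# c_s ~ 0.5 km/s, Keplerian Omega at 30 AU around a solar-mass star,
# Sigma ~ 100 g/cm^2.
M_sun, AU = 1.989e33, 1.496e13
r = 30 * AU
omega = np.sqrt(G * M_sun / r**3)   # Keplerian angular frequency
q = toomre_q(5e4, omega, 100.0)
# Q below ~1 signals gravitational instability; Q well above 1 is stable.
```

For these numbers the routine returns $Q\approx 3$, i.e. a marginally stable disk; raising $\Sigma$ or lowering $c_s$ pushes $Q$ below unity and into the unstable regime.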
More recently, isothermal disks have also been studied [@pickett98; @pickett00a; @boss98; @mayer02]. Some of these models considered more detailed thermodynamics [@boss02]. All these models were purely hydrodynamical, and neglected the effect of magnetic fields. However, it is known that the stability of astrophysical disks is extremely sensitive to the presence of weak magnetic fields. In particular, the magnetorotational instability (MRI) completely disrupts laminar Keplerian flow when a subthermal magnetic field of any geometry is present, as established by Balbus & Hawley in the early 1990s (see their review). Since disks around low–mass stars and around QSOs may be both magnetized and self–gravitating, the spiral-structure gravitational transport described above must somehow develop in a medium in the throes of MHD turbulence. The question naturally arises as to how these two powerful instabilities interact with one another. What is the ultimate effect on the global properties of accretion disks, and in particular, on the critical transport properties of mass and angular momentum? To keep this initial investigation tractable, we must restrict ourselves here to an adiabatic EOS. But the dynamical behavior of “simple” adiabatic disks is still rich, and contains unanticipated findings. In a companion paper to this one (@fromangetal04a [-@fromangetal04a], hereafter paper I), we carried out 2D axisymmetric numerical simulations of the evolution of massive and magnetized disks. The results show that the MRI behaves in a self–gravitating environment as it does in zero-mass disks. Turbulent transport of angular momentum causes the disk to evolve toward a two-component structure: (1) an inner thin disk in Keplerian rotation fed by (2) an outer thick disk whose rotation profile deviates from Keplerian, strongly influenced by self-gravity. However, angular momentum transport by gravitational instabilities cannot develop in axisymmetric simulations, which leaves unanswered the question of the outcome of the interaction between both instabilities. 
This is the subject of the present paper. The plan of the paper is as follows: in section 2, we present our numerical methods. The initial state of our simulations is described in section 3. We present our results in section 4 and summarize our conclusions in section 5. Numerical methods ================= Algorithms ---------- The calculations in this paper are based on the equations of ideal MHD: $$\begin{aligned} \frac{\partial \rho}{\partial t} + {{ \mbox{\boldmath{$\nabla$}} }}{{ \mbox{\boldmath{$\cdot$}} }}(\rho {\bf v}) = 0, \\ \rho \left( \frac{\partial {\bf v}}{\partial t} + {\bf v} {{ \mbox{\boldmath{$\cdot$}} }}{{ \mbox{\boldmath{$\nabla$}} }}{\bf v} \right) = - {{ \mbox{\boldmath{$\nabla$}} }}P - \rho {{ \mbox{\boldmath{$\nabla$}} }}\Phi + \frac{1}{4 \pi} ({{ \mbox{\boldmath{$\nabla$}} }}{{ \mbox{\boldmath{$\times$}} }}{\bf B}) {{ \mbox{\boldmath{$\times$}} }}{\bf B}, \\ \rho \left( \frac{\partial }{\partial t} + {\bf v} {{ \mbox{\boldmath{$\cdot$}} }}{{ \mbox{\boldmath{$\nabla$}} }}\right) \left( \frac{e}{\rho} \right) = -P {{ \mbox{\boldmath{$\nabla$}} }}{{ \mbox{\boldmath{$\cdot$}} }}{\bf v}, \\ \frac{\partial {\bf B}}{\partial t} = {{ \mbox{\boldmath{$\nabla$}} }}{{ \mbox{\boldmath{$\times$}} }}( {\bf v} {{ \mbox{\boldmath{$\times$}} }}{\bf B} ), \label{MHD equations}\end{aligned}$$ where $\rho$ is the mass density, $e$ is the energy density, $\bf{v}$ is the fluid velocity, $\bf{B}$ is the magnetic field, $P$ is the gas pressure and $\Phi=\Phi_s+\Phi_c$ is the total gravitational potential, which has contributions $\Phi_s$ from the disk self–gravity and $\Phi_c$ from a central mass. The Poisson equation determines the gravitational potential, $$\nabla^2 \Phi_s = 4 \pi G \rho,$$ and to close our system of equations, we adopt an adiabatic equation of state for a monoatomic gas: $$P = (\gamma -1)e, \quad \gamma = 5/3 . \label{EOS}$$ The code uses standard cylindrical coordinates $(r, \phi, z)$ and time–explicit Eulerian finite differences. The
--- abstract: | Let $\varphi: D\rightarrow \Omega$ be a homeomorphism from a circle domain $D$ onto a domain $\Omega\subset\hat{{\mathbb{C}}}$. We obtain necessary and sufficient conditions (1) for $\varphi$ to have a continuous extension to the closure $\overline{D}$ and (2) for such an extension to be injective. Further assume that $\varphi$ is conformal and that $\partial\Omega$ has at most countably many non-degenerate components $\{P_n\}$ whose diameters have a finite sum $\displaystyle\sum_n{\rm diam}(P_n)<\infty$. When the point components of $\partial D$ or those of $\partial \Omega$ form a set of $\sigma$-finite linear measure, we can show that $\varphi$ continuously extends to $\overline{D}$ if and only if all the components of $\partial\Omega$ are locally connected. This generalizes Carathéodory’s Continuity Theorem, which concerns the case when $D$ is the open unit disk $\left\{z\in\hat{{\mathbb{C}}}: |z|<1\right\}$, and allows us to derive a new generalization of the Osgood–Taylor–Carathéodory Theorem. *Keywords: Carathéodory’s Continuity Theorem, Peano compactum, generalized Jordan domain.* **MSC 2010: Primary 30A72, 30D40; Secondary 54C20, 54F25.** --- Two types of questions are commonly studied from a topological viewpoint. In the first, we want to decide whether two spaces $X$ and $Y$ are topologically equivalent, or homeomorphic, in the sense that there is a homeomorphism $h_1:X\rightarrow Y$. In the second, the spaces $X$ and $Y$ are respectively embedded in two larger spaces, say $\hat{X}$ and $\hat{Y}$, and we wonder whether a continuous map $h_2:X\rightarrow Y$ allows a continuous extension $\hat{h}_2: \hat{X}\rightarrow\hat{Y}$. Our study concerns a special case of the second question, when $X$ is a circle domain and $h_2$ a conformal homeomorphism sending $X$ onto a domain $Y\subset\hat{{\mathbb{C}}}$. In such a case $X$ and $Y$ are said to be [**conformally equivalent**]{}. We first recall Carathéodory’s Continuity Theorem [@Caratheodory13-a]. 
A conformal homeomorphism $\varphi:{\mathbb{D}}\rightarrow \Omega\subset\hat{{\mathbb{C}}}$ of the unit disk ${\mathbb{D}}=\{z: |z|<1\}$ has a continuous extension $\overline{\varphi}: \overline{{\mathbb{D}}}\rightarrow\overline{\Omega}$ if and only if the boundary $\partial\Omega$ is a Peano continuum, [*i.e. *]{} a continuous image of the closed interval $[0,1]$. If $\Omega$ in the above theorem is a [**Jordan domain**]{}, so that its boundary is a [**Jordan curve**]{}, the extension $\overline{\varphi}: \overline{{\mathbb{D}}}\rightarrow\overline{\Omega}$ is actually injective. This has been obtained earlier by Osgood and Taylor [@Osgood-Taylor1913 Corollary 1] and independently by Carathéodory [@Caratheodory13-b]. It will be referred to as the Osgood-Taylor-Carathéodory Theorem. See for instance [@Arsove68-a Theorem 4]. Here we also call it shortly the OTC Theorem. A conformal homeomorphism $\varphi:{\mathbb{D}}\rightarrow \Omega\subset\hat{{\mathbb{C}}}$ has a continuous and injective extension to $\overline{{\mathbb{D}}}$ if and only if the boundary $\partial\Omega$ is a simple closed curve. There are very recent generalizations of the above OTC Theorem. See [@He-Schramm93 Theorem 3.2], [@He-Schramm94 Theorem 2.1], and [@Ntalampekos-Younsi19 Theorem 6.1]. Those generalizations are closely connected with a very famous example of the first question, proposed in 1909 by Koebe [@Koebe09]. Is every domain $\Omega\subset\hat{{\mathbb{C}}}$ conformally equivalent to a circle domain? When $\Omega$ is finitely connected, in the sense that its boundary has finitely many components, the above question was resolved by Koebe [@Koebe18]. See the following theorem. Each finitely connected domain $\Omega\subset\hat{{\mathbb{C}}}$ is conformally equivalent to a circle domain $D$, unique up to Möbius transformations. When $\Omega$ is at most countably connected, He and Schramm [@He-Schramm93] obtained the same result. 
Each countably connected domain $\Omega\subset\hat{{\mathbb{C}}}$ is conformally equivalent to a circle domain, unique up to Möbius transformations. This covers some earlier and more restricted results that partially solve [**Koebe’s Question**]{}, when additional conditions on a countably connected domain $\Omega$ are assumed. Among others, one may see [@Strebel51] for such a result. A slightly more general version of the above theorem, on relative circle domains, is given by He and Schramm in [@He-Schramm95a]. Here $\Omega\subset A$ is a relative circle domain in $A$ provided that each component of $A\setminus\Omega$ is either a point or a closed geometric disk. An equivalent statement, pointed out by He and Schramm in [@He-Schramm95a], reads as follows. Given a countably connected domain $A\subset\hat{{\mathbb{C}}}$, every relative circle domain $\Omega\subset A$ is conformally equivalent to a circle domain $D$, unique up to Möbius transformations. The uniqueness part of the above extended versions of [**Koebe’s Theorem**]{} comes from the conformal rigidity of specific circle domains. For circle domains that are at most countably connected, and even for those that have a boundary with $\sigma$-finite linear measure, the conformal rigidity is known. See [@He-Schramm93 Theorem 3.1] and [@He-Schramm94]. To obtain the conformal rigidity of the underlying circle domains, He and Schramm actually employ some extended version of the OTC Theorem. See [@He-Schramm93 Theorem 3.2] for the case of countably connected domains. Before addressing what we study, we recall that in Carathéodory’s Continuity Theorem, the “only if” part follows from very basic observations. On the other hand, the “if” part may be obtained by using the prime ends of $\Omega$, or equivalently, the cluster sets of $\varphi$. See [@Caratheodory13-a] and [@CL66] for the theory of prime ends and for that of cluster sets. 
Moreover, by the Hahn-Mazurkiewicz-Sierpiński Theorem [@Kuratowski68 p.256, $\S50$, II, Theorem 2], a compact connected metric space is a Peano continuum if and only if it is locally connected. Therefore, in Carathéodory’s Continuity Theorem one may replace the property of being a Peano continuum with that of being locally connected. In such a form, the same result still holds if we change ${\mathbb{D}}$ into a circle domain that is finitely connected, [*i.e. *]{} one whose boundary has finitely many components, with local connectedness required of each component of $\partial\Omega$. We will characterize all homeomorphisms $\varphi: D\rightarrow\Omega$ of an arbitrary circle domain $D$ onto a domain $\Omega\subset\hat{{\mathbb{C}}}$ that allow a continuous extension $\overline{\varphi}: \overline{D}\rightarrow\overline{\Omega}$ to the closure $\overline{D}$. We also analyse the restriction of $\overline{\varphi}$ to any boundary component of $D$, trying to find conditions for such a restriction to be injective. More detail is provided by the following. Under what conditions does $\varphi$ extend continuously to $\overline{D}$, if it is further assumed to be a conformal map? What We Obtain and What Are Known ================================= In the first theorem we find a topological counterpart of Carathéodory’s Continuity Theorem. \[topological-cct\] Any homeomorphism $\varphi$ of a generalized Jordan domain $D$ onto a domain $\Omega\subset\hat{{\mathbb{C}}}$ has a continuous extension $\overline{\varphi}: \overline{D}\rightarrow\overline{\Omega}$ if and only if the conditions below are both satisfied. - The boundary $\partial\Omega$ is a Peano compactum. - The oscillations of $\varphi
--- abstract: 'In this paper we introduce new methods to prove the finite cyclicity of some graphics through a triple nilpotent point of saddle or elliptic type surrounding a center. After applying a blow-up of the family, yielding a singular 3-dimensional foliation, this amounts to proving the finite cyclicity of a family of limit periodic sets of the foliation. The boundary limit periodic sets of these families were the most challenging, but the new methods are quite general for treating such graphics. We apply these techniques to prove the finite cyclicity of the graphic $(I_{14}^1)$, which is part of the program started in 1994 by Dumortier, Roussarie and Rousseau (and called DRR program) to show that there exists a uniform upper bound for the number of limit cycles of a planar quadratic vector field. We also prove the finite cyclicity of the boundary limit periodic sets in all graphics but one through a triple nilpotent point at infinity of saddle, elliptic or degenerate type (with a line of zeros) and surrounding a center, namely the graphics $(I_{6b}^1)$, $(H_{13}^3)$, and $(DI_{2b})$.' author: - | Robert Roussarie, Université de Bourgogne\ Christiane Rousseau, Université de Montréal[^1] title: Finite cyclicity of some center graphics through a nilpotent point inside quadratic systems --- Introduction ============ This paper is part of a long term program to prove the finiteness part of Hilbert’s 16th problem for quadratic vector fields, sometimes written $H(2) <\infty$, namely the existence of a uniform bound for the number of limit cycles of quadratic vector fields. The DRR program (see paper [@DRR94(1)]) reduces this problem to proving that 121 graphics (limit periodic sets) have finite cyclicity inside quadratic vector fields, and the long term program is to prove the finite cyclicity of all these graphics. 
This program has been an opportunity to develop new, more sophisticated methods for analyzing the finiteness of the number of limit cycles bifurcating from graphics in generic families of $C^\infty$ vector fields, in analytic families of vector fields, and in finite-parameter families of polynomial vector fields. In this paper, we focus on some graphics in the latter case: graphics through a nilpotent point and surrounding a center inside quadratic systems. The general method is to use the Bautin trick, namely transforming a proof of finite cyclicity of a generic graphic into a proof of finite cyclicity of a graphic surrounding a center. This is possible in quadratic systems since the center conditions are well known: indeed all graphics through a nilpotent point and surrounding a center occur in the stratum of reversible systems. The systems of this stratum are symmetric with respect to an axis, and are also Darboux integrable with an invariant line and an invariant conic. In practice, the Bautin trick consists in dividing a displacement map $V$ in a center ideal, i.e. in writing it as a finite sum of generalized monomials times non-vanishing functions of the form $$V(z)= \sum_{i=1}^n a_i m_i(1+h_i(z)),\label{type_V}$$ where each $a_i$ belongs to the center ideal in parameter space, $m_i$ is a generalized monomial in $z$ and $h_i(z)=o(1)$ behaves well under derivation. To compute the displacement map, we write it as a difference of compositions of regular transitions and Dulac maps near the singular points. The Dulac maps are calculated in $C^k$ normalizing coordinates for a family unfolding the vector field. In this paper, we develop some general additional methods, which allow us to prove the finite cyclicity of the graphic $(I_{14}^1)$ (Figure \[graphics\](a)). In particular, for the unfolding of this graphic, it is very helpful to be able to claim that all regular transitions are the identity in the center case. 
This is possible if we exploit the fact that the centers occur when the system is symmetric, and if we choose cleverly the sections on which the different transition maps are defined. Also, in the center case, the Dulac maps have a simple form since the system is Darboux integrable. The new methods are the following. - We highlight that the change to $C^k$ normalizing coordinates in the neighborhood of the singular points on the blow-up locus can be done by an operator. This allows preserving the symmetry in the center case when changing to normalizing coordinates. - We introduce a uniform way of calculating the two types of Dulac maps when entering the blow-up, through a much shorter proof than the one given in [@ZR]. - Although each Dulac map is not $C^k$, we can divide in the center ideal its difference to the corresponding Dulac map in the integrable case. - The method of the blow-up of the family allows reducing the proof of finite cyclicity of the graphic to the proof that a certain number of limit periodic sets have finite cyclicity. These limit periodic sets are defined in the blown-up space. The ones obtained in blowing up a nilpotent saddle are shown in Table \[tab.shhconvex\]. For all of them but one (the boundary limit periodic set), we can reduce the displacement map to a $1$-dimensional map, the number of zeros of which can be bounded by the Bautin trick and a derivation-division algorithm on a map of type (\[type_V\]). The boundary limit periodic set is more challenging, since we need to work with a 2-dimensional displacement map, the zeros of which we must study along the leaves of an invariant foliation coming from the blow-up. We introduce a generalized derivation operator, which allows performing a derivation-division algorithm on functions of the type $$V(r,\rho)= \sum_{i=1}^n a_im_i(1+h_i(r,\rho)),\label{type_V2}$$ where $h_i$ are ${\cal C}^k $-functions on monomials and $m_i$ are generalized monomials in $r$, $\rho$ (see definitions in Appendix II). 
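A single step of the derivation–division algorithm mentioned above can be sketched schematically. The display below is our illustrative sketch, not the authors' precise statement: we write $\tilde m_i=m_i/m_1$ and $1+\tilde h_i=(1+h_i)/(1+h_1)$, and assume that the leading unit $m_1(1+h_1)$ is non-vanishing on the domain considered.

```latex
% Divide V(z) = \sum_i a_i m_i (1 + h_i(z)) by the leading unit:
\frac{V(z)}{m_1(z)\bigl(1+h_1(z)\bigr)}
   \;=\; a_1 \;+\; \sum_{i=2}^{n} a_i\,\tilde m_i(z)\bigl(1+\tilde h_i(z)\bigr),
% then differentiate, which kills the constant term and leaves a sum
% of the same form with n-1 terms:
\left(\frac{V}{m_1(1+h_1)}\right)'(z)
   \;=\; \sum_{i=2}^{n} a_i\,\Bigl(\tilde m_i\bigl(1+\tilde h_i\bigr)\Bigr)'(z).
% By Rolle's theorem each division-then-derivation step costs at most one
% zero, so iterating n-1 times bounds the number of small zeros of V.
```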
During this process, we have to take into account that $r\rho=\mathrm{Cst}$. We have a partial result for every graphic through a triple point at infinity but one (namely $(H^3_{14})$): \[thMain1\] Let us consider the graphics $(I^1_{14})$, $(I^1_{6b})$, $(H^3_{13})$ and $(DI_{2b})$ through a triple point at infinity (see Figure 1). Then for any of them, the boundary limit periodic set obtained in the blow-up has finite cyclicity. Theorem \[thMain1\] is not sufficient to prove that the given graphics have finite cyclicity inside the family of quadratic vector fields. The reason is that, besides the boundary limit periodic set, other limit periodic sets (see for instance Table \[tab.shhconvex\] for $(I_{14}^1)$) are obtained in the blow-up and, as explained above, we have to prove that each of them also has finite cyclicity. As for the finite cyclicity of the other graphics $(I_{6b}^1)$, $(H_{13}^3)$ and $(DI_{2b})$, we intend to address the problem in the near future. The finite cyclicity of $(H_{13}^3)$ should be straightforward with arguments identical to those used for $(I_{14}^1)$. It will be done simultaneously with the corresponding generic graphic $(H_{12}^3)$. Some of the limit periodic sets to be studied for $(I_{6b}^1)$ will involve four Dulac maps of second type. For these limit periodic sets, it is not possible to reduce the study of the cyclicity to a single equation. Hence, new methods will need to be adapted to treat the center case, when the periodic solutions correspond to a system of two equations in the four variables $r_1, \rho_1, r_2, \rho_2$, with $r_1\rho_1=\nu_1$ and $r_2\rho_2=\nu_2$. As for the graphic $(DI_{2b})$, some of the limit periodic sets to be studied involve four Dulac maps of second type, two of them through the semi-hyperbolic points $P_1$ and $P_2$ on the blown-up sphere. 
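To make the derivation-division algorithm concrete, here is a minimal illustration (ours, not taken from the paper) on a two-term map of type (\ref{type_V}), say $$V(z)= a_1(1+h_1(z)) + a_2\, z\,(1+h_2(z)).$$ Dividing by the non-vanishing factor $1+h_1(z)$ gives $a_1 + a_2\, z\,(1+\tilde h_2(z))$ with $\tilde h_2 = o(1)$, and one derivation kills the constant term, leaving the single term $a_2(1+\hat h_2(z))$, $\hat h_2 = o(1)$ (here we use that $h_i$ behaves well under derivation, so $z\tilde h_2'(z) = o(1)$), which does not vanish for small $z$ when $a_2 \neq 0$. Since, by Rolle's theorem, each divide-and-differentiate step costs at most one zero, $V$ has at most one small zero in this case; for a map of type (\ref{type_V}) with $n$ terms, the same argument bounds the number of small zeros by $n-1$.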
The techniques developed in this paper can be adapted for studying the boundary limit periodic sets of graphics of the DRR program through a nilpotent finite singular point. The only new difficulty in that case is to show that the three parameters of the leading terms in the displacement map do indeed generate the center ideal. We also hope to adapt them to study the boundary graphic of the hemicycle $(H^3_{14})$: there, the additional difficulty is the two semi-hyperbolic points along the equator. Proofs of Theorems \[thMain1\] and \[thMain2\] are given in Section 3 and Appendix II, where the detailed computations of cyclicity are found in Theorems \[thderdiv\], \[thpgeq2\] and \[thp1\]. Theorem \[thnormalformhyp\] in Appendix I gives a statement about normal forms for 3-dimensional hyperbolic saddle points in
--- abstract: 'Recently N. Nitsure showed that for a coherent sheaf ${{\mathcal F}}$ on a noetherian scheme the automorphism functor ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ is representable if and only if ${{\mathcal F}}$ is locally free. Here we remove the noetherian hypothesis and show that the same result holds for the endomorphism functor ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ even if one asks for representability by an algebraic space.' author: The main result is as follows: \[thm11\] Let $X$ be a scheme and ${{\mathcal F}}$ a quasi-coherent ${{\mathcal O}}_X$-module of finite presentation. Then the following are equivalent: 1. ${{\mathcal F}}$ is locally free. 2. ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ is representable by a scheme. 2'. ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ is representable by a scheme. If $X$ is locally noetherian, these conditions are also equivalent to the following: 3. ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ is representable by an algebraic space. 3'. ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ is representable by an algebraic space. {#s12} The equivalence of 1) and 2) in theorem \[thm11\] in case $X$ is noetherian is the main result of [@N]. Our proofs follow [*loc.cit.*]{} \[t1\] Let $(A , {{\mathfrak{m}}})$ be a local ring and $M$ a finitely presented $A$-module which is not free. Then there is a local homomorphism $A \to B$ such that $M \otimes_A B$ is not $B$-free; if $A$ is noetherian, $B$ may be chosen artinian. We observe that in the last statement of the lemma the noetherian hypothesis is indispensable: let $(B , {{\mathfrak{m}}})$ be a local ring such that there is $0 \neq b \in \bigcap_{n \ge 1} {{\mathfrak{m}}}^n$. Clearly $(b^2) \subsetneq (b)$, so after dividing out $(b^2)$ one gets a ring $B$ as in the lemma but for any local homomorphism $f : B \to C$ with $C$ [*noetherian*]{} one clearly has $f (b) = 0$. \[t2\] Let $S$ be a scheme and $S_0 \subseteq S$ a closed subscheme defined by a nilpotent ideal sheaf. Assume $X$ is a flat $S$-scheme and $f : X \to Y$ is an $S$-morphism such that $f \times {\mathrm{id}}_{S_0}$ is an isomorphism. 
Then $f$ is an isomorphism. {#s13} In order to treat the representability of ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ we will use the following observation: \[t3\] Under the assumptions of 1.1 the obvious natural transformation of (set-valued) functors ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}} \to {\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ is relatively representable by an open immersion. For completeness we also include a proof of the next lemma which is essentially lemma 5 of [@N] and shows the relative representability of a “parabolic” sub-group functor:\ Let $X$ be a scheme and $$\label{eq:1} 0 \longrightarrow {{\mathcal F}}' \longrightarrow {{\mathcal F}}\longrightarrow {{\mathcal F}}'' \longrightarrow 0$$ a short exact sequence of quasi-coherent ${{\mathcal O}}_X$-modules with ${{\mathcal F}}'$ finitely presented and ${{\mathcal F}}''$ locally free. For any morphism $f : Y \to X$, the sequence $f^* ((\ref{eq:1}))$ is exact because ${{\mathcal F}}''$ is in particular ${{\mathcal O}}_X$-flat and it makes sense to consider $$P (Y) := \{ \alpha \in {\mathrm{Aut}\,}_{{{\mathcal O}}_Y} (f^* {{\mathcal F}}) {\, | \,}\alpha (f^* {{\mathcal F}}') \subseteq f^* {{\mathcal F}}' \} \subseteq {\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}} (Y) \; .$$ \[t4\] In the above situation, the natural transformation $P \hookrightarrow {\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ is relatively representable by a closed immersion. For more details see 7.6. Proofs ====== {#s21} In this subsection we dispense with the easy implications of theorem \[thm11\], the assumptions and notations of which we now assume:\ As ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ and ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ are clearly Zariski sheaves the problem of representing them is Zariski local on $X$, i.e. we can assume that $X$ is affine and ${{\mathcal F}}$ corresponds to a free module of finite rank. 
In this case, representability of both ${\underline{{\mathrm{Aut}\,}}}_{{{\mathcal F}}}$ and ${\underline{{\mathrm{End}\,}}}_{{{\mathcal F}}}$ is obvious; we have proved the implications 1) $\Rightarrow$ 2) and 1) $\Rightarrow$ 2’). Finally, the implications 2) $\Rightarrow$ 3) and 2’) $\Rightarrow$ 3’) are trivial. {#s22} [**Proof of lemma \[t1\]:**]{} Let $(A , {{\mathfrak{m}}})$ be a local ring and $M$ a finitely presented $A$-module which is not free. We will find the required local homomorphism $A \to B$ as a suitable quotient of $A$:\ Let $$\label{eq:2} A^m \xrightarrow{\alpha} A^n \xrightarrow{\beta} M \rightarrow 0$$ be a minimal presentation of $M$, i.e. $n = \dim_k (M / {{\mathfrak{m}}}M)$ where $k := A / {{\mathfrak{m}}}$ is the residue field of $A$. Then $M$ is free if and only if $\alpha = 0$: clearly $\alpha = 0$ is sufficient for freeness of $M$ and conversely, if $M$ is free, it is necessarily so of rank $n$, hence $\beta$ is a surjective endomorphism of $A^n$ which must be an isomorphism by a standard application of Nakayama’s lemma, c.f. [@M], thm. 2.4., hence $\alpha = 0$.\ For any $J \subseteq {{\mathfrak{m}}}$, (\[eq:2\]) $\otimes_A A / J$ is a minimal presentation of the $A / J$-module $M / JM$. If we denote by $I \subseteq A$ the ideal generated by the coefficients of any matrix representation of $\alpha$ and note that the minimality of (\[eq:2\]) implies $I\subseteq {{\mathfrak{m}}}$ we find that $M / JM$ is $A / J$-free if and only if $\alpha\otimes id_{A/J}=0$ if and only if $I \subseteq J$. As $M$ is not $A$-free we have $I \neq 0$ and as $I$ is finitely generated we get ${{\mathfrak{m}}}I \subsetneq I$, again by Nakayama’s lemma. By Zorn’s lemma, using again that $I$ is finitely generated, there is an ideal $J$ with ${{\mathfrak{m}}}I \subseteq J \subsetneq I$ and which is maximal subject to these conditions (indeed, any ascending chain of such ideals admits its union as an upper bound because I is finitely generated). 
We claim that $B := A / J$ is as required:\ By the maximality of $J$ the ideal ${\overline{I}}:= I / J$ is non-zero principal: ${\overline{I}}= (b) , 0 \neq b \in B$ and we ne
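For a concrete illustration of the ideal $I$ generated by the coefficients of $\alpha$ (a toy example of ours, not taken from [@N]): let $A = k[[t]]$, so ${{\mathfrak{m}}} = (t)$, and $M = A/(t^2)$. A minimal presentation is $$A \xrightarrow{\; t^2 \;} A \longrightarrow M \longrightarrow 0 \; ,$$ with $n = \dim_k (M / {{\mathfrak{m}}}M) = 1$ and $I = (t^2) \subseteq {{\mathfrak{m}}}$. In accordance with the criterion above, $M / JM$ is $A/J$-free exactly when $I \subseteq J$: for $J = (t^2)$ one gets the free module $A/(t^2)$ over itself, while for $J = (t^3)$ the module $M/JM = A/(t^2)$ is not free over $A/(t^3)$.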
--- abstract: 'Unbiased data collection is essential to guaranteeing fairness in artificial intelligence models. Implicit bias, a form of behavioral conditioning that leads us to attribute predetermined characteristics to members of certain groups, informs the data collection process. This paper quantifies implicit bias in viewer ratings of TED Talks, a diverse social platform assessing social and professional performance, in order to present the correlations of different kinds of bias across sensitive attributes. Although the viewer ratings of these videos should purely reflect the speaker’s competence and skill, our analysis of the ratings demonstrates the presence of overwhelming and predominant implicit bias with respect to race and gender. In our paper, we present strategies to detect and mitigate bias that are critical to removing unfairness in AI.' author: - 'Rupam Acharyya\*, Shouman Das\*, Ankani Chattoraj, Oishani Sengupta, Md Iftekar Tanveer' bibliography: - 'reference.bib' title: Detection and Mitigation of Bias in Ted Talk Ratings --- Introduction ============ Machine-learning techniques are being used to evaluate human skills in areas of social performance, such as automatically grading essays [@alikaniotis2016automatic; @taghipour2016neural], outcomes of video-based job interviews [@chen2017automated; @Naim2016], hirability [@Nguyen2016], presentation performance [@Tanveer2015; @Chen2017a; @Tanveer2018], etc. These algorithms automatically quantify relative skills and performances by assessing large quantities of human-annotated data. Companies and organizations worldwide are increasingly using commercial products that utilize machine learning techniques to assess these areas of social interaction. However, the presence of implicit bias in society reflected in the annotators and a combination of several other unknown factors (e.g. 
demographics of the subjects in the datasets, demographics of the annotators) creates systematic imbalances in human datasets. Machine learning algorithms (neural networks in most cases) trained on such biased datasets automatically replicate the imbalance [@o2016weapons] naturally present in the data and result in producing *unfair* predictions. Examining the impact of implicit bias in social behavior requires extensive, diverse human data that is spontaneously generated and reveals the perception of success in social performance. In this paper, we analyze ratings of TED Talk videos to quantify the amount of social bias in viewer opinions. TED Talks present a platform where speakers are given a short time to present inspiring and socially transformative ideas in an innovative and engaging way. In its mission statement, the TED organization describes itself as a “global community, welcoming people from every discipline and culture” and makes an explicit commitment to “change attitudes, lives, and ultimately the world” [@tedtalk]. Since TED Talks offer a platform to speakers from diverse backgrounds trying to convince people of their professional skills and achievements, the platform lends itself to a discussion of several critical issues regarding fairness and implicit bias: How can we determine the fairness of viewer ratings of TED Talk videos? Can we detect implicit bias in the ratings dataset? Do the ratings place some speakers at a disadvantage? Ideally, these ratings should depend on the perception of the speaker’s success and communicative performance, not on the speaker’s gender or ethnicity. For instance, our findings show that while a larger proportion of viewers rate white speakers in a confidently positive manner, speakers of other gender identities and ethnic backgrounds receive a greater number of mixed ratings and elicit wider differences of opinion. 
In addition, men and women are rated as positive or negative with more consistency, while speakers identifying with other gender identities are rated less consistently in either direction. Based on these observations, we conducted a computational analysis to detect and mitigate bias present in our data. We utilize a state-of-the-art metric, “*Disparate Impact*” as in [@feldman2015certifying], for measuring fairness, and three popular methods of bias correction— 1. pre-processing [@calmon2017optimized; @kamiran2012data], 2. in-processing [@calders2010three; @kamishima2011fairness], and 3. post-processing [@hardt2016equality]. We compared the predictions of the ratings with the actual ratings provided by the viewers of the TED talks and found that our model prediction performs better with respect to the fairness metric. Our experiments show that if traditional machine learning models are trained on a dataset without any consideration of the data bias, the models will make decisions in a way that could be highly unfair to an unprivileged group of society. In short, the major contributions of the paper are as follows: 1. We show that public speaking ratings can be biased depending on the race and gender of a speaker. 2. We propose a systematic procedure to detect unfairness in the TED Talk public speaking ratings. This procedure can be used for regression models. Related Works ============= With the increased availability of huge amounts of data, data-driven decision making has emerged as a fundamental practice in all sorts of industries. In recent years, data scientists and the machine learning community have put conscious effort into detecting and mitigating bias from data sets and respective models. 
Over the years, researchers have used multiple notions of fairness as tools to get rid of bias in data; these are outlined below: - *‘individual fairness’*, which means that similar individuals should be treated similarly [@dwork2012fairness] - *‘group fairness’*, which means that underprivileged groups should be treated the same as privileged groups [@pedreschi2009measuring; @pedreshi2008discrimination]. - *‘fairness through awareness’*, which assumes that an algorithm is fair as long as its outcome or prediction does not depend on the use of protected or sensitive attributes in decision making [@grgic2016case]. - *‘equality of opportunity’*, mainly used in classification tasks, which assumes that the probability of making a decision should be equal for groups with the same attributes [@hardt2016equality]. - *‘counterfactual fairness’*, very close to equality of opportunity, but the probability is calculated from the sample of counterfactuals [@russell2017worlds; @kusner2017counterfactual], which ensures that the predicted probability of a particular label should be the same even if the protected attributes change to different values. The fairness measures mentioned above can involve both manipulation of the data and the implementation of a supervised classification algorithm. One can employ strategies for detecting unfairness in a machine learning algorithm (see [@zliobaite2015survey]) and removing it by: - Pre-processing: this strategy involves processing the data to detect any bias and mitigating unfairness before training any classifiers [@calmon2017optimized; @kamiran2012data]. - In-processing: this technique adds a regularizer term to the loss function of the classifier which gives a measurement of the unfairness of the classifier [@calders2010three; @kamishima2011fairness]. - Post-processing: this strategy manipulates the predictor output so as to make the classifier fair under the measurement of a specific metric [@hardt2016equality]. 
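As a minimal sketch of how the disparate impact metric of [@feldman2015certifying] can be computed (our simplified formulation with binary predictions and a single binary protected attribute, not the paper's full pipeline):

```python
import numpy as np

def disparate_impact(y_pred, protected):
    """Ratio of favorable-outcome rates: unprivileged over privileged.

    y_pred    : binary outcomes (1 = favorable)
    protected : binary group labels (1 = privileged, 0 = unprivileged)

    A value near 1 indicates parity; the common "80% rule" flags
    values below 0.8 as evidence of disparate impact.
    """
    y_pred = np.asarray(y_pred)
    protected = np.asarray(protected)
    rate_unpriv = y_pred[protected == 0].mean()
    rate_priv = y_pred[protected == 1].mean()
    return rate_unpriv / rate_priv

# Toy check: privileged group favored 80% of the time, unprivileged 40%.
y = [1, 1, 1, 1, 0, 1, 0, 0, 0, 1]
g = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
print(disparate_impact(y, g))  # 0.4 / 0.8 = 0.5
```

The same quantity is exposed by fairness toolkits; computing it by hand makes explicit which group is treated as privileged in the ratio.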
For our analysis, we follow this well-established paradigm and use the open-source toolkit AIF360 [@aif360-oct-2018] to detect and mitigate bias present in the data set and classifiers at all three stages: the pre-processing, the in-processing and the post-processing step. Data Collection =============== We analysed the TED Talk data collected from the [ted.com](ted.com) website. We crawled the website and gathered information about TED Talk videos which have been published on the website for over a decade (2006-2017). These videos cover a wide range of topics, from contemporary political and social issues to modern technological advances. The speakers who delivered talks on the TED Talk platform also come from diverse backgrounds, including but not limited to scientists, education innovators, celebrities, environmentalists, philosophers, filmmakers, etc. These videos are published on the [ted.com](ted.com) website and are watched by millions of people around the world who can give ratings to the speakers. The rating of each talk is a collection of fourteen labels such as beautiful, courageous, fascinating, etc. In this study we try to find whether there is any implicit bias in the ratings of the talks with respect to the race and gender of the speaker. Some properties of the full dataset are given in table \[tab:datasize\]. Each viewer can assign three out of fourteen labels to a talk and we use the total count for each label of rating for our analysis. In figure \[fig:avg\_rating\], the average number of ratings in each of the fourteen categories is shown as a bar plot. Our preliminary observation reveals some disparities among the rating labels. 
**Property** **Quantity** ---------------------------------- -------------- Total number of Talks 2,383 Average number of views per talk 1,765,071 Total length of all talks 564.63 Hours Average rating labels per talk 2,498.6 : TED talk Dataset Properties: Information about the TED talk videos that are used in our method of detecting unfairness[]{data-label="tab:
--- abstract: | The one-dimensional partially asymmetric simple exclusion process with open boundaries is considered. The stationary state, which is known to be constructed in a matrix product form, is studied by applying the theory of $q$-orthogonal polynomials. Using a formula of the $q$-Hermite polynomials, the average density profile is computed in the thermodynamic limit. The phase diagram for the correlation length, which was conjectured in [@me99] (published in J. Phys. A), is confirmed. --- The one-dimensional asymmetric simple exclusion process (ASEP) is a stochastic process of particles hopping on a chain with hard-core interaction. The ASEP has been studied extensively since it is one of the few models which show rich non-equilibrium behaviors and are exactly solvable [@Derrida98]. Besides, the ASEP has applications to many interesting problems such as hopping conductivity, growth processes and traffic flows [@SZ]. In this article, we consider the stationary state of the ASEP with open boundary conditions. That is, the system is connected to particle reservoirs at the boundaries. The case where particles can hop only in one direction, which we refer to as the “totally asymmetric” case in the sequel, was solved in [@DEHP; @SD]. The current and the density profile were calculated exactly in the thermodynamic limit. The phase diagrams for the current and the correlation length were identified. The system exhibits phase transitions depending on the parameters at the boundaries. Recently the obtained phase diagram was discussed from the point of view of the domain wall dynamics [@KSKS]. The partially asymmetric case with the open boundary conditions was partially solved in [@me99]. The current was evaluated in the thermodynamic limit. The phase diagram for the current was identified. It turned out to be the same as the one obtained by mean-field approximation [@ER] or by employing a plausible assumption [@Sandow]. 
The phase diagram for the correlation length was also obtained by assuming that the correlation length is given by the logarithm of the ratio of the largest and the second largest eigenvalues of a certain matrix which plays a similar role as a transfer matrix does in equilibrium statistical mechanical models. It was shown that the phase diagram has a richer structure than that for the totally asymmetric case. This assumption, however, was not proved in [@me99]. In this sense the obtained phase diagram for the correlation length has remained a conjecture. The purpose of this paper is the confirmation of this phase diagram. By using the explicit formula for the Poisson kernel of the $q$-Hermite polynomials, the average density profile in the thermodynamic limit is calculated for the partially asymmetric case. It turns out that the phase diagram was correctly predicted in [@me99]. In this article, we only consider the case where hoppings of particles at the boundaries and those in the bulk part of the system are compatible. In other words, when we allow the particle input at the left boundary and the particle output at the right boundary, the hopping rate to the right is assumed to be larger than that to the left. When hoppings at the boundaries and those in the bulk are incompatible, the current becomes zero in the thermodynamic limit. The situation seems to be similar to the closed boundary condition where particles cannot enter or go out of the system [@SS]. Of course, when we consider a finite chain, the current remains positive. We remark that the asymptotic current for this case was evaluated in [@BECE]. The paper is organized as follows. In the next section, the definition of the model is given in terms of the master equation. The so-called matrix product ansatz, which gives the stationary state in the form of matrix product, is also explained. 
Some properties of the $q$-Hermite polynomials and the relationship to the matrix product ansatz are explained in section \[q-H\]. The main results are presented in the subsequent section. First, the one-point function is represented in the form of double integrals. Second, the average density profile in the thermodynamic limit is summarized, whereas the evaluation of the integrals is relegated to Appendices. The phase diagram for the correlation length is identified. The concluding remarks are given in the last section. Definition of Model and Matrix Product Ansatz ============================================= The one-dimensional asymmetric simple exclusion process (ASEP) is defined as follows. Each site of a chain of $L$ sites can be either empty or occupied by at most one particle. During the infinitesimal time interval $\d t$, each particle jumps to the right nearest neighboring site with probability $p_R \d t$ and to the left nearest neighboring site with probability $p_L \d t$. If the chosen site is already occupied, the particle does not move due to the exclusion rule. The case where particles can hop only in one direction, i.e., the case where either $p_L = 0$ or $p_R=0$, is called the “totally asymmetric” case. The $p_R=p_L$ case is called the “symmetric” case whereas the case where particles hop in both directions with different rates will be referred to as the “partially asymmetric” case. In addition, we allow the particle input at the left end of the chain with rate $\alpha$ and the particle output at the right end of the chain with rate $\beta$ (Fig. 1). In this article, we restrict our attention to the partially asymmetric case since the totally asymmetric case and the symmetric case were already solved in [@DEHP; @SD] and in [@me96] respectively. The restrictions on the parameters are $0 < p_L < p_R$ and $\alpha,\beta>0$. More formally, the process is defined in terms of the master equation. 
Each configuration of the system is indicated by $\{\tau_1,\tau_2,\ldots,\tau_L\}$ where $\tau_j$ $(j=1,2,\ldots,L)$ denotes the particle number at site $j$. Namely $\tau_j=0$ if the site $j$ is empty whereas $\tau_j=1$ if the site $j$ is occupied. Let $P(\tau_1,\tau_2,\ldots,\tau_L;t)$ denote the probability that the system has the configuration $\{ \tau_1,\tau_2,\ldots,\tau_L\}$ at time $t$. Then the time evolution of the ASEP is described by the following master equation, $$\begin{aligned} &\quad \frac{\d}{\d t} P(\tau_1,\tau_2,\ldots,\tau_L;t) \notag \\ &= \alpha (2\tau_1-1) P(0,\tau_2,\ldots,\tau_L;t) \notag \\ &\quad + \sum_{j=1}^{L-1} (\tau_j-\tau_{j+1}) \left[ p_L P(\tau_1,\tau_2,\ldots,0,1,\ldots,\tau_L;t) \right. \notag \\ &\quad \left. - p_R P(\tau_1,\tau_2,\ldots,1,0,\ldots,\tau_L;t) \right] \notag \\ &\quad + \beta (1-2\tau_L) P(\tau_1,\tau_2,\ldots,\tau_{L-1},1;t). \label{mas-eq}\end{aligned}$$ For instance, the master equation for the $L=2$ case reads $$\label{mas-eq-L2} \frac{\d}{\d t} \begin{bmatrix} P(00;t)\\ P(01;t)\\ P(10;t)\\ P(11;t) \end{bmatrix} = - \begin{bmatrix} \alpha & -\beta & 0 & 0\\ 0 & \alpha + p_L +\beta & -p_R & 0\\ -\alpha & -p_L & p_R & -\beta\\ 0 & -\alpha & 0 & \beta \end{bmatrix} \begin{bmatrix} P(00;t)\\ P(01;t)\\ P(10;t)\\ P(11;t) \end{bmatrix}.$$ One can verify that the dynamics of the ASEP is correctly encoded in the master equation (\[mas-eq\]). When time $t$ goes to infinity, the system is expected to reach the stationary state. The probability distribution in the stationary state will be denoted as $P(\tau_1,\tau_2,\ldots,\tau_L
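The $L=2$ master equation (\[mas-eq-L2\]) can also be checked numerically. The following sketch (ours, with arbitrarily chosen rates satisfying $0 < p_L < p_R$ and $\alpha,\beta>0$) builds the $4\times 4$ matrix and extracts the stationary distribution as the normalized kernel:

```python
import numpy as np

# Rates: input alpha, output beta, right/left hopping with 0 < p_L < p_R.
alpha, beta, p_R, p_L = 1.0, 0.5, 1.0, 0.3

# Matrix M of the L = 2 master equation dP/dt = -M P,
# in the configuration basis (00, 01, 10, 11).
M = np.array([
    [ alpha,              -beta,   0.0,   0.0],
    [   0.0, alpha + p_L + beta,  -p_R,   0.0],
    [-alpha,               -p_L,   p_R, -beta],
    [   0.0,             -alpha,   0.0,  beta],
])

# Probability conservation: each column of M sums to zero.
assert np.allclose(M.sum(axis=0), 0.0)

# Stationary distribution: the (normalized) kernel of M.
w, V = np.linalg.eig(M)
P = np.real(V[:, np.argmin(np.abs(w))])
P = P / P.sum()
print(P)  # stationary probabilities of 00, 01, 10, 11

assert np.allclose(M @ P, 0.0, atol=1e-10)  # stationarity
assert P.min() > 0                          # irreducible chain
```

The zero column sums express that probability only flows between configurations, and the unique zero eigenvalue corresponds to the stationary state reached as $t \to \infty$.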
--- abstract: 'We have explored a simple microscopic model to simulate a thermally activated rate process where the associated bath which comprises a set of relaxing modes is not in an equilibrium state. The model captures some of the essential features of non-Markovian Langevin dynamics with a fluctuating barrier. Making use of the Fokker-Planck description we calculate the barrier dynamics in the steady state and non-stationary regimes. The Kramers-Grote-Hynes reactive frequency has been computed in closed form in the steady state to illustrate the strong dependence of the dynamic coupling of the system with the relaxing modes. The influence of nonequilibrium excitation of the bath modes and its relaxation on the kinetics of activation of the system mode is demonstrated. We derive the dressed time-dependent Kramers rate in the nonstationary regime in closed analytical form which exhibits strong non-exponential relaxation kinetics of the reaction co-ordinate.' --- [**[Jyotipratim Ray Chaudhuri$^{\rm a}$, Gautam Gangopadhyay$^{\rm b}$,\ Deb Shankar Ray$^{\rm a}$]{}**]{} $^{\rm a}$[**[Indian Association for the Cultivation of Science]{}**]{}\ [**[Jadavpur, Calcutta 700 032, INDIA. ]{}**]{} $^{\rm b}$[**[S. N. Bose National Centre for Basic Sciences]{}**]{}\ [**[JD Block, Sector III, Salt Lake City, Calcutta 700 091, INDIA. ]{}**]{} **[I. Introduction]{}** More than half a century ago Kramers$^{1}$ considered the problem of activated rate processes by using a model Brownian particle trapped in a one dimensional well which is separated by a barrier of finite height from a deeper well. The particle was supposed to be immersed in a medium such that the medium exerts a frictional force on the particle but at the same time thermally activates it so that the particle may gain enough energy to cross the barrier. 
Over several decades the model has been the standard paradigm in many areas of physics and chemistry$^{2}$. The Kramers problem was to find the rate of escape from the well over the barrier. The motion of the particle in the potential $V(x)$ is governed by the Langevin equation $$\ddot{x}=-\frac{1}{m}\frac{\partial V(x)}{\partial x}-\gamma \dot{x} + \frac{1}{m}F(t) \hspace{0.2cm},$$ where $\gamma$ and $F(t)$ are the damping rate and the Gaussian stationary random force provided by the thermal bath respectively. The properties of the noise can be summarized by the following two relations, $$\langle F(t)\rangle=0 \hspace{0.4cm}, \hspace{0.4cm} \langle F(0)F(t)\rangle=2 \gamma mKT \delta(t) \hspace{0.2cm}.$$ The Langevin equation (1) is equivalent to the Fokker-Planck equation for the probability distribution $p=p(x,v,t)$ \[also known as Kramers equation\], $$\frac{\partial p}{\partial t}=\frac{1}{m}\frac{\partial V(x)}{\partial x} \frac{\partial p}{\partial v}-v\frac{\partial p}{\partial x} + \gamma \left[\frac{KT}{m} \frac{\partial^{2} p}{\partial v^{2}} +\frac{\partial} {\partial v}(vp) \right] \hspace{0.2cm}.$$ Kramers$^{1}$ obtained the steady state escape rate $k$ in the limiting cases of high and low damping rates in the following form, $$k=\left\{\begin{array}{lllll} \frac{\omega_{0}\omega_{b}}{2\pi\gamma}\exp[-\frac{E_{b}}{KT}] & & & \gamma\longrightarrow\infty \\ \gamma\frac{E_{b}}{KT}\exp[-\frac{E_{b}}{KT}] & & & \gamma\longrightarrow 0 \end{array}\right. \hspace{0.2cm},$$ where $\omega_{0}$ and $\omega_{b}$ are the frequencies associated with the curvature of the potential at the bottom of the well and at the barrier top, respectively, and $E_{b}$ is the height of the barrier measured from the bottom of the well. 
Kramers has also derived an expression for ‘intermediate’ values of $\gamma$ : $$\begin{aligned} k=\frac{\omega_{0}}{2\pi\omega_{b}}\left\{\left[ \left(\frac{\gamma}{2} \right)^{2}+\omega_{b}^{2}\right]^{\frac{1}{2}}-\frac{\gamma}{2}\right\} \exp(-E_{b}/KT)\hspace{0.2cm}.\end{aligned}$$ For non-Markovian random processes, where one takes into account the short internal time scales of the system compared to those of the thermal bath, the Langevin equation (1) gets replaced by its non-Markovian counterpart$^{3,4}$, sometimes called the generalized Langevin equation (GLE); $$\ddot{x}=-\frac{1}{m}\frac{\partial V(x)}{\partial x}-\int_{0}^{t}d\tau Z(t-\tau) \dot{x}(\tau) + \frac{1}{m}R(t) \hspace{0.2cm},$$ where $R(t)$ is Gaussian but non-Markovian such that $$\langle R(t) \rangle = 0 ,\hspace{1.0cm}\langle R(0)R(t) \rangle = Z(t)mKT \hspace{0.2cm}.$$ The memory function $Z(t)$ is expressed in terms of its Fourier-Laplace components $$Z_{n}(\omega) = \int_{0}^{\infty} dt Z(t) e^{-in\omega t}$$ with $Z_{0}(\omega) = \gamma$. Based on equation (5), Adelman$^{5}$ obtained the generalized Fokker-Planck equation for a Brownian oscillator with a parabolic potential as given by $$\frac{\partial p}{\partial t} = -{\bar{\omega}}_{b}^{2} x \frac{\partial p} {\partial v} -v\frac{\partial p}{\partial x} + {\bar{\gamma}}\frac{\partial}{\partial v} (vp)+ {\bar{\gamma}} \frac{KT}{m}\frac{\partial^{2}p}{\partial v^{2}} + \frac{KT}{m}\left(\frac {{\bar{\omega}}_{b}^{2}}{\omega_{b}^{2}}-1\right)\frac{\partial^{2}p} {\partial v\partial x} \hspace{0.2cm},$$ where ${\bar{\gamma}}$ = ${\bar{\gamma}}(t)$ and ${{\bar{\omega}}_{b}^{2}} ={{\bar{\omega}}_{b}^{2}}(t)$ are now functions of time \[although bounded, they may not always possess long-time limits\] which play a decisive role in the calculation of the non-Markovian Kramers rate. Various workers have made use of the generalized Langevin equation to treat different aspects of the escape problem in the non-Markovian regime. 
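The limiting behaviors of the rate formulas above can be verified numerically. The sketch below (ours) implements Kramers' intermediate-friction expression and checks that it reproduces the $\gamma\to\infty$ branch of the high/low-damping formula, and that at $\gamma=0$ it reduces to the transition-state value $\frac{\omega_0}{2\pi}e^{-E_b/KT}$:

```python
import numpy as np

def kramers_rate(gamma, omega0, omegab, Eb_over_KT):
    """Kramers' intermediate-friction escape rate (the bracketed
    expression reduces to omegab**2/gamma for large gamma)."""
    lam = np.sqrt((gamma / 2.0)**2 + omegab**2) - gamma / 2.0
    return omega0 / (2.0 * np.pi * omegab) * lam * np.exp(-Eb_over_KT)

omega0, omegab, Eb = 1.0, 1.0, 5.0

# High-friction limit: k -> omega0*omegab/(2*pi*gamma) * exp(-Eb/KT).
gamma = 1e4
k_full = kramers_rate(gamma, omega0, omegab, Eb)
k_high = omega0 * omegab / (2.0 * np.pi * gamma) * np.exp(-Eb)
print(k_full / k_high)  # close to 1

# Zero-friction value of this formula: lam -> omegab, giving the
# transition-state rate omega0/(2*pi) * exp(-Eb/KT).
k_tst = omega0 / (2.0 * np.pi) * np.exp(-Eb)
print(kramers_rate(0.0, omega0, omegab, Eb) / k_tst)  # exactly 1
```

Note that the $\gamma \to 0$ branch of the two-sided formula (the energy-diffusion regime) is a separate result and is not recovered from the intermediate-friction expression.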
For example, Grote and Hynes$^{4}$ considered the average motion of the particle in the vicinity of the barrier governed by the GLE and found that on the average the particle is slowed down by friction; defining a reactive frequency $\lambda_{r}$, they showed that the average motion goes as $\exp(\pm\lambda_{r} t)$. The analysis of Hänggi and Mojtabai$^{6}$ on the other hand is based on the generalized Fokker-Planck equation of Adelman with a parabolic potential in the high friction limit. The generalized FP approach has also been adopted by Carmeli and Nitzan$^{7}$ to derive the expression for the steady-state escape rate in the high and low friction limits in the Markovian as well as non-Markovian regimes. For a review of these developments, see Ref. 2. While the early post-Kramers development as summarized above is largely phenomenological, an interesting advancement in the theory of activated rate processes was made when the generalized Langevin equation was realized in terms of a microscopic model which comprises a system coupled linearly to a discrete set of harmonic oscillators. Using the properties of the bath and a normal mode analysis it was shown$^{8}$ that the reactive frequency $\lambda_{r}$ defined by Grote and Hynes$^{4}$ for the average motion across the barrier is actually a renormalised effective barrier frequency. The object of the present paper is twofold: First, to consider a simple variant of the system-heat bath model$^{9,10,11}$ to simulate the activated rate processes, where the associated bath is in a nonequilibrium state. The model incorporates some of the essential features of Langevin dynamics with a fluctuating barrier which had been heuristically and phenomenologically proposed earlier on several occasions.$^{10,13-17}$ While the majority of the treatments of the phenomenological fluctuating barrier rest on the reduction of the equations to the overdamped limit$^{5,10,
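For a concrete memory kernel, the reactive frequency of Grote and Hynes$^{4}$ is obtained from the self-consistency condition $\lambda_{r}^{2}+\lambda_{r}\hat{Z}(\lambda_{r})=\omega_{b}^{2}$, where $\hat{Z}$ is the Laplace transform of the friction kernel. The sketch below (ours) solves this condition by bisection for an assumed exponential kernel $Z(t)=(\gamma/\tau)e^{-t/\tau}$, for which $\hat{Z}(\lambda)=\gamma/(1+\lambda\tau)$, and checks the Markovian limit $\tau\to 0$ against the Kramers result:

```python
import math

def grote_hynes_lambda(omegab, gamma, tau, tol=1e-12):
    """Solve lam**2 + lam*gamma/(1 + lam*tau) = omegab**2 for lam > 0.

    f(0) = -omegab**2 < 0 and f(omegab) >= 0, so the positive root
    lies in (0, omegab] and plain bisection converges."""
    f = lambda lam: lam * lam + lam * gamma / (1.0 + lam * tau) - omegab**2
    lo, hi = 0.0, omegab
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

omegab, gamma = 1.0, 2.0
# Markovian limit (tau -> 0): lam_r = sqrt((gamma/2)**2 + omegab**2) - gamma/2.
lam_markov = math.sqrt((gamma / 2.0)**2 + omegab**2) - gamma / 2.0
print(grote_hynes_lambda(omegab, gamma, tau=1e-8), lam_markov)
```

Since $\lambda_r < \omega_b$ whenever $\gamma > 0$, this makes quantitative the statement that friction slows down the average barrier crossing.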
--- abstract: 'Spectroscopic studies play a key role in the identification and analysis of interstellar ices and their structure. Some molecules have been identified within the interstellar ices either as pure, mixed, or even as layered structures. Absorption band features of water ice can significantly change with the presence of different types of impurities (CO, $\rm {CO_2}$, $\rm{CH_3OH}$, $\rm{H_2CO}$, etc.). In this work, we carried out a theoretical investigation to understand the behavior of the water band frequencies and strengths in the presence of impurities. The computational study has been supported and complemented by some infrared spectroscopy experiments aimed at verifying the effect of HCOOH, $\rm{NH_3}$, and $\rm{CH_3OH}$ on the band profiles of pure $\rm{H_2O}$ ice. Specifically, we explored the effect on the band strength of the libration, bending, bulk stretching, and free-OH stretching modes. Computed band strength profiles have been compared with our new and existing experimental results, thus pointing out that the vibrational modes of $\rm{H_2O}$ and their intensities can change considerably in the presence of impurities at different concentrations. In this study, HCOOH was found to have a strong influence on the libration, bending, and bulk stretching band profiles. In the case of NH$_3$, the free-OH stretching band disappears when the impurity concentration becomes 50%. This work will ultimately aid a correct interpretation of future detailed spaceborne observations of interstellar ices by means of the upcoming JWST mission.' author: - 'P. Gorai' - 'M. Sil' - 'A. Das' - 'B. Sivaraman' - 'S. K. Chakrabarti' - 'S. Ioppolo' - 'C. Puzzarini' - 'Z. Kanuchova' - 'A. Dawes' - 'M. Mendolicchio' - 'G. Mancini' - 'V. Barone' - 'N. Nakatani' - 'T. Shimonishi' - 'N. 
Mason' title: A Systematic Study on the Absorption Features of Interstellar Ices in Presence of Impurities --- **Keywords:** Astrochemistry, spectra, ISM: molecules, methods: numerical, experimental, infrared: band strength, interstellar ice. Introduction ============ Interstellar ices play a crucial role in the chemical enrichment of the interstellar medium (ISM). While the existence of interstellar ice was first proposed by @eddi37 in 1937, a turning point was marked, more than 40 years later, when @tiel82 introduced a combined gas-grain chemistry for the chemical evolution of the ISM. More recently, it has been demonstrated that even pre-biotic molecules can be produced in UV-irradiated astrophysically relevant ices [@woon02]. For instance, @nuev14 experimentally showed that nucleobases can be formed by UV irradiation of pyrimidine in H$_2$O-rich ice mixtures containing NH$_3$, CH$_3$OH, and CH$_4$. The composition of interstellar ices can be determined through their absorption spectra in the infrared (IR) region. Since the composition of ISM grain mantles strongly depends on physical conditions [@das08; @das10; @das11; @das16], the observed spectra can be very different in different astrophysical regions. $\rm{H_2O}$ is the most dominant ice component in dense molecular clouds [@gibb04], accounting for $60-70\%$ of the icy mantles [@whit03]. Water ice was first detected through the comparison of ground-based observations of its O-H stretching band at $3278.69$ cm$^{-1}$ ($3.05$ $\mu$m) toward Orion-KL [@gill73] and laboratory work by @irvi68. Since then, several ground-based observations were carried out to identify the signatures of water ice in different astrophysical environments, with further laboratory studies supporting such observations [@merr76; @lege79; @hage79]. 
More recently, water was detected by the space-borne Infrared Space Observatory (ISO) mission through its Short-Wavelength Spectrometer (SWS) and Long-Wavelength Spectrometer (LWS) in the mid- and far-infrared spectral region. In the mid-IR, along with its strong $\rm{O-H}$ stretching mode ($3.05$ $\mu$m), water shows weaker bending and combination bands at $1666.67$ cm$^{-1}$ (6.00 $\mu$m) and $2222.22$ cm$^{-1}$ (4.50 $\mu$m), respectively, and the libration mode at $769.23$ cm$^{-1}$ (13.00 $\mu$m), which is usually blended with the grain silicate spectroscopic features along the line of sight to star forming regions in the ISM [@gibb04]. After H$_2$, water is the second most abundant molecular species in the Universe and its gas-phase abundance in the ISM is even comparable to that of CO. Due to the high abundance of water in interstellar ices [@dart05], the amount of the other species is very often expressed in terms of the relative abundance with respect to $\rm{H_2O}$, and thus considered as impurities. Among other solid species, CO, CO$_2$, CH$_3$OH, H$_2$CO, HCOOH, NH$_3$, CH$_4$, and OCS have been unambiguously identified [@gibb04], while theoretical studies suggest that N$_2$ and O$_2$ might be trapped in the ice matrix as well [@vand93]. It should be noted that although homonuclear molecules are IR inactive, they can become IR active when embedded in ice matrices. Interstellar ice matrices are usually classified as (i) polar ices, if dominated by polar molecules like H$_2$O, CH$_3$OH, NH$_3$, OCS, H$_2$CO, HCOOH, and (ii) apolar ices, if they are dominated by molecules like CO, CO$_2$, CH$_4$, N$_2$, and O$_2$. Interstellar ices are believed to be a combination of both with a first polar (water-rich) layer and an apolar CO-dominated layer deposited on top of it during the catastrophic freeze-out of CO molecules in the cold core of molecular clouds [@boog15]. 
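The band positions above are quoted both in wavenumbers and in microns. A quick conversion check (our addition) using the standard relation $\tilde{\nu}\,[\mathrm{cm^{-1}}] = 10^{4}/\lambda\,[\mu\mathrm{m}]$; the helper name is ours:

```python
# Wavenumber (cm^-1) <-> wavelength (micron) conversion used throughout:
# nu[cm^-1] = 10^4 / lambda[micron]
def micron_to_wavenumber(lam_um):
    return 1.0e4 / lam_um

# The water band positions quoted above are mutually consistent:
assert round(micron_to_wavenumber(3.05), 2) == 3278.69    # O-H stretch
assert round(micron_to_wavenumber(6.00), 2) == 1666.67    # bending
assert round(micron_to_wavenumber(4.50), 2) == 2222.22    # combination
assert round(micron_to_wavenumber(13.00), 2) == 769.23    # libration
```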
Infrared spectroscopy is a suitable technique for identifying interstellar species, particularly in condensed phases. However, it requires that vibrations be IR active, a condition which is fulfilled when the dipole moment changes during vibration. The IR spectrum of a water cluster is one of the primary tools to analyze the features of the aggregation processes in a water matrix [@ohno05; @bouw07; @ober07]. Moreover, four vibrational modes of water, namely libration, bending, bulk stretching, and free-OH stretching, are essential to obtain relevant information about the water cluster itself in various astrophysical environments [@gera95; @ohno05; @bouw07; @ober07]. However, there are some difficulties for the observation of interstellar ices in the mid-IR, such as the need for a background illuminating source (e.g., a protostar or a field star) for absorption. Furthermore, peak positions, line widths, and intensities of molecular ice features need to be known and compared to laboratory spectra, which further depend on ice temperature, crystal structure of the ice, and mixing or layering with other species [@ehre97; @schu99; @cook16]. As a result, only a very limited number of species have been unambiguously detected in interstellar ices. CO is routinely observed from various ground-based facilities. In the solid phase, its abundance may vary from $3\%$ to $20\%$ relative to water ice. CO absorbance shows both polar and apolar band profiles. @soif79 reported the detection of the fundamental vibrational band of CO at 4.61 $\mu$m (2169.20 cm$^{-1}$) in absorption toward W33A, based on the laboratory work of @mant75. The corresponding band profile consists of a broad (polar) component peaking at $2136.75$ cm$^{-1}$ ($4.68$ $\mu$m) and a narrow (non-polar) component peaking at $2141.33$ cm$^{-1}$ ($4.67$ $\mu$m) [@chia95; @chia98]. 
$\rm{CO_2}$ was detected in absorption at $657.89$ cm$^{-1}$ (15.20 $\mu$m) toward several IRAS sources by @dhen89, based on their laboratory work. The presence of $\rm{CO_2}$ in ice mantles was found in very few astrophysical objects before the launch of ISO [@dart05], which made it possible to firmly establish the ubiquitous nature of CO$_2$ [@degr96; @guer96; @gera99]. In the ice phase, the CH$_3$OH abundance varies between 5% and 30% with respect to $\rm{H_2O}$. Its abundance can be even lower in some sources, such as Sgr A and Elias 16 [@gibb00].
--- abstract: | This paper considers two important questions in the well-studied theory of graphs that are $F$-saturated. A graph $G$ is called $F$-saturated if $G$ does not contain a subgraph isomorphic to $F$, but the addition of any edge creates a copy of $F$. We first resolve the most fundamental question of minimizing the number of cliques of size $r$ in a $K_s$-saturated graph for all sufficiently large numbers of vertices, confirming a conjecture of Kritschgau, Methuku, Tait, and Timmons. We also go further and prove a corresponding stability result. Next we minimize the number of cycles of length $r$ in a $K_s$-saturated graph for all sufficiently large numbers of vertices, and classify the extremal graphs for most values of $r$, answering another question of Kritschgau, Methuku, Tait, and Timmons for most $r$. We then move on to a central and longstanding conjecture in graph saturation made by Tuza, which states that for every graph $F$, the limit $\lim_{n \rightarrow \infty} \frac{\operatorname{sat}(n, F)}{n}$ exists, where $\operatorname{sat}(n, F)$ denotes the minimum number of edges in an $n$-vertex $F$-saturated graph. Pikhurko made progress in the negative direction by considering families of graphs instead of a single graph, and proved that there exists a graph family $\mathcal{F}$ of size $4$ for which $\lim_{n \rightarrow \infty} \frac{\operatorname{sat}(n, \mathcal{F})}{n}$ does not exist (for a family of graphs $\mathcal{F}$, a graph $G$ is called $\mathcal{F}$-saturated if $G$ does not contain a copy of any graph in $\mathcal{F}$, but the addition of any edge creates a copy of a graph in $\mathcal{F}$, and $\operatorname{sat}(n, \mathcal{F})$ is defined similarly). We make the first improvement in 15 years by showing that there exist infinitely many graph families of size $3$ for which this limit does not exist. 
author: - 'Debsoumya Chakraborti[^1]  and Po-Shen Loh[^2]' title: 'Minimizing the numbers of cliques and cycles of fixed size in an $F$-saturated graph' --- Introduction ============ Extremal graph theory focuses on finding the extremal values of certain parameters of graphs under certain natural conditions. One of the most well-studied conditions is $F$-freeness. For graphs $G$ and $F$, we say that $G$ is $F$-free if $G$ does not contain a subgraph isomorphic to $F$. This gives rise to the most fundamental question of finding the Turán number $\operatorname{ex}(n, F)$, which asks for the maximum number of edges in an $n$-vertex $F$-free graph. The asymptotic answer is known for most graphs $F$, with the exception of bipartite $F$, where the most intricate and unsolved cases appear (see, e.g., [@FS] and [@S] for nice surveys). Recently, Alon and Shikhelman [@AS] introduced a natural generalization of the Turán number. They systematically studied $\operatorname{ex}(n, H, F)$, which denotes the maximum number of copies of $H$ in an $n$-vertex $F$-free graph. Note that the case $H = K_2$ is the standard Turán problem, i.e., $\operatorname{ex}(n, K_2, F) = \operatorname{ex}(n, F)$. While the Turán number asks for the maximum number of edges in an $F$-free graph, another very classical problem concerns the minimum number of edges in an $F$-free graph with a fixed number of vertices. This problem is not interesting as stated, because the empty graph is the obvious answer. It becomes meaningful under the additional condition that the addition of any edge to $G$ creates a copy of $F$. With this additional condition, we say that $G$ is $F$-saturated. This condition makes the edge minimization problem very interesting, and this area of research is commonly known as graph saturation. The foundational result in this area, due to Erdős, Hajnal, and Moon, is the following. 
\[Erdős, Hajnal, and Moon 1964\] \[EHM\] For every $n \ge s \ge 2$, the saturation number $$\operatorname{sat}(n, K_s) = (s -2)(n-s+2) + \binom{s-2}{2}.$$ Furthermore, there is a unique $K_s$-saturated graph on $n$ vertices with $\operatorname{sat}(n, K_s)$ edges: the join of a clique with $s-2$ vertices and an independent set with $n-s+2$ vertices. The *join* $G_1 \ast G_2$ of two graphs $G_1$ and $G_2$ is obtained by taking the disjoint union of $G_1$ and $G_2$ and adding all the edges between them. Erdős, Hajnal, and Moon proved Theorem \[EHM\] by using a clever induction argument. Graph saturation has been studied extensively since Theorem \[EHM\] appeared half a century ago (see, e.g., [@FFS] for a very informative survey). Alon and Shikhelman’s generalization of the Turán number motivated Kritschgau, Methuku, Tait, and Timmons [@KMTT] to start the systematic study of the function $\operatorname{sat}(n, H, F)$, which denotes the minimum number of copies of $H$ in an $n$-vertex $F$-saturated graph. Here again note that $\operatorname{sat}(n, K_2, F) = \operatorname{sat}(n, F)$. Historically, a natural generalization of counting the number of edges ($K_2$) is to count the number of cliques ($K_r$) of a fixed size; see, e.g., [@B76], [@E], and [@Z], where the authors answered the generalized extremal question of finding the maximum number of $K_r$’s in a $K_s$-free graph with a fixed number of vertices. Towards generalizing Theorem \[EHM\] in a similar fashion, Kritschgau, Methuku, Tait, and Timmons proved the following lower and upper bounds, which differ by a factor of about $r-1$, and conjectured that the upper bound (achieved by the same construction given in Theorem \[EHM\]) is correct. 
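As an illustrative sketch (our addition, not from the paper), the extremal construction and both parts of the saturation property can be checked by brute force for small parameters; the helper names `ehm_graph`, `has_Ks`, and `sat_Ks` are ours:

```python
from itertools import combinations
from math import comb

def ehm_graph(n, s):
    """Join of a clique on s-2 vertices with an independent set on n-s+2
    vertices: the unique extremal K_s-saturated graph of the theorem."""
    clique, indep = range(s - 2), range(s - 2, n)
    edges = set(combinations(clique, 2))
    edges |= {(u, v) for u in clique for v in indep}
    return edges

def has_Ks(edges, n, s):
    """Brute-force test for a clique on s vertices."""
    adj = lambda u, v: (min(u, v), max(u, v)) in edges
    return any(all(adj(u, v) for u, v in combinations(vs, 2))
               for vs in combinations(range(n), s))

def sat_Ks(n, s):
    """The saturation number from the theorem."""
    return (s - 2) * (n - s + 2) + comb(s - 2, 2)

n, s = 9, 4
E = ehm_graph(n, s)
assert len(E) == sat_Ks(n, s)          # edge count matches the formula
assert not has_Ks(E, n, s)             # the graph is K_s-free
for e in combinations(range(n), 2):    # adding any missing edge creates a K_s
    if e not in E:
        assert has_Ks(E | {e}, n, s)
```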
\[Kritschgau, Methuku, Tait, and Timmons 2018\] \[tait\] For every $s > r \ge 3$, there exists a constant $n_{r,s}$ such that for all $n \ge n_{r,s}$, $$\begin{aligned} \max \left\{\frac{\binom{s-2}{r-1}}{r-1} \cdot n - 2 \binom{s-2}{r-1}, \frac{\binom{s-2}{r-1} + \binom{s-3}{r-2}}{r} \cdot n\right\} &\le \operatorname{sat}(n, K_r, K_s) \\ &\le (n - s + 2) \binom{s-2}{r-1} + \binom{s-2}{r} .\end{aligned}$$ Our first main contribution confirms their conjecture for sufficiently large $n$ by showing that the upper bound is indeed the correct answer. We also show that the natural construction is the unique extremal graph for this generalized saturation problem for large enough $n$. Furthermore, we prove a corresponding stability result for sufficiently large $n$ which shows that even if we allow up to some $cn$ more copies of $K_r$ than $\operatorname{sat}(n, K_r, K_s)$ in an $n$-vertex $K_s$-saturated graph, the extremal graph will still be the same and unique. It is worth noting that there are relatively few stability results in the area of graph saturation, essentially only [@AFGS] by Amin, Faudree, Gould, and Sidorowicz, and [@BFP] by Bohman, Fonoberova, and Pikhurko. In the notation of joins, the extremal graph in our problem is $K_{s-2} \ast \overline{K}_{n-s+2}$, i.e., the join of a clique on $s-2$ vertices and an independent set on $n-s+2$ vertices.
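The upper bound in the theorem is exactly the number of $K_r$'s in the join $K_{s-2} \ast \overline{K}_{n-s+2}$: either all $r$ vertices lie in the clique part, or $r-1$ do and one lies in the independent set. A brute-force check of this count (our addition; helper names are ours):

```python
from itertools import combinations
from math import comb

def count_Kr_in_join(n, s, r):
    """Brute-force count of K_r's in K_{s-2} * complement(K_{n-s+2}):
    independent-set vertices are pairwise non-adjacent, every other pair
    of vertices is adjacent."""
    clique = set(range(s - 2))
    adj = lambda u, v: u in clique or v in clique
    return sum(all(adj(u, v) for u, v in combinations(vs, 2))
               for vs in combinations(range(n), r))

def upper_bound(n, s, r):
    """The conjectured (and, per this paper, correct) value."""
    return (n - s + 2) * comb(s - 2, r - 1) + comb(s - 2, r)

for n, s, r in [(10, 5, 3), (9, 6, 4), (12, 5, 3)]:
    assert count_Kr_in_join(n, s, r) == upper_bound(n, s, r)
```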
--- abstract: 'We study the effect of starlight from the first stars on the ability of other minihaloes in their neighbourhood to form additional stars. The first stars in the $\Lambda$CDM universe are believed to have formed in minihaloes of total mass $\sim 10^{5-6}\,M_\odot$ at redshifts $z\ga 20$, when molecular hydrogen ($\rm H_2$) formed and cooled the dense gas at their centres, leading to gravitational collapse. Simulations suggest that the Population III (Pop III) stars thus formed were massive ($\sim 100\,M_\odot$) and luminous enough in ionizing radiation to cause an ionization front (I-front) to sweep outward, through their host minihalo and beyond, into the intergalactic medium. Our previous work suggested that this I-front was trapped when it encountered other, nearby minihaloes, and that it failed to penetrate the dense gas at their centres within the lifetime of the Pop III stars ($\la 3\,\rm Myrs$). The question of what the dynamical consequences were for these target minihaloes, of their exposure to the ionizing and dissociating starlight from the Pop III star requires further study, however. Towards this end, we have performed a series of detailed, 1D, radiation-hydrodynamical simulations to answer the question of whether star formation in these surrounding minihaloes was triggered or suppressed by radiation from the first stars. We have varied the distance to the source (and, hence, the flux) and the mass and evolutionary stage of the target haloes to quantify this effect. We find: (1) trapping of the I-front and its transformation from R-type to D-type, preceded by a shock front; (2) photoevaporation of the ionized gas (i.e. all gas originally located outside the trapping radius); (3) formation of an $\rm H_2$ precursor shell which leads the I-front, stimulated by partial photoionization; and (4) the shock-induced formation of $\rm H_2$ in the minihalo neutral core when the shock speeds up and partially ionizes the gas. 
The fate of the neutral core is mostly determined by the response of the core to this shock front, which leads to molecular cooling and collapse that, when compared to the same halo without external radiation, is either: (a) expedited, (b) delayed, (c) unaltered, or (d) reversed or prevented, depending upon the flux (i.e. distance to the source) and the halo mass and evolutionary stage. Roughly speaking, most haloes that were destined to cool, collapse, and form stars in the absence of external radiation are found to do so even when exposed to the first Pop III star in their neighbourhood, while those that would not have done so are still not able to. A widely held view that the first Pop III stars must exert either positive or negative feedback on the formation of the stars in neighbouring minihaloes should, therefore, be revisited.' author: - | Kyungjin Ahn[^1] and Paul R. Shapiro[^2]\ Department of Astronomy, The University of Texas at Austin, 1 University Station C1400, Austin, TX 78712, USA title: 'Does Radiative Feedback by the First Stars Promote or Prevent Second Generation Star Formation?' --- cosmology: large-scale structure of universe – cosmology: theory – early universe – stars: formation – galaxies: formation Introduction {#sec:Secondstar-Intro} ============ Cosmological minihaloes at high redshift – i.e. dark-matter dominated haloes with virial temperatures $T_{\rm vir} < 10^4 \,\rm K$, with masses above the Jeans mass in the intergalactic medium (IGM) before reionization ($10^4 \la M/M_\odot \la 10^8$) – are believed to have been the sites of the first star formation in the universe. To form a star, the gas inside these haloes must first have cooled radiatively and compressed, so that the baryonic component could become self-gravitating and gravitational collapse could ensue. 
For the neutral gas of H and He at $T < 10^4\,\rm K$ inside minihaloes, this requires that a sufficient trace abundance of $\rm H_2$ molecules formed to cool the gas by atomic collisional excitation of the rotational-vibrational lines of $\rm H_2$ . The formation of this trace abundance of $\rm H_2$ proceeds via the creation of intermediaries, $\rm H^-$ or $\rm H_{2}^{+}$, which act as catalysts, which in turn requires the presence of a trace ionized fraction, in the following two-step gas-phase reactions (see, e.g., @1968ApJ...154..891P [@1967Natur.216..976S; @1984ApJ...280..465L; @1987ApJ...318...32S]; @1994ApJ...427...25S, henceforth, “SGB94”; ): $$\begin{aligned} &&{\rm H + e^- \rightarrow H^- + \gamma},\nonumber \\ &&{\rm H^- + H \rightarrow H_2 + e^-}, \label{eq:solomon}\end{aligned}$$ and $$\begin{aligned} &&{\rm H + H^+ \rightarrow H_{2}^{+} + \gamma},\nonumber \\ &&{\rm H_{2}^{+} + H \rightarrow H_2 + H^+}. \label{eq:solomon2}\end{aligned}$$ Unless there is a strong destruction mechanism for $\rm H^-$ (e.g. cosmic microwave background at $z\ga 100$), the former (equation \[eq:solomon\]) is generally the dominant process for $\rm H_2$ formation. Gas-dynamical simulations of the Cold Dark Matter (CDM) universe suggest that the first stars formed in this way when the dense gas at the centres of minihaloes of mass $M \sim 10^{5 - 6}\, M_\odot$ cooled and collapsed gravitationally at redshifts $z \ga 20$ (e.g. @2000ApJ...540...39A [@2002Sci...295...93A]; @1999ApJ...527L...5B [@2002ApJ...564...23B]; @2003ApJ...592..645Y; @2001ApJ...548..509M [@2003MNRAS.338..273M]; @2006astro.ph..6106Y). This work and others further suggest that these stars were massive ($M_* \ga 100 \,M_\odot$), hot ($T_{\rm eff} \simeq 10^5 \,\rm K$), and short-lived ($t_* \la 3 \,\rm Myrs$), thus copious emitters of ionizing and dissociating radiation. 
These stars constitute the Population III (Pop III) stars, or zero metallicity stars, which are believed to have exerted a strong, radiative feedback on their environment. The details of this feedback and even the overall sign (i.e. negative or positive) are poorly understood. Once the ionizing radiation escaped from its halo of origin, it created H II regions in the IGM, beginning the process of cosmic reionization. The photoheating which accompanies this photoionization raises the gas pressure in the IGM, thereby preventing baryons from collapsing gravitationally out of the IGM into new minihaloes when they form inside the H II regions, an effect known as “Jeans-mass filtering” (SGB94; @1998MNRAS.296...44G; @2003MNRAS.346..456O). Inside the H II regions, whenever the I-fronts encounter pre-existing minihaloes, those minihaloes are subject to photoevaporation (@2004MNRAS.348..753S [henceforth, SIR]; @2005MNRAS.361..405I [henceforth, ISR]). A strong background of UV photons in the Lyman-Werner (LW) bands of $\rm H_2$ also builds up which can dissociate molecular hydrogen inside minihaloes even in the neutral regions of the IGM, thereby disabling further collapse and, thence, star formation (e.g. @1999ApJ...518...64O; @2000ApJ...534...11H; @2001ApJ...546..635O). This conclusion changes, however, if some additional sources of partial ionization existed to stimulate $\rm H_2$ formation without heating the gas to the usually high temperature of fully photoionized gas ($\sim 10^4 \,\rm K$) at which collisional dissociation occurs, such as X-rays from miniquasars [@1996ApJ...467..522H] or if stellar sources create a partially-ionized boundary layer outside of intergalactic H II regions [@2001ApJ...560..580R]. Such positive feedback effects, however, may have been only temporary, because photoheating would soon become effective as background flux builds up over time [@2006MNRAS.368.1301M]. The study of feedback effects has been limited mainly by technical difficulties. 
@2000ApJ...534...11H studied the feedback of LW, ultraviolet (UV), and X-ray backgrounds on minihaloes without allowing hydrodynamic evolution. @2001ApJ...560..580R studied the radiative feedback effect of stellar sources only on a static, uniform IGM. @2002ApJ...575...33R [@2002ApJ...575...49R] studied stellar feedback more self-consistently by performing cosmological hydrodynamic simulations with radiative transfer, but the resolution of these simulations is not adequate for resolving
--- abstract: 'In this paper we study the smoothness properties of solutions to the KP-I equation. We show that the equation’s dispersive nature leads to a gain in regularity for the solution. In particular, if the initial data $\phi$ possesses certain regularity and sufficient decay as $x \rightarrow \infty$, then the solution $u(t)$ will be smoother than $\phi$ for $0 < t \leq T$ where $T$ is the existence time of the solution.' **Keywords:** KP-I equation, gain in regularity, Sobolev space. Introduction ============ The Kadomtsev-Petviashvili (KP) equations arise as two-dimensional generalizations of the KdV equation which account for weak transverse effects. Now known as the KP-I and KP-II equations, these equations are given by $$u_{tx} + u_{xxxx} + u_{xx} + \epsilon u_{yy} + (uu_x)_x = 0$$ where $\epsilon = \mp 1$. In addition to being used as a model for the evolution of surface waves [@AC], the KP equation has also been proposed as a model for internal waves in straits or channels of varying depth and width [@Sn], [@DLW]. In this paper we consider smoothness properties of solutions to the KP-I equation $$\begin{aligned} \label{e101}& & (u_{t} + u_{xxx} + u_{x} + u\,u_{x})_{x} - u_{yy} =0,\qquad (x,\,y)\in\mathbb{R}^{2},\quad t\in\mathbb{R}\\ \label{e102}& & u(x,\,y,\,0)=\phi(x,\,y).\end{aligned}$$ Certain results concerning the Cauchy problem for the KP-I equation include the following. Ukai [@Uk] proved local well-posedness for both the KP-I and KP-II equations for initial data in $H^s(\mathbb R^2)$, $s \geq 3$, while Saut [@Sa] proved some local existence results for generalized KP equations. More recently, results concerning global well-posedness for the KP-I equation have appeared; in particular, see the works of Kenig [@Ke] and Molinet, Saut, and Tzvetkov [@MST]. 
A number of results concerning gain of regularity for various nonlinear evolution equations have appeared. This paper uses the ideas of Cohen [@Co], Kato [@Ka], Craig and Goodman [@CG], and Craig, Kappeler, and Strauss [@CKS]. Cohen considered the KdV equation, showing that “box-shaped” initial data $\phi \in L^2(\mathbb R)$ with compact support lead to a solution $u(t)$ which is smooth for $t > 0$. Kato generalized this result, showing that if the initial data $\phi$ are in $L^2((1+e^{\sigma x})\,dx)$, the unique solution $u(t)$ belongs to $C^\infty(\mathbb R)$ for $t > 0$. Craig, Kappeler, and Strauss expanded on the ideas from these earlier papers in their treatment of highly generalized KdV equations. Other results on gain of regularity for linear and nonlinear dispersive equations include the works of Hayashi, Nakamitsu, and Tsutsumi [@HNT1], [@HNT2], Hayashi and Ozawa [@HO], Constantin and Saut [@CS], Ponce [@Po], Ginibre and Velo [@GV], Kenig, Ponce and Vega [@KPV], Vera [@thesi1], [@Ve] and Ceballos, Sepulveda and Vera [@CSV]. In studying propagation of singularities, it is natural to consider the bicharacteristics associated with the differential operator. For the KdV equation, it is known that the bicharacteristics all point to the left for $t > 0$, and all singularities travel in that direction. Kato [@Ka] makes use of this uniform dispersion, choosing a nonsymmetric weight function decaying as $x \rightarrow -\infty$ and growing as $x \rightarrow \infty$. In [@CKS], Craig, Kappeler and Strauss also make use of a unidirectional propagation of singularities in their results on infinite smoothing properties for generalized KdV-type equations for which $f_{u_{xxx}} \geq c > 0$. For the two-dimensional case, Levandosky [@Le1] proves smoothing properties for the KP-II equation. This result makes use of the fact that the bicharacteristics all point into one half-plane. 
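The leftward propagation invoked here can be read off from the linearized KdV equation; a short check (our addition) using the standard plane-wave ansatz:

```latex
% Linearized KdV and its dispersion relation:
u_t + u_{xxx} = 0, \qquad u = e^{i(\xi x - \omega t)}
\;\Longrightarrow\; -i\omega + (i\xi)^3 = 0
\;\Longrightarrow\; \omega(\xi) = -\xi^{3}.
% Group velocity:
\omega'(\xi) = -3\xi^{2} \le 0 .
```

Since the group velocity is nonpositive for every frequency $\xi$, wave packets (and hence singularities) travel only to the left for $t > 0$, which is exactly the uniform dispersion exploited by Kato's one-sided weight.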
Subsequently, in [@Le2], Levandosky considers generalized KdV-type equations in two-dimensions, proving that if all bicharacteristics point into one half-plane, an infinite gain in regularity will occur, assuming sufficient decay at infinity of the initial data. In this paper, we address the question regarding gain in regularity for the KP-I equation. Unlike the KP-II equation, the bicharacteristics for the KP-I equation are not restricted to a half-plane but span all of $\mathbb R^2$. As a result, singularities may travel in all of $\mathbb R^2$. However, here we prove that if the initial data decays sufficiently as $x \rightarrow \infty$, then we will gain a finite number of derivatives in $x$ (as well as mixed derivatives). In order to state a special case of our gain in regularity theorem, we first introduce certain function spaces we will be using.\ We define $$\begin{aligned} \label{e106}X^{0}(\mathbb{R}^{2})= \left\{u:\;u,\;\xi^{3}\widehat{u},\;\frac{\eta^{2}}{\xi}\,\widehat{u}\in L^{2}(\mathbb{R}^{2})\right\}\end{aligned}$$ equipped with the natural norm. On the space $$\begin{aligned} \label{e107}\widetilde{X}^{0}(\mathbb{R}^{2})= \left\{u:\;\frac{1}{\xi}\,\widehat{u}(\xi,\,\eta)\in L^{2}(\mathbb{R}^{2})\right\}\end{aligned}$$ we define the operator $\partial_{x}^{-1}$ by $\widehat{\partial_{x}^{-1}u}\equiv\frac{1}{i\,\xi}\,\widehat{u}.$ Therefore, in particular, we can write the norm of $X^{0}(\mathbb{R}^{2})$ as $$\begin{aligned} \label{e108}||u||_{X^{0}(\mathbb{R}^{2})}^{2}=\int_{\mathbb{R}^{2}}[\,u^{2} + u_{xxx}^{2} + (\partial_{x}^{-1}u_{yy})^{2}\,]\,dx\,dy<+\infty\end{aligned}$$ On this space of functions $X^{0}(\mathbb{R}^{2}),$ it makes sense to rewrite - as $$\begin{aligned} \label{e109}& & u_{t} + u_{xxx} + u_{x} + u\,u_{x} - \partial_{x}^{-1}u_{yy} =0,\qquad (x,\,y)\in\mathbb{R}^{2},\quad t\in\mathbb{R}\\ \label{e110}& & u(x,\,y,\,0)=\phi(x,\,y)\end{aligned}$$ and consider weak solutions $u\in X^{0}(\mathbb{R}^{2}).$\ \ [*Definition. 
*]{} Let N be a positive integer. We define the space of functions $X^{N}(\mathbb{R}^{2})$ as follows $$\begin{aligned} \label{e111}X^{N}=\left\{u:\;u\in L^{2}(\mathbb{R}^{2}),\;{\cal F}^{-1}(\xi^{3}\,\widehat{u})\in H^{N}(\mathbb{R}^{2}),\,{\cal F}^{-1}\left(\frac{\eta^{2}}{\xi}\,\widehat{u}\right)\in H^{N}(\mathbb{R}^{2})\right\}\end{aligned}$$ equipped with the norm $$\begin{aligned} \label{e112}||u||_{X^{N}(\mathbb{R}^{2})}^{2} = \int_{\mathbb{R}^{2}}\left(u^{2} + \sum_{|\alpha|\leq N}[\,(\pa u_{xxx})^{2} + (\pa \partial_{x}^{-1}u_{yy})^{2}\,]\right)\,dx\,dy<+\,\infty\end{aligned}$$ where $\alpha=(\alpha_1,\,\alpha_2)\in\mathbb{Z
--- abstract: 'We investigate Monte Carlo Markov Chain (MCMC) procedures for the random sampling of some one-dimensional lattice paths with constraints, for various constraints. We will see that an approach inspired by *optimal transport* allows us to efficiently bound the mixing time of the associated Markov chain. The algorithm is robust and easy to implement, and samples an “almost” uniform path of length $n$ in $n^{3+{\varepsilon}}$ steps. This bound makes use of a certain *contraction property* of the Markov chain, and is also used to derive a bound for the running time of Propp-Wilson’s *Coupling From The Past* algorithm.' author: - Lucas Gerin title: 'Random sampling of lattice paths with constraints, via transportation' --- Lattice Paths with Constraints ============================== Lattice paths arise in several areas in probability and combinatorics, either in their own right (as realizations of random walks, or because of their interesting combinatorial properties: see [@Ban] for the latter) or because of fruitful bijections with various families of trees, tilings, and words. The problem we discuss here is to efficiently sample uniform (or *almost* uniform) paths in a family of paths with constraints. There are several reasons for which one may want to generate uniform samples of lattice paths: to formulate and test conjectures on the behaviour of a large “typical” path, or to test algorithms running on paths (or words, trees, ...). In view of random sampling, it is often very efficient to make use of the combinatorial structure of the family of paths under study. In some cases, this yields linear-time (in the length of the path) *ad hoc* algorithms [@MBM; @Duc]. However, the nature of the constraints sometimes makes such an approach impossible, and there is a need for robust algorithms that work in the absence of combinatorial knowledge. Luby and Randall, among others, studied a Markov chain for sampling dimer configurations on the hexagon. 
This was motivated by a classical (and simple, see illustrations in [@Des; @Wilson]) correspondence between dimer configurations on an hexagon, rhombus tilings of this hexagon, and families of non-intersecting lattice paths. As a first step for the analysis of this chain, Wilson [@Wilson] introduces a peak/valley Markov chain (see details below) over some simple lattice paths and obtains sharp bounds for its mixing time. We present in this paper a variant of this Markov chain, which is valid for various constraints and whose analysis is simple. It generates an “almost” uniform path of length $n$ in $n^{3+{\varepsilon}}$ steps; this bound makes use of a certain *contraction property* of the chain. Apart from the algorithmic aspect, the peak/valley process seems to have physical relevance as a simplified model for the evolution of *quasicrystals* (see a discussion on a related process in the introduction of [@Des]). In particular, the mixing time of this Markov chain seems to have some importance. Notations {#notations .unnumbered} --------- ![The lattice path $S=(1,2,0,1,2,3,1)$ associated with the word $(1,1,-2,1,1,1,-2)$. ](ExempleChemin.eps){width="40mm"} We fix three integers $n,a,b>0$, and consider the paths of length $n$, with steps $+a/-b$, that is, the words of $n$ letters taken in the alphabet ${\left\{a,-b\right\}}$. Each word $(s_1,\dots,s_n)$ is identified with the lattice path $S=(S_1,\dots,S_n)$ of its partial sums, $S_i=s_1+\dots+s_i$. To illustrate the methods and the results, we focus on some particular sub-families ${\mathcal{A}_{n}}\subset {\left\{a,-b\right\}}^n$: 1. Discrete *meanders*, denoted by ${\mathcal{M}_{n}}$, which are simply the non-negative paths: $S\in{\mathcal{M}_{n}}$ if for any $i\leq n$ we have $S_i\geq 0$. This example is mainly illustrative, because the combinatorial properties of meanders make it possible to perform exact sampling very efficiently (an algorithm running in $\mathcal{O}(n^{1+{\varepsilon}})$ steps is given in [@MBM], an order that we cannot get in the present paper). 2. Paths with *walls*. 
A path with a wall of height $h$ between $r$ and $s$ is a path such that $S_i\geq h$ for any $r\leq i\leq s$ (see Fig. \[Fig:CheminMur\] for an example). These are denoted by ${\mathcal{W}_{n}}={\mathcal{W}_{n}}(h,r,s)$; they appear in statistical mechanics as toy models for the analysis of random interfaces and polymers (see examples in [@Walls]). 3. *Excursions*, denoted by ${\mathcal{E}_{n}}$, which are non-negative paths such that $S_n=0$. In the case $a=b=1$, these correspond to well-parenthesized words and are usually called Dyck words. In the general case, Duchon [@Duc] proposes a rejection algorithm which generates excursions in linear time. 4. *Culminating paths* of size $n$, denoted further by ${\mathcal{C}_{n}}$, which are non-negative paths whose maximum is attained at the last step: for any $i$ we have $0\leq S_i\leq S_n$. They have been introduced in [@MBM], motivated in particular by the analysis of some algorithms in bioinformatics. ![A path of steps $+1/-2$, with a wall of height $h=6$ between $i=10$ and $j=15$.[]{data-label="Fig:CheminMur"}](CheminMur.eps){width="65mm"} Sampling with Markov chains {#Sec:Sampling} =========================== We will consider Markov chains in a family ${\mathcal{A}_{n}}$, where all the probability transitions are symmetric. For a modern introduction to Markov chains, we refer to [@Hagg]. Hence the uniform distribution $\pi$ over ${\mathcal{A}_{n}}$ satisfies detailed balance: the equality $\pi(i) p_{i,j}= \pi(j) p_{j,i}$ holds for any two vertices $i,j$, so $\pi$ is stationary for the chain. The chain therefore converges to the uniform distribution provided it is irreducible. This lemma already provides us with a scheme for sampling an almost uniform path in ${\mathcal{A}_{n}}$, without knowing much about ${\mathcal{A}_{n}}$.
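Concretely, such a symmetric-chain sampler can be sketched in a few lines of Python. This is a simplified variant of the flip dynamics defined in the sequel (the boundary rule and the meander constraint test below are our own simplifications, not the paper's exact operator):

```python
import random

def partial_sums(word):
    """Identify a word (s_1, ..., s_n) with its path S_i = s_1 + ... + s_i."""
    out, h = [], 0
    for s in word:
        h += s
        out.append(h)
    return out

def is_meander(word):
    """Membership test for discrete meanders: all partial sums non-negative."""
    return all(S >= 0 for S in partial_sums(word))

def flip(word, i, up, a, b):
    """Peak/valley flip at position i (0-based); the last position toggles s_n."""
    w = list(word)
    if i < len(w) - 1:
        if up and (w[i], w[i + 1]) == (-b, a):
            w[i], w[i + 1] = a, -b
        elif not up and (w[i], w[i + 1]) == (a, -b):
            w[i], w[i + 1] = -b, a
    else:
        w[i] = a if up else -b  # boundary rule: pretend a fictitious step follows
    return tuple(w)

def sample_path(n, n_steps, constraint, a=1, b=1, rng=random):
    """Symmetric chain: propose a flip, reject moves that leave the family."""
    w = tuple([a] * n)  # the all-up word is a meander for any a, b > 0
    for _ in range(n_steps):
        cand = flip(w, rng.randrange(n), rng.random() < 0.5, a, b)
        if constraint(cand):
            w = cand
    return w
```

Since all proposal probabilities are symmetric and rejected moves keep the chain in place, the transition matrix restricted to the family remains symmetric, so the uniform distribution is stationary, as stated above.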
To do so, we define a “flip” operator on paths, that is, an operator $$\begin{array}{r c c c} \phi: & {\mathcal{A}_{n}}\times {\left\{1,\dots,n\right\}}\times {\left\{\downarrow,\uparrow\right\}}\times {\left\{+,-\right\}} &\to &{\mathcal{A}_{n}}\\ & (\mathbf{S},i,{\varepsilon},\delta) &\mapsto & \phi(\mathbf{S},i,{\varepsilon},\delta). \end{array}$$ When $i\in{\left\{1,2,\dots,n-1\right\}}$ the path $\phi(\mathbf{S},i,\uparrow,\delta)$ is defined as follows: if $(s_i,s_{i+1})=(-b,a)=$ ![image](downup.eps){width="7mm"} then these two steps are changed into $(a,-b)=$ ![image](updown.eps){width="7mm"}. The $n-2$ other steps remain unchanged. If $(s_i,s_{i+1})\neq (-b,a)$ then $\phi(\mathbf{S},i,\uparrow,\delta)=\mathbf{S}$. Note that in the case $i\in{\left\{1,2,\dots,n-1\right\}}$ the value of $\phi$ does not depend on $\delta$. For the case $i=n$, if $\delta=+$, we define $\phi(\mathbf{S},n,{\varepsilon},+)$ as before, as if there were an additional $+a$ step at the end of the path. For instance, in the case where $s_n=-b$, in the path $\phi(\mathbf{S},n,\uparrow,+)$ the $n$-th step is turned into $a$. The path $\phi(\mathbf{S},i,\downarrow,\delta)$ is defined similarly: if $i<n$ and $(
--- abstract: 'A loop whose inner mappings are automorphisms is an *automorphic loop* (or *A-loop*). We characterize commutative (A-)loops with middle nucleus of index $2$ and solve the isomorphism problem. Using this characterization and certain central extensions based on trilinear forms, we construct several classes of commutative A-loops of order a power of $2$. We initiate the classification of commutative A-loops of small orders and also of order $p^3$, where $p$ is a prime.' --- A *loop* $(Q,\cdot)$ is a set $Q$ with a binary operation $\cdot$ and a neutral element such that all left translations $L_x:y\mapsto xy$ and all right translations $R_x:y\mapsto yx$ are bijections of $Q$. The *left division* is then defined by $x\ld y = L_x^{-1}(y)$. To reduce the number of parentheses, we adopt the following convention for term evaluation: $\ld$ is less binding than juxtaposition, and $\cdot$ is less binding than $\ld$. For instance $xy\ld u\cdot v\ld w$ is parsed as $((xy)\ld u)(v\ld w)$. The *inner mapping group* $\inn{Q}$ of a loop $Q$ is the permutation group generated by $$L_{x,y} = L_{yx}^{-1}L_yL_x,\quad R_{x,y} = R_{xy}^{-1}R_yR_x,\quad T_x = L_x^{-1}R_x,$$ where $x$, $y\in Q$. A subloop of $Q$ is *normal* if it is invariant under all inner mappings of $Q$. A loop $Q$ is an *automorphic loop* (or *A-loop*) if $\inn{Q}\le\aut{Q}$, that is, if every inner mapping of $Q$ is an automorphism of $Q$. Hence a commutative loop is an A-loop if and only if all its left inner mappings $L_{y,x}$ are automorphisms, which can be expressed by the identity $$\label{Eq:A} xy\ld x(yu)\cdot xy\ld x(yv) = xy\ld x(y\cdot uv).\tag{\textsc{A}}$$ Note that the class of commutative A-loops contains commutative groups and commutative Moufang loops. We assume that the reader is familiar with the terminology and notation of loop theory, cf. [@Bruck] or [@Pflugfelder]. This paper is a companion to [@JKV], where we have presented a historical introduction and many new structural results concerning commutative $A$-loops, including: 1. commutative A-loops are power-associative (see already [@BP]), 2.
for a prime $p$, a finite commutative A-loop $Q$ has order a power of $p$ if and only if every element of $Q$ has order a power of $p$, 3. every finite commutative A-loop is a direct product of a loop of odd order (consisting of elements of odd order) and a loop of order a power of $2$, 4. commutative A-loops of odd order are solvable, 5. the Lagrange and Cauchy theorems hold for commutative A-loops, 6. every finite commutative A-loop has Hall $\pi$-subloops (and hence Sylow $p$-subloops), 7. if there is a nonassociative finite simple commutative A-loop, it is of exponent $2$. Despite these deep results, the theory of commutative A-loops is in its infancy. As an illustration of this fact, the present theory is not sufficiently developed to classify commutative A-loops of order $8$ without the aid of a computer, commutative A-loops of order $pq$ (where $p<q$ are primes), nor commutative A-loops of order $p^3$ (where $p$ is an odd prime). The two main problems for commutative A-loops stated in [@JKV] were: *For an odd prime $p$, is every commutative A-loop of order $p^k$ centrally nilpotent?* *Is there a nonassociative finite simple commutative A-loop, necessarily of exponent $2$ and order a power of $2$?* We return to these problems in §\[Ss:8\]. In the meantime, we have managed to solve the first problem of [@JKV] in the affirmative, but we neither use nor prove the result here—it will appear elsewhere. The second problem remains open and the many constructions of commutative A-loops of exponent $2$ obtained here can be seen as a step toward solving it. One of the most important concepts in the investigation of commutative A-loops appears to be the middle nucleus $N_\mu(Q)$, since, by [@BP], $N_\lambda(Q)\le N_\mu(Q)$, $N_\rho(Q)\le N_\mu(Q)$ and $N_\mu(Q)\unlhd Q$ is true in any A-loop $Q$.
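Since the definitions above are finite and explicit, the A-loop property can be checked by brute force on a Cayley table. The sketch below (class and function names are ours) verifies the left inner mappings $L_{x,y}$, which suffice in the commutative case; as a sanity check, for an abelian group every $L_{x,y}$ is the identity:

```python
from itertools import product

def cyclic_table(n):
    """Cayley table of the cyclic group Z_n, the simplest commutative A-loop."""
    return [[(i + j) % n for j in range(n)] for i in range(n)]

class Loop:
    def __init__(self, table):
        self.t, self.n = table, len(table)

    def mul(self, x, y):
        return self.t[x][y]

    def ldiv(self, x, y):
        """x \\ y: the unique z with x*z = y (each row is a permutation)."""
        return self.t[x].index(y)

    def inner_L(self, x, y):
        """The left inner mapping L_{x,y} = L_{yx}^{-1} L_y L_x."""
        yx = self.mul(y, x)
        return lambda u: self.ldiv(yx, self.mul(y, self.mul(x, u)))

    def is_automorphism(self, f):
        return all(f(self.mul(u, v)) == self.mul(f(u), f(v))
                   for u, v in product(range(self.n), repeat=2))

def is_commutative_A_loop(Q):
    """Brute-force check of identity (A) via all left inner mappings."""
    return all(Q.is_automorphism(Q.inner_L(x, y))
               for x, y in product(range(Q.n), repeat=2))
```

For nonassociative tables of small order (such as the order-8 loops classified below), the same check runs instantly and replaces a by-hand verification of identity (A).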
In §\[Sc:Index2\] we characterize all commutative loops with middle nucleus of index $2$, solve the isomorphism problem, and then characterize all commutative A-loops with middle nucleus of index $2$. In §\[Sc:AppsIndex2\] we classify commutative A-loops of order $8$, among other applications of §\[Sc:Index2\]. Central extensions of commutative A-loops are described in §\[Sc:Extensions\], where a class of central extensions based on trilinear forms is constructed. As an application, we characterize all parameters $(k,\ell)$ with the property that there is a nonassociative commutative A-loop of order $2^k$ with middle nucleus of order $2^\ell>1$. §\[Sc:p3\] uses another class of central extensions partially based on the overflow in modular arithmetic that yields many (conjecturally, all) nonassociative commutative A-loops of order $p^3$, where $p$ is an odd prime. A classification of commutative A-loops of small orders based on the theory and computer computations can be found in §\[Sc:Enumeration\]. We begin with a construction based on commutative groups. Let $G$ be a commutative group and $f$ a bijection of $G$. Then $G(f)$ will denote the groupoid $(G\cup \ov{G},*)$ with multiplication $$\label{Eq:Gf} x*y = xy,\quad x*\ov{y} = \ov{xy},\quad \ov{x}*y=\ov{xy},\quad \ov{x}*\ov{y}=f(xy),$$ for $x$, $y\in G$. Note that $G(f)$ is a loop with neutral element $1$. \[Lm:PropertiesGf\] Let $G$ be a commutative group, $f$ a bijection of $G$ and $(Q,\cdot) = G(f) = (G\cup \ov{G},*)$. Then: 1. $Q$ is commutative. 2. For every $x$, $y\in G$: $x\ld y=x^{-1}y$, $x\ld\ov{y}=\ov{x^{-1}y}$, $\ov{x}\ld y = \ov{x^{-1}f^{-1}(y)}$, $\ov{x}\ld\ov{y} = x^{-1}y$. 3. $G\le\mnuc{Q}$. 4. $Q$ is a group if and only if $f$ is a translation of the group $G$. 5. $N_\lambda(Q)\cap G = N_\rho(Q)\cap G = Z(Q)\cap G = \{x\in G;\;f(xy)=xf(y)\text{ for every }y\in
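The $G(f)$ construction is easy to experiment with. In the sketch below (our encoding: an element of $G\cup\ov{G}$ is a pair $(x,\mathrm{bar})$) we build the multiplication of Eq. \[Eq:Gf\] over $G=\mathbb{Z}_4$ with the bijection $f(x)=3x \bmod 4$; since this $f$ is not a translation, the lemma predicts a commutative loop that is not a group, which the brute-force checks confirm:

```python
def Gf(n, f):
    """The groupoid G(f) for G = Z_n: elements (x, 0) = x and (x, 1) = ov{x}."""
    elems = [(x, bar) for bar in (0, 1) for x in range(n)]

    def mul(p, q):
        (x, bx), (y, by) = p, q
        s = (x + y) % n
        if bx and by:            # ov{x} * ov{y} = f(xy)
            return (f(s), 0)
        return (s, bx ^ by)      # the three remaining cases of Eq. (Gf)

    return elems, mul

elems, mul = Gf(4, lambda x: (3 * x) % 4)

# Q is commutative, and every left translation is a bijection (loop property)
assert all(mul(p, q) == mul(q, p) for p in elems for q in elems)
assert all(len({mul(p, q) for q in elems}) == len(elems) for p in elems)

# ... but Q is not a group, since f is not a translation of Z_4
assoc = all(mul(mul(p, q), r) == mul(p, mul(q, r))
            for p in elems for q in elems for r in elems)
assert not assoc
```

Whether a given $G(f)$ is moreover an A-loop is governed by the conditions of the lemma and the results of §\[Sc:Index2\]; the code above only verifies the loop-theoretic claims stated so far.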
--- abstract: 'Interesting effects arise in cyclic machines where both heat and ergotropy transfer take place between the energising bath and the system (the working fluid). Such effects correspond to unconventional decompositions of energy exchange between the bath and the system into heat and work, respectively, resulting in efficiency bounds that may surpass the Carnot efficiency. However, these effects are not directly linked with quantumness, but rather with heat and ergotropy, the likes of which can be realised without resorting to quantum mechanics.' author: - Arnab Ghosh - Victor Mukherjee - Wolfgang Niedenzu - Gershon Kurizki title: 'Are quantum thermodynamic machines better than their classical counterparts?' --- The question in the title can be understood in two ways. One is that these are machines ruled by laws that are specific to quantum thermodynamics (QTD), an emerging field that attempts to combine quantum mechanics and thermodynamics [@scovil1959three; @pusz1978passive; @lenard1978thermodynamical; @alicki1979quantum; @kosloff1984quantum; @scully2003extracting; @allahverdyan2004maximal; @erez2008thermodynamic; @delrio2011thermodynamic; @horodecki2013fundamental; @correa2014quantum; @skrzypczyk2014work; @brandao2015second; @pekola2015towards; @uzdin2015equivalence; @campisi2016power; @rossnagel2016single; @kosloff2013quantum; @gelbwaser2015thermodynamics; @goold2016role; @vinjanampathy2016quantum; @kosloff2017quantum]. The other possible meaning is that these machines are comprised of quantum systems: either all or some of their ingredients are describable quantum-mechanically, but this does not imply that these machines function in a quantum fashion.
Here we argue, based on our research over the past six years [@gelbwaser2013minimal; @gelbwaser2013work; @gelbwaser2014heat; @gelbwaser2015thermodynamics; @niedenzu2016operation; @dag2016multiatom; @mukherjee2016speed; @ghosh2017catalysis; @ghosh2018two; @niedenzu2018quantum; @ghosh2019thermodynamics], that quantum thermodynamic machines either conform to the latter meaning and do not rely on quantumness [@niedenzu2016operation; @niedenzu2018quantum] or they are truly quantum, exhibit “quantum advantage” [@delcampo2014more] but do not contradict the second law of thermodynamics [@gelbwaser2013work; @ghosh2017catalysis; @ghosh2019thermodynamics]. The first machine we analysed [@gelbwaser2013minimal] was deemed to be the minimal or simplest heat machine based on a quantum system — a qubit. The qubit with resonance frequency $\omega_0$ is the working fluid (WF) of the machine. It is permanently coupled to cold and hot thermal baths with different, non-overlapping spectra. The qubit is driven periodically by a classical field which acts as a piston that causes periodic modulation of the qubit frequency $\omega(t)$ (Fig. \[1\]). The modulation period $2\pi/\Delta$ constitutes the machine cycle time. This analysis [@gelbwaser2015thermodynamics] yields the result that the machine may act as a refrigerator (or heat pump) under the condition $$\begin{aligned} \label{heat-pump-cond} {n^{\mathrm{C}}}(\omega_0-\Delta) > {n^{\mathrm{H}}}(\omega_0+\Delta),\end{aligned}$$ and as a heat engine under the converse condition $$\begin{aligned} \label{heat-engine-cond} {n^{\mathrm{C}}}(\omega_0-\Delta) < {n^{\mathrm{H}}}(\omega_0+\Delta).\end{aligned}$$ Here ${n^{\mathrm{C}}}(\omega_0-\Delta)$ and ${n^{\mathrm{H}}}(\omega_0+\Delta)$ are the cold and hot bath thermal occupancies at the downshifted and upshifted transition frequencies, respectively.
These conditions characterise the optimal scenario wherein the qubit at the upshifted frequency only couples to the hot bath and at the downshifted frequency to the cold bath. Equivalently to Eqs. \eqref{heat-pump-cond} and \eqref{heat-engine-cond}, the machine acts as a heat engine whose piston extracts power (${\mathcal{P}}<0$) provided that the (positive) modulation frequency $\Delta$ is bounded (from above) by $$\begin{aligned} \label{delta-<-delta-cr} {\Delta_{\mathrm{cr}}}=\omega_0\frac{{T_{\mathrm{H}}}-{T_{\mathrm{C}}}}{{T_{\mathrm{H}}}+{T_{\mathrm{C}}}},\end{aligned}$$ ${T_{\mathrm{H}}}$ and ${T_{\mathrm{C}}}$ being the hot and cold bath temperatures, respectively. The efficiency, defined as the ratio of the extracted power $-{\mathcal{P}}$ to the heat input current ${J_{\mathrm{H}}}$ from the hot bath, grows with $\Delta$ until the Carnot bound is attained at ${\Delta_{\mathrm{cr}}}$, $$\begin{aligned} \label{carnot-bound} \eta=\frac{-{\mathcal{P}}}{{J_{\mathrm{H}}}}=\frac{2\Delta}{\omega_0+\Delta}\leq 1-\frac{{T_{\mathrm{C}}}}{{T_{\mathrm{H}}}}.\end{aligned}$$ As the modulation frequency exceeds the critical value, i.e., $\Delta > {\Delta_{\mathrm{cr}}}$, the machine becomes a refrigerator for the cold bath. It consumes power (${\mathcal{P}}>0$) from the piston and converts it into cold current ${J_{\mathrm{C}}}$, as characterised by the coefficient of performance (COP) that reaches its maximal value at $\Delta={\Delta_{\mathrm{cr}}}$, $$\begin{aligned} \label{cop-bound} \mathrm{COP}=\frac{{J_{\mathrm{C}}}}{{\mathcal{P}}}=\frac{\omega_0-\Delta}{2\Delta} \leq \frac{{T_{\mathrm{C}}}}{{T_{\mathrm{H}}}-{T_{\mathrm{C}}}}.\end{aligned}$$ These lucid, simple results show clearly that although the WF is a qubit, there is nothing uniquely quantum-mechanical about the machine performance, which adheres to the standard thermodynamic bound.
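These relations are easy to check numerically. In the sketch below (units $\hbar=k_{\mathrm{B}}=1$; the parameter values are ours, chosen for illustration only) the occupancies are Bose-Einstein factors, and the equivalence between the occupancy conditions and $\Delta\lessgtr{\Delta_{\mathrm{cr}}}$ follows from $(\omega_0-\Delta)/T_{\mathrm{C}} \gtrless (\omega_0+\Delta)/T_{\mathrm{H}}$:

```python
import math

def n_th(omega, T):
    """Bose-Einstein thermal occupancy at frequency omega and temperature T."""
    return 1.0 / math.expm1(omega / T)

omega0, TH, TC = 10.0, 4.0, 1.0
delta_cr = omega0 * (TH - TC) / (TH + TC)      # critical modulation frequency

# engine regime: Delta < Delta_cr  <=>  n_C(omega0 - Delta) < n_H(omega0 + Delta)
for Delta in (1.0, 3.0, 5.0):
    assert n_th(omega0 - Delta, TC) < n_th(omega0 + Delta, TH)
    eta = 2 * Delta / (omega0 + Delta)         # engine efficiency
    assert eta <= 1 - TC / TH                  # never exceeds the Carnot bound

# efficiency reaches the Carnot bound exactly at Delta = Delta_cr
assert math.isclose(2 * delta_cr / (omega0 + delta_cr), 1 - TC / TH)

# refrigerator regime: Delta > Delta_cr, COP bounded by TC / (TH - TC)
for Delta in (7.0, 9.0):
    assert n_th(omega0 - Delta, TC) > n_th(omega0 + Delta, TH)
    cop = (omega0 - Delta) / (2 * Delta)
    assert cop <= TC / (TH - TC)
```

The assertions make quantitative the statement that the machine adheres to the standard thermodynamic bounds on both sides of ${\Delta_{\mathrm{cr}}}$.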
Yet, the field of quantum-thermodynamic machines has been propelled by ingenious proposals to benefit from quantum resources embodied by non-thermal baths [@scully2003extracting; @dillenschneider2009energetics; @huang2012effects; @abah2014efficiency; @rossnagel2014nanoscale; @niedenzu2016operation; @manzano2016entropy; @hardal2015superradiant; @klaers2017squeezed; @agarwalla2017quantum; @niedenzu2018quantum]. Schematically, such machines have the same ingredients as conventional Carnot heat engines (Fig. \[3\]). However, at least the hot bath, which is the source of energy, may have non-thermal properties that stem from its quantum-mechanical preparation. The question has been posed whether a cycle energised by such a bath must abide by the Carnot efficiency bound derived in 1824 for steam engines [@carnotbook], $$\begin{aligned} \label{carnot-bound-1824} \eta=\frac{-W}{{Q_{\mathrm{H}}}}\leq 1-\frac{{T_{\mathrm{C}}}}{{T_{\mathrm{H}}}}=:\eta_\mathrm{C},\end{aligned}$$ where the efficiency is the ratio of the work output $-W$ to the heat input ${Q_{\mathrm{H}}}$. Two crucial assumptions have been made in Eq. \eqref{carnot-bound-1824}: (i) that the input from the “hot” (better: energising) bath is indeed heat, and (ii) that this bath has a “temperature” ${T_{\mathrm{H}}}$, although a non-thermal bath need not have one. Before addressing these issues, we consider the specific setups which have prompted our general investigation of these issues [@niedenzu2016operation; @niedenzu2018quantum]. The first setup, whose study by Scully et al. [@scully2003extracting] pioneered the field, consists of an engine that is energised by “phaseonium” fuel. The latter are three-level atoms whose lower two near-degenerate levels are coherently superimposed with a phase $\phi$ (Fig. \[4\]). The resulting effective hot-bath temperature ${T_{\mathrm{H}}}(\phi)$, and hence the Carnot bound, is $\phi$-dependent.
Consequently, whenever $\phi$ is chosen such that ${T_{\mathrm{H}}}(\phi)$ exceeds ${T_{\mathrm{H}}}$ in the absence of coherence, the resulting Carnot bound becomes higher than the standard (incoherent) Carnot bound. Is this a quantum advantage?
--- author: - 'S. Sabari$^{1,2}$' --- A dipolar Bose-Einstein condensate (BEC) is characterised by the long-range dipole-dipole interaction (DDI) between its atoms, in addition to the short-range contact interaction [@FetterRMP]. Contrary to the short-range contact interaction, the DDI is a long-range anisotropic interaction that can be either repulsive or attractive. The *s-wave* contact interaction, $a_s$, is experimentally controllable via a Feshbach resonance [@FBR]. It is therefore appealing to study the properties of dipolar BECs in variable short-range contact interaction regimes. However, the DDI is also inherently controllable, either via the magnitude of the external electric field, or by modulating the external aligning field in time, which allows one to tune the magnitude and sign of the DDI [@tuneDDI]. Due to the long-range nature and anisotropic character of the DDI, the dipolar BEC possesses many distinct features and new phenomena, such as new dispersion relations of elementary excitations [@Wilson:2010; @Ticknor:2011], unusual equilibrium shapes, the roton-maxon character of the excitation spectrum [@Santos:2000; @Yi:2003; @Ronen:2007; @Parker:2008], quantum phases including supersolid and checkerboard phases [@Tieleman:2011; @Zhou:2010], anisotropic solitons [@equ2Db; @solRK; @solPM], vortices [@rev4; @rev5], hidden vortices [@Sabari2017] and distinct vortex lattices including crater-like structures and square lattices [@vor1; @vor2; @vor3]. Modern optical techniques help to control the parameters of the condensate and visualize topological defects such as rarefaction pulses and quantized vortices in BECs [@FetterRMP; @equ2Db; @solRK; @solPM; @rev4; @rev5; @vor1; @vor2; @vor3]. Recently, vortex tangles caused by an oscillatory perturbation were observed experimentally. The vortex tangle configuration is a signature of the presence of a quantum turbulent regime in the BEC cloud [@Henn09]. Moreover, recent studies on quantum turbulence are still concentrating on understanding the dynamics of quantized vortices [@PLTP].
Vibrating structures such as spheres, grids, and wires are used in superfluid $^3$He and $^4$He to create quantum turbulence [@PLTP; @Hanninen07]. Despite this, analogous oscillating perturbations remain largely unexplored in trapped atomic gases. Introducing an oscillating potential in atomic dipolar BECs will be helpful to analyze the intrinsic nucleation of topological defects and the combined dynamics of vortices and rarefaction pulses. Also, this technique suggests a powerful method for creating quantum turbulence in trapped dipolar BECs, in addition to the other methods that have been used so far [@Henn09; @Berloff02; @Kobayashi07] in alkali BECs. Eventually, the dynamics of vortices and rarefaction pulses can be visualized in atomic dipolar BECs, enabling further experimental and theoretical analysis. Up to now, vortex dipoles caused by oscillating potentials in alkali BECs have been observed in experiments and compared to theoretical models [@osc1; @osc2; @Jackson00; @Raman99; @Onofrio20; @Neely10; @rev2; @rev3]. Nonlinear dynamical behaviors, critical velocities for vortex dipoles, hydrodynamic flow, vortices, rarefaction pulses and other interesting perspectives have been studied in alkali BECs using the oscillating Gaussian potential [@osc1; @osc2; @Jackson00; @Raman99; @Onofrio20; @Neely10]. In spite of the many experiments that have been carried out on $^{164}$Dy and $^{168}$Er condensates, there has still been no experimental observation of vortices in dipolar BECs. So, investigating the dynamics of vortex dipoles in a dipolar BEC by introducing an oscillating potential will be a fascinating experimental exploration. Thus, this model will be helpful to perform new experiments with the aim of observing vortices in dipolar BECs. In the present work, we are interested in studying the nucleation and dynamics of vortex dipoles and rarefaction pulses. The next sections are organized as follows. In Sec.
\[sec:frame\], we present the general three-dimensional mean-field equation for the dipolar BECs and the corresponding two-dimensional (2D) reduction. In Sec. \[sec:numerical\], we present our numerical results, where we include plots on the critical velocity for the nucleation and dynamics of vortex dipoles. Further, in this section, we show the rarefaction pulses due to the annihilation of vortex dipoles. Finally, in Sec. \[sec:con\], we present a summary of our conclusions and perspectives. The mean-field formalism {#sec:frame} ======================== At ultralow temperatures, a dipolar BEC is described by the time-dependent GP equation with a nonlocal integral corresponding to the DDI [@dbec1; @dbec2; @lasPM; @rev5; @rev6] $$\begin{aligned} i\hbar\frac{\partial \phi({\mathbf r},t)}{\partial t}& =\Big(-\frac{\hbar^2}{2m}\nabla^2+V({\mathbf r},t) + g \left\vert \phi({\mathbf r},t)\right\vert^2 \Big)\phi({\mathbf r},t)\notag \\ &+N \int U_{\mathrm{dd}}({\mathbf r}-{\mathbf r}')\left\vert\phi({\mathbf r}',t)\right\vert^2 d{\mathbf r}'\phi({\mathbf r},t), \label{eqn:dgpe}\end{aligned}$$ with $({\bf r},t)=({\bf \rho},t)$ and the radial coordinate being $\rho=\sqrt{x^2+y^2}$. The trapping potential $V({\mathbf r},t) = V_{ext} + V_G$ contains a cylindrically symmetric harmonic trap in addition to a blue-detuned Gaussian obstacle. The cylindrically symmetric trap is $V_{ext}({\mathbf r})=\frac{1}{2} m (\omega_\rho^2 (x^2+y^2)+ \omega_z^2 z^2)$, with $\omega_x = \omega_y = \omega_\rho$ and $\omega_z$ being the radial and axial trap frequencies, respectively. The trap aspect ratio of the harmonic trap is $\lambda=\omega_{z}/\omega_{\rho}$. The Gaussian obstacle is $$V_{G}(\rho,t) = V_{0} \exp\left(-\frac{\left[x-x_0(t)\right]^2+y^2}{w_0^2}\right),$$ where $V_0$, $x_0(t)$ and $w_0$ are the height, position and width of the Gaussian obstacle.
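For reference, a minimal numerical sketch of this trapping potential in the 2D reduction (the grid extents and parameter values below are illustrative placeholders in dimensionless units, not the paper's simulation parameters):

```python
import numpy as np

# illustrative, dimensionless parameters (not the paper's Er/Dy values)
m, omega_rho = 1.0, 1.0
V0, w0 = 80.0, 0.25

x = np.linspace(-15.0, 15.0, 256)
y = np.linspace(-15.0, 15.0, 256)
X, Y = np.meshgrid(x, y, indexing="ij")

def V_ext(X, Y):
    """Radial part of the cylindrically symmetric harmonic trap."""
    return 0.5 * m * omega_rho**2 * (X**2 + Y**2)

def V_G(X, Y, x0):
    """Blue-detuned Gaussian obstacle of height V0 and width w0, centred at (x0, 0)."""
    return V0 * np.exp(-((X - x0)**2 + Y**2) / w0**2)

# total in-plane potential with the obstacle at x0 (x0 is time-dependent below)
V = V_ext(X, Y) + V_G(X, Y, x0=0.0)
```

In a split-step propagation of Eq. \[eqn:dgpe\], `V` would be re-evaluated each time step with the instantaneous obstacle position `x0`.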
The position of the obstacle $x_0(t)=\epsilon \sin(\omega t)$ provides parametric resonance with respect to the oscillating frequency $\omega$. One can control the velocity ($v=\epsilon\omega$) of oscillation of the obstacle with respect to the amplitude $\epsilon$ and the frequency $\omega$. In the present study, $\epsilon = 10\,\mu$m and $\omega = 60~\mathrm{s}^{-1}$. However, the velocity of the obstacle also depends on $V_{0}$ and $w_0$, and we keep these fixed: $V_0=80\,\hbar\omega_\rho$ and $w_{0}=0.25\,\mu m$. The two-body contact interaction strength is $g=4\pi \hbar^2a_s N/m$, where $a_s$, $m$, and $N$ are the atomic scattering length, the mass of the atom, and the number of atoms, respectively. We consider that the magnetic dipoles are polarized along the $z$ direction and the corresponding dipolar interaction term is $ U_{\mathrm{dd}}({\bf R})=(\mu_0 \mu^2/4\pi)\,(1-3\cos^2 \theta)/\vert {\bf R} \vert ^3$, where the relative position of the dipoles is ${\bf R= r -r'}$, $\theta$ is the angle between ${\bf R}$ and the direction of polarization $z$, $\mu_0$ is the permeability of free space and $\mu$ is the dipole moment of the condensate atom. In the present study, we consider the $^{168}$Er and $^{164}$Dy atoms, their corresponding dipole moments being $\mu=7\mu_B$ and $10\mu_B$, respectively. The normalization of the mean-field wavefunction is $\int d{\bf r}\vert\phi({\mathbf r},t)\vert ^2=1.$ It is convenient to use the GP equation (\[eqn:dgpe\]) in a
--- abstract: 'We study the structure of the load-based spanning tree (LST) that carries the maximum weight of the Erdös-Rényi (ER) random network. The weight of an edge is given by the edge-betweenness centrality, the effective number of shortest paths through the edge. We find that the LSTs present very inhomogeneous structures in contrast to the homogeneous structures of the original networks. Moreover, it turns out that the structure of the LST changes dramatically as the edge density of an ER network increases, from scale-free with a cutoff, scale-free, to a star-like topology. These would not be possible if the weights were randomly distributed, which implies that the topology of the shortest paths is correlated in spite of the homogeneous topology of the random network.' --- Complex networks have been studied intensively as a framework for describing a variety of systems [@Dorogovtsev1]. The most representative measure characterizing a network is the *degree distribution*, $P(k)$, which indicates the probability for a vertex to be directly connected to $k$ neighboring vertices with edges. Specifically, while the power-law distribution, $P(k) \sim k^{-\gamma}$, is widely found in most real-world networks including technological, biological, and social networks, such as the Internet [@Faloutsos1], the World Wide Web [@Albert2], the metabolic networks [@Jeong1], the protein interaction networks [@Jeong2], and the coauthorship networks [@Newman1], there also exist homogeneous networks, such as the US highway network and the US power-grid network, which are described by bell-shaped or exponential degree distributions. Recently it has been claimed that an apparent scale-free network can originate from sampling an underlying homogeneous network [@Clauset1]. While the degree distribution gives valuable knowledge of local structures of networks around us, it is also necessary to know global structures of networks to understand dynamics on the networks properly.
The information transport between two vertices occurs along an optimal path connecting them, defined as the path minimizing the total cost [@Braunstein1; @Sreenivasan1; @Buldyrev1], which is usually determined by using the global knowledge of the network. For instance, full information of connections in the network is required to determine a shortest path, defined as a path consisting of a minimum number of edges, which would be an optimal path if the costs of all edges were identical. The scale-free network has been revealed to have a very inhomogeneous shortest-path topology, so that one can find extremely important vertices or edges through which a huge number of shortest paths pass. This has been supported by the power-law distribution of the betweenness centrality (BC) [@Goh1] and the existence of the transport skeleton structure [@dhkim1], which makes it possible to understand the origin of the difference between the BC exponent classes of real-world networks [@dhkim1; @Goh1] and the universal properties of the fractal scaling [@Goh2; @Song1]. However, in the non-scale-free networks, even though it has been known that there are no such vertices or edges used heavily in the shortest paths, the topology of the shortest paths has not been intensively studied so far. Our main interest is to find out how the topology of shortest paths is correlated with the network topology in the Erdös-Rényi (ER) random network model [@Erdos1], where two arbitrary vertices in the network are randomly connected to each other by an edge with a given probability $p$, which gives the Poisson degree distribution. In order to systematically study the shortest path topology, it is necessary to treat the network as a weighted network in which the contribution of the shortest paths on each edge is assigned as the weight of the edge.
We use the *edge-betweenness centrality* (edge-BC) [@Freeman1; @Girvan1; @Newman2] to represent the contribution of the shortest paths on each edge of the network, which is a convenient quantity counting the effective number of shortest paths through the edge and thereby gives the average traffic through the edges. The edge-BC of an edge $e_{ij}$ between vertices $i$ and $j$ is the total contribution of the edge on the shortest paths between all possible pairs of vertices, which is defined as follows: $$b(e_{ij}) = \sum_{m \neq n} b(m,n;i,j) = \sum_{m \neq n} \frac{c(m,n;i,j)}{c(m,n)},$$ where $c(m,n;i,j)$ denotes the number of shortest paths from a vertex $m$ to $n$ through the edge $e_{ij}$, and $c(m,n)$ is the total number of shortest paths from $m$ to $n$. In such weighted networks, one useful way to study the spatial correlation of the weight is to investigate the *load-based spanning tree* (LST) [@dhkim1], which consists of a set of edges selected to maximize the total weight, and corresponds to the skeleton of the network [@dhkim1]. From the degree distribution of the LST, we can check whether the spatial distribution of the weights is correlated with the original network topology. If the weights are randomly distributed on the edges of the network, the degree distribution of the original network would be preserved in its LST [@Szabo1]. On the other hand, if there exists significant topological correlation in the distribution of the weights on the edges, the degree distribution is expected to show a large deviation from the Poisson distribution, the degree distribution of the ER network. In this paper, we investigate the structural properties of the LST of the ER model in this respect. We find that the LSTs show very inhomogeneous structures in contrast to the homogeneous structures of the original networks.
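Both ingredients, the edge-BC and the LST, can be computed with standard algorithms. The sketch below is our own minimal implementation: a Brandes-style accumulation for unweighted graphs (which counts each ordered pair of endpoints, i.e., each unordered pair twice), followed by Kruskal's algorithm on edges sorted by decreasing weight:

```python
from collections import deque, defaultdict

def edge_betweenness(adj):
    """Accumulate shortest-path contributions onto edges (Brandes-style BFS)."""
    bc = defaultdict(float)
    for s in adj:
        dist, sigma, order = {s: 0}, defaultdict(float), []
        sigma[s] = 1.0                      # sigma[v]: number of shortest s-v paths
        dq = deque([s])
        while dq:
            v = dq.popleft()
            order.append(v)
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    dq.append(u)
                if dist[u] == dist[v] + 1:
                    sigma[u] += sigma[v]
        delta = defaultdict(float)          # back-propagate dependencies onto edges
        for u in reversed(order):
            for v in adj[u]:
                if dist.get(v, -1) == dist[u] - 1:
                    c = sigma[v] / sigma[u] * (1.0 + delta[u])
                    bc[frozenset((v, u))] += c
                    delta[v] += c
    return bc

def max_spanning_tree(nodes, weights):
    """Kruskal on decreasing weights: the load-based spanning tree (LST)."""
    parent = {v: v for v in nodes}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for e in sorted(weights, key=weights.get, reverse=True):
        u, v = tuple(e)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append(e)
    return tree
```

Feeding the output of `edge_betweenness` into `max_spanning_tree` reproduces the LST construction used throughout this section.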
It is found that the degree distribution of the LST shows rich characteristics depending on the connection probability and the size of the ER network, which turns out to be very different from the Poisson distribution. Figure \[fig:pk\](a) shows an illustration of the inhomogeneous LST structure obtained from the ER network with $N=100$ vertices and $p=0.1$, in which hub vertices having significantly large degrees are found. The same pattern is also shown in Fig. \[fig:pk\](b), which displays the degree distributions of the LSTs obtained from the ER networks generated with various connection probabilities. The most interesting feature is that right-skewed degree distributions are observed in the LSTs for a wide range of the connection probability $p$ of the ER network, whose degree distributions follow the narrow Poisson distribution. For the examined LSTs, it is found that a power law with a cutoff fits the degree distribution of the LST well, where the exponent and the cutoff degree depend on $p$. The emergence of these inhomogeneous degree distributions in the LSTs indicates that there exist non-negligible correlations between the weights of neighboring edges sharing a common vertex at their ends, since a vertex shared by the edges having higher weights becomes a hub in the LST \[see Fig. \[fig:pk\](c)\]. These imply that the shortest paths, which give the weights of their constituting edges, are not randomly distributed on the ER network but are strongly correlated, enough to generate an inhomogeneity in spite of the homogeneous topology of the ER network. The topological correlation of the shortest paths that gives rise to the inhomogeneous LST structure can be specified by the correlation between the degree of a vertex and the weights of the edges connected to it.
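The average weight rank introduced next is straightforward to compute for a single network realization; here is a sketch (ensemble average omitted, and the edge-list encoding is our own choice):

```python
from collections import defaultdict

def average_weight_rank(edges, weights):
    """R_k for one network: mean weight-rank of edges incident to degree-k vertices."""
    # rank edges by weight: the largest weight gets rank 1, the next rank 2, ...
    ranked = sorted(edges, key=lambda e: weights[e], reverse=True)
    rank = {e: r + 1 for r, e in enumerate(ranked)}
    # collect incident edges per vertex
    adj = defaultdict(list)
    for (u, v) in edges:
        adj[u].append((u, v))
        adj[v].append((u, v))
    # average the mean incident rank over vertices of equal degree
    by_k = defaultdict(list)
    for v, inc in adj.items():
        k = len(inc)
        by_k[k].append(sum(rank[e] for e in inc) / k)
    return {k: sum(vals) / len(vals) for k, vals in by_k.items()}
```

Averaging the returned dictionaries over many ER realizations gives the ensemble quantity $R_k$ defined below.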
In order to find out more about the spatial correlation in the distribution of the weights, we measure the average weight rank $R_k$, an averaged value of the weight-rank $r$ over the edges attached to a vertex having degree $k$. The rank $r_{ij}$ of the edge between $i$ and $j$ is assigned according to its weight $w_{ij}$, i.e., the largest weight gives $r=1$, the second largest weight gives $r=2$, and so on. Mathematically $R_{k}$ is defined as follows: $$R_k = \bigg\langle \frac{1}{|\mathbf{V}_k|} \sum_{i \in \mathbf{V}_k} \frac{1}{k} \sum_{j} r_{ij} a_{ij} \bigg\rangle ,$$ where $\mathbf{V}_k$ and $|\mathbf{V}_k|$ are the set of vertices having degree $k$ and the number of those vertices, respectively, $a_{ij}$ is the adjacency matrix element; $a_{ij}=1$ if $i$ and $j$ are connected and $a_{ij}=0$ otherwise, and $\langle \ldots \rangle$ denotes an average over network ensembles. Consequently, a small (large) value of $R_k$ indicates that a vertex has edges with high (low) ranks. We attach great importance to $R_k$ because it gives an insight into how the structure of the network changes in the LST, since the edges of the network are picked in rank order for the LST. While the average value of
Marc Massar [^1] and Jan Troost [^2]\ [*Theoretische Natuurkunde, Vrije Universiteit Brussel*]{}\ [*Pleinlaan 2, B-1050 Brussel, Belgium*]{}\ ABSTRACT > We study a configuration in matrix theory carrying longitudinal fivebrane charge, i.e. a D0-D4 bound state. We calculate the one-loop effective potential between a D0-D4 bound state and a D0–anti-D4 bound state and compare our results to a supergravity calculation. Next, we identify the tachyonic fluctuations in the D0-D4 and D0–anti-D4 system. We analyse classically the action for these tachyons and find solutions to the equations of motion corresponding to tachyon condensation. Introduction ============ Matrix theory [@BFSS] [@Su] [@Se] is the M-theory interpretation of U(N) supersymmetric quantum mechanics which has passed many stringent tests. The brane content of matrix theory was determined in [@BSS]. Amongst other branes, the longitudinal fivebrane was identified [^3]. Two types of representation for the longitudinal fivebrane were proposed. One is in terms of an instanton gauge field, which was used in [@CT1] to calculate one-loop effective potentials between the D0-D4 bound state and other objects in matrix theory. Another representation was proposed in terms of two pairs of canonical conjugate variables. We use this representation to calculate one-loop effective potentials (see e.g. [@AB] [@L2] [@CT1] [@CT2]) between this object and a graviton, another D0-D4 bound state, and a D0–anti-D4 system. Naturally, we find agreement with [@CT1] for the cases studied there and with an extra supergravity calculation for the D0-D4 and D0–anti-D4 system. In [@Ja] a first step towards the understanding of Sen’s tachyon condensation mechanism [@Sen] in matrix theory was taken, by analyzing the tachyon in the D0-D2 and D0–anti-D2 system. We extend this analysis to the D0-D4 and D0–anti-D4 system. We identify the tachyonic fluctuations in the D0-D4 and D0–anti-D4 background and analyse the classical action for these fluctuations in the spirit of [@Ja].
We find solutions to the action representing condensation to a vacuum filled with D0-branes and gravitons. The first section concentrates on a discussion of the classical solution of matrix theory corresponding to a D0-D4 bound state system. In the second section some effective potentials are calculated in detail to get acquainted with the representation of the longitudinal fivebrane in terms of canonical conjugate variables. We add a remark about the spectrum of the fluctuations around one longitudinal fivebrane. The next section deals with an analysis of the tachyonic fluctuations. Then we analyse possible solutions to the action for the tachyonic fluctuations. Finally, we comment on some remaining classical problems. Preliminary discussion of the classical solution ================================================ The lagrangian of matrix theory is given by $U(N)$ supersymmetric quantum mechanics, namely the dimensional reduction of ten-dimensional ${\cal N}=1$ $U(N)$ Super Yang-Mills theory to $0+1$ dimensions. It reads [@BFSS]: $$\begin{aligned} {\cal L} &=& \frac{T_0}{2} Tr \left( (D_0 X_I)^2+ \frac{1}{2} \left[ X_{I} , X_J \right]^2 + 2 \theta^{T} D_0 \theta - 2 \theta^{T} \gamma^I \left[ \theta, X_I \right] \right) \end{aligned}$$ where we take $ 2 \pi \alpha' = 1 $ and $ T_0 = \frac{\sqrt{2 \pi}}{g} $. Furthermore we have $D_0 = \partial_t - i \left[A_0, . \right] $ and $I=1,2, \dots, 9$. All fields are in the adjoint of $U(N)$. The fermions are Majorana-Weyl. The equations of motion for static configurations with trivial $A_0$ and vanishing fermions are: $$\begin{aligned} \left[ X_I, \left[ X_I, X_J \right] \right] &=& 0.
\end{aligned}$$ We study especially a background configuration ($X_I = B_I$) corresponding to a D0-D4 bound state, or longitudinal fivebrane, satisfying the following commutation rules [@BSS]: $$\begin{aligned} \left[ B_1, B_2 \right] &=& - i c \, \sigma_3 \otimes I_{\frac{N}{2} \times \frac{N}{2}} \nonumber \\ \left[ B_3, B_4 \right] &=& - i c \, \sigma_3 \otimes I_{\frac{N}{2} \times \frac{N}{2}}, \end{aligned}$$ and the other matrices and commutators zero. Here $ \sigma_3 $ is the third Pauli matrix and $c$ is a constant. We take the infinite background matrices to be block-diagonal such that this configuration solves the equations of motion. We will use two representations for this solution. The first one is in terms of two ‘canonical conjugate’ pairs: $$\begin{aligned} \left[ P_1, Q_1 \right] &=& - i c \nonumber \\ \left[ P_2, Q_2 \right] &=& - i c \nonumber \\ B_1 &=& \left( \begin{array}{cc} P_1 & 0 \\ 0 & P_1 \end{array} \right) \nonumber \\ B_2 &=& \left( \begin{array}{cc} Q_1 & 0 \\ 0 & -Q_1 \end{array} \right) \nonumber \\ B_3 &=& \left( \begin{array}{cc} P_2 & 0 \\ 0 & P_2 \end{array} \right) \nonumber \\ B_4 &=& \left( \begin{array}{cc} Q_2 & 0 \\ 0 & -Q_2 \end{array} \right) \label{bl} \end{aligned}$$ This representation makes it easy to interpret the brane content of the configuration. Clearly, this solution as a whole carries no membrane charge since $q_2=- \frac{i}{2 \pi} Tr \left[ B_I , B_J \right] = 0 $. It carries longitudinal fivebrane charge in the $1,2,3,4$ directions though: $$\begin{aligned} q_5=-\frac{1}{8 \pi^2} \epsilon^{IJKL} Tr \left[ B_I B_J B_K B_L \right] & = & N \frac{c^2}{4 \pi^2} \end{aligned}$$ We refer to [@KK] for a clear and detailed analysis of the charges of the configuration, which yields the fact that the configuration built in this way represents at least two D0-D4 bound states. That can be understood from the following reasoning.
When we focus on the left upper block, it clearly has membrane charge in directions $1,2$ and $3,4$, as well as longitudinal fivebrane charge. It represents a D0-D4-D2-D2 bound state. Zooming in on the right lower block we see a D0-D4-anti-D2-anti-D2 bound state. If we formally superimpose the two parts we find two D0-D4 bound states, the 2-brane charge cancelling out. Thinking naively, one might worry that this superposition is unstable; in particular, one might expect a tachyonic off-diagonal mode in the background configuration, representing a string stretching between a D2-brane and an anti-D2-brane. We will come back to this point and show that there is no such tachyonic mode. Moreover, the configuration was shown in [@BSS] to preserve 1/4 supersymmetry, as expected from D0-D4 bound states. An alternative representation of the background configuration in terms of gauge fields, discussed in detail in [@KK], will come in handy later on. It is given by: $$\begin{aligned} B^1 &=& c \left ( \begin{array}{cc} -i \partial_{x_1} & 0 \\ 0 & -i \partial_{x_1} \end{array} \right) \nonumber \\ B^2 &=& c \left ( \begin{array}{cc} -i \partial_{x_2} + \frac{ x_1}{c} & 0 \\ 0 & -i \partial_{x_2} - \frac{ x_1}{c} \end{array} \right) \nonumber \\ B^3 &=& c \left ( \begin{array}{cc} -i \partial_{x_3} & 0 \\ 0 &-i \partial_{x_3} \end{array} \right) \nonumber \\ B^4 &=& c \left ( \begin{array}{cc} -i \partial_{x_4} + \frac{ x_3}{c} & 0 \\ 0 & -i \partial_{x_4} - \frac{ x_3}{c} \end{array} \right) \end{aligned}$$
--- abstract: 'We establish a link between the maximization of Kolmogorov-Sinai entropy (KSE) and the minimization of the mixing time for general Markov chains. Since the maximization of KSE is analytical and in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum-mixing-time dynamics.' author: - 'M. Mihelich' - 'B. Dubrulle' - 'D. Paillard' - 'Q. Kral' - 'D. Faranda' --- Many sampling algorithms are based on Markov chains. Techniques to estimate the number of steps in the chain needed to reach the stationary distribution (the so-called “mixing time”) are of great importance in obtaining estimates of running times of such sampling algorithms [@bhakta2013mixing] (for a review of existing techniques, see e.g. [@guruswami2000rapidly]). On the other hand, studies of the link between the topology of a graph and the diffusion properties of the random walk on this graph are often based on the entropy rate, computed using the Kolmogorov-Sinai entropy (KSE) [@gomez2008entropy]. For example, one can investigate dynamics on a network maximizing the KSE to study optimal diffusion [@gomez2008entropy], or obtain an algorithm to produce equiprobable paths on non-regular graphs [@burda2009localization]. In this letter, we establish a link between these two notions by showing that for a system that can be represented by Markov chains, **a non-trivial relation exists between the maximization of KSE and the minimization of the mixing time**. Since the KSE is in general easier to compute than the mixing time, this link provides a new, faster method to approximate the minimum mixing time, which could be of interest in computer science and statistical physics, and gives a physical meaning to the KSE. We first show that, on average, the greater the KSE, the smaller the mixing time, and we relate this result to the transition matrix eigenvalues.
Then, we show that the dynamics that maximizes KSE is close to the one minimizing the mixing time, both in the sense of the optimal diffusion coefficient and of the transition matrix. Consider a network with $m$ nodes, on which a particle jumps randomly. The network is described by its adjacency matrix $A$ and a transition matrix $P$. $A(i,j)=1$ if and only if there is a link between the nodes $i$ and $j$, and 0 otherwise. $P=(p_{ij})$, where $p_{ij}$ is the probability for a particle in $i$ to hop to node $j$. Let us introduce the probability density at time $n$, $\mu_n=(\mu_n^i)_{i=1...m}$, where $\mu_n^i$ is the probability that a particle is at node $i$ at time $n$. Starting from any initial density, the chain converges to a stationary state $\mu_{stat}$. Let us define: $$\label{eqdn} d(n)= \max_{\mu}{ || (P^t)^n\mu - \mu_{stat}|| },$$ where $||.||$ is a norm on $\mathbb{R}^m$. For $\epsilon > 0$, the mixing time, which corresponds to the time after which the system is within a distance $\epsilon$ from its stationary state, is defined as follows: $$\label{eq:mix1} t(\epsilon)= \min\{ n \,:\, d(n) \leq \epsilon\}.$$ For a Markov chain the KSE takes the analytical form [@billingsley1965ergodic]: $$\label{eqhks} h_{KS}=-\sum_{ij} \mu_{stat_{i}}p_{ij}\log(p_{ij}).$$ Random Markov matrices of size $m$ are generated by assigning to each $p_{ij}$ ($i\neq j$) a random number between $0$ and $\frac{1}{m}$, and setting $p_{ii}= 1-\sum_{j\neq i} p_{ij}$. The mean KSE is plotted versus the mixing time (Fig. \[fig:KS1\]) by working out $h_{KS}$ and $t(\epsilon)$ for each random matrix. Fig. \[fig:KS1\] shows that the KSE is on average a decreasing function of the mixing time. ![Averaged KSE versus mixing time (top) for $10^6$ random $m=10$ size matrices, and averaged $\lambda(P)$ versus mixing time (bottom) for $10^6$ random $m=10$ size matrices in blue and $f(t)=\epsilon^{1/t}$ in red, with $\epsilon=10^{-3}$.[]{data-label="fig:KS1"}](KSfuncmixtime1m10gmax1064.jpg){width="10cm"} This decrease, however, holds only on average.
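The numerical experiment described above can be sketched as follows. This is a minimal single-matrix version (the letter averages over $10^6$ matrices), assuming the $L^1$ norm for $d(n)$ and using the fact that the maximum over densities $\mu$ is attained at a delta measure; the function names are ours:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_markov(m):
    # off-diagonal entries uniform in (0, 1/m); diagonal fixes row sums to 1
    P = rng.uniform(0.0, 1.0 / m, size=(m, m))
    np.fill_diagonal(P, 0.0)
    np.fill_diagonal(P, 1.0 - P.sum(axis=1))
    return P

def stationary(P):
    """Stationary density: left eigenvector of P for eigenvalue 1."""
    w, V = np.linalg.eig(P.T)
    v = np.real(V[:, np.argmin(np.abs(w - 1.0))])
    return v / v.sum()

def kse(P):
    """h_KS = -sum_ij mu_stat_i p_ij log p_ij (entropy rate of the chain)."""
    mu = stationary(P)
    mask = P > 0
    logP = np.where(mask, np.log(np.where(mask, P, 1.0)), 0.0)
    return -np.sum(mu[:, None] * P * logP)

def mixing_time(P, eps=1e-3, n_max=10_000):
    """Smallest n with max_mu ||(P^t)^n mu - mu_stat||_1 <= eps; by convexity
    it suffices to test the delta measures (rows of the identity)."""
    mu = stationary(P)
    Q = np.eye(P.shape[0])
    for n in range(1, n_max + 1):
        Q = Q @ P          # row i of Q is the density after n steps from node i
        if np.max(np.abs(Q - mu).sum(axis=1)) <= eps:
            return n
    return n_max
```

Scattering `kse(P)` against `mixing_time(P)` for many draws of `random_markov(10)` reproduces the averaged trend of Fig. \[fig:KS1\].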
We can indeed find two special Markov chains $P1$ and $P2$ such that $h_{KS}(P1) \leq h_{KS}(P2)$ and $t_1(\epsilon) \leq t_2(\epsilon)$. We illustrate this point further below. The link between the mixing time and the KSE can be understood via their dependence on the transition matrix eigenvalues. A general irreducible transition matrix $P$ is not necessarily diagonalizable on $\mathbb{R}$. However, since $P$ is chosen randomly, it is almost everywhere diagonalizable on $\mathbb{C}$. According to the Perron-Frobenius theorem, the largest eigenvalue is 1 and the associated eigenspace is one-dimensional and equal to the vector space generated by $\mu_\text{stat}$. Without loss of generality, we can label the eigenvalues in decreasing order of their modulus: $$1=\lambda_1 > \lvert \lambda_2 \rvert \geq....\geq \lvert \lambda_m \rvert \geq 0$$ The convergence speed toward $\mu_\text{stat}$ is given by the second maximum modulus of the eigenvalues of $P$ [@boyd2004fastest], [@pierre1999markov]: $$\lambda(P)=\max_{i=2...m}{ |\lambda_i|}= \lvert \lambda_2 \rvert$$ The eigenvalues $\lambda_1=1,...,\lambda_m$ of $P$ and $P^t$ being equal, let us denote their associated eigenvectors $\mu_1=\mu_\text{stat},...,\mu_m$. For any initial probability density $\mu_0$, we find: $$\label{eqmu0} || (P^t)^n\mu_0 - \mu_\text{stat}|| \propto (\lambda(P))^n.$$ According to Eqs. (\[eqdn\]) and (\[eq:mix1\]), $\lambda(P)^{t(\epsilon)} \propto \epsilon$, i.e. $\lambda(P) \propto \epsilon^{1/t(\epsilon)}$. Hence, the smaller $\lambda(P)$, the shorter the mixing time (Fig. \[fig:KS1\]). Since $h_{KS}$ also depends on the spectrum of $P$, this explains its anticorrelation with $\lambda(P)$. This link between maximum KSE and minimum mixing time also extends naturally to optimal diffusion coefficients. Such a notion has been introduced by Gomez-Gardenes and Latora [@gomez2008entropy] in networks represented by a Markov chain depending on a diffusion coefficient.
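The relation $||(P^t)^n\mu_0 - \mu_\text{stat}|| \propto \lambda(P)^n$ can be checked on a small example. A sketch, assuming a two-state chain whose spectrum is known in closed form (eigenvalues 1 and 0.7, stationary density $(2/3, 1/3)$):

```python
import numpy as np

def lambda_P(P):
    """Second-largest eigenvalue modulus |lambda_2| of a transition matrix."""
    mods = np.sort(np.abs(np.linalg.eigvals(P)))
    return mods[-2]

# two-state example: trace 1.7, determinant 0.7 => eigenvalues 1 and 0.7
P = np.array([[0.9, 0.1], [0.2, 0.8]])
mu_stat = np.array([2.0, 1.0]) / 3.0

mu, dists = np.array([1.0, 0.0]), []
for _ in range(5):
    mu = P.T @ mu                       # evolve the density one step
    dists.append(np.abs(mu - mu_stat).sum())
# successive distance ratios equal lambda(P), illustrating the decay
# ||(P^t)^n mu_0 - mu_stat|| ~ lambda(P)^n
```

Here the initial error vector $(1/3, -1/3)$ happens to be an eigenvector of $P^t$ for the eigenvalue 0.7, so the ratio is exact at every step.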
Based on the observation that in such networks the KSE has a maximum as a function of the diffusion coefficient, they define an optimal diffusion coefficient as the value of the diffusion corresponding to this maximum. In the same spirit, one could compute an optimal diffusion coefficient with respect to the mixing time, corresponding to the value of the diffusion coefficient which minimizes the mixing time, or equivalently the smallest second largest eigenvalue $\lambda(P)$. This would roughly correspond to the diffusion model reaching the stationary state in the fastest time. To define such an optimal diffusion coefficient, we follow Gomez and Latora and vary the transition probability depending on the degree of the graph nodes. More precisely, if $k_i=\sum_j A(i,j)$ denotes the degree of node $i$, we set: $$\label{eq:diff1} p_{ij}=\frac{A_{ij}k_j^\alpha}{\sum_j A_{ij}k_j^\alpha}.$$ If $\alpha <0$ we favor transitions towards low-degree nodes, if $\alpha=0$ we recover the typical random walk on the network, and if $\alpha>0$ we favor transitions towards high-degree nodes. We assume here that $A
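The degree-biased transition rule above translates directly into code. A minimal sketch (our own helper name, assuming a graph with no isolated vertices so no row sum vanishes):

```python
import numpy as np

def degree_biased_P(A, alpha):
    """Transition matrix p_ij = A_ij k_j^alpha / sum_l A_il k_l^alpha
    (degree-biased random walk in the spirit of Gomez-Gardenes & Latora)."""
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                  # node degrees k_j
    W = A * k[None, :] ** alpha        # bias each target j by k_j^alpha
    return W / W.sum(axis=1, keepdims=True)
```

For `alpha = 0` this reduces to the usual random walk; sweeping `alpha` and minimizing `lambda_P(degree_biased_P(A, alpha))` gives the mixing-time-optimal diffusion coefficient discussed above.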
--- abstract: 'We study nonlinear waves in a nonrelativistic ideal and cold quark gluon plasma immersed in a strong uniform magnetic field. In the context of nonrelativistic hydrodynamics with an external magnetic field we derive a nonlinear wave equation for baryon density perturbations, which can be written as a reduced Ostrovsky equation. We discuss the effects of the magnetic field on these waves.' address: 'Instituto de Física, Universidade de São Paulo, Rua do Matão Travessa R, 187, 05508-090 São Paulo, SP, Brazil' author: - 'D. A. …' --- The quark gluon plasma may be formed in relativistic heavy ion collisions [@qgp2]. Deconfined quark matter may also exist in the core of compact stars [@qgp3]. Waves in these systems have attracted attention [@w2]. In heavy ion collisions waves may be produced, for example, by fluctuations in baryon number, energy density or temperature caused by inhomogeneous initial conditions [@w2]. In order to study waves, it is very often assumed that they represent small perturbations in a fluid, and hence one can linearize the equations of hydrodynamics and find their solutions, which are linear waves. Alternatively, instead of linearization we may use another procedure, called the Reductive Perturbation Method (RPM) [@rpm], which preserves the nonlinearity of the original equations. This leads to nonlinear differential equations, whose solutions describe nonlinear waves, such as solitons. In a series of works [@werev; @weset] we studied the existence and properties of nonlinear waves in hadronic matter and in a quark gluon plasma as well. The existence and effects of a magnetic field in quark stars have been studied for a long time [@mag1] and the subject has become a hot topic nowadays. In a different context, about ten years ago [@mag2] it was realized that a very strong magnetic field might be produced in relativistic heavy ion collisions and that it might have some effect on the quark gluon plasma phase. A natural question is then: what is the effect of the magnetic field on the waves propagating through the QGP?
In a previous work [@we17] we studied the conditions for an ideal, cold and magnetized quark gluon plasma (QGP) to support stable and causal perturbations. These perturbations were considered in the linear approach and the QGP was treated with nonrelativistic hydrodynamics. We have derived the dispersion relation for density and velocity perturbations. The magnetic field was included both in the equation of state and in the equations of motion, where the Lorentz force term was considered. We have used three equations of state: a generic non-relativistic one, the MIT bag model EOS (for weak and strong magnetic field) and the mQCD EOS. The anisotropy effects caused by the $B$ field were also manifest in the parallel and perpendicular sound speeds. We found that the existence of a strong magnetic field does not lead to instabilities in the velocity and density waves. Moreover, in most of the considered cases the propagation of these waves was found to respect causality. However, causality might be violated in the strong field regime, depending on the wave number $k$. The magnetic field changes the pressure, the energy density and the speed of sound. It also changes the equations of hydrodynamics. One of the conclusions of Ref. [@we17] is that the changes in hydrodynamics are by far more important than the changes in the equation of state. In the present study we extend our previous work to the case of nonlinear waves. We will investigate the effects of a strong and uniform magnetic field on nonlinear baryon density perturbations in an ideal and magnetized quark gluon plasma, for which we have developed the corresponding nonrelativistic hydrodynamical formalism. Our study could be applied to the deconfined cold quark matter in compact stars and to the cold quark gluon plasma formed in heavy ion collisions at intermediate energies at FAIR [@fair] or NICA [@nica]. Magnetic field effects on nonlinear waves have already been discussed in the literature.
Some work along this line was already published in [@azam], where the authors concluded that increasing the magnetic field leads to a reduction in the amplitude of the nonlinear waves. More recently [@javi17], perturbations in a cold QGP were studied with nonrelativistic hydrodynamics with magnetic field effects in a nonlinear approach. Solitonic density waves were found as solutions of a modified nonlinear Schrodinger equation. The magnetic field was found to increase the phase speed of the soliton and to reduce its width. Below we show how these effects arise. Nonrelativistic hydrodynamics ============================= We start from the nonrelativistic Euler equation [@land] with an external uniform magnetic field. The same magnetic field affects the thermodynamical quantities appearing in the equation of state, as in [@we17]. The magnetic field of intensity $B$ is chosen to be in the $z$-direction, hence $\vec{B}=B \hat{z}$. The three fermion species considered are the quarks: up ($u$), down ($d$) and strange ($s$), with the respective charges $Q_{u}= 2 \, Q_{e}/3$, $Q_{d}= - \, Q_{e}/3$ and $Q_{s}= - \, Q_{e}/3$, where $Q_{e}=0.08542$ is the absolute value of the electron charge in natural units [@glend]. Because of the external magnetic field, particles with different charges may follow different trajectories [@azam; @multif], and this justifies the use of the multi-fluid approach [@azam; @multif; @we17]. Throughout this work, we employ natural units ($\hbar=c=1$) and the metric used is $g^{\mu\nu}=\textrm{diag}(+,-,-,-)$. Starting from the hydrodynamic equations discussed in [@we17], the Euler equation for the quark of flavor $f$ ($f=u,d,s$) reads: $${\rho_{m\,f}}\Bigg[{\frac{\partial \vec{v_f}}{\partial t}} + (\vec{v_f}\cdot \vec{\nabla}) \vec{v_f}\Bigg]= -\vec{\nabla}p +{\rho_{c\,f}}\Big(\vec{v_f} \times \vec{B} \Big) \label{nsgeralmag}$$ where ${\rho_{m\,f}}$ is the quark mass density.
The charge density of the quark flavor $f$ is $\rho_{c\,f}$ [@azam] and the masses are: $m_{u}=2.2 \, MeV$, $m_{d}=4.7 \, MeV$, $m_{s}=96 \, MeV$ and $m_{e}=0.5 \, MeV$ [@pdg]. The continuity equation for the mass density $\rho_{m\,f}$ is[@land]: $${\frac{\partial \rho_{m\,f}}{\partial t}} + \vec{\nabla} \cdot (\rho_{m\,f} \, {\vec{v_f}})=0 \label{conteq}$$ The relationship between the mass density and the baryon density is ${\rho_{m}}_f=3m_{f} \,\, {\rho_{B}}_{f}$ [@we17]. The charge density for each quark is given by ${\rho_{c}}_{u}=2Q_{e}\,{\rho_{B}}_{u}$ , ${\rho_{c}}_{d}=-Q_{e}\,{\rho_{B}}_{d}$ and ${\rho_{c}}_{s}=-Q_{e}\,{\rho_{B}}_{s}$ . In general we write ${\rho_{c}}_{f}=3\,{Q_{f}} \, {\rho_{B}}_{f}$ for each quark $f$. Equation of state ================= In general, the equation of state (EOS) of the quark gluon plasma can be written as a relation between pressure $p$ and energy density $\epsilon$: $p = {c_s}^{2}\epsilon$, where $c_s$ is the speed of sound. As previously studied in [@we17; @we16; @soundes], when the fluid is immersed in an external uniform magnetic field, the pressure splits into a parallel (with respect to the direction of the external field), $p_{\parallel}$, and a perpendicular component, $p_{\perp}$. We have thus a parallel (${c_s}_{\parallel}$) and a perpendicular (${c_s}_{\perp}$) speed of sound, given by [@we17; @we16; @soundes]: $${({c_{s}}_{\parallel})}^{2}={\frac{\partial p_{\parallel}}{\partial \varepsilon}} \hspace{1.0cm} \textrm{and} \hspace{1.0cm} {({c_{s}}_{\perp})}^{2}={\frac{\partial p_{\perp}}{\partial \varepsilon}} \label{soundes}$$ and hence $p_{\parallel} \approx {({c_{s}}_{\parallel})}^{2} \, \varepsilon$ and $p_{\perp} \approx {({c_{s}}_{\perp})}^{2} \, \varepsilon$ . The pressure gradient can then be written
--- abstract: | A filtration over a simplicial complex $K$ is an ordering of the simplices of $K$ such that all prefixes in the ordering are subcomplexes of $K$. Filtrations are at the core of Persistent Homology, a major tool in Topological Data Analysis. In order to represent the filtration of a simplicial complex, the entire filtration can be appended to any data structure that explicitly stores all the simplices of the complex, such as the Hasse diagram or the recently introduced Simplex Tree \[Algorithmica ’14\]. However, with the popularity of various computational methods that need to handle simplicial complexes, and with the rapidly increasing size of the complexes, the task of finding a compact data structure that can still support efficient queries is of great interest.\ This direction has been recently pursued for the case of maintaining simplicial complexes. For instance, Boissonnat et al. \[SoCG ’15\] considered storing the simplices that are maximal for the inclusion and Attali et al. \[IJCGA ’12\] considered storing the simplices that block the expansion of the complex. Nevertheless, so far there has been no data structure that compactly stores the *filtration* of a simplicial complex, while also allowing the efficient implementation of basic operations on the complex.\ In this paper, we propose a new data structure called the Critical Simplex Diagram (CSD), which is a variant of the Simplex Array List (SAL) \[SoCG ’15\]. Our data structure allows us to store in a compact way the filtration of a simplicial complex, and allows for the efficient implementation of a large range of basic operations. Moreover, we prove that our data structure is essentially optimal with respect to the requisite storage space. Next, we show that the CSD representation admits the following construction algorithms. - A new *edge-deletion* algorithm to quickly construct flag complexes, depending only on the number of critical simplices and the number of vertices.
- A new *matrix-parsing* algorithm to quickly construct relaxed Delaunay complexes, depending only on the number of witnesses and the dimension of the complex. author: - Jean-Daniel Boissonnat, `Jean-Daniel.Boissonnat@inria.fr` - Karthik Srikanta, `karthik.srikanta@weizmann.ac.il` --- Persistent homology studies the features of a space across scales [@EH11; @EH10]. More persistent features, detected over a wide range of length scales, are deemed more likely to represent true features of the underlying space, rather than artifacts of sampling, noise, or a particular choice of parameters. To find the persistent homology of a space [@BDM15; @BM14], the space is represented as a sequence of simplicial complexes called a filtration. The most popular filtrations are nested sequences of increasing simplicial complexes, but more advanced types of filtrations have been studied, where consecutive complexes are mapped using more general simplicial maps [@DFW14]. Persistent homology has found applications in many areas ranging from image analysis [@CIDZ08; @PC14] to cancer research [@ABDMPP12], virology [@CCR13], and sensor networks [@DG07]. Thus, a central question in Computational Topology and Topological Data Analysis is to represent simplicial complexes and filtrations efficiently. The most common representation of simplicial complexes uses the Hasse diagram of the complex, which has one node per simplex and an edge between any pair of incident simplices whose dimensions differ by one. A more compact data structure, called the Simplex Tree (ST), was proposed recently by Boissonnat and Maria [@SimplexTree]. The nodes of both the Hasse diagram and the ST are in bijection with the simplices (of all dimensions) of the simplicial complex.
In this way, they explicitly store all the simplices of the complex and it is easy to attach information to each simplex (such as a filtration value). In particular, they allow one to store in a natural way the filtration of complexes, which is at the core of Persistent Homology and Topological Data Analysis. However, such data structures are redundant and typically very big, and they are not sensitive to the underlying structure of the complexes. This motivated the design of more compact data structures that represent only a sufficient subset of the simplices. A first idea is to store the 1-skeleton of the complex together with a set of blockers that prevent the expansion of the complex [@DataStructure3]. Another idea is to store only the simplices that are maximal for the inclusion. Following this last idea, Boissonnat et al. [@BKT15] introduced a new data structure, called the Simplex Array List, which was the first data structure whose size and query time are sensitive to the geometry of the simplicial complex. SAL was shown to outperform ST for a large class of simplicial complexes. Although very efficient, SAL, as well as other data structures that do not explicitly store all the simplices of a complex, makes the representation of filtrations problematic, and in the case of SAL, impossible. In this paper, we propose the Critical Simplex Diagram (CSD), a variant of SAL. CSD only stores the critical simplices, i.e., those simplices all of whose cofaces have a higher filtration value, and we overcome the problems arising from the implicit representation of simplicial complexes by showing that the basic operations on simplicial complexes can be performed efficiently using CSD. In short, CSD compromises on the membership query (which is slightly worse than that for ST) in order to save storage and to perform insertions and removals efficiently.
Our Contribution ---------------- At a high level, our main contribution in this paper is a new perspective on the design of data structures representing simplicial complexes associated with a filtration. Previous data structures such as the Hasse diagram and the Simplex Tree interpreted a simplicial complex as a set of strings defined over the label set of its vertices, and the filtration values as keys associated with each string. When a simplicial complex is perceived this way, a trie is indeed a natural data structure to represent the complex. However, this way of representing simplicial complexes does not make use of the fact that simplicial complexes are not arbitrary sets of strings but are constrained by a lot of combinatorial structure. In particular, simplicial complexes are closed under subsets, and (standard) filtrations are monotone functions. We exploit these constraints by viewing a filtered simplicial complex with filtration range of size $t$ as a monotone function from $\{0,1\}^{|V|}$ to $\{0,1,\ldots,t\}$, where $V$ is the vertex set. We note that if a simplex is mapped to $t$ then the simplex is understood to be not in the complex; otherwise, the mapping is taken to correspond to the filtration value of the simplex. In light of this viewpoint, we propose a data structure (CSD) which stores only the critical elements in the domain, i.e., those elements all of whose supersets (cofaces in the complex) are mapped to a strictly larger value. As a result, we are not only able to store a simplicial complex more efficiently, but we also explicitly utilize geometric regularity in the complex which would otherwise remain obscured. Our main result is the following. \[main\] Let $K$ be a $d$-dimensional simplicial complex associated with a filtration. Let $\kappa$ be the number of critical simplices in the complex. The data structure CSD representing $K$ admits the following properties: - The size of CSD is at most $\kappa d$.
- The cost of basic operations (such as membership, insertion, removal, elementary collapse, etc.) through the CSD representation is $\tilde{\mathcal{O}}((\kappa \cdot d)^2)$. The proof of the above two items follows from the discussions in Section \[sizeofCSD\] and Section \[sec:operations\], respectively. We would like to point out that while the cost of static operations such as membership is only $\tilde{\mathcal{O}}(d)$ for the Simplex Tree, to perform any dynamic operation such as insertion or removal, the Simplex Tree requires $\text{exp}(d)$ time. Moreover, as shown in Section \[sec:operations\], the cost of *most* basic operations using CSD is linear in $\kappa$. As a direct consequence of representing a simplicial complex only through its critical simplices, the construction of any simplicial complex with filtration will be very efficient through CSD, simply because we have to build a smaller data structure compared to existing data structures. More specifically, we propose a new *edge-deletion* algorithm for the construction of flag complexes on $n$ vertices with $\kappa$ critical simplices in time $\mathcal{O}\left(\kappa n^{2.38}\right)$. Additionally, we provide a *matrix-parsing* algorithm for building $d$-dimensional relaxed Delaunay complexes over the witness set $W$ in $\mathcal{O}(|W| d^2 \log |W|)$ time. In each of these cases, we show that the construction is more efficient when using CSD rather than ST, primarily because C
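To make the critical-simplex idea concrete, here is a toy store that keeps only the critical simplices and recovers any simplex's filtration value as the minimum over the critical cofaces containing it. This is only an illustration of the viewpoint, not the paper's actual CSD (which builds on the SAL machinery); the class and method names are ours. Each query scans the $\kappa$ critical simplices, so its cost is $\mathcal{O}(\kappa \cdot d)$, in line with the bounds above:

```python
class CriticalStore:
    """Toy sketch: store only critical simplices with their filtration values.
    Any simplex takes the minimum value over critical cofaces containing it."""
    def __init__(self, critical):
        # critical: iterable of (vertex collection, filtration value) pairs
        self.critical = {frozenset(s): v for s, v in critical}

    def filtration_value(self, simplex):
        s = frozenset(simplex)
        # scan all critical simplices; monotonicity makes the minimum correct
        vals = [v for c, v in self.critical.items() if s <= c]
        return min(vals) if vals else None   # None: simplex not in the complex

    def __contains__(self, simplex):
        return self.filtration_value(simplex) is not None
```

For example, with the triangle `{a, b, c}` critical at value 2 and the edge `{a, d}` critical at value 1, the edge `{a, b}` gets value 2 while the vertex `{a}` gets value 1, since `{a}` is also a face of the earlier critical edge.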
--- abstract: 'We study the Dirichlet series $F_b(s)=\sum_{n=1}^\infty d_b(n)n^{-s}$, where $d_b(n)$ is the sum of the base-$b$ digits of the integer $n$, and $G_b(s)=\sum_{n=1}^\infty S_b(n)n^{-s}$, where $S_b(n)=\sum_{m=1}^{n-1}d_b(m)$ is the summatory function of $d_b(n)$. We show that $F_b(s)$ and $G_b(s)$ have continuations to the plane ${\mathbb{C}}$ as meromorphic functions of order at least 2, determine the locations of all poles, and give explicit formulas for the residues at the poles. We give a continuous interpolation of the sum-of-digits functions $d_b$ and $S_b$ to non-integer bases using a formula of Delange, and show that the associated Dirichlet series have a meromorphic continuation at least one unit left of their abscissa of absolute convergence.' --- For each integer base $b\geq 2$, every positive integer $n$ has a unique base-$b$ expansion $$n = \sum_{i \geq 0} \delta_{b,i}(n)b^i$$ with digits $\delta_{b,i}(n)\in\{0,1,\dotsc,b-1\}$ given by $$\delta_{b,i}(n) = \Bigl\lfloor \frac{n}{b^i} \Bigr\rfloor - b \Bigl\lfloor \frac{n}{b^{i+1}} \Bigr\rfloor.$$ This paper considers two summatory functions of the base-$b$ digits of $n$: 1. The *base-$b$ sum-of-digits function* $d_b(n)$ is $$d_b(n) = \sum_{i\geq 0} \delta_{b,i}(n).$$ 2. The *(base $b$) cumulative sum-of-digits function* $S_b(n)$ is $$S_b(n) = \sum_{m=1}^{n-1} d_b(m).$$ We follow here the convention of previous authors (including [@delange-75] and [@flajolet-94]), with the sum defining $S_b(n)$ running to $n-1$ instead of $n$. We associate to the functions $d_b(n)$ and $S_b(n)$ the Dirichlet series generating functions $$F_b(s) = \sum_{n=1}^\infty \frac{d_b(n)}{n^s}$$ and $$G_b(s) = \sum_{n=1}^\infty \frac{S_b(n)}{n^s}.$$ These Dirichlet series have abscissa of convergence $\operatorname{Re}(s)=1$ and $\operatorname{Re}(s)=2$, respectively.
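Both digit-sum functions are straightforward to compute for integer bases, which is handy for numerically checking the formulas that follow. A minimal sketch, with our own function names:

```python
def d_b(n, b):
    """Base-b sum of digits of n, via repeated division."""
    s = 0
    while n:
        s += n % b
        n //= b
    return s

def S_b(n, b):
    """Cumulative sum-of-digits S_b(n) = sum of d_b(m) for m = 1 .. n-1."""
    return sum(d_b(m, b) for m in range(1, n))
```

For instance, $d_2(255) = 8$ since $255 = 11111111_2$, and $S_{10}(10) = 1 + 2 + \dotsb + 9 = 45$.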
This paper studies the problem of the meromorphic continuation to ${\mathbb{C}}$ of the Dirichlet series associated to the base-$b$ digit sums $d_b(n)$ and $S_b(n)$. Here we obtain the meromorphic continuation and determine its exact pole and residue structure. The pole structure contains half of a two-dimensional lattice, and the residues involve Bernoulli numbers and values of the Riemann zeta function on the line $\operatorname{Re}(s)=0$. The asymptotics of $S_b(n)$ have been extensively studied, see Section \[sec-previous-work\]. We mention particularly the work of Delange [@delange-75], given below as Theorem \[thm-delange\], which gives an exact formula for $S_b(n)$ in terms of a continuous nondifferentiable function with Fourier coefficients involving values of the Riemann zeta function on the imaginary axis. Using an interpolation of Delange’s formula we formulate a continuous interpolation of $S_b(n)$ in the base parameter $b$, permitting definitions of $d_\beta(n)$ and $S_\beta(n)$ for a real parameter $\beta>1$. We obtain a meromorphic continuation of the associated Dirichlet series $F_{\beta}(s)$ and $G_{\beta}(s)$ to the half-planes $\operatorname{Re}(s)>0$ and $\operatorname{Re}(s)>1$, respectively. We note apparent fractal properties of $d_{\beta}(n)$ as $\beta$ is varied. Results ------- Our first results concern the meromorphic continuation of the functions $F_b(s)$ and $G_b(s)$ to the entire complex plane ${\mathbb{C}}$. \[thm-db\] For each integer $b\geq 2$, the function $F_b(s)=\sum_{n=1}^\infty d_b(n)n^{-s}$ has a meromorphic continuation to ${\mathbb{C}}$.
The poles of $F_b(s)$ consist of a double pole at $s=1$ with Laurent expansion beginning $$F_b(s)=\frac{b-1}{2\log b}(s-1)^{-2} + \biggl( \frac{b-1}{2\log b} \log(2\pi) - \frac{b+1}{4}\biggr)(s-1)^{-1} + O(1),$$ simple poles at each other point $s=1+2\pi i m / \log b$ with $m\in {\mathbb{Z}}$ ($m\neq 0$) with residue $$\operatorname*{Res}\biggl( F_b(s) , s=1+\frac{2\pi i m}{\log b} \biggr) = - \frac{b-1}{2\pi i m} \zeta\biggl(\frac{2\pi i m}{\log b}\biggr),$$ and simple poles at each point $s=1-k+2\pi i m /\log b$ with $k=1$ or $k\geq 2$ an even integer and with $m\in{\mathbb{Z}}$, with residue $$\operatorname*{Res}\biggl( F_b(s) , s=1-k+\frac{2\pi i m}{\log b} \biggr) = (-1)^{k+1}\frac{b-1}{\log b}\zeta\biggl(\frac{2\pi i m}{\log b}\biggr)\frac{B_k}{k!} \prod_{j=1}^{k-1} \biggl(\frac{2\pi i m}{\log b} - j\biggr)$$ where $B_k$ is the $k$th Bernoulli number. Theorem \[thm-db\] is proved by first considering the Dirichlet series $\sum \bigl(d_b(n)-d_b(n-1)\bigr) n^{-s}$ and then exploiting a relation between power series and Dirichlet series to recover $F_b(s)$. This relation is described in Section \[sec-mero-cont\]. The meromorphic continuation of Dirichlet series attached to $b$-regular sequences, of which our Dirichlet series $F_b(s)$ is a particular example, was studied by Dumas in his 1993 thesis [@dumas-thesis]; this work also showed that the poles of $F_b(s)$ must be contained in a certain half-lattice, strictly larger than the half-lattice here. A similar method allows us to meromorphically continue the series $G_b(s)$ to the complex plane. \[thm-sb\] For each integer $b\geq 2$, the function $G_b(s)=\sum_{n=1}^\infty S_b(n)n^{-s}$ has a meromorphic continuation to ${\mathbb{C}}$.
The poles of $G_b(s)$ consist of a double pole at $s=2$ with Laurent expansion $$G_b(s) = \frac{b-1}{2\log b}(s-2)^{-2} + \biggl(\frac{b-1}{2\log b}\bigl(\log(2\pi)-1\bigr)-\frac{b+1}{4}\biggr)(s-2)^{-1} + O(1),$$ a simple pole at $s=1$ with residue $$\operatorname*{Res}(G_b(s),s=1) = \frac{b+1}{12},$$ simple poles at $s=2 + 2\pi i m / \log b$ with $m\in{\mathbb{Z}}$ ($m\neq 0$) with residue $$\operatorname*{Res}\biggl( G_b(s) , s= 2 + \frac{2\pi i m}{\log b} \biggr) = - \frac{b-1}{2\pi i m}\biggl(1+\frac{2\pi i
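As a quick empirical companion to the theorems above (an illustration only, not part of the proofs), the digit sums are easy to compute directly, and the leading coefficient $\frac{b-1}{2\log b}$ of the double pole of $F_b(s)$ at $s=1$ matches the growth of the partial sums $\sum_{n\le x} d_b(n)\sim\frac{b-1}{2\log b}\,x\log x$. A Python sketch, assuming the convention $S_b(n)=\sum_{m<n}d_b(m)$:

```python
import math

def d(n, b):
    """d_b(n): sum of the base-b digits of n."""
    s = 0
    while n:
        s += n % b
        n //= b
    return s

def S(n, b):
    """S_b(n) = sum_{m < n} d_b(m) (assumed summatory convention)."""
    return sum(d(m, b) for m in range(n))

assert d(1999, 10) == 28 and S(100, 10) == 900

# The double pole of F_b(s) at s=1 with leading coefficient (b-1)/(2 log b)
# corresponds to the asymptotic sum_{n<=x} d_b(n) ~ (b-1)/(2 log b) * x log x.
b, x = 10, 10**6
partial = sum(d(n, b) for n in range(1, x + 1))
main_term = (b - 1) / (2 * math.log(b)) * x * math.log(x)
print(partial, round(main_term))  # 27000001 27000000
```

For $x=10^k$ the main term simplifies to $\frac{(b-1)k}{2}x$, so the agreement at powers of ten is exact up to the bounded oscillating term in Delange's formula.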
--- author: - | Wei-Gang Yuan, Xiao-Dong Zhang$^{\dagger}$\ [Department of Mathematics, and MOE-LSC]{}\ [Shanghai Jiao Tong University]{}\ [800 Dongchuan Road, Shanghai, 200240, P.R. China]{}\ title: 'The Second Zagreb Indices of Graphs with Given Degree Sequences [^1]' --- = -0.2 in = 0.25 in \[section\] \[theorem\][Corollary]{} \[theorem\][Definition]{} \[theorem\][Conjecture]{} \[theorem\][Question]{} \[theorem\][Lemma]{} \[theorem\][Proposition]{} \[theorem\][Example]{} \[theorem\][Problem]{} \[theorem\][Remark]{} Abstract The second Zagreb index of a graph $G$ is defined by $M_2(G)=\sum_{uv\in E(G)}d(u)d(v)$. In this paper, we investigate properties of the extremal graphs with the maximum second Zagreb indices with given graphic sequences, in particular graphic bicyclic sequences. Let $G=(V,E)$ be a simple connected graph with vertex set $V$ and edge set $E$. The distance between two vertices $u$ and $v$, which is denoted by $d(u,v)$, is the length of the shortest path that connects $u$ and $v$. For a vertex $v\in V$, $N(v)$ denotes the neighbor set of $v$ and $d(v)=|N(v)|$ denotes the degree of $v$. A vertex whose degree is one is called a [*leaf*]{}. Moreover, $(d(v_1), \cdots, d(v_n))$ is called the [*degree sequence*]{} of $G$. A nonnegative non-increasing integer sequence $\pi=(d_1, d_2, \ldots , d_n)$ is called a [*graphic sequence*]{} if there exists a simple graph $G$ such that its degree sequence is exactly $\pi$. For convenience, we use $d^{(k)}$ to denote $k$ equal degrees $d$ in $\pi$. For example, $\pi=(4,4,2,2,1,1)$ is denoted by $(4^{(2)},2^{(2)},1^{(2)})$. Let $\pi$ be a given graphic sequence. Let $$\Gamma(\pi)=\{G |\ G{\mbox{ is a connected graph with degree sequence}~\pi}\}.$$ Without loss of generality, assume $d(v_i)=d_i$, for $1\le i\le n$, $v_i\in G\in \Gamma(\pi)$.
The [*second Zagreb index*]{} [@Balaban1983] of a graph $G$ is defined by: $$M_2(G)=\sum_{uv\in E}d(u)d(v).$$ For a given graphic sequence $\pi$, let $$M_2(\pi)=\max\{M_2(G): G\in \Gamma(\pi)\}.$$ A simple connected graph $G$ is called an [*optimal graph*]{} in $\Gamma(\pi)$ if $G\in\Gamma(\pi)$ and $M_2(G)=M_2(\pi)$. The second Zagreb index, whose origin may be dated back to [@Gutman2004] and [@Nikolic2003], plays an important role in the study of total $\pi$-electron energy on molecular structure in chemical graph theory. There are two excellent surveys ([@Gutman2004], [@Nikolic2003]) on the Zagreb index, which summarize the main properties and characterizations of this topological index. Das et al. [@das2014] investigated the connections between the Zagreb index and the Wiener index. Estes and Wei [@estes2014] presented sharp upper and lower bounds for the Zagreb indices of $k$-trees. For more information, the readers are referred to [@Balaban1983], [@Gutman2004], [@Gutman1975], [@Kier1976], [@Kier1986], [@Nikolic2003], [@Todeschini2000] and references therein. Recently, Liu and Liu [@Liu2012] characterized all the optimal trees in the set of trees with a given tree degree sequence. Further, they [@Liu2014] investigated some optimal unicyclic graphs in the set of unicyclic graphs with a given unicyclic graphic sequence. In this paper, we study properties of the optimal graphs in the set of all connected graphs with a given graphic sequence $\pi$ that satisfies some conditions, which generalizes the main results in [@Liu2012] and [@Liu2014]. In addition, we present some optimal bicyclic graphs in the set of all bicyclic graphs with a given bicyclic graphic sequence and some relations among the maximum values of the second Zagreb indices with different bicyclic graphic sequences. The work is presented as follows. In Section 2, some notations and the main results of this paper are presented. In Sections 3, 4 and 5, the proofs of the main results are presented, respectively.
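As an illustration (not taken from the paper), the second Zagreb index is straightforward to compute from an edge list; a minimal Python sketch:

```python
from collections import Counter

def second_zagreb(edges):
    """M_2(G) = sum over edges uv of d(u)*d(v), for a simple graph
    given as a list of edges (u, v)."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    return sum(deg[u] * deg[v] for u, v in edges)

# Path P_4: degrees 1,2,2,1 -> M_2 = 1*2 + 2*2 + 2*1 = 8
print(second_zagreb([(1, 2), (2, 3), (3, 4)]))  # 8
```

The star $K_{1,3}$ gives $M_2=3\cdot 1\cdot 3=9$ and the triangle gives $3\cdot 2\cdot 2=12$, consistent with the definition above.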
Preliminary and Main Results ============================ In order to present the main results of this paper, we introduce some more notations. Assume $G$ is a rooted graph with root $v_1$. Let $h(v)$ be the distance between $v$ and $v_1$ and $H_i(G)$ be the set of vertices with distance $i$ from vertex $v_1$. Following [@zhang2009], an ordering $v_1\prec v_2\prec\cdots\prec v_n$ of the vertices with $d(v_1)=d_1$ is called a [*BFS-ordering*]{} if it arises from a breadth-first search starting at the root $v_1$ with the degrees non-increasing along the ordering; a graph admitting such an ordering is called a BFS-ordering graph. For a graphic sequence $\pi=(d_1,d_2,\ldots ,d_n)$ with $\sum_{i=1}^n d_i=2(n+c)$ and $d_1\geq d_2\geq c+2$, where $c$ is an integer with $c\geq -1$, we may construct a graph $G_M^*(\pi)$ by the following steps. Select $v_1$ as the root vertex and begin with $v_1$ of the zeroth layer. Select the vertices $v_2,v_3,v_4,\ldots,v_{d_1+1}$ as the first layer such that $N(v_1)=\{v_2,v_3,v_4,\ldots,v_{d_1+1}\}$; then, append $d_2-1$ vertices to $v_2$, $d_3-2$ vertices to $v_3$, $\cdots$, $d_{c+3}-2$ vertices to $v_{c+3}$ such that $N(v_2)=\{v_1,v_3,\ldots,v_{c+3},v_{d_1+2},v_{d_1+3},\ldots,v_{d_1+d_2-c-1}\}$, $N(v_3)=\{v_1,v_2,v_{d_1+d_2-c},\ldots,v_{d_1+d_2+d_3-c-3}\}$, $\cdots$, $N(v_{c+3})=\{v_1,v_2,v_{(\sum_{i=1}^{c+2}d_i)-3c},\ldots,\\v_{(\sum_{i=1}^{c+3}d_i)-3c-3}\}$. After that, append $d_{c+4}-1$ vertices to $v_{c+4}$ such that $N(v_{c+4})=\{v_1, v_{(\sum_{i=1}^{c+3}d_i)-3c-2},\ldots,v_{(\sum_{i=1}^{c+4}d_i)-3c-4}\}$; $\cdots$ . Note that $v_1v_2v_3$, $\ldots$, $v_1v_2v_{c+3}$ form $c+1$ triangles in $G_M^*(\pi)$. Obviously, $G_M^*({\pi})$ is a BFS-ordering graph. In particular, if $c=1,$ the graph $G_M^*({\pi})$ is denoted by $B_M^*(\pi)$. The first main result in this paper can be stated as follows. \[general\] Let $\pi=(d_1,d_2,\ldots ,d_n)$ be a graphic sequence. If it satisfies the following condition: $(i)$$\sum_{
--- abstract: 'In the standard scenario of isolated low-mass star formation, strongly magnetized molecular clouds are envisioned to condense gradually into cores, driven by ambipolar diffusion. Once the cores become magnetically supercritical, they collapse to form stars. Most previous studies based on this scenario are limited to axisymmetric calculations leading to single supercritical core formation. The assumption of axisymmetry has precluded a detailed investigation of cloud fragmentation, generally thought to be a necessary step in the formation of binary and multiple stars. In this contribution, we describe the non-axisymmetric evolution of initially magnetically subcritical clouds using a newly-developed MHD code. It is shown that non-axisymmetric perturbations of modest fractional amplitude ($\sim 5\%$) can grow nonlinearly in such clouds during the supercritical phase of cloud evolution, leading to the production of either a highly elongated bar or a set of multiple dense cores.' author: George R. Wright (June 1987). In this by now “standard” picture, a molecular cloud, which is initially supported by a strong magnetic field against its self-gravity, gradually contracts as the magnetic support weakens by ambipolar diffusion. Magnetically supercritical cores are formed, which collapse to produce stars. Quantitative studies based on this scenario have been carried out by many authors. In most such studies, axisymmetry has been adopted. However, observations have shown that binary and multiple stars are common products of star formation. We need to understand how such (non-axisymmetric) stellar systems are formed in magnetically supported clouds. To elucidate the formation mechanism of binary stars and stellar groups, we have begun a systematic numerical study of the non-axisymmetric evolution of initially magnetically subcritical clouds, by removing the restriction of axisymmetry.
In this contribution, we present some of our recent results on this investigation. Model and Numerical Method ========================== As a first step, we adopted the thin-disk approximation often used in axisymmetric calculations (e.g., Basu & Mouschovias 1994; Li 2001). The disk is assumed to be in hydrostatic equilibrium in the vertical direction. The vertically-integrated MHD equations are solved numerically for the cloud evolution in the disk plane, with a 2D MHD code (see Li & Nakamura 2002 for a code description). The magnetic structure is solved in 3D space. The initial conditions for star formation are not well determined either observationally or theoretically. Following Basu & Mouschovias (1994), we prescribe an axisymmetric reference state. See Nakamura & Li (2002) and Li & Nakamura (2002) for the details of the reference cloud model. The reference cloud is allowed to evolve into an equilibrium configuration, with the magnetic field frozen-in. Once the equilibrium state is obtained, we reset the time to $t=0$ and add a non-axisymmetric perturbation to the surface density distribution. Then, the cloud evolution is followed with the ambipolar diffusion turned on. Numerical Results ================= From axisymmetric calculations, Li (2001) classified the evolution of magnetically subcritical clouds into two cases, depending mainly on the initial cloud mass and the initial density distribution. When the initial cloud is not so massive and/or has a centrally-condensed density distribution, it collapses to form a single supercritical core ([core-forming cloud]{}). On the other hand, when the initial cloud has many thermal Jeans masses and/or a relatively flat density distribution near the center, it collapses to form a ring after the central region becomes magnetically supercritical ([ring-forming cloud]{}).
In the following, we show that the core-forming cloud does not fragment during the dynamic collapse phase, but becomes unstable to the bar mode ([*bar growth*]{}), whereas the ring-forming cloud can break up into several blobs ([*multiple fragmentation*]{}). Bar growth: Implication for Binary Formation -------------------------------------------- In Fig. 1 we show an example of the bar growth models. In this model, we adopted the reference density distribution of Basu & Mouschovias (1994), which is more centrally-condensed than the model to be shown in the next subsection, and the rotation profile of Nakamura & Hanawa (1997). It has a characteristic radius of $r_0=7.5\pi c_s^2/(2\pi G\Sigma_{0,\rm ref})$ (where $c_s$ is the effective isothermal sound speed and $\Sigma_{0, \rm ref}$ the central cloud surface density in the reference state), an initial flux-to-mass ratio of $\Gamma _0 = 1.5 B_{\infty}/(2\pi G^{1/2}\Sigma_{0,\rm ref})$ (where $B_\infty$ is the strength of the initially uniform background field), and a dimensionless rotation rate of $\omega=0.1$. We added to the equilibrium state an $m=2$ perturbation of the surface density, with a fractional amplitude of merely 5%. During the initial quasi-static contraction phase, a central core condenses gradually out of the magnetically subcritical cloud, with no apparent tendency for the mode to grow. Rather, the iso-density contours appear to oscillate, changing the direction of elongation in the disk plane from the $x$-axis to the $y$-axis. After a supercritical core develops, the contraction becomes dynamic and the bar mode grows significantly. During the intermediate stages \[panels (c) and (d)\], the aspect ratio of the bar remains more or less frozen at $R\sim 2$. As the collapse continues, the growth rate of the bar increases dramatically by the very end of the starless collapse.
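The criticality criterion behind $\Gamma_0$ above can be checked in a few lines: a thin disk is magnetically subcritical when its dimensionless flux-to-mass ratio $\Gamma = B/(2\pi G^{1/2}\Sigma)$ exceeds unity. An illustrative Python sketch with assumed cgs values (the field strength and surface density below are examples, not numbers from the text):

```python
import math

G = 6.674e-8  # gravitational constant, cm^3 g^-1 s^-2

def flux_to_mass_ratio(B, Sigma):
    """Dimensionless flux-to-mass ratio Gamma = B / (2 pi G^(1/2) Sigma).
    Gamma > 1: magnetically subcritical (supported); Gamma < 1: supercritical."""
    return B / (2 * math.pi * math.sqrt(G) * Sigma)

# Assumed illustrative numbers: B = 30 microgauss, Sigma = 0.01 g cm^-2
gamma = flux_to_mass_ratio(30e-6, 0.01)
print(round(gamma, 2), "subcritical" if gamma > 1 else "supercritical")
```

As ambipolar diffusion redistributes the field relative to the mass, $\Gamma$ in the central region drifts below unity and the dynamic collapse described above begins.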
The density distribution along the minor axis of the bar is well reproduced by a power-law profile of $r^{-2}$, which is different from that of an isothermal equilibrium filament ($\propto r^{-4}$). When the volume density exceeds a critical value of $10^{12}$ cm$^{-3}$, we changed the equation of state from isothermal to adiabatic, to mimic the transition to the optically thick regime. The bar is surrounded by an accretion shock, which is analogous to the first core of spherical calculations \[panel (f)\]. The aspect ratio of this “first” bar continues to increase during the early optically thick regime. The highly elongated first bar is expected to break up into two or more pieces; we have not followed this breakup in the present calculation. We have also followed the evolution of this model cloud perturbed by other (higher) $m$ modes ($m\ge3$), and found no significant mode growth. The reason why the cloud is unstable only to the bar mode appears to be the following. In the absence of nonaxisymmetric perturbations, the supercritical collapse approaches a self-similar solution derived approximately by Nakamura & Hanawa (1997). In the self-similar solution, the effective radius of the central plateau is at most 3-4 times the effective Jeans length, making the cloud unstable to dynamic contraction but not to multiple fragmentation. Indeed, Nakamura & Hanawa (1997) showed that the self-similar solution is unstable only to the $m=2$ mode, consistent with our result. The tendency for the supercritical collapse to approach the self-similar solution is responsible for the bar formation during the dynamic collapse. Detailed numerical results on bar formation will appear elsewhere (Nakamura & Li 2002, in preparation). Multiple Fragmentation and Formation of Small Stellar Groups ------------------------------------------------------------ In Fig. 2 we show an example of the multiple fragmentation models.
In this model, we adopted the reference density profile of Li (2001) with $n=8$, which is less centrally-condensed than the model shown in the previous subsection. The model has a characteristic radius of $r_0=7\pi c_s^2/(2\pi G\Sigma_{0,\rm ref})$, initial flux-to-mass ratio of $\Gamma _0 = 1.5B_{\infty}/(2\pi G^{1/2}\Sigma_{0,\rm ref})$, and rotation rate of $\omega=0.1$. Random density perturbations are added to the axisymmetric equilibrium state. The maximum fractional amplitude of the perturbations is set to 10%. During the quasi-static contraction phase, the infall motions are subsonic, and there is no sign of fragmentation. Once the flux-to-mass ratio in the central high-density region drops below the critical value, the contraction is accelerated near the center. As the collapse continues, the central supercritical region begins to fragment into five blobs. By the time shown in panel (f), the blobs are well separated from the background material and are significantly elongated. Subsequent dynamic collapse of each blob is similar to that of the bar growth case. Individually, we expect each core to produce a highly elongated bar, which could further break up into pieces, producing perhaps binary or multiple stars. Together, the formation of a small stellar group is the most likely outcome. Detailed numerical results on multiple fragmentation are given in Li & Nakamura (2002). Summary ======= Our main conclusion is that despite (indeed because of) the presence of the strong magnetic field, the initially magnetically subcritical clouds are unstable to non-axisymmetric perturbations during the supercritical phase of cloud evolution. The cloud evolution is classified into two cases
--- abstract: 'We examine the reliability of the merger trees generated for the Monte-Carlo modeling of galaxy formation. In particular we focus on the cold gas fraction predicted from merger trees with different assumptions on the progenitor distribution function, the timestep, and the mass resolution. We show that the cold gas fraction is sensitive to the accuracy of the merger trees at small-mass scales of progenitors at high redshifts. One can reproduce the Press–Schechter prediction to a reasonable degree by adopting a fairly large number of redshift bins, $N_{\rm step}\sim 1000$, in generating merger trees, which is a factor of ten larger than the canonical value used in previous literature.' author: Y.D. Miller. Recent systematic studies of high-redshift objects, such as quasars and Lyman-break galaxies, should provide important clues to the early universe, although their proper interpretation is often not so straightforward, mainly because those objects certainly do evolve in time. Conventional studies of galaxy evolution (e.g., [@KA97]) are based on a so-called ‘one-zone’ model which assumes that a galaxy does not interact with other galaxies. It is now fairly established, however, that structures in the universe have built up hierarchically from small to large scales as in a cold dark matter (CDM) model. This means that a galaxy interacts and sometimes merges with other galaxies even if it was an isolated system at birth. Galaxy formation should therefore be described within this hierarchical picture. White and Frenk (1991) developed a detailed analytic formalism to describe the formation and evolution of galaxies while taking account of the hierarchical merging of dark-matter halos, gas cooling, star formation, and supernova feedback.
Subsequent numerical approaches in modeling hierarchical merging of dark halos employ two somewhat different algorithms; one is called the ‘block model’ in which a random-Gaussian density fluctuation field is generated by dividing a hypothetical rectangular box recursively ([@CK88]; [@Cole91]; [@Cole94]). While this algorithm is simple and straightforward, the resulting halo masses are necessarily binned in discrete steps of a factor of two. The other generates a realization of halo merger trees according to a probability distribution function predicted by the extended Press–Schechter theory ([@Bower91]; [@Bond91]; [@KW93]; [@SK99]; [@SL99]). The latter is widely used in studying the cosmological evolution of galaxies in a hierarchical universe ([@KWG93]; [@baugh98]; [@SP99]; [@Cole00]; [@nagashima]). Throughout the present paper, we call the latter method the Monte-Carlo modeling of merger histories (simply, the Monte-Carlo modeling), while it is usually referred to as a semi-analytic model of galaxy formation (SAM). The most important ingredient in Monte-Carlo modeling is the conditional joint-probability distribution function of a set of *progenitor* halos of mass $M_2^{j}$ at a redshift of $z_2$, which is a part of a *parent* halo of mass $M_1$ at $z_1$, conceptually written as $$\begin{aligned} \label{eq:jointprob} {\rm Prob}(M_2^1, M_2^2, \cdots, M_2^N, z_2 | M_1, z_1) dM_2^1 dM_2^2 \cdots dM_2^N \cr \qquad (N=1, \cdots , \infty). \end{aligned}$$ Unfortunately only an analytical expression for the conditional one-point probability distribution function, Prob($M_2^i$, $z_2 |M_1$, $z_1$), is known based on the extended Press–Schechter theory (for the special case of the Poisson initial power spectra, see a different approach by [@SL99]); one thus needs to employ an additional *assumption* in generating realizations of merger trees of halos in general (e.g., [@KW93]; [@SK99]). 
Furthermore, any numerical procedure to generate them necessarily involves several *ad hoc* parameters due to the limitation of the available computation resources, including the finite timestep of computation, the minimum mass of halos to be included in merger trees, and the maximum number of progenitors for each halo at each step. The purpose of this paper is to perform a systematic investigation of possible artificial effects of the above-mentioned problems on merger tree realizations, and to re-examine the validity of the Monte-Carlo modeling. In particular, we focus on the extent to which the resulting merger trees reproduce the conditional one-point probability distribution function predicted by the extended Press–Schechter theory, which directly affects the predicted fraction of cold gas. To isolate these effects, we adopt a conventional $\Lambda$CDM model with the cosmological parameters $\Omega_{0}=0.3$, $\lambda_{0}=0.7$, $h=0.7$, $\sigma_{8}=1.0$, and $\Omega_{\mathrm{B}}=0.015h^{-2}$ (e.g., Kitayama, Suto 1997; Kitayama et al. 1998), and neglect star formation and feedback effects for definiteness. Merger Trees of Dark Matter Halos ================================= Constructing Merger Trees of Dark-Matter Halos \[subsec:construction\] ---------------------------------------------------------------------- Our model of merging histories of dark-matter halos is mainly based on that of Somerville and Kolatt (1999), which we adopt as our fiducial choice, slightly modifying their original scheme as follows. We begin with a halo of mass $M_1= M_{\mathrm{root}}$ at a redshift of $z_1=z_{\mathrm{min}}$, and consider its progenitors at a slightly earlier redshift of $z_2=z_1+\Delta z(z_1)$.
Since the joint conditional probability for the progenitors \[equation (\[eq:jointprob\])\] is not known, we choose the $i$-th progenitor halo of mass $M_2^i$ according to the *one-point* conditional probability, Prob($M_2^i$, $z_2 |M_1$, $z_1$), as long as $M_2^i > M_{\rm res}$ and the total mass satisfies $$\begin{aligned} \label{eq:massconserve} \sum_{i=1}^N M_2^i < M_1 - \Delta M_{\rm acc}(<M_{\rm res}) ,\end{aligned}$$ where $$\begin{aligned} \Delta M_{\rm acc}(<M_{\rm res}) = \int_{0}^{M_{\rm res}}\!\!\!\!dM_2 M_2 \frac{dN}{dM_2}(M_2,z_2|M_1,z_1)\end{aligned}$$ is the expectation value of the total mass of halos smaller than the resolution mass ($M_{\rm res}$) with $dN/dM_2(M_2,z_2|M_1,z_1)$ being the appropriate conditional mass function \[equation (\[eq:eps-num\]) below\]. In other words, we distinguish the discrete merging and the continual accretion at mass $M_{\rm res}$, and do not resolve the halos below $M_{\rm res}$ in our merger trees. Once all relevant progenitor halos are selected, we repeat the above procedure recursively for each progenitor until the maximum redshift ($z_{\rm max}$). Unless otherwise stated, we set $z_{\mathrm{min}}=0$ and $z_{\mathrm{max}}=15$ in the present paper. For convenience, we list in table \[tab:parameters\] variables which are extensively discussed in the present paper. In the original method by Somerville and Kolatt (1999), one stops selecting progenitors when $M_1 - \sum_{i=1}^N M_2^i$ becomes less than $M_{\rm res}$, but without imposing the condition $M_2^i>M_{\rm res}$. They carefully tuned the timesteps depending on $M_{1}$ so that the resulting progenitor mass function becomes close to equation (\[eq:eps-num\]) below. Rather, we stop choosing the progenitor when $M_1 - \Delta M_{\rm acc}(<M_{\rm res}) - \sum_{i=1}^N M_2^i$ becomes negative, and the last selected progenitor $M_2^N$ is not included in the tree. In this case, the remaining mass $M_1 - \Delta M_{\rm acc}(<M_{\rm res}) - \sum_{i=1}^{N-1} M
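The modified stopping rule described above can be sketched as a simple loop; an illustrative Python skeleton (the progenitor sampler is a placeholder, not the extended Press–Schechter conditional mass function actually used):

```python
import random

def draw_progenitors(M1, M_res, dM_acc, sample_progenitor):
    """Select progenitor masses for a parent halo of mass M1, following the
    modified Somerville-Kolatt scheme: stop when the remaining mass
    M1 - dM_acc - sum(progenitors) would become negative, discarding the
    last-drawn progenitor. sample_progenitor() must draw M2 > M_res from
    the one-point conditional mass function (placeholder here)."""
    progenitors = []
    budget = M1 - dM_acc            # mass available for resolved progenitors
    while True:
        M2 = sample_progenitor()
        if sum(progenitors) + M2 > budget:
            return progenitors       # last-selected progenitor is dropped
        progenitors.append(M2)

# Toy sampler: uniform masses between M_res and M1/2 (purely illustrative).
random.seed(0)
progs = draw_progenitors(1.0, 0.01, 0.05,
                         lambda: random.uniform(0.01, 0.5))
print(progs, sum(progs))  # total resolved progenitor mass stays below 0.95
```

The recursion over redshift steps then applies the same draw to each progenitor until $z_{\rm max}$ is reached, with halos below $M_{\rm res}$ absorbed into the smooth accretion term $\Delta M_{\rm acc}$.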
--- abstract: 'A fuzzy expert system (FES) for the prediction of prostate cancer (PC) is presented in this article. Age, prostate-specific antigen (PSA), prostate volume (PV) and $\%$ Free PSA ($\%$FPSA) are fed as inputs into the FES and prostate cancer risk (PCR) is obtained as the output. The output is calculated using knowledge-based rules in a Mamdani-type inference method. If PCR $\ge 50\%$, then the patient shall be advised to go for a biopsy test for confirmation. The efficacy of the designed FES is tested against a clinical data set. The true prediction for all the patients turns out to be $68.91\%$, whereas for positive biopsy cases alone it rises to $73.77\%$, which may help to reduce false diagnoses.' author: With the advancement of computer systems, machines can perform tasks which normally need human intelligence. Development of AI techniques has revolutionized many areas, such as robotics, transportation, education and marketing, including medical diagnosis and health care. Medical diagnosis deals with the analysis of complex medical data. The primary job in medical diagnosis is to reach a decision using an expert’s logical reasoning. Handling large, complex data and many uncertainties makes this job very difficult, and AI techniques can assist in it. AI in medical diagnosis has added expert human reasoning to the simulation of the computer-aided diagnosis process. Among the different AI methods used in medical diagnosis, fuzzy logic is one of the most popular. AI is the technique of mimicking human intelligence with the help of advanced computer systems. The human brain takes natural languages as inputs, which are not feasible to represent by Boolean logic (either true or false). So, a representation beyond these two possibilities is required. Fuzzy logic exactly does that. Fuzzy logic appears closer to the way the human brain works.
Therefore, in AI, fuzzy logic shows the sign of being a natural choice.\ Uncertainties and imprecision are connected with every aspect of our day-to-day activities. Specifically, in the medical diagnosis domain, one encounters a lot of uncertainty and vagueness. It becomes very difficult to identify a particular disease from the stated symptoms of the patients, as they contain a lot of approximate and inaccurate information. On the other hand, a particular symptom can possibly lead to many different disjoint diseases, whereas for the same disease, the symptom may manifest itself in completely different ways from person to person. There are inherent uncertainties in the process of decision making in medical diagnosis, even for an expert. To tackle these inexact, linguistic inputs, fuzzy logic based expert systems turn out to be very useful. The concepts of fuzzy set and fuzzy logic were introduced by Prof. L. A. Zadeh [@zadeh]. In contrast to binary logic, fuzzy logic deals with multi-valued logic, which is a mathematical tool to represent the real world effectively. Due to its usefulness and simplicity, fuzzy logic has drawn huge attention from interdisciplinary researchers around the globe.\ Fuzzy logic based expert systems are widely used in many areas of medical diagnosis and decision-making. Particularly, in the area of prostate cancer, very little literature is available [@saritas; @benecchi; @yuksel; @seker; @kar; @castanho; @abbod_review; @lorenz]. These works address the problem from different angles and also use fuzzy logic in disjoint ways. Some researchers have used hybrid systems to treat this problem. We focus our attention on fuzzy logic based expert systems to predict prostate cancer risk. From a careful study of the literature, we found that $\%$FPSA is a very crucial parameter, along with age, PSA and PV, for early detection of PC.
Therefore, we formulate a fuzzy expert system by taking care of all these inputs.\ The paper is organized as follows. In section \[fes\], the design of the fuzzy expert system and its membership functions are discussed. In section \[res\], we apply our FES to a medical data set and discuss our findings. Finally, section \[con\] concludes the paper. Fuzzy Expert System (FES) {#fes} ========================= An expert system which uses fuzzy logic instead of Boolean logic is called a fuzzy expert system (FES). A FES is a form of artificial intelligence which deals with membership functions and some prescribed rule base to evaluate a set of data. Fuzziness is introduced to the crisp inputs of a FES by means of suitable membership functions. Once membership functions are defined for all input variables, they are fed to a particular inference method for further action. Here, we have used the Mamdani (max-min) inference method, which is the most popular in the literature. Rules of the FES developed here are of IF-THEN form. The Mamdani-type inference method results in fuzzy sets as output. For a given set of input values, some relevant rules will be fired to produce a fuzzy output in the Mamdani-type inference method. The fuzzy output is defuzzified using different techniques to obtain a crisp output. The centroid method is used for defuzzification in our FES. The general structure of a FES is shown in figure \[fig:0\]. ! [General architecture of a FES. []{data-label="fig:0"}](fes_block.eps){width="\textwidth"} We have used medical data of the patients as given in reference [@saritas]. Depending on the data set, the ranges of the different inputs are determined. ### Age Age is an important risk factor for cancer. For a man having no family history of PC, the chance of getting it increases after the age of 50. This number changes from race to race. However, two out of three PCs are diagnosed in men at the age of 65 or above. The input variable “age" is represented by four fuzzy sets, namely, “very young", “young", “middle age" and “old".
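The Mamdani pipeline just described (fuzzification, max-min rule firing, centroid defuzzification) can be sketched in a few lines. The single rule and the membership function breakpoints below are illustrative assumptions, not the paper's rule base; the PSA "middle" range 4-12 ng/ml follows the crisp sets used for that input:

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def mamdani(psa_value):
    """One-rule Mamdani inference: IF PSA is 'middle' THEN risk is 'medium'.
    Clips the output set at the firing strength (min operator), then
    defuzzifies by the centroid method."""
    risk = np.linspace(0, 100, 1001)           # output universe: risk in %
    medium_risk = tri(risk, 25, 50, 75)        # assumed output fuzzy set
    strength = tri(np.array([psa_value]), 4, 8, 12)[0]   # PSA 'middle'
    clipped = np.minimum(medium_risk, strength)          # Mamdani implication
    if clipped.sum() == 0:
        return 0.0                              # no rule fired
    return float((risk * clipped).sum() / clipped.sum())  # centroid

print(round(mamdani(8.0), 1))   # rule fully fired -> centroid near 50.0
```

A full FES combines many such rules by taking the pointwise maximum of the clipped output sets before the centroid step.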
First and fourth fuzzy sets are represented by trapezoidal membership functions whereas for the second and third we have used triangular membership functions. Table \[tab:1\] lists crisp sets and the corresponding fuzzy sets for the input “age". The membership functions for the same are plotted in figure \[fig:1\]. [lll]{} Input variable & Crisp set & Fuzzy set\ Age (year) & 0-30 & Very young\ & 20-50 & Young\ & 30-60 & Middle age\ & 40-100 & Old\ ! [Membership functions for “age". []{data-label="fig:1"}](age.eps){width="\textwidth"} ### Prostate-specific Antigen (PSA) PSA has altered drastically the management of prostate cancer in men. The PSA blood test can provide early-stage detection of PC [@brawer99]. PSA is a protein secreted by the prostate gland which helps to keep the semen in liquid form. Some part of this protein passes into the blood, which gives rise to the measurable PSA level. The elevation of the PSA level in blood depends upon the health of the prostate gland and the age of the person. A healthy prostate will release less PSA into the blood than a cancerous gland. So, a rise in the PSA level over the normal range could be a possible indicator of PC. However, an elevated PSA level may also be caused by other factors such as acute bacterial prostatitis, enlargement of the prostate, and urinary retention. The measurement of PSA is expressed as nanograms per milliliter of blood. The normal range of the PSA number can be age specific and also race specific. The input variable “PSA" is represented by five fuzzy sets, namely, “very low", “low", “middle", “high" and “very high". For the first and fifth sets we have used trapezoidal membership functions while for the rest, triangular membership functions are used. In table \[tab:2\], we have shown the crisp sets and the corresponding fuzzy sets. The membership functions are plotted in figure \[fig:2\]. [lll]{} Input variable & Crisp set & Fuzzy set\ PSA (ng/ml) & 0-4 & Very low\ & 2-8 & Low\ & 4-12 & Middle\ & 8-16 & High\ & 12-50 & Very high\ !
[Membership functions for “PSA". []{data-label="fig:2"}](psa.eps){width="\textwidth"} ### Prostate Volume (PV) A healthy human male’s prostate is marginally larger than a walnut. It is a crucial parameter for early detection of PC. There is a characteristic pattern in the growth of the prostate with age, and that pattern can change from race to race. With increasing prostate volume there is a possibility of sampling error in systematic sextant needle biopsy, which should not be ignored [@brawer_pv]. The prostate is mainly divided into four zones in pathological terminology, and the total prostate volume as well as the transition zone volume are measured by ultrasound. According to transrectal ultrasound (TRUS) guidance [@zhang], prostate width ($W$) (maximal transverse
--- author: - Wensheng Cheng - Yan Zhang - Xu Lei - Wen Yang - Guisong Xia bibliography: - 'segmentation.bib' - 'change\_detection.bib' title: Semantic Change Pattern Analysis ---
--- author: - 'R. K. Zamanov' - 'K. A. Zadarov' - 'J. Martí' - 'G. Y. S. Mizolov' - 'Y. M. Nikolov' - 'M. F. Bode' - 'P. L.' A number of binary stars have now been detected at high and very high energies (e.g. Paredes et al. 2013). These objects, called $\gamma$-ray binaries, are high-mass X-ray binaries that consist of a compact object (neutron star or black hole) orbiting an optical companion that is an OB star. There are five confirmed $\gamma$-ray binaries so far: PSR B1259-63/LS 2883 (Aharonian et al. 2005), LS 5039/V479 Sct (Aharonian et al. 2006), LS I +61 303 (Albert et al. 2006), HESS J0632+057/MWC 148 (Aharonian et al. 2007), and 1FGL J1018.6-5856 (H.E.S.S. Collaboration et al. 2015). Their most distinctive fingerprint is a spectral energy distribution dominated by non-thermal photons with energies up to the TeV domain. Recently, Eger et al. (2016) proposed a binary nature for the $\gamma$-ray source HESS J1832-093/2MASS J18324516-092154 and this object probably belongs to the family of the $\gamma$-ray binaries as a sixth member. The binary system PSR B1259-63 is unique, since it is the only one where the compact object has been identified as a radio pulsar (Johnston et al. 1992, 1994). The nature of the compact object is known in PSR B1259-63, where it is a neutron star, and in AGL J2241+4454/MWC 656, where it is a black hole (Casares et al. 2014). Although not included in the confirmed list, MWC 656 was selected as a target here despite not having shown all the observational properties of a canonical $\gamma$-ray binary yet (see also Guo & Wang 2017). Nevertheless, the fact that the black hole nature of the compact companion is almost certain renders it very similar to the typical $\gamma$-ray binaries. In the other systems the nature of the compact object remains unclear (e.g. Dubus 2013). In addition to these objects, there are several other binary systems ($\eta$ Car, Cyg X-1, Cyg X-3, Cen X-3, and SS 433) that are detected as GeV sources, but not as TeV sources so far.
Here we report high-resolution spectral observations of , MWC 148, and MWC 656, and discuss circumstellar disc size, disc truncation, interstellar extinction, and rotation of their mass donors. The mass donors (primaries) of these three targets are emission-line Be stars. The Be stars are non-supergiant, fast-rotating B-type and luminosity class III-V stars which, at some point in their lives, have shown spectral lines in emission (Porter & Rivinius 2003). The material expelled from the equatorial belt of a rapidly rotating Be star forms an outwardly diffusing gaseous, dust-free Keplerian disc (Rivinius et al. 2013). In the optical/infrared band, the two most significant observational characteristics of Be stars and their excretion discs are the emission lines and the infrared excess. Moving along the orbit, the compact object passes close to this disc, and sometimes may even go through it causing significant perturbations in its structure. This circumstellar disc feeds the accretion disc around the compact object and/or interacts with its relativistic wind.

Observations
============

Journal of observations (Table \[tab.J\]): date and UT start of exposure, exposure time, signal-to-noise ratio near $H\alpha$, and orbital phase.

Date (yyyymmdd...hhmm) & Exposure & S/N ($H\alpha$) & Orb. phase\
[** **]{}\
20140217...1923 & 60 min & 20 & 0.455\
20140314...1746 & 60 min & 42 & 0.396\
20150805...0009 & 60 min & 45 & 0.579\
[**MWC 148**]{}\
20140113...1857 & 60 min & 56 & 0.758\
20140217...2031 & 60 min & 44 & 0.870\
20140218...1826 & 60 min & 62 & 0.872\
20140313...2002 & 60 min & 54 & 0.946\
20140314...1855 & 60 min & 81 & 0.949\
20140315...1833 & 60 min & 46 & 0.952\
[**MWC 656**]{}\
20150705...2259 & 30 min & 55 & 0.691\
20150804...0017 & 30 min & 45 & 0.173\
20150804...2229 & 30 min & 56 & 0.188\

High-resolution optical spectra of the three northern Be/$\gamma$-ray binaries were secured with the fibre-fed Echelle spectrograph [*ESpeRo*]{} attached to the 2.0 m telescope of the National Astronomical Observatory Rozhen, located in the Rhodope mountains, Bulgaria. 
The spectrograph uses an R2 grating with 37.5 grooves/mm and an Andor CCD camera with 2048 x 2048 px, 13.5x13.5 $\mu m$ px$^{-1}$ (Bonev et al. 2016). The spectrograph provides a dispersion of 0.06 Å px$^{-1}$ at 6560 Å and 0.04 Å px$^{-1}$ at 4800 Å. The spectra were reduced in the standard way including bias removal, flat-field correction, and wavelength calibration. Pre-processing of the data and parameter measurements were performed using various routines provided in IRAF. The journal of observations is presented in Table \[tab.J\], where the date, start of the exposure, exposure time, and signal-to-noise ratio at about $\lambda 6600$ Å are given. The orbital phases are calculated using $HJD_0= 2443366.775$, $HJD_0= 2454857.5,$ and $HJD_0= 2453243.7$ for , MWC 148, and MWC 656, respectively, and the orbital periods given in Sect. \[sect.2\]. Emission plots for the prominent lines are shown in Fig. \[f1.examp\]. The spectral line parameters equivalent width (W) and distance between the peaks ($\Delta V$) for the prominent lines ($H\alpha$, H$\beta$, H$\gamma$, $HeI \lambda 5876,$ and $FeII \lambda 5316$) are given in Table \[tab.2\]. The typical error on the equivalent width is below $\pm 10$% for lines with $W > 1$ Å and up to $\pm 20$% for lines with $W \lesssim 1$ Å. It is worth noting that [**(1)**]{} in   the FeII lines are not detectable; [**(2)**]{} in MWC 656 on spectrum 20150705 the HeI $\lambda5876$ line is not visible (probably emission fills up the absorption). In addition to the Rozhen data we use 98 spectra of MWC 148 and 68 spectra of MWC 656 (analysed in Casares et al. 2012) from the archive of the 2.0 m Liverpool Telescope[^1] (Steele et al. 2004). These spectra were obtained using the Fibre-fed RObotic Dual-beam Optical Spectrograph (FRODOSpec; Morales-Rueda et al.). The aperture was a slit. 
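The orbital phases quoted above follow from the ephemerides ($HJD_0$) by folding the elapsed time on the orbital period; a minimal sketch (the $HJD_0$ value is the one quoted for MWC 656, but the period below is a placeholder for illustration, not the value from Sect. \[sect.2\]):

```python
def orbital_phase(hjd, hjd0, period_days):
    """Orbital phase in [0, 1): time since the zero point folded on the period."""
    return ((hjd - hjd0) / period_days) % 1.0

# Illustrative call: zero point from the text, placeholder period P.
P = 60.0  # days -- assumed value for illustration only
phase = orbital_phase(2453243.7 + 1.25 * P, 2453243.7, P)  # 1.25 cycles -> 0.25
```

The `% 1.0` also maps epochs earlier than $HJD_0$ into $[0,1)$, which Python's modulo guarantees for a positive divisor.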
The spectrograph was operated in a high-resolution mode, providing a dispersion of 0.8 Å px$^{-1}$ at 6500 Å, 0.35 Å px$^{-1}$ at 4800 Å, and typical $S/N \gtrsim 100$. FRODOSpec spectra were processed using the fully automated data reduction pipeline of Barnsley et al. (2012). Table \[tab.2\] lists, for each spectrum (date-obs), the equivalent width ($W$) and peak separation ($\Delta V$) of the $H\alpha$, $H\beta$, $H\gamma$, HeI $\lambda 5876$, and FeII $\lambda 5316$ lines.
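The equivalent widths tabulated above are integrals of the line profile against the normalized continuum, $W=\int(1-F/F_c)\,d\lambda$; a sketch on a synthetic Gaussian emission line (line parameters are made up for illustration; with this sign convention an emission line gives negative $W$):

```python
import numpy as np

# Synthetic normalized spectrum: continuum at 1 plus a Gaussian emission line.
wl = np.linspace(6530.0, 6590.0, 2001)        # wavelength grid [Angstrom]
amp, sigma, center = 5.0, 2.0, 6562.8         # illustrative line parameters
flux = 1.0 + amp * np.exp(-0.5 * ((wl - center) / sigma) ** 2)

# Equivalent width: trapezoidal integral of (1 - F/Fc) over wavelength.
integrand = 1.0 - flux
W = float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(wl)))

# For a Gaussian line the analytic value is -amp * sigma * sqrt(2*pi).
W_analytic = -amp * sigma * np.sqrt(2.0 * np.pi)
```

Papers often quote $|W|$ for emission lines, so the sign is a matter of convention.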
--- abstract: 'We study the evolution of LTB Universe models possessing a varying cosmological term and a material fluid.' author: [Ed & al.]{} --- It has been argued that the cosmological term may decrease as the Universe expands -see [@DHW] and references therein. This may be so if the energy of the quantum vacuum spontaneously decayed into matter and radiation, hence reducing the cosmological term to a value compatible with astronomical constraints -see for instance Overduin [*et al. *]{} [@JMO] and references therein. On the other hand, it has recently been pointed out that, because of source evolution, it may well happen that the Universe is in reality inhomogeneous and describable by the Lemaître-Tolman-Bondi (LTB) metric [@MHE]. Further motivations conducive to the use of inhomogeneous metrics can be found in Krasinski [@KRS].

Metric and models
=================

We consider a spatially flat LTB metric $$ds^{2} = - dt^{2} + Y^{'2} \, dr^{2} + Y^{2}\, (d\theta^{2} + \sin^{2} \theta \, d\phi^{2}), \; \; (Y = Y(r, t)) \label{1}$$ whose source is a perfect fluid, with equation of state $P = (\gamma - 1) \rho$, plus a varying cosmological term $\Lambda(t)$. The non-trivial Einstein equations are $$\rho + \Lambda = \frac{1}{Y^{2} \, Y^{'}} (\dot{Y}^{2} \, Y)^{'} \, , \label{2}$$ $$P - \Lambda = - \frac{1}{Y^{2} \dot{Y}} (\dot{Y}^{2}\, Y)^{.} \, , \label{3}$$ $$\frac{\ddot{Y}}{Y} + \left( \frac{\dot{Y}}{Y}\right)^{2} - \frac{\ddot{Y}^{'}}{Y^{'}}-\frac{\dot{Y}}{Y}\frac{\dot{Y}^{'}}{Y^{'}}=0 \qquad (8 \pi G = 1). \label{4}$$ In general the solutions can be expressed as $Y(r, t) = R(r)^{2/3} Z(t)^{2/3\gamma} $. Next we summarize different scenarios of interest -see Chimento and Pavón [@LD] for details. 1. For $\gamma$ and $\Lambda$ constants one obtains $$Y_{1} = R^{2/3} (r) \, C_{1}^{2/3\gamma} \, \cosh^{2/3\gamma} \left( \frac{\sqrt{3 \gamma \Lambda}}{2} \; t + \varphi_{1} \right) \, , \label{5}$$ $$Y_{2} = R^{2/3} (r) \, C_{2}^{2/3\gamma} \, \sinh^{2/3\gamma} \left( \frac{\sqrt{3 \gamma \Lambda}}{2} \; t + \varphi_{2} \right). 
\label{6}$$ Obviously both sets of solutions have a final inflationary stage. 2. When $\gamma = \mbox{constant}$ and $$\Lambda (t) = \frac{\lambda_{0}^{2}}{t^{2}} \quad (\lambda_{0}^{2} = \mbox{constant}) \, , \label{7}$$ it follows that $$Z(t) =C_{1} \, t^{m_{+}} + C_{2} \, t^{m_{-}} \label{8}$$ where $ m_{\pm} = \left(1 \, \pm \, \sqrt{1 + 3 \gamma \lambda_{0}^{2}}\right)/2 $. Whether an inflationary stage occurs depends on $\lambda_{0}^{2}$. 3. For $\gamma = \mbox{constant}$ and $$\Lambda = \lambda_{0}^{2} \, t^{n-2} \; \; \; (n \neq 0, \, 2) \, , \label{9}$$ the solution can be expressed as a combination of Bessel functions $$Z = C_{1} \, t^{1/2} \, J_{1/n}\left(\frac{\lambda_{0}}{n} \sqrt{- 3\gamma} \, t^{n/2} \right)$$ $$+ C_{2} \, t^{1/2} \, J_{-1/n}\left(\frac{\lambda_{0}}{n} \sqrt{- 3\gamma} \, t^{n/2} \right). \label{10}$$ The behavior at the asymptotic limits depends on $n$. For $0 < n <2$ one has the following: (i) When $ t \rightarrow 0$ one obtains $Z \sim C_{1} \, t + C_{2}$ -one can choose $C_{2} = 0$ to have the initial singularity at $t = 0$. (ii) When $t \rightarrow \infty$ there follows $Z \sim t^{\frac{1}{2}-\frac{n}{4}} \; \cos \, t^{n/2}$.\ Likewise for $ n < 0 $: (i) when $ t \rightarrow 0 \; $ one obtains $Z \sim t^{\frac{1}{2}-\frac{n}{4}} \; \cos \, (t^{n/2} + \varphi)\;. $ (ii) When $ t \to \infty \; $ one obtains $ Z \sim t \, .$ 4. 
For $\gamma = \mbox{constant}$ and $$\Lambda (t) =\lambda_{0}^{2}+c\mbox{e}^{-\alpha t} \qquad (c < 0)\, , \label{11}$$ where $\lambda_{0}^{2}$, $\alpha$ and $c$ are constants, again the general solution is a combination of Bessel functions $$Z = C_{1} \,J_{\frac{\lambda_{0}}{\alpha}\sqrt{3\gamma}} \left(\frac{\sqrt{- 3\gamma c}}{\alpha}\,\mbox{e}^{\frac{-\alpha}{2} t}\right)$$ $$+ \, C_{2} \,J_{-\frac{\lambda_{0}}{\alpha}\sqrt{3\gamma}} \left(\frac{\sqrt{- 3\gamma c}} {\alpha}\,\mbox{e}^{\frac{-\alpha}{2} t}\right) \, , \label{12}$$ with $$C_2=-\frac{J_{\frac{\lambda_{0}}{\alpha}\sqrt{- 3\gamma}} \left(\frac{\sqrt{- 3\gamma c}}{\alpha}\right)} {J_{-\frac{\lambda_{0}}{\alpha}\sqrt{- 3\gamma}} \left(\frac{\sqrt{- 3\gamma c}}{\alpha}\right)}\,C_1 \label{13}$$ in order to fix the initial singularity at $t = 0$. When $t \rightarrow 0$ one has $Z \sim t$. At the final stage, when $t \rightarrow \infty$ and $\Lambda \rightarrow \lambda_{0}^{2}$, one obtains the following asymptotic behavior $$Y\approx R^{2/3}(r)\,\mbox{e}^{\frac{\lambda_{0}} {\sqrt{-3\gamma c}}\,t} \, . \label{14}$$ For the particular case $\lambda_{0}^{2} = 0 $ and in the limit $t \rightarrow \infty$, there is a solution whose final behavior is $$Y\approx R^{2/3}(r)\,t^{2/3\gamma} \, . \label{15}$$ 5. For $\gamma = \gamma (t)$ and $\Lambda = \Lambda (t)$, expressions can be found for both quantities, $$\Lambda(t)=\frac{4C^2(t-t_0)^{2n}}{3\gamma_0 n^2(n+1)^2} \left[1+\frac{(t-t_0)^{n+1}}{C}\right]^{\frac{2-n}{n}}, \label{16}$$ $$\gamma(t)=\gamma_0\left[1+\frac{(t-t_0)^{n+1}}{C}\right]^{-\frac{2+n}{n}}, \label{17}$$ as well as an asymptotic solution for $Y(t,r)$ $$Y\approx R^{2/3} (r) \, T_{0}^{2/3\gamma_0} \, \left[\frac{(n+1)(n+2)}{n}(t-t_0)\right]^{2/3\gamma_0} \, , \label{18}$$ where $T_{0}, \gamma_{0}, t_{0}, \mbox{and } C $ are constants. 
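As an aside, the exponents $m_{\pm}$ quoted for scenario 2 (Eq. (8)) are, purely algebraically, the two roots of $m^{2}-m-\tfrac{3}{4}\gamma\lambda_{0}^{2}=0$; a quick symbolic check (a sketch, not part of the paper):

```python
import sympy as sp

g, l0 = sp.symbols('gamma lambda_0', positive=True)

# m_+- = (1 +- sqrt(1 + 3*gamma*lambda_0**2)) / 2, as in Eq. (8)
root = sp.sqrt(1 + 3 * g * l0**2)
m_plus, m_minus = (1 + root) / 2, (1 - root) / 2

# Both exponents satisfy m**2 - m - (3/4)*gamma*lambda_0**2 = 0,
# and their sum equals 1, consistent with Vieta's formulas.
for m in (m_plus, m_minus):
    assert sp.simplify(m**2 - m - sp.Rational(3, 4) * g * l0**2) == 0
assert sp.simplify(m_plus + m_minus - 1) == 0
```

In particular, $m_+ + m_- = 1$ and $m_+ m_- = -\tfrac{3}{4}\gamma\lambda_0^2$, so $m_+ > 1$ whenever $\gamma\lambda_0^2 > 0$.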
It is worthy of note that, for $t\gg t_0$ we have both $\gamma \rightarrow \gamma_0$ and $\Lambda \rightarrow 0$. To examine the singular
--- abstract: 'We show that the homology of ordered configuration spaces of finite trees with loops is torsion free. We introduce configuration spaces with sinks, which allow for taking quotients of the base space. Furthermore, we give a concrete generating set for all homology groups of configuration spaces of trees with loops and the first homology group of configuration spaces of general finite graphs.' --- For a finite set $S$, let $\operatorname{Conf}_S(X)$ denote the space of injective maps $S\to X$; for $\mathbf{n}=\{1,\ldots,n\}$ we write $\operatorname{Conf}_n(X){\mathrel{\mathop:}=}\operatorname{Conf}_{\mathbf{n}}(X)$. This is usually called the $n$-th ordered configuration space of $X$. Let $G$ be a finite connected graph (i.e. a connected 1-dimensional CW complex with finitely many cells). We are interested in the homology of configurations of $n$ ordered particles in $G$, that is, $H_*(\operatorname{Conf}_n(G))$. A main ingredient in proving results about configurations in graphs is the existence of combinatorial models for the configuration spaces. In [@Abrams00], Abrams introduced a discretized model for the configuration space of $n$ points in a graph which is a cubical complex, allowing the spaces to be studied using techniques from discrete Morse theory and connecting them with right-angled Artin groups (see [@Farley05], [@Crisp04]). A similar discretized model for non-$k$-equal configuration spaces in a graph, where up to $k-1$ points are allowed to collide, was constructed in [@Chettih16], providing inspiration for the configurations with sinks introduced in this paper. Not long after the introduction of Abrams’ model, [Ś]{}wiątkowski introduced a cubical complex which is a deformation retract of the space of unordered configurations of $n$ points in a graph (see [@Swiat01]). In this model, instead of the points moving discrete distances along the graph, the points move from an edge to a vertex of valence at least two or vice versa. 
This gives a sharper bound for the homological dimension of these configuration spaces as the dimension of the complex is bounded from above by the number of vertices in the graph (see [@Ghrist01], [@Farley05] for proofs that this bound also holds for Abrams’ model). An analogous model holds for ordered configurations (see [@Luetgehetmann14]), by keeping track of the order of points on an edge. The combinatorial model for configurations with sinks has structure similar to the latter models. In order to describe the homology of $\operatorname{Conf}_n(G)$ we will compare it to a modified version of configuration spaces: we add “sinks” to our graphs. Sinks are special vertices in the graph where we allow particles to collide. For ordinary configuration spaces, if we collapse a subgraph $H$ of $G$ then this does *not* induce a map $$\operatorname{Conf}_n(G)\dashrightarrow\operatorname{Conf}_n(G/H)$$ because some of the particles could be mapped to the same point in $G/H$. If, however, we turn the image of $H$ under $G\to G/H$ into a sink, there is now an induced map on configuration spaces. Our first theorem shows that in the *ordered* case, there is no torsion and a geometric generating system for a large class of finite graphs. A finite connected graph $G$ is called a *tree with loops* if it can be constructed as an iterated wedge of star graphs and copies of $S^1$. A homology class $\sigma\in H_q(\operatorname{Conf}_n(G))$ is called the *product of classes $\sigma_1\in H_{q_1}(\operatorname{Conf}_{T_1}(G_1))$ and $\sigma_2\in H_{q_2}(\operatorname{Conf}_{T_2}(G_2))$* for $q_1+q_2=q$ if it is the image of $\sigma_1\otimes \sigma_2$ under the map $$H_q(\operatorname{Conf}_n(G_1\sqcup G_2)) \to H_q(\operatorname{Conf}_n(G))$$ induced by an embedding $G_1\sqcup G_2\hookrightarrow G$. Analogously, iterated products are induced by embeddings $G_1\sqcup G_2 \sqcup \ldots \sqcup G_n \hookrightarrow G$. 
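To make Abrams' discretized model concrete, here is a small computational sketch for two ordered points on the star with three legs, each leg subdivided into two edges (we assume this subdivision suffices for $n=2$). Cells of the discrete model are ordered pairs of cells of the graph with disjoint closures, and the Euler characteristic comes out as $0$, consistent with the discrete ordered configuration space of two points on a Y-shaped tree being a circle:

```python
from itertools import permutations

# Subdivided Star_3: center c, leg midpoints m1..m3, leaves l1..l3.
vertices = ['c', 'm1', 'm2', 'm3', 'l1', 'l2', 'l3']
edges = [('c', 'm1'), ('m1', 'l1'), ('c', 'm2'),
         ('m2', 'l2'), ('c', 'm3'), ('m3', 'l3')]

# Represent each cell of the graph by its closure (the vertices it touches).
cells0 = [frozenset([v]) for v in vertices]   # 0-cells
cells1 = [frozenset(e) for e in edges]        # 1-cells

def disjoint(a, b):
    return not (a & b)

# Cells of the discrete ordered configuration space D_2(Star_3):
V = sum(1 for a, b in permutations(cells0, 2) if disjoint(a, b))  # vertex x vertex
E = 2 * sum(1 for a in cells0 for b in cells1 if disjoint(a, b))  # vertex x edge, both orders
F = sum(1 for a, b in permutations(cells1, 2) if disjoint(a, b))  # edge x edge

chi = V - E + F  # Euler characteristic of the square complex
```

Here $V=42$, $E=60$, $F=18$, so $\chi = 0$; since the complex is connected, this matches a space homotopy equivalent to $S^1$.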
For $k\ge 3$ let $\operatorname{Star}_k$ be the star graph with $k$ leaves, $\operatorname{H}$ the tree with two vertices of valence three and $S^1$ a circle with one vertex of valence 2. We call a class $\sigma\in H_q{\mathopen{}\mathclose\bgroup\originalleft}( \operatorname{Conf}_n(G) {\aftergroup\egroup\originalright})$ a *product of basic classes* if $\sigma$ is an iterated product of classes in groups of the form $H_j(\operatorname{Conf}_{n_i}(G_i))$ where $j$ equals 0 or 1 and $G_i$ is a star graph, the $\operatorname{H}$-graph, the circle $S^1$ or the interval $I$. \[thm:trees\] Let $G$ be a tree with loops and $n$ a natural number. Then the integral homology $H_q{\mathopen{}\mathclose\bgroup\originalleft}( \operatorname{Conf}_n(G); {\mathbb{Z}}{\aftergroup\egroup\originalright})$ is torsion-free and generated by products of basic classes for each $q\ge0$. A 1-class in $S^1$ moves all particles around the circle, a 1-class in a star graph uses the essential vertex to shuffle around the particles, and a 1-class in the $\operatorname{H}$-graph uses one of the vertices to reorder the particles and then undoes this reordering using the other vertex. The proof of Theorem \[thm:trees\] will show that 2-classes in an $\operatorname{H}$-graph are given by sums of products of 1-classes in the two stars, and there are no higher dimensional classes in these three types of graphs. The proof of Theorem \[thm:trees\] rests on an inductive argument on the number of essential vertices of a graph. We construct a basis for the configuration space of a star graph with loops such that the $E^1$-page of the Mayer-Vietoris spectral sequence induced by our gluing splits over that basis. We can identify a part of the homology of the $E^1$-page with configuration spaces where some of the points have been forgotten, and the rest of the homology with a configuration space where the star graph has been collapsed to a sink (see below for the definition of sink configuration spaces). 
The gluing process does not create torsion, so torsion-freeness follows from explicit calculations of the homology of ordered configurations in star graphs with loops. An explicit generating set of homology classes with known relations is essential to our proof. A basis for the homology of ordered configurations of two points in a tree was first constructed in [@Chettih16], which highlighted the role of basic classes of the $\operatorname{H}$ graph in the configuration space of wedges of graphs. See also [@BF09] and [@FH10] for descriptions of product structure in configurations of two points on planar and non-planar graphs, as well as [@MaSa16]. For more general graphs, the analogous theorems do not hold: \[thm:non-product-general-graph\] If $G$ is any finite graph and $n$ a natural number, then the *first* homology group $H_1(\operatorname{Conf}_n(G))$ is generated by basic classes. However, for each $i\ge2$ there exists a finite graph $G$ and a number $n$ such that $H_i(\operatorname{Conf}_n(G))$ is not generated by products of 1-classes. We provide explicit examples for the second statement. Abrams and Ghrist were aware of the second part of this result in 2002 ([@AbramsGhrist02]), but their example does not generalize to arbitrary dimensions. More specifically, they showed that $\operatorname{Conf}_2(K_5)$ and $\operatorname{Conf}_2(K_{3,3})$ are homotopic to surfaces of genus $6$ and
--- abstract: 'In this paper we address the problem of information-constrained optimal control for an interconnected system subject to one-step communication delays and power constraints. The goal is to minimize a finite-horizon quadratic cost by optimally choosing the control inputs for the subsystems, accounting for power constraints in the overall system and different information available at the decision makers. To this purpose, due to the quadratic nature of the power constraints, the LQG problem is reformulated as a linear problem in the covariance of the state-input aggregated vector. The zero-duality gap allows us to equivalently consider the dual problem, and decompose it into several sub-problems according to the information structure present in the system. Finally, we achieve significant gains.' author: - 'V. Causevic$^{\dagger}$' --- Control over networks has been studied extensively (see e.g. [@networkflows]). Much recent research focuses on interconnected systems. Traditionally, arguments in favor of distributed control (compared to centralized) are geographically distributed sensors, limited local computational power at the plant side, robustness against single-node failure, and information privacy.\ In general, the design of distributed control is difficult because it imposes information constraints on individual decision makers. Such constraints arise due to either partial information exchange between decision makers or communication delay. In the problem we address herein, decision makers are able to communicate the full information they receive - either due to own measurements or from other decision makers - however, with delay. In other words, information constraints are due to communication delays between decision makers. The information constraints, sometimes referred to as the information structure, play a key role in determining the optimal control and decide on its computational tractability. 
Indeed, in [@witsenhausen1968] a linear quadratic Gaussian team problem is constructed with a non-classical information pattern and it is shown that a linear controller is not necessarily optimal. This problem is addressed in [@ho-chi1972] where it is shown that the so-called partially nested information structure guarantees existence of optimal control laws that are linear in the associated information. Finally, a strong result characterizing the class of all information-constrained problems which may be cast as a convex program is given in [@rotkowitz2006tac].\ Inspiration for our approach is given by the work in [@nayyar2013tac] which suggests that the information hierarchy existing between the decision makers can be exploited to obtain the optimal solution. A similar setting is considered in [@lamperski2012cdc]. The authors, however, consider a typical unconstrained linear quadratic team problem. In reality, actuation capabilities are limited and thus must be accounted for in the design procedure.\ The main contribution of this paper is a method to compute optimal control laws for a power-constrained system with a given information structure. We assume the latter to be induced by one-step communication delays between the decision makers. To this end, the problem is reformulated in its dual Lagrangian form, where the covariance of the state-input aggregated vector is defined as the decision variable. The information structure is then exploited to split the optimization problem into simpler sub-problems that have alike structure. Indeed, in-network control [@InNetwork] is seen as the decomposition of a complex task into smaller sub-tasks resulting in computationally inexpensive local control actions. 
From an application point of view, the goal is to implement and analyze the developed approach within a network infrastructure, exploiting the possibility of existing (but limited) in-network processing, in order to improve control performance.\ The remainder of the paper is outlined as follows. We start with the problem setup in Section \[sec: problem statement\]. The method to decouple the problem into several sub-problems via covariance decomposition is presented in Section \[sec:info dec\]. In Section \[sec:dual\] we provide a structural characterization of the solution to the problem, and finally conclusions are given in Section \[sec:conclude\].

Problem Statement
=================

We consider a system composed of $N$ interconnected subsystems. Formally, the physical interconnections are described through a graph $\mathcal{G} = \left(\mathcal{V}, \mathcal{E}\right)$, which we call the interconnection graph. Each node $i \in \mathcal{V}$ corresponds to one of the subsystems $i\in\{1, \ldots, N\}$. An edge $(j,i) \in \mathcal{E}$ if the dynamics of node $i$ are directly affected by node $j$. We assume that $\mathcal{G} $ is connected and undirected, i.e., $(i,j) \in \mathcal{E}$ if and only if $(j,i) \in \mathcal{E}$. The set of direct neighbors of decision maker $i$ is defined as $\mathcal{N}_i = \{j \,\vert (j,i) \in \mathcal{E}\}$. The distance (number of edges on a shortest path) between nodes $i$ and $j$ is denoted $d_{ij}$. Clearly, if $j \in \mathcal{N}_i$ then $d_{ij} = 1$. The dynamics of subsystem $i$ are $$x_i (k+1) = A_{ii}\, x_i(k) + \sum_{j \in \mathcal{N}_i} A_{ij}\, x_j(k) + B_i\, u_i(k) + w_i(k),$$ where $x_i (k) \in \mathbb{R}^{n_i}$ and $u_i (k) \in \mathbb{R}^{m_i}$ denote the state and input of the $i$-th subsystem. The noise process $w_i (k) \in \mathbb{R}^{n_i}$ is zero-mean i.i.d. Gaussian noise with covariance matrix $\Sigma_{w} $. The initial state $x_i (0)$ is a random variable with zero-mean and finite covariance $\Sigma_{x} $. Moreover, $x_i (0)$ and $w_i (k)$ are assumed to be pair-wise independent at each time instant $k$ and every $i$. 
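Under the one-step-delay exchange described above, decision maker $i$ learns the state of node $j$ with a lag equal to the graph distance $d_{ij}$: each node relays everything it currently knows to its direct neighbors once per step. A small simulation sketch (the three-node path graph is an assumed example, not from the paper):

```python
from collections import deque

# Assumed example: path graph 0 -- 1 -- 2 (undirected, connected).
adj = {0: [1], 1: [0, 2], 2: [1]}

def distances(adj, s):
    """BFS hop distances d_sj from node s to every node j."""
    d, q = {s: 0}, deque([s])
    while q:
        v = q.popleft()
        for w in adj[v]:
            if w not in d:
                d[w] = d[v] + 1
                q.append(w)
    return d

# info[i] holds tokens ('x', j, t) meaning "player i knows x_j(t)".
T = 5
info = {i: {('x', i, 0)} for i in adj}
for k in range(1, T + 1):
    # Each player keeps its history, observes its own new state, and
    # receives the one-step-delayed information sets of its neighbors.
    info = {i: info[i] | {('x', i, k)} | set().union(*(info[j] for j in adj[i]))
            for i in adj}

# Player i knows x_j(t) exactly when t <= T - d_ij.
for i in adj:
    d = distances(adj, i)
    for j in adj:
        for t in range(T + 1):
            assert (('x', j, t) in info[i]) == (t <= T - d[j])
```

The assertion mirrors the intuition behind the delayed information structure: information diffuses along the graph at one hop per time step.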
For a more compact notation, equation can be rewritten as $$x (k+1) = A x(k) + B u(k) + w(k) \label{eq: global system}$$ where the stacked vectors are $x (k) = ({x_1 ^\top (k)}, \ldots , {x_N ^\top (k)})^\top \in \mathbb{R}^n$, $ w(k) = ({w_1 ^\top (k)}, \ldots, {w_N ^\top (k)})^\top \in \mathbb{R}^n$, $ u(k) = ({u_1 ^\top (k)}, \ldots, {u_N ^\top (k)})^\top \in \mathbb{R}^m$, $n=\sum_{i=1}^{N} n_i$ and $m=\sum_{i=1}^{N} m_i$. The admissible control policies at time instant $k$ are measurable functions of the information available to each decision maker $i$ (sometimes also referred to as player $i$) $$u_i (k) = \gamma_k^i(\mathcal{I}_k^i) \label{eq:gama}$$ where $\mathcal{I}_k^i, \ k=0,\ldots,T-1,$ is defined as $$\begin{aligned} &\mathcal{I}_k^i = \{\mathcal{I}_{k-1}^i, x^i_{k}, u_{k-1}^i\} \underset{j \in \mathcal{N}_i}\bigcup \{\mathcal{I}^j_{{k-1}}\}, \quad k>0, {\addtocounter{equation}{1}\tag{\theequation}}\label{eq:information set}\end{aligned}$$ and $\mathcal{I}_0^i=\lbrace x_0 ^i \rbrace$. In other words, the information set of each decision maker $i$ is updated at time instant $k$ by the current state and the one-step delayed information from the direct neighbors $\mathcal{N}_i$. The objective is to minimize the following global control cost $$\begin{aligned} {\addtocounter{equation}{1}\tag{\theequation}}\label{eq: quadratic cost 1} J_{\mathcal{C}} = {\rm E}\left[ \sum_{k=0}^{T-1} { \begin{bmatrix} x(k) \\ u(k
--- abstract: 'We report the discovery of two Mira variable stars (Miras) toward the Sextans dwarf spheroidal (dSph) galaxy. The light curves of both stars in the $I_{\rm c}$ band show large-amplitude (3.7 and 0.9 mag) and long-period ($326\pm 15$ and $122\pm 5$ days) variations, suggesting that they are Miras. We combine our own infrared data with previously published data to estimate the mean infrared magnitudes. The distances obtained from the period-luminosity relation of the Miras ($75.3^{+12.8}_{-10.9}$ and $79.8^{+11.5}_{-9.9}$ kpc, respectively), together with the radial velocities available, support their membership of the Sextans dSph ($90.0\pm 10.0$ kpc). These are the first Miras found in a stellar system with a metallicity as low as ${\rm [Fe/H]\sim -1.9}$, lower than that of any other known system with Miras.' author: - 'Sakamoto, Tsuyoshi' - 'Matsunaga, Noriyuki' - 'Hasegawa, Takashi' - 'Nakada, Yoshikazu' title: 'Discovery of Mira variable stars in the metal-poor Sextans dwarf spheroidal galaxy' ---

Introduction
============

Miras are pulsating stars with initial masses between 0.8 and 8 solar masses in the Asymptotic Giant Branch (AGB) phase, which eject material via stellar winds into the interstellar medium (e.g., Habing 1996). The ejected material contains chemical elements that have been dredged up from the interior (e.g., carbon and s-process elements). A large amount of dust forms in the ejecta of the Miras, and such dust grains regulate the cooling of the interstellar medium and the fragmentation of collapsing molecular clouds into stars. Thus, Miras play an important role in providing the heavy elements and dust grains from the early Universe to the present day. The evolution of Miras is expected to depend on their initial mass and metallicity. 
Therefore, Miras in stellar systems with known metallicity and/or age distribution can be important tracers to study the evolution of Miras and their impacts on chemical enrichment. For example, most of the Galactic globular clusters are old stellar systems with a single metallicity or a narrow metallicity distribution, offering an important sample of low-mass and metal-poor Miras (Frogel & Whitelock 1998; Feast et al. 2002). We note that Miras are found only in the clusters with ${\rm [Fe/H]>-1}$. Another interesting sample is low- to intermediate-mass Miras found in the Magellanic Clouds. A formidable amount of literature exists on those objects (e.g. Ita et al. 2004ab; Fraser et al. 2008; Soszyński et al. 2009; Groenewegen et al. 2009). Galactic dSphs provide us with a sample of even lower metallicity objects than the globular clusters. It is known that fainter galaxies tend to have lower mean metallicities (Norris et al. 2010). Therefore, the faint dSphs are excellent places to study metal-poor Miras if any Mira is found. Recent monitoring surveys in Galactic dSphs have discovered several Miras (Fornax dSph, Whitelock et al. 2009; Sagittarius dSph, Lagadec et al. 2009; Sculptor dSph, Menzies et al. 2011). Among the dSphs with previously known Miras, the Sculptor dSph is the most metal deficient one, which has the metallicity distribution with a peak at ${\rm [Fe/H]}=-1.56$ and a dispersion of 0.48 (Kirby et al. 2009). Sloan et al. (2009) reported evidence of circumstellar dust around one of the Miras in the Sculptor dSph (Menzies et al. 2011), using infrared spectroscopy. This suggests that dust can form around Miras even at very low metallicity. Our target galaxy, the Sextans dSph, shows a metallicity distribution with a peak at ${\rm [Fe/H]}=-1.9$ and is one of the most metal-poor dSphs in the Galaxy (Battaglia et al. 2011). So far, two monitoring surveys have been conducted for the center of this galaxy, revealing dozens of short-period variable stars (Mateo et al. 1995; Lee et al. 
2003). However, no Mira member of the Sextans dSph was confirmed. In Section 2, we describe optical and infrared photometric observations, and discuss their membership to the Sextans dSph. In Section 3, we discuss the chemical properties and their impacts.

Observations and Results
========================

Our targets
-----------

In order to explore Miras in the Sextans dSph, we selected two target stars from the photometric catalogs presently available. These stars are listed in Table \[tab:Object\]. The first target \#1, SDSS J101525.93$-$020431.8, was selected using the color criteria, $J-H>0.7$, $H-K_{\rm s}>0.3$ on the 2MASS catalog (Skrutskie et al. 2006) and $g-r>0.8$, $r-i>0.3$ on the SDSS catalog (Adelman-McCarthy et al. 2008). These criteria are also used for our monitoring survey of Miras in the Galactic halo (Sakamoto et al., in preparation). The target \#1 is carbon-rich, showing a spectrum with the strong CN absorption band at 7900${\rm \AA}$ (Mauron et al. 2004; Cruz et al. 2007). The second target \#2, SDSS J101234.29$-$013440.8, was later added because of its variation detected in the QUEST1 (QUasar Equatorial Survey Team, Phase 1) variability survey (Rengstorf et al. 2009). The $R$-band light curve of \#2 over 2 years in the QUEST1 showed a variation with a large amplitude ($\Delta R \geq 1.2$ mag) and a long period (over 100 days), although their time sampling was not good enough to estimate the period. The target \#2 is oxygen-rich, showing a spectrum with clear TiO molecular absorption lines (Suntzeff et al. 1993).

$I_c$-band photometry
---------------------

We conducted photometric monitoring observations of the two selected targets in the direction of the Sextans dSph using the 2KCCD camera attached to the 105-cm f/3.0 Schmidt telescope at Kiso Observatory (Itoh et al. 2001). The observations started in December 2008 and February 2010 for the targets \#1 and \#2, respectively, and were repeated until February 2012. 
Time series $I_c$-band images were obtained. The data were reduced following standard procedures with IRAF, including bias subtraction (both the level of the overscan region in each image and the bias pattern taken on each night) and flat-field correction with $I_c$-band dome-flat images. Instrumental magnitudes of the targets and comparison stars were measured with aperture photometry using the IRAF/APPHOT package. The comparison stars were selected from the SDSS database (Adelman-McCarthy et al. 2008), and their $I_c$ magnitudes were calculated by using the transformation of Jordi et al. (2006), $$I_c=i'+(-0.386\pm 0.004)(i'-z')-(0.397\pm 0.001).$$ Using these $I_c$ magnitudes for calibrating the magnitude scale, we obtained the magnitudes of our target stars as listed in Tables 2 and 3. Fig. \[fig:LC\] plots the $I_c$ variations against the Modified Julian Date (MJD). Both stars show the long-period and large-amplitude variation characteristic of either Miras or semi-regular variables. The peak-to-valley amplitudes ($\Delta I_c$) are 3.72 mag for the target \#1 and 0.94 mag for the target \#2. Miras are generally considered to have an $I_c$ amplitude greater than 0.9-1.0 mag (Ita et al. 2004ab; Matsunaga et al. 2005). The target \#1 is clearly a Mira, whereas the target \#2 falls between Miras and semi-regulars. The light curve of the target \#1 shows a clear modulation over the entire observation run, indicating a long-term variation which is often observed for carbon-rich Miras (Whitelock et al. 2003). We subtracted this long-term trend, which was fitted by a sine curve with a period of 1500 days. The residual light curve shows a regular periodic variation as expected. Then, we applied the Phase Dispersion Minimization (PDM, Stellingwerf 1978) to the residual curve to obtain a period of 326
--- abstract: 'A novel soft-photon amplitude is proposed to replace the conventional Low soft-photon amplitude for nucleon-nucleon bremsstrahlung. Its derivation is guided by the standard meson-exchange model of the nucleon-nucleon interaction. This new amplitude provides a superior description of $pp\gamma$ data. Moreover, it is possible to determine an equivalent threshold.' author: - 'M.K. Liou' - 'R. Timmermans' --- magnetic moment amplitudes. The most successful example in the first case is the determination of the magnetic moments of the $\Delta^{++}$ ($\Delta^0$) from $\pi^+p\gamma$ ($\pi^-p\gamma$) data in the energy region of the $\Delta$(1232) resonance [@Lin91]. In the case of reaction mechanisms, a well-known example is the extraction of nuclear time delays from the $p ^{12}C\gamma$ data near the 1.7-MeV resonance [@Mar76]. The time delay distinguishes between direct and compound nuclear reactions. The initial goal of nucleon-nucleon bremsstrahlung investigations was to distinguish among various phenomenological potential models of the fundamental two-nucleon interaction. Most measured $pp\gamma$ cross sections could, in fact, be reasonably described by potential-model calculations, but the difference between predictions from any two realistic potentials appears to be too small to be distinguished by the data. For more than 30 years, the conventional Low soft-photon amplitude [@Low58] has been widely used for studying nuclear and particle bremsstrahlung processes. It has been applied to a wide variety of reactions and processes. For instance, Nyman [@Nym68] and Fearing [@Fea80] used this amplitude to calculate $pp\gamma$ cross sections which were in reasonable agreement with several measurements and potential-model calculations. However, it was recently pointed out by Workman and Fearing [@Wor86] that the results from this conventional Low amplitude differ significantly from the potential-model calculations for the TRIUMF data at 280 MeV [@Mic90]. This discrepancy has proved difficult to explain. 
The main purpose of this Letter is to propose a novel soft-photon amplitude to replace the conventional Low prescription. This new amplitude, the derivation of which is guided by the structure of the standard meson-exchange model of the two-nucleon interaction, is relativistic, manifestly gauge invariant, and consistent with the soft-photon theorem. It belongs to one of the two general classes of recently derived soft-photon amplitudes [@Lio93]. We demonstrate that the $pp\gamma$ data from low energies to energies near the pion-production threshold can be consistently described by the new amplitude. Most importantly, we point out here that our amplitude essentially eliminates the discrepancy between the soft-photon approximation and the potential-model calculations. That is, we demonstrate that “off-shell effects” are essentially negligible. Finally, we explore why the conventional Low amplitude works for some cases but fails for others. In order to elucidate these points, let us consider photon emission accompanying the scattering of two spin-1/2 particles $A$ and $B$, $$A(q_i^\mu) + B(p_i^\mu) \rightarrow A(q_f^\mu) + B(p_f^\mu) + \gamma(K^\mu) \ . \label{eq:ppg}$$ Here, $q_i^\mu$ ($q_f^\mu$) and $p_i^\mu$ ($p_f^\mu$) are the initial (final) four-momenta for particles $A$ and $B$, respectively, and $K^\mu$ is the four-momentum for the emitted photon with polarization $\varepsilon^\mu$. Particle $A$ ($B$) is assumed to have mass $m_A$ ($m_B$), charge $Q_A$ ($Q_B$), and anomalous magnetic moment $\kappa_A$ ($\kappa_B$). For process (\[eq:ppg\]), we can define the following Mandelstam variables: $s_i=(q_i+p_i)^2$, $s_f=(q_f+p_f)^2$, $t_q=(q_f-q_i)^2$, $t_p=(p_f-p_i)^2$, $u_1=(p_f-q_i)^2$, and $u_2=(q_f-p_i)^2$. Since a soft-photon amplitude depends only on either ($s$,$t$) or ($u$,$t$), chosen from the above set, we can derive two distinct classes of soft-photon amplitudes: $M^{(1)}_\mu(s,t)$ and $M^{(2)}_\mu(u,t)$ [@Lio93]. 
The general amplitude from the first class is the two-$s$–two-$t$ special amplitude $M^{TsTts}_\mu(s_i,s_f;t_q,t_p)$; that from the second class is the two-$u$–two-$t$ special amplitude $M^{TuTts}_\mu(u_1,u_2;t_q,t_p)$. The distinguishing characteristics of these amplitudes come from the fact that they are evaluated at different elastic-scattering or on-shell points (energy and angle). The soft-photon theorem does not specify how these on-shell points are to be selected. The modified procedure for deriving these soft-photon amplitudes is described in detail in Ref. [@Lio93]. In this procedure, the fundamental tree diagrams of the underlying elastic scattering process play an important role in deriving the two general amplitudes. Thus, we argue that $M^{TsTts}_\mu$ should be used to describe those processes which are resonance dominated \[such as $p ^{12}C\gamma$ near 1.7 MeV and $\pi^\pm p\gamma$ in the $\Delta$(1232) region\], whereas $M^{TuTts}_\mu$ should be used to describe those processes which are exchange-current dominated (such as the $np\gamma$ process). For the $pp\gamma$ process, which exhibits neither strong resonance effects nor significant $u$-channel exchange-current effects, both amplitudes can be used in theory, although this has never been tested in conjunction with experimental data. We use the latter amplitude in our analysis. We emphasize that the general amplitude $M^{TuTts}_\mu$ (not $M^{TsTts}_\mu$) arises naturally for nucleon-nucleon bremsstrahlung [*if*]{} the derivation is guided by the standard meson-exchange model of the two-nucleon interaction.
The amplitude $M^{TuTts}_\mu$ for the $pp\gamma$ process can be written in terms of five invariant amplitudes $F^e_\alpha$ ($\alpha=1,\ldots,5$) as $$M^{TuTts}_\mu = \sum_{\alpha=1}^5 \left[ Q_A\overline{u}(q_f)X_{\alpha\mu}u(q_i)\overline{u}(p_f)g^\alpha u(p_i) + Q_B\overline{u}(q_f)g_\alpha u(q_i)\overline{u}(p_f)Y^\alpha_\mu u(p_i) \right] \ , \label{eq:MTuTt}$$ where $$\begin{aligned} X_{\alpha\mu} &=& F^e_\alpha(u_1,t_p)\left[ \frac{q_{f\mu}+R^{q_f}_\mu}{q_f\cdot K}-\frac{(p_i-q_f)_\mu}{(p_i-q_f)\cdot K} \right] g_\alpha \nonumber \\ && - F^e_\alpha(u_2,t_p) g_\alpha \left[ \frac{q_{i\mu}+R^{q_i}_\mu}{q_i\cdot K}-\frac{(q_i-p_f)_\mu}{(q_i-p_f)\cdot K} \right] \ , \label{eq:Xamu} \\ Y^\alpha_\mu &=& F^e_\alpha(u_2,t_q)\left[ \frac{p_{f\mu}+R^{p_f}_\mu}{p_f\cdot K}-\frac{(q_i-p_f)_\mu}{(q_i-p_f)\cdot K} \right] g^\alpha \nonumber \\ &&- F^e_\alpha(u_1,t_q) g^\alpha \left[ \frac{p_{i\mu}+R^{p_i}_\mu}{p_i\cdot K}-\frac{(p_i-q_f)_\mu}{(p_i-q_f)\cdot K} \right] \ . \label{eq:Yamu}\end{aligned}$$
--- address: | The Johns Hopkins University\ Dept. of Physics and Astronomy\ 3400 N. Charles Street\ Baltimore, MD 21218 author: - 'Edward M. Murphy[@byline]' date: 'September 21, 1998' title: A Prosaic Explanation for the Anomalous Accelerations Seen in Distant Spacecraft --- Introduction ============ Anderson, et al. [@anderson98] have recently modeled the accelerations acting on the Pioneer 10, Pioneer 11, Ulysses, and Galileo spacecraft. They find an anomalous, excess acceleration of $\rm 8.5\times10^{-8}\;cm\;s^{-2}$ directed towards the Sun. They have ruled out excess gravitational forces due to the Galaxy and unidentified planetesimals, errors in the orbital and rotational parameters of the Earth, spacecraft gas leaks, and errors in the planetary ephemeris as explanations for the acceleration. In addition, the authors rule out radiation pressure from thermal radiation generated by the spacecraft radioisotope thermoelectric generators (RTGs). Anderson et al. assume that the thermal radiation generated by the RTGs is isotropically radiated and results in no net force on the spacecraft. However, I believe that this assumption overlooks the fact that the electrical energy produced by the RTGs is dissipated in a non-isotropic manner. Spacecraft designed to travel beyond the inner solar system cannot rely on currently available solar cells to provide power, as the size of the solar arrays would be prohibitively large. Therefore, missions to the outer solar system have used RTGs to provide power. RTGs rely on the thermal energy generated by the radioactive decay of Pu$^{238}$ to heat a semiconductor junction which generates an electrical current.
RTGs have an electrical conversion efficiency of a few percent [@piscane94]. For example, the Ulysses RTGs generate 4500 W of thermal power and produce 280 W of electrical power (at the beginning of the mission). The available power decreases with time due to degradation in the semiconductor junction and, to a much lesser degree, the decay of the Pu$^{238}$ [@piscane94]. The excess thermal power generated by the RTGs is dumped radiatively by cooling fins located on the outer surface of the cylindrical RTG structure (this is true of all the spacecraft considered here). The geometry of the fins is complex and the thermal radiation dissipated from the surface will not be isotropic. However, a cursory examination of the Pioneer and Ulysses RTG designs shows that they are cylindrically symmetric. Even though it is not isotropic, the escaping radiation will not impart a net force on the spacecraft since it is dissipated symmetrically. The same is not true of the electrical energy created by the RTGs. The electrical energy is transported to a main power bus from which it is distributed to the individual subsystems of the satellite to provide power for operating the electronics. These electronics are typically found in a single large electronics bay (which contains most of the essential systems) and science instruments distributed throughout and around the spacecraft. To prevent the electronics from overheating, the waste heat dissipated by the electronics is radiated from the spacecraft by surface radiators. In most cases, radiators are located on the anti-solar side of the spacecraft to prevent the panels from being heated by solar radiation. From conservation of momentum arguments, it is easy to show that the acceleration, $a_P$, produced by an amount of radiated power, $P$, is $a_{P}=P\:(m\:c)^{-1}$ where $m$ is the mass of the spacecraft and $c$ is the speed of light.
This assumes that the radiated power is tightly collimated (i.e. it moves in one direction). In fact, however, the radiation from a flat plate is spread over 2$\pi$ steradians. For a Lambertian source (i.e. one in which the intensity is independent of viewing angle [@boyd83]), the momentum carried away perpendicular to the plate surface will be 2/3 of the total. Ulysses ======= The Ulysses spacecraft is spin stabilized with the rotation axis pointing approximately toward the Earth (and the Sun when the spacecraft is near aphelion). The anti-Earth (anti-solar) side is always in the spacecraft shadow. The majority of the electrical components in the Ulysses spacecraft are located in a single thermal enclosure [@standley98]. The waste heat from the electronics is radiated through a large, flat radiator panel on the anti-solar side of the spacecraft. The interior electronics radiate their heat to the panel, which in turn radiates the heat into space. In addition, the traveling wave tube amplifier (which dissipates 43 W alone) is directly thermally coupled to the surface radiator. Except for the anti-solar side, the spacecraft is covered in multi-layer insulation (MLI) blankets. A large 1.65 meter diameter antenna covers most of the Earth facing side. A power budget for the Ulysses spacecraft for January 1998 [@standley98] indicates that Ulysses’ systems are drawing $231\pm3$ W of electrical power. Of this, I calculate that $27\pm10$ W is dissipated by scientific instruments and heaters outside the main thermal enclosure and another 20 W is radiated by the transmitter. Therefore, in a steady state, the main thermal enclosure must radiate $184\pm13$ W of power. Because the error estimates are systematic, rather than statistical, I have added them directly rather than in quadrature. Some fraction of this power will escape through the MLI thermal blankets and the remainder will be radiated through the large surface radiator on the anti-solar side of the spacecraft.
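The recoil estimate above can be sketched numerically. This is an illustration only; the power and mass in the example are arbitrary values, not those of any of the spacecraft discussed here:

```python
C = 2.998e8  # speed of light, m/s

def recoil_accel(power_w, mass_kg, lambertian=True):
    """Acceleration from anisotropically radiated power, a = f * P / (m c),
    with f = 2/3 for a flat Lambertian radiator and f = 1 if the radiation
    is fully collimated.  Returns m/s^2."""
    f = 2.0 / 3.0 if lambertian else 1.0
    return f * power_w / (mass_kg * C)

# Hypothetical example: 100 W radiated from one side of a 300 kg spacecraft
a = recoil_accel(100.0, 300.0)           # m/s^2
print(round(a * 100.0 / 1e-8, 1))        # in units of 1e-8 cm/s^2 -> 7.4
```

Applied to a radiated power of order $10^2$ W and a spacecraft mass of a few hundred kg, this gives accelerations of order $10^{-7}\,{\rm cm\,s^{-2}}$, the scale of the anomaly.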
MLI blankets typically radiate 8 W m$^{-2}$ [@piscane94] into space. Only the blankets on the sides of the spacecraft will radiate internal heat, because the solar facing blankets of Ulysses allow a net input of heat into the thermal enclosure due to solar heating, though this input power is small ($\sim$ 2 W) compared to electrical power when the spacecraft is at aphelion. When near perihelion, Ulysses compensates for the excess input solar heating by dumping excess electrical energy into resistors on the outside of the spacecraft. A small amount of solar heating also escapes from the spacecraft [@standley98]. I calculate that there are $3.0\pm1.0\:{\rm m}^{2}$ of MLI blankets on the sides of the main thermal enclosure of Ulysses, resulting in $24\pm8$ W escaping through the MLI blankets. This implies that the total power radiated through the spacecraft radiator on the anti-solar side of Ulysses is ($160\pm21$) W. The acceleration produced by this power is $a_{P}=(10.3\pm1.3)\times 10^{-8}\:{\rm cm}\:{\rm s}^{-2}$ assuming a spacecraft mass of 345 kg and that the radiator is a Lambertian source (2/3 of the momentum is carried away perpendicular to the radiator). Since the radiator faces away from the Sun, the direction of this acceleration is toward the Sun. This matches, to within the errors, the anomalous acceleration reported by Anderson et al. [@anderson98] for Ulysses of $a_{P}=(12\pm3)\times10^{-8}\: {\rm cm}\:{\rm s}^{-2}$. Pioneer 10 and 11 ================= Toward the ends of their missions, the Pioneer 10/11 spacecraft were drawing 80 W of electrical power [@anderson98] from their RTGs, which was sufficient to power the essential spacecraft systems and possibly one or two scientific instruments. Of this, 9 W is transmitted as RF power [@anderson98]. The essential electrical systems are located in a cylindrical hub beneath the high-gain antenna. The waste heat generated by the electronics is radiated from a series of fins on the anti-solar side of the spacecraft.
Since the majority of the science instruments have been turned off, essentially all of the 71 W of internally dissipated electrical power is radiated from the fins. Solar heating is negligible at the Pioneers’ distance from the Sun. Assuming that the current mass of Pioneer 10/11 is 250 kg, the radiated power generates $a_{P}=6.3\times10^{-8}\: {\rm cm}\:{\rm s}^{-2}$, again assuming a Lambertian source. However, the radiator fins on the back of the Pioneer spacecraft are highly non-Lambertian sources. In fact, the fins are likely to collimate the outgoing radiation to a significant degree. If the radiation were fully collimated, the resulting acceleration would
--- abstract: 'It was recently observed that sand flowing down a vertical tube sometimes forms a traveling density pattern in which a number of regions with high density are separated from each other by regions of low density. In this work, we consider this behavior from the point of view of kinetic wave theory. Similar density patterns are found in molecular dynamic simulations of the system, and a well defined relationship is observed between local flux and local density – a strong indicator of the presence of kinetic waves. The equations of motion for this system are also presented, and they allow kinetic wave solutions. Finally, we consider the general problem of interacting kinetic waves.' address: 'HLRZ-KFA Jülich, Postfach 1913, W-5170 Jülich, Germany' author: - Jysoo Lee - Michael Leibig title: 'Density Waves in Granular Flow: A Kinetic Wave Approach' --- Systems of granular particles (e.g. sand) exhibit many interesting phenomena, such as segregation under vibration or shear, density waves in the outflow through tubes and hoppers, and, probably most strikingly, the formation of heaps and convection cells under vibration [@s84; @c90; @jn92; @m92]. In granular flows through a narrow vertical tube, Pöschel found [@p92] that the particles do not flow uniformly, but form high density regions which travel as coherent structures with a velocity different from the center of mass velocity. We confirm this finding. However, the motion of these high density regions and the mechanism which is responsible for their formation are not fully understood. In this paper, we present numerical and theoretical evidence that these density waves are of a kinetic nature [@lw55]. Using MD simulations, we measure the dependence of the particle flux on the density. We find a well-defined flux-density relation – an indication that a kinetic wave theory describes the behavior.
A direct measurement of the velocity of these high density regions shows a dependence on the mean density which is in good agreement with the predictions from kinetic wave theory. On the theoretical side, we consider one dimensional equations of motion for the density and the velocity fields in the tube. These equations admit kinetic wave solutions. In order to understand the formation of these high density regions, we consider the general problem of interacting kinetic waves. We first show numerically that a system with an initially random density field evolves to a configuration in which neighboring regions have a high density contrast. At the early stage of development, we can show analytically that the density contrast between nearby regions increases linearly with time. We first discuss the MD simulations of the system, and begin with a brief description of the interparticle force laws that were used in our calculations. The particles interact with each other (or with a wall) only if they are in contact. The force that acts on particle $i$ due to particle $j$ can be divided into two components. The first, $F^{n}_{j \to i}$, is parallel to the vector $\vec{r} \equiv \vec{R_i} - \vec{R_j}$, where $\vec{R_i}$ and $\vec{R_j}$ are the coordinates of the centers of particles $i$ and $j$ respectively. We refer to this as the normal component. The second component, orthogonal to $\vec{r}$, is the shear component $F^{s}_{j \to i}$. Here, $a_{i}$ ($a_{j}$) denotes the radius of particle $i$ ($j$). Also, $m_e$ is the effective mass $m_i m_j / (m_i + m_j)$, and $\vec{v} \equiv d\vec{r}/dt$. The first term in Eq. (\[eq:fn\]) is the Hertzian elastic force, where $k_{n}$ is a material dependent elastic constant. The second term is a velocity dependent friction term, where $\gamma _{n}$ is a normal damping coefficient. The shear component is given as $$\label{eq:fs} F^{s}_{j \to i} = -\gamma _{s} m_e {\vec{v} \cdot \vec{s} \over |s|},$$ where $\vec{s}$ is defined by rotating $\vec{r}$ clockwise by $\pi/2$.
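The pairwise force law described above can be sketched as follows. Since Eq. (\[eq:fn\]) itself is not reproduced in this excerpt, the overlap definition $\delta = a_i + a_j - |\vec r\,|$ and the Hertzian $\delta^{3/2}$ exponent are assumptions of this sketch, not statements from the paper:

```python
import numpy as np

def contact_force(r_i, r_j, v_i, v_j, a_i, a_j, m_i, m_j,
                  k_n=1.0e6, gamma_n=5.0e2, gamma_s=5.0e2):
    """Force on particle i from particle j (2D), following the text:
    a normal component (Hertzian elastic term plus velocity-dependent
    friction) and the shear friction component of Eq. (fs).
    The overlap delta = a_i + a_j - |r| is an assumption of this sketch."""
    r = r_i - r_j
    dist = np.linalg.norm(r)
    delta = a_i + a_j - dist
    if delta <= 0.0:                          # not in contact: no force
        return np.zeros(2)
    n_hat = r / dist                          # unit vector along r
    s_hat = np.array([n_hat[1], -n_hat[0]])   # r rotated clockwise by pi/2
    m_e = m_i * m_j / (m_i + m_j)             # effective mass
    v = v_i - v_j                             # relative velocity dr/dt
    f_n = k_n * delta**1.5 - gamma_n * m_e * np.dot(v, n_hat)
    f_s = -gamma_s * m_e * np.dot(v, s_hat)
    return f_n * n_hat + f_s * s_hat

# Two stationary overlapping discs of radius 0.1: purely normal repulsion
f = contact_force(np.array([0.0, 0.19]), np.array([0.0, 0.0]),
                  np.zeros(2), np.zeros(2), 0.1, 0.1, 1.0, 1.0)
```

The wall interaction follows by taking $a_j = \infty$ and $m_j = \infty$ as stated in the text (in practice, $m_e \to m_i$ and $\delta$ measured from the wall plane).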
The shear component, Eq. (\[eq:fs\]), is simply a velocity dependent friction term similar to the second term in the normal component. Finally, we must specify the interaction between a particle and a wall. The force on particle $i$, in contact with a wall, is given by Eqs. (\[eq:f\]) with $a_{j} = \infty$ and $m_{j} = \infty$. The form of Eqs. (\[eq:f\]) is rather typical in MD simulations of granular material [@mdgranule]. A more complex force law is considered in [@lh93]. For simplicity, we study granular flows in $2$ dimensions and use a fifth order predictor-corrector scheme to integrate the equations of motion, calculating both the positions and velocities of each particle at all times. The tube is modeled by two vertical sidewalls of length $L$ with a separation $W$, and we apply a periodic boundary condition in the vertical direction. Between the sidewalls, particles of radius $0.1$ are initially placed with a uniform density $\rho _o$ (throughout this paper, numerical values are given in CGS units). The particles then fall between the sidewalls under gravity. In Fig. \[fig:mdtube\], we show the time evolution of the density and the velocity fields for $L = 15$ and $W = 1$, measured every $5$ ms. At a given time, we divide the tube into $15$ vertical regions of equal length, and measure the density and the average velocity in each region. These fields are displayed as a vertical strip of square boxes, where each box corresponds to a region in the tube. The grayscale of the box is proportional to the value of the field in that region. The parameters we used in this simulation are $k_{n} = 1.0 \times 10^{6}, \gamma _{n} = \gamma _{s} = 5.0 \times 10^{2}$, with the time step $5.0 \times 10^{-5}$. The initial density $\rho _{o}$ is 25 particles per unit area.
In the figure, we find (1) a region of large density fluctuations is formed out of the initially uniform system, (2) the fluctuations seem to travel with almost constant velocity (different from the center of mass velocity), and (3) there seems to be a strong correlation between the density and the velocity fields. These findings remain true for the simulations we have performed with different values of $\gamma$, $k_{n}$ and $\rho _{o}$, except when $\rho _o$ is very small, in which case a steady state is not reached. These traveling density patterns were first observed in the simulations by Pöschel [@p92]. In order to quantitatively study the correlation between the density and the velocity fields, we measure the local particle flux as a function of the local density in the following manner. Once the system has reached a steady state, we measure the mean velocity $v_i$ and the density $\rho _i$ in region $i$. The flux $j(\rho )$ for a given density $\rho$ is then taken to be $\rho \cdot \langle v (\rho) \rangle$, where $\langle\rangle$ is a time average over all regions which had a particular density $\rho$. The flux-density curve, obtained by averaging over $10{,}000$ iterations, is shown in Fig.\[fig:jrho\]. Here, the parameters are the same as those of Fig.\[fig:mdtube\]. The fact that a well-defined flux-density curve exists suggests that the density waves (traveling density fluctuations) are kinetic in nature. Furthermore, the flux-density curve for the granular flow resembles that of traffic flow, which is considered a prime example of a system showing kinetic waves [@lw55]. One additional piece of evidence that the density waves are of a kinetic nature is their dependence on the initial density $\rho _{o}$.
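The flux-density measurement described above can be sketched as follows. The synthetic samples standing in for the simulation output use a traffic-like velocity law $v(\rho) = -v_0(1-\rho/\rho_{\max})$, which is an assumption for illustration only:

```python
import numpy as np

def flux_density_curve(rho, v, n_bins=20):
    """Estimate j(rho) = rho * <v(rho)> from per-region density and mean
    velocity samples, averaging v over all (region, time) samples falling
    in the same density bin -- the procedure described in the text."""
    edges = np.linspace(rho.min(), rho.max(), n_bins + 1)
    idx = np.clip(np.digitize(rho, edges) - 1, 0, n_bins - 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    j = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            j[b] = centers[b] * v[sel].mean()   # j(rho) = rho <v(rho)>
    return centers, j

# Synthetic stand-in data (downward flow, so velocities are negative)
rng = np.random.default_rng(0)
rho = rng.uniform(1.0, 29.0, 5000)
v = -10.0 * (1.0 - rho / 30.0) + rng.normal(0.0, 0.1, rho.size)
centers, j = flux_density_curve(rho, v)
```

For this synthetic law the minimum of $j(\rho)$ sits near $\rho = 15$, where the slope $dj/d\rho$ changes sign.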
The theory of kinetic waves predicts [@lw55] that small density fluctuations in a uniform density background $\rho _{o}$ travel with a velocity $$\label{eq:kvel} U(\rho _{o}) = {dj(\rho ) \over d\rho } \mid _{\rho = \rho _{o}},$$ which is the slope of the flux-density curve at the mean density. We thus expect a large negative velocity for small $\rho_{o}$, a decrease to zero velocity at $\rho_{o} \approx 15$, with an increasingly large positive velocity as $\rho_{o}$ is increased further. To check this, we measure the wave velocities for several values of $\rho _{o}$ (keeping all
--- abstract: 'The dielectric function for an electron gas with parabolic energy bands is derived in a fractional dimensional space. The static response function shows a good dimensional dependence. The plasma frequencies are obtained from the roots of the dielectric functions. The plasma dispersion shows strong dimensional dependence. It is found that the plasma frequencies in the low dimensional systems are strongly dependent on the wave vector, whereas in the three dimensional system the dependence is weak and the plasma frequency has a finite value at zero wave vector.' author: - 'K. M. Mohapatra' - 'Dr. B. ' --- The electronic and optical properties of low dimensional structures such as quantum wells (QWs) have been described within the fractional dimensional space approach[@Matos]. The electronic and optical properties in a QW with finite barrier height and narrow well width show 3D behavior of the barrier material. This happens since the envelope functions for electrons and holes spread into the barrier region, partially restoring the 3D characteristics of the system. On the other hand, the electronic and optical properties in a finite QW with sufficiently wide well width show 3D characteristics of the well material. Consequently a QW with finite well width and barrier height shows fractional dimensional behavior which is somewhere in between 2D and 3D. This has been demonstrated by Ishida[@Ishida] in the calculation of plasma dispersion in a superlattice. The fractional dimensional approach was introduced by He[@He]: the anisotropic interactions in an anisotropic solid are treated as ones in an isotropic fractional dimensional space, where the dimension is determined by the degree of anisotropy[@He]. Thus the degree of anisotropy determines the dimension $\alpha$ of the system. In quantum well structures the width of the QW can also serve to determine $\alpha$ of the system. The fractional dimensional $\alpha D$ space is not a Euclidean space; its dimension is a spectroscopic dimension which is observed[@Bak]. An integration measure for coordinates in the $\alpha D$ space was constructed by Stillinger[@Stillinger].
The advantage of the $\alpha D$ space approach over the conventional method for calculating different electronic and optical properties in low dimensional systems is that it is easier to apply. For example, the $\alpha D$ space approach has been successfully employed to calculate the exciton binding energy in QWs analytically[@He1; @Matos1], while the conventional method needs involved numerical calculations[@Jai]. Similarly the polaron properties in the $\alpha D$ space have been studied in a simple manner[@Polaron] whereas the conventional method needs quite a bit of computational effort[@Smondyrev]. The technique has also been used to study biexcitons[@Biex1; @Biex2; @Biex3], magnetoexcitons[@Magnet1; @Magnet2], exciton-exciton interaction[@Exex], exciton-phonon interaction[@Exph], the Stark shift of exciton complexes in a weak electric field[@Stark], the refractive index[@Tanguy], impurity and donor states[@Imp1; @Harrison; @Imp2], the Pauli blocking effect[@Pauli], exciton-phonon interaction[@Exphn], exciton-polaron interaction[@Expol] and magnetopolarons[@Magpol]. The absorption spectra in a quantum wire show fractional dimensional space behavior, with the dimension of the system lying between 1 and 3 depending on the size of the system[@Karlsson]. Several experiments have been analyzed using this method[@Boson]. The Luttinger liquid[@Castelliani] and the breakdown of the Fermi liquid due to long range interaction[@Wirefd] in the fractional dimensional space with the dimension between 1 and 2 have been studied. In the fractional dimensional space the plasma frequencies in the long-wavelength limit have been derived from the real part of the dielectric function both in the quantum and classical limits[@Panda]. However, the full treatment of the dielectric function in the $\alpha D$ space for finding the plasma frequency has not been carried out for a Fermi gas. The present paper aims to fill this gap and to study the correlation energy.
Dielectric function =================== In the $\alpha D$ space, the dielectric function at wave vector $q$ and frequency $\omega$ of the external charge is defined as[@Vignale] $$\epsilon_{\alpha D}(q,\omega)=1-v_{\alpha D}(q)\chi^{0}_{\alpha D}(q,\omega),$$ where $v_{\alpha D}(q)$ is the Fourier transform of the Coulomb potential $e^{2}/\epsilon_{\infty}r$ in $\alpha$D space and $\chi^{0}_{\alpha D}(q,\omega)$ is the irreducible polarizability function. The expression for $v_{\alpha D}(q)$ is given as[@Stillinger], $$v_{\alpha D}(q)=\frac{(4\pi)^{\frac{\alpha-1}{2}} \Gamma\biggl(\frac{\alpha-1}{2}\biggr)e^{2}} {\epsilon_{\infty}q^{\alpha-1}}\label{eq:cq},$$ where $\Gamma(x)$ is the Euler gamma function. The irreducible polarizability function is defined as[@Vignale] $$\chi^{0}_{\alpha D}(q,\omega)=\frac{2}{V_{\alpha D}} \sum_{\bf k}\frac{f({\bf k})- f({\bf k}+{\bf q})} {E_{\bf k}-E_{{\bf k}+{\bf q}}+\hbar\omega+i\epsilon}\label{eq:pol1},$$ where $f({\bf k})$ is the Fermi-Dirac distribution function, $V_{\alpha D}$ is the volume in $\alpha$D space and $\epsilon\rightarrow 0^{+}$. From Eq. (\[eq:pol1\]), we find $$\chi^{0}_{\alpha D}(q,\omega)=-\frac{2}{V_{\alpha D}}\sum_{\bf k}f({\bf k}) \left[\frac{1}{E_{{\bf k}+{\bf q}}- E_{\bf k}-\hbar\omega-i\epsilon} +\frac{1}{E_{{\bf k}-{\bf q}}-E_{{\bf k}}+ \hbar\omega+i\epsilon}\right]\label{eq:pol}$$ We consider the zero-temperature limit and the parabolic energy dispersion, $E_{\bf k}=\hbar^2k^2/2m^{\ast}$, where $m^{\ast}$ is the effective mass of the electron.
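As a quick sanity check (an illustration, not part of the paper), Eq. (\[eq:cq\]) reduces to the familiar Coulomb Fourier transforms $4\pi e^2/q^2$ at $\alpha=3$ and $2\pi e^2/q$ at $\alpha=2$:

```python
from math import gamma, pi, isclose

def v_alpha(q, alpha, e2=1.0):
    """Fourier transform of the Coulomb potential in alpha-D space,
    v(q) = (4 pi)^((alpha-1)/2) Gamma((alpha-1)/2) e^2 / q^(alpha-1),
    as in Eq. (cq) with epsilon_infinity set to 1."""
    return (4.0 * pi) ** ((alpha - 1) / 2) * gamma((alpha - 1) / 2) \
        * e2 / q ** (alpha - 1)

q = 0.7
assert isclose(v_alpha(q, 3.0), 4.0 * pi / q**2)   # 3D limit
assert isclose(v_alpha(q, 2.0), 2.0 * pi / q)      # 2D limit
```

The 2D case uses $(4\pi)^{1/2}\,\Gamma(1/2) = 2\sqrt{\pi}\cdot\sqrt{\pi} = 2\pi$.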
The summation over [**k**]{} in the $\alpha D$ space approach is transferred into integration over $k$ and $\theta$ as $$\sum_{\bf k}=\frac{V_{\alpha D}}{(2\pi)^{\alpha}} \frac{2\pi^{\frac{\alpha-1}{2}}}{\Gamma\biggl(\frac{\alpha-1}{2}\biggr)} \int^{k_F}_{0}k^{\alpha-1}dk\int^{\pi}_{0} \sin^{\alpha-2}\theta d\theta\label{eq:int}$$ In the $\alpha D$ space, the Fermi momentum $k_{F}$ is related to $r_{s}$ as $$k_{F}r_{s}a_{B}=\beta_{\alpha}\label{eq:rs},$$ where $a_{B}$ is the Bohr radius and $\beta_{\alpha}=[2^{\alpha-1}\{\Gamma(1+\alpha/2)\}^2]^{1/\alpha}$. $$\begin{aligned} \chi^{0}_{\alpha D}(q,\omega)&=&- \frac{2^{2-\alpha}} {\pi^{\frac{\alpha+1}{2}}\Gamma\biggl(\frac{\alpha-1}{2}\biggr)} \int^{k_F}_{0}k^{\alpha-1}dk \int^{\pi}_{0} \sin^{\alpha-2}\theta d\theta\nonumber\\ & & \left[\frac{1}{E_{q}+\hbar^{2}kq\cos\theta/m^{\ast}- \hbar\omega-i\epsilon}+ \frac{1}{E_{q}-\hbar^{2}kq\cos\theta/m^{\ast}+ \hbar\omega+i\epsilon}\right]\label{eq:pol2}\end{aligned}$$ We have the identity $$\frac{1}{x\pm i\epsilon}=P\biggl[\frac{1}{x}\biggr] \mp i\pi\delta(x)\label{eq:identity},$$ where $P[1/x]$ is principal part of $1/x$ and $\delta(x)$ is the Dirac delta function. Real part of the dielectric function ------------------------------------ Using Eq. (\[eq:identity\]) in Eq. (\[eq:pol2\]), we find $$\begin{aligned} Re[\chi^{0}_{\alpha D}(q,\omega)]&&=- \frac{2^{2-\alpha}} {\pi^{\frac{\alpha+1}{2}}\Gamma\biggl(\frac{\alpha-1}{2}\biggr)} \int^{k_F}_{0}k^{\alpha-1}dk \int^{\pi}_{0}
--- abstract: | Thermal inflation usually requires an inflationary potential with nonrenormalizable operators (NROs). We demonstrate how O’Raifeartaigh models with or without NROs can provide thermal inflation and a solution to the moduli problem, as well as provide SUSY breaking. We then discuss a scenario where generalized O’Raifeartaigh potentials (with NROs) are included in a SUGRA theory, where the supergravity and O’Raifeartaigh potentials provide negative and positive contributions to the cosmological constant, respectively. Tuning these contributions to nearly cancel can provide the present value of the dark energy. PACS numbers: 98.80.Cq, 12.60.Jv, 95.35.+d address: | [*$^{1}$School of Physics, University of Edinburgh, Edinburgh EH9 3JZ, Great Britain*]{}\ [*$^{2}$Department of Physics and Astronomy, Vanderbilt University, Nashville TN 37235, USA*]{} author: - 'Arjun Berera$^{1}$ [^1] and Thomas W. Kephart$^{1,2}$ [^2]' title: 'Inflation and Generalized O’Raifeartaigh SUSY models' --- In Press Physics Letters B 2003 There is considerable belief that the fundamental model of particle physics respects local and/or global supersymmetry at high energy. Inflationary cosmology appears to provide further support to this expectation. Due to the ability of supersymmetry to protect against radiative corrections, such models provide a powerful means to realize the ultra-flat potentials required by inflationary density perturbation constraints. However, alongside this benefit, cosmological implementations of supergravity and SUSY models generally lead to undesired particles, such as the spin 3/2 gravitino in supergravity models [@gravitino] and various spin zero particles of mass $\sim 10^{2-3} {\rm GeV}$ [@moduli].
In particular for cosmological inflation, whether supercooled [@si] or warm [@wi], which ends at conventional high temperature scales, $T \stackrel{>}{\sim} 10^{10} {\rm GeV}$, the overabundance of unwanted SUSY particles is a real problem, sometimes termed the moduli problem [@moduli; @Lyth:1995hj]. Moreover, SUSY must not survive at low energy scales, where physics clearly is not supersymmetric; current limits set by particle physics experiments indicate that SUSY must break above the electroweak scale $\sim 10^3 {\rm GeV}$. It is reasonable to expect that symmetry breaking, and more specifically SUSY breaking, has cosmological implications. For example, one scenario termed thermal inflation [@Lyth:1995hj; @Lyth:1995ka; @Lazarides:ja; @Barreiro:1996dx; @Asaka:1999xd] uses symmetry breaking to overcome the problem of the overabundance of unwanted particles created by SUSY at high temperature. A second problem related to SUSY, for which cosmologists are universally and anxiously awaiting an explanation, is the present-day cosmological constant $\rho_{\Lambda}$. Observations of type Ia supernova data have indicated an accelerating universe [@supernova], which could be explained by a cosmological constant contributing $70 \%$ of the critical density, implying a vacuum energy component $\rho_{\Lambda} \sim 10^{-10} {\rm eV}^4$. Recently the first year WMAP data has independently verified the presence of a cosmological constant, finding $\Omega_{\Lambda} = 0.73 \pm 0.04$ [@wmap]. In short, explaining the size of this vacuum energy is the cosmological constant problem. Recall that spontaneous global SUSY breaking can be accomplished by the O’Raifeartaigh mechanism, which requires at least three chiral supermultiplets. The minimal model has a superpotential of the form $$W(\phi ,\chi ,\eta )=a \chi \left[ \phi ^{2}-M^{2}\right] +m\eta \phi .$$ SUSY is broken since the requirement $\frac{\partial W}{\partial \phi_{i}}=0$, with $\phi_{i}\in\{\phi,\chi,\eta\}$, cannot be satisfied for all three fields.
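The F-term breaking can be checked numerically. A minimal sketch, assuming real field values and illustrative units $a=m=M=1$ (my choices, not the paper's): the scalar potential $V=\sum_i |\partial W/\partial\phi_i|^2$ has a strictly positive minimum, which is the signal of broken SUSY.

```python
import numpy as np

def V(phi, chi, eta, a=1.0, m=1.0, M=1.0):
    """Scalar potential V = sum_i |dW/dphi_i|^2 for
    W = a chi (phi^2 - M^2) + m eta phi, all fields taken real."""
    F_phi = 2.0 * a * chi * phi + m * eta   # dW/dphi
    F_chi = a * (phi**2 - M**2)             # dW/dchi
    F_eta = m * phi                         # dW/deta
    return F_phi**2 + F_chi**2 + F_eta**2

# eta can always be chosen to cancel F_phi, so it suffices to minimize
# the remaining a^2 (phi^2 - M^2)^2 + m^2 phi^2 over phi (a = m = M = 1).
phi = np.linspace(-2.0, 2.0, 400001)
v_min = ((phi**2 - 1.0)**2 + phi**2).min()
print(round(v_min, 4))  # 0.75 > 0: SUSY is broken
```

The minimum sits at $\phi^2 = 1/2$, where $V = 1/4 + 1/2 = 3/4$ in these units; it can be made zero only if either the $\phi^2-M^2$ term or the $m\eta\phi$ term is removed.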
In other words, the three conditions $$\begin{aligned} \phi ^{2}-M^{2}=0, \qquad \phi =0, \qquad 2a\chi \phi +m\eta =0 ,\end{aligned}$$ cannot be simultaneously satisfied. Our purpose is to demonstrate that within their compact structure, these models contain nontrivial cosmological implications. We will begin with a review of thermal inflation, to understand the relevant scales necessary for such scenarios. Generalizations of the O’Raifeartaigh models are then presented and solutions are derived for thermal inflation and the present-day cosmological constant. We then briefly discuss embedding O’Raifeartaigh models in supergravity (SUGRA) and other fundamental theories, as well as particle physics implications of such models. The thermal inflation scenario is comprised of two phases of inflation. The first phase is the normal one, typically motivated by GUT physics and pictured to end, after reheating, at a high temperature $T \stackrel{>}{\sim} 10^{10} {\rm GeV}$. In this phase, the large scale physics is determined, such as density fluctuations. The key new feature that underlies thermal inflation is that it requires the presence of a scalar field $\phi$, often called the flaton, which has a symmetry breaking potential with the properties that at high temperature, $T > V_0^{1/4}$, the symmetry is unbroken with $\phi=0$, where the scale of the potential is $V_0^{1/4} \approx 10^9 {\rm GeV}$. On the other hand, at $T=0$ the symmetry is broken, with the minimum now at $\phi \approx 10^9 {\rm GeV}$ and with the scalar particles acquiring a mass $m_{\phi} \sim 10^{2-3} {\rm GeV}$. Given such a potential, a second phase of inflation, termed thermal inflation, commences. In this picture, for $T > V_0^{1/4}$ the scalar field finite temperature effective potential locks the flaton field at $\phi = 0$ and the universe is in a hot big bang regime.
Once $T < V_0^{1/4}$, the potential energy of this field dominates the energy density of the universe, thereby driving inflation, which to a good approximation is assumed to be an isentropic expansion. Due to the high temperature corrections to the effective potential, in the initial phase of thermal inflation the scalar field remains locked at its high temperature point, $\phi = 0$. However, inflationary expansion rapidly cools the universe, so the effective potential evolves toward its zero temperature form. Eventually, in what is estimated to be $\stackrel{<}{\sim} 15$ e-folds, the scalar field VEV is no longer locked at zero and is able to roll down to its new minimum. The effect of the second phase of inflation is to lower the temperature of the universe from $T \sim 10^9 {\rm GeV}$ to $T \sim 10^3 {\rm GeV}$. This alone does not solve any overabundance problems, since the abundance ratios $n/s$ for all species remain constant. However, subsequent to thermal inflation the scalar field oscillates, thereby producing scalar particles of mass $m_{\phi} \sim 10^{2-3} {\rm GeV}$ and lighter. These particles eventually decay, producing a huge increase in entropy and thereby adequately diluting the abundances of unwanted relics. Finally, in order not to spoil the success of hot big bang nucleosynthesis, the temperature after decay of the scalar particles is constrained to be above $\sim 10 {\rm MeV}$. Note that the desired features of thermal inflation could also occur for a continuous phase transition and a nonisentropic, warm-inflationary type expansion, which dampens the flaton’s motion during its evolution to its new minimum [@wi; @br]. The details of the thermal inflation scenario outlined above can be found in [@Lyth:1995hj; @Lyth:1995ka; @Lazarides:ja; @Barreiro:1996dx; @Asaka:1999xd]. 
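The $\stackrel{<}{\sim} 15$ e-fold estimate can be checked with one line of arithmetic; a minimal sketch, assuming the temperature simply redshifts as $T \propto 1/a$ during the supercooled expansion:

```python
import math

# Thermal inflation cools the universe from T_i ~ V_0^{1/4} ~ 10^9 GeV
# down to T_f ~ 10^3 GeV. With T ~ 1/a, the number of e-folds is
# N = ln(a_f / a_i) = ln(T_i / T_f).
T_i = 1e9  # GeV
T_f = 1e3  # GeV
N = math.log(T_i / T_f)
print(round(N, 1))  # 13.8, consistent with the <~ 15 e-folds quoted above
```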
The key point demonstrated in these papers is that all the desired features of this scenario follow, provided a potential with the properties described above is present. Considerable work on thermal inflation studies the consequences of such potentials, but far fewer works attempt to find explicit models of them. Thermal inflation is typically carried out with potentials containing higher (${>}4$) dimension operators that are suppressed by powers of the Planck mass. In most studies of thermal inflation [@Lyth:1995hj; @Lyth:1995ka; @Lazarides:ja; @Barreiro:1996dx; @Asaka:1999xd], SUSY breaking is handled separately, for example through nonperturbative means such as the Affleck-Dine mechanism. Here we observe that a generalization of the O’Raifeartaigh potential, with one term replaced by a higher dimension operator, can provide SUSY breaking, thermal inflation, and potentially the presentday cosmological constant. Aside from the compactness of this solution, another advantage is that SUSY breaking terms are calculable at tree level in the renormalizable O’Raifeartaigh model, so one has more control in model building. For the generalized O’Raifeartaigh model, loop level calculations would diverge. However, the basic motivation for the higher dimension operators is string theory, which would serve to cut off all divergences and still leaves the model with some degree of control. To treat the cosmological moduli
--- abstract: 'We use metastable NaCl-structure Ti$_{0.5}$Al$_{0.5}$N alloys to probe effects of configurational disorder on adatom surface diffusion dynamics, which control phase stability and nanostructural evolution during film growth. First-principles calculations were employed to obtain potential energy maps of Ti and Al adsorption on an ordered TiN(001) reference surface and a disordered Ti$_{0.5}$Al$_{0.5}$N(001) solid-solution surface. The results show that alloy surface disorder dramatically reduces Ti adatom mobilities, while Al adatom dynamics are less strongly affected.' author: - 'B. Alling' - 'P. Steneteg' - 'C. Tholander' - 'F. Tasnádi' - 'I. Petrov' - 'J. E. Greene' - 'L. Hultman' title: 'Effects of configurational disorder on adatom mobilities on Ti$_{1-x}$Al$_{x}$N(001) surfaces' --- Thin film growth is a complex physical phenomenon controlled by the interplay of thermodynamics and kinetics. This complexity facilitates the synthesis of metastable phases, such as Ti$_{1-x}$Al$_{x}$N alloys, which cannot be obtained under equilibrium conditions and which broaden the range of available physical properties in materials design. Fundamental understanding of elementary growth processes, such as adatom diffusion, governing nanostructural and surface morphological evolution during thin film growth can only be developed by detailed studies of their dynamics at the atomic scale. Research has mostly been carried out using elemental metals, as reviewed in refs. [@Jeong1999; @Antczak2007]. Much less is known about the atomic-scale dynamics of compound surfaces, and particularly little about complex, configurationally disordered, pseudobinary alloys which are presently replacing elemental and compound phases in several commercial applications. Kodambaka et al. [@Kodambaka2000] and Wall et al. [@Wall2004; @Wall2005] used scanning tunneling microscopy to determine surface diffusion activation energies $E_s$ on both TiN(001) and TiN(111). 
However, due to the vast difference between experimental and adatom-hopping time scales, determining diffusion pathways requires theoretical approaches via first-principles methods that are capable of providing a clear atomistic representation on the ps time scale. Gall et al. [@Gall2003surf] employed first-principles calculations to show that $E_{s}$ for Ti adatom diffusion on TiN is much lower on the (001) than the (111) surface and used this diffusional anisotropy to explain the evolution of (111) preferred orientation during growth of essentially strain-free polycrystalline films. Here, we use cubic Ti$_{1-x}$Al$_{x}$N(001), a metastable NaCl-structure pseudobinary alloy, as a model system to probe the role of short-range disorder on cation diffusivities which control phase stability, surface morphology, and nanostructural evolution during growth. Ti$_{1-x}$Al$_{x}$N alloys with x $\sim 0.5$, synthesized by physical vapor deposition (PVD) far from thermodynamic equilibrium [@Hakansson1987; @*Adibi1991; @*Greczynski2011], are commercially important for high-temperature oxidation [@McIntyre1990] and wear-resistant applications [@Prengel1997; @*PalDey2003; @*Mayrhofer2003; @Horling2005]. Alloying TiN with AlN has also been shown to alter surface reaction pathways controlling film texture and nanostructure [@Horling2005; @Beckers2005; @Petrov1993; @Adibi1993b]. Unfortunately, atomic-scale understanding of the growth of these important, and more complex, materials systems is presently rudimentary at best. Surface diffusion on a metal alloy, the CuSn system in ordered configurations and in the dilute limit [@Chen2010PRL], has only recently been considered using first-principles methods [@Ruban2008REV]. 
We employ first-principles calculations using the projector augmented wave method [@Blochl1994] as implemented in the Vienna Ab-Initio Simulation Package (VASP) [@Kresse1993], to determine the energetics of cation adsorption and diffusion on ordered TiN(001) and configurationally-disordered Ti$_{0.5}$Al$_{0.5}$N(001) surfaces. Electronic exchange correlation effects are modeled using the generalized gradient approximation [@Perdew1996]. The plane wave energy cut-off is set to 400 eV and converged k-point meshes are used. TiN(001), for reference, and Ti$_{0.5}$Al$_{0.5}$N(001) surfaces are modeled using slabs with four layers of $3\times3$ in-plane conventional cells with 36 atoms per layer. Calculated equilibrium lattice parameters, $a_0$, of bulk TiN, 4.255 Å, and Ti$_{0.5}$Al$_{0.5}$N, 4.179 Å, are employed. The vacuum layer above the surfaces corresponds to $5.5 a_0$. The calculations are spin polarized, which is found to be important for Ti adatoms, with their partially filled 3d-shell, but not for Al. To investigate diffusion on a configurationally-disordered surface, the Ti$_{0.5}$Al$_{0.5}$N(001) slab is modeled using the special quasirandom structure (SQS) method [@Zunger1990]. We impose a homogenous layer concentration profile and minimize the correlation functions on the first six nearest-neighbor shells for the slab as a whole. [image](potEsurfAndPaths-4){width="90.00000%"} Convergence of diffusion barriers is tested with respect to the geometrical and numerical details of the calculations. $E_s$ results are within 0.04 eV of the converged value, partly due to error cancellation between the effects of treating Ti semicore states as core and the limited number of layers; both are of the order of 0.08 eV, but with opposite signs. Our primary focus is the observed differences in cation dynamics on the two surfaces. 
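The grid-based adsorption-energy mapping described next can be sketched in a few lines; `total_energy` below is a hypothetical stand-in for a DFT total-energy call (e.g. a VASP run), and all numerical values are illustrative, not results from the paper:

```python
# Sketch of sampling E_ads(x, y) = E_slab+ad(x, y) - E_slab - E_atom on a
# fine in-plane grid (the paper uses dx = dy = 0.05 * a0).

def adsorption_energy(e_slab_ad, e_slab, e_atom):
    return e_slab_ad - e_slab - e_atom

def scan_surface(total_energy, e_slab, e_atom, a0, step=0.05, n=20):
    """Sample the adsorption energy on an n x n grid with spacing step * a0."""
    emap = {}
    for i in range(n):
        for j in range(n):
            x, y = i * step * a0, j * step * a0
            emap[(x, y)] = adsorption_energy(total_energy(x, y), e_slab, e_atom)
    return emap

# Toy usage with a flat dummy energy surface (illustrative numbers, in eV).
emap = scan_surface(lambda x, y: -100.0, e_slab=-95.0, e_atom=-1.0, a0=4.255)
print(min(emap.values()))  # -4.0 for this toy surface
```

In the actual workflow each grid point is a separate constrained DFT relaxation rather than a cheap function call; the structure of the scan is the same.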
We begin by calculating the adsorption energy $E^{Al,Ti}_{ads}(x,y)$ for Ti and Al adatoms as a function of positions x and y on both ordered TiN(001) and disordered Ti$_{0.5}$Al$_{0.5}$N(001) surfaces, $$E_{ads}^{Al,Ti}(x,y)=E_{slab+ad}^{Al,Ti}(x,y)-E_{slab}-E_{atom}^{Al,Ti}.$$ $E^{Al,Ti}_{slab+ad}$ is the energy of the slab with an adatom at $(x, y)$, $E_{slab}$ is the energy of the pure slab with no adatoms, and $E^{Al,Ti}_{atom}$ is the energy of an isolated Al or Ti atom in vacuum. We use a fine grid of sampling points, $\Delta x = \Delta y = 0.05a_0$. In each calculation, the adatom is fixed within the surface plane and relaxed in the out-of-plane direction. The upper two layers of the slab are fully relaxed, while the lower two layers are held fixed. Adsorption-energy profiles for Al and Ti atoms on TiN(001) and Ti$_{0.5}$Al$_{0.5}$N(001) surfaces are shown in Figs. 1(a)-1(d). The most favorable sites for Al adatoms on both surfaces are directly above N atoms at bulk cation positions. For the Al atoms shown in Fig. 1(a), $E^{Al}_{ads}$ is -2.54 eV. On Ti$_{0.5}$Al$_{0.5}$N(001), Fig. 1(b), $E^{Al}_{ads}$ varies from -2.39 to -1.52 eV on the bulk cation sites depending on their local environment. Ti adatoms have two stable adsorption sites: fourfold hollows, surrounded by two N and two metal atoms, and the bulk site on-top N. For TiN(001), Fig. 1(c), $E^{Ti}_{ads}$ = -3.50 eV in the hollow site and -3.27 eV above N. On the alloy surface, Fig. 1(d), $E^{Ti}_{ads}$ varies from -3.42 to -2.58 eV in the hollow sites and -3.23 to -2.67 eV in on-top sites. Al-rich environments are much less favorable for both Al and Ti adatoms, as can be seen in the lower right regions of Figs. 1(b) and 1(d). The overall preferred sites for Ti on Ti$_{0.5}$Al
--- abstract: 'Given a Kählerian holomorphic fiber bundle $F {\hookrightarrow}M {\rightarrow}X$, whose fiber $F$ is a compact homogeneous Kähler manifold, we describe the perturbed Hermitian–Einstein equations relative to certain holomorphic vector bundles ${{\mathcal}E} {\rightarrow}M$. With respect to special metrics on ${\mathcal}E$, there is a dimensional reduction procedure which reduces this equation to a system of equations on $X$ known as the twisted coupled vortex equations.' address: Proc. in Global Analysis, Diff. Geometry and Lie Algebras, Balkan Geometry Press, Bucharest 2001, pp. 487 [^1] Solutions of gauge-theoretic equations that are invariant under a group action are of particular interest. The invariant solutions may be interpreted as solutions to an associated set of equations on a lower dimensional space of orbits of the group action. However, can a given system of equations on the lower dimensional space be obtained in this way from an equation on the total space? A positive answer points to the study of the Hermitian–Einstein (HE) equation with respect to special metrics on holomorphic bundles together with some extra data. The purpose of this paper is to outline a construction leading to dimensional reduction of a class of equations which we call the [*perturbed Hermitian–Einstein equations*]{} (briefly, the PHE equations) on a Hermitian holomorphic vector bundle ${{\mathcal}E} {\longrightarrow}M$ where $M$ is a compact Kähler manifold. We stress that the term perturbed has here a delicate interpretation, as will be apparent from the text. In fact, the PHE equations are more general than the HE equations because they possess an extra perturbation term. This extra term arises from the fact that in this case $M$ is the total space of a holomorphic fiber bundle $F {\hookrightarrow}M {\longrightarrow}X$, where $X$ is a compact Kähler manifold and the fiber $F$ is a compact Kählerian homogeneous space. Now ${\mathcal E}$ as a holomorphic vector bundle is obtained via a holomorphic extension of certain holomorphic vector bundles on $M$ and is equipped with an invariant hermitian metric. 
This metric, together with the Kobayashi form of the extension and some natural conditions on $F$, implies that the PHE equation is equivalent to a system of equations on $X$, namely the [*twisted coupled vortex equations*]{}. The overall construction, on which there are several variations, relies on results relating to the representation theory of complex semisimple Lie groups and the Bott–Borel–Weil theorem. In addition, the PHE equation can be obtained as a moment map equation. Here we will outline the general construction of [@BGKfour] leading to the twisted coupled vortex equations (cf. [@BGKone] [@BGKtwo] [@GProne]). The existence theory of the solutions of such twisted coupled equations is discussed via the Hitchin–Kobayashi correspondence in [@BGKthree]. References [@AGone] [@AGtwo] contain an independent study of several aspects of this theory and focus on other questions. Some preliminaries ================== The Kähler manifold $M$ ----------------------- Let us commence by describing the compact homogeneous Kähler manifold $F$. That is, for connected complex Lie groups $G$ and $P$ with $G$ semisimple and $P \subset G$ parabolic, we set $F = G / P \cong U / K$ where $$\label{symmetric} G = {\operatorname{Hol}}(F)_e~, ~U = {\operatorname{Hol}_{{\operatorname{Iso}}}}(F)_e~, ~K = U \cap P~.$$ Furthermore, $F$ is a simply connected algebraic manifold, the groups $U$ and $K$ are connected compact Lie groups, with $U$ semisimple and $K$ the centralizer of a torus (hence $K \subset U$ has maximal rank), and any $G$–invariant hermitian metric on $F$ is Kähler (for further details see [@BE] [@Botttwo] [@Kobtwo]). The equivariant holomorphic vector bundles on $G / P$ are homogeneous vector bundles [@Botttwo] given by representations $(\rho, V_\rho)$ of the parabolic subgroup $P$ $$\label{homogen} \rho {\mapsto}{{\mathcal}V}_\rho = G \times_P V_\rho~.$$ Let $X$ be a compact Kähler manifold and $P_G {\rightarrow}X$ a holomorphic principal $G$–bundle. 
The homogeneous vector bundle ${{\mathcal}V}_\rho$ extends to a holomorphic vector bundle on the associated holomorphic fiber bundle $M = P_G \times_G F = P_G / P$ by the formula $$\label{ext2} {\widetilde}{{\mathcal}V}_\rho \cong P_G \times_P V_\rho \to M = P_G / P~.$$ We call ${\widetilde}{{\mathcal}V}_\rho$ the [*canonical extension*]{} of ${{\mathcal}V}_\rho$ . With regards to the fundamental group ${\Gamma}= \pi_1(X)$, we suppose that $M$ has the structure of a generalized flat bundle [@KTone] $$\label{flatbundle} F {\hookrightarrow}M = {\widetilde}X \times_\Gamma F {\overset {\pi}{{\rightarrow}}} X ~,$$ with holonomy ${\alpha}: \Gamma \to U$ . Letting ${\omega}_F$ and ${\omega}_X$ denote the Kähler forms of $F$ and $X$ respectively, the extension to $M$ of the (invariant) Kähler form ${\omega}_F$ , is given by ${\widetilde}{{\omega}}_F = p^*{\omega}_F/{\alpha}$ , where $p : {\widetilde}X \times F {\rightarrow}F$ , is the natural projection. Then by [@BGKtwo] (Proposition $8.1$), there exists a family of Kähler metrics on $M$ with corresponding weighted Kähler forms $$\label{kform} {\omega}_{{\sigma}} = \pi^* {\omega}_X + {\sigma}{\widetilde}{{\omega}}_F ~,$$ where ${\sigma}> 0$ is a constant parameter. The bundle types of the extension on $M$ ---------------------------------------- Let ${{\mathcal}V}_{\rho_i} = U \times_K V_{\rho_i} \to F = U/K$ be homogeneous holomorphic vector bundles with canonical extensions ${\widetilde}{{\mathcal}V}_{\rho_i} {\rightarrow}M$ for $i=1, 2$ . Further, let ${\mathcal}W_i {\rightarrow}X$ be holomorphic vector bundles and set ${\mathcal}E_i = {{\pi^* {{\mathcal}W}_i} {\otimes}_{\mathbb C} {{\widetilde}{{\mathcal}V}_{\rho_i}}}$ . 
We consider the class of holomorphic vector bundles ${{\mathcal}E} {\rightarrow}M$ given by proper holomorphic extensions of the form $$\label{ext} {\mathbb E}~:~{0 {\rightarrow}{{\mathcal}E}_1 {\longrightarrow}{{\mathcal}E} {\longrightarrow}{{\mathcal}E}_2 {\rightarrow}0}~.$$ Such extensions are classified by the ${\operatorname{Ext}}^1$–functor (see e.g. [@Hart]) which in our case is of the form $$\label{extension5} {\operatorname{Ext}}^1_{{{\mathcal}O}_M}({{\mathcal}E}_2 , {{\mathcal}E}_1) \cong H^{0,1}(M, {{{\mathcal}H}om_{\mathbb C}( {{{\mathcal}E}_2} , {{{\mathcal}E}_1} )} ) \cong H^{0,1} (M, {{\pi^* {{\mathcal}W}} {\otimes}_{\mathbb C} {{\widetilde}{{{\mathcal}V}_{\rho}}}} )~,$$ where we set ${{\mathcal}W} = {{{\mathcal}H}om_{\mathbb C}( {{{\mathcal}W}_2} , {{{\mathcal}W}_1} )}$ and ${{\mathcal}V}_\rho = {{{\mathcal}H}om_{\mathbb C}( {{{\mathcal}V}_{\rho_2}} , {{{\mathcal}V}_{\rho_1}} )}$ . Note that in the latter case we have $\rho = \rho_1 \otimes \rho_2^*$ . For any holomorphic vector bundle ${{\mathcal}W} \to X$ and any homogeneous vector bundle ${{\mathcal}V} \to F$, there is an exact sequence derived from the Borel–Leray spectral sequence [@BGKtwo] [@HZ] : $$\label{edge} \begin{aligned} 0 &{\rightarrow}H^{0,1} (X,
--- author: - '[](mailto:xcolor@ukern.de)' date: ' () [^1]' title: 'Color extensions with the package — various examples' --- \[2007/01/21 v2.11 Color logging test (UK)\] The purpose of this file is to demonstrate a variety of capabilities, including the logging facilities of the package. By playing around with different values of ``, one can observe the different behavior in the `log` file. Predefined colors ================= Color definition and application ================================ Comma-separated and space-separated definitions, tests with named colors, and current-color tests. Color in tables =============== Alternating colored test rows. Color information ================= Type tests of named color definitions. [^1]: This file (`.tex`) is part of the distribution which can be downloaded from the CTAN mirrors `CTAN/macros/latex/contrib/xcolor/` or the homepage `www.ukern.de/tex/xcolor.html`. Please send error reports and suggestions for improvements to `xcolor@ukern.de`.
--- abstract: | Observations of radio halos and relics in galaxy clusters indicate efficient electron acceleration. Protons should likewise be accelerated and, on account of weak energy losses, can accumulate, suggesting that clusters may also be sources of very high-energy (VHE; $E>100$ GeV) gamma-ray emission. We report here on VHE gamma-ray observations of the Coma galaxy cluster with the VERITAS array of imaging Cherenkov telescopes, with complementary Fermi-LAT observations at GeV energies. No significant gamma-ray emission from the Coma cluster was detected. Integral flux upper limits at the 99% confidence level were measured to be on the order of $(2-5)\times 10^{-8}\ {\rm ph.\,m^{-2}\,s^{-1}}$ (VERITAS, $>220\ {\rm GeV}$) and $\sim 2\times 10^{-6}\ {\rm ph.\,m^{-2}\, s^{-1}}$ (Fermi-LAT, $1-3\ {\rm GeV}$), respectively. We use the gamma-ray upper limits to constrain cosmic rays (CRs) and magnetic fields in Coma. Using an analytical approach, the CR-to-thermal pressure ratio is constrained to be $< 16\%$ from VERITAS data and $< 1.7\%$ from Fermi-LAT data (averaged within the virial radius). [These upper limits are starting to constrain the CR physics in self-consistent cosmological cluster simulations and cap the maximum CR acceleration efficiency at structure formation shocks to be $<50\%$. Alternatively, this may argue for non-negligible CR transport processes such as CR streaming and diffusion into the outer cluster regions. ]{} Assuming that the radio-emitting electrons of the Coma halo result from hadronic CR interactions, the observations imply a lower limit on the central magnetic field in Coma of $\sim (2 - 5.5)\,\mu{\rm G}$, depending on the radial magnetic-field profile and on the gamma-ray spectral index. Since these values are below those inferred by Faraday rotation measurements in Coma (for most of the parameter space), this [renders]{} the hadronic model a very plausible explanation of the Coma radio halo. 
Finally, since galaxy clusters are dark-matter (DM) dominated, the VERITAS upper limits have been used to place constraints on the thermally-averaged product of the total self-annihilation cross section and the relative velocity of the DM particles, ${\left\langle \sigma v \right\rangle}$. author: - 'T. Arlen, J. Holder, H. Huan, G. Hughes, T. B. Humensky, A. Imran, P. Kaaret, N. Karlsson, M. Kertzman, Y. Khassen, D. Kieda, H. Krawczynski, F. Krennrich, K. Lee, A. S.' - 'C. Pfrommer, D. A.' --- Galaxy clusters are the largest gravitationally bound objects in the universe, with masses of up to $\sim 10^{15}\, M_{\odot}$. According to the currently favored hierarchical model of cosmic structure formation, larger objects formed through successive mergers of smaller objects, with galaxy clusters sitting on top of this mass hierarchy [see @article:Voit:2005 for a review]. Most of the cluster mass is in the form of dark matter [@article:DiaferioSchindlerDolag:2008]. Baryonic gas making up the intra-cluster medium (ICM) contributes about 15% of the total cluster mass and individual galaxies account for the remainder (about 5%). The ICM is a hot ($T\sim 10^{8}$ K) plasma emitting thermal bremsstrahlung in the soft X-ray regime [see, e.g., @article:Petrosian:2001]. This plasma has been heated primarily through collisionless structure-formation shocks that form as a result of the hierarchical merging and accretion processes. Such shocks and turbulence in the ICM gas, in combination with intra-cluster magnetic fields, also provide a means to accelerate particles efficiently [see, e.g., @article:ColafrancescoBlasi:1998; @article:Ryu_etal:2003]. Many clusters feature megaparsec scale halos of nonthermal radio emission, indicative of a population of relativistic electrons and magnetic fields permeating the ICM [@article:Cassano_etal:2010]. There are two classes of models for the origin of these radio halos. In the “hadronic model”, the radio-emitting electrons are secondaries produced in inelastic collisions of CR protons (accelerated by structure-formation shocks, AGN, and stellar winds) with the thermal ICM [@article:EnsslinPfrommerMiniatiSubramanian:2011]. 
In the “re-acceleration model”, a long-lived pool of 100-MeV electrons—previously accelerated by formation shocks, galactic winds, or jets of active galactic nuclei (AGN)—interacts with plasma waves that are excited during states of strong ICM turbulence, e.g., after a cluster merger. This may result in second order Fermi acceleration and may produce energetic electrons ($\sim 10$ GeV) sufficient to explain the observable radio emission [@article:SchlickeiserSieversThiemann:1987; @article:BrunettiLazarian:2010]. Observations of possibly nonthermal emission from clusters in the extreme ultraviolet [EUV; @article:SarazinLieu:1998] and hard X-rays [@article:RephaeliGruber:2002; @article:Fusco-Femiano_etal:2004; @article:Eckert_etal:2007] may provide further indication of relativistic particle populations in clusters, although the interpretation of these observations as nonthermal diffuse emission has been disputed on the basis of more sensitive observations [see, e.g., @article:Ajello_etal:2009; @article:Ajello_etal:2010; @article:Wik_etal:2009]. Galaxy clusters have, for many years, been proposed as sources of gamma rays. If shock acceleration in the ICM is an efficient process, a population of highly relativistic CR protons and heavy ions is to be expected in the ICM. The main energy-loss mechanism for CR hadrons at high energies is pion production through the interaction of CRs with nuclei in the ICM. Pions are short lived and decay. The decay of neutral pions produces gamma rays and the decay of charged pions produces muons, which then decay to electrons and positrons. Due to the low density of the ICM ($n_{\mathrm{ICM}}\sim 10^{-3}$ cm$^{-3}$), the large size and the volume-filling magnetic fields in the ICM, CR hadrons will be confined in the cluster on timescales comparable to, or longer than
--- abstract: 'The Auger Engineering Radio Array (AERA) aims at the detection of air showers induced by high-energy cosmic rays. As an extension of the Pierre Auger Observatory, it measures information complementary to the particle detectors, fluorescence telescopes and the muon scintillators of the Auger Muons and Infill for the Ground Array (AMIGA). AERA is sensitive to all fundamental parameters of an extensive air shower, such as the arrival direction, energy and depth of shower maximum. Since the radio emission is induced purely by the electromagnetic component of the shower, AERA is well suited for separate measurements of the electrons and muons in the shower when combined with a muon counting detector such as AMIGA. In addition to the depth of the shower maximum, the ratio of the electron and muon numbers serves as a measure of the primary particle mass.' address: - '^1^ Institute of Nuclear Physics, Karlsruhe Institute of Technology, Karlsruhe, Germany' - '^2^ Observatorio Pierre Auger, Av. San t' --- AERA measures the radio emission of extensive air showers. It is the radio extension of the Pierre Auger Observatory, located in the Province of Mendoza in Argentina. With its area of 17km$^2$ AERA is the world’s largest experiment in the field of cosmic ray radio detection. The coincident measurement with the other low energy extensions of the Pierre Auger Observatory [@auger] enables the simultaneous measurement of a number of air shower properties. The Pierre Auger Observatory is an experiment for ultra-high-energy cosmic rays [@auger]. It combines different detection techniques to obtain complementary information about extensive cosmic ray air showers. 1660 water-Cherenkov stations form the surface detector (SD) and are distributed with a spacing of 1500m over an area of 3000km$^2$. They sample the secondary shower particles that reach the ground. 
Extensive air showers induce radiation in the ultraviolet range due to excitation of atmospheric nitrogen by the shower particles. On moonless clear nights, this radiation is detected by 27 fluorescence telescopes at four sites, overlooking the SD area and enabling the hybrid detection of showers. Several enhancements, aiming at lower energies down to 10$^{17}$eV, were installed in one part of the Observatory. The Auger Muons and Infill for the Ground Array (AMIGA) [@AMIGA] covers an area of about 20km$^2$. In AMIGA the spacing between the water-Cherenkov stations is reduced to 750m, yielding full efficiency for air showers with a primary energy down to $10^{17.5}$eV. Buried muon scintillators, at 2.3m depth, accompany the water-Cherenkov stations for a better separation of electrons and muons in a shower. Seven stations with muon detectors, forming the “Unitary Cell”, have been taking data since 2013. Three high-elevation air fluorescence telescopes (High Elevation Auger Telescope – HEAT) observe low energy showers higher in the atmosphere. AERA is located inside the region of the dense stations, enabling a combined detection of showers and cross calibration of the detectors. The radio emission of air showers is mainly induced by two mechanisms: a) the geomagnetic effect due to deflection of the charged particles of the shower in the geomagnetic field [@geomagn], and b) the Askaryan effect due to positron annihilation and ionization of atmospheric molecules by the shower particles, which cause an excess of negative charges in the shower front [@askaryan]. Hence, the radio emission contains information on the development of the shower, in particular the position of the shower maximum X$_{\textrm{max}}$. In contrast to fluorescence detection, radio detection is not limited to clear, moonless nights: data taking is possible for almost 100% of the time. Hence, in comparison with the fluorescence detector (FD), an X$_{\textrm{max}}$ measurement is possible around the clock. 
Furthermore, radio detection – in contrast to particle detection – becomes more efficient for more inclined showers due to a larger footprint of the emission at ground [@inclined]. The Auger Engineering Radio Array - detector description ======================================================== AERA was built in three phases. Starting in September 2010, AERA24 was deployed with 24 radio detection stations (RDS) featuring logarithmic periodic dipole antennas (LPDA) [@LPDA] with a spacing of 144m between the stations. AERA24 was used to investigate the radio emission itself and to develop techniques for radio detection of cosmic rays. For the second phase, AERA124, another 100 RDS were installed in May 2013. These RDS feature a different antenna type, the so-called butterfly antenna [@butterfly], and improved hardware. They are distributed with spacings of 250m and 375m, and together with the first 24 antennas, cover an area of about 6km$^2$. With this configuration AERA measures several thousand cosmic ray events per year from a primary energy of about 10$^{17}$eV up to the highest energies. In March 2015, 25 additional butterfly antenna stations were installed on a grid with a 750m spacing, aiming mainly at the detection of horizontal air showers (&gt;55$^{\circ}$ zenith angle). With some additional prototype stations, AERA153 now consists of 153 RDS on an array of about 17km$^2$. A map of AERA with the different phases and the other detectors of the Pierre Auger Observatory is shown in figure \[fig:AERAmap\]. The different station types are shown in figure \[fig:detectors\]. All the devices are fully integrated in the hardware. They work completely autonomously in the field and send the collected data upon request to a central data facility via a WiFi link. The antennas are aligned along the magnetic north-south and east-west directions. They are triggered externally by the particle and fluorescence detectors as well as by internal triggers. The signals are bandpass-filtered to the range of 30 – 80MHz. 
Results from AERA ================= AERA was built for several different purposes: to improve the understanding of the radio emission mechanisms, for cosmic ray physics in the transition region between galactic and extragalactic cosmic rays, and to test the feasibility of a large-scale radio array for the highest energies. Probing the theory of radio emission ------------------------------------ The different radio-emission mechanisms produce differently polarized radio emission – linearly polarized for the geomagnetic effect and radially polarized towards the shower axis for the Askaryan effect. By measuring the polarization of the radio emission, it is possible to study the contributions of the different effects. Since the contribution from the geomagnetic effect depends on the angle to the geomagnetic field, this contribution is different for different detector sites. Polarization measurements in AERA revealed a radial component with a mean contribution of 14% [@polarization], aside from the linear polarization. These measurements agree well with the theory of the geomagnetic and Askaryan emission processes. Properties of the primary cosmic ray particle --------------------------------------------- The main properties of a cosmic ray are its direction, energy and mass. Radio measurements of AERA are sensitive to all of these properties. [*Arrival direction of the primary cosmic ray particle:*]{} The arrival direction is given by the shower axis. This axis is reconstructed from the timing information of the radio pulses in the radio stations using a wavefront model for the radio emission. For this, a plane wave is used as first-order approximation for the wavefront shape. The reconstructed direction is in good agreement with the direction from the SD. [*Energy of the primary cosmic ray particle:*]{} The energy contained in the radio emission yields information about the primary cosmic ray energy. 
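As a sketch of how such an estimator works in practice, using the calibration quoted below (a radiation energy of 15.9 MeV at a cosmic-ray energy of 1 EeV, with quadratic scaling); the helper name is illustrative and not part of the AERA software:

```python
import math

# Invert the quadratic scaling E_rad = A * (E_CR / 1 EeV)^2, with the
# calibration constant A = 15.9 MeV. Illustrative helper, not AERA code.
A_MEV = 15.9

def cosmic_ray_energy_eev(e_rad_mev):
    """Estimate the cosmic-ray energy in EeV from the radiation energy in MeV."""
    return math.sqrt(e_rad_mev / A_MEV)

print(cosmic_ray_energy_eev(A_MEV))      # 1.0 EeV by construction
print(cosmic_ray_energy_eev(4 * A_MEV))  # 2.0 EeV: 4x radiation -> 2x energy
```

The square-root inversion is just the quadratic, coherent-emission scaling read backwards: quadrupling the measured radiation energy doubles the inferred cosmic-ray energy.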
To calculate the radiation energy, the measured electric-field strength of the 30 – 80MHz radiation at the station positions is converted to the energy density. We use a two-dimensional lateral distribution function (2D-LDF) [@2dldf], which takes into account asymmetries due to the combined geomagnetic and Askaryan effects, to interpolate the energy density. The integral over this 2D-LDF corresponds to the total radiation energy. A calibration against the SD shows that the radiation energy is 15.9MeV for a cosmic ray energy of 1EeV [@energy]. It scales quadratically with the cosmic ray energy because of the coherent character of the emission. This radiation energy can therefore be used as an energy estimator. In AERA, an energy resolution of 22% for a dataset of AERA24 events, and of 17% for a subset of events with high multiplicity ($\geq$ 5 radio stations) has been found. [*Shower maximum and mass composition:*]{} The depth in the atmosphere at which the number of secondary particles is maximum is called the shower maximum X$_{\textrm{max}}$. It is strongly correlated with the mass of the primary particle. The radio emission is mainly produced
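The energy calibration described above can be inverted into a simple energy estimator. A minimal sketch (the function name and interface are ours, only the 15.9 MeV calibration value and the quadratic scaling come from the text):

```python
def primary_energy_eev(radiation_energy_mev, e_rad_at_1eev_mev=15.9):
    """Estimate the primary cosmic-ray energy in EeV from the measured
    radiation energy in MeV, inverting the quadratic scaling
    E_rad = E_rad(1 EeV) * (E_CR / 1 EeV)^2 of the coherent emission."""
    return (radiation_energy_mev / e_rad_at_1eev_mev) ** 0.5
```

For example, quadrupling the measured radiation energy corresponds to doubling the estimated primary energy.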
--- abstract: 'We provide a compact analytic formula to compute the spin of the black hole produced by the coalescence of two black holes following a quasi-circular inspiral. Without additional fits beyond those already available for binaries with aligned or antialigned spins, but with a minimal set of assumptions, we derive an expression that can model generic initial spin configurations and mass ratios, thus covering all of the 7-dimensional space of parameters. A comparison with simulations already shows very accurate agreement with all of the numerical data available to date, but we also suggest a number of ways in which our predictions can be further improved.' author: - Luciano Rezzolla - Enrico Barausse - Ernst Nils Dorband - Denis Pollney - Christian Reisswig - Jennifer Seiler - Sascha Husa bibliography: - 'published\_version.bib' title: On the final spin from the coalescence of two black holes --- The evolution of black hole binary systems is one of the most important problems for general relativity, and more recently for astrophysics, as such systems enter the realm of observation. Recent advances in numerical relativity have made it possible to cover the entire range of the inspiral process, from large separations at which post-Newtonian (PN) calculations provide accurate orbital parameters, through the highly relativistic merger, to ringdown. For many studies of astrophysical interest, such as many-body studies of galactic mergers or hierarchical models of black-hole formation, however, it is impractical to carry out evolutions with the full Einstein, or even post-Newtonian, equations. Fortunately, recent binary black-hole evolutions in full general relativity have shown that certain physical quantities can be estimated to good accuracy if the initial encounter parameters are known.
In particular, this paper develops a rather simple and robust formula for determining the spin of the black-hole remnant resulting from the merger of rather generic initial binary configurations. To appreciate the spirit of our approach it can be convenient to think of the inspiral and merger of two black holes as a mechanism which takes, as input, two black holes of initial masses $M_{1}$, $M_{2}$ and spin vectors $\boldsymbol{S}_{1}$, $\boldsymbol{S}_{2}$ and produces, as output, a third black hole of mass $M_{\rm fin}$ and spin $\boldsymbol{S}_{\rm fin}$. In conditions of particular astrophysical interest, the inspiral takes place through quasi-circular orbits since the eccentricity is removed quickly by the gravitational-radiation reaction [@Peters:1964]. Furthermore, at least for nonspinning equal-mass black holes, the final spin does not depend on the value of the eccentricity as long as it is not too large [@Hinder:2007qu]. The determination of $M_{\rm fin}$ and $\boldsymbol{S}_{\rm fin}$ from the knowledge of $M_{1,2}$ and $\boldsymbol{S}_{1,2}$, is of great importance in several fields. In astrophysics, it provides information on the properties of isolated stellar-mass black holes produced at the end of the evolution of a binary system of massive stars. In cosmology, it can be used to model the distribution of masses and spins of the supermassive black holes produced through the merger of galaxies (see ref. [@Berti2008] for an interesting example). In addition, in gravitational-wave astronomy, the a-priori knowledge of the final spin can help the detection of the ringdown. What makes this a difficult problem is clear: for binaries in quasi-circular orbits the space of initial parameters for the final spin has seven dimensions (*i.e. *, the mass-ratio $q\equiv M_2/M_1$ and the six components of the spin vectors). 
A number of analytical approaches have been developed over the years to determine the final spin, either exploiting the dynamics of point-particles [@Hughes:2002ei; @Buonanno:07b] or the PN approximation [@Gergely:07], or using more sophisticated approaches such as the effective-one-body approximation [@Buonanno:06cd]. Ultimately, however, computing $\boldsymbol{a}_{\rm fin} \equiv \boldsymbol{S}_{\rm fin}/M^2_{\rm fin}$ accurately requires the solution of the full Einstein equations and thus the use of numerical-relativity simulations. Several groups have investigated this problem over the last couple of years [@Campanelli:2006vp; @Pollney:2007ss; @Bruegmann:2007zj; @Rezzolla-etal-2007; @Marronetti07tbgs; @Rezzolla-etal-2007b]. While the recent possibility of measuring accurately the final spin through numerical-relativity calculations represents enormous progress, the complete coverage of the full parameter space solely through simulations is not a viable option. As a consequence, work has been done to derive analytic expressions for the final spin which would model the numerical-relativity data but also exploit as much information as possible either from perturbative studies, or from the symmetries of the system [@Pollney:2007ss; @Rezzolla-etal-2007; @BoyleKesdenNissanke:07; @BoyleKesden:07; @Marronetti07tbgs; @Rezzolla-etal-2007b]. In this sense, these approaches do not amount to a blind fitting of the numerical-relativity data, but, rather, use the data to construct a physically consistent and mathematically accurate modelling of the final spin. Despite a concentrated effort in this direction, the analytic expressions for the final spin could, at most, cover 3 of the 7 dimensions of the space of parameters [@Rezzolla-etal-2007b]. Here, we show that without additional fits and with a minimal set of assumptions it is possible to obtain the extension to the complete space of parameters and reproduce all of the available numerical-relativity data.
Analytic fitting expressions for $\boldsymbol{a}_{\rm fin}$ have so far been built using binaries having spins that are either *aligned* or *antialigned* with the initial orbital angular momentum. This is because in this case both the initial and final spins can be projected in the direction of the orbital angular momentum and it is possible to deal simply with the (pseudo)-scalar quantities $a_1$, $a_2$ and $a_{\rm fin}$ ranging between $-1$ and $+1$. If the black holes have *equal mass* but *unequal* spins that are either *parallel* or *antiparallel*, then the spin of the final black hole has been shown to be accurately described by the simple analytic fit [@Rezzolla-etal-2007] $$\label{eqmass_uneqspin} a_{\rm fin}(a_1,a_2)=p_0 + p_1 (a_1 + a_2) + p_2 (a_1 + a_2)^2\,,$$ where $p_0 = 0.6883 \pm 0.0003$, $p_1 = 0.1530 \pm 0.0004$, and $p_2 = -0.0088 \pm 0.0005$. When seen as a power series of the initial spins, expression (\[eqmass\_uneqspin\]) suggests an interesting physical interpretation. Its zeroth-order term, in fact, can be associated with the (dimensionless) orbital angular momentum not radiated in gravitational waves and amounting to $\sim 70\%$ of the final spin at most. The first-order term, on the other hand, can be seen as the contributions from the initial spins and from the spin-orbit coupling, amounting to $\sim 30\%$ at most. Finally, the second-order term includes the spin-spin coupling, with a contribution to the final spin of $\sim 4\%$ at most.
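As a numeric illustration, the equal-mass fit can be evaluated directly. A minimal sketch (the function name is ours; the coefficients are the central values quoted above, errors omitted):

```python
def a_fin_equal_mass(a1, a2):
    """Final dimensionless spin for equal-mass binaries with spins
    (anti)parallel to the orbital angular momentum, from the quadratic
    fit a_fin = p0 + p1*(a1+a2) + p2*(a1+a2)^2."""
    p0, p1, p2 = 0.6883, 0.1530, -0.0088  # central fit values
    s = a1 + a2
    return p0 + p1 * s + p2 * s**2
```

For two nonspinning holes this returns the zeroth-order term $p_0 \simeq 0.69$, the non-radiated orbital contribution alone; the fit is symmetric under exchange of the two spins, as it must be for equal masses.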
If the black holes have *unequal mass* but spins that are *equal* and *parallel*, the final spin is instead given by the analytic fit [@Rezzolla-etal-2007b] $$\begin{aligned} \label{eqspin_uneqmass} &&a_{\rm fin}(a,\nu)=a+s_{4}a^2 \nu+s_{5}a \nu^2+t_{0} a\nu+ \nonumber \\ && \hskip 1.75cm 2\sqrt{3}\nu+t_2\nu^2+t_{3}\nu^3\,,\end{aligned}$$ where $\nu$ is the symmetric mass ratio $\nu \equiv M_1M_2/(M_1+M_2)^2$, and where the coefficients take the values $s_4 = -0.129 \pm 0.012$, $s_5 = -0.384 \pm 0.261$, $t_0 = -2.686 \pm 0.065$, $t_2 = -3.454 \pm 0.132$, $t_3 = 2.353 \pm 0.548$. Although obtained independently in [@Rezzolla-etal-2007] and [@Rezzolla-etal-2007b], expressions (\[eqmass\_uneqspin\]) and (\[eqspin\_uneqmass\]) are compatible as can be seen by considering (\[eqspin\_uneqmass\]) for equal-mass binaries ($\nu =1/4$) and verifying that the following relations hold within the computed error-bars $$\label{relations} p_0= \frac{\sqrt{3}}{2} + \frac{t_2}{16} + \frac{t_3}{64}\,, \quad p_1 = \frac{1}{
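The stated compatibility can be checked numerically with the central values of the coefficients. A sketch (names are ours): evaluating the unequal-mass fit at $a=0$, $\nu=1/4$ should reproduce $p_0 = 0.6883$ within the quoted uncertainties, as should the first relation above.

```python
import math

def a_fin_equal_spin(a, nu):
    """Final spin for unequal masses with equal, parallel spins,
    using the central values of the fit coefficients."""
    s4, s5, t0, t2, t3 = -0.129, -0.384, -2.686, -3.454, 2.353
    return (a + s4 * a**2 * nu + s5 * a * nu**2 + t0 * a * nu
            + 2.0 * math.sqrt(3.0) * nu + t2 * nu**2 + t3 * nu**3)

# First consistency relation: p0 = sqrt(3)/2 + t2/16 + t3/64
p0_from_relation = math.sqrt(3.0) / 2.0 + (-3.454) / 16.0 + 2.353 / 64.0
```

Both evaluations give about 0.687, within a few parts in a thousand of $p_0$, consistent with the quoted error bars.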
--- author: - | Chang-han Rhee\ Peter W. Glynn\ Stanford University\ Stanford, CA 94305, U.S.A. title: 'A NEW APPROACH TO UNBIASED ESTIMATION FOR SDE’S' --- ABSTRACT {#abstract .unnumbered} ======== In this paper, we introduce a new approach to constructing unbiased estimators when computing expectations of path functionals associated with stochastic differential equations (SDEs). Under suitable conditions, our approach achieves the canonical “square root" convergence rate associated with unbiased Monte Carlo. INTRODUCTION {#sec:intro} ============ We have recently developed a general approach to constructing unbiased estimators, given a family of biased estimators. It turns out that the conditions guaranteeing its validity are closely related to those associated with multi-level Monte Carlo methods; see Rhee and Glynn (2012) for details and a more complete discussion of the theory. In this paper, we briefly describe the idea in the setting of computing solutions of stochastic differential equations and provide an initial numerical exploration intended to shed light on the method’s potential effectiveness. As we will see below, the conditions under which our estimator produces an algorithm with “square root convergence rate" essentially coincide with the conditions required by multi-level Monte Carlo to converge at the same rate. In particular, suppose that we wish to compute an expectation of the form $\alpha = E k(X)$, where $X =(X(t): t\geq0)$ is the solution to the SDE $$\begin{aligned} dX(t) = \mu(X(t))dt + \sigma(X(t))dB(t), \label{eq:1}\end{aligned}$$ $B =(B(t):t\geq0)$ is $m$-dimensional standard Brownian motion, $k: C[0, \infty) \to R$, and $C[0, \infty)$ is the space of continuous functions mapping $[0,\infty)$ into $R^d$. In general, the random variable (rv) $k(X)$ cannot be simulated exactly, because the underlying infinite-dimensional object $X$ cannot be generated exactly. Instead, one typically approximates $X$ via a discrete-time approximation $X_h(\cdot)$.
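A minimal sketch of one such discrete-time approximation, Euler time-stepping, in the scalar case (the interface is ours; the Brownian increments over steps of size $h$ are $N(0,h)$ draws):

```python
import numpy as np

def euler_path(mu, sigma, x0, h, n_steps, rng):
    """Simulate X_h at times 0, h, 2h, ... via Euler time-stepping:
    X_h((k+1)h) = X_h(kh) + mu(X_h(kh))*h + sigma(X_h(kh))*dB_k,
    with dB_k ~ N(0, h) independent Brownian increments."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(h))
        x[k + 1] = x[k] + mu(x[k]) * h + sigma(x[k]) * dB
    return x
```

Values at intermediate times can then be defined by, for example, linear interpolation between the grid points.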
For example, the simplest such approximation is the Euler time-stepping algorithm given by $$\begin{aligned} X_h((k+1)h) = X_h(kh) + \mu(X_h(kh))h + \sigma(X_h(kh)) (B((k+1)h) - B(kh)) \label{eq:2}\end{aligned}$$ that defines $X_h$ at the time points $0, h, 2h, ...,$ with $X_h$ defined at intermediate values via (for example) linear interpolation. Because (\[eq:2\]) is only an approximation to the dynamics represented by (\[eq:1\]), the rv $k(X_h)$ is only an approximation to $k(X)$, and consequently $k(X_h)$ is a biased estimator for the purpose of computing $\alpha$. The traditional means of dealing with this is to intelligently select the step size $h$ and number of independent replications $R$ as a function of the computational budget $c$, so as to maximize the rate of convergence. However, as pointed out by Duffie and Glynn (1995), such biased numerical schemes inevitably lead to Monte Carlo estimators for $\alpha$ that exhibit slower convergence rates than the “canonical" order $c^{-1/2}$ rate associated with Monte Carlo in the presence of unbiased finite variance estimators. However, several years ago, Giles (2008) introduced an intriguing multi-level idea to deal with such biased settings that can dramatically improve the rate of convergence and can even, in some settings, achieve the canonical “square root" convergence rate associated with unbiased Monte Carlo. His approach does not construct an unbiased estimator, however. Rather, the idea is to construct a family of estimators (indexed by the desired error tolerance $\epsilon$) that has controlled bias. In this paper, we show how it is possible, in a similar computational setting, to go one step further and to produce (exactly) unbiased estimators. The remainder of this paper is organized as follows: We discuss the idea in Section 2 of this paper, while Section 3 is devoted to an initial computational exploration of this approach. THE IDEA {#sec:idea} ======== Suppose that we can construct a family of approximations $X_{h_n}$ to $X$, with step sizes $h_n \to 0$, satisfying: 1. $Ek(X_{h_n}) = Ek(X) + O(h_n)$ as $h_n \to 0$; 2.
$E| k(X_{h_n}) - k(X) |^2 = O(h_n^{2r})$ as $h_n \to 0$ for some $r >0$, where $O(f(n))$ represents a function which is bounded by some constant multiple of $f(\cdot)$ as $h_n \to 0$. Assuming, as is often the case for such discretization schemes, that the scheme generates normal rv’s that are intended to mirror the Brownian increments of the process $B$ driving the SDE (as in the Euler scheme (\[eq:2\]) above), the easiest way to algorithmically obtain an approximating sequence $X_{h_n}$ to $X$ in which the $X_{h_n}$’s are jointly defined on the same probability space is by successive binary refinement, so that $h_n = 2^{-n}$. In this setting, the new Brownian motion values ($B(j2^{-(n+1)})$: $j$ odd) required at discretization $2^{-(n+1)}$ can be obtained from the existing values ($B(j2^{-n}): j \geq0$) by generating $B((2k+1) 2^{-(n+1)})$ from its conditional distribution given $B(k2^{-n})$ and $B((k+1)2^{-n})$. On the other hand, one’s ability to obtain i and ii depends both on the path functional $k$ and on one’s choice of discretization scheme. In particular, suppose that one has established that the discretization $X_h$ exhibits strong order $r$. This implies that $$\begin{aligned} E \sup\{ | X_h(kh) - X(kh) |^{2r} : 0\leq k \leq \lfloor t/h\rfloor \} = O(h^{2r}).\end{aligned}$$ Thus, if $k$ is (for example) a “Lipschitz final value" expectation so that $k(x) = g(x(1))$ for some Lipschitz function $g: R^d \to R$, ii is satisfied. In addition, if $k$ is further assumed to be smooth with $| k(X) |$ integrable, then i is satisfied whenever the discretization $X_h$ is known to be of weak order 1 or higher. This is the case for many standard discretization schemes for SDEs. Note that each of the $k(X_{2^{-n}})$’s is a biased estimator for $\alpha = E k(X)$.
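The binary-refinement step just described, drawing each new midpoint from its conditional distribution given the neighboring grid values (a Brownian bridge with mean equal to the endpoint average and variance $h/4$), can be sketched as follows (names are illustrative):

```python
import numpy as np

def refine_brownian(B, h, rng):
    """Given Brownian values B at times 0, h, 2h, ..., return values on
    the refined grid of spacing h/2. Each midpoint is drawn from the
    conditional (bridge) law N((B_left + B_right)/2, h/4); existing
    grid values are kept unchanged."""
    B = np.asarray(B, dtype=float)
    mids = 0.5 * (B[:-1] + B[1:]) \
        + rng.normal(0.0, np.sqrt(h / 4.0), size=len(B) - 1)
    out = np.empty(2 * len(B) - 1)
    out[0::2] = B      # coarse-grid values are preserved
    out[1::2] = mids   # newly sampled midpoints
    return out
```

Iterating this function couples all the $X_{h_n}$'s on the same probability space, since every refinement reuses the previously generated Brownian values.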
To obtain an unbiased estimator, observe that ii implies the existence of $p > 0$ such that $$\begin{aligned} \sum_{n = 1}^\infty 2^{np}\, E | k(X_{2^{-n}}) - k(X_{2^{-(n-1)}}) | ^{2r} < \infty.\end{aligned}$$ Consequently, $$\begin{aligned} \sum_{ n=1}^\infty 2^{np} | k(X_{2^{-n}}) - k(X_{2^{-(n-1)}}) |^{2r} < \infty\quad a.s.\end{aligned}$$ from which it follows that $$\begin{aligned} | k(X_{2^{-n}}) - k(X_{2^{-(n-1)}}) | = O(2^{-np})\quad a.s.\end{aligned}$$ as $n \to \infty$, and hence (in view of ii), $$\begin{aligned} k(X) = k(X_1) + \sum_{ n=1}^\infty \left( k(X_{2^{-n}}) - k(X_{2^{-(n-1)}}) \right)\quad a.s.\end{aligned}$$ We now introduce a rv $N$, independent of $B$, that takes values in the positive integers and has a distribution with unbounded support (so that $P(N>n)>0$ for $n\geq1$). For such a rv $N$, $$\begin{aligned} Ek(X) &= E k(X_1) + \sum_{n=1}^\infty E (k(X_{2^{-n}}) - k(X_{2^{-(n-1)}})) I(N \geq n)/ P(N \geq n)\\ &= E \left[k
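The resulting randomized estimator can be sketched end-to-end. Here `delta(n)` stands for the coupled difference $k(X_{2^{-n}}) - k(X_{2^{-(n-1)}})$ (with `delta(0)` $= k(X_1)$) and `p_geq(n)` for the tail probabilities $P(N \geq n)$; all names are ours:

```python
import numpy as np

def unbiased_estimate(delta, p_geq, rng, n_max=60):
    """One draw of Z = sum_{n=0}^{N} delta(n) / P(N >= n), where N is
    sampled by inversion from the tails p_geq(n) = P(N >= n), with
    p_geq(0) = 1. E[Z] equals the telescoping limit sum_n E delta(n)."""
    u = rng.random()
    N = 0
    while N + 1 <= n_max and u < p_geq(N + 1):
        N += 1
    return sum(delta(n) / p_geq(n) for n in range(N + 1))
```

As a toy check, with deterministic differences `delta(n)` $= 2^{-n}$ and tails $P(N \geq n) = 2^{-n}$, the telescoping limit is exactly 2 and the sample mean of many draws converges to it, illustrating the unbiasedness.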
--- abstract: 'In this paper, we propose a state-of-the-art video denoising algorithm based on a convolutional neural network architecture. Until recently, video denoising with neural networks had been a largely underexplored domain, and existing methods could not compete with the performance of the best patch-based methods. The approach we introduce in this paper, called FastDVDnet, shows similar or better performance than other state-of-the-art competitors with significantly lower computing times. In contrast to other existing neural network denoisers, our algorithm exhibits several desirable properties such as fast runtimes, and the ability to handle a wide range of noise levels with a single network model. The characteristics of its architecture make it possible to avoid using a costly motion compensation stage while achieving excellent performance. The combination of its denoising performance and lower computational load makes this algorithm attractive for practical denoising applications.' --- Introduction ============ Although image denoising has remained a very active research field through the years, too little work has been devoted to the restoration of digital videos. It should be noted, however, that some crucial aspects differentiate these two problems. On the one hand, a video contains much more information than a still image, which could help in the restoration process. On the other hand, video restoration requires good temporal coherency, which makes the restoration process much more demanding. Furthermore, since all recent cameras produce videos in high definition—or even larger—very fast and efficient algorithms are needed. In this paper we introduce another network for deep video denoising: FastDVDnet. This algorithm builds on DVDnet [@Tassano2019], but at the same time introduces a number of important changes with respect to its predecessor.
Most notably, instead of employing an explicit motion estimation stage, the algorithm is able to implicitly handle motion thanks to the traits of its architecture. This results in a state-of-the-art algorithm which outputs high quality denoised videos while featuring very fast running times—even thousands of times faster than other relevant methods. Image denoising {#sec:image-denoising} --------------- Contrary to video denoising, image denoising has enjoyed consistent popularity in past years. A myriad of new image denoising methods based on deep learning techniques have drawn considerable attention due to their outstanding performance. Schmidt and Roth proposed in [@Schmidt2014a] the cascade of shrinkage fields method. The trainable nonlinear reaction diffusion model proposed by Chen and Pock in [@Chen2017] builds on the former. In [@Burger2012], a multi-layer perceptron was successfully applied for image denoising. Methods such as these achieve performances comparable to those of well-known patch-based algorithms such as BM3D [@Dabov2007a] or non-local Bayes (NLB [@Lebrun2013c]). However, their limitations include performance restricted to specific forms of prior, or the fact that a different set of weights must be trained for each noise level. Another widespread approach involves the use of convolutional neural networks (CNN), e.g. RBDN [@Santhanam2016], MWCNN [@Liu2018], DnCNN [@Zhang2017], and FFDNet [@Zhang2017a]. Their performance compares favorably to other state-of-the-art image denoising algorithms, both quantitatively and visually. These methods are composed of a succession of convolutional layers with nonlinear activation functions in between them. A salient feature that these CNN-based methods present is the ability to denoise several levels of noise with only one trained model. Proposed by Zhang in [@Zhang2017], DnCNN is an end-to-end trainable deep CNN for image denoising. 
One of its main features is that it implements residual learning [@He2016], i.e. it estimates the noise existent in the input image rather than the denoised image. In a subsequent paper [@Zhang2017a], Zhang proposed FFDNet, which builds upon the work done for DnCNN. More recently, the approaches proposed in [@Plotz2018; @Liu2018non] combine neural architectures with non-local techniques. Video denoising {#sec:video-denoising} --------------- Video denoising is much less explored in the literature. The majority of recent video denoising methods are patch-based. We note in particular an extension of the popular BM3D to video denoising, V-BM4D [@Maggioni2012], and Video non-local Bayes (VNLB [@Arias2018]). Neural network methods for video denoising have been even rarer than patch-based approaches. The algorithm in [@chen2016deep] by Chen is one of the first to approach this problem with recurrent neural networks. However, their algorithm only works on grayscale images and it does not achieve satisfactory results, probably due to the difficulties associated with training recurrent neural networks [@pascanu2013difficulty]. Vogels proposed in [@vogels2018denoising] an architecture based on kernel-predicting neural networks able to denoise Monte Carlo rendered sequences. The Video Non-Local Network (VNLnet [@Davy2019]) fuses a CNN with a self-similarity search strategy. For each patch, the network finds the most similar patches via its first non-trainable layer, and this information is later used by the CNN to predict the clean image. In [@Tassano2019], Tassano proposed DVDnet, which splits the denoising of a given frame in two separate denoising stages. Like several other methods, it relies on the estimation of motion of neighboring frames. Other very recent blind denoising approaches include the work by Ehret [@Ehret2019] and ViDeNN [@Claus2019]. The former includes motion estimation steps; however, contrary to DVDnet, ViDeNN does not employ motion estimation.
Similarly to both DVDnet and ViDeNN, the use of spatio-temporal CNN blocks in restoration tasks has been also featured in [@vogels2018denoising; @Caballero2017]. Nowadays, the state-of-the-art is defined by DVDnet, VNLnet and VNLB. VNLB and VNLnet show the best performances for small values of noise, while DVDnet yields better results for larger values of noise. Both DVDnet and VNLnet feature significantly faster inference times than VNLB. As we will see, the performance of the method we introduce in this paper compares to the performance of the state-of-the-art, while featuring even faster runtimes. FastDVDnet {#sec:method} ========== For video denoising algorithms, temporal coherence and flickering removal are crucial aspects in the perceived quality of the results [@Seybold2018; @Seshadrinathan2010]. In order to achieve these, an algorithm must make use of the temporal information existent in neighboring frames when denoising a given frame of an image sequence. In general, most previous approaches based on deep learning have failed to employ this temporal information effectively. Successful state-of-the-art algorithms rely mainly on two factors to enforce temporal coherence in the results, namely the extension of search regions from spatial neighborhoods to volumetric neighborhoods, and the use of motion estimation. The use of volumetric (i.e. spatio-temporal) neighborhoods implies that when denoising a given pixel (or patch), the algorithm is going to look for similar pixels (patches) not only in the reference frame, but also in adjacent frames of the sequence. The results are two-fold. First, the temporal neighbors provide additional information which can be used to denoise the reference frame. Second, using temporal neighbors helps to reduce flickering as the residual error in each frame will be correlated. Videos feature a strong temporal redundancy along motion trajectories. This fact should facilitate denoising videos with respect to denoising images. 
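The benefit of volumetric neighborhoods can be seen with a deliberately naive baseline that simply averages adjacent frames. This assumes a static scene; handling motion is precisely what makes real video denoisers, FastDVDnet included, nontrivial. A sketch (names are ours):

```python
import numpy as np

def temporal_average(frames, center, radius=1):
    """Denoise frame `center` by averaging the (up to) 2*radius+1
    surrounding frames. For i.i.d. noise on a static scene this divides
    the noise variance by the window size."""
    lo = max(0, center - radius)
    hi = min(len(frames), center + radius + 1)
    return np.mean(frames[lo:hi], axis=0)
```

On a static scene with i.i.d. Gaussian noise, a 5-frame window reduces the noise variance roughly fivefold; with motion, the same average would blur moving objects, which is why real methods search spatio-temporal neighborhoods or compensate motion instead.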
Yet, this added information in the temporal dimension also creates an extra degree of complexity which could be difficult to tackle. In this context, motion estimation and/or compensation has been employed in a number of video denoising algorithms to help to improve denoising performance and temporal consistency [@Liu2015; @Tassano2019; @Arias2018; @Maggioni2012; @Buades2016a]. We thus incorporated these two elements into our architecture. However, our algorithm does not include an explicit motion estimation/compensation stage. The capacity of handling the motion of objects is inherently embedded into the proposed architecture. Indeed, our architecture is composed of a number of modified U-Net [@Ronneberger2015
--- abstract: 'In this paper, we explore when a locally finite triangulated category has dimension zero or finite representation type.' address: - 'Department of Mathematics, Tokyo Gakugei University, 4-1-1 Nukuikita-machi, Koganei, Tokyo 184-8501, Japan' - 'Graduate School of Mathematics, Nagoya University, Furocho, Chikusaku, Nagoya, Aichi 464-8602, Japan' author: - Takuma Aihara - Ryo Takahashi title: Remarks on dimensions of triangulated categories --- [^1] [^2] Introduction ============ In this paper, we mention two remarks regarding the notion of dimensions of triangulated categories, which has been introduced by Rouquier [@R]. The first subject of this paper, which is discussed in Section 2, is to describe “smallest” triangulated categories. We focus on three kinds of smallness of triangulated categories: local finiteness, dimension zero and finite representation type. It is natural to ask if there exist implications among these conditions. We give the following answer to this question. 1. Let $\Lambda$ be an Iwanaga–Gorenstein algebra over a complete local ring with an isolated singularity. Then the stable category of Cohen–Macaulay $\Lambda$-modules is locally finite if and only if $\Lambda$ has finite CM representation type. 2. Let $\T$ be a Krull–Schmidt triangulated category. If $\T$ is finitely generated and locally finite, then it has dimension zero. Applying the first assertion of this theorem, we deduce that for an isolated hypersurface singularity $R$, the stable category $\sCM(R)$ is locally finite if and only if it has dimension zero, if and only if $R$ has finite CM representation type (Corollary \[ihs\]). The second subject of this paper, which is discussed in Section 3, is to find a generator $\G$ of a derived category $\D$ such that the dimension of $\D$ with respect to $\G$ in the sense of [@ddcm] is as small as possible. The main result on this subject is the following. Let $\A$ be an abelian category and $\X$ a full subcategory. Let $M$ be an object of $\A$, and suppose that there exists an exact sequence $$\begin{aligned} &0\to X_n\to\cdots\to X_0\to N\to M\to0\ \text{(resp.
}&0\to M\to N\to X_0\to\cdots\to X_n\to0\text{)}\end{aligned}$$ such that the corresponding element in $\Ext_\A^{n+1}(M,X_n)$ (resp. $\Ext_\A^{n+1}(X_n, M)$) is nonzero. Then $M$ is outside $\langle {}^\perp\X \rangle_{n+1}$ (resp. $\langle \X^\perp \rangle_{n+1}$) in the derived category $\da$ of $\A$. This theorem recovers the lower bounds of the derived dimension given by Rouquier [@R Proposition 7.14], Krause and Kussin [@KK Lemma 2.4] and Yoshiwaki [@Y2 Theorem 1.1]. Dimension zero versus local finiteness ====================================== Throughout this section, let $\T$ be a Krull–Schmidt triangulated category. We use $k$ and $R$ in this section as an algebraically closed field and a commutative noetherian complete local ring, respectively. Let $\Lambda$ be a *noetherian $R$-algebra*, i.e., there is a ring homomorphism from $R$ to $\Lambda$ with the image contained in the center of $\Lambda$ such that $\Lambda$ is a finitely generated $R$-module. Note that $\Lambda$ is semiperfect as $R$ is complete (see [@A]). We begin with the definition of local finiteness of triangulated categories. \[def:LF\] We say that $\T$ is *locally finite* if for each object $Y$ of $\T$, (i) there are only finitely many indecomposable objects $X$ with $\Hom_\T(X,Y)\neq0$, and (ii) for every indecomposable object $X$ of $\T$, the right $\End_\T(X)$-module $\Hom_\T(X, Y)$ has finite length. Note that these are equivalent to the dual conditions; see [@A1; @K]. For an additive category $\C$ we denote by $\underline\C$ its *stable category*, i.e., the quotient of $\C$ by the projective objects. This is triangulated if $\C$ is Frobenius [@H]. We denote by $\mod\Lambda$ the category of finitely generated (right) $\Lambda$-modules; it is Frobenius (and hence the stable category $\smod\Lambda$ is triangulated) if $\Lambda$ is selfinjective. Denote by $\Db(\mod\Lambda)$ the bounded derived category of $\mod\Lambda$. The following examples are well known. \[exLF\] Let $\Lambda$ be a finite dimensional $k$-algebra.
(1) The derived category $\Db(\mod\Lambda)$ is locally finite if and only if $\Lambda$ is a piecewise hereditary algebra of finite representation type. (2) Suppose that $\Lambda$ is a selfinjective algebra. Then the stable category $\smod\Lambda$ is locally finite if and only if $\Lambda$ is of finite representation type. A triangulated category $\T$ is called *finitely generated* if it admits a thick generator $T$, that is, if $\T=\thick T$. Here, $\thick T$ is the smallest thick subcategory of $\T$ containing $T$. Important properties of locally finite triangulated categories are stated in [@K], including: [@K Proposition 4.5]\[bpLF\] Let $\T$ be a finitely generated locally finite triangulated category. Then there are only finitely many thick subcategories of $\T$. In representation theory of rings, the notion of representation types is one of the most classical subjects, and the first step is to understand finite representation type. We study the relationship of local finiteness with finiteness of representation type. For an additive category $\C$, $\ind\C$ denotes the set of isomorphism classes of indecomposable objects of $\C$. \[LFRF\] Let $\F$ be a Krull–Schmidt Frobenius category whose stable category $\sF$ satisfies Definition \[def:LF\](ii). Assume that $\proj\F=\{X\in \F\ |\ \Ext_\F^1(M,X)=0 \}$ for some $M\in\F$. Then $\sF$ is locally finite if and only if it admits an additive generator. If $\sF$ admits an additive generator, its local finiteness is evident. For the ‘only if’ part, the local finiteness of $\sF$ implies that $$\X=\{X\in\ind\sF\ |\ \sHom_\F(M[-1], X)\ne0\}$$ is a finite set; we write $\X=\{X_1,\dots,X_n\}$. Let $Y$ be an indecomposable object of $\sF$ that does not belong to $\X$. Then $\Ext_\F^1(M, Y)$ vanishes. By assumption $Y$ has to be projective, which implies that $Y=0$ in $\sF$. Thus, we obtain $\sF=\add(X_1\oplus\cdots\oplus X_n)$. For a prime ideal $\p$ of $R$, set $(-)_\p:=-\otimes_RR_\p$. We say that $\Lambda$ has an *isolated singularity* if $\gldim\Lambda_\p=\dim R_\p$ for any nonmaximal prime $\p$ of $R$.
Denote by $\CM(\Lambda)$ the full subcategory of $\mod\Lambda$ consisting of *Cohen–Macaulay $\Lambda$-modules*, i.e., finitely generated $\Lambda$-modules $X$ with $\Ext_\Lambda^i(X, \Lambda)=0$ for all $i>0$. We say that $\Lambda$ is *Iwanaga–Gorenstein* if it has finite right and left selfinjective dimension. An Iwanaga–Gorenstein algebra $\Lambda$ is said to have *finite Cohen–Macaulay (abbr. CM) representation type* if $\CM(\Lambda)$ has an additive generator. We note the following fact. \[koremo\] Let $\Lambda$ be an Iwanaga–Gorenstein $R$-algebra with an isolated singularity. Then $\sHom_\Lambda(X,Y)$ has finite length as an $R$-module for any two Cohen–Macaulay $\Lambda$-modules $X$ and $Y$. As is well-known, $\CM(\Lambda)$ is Frobenius, and $\sCM(\Lambda)$ is triangulated [@H
--- abstract: 'Dynamics, the study of change, is normally the subject of mechanics. Whether the chosen mechanics is “fundamental” and deterministic or “phenomenological” and stochastic, all changes are described relative to an external time. Here we show that once we define what we are talking about, namely, the system, its states and a criterion to distinguish among them, there is a single, unique, and natural dynamical law for irreversible processes that is compatible with the principle of maximum entropy. In this alternative dynamics changes are described relative to an internal, “intrinsic” time which is a derived, statistical concept defined and measured by change itself. Time is an internal change.' author: - | Ariel Caticha\ [Department of Physics, University at Albany-SUNY, ]{}\ [Albany, NY 12222, USA. [^1]]{} title: 'Change, Time and Information Geometry[^2]' --- Introduction ============ The notion that the concepts of time, change and motion are intimately connected goes back to antiquity. According to Aristotle, “time numbers change with respect to before and after.” One aspect of this connection is the order of a sequence of changes, their temporal order. Another aspect is the use of selected motions or changes to measure the length of time intervals, their duration. We begin by considering the notion of change. In order to establish that a system has changed one must be able to distinguish between the system being in one state and its being in another state. This requires, to begin with, a clear idea of what is meant by a state. As long as one is interested in the study of phenomena that can be deliberately reproduced by controlling a few macroscopic variables it is reasonable to expect that the values – or rather, the expected values – of these few variables are all that is needed for the purposes of prediction. This limited information defines what we mean by the state or, equivalently, the macrostate of the system. 
Next, to measure the extent to which states can be distinguished, we assign a probability distribution to each state. The requirement that the assignment procedure itself does not introduce any information beyond that which defines the state demands we use the method of maximum entropy (ME) [@Jaynes57][@Skilling88]. In this way the problem of distinguishing between states is transformed into another problem, that of distinguishing between the corresponding distributions. The latter problem has a known solution: there is a uniquely natural way to quantify the extent to which one distribution can be distinguished from another, given by the distance between them as measured by the Fisher-Rao information metric [@Fisher25]-[@Rodriguez89]. If we think of each state as a point in a manifold, the net outcome of these considerations (Sect. 2) is that the method of ME has transformed the manifold of states into a metric space. Distinguishability is distance. There is not yet any implication that change will happen *from* one state *to* another; to this we turn next. Temporal motion is the subject of dynamics. Typically, having decided on the kinematics appropriate to a certain motion, one defines the dynamics by additional postulates about the equations of motion, perhaps in the form of a variational principle. The dynamics is postulated. The dynamics introduced here (Sect. 3) follows from a variational principle too, but there is something very peculiar about it: there is no need to postulate it. The principle is the same we had already introduced when discussing the space of states, namely, when selecting a distribution subject to certain constraints, the preferred distribution is that of maximum entropy. It is just the same old ME principle applied in a somewhat different way. (The nature of the constraints is different; see e.g. Ref. [@Caticha00].) We have no freedom in choosing the dynamical law; it follows from the single piece of new information available: recognizing that changes happen.
Nothing else. Suppose the system is in a certain state and a small change happens; the system moves a distance $d\ell $. We cannot with certainty predict in which direction motion occurs but, according to the principle of ME, unless there is some positive evidence to the contrary, of all the states on the surface of the sphere of radius $d\ell $ there is one to be preferred above all others: it is the state of maximum entropy. As so often in the past, it seems that once more the method of ME has allowed us to get something out of nothing; yet another free lunch. But the dynamics proposed here is different in one important respect. (We refrain from saying “deficient” rather than “different” because in the end it may turn out to be an advantage.) In the conventional Hamiltonian or Lagrangian mechanics the equations of motion describe changes relative to an external time. Here changes are described relative to an internal, “intrinsic” time which is a derived, statistical concept defined and measured by the change $d\ell $ itself. Intrinsic time is quantified change. The system provides its own clock. Perhaps this is a necessary feature of any fundamental form of mechanics that generates its own notion of time, that *explains* time. The introduction of a metric in the space of states is not new; this has been done by many authors in statistical inference, where the subject is known as Information Geometry [@Amari85][@Rodriguez90], and in physics, to study both equilibrium [@Weinhold75][@Ingarden76] and nonequilibrium thermodynamics [@Balian86][@Streater95]. What we find interesting is the interaction of equilibrium with nonequilibrium dynamics. An interesting consequence of these ideas is that reciprocity relations of the Onsager type [@Onsager31] valid near and far from equilibrium are obtained (Sect. 4) without any hypothesis about microscopic reversibility; in fact, no mention is made of any microscopic dynamics.
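Here the distance is the Fisher-Rao length, $d\ell^2 = g_{ij}\,d\theta^i d\theta^j$ with $g_{ij}(\theta)=\int dx\, p(x|\theta)\,\partial_i \ln p(x|\theta)\,\partial_j \ln p(x|\theta)$ (a standard formula, stated for reference rather than taken from this paper). The preferred step of fixed length $d\ell$ is then along the metric-weighted (natural) entropy gradient $g^{-1}\nabla S$. A minimal numerical sketch of this selection rule, for the illustrative one-dimensional Gaussian family $\theta=(\mu,\sigma)$ (our own toy example, not the paper's model):

```python
import math

# Toy family: 1-D Gaussians labelled by theta = (mu, sigma), with
# entropy S = ln(sigma) + const and Fisher-Rao metric
# g = diag(1/sigma^2, 2/sigma^2).
def entropy_grad(theta):
    mu, sigma = theta
    return (0.0, 1.0 / sigma)              # (dS/dmu, dS/dsigma)

def metric(theta):
    mu, sigma = theta
    return (1.0 / sigma**2, 2.0 / sigma**2)  # diagonal entries of g

def me_step(theta, dl):
    """Displacement of metric length dl that maximizes the entropy gain:
    d_theta proportional to g^{-1} grad S, normalized so that
    d_theta^T g d_theta = dl^2."""
    g1, g2 = metric(theta)
    s1, s2 = entropy_grad(theta)
    v = (s1 / g1, s2 / g2)                  # natural gradient
    norm = math.sqrt(g1 * v[0]**2 + g2 * v[1]**2)
    return (dl * v[0] / norm, dl * v[1] / norm)

step = me_step((0.0, 1.0), 0.01)
# The step is purely along sigma: broadening the distribution raises entropy.
```

The "ME step" singles out one direction on the sphere of radius $d\ell$, exactly as described in the text; all parameter values here are placeholders.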
By analyzing specific models other authors [@Gabrielli96] have reached similar conclusions: reciprocal relations are possible even if the underlying microscopic dynamics is not reversible. It is, of course, possible to incorporate more information, that is, additional constraints into the dynamics. In Sect. 5 we consider a simple illustrative example, the intrinsic dynamics of two coupled systems as they evolve towards equilibrium along a trajectory constrained by conservation laws. Our subject can be approached from another direction. The Greeks did not draw a sharp distinction between change in general and the more special kind of change we call motion; the falling of an apple was not viewed as being in any sense more fundamental than the ripening of an apple. The modern view does draw such distinctions; deterministic motion in space and time is considered basic while other kinds of change – notably irreversible processes in macroscopic systems – are not. They must be understood in terms of the deterministic motion of microscopic constituents. Of course, this view is not wrong, but for some purposes it may be misguided, inconvenient. All theories describing irreversible processes have, in the past, invariably turned out to be rather formidable (see e.g., [@Grabert82]-[@Luzzi90]). One reason is that the phenomena to be described are themselves quite complicated. But there is another reason, which is that these theories are attempting to achieve two conflicting goals. One goal is to reach an understanding in terms of the microscopic Hamiltonian laws of motion and requires keeping track of microscopic details. The other goal is to achieve a description in terms of the few variables that matter, those that codify the crucial information relevant to making predictions. Information about the other variables, the vast majority, is totally irrelevant. Achieving such a description requires forgetting about all microscopic details. 
It is remarkable that theories that accomplish these two seemingly contradictory goals are at all possible. They involve a very delicate balancing act between keeping track of details, at least for a little while (Hamiltonian evolution), and then throwing them away (projections, coarse-graining, tracing over unwanted variables, etc.). Our proposal cuts through this Gordian knot. If microscopic details are truly irrelevant then the Hamiltonian evolution itself should be largely irrelevant. The information about irrelevant details should be discarded before, not after, it is computed. This requires formulating a dynamics without the benefit (or, in this case, the hindrance) of Hamiltonians. A potentially serious problem here is the loss of predictive power that stems from the possibility of being able to choose among different dynamical laws. What would make us prefer one law over another? Remarkably the problem does not arise; once we define what we are talking about, namely, the states and the criterion to distinguish among them, there is a single, unique, and natural dynamical law that is compatible with the principle of maximum entropy. The views expressed here are clearly biased in favor of the information theory approach to statistical mechanics, but they need not contradict other points of view. The basic explanation of the second law of thermodynamics was given by Boltzmann and Gibbs long ago but later contributions by many authors have generated several different versions of it. The question of which particular version is the right one remains controversial. However, provided one adopts a certain spirit of tolerance in reading the various authors (words such as entropy or probability can be used with very different meanings), one sees that the different views are not always incompatible. 
The point we wish to make is that irrespective of which is one’s own personal favorite reason for preferring change in the direction of entropy increase over decrease, the *same reason* should lead one to prefer a large increase over a small one. This applies whether we favor the information theory approach [@Jaynes57][@Balian86] or one of the perhaps more traditional points of view such as ergodic theory [@L
--- abstract: | Using $2917 \invpb$ of data accumulated at $3.773\gev$, $44.5\invpb$ of data accumulated at $3.65\gev$ and data accumulated during a $\psi(3770)$ line-shape scan with the BESIII detector, the reaction $e^+e^-\rightarrow p\bar{p}$ is studied considering a possible interference between resonant and continuum amplitudes. The cross section of $e^+e^-\rightarrow\psi(3770)\rightarrow p\bar{p}$, $\sigma(e^+e^-\rightarrow\psi(3770)\rightarrow p\bar{p})$, is found to have two solutions, determined to be $(0.059^{+0.070}_{-0.020}\pm0.012)\pb$ with the phase angle $\phi = (255.8^{+39.0}_{-26.6}\pm4.8)^\circ$ ($<0.166 \pb$ at the 90% confidence level), or $\sigma(e^+e^-\rightarrow\psi(3770)\rightarrow p\bar{p}) = (2.57^{+0.12}_{-0.13}\pm0.12)\pb$ with $\phi = (266.9^{+6.1}_{-6.3}\pm0.9)^\circ$, both of which agree with destructive interference. Using the obtained cross section of $\psi(3770)\rightarrow p\bar{p}$, the cross section of $p\bar{p}\rightarrow \psi(3770)$, which is useful information for the future PANDA experiment, is estimated to be either $(9.8^{+11.8}_{-3.9})\nb$ $(<27.5\nb$ at 90% C.L.) or $(425.6^{+42.9}_{-43.7})\nb$. title: 'Study of $e^+e^- \rightarrow p\bar{p}$ in the vicinity of $\psi(3770)$ ' --- BESIII ,charmonium decay ,proton form factor 13.20.Gd ,13.25.Gv ,13.40.Gp ,13.66.Bc ,14.20.Gh Introduction ============ At $e^+e^-$ colliders, charmonium states with $J^{PC}=1^{--}$, such as the $J/\psi$, $\psi(3686)$, and $\psi(3770)$, are produced through electron-positron annihilation into a virtual photon. These charmonium states can then decay into light hadrons through either the three-gluon process ($e^+e^-\rightarrow \psi \rightarrow ggg \rightarrow hadrons$) or the one-photon process ($e^+e^-\rightarrow \psi \rightarrow \gamma^* \rightarrow hadrons$).
In addition to the above two processes, the non-resonant process ($e^+e^-\rightarrow \gamma^* \rightarrow hadrons$) plays an important role, especially in the $\psi(3770)$ energy region where the non-resonant production cross section is comparable to the resonant one. The $\psi(3770)$ has traditionally been assumed to decay almost entirely to $D\bar{D}$ final states [@DELCO]. However, assuming no interference effects between resonant and non-resonant amplitudes, the BES Collaboration found a large total non-$D\bar{D}$ branching fraction of $(14.5\pm1.7\pm5.8)\%$ [@BES_nonDDbar_1; @BES_nonDDbar_2; @BES_nonDDbar_3; @BES_nonDDbar_4]. A later work by the CLEO Collaboration, which included interference between one-photon resonant and one-photon non-resonant amplitudes (assuming no interference with the three-gluon amplitude), found a contradictory non-$D\bar{D}$ branching fraction of $(-3.3\pm1.4^{+6.6}_{-4.8})\%$ [@CLEO_nonDDbar]. These different results could be caused by interference effects. Moreover, it has been noted that the interference of the non-resonant (continuum) amplitude with the three-gluon resonant amplitude should not be neglected [@interfere_wangp_1]. Exclusive non-$D\bar{D}$ decays of the $\psi(3770)$ have also been studied previously [@non_DDbar_exclucive_2]. Low statistics, however, especially in the scan data sets, have not permitted the inclusion of interference effects in these exclusive studies. BESIII has collected the world’s largest data sample of $e^+e^-$ collisions at $3.773\gev$. Analyzed together with data samples taken during a $\psi(3770)$ line-shape scan, investigations of exclusive decays, taking into account the interference of resonant and non-resonant amplitudes, are now possible. Recently, the decay channel of $\psi(3770)\rightarrow p\bar{p}\pi^0$ [@ppbarpi0_matthias] has been studied considering the above mentioned interference.
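The interference picture above can be sketched numerically. In a simple model (our own illustration, not the collaboration's fit), the observed cross section is proportional to $|A_{\rm cont}(s) + e^{i\phi} A_{\rm res}(s)|^2$, with the resonance modeled as a relativistic Breit-Wigner; all numerical values below are placeholders, not fitted parameters:

```python
import cmath, math

# Illustrative amplitudes; A_RES and A_CONT are arbitrary scales.
M, GAMMA = 3.773, 0.027        # psi(3770) mass and width in GeV
A_RES, A_CONT = 0.5, 1.0

def amplitude(sqrt_s, phi):
    s = sqrt_s**2
    bw = A_RES * M * GAMMA / (s - M**2 + 1j * M * GAMMA)  # Breit-Wigner
    cont = A_CONT / s                                     # smooth continuum
    return cont + cmath.exp(1j * phi) * bw

def cross_section(sqrt_s, phi):
    # Proportional to the observed rate; overall normalization omitted.
    return abs(amplitude(sqrt_s, phi))**2

# In this toy model a phase near 270 deg suppresses the on-peak rate
# (destructive), while a phase near 90 deg enhances it (constructive).
on_peak_destructive = cross_section(M, math.radians(270))
on_peak_constructive = cross_section(M, math.radians(90))
```

The two-solution ambiguity quoted in the abstract arises because different $(\sigma, \phi)$ pairs can reproduce the same line shape in such a model.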
In this Letter, we report on a study of the two-body final state $e^+e^- \rightarrow p\bar{p}$ in the vicinity of the $\psi(3770)$ based on data sets collected with the upgraded Beijing Spectrometer (BESIII) located at the Beijing Electron-Positron Collider (BEPCII) [@BESIII_BEPCII]. The data sets include $2917\invpb$ of data at $3.773\gev$, $44.5\invpb$ of data at $3.65\gev$ [@Lumi], and data taken during a $\psi(3770)$ line-shape scan in the energy range from $3.74$ to $3.90\gev$. BESIII detector =============== The BEPCII is a modern accelerator featuring a multi-bunch double ring and high luminosity, operating with beam energies between 1.0 and $2.3\gev$ and a design luminosity of $1\times10^{33}{\ensuremath{\,\mathrm{cm^{-2}\,{s}^{-1}}}}$. The BESIII detector is a high-performance general purpose detector. It is composed of a helium-gas based drift chamber (MDC) for charged-particle tracking and particle identification by specific ionization $dE/dx$, a plastic scintillator time-of-flight (TOF) system for additional particle identification, a CsI (Tl) electromagnetic calorimeter (EMC) for electron identification and photon detection, a super-conducting solenoid magnet providing a 1.0 Tesla magnetic field, and a muon detector composed of resistive-plate chambers. The momentum resolution for charged particles at $1\gevc$ is $0.5\%$. The energy resolution of $1\gev$ photons is $2.5\%$. More details on the accelerator and detector can be found in Ref. [@BESIII_BEPCII]. A [geant4]{}-based [@geant4] Monte Carlo (MC) simulation software package, which includes a description of the geometry, material, and response of the BESIII detector, is used for detector simulations. The signal and background processes are generated with dedicated models that have been packaged and customized for BESIII [@generator].
Initial-state radiation (ISR) effects are not included at the generator level for the efficiency determination, but are corrected later using a standard ISR correction procedure [@isr_1; @isr_2]. In the ISR correction, [phokhara]{} [@phokhara] is used to produce a MC-simulated sample of $e^+e^-\rightarrow \gamma_{\rm ISR}p\bar{p}$ (without $\gamma_{\rm ISR} J/\psi$ and $\gamma_{\rm ISR} \psi(3686)$). For the estimation of backgrounds from $\gamma_{\rm ISR} \psi(3686)$ and $e^+e^-\rightarrow \psi(3770)\rightarrow D\bar{D}$, MC-simulated samples with a size equivalent to 10 times the size of data samples are analyzed. Event selection =============== Candidate events are required to contain two oppositely charged tracks, where one track is identified as the proton and the other as the antiproton. Each track is required to have its point of closest approach to the beam axis within $10{\ensuremath{\,\mathrm{cm}}}$ of the interaction point in the beam direction and within $1{\ensuremath{\,\mathrm{cm}}}$ of the beam axis in the plane perpendicular to the beam. The polar angle of the track is required to be within the region $|\cos\theta\,|<0.8$. The TOF information is used to calculate particle identification (PID) probabilities for pion, kaon and proton hypotheses [@pid_ppbar]. For each track, the particle type yielding the largest probability is assigned. Here, the proton momentum is high ($> 1.6\gevc$). For such high-momentum protons and antiprotons, the PID efficiency is about 95%, while the rate of kaons mis-identified as protons is about 5%. In this analysis, one charged
--- abstract: 'This note provides a correct proof of the result claimed by the second author that locally compact normal spaces are collectionwise Hausdorff in certain models obtained by forcing with a coherent Souslin tree. Together with other improvements, this enables the consistent characterization of locally compact hereditarily paracompact spaces as those locally compact, hereditarily normal spaces that do not include a copy of $\omega_1$.' author: - 'Alan Dow[$^1$]{} and Franklin D. Tall[$^2$]{}' bibliography: - 'normality.bib' nocite: '[@*]' title: Normality versus paracompactness in locally compact spaces --- [^1] [^2] Introduction ============ The space of countable ordinals is locally compact, normal, but not paracompact. The question of what additional conditions make a locally compact normal space paracompact has a long history. At least 45 years ago, it was recognized that subparacompactness plus collectionwise Hausdorffness would do (see e.g. [@T1]), as would perfect normality plus metacompactness [@A]. Z. Balogh proved a variety of results under MA$_{\omega_1}$ [@B1] and **Axiom R** [@B2], and was the first to realize the importance of not including a perfect pre-image of $\omega_1$ (equivalently, the one-point compactification being countably tight [@B1]). However, he assumed collectionwise Hausdorffness in order to obtain paracompactness. Watson proved that $V = L$ implies locally compact *normal* spaces are collectionwise normal, and hence obtained paracompactness results. Watson’s proof crucially involved the idea of *character reduction*: if one wants to separate a closed discrete subspace of size $\kappa$, $\kappa$ regular, in a locally compact normal space, it suffices to separate $\kappa$ compact sets, each with an *outer base* of size $\leq \kappa$. An [**outer base**]{} for a set $K \subseteq X$ is a collection ${\mathcal}{B}$ of open sets including $K$ such that each open set including $K$ includes a member of ${\mathcal}{B}$.
The use of $V = L$ was to get that normal spaces of character $\leq \aleph_1$ are collectionwise Hausdorff [@F], and variations on that theme. It was known that locally compact normal non-collectionwise Hausdorff spaces could be constructed from MA$_{\omega_1}$, indeed from the existence of a $Q$-set [@T1], so it was a big surprise when G. Gruenhage and P. Koszmider proved that: MA$_{\omega_1}$ implies locally compact, normal, metacompact spaces are $\aleph_1$-collectionwise Hausdorff and (hence) paracompact. The next result involving iteration axioms and a positive “normal implies collectionwise Hausdorff" type of result was: Let $S$ be a coherent Souslin tree (obtainable from $\diamondsuit$ or a Cohen real). Force MA$_{\omega_1}(S)$, i.e. MA$_{\omega_1}$ for countable chain condition posets preserving $S$. Then force with $S$. In the resulting model, there are no first countable $L$-spaces, no compact first countable $S$-spaces, and separable normal first countable spaces are collectionwise Hausdorff. The first two statements are consequences of MA$_{\omega_1}$ [@Sz]; the last of $V=L$, indeed of $2^{\aleph_0} < 2^{\aleph_1}$. Larson and Todorcevic used this combination to solve *Katětov’s problem*. This idea of combining consequences of an iteration axiom with “normal implies collectionwise Hausdorff" consequences of $V = L$ was exploited in [@LT1] in order to prove the consistency, modulo a supercompact cardinal, of *every locally compact perfectly normal space is paracompact*. The large cardinal was later removed, so that: If ZFC is consistent, then so is ZFC plus every locally compact perfectly normal space is paracompact. In the models of [@LT1] and [@DT2], every first countable normal space is collectionwise Hausdorff. This is achieved in two stages. The novel one is: \[lem15\] Force with a Souslin tree. Then normal first countable spaces are $\aleph_1$-collectionwise Hausdorff.
This is obtained by showing that if a normal first countable space is not $\aleph_1$-collectionwise Hausdorff, a generic branch of the Souslin tree induces a generic partition of the unseparated closed discrete subspace which cannot be “normalized", i.e. there do not exist disjoint open sets about the two halves of the partition. The argument is a blend of the two usual methods of proving “normal implies $\aleph_1$-collectionwise Hausdorff" results, namely those of adjoining Cohen subsets of $\omega_1$ by countably closed forcing [@T1], [@T2] and using *$\diamondsuit$ for stationary systems on $\omega_1$*, a strengthening of $\diamondsuit$ that holds in $L$ [@F]. It is noteworthy that: Either force to add $\aleph_2$ Cohen subsets of $\omega_1$, or assume $\diamondsuit$ for stationary subsets of $\omega_1$. Then normal first countable spaces are $\aleph_1$-collectionwise Hausdorff. Once one has normal first countable spaces are $\aleph_1$-collectionwise Hausdorff, it is easy to obtain full collectionwise Hausdorffness by starting with $L$ as the ground model and following [@F]. However, if a supercompact cardinal is involved, instead of $L$ we need to follow the method of [@LT1], based on [@T2]. Namely, first make the supercompact indestructible under countably closed forcing [@L] and then perform an Easton extension, adding $\kappa^+$ Cohen subsets of each regular $\kappa$, before forcing with the Souslin tree. In order to extend the theorems about locally compact normal spaces being paracompact beyond the realm of first countability, one first needs to get that *locally compact normal spaces are collectionwise Hausdorff*. In [@T3], the second author claimed to have done so, in the model of [@LT1]. The key was to force to expand a closed discrete subspace in a locally compact normal space to a discrete collection of compact sets with countable outer bases and then apply the methods of [@LT1]. Unfortunately the expansion argument was flawed.
A corrected argument is presented below, but at the cost of using a stronger iteration axiom (but not a larger large cardinal). With the conclusion of [@T3] restored, [@T4], [@LT2], and [@T] are re-instated. We now proceed to our main results. PFA$(S)[S]$ and the role of $\omega_1$ ====================================== *PFA$(S)$* is the Proper Forcing Axiom (PFA) restricted to those posets that preserve the (Souslinity of the) coherent Souslin tree $S$. *PFA$(S)[S]$ implies $\varphi$* is shorthand for *whenever one forces with a coherent Souslin tree $S$ over a model of PFA$(S)$, $\varphi$ holds. * *$\varphi$ holds in a model of form PFA$(S)[S]$* is shorthand for *there is a coherent Souslin tree $S$ and a model of PFA$(S)$ such that when one forces with $S$ over that model, $\varphi$ holds. * For discussion of PFA$(S)[S]$, see [@D2], [@To], [@LT1], [@LT2], [@T4], [@T], [@FTT], [@T6]. The following are two of our main theorems. \[thm:paracompactcopy\] There is a model of form ${\mathrm}{PFA}(S)[S]$ in which a locally compact, hereditarily normal space is hereditarily paracompact if and only if it does not include a perfect pre-image of ${\omega}_1$. \[thm:paracompactcountablytight\] There is a model of form ${\mathrm}{PFA}(S)[S]$ in which a locally compact normal space is paracompact and countably tight if and only if its separable closed subspaces are Lindelöf and it does not include a perfect pre-image of ${\omega}_1$. $\mathbf{PPI}$ is the assertion that every first countable perfect pre-image of $\omega_1$ includes a copy of $\omega_1$. ${\mathrm}{PFA}(S)[S]$ implies $\mathbf{PPI}$. $\mathbf{PPI}$ was originally proved from PFA in [@BDFN]. Using $\mathbf{PPI}$, we are able to weaken “perfect pre-image" to “copy" in the improved version of the first theorem, but provably cannot in the
--- abstract: 'The goal of this mostly expository paper is to present several candidates for hyperbolic structures on irreducible Artin-Tits groups of spherical type and to elucidate some relations between them. Most constructions are algebraic analogues of previously known hyperbolic structures on Artin braid groups coming from natural actions of these groups on curve graphs and (modified) arc graphs of punctured disks.' address: - 'Matthieu Calvez, Departamento de Matemática y Estadística , Universidad de La Frontera, Francisco Salazar 1145, Temuco, Chile' - 'Bert Wiest, Univ Rennes, CNRS, IRMAR - UMR 6625, F-35000 Rennes, France' author: - Matthieu Calvez - Bert Wiest title: 'Hyperbolic structures for Artin-Tits groups of spherical type' --- Introduction {#S:Introduction} ============ Given a group $G$ and a generating set $X$ of $G$, the word metric $d_X$ turns $G$ into a metric space; this space is $(1,1)$-quasi-isometric to $\Gamma(G,X)$, the Cayley graph of $G$ with respect to $X$, endowed with the usual graph metric where each edge is identified to an interval of length 1. Hyperbolic structures on groups were recently introduced and studied in [@ABO]. A *hyperbolic structure* on a group $G$ is a generating set $X$ of $G$ such that $(G,d_X)$ is Gromov-hyperbolic; note that $X$ must be infinite whenever $G$ is not itself hyperbolic. In this paper, we are interested in hyperbolic structures on Artin-Tits groups of spherical type. One motivation is trying to prove that irreducible Artin-Tits groups of spherical type are hierarchically hyperbolic [@BehrstockHagenSisto2; @BehrstockHagenSisto3], where the hierarchical structure should come from the hierarchy of parabolic subgroups of these groups. After [@CalvezWiest1; @CalvezWiest2], we know that irreducible Artin-Tits groups of spherical type admit non-trivial hyperbolic structures; i.e. 
each irreducible Artin-Tits group of spherical type $A$ contains an (infinite) generating set $X_{abs}^A$ such that the corresponding Cayley graph $\Gamma(A,X_{abs}^A)$ is a Gromov-hyperbolic metric space with *infinite diameter*. This hyperbolic structure was defined in a purely algebraic manner, using only the Garside structure on $A$, and it consists of the set of the so-called *absorbable elements* (to which we must add the cyclic subgroup generated by the square of the so-called Garside element if $A$ is of dihedral type). This construction is recalled in Section \[S:ATHyp\]. Unfortunately, these absorbable elements are poorly understood – for instance we do not know any polynomial-time algorithm which recognizes whether any given element belongs to $X_{abs}^A$ – and this makes it quite difficult to work with the graph $\Gamma(A,X_{abs}^A)$. In this paper we generalize to any irreducible Artin-Tits group of spherical type some well-known hyperbolic structures on Artin’s braid groups with $n+1$ strands $\mathcal B_{n+1}$ (a.k.a. Artin-Tits groups of type $A_n$), $n\geqslant 3$. Because it can be identified with the mapping class group of a $n+1$ times punctured disk $\mathcal D_{n+1}$, Artin’s braid group on $n+1$ strands admits nice actions on the curve graph of $\Dnpo$ (denoted by $\mathcal C(\Dnpo)$), the arc graph of $\Dnpo$ (denoted by $\mathcal A(\Dnpo)$) and the graph of arcs in $\Dnpo$ both of whose extremities lie in $\partial\Dnpo$ (denoted by $\mathcal A_{\partial}(\Dnpo)$). All these graphs can be shown to be connected and Gromov-hyperbolic; this was first shown in [@MasurMinsky1] but the circle of ideas around [@HPW; @PS] provides simpler arguments.
All these actions are cobounded (actually cocompact); according to a standard argument (Lemma \[L:Main\], close in spirit to Svarc-Milnor’s lemma [@ABO Section 3.2]), we extract from each of these actions a hyperbolic structure on $\mathcal B_{n+1}$, which consists of the union of the stabilizers of a (finite) family of representatives of the orbits of vertices. Each of these generating sets can be algebraically described in terms of the *parabolic subgroups* of the Artin-Tits group of type $A_n$, allowing us to extend the definitions to any irreducible Artin-Tits group of spherical type. Given an irreducible Artin-Tits group of spherical type $A$, we define: - $X_P^A$ is the union of all proper irreducible standard parabolic subgroups of $A$ and the cyclic subgroup generated by the square of the Garside element; - $X_{NP}^A$ is the union of the normalizers of all proper irreducible standard parabolic subgroups of $A$; - $X_{abs}^A$ is the set of absorbable elements (together with the cyclic subgroup generated by the square of the Garside element, if $A$ is of dihedral type, i.e. has only 2 generators). Note that $X_{NP}^A$ contains the cyclic subgroup generated by the square of the Garside element (which is central). Similarly for $X_{abs}^A$: any power of the Garside element can be written as a product of at most 3 absorbable elements, provided $A$ is not of dihedral type [@CalvezWiest1 Example 3]. Therefore the center of $A$ has bounded diameter with respect to the word metric on $A$ induced by any of the above generating sets. We then study the relationships between $X_P^A$, $X_{NP}^A$ and $X_{abs}^A$. Following [@ABO], given two generating sets $X,Y$ of a group $G$, we write $X\preccurlyeq Y$ if the identity map from $(G,d_Y)$ to $(G,d_X)$ is Lipschitz (or equivalently, if $\sup_{y\in Y} d_X(1_G,y)<\infty$).
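For intuition, the word metric $d_X$ and the comparison $X\preccurlyeq Y$ can be made concrete in a finite toy case (our own illustration; the generating sets in this paper are infinite, so the supremum condition is genuinely nontrivial there). Distances $d_X(1,g)$ are computed by breadth-first search on the Cayley graph; here for the symmetric group $S_3$ with permutations stored as tuples:

```python
from collections import deque

def compose(p, q):
    # (p * q)(i) = p(q(i)); permutations stored as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def word_metric(gens, n=3):
    """BFS distances d_X(1, g) on the Cayley graph of S_n with respect to
    gens (all generators here are involutions, so gens is inverse-closed)."""
    identity = tuple(range(n))
    dist = {identity: 0}
    queue = deque([identity])
    while queue:
        g = queue.popleft()
        for s in gens:
            h = compose(g, s)
            if h not in dist:
                dist[h] = dist[g] + 1
                queue.append(h)
    return dist

X = [(1, 0, 2), (0, 2, 1)]     # adjacent transpositions
Y = X + [(2, 1, 0)]            # X plus the transposition (0 2)
d_X, d_Y = word_metric(X), word_metric(Y)

# X precedes Y in the ordering of the text iff sup over y in Y of
# d_X(1, y) is finite; for a finite set the sup is just a max.
sup_X_over_Y = max(d_X[y] for y in Y)   # (2,1,0) has X-length 3
```

Since $Y\supseteq X$, the identity map is automatically 1-Lipschitz from $(G,d_X)$ to $(G,d_Y)$; the computed finite supremum gives the Lipschitz constant in the other direction.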
The sets $X$ and $Y$ are *equivalent* if both $X\preccurlyeq Y$ and $Y\preccurlyeq X$ hold (or equivalently, if the identity map is a bilipschitz equivalence between $(G,d_X)$ and $(G,d_Y)$). Table \[Table\] summarizes the main contents of this paper, for any irreducible Artin-Tits group of spherical type $A$ with at least 3 generators. Vertical arrows indicate the identity of $A$. The *graph of irreducible parabolic subgroups* and the *additional length graph* (denoted by $\mathcal C_{parab}(A)$ and $\mathcal C_{AL}(A)$, respectively) were defined in [@CGGMW] and [@CalvezWiest1], respectively. For Artin-Tits groups *of type $A$*, all the mentioned generating sets are hyperbolic structures. In any case, all spaces under consideration have infinite diameter (Corollary \[C:InfDiam\]). -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- (C) Lipschitz. Conj.\[C:StrictInequalities\](ii): not equivalent }$ $X_{NP}^A$ $\Gamma(A,X_{NP}^A)$ generalized $\mathcal C(\Dnpo)$, Conjectured q.isom. to $\mathcal C_{parab}$ [@CGGMW] ${ {\left\downarrow\vbox to 1cm{}\right.\kern-\nulldelimiterspace} Lipschitz [@CalvezWiest1; @AntolinCumplido]. Conj.\[C:StrictInequalities\](i): equivalent }$ $X_{abs}^A$ $\Gamma(A,X_{abs}^A)$ q.isom. to $\mathcal C_{AL}(A)$ [@CalvezWiest1] Proved [@CalvezWiest1; @CalvezWiest2] -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- : Summary of the main results presented in the paper when $A$ has at least 3 generators. []{data-label="Table"} A space similar to $\Gamma(A,X_P^A)$ was defined in [@CharneyC
--- abstract: 'The relative density of visible points of the integer lattice ${\mathbb{Z}}^d$ is known to be $1/\zeta(d)$ for $d\geq 2$, where $\zeta$ is Riemann’s zeta function. In this paper the relative density of visible points in the Ammann-Beenker point set is shown to be $2(\sqrt{2}-1)/\zeta_K(2)$, where $\zeta_K$ is the Dedekind zeta function of $K={\mathbb{Q}}(\sqrt{2})$.' author: - Gustav Hammarhjelm bibliography: - 'bibl.bib' title: 'The density of visible points in the Ammann-Beenker point set' --- Introduction {#secIntro} ============ A locally finite point set $\mathcal{P}\subset {\mathbb{R}}^d$ has an *asymptotic density* (or simply *density*) $\theta(\mathcal{P})$ if $$\lim_{R\to\infty}\frac{\#(\mathcal{P}\cap RD)}{{\mathrm{vol}}(RD)}=\theta(\mathcal{P})$$ holds for all Jordan measurable $D\subset {\mathbb{R}}^d$. The density of a set can be interpreted as the asymptotic number of elements per unit volume. For instance, for a lattice $\mathcal{L}\subset {\mathbb{R}}^d$ we have $\theta(\mathcal{L})=\frac{1}{{\mathrm{vol}}({\mathbb{R}}^d/\mathcal{L})}$. Let $\widehat{\mathcal{P}}=\{x\in \mathcal{P}\mid tx\notin \mathcal{P}, \forall t\in (0,1)\}$ denote the subset of the *visible* points of $\mathcal{P}$. If $\mathcal{P}$ is a regular cut-and-project set (see below) then it is known that $\theta(\mathcal{P})$ exists. In [@marklof2014visibility Theorem 1], J. Marklof and A. Strömbergsson proved that $\theta(\widehat{\mathcal{P}})$ also exists and that $0<\theta(\widehat{\mathcal{P}})\leq\theta(\mathcal{P})$ if $\theta(\mathcal{P})>0$. In particular, for such $\mathcal{P}$ the *relative density of visible points* $\kappa_\mathcal{P}:=\frac{\theta(\widehat{\mathcal{P}})}{\theta(\mathcal{P})}$ exists, but is not known explicitly in most cases. For $d\geq 2$ we have $\widehat{{\mathbb{Z}}^d}=\{(n_1,\ldots,n_d)\in {\mathbb{Z}}^d\mid \gcd(n_1,\ldots,n_d)=1\}$ and $\theta(\widehat{{\mathbb{Z}}^d})=1/\zeta(d)$ gives the probability that $d$ random integers share no common factor. This computation is recalled below.
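The value $1/\zeta(d)$ is easy to check numerically in the plane (an independent sanity check, not part of the paper): count the visible points of ${\mathbb{Z}}^2$ in a large disc and compare with $1/\zeta(2)=6/\pi^2\approx 0.6079$:

```python
import math

def visible_fraction(R):
    """Fraction of the nonzero points of Z^2 in the disc of radius R
    that are visible from the origin, i.e. have coprime coordinates."""
    total = visible = 0
    for x in range(-R, R + 1):
        for y in range(-R, R + 1):
            if (x, y) != (0, 0) and x * x + y * y <= R * R:
                total += 1
                if math.gcd(x, y) == 1:
                    visible += 1
    return visible / total

# Approaches 1/zeta(2) = 6/pi^2 as R grows.
approx = visible_fraction(200)
```

At $R=200$ the computed fraction already agrees with $6/\pi^2$ to within about one percent.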
More generally, $\theta(\widehat{\mathcal{L}})=\frac{1}{{\mathrm{vol}}({\mathbb{R}}^d/\mathcal{L})\zeta(d)}$ for a lattice $\mathcal{L}\subset {\mathbb{R}}^d$, see e.g. [@baake2000diffraction Prop. ] A well-known point set, which can be realised both as the vertices of a substitution tiling and as a cut-and-project set, is the Ammann-Beenker point set. The goal of this paper is to prove that the relative density of visible points in the Ammann-Beenker point set is $2(\sqrt{2}-1)/\zeta_K(2)$. This result has also been described by B. Sing. We first recall the proof of the classical result $\theta(\widehat{{\mathbb{Z}}^d})=1/\zeta(d)$. We shall see that a lot of inspiration can be drawn from this example when calculating the density of the visible points in the Ammann-Beenker point set. Fix $R>0$, a Jordan measurable $D\subset {\mathbb{R}}^d$ and let ${\mathbb{P}}\subset {\mathbb{Z}}_{>0}$ denote the set of prime numbers. For each *invisible* point $n\in {\mathbb{Z}}^d\setminus \widehat{{\mathbb{Z}}^d}$, there is $p\in{\mathbb{P}}$ such that $\frac{n}{p}\in{\mathbb{Z}}^d$. Setting ${\mathbb{Z}}^d_*={\mathbb{Z}}^d{\setminus\{(0,\ldots,0)\}}$ there are only finitely many $p_1,\ldots,p_m\in{\mathbb{P}}$ such that $p_i{\mathbb{Z}}^d_*\cap RD\neq \emptyset$. By inclusion-exclusion counting we have $$\begin{aligned} \#(\widehat{{\mathbb{Z}}^d}\cap RD)&=\#\left(({\mathbb{Z}}^d_*\cap RD)\setminus \bigcup_{p\in{\mathbb{P}}}(p{\mathbb{Z}}^d_*\cap RD)\right)=\#\left(({\mathbb{Z}}^d_*\cap RD)\setminus \bigcup_{i=1}^m(p_i{\mathbb{Z}}^d_*\cap RD)\right)\\ &=\#({\mathbb{Z}}^d_*\cap RD)+\sum_{k=1}^{m}(-1)^{k}\left(\sum_{1\leq i_1<\ldots <i_k\leq m}\#(p_{i_1}{\mathbb{Z}}^d_*\cap\cdots\cap p_{i_k}{\mathbb{Z}}^d_*\cap RD)\right). \end{aligned}$$ The last sum can be rewritten as $$\sum_{n\in{\mathbb{Z}}_{>0}}\mu(n)\cdot\#(n{\mathbb{Z}}^d_*\cap RD),$$ where $\mu$ is the Möbius function.
Hence $$\frac{\#(\widehat{{\mathbb{Z}}^d}\cap RD)}{{\mathrm{vol}}(RD)}=\sum_{n\in{\mathbb{Z}}_{>0}}\frac{\mu(n)\cdot\#(n{\mathbb{Z}}^d_*\cap RD)}{{\mathrm{vol}}(RD)}=\sum_{n\in{\mathbb{Z}}_{>0}}\frac{\mu(n)}{n^d}\frac{\#({\mathbb{Z}}^d_*\cap {n^{-1}}RD)}{{\mathrm{vol}}({n^{-1}}RD)}.$$ Letting $R\to\infty$, switching order of limit and summation (for instance justified by finding a constant $C$ depending on $D$ such that $\#({\mathbb{Z}}_*^d\cap RD)\leq C{\mathrm{vol}}(RD)$ for all $R$), using $\theta({\mathbb{Z}}^d_*)=1$ and $1/\zeta(s)=\sum_{n\in{\mathbb{Z}}_{>0}}\frac{\mu(n)}{n^s}$ for $s>1$, we find that $$\theta(\widehat{{\mathbb{Z}}^d})=\lim_{R\to\infty}\frac{\#(\widehat{{\mathbb{Z}}^d}\cap RD)}{{\mathrm{vol}}(RD)}=1/\zeta(d).$$ Cut-and-project sets and the Ammann-Beenker point set {#secCPS} ===================================================== The Ammann-Beenker point set can be obtained as the vertices of the Ammann-Beenker tiling, a substitution tiling of the plane using a square and a rhombus as tiles, see e.g. [@baake2013aperiodic Chapter 6.1]. In this paper however, the Ammann-Beenker set is realised as a *cut-and-project set*, a certain type of point set which we will now define. Cut-and-project sets are sometimes called (Euclidean) model sets. We will use the same notation and terminology for cut-and-project sets as in [@marklof2014free Sec. 1.2]. For general background on cut-and-project sets, see e.g. [@baake2013aperiodic Ch. 5]. We write ${\mathbb{R}}^n={\mathbb{R}}^d\times{\mathbb{R}}^m$ and let $\pi$ and $\pi_{\mathrm{int}}$ denote the projections onto the first and second factor, respectively. \[defnCPS\] Let $\mathcal{L}\subset {\mathbb{R}}^n$ be a lattice and $\mathcal{W}\subset \overline{\pi_{\mathrm{int}}(\mathcal{L})}$ be a set. Then the *cut-and-project* set of $\mathcal{L}$ and $\mathcal{W}$ is given by $\mathcal{P}(\mathcal{W},\mathcal{L})=\{\pi(x)\mid x\in\mathcal{L},\ \pi_{\mathrm{int}}(x)\in\mathcal{W}\}.$
--- abstract: 'In this paper, a point-to-point Orthogonal Frequency Division Multiplexing (OFDM) system with a decode-and-forward (DF) relay is considered. The transmission consists of two hops. The source transmits in the first hop, and the relay transmits in the second hop. Each hop occupies one time slot. The relay is half-duplex, and capable of decoding the message on a particular subcarrier in one time slot, and re-encoding and forwarding it on a different subcarrier in the next time slot. Thus each message is transmitted on a pair of subcarriers in two hops. It is assumed that the destination is capable of combining the signals from the source and the relay pertaining to the same message. The goal is to maximize the weighted sum rate of the system by jointly optimizing subcarrier pairing and power allocation on each subcarrier in each hop. The weighting of the rates is to take into account the fact that different subcarriers may carry signals for different services. Both total and individual power constraints for the source and the relay are investigated. For the situations where the relay does not transmit on some subcarriers because doing so does not improve the weighted sum rate, we further allow the source to transmit new messages on these idle subcarriers. To the best of our knowledge, such a joint optimization inclusive of the destination combining has not been discussed in the literature. The problem is first formulated as a mixed integer programming problem. It is then transformed to a convex optimization problem by continuous relaxation, and solved in the dual domain. Based on the optimization results, algorithms to achieve feasible solutions are also proposed. Simulation results show that the proposed algorithms almost achieve the optimal weighted sum rate, and outperform the existing methods in various channel conditions.' 
author: - Chih Ning Hsu - '\' bibliography: - 'IEEEabrv.bib' - 'aning\_thesis.bib' title: 'Joint Subcarrier Pairing and Power Allocation for OFDM Transmission with Decode-and-Forward Relaying' --- OFDM, decode-and-forward relay, power allocation, subcarrier pairing, optimization, continuous relaxation, Lagrange dual problem. Introduction ============ OFDM is a widely deployed, high-performance transmission technology, and a large number of techniques have been developed for optimizing OFDM performance. In this paper, we consider a point-to-point OFDM system with a decode-and-forward (DF) half-duplex relay. Each message is transmitted in two hops each occupying one time slot. A message transmitted by the source on one subcarrier in the first time slot is, if successfully decoded by the relay, forwarded by the relay to the destination on one (not necessarily the same) subcarrier in the second time slot. With the assumption that the channel state information (CSI) is known at the source, much work has been done to make resource utilization of this system more efficient. A general downlink Orthogonal Frequency Division Multiple Access (OFDMA) relay system with individual power constraints at one source and many relays was considered in [@DF_individual_downlink]. In that work, joint optimization of the subcarrier selection and power allocation was done. However, that work assumed that a message is received by a destination either directly from the source, or from a relay which forwarded the message. Destination combining of the signals directly from the source and forwarded by the relay pertaining to the same message was not considered.
In addition, as each relay collectively uses its active subcarriers to forward messages to different destinations, a more complicated re-encoding scheme has to be used by the relay to fit the received message for a particular destination into the subcarriers designated to that destination. In [@DF_OFDM_total_individual_power; @relay_DF_individual_power_constraint; @Vandendorpe_J], optimal power allocation for OFDM with DF relaying and fixed source and relay subcarrier pairing was proposed. [@DF_OFDM_total_individual_power][@Vandendorpe_J] considered two kinds of power constraints: one is that the total transmit power is shared between the source and the relay; the other has individual power constraints for the source and the relay. However, power allocation and subcarrier pairing were optimized separately. [@relay_eq_channel_gain] proposed a subcarrier pairing method by sorting the subcarriers of the source-relay (SR) link and the relay-destination (RD) link, respectively, according to their channel gains. The SR subcarrier and the RD subcarrier with the same respective ranks are then paired together. This sorted channel pairing (SCP) was also extended to systems where the SD link is present [@relay_AF_DF_eq_channel_gain][@Li_AFDF_OFDM]. SCP was also proposed in [@Herdin; @wittneben; @AF_pairing_without_diversity; @relay_AF_OPT_SUBCHANNEL_ASSIGNMENT] for OFDM AF relaying systems without the SD link, and in [@AF_OFDM_total_individual_power] when the SD link and destination combining are present. Power allocation with total and individual power constraints for OFDM AF relaying systems was considered in [@AF_pairing_without_diversity] and [@AF_OFDM_total_individual_power], while [@wittneben] focused on only the total power constraint. The above works dealing with power allocation for the OFDM AF relaying systems usually used approximations to relax the problem into a solvable one.
Without making any approximations, [@relay_AF_total_optimal_power] investigated the optimal power allocation problem for the OFDM AF relaying systems with fixed subcarrier pairing and total power constraint in the absence of the SD link. In view of the lack of joint optimization of power allocation and subcarrier pairing for OFDM systems with DF relaying in the literature, the goal of this paper is to solve this problem with the presence of the SD link and destination combining of signals from the source and the relay. Both the total power constrained system and the individual power constrained system are considered. For the total power constrained system, we formulate the joint power allocation and subcarrier pairing problem as a mixed integer programming problem whose optimal solution is hard to obtain. We then use some special properties of the system and the continuous relaxation [@DF_individual_downlink][@concave_function] to reformulate the problem and solve the dual problem by the subgradient method [@subgradient_method]. With both the power and subcarrier pairing constraints, the optimization problem becomes very complicated, and the duality gap may not be zero. However, as verified by [@Yu_dual][@zero_gap_SUB_infinity] and our own simulation, the duality gap is virtually zero when the number of subcarriers is reasonably large. Thus the dual optimum value becomes a very tight upper bound for the primal optimum for most practical systems. In addition to the duality gap, some other practical issues such as algorithm design and complexity comparison are also discussed. We then extend the formulation to have individual power constraints, and find that the complications caused by individual power constraints can be alleviated in the dual domain. The dual optimum value is again a very tight upper bound for the primal optimum. Finally, we relax the constraint that only the relay can transmit in the second time slot.
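To make the dual-domain approach concrete, the following sketch solves a stripped-down version of the problem: weighted sum-rate maximization over power alone, with one total power constraint and no subcarrier pairing. For a fixed dual variable $\lambda$ the inner maximization is weighted water-filling; here the optimal $\lambda$ is found by bisection (the paper's full problem, with pairing and multiple constraints, instead updates the dual variables by the subgradient method). All symbols and numerical values below are illustrative, not taken from the paper:

```python
import numpy as np

def dual_power_allocation(w, g, P, tol=1e-9):
    """Maximize sum_k w_k*log2(1 + p_k*g_k) s.t. sum_k p_k <= P.
    For a fixed dual variable lam, the inner maximum is weighted
    water-filling: p_k(lam) = max(0, w_k/(lam*ln2) - 1/g_k).
    Since sum_k p_k(lam) is decreasing in lam, the optimal lam is
    found by bisection on the dual variable."""
    lo, hi = 1e-12, 1e6
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        p = np.maximum(0.0, w / (lam * np.log(2)) - 1.0 / g)
        if p.sum() > P:
            lo = lam   # too much power used -> raise the "price" lambda
        else:
            hi = lam
    return p

w = np.array([1.0, 2.0, 1.0, 0.5])   # rate weights (illustrative)
g = np.array([0.8, 1.5, 0.3, 2.0])   # channel power gains (illustrative)
p = dual_power_allocation(w, g, P=10.0)
print(p.round(3), p.sum())           # allocation exhausts the power budget
```

Since the rate is increasing in power, the total power constraint is active at the optimum, and heavier-weighted, stronger subcarriers receive more power.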
Therefore, additional messages may be transmitted on the idle subcarriers in the SD link in the second time slot, when it is deemed that relaying on these subcarriers does not improve the weighted sum rate. Such a model was also considered in [@Vandendorpe_J]. However, [@Vandendorpe_J] optimized power allocation (and relaying modes) only for a particular subcarrier pairing scheme without weighting of the rates. In this paper, we consider joint optimization of power allocation and subcarrier pairing with weighted rates, which is a more general and more difficult problem. However, by defining an additional indicator, we can formulate the problem similarly as in the case without the second-slot SD transmission. The problem can again be solved in the dual domain. Simulation shows that, for this problem, the duality gap is also nearly zero. Based on the optimization results, algorithms to achieve feasible subcarrier pairing and power allocation are also proposed. Simulation results show that the proposed algorithms almost achieve the optimal weighted sum rate, and outperform the SCP proposed in [@relay_eq_channel_gain] in various channel conditions. The rest of this paper is organized as follows. Section \[Sec\_system\_model\] describes the system model. Section \[Sec\_maximization\_typeI\] solves the optimization problem under the total power constraint. Detailed discussions on the practical issues are also presented in this section. Section \[sec\_maximization\_individual\] solves the optimization problem under the individual power constraints. Section \[sec\_maximization\_total\_extra\_direct\] formulates and solves the optimization problem for the system with additional messages transmitted on the SD link in the second time slot, under both total and individual power constraints. Section \[Sec\_simulation\] presents the simulation results and observations. Section \[Sec\_conclusion\] concludes this paper.
System Model {#Sec_system_model} ============ We consider a two-hop DF relay system consisting
--- abstract: | We determine several necessary and sufficient conditions for a closed almost-complex orbifold $Q$ with cyclic local groups to admit a nonvanishing vector field. These conditions are stated separately in terms of the orbifold Euler-Satake characteristics of $Q$ and its sectors, the Euler characteristics of the underlying topological spaces of $Q$ and its sectors, and in terms of the orbifold Euler class $e_{orb}(Q)$ in Chen-Ruan orbifold cohomology $H_{orb}^\ast (Q; {\mathbb{R}})$. --- Introduction ============ The original definition of an orbifold was introduced by Satake in [@satake1] under the name $V$-manifold, and the term orbifold was given by Thurston in [@thurston]. Thurston’s orbifolds included a larger class than those of Satake, for he allows the local groups $G$ to act with a fixed-point set of codimension 1. Today, the definition of an orbifold varies from author to author. Here, we retain the requirement that the local groups act with a fixed-point set of codimension at least 2, but do not require the groups to act effectively. Hence, we work with a larger class of orbifolds. Let $Q$ be a closed, reduced orbifold of dimension $n$. One of the first things that was studied on orbifolds is the generalization of de Rham theory by Satake in [@satake1] and [@satake2]. In [@satake2], Satake also generalized the Poincaré-Hopf theorem to orbifolds (see [@daisy1] for the relevant definitions): for a vector field $X$ on $Q$ with isolated zeros, the orbifold index of $X$ satisfies $$\label{eq-ph} \mbox{ind}(X) = \chi_{orb}(Q),$$ where $\chi_{orb}(Q)$ denotes the orbifold Euler-Satake characteristic of $Q$. More recently, the author has developed an additional generalization of the Poincaré-Hopf Theorem to orbifolds [@mythesis]. In this case, the left side of the equation is the orbifold index of the vector field $\tilde{X}$ induced by $X$ on $\tilde{Q}$, the space of sectors of the orbifold. The right side then becomes $\chi(\mathbb{X}_Q)$, the Euler characteristic of the underlying topological space $\mathbb{X}_Q$ of $Q$: $$\label{eq-myph} \mbox{ind}_{orb}(\tilde{X}) = \chi(\mathbb{X}_Q).$$ We will review these definitions in the sequel; here, we note that if $X$ is a nonvanishing vector field, then $\tilde{X}$ is nonvanishing as well.
As in the case of manifolds [@phopf], it is a direct corollary of these formulae that an orbifold admits a nonvanishing vector field only if its orbifold Euler-Satake characteristic vanishes (in the case of Equation \[eq-ph\]), and the Euler characteristic of its underlying topological space vanishes (in the case of Equation \[eq-myph\]). Unlike the case of manifolds, however, the converse of both of these statements is false. It is easy to construct examples of $2$-orbifolds $Q$ such that $\chi_{orb}(Q) = 0$ or $\chi( \mathbb{X}_Q) = 0$, yet whose singular points force any vector field to vanish. While it is impossible for both of these invariants to vanish for a nontrivial $2$-orbifold, it is possible to construct a $4$-dimensional orbifold such that $\chi_{orb}(Q) = \chi( \mathbb{X}_Q) = 0$ that does not admit a nonvanishing vector field. For instance, one may take an orbifold whose underlying space is $\mathbb{T}^4$ and whose singular set is the disjoint union of $S^2$ and a surface of genus $2$, all with isotropy group ${\mathbb{Z}}_3$. In this paper, we determine necessary and sufficient conditions for a closed, almost-complex orbifold with cyclic local groups to admit a nonvanishing vector field. Our main result is the following theorem. \[thrm-mainresult\] Let $Q$ be a closed, cyclic, almost-complex orbifold. Then the following statements are equivalent: \(i) $Q$ admits a nonvanishing vector field. \(ii) $\tilde{Q}$ admits a nonvanishing vector field. \(iii) The Euler characteristic of the underlying space of each sector $\tilde{Q}_{(g)}$ is zero. \(iv) The orbifold Euler-Satake characteristic of each sector $\tilde{Q}_{(g)}$ is zero. \(v) $e_{orb}(Q)$, the orbifold Euler class of $Q$, is zero in $H_{orb}^\ast(Q ; {\mathbb{R}})$. In Section \[sec-defs\], we review the pertinent definitions and fix our notation. The main constructions we require are that of the space of sectors of an orbifold, Chen-Ruan orbifold cohomology, and the orbifold Euler class; the reader is referred to the original sources for a more detailed exposition.
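The $2$-orbifold examples mentioned above can be made concrete with the standard formula for the Euler-Satake characteristic of a closed $2$-orbifold whose singular set consists of cone points of orders $m_1,\ldots,m_k$ (a well-known identity, stated here for illustration rather than taken from the text):

```latex
\chi_{orb}(Q) \;=\; \chi(\mathbb{X}_Q) \;-\; \sum_{i=1}^{k}\Bigl(1-\frac{1}{m_i}\Bigr).
% Example: Q = S^2 with four cone points of order 2 gives
%   \chi_{orb}(Q) = 2 - 4(1 - 1/2) = 0, while \chi(\mathbb{X}_Q) = 2.
% Any vector field on Q must vanish at each cone point, since the
% local Z_2-rotation fixes no nonzero tangent vector; so the
% vanishing of \chi_{orb}(Q) alone does not suffice.
```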
In Section \[sec-structure\], we study the relationship between the sectors of an orbifold. Section \[sec-mainresult\] contains the proof of our theorem. The author is pleased to acknowledge Carla Farsi, Alexander Gorokhovsky, Judith Packer, Arlan Ramsay, and Lynne Walling for useful discussions and support during the work leading to this result. Review of Definitions {#sec-defs} ===================== In this section, we briefly review the definitions we will need. For more information, the reader is referred to the original work of Satake in [@satake1] and [@satake2]. As well, [@ruangwt] contains as an appendix a thorough introduction to orbifolds, focusing on their differential geometry, and [@mythesis] contains an introduction to orbifolds with an emphasis on vector fields. A (${\mathcal C}^\infty$) [**orbifold**]{} $Q$ is a Hausdorff space $\mathbb{X}_Q$ such that each point is contained in an open set modeled by an [**orbifold chart**]{} or [**local uniformizing system. **]{} By this, we mean a triple $\{ V, G, \pi \}$ where - $V$ is an open subset of ${\mathbb{R}}^n$, - $G$ is a finite group with a ${\mathcal C}^\infty$ action on $V$ such that the fixed point set of any $\gamma \in G$ which does not act trivially on $V$ has codimension at least 2 in $V$, and - $\pi : V \rightarrow U$ is a surjective continuous map such that $\forall \gamma \in G$, $\pi \circ \gamma = \pi$ that induces a homeomorphism $\tilde{\pi} : V/G \rightarrow U$. The image $U=\pi(V)$ is called a [**uniformized set**]{} in $Q$. The group $G$ is known as a [**local group. **]{} If the local group of a chart $\{ V, G, \pi \}$ acts effectively, then the chart is said to be [**reduced**]{}; if all charts are reduced, then $Q$ is a [**reduced orbifold. **]{} In the spirit of [@chenhu] for the case of Abelian orbifolds, we adopt the convention that if each local group is cyclic, then $Q$ is a [**cyclic orbifold**]{}. 
It is required that if a point $p$ is contained in two uniformized sets $U_i$ and $U_j$, then there is a uniformized set $U_k$ such that $p \in U_k \subset U_i \cap U_j$. Moreover, whenever $U_i \subseteq U_j$, the corresponding charts are related by an injection $\lambda_{ij} = \{ f_{ij}, \phi_{ij} \}$. An [**injection**]{} $\lambda_{ij}$ is a pair $\{ f_{ij}, \phi_{ij} \}$ where - $f_{ij} : G_i \rightarrow G_j$ is an injective homomorphism such that if $K_i$ and $K_j$ denote the kernel of the action of $G_i$ and $G_j$, respectively, then $f_{ij}$ restricts to an isomorphism of $K
--- abstract: 'Bulk and decay properties, including deformation energy curves, charge mean square radii, Gamow-Teller (GT) strength distributions, and $\beta$-decay half-lives, are studied in neutron-deficient even-even and odd-$A$ Hg and Pt isotopes. The nuclear structure is described microscopically from deformed quasiparticle random-phase approximation calculations with residual interactions in both particle-hole and particle-particle channels, performed on top of a self-consistent deformed quasiparticle Skyrme Hartree-Fock basis. The observed sensitivity of the (not yet measured) GT strength distributions to deformation is proposed as an additional complementary signature of the nuclear shape. The $\beta$-decay half-lives resulting from these distributions are compared to experiment to demonstrate the ability of the method.' author: - 'J. M. Boillos' - 'P. Sarriguren' title: 'Effects of deformation on the beta-decay patterns of light even-even and odd-mass Hg and Pt isotopes' --- Introduction ============ Neutron-deficient isotopes in the lead region are nowadays well established examples of the shape coexistence phenomenon in nuclei [@heyde11; @julin01]. They have been the subject of much experimental and theoretical interest in recent years. The first isotope-shift measurements in this region were reported in [@bonn72]. Those measurements showed a sharp transition in the nuclear size between the ground states of $^{187}$Hg and $^{185}$Hg that was interpreted [@frauendorf75] as a change from a weak oblate shape in the heavier isotopes to a more deformed prolate shape in the lighter ones from calculations based on Strutinsky’s shell correction method. Later, new isotope shift measurements [@ulm86] revealed a weakly oblate deformed character of the ground states of the even-mass Hg isotopes down to $A=182$, with an odd-even staggering persisting down to $^{181}$Hg. The deformed character of these light isotopes is thus reflected in their charge radii.
Shape evolution and shape coexistence in the region of $\beta$-unstable nuclei with $Z\approx 82$ were subsequently studied experimentally by $\gamma$-ray spectroscopy in the $\alpha$-decay of the products created in fusion-evaporation reactions (see Ref. [@julin01] and references therein). Maybe, the most singular case corresponds to $^{186}$Pb, where two excited $0^+$ states below 700 keV [@andreyev00] have been found. Furthermore, low-lying excited $0^+$ states have been experimentally observed at excitation energies below 1 MeV [@julin01; @andreyev00] in all even Pb isotopes between $A=184$ and $A=194$. Similarly, $0^+_2$ excited states below 1 MeV have been found in neutron-deficient Hg isotopes from $A=180$ up to $A=190$ [@julin01]. The spectroscopy of the Hg isotopes [@julin01; @hannachi; @lane95] shows a nearly constant behavior of the energy of the yrast states in the range $A=190-198$, which are interpreted as members of a rotational band on top of a weakly deformed oblate ground state. For lighter isotopes, $0^+_2$ excited states appear at low energies, decreasing in excitation energy up to $A=182$. They are interpreted as the band-heads of prolate configurations. Their excited states become yrast above $4^+$ for $A<186$, whereas the $2^+$ levels become close enough in energy to the weakly deformed states, opening the possibility of mixing strongly with them. Nevertheless, to determine the magnitude and type of deformation of the bands and their mixing, spectroscopy studies are not enough and the electromagnetic properties (E2 transition strengths) of the low-lying states have to be determined. Lifetime measurements in neutron-deficient Hg isotopes have been performed in the last years [@grahn09; @scheck10; @gaffney14]. More recently [@bree14], Coulomb-excitation experiments have been performed to study the electromagnetic properties of light Hg isotopes $^{182-188}$Hg. 
In these experiments, the deformation of the ground state and low-lying excited states were deduced, confirming the presence of two different coexisting structures in the light even-even Hg isotopes that are pure at higher spin values and mix at low excitation energy. The ground states of Hg isotopes in the mass range $A=182-188$ are found to be weakly deformed and of predominantly oblate nature, while the excited $0^+_2$ states in $^{182,184}$Hg exhibit a larger deformation. Similarly, low-lying states in light Pt isotopes have been studied experimentally with $\gamma$-ray spectroscopy [@cederwall90; @dracoulis91; @davidson99], showing that shape coexistence of states with different deformation is still present in neutron-deficient Pt isotopes with $Z=78$. Moderate odd-even staggering was also found in very light Pt isotopes from laser spectroscopy [@leblanc99]. From the theoretical point of view different types of models have been used to explain the coexistence of several $0^+$ states at low energies [@heyde11]. In the shell-model picture, these low-lying $0^+$ states are described as multiparticle-multihole excitations across the $Z=82$ shell closure [@heyde11]. Protons and neutrons outside the inert core interact through pairing and quadrupole interactions to generate deformed structures. Within a mean-field description of the nuclear structure, the presence of several minima at low energies in the energy surface, corresponding to different $0^+$ states, is understood as due to the coexistence of various collective nuclear shapes. In the mean-field approach, the energy of the different shape configurations can be evaluated with constrained calculations, minimizing the Hartree-Fock energy under the constraint of keeping fixed the nuclear deformation. The resulting total energy plots versus deformation are called in what follows deformation-energy curves (DEC). These calculations have become more and more refined with time, resulting in accurate descriptions of the nuclear shapes and the configurations involved.
Calculations based on phenomenological mean fields and the Strutinsky method [@bengtsson] are already able to predict the existence of several competing minima in the deformation-energy surface of neutron-deficient Pt, Hg, and Pb isotopes. Self-consistent mean-field calculations with non-relativistic Skyrme [@bender04; @yao13] and Gogny [@delaroche; @libert; @egido; @rayner10], as well as relativistic [@niksic02] energy density functionals have been carried out. Inclusion of correlations beyond mean field [@bender04; @yao13; @delaroche; @libert; @egido; @rayner10] is needed to obtain a detailed description of the spectroscopy. They involve symmetry restoration by means of angular momentum and particle number projection and configuration mixing within a generator coordinate method. It is shown that the underlying mean field picture of coexisting shapes is in general supported, except in those cases where the deformed mean-field structures appear at close energies. In this case mixing can be important, affecting B(E2) strengths and their corresponding $\beta$ deformation parameters. These studies cover the main mass regions where shape coexistence appears, including the isotopes considered here. Triaxiality in this mass region has also been explored systematically [@yao13; @rayner10; @nomura13; @gramos14pt; @nomura11], showing that although the axial deformations seem to be the basic ingredients, triaxiality may play a role in some cases. Systematic deformation-energy curves obtained with the Gogny interaction are available in [@web_Gogny]. On the other hand, it has been shown [@frisk95; @sarri98; @sarri99] that the decay properties of $\beta$-unstable nuclei may depend on the nuclear shape of the decaying nucleus.
In particular, the Gamow-Teller (GT) strength distributions corresponding to $\beta^+$/EC-decay of proton-rich nuclei in the mass region $A\approx 70$ have been studied systematically [@sarri01prc; @sarri01npa; @sarri05epja; @sarri09] as a function of the deformation, using a deformed quasiparticle random-phase approximation (QRPA) approach built on a self-consistent Hartree-Fock (HF) mean field with Skyrme forces and pairing correlations. The study has also been extended to stable $pf$-shell nuclei [@sarri03; @sarri13] and to neutron-rich nuclei in the mass region $A\approx 100$ [@sarri_pere]. This sensitivity of the GT strength distributions to deformation has been exploited to determine the nuclear shape in neutron-deficient Kr and Sr isotopes by comparing theoretical results with $\beta$-decay measurements using the total absorption spectroscopy technique (TAS) [@isolde]. Similar studies for the decay properties of even-even neutron-deficient Pb, Po, and Hg isotopes were initiated in Refs. [@sarri05prc; @moreno06] to predict the extent to which GT strength distributions may be used as fingerprints of the nuclear shapes in this mass region. In those works, it was shown that the existence of
--- abstract: 'In this paper, from the viewpoint of the concentration theory of maps, we study compact group and Lévy group actions on a large class of metric spaces, such as $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds.' address: 'Mathematical Institute, Tohoku University, Sendai 980-8578, JAPAN' author: - Kei Funano title: Concentration of maps and group action --- [^1] Introduction ============ Let a compact metric group $G$ act on a compact metric space $X$. In [@mil4 Theorem 5.1], V. Milman considered a Hölder action (see Section 3.6.2 for the definition) and estimated the diameters of orbits from above in terms of an isoperimetric property of the group $G$ and a covering property of $X$. As he mentioned in the introduction, his idea came from the fixed point theory of a Lévy group action by M. Gromov and Milman in [@milgro Theorem 7.1] (see Section 4 for the definition of a Lévy group). In this paper, we consider general continuous actions of a compact metric group and a Lévy group on some concrete noncompact metric spaces, such as $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds. Alongside isoperimetry, the Lévy-Milman concentration theory of maps played an important role in Milman’s estimate (and also in Gromov and Milman’s theorem on Lévy group actions). Taking a point $x\in X$, he considered how the orbit map $G\ni g \mapsto gx\in X$ concentrates to a constant map. Recent developments of the concentration theory of maps by the author ([@funano2], [@funad], [@funano1]), by Gromov ([@gromovcat], [@gromov]), and by M. Ledoux and K. Oleszkiewvicz ([@ledole]) enable us to estimate how the orbit map concentrates to a constant map in the case where $X$ is an $\mathbb{R}$-tree, a doubling space, a metric graph, and a Hadamard manifold.
Instead of considering a Hölder action and a covering property, we provide an estimate of the diameters of orbits of a continuous action of a compact metric group on those metric spaces in terms of the continuity of the action, an isoperimetric property of $G$, and a metric space property of $X$. Our results assert that we can measure, in these terms, how close the action on those metric spaces is to the trivial action. In the same point of view, we obtain two results on Lévy group actions on the above spaces. A Lévy group was first introduced and analyzed by Gromov and Milman in [@milgro]. Gromov and Milman proved that every continuous action of a Lévy group on a compact metric space has a fixed point. They also pointed out that the unitary group $U(\ell^2)$ of the separable Hilbert space $\ell^2$ with the strong topology is a Lévy group. Many concrete examples of Lévy groups are known by the works of S. Glasner [@gla], H. Furstenberg and B. Weiss (unpublished), T. Giordano and V. Pestov [@giopes1], [@giopes2], and Pestov [@pestov1], [@pestov3]. For example, groups of measurable maps from the standard Lebesgue measure space to compact groups, unitary groups of some von Neumann algebras, groups of measure and measure-class preserving automorphisms of the standard Lebesgue measure space, full groups of amenable equivalence relations, and the isometry groups of the universal Urysohn metric spaces are Lévy groups (see the recent monograph [@pestov2] for a precise account). One of our results is a generalization of Milman’s estimate (Theorem \[th3\]). We also obtain a generalization of Gromov and Milman’s fixed point theorem (Proposition \[th2\]). Both statements are made precise below. The article is organized as follows. In Section $2$, we recall basic facts about the concentration theory of maps and prepare for the Sections $3$ and $4$. In Section $3$, we estimate the diameter of orbits of a compact group action on $\mathbb{R}$-trees, doubling spaces, metric graphs, and Hadamard manifolds. Section $4$ treats Lévy group actions on these spaces.
Preliminaries ============= We recall relationships between an isoperimetric property of an mm-space (metric measure space) and the concentration theory of $1$-Lipschitz functions. The concentration theory of $1$-Lipschitz functions was introduced by Milman in his investigations of asymptotic geometric analysis ([@mil1], [@mil2], [@mil3]). While the concentration theory of functions developed, the concentration theory of maps into general metric spaces was first studied by Gromov ([@gromovcat], [@gromov2], [@gromov]). He established the theory by introducing the observable diameter in [@gromov]. We recall the following definition. Let $Y$ be a metric space and $\nu$ a Borel measure on $Y$ such that $m:=\nu(Y)<+\infty$. We define for any $\kappa >0$ $$\begin{aligned} {\mathop{\mathrm{diam}} \nolimits}(\nu , m-\kappa):= \inf \{ {\mathop{\mathrm{diam}} \nolimits}Y_0 \mid Y_0 \subseteq Y \text{ is a Borel subset such that }\nu(Y_0)\geq m-\kappa\} \end{aligned}$$and call it the *partial diameter* of $\nu$. Let $(X,{\mathop{\mathit{d}} \nolimits}_X)$ be a complete separable metric space equipped with a finite Borel measure $\mu_X$ on $X$. Henceforth, we call such a triple an *mm-space*. Let $(X,{\mathop{\mathit{d}} \nolimits}_X,\mu_X)$ be an mm-space with $m_X:=\mu_X(X)$ and $Y$ a metric space. For any $\kappa >0$ we define the *observable diameter* of $X$ by $$\begin{aligned} {\mathop{\mathrm{ObsDiam}} \nolimits}_Y (X; -\kappa):= \sup \{ {\mathop{\mathrm{diam}} \nolimits}(f_{\ast}(\mu_X),m_X-\kappa) \mid f:X\to Y \text{ is a }1 \text{{\rm -Lipschitz map}} \}, \end{aligned}$$where $f_{\ast}(\mu_X)$ stands for the push-forward measure of $\mu_X$ by $f$. The idea of the observable diameter comes from quantum and statistical mechanics, that is, we think of $\mu_X$ as a state on a configuration space $X$ and $f$ is interpreted as an observable.
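These definitions can be illustrated numerically. The sketch below (an illustration of the definitions, not taken from the paper) estimates the partial diameter ${\mathop{\mathrm{diam}} \nolimits}(f_{\ast}(\mu_X), 1-\kappa)$ for the $1$-Lipschitz coordinate function $f(x)=x_1$ on the round sphere $S^{n-1}$ with normalized uniform measure; the estimate shrinks roughly like $1/\sqrt{n}$, a first glimpse of the fact that spheres of growing dimension form a Lévy family:

```python
import numpy as np

def coord_partial_diameter(n, kappa=0.1, samples=20000, seed=0):
    """Estimate diam(f_*(mu), 1-kappa) for f(x) = x_1 on the round
    sphere S^{n-1} with normalized uniform measure mu, using the
    central interval containing mass 1-kappa."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((samples, n))
    x /= np.linalg.norm(x, axis=1, keepdims=True)  # uniform points on S^{n-1}
    f = x[:, 0]
    lo, hi = np.quantile(f, [kappa / 2, 1 - kappa / 2])
    return hi - lo

for n in (10, 100, 1000):
    print(n, coord_partial_diameter(n))
# the estimates decrease roughly like 1/sqrt(n)
```

The central-interval estimate slightly overestimates the true partial diameter (which is an infimum over all Borel sets), but the $1/\sqrt{n}$ decay is already clearly visible.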
Given sequences $\{X_n\}_{n=1}^{\infty}$ of mm-spaces and $\{ Y_n\}_{n=1}^{\infty}$ of metric spaces, observe that $\lim_{n\to \infty}{\mathop{\mathrm{ObsDiam}} \nolimits}_{Y_n}(X_n;-\kappa)=0$ for any $\kappa >0$ if and only if for any sequence $\{ f_n:X_n \to Y_n\}_{n=1}^{\infty}$ of $1$-Lipschitz maps there exists a sequence $\{ m_{f_n}\}_{n=1}^{\infty}$ of points such that $m_{f_n}\in Y_n$ and $$\begin{aligned} \lim_{n\to \infty}\mu_{X_n}(\{ x_n \in X_n \mid {\mathop{\mathit{d}} \nolimits}_{Y_n}(f_n(x_n),m_{f_n})\geq \varepsilon\})=0 \end{aligned}$$for any $\varepsilon>0$. A sequence $\{X_n\}_{n=1}^{\infty} $ of mm-spaces is said to be a *Lévy family* if $\lim_{n\to \infty}{\mathop{\mathrm{ObsDiam}} \nolimits}_{\mathbb{R}}(X_n;-\kappa)=0$ for any $\kappa>0$. This notion goes back to [@milgro]. For an mm-space $X$ with $\mu_X(X)=1$, we define the *concentration function* $\alpha_X:(0,+\infty)\to \mathbb{R}$ as the supremum of $\mu_X(X\setminus A_{+r})$, where $A$ runs over all Borel subsets of $X$ with $\mu_X(A)\geq 1/2$ and $A_{+r}$ is an open $r$-neighbourhood of $A$. This function describes an isoperimetric feature of the space $X$. We shall consider each closed Riemannian manifold as an mm-space equipped with the volume measure normalized
--- abstract: 'We develop a method for predicting the yield of transiting planets from a photometric survey given the parameters of the survey (nights observed, bandpass, exposure time, telescope aperture, locations of the target fields, observational conditions, and detector characteristics), as well as the underlying planet properties (frequency, period and radius distributions). Using our updated understanding of transit surveys provided by the experiences of the survey teams, we account for those factors that have proven to have the greatest effect on the survey yields. Specifically, we include the effects of the surveys’ window functions, adopt revised estimates of the giant planet frequency, account for the number and distribution of main-sequence stars in the survey fields, and include the effects of Galactic structure and interstellar extinction. We approximate the detectability of a planetary transit using a signal-to-noise ratio (S/N) formulation. We argue that our choice of detection criterion is the most uncertain input to our predictions, and has the largest effect on the resulting planet yield. Nevertheless, with reasonable choices for the minimum S/N, we calculate yields that are generally lower, more accurate, and more realistic than previous predictions. We also discuss red noise and its possible effects on planetary detections, as well as the impact of false positives on these surveys.' author: <unk> Extrasolar planets have so far been detected by several methods. The first method to unambiguously detect an extrasolar planet was pulsar timing, which relies on detecting periodic variations in the timing of the received radio signal that occur as the pulsar orbits about the system’s barycenter. The first system of three planets was found around PSR B1257+12 in 1992 [@wolszczan1992], followed by a single planet around PSR B1620-26 [@backer1993].
Although rare, the pulsar planets are some of the lowest mass extrasolar planets known: PSR B1257+12a is about twice the mass of the Moon. The second way to find extrasolar planets is through radial velocities (RV), which uses the Doppler shift of observed stellar spectra to look for periodic variations in the target star’s radial velocity. Given an estimate of the mass of the primary star, the observed radial velocity curve and velocity semi-amplitude can then be used to directly calculate the inclination-dependent mass ($M_p\sin i$) of the companion object. To date, RV surveys have detected more than 230 planets around other stars, making it the most successful method of extrasolar planet detection. While the large number of detected systems having an unseen companion with a mass on the order of $1 M_{Jup} \sin i$ statistically ensures that the majority of these are planetary bodies, the RV surveys are unable to provide more information than the minimum masses, periods, eccentricities, and the semi-major axes of the planets. RV surveys also have a limited ability to detect planets much smaller than a few Earth masses. The state of the art in RV surveys is the High Accuracy Radial Velocity Planet Searcher (HARPS) spectrometer at the La Silla Observatory in Chile, which is capable of radial velocity measurements with precisions better than $1 \ \mathrm{ms^{-1}}$ for extended periods of time [@lovis2006]. HARPS is therefore able to detect planets with masses on the order of $3$ to $4 M_{\oplus}$ in relatively short period orbits. Unfortunately, planets closer to an Earth mass will be increasingly difficult to detect since intrinsic stellar variability, in the form of acoustic oscillation modes and granulations on the photosphere, makes more precise spectroscopic radial velocity measurements harder to acquire.
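As a hedged illustration of the $M_p\sin i$ calculation described above (not code from the paper), for a circular orbit the semi-amplitude obeys $K=(2\pi G/P)^{1/3}\,M_p\sin i\,(M_*+M_p)^{-2/3}$, which can be inverted for the minimum mass when $M_p\ll M_*$:

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # solar mass, kg
M_JUP = 1.898e27       # Jupiter mass, kg
YEAR = 3.156e7         # one year, s

def msini(K, P, M_star):
    """Minimum companion mass M_p sin(i) in kg for a circular orbit,
    assuming M_p << M_star so that (M_star + M_p) ~ M_star.
    K is the velocity semi-amplitude (m/s), P the period (s)."""
    return K * M_star ** (2.0 / 3.0) * (P / (2.0 * math.pi * G)) ** (1.0 / 3.0)

# Sanity check with Jupiter's reflex signal on the Sun:
# K ~ 12.5 m/s, P ~ 11.86 yr, recovering ~1 Jupiter mass.
m = msini(12.5, 11.86 * YEAR, M_SUN)
```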
However, it may be possible to surmount this obstacle (as in the case of @lovis2006) through the selection of stars with “quiet” photospheres and long integration times which serve to average out the stellar variability. Gravitational microlensing is another technique for detecting extrasolar planets. Microlensing of a star occurs when a star passes near the line of sight from the observer to another background star. The gravity of the foreground star acts as a lens on the light emitted by the background star, which causes the star in the background to become momentarily brighter as more light is directed towards the observer. Planetary companions to the lens star can further magnify the background star, and create short-term perturbations to the microlensing light curve. To date, six planets have been detected using microlensing [@bond2004; @udalski2005; @beaulieu2006; @gould2006a; @gaudi2008]. Unfortunately, the one-shot nature of microlensing observations means that information about systems discovered this way is generally sparser than that available for RV systems. Therefore, microlensing is most useful in determining the general statistical properties of extrasolar planets, such as their frequency and distribution, and not the detailed properties of the planetary systems. Planetary transits are a fourth method by which extrasolar planets have been discovered, and the one that provides the most complete set of information about the planetary system. Only planets with very specific orbital characteristics have a transit visible from Earth, because the orbital plane has to be aligned to within a few degrees of the line of sight. Therefore transiting planets are rare.
Nevertheless, a transiting extrasolar planet offers the opportunity to determine the mass of the planet (when combined with RV measurements) since the inclination is now measurable, as well as the planetary radius, the density, the composition of the planetary atmosphere, the thermal emission from the planet, and many other properties (see [@charbonneau2007] for a review). Additionally, and unlike RV surveys, transiting planets should be readily detectable down to $1 R_\oplus$ and beyond, even for relatively long periods. Having accurate predictions of the number of detectable transiting planets is immediately important for the evaluation and design of current and future transit surveys. For the current surveys, predictions allow the operators to judge how efficient their data-reduction and transit-detection algorithms are. Future surveys can use the general prediction method that we describe here to optimize their observing set-ups and strategies. More generally, such predictions allow us to test different statistical models of extrasolar planet distributions, since observed transit distributions can be compared against the predictions of such models. Using straightforward estimates, it appears that observing a planetary transit should not be too difficult, presuming that one observes a sufficient number of stars with the requisite precision during a given photometric survey. Specifically, if we assume that the probability of a short-period giant planet (as an example) transiting the disk of its parent star is 10%, and take the results of RV surveys which indicate the frequency of such planets is about 1% [@cumming08], together with the assumption that typical transit depths are also about 1%, the number of detections should be $\approx 10^{-3}N_{\leq 1\%}$, where $N_{\leq 1\%}$ is the number of surveyed stars with a photometric precision better than 1%. Unfortunately, this simple and appealing calculation fails.
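The back-of-the-envelope estimate above is simple arithmetic; a sketch, using only the figures quoted in the surrounding text:

```python
def naive_yield(n_stars, transit_prob=0.10, planet_freq=0.01):
    """Naive expected number of detected transiting short-period giant
    planets among n_stars observed with <= 1% photometric precision:
    (geometric transit probability) x (planet frequency) x (stars)."""
    return transit_prob * planet_freq * n_stars

# ~35,000 stars at better than 1% precision gives ~35 expected planets,
# i.e. the 10^-3 * N rule of thumb from the text.
expected = naive_yield(35_000)
```

The point of the surrounding discussion is precisely that this estimate overshoots the observed yields by roughly an order of magnitude.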
Using this estimate, we would expect the TrES survey, which has examined approximately 35,000 stars with better than 1% precision, to have discovered 35 transiting short period planets. But, as of this writing, they have found four. Indeed, overall only 51 transiting planets have been found at this time by photometric surveys specifically designed to find planets around bright stars[^1]. This is almost one hundred times less than what was originally predicted by somewhat more sophisticated estimates [@horne2003]. Clearly then, there is something amiss with this method of estimating transiting planet detections. Several other authors have developed more complex models to predict the expected yields of transit surveys. [@pepper2003] examined the potential of all-sky surveys, which was expanded upon and generalized for photometric searches in clusters [@pepper2005]. [@gould2006b] and [@fressin2007] tested whether the OGLE planet detections are statistically consistent with radial velocity planet distributions. [@brown2003] was the first to make published estimates of the rate of false positives in transit surveys, and [@gillon2005] modeled transit detections to estimate and compare the potential of several ground- and space-based surveys. As has been recognized by these and other authors, there are four primary reasons why the simple way outlined above of estimating survey yields fails. First, the frequency of planets in close orbits about their parent stars (the planets most likely to show transits) is likely lower than RV surveys would indicate. Recent examinations of the results from the OGLE-III field by [@gould2006b] and [@fressin2007] indicate that the frequency of short-period Jovian worlds is on the order of $0.45\%$, not $1.2\%$ as is often assumed by extrapolating from RV surveys [@marcy2005a].
[@gould2006b] point out that most spectroscopic planet searches are usually magnitude limited, which biases the surveys toward more metal-rich stars, which are brighter at fixed color; metal-rich stars are in turn known to have a higher frequency of giant planets [@fischer2005]. Second, a substantial fraction of the stars within
--- abstract: 'We introduce a novel playlist generation algorithm that focuses on the quality of transitions using a recurrent neural network (RNN). The proposed model assumes that optimal transitions between tracks can be modelled and predicted by internal transitions within music tracks. We introduce modelling sequences of high-level music descriptors using RNNs and discuss an experiment involving different similarity functions, where the sequences are provided by a musical structural analysis algorithm. Qualitative observations show that the proposed approach can effectively model transitions of music tracks in playlists.' author: - | Keunwoo Choi\ \ \ \ György Fazekas\ \ \ \ Mark Sandler\ \ \ \ bibliography: - 'icml\_2016\_workshop\_playlist.bib' title: | Towards Playlist Generation Algorithms Using\ RNNs Trained on Within-Track Transitions --- Introduction {#intro} ============ In music recommendation, the quality of transitions becomes important particularly when the recommendation is provided in the form of a playlist. This is due to a unique aspect of music consumption. Unlike other products, music is consumed *i) instantaneously*, for instance, while listening using streaming services, *ii) repeatedly*, i.e., listeners are willing to listen to the same music multiple times, and *iii) quickly*, i.e., an item usually only lasts a few minutes. Hence, recommended items are typically consumed or played in a sequence.
This behaviour introduces the need for *good transitions* between items, that is, the relevance and subjective judgement of a recommended track depend on the previous track. Recommendation approaches using collaborative filtering are prone to overlook niche or new items, although the popularity bias of known items can be compensated for. This is called the *cold-start problem* [@ricci2011introduction]. Content-based approaches which are designed to solve the cold-start problem can suffer from lack of diversity when recommended items are selected simply by similarity. This is often called top-$N$ recommendation. It is well known that *unexpectedness*, *surprise* or *serendipity* play an important role in music recommendation and discovery [@choi2015understanding]. Compared to other strategies, focusing on transitions can naturally provide these qualities. There have been approaches that primarily focus on the transitions of tracks [@liebman2015dj], [@chen2012playlist], [@mcfee2011natural]. They assumed the *Markov* property of hidden states or embeddings of tracks. Under the Markov property, it is assumed that future events depend only on the current one and not on the past. This has been successfully used for sequence modelling, for instance in speech recognition [@rabiner1989tutorial]. In music computing, playlist datasets [@mcfee2012hypergraph], [@maillet2009steerable], [@chen2012playlist] collaboratively created for reference by DJs and listeners were used for training and evaluation of sequence modelling approaches. Although these datasets consist of a large number of tracks, e.g. 101k playlists in [@mcfee2012hypergraph], the lack of audio data fundamentally limits research based on audio content analysis. Recently, recurrent neural networks (RNNs) have become widely used for sequence modelling in tasks such as speech recognition, substantially outperforming previous hidden Markov model-based approaches [@sak2014long].
The success of the application of RNNs largely relies on the introduction of Long Short-Term Memory (LSTM) units [@gers2000learning]. The merit of LSTM comes from the gate cells of LSTM units, which decide how much the units take input, release output, and forget previous events. In particular, the forget gate improves training efficiency by helping the gradients flow well. However, RNNs have not been used for playlist generation and modelling, due in part to the lack of sufficient training data. To solve this problem, we propose using an RNN trained on *within-track* transitions to model playlists. We assume that transitions between structural segments of music can be used as a model for generating the desired high-quality transitions between tracks. In general, segments in a track are different but coherent and their musical features can be expected to match well in succession. This is due to the careful and intentional design by the composer. Using this approach, the number of transitions can easily outnumber that of existing playlist datasets, and therefore it enables training an RNN model. The rest of the paper is organised as follows. The proposed algorithm is described in Section \[sec:prop\]. We then present experimental results and discussion in Section \[sec:exp\] and conclude in Section \[sec:conclusinon\]. ![A block diagram of the proposed algorithm, (a,b) training of the RNN and (c) prediction of a feature vector, $x_{pred}$.[]{data-label="figure:block"}](icml_2016_rnn_playlist_diagram.pdf){width="1.0\columnwidth"} Fig. \[figure:block\] represents the algorithm. First, the training tracks are segmented and $x_i$, the features for each segment, are extracted (Fig. \[figure:block\]a). Then an RNN of length $N$ ($N$=3 in the figure) is trained to learn the transitions of the sequence of feature vectors (Fig. \[figure:block\]b).
When a seed track is provided, the features of the last $N$ segments are extracted and fed into the trained RNN to predict the feature vector $x_{pred}$ (Fig. \[figure:block\]c). The algorithm selects a track with a start segment that is most similar to $x_{pred}$. Structural Segmentation ----------------------- Structural segmentation is a task aiming to find the boundaries of different segments or parts in music, e.g. *intro, verse, bridge, chorus*. The most common approach is to take advantage of self-similarity between frames of the track [@foote2000automatic]. In the experiment, we used the basic and efficient method proposed in [@foote2000automatic]. Although the results introduce some errors, the feature vector sequences that are based on the imperfect segmentation still approximate the information about how each feature changes over time in each track. Feature Extraction {#sec:featext} ------------------ The proposed algorithm can use feature extraction methods that are relevant to listeners’ musical preferences and able to represent a musical segment. This includes estimated latent features from collaborative filtering [@van2013deep], tags such as genre and emotion [@dieleman2014end] or implicit features such as the weights of the last hidden layer of a neural network classifier [@liangcontent]. Using explicit labels such as genre can facilitate explaining the behaviour of the algorithm, which is important for research and also to the listener. In the experiment, an auto tagging algorithm in [@choi2016automatic] is used to predict a 50-dimensional vector whose elements correspond to the probability of each tag. The tagging algorithm is based on deep convolutional neural networks and trained on the Million Song Dataset [@bertin2011million]. It shows state-of-the-art performance while the tags cover a variety of categories such as genre, emotion, instrument, and era.
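A minimal sketch of the selection step described above (illustrative only; all names are hypothetical, and cosine similarity is one possible choice of similarity function): given the RNN's predicted feature vector $x_{pred}$, pick the candidate track whose first-segment feature vector matches it best.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def select_next_track(x_pred, candidates):
    """candidates: {track_id: feature vector of the track's first segment}.
    Returns the id of the track whose start segment is most similar to
    the predicted feature vector x_pred."""
    return max(candidates, key=lambda t: cosine_sim(x_pred, candidates[t]))
```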
Although some of the tags, such as genre, typically characterise the entire music track, they are not necessarily constant over the whole track. RNN Model --------- The goal of RNN training is to predict the feature of the following track ($x_{pred}$) that maintains consistency and fluctuations, i.e., a certain variation over the features. To this end, a 2-layer RNN with 512 hidden units is employed. LSTM units [@gers2000learning] are used as they show state-of-the-art performance among RNN variants for several sequence modelling tasks [@greff2015lstm]. Similarity Measure ------------------ A similarity measure is necessary to find the subsequent track using the feature vector predicted by the RNN. The similarity metric directly affects the properties of the generated playlists and therefore it should be carefully selected. Using the *cosine distance* may compensate for the popularity bias and result in recommending more niche items [@schedl2015music]. *Discounted Cumulative Gain* (DCG) is also used in the experiment. DCG is a weighted version of *Cumulative Gain* (CG). CG is designed to measure the ranking quality of a retrieved list, and DCG emphasizes the top-$N$ relevant items by *discounting* lower-ranked ones. Applying this measure
--- author: - | Anna Jenčová [^1]\ [Mathematical Institute, Slovak Academy of Sciences,]{}\ [Štefánikova 49, 814 73 Bratislava, Slovakia]{}\ [jenca@mat.savba.sk]{} title: The structure of strongly additive states and Markov triplets on the CAR algebra --- Introduction ============ A remarkable property of von Neumann entropy is the strong subadditivity (SSA): For a state $\rho$ on the 3-fold tensor product $B(\mathcal H_A\otimes\mathcal H_B\otimes \mathcal H_C)$, we have $$S(\rho)+S(\rho_B)\le S(\rho_{AB})+S(\rho_{BC})$$ Here $\mathcal H_A$, $\mathcal H_B$ and $\mathcal H_C$ are finite dimensional Hilbert spaces and $\rho_B$, $\rho_{AB}$, $\rho_{BC}$ are the restrictions of $\rho$ to the respective subsystems. This was first proved by Lieb and Ruskai in [@liebruskai]. The structure of states that saturate the strong subadditivity of entropy, called strongly additive states, was studied in [@hjpw]. It was shown that a state $\rho$ is strongly additive if and only if it has the form $$\label{eq:ssaeq_hrpw} \rho=\bigoplus_n A_n\otimes B_n,$$ where $A_n\in B(\mathcal H_A\otimes \mathcal H_n)$ and $B_n\in B(\mathcal K_n\otimes \mathcal H_C)$ are positive operators and $\mathcal H_B$ has a decomposition $\mathcal H_B=\bigoplus_n \mathcal H_n\otimes\mathcal K_n$ (see also [@japetz], where this was proved also for the infinite dimensional case). Equivalently, $$\label{eq:ssaeq_ja} \rho= (D_{AB}\otimes I_C)(I_A\otimes D_{BC})$$ where $D_{AB}\in B(\mathcal H_A\otimes \mathcal H_B)$ and $D_{BC}\in B(\mathcal H_B\otimes \mathcal H_C)$ are positive matrices. The Markov property for states in quantum (non-commutative) probability was introduced by Accardi [@accardi] and Accardi and Frigerio [@acfrig], in terms of completely positive unital maps, so-called quasiconditional expectations. For tensor products, it was shown that the Markov property is equivalent to strong additivity of the states [@ohyapetz].
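The SSA inequality can be checked numerically; the following sketch (an illustration, not part of the paper) verifies equality for a product state of three qubits, a special case of the strongly additive form above, and the inequality for a random tripartite state.

```python
import numpy as np

def entropy(rho):
    """von Neumann entropy S(rho) = -Tr rho log rho (natural logarithm)."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]                  # discard numerically zero eigenvalues
    return float(-np.sum(p * np.log(p)))

def marginals(rho):
    """Restrictions of a 3-qubit state to the AB, BC and B subsystems,
    obtained as partial traces via einsum's repeated-index contraction."""
    r = rho.reshape(2, 2, 2, 2, 2, 2)            # indices a,b,c,a',b',c'
    rho_ab = np.einsum('abcdec->abde', r).reshape(4, 4)   # trace out C
    rho_bc = np.einsum('abcaef->bcef', r).reshape(4, 4)   # trace out A
    rho_b = np.einsum('abcaec->be', r)                    # trace out A and C
    return rho_ab, rho_bc, rho_b

def ssa_gap(rho):
    """S(rho_AB) + S(rho_BC) - S(rho) - S(rho_B), nonnegative by SSA."""
    rho_ab, rho_bc, rho_b = marginals(rho)
    return entropy(rho_ab) + entropy(rho_bc) - entropy(rho) - entropy(rho_b)
```

For $\rho=\rho_A\otimes\rho_B\otimes\rho_C$ the gap vanishes, consistent with such states being strongly additive.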
The definition of the Markov property does not require the tensor product structure and can be applied in much more general situations. We are interested in the case of CAR algebras. The Markov states for CAR algebras were studied in [@afimu]. The strong subadditivity of entropy on CAR systems was recently shown, and it was proved that strong additivity is equivalent to the Markov property in the case of even states, see [@moriya]. For more details on this topic, see [@belpit]. The aim of the present paper is to find the structure of strongly additive states and Markov triplets on the CAR algebra. We will explore the structure of strongly additive states. This is done by a method similar to that of [@japetz], using the results of the theory of sufficient subalgebras. The paper is organized as follows. The preliminary section summarizes the most important results on the CAR algebra and on sufficient subalgebras. The main tool used in the sequel is the factorization Theorem \[thm:factorization\] in Section 2.1. Section 3 shows the relation between strong additivity and Markov property for any states on the CAR algebra. Section 4 contains the main results. Preliminaries ============= Sufficient subalgebras ---------------------- We first recall the definition and some characterizations of a sufficient subalgebra, which is a generalization of the classical notion of a sufficient statistic, see [@petz; @ohyapetz] for details. Let $\Ae$ be a finite dimensional algebra and let $\varphi,\psi$ be states on $\Ae$. Let $\Be\subset \Ae$ be a subalgebra and let $\varphi_0$, $\psi_0$ be the restrictions of the states to $\Be$. Then $\Be$ is sufficient for $\{\varphi,\psi\}$ if there is a completely positive, identity preserving map $E:\Ae\to \Be$, such that $\varphi_0\circ E=\varphi$, $\psi_0\circ E=\psi$. For simplicity, let us further assume that the states are faithful.
Let $\rho_\varphi$, $\rho_\psi$ be the densities of $\varphi$, $\psi$ with respect to a trace $\Tr$: $$\varphi(a)=\Tr \rho_\varphi a, \quad \psi(a)=\Tr \rho_\psi a,\qquad a\in \Ae$$ The relative entropy $S(\varphi,\psi)$ is defined as $$S(\varphi,\psi)=S(\rho_\varphi,\rho_\psi)=\Tr \rho_\varphi(\log\rho_\varphi-\log \rho_\psi)$$ It is monotone, in the sense that we have $S(\varphi,\psi)\ge S(\varphi_0,\psi_0)$ for any subalgebra $\Be\subseteq \Ae$. We will also need the definition of the generalized conditional expectation $E_\psi: \Ae\to \Be$ with respect to the state $\psi$ [@acccec] $$E_\psi(a)=E_{\rho_\psi}(a)=\rho_{\psi_0}^{-1/2}E_\Be(\rho_\psi^{1/2}a\rho_\psi^{1/2})\rho_{\psi_0}^{-1/2}$$ where $E_\Be:\Ae \to \Be$ is the trace preserving conditional expectation. Then $E_\psi$ is a completely positive identity preserving map, such that $\psi_0\circ E_\psi=\psi$ and it is a conditional expectation if and only if $\rho^{it}_\psi\Be\rho^{-it}_\psi\subseteq \Be$ for all $t\in \mathbb R$. The following theorem gives several equivalent characterizations of sufficiency. \[thm:sufficiency\] The following are equivalent. 1. The subalgebra $\Be$ is sufficient for $\{\varphi,\psi\}$. 2. $S(\varphi,\psi)=S(\varphi_0,\psi_0)$. 3. $\rho_\varphi^{it}\rho_\psi^{-it}\in \Be$ for all $t\in \mathbb R$. 4. $E_\varphi=E_\psi$. Our results below are based on the following generalization of the classical factorization criterion for sufficient statistics. \[thm:factorization\] [@japetz] Let $\varphi$, $\psi$ be faithful states on $\Ae$ and let $\Be\subseteq \Ae$ be a subalgebra, such that $\rho_\psi^{it}\Be\rho_\psi^{-it}\subseteq \Be$ for all $t\in \mathbb R$. Then $\Be$ is sufficient for $\{\varphi,\psi\}$ if and only if $$\rho_\varphi=\rho_{\varphi_0}D,\qquad \rho_\psi=\rho_{\psi_0}D$$ where $\varphi_0=\varphi|_\Be$, $\psi_0=\psi|_\Be$ and $D$ is a positive element in the relative commutant $\Be'\cap \Ae$.
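The generalized conditional expectation above is easy to realize numerically when $\Be$ is a block-diagonal subalgebra of a matrix algebra, for which $E_\Be$ is the pinching onto the diagonal blocks. The following sketch (an illustration under these assumptions, not code from the paper) checks that $E_\psi$ is identity preserving and that $\psi_0\circ E_\psi=\psi$.

```python
import numpy as np

def funm_herm(h, f):
    """Apply a scalar function f to a Hermitian matrix via its spectrum."""
    w, v = np.linalg.eigh(h)
    return (v * f(w)) @ v.conj().T

def pinch(x):
    """Trace-preserving conditional expectation onto the block-diagonal
    subalgebra B = M_2 (+) M_2 inside M_4 (keep the diagonal blocks)."""
    y = np.zeros_like(x)
    y[:2, :2], y[2:, 2:] = x[:2, :2], x[2:, 2:]
    return y

def E_psi(a, rho_psi):
    """E_psi(a) = rho_psi0^{-1/2} E_B(rho_psi^{1/2} a rho_psi^{1/2}) rho_psi0^{-1/2},
    where rho_psi0 = E_B(rho_psi) is the density of the restriction psi|_B."""
    rho0 = pinch(rho_psi)
    s = funm_herm(rho_psi, np.sqrt)
    s0inv = funm_herm(rho0, lambda w: 1.0 / np.sqrt(w))
    return s0inv @ pinch(s @ a @ s) @ s0inv
```

The identity $\psi_0\circ E_\psi=\psi$ follows here from the trace-preservation of the pinching, and $E_\psi(1)=\rho_{\psi_0}^{-1/2}\rho_{\psi_0}\rho_{\psi_0}^{-1/2}=1$.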
The CAR algebra --------------- We recall some basic facts about the CAR algebra, for details see [@armo; @bratrob]. The CAR algebra $\mathcal A$ is the $C^*$- algebra generated by elements $\{a_i, i\in \mathbb Z\}$, satisfying the anticommutation relations $$\label{eq:car} a_ia_j+a_ja_i =0,\quad a_ia_j^*+a_j^*a_i=\delta_{ij},\qquad i,j\in \mathbb Z$$ For a subset $I\subset \mathbb Z$, the $C^*$-subalgebra generated by $\{a_i, i\in I\}$ is denoted by $\mathcal A(I)$. If $I$ is finite, $\mathcal A(I)$ is isomorphic to the full matrix algebra $M_{2^{|I|}}(\mathbb C)$ by the so-called Jordan-Wigner isomorphism. Since $$\mathcal A=\overline{\bigcup_{|I|<\infty}\mathcal A(I)}^{\, C^*},$$ there is a unique tracial state $\tau$ on $\mathcal A$, obtained as an extension of the unique tracial states on $\mathcal A(I)$, $|I|<\infty$. It has the following product property: $$\label{eq:product} \tau(ab)=\tau(a)\tau(b),\qquad a\in \mathcal A(I),\ b\in \mathcal A(J),\quad I\cap J=\emptyset$$ ### Graded commutation relations For $I\subseteq \mathbb Z$, we denote by $\Theta^I$ the (unique) automorphism of $\mathcal A$, such that $$\label{eq:thetaI}
--- abstract: | Lamperti’s maximal branching process is revisited, with emphasis on the description of the shape of the invariant measures in both the recurrent and transient regimes. A truncated version of this chain is exhibited, preserving the monotonicity of the original Lamperti chain supported by the integers. The Brown theory of hitting times applies to the latter chain with finite state-space, including a sharp strong time to stationarity. The alternative quasi-stationary point of view is also addressed. **Running title:** Lamperti’s MBP. **Keywords**: discrete probability; maximal branching process; recurrence/transience transition; shape of invariant measures; tails; failure rate monotonicity; truncation; sharp strong time to stationarity; generating functions. **MSC ** - $^{2}$Depto. Ingenieria Matematica and Centro Modelamiento Matematico\ Universidad de Chile\ UMI 2807, Uchile-Cnrs\ Casilla 170-3 Correo 3\ Santiago, Chile\ E-mail: huillet@u-cergy.fr, smartine@dim.uchile.cl author: - 'Thierry Huillet$^{1}$, Servet Martinez$^{2}$' title: 'Revisiting John Lamperti’s maximal branching process' --- Introduction ============ Lamperti’s maximal branching process (mbp) is a modification of the Galton-Watson (GW) branching process selecting at each step the descendants of the most prolific ancestor, [@L1]. Viewing the mbp as a Markov chain on the full set of non-negative integers, Lamperti ([@L1]-[@L2]) gave sharp conditions on the tails of the branching number under which this process is recurrent (either positive or null) or transient. Our contribution is to describe the corresponding shape of the invariant measures and we proceed as follows: while fixing a target invariant measure (supported by the integers) of the mbp, we show (in Proposition $2$) how to compute in general the law of the branching mechanism that gives rise to it. Several classes of distributions are supplied both in the recurrent and transient setups.
In Propositions $3$, $4$ and $5$, the target invariant measures are probabilities with tails getting larger and larger, ranging from geometric, to power-law with index $\alpha \in \left( 0,1\right) $, to power-law with index $0$ (the target has no moments of any positive order). In Propositions $6$ (and $7$), it is shown that the null recurrent (respectively transient) Lamperti chain has a non-trivial infinite positive invariant measure. An important feature of the Lamperti chain that we also emphasize is its failure rate monotonicity (Proposition $1$). Lamperti’s mbp also makes sense when the branching mechanism takes values in the finite subset $\left\{ 1,...,N\right\} $, and the question of computing the law of the branching mechanism giving rise to any finitely supported target distribution makes sense. We deal with this $\left\{ 1,...,N\right\} $-valued construction in Proposition $8$. If the target distribution is in particular the restriction to $\left\{ 1,...,N\right\} $ of the invariant measure of a mbp with full state-space, this construction allows one to design a truncated version of the latter chain preserving its failure rate monotonicity feature (Proposition $9$ and Corollary $10$). For failure rate monotone Markov chains with finite state-space, Brown, [@Brown], designed a theory of hitting times which thus applies to the truncated Lamperti chain. The main concern is the relationship existing between the first hitting times of both state $\left\{ N\right\} $ and the restricted invariant measure of the truncated Lamperti chain. By monotonicity, state $\left\{ N\right\} $ is the largest possible value that the truncated chain can explore. Under some technical condition on the initial distribution, it is recalled that the former hitting time stochastically exceeds the latter (Proposition $11$), which has the structure of a compound geometric random variable (Proposition $13$).
The excess time is a sharp strong time to stationarity allowing one to estimate the distance between the current state of the truncated chain and its equilibrium distribution. Its cumulated probability mass function up to $n$ can be computed from the probability that the truncated chain is in state $\left\{ N\right\} $ after $n$ steps (Proposition $12$). The alternative classical quasi-stationary point of view to this problem is also addressed. In Proposition $14$, we exhibit the rate of decrease of the hitting times to state $\left\{ N\right\} $ in terms of the quasi-stationary distribution. In Proposition $15$, we show that under Brown’s conditions on the initial distribution $\mathbf{\pi }_{0}$, the ratio of the large tail probabilities for the first hitting times of state $\left\{ N\right\} $ starting from $\mathbf{\pi }_{0}$ against the quasi-stationary distribution exceeds $1$. Proposition $16$ deals with a question raised by Brown concerning asymptotic exponentiality of the hitting times, which applies to the truncated Lamperti chain and its time-reversal. Lamperti’s model ================ The Lamperti maximal branching process (mbp) may be described as an extremal analogue of the GW branching process, where the next generation is formed by the offspring of a most productive individual, [@L1]. As a result of some selection (or detection) mechanism, iteratively in each generation, only the offspring of one of the most productive individuals of the underlying GW process with branching number $\nu $ is kept (or detected), the other ones being wiped out (or missed by the detector). This model implies that there are many productive individuals. In [@L1], Lamperti relates this model to a percolation problem.
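The mbp recursion — the next state is the maximum of $X_n$ i.i.d. copies of the branching number $\nu$ — is straightforward to simulate. The sketch below is illustrative only; the geometric branching law on $\{1,2,\dots\}$ is an arbitrary choice, placed in the regime $\nu>0$ with $\mathbf{E}(\nu)=1/(1-p)>1$.

```python
import math
import random

def mbp_step(x, sample_nu, rng):
    """One step of the maximal branching process:
    X_{n+1} is the maximum of X_n i.i.d. copies of the branching number nu."""
    return max(sample_nu(rng) for _ in range(x)) if x > 0 else 0

def mbp_path(x0, sample_nu, n_steps, seed=0):
    """Simulate a trajectory X_0, X_1, ..., X_{n_steps}."""
    rng = random.Random(seed)
    path = [x0]
    for _ in range(n_steps):
        path.append(mbp_step(path[-1], sample_nu, rng))
    return path

def geometric_nu(rng, p=0.5):
    """Geometric branching number on {1, 2, ...}: P(nu = k) = (1-p) p^(k-1),
    sampled by inversion, so nu >= 1 almost surely."""
    return 1 + int(math.log(1.0 - rng.random()) / math.log(p))
```

Since $\nu\geq 1$ here, a path started at $X_0=1$ never hits $0$, matching the regime $\nu>0$ discussed below.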
With $X_{n}$ the size of such a population at generation $n$, $F_{n}\left( j\right) =\mathbf{P}\left( X_{n}\leq j\right) $ and $\nu _{j,n+1}\overset{d}{=}\nu $ for all $j$, the dynamics under concern is $$X_{n+1}=\max_{j=1,...,X_{n}}\nu _{j,n+1}\Rightarrow F_{n+1}\left( j\right) =\sum_{i\geq 0}\mathbf{P}\left( X_{n}=i\right) \mathbf{P}\left( \nu \leq j\right) ^{i}=\mathbf{E}z^{X_{n}}\mid _{z=\mathbf{P}\left( \nu \leq j\right) }.$$ with initial condition: $X_{0}\overset{d}{\sim }\mathbf{\pi }_{0}$ with $\mathbf{P}\left( X_{0}\leq j\right) :=F_{0}\left( j\right) .$ We denote $\mathbf{E}\left( X_{n+1}\mid X_{n}=i\right) =\mathbf{E}\max_{j=1,...,i}\nu _{j}=\mathbf{E}\left( m_{i}\right) $ where $m_{i}=\max_{j=1,...,i}\nu _{j}.$ Let $p\left( j\right) :=\mathbf{P}\left( \nu =j\right) $. We will assume that the set $\left\{ j:p\left( j\right) >0\right\} $ is either $\Bbb{N}_{0}:=\left\{ 0,1,2,...\right\} $ or $\Bbb{N}:=\left\{ 1,2,...\right\} $ but, as we shall see, the finite case when $\left\{ j:p\left( j\right) >0\right\} =\left\{ 1,...,N\right\} $ for some integer $N\gg 1$, will also be of interest. We shall let $\phi \left( z\right) =\mathbf{E}z^{\nu }$ be the probability generating function (pgf) of $\nu .$ We shall distinguish two regimes for the branching number $\nu $: Branching number $\nu >0$. ---------------------------------- If $\nu >0$ ($p\left( 0\right) =\mathbf{P}\left( \nu =0\right) =0$ and $\mathbf{E}\left( \nu \right) >1$), then $X_{n}>0,$ $\forall n\geq 0$ ($X_{0}=1$), owing to $$F_{n+1}\left( 0\right) =\mathbf{P}\left( X_{n+1}=0\right) =\mathbf{E}z^{X_{n}}\mid _{z=p\left( 0\right) =0}=\mathbf{P}\left( X_{n}=0\right)
--- author: - 'Miles H. Anderson' - Romain Bouchand - Junqiu Liu - Wenle Weng - | \ Ewelina Obrzud - Tobias Herr - 'Tobias J. Kippenberg' bibliography: - 'biblib\_2.bib' title: 'Photonic chip-based resonant supercontinuum ' --- **Supercontinuum generation in optical fibers is one of the most dramatic nonlinear effects discovered [@alfano_observation_1970; @ranka_visible_2000; @birks_supercontinuum_2000], allowing short pulses to be converted into multi-octave spanning coherent spectra. This process has enabled self-referencing of optical frequency combs, establishing the RF-to-optical link [@jones_carrier-envelope_2000; @udem_optical_2002]. Such broadband coherent combs are ideally suited for optical frequency division [@xie_photonic_2017], Raman spectral imaging [@ideguchi_coherent_2013], telecommunications [@marin-palomo_microresonator-based_2017], or astro-spectrometer calibration [@murphy_high-precision_2007]. Soliton microcombs [@herr_temporal_2014; @kippenberg_dissipative_2018], by contrast, can generate octave-spanning spectra [@li_stably_2017; @pfeiffer_octave-spanning_2017], but with good conversion efficiency only at vastly higher repetition rates in the 100s of GHz [@bao_nonlinear_2014]. Here, we bridge this efficiency gap with resonant supercontinuum, allowing supercontinuum generation using input pulses with an ultra-low 6 picojoule energy, and a duration of 1 picosecond, 10-fold longer than in conventional photonic chip-based supercontinuum generation [@gaeta_photonic-chip-based_2019]. This creates a smooth, flattened 2,200 line frequency comb, with an electronically detectable repetition rate of 28 GHz, constituting the largest bandwidth-line-count product for any microcomb generated to date. Taken together, our work establishes resonant supercontinuum as a promising route to broadband and coherent spectra.** [image](fig_idea1_2) [image](setup_v7)
[image](fig_exp12_10) Supercontinuum generation (SCG, or ‘white light’ generation [@bellini_phase-locked_2000]) is a process where high intensity optical pulses are converted into coherent octave-spanning spectra by propagation through a dispersion-engineered waveguide, fiber, or material (Fig. \[fig:setup\](a)). Following the demonstration of dramatic broadening in optical fiber [@ranka_visible_2000], the process has been well studied in photonic crystal fibers [@russell_photonic-crystal_2006; @dudley_supercontinuum_2006], owing to their capacity for dispersion engineering. SCG is based on a combination of nonlinear phenomena including soliton fission, dispersive wave formation, and the Raman self-frequency shift [@skryabin_colloquium:_2010]. Commonly, in order to generate a supercontinuum which is coherent as well as ultra-broadband, ultrashort pulses ($\sim$100 fs) with high peak powers (1 kW) are needed so that the pulse undergoes a process known as soliton fission [@herrmann_experimental_2002; @skryabin_colloquium:_2010], as opposed to incoherent modulation instability [@nakazawa_coherence_1998]. Dispersive wave emission (alternatively soliton Cherenkov radiation [@akhmediev_cherenkov_1995]) simultaneously serves to extend the spectrum towards other spectral regions far from the pump [@hilligsoe_initial_2003]. To achieve this, SCG has most often required the input of mode-locked laser systems operating at repetition rates of $<$1 GHz so as to provide large pulse energies. Although photonic chip-based waveguides with a high material nonlinearity have reduced required pulse energies by an order of magnitude, and have allowed lithographic dispersion engineering [@yeom_low-threshold_2008; @halir_ultrabroadband_2012; @leo_dispersive_2014; @guo_mid-infrared_2018], synthesis of octave spanning spectra with line spacing $>$10 GHz has remained challenging.
Accessing this regime has been achieved using SCG driven with electro-optic frequency combs [@wu_supercontinuum-based_2013; @beha_electronic_2017; @obrzud_broadband_2018-1; @carlson_ultrafast_2018], providing ultrabroad frequency comb formation at repetition rates of 10-30 GHz, although multiple stages of amplification and pulse-compression were required in order to replicate the same pulse duration and peak powers available from mode-locked lasers. An alternative technique for the generation of coherent frequency comb spectra is Kerr comb generation [@kippenberg_dissipative_2018], i.e. soliton microcombs. Kerr comb generation uses the resonant build-up of a continuous-wave laser to generate a frequency comb via parametric frequency conversion and the formation of dissipative Kerr solitons (DKS) [@herr_temporal_2014]. These DKS exhibit a rich landscape of dynamical states, such as breathing [@leo_dynamics_2013; @lucas_breathing_2017], chaos [@anderson_observations_2016], and bound-states [@wang_universal_2017]. In contrast to SCG, DKS circulate indefinitely and are a soliton of an ‘open system’, relying on a double balance of nonlinearity and dispersion, as well as parametric gain and dissipation [@akhmediev_dissipative_2008]. The cavity enhances the pump field, dramatically reducing the input power threshold for soliton formation. Yet, the process itself has an efficiency that reduces with decreasing repetition rate owing to the reduced overlap of the DKS and the background pump [@bao_nonlinear_2014] (Fig. \[fig:setup\](b)). As a consequence, octave-spanning soliton microcombs to date have been synthesized with 1 THz line spacing[@pfeiffer_octave-spanning_2017; @li_stably_2017], and it has proven challenging to synthesize spectra with 10-50 GHz repetition rate with either SCG or microcomb formation. 
However, a growing number of applications benefit from coherent supercontinua with line spacing in the microwave domain that can be easily detected and processed by electronics. Such widely-spaced comb spectra are resolvable in diffraction-based spectrometers for astrocombs [@ewelina_obrzud_microphotonic_2019; @suh_searching_2019], and are highly appropriate as sources for massively parallel wavelength-division multiplexing [@marin-palomo_microresonator-based_2017; @hu_single-source_2018]. They can also remove the ambiguity in the identification of individual comb lines. In this work we demonstrate *resonant* supercontinuum generation, a synthesis between conventional SCG and soliton microcombs (Fig. \[fig:setup\](c)). By supplying a microresonator with a pulsed input, we take equal advantage of the resonant enhancement offered by the cavity, as well as the higher peak input powers and conversion efficiency allowed by pulses as compared to CW [@malinowski_optical_2017]. Recent works on pulse-driven Kerr cavities for DKS generation have focused on facilitating access to single-soliton generation with high conversion efficiency [@obrzud_temporal_2017], on peak-power enhancement [@lilienfein_temporal_2019], and on bandwidth enhancement [@okawachi_bandwidth_2014]. By promoting low dispersion with a strong third-order component, we generate a flattened, broadband spectrum close to 2/3 of an octave wide, using ten times lower pulse energy, and ten times longer pulse duration than in conventional [Si$_3$N$_4$ ]{}-based SCG [@carlson_ultrafast_2018; @okawachi_carrier_2018], and with an electronically detectable repetition rate of 28 GHz. **Resonant Supercontinuum Results.** The photonic chip-based microresonator (Fig. \[fig:setup\](h)) has a free spectral range (FSR) of 27.88 GHz and a loaded linewidth in the telecom band of 110 MHz (most probable value [@liu_ultralow-power_2018-1]). The waveguide dimensions have been selected to give a low dispersion of $\beta_2=-11$ fs$^2/$mm.
The pulse-train incident on this chip is synthesized using cascaded electro-optic modulation, intensity modulation, and dispersion compensation [@kobayashi_optical_1988; @fujiwara_optical_2003] (see Fig. \[fig:setup\](d)), providing pulses with a minimum duration of 1 ps, at a repetition rate $f_\mathrm{eo}=$ 13.94 GHz. In this way, the microresonator is sub-harmonically pumped every two roundtrips [@ewelina_obrzud_microphotonic_2019]. This decreases the conversion efficiency by a factor of 2, but reduces the requirements on the microwave transmission system. A tunable RF signal generator supplies $f_\mathrm{eo}$, and we keep two alternative RF sources – with relatively high (RF-1) and low (RF-2) phase-noise respectively – in order to observe how their frequency noise is transferred to the resonant supercontinuum.
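As a back-of-the-envelope check of the drive parameters quoted above (6 pJ pulses of roughly 1 ps duration, an FSR of 27.88 GHz, sub-harmonic pumping every two roundtrips), one can estimate the average and peak pump powers; the script below is our own illustration, not part of the experiment:

```python
# Sanity check of the pump parameters (values taken from the text above).
E_pulse = 6e-12        # pulse energy [J]
tau = 1e-12            # approximate pulse duration [s]
f_fsr = 27.88e9        # microresonator free spectral range [Hz]
f_eo = f_fsr / 2       # sub-harmonic pumping every two roundtrips -> 13.94 GHz

P_avg = E_pulse * f_eo   # average pump power [W], of order 80 mW
P_peak = E_pulse / tau   # rectangular-pulse estimate of peak power [W], ~6 W
print(f"average power ~ {P_avg * 1e3:.1f} mW, peak power ~ {P_peak:.1f} W")
```

The sub-100-mW average power is what makes the scheme attractive compared with amplified electro-optic comb SCG, while the cavity enhancement compensates for the modest ~6 W peak power.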
--- abstract: 'We propose a sample efficient stochastic variance-reduced cubic regularization (Lite-SVRC) algorithm for finding the local minimum efficiently in nonconvex optimization. The proposed algorithm achieves a lower sample complexity of Hessian matrix computation than existing cubic regularization based methods. At the heart of our analysis is the choice of a constant batch size of Hessian matrix computation at each iteration and the stochastic variance reduction techniques. In detail, for a nonconvex function with $n$ component functions, Lite-SVRC converges to the local minimum within $\tilde{O}(n+n^{2/3}/\epsilon^{3/2})$[^1] Hessian sample complexity, which is faster than all existing cubic regularization based methods. Numerical experiments with different nonconvex optimization problems conducted on real datasets validate our theoretical results.' author: - 'Dongruo Zhou[^2]    and    Pan Xu[^3]    and    Quanquan Gu[^4]' bibliography: - 'reference.bib' date: 'May 18, 2018[^5]' title: 'Sample Efficient Stochastic Variance-Reduced Cubic Regularization Method' --- [^1]: Here $\tilde{O}$ hides poly-logarithmic factors [^2]: Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail: [drzhou@cs.ucla.edu]{} [^3]: Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail: [panxu@cs.ucla.edu]{} [^4]: Department of Computer Science, University of California, Los Angeles, CA 90095, USA; e-mail: [qgu@cs.ucla.edu]{} [^5]: The first version of this paper was submitted to UAI 2018 on March 9, 2018. This is the second version with improved presentation and additional baselines in the experiments, and was submitted to NeurIPS 2018 on May 18, 2018.
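The variance-reduced Hessian estimator at the core of such methods (a snapshot Hessian plus a mini-batch correction) can be sketched as follows; the component functions, dimensions, and names are toy choices of ours, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 50, 3
A = rng.standard_normal((n, d))   # toy data defining f(x) = (1/n) sum_i f_i(x)

def hess_i(x, i):
    """Hessian of the toy nonconvex component f_i(x) = log(1 + (a_i^T x)^2)."""
    a = A[i]
    u = a @ x
    return (2.0 * (1.0 - u * u) / (1.0 + u * u) ** 2) * np.outer(a, a)

def full_hess(x):
    return sum(hess_i(x, i) for i in range(n)) / n

def svr_hess(x, x_snap, idx):
    """Variance-reduced Hessian estimate in the SVRC spirit:
    full Hessian at the snapshot plus a mini-batch correction term."""
    corr = sum(hess_i(x, i) - hess_i(x_snap, i) for i in idx) / len(idx)
    return full_hess(x_snap) + corr

x, x_snap = rng.standard_normal(d), rng.standard_normal(d)
```

Averaging this estimator over all singleton batches recovers the exact Hessian at $x$, i.e. it is unbiased; its variance shrinks as $x$ approaches the snapshot, which is what permits the constant batch size of Hessian samples mentioned in the abstract.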
--- abstract: 'A lattice is called well-rounded if its minimal vectors span the corresponding Euclidean space. In this paper we completely describe well-rounded full-rank sublattices of ${\mathbb Z}^2$, as well as their determinant and minima sets. We show that the determinant set has positive density, deriving an explicit lower bound for it, while the minima set has density 0. We also produce formulas for the number of such lattices with a fixed determinant and with a fixed minimum. These formulas are related to the number of divisors of an integer in short intervals and to the number of its representations as a sum of two squares. We investigate the growth of the number of such lattices with a fixed determinant as the determinant grows, exhibiting some determinant sequences on which it is particularly large. To this end, we also study the behavior of the associated zeta function, comparing it to the Dedekind zeta function of Gaussian integers and to the Solomon zeta function of ${\mathbb Z}^2$. Our results extend automatically to well-rounded sublattices of any lattice $A {\mathbb Z}^2$, where $A$ is an element of the real orthogonal group $O_2({\mathbb R})$.' Let $\Lambda \subseteq \real^N$ be a lattice of full rank. Define the [*minimum*]{} of $\Lambda$ to be $$|\Lambda| = \min_{\bx \in \Lambda \setminus \{\bo\}} \|\bx\|,$$ where $\|\ \|$ stands for the usual Euclidean norm on $\real^N$. Let $$S(\Lambda) = \{ \bx \in \Lambda : \|\bx\| = |\Lambda| \}$$ be the set of [*minimal vectors*]{} of $\Lambda$. We say that $\Lambda$ is a [*well-rounded*]{} lattice (abbreviated WR) if $S(\Lambda)$ spans $\real^N$. WR lattices come up in a wide variety of different contexts, including sphere packing, covering, and kissing number problems, coding theory, and the linear Diophantine problem of Frobenius, just to name a few. Still, the WR condition is special enough so that one would expect WR lattices to be relatively sparse. However, in 2005 C.
McMullen [@mcmullen] showed that in a certain sense [*unimodular*]{} WR lattices are “well distributed” among all [*unimodular*]{} lattices in $\real^N$, where a unimodular lattice is a lattice with determinant equal to 1. More precisely, he proved the following result about unimodular lattices. \[[@mcmullen]\] \[mcmullen\] Let $A \subseteq SL_N(\real)$ be the subgroup of diagonal matrices with positive diagonal entries, and let $\Lambda$ be a full-rank unimodular lattice in $\real^N$. If the closure of the orbit $A \Lambda$ is compact in the space of all full-rank unimodular lattices in $\real^N$, then it contains a WR lattice. Notice that in a certain sense this is a statement about the distribution of WR lattices in the space of all unimodular lattices in a fixed dimension. Motivated by this beautiful theorem, we want to investigate the distribution of WR sublattices of $\zed^N$, which is a natural arithmetic problem. For instance, for a fixed positive integer $t$, does there necessarily exist a WR sublattice $\Lambda \subseteq \zed^N$ so that $\det(\Lambda) = t$? If so, how many different such sublattices are there? The first trivial observation is that if $t = d^N$ for some $d \in \zed_{>0}$ and $I_N$ is the $N \times N$ identity matrix, then the lattice $\Lambda = (d I_N) \zed^N$ is WR with $\det(\Lambda) = t$ and $|\Lambda| = d$. It seems however quite difficult to describe [*all*]{} WR sublattices of $\zed^N$ in an arbitrary dimension $N$. This paper will be divided into two parts. From now on we will write $\WR(\Omega)$ for the set of all full-rank WR sublattices of a lattice $\Omega$; in this paper we will concentrate on $\WR(\zed^2)$. In section 3 we develop a certain parametrization of lattices in $\WR(\zed^2)$, which we then use to investigate the determinant set $\D$ of such lattices and to count the number of them for a fixed value of determinant.
Specifically, let $\D$ be the set of all possible determinant values of lattices in $\WR(\zed^2)$, and let $\Mm$ be the set of all possible values of squared minima of these lattices, i.e. $\Mm = \{ |\Lambda|^2 : \Lambda \in \WR(\zed^2) \}$. It is easy to see that $\Mm$ is precisely the set of all positive integers which are representable as a sum of two squares. It is then interesting to understand how dense these sets are in $\zed_{>0}$. For any subset $\PP$ of $\zed$ and $M \in \zed_{>0}$, we write $$\PP(M) = \{ n \in \PP : n \leq M\}.$$ Define the [*lower density*]{} of $\PP$ in $\zed$ to be $$\DL_{\PP} = \liminf_{M \rightarrow \infty} \frac{|\PP(M)|}{M},$$ and its [*upper density*]{} in $\zed$ to be $$\DU_{\PP} = \limsup_{M \rightarrow \infty} \frac{|\PP(M)|}{M}.$$ Clearly, $0 \leq \DL_{\PP} \leq \DU_{\PP} \leq 1$. If $0 < \DL_{\PP}$, we say that $\PP$ [*has density*]{}, and if $\DL_{\PP} = \DU_{\PP}$, i.e. if $\lim_{M \rightarrow \infty} \frac{|\PP(M)|}{M}$ exists, we say that $\PP$ [*has asymptotic density*]{} equal to the value of this limit, which could be 0. With this notation, we will show that $\D$ has density; its explicit representation is given in the following theorem. \[dense\] The determinant set $\D$ of lattices in $\WR(\zed^2)$ has representation $$\D = \left\{ (a^2+b^2)cd\ :\ a,b \in \zed_{\geq 0},\ \max\{a,b\} >0,\ c,d \in \zed_{>0},\ 1 \leq \frac{c}{d} \leq \sqrt{3} \right\},$$ and lower density $$\label{D_dens} \DL_{\D} \geq \frac{3^{\frac{1}{4}}-1}{2 \cdot 3^{\frac{1}{4}}} \approx 0.12008216 \dots$$ The minima set $\Mm$ has asymptotic density 0. Now, if $\Lambda \in \WR(\zed^2)$, let $\bx,\bwy$ be a minimal basis for $\Lambda$, and let $\theta$ be the angle between the vectors $\bx$ and $\bwy$; it is a well known fact that in dimensions $\leq 4$ a lattice is always generated by vectors corresponding to its successive minima, so such a basis certainly exists (see, for instance, [@pohst]).
Then there is a simple connection between the minimum and the determinant of $\Lambda$: $$\det(\Lambda) = \|\bx\| \|\bwy\| \sin\theta = |\Lambda|^2 \sqrt{ 1 - \frac{\left( \bx^t \bwy \right)^2}{|\Lambda|^4}} = \sqrt{ |\Lambda|^4 - \left( \bx^t \bwy \right)^2 }.$$ Lemma \[gauss\] below implies that $0 \leq |\bx^t \bwy| \leq \frac{|\Lambda|^2}{2}$. Therefore we have $$\frac{\sqrt{3}\ |\Lambda|^2}{2} \leq \det(\Lambda) \leq |\Lambda|^2.$$ In view of this relation, it is especially interesting that the determinant set has positive density while the minima set has density 0. Next, for each $u \in \D$ we want to count the number of $\Lambda \in \WR(\zed^2)$ such that $\det(\Lambda) = u$. We need some additional notation. Suppose $t \in \zed_{>0}$ has prime factorization of the form $$\label{prim_fact} t = 2^w p_1^{2k_1} \dots p_s^{2k_s} q_1^{m_1} \dots q_r^{m_r},$$ where $p_i \equiv 3\ (\md 4)$, $q_j \equiv 1\ (\md 4)$, $w \in \zed_{\geq 0}$, $k_i \in \frac{1}{2} \zed_{>0}$, and $m_j \in \zed_{>0}$ for all $
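The explicit representation of $\D$ given above lends itself to direct enumeration; the following sketch is our own check of that representation, not code from the paper:

```python
from math import isqrt, sqrt

def wr_determinants(M):
    """Elements of D up to M, per the representation
    D = {(a^2+b^2)cd : a,b >= 0, max(a,b) > 0, c,d >= 1, 1 <= c/d <= sqrt(3)}."""
    dets = set()
    for a in range(isqrt(M) + 1):
        for b in range(isqrt(M) + 1):
            s = a * a + b * b
            if s == 0 or s > M:
                continue
            for d_ in range(1, M // s + 1):
                c = d_                       # enforce 1 <= c/d <= sqrt(3)
                while c <= sqrt(3) * d_ and s * c * d_ <= M:
                    dets.add(s * c * d_)
                    c += 1
    return dets
```

For example, 3, 7 and 11 never occur as determinants of well-rounded sublattices of $\zed^2$, consistent with the constraint $\frac{\sqrt{3}}{2}|\Lambda|^2 \leq \det(\Lambda) \leq |\Lambda|^2$ combined with $|\Lambda|^2$ being a sum of two squares.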
--- abstract: 'The impact of the incoherent electron-positron pairs from beamstrahlung on the occupancy of the vertex detector (VXD) for the International Large Detector concept (ILD) has been studied, based on the standard ILD simulation tools. The occupancy was evaluated for two substantially different sensor technologies in order to estimate the importance of the latter. The impact of an anti-DID magnetic field was also studied.' author: - | Rita De Masi and Marc Winter\ Institut Pluridisciplinaire Hubert Curien (IPHC)\ 23 rue du Loess - BP28- F67037 Strasbourg (France) title: Improved Estimate of the Occupancy by Beamstrahlung Electrons in the ILD Vertex Detector --- Introduction ============ The incoherent production of electron-positron pairs resulting from the beam-beam interaction is the main source of background for the ILD vertex detector, and it is most constraining for its innermost layer. These electrons and positrons are produced with a longitudinal momentum of up to a few hundred GeV and a transverse momentum of a few tens of MeV on average. Due to their low $p_T$, they spiral in the solenoidal magnetic field, whose field lines are parallel to the beam line, so several of them can repeatedly traverse the same VXD layer. Those primary electrons and positrons may also hit elements of the detector further down the beam line, originating low energy particles traveling backward (secondaries), which may reach the VXD. The rate of secondaries reaching the VXD depends strongly on the presence of an additional dipole field located further down the beam line, as shown in Section \[aD\]; thus primaries and secondaries will be analyzed separately in the following. The time when the hit has taken place will be used to distinguish them: namely, all hits with a hit time shorter than 20 ns will be considered as generated by primaries, and those with a hit time larger than 20 ns by secondaries. A detailed description of the background simulation is given in [@bkgnote].
Analysis ======== 100 bunch crossings (BX) generated with the GuineaPig [@GP] generator have been studied. The standard simulation and reconstruction tools for the ILD detector concept have been used (i.e. Mokka [@Mokka] and Marlin [@Marlin], respectively). The detector model used in this study properly takes into account the angle of 14 mrad between the beam directions. A new design is being developed [@Mokka]. The hit distributions achieved are shown in Fig. \[Fig::occl1\]. Besides a change of the absolute hit rate, analogous distributions can be observed for the remaining layers. The $\phi$ distribution shows a significant increase of the number of hits in the region $|\phi|<50^\circ$, due to the particles with large hit time which are not produced symmetrically around the $z$ axis. Occupancy --------- [image](Fig8.eps){width="0.5\columnwidth"} The spread of the hits over the ladders may reach up to several millimeters, especially for backscattered particles which were produced at small polar angle in order to reach the VXD.\ The occupancy depends on the characteristics of the VXD, namely pixel size, integration time, number of hit pixels per impact, and effective thickness of the sensitive volume. In the absence of a choice of sensor technology, a set of those parameters has been agreed upon in the ILD vertex community as a reference, and it has been used to estimate the occupancy. As a comparison, the occupancy has been also estimated in the framework of a specific technology (CMOS [@cmos]). The parameters describing both options are shown in Tab. \[Tab::sC\]: 50 $\mu$m and 15 $\mu$m sensitive thickness, and 3 and 5 hit pixels on average for a straight impact, respectively. []{data-label="Tab::sC"} The results for the occupancy in each layer are shown for the two configurations in Tab. \[Tab::occupancy\].
------- -------- ---------------- ---------------- -------- ---------------- ----------------
 layer   total    large hit time   short hit time   total    large hit time   short hit time
   1     0.0790       0.0347           0.0443       0.0183       0.0080           0.0103
   2     0.0381       0.0164           0.0217       0.0062       0.0026           0.0035
   3     0.0105       0.0049           0.0056       0.0054       0.0025           0.0029
   4     0.0041       0.0020           0.0021       0.0021       0.0010           0.0011
   5     0.0016       0.0006           0.0010       0.0008       0.0003           0.0005
------- -------- ---------------- ---------------- -------- ---------------- ----------------
: Occupancy for each layer in absence of an anti-DID for the standard and CMOS configurations. The large and small hit time components are shown, as well as their sum. []{data-label="Tab::occupancy"} The values are averaged over $\phi$. In fact, due to the $\phi$ dependence shown in Figure \[Fig::occl1\], the local occupancy in a $\phi$ sector can be twice as high as the mean. On average, one can conclude that the large hit time contribution to the occupancy is more than $40\%$ of the total rate. Anti-DID magnetic field {#aD} ----------------------- A Detector Integrated Dipole (anti-DID), aligning the outgoing beam with the experimental magnetic field, can be used to reduce the beam size growth due to synchrotron radiation. The anti-DID also impacts the hit rate on the VXD due to beamstrahlung electrons, by reducing the number of backscattered electrons travelling backwards from further along the beam line. [image](aDFig8.eps){width="0.5\columnwidth"} The anti-DID reduces by roughly 30% the number of hits on the VXD, in particular the large hit time component, as can be seen in Figure \[Fig::occl1aD\]. This leads to a more homogeneous local distribution in $\phi$. The occupancy of the ILD vertex detector, which is a driving parameter of its requirements, has been evaluated with the latest version of the experimental apparatus, assuming a five-layer VXD geometry with 15 mm inner radius and a 3.5 T magnetic field.
The evaluation was performed for two different sets of pixel characteristics, representative of the most mature sensor technologies under consideration. Both sets assume a continuous read-out during the train. They differ by their read-out time, pixel pitch, cluster multiplicity and sensitive volume thickness. Conclusion ========== Occupancies of $\sim2\%$ and $\sim7\%$ were found in the innermost layer for the two sets. The average occupancy would be about 30% lower in presence of an anti-DID, with a 50% decrease in one azimuthal sector. Accounting for the uncertainties on these predictions translates into upper limits on the occupancy in the innermost layer in the range 5-15%, depending on the sensor characteristics. These high rates plead for additional R&D on the sensors equipping this layer, in particular for shortening the read-out time significantly below 50 $\mu s$. [99]{} I. Mashi, PhD thesis, in preparation. D. Schulte, PhD Thesis, University of Hamburg (1996). P.M. de Freitas, MOKKA, `http://mokka.in2p3.fr`. ILC software, `http://ilcsoft.desy.de`. `http://iphc.in2p3.fr/-CMOS-ILC-.html`.
--- abstract: 'We examine the dependence of the spatial two-point correlation function of quasars $\xi_{qq}(r,z)$ at different redshifts on the initial power spectrum in flat cosmological models. Quasars are assumed to form in the peaks of matter density fluctuations on appropriate scales. Quasars are considered as a manifestation of short-term active processes at the centers of these fluctuations; such processes set in when dark matter counterflows and a shock wave appear in the gas. We propose a method for calculating the correlation function $\xi_{qq}(r,z)$ and show its amplitude and slope to depend on the shape of the initial power spectrum and the scale $R$ of the fluctuations in which quasars are formed. We demonstrate that in CDM models with an appropriate initial power spectrum slope it is possible to explain, by choosing appropriate values of $R$, how the amplitudes and correlation radii of $\xi_{qq}(r,z)$ may either increase or decrease with increasing redshift $z$. In particular, the correlation radii of $\xi_{qq}(r,z)$ grow when $R$ grows. The H+CDM model at realistic values of $R$ fails to account for the observational data according to which the $\xi_{qq}(r,z)$ amplitude decreases with increasing $z$.' author: - 'B. Novosyadlyj, Yu. Chornij' title: '**SPATIAL CORRELATION FUNCTION OF QUASARS AND POWER SPECTRUM OF COSMOLOGICAL MATTER DENSITY PERTURBATIONS**' --- KiFNT, v. 14, No 2 (1998), p. 156-165 Introduction ============ The most elaborated scenario of the origin of the large-scale structure in the universe is pictured by the model in which such a structure results from the evolution of the uniform isotropic Gaussian scalar field of cosmological matter density fluctuations under the effect of gravitational instability. Assumptions as to the inflationary model and DM nature define the shape of the initial (post-recombination) power spectrum of cosmological density fluctuations, $P(k)$, and thus the principal parameters of large-scale structure can be theoretically calculated.
Therefore it is of importance to test cosmological models with given power spectra $P(k)$. This testing can be done by calculating spatial two-point correlation functions for large-scale structure elements on various scales and comparing them with observational data. The testing is based on the relationship between the characteristics of structure elements and their correlation functions, on the one hand, and the amplitude and slope of the initial spectrum $P(k)$ at different $k$, on the other hand. Information on the power spectrum on small and intermediate scales ($H_0$ being the Hubble constant at the present epoch) resides in the correlation functions of bright massive galaxies, rich clusters of galaxies, and quasars. The “observed” correlation functions of all these three types of objects are described by a power-law expression, where $r_0$ is the correlation radius for each of the three types of objects, respectively \[[@bs]-[@iras]\], \[[@adr]-[@mo]\]. At the moment, investigations of the “observed” correlation function of quasars in different redshift ranges give ambiguous results. For example, the correlation function amplitude found in \[[@iov2; @komb2; @mo]\] decreases with increasing $z$, it remains unchanged in \[[@adr]\], and grows at and diminishes at in \[[@komb3]\]. This result was obtained by different authors from two quasar samples: a combined sample of all quasars observed in the $z$ range from $0.1$ to $4.5$ and the “nearby” quasars at . Here we look into the possibility of using the above results as a test for cosmological models with given power spectra $P(k)$; to this end, the theoretical correlation function of quasars has to be calculated. Theoretical methods for calculating correlation functions of galaxies and their clusters are based on the theory of Gaussian random fields \[[@dor2]-[@kais]\] and have been devised in detail.
The results obtained within the scope of cosmological models with given initial power spectra were analyzed in \[[@hn1]-[@watan]\], for example. Calculation of the correlation function of quasars $\xi_{qq}$ is complicated by a number of problems which have yet to be resolved. What scale is typical of the regions where quasars formed at different $z$? What are typical physical parameters of quasars: mass, duration of formation, lifetime, etc.? What is the relation between $\xi_{qq}$ and these parameters? Answers to these questions essentially depend on the physical model chosen for the quasar phenomenon. In particular, the disk accretion of gas on a massive black hole at the center of a galaxy may be a model mechanism. Therefore, for the mass of the fluctuations in which quasars are formed we can take the mass of the “parent” galaxy $M_{g/q}$ which is able to ensure a high luminosity of the nucleus (observed as a quasar) during the quasar lifetime $\tau_q$. Based on the results of \[[@efst]-[@turn]\], we may take for $M_{g/q}$ (the black hole mass ), yr for $\tau_q$, and yr for the duration of quasar formation. Principal assumptions and formulation of the problem ==================================================== In the cosmological scenario used by us, galaxies, rich clusters of galaxies, and quasars appear in the peaks of the scalar Gaussian field of matter density fluctuations on corresponding scales, the relative amplitude of fluctuations being ($\sigma$ is the rms amplitude, $\nu$ is the peak height). It is assumed that galaxies and their clusters come into being when counterflows arise in the DM and a shock wave arises in the gas. The amplitude $\delta$ at this moment $t$ is determined from Tolman’s model in terms of redshift: . The amplitude corresponding to the objects that appeared earlier is ; it has a normal distribution: .
The probability that a galaxy or a rich cluster of galaxies occurs at a fixed $z$ is $$\label{P1} P_1(z)=\int\limits_{\delta(z)}^{\infty} p(\delta)\,d\delta.$$ The probability that two galaxies exist simultaneously at a fixed $z$ at two different points $\vec x_1$ and $\vec x_2$ ($r=|\vec x_1-\vec x_2|$) is $$\label{P2} P_2(z)=\int\limits_{\delta(z)}^{\infty}\int\limits_{\delta(z)}^{\infty} p(\delta_1,\delta_2)\,d\delta_1d\delta_2,$$ where $p(\delta_1,\delta_2)$ is the two-dimensional normal distribution of random amplitudes $\delta_1$ and $\delta_2$ \[[@ven]\]: $$p(\delta_1,\delta_2)=(2\pi)^{-1}\cdot\left(\sqrt{\xi^2(0,z)-\xi^2(r,z)}\right)^{-1}$$ $$\label{p2} \times\exp\left(-\frac{\xi(0,z)\cdot\delta_1^2+\xi(0,z)\cdot\delta_2^2-2\cdot\xi(r,z)\cdot\delta_1\cdot\delta_2}{2\cdot\left(\xi^2(0,z)-\xi^2(r,z)\right)}\right),$$ here $r>0$ and $\xi(r,z)$ is the correlation function of the density fluctuations in which the objects are formed. The function is calculated from the given initial power spectrum $P\left(k,R_f\right)$ smoothed on the scale $R_f$ which corresponds to the scale of the objects: $$\label{xi_p} \xi(r,z)=\frac{1}{2\pi^2}\cdot\int\limits_0^{\infty} k^2\cdot\frac{P\left(k,R_f\right)}{(1+z)^2}\cdot\frac{\sin(kr)}{kr}\,dk,$$ where $$P\left(k,R_f\right)=P(k)\cdot W^2(kR_f)$$ and $$W(kR_f)=\exp\left(-\frac{1}{2}\cdot k^2\cdot R_f^2\right)$$ is the smoothing function. The statistical correlation function of the fluctuation peaks in which cosmological objects are formed is, by definition, $$\label{xi_oo_o} \xi_{oo}^{st}(r,z)\equiv\frac{P_2(z)}{P_1^2(z)}-1.$$ This function for rich clusters of galaxies or for galaxies at $z=0$ is \[[@kais]\] $$\xi_{oo}^{st}(r)\equiv\xi_{oo}(r,z=0)=\sqrt{\frac{2}{\pi}}\cdot\left(\mathrm{erfc}\left(\frac{\nu}{\sqrt{2}}\right)\right)^{-2}\times$$ $$\label{xi_oo}
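The smoothed correlation function (\ref{xi_p}) is straightforward to evaluate numerically; a minimal sketch, with a toy power spectrum of our own choosing rather than a CDM spectrum:

```python
import numpy as np

def xi(r, z, P, Rf, kmax=40.0, nk=20000):
    """xi(r,z) = (2 pi^2)^{-1} * Int_0^inf k^2 P(k) W^2(k Rf) (1+z)^{-2}
    sin(kr)/(kr) dk, with the Gaussian window W(k Rf) = exp(-k^2 Rf^2 / 2)."""
    k = np.linspace(1e-8, kmax, nk)
    w2 = np.exp(-(k * Rf) ** 2)                      # W^2(k Rf)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(k r / pi) = sin(k r)/(k r)
    f = k ** 2 * P(k) * w2 * np.sinc(k * r / np.pi) / (1.0 + z) ** 2
    # trapezoidal rule; the Gaussian window makes the tail beyond kmax negligible
    integral = np.sum((f[1:] + f[:-1]) * np.diff(k)) / 2.0
    return float(integral) / (2.0 * np.pi ** 2)

P_toy = lambda k: k / (1.0 + k ** 3)   # illustrative spectrum only
```

Since the redshift enters only through the overall factor $(1+z)^{-2}$, one has $\xi(r,z)=\xi(r,0)/(1+z)^2$, which provides a simple consistency check of the quadrature.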
--- author: - 'Á. Sánchez-Monge' - 'R. Cesaroni' - 'M. T. Beltrán' - 'M. S. N. Kumar' - 'T. Stanke' - 'H. Zinnecker' - 'S. Etoka' - 'D. Galli' - 'C. A. Hummel' - 'L. Moscadelli' - 'T. Preibisch' - 'T. Ratzka' - 'F. F. S. van der Tak' - 'S. Vig' - 'C. M. ' --- Whatever the mechanism proposed for the formation of massive (i.e. OB-type) stars (monolithic collapse – Krumholz et al. [@krumholz2009]; competitive accretion driven by a stellar cluster – Bonnell & Bate [@bonnellbate2006]; Bondi-Hoyle accretion – Keto [@keto2007]; see also the review by Zinnecker & Yorke [@ziyo]), all of them predict the formation of circumstellar disks. It is thus surprising that only a handful of disk candidates have been observed in association with massive (proto)stars. As a matter of fact, despite many observational efforts, convincing evidence of disks has been found only around early B-type (proto)stars, while circumstellar disks around O-type stars remain elusive (Wang et al. [@wang2012]; Cesaroni et al. [@cesaroni2007] and references therein). Moreover, a detailed investigation of the disk properties, comparable to that performed in disks around low-mass stars (e.g. Dutrey et al. [@dutrey2007]), is still missing due to the large distances of OB-type (proto)stars and the limited angular resolution at (sub)millimeter wavelengths. With this in mind, we performed ALMA Cycle 0 observations of two IR sources containing B-type (proto)stars. These were chosen on the basis of their luminosities (on the order of $10^4~L_\odot$), presence of bipolar nebulosities/outflows, detection of broad line wings in typical jet/outflow tracers (SiO), and strong emission in hot molecular core (HMC) tracers (such as methyl cyanide, ). Here we present the most important results obtained for one of the two sources, 35. 35 is a well known star forming region located at a distance of $2.19_{-0.20}^{+0.24}$ kpc (Zhang et al. [@zhang2009]), with a luminosity of $\sim$$3\times10^4~L_\odot$[^1].
The region is characterized by the presence of a butterfly shaped reflection nebula oriented NE–SW (see Fig. \[flarge\]), as well as a bipolar molecular outflow in the same direction, observed in  by Dent et al. ([@dent1985a]), Gibb et al. ([@gibb2003]; hereafter GHLW), Birks et al. ([@birks2006]; hereafter BFG), and López-Sepulcre et al. ([@lopezsepulcre2009]). The (1–0) line emission appears to trace also a N–S collimated flow (see BFG), coinciding with a thermal radio jet (Heaton & Little [@heatonlittle1988]; GHLW) seen also at IR wavelengths (Dent et al. [@dent1985b]; Walther et al. [@walther1990]; Fuller et al. [@fuller2001]; De Buizer [@debuizer2006]; Zhang et al. [@zhang2013]). It has been proposed that the poorly collimated NE–SW outflow and the N–S jet could be manifestations of the same bipolar flow undergoing precession (Little et al. [@little1998]). However, evidence for multiple outflows in this region is provided by SiO, , , and  line observations (GHLW; Lee et al. [@lee2012]). A molecular clump elongated perpendicular to the NE–SW outflow has been mapped in dense gas tracers (, CS), whose emission exhibits a velocity gradient from NW to SE (Little et al. [@little1985]; Brebner et al. [@brebner1987]). This was first interpreted as a large ($\sim$1 or 0.6 pc) disk/toroid rotating about the NE–SW outflow axis, but GHLW, on the basis of their  and  observations, propose that this is actually a fragmented rotating envelope containing multiple young stellar objects (YSOs). Indeed, GHLW identify a core at the center of the outflow and another core, named G35MM2, offset to the SE. Observations and results {#sobs} ======================== 35 was observed with ALMA Cycle 0 at 350 GHz in May and June 2012, with baselines in the range 36–400 m, providing sensitivity to structures $\le$2 . 
The digital correlator was configured in 4 spectral windows (with dual polarization) of 1875 MHz and 3840 channels each (covering the ranges 334.85–338.85 GHz and 346.85–350.85 GHz), providing a resolution of $\sim$0.4 km s$^{-1}$. Flux, gain, and bandpass calibrations were obtained through observations of Neptune and J1751$+$096. The data were calibrated and imaged using CASA. A continuum map was obtained from line-free channels and subtracted from the data. The synthesized beam is $0\farcs51\times0\farcs46$, P.A.=48. The rms noise is $\sim$6 mJy beam$^{-1}$ for individual line channels, while in the continuum image it is $\sim$1.8 mJy beam$^{-1}$, implying an S/N of only $\sim$100; the continuum image is thus dynamic-range limited. In Fig. \[flarge\], we present the map of the 350 GHz continuum emission overlaid on an enhanced resolution Spitzer/IRAC image at 4.5  extracted from the GLIMPSE survey (Benjamin et al. [@benj]). The sub-millimeter continuum emission is clearly tracing an elongated structure across the waist of the butterfly shaped nebula. In all likelihood, we are detecting the densest part of the flattened molecular structure observed on a larger scale by Little et al. ([@little1985]), Brebner et al. ([@brebner1987]), and GHLW. Along the elongated structure a chain of at least 5 cores is seen (Fig. \[fcont\]), lending support to GHLW’s idea that one is dealing with a fragmented structure instead of the smooth disk/toroid hypothesized by Little et al. ([@little1985]). We stress that the angular resolution of our maps ($\sim$7 times better than previous (sub)millimeter observations) reveals that the YSOs powering the outflow(s) lie inside cores A and/or B (see Fig. \[fcont\]), because these two are the only cores located close to the geometrical center of the bipolar nebula. In particular, core B lies along the N–S jet traced by the IR and radio emission, and coincides with one of the free-free sources detected by GHLW.
Methyl cyanide emission (as well as other typical HMC tracers) is clearly detected only towards cores A and B, and marginally towards G35MM2. Emission by vibrationally excited lines of  also indicates that cores A and B could be hosting massive stars. Core B coincides also with a compact free-free continuum source detected by GHLW at 6 and 3.6 cm, and by Codella et al. ([@codella2010]) at 1.3 cm. This emission could be part of the N–S thermal radio jet or might be coming from an  region ionized by an embedded early-type star. We will discuss this possibility in Sect. \[sdis\]. We now investigate the gas velocity field in the two cores by computing the first moment of a  line, a dense gas tracer. Figure \[fvelo\] plots the result for the (19–18) $K$=2 line, with overlaid the line emission averaged over the same velocity interval used to calculate the first moment. Note that the mean velocity of core
--- author: - '${}^{1,2}$Yoshiaki Maeda, ${}^3$ Akifumi Sako, ${}^4$ Toshiya Suzuki and ${}^4$ Hiroshi Umetsu' title: Gauge Theories in Noncommutative Homogeneous Kähler Manifolds --- [ ${}^{1}$ Department of Mathematics, Faculty of Science and Technology, Keio University\ 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan\ ${}^2$ Mathematical Research Centre, University of Warwick\ Coventry CV4 7AL, United Kingdom,\ ${}^3$ Department of Mathematics, Faculty of Science Division II,\ Tokyo University of Science, 1-3 Kagurazaka, Shinjuku-ku, Tokyo 162-8601, Japan\ ${}^4$ Kushiro National College of Technology\ 2-32-1 Otanoshike-nishi, Kushiro, Hokkaido 084-0916, Japan ]{} [**MSC 2010:**]{} 53D55, 81R60 Introduction ============ Field theories on noncommutative spaces appear in various phenomena in physics. For example, effective theories on D-branes with NS-NS B field backgrounds give rise to gauge theories on noncommutative spaces [@Seiberg:1999vs]. As another example, in matrix models [@Banks:1996vh; @Ishibashi:1996xs], noncommutative field theories corresponding to fuzzy spaces appear when one expands the models around some classical solutions. A typical noncommutative space is the noncommutative ${\mathbb R}^d$. Field theories on the noncommutative ${\mathbb R}^d$ have many intriguing properties: for example, the existence of noncommutative instantons [@Nekrasov:1998ss], noncommutative scalar solitons [@Gopakumar:2000zd], etc. as classical solutions, and the appearance of UV-IR mixing [@Minwalla:1999px] at the quantum level (see, for example, the review papers [@Douglas:2001ba; @sako_review; @Szabo:2001kg]). It is important to investigate whether field theories on more generic noncommutative manifolds have similar properties. However, field theories on noncommutative manifolds are not well understood at present, except for a few examples such as noncommutative tori, $S^2$, etc.
Several methods to construct noncommutative manifolds have been proposed, including the important approach of deformation quantization. Deformation quantization was first introduced in [@Bayen:1977ha]. After [@Bayen:1977ha], several alternative methods of deformation quantization were proposed [@DeW-Lec; @Omori; @Fedosov; @Kontsevich]. In particular, deformation quantization of Kähler manifolds was studied in [@Moreno86a; @Moreno86b; @Cahen93; @Cahen95]. We study gauge field theories on noncommutative Kähler manifolds based on the deformation quantization with separation of variables introduced by Karabegov to quantize Kähler manifolds [@Karabegov; @Karabegov1996; @Karabegov2011]. The purpose of this paper is to construct gauge theories on noncommutative homogeneous Kähler manifolds. Field theories need to define differentials on base spaces. Note that the usual differentiations by coordinates in a noncommutative space may not be derivations; in other words, they do not satisfy the Leibniz rule for star products in general. We use inner derivations as differentials, which are defined by commutators with a function $P$ under a star product, [*i.e. *]{} $[ P , \ \cdot \ ]_*$. These operators automatically satisfy the Leibniz rule. For a generic $P$, the inner derivation $[ P , \ \cdot \ ]_*$ includes higher derivative terms. In the following, we choose the functions $P$ to be Killing potentials [@Muller:2004]. For homogeneous Kähler manifolds ${\cal G} / {\cal H}$, there are Killing vectors ${\cal L}_a$ which constitute the Lie algebra of the isometry group ${\cal G}$, and each of these vectors can be realized as an inner derivation $[ P_a , \ \cdot \ ]_*$ generated by the corresponding Killing potential $P_a$. Using these Killing potentials, we construct a gauge theory on noncommutative homogeneous Kähler manifolds. In our previous papers [@Sako:2012ws; @Sako:2013noa], we studied deformation quantizations with separation of variables for ${\mathbb C}P^N$ and ${\mathbb C}H^N$, and gave explicit expressions for the star products.
Using these results, we describe $U(n)$ gauge theories on noncommutative ${\mathbb C}P^N$ and on noncommutative ${\mathbb C}H^N$, as examples. (On other types of noncommutative ${\mathbb C}P^N$, different gauge theories have been constructed. For example, a gauge theory on fuzzy ${\mathbb C}P^N$ is studied in [@CarowWatamura:1998jn; @Grosse:2004wm].) The organization of this article is as follows. In section 2, after we review deformation quantization with separation of variables for Kähler manifolds proposed by Karabegov, we study differentials on noncommutative Kähler manifolds. The conditions under which inner derivations become vector fields (Killing vector fields) are provided. We also discuss the geometrical constraints on such vector fields. In section 3, we discuss gauge theories on noncommutative ${\mathbb C}P^N$ and ${\mathbb C}H^N$, as concrete examples. In section 4, we summarize our results and give some further discussion. Deformation quantization of gauge theories with separation of variables ======================================================================= Deformation quantization with separation of variables {#reviewKarabegov} ----------------------------------------------------- We briefly review the deformation quantization with separation of variables for Kähler manifolds, which was proposed by Karabegov [@Karabegov1996]. Let $\Phi$ be a Kähler potential and $\omega$ a Kähler 2-form for an $N$-dimensional Kähler manifold $M$: $$\omega := i g_{k \bar{l}} dz^{k} \wedge d \bar{z}^{l}, ~~~~ g_{k \bar{l}} := \frac{\partial^2 \Phi}{\partial z^{k} \partial \bar{z}^{l}} .$$ We denote the inverse of the metric $(g_{k \bar{l}})$ as $(g^{\bar{k} l})$, and set $g_{\bar{k}l} = g_{l \bar{k}}$, $g^{l \bar{k}} = g^{\bar{k} l} $.
We use the following abbreviations $$\begin{aligned} \partial_k = \frac{\partial}{\partial z^{k}} , ~~~~ \partial_{\bar{k}} = \frac{\partial}{\partial \bar{z}^{k}}.\end{aligned}$$ Deformation quantization is defined as follows. Let $\cal F$ be a set of formal power series in $\hbar$ with coefficients of $C^{\infty}$ functions on $M$ $$\begin{aligned} {\cal F} := \left\{ f \ \Big| \ f = \sum_k \hbar^k f_k, ~f_k \in C^\infty (M) \right\} ,\end{aligned}$$ where $\hbar$ is a noncommutative parameter. A star product is defined on ${\cal F}$ by $$\begin{aligned} f * g = \sum_k \hbar^k C_k (f,g), \end{aligned}$$ such that the product satisfies the following conditions. 1. $*$ is an associative product. 2. Each $C_k$ is a bidifferential operator. 3. $C_0(f,g) = fg$, and $C_1(f,g) - C_1(g,f) = i\{f,g\}$, where $\{\cdot,\cdot\}$ is the Poisson bracket. 4. $ f * 1 = 1* f = f$. Moreover, $*$ is called a star product with separation of variables when it satisfies $$a * f = a f, ~~~~ f * b = f b,$$ for any holomorphic function $a$ and any anti-holomorphic function $b$. Karabegov constructed a star product with separation of variables for Kähler manifolds in terms of differential operators [@Karabegov; @Karabegov1996], as briefly explained below. For the left star multiplication by $f \in {\cal F}$, there exists a differential operator $L_f$ such that $$L_f g = f * g .$$ $L_f$ is given as a formal power series in $\hbar$ $$L_f = \sum_{n=0}^{\infty} \hbar^n A^{(n)}, \label{Lf-An}$$ where $A^{(n)}$ is a differential operator which contains only partial derivatives by $z^i$
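To make the definition concrete in the simplest possible setting, consider the flat case $M = {\mathbb C}$ with Kähler potential $\Phi = z\bar z$, where the separation-of-variables star product reduces to the Wick-type product $f * g = \sum_{n} \frac{\hbar^n}{n!}\,(\partial_{\bar z}^n f)(\partial_z^n g)$. The sketch below (a toy representation of ours, not taken from the paper) realizes this product on polynomials in $(z,\bar z)$ stored as dictionaries $\{(i,j):c\}$ for monomials $c\,z^i\bar z^j$:

```python
import math

# Wick-type star product on flat C:  f * g = sum_n hbar^n/n! (d_zbar^n f)(d_z^n g).
# Polynomials in (z, zbar) are dicts {(i, j): coeff} meaning coeff * z^i * zbar^j.
def star(f, g, hbar=1.0):
    out = {}
    for (i, j), a in f.items():
        for (k, l), b in g.items():
            # d_zbar^n (z^i zbar^j) and d_z^n (z^k zbar^l) vanish for n > min(j, k)
            for n in range(min(j, k) + 1):
                coeff = (a * b * hbar ** n / math.factorial(n)
                         * math.perm(j, n) * math.perm(k, n))
                key = (i + k - n, j + l - n)
                out[key] = out.get(key, 0.0) + coeff
    return {m: c for m, c in out.items() if c != 0.0}
```

One checks directly that a holomorphic $a$ multiplies plainly from the left ($a * f = af$), an anti-holomorphic $b$ from the right ($f * b = fb$), and $[\bar z, z]_* = \hbar$, so the separation-of-variables conditions above hold in this toy case.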
--- abstract: 'Using Monte Carlo simulation, we study the influence of geometric confinement on demixing for a series of symmetric non-additive hard sphere mixtures confined in slit pores. We consider both a wide range of positive non-additivities and a series of pore widths, ranging from the pure two dimensional limit to a large pore width where results are close to the bulk three dimensional case. Critical parameters are extracted by means of finite size analysis. We find that for this particular case in which demixing is induced by volume effects, phase separation is in most cases somewhat impeded by spatial confinement. However, a non-monotonous dependence of the critical pressure and density with pore size is found for small non-additivities. In this latter case, it turns out that an otherwise stable bulk mixture can be forced to demix by simple geometric confinement when the pore width decreases down to approximately one and a half molecular diameters.' author: - 'N.G. Almarza' --- Geometric confinement is known to affect the phase behavior of fluids from two standpoints[@RPP_1999_62_1573]. It is obvious that the reduction in the number of neighbors of those molecules adjacent to the pore walls will induce important phase diagram shifts, whose character will be mostly dependent on the nature of the wall-fluid (or wall-adsorbate) interaction. In the limit of plain two dimensional confinement the system will exhibit bidimensional criticality, which is essentially different (e.g. as far as critical indices are concerned) from its bulk three dimensional counterpart[@TCritPhen93]. We assume that this bidimensional criticality also holds for the different levels of confinement studied in this work[@Binder1992a]. Many new and interesting effects can be induced by confinement and by the interplay between adsorbate-adsorbate and adsorbate-pore wall forces.
Very recently, Severin and coworkers[@Severin2014] found evidence of a microphase separation in an otherwise fully miscible mixture of ethanol and water when adsorbed in a slit pore formed by a graphene layer deposited on a mica wall. Of utmost interest are also the effects that confinement has on enhancing or preempting crystallization of undercooled fluids[@APL_2005_86_103110; @JPCM_2006_18_R15]. This has been a key approach in the attempts to throw some light on the search for the elusive liquid-liquid critical point in undercooled water[@Biddle2014], resorting to the preemption of crystallization induced by tight confinement of water in nanopores[@Chen2006; @Bertrand2012] and extensive use of diffraction experiments in combination with computer simulations. Not long ago, Fortini and Dijkstra[@Fortini2006] explored the possibility of manipulating colloidal crystal structures by confinement in slit pores. In contrast, thorough studies on the influence of tunable confinement on demixing transitions are scarce[@Duda2003]. One of the simplest systems that illustrate demixing in binary mixtures is the non-additive hard sphere system (NAHS) with positive non-additivity, of which the limiting case of the Widom-Rowlinson model[@Widom1970] has deserved particular theoretical attention and prompted the development of specially adapted algorithms to cope with the hard-core singularities and critical slowing down of the demixing transition[@Johnson1997]. More general instances of the non-additive hard sphere mixture problem (mostly in the symmetric case) have been studied in the two-dimensional limit[@Saija2002], and in a number of detailed studies in three dimensions[@JCP_1996_104_4180; @Gozdz2003; @Jagannathan2003; @Buhot2005]. In this work we address the confined NAHS system by simulation.
The model, defined as a mixture of A and B components, is characterized by an interaction of the type $$u_{\alpha\beta}(r) = \left\{\begin{array}{ll} \infty & \mbox{if}\; r \le \sigma (1 + (1-\delta_{\alpha\beta})\Delta) \\ 0 & \mbox{if}\; r > \sigma (1 + (1-\delta_{\alpha\beta})\Delta) \end{array}\right.$$ where $\alpha,\beta$ denote the A and B species, $\delta_{\alpha\beta}$ is Kronecker’s delta, the non-additivity parameter is $\Delta > 0$, and $r$ is the interparticle separation. We will study a series of confined non-additive hard sphere mixtures (for various $\Delta>0$ values) using extensive semi-grand ensemble Monte Carlo simulations[@Kofke1988; @FrenkelSmitbook; @Gozdz2003]. The effects of geometric confinement are modeled by the presence of hard-core walls, separated by a distance, $H$, that constrain the particle movement in one space direction (along the $z$-axis as defined here). The fluid particles will thus be subject to an external potential of the form $$V^{ext}(z) = \left\{\begin{array}{ll} 0 & {\rm if } \; \sigma/2 \le z \le H - \sigma/2 \\ \infty & {\rm otherwise}. \end{array} \right.$$ This aims at reproducing the behavior of a fluid confined in a slit pore. Since all interactions at play are purely hard-core, the demixing transition will result from purely entropic (i.e. excluded volume) effects. Our calculations range from the pure two dimensional limit to a relatively large pore width ($10\sigma$, approaching the bulk three dimensional mixture). We have taken advantage of the particular nature of the interaction to implement cluster algorithms[@PRL_1987_58_86; @PRL_1989_62_261; @Buhot2005] in order to cope with the critical slowing down when approaching the consolute point. Finite size scaling techniques have been applied in order to provide accurate estimates of the critical points[@Gozdz2003]. The confined NAHS system has been previously studied by Duda et al.
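Since both $u_{\alpha\beta}(r)$ and $V^{ext}(z)$ are purely hard-core, a Monte Carlo move only ever needs overlap tests, never energy differences. A minimal sketch of these rules (the function names and the value $\Delta = 0.2$ are ours, for illustration):

```python
def contact_distance(alpha, beta, sigma=1.0, nonadd=0.2):
    """sigma * (1 + (1 - delta_ab) * Delta): unlike pairs have a larger core."""
    kron = 1.0 if alpha == beta else 0.0      # Kronecker delta
    return sigma * (1.0 + (1.0 - kron) * nonadd)

def pair_overlap(r, alpha, beta, sigma=1.0, nonadd=0.2):
    """True when u_ab(r) is infinite, i.e. the hard cores overlap."""
    return r <= contact_distance(alpha, beta, sigma, nonadd)

def inside_slit(z, H, sigma=1.0):
    """V_ext(z) = 0 only when the particle center keeps sigma/2 from each wall."""
    return sigma / 2.0 <= z <= H - sigma / 2.0
```

A trial translation or identity change is accepted whenever no pair overlap is created and the particle stays inside the slit; otherwise it is rejected outright.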
[@Duda2003] by means of mean-field theory and Monte Carlo simulations, considering two values of the slit width, $H$, and different values of $\Delta$. In most of the cases they simulate just one system size, corresponding to a number of particles $N=1000$. Here we will perform a comprehensive analysis of the phase diagram for different values of $H$, and $\Delta$. In addition, for each case several values of $N$ will be considered, which will allow us to get more reliable estimates of the phase diagram of these systems, and in particular of the critical points. The phases of these systems are summarized in Table III. Methodology =========== Given the particular symmetry of our model, the most appropriate simulation approach to study the phase equilibria is the use of semi-grand canonical Monte Carlo (MC) simulations [@Kofke1988; @FrenkelSmitbook; @Gozdz2003]. We impose the difference between the chemical potentials of the two components $\Delta \mu \equiv \mu_B - \mu_A$, the volume $V$, the temperature $T$ and keep the total number of particles, $N (= N_{\rm A} + N_{\rm B})$ fixed; $x = N_{\rm A}/N$ is the concentration of particle species A. The total number density $\rho = N/V$ is thus fixed. In addition to the conventional MC moves, particles can also modify their identity (i.e. the species to which they belong)[@JCP_1996_104_4180]. The identity sampling can be performed through an efficient cluster algorithm that involves all the particles in the systems and that will be presented later in the paper. After $5 \times 10^5$ MC sweeps for equilibration, our simulations were typically extended over $2\times 10^6$ MC sweeps to perform averages. A sweep involves $N$ single-particle translation attempts, and one cluster move. Note that due to symmetry the critical mole fraction of component $A$ (and $B$) will be $x_c=1/2$, and the demixing transition will occur at $\Delta \mu=0$. 
When demixing occurs, the mole fractions, $X$, of the components in the two phases are computed through the ensemble averages of the order parameter $$\theta=2 x-1, \label{theta}$$ as $X = 1/2 \pm \sqrt{<\theta^2>}/2$. Given the symmetry of the model and the efficiency of the cluster algorithm, the average of $x$ from the simulations at $\Delta \mu=0$ will be $<x> \simeq 1/2$, independently of the presence or absence of demixing at the simulation conditions. By analysis of the mole fraction histograms for a series of binary mixtures at different total densities, $\rho = \rho_A+\rho_B$, one can obtain a series of phase diagrams for each sample size, as illustrated in Figure \[dphas2\], where the extreme dependence of the results on the sample size in the neighborhood of the critical point can be readily appreciated. It is well known that as the critical point is approached, larger samples are needed,
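The estimator $X = 1/2 \pm \sqrt{<\theta^2>}/2$ applies directly to a stream of sampled concentrations; a minimal sketch (fed with synthetic samples in place of an actual simulation run):

```python
import math

def coexistence_fractions(x_samples):
    """Coexisting mole fractions X = 1/2 +/- sqrt(<theta^2>)/2, with theta = 2x - 1."""
    theta_sq = sum((2.0 * x - 1.0) ** 2 for x in x_samples) / len(x_samples)
    half_width = math.sqrt(theta_sq) / 2.0
    return 0.5 - half_width, 0.5 + half_width
```

For a demixed system the samples pile up near the two coexisting concentrations and the estimator recovers them; for a mixed system $\theta$ fluctuates around zero and both branches collapse onto $x = 1/2$.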
--- bibliography: - 'paper.bib' ---
Introduction ============ In a recent experiment [@CSR99], a strongly isolated quantum dot was charged with excess electrons, and their sequential escapes were recorded over a one hour time period. This was repeated 150 times to obtain a statistical distribution of decay times. The dot is formed in an electron gas located at a depth of $70 \ nm$ in a $GaAs-AlGaAs$ heterostructure. Its shape is defined by electrostatic confinement using a set of gates, as sketched in the insert to Fig. \[fig1\]. The gate voltages were ramped up quickly, so that the dot retained a sizeable number of excess electrons when it was well isolated from the surrounding electron gas. The observations correspond to sequential tunnelling of (seven) electrons from the dot to the surroundings. The observations are shown in Fig. \[fig1\]. A sequential decay pattern becomes apparent. Sequential decays have been known and studied for over a century in the context of nuclear physics. The combined instances of alpha and beta decays from the heaviest elements are responsible for most natural radioactivity. The description of alpha decay in terms of tunnelling of alpha particles through a confining potential dates back to the 1920’s (Gamow [@G28], Condon and Gurney [@CG28]). Although the basic nature of the decay as a barrier penetration is well understood, accurate predictions for radioactive lifetimes are difficult because the process by which the escaping alpha-particle is preformed within the nucleus requires an understanding of four-body correlations. As a result, it is impossible to deduce accurate information on the barrier shape. Nevertheless, alpha decays have provided useful information on nuclear radii and the range and gross features of the nuclear interaction [@PB75].
It has become commonplace to say that a quantum dot is an artificial atom, but in fact the self-consistent potential confining electrons in a large dot has more in common with the mean field potential in a heavy nucleus: flat in the interior, with abrupt walls. An artificial nucleus is a more apt description, as will become clear in this paper. Indeed, the detection of sequential decays from an isolated quantum dot is a more favourable situation for study of the decay process, as the question of preforming the electron does not arise. Hence, we can more confidently test our knowledge of the confining barriers for electrons, as well as the profile, and dependence on occupation number, of the dot potential. We will analyze these aspects in this work, and show that these measurements of the lifetimes of “radioactive quantum dots” introduce new constraints on our ability to model their structure. The present experiment has another significant advantage over nuclear decays: instead of counting incoherent decays from a large sample of identical nuclei, here a single dot is involved, and the correlation between consecutive events can be analyzed. In addition, it should be possible to design the shape, density and excitation energy of the dot within rather broad margins, so that future experiments on mesoscopic systems will be much more flexible than those in nuclear systems, where only those nuclei existing in nature, or created in sufficient numbers, can be studied. Thus, the study of electron decays from a quantum dot has the potential to reveal new features of the tunnelling process. This is a topic of currently renewed interest: see for example van Dijk and Nogami [@vDN99]. In this work we will describe the decay process using analytic models which incorporate characteristics of the confinement potential extracted from realistic numerical simulations.
As the dot contains about 300 electrons, Poisson-Thomas-Fermi calculations should be adequate to describe the electron density and the confining potential of the dot. With these in hand we have developed accurate analytic approximations for the confining potential that allow us to construct an envelope approximation wavefunction for the electrons in the dot, and to compute the electron lifetimes from a fully quantal expression for the transmission amplitude across the barrier. Previous works which model a quantum dot have been concerned with the wave functions of confined states in the dot, the electron density distribution and the shape of the confining potential. For such purposes, only the inside of the barrier matters. It is when one looks at the escape of electrons from the dot that the barrier height, its width and shape become important; these are the new features explored in this paper. In section II we describe the development of our model, while in Section III we discuss the results for the sequence of lifetimes and compare them with experiment. Some details are relegated to two appendices. Modelling the heterostructure ============================= [*i)*]{} Our inputs for the PS simulation are the thickness and composition of each layer in the heterostructure, and the dopant concentration in the donor layer. From these we predict the density of the 2DEG. The only adjustable parameter is the donor ionization energy which is set to be $e\Phi_i = 0.12 \ eV$, in order to reproduce the measured 2DEG density, $ n_e = 2.74 \, 10^{11} \ cm^{-2} $. For the simpler Poisson-Thomas-Fermi scheme we employ a common relative permittivity $\varepsilon_r = 12.2$ for all layers of the heterostructure, which, combined with the parameters already used for the PS simulation, also reproduces the experimental $n_e$. After this “fitting” the model has no other free parameters.\ [*ii)*]{} For the gated structure we use the gate layout and voltages of the experiment.
To solve the Poisson equation for the gated heterostructure one has to impose as a boundary condition the value of the electrostatic potential on the exposed surface of the heterostructure, and on the gates. We assume Fermi level pinning and choose the energy of the surface states as the zero of the energy scale. In this convention, the conduction band edge is set at $e V_s = 0.67 \ eV $ on the exposed surface. Under each gate the conduction band is set at $ eV_{ms}+ eV_g$, where $V_g$ is the gate voltage and the metal semiconductor contact potential, $eV_{ms}$, is taken as $0.81 \ eV$ [@Mo95]. The electrostatic potential due to the gates is then computed using semi-analytic expressions based on the work of Davies [*et al. *]{} [@Davies88] and [@DLS95]. Added to this are: [*a)*]{} the Coulomb potential (direct term) between the electrons, and a mirror term which imposes the boundary conditions at the surface, and [*b)*]{} the contribution from the fully ionized donor layer and its mirror term (see Sect. IIA of [@MWS94] for details of a similar example.) We neglect exchange and correlation effects, which are small.\ [*iii)*]{} The connection between the confining potential defined by the conduction band edge and the electron density is completed by using the Thomas-Fermi approximation at zero temperature: $$\rho_e({\vec r}) = {1\over {3\pi^2}} \left( {{2m^*}\over \hbar^2} (E_F -eV({\vec r}))\right)^{3/2} \label{eq:1}$$ The PTF iteration is performed starting from the ungated heterostructure densities as trial values. Equilibrium dot --------------- As a first step, we examine the dot in its final state after all the excess electrons have escaped. This corresponds to a PTF simulation with the same Fermi level, $E_{F,dot} = 0$, for the electrons in the dot and in the 2DEG outside the barriers. The gate voltages are taken from ref. [@CSR99] as $V_{PL} = -0.40 \ V$, $V_{C1}= V_{C2} = -0.44 \ V$ and $V_H = -0.7 \ V$. 
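Equation (\ref{eq:1}) is local, so it can be tabulated point by point once $V(\vec r)$ is known. The sketch below works in eV and nm and assumes the GaAs conduction-band effective mass $m^* = 0.067\,m_e$, a standard value we adopt here for illustration (the paper's own material parameters are only partly listed above):

```python
import math

HBAR2_OVER_2ME = 0.0380998  # hbar^2 / (2 m_e) in eV nm^2, for the bare electron mass

def tf_density(E_F, eV_local, m_rel=0.067):
    """Eq. (1): rho_e = (1/3pi^2) [(2 m*/hbar^2)(E_F - eV)]^{3/2}, in nm^-3.
    Returns 0 where E_F < eV (classically forbidden region).
    m_rel = 0.067 is the assumed GaAs effective mass in units of m_e."""
    kinetic = E_F - eV_local                      # local Fermi kinetic energy, eV
    if kinetic <= 0.0:
        return 0.0
    k_F_sq = kinetic * m_rel / HBAR2_OVER_2ME     # local k_F^2 in nm^-2
    return k_F_sq ** 1.5 / (3.0 * math.pi ** 2)
```

In the PTF iteration this map from potential to density is evaluated on the grid at every step, and the Poisson equation is then re-solved with the updated charge density until self-consistency.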
The predicted PTF 3D electron distribution $\rho_e(x,y,z)$ is more conveniently visualized in terms of a projected 2D density: $$n_e(x,y) = \int_{z_j}^{\infty} \rho_e(x,y,z) \ dz \ , \label{eq:2}$$ where $z_j$ is the junction plane. The $n_e(x,y)$ distribution, shown in Fig. \[fig2\], has an approximately rectangular boundary, and its maximum value is close to the 2DEG density of the ungated heterostructure. Dot with excess electrons ------------------------- To study these configurations we set the Fermi level inside the dot, $E_{F,dot}$, higher than its value outside the barriers, $E_{F,2DEG}=0$. We can do so because the dot is well pinched off from the surrounding electron gas. We ran PTF simulations with equally spaced values for $E_{F,dot}$ running from $0$ to $17.5 \ meV$ in steps of $2.5 \ meV$. The occupation $Q$ of the dot increases linearly with $E_{F,dot}$ at the rate $2.75$ electrons per meV, giving occupations $286 \le Q \
--- abstract: 'Microscopic structure of the low-lying isovector dipole excitation mode in neutron-rich $^{26,28,30}$Ne is investigated by performing deformed quasiparticle-random-phase-approximation (QRPA) calculations. The particle-hole residual interaction is derived from a Skyrme force through a Landau-Migdal approximation. We have obtained the low-lying resonance in $^{26}$Ne at around 8.5 MeV. It is found that the isovector dipole strength at $E_{x}<10$ MeV exhausts about 6.0% of the classical Thomas-Reiche-Kuhn dipole sum rule. This excitation mode is composed of several QRPA eigenmodes, one is generated by a $\nu(2s^{-1}_{1/2} 2p_{3/2})$ transition dominantly, and the other mostly by a $\nu(2s^{-1}_{1/2} 2p_{1/2})$ transition. The neutron excitations take place outside of the nuclear surface reflecting the spatially extended structure of the $2s_{1/2}$ wave function. In $^{30}$Ne, the deformation splitting of the giant resonance is large, and the low-lying resonance is overlapping with the giant resonance.' author: - 'Kenichi Yoshida$^{1,2}$' - 'Nguyen Van Giai$^{2}$' title: 'Low-lying dipole resonance in neutron-rich Ne isotopes ' --- Introduction ============ The study of nuclei far off stability is one of the most active research fields in nuclear physics [@tan01; @hor01; @hag02a], and exploring the collective motions unique in unstable nuclei is one of the main issues experimentally and theoretically [@neu07]. In neutron-rich nuclei, because of the absence of the Coulomb barrier the surface structure is quite different from stable nuclei. One of the unique structures is the neutron skin [@suz95; @miz00]. Since the collective excitations are sensitive to the surface structure, one can expect new kinds of exotic excitation modes associated with the neutron skin to appear in neutron-rich nuclei. 
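The TRK fractions quoted in the abstract can be put on an absolute scale with the classical sum rule $S_{\rm TRK} = \frac{9}{4\pi}\frac{\hbar^2}{2m}\frac{NZ}{A}\,e^2 \approx 14.8\,\frac{NZ}{A}$ $e^2$fm$^2$MeV; a small helper (the prefactor 14.8 is the standard textbook value, quoted here for orientation only):

```python
def trk_sum_rule(N, Z):
    """Classical Thomas-Reiche-Kuhn E1 energy-weighted sum rule, in e^2 fm^2 MeV."""
    return 14.8 * N * Z / (N + Z)

def trk_fraction(ewsr_part, N, Z):
    """Fraction of the TRK sum rule exhausted by a partial energy-weighted sum."""
    return ewsr_part / trk_sum_rule(N, Z)
```

For $^{26}$Ne ($Z=10$, $N=16$) this gives $S_{\rm TRK}\approx 91$ $e^2$fm$^2$MeV, so the quoted 6% below $E_x = 10$ MeV corresponds to roughly 5.5 $e^2$fm$^2$MeV of energy-weighted $E1$ strength.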
One of the examples is the soft dipole excitation [@ike88], which is observed not only in light halo nuclei [@sac93; @shi95; @zin97; @nak06; @nak94; @pal03; @fuk04; @nak99; @pra03; @aum99], but also in heavier systems [@lei01; @try03; @adr05], where an appreciable $E1$ strength is observed above the neutron threshold exhausting several percents of the energy-weighted sum rule (EWSR). The structure of the low-lying dipole state and its collectivity has been studied in the framework of mean-field calculations by many groups [@cat97; @ham98; @ham99; @gor02; @mat02; @ter06; @col01; @sar04; @vre01a; @vre01b; @paa05; @cao05; @liv07]. A low-lying dipole state in neutron-rich $^{26}$Ne was first predicted by using the relativistic quasiparticle-random-phase approximation (QRPA) in Ref. [@cao05], and recently it was observed at RIKEN around 9 MeV, exhausting about 5% of the Thomas-Reiche-Kuhn (TRK) dipole sum rule [@gib07]. In Ref. [@cao05], the QRPA was solved in the response function formalism. This method can treat the excitations to the continuum exactly by employing the Green’s functions satisfying the out-going-wave boundary conditions, but an additional procedure is required to obtain the microscopic structure of the excitation mode [@kha05]. In the present paper, we investigate the microscopic structure of the low-lying dipole resonance in neutron-rich Ne isotopes, and we discuss the isotopic dependence with special attention to the deformation effects. To this end, we have developed a deformed QRPA code in the matrix formulation based on the coordinate-space Skyrme-Hartree-Fock-Bogoliubov (HFB) theory. The paper is organized as follows. In Sec. \[model\], we explain our calculation scheme. In Sec. \[check\], we check the results of our new calculation scheme by comparing with existing QRPA results.
In Sec. \[results\], we present the results of the deformed QRPA and we discuss the microscopic structure of the low-lying dipole state in $^{26,28,30}$Ne. Finally, we summarize the paper. Model {#model} ============== We briefly summarize here our approach (see Ref. [@yos06] for details). In order to discuss simultaneously effects of nuclear deformation and pairing correlations including the continuum, we solve the HFB equations [@dob84; @bul80] $$\begin{gathered} \begin{pmatrix} h^{\tau}(\boldsymbol{r}\sigma)-\lambda^{\tau} & \tilde{h}^{\tau}(\boldsymbol{r}\sigma) \\ \tilde{h}^{\tau}(\boldsymbol{r}\sigma) & -(h^{\tau}(\boldsymbol{r}\sigma)-\lambda^{\tau}) \end{pmatrix} \begin{pmatrix} \varphi^{\tau}_{1,\alpha}(\boldsymbol{r}\sigma) \\ \varphi^{\tau}_{2,\alpha}(\boldsymbol{r}\sigma) \end{pmatrix} \\ = E_{\alpha} \begin{pmatrix} \varphi^{\tau}_{1,\alpha}(\boldsymbol{r}\sigma) \\ \varphi^{\tau}_{2,\alpha}(\boldsymbol{r}\sigma) \end{pmatrix} \label{eq:HFB1}\end{gathered}$$ directly in the cylindrical coordinates assuming axial and reflection symmetries. Here, $\tau=\nu$ (neutron) and $\pi$ (proton), and $\boldsymbol{r}=(\rho,z,\phi)$. For the mean-field Hamiltonian $h$, we employ the SkM\* interaction [@bar82]. Details for expressing the densities and currents in the cylindrical coordinate representation can be found in Refs. [@ter03; @sto05]. The pairing field is treated by using the density-dependent contact interaction [@ber91; @ter95], $$v_{pp}(\boldsymbol{r},\boldsymbol{r}^{\prime})=V_{0}\dfrac{1-P_{\sigma}}{2} \left[ 1- \left(\dfrac{\varrho^{\mathrm{IS}}(\boldsymbol{r})}{\varrho_{0}}\right)^{\gamma} \right] \delta(\boldsymbol{r}-\boldsymbol{r}^{\prime}), \label{eq:res_pp}$$ with $V_{0}=-390$ MeV $\cdot$fm$^{3}$, $\varrho_{0}=0.16$ fm$^{-3}$ and $\gamma=1$. Here, $\varrho^{\mathrm{IS}}(\boldsymbol{r})$ denotes the isoscalar density and $P_{\sigma}$ the spin exchange operator.
The pairing strength $V_{0}$ is determined so as to approximately reproduce the experimental pairing gap of 1.25 MeV in $^{28}$Ne obtained by the three-point formula [@sat98]. Because the time-reversal symmetry and reflection symmetry with respect to the $x-y$ plane are assumed, we have only to solve for positive $\Omega$ and positive $z$. We use the lattice mesh size $\Delta\rho=\Delta z=0.6$ fm and the box boundary condition at $\rho_{\mathrm{max}}=9.9$ fm and $z_{\mathrm{max}}=9.6$ fm. The quasiparticle energy is cut off at 60 MeV and the quasiparticle states up to $\Omega^{\pi}=13/2^{\pm}$ are included. Using the quasiparticle basis obtained by solving the HFB equation (\[eq:HFB1\]), we solve the QRPA equation in the matrix formulation [@row70] $$\sum_{\gamma \delta} \begin{pmatrix} A_{\alpha \beta \gamma \delta} & B_{\alpha \beta \gamma \delta} \\ B_{\alpha \beta \gamma \delta} & A_{\alpha \beta \gamma \delta} \end{pmatrix} \begin{pmatrix} X_{\gamma \delta}^{\lambda} \\ Y_{\gamma \delta}^{\lambda} \end{pmatrix} =\hbar \omega_{\lambda} \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix} \begin{pmatrix} X_{\alpha \beta}^{\lambda} \\ Y_{\alpha \beta}^{\lambda} \end{pmatrix} \label{eq:AB1}.$$ The residual interaction in the particle-particle (p-p) channel appearing in the QRPA matrices $A$ and $B$ is the density-dependent contact interaction (\[eq:res\_pp\]). On the other hand, for the residual interaction in the particle-hole (p-h) channel, we employ the Landau-Migdal (LM) approximation [@bac75] applied to the density-dependent Skyrme forces [@gia81; @gia98], $$\begin{aligned} v_{ph}(\boldsymbol{r},\boldsymbol{r}^{\prime})=& N_{0}^{-1}\{F_{0}+F_{0}^{\prime}\tau\cdot\tau^{\prime} \notag \\ &+(G_{0}+G_{0}^{\prime} \tau\cdot\tau^{\prime})\sigma\cdot\sigma^{\prime} \} \delta(\boldsymbol{r}-\boldsymbol{r}^{\prime}).\end{aligned}$$
--- abstract: 'We study a variant of a problem considered by Dinaburg and Sinaĭ on the statistics of the minimal solution to a linear Diophantine equation. We show that the signed ratio between the Euclidean norms of the minimal solution and the coefficient vector is uniformly distributed modulo one. We reduce the problem to an equidistribution theorem of Anton Good concerning the orbits of a point in the upper half-plane under the action of a Fuchsian group.' address: - 'Department of Mathematical Sciences, University of Aarhus, Ny Munkegade Building 530, 8000 Aarhus C, Denmark' - 'School of Mathematical Sciences, Tel Aviv University, Tel Aviv 69978, Israel' author: - 'Morten S. Risager' - Zeév Rudnick title: On the statistics of the minimal solution of a linear Diophantine equation and uniform distribution of the real part of orbits in hyperbolic spaces --- [^1] Statement of results {#sec:statements} ==================== For a pair of coprime integers $(a,b)$, the linear Diophantine equation $ax-by=1$ is well known to have infinitely many integer solutions $(x,y)$, any two differing by an integer multiple of $(b,a)$. Dinaburg and Sinaĭ [@DinaburgSinaui:1990a] studied the statistics of the “minimal” such solution $v'=(x_0,y_0)$ when the coefficient vector $v=(a,b)$ varies over all primitive integer vectors lying in a large box with commensurate sides. Their notion of “minimality” was in terms of the $L^\infty$-norm $|v'|_\infty:=\max(|x_0|,|y_0|)$, and they studied the ratio $|v'|_\infty/|v|_\infty$, showing that it is uniformly distributed in the unit interval. Other proofs were subsequently given by Fujii [@Fujii:1992a] who reduced the problem to one about modular inverses, and then used exponential sum methods, in particular a non-trivial bound on Kloosterman sums, and by Dolgopyat [@Dolgopyat:1994a], who used continued fractions. 
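For concreteness, the minimal-solution statistic studied by Dinaburg and Sinaĭ can be computed directly: one solution of $ax-by=1$ comes from the extended Euclidean algorithm, and the sup-norm minimizer is found among its shifts by multiples of $(b,a)$. A brute-force sketch, not the method of any of the proofs cited above:

```python
from math import gcd

def minimal_solution_sup(a, b):
    """Return the solution (x0, y0) of a*x - b*y = 1 minimizing the sup-norm
    max(|x0|, |y0|); all solutions differ by integer multiples of (b, a)."""
    assert gcd(a, b) == 1
    # extended Euclid: maintains old_s*a + old_t*b == old_r
    old_r, r, old_s, s, old_t, t = a, b, 1, 0, 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
        old_t, t = t, old_t - q * t
    x, y = old_s, -old_t  # now a*x - b*y = 1
    shifts = ((x + k * b, y + k * a) for k in range(-abs(x) - 2, abs(x) + 3))
    return min(shifts, key=lambda p: max(abs(p[0]), abs(p[1])))

x0, y0 = minimal_solution_sup(7, 5)
print((x0, y0), max(abs(x0), abs(y0)) / max(7, 5))  # the ratio studied by Dinaburg and Sinai
```

For $(a,b)=(7,5)$ the minimizer is $(-2,-3)$, giving the ratio $3/7$; iterating over many coprime pairs in a large box reproduces the uniform distribution of this ratio in $[0,1]$.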
In this note, we consider a variant of the question by using minimality with respect to the Euclidean norm $|(x,y)|^2:=x^2+y^2$ and study the ratio $|v'|/|v|$ of the Euclidean norms as the coefficient vector varies over a large ball. In this case too we find uniform distribution, in the interval $[0,1/2]$. However, the methods involved appear quite different, as we invoke an equidistribution theorem of Anton Good [@Good:1983a] which uses harmonic analysis on the modular curve. A lattice point problem ----------------------- We recast the problem in slightly more general and geometric terms. Let $L\subset \C$ be a lattice in the plane, and let ${\operatorname{area}}(L)$ be the area of a fundamental domain for $L$. Any primitive vector $v$ in $L$ can be completed to a basis $\{v,v'\}$ of $L$. The vector $v'$ is unique up to a sign change and addition of a multiple of $v$. In the case of the standard lattice $\Z[\sqrt{-1}]$, taking $v=(a,b)$ and $v'=(x,y)$, the condition that $v$, $v'$ give a basis of $\Z[\sqrt{-1}]$ is equivalent to requiring $ay-bx=\pm 1$. The question is: If we pick $v'$ to minimize the length $|v'|$ as we go through all possible completions, how does the ratio $|v'|/|v|$ between the lengths of $v'$ and $v$ fluctuate? It is easy to see (and we will prove it below) that the ratio is bounded, indeed that for a minimizer $v'$ we have $$\frac{|v'|}{|v|} \leq \frac 12 +O(\frac 1{|v|^4})\;.$$ We will show that the ratio $|v'|/|v|$ is uniformly distributed in $[0,1/2]$ as $v$ ranges over all primitive vectors of $L$ in a large (Euclidean) ball. We refine the problem slightly by requiring that the lattice basis $\{v,v'\}$ is oriented positively, that is ${\operatorname{Im}}(v'/v)>0$. Then $v'$ is unique up to addition of an integer multiple of $v$. For the standard lattice $\Z[\sqrt{-1}]$ and $v=(a,b)$, $v'=(x,y)$ the requirement is then that $ay-bx=+1$. Define the signed ratio $\rho(v)=\pm|v'|/|v|$ for a minimizer $v'$ of the oriented completion, taken modulo one. \[unif dist of rho\] As $v$ ranges over primitive vectors of $L$ in a large ball, $\rho(v)$ is uniformly distributed modulo one.
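The Euclidean-norm variant can likewise be explored numerically. In the sketch below (for the standard lattice and $b\neq 0$), one completion with $ay-bx=1$ is obtained from a modular inverse, and the minimizer over the shifts $v'+kv$ is found from the real minimizer of the quadratic $|v'+kv|^2$; note that for very short $v$ the ratio can slightly exceed $1/2$, consistent with the $O(1/|v|^4)$ correction:

```python
from math import gcd

def minimal_completion(a, b):
    """For a primitive vector v = (a, b), b != 0, return the completion
    v' = (x, y) with a*y - b*x = 1 (oriented basis) of minimal Euclidean norm."""
    assert b != 0 and gcd(a, b) == 1
    y = pow(a, -1, abs(b))          # a*y = 1 (mod |b|), Python 3.8+
    x = (a * y - 1) // b            # exact division: b divides a*y - 1
    # all completions are v' + k*v; |v' + k*v|^2 is quadratic in k
    t = -(a * x + b * y) / (a * a + b * b)   # real minimizer of the quadratic
    candidates = [(x + k * a, y + k * b) for k in (int(t) - 1, int(t), int(t) + 1)]
    return min(candidates, key=lambda w: w[0] ** 2 + w[1] ** 2)

v = (8, 5)
vp = minimal_completion(*v)
ratio = (vp[0] ** 2 + vp[1] ** 2) ** 0.5 / (v[0] ** 2 + v[1] ** 2) ** 0.5
print(vp, ratio)   # for v = (8, 5) the minimizer is (3, 2), ratio < 1/2
```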
Explicitly, let $L_{prim}(T)$ be the set of primitive vectors in $L$ of norm $|v|\leq T$. It is well known that $$\#L_{prim}(T) \sim \frac 1{\zeta(2)} \frac{\pi}{{\operatorname{area}}(L)} T^2, \quad T\to \infty\;.$$ Theorem \[unif dist of rho\] states that for any fixed subinterval $[\alpha,\beta]\subset (-1/2,1/2]$, $$\frac 1{\#L_{prim}(T)} \#\{v\in L_{prim}(T): \alpha<\rho(v)<\beta \} \to \beta-\alpha$$ as $T\to \infty$. Equidistribution of real parts of orbits ---------------------------------------- We will reduce Theorem \[unif dist of rho\] by geometric arguments to a result of Anton Good [@Good:1983a] on uniform distribution of the orbits of a point in the upper half-plane under the action of a Fuchsian group. Let $\G$ be a discrete, cofinite subgroup of $\slr$. The group $\slr$ acts on the upper half-plane $\H=\{z\in \C: {\operatorname{Im}}(z)>0\}$ by linear fractional transformations. We may assume, possibly after conjugation in $\slr$, that $\infty$ is a cusp and that the stabilizer $\G_{\!\infty}$ of $\infty$ in $\G$ is generated by $$\pm {\left(\begin{array}{cc} 1 & 1 \\ 0 & 1 \end{array}\right) }$$ which as linear fractional transformation gives the unit translation $z\mapsto z+1$. (If $-I\notin \G$ there should be no $\pm$ in front of the matrix). The group $\G=\sl$ is an example of such a group. We note that the imaginary part of $\g(z)$ is fixed on the orbit $\G_{\!\infty}\g z$, and that the real part modulo one is also fixed on this orbit. Good’s theorem is \[equidistribution\] Let $\G$ be as above and let $z\in\H$. Then ${\operatorname{Re}}(\G z)$ is uniformly distributed modulo one as ${\operatorname{Im}}(\g z)\to 0$.
More precisely, let $$(\GinfmodG)_{\varepsilon,z}=\{\g\in\GinfmodG : {\operatorname{Im}}{\g z}>\varepsilon\}\;.$$ Then for every continuous function $f\in C(\R\slash \Z)$, as $\varepsilon\to 0$, $$\frac 1{ \#(\GinfmodG)_{\varepsilon,z}} \sum_{\g\in(\GinfmodG)_{\varepsilon,z}}f({\operatorname{Re}}{\g z}) \to\int_{\R\slash\Z}f(t)dt \;.$$ Though the writing in [@Good:1983a] is not easy to penetrate, the results deserve to be more widely known. We will use these results in a slightly different form below. A geometric argument {#sec:Geom} ==================== We start with a basis $\{v,v'\}$ for the lattice $L$ which is oriented positively, that is ${\operatorname{Im}}(v'/v)>0$. For a given $v$, $v'$ is unique up to addition of an integer multiple of $v$. Consider the parallelogram $P(v,v')$ spanned by $v$ and $v'$. Since $\{v,v'\}$ form a basis of the lattice $L$, $P(v,v')$ is a fundamental domain for the lattice and the area of $P(v,v')$ depends only on $L$, not on $v$ and $v'$: ${\operatorname{area}}(P(v,v'))={\operatorname{area}}(L)$. Let $\mu(L)>0$ be the minimal length of a nonzero vector in $L$: $$\mu(L)=\min\{ |v| : 0\neq v\in L \}\;.$$
--- abstract: 'A classical theory of general relativity in the $4$-dimensional space time is formulated as a Chern–Weil topological theory. An additional topological theory is introduced as the theorem. With a topological insight, fundamental forms are introduced as a principal bundle of a space time manifold. Canonical quantization of the system is performed in a Heisenberg picture using the Nakanishi–Kugo–Ojima formalism. A complete set of the quantum Lagrangian and BRS transformations including auxiliary and ghost fields is given in a self-consistent manner. An appropriate Hilbert space and physical states are introduced into the theory, and the positivity of the physical states and the unitarity of the transition matrix are ensured by the Kugo–Ojima theorem. A non-renormalizability of quantum gravity is reconsidered under the formulation proposed in this study.' author: --- Introduction ============ General relativity is supported by a wealth of experimental evidence. In particular, new evidence owing to the seminal discovery of gravitational waves was added in 2016[@PhysRevLett.116.241103]. On the other hand, at the microscopic level, nature is described by quantum mechanics. The standard theory of particle physics based on the quantum field theory is shown to be well-established by the discovery of the Higgs boson[@Aad:2012tfa; @Chatrchyan201230]. Thus, our understanding of nature covers a wide range of length scale from the large-scale structure of the universe to the microscopic behavior of sub-atomic elements. However, these two fundamental theories, general relativity and quantum field theory, are not consistent with each other. A consistent quantum theory of gravity is still missing.
Here, the “quantum theory of gravity” is understood as: $1$) a theory which can describe the behavior of ($4$-dimensional) space-time in the region where the uncertainty principle becomes essential ($\approx$ the Planck-length), $2$) a theory which is consistent with well-established general relativity at a large scale, and $3$) a theory which can give experimentally measurable predictions. Immediately after the establishment of general relativity and quantum mechanics in the 1920’s, the construction of quantum gravity was started in the 1930’s. (A history of quantization of general relativity is beyond the scope of this report. For a detailed history, see [@Rovelli:2000aw; @doi:10.1142qg] and references therein.) There are three main streams of quantization[@Rovelli:2000aw]: 1) Covariant perturbative approach[@Fierz211]: Following the successful method of QED, a small fluctuation from the flat Minkowski space is treated as a perturbation, and the Feynman rules of the gravitational interaction are derived. This approach slowed down after the discovery of the non-renormalizability of these theories[@'tHooft:1974bx], and it became active again with the appearance of the super-string approach. 2) Canonical quantization of the metric tensor[@DeWitt:1967yk; @DeWitt:1967ub; @DeWitt:1967uc]: The metric tensor is treated as a dynamical variable and interpreted as an operator, and then it is quantized using the canonical method by requiring the commutation relations. The quantum equation of motion is obtained as the Wheeler–DeWitt equation[@DeWitt:1967yk]. This approach also slowed down because the Wheeler–DeWitt equation is not mathematically well-defined; recently it has been revived as loop gravity[@rovelli2004], and developments are still continuing. 3) Path-integration quantization: When the path-integration method is simply applied to gravity, non-renormalizable infinities appear, as in the first approach.
A spin-network method[@Rovelli:1995ac] can be categorized as this approach. In this model, nothing can exist. In contrast to the $4$-dimensional case, it is known that quantum gravity exists in the $(1\hspace{-.1em}+\hspace{-.1em}2)$-dimensional space as a Chern–Simons topological theory[@witten1988], which is renormalizable and does not exhibit dynamics[@Witten198846]. We note that $3$-dimensional general relativity does not have any dynamic degree of freedom even at a classical level. Actually, it is known that a Chern–Simons action can give a topological invariant only in odd-dimensional spaces[@0264-9381-29-13-133001]. While, at first glance, it seems hopeless to construct quantum general relativity using a Chern–Simons form on the $4$-dimensional space time, we have found that a novel symmetry, referred to as the co-Poincaré symmetry[@doi:10.1063/1.4990708], allows us to construct general relativity as a Chern–Weil theory on a $4$-dimensional space time manifold. This new symmetry is an extension of the translation symmetry, and when it is applied to a pure gravitational Lagrangian without a cosmological term, it induces only a total derivative term. In this study, we show that an invariant quadratic form can be defined in the $4$-dimensional space time by introducing the Lie algebra of the co-Poincaré group, and that the Einstein–Hilbert gravitational Lagrangian can be defined as a second Chern class when there is no cosmological term. Our approach for quantization is based on the second category, “canonical quantization of the metric tensor”, in the above list of quantization approaches. In this quantization method, the subject to be quantized is not the space time itself, and thus the space time coordinate $x^\mu$ is not a q-number (operator), but a c-number[@nakanishi1990covariant; @NakanishiSK2009]. The subject for quantization is a solution of the Einstein equation $g^{(c)}_{\mu\nu}(x)$.
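For reference, the Chern–Weil ingredients invoked here are the standard ones: an invariant polynomial in the curvature two-form gives a closed form whose cohomology class is independent of the connection. The formulas below are the textbook expressions, not the co-Poincaré-specific computation of this paper:

```latex
% curvature of a connection one-form A
F \;=\; dA + A\wedge A ,
% Chern--Weil representative of the second Chern class
c_2(F) \;=\; \frac{1}{8\pi^2}\Bigl(\operatorname{tr}(F)\wedge\operatorname{tr}(F)
          \;-\; \operatorname{tr}(F\wedge F)\Bigr),
% tr(F ^ F) is closed and locally exact, with Chern--Simons 3-form potential
\operatorname{tr}(F\wedge F) \;=\; d\,\omega_3(A),
\qquad
\omega_3(A) \;=\; \operatorname{tr}\!\Bigl(A\wedge dA + \tfrac{2}{3}\,A\wedge A\wedge A\Bigr).
```

In three dimensions one integrates $\omega_3(A)$ itself (the Chern–Simons theory mentioned above); the paper's point is that in four dimensions a second-Chern-class density becomes available once the co-Poincaré algebra is used.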
In classical general relativity, the geometrical metric tensor $g^{(g)}_{\mu\nu}(x)$ is given by the solution of the classical Einstein equation such that $g^{(g)}_{\mu\nu}(x)=g^{(c)}_{\mu\nu}(x)$, which is nothing other than Einstein’s equivalence principle. At the quantum level, this relation is not simply fulfilled. The geometrical metric tensor will be given as an expectation value of the quantum metric tensor, $g^{(g)}_{\mu\nu}(x)=\langle g^{(q)}_{\mu\nu}(x)\rangle$. As a result, the fundamental forms can be identified as the spin and surface forms, which will be defined in this article. Based on the fundamental forms, the Nakanishi–Kugo–Ojima covariant quantization is performed, and the complete set of the quantum Lagrangian, equations of motion, BRS transformations and BRS charges for pure gravity is given. As a consequence of quantization, a scattering matrix must fulfill the Kugo–Ojima theorem. This article is organized as follows: In section II, mathematical preliminaries of differential geometry are introduced in order to explain our terminologies and conventions. The standard formalism of a gravitational Lagrangian and the geometrical structure of a principal (Poincaré) bundle are also introduced in this section, in contrast to our novel topological approach. A new translation operator is introduced in section III. It is shown that the Einstein–Hilbert Lagrangian can be recognized as a second Chern class under co-Poincaré symmetry in this section. With the topological insight given here, appropriate fundamental variables (forms) for a Hamiltonian formalism are introduced at the end of this section. An explicit formulation of canonical quantization of general relativity using the Nakanishi–Kugo–Ojima formalism[@nakanishi1990covariant] is performed in section IV. Section V is devoted to discussions of how to construct an appropriate Hilbert space and physical states on it.
In consequence, it is shown that the unitarity of the quantum gravitational S-matrix is ensured by the Kugo–Ojima theorem[@kugo1979local; @Kugo1978459]. The renormalizability of our Chern–Weil general relativity is also discussed in section V. At the end, a summary of this study is given in section VI. Preliminaries {#prep} ============= First, standard classical general relativity is geometrically re-formulated in terms of a vierbein formalism according to Refs. [@fre2012gravity; @Kurihara2018]. Differential geometry {#DG} --------------------- A $4$-dimensional pseudo-Riemannian manifold $(\MM,g)$ with $GL(1,3)$ symmetry is considered. On each coordinate patch $U_p\subset\MM$ around $p
--- abstract: 'We study the semidirect product of a Lie algebra with a representation up to homotopy and provide various examples coming from Courant algebroids, string Lie 2-algebras, and omni-Lie algebroids. In the end, we study the semidirect product of a Lie group with a representation up to homotopy and use it to give an integration of a certain string Lie 2-algebra.' author: --- Introduction ============ Our original motivation is to integrate the standard Courant algebroid $TM\oplus T^*M$ since it is this Courant algebroid that is much used in Hitchin and Gualtieri’s program of generalized complex geometry. Courant algebroids are Lie 2-algebroids in the sense of Roytenberg and Ševera [@royt; @s:funny]. The general procedure to integrate Lie $n$-algebras (algebroids) is already described in [@getzler; @henriques; @s:funny]. We want to pursue some explicit formulas for the special case of the standard Courant algebroid. It turns out that the sections of the Courant algebroid $TM \oplus T^*M$ form a semidirect product of a Lie algebra with a representation up to homotopy. Abad and Crainic [@abad-crainic:rep-homotopy] recently studied representations up to homotopy of Lie algebras and Lie groups, and more generally of Lie algebroids and Lie groupoids. Just as one can form the semidirect product of a Lie algebra with a representation, one can form the semidirect product with a representation up to homotopy too. In our case, the semidirect product coming from the standard Courant algebroid is a Lie 2-algebra. But using the fact that it is also a semidirect product, the integration becomes easier. The integration result is related to the semidirect product of Lie groups with representations up to homotopy, which will be discussed in Section \[sec:gp\]. However, it turns out that the concept of representation up to homotopy of Lie groups of Abad and Crainic will not be general enough to cover all the integration results.
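For comparison, the classical construction that "representation up to homotopy" generalizes is the ordinary semidirect product of a Lie algebra with a representation (standard definition, included here for orientation):

```latex
% semidirect product \mathfrak{g} \ltimes_\rho V of a Lie algebra \mathfrak{g}
% with a representation \rho : \mathfrak{g} \to \mathfrak{gl}(V):
[(x,u),\,(y,v)]_{\mathfrak g\ltimes_\rho V}
  \;=\; \bigl([x,y]_{\mathfrak g},\ \rho(x)v-\rho(y)u\bigr),
\qquad x,y\in\mathfrak g,\; u,v\in V .
% For a representation up to homotopy, V is replaced by a complex and
% \rho([x,y]) = [\rho(x),\rho(y)] holds only up to prescribed homotopies,
% so the semidirect product is an L_\infty-algebra rather than a Lie algebra.
```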
We will continue this in a forthcoming paper [@sheng-zhu:II]. In this paper we focus on exhibiting more examples of representations up to homotopy and their semidirect products to demonstrate the importance of our integration procedure. The examples are all sorts of variations of Courant algebroids. One is Chen and Liu’s omni-Lie algebroids, which generalize Weinstein’s omni-Lie algebras. Hence we expect to give an integration of Weinstein’s omni-Lie algebras via Lie 2-algebras in the next paper [@sheng-zhu:II]. Another example comes from the so-called string Lie 2-algebra. It is essentially a Courant algebroid over a point (see Section \[sec:string\]), namely a Lie algebra with an adjoint-invariant inner product. This sort of Lie algebra is usually called a [*quadratic Lie algebra*]{}. This concept also appears in the context of Manin triples and double Lie algebras. The example ${\mathbb R}\to {\mathfrak g}\oplus {\mathfrak g}^*$ that we consider in this paper is an analogue of the standard Courant algebroid, and is basically a special case taken from [@lu-weinstein] [^2]. Usually people require the base Lie algebra of a string Lie 2-algebra to be semisimple and of compact type (see Remark \[rk:semisimple\]). For string Lie 2-algebras of this usual sort, Baez et al [@baez:2gp] have proved a no-go theorem, namely that such string Lie 2-algebras can not be integrated to finite dimensional semi-strict Lie 2-groups. Here a [*semi-strict Lie 2-group*]{} is a group object in $\rm DiffCat$, where $\rm DiffCat$ is the 2-category consisting of categories, functors, and natural transformations in the category of differential manifolds, or equivalently $\rm DiffCat$ is a 2-category with Lie groupoids as objects, strict morphisms of Lie groupoids as morphisms, and 2-morphisms of Lie groupoids as 2-morphisms. Our semi-strict Lie 2-group is actually called a Lie 2-group by the authors in [@baez:2gp].
However, we call it a semi-strict Lie 2-group because, compared to the Lie 2-group in the sense of Henriques [@henriques], or equivalently the stacky group in the sense of Blohmann [@blohmann], it is stricter. Basically, their Lie 2-group is a group object in the 2-category with Lie groupoids as objects, Hilsum–Skandalis bimodules (or generalized morphisms) as morphisms, and 2-morphisms of Lie groupoids as 2-morphisms. Schommer-Pries realizes the string 2-group as such a Lie 2-group with a finite dimensional model [@schommer:string-finite-dim] and the integration of a string Lie 2-algebra to such a model is a work in progress [@ccc:integration-string]. The definition of the string Lie 2-algebra does not require the base Lie algebra to be semisimple of compact type; one only needs a quadratic Lie algebra. As soon as we relax this condition on compactness, we find out that one can integrate ${\mathbb R}\to {\mathfrak g}\oplus {\mathfrak g}^*$ to a finite dimensional semi-strict Lie 2-group in the sense of Baez et al. Then of course, as we relax the condition, we are in danger that the class corresponding to this Lie 2-algebra in $H^3({\mathfrak g}\oplus {\mathfrak g}^*, {\mathbb R})$ might be trivial, and consequently our Lie 2-algebra might be strict. Then what we have done would not have been a big surprise because a strict Lie 2-algebra corresponds to a crossed module of Lie algebras, and it easily integrates to a strict Lie 2-group by integrating the crossed module. However, we verified that when ${\mathfrak g}$ itself (not ${\mathfrak g}\oplus {\mathfrak g}^*$) is semisimple, this Lie 2-algebra is not strict. [**Acknowledgement:**]{} We give warmest thanks to Zhang-Ju Liu, Jiang-Hua Lu, Giorgio Trentinaglia and Marco Zambon for useful comments and discussion. Y. Sheng gives his warmest thanks to the Courant Research Centre “Higher Order Structures”, Göttingen University, where this work was done during his visit.
Representations up to homotopy of Lie algebras ============================================== In this section, we first consider the 2-term representation up to homotopy of Lie algebras. We give explicit formulas for the corresponding 2-term $L_\infty$-algebra, which is their semidirect product. Representation up to homotopy of Lie algebras and their semidirect products --------------------------------------------------------------------------- $L_\infty$-algebras, sometimes called strongly homotopy Lie algebras, were introduced by Drinfeld and Stasheff [@stasheff:shla] as a model for “Lie algebras that satisfy Jacobi up to all higher homotopies”. The following convention of $L_\infty$-algebras has the same grading as in [@henriques] and [@rw]. An $L_\infty$-algebra is a graded vector space $L=L_0\oplus L_1\oplus\cdots$ equipped with a system $\{l_k|~1\leq k<\infty\}$ of linear maps $l_k:\wedge^kL\longrightarrow L$ with degree $\deg(l_k)=k-2$, where the exterior powers are interpreted in the graded sense and the following relation with Koszul sign “Ksgn” is satisfied for all $n\geq0$: $$\sum_{i+j=n+1}(-1)^{i(j-1)}\sum_{\sigma}{\mathrm{sgn}}(\sigma){\mathrm{Ksgn}}(\sigma)l_j(l_i(x_{\sigma(1)},\cdots,x_{\sigma(i)}),x_{\sigma(i+1)},\cdots,x_{\sigma(n)})=0,$$ where the summation is taken over all $(i,n-i)$-unshuffles with $i\geq1$. For $n=1$, we have $$l_1^2=0,\quad l_1:L_{i+1}\longrightarrow L_i\,.$$
--- abstract: 'Galaxy clusters have long been theorised to quench the star-formation of their members. This study uses integral-field unit observations from the $K$-band Multi-Object Spectrograph (KMOS) - Cluster Lensing And Supernova survey with *Hubble* (CLASH) survey (K-CLASH) to search for evidence of quenching in massive galaxy clusters at redshifts $0.3<z<0.6$. We first construct mass-matched samples of exclusively star-forming cluster and field galaxies, then investigate the spatial extent of their H$\alpha$ emission and study their interstellar medium conditions using emission line ratios. The average ratio of [H$\alpha$]{} half-light radius to optical half-light radius ([$r_{\mathrm{e}, {\rm{H}\alpha}}/r_{\mathrm{e}, R_{\mathrm{c} } }$]{}) for all galaxies is $1.14\pm0.06$, showing that star formation is taking place throughout stellar discs at these redshifts. However, on average, cluster galaxies have a smaller [$r_{\mathrm{e}, {\rm{H}\alpha}}/r_{\mathrm{e}, R_{\mathrm{c} } }$]{} ratio than field galaxies: $\langle$[$r_{\mathrm{e}, {\rm{H}\alpha}}/r_{\mathrm{e}, R_{\mathrm{c} } }$]{}$\rangle = 0.96\pm0.09$ compared to $1.22\pm0.08$ (smaller at a 98% credibility level). These values are uncorrected for the wavelength difference between [H$\alpha$]{} emission and $R_c$-band stellar light, but implementing such a correction only reinforces our results. We also show that whilst the cluster and field samples follow indistinguishable mass-metallicity (MZ) relations, the residuals around the MZ relation of cluster members correlate with cluster-centric distance; galaxies residing closer to the cluster centre tend to have enhanced metallicities (significant at the 2.6$\sigma$ level). Finally, in contrast to previous studies, we find no significant differences in electron number density between the cluster and field galaxies. 
We use simple chemical evolution models to conclude that the effects of disc strangulation and ram-pressure stripping can quantitatively explain our observations.            ' author: - | Sam P. Vaughan,$^{1, 2, 3}$[^1] Alfred L. Tiley,$^{4, 5}$ Roger L. Davies,$^{3}$ Laura J. Prichard,$^{6}$ Scott M. Croom,$^{1,2}$ Martin Bureau,$^3$ John P. Stott,$^{7}$ Andrew Bunker,$^{3}$ Michele Cappellari,$^3$ Behzad Ansarinejad$^{4}$ and Matt J. Jarvis$^{3,8}$\ $^{1}$Sydney Institute for Astronomy, School of Physics, Building A28, The University of Sydney, NSW 2006, Australia\ $^{2}$ARC Centre of Excellence for All Sky Astrophysics in 3 Dimensions (ASTRO3D), Australia\ $^{3}$Sub-department of Astrophysics, Department of Physics, University of Oxford, Denys Wilkinson Building, Keble Road, Oxford OX1 3RH, UK\ $^{4}$International Centre for Radio Astronomy Research, The University of Western Australia, 35 Stirling Highway, Crawley WA 6009, Australia\ $^{5}$Centre for Extragalactic Astronomy, Department of Physics, Durham University, South Road, Durham DH1 3LE, UK\ $^{6}$Space Telescope Science Institute, 3700 San Martin Drive, Baltimore MD 21218, USA\ $^{7}$Department of Physics, Lancaster University, Bailrigg, Lancaster LA1 4YB, UK\ $^{8}$Department of Physics & Astronomy, University of the Western Cape, Private Bag X17, Bellville, Cape Town, 7535, South Africa\ bibliography: - 'bibliography.bib' date: 'Accepted 2020 June 22. Received 2020 June 21; in original form 2019 November 18.' title: 'K-CLASH: Strangulation and Ram Pressure Stripping in Galaxy Cluster Members at 0.3 &lt; $z$ &lt; 0.6' --- \[firstpage\] galaxies: clusters: general – galaxies: evolution – galaxies: ISM Introduction ============ It is well understood that the environment in which a galaxy resides plays an important role in its formation and evolution. 
Focusing on the densest environments in particular, we have known for many years that the galaxy population residing in galaxy clusters is markedly different from its counterpart in the field: galaxy cluster members tend to have early-type morphologies [@Dressler:1980; @Dressler:1997], redder optical colours [e.g. @Pimbblet:2002] and spectra free of emission lines [@Gisler:1978]. Current work has extended these observations to much higher redshifts, with studies of protoclusters and overdensities at redshifts between $1.5<z<2.5$ becoming common (e.g. @Muzzin:2013; @Shimakawa:2015; @WangT:2016; @Prichard:2017; @Perez-Martinez+2017; @Boehm:2019; see @Overzier:2016 for a review). The physical processes which cause the differences in galaxy properties can be broadly separated into two categories. On one hand, a number of “external” mechanisms acting on cluster galaxies (involving their interactions with the intracluster medium or other cluster members) have been suggested to quench their star formation and alter their properties. Of these, perhaps the most dramatic is ram-pressure stripping [first proposed in @Gunn:1972]. Galaxy clusters are the largest potential wells in the Universe, and contain vast quantities of hot gas between their members (see e.g. @Sarazin:1986 and @Kravtsov:2012 for reviews). This intracluster medium (ICM) contains an order of magnitude more mass than is in the stars of the galaxies themselves, and is around a thousand times more dense than the inter-galactic medium which surrounds galaxies outside clusters. When a galaxy falls into a cluster, its motion through the ICM creates a pressure which acts on its reservoirs of gas. The force exerted can be strong enough to overcome the disc’s gravitational restoring force, stripping away this reservoir in an occasionally spectacular fashion. Direct observational evidence of gas being stripped from cluster galaxies can be found at local and intermediate redshifts [e.g.
@Owers:2012; @Ebeling:2014; @Rawle:2014; @Poggianti:2017; @Boselli:2019], with such objects coming to be known colloquially as “Jellyfish” galaxies following [@Smith:2010]. On the other hand, galaxy clusters are inherently special places, and the initial conditions of galaxies that form within them are different from those of galaxies which form in less dense regions of space. Since the massive clusters of today correspond to the largest overdensities in the early Universe [e.g. @Springel:2005], it has been suggested that these unique initial conditions lead to an “accelerated” evolution of their members [e.g. @Dressler:1980; @Morishita:2017; @Chan:2018]. The question is to what extent each of these processes shapes the cluster population we observe. Attempting to answer this question by studying cluster galaxies at $z=0$ is hampered by the fact that so many of them are quiescent, evolved and seemingly at the endpoint of their evolutionary paths. As first discussed in [@Butcher:1978; @Butcher:1984], galaxy clusters at $z\approx0.5$ contain a much higher fraction of star-forming galaxies than today. Furthermore, of those cluster members which are not currently forming stars, some show evidence of recently truncated star formation via the k+a spectral characteristics of post-starburst galaxies [e.g. @Poggianti:2009] or the strong H$\delta$ absorption of “post star-forming” galaxies [e.g. @Couch:1987; @Owers:2019]. Clusters at these intermediate redshifts are therefore well suited to catching environmental quenching in action, and a large body of spectroscopic work has targeted them [e.g. recently @Rosati:2014; @Sobral:2015; @Maier:2016; @Morishita:2017]. Whilst these studies have the advantage of targeting large numbers of objects and forming statistically-significant sample sizes, environmental quenching processes are inherently spatially inhomogeneous. Spectroscopic observations which sample multiple positions in the same galaxy at the same time are therefore required to catch these mechanisms transforming galaxies in the act.
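As a side note, the 98 per cent credibility quoted in the abstract for the cluster-versus-field comparison of $\langle$[$r_{\mathrm{e}, {\rm{H}\alpha}}/r_{\mathrm{e}, R_{\mathrm{c} } }$]{}$\rangle$ is consistent with a simple Gaussian sketch; treating the two posterior means as independent normal variables is a simplifying assumption made here for illustration, not the paper's actual inference:

```python
from math import erf, sqrt

# Posterior means quoted in the abstract, modelled as independent Gaussians
mu_cl, sig_cl = 0.96, 0.09   # cluster sample
mu_fd, sig_fd = 1.22, 0.08   # field sample

z = (mu_fd - mu_cl) / sqrt(sig_cl ** 2 + sig_fd ** 2)
p_smaller = 0.5 * (1.0 + erf(z / sqrt(2.0)))   # P(cluster mean < field mean)
print(f"P(cluster ratio < field ratio) = {p_smaller:.3f}")  # close to the quoted 98%
```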
Our view of intermediate- to high-redshift ($z>1$) star-forming galaxies has been revolutionised in the last decade by integral-field spectroscopic surveys from the ground [e.g. @ForsterSchreiber:2006; @Mancini:2011; @Genzel:2011; @Wisnioski:2015; @Stott:2016; @Beifiori:2017] and deep *Hubble Space Telescope (HST)* grism spectroscopy
--- abstract: 'The charged current antikaon production off nucleons induced by antineutrinos is studied at low and intermediate energies. We extend here our previous calculation on kaon production induced by neutrinos. We have developed a microscopic model that starts from the SU(3) chiral Lagrangians and includes background terms and the resonant mechanisms associated to the lowest lying resonance in the channel, namely, the $\Sigma^*(1385)$. Our results could be of interest for the background estimation of various neutrino oscillation experiments like MiniBooNE and SuperK. They can also be helpful for the planned $\bar \nu-$experiments like MINER$\nu$A, NO$\nu$A and T2K phase II and for beta-beam experiments with antineutrino energies around 1 GeV.' author: - 'M. Rafi Alam' - 'I. Ruiz Simo' - 'M. Sajjad Athar' - 'M. J. Vicente Vacas' title: 'Charged current antikaon production off nucleons induced by antineutrinos' --- Current and planned neutrino oscillation experiments, such as MiniBooNE, SuperK, MINER$\nu$A, NO$\nu$A, T2K, etc. explore this energy range. Although many interesting results can be obtained without a detailed knowledge of the various processes used for the neutrino detection or the neutrino flux, a reliable estimate of the $\nu-$N cross section for various processes is mandatory to carry out a precise analysis of the measurements. Among these processes, strangeness conserving ($\Delta S=0$) weak interactions involving quasielastic production of leptons induced by charged as well as neutral weak currents have been widely studied [@Boyd:2009zz; @Leitner:2006ww; @Leitner:2006sp; @Benhar:2010nx; @Martini:2010ex; @Amaro:2010sd; @Nieves:2011pp]. Much work has also been done to understand one pion production in the weak sector [@AlvarezRuso:1998hi; @Sato:2003rq; @Graczyk:2009qm; @Hernandez:2007qq; @Leitner:2008wx; @Leitner:2010jv; @Hernandez:2010bx; @Lalakulich:2010ss]. There are other inelastic reactions like hyperon and kaon production ($\Delta S=\pm 1$) that could also be measured even at quite low energies. However, very few calculations study these processes [@VicenteSingh; @Mintz:2007zz; @Dewan; @Shrock; @Amer:1977fy; @Mart:2009; @RafiAlam:2010kf].
This is partly justified by their small cross sections due to the Cabibbo suppression. As a result of this situation, the Monte Carlo generators used in the analysis of the current experiments apply models that are not well suited to describe the strangeness production at low energies. NEUT, for example, used by Super-Kamiokande, K2K, SciBooNE and T2K, only considers associated production of kaons within a model based on the excitation and later decay of baryonic resonances and from deep inelastic scattering (DIS) [@Hayato:2009zz]. Similarly, other neutrino event generators like NEUGEN [@Gallagher:2002sf], NUANCE [@Casper:2002sd] (see also discussion in Ref. [@Zeller:2003ey]) and GENIE [@Andreopoulos:2009rq] do not consider single hyperon/kaon production. Recently we have studied single kaon production induced by neutrinos at low and intermediate energies [@RafiAlam:2010kf] using Chiral Perturbation Theory ($\chi$PT). We found that up to E$_{\nu_\mu}\approx 1.2$ GeV, single kaon production dominates over the associated production of kaons along with hyperons which is mainly due to its lower threshold energy. In this work, we extend our model to include weak single antikaon production off nucleons. The theoretical model is necessarily more complicated than for kaons because resonant mechanisms, absent for the kaon case, could be relevant. On the other hand, the threshold for associated antikaon production corresponds to the $K-\bar K$ channel and it is much higher than for the kaon case (KY). This implies that the process we study is the dominant source of antikaons for a wide range of energies. The study may be useful in the analysis of antineutrino experiments at MINER$\nu$A, NO$\nu$A, T2K and others. For instance, MINER$\nu$A has plans to investigate several strange particle production reactions with both neutrino and antineutrino beams [@Solomey:2005rs] with high statistics. 
Furthermore, the T2K experiment [@Kobayashi:2005] as well as beta beam experiments [@Mezzetto:2010] will work at energies where the single kaon/antikaon production may be important. We introduce the formalism in Sec. \[Formalism\]. In Sec. \[Results and Discussion\], we present the results, discussions and conclusions. Formalism {#Formalism} ========= The basic reaction for antineutrino induced charged current antikaon production is $$\label{reaction} \bar \nu_{l}(k) + N(p) \rightarrow l(k^{\prime}) + N^\prime(p^{\prime}) + \bar K(p_{k}) ,$$ where $l=e^+,\mu^+$ and $ N \& N^\prime $ are nucleons. The expression for the differential cross section in the laboratory frame for the above process is given by $$\begin{aligned} \label{sigma_inelas} d^{9}\sigma &=& \frac{1}{4 M E(2\pi)^{5}} \frac{d{\vec k}^{\prime}}{ (2 E_{l})} \frac{d{\vec p\,}^{\prime}}{ (2 E^{\prime}_{p})} \frac{d{\vec p}_{k}}{ (2 E_{k})} \delta^{4}(k+p-k^{\prime}-p^{\prime}-p_{k})\bar\Sigma\Sigma | \mathcal M |^2,\end{aligned}$$ where $ k( k^\prime) $ is the momentum of the incoming(outgoing) lepton with energy $E( E^\prime)$, and $p( p^\prime)$ is the momentum of the incoming(outgoing) nucleon. The kaon 3-momentum is $\vec{p}_k $ with energy $ E_k $, $M$ is the nucleon mass, and $ \bar\Sigma\Sigma | \mathcal M |^2 $ is the square of the transition amplitude averaged (summed) over the spins of the initial (final) state. It can be written as $$\label{eq:Gg} \mathcal M = \frac{G_F}{\sqrt{2}}\, j_\mu J^{\mu}=\frac{g}{2\sqrt{2}}j_\mu \frac{1}{M_W^2} \frac{g}{2\sqrt{2}}J^{\mu},$$ where $j_\mu$ and $ J^{\mu}$ are the leptonic and hadronic currents respectively, $G_F=\sqrt{2} \frac{g^2}{8 M^2_W}$ is the Fermi coupling constant, $g$ is the gauge coupling and $M_W$ is the mass of the $W$-boson.
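The relation $G_F=\sqrt{2}\,g^2/(8M_W^2)$ quoted above can be checked numerically with rough electroweak inputs (illustrative textbook values, not parameters fitted in this paper):

```python
import math

g = 0.65              # SU(2) gauge coupling (approximate)
M_W = 80.4            # W boson mass [GeV]
G_F_known = 1.166e-5  # measured Fermi constant [GeV^-2]

# G_F = sqrt(2) g^2 / (8 M_W^2)
G_F = math.sqrt(2) * g**2 / (8 * M_W**2)
print(f"G_F = {G_F:.3e} GeV^-2")  # agrees with the measured value to ~1%
```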
The leptonic current can be readily obtained from the standard model Lagrangian coupling the $W$ bosons to the leptons $${\cal L}=-\frac{g}{2\sqrt{2}}\left[j^\mu{ W}^-_\mu+h.c.\right].$$ We construct a model including non resonant terms and the decuplet resonances, which couple strongly to the pseudoscalar mesons. The same approach successfully describes the pion production case (see for example Ref. [@Hernandez:2007qq]). The final set of Feynman diagrams is shown in Fig. \[fg:terms\]. There are s-channels with $\Sigma,\Lambda$ (SC) and $\Sigma^*$ (SCR) as intermediate states, a kaon pole (KP) term, a contact term (CT), and finally a meson ($\pi$P, $\eta$P) exchange term. ![Feynman diagrams for the process $\bar \nu N\rightarrow l N^\prime \bar K$. First row from left to right: s-channel $\Sigma,\Lambda$ propagator (labeled SC in the text), s-channel $\Sigma^*$ resonance (SCR); second row: kaon pole term (KP) and contact term (CT); last row: pion (eta) in flight ($ \pi P/ \eta P $).[]{data-label="fg:terms"}](feynman.eps){width="80.00000%"} We follow Ref. [@Scherer:2002tk] to write the lowest-order SU(3) chiral Lagrangian describing the interaction of pseudoscalar mesons in the presence of an external current, $$\label{eq:lagM} {\cal L
--- abstract: |
 A physical model describing the influence of the electronic subsystem on the properties of one-dimensional metal chains is presented. It is shown that, depending on the interaction potential between atoms in the one-dimensional system, the formation of chains of various lengths is possible. If the characteristic depth of the potential well of the interatomic interaction does not exceed a certain magnitude, the chains in the 1D system form with lengths of several angstroms, while an increase in the depth of the well also leads to the possibility of forming longer metal chains. author: - 'В.Д. Борисюк' - 'О.С. …' - '… Троян' bibliography: - 'biblio1.bib' title: Influence of electrons on the stability of one-dimensional metal chains --- A correct understanding of the properties of nanoscale contacts is of decisive importance for many areas of modern nanotechnology. The unique properties of monoatomic metal chains currently attract considerable attention both experimentally[@Agrait:2003kr; @Smit:2001gk; @Csonka:2006uu; @Kizuka:2001wj; @Ohnishi:1998tz; @Rodrigues:2001wv; @RubioBollinger:2001us; @Untiedt:2002iy; @Yanson:1998uo] and theoretically [@Skorodumova:2005dg; @Skorodumova:2003vj; @Skorodumova:2000uc].
Atomic chains can be obtained in mechanically controllable break-junction experiments using a scanning tunnelling microscope or a transmission electron microscope[@Agrait:2003kr]. The structures obtained in such experiments are shown schematically in Fig. \[fig:chain\]. The formation of such chains depends strongly on the material of the atoms composing the chain. It has been shown that gold chains can be up to 2.6 nm in length[@Untiedt:2002iy; @Yanson:1998uo], whereas chains of silver atoms do not exceed a few angstroms[@Smit:2001gk]. In Refs.[@Smit:2001gk; @Untiedt:2002iy], a series of mechanically controllable break-junction (MCB) experiments produced histograms of the breaking frequency of Ag, Au, Pt and Pd metal chains as a function of chain length (see Fig. \[fig:agaupt\]). One can thus conclude that the metals Au and Pt, unlike Ag and Pd, can form one-dimensional chains with $N>2$ atoms. In this work we present a physical model describing the influence of the electronic subsystem on the properties of one-dimensional metal chains. It is shown that, depending on the interaction potential between atoms in the one-dimensional system, chains of various lengths can form. If the characteristic depth of the potential well of the interatomic interaction does not exceed a certain magnitude, chains with a characteristic length of the order of a few angstroms form in the 1D system, while increasing the depth of the well also makes possible the formation of longer metal chains. Let us now consider the influence of the electronic subsystem on the properties of one-dimensional metal chains. We assume that the metal chains observed in experiments are realizations of possible states of a one-dimensional statistical system. Consider an infinite one-dimensional chain of particles.
Let $\overline{n}$ be the mean particle density in the system under consideration, and let $n=n(x)=\sum_i \delta (x-x_i)$ be the microscopic density. Then $\delta n = \overline{n}-n
--- abstract: 'We present near-infrared integral field spectroscopy data obtained with VLT/SINFONI of “the Teacup galaxy”. The nuclear K-band (1.95–2.45 $\mu$m) spectrum of this radio-quiet type-2 quasar reveals a blueshifted broad component of FWHM$\sim$1600-1800 km s$^{-1}$ in the hydrogen recombination lines (Pa$\alpha$, Br$\delta$, and Br$\gamma$) and also in the coronal line \[Si VI\]$\lambda$1.963 $\mu$m. Thus the data confirm the presence of the nuclear ionized outflow previously detected in the optical and reveal its coronal counterpart. Both the ionized and coronal nuclear outflows are resolved, with seeing-deconvolved full widths at half maximum of 1.1$\pm$0.1 and 0.9$\pm$0.1 kpc along PA$\sim$72–74. This orientation is almost coincident with the radio axis (PA=77), suggesting that the radio jet could have triggered the nuclear outflow. In the case of the H$_2$ lines we do not require a broad component to reproduce the profiles, but the narrow lines are blueshifted by $\sim$50 km s$^{-1}$ on average from the galaxy systemic velocity, suggesting that the outflow may also be perturbing the molecular gas. We find evidence for kinematically disrupted gas (FWHM$>$250 km s$^{-1}$) at up to 5.6 kpc from the AGN, which can be naturally explained by the action of the outflow. The narrow component of \[Si VI\] is redshifted with respect to the systemic velocity, unlike any other emission line in the K-band spectrum, possibly indicating that it originates in a different region.' author: '…' title: '…' --- Introduction ============ Feedback from active galactic nuclei (AGN) has become a standard ingredient of galaxy evolution models [e.g. @Croton06]. This process occurs when the intense radiation produced by the active nucleus sweeps out and/or heats the interstellar gas, quenching star formation and therefore producing a more realistic number of massive galaxies in the simulations (see @Fabian12 for a review).
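The "seeing-deconvolved" sizes quoted in the abstract are conventionally obtained by subtracting the point-spread-function width in quadrature, under the assumption that both the source and the PSF are roughly Gaussian. A minimal sketch with hypothetical numbers (not the actual measurements of this work):

```python
import math

def deconvolve_fwhm(fwhm_obs, fwhm_psf):
    """Quadrature deconvolution, valid for Gaussian source and PSF profiles."""
    if fwhm_obs <= fwhm_psf:
        raise ValueError("source is unresolved at this seeing")
    return math.sqrt(fwhm_obs**2 - fwhm_psf**2)

# Hypothetical observed size and seeing disc, both expressed in kpc
# at the redshift of the target.
print(round(deconvolve_fwhm(1.4, 0.9), 2))  # -> 1.07
```

Note that when the observed width approaches the seeing width, the inferred intrinsic size becomes very sensitive to the PSF estimate, which is exactly the seeing-smearing caveat raised by the studies cited below.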
Two major modes of AGN feedback are identified. Radio- or kinetic-mode feedback dominates in galaxy clusters and groups, where jet-driven radio bubbles heat the intra-cluster medium; this heating is thought to prevent the hot gas from cooling and forming stars in the central galaxies. Quasar- or radiative-mode feedback consists of AGN-driven winds of ionized, neutral, and molecular gas [@Fabian12; @Fiore17]. However, such a clear distinction between the two modes of feedback can be somewhat misleading. This is because it has been shown, through the detection of nuclear outflows, that radiative-mode feedback also acts in radio-galaxies (see e.g. @Emonts16 and references therein), whilst the presence of jets in quasars that are deemed to be radio-quiet can lead to faster and more turbulent AGN-driven winds [@Mullaney13; @Zakamska14]. Therefore, it is over-simplistic to consider the impact each mode of AGN-feedback has on its host galaxy in isolation. One of the most efficient ways to identify the imprint of outflows in large AGN samples at any redshift is to search for them in the warm ionized phase in the optical range (e.g. \[O III\]$\lambda$5007 Å). Indeed, during recent years it has become clear that ionized outflows are a ubiquitous phenomenon in type-2 quasars (QSO2s) at z$\la$0.7 [@Villar11; @Villar14; @Liu13; @Harrison14; @Karouzos16]. QSO2s are excellent laboratories to search for outflows and study their impact on their host galaxies, as the AGN continuum and the broad components of the permitted lines produced in the broad-line region (BLR) are obscured. Fast motions are often measured in QSO2s, with full-widths at half maximum (FWHM) $>$1000 km s$^{-1}$ and typical velocity shifts (V$_s$) of hundreds of km s$^{-1}$. These outflows are likely triggered by AGN-related processes and they originate in the high-density regions (n$_e\ge 10^3~cm^{-3}$) within the central kiloparsecs of the galaxies.
Optical integral field spectroscopy (IFS) studies have shown that these outflows can extend up to $\sim$15 kpc from the AGN [@Humphrey10; @Liu13; @Harrison14]. However, these results have been recently questioned, as the reported outflow extents could be overestimated due to seeing smearing effects [@Karouzos16; @Villar16; @Husemann16]. Now that ionized outflows have been identified as a common process in QSO2, the next goal is to investigate their impact on other gaseous phases, such as the molecular and coronal phases. Since H$_2$ is the fuel required to form stars and feed the SMBH, the impact of the outflows in this gaseous phase is what might truly affect how systems evolve. Detecting coronal outflows is also interesting because, due to the high ionization potentials (IP$\ga$100 eV; @Mullaney09 [@Rodriguez11; @Landt15]) of these lines, they are unequivocally associated with nuclear activity. Coronal lines have intermediate widths between those of the broad and the narrow emission lines (FWHM$\sim$500–1500 km s$^{-1}$) and are generally blueshifted and/or more asymmetric than lower-ionization lines [@Penston84; @Appenzeller91; @Rodriguez11]. This indicates that either coronal lines are produced in an intermediate region between the narrow-line region (NLR) and the BLR [@Brotherton94; @Mullaney08; @Denney12], and/or are related to outflows [@Muller06; @Muller11]. The near-infrared (NIR) range, and particularly the K-band, allows us to trace outflow signatures in the molecular, ionized and coronal phases simultaneously. In addition, because ionized outflows in QSO2 are heavily reddened [@Villar14], observing them in the NIR permits us to penetrate through the dust screen and trace the regions closer to the base of the outflow. The rest-frame NIR spectrum of QSO2s at z$<$0.7 has not been fully characterised yet. To the best of our knowledge, this has been done for one QSO2 so far: Mrk477 at z=0.037 [@Villar15]. 
Additionally, a NIR spectrum of the QSO2 SDSS J1131+1627 at z=0.173 was presented in @Rose11, but only Pa$\alpha$ was detected. Here we explore the NIR spectrum of the QSO2 SDSS J143029.88+133912.0 (J1430+1339; at $z$=0.0852). The Teacup galaxy ----------------- According to its \[O III\] luminosity (5$\times$10$^{42}~erg~s^{-1}$ = 10$^{9.1}L_{\sun}$; @Reyes08), J1430+1339 is a luminous QSO2, and considering its position in the 1.4GHz–\[O III\] luminosity plane [@Lal10] it is classified as radio-quiet (L$_{1.4GHz}=5\times 10^{23}~W Hz^{-1}$; Harrison et al. 2015, hereafter @Harrison15). Nonetheless, it is a factor of 10 above the radio-FIR correlation found for star-forming galaxies [@Villar14; @Harrison14], which makes it a “radio excess source”. The host galaxy shows clear signatures of a past interaction with another galaxy, in the form of shells, tails and chaotic dust lanes [@Keel15]. J1430+1339 was nicknamed “the Teacup galaxy” because of the peculiar appearance of its extended emission-line region (EELR) in SDSS and HST images [@Keel12; @Keel15]. This EELR is dominated by a filamentary bubble to the northeast (NE) with a radial extent of $\sim$12 kpc measured from the nucleus (see Figure \[fig1\]). In the opposite direction there is another knotty emission-line structure resembling a fan extending up to $\sim$7 kpc. The Teacup has been proposed as a fading AGN candidate [@Gagne14]. The NE emission-line bubble coincides with the radio-continuum structure detected in the VLA maps of @Harrison15. These radio maps also show another radio bubble extending $\sim$10 kpc to the west, as well as two compact radio structures: a brighter one coincident with the AGN position, and a fainter one located $\sim$0.8 kpc northeast from the AGN (PA$\sim$60), identified by @Harrison15 as the high-resolution B (HR-B) region. According to the latter authors, this HR-B structure would be co-spatial with the base of the ionized nuclear outflow first reported by @Vill
Zachary Guralnik [*Department of Physics, University of California at San Diego, La Jolla, CA 92093*]{} **Abstract** The divergence of lepton and baryon currents in the Standard Model is independent of the fermion masses. For a single family, the baryon and lepton number anomaly is $$\partial_\mu J^\mu_B = \partial_\mu J^\mu_L = \frac{1}{32\pi^2}\left(-g^2\, W^{\mu\nu}\tilde W_{\mu\nu} + g'^2\, B^{\mu\nu}\tilde B_{\mu\nu}\right),$$ where $W^{\mu\nu}$ is the $SU(2)$ field strength and $B^{\mu\nu}$ is the $U(1)$ field strength. This mass independence is surprising, because in Q.E.D. the production of axial charge depends critically on whether or not the electron is massive. I will begin by reviewing the reasons for this sensitivity. Then I will show why these reasons are not applicable to a spontaneously broken theory with a vector current anomaly, such as the standard model. The results give some insight into the production of baryon number in the standard model by sphalerons, which has been of much recent interest. The divergence of the axial current in Q.E.D. \[\] is $$\partial_\mu J^{5\mu} = \frac{e^2}{8\pi^2}F_{\mu\nu}\tilde F^{\mu\nu} + 2im\bar\psi\gamma_5\psi.$$ In a background gauge field the matrix element of the last term is $$\langle 2im\bar\psi\gamma_5\psi\rangle = -\frac{e^2}{8\pi^2}F_{\mu\nu}\tilde F^{\mu\nu} + \cdots.$$ The remaining terms are higher dimension functions of the gauge fields and vanish in an adiabatic approximation.
If the electron is massive then there is no axial charge violation in an adiabatic approximation because the first and last terms in the equation above cancel. This cancellation is obvious from the start if one calculates the anomaly using a Pauli Villars regulator field. Then the regulated axial current satisfies $$\partial_\mu J^{5\mu}_r = 2im\bar\psi\gamma_5\psi + 2i\Lambda\bar\chi\gamma_5\chi,$$ where $\chi$ is the regulator field and $\Lambda$ is its mass. $\chi$ is bosonic, so $\chi$ loops have the opposite sign from $\psi$ loops. Therefore there can be no mass independent terms in the matrix element of $\del_{\mu}J^{5 \mu}_r$ in a background gauge field. This cancellation also has a simple spectral interpretation. A derivation of the Q.E.D. axial anomaly based upon the spectrum of a massless electron in a background magnetic field has been given by Nielsen and Ninomiya \[\]. Their arguments are briefly summarized below. Consider a uniform background magnetic field in the z direction. In the massless case, positive and negative chirality fermions decouple, so there are two sets of Landau levels. The positive and negative chirality Landau levels contain zero-modes with $E=-p_z$ and $E=+p_z$ respectively. Suppose one turns on a positive uniform electric field ${\cal E}$ in the $z$ direction. In an adiabatic approximation, solutions flow along spectral lines according to the Lorentz force law ${dp\over dt}=e{\cal E}$. Thus right chiral zero-modes slide out of the Dirac sea while left chiral zero-modes slide deeper into the Dirac sea (). This motion produces a net axial charge but no electric charge. By a careful counting of states one reproduces the global form of the anomaly $$\dot Q_5 = \frac{e^2}{2\pi^2}{\cal E}BV,$$ where $V$ is the volume of space. Now consider the same background fields but suppose the electron is massive. In this case, there are no zero-modes among the Landau levels. In the absence of zero-modes adiabatic evolution just maps the Dirac sea into itself, so axial charge can not be adiabatically generated.
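The Nielsen–Ninomiya state counting summarized above can be checked numerically: each Landau level has degeneracy $eB/2\pi$ per unit transverse area, the zero-modes shift in $p_z$ by $e{\cal E}\Delta t$ under the Lorentz force law, and counting the levels that cross $E=0$ for both chiralities reproduces $\Delta Q_5=(e^2/2\pi^2){\cal E}BV\Delta t$. A short check with arbitrary field values (natural units):

```python
import math

# Arbitrary illustrative values in natural units
e, E_field, B = 1.0, 0.3, 0.5
A, Lz, dt = 4.0, 10.0, 2.0   # transverse area, length in z, elapsed time
V = A * Lz

# Degeneracy of each Landau level per unit transverse area
deg_per_area = e * B / (2 * math.pi)
# Momentum shift of the zero-modes from dp/dt = e * E
dp = e * E_field * dt
# 1D density of p_z states per unit length that cross E = 0
crossings_per_length = dp / (2 * math.pi)

# Both chiralities contribute: right-handed states emerge from the Dirac sea
# while left-handed states sink into it, so the axial charge changes twice as fast.
dQ5_counting = 2 * (deg_per_area * A) * (crossings_per_length * Lz)
dQ5_anomaly = e**2 * E_field * B * V * dt / (2 * math.pi**2)

print(dQ5_counting, dQ5_anomaly)
```

The two numbers agree identically, since the counting argument is just an algebraic rearrangement of the global anomaly formula.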
The discussion above is not applicable to the standard model because standard model fermions can be given masses without changing the baryon or lepton number violation in a fixed gauge field background. Dirac mass terms do not carry vector charge, so they do not affect the divergence of a vector current. Yet in an adiabatic limit it seems that the presence or absence of mass terms $\it must$ affect the divergence of a current. In the following, this paradox will be resolved by solving the equations of motion for certain background fields which, according to the anomaly equation, should generate charge. I will demonstrate that spatially uniform backgrounds which generate vector charge have no adiabatic limit. Such backgrounds produce the anomaly by causing hopping between energy levels. On the other hand, localized instanton-like backgrounds do possess an adiabatic limit. Backgrounds of this type will be shown to produce the anomaly via fermionic bound states whose energies traverse the gap between $E=-m$ and $E=m$. This gives a better understanding of the mechanism of baryon number production in the standard model by sphalerons. The sphaleron configuration corresponds to the half-way point with a zero energy bound state. Because of the chiral couplings, the standard model Landau levels are quite complicated. To avoid calculating Landau levels in $3+1$ dimensions, I will instead consider a spontaneously broken $U(1)$ axial gauge theory in $1+1$ dimensions. While the details of the computation are different, many of the results obtained in $1+1$ dimensions are expected to hold in $3+1$ dimensions. The lagrangian of this theory couples a fermion chirally to a $U(1)$ gauge field and to a symmetry-breaking scalar. This simplified model possesses the two traits whose consistency I wish to demonstrate: a massive spectrum and a mass independent vector current divergence. For the moment I will not consider the full dynamical theory, but only the theory in a fixed scalar background with $\rho(x) =v$ asymptotically.
It should be possible to demonstrate the anomaly by considering the momentum space equations of motion, as was done for massless Q.E.D. by Nielsen and Ninomiya using the Lorentz force law. A few remarks are in order about how to do this. Let the Dirac field in a background be expanded as follows: $$\psi(x,t)=\sum_{p,i}c_{p,i}(t)\,u_{p,i}\,e^{ipx},$$ where $u_{p,i}$ are free massive spinors normalized to 1, and the index $i$ distinguishes between positive and negative frequency solutions when the backgrounds vanish. All the background dependence is contained in the time evolution of $c_{p,i}(t)$; when the backgrounds vanish, the $c_{p,i}(t)$ evolve only by free phases. Given a knowledge of which states are occupied at an initial time, one can determine which states are occupied at a final time by looking at the evolution of the coefficients $c_{p,i}$. At this point however, the use of this expansion to determine the vector charge or the particle number is very ambiguous. One can make transformations of $\psi$, corresponding to certain transformations of the background fields, which change the $c_{p,i}$. For example transformations exist which map something that looks like the Dirac sea into something that looks like an excited state with nonzero vector charge. An accurate definition is needed. Such a definition must depend on the background fields as well as the Fourier coefficients. In order to make the computation of the charge simple, I will only consider processes in which local gauge invariant functions of the background fields vanish at asymptotic times. This means that the initial and final $\theta$ and $A^{\mu}$ are gauge equivalent to $\theta =0$ and $A^{\mu} =0$. In this case the proper definition of charge at asymptotic times is simple. In Dirac sea language, one subtracts the number of vacant negative frequency states from the number of occupied positive frequency states. The occupation number of a positive or negative frequency state of momentum $p$ is proportional to $|c_{p,\pm}|^2$ in the gauge in which the backgrounds vanish.
Equivalently, in second quantized language one can adopt a normal ordered definition of charge at asymptotic times. The change in the charge can then be written in terms of Bogolubov coefficients relating the operators $\hat c_{p,i}$ in the asymptotic past to those in the asymptotic future, where these operators are defined in the gauge in which the backgrounds vanish. Note that at intermediate times the gauge invariant backgrounds do not vanish so a well defined Bogolubov transformation between asymptotic past and intermediate times does not exist. Normal ordering is no longer sensible at intermediate times because solutions can not be classified as positive or negative frequency. However, the total change in the charge between asymptotic times remains well defined [^1]. In the spirit of the anomaly calculations done by Nielsen and Ninomiya, I will first consider a process in which a spatially uniform axial electric field is turned on
--- abstract: 'We report on the crystal structure, physical properties and electronic structure calculations for the ternary pnictide compound EuCr$_{2}$As$_{2}$. X-ray diffraction studies confirmed that EuCr$_{2}$As$_{2}$ crystallizes in the ThCr$_{2}$Si$_{2}$-type tetragonal structure (space group *I4/mmm*). The Eu-ions are in a stable divalent state in this compound. Eu moments in EuCr$_{2}$As$_{2}$ order magnetically below $T_m$ = 21 K. A sharp increase in the magnetic susceptibility below $T_m$ and the positive value of the paramagnetic Curie temperature obtained from the Curie-Weiss fit suggest dominant ferromagnetic interactions. The heat capacity exhibits a sharp $\lambda$-shape anomaly at $T_m$, confirming the bulk nature of the magnetic transition. The extracted magnetic entropy at the magnetic transition temperature is consistent with the theoretical value $Rln(2S+1)$ for $S$ = 7/2 of the Eu$^{2+}$ ion. The temperature dependence of the electrical resistivity $\rho(T)$ shows metallic behavior along with an anomaly at 21 K. In addition, we observe a reasonably large negative magnetoresistance ($\sim$ -24%) at lower temperature. Electronic structure calculations for EuCr$_{2}$As$_{2}$ reveal a moderately high density of states of Cr-3$d$ orbitals at the Fermi energy, indicating that the nonmagnetic state of Cr is unstable against magnetic order. Our density functional calculations for EuCr$_{2}$As$_{2}$ predict a G-type AFM order in the Cr sublattice. The electronic structure calculations suggest a weak interlayer coupling of the Eu-moments.' author: - 'U. B. Paramanik' - 'R. Prasad' - 'C. Geibel' - 'Z. Hossain' --- The ternary RT$_{2}$Pn$_{2}$ (T = transition metal, Pn = pnictogen) compounds exhibit broadly similar properties. These compounds consist of alternate ‘T-Pn’ layers and ‘R’ layers stacked along the $c$ axis.
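The theoretical entropy quoted in the abstract is simple to verify: for Eu$^{2+}$ with $S=7/2$, $R\ln(2S+1)=R\ln 8$. A one-line check using the standard gas constant (no data from this work):

```python
import math

R = 8.314  # gas constant [J mol^-1 K^-1]
S = 7 / 2  # spin of the Eu2+ ion

# Magnetic entropy released at full ordering: R ln(2S+1) = R ln 8
entropy = R * math.log(2 * S + 1)
print(f"{entropy:.2f} J mol^-1 K^-1")  # -> 17.29
```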
Following the exploration of these materials over the last 20 years, the recent discovery of high temperature superconductivity (SC) in the doped AFe$_{2}$As$_{2}$ (A = divalent alkaline metal or rare-earth metal) [@Rotter] has generated a new wave of investigations in search of new compounds in this class, which exhibit interesting magnetic and superconducting properties. In most of the Eu-based members of this family, Eu is in a stable divalent state and carries a large local moment. In a few cases a mixed-valence state of Eu is also observed, for example, in EuNi$_{2}$P$_{2}$ and EuCu$_{2}$Si$_{2}$ \[6,7,8\]. EuFe$_{2}$As$_{2}$ is a member of the Fe based “122” pnictide family where Eu is divalent. This system undergoes a SDW transition in the Fe sublattice at 190 K accompanied by an AFM ordering of Eu$^{2+}$ moments at 19 K \[9\]. The interplay between SC and Eu$^{2+}$ magnetism in doped EuFe$_{2}$As$_{2}$ has been extensively studied recently. [@Jeevan; @Miclea; @Jeevan1; @Zapf; @Anupam; @Paramanik] When As is replaced by P, as in EuFe$_{2}$P$_{2}$, no Fe moment is observed and the divalent Eu moments order ferromagnetically at $T_C$ = 30 K, as has been detected by neutron diffraction measurements. [@Feng; @Ryan] An incommensurate antiferromagnetic structure of Eu$^{2+}$ moments with $T_N$ = 47 K has been found in EuRh$_{2}$As$_{2}$ \[15\]. While EuCu$_{2}$As$_{2}$ exhibits a delicate balance between FM and AFM ordering,[@Sengupta] EuNi$_{2}$As$_{2}$ and EuCo$_{2}$As$_{2}$ order antiferromagnetically. [@Bauer; @Ballinger] Briefly, the pnictide compounds of this structure class show a variety of novel and interesting behaviors. We found that EuCr$_{2}$As$_{2}$ crystallizes in the ThCr$_{2}$Si$_{2}$-type tetragonal structure with space group *I4/mmm*. As shown in Fig. 1, alternating Eu layers and CrAs layers are stacked along the $c$ axis where Cr atoms form a square planar lattice in the CrAs layer, similar to the AFe$_{2}$As$_{2}$. Recently, Singh et al.
have investigated the closely related compound BaCr$_{2}$As$_{2}$ \[19\]. A combined study of physical properties and electronic structure calculations demonstrates that BaCr$_{2}$As$_{2}$ is a metal with itinerant antiferromagnetism, similar to the parent phases of Fe-based superconductors but with a slightly different magnetic structure. Neutron diffraction measurements on BaFe$_{2-x}$Cr$_{x}$As$_{2}$ crystals reveal that Cr doping in BaFe$_{2}$As$_{2}$ leads to suppression of the Fe SDW transition, but the superconductivity (as usually observed in case of other transition metal doping) is prevented by a new competing magnetic order of G-type antiferromagnetism which becomes the dominant magnetic ground state for $x$ $>$ 0.3. [@Athena; @Marty] BaCr$_{2}$As$_{2}$ shows stronger transition metal-pnictogen covalency than the Fe compounds,[@Singh] and in that respect is more similar to the widely studied compound BaMn$_{2}$As$_{2}$. BaMn$_{2}$As$_{2}$ has been characterized as a small band-gap semiconductor with G-type AFM ordering of Mn moments at $T_N$ = 625 K \[22,23\]. This material becomes metallic by partial substitution of Ba by K or by applied pressure on the parent compound. [@KBaMn; @PBaMn; @PKBaMn] In contrast to BaCr$_{2}$As$_{2}$ and BaMn$_{2}$As$_{2}$, both having tetragonal crystal structure, EuMn$_{2}$As$_{2}$ forms in a hexagonal crystal structure[@Ruhel] whereas EuCr$_{2}$As$_{2}$ is found to be tetragonal. Very recently, the closely related compounds LnOCrAs (Ln = La, Ce, Pr, and Nd) possessing similar CrAs layers as in BaCr$_{2}$As$_{2}$ have been synthesized by Park et al. [@Hosono] These compounds are isostructural (ZrCuSiAs-type structure with space group *P4/nmm*) to LnOFeAs, which are the parent compounds of Fe-based high $T_c$ superconductors.
Powder neutron diffraction measurements at room temperature reveal that the Cr$^{2+}$ ions in LaOCrAs bear a large itinerant moment of 1.57 $\mu_{B}$ pointing along the $c$ axis, which undergoes a G-type AFM ordering. The Néel temperature $T_N$ has been estimated to lie between 300 and 550 K. Therefore, materials possessing CrAs layers are highly interesting with regard to the physical properties that may emerge when the AFM ordering is suppressed by doping. Here we report on the crystal structure, physical properties and electronic structure calculations of EuCr$_{2}$As$_{2}$. Our single crystals are ferromagnetic below an ordering temperature $T_m$. A large negative magnetoresistance is found below $T_m$. Density-functional theory-based calculations indicate that the Cr ions bear itinerant moments and that the most stable magnetic state in the Cr sublattice is a G-type AFM order. METHODS ======= The single crystals of EuCr$_{2}$As$_{2}$ were grown using CrAs flux as described by Singh et al. [@Singh] The CrAs binary was presynthesized by reacting a mixture of Cr powder and As pieces at 300 $^\circ$C for 10 h, then at 600 $^\circ$C for 30 h and finally at 900 $^\circ$C for 24 h
null
--- abstract: 'Bilayer graphene can exhibit deformations such that the two graphene sheets are locally detached from each other, resulting in a structure consisting of domains with different inter-layer coupling. Here we investigate how the presence of these domains affects the transport properties of bilayer graphene. We derive analytical expressions for the transmission probability, and the corresponding conductance, across walls separating domains with different inter-layer coupling. We find that the transmission can exhibit a valley-dependent layer asymmetry and that the domain walls have a considerable effect on the chiral tunnelling properties of the charge carriers. We show that transport measurements allow one to obtain the strength with which the two layers are coupled. We performed numerical calculations for systems with two domain walls and find that the availability of multiple transport channels in bilayer graphene modifies significantly the conductance dependence on inter-layer potential asymmetry.' author: - 'B. Van Mohammad' - 'M. Zarenia' - 'H. Bahlouli' - 'F. M. Peeters' title: Quantum transport across van der Waals domain walls in bilayer graphene --- Introduction ============ A decade ago, researchers started investigating graphene and its associated multilayers for use as a basis for the next generation of fast and smart electronic logic gates. The absence of a band gap led to different proposals for gap generation[@14-0; @14-1; @14-2]. For example, by changing the size of graphene flakes into nanoribbons or quantum dots, one can control the energy gap through size quantization[@15; @15-1; @14]. Important experimental advances were achieved in recent years which enabled the fabrication of graphene-based electronic devices at the nanoscale[@16; @16-1; @intro-1]. The increasing control over the structure of graphene flakes allowed for new devices that could constitute the building blocks for fully integrated carbon-based electronics.
An example of this is deformed bilayer graphene, where the two layers are not aligned due to a mismatch in orientation or stacking order, resulting in e.g. twisted bilayer graphene. Its electronic structure is strongly different from that of normal bilayer graphene and exhibits very peculiar properties such as the appearance of additional Dirac cones[@17; @18; @19; @20; @VanderDonck2016; @VanderDonck2016b]. Recent experiments have shown that epitaxial graphene can form step-like single layer/bilayer (SL/BL) interfaces or that it is possible to create bilayer graphene flakes that are connected to single layer graphene regions[@13-1; @13-2; @20-0]. The appearance of these structures fueled theoretical and experimental investigations of the behavior of massless and massive particles in such junctions. For example, a few works have investigated different domain walls that separate, for instance, different types of stacking[@AB-BA-1; @AB-BA-2; @pelc] or even different numbers of layers[@20-1; @22; @26]. These theoretical investigations showed that the transmission probabilities through SL/BL interfaces exhibit a valley-dependent asymmetry which could be used for valley-based electronic applications[@23; @24; @25]. Other theoretical and experimental works focused on the emergence of Landau levels, edge state properties and peculiar transport properties in such systems[@27-0; @27-1; @27-2; @27; @28; @29; @30; @31; @32; @33; @15]. A bilayer graphene flake sandwiched between two single zigzag or armchair nanoribbons[@15; @34] was also investigated and it was found that the conductance exhibits oscillations for energies larger than the inter-layer coupling. Most of these recent theoretical works considered domain walls separating patches of bilayer graphene with different stacking type, or where only a single layer was connected to a bilayer graphene sheet. Very recently, however, a number of new bilayer graphene platforms have been synthesized in which the coupling between the two layers changes locally.
For example, in the case of folded graphene[@Wang2017; @Rode2016] a part of the fold forms a coupled bilayer structure, while another part of it is uncoupled[@Schmitz2017; @Hao2016; @Yan2016]. One has also observed systems with domain walls separating regions of different Bernal stacking[@Yin2017; @Yin2016]. In general, these systems can be modelled as being composed of two single layers of graphene (2SL) which are locally bound by the van der Waals interaction into an AA- or AB-stacked bilayer structure. Here, we present a systematic study of electrical transport across domain walls separating regions of different inter-layer coupling. We discuss the dependence on the coupling between the graphene layers, on the distance between subsequent domain walls and on local electrostatic gating. For completeness, we also present all possible combinations of locally detached bilayer systems. Analytical expressions for the transport across a single domain wall are also obtained. Such locally detached systems have been realized in several experiments. From a theoretical point of view, one can wonder how charge carriers will respond to transitions between systems that have completely different transport properties. For example, single layer graphene and AA-stacked bilayer graphene are known to feature Klein tunnelling at normal incidence while AB-stacked bilayer graphene shows anti-Klein tunnelling[@Katsnelson2006; @Stander2009]. It is, therefore, interesting to investigate under which conditions these peculiar chirally-assisted tunnelling properties persist in combined systems, as well as to investigate how the presence of multiple transport channels changes the transport properties. From our study we obtain useful analytical expressions for the transmission probability across a single domain wall.
These results also show that the effect of local gating is to break the symmetry between the two layers and to introduce a valley-dependent angular asymmetry, which could be used for a layer-dependent valley-filtering device. We show that the inter-layer coupling strength and stacking has a characteristic effect on the conductance across a domain wall, which can be used to measure structural deformations in bilayer graphene. We find that the presence of multiple conductance channels in bilayer graphene can qualitatively modify the dependence of the conductance on an applied inter-layer potential difference, e.g. from constructive to destructive interference. The paper is organized as follows. In Sec. \[Sec:Model\], we discuss the formalism, explain the geometry of the investigated domain walls, and define the possible scattering processes between the different transport modes. In Sec. \[Symmetry\], we give analytical expressions for the transmission probabilities through one domain wall and analyze how the symmetry between the graphene layers can be broken by electrostatic potentials. An overview of the numerical results for more complex set-ups consisting of multiple domain walls and gates is presented in Sec. \[Results\]. Finally, in Sec. \[Concl\] we briefly summarize the main points of this paper and comment on possible experimental signatures of the presence of coupling domain walls in bilayer graphene. Model {#Sec:Model} ===== Single layer graphene consists of two inequivalent sublattices, denoted as $\alpha$ and $\beta$, with interatomic distance $a=0.142$ nm, which are coupled, in the tight-binding (TB) formalism, by the hopping energy $\gamma_{0}=3$ eV[@1]. It has a gapless energy spectrum with band crossings at the so-called Dirac points $K$ and $K'$ that are located at the corners of the Brillouin zone. The spectrum is shown in Fig. \[fig01\](a).
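As a quick numerical check of the statements above, the following minimal numpy sketch evaluates the nearest-neighbour tight-binding bands $E(\mathbf{k}) = \pm\gamma_0 |f(\mathbf{k})|$ of single layer graphene, using the parameters $a$ and $\gamma_0$ quoted in the text. This is our own illustration, not code from the paper; the structure-factor convention and function names are one common choice.

```python
import numpy as np

gamma0 = 3.0   # nearest-neighbour hopping (eV), as quoted in the text
a = 0.142      # carbon-carbon distance (nm), as quoted in the text

def f(kx, ky):
    """Nearest-neighbour structure factor (one common convention)."""
    return (np.exp(1j * kx * a)
            + 2 * np.exp(-1j * kx * a / 2) * np.cos(np.sqrt(3) * ky * a / 2))

def bands(kx, ky):
    """The two pi-bands E = +/- gamma0 |f(k)|."""
    e = gamma0 * abs(f(kx, ky))
    return e, -e

# Dirac point K = (2*pi/(3a), 2*pi/(3*sqrt(3)*a)): the two bands touch here.
K = (2 * np.pi / (3 * a), 2 * np.pi / (3 * np.sqrt(3) * a))
```

Evaluating `bands(*K)` confirms the gapless band crossing at the Dirac point, while at the zone center the bands reach $\pm 3\gamma_0$.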
Bilayer graphene consists of two single layers of graphene which can be stacked in two stable configurations: AB-stacked bilayer graphene (AB-BL) or AA-stacked bilayer graphene (AA-BL). In AB-BL, atom $\alpha_{2}$ is placed directly above atom $\beta_{1}$ with inter-layer coupling $\gamma_1\approx0.4$ eV[@li2009band], as shown in Fig. \[fig01\](b). It has a parabolic dispersion relation with four bands. Two of them touch at zero energy, whereas the other two bands are split off by an energy $\gamma_1$. The skew hopping parameters $\gamma_3$ and $\gamma_4$ between the other two sublattices are neglected since they have an insignificant effect on the transmission probabilities and band structure at high energies [@Ben]. In AA-BL the two single layers of graphene are placed exactly on top of each other such that the structure becomes mirror-symmetric. Atoms $\alpha_2$ and $\beta_2$ in the top layer are located directly above atoms $\alpha_1$ and $\beta_1$ in the bottom layer, with direct inter-layer coupling $\gamma_1\approx0.2\ {\rm eV}$ [@AA-gamma1], see Fig. \[fig01\](c). AA-BL has a linear energy spectrum with two Dirac cones shifted in energy by an amount $\pm\gamma_1$, as depicted in Fig. \[fig01\](c) by the full curves. Geometries ---------- We consider four different junctions that can be made from the building blocks depicted in Fig. \[fig01\]: monolayer, AA-stacked and AB-stacked bilayer graphene. Without loss of generality, we assume that the charge carriers are always propagating from the left to the right hand side. We then distinguish the following configurations: ($I$) a structure where the leads on the left ($x<0$) and on the right hand side ($x>d$) consist of two decoupled single layers while in between they are connected into an AB-BL (AA-BL) configuration. This is depicted in Fig. \[intro-fig02\](
null
--- abstract: 'A capacity bounded grammar is a grammar whose derivations are restricted by assigning a bound on the number of occurrences of every nonterminal symbol in the sentential forms. In the paper the generative power and closure properties of capacity bounded grammars and their Petri net controlled counterparts are investigated.' Grammars with regulated derivations are a classical topic of formal language theory [@das:pau]. Results from the theory of Petri nets have been applied successfully to provide elegant solutions to complicated problems from language theory [@esp; @hau:jan]. A context-free grammar can be associated with a context-free (communication-free) Petri net, whose places and transitions correspond to the nonterminals and the rules of the grammar, respectively, and whose arcs and weights reflect the change in the number of nonterminals when applying a rule. In some recent papers, context-free Petri nets enriched by additional components have been used to define regulation mechanisms for the defining grammar [@das:tur; @tur]. In this paper, the additional component of interest is a place capacity: each place is assigned a nonnegative integer bound on the number of tokens it may contain. Quite obviously, a context-free Petri net with place capacity regulates the defining grammar by permitting only those derivations where the number of each nonterminal in each sentential form is bounded by its capacity. In earlier work on such regulations it was shown that grammars regulated in this way generate the family of context-free languages of finite index, even if arbitrary nonterminal strings are allowed as left-hand sides. The main result of this paper is that, somewhat surprisingly, grammars with capacity bounds have a greater generative power. This paper is organized as follows. Section \[sec:def\] contains some necessary definitions and notations from language and Petri net theory. The concepts of grammars with capacities and grammars controlled by Petri nets with place capacities are introduced in section \[sec:capacities\]. The generative power and closure properties of capacity-bounded grammars are investigated in sections \[sec:power-gs\] and \[sec:nb-cfg\].
Results on grammars controlled by Petri nets with place capacities are given in section \[sec:PNC\]. Preliminaries {#sec:def} ============= Throughout the paper, we assume that the reader is familiar with the basic concepts of formal language theory and Petri net theory; for details we refer to [@das:pau; @han; @rei:roz]. The set of natural numbers is denoted by ${\mathbb{N}}$, the power set of a set $S$ by ${\mathcal{P}({S})}$. We use the symbol $\subseteq$ for inclusion and $\subset$ for proper inclusion. The *length* of a string $w \in X^*$ is denoted by $|w|$, the number of occurrences of a symbol $a$ in $w$ by $|w|_a$ and the number of occurrences of symbols from $Y\subseteq X$ in $w$ by $|w|_Y$. The *empty string* is denoted by ${\lambda}$. A *phrase structure grammar* (due to Ginsburg and Spanier [@gin:spa1]) is a quadruple $G=(V, \Sigma, S, R)$ where $V$ and $\Sigma$ are two finite disjoint alphabets of *nonterminal* and *terminal* symbols, respectively, $S\in V$ is the *start symbol* and $R$ is a finite set of *rules*. A string $x\in (V\cup \Sigma)^*$ *directly derives* a string $y\in (V\cup \Sigma)^*$ in $G$, written as $x{\Rightarrow}y$, if and only if there is a rule $u\to v\in R$ such that $x=x_1ux_2$ and $y=x_1vx_2$ for some $x_1, x_2\in (V\cup \Sigma)^*$. The reflexive and transitive closure of the relation ${\Rightarrow}$ is denoted by ${\Rightarrow}^*$. A derivation using the sequence of rules $\pi=r_1r_2\cdots r_k$, $r_i\in R$, $1\leq i\leq k$, is denoted by $\xRightarrow{\pi}$ or $\xRightarrow{r_1r_2\cdots r_k}$. The *language* generated by $G$, denoted by $L(G)$, is defined by $L(G)=\{w\in \Sigma^*{:}S{\Rightarrow}^* w\}.$ A phrase structure grammar $G=(V, \Sigma, S, R)$ is called *context-free* if each rule $u\to v\in R$ has $u\in V$. The family of context-free languages is denoted by $\mathbf{CF}$.
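To make the derivation relation concrete, here is a small Python sketch (our own illustration of the definitions above, not part of the paper; the example grammar is hypothetical). It applies a sequence of context-free rules, rewriting the leftmost occurrence of each left-hand side:

```python
def derive_once(x, rule):
    """One direct derivation step x => y: rewrite the leftmost
    occurrence of u by v for a rule u -> v (returns None if u
    does not occur in x, i.e. the rule is not applicable)."""
    u, v = rule
    i = x.find(u)
    if i < 0:
        return None
    return x[:i] + v + x[i + len(u):]

def derive(x, pi):
    """Derivation using the rule sequence pi = r1 r2 ... rk."""
    for r in pi:
        x = derive_once(x, r)
        if x is None:
            return None
    return x

# Example grammar: S -> aSb | ab generates {a^n b^n : n >= 1}
pi = [("S", "aSb"), ("S", "aSb"), ("S", "ab")]
w = derive("S", pi)   # "aaabbb"
```

The definition allows rewriting any occurrence of $u$; restricting to the leftmost one is only an implementation choice here. The counting notation $|w|_a$ corresponds to `w.count("a")`.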
A *matrix grammar* is a quadruple $G=(V, \Sigma, S, M)$ where $V, \Sigma, S$ are defined as for a context-free grammar, and $M$ is a finite set of *matrices*, which are finite strings (or finite sequences) over a set of context-free rules. The language generated by the grammar $G$ consists of all strings $w\in \Sigma^*$ such that there is a derivation $S\xRightarrow{r_1r_2\cdots r_n}w$ where $r_1r_2\cdots r_n$ is a concatenation of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$. The family of languages generated by matrix grammars without erasing rules (with erasing rules, respectively) is denoted by $\mathbf{MAT}$ (by $\mathbf{MAT}^{{\lambda}}$, respectively). A *vector grammar* is defined like a matrix grammar, but the derivation sequence $r_1r_2\cdots r_n$ has to be a shuffle of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$. A *semi-matrix grammar* is defined like a matrix grammar, but the derivation sequence $r_1r_2\cdots r_n$ has to be a semi-shuffle of some matrices $m_{i_1}, m_{i_2}, \ldots, m_{i_k}\in M$, $k\geq 1$, i.e., it is taken from the shuffle of sequences from $\bigcup_{i=1}^t m_i^*$ where $$M=\{m_1,\ldots,m_t\}.$$ The language families generated by vector and semi-matrix grammars are denoted by ${{\bf V}}^{[{\lambda}]}$ and ${{\bf sMAT}}^{[{\lambda}]}$. A *Petri net* (PN) is a construct $N = (P, T, F, \varphi)$ where $P$ and $T$ are disjoint finite sets of *places* and *transitions*, respectively, $F \subseteq (P\times T) \cup (T\times P)$ is the set of *directed arcs*, and $$\varphi: (P\times T) \cup (T\times P) \rightarrow \{0, 1, 2, \dots\}$$ is a *weight function* with $\varphi(x,y)=0$ for all $(x,y)\in ((P\times T) \cup (T\times P))-F$. A mapping $$\mu: P \rightarrow \{0,1,2, \ldots\}$$ is called a *marking*. For each place $p\in P$, $\mu(p)$ gives the number of *tokens* in $p$.
$^{\bullet}x=\{y{:}\, (y,x)\in F\}$ and $x^{\bullet}=\{y{:}\, (x,y)\in F\}$ are called the sets of *input* and *output* elements of $x\in P\cup T$, respectively. A sequence of places and transitions $\rho=x_1x_2\cdots x_n$ is called a *path* if and only if no place or transition except $x_1$ and $x_n$ appears more than once, and $x_{i+1}\in x^\bullet_{i}$ for all $1\leq i\leq n-1$. We denote by $P_\rho, T_\rho, F_\rho$ the sets of places, transitions and arcs of $\rho$, respectively. Two paths $\rho_1$, $\rho_2$ are called *disjoint* if $P_{\rho_1}\cap P_{\rho_2}=\emptyset$ and $T_{\rho_1}\cap T_{\rho_2}=\emptyset$. A path $\rho=t_{1}p_{1}t_{2}p_{2}\cdots p_{k-1}t_{k}$ (respectively, $\rho=p_{1}t_{1}p_{2}t_{2}\cdots t_{k}p_{1}$) is called a *chain* (respectively, a *cycle*).
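The standard token-game semantics, which the preliminaries above stop short of stating, can be sketched as follows (a minimal Python illustration under the usual firing rule; the data layout is our own): a transition $t$ is enabled at marking $\mu$ if $\mu(p) \geq \varphi(p,t)$ for every place $p$, and firing it yields $\mu'(p) = \mu(p) - \varphi(p,t) + \varphi(t,p)$.

```python
def enabled(mu, t, phi):
    """t is enabled at marking mu if every place holds at least
    phi(p, t) tokens (markings and weights are dicts)."""
    return all(mu[p] >= phi.get((p, t), 0) for p in mu)

def fire(mu, t, phi):
    """Fire t: consume phi(p, t) tokens, produce phi(t, p) tokens."""
    assert enabled(mu, t, phi)
    return {p: mu[p] - phi.get((p, t), 0) + phi.get((t, p), 0) for p in mu}

def respects_capacities(mu, kappa):
    """Place-capacity check: mu(p) <= kappa(p) for every place."""
    return all(mu[p] <= kappa[p] for p in mu)

# Tiny net: t consumes one token from p1 and puts two tokens on p2.
phi = {("p1", "t"): 1, ("t", "p2"): 2}
mu = fire({"p1": 1, "p2": 0}, "t", phi)
```

In the capacity-bounded setting, a firing is additionally admitted only when the resulting marking still satisfies `respects_capacities`.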
null
--- abstract: | We consider the problem of maintaining a dynamic set of integers and answering queries of the form: report a point (equivalently, all points) in a given interval. Such queries can always be answered by predecessor search. However, for a RAM with $w$-bit words, we show how to perform updates in $O(\lg w)$ time and answer queries in $O(\lg\lg w)$ time. The update time is identical to the van Emde Boas structure, but the query time is exponentially faster. Existing lower bounds show that achieving our query time for predecessor search requires doubly-exponentially slower updates. Our solution is based on a new and interesting recursion idea which is “more extreme” than the van Emde Boas recursion. Whereas van Emde Boas uses a simple recursion (repeated halving) on each path in a trie, we use a nontrivial, van Emde Boas-like recursion on every such path. Despite this, our algorithm is quite clean when seen from the right angle. To achieve linear space for our data structure, we solve a problem which is of independent interest. We develop the first scheme for dynamic perfect hashing requiring sublinear space. This gives a dynamic Bloomier filter (an approximate storage scheme for sparse vectors) which uses low space. We show that its space usage is optimal. author: - | Christian Worm Mortensen[^1]\ IT U. Copenhagen\ `cworm@itu.dk` - | Rasmus Pagh\ IT U. Copenhagen\ `pagh@itu.dk` - | Mihai Pǎtraşcu\ MIT\ `mip@mit.edu` bibliography: - '../general.bib' title: On Dynamic Range Reporting in One Dimension --- Introduction ============ Our problem is to maintain a set $S$ under insertions and deletions of values, while answering range reporting queries. The query ${\texttt{findany}}(a,b)$ should return an arbitrary value in $S \cap [a,b]$, or report that $S \cap [a,b] = \emptyset$. This is a form of existential range query.
In fact, since we only consider update times above the predecessor bound, updates can maintain a linked list of the values in $S$ in increasing order. Given a value $x \in S \cap [a,b]$, one can traverse this list in both directions starting from $x$ and list all values in the interval $[a,b]$ in constant time per value. Thus, answering ${\texttt{findany}}$ queries suffices for full range reporting. The model in which we study this problem is the word RAM. We assume the elements of $S$ are integers that fit in a word, and let $w$ be the number of bits in a word (thus, the “universe size” is $u = 2^w$). We let $n = |S|$. Our data structure will use Las Vegas randomization (through hashing), and the bounds stated will hold with high probability in $n$. Range reporting is a very natural problem, and its higher-dimensional versions have been studied for decades. In one dimension, the problem is easily solved using predecessor search. The predecessor problem has also been studied intensively, and the known bounds are now tight in almost all cases [@beame02predecessor]. Another well-studied problem related to ours is the lookup problem (usually solved by hashing), which asks to find a key in a set of values. Our problem is more general than the lookup problem, and less general than the predecessor problem. While these two problems are often dubbed “the integer search problems”, we feel range reporting is an equally natural and fundamental incarnation of this idea, and deserves similar attention. The first to ask whether or not range reporting is as hard as finding predecessors were Miltersen et al. in STOC’95 [@miltersen99asymmetric]. For the static case, they gave a data structure with space $O(nw)$ and constant query time, which cannot be achieved for the predecessor problem with polynomial space. An even more surprising result from STOC’01 is due to Alstrup, Brodal and Rauhe [@alstrup01range], who gave an optimal solution for the static case, achieving linear space and constant query time.
In the dynamic case, however, no solution better than the predecessor problem was known. For this problem, the fastest known solution in terms of $w$ is the classic van Emde Boas structure [@veb77predecessor], which achieves $O(\lg w)$ time per operation. For the range reporting problem, we show how to perform updates in $O(\lg w)$ time, while supporting queries in $O(\lg\lg w)$ time. Our data structure uses linear space, i.e. $O(n)$ words. The update time is identical to the one given by the van Emde Boas structure, but the query time is exponentially faster. In contrast, Beame and Fich [@beame02predecessor Theorem 3.7] show that achieving any query time that is $o(\lg w / \lg\lg w)$ for the predecessor problem requires update time $\Omega(2^{w^{1 - \epsilon}})$, which is doubly-exponentially slower than our update time. We describe our solution below. Our solution incorporates some basic ideas from the previous solutions to static range reporting in one dimension [@miltersen99asymmetric; @alstrup01range]. However, it brings two important technical contributions. First, we develop a new and interesting recursion idea which is more advanced than the van Emde Boas recursion (but, nonetheless, not technically involved). We describe this idea by first considering a simpler problem, the bit-probe complexity of the greater-than function. Then, the solution for dynamic range reporting is obtained by using the recursion for this simpler problem on *every path* of a binary trie of depth $w$. This should be contrasted to the van Emde Boas structure, which uses a very simple recursion idea (repeated halving) on every root-to-leaf path of the trie. The van Emde Boas recursion is fundamental in the modern world of data structures, and has found many unrelated applications (e.g. exponential trees, integer sorting, cache-oblivious layouts, interpolation search trees).
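To fix ideas, the simple repeated-halving recursion mentioned here can be illustrated on the greater-than function: to compare two $w$-bit integers, locate the most significant differing bit by halving the word, which takes $O(\lg w)$ rounds. The sketch below is our own illustration of this halving idea, not the paper's "more extreme" recursion:

```python
def greater_than(x, y, w):
    """Compare two w-bit integers by repeated halving: if the high
    halves differ, the answer is decided among them; otherwise the
    answer depends only on the low halves. O(lg w) rounds."""
    if w == 1:
        return x > y
    half = w // 2
    xh, yh = x >> half, y >> half
    if xh != yh:
        return greater_than(xh, yh, w - half)
    mask = (1 << half) - 1
    return greater_than(x & mask, y & mask, half)
```

Each round halves the number of bits still in play, mirroring how the van Emde Boas structure halves the remaining trie depth on a root-to-leaf path.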
The second important contribution of this paper is needed to achieve linear space for our data structure. We develop a scheme for dynamic perfect hashing which requires sublinear space. This can be used to store a sparse vector in small space, if we are only interested in obtaining correct results when querying non-null positions (the Bloomier filter problem). We also prove that our solution is optimal. To our knowledge, this solves the last important theoretical problem connected to Bloom filters. The stringent space requirements that our data structure can meet are important in data-stream algorithms and database systems. We discuss these applications below. Data-Stream Perfect Hashing and Bloomier Filters ------------------------------------------------ The Bloom filter is a classic data structure for testing membership in a set. If a constant rate of false-positives is allowed, the space *in bits* can be made essentially linear in the size of the set. Optimal bounds for this problem are obtained in [@pagh05bloom]. Bloomier filters, an extension of the classical Bloom filter with a catchy name, were defined and analyzed in the static case by Chazelle et al. [@chazelle04bloom]. The problem is to represent a vector $V[0..u-1]$ with elements from $\{ 0, \dots, 2^r - 1\}$ which is nonzero in only $n$ places (assume $n \ll u$, so the vector is sparse). Thus, we have a sparse set as before, but with values associated to the elements. The information theoretic lower bound for representing such a vector is $\Omega(n\cdot r + \lg \binom{u}{n}) \approx \Omega(n (r + \lg u))$ bits. However, if we only want correct answers when $V[x] \ne 0$, we can obtain a space usage of roughly $O(nr)$ bits in the static case. For the dynamic problem, where the values of $V$ can change arbitrarily at any point, achieving such low space is impossible regardless of the query and update times. Chazelle et al.
[@chazelle04bloom] proved that $\Omega(n(r + \min(\lg\lg \frac{u}{n^3}, \lg n)))$ bits are needed. No non-trivial upper bound was known. We give matching lower and upper bounds: \[thm:bloomlb\] The randomized space complexity of maintaining a dynamic Bloomier filter for $r\geq 2$ is $\Theta(n(r + \lg\lg \frac{u}{n}))$ bits in expectation. The upper bound is achieved by a RAM data structure that allows access to elements of the vector in worst-case constant time, and supports updates in amortized expected $O(1)$ time. To detect whether $V[x] = 0$ with probability of correctness at least $1-\
null
[**[Old Galaxies in the Young Universe]{}**]{} [**[A. Cimatti$^1$, E. Daddi$^2$, A. Renzini$^2$, P. Cassata$^3$, E. Vanzella$^{3}$, L. Pozzetti$^4$, S. Cristiani$^5$, A. Fontana$^6$, G. Rodighiero$^3$, M. Mignoli$^4$, G. Zamorani$^4$ ]{}**]{}\ $^1$ INAF - Osservatorio Astrofisico di Arcetri, Largo E. Fermi 5, I-50125, Firenze, Italy\ $^2$ European Southern Observatory, Karl-Schwarzschild-Str. 2, D-85748, Garching, Germany\ $^3$ Dipartimento di Astronomia, Università di Padova, Vicolo dell’Osservatorio, 2, I-35122 Padova, Italy\ $^4$ INAF - Osservatorio Astronomico di Bologna, via Ranzani 1, I-40127, Bologna, Italy\ $^5$ INAF - Osservatorio Astronomico di Trieste, Via Tiepolo 11, I-34131 Trieste, Italy\ $^6$ INAF - Osservatorio Astronomico di Roma, via dell’Osservatorio 2, Monteporzio, Italy [ **More than half of all stars in the local Universe are found in massive spheroidal galaxies$^{1}$, which are characterized by old stellar populations$^{2,3}$ with little or no current star formation. In present models, such galaxies appear rather late as the culmination of a hierarchical merging process, in which larger galaxies are assembled through mergers of smaller precursor galaxies. But observations have not yet established how, or even when, the massive spheroidals formed$^{2,3}$, nor if their seemingly sudden appearance when the Universe was about half its present age (at redshift $z \approx 1$) results from a real evolutionary effect (such as a peak of mergers) or from the observational difficulty of identifying them at earlier epochs. Here we report the spectroscopic and morphological identification of four old, fully assembled, massive ($>10^{11}$ solar masses) spheroidal galaxies at $1.6<z<1.9$, the most distant such objects currently known. 
The existence of such systems when the Universe was only one-quarter of its present age shows that the build-up of massive early-type galaxies was much faster in the early Universe than has been expected from theoretical simulations$^{4}$. ** ]{} In the $\Lambda$CDM scenario$^5$, galaxies are thought to build up their present-day mass through a continuous assembly driven by the hierarchical merging of dark matter halos, with the most massive galaxies being the last to form. However, when the most massive spheroidal galaxies were actually assembled is still an open question. The critical question is whether these galaxies do exist in substantial number$^{8,9}$ at earlier epochs, or if they were assembled later$^{10,11}$ as favored by most renditions of the hierarchical galaxy formation scenario$^{4}$. The problem is complicated also by the difficulty of identifying such galaxies due to their faintness and, for $z>1.3$, the lack of strong spectral features in optical spectra, placing them among the most difficult targets even for the largest optical telescopes. For example, while star-forming galaxies are now routinely found up to $z\sim6.6$$^{12}$, the most distant spectroscopically confirmed old spheroid is still a radio–selected object at $z=1.552$ discovered almost a decade ago$^{13,14}$. One way of addressing the critical question of massive galaxy formation is to search for the farthest and oldest galaxies with masses comparable to the most massive galaxies in the present-day universe ($10^{11-12}$ M$_{\odot}$), and to use them as the “fossil” tracers of the most remote events of galaxy formation. As the rest-frame optical – near-infrared luminosity traces the galaxy mass$^{15}$, the $K_s$-band ($\lambda \sim 2.2\,\mu$m in the observer frame) allows a fair selection of galaxies according to their mass up to $z\sim 2$. Following this approach, we recently conducted the K20 survey$^{16}$ with the Very Large Telescope (VLT) of the European Southern Observatory (ESO).
Deep optical spectroscopy was obtained for a sample of 546 objects with $K_s<20$ (Vega photometric scale) and extracted from an area of 52 arcmin$^2$, including 32 arcmin$^2$ within the GOODS–South field $^{17}$ (hereafter the GOODS/K20 field). The spectroscopic redshift ($z_{spec}$) completeness of the K20 survey is 92%, while the available multi-band photometry ($BVRIzJHK_s$) allowed us to derive the spectral energy distribution (SED) and photometric redshift ($z_{phot}$) of each galaxy. The K20 survey spectroscopy was complemented with the ESO/GOODS public spectroscopy (Supplementary Table 1). The available spectra within the GOODS/K20 field were then used to search for old, massive galaxies at $z>1.5$. We spectroscopically identified four galaxies with $18 \lesssim K_s \lesssim 19$ and $1.6 \lesssim z_{spec} \lesssim 1.9$ which have rest-frame mid-UV spectra with shapes and continuum breaks compatible with being dominated by old stars, and $R-K_s \gtrsim 6$ (the colour expected at $z>1.5$ for old passively evolving galaxies due to the combination of old stellar populations and k-correction effects$^{9}$). The spectrum of each individual object allows a fairly precise determination of the redshift based on absorption features and on the overall spectral shape (Fig. 1). The co-added average spectrum of the four galaxies (Fig. 2–3) shows a near-UV continuum shape, breaks and absorption lines that are intermediate between those of a F2 V and a F5 V star$^{18}$, and typical of about 1-2 Gyr old synthetic stellar populations$^{19,20}$. It is also very similar to the average spectrum of $z\sim1$ old Extremely Red Objects$^7$ (EROs), and slightly bluer than that of the $z\sim0.5$ SDSS red luminous galaxies$^{21}$ and of the $z=1.55$ old galaxy LBDS 53w091$^{13}$. However, it is different in shape and slope from the average spectrum of $z\sim1$ dusty star-forming EROs$^7$.
The multi-band photometric SED of each galaxy was successfully fitted without the need for dust extinction, using a library of simple stellar population (SSP) models$^{19}$ with a wide range of ages, $Z=Z_{\odot}$ and a Salpeter IMF. This procedure yielded best-fitting ages of 1.0-1.7 Gyr and the mass-to-light ratios, and hence the stellar mass of each galaxy, which lies in the range of 1–3$\times 10^{11}$ $h_{70}^{-2}$ M$_{\odot}$. $H_0=70$ km s$^{-1}$ Mpc$^{-1}$ (with $h_{70} \equiv H_0/70$), $\Omega_{\rm m}=0.3$ and $\Omega_{\Lambda}=0.7$ are adopted. In addition to spectroscopy, the nature of these galaxies was investigated with the fundamental complement of [*Hubble Space Telescope*]{}+ ACS ([*Advanced Camera for Surveys*]{}) imaging from the GOODS public [*Treasury Program*]{}$^{17}$. The observations are summarized in Table 2 and Fig. 4. Besides pushing to $z\sim1.9$ the identification of the highest redshift elliptical galaxy, these objects are very relevant for understanding the evolution of galaxies in general for three main reasons: their old age, their high mass, and their substantial number density. Indeed, an average age of about 1-2 Gyr ($Z=Z_{\odot}$) at $<\! z\!>\sim 1.7$ implies that the onset of the star formation occurred no later than $z\sim 2.5-3.4$ ($z\sim 2-2.5$ for $Z=2.5Z_{\odot}$). These are strict lower limits because they follow from assuming instantaneous bursts, whereas a more realistic, prolonged star formation activity would push the bulk of their star formation to an earlier cosmic epoch. As an illustrative example, the photometric SED of ID 646 ($z=1.903$) can be reproduced (without dust) with either a $\sim$1 Gyr old instantaneous burst that occurred at $z \sim 2.7$, or with a $\sim$2 Gyr old stellar population with a star formation rate declining as $\exp(-t/ \tau)$ ($\tau=0.3$ Gyr). In the latter case, the star formation onset would be pushed to $z \sim 4$ and half of the stars would be formed at $
--- abstract: 'Recently, Odrzywolek and Rafelski [@exoplanetclass] have found three distinct categories of exoplanets when they are classified based on density. We first carry out a similar classification of exoplanets according to their density using the Gaussian Mixture Model, followed by information theoretic criteria (AIC and BIC) to determine the optimum number of components. Such a one-dimensional classification favors two components using AIC and three using BIC, but neither test is statistically decisive enough to pick the best model between two and three components. We then extend this GMM-based classification to two dimensions by using both the density and the Earth similarity index [@Kashyap], which is a measure of how similar each planet is to the Earth. For each dimension we use two components.' address: - '$^1$Dept. of …' - '$^2$Dept. of …' (See Ref. [@Rice] for a recent review and Ref. [@Wei18] for a summary of exoplanet detection techniques; see also Ref. [@lunine].) Recently, Odrzywolek and Rafelski [@exoplanetclass] (hereafter OR16) have carried out the classification of exoplanets according to their density, following a suggestion made long ago [@Weisskopf]. OR16 fitted the exoplanet density data to lognormal distributions to determine the optimum number of components. They found three lognormal components with peak densities at 0.71 $~\rm{gm/cm^3}$, 6.9 $~\rm{gm/cm^3}$, and 29.1 $~\rm{gm/cm^3}$ [@exoplanetclass]. These three components correspond to ice/gas giants, iron/rock super-Earths, and brown dwarfs respectively. The optimum number of components was determined by maximizing the log-likelihood and then checking the goodness of fit for different numbers of components by calculating the $p$-value from three distinct non-parametric tests. 
We would like to do a variant of the above analysis by carrying out a similar classification according to density using Gaussian mixture models, followed by information theoretic criteria to determine the optimum number of classes. We have previously used this procedure to perform a unified classification of all GRB datasets using three different model comparison techniques [@Kulkarni]. We then extend the analysis of OR16 by considering two-dimensional classification using both the density and the Earth similarity index. The paper is structured as follows. In Section \[sec:data\], we describe the dataset and the physical quantities used for the classification. The mathematical basis for the classification is discussed in Section \[sec:analysis\]. Our results are shown in Section \[sec:results\] and we conclude in Section \[sec:conclusions\]. Dataset {#sec:data} =================== We obtain the mass and radius information from the catalogs uploaded on the NASA Exoplanet archive[^1] and the Extrasolar planet encyclopedia[^2] as of **February 18, 2017**. From these datasets, we consider only those planets with measured values of mass and density, and which exist in both the datasets with the same observed values, to avoid any irregularities and to maintain consistency in the dataset. The NASA Exoplanet archive is a NASA funded public data service, which is hosted by the Infrared Processing and Analysis Center. This catalog lists only those objects for which the detection and planetary status is sacrosanct. As of Feb 18, 2017, it contained a total of 3440 planets, out of which 531 have measured mass and radius values. Most of the planets listed in this catalog have been detected using transit photometry. The Extrasolar planet encyclopedia is maintained by the Meudon Observatory in Paris and as of Feb 18, 2017 contained a total of 3567 planets (most of which were also detected using transit photometry), of which 615 have measured values for all the parameters. 
The two catalogs use different listing criteria. The Extrasolar planet encyclopedia allows planets weighing up to 60 Jupiter masses, whereas the NASA Exoplanet archive uses 30 Jupiter masses as the upper limit, which is also the reason for the smaller number of confirmed exoplanets in the latter. The latter, however, subjects candidate objects to greater scrutiny before they are considered valid. Therefore, in order to obtain a gold sample, we have selected 450 planets, which are common to both the datasets, for our study in this paper. Both the datasets used for this analysis as well as the code which looks for common planets between the two catalogs have been uploaded on [github]{} and can be found at <https://github.com/IITH/Exoplanet-Classification>\ In addition to the one-dimensional classification using only density, we also carry out a two-dimensional classification, wherein we use both the density and the Earth Similarity Index (or ESI) [@Kashyap] for the classification. For this, we need some additional parameters for the calculation of ESI. The additional parameters that we need, apart from the radius and density, are the surface temperature and the time period of revolution, as the other parameters can be derived from the mass and radius. The escape velocity and surface gravity are calculated by positing that the shape of the planet is a perfect sphere, wherein the total mass is distributed uniformly throughout the volume. We only consider planets for which we have the observed values for all four of these parameters. 
Calculations for the data: -------------------------- Assuming the planet is a perfect sphere with a uniform mass distribution, the expression for density is: $$\bar{\rho} = \frac{M}{\frac{4}{3}\pi R^{3}}$$ The escape velocity is given by: $$v_{esc} = \sqrt{\frac{2GM}{R}}.$$ The surface gravity is obtained from: $$g_{surf} = \frac{GM}{R^2}.$$ where $G$ is the Gravitational Constant, $M$ is the mass of the planet and $R$ is the radius.\ ESI is a figure of merit used to ascertain how habitable a planet is for life to develop, compared to the Earth. More details on the theory behind ESI can be found in the work by Kashyap [@Kashyap], which in turn follows the prescription from Schulze-Makuch et al. [@Schulze] (see also  [@Moya] for alternate indices proposed in a similar spirit to ESI). The ESI is based on six different parameters, viz. density, radius, temperature, surface gravity, escape velocity, and the time period of revolution around the parent star. All these parameters are normalized to Earth units, as it is convenient for the index calculation. The ESI is calculated based on the Bray-Curtis Similarity index [@Bray] and is given by: $$ESI_{x} = \left(1- \left|\frac{x-x_0}{x+x_0}\right|\right)^w \label{eq:ESI}$$ where $x$ is the parameter for which the index has to be calculated, $x_0$ is the reference value, which in our case is one, as we have expressed all parameters in Earth units, and $w$ is the weight exponent. The total ESI is given by: $$ESI = \left(ESI_{g}\times ESI_{temp}\times ESI_{vesc}\times ESI_{p}\times ESI_{r}\times ESI_{d}\right)^{1/6}$$ The values of ESI range from 0 (completely different from Earth) to 1 (resembling a clone of Earth). Analysis Methods: {#sec:analysis} ================= We outline the method used for both the one-dimensional classification using density and the two-dimensional classification using density and ESI. 
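For illustration, the derived quantities and the ESI combination above can be sketched as follows (our own code, not the paper's pipeline; note that we set every weight exponent $w$ to 1, whereas the actual prescription [@Schulze] assigns a specific weight to each parameter):

```python
import math

G = 6.674e-11  # gravitational constant (SI units)

def density(M, R):
    """Mean density of a uniform sphere of mass M and radius R."""
    return M / (4.0 / 3.0 * math.pi * R ** 3)

def escape_velocity(M, R):
    return math.sqrt(2.0 * G * M / R)

def surface_gravity(M, R):
    return G * M / R ** 2

def esi_term(x, x0=1.0, w=1.0):
    """Bray-Curtis-style similarity for a single parameter (Eq. ESI)."""
    return (1.0 - abs((x - x0) / (x + x0))) ** w

def esi(params):
    """Geometric mean of the six per-parameter indices; `params` holds
    (g, T, v_esc, period, radius, density) normalized to Earth = 1."""
    prod = 1.0
    for x in params:
        prod *= esi_term(x)
    return prod ** (1.0 / len(params))
```

By construction `esi([1, 1, 1, 1, 1, 1])` returns exactly 1 for an Earth clone, and each factor falls toward 0 as a parameter departs from the Earth value.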
For finding the best-fit parameters, we use the Gaussian Mixture Model (GMM) [@astroml], which is part of the [Scikit-learn]{} package, used for a variety of machine learning applications in Python. The GMM fits the data to a mixture of $k$ Gaussian distributions, which are characterized by their means, covariances and their respective weights. The GMM method uses the Expectation Maximization (EM) algorithm [@EM] to maximize the likelihood function over the given parameter space. The GMM method can also be generalized to include error bars
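The fitting-plus-model-selection step can be sketched directly with [Scikit-learn]{}. This is a toy illustration of ours: the synthetic log-density data below stand in for the catalog, and the two generating components are our own stand-ins.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for log10(density): two well-separated clusters,
# loosely mimicking gas giants vs. rocky planets.
log_rho = np.concatenate([rng.normal(-0.15, 0.15, 300),
                          rng.normal(0.85, 0.20, 150)]).reshape(-1, 1)

# Fit GMMs with k = 1..5 components; EM maximizes the likelihood,
# then AIC/BIC penalize the extra parameters of larger k.
scores = {}
for k in range(1, 6):
    gmm = GaussianMixture(n_components=k, random_state=0).fit(log_rho)
    scores[k] = (gmm.aic(log_rho), gmm.bic(log_rho))

best_aic = min(scores, key=lambda k: scores[k][0])
best_bic = min(scores, key=lambda k: scores[k][1])
```

With clusters this well separated, both criteria recover two components; on real data with overlapping components the two criteria can disagree, as in the abstract.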
--- abstract: 'Synthetic ladders realized with one-dimensional alkaline-earth(-like) fermionic gases and subject to a gauge field represent a promising environment for the investigation of quantum Hall physics with ultracold atoms. Using density-matrix renormalization group calculations, we study how the quantum Hall-like chiral edge currents are affected by repulsive atom-atom interactions. We relate the properties of such currents to the asymmetry of the spin resolved momentum distribution function, a quantity which is easily addressable in state-of-art experiments. We show that repulsive interactions significantly stabilize the quantum Hall-like helical region and enhance the chiral currents. Our numerical simulations are performed for atoms with two and three internal spin states.' address: - '$^{1}$NEST, Scuola Normale Superiore & Istituto Nanoscienze-CNR, I-56126 Pisa, Italy' - '$^{2}$Scuola Normale Superiore, I-56126 Pisa, Italy' - '$^{3}$CNR - Istituto Nazionale di Ottica, UOS di Firenze LENS, I-50019 Sesto Fiorentino, Italy' - '$^{4}$The Abdus Salam International Centre for Theoretical Physics (ICTP), I-34151 Trieste, Italy' author: - | Simone Barbarino$^{1}$, Luca Taddia$^{2,3}$, Davide Rossini$^{1}$,\ Leonardo Mazza$^{1}$, and Rosario Fazio$^{4,1}$ title: 'Synthetic gauge fields in synthetic dimensions: Interactions and chiral edge modes' --- Introduction ============ One of the most noticeable hallmarks of topological insulators is the presence of robust [*gapless edge modes*]{} [@topins]. Their first experimental observation goes back to the discovery of the quantum Hall effect [@qhe], where the existence of chiral edge states is responsible for the striking transport properties of the Hall bars. 
The physics of edge states has recently emerged also in the arena of ultracold gases [@Atala_2014; @Mancini_2015; @Stuhl_2015], triggered by the new exciting developments in the implementation of topological models and synthetic gauge potentials for neutral cold atoms [@Dalibard_2011; @Struck_2012; @Hauke_2012; @Goldman_2013; @Goldman_2014]. Synthetic gauge potentials in cold atomic systems have already led to the experimental study of Bose-Einstein condensates coupled to a magnetic field [@Lin_2009] or with an effective spin-orbit coupling [@Lin_2011], and more recently to lattice models with non-zero Chern numbers [@Aidelsburger_2013; @Miyake_2013; @Jotzu_2014; @Aidelsburger_2014] and frustrated ladders [@Atala_2014]. In a cold-gas experiment, the transverse dimension of a two-dimensional setup does not need to be a *physical* dimension, i.e. a dimension in real space: an extra *synthetic* dimension on a given *d*-dimensional lattice can be engineered taking advantage of the internal atomic degrees of freedom [@Boada_2012]. The crucial requirement is that each internal state has to be coupled to two other states in a sequential way through, for example, proper Raman transitions induced by two laser beams. In this situation, it is even possible to generate gauge fields in synthetic lattices [@Celi_2014]. In this work we focus on one-dimensional systems with a finite synthetic dimension coupled to a synthetic gauge field, i.e. *frustrated ladders*. The study of such ladders traces back more than thirty years, to when frustration and commensurate-incommensurate transitions were addressed in Josephson networks [@kardar1; @kardar2]. Thanks to the experimental advances with optical lattices, these systems are now enjoying a revival of activity. Both bosonic (see, e.g., Refs. [@dhar; @petrescu; @grudsdt; @piraud; @tokuno]) and fermionic (see, e.g., Refs. 
[@roux; @sun; @Barbarino_2015; @Zeng_2015; @Cornfeld_2015; @Mazza_2015; @Budich_2015; @Lacki_2015]) systems have been considered. The emerging phenomenology is amazingly rich, ranging from new phases with chiral order [@dhar] to vortex phases [@piraud] or fractional Hall-like phases in fermionic systems [@Barbarino_2015; @Cornfeld_2015], just to give some examples. Very recently, two experimental groups [@Mancini_2015; @Stuhl_2015] have observed persistent spin currents in one-dimensional gases of $^{173}$Yb (fermions) and $^{87}$Rb (bosons) determined by the presence of such a gauge field. Within the framework of the synthetic dimension, such *helical* spin currents can be regarded as the *chiral* edge states of a two-dimensional system and are reminiscent of the edge modes of the Hall effect. Up to now, the study of edge currents in optical lattices has mainly focused on aspects related to the single-particle physics, and a systematic investigation of the interaction effects is missing. [Repulsive]{} interactions considerably affect the properties of the edge modes: this is well known in condensed matter, where the fractional quantum Hall regime [@fqhe] can be reached for proper particle fillings and for sufficiently strong Coulomb interactions. In view of the new aforementioned experiments in bosonic [@Stuhl_2015] and fermionic [@Mancini_2015] atomic gases, a deeper understanding of the role of repulsive interactions in these setups is of the utmost importance. Here we model the experiment on the frustrated $n$-leg ladder performed in Ref. [@Mancini_2015] and analyze, by means of density-matrix renormalization group (DMRG) simulations, how atom-atom repulsive interactions modify the edge physics of the system (in this article we disregard the effects of a harmonic confinement and of the temperature). We concentrate on the momentum distribution function, which has been used in the experiment to indirectly probe the existence of the edge currents. 
The aims of this work are twofold. First, we want to present numerical evidence that helical modes, reminiscent of the chiral currents of the integer quantum Hall effect, can be stabilized by repulsive interactions. Second, we want to discuss the influence of interactions on experimentally measurable quantities that witness the chirality of the modes. In this context the words “chiral” and “helical” can be interchanged, depending on whether one considers a truly one-dimensional system with an internal degree of freedom or a synthetic ladder. There is an additional important point to be stressed when dealing with synthetic ladders in the presence of interactions. The many-body physics of alkaline-earth(-like) atoms (like Ytterbium) with nuclear spin $I$ larger than $1/2$ is characterized by a SU($2I+1$) symmetry [@Gorshkov_2010; @Cazalilla_2014]. When they are viewed as ($2I+1$)-leg ladders, the interaction is strongly anisotropic, i.e. it is short-range in the physical dimension and long-range in the synthetic dimension. This situation is remarkably different from [the typical]{} condensed-matter systems and may lead to quantitative differences especially when considering narrow ladders, as in Ref. [@Mancini_2015]. The paper is organized as follows. In the next section we introduce the model describing a one-dimensional gas of alkaline-earth(-like) atoms with nuclear spin $I\geq 1/2$. Following Ref. [@Mancini_2015], we briefly explain how this system can be viewed as a ($2I+1$)-leg ladder. Moreover, we present a discussion of the single-particle spectrum to understand the main properties of the edge currents in the non-interacting regime and to identify the regimes where the effects of repulsive interactions are most prominent. Then, in Sec. \[obs-sec\] we introduce two quantities, evaluated by means of the DMRG algorithm, that characterize the edge currents: the (spin-resolved) momentum distribution function and the average current derived from it. In Sec. 
\[results\] we present and comment on our results; we conclude with a summary in Sec. \[conclusions\]. Synthetic gauge fields in synthetic dimensions {#model} ============================================== The model --------- We consider a one-dimensional gas of fermionic alkaline-earth(-like) neutral atoms characterized by a large and tunable nuclear spin $I$, see Fig. \[ladder\](a). Based on the predictions of Ref. [@Gorshkov_2010], Pagano [*et al. *]{} have experimentally shown that, by conveniently choosing the populations of the nuclear-spin states, the number of atomic species can be reduced at will to $2\mathcal{I}+1$, giving rise to an effective atomic spin $\mathcal{I}\leq I$ [@Pagano_2014]. We stress that $I$ has to be a half-integer to enforce the fermionic statistics, while $\mathcal{I}$ can also be an integer, see Fig. \[ladder\](b). Moreover, as extensively discussed in Refs. [@Boada_2012; @Celi_2014], the system under consideration can both be viewed as a
--- abstract: 'We present two new constructions of quantum hash functions: the first based on expander graphs and the second based on extractor functions and estimate the amount of randomness that is needed to construct them. We also propose a keyed quantum hash function based on extractor function that can be used in quantum message authentication codes and assess its security in a limited attacker model.' author: - 'M. Ziatdinov' date: 'May 28, 2016' title: From Graphs to Keyed Quantum Hash Functions --- Introduction {#sec:introduction} ============ Quantum hash functions are similar to classical (cryptographic) hash functions and their security is guaranteed by physical laws. However, their construction and applications are not fully understood. Quantum hashing has its roots in quantum fingerprinting. Then @Gavinsky2010 noticed that quantum fingerprinting can be used as a cryptoprimitive. However, binary quantum hash functions are not very suitable if we need group operations (and the group is not ${\mathbb{Z}}_{2^k}$). Hash functions based on group-theoretic constructions were proposed, e.g., by @Charles2009 and by @Tillich1994. @Ablayev2015 gave a definition and construction of non-binary quantum hash functions. @Ziatdinov2016 showed how to generalize quantum hashing to arbitrary finite groups. Recently, @Vasiliev2016 showed how quantum hash functions are connected with $\epsilon$-biased sets. Quantum hash functions map a classical message into a Hilbert space. Such space should be as small as possible, so an eavesdropper can’t read a lot of information about the classical message (this is guaranteed by physical laws, as the Holevo-Nayak theorem states). But images of different messages should be as far apart as possible, so the recipient can check whether hashes differ or not with high probability. Constructions of quantum hash functions are typically randomized: informally, we pick some random data, and the input is mixed with this random data. The amount of randomness used should be small. 
For example, random subsets suffice (for ${\mathbb{Z}}_m$) [@Ablayev2008], random codes suffice (for ${\mathbb{Z}}_2^n$) [@Buhrman2001], random automorphisms suffice (for any finite group) [@Ziatdinov2016]. @Vasiliev2015 used some heuristics to find the best subsets of ${\mathbb{Z}}_m$. However, these constructions require a considerable amount of randomness. We reduce the amount of randomness needed to define a quantum hash function to $O(\log |G| \log \log |G|)$ in the expander-based quantum hash function. The extractor-based quantum hash function allows us to introduce a notion of a keyed quantum hash function. It can be used, for example, in quantum message authentication codes. Unlike [@Barnum2001] and [@Barnum2002] we use classical keys and authenticate classical messages. Unlike [@Curty2001] we authenticate whole messages, not single bits. However, our security analysis considers only a limited attacker. The paper is organized as follows. In Section \[sec:expander-qhf\] we present a quantum hash function based on expander graphs. An extractor is a generalization of an expander graph. In Section \[sec:keyed-qhf\] we propose a keyed quantum hash function based on extractors and assess its security against a limited attacker. I thank Farid Ablayev, Alexander Vasiliev and Marco Carmosino for helpful discussions. A part of this research was done while attending a Special Semester Program on Computational and Proof Complexity (April-June 2016) organized by Chebyshev Laboratory of St.Petersburg State University in cooperation with Skolkovo Institute of Science and Technology and Steklov Institute of Mathematics at St.Petersburg. Partially supported by Russian Foundation for Basic Research, Grants 14-07-00557, 15-37-21160. The work is performed according to the Russian Government Program of Competitive Growth of Kazan Federal University. Definitions =========== Let us recall some basic definitions. We first define statistical distance. 
We say that two distributions $F$ and $G$ are $\epsilon$-close, if for every event $A$, $|\Pr[F \in A] - \Pr[G \in A]| \le \epsilon$. The support of a distribution $X$ is ${\mathrm{Supp}}(X) = \{ x : \Pr[X = x] > 0 \}$. The uniform distribution over ${\{0,1\}}^m$ is denoted by $U_m$ and we say that $X$ is $\epsilon$-close to uniform if it is $\epsilon$-close to $U_m$. If $F$ and $G$ are $\epsilon$-close, we write $F {\overset{\epsilon}{\approx}} G$. We also use a standard definition of the min-entropy. Let $X$ be a distribution. The min-entropy of $X$ is $H_\infty(X) = \min_{x \in {\mathrm{Supp}}(X)} \log \frac 1 {\Pr[X=x]}$. Quantum model of computation ---------------------------- We use the following model of computation. Recall that a qubit ${\left| \Psi \right\rangle}$ is a superposition of basis states ${\left| 0 \right\rangle}$ and ${\left| 1 \right\rangle}$, i.e. ${\left| \Psi \right\rangle} = \alpha{\left| 0 \right\rangle} + \beta{\left| 1 \right\rangle}$, where $\alpha, \beta \in {\mathbf{C}}$ and $|\alpha|^2 + |\beta|^2 = 1$. So, a qubit ${\left| \Psi \right\rangle} \in {\mathcal{H}^2}$, where ${\mathcal{H}^2}$ is a two-dimensional complex Hilbert space. Let $s \ge 1$. We denote the $2^s$-dimensional complex Hilbert space by ${({\mathcal{H}^2})^{\otimes s}}$: $${({\mathcal{H}^2})^{\otimes s}} = {\mathcal{H}^2}\otimes {\mathcal{H}^2}\otimes \ldots \otimes {\mathcal{H}^2}= \mathcal{H}^{2^s}$$ We denote a state ${\left| a_1 \right\rangle}{\left| a_2 \right\rangle}\ldots{\left| a_n \right\rangle}$, each $a_i \in {\{0,1\}}$, by ${\left| i \right\rangle}$, where $i$ is $\overline{a_1a_2\ldots a_n}$ in binary. For example, we denote ${\left| 1 \right\rangle}{\left| 1 \right\rangle}{\left| 0 \right\rangle}$ by ${\left| 6 \right\rangle}$. 
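As a concrete illustration (ours): over a finite support, the smallest $\epsilon$ for which two distributions are $\epsilon$-close is half their $\ell_1$ distance, and the min-entropy is determined by the most likely outcome.

```python
import math

def stat_distance(P, Q):
    """Smallest epsilon such that P and Q are epsilon-close
    (equals half the l1 distance between the probability vectors)."""
    support = set(P) | set(Q)
    return 0.5 * sum(abs(P.get(x, 0.0) - Q.get(x, 0.0)) for x in support)

def min_entropy(P):
    """H_inf(P) = min over Supp(P) of log2(1 / Pr[X = x])."""
    return min(-math.log2(p) for p in P.values() if p > 0)

U1 = {0: 0.5, 1: 0.5}   # uniform distribution over one bit
Q = {0: 0.75, 1: 0.25}  # a biased bit
```

Here `stat_distance(U1, Q)` is 0.25, so the biased bit is 0.25-close to uniform, and its min-entropy is $\log_2(4/3) \approx 0.415$ rather than 1.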
Computation is done by multiplying a state by a unitary matrix: ${\left| \Psi_1 \right\rangle} = U {\left| \Psi_0 \right\rangle}$, where $U$ is a unitary matrix: $U^\dagger U = I$, $U^\dagger$ is the conjugate transpose matrix and $I$ is the identity matrix. The density matrix of a mixed state $\{p_i, {\left| \psi_i \right\rangle}\}$ is a matrix $\rho = \sum_i p_i {\left| \psi_i \right\rangle}{\left\langle \psi_i \right|}$. A density matrix belongs to ${\mathrm{Hom}({({\mathcal{H}^2})^{\otimes s}},{({\mathcal{H}^2})^{\otimes s}})}$, the set of linear transformations from ${({\mathcal{H}^2})^{\otimes s}}$ to ${({\mathcal{H}^2})^{\otimes s}}$. At the end of the computation, the state is measured by a POVM (Positive Operator Valued Measure). A POVM on ${({\mathcal{H}^2})^{\otimes s}}$ is a collection $\{E_i\}$ of positive semi-definite operators $E_i \in {\mathrm{Hom}({({\mathcal{H}^2})^{\otimes s}},{({\mathcal{H}^2})^{\otimes s}})}$ that sums up to the identity transformation, i.e. $E_i \succeq 0$ and $\sum_i E_i = I$. Applying a POVM $\{E_i\}$ on a density matrix $\rho$ results in answer $i$ with probability $\operatorname{Tr}(E_i \rho)$. Character theory ---------------- Let $G$ be a group with unity $e$ and operation $\circ$. The character $\chi: G \to {\mathbb{C}}$ of the group $G$ is a homomorphism of $G$ to ${\mathbb{C}}$: for any $g, g' \in G$ it
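A toy numerical example of the measurement rule $\Pr[i] = \operatorname{Tr}(E_i \rho)$ (our sketch; a projective measurement is used as the simplest special case of a POVM):

```python
import numpy as np

ket0 = np.array([1.0, 0.0])
ket_plus = np.array([1.0, 1.0]) / np.sqrt(2.0)

# Mixed state: |0> with probability 1/2, |+> with probability 1/2.
rho = 0.5 * np.outer(ket0, ket0) + 0.5 * np.outer(ket_plus, ket_plus)

# Two-outcome POVM {E0, E1}: projectors onto |0> and |1>.
E0 = np.outer(ket0, ket0)
E1 = np.eye(2) - E0
assert np.allclose(E0 + E1, np.eye(2))  # completeness: sum_i E_i = I

# Outcome probabilities Tr(E_i rho): 3/4 for outcome 0, 1/4 for outcome 1.
probs = [np.trace(E0 @ rho).real, np.trace(E1 @ rho).real]
```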
--- abstract: 'We study how unique features of non-Hermitian lattice systems can be harnessed to improve Hamiltonian parameter estimation in a fully quantum setting. While the so-called non-Hermitian skin effect does not provide any distinct advantage, alternate effects yield dramatic enhancements. We show that certain asymmetric non-Hermitian tight-binding models with a $\mathbb{Z}_2$ symmetry yield a pronounced sensing advantage: the quantum Fisher information per photon increases exponentially with system size. We find that these advantages persist in regimes where non-Markovian and non-perturbative effects become important. Our setup is directly compatible with a variety of quantum optical and superconducting circuit platforms, and already yields strong enhancements with as few as three lattice sites.' author: - 'A. McDonald$^{1,2}$ and A. A. S. [@RMP_Spins]. It is interesting to ask whether distinct effects associated with non-Hermitian dynamics can also be used to improve sensors operating in quantum regimes [@Langbein_2018; @Kero_Nat_Comm; @Liang_2019; @Liu2019; @Murch_2019]. In purely classical settings, mode degeneracies specific to non-Hermitian systems (so-called exceptional points) have been suggested as a means for enhanced parametric sensing [@Wiersig_2014]. For example, a recent work suggests that particular kinds of non-Hermitian effects could also be useful in truly quantum settings [@Kero_Nat_Comm]. To date, both theory and experiment have focused on non-Hermitian sensing schemes that utilize at most a few coupled modes. It is however well known that unusual new phenomena appear when considering genuinely multi-mode non-Hermitian dynamics. The paradigmatic example is the so-called “non-Hermitian skin effect" [@Zhong_PRL_2018_1; @Lee_2016; @Alexander_2018], which occurs in several non-Hermitian tight-binding models [@Xiong_2017; @Thomale_2019; @Udea_PRX_2019]. 
In these systems, all eigenvalues and wavefunctions of the Hamiltonian exhibit a dramatic sensitivity to a change of boundary conditions. This extreme sensitivity would seem to be a potentially powerful resource for parametric sensing [@Schomerus_2020]. ![(a) Basic lattice sensor: two $N$-site non-Hermitian tight binding chains, each with opposite chirality. Each chain has asymmetric hopping: for the top (bottom) chain, hopping to the right is a factor of $e^{2A}$ larger (smaller) than hopping to the left. The two lattices are only coupled via a weak symmetry breaking perturbation $\epsilon$ on the rightmost site; the goal is to estimate $\epsilon$. A signal entering the top X chain induces an exponentially large output in the bottom P chain, but only if $\epsilon \neq 0$. (b) An array of bosonic cavities coupled via nearest neighbour hopping $w$ and coherent two-photon drive $\Delta$ with a small detuning $\epsilon$ on the last site. This provides a dissipation-free realization of the setup in (a), where the canonical quadratures $\hx$ and $\hp$ play the role of the top and bottom chains respectively. This system yields an exponentially enhanced SNR even when quantum noise effects are included. []{data-label="fig:Schematic"}](Model_3.pdf){width="45.00000%"} We study in detail Hamiltonian parameter estimation using a one-dimensional lattice model with asymmetric tunneling (akin to the well-studied Hatano-Nelson model [@Hatano_Nelson]). We find, somewhat surprisingly, that the non-Hermitian skin effect does not provide any advantage over more traditional sensing protocols. Rather, we find another distinct non-Hermitian mechanism that enables a dramatic enhancement of measurement sensitivity: the quantum Fisher information per photon exhibits an exponential scaling with system size. As we discuss, the underlying mechanism makes use of both non-reciprocity and an unusual kind of symmetry breaking. 
While our ideas are general, our analysis focuses on a system that uses parametric driving to realize non-Hermitian dynamics; this has the strong advantage of not requiring any external dissipation or post-selection [@Alexander_2018; @Yuxin_2019]. Further, we ultimately focus on dispersive sensing, where the parameter of interest shifts the frequency of a resonant mode. This is a ubiquitous sensing strategy, with applications ranging from superconducting qubit measurement [@Circuit_QED_PRA] to virus detection [@Vollmer2008]. Our proposal is also compatible with a number of different experimental platforms in superconducting quantum circuits and quantum optics, and ultimately requires one to make a standard homodyne measurement. We also consider physics that goes beyond the usual limit of strictly infinitesimal parameter sensing. We find that the exponential enhancement of measurement sensitivity persists even when considering limitations associated with the finite propagation time of a large lattice. Even for parameters large enough to invalidate a full linear response analysis, we find that our scheme provides a strong advantage: it achieves a square-root enhancement of the sensitivity (including noise effects). This is similar to what is found in exceptional point sensors in the absence of noise [@Wiersig_2014]. Finally, while our discussion focuses on large lattices, the results we present are already interesting in a small system consisting of just three coupled resonators. Ingredients for a non-Hermitian lattice sensor ============================================== Amplified non-reciprocal response in the Hatano-Nelson model ------------------------------------------------------------ A key feature that we will exploit in our new sensor is the dramatically large and uni-directional response exhibited by certain non-Hermitian lattice models: perturbing a single lattice site induces a large change at one end of the chain, but not the other (see e.g. 
[@Schomerus_2020; @Nunnenkamp_2019]). We start by providing a physically-transparent explanation of this effect, based on interpreting non-Hermitian asymmetry in tight-binding matrix elements as directional gain and loss. The simplest relevant system is the well-known Hatano-Nelson model [@Hatano_Nelson; @Hatano_Nelson_2]. This is a 1D tight-binding chain with asymmetric nearest-neighbour hoppings, $\hat{H} = i J \sum_n \left( e^A \ketbra{n+1}{n} - e^{-A} \ketbra{n}{n+1} \right) $, where $J, A$ are real and $\ket{n}$ is a position eigenket. The amplitude of a state $\ket{\psi}$ on site $n$ is $\braket{n}{\psi}$. While $A$ formally plays the role of an imaginary vector potential, it is more usefully thought of as an amplification factor. Assuming $A$ is positive for definiteness, Eq. (\[eq:Hatano-Nelson\]) describes a chain in which propagation to the right is amplified while propagation to the left is attenuated. With this picture in mind, the form of the real-space susceptibility (i.e. single particle Green’s function) $\chi(n,m;t)$ for a finite open chain has an intuitive form. Letting $\ket{m(t)} \equiv e^{-i \hat{H} t} \ket{m}$, one finds (see App. \[app:chi\_quadrature\]): $$\begin{aligned} \chi(n,m;t) & \equiv \braket{n}{m(t)} \label{eq:Susceptibility} = e^{A(n-m)} \chi_0(n,m;t). \end{aligned}$$ Here, $\chi_0(n,m;t)$ is the susceptibility matrix when $A=0$, i.e. the Green’s function of a Hermitian tight-binding chain. This quantity is reciprocal, in the sense that $\chi_0(n,m;t) = (-1)^{m-n}\chi_0(m,n;t)$ (i.e. apart from a phase, there is no asymmetry in rightwards versus leftwards propagation). The Green’s function $\chi_0(n,m;t)$ both describes how particles propagate in the lattice, and also the response properties of the system (i.e. if you perturb site $
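The factorization above has a one-line explanation: $\hat H$ is related to the Hermitian $A=0$ chain by the (non-unitary) similarity transformation $S = \mathrm{diag}(e^{An})$, so the propagator inherits the factor $e^{A(n-m)}$. The following numerical check is our own sketch on a small open chain.

```python
import numpy as np
from scipy.linalg import expm

# chi(n,m;t) = <n| e^{-iHt} |m> for the open Hatano-Nelson chain
# H = iJ sum_n ( e^A |n+1><n| - e^{-A} |n><n+1| ).
N, J, A, t = 8, 1.0, 0.4, 1.3

def hatano_nelson(N, J, A):
    H = np.zeros((N, N), dtype=complex)
    for n in range(N - 1):
        H[n + 1, n] = 1j * J * np.exp(A)     # rightward hop (amplified)
        H[n, n + 1] = -1j * J * np.exp(-A)   # leftward hop (attenuated)
    return H

chi = expm(-1j * t * hatano_nelson(N, J, A))
chi0 = expm(-1j * t * hatano_nelson(N, J, 0.0))  # Hermitian reference chain

# Verify chi(n,m;t) = e^{A(n-m)} chi_0(n,m;t).
n = np.arange(N)
gauge = np.exp(A * (n[:, None] - n[None, :]))
assert np.allclose(chi, gauge * chi0)
```

End-to-end propagation is non-reciprocal by the factor $e^{2A(N-1)}$: `abs(chi[-1, 0]) / abs(chi[0, -1])` is exponentially large in the chain length.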
--- abstract: 'A *binary frame template* is a device for creating binary matroids from graphic or cographic matroids. Such matroids are said to *conform* or *coconform* to the template. We introduce a preorder on these templates and determine the nontrivial templates that are minimal with respect to this order. As an application of our main result, we determine the eventual growth rates of certain minor-closed classes of binary matroids, including the class of binary matroids with no minor isomorphic to $PG(3,2)$. Our main result applies to all highly-connected matroids in a class, not just those of maximum size. As a second application, we characterize the highly-connected 1-flowing matroids.' address: - | Department of Mathematics\ Louisiana State University\ Baton Rouge, Louisiana - | Department of Mathematics\ Louisiana State University\ Baton Rouge, Louisiana author: - Kevin Grace - 'Stefan H. M. van Zwam' title: Templates for Binary Matroids --- [^1] Introduction ============ Geelen, Gerards, and Whittle  [@ggw15] recently announced a structure theorem describing the highly connected members of any proper minor-closed class of matroids representable over a given finite field. In this paper we study some consequences of their result. To state their result, we need some definitions. A matroid $M$ is *vertically $k$-connected* if, for each partition $(X,Y)$ of the ground set of $M$ with $r(X)+r(Y)-r(M)<k-1$, either $X$ or $Y$ is spanning. We denote the unique prime subfield of $\mathbb{F}$ by $\mathbb{F}_{\textnormal{prime}}$. We say that a matroid $M_2$ is a *rank-$(\leq t)$ perturbation* of a matroid $M_1$ if there exist matrices $A_1$ and $A_2$ over $\mathbb{F}$ such that $r(M(A_1-A_2))\leq t$ and such that $M_1\cong M(A_1)$ and $M_2\cong M(A_2)$. The following theorem is a version of a result of Geelen, Gerards, and Whittle [@ggw15 Theorem 3.3]. \[ggw3.3\] Let $\mathbb{F}$ be a finite field and let $m_0$ be a positive integer. 
Then there exist $k,n,t\in\mathbb{Z}_+$ such that, if $M$ is a matroid representable over $\mathbb{F}$ such that $M$ or $M^*$ is vertically $k$-connected and such that $M$ has an $M(K_n)$-minor but no $PG(m_0-1,\mathbb{F}_{\textnormal{prime}})$-minor, then $M$ is a rank-$(\leq t)$ perturbation of a frame matroid representable over $\mathbb{F}$. Let us consider a very simple example of a rank-1 perturbation. Let $A_1$ be the binary matrix $$\begin{bmatrix} 1&0&0&0&1&1&1&0&0&0\\ 0&1&0&0&1&0&0&1&1&0\\ 0&0&1&0&0&1&0&1&0&1\\ 0&0&0&1&0&0&1&0&1&1\\ \end{bmatrix},$$ and let $A_2$ be the binary matrix $$\begin{bmatrix} 0&1&1&1&1&1&1&0&0&0\\ 1&0&1&1&1&0&0&1&1&0\\ 0&0&1&0&0&1&0&1&0&1\\ 0&0&0&1&0&0&1&0&1&1\\ \end{bmatrix}.$$ Note that $A_2$ is the result of adding the rank-1 matrix $$\begin{bmatrix} 1&1&1&1&0&0&0&0&0&0\\ 1&1&1&1&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ 0&0&0&0&0&0&0&0&0&0\\ \end{bmatrix}$$ to $A_1$. Therefore, the vector matroid $M(A_2)$ is a rank-1 perturbation of $M(A_1)$. Theorem \[ggw3.3\] is essentially a simplified version of a much more complex structure theorem [@ggw15 Theorem 4.2]. Our focus in this paper is on the binary case. Roughly speaking, a binary frame template can be thought of as a recipe for constructing a representable matroid from a graphic or cographic matroid. A matroid constructed in this way is said to *conform* or *coconform* to the template. In the example above, we may think of $M(A_2)$ as the matroid obtained from the vector matroid of the following matrix by contracting the element indexing the final column.
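The rank-1 perturbation in this example is easy to verify mechanically. The following sketch (illustrative only) checks over GF(2) that $A_2 = A_1 + P$ for the displayed rank-1 matrix $P$:

```python
import numpy as np

A1 = np.array([[1,0,0,0,1,1,1,0,0,0],
               [0,1,0,0,1,0,0,1,1,0],
               [0,0,1,0,0,1,0,1,0,1],
               [0,0,0,1,0,0,1,0,1,1]])

P = np.array([[1,1,1,1,0,0,0,0,0,0],
              [1,1,1,1,0,0,0,0,0,0],
              [0,0,0,0,0,0,0,0,0,0],
              [0,0,0,0,0,0,0,0,0,0]])

A2 = np.array([[0,1,1,1,1,1,1,0,0,0],
               [1,0,1,1,1,0,0,1,1,0],
               [0,0,1,0,0,1,0,1,0,1],
               [0,0,0,1,0,0,1,0,1,1]])

# Addition over GF(2) is addition mod 2:
assert np.array_equal((A1 + P) % 2, A2)

# P has rank 1 over GF(2): all of its nonzero rows are equal.
nonzero_rows = [tuple(row) for row in P if row.any()]
assert len(set(nonzero_rows)) == 1
```

Both checks pass, confirming that $M(A_2)$ differs from $M(A_1)$ by a perturbation of rank exactly 1.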
Note that the large submatrix on the bottom left is $A_1$: $$\left[ \begin{array}{@{}cccccccccc|c@{}} 1&1&1&1&0&0&0&0&0&0&1\\ \hline 1&0&0&0&1&1&1&0&0&0&1\\ 0&1&0&0&1&0&0&1&1&0&1\\ 0&0&1&0&0&1&0&1&0&1&0\\ 0&0&0&1&0&0&1&0&1&1&0\\ \end{array} \right]$$ In fact, for any matrix $A$ of the following form, where $v$ and $w$ are arbitrary binary vectors, the matroid $M(A)/c$ conforms to the template $\Phi_{CX}$, which we will define in Section \[Reducing a Template\]: $$\left[ \begin{array}{@{}c|c@{}} v & 1\\ \hline \text{incidence matrix of a graph} & w\\ \end{array} \right]$$ Let $\mathcal{M}(\Phi)$ denote the set of matroids representable over a field $\mathbb{F}$ that conform to a frame template $\Phi$. Theorem \[ggwframe\] below is a slight modification of [@ggw15 Theorem 4.2]; the relevant definitions are given in Section \[Preliminaries\]. \[ggwframe\] Let $\mathbb{F}$ be a finite field, let $m$ be a positive integer, and let $\mathcal{M}$ be a minor-closed class of matroids representable over $\mathbb{F}$. Then there exist $k,l\in \mathbb{Z}_+$ and frame templates $\Phi_1,\dots, \Phi_s, \Psi_1,\dots, \Psi_t$ such that - $\mathcal{M}$ contains each of the classes $\mathcal{M}(\Phi_1),\dots,\mathcal{M}(\Phi_s)$, - $\mathcal{M}$ contains the duals of the matroids in each of the classes $\mathcal{M}(\Psi_1),\dots,\mathcal{M}(\Psi_t)$, and - if $M$ is a simple vertically $k$-connected member of $\mathcal{M}$ with at least $l$ elements and with no $PG(m-1,\mathbb{F}_{\textnormal{prime}})$ minor, then either $M$ is a member of at least one of the classes $\mathcal{M}(\Phi_1),\dots,\mathcal{M}(\Phi_s)$, or $M^*$ is a member of at least one of the classes $\mathcal{M}(\Psi_1),\dots,\mathcal{M}(\Psi_t)$. Our contribution is to shed some light on how these templates are related to each other. We examine the preorder of these templates. Our main result, Theorem \[minimal\], is a list of nontrivial binary frame templates that are minimal with respect to this preorder.
One application of this result involves growth rates of minor-closed classes of binary matroids. The *growth rate function* of a minor-closed class $\mathcal{M}$ is the function whose value at an integer $r\geq0$ is given by the maximum number of elements in a simple matroid in $\mathcal{M}$ of rank at most $r$. We prove that a minor-closed class of binary matroids has a growth rate that is eventually equal to the growth rate of the class of graphic matroids if and only if it contains all graphic matroids but does not contain the class of matro
--- abstract: '[We report a strong thickness dependence of the complex frequency-dependent optical dielectric function $\widetilde{\epsilon}(\omega)$ over a spectral range from 1.24 to 5 eV in epitaxial CaMnO$_3$(001) thin films on SrTiO$_3$(001), LaAlO$_3$(001), and SrLaAlO$_4$(001). A doubling of the peak value of the imaginary part of $\widetilde{\epsilon}(\omega)$ and spectral shifts of 0.5 eV for a given magnitude of absorption are observed. On the basis of experimental analyses and first-principles density functional theory calculations, contributions from both surface states and epitaxial strain to the optical dielectric function of CaMnO$_3$ are seen. Its evolution with thickness from 4 to 63 nm has several regimes. In the thinnest, strain-coherent films, the response is characterized by a significant contribution from the free surface that dominates strain effects. However, at intermediate and larger thicknesses approaching the bulk-like film, strain coherence and partial strain relaxation coexist and influence $\widetilde{\epsilon}(\omega)$. ]{}' Epitaxial strain is a powerful route for engineering the properties of complex oxide thin films [@bhattacharjee2009engineering]. The lattice distortions imposed by epitaxial strain can introduce dramatic changes in the properties of thin-film materials, e.g. allowing strong ferroelectric ordering in quantum paraelectrics [@haeni2004room], manipulation of transition temperatures in ferroelectrics [@strain_FE_1], tuning of magnetic and metal-insulator transitions in mixed-valence perovskite oxides [@strain_MIT_1; @strain_MIT_2], and controlling the volume of the magnetic phase in magnetically inhomogeneous media [@phase_separ_1]. Owing to the direct relationship between the electronic structure and optical properties, epitaxial strain strongly influences dielectric constants, refractive indices and, ultimately, the bandgap of a thin film material [@singh2014strain; @scafetta2014band; @liu2013strain; @scafetta2013optical].
However, such studies for oxides are still scarce, and the roles of chemistry, structure, native defects (oxygen vacancies), film thickness and surface effects (termination, admolecules, structural reconstruction) have yet to be clarified. In the bulk, the electronic structure and optical properties of CaMnO$_3$ (CMO) are well studied [@jung1997determination; @loshkareva2004electronic; @molinari2014structural], yet little attention has been given to the optical properties of epitaxial CMO thin films. CMO has long been known as a parent compound of the mixed-valence manganites that exhibit colossal magnetoresistance (CMR), and it has recently attracted renewed interest [@CMO_FE]. Similar to other perovskite oxides, CMO can easily accommodate oxygen vacancies, which allow the material to demonstrate modest electrocatalytic activity [@CMO_catalyst_1; @CMO_catalyst_2]. Still, little is known about the changes in the electronic structure in thin and ultrathin CMO films despite a certain surge of interest in strain-mediated effects on the optical properties of perovskite oxides [@dejneka2010tensile; @liu2013strain; @scafetta2014band; @singh2014strain; @roy2012effects; @chernova2015strain; @Choi2014LASTO]. In addition, because surface reconstruction and termination become crucial in perovskite oxides only a few unit cells thick, ultrathin CMO films are expected to demonstrate optical properties that are distinct from those of thicker films [@saldana2015structural]. Here we report a pronounced thickness dependence of the complex frequency-dependent optical dielectric function $\widetilde{\epsilon}(\omega)$ in epitaxial CMO thin films. Epitaxial CMO thin films were grown by pulsed laser deposition (PLD) using a KrF excimer laser ($\lambda$ = 248 nm) on single-crystal (001)-oriented SrTiO$_3$ (STO), LaAlO$_3$ (LAO), and SrLaAlO$_4$ (SLAO) substrates purchased from Crystec [@crystec].
A laser repetition rate of 2.11 Hz and a laser energy density of 2.0 J/cm$^2$ were used. The substrate temperature was 650$^\circ$C and the background pressure was around 8.6$\times$10$^{-6}$ Torr. Films were deposited in an oxygen environment at 30 mTorr and, after the deposition, each sample was cooled to room temperature in an oxygen environment of around 300 Torr to reduce/eliminate oxygen vacancies in the films. The thicknesses of the films were measured by X-ray reflectivity (XRR) (Fig. 1(d)) and the film deposition rate was determined to be approximately 0.74 Å/pulse. Samples ranging from 4.1 to 63 nm in thickness were each grown on SrTiO$_3$(001), and films ranging in thickness from 4.3 to 10 nm were grown on LAO(001) and SLAO(001). Grazing incidence X-ray diffraction (GIXRD) [@Lee2011] (Fig. 1(a)) and reciprocal space mapping (RSM) were used to determine film lattice strain states by measuring the film in-plane lattice constant for thin ($<$20 nm) and thick ($>$20 nm) films, respectively. As can be seen in Fig. 1(a), the thinnest film (4.1 nm) shows no substrate peak broadening and no extra peak, indicating that the film is fully strained. In the 7.1 nm sample curve, the shoulder of the STO peak is evidence that the film starts to relax but is still mainly strained to the substrate. In the 10.4 nm sample curve, the absence of a shoulder at the STO peak position and the bump at the CMO(002) position show a mainly relaxed state. X-ray diffraction (Fig. 1(b)) was used to ensure the films were single-phase with no secondary phases and to measure the out-of-plane lattice parameter. Atomic force microscopy (AFM) (Fig. 1(c)) was used to confirm film surface quality. Five CMO film samples of different thicknesses ranging from 4.1 nm to 62.9 nm were grown on STO, and two additional films of $\sim$10 nm and $\sim$4 nm were grown, each on LAO and SLAO, to probe and disentangle the effects of thickness, strain and strain relaxation on $\widetilde{\epsilon}(\omega)$ of CMO.
The bulk in-plane lattice parameter of STO ($a_\text{STO}$ = 3.905 Å) is larger than those of LAO ($a_\text{LAO}$ = 3.790 Å) and SLAO ($a_\text{SLAO}$ = 3.754 Å), and all three exceed that of bulk CMO ($a_\text{CMO}$ = 3.72 Å). All films are therefore under in-plane tensile strain, largest on STO and smallest on SLAO. A summary of the parameters for each film, including substrate, thickness, measured in-plane lattice parameter $a_\parallel$, the corresponding in-plane strain, and surface roughness, is shown in Table I. Variable-angle spectroscopic ellipsometry (VASE) was performed at room temperature in ambient atmosphere with an electronically controlled rotating compensator and Glan-Taylor polarizers (J.A. Woollam, M2000). Measurements were performed at multiple angles between 65-75$^\circ$ and in the spectral range of 247 to 1000 nm with a resolution of 1.6 nm. Measurements of the components of linearly polarized reflectivity at each selected wavelength were used to obtain the ellipsometric parameters $\Psi$ and $\Delta$. To determine $\widetilde{\epsilon}(\omega)$ for CMO we assume a four-layer optical medium comprising vacuum, a surface roughness layer, a homogeneous isotropic film layer, and a semi-infinite substrate. The surface roughness layer is modeled using the Bruggeman effective medium approximation with 50% film and 50% void at the surface [@tompkins1999spectroscopic], with thickness approximately equal to the rms roughness for each sample obtained from XRR.
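From the bulk lattice parameters just quoted, the nominal misfit strain $(a_{\mathrm{sub}} - a_{\mathrm{CMO}})/a_{\mathrm{CMO}}$ of a coherently strained film follows directly. A back-of-the-envelope check (these are nominal misfits computed from the quoted bulk values, not the measured strains of Table I):

```python
a_cmo = 3.72  # bulk CMO pseudocubic lattice parameter (Angstrom)
substrates = {"STO": 3.905, "LAO": 3.790, "SLAO": 3.754}

# Nominal misfit strain of a fully coherent film on each substrate:
misfit = {name: (a_sub - a_cmo) / a_cmo for name, a_sub in substrates.items()}
for name, eps in misfit.items():
    print(f"{name}: {100 * eps:+.1f}%")  # positive sign => tensile strain
# The misfit is tensile on all three substrates, largest on STO
# (about +5.0%) and smallest on SLAO (about +0.9%).
```

This ordering is what makes the three substrates useful for disentangling strain from thickness effects: the same film thickness can be grown at three distinct tensile misfits.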
--- abstract: 'Ubiquitous sensing devices frequently disseminate data among themselves. The use of a distributed event-based system that decouples publishers from subscribers arises as an ideal candidate to implement the dissemination process. In this paper, we present a network architecture that merges the network and overlay layers of typical structured event-based systems. Directional random walks are used for the construction of this merged layer. Our strategy avoids using a specific network protocol that provides point-to-point communication. This implies that the topology of the network is not maintained, so that nodes not involved in the system are able to save energy and computing resources. We evaluate the performance of the overlay layer using directional random walks and pure random walks for its construction. Our results show that directional random walks are more efficient because: (1) they use fewer nodes of the network for the establishment of the active path of the overlay layer and (2) they have a more reliable performance. Furthermore, as the number of nodes in the network increases, so does the number of nodes in the active path of the overlay layer for the same number of publishers and subscribers. Finally, we find no correlation between the number of nodes that form the overlay layer and the maximum Euclidean distance traversed by the walkers.' author: - bibliography: - 'ref.bib' title: | Design of a Novel Network Architecture for Distributed Event-Based Systems\ Using Directional Random Walks in an Ubiquitous Sensing Scenario --- Distributed Event-Based Systems; Overlay Layer; Directional Random Walks; Pure Random Walks; Wireless Sensor Networks. Introduction {#sec:introduction} ============ Ubiquitous or pervasive computing [@FUTURECOMPUTING14DRW][@Cook:2012:RPC:2109687.2109848] uses many sources and destinations to gather and process data related to physical processes with the aim of enabling human-computer interaction.
In the process of dissemination, some devices generate the data, while others are waiting for the sensing data. In this context, the use of a distributed event-based system [@Muhl:2006:DES:1162246] arises as an ideal candidate to implement the model of communication based on the reception and transmission of events. The main characteristic of an event-based system is that publishers and subscribers are decoupled. This means that they do not have any information about each other. The element in charge of matching notifications with subscriptions is called the event notification service. In distributed networks, the event notification service is implemented using a network of broker nodes (see Figure \[PubSub\]). It is considered that a broker is any node in the network that has information about any single or set of subscriptions. The complexity of designing this type of system usually lies in how to elect the nodes that act as brokers, because of the decentralized nature of a distributed network. ![Distributed notification service using a network of brokers.](PubSub){width="2.75in"} \[PubSub\] We assume that all the nodes in the network are able to participate in it without the requirement to adopt the specific role of publisher or subscriber. Nodes that are actively participating in the network but do not take any specific role will be considered as part of the overlay layer. Those nodes of the overlay layer that are able to redirect messages will be considered as brokers. Event-based systems are classified as topic-based or content-based [@Muhl:2006:DES:1162246]. Topic-based systems take into account the subject of messages in order to match publications with subscriptions. Content-based systems use filters to specify the values of subscription attributes to redirect notifications. A filter is a boolean function that depends on the set of subscriptions.
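Such filters are often summarized at broker nodes with Bloom filters, which trade a small false-positive probability for constant-size state. A generic sketch of the data structure (not the implementation used in this work; sizes and hash choices are arbitrary):

```python
import hashlib

class BloomFilter:
    """Space-efficient set membership: false positives possible, false negatives not."""
    def __init__(self, m_bits=256, k_hashes=3):
        self.m, self.k = m_bits, k_hashes
        self.bits = 0  # bit array packed into a single integer

    def _positions(self, item):
        # Derive k bit positions from k salted SHA-256 digests.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest, "big") % self.m

    def add(self, item):
        for p in self._positions(item):
            self.bits |= 1 << p

    def might_contain(self, item):
        return all(self.bits & (1 << p) for p in self._positions(item))

# A broker summarizes its subscriptions in a filter and uses it to decide
# whether an incoming notification could match any of them:
broker = BloomFilter()
broker.add("temperature>30")
broker.add("humidity<20")
assert broker.might_contain("temperature>30")  # subscribed: always forwarded
# Unsubscribed attributes are, with high probability, filtered out.
```

Because membership queries never yield false negatives, a broker using such a filter never drops a notification that a subscriber actually wants; the cost is an occasional unnecessary forward.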
In our proposal, we plan to deal with a content-based system that uses Bloom filters at broker nodes in order to save memory resources and speed up routing decisions. Sensor networks frequently use tiny devices with limited battery capabilities, which make a Global Positioning System (GPS) unsuitable for disseminating information according to node coordinates. In addition to this, the use of virtual coordinates as a substitute for real coordinates requires sinks or landmarks to structure the network. For these reasons, the use of coordinates in an unstructured sensing scenario is not recommended. We assume that we work in an unstructured scenario in which no routing protocol provides communication between the nodes of the network. The constraints of the network infrastructure lead us to the design of a network architecture for distributed event-based systems that must use as few resources as possible (i.e., battery, memory, etc.). In this paper, we present a solution that avoids involving all the nodes of the network in the dissemination process by using a distributed notification service defined by Directional Random Walks (DRWs). The rest of this paper is organized as follows: Section \[sec:state\] analyzes the state of the art. Section \[sec:methodology\] points out the approach to solve the problem specified in this section. Section \[sec:research\] presents the research efforts already done for the approach specified in Section \[sec:methodology\]. Section \[sec:design\] details the process of construction of the proposed architecture. Section \[sec:evaluation\] evaluates the performance of our solution using DRWs, comparing it with the use of Pure Random Walks (PRWs). Finally, Section \[sec:conclusion\] summarizes our proposal.
State of the art {#sec:state} ================ Distributed and Structured Event-based Systems ---------------------------------------------- Distributed and structured event-based systems use three layers on top of a bottom layer (see Figure \[3layersdistributedusualsystems\]), which provides data link functionalities, to facilitate topology control: 1. The network layer is in charge of providing data forwarding between the different nodes involved in the network. A network protocol, such as the Multicast Ad-hoc On-demand Distance Vector (MAODV) [@Roy05securingmaodv:], is needed to provide point-to-point communication. 2. The middle layer is called the overlay layer. It is a virtual layer that builds the event notification service by providing a network of brokers that redirect notifications to the corresponding subscribers. 3. Finally, the event-based protocol is implemented on the top layer. ![Decomposition in layers of the typical design of a distributed and structured event-based system.](3layersdistributedusualsystems){width="2.45in"} \[3layersdistributedusualsystems\] One strategy to construct the overlay layer is to use a tree. In TinyMQ [@shi2011tinymq], which is designed specifically for wireless sensor networks, a multi-tree overlay layer is maintained. Another strategy is to clusterize the network and use cluster heads to manage messages, as in Mires [@souto2006mires], which is a middleware for sensor networks. The Gradient Landmark-Based Distributed Routing (GLIDER) [@Fang05glider:gradient] organizes the network using some defined landmarks to compute the Delaunay graph for network partition. Then, it shares the graph with the subscribers. A typical solution is to build the overlay layer using Distributed Hash Tables (DHTs). In these systems, a key is mapped to a particular node with storage location properties. In some cases, as in Pastry [@Pastry], keys are mapped to node identifiers.
In others, as in the Content Addressable Network (CAN) [@CAN], a region of the space is used to map a key. Some efforts have been made to apply this solution to sensor networks [@Fersi:2013:DHT:2429525.2429572], including DHT-based approaches such as [@SENSORNETS14M3]. Distributed and Unstructured Event-based Systems ------------------------------------------------ The main characteristic of distributed and unstructured event-based systems is that they do not maintain an overlay layer. This makes it easier to deal with network changes. These systems typically disseminate information by flooding or by random walks. Most of the algorithms proposed deal with the unstructured nature of wireless communications using flooding to build a tree. A typical solution is to use the On-Demand Multicast Routing Protocol (ODMRP) [@Lee:2002], which is based on the forwarding group concept. Groups are constructed and maintained periodically when a multicast source has data to send. This task is done by broadcasting the entire network with membership information. An extension of ODMRP has been proposed [@Yoneki:2004] to adapt it to content-based systems by adding subscriptions to Bloom filters. Trees may also be configured to self-repair in response to broker dynamicity [@Mottola:2008]. These solutions are reliable but increase the traffic of the network because they use flooding at some point. Flooding may also be used to continuously exchange subscription information, clusterizing the network [@Voulgaris06]. Then, notifications are sent to the appropriate cluster, improving the efficiency of the network. Other mechanisms, such as the combination of a DHT and random walks, can also be used [@Tian:2005]. Cluster heads manage
--- abstract: 'Let $S_k$ be the set of separable states on $\B(\C^m \otimes \C^n)$ admitting a representation as a convex combination of $k$ pure product states, or fewer. If $m>1, n> 1$, and $k \le \max{(m,n)}$, we show that $S_k$ admits a subset $V_k$ such that $V_k$ is dense and open in $S_k$, and such that each state in $V_k$ has a unique decomposition as a convex combination of pure product states, and we describe all possible convex decompositions for a set of separable states that properly contains $V_k$. In both cases we describe the associated faces of the space of separable states, which in the first case are simplexes, and in the second case are direct convex sums of faces that are isomorphic to state spaces of full matrix algebras. As an application of these results, we characterize all affine automorphisms of the convex set of separable states, and all automorphisms of the state space of $\B(\C^m \otimes \C^n)$ that preserve entanglement and separability.' A state on $\B(\C^m \otimes \C^n)$ is called *separable* if it is a convex combination of product states. States that are not separable are said to be entangled, and are of substantial interest in quantum information theory. Easily applied conditions for separability are known only for special cases, e.g., if $m = n = 2$ (or $m = 2$, $n = 3$), then a state is separable iff its associated density matrix has positive partial transpose. Other necessary and sufficient conditions are known, e.g. [@Horodeckis], but are not easily applied in practice. A state of the form $\omega \otimes \tau$ is called a product state, and every product state is separable. A product state $\omega \otimes \tau$ is a pure state iff $\omega$ and $\tau$ are pure states. Thus a separable state is precisely one that admits a representation as a convex combination of pure product states. It is natural to ask the extent to which this decomposition is unique. That question is the topic of the present article. For the full state space $K$ of $\B(\C^m \otimes \C^n)$ each non-extreme point can be decomposed into extreme points in many different ways.
But for the space $S$ of separable states the situation is totally different. While non-extreme points with many different decompositions exist (and are easy to find) in $S$ as well as in $K$, there are in $S$ also plenty of points for which the decomposition is unique. DiVincenzo, Terhal, and Thapliyal [@DiV] defined the *optimal ensemble cardinality* of a separable state $\rho$ to be $k$ if $k$ is the minimal number of pure product states required for a convex decomposition of $\rho$. Lockhart [@Lockhart] used the term “optimal ensemble length” for the same notion. For brevity, we will simply call this number the *length* of $\rho$, and we denote the set of separable states of length at most $k$ by $S_k$. We will show that for $k \le \max(m,n)$ there is a dense and open subset $V_k$ of $S_k$ whose members have unique decompositions. Actually, we exhibit such a set $V_k$ consisting of states with the property that each generates a face of $S$ which is a simplex, from which the uniqueness follows. We remark that the sets $V_k$ are open and dense in the relative topology on $S_k$, but are not open or dense in $S$ or $K$ if $mn > 1$. (See the remarks after Theorem \[cor4\].) Indeed it would be surprising if a subset of low rank separable states were open and dense in the set of all states of that rank, since low rank states are almost surely entangled [@RuskaiWerner; @WalgateScott], and in general $S$ has measure which is a decreasingly small fraction of the measure of $K$ as $m, n$ increase, cf. [@AubrunSzarek; @Szarek]. While dimensions are too high to be able to accurately visualize the above results, the reader may be curious about the relationship to the well known tetrahedron/octahedron picture for $m = n = 2$, cf. [@HorodeckiTetra]. In that picture, there is a subset $\mathcal{T}$ of states which is a tetrahedron, and which has the property that for every state $\rho$ which restricts to the normalized trace on $\B(\C^2) \otimes I$ and on $I \otimes \B(\C^2)$, there are unitaries $U$ and $V$ such that $(U\otimes V)^*\rho(U\otimes V) \in \mathcal{T}$.
The midpoints of the six edges of this tetrahedron are the vertices of an octahedron that consists of the separable states in $\mathcal{T}$. Each vertex of the octahedron is a convex combination of two distinct pure product states (which of course are not in $\mathcal{T}$), cf. [@HorodeckiTetra]. In fact, the vertices are the only states in the octahedron of length $\le \max(m,n)= 2$. It can be checked (e.g., by applying our Corollary \[cor3.9\]) that the decomposition of each of these vertices into pure product states is unique. Each state in the interior of this octahedron has rank $4 = mn$, so is an interior point of the full state space $K$, hence has a non-unique convex decomposition into pure product states (see the remarks after Theorem \[cor4\]). The tetrahedron also arises as a parameterization for a set of unital completely positive trace preserving maps from $M_2(\C)$ to $M_2(\C)$, with the octahedron consisting of the entanglement breaking maps in this set, cf. [@RuskaiKing Thm. 4] and [@RuskaiWerner Fig. 5].
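The positive-partial-transpose criterion invoked in this discussion for $m = n = 2$ is straightforward to apply numerically: transpose the second tensor factor of the density matrix and look for a negative eigenvalue. A small standard illustration (textbook material, not code from this paper):

```python
import numpy as np

def partial_transpose(rho):
    """Partial transpose on the second qubit of a 4x4 density matrix."""
    r = rho.reshape(2, 2, 2, 2)                   # indices (a, b, a', b')
    return r.transpose(0, 3, 2, 1).reshape(4, 4)  # swap b <-> b'

# Maximally entangled Bell state (|00> + |11>)/sqrt(2):
bell = np.zeros(4)
bell[0] = bell[3] = 1 / np.sqrt(2)
rho_bell = np.outer(bell, bell)

# Pure product state |0><0| tensor |0><0|:
rho_prod = np.zeros((4, 4))
rho_prod[0, 0] = 1.0

# For 2x2 (and 2x3) systems, separable <=> partial transpose is PSD:
assert np.linalg.eigvalsh(partial_transpose(rho_bell)).min() < 0       # entangled
assert np.linalg.eigvalsh(partial_transpose(rho_prod)).min() >= -1e-12  # separable
```

For the Bell state the partial transpose acquires the eigenvalue $-1/2$, witnessing entanglement, while every product state (and hence every separable state, by convexity) keeps a positive-semidefinite partial transpose.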
We use our results on the facial structure of $S$ to show that every affine automorphism of the space $S$ of separable states on $\B(\C^m \otimes \C^n)$ is given by a composition of the duals of the maps that are (i) conjugation by local unitaries (i.e., unitaries of the form $U_1 \otimes U_2$) (ii) the two partial transpose maps, or (iii) the swap automorphism that takes $A\otimes B$ to $B \otimes A$ (if $m = n$). A consequence is a description of the affine automorphisms $\Phi$ of the state space such that $\Phi$ preserves entanglement and separability. There is related work of Hulpke et al [@Hulpke]. They say a linear map $L$ on $\C^m \otimes \C^n$ preserves *qualitative entanglement* if $L$ sends separable (i.e., product) vectors to product vectors, and entangled vectors to entangled vectors. They show that a linear map $L$ preserves qualitative entanglement of vectors on $\C^m \otimes \C^n$ iff $L$ is a local operator (i.e. one of the form $L_1 \otimes L_2$), or if $L$ is a local operator composed with the swap map that takes $x\otimes y$ to $y \otimes x$. They then show that if $L$ preserves a certain *quantitative* measure of entanglement, then $L$ must be a local unitary. We thank Mary Beth Ruskai for helpful comments and references. Background: states on $\B(\C^n)$ ================================ We review basic facts about states on $\B(\C^n)$, and develop some facts about the relationship of independence of vectors $x$ in $\C^n$