--- abstract: 'A proper vertex coloring of a graph is equitable if the sizes of color classes differ by at most one. The equitable chromatic number of a graph $G$, denoted by $\chi_=(G)$, is the minimum $k$ such that $G$ is equitably $k$-colorable. The equitable chromatic threshold of a graph $G$, denoted by $\chi_=^*(G)$, is the minimum $t$ such that $G$ is equitably $k$-colorable for $k\ge t$. We develop a formula and a linear-time algorithm which compute the equitable chromatic threshold of an arbitrary complete multipartite graph.' address: 'College of Information Engineering, Tarim University, Alar 843300, China' author: - Zhidan Yan - Wei Wang title: Equitable chromatic threshold of complete multipartite graphs --- equitable coloring, equitable chromatic threshold, complete multipartite graphs 05C15 Introduction {#intro} ============ All graphs considered in this paper are finite and undirected, without loops or multiple edges. For a positive integer $k$, let $[k] = \{1,2,\cdots,k\}$. A proper $k$-coloring of a graph $G$ is a mapping $f : V(G) \rightarrow [k]$ such that $f(x) \neq f(y)$ whenever $xy \in E(G)$. We call the set $f^{-1}(i)= \{x \in V(G) : f(x) = i\}$ a color class for each $i \in [k]$. A graph is $k$-colorable if it has a proper $k$-coloring. The chromatic number of $G$, denoted by $\chi(G)$, is equal to min{$k$ : $G$ is $k$-colorable}. An equitable $k$-coloring of $G$ is a proper $k$-coloring for which any two color classes differ in size by at most one, or equivalently, each color class is of size $\lfloor|V(G)|/k\rfloor$ or $\lceil|V(G)|/k\rceil$. If $G$ has $n$ vertices, write $n = kq + r$ with $0\leq r < k$; then $n = (k-r)q + r(q + 1)$, so exactly $r$ (respectively, $k - r$) color classes have size $q + 1$ (respectively, $q$).
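As a small illustration (ours, not from the paper; the function name is our choice), the multiset of color-class sizes forced by an equitable $k$-coloring can be computed directly:

```python
def equitable_class_sizes(n, k):
    # In an equitable k-coloring of n vertices, with n = k*q + r (0 <= r < k),
    # exactly r classes have size q + 1 and k - r classes have size q.
    q, r = divmod(n, k)
    return [q + 1] * r + [q] * (k - r)
```

For instance, `equitable_class_sizes(10, 3)` gives `[4, 3, 3]`: one class of size 4 and two of size 3.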
The equitable chromatic number of $G$, denoted by $\chi_ = (G)$, is equal to min{$k$ : $G$ is equitably $k$-colorable}, and the equitable chromatic threshold of a graph $G$, denoted by $\chi_=^*(G)$, is equal to min{$t$ : $G$ is equitably $k$-colorable for $k\geq t$}. The concept of equitable colorability was first introduced by Meyer [@W.; @Meyer1973]. The definitive survey of the subject is by Lih [@K.-W.; @Lih1998]. For the many applications, such as scheduling and constructing timetables, see [@B.Baker1996; @S.; @Irani1996; @S.; @Janson2002; @F.; @Kitagawa1988; @M.J.; @Pelsmajer2004; @B.F.; @Smith1996; @A.; @Tucker1973]. In 1964, Erdős [@P.; @Erdos1964] conjectured that any graph $G$ with maximum degree $\Delta(G)\le k$ has an equitable $(k + 1)$-coloring, or equivalently, $\chi_=^*(G)\leq \Delta(G) + 1$. This conjecture was proved in 1970 by Hajnal and Szemerédi [@A.; @Hajnal1970] with a long and complicated proof; a polynomial algorithm for such a coloring was found by Mydlarz and Szemerédi [@M.; @MydlarzManuscript]. Kierstead and Kostochka [@H.A.; @Kierstead2008] gave a short proof of the theorem and presented another polynomial algorithm for such a coloring. Two Brooks-type results have been conjectured: the Equitable Coloring Conjecture [@W.; @Meyer1973], $\chi_=(G) \leq \Delta(G)$, and the Equitable $\Delta$-Coloring Conjecture [@B.-L.; @Chen1994b], $\chi_=^*(G)\leq \Delta(G)$ for $G\notin\{K_n, C_{2n+1}, K_{2n+1,2n+1}\}$. Exact values of the equitable chromatic numbers of trees [@B.-L.; @Chen1994a] and complete multipartite graphs [@D.Blum2003], [@P.C.B.; @Lam2001] were determined. Our article determines the exact value of the equitable chromatic threshold of complete multipartite graphs. A formula different from ours was established independently in a manuscript by Chen and Wu, and was reported in [@K.-W.; @Lih1998]. However, Chen and Wu never published their proof. To our knowledge, this article contains the only published proof.
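Since every color class of a complete multipartite graph $K_{n_1,\dots,n_t}$ is an independent set, each class must lie inside a single part, so equitable $k$-colorability is a purely arithmetic feasibility question. The following brute-force checker is our own sketch of that reduction, not the paper's linear-time algorithm:

```python
def equitably_k_colorable(parts, k):
    # K_{n_1,...,n_t} is equitably k-colorable iff each part n_i splits as
    # a_i*q + b_i*(q+1) with sum(a_i + b_i) = k and sum(b_i) = r,
    # where sum(parts) = k*q + r and 0 <= r < k.  (Assumes k <= sum(parts).)
    n = sum(parts)
    q, r = divmod(n, k)

    def feasible(i, big_left, classes_left):
        # big_left: remaining classes of size q+1; classes_left: total left
        if i == len(parts):
            return big_left == 0 and classes_left == 0
        m = parts[i]
        for b in range(min(big_left, m // (q + 1)) + 1):
            rest = m - b * (q + 1)
            if rest % q == 0 and feasible(i + 1, big_left - b,
                                          classes_left - b - rest // q):
                return True
        return False

    return feasible(0, r, k)
```

For example, `equitably_k_colorable((3, 3), 3)` returns `False`, reflecting that $K_{3,3}$ has no equitable $3$-coloring (a class of size $2$ would have to sit inside a part of odd size $3$), while $K_{3,3}$ is equitably $2$- and $4$-colorable.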
The results =========== Before stating our main result, we need several preliminary results on integer partitions. Recall that a partition of an integer $n$ is a sum of the form $n = m_1 + m_2 + \cdots + m_k$, where $0\leq m_i \leq n$ for each $1\leq i \leq k$. We call such a partition a $q$-partition if each $m_i$ is in the set $\{q, q+1\}$. A $q$-partition of $n$ is typically denoted as $n = aq + b(q + 1)$, where $n$ is the sum of $a$ $q$’s and $b$ $(q + 1)$’s. A $q$-partition of $n$ is called a minimal $q$-partition if the number of its addends, $a + b$, is as small as possible, and a maximal $q$-partition if this number is as large as possible. For example, $2 + 2 + 2 + 2$ is a maximal $2$-partition of $8$, and $2 + 3 + 3$ is a minimal $2$-partition of $8$. If $q\mid n$, say $n = kq$ with $k \geq 1$, then writing $n = 0(q-1) + kq$ (respectively, $n = kq + 0(q + 1)$) shows that $n$ has both a $(q - 1)$-partition and a $q$-partition. For example, since $2\mid 8$, writing $8 = 0 \times 1 + 4 \times 2$ (respectively, $8 = 4 \times 2 + 0 \times 3$) shows that $8$ has both a $1$-partition and a $2$-partition. Our first lemma is from [@D.Blum2003], which studies the condition under which a $q$-partition of $n$ exists. For the sake of completeness, here we restate their proof. In what follows, all variables are nonnegative integers. \[basic\][@D.Blum2003] If $0 < q \leq n$, and $n=kq+r$ with $0 \leq r < q$, then there is a $q$-partition of $n$ if and only if $r\leq k$. If $r\leq k$, then $n = (k - r)q + r(q + 1)$ is a $q$-partition of $n$. Conversely, given a $q$-partition $n = aq + b(q + 1)$ of $n$, we have $n = (a + b)q + b$, so $(a + b) \leq k$ and $r \leq b$. Consequently, $r \leq b \leq (a + b) \leq k$. \[noq-partion\] There is no $q$-partition of $n$ if and only if $n/(q + 1)> \lfloor n/q \rfloor$. Using the division algorithm, write $n = kq + r$, with $0 \leq r < q$.
Then $k = \lfloor n/q \rfloor$ and $r = n - \lfloor n/q \rfloor q$. Lemma \[basic\] implies that there is no $q$-partition of $n$ if and only if $r>k$, that is, $n - \lfloor n/q \rfloor q > \lfloor n/q \rfloor$, which can be rewritten as $n > \lfloor n/q \rfloor (q + 1)$. Corollary \[noq-partion\] follows immediately. The next two lemmas give conditions under which a $q$-partition of $n$ is maximal (respectively, minimal). \[maximal1\] A $q$-partition $n = aq + b(q + 1)$ of $n$ is maximal if and only if $b < q$. Moreover, a maximal $q$-partition is unique. Regard $a$ and $b$ as variables, and $q$ as fixed. Solving the linear relation $n = aq + b(q
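Lemma \[basic\] and Corollary \[noq-partion\] are easy to check by machine; the sketch below is our own illustration (function names are ours):

```python
def has_q_partition(n, q):
    # Lemma: writing n = k*q + r with 0 <= r < q, a q-partition of n
    # exists if and only if r <= k.
    k, r = divmod(n, q)
    return r <= k

def q_partition(n, q):
    # Return (a, b) with n = a*q + b*(q + 1), as in the lemma's proof:
    # (k - r) addends equal to q and r addends equal to q + 1.
    k, r = divmod(n, q)
    return None if r > k else (k - r, r)

# Corollary: there is NO q-partition of n iff n/(q+1) > floor(n/q),
# i.e. a q-partition exists iff n <= floor(n/q) * (q + 1).
for n in range(1, 200):
    for q in range(1, n + 1):
        assert has_q_partition(n, q) == (n <= (n // q) * (q + 1))
```

Here `q_partition(8, 2)` returns `(4, 0)` (the maximal $2$-partition $2+2+2+2$), while `q_partition(5, 3)` returns `None`: no sum of $3$'s and $4$'s equals $5$.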
--- abstract: 'Time-resolved photoluminescence measurements show that the decay time for charged excitons in a GaAs two-dimensional electron gas increases by an order of magnitude at high magnetic fields. Unlike neutral excitons, the charged exciton center-of-mass is spatially confined in a “magnetically-adjustable quantum dot” by the cyclotron orbit and the quantum well. The inhibited recombination is explained by a reduced phase coherence volume of the magnetically-confined charged excitons.' address: - '$^1$Physics Department, Kobe University, Kobe 657-8501, Japan.' - '$^2$Physics Department, Northeastern University, Boston, Massachusetts 02115' - '$^3$Francis Bitter Magnet Laboratory, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139' - '$^4$Materials Department, University of California, Santa Barbara, California 93106' author: - 'H. Okamura,$^{1,2,3}$ D. Heiman,$^{2,3}$ M. Sundaram,$^4$[@address] and A.C. Gossard$^4$' title: '**Inhibited Recombination of Charged Magnetoexcitons**' --- =9.25in =-.25in =-.5in Charged excitons or “trions” were first identified in optical absorption experiments on electron-doped CdTe quantum well (QW) structures through their polarization properties in a magnetic field.[@kheng] The negatively charged exciton ($X^-$) transition in a narrow QW was manifest in the spectra as a peak lying several meV below the uncharged exciton ($X^0$) peak. Although both $X^0$ and $X^-$ transitions may have been observed previously in PL spectra of GaAs QWs, the high electron density precluded their identification.[@heiman88] In hindsight, it is not surprising that the recently identified $X^-$ is often the most common exciton found in a system with excess electrons, similar to the $X^0$ in a system without excess electrons. An $X^0$ in the presence of excess electrons becomes polarized by a nearby electron and binds the electron by a dipolar attraction.
The properties of $X^-$ transitions in GaAs QWs have been explored in several recent experimental[@buhmann; @finkel; @shields; @gekhtman; @yoon] and theoretical[@wojs; @palacios; @chapman; @liberman; @whittaker] studies. An interesting facet of the charged exciton that has yet to be explored is the [*confinement*]{} produced by the cyclotron motion in a magnetic field. The $X^-$ complex (two electrons plus one hole) is singly-charged and a magnetic field confines the $X^-$ center-of-mass motion to a cyclotron orbit, unlike the neutral exciton ($X^0$) which is free to move in a magnetic field. This will be referred to as the [*magnetically-confined charged exciton*]{} (MCX). A magnetic field applied perpendicular to a two-dimensional (2D) QW effectively confines the exciton to a quantum dot (QD) whose size is adjustable with magnetic field. The 3D MCX volume, defined roughly by the QW width and the area of the cyclotron orbit in the plane of QW, is inversely proportional to the perpendicular magnetic field, $V_{_{MCX}} \propto 1/B_\perp$. At high magnetic fields this volume is typically smaller than QDs currently available via patterned nanostructures. The purpose of the present study is to examine the radiative recombination of excitons confined in these MCX QDs. Exciton recombination times were determined by measuring the photoluminescence (PL) decay times in low-density GaAs/AlGaAs electron gases in magnetic fields to 18 T, at temperatures 0.5-7 K. At low temperatures, the $X^-$ decay time was found to increase by an order of magnitude for increasing perpendicular magnetic field. In contrast, the recombination is rapid for both the $X^-$ in fields applied in the plane of the QW and for the uncharged $X^0$. In the latter two cases the exciton is not confined to a QD. 
The linear dependence of exciton decay time with magnetic field is explained by a model in which the transition strength for optical recombination is inversely proportional to the MCX QD volume or phase coherence volume. Experiments were performed on a symmetrically modulation-doped electron gas contained in wide parabolic GaAs/Al$_{0.3}$Ga$_{0.7}$As QWs.[@gossard] In these QWs the electrons are distributed uniformly over a thick layer $\sim$250 nm wide, with electron densities of 5 and 7 $\times$ 10$^{15}$ cm$^{-3}$, and mobility of 1.2 and 1.9 $\times$ 10$^5$ cm$^2$/Vsec. The photo-generated holes were confined within a layer $\sim$ 25 nm wide at the center of the much wider electron layer. Thus although the electrons are spread over $\sim$ 250 nm, the excitons are confined to a narrow 2D plane in the center of the QW. Samples were mounted on a fiber optic probe inserted into a $^3$He cryostat, which was placed in the bore of an 18 T Bitter or a 30 T hybrid magnet.[@heiman92] Time-resolved PL measurements employed standard time-correlated single-photon counting electronics and a multichannel plate. A pulsed diode laser operating at 1.58 eV (200 ps pulse length at 17 MHz) and a 0.85 m double spectrometer provided a system response of 300 ps full-width at half-maximum.[@manfred] Deconvolution of the system response resulted in a time resolution of $\sim$ 20-100 ps for PL decay times. Figure 1(a) shows the PL spectra at 0.5 K in magnetic fields applied perpendicular to the QW layers, and the inset (b) plots the PL peak positions up to $B$=30 T. There are two prominent PL peaks, both showing a quadratic spectral shift at low fields and a nearly linear shift at high fields, which is typical of exciton emission. 
The excitonic character of these PL lines was further supported by the presence of a clear onset of absorption in the PL excitation spectrum, and also by strong resonant Rayleigh scattering.[@thesis] In each spectrum the PL peak at higher energy is assigned to recombination emission from the $X^0$ neutral exciton and the peak at lower energy is assigned to the $X^-$ charged exciton. Assignment of the lower energy peak to the $X^-$ rather than to a trapped exciton is in agreement with many other optical studies of electron-doped GaAs QWs.[@buhmann; @finkel; @shields; @gekhtman; @yoon] The singlet (antiparallel electron spins) and triplet (parallel electron spins) states are not resolved in this sample; however, another sample having smaller PL linewidths showed $X^-$ peaks similar to those found in a previous study of triplet $X^-$.[@shields] The $X^-$ and $X^0$ peaks here have strong opposite circular polarizations at high fields, consistent with their peak assignments. The energy separation between the two peaks is the binding energy of the second electron, $\sim$ 1 meV at high fields. With increasing temperature the spectral intensity shifts from the $X^-$ peak to the $X^0$ peak due to thermal ionization of the second electron. These spectral features for the two peaks are quite similar to previous reports of $X^-$ transitions in GaAs QWs.[@buhmann; @finkel; @shields; @gekhtman; @yoon] Results of time-resolved measurements at $T$=0.5 K are shown in Fig. 2. The inset displays the PL intensity of the $X^-$ on a log scale as a function of time. At $B$=0 the $X^-$ decay is rapid and closely follows the system response (dashed curve). Deconvolution of the system response from the PL decay curve at $B$=0 yields a decay time of $t \sim$ 100 ps, which is close to that observed for a high-mobility 2D electron gas.[@heiman92; @manfred; @thesis] For fields applied perpendicular to the QW, the decay becomes increasingly longer at higher fields.
At $B$=18 T it reaches $t$=1.2 ns, an order of magnitude longer than at $B$=0. (Note that the field-induced increase in the PL decay time is not due to changes in non-radiative decay channels, as indicated by the nearly constant total intensity of the two PL peaks in magnetic fields.) Figure 2 plots PL decay times at $T$=0.5 K for fields up to $B$=18 T. The solid circles and solid squares represent $X^-$ data measured for two samples with electron densities differing by a factor of 1.4. These data are nearly identical. They demonstrate that the MCX [*decay time is linearly proportional to the magnetic field*]{}. In contrast, for fields applied parallel to the QW, the $X^-$ decay time is independent of magnetic field, shown by the open circles. Furthermore, the decay time for the $X^0$ peak (not shown) does not increase appreciably, with $t \leq 150$ ps for all fields.[@okamura94] Rapid decay of
--- author: - | Georges Elencwajg\ Patrick Le Barz\ Laboratoire Jean Dieudonné, UMR CNRS 6621\ Parc Valrose, 06108 Nice Cedex 2, France\ `elenc@unice.fr and lebarz@unice.fr` title: Young Diagrams and embeddings of Grassmannians --- In this article, we consider two embeddings $ f : G\longrightarrow G'$ between Grassmannians and we study the associated morphisms $f_{*}$ and $f^{*}$ between the corresponding (classical) rings of rational equivalence, also called Chow rings. To do this, we study the direct and inverse images of the Schubert cycles. These cycles are presented through Young diagrams, which give a pleasant and visual description. In particular, we show a precise link (cf. “visual result”, I.2.b) between the Young diagrams and the matrices representing the corresponding Schubert cell. This is useful for proving transversality results (lemmas 2 and 3). We refer to Fulton’s book \[1\] “Young Tableaux” and to the copious bibliography which appears there. Our article is essentially Linear Algebra, modulo basic intersection theory.\ \ 1) Let $E$ be a vector space of dimension $N$ (over $\textbf{C}$ to fix ideas). We denote by $G = G_{d}(E)$ the grassmannian of vector subspaces of dimension $d$ of $E$. Let $c=N-d$ be the codimension of these subspaces; then $dim (G) = cd$. Recall that a “partition” $\lambda$ is a decreasing sequence $\lambda = (\lambda_{1},...,\lambda_{d})$ with $$c \geq \lambda_{1}\geq \lambda_{2} \geq \lambda_{3} \geq \cdots \geq \lambda_{d} \geq 0 .$$ To such a $\lambda$ and a flag $F.$ of $E$: $$0 = F_{0}\subset F_{1}\subset F_{2}\subset ... \subset F_{N} = E$$ (dim $F_{k} = k$), we associate a closed subvariety $\Omega_{\lambda}(F.)$ of $G$ defined as follows.
For $P\in G_{d}(E)$, we have $P\in \Omega_{\lambda}(F.)$ if and only if the following *d* conditions are fulfilled: $$dim(P\cap F_{c+i-\lambda_{i}}) \geq i \;\;\; (for \;1\leq i\leq d).$$ The i-*th* condition above is equivalent to $$dim(P + F_{c+i-\lambda_{i}}) \leq N-\lambda_{i}.$$ This last formulation is often more tractable, especially when the subspaces are given by generators. Notice that the i-*th* condition is empty when $\lambda_{i} = 0$; that’s why in practice we do not write the zero terms of the sequence $\lambda$. Moreover, if $\lambda_{i-1} = \lambda_{i}$, the i-*th* condition implies the (i-1)-*th*. To $\lambda$ we associate a Young diagram; for example, for $d=4, c = 7$ and $\lambda = (5,2,1)$, the associated Young diagram (fig. 1) is the $d \times c$ rectangle of unit squares whose rows contain $5$, $2$, $1$ and $0$ full squares, respectively. We denote by $CH^{\textbf{.}}(G)$ the ring of rational equivalence classes of the grassmannian $G$. We write $\sigma_{\lambda} \in CH^{\textbf{.}}(G)$ for the cycle associated to the subvariety $\Omega_{\lambda}(F_{\textbf{.}})$: it is independent of the flag $F_{.}$ (we shall always write “cycle” instead of “class of cycle”). If we write $|\lambda| = \lambda_{1}+ ... + \lambda_{d}$, then the $\sigma_{\lambda}$’s with $|\lambda| = p$ form a **Z**-basis of $CH^{p}(G)$.\ Remark: The *codimension* of the cycle can be read in the number of *full* squares; the *dimension* of the cycle can be read in the number of *empty* squares. In what follows, $\lambda$, the Young diagram representing $\lambda$ and $\sigma_{\lambda}$ will be identified.
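The membership conditions above are rank conditions, so they can be tested numerically. The sketch below is our illustration (names and the use of the coordinate flag $F_k = \langle e_1,\dots,e_k\rangle$ are our choices); it uses $\dim(P\cap F) = \dim P + \dim F - \dim(P+F)$:

```python
import numpy as np

def dim_intersection(P, F):
    # dim(P ∩ F) = dim P + dim F - dim(P + F), computed via column ranks.
    return (np.linalg.matrix_rank(P) + np.linalg.matrix_rank(F)
            - np.linalg.matrix_rank(np.hstack([P, F])))

def in_schubert_variety(P, lam, N):
    # P: an N x d matrix whose columns span a d-plane in C^N.
    # lam: the partition (zero terms may be omitted, as in the text).
    # Tests dim(P ∩ F_{c+i-lam_i}) >= i for 1 <= i <= d, where F_k is
    # the span of the first k standard basis vectors.
    d = P.shape[1]
    c = N - d
    lam = list(lam) + [0] * (d - len(lam))
    for i in range(1, d + 1):
        F = np.eye(N)[:, : c + i - lam[i - 1]]
        if dim_intersection(P, F) < i:
            return False
    return True
```

For $N=4$, $d=2$ and $\lambda=(1)$, the plane spanned by $e_1,e_3$ satisfies the condition, while the plane spanned by $e_3,e_4$ does not.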
For example, the following diagram (fig. 2) represents the $P\in G$ *contained* in a fixed subspace of *codimension* $q$: its first $q$ columns are full, i.e. all $d$ rows have length $q$. Analogously, the following diagram (fig. 3) represents the $P\in G$ *containing* a fixed subspace of $E$ of *dimension* $q$: its first $q$ rows are full, of length $c$. 2\) Charts of $G_{d}(\textbf{C}^{m})$ associated to $\lambda$\ Recall that elements of $\textbf{C}^{m}$ are written in *columns*. a\) To $\lambda$ we will associate an open subset $U^{\lambda}\subset G_{d}(\textbf{C}^{m})$ and a chart (denote $c = m-d$): $$M_{c\times d}(\textbf{C}) \longrightarrow U^{\lambda}.$$ To this end, associate to $A\in M_{c\times d}(\textbf{C})$ a matrix $M_{A} \in M_{m\times d}(\textbf{C})$ by the following rule: i\) Rotate the diagram $\lambda$ by $\frac{\pi}{2}$ and obtain a drawing $\lambda^{rot}$ (illustrated in the original for $\lambda=(5,3,2)$).
--- author: - | János Kollár\ [ ]{}\ [with an appendix by]{} C. Raicu bibliography: - 'refs.bib' title: Quotients by finite equivalence relations --- Let $f:X\to Y$ be a finite morphism of schemes. Given $Y$, one can easily describe $X$ by the coherent sheaf of algebras $f_*{{\mathcal O}}_X$. Here our main interest is the converse. Given $X$, what kind of data do we need to construct $Y$? For this question, the surjectivity of $f$ is indispensable. The fiber product $X\times_YX\subset X\times X$ defines an equivalence relation on $X$, and one might hope to reconstruct $Y$ as the quotient of $X$ by this equivalence relation. Our main interest is in the cases when $f$ is not flat. A typical example we have in mind is when $Y$ is not normal and $X$ is its normalization. In these cases, the fiber product $X\times_YX$ can be rather complicated. Even if $Y$ and $X$ are pure dimensional and CM, $X\times_YX$ can have irreducible components of different dimension and its connected components need not be pure dimensional. None of these difficulties appear if $f$ is flat [@MR0232781; @sga3] or if $Y$ is normal (\[quot.pure.dim.lem\]). The aim of this note is to give many examples, review known results, pose questions and to prove a few theorems concerning finite equivalence relations. Definition of equivalence relations =================================== \[eq.rel.defn\] Let $X$ be an $S$-scheme and $\sigma:R\to X\times_SX$ a morphism (or $\sigma_1,\sigma_2:R\rightrightarrows X$ a pair of morphisms). We say that $R$ is an [*equivalence relation*]{} on $X$ if, for every scheme $T\to S$, we get a (set theoretic) equivalence relation $$\sigma(T):{\operatorname{Mor}}_S(T,R)\into {\operatorname{Mor}}_S(T,X)\times {\operatorname{Mor}}_S(T,X).$$ Equivalently, the following conditions hold: 1. $\sigma$ is a monomorphism (\[monom.defn\]) 2. (reflexive) $R$ contains the diagonal $\Delta_X$. 3. 
(symmetric) There is an involution $\tau_R$ on $R$ such that $\tau_{X\times X}\circ\sigma\circ\tau_R=\sigma$, where $\tau_{X\times X}$ denotes the involution which interchanges the two factors of $X\times X$. 4. (transitive) For $1\leq i<j\leq 3$ set $X_i:=X$ and let $R_{ij}:=R$ when it maps to $X_i\times_SX_j$. Then the coordinate projection of $R_{12}\times_{X_2}R_{23}$ to $X_1\times_SX_3$ factors through $R_{13}$: $$R_{12}\times_{X_2}R_{23}\to R_{13}\stackrel{\pi_{13}}{\longrightarrow} X_1\times_SX_3.$$ We say that $\sigma_1,\sigma_2:R\rightrightarrows X$ is a [*finite*]{} equivalence relation if the maps $\sigma_1,\sigma_2$ are finite. In this case, $\sigma:R\to X\times_SX$ is also finite, hence a closed embedding (\[monom.defn\]). \[setth.eq.rel.defn\] Let $X$ and $R$ be reduced $S$-schemes. We say that a morphism $\sigma:R\to X\times_SX$ is a [*set theoretic equivalence relation*]{} on $X$ if, for every geometric point ${\operatorname{Spec}}K\to S$, we get an equivalence relation on $K$-points $$\sigma(K):{\operatorname{Mor}}_S({\operatorname{Spec}}K,R)\into {\operatorname{Mor}}_S({\operatorname{Spec}}K,X)\times {\operatorname{Mor}}_S({\operatorname{Spec}}K,X).$$ Equivalently, 1. $\sigma$ is geometrically injective. 2. (reflexive) $R$ contains the diagonal $\Delta_X$. 3. (symmetric) There is an involution $\tau_R$ on $ R$ such that $\tau_{X\times X}\circ\sigma\circ\tau_R=\sigma$ where $\tau_{X\times X}$ denotes the involution which interchanges the two factors of $X\times X$. 4. (transitive) For $1\leq i<j\leq 3$ set $X_i:=X$ and let $R_{ij}:=R$ when it maps to $X_i\times_SX_j$. 
Then the coordinate projection of ${\operatorname{red}}\bigl(R_{12}\times_{X_2}R_{23}\bigr)$ to $X_1\times_SX_3$ factors through $R_{13}$: $${\operatorname{red}}\bigl(R_{12}\times_{X_2}R_{23}\bigr)\to R_{13} \stackrel{\pi_{13}}{\longrightarrow} X_1\times_SX_3.$$ Note that the fiber product need not be reduced, and taking the reduced structure above is essential, as shown by (\[nonred.noneq.exmp\]). It is sometimes convenient to consider finite morphisms $p:R\to X\times_SX$ such that the injection $i:p(R)\into X\times_SX$ is a set theoretic equivalence relation. Such a $p:R\to X\times_SX$ is called a [*set theoretic pre-equivalence relation.*]{} \[nonred.noneq.exmp\] On $X:={{\mathbb C}}^2$ consider the ${{\mathbb Z}}/2$-action $(x,y)\mapsto (-x,-y)$. This can be given by a set theoretic equivalence relation $R\subset X_{x_1,y_1}\times X_{x_2,y_2}$ defined by the ideal $$(x_1-x_2,y_1-y_2)\cap (x_1+x_2,y_1+y_2)= (x_1^2-x_2^2, y_1^2-y_2^2, x_1y_1-x_2y_2, x_1y_2-x_2y_1)$$ in ${{\mathbb C}}[x_1,y_1,x_2,y_2]$. We claim that this is [*not*]{} an equivalence relation. The problem is with transitivity. The defining ideal of $R_{12}\times_{X_2}R_{23}$ in ${{\mathbb C}}[x_1,y_1,x_2,y_2,x_3,y_3]$ is $$(x_1^2-x_2^2, y_1^2-y_2^2, x_1y_1-x_2y_2, x_1y_2-x_2y_1, x_2^2-x_3^2, y_2^2-y_3^2, x_2y_2-x_3y_3, x_2y_3-x_3y_2).$$ This contains $(x_1^2-x_3^2, y_1^2-y_3^2, x_1y_1-x_3y_3)$ but it does not contain $x_1y_3-x_3y_1$. Thus there is no map $R_{12}\times_{X_2}R_{23}\to R_{13}$. Note, however, that the problem is easy to remedy. Let $R^*\subset X\times X$ be defined by the ideal $$(x_1^2-x_2^2, y_1^2-y_2^2, x_1y_1-x_2y_2)\subset {{\mathbb C}}[x_1,y_1,x_2,y_2].$$ We see that $R^*$ defines an equivalence relation. The difference between $R$ and $R^*$ is one embedded point at the origin.
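The two ideal-membership claims in the example can be verified mechanically, e.g. with a Gröbner basis computation in SymPy (our illustration, not part of the original text):

```python
from sympy import groebner, symbols

x1, y1, x2, y2, x3, y3 = symbols('x1 y1 x2 y2 x3 y3')

# Defining ideal of R_{12} x_{X_2} R_{23} in C[x1,y1,x2,y2,x3,y3]
gens = [
    x1**2 - x2**2, y1**2 - y2**2, x1*y1 - x2*y2, x1*y2 - x2*y1,
    x2**2 - x3**2, y2**2 - y3**2, x2*y2 - x3*y3, x2*y3 - x3*y2,
]
G = groebner(gens, x1, y1, x2, y2, x3, y3, order='grevlex')

# x1*y1 - x3*y3 is a sum of two generators, hence lies in the ideal,
# but x1*y3 - x3*y1 does not: transitivity fails at the ideal level.
in_ideal = G.contains(x1*y1 - x3*y3)
obstruction = G.contains(x1*y3 - x3*y1)
```

Here `in_ideal` is `True` while `obstruction` is `False`, confirming the text's claim that no map $R_{12}\times_{X_2}R_{23}\to R_{13}$ exists.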
\[cat.geom.quot\] Given two morphisms, $\sigma_1,\sigma_2:R\rightrightarrows X$, there is at most one scheme $q:X\to (X/R)^{cat}$ such that $q\circ \sigma_1=q\circ \sigma_2$ and $q$ is universal with this property. We call $(X/R)^{cat}$ the [*categorical quotient*]{} (or [*coequalizer*]{}) of $\sigma_1,\sigma_2:R\rightrightarrows X$. The categorical quotient is easy to construct in the
--- abstract: 'We construct higher order rogue wave solutions for the Gerdjikov-Ivanov equation explicitly in terms of a determinant expression. The dynamics of both soliton and non-soliton solutions is discussed. A family of solutions with distinct structures is presented, which are new to the Gerdjikov-Ivanov equation.' author: - 'Lijuan Guo, Yongshuai Zhang, Shuwei Xu, Zhiwei Wu, Jingsong He' title: 'The higher order Rogue Wave solutions of the Gerdjikov-Ivanov equation' --- [^1] [**Key words**]{}: Darboux transformation, higher order rogue wave, Gerdjikov-Ivanov equation.\ [**PACS numbers**]{}: 42.65.Tg,42.65.Sf,05.45.Yv,02.30.Ik [**Introduction**]{} ==================== The rogue wave is a type of natural disaster first observed in deep ocean waves. Later, similar phenomena were observed in other branches of physics, such as optics, plasmas, capillary waves and so on [@NOW2009; @Nature450; @EPJST18557; @PRL104104503; @PRE86]. There is a consensus that these rogue waves are the result of modulation instability. Meanwhile, a breather solution usually arises from the instability of small amplitude perturbations that may grow in size to disastrous proportions. Therefore, in mathematical terms, a rogue wave can be treated as a limiting case of the Ma soliton as the space period tends to infinity, or of the Akhmediev breather as the time period approaches infinity [@PLA373675]. For reasons of both theory and practical application, further study of rogue waves, especially higher order rogue waves of different models, is imperative. The Peregrine soliton, which is localized in the time-space plane, is one of the formal ways to explain this surprising phenomenon mathematically [@JAMSS2516]. The solution emerges from a non-zero constant background and decays back to that background as time approaches infinity, but at intermediate times it develops a localized hump with peak amplitude three times that of the average waves.
The subject of higher order rogue waves is now under intense discussion via different approaches, such as the iterated Darboux transformation method [@PRE82026602; @PRE80026601], algebro-geometric means [@EPJST185247], and the generalized Darboux transformation [@PRE85026607]. Recently, the expression of the first order rogue wave solution and the figure of the second order rogue wave for the Gerdjikov-Ivanov (GI) equation were provided by the third and fifth authors of the present paper [@JMP063507]. However, the higher order rogue wave solutions for this equation have not been studied. The main aim of this paper is to discuss higher order rogue wave solutions and describe their different structures. The nonlinear Schrödinger equation is one of the most important equations in physics, and can be derived from the Ablowitz-Kaup-Newell-Segur system [@PRL31125; @SPJETP3462]. When higher order nonlinear effects are taken into account, the derivative nonlinear Schrödinger (DNLS) equation, with a polynomial spectral problem of arbitrary order [@JPAMG143125], is regarded as a model in a wide variety of fields such as weakly nonlinear dispersive water waves [@PRS357], nonlinear optical fibers [@PRA23; @PRA27; @JP39], quantum field theory [@JPA23] and plasmas [@PF14]. The DNLS equation has three generic deformations: the DNLSI equation [@JMP19798]: $$\label{dnlsi} {\rm i}q_{t}-q_{xx}+{\rm i}(q^2q^\ast)_{x}=0,$$ the DNLSII equation [@PS20490]: $$\label{dnlsii} {\rm i}q_{t}+q_{xx}+{\rm i}qq^{\ast}q_{x}=0,$$ and the DNLSIII equation, or the GI equation [@JPB10130]: $$\label{dnlsiii} {\rm i}q_t+q_{xx}-{\rm i}q^2q^{\ast}_{x}+\frac{1}{2}q^3{q^\ast}^2=0.$$ In many circumstances, the Darboux transformation has proved to be a powerful method for obtaining soliton solutions [@MS91; @Gu05], breather solutions and rational solutions. The Darboux transformation and its determinant expression for the GI equation have been given in [@JPAMG33625; @JMP063507].
However, there is no straightforward extension that constructs higher order rogue wave solutions at the same eigenvalues. In this paper, we employ a limit technique with respect to degenerate eigenvalues, together with Taylor expansion, in the Darboux transformation [@PRE85026607; @PLA166205; @arxiv12093742]. Based on this explicit method, we can further discuss the structure of the solutions, both soliton and rogue wave. This paper is organized as follows: In section 2, we review the general $n$-fold Darboux transformation for the GI equation. In section 3, explicit solutions are constructed, such as soliton, breather and positon solutions, and higher order rogue waves with two parameters $D_1$ and $D_2$. By choosing different values of $D_1$ and $D_2$, we exhibit four basic patterns, the fundamental pattern, the triangular structure, the modified triangular structure and the ring structure, and display their dynamical evolutions in section 4. The conclusions and discussions are contained in the final section. [**Darboux transformation for the Gerdjikov-Ivanov equation**]{} ================================================================ In this section, we start from the Lax pair of the GI equation to construct its Darboux transformation. Consider the spectral problem $$\label{sys11} \left\{ \begin{aligned} \partial_{x}\psi&=(J\lambda^2+Q_{1}\lambda+Q_{0})\psi=U\psi,\\ \partial_{t}\psi&=(2J\lambda^4+V_{3}\lambda^3+V_{2}\lambda^2+V_{1}\lambda+V_{0})\psi=V\psi.
\end{aligned} \right.$$ where $$\label{fj1} \psi=\left( \begin{array}{c} \phi \\ \varphi\\ \end{array} \right)=\left( \begin{array}{c} \phi(x,t, \lambda) \\ \varphi(x,t, \lambda) \\ \end{array} \right),\nonumber\\ \quad J= \left( \begin{array}{cc} -i &0 \\ 0 &i\\ \end{array} \right),\nonumber\\ \quad Q_{1}=\left( \begin{array}{cc} 0 &q \\ r &0\\ \end{array} \right),\nonumber\\ \quad Q_{0}=\left( \begin{array}{cc} -\dfrac{1}{2}i q r &0 \\ 0 &\dfrac{1}{2}i q r\\ \end{array} \right),\nonumber\\$$ $$\label{fj2} V_{3}=2Q_{1}, V_{2}=Jqr, V_{1}=\left( \begin{array}{cc} 0 &iq_{x} \\ -ir_{x}&0\\ \end{array} \right), V_{0}=\left( \begin{array}{cc} \dfrac{1}{2}(rq_x-qr_x)+\dfrac{1}{4}iq^2 r^2 &0 \\ 0&-\dfrac{1}{2}(rq_x-qr_x)-\dfrac{1}{4}iq^2 r^2\\ \end{array} \right).\nonumber\\$$ Here $\lambda\in\mathbb{C}$ and $\psi$ is the eigenfunction corresponding to the eigenvalue $\lambda$. By the compatibility condition $U_{t}-V_{x}+[U,V]=0$, we get $$\label{eq11} \left\{ \begin{aligned} {\rm i}q_t+q_{xx}+{\rm i}q^2r_x+\frac{1}{2}q^3r^2=0, \\ {\rm i}r_t-r_{xx}+{\rm i}r^2q_x-\frac{1}{2}q^2r^3=0. \end{aligned} \right.$$ This system admits the reduction $r=-q^{*}$, under which it becomes a single equation, namely the GI equation. Via a gauge transformation, we can construct a new solution from initial data, i.e., if there exists some non-singular $T$ such that $$\label{eq22} \left\{ \begin{aligned} U^{[1]}=(T_{x}+T~U)T^{-1}. \\ V^{[1]}=(T_{t}+T~V)T^{-1}. \end{aligned} \right.$$ where $U^{[1]}$ and $V^{[1]}$ have the same form as $U$ and $V$ with $q$ and $r$ replaced by certain $q^{[1]}$ and $r^{[1]}$. Therefore, it is crucial to find an algebraic formula for $T$ instead of the .\ [**2
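The compatibility condition can be verified symbolically. The following SymPy sketch (ours; it assumes the matrices exactly as displayed above) substitutes the coupled system into $U_t - V_x + [U,V]$ and checks that the result vanishes identically in $\lambda$:

```python
import sympy as sp

x, t, lam = sp.symbols('x t lam')
q = sp.Function('q')(x, t)
r = sp.Function('r')(x, t)
I = sp.I

J = sp.Matrix([[-I, 0], [0, I]])
Q1 = sp.Matrix([[0, q], [r, 0]])
Q0 = sp.Matrix([[-I*q*r/2, 0], [0, I*q*r/2]])
V3 = 2*Q1
V2 = J*q*r
V1 = sp.Matrix([[0, I*q.diff(x)], [-I*r.diff(x), 0]])
w = (r*q.diff(x) - q*r.diff(x))/2 + I*q**2*r**2/4
V0 = sp.Matrix([[w, 0], [0, -w]])

U = J*lam**2 + Q1*lam + Q0
V = 2*J*lam**4 + V3*lam**3 + V2*lam**2 + V1*lam + V0

M = U.diff(t) - V.diff(x) + U*V - V*U

# Eliminate q_t, r_t using the coupled system:
#   i q_t + q_xx + i q^2 r_x + (1/2) q^3 r^2 = 0
#   i r_t - r_xx + i r^2 q_x - (1/2) q^2 r^3 = 0
M = M.subs({
    sp.Derivative(q, t): I*q.diff(x, 2) - q**2*r.diff(x) + I*q**3*r**2/2,
    sp.Derivative(r, t): -I*r.diff(x, 2) - r**2*q.diff(x) - I*q**2*r**3/2,
}).expand()
# M is now the 2x2 zero matrix: the Lax pair encodes exactly this system.
```

Every power of $\lambda$ cancels after the substitution, confirming that the displayed $U$ and $V$ indeed produce the system above.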
--- author: - | Yana Hasson^1,2^ [^1] Bugra Tekin^4^ Federica Bogo^4^\ Ivan Laptev^1,2^ Marc Pollefeys^3,4^ Cordelia Schmid^1,5^\ \ [^1^Inria, ^2^Département d’informatique de l’ENS, CNRS, PSL Research University]{}\ [^3^ETH Zürich, ^4^Microsoft]{}, [^5^Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK]{} bibliography: - 'egbib.bib' title: | Leveraging Photometric Consistency over Time for\ Sparsely Supervised Hand-Object Reconstruction --- Appendix {#appendix .unnumbered} ======== Our main paper described a method for joint reconstruction of hands and objects, and proposed to leverage photometric consistency as an additional source of supervision in scenarios where ground truth is scarce. We provide additional details on the implementation in Section \[sec:implemdetails\], and describe the training and test splits used for the HO-3D dataset [@hampali2019ho3d] in Section \[sec:ho3dsub\]. In Section \[sec:cyclic\], we detail the cyclic consistency check that allows us to compute the valid mask for the photometric consistency loss. Section \[sec:skeletonadapt\] provides additional insights on the effect of using the skeleton adaptation layer. [^1]: This work was performed during an internship at Microsoft.
--- abstract: 'The union of an ascending chain of prime ideals is not always prime. The union of an ascending chain of semi-prime ideals is not always semi-prime. We show that these two properties are independent. We also show that the number of non-prime unions of subchains in a chain of primes in a PI-algebra does not exceed the PI-class minus one, and this bound is tight.' address: 'Department of Mathematics, Bar Ilan University, Ramat Gan 5290002, Israel' author: - 'Be’eri Greenfeld' - 'Louis H. Rowen' - Uzi Vishne title: Unions of chains of primes --- Introduction {#sec:intro} ============ In a commutative ring, the union of a chain of prime ideals is prime, and the union of a chain of semiprime ideals is semiprime. This paper demonstrates and measures the failure of these chain conditions in general. A ring has the [**[(semi)prime chain property]{}**]{} (denoted  and , respectively) if the union of any countable chain of (semi)prime ideals is always (semi)prime.[^1] The property  was recognized by Fisher and Snider [@FS] as the missing hypothesis for Kaplansky’s conjecture on regular rings, and they gave an example of a ring without . Our focus is on . The class of rings satisfying  is quite large. An easy exercise shows that every commutative ring satisfies , and the same argument yields that the union of a chain of strongly prime ideals is strongly prime ($P \normali R$ is strongly prime if $R/P$ is a domain). In fact, we have the following result: \[first\] Every ring $R$ which is a finite module over a central subring satisfies . Write $R = \sum_{i=1}^t Cr_i$ where $C \sub \operatorname{Cent}(R)$. Suppose $P_1 \subset P_2 \subset \cdots$ is a chain of prime ideals, with $P = \cup P_i$. If $a,b \in R$ with $$\sum C ar_i b = \sum aCr_i b = aRb \subseteq P,$$ then, since the chain is ascending and the elements $ar_i b$ are finite in number, there is $n$ such that $ar_i b \in P_n$ for $1 \le i \le t,$ implying $aRb = \sum Car_i b \subseteq P_n$, and thus $a \in P_n$ or $b \in P_n$. Hence $a \in P$ or $b \in P$, so $P$ is prime.
(For a recent treatment of the correspondence of infinite chains of primes between a ring $R$ and a central subring, see [@Shai].) The class of rings satisfying  also contains every ring that satisfies ACC (ascending chain condition) on primes, and is closed under homomorphic images and central localizations. This led some mathematicians to believe that it holds in general. On the other hand, Bergman produced an example lacking  (see [[Example \[Ex1\]]{}]{} below), implying that the free algebra on two generators does not have . Obviously, the property  follows from the maximum property on families of primes. On the other hand,  implies (by Zorn’s lemma) the following maximum property: for every prime $Q$ contained in any ideal $I$, there is a prime $P$ maximal with respect to $Q \sub P \sub I$. In [[Section \[sec:mat\]]{}]{} we show that  and  are independent, by presenting an example (due to Kaplansky and Lanski) of a ring satisfying  and not , and an example of a ring satisfying  but not . We say that an ideal is [****]{} if it is a union of a chain of primes, but is not itself prime. (If ${{\{P_\lam\}}}$ is an ascending chain of primes, then $R/\bigcup P_{\lambda} = \lim_{\rightarrow} R/P_{\lam}$ is a direct limit of prime rings.) The [**[$\PP$-index]{}**]{} of the ring $R$ is the maximal number of non-prime unions of subchains of a chain of prime ideals in $R$ (or infinity if this number is unbounded; see [[Proposition \[PPindex\]]{}]{}). [[Section \[sec:mon\]]{}]{} extends Bergman’s example by showing that the $\PP$-index of the free (countable) algebra is infinity. A variation of this construction, based on free products, is presented in [[Section \[sec:example2.2\]]{}]{}. After defining the $\PP$-index in [[Section \[sec:PP\]]{}]{}, in [[Section \[sec:PI\]]{}]{} we discuss PI-rings, showing that the $\PP$-index does not exceed the PI-class minus one, and that this bound is tight. We thank the anonymous referee for careful comments on a previous version of this paper.
Monomial algebras {#sec:mon} ================= Fix a field $F$. We show that  and  fail in the free algebra (over $F$) by constructing an (ascending) chain of primitive ideals whose union is not semiprime. Let us start with a simpler theme, whose variations have extra properties. \[Ex1\] Let $R$ be the free algebra in the (noncommuting) variables $x,y$. For each $n$, let $$P_n = {{\left<xx,xyx, xy^2x, \dots, xy^{n-1}x\right>}}.$$ Since $P_n$ is a monomial ideal, it suffices to check primality on monomials. If $uRu' \sub P_n$ for some words $u,u'$, then in particular $uy^{n}u' \in P_n$, which forces a subword of the form $xy^ix$ (with $i<n$) in $u$ or in $u'$; hence either $u\in P_n$ or $u'\in P_n$. On the other hand, $\bigcup P_n = (RxR)^2$, which is not semiprime. This example, due to G. Bergman, appears in [@P Exmpl. 4.2]. Interestingly, primeness is always maintained in the following sense ([@P Lem. 4.1], also due to Bergman): for every countable chain of primes $P_1 \subset P_2 \subset \cdots$ in a ring $R$, the union $\bigcup (P_n[[\zeta]])$ is a prime ideal of the power series ring $R[[\zeta]]$. Since in [[Example \[Ex1\]]{}]{} $\bigcup P_n = (RxR)^2$, if $Q \normali R$ is a prime containing the union then $x \in Q$, so $R/Q$ is commutative. In particular, a chain of prime ideals starting from the chain $P_1 \subset P_2 \subset \cdots$ has only one . Let us exhibit a (countable) chain providing infinitely many s. Let $R$ be the free algebra generated by $x,y,z$. For a monomial $w$ we denote by $\deg_yw$ the degree of $w$ with respect to $y$. For $i,n \geq 1$, consider the monomial ideals $$I_{i,n} = RxxR+RxzxR+\cdots+Rxz^{i-1}xR+\sum_{\deg_y w < n} R xz^ixwxz^ix R,$$ which form an ascending chain with respect to the lexicographic order on the indices $(i,n)$, since $xz^ix \in I_{i',n}$ for every $i'>i$. To show that the $I_{i,n}$ are prime, suppose that $u,u'$ are monomials such that $u,u' \not \in I_{i,n}$ but $u R u' \sub I_{i,n}$. Then $u z^i y^n z^i u' \in I_{i,n}$.
Since none of the monomials $xz^{i'}x$ ($i'<i$) is a subword of $u$ or $u'$, they are not subwords of $u z^i y^n z^i u'$, forcing $u z^i y^n z^i u'$ to have a subword of the form $xz^ixwxz^ix$ where $\deg_yw < n$. It follows that $z^iy^nz^i$ is a subword of $z^ixwxz^i$, contrary to the degree assumption. Now, for every $i$, $$\bigcup_{n} I_{i,n} = RxxR+RxzxR+\cdots+Rxz^{i-1}xR+(Rxz^ixR)^2,$$ which contains $(Rxz^ixR)^2$ but not $Rxz^ixR$, so it is not semiprime. In particular, the $\PP$-index of $R$ (see [[Proposition \[PPindex\]]{}]{}) is infinity. In Section \[sec:PI\] we show that this phenomenon is impossible in PI algebras: there, the number of s in a prime chain is bounded by the PI-class. The ideals $P_n$ in [[Example \[Ex1\]]{}]{}
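The subword reasoning in Example \[Ex1\] lends itself to a mechanical check. The brute-force sketch below (ours; the helper `in_Pn`, the surrogate union bound, and the word-length cutoff are our assumptions) verifies, on short words, both the primality witness and the failure of semiprimality for the union:

```python
from itertools import product

def in_Pn(word, n):
    # membership in the monomial ideal P_n = <xx, xyx, ..., xy^(n-1)x>:
    # a monomial lies in P_n iff some generator occurs in it as a subword
    return any("x" + "y" * i + "x" in word for i in range(n))

n = 3
words = ["".join(w) for L in range(1, 6) for w in product("xy", repeat=L)]

# primality on monomials: if u, u' both avoid P_n, then u y^n u' avoids P_n,
# since a subword x y^i x with i < n cannot straddle the middle y^n block
for u in words:
    for v in words:
        if not in_Pn(u, n) and not in_Pn(v, n):
            assert not in_Pn(u + "y" * n + v, n)

# the union is not semiprime: x R x lies in U = union of the P_n, yet x does not
in_union = lambda w: any(in_Pn(w, m) for m in range(1, 2 * len(w)))
assert all(in_union("x" + u + "x") for u in words)
assert not in_union("x")
```

The exhaustive loop is of course no substitute for the proof; it merely illustrates why the argument only needs to inspect subwords around the inserted $y^n$ block.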
Resonant inelastic x-ray scattering (RIXS) is developing very rapidly into a powerful technique to investigate elementary excitations in strongly correlated electron systems [@Kao; @Butorin; @Hill; @Kuiper; @Abbamonte]. The application of this technique to insulating copper oxides has made it possible to observe an excitation due to a local charge transfer between copper and oxygen [@Hill] and local $d$-$d$ excitations on the copper site [@Kuiper]. In addition, it has been demonstrated that, by using high resolution experiments [@Abbamonte], the momentum-dependent measurement of the charge transfer gap is possible when the incident photon energy $\omega_i$ is tuned through the Cu $K$ absorption edge. Thus, RIXS can be a useful probe to obtain information on the momentum dependence of the elementary excitations. One of the elementary excitations in the insulating cuprates is the charge-transfer process from the occupied Zhang-Rice singlet band (ZRB) [@Zhang] composed of Cu 3$d_{x^2-y^2}$ and O 2$p_\sigma$ orbitals to the unoccupied upper Hubbard band (UHB). The dispersion of the ZRB has been extensively studied by angle-resolved photoemission spectroscopy (ARPES) experiments on the parent compounds of high $T_c$ superconductors [@Wells; @Kim; @Ronning]: A $d$-wave-like dispersion was observed along the (0,$\pi$)-($\pi$,0) line with the minimum of the binding energy at ($\pi$/2,$\pi$/2) [@Ronning]. In contrast, the dispersion relation and spectral properties of the unoccupied UHB have not been examined and thus remain to be understood. Information on the UHB is of crucial importance for understanding the motion of electrons in the electron-doped superconductors. In addition, it may be useful to know whether particle-hole symmetry is required for high temperature superconductivity.
In this Letter, we examine the RIXS spectrum for the Cu $K$-edge, and demonstrate that the characteristic features of the dispersion of the UHB can be extracted from the momentum dependence of the spectrum. To see this, we use the half-filled single-band Hubbard model to describe the occupied ZRB and the unoccupied UHB, mapping the ZRB onto the lower Hubbard band (LHB) in the model. Then, we incorporate Cu 1$s$ and 4$p$ orbitals into the model to include the 1$s$-core hole and the excited 4$p$ electron in the intermediate state of the RIXS process. Long-range hoppings are also introduced into the Hubbard model, with realistic values obtained from the analysis of ARPES data. We find a characteristic momentum dependence of the Cu $K$-edge RIXS spectrum: The energy of the threshold of the RIXS spectrum at ($\pi$/2,$\pi$/2) is higher than that at (0,0), whereas the energy of the threshold at ($\pi$/2,0) is lower than that at (0,0). This anisotropic dependence is explained by the dispersion of the UHB, which has its minimum energy at ($\pi$,0) due to the long-range hoppings. The determination of the UHB will contribute to the understanding of the different behavior of hole- and electron-doped superconductors [@Kim; @Takagi]. We map the ZRB onto the LHB, which is equivalent to the elimination of the O $2p$ orbitals. Such a mapping was used in the analysis of the O $1s$ x-ray absorption spectrum [@Chen].
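The role of the long-range hoppings can be illustrated with a simple spin-density-wave mean-field sketch (ours, not the exact-diagonalization treatment used in this work; the hopping values $t'=-0.34t$, $t''=0.23t$ and the gap $\Delta$ below are illustrative assumptions): the nearest-neighbor dispersion changes sign under the antiferromagnetic wave vector $(\pi,\pi)$, while the $t'$ and $t''$ terms do not, which pushes the minimum of the upper band to $(\pi,0)$.

```python
import numpy as np

t, tp, tpp = 1.0, -0.34, 0.23   # t' and t'' (illustrative cuprate-like values)
Delta = 2.0                      # assumed mean-field gap at half filling

k = np.linspace(0.0, np.pi, 201)
kx, ky = np.meshgrid(k, k, indexing="ij")

e1 = -2 * t * (np.cos(kx) + np.cos(ky))               # changes sign under Q = (pi, pi)
e2 = (-4 * tp * np.cos(kx) * np.cos(ky)
      - 2 * tpp * (np.cos(2 * kx) + np.cos(2 * ky)))  # invariant under Q

E_uhb = e2 + np.sqrt(e1**2 + Delta**2)                # upper (electron-addition) band

i, j = np.unravel_index(np.argmin(E_uhb), E_uhb.shape)
print(kx[i, j] / np.pi, ky[i, j] / np.pi)  # minimum at (pi, 0) / (0, pi), not at (0, 0)
```

With these parameters the band minimum sits at $(\pi,0)$ (and its symmetry partner), well below the energies at $(0,0)$ and $(\pi/2,\pi/2)$, consistent with the anisotropy described above.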
The Hubbard Hamiltonian with second and third neighbor hoppings for the $3d$ electron system is written as $$\begin{aligned} \label{ham3d} H_{3d} &=& -t\sum_{\langle {\bf i},{\bf j} \rangle_{\rm 1st}, \sigma} d_{{\bf i},\sigma}^\dag d_{{\bf j},\sigma} -t'\sum_{\langle {\bf i},{\bf j} \rangle_{\rm 2nd}, \sigma} d_{{\bf i},\sigma}^\dag d_{{\bf j},\sigma} \nonumber\\ &&-t''\sum_{\langle {\bf i},{\bf j} \rangle_{\rm 3rd}, \sigma} d_{{\bf i},\sigma}^\dag d_{{\bf j},\sigma} + {\rm H.c.} +U\sum_{\bf i} n^d_{{\bf i},\uparrow}n^d_{{\bf i},\downarrow},\end{aligned}$$ where $d_{{\bf i},\sigma}^\dag$ is the creation operator of a $3d$ electron with spin $\sigma$ at site ${\bf i}$, $n_{{\bf i},\sigma}^d=d_{{\bf i},\sigma}^\dag d_{{\bf i},\sigma}$, the summations $\langle {\bf i},{\bf j} \rangle_{\rm 1st}$, $\langle {\bf i},{\bf j} \rangle_{\rm 2nd}$, and $\langle {\bf i},{\bf j} \rangle_{\rm 3rd}$ run over first, second, and third nearest-neighbor pairs, respectively, and the rest of the notation is standard. Figure \[figpic\] shows the schematic process of Cu $K$-edge RIXS. Absorption of an incident photon with energy $\omega_i$, momentum ${\bf K}_i$, and polarization ${\bf \epsilon}_i$ brings about the dipole transition of an electron from the Cu $1s$ to the $4p$ orbital \[process (a) in Fig. \[figpic\]\]. In the intermediate states, the $3d$ electrons interact with the $1s$-core hole and the photo-excited $4p$ electron via the Coulomb interactions, so that excitations in the $3d$ electron system evolve \[process (b)\]. The $4p$ electron then returns to the $1s$ orbital and a photon with energy $\omega_f$, momentum ${\bf K}_f$, and polarization ${\bf\epsilon}_f$ is emitted \[process (c)\]. The differences in energy and momentum between the incident and emitted photons are transferred to the $3d$ electron system. In the intermediate state, there are a $1s$-core hole and a $4p$ electron, with which the $3d$ electrons interact.
Since the 1$s$-core hole is localized because of the small radius of the Cu 1$s$ orbital, the attractive interaction between the 1$s$-core hole and the 3$d$ electrons is very strong. The interaction is written as $$\begin{aligned} H_{1s\text{-}3d}=-V\sum_{{\bf i},\sigma,\sigma'} n_{{\bf i},\sigma}^d n_{{\bf i},\sigma'}^s,\end{aligned}$$ where $n_{{\bf i},\sigma}^s$ is the number operator of the 1$s$-core hole with spin $\sigma$ at site ${\bf i}$, and $V$ is taken to be positive. By contrast, since the 4$p$ electron is delocalized, the repulsive interaction between the 4$p$ and 3$d$ electrons, as well as the attractive one between the 4$p$ electron and the 1$s$-core hole, is small compared with the 1$s$-3$d$ interaction. In addition, when the core hole is screened by the 3$d$ electrons through the strong 1$s$-3$d$ interaction, the effective charge that acts on the 4$p$ electron at the core-hole site becomes small. Therefore, the interactions related to the 4$p$ electron are neglected for simplicity. Furthermore, we assume that the photo-excited 4$p$ electron enters the bottom of the 4$p_z$ band with momentum ${\bf k}_0$, where the $z$-axis is perpendicular to the CuO$_2$ plane. This assumption is justified as long as the Coulomb interactions associated with the 4$p$ electron are neglected and the resonance condition is set to the threshold of the 1$s$$\rightarrow$4$p_z$ absorption spectrum [@polarization]. Under these assumptions, the RIXS spectrum is expressed as $$\begin{aligned} \label{rixs} I(\Delta {\bf K},\Delta\omega)&=&\sum_\alpha\left|\langle\alpha| \sum_\sigma s_{{\bf k}_0-{\bf K}_f,\sigma} p_{{\bf k}_0,\sigma} \right.\nonumber\\&&\times\left.
\frac{1}{H-E_0-\omega_i-i\Gamma} p_{{\bf k}_0,\sigma}^\dag s_{{\bf k}_0-{\bf K}_i,\sigma}^\dag |0\rangle\right|^2 \nonumber\\&&\times \delta(\Delta\omega-E_\alpha+E_0),\end{aligned}$$ where $H=H_{3d}+H_{1s\text{-}3d}+H_{1s,4p}$, $H_{1s,4p}$ being kinetic and on-site energy terms for a 1$s$-core hole and a 4$p$ electron, $\Delta{\bf K}={\bf K}_i-{\bf K}_f$, $\Delta\omega=\omega_i-\omega_f$, $s_{{\bf k},\sigma}^\dag$ ($p_{{\bf k},\sigma}^\dag$) is the creation operator of the 1$s$-core hole (4$p$ electron) with momentum ${\bf k}$ and spin $\sigma$, $|0\rangle$ is the ground state of
--- author: - 'K. Sandstrom' - 'O. Krause' - 'H. Linz' - 'E. Schinnerer' - 'G. Dumas' - 'S. Meidt' - 'H.-W. Rix' - 'M. Sauvage' - 'F. Walter' - 'R. C. Kennicutt' - 'D. Calzetti' - 'P. Appleton' - 'L. Armus' - 'P. Beirão' - 'A. Bolatto' - 'B. Brandl' - 'A. Crocker' - 'K. Croxall' - 'D. Dale' - 'B. T. Draine' - 'C. Engelbracht' - 'A. Gil de Paz' - 'K. Gordon' - 'B. Groves' - 'C.-N. Hao' - 'G. Helou' - 'J. Hinz' - 'L. Hunt' - 'B. D. Johnson' - 'J. Koda' - 'A. Leroy' - 'E. J. Murphy' - 'N. Rahman' - 'H. Roussel' - 'R. Skibba' - 'J.-D. Smith' - 'S. Srinivasan' - 'L. Vigroux' - 'B. E. Warren' - 'C. D. Wilson' - 'M. Wolfire' - 'S. Zibetti' title: 'Mapping far-IR emission from the central kiloparsec of [NGC$\,1097$]{}[^1]' --- Introduction ============ The central regions of galaxies host, in circumnuclear starburst rings, some of the most intense star formation that we can observe in the local Universe. Starburst rings are believed to be the consequence of the pile-up of inflowing gas and dust, driven by a non-axisymmetric potential from a stellar bar, on orbits located near the Inner Lindblad Resonance of the bar [@combes85; @athanassoula92]. The high surface densities that exist in the ring lead to high star-formation rates. Indeed, starburst rings are one of the few regions in non-interacting galaxies where the formation of “super star clusters” commonly occurs [@maoz96]. The stars formed in the ring can be numerous enough to drive the structural evolution of the galaxy [@norman96; @kormendy04] and can be the dominant power source for the galaxy’s infrared (IR) emission. Star-formation in circumnuclear rings occurs under conditions not normally found in the disks of galaxies: in addition to their high gas surface densities, these regions have dynamical timescales that are comparable to the lifetimes of massive stars. Understanding star formation in circumnuclear rings has been a long-standing problem [@combes96].
There are two main models: the “popcorn” model [@elmegreen94], where star-formation is driven by stochastic gravitational fragmentation along the ring, and the “pearls on a string” model, where gas flowing into the ring is compressed near the contact points (i.e. locations where the dust lanes intersect the ring) and then forms stars a short distance downstream [e.g., @boker08]. The “pearls on a string” model predicts a gradient in the ages of young stellar clusters as one moves away from the contact points. This has been observed in a number of starburst rings [e.g., @mazzuca08; @boker08]. Conversely, many well-studied rings show no evidence for an age gradient [@maoz01]. It is not obvious, however, that a single mode of star-formation must occur in all rings or even at all times in a given ring [@vandeven09]. KINGFISH (Key Insights into Nearby Galaxies: A Far-Infrared Survey with [*Herschel*]{}, PI R. Kennicutt) is an Open-Time Key Program to study the interstellar medium (ISM) of nearby galaxies with far-IR/sub-mm photometry and spectroscopy. Among the unique aspects of the KINGFISH science program is the ability to observe thermal dust emission at unprecedented spatial resolution ($\sim 5.6''$, $6.8''$ and $11.3''$ at 70, 100 and 160 [$\mu$m]{}) using PACS (Photodetector Array Camera and Spectrometer) imaging. High spatial resolution is crucial for observing processes occurring in the central regions of galaxies. These regions represent our best opportunity to study in detail the interplay between dynamics, star-formation and feedback that regulate the fueling of nuclear activity, be it a starburst or an active galactic nucleus (AGN). Below we present PACS imaging of the galaxy NGC 1097, one of the first KINGFISH targets observed during the [*Herschel*]{} Science Demonstration Program (SDP) (for PACS spectroscopy of NGC 1097 see Beirão et al. 2010 and for [*SPIRE*]{} observations see Engelbracht et al. 2010).
The source NGC 1097 is a barred spiral galaxy located at a distance of 19.1 Mpc [@willick97; $1''\approx 92$ pc]. In its central kpc it hosts an intensely star-forming [$\sim 5\,M_{\odot}$ yr$^{-1}$; @hummel87] ring with a radius of $\sim 900$ pc. The ring’s rotation speed of $\sim 300$ km s$^{-1}$ [corrected for inclination; @storchi-bergmann96] corresponds to a rotation period of $\sim 18$ Myr. The galaxy’s nucleus is classified as a LINER from optical emission line diagnostics [@phillips84], but is shown to be a Seyfert 1 by its double-peaked H$\alpha$ profile [@storchi-bergmann93]. UV spectroscopy has revealed a few-Myr-old burst of star-formation in the central 9 pc of the galaxy [@storchi-bergmann05]. With the high spatial resolution of [*Herschel*]{} PACS, we can resolve the starburst ring and the inner 600 pc of NGC 1097 for the first time at wavelengths near the peak of the dust spectral energy distribution (SED). Observations and data reduction =============================== The galaxy NGC 1097 was observed with the PACS instrument (Poglitsch et al. 2010) on the [*Herschel*]{} Space Observatory (Pilbratt et al. 2010) on 2009 December 20 during the SDP. We obtained 15 long scan-maps in two orthogonal directions at the medium scan speed ($20''$ s$^{-1}$). The scan position angles (45$^\circ$ relative to the scan direction) provide homogeneous coverage over the mapped region. The total on-source times per pixel were approximately 150, 150, and 300 seconds for 70, 100 and 160 [$\mu$m]{}, respectively. The raw data were reduced with HIPE (Ott 2010), version 3.0, build 455. Besides the standard steps leading to level-1 calibrated data, second-level deglitching and correction for offsets in the detector sub-matrices were performed. The data were then highpass-filtered using a median window of 5 to remove the effects of bolometer sensitivity drifts and 1/f noise.
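The dynamical numbers quoted above for the ring are easy to cross-check. A small sketch (ours; only standard conversion constants are assumed):

```python
import math

pc_km = 3.0857e13   # kilometres per parsec
yr_s = 3.156e7      # seconds per year

# angular scale: 1 arcsec at D = 19.1 Mpc
arcsec_pc = 19.1e6 * math.pi / (180 * 3600)

# ring radius ~900 pc, inclination-corrected rotation speed ~300 km/s
period_myr = 2 * math.pi * 900 * pc_km / 300.0 / yr_s / 1e6

print(f"1\" = {arcsec_pc:.0f} pc, ring period = {period_myr:.0f} Myr")  # ~93 pc, ~18 Myr
```

Both values agree with the quoted $1''\approx 92$ pc scale and the $\sim 18$ Myr rotation period.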
We masked out emission structures (visible in a first iteration of the map-making) with a 5-wide mask before computing this running median, to prevent subtraction of source emission. Although the filtering may remove some extended flux from the galaxy, because we are primarily interested in the very bright central 1 of NGC 1097, this effect will be negligible. Finally, the data were projected onto a coordinate grid with $1''$ pixels. After pipeline processing we applied flux correction factors from the PACS team to adjust the calibration. The current calibration has uncertainties of $\sim 10$, 10, and 20% for the 70, 100 and 160 [$\mu$m]{} bands, respectively (Poglitsch et al. 2010). Because we aim to compare our PACS observations with ancillary data at other wavelengths, we adjusted the relative astrometry of the PACS observations to match that of the [*Spitzer*]{} 24 [$\mu$m]{} data from SINGS [Spitzer Infrared Nearby Galaxies Survey: @kennicutt03]. This was done by measuring the positions of background point-sources in the MIPS 24 [$\mu$m]{} (Multi-Band Imaging Photometer) and PACS 100 [$\mu$m]{} images, adjusting the PACS 100 [$\mu$m]{} astrometry, assuming the relative astrometry for the PACS bands is well-calibrated, and transferring the solution to the other bands. The offset between the PACS and MIPS astrometry was $\sim 2''$. The one-sigma surface brightness sensitivities per pixel of the final maps are 5.9, 6.2 and 3.3 MJy sr$^{-1}$. In Fig. \[fig:rgb\] we show the three PACS images with a logarithmic stretch to highlight the spiral arms. Note that below we extract photometry from the images at their native resolution using apertures larger than the beam size of the lowest resolution map. [*Herschel* PACS observations of
--- abstract: 'We present a framework for obtaining reliable solid-state charge and optical excitations and spectra from optimally-tuned range-separated hybrid density functional theory. This framework allows for accurate prediction of exciton binding energies. We demonstrate our approach through calculations of one- and two-particle excitations in pentacene, a molecular semiconducting crystal, where our results are in excellent agreement with experiments and prior computations. We further show that with one adjustable parameter, this method accurately predicts band structures and optical spectra of silicon and lithium fluoride, prototypical covalent and ionic solids. Our findings indicate that for a broad range of extended bulk systems, this method may provide a computationally inexpensive alternative to many-body perturbation theory, opening the door to studies of materials of increasing size and complexity.' author: - 'Sivan Refaely-Abramson' - Manish Jain - Sahar Sharifzadeh - 'Jeffrey B. Neaton' - Leeor Kronik bibliography: - 'tdrsh.bib' title: | Solid-state optical absorption from optimally-tuned time-dependent\ range-separated hybrid density functional theory --- Many solid-state systems exhibit strong excitonic effects, notably an optical excitation spectrum that is affected substantially by the interaction between excited electron and hole quasiparticle states. The nature of this electron-hole, or [*excitonic*]{}, interaction is of central importance for a variety of applications in, e.g., optoelectronics and photovoltaics [@Savoie2014]. Nevertheless, its accurate theoretical prediction remains a challenging task.
It is common to account for such interactions within the framework of ab initio many-body perturbation theory, in which single-particle excitations are well-predicted from Dyson’s equation, typically solved within the GW approximation [@Hedin1965; @Hybertsen1986], and two-particle excitations are well-predicted using the Bethe-Salpeter equation (BSE) [@Rohlfing1998; @*Rohlfing2000; @Strinati1982; @*Strinati1984]. Current GW-BSE calculations are highly demanding and therefore presently impose significant practical limits on the calculated system size and complexity. Density functional theory (DFT), in both its time-independent [@DreizlerGross; @ParrYang] and time-dependent (TDDFT) [@Marques2012; @Casida1995; @Burke2005; @Ullrich_book] forms, is considerably more efficient computationally. However, common (semi-)local approximations to both DFT and TDDFT suffer from serious deficiencies which have precluded their use as a viable alternative to GW-BSE in the prediction of excitonic properties [@Onida2002]. First, quasi-hole and quasi-electron excitation energies are generally underestimated and overestimated, respectively, by the DFT Kohn-Sham eigenvalue spectrum [@Kummel2008; @Kronik2012]. While the same functionals often perform better in the prediction of optical excitation energies of isolated molecular systems, the Kohn-Sham gap is typically similar to the optical gap [@Salzner1997; @Chong2002; @Baerends2013; @Kronik2012; @Kronik2014]. In any case, they still fail in the solid-state limit [@Onida2002; @Ullrich_book; @Izmaylov2008; @Sharma2014; @Ullrich2014]. Therefore, neither one- nor two-particle excitations are well-predicted in the solid-state, and hence the nature of excitons or their binding energies are not obtained. 
The failure of semi-local functionals in predicting solid-state absorption spectra has been traced back to an incorrect description of the long-range electron-electron and electron-hole interaction, manifested by the absence of a $1/q^2$ contribution [@Gonze1995; @*Ghosez1997; @Kim2002] to the interaction, where $q$ is a wavevector in the periodic system. Several ingenious schemes for overcoming this deficiency have been suggested, including the use of an exchange-correlation kernel of the form $f_{xc}(r,r')=-\alpha/(4\pi |r-r'|)$, where $\alpha$ is a system-dependent empirical parameter [@Reining2002; @Botti2004]; a static approximation to the exchange-correlation kernel based on a jellium-with-gap model [@Trevisanutto2013]; a “bootstrap” parameter-free kernel, achieved using self-consistent iterations of the random phase approximation (RPA) dielectric function [@Sharma2011; @Sharma2014]; a related “guided iteration” RPA-bootstrap kernel [@Rigamonti2015]; and the Nanoquanta kernel [@Onida2002; @Reining2002; @Sottile2003; @Marini2003; @*Adragna2003], derived by constructing the exchange-correlation kernel from an approximate solution to the BSE. Each correction provides a major step forward. However, none is a fully DFT-based solution, as single quasiparticle excitations are obtained from GW, RPA, a DFT+U approach, or a scissors-shift correction. A different path for enabling TDDFT calculations in the solid state is the use of (global or range-separated) hybrid functionals. These are still well within density functional theory, using the generalized Kohn-Sham (GKS) framework [@Seidl1996; @Kummel2008; @Kronik2012], and their non-local Fock-like exchange component assists in the inclusion of long-range contributions. Although the time-dependent GKS equations have yet to be formally derived, hybrid functionals are already widely used for calculating optical properties. 
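To make the range-separation idea concrete: in a screened range-separated hybrid, the Fock-exchange fraction interpolates between a short-range value $\alpha$ and a long-range value $1/\varepsilon$ via $\alpha + \beta\,\mathrm{erf}(\gamma r)$ with $\alpha + \beta = 1/\varepsilon$, which restores the $1/(\varepsilon r)$ tail absent from semi-local kernels. A minimal numerical sketch (ours; the partition form is the standard screened-RSH ansatz, and the parameter values are those quoted later in the text for pentacene):

```python
from math import erf

gamma, alpha, eps = 0.16, 0.2, 3.6   # Bohr^-1, short-range fraction, dielectric constant
beta = 1.0 / eps - alpha             # enforces the 1/(eps*r) long-range tail

def xx_fraction(r_bohr):
    """Fraction of Fock exchange at interelectron distance r (Bohr)."""
    return alpha + beta * erf(gamma * r_bohr)

print(xx_fraction(0.0))     # short range: alpha = 0.2
print(xx_fraction(1000.0))  # long range: 1/eps ~ 0.278
```

The smooth error-function switch is what lets a single functional treat both the short-range exchange-correlation balance and the long-range screened electron-hole attraction.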
For gas-phase molecules, hybrid functionals can improve optical excitation energies, although standard hybrids still do not provide accurate single-particle excitation energies [@Salzner1997; @Kummel2008; @Kronik2012; @Kronik2014]. TDDFT using the Heyd-Scuseria-Ernzerhof (HSE) short-range hybrid functional [@Heyd2003; @*Heyd2006], where non-local exchange is introduced only in the short range, can improve the absorption spectra of semiconductors and insulators [@Paier2008], although some discrepancies remain. However, the HSE functional still does not provide the desired long-range non-local contribution. The B3LYP hybrid functional [@Becke1993; @*Stephens1994], in which a global 20% fraction of exact-exchange is used, was recently shown to yield TDDFT optical spectra for semiconductors in good agreement with experiment [@Bernasconi2011; @*Tomic2014]. Although in this case a non-local contribution to the kernel tail does exist, it is global and parameterized for a finite set of small organic molecules. However, global and short-range hybrid functionals were shown to be insufficient predictors of band structures in solid-state systems [@Jain2011], notably for molecular crystals [@Refaely-Abramson2013] where excitonic effects are strong. Recently, Yang et al. [@Yang2015] suggested a screened exact-exchange (SXX) approach, in which the local part of the hybrid calculation is set to zero and the time-dependent Hartree-Fock exchange is scaled down non-empirically per system by using the inverse of the dielectric constant, based on a ground state obtained from a scissor-corrected local density approximation (LDA) calculation. Again, this led to improved performance for more strongly bound excitons. ![(Color online) (a) Band-structure (left) and density of states (right) of the pentacene solid, calculated using LDA (gray, dashed lines), G$_0$W$_0$@LDA (red, dashed lines), and OT-SRSH (black, solid lines).
For all methods, the middle of the bandgap is shifted to zero. (b) The imaginary part of the dielectric function of the pentacene solid, with incident light polarization averaged over the $a$, $b$, and $c$ main unit-cell axes, calculated using TDLDA (gray, dashed lines), G$_0$W$_0$/BSE (red, dashed lines), and TD-OT-SRSH (black, solid lines). For visualization purposes, the leading absorption feature (between 0.5 and 2.5 eV) was multiplied by a factor of 10 with all computational methods used. The OT-SRSH and TD-OT-SRSH results were obtained using the parameters $\gamma=0.16$ Bohr$^{-1}$, $\alpha=0.2$, and $\varepsilon=3.6$. For computational details and convergence information, see the SI.[]{data-label="fig_pen"}](pen_DOS_abs_2h-averageLS-wan.pdf) Ideally, we seek a DFT-based method where accurate one- and two-particle excitations can be read directly off the eigenvalues of the time-independent (G)KS and the linear-response time-dependent (G)KS equations, respectively, using a single exchange-correlation functional, from which a consistent exchange-correlation potential and kernel are derived. This challenge is not met by any of the above-surveyed methods. Recently, it was met for gas-phase systems, using the optimally-tuned range separated hybrid functional (OT-RSH) approach [@Baer2010; @Stein2010; @Kronik2012], where the long- and short-range fraction of Fock-exchange is tuned non-empir
--- abstract: 'The existence of wide-sense-stationarity (WSS) in narrowband wireless body-to-body networks is investigated for “everyday" scenarios using many hours of contiguous experimental data. We employ different parametric and non-parametric hypothesis tests for evaluating mean and variance stationarity, along with distribution consistency, of several body-to-body channels formed between different on-body sensor locations. We also estimate the variation of the power spectrum to evaluate the time independence of the auto-covariance function. Our results show that, with 95% confidence, the assumption of WSS is met for at most 90% of the cases with window lengths of 5 seconds for the channels between the hubs of different BANs. Additionally, in the best-case scenario, the hub-to-hub channel remains reasonably stationary (with more than 80% probability of satisfying the null hypothesis) for longer window lengths of more than 10 seconds. The short-time power spectral variation for body-to-body channels is also shown to be negligible. Moreover, we show that body-to-body channels can be considered wide-sense-stationary over significantly longer periods than on-body channels.' author: - - - '[^1]' bibliography: - 'Reference.bib' title: 'Wide-Sense-Stationarity of Everyday Wireless Channels for Body-to-Body Networks' --- Introduction ============ Wireless body-to-body networks (BBNs) can enable coexistence of wireless body area networks (BANs) by exploiting body-to-body (B$2$B) communications using wearable on-body hub/sensor devices. While BANs are specifically designed to collect data from various sensors placed on/inside or around the human body, BBNs send data through closely located BANs to reach the intended destination/server in case of unavailable or out-of-range network infrastructure (in emergency indoor/outdoor situations) [@shimly2017cross].
BBNs are envisioned to be self-organizing, smart and mobile networks that can create their own centralized/decentralized network connections without any external coordination. This requires systematic prediction and modeling of the channel behavior. Statistical characterization of a channel requires time segments that possess wide-sense-stationarity (WSS), or second-order stationarity, where the first and second moments (i.e., mean, variance and auto-covariance) of the channel are independent of time [@chaganti2014narrowband]. In [@bello1963characterization], Bello suggested that his proposed wide-sense-stationary uncorrelated scattering (WSSUS) assumption can hold only for limited intervals of time and frequency, as real-world radio channels often demonstrate ‘quasi-stationary’ behavior. Therefore, it is important to identify the wide-sense-stationary regions of a channel to see whether estimated model parameters can be applied over a suitable time-frame. To test the WSS of wireless channels, a parametric approach is proposed in [@kay2008new] to detect non-stationarity based on the time-variant autoregressive (TVAR) model. A parametric unit-root test is proposed in [@reinsel2003elements] to parameterize a predetermined structure. Willink tested the WSS of multiple-input multiple-output (MIMO) wireless channels in [@willink2008wide] by investigating the first and second moments with parametric one-way ANOVA and non-parametric time-dependent evolutionary spectrum analysis, respectively. Other non-parametric approaches to identifying stationarity intervals include the run-test described in [@bultitude2002estimating], comparison of the delay power spectral density (PSD) estimated at different time instances [@umansky2009stationarity], and evaluation of the variation of the time-localized PSD estimate [@basu2009nonparametric]. 
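As a concrete illustration of how such windowed stationarity tests can be combined, the sketch below applies one-way ANOVA (mean), the Brown–Forsythe variant of Levene's test (variance), and the two-sample K–S test (distribution) to sub-segments of each window. This is our own sketch with standard SciPy routines, not the code used in the studies cited above; the window length, number of sub-segments, and significance level are illustrative assumptions.

```python
import numpy as np
from scipy import stats

def wss_window_tests(x, fs, win_s=5.0, alpha=0.05, n_seg=5):
    """For each window of `win_s` seconds, split the samples into `n_seg`
    sub-segments and test: mean stationarity (one-way ANOVA), variance
    stationarity (Brown-Forsythe, i.e. Levene's test with median centering),
    and distribution consistency (two-sample K-S against the first segment).
    Returns the fraction of windows in which each null hypothesis is kept."""
    win = int(win_s * fs)
    n_win = len(x) // win
    kept = {"anova": 0, "brown_forsythe": 0, "ks": 0}
    for w in range(n_win):
        segs = np.array_split(x[w * win:(w + 1) * win], n_seg)
        if stats.f_oneway(*segs).pvalue > alpha:                 # equal means kept
            kept["anova"] += 1
        if stats.levene(*segs, center="median").pvalue > alpha:  # equal variances kept
            kept["brown_forsythe"] += 1
        if all(stats.ks_2samp(segs[0], s).pvalue > alpha for s in segs[1:]):
            kept["ks"] += 1
    return {name: count / n_win for name, count in kept.items()}
```

For a stationary trace each fraction should sit near $1-\alpha$; sharp drops flag windows with non-stationary behavior.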
However, BAN/BBN channels differ in practice from typical wireless/radio channels because of the slowly-varying human-body dynamics and the shadowing caused by postural body movements [@smith2015channel]. Hence, in [@chaganti2014narrowband] the authors used different parametric and non-parametric approaches for testing WSS of on-body channels and showed that on-body channels have non-stationary characteristics. *The novelty here is the investigation of whether the WSS assumption can be applied to body-to-body (B$2$B) channels, and the determination of the typical duration of WSS regions of B$2$B channels.* We use parametric one-way ANOVA for investigating mean stationarity, and the non-parametric Brown–Forsythe (B–F) test and Kolmogorov–Smirnov (K–S) test to investigate variance stationarity and distribution consistency of the channels, respectively. We also use the variation in the PSD estimate to test the time independence of the auto-covariance of the channels. Our findings from applying the aforementioned tests to the experimental data are as follows: - For body-to-body channels, the hub-to-hub (Left-Hip to Left-Hip) links show a better probability of satisfying the wide-sense-stationarity (WSS) assumption than the hub-to-sensor (i.e., Left-Hip to Right-upper-Arm, Left-Hip to Left-Wrist) links. - According to the tests, there is up to approximately $90\%$ probability (over the total period) that the hub-to-hub links will satisfy the null hypothesis for window lengths of $5$ s with a $95\%$ confidence level (and up to $85\%$ for window lengths of $10$ s with a $99\%$ confidence level). - In the best-case scenario, the hub-to-hub channel can satisfy the null hypothesis with a window length of $50$ s for more than $85\%$ of the time over the whole period (with a $95\%$ confidence level). - Negligible variation in power spectral density is found for different window lengths (e.g., $5$ s, $10$ s) amongst many different B$2$B channels. 
- Body-to-body links are more stationary than on-body links, as on-body links show non-stationary behavior with a $50\%$ chance of rejecting the null hypothesis over the whole period for an estimated minimum required window length of $3$ s. - Even in the best-case scenario, on-body links show a lower probability of being stationary (tending to non-stationary behavior) than B$2$B links. *Hence, from this analysis, in conjunction with the on-body results in [@chaganti2014narrowband], B$2$B communication shows significantly more stationarity than on-body communication.* - For B$2$B channels, the probability of satisfying WSS can depend on the sensor locations, as hub-to-hub and hub-to-sensor links show varying probabilities of satisfying the null hypothesis. - The probability of being stationary for all B$2$B channels decreases with increasing window length. Experimental Scenario ===================== We use an open-access dataset consisting of contiguous, extensive intra-BAN (on-body) and inter-BAN (body-to-body) channel gain data of around $45$ minutes, captured from $10$ closely located mobile subjects (adult male and female)[^2] with a $50$ ms sampling time. The subjects walked together to a crowded hotel bar, remained there for a while, and then walked back to the office. Each subject wore $1$ transmitter (Tx hub) on the left hip and $2$ receivers (sensors/relays) on the left wrist and right upper arm, respectively (Fig. \[b2b\]). The radios were transmitting at $0$ dBm power with $-100$ dBm receive sensitivity. A description of these wearable radios can be found in [@hanlen2010open] and the “open-access" dataset can be downloaded from [@smith2012body]. Each Tx transmits in a round-robin fashion, at $2.36$ GHz, with $5$ ms separation between transmissions, and also acts as an Rx capturing RSSI (Receive Signal Strength Indicator) values. 
We investigate the WSS of three different body-to-body (B$2$B) links, i.e., left hip to left hip (LH–LH), left hip to right upper arm (LH–RA) and left hip to left wrist (LH–LW), and average the results from $10$ BANs. An illustration of the B$2$B links found from different on-body sensor locations is shown in Fig. \[b2b\] with two BANs. We also selected single B$2$B links with good and bad conditions from the $10$ BANs (shown in Table \[table\_bw\]) to examine the best-case and worst-case stationarity of different B$2$B links. \[table\_bw\]

  ------------- ------------------ ------------------ -------------------
                 **LH–LH**          **LH–RA**          **LH–LW**
  Best-case      BAN$7$ – BAN$5$    BAN$5$ – BAN$7$    BAN$7$ – BAN$8$
  Worst-case     BAN$2$ – BAN$9$    BAN$2$ – BAN$1$    BAN$2$ – BAN$10$
  ------------- ------------------ ------------------ -------------------

Tests of Significance for WSS =============================
--- abstract: 'Stochastic resonance (SR) is a well known phenomenon in dynamical systems. It consists of the amplification and optimization of the response of a system assisted by stochastic noise. Here we carry out the first experimental study of SR in single DNA hairpins, which exhibit cooperative folding/unfolding transitions under the action of an applied oscillating mechanical force, using optical tweezers. By varying the frequency of the force oscillation, we investigated the folding/unfolding kinetics of DNA hairpins in a periodically driven bistable free-energy potential. We measured several SR quantifiers under varied conditions of the experimental setup, such as trap stiffness and length of the molecular handles used for single-molecule manipulation. We find that the signal-to-noise ratio (SNR) of the spectral density of measured fluctuations in molecular extension of the DNA hairpins is a good quantifier of the SR. The frequency dependence of the SNR exhibits a peak at a frequency value given by the resonance matching condition. Finally, we carried out experiments in short hairpins that show how SR might be useful to enhance the detection of conformational molecular transitions of low SNR.' author: - 'K. Hayashi$^{1,2}$, S. de Lorenzo$^{3}$, M. Manosas$^4$, J. M. Huguet$^1$ and F. Ritort$^{1,3,*}$' title: 'Single-molecule stochastic resonance' --- Introduction ============ All nonlinear systems that exhibit stochastic noise are susceptible to stochastic resonance (SR). When SR is triggered, the response of a system to an external forcing is amplified. SR has been studied in a large variety of systems, including climate dynamics [@benzi1; @benzi2], colloidal particles [@simon; @schmitt; @ciliberto], biological systems [@bio; @McDonAb09; @reviewer2], and quantum systems [@quantum1; @quantum2]. With the recent advent of single-molecule techniques, it is nowadays possible to measure SR at the level of individual molecules. 
Biomolecules exhibit rough and complex free energy landscapes that determine folding kinetics and influence the way they fold into their native structures. The use of force spectroscopy techniques has become common practice in studies of molecular biophysics. By applying a mechanical force to both extremities of an individual molecule and recording the time evolution of the molecular extension (the reaction coordinate in these experiments), information about the folding reaction can be obtained. The application of force makes it possible to disrupt the weak bonds that hold the native structure together and reach a stretched, unfolded conformation. In this way thermodynamics (e.g. the free energy of folding) and kinetics (the rates of unfolding and folding) can be determined. Although most SR studies use temperature as a tunable parameter, this is not the best choice for investigating SR effects at the single-molecule level. Biomolecules are strongly sensitive to temperature variations. Indeed, beyond increasing thermally assisted noise, temperature also modifies the shape of the molecular free energy landscape. Thus, another tunable parameter, such as the oscillation frequency of the force, is more appropriate for studying SR in biomolecules. SR appears as a maximum in the response of a biomolecule at a characteristic frequency (the resonance frequency). This occurs when a characteristic timescale of the signal (e.g. its decorrelation or relaxation time) matches half the period of the oscillation (the so-called matching condition). The matching condition must not be taken as a strict equality but as a qualitative relationship between the two timescales [@hanggi; @wellens]. This means that different SR quantifiers may not give coincident resonance frequencies, especially for low-quality resonance peaks. It therefore seems important to investigate which SR quantifier is best suited to identify SR behavior. 
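As one example of such a quantifier, the SNR of the spectral density can be estimated as the power in the bin at the driving frequency relative to the local noise background. The sketch below is our own illustration, not the authors' analysis code; the Welch segment length and the bandwidth used for the background estimate are assumptions.

```python
import numpy as np
from scipy.signal import welch

def spectral_snr(x, fs, f_drive, bw=0.5):
    """SNR quantifier for SR: power in the spectral bin at the driving
    frequency `f_drive`, relative to the local noise background estimated
    as the median power of neighbouring bins within +/- `bw` Hz."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 4096))
    peak = int(np.argmin(np.abs(f - f_drive)))
    neighbours = (np.abs(f - f_drive) < bw) & (np.arange(len(f)) != peak)
    return pxx[peak] / np.median(pxx[neighbours])
```

Sweeping the driving frequency and plotting this SNR would then reveal a maximum near the frequency singled out by the matching condition.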
![image](Figure1.eps){width="16cm"} In this work, we use optical tweezers to investigate SR in single DNA hairpins driven by oscillatory mechanical forces. The high chemical stability of DNA makes DNA hairpins excellent models to investigate SR at the single-molecule level. When force oscillates around the average unfolding force, thermally activated hopping kinetics between the folded (F) and unfolded (U) states synchronizes with the frequency of the external driving force, leading to SR. SR can be measured by recording the oscillations produced in the molecular extension, relative to the magnitude of the noise produced by the thermal forces. Our aim in this work is to perform a systematic study of SR in single molecules exhibiting bistable dynamics, rather than using SR as a tool to determine the kinetic properties of DNA hairpins. In fact, these can be estimated by other, much less time-consuming methods (e.g. by directly analyzing hopping traces). Yet, we also carry out SR studies in short hairpins that show how SR might prove useful to enhance the detection of conformational transitions of low SNR. The paper is organized as follows. In Section II, our experimental setup is explained. Our main SR results in DNA hairpins are presented in Section III, and the influence of the experimental conditions (i.e. dsDNA handle length and trap stiffness) is investigated in Section IV. We compare different SR quantifiers in Section V, and in Section VI we describe the related phenomenon of resonant activation. Finally, in Section VII, we use purposely designed short DNA sequences with increased signal noise to test whether SR can still be used to identify the hopping rate. In the last section, we summarize our conclusions and discuss situations where SR might be a useful technique. Experimental setup and Hopping Experiments ========================================== In Fig. 
1a, we show a schematic illustration of our experimental setup (left) and the DNA sequence of hairpin H1 that we investigated (upper right). The DNA hairpin is tethered between two short dsDNA handles (29 bp) that are linked to micron-size beads [@Nuria]. One bead is captured in the optical trap whereas the other is immobilized at the tip of a glass pipette [@footnote1]. By moving the position of the optical trap relative to the pipette, a force is exerted at the extremities of the hairpin. In a pulling experiment, the optical trap is moved away from the pipette and mechanical force is applied to the ends of the DNA construct (DNA hairpin plus DNA handles) until the value of the force at which the hairpin unfolds is reached. In the reverse process, the trap approaches the pipette and the force is relaxed until the hairpin refolds. In this experiment, the force exerted upon the system, $f$, is recorded as a function of the relative trap-pipette distance giving the so-called force-distance curve (Fig. 1a, lower right). Around the co-existence force, $f_{\rm c}\simeq14.5$ pN, the hairpin hops between the F and U states for sufficiently low pulling speeds. Hopping experiments can be done in two different modes: constant force mode (CFM) and passive mode (PM) [@exp2; @exp3]. In the CFM, the force applied to the DNA construct is maintained at a preset value by moving the optical trap through force-feedback control (Fig. 1b, upper). The folding and unfolding transitions of the DNA hairpin are followed by recording the trap position, $X(t)$. In contrast to the CFM, the PM is operated by leaving the position of the optical trap stationary without any feedback. The bead passively moves in the trap in response to changes in the extension of the DNA construct (Fig. 1b, lower). When the hairpin unfolds, the trapped bead moves toward the trap center and the force decreases; when the hairpin folds, the trapped bead is pulled away from the trap center and the force increases. 
The folding and unfolding transitions of the DNA hairpin are followed by recording the force, $f(t)$. In both cases (CFM and PM), the kinetic rates of hopping can be measured from the residence times of the trace ($X(t)$ in the CFM and $f(t)$ in the PM). Fig. 1b shows hopping traces measured in the CFM and PM at the co-existence force, $f_{\rm c}\simeq 14.5$ pN, where the hairpin hops between the F and U states, populating them with equal probability (i.e. it spends equal time in both states). In this work, we focused on the experiments at controlled force, rather than at fixed trap position. Both the hopping and the oscillation experiments (described below) were carried out using the force feedback control. The reason is that the controlled force experiments avoid undesirable drift effects in force that strongly affect the kinetics of the hairpin (see Methods). Therefore we mainly carried out the experiments in the CFM by recording the position of the trap, $X(t)$. This signal exhibits dichotomic motion between the two distinct levels of extension (Fig. 1b, upper left). The difference between the two levels (short extension, folded; long extension, unfolded) reflects the release in extension ($\simeq 18$ nm) of the 44 nucleotides of hairpin H1. From $X(t)$ we can extract the residence time distribution at each state, which shows the exponential form characteristic of first-order decay processes (Fig. 1c). The fit of the time distribution to an exponential function allows us to obtain the average residence time. The force-dependent kinetic rates (equal to the inverse of the mean lifetimes), $k_{\rm FU}$ and $k_{\rm UF}$, were measured at the co-existence force, $f_{\rm c}=14.5\pm0.3$ pN, giving $k_{\rm c}=k_{\rm FU}^{\rm c}=k_{\rm UF}^{\rm c}\simeq$ $0.66\
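The residence-time analysis described above can be sketched as follows: a dichotomic trace is split into dwell times per state, and the mean dwell time in each state estimates the inverse of the corresponding kinetic rate. This is an illustrative sketch with hypothetical switching probabilities and unit time steps, not the experimental analysis pipeline.

```python
import numpy as np

def simulate_hopping(p_fu, p_uf, n_steps, seed=0):
    """Discrete-time two-state Markov sketch of F<->U hopping: at each time
    step the molecule switches state (0 = folded, 1 = unfolded) with the
    given per-step probability."""
    rng = np.random.default_rng(seed)
    state, out = 0, np.empty(n_steps, dtype=int)
    for i in range(n_steps):
        out[i] = state
        if rng.random() < (p_fu if state == 0 else p_uf):
            state = 1 - state
    return out

def residence_times(trace):
    """Split a dichotomic 0/1 trace into dwell durations for each state."""
    trace = np.asarray(trace)
    starts = np.concatenate(([0], np.flatnonzero(np.diff(trace)) + 1, [len(trace)]))
    durations = np.diff(starts)
    states = trace[starts[:-1]]
    return durations[states == 0], durations[states == 1]
```

Fitting the full dwell-time histogram to an exponential, as in the text, additionally checks that the kinetics are first order.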
--- abstract: '$N$-jettiness subtractions provide a general approach for performing fully-differential next-to-next-to-leading order (NNLO) calculations. Since they are based on the physical resolution variable $N$-jettiness, ${\mathcal{T}}_N$, subleading power corrections in $\tau={\mathcal{T}}_N/Q$, with $Q$ a hard interaction scale, can also be systematically computed. We study the structure of power corrections for $0$-jettiness, ${\mathcal{T}}_0$, for the $gg\to H$ process. Using the soft-collinear effective theory we analytically compute the leading power corrections $\alpha_s \tau \ln\tau$ and $\alpha_s^2 \tau \ln^3\tau$ (finding partial agreement with a previous result in the literature), and perform a detailed numerical study of the power corrections in the $gg$, $gq$, and $q\bar q$ channels. This includes a numerical extraction of the $\alpha_s\tau$ and $\alpha_s^2 \tau \ln^2\tau$ corrections, and a study of the dependence on the ${\mathcal{T}}_0$ definition. Including such power suppressed logarithms significantly reduces the size of missing power corrections, and hence improves the numerical efficiency of the subtraction method. Having a more detailed understanding of the power corrections for both $q\bar q$ and $gg$ initiated processes also provides insight into their universality, and hence their behavior in more complicated processes where they have not yet been analytically calculated.' author: - Ian Moult - Lorena Rothen - 'Iain W. Stewart' - 'Frank J. Tackmann' - Hua Xing Zhu bibliography: - '../subleading.bib' date: 'October 9, 2017' title: 'N-Jettiness Subtractions for $gg\to H$ at Subleading Power' --- Introduction {#sec:intro} ============ Our ability to perform next-to-next-to-leading order (NNLO) calculations for cross sections of phenomenological importance is crucial for theory predictions to match the precision of Run 2 measurements at the LHC. 
Due to significant recent progress a number of NNLO subtraction techniques are now available for hadron-hadron collisions, and have been successfully demonstrated both for color-singlet production [@Catani:2007vq; @Caola:2017dug], as well as for cross sections involving jets in the final state [@GehrmannDeRidder:2005cm; @Czakon:2010td; @Boughezal:2011jf; @Czakon:2014oma; @Boughezal:2015aha; @Gaunt:2015pea]. However, particularly when final state jets are involved, these techniques remain complicated and computationally expensive. Improving the numerical efficiency and theoretical understanding of NNLO subtraction schemes is therefore of significant interest. In this paper, we focus on improving the understanding of the infrared structure of $N$-jettiness subtractions [@Boughezal:2015aha; @Gaunt:2015pea], which is a nonlocal subtraction method based on the $N$-jettiness resolution variable ${\mathcal{T}}_N$ [@Stewart:2009yx; @Stewart:2010tn]. $N$-jettiness subtractions provide a powerful and simple method that is in principle applicable for an arbitrary number of jets in the final state. They have been applied to $W/Z/H/\gamma+$ jet at NNLO [@Boughezal:2015dva; @Boughezal:2015aha; @Boughezal:2015ded; @Boughezal:2016dtm; @Campbell:2017dqk], as well as inclusive photon production [@Campbell:2016lzl], and have been implemented in MCFM8 for color-singlet production [@Campbell:2016jau; @Campbell:2016yrh; @Boughezal:2016wmq; @Campbell:2017aul]. They have also been used to calculate single-inclusive jet production in $ep$ collisions at NNLO [@Abelof:2016pby]. The $N$-jettiness subtraction scheme has the advantage that it is simple to implement using known NNLO results from the literature, can be interfaced with resummation or parton shower programs,[^1] and is conceptually simple to extend to higher perturbative orders. 
An important feature of the $N$-jettiness subtraction scheme is that it is based on a physical infrared-safe observable, $N$-jettiness ${\mathcal{T}}_N$, and the subtraction terms are determined by the behavior of ${\mathcal{T}}_N$ in the soft and collinear limits. Using our understanding of the simplifications of gauge theories in these limits allows the subtraction terms to be systematically computed as an expansion in $\tau \equiv {\mathcal{T}}_N/Q$, with $Q$ a typical hard interaction scale. The leading terms in the $\tau\to 0$ limit are naively nonintegrable divergences that are properly defined as plus-functions, $[\ln^k\tau /\tau]_+$, and are required for the subtractions. These terms are described by well-established factorization formulas valid to all orders in $\alpha_s$. For the case of $N$-jettiness, these formulas were determined in Refs. [@Stewart:2009yx; @Stewart:2010tn] using soft collinear effective theory (SCET) [@Bauer:2000ew; @Bauer:2000yr; @Bauer:2001ct; @Bauer:2001yt; @Bauer:2002nz]. They are expressed in terms of universal soft, jet, and beam functions. The required ingredients to compute the leading-power subtraction terms at NNLO are the NNLO jet [@Becher:2006qw; @Becher:2010pd] and beam [@Gaunt:2014xga; @Gaunt:2014cfa] functions (the spin-dependent quark beam functions were recently computed to NNLO [@Boughezal:2017tdd]), which are process independent, as well as the soft function, which depends on the number of external colored partons, $n$, in the Born process. The soft function is known analytically at next-to-leading order (NLO) for arbitrary $n$ [@Jouttenus:2011wh], and at NNLO it is known analytically for $n=2$ [@Kelley:2011ng; @Monni:2011gb; @Hornig:2011iu; @Kang:2015moa], and numerically for $n=3$ [@Boughezal:2015eha], and with a third massive parton [@Li:2016tvb]. The leading-logarithmic (LL) terms at subleading order in $\tau$ were analytically computed at NLO and NNLO for Drell-Yan like processes in Refs. 
[@Moult:2016fqy; @Boughezal:2016zws]. Including these improves the subtractions by an order of magnitude, with further improvements possible by computing additional subleading logarithms. In Ref. [@Moult:2016fqy] it was also shown that the rapidity dependence of the power corrections strongly depends on the observable definition. For the specific definition of ${\mathcal{T}}_N$ in the hadronic frame that had been used in some implementations, the power corrections grow exponentially with the rapidity $Y$ of the Born system, and the power expansion is in ${\mathcal{T}}_N e^{|Y|}$, instead of ${\mathcal{T}}_N$, causing it to break down at large $Y$. On the other hand, using the definition of ${\mathcal{T}}_N$ [@Stewart:2009yx; @Stewart:2010tn] that takes into account the boost of the Born system results in a well-behaved power expansion, with power corrections that are approximately flat in $Y$. Unlike the leading-power factorization, which has the same structure for any color-singlet production, the universality of subleading power corrections is not well understood, and it is therefore important to understand their behavior in other processes. In this paper, we present a detailed study of the power corrections in ${\mathcal{T}}_0$ for the gluon fusion, $gg\to H$, process. We analytically compute the LL correction at both NLO and NNLO for all partonic channels, namely $gg$, $g q$, and $q\bar q$, and at NLO we also compute the next-to-leading-logarithmic (NLL) contribution for the $q\bar q$ channel, which is the first nonzero contribution from this channel. We then perform a detailed numerical study using $H+1$ jet NLO results from MCFM8 [@Campbell:1999ah; @Campbell:2010ff; @Campbell:2015qma; @Boughezal:2016wmq]. This provides both a confirmation of our analytic calculation, and enables us to study the extent to which the power corrections are well described by the LL approximation. We also study the rapidity and observable dependence of the power corrections. 
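Schematically, the structure discussed above can be summarized for the cumulative ${\mathcal{T}}_0$ cross section; the coefficients $c^{(2)}_{m,k}$ below are generic placeholders sketching the form of the expansion, not the computed values:

```latex
\Sigma(\tau) \equiv \int_0^{\tau} \! \mathrm{d}\tau' \,
   \frac{\mathrm{d}\sigma}{\mathrm{d}\tau'}
 = \Sigma^{(0)}(\tau)
 \;+\; \frac{\alpha_s}{4\pi}\, c^{(2)}_{1,1}\, \tau \ln\tau
 \;+\; \left(\frac{\alpha_s}{4\pi}\right)^{\!2} c^{(2)}_{2,3}\, \tau \ln^{3}\tau
 \;+\; \ldots
```

Here $\Sigma^{(0)}(\tau)$ collects the leading-power terms generated by the $\delta(\tau)$ and $[\ln^k\tau/\tau]_+$ distributions of the factorization theorem, while the $\alpha_s\,\tau\ln\tau$ and $\alpha_s^2\,\tau\ln^3\tau$ terms are the LL power corrections computed analytically; the $\alpha_s\,\tau$ and $\alpha_s^2\,\tau\ln^2\tau$ terms extracted numerically enter at the next logarithmic order.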
The analytic LL power corrections for ${\mathcal{T}}_0$ in the hadronic frame for the $gg$ and $gq$ channels were first computed in Ref. [@Boughezal:2016zws]. While we agree with the results of Ref. 
--- abstract: 'An empirical model for forecasting solar wind speed related geomagnetic events is presented here. The model is based on the estimated location and size of solar coronal holes. This method differs from models that are based on photospheric magnetograms (e.g., Wang-Sheeley model) to estimate the open field line configuration. Rather than requiring the use of a full magnetic synoptic map, the method presented here can be used to forecast solar wind velocities and magnetic polarity from a single coronal hole image, along with a single magnetic full-disk image. The coronal hole parameters used in this study are estimated with Kitt Peak Vacuum Telescope He [I]{} 1083 nm spectrograms and photospheric magnetograms. Solar wind and coronal hole data for the period between May 1992 and September 2003 are investigated. The new model is found to be accurate to within $10\%$ of observed solar wind measurements for its best one-month periods, and it has a linear correlation coefficient of $\sim$0.38 for the full 11 years studied. Using a single estimated coronal hole map, the model can forecast the Earth directed solar wind velocity up to 8.5 days in advance. In addition, this method can be used with any source of coronal hole area and location data.' author: - - - title: Solar Wind Forecasting with Coronal Holes --- Introduction ============ Prediction of space weather near the Earth is a major goal of solar research. An important aspect of attaining this goal is to accurately describe the solar drivers of space weather. The drivers are the solar wind and the various phenomena that shape and modulate that wind. Among the earliest findings from space observations of the solar wind was that it consisted of recurrent low-speed, dense streams and high-speed tenuous streams, and that the latter were strongly associated with increased geomagnetic activity [@Synder63]. 
Many suggestions were made that the high-speed solar wind streams might be associated with regions on the Sun having magnetic fields open to interplanetary space. When coronal holes were found to be regions likely to have open magnetic fields [@Alt72], it was not long until , and several other investigators, demonstrated a link between open-field coronal holes, high-speed solar wind streams, and enhanced geomagnetic activity. In an effort to strengthen this linkage, constructed time-stacked diagrams of coronal holes, solar wind speed and geomagnetic activity in one-rotation-long rows. The diagrams covering the years 1973-1975 strongly supported the linkage and the authors suggested that observations of coronal holes could be used to predict the arrival of high-speed streams and their associated geomagnetic activity a week in advance. ![A sample NSO/KPVT computer-assisted hand-drawn coronal hole image (left) and an EIT 19.5 nm Fe XII emission line image (right) for July 14, 2003 at approximately 17 UT. Note that the coronal hole regions appear dark in the EIT image.](fig01b.ps "fig:"){width="2.2in"} ![A sample NSO/KPVT computer-assisted hand-drawn coronal hole image (left) and an EIT 19.5 nm Fe XII emission line image (right) for July 14, 2003 at approximately 17 UT. Note that the coronal hole regions appear dark in the EIT image.](fig01a.ps "fig:"){width="2.2in"} Coronal holes are best seen against the solar disk as low-intensity regions in space observations of material at coronal temperatures. This can also be done from the ground using radio observations. found that coronal holes could be seen faintly in ground-based images made with helium lines such as 587.6 and 1083.0 nm because the strength of these lines is partly controlled by the intensity of overlying coronal radiation (see, e.g., ). A program of regular 1083 nm observations has been conducted by the National Solar Observatory (NSO) Kitt Peak Vacuum Telescope (KPVT) starting in 1974. 
Among the derived products are estimates of the locations and magnetic polarity of coronal holes. An example coronal hole estimate image derived from a KPVT observation is shown in Figure 1, along with a 19.5 nm Fe XII emission line image measured by the Extreme Ultraviolet Imaging Telescope (EIT) for comparison. describe how coronal holes are identified using KPVT He [I]{} 1083 nm observations. Predictions of solar wind speed at Earth are regularly made by several groups based on solar potential field extrapolations (e.g.,\ http://solar.sec.noaa.gov/ws/, http://www.lmsal.com/forecast/,\ http://bdm.iszf.irk.ru/Vel.html, and http://gse.gi.alaska.edu) and interplanetary scintillation [@Hew1964] observations (e.g.,\ http://cassfos02.ucsd.edu/solar/forecast/index.html, and\ http://stesun5.stelab.nagoya-u.ac.jp/forecast/). The former set of forecasts are based on extrapolation of photospheric longitudinal magnetic field measurements using a potential field assumption to locate open field lines. Using one of these models, a modified @Wang90 ([-@Wang90; -@Wang92]) flux-transport model, studied a three-year period centered about the May 1996 solar minimum. They compared predicted solar wind speed and magnetic polarity with observations near Earth. Their three-year sample period had an overall correlation of $\sim0.4$ with observed solar wind velocities and an average fractional deviation, $\xi$, of $0.15$, where $\xi = \left\langle {{({\rm{prediction - observed})}}/{{\rm{observed}}}} \right\rangle$. When excluding a 6-month period with large data gaps, they correctly forecast the solar wind to within $10-15\%$. Interplanetary magnetic field (IMF) polarity was correctly forecast $\sim75\%$ of the time. In this paper, we address the suggestion of that observations of coronal hole regions can be used to predict the solar wind speed at Earth as much as a week in advance. 
In addition, the model presented here is based on observations that find moderate and high-speed solar wind streams are associated with small and large near-equatorial coronal holes, respectively [@Nolte1976]. Here we correlate the coronal hole percent area coverage of sectoral regions of the observed solar surface with solar wind measurements to derive a simple empirical model (discussed in Sections 2 through 5). As a measure of the merit of this model for solar wind forecasting, we compare predictions with observations and contrast this technique with the ones based on magnetic field extrapolations (Sections 5 and 6). Model Input Data {#data} ================ The coronal hole data used here are based on KPVT observations from May 28, 1992 through September 25, 2003 (i.e. the last half of cycle 22 and the first half of cycle 23). The coronal hole locations and area estimates are from computer-assisted, hand-drawn maps (see Figure 1) based upon the KPVT He [I]{} 1083 nm images and photospheric magnetograms [@KHar02]. For this investigation, the estimated coronal hole boundaries were mapped into sine-latitude and longitude to create heliographic images. The coronal hole region image pixels are set to a value of 1, whereas the background is defined as 0. For the time period analyzed here, the KPVT coronal hole maps have a $69\%$ daily coverage. The solar wind speed data utilized here was obtained from the OMNIWeb website (http://nssdc.gsfc.nasa.gov/omniweb/) provided by the National Space Science Data Center. Daily averages of the solar wind speed time series were created with the approximate cadence of the KPVT-based coronal hole maps. For the time period analyzed here, the solar wind speed time series has a $92\%$ daily coverage. Data gaps in the time series are interpolated using a cubic spline. 
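The gap-filling step just described can be sketched as follows; this is a minimal illustration using a natural cubic spline, and the function and variable names are ours, not from the paper's pipeline.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def fill_gaps(days, speeds):
    """Fill missing daily solar-wind-speed samples (NaN entries) with a
    cubic spline fitted through the days that do have measurements."""
    days = np.asarray(days, dtype=float)
    speeds = np.asarray(speeds, dtype=float)
    good = ~np.isnan(speeds)
    spline = CubicSpline(days[good], speeds[good])
    filled = speeds.copy()
    filled[~good] = spline(days[~good])  # interpolate only the gap days
    return filled
```

For short gaps the spline stays close to the neighbouring daily averages, which is all the correlation analysis requires; long gaps would of course be less reliable.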
Solar Wind Correlation Analysis {#analysis} =============================== For comparison with the solar wind speed time series, each heliographic coronal hole image was divided into 23 swaths (i.e. sectoral regions) $14$ degrees wide in longitude, overlapped by 7 degrees. The approximately $1$-day-wide longitudinal window was selected to correspond with the temporal cadence of the KPVT observations. These sectoral samples are then summed, where each pixel corresponding to a coronal hole is equal to 1, to yield a percent coverage of that area by coronal holes. For a given coronal hole image, there may be no coronal hole regions observed at that time, or only a few. For example, swath sectors with no coronal hole regions would yield a hole coverage of zero percent. This is repeated for each coronal hole image available in the $11$-year period to form a coronal hole time series for each of the 23 sectoral samples. Each sectoral time series is then interpolated into the time frame of the solar wind velocity data. The correlation and time lag between the time series were estimated with weighted cross-correlations (e.g. ). The weighted cross-correlation simplifies the analysis by allowing the use of the continuous time series. The gap-filled data are given small weights to minimize their contribution, while the measured or derived values are given relatively large, equal weights. In addition, following , periods of CME events were estimated using the plasma $\beta$ value (obtained from the OMNIWeb data set) when $\beta \le 0.1$. For periods estimated to correspond to a coronal mass ejection (
--- abstract: 'In Aldous and Shields (1988), a model for a rooted, growing random binary tree was presented. For some $c>0$, an external vertex splits at rate $c^{-i}$ (and becomes internal) if its distance from the root (depth) is $i$. For $c>1$, we reanalyse the tree profile, i.e. the numbers of external vertices in depth $i=1,2,...$. Our main results are concrete formulas for the expectation and covariance-structure of the profile. In addition, we present an application of the model to cellular ageing. Here, we assume that nodes in depth $h+1$ are senescent, i.e. do not split. We obtain a limit result for the proportion of non-senescent vertices for large $h$.' author: - '[by K. Best[^1] $\mbox{}^,$[^2]$\;$, P. Pfaffelhuber$\mbox{}^{\ast,\dagger,\,}$[^3]]{}' title: | The Aldous-Shields model revisited\ (with application to cellular ageing) --- Introduction ============ Trees arise in several applied sciences: in linguistics and biology, trees describe the relationship of items (languages, species), and in computer science, trees are used as data structures, e.g. for sorting. Randomizing the input leads to random trees, which are the object of a large body of research. For applications in biology, see e.g. [@Berest09; @Felsenstein2002]. Here, important examples are trees arising from branching processes (e.g. Yule trees). In computer science, prominent examples are search trees; see e.g. [@Neininger2004; @Drmota2009]. In this note, we are concerned with an application of random trees in cellular biology. In the 1960s it was known that eukaryotic cells have a limited replication capacity ([@pmid13905658]). The number of generations until cells do not proliferate any more is today known as the *Hayflick limit*, and the phenomenon that cells lose their ability to proliferate is called *cellular senescence*. The molecular basis for cellular senescence was uncovered starting in the 1970s.
A theory was developed which argued that during each round of replication, the *telomeres* (which are the end parts of each chromosome) are shortened due to physical constraints of the DNA copying mechanism ([@pmid9415101]). In humans, these telomeres are a multiple (i.e. more than 1000-fold) repetition of the base pairs TTAGGG, and up to 200 bases are lost in each replication round ([@Levy:1992:J-Mol-Biol:1613801]). Most importantly, telomeres have a stabilizing effect on the DNA. The *DNA repair mechanism* of a cell must be able to distinguish between usual DNA breaks (which it is assumed to repair) and the telomeres (which it is assumed to ignore). Hence, when telomeres become shorter, this stabilizing effect ceases and ageing occurs. It can be observed that telomeres shrink from 15 kilobases at birth to less than 5 kilobases during a lifetime ([@pmid15471900]). However, the enzyme *telomerase* is known to be able to decrease the loss of telomeres during replication. This enzyme has been found to be active in stem cells and cancer cells, which both are cell types with an (almost) unbounded replication potential. The deeper understanding of the role of telomeres and telomerase is an active field of research because of the medical implications for ageing and cancer. In particular, research in this field was awarded the Nobel Prize in Medicine in 2009 ([@pmid19815741]). We study the model of random trees introduced by Aldous and Shields in [@AldousShields:1988:ProbTh] (hereafter referred to as \[AS\]) and extend it for an application to cellular ageing. Given some $c>0$ and a full binary tree $\mathbb T$, the model introduced in \[AS\] describes the evolution of the vertices of the tree. Here, we distinguish *internal*, *external* and *prospective* vertices. At $t=0$, the root is the only external vertex (and there are no internal vertices). An external vertex $u\in\mathbb T$ in depth $|u|$ becomes internal at rate $c^{-|u|}$.
At the time it becomes internal, the two daughter vertices in depth $|u|+1$ become external. We present our result on the profile of the Aldous-Shields model in Theorem \[Th1\]. For our application to cellular senescence, we will analyze a relative of the Aldous-Shields model for $c>1$. Here, a critical depth $h$ is fixed, and only external vertices in depth at most $h$ can become internal. External vertices in depth $h+1$ never become internal. Here, external vertices can be thought of as cells. The depth of a vertex is the number of generations from the first cell. Vertices in depth at most $h$ represent proliferating cells, because they are able to produce offspring (i.e. daughter cells). Vertices in depth $h+1$ represent senescent cells. This model has two features which appear realistic for cellular senescence. First, the rate of cell proliferation decreases with the generation of a cell, parameterized by $c>1$. Second, cells which have already split too often lose their ability to proliferate at all. For this model, we obtain a limit result for the frequency of proliferating cells in Theorem \[T2\]. The paper is organized as follows: In Section \[sec:model\], we state our results on the Aldous-Shields model. The application to cellular senescence is carried out in Section \[sec:application\], where we also give an overview of other models for cellular senescence in the literature. Section \[sec:proofs\] contains the proofs of our results on the Aldous-Shields model (Theorem \[Th1\]), and in Section \[sec::proofs2\], we give proofs of the results on the model of cellular ageing (Theorem \[T2\]). Model and results {#sec:model} ================= We start by introducing some notation.
Let $\mathbb T$ be the complete binary tree, given through $$\mathbb T = \bigcup_{n=0}^\infty \mathbb T_n$$ and $$\mathbb T_0 = \{\emptyset\}, \qquad \qquad \mathbb T_n = \{0,1\}^n\; \text{ for }n=1,2,...$$ We refer to elements in $\mathbb T$ as *vertices* and identify $u\in\mathbb T_n$ with a word of length $n$ over the alphabet $\{0,1\}$, whose $i$th letter is $u_i$, $n\geq 1$. The vertex $\emptyset$ is the root of the tree and vertex $u\in\mathbb T$ has two daughter vertices, $u0$ and $u1$. (We make the convention that $\emptyset 0 := 0, \emptyset 1:=1$.) For $u\in\mathbb T$ we set $|u|=n$ iff $u\in\mathbb T_n$. We say that $u$ is an ancestor of $v$ if $|u|<|v|$ and there are $i_1,...,i_{|v|-|u|}\in\{0,1\}$ with $v= u i_1\cdots i_{|v|-|u|}$. The ancestor relation induces a transitive partial order on $\mathbb T$, and we write $u\prec v$ iff $u$ is an ancestor of $v$. Fix $c>0$. The (time-continuous) *Aldous-Shields model* with parameter $c$ is a Markov jump process $\mathcal Y = (Y(t))_{t\geq 0}$, $Y(t) = (Y_u(t))_{u\in\mathbb T}$ with state space $\{0,1\}^{\mathbb T}$, starting in $Y(0) = (\mathbbm{1}_{u=\emptyset})_{u\in\mathbb T}$. Given $Y(t) = y\in\{0,1\}^{\mathbb T}$ and $u\in\mathbb T$ with $y_u=1$, it jumps to $(\widetilde y_{v})_{v\in\mathbb T}$, given by $$\widetilde y_v = \begin{cases} 0, & v=u,\\ 1, & v=u0 \text{ or } v=u1,\\ y_v, & \text{else,}\end{cases}$$ at rate $c^{-|u|}$. In this case, we say that vertex $u$ splits. Let $\mathcal Y = (Y(t))_{t\geq 0}$ be the Aldous-Shields model and $Y=Y(t)$ for some $t\geq 0$. It is important to note that the dynamics is such that any path $\emptyset, i_1, i_1 i_2,...\in\mathbb T$ with $i_1, i_2,...\in\{0,1\}$, starting at the root, has exactly one element $u$ with $Y_u=1$. In particular, the sets $$\{u: \exists v: u\prec v, Y_v=1\}, \qquad \{u: Y_u=1\}, \
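The jump dynamics just defined can be simulated directly with a Gillespie-type scheme. The sketch below is an illustration under the stated rates, including the depth-capped senescence variant with critical depth $h$ described earlier; the string encoding of vertices and all parameter values are choices made here, not part of the original model description.

```python
import random

def aldous_shields(c, t_max, h=None, rng=random):
    """Gillespie simulation of the Aldous-Shields model: an external vertex u
    splits at rate c**(-|u|), replacing itself by its daughters u0 and u1.
    If h is given (the senescence variant), vertices deeper than h never split.
    Vertices are encoded as 0/1 strings, with '' the root.  Returns the set
    of external vertices at time t_max."""
    def rate(u):
        if h is not None and len(u) > h:
            return 0.0                  # senescent: never splits
        return c ** (-len(u))

    external = {''}                     # at t = 0 the root is the only external vertex
    t = 0.0
    while True:
        total = sum(rate(u) for u in external)
        if total == 0.0:                # every external vertex is senescent
            return external
        t += rng.expovariate(total)     # waiting time to the next split
        if t > t_max:
            return external
        # choose the splitting vertex with probability rate(u) / total
        r = rng.uniform(0.0, total)
        for u in external:
            r -= rate(u)
            if r <= 0.0:
                break
        external.remove(u)
        external.update((u + '0', u + '1'))
```

Note that the simulation preserves the property stated above: the external vertices always form a cutset meeting each root-to-infinity path exactly once, so the Kraft sum $\sum_u 2^{-|u|}$ over external vertices stays equal to $1$.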
--- abstract: 'We use the oxDNA coarse-grained model to provide a detailed characterization of the fundamental structural properties of DNA origamis, focussing on archetypal 2D and 3D origamis. The model reproduces well the characteristic pattern of helix bending in a 2D origami, showing that it stems from the intrinsic tendency of anti-parallel four-way junctions to splay apart, a tendency that is enhanced both by less screened electrostatic interactions and by increased thermal motion. We also compare with a 3D origami whose structure has been determined by cryo-electron microscopy. The oxDNA average structure has a root-mean-square deviation from the experimental structure of 8.4 Å, which is of the order of the experimental resolution. These results illustrate that the oxDNA model is capable of providing detailed and accurate insights into the structure of DNA origamis, and has the potential to be used to routinely pre-screen putative origami designs.' author: - 'Benedict E. K. Snodin' - 'John S. Schreck' - Flavio Romano - 'Ard A. Louis' - 'Jonathan P. K. Doye' title: 'Coarse-grained modelling of the structural properties of DNA origami' --- Introduction ============ DNA nanotechnology seeks to use the specificity of Watson-Crick base pairing and the programmability possible through the DNA sequence to design self-assembling nanoscale DNA structures and devices. The most prevalent technique used is probably that of DNA origami, in which a long viral “scaffold” DNA single strand is folded up into virtually any arbitrary structure by the addition of many different “staple” strands that bind to multiple specific domains on the scaffold [@Linko13; @Hong17]. The initial designs were two-dimensional [@Rothemund06] but were soon generalized to three-dimensional shapes [@Douglas09], and then to bent, twisted [@Dietz09] and curved [@Han11] structures through the introduction of internal mechanical stresses.
The increasing usage of origamis was particularly facilitated by the development of computer-aided design tools, such as cadnano [@Douglas09b]. These original approaches produced structures involving mainly bundles of locally parallel double helices held together by four-way junctions. More recently, scaffolded origami approaches have been developed that generate more open “wireframe” structures [@Benson15; @Veneziano16; @Matthies16]. The structural control and the addressability provided by the DNA origami technique have naturally led to many types of applications, particularly in the areas of biosensing, drug delivery and nanofabrication [@Wang17; @Liu18]. In Rothemund’s original paper, the structure of the origamis was characterized by atomic force microscopy (AFM). The images were used to confirm that the origamis had folded into the designed structures without significant defects, and identified structural features of the origamis, such as what we here term the “weave” pattern: rather than being straight, the helices splay out between four-way junctions, leading to the characteristic pattern in which the helices weave back and forth between adjacent helices [@Rothemund06]. Such microscopy studies (by AFM and transmission electron microscopy) are probably the most prevalent way of characterizing the structures of DNA origamis, but are usually limited in terms of the fine-grained detail that can be obtained. Furthermore, adsorption onto a surface may perturb the structure, especially for 2D origamis, which may be flattened and made to look more ordered because of the suppression of out-of-plane thermal fluctuations. Solution-based measurements can be performed by, for example, small-angle X-ray scattering (SAXS) and FRET, but SAXS interpretation usually requires a structural model (and its computed SAXS pattern) for comparison [@Andersen09; @Fischer16; @Bruetzel16; @Baker18].
FRET can potentially provide detailed measurements of selected distances, but has been relatively little used for detailed structural analysis of origamis [@Stein11; @Funke16]. Cryo-EM can potentially provide the most detailed structural analysis. For example, Bai [*et al.*]{} were able to obtain a high-resolution structure for a three-dimensional origami where an all-atom structure was fitted to the obtained electron density maps [@Bai12]. However, such detailed studies are unlikely to become routine. More commonly, cryo-EM has been used at a lower level of resolution, particularly for polyhedral nanostructures [@He08; @Kato09]. Very recently, particle electron tomography has also begun to be applied, allowing visualization of the 3D structure of individual DNA nanostructures [@Lei18; @Wagenbauer17b]. Given both the difficulty of obtaining high-resolution structural information and the potential utility of being able to predict structural properties prior to experimental realization, computational modelling of the structure of DNA origamis has the potential to play a significant role in the field [@Jabbari15]. All-atom simulations have the potential to provide the most detailed structural insights [@Yoo13; @Wu13; @Li15; @Gopfrich16; @Maffeo16; @Lee17]. Notably, the Aksimentiev group have simulated a number of origamis [@Yoo13; @Li15; @Gopfrich16; @Maffeo16], including even an origami nanopore inserted into a membrane [@Gopfrich16]. However, such simulations are extremely computationally intensive and cannot be performed routinely. Furthermore, even for the relatively stiff origamis considered in these studies, it is not clear that they have fully equilibrated on the simulation time scales [@Maffeo16].
More promising as a general tool is an approach where only the atoms of the origami (but not the water environment) are simulated and an elastic network is used to constrain the origami in its assembled state; these constraints are applied to the base pairing and base stacking interactions, and also to the distance between neighboring helices [@Maffeo16]. A computationally less expensive approach is to use coarse-grained models in which the basic units are no longer atoms, but some larger moiety, be it a nucleotide [@Ouldridge11; @Sulc12; @Snodin15], a base pair [@mergell03; @Arbona12] or a section of a double helix [@Castro11; @Kim12; @Pan14; @Sedeh16; @Reshetnikov18; @Hemmig18]. Such approaches of course inevitably lead to a lower level of structural detail, and the accuracy of their properties will depend on the quality of the parameterization. By far the most widely-used approach is “cando” as it allows efficient and reliable structural screening of potential origami designs through a simple-to-use web interface [@Castro11; @Kim12; @Pan14; @Sedeh16]. However, its lack of excluded volume interaction means that it may not be appropriate for flexible origamis whose structure is not fully mechanically constrained. Furthermore, as with any model whose basic unit is above the level of a nucleotide, there is no coupling to intra-base-pair degrees of freedom; consequently processes such as duplex fraying, junction migration, and breaking of base pairs due to internal stresses cannot be resolved. Finally, it has a simplified representation of single-stranded DNA, and so cannot take into account, for example, secondary structure formation. All these potential deficiencies can be addressed by a nucleotide-level model, albeit at greater computational expense. 
Although there are a number of such models at this level of detail [@MorrisAndrews10; @Hinckley13; @Chakraborty18], here we explore in detail the description of DNA origamis provided by the oxDNA model [@Ouldridge11; @Sulc12; @Snodin15]. This model has been particularly successful at describing a wide variety of biophysical properties of DNA [@Ouldridge11; @Romano13; @Ouldridge13b; @Matek15; @Snodin15; @Harrison15; @Skoruppa17], and has been applied to a significant number of DNA nanotechnology systems [@Ouldridge10; @Ouldridge13; @Doye13; @Sulc14; @Machinek14; @Kocar16; @Snodin15; @Snodin16; @Schreck16; @Shi17; @Sharma17; @Khara18; @Fonseca18; @Engel18]. What are the features that make the oxDNA model particularly appropriate to study DNA origamis? Firstly, it is able to accurately reproduce DNA’s basic structural properties. Properties such as the DNA pitch are particularly important, as the large size of DNA origamis means that small deviations can lead to internal stresses that lead to global twisting of the origami—note that in the second version of the oxDNA model the duplex pitch and the twist at nicks and junction were fine-tuned to correct just such an issue [@Snodin15]. Secondly, it is able to capture the mechanical properties of DNA such as the persistence length and torsional modulus [@Ouldridge11; @Matek15; @Snodin15; @Skoruppa17]; these are important for correctly capturing both the thermal fluctuations of DNA origami and the equilibrium structure when internal stresses are deliberately designed into the origami to cause overall bend and twist [@Dietz09]. Thirdly, it has a very good representation of the
--- abstract: 'We develop a new approach to the linear ordering of the braid group $B_{n}$, based on investigating its restriction to the set ${{\mathrm{Div}}({\Delta}_{{n}}^{{d}})}$ of all divisors of ${{\Delta}_{{n}}^{{d}}}$ in the monoid $B_\infty^+$, [[*i.e.*]{}]{}, to positive $n$-braids whose normal form has length at most ${d}$. In the general case, we compute several numerical parameters attached to the finite orders ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$. In the case of $3$ strands, we moreover give a complete description of the increasing enumeration of ${({\mathrm{Div}}({\Delta}_{3}^{{d}}),\nobreak{<}\nobreak)}$. We deduce a new and especially direct construction of the ordering on $B_3$, and a new proof of the result that its restriction to $B_3^+$ is a well-ordering of ordinal type $\omega^\omega$.' address: | Laboratoire de Mathématiques Nicolas Oresme UMR 6139\ Université de Caen, 14032 Caen, France author: - Patrick DEHORNOY title: Still another approach to the braid ordering --- The general aim of this paper is to investigate the connection between the Garside structure of Artin’s braid groups and their distinguished linear ordering (sometimes called the Dehornoy ordering). This leads to a new, alternative construction of the ordering. Artin’s braid groups $B_{n}$ are endowed with several interesting combinatorial structures. One of them stems from Garside’s analysis [@Gar] and is nowadays known as a Garside structure [@Dgk; @McC]. It describes $B_{n}$ as the group of fractions of a monoid $B_{n}^+$ with a rich divisibility theory. One of the outcomes of this theory is a unique normal decomposition for every braid in $B_{n}$ in terms of simple braids, which are the divisors of Garside’s fundamental braid ${\Delta}_{n}$, a finite subset of $B_{n}^+$ in one-to-one correspondence with the permutations of ${n}$ objects.
One obtains a natural graduation of the monoid $B_{n}^+$ by considering the family ${{\mathrm{Div}}({\Delta}_{{n}}^{{d}})}$ of all divisors of ${{\Delta}_{{n}}^{{d}}}$, which also are the elements of $B_n^+$ whose normal form has length at most ${d}$. On the other hand, the braid groups are equipped with a distinguished linear ordering, which is compatible with multiplication on the left, and admits a simple combinatorial characterization [@Dfb]: a braid ${x}$ is smaller than another braid ${y}$ if, among all expressions of the quotient ${x}{^{-1}}{y}$ in terms of the standard generators ${\sigma_{i}}$, there exists at least one expression in which the generator ${\sigma_{{m}}}$ with maximal (or minimal) index ${m}$ appears only positively, [[*i.e.*]{}]{}, ${\sigma_{{m}}}$ occurs, but ${\sigma_{{m}}}{^{-1}}$ does not. Several deep results about that ordering are known, in particular the fact that its restriction to $B_\infty^+$ is a well-ordering, and a number of equivalent constructions are known [@Dgr]. Although both combinatorial in nature, the previous structures remain mostly unconnected—and connecting them may appear as one of the most natural questions of braid combinatorics. For degree $1$, [[*i.e.*]{}]{}, for simple braids, the linear ordering corresponds to a lexicographical ordering of the associated permutations [@Dgb]. But this connection does not extend to higher degrees, and almost nothing is known about the restriction of the linear ordering to positive braids of a given degree. In particular, no connection is known between the above-mentioned Garside normal form and the alternative normal form constructed by S. Burckel in [@Bus; @But; @Buu], one that makes comparison with respect to the linear ordering easy: to give an example, the Garside normal form of ${{\Delta}_{3}^{2{d}}}$ is $({\sigma_{1}}{\sigma_{2}}{\sigma_{1}})^{2{d}}$, while its Burckel normal form is $({\sigma_{2}}{\sigma_{1}}^2{\sigma_{2}})^{d}{\sigma_{1}}^{2{d}}$.
Our aim in this paper is to investigate the finite linearly ordered sets ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$. A nice way of thinking of this structure is to consider the increasing enumeration of ${{\mathrm{Div}}({\Delta}_{{n}}^{{d}})}$, and to view it as a distinguished path from $1$ to ${{\Delta}_{{n}}^{{d}}}$ in the Cayley graph of $B_{n}$. A complete description of this path would arguably be an optimal solution to the rather vague question of connecting the Garside and the ordered structures of braid groups. Such a description seems to be extremely intricate from a combinatorial point of view, and it remains out of reach for the moment, but we prove partial results in this direction, namely - $(i)$ in the general case, a determination of some numerical parameters attached to ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$ that in some sense measure its size, with explicit values for small values of ${n}$ and ${d}$, and - $(ii)$ in the special case ${n}= 3$, a complete description of the increasing enumeration of ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$. More specifically, the parameters we investigate are the complexity and the heights. The complexity ${{c}({{\Delta}_{{n}}^{{d}}})}$ is defined as the maximal number of occurrences of ${\sigma_{{n}-1}}$ in an expression of ${{\Delta}_{{n}}^{{d}}}$ containing no ${\sigma_{{n}-1}}^{-1}$. It is connected with the termination of the handle reduction algorithm of [@Dfo], and its determination was left as an open question in the latter paper. The ${r}$-height ${{h_{{r}}}({{\Delta}_{{n}}^{{d}}})}$ is defined to be the number of ${r}$-jumps in the increasing enumeration of ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$ (augmented by $1$), where the term ${r}$-jump refers to some natural filtration of the linear ordering ${<}$ by a sequence of partial orderings ${<}_{r}$.
When ${r}$ increases, ${r}$-jumps are higher and higher, so ${{h_{{r}}}({{\Delta}_{{n}}^{{d}}})}$ counts how many big jumps exist in ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$. We prove that the complexity ${{c}({{\Delta}_{{n}}^{{d}}})}$ equals the height ${{h_{{n}-1}}({{\Delta}_{{n}}^{{d}}})}$ (Proposition \[P:MainHeight\]), and that, for each ${r}$, the ${r}$-height ${{h_{{r}}}({{\Delta}_{{n}}^{{d}}})}$ is the number of divisors of ${{\Delta}_{{n}}^{{d}}}$ whose ${d}$th factor of the normal form is right divisible by ${\Delta}_{r}$ (Proposition \[P:Main\]). Together with the combinatorial results of [@Dhi], this allows for computing the explicit values listed in Table \[T:Values\], and for establishing various inductive formulas (Propositions \[P:Values\] and \[P:Values34\], among others). Besides the enumerative results, we also prove a general structural result that connects the ordered set ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$ with (subsets of) ${({\mathrm{Div}}({\Delta}_{{n}-1}^{{d}}),\nobreak{<}\nobreak)}$ (Corollary \[C:Structure\]). This result suggests an inductive method for directly constructing the increasing enumeration of ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}}),\nobreak{<}\nobreak)}$ starting from those of ${({\mathrm{Div}}({\Delta}_{{n}-1}^{{d}}),\nobreak{<}\nobreak)}$ and ${({\mathrm{Div}}({\Delta}_{{n}}^{{d}-1}),\nobreak{<}\nobreak)}$. This approach is completed here for ${n}= 3$ (Proposition \[P:Enum3\]). In some sense, $3$ strand braids are simple objects, and the result may appear as of modest interest; however, the order on $B_3^+$ is a well-ordering of ordinal type $\omega^\omega$, hence not a so simple object. The interesting point is that this approach leads to a new, alternative construction of the braid ordering, with in particular a new and simple proof for the so-called Comparison Property which is the hard
Computer simulation has become an essential tool in condensed matter physics [@landaubinder], particularly for the study of phase transitions and critical phenomena. The workhorse for the past half-century has been the Metropolis importance sampling algorithm, but more recently new, efficient algorithms have begun to play a role in allowing simulation to achieve the resolution which is needed to accurately locate and characterize phase transitions. For example, cluster-flip algorithms, beginning with the seminal work of Swendsen and Wang [@Swendsen_Wang], have been used to reduce critical slowing down near 2nd-order transitions. Similarly, the multicanonical ensemble method [@berg] was introduced to overcome the tunneling barrier between coexisting phases at 1st-order transitions, and this approach also has utility for systems with a rough energy landscape [@janke_kappler; @berg_sg; @alves]. In both situations, histogram reweighting techniques [@ferrenberg] can be applied in the analysis to increase the amount of information that can be gleaned from simulational data, but the applicability of reweighting is severely limited in large systems by the statistical quality of the “wings” of the histogram. This latter effect is quite important in systems with competing interactions, for which short-range order effects might occur over very broad temperature ranges or even give rise to frustration that produces a very complicated energy landscape and limits the efficiency of other methods. In this paper, we introduce a new, general, efficient Monte Carlo algorithm that offers substantial advantages over existing approaches. Unlike conventional Monte Carlo methods that directly generate a canonical distribution at a given temperature $g(E)e^{-E/k_{\text{B}}T}$, our approach is to estimate the density of states $g(E)$ accurately via a random walk which produces a flat histogram in energy space.
The method can be further enhanced by performing multiple random walks, each for a different range of energy, either serially or in parallel. The resultant pieces of the density of states can be joined together and used to produce canonical averages for the calculation of thermodynamic quantities at essentially any temperature. We will apply our algorithm to the two-dimensional ten-state Potts model and the Ising model, which have 1st- and 2nd-order phase transitions, respectively, to demonstrate the efficiency and accuracy of the method. Our algorithm is based on the observation that if we perform a random walk in energy space with a probability proportional to the reciprocal of the density of states ${\frac{1}{g(E)}}$, then a flat histogram is generated for the energy distribution. This is accomplished by modifying the estimated density of states in a systematic way to produce a “flat” histogram over the allowed range of energy and simultaneously making the density of states converge to the true value. At the very beginning of the random walk, the density of states is [*a priori*]{} unknown, so we simply set $g(E)=1$ for all energies $E$. Then we begin our random walk in energy space by flipping spins randomly. In general, if $E_{1}$ and $E_{2}$ are the energies before and after a spin is flipped, the transition probability from energy level $E_{1}$ to $E_{2}$ is simply: $$p(E_{1}\rightarrow E_{2})=\min ({\frac{g(E_{1})}{g(E_{2})}},1)$$ This is also the probability of flipping the spin. Each time an energy level $E$ is visited, we update the corresponding density of states by multiplying the existing value by a modification factor $f>1$, i.e. $g(E)\rightarrow g(E)\ast f$. The initial modification factor can be as large as $f=f_{0}=e^{1}\simeq 2.71828...$, which allows us to reach all possible energy levels very quickly, even for large systems.
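As noted above, once $g(E)$ is known, canonical averages at essentially any temperature follow by reweighting. The sketch below evaluates the mean energy $U(T)=\sum_E E\,g(E)e^{-E/k_{\text{B}}T}\,/\,\sum_E g(E)e^{-E/k_{\text{B}}T}$ from $\ln g(E)$, working in log space to avoid overflow; the function name, units ($k_{\text{B}}=1$), and the use of the exactly known $2\times2$ Ising densities in the test are illustrative assumptions, not part of the paper.

```python
import math

def canonical_energy(lng, T):
    """Mean energy U(T) = sum_E E g(E) exp(-E/T) / sum_E g(E) exp(-E/T),
    computed from lng[E] = ln g(E) in log space for numerical stability
    (units with k_B = 1)."""
    m = max(lng[E] - E / T for E in lng)        # shift so the largest weight is 1
    Z = sum(math.exp(lng[E] - E / T - m) for E in lng)
    U = sum(E * math.exp(lng[E] - E / T - m) for E in lng)
    return U / Z
```

Only differences of $\ln g$ enter this expression, which is why the random walk needs to determine the density of states up to an overall constant only.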
We keep walking randomly in energy space and modifying the density of states until the accumulated histogram $H(E)$ is “flat”. At this point, the density of states has converged to the true value with an accuracy proportional to $\ln (f)$. We then reduce the modification factor to a finer one according to some recipe like $f_{1}=\sqrt{f_{0}}$ (any function that monotonically decreases to $1$ will do) and reset the histogram $H(E)=0$. Then we begin the next-level random walk with a finer modification factor $f=f_{1}$, continuing until the histogram is again “flat”, after which we stop and reduce the modification factor as before, i.e. $f_{i+1}=\sqrt{f_{i}}$. We stop the simulation process when the modification factor is smaller than some predefined final value (such as $f_{\text{final}}=\exp(10^{-8})\simeq 1.00000001$). The modification factor $f$ in our random walk thus acts as a control parameter for the accuracy of the density of states during the simulation and also determines how many MC sweeps are necessary for the whole simulation. It is impossible to obtain a perfectly flat histogram, and the phrase “flat histogram” in this paper means that $H(E)$ for all possible $E$ is not less than $80\%$ of the average histogram $\langle H(E)\rangle$. Since the density of states is modified every time the state is visited, we only obtain a relative density of states at the end of the simulation. To calculate the absolute values, we use the condition that the number of ground states for the Ising model is 2 (all spins up or all spins down) to re-scale the density of states; and if multiple walks are performed within different energy ranges, they must be matched up at the boundaries in energy. Because of the exponential growth of the density of states in energy space, it is not efficient to simply update the density of states until enough histogram entries are accumulated.
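The random walk described above can be sketched in a few lines. This is a minimal single-range illustration for the Ising model, not the authors' production code: it works with $\ln g$ (so the multiplicative update $g\to g\cdot f$ becomes an addition of $\ln f$), checks flatness after fixed batches of flips, and the batch size, seed, and $f_{\text{final}}$ are arbitrary choices made here.

```python
import math
import random

def wang_landau(L, f_final=1e-5, flat=0.8, seed=3):
    """Estimate ln g(E) for the L x L Ising model with periodic boundaries.
    A random walk in energy space accepts a flip with probability
    min(1, g(E1)/g(E2)); every visit adds ln f to ln g(E), and f is reduced
    (f -> sqrt f) whenever the histogram is flat."""
    rng = random.Random(seed)
    N = L * L
    spins = [[1] * L for _ in range(L)]

    def bond_energy(i, j):
        # energy of the four bonds attached to spin (i, j)
        s = spins[i][j]
        return -s * (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
                     + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])

    E = sum(bond_energy(i, j) for i in range(L) for j in range(L)) // 2
    lng, hist = {E: 0.0}, {E: 0}
    lnf = 1.0                                   # initial f = e
    while lnf > f_final:
        for _ in range(1000 * N):               # one batch of proposed flips
            i, j = rng.randrange(L), rng.randrange(L)
            E2 = E - 2 * bond_energy(i, j)      # energy after flipping (i, j)
            if E2 not in lng:                   # first visit to this level
                lng[E2], hist[E2] = lng[E], 0
            if lng[E] >= lng[E2] or rng.random() < math.exp(lng[E] - lng[E2]):
                spins[i][j] = -spins[i][j]
                E = E2
            lng[E] += lnf                       # g(E) -> g(E) * f
            hist[E] += 1
        if min(hist.values()) >= flat * sum(hist.values()) / len(hist):
            lnf /= 2.0                          # f -> sqrt(f)
            hist = {e: 0 for e in hist}
    return lng
```

For a $2\times2$ lattice with periodic boundaries, the exact densities are $g(\pm8)=2$ and $g(0)=12$, which gives a quick sanity check of the returned $\ln g$ differences.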
All methods based on the accumulation of entries, such as the histogram method [@ferrenberg], Lee’s version of the multicanonical method (entropic sampling) [@berg], the broad histogram method [@oliveira] and the flat histogram method [@jswang_fh; @jswang], have scalability problems for large systems. These methods suffer from systematic errors and substantial deviations which increase rapidly with system size. The algorithm proposed in this paper offers both high efficiency and accuracy over wide ranges of temperature for sizes that are beyond those that are tractable by other approaches. We should point out here that during the random walk (especially in the early stage of iteration), the algorithm does not exactly satisfy the detailed balance condition, since the density of states is modified constantly during the random walk in energy space; however, after many iterations, the density of states converges to the true value very quickly as the modification factor approaches $1$. From eq. (1), we have: $${\frac{1}{g(E_{1})}} p(E_{1}\rightarrow E_{2})={\frac{1}{g(E_{2})}} { p(E_{2}\rightarrow E_{1})}$$ where ${\frac{1}{g(E_{1})}}$ is the probability of being at energy level $E_{1}$ and $p(E_{1}\rightarrow E_{2})$ is the transition probability from $E_{1}$ to $E_{2}$ for the random walk. We can thus conclude that the detailed balance condition is satisfied to within an accuracy proportional to $\ln (f)$. The convergence and accuracy of our algorithm may be tested on a system with a 2nd-order transition, the $L\times L$ Ising square lattice with nearest-neighbor coupling, which is generally perceived as an ideal benchmark for new theories [@landau_ising] and simulation algorithms [@ferrenberg; @jswang_prl]. We simulated both small lattices, for which exact results are available, as well as $L=256$, for which exact enumeration is impossible. In Fig.
1, the densities of states estimated by our algorithm are shown along with the exact results obtained by the method proposed by Beale [@beale]. We only show the density for systems up to $L=50$, which is the maximum size we can calculate with the Mathematica program used in the reference [@beale]. Since no difference is visible, we show the relative error $\varepsilon (\log(g(E)))$, which is defined as $\varepsilon (X) \equiv {{|(X_{\text{sim}}-X_{\text{exact}})}/{X_{\text{exact}}}|}$ for a general quantity $X$ in this paper. With our algorithm we obtain an average error as small as 0.035 % on the $32\times 32$ lattice with $7\times 10^{5}$ sweeps. It is possible to estimate the density of states for small systems with the broad histogram method [@oliveira]. Recent broad histogram simulational data [@lima] for the 2D Ising model on a $32\times 32$ lattice with $10^{6}$ MC sweeps yielded an average deviation of the microcanonical entropy of about 0.08 % from the exact solution [@beale]. With the Monte Carlo algorithm proposed in this paper, we can estimate the density of states efficiently even for large systems. Because of the symmetry of the density of states for the Ising model, $g(E)=g(-E)$, we only need to estimate the density of states in the region $E/N \in [-2, 0]$, where $N$ is the total number of lattice sites. To speed up the simulation for $L=256$, we perform 15 independent random walks, each for a different region of energy from $E/N=-2$ to $E/N=0.2$ using $f_{\
--- abstract: 'We study properties of the strongly repulsive Bose gas on one-dimensional incommensurate optical lattices with a harmonic trap, which can be dealt with exactly by a numerical method based on the Bose-Fermi mapping. We first explore the phase transition of the hard-core bosons in the optical lattices from the superfluid to the Bose-glass phase as the strength of the incommensurate potential increases. Then we study the dynamical properties of the system after suddenly switching off the harmonic trap. We calculate the one-particle density matrices, momentum distributions, natural orbitals and their occupations for both the static and dynamic systems. Our results indicate that the Bose-glass and superfluid phases display quite different properties and expansion dynamics.' author: - Xiaoming Cai - Shu Chen - Yupeng Wang date: title: 'Ground-state and dynamical properties of hard-core bosons on one-dimensional incommensurate optical lattices with harmonic trap' --- Introduction ============ Recently, technical advances in cold atom trapping have allowed the experimental realization of Anderson localization [@Anderson] in quantum matter waves [@Billy; @Roati]. The good tunability and controllability of optical lattices offer myriad opportunities for studying disorder effects in ultracold atom systems [@Lewenstein]. So far, different techniques have been devised for introducing disorder into ultracold atom systems, such as speckle field patterns in the optical lattice [@Billy; @Lye], random localized impurities obtained by loading a mixture of two kinds of atoms with heavy and light masses [@Gavish], and incommensurate bichromatic optical lattices created by superimposing two one-dimensional (1D) optical lattices with incommensurate frequencies [@Roati; @Fallani]. In particular, the experiment of [@Fallani] has provided evidence of the existence of a Bose-glass (BG) phase [@Fisher89].
Interesting phenomena are expected to appear in disordered systems when the interplay of disorder and interactions is taken into account. Interactions between atoms can be tuned controllably by Feshbach resonances in ultra-cold atom systems. While disorder can lead to localization of the wave function of a particle, delocalization can arise as a consequence of interactions in some many-body systems. Theoretically, for a repulsive Bose gas it has been predicted that there is a quantum phase transition from a superfluid phase to an insulating BG phase with localized single-particle states as disorder is increased [@Giamarchi; @Fisher; @Delande; @Fontanesi; @Egger; @Egger2; @Cai]; however, unambiguous observation of the superfluid–Bose-glass transition is still under debate [@Damski; @Delande]. A lot of attention [@Gurarie; @Roux; @Deng; @Roscilde; @Orso] has been paid to investigating the combined role of disorder and interactions in strongly interacting ultra-cold atomic systems. Apart from numerical or approximate approaches, exact solutions for many-body systems featuring the interplay of disorder and interaction are rarely known. In this paper, we study interacting bosons on incommensurate optical lattices with a harmonic trap in the limiting case of infinitely repulsive interaction, which can be solved exactly. The 1D Bose gas with infinitely repulsive interaction is known as the hard-core boson (HCB) or Tonks-Girardeau (TG) gas [@Girardeau], which can be exactly solved via the Bose-Fermi mapping [@Girardeau] and has attracted intensive theoretical attention [@Girardeau1; @Minguzzi; @Gangardt]. Experimental access to the required parameter regime has made the TG gas a physical reality [@Paredes; @Kinoshita]. For HCBs in 1D optical lattices, it is convenient to use the exact numerical approach proposed by Rigol and Muramatsu [@Rigol].
Following the exact numerical approach, we calculate the static properties of the hard-core bosons, such as one-particle density matrices, density profiles, momentum distributions, natural orbitals and their occupations, to explore the superfluid-to-BG phase transition for systems in incommensurate optical lattices with a harmonic confining trap. Furthermore, we study the nonequilibrium dynamical properties of expanding clouds of hard-core bosons on 1D incommensurate lattices after the harmonic trap is turned off suddenly. We find that the expansion dynamics of the superfluid and BG phases exhibit quite different behaviors, which may serve as a signature for experimentally detecting the superfluid-to-BG transition. The paper is organized as follows. In Section II, we present the model and the exact approach used in this paper. In Section III, we show ground-state properties of hard-core bosons on the incommensurate optical lattice with a harmonic trapping potential. Section IV is devoted to studying the nonequilibrium dynamics of the system after the harmonic trap is suddenly switched off. Finally, a summary is presented in Section V. Model And Method ================ In this section we describe the exact approach that we use to study 1D hard-core bosons on the incommensurate lattice with an additional harmonic trap.
Under the single-band tight-binding approximation, the system of $N$ hard-core bosons in the 1D optical lattice can be described by the following Hamiltonian: $$\label{eqn2} H=-t\sum_i(b^\dagger_ib_{i+1}+\mathrm{H.c.})+\sum_iV_in^b_i,$$ where $b^\dagger_i(b_i)$ is the creation (annihilation) operator of the boson, which fulfills the hard-core constraints [@Rigol], [*i.e.,*]{} the on-site anticommutation $(\{b_i,b^\dagger_i\}=1)$ and $[b_i,b^\dagger_j]=0$ for $i\neq j$; $n^b_i$ is the bosonic particle number operator; $t$ is the hopping amplitude, set to be the unit of energy $(t=1)$; $V_i$ is given by $$V_i=V_I\cos(2\pi\alpha i+\delta)+V_H(i-i_0)^2.$$ Here $V_I$ is the strength of the incommensurate potential, with $\alpha$ an irrational number characterizing the degree of incommensurability and $\delta$ an arbitrary phase (chosen to be zero in our calculation for convenience, without loss of generality); $V_H$ is the strength of the harmonic trap and $i_0$ is the position of the minimum of the harmonic trap. In order to calculate the properties of hard-core bosons, it is convenient to use the Jordan-Wigner transformation [@Jordan] (JWT), or Bose-Fermi mapping, for the lattice model $$b^\dagger_j=f^\dagger_j\prod^{j-1}_{\beta=1}e^{-i\pi f^\dagger_\beta f_\beta},b_j=\prod^{j-1}_{\beta=1}e^{+i\pi f^\dagger_\beta f_\beta}f_j,$$ which maps the Hamiltonian of hard-core bosons into the Hamiltonian of noninteracting spinless fermions $$\label{eqn1} H_F=-\sum_i(f^\dagger_if_{i+1}+\mathrm{H.c.})+\sum_iV_in^f_i ,$$ where $f^\dagger_i(f_i)$ is the creation (annihilation) operator of the spinless fermion and $n^f_i$ is the particle number operator.
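Since the mapped Hamiltonian (\[eqn1\]) is quadratic, the ground state follows from a single diagonalization of the $L\times L$ single-particle matrix, and the local density of the hard-core bosons equals that of the free fermions (the Jordan-Wigner string does not affect on-site densities). A minimal numerical sketch, with all parameter values chosen only for illustration:

```python
import numpy as np

def hcb_density(L=100, N=25, V_I=1.0, V_H=2e-4, alpha=(np.sqrt(5) - 1) / 2,
                delta=0.0):
    """Ground-state density profile of N hard-core bosons on L sites.

    Under the Jordan-Wigner mapping the local density equals that of N
    noninteracting spinless fermions, so one diagonalization of the L x L
    single-particle matrix suffices.  Parameter values are illustrative;
    alpha is taken as the (irrational) inverse golden ratio here.
    """
    i = np.arange(L)
    # incommensurate potential plus harmonic trap centered on the lattice
    V = V_I * np.cos(2 * np.pi * alpha * i + delta) + V_H * (i - L / 2) ** 2
    H = np.diag(V) - np.diag(np.ones(L - 1), 1) - np.diag(np.ones(L - 1), -1)
    _, orbitals = np.linalg.eigh(H)              # columns sorted by energy
    return np.sum(orbitals[:, :N] ** 2, axis=1)  # occupy the N lowest orbitals

density = hcb_density()
```

The resulting profile integrates to $N$ and is bounded by one particle per site, as required by the hard-core constraint.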
The ground-state wave function of the system of $N$ free spinless fermions can be obtained by diagonalizing Eq.(\[eqn1\]) and can be represented as $$\label{eqn3} |\Psi^G_F\rangle=\prod^N_{n=1}\sum^L_{i=1}P_{in}f^\dagger_i|0\rangle ,$$ where $L$ is the number of lattice sites, $N$ is the number of fermions (the same as the number of bosons), and the coefficients $P_{in}$ are the amplitudes of the $n$-th single-particle eigenfunction at the $i$-th site, which form an $L \times N$ matrix $P$ [@Rigol]. In order to obtain the static properties of the ground state, we calculate the one-particle Green function for the hard-core bosons defined by $$G_{ij}=\langle\Psi^G_{HCB}|b_ib^\dagger_j|\Psi^G_{HCB}\rangle\\ = \langle\Psi^A|\Psi^B\rangle ,$$ where $|\Psi^G_{HCB}\rangle$ is the ground state of hard-core bosons, and $\langle\Psi^A| = \left(f^\dagger_i\prod_{\beta=1}^{i-1}e^{-i\pi f^\dagger_\beta f_\beta}|\Psi^G_F\rangle\right)^\dagger$, $ |\Psi^B\rangle =f^\dagger_j\prod_{\gamma=1}^{j-1}e^{-i\pi f^\dagger_\gamma f_\gamma}|\Psi^G_F\rangle $. Explicitly the state $\left| \Psi ^A\right\rangle $ can be represented as $ \left| \Psi^A\right\rangle =\prod_{n=1}^{N+1}\sum_{l=1}^LP
--- abstract: 'We review the theory and observations of star cluster disruption. The three main phases and corresponding typical timescales of cluster disruption are: [*I) Infant Mortality*]{} ($\sim10^7$ yr), [*II) Stellar Evolution*]{} ($\sim10^8$ yr) and [*III) Tidal relaxation*]{} ($\sim10^9$ yr). During all three phases there are additional tidal external perturbations from the host galaxy. In this review we focus on the physics and observations of Phase I and on population studies of Phases II & III and external perturbations (concentrating on cluster-GMC interactions). Particular attention is given to the successes and shortcomings of the Lamers cluster disruption law, which has recently been shown to stand on a firm physical footing.' author: - Nate Bastian$^1$ and Mark Gieles$^2$ title: 'Cluster Disruption: Combining Theory and Observations' --- Introduction ============ The vast majority (perhaps all) of stars are formed in a clustered fashion. However, only a very small percentage of older stars are found in bound clusters. These two observations highlight the importance of clusters in the star-formation process and the significance of cluster disruption. The process of cluster disruption begins soon after, or concurrently with, cluster formation. [@lada03] found that $\lesssim10\%$ of stars formed in embedded clusters end up in bound clusters after $\sim10^{8}$ yr. [@whitmore03] and [@fall05] have shown that at least 20%, but perhaps all, of the star formation in the merging Antennae galaxies is taking place in clusters, the majority of which are likely to become unbound. The case is similar in M51, with $>60\%$ of all young ($<10$ Myr) clusters likely to be destroyed within the first tens of Myr of their lives [@bastian05]. On longer timescales, [@oort58] and [@wielen71] noted a clear lack of older ($>$ a few Gyr) open clusters in the solar neighbourhood, and [@bl03] found a strong absence of older clusters in M51, M33, the SMC, and the solar neighbourhood.
The lack of old open clusters in the solar neighbourhood is even more striking when compared with the LMC, which contains a significant number of ‘blue globular clusters’ with ages well in excess of a Gyr (e.g. @1966MNRAS.134...59G [@degrijs06]). This difference can be understood either as a difference in the formation history of clusters or as a difference in the disruption timescales. The latter scenario was suggested by @hodge87, who directly compared the age distribution of Galactic open clusters with that of the SMC cluster population. He noted that there are $10-15$ times more clusters with an age of 1 Gyr in the SMC than in the solar neighbourhood (when normalising both populations at an age of $10^8$ yr) and concluded that disruption mechanisms must be less efficient in the SMC. Much theoretical work has gone into the latter scenario, with both analytic and numerical models of cluster evolution predicting a strong influence of the galactic tidal field on the dissolution of star clusters (for a recent review see @baumgardt06). Only recently has there been a large push to understand cluster disruption from an observational standpoint in various external potentials, making explicit comparisons with models [@bl03; @lamers05a; @lamers05b; @gieles05a; @lamers06]. We direct the reader to the review by Larsen in these proceedings for a historical look at the observations and theory of cluster disruption. Phases of cluster disruption {#subsec:phases} ---------------------------- While cluster disruption is a gradual process with several different disruptive agents at work simultaneously, one can distinguish three general phases of cluster mass loss and disruption. As we will see, a large fraction of clusters is destroyed during the [*first*]{} phase. The main phases and corresponding typical timescales of cluster disruption are: [*I) Infant Mortality*]{} ($\sim10^7$ yr), [*II) Stellar Evolution*]{} ($\sim10^8$ yr) and [*III) Tidal relaxation*]{} ($\sim10^9$ yr).
During all three phases there are additional tidal external perturbations from e.g. giant molecular clouds (GMCs), the galactic disc and spiral arms, which heat the cluster and speed up the process of disruption. However, these external perturbations operate on longer timescales for cluster populations and so are most important in Phase III. In Fig. \[fig0\] we schematically illustrate the three phases of disruption and the timescales involved. Note that the number of disruptive agents decreases in time. In this review we will focus on the physics and observations of Phase I as well as on recent population studies aimed at understanding Phases II and III on a statistical basis. For a recent review on the physics of Phases II and III, we refer the reader to @baumgardt06. ![Schematic overview of the three phases of cluster disruption considered and the responsible physics that drives the disruption. []{data-label="fig0"}](fig0.ps){height="8cm"} Before proceeding, it is worthwhile to consider our definition of a cluster. [@schweizer06] defines a cluster to be a [*gravitationally bound*]{} stellar association which will survive for 10–20 crossing times. This definition implies that the stars provide enough gravitational potential to bind the cluster, and it ignores the role of gas in the early evolution of clusters. In this review, we will define a cluster as a collection of gas and stars which was [*initially gravitationally bound*]{}. The reason for this definition will become evident in Section \[infantmortality\]. Infant Mortality {#infantmortality} ================ Recent studies of the populations of young star clusters in M51 [@bastian05] and the Antennae galaxies [@whitmore03; @fall05] have shown a large excess of star clusters with ages less than $\sim$10 Myr with respect to what would be expected assuming a constant cluster formation rate.
The fact that open clusters in the solar neighbourhood display a similar trend [@lada03] has led to the conclusion that this is a physical effect and not simply that we are observing these galaxies at a special time in their star-formation history. If one adopts this view, then we are forced to conclude that the majority (between 60% and 90%) of star clusters become unbound when the remaining gas (i.e. gas that is left over from the star-formation process) is expelled. These clusters survive for less than a few crossing times. Gas expulsion ------------- Suppose that a star cluster is formed out of a sphere of gas with an efficiency $\epsilon$, where $\epsilon = M_{stars}/(M_{stars} + M_{gas})$. Further suppose that the gas and stars are initially in virial equilibrium. If we define the virial parameter as $Q=-2T/W$, with $T$ the kinetic energy and $W$ the potential energy, virial equilibrium implies $Q=1$. Finally, suppose that the remaining gas is removed on a timescale faster than the crossing time of stars in the cluster. In such a scenario the cluster is left in a super-virial state after the gas removal, with $Q=1/\epsilon$, and the star cluster will expand since the binding energy is too low for the stellar velocities. The expanding cluster will reach virial equilibrium after a few crossing times, but only after a (possibly large) fraction of the stars have escaped. This process has been shown to remove a significant fraction of the stellar mass of a cluster, and if $\epsilon < 0.3$ the entire cluster will become unbound on a timescale of tens of Myr [@tutukov78; @goodwin97a; @goodwin97b; @kroupa02; @boily03a; @boily03b; @bg06]. Rapid gas removal of the type discussed above leaves distinct observables. In Figure \[fig1\] we show the surface brightness profiles of three young clusters (left panels) as well as the results of two $N$-body simulations (right panels) of clusters including the effects of rapid gas removal.
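The bookkeeping behind the gas-expulsion argument is simple enough to state explicitly. A short sketch follows; the efficiencies are examples only, and the radius factor $\epsilon/(2\epsilon-1)$ is the classical analytic estimate for instantaneous gas loss (which formally diverges as $\epsilon\rightarrow1/2$), while the $N$-body results quoted above place the practical survival threshold near $\epsilon\approx0.3$:

```python
def post_expulsion_state(eps):
    """Stellar virial parameter and re-virialized radius factor after
    instantaneous gas expulsion, for efficiency eps = M_stars/(M_stars+M_gas).

    Before expulsion, stars+gas are virialized (Q = -2T/W = 1).  Removing the
    gas leaves T unchanged but scales the potential by eps, so Q -> 1/eps.
    The radius factor eps/(2*eps - 1) is the classical instantaneous-loss
    estimate; for eps <= 0.5 it diverges, i.e. the cluster unbinds.
    """
    Q = 1.0 / eps
    r_factor = eps / (2 * eps - 1) if eps > 0.5 else float("inf")
    return Q, r_factor

Q, rf = post_expulsion_state(0.6)   # example: 60% star-formation efficiency
# Q jumps to ~1.67 (super-virial); the surviving cluster settles at ~3x its
# initial radius, after shedding part of its stellar halo
```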
All three young clusters show an excess of light at large radii with respect to the best-fitting EFF [@eff] or @king profiles. This is in good agreement with the predictions of the simulations, in which an unbound halo of stars is removed (although still appearing to be associated with the cluster for tens of Myr) due to the rapid change of the gravitational potential [@bg06]. Such excess light at large radii has also been found in young clusters in the Antennae galaxies [@whitmore99]. @gb06 show that for values of $\epsilon$ of 0.1 and 0.6, clusters will lose 75% and 10% of their stellar mass, respectively, within the first $\sim20$ Myr of their lives. Thus we see that this is an extremely efficient way to rapidly disperse stars from young clusters into the field. This mechanism provides a natural explanation for the observed diffuse UV light in the field of starburst galaxies [@tremonti01; @chandar05] and supports the scenario of these authors that this light is due to rapidly dispersing young clusters. Whether or not a cluster survives this phase, and hence more than 10–20 crossing times, is largely dependent on the star-formation efficiency of the GMC core in which the cluster formed. Thus, two clusters with exactly the same parameters (radius, mass, metallicity, external potential field, etc)
--- abstract: 'A generalization of a well-known relation between the Riemann zeta function and Bernoulli numbers is obtained. The formula is a new representation of the Riemann zeta function in terms of a nested series of Bernoulli numbers.' --- [**Generalization of a relation between the Riemann zeta function and Bernoulli numbers**]{} [S.C. Woon]{} Trinity College, University of Cambridge, Cambridge CB2 1TQ, UK s.c.woon@damtp.cam.ac.uk MSC-class Primary 11M06; Secondary 11B68 Keywords: The Riemann zeta function; Bernoulli numbers December 23, 1998 A New Representation of the Riemann Zeta Function ================================================= \[t:zetarep\] $$\fbox{$ \begin{array}{lll} \zeta(s)&\!\!\!=\!\!\!&\ds -\; \frac{(2\pi)^s}{2} \,w^{s-1}\!\! \lim_{\;{\ds \hat{s}}\to {\ds s}} \left\{\! \frac{ \left( \ds \frac{1}{2} + \sum_{n=1}^\infty (-1)^n {\hat{s}\!-\!1 \choose n}\!\!\left[ \frac{1}{2} + \sum_{m=1}^n \left(\frac{-1}{w}\right)^{\!\!m} \!\! {n \choose m} \frac{B_{m+1}}{(m\!+\!1)!} \right] \!\right)} {\ds \cos\left(\frac{\pi \hat{s}}{2}\right)} \right\}\end{array} $} \label{e:zeta(s)Bn}$$ for $\re(s)>(1/w)$ where $s\in\C, \;w\in\R, \;w>0$, the notation of binomial coefficient is extended such that $${s\!-\!1 \choose n} = \frac{1}{n!} \left[ \prod_{k=0}^{n-1} (s\!-\!1\!-\!k) \right] = \frac{1}{n!}\, \frac{\Gamma(s)}{\Gamma(\!s\!-\!n)}\,,$$ $B_m$ are the Bernoulli numbers with $B_1 = 1/2$, and the limit only needs to be taken when $s\in\{1,3,5,\dots\}$ for which the denominator $\cos\,(\pi s/2)$ is $0$. 
The representation (\[e:zeta(s)Bn\]) can be seen as a generalization of the well-known relation $$\zeta(2n) \;=\;-\;\frac{(2\pi)^{2n}}{2} \,\frac{(-1)^n \,B_{2n}}{(2n)!} \quad (n\in\Z^+)\;.$$ This representation (\[e:zeta(s)Bn\]) of $\zeta(s)$ in terms of a nested series of $B_n$ is distinct from the well-known Euler-Maclaurin summation representation [@abramowitz p.807, (23.2.3)], which also relates $\zeta(s)$ to $B_n$ as follows: $$\begin{aligned} \zeta(s)&=&\lim_{N\to\infty} \left[ \begin{array}{l} \ds \sum_{n=1}^N n^{-s} - \frac{1}{-s\!+\!1}\,N^{-s+1} - \frac{1}{2}\,N^{-s}\\ \ds\Bigg. -\, \sum_{k=1}^M \frac{B_{2k}}{(2k)!} \frac{\partial^{2k-1}}{\partial x^{2k-1}} x^{-s}\Big|_{x=N}\\ \ds\Big. +\, O(N^{-s-2M-1}) \end{array} \right] \label{e:EulerMaclaurinsum}\\ &&\ds\bigg.(\re(s)>-2M-1)\;.\nn\end{aligned}$$ To prove Theorem \[t:zetarep\], we shall have to introduce a binary tree and a set of operators for generating Bernoulli numbers. A Binary Tree for Generating Bernoulli Numbers ============================================== \[d:Bn\] Bernoulli numbers $B_n$ are defined by [@bateman p.35, (1.13.1)] $$\frac{z}{e^z-1} \,=\, \sum_{n=0}^\infty \frac{B_n}{n!} z^n \quad (|z| < 2\pi)\;.\label{e:defBn}$$ Expanding the left-hand side as a series and matching the coefficients on both sides gives $$B_1 = -1/2, \quad B_n \left\{ \begin{array}{ccc} = 0 &,& \mbox{odd }n, \;n\ne 1\\ \ne 0 &,& \mbox{even }n \end{array} \right.\;.$$ Now (\[e:defBn\]) can be rewritten as $$\frac{z}{e^z-1} + \frac{z}{2} \,=\, \sum_{n=0}^\infty \frac{B_{2n}}{(2n)!} z^{2n}\;.\label{e:defB2n}$$ Alternatively, $B_n$ can be defined as the solution of the recurrence relation $$B_n \,=\, -\, \frac{1}{n+1} \,\sum_{k=0}^{n-1} {n\!+\!1 \choose k} B_k\;, \quad B_0 = 1 \;.\label{e:defBnrecursion}$$ A binary tree for generating Bernoulli numbers $B_n$ can be constructed using two operators, $O_L$ and $O_R$. At each node of the binary tree sits a formal expression of the form $\ds \frac{\pm 1}{a!\,b!\dots}$.
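The recurrence (\[e:defBnrecursion\]) and the classical $\zeta(2n)$ relation quoted above are easily checked numerically with exact rational arithmetic. A minimal sketch (note that this recurrence reproduces the generating-function convention $B_1=-1/2$):

```python
from fractions import Fraction
from math import comb, factorial, pi

def bernoulli(n_max):
    """B_0..B_{n_max} via B_n = -1/(n+1) * sum_{k<n} C(n+1, k) B_k, B_0 = 1.

    Exact rational arithmetic; this recurrence gives the generating-function
    convention B_1 = -1/2.
    """
    B = [Fraction(1)]
    for n in range(1, n_max + 1):
        B.append(-sum(comb(n + 1, k) * B[k] for k in range(n)) / (n + 1))
    return B

B = bernoulli(8)
# B_2 = 1/6, B_4 = -1/30, B_6 = 1/42, and the odd ones (n >= 3) vanish

def zeta_even(n):
    """zeta(2n) = -(2 pi)^{2n}/2 * (-1)^n B_{2n}/(2n)!, e.g. zeta(2) = pi^2/6."""
    return -(2 * pi) ** (2 * n) / 2 * (-1) ** n * float(B[2 * n]) / factorial(2 * n)
```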
The operators $O_L$ and $O_R$ are defined to act only on formal expressions of this form at the nodes of the tree as follows: $$\begin{aligned} O_L^{} & : & \frac{\pm 1}{a!\,b!\dots} \to \frac{\pm 1}{(a+1)!\,b!\dots} \;,\\ O_R^{} & : & \frac{\pm 1}{a!\,b!\dots} \to \frac{\mp 1}{2!\,a!\,b!\dots} \;.\end{aligned}$$ Schematically, - $O_L^{}$ acting on a node of the tree generates a branch downwards to the left (hence the subscript $L$ in $O_L^{}$) with a new node at the end of the branch. - $O_R^{}$ acting on the same node generates a branch downwards to the right. ![The binary tree that generates Bernoulli numbers.](tree.eps){height="175pt"} The following finite series formed out of the two non-commuting operators $$S_n = (O_L^{} + O_R^{})^n \!\hf = \left( O_L^n + \sum_{k=0}^{n-1} O_L^{n-1-k} O_R^{}\,O_L^k + \cdots + O_R^n \right) \!\!\hf . \label{e:sumtreerep}$$ is equivalent to the sum of terms on the $n$-th row of nodes across the tree. Bernoulli numbers and the $S_n$ series are related by $$\fbox{$\; B_n = n!\;S_{n-1} \quad (n\ge 2) \;$}\;. \label{e:BandS}$$ For example, $$\begin{aligned} B_3 &=& 3!\, S_2 = 3!\, (O_L^{}+O_R^{})^2 \!\hf = 3!\,(O_L^{}+O_R^{})\,(O_L^{}+O_R^{}) \!\hf\\ &=& 3!\,(O_L^{}\,O_L^{} + O_L^{}\,O_R^{} + O_R^{}\,O_L^{} + O_R^{}\,O_R^{}) \!\hf\\ &=& 3!\left( \frac{+1}{4!} + \frac{-1}{2!\,3!} + \frac{-1}{3!\,2!} + \frac{+1}{2!\,2!\,2!} \right) = 0\;.\end{aligned}$$ [**Proof**]{} The Riemann zeta function [@titchmarsh] $$\zeta(s) = \sum_{n=1}^\infty n^{-s} \quad (\re(s)>1, \;s\in\C) \label{e:zeta}$$ can be analytically extended to the left-half of
--- abstract: 'The quantum pigeonhole principle states that if there are three pigeons and two boxes then there are instances where no two pigeons are in the same box, which seems to defy the classical pigeonhole counting principle. Here, we investigate the quantum pigeonhole effect on the ibmqx2 superconducting chip with five physical qubits. We also observe the same effect in a proposed non-local circuit which avoids any direct physical interactions between the qubits that might lead to unknown local effects. We use standard quantum gate operations and measurements to construct the required quantum circuits on the IBM quantum experience platform. We perform experiments and simulations which illustrate that no two qubits (pigeons) are in the same quantum state (box). The experimental results obtained using the IBM quantum computer are in good agreement with theoretical predictions.' author: - 'Narendra N. Hegade' - Antariksha Das - Swarnadeep Seth - 'Prasanta K. Panigrahi' date: 'Received: date / Accepted: date' title: Investigation of quantum pigeonhole effect in IBM quantum computer --- [Quantum Pigeonhole Effect, IBM Quantum Experience]{} Introduction \[I\] ================== Quantum mechanics is well known for its counter-intuitive results, which conflict with our everyday understanding. There are many quantum mechanical phenomena, such as the EPR paradox, the no-cloning theorem, the quantum Zeno effect, quantum teleportation and quantum tunneling, that cannot be explained by classical physics. The quantum pigeonhole effect is one of these. In number theory, the classical pigeonhole principle [@Allenby2011Howtocount] states that if $n$ objects are distributed among $m$ boxes, with $m < n$, then there is at least one box in which we can find more than one object.
In other words, if there are more objects than boxes, then at least one box must contain more than one object. Aharonov et al. [@Aharonov2016QPP] first proposed the idea of the quantum pigeonhole effect (QPHE), showing that in some scenarios the classical pigeonhole counting principle is violated. It is shown that for a particular choice of pre- and post-selected states, three quantum particles which can take two quantum states could end up in a situation where no two particles can be found in the same quantum state. To observe the quantum pigeonhole effect, they designed an interferometric experiment, shown in Figure \[fig1\]. In their set-up three quantum particles (pigeons) pass simultaneously through the two arms (pigeonholes/boxes) of a Mach-Zehnder interferometer (MZI), which characterize two distinct quantum states ($\Ket{0}$ and $\Ket{1}$) of the particles. Because of the Coulomb repulsion between the particles, if at least two of the three particles are in the same arm of the interferometer then they will repel each other and are expected to be deflected, and the pattern at the detector will give information about whether any of the particles were in the same path of the interferometer. The particles have equal probability of arriving at either of the two detectors. It is shown that if one post-selects those cases where all three particles are detected at the same detector, then the detector pattern indicates that none of the particles was deflected, and thus no such interaction has taken place. This suggests that no two quantum particles can take the same path, which contradicts the classical pigeonhole principle. This phenomenon has already drawn a fair amount of attention. Recently, Chen et al.
[@chen2019ExpParadox] experimentally demonstrated the quantum pigeonhole paradox using three single photons transmitted through two distinct polarization channels under appropriate pre- and post-selections of the polarization states. They used the weak measurement technique [@weakmeasure1; @weakmeasure2; @weakmeasure3] to probe the underlying mechanism. Mahesh et al. [@mahesh2016NMR] experimentally simulated the quantum pigeonhole principle using a four-qubit NMR quantum simulator, where the quantum pigeons are mimicked by three spin-1/2 nuclei whose states are probed by another ancillary spin. It was also argued that the effect arises from contextuality in quantum physics. Later, Rae and Forgan argued that the effect arises from the interference between the wavefunctions of weakly interacting quantum particles [@RaeandForgan]. In order to close any conceptual loophole that may arise from unknown local interactions of the physical qubits, Paraoanu illustrated the violation with non-local schemes by designing two different quantum circuits using standard gates and measurements [@paraoanu2018nonlocal]. IBM’s cloud-based quantum computing platform has opened a new window of opportunity to perform experiments with quantum states. It allows testing various quantum mechanical phenomena [@GarciaJAMP2018; @SisodiaQIP2017; @HuffmanPRA2017; @VishnuQIP2018; @AlsinaPRA2016; @YalcinkayaPRA2017; @KandalaNAT2017; @SisodiaPLA2017]. Here, we present an equivalent quantum circuit design using IBM’s real quantum processor ‘ibmqx2’ to investigate the quantum pigeonhole effect. In order to get rid of any kind of local interactions, we implement two similar non-local circuits proposed by Paraoanu [@paraoanu2018nonlocal]. We show that with standard quantum gate operations and measurements it is indeed possible to observe the quantum pigeonhole effect. We perform simulations to verify the theoretical predictions. ![**Schematic diagram of the Mach-Zehnder interferometer**.
Three quantum particles are injected simultaneously. They are split into the two arms of the interferometer ($\Ket{0}$ and $\Ket{1}$) after the first beam splitter $BS1$. There is a phase shifter in one path of the interferometer. The particles are detected at detectors $D_0$ and $D_1$ after another beam splitter $BS2$.[]{data-label="fig1"}](fig1.eps){width="\linewidth"} This paper is organized in the following way. In Section \[II\], the quantum pigeonhole effect is discussed briefly. In Section \[III\], we present the implementation of the quantum circuits on the ‘ibmqx2’ superconducting chip to investigate the quantum pigeonhole effect and discuss the experimental outcomes and their significance. In Section \[IV\], we conclude the work with some remarks. Theory \[II\] ============= In our experiment, we model the quantum pigeonhole effect using superconducting qubits on the IBM quantum experience platform, as shown in Figure \[fig2\]. We consider a three-qubit system corresponding to three pigeons, and two orthogonal states $\ket{0}$ and $\Ket{1}$ represent the two boxes. We prepare the initial state by applying a Hadamard gate to each of the three qubits: $$\Ket{\psi_i} =\Ket{+}_1 \Ket{+}_2 \Ket{+}_3 ,$$ where $\Ket{+}=\frac{\ket{0}+\Ket{1}}{\sqrt{2}}$ and the indices 1, 2, 3 refer to qubits one, two and three, respectively. A phase shifter is then applied to the initial state $\Ket{\psi_i}$, which transforms it into $\Ket{+i}_1 \Ket{+i}_2 \Ket{+i}_3$, where $\Ket{+i}=\frac{\ket{0}+i \Ket{1}}{\sqrt{2}}$. Then, after applying a Hadamard gate, the three-qubit state becomes $$\begin{aligned} \Ket{\psi_f} &= \left( \frac{1+i}{2}\Ket{0} + \frac{1-i}{2}\Ket{1} \right) \otimes \left( \frac{1+i}{2}\Ket{0} + \frac{1-i}{2}\Ket{1} \right) \nonumber \\ & \hspace{2.9cm} \otimes \left( \frac{1+i}{2}\Ket{0} + \frac{1-i}{2}\Ket{1} \right).\end{aligned}$$ So, each qubit has equal probability of being found in either the $\Ket{0}$ or the $\Ket{1}$ state.
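Since the state above factorizes over the three qubits, the Hadamard–phase–Hadamard sequence can be checked with a one-qubit matrix calculation. A minimal sketch with ideal gates (no hardware noise):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard
S = np.array([[1, 0], [0, 1j]])                              # phase shifter

ket0 = np.array([1, 0], dtype=complex)
psi = H @ S @ H @ ket0        # the H - phase - H sequence on a single qubit
probs = np.abs(psi) ** 2      # amplitudes (1+i)/2 and (1-i)/2: each outcome 1/2

# the phase shifter followed by a Hadamard maps |+i> -> |1> and |-i> -> |0>,
# which underpins the post-selection argument in the text
plus_i = np.array([1, 1j], dtype=complex) / np.sqrt(2)
minus_i = np.array([1, -1j], dtype=complex) / np.sqrt(2)
out_pi = H @ S @ plus_i
out_mi = H @ S @ minus_i
```

The full three-qubit state is just the threefold tensor product of `psi`, so all eight measurement outcomes occur with probability $1/8$.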
The $\Ket{+}$ state can also be written as $$\Ket{+}=\frac{1-i}{2} \Ket{+i} + \frac{1+i}{2} \Ket{-i}.$$ After the phase shift operator, $\Ket{+i}$ will transform to $\Ket{-}= \frac{\Ket{0} - \Ket{1}}{\sqrt{2}} $ and finally to $\Ket{1}$, after the Hadamard operation. Similarly, $\Ket{-i}$ will transform to $\Ket{+}$ and then to $\Ket{0}$, after the Hadamard operation. From this we can infer that after the measurement if we get $\Ket{0}$, then it corresponds to a post-selected state $\Ket{-i} = \frac{\ket{0}-i
--- abstract: 'Hagfish slime is a unique predator defense material containing a network of long fibrous threads each $\sim 10\,\cm$ in length. Hagfish release the threads in a condensed coiled state known as thread cells, or skeins ($\sim 100\,\microm$), which must unravel within a fraction of a second to thwart a predator attack. Here we consider the hypothesis that viscous hydrodynamics can be responsible for this rapid unraveling, as opposed to chemical reaction kinetics alone. Our main conclusion is that, under reasonable physiological conditions, unraveling due to viscous drag can occur within a few hundred milliseconds, and is accelerated if the skein is pinned at a surface such as the mouth of a predator. We model a single thread cell unspooling as the fiber peels away due to viscous drag. We capture essential features by considering one-dimensional scenarios where the fiber is aligned with streamlines in either uniform flow or uniaxial extensional flow. The peeling resistance is modeled with a power-law dependence on peeling velocity. A dimensionless ratio of viscous drag to peeling resistance appears in the dynamical equations and determines the unraveling timescale. Our modeling approach is general and can be refined with future experimental measurements of peel strength for skein unraveling. It provides key insights into the unraveling process, offers potential answers to lingering questions about slime formation from threads and mucous vesicles, and will aid the growing interest in engineering similar bioinspired material systems.' author: - Gaurav Chaudhary - 'Randy H. Ewoldt' - 'Jean-Luc Thiffeault' bibliography: - 'hagfish.bib' title: Unraveling hagfish slime --- Introduction ============ Marine organisms present numerous interesting examples of fluid-structure interactions that are necessary for their physiological functions such as feeding [@Bishop2008; @Yaniv2014], motion [@Chapman2011], mechanosensing [@Oteiza2017], and defense [@Waggett2006]. 
A rather remarkable and unusual example of fluid-structure interaction is the production of hagfish slime, also known as hagfish defense gel. The hagfish is an eel-shaped deep-sea creature that produces the slime when it is provoked [@Downing1981]. Slime is formed from a small amount of biomaterial ejected from the hagfish’s slime glands into the surrounding water [@Fudge2005]. The biomaterial expands by a factor of 10,000 (by volume) into a mucus-like cohesive mass, which is hypothesized to choke predators and thus provide defense against attacks (Fig. \[fig:introduction\]A) [@Zintzen2011]. The secreted biomaterial has two main constituents — gland mucus cells and gland thread cells — responsible for the mucus and fibrous components of slime, respectively [@Downing1981; @Fernholm1981]. In the present study we focus on thread cells, which possess a remarkable structure wherein a long filament $(10$–$16\,\cm$ in length) is efficiently packed in conical loops into a prolate spheroid ($120$–$150\,\microm$ by $50$–$60\,\microm$) [@Fernholm1981; @Fudge2005], called the skein (Fig. \[fig:introduction\]B). When mixed with the surrounding water, the fiber ($1$–$3\,\microm$ thread diameter) unravels from the skein (Fig. \[fig:introduction\]C) and forms a fibrous network with other threads and mucous vesicles. This process occurs on the timescales of a predator attack ($100$–$400\,\millisecond$), as is apparent from video evidence [@Zintzen2011; @Lim2006]. ![Slime defends hagfish against predator attacks. (A) Sequence of events during a predator attack (adapted from [@Zintzen2011]). On being attacked, the hagfish produces a large quantity of slime that chokes the predator. The process of secretion and slime creation took less than $0.4\,\second$. (B) Slime is formed from the secreted biomaterial, in part containing prolate-shaped thread cells. 
(C) A thread cell unravels under the hydrodynamic forces from the surrounding flow field and produces a micron-width fiber of length $10$–$15\,\cm$. (D) The unraveled fibers and mucous vesicles entrain a large volume of water to form a cohesive network. Details on materials and microscopy are provided in Supplementary Information (S.I.) Sec. \[sec:materials\].[]{data-label="fig:introduction"}](introduction.pdf){width="\textwidth"} While several studies have revealed the mechanical and biochemical aspects [@Ewoldt2011; @Winegard2014; @Boni2016; @Boni2018; @Chaudhary2018] of slime, little is known about mechanisms involved in its rapid deployment. Newby [@newby1946] postulated that the fiber is coiled under a considerable pressure and the rupture of the cell membrane allows the fiber to uncoil. However, later studies [@koch1991; @Lim2006; @Winegard2010] have shown that convective mixing is essential for the production of fibers and slime. More recently, Bernards et al. [@bernards2014] experimentally demonstrated that Pacific hagfish thread cells can unravel even in the absence of flow, potentially due to chemical release of the adhesives holding the fiber together, but the timescales observed in their work are orders of magnitude larger than physiological timescales during the attack. Therefore, the key question about the fast timescales involved in this process remains to be answered. Deeper insights into the remarkable process of slime formation will aid the development of bioinspired material systems with novel functionality, such as materials with fast autonomous expansion and deployment. Motivated by the aforementioned experimental studies, our objective in this paper is to investigate the role of viscous hydrodynamics in skein unraveling via a simple physical model, and thus supply a qualitative understanding of the unraveling process. 
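As an aside, the dimensions quoted in the introduction allow a rough packing-fraction estimate, comparing the volume of a single thread (a long cylinder) to the volume of the prolate-spheroidal skein that stores it. This back-of-the-envelope check is ours, not the paper's; the specific numbers are midpoints or end points chosen from the quoted ranges.

```python
import math

# Illustrative values drawn from the ranges quoted in the text
thread_length   = 0.13     # m  (10-16 cm filament; midpoint)
thread_diameter = 1.0e-6   # m  (1-3 um thread; low end of the range)
skein_major     = 135e-6   # m  (120-150 um spheroid length; midpoint)
skein_minor     = 55e-6    # m  (50-60 um spheroid width; midpoint)

# Thread modeled as a cylinder, skein as a prolate spheroid
v_thread = math.pi * (thread_diameter / 2.0) ** 2 * thread_length
v_skein  = (4.0 / 3.0) * math.pi * (skein_major / 2.0) * (skein_minor / 2.0) ** 2

packing_fraction = v_thread / v_skein
print(f"packing fraction ~ {packing_fraction:.2f}")   # ~0.5 for these choices
```

A solid fraction of order one-half supports the description of the skein as "efficiently packed"; note that the upper end of the quoted diameter range would nominally give a fraction above unity, so the effective in-skein thread diameter must lie near the low end.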
The key question we answer here is whether the viscous hydrodynamic unraveling alone can account for the fast unraveling timescales that are observed in physiological scenarios. We hypothesize that suction feeding in marine predators creates sufficient hydrodynamic stresses to aid in the unraveling of skeins and set up the slime network. We develop fundamental insight by considering only the simplest flow fields — uniform flow and extensional flow. Our modeling framework, however, generalizes to complex flow fields that occur in physiological conditions. In Sec. \[sec:experiment\], we present a simple qualitative experiment demonstrating the force-induced unraveling of a hagfish skein. This motivates the model paradigm that follows. Section \[sec:model\] outlines the problem statement, and we derive the general governing equations. In Sec. \[sec:skeinflow\], the equations are solved for skein unraveling in one-dimensional flows under different physically-relevant scenarios. In Sec. \[sec:discussion\] we discuss the results in more detail, including the influence of constitutive model parameters for the peel strength, and comment on the qualitative comparisons between the experimental studies and theoretical work. Unraveling experiment {#sec:experiment} ===================== To motivate the mathematical modeling, we perform a simple experiment demonstrating the force-induced unraveling of thread from a skein (Fig. \[fig:UnravelSequence\], see also Supplementary video). A skein, obtained from Atlantic hagfish, is held in place by weak interactions with the substrate, and a force is applied to the dangling end using a syringe tip that naturally sticks to the filament. Figure \[fig:UnravelSequence\] shows the unraveling skein at different time frames. Frame 1 shows the unforced and stable configuration, with no unraveling. Unraveling occurs only when a force is applied from frame 2 onward. 
There are events when the thread peels away in clumps, but the orderly unraveling recovers quickly. A minimum peeling force seems to be required to unravel the thread from the skein. A simple estimate of the minimum peeling force, based on the weak adhesion (van der Waals interaction) between the unraveling fiber and the skein, gives $0.1\,\microNewton$ (see S.I. Sec. \[sec:force\]). ![Unraveling a thread skein by pulling, as viewed with brightfield microscopy. Bottom right scale bar $50\,\microm$.[]{data-label="fig:UnravelSequence"}](UnravelSequence1-vC.jpg){width=".98\columnwidth"} Problem formulation {#sec:model} =================== [Schematic: the thread is parameterized by arclength, with the initial thread on $0 \le \s \le \L_0$, the unraveled thread extending to $\s = \L(\t)$ with end position $\xv(\L,\t)=\Xv(\t)$, and the remaining thread coiled in the skein.] To determine if viscous hydrodynamic forces can account for fast skein unraveling, we consider a model of an inextensible slender thread unraveling from a spherical skein. The thread unravels and separates from the skein in response to a local force due to a viscous fluid flow surrounding the connected thread and skein. A schematic representation is shown
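This formulation can be caricatured numerically. The sketch below is not the authors' model: it assumes a quasi-static balance between slender-body viscous drag on the already-unraveled thread in a uniform flow and a power-law peel resistance $F_0(v/v_0)^n$ of the kind introduced above; every parameter value here is an invented illustration.

```python
import math

# Illustrative parameters (assumed, not taken from the paper)
mu    = 1.0e-3   # water viscosity [Pa s]
U     = 0.5      # ambient flow speed past a pinned skein [m/s]
a     = 1.0e-6   # thread radius [m]
L_tot = 0.10     # total thread length [m]
F0    = 1.0e-7   # peel-strength scale, ~0.1 uN [N]
v0    = 1.0e-3   # reference peeling velocity [m/s]
n     = 1.0      # power-law exponent of the peel law

def viscous_drag(L):
    """Drag on a straight fiber of length L aligned with the flow
    (slender-body parallel drag coefficient 2*pi*mu/ln(L/a))."""
    return 2.0 * math.pi * mu / math.log(L / a) * U * L

def peel_velocity(L):
    """Quasi-static balance: drag = F0*(v/v0)**n, solved for v."""
    return v0 * (viscous_drag(L) / F0) ** (1.0 / n)

# Forward-Euler integration of dL/dt = peel_velocity(L)
L, t, dt = 1.0e-4, 0.0, 1.0e-4   # seed length [m], time [s], step [s]
while L < L_tot and t < 10.0:
    L += peel_velocity(L) * dt
    t += dt

print(f"unraveling time ~ {t:.2f} s for these made-up parameters")
```

For these numbers the toy model fully unravels in a couple of seconds; stronger flows or weaker peel strength push the timescale toward the sub-second range discussed in the text.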
--- abstract: | Background : Measurement of the fusion cross-section for neutron-rich light nuclei is crucial in ascertaining if fusion of these nuclei occurs in the outer crust of a neutron star. Purpose : Measure the fusion excitation function at near-barrier energies for the $^{19}$O + $^{12}$C system. Compare the experimental results with the fusion excitation functions of $^{18}$O + $^{12}$C and $^{16}$O + $^{12}$C. Method : A beam of $^{19}$O, produced via the $^{18}$O(d,p) reaction, was incident on a $^{12}$C target at energies near the Coulomb barrier. Evaporation residues produced in fusion of $^{18,19}$O ions with $^{12}$C target nuclei were detected with good geometric efficiency and identified by measuring their energy and time-of-flight. Results : A significant enhancement is observed in the fusion probability of $^{19}$O ions with a $^{12}$C target as compared to $^{18}$O ions. Conclusion : The larger cross-sections observed at near-barrier energies are related to a significant narrowing of the fusion barrier, indicating a larger tunneling probability for the fusion process. author: - Varinderjit Singh - 'J. Vadas' - 'T. K. Steinbach' - 'S. Hudan' - 'R. T. deSouza' - 'L. T. Baby' - 'S. A. Kuvin' - 'V. Tripathi' - 'I. Wiedenhöver' bibliography: - 'fusion\_19O.bib' title: 'Fusion Enhancement for Neutron-Rich Light Nuclei' --- Approximately half the elements beyond iron are formed via the r-process, in which seed nuclei rapidly capture multiple neutrons and subsequently undergo $\beta$ decay. Although it is clear that a high neutron density is required for the r-process, the exact site or sites at which r-process nucleosynthesis occurs remain a matter of debate. One proposed scenario involves the merging of two compact objects such as neutron stars. Tidal forces between the two compact objects disrupt the neutron stars, ejecting neutron-rich nuclei into the interstellar medium. 
Although nucleosynthesis via decompression of neutronized nuclear matter was initially proposed decades ago [@Lattimer77; @Meyer89], only recently have detailed computational investigations of such a scenario, e.g. the tidal disruption of a neutron star, become feasible [@Berger13; @Martin13; @Foucart14; @Just15; @Radice16]. The most recent calculations suggest that such events could be responsible for heavy element (A$>$130) r-process nucleosynthesis. The recent observation of gravitational waves emanating from two merging black holes [@Abbott16] has re-ignited the question of whether and to what degree the disruption of neutron stars contributes to the heavy element composition of the universe. A natural question in considering the ejecta from the disruption of a neutron star is the composition of the neutron star prior to the merger, as well as the reactions that might occur both during and after the merging event. The outer crust of a neutron star provides a unique environment in which nuclear reactions can occur. Of particular interest are the fusion reactions of neutron-rich light nuclei. These nuclei have been hypothesized to fuse more readily than the corresponding $\beta$-stable isotopes, providing a potential heat source that triggers the fusion of $^{12}$C nuclei resulting in an X-ray superburst [@Horowitz08]. An initial measurement of fusion induced with neutron-rich oxygen nuclei suggested an enhancement of the fusion probability as compared to standard models of fusion-evaporation [@Rudolph12]. To definitively establish if neutron-rich light nuclei exhibit a fusion enhancement at sub-barrier energies, high-quality experimental data are needed. In the present work, we present for the first time a measurement of the total fusion cross-section for $^{19}$O + $^{12}$C at incident energies near the barrier and compare the results with the fusion cross-sections for $^{16,18}$O + $^{12}$C. 
Fusion excitation functions reflect the interplay of the repulsive Coulomb and attractive nuclear potentials as the two nuclei collide. As the charge distribution of the projectile oxygen nuclei is essentially unaffected by the additional neutrons, the repulsive Coulomb potential is unchanged. Consequently, the comparison of the fusion excitation functions for the different oxygen isotopes provides access to the changes in the attractive nuclear potential. This change in the attractive potential can be related to changes in the neutron density distribution with increasing number of neutrons for oxygen nuclei. ![\[fig:setup\] (Color online) Schematic illustration of the experimental setup. The MCP$\mathrm{_{RESOLUT}}$ detector is located approximately 3.5 m upstream of the MCP$\mathrm{_{TGT}}$ detector. Inset: Energy deposit versus time-of-flight spectrum for ions exiting RESOLUT that are incident on the $^{12}$C target at E$_{\mathrm{lab}}$=46.7 MeV. Color is used to represent yield in the two dimensional spectrum on a logarithmic scale.](Fig1_19OPaper_v2.eps) The experiment was performed at the John D. Fox accelerator laboratory at Florida State University. A beam of $^{18}$O ions, accelerated to an energy of 80.7 MeV, impinged on a deuterium gas cell at a pressure of 350 torr, cooled to a temperature of 77 K. Ions of $^{19}$O were produced via a (d,p) reaction and separated from the incident beam by the electromagnetic spectrometer RESOLUT [@RESOLUT]. Although this spectrometer rejected most of the unreacted beam that exited the production gas cell, the beam exiting the spectrometer consisted of both $^{19}$O and $^{18}$O ions. As each beam particle was independently identified, this beam mixture allowed simultaneous measurement of $^{18}$O + $^{12}$C and $^{19}$O + $^{12}$C, thus providing a robust measure of the fusion enhancement due to the presence of the additional neutron. 
The experimental setup used to measure fusion of oxygen ions with carbon nuclei is depicted in Fig. \[fig:setup\]. To identify beam particles, the energy deposit and time-of-flight [@deSouza11] of each particle were measured. Upon exiting the spectrometer, particles first traverse a thin secondary emission foil (0.5 $\mu$m thick aluminized mylar), ejecting electrons in the process. These electrons are accelerated and bent out of the beam path and onto the surface of a microchannel plate detector (MCP$\mathrm{_{RESOLUT}}$) where they are amplified to produce a fast timing signal. After traversing the thin foil of MCP$\mathrm{_{RESOLUT}}$, the ions pass through a compact ionization detector (CID) located approximately 3.5 m downstream. Passage of the ions through this ionization chamber results in an energy deposit ($\Delta$E) characterized by their atomic number (Z), mass number (A), and incident energy. After exiting the small ionization chamber the ions are incident on a 100 $\mu$g/cm$^2$ carbon foil. This foil serves both as a secondary electron emission foil for the target microchannel plate detector (MCP$\mathrm{_{TGT}}$) and as the target for the fusion experiment. ![\[fig:pid\] (Color online) Two dimensional spectrum depicting the dependence of the energy deposited in the annular silicon detector, T2, on the mass of the ion. The dashed (red) rectangle indicates the region of the evaporation residues. Inset: Mass distribution of ions detected within the interval 13 MeV $<$ E$_{Si}$ $<$ 41 MeV. Vertical lines indicate the A limits used to designate evaporation residues.](EA_19O12C_E18_1P1R.eps) By utilizing the timing signals from both microchannel plate detectors together with the ionization chamber, a $\Delta$E-TOF measurement is performed. This measurement allows identification of ions in the beam as indicated in the inset of Fig. \[fig:setup\]. 
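For orientation, energy-plus-time-of-flight identification of this kind rests on nonrelativistic kinematics, $E = \frac{1}{2} A u \, (d/t)^2$, so the mass number follows from the measured energy and flight time. The snippet below is our illustration, not part of the paper's analysis; the $\approx 3.5$ m flight path is the detector separation quoted above, while the beam energy and timing numbers are invented.

```python
AMU_MEV = 931.494       # atomic mass unit [MeV/c^2]
C = 0.299792458         # speed of light [m/ns]

def mass_number(energy_mev, tof_ns, path_m):
    """Infer the mass number A from kinetic energy and time of flight,
    assuming nonrelativistic kinematics: A = 2*E / (u * beta**2)."""
    beta = (path_m / tof_ns) / C
    return 2.0 * energy_mev / (AMU_MEV * beta ** 2)

# Example: a 40 MeV ion covering 3.5 m in ~169 ns is consistent with A = 18
print(round(mass_number(40.0, 169.0, 3.5)))  # -> 18
```

At these velocities ($\beta \approx 0.07$) the relativistic correction is well below the percent level, so the nonrelativistic formula suffices for particle identification.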
Clearly evident in the figure are three peaks associated with the $^{19}$O$^{7+}$ ions, $^{18}$O$^{7+}$ ions, and $^{18}$O$^{6+}$ ions. The $^{19}$O ions corresponded to 31% of the beam intensity, with the $^{18}$O$^{7+}$ and $^{18}$O$^{6+}$ corresponding to approximately 20% and 29%, respectively. Fusion of $^{19}$O (or $^{18}$O) nuclei in the beam together with $^{12}$C nuclei in the target foil results in the production of an excited $^{31}$Si (or correspondingly $^{30}$Si) nucleus. For collisions near the Coulomb barrier the excitation of the fusion product is relatively modest, E$^*$ $\approx$ 35 MeV. This fusion product de-excites by evaporation of a few neutrons, protons, and $\alpha$ particles, resulting in evaporation residues (ERs). Statistical model calculations [@evapor] indicate that for the $^{31}$Si compound nucleus, the nuclei $^{30}$Si, $^{29}$Si, $^{28}$Si, $^{29}$Al, $^{28}$Al, $^{27}$Mg, and $^{26}$Mg account for the bulk of the ERs. These ERs are deflected from the beam direction by the recoil imparted by the emission of the light particles. The ERs are detected and identified by two annular silicon detectors designated T2 and T3 situated downstream of the MCP$\mathrm{_{TGT}}$. These detectors subtend the angular range 3.5$^\circ$ $<$ $\theta_{lab}$ $<$ 25$^\circ$. Evaporation
--- abstract: 'This work is devoted to improving empirical mass-luminosity relations (MLR) and the mass-metallicity-luminosity relation (MMLR) for low mass stars. For these stars, observational data in the mass-luminosity plane or the mass-metallicity-luminosity space are subject to non-negligible errors in all coordinates, which have different dimensions. Thus a reasonable weight-assigning scheme is needed for obtaining more reliable results. Such a scheme is developed, with which each data point can have its own due contribution. Previous studies have shown that there exists a plateau feature in the MLR. Taking into account the constraints from the observational luminosity function, we find by fitting the observational data using our weight-assigning scheme that the plateau spans from 0.28 [[$M_\odot$]{}]{} to 0.50 [[$M_\odot$]{}]{}. Three-piecewise continuous improved MLRs in the K, J, H and V bands, respectively, are obtained. The visual MMLR is also improved based on our K band MLR and the available observational metallicity data.' author: - | Fang Xia$^{1,2}$[^1], Shulin Ren$^{1,2}$ and Yanning Fu$^{1}$\ $^{1}$Purple Mountain Observatory, Chinese Academy of Sciences, Nanjing 210008, China\ $^{2}$Graduate School of Chinese Academy of Sciences, Beijing 100039, China title: 'The Empirical Mass-Luminosity Relation for Low Mass Stars' --- [*Keywords:*]{} [ stars: low-mass–stars: fundamental parameters–methods: numerical]{} Introduction ============ Mass is one of the most fundamental parameters of a star. Unfortunately, stellar mass is difficult to determine directly. Therefore, it is often estimated via the mass-luminosity relation (MLR). Since the pioneering papers of Hertzsprung (1923) and Russell et al. (1923), there have been many studies devoted to improving and understanding the MLR. To date, the relation has been well constrained for solar-type and intermediate stars [@Del00]. But the empirical MLR remains poorly defined for very low mass stars due to the scarcity of data. 
On the other hand, however, the empirical MLR for very low mass stars is crucial in many aspects of astronomy and astrophysics. For example, without an accurate MLR in this mass interval, the total luminous mass of the Galaxy will never be well determined [@Henry98]. This is because very low mass stars account for at least 70% of all stars, and they make up more than 40% of the total stellar mass of the Galaxy [@Henry99]. Recently, the MLR for very low mass stars has been improved significantly, thanks to painstaking observations and the required orbital analysis. Henry et al. (1999) obtained or improved some dynamical masses of very low mass stars and discussed the MLR for the mass interval 0.2 [[$M_\odot$]{}]{} to 0.08 [[$M_\odot$]{}]{}. Delfosse et al. (2000) pointed out that the MLR is tight only in near-infrared bands. Together with the improved MLRs in the K, J and H bands, they suggested that the large dispersion in the V band should be due to differences in stellar metallicity. In Bonfils et al. (2005), with quantitative metallicity estimations, the authors verified this metallicity dependence and provided a visual mass-metallicity-luminosity relation (MMLR) for the first time. Despite this progress, the empirical MLR for low mass stars obtained so far is still not satisfying. The existence of a transitional mass interval, where the derivative of the MLR presents a plateau feature, is an intrinsic obstacle to a satisfying result. This is because a continuous yet segmented model is needed for the MLR. While more data are often necessary for fitting a segmented model than a non-segmented one, some data with relatively low accuracy were discarded in previous work. This is not without reason, as many of the data are prone to systematics. When removing such data, it is reasonable to require that the remaining data set be well behaved. 
At the same time, one should avoid discarding more data than necessary, so that the original data set can be fully utilized. Based on a newly developed weight-assigning scheme, we are able to achieve this by an iterative process and thus give a more reliable MLR. In comparison with previous work [@Del00; @HM93], the improvement of the MLR in (0.7 [[$M_\odot$]{}]{}, 1.0 [[$M_\odot$]{}]{}) also comes from incorporating more observational data, and that in (0.1 [[$M_\odot$]{}]{}, 0.7 [[$M_\odot$]{}]{}) from taking into account the underlying physics. In section 2 and section 3, respectively, we describe our sample data and develop a fitting method with a reasonable weight-assigning scheme. In section 4, the best-fitting three-piecewise empirical MLRs in the K, J, H and V bands, and the MMLR in the V band, are provided. Concluding remarks are given in the last section. DATA COLLECTION =============== In order to map out the MLR for low mass main sequence stars, we need a sample of dynamical masses derived from orbital analysis [@Henry04] and absolute magnitudes. By searching the literature, the resulting sample of 48 main sequence stars is listed in Table 1. The columns of Table 1 are, respectively, the name of the star, the dynamical mass ($M\pm \Delta M$) (spanning 0.07 [[$M_\odot$]{}]{} to 1.086 [[$M_\odot$]{}]{}), absolute V magnitude ($M_V\pm\Delta M_{V}$), absolute K magnitude ($M_K\pm \Delta M_{K}$), absolute J magnitude ($M_J\pm\Delta M_{J}$) and absolute H magnitude ($M_H\pm\Delta M_{H}$). Most of the data can be found directly in the literature, except that some values of $M_V, M_J$ and $M_H$ are obtained from apparent magnitudes (color indices if necessary) and parallaxes. Spectral types as well as references are also indicated. 
First rows of Table 1 (magnitudes in mag; blank cells correspond to magnitudes missing in the source):

| Name | $M$ ([[$M_\odot$]{}]{}) | $\Delta M$ | $M_V$ | $\Delta M_V$ | $M_K$ | $\Delta M_K$ | $M_J$ | $\Delta M_J$ | $M_H$ | $\Delta M_H$ | Spectrum | Reference |
|--------|--------|--------|-------|-------|--------|--------|--------|--------|--------|--------|----------|--------------|
| GL22A  | 0.43   | 0.039  | 10.56 | 0.07  | 6.44   | 0.10   |        |        | 6.74   | 0.08   | M2V      | 1&2&9        |
| GL22C  | 0.14   | 0.014  | 13.64 | 0.12  | 8.43   | 0.13   |        |        | 8.85   | 0.10   | M2V      | 1&2&9        |
| GL25A  | 0.94   | 0.088  | 4.67  | 0.088 | 3.82   | 0.27   | *3.18* | *0.08* | 3.88   | 0.26   | G8V      | 6&9&10$^{a}$ |
| GL25B  | 0.7    | 0.079  | 4.99  | 0.088 | 3.98   | 0.27   |        |        | 4.13   | 0.26   | G8V      | 6&9&10$^{a}$ |
| GL65A  | 0.102  | 0.01   | 15.41 | 0.05  | 8.76   | 0.07   | *9.68* | *0.05* | *9.15* | *0.03* | M5.5V    | 8            |
| GL65B  | 0.100  | 0.01   | 15.87 | 0.06  | 9.16   | 0.07   | 10.06  | 0.05   | 9.45   | 0.03   | M6V      | 8            |
| GL67A  | 0.933  | 0.231  | 4.45  | 0.03  | *2.87* | *0.12* | *3.14* | *0.12* | 2.90   | 0.12   | M4V      | 9            |
| GL67B  | 0.28   | 0.071  | 12.07 | 0.5   | 7.30   | 0.13   | *7.52* | *0.27* | 7.40   | 0.17   | M4V      | 9            |
| GL166C | 0.177  | 0.029  | 12.68 | 0.03  | 7.58   | 0.07   | 8.49   | 0.07   | *7.87* | *0.07* | M4.5V    | 1&9          |
| GL234A | 0.2027 | 0.0106 | 13.07 | 0.05  | 7.64   | 0.04   | 8.52   | 0.06   | 7.93   | 0.04   | M4.5V    | 8            |
| GL234B | 0.1034 | 0.0035 | 16.16 | 0.07  | 9.26   | 0.04   | 10.31  | 0.25   | 9.56   | 0.10   | M4.5V    | 8            |
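The paper develops its own weight-assigning scheme in its section 3; purely as a generic illustration of the underlying idea (weighting data that carry errors in both coordinates), the sketch below iterates a standard "effective variance" straight-line fit with weights $w_i = 1/(\sigma_{y,i}^2 + b^2\sigma_{x,i}^2)$. This is a textbook device, not the authors' scheme, and the toy numbers are made up.

```python
def fit_line_xy_errors(x, y, sx, sy, iters=20):
    """Fit y = a + b*x when both coordinates carry errors, using
    effective-variance weights w = 1/(sy**2 + (b*sx)**2), iterated
    because the weights themselves depend on the slope b."""
    a = b = 0.0  # first pass reduces to ordinary weighted least squares
    for _ in range(iters):
        w   = [1.0 / (syi ** 2 + (b * sxi) ** 2) for sxi, syi in zip(sx, sy)]
        S   = sum(w)
        Sx  = sum(wi * xi for wi, xi in zip(w, x))
        Sy  = sum(wi * yi for wi, yi in zip(w, y))
        Sxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        b = (S * Sxy - Sx * Sy) / (S * Sxx - Sx ** 2)
        a = (Sy - b * Sx) / S
    return a, b

# Made-up, exactly linear "mass vs M_K" toy data: M_K = 10 - 6*M
mass = [0.10, 0.20, 0.30, 0.50, 0.70, 1.00]
m_k  = [10.0 - 6.0 * m for m in mass]
a, b = fit_line_xy_errors(mass, m_k, sx=[0.01] * 6, sy=[0.05] * 6)
print(a, b)  # recovers intercept 10 and slope -6
```

With real data the iteration matters: points with large mass errors in steep parts of the relation are automatically down-weighted, which is the qualitative behavior the paper's scheme is also after.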
--- abstract: | A model with a singular forward scattering amplitude for particles with opposite spins in d spatial dimensions is proposed and solved by using the bosonization transformation. This interaction potential leads to spin-charge separation. Thermal properties at low temperature for this Luttinger liquid are discussed. Also, the explicit form of the single-electron Green function is found; it has a square-root branch cut. New fermion field operators are defined; they describe holons and spinons as the elementary excitations. Their single-particle Green functions possess pseudoparticle properties. Using these operators, the spin-charge separated Hamiltonian for ideal gases of holons and spinons is derived, reflecting an inverse (fermionization) transformation.\ PACS Nos.71.10.+x, 71.27.+a address: | (a) Institute of Theoretical Physics, Warsaw University, ul. Hoża 69, 00-681 Warszawa, Poland\ (b) Institute of Physics, Jagiellonian University, ul. Reymonta 4, 30-059 Kraków, Poland author: - 'Krzysztof Byczuk$^{a}$ and Jozef Spałek$^{b,a}$ [^1]' title: 'Spin-Charge Separated Luttinger Liquid in Arbitrary Spatial Dimensions ' --- It was suggested [@and1] that the properties of the normal state of high-temperature superconductors are properly described by a Luttinger liquid, in which the spin and charge degrees of freedom are separated. In one-dimensional systems this phenomenon is well understood [@rew]. However, in two and three dimensions the present understanding of the spin-charge separation is rather poor. In this letter we formulate and solve exactly a d-dimensional model exhibiting spin-charge separation, and discuss its thermal and dynamic properties. A natural approach to studying spin-charge decoupling phenomena is the bosonization transformation, recently generalized to multidimensional space [@hal]. Here we adopt the operator version of the bosonization developed in Ref. [@how]. 
The starting assumption in this method is the existence of the Fermi surface (FS), defined as a collection of points at which the momentum distribution function has singularities at zero temperature ($T=0$). These points are parameterized by vectors $\bf S$ and ${\bf T}$, which label a finite and locally flat (rectangular in shape) mesh of grid points on the FS with spacing $\Lambda \ll k_F$ between them [@how; @hal]. Introducing coarse-grained density fluctuation operators $J_{\sigma}({\bf S},{\bf q})$, defined in boxes centered at each FS point and having surface area $\Lambda^{d-1}$ and thicknesses $\lambda/2$ both above and below it, one can transform the effective Hamiltonian for interacting fermions into an effective Hamiltonian for free bosons. Explicitly, it takes the general form $$H = \frac{1}{2} \sum_{{\bf S}, {\bf T}} \sum_{\bf q} \sum_{\sigma \sigma'} \Gamma_{\sigma \sigma'} ({\bf S}, {\bf T}, {\bf q}) J_{\sigma}({\bf S}, {\bf q})J_{\sigma'}({\bf T}, - {\bf q}), \label{e7}$$ where $ \Gamma_{\sigma \sigma'} ({\bf S}, {\bf T}, {\bf q}) = v_F( {\bf S}) \frac{1}{\Omega} \delta_{\sigma, \sigma'} \delta^{d-1}_{{\bf S}, {\bf T}} + \frac{1}{L^d} V_{\sigma \sigma'}({\bf S}, {\bf T}, {\bf q}) $ is a positive-definite matrix element. The first term corresponds to the kinetic energy part of the original fermionic Hamiltonian with a linearized dispersion relation close to the FS, whereas the second term is the effective (low-energy) interaction between the particles with spins $\sigma$ and $\sigma'$. The geometrical factor $\Omega = \Lambda^{d-1} (\frac{L}{2 \pi})^d$ depends on the system dimension $d$. The explicit expression for $V_{\sigma \sigma'}({\bf S},{\bf T}, {\bf q})$ is generally derived by transforming out the high-energy modes in the fermionic Hamiltonian [@shankar]. Obviously, this procedure can also change the Fermi velocity $v_F({\bf S})$. 
Therefore, we take $v_F({\bf S})$ as an effective value obtained after removing the high-energy degrees of freedom. As shown below, we can characterize the universal properties of the fermions knowing only the asymptotic behavior of the interaction potential $V_{\sigma \sigma'}({\bf S}, {\bf T}, {\bf q})$ in the thermodynamic limit. If the system is invariant under time reversal, the interaction part must be explicitly symmetric under this operation, which means that $ V_{\sigma \sigma'}({\bf S}, {\bf T}, {\bf q}) = V_{\bar{\sigma} \bar{\sigma}'}(-{\bf S}, -{\bf T}, - {\bf q}) $. Furthermore, if the FS is also invariant under the reflections ${\bf S} \rightarrow - {\bf S}$ etc., the last condition becomes $ V_{\sigma \sigma'}({\bf S}, {\bf T}, {\bf q}) = V_{\bar{\sigma} \bar{\sigma}'}({\bf S}, {\bf T}, {\bf q}) $. In that case $V_{\sigma \sigma'}({\bf S}, {\bf T}, {\bf q})$ depends only on the relative orientation of the spins $\sigma$ and $\sigma'$; there are only two independent components: $V_{\sigma \sigma}$ for parallel spins and $V_{\sigma \bar{\sigma}}$ for antiparallel spins. It is convenient to introduce the symmetric and the antisymmetric combinations: $ V^{c,s}({\bf S},{\bf T},{\bf q}) \equiv \frac{1}{2} (V_{\sigma \sigma}({\bf S}, {\bf T}, {\bf q}) \pm V_{\sigma \bar{\sigma}}({\bf S}, {\bf T}, {\bf q}) ), $ where the $c$ and $s$ superscripts correspond to the “$\pm$” signs, respectively. Correspondingly, we define the currents $ J_{c,s} ({\bf S}, {\bf q}) \equiv \frac{1}{\sqrt{2}} ( J_{\uparrow}({\bf S}, {\bf q}) \pm J_{\downarrow}({\bf S}, {\bf q})), $ which describe the charge and the spin density fluctuations, respectively. 
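The decoupling effected by these combinations is a one-line identity (patch and momentum labels suppressed for brevity). Since $J_\uparrow J_\uparrow + J_\downarrow J_\downarrow = J_c J_c + J_s J_s$ and $J_\uparrow J_\downarrow + J_\downarrow J_\uparrow = J_c J_c - J_s J_s$, the spin-symmetric interaction decomposes as $$\sum_{\sigma \sigma'} V_{\sigma \sigma'}\, J_{\sigma} J_{\sigma'} = V_{\sigma \sigma} \left( J_\uparrow J_\uparrow + J_\downarrow J_\downarrow \right) + V_{\sigma \bar{\sigma}} \left( J_\uparrow J_\downarrow + J_\downarrow J_\uparrow \right) = 2 V^{c} J_c J_c + 2 V^{s} J_s J_s ,$$ with no $J_c J_s$ cross terms; together with the overall factor $\frac{1}{2}$ in (\[e7\]), this produces interaction terms of the form $V^{c} J_c J_c + V^{s} J_s J_s$ in the charge and spin sectors.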
Then, the original Hamiltonian (\[e7\]) takes the form $$H = \sum_{\alpha = c,s} \frac{1}{2} \sum_{{\bf S}} v_F({\bf S}) \frac{1}{\Omega} \sum_{{\bf q}} J_{\alpha}({\bf S}, {\bf q}) J_{\alpha}({\bf S}, - {\bf q}) + \frac{1}{L^d} \sum_{{\bf S}, {\bf T}} \sum_{{\bf q}} V^{\alpha}({\bf S}, {\bf T}, {\bf q}) J_{\alpha}({\bf S}, {\bf q}) J_{\alpha}( {\bf T}, -{\bf q}). \label{e15}$$ The $\alpha =c$ term describes the dynamics of the charge density fluctuations in the system, whereas the $\alpha = s$ term deals with the longitudinal spin density fluctuations. There are no terms which mix the degrees of freedom (i.e. $ \sim J_c \cdot J_s $) because the Hamiltonian is assumed to be invariant under the spin flip (i.e. $J_c \rightarrow J_c$, $J_s \rightarrow - J_s$). One can check that in the noninteracting case, i.e. for $V^{\alpha}\equiv 0$, both the spin and the charge density fluctuations propagate with the same velocity $v_F({\bf S})$. The commutation relations for the density fluctuation operators take the following form $ \left[ J_{\alpha}({\bf S}, {\bf q}) , J_{\beta}({\bf T}, {\bf p}) \right] = \delta_{\alpha \beta} \delta^{d-1}_{{\bf S}, {\bf T}} \delta^d_{ {\bf p} + {\bf q},0}\: \Omega \: {\bf q} \cdot \hat{n}_{\bf S}, $ where $\alpha , \beta = c,s$. Thus, the two branches of density fluctuations are independent of each other. The commutation relations become equivalent to those obeyed by bosonic harmonic-oscillator creation and annihilation operators after rescaling by the factor on the right-hand side, i.e. by defining the creation ($a_{\alpha}^+$) and annihilation ($a_{\alpha}$) operators according to $$J_{\alpha}({\bf S}, {\bf q}) = \theta( \hat{n}_{{\bf S}} \cdot {\bf q}) \sqrt{\Omega \hat{n}_{{\bf S}} \cdot {\bf q}} \; a_{\alpha}({\bf S}, {\bf q}) + \theta(- \hat{n} _{\bf S} \cdot {\bf q}) \sqrt{-\Omega \hat{n}_{{\bf S}} \cdot {\bf q}} \; a_{\alpha}^+({\bf S},- {\bf q}),$$
--- author: - 'Lasma Alberte$^{a,}$[^1], Andrei Khmelnitsky$^{a,}$[^2]' title: 'Reduced Massive Gravity with Two Stückelberg Fields' --- Introduction ============ The observation of the accelerated expansion of our universe is the driving motivation for various infrared modifications of general relativity. One of the theoretically most natural infrared modifications would be to give a small mass to the graviton. Since the early discovery of the quadratic Fierz-Pauli mass term for metric perturbations in [@pauli], there has been an ongoing search for a healthy non-linear completion of massive gravity. The construction of the non-linear graviton mass term is based on the use of an auxiliary non-dynamical reference metric, which as an absolute object would break the diffeomorphism invariance of general relativity. The diffeomorphism invariance can be restored by introducing four Stückelberg scalars, corresponding to the four coordinate transformations [@Siegel:1993sk; @arkani; @mukh]. However, a generic theory of four Stückelberg scalars together with the two degrees of freedom of the massless graviton propagates six degrees of freedom in total. This is one degree of freedom more than the five expected from the massive spin-2 representations of the Poincaré group. Moreover, the additional degree of freedom is sick and represents the (in)famous Boulware-Deser (BD) ghost [@boul]. After an order-by-order construction of a non-linear theory which is ghost-free in the decoupling limit in [@gr], a full resummed theory of non-linear massive gravity was proposed by de Rham, Gabadadze, and Tolley (dRGT) [@grt]. In unitary gauge this theory has been shown to propagate five degrees of freedom [@Hassan:2011hr; @Hassan:2011ea]. 
The Hamiltonian analysis of the full diffeomorphism invariant theory including the four Stückelberg fields also seems to confirm the expectation that the dRGT theory propagates at most five degrees of freedom [@deRham:2011rn; @Hassan:2012qv; @Kluson:2012wf] (for recent counterarguments see [@Chamseddine:2013lid]). However, the canonical analysis of dRGT theory in the presence of the four scalar fields is intricate, and in the existing literature it is often obscured either by mixing the gravitational and scalar degrees of freedom or by the introduction of new auxiliary fields. In the present paper we take a different point of view and treat dRGT massive gravity as a theory of Stückelberg scalar fields $\phi^A$ coupled to the Einstein-Hilbert gravity. Since the theory is reparametrization invariant, and the scalars are coupled to gravity minimally, we shall count the degrees of freedom propagated by the metric and by the scalar fields separately. Hence the absence of the sixth mode in dRGT theory should manifest itself as a feature of the scalar field Lagrangian alone. Motivated by these considerations we study the dynamics of the Stückelberg scalar fields given by the dRGT mass term [@grt]. We observe that, if seen as a particular scalar field theory, the dRGT scalar field Lagrangian allows for an arbitrary number of scalar fields in it. In particular, the number of scalar fields $N$ can be chosen to be less than the space-time dimension $d+1$ without affecting either the diffeomorphism or the space-time Lorentz invariance of the theory. We dub the dRGT theories of gravity with a reduced number $N<d+1$ of Stückelberg scalar fields “reduced massive gravity". The simplest particular cases of such dRGT-inspired scalar theories include, for $d = 0$, the action of a massive relativistic particle in $N$ dimensions and, for $N =1$, the single “k-essence” field with DBI-like action [@ArmendarizPicon:1999rj].
Another “simple" choice, an arbitrary number $N$ of fields in $1+1$ dimensions, gives the action of a relativistic string in $N$-dimensional target space-time. In the case $N=3$, with three scalar fields living in a configuration space diffeomorphic to $\mathbb R^3$, the reduced dRGT action can be regarded as a particular effective field theory of a homogeneous solid [@Dubovsky:2005xd]. The degree of symmetry of the solid depends on the isometries of the metric $f_{AB}(\phi)$ in the internal space of scalar fields. If the metric is symmetric under the $SO(3)$ group and the action contains only the term invariant under volume-preserving diffeomorphisms, then it describes a perfect fluid. The case with the number of scalar fields $N\geq d+1$ has been recently discussed in [@Gabadadze:2012tr; @Andrews:2013ora] as a theory of multiple Galileon fields covariantly coupled to dRGT massive gravity. In general, the solutions of the reduced massive gravity theories are expected to break Lorentz and rotational symmetries and lead to anisotropic cosmologies. The pattern of such breaking is determined by the number of scalar fields and the signature and isometries of the reference metric. The connection of reduced massive gravity theories to Lorentz-violating massive gravity theories will be discussed in more detail in the main body of the paper. Another possible application of reduced massive gravity theories could be found in modeling the translational symmetry breaking and momentum dissipation in holography. In particular, in [@Vegh:2013sk] the conductivity in the boundary theory was calculated in the presence of a Lorentz-violating graviton mass term in the bulk, which originated from a dRGT-like action with two Stückelberg fields and a Euclidean reference metric. The models discussed in our paper could be further used in holographic constructions. In this paper we consider the case of reduced massive gravity with two Stückelberg fields.
It is the simplest case with several scalar fields involved, in which we can write the Hamiltonian and constraint structure explicitly. We perform the full Hamiltonian analysis of the scalar field sector and find that, in contrast to dRGT massive gravity, the determinant of the kinetic matrix does not vanish. Hence the scalar field Lagrangian in general propagates two degrees of freedom. We formulate the condition for the scalar field configurations on which the determinant vanishes and investigate the different regions in the phase space of scalar fields. We show that on the singular surface, where the determinant of the kinetic matrix vanishes, the theory is equivalent to $1+1$-dimensional massive gravity and thus has no dynamical degrees of freedom. We also show that regular solutions in the close vicinity of the singular surface approach it but can never reach it in finite time. At the same time, any perturbation of the singular solution drives the system away from the singular surface. In quantum theory the vanishing of the determinant signals the strong coupling regime for the scalar fields, and the dynamics in the vicinity of the singular surface are highly affected by quantum corrections. Whether or not the two dynamical degrees of freedom away from the singular surface contain ghost modes might depend on the particular choice of the reference metric in the configuration space of the scalar fields. We do not address this question in the present paper, but leave it for future studies. The paper is organized as follows. In section \[sec:2\] we recall the formulation of dRGT massive gravity. In section \[sec:3\] we formulate the theory of reduced massive gravity and perform the Hamiltonian analysis away from the singularity surface. In section \[sec:4\] we consider the behaviour of the system on the singular surface, and show that it is equivalent to $1+1$-dimensional massive gravity.
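The link between a vanishing kinetic-matrix determinant and the loss of dynamical degrees of freedom can be illustrated on the $d=0$ member of the family above, the reparametrization-invariant relativistic particle: its Lagrangian is homogeneous of degree one in the velocities, so its Hessian is degenerate everywhere, while a gauge-fixed form has a non-vanishing determinant. The following is a minimal numerical sketch of this toy case (our illustration only, not the two-field computation performed in this paper; all function names are ours):

```python
import math

def kinetic_det(L, v1, v2, h=1e-4):
    """Determinant of the 2x2 kinetic (Hessian) matrix d^2L/dv_a dv_b
    at the velocities (v1, v2), via central finite differences."""
    L11 = (L(v1 + h, v2) - 2 * L(v1, v2) + L(v1 - h, v2)) / h**2
    L22 = (L(v1, v2 + h) - 2 * L(v1, v2) + L(v1, v2 - h)) / h**2
    L12 = (L(v1 + h, v2 + h) - L(v1 + h, v2 - h)
           - L(v1 - h, v2 + h) + L(v1 - h, v2 - h)) / (4 * h**2)
    return L11 * L22 - L12**2

# Reparametrization-invariant particle, L = -sqrt(tdot^2 - xdot^2):
# homogeneous of degree 1 in the velocities, so the determinant vanishes
# identically -- a constraint, and no independently propagating mode.
L_particle = lambda tdot, xdot: -math.sqrt(tdot**2 - xdot**2)
print(kinetic_det(L_particle, 2.0, 1.0))   # ~ 0 (up to round-off)

# Gauge-fixed two-velocity analogue, L = -sqrt(1 - vx^2 - vy^2):
# non-degenerate kinetic matrix, det = 1/(1 - v^2)^2 != 0.
L_fixed = lambda vx, vy: -math.sqrt(1.0 - vx**2 - vy**2)
print(kinetic_det(L_fixed, 0.5, 0.3))      # ~ 2.30
```

The degeneracy in the first case is exactly Euler's theorem for a degree-one homogeneous function: the Hessian annihilates the velocity vector, so its determinant vanishes at every point of velocity space.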
We perform the canonical analysis in this case and find the gauge symmetry of the scalar fields, eliminating both scalar degrees of freedom. Section \[sec:5\] is devoted to conclusions. Non-linear massive gravity in Stückelberg formulation {#sec:2} ===================================================== The non-linear massive gravity action can be written in terms of the variables $$\mathcal K^\mu_\nu=\delta^\mu_\nu-\left(\sqrt{g^{-1}f}\right)^\mu_\nu\;,$$ where $g^{\mu\nu}$ is the inverse space-time metric, and $f_{\mu\nu}$ is an auxiliary reference metric. The full dRGT action is given by $$\label{act0} \mathcal L_{EH}+m^2\mathcal L_\phi=\frac{M^2_P}{2}\sqrt{-g}R+m^2\sqrt{-g}\sum_{n=0}^4\tilde \alpha_n\mathsf e_n(\mathcal K)\;,$$ where the characteristic polynomials $\mathsf e_n(\mathbb X)$ of a $4\times 4$ matrix $\mathbb X$ are $$\begin{aligned} \mathsf e_0(\mathbb X)&=1\;,\qquad\mathsf e_1(\mathbb X)=[\mathbb X]\;,\qquad \mathsf e_2(\mathbb X)=\frac{1}{2}\left([\mathbb X]^2-[\mathbb X^2]\right)\;,\\ \mathsf e_3(\mathbb X)&=\frac{1}{6}\left([\mathbb X]^3-3[\mathbb X][\mathbb X^2]+2[\mathbb X^3]\right)\;,\qquad \mathsf e_4(\mathbb X)=\det\mathbb X\;.\end{aligned}$$ The square brackets denote traces, and the coefficients $\tilde \alpha_n$ are arbitrary. It is also possible to rewrite the mass term in terms of the characteristic
--- abstract: 'In this paper, a Sturm-Liouville boundary value problem equipped with conformable fractional derivatives is considered. We give some uniqueness theorems for the solutions of inverse problems in terms of the Weyl function, two given spectra and classical spectral data. We also study the half-inverse problem and prove a Hochstadt-Lieberman-type theorem.' author: - 'A. Sinan Ozkan' - İbrahim Adalar title: 'Inverse problems for a conformable fractional Sturm-Liouville operator' --- **Introduction** ================= Inverse spectral problems consist in recovering the coefficients of an operator from some given data; for example the Weyl function, the spectral function, nodal points and some special sequences of spectral values. Various inverse problems for the classical Sturm-Liouville operator have been studied for about ninety years (see [@Ambar], [@Borg]-[@horv], [@lev], [@levitan], [@marc], [@marc2], [@troos], [@ozk] and the references therein). Since these kinds of problems appear in mathematical physics, mechanics, electronics, geophysics and other branches of natural sciences, the literature on this area is vast. The fractional derivative, a concept as old as calculus itself, first appeared in a question posed by L'Hospital to Leibniz in 1695: he asked what $\frac{d^{n}f}{dx^{n}}$ means when $n=1/2$. Later on, many researchers tried to give a definition of a fractional derivative. Most of them used an integral form for the fractional derivative (see [@mil], [@old]). However, almost all of them fail to satisfy some of the basic properties of the usual derivative, for example the chain rule, the product rule and the mean value theorem. In 2014, Khalil et al. introduced a new simple well-behaved definition of the fractional derivative called the conformable fractional derivative [@hal]. One year later, Abdeljawad gave the fractional versions of some important concepts e.g.
the chain rule, exponential functions, Gronwall's inequality, integration by parts, Taylor power series expansions, etc. [@abd]. Also, other basic properties of the conformable derivative can be found in [@atan]. It seems to satisfy all the requirements of the standard derivative. Because of its effectiveness and applicability, the conformable derivative has received a lot of attention and has quickly been applied to various areas. In recent years, some new fractional Sturm-Liouville problems have been studied (see [@Bp], [@al], [@kro], [@kli], [@riv]). These problems appear in various branches of natural sciences (see [@bal], [@main], [@mon], [@pal], [@sil]). Although inverse Sturm-Liouville problems with the classical derivative have been studied extensively, there is only one study on this subject with the conformable fractional derivative: Mortazaasl and Akbarfam gave a solution of the inverse nodal problem for the conformable fractional Sturm-Liouville operator in [@Oz]. In the present paper, we consider a conformable fractional Sturm-Liouville boundary value problem and give uniqueness theorems for the solution of the inverse problem in terms of the Weyl function, two sets of eigenvalues, and the sequences which consist of eigenvalues and norming constants. We also study the half-inverse problem and prove a Hochstadt-Lieberman-type theorem. **Preliminaries** ================= Before presenting our main results, we recall some important concepts of conformable fractional calculus. Let $f:[0,\infty )\rightarrow \mathbb{R}$ be a given function.
Then, the conformable fractional derivative of order $0<\alpha \leq 1$ of $f$ at $x>0$ is defined by $$D^{\alpha }f(x)=\underset{h\rightarrow 0}{\lim }\frac{f(x+hx^{1-\alpha })-f(x)}{h},$$ and the fractional derivative at $0$ is defined as $D^{\alpha }f(0)=\underset{x\rightarrow 0^{+}}{\lim }D^{\alpha }f(x).$ Let $f:[0,\infty )\rightarrow \mathbb{R}$ be a given function. The conformable fractional integral of $f$ of order $\alpha $ is defined by $$I_{\alpha }f(x)=\int\limits_{0}^{x}f(t)d_{\alpha }t=\int\limits_{0}^{x}t^{\alpha -1}f(t)dt,$$ for all $x>0.$ We collect some necessary relations in the following lemma. Let $f,g$ be $\alpha $-differentiable at $x,$ $x>0.$ i) $D_{x}^{\alpha }(af+bg)=aD_{x}^{\alpha }f+bD_{x}^{\alpha }g,$ $\forall a,b\in \mathbb{R},$ ii) $D_{x}^{\alpha }(x^{a})=ax^{a-\alpha },$ $\forall a\in \mathbb{R},$ iii) $D_{x}^{\alpha }(c)=0,$ ($c$ is a constant) iv) $D_{x}^{\alpha }(fg)=D_{x}^{\alpha }(f)g+fD_{x}^{\alpha }(g)$, v) $D_{x}^{\alpha }(f/g)=\frac{D_{x}^{\alpha }(f)g-fD_{x}^{\alpha }(g)}{g^{2}},$ vi) if $f$ is a continuous function, then for all $x>0,$ we have $D_{x}^{\alpha }I_{\alpha }f(x)=f(x),$ vii) if $f$ is a differentiable function, then we have $D_{x}^{\alpha }f(x)=x^{1-\alpha }f^{\prime }(x).$ Let $f,g:(0,\infty )\rightarrow \mathbb{R}$ be $\alpha $-differentiable functions and $h(x)=f(g(x)).$ Then $h(x)$ is $\alpha $-differentiable, and for all $x\neq 0$ and $g(x)\neq 0,$ $$(D_{x}^{\alpha }h)(x)=(D_{x}^{\alpha }f)(g(x))(D_{x}^{\alpha }g)(x)g^{\alpha -1}(x);$$ if $x=0,$ then $(D_{x}^{\alpha }h)(0)=\underset{x\rightarrow 0^{+}}{\lim }(D_{x}^{\alpha }f)(g(x))(D_{x}^{\alpha }g)(x)g^{\alpha -1}(x).$ For further knowledge about the conformable fractional derivative, the reader is referred to [@abd] and
[@atan], [@hal]. Let us consider the following boundary value problem $L_{\alpha }(q(x),h,H)$ $$\begin{aligned} &&\text{\ }\left. \ell y:=-D_{x}^{\alpha }D_{x}^{\alpha }y+q(x)y=\lambda y\text{, \ \ }0<x<\pi \right. \medskip \\ &&\text{ }\left. U(y):=D_{x}^{\alpha }y(0)-hy(0)=0\right. \medskip \\ &&\text{ }\left. V(y):=D_{x}^{\alpha }y(\pi )+Hy(\pi )=0\right. \medskip\end{aligned}$$ where $D_{x}^{\alpha }$ is the conformable fractional (CF) derivative of order $\alpha ,$ $0<\alpha \leq 1,$ $q(x)$ is a real-valued continuous function on $\left[ 0,\pi \right] $, $h,H\in \mathbb{R}$ and $\lambda $ is the spectral parameter. Let the functions $\varphi (x,\lambda )$ and $\psi (x,\lambda )$ be the solutions of (1) under the initial conditions $$\varphi (0,\lambda )=1\text{, }D_{x}^{\alpha }\varphi (0,\lambda )=h\text{ and }\psi (\pi ,\lambda )=1,D_{x}^{\alpha }\psi (\pi ,\lambda )=-H$$ respectively. These solutions are entire in $\lambda $ for each fixed $x$ in $\left[ 0,\pi \right] $ and they satisfy the following asymptotic formulas [@Oz]: $$\begin{aligned} \varphi (x,\lambda ) &=&\cos (\frac{\sqrt{\lambda }}{\alpha }x^{\alpha })+O\left( \dfrac{1}{\sqrt{\lambda }}\exp (\frac{\left\vert \tau \
--- abstract: 'After the discovery of a substellar companion to the hot subdwarf HD149382, we have started a radial velocity search for similar objects around other bright sdB stars using the Anglo-Australian Telescope. Our aim is to test the hypothesis that close substellar companions can significantly affect the post-main sequence evolution of solar-type stars. It has previously been proposed that binary interactions in this scenario could lead to the formation of hot subdwarfs. The detection of such objects will provide strong evidence that Jupiter-mass planets can survive the interaction with a solar-type star as it evolves up the Red Giant Branch. We present the first results of our search here.' author: - 'Simon O’Toole' - Uli Heber - Stephan Geier - Lew Classen - Orsola De Marco title: Radial Velocity search for substellar companions to sdB stars --- [ address=[Australian Astronomical Observatory, PO Box 296, Epping 1710, Australia]{} ]{} [ address=[Dr Remeis-Sternwarte, Universität Erlangen-Nürnberg, Sternwartstrasse 7, Bamberg, D-96049, Germany]{} ]{} [ address=[Dr Remeis-Sternwarte, Universität Erlangen-Nürnberg, Sternwartstrasse 7, Bamberg, D-96049, Germany]{} ]{} [ address=[Dr Remeis-Sternwarte, Universität Erlangen-Nürnberg, Sternwartstrasse 7, Bamberg, D-96049, Germany]{} ]{} [ address=[Dept of Physics and Astronomy, Macquarie University]{} ]{} Introduction and motivation =========================== Most investigations into the so-called “Hot Jupiters” and other exoplanets close to their parent stars have focussed on the formation and migration of these objects to their present-day location. The planets’ subsequent evolution – and especially their effect on the evolution of the stars they orbit – has received less attention. For the former, the proximity of the exoplanet to its star leads to measurable mass loss through evaporation (e.g. Vidal-Madjar et al. 2003). 
By considering the energy of the system, Lecavelier des Etangs (2007) found that despite this mass loss, all of the known exoplanets will survive at least 5 billion years. On these time-scales the evolution of the host stars begins to become important. In a study examining the influence of planets on post-main sequence evolution, Soker (1998) found that substellar companions in orbits of up to 5AU interact with the evolving star as it expands during the red giant phase. Mass loss on the red giant branch is enhanced as the companion(s) deposit angular momentum and energy into the stellar envelope, and this leads to a bluer horizontal branch (HB) star than might otherwise be expected. Soker used this model to explain the observed morphologies of the HB in galactic globular clusters, and predicted that massive planets or brown dwarfs should orbit stars at the extreme blue end of the HB with orbital periods of $\sim$10 days. In a later study, Livio & Soker (2002) found that at least 3.5% of evolved solar-type stars will be “significantly affected by the presence of planetary companions”. This number increases to more than 9% for stars with metallicities above the solar value. It is now well established that metal-rich stars are more likely to harbour planetary companions (e.g. Fischer & Valenti 2005). An analysis of the group properties of exoplanets by Marcy et al. (2008) found that $\sim$4% of solar-type stars have planets with orbits of $<$2.5AU. Most recently, Bowler et al. (2010) found that 26$^{+9}_{-8}$% of evolved A-type stars (1.5$\le M_*/M_\odot\le$2.0) host Jupiter-mass planets within 3AU. It is clear then, that *there should be a population of very blue HB stars with substellar companions.* The hot subdwarfs ================= The very blue, or extreme, HB stars are the hot subdwarf B (sdB) stars. These objects, like their more normal HB counterparts, are core helium-burning stars, except with hydrogen envelopes too thin to sustain nuclear burning.
Their masses are typically $\sim$0.5M$_\odot$. After the consumption of helium in their cores, they evolve directly into white dwarfs, avoiding a second red-giant phase. Most formation scenarios for sdB stars have focussed on close binary interaction with a main sequence – not substellar – companion or the merger of two He-core white dwarfs (e.g. Han et al. 2003). A large fraction of sdB stars are predicted to be in close binaries with a main sequence star or white dwarf companion. Several radial velocity studies have found that this is the case: many sdB stars reside in close binaries with periods as short as 0.07 days, and with either an M-type main sequence star or an invisible white dwarf companion (e.g. Maxted et al. 2001; Heber et al. 2004; O’Toole et al. 2004; Edelmann et al. 2005). Other studies have used 2MASS photometry to estimate the fraction of sdBs with main sequence stars (Stark & Wade 2004; Reed & Stiening 2004), although they are limited by the flux of the sdB to stars earlier than $\sim$M2. Binary fraction estimates are in the 40-70% range, with selection effects difficult to determine. This still leaves at least 30% of all sdBs as apparently single stars. The Han et al. (2003) formation models suggest that these stars are the product of a merger between two helium-core white dwarfs. It is not clear, however, whether there are enough of these double-degenerate systems that are close enough to merge within a Hubble time. Perhaps instead of mergers, the majority of single sdB stars are the product of common envelope evolution with a *substellar* companion. The search for substellar companions to sdB stars ================================================= The discovery of HD149382b with mass 6-23M$_{\mathrm{Jup}}$ by Geier et al. (2009) has clarified the situation somewhat, and forced a re-examination of the Soker (1998) and Livio & Soker (2002) models. 
The detected Doppler velocity variations of the sdB star are sufficiently low ($K=2.3\,{\rm km\,s^{-1}}$, see Figure 1) that previous surveys for RV variability – whose limits are typically 2-3 km s$^{-1}$ – would not have seen them. Furthermore, HD 149382 is the brightest known sdB, for which very high quality data are easily accessible. We note, however, that this result is the subject of debate; see Jacobs et al. (these proceedings). The detection of more substellar companions in short-period orbits around other sdB stars will strengthen the case that these objects *can* cause common envelope ejection. Using UCLES + CYCLOPS on the AAT -------------------------------- We have been granted 10 nights in total with the Anglo-Australian Telescope (AAT) to carry out time-resolved high-resolution spectroscopy with UCLES/CYCLOPS of a sample of bright sdBs. The CYCLOPS fibre-feed to UCLES provides higher resolution (R=70000) with no loss in signal when compared to the standard mode (R$\approx$45000). One of the key features of the new system is the $\sim$2.1 arcsecond lenslet array feeding the fibres; this makes the spectrograph more immune to the sometimes poor seeing at Siding Spring Observatory. Overall a gain in throughput is expected once the system is fully implemented. Our goal is to search for Doppler velocity variations with semi-amplitudes of 1-2 km s$^{-1}$ on timescales of days. This will allow us to detect companions with masses as low as $\sim$4M$_{\mathrm{Jup}}$. Target Selection ---------------- Previous observations of the bright sdB HD205805 with ESO-2.2m/FEROS found a shift of 2.5$\pm$0.5 km s$^{-1}$, larger than the measurement uncertainties. This object, along with HD149382, was one of our highest priority targets with UCLES/CYCLOPS. No intensity variations have been detected for this star (Chris Koen, priv. comm.) as might be expected for stellar pulsation, suggesting that the Doppler velocity variability is more likely due to a companion.
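The quoted sensitivity can be cross-checked against the standard binary mass function, $f(m)=PK^3/2\pi G$. The following sketch (ours, not part of the survey pipeline) assumes a canonical sdB mass of 0.5 M$_\odot$, a circular edge-on orbit ($\sin i = 1$), and a companion much lighter than the primary; the chosen $K$ and $P$ are illustrative:

```python
import math

# Physical constants (SI)
G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30       # kg
M_JUP = 1.898e27       # kg

def min_companion_mass(K, P_days, M_star=0.5 * M_SUN):
    """Minimum companion mass from the binary mass function
    f = P K^3 / (2 pi G), assuming sin(i) = 1, a circular orbit,
    and M_companion << M_star (so M_c ~ f^(1/3) M_star^(2/3))."""
    P = P_days * 86400.0
    f = P * K**3 / (2.0 * math.pi * G)
    return f**(1.0 / 3.0) * M_star**(2.0 / 3.0)

# K = 1 km/s on a ~3 day orbit around a canonical 0.5 M_sun sdB:
m = min_companion_mass(K=1.0e3, P_days=3.0)
print(m / M_JUP)   # ~4.5 Jupiter masses
```

With these numbers the minimum companion mass comes out at a few Jupiter masses, consistent with the $\sim$4M$_{\mathrm{Jup}}$ detection limit quoted above for semi-amplitudes of 1-2 km s$^{-1}$.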
We have taken high cadence observations of the star, which will allow us to accurately measure its orbital period. The other stars in our sample are well studied bright subdwarfs where no companion has been detected up to now, either with Doppler velocities or infrared colours. Should the ejection of the common envelope be caused by close substellar companions for apparently single sdBs, most of these stars are predicted to show Doppler velocity variations with low semi-amplitudes. The fact that the brightest known sdB shows variability is a strong hint in this direction. [t]{} ![An extracted non-wavelength calibrated CYCLOPS spectrum of HD205805, showing pixels and counts on the x- and y-axes, respectively.[]{data-label="fig:205805"}](HD205805_CYCLOPS_raw_spectrum "fig:"){width="\textwidth"} Early results... or what you will ================================= Our first four night allocation was very successful, despite a
--- abstract: | A detection technique of ultra-high energy cosmic rays, complementary to the fluorescence technique, would be the use of the molecular Bremsstrahlung radiation emitted by low-energy electrons left after the passage of the showers in the atmosphere. The emission mechanism is expected from quasi-elastic collisions of electrons produced in the shower by the ionisation of the molecules in the atmosphere.\ In this article, a detailed calculation of the spectral intensity of photons at ground level originating from the transitions between unquantised energy states of free ionisation electrons is presented. In the absence of absorption of the emitted photons in the plasma, the obtained spectral intensity is shown to be $\simeq 4.0~10^{-26}$ W m$^{-2}$ Hz$^{-1}$ at 10 km from the shower core for a vertical shower induced by a proton of $10^{17.5}$ eV. author: - | I. Al Samarai$^{1}$, O. Deligny$^{1}$, D. Lebrun$^2$, A. Letessier-Selvon$^3$, F. Salamida$^{1}$\ $^{1}$ Institut de Physique Nucléaire d’Orsay,\ CNRS/IN2P3 & Université Paris Sud, Orsay, France\ $^2$ Laboratoire de Physique Subatomique et Corpusculaire,\ CNRS/IN2P3 & Université Joseph Fourier, Grenoble, France\ $^3$ Laboratoire de Physique Nucléaire et des Hautes Energies,\ CNRS/IN2P3 & Université Pierre et Marie Curie, Paris, France title: | An Estimate of the Spectral Intensity\ Expected from the Molecular Bremsstrahlung Radiation\ in Extensive Air Showers --- Introduction ============ The origin and nature of ultra-high energy cosmic rays still remain to be elucidated despite the recent progress provided by the data collected at the Pierre Auger Observatory and the Telescope Array [@KHK-PT]. This is due to the extremely low intensity of particles at these energies. As of today, the most direct way to infer the nature of the particles at ultra-high energies relies on the observation of the shower longitudinal profile to measure the depth of its maximum development.
The use of telescopes detecting the nitrogen fluorescence light emitted after the passage of the electromagnetic cascade is a well-suited technique to achieve such measurements. Moreover, these fluorescence telescopes provide a good calorimetric estimate of the energy of the showers, which is preferable to detectors requiring external information to calibrate the energy estimator of the showers. However, this technique can only be used on moonless nights, resulting in a 10% duty cycle. Together with the low intensity of particles, this makes the study of the cosmic ray composition above a few tens of EeV very challenging. Triggered by microwave emission measurements in the laboratory [@Gorham], new telescope techniques based on the detection of the microwave emission in the GHz C-band (3.4-4.2 GHz) have been developed at the Pierre Auger Observatory [@Gaior]. These techniques aim at providing measurements of the electromagnetic content of the cascade with quality comparable to the fluorescence detectors but with a 100% duty cycle. Molecular Bremsstrahlung radiation in the GHz band provides an interesting mechanism to detect ultra-high energy cosmic rays due to the expected isotropic and unpolarised radiation. This feature would allow for the possibility of performing shower calorimetry in the same spirit as the fluorescence technique does, by mapping the ionisation content along the showers through the intensity of the microwave signals detected at ground level. Attempts to estimate the spectral intensity expected from the molecular Bremsstrahlung radiation in beam experiments [@Gorham] or in extensive air showers [@KITicrc2013] have been performed, based on general frameworks pertaining to radiative processes in plasmas. In these works, the sources of the emission are the low-energy electrons left along the shower track after the passage of high-energy electrons of the cascade propagating in the atmosphere.
The different energy distributions of the ionisation electrons are considered as static during the time the electrons can emit. These approaches resulted in a free-parameter estimate [@Gorham] or in a very low expectation [@KITicrc2013] for the signal power that could be observed at ground level. In this paper, the approach adopted is based on the computation of the spectral power per unit volume, which is shown to be the natural quantity for estimating the spectral intensity at any reference point in space and time. It is derived from the collision rate of ionisation electrons leading to the production of photons through free-free transitions. Moreover, the ionisation electrons are tracked from their production to their disappearance by accounting for all interactions affecting their energy distribution with time, as detailed in section \[electrons\]. In turn, these electrons can produce their own emission, such as Bremsstrahlung emission. The expected spectral intensity at ground level of such an emission is the object of section \[mbr\]. Possible attenuation or suppression effects are studied in section \[attenuation\_effects\]. Finally, the results obtained in this study are illustrated in section \[discussion\] on a toy reference shower. From these results, the perspectives of detection of ultra-high energy cosmic rays by making use of molecular Bremsstrahlung radiation are discussed. Ionisation Electrons along the Shower Track {#electrons} =========================================== A Crude Model of Vertical Air Showers ------------------------------------- In this work, an extensive air shower is considered as a thin plane front of high energy charged particles propagating in the atmosphere at the speed of light $c$.
For a given primary type and a given energy $E$, the longitudinal development of the electromagnetic cascade depends only on the *cumulated slant depth* $X$, expressed as the ratio between the vertical thickness of the atmosphere $X_{\mathrm{vert}}$ (1000 g cm$^{-2}$ at sea level) and the cosine of the zenith angle of the shower. After the succession of a few initial steps in the cascade, all showers can be described by reproducible macroscopic states. In particular, the shape of the showers is universal except for a translation depending logarithmically on $E$ and a global factor roughly linear in $E$. In this way, for any given slant depth $X$ or equivalently any altitude $a$, the total number of primary $e^+/e^-$ particles, $N_{e,p}$, can be adequately parameterised by the Gaisser-Hillas function as [@GaisserHillas]: $$N_{e,p}(a)=N_{\mathrm{max}}\bigg(\frac{X(a)-X_0}{X_{\mathrm{max}}-X_0}\bigg)^{\frac{X_{\mathrm{max}}-X_0}{\lambda}}\exp{\bigg(\frac{X_{\mathrm{max}}-X(a)}{\lambda}\bigg)},$$ with $X(a)$ the depth corresponding to the altitude $a$, $X_0$ the depth of the first interaction, $X_{\mathrm{max}}$ the depth of shower maximum, $N_{\mathrm{max}}$ the number of particles observed at $X_{\mathrm{max}}$, and $\lambda$ a parameter describing the attenuation of the shower. On the other hand, high energy particles constituting the *core* of the shower are collimated along the initial shower axis. The lateral extension of the core depends on the mean free path and can be expressed in terms of the *Molière radius* $R_M$, such that 90% of the energy is contained within a distance $r$ from the axis with $r<R_M$.
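The longitudinal parameterisation above is straightforward to evaluate. A short sketch follows; the parameter values are order-of-magnitude choices we assume for illustration of a $\sim 10^{17.5}$ eV proton shower, not fitted values from this study:

```python
import math

def gaisser_hillas(X, N_max, X_0, X_max, lam):
    """Gaisser-Hillas longitudinal profile: number of primary e+/e-
    at slant depth X (all depths in g cm^-2)."""
    if X <= X_0:
        return 0.0
    u = (X - X_0) / (X_max - X_0)
    return N_max * u**((X_max - X_0) / lam) * math.exp((X_max - X) / lam)

# Illustrative (assumed) parameters: N_max ~ 2e8 particles,
# first interaction at 10 g/cm^2, shower maximum at 650 g/cm^2.
N_max, X_0, X_max, lam = 2.0e8, 10.0, 650.0, 70.0

print(gaisser_hillas(X_max, N_max, X_0, X_max, lam))  # = N_max at shower maximum
# A vertical shower reaches the ground at X = X_vert = 1000 g/cm^2,
# well past maximum, so the profile has already attenuated there:
print(gaisser_hillas(1000.0, N_max, X_0, X_max, lam))
```

By construction the profile vanishes at $X_0$, peaks at exactly $N_{\mathrm{max}}$ when $X=X_{\mathrm{max}}$, and decays with attenuation scale $\lambda$ beyond the maximum.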
Motivated by general arguments to describe the electromagnetic cascade of showers, the NKG lateral distribution function, denoted hereafter by $g(r,a)$, is known to reproduce the observations reasonably well [@NKG]: $$g(r,a)=C(s(a))~R_M^{-2}~\left(\frac{r}{R_M}\right)^{s(a)-2}\left(1+\frac{r}{R_M}\right)^{s(a)-4.5}.$$ Here, $s(a)$ stands for the age parameter at altitude $a$ defined as $s(a)=3X(a)/(X(a)+2X_{\mathrm{max}})$, and $C(s)$ is a normalisation factor. ![[]{data-label="fig:shower"}](Shower-eps-converted-to.pdf){width="12cm"} The number of primary $e^+/e^-$ per unit surface, $n_{e,p}(r,a)$, is then simply obtained by folding the longitudinal profile with the normalised lateral one. For a vertical shower whose geometry is depicted in figure \[fig:shower\], $n_{e,p}(r,a)$ reads as: $$n_{e,p}(r,a)=N_{e,p}(a)~\frac{g(r,a)}{2\pi\displaystyle\int \mathrm{d}r~r~g(r,a)}.$$ Admittedly, this description is only a crude model of an extensive air shower. It shall allow us, however, to derive in the following a realistic number of ionisation electrons left along the shower track and thus to estimate relevant orders of magnitude for the spectral intensities (in W m$^{-2}$ Hz$^{-1}$) that can be expected from molecular Bremsstrahlung radiation by these ionisation electrons. To facilitate comparisons of the results obtained in this study with the values reported in [@Gorham
--- abstract: 'In this paper, we will compute the holographic complexity (dual to a volume in AdS), the holographic fidelity susceptibility and the holographic entanglement entropy (dual to an area in AdS) in a two-dimensional version of $AdS$ which is dual to open strings. We will explicitly demonstrate that these quantities are well defined, and then argue that a relation between fidelity susceptibility and time should hold in general due to the $AdS_2$ version of the classical Kepler’s principle. We will demonstrate that it holds for the $AdS_2$ solution as well as for conformal copies of the metric in the bulk theory of a prescribed dual conformally invariant quantum mechanics, which have been obtained in open string theory. We will also show that hierarchical UV/IR mixing exists in boundary string theory through the holographic bulk picture.' author: - Kazuharu Bamba - Davood Momeni - Mudhahir Al Ajmi title: ' Holographic Entanglement Entropy, Complexity, Fidelity Susceptibility and Hierarchical UV/IR Mixing Problem in $AdS_2/\mbox{open strings}$ ' --- Introduction {#sec:intro} ============ Various studies done in wide areas of physics have shown that the fundamental laws of physics can be reformulated in terms of relevant information theory quantities [@info; @info2]. Entropy quantifies the amount of information that is lost in a given physical process, and thus it is one of the most important physical quantities related to any such information-theoretical process. Entropy has been used to understand several physical phenomena, from condensed matter physics (like phase transitions and critical phenomena) to gravitational physics, where entropy looks just like the area of certain surfaces, called horizons. Also, it is believed that the geometry of spacetime can be understood as an emergent object, which emerges due to some type of information-theoretical process.
A simple reason to believe this is that, in the Jacobson formalism where gravitational field equations are related to thermodynamical quantities, it is always possible to derive the Einstein equations from the thermodynamics of horizons by presuming a certain scaling form for the entropy [@z12j; @jz12]. We know that the maximum entropy of a certain region of space scales with the horizon’s area, although this observation was obtained from the physics of black holes. This naive relation between area (boundary) and entropy (a quantity related to the quantum states inside a system) underlies the idea of the holographic principle [@1; @2], and the AdS/CFT correspondence, one of the most significant dualities between two regimes (strong/weak) of several physical systems [@M:1997]. The AdS/CFT correspondence makes it possible to describe quantum entanglement in complex systems in the form of the holographic entanglement entropy (HEE) [@6; @RT; @6a]. The HEE of a given quantum field theory in $d+1$ dimensions (even a non-relativistic one) is holographically calculated in terms of the area of a minimal surface defined in an asymptotically $AdS_d$ dual geometry. Let us consider the HEE for a given subsystem $A$ with its complement $A'$. The Ryu-Takayanagi (RT) expression for the holographic entanglement entropy is $$\label{HEE} S_{A}=\frac{\mathcal{A}(\gamma _{A})}{4G_{d+1}}$$ where $G_{d+1}$ is the bulk gravitational constant and $\gamma_{A}$ is the $(d-1)$-minimal surface in the $AdS_d$ geometry. We assume that the boundary of this surface, whose area is denoted by $\mathcal{A}(\gamma _{A})$, is the same as the boundary of the quantum entangled system $\partial A$. Because of the non-renormalizability of Einstein gravity as well as the existence of cutoffs, in the RT scheme for computing the HEE there are UV divergent terms like ${\varepsilon}^{-n},n\geq1, \ln{\varepsilon}$, etc. Consequently, a regularization strategy is required to remove these divergences.
Inspired by quantum field theory, we consider a deformed geometry $D$ and define the regularized area as follows, $$\begin{aligned} \mathcal{A}(\gamma_A) = \mathcal{A}_{D}(\gamma_A) - \mathcal{A}_{AdS}(\gamma_A), \end{aligned}$$ where $ \mathcal{A}_{D}(\gamma_A) $ is computed in the deformed geometry (for example, for excited states), and $ \mathcal{A}_{AdS}(\gamma_A)$ is computed in the background $AdS$ spacetime (ground state). If one defines the holographic entanglement entropy for a deformed geometry by subtracting the contribution of the background $AdS$ spacetime, only a finite part is left. We will use this renormalization scheme throughout this paper. As mentioned, the entropy measures the amount of information lost during a physical process; this lost information can never be recovered by any physical process [@hawk]. It is very common to define the complexity as a quantity which quantifies the difficulty of obtaining the information of a system. The complexity has been introduced to investigate different physical systems from gravitational physics to condensed matter physics, and even quantum information theory. Because complexity has only recently been introduced to investigate miscellaneous physical systems, there are different schemes to define the complexity for a CFT. However, recently, inspired by the RT proposal for holographic entanglement entropy, holographic complexity (HC) has been conjectured to be given by certain types of volumes in the dual anti-de Sitter (AdS) background [@Susskind:2014rva1]-[@Stanford:2014jda].
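A toy numerical illustration of this background-subtraction scheme can be given in Poincaré $AdS_3$ rather than the $AdS_2$ case of this paper, because there the RT "area" is a geodesic length with the closed form $\mathcal{L}=2R\ln(l/\varepsilon)$ for a boundary interval of width $l$ and UV cutoff $\varepsilon$. The $\ln(1/\varepsilon)$ divergence cancels in the difference of two such areas; here a wider interval plays the role of the "deformed" term, as a schematic stand-in for the deformed-minus-background subtraction:

```python
import numpy as np

def geodesic_length(l, eps, R=1.0):
    """RT 'area' in Poincare AdS3: length of the geodesic anchored on a
    boundary interval of width l, with UV cutoff eps: L = 2 R log(l / eps)."""
    return 2.0 * R * np.log(l / eps)

# the log(1/eps) divergence cancels in the subtraction; the finite part
# 2 R log(2) is independent of the cutoff
for eps in (1e-3, 1e-6, 1e-9):
    print(geodesic_length(2.0, eps) - geodesic_length(1.0, eps))
```

Each printed difference equals $2\ln 2$ regardless of $\varepsilon$, which is the point of the subtraction: only the cutoff-independent part survives.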
Moreover, it suffices to specify a subsystem $A$ with its complement, and define this volume as $V = V(\gamma_A)$, i.e., the volume enclosed by the same minimal surface which was proposed to estimate the HEE [@Alishahiha:2015rta]. The resulting quantity, called holographic complexity (HC), is given as follows, $$\label{HC} \mathcal{C}_A= \frac{V(\gamma_A) }{8\pi R G_{d+1}},$$ where $R$ and $V(\gamma_A)$ are the radius of curvature and the volume in the $AdS_d$ bulk geometry. This volume contains UV divergences, and so we need an appropriate regularization scheme for it. In analogy with the HEE, we define the regularized volume as $$\begin{aligned} \Delta\mathcal{V}(\gamma_A) = V_{D}(\gamma_A) - V_{AdS}(\gamma_A).\end{aligned}$$ Here $V_{D}(\gamma_A)$ denotes the volume in the deformed geometry, and $V_{AdS}(\gamma_A)$ is the volume in the background $AdS$ spacetime. This again removes the divergences and one is left with a finite part. Several examples of HC have been studied in the literature [@Momeni:2016ekm]-[@Momeni:2016qfv]. A type of duality between dilaton gravity (gravity in 2 dimensions) on $AdS_2$ and open strings was discovered in [@Cadoni:2000gm], where it was clearly shown how $AdS_2$ is equivalent to conformal quantum mechanics (CQM) as a non-relativistic limit of $CFT_1$ (see [@Cadoni:2000gm] and other papers of these authors). This lower-dimensional version of the $AdS_{d+1}/CFT_d$ conjecture states that gravity on $AdS_2$ is holographically dual to a one-dimensional conformal field theory on the boundary of $AdS_2$. One reason to believe that such a duality exists is that the classical two-dimensional (dilaton) gauge theory of gravity has a trivial conformal symmetry. This gauge theory can be reformulated as a nonlinear sigma-model [@M.; @Cavagli`a], and it was proved that the conformal symmetry is recovered in the classical limit of this toy model.
This is a reason to think of gravity on $AdS_2$ as a natural dual to a one-dimensional CFT. This duality is called $AdS_2/\mbox{open string}$ or $AdS_2/CQM$. The object of interest in this paper is to compute the HEE, HC and fidelity susceptibility for a generic open string system using the $AdS_2/\mbox{open string}$ duality. The question of interest is how these quantities evolve with time. The structure of the paper is as follows. In Sec. 2, we briefly review the formal framework of gravity on $AdS_2$. In Sec. 3, we compute the HEE of a string via the RT formalism. In Sec. 4, we calculate the HC of the string. In Sec. 5, we investigate the fidelity susceptibility holographically. In Sec. 6, we summarize our results. Gravity in two dimensions and $AdS_2/\mbox{open string}$ duality ================================================================ Let us start with the two-dimensional dilaton theory with the following action, $$\begin{aligned} &&S=\frac{1}{2\kappa^2}\int d^2x \sqrt{-g}\Big(\phi R+V(\phi)\Big),\label{action}\end{aligned}$$ where the potential is $V(\phi)=2\lambda^2 \phi$ and $\lambda^2$ stands for the cosmological constant $\
--- abstract: | In this contribution, we introduce numerical continuation methods and bifurcation theory, techniques which find their roots in the study of dynamical systems, to the problem of tracing the parameter dependence of bound and resonant states of the quantum mechanical Schrödinger equation. We extend previous work on the subject [@Broeckhove2009] to systems of coupled equations. Bound and resonant states of the Schrödinger equation can be determined through the poles of the $S$-matrix, a quantity that can be derived from the asymptotic form of the wave function. We introduce a regularization procedure that essentially transforms the $S$-matrix into its inverse and improves its smoothness properties, thus making it amenable to numerical continuation. This allows us to automate the process of tracking bound and resonant states when parameters in the Schrödinger equation are varied. We have applied this approach to a number of model problems with satisfying results. address: | Department of Mathematics and Computer Science, Universiteit Antwerpen,\ Middelheimlaan 1, B-2020 Antwerpen author: - 'Przemysław Kłosiewicz , Jan Broeckhove and Wim Vanroose' bibliography: - 'coupled.bib' title: Numerical Continuation of resonances and bound states in coupled channel Schrödinger equations --- Introduction {#sec:Introduction} ============ The appearance of resonances is of ever-growing interest in the study of wave phenomena as they are considered among the most important features of systems described by wave equations. They appear in systems that are penetrable by an impacting wave. Such systems allow the interior field to couple to the external domain which leaves a characteristic fingerprint on the far-field pattern of the scattered wave. Many examples appear naturally in acoustic scattering [@fahy2007sound] and in fluid-mechanical structure interaction [@dhia2007resonances]. 
In all cases the appearance of resonant states has a profound and important influence on the system’s dynamics. In the context of quantum mechanical scattering, resonant behavior also strongly influences the interactions between microscopic particles, which in turn has its influence on the reactivity of molecules and atoms described by such quantum mechanical models. In molecular systems these resonances can easily turn into bound states if the molecular configuration changes. In [@Broeckhove2009] we developed a framework for applying numerical continuation techniques in the context of bound states and resonances in spherically symmetric short-range potentials. We have shown that numerical continuation methods, originally developed in the study of dynamical systems, can be applied successfully to track bound and resonant states efficiently in terms of a varying system parameter. Moreover, this technique can be used to reveal subtle and interesting transitions and connections between states automatically. The present work focuses on the extension of that procedure to coupled channel short-range systems. This extension is a logical step towards automated, efficient and robust methods for the study of interactions in scattering experiments. In all generality, these techniques can be applied to systems of coupled Helmholtz equations with variable wave numbers, as long as the short-range conditions are met. The outline of the paper is as follows. Section \[sec:Scattering\] sets the coupled channel Schrödinger equations in the context of non spherically symmetric quantum mechanical problems. In section \[sec:Regularization\] a regularization procedure is discussed that allows application of numerical continuation even though the underlying functions that characterize resonances and bound states are numerically and analytically not very well-suited. 
Section \[sec:NumCont\] provides a brief overview of basic numerical continuation methods and gives some pointers on the available implementations. Finally, in section \[sec:results\] we present several results obtained with our implementation of the discussed methods. Quantum scattering in coupled channel problems {#sec:Scattering} ============================================== In this paper we discuss a coupled channel problem that derives from a one-particle Schrödinger equation with a non-spherical potential $$\label{eq:3dschrodinger} \left(-\frac{1}{2\mu}{\Delta} + V(\mathbold{r},\lambda) - E \right) \psi(\mathbold{r}) =0,$$ where $\Delta$ is the three-dimensional Laplacian, $V(\mathbold{r},\lambda)$ is a potential with a system parameter $\lambda$ and $E$ is the complex-valued energy of the system. The problem is such that for all $\mathbold{r}$ outside a bounded domain $V(\mathbold{r},\lambda) \approx 0$, i.e. the potential becomes negligibly small. Formally, the limitation to potentials that are negligible outside a certain radius is termed the restriction to so-called short-range potentials: $V(\mathbold{r},\lambda)$ must decay faster than $r^{-3}$ as $r=|\mathbold{r}|\to\infty$ and must be less singular than $r^{-2}$ in the origin $r=|\mathbold{r}|\to0$ [@Taylor2006]. The long-range Coulomb interaction requires a significantly different approach and is not discussed here. The boundary conditions that are appropriate in the short-range case require the solution to be zero at the origin and force the solution to tend to a linear combination of free waves (i.e. solutions of \[eq:3dschrodinger\] for $V=0$) at infinity [@Taylor2006]. In this section we briefly introduce elements from quantum scattering and partial wave expansion to arrive at the concept of resonant states. In the subsequent sections we then focus on applying numerical continuation to study the dependence of such resonances on the system parameter $\lambda$ in \[eq:3dschrodinger\].
For this reason, we will faithfully record the $\lambda$ dependence in our notations, even if it is at times somewhat cumbersome. Equation \[eq:3dschrodinger\] can be written in spherical coordinates $(r,\theta, \varphi)$ around the center of the system. The differential operator $\Delta$ then splits into angular and radial differential operators and the solution can be expanded as a sum $$\psi(\mathbold{r}) = \psi(r,\theta,\varphi) = \sum_{l=0}^{\infty}\sum_{m=-l}^{l} \psi_{lm}(r) Y_{l}^{m}(\theta, \varphi), \label{eq:PartialWave}$$ where $Y_{l}^{m}(\theta,\varphi)$ are the spherical harmonics, eigenfunctions of the angular differential operators in the Laplacian. In physics this decomposition is commonly referred to as the partial wave expansion. The labels $(l,m)$ are intimately connected to the irreducible representations of the rotational symmetry groups $SO(3) \supset SO(2)$ of equation \[eq:3dschrodinger\]. Each $(l,m)$-component of \[eq:PartialWave\] is known as a partial wave and the system is said to be modeled by multiple partial wave channels. We are now interested in localizing the resonances in such systems. Single-channel scattering {#sphericalsymmetry} ------------------------- To introduce some notations and concepts we first briefly look at a spherically symmetric potential, i.e. $V(\mathbold{r},\lambda) = V(r,\lambda)$. In this case, the partial wave channels are decoupled and the radial wave function $\psi_{lm}$ is identical for all $m$. Hence we can drop the index $m$ and equation \[eq:3dschrodinger\] turns into: $$\label{eq:equationspherical} \left(-\frac{1}{2\mu} \frac{d^2}{d r^2} + \frac{l(l+1)}{2\mu r^2} - E \right) \psi_{l}(r,\lambda) + V(r,\lambda) \psi_{l}(r, \lambda) = 0,$$ for each $l$.
The boundary condition at $r=0$ specifies $\psi_{l}(r,\lambda) = 0$ and at large $r$ the solution must be a linear combination of the spherical Riccati-Hankel functions (see appendix \[app:spherical\]), the free incoming and outgoing waves: $$\label{eq:sphericalboundary} \psi_{l}(r, \lambda) \xrightarrow{r\to\infty} \frac{i}{2} \left( \hat{h}^{-}_{l}(kr) - \hat{h}^{+}_{l}(kr) S_{l}(k, \lambda) \right).$$ The spherical Riccati-Hankel functions are the solutions obtained in the absence of the potential term in \[eq:equationspherical\], i.e. when the equation reduces to a Helmholtz equation with a constant wave number $k=\sqrt{2\mu E}$. Here $S_{l}(k, \lambda)$ is called the $S$-matrix for channel $l$ and it determines the scattering properties associated with potential $V(\mathbold{r},\lambda)$. Multi-channel scattering ------------------------ For a non-spherical potential the partial wave channels do not decouple. The wave function must be represented by a sum as in equation \[eq:PartialWave\], which is typically truncated at an $l_\text{max}$. In most cases of physical interest the potential still has axial symmetry. It is well known that as a consequence the channels with different $m$ are decoupled and the solutions can be represented by $$\psi(\mathbold{r}) = \sum_{l=0}^{l_\text{max}} \psi_{lm}(r) Y_{l}^{m}(\theta, \varphi). \label{eq:PartialWaveAxial0}$$ Upon substitution in \[eq:3dschrodinger\], this generates a separate set of equations for each $m$. For the purpose of our exposition we may, without loss of generality, take $m=0$ and drop the index $m$ in the notation for the radial wave function $$\psi(\mathbold{r}) =
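For the decoupled $l=0$ channel, $S_0(k)$ can be extracted numerically by integrating the radial equation outward and matching to the asymptotic form above. This is a sketch in units $2\mu=1$ with an assumed square-well potential (not a system from this paper); a full implementation would match with the Riccati-Hankel functions for general $l$:

```python
import numpy as np

def s_matrix_swave(V, k, r_max=20.0, n=20000):
    """Extract S_0(k) for psi'' = (V(r) - k^2) psi, psi(0) = 0, by matching
    the outward solution to psi ~ (i/2)(exp(-ikr) - S exp(ikr)) at large r."""
    r = np.linspace(1e-6, r_max, n)
    h = r[1] - r[0]
    f = V(r) - k**2
    psi = np.zeros(n)
    psi[1] = h                        # psi ~ r near the origin for l = 0
    for i in range(1, n - 1):         # simple 3-point integration step
        psi[i + 1] = 2.0 * psi[i] - psi[i - 1] + h * h * f[i] * psi[i]
    # solve psi(r_j) = A exp(-ik r_j) + B exp(ik r_j) at two matching radii
    i1, i2 = n - 200, n - 1
    M = np.array([[np.exp(-1j * k * r[i1]), np.exp(1j * k * r[i1])],
                  [np.exp(-1j * k * r[i2]), np.exp(1j * k * r[i2])]])
    A, B = np.linalg.solve(M, np.array([psi[i1], psi[i2]]))
    return -B / A   # A = i/2 and B = -(i/2) S up to an overall scale

# assumed square well V = -2 for r < 1; illustrative only
S = s_matrix_swave(lambda r: np.where(r < 1.0, -2.0, 0.0), k=1.0)
print(abs(S))
```

For a real short-range potential, unitarity forces $|S_0(k)|=1$ on the real axis; bound and resonant states then appear as poles of $S_0$ when $k$ is continued into the complex plane, which is what the continuation procedure tracks.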
--- author: - 'Claudia Kirch[^1]' - 'Christina Stoehr[^2]' bibliography: - 'BIB.bib' date: December 2019 title: 'Sequential change point tests based on U-statistics' --- **Abstract** We propose a general framework of sequential testing procedures based on $U$-statistics which contains as an example a sequential CUSUM test based on differences in mean but also includes a robust sequential Wilcoxon change point procedure. Within this framework, we consider several monitoring schemes that take different observations into account to make a decision at a given time point. In addition to the originally proposed scheme, which takes all observations of the monitoring period into account, we also consider a modified moving-sum version as well as a version of a Page-monitoring scheme. The latter behave almost as well for early changes while being advantageous for later changes. For all proposed procedures we provide the limit distribution under the null hypothesis which yields the threshold to control the asymptotic type-I-error. Furthermore, we show that the proposed tests have asymptotic power one. In a simulation study we compare the performance of the sequential procedures via their empirical size, power and detection delay, which is further illustrated by means of a temperature data set. \ **Keywords:** structural breaks, Wilcoxon statistics, CUSUM statistics, data monitoring, control charts **MSC2000 Classification: 62L10** Introduction {#sec.intro} ============ Change point and data segmentation procedures have a long tradition in statistics. So-called a-posteriori or offline procedures deal with the testing and segmentation of completely observed data, see e.g. [@aue2013] and [@horvath2014] for two recent survey articles mainly concerned with a-posteriori testing or [@fryzlewicz2014wild], [@killick2012optimal], [@rigaill2015pruned] and [@frick2014multiscale] to name but a few articles dealing with data segmentation.
On the other hand, data is collected more and more automatically with observations arriving one by one, which requires a different statistical methodology designed to sequentially (online) make a decision after each new observation whether a break is likely to have occurred. In this setup an additional focus lies on the quick detection of changes after they have occurred. Examples include the monitoring of medical data of patients (e.g. [@fried2004online]), financial data (e.g. [@andreou2006monitoring], [@aue2012sequential]) as well as deforestation monitoring (e.g. [@dutrieux2015monitoring]). In sequential change point detection, there are different approaches: One approach aims at minimizing the mean detection delay while not causing too frequent alarms if no change occurs, see [@tarta] for a recent monograph. In this paper, we follow a different approach, more closely related to classical testing theory, that was first proposed by [@Chu]. By making use of a historic data set without changes, their approach can control the asymptotic type-I-error while having power one in many situations. This approach has been extended to allow for more general error sequences, multivariate observations but also different types of changes by several other authors including e.g. [@Hor], [@aue2006change], [@huvskova2005monitoring] or [@ciuperca2013two]. Additionally, [@chen2010modified], [@fremdt2015page] and [@kirch2018modified] have proposed modified versions of the original monitoring scheme focusing more on recent monitoring observations to make a decision. These procedures outperform the original one for later changes while giving comparable results for early changes. In this paper we present a unified theory based on $U$-statistics for different monitoring schemes. This is different from the unified theory based on estimating functions as proposed by [@Est] and further considered in [@kirch2018modified].
In particular, the proposed methodology includes a robust Wilcoxon monitoring, thus adapting the a-posteriori methodology proposed by [@Dehl]. Change point problem and monitoring statistics {#sec.model} ---------------------------------------------- Following [@Chu] we assume the existence of a historic data set $X_1,\ldots, X_m$ without a change. These historic observations can be used to estimate unknown parameters consistently. Asymptotic results are then obtained by letting the length of the historic data set increase to infinity. Subsequent to the historic observations we start monitoring new incoming data by testing for a structural break after each new observation $X_{m+k},k\geq 1,$ where $k$ denotes the monitoring time, by means of a monitoring statistic $\Psi(m,k)$ not depending on future data. We consider the following general change model $$\label{meanmodel} X_{i,m}=1_{\{1\leq i\leq k^*+m\}}Y_i+ 1_{\{i>k^*+m\}}Z_{i,m},\quad i\geq 1,$$ where $\{Y_i\}_{i\in\mathbb{Z}}$ and $\{Z_{i,m}\}_{i\in\mathbb{Z}}$ are suitable stationary time series. The distribution of the time series after the change and thus the change itself is allowed to depend on $m$. The null hypothesis then corresponds to $k^*=\infty$. Our examples focus on the special case of a classical mean change model with $Z_{i,m}=Y_i+d_m$, i.e. $$\label{meanex} X_{i,m}=Y_i+1_{\{i>k^*+m\}}d_m,\quad d_m\neq 0,$$ where the change in the mean $d_m$ is allowed to depend on $m$.\ Because the monitoring continues as long as no alarm is given, the monitoring horizon is potentially infinite. By introducing a weight function $w(m,k)$ as given below, it is still possible to control the asymptotic type-I-error. More precisely, an alarm is given as soon as $$w(m,k)\left|\Psi(m,k)\right|>c_{\alpha},$$ where the critical value $c_{\alpha}$ is chosen such that the testing procedure holds the level $\alpha$ asymptotically.
Equivalently (for $w(m,k)\neq 0$) an alarm is given as soon as the absolute monitoring statistic $\left|\Psi(m,k)\right|$ exceeds the critical curve given by $\frac{c_{\alpha}}{w(m,k)}$. As long as the monitoring statistic does not exceed the critical curve, we continue monitoring, such that the stopping time $\tau_m$ is given by $$\begin{aligned} \tau_m=\begin{cases} \inf\{k\geq 1:w(m,k)\left|\Psi(m,k)\right|>c_{\alpha}\},\\ \infty,\quad \mbox{if } w(m,k)\left|\Psi(m,k)\right|\leq c_{\alpha}\mbox{ for all k}. \end{cases} \end{aligned}$$ This setup allows to control the type-I-error as in classical statistics by choosing the critical value $c_{\alpha}$ for a given level $\alpha$ such that under the null hypothesis of no change $$\lim_{m\rightarrow\infty}P_{H_0}(\tau_m<\infty)=P_{H_0}\left(\sup_{k\geq 1}w(m,k)\left|\Psi(m,k)\right|> c_{\alpha}\right)=\alpha.$$ Furthermore, it will turn out that typically under weak assumptions on the alternative such a sequential test also has asymptotic power one, $$\lim_{m\rightarrow\infty}P_{H_1}(\tau_m<\infty)=1.$$ At each time point the question effectively comes down to a two-sample problem testing whether the distribution from the historic data set is still valid. For such a problem $U$-statistics have favorable properties, see e.g. [@ustat] for a recent monograph. Therefore, we consider monitoring schemes that are based on the following $U$-statistic: $$\begin{aligned} \label{detstat} \Gamma(m,k)=\frac{1}{m}\sum_{i=1}^m\sum_{j=m+1}^{m+k}(h(X_i,X_j)-\theta),\end{aligned}$$ where the kernel $h:\mathbb{R}^2\rightarrow\mathbb{R}$ is a measurable function and $\theta={\operatorname{E}}(h(Y,Y_1))$ with $Y\stackrel{D}{=}Y_1$ being an independent copy of $Y_1$ (also independent of $\{Z_{i,m}\}$). The easiest example for a corresponding monitoring statistic is $\Psi(m,k)=\Gamma(m,k)$. 
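A minimal sequential monitor built on $\Gamma(m,k)$ might look as follows. The weight function and threshold are illustrative placeholders rather than the calibrated $w(m,k)$ and $c_\alpha$ of the paper; the two kernels give a difference-in-mean CUSUM and a (rescaled) Wilcoxon statistic:

```python
import numpy as np

def u_stat_monitor(hist, stream, h, theta=0.0, threshold=3.0):
    """After each new X_{m+k}, update Gamma(m,k) = (1/m) * sum_{i<=m}
    sum_{m<j<=m+k} (h(X_i, X_j) - theta) and raise an alarm when the
    weighted statistic exceeds the threshold.  Returns the stopping
    time k, or None if no alarm is given."""
    m = len(hist)
    gamma = 0.0
    for k, x in enumerate(stream, start=1):
        gamma += np.mean(h(hist, x)) - theta      # new column j = m + k, O(m)
        w = 1.0 / (np.sqrt(m) * (1.0 + k / m))    # illustrative weight w(m, k)
        if w * abs(gamma) > threshold:
            return k
    return None

rng = np.random.default_rng(0)
hist = rng.normal(0.0, 1.0, 200)                     # change-free training data
stream = np.concatenate([rng.normal(0.0, 1.0, 50),   # k* = 50, then d_m = 2
                         rng.normal(2.0, 1.0, 200)])
cusum = lambda xi, xj: xj - xi                       # difference-in-mean kernel
wilcoxon = lambda xi, xj: np.sqrt(12.0) * ((xi < xj) - 0.5)  # rescaled Wilcoxon
print(u_stat_monitor(hist, stream, cusum),
      u_stat_monitor(hist, stream, wilcoxon))
```

Note that the double sum never needs to be recomputed from scratch: each new observation adds a single column $j=m+k$, so the update costs $O(m)$ per time point. With the mean shift above, both kernels raise an alarm shortly after the change point.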
The centering parameter $\theta$ is the expectation under the null hypothesis for independent data, while it still approximates ${\operatorname{E}}(h(Y_i,Y_j))$ with increasing lag $j-i$ under appropriate weak dependency assumptions. On the other hand, under alternatives the expectation (after the change) is given by ${\operatorname{E}}(h(Y,Z_{i,m}))$ so that changes can only be detected if ${\operatorname{E}}(h(Y,Z_{i,m}))\neq\theta$. The actual magnitude of the change is then given by $$\begin{aligned} \label{eq_Delta} \Delta_m={\operatorname{E}}(h(Y,Z_{i,m}))- \theta.\end{aligned}$$ If we allow for
--- abstract: 'We use a database of direct numerical simulations to derive parameterizations for energy dissipation rate in stably stratified flows. We show that shear-based formulations are more appropriate for stable boundary layers than commonly used buoyancy-based formulations. As part of the derivations, we explore several length scales of turbulence and investigate their dependence on local stability.' author: - Sukanta Basu - Ping He - 'Adam W. DeMarco' bibliography: - 'EDR.bib' title: Parameterizing the Energy Dissipation Rate in Stably Stratified Flows --- \[sec:level1\]Introduction ========================== Energy dissipation rate is a key variable for characterizing turbulence [@vassilicos15]. It is a sink term in the prognostic equation of turbulent kinetic energy (TKE; $\overline{e}$): $$\frac{\partial \overline{e}}{\partial t} + ADV = BNC + SHR + TRP + PRC - \overline{\varepsilon}, \label{TKE}$$ where, $\overline{\varepsilon}$ is the mean energy dissipation rate. The terms $ADV$, $BNC$, $SHR$, $TRP$, and $PRC$ refer to advection, buoyancy production (or destruction), shear production, transport, and pressure correlation terms, respectively. Energy dissipation rate also appears in the celebrated “-5/3 law” of Kolmogorov [@kolmogorov41a] and Obukhov [@obukhov41a; @obukhov41b]: $$E(\kappa) \approx \overline{\varepsilon}^{2/3} \kappa^{-5/3}, \label{K41}$$ where, $E(\kappa)$ and $\kappa$ denote the energy spectrum and wavenumber, respectively. In field campaigns or laboratory experiments, direct estimation of $\overline{\varepsilon}$ has always been a challenging task as it involves measurements of nine components of the strain rate tensor. Thus, several approximations (e.g., isotropy, Taylor’s hypothesis) have been utilized and a number of indirect measurement techniques (e.g., scintillometers, lidars) have been developed over the years. 
In parallel, a significant effort has been made to correlate $\overline{\varepsilon}$ with easily measurable meteorological variables. For example, several flux-based and gradient-based similarity hypotheses have been proposed [e.g., @wyngaard71a; @wyngaard71b; @thiermann92; @hartogensis05]. In addition, a handful of papers also attempted to establish relationships between $\overline{\varepsilon}$ and either the vertical velocity variance ($\sigma_w^2$) or TKE ($\overline{e}$). One of the first relationships was proposed by Chen [@chen74]. By utilizing the Kolmogorov-Obukhov spectrum (i.e., Eq. \[K41\]) with certain assumptions, he derived: $$\overline{\varepsilon} \approx \sigma_w^3. \label{C74}$$ Since this derivation is only valid in the inertial range of turbulence, a band-pass filtering of vertical velocity measurements was recommended prior to computing $\sigma_w$. A few years later, Weinstock [@weinstock81] revisited the work of [@chen74] and again made use of Eq. \[K41\], albeit with different assumptions (see Appendix 2 for details). He arrived at the following equation: $$\overline{\varepsilon} \approx \sigma_w^2 N, \label{W81}$$ where $N$ is the so-called Brunt-Väisäla frequency. Using observational data from the stratosphere, Weinstock [@weinstock81] demonstrated the superiority of Eq. \[W81\] over Eq. \[C74\]. In a recent empirical study, by analyzing measurements from the CASES-99 field campaign, Bocquet et al. [@bocquet11] proposed to use $\overline{\varepsilon}$ as a proxy for $\sigma_w^2$. In the present work, we quantify the relationship between $\overline{\varepsilon}$ and $\overline{e}$ (as well as between $\overline{\varepsilon}$ and $\sigma_w$) by using turbulence data generated by direct numerical simulation (DNS). To this end, we first compute several well-known “outer” length scales (e.g., buoyancy length scale and Ozmidov scale), normalize them appropriately, and explore their dependence on local stability.
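Up to the omitted proportionality constants, the two parameterizations differ only in which inverse time scale multiplies $\sigma_w^2$. A minimal numerical comparison, with assumed illustrative values for the potential temperature and its gradient (not data from this study), might look like:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m s^-2

def brunt_vaisala(theta, dtheta_dz):
    """Textbook Brunt-Vaisala frequency N = sqrt((g / theta) * dtheta/dz)."""
    return np.sqrt(G / theta * dtheta_dz)

def eps_chen(sigma_w):            # Eq. (C74): eps ~ sigma_w^3 (constant omitted)
    return sigma_w**3

def eps_weinstock(sigma_w, N):    # Eq. (W81): eps ~ sigma_w^2 N (constant omitted)
    return sigma_w**2 * N

N = brunt_vaisala(theta=290.0, dtheta_dz=0.01)   # ~0.018 s^-1
print(eps_chen(0.1), eps_weinstock(0.1, N))      # dissipation rates in m^2 s^-3
```

For the same $\sigma_w$, the two estimates can differ by an order of magnitude, which is why identifying the appropriate (buoyancy- or shear-based) time scale matters.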
Next, we investigate the inter-relationships of certain (normalized) outer length scales (OLS) which portray qualitatively similar stability-dependence. By analytically expanding these relationships, we arrive at two $\overline{\varepsilon}$–$\overline{e}$ and two $\overline{\varepsilon}$–$\sigma_w$ formulations; only the shear-based formulations portray quasi-universal scaling. The organization of this paper is as follows. In Sect. 2, we describe our DNS runs and subsequent data analyses. Simulated results pertaining to various length scales are included in Sect. 3. The $\overline{\varepsilon}$–$\overline{e}$ and $\overline{\varepsilon}$–$\sigma_w$ formulations are derived in Sect. 4. A few concluding remarks, including implications of our results for atmospheric modeling, are made in Sect. 5. In order to enhance the readability of the paper, either a heuristic or an analytical derivation of all the length scales is provided in Appendix 1. Given the importance of Eq. \[W81\], its derivation is also summarized in Appendix 2. Last, in Appendix 3, we elaborate on the normalization of various variables which are essential for the post-processing of DNS-generated data. Direct Numerical Simulation =========================== Over the past decade, due to the increasing abundance of high-performance computing resources, several studies probed different types of stratified flows by using DNS [e.g., @flores11; @garcia11; @brethouwer12; @chung12; @ansorge14; @shah14; @he15; @he16b]. These studies provided valuable insights into the dynamical and statistical properties of these flows (e.g., intermittency, structure parameters). In the present study, we use a DNS database which was previously generated by using a massively parallel DNS code, called HERCULES [@he16a], for the parameterization of optical turbulence [@he16c]. The computational domain size for all the DNS runs was $L_x \times L_y \times L_z = 18 h \times 10 h \times h$, where $h$ is the height of the open channel. 
The domain was discretized by $2304 \times 2048 \times 288$ grid points in streamwise, spanwise, and wall-normal directions, respectively. The bulk Reynolds number, $Re_b = \frac{U_b h}{\nu}$, for all the simulations was fixed at 20000, where $U_b$ and $\nu$ denote the bulk (averaged) velocity in the channel and kinematic viscosity, respectively. The bulk Richardson number was calculated as: $Ri_b = \frac{\left(\Theta_{top}-\Theta_{bot}\right)g h}{U_b^2 \Theta_{top}}$, where $\Theta_{top}$ and $\Theta_{bot}$ represent potential temperature at the top and the bottom of the channel, respectively. The gravitational acceleration is denoted by $g$. A total of five simulations were performed with a gradual decrease in the temperature of the bottom wall (effectively by increasing $Ri_b$) to mimic the nighttime cooling of the land-surface. The normalized cooling rates ($CR$), $Ri_b/T_n$, ranged from $1\times10^{-3}$ to $5\times10^{-3}$, where $T_n$ is a non-dimensional time ($=tU_b/h$). Since we were considering stably stratified flows in the atmosphere, the Prandtl number, $Pr = \nu/k$, was assumed to be equal to 0.7 with $k$ being the thermal diffusivity. All the simulations used fully developed neutrally stratified flows as initial conditions and evolved for up to $T_n = 100$. The simulation results were output every 10 non-dimensional time units. To avoid spin-up issues, in the present study, we only use data for the last five output files (i.e., $60 \le T_n \le 100$). Furthermore, we only consider data from the region $0.1 h\le z \le 0.5 h$ to discard any blocking effect of the surface and to avoid any laminarization in the upper part of the open channel.
The turbulent kinetic energy and its mean dissipation are computed as follows (using Einstein’s summation notation): $$\overline{e} = \frac{1}{2} \overline{u_i' u_i'}$$ $$\overline{\varepsilon} = \nu \overline{\left(\frac{\partial u_i'}{\partial x_j} \frac{\partial u_i'}{\partial x_j}\right)}$$ In these equations and in the rest of the paper, the “overbar” notation is used to denote mean quantities. Horizontal (planar) averaging operation is performed for all the cases. The “prime” symbol is used to represent the fluctuation of a variable with respect to its planar averaged value. Length Scales ============= In this section, we discuss various length scales of turbulence. To enhance the readability of the paper, we do not elaborate on their derivations or physical interpretations here; for such details,
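These planar-averaged definitions can be sketched for a gridded velocity field as follows. This uses simple finite differences on a uniform grid; the actual post-processing of the DNS output would use the solver's own derivative operators:

```python
import numpy as np

def tke_and_dissipation(u, dx, nu):
    """Planar-averaged TKE e(z) and dissipation eps(z) from a velocity
    field u of shape (3, nx, ny, nz) on a uniform grid of spacing dx."""
    fluct = u - u.mean(axis=(1, 2), keepdims=True)       # u_i' about planar mean
    e_bar = 0.5 * (fluct**2).sum(axis=0).mean(axis=(0, 1))
    grads = np.stack([np.gradient(fluct[i], dx, axis=j)  # du_i'/dx_j
                      for i in range(3) for j in range(3)])
    eps_bar = nu * (grads**2).sum(axis=0).mean(axis=(0, 1))
    return e_bar, eps_bar

# sanity check on a single-mode field u_x' = sin(2 pi x):
# analytically e = 1/4 and eps = nu * (2 pi)^2 / 2
nx = 64
x = np.arange(nx) / nx
u = np.zeros((3, nx, 8, 2))
u[0] = np.sin(2.0 * np.pi * x)[:, None, None]
e, eps = tke_and_dissipation(u, 1.0 / nx, nu=0.01)
print(e[0], eps[0])
```

The single-mode check recovers the analytic values up to finite-difference error, which is a useful guard before applying the same routine to full DNS snapshots.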
Top 1,000 words in troll/not-troll dataset ========================================== All word tokens used in the calculation of Burrows’ Z. The number of uses (*counts*) are the number of times that token (regardless of case) appeared in our troll corpus. The troll percentage is the percentage of usages by positively-labeled trolls. Trolls accounted for 72.00% of the word tokens in the corpus. We also show the percentage points from random chance for each word token. Stopword tokens are indicated in **bold**. [ p[0.35]{}p[0.15]{}p[0.15]{}p[0.15]{}]{} foke & 28,633 & 100.0 & +27.99\ fukushima2015 & 14,575 & 100.0 & +27.99\ fukushimaagain & 9,318 & 100.0 & +27.99\ blicqer & 8,795 & 100.0 & +27.99\ giselleevns & 7,463 & 100.0 & +27.99\ danageezus & 7,210 & 100.0 & +27.99\ blacklivesmatter & 15,236 & 99.94 & +27.93\ topnews & 16,425 & 99.92 & +27.92\ uthornsrawk & 8,502 & 99.88 & +27.88\ lsu & 7,794 & 99.80 & +27.80\ sanjose & 8,284 & 99.80 & +27.80\ pjnet & 10,601 & 99.76 & +27.76\ enlist & 6,302 & 99.66 & +27.66\ showbiz & 10,927 & 99.62 & +27.62\ newyork & 13,795 & 99.62 & +27.62\ workout & 44,690 & 99.47 & +27.47\ cleveland & 12,268 & 99.33 & +27.33\ sports & 105,633 & 99.23 & +27.22\ tcot & 14,141 & 99.21 & +27.21\ talibkweli & 6,697 & 99.17 & +27.17\ ukraine & 12,800 & 99.00 & +27.00\ exercise & 15,376 & 98.71 & +26.71\ fatal & 6,506 & 98.63 & +26.63\ baltimore & 11,477 & 98.45 & +26.45\ politics & 85,456 & 98.29 & +26.29\ kansas & 7,637 & 98.28 & +26.28\ mt & 6,410 & 98.20 & +26.20\ shooting & 26,920 & 98.02 & +26.02\ syria & 10,845 & 97.93 & +25.93\ obamacare & 7,675 & 97.90 & +25.90\ local & 62,112 & 97.89 & +25.89\ charlottesville & 6,434 & 97.77 & +25.77\ suspect & 12,981 & 97.77 & +25.77\ nfl & 12,724 & 97.64 & +25.64\ midnight & 13,504 & 97.45 & +25.45\ news & 280,799 & 97.36 & +25.35\ injured & 10,135 & 97.27 & +25.27\ chicago & 37,828 & 97.22 & +25.22\ cruz & 7,177 & 97.21 & +25.21\ crash & 16,080 & 97.12 & +25.11\ isis & 18,457 & 96.98 & +24.98\ 
hillary & 41,684 & 96.95 & +24.94\ entertainment & 11,457 & 96.80 & +24.80\ comey & 8,482 & 96.80 & +24.80\ nuclear & 13,218 & 96.65 & +24.65\ baseball & 9,326 & 96.63 & +24.63\ texas & 31,408 & 96.61 & +24.61\ dies & 12,687 & 96.59 & +24.59\ cops & 12,978 & 96.36 & +24.36\ orleans & 11,948 & 96.18 & +24.18\ arizona & 6,392 & 96.05 & +24.05\ foxnews & 11,284 & 95.79 & +23.79\ bush & 7,075 & 95.63 & +23.63\ police & 75,410 & 95.61 & +23.61\ christmas & 8,027 & 95.58 & +23.58\ stocks & 6,649 & 95.47 & +23.47\ weight & 10,565 & 95.25 & +23.25\ county & 16,190 & 95.19 & +23.19\ san & 17,726 & 95.14 & +23.14\ officer & 14,784 & 94.97 & +22.97\ obama & 74,987 & 94.85 & +22.85\ ohio & 7,790 & 94.82 & +22.82\ charged & 13,819 & 94.54 & +22.54\ islamic & 7,552 & 94.43 & +22.43\ driver & 7,962 & 94.43 & +22.43\ clinton & 37,896 & 94.29 & +22.29\ north & 17,414 & 94.18 & +22.18\ gun & 14,372 & 94.16 & +22.16\ debate & 9,847 & 93.95 & +21.95\ health & 36,862 & 93.95 & +21.95\ cop & 9,530 & 93.92 & +21.92\ russia & 18,950 & 93.71 & +21.71\ ban & 9,232 & 93.59 & +21.59\ march & 6,747 & 93.59 & +21.59\ teen & 10,333 & 93.51 & +21.51\ rap & 14,825 & 93.44 & +21.44\ environment & 6,852 & 93.38 & +21.38\ budget & 6,661 & 93.37 & +21.37\ sanders & 8,700 & 93.10 & +21.10\ loss & 8,435 & 93.05 & +21.05\ maga & 23,924 & 92.95 & +20.95\ rally & 9,212 & 92.94 & +20.94\ breaking & 58,196 & 92.92 & +20.92\ turkey & 6,787 & 92.91 & +20.91\ officers & 7,232 & 92.78 & +20.78\ shot & 24,120 & 92.73 & +20.73\ fired & 7,542 & 92.66 & +20.66\ trial & 7,285 & 92.57 & +20.57\ murder & 12,366 & 92.36 & +20.36\ houston & 7,067 & 92.27 & +20.27\ viral & 6,304 & 92.24 & +20.24\ coach & 6,589 & 92.16 & +20.16\ business & 39,739 & 92.09 & +20.08\ tech & 15,139 & 92.08 & +20.07\ killed & 29,967 & 92.07 & +20.06\ islam & 6,460 & 92.04 & +20.04\ quote & 13,455 & 91.84 & +19.84\ fbi & 13,250 & 91.81 & +19.81\ louisiana & 7,953 & 91.72 & +19.72\ schools & 6,468 & 91.65 & +19.64\ hillaryclinton & 7,547 & 91.51 
& +19.51\ east & 8,590 & 91.46 & +19.46\ mayor & 9,826 & 91.42 & +19.41\ liberals & 10,648 & 91.40 & +19.40\ student & 8,357 & 91.39 & +19.39\ university & 6,431 & 91.36 & +19.36\ students & 8,955 & 91.22 & +19.22\ nyc & 13,946 & 91.20 & +19.20\ fire & 25,081 & 91.17 & +19.17\ terror & 6,619 & 91.08 & +19.08\ california & 13,457 & 91.01 & +19.01\ senate
--- abstract: 'We study a hidden Markov process which is the result of a transmission of the binary symmetric Markov source over the memoryless binary symmetric channel. This process has been studied extensively in Information Theory and is often used as a benchmark case for the so-called denoising algorithms. Exploiting the link between this process and the 1D Random Field Ising Model (RFIM), we are able to identify the Gibbs potential of the resulting Hidden Markov process. Moreover, we obtain a stronger bound on the memory decay rate. We conclude with a discussion on implications of our results for the development of denoising algorithms.' address: 'Mathematical Institute, Leiden University, Postbus 9512, 2300 RA Leiden, The Netherlands Department of Mathematics, University of Groningen, PO Box 407, 9700 AK Groningen, The Netherlands' author: - Evgeny Verbitskiy title: Thermodynamics of the Binary Symmetric Channel --- Introduction ============ We study the binary symmetric Markov source over the memoryless binary symmetric channel. More specifically, let $\{X_n\}$ be a stationary two-state Markov chain with values $\{\pm 1\}$, and $${\mathbb P}( X_{n+1}\ne X_n) =p,$$ where $0<p<1$. The binary symmetric channel will be modelled as a sequence of Bernoulli random variables $\{Z_n\}$ with $${\mathbb P}_Z(Z_n=-1)={\varepsilon},\quad {\mathbb P}_Z(Z_n=1)=1-{\varepsilon}.$$ Finally, put $$\label{model} Y_n=X_n\cdot Z_n$$ for all $n$. The process $\{Y_n\}$ is a hidden Markov process, because $Y_n\in\{-1,1\}$ is chosen independently for any $n$ from an [*emission*]{} distribution $\pi_{X_n}$ on $\{-1,1\}$: $\pi_1=({\varepsilon},1-{\varepsilon})$ and $\pi_{-1}=(1-{\varepsilon},{\varepsilon})$. The law ${\mathbb Q}$ of the process $\{Y_n\}$ is the push-forward of ${\mathbb P}\times{\mathbb P}_Z$ under $\psi: \{-1,1\}^{\mathbb Z}\times\{-1,1\}^{\mathbb Z}\mapsto \{-1,1\}^{\mathbb Z}$, with $\psi( (x_n,z_n) )=x_n\cdot z_n$. 
We write ${\mathbb Q}=({\mathbb P}\times{\mathbb P}_Z)\circ\psi^{-1}$. For every $m\le n$, and $y_m^n:=(y_m,\ldots, y_n)\in \{-1,1\}^{n-m+1}$, the measure of the corresponding cylindric set is given by $$\label{eq:cylone} \aligned {\mathbb Q}(y_m^n)&:={\mathbb Q}(Y_m=y_m,\ldots,Y_n=y_n)\\ & =\sum_{x_{m}^n,z_{m}^n\in\{-1,1\}^{n-m+1}} {\mathbb P}(x_m^n){\mathbb P}_Z(z_{m}^n) \prod_{k=m}^n \mathbb I[ y_k=x_k\cdot z_k]\\ &=\sum_{x_{m}^n\in\{-1,1\}^{n-m+1}} \frac 12 \prod_{i=m}^{n-1} p_{x_i,x_{i+1}} \cdot {\varepsilon}^{\#\{i\in [m,n]: x_i y_i=-1\}}(1-{\varepsilon})^{\#\{i\in [m,n]: x_i y_i=1\}}. \endaligned$$ Random Field Ising Model {#sec:RFIM} ======================== It was observed in [@Zuk] that the probability ${\mathbb Q}(y_{m},\ldots,y_n)$ of a cylindric event $\{Y_m=y_m,\ldots, Y_n=y_n\}$, $m\le n$, can be expressed via a partition function of a random field Ising model. We exploit this observation further. Assume $p>0$ and ${\varepsilon}>0$, and put $$J=\frac 12\log\frac {1-p}{p},\quad K=\frac 12\log \frac {1-{\varepsilon}}{{\varepsilon}}.$$ Then for any $(y_m,\ldots,y_n)\in\{-1,1\}^{n-m+1}$, the expression for the cylinder probability (\[eq:cylone\]) can be rewritten as $${\mathbb Q}(y_m,\ldots,y_n)= \frac {c_J}{\lambda_{J,K}^{n-m+1}} \sum_{x_m^n\in\{-1,1\}^{n-m+1}}\exp\Bigl( J\sum_{i=m}^{n-1} x_ix_{i+1} +K\sum_{i=m}^n x_i y_i\Bigr),$$ where $$c_J= \cosh(J),\qquad \lambda_{J,K}=2\left( \cosh(J+K)+ \cosh(J-K)\right)=4\cosh(J)\cosh(K).$$ The non-trivial part of the cylinder probability is the sum over all hidden configurations $(x_m,\ldots,x_n)$: $${\mathsf{Z}}_{m,n}(y_m^n):= \sum_{x_m^n\in\{-1,1\}^{n-m+1}}\exp\Bigl( J\sum_{i=m}^{n-1} x_ix_{i+1} +K\sum_{i=m}^n x_i y_i\Bigr)$$ is in fact the partition function of the Ising model with the random field given by $y$’s. Applying the recursive method of [@Rujan], the partition function can be evaluated in the following fashion [@BZ]. 
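Before turning to the recursion, the rewriting above can be checked numerically by brute force for a short window. The following sketch (illustrative code, not from the paper, with arbitrary parameter values) evaluates both sides of the identity ${\mathbb Q}(y_m^n)= c_J\,\lambda_{J,K}^{-(n-m+1)}\,{\mathsf{Z}}_{m,n}(y_m^n)$ and also verifies that the cylinder probabilities sum to one:

```python
# Brute-force check of Q(y) = c_J / lambda^L * Z(y) for a window of length L,
# enumerating all hidden configurations x. Parameters p, eps are illustrative.
import itertools
import math

p, eps, L = 0.2, 0.1, 4
J = 0.5 * math.log((1 - p) / p)
K = 0.5 * math.log((1 - eps) / eps)
c_J = math.cosh(J)
lam = 4.0 * math.cosh(J) * math.cosh(K)

def Q(y):
    """Cylinder probability of y under the hidden Markov model."""
    total = 0.0
    for x in itertools.product([-1, 1], repeat=len(y)):
        w = 0.5  # uniform stationary distribution of the +-1 Markov chain
        for i in range(len(y) - 1):
            w *= (1 - p) if x[i] == x[i + 1] else p
        for xi, yi in zip(x, y):
            w *= (1 - eps) if xi * yi == 1 else eps
        total += w
    return total

def Z(y):
    """Random-field Ising partition function with the field given by y."""
    total = 0.0
    for x in itertools.product([-1, 1], repeat=len(y)):
        E = J * sum(x[i] * x[i + 1] for i in range(len(y) - 1))
        E += K * sum(xi * yi for xi, yi in zip(x, y))
        total += math.exp(E)
    return total

for y in itertools.product([-1, 1], repeat=L):
    assert math.isclose(Q(y), c_J / lam**L * Z(y))
assert math.isclose(sum(Q(y) for y in itertools.product([-1, 1], repeat=L)), 1.0)
```

The identity follows from $p_{x,x'}=e^{Jxx'}/(2\cosh J)$ and ${\varepsilon}^{[xy=-1]}(1-{\varepsilon})^{[xy=1]}=e^{Kxy}/(2\cosh K)$, which the code confirms numerically.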
Consider the following functions: $$\aligned A(w) &=\frac 12 \log \frac{\cosh(w+J)}{\cosh(w-J)},\\ B(w) &=\frac 12 \log\Bigl[ 4\cdot{\cosh(w+J)}{\cosh(w-J)}\Bigr]= \frac 12 \log\Bigl[ e^{2w}+e^{-2w}+e^{2J}+e^{-2J}\Bigr]. \endaligned$$ One readily checks that if $s=\pm 1$, then for all $w\in{\mathbb R}$ $$\label{basic} \exp\Bigl( s A(w)+B(w) \Bigr) =2\cosh( w+s J).$$ Now the partition function can be evaluated by summing over the right-most spin. Namely, suppose $m<n$, $y_m^n\in \{-1,1\}^{n-m+1}$, then $$\aligned {\mathsf{Z}}_{m,n}(y_m^n) &=\sum_{x_m^{n-1}\in\{-1,1\}^{n-m}} \exp\Bigl( J\sum_{i=m}^{n-2} x_{i}x_{i+1}+ K\sum_{i=m}^{n-1} x_i y_i\Bigr)\sum_{x_n\in\{-1,1\}} e^{x_n( Jx_{n-1}+Ky_n)} \\ &=\sum_{x_m^{n-1}\in\{-1,1\}^{n-m}} \exp\Bigl( J\sum_{i=m}^{n-2} x_{i}x_{i+1}+ K\sum_{i=m}^{n-1} x_i y_i\Bigr) \Bigl\{ 2\cosh( Jx_{n-1}+Ky_n) \Bigr\}\\ &=\sum_{x_m^{n-1}\in\{-1,1\}^{n-m}} \exp\Bigl( J\sum_{i=m}^{n-2} x_{i}x_{i+1}+ K\sum_{i=m}^{n-1} x_i y_i\Bigr) \exp\Bigl(x_{n-1}A(w_n^{(n)})+B(w_n^{(n)})\Bigr) \endaligned$$ where $$w_n^{(n)} = Ky_n.$$ Hence, $$\aligned {\mathsf{Z}}_{m,n}(y_m^n)&=\sum_{x_m^{n-1}\in\{-1,1\}^{n-m}} \exp\Bigl( J\sum_{i=m}^{n-2} x_{i}x_{i+1}+ K\sum_{i=m}^{n-2} x_i y_i +x_{n-1}\bigl(\underbrace{K y_{n-1}+A(w_n^{(n)})}_{w_{n-1}^{(n)}}\bigr)\Bigr) \\ &\qquad\times\exp\Bigl(B(w_n^{(n)})\Bigr), \endaligned$$ and thus the new sum has exactly the same form, but instead
--- abstract: 'Restricted Boltzmann Machine (RBM) is a generative stochastic energy-based model of artificial neural networks for unsupervised learning. Recently, RBM has become well known as a pre-training method for Deep Learning. In addition to visible and hidden neurons, the structure of RBM has a number of parameters such as the weights between neurons and the coefficients for them. Therefore, we may meet some difficulty in determining an optimal network structure for analyzing big data. To avoid this problem, we investigated the variance of the parameters in order to find an optimal structure during learning. For this reason, we monitor the variance of the parameters that causes the fluctuation of the energy function in the RBM model. In this paper, we propose an adaptive learning method of RBM that can discover an optimal number of hidden neurons according to the training situation by applying a neuron generation and annihilation algorithm. In this method, a new hidden neuron is generated if the energy function has not yet converged and the variance of the parameters is large. Moreover, an inactivated hidden neuron is annihilated if the neuron does not affect the learning situation. Experimental results for some benchmark data sets are discussed in this paper.' author: - - title: | An Adaptive Learning Method of\ Restricted Boltzmann Machine by\ Neuron Generation and Annihilation Algorithm [^1] --- Introduction {#sec:Introduction} ============ Current information technology can collect various kinds of data sets because of recent tremendous advances in processing power, storage capacity, and networks connected to cloud computing. Such data samples include not only numerical values but also text such as comments, numerical evaluations such as rankings, and binary data such as pictures. Such data sets are called big data. 
Techniques for discovering knowledge from big data belong to the field of data mining and are also developed in the research field of Deep Learning [@Quoc12]. Deep Learning attracts a lot of attention in methodology research of artificial intelligence such as machine learning [@Bengio09]. In particular, the industrial world has been deeply impressed by its success in increasing the capability of image processing. The learning architecture has the advantage of not only a multi-layered network structure but also pre-training. The latter characteristic means that the architecture of Deep Learning accumulates prior knowledge of the features of input patterns. Restricted Boltzmann Machine (RBM) [@Hinton12] is one of the popular methods of Deep Learning for unsupervised learning. RBM has the capability of representing a probability distribution of the input data set, and it is an energy-based statistical model. Moreover, the Contrastive Divergence (CD) learning procedure, a fast algorithm for Gibbs sampling based on Markov chain Monte Carlo methods, is often used as one of the learning methods of RBM [@Hinton02; @Tileman08]. A problem related to RBM is how to determine an optimal initial network structure, such as the number of hidden neurons, according to the features of the input patterns, because the traditional RBM model cannot change its network structure during the learning phase. In this paper, we propose an adaptive learning method of RBM that can discover an optimal number of hidden neurons according to the training situation by applying a neuron generation and annihilation algorithm. For multi-layered neural networks, an adaptive learning method based on a neuron generation and annihilation algorithm during the learning phase was proposed [@Ichimura97; @Ichimura04]. The method monitors the variance of the weight vectors, called the Walking Distance ($WD$), in the learning phase. 
A new neuron is generated and inserted at the related position if the weight vector tends to fluctuate greatly even after a certain period of the training process. Moreover, an inactivated hidden neuron is annihilated if the neuron does not affect the learning situation. However, RBM with the CD method works in a discrete space because it uses binary neurons. We therefore consider its convergence under the Lipschitz continuity condition [@Carlson15]. According to [@Carlson15], the energy function of RBM can be transformed into equations under the continuity conditions with 3 kinds of parameters for visible and hidden neurons. We investigated the variance of the 3 kinds of parameters as the energy function of RBM converges [@Kamada15a]. We then selected the 2 parameters which influence the convergence situation of RBM, excluding the parameter related to the input features. In this paper, we show that our proposed model has good classification capability for a small data set (about 1000 records [@Kamada15b]). Moreover, we applied our proposed adaptive learning method of RBM to a big data set, namely CIFAR-10 [@cifar10]. The experimental results indicate that our proposed model performs well in comparison to a previous RBM model [@Dieleman12]. The remainder of this paper is organized as follows. Section \[sec:RBM\] describes the basic concept of RBM, and the condition of convergence under Lipschitz continuity is derived. In Section \[subsec:WD\], the neuron generation and annihilation algorithm for multi-layered neural networks is explained, and we apply this method to RBM in Section \[subsec:AdaptiveRBM\]. Section \[sec:EXE\] describes some experimental results. We give some discussions to conclude this paper in Section \[sec:Conclusion\]. Restricted Boltzmann Machine {#sec:RBM} ============================ Overview -------- This section explains the basic concept of RBM [@Hinton12]. 
As shown in Fig.\[fig:rbm\], RBM has a network structure with 2 kinds of layers, where one is a visible layer for the input data and the other is a hidden layer for representing the features of the given data space. Each layer consists of binary neurons. The traditional Boltzmann Machine has connections between neurons in the same layer [@Ackley85]. However, RBM has no connections within a layer, so its calculation is easier than in the traditional model because there is no interaction between neurons in the same layer. RBM learning trains the weights and the parameters of the visible and hidden neurons until the energy function converges to a sufficiently small value. The trained RBM represents a probability distribution of the input data. Let $v_i (0 \leq i \leq I)$ and $h_j (0 \leq j \leq J)$ be the binary variables of a visible neuron and a hidden neuron, respectively. $I$ and $J$ are the numbers of visible and hidden neurons, respectively. The energy function $E({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})$ for a visible vector ${\mbox{\boldmath $v$}} \in \{ 0, 1 \}^{I}$ and a hidden vector ${\mbox{\boldmath $h$}} \in \{ 0, 1 \}^{J}$ is given by Eq.(\[eq:energy\]). $p({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})$ is the joint probability distribution of ${\mbox{\boldmath $v$}}$ and ${\mbox{\boldmath $h$}}$, as shown in Eq.(\[eq:prob\]). ![The structure of RBM](rbm.eps) \[fig:rbm\] $$E({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}}) = - \sum_{i} b_i v_i - \sum_j c_j h_j - \sum_{i} \sum_{j} v_i W_{ij} h_j , \label{eq:energy}$$ $$p({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})=\frac{1}{Z} \exp(-E({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})) , \label{eq:prob}$$ $$Z = \sum_{{\mbox{\boldmath $v$}}} \sum_{{\mbox{\boldmath $h$}}} \exp(-E({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})) , \label{eq:PartitionFunction}$$ where $b_i$ and $c_j$ are the parameters for $v_i$ and $h_j$, respectively. $W_{ij}$ is the weight between $v_i$ and $h_j$. 
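The three equations above can be checked by exact enumeration for a network small enough to list all states. The sketch below is illustrative (not the paper's implementation), with arbitrarily chosen biases and weights:

```python
# Exact evaluation of E(v, h), Z, and p(v, h) for a tiny RBM, following
# Eqs. (eq:energy)-(eq:PartitionFunction). All parameter values are arbitrary.
import itertools
import math

I, J = 3, 2                                   # numbers of visible/hidden neurons
b = [0.1, -0.2, 0.0]                          # visible parameters b_i
c = [0.05, -0.1]                              # hidden parameters c_j
W = [[0.3, -0.2], [0.1, 0.4], [-0.5, 0.2]]    # weights W_ij

def energy(v, h):
    """E(v, h) = -sum_i b_i v_i - sum_j c_j h_j - sum_ij v_i W_ij h_j."""
    return (-sum(b[i] * v[i] for i in range(I))
            - sum(c[j] * h[j] for j in range(J))
            - sum(v[i] * W[i][j] * h[j] for i in range(I) for j in range(J)))

def states(n):
    return itertools.product([0, 1], repeat=n)

# Partition function: sum over all possible pairs of visible/hidden vectors.
Z = sum(math.exp(-energy(v, h)) for v in states(I) for h in states(J))

def p(v, h):
    """Joint probability p(v, h) = exp(-E(v, h)) / Z."""
    return math.exp(-energy(v, h)) / Z

# Sanity check: the joint distribution sums to one.
assert math.isclose(sum(p(v, h) for v in states(I) for h in states(J)), 1.0)
```

This brute-force enumeration is exactly what becomes infeasible for realistic $I$ and $J$, which is the motivation for the CD approximation discussed next.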
$Z$ is the partition function, which is given by summing over all possible pairs of visible and hidden vectors. The parameters of RBM are updated by maximum likelihood estimation for $p({\mbox{\boldmath $v$}})=\sum_{{\mbox{\boldmath $h$}}} p({\mbox{\boldmath $v$}}, {\mbox{\boldmath $h$}})$, the marginal probability of ${\mbox{\boldmath $v$}}$. However, the computational cost increases exponentially because the summation over all possible pairs is required to obtain the maximum likelihood estimate. Therefore, the Contrastive Divergence (CD) learning procedure has been proposed for RBM training. The CD method is a fast algorithm for Gibbs sampling based on Markov chain Monte Carlo methods [@Hinton02], and it is known to perform well even with a few sampling steps [@Tileman08]. Convergence under the Lipschitz continuous condition [@Carlson15] {#subsec:RBM_bound} ----------------------------------------------------------------- The CD method works in a discrete space. Therefore, we consider the convergence situation of RBM under the Lipschitz continuity condition. Generally, a solution can be found by machine learning if and only if the convexity and continuity conditions for the objective function are satisfied. However, RBM learning with the CD sampling method may incur slight errors and may not satisfy the continuity condition because of the use of binary neurons. Even if the network has only a small error at the initial step, the total energy may fluctuate seriously after a certain period of iterations. Carlson et al. discussed upper bounds on the log partition function for each parameter of RBM using convexity and Lipschitz continuity [@Carlson15]. The paper derived the following equations to measure
--- abstract: 'Although fractional Brownian motion was not invented by Benoît Mandelbrot, it was he who recognized the importance of this random process and gave it the name by which it is known today. This is a personal account of the history behind fractional Brownian motion and some subsequent developments.' address: 'Murad S. Taqqu is Professor, Department of Mathematics and Statistics, Boston University, 111 Cummington Street, Boston, Massachusetts 02215, USA.' author: - title: Benoît Mandelbrot and Fractional Brownian Motion --- Since Benoît Mandelbrot’s passing in October 2010, many well-deserved tributes have been paid to him.[^1] Benoît influenced a great many fields ranging from the physical sciences to economics, and mathematics was certainly among them. Benoît’s great gift was his ability to recognize the hidden potential in certain mathematical objects.[^2] I had the good fortune to observe Benoît’s mathematical analysis in action, and I would like to tell you about my experience with one of the objects that Benoît worked with, the random process known as a fractional Brownian motion. Although fractional Brownian motion was introduced by Kolmogorov, it was Benoît Mandelbrot who recognized the relevance of this random process and, in his seminal paper with Van Ness [@mandelbrotvanness1968], derived many important properties. There, he gave this process the name by which it is known today. See [@mandelbrot2002] for a general review. Let me recount first how I met Benoît. At the beginning of the seventies, I was a graduate student at Columbia University in the Department of Mathematical Statistics—a small department but home to prominent faculty such as Herbert Robbins, David Siegmund and Yuan Shih Chow. Although I had a fellowship during the academic year, I needed to find summer work—something I failed to do in my first year. I had sent my Curriculum Vitae to many companies in New York City, but I did not receive a single reply. 
For my second year, I decided to proceed differently. I asked members of the Department for contacts. This is how I was put in touch with Benoît Mandelbrot, who was then at IBM Research—an hour’s drive from New York City—but was also nominally an Adjunct Professor in the Department. In January of my second year, I called him and inquired about potential summer jobs. The conversation began in English but quickly turned to French. I had expected it to last a few minutes, but the conversation lasted an hour with Benoît doing most of the talking (as was often the case). He ended the conversation, saying that he knew of no jobs. But a few months later, he called me back. As things developed, it turned out that he needed a programmer for the summer and asked if I was interested. I accepted. This is how I became acquainted with his research at the time, which involved fractional Brownian motion and its application to hydrology, and how I ended up as Mandelbrot’s student. It started with the so-called “$R/S$ statistic,” where $R$ is the range of partial sums of the data, and $S$ is the sample standard deviation. It is a statistic that the British hydrologist Harold Edwin Hurst, in the first half of the twentieth century, had used to study the yearly variation of the levels of the Nile river in Egypt [@hurst1951]. The original work on the subject by Benoît Mandelbrot appeared in 1965 in the Comptes Rendus [@mandelbrot1965]. Under the usual assumptions of finite variance and independent and identically distributed observations, the $R/S$ statistic should grow like $n^{1/2}$, where $n$ is the sample size. The Nile data, however, indicated a growth of $n^H$, where $1/2<H<1$. The growth $n^{1/2}$ is typically associated with random walk, so $n^H$, with $1/2<H<1$ must correspond to something else. 
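The $R/S$ statistic just described can be sketched in a few lines. This is an illustrative implementation of one common convention (definitions of the range $R$ vary slightly across the literature), not the code Hurst or Mandelbrot used:

```python
# Rescaled range R/S of a series: R is the range of the cumulative deviations
# from the sample mean, S the (population) standard deviation.
import math

def rescaled_range(data):
    n = len(data)
    mean = sum(data) / n
    cum, partial = 0.0, []
    for x in data:
        cum += x - mean
        partial.append(cum)
    R = max(partial) - min(partial)
    S = math.sqrt(sum((x - mean) ** 2 for x in data) / n)
    return R / S

# For i.i.d. finite-variance data, rescaled_range(data) grows roughly like
# n**0.5 as n increases; Hurst's Nile measurements grew like n**H, 1/2 < H < 1.
```

Regressing $\log(R/S)$ on $\log n$ over increasing sample sizes gives the empirical exponent, which is how the Nile estimate $H>1/2$ was obtained.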
This is why Mandelbrot suspected that a process like fractional Brownian motion $B_H(t)$ may perhaps be relevant in this framework[^3] since, while the standard deviation of Brownian motion at time $t$ is $t^{1/2}$, that of fractional Brownian motion at time $t$ is $t^H$, where $0<H<1$ [@mandelbrotwallis1968]. The letter $H$, which refers to the hydrologist Hurst and which was used by Mandelbrot, has become standard in this context, and it now labels the fractional Brownian motion. The term “fractional Brownian motion” was coined by Mandelbrot and Van Ness in the now classical paper [@mandelbrotvanness1968]. Fractional Brownian motion has a number of nice properties, one of which is “self-similarity.” A process $\{X(t), t\in\mathbb{R}\}$ is self-similar with index $H>0$ if for any $a>0$, the process $\{X(at), t\in\mathbb{R}\}$ has the same finite-dimensional distributions as $\{a^H X(t),\break t\in\mathbb {R}\}$. Thus, like a fractal, there is scaling, but it is not the trajectories of the process that scale, but the probability distribution, the “odds.” This is why this type of scaling is sometimes called “statistical self-similarity” or, more precisely, “statistical self-affinity.” The fractional Brownian motion process is then characterized by the following three properties: the process is Gaussian with zero mean; it has stationary increments; it is self-similar with index $H$, $0<H<1$. Fractional Brownian motion reduces to Brownian motion when $H=1/2$, but in contrast to Brownian motion, it has dependent increments when $H \neq1/2$. Fractional Brownian motion was first introduced in 1940 by Andrei Nikolaevich Kolmogorov [@kolmogorov1940], who was studying spiral curves in Hilbert space. It was considered by Richard Allen Hunt [@hunt1951] in the context of random Fourier transforms and by Akiva Moiseevich Yaglom [@yaglom1955], who studied the correlation structure of processes that have stationary $n$th order increments. 
However, it is undoubtedly the seminal paper of Mandelbrot and Van Ness which put the focus on fractional Brownian motion and gave it its name. Why the term “fractional?” This is because the process can be represented as an integral with respect to Brownian motion $B(t)$, as follows: $$\begin{aligned} \label{e:fbm} \hspace*{20pt}B_H(t) &=& \int_{-\infty}^0 \{(t-s)^{H-1/2} - (-s)^{H-1/2} \} \,dB(s)\hspace*{-20pt}\nonumber\\[-8pt]\\[-8pt] &&{}+\int_0^t (t-s)^{H-1/2} \,dB(s)\nonumber\\ &=& \int_{-\infty}^\infty\{{(t-s)}_{+}^{H-1/2} - {(-s)}_{+}^{H-1/2} \}\, dB(s).\end{aligned}$$ The integrals are well defined because the integrands are square integrable with respect to Lebesgue measure. The form of the integrands is also reminiscent of the one that appears in the $n$-fold iterated integral formula, $$\begin{aligned} &&\int_0^t dt_{n-1} \int_0^{t_{n-1}} dt_{n-2} \cdots\int_0^{t_2} dt_1 \int_0^{t_1} g(s) \,ds \\ &&\quad= \frac{1}{(n-1)!} \int_0^t (t-s)^{n-1} g(s) \,ds\end{aligned}$$ and therefore (\[e:fbm\]) can be regarded as involving “fractional integrals.” This, in fact, turns out to be more than a superficial analogy! The focus on fractional Brownian motion has proved to be extremely fruitful because it has allowed all kind of extensions, some of which were hinted at by Benoît Mandelbrot. For example, the Gaussian noise “$dB$” in (\[e:fbm\]) can be replaced by an infinite variance Lévy-stable noise, giving rise to the linear Lévy fractional stable motion, which is an infinite variance self-similar process with stationary, but dependent, increments [@samorodnitskytaqqu1994book]. The kernel can also be replaced by a random sum of pulses [@cioczekgeorgesmandelbrotsamorodnitskytaqqu1995]. 
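The three defining properties listed above (Gaussian, stationary increments, self-similar with index $H$) determine the covariance structure of fractional Brownian motion completely. The following short sketch (illustrative code, not from this account) writes down that covariance and the resulting autocovariance of the unit increments:

```python
# Covariance of fractional Brownian motion and of its unit increments
# (fractional Gaussian noise), as implied by the three defining properties.
import math

def fbm_cov(s, t, H):
    """Cov(B_H(s), B_H(t)) = 0.5 * (s^{2H} + t^{2H} - |t - s|^{2H}), s, t >= 0."""
    return 0.5 * (s ** (2 * H) + t ** (2 * H) - abs(t - s) ** (2 * H))

def fgn_autocov(k, H):
    """Autocovariance of B_H(t+1) - B_H(t) at integer lag k >= 0."""
    return 0.5 * ((k + 1) ** (2 * H) - 2 * k ** (2 * H) + abs(k - 1) ** (2 * H))

# H = 1/2 recovers Brownian motion: increments are uncorrelated ...
assert fgn_autocov(0, 0.5) == 1.0 and fgn_autocov(3, 0.5) == 0.0
# ... while for 1/2 < H < 1 distant increments stay positively correlated,
# which is the long-range dependence behind the n**H growth of R/S.
assert fgn_autocov(3, 0.75) > 0.0
```

Note that `fbm_cov` is homogeneous of degree $2H$, `fbm_cov(a*s, a*t, H) == a**(2*H) * fbm_cov(s, t, H)`, which is precisely the statistical self-similarity discussed above.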
From a different perspective, the single integral in (\[e:fbm\]) can be replaced by a multiple integral, so that it becomes an element of the so-called Wiener chaos [@taqqu1979; @peccatitaqqu2011], of the form $$\label{e:mult} \int_{\mathbb{R}^k}^\prime g_t (x_1,\ldots,x_k)\, dB(x_1)\cdots\, dB(x_k),$$ for a suitable kernel $g
--- abstract: | 0.2truecm Classical strings coupled to a metric, a dilaton and an axion, as conceived by superstring theory, suffer from ultraviolet divergences due to self-interactions. Consequently, as in the case of radiating charged particles, the corresponding effective string dynamics can not be derived from an action principle. We propose a [*fundamental principle*]{} to build this dynamics, based on local energy-momentum conservation in terms of a well-defined distribution-valued energy-momentum tensor. Its continuity equation implies a finite equation of motion for self-interacting strings. The construction is carried out explicitly for strings in uniform motion in arbitrary space-time dimensions, where we establish cancelations of ultraviolet divergences which parallel superstring non-renormalization theorems. The uniqueness properties of the resulting dynamics are analyzed. --- Preprint DFPD/2016/TH05\ April 2016\ **Dynamics of self-interacting strings and energy-momentum conservation** 0.3truecm Kurt Lechner 1truecm *Dipartimento di Fisica e Astronomia, Università degli Studi di Padova, Italy* *and* *INFN, Sezione di Padova,* *Via F. Marzolo, 8, 35131 Padova, Italy* 2.0truecm Keywords: classical strings, self-interaction, renormalization, energy-momentum conservation, distribution theory. PACS: 11.25.-w, 11.30.-j, 11.10.Kk, 11.10.Gh, 02.30.Sa. Introduction ============ In the same way as charged particles in four space-time dimensions are subject to divergent electromagnetic self-interactions, generic charged extended objects, $p$-branes, in $D$ space-time dimensions are subject to infinite self-interactions. The reason for this is that the fields created by a brane become singular on the brane world-volume, meaning that the [*self-fields*]{}, and hence the [*self-forces*]{}, are infinite. 
A - in a certain sense dramatic - consequence of these ultraviolet divergences is that the theory of self-interacting branes cannot be derived from a variational principle: while the original fundamental equations of motion for fields and branes follow of course from an action principle, once one substitutes the fields resolving the former into the equations of motion of the latter, the resulting equations are divergent. If one isolates and subtracts - adopting whatever prescription - the infinities, the resulting non-local equations of motion of the brane no longer follow from an action principle. This in turn implies that the conservation laws, in particular energy-momentum conservation, cannot be derived from Noether’s theorem, see [*e.g.*]{} [@R; @LM; @KL0] for the case of self-interacting charged particles and dyons in $D=4$. Within this approach one thus loses control over energy-momentum conservation. More precisely, ultraviolet divergences show up in brane theory in two, a priori, unrelated physical quantities: $i)$ in the [*self-force*]{} of the brane, [*i.e.*]{} the force exerted by the field generated by the brane on the brane itself, as explained above, and $ii)$ in the $D$-momentum contained in a volume $V$ enclosing (a portion of) the brane. Although the origins of the divergences appearing in these two quantities - the self-force and the $D$-momentum - are the same, [*i.e.*]{} the bad ultraviolet behavior of the field in the vicinity of the brane, their cures actually require two distinct, unrelated procedures [@KL1]. To cure the divergent self-force one may proceed, as anticipated above, by regularizing the field produced by the brane in some way, evaluating it on the brane, and then trying to isolate and subtract the divergent terms. 
The cure of the infinite $D$-momentum requires instead the construction of a well-defined [*distribution-valued*]{} energy-momentum tensor and offers - at the same time - a strategy for the derivation of the self-force that is alternative to the approach described above and overcomes its main drawback, [*i.e.*]{} the missing control over energy-momentum conservation. It works as follows. Generically the standard total energy-momentum tensor has the structure $$\tau^{\m\n}=\tau^{\m\n}_{\rm field}+ \tau_{\rm kin}^{\m\n},\qquad \tau_{\rm kin}^{\m\n}=M\int \sqrt{\gamma}\,\gamma^{ij}\,\pa_i y^\m\,\pa_j y^\n\,\delta^D(x-y(\s))\,d^2\s,$$ where $\tau_{\rm kin}^{\m\n}$ is the free [*kinetic*]{} energy-momentum tensor of the brane (with $M$ the brane tension and $y^\m(\s)$ the brane coordinates, see sections \[aad\] and \[eom\] for the notations) and $\tau^{\m\n}_{\rm field}$ is the [*bare*]{} energy-momentum tensor produced by the fields[^1]: while the fields - solutions of linear d’Alembert equations - are by definition distributions, the tensor $\tau^{\m\n}_{\rm field}$ - a product of the fields - is [*not*]{} a distribution. Consequently, $i)$ the $D$-momentum of the field $$P^\m_V=\int_V \tau^{0\m}_{\rm field}\, d^3x$$ contained in a volume $V$ is in general divergent and, $ii)$ it makes no sense to evaluate the divergence $\pa_\m \tau^{\m\n}_{\rm field}$ to analyze the conservation properties of $\tau^{\m\n}$. The cure of these pathologies requires the construction of a [*renormalized*]{} distribution-valued energy-momentum tensor $T^{\m\n}_{\rm field}$, out of $\tau^{\m\n}_{\rm field}$. A - in principle standard - way to do this consists in the introduction of a regularization - possibly preserving Lorentz as well as reparameterization invariance - and the subsequent subtraction from the regularized energy-momentum tensor $(\tau^{\m\n}_{\rm field})_{reg}$ of [*divergent local counterterms*]{}, [*i.e.*]{} of counterterms supported on the brane that do not converge to distributions as the regularization is removed. 
By construction the resulting energy-momentum tensor $T^{\m\n}_{\rm field}$ is a distribution and admits hence a well-defined divergence, supported on the world-volume, $$\label{prel0} \pa_\m T^{\m\n}_{\rm field} =-\int {\cal S}^\n\,\delta^D(x-y(\s))\, d^p\s,$$ where the vector ${\cal S}^\n$ is going to become the [*finite*]{} self-force of the brane. In fact, for the divergence of the renormalized total energy-momentum tensor $T^{\m\n}=T^{\m\n}_{\rm field}+ \tau_{\rm kin}^{\m\n}$ one obtains now $$\label{prel} \pa_\m T^{\m\n} =\int\left(M\D_iU^{\n i}- {\cal S}^\n\right)\delta^D(x-y(\s))\,d^p\s,$$ where the quantity $\D_iU^{\n i}$ represents the generalized acceleration of the brane. Upon requiring local energy-momentum conservation one then derives the equation of motion for the brane coordinates $$\label{fam} M \D_iU^{\n i}= {\cal S}^\n.$$ This strategy to derive the self-force may however encounter an obstacle: it can happen that the vector ${\cal S}^\n$ in [(\[prel0\])]{} is not a pure [*multiplication*]{} operator but contains also terms involving derivatives acting on the $\delta$-function, as for example ${\cal S}^\n \sim \pa^\n$. In this case there would be no equation of motion for the brane ensuring the vanishing of $\pa_\m T^{\m\n}$. This obstacle can be faced through the [*finite-counterterm ambiguity*]{} inherent in any renormalization process in physics - in the present case the fact that after the subtraction of [*divergent*]{} local counterterms, the renormalized energy-momentum tensor is defined only modulo [*finite*]{} local counterterms. The general strategy just described has been envisaged in [@KL1], where a $p$-brane interacting minimally with a $(p+1)$-form potential in $D$ dimensions has been considered, based on previous work facing the analogous problem for massive [@LM] as well as massless [@AL1; @AL2; @KL2] point-charges in four dimensions.
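The logic of "regularize, subtract the divergent local counterterms, keep the finite remainder" can be illustrated by a one-dimensional toy integral, chosen purely for illustration and unrelated to the covariant subtraction used in the paper: $E(\epsilon)=\int_\epsilon^\infty e^{-r}r^{-2}\,dr = 1/\epsilon + \ln\epsilon + (\gamma-1) + O(\epsilon)$, so subtracting the two divergent pieces leaves a cutoff-independent constant. A minimal Python sketch:

```python
import numpy as np

def bare_energy(eps, npts=200001):
    """Cutoff-regularized E(eps) = int_eps^inf e^{-r} / r^2 dr,
    computed by trapezoidal quadrature on a log grid r = e^u.
    Diverges like 1/eps + log(eps) as the cutoff eps -> 0."""
    u = np.linspace(np.log(eps), np.log(50.0), npts)
    f = np.exp(-np.exp(u) - u)   # integrand after the r = e^u substitution
    h = u[1] - u[0]
    return h * (f.sum() - 0.5 * (f[0] + f[-1]))

def renormalized_energy(eps):
    """Subtract the divergent 'local counterterms' 1/eps and log(eps)."""
    return bare_energy(eps) - 1.0 / eps - np.log(eps)

for eps in (1e-2, 1e-3, 1e-4):
    print(eps, bare_energy(eps), renormalized_energy(eps))
# The bare values blow up as eps shrinks, while the renormalized ones
# approach the finite constant (Euler-Mascheroni gamma) - 1.
```

The finite remainder is independent of how the cutoff is removed, which is the property the distribution-valued construction above secures in the field-theoretic setting.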
The present paper represents the first step in the application of this method to the physically more interesting case of the low energy effective superstring theory, compactified to dimensions $D<10$, where the string couples to the metric $g_{\m\n}$, the dilaton $\Phi$ and the axion field $B_{\m\n}$. Particular attention will be paid to four-dimensional space-time. We will actually consider two prototype models: a) the [*general model*]{}, where a certain set of free parameters, or coupling constants, assume generic values, and b) the [*fundamental string model*]{}, where these parameters are tied by the special relations [(\[fundpar\])]{} predicted by ten-dimensional superstring theory. The problem of ultraviolet divergences and self-interactions of strings moving in a space-time of dimension $D\ge4$ has a long history, especially with respect to the problem of tension renormalization and the related finiteness/divergence properties of the self-force and the self-energy. A far from exhaustive literature in this respect is [@DH; @CHH; @DQ; @DGHRR; @BS; @BC1; @C; @BC2; @BD12; @BD12bis; @CBU; @BCM]; for some recent results on the same problem for point-
--- abstract: 'By applying an idea of Borodin and Olshanski, we study various scaling limits of determinantal point processes with trace class projection kernels given by spectral projections of selfadjoint Sturm–Liouville operators. Instead of studying the convergence of the kernels as functions, the method directly addresses the strong convergence of the induced integral operators. We show that, for this notion of convergence, the Dyson, Airy, and Bessel kernels are universal in the bulk, soft-edge, and hard-edge scaling limits. This result allows us to give a short and unified derivation of the known formulae for the scaling limits of the classical unitary random matrix ensembles (GUE, LUE/Wishart, JUE/MANOVA).' address: 'Zentrum Mathematik – M3, Technische Universität München, 80290 München, Germany' author: - Folkmar Bornemann bibliography: - 'article.bib' title: 'On the Scaling Limits of Determinantal Point Processes with Kernels Induced by Sturm–Liouville Operators' --- Introduction {#sect:intro} ============ We consider determinantal point processes on an interval $\Lambda = (a,b)$ with trace class projection kernel $$\label{eq:prokernel} K_n(x,y) = \sum_{j=0}^{n-1} \phi_{j}(x) \phi_{j}(y),$$ where $\phi_{0},\phi_1,\ldots,\phi_{n-1}$ are orthonormal in $L^2(\Lambda)$; each $\phi_j$ may have some dependence on $n$ that we suppress from the notation.
We recall that for such processes the joint probability density of the $n$ points is given by $$p_n(x_1,\ldots,x_n) = \frac{1}{n!} \det_{i,j=1}^n K_n(x_i,x_j),$$ the mean counting probability is given by the density $$\rho_n(x) = n^{-1} K_n(x,x),$$ and the gap probabilities are given, by the inclusion-exclusion principle, in terms of a Fredholm determinant, namely $$E_n(J) = {{\mathbb P}}(\{x_1,\ldots,x_n\} \cap J = \emptyset) = \det(I - {{\mathbb 1}}_J K_n {{\mathbb 1}}_J).$$ The various scaling limits are usually derived from a pointwise convergence of the kernel function $K_n(x,y)$ obtained from considering the large $n$ asymptotic of the eigenfunctions $\phi_j$, which can be technically *very* involved.[^1] Borodin and Olshanski suggested, for discrete point processes, a different, conceptually and technically much simpler approach based on selfadjoint difference operators. We will show that their method, generalized to selfadjoint Sturm–Liouville operators, allows us to give a short and unified derivation of the various scaling limits for the unitary random matrix ensembles (GUE, LUE/Wishart, JUE/MANOVA) that are based on the classical orthogonal polynomials (Hermite, Laguerre, Jacobi). The Borodin–Olshanski Method {#the-borodinolshanski-method .unnumbered} ---------------------------- The method proceeds along three steps: First, we identify the induced integral operator $K_n$ as the spectral projection $$K_n = {{\mathbb 1}}_{(-\infty,0)}(L_n)$$ of some selfadjoint ordinary differential operator $L_n$ on $L^2(\Lambda)$. Any scaling of the point process by $x = \sigma_n \xi + \mu_n$ ($\sigma_n \neq 0$) yields, in turn, the scaled objects $$\tilde E_n(J) = \det(I - {{\mathbb 1}}_J \tilde K_n {{\mathbb 1}}_J),\quad \tilde K_n = {{\mathbb 1}}_{(-\infty,0)}(\tilde L_n),$$ where $\tilde L_n$ is a selfadjoint differential operator on $L^2(\tilde\Lambda_n)$, $\tilde \Lambda_n = (\tilde a_n,\tilde b_n)$.
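The gap-probability formula lends itself to a direct numerical check: a quadrature discretization of the Fredholm determinant gives $E_n(J) \approx \det\bigl(I - [\sqrt{w_i}\,K_n(x_i,x_j)\sqrt{w_j}]\bigr)$. The Python sketch below does this for the GUE (Hermite) kernel; the values $n=10$ and the intervals are arbitrary illustration choices.

```python
import numpy as np

def hermite_functions(n, x):
    """Orthonormal Hermite functions phi_0..phi_{n-1} at points x,
    evaluated with the stable three-term recurrence."""
    x = np.asarray(x, dtype=float)
    phi = np.empty((n, x.size))
    phi[0] = np.pi ** -0.25 * np.exp(-0.5 * x * x)
    if n > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for j in range(1, n - 1):
        phi[j + 1] = (np.sqrt(2.0 / (j + 1)) * x * phi[j]
                      - np.sqrt(j / (j + 1.0)) * phi[j - 1])
    return phi

def gue_kernel(n, x, y):
    """Projection kernel K_n(x,y) = sum_{j<n} phi_j(x) phi_j(y)."""
    return hermite_functions(n, x).T @ hermite_functions(n, y)

def gap_probability(n, s, m=40):
    """E_n((-s,s)) = det(I - 1_J K_n 1_J) via Gauss-Legendre nodes."""
    nodes, weights = np.polynomial.legendre.leggauss(m)
    x = s * nodes              # map [-1,1] onto (-s,s)
    w = s * weights
    A = np.sqrt(w)[:, None] * gue_kernel(n, x, x) * np.sqrt(w)[None, :]
    return np.linalg.det(np.eye(m) - A)

print(gap_probability(10, 0.1), gap_probability(10, 1.0))
```

As expected of a probability that an interval is empty, the value lies in $(0,1)$ and decreases as the interval grows.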
Second, if $\tilde \Lambda_n \subset \tilde \Lambda = (\tilde a,\tilde b)$ with $\tilde a_n \to \tilde a$, $\tilde b_n \to \tilde b$, we aim for a selfadjoint operator $\tilde L$ on $L^2(\tilde \Lambda)$ with a core $C$ such that eventually $C \subset D(\tilde L_n)$ and $$\label{eq:Lpointwise} \tilde L_n u \,\to\, \tilde Lu \qquad (u \in C).$$ The point is that, if the test functions from $C$ are particularly nice, such a convergence is just a simple consequence of the *locally uniform convergence of the coefficients* of the differential operators $\tilde L_n$—a convergence that is, typically, an easy calculus exercise. Now, given (\[eq:Lpointwise\]), the concept of *strong resolvent convergence* (see Theorem \[thm:stolz\]) immediately yields,[^2] if $0 \not\in \sigma_{pp}(\tilde L)$, $$K_n {{\mathbb 1}}_{\tilde \Lambda_n} = {{\mathbb 1}}_{(-\infty,0)}(\tilde L_n) {{\mathbb 1}}_{\tilde \Lambda_n} \,{\overset{s}{\longrightarrow}}\, {{\mathbb 1}}_{(-\infty,0)}(\tilde L).$$ Third, we take an interval $J \subset \tilde\Lambda$, eventually satisfying $J \subset \tilde \Lambda_n$, such that the operator ${{\mathbb 1}}_{(-\infty,0)}(\tilde L) {{\mathbb 1}}_J$ is trace class with kernel $\tilde K(x,y)$ (which can be obtained from the generalized eigenfunction expansion of $\tilde L$, see §\[sect:genEFE\]). Then, we immediately get the strong convergence $$\tilde K_n {{\mathbb 1}}_J \,{\overset{s}{\longrightarrow}}\, \tilde K\, {{\mathbb 1}}_J.$$ Tao sketches the Borodin–Olshanski method, applied to the bulk and edge scaling of GUE, as a heuristic device. Because of the microlocal methods that he uses to calculate the projection ${{\mathbb 1}}_{(-\infty,0)}(\tilde L)$, he puts his sketch under the headline “The Dyson and Airy kernels of GUE via semiclassical analysis”.
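The bulk scaling limit itself can be observed numerically: under the classical scaling $x = \pi\xi/\sqrt{2n}$ at the center of the GUE spectrum, the rescaled Hermite kernel approaches the sine (Dyson) kernel $\sin(\pi(\xi-\eta))/(\pi(\xi-\eta))$. A minimal Python check (the scaling constants are the standard ones for this normalization, stated without derivation):

```python
import numpy as np

def hermite_functions(n, x):
    """Orthonormal Hermite functions phi_0..phi_{n-1} at points x
    (stable three-term recurrence)."""
    x = np.asarray(x, dtype=float)
    phi = np.empty((n, x.size))
    phi[0] = np.pi ** -0.25 * np.exp(-0.5 * x * x)
    if n > 1:
        phi[1] = np.sqrt(2.0) * x * phi[0]
    for j in range(1, n - 1):
        phi[j + 1] = (np.sqrt(2.0 / (j + 1)) * x * phi[j]
                      - np.sqrt(j / (j + 1.0)) * phi[j - 1])
    return phi

def scaled_kernel(n, xi, eta):
    """(pi/sqrt(2n)) * K_n(pi*xi/sqrt(2n), pi*eta/sqrt(2n)):
    the Hermite kernel under bulk scaling at the spectral center."""
    c = np.pi / np.sqrt(2.0 * n)
    px = hermite_functions(n, np.array([c * xi]))[:, 0]
    py = hermite_functions(n, np.array([c * eta]))[:, 0]
    return c * float(px @ py)

def sine_kernel(xi, eta):
    d = xi - eta
    return 1.0 if d == 0 else np.sin(np.pi * d) / (np.pi * d)

n = 500
for xi, eta in [(0.0, 0.0), (0.3, -0.2), (1.0, 0.25)]:
    print(xi, eta, scaled_kernel(n, xi, eta), sine_kernel(xi, eta))
```

Already at $n = 500$ the two kernels agree to a few digits, which is the pointwise shadow of the strong operator convergence discussed above.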
Scaling Limits and Other Modes of Convergence {#scaling-limits-and-other-modes-of-convergence .unnumbered} --------------------------------------------- Given that one just has to establish the convergence of the coefficients of a differential operator (instead of an asymptotic of its eigenfunctions), the Borodin–Olshanski method is an extremely simple device to determine all the scalings $x=\sigma_n \xi + \mu_n$ that would yield some [*meaningful*]{} limit $\tilde K_n {{\mathbb 1}}_J \,\to\, \tilde K\, {{\mathbb 1}}_J$, namely in the strong operator topology. Other modes of convergence have been studied in the literature, ranging from some weak convergence of $k$-point correlation functions over pointwise convergence of the kernel functions to the convergence of gap probabilities, that is, $$\tilde E_n(J) = \det(I - {{\mathbb 1}}_J \tilde K_n {{\mathbb 1}}_J) \to \det(I - {{\mathbb 1}}_J \tilde K\, {{\mathbb 1}}_J) = \tilde E(J).$$ From a probabilistic point of view, the latter convergence is of particular interest and has been shown in at least three ways: 1. By Hadamard’s inequality, convergence of the determinants follows directly from the locally uniform convergence of the kernels $K_n$ [@AGZ Lemma 3.4.5] and, for unbounded $J$, from further large deviation estimates [@AGZ Lemma 3.3.2]. This way, the limit gap probabilities in the bulk and soft edge scaling limit of GUE can rigorously be established.\ 2. Since $A \mapsto \det(I-A)$ is continuous with respect to the trace class norm [@MR2154153 Thm. 3.4], $\tilde K_n {{\mathbb 1}}_J \,\to\, \tilde K\, {{\mathbb 1}}_J$ in trace class norm would generally suffice.
Such a convergence can be proved by factorizing the trace class operators into Hilbert–Schmidt operators and obtaining the $L^2$-convergence of the factorized kernels once more from locally uniform convergence, see the work of on the scaling limits of the LUE/Wishart ensembles [-@MR1863961] and that on the limits of the JUE/MANOVA ensembles [-@MR2485010].\ 3. Since ${{\mathbb 1}}_J \tilde K_n {{\mathbb 1}}_J$ and ${{\mathbb 1}}_J \tilde K\, {{\mathbb 1}}_J$ are selfadjoint and positive semi-definite, yet another way is by observing that the convergence $\tilde K_n {{\mathbb 1}}_J \,\to\, \tilde K\, {{\mathbb 1}}_J$ in trace class norm is, for continuous kernels, equivalent [@MR2154153 Thm. 2.20] to the combination of both
--- abstract: 'We present a comprehensive analysis of different techniques available for the spectroscopic analysis of FGK stars, and provide a recommended methodology which efficiently estimates accurate stellar atmospheric parameters for large samples of stars. Our analysis includes a simultaneous equivalent width analysis of [Fe]{} and [Fe]{} spectral lines, and for the first time, utilises on-the-fly NLTE corrections of individual [Fe]{} lines. We further investigate several temperature scales, finding that estimates from Balmer line measurements provide the most accurate effective temperatures at all metallicities. We apply our analysis to a large sample of both dwarf and giant stars selected from the RAVE survey. We then show that the difference between parameters determined by our method and that by standard 1D LTE excitation-ionisation balance of Fe reveals substantial systematic biases: up to $400$ K in effective temperature, $1.0$ dex in surface gravity, and $0.4$ dex in metallicity for stars with $\feh\sim-2.5$. This has large implications for the study of the stellar populations in the Milky Way.' author: - | Gregory R. Ruchti,$^{1}$[^1] Maria Bergemann,$^{1}$ Aldo Serenelli,$^{2}$ Luca Casagrande$^{3}$ and Karin Lind$^{1}$\ $^{1}$Max Planck Institut für Astrophysik, Karl-Schwarzschild-Str. 1, 85741 Garching, Germany\ $^{2}$Instituto de Ciencias del Espacio (CSIC-IEEC), Facultad de Ciencias, Campus UAB, 08193 Bellaterra, Spain\ $^{3}$Research School of Astronomy & Astrophysics, Mount Stromlo Observatory, The Australian National University, ACT 2611, Australia\ date: 'Accepted 2012 October 30. Received 2012 September 26' title: 'Unveiling systematic biases in 1D LTE excitation-ionisation balance of Fe for FGK stars. A novel approach to determination of stellar parameters.'
--- \[firstpage\] stars: abundances — stars: late-type — stars: Population II Introduction {#sec-intro} ============ The fundamental atmospheric (effective temperature, surface gravity, and metallicity) and physical (mass and age) parameters of stars provide the major observational foundation for chemo-dynamical studies of the Milky Way and other galaxies in the Local Group. With the dawn of large spectroscopic surveys to study individual stars, such as SEGUE [@yanny09], RAVE [@steinmetz06], Gaia-ESO [@gilmore12], and HERMES [@barden08], these parameters are used to infer the characteristics of different populations of stars that comprise the Milky Way. Stellar parameters determined by spectroscopic methods are of key importance. The only way to accurately measure metallicity is through spectroscopy, which thus underlies photometric calibrations [e.g., @holmberg07; @an09; @arnadottir10; @casagrande11], while high-resolution spectroscopy is also used to correct the low-resolution results [e.g., @carollo10]. The atmospheric parameters can all be estimated from a spectrum in a consistent and efficient way. This also avoids the problem of reddening inherent in photometry, since spectroscopic parameters are not sensitive to reddening. The spectroscopic parameters can then be used alone or in combination with photometric information to fit individual stars to theoretical isochrones or evolutionary tracks to determine the stellar mass, age, and distance of a star. A common method for deriving the spectroscopic atmospheric parameters is to use the information from [Fe]{} and [Fe]{} absorption lines under the assumption of hydrostatic equilibrium (HE) and local thermodynamic equilibrium (LTE). Many previous studies have used some variation of this technique (e.g., ionisation or excitation equilibrium) to determine the stellar atmospheric parameters and abundances, and hence distances and kinematics, of FGK stars in the Milky Way.
For example, some have used this procedure to estimate the effective temperature, surface gravity, and metallicity of a star [e.g., @fulbright00; @prochaska00; @johnson02], while others use photometric estimates of effective temperature in combination with the ionisation equilibrium of the abundance of iron in LTE to estimate surface gravity and metallicity [e.g., @mcwilliam95; @francois03; @bai04; @allendep06; @lai08]. However, both observational [e.g., @fuhrmann98; @ivans01; @ruchti11; @bruntt12] and theoretical evidence [e.g., @thevenin99; @asplund05; @mashonkina11] suggest that systematic biases are present within such analyses due to the breakdown of the assumption of LTE. More recently, @bergemann12 and @lind12 quantified the effects of non-local thermodynamic equilibrium (NLTE) on the determination of surface gravity and metallicity, revealing very substantial systematic biases in the estimates at low metallicity and/or surface gravity. It is therefore extremely important to develop sophisticated methods, which reconcile these effects in order to derive accurate spectroscopic parameters. This is the first in a series of papers, in which we develop new, robust methods to determine the fundamental parameters of FGK stars and then apply these techniques to large stellar samples to study the chemical and dynamical properties of the different stellar populations of the Milky Way. In this work, we utilise the sample of stars selected from the RAVE survey originally published in @ruchti11 [hereafter R11] to formulate the methodology to derive very accurate atmospheric parameters. We consider several temperature scales and show that the Balmer line method is the most reliable among the different methods presently available. Further, we have developed the necessary tools to apply on-the-fly NLTE corrections[^2] to [Fe]{} lines, utilising the grid described in @lind12. 
We verify our method using a sample of standard stars with interferometric estimates of effective temperature and/or [*Hipparcos*]{} parallaxes. We then perform a comprehensive comparison to standard 1D, LTE techniques for the spectral analysis of stars, finding significant systematic biases. Sample Selection and Observations ================================= NLTE effects in iron are most prominent in low-metallicity stars [@lind12; @bergemann12]. We therefore chose the metal-poor sample from R11 for our study. These stars were originally selected for high-resolution observations based on data obtained by the RAVE survey in order to study the metal-poor thick disk of the Milky Way. Spectral data for these stars were obtained using high-resolution echelle spectrographs at several facilities around the world. Full details of the observations and data reduction of the spectra can be found in R11. Briefly, all spectrographs delivered a resolving power greater than 30,000 and covered the full optical wavelength range. Further, nearly all spectra had signal-to-noise ratios greater than $100:1$ per pixel. The equivalent widths (EWs) of both [Fe]{} and [Fe]{} lines, taken from the line lists of @fulbright00 and @johnson02, were measured using the ARES code [@sousa07]. However, during measurement quality checks, we found that the continuum was poorly estimated for some lines. We therefore determined EWs for these affected lines using hand measurements. Stellar Parameter Analyses {#sec-par} ========================== We computed the stellar parameters for each star using two different methods. In the first method, which is commonly used in the literature, we derived an effective temperature, $\tlte$, surface gravity, $\logglte$, metallicity, $\fehlte$, and microturbulence, $\vtlte$, from the ionisation and excitation equilibrium of Fe in LTE. This is hereafter denoted as the LTE-Fe method. 
We used an iterative procedure that utilised the `MOOG` analysis program [@sneden73] and 1D, plane-parallel `ATLAS-ODF` model atmospheres from Kurucz[^3] computed under the assumption of LTE and HE. In our procedure, the stellar effective temperature was set by minimising the magnitude of the slope of the relationship between the abundance of iron from [Fe]{} lines and the excitation potential of each line. Similarly, the microturbulent velocity was found by minimising the slope between the abundance of iron from [Fe]{} lines and the reduced EW of each line. The surface gravity was then estimated by minimising the difference between the abundance of iron measured from [Fe]{} and [Fe]{} lines. Iterations continued until all of the criteria above were satisfied. Finally, $\fehlte$ was chosen to equal the abundance of iron from the analysis. Our results for this method are described in Section \[sec-lte\]. The second method, denoted as the NLTE-Opt method, consists of two parts. First, we determined the optimal effective temperature estimate, $\tfin$, for each star (see Section \[sec-temp\] for more details). Then, we utilised `MOOG` to compute a new surface gravity, $\loggf$, metallicity, $\fehf$, and microturbulence, $\vtf$. This was done using the same iterative techniques as the LTE-Fe method, that is the ionisation balance of the abundance of iron from [Fe]{} and [Fe]{} lines. There are, however, three important differences. First, the stellar effective temperature was held fixed to the optimal value, $\tfin$. Second, we restricted the analysis to Fe lines with excitation potentials above 2 
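The slope-zeroing iteration just described can be sketched numerically. The toy below replaces the real spectral-synthesis step (MOOG plus model atmospheres) with an invented linear response of line abundances to parameter errors; all sensitivity coefficients and line lists are illustration values, not data from this paper.

```python
import numpy as np

# Invented linear stand-in for recomputing line abundances at trial
# parameters (a real analysis would call a synthesis code such as MOOG).
TRUE = dict(teff=5200.0, logg=3.1, feh=-1.4, vt=1.5)
K_TEFF = 1e-4   # dex per (K * eV): Fe I response to a temperature error
K_VT = 0.3      # dex per dex of reduced EW: response to a vt error
K_LOGG = 0.4    # dex per dex: Fe II response to a log g error

rng = np.random.default_rng(0)
chi = rng.uniform(0.0, 5.0, 60)     # Fe I excitation potentials (eV)
rew = rng.uniform(-6.0, -4.5, 60)   # Fe I reduced equivalent widths

def fe1_abund(teff, vt):
    return (TRUE["feh"] + K_TEFF * (TRUE["teff"] - teff) * chi
            + K_VT * (vt - TRUE["vt"]) * rew)

def fe2_abund(logg):
    return TRUE["feh"] + K_LOGG * (logg - TRUE["logg"])

def lte_fe_solve(teff=6000.0, logg=4.0, vt=1.0, iters=50):
    """Excitation-ionisation balance: zero the abundance-vs-chi slope
    (sets teff), the abundance-vs-REW slope (sets vt), and the
    Fe I - Fe II difference (sets log g)."""
    for _ in range(iters):
        a1 = fe1_abund(teff, vt)
        teff += np.polyfit(chi, a1, 1)[0] / K_TEFF
        vt -= np.polyfit(rew, a1, 1)[0] / K_VT
        logg += (np.mean(fe1_abund(teff, vt)) - fe2_abund(logg)) / K_LOGG
    return teff, logg, np.mean(fe1_abund(teff, vt)), vt

print(lte_fe_solve())   # converges back to the 'true' parameters
```

Starting 800 K and 0.9 dex away, the iteration recovers the input parameters, which is exactly the fixed-point logic of the LTE-Fe method (the linearity of the toy makes convergence trivially fast).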
--- abstract: 'We investigate the effect of free electrons on the quality factor ($Q$) of a metallic nanomechanical resonator in the form of a thin elastic beam. The flexural and longitudinal modes of the beam are modeled using thin beam elasticity theory, and simple perturbation theory is used to calculate the rate at which an externally excited vibration mode decays due to its interaction with free electrons. We find that electron-phonon interaction significantly affects the $Q$ of longitudinal modes, and may also be of significance to the damping of flexural modes in otherwise high-$Q$ beams. The finite geometry of the beam is manifested in two important ways. Its finite length breaks translation invariance along the beam and introduces an imperfect momentum conservation law in place of the exact law. Its finite width imposes a quantization of the electronic states that introduces a temperature scale for which there exists a crossover from a high-temperature macroscopic regime, where electron-phonon damping behaves as if the electrons were in the bulk, to a low-temperature mesoscopic regime, where damping is dominated by just a few dissipation channels and exhibits sharp non-monotonic changes as parameters are varied. This suggests a novel scheme for probing the electronic spectrum of a nanoscale device by measuring the $Q$ of its mechanical vibrations.' 
author: - 'Ze’ev Lindenfeld and Ron Lifshitz' bibliography: - 'EPD.bib' date: 'November 2, 2012' title: Damping of mechanical vibrations by free electrons in metallic nanoresonators --- Introduction {#sec:introduction} ============ The design and fabrication of high-$Q$ mechanical resonators is an ongoing effort that has intensified with the advent of microelectromechanical systems (MEMS) and even more with the recent progression toward nanoelectromechanical systems (NEMS).[@RoukesPlenty; @Cleland03; @ekinci05] One requires low-loss mechanical resonators for a host of nanotechnological applications, such as low phase-noise oscillators;[@Kenig12] highly sensitive mass,[@ekinci04; @*Yang06; @*Hanay12; @ilic04; @Jensen08; @lassagne08] spin,[@rugar] and charge detectors;[@cleland98] and ultra-sensitive thermometers[@roukes99] and displacement sensors;[@cleland; @*knobel; @ekinci02; @*truitt07] as well as for basic research in the mesoscopic physics of phonons,[@schwab00] and the general study of the behavior of mechanical degrees of freedom at the interface between the classical and the quantum worlds.[@schwab05; @lahaye04; @*naik06; @Oconnell10; @katz; @*katz08] It is therefore of great importance to understand the dominant damping mechanisms in small mechanical resonators. A variety of different mechanisms—such as internal friction due to bulk or surface defects,[@Mihailovich95; @olkhovets; @*carr2; @*evoy2; @liu; @mohanty; @zolfagharkhani; @seoanez; @remus; @chu07; @unterreithmeier] phonon-mediated damping,[@lifshitzTED; @*lifshitzPhonon; @houston; @sudipto; @kiselev] and clamping losses [@cross; @photiadis1; @geller3; @*geller1; @*geller2; @schmid08; @wilson-rae; @*cole]—may contribute to the dissipation of energy in mechanical resonators, and thus impose limits on their quality factors. The dissipated energy is transferred from a particular mode of the resonator, which is driven externally, to energy reservoirs formed by all other degrees of freedom of the system. 
Here, we focus our attention on *electron-phonon damping*, arising from energy transfer between the driven mode of the resonator and free electrons. This dissipation mechanism is avoided altogether by fabricating resonators from dielectric materials, but for different practical reasons one often prefers to fabricate MEMS and NEMS resonators from metals, such as platinum,[@husain] gold,[@buks02; @venkatesan10] and aluminum.[@davis; @li; @*hoehne; @teufel] Free electrons are also present in metallic carbon-nanotube resonators[@peng06; @Eriksson08] and in resonating nanoparticles.[@min; @pelton; @zijlstra] All these different resonators exhibit a wide range of quality factors, from as low as about $10$ and up to around $10^5$, yet one still lacks a full understanding of their damping mechanisms. It is well-known from at least as early as the 1950s that electron-phonon scattering is a dominant source of attenuation of longitudinal sound waves in bulk metals at low temperatures,[@Bommel54; @kittel; @pippard; @blount; @ziman; @kokkedee] and indications exist that it may play a significant role in the damping of longitudinal vibrations in freely suspended bi-pyramid gold nanoparticles.[@pelton] We note that the effect of electron-phonon scattering on electronic transport through suspended nanomechanical beams,[@weig04] carbon nanotubes,[@leroy04; @leturcq09; @*mariani; @Steele09; @*huttel09; @*Laird12] fullerenes,[@park] atomic wires,[@paulsson; @viljas; @vega] and molecular junctions[@pecchia; @kushmerick; @wang; @galperin; @Tal08] is well documented and intensively studied. There is also evidence for the effect of electron-phonon scattering on heat transport in nanostructures.[@fon; @barman] Motivated by all of these considerations, it is our aim here to estimate the contribution of electron-phonon interaction to the damping of vibrational modes in small metallic resonators, while focusing on the effects of their finite dimensions. 
We describe the interaction between electrons and phonons by means of a simple screened electrostatic potential. We assume that initially both electrons and phonons are at thermal equilibrium at the same temperature, except for a single mode, which is externally excited by the addition of just a single phonon to its thermal population. This allows us to assume that the electrons remain almost thermally distributed at all times, even though they do not actually relax back to equilibrium. The decay rate of the excited mode is calculated perturbatively, using Fermi’s Golden Rule, as the difference between the rates at which phonons enter and leave the excited mode through their interaction with free electrons. This requires us to assume that the electron and phonon energies are precisely known, or in other words that the *a priori* lifetimes of both the electrons and the phonons are much longer than all other relevant time scales. For the phonons this means that all other damping mechanisms must be much weaker than electron-phonon damping—although we show later that additional damping mechanisms do not significantly alter the results of our calculations. For the electrons, on the other hand, this implies that we are working in the high-frequency, or unrelaxed *adiabatic limit*, with $\omega_q\tau_{e}>1$, where $\omega_q$ is the vibration frequency and $\tau_{e}$ is the mean lifetime of the electron due to its scattering with other electrons, thermal phonons, defects, etc. In certain situations, as discussed in detail in section 5.12 of the book by Ziman,[@ziman] it is sufficient to satisfy the spatial version of this requirement, namely that $\Lambda_{e}q>1$—where $\Lambda_{e}$ is the electron mean free path and $q$ is the wavenumber of the excited mode—which is easier to satisfy because the ratio of the phonon group velocity to the electron Fermi velocity is typically very small.
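To get a feel for the two adiabatic criteria, $\omega_q\tau_e>1$ and $\Lambda_e q>1$, one can plug in rough numbers. The sketch below uses illustrative order-of-magnitude values for a gold beam (sound speed, Fermi velocity, and mean free paths are assumptions for the example, not data from this paper):

```python
import math

V_SOUND = 3.2e3   # longitudinal sound speed in gold, m/s (approximate)
V_FERMI = 1.4e6   # Fermi velocity in gold, m/s (approximate)

def adiabatic_parameters(freq_hz, mfp_m):
    """Return (omega*tau_e, Lambda_e*q) for a longitudinal mode of
    frequency freq_hz and an electron mean free path mfp_m."""
    omega = 2.0 * math.pi * freq_hz
    q = omega / V_SOUND          # wavenumber of the excited mode
    tau_e = mfp_m / V_FERMI      # electron lifetime from its mean free path
    return omega * tau_e, mfp_m * q

# Dirty metal: 1 GHz mode, 50 nm mean free path.
wt_dirty, lq_dirty = adiabatic_parameters(1e9, 50e-9)
# Clean metal at low temperature: 10 GHz mode, 1 micron mean free path.
wt_clean, lq_clean = adiabatic_parameters(1e10, 1e-6)
print(wt_dirty, lq_dirty)   # both well below 1: adiabatic limit not reached
print(wt_clean, lq_clean)   # omega*tau_e still < 1, but Lambda_e*q > 1
```

The ratio of the two parameters is exactly $v_F/v_s \sim$ several hundred, which is why the spatial condition is so much easier to satisfy, as noted above.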
Intuitively speaking, it is as if the moving electron explores the elastic wave much faster than it would have, if it were standing in place and waiting for the wave to go by. In bulk metals it is difficult to satisfy the adiabatic condition, and one is inevitably required to address the relaxation of the electronic distribution by employing the Boltzmann equation or other approaches.[@pippard; @blount; @Khan87] However, in clean nanometer scale devices oscillating at very-high frequencies, and operating at sufficiently low temperatures, there is a greater chance of reaching the adiabatic limit. We therefore assume that this is the case, and alert the reader to the fact that our results may be less applicable at high temperatures. We describe the vibrational modes of the nanomechanical resonator using continuum elasticity theory, which is often employed for treating nanomechanical systems, even in the case of carbon nanotubes,[@kahn; @suzuura; @martino] and also in the quantum regime.[@blencowe99; @santamore01; @*santamore02; @lindenfeld11] The small size of such nanostructures may raise the question of the validity of a continuum elastic approach. However, explicit comparisons with atomistic calculations and experimental results have shown that continuum elasticity is valid down to dimensions of a few nanometers,[@Broughton97; @murray2; @combe; @ramirez] and may indeed be used even for carbon nanotubes, as long as one uses appropriate effective parameters.[@yoon; @chico] Finally, we investigate beams with typical dimensions that are much larger than the bulk Fermi wavelength. In such systems it is usually assumed that the effect of the boundaries on the
--- abstract: 'We present results from very high signal-to-noise spectropolarimetric observations of the Seyfert 1 galaxy NGC 3783. Position Angle (PA) changes across the Balmer lines show that the scatterer is [*resolving*]{} the Broad-Emission Line Region (BLR). A broad component seen in polarized light and located bluewards from the H$\beta$ line very likely corresponds to HeII$\lambda4686$. The lack of PA changes across this line suggests that the region responsible for this emission appears to the scatterer as unresolved as the continuum source, in agreement with the stratified BLR structure determined from reverberation mapping.' author: - 'P. Lira$^{1}$, M. Kishimoto$^{2}$, A. Robinson$^{3}$, S. Young$^{4}$, D. Axon$^{3}$, M. Elvis$^{5}$, A. Lawrence$^{6}$ & B. Peterson$^{7}$' title: Resolving the BLR in NGC 3783 --- Observations ============ We have obtained very high S/N spectropolarimetric observations of NGC 3783 using the VLT and the 3.6m telescope at La Silla in 2006, with total exposure times of 3.4 and 6.2 hours, respectively. The data were reduced following Miller, Robinson & Goodrich (1988) and special care was taken to correct for the interstellar polarization in our Galaxy along the line of sight towards NGC 3783. Main Results ============ The left panel in Figure 1 shows the total flux and PA of the polarized emission in the 4000-5000 Å range. Strong PA changes are coincident with the broad Balmer emission lines. This is consistent with near-field scattering, i.e., with the scatterer being close enough to the Balmer emitting region to [*resolve*]{} it. These PA changes, however, are not consistent with a simple, pure rotational motion of the BLR as seen by an equatorial scattering medium, which predicts a horizontal [*S-shaped*]{} PA swing, as already observed in several other Seyfert 1 galaxies, and modeled by Smith et al. (2005). Instead, a [*M-shaped*]{} pattern is seen in all Balmer lines (also clear in H$\alpha$, which is not shown here). 
The Balmer lines also show a narrow dip in polarized flux which is blue-shifted from the position of the emission peak seen in total flux (central panel in Figure 1). In a forthcoming paper we will present detailed modeling of our data. The HeII$\lambda4686$ line, very conspicuous in total flux, does not show the features observed in the polarized flux of the Balmer lines, and essentially no PA change. The right panel in Figure 1 compares the H$\alpha$ and H$\beta$ profiles in velocity space. An excess is clearly seen extending from the blue wing of H$\beta$, which is coincident with the position of the HeII$\lambda4686$ emission line. We therefore interpret this excess as polarized emission from the broad HeII$\lambda4686$ line. The lack of a PA change across the HeII$\lambda4686$ line strongly suggests a smaller solid angle subtended by the emitting region as seen by the scatterer when compared with the Balmer emitting region. Discussion ========== We have found clear evidence that the scatterer is resolving the Balmer emitting region in NGC 3783. The geometry and kinematics will be explored in a forthcoming paper which will model the spectropolarimetric observations. Reverberation mapping results show clear evidence for a stratified BLR which is also consistent with virial motion of the BLR (Onken & Peterson, 2002). We have found strong evidence that the high ionization HeII$\lambda4686$ line is produced in a region much more compact than that producing the Balmer lines, in good agreement with the idea of a stratified BLR.
Figure 1. [*Left and central panels:*]{} Comparison between total flux and polarization position angle (PA), and total and polarized flux spectra. [*Right panel:*]{} H$\alpha$ and H$\beta$ emission line profiles in velocity space. We identify the excess seen in the blue wing of H$\beta$ as emission from the HeII$\lambda4686$ line.
PL acknowledges support by Fondap project \#15010003 and Fondecyt project \#1040719.
Miller, J. S., Robinson, L. B., & Goodrich, R. W., 1988, in ‘The 9$^{th}$ Santa Cruz Summer Workshop in Astronomy and Astrophysics’, ed. L. B. Robinson, p. 157
Onken, C. A., & Peterson, B. M., 2002, ApJ, 572, 746
Smith, J. E., Robinson, A., Young, S., Axon, D. J., & Corbett, E. A., 2005, MNRAS, 359, 846
--- abstract: 'In this paper, we provide a novel way to generate low-dimensional (dense) vector embeddings for the noun and verb synsets in WordNet, so that the hypernym-hyponym tree structure is preserved in the embeddings. We call this embedding the sense spectrum (sense spectra in the plural). In order to create suitable labels for the training of sense spectra, we designed a new similarity measurement for noun and verb synsets in WordNet. We call this similarity measurement the hypernym intersection similarity (HIS), since it compares the common and unique hypernyms between two synsets. Our experiments show that on the noun and verb pairs of the SimLex-999 dataset, HIS outperforms the three similarity measurements in WordNet. Moreover, to the best of our knowledge, sense spectra are the first dense embedding system that can explicitly and completely measure the hypernym-hyponym relationship in WordNet.' author: - | Canlin Zhang\ Department of Mathematics\ Florida State University\ [czhang@math.fsu.edu]{}\ Xiuwen Liu\ Department of Computer Science\ Florida State University\ [liux@cs.fsu.edu]{}\ bibliography: - 'anthology.bib' - 'emnlp2020.bib' title: Preserving the Hypernym Tree of WordNet in Dense Embeddings --- Introduction ============ WordNet is a lexical database for the English language [@WordNet_introduction], which groups English words into sets of synonyms called $synsets$ [@WordNet2]. Each synset is related to a specific semantic sense, and synsets related to the same semantic sense are usually ordered by their usage frequencies in English. There are four types of synsets in WordNet: noun (n), verb (v), adjective (a) and adverb (r). As a result, a synset in WordNet is represented in the form of “semantic sense.type.ordering". For instance, $domestic\_animal.n.01$ means the first noun synset related to the semantic sense “domestic animal", and $eat.v.03$ means the third verb synset related to the semantic sense “eat".
WordNet can be regarded as a dictionary, since it provides brief definitions and usage examples for each synset. On the other hand, WordNet can also be regarded as a thesaurus [@WordNet4], since it records a number of semantic relationships among synsets or their members (called $lemmas$). The most important relationship among synsets in WordNet is the hypernym-hyponym relationship [@Yamada_2009_hyper], which relates a generic term (hypernym) to a specific instance of it (hyponym). In fact, the hypernym-hyponym relationship is quite complicated: it forms a tree-like structure [@WordNet_introduction] whose nodes are the synsets. Only noun and verb synsets in WordNet possess the hypernym-hyponym relationship [@WordNet3]. Since almost all the state-of-the-art Natural Language Processing (NLP) models are built on embeddings [@mikolov2013distributed; @Devlin_BERT], it is desirable to represent the synsets in WordNet by embeddings as well. To be specific, low-dimensional embeddings (dense embeddings) that can preserve the semantic relationships among synsets in WordNet are especially desired, a goal that has not yet been fully realized. Also, the hypernym-hyponym relationship is regarded as the most important relationship in WordNet [@WordNet2]. So, it would be valuable to generate dense embeddings that completely preserve the hypernym-hyponym tree structure for noun and verb synsets in WordNet. Hence, we first design the hypernym intersection similarity (HIS) as the desired measurement, by which the “commonness" and “differences" between two noun or verb synsets are measured according to the intersection of their hypernym sets. Using HIS as labels, we train the synset embeddings with a novel operation other than the inner product, which preserves the HIS measurement (and hence the hypernym-hyponym tree) in the synset embeddings. This training method makes our embedding vector look like a “spectrum of senses". So, we call it the sense spectrum.
After training, the same operation is used to measure the hypernym-hyponym tree structure preserved by sense spectra. In the next section, we shall discuss the related work on creating embeddings for WordNet synsets. Then in Section 3, we shall introduce the architectures of our model. In Section 4, we will describe our implementations and provide experimental results. Then in Section 5, we will provide further discussions on our model. Finally, we will conclude the paper with a brief summary in Section 6. Related Work ============ Roughly speaking, there are two traditional ways to create embeddings for WordNet synsets: One way is to combine the embeddings of the words appearing in the definition or usage examples of a synset, where pre-trained word embeddings from other models are required [@Autoextend]. Synset embeddings created in this way are dense, yet preserve no semantic relationships. Another way is to keep each synset in one unique dimension of the embedding vector, and then create a binary matrix recording the existence (or not) of one specific semantic relationship between any two synsets (two dimensions). Synset embeddings created in this way do preserve the semantic relationships. But these embeddings are high-dimensional several-hot vectors [@Bengio_one_hot_embd], which often lead to over-fitting when used as neural network inputs [@cursedim]. Therefore, many novel methods have been designed to generate dense embeddings that can preserve the semantic relationships in WordNet. Among them, wnet2vec [@WordNet_Embeddings] provides the model that is most similar to ours. Simply speaking, wnet2vec performs Principal Component Analysis (PCA) [@PCA] on the binary matrix recording the semantic relationships to obtain compressed synset vectors. But in this way, the semantic relationships are only implicitly preserved in the compressed vectors, and cannot be measured directly.
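To make the PCA compression step concrete, the following toy sketch builds a small binary relationship matrix and projects its rows onto the top principal components, in the spirit of the wnet2vec approach; the matrix, its size and the target dimension are invented for illustration and do not come from WordNet.

```python
import numpy as np

# Toy binary matrix: rows = synsets, columns = synsets; a 1 at (i, j)
# records some semantic relationship between synsets i and j.
# The entries here are made up for illustration.
R = np.array([
    [0, 1, 1, 0],
    [1, 0, 0, 1],
    [1, 0, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

def pca_compress(M, k):
    """Project the rows of M onto their top-k principal components."""
    centered = M - M.mean(axis=0)        # center each column
    # SVD of the centered matrix; rows of Vt are the principal directions
    U, S, Vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ Vt[:k].T           # one k-dimensional dense vector per row

vecs = pca_compress(R, k=2)              # dense 2-d vector per synset
```

Synsets with identical relationship rows (here rows 0 and 3) map to identical dense vectors, which illustrates the point made above: the relationships survive only implicitly in the compressed coordinates and cannot be read off directly.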
Hence, we hope to design a synset embedding system with a measurement, so that the semantic relationships can be not only preserved in the dense embeddings but also explicitly measured by the measurement. This goal motivates our research on the sense spectrum. Architectures ============= In the first subsection, we shall introduce the proposed HIS measurement. Then, in the next subsection, we shall introduce the three basic similarity measurements in WordNet, which will be used as comparisons to our HIS measurement. After that, the formulas and training algorithms of sense spectra will be given. In addition, we note that it is not very meaningful to compare a noun synset with a verb one. So, whenever we mention “two (noun or verb) synsets $a$ and $b$" in this paper, we assume that either both $a$ and $b$ are noun synsets, or both of them are verb ones. Hypernym intersection similarity -------------------------------- First, we note that WordNet not only provides the direct hypernym for each noun and verb synset, but also provides its $\mathbf{hypernym \ closure}$ [@WordNet_introduction]: Suppose $h_1$ is a direct hypernym of the synset $a$, and $h_2$ is a direct hypernym of $h_1$. Then, the hypernym closure of $a$ will contain both $h_1$ and $h_2$. That is, the hypernym closure consists of “all the hypernyms of all the hypernyms" of the synset $a$, which is denoted as $H_a$ in this paper. For example, if we set synset $a$ to be $man.n.01$ and synset $b$ to be $woman.n.01$, their hypernym closures $H_a$ and $H_b$ are shown in Figure 1: ![(To be viewed in color) The hypernym closures of synsets $man.n.01$ and $woman.n.01$.](1.png){width="7.5cm"} The synset $man.n.01$ denotes the common sense of “man", whose definition in WordNet is “An adult person who is male (as opposed to a woman)". Accordingly, $woman.n.01$ is defined as “an adult female person (as opposed to a man)". They have the same direct hypernym $adult.n.01$.
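With the real database this closure is available through NLTK (e.g. `synset.closure(lambda s: s.hypernyms())`), but the computation can be sketched self-containedly on a toy hypernym tree; the tree below is a heavily simplified, partly invented stand-in for the structure in Figure 1.

```python
# Toy hypernym tree (direct hypernyms only); the synset names mirror the
# man.n.01 / woman.n.01 example but the tree itself is simplified.
direct_hypernym = {
    "man.n.01":    ["male.n.02", "adult.n.01"],
    "woman.n.01":  ["female.n.02", "adult.n.01"],
    "male.n.02":   ["person.n.01"],
    "female.n.02": ["person.n.01"],
    "adult.n.01":  ["person.n.01"],
    "person.n.01": [],
}

def hypernym_closure(synset):
    """H_a: all hypernyms of all hypernyms of `synset` (transitive closure)."""
    closure, stack = set(), list(direct_hypernym[synset])
    while stack:
        h = stack.pop()
        if h not in closure:
            closure.add(h)
            stack.extend(direct_hypernym[h])
    return closure

def hypernym_set(synset):
    """S_a = H_a plus the synset a itself."""
    return hypernym_closure(synset) | {synset}

H_a = hypernym_closure("man.n.01")
H_b = hypernym_closure("woman.n.01")
# Symmetric difference: only male.n.02 and female.n.02 are not shared.
unique = H_a ^ H_b
```

The closures agree everywhere except for `male.n.02` and `female.n.02`, matching the Figure 1 discussion.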
We can see from Figure 1 that all the hypernyms of $man.n.01$ and $woman.n.01$ are the same, except that $male.n.02$ is unique to $man.n.01$ and $female.n.02$ is unique to $woman.n.01$. Hence, the hypernym closures are the key to describing the hypernym-hyponym relationship between two synsets $a$ and $b$. However, we will not use the hypernym closure directly in the HIS measurement. This is because the semantic field of a synset should be smaller than that of its hypernym closure [@Gao_2013_semantic]. Hence, based on the hypernym-hyponym relationship, the precise representation of a synset should be its hypernym closure plus the synset itself, which is the $\mathbf{hypernym \ set}$ $S_a=H_a\cup \{a\}$. We shall
--- abstract: 'Very high energy gamma-rays ($E_\gamma >$ 20 GeV) from blazars traversing cosmological distances through the metagalactic radiation field can convert to electron-positron pairs in photon-photon collisions. The converted gamma rays initiate electromagnetic cascades driven by inverse-Compton scattering off the microwave background photons. The cascades shift the injected gamma ray spectrum to MeV-GeV energies. Randomly oriented magnetic fields rapidly isotropize the secondary electron-positron beams resulting from the beamed blazar gamma ray emission, leading to faint gamma-ray halos. Using a model for the time-dependent metagalactic radiation field consistent with all currently available far-infrared-to-optical data, we compute (i) the expected gamma-ray attenuation in blazar spectra, and (ii) the cascade contribution from faint, unresolved blazars to the extragalactic gamma-ray background as measured by EGRET, assuming a generic emitted spectrum extending to an energy of 10 TeV. The latter cascade contribution to the EGRET background is fed by the assumed $>$20 GeV emission from the hitherto undiscovered sources, and we estimate their $dN/dz$ distribution taking into account that the nearby ($z<0.2$) fraction of these sources must be consistent with the known (low) numbers of sources above 300 GeV.' author: - 'Tanja M. Kneiske' - Karl Mannheim title: 'BL Lac Contribution to the Extragalactic Gamma-Ray Background' --- [ address=[Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany]{} ]{} [ address=[Universitaet Wuerzburg, Am Hubland, 97074 Wuerzburg, Germany]{} ]{} Introduction ============ An isotropic, diffuse background radiation presumably due to faint, unresolved extragalactic sources has been observed in nearly all energy bands. The confirmation of an extragalactic gamma-ray background by EGRET (Energetic Gamma-Ray Experiment Telescope) on board the Compton Gamma Ray Observatory has extended the spectrum up to an energy of $\sim$50 GeV.
A first analysis of the data resulted in a total flux of $(1.45\pm 0.05) \cdot 10^{-5}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ above 100 MeV and a spectrum which could be fitted by a power law with a spectral index of $-2.1\pm 0.03$ [@lit:sreekumar]. These values are strongly dependent on the foreground emission model which is subtracted from the observed intensity to obtain the extragalactic residual [@lit:hunter]. Since, using the old foreground model, a residual GeV halo remained after subtraction (in addition to the isotropic extragalactic background), the foreground model had to be improved. This led to a new analysis of the EGRET data, and a new result for the extragalactic background spectrum, now showing a dip at GeV energies and an overall weaker intensity of $(1.14\pm 0.12) \cdot 10^{-5}$ photons cm$^{-2}$ s$^{-1}$ sr$^{-1}$ [@lit:strong]. This new result can help us to understand the origin of the extragalactic background radiation. Since EGRET detected a large number of extragalactic gamma-ray sources belonging to the blazar class of AGN, a reasonable assumption is that the gamma background is produced by unresolved AGN. Using a gamma-ray luminosity function from EGRET data, Chiang & Mukherjee [@lit:chiang] came to the result that only 25% to 50% of the gamma background could be explained by blazars. [@lit_stecker96] were able to explain 100% of the background, but were facing a problem with the deficit of observed faint, nearby blazars. The new idea which will be presented in this paper is to extend the existing models by assuming a population of BL Lacs with a spectral energy distribution such that their flux at EGRET energies is too low to be generally detected, while their very high energy gamma-ray flux is strong.
Since most of these sources are at redshifts high enough for pair attenuation to take place, a significant part of their VHE emission is reprocessed by cascades contributing to the diffuse background, but not to the single source counts. Throughout the paper we use a Hubble constant of $H_0=71$ km s$^{-1}$ Mpc$^{-1}$ and a flat universe with the cosmological parameters $\Omega=0.3$ and $\Omega_\Lambda=0.7$. ![Pair attenuation optical depth for various redshifts and MRF models. The labeling of the line styles is explained in [@lit:kneiske2]. The crossing point with the line $\tau=1$ defines the exponential cutoff energy.[]{data-label="fig:Tau"}](Tau_bis06_201004){height=".3\textheight"} Gamma-Ray Background ==================== If the gamma-ray background is produced by unresolved sources, it can be described by $$F_{E_\gamma} = \frac{1}{\Omega} \int^{z_m}_0 dz \frac{dV}{dz} \int^{\infty}_{L_{\rm min}} \frac{dN}{dV dL} F_{E_\gamma}(z, L) dL, \ \ \ \ \label{eq:gammaback}$$ with $\Omega$ the solid angle coverage of the survey ($\Omega_{\rm EGRET}=10.4$), $\frac{dV}{dz}$ the volume element, $L_{\rm min}$ the luminosity of the weakest source, $\frac{dN}{dV dL}$ the luminosity function and $F_{E_\gamma}(z, L)$ the flux of the gamma-ray sources, depending on their luminosity and redshift. The luminosity function of resolved EGRET sources, extended to the faint end, has been computed by [@lit:chiang]. We used their model, changing only the spectral index from $\alpha=2.1$ to $\alpha=2.3$. The new spectral index was determined by fitting the reanalyzed EGRET data at $<2.0$ GeV. The remaining excess of the measured gamma-ray background we ascribe to high-energy peaked blazars belonging to the HBL and ExBL classes (defined by [@lit:Ghisellini]). We calculate the flux from these sources using equation \[eq:gammaback\].
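A minimal numerical sketch of the background integral is given below; the volume element, luminosity function and source flux are toy placeholder forms (not those used in the paper), and the integration uses plain midpoint Riemann sums.

```python
import math

# Toy stand-ins for the ingredients of the background integral; none of
# these functional forms are the ones used in the paper.
OMEGA = 10.4                                # survey solid angle coverage

def dV_dz(z):
    return z * z                            # comoving volume element (toy)

def lum_func(L, z):
    return L ** -2.5                        # dN/(dV dL), a toy power law

def source_flux(E, z, L):
    return L * E ** -2.3 * math.exp(-z)     # F_E(z, L): toy attenuated flux

def background_flux(E, z_max=3.0, L_min=1.0, L_max=100.0, n=200):
    """Midpoint Riemann-sum version of the nested z and L integrals."""
    dz = z_max / n
    dL = (L_max - L_min) / n
    total = 0.0
    for i in range(n):
        z = (i + 0.5) * dz
        for j in range(n):
            L = L_min + (j + 0.5) * dL
            total += dV_dz(z) * lum_func(L, z) * source_flux(E, z, L) * dz * dL
    return total / OMEGA

F = background_flux(E=1.0)
```

With the toy $E^{-2.3}$ source spectrum, the integrated background inherits the same falling energy dependence, since the energy factor pulls out of both integrals.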
The spectral energy distribution of these sources between 100 MeV and 10 TeV and their luminosity function (LF) are poorly known, and we have to make some theoretical assumptions for them, which are described in the next sections. ![Spectrum of the extragalactic gamma-ray background (open circles: Sreekumar (1998), filled diamonds: Strong et al. (2004)). The contribution of the HBL component (thick solid line) is compared with the spectrum without any absorption and reemission (thin solid line) and the contribution of the secondary photons only (dashed line)[]{data-label="fig:gammabackHBL"}](Gammaback_HBL){height=".3\textheight"} Template Spectra ================ A number of extragalactic gamma-ray sources have been detected with imaging air-Cherenkov telescopes (Table 6, [@lit:horan]). Four of them (with redshifts $z=0.03, 0.03, 0.129, 0.048$) were bright enough to resolve their spectra in the TeV energy band. The observed spectra are presumably modified by gamma-ray attenuation, i.e. $$F_{\rm obs}(E)=F_{\rm int}(E)\exp[-\tau_{\gamma\gamma}(E,z)]$$ where $\tau_{\gamma\gamma}(E,z)$ is the optical depth for gamma-rays (Fig. \[fig:Tau\]). We used various model parameters for the metagalactic radiation field (MRF) to bracket the range of the unabsorbed (intrinsic) spectra. Depending on the model of the MRF, the intrinsic spectra show turnovers or broad maxima around a few TeV. The intrinsic spectrum of H1426+428, which has a larger redshift (z=0.129), could have a maximum at 10 TeV or higher. We use a mean of the spectra of Mkn501, Mkn421 and ES1959+650 as a template for the HBL-type sources and a spectrum like that of H1426+428 as the template spectrum for the ExBL types. Each template spectrum is modeled using two power laws. The parameters are two spectral indices and the location of the maximum. For HBL the spectral index at low energies is $\alpha=1.7$ and at high energies $\alpha=2.3$, with a maximum at 4 TeV.
The spectral index of the ExBLs is $\alpha=1.2$ with a maximum at 10 TeV. ![Spectrum of the extragalactic gamma-ray background. The contribution of the ExBL component (thick solid line) is compared with the spectrum without any absorption and reemission (thin solid line) and the contribution of the secondary photons only (dashed line)[]{data-label="fig:gammabackExBL"}](Gammaback_ExBL){height=".3\textheight"} The absorption of the primary photons is calculated using the MRF model presented in [@lit:kneiske1], and the reemission is calculated using the radiative transfer equation employing an inverse-Compton emission term due to scattering off the microwave background.
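The two-power-law template together with the pair attenuation $F_{\rm obs}=F_{\rm int}\exp(-\tau_{\gamma\gamma})$ can be sketched as follows; the indices and break energy are the HBL values quoted above, while the overall normalization is arbitrary and $\tau$ is left as an input (in the paper it comes from the MRF model, with $\tau=1$ marking the cutoff energy).

```python
import math

def template(E, alpha_low=1.7, alpha_high=2.3, E_break=4.0):
    """Broken power-law photon spectrum F(E) ~ E^-alpha (E in TeV).
    With these indices the SED E^2 F(E) peaks at E_break = 4 TeV,
    matching the stated location of the maximum."""
    if E <= E_break:
        return E ** -alpha_low
    # match the two branches at the break so the template is continuous
    norm = E_break ** (alpha_high - alpha_low)
    return norm * E ** -alpha_high

def observed(E, tau):
    """F_obs = F_int * exp(-tau); tau would be tau_gg(E, z) from the MRF."""
    return template(E) * math.exp(-tau)
```

At the cutoff energy ($\tau=1$) the observed flux is suppressed by a factor $e^{-1}\approx 0.37$ relative to the intrinsic template.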
--- abstract: 'A Kalman filter can be used to determine material parameters using uncertain experimental data. However, starting with inappropriate initial values for the material parameters might lead to false local attractors or even divergence. Also, inappropriate choices of the covariance errors of the initial state, present state, and measurements might affect the stability of the prediction. The present method suggests a simple way to predict the parameters and errors required to start the Kalman filter, based on known parameters used to generate data with different noise levels that serve as “measurement data". The method consists of two steps. First, an appropriate range of parameter values is chosen based on a graphical representation of the mean square error. Second, the Kalman filter is used based on the selected range and the suggested parameters and errors. The methodology significantly reduces the iteration time and covers a wider range of initial suggested parameter values than the standard Kalman filter. When the methodology is applied to real data, very good results are obtained. The diffusion coefficient of bovine bone is chosen as a case study in this work.' address: - 'Division of Solid Mechanics, Lund University, 22100 Lund, Sweden' - 'Industrial Engineering Department, Fayoum University, 63514 Fayoum, Egypt' author: - Abdallah Shokry - Per Ståhle bibliography: - 'ReferencesMethodology.bib' title: A methodology for using Kalman filter to determine material parameters from uncertain measurements --- Model, Least-Squares, Kalman filter, material parameters, diffusion in bone, uncertain measurements Introduction ============ The Kalman filter is an inverse method to determine variables or parameters from noisy input data, producing output data with less noise. It was first presented by R.E. Kalman [@Kalman(1960)] in 1960.
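For orientation, a minimal scalar sketch of the filter is given below, estimating a single constant parameter from noisy measurements; the variable names $x$, $P$, $K$, $R$, $Q$ follow textbook convention and the numbers are invented for illustration, not taken from this paper.

```python
import random

def kalman_constant(measurements, x0, P0, R, Q=0.0):
    """Minimal scalar Kalman filter estimating a constant parameter.
    x0, P0: initial state estimate and its error covariance;
    R: measurement noise covariance; Q: process noise covariance."""
    x, P = x0, P0
    for z in measurements:
        P = P + Q                   # predict (state model: x stays constant)
        K = P / (P + R)             # Kalman gain
        x = x + K * (z - x)         # update with the innovation z - x
        P = (1.0 - K) * P           # update error covariance
    return x, P

random.seed(0)
true_value = 5.0
zs = [true_value + random.gauss(0.0, 0.5) for _ in range(200)]
est, P = kalman_constant(zs, x0=0.0, P0=10.0, R=0.25)
```

With the constant-state model the gain $K$ shrinks as $P$ falls, so later measurements are weighted less; setting $Q>0$ keeps the filter responsive, which is exactly the kind of tuning choice the methodology below addresses.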
The Kalman filter has the advantage of taking the random noise of the state and the measurements into consideration; it is also an optimal estimator for linear models because it minimizes the mean square error between the estimated and the true state. In addition, it converges quickly. A more complete introduction to the Kalman filter is given by Brown [@Brown(1983)]. The Kalman filter can be found in different updated forms that are used in many different fields such as tracking objects [@Siouris(1997); @Weng(2006); @Antnov(2011)], control systems [@Ahn(2009); @Shi(2009)], and weather forecasting [@Mitchell(2009); @Wu(2010); @Miyoshi(2012)]. The Kalman filter can be used to determine material parameters from uncertain and inaccurate measurements. Aoki et al. [@Aoki(1997)] used the Kalman filter to identify Gurson’s model constants. They found that the accuracy of the parameter prediction is affected by both specimen geometry and measurement type, and that the shape of the tested specimen affects the convergence of the parameters. Also, they noticed that the rate of convergence can be improved by combining measurements of two specimens of different shape. The identification of Gurson-Tvergaard material model parameters via the Kalman filtering technique was studied by Corigliano et al. [@Corigliano(2000)]. They stated that the estimated values of the parameters are in good agreement with those obtained in previous work, but that the initial values suggested for the sought parameters affect the estimated parameters. Nakamura et al. [@Nakamura(2007)] implemented the Kalman filter to determine elastic-plastic anisotropic parameters for thin materials using instrumented indentation. They observed that the initially chosen values for the parameters converged to a specific small area, but not to one point. Also, based on the convergence intensity, the parameters are determined. The same findings were obtained by using the Kalman filter to determine the nonlinear properties of thermally sprayed ceramic coatings [@Nakamura(2007)2]. Bolzon et al.
[@Bolzon(2002)] used the Kalman filter to identify the parameters of a cohesive crack model. They reported that an almost linear correlation between the convergent parameters was found, and that the multiple local minima might be related to using the linear Kalman filter for a non-linear model. Vaddadi et al. [@Vaddadi(2003)] used the Kalman filter to determine critical moisture diffusion parameters for a fiber reinforced composite. They estimated the parameters from the intensity of the convergence, which was found to be consistent with known values. Another study was made by Vaddadi et al. [@Vaddadi(2007)] to determine hygrothermal properties in fiber reinforced composites using the Kalman filter. The parameters are extracted by reading the intensity of the convergence plot. The Kalman filter is an efficient way to filter noisy experimental data for the determination of material parameters. However, the initial parameters suggested for the Kalman filter should be chosen carefully, to avoid false local attractors. Also, the covariance error for the parameter noise is almost always assumed to be zero, which slows the rate of convergence and might lead to more than one intensity area for the predicted parameters. In this study, a methodology will be applied for using the Kalman filter to determine material parameters from uncertain measurements. The methodology starts with a way, based on the mean square error, to choose appropriate initial parameters required for the Kalman filter, followed by a suggested way to choose the covariance errors for both state and measurements. The determination of diffusion coefficients in bovine bone from data generated with different noise levels from known parameters will be applied as a case study. Real measurements will also be used. Methods ======= The Model --------- Assume that an experiment resulted in $N$ measurements obtained at different times, locations, temperatures etc. These are collected in a vector, $z$, with $N$ measurements.
The experimental data may be obtained at different known times, locations, temperatures etc. Measurements and all other data are available *a priori*. In an attempt to predict the measurements, a model $h=h(x)$ is used, with $h$ being a vector of $N$ predictions of observations. Further, $x$ is a vector of $n$ unknown parameters defining the model based on variables such as position, temperature, time, etc. The unknown model parameters may describe the state of the system regarding material, geometry or similar. In the present study, $x$ is limited to parameters describing the material. Measurements always include systematic and non-systematic errors due to instrumentation, indirect observations, gauge sensitivity, irrelevant external influences, and similar. Material parameters are sought, but the experimental method may require state parameters to be determined as well. Further, material parameters contain non-systematic errors due to thermal fluctuations, unstable structural configurations such as mobile dislocations, impurities, inclusions, unstable chemical composition, etc. Also, inevitably, there is a difference between model and reality, since a model never gives an exact description of the physical processes. Under ideal conditions the model would be perfect in the sense that $z=h(x)$. Here, only non-systematic errors or noise are considered. The model is defined for measurement $i$ as $$z=h(\bar{x})+v\,,\label{eq:model-properties}$$ where $v$ is a vector with $N$ errors due to inaccurate measurements $z$. The instant parameter $\bar{x}$ corresponding to the individual measurement $i$ includes noise according to $$\bar{x}=x+w\,,\label{eq:matparameter noise}$$ where $w$ is a vector with $n$ errors caused by the parameter deviations. The elements of $v$ and $w$ are assumed to be uncorrelated. All elements of $w$ and $v$ are supposed to be random, having the same respective stochastic distribution, and for both a vanishing mean value is expected, cf.
[@Brown(1983)]. Assuming that a set of parameters $\hat{x}_{k}$ is an estimate in the neighborhood of $x$, an improved estimate $\hat{x}_{k+1}$ may be obtained by linearizing $h$ using a Taylor series, which gives $$h(x)\approx h(\hat{x}_{k})+\mathbf{H}(\hat{x}_{k})(x-\hat{x}_{k})\,,\label{eq:Taylor}$$ when quadratic and higher order terms of $x$ are neglected. On matrix form the involved variables are $$h(x)\mbox{=}\begin{bmatrix}h^{(1)}\\ \vdots\\ h^{(N)} \end{bmatrix},\mathbf{\ H}(x)\mbox{=}\begin{bmatrix}\dfrac{\partial h^{(1)}}{\partial x_{1}} & \cdots & \dfrac{\partial h^{(1)}}{\partial x_{n}}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial h^{(N)}}{\partial x_{1}} & \cdots & \dfrac{\partial h^{(N)}}{\partial x_{n}} \end{bmatrix},\ x=\begin{bmatrix}x_{1}\\ \vdots\\ x_{n} \end{bmatrix}\,.$$ Here, $\mathbf{H}$ is an $N\times n$ Jacobian matrix. Least-Squares ------------- The system is supposed to be overdetermined, meaning that the number of measurements $N$ exceeds the number of unknown parameters $n$. A least-squares estimate of $x$ minimizes the sum of squared differences between $z$ and $h(x)$ in the neighborhood $x\approx\hat{x}_{k}$ of iteration $k$, starting from an initial guess $x_{0}$. Solutions for non-linear systems (see \[sec:Appendix A\]) may be obtained iteratively. As an example, the Newton-Raphson method applied to these solutions gives the following recursive scheme, $$\hat{x}_{k+1}=\hat{x}_{k}+(\mathbf{H}_{k}^{T}\mathbf{H}_{k})^{-1}\mathbf{H}_{k}^{T}\left(z-h(\hat{x}_{k})\right)\,.$$
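A self-contained sketch of this recursive least-squares scheme for a toy two-parameter exponential model follows; the model, data and starting values are invented for illustration, and pure-Python linear algebra keeps the sketch dependency-free.

```python
import math

# Toy model h_i(x) = x1 * exp(-x2 * t_i) with noise-free synthetic data.
ts = [0.1 * i for i in range(20)]
x_true = (2.0, 0.7)
z = [x_true[0] * math.exp(-x_true[1] * t) for t in ts]

def h(x):
    return [x[0] * math.exp(-x[1] * t) for t in ts]

def jacobian(x):
    # dh/dx1 = exp(-x2 t), dh/dx2 = -x1 t exp(-x2 t)
    return [[math.exp(-x[1] * t), -x[0] * t * math.exp(-x[1] * t)] for t in ts]

def gauss_newton(x, steps=20):
    """x_{k+1} = x_k + (H^T H)^{-1} H^T (z - h(x_k))."""
    for _ in range(steps):
        H = jacobian(x)
        r = [zi - hi for zi, hi in zip(z, h(x))]
        # normal equations (H^T H) dx = H^T r, solved by 2x2 Cramer's rule
        a = sum(row[0] * row[0] for row in H)
        b = sum(row[0] * row[1] for row in H)
        c = sum(row[1] * row[1] for row in H)
        g0 = sum(row[0] * ri for row, ri in zip(H, r))
        g1 = sum(row[1] * ri for row, ri in zip(H, r))
        det = a * c - b * b
        x = (x[0] + (c * g0 - b * g1) / det, x[1] + (a * g1 - b * g0) / det)
    return x

x_hat = gauss_newton((1.0, 1.0))
```

On noise-free data the iteration recovers the generating parameters; with noisy data it converges to the least-squares minimizer instead, which is the situation the Kalman filter treatment above addresses.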
[hep-ph/9609489]{}\
September 1996

Future Precision Measurements of $F_2(x,Q^2)$, $\alpha_S(Q^2)$ and $xg(x,Q^2)$ at HERA\
M. Botje$^a$, M. Klein$^b$, C. Pascaud$^c$\
$^a$ NIKHEF, PO Box 41882, NL-1009 DB Amsterdam, Netherlands\
$^b$ DESY-IfH Zeuthen, Platanenallee 6, D-15738 Zeuthen, Germany\
$^c$ Université de Paris Sud, LAL, F-91405 Orsay, France

> [**Abstract:**]{} The results are presented of a study of the accuracy one may achieve at HERA in measuring the strong coupling constant $\alpha_{s}$ and the gluon distribution $xg(x,Q^{2})$ using future data of the structure function $F_{2}(x,Q^{2})$, which are estimated to be accurate at the few % level over the full accessible kinematic region down to $x \simeq 10^{-5}$ and up to $Q^{2} \simeq 50000$ GeV$^{2}$. The analysis includes simulated proton and deuteron data, and the effect of combining HERA data with fixed target data is discussed.

Introduction
============

Deep inelastic scattering is the ideal place to investigate the quark-gluon interaction.
Previous fixed target experiments have led to very precise tests of Quantum Chromodynamics in the kinematic range of larger $x \geq 0.005$ and lower $Q^{2} \leq 300$ GeV$^{2}$. The first few years of experimentation at HERA extended this range to very low $x \simeq 0.0001$ and large $Q^{2} \simeq 3000$ GeV$^{2}$, leading to remarkable results in the investigation of deep inelastic scattering [@mk; @ry], including already rather accurate measurements of the proton structure function $F_{2}(x,Q^{2})$. In this study an attempt has been made to estimate the accuracy of future measurements of $F_{2}$ at HERA and their possible impact on precision measurements of the strong coupling constant $\alpha_{s}(Q^{2})$ and the gluon distribution $xg(x,Q^{2})$. The measurement of these quantities is a key task at HERA. Both can be determined in a number of different processes such as deep inelastic jet production, charm and $J/\psi$ production, and with future measurements of the longitudinal structure function. The measurement of $F_{2}$, however, is expected to be the most precise way to determine $\alpha_{s}$ and $xg$ from the scaling violations of $F_{2}$. These are most prominent at very low $x$, due to quark pair production from the gluon field, and weaker at large $x \geq 0.1$, due to gluon bremsstrahlung. Both processes, and their NLO corrections, will be accessible with future high statistics data at HERA, which is hoped to deliver a final luminosity figure near ${\cal L} \simeq 1$ fb$^{-1}$ during the next 8 years of operation.
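The kinematic reach quoted here follows from the standard DIS relations $s=4E_eE_p$ (beam masses neglected) and $Q^2=sxy$; the small sketch below assumes HERA's nominal beam energies of 27.5 GeV electrons on 820 GeV protons and applies only the $Q^2$ and $y$ cuts quoted later in the text, as illustration.

```python
# Standard DIS kinematics: s = 4 E_e E_p (masses neglected), Q^2 = s x y.
def s_cm(E_e=27.5, E_p=820.0):
    """Squared center-of-mass energy in GeV^2 for given beam energies."""
    return 4.0 * E_e * E_p

def y_of(x, Q2, s=s_cm()):
    """Inelasticity y for given Bjorken x and Q^2 (GeV^2)."""
    return Q2 / (s * x)

def accessible(x, Q2, Q2_min=1.0, y_max=0.8):
    """Apply the Q^2 >= 1 GeV^2 and y <= 0.8 cuts (a subset of the full
    acceptance criteria; the angular cuts are omitted here)."""
    return Q2 >= Q2_min and y_of(x, Q2) <= y_max
```

With these beam energies $\sqrt{s}\approx 300$ GeV, so at fixed $Q^2$ the $y\leq 0.8$ cut sets the lowest reachable $x$, which is how values down to $x\simeq 10^{-5}$ at low $Q^2$ arise.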
The QCD analysis of the past and present $F_{2}$ structure function data has already led to remarkable results, more than can be listed here: - [ A rather precise determination of $\alpha_{s}(Q^{2})$ with an experimental error of 0.003 at $Q^{2}=M_{Z}^{2}$ was performed using the SLAC and the BCDMS structure function data [@MARCALAIN].]{} - [ Both H1 [@h1f; @h1g] and ZEUS [@zeusf; @ZEUSQCD] have determined the gluon distribution with about 15% accuracy at $Q^{2} = 20$ GeV$^{2}$ and $x \simeq 10^{-4}$ by using different sets of fixed target data [@SLAC; @BCDMS; @NMC] combined with the HERA results]{}. - [The HERA deep inelastic structure function data have a big impact on global analyses and the determination of parton distributions [@trv].]{} The analysis presented in this paper will show that HERA will make it possible to determine $\alpha_{s}$ and $xg$ at the 1% level. This represents a challenge to the theoretical understanding of deep inelastic scattering in perturbative QCD in the low $x$ and low $Q^{2} \sim M_{p}^{2}$ region. A precision measurement of the strong coupling constant will represent an important constraint on unified theories. As such it represents one fundamental reason to perform an extended long term programme of experimentation at HERA. This paper is organized as follows. Section 2 presents the assumptions and the results of the simulation of $F_{2}$ structure function data. Section 3 contains the outline of the QCD analysis procedure and the error treatment required for the analysis. The results of a detailed study of the $\alpha_{s}$ measurement accuracy are given in section 4. Similarly the determination of the gluon distribution is presented in section 5. A brief summary is given in section 6.
Accuracy of Future HERA Structure Function Data =============================================== Recent measurements of the proton structure function $F_{2}(x,Q^{2})$ by the H1 and ZEUS collaborations [@h1f; @zeusf], based on data taken in 1994 with an integrated luminosity $\cal L$ of about 3 pb$^{-1}$, have reached a systematic error level of about 4-5% in the bulk region of the data, $10 \leq Q^{2} \leq 100$ GeV$^{2}$. Exploratory measurements of the very low $Q^{2}$ region with about 15-20% accuracy were presented by H1 with 1995 shifted vertex data [@h1w] and by ZEUS using a rear calorimeter installed near the beam pipe in the backward direction [@zeusp]. Based on the experience of these analyses, a study has been made in order to estimate what might be the ultimate accuracy of $F_{2}$ measurements at HERA. This is a difficult task: on the one hand, one can rather easily extrapolate the present knowledge of systematic errors and also calculate rather straightforwardly the effect of residual miscalibrations on the cross section measurement. On the other hand, there will always be additional local, detector-dependent effects and, furthermore, one cannot simulate the results to be expected from innovations in the structure function analyses. For example, it is likely that a low electron energy calibration, much below the kinematic peak, can be performed by reconstructing the $\pi^{0}$ mass or, to give another example, the region of $y$ below 0.01, which was considered not to be accessible due to calorimetric noise, may be accessed nevertheless by imposing a $p_{T}$ balance constraint using the electron information. Therefore this simulation study may give valid estimates, but the truth will be the result of data taking and analysis work over many years still to come. 
For this analysis the following kinematic constraints have been imposed: - [$Q^{2} \geq 1$ GeV$^{2}$, which may be the limit of applicability of the DGLAP evolution equations at low $x$ [@mrs];]{} - [ $\theta_{e} \leq 177^{o}$, which might be accessible with nominal energy running even after the luminosity upgrade;]{} - [$y \leq 0.8$, a limit arising from large radiative corrections, and a small scattered electron energy limit $E_{e}' \geq $ few GeV due to photoproduction background and electron identification limitations;]{} - [$\theta_{h} \geq 8^{o}$, a hadron reconstruction limit imposed by the beam pipe, which may finally differ somewhat.]{} A number of data sets was generated, as summarized in table \[tab1\] and illustrated in fig.1. The maximum $Q^2$ of the data depends on the available luminosity and might reach values of up to 50000 GeV$^{2}$. The generation and systematic error calculation were performed with a numerical program written by one of us, which was checked to be in good agreement with the Monte Carlo programs used for real data analyses. (Table \[tab1\] lists, for each data set, the nucleon type, the beam energies $E_{e}$ and $E_{N}$, the luminosity ${\cal L}$ in pb$^{-1}$, and the range $Q^{2}_{min}$ to $Q^{2}_{max}$.)
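The $Q^{2}$ and $y$ constraints above can be sketched in code. The snippet below is a simplified illustration, not the simulation program of the paper: the HERA beam energies ($E_e = 27.5$ GeV on $E_p = 820$ GeV) and the grid are assumptions of this example, the angular cuts are omitted, and the inelasticity is taken from the leading-order relation $Q^{2} = s\,x\,y$.

```python
import numpy as np

# Assumed HERA beam energies (27.5 GeV electrons on 820 GeV protons);
# these numbers are an assumption of this sketch, not taken from the text.
E_e, E_p = 27.5, 820.0
s = 4.0 * E_e * E_p  # centre-of-mass energy squared in GeV^2 (massless approx.)

def passes_cuts(x, Q2, Q2_min=1.0, y_max=0.8):
    """Apply the Q^2 and y constraints listed above; angular cuts omitted."""
    y = Q2 / (s * x)  # inelasticity from Q^2 = s*x*y
    return (Q2 >= Q2_min) and (y <= y_max)

# Scan a logarithmic (x, Q^2) grid and count accepted points
xs = np.logspace(-5, -1, 40)
Q2s = np.logspace(0, 4, 40)
accepted = [(x, Q2) for x in xs for Q2 in Q2s if passes_cuts(x, Q2)]
print(f"{len(accepted)} of {len(xs) * len(Q2s)} grid points pass the cuts")
```

For instance, the point $(x, Q^{2}) = (10^{-4}, 10\ \mathrm{GeV}^{2})$ fails the $y \leq 0.8$ cut at these beam energies, while $(10^{-3}, 10\ \mathrm{GeV}^{2})$ passes.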
--- abstract: 'Gaussian mixture modeling is a fundamental tool in clustering, as well as discriminant analysis and semiparametric density estimation. However, estimating the optimal model for any given number of components is an NP-hard problem, and estimating the number of components is in some respects an even harder problem. In R, a popular package called `mclust` addresses both of these problems. However, Python has lacked such a package. We therefore introduce `AutoGMM`, a Python algorithm for automatic Gaussian mixture modeling. `AutoGMM` builds upon `scikit-learn`’s `AgglomerativeClustering` and `GaussianMixture` classes, with certain modifications to make the results more stable. Empirically, on several different applications, `AutoGMM` performs approximately as well as `mclust`. This algorithm is freely available and therefore further shrinks the gap between functionality of R and Python for data science.' author: - 'Thomas L. Athey' - 'Joshua T. Vogelstein$^,$' bibliography: - 'refs.bib' title: 'AutoGMM: Automatic Gaussian Mixture Modeling in Python' --- Introduction ============ Clustering is a fundamental problem in data analysis where a set of objects is partitioned into clusters according to similarities between the objects. Objects within a cluster are similar to each other, and objects across clusters are different, according to some criteria. Clustering has its roots in the 1960s [@cluster_og1; @cluster_og2], but is still researched heavily today [@cluster_review; @jain]. Clustering can be applied to many different problems such as separating potential customers into market segments [@cluster_market], segmenting satellite images to measure land cover [@cluster_satellite], or identifying when different images contain the same person [@cluster_face]. A popular technique for clustering is Gaussian mixture modeling. In this approach, a Gaussian mixture is fit to the observed data via maximum likelihood estimation. 
The flexibility of the Gaussian mixture model, however, comes at the cost of hyperparameters that can be difficult to tune, and model assumptions that can be difficult to choose [@jain]. If users make assumptions about the model’s covariance matrices, they risk inappropriate model restriction. On the other hand, relaxing covariance assumptions leads to a large number of parameters to estimate. Users are also forced to choose the number of mixture components and how to initialize the estimation procedure. This paper presents `AutoGMM`, a Gaussian mixture model based algorithm implemented in Python that automatically chooses the initialization, the number of clusters, and the covariance constraints. Inspired by the `mclust` package in R [@mclust5], our algorithm iterates through different clustering options and cluster numbers and evaluates each according to the Bayesian Information Criterion. The algorithm starts with agglomerative clustering, then fits a Gaussian mixture model with a dynamic regularization scheme that discourages singleton clusters. We compared the algorithm to `mclust` on several datasets, and they perform similarly. Background ========== Gaussian Mixture Models ----------------------- The most popular statistical model of clustered data is the Gaussian mixture model (GMM). A Gaussian mixture is simply a composition of multiple normal distributions. Each component has a “weight”, $w_k$: the proportion of the overall data that belongs to that component. Therefore, the combined probability distribution, $f(x)$, is of the form: $$f(x) = \sum_{k=1}^{K} w_k f_k(x) = \sum_{k=1}^{K} \frac{w_k}{(2\pi)^{\frac{d}{2}}|\Sigma_k|^{\frac{1}{2}}}\exp \left \{ -{\frac{1}{2}(x-\mu_k)^T\Sigma_k^{-1}(x-\mu_k)} \right\}$$ where $K$ is the number of components and $d$ is the dimensionality of the data. 
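The mixture density can be evaluated directly from its definition. The following sketch (the two-component parameters are illustrative assumptions of this example, not taken from the paper) checks a plain NumPy implementation against `scipy.stats.multivariate_normal`:

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_pdf(x, weights, means, covs):
    """Evaluate a Gaussian mixture density f(x) = sum_k w_k N(x; mu_k, Sigma_k)."""
    total = 0.0
    for w, mu, S in zip(weights, means, covs):
        d = len(mu)
        diff = x - mu
        norm = (2 * np.pi) ** (-d / 2) * np.linalg.det(S) ** (-0.5)
        total += w * norm * np.exp(-0.5 * diff @ np.linalg.solve(S, diff))
    return total

# Two-component mixture in 2-D (illustrative parameters)
weights = [0.3, 0.7]
means = [np.zeros(2), np.array([3.0, 1.0])]
covs = [np.eye(2), np.array([[2.0, 0.5], [0.5, 1.0]])]

x = np.array([1.0, 0.5])
direct = gmm_pdf(x, weights, means, covs)
reference = sum(w * multivariate_normal(mu, S).pdf(x)
                for w, mu, S in zip(weights, means, covs))
print(direct, reference)  # the two evaluations agree
```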
The maximum likelihood estimate (MLE) of Gaussian mixture parameters cannot be directly computed, so the Expectation-Maximization (EM) algorithm is typically used to estimate model parameters [@mclachlan]. The EM algorithm is guaranteed to monotonically increase the likelihood with each iteration [@em]. A drawback of the EM algorithm, however, is that it can produce singular covariance matrices if not adequately constrained. The computational complexity of a single EM iteration with respect to the number of data points is $O(n)$. After running EM, the fitted GMM can be used to “hard cluster” data by calculating which mixture component was most likely to produce a data point. Soft clusterings of the data are also available upon running the EM algorithm, as each point is assigned a weight corresponding to all $K$ components. To initialize the EM algorithm, typically all points are assigned a cluster, which is then fed as input into the M-step. The key question in the initialization then becomes how to initially assign points to clusters. Initialization -------------- ### Random The simplest way to initialize the EM algorithm is by randomly choosing data points to serve as the initial mixture component means. This method is simple and fast, but different initializations can lead to drastically different results. In order to alleviate this issue, it is common to perform random initialization and subsequent EM several times, and choose the best result. However, there is no guarantee the random initializations will lead to satisfactory results, and running EM many times can be computationally costly. ### K-Means Another strategy is to use the k-means algorithm to initialize the mixture component means. K-means is perhaps the most popular clustering algorithm [@jain], and it seeks to minimize the squared distance within clusters. The k-means algorithm is usually fast, since the computational complexity of performing a fixed number of iterations is $O(n)$ [@cluster_review]. 
K-means itself needs to be initialized, and k-means++ is a principled choice, since it bounds the k-means cost function [@kmeans++]. Since there is randomness in k-means++, running this algorithm on the same dataset may result in different clusterings. `GraSPy`, a Python package for graph statistics, performs EM initialization this way in its `GaussianCluster` class. ### Agglomerative Clustering Agglomerative clustering is a hierarchical technique that starts with every data point as its own cluster. Then, the two closest clusters are merged until the desired number of clusters is reached. In `scikit-learn`’s `AgglomerativeClustering` class, “closeness” between clusters can be quantified by L1 distance, L2 distance, or cosine similarity. Additionally, there are several linkage criteria that can be used to determine which clusters should be merged next. Complete linkage, which merges clusters according to the maximally distant data points within a pair of clusters, tends to find compact clusters of similar size. On the other hand, single linkage, which merges clusters according to the closest pairs of data points, is more likely to result in unbalanced clusters with more variable shape. Average linkage merges according to the average distance between points of different clusters, and Ward linkage merges clusters that cause the smallest increase in within-cluster variance. All four of these linkage criteria are implemented in `AgglomerativeClustering` and further comparisons between them can be found in @everitt. The computational complexity of agglomerative clustering can be prohibitive in large datasets [@xu]. Naively, agglomerative clustering has computational complexity of $\mathcal{O}(n^3)$. However, algorithmic improvements have improved this upper bound [@hclust_eff]. @scikit-learn uses minimum spanning tree and nearest neighbor chain methods to achieve $\mathcal{O}(n^2)$ complexity. 
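The four linkage criteria can be compared directly with `scikit-learn`. The snippet below is an illustrative sketch; the two-blob dataset and its parameters are assumptions of this example, not taken from the paper:

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

# Two well-separated blobs; illustrative data, not from the paper
X, y_true = make_blobs(n_samples=200, centers=[[0, 0], [10, 10]],
                       cluster_std=0.5, random_state=0)

for linkage in ("ward", "complete", "average", "single"):
    labels = AgglomerativeClustering(n_clusters=2, linkage=linkage).fit_predict(X)
    # Agreement with the generating labels, up to the 0/1 label permutation
    agree = max(np.mean(labels == y_true), np.mean(labels != y_true))
    print(f"{linkage:9s} agreement: {agree:.2f}")
```

On data this cleanly separated all four linkages recover the two blobs; the differences the text describes (compact versus elongated clusters) show up on less idealized data.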
Efforts to make faster agglomerative methods involve novel data structures [@birch] and cluster summary statistics [@cure], which approximate standard agglomeration methods. The algorithm in @mclust5 caps the number of data points on which it performs agglomeration at some number $N$. If the number of data points exceeds $N$, then it agglomerates a random subset of $N$ points, and uses those results to initialize the M-step. So as $n$ increases beyond this cap, the computational complexity of agglomeration remains constant with respect to $n$. Covariance Constraints ---------------------- There are many possible constraints that can be made on the covariance matrices in Gaussian mixture modeling [@constraints; @mclust5]. Constraints lower the number of parameters in the model, which can reduce overfitting, but can introduce unnecessary bias. `scikit-learn`’s `GaussianMixture` class implements four covariance constraints (see Table \[tab:constraints\]).

  Constraint name   Equivalent model in `mclust`   Description
  ----------------- ------------------------------ ----------------------------------------------------------------
  Full              VVV                            Covariances are unconstrained and can vary between components.
  Tied              EEE                            All components have the same, unconstrained, covariance.
  Diag              VVI                            Covariances are diagonal and can vary between components.
  Spherical         VII                            Covariances are spherical and can vary between components.

Automatic Model Selection ------------------------- When clustering data, the user must decide how many clusters to use. In Gaussian mixture modeling, this cannot be done with the typical likelihood ratio test approach because mixture models do not satisfy regularity conditions [@mclachlan]. One approach to selecting the number of components is to use a Dirichlet process model [@rasmussen; @ferguson]. The Dirichlet process is
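A search over covariance constraints and component numbers, scored by BIC, can be sketched with `scikit-learn`'s `GaussianMixture`. This is a minimal illustration of that selection loop, not the `AutoGMM` implementation itself; the synthetic dataset and search ranges are assumptions of this example:

```python
import numpy as np
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

# Synthetic data with three well-separated clusters (illustrative only)
X, _ = make_blobs(n_samples=300, centers=[[0, 0], [8, 8], [0, 8]],
                  cluster_std=0.7, random_state=1)

best = None  # (bic, covariance constraint, number of components)
for cov_type in ("full", "tied", "diag", "spherical"):
    for k in range(1, 7):
        gm = GaussianMixture(n_components=k, covariance_type=cov_type,
                             random_state=1).fit(X)
        bic = gm.bic(X)
        if best is None or bic < best[0]:
            best = (bic, cov_type, k)

print("selected:", best[1], "with", best[2], "components")
```

On this data the loop recovers three components; `AutoGMM` additionally varies the agglomerative initialization and applies its regularization scheme.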
--- abstract: 'Let $\K:=\{\x: g(\x)\leq 1\}$ be the compact sub-level set of some homogeneous polynomial $g$. Assume that the only knowledge about $\K$ is the degree of $g$ as well as the moments of the Lebesgue measure on $\K$ up to order $2d$. Then the vector of coefficients of $g$ is the solution of a simple linear system whose associated matrix is nonsingular. In other words, the moments up to order $2d$ of the Lebesgue measure on $\K$ encode all information on the homogeneous polynomial $g$ that defines $\K$ (in fact, only moments of order $d$ and $2d$ are needed).' author: - 'Jean B. Lasserre' title: Recovering an homogeneous polynomial from moments of its level set --- Introduction ============ The inverse problem of reconstructing a geometrical object $\K\subset\R^n$ from the only knowledge of moments of some measure $\mu$ whose support is $\K$ is a fundamental problem in both applied and pure mathematics, with important applications in e.g. computed tomography, inverse potentials, signal processing, and statistics and probability, to cite a few. In computed tomography, for instance, the X-ray images of an object can be used to estimate the moments of the underlying mass distribution, from which one seeks to recover the shape of the object that appears on some given images. In gravimetry applications, the measurements of the gravitational field can be converted into information concerning the moments, from which one seeks to recover the shape of the source of the anomaly. Of course, [*exact*]{} reconstruction of objects $\K\subset\R^n$ is in general impossible unless $\K$ has very specific properties. For instance, if $\K$ is a convex polytope then exact recovery of all its vertices has been shown to be possible via a variant of what is known as [*Prony's*]{} method. Only a rough bound on the number of vertices is required and relatively few moments suffice for exact recovery. 
For more details the interested reader is referred to the recent contribution of Gravin et al. [@gravin] and the references therein. On the other hand, Cuyt et al. [@cuyt] have shown that [*approximate*]{} recovery of a general $n$-dimensional shape is possible by using an interesting property of multi-dimensional Padé approximants, analogous to the Fourier slice theorem for the Radon transform. Contribution {#contribution .unnumbered} ------------ From previous contributions and their references, it is clear that exact recovery of an $n$-dimensional shape is a difficult problem that can be solved only in a few cases. And so identifying such cases is of theoretical and practical interest. The goal of this paper is to identify one such case, as we show that exact recovery is possible when $\K\subset\R^n$ is the (compact) sublevel set $\{\x\in\R^n \,:\,g(\x)\leq 1\}$ associated with an homogeneous polynomial $g$. By exact recovery we mean recovery of [*all*]{} coefficients of the polynomial $g$. In fact, exact recovery is not only possible but rather straightforward, as it suffices to solve a linear system with a nonsingular matrix! Moreover, only moments of order $d$ and $2d$ of the Lebesgue measure on $\K$ are needed. As already mentioned, exact recovery is possible only if $\K$ has very specific properties and indeed, crucial in the proof is a property of level sets associated with homogeneous polynomials (and in fact, also true for level sets of positively homogeneous nonnegative functions). Main result =========== Notation and definitions ------------------------ Let $\R[\x]$ be the ring of polynomials in the variables $\x=(x_1,\ldots,x_n)$ and let $\R[\x]_d$ be the vector space of polynomials of degree at most $d$ (whose dimension is $s(d):={n+d\choose n}$). 
For every $d\in\N$, let $\N^n_d:=\{\alpha\in\N^n:\vert\alpha\vert \,(=\sum_i\alpha_i)=d\}$, and let $\v_d(\x)=(\x^\alpha)$, $\alpha\in\N^n$, be the vector of monomials of the canonical basis $(\x^\alpha)$ of $\R[\x]_{d}$. Denote by $\s_k$ the space of $k\times k$ real symmetric matrices with scalar product $\langle \B,\C\rangle={\rm trace}\,(\B\C)$; also, the notation $\B\succeq0$ (resp. $\B\succ0$) stands for $\B$ is positive semidefinite (resp. positive definite). A polynomial $f\in\R[\x]_d$ is written $$\x\mapsto f(\x)\,=\,\sum_{\alpha\in\N^n}f_\alpha\,\x^\alpha,$$ for some vector of coefficients $\f=(f_\alpha)\in\R^{s(d)}$. A real-valued polynomial $g:\R^n\to\R$ is homogeneous of degree $d$ ($d\in\N$) if $g(\lambda\x)=\lambda^dg(\x)$ for all $\lambda\in\R$ and all $\x\in\R^n$. Given $g\in\R[\x]$, denote by $G\subset\R^n$ the sublevel set $\{\x\,:\,g(\x)\leq 1\}$. If $g$ is homogeneous then $G$ is compact only if $g$ is nonnegative on $\R^n$ (and so $d$ is even). Indeed suppose that $g(\x_0)<0$ for some $\x_0\in\R^n$; then by homogeneity, $g(\lambda \x_0)<0$ for all $\lambda>0$ and so $G$ contains a half-line and cannot be compact. Main result ----------- The main result is based on the following result of independent interest valid for positively homogeneous functions (and not only homogeneous polynomials). A function $f:\R^n\to\R$ is positively homogeneous of degree $d\in\R$ if $f(\lambda\x)=\lambda^df(\x)$ for all $\lambda>0$ and all $\x\in\R^n$. Let $f:\R^n\to\R$ be a measurable, positively homogeneous and nonnegative function of degree $0<d\in\R$, with bounded level set $\{\x\,:\,f(\x)\leq 1\}$. Then for every $k\in\N$ and $\alpha\in\N^n$: $$\label{lem1-1} \int_{\{\x\,:\,f(\x)\leq 1\}}\x^\alpha\,f(\x)^k\,d\x\,=\,\frac{n+\vert\alpha\vert}{n+kd+\vert\alpha\vert}\,\int_{\{\x\,:\,f(\x)\leq 1\}}\,\x^\alpha\,d\x.\\$$ To prove (\[lem1-1\]) we use an argument already used in Morosov and Shakirov [@morosov1; @morosov2]. 
With $\alpha\in\N^n$, let $\talpha:=(\alpha_2,\ldots,\alpha_n)\in\N^{n-1}$ and define $\z:=(z_2,\ldots,z_n)$. Let $\phi:\R_+\to\R$ be measurable and consider the integral $\int_{\R^n}\phi(f(\x))\,\x^\alpha d\x$. Using the change of variable $x_1=t$ and $x_i=tz_i$ for all $i=2,\ldots,n$, and invoking homogeneity, one obtains: $$\begin{aligned} \int_{\R^n}\phi(f(\x))\,\x^\alpha\,d\x&=&\int_{\R^n}\phi(t^df(1,z_2,\ldots,z_n))\,t^{n+\vert\alpha\vert-1}\z^{\talpha}\,d(t,\z)\\ &=&d^{-1}\left(\int_0^\infty u^{(n+\vert\alpha\vert)/d-1}\phi(u)\,du\right)\times A_\alpha\\ \mbox{with $A_\alpha$}&=&\int_{\R^{n-1}}\z^{\talpha}f(1,\z)^{-(n+\vert\alpha\vert)/d}\,d\z.\end{aligned}$$ Hence the choices $t\mapsto \phi(t):={\rm I}_{[0,1]}(t)$ and $t\mapsto \phi(t):=t^k {\rm I}_{[0,1]}(t)$ yield $$\begin{aligned} d\int_{\{\x\,:\,f(\x)\leq 1\}}\x^\alpha d\x&=&A_\alpha\int_0^1u^{(n+\vert\alpha\vert)/d-1}\,du=\frac{A_\alpha d}{n+\vert\alpha\vert}\\ d\int_{\{\x\,:\,f(\x)\leq 1\}}f(\x)^k\,\x^\alpha d\x&=&A_\alpha\int_0^1u^{(n+kd+\vert\alpha\vert)/d-1}\,du=\frac{A_\alpha d}{n+kd+\vert\alpha\vert},\end{aligned}$$ and taking the ratio of these two identities yields (\[lem1-1\]).
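The recovery result can be checked numerically on a small instance. The sketch below is an illustration added here, not taken from the paper: it recovers $g(x,y)=x^2+y^2$ (so $n=2$, $d=2$ and the sublevel set is the unit disk) by solving the linear system that the identity (\[lem1-1\]) with $k=1$ imposes on the degree-$d$ coefficients, using exact closed-form disk moments of orders $d$ and $2d$.

```python
import numpy as np
from math import gamma

# Exact Lebesgue moments of the unit disk {x^2 + y^2 <= 1}:
# m(a,b) = 0 for odd a or b, else 2*B((a+1)/2, (b+1)/2)/(a+b+2).
def beta(p, q):
    return gamma(p) * gamma(q) / gamma(p + q)

def disk_moment(a, b):
    if a % 2 or b % 2:
        return 0.0
    return 2.0 * beta((a + 1) / 2, (b + 1) / 2) / (a + b + 2)

# Recover g(x,y) = x^2 + y^2 (n = 2, d = 2) from moments of order d and 2d.
# Unknowns: coefficients of the degree-2 monomials x^2, xy, y^2.
monos = [(2, 0), (1, 1), (0, 2)]
M = np.array([[disk_moment(a1 + a2, b1 + b2) for (a2, b2) in monos]
              for (a1, b1) in monos])
n, d = 2, 2
rhs = np.array([(n + a + b) / (n + d + a + b) * disk_moment(a, b)
                for (a, b) in monos])
g = np.linalg.solve(M, rhs)
print(np.round(g, 10))  # recovers [1, 0, 1]
```

The moment matrix here is nonsingular, as the main result asserts, and the solver returns the coefficient vector $(1,0,1)$ of $x^2+y^2$ exactly (up to rounding).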
--- author: - | **Hamed Aslani$^\dag$, Davod Khojasteh Salkuyeh$^\ddag$[^1], Fatemeh Panjeh Ali Beik$^\S$\ *[$^\dag$Faculty of Mathematical Sciences, University of Guilan, Rasht, Iran]{}*\ *[$^\ddag$Faculty of Mathematical Sciences, and Center of Excellence for Mathematical Modelling,]{}*\ *[Optimization and Combinational Computing (MMOCC), University of Guilan, Rasht, Iran]{}*\ *[$^\S$Department of Mathematics, Vali-e-Asr University of Rafsanjan, P.O. Box 518, Rafsanjan, Iran]{}*\ ** title: '**On the preconditioning of three-by-three block saddle point problems**' --- \ [**Abstract.**]{} We establish a new iterative method for solving a class of large and sparse linear systems of equations with three-by-three block coefficient matrices having saddle point structure. Convergence properties of the proposed method are studied in detail and its induced preconditioner is examined for accelerating the convergence speed of the generalized minimal residual (GMRES) method. More precisely, we analyze the eigenvalue distribution of the preconditioned matrix. Numerical experiments are reported to demonstrate the effectiveness of the proposed preconditioner.\ [*Keywords*]{}: [iterative methods, sparse matrices, saddle point, convergence, preconditioning, Krylov methods. ]{}\ [*AMS Subject Classification*]{}: 65F10, 65F50, 65F08.\ \ Introduction ============ Consider the following three-by-three block system of linear equations, $$\label{eq1} \mathcal{A} {\bf x} \equiv\left(\begin{array}{ccc} {A} & {B^{T}} & {0} \\ {B} & {0} & {C^{T}} \\ {0} & {C} & {0} \end{array}\right)\left(\begin{array}{l} {x} \\ {y} \\ {z} \end{array}\right)=\left(\begin{array}{l} {f} \\ {g} \\ {h} \end{array}\right),$$ where $A\in \mathbb{R}^{n\times n}$, $B\in \mathbb{R}^{m\times n}$, $C\in \mathbb{R}^{l\times m}$, $f\in \mathbb{R}^n$, $g\in \mathbb{R}^m$ and $h\in \mathbb{R}^l$ are known, and ${\bf x}=\left(x; y; z\right)$ is an unknown vector to be determined. 
Here, the <span style="font-variant:small-caps;">Matlab</span> symbol $(x;y;z)$ is utilized to denote the vector $(x^{T},y^{T},z^{T})^{T}.$ In the sequel, we assume that $A$ is a symmetric positive definite matrix and that the matrices $B$ and $C$ have full row rank. These assumptions guarantee the existence of a unique solution of the system; see [@A2] for further details. Evidently, the matrix $\cal A$ can be regarded as a $2\times 2$ block matrix using the following partitioning strategy, $$\label{part} \mathcal{A} = \left( {\begin{array}{cc|c} A & {B^T } & {0} \\ B & 0 & C^T \\ \hline 0 & C & 0 \\ \end{array}} \right).$$ As seen, the above block matrix has a saddle point structure. Hence, we refer to the system as a three-by-three block saddle point problem. Linear systems of this form arise in many practical scientific and engineering applications, e.g., discrete finite element methods for solving the time-dependent Maxwell equation with discontinuous coefficients [@A3; @A4; @A5; @A6], least squares problems [@A7], the Karush-Kuhn-Tucker (KKT) conditions of a type of quadratic program [@A8], and so on. Since the matrices $A,$ $B$ and $C$ are large and sparse, the solution of such systems is best computed by iterative methods. In practice, stationary iterative methods may converge too slowly or fail to converge. For this reason they are usually combined with acceleration schemes, like Krylov subspace methods [@A9]. Here, we focus on preconditioned Krylov subspace methods, especially the preconditioned GMRES method. As seen, the coefficient matrix $\mathcal{A}$ can be considered in the two-by-two block form given above. This observation has been used in the literature for constructing preconditioners to improve the convergence speed of Krylov subspace methods, such as block triangular preconditioners [@A11; @A12; @A13], shift-splitting preconditioners [@A14] and parameterized preconditioners [@A15]. 
Recently, Huang and Ma [@A1] proposed the following block diagonal preconditioner, $$\label{eq992} \mathcal{P}_{D}=\left(\begin{array}{ccc} {A} & {0} & {0} \\ {0} & {S} & {0} \\ {0} & {0} & {C S^{-1} C^{T}} \end{array}\right),$$ in which $S=B A^{-1} B^{T}.$ They also derived all the eigenpairs of the preconditioned matrix. Xie and Li [@A2] presented the following three preconditioners $${\cal P}_1=\begin{pmatrix} A & 0 & 0 \\ B & -S & C^T \\ 0 & 0 & CS^{-1}C^T \end{pmatrix},~ {\cal P}_2=\begin{pmatrix} A & 0 & 0 \\ B & -S & C^T \\ 0 & 0 & -CS^{-1}C^T \end{pmatrix},~ {\cal P}_3=\begin{pmatrix} A & B^T & 0 \\ B & -S & 0 \\ 0 & 0 & -CS^{-1}C^T \end{pmatrix},$$ and analyzed the spectral properties of the corresponding preconditioned matrices in the case $S=BA^{-1}B^T$. The numerical results reported in [@A2] show that the above preconditioners can significantly improve the convergence speed of the GMRES method. It can be observed that the preconditioner ${\cal P}_1$ outperforms the other preconditioners in terms of both the required CPU time and the number of iterations for convergence. Here, we consider the following equivalent form of the system: $$\label{eq111} {\cal B}{\bf x}\equiv \left(\begin{array}{ccc} {A} & {B^{T}} & {0} \\ -{B} & {0} & -{C^{T}} \\ {0} & {C} & {0} \end{array}\right)\left(\begin{array}{l} {x} \\ {y} \\ {z} \end{array}\right)=\left(\begin{array}{l} {f} \\ {-g} \\ {h} \end{array}\right)=\bf{b}.$$ Although the coefficient matrix of the system is not symmetric, it has some desirable properties. For instance, the matrix [$\mathcal{B}$]{} is positive semidefinite, i.e., $\mathcal{B}+\mathcal{B}^{T}$ is symmetric positive semidefinite. This is significant for the GMRES method: in fact, the restarted version GMRES($m$) converges for all $m\geq 1$. Recently, some iterative schemes have been extended in the literature for solving this system. For instance, Cao [@CAOAML] presented the shift-splitting method. In [@Huang-NumerAlgor; @Huang-NLWA], Uzawa-type methods were developed. 
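A minimal numerical sketch of GMRES preconditioned with a ${\cal P}_1$-type block preconditioner is given below. The small random dense problem, the block sizes, and the use of `scipy` are assumptions of this illustration, not the experiments of the paper:

```python
import numpy as np
from scipy.sparse.linalg import gmres, LinearOperator

rng = np.random.default_rng(0)
n, m, l = 8, 4, 2

# Random problem with the required structure: A SPD, B and C full row rank
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)
B = rng.standard_normal((m, n))
C = rng.standard_normal((l, m))

Acal = np.block([[A, B.T, np.zeros((n, l))],
                 [B, np.zeros((m, m)), C.T],
                 [np.zeros((l, n)), C, np.zeros((l, l))]])

S = B @ np.linalg.solve(A, B.T)    # S = B A^{-1} B^T
CSC = C @ np.linalg.solve(S, C.T)  # C S^{-1} C^T

# Block triangular preconditioner of the P_1 form
P1 = np.block([[A, np.zeros((n, m)), np.zeros((n, l))],
               [B, -S, C.T],
               [np.zeros((l, n)), np.zeros((l, m)), CSC]])

Minv = LinearOperator(Acal.shape, matvec=lambda v: np.linalg.solve(P1, v),
                      dtype=float)
b = rng.standard_normal(n + m + l)
x, info = gmres(Acal, b, M=Minv, atol=1e-12)
print(info, np.linalg.norm(Acal @ x - b))  # info == 0 on convergence
```

In a real application $P_1$ would of course never be inverted densely; its application would itself be approximated by inner solves with $A$, $S$ and $CS^{-1}C^{T}$.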
In this work, we present a new type of iterative method for solving the three-by-three block saddle point problem. Next, we extract a preconditioner from the presented iterative method and examine its performance for speeding up the convergence of GMRES. The remainder of this paper is organized as follows. Before ending this section, we present the notation and basic preliminaries used in the next sections. In section \[sec2\], we propose a new iterative method for solving the system and study its convergence properties. In section \[sec3\], we extract a preconditioner from the proposed method and analyze the spectrum of the preconditioned matrix. Brief discussions are given in section \[sec4\] about the practical implementation of the preconditioner. In section \[sec5\], we report some numerical results, and brief concluding remarks are included in section \[sec6\]. [Throughout this paper, the identity matrix is denoted by $I$. The symbol $x^{*}$ is used for the conjugate transpose of the vector $x.$ For any square matrix $A$ with real eigenvalues, the minimum and maximum eigenvalues of $A$ are indicated by $\lambda_{\min} (A)$ and $\lambda_{\max} (A)$, respectively. The notation $\
--- author: - Gabriele Ghisellini title: Cosmological implications of Gamma Ray Bursts --- Introduction ============ Gamma Ray Bursts (GRBs) are powerful. We can compare their emitted power with the Planck power, i.e. the Planck energy divided by the Planck time, which can also be written as $$L_{\rm P} \, =\, { Mc^2 \over R_{\rm g}/c} \, =\, {c^5 \over G} \, \sim 3.6\times 10^{59}\,\,\, {\rm erg \, s^{-1}}$$ i.e. a mass entirely converted into energy in a time equal to the light crossing time of its gravitational radius $R_{\rm g}$ ($G$ is the gravitational constant). GRBs can emit, in electromagnetic form, $L\sim 10^{52}$–$10^{53}$ erg s$^{-1}$, while Active Galactic Nuclei can have luminosities up to $10^{48}$ erg s$^{-1}$ (but for a much longer time), and Supernovae can have $L\sim 10^{43}$ erg s$^{-1}$ for a month, and $L\sim 10^{45}$ erg s$^{-1}$ for a few hundred seconds during the shock breakout. Due to their power, even relatively modest $\gamma$–ray instruments have no difficulty in detecting them even at high redshifts. Furthermore, hard X–rays can travel unabsorbed across the universe: with their large power and weak absorption, GRBs are thus ideal candidates to study the far universe. Standard candles? ================= The energetics of the prompt emission of GRBs span at least four orders of magnitude: at first sight, GRBs are anything but standard candles. However, there are a few correlations between the total bolometric energetics and the spectral properties of bursts which can be used to standardize the GRB energetics. In general, “blue" GRBs (having the peak of their prompt spectrum at higher energies) are more powerful/energetic (contrast this with blazars, which behave in exactly the opposite way; Fossati et al. 1998). These correlations are named after their discoverers, and in the following I try to summarize them. [**Frail: universal energy reservoir? —**]{} Frail et al. (2001, see also Bloom et al. 
2003) found that the collimation corrected energetics of those GRBs of known jet aperture angles clustered into a narrow distribution, hinting at a “universal energy reservoir" $E_{\gamma} =(1-\cos\theta_{\rm j}) E_{\rm \gamma, iso} \sim 10^{51}$ erg. The aperture angle of the jet is estimated in the following way. Consider a shell moving with a bulk Lorentz factor $\Gamma$. Unlike blazars, the motion is radial, not unidirectional. Due to aberration, the observer will see only a fraction $1/\Gamma^2$ of the emitting surface. But $\Gamma$, during the afterglow, is decreasing. At some time $t_{\rm j}$, the fraction of the observed surface becomes unity. This happens when $\Gamma = 1/\theta_{\rm j}$. Before $t_{\rm j}$ the increased fraction of the observable surface partially compensates for the decreasing emissivity, while after $t_{\rm j}$ this compensating effect ends. Therefore one expects a break in the light curve at $t_{\rm j}$. Since only geometry is involved, this break should be achromatic (Rhoads 1997). Knowing the dynamics of the system (i.e. how $\Gamma$ changed in time), we can derive $\theta_{\rm j}$. The dynamics is controlled by the conservation of energy and momentum, leading to the self-similar law $M_{\rm ISM} = m_{F}/\Gamma = E_{\rm F} / (\Gamma^2 c^2)$, where $M_{\rm ISM}$ is the mass swept by the fireball at a given time, $\Gamma$ is the bulk Lorentz factor at that time, and $E_{\rm F}$ is the energy of the fireball. If the process is adiabatic, the latter is constant. 
With this law, we obtain $\theta_{\rm j} = \Gamma(t_{\rm j})^{-1}$ and then $$\begin{aligned} \theta_{\rm j} &=& 0.161 \, \left({ t_{\rm jet,d} \over 1+z}\right)^{3/8} \left({n \, \eta_{\gamma}\over E_{\rm iso,52}}\right)^{1/8}; \,\,\, \quad {\rm H} \nonumber \\ % \theta_{\rm j} &=& 0.2016 \, \left( {t_{\rm jet, d} \over 1+z}\right)^{1/4} \left( { \eta_\gamma\ A_* \over E_{\rm iso,52}}\right)^{1/4}; \quad {\rm W } \label{theta} \end{aligned}$$ where $n$ is the circumburst density in the homogeneous (H) case, $z$ is the redshift and $t_{\rm j,d}$ is the break time measured in days. The efficiency $\eta_\gamma$ relates the isotropic kinetic energy of the fireball $E_{\rm k, iso}$ to the prompt emitted energy $E_{\rm iso}$: $E_{\rm k, iso}= E_{\rm iso}/\eta_\gamma$. Usually, one assumes a constant value for all bursts, i.e. $\eta_\gamma =0.2$ (after its first use by Frail et al. 2001, following the estimate of this parameter in GRB 970508; Frail et al. 2000). For the wind (W) case, $n(r)=Ar^{-2}$ and $A_*$ is the value of $A$ \[$A=\dot M_{\rm w} /(4\pi v_{\rm w})=5\times 10^{11}A_*$ g cm$^{-1}$\] when setting the wind mass loss rate to $\dot M_{\rm w} =10^{-5} M_\odot$ yr$^{-1}$ and the wind velocity to $v_{\rm w}=10^3$ km s$^{-1}$. Usually, a constant value (i.e. $A_*=1$) is adopted for all bursts. [**The Amati correlation —**]{} Amati et al. (2002), considering $Beppo$SAX bursts, found that the isotropic energetics correlate with the peak energy $E_{\rm p}$ of the time integrated prompt emission: $E_{\rm p} \propto E_{\rm iso}^{1/2}$. This correlation, expanded in later works (Amati 2006, Ghirlanda et al. 2007), is obeyed by all but two bursts (the anomalous GRB 980425 and GRB 031203, but see Ghisellini et al. 2006) for which the redshift and $E_{\rm p}$ are known. Claims by Nakar & Piran 2005 and Band & Preece 2005 that the Amati correlation is spurious, resulting from selection effects, were contrasted by Ghirlanda et al. 
(2005), using a large sample of GRBs for which pseudo-redshifts were derived from the lag–luminosity relation. Fig. \[amawind\] shows the updated (Jan. 2007) Amati correlation, which includes 62 GRBs (plus the two outliers). [**The Yonetoku correlation —**]{} The peak luminosity $L_{\rm p, iso}$ of the prompt emission also correlates with $E_{\rm p}$, in the same way as $E_{\rm iso}$: $E_{\rm p} \propto L_{\rm p, iso}^{1/2}$ (Yonetoku et al. 2004). The scatter is similar to that of the Amati correlation. Since the luminosity $\propto \Gamma^2$, this correlation has the same form also in the comoving frame, contrary to the Amati one. [**The Ghirlanda correlation —**]{} By correcting the isotropic energetics by the factor $(1-\cos\theta_{\rm j})$, Ghirlanda et al. (2004) found that the collimation corrected energy, $E_{\gamma}$, is not universal, but is tightly correlated with $E_{\rm p}$. To find $\theta_{\rm j}$, Eq. \[theta\] for the homogeneous case was originally used, with $t_{\rm j}$ derived from the optical light curves. The efficiency $\eta_\gamma$ was assumed to be constant, as well as the density of the interstellar medium (unless it was derived by other means, in a very few cases). The correlation is $E_{\rm p} \propto E_{\gamma}^{0.7}$. Later, Nava et al. (2006) considered a wind density profile and an updated list of GRBs (18 objects), and found a linear correlation: $E_{\rm p} \propto E_{\gamma}$. The linear form is particularly intriguing for two reasons. First, it means that the correlation has the same linear form also in the comoving frame, since $E_\gamma$ and $E_{\rm p}$, being two energies, transform in the same way. The second reason is that $E_\gamma/E_{\rm p}$ is constant. This ratio is the number of photons at the peak, which must be the same for all bursts and is approximately $10^{57}$ (coincidentally, the number of protons in a solar mass). The most updated correlation, using 25 GRBs and including Swift bursts (
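The jet-angle estimate of Eq. \[theta\] (homogeneous case) and the collimation correction it feeds into can be sketched numerically; the parameter values below are illustrative assumptions, not a fit to any particular burst:

```python
import math

def theta_jet_homogeneous(t_jet_days, z, n=3.0, eta_gamma=0.2, E_iso_52=1.0):
    """Jet opening angle in radians from the jet-break time, homogeneous medium."""
    return (0.161 * (t_jet_days / (1.0 + z)) ** 0.375
                  * (n * eta_gamma / E_iso_52) ** 0.125)

def collimation_corrected_energy(E_iso, theta_j):
    """E_gamma = (1 - cos(theta_j)) * E_iso."""
    return (1.0 - math.cos(theta_j)) * E_iso

# Illustrative numbers: a break at 1 day for a burst at z = 1 with
# E_iso = 1e52 erg, n = 3 cm^-3 and eta_gamma = 0.2
theta = theta_jet_homogeneous(t_jet_days=1.0, z=1.0)
E_gamma = collimation_corrected_energy(1e52, theta)
print(f"theta_j = {math.degrees(theta):.1f} deg, E_gamma = {E_gamma:.2e} erg")
```

For these inputs the angle comes out at a few degrees, so the collimation correction reduces the energy budget by roughly two orders of magnitude, which is how the broad $E_{\rm iso}$ distribution collapses toward $\sim 10^{50}$–$10^{51}$ erg.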
--- abstract: 'It is well known that some quantum and statistical fluctuations of a quantum field may be recovered by adding suitable stochastic sources to the mean field equations derived from the Schwinger-Keldysh (Closed-time-path) effective action. In this note we show that this method can be extended to higher correlations and higher (n-particle irreducible) effective actions. As an example, we investigate third and fourth order correlations by adding stochastic sources to the Schwinger - Dyson equations derived from the 2-particle irreducible effective action. This method is a simple way to investigate the nonlinear dynamics of quantum fluctuations.' address: - ' CONICET and Departamento de Fisica, FCEN' - ' Universidad de Buenos Aires- Ciudad Universitaria, 1428 Buenos Aires, Argentina' author: - Esteban Calzetta title: 'Fourth order full quantum correlations from a Langevin-Schwinger-Dyson equation' --- Introduction ============ Quantum fields fluctuate, and quantum fields out of equilibrium show both quantum and statistical fluctuations [@CH08]. In many problems of interest, the fluctuations are more relevant than the mean fields themselves. Problems that come to mind are the generation of primordial fluctuations during inflation [@CalHu95; @CalGon97; @RouVer08; @WuNgFo07; @WNLLC07], the fluctuations of soft fields induced by the interaction with hard quanta [@GreMul97; @CalHu97; @BOD98; @BOD99; @ASY99a; @ASY99b], and the fluctuations of a Bose-Einstein condensate as described by the stochastic Gross-Pitaievskii equation [@ProJac08; @GaAnFu01; @GarDav03; @BrBlGa05; @Sto99; @Sto01; @CaHuVe07]. In such cases, one can try to obtain information about the average behavior of the fluctuations by deriving equations of motion for the fluctuation-fluctuation correlations, or else one may attempt to investigate the space-time unfolding of the fluctuations by deriving suitable Langevin-like equations for them.
In certain cases, the Langevin approach is also an efficient way to derive the required self-correlations [@CaRoVe03]. In the simplest set up, one is dealing with a bosonic field theory. The Heisenberg field operators are denoted as $\Phi_H^a$. We use a DeWitt notation where $a$ accounts for both discrete and space-time indexes; repeated indexes are summed over the discrete ones and integrated over the continuous ones. The $\Phi_H^a$ have expectation values $\left\langle \Phi_H^a\right\rangle=\phi^a$. If only the mean fields are relevant, we may obtain causal equations of motion for them from the Schwinger-Keldysh (or closed-time-path (CTP)) 1-particle irreducible (1PI) effective action (EA). Because the Schwinger-Keldysh approach involves doubling the degrees of freedom, an extra discrete index appears, and the fields become $\Phi_H^A$ and $\phi^A$. In the simplest representation, $A=\left(i,a\right)$, where $i=1,2$ shows in which branch of the CTP we are. Other representations are also possible. The physical mean field equations, notwithstanding, are obtained from the CTP equations by adding the constraints $$\phi^{1a}=\phi^{2a}\equiv\phi^a \label{ctpconstraint}$$ If fluctuations are important, one may add noise terms to the physical mean field equations of motion. The noise self-correlation is also derived from the 1PI CTP EA, more precisely from terms that do not contribute to the mean field equations when the constraints (\[ctpconstraint\]) are enforced (see next Section). The resulting theory is still good enough to derive exact symmetric expectation values for the product of two quantum fields, that is, the Hadamard propagator $$G_1^{ab}=\left\langle\left\{\Phi_H^a,\Phi_H^b\right\}\right\rangle-2\phi^a\phi^b \label{hadamard}$$ and in this sense it is a nontrivial extension of the mean field theory.
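The logic of this replacement can be illustrated with a classical toy model: a Langevin oscillator whose noise strength satisfies the fluctuation-dissipation relation reproduces the correct equal-time two-point function, which is the classical analogue of recovering the Hadamard propagator from a stochastic source. A sketch with arbitrary parameters (not the field-theory computation of this paper):

```python
import math
import random

def langevin_mean_x2(kT=1.0, m=1.0, omega=2.0, gamma=1.0,
                     dt=1e-3, steps=2_000_000, seed=1):
    """Integrate m x'' + gamma x' + m omega^2 x = xi(t) with
    <xi(t) xi(t')> = 2 gamma kT delta(t - t') (fluctuation-dissipation),
    and return the time-averaged <x^2>; equipartition predicts kT/(m omega^2)."""
    rng = random.Random(seed)
    x, v = 0.0, 0.0
    kick = math.sqrt(2.0 * gamma * kT * dt) / m  # noise amplitude per step
    acc, n = 0.0, 0
    for i in range(steps):
        v += (-(gamma / m) * v - omega ** 2 * x) * dt + kick * rng.gauss(0.0, 1.0)
        x += v * dt
        if i > steps // 10:  # discard the initial transient
            acc += x * x
            n += 1
    return acc / n

x2 = langevin_mean_x2()  # expect ~ kT/(m omega^2) = 0.25
```

The simulated $\langle x^2\rangle$ converges to the equilibrium value fixed by the noise kernel, just as the stochastic mean field theory reproduces the exact Hadamard two-point function.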
This basic framework has been extensively used in cosmology (see [@CH08]; some influential papers are [@CalHu94; @HuMat95; @LomMaz97]; see also the reviews [@HuVer03; @Ver07]; for more recent work see [@HuRou07; @For05; @ArPaVe04; @HuRoVe04; @PhiHu02]) and in the theory of Bose-Einstein condensates. The important question of to what extent these fluctuations may be considered real is discussed in [@CH08]; for present purposes, it is enough to consider the stochastic approach as a shortcut to the actual propagators. Sometimes higher correlations are also important. For example, one may want to compute the expectation value of the energy-momentum tensor (or the fluctuations thereof) in an interacting bosonic field theory, which usually involves three and four point functions [@Mot86; @KuoFor93; @RouVer99; @PhiHu03; @AnMoMo05; @PeRoVe08]. Density-density correlations in a Bose-Einstein condensate are a four point correlation of the fundamental Heisenberg field; these are relevant, for example, when the condensate is investigated through Bragg scattering [@SOKD02; @SKODTD03; @PitStr03]. One may need accurate higher order correlations to enforce important Ward-Takahashi or Slavnov-Taylor identities [@Ber04; @CarKov08]. Or simply one may want to compute higher cumulants as a way of accounting for nonlinear effects [@CSHY85]. In this case, one of the most powerful computational tools is to obtain self-consistent Schwinger-Dyson equations from variations of higher n-particle irreducible (nPI) EAs where all required correlations appear on equal footing as independent variables [@DomMar64a; @DomMar64b; @NorCor75; @Kim05]. There are both practical and fundamental reasons to wish to extend the stochastic approach to field fluctuations to higher correlations as well [@CalHu93; @CalHu95a].
On the practical side, getting the full fourth order correlations from a stochastic approach to the 2PI EA may be more efficient (or at least more heuristic) than computing the whole 4PI EA. On the fundamental side, let us mention the following issue. It is well known that the 2PI equations of motion for the propagators lead to the Kadanoff-Baym equations for the density of states and one-particle distribution function, and eventually to the Boltzmann equation (or similar) in the appropriate limit. However, it is also known that the Boltzmann equation is only a mean field approximation to a stochastic equation, where the noise terms, in the near equilibrium case, may be derived from the fluctuation-dissipation theorem. Of course, field theory complies with the Kubo-Martin-Schwinger theorem, and therefore has the fluctuation-dissipation theorem built in. So the noise terms in the stochastic Boltzmann equation must correspond to some elements already present in the 2PI EA. The fundamental question is to make those elements explicit. This issue was solved by Hu and one of us in ref. [@CalHu99]. Similar issues appear at every order in the Schwinger-Dyson hierarchy. The stochastic approach to the 1PI CTP EA exploits special features of this EA (see below) and is not readily generalizable to higher EAs. Similarly, the approach of Calzetta and Hu in ref. [@CalHu99] is also ad hoc, in this case for the 2PI EA. A systematic framework, which could be applied to any EA and to the symmetry broken or unbroken cases alike, would be highly desirable, not least because of the light it sheds on the particular approaches devised for the 1PI and 2PI cases. Our aim in this note is to develop such a uniform formalism. The rest of the paper is organized as follows. In the next section we study noise in the 1PI theory. We first show how one can associate a CTP 1PI EA with a problem defined in terms of a Langevin equation.
We then show that the 1PI EA for a quantum field theory problem has, under certain approximations, the same structure as the EA arising from a stochastic problem. This allows the direct identification of the equivalent stochastic problem for a given field theory. As an application, we review the derivation of the Hadamard propagator of the full theory from the equivalent stochastic equation. In the following Section we review the 2PI EA and two early attempts at a stochastic formulation of the propagator dynamics [@CalHu95a; @CalHu99]. We show the shortcomings of these attempts and how they differ from the proposal in this note. Finally we present a systematic approach to building stochastic equivalents for a given field theory and apply it to the 1PI and 2PI cases. Although we shall not discuss it explicitly, generalization to higher effective actions is straightforward. In the 2PI case, we finally obtain the same result as in [@CalHu99], but without the contrived arguments contained in that paper. We show the basic oversight contained in [@CalHu99], which obscured the simple derivation of the 2PI noise presented here. The paper ends with some brief final remarks. Stochastic approach to the 1PI EA ================================= The goal of this Section is to provide a heuristic introduction to stochastic equations derived from the 1PI EA. For a deeper discussion see [@CH08; @GRLE98]. From Langevin equations to effective actions -------------------------------------------- To see why it is natural to translate a problem described in
--- abstract: 'The results from the initial exposure of the MINOS detectors to neutrinos produced by the Fermilab NuMI beam are reported here. The exposure consisted of $1.27\times 10^{20}$ 120 GeV protons incident on the NuMI target. The data show the observation of 215 neutrinos with energies below 30 GeV while $336\pm14.4$ events were expected. The data are consistent with $\nu_{\mu}$ disappearance via neutrino oscillations with $|{\Delta m^{2}_{23}}|=2.74^{+0.44}_{-0.26}\times 10^{-3}\text{ eV}^{2}/c^{4}$ and ${\sin^{2}2\theta_{23}}> 0.87$ (68% C.L.).' address: 'Fermilab, PO Box 500, Batavia, IL 60510, USA' author: - 'B. J. Rebel for the MINOS Collaboration' title: First MINOS Results with the NuMI Beam --- There is substantial evidence that ${\nu_{\mu}}$ oscillate into ${\nu_{\tau}}~$[@ref:osc1; @ref:osc2; @ref:osc5]. The Main Injector Neutrino Oscillation Search (MINOS) was designed to study this hypothesis. The probability that a ${\nu_{\mu}}$ of energy $E$ remains a ${\nu_{\mu}}$ after traveling a distance $L$ is $$P_{{\nu_{\mu}}\rightarrow{\nu_{\tau}}} = 1 - {\sin^{2}2\theta_{23}}\sin^{2}(1.27|{\Delta m^{2}_{23}}| L/E), \label{eq:oscprob}$$ where $|{\Delta m^{2}_{23}}|$ has units of eV$^{2}$, $L$ has units of km and $E$ is in GeV [@ref:parke]. MINOS tests the oscillation hypothesis by making two measurements of a beam of ${\nu_{\mu}}$ produced in the Neutrinos at the Main Injector (NuMI) beam at Fermilab. The first measurement occurs at the Near Detector (ND) located onsite at Fermilab, 1 km from the production point; the second measurement is made at the Far Detector (FD) located 735 km away in the Soudan Underground Mine in Soudan, Minnesota, USA. MINOS extracts the oscillation parameters by comparing the reconstructed energy spectra of the ${\nu_{\mu}}$ at the ND and FD. The NuMI beam is produced using 120 GeV protons from the Main Injector. The protons are delivered in 10 $\mu$s spills, each of which contains up to $3.0\times10^{13}$ protons. 
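The survival probability of Eq. \[oscprob\] with the best-fit parameters quoted in the abstract can be sketched as follows (assuming maximal mixing, consistent with ${\sin^{2}2\theta_{23}} > 0.87$):

```python
import math

def survival_prob(E_GeV, L_km=735.0, dm2=2.74e-3, sin2_2theta=1.0):
    """P(nu_mu -> nu_mu) = 1 - sin^2(2 theta_23) sin^2(1.27 |dm2| L / E)."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2 * L_km / E_GeV) ** 2

# The first oscillation minimum sits at E = 1.27 |dm2| L / (pi/2) ~ 1.6 GeV,
# which is why the beam is tuned to peak in the 1-3 GeV range.
p_dip = survival_prob(1.63)   # near the minimum: almost full disappearance
p_high = survival_prob(30.0)  # high energy: little suppression
```

This is why the deficit of events concentrates at low energies while the spectrum above ~10 GeV is nearly unaffected.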
Two parabolic horns are pulsed with 200 kA of current in order to focus the produced $\pi^{+}$ and $K^{+}$ toward the detectors. The horns are pulsed in such a way as to maximize the production of neutrinos in the energy range of 1-3 GeV. A total of $1.27\times10^{20}$ protons on target (POT) were delivered for the data described below. The neutrino beam is made of 92.9% ${\nu_{\mu}}$, 5.8% ${\overline{\nu}_{\mu}}$, 1.2% ${\nu_{e}}$ and 0.1% ${\overline{\nu}_{e}}$. The MINOS detectors are steel-scintillator tracking calorimeters with toroidal magnetic fields averaging 1.3 T [@ref:minos1]. The steel planes are 2.54 cm thick and the scintillator is mounted to the steel. The scintillator planes are made of 4.1 cm wide and 1 cm thick strips. The strips in each plane are rotated 45$^{\circ}$ from the vertical and the strips in successive planes are rotated 90$^{\circ}$ from each other. The light produced in the scintillator is collected in 1.2 mm wavelength shifting fibers embedded in the scintillator. The fibers transport the light to the multi-anode photomultiplier tubes (PMTs). The detectors were made as similar as possible in order to cancel the majority of the uncertainties in the neutrino interaction modeling and detector response. Both detectors yield 6-7 photoelectrons per plane for normally incident minimum ionizing particles. The main design differences between the two detectors are due to the much higher rate ($\sim10^{5}$ times) in the ND than in the FD. The FD is 705 m below the surface, has a mass of 5.40 kton, and is composed of 484 instrumented planes. The detector has a regular octagonal cross section and is 8 m wide. The scintillator is read out at both ends of the strips using Hamamatsu M16 PMTs. The front end readout electronics are designed to provide high precision timing information. The FD data were blinded until the procedures for event selection and energy spectrum prediction were defined and understood. 
The blinding procedure hid a substantial and unknown fraction of the events in the FD. The ND is 103 m below the surface, has a mass of 0.98 kton, and is composed of 282 planes. The planes have an irregular octagonal cross section and are 4 m tall and 6 m wide. The geometry of the planes optimizes containment of hadronic showers and allows for the magnetic field to be similar to that in the FD. The front end electronics in the ND are designed to handle the high rate of interactions observed with each Main Injector spill. Up to 10 interactions can be observed in the ND for each spill of protons. As such, the first step in the reconstruction of the ND data is to use timing and spatial information to separate the individual interactions. The data in the ND were not blinded. The energy of each neutrino interaction is found in the same way for both detectors. Muon tracks are found and their curvature in the magnetic field is fit to determine their energy. The hadronic showers are also found and their energy is determined. The events selected in both detectors were required to have visible energy, $E_{vis}$, less than 30 GeV and the events had to have a negatively charged track, a requirement chosen to select only ${\nu_{\mu}}$ interactions. A fiducial volume was defined to contain the hadronic energy of the event and reject background cosmic ray muons. The events were also required to occur within a 50 $\mu$s window surrounding the spill time. The background due to cosmic ray muons in the ND is negligible. However, an additional constraint to reduce the cosmic ray background was imposed in the FD; the direction of the reconstructed track at its vertex had to be within 53$^{\circ}$ of the neutrino beam direction. The background due to cosmic ray events in the FD is estimated to be $<0.5$ events (68% C.L.) as there are no events occurring within the 50 $\mu$s window.
As the MINOS detectors record both neutral current (NC) and charged current (CC) interactions, a particle identification parameter (PID) was defined to determine whether an event was NC or ${\nu_{\mu}}$ CC. The PID incorporated the probability density functions for the event length, fraction of energy contained in the track and average track pulse height per plane to separate the events into these two categories. Figure \[fig:pid\] shows the PID for the ND and FD data overlaid with the Monte Carlo NC and CC distributions. The events with PID $> -0.2$ were categorized as CC events in the FD; in the ND the requirement was PID $> -0.1$. The values of the PID were chosen so that the purity of the samples in the ND and FD were both about 98%. The efficiencies for selecting ${\nu_{\mu}}$ CC events with $E_{vis} < 30$ GeV in the fiducial volume are 74% for the FD and 67% for the ND. The measured energy spectrum in the ND is used to predict the unoscillated spectrum in the FD. The method used by MINOS to predict the FD spectrum is the [*Beam Matrix*]{} method [@ref:adam]. In this method the ND data are used to measure effects such as beam modeling, neutrino interactions and detector response that are common to both detectors. The beam simulation is used to derive a transfer matrix that relates ${\nu_{\mu}}$ in the two detectors via their parent hadrons. The matrix element $M_{ij}$ gives the relative probability that the distribution of secondary hadrons producing the observed ${\nu_{\mu}}$ of energy $E_{i}$ in the ND will produce the observed ${\nu_{\mu}}$ of energy $E_{j}$ in the FD. The reconstructed energy spectrum in the ND is translated into a flux by correcting for the ND acceptance, taken from simulation, and then dividing by the calculated cross-sections for each energy bin. The flux is multiplied by the matrix to yield the predicted unoscillated FD flux.
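The chain just described (ND spectrum, undo acceptance and cross-section to get a flux, apply the transfer matrix, reapply cross-section and FD acceptance) can be sketched with toy numbers. The three-bin matrix and all values below are invented for illustration and are not the MINOS inputs:

```python
# Toy three-bin illustration of the Beam Matrix extrapolation.  All numbers
# (counts, acceptances, cross-sections, matrix) are invented, not MINOS inputs.
nd_counts = [1000.0, 800.0, 300.0]   # observed ND events per energy bin
nd_accept = [0.67, 0.67, 0.67]       # toy ND acceptance per bin
fd_accept = [0.74, 0.74, 0.74]       # toy FD acceptance per bin
xsec      = [0.5, 1.0, 1.5]          # toy cross-section per bin (arb. units)
# M[i][j]: relative probability that hadrons giving E_i at the ND
# give E_j at the FD
M = [[0.90, 0.08, 0.02],
     [0.05, 0.90, 0.05],
     [0.02, 0.08, 0.90]]

# spectrum -> flux: undo acceptance and cross-section
nd_flux = [c / (a * s) for c, a, s in zip(nd_counts, nd_accept, xsec)]
# ND flux -> unoscillated FD flux via the transfer matrix
fd_flux = [sum(nd_flux[i] * M[i][j] for i in range(3)) for j in range(3)]
# flux -> predicted FD spectrum: reapply cross-section and FD acceptance
fd_pred = [f * a * s for f, a, s in zip(fd_flux, fd_accept, xsec)]
```

Because the matrix is nearly diagonal, most of the predicted FD spectrum in each bin comes from the corresponding ND bin, with small migrations between neighboring bins.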
The final step in the process is to do the inverse correction for cross-section and FD acceptance, resulting in the predicted visible energy spectrum. As a cross check of this method, the prediction was also done with the [*ND Fit*]{} method, which minimizes differences between the ND data and Monte Carlo by modifying the parameters associated with neutrino interactions and detector response. The FD Monte Carlo is adjusted by using the best-fit values of those parameters. The FD data set contains a total of 215 events with $E_{vis}< 30$ GeV compared to the unoscillated expectation of $336.0\pm14.4$. The uncertainty is due to systematic uncertainties associated with (a) the fiducial mass calculation and POT counting accuracy (4%), (b) the hadronic energy scale (11%) and (c) the NC component (50%). The observed energy spectrum is shown in
--- abstract: 'The Young’s modulus of graphene is investigated through the intrinsic thermal vibration in graphene, which is ‘observed’ by molecular dynamics, and the results agree very well with the recent experiment \[Science **321**, 385 (2008)\]. This method is further applied to show that the Young’s modulus of graphene: (1) increases with increasing size and saturates after a threshold value of the size; (2) increases from 0.95 TPa to 1.1 TPa as temperature increases in the region \[100, 500\] K; (3) is insensitive to isotopic disorder in the low disorder region ($< 5\%$), and decreases gradually upon further increasing the disorder percentage.' author: - 'Jin-Wu Jiang' - 'Jian-Sheng Wang' - Baowen Li title: 'Young’s modulus of Graphene: a molecular dynamics study' --- Single layer graphene has unique electronic and other physical properties, thus becoming a promising candidate for various device applications.[@Novoselov1; @Neto] Among others, its excellent mechanical properties are an important advantage for the practical applications of graphene. Experimentally, the Young’s modulus ($Y$) of graphene has been measured by using an atomic force microscope (AFM) to introduce external strain on graphene and record the force-displacement relation.[@Lee] The measured value for the Young’s modulus is $1.0\pm 0.1$ TPa in this experiment. Theoretically, the Young’s modulus of graphene can be studied in a parallel way. Once the external strain is applied on graphene, the internal force or potential can be calculated in different approaches, such as *ab initio* calculations,[@Kudin; @Lier; @Konstantinova] molecular dynamics (MD)[@Khare] and inter-atomic potentials.[@Reddy; @Huang; @Lu] Then the Young’s modulus can be obtained from the force-displacement or the potential-displacement relation. For carbon nanotubes (CNT), the Young’s modulus is theoretically studied in a similar way as in graphene.
However, in the experiment, besides the AFM method,[@Tombler] another group measured the Young’s modulus of CNT by observing the thermal vibration at the tip of the CNT using transmission electron microscopy (TEM).[@Treacy; @Krishnan] For some unknown reason, possibly technical challenges, this experimental method has not appeared in studies of the Young’s modulus of graphene. To fill this gap, the present work ‘observes’ the thermal vibration of graphene by MD instead of TEM, and then calculates the Young’s modulus from the ‘observed’ thermal vibration. In the engineering application of graphene, it will be beneficial if the mechanical properties of graphene can be adjusted according to demand. There are some possible methods that can manipulate the value of the Young’s modulus in graphene, such as the size of the sample, temperature, isotopic disorder, etc. It is a matter of practical importance and theoretical interest to find an effective method to control the mechanical properties of graphene. The present calculation method for the Young’s modulus of graphene in this paper is readily applicable to address these issues. In this paper, we investigate the Young’s modulus of graphene by ‘observing’ the thermal vibrations with MD. The calculated Young’s modulus is in good agreement with the recent experimental one. Using this method, we can systematically study different effects on the Young’s modulus: size, temperature and isotopic disorder. It shows that the Young’s modulus increases as the graphene size increases, and then saturates. In the temperature range $100-500$ K, $Y$ increases from 0.95 TPa to 1.1 TPa as $T$ increases. For the isotopic disorder effect, $Y$ remains almost unchanged at low disorder percentages ($< 5\%$), and decreases gradually upon further increasing the disorder percentage. In graphene there are both optical and acoustic vibration modes in the $z$ direction.
For the optical phonon modes, the frequency is about 850 cm$^{-1}$, which is too high to be considerably excited below 500 K, while the acoustic phonon mode is a flexure mode with parabolic dispersion $\omega=\beta k^{2}$, which will be fully excited even at very low temperature. So the thermal mean-square vibration amplitude (TMSVA) of graphene in the $z$ direction is mainly attributed to the flexure mode below 500 K. In this sense, we consider the contribution of the flexure mode to the TMSVA for an elastic plate in the following. The $x$ and $y$ axes lie in the plate, and the $z$ direction is perpendicular to the plate. For convenience and without loss of generality, we consider a square plate with length $L$. The equation for oscillations in the $z$ direction of a plate is[@Landau]: $$\begin{aligned} &&\rho\frac{\partial^{2}z}{\partial t^{2}}+\frac{D}{h}\Delta^{2}z = 0, \label{eq_fm}\end{aligned}$$ where $D=\frac{1}{12}Yh^{3}/(1-\mu^{2})$. $\Delta$ is the two-dimensional Laplacian and $\rho$ is the density of the plate. $Y$ and $\mu$ are the Young’s modulus and the Poisson ratio, respectively. $h$ is the thickness of the plate. We apply a fixed boundary condition in the $x$ direction, and a periodic boundary condition in the $y$ direction: $$\begin{aligned} z(t,x=0,y) & = & 0,\nonumber\\ z(t,x=L,y) & = & 0,\\ z(t,x,y+L) & = & z(t,x,y).\nonumber \label{eq_boundary}\end{aligned}$$ The solution for the above partial differential equation under these boundary conditions can be found in Ref. : $$\begin{aligned} \omega_{n} & = & k_{n}^{2}\sqrt{\frac{Yh^{2}}{12\rho(1-\mu^{2})}},\nonumber\\ z_{n}(t,x,y) & = & u_{n}\sin (k_{1}x)\cdot\cos (k_{2}y)\cdot\cos(\omega_{n}t),\\ \vec{k} & = & k_{1}\vec{e}_{x}+k_{2}\vec{e}_{y},\nonumber \label{eq_eigen}\end{aligned}$$ where $k_{1}=\pi n_{1}/L$ and $k_{2}=2\pi n_{2}/L$.
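The dispersion relation above can be evaluated numerically; a sketch with illustrative graphene-like parameters ($Y=1$ TPa, $h=0.335$ nm, $\rho=2200$ kg/m$^3$, $\mu=0.17$, $L=10$ nm are assumptions for the example, not results of this paper):

```python
import math

def flexural_omega(n1, n2, L, Y, h, rho, mu):
    """omega_n = k_n^2 sqrt(Y h^2 / (12 rho (1 - mu^2))), with
    k_1 = pi n1 / L (fixed BC) and k_2 = 2 pi n2 / L (periodic BC)."""
    k_sq = (math.pi * n1 / L) ** 2 + (2.0 * math.pi * n2 / L) ** 2
    return k_sq * math.sqrt(Y * h ** 2 / (12.0 * rho * (1.0 - mu ** 2)))

# Assumed graphene-like numbers: Y = 1 TPa, h = 0.335 nm,
# rho = 2200 kg/m^3, mu = 0.17, square plate with L = 10 nm
w11 = flexural_omega(1, 0, 10e-9, 1.0e12, 3.35e-10, 2200.0, 0.17)
f_GHz = w11 / (2.0 * math.pi) / 1e9  # fundamental flexural frequency
```

With these assumed inputs the fundamental flexural mode lands in the tens-of-GHz range, and the quadratic dispersion makes $\omega$ scale as $n_1^2$ for the higher modes.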
Using these eigensolutions, the TMSVA for the $n$-th phonon mode at $(x,y)$ and temperature $T$ can be obtained[@Krishnan]: $$\begin{aligned} \sigma_{n}^{2}(x,y)& = & 4k_{B}T\times\frac{12(1-\mu^{2})}{Yh^{2}V}\times\frac{1}{k_{n}^{4}}(\sin (k_{1}x)\cos (k_{2}y))^{2}.\end{aligned}$$ We mention that for those modes with $k_{1}\not=0$ and $k_{2}=0$, we have a similar result $\sigma_{n}^{2}(x,y)=2k_{B}T\times\frac{12(1-\mu^{2})}{Yh^{2}V}\times\frac{1}{k_{n}^{4}}(\sin k_{1}x)^{2}.$ The spatial average of the TMSVA over $x$ and $y$ is: $$\begin{aligned} \langle\sigma_{n}^{2}\rangle & = & \frac{1}{S}\int\int_{D}\sigma_{n}^{2}(x,y)dxdy\nonumber\\ \nonumber\\ & = & k_{B}T\times\frac{12(1-\mu^{2})}{Yh^{2}V}\times\frac{1}{k_{n}^{4}},\end{aligned}$$ where $k_{1}\not=0$ and $k_{2}\not=0$. $D$ is the domain $x\in[0,L]$ and $y\in[0,L]$, and $S=L^{2}$ is the area of $D$. If $k_{1}\not=0$ and $k_{2}=0$, $\langle\sigma_{n}^{2}\rangle$ turns out to have the same expression as this general one. Because all modes are independent in the thermal equilibrium state at temperature $T$, they contribute to the TMSVA incoherently. As a result, the TMSVA at temperature $T$ is given by: $$\begin{aligned} \langle\sigma^{2}\rangle & = & \sum_{n=0}^{\infty}\langle\sigma_{n}^{2}\rangle\nonumber\\ \nonumber\\ & = & k_{B}T\times\frac{12(1-\mu^{2})}{Yh^{2}V}\times\sum_{n=0}^{\infty}\frac{1}{k_{n}^{4}}.\nonumber\\ & = & k_{B}T\times\frac{12(1-\mu^{2})}{Yh^{2}V}\times\frac{2S^{2}}{\pi^{4}}\times C\nonumber\\ \nonumber\\ & = & 0.31\times\frac{(1-\mu^{2})S}{h^{3
--- abstract: 'Bound- and excited-state electronic nonlinearities in CdS quantum dots have been investigated by Degenerate Four-Wave Mixing (DFWM) and Z-scan techniques in the femtosecond time regime. This QD sample shows Kerr-type nonlinearity for incident beam intensity below 0.18 TW/cm$^2$. However, further increment in intensity results in four-photon absorption (4PA) indicated by open- and closed-aperture Z-scan experiments. Comparing open-aperture Z-scan experimental results with theoretical models, the 4PA coefficient $\alpha_4$ has been deduced. Furthermore, third-order nonlinear index $\gamma$ and refractive-index change coefficient $\sigma_r$ corresponding to excited-state electrons due to 4PA have been calculated from the closed-aperture Z-scan results. UV-visible absorption and photoluminescence experimental results are analyzed towards estimating band gap energy and defect state energy. Time Correlated Single Photon Counting (TCSPC) was employed to determine the decay time corresponding to band-edge and defect states. The linear and nonlinear optical techniques have allowed the direct observation of lower and higher-order electronic states in CdS quantum dots.' author: - 'P. Ghosh [^1]' - 'E. Ramya' - 'P. K. Mohapatra' - 'D. Kushavah' - 'D. N. Rao' - 'P. Vasa' - 'K. C. Rustagi' - 'B. P. Singh' bibliography: - 'REFERENCES\_pin3.bib' title: 'Observation of four-photon absorption and determination of corresponding nonlinearities in CdS quantum dots' --- Introduction ============ The physics of quantum dots (QDs) is of great scientific interest from both fundamental and application points of view. A comprehensive knowledge of nonlinear absorption and refraction processes in quasi-zero dimensional semiconductor structures, or QDs, is important for further development of nonlinear-optical semiconductor devices [@Marcelo2013; @Dakovski2013; @Lad2007; @Guang2008].
Such questions can be adequately addressed by nonlinear optical experimental techniques, such as Z-scan [@YoshinoPhysRevLett.91.063902; @Sheik-Bahae1990; @Said1992; @Wei1992], degenerate four-wave mixing (DFWM) [@Canto-Said1991; @Bindra1999], and pump-probe spectroscopy [@Gaponenko1994]. Over the past years, these nonlinear optical techniques have been extensively used as powerful tools for investigating the dynamics of excited electron-hole pair states in semiconductor QDs, providing information complementary to that obtained by linear optical techniques. With access to ultrafast and ultrahigh-intensity laser pulses, multiphoton absorption, i.e. the simultaneous absorption of two or more photons, has been extensively studied. These multiphoton absorption processes are exceedingly promising in many fields including optical limiting [@He:95; @Prasad2008; @Venkatram2008; @Kiran:02], 3D microfabrication [@Maruo:97], optical data storage [@Nature2002PNPrasad; @PARTHENOPOULOS1989], and biomedical applications [@Yanik2006]. In this regard, CdS QDs are of particular interest because of their high intrinsic nonlinearity [@Kalyaniwalla1990]. So far, various nonlinear processes have been studied for a wide range of materials [@Sheik-Bahae1990; @Canto-Said1991; @Said1992]. Furthermore, the third-order nonlinear index $\gamma$ and refractive-index change coefficient $\sigma_r$ corresponding to free carriers generated by TPA have been calculated from closed-aperture Z-scan results [@Said1992]. To the best of our knowledge, hardly any work has included a discussion of deriving these nonlinear parameters for three- or four-photon absorption in QDs. In this paper, we report a detailed investigation of nonlinear optical processes in CdS QDs synthesized by the gamma-irradiation technique. Towards understanding these processes, intensity-dependent DFWM and open- and closed-aperture Z-scan experiments were performed.
Furthermore, we derived $\gamma$ and $\sigma_r$ values corresponding to excited-state electrons generated by four-photon absorption. Results of open-aperture Z-scan with 400 nm femtosecond laser pulses are also presented. In the first section of results and discussion, we report nonlinear studies on this CdS QD sample. In the later part, we present UV-visible absorption, room temperature photoluminescence and TCSPC experimental results for a better understanding of the electronic states in the QDs. Experimental ============ The results of ultrafast nonlinear experiments including DFWM, open-aperture and closed-aperture Z-scan on a colloidal solution of the CdS QD sample are reported in this paper. These nonlinear studies are performed using a Ti: Sapphire femtosecond laser (Spectra-Physics, Mai Tai, Spitfire amplifier) having wavelength $\lambda = 800$ nm and repetition rate 1 kHz. The pulse width was determined to be 110 fs through intensity autocorrelation measurements. The nonlinear properties are investigated for the intensity regime 0.02 TW/cm$^2$ to 0.80 TW/cm$^2$ with the femtosecond laser pulses. The input beam intensity is varied using a polarizer and a $\lambda/2$ plate combination. It can be noted that in this intensity range, the water solvent does not show any nonlinear behaviour in either the DFWM or the Z-scan experiments. The DFWM experiments are performed using folded boxcar geometry [@Wise1998]. In this technique, a three-dimensional phase-matching is implemented, which enables spatial separation of the signal-beam from the input beams. The fundamental beam is divided into three nearly equal intensity beams (intensity ratio of 1:1:0.9) in such a way that they form three corners of a square and are focused into the nonlinear medium. All three beams are synchronized both spatially and temporally.
The resultant DFWM signal is generated due to the phase-matched interaction: $\overrightarrow{k}_4=\overrightarrow{k}_1-\overrightarrow{k}_2+\overrightarrow{k}_3$. In Z-scan experiments, a Gaussian laser beam is tightly focused onto an optically non-linear sample using a finite aperture and the transmittance through the medium is measured in the far field. Finally, the resultant transmittance is recorded as a function of the sample position Z measured about the focal plane. Open-aperture Z-scan has also been performed at wavelength 400 nm (second harmonic of the fundamental wavelength from a BBO crystal). The details about synthesis and structural characterization of the CdS QDs are reported in [@Soumyendu2012]. Particle size distribution and chemical composition are obtained from the HRTEM images, XPS and Raman spectra analysis. Results and discussion ====================== DFWM signal versus probe delay plots for the colloidal solution of CdS QDs are shown in Fig. \[cds\_dfwm\_800nm\] (a). ![\[cds\_dfwm\_800nm\] ](DFWM_signal_delay_a.pdf "fig:"){height="0.25\textheight"} ![\[cds\_dfwm\_800nm\] ](DFWM_signal_intensity_b.pdf "fig:"){height="0.25\textheight"} The signals are fitted with a Gaussian function (solid curve). The signal profiles are nearly symmetric about the maximum ($\it i.e.$ zero time delay), illustrating that the response times of the nonlinearities are shorter than the pulse duration (110 fs). This fast response enhances their potential for photonic switching applications. The intensity dependence of the DFWM signal amplitude is presented in Fig. \[cds\_dfwm\_800nm\] (b). At relatively low input intensities ($< 200$ GW/cm$^2$), the DFWM signal amplitude follows a cubic (with a slope of 2.9$\pm$0.1) dependence. It clearly demonstrates that the nonlinearity behaves in a Kerr-like fashion and the origin of DFWM does not have a contribution from any multiphoton absorption process, which would lead to a higher power dependence [@Sutherland1996].
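The quoted slope of $2.9\pm0.1$ comes from a linear fit of log(signal) versus log(intensity); a minimal sketch of such a fit, checked on synthetic data obeying the expected cubic law:

```python
import math

def loglog_slope(intensities, signals):
    """Least-squares slope of log(signal) versus log(intensity);
    a slope near 3 signals a Kerr-like chi^(3) response."""
    xs = [math.log(v) for v in intensities]
    ys = [math.log(v) for v in signals]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)

# Synthetic check: a signal obeying S = c * I^3 must give slope 3
I = [0.02, 0.05, 0.10, 0.15, 0.18]   # TW/cm^2, within the Kerr regime
S = [7.0 * v ** 3 for v in I]        # invented proportionality constant
slope = loglog_slope(I, S)           # -> 3.0
```

Applied to measured data, a slope significantly above 3 would flag a multiphoton-absorption contribution, which is exactly what the text rules out below 200 GW/cm$^2$.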
It can be seen from the intensity dependence of the DFWM signal plot that the DFWM signal intensity goes down at input intensities around 180 GW/cm$^2$. This substantial reduction in the DFWM signal intensity is mainly due to the nonlinear absorption of all interacting beams. However, the DFWM signal does not show the higher-power dependence expected for multiphoton absorption, indicating the dominance of the $\chi^{(3)}$ process over multiphoton absorption in this input intensity regime. To confirm this, we have performed an open-aperture Z-scan experiment, which is discussed in the next section. The measurements of $\chi^{(3)}$ are performed at zero time delay of all the beams. We estimated the magnitude of $\chi^{(3)}_{1111}$ by maintaining the same polarization for all three incident beams. The third-order nonlinear optical susceptibility $\chi^{(3)}$ is estimated by comparing the measured DFWM signal of the sample with that of $CS_2$ as a reference ($\chi^{(3)} = 5 \times 10^{-13}$ esu [@HBLiao1998; @Minoshima1991]) measured under the same experimental conditions. The equation relating $\chi^{(3)}_{ref}$ and $\chi^{(3)}_{samp}$ is given by [@Sutherland1996] $$\chi^{(3)}_{samp}=\Bigg(\frac{
--- author: - 'Han Dong [^1], Ying-bin Wang [^2]' - 'Xin-he Meng [^3]' date: 'Received: date / Revised version: date' title: 'Extended Birkhoff’s Theorem in the $f(T)$ Gravity' --- Introduction ============ Since the discovery of the accelerating expansion of the universe, people have made great efforts to investigate the hidden mechanism, which also provides us with great opportunities to deeply probe the fundamental theories of gravity. As one of the modified gravitational theories, $f(T)$ gravity was first invoked to drive inflation by Ferraro and Fiorini [@fT0]. Later, Bengochea and Ferraro [@fT], as well as Linder [@fT1], proposed to use the $f(T)$ theory to drive the current accelerated expansion of our universe without invoking the mysterious dark energy. The framework is a generalization of the so-called *Teleparallel Equivalent of General Relativity* (TEGR), which was first propounded by Einstein in 1928 [@einstein] and matured in the 1960s (for some reviews, see [@TEGR1; @TEGR2]). Contrary to the theory of general relativity, which is based on Riemann geometry involving only curvature, the TEGR is based on the so-called Weitzenböck geometry with non-vanishing torsion. Owing to the use of the Weitzenböck connection rather than the Levi-Civita connection, the Riemann curvature vanishes automatically in the TEGR framework, which gives the theory another name, *Teleparallel Gravity*. For a specific choice of parameters, the TEGR is completely equivalent to Einstein’s theory of general relativity. Furthermore, by using the torsion scalar $T$ as the Lagrangian density, the theory yields field equations of second order only, instead of the fourth order encountered in metric $f(R)$ gravity models, and thus avoids the instability problems caused by higher-order derivatives.
Similar to the generalization of Einstein’s theory of general relativity to the $f(R)$ theory (for some references, see [@fR0; @fRrev; @fR1; @fR11; @fRa0; @fRa1; @fRa2; @fRa3; @fRa4; @fRa5; @fRa6; @fRa7; @fRa8; @fRa9; @fR2; @fR3; @fR4]), the modified version of teleparallel gravity assumes a general function $f(T)$ as the model Lagrangian density. Also, the $f(T)$ theory reduces directly to the TEGR if we choose the simplest case, that is, $f(T)=T$. The Lorentz invariance and conformal invariance of the $f(T)$ theory have also been investigated [@fT_Lorentz; @fT_conformal], with many interesting results presented. A class of $f(T)$ models with diagonal tetrads have been proposed in succession to explain the late-time acceleration of the cosmic expansion without invoking the mysterious dark energy, and they fit the cosmological data sets very well (e.g. [@fT; @fT1; @fT_w; @fT2; @fT3; @fT4; @fT5; @fT6; @fT7]). Most of the previous works consider $f(T)$ gravity with diagonal tetrad fields only. Noting that the tetrad field has sixteen components rather than the ten of the metric formulation, the extra six components carry additional degrees of freedom and potentially additional physical meaning. In our previous work [@fT_birkhoff], we proved the validity of Birkhoff’s theorem in $f(T)$ gravity with a specific diagonal tetrad. In this letter, we study this issue more generally, with off-diagonal tetrad fields as well, and discuss the physical meaning in a more extended context. Birkhoff’s theorem is also called the Jebsen-Birkhoff theorem, for it was actually discovered by Jebsen two years before George D. Birkhoff’s 1923 work [@birkhoff; @bb]. The theorem states that the spherically symmetric gravitational field in vacuum must be static, with the metric uniquely given by the Schwarzschild solution of the Einstein equations [@weinberg]. It is well known that the Schwarzschild metric was found in 1916 as the external (vacuum) solution for a static, spherically symmetric star.
Birkhoff’s theorem means that any spherically symmetric object possesses the same static external gravitational field, as if the mass of the object were concentrated at the center. Even if the central spherically symmetric object is in dynamical motion, such as in the collapse and pulsation of stars, the external gravitational field is still static provided the motion is radial and spherically symmetric. The same feature holds in classical Newtonian gravity. In this work we investigate Birkhoff’s theorem in the $f(T)$ gravity model generally, with both diagonal and off-diagonal tetrad fields, analyze the extended meaning of this theorem, and study the equivalence between the Einstein and Jordan frames. First, in section two we briefly review the $f(T)$ theories, and in section three we prove the validity of Birkhoff’s theorem in $f(T)$ gravity with both off-diagonal and diagonal tetrad fields. In section four, we then discuss the validity of Birkhoff’s theorem in the frame of $f(T)$ gravity via conformal transformation, by regarding the Brans-Dicke-like scalar as effective matter. Both the Jordan and Einstein frames are discussed in this section. Some conclusions and discussions are provided in the last section. Elements of $f(T)$ Gravity ========================== Instead of the metric tensor, the vierbein field $\mathbf{e}_{i}(x^{\mu})$ plays the role of the dynamical variable in teleparallel gravity. It is defined as the orthonormal basis of the tangent space at each point $x^{\mu}$ in the manifold, namely, $\mathbf{e}_{i}\cdot \mathbf{e}_{j}=\eta_{ij}$, where $\eta_{ij}=\mathrm{diag}(1,-1,-1,-1)$ is the Minkowski metric. The vierbein vectors can be expanded in the spacetime coordinate basis: $\mathbf{e}_{i}=e^{\mu}_{i} \partial_{\mu}$, $\mathbf{e}^i=e^i_\mu{\rm d}x^\mu$.
According to the convention, Latin indices and Greek indices, both running from 0 to 3, label the tangent space coordinates and the spacetime coordinates, respectively. The components of the vierbein are related by $e_{\mu}^i e^{\mu}_j=\delta^{~i}_{j}$,   $e_{\mu}^i e^{\nu}_i=\delta_{\mu}^{~\nu}$. The metric tensor is determined uniquely by the vierbein as $$g_{\mu\nu}=\eta_{ij} e_{\mu}^i e_{\nu}^j,$$ which can be equivalently expressed as $\eta_{ij}=g_{\mu\nu} e_i^{\mu} e_j^{\nu}$. The definition of the torsion tensor is then given by $$T^{\rho}_{~\mu\nu}=\Gamma^{\rho}_{~\nu\mu}-\Gamma^{\rho}_{~\mu\nu},$$ where $\Gamma^{\rho}_{~\mu\nu}$ is the connection. Evidently, $T^{\rho}_{~\mu\nu}$ vanishes in Riemann geometry since the Levi-Civita connection is symmetric with respect to the two covariant indices. Differing from Einstein’s theory of general relativity, teleparallel gravity uses the Weitzenböck connection defined directly from the vierbein: $$\Gamma^{\rho}_{~\mu\nu}=e_i^{\rho} \partial_{\nu} e^i_{\mu}.$$ Accordingly, the antisymmetric non-vanishing torsion is $$\label{torsion} T^{\rho}_{~\mu\nu}=e_i^{\rho}(\partial_{\mu}e^i_{\nu} - \partial_{\nu}e^i_{\mu}).$$ It can be confirmed that the Riemann curvature in this framework vanishes identically: $$R^\rho_{~\theta\mu\nu}=\partial_\mu \Gamma^\rho_{~\theta\nu}-\partial_\nu \Gamma^\rho_{~\theta\mu}+\Gamma^\rho_ {~\sigma\mu}\Gamma^\sigma_{~\theta\nu}-\Gamma^\rho_{~\sigma\nu} \Gamma^\sigma_{~\theta\mu}=0.$$ In order to get the action of teleparallel gravity, it is convenient to define two other tensors: $$\label{contorsion} K^{\mu\nu}_{~~\rho}=-\frac{1}{2}(T^{\mu\nu}_{~~\rho}-T^{\nu\mu}_{~~\rho}-T_{\rho}^{~\mu\nu}),$$ and $$\label{S} S_\rho^{~\mu\nu}=\frac{1}{2}(K^{\mu\nu}_{~~\rho}+\delta_\rho^{~\mu}T^{\theta\nu}_{~~\theta}-\delta_\rho^{~\nu}T^ {\theta\mu}_{~~\theta}).$$ Then the torsion scalar, serving as the teleparallel Lagrangian density, is defined by $$\label{T} T=S_\rho^{~\mu\nu}T^{\rho}_{~\mu\nu}.$$
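The tensor definitions above are easy to check symbolically. The following sketch (our own illustration, not part of the paper) computes the Weitzenböck connection and torsion for the simple diagonal tetrad $e^i_{~\mu}=\mathrm{diag}(1,a(t),a(t),a(t))$ commonly used in $f(T)$ cosmology, and verifies the antisymmetry of the torsion in its lower indices:

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
a = sp.Function('a')(t)
coords = [t, x, y, z]

# Diagonal FRW-type tetrad: e^i_mu = diag(1, a, a, a); row = tangent index i.
e = sp.diag(1, a, a, a)
e_inv = e.inv()   # inverse tetrad e_i^mu (diagonal, so index placement is unambiguous)

def Gamma(rho, mu, nu):
    """Weitzenbock connection: Gamma^rho_{mu nu} = e_i^rho d_nu e^i_mu."""
    return sum(e_inv[rho, i] * sp.diff(e[i, mu], coords[nu]) for i in range(4))

def T(rho, mu, nu):
    """Torsion: T^rho_{mu nu} = Gamma^rho_{nu mu} - Gamma^rho_{mu nu}."""
    return sp.simplify(Gamma(rho, nu, mu) - Gamma(rho, mu, nu))

print(T(1, 0, 1))  # -> Derivative(a(t), t)/a(t), i.e. the Hubble rate
assert all(T(r, m, n) == -T(r, n, m)   # antisymmetry in the lower indices
           for r in range(4) for m in range(4) for n in range(4))
```

For this tetrad the only surviving components are $T^i_{~0i}=\dot{a}/a$, consistent with the definition in Eq. (\[torsion\]).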
--- abstract: 'Let $X$ be a smooth irreducible complex algebraic variety of dimension $n$ and $L$ a very ample line bundle on $X$. Given a toric degeneration of $(X,L)$ satisfying some natural technical hypotheses, we construct a deformation $\{J_s\}$ of the complex structure on $X$ and bases $\mathcal{B}_s$ of $H^0(X,L, J_s)$ so that $J_0$ is the standard complex structure and, in the limit as $s \to \infty$, the basis elements approach Dirac-delta distributions centered at Bohr-Sommerfeld fibers of a moment map associated to $X$ and its toric degeneration. The theory of Newton-Okounkov bodies and its associated toric degenerations shows that the technical hypotheses mentioned above hold in some generality. Our results significantly generalize previous results in geometric quantization which prove “independence of polarization” between Kähler quantizations and real polarizations. As an example, in the case of general flag varieties $X=G/B$ and for certain choices of $\lambda$, our result geometrically constructs a continuous degeneration of the (dual) canonical basis of $V_{\lambda}^*$ to a collection of Dirac-delta functions supported at the Bohr-Sommerfeld fibers corresponding exactly to the lattice points of a Littelmann-Berenstein-Zelevinsky string polytope $\Delta_{\underline{w}_0}(\lambda) \cap {{\mathbb{Z}}}^{\dim(G/B)}$.'
address: - | Department of Math and Computer Science\ Mount Allison University\ 67 York St.\ Sackville, NB, E4L 1E6\ Canada - | Department of Mathematics and Statistics\ McMaster University\ 1280 Main Street West\ Hamilton, Ontario L8S4K1\ Canada - | Department of Mathematics\ University of Pittsburgh\ 301 Thackeray Hall\ Pittsburgh, PA, 15260\ USA author: - Mark Hamilton - Megumi Harada - Kiumars Kaveh title: 'Convergence of polarizations, toric degenerations, and Newton-Okounkov bodies' --- [^1] [^2] [^3] Introduction ============ The motivation for the present manuscript arose from two rather different research areas: the theory of geometric quantization in symplectic geometry on the one hand, and the algebraic-geometric theory of Newton-Okounkov bodies - particularly in its relation to representation theory - on the other. Since we do not expect all readers of this paper to be familiar with both theories, we begin with a brief description of each. We begin with a sketch of geometric quantization. As is well-known, symplectic geometry (Hamiltonian flows on symplectic manifolds) is the mathematical language for formulating classical physics, whereas it is the language of linear algebra and representation theory (unitary flows on Hilbert spaces) which forms the basis for formulating quantum physics. It has been a long-standing question within symplectic geometry to understand, from a purely mathematical and geometric perspective, the relation between the classical picture and the quantum picture, in terms of both the phase spaces and the defining equations of the dynamics. In one direction, to go from “quantum” to “classical”, one can “take a classical limit”. The reverse direction, i.e. 
that of systematically associating to a symplectic manifold $(M,\omega)$ a Hilbert space $Q(M,\omega)$ and to similarly relate, for instance, Hamilton’s equations on $(M,\omega)$ to Schrödinger-type equations on $Q(M,\omega)$, is generally referred to as the theory of *quantization*. In this manuscript, we deal specifically with *geometric quantization*, a theory which associates to a symplectic manifold $(M,\omega)$ a Hilbert space $Q(M,\omega)$. For a fixed $(M,\omega)$, it turns out that there are many possible ways of constructing a suitable Hilbert space $Q(M,\omega)$. To describe the choices, we first set some notation. First suppose that $[\omega]$ is an integral cohomology class. Next, let $(L,\nabla, h)$ be a Hermitian line bundle with connection satisfying $\mathrm{curv}(\nabla)=\omega$. Such a triple is called a *pre-quantum line bundle*, or sometimes a *pre-quantization*. Also required is a *polarization*, of which the two main types are as follows. A *Kähler polarization* is a choice of compatible complex structure $J$ on $M$. Given such a $J$, one can define the quantization $Q(M, \omega)$ to be $H^0(M, L, J)$, the space of holomorphic sections of $L$ with respect to this complex structure $J$. On the other hand, one may also consider a (possibly singular) *real polarization* of $M$, which is a foliation of $M$ into Lagrangian submanifolds. Among the Lagrangian leaves one can define a special (usually finite, if $M$ is compact) subset called the *Bohr-Sommerfeld leaves*. There is not yet an agreed-upon “correct” definition of the corresponding Hilbert space for a real polarization, but one approach which has been investigated, and which will be used in this manuscript, is to consider distributional sections supported on the set of Bohr-Sommerfeld leaves.
Based on the above discussion, the following natural question arises: *Is the quantization $Q(M,\omega)$ “independent of polarization,” i.e., independent of the choices made?* More specifically, we can ask: does the quantization coming from a Kähler polarization agree with the quantization coming from a real polarization? The results of this manuscript confirm independence of polarization in a rather large class of examples, significantly extending previously known results which were restricted to special cases such as toric varieties and flag varieties. We next briefly motivate the theory of Newton-Okounkov bodies. The famous Atiyah-Guillemin-Sternberg and Kirwan convexity theorems link equivariant symplectic and algebraic geometry to the combinatorics of polytopes. In the case of a toric variety $X$, the combinatorics of its moment map polytope $\Delta$ fully encodes the geometry of $X$, but this fails in the general case. In his influential work, Okounkov constructed (circa 1996), for an (irreducible) projective variety $X \subseteq {\mathbb{P}}(V)$ equipped with an action of a reductive algebraic group $G$, a convex body $\tilde{\Delta}$ and a natural projection from $\tilde{\Delta}$ to the moment polytope $\Delta$ of $X$. Moreover, the volumes of the fibers of this projection encode the asymptotics of the multiplicities of the irreducible representations appearing in the homogeneous coordinate ring of $X$, or in other words, the Duistermaat-Heckman measure [@Okounkov-BM; @Okounkov-log-concave]. Recently, Askold Khovanskii and the third author (also independently Lazarsfeld and Mustata [@LazMus]) vastly generalized Okounkov’s ideas [@KavKho], and in particular constructed such $\tilde{\Delta}$ (called *Newton-Okounkov bodies* or sometimes simply *Okounkov bodies*) even without the presence of any group action.
In the setting studied by Okounkov, the maximum possible (real) dimension of the Newton-Okounkov body $\tilde{\Delta}$ is the transcendence degree of ${{\mathbb{C}}}(X)^U$ where $U$ is a maximal unipotent subgroup of $G$; when there is no group action (as in the setting studied in [@LazMus; @KavKho]) we have $\dim_{{{\mathbb{R}}}}(\tilde{\Delta}) = \dim_{{{\mathbb{C}}}}(X).$ Hence one interpretation of the results of Okounkov, Lazarsfeld-Mustata and Kaveh-Khovanskii is that there *is* a convex geometric/combinatorial object of ‘maximal’ dimension associated to $X$, even when $X$ is not a toric variety. This represents a vast expansion of the possible settings in which combinatorial methods may be used to analyze the geometry of algebraic varieties. There is promise of a rich theory which interacts with a wide range of inter-related areas: for instance, the third author showed [@Kav-cry] that the *Littelmann-Berenstein-Zelevinsky string polytopes* from representation theory, which generalize the well-known *Gel’fand-Cetlin polytopes*, are examples of $\tilde{\Delta}$. In the long-term, one can expect further applications to Schubert calculus and to geometric representation theory (e.g. see [@KST]). We now turn attention to the present manuscript. Firstly we should explain that the two seemingly disparate research areas mentioned above are related due to the results in [@HarKav], which uses a certain toric degeneration that arises from (the semigroup associated to) a Newton-Okounkov body [@Anderson] to construct *integrable systems*[^4] on a wide class of projective varieties. Integrable systems are highly special Hamiltonian systems on symplectic (or, in our setting, Kähler) manifolds, and naturally give rise to (singular) real polarizations. Therefore, the theory of Newton-Okounkov bodies and their associated toric degenerations provide a natural setting in which to examine the theory of geometric quantization. 
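As a concrete toy illustration of the combinatorics mentioned above (our own example, not from the manuscript): for $\mathfrak{gl}(3)$ the lattice points of the Gel'fand-Cetlin polytope of a dominant weight $\lambda$ are exactly the Gel'fand-Tsetlin patterns with top row $\lambda$, and their count equals $\dim V_\lambda$. A short enumeration for $\lambda=(2,1,0)$:

```python
from itertools import product

def gt_patterns(top):
    """Enumerate Gel'fand-Tsetlin patterns with top row `top`."""
    n = len(top)
    if n == 1:
        return [[list(top)]]
    patterns = []
    # The next row interlaces the row above: top[i] >= row[i] >= top[i+1].
    ranges = [range(top[i + 1], top[i] + 1) for i in range(n - 1)]
    for row in product(*ranges):
        for rest in gt_patterns(row):
            patterns.append([list(top)] + rest)
    return patterns

# Lattice points of the Gel'fand-Cetlin polytope for lambda = (2,1,0):
# their number is dim V_lambda = 8 (the adjoint representation of sl(3)).
print(len(gt_patterns((2, 1, 0))))   # -> 8
```

The count 8 matches the Weyl dimension formula for $\lambda=(2,1,0)$, illustrating how lattice points of these polytopes carry representation-theoretic data.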
Before describing the statement of our main result (Theorem \[theorem
--- abstract: 'We present an analysis of electron transport through two weakly coupled precision-placed phosphorus donors in silicon. In particular, we examine the (1,1)$\leftrightarrow$(0,2) charge transition where we predict a new type of current blockade driven entirely by the nuclear spin dynamics. Using this nuclear spin blockade mechanism we devise a protocol to read out the state of single nuclear spins using electron transport measurements only. We extend our model to include realistic effects such as Stark-shifted hyperfine interactions and multi-donor clusters. In the case of multi-donor clusters we show how nuclear spin blockade can be alleviated, allowing for low magnetic field electron spin measurements.' author: - 'S. K. Gorman, M. A. Broome, W. J. Baker, M. Y. Simmons' bibliography: - 'MyBib.bib' title: The impact of nuclear spin dynamics on electron transport through donors --- Introduction ============ An understanding of electron transport through multiple quantum dots has enabled the progression of semiconductor quantum information protocols from single-shot spin readout to two-qubit logic gates [@johnson2005; @koppens2005; @koppens2006; @nowack2011]. Not only do transport measurements provide us with important details on spin relaxation times and tunnel rates, but they play a vital role in aiding our understanding of the complex spin dynamics that occur in these systems, such as coherent manipulation of the electron spins [@koppens2005; @petta2005] and dynamical nuclear polarisation of the Overhauser field [@schuetz2014]. With recent advances in the fabrication of precision-placed donors in silicon [@prati2012; @gonzalez-zalba2014; @fuechsle2012; @weber2012a; @buch2013], research in this field is now focused on spin transport through multi-donor chains. A deeper understanding of the interplay between electron and nuclear spins in the dynamics of such systems is a prerequisite for the progression of this field.
This holds in particular for the implementation of spin transport via donor chains [@hollenberg2006]. So far, protocols based on spin chains [@bose2003; @bose2007; @kay2010], coherent tunneling adiabatic passage (CTAP) [@hollenberg2006; @rahman2009; @rahman2010], spin shuttling [@cirac2000; @skinner2003] and SWAP-gate operations [@loss1998] have been proposed, all of which require control of electron spin transport across donors. In order to further investigate these transport protocols it is crucial to understand the spin dynamics at the two-donor level. However, despite the plethora of theoretical knowledge on gate-defined semiconductor double quantum dots [@vanderwiel2003; @taylor2007; @hanson2007], double-donor transport has not received as much attention. In this paper we show how the interplay between the electron and nuclear spins in donor-based systems affects not only the charge transport, but also the spin transport. To understand the impact of these nuclear spins we use a master equation approach to conduct a comprehensive numerical analysis of electron transport through a double donor system. We investigate electron spin resonance (ESR) combined with Pauli spin blockade (PSB) at low and high magnetic fields to manipulate and read out the electron spin states. Our most striking finding demonstrates that the presence of the quantised nuclear spin of the donors leads to a novel effect called nuclear spin blockade. Using this mechanism we propose a new spin readout protocol for the nuclear spins based on a measurement of the transport current. Finally, we analyse more realistic scenarios of inhomogeneous hyperfine interactions across the donors, as well as the case of multi-donor dots, which can be shown to be immune to nuclear spin blockade. Throughout the paper we neglect the dynamical behavior of the surrounding ^29^Si nuclear spins present in natural silicon.
This interaction is much smaller than the donor hyperfine interaction [@schliemann2003], and it has also been shown that Si:P devices can be fabricated in isotopically pure ^28^Si, where the absence of the ^29^Si extends the electron coherence times [@muhonen2014]. Transport at the (1,1) to (0,2) charge transition ================================================= We consider two weakly coupled phosphorus donors in a silicon lattice (approximately 15–20 nm apart [@koiller2002; @wellard2003]), P$_L$ and P$_R$, the left and right donor, respectively. Electrons are able to tunnel from in-plane source to drain leads via both donors as shown schematically in Fig. \[fig:intro\]a. The system is described by the Hamiltonian $H$, $$H = H_{ze} + H_{zn} + H_{t_c} + H_{\Delta} + H_{hf},$$ where $H_{ze}$ and $H_{zn}$ are the electron and nuclear Zeeman terms, $H_{t_c}$ is the tunnel coupling between the donor electrons, $H_{\Delta}$ is the energy detuning of the ${|S_{02} \rangle}$ state (the singlet state with two electrons on a single donor nucleus) and $H_{hf}$ is the hyperfine interaction; for further details see Methods. Throughout the paper we refer to the Hamiltonian in the singlet-triplet basis of the electrons, with energies shown in Fig. \[fig:intro\]b. By making a transformation from Hilbert space to Liouville space we incorporate incoherent processes that occur during spin transport [@ernst1987]. In doing so, tunnelling from the source at the rate $\Gamma_L$, to the donors and through to the drain at the rate $\Gamma_R$, is integrated with the coherent evolution of the system. Using this approach we can determine the spin and charge dynamics of the donor system during electron transport. ![[**Electron transport through two weakly coupled phosphorus donors in silicon.**]{} [**(a)**]{} A schematic representation of transport through a double donor system during PSB.
Electrons can tunnel from the source to P$_L$ and from P$_R$ to the drain at rates $\Gamma_L$ and $\Gamma_R$, respectively. The two donor electrons are coherently tunnel coupled at the rate $t_c$ and have a contact hyperfine interaction with the nuclear spins, $A_L$ and $A_R$ with their respective nuclei. [**(b)**]{} Eigenenergies of $H$ around $\Delta{=}0$, between the (1,1) and (0,2) charge configurations at an external magnetic field of $B_0{=}25$ mT and with hyperfine interaction strength set to $A_L{=}A_R{=}A{=}117.53$ MHz. The electron triplet states, ${|T_{11}^+ \rangle}$ (blue), ${|T_{11}^0 \rangle}$ (black), and ${|T_{11}^- \rangle}$ (pink) are split by the Zeeman energy, $\gamma_e B_0$. The ${|S_{02} \rangle}$ (green) state is detuned from the ${|S_{11} \rangle}$ (red) with an anti-crossing at $\Delta{=}0$ due to a tunnel coupling set to $t_c=A$. The (0,1) charge states are omitted for clarity.[]{data-label="fig:intro"}](Figure1.pdf){width="1\columnwidth"} As a first demonstration of the effect of quantised nuclear spin states, we study the electron transport from drain to source (reverse of Fig. \[fig:intro\]a). In this scenario the charge cycle is (0,1)[$\rightarrow$]{}(0,2)[$\rightarrow$]{}(1,1)[$\rightarrow$]{}(0,1), where ($n_L$, $n_R$) corresponds to the electron numbers on the left and right donor nuclei. In the case of quantum dots it has been shown that the current, $I_{QD}$, as a function of the detuning, $\Delta$, is given by a known expression involving the coherent and incoherent tunnel rates [@nazarov1993], $$I_{QD} = \frac{|e| \Gamma_L \left(\frac{t_c}{2}\right)^2}{\left(\frac{\Gamma_R}{2}\right)^2 + \left(\frac{t_c}{2}\right)^2(2 + \frac{\Gamma_L}{\Gamma_R}) + \Delta^2},$$ where $e$ is the electron charge and $t_c$ is the tunnel coupling between the two dots. However, without an external magnetic field ($B_0$) dependence, this equation will not account for any spin dynamics.
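The quantum-dot expression above is a Lorentzian in the detuning, peaked at $\Delta=0$. A quick numeric sketch of this line shape (the rates below are illustrative placeholders, not the experimental values; all quantities are taken in the same angular-frequency units, $\hbar=1$):

```python
import numpy as np

e = 1.602176634e-19     # elementary charge, C

def I_qd(delta, t_c, gamma_l, gamma_r):
    """Double-dot current vs detuning (Nazarov-style expression)."""
    num = e * gamma_l * (t_c / 2)**2
    den = (gamma_r / 2)**2 + (t_c / 2)**2 * (2 + gamma_l / gamma_r) + delta**2
    return num / den

delta = np.linspace(-5e9, 5e9, 1001)                 # detuning sweep
I = I_qd(delta, t_c=1e9, gamma_l=1e8, gamma_r=1e8)

# Lorentzian peak sits at zero detuning
print(f"peak current: {I.max() / 1e-12:.2f} pA")
```

With these placeholder rates the peak current is a few picoamperes, the order of magnitude typical of transport spectroscopy in such devices.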
In the case of donors, the measurable current is given by $I{=}\left|e\right|\Gamma_L P$(1,1), where $P$(1,1) is the probability of being in any (1,1) charge configuration, including the electron triplet states $\{{|T_{11}^+ \rangle}, {|T_{11}^- \rangle} \}$. ![[**Electron transport through double donors from drain to source.**]{} [**(a)**]{} The difference in current, $\delta I$, through a double donor as a function of detuning $\Delta$, and external magnetic field, $B_0$. The yellow line maps $J{=}\gamma_e B_0$ which follows a peak in $\delta I$ due to the mixing between ${|S_{02} \rangle}{\leftrightarrow}{|T_{11}^- \rangle}$ (contour lines guide the eye). [**(b)**]{} Eigen energies of $H$ at
--- abstract: 'The response of shear thickening fluids (STFs) under ballistic impact has received considerable attention due to their field-responsive nature. While efforts have primarily focused on traditional ballistic fabrics impregnated with these fluids, the response of pure STFs to penetration has received limited attention. In the present study, the ballistic response of particle-based STFs is investigated and the effects of fluid density and particle strength on ballistic performance are isolated. It is shown that the loss of ballistic resistance in the STFs at higher impact velocities is governed by the material strength of the particles in suspension. The results illustrate the range of velocities over which these STFs may provide effective armor solutions.' author: - 'Oren E. Petel' - Simon Ouellet - Jason Loiseau - 'Bradley J. Marr' - 'David L. Frost' - 'Andrew J. Higgins' title: The Effect of Particle Strength on the Ballistic Resistance of Shear Thickening Fluids --- The integration of shear thickening fluids (STFs) in armor systems, a concept reported as early as the work of Gates,[@Gates] has received considerable interest with the recent efforts to embed STFs within ballistic fabrics,[@Lee2003; @Tan; @Kalman; @Park2] which has been shown to increase their ballistic performance; however, experimental evidence also suggests performance limitations of these hybrid armor systems. This loss of performance is evident when considering steel-core projectiles,[@Gates; @Park2] particularly when multiple layers and higher impact velocities are considered.[@Tan] Park et al.[@Park2] discussed preliminary experimental results in which a loss of effectiveness was seen against steel projectiles (FSPs) above an impact velocity of 300 m/s, the same velocity range investigated by Tan et al.
[@Tan] The coupled nature of the fluid-fabric interactions makes it difficult to ascertain whether this behavior is due to a loss of performance within the STF itself or a transition in the dominant failure mode within the fibers, rendering the presence of the STF inconsequential to the ballistic response. In the present study, we investigate the ballistic penetration of several STFs, particularly focusing on the role of particle strength in determining the ballistic response of STFs through variations of the particle material and volume fraction in the suspensions. STFs are field-responsive fluids which can undergo a sudden fluid-solid transition under certain stimuli. STFs have been extensively characterized using low-stress dynamic techniques,[@Hoffman; @Barnes; @Lim2] in which liquids are considered incompressible. These conditions are not directly relevant to the dynamic high-stress environment of a ballistic impact, where compressibility effects dominate material responses.[@Field] Lee and Kim [@LeeKim] estimated the stagnation pressure at the nose of a steel projectile impacting an STF-impregnated fabric to be on the order of several gigapascals, stresses at which compressibility effects must be considered. Under ballistic conditions, in addition to traditional shear thickening mechanisms, a compression-induced clustering of particles should be expected as the liquid density and particle volume fraction increase under high pressures,[@PetelJAP; @PetelPRE; @PetelAPS1; @PetelAPS2] resulting in extensive particle force chains forming around the projectile (Fig. \[fig1\]a and \[fig1\]b); indeed, the solid-phase volume fraction in the STF can increase by as much as 10$\%$ due to the impact pressures. The *in situ* formation of force chains implies that the ballistic response of various STFs should be directly related to the strength of the suspended particles.
We investigated this hypothesis by studying the ballistic penetration of several STFs, a dilute suspension (non-shear-thickening), and neat ethylene glycol (the suspending medium for the mixtures considered). The suspended particulate phase of the STFs was varied in a manner that interrogated the influence of the material strength of the particles on penetration resistance. The experimental ballistic penetration results are presented in reference to an inertially-based penetration model to highlight the strength limitations of the various STFs. The penetration resistance was investigated by measuring the ability of the fluids to decelerate a 17 grain (1.1 g) chisel-nosed mild steel NATO-standard fragment simulating projectile (FSP), shown schematically in Fig. \[fig1\]c, in a configuration similar to that used by Nam et al.[@Nam] The samples were tested in a cylindrical aluminum capsule with an internal diameter of 38 mm and a length of 64 mm. Mylar diaphragms (0.1 mm thick), which were found to have a negligible influence on the experimental results, were used to confine the fluid samples in the capsules during the experiments. The target capsule was positioned close to the end of the smooth-bore gas-gun barrel in order to minimize the projectile yaw at the impact face. Experiments were conducted with a range of incident FSP velocities of 200 to 700 m/s. The incident and residual velocities of the FSP were measured with a Photron SA5 high-speed camera at 20,000 fps. A set of images taken of the incident and exiting projectile are shown in Fig. \[fig2\]a and \[fig2\]b respectively. ![Schematic of (a) a broken-out section view of the FSP penetrating the STF. (b) Top and side views of the FSP. \[fig1\]](STFImpact2c2_Final.eps) ![Photographs of the FSP (a) entering and (b) exiting the test capsule. \[fig2\]](ImpactPics_Final.eps) The investigation involved several mixtures, the proportions of which are given in Table \[Tab1\]. 
Particle settling was not a concern as the capsules were filled 5-10 minutes prior to the experiments and vortex mixers were used to ensure sufficient dispersion of the particles. The components of the various mixtures included liquid ethylene glycol (EG), silica (Fiber Optic Center, monodisperse spheres, $d$ = 1 $\mu$m), $\alpha$-silicon carbide (Washington Mills, irregular morphology, $d_{\mathrm{mean}}$ = 5 $\mu$m), and cornstarch (Fleischmann, $d_{\mathrm{mean}}$ = 10 $\mu$m). The three types of solid particles have drastically different material properties (Table \[Tab2\]). In order of increasing strength, the materials are cornstarch, silica, and silicon carbide. It should be noted that the silica particles used in the present study had a nano-porous structure consisting of voids which were inaccessible to the ethylene glycol. This void fraction resulted in a wetted bulk particle density of 1.85 g/cm$^{3}$, effectively containing a 16$\%$ gas-filled void fraction. The bulk particle density of the silica particles used in the present study is consistent with the wetted bulk density of silica particles used in previous ballistic experiments involving silica-based STFs.[@Lee2003] This significant gas-filled void fraction would have an adverse effect on the strength of the silica particles in comparison to the bulk-material values listed in Table \[Tab2\]. Of the mixtures that are summarized in Table \[Tab1\], three of the mixtures investigated exhibit shear thickening behaviors: 54 CS, 61 SiO$_{2}$, and 61 Mix. The 21 SiC mixture, a dilute suspension that did not exhibit shear thickening behavior, was used for a density-matched comparison to the 61 SiO$_{2}$ mixture. The velocity decrement of the FSP, the difference between the incident and residual velocities of the FSP penetrating the fluids, was used as the basis of comparison between test mixtures.
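Under a momentum-conserving, perfectly plastic plug model of the kind used below, the residual velocity follows directly from momentum balance as $V_f = \rho_p L_p V_i/(\rho_p L_p + \rho_t L_t A_r)$. A minimal numeric sketch; the projectile's effective length and the area ratio here are illustrative assumptions, not the paper's values:

```python
def residual_velocity(v_i, rho_p, l_p, rho_t, l_t, a_r):
    """Perfectly plastic plug model: momentum conservation
    rho_p*l_p*v_i = (rho_p*l_p + rho_t*l_t*a_r)*v_f."""
    return v_i * rho_p * l_p / (rho_p * l_p + rho_t * l_t * a_r)

# Illustrative numbers: steel projectile (rho_p = 7.85 g/cm^3, assumed
# effective length 0.8 cm) into a 6.4 cm column of ethylene glycol
# (rho_t = 1.11 g/cm^3), with an assumed area ratio of 1.
v_i = 400.0                                   # m/s
v_f = residual_velocity(v_i, 7.85, 0.8, 1.11, 6.4, 1.0)
print(f"velocity decrement: {v_i - v_f:.0f} m/s")
```

Because only density ratios enter, mixed units are harmless as long as $\rho L$ products are computed consistently; the decrement scales with the areal density $\rho_t L_t A_r$ of the displaced plug.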
This measure provides a means of evaluating the competition between the mixture density and the material strength of the suspended solid phase as the dominant factor in the ballistic resistance of the various fluids. : \[Tab2\] Summary of the bulk-material properties for the solid materials used in the present study. (The tabular entries were lost in extraction.) A simple analytical penetration model can be used to predict an inertially-dominated penetration behavior in targets of various densities. The assumptions inherent in this model are: (*i*) the projectile drives a plug through the target with a cross-sectional area equal to that of the chisel nose (Fig. \[fig1\]c), (*ii*) the target material has no material strength (the hydrodynamic limit), and (*iii*) the impact and penetration process is perfectly plastic, resulting in identical final velocities of the projectile and plug. The assumption concerning the cross-sectional area of the plug accounts for the divergence of material around the projectile tip. The model can therefore be adequately described by conserving momentum through the equation,$$\rho_{p} L_{p} \cdot V_{i} = \left(\rho_{p} L_{p} + \rho_{t} L_{t}A_{r}\right) \cdot V_{f} \label{eq1}$$ where $L$ is the length, $\rho$ is the density, $A_{r}$ is the area ratio of the chisel nose to the projected area of the F
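For a quick numerical check, Eq. (\[eq1\]) can be rearranged for the residual velocity $V_f$. The sketch below uses illustrative values (not the measured ones from these experiments):

```python
def residual_velocity(v_i, rho_p, L_p, rho_t, L_t, A_r):
    """Final velocity V_f from the momentum balance of Eq. (1):
    rho_p * L_p * V_i = (rho_p * L_p + rho_t * L_t * A_r) * V_f."""
    return v_i * rho_p * L_p / (rho_p * L_p + rho_t * L_t * A_r)

# Illustrative numbers only: a steel projectile (7.85 g/cm^3, 10 mm long)
# entering a 64 mm column of fluid with density 1.3 g/cm^3, area ratio 1.
v_f = residual_velocity(600.0, 7.85, 10.0, 1.3, 64.0, 1.0)
decrement = 600.0 - v_f
```

In the limit $\rho_t \to 0$ the decrement vanishes, as expected for a strength-free, purely inertial model: the predicted velocity drop depends only on the areal masses of projectile and displaced plug.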
--- abstract: 'In this paper we study the cobordism of algebraic knots associated with weighted homogeneous polynomials, and in particular Brieskorn polynomials. Under some assumptions we prove that the associated algebraic knots are cobordant if and only if the Brieskorn polynomials have the same exponents.' address: - 'Département de Mathématiques, Université de Strasbourg, 7 rue René Descartes, 67084 Strasbourg cedex, France' - 'Faculty of Mathematics, Kyushu University, Hakozaki, Fukuoka 812-8581, Japan' author: - Vincent Blanlœil - Osamu Saeki title: Cobordism of algebraic knots defined by Brieskorn polynomials --- Introduction {#section1} ============ A *Brieskorn polynomial* is a polynomial of the form $$P(z) = z_1^{a_1} + z_2^{a_2} + \cdots + z_{n+1}^{a_{n+1}}$$ with $z = (z_1, z_2, \ldots, z_{n+1})$, $n \geq 1$, where the integers $a_j \geq 2$, $j = 1, 2, \ldots, n+1$, are called the *exponents*. The complex hypersurface in ${\mathbf{C}}^{n+1}$ defined by $P=0$ has an isolated singularity at the origin, which is called a *Brieskorn singularity*. In this paper, we will study Brieskorn singularities up to cobordism. We prove that two Brieskorn singularities have cobordant algebraic knots if and only if they have the same set of exponents, provided that no exponent is a multiple of another for each of the two Brieskorn polynomials. Consequently, for such Brieskorn polynomials the multiplicity is an invariant of the cobordism class of the associated algebraic knot. To be more precise, let $f : ({\mathbf{C}}^{n+1}, 0) \to ({\mathbf{C}}, 0)$ be a holomorphic function germ with an isolated critical point at the origin. We denote by $D^{2n+2}_{\varepsilon}$ the closed ball of radius $\varepsilon > 0$ centred at $0$ in ${\mathbf{C}}^{n+1}$, and by $S^{2n+1}_{\varepsilon}$ its boundary. 
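As an illustration of the divisibility hypothesis (the example exponents here are ours, not taken from the paper): the triple $(2,3,5)$ satisfies the condition that no exponent is a multiple of another, while $(2,3,4)$ does not, since $4 = 2 \cdot 2$:

```latex
% Exponent triples illustrating the hypothesis:
%   (2,3,5): no exponent is a multiple of another -> the theorem applies;
%   (2,3,4): 4 is a multiple of 2                 -> the hypothesis fails.
\[
  P_1(z) = z_1^{2} + z_2^{3} + z_3^{5}, \qquad
  P_2(z) = z_1^{2} + z_2^{3} + z_3^{4}.
\]
```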
According to Milnor [@Milnor], the oriented homeomorphism class of the pair $(D^{2n+2}_{\varepsilon}, f^{-1}(0) \cap D^{2n+2}_{\varepsilon})$ does not depend on the choice of a sufficiently small $\varepsilon > 0$, and by definition it is the *topological type* of $f$. (For other equivalent definitions, we refer the reader to [@King; @Perron; @Saeki89].) The oriented diffeomorphism class of the pair $(S^{2n+1}_{\varepsilon}, K_f)$, with $K_f = f^{-1}(0) \cap S^{2n+1}_{\varepsilon}$, is the *algebraic knot* associated with $f$, where $K_f$ is a closed oriented $(2n-1)$-dimensional manifold. According to Milnor’s cone structure theorem [@Milnor], the algebraic knot $K_f$ determines the topological type of $f$. In fact, it is known that the converse also holds. \[dfn:cob\] An *$m$-dimensional knot* (*$m$-knot*, for short) is a closed oriented $m$-dimensional submanifold of the oriented $(m+2)$-dimensional sphere $S^{m+2}$. Two $m$-knots $K_0$ and $K_1$ in $S^{m+2}$ are said to be *cobordant* if there exists a properly embedded oriented $(m+1)$-dimensional submanifold $X$ of $S^{m+2} \times [0,1]$ such that 1. $X$ is diffeomorphic to $K_0 \times [0,1]$, and 2. $\partial X = (K_0 \times \{0\}) \cup (-K_1 \times \{1\}).$ Such a manifold $X$ is called a *cobordism* between $K_0$ and $K_1$ (see Fig. \[fig2\]). 
(Fig. \[fig2\]: schematic of a cobordism $X$ between the knots $K_0$ and $K_1$; the original picture environment could not be reproduced.) In [@BM], for $n \geq 3$, necessary and sufficient conditions for two algebraic $(2n-1)$-knots to be cobordant have been obtained in terms of Seifert forms (for the definition of the Seifert form, see §\[section2\]). However, the computation of the Seifert form of a given algebraic knot is very difficult, and an explicit calculation is known only for a very limited class of algebraic knots. (In fact, even for algebraic knots associated with weighted homogeneous polynomials, Seifert forms have not been determined yet, as far as the authors know.) Furthermore, even if we know the Seifert forms explicitly, it is still difficult to see whether two given such forms satisfy the algebraic conditions of [@BM] or not. So, it is worthwhile to study the conditions for two algebraic knots associated with weighted homogeneous polynomials to be cobordant. We note that cobordism does not necessarily imply isotopy for algebraic knots in general. For details, see the survey article [@BS]. It is known that cobordant algebraic knots have Witt equivalent Seifert forms (for details, see §\[section2\]). In this paper, we give a necessary and sufficient condition for two algebraic knots associated with weighted homogeneous polynomials to have Witt equivalent Seifert forms over the real numbers in terms of their weights.
Using this result, we give some conditions for two algebraic knots associated with Brieskorn polynomials to be cobordant in terms of the exponents. Under some assumptions, we show that two such knots are cobordant if and only if the Brieskorn polynomials have the same set of exponents. The paper is organized as follows. In §\[section2\], we state our results. We give a necessary and sufficient condition for two nondegenerate weighted homogeneous polynomials to have Witt equivalent Seifert forms over the real numbers, in terms of their weights. Then, we give more explicit results for Brieskorn polynomials. In §\[section3\], we prove the results stated in §\[section2\]. In §\[section4\], we give more precise results in the case of two and three variables. Throughout the paper we work in the smooth category. All the homology groups are with integer coefficients unless otherwise specified. Results {#section2} ======= Let $f(z)$ be a polynomial in ${\mathbf{C}}^{n+1}$ with an isolated critical point at the origin. We denote by $F_f$ the *Milnor fiber* associated with $f$, i.e., $F_f$ is the closure of a fiber of the Milnor fibration $\varphi_f : S^{2n+1}_\varepsilon \setminus K_f \to S^1$ defined by $\varphi_f(z) = f(z)/|f(z)|$. According to Milnor [@Milnor], $F_f$ is a compact $2n$-dimensional submanifold of $S^{2n+1}_\varepsilon$ which is homotopy equivalent to the bouquet of a finite number of copies of the $n$-dimensional sphere. The Seifert form $$L_f : H_n(F_f) \times H_n(F_f) \to {\mathbf{Z}}$$ associated with $f$ is defined by $$L_f(\alpha, \beta) = \mathrm{lk}(a_+, b),$$ where $a$ and $b$ are $n$-cycles representing $\alpha$ and $\beta$ in $H_n(F_f)$ respectively, $a_+$ is the $n$-cycle in $S^{2n+1}_\varepsilon$ obtained by pushing $a$ into the positive normal
--- author: - 'C. Martayan' - 'M. Floquet' - 'A.M. Hubert' - 'J. Gutiérrez-Soto' - 'J. Fabregat' - 'C. Neiner' - 'M. Mekkas' date: 'Received /Accepted' title: ' Be stars and binaries in the field of the SMC open cluster NGC330 with VLT-FLAMES' --- Introduction ============ The Magellanic Clouds (MC), which contain a huge number of early-type stars, are particularly appropriate to investigate the effect of low metallicity on the B and Be star populations, compared to those in the Milky Way (MW). The FLAMES-GIRAFFE instrumentation [@pasquini02] installed at the VLT allowed us to obtain the significant samples of B and Be star spectra in the Large and Small Magellanic Clouds (LMC and SMC) which are needed to achieve our goal. In @marta06a, we presented an overview of spectroscopic results for 176 early-type stars observed in the field of the LMC open cluster NGC2004. In @marta06b and @marta07 (hereafter Papers I and II) we searched for the effects of metallicity in the LMC and SMC, respectively. We showed that the lower the metallicity, the higher the rotational velocities. These observational results support theoretical predictions by @meynet00 [@meynet02] and @maeder01 for massive stars. Therefore the percentage of Be stars seems to be higher in lower metallicity environments such as the SMC (Z$<$0.001). In this fourth paper we present an overview of spectroscopic results for 346 early-type stars observed in the field of the SMC open cluster NGC330 with VLT-FLAMES. Note that the determination of their fundamental parameters (, , , and ) has already been reported in Paper II. We also search for pulsators among our Be star sample through an analysis of their MACHO[^1] data. Theoretically, the pulsational instability of hot stars depends strongly on metallicity. @pam99 showed that the instability strip for and Slow Pulsating B stars (SPB) vanishes at Z$<$0.01 and Z$<$0.006, respectively.
Thus, Be stars, which show the same pulsational characteristics as those of classical B-type pulsators [e.g. @neiner05; @walker05a; @walker05b] in the MW, are among the best objects in the SMC to test their theoretical predictions.\ In the present paper, the observations and the reduction process are described in Sect. \[reduc\]. In Sect. \[result\] we present the characteristics of the H$\alpha$ emission line and the proportion of Be stars in the field as well as in clusters and OB associations. We perform a comparison with Be stars in the MW (Sect. \[propBe\]). In Sect. \[var\] we describe the variability detected in our Be stars sample thanks to an investigation of the MACHO and OGLE databases [@ogleI]. We report on the discovery of spectroscopic and photometric binaries (Sect. \[bin\]) and the identification of multi-periods in the light curves of several objects (Sect. \[varbesh\]) which pleads in favour of pulsations. We discuss the impact of metallicity on the proportion of Be stars and the presence of pulsations in Sect. \[discussion\]. Finally, we give a detailed study of 3 peculiar emission line objects that are not classical Be stars (Appendix \[pec\]), of binary systems (Appendix \[appB\]), and of short-term periodic Be stars (Appendix \[indBe\]). Observations {#reduc} ============ Spectra of a significant sample of the B star population in the young cluster SMC NGC330 and its surrounding field have been obtained with the ESO VLT-FLAMES facilities, as part of the Guaranteed Time Observation programs of the Paris Observatory (P.I.: F. Hammer). The multi-fibre spectrograph VLT-FLAMES has been used in MEDUSA mode (132 fibres) at medium resolution.\ As shown in Paper I and II the use of the setup LR02 (396.4–456.7 nm, hereafter blue spectra) is adequate for the determination of fundamental parameters, while the LR06 setup (643.8–718.4 nm, hereafter red spectra) is used to identify Be stars and to study the H$\alpha$ emission line characteristics. 
The spectral resolution is 6400 for LR02 and 8600 for LR06. The respective instrumental broadenings are $\simeq$ 50  and 35 . Observations (ESO runs 72.D-0245A and 72.D-0245C) were carried out on October 21, 22 and 23, 2003 (blue and red spectra) and on September 9 (blue spectra) and 10 (red spectra), 2004. The observational seeing ranged from 0.4 to 2$\arcsec$.\ The observed fields are centred at $\alpha$(2000) = 00h 55mn 15s and $\delta$(2000) = $-$72$^{\circ}$ 20$\arcmin$ 00$\arcsec$ for the observations of 2003 and at $\alpha$(2000) = 00h 55mn 25s and $\delta$(2000) = $-$72$^{\circ}$ 23$\arcmin$ 30$\arcsec$ for the run of 2004. Besides the young cluster NGC330, these fields contain several high-density groups of stars. The position of all our stellar and sky fibre targets is plotted in Fig. \[figure0\] (online).\ A sample of 346 stars among the 5370 B-type star candidates located in the selected fields has been observed during the two observing runs (see Paper II, Sect. 2). It contains 131 Be stars, 202 B-type stars, 4 O-type stars, 6 A-type stars, and 3 other types of stars, which are discussed in this paper.\ The data reduction was performed as described in Papers I and II. The S/N ratio of spectra obtained in the blue region varies from $\sim$ 15 for the fainter objects to $\sim$ 135 for the brighter ones (see Table 3 in Paper II).\ Results {#result} ======= After subtraction of the sky line contribution, it appeared that more than 80% of the 346 stars are contaminated by nebular lines. This contamination is particularly detectable in the H$\alpha$ line. Depending on the intensity level of the nebular H$\alpha$ line, a weak nebular contribution can also be detected in forbidden lines of \[NII\] at 6548 and 6583 [Å]{} and \[SII\] at 6717 and 6731 [Å]{} in the LR06 setup. When the nebular H$\alpha$ line is strong, the H$\gamma$ and H$\delta$ line profiles are also affected by a nebular component.
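The quoted instrumental broadenings follow directly from the resolving powers, since $R = \lambda/\Delta\lambda$ implies a velocity resolution $\Delta v = c/R$; a minimal check:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def instrumental_broadening(R):
    """Velocity resolution (km/s) of a spectrograph with resolving
    power R = lambda / Delta(lambda): Delta(v) = c / R."""
    return C_KM_S / R
```

For R = 6400 and R = 8600 this gives roughly 47 and 35 km/s, consistent with the approximate values quoted above.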
With the same technique as the one used to identify stars with the Be phenomenon in the LMC [@marta06a], we tried to disentangle the circumstellar line emission (CS) component from emission produced by the nebular line in polluted spectra. Stellar and nebular radial velocities {#rv} ------------------------------------- For each star, the radial velocity () of nebular lines (H$\alpha$, \[NII\], and \[SII\]) has been measured and compared to the stellar . The mean accuracy is $\pm$ 10 . The statistical distribution of stellar and nebular RVs is mono-modal with a maximum at +155  and +165 , respectively. The difference between stellar and nebular lines peaks around -10  and does not seem to indicate any clear link between stars and the structures giving rise to the nebular lines. However, due to the weak difference between the stellar and nebular RVs, it has not been easy to correctly estimate the nebulosity contribution for Be stars which present a single-peaked H$\alpha$ emission line profile in their spectrum. Be stars {#Be} -------- Our sample contains 131 Be stars (see Table \[tablebeobs\]): 41 known Be stars, including 39 from @keller99b and 2 others from @grebel92b, and 90 Be stars discovered in Paper II (see Table 3 therein). Note that among this second group of Be stars, 28 are suspected to be emission line stars from a slit-less ESO-WFI survey [@marta06c]. ### Emission-line characteristics {#emhalpha} The equivalent width (EW), maximum intensity (Imax for a single peak, and I(V) and I(R) for the violet and red peaks respectively in a double-peak emission), and the Full Width at Half Maximum (FWHM) of the circumstellar H$\alpha$ emission measured for each Be star are given in Table \[tablebeobs\]. The FWHM of the CS H$\alpha$ emission line ranges from 146 to 622 .
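The EW and FWHM measurements described above can be sketched for a continuum-normalized spectrum as follows (a generic illustration of the quantities, not the reduction code used for the survey):

```python
import numpy as np

def line_measurements(wave, flux, cont=1.0):
    """Equivalent width and FWHM of an emission line in a
    continuum-normalized spectrum.  EW = int (1 - F/F_c) dlambda,
    hence negative for emission; the FWHM is read off where the
    continuum-subtracted profile crosses half of its peak value."""
    depth = 1.0 - flux / cont
    # trapezoidal integration, written out for portability
    ew = float(np.sum(0.5 * (depth[1:] + depth[:-1]) * np.diff(wave)))
    excess = flux - cont
    peak = float(excess.max())
    if peak <= 0.0:
        return ew, 0.0
    above = wave[excess >= 0.5 * peak]
    return ew, float(above.max() - above.min())
```

For a Gaussian emission line of amplitude $A$ and width $\sigma$ this recovers EW $= -A\sigma\sqrt{2\pi}$ and FWHM $= 2\sqrt{2\ln 2}\,\sigma$, which is one way to sanity-check a measurement pipeline.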
The telluric lines as well as the nebular lines are resolved, and the FWHM of the H$\alpha$ nebular line is $\sim$ 60 .\ Not correcting for the nebular H$\alpha$ emission line leads to an overestimate of I$_{max}$ and an underestimate of the FWHM of the CS emission line. To a lesser degree, it also leads to an overestimate of EW${\alpha}$. We thus determined and subtracted this contribution from the H$\alpha$ emission line profile. As in @marta06a, we used the nebular ratio \[\]/H$\alpha$ to estimate the nebular contribution in the CS H$\alpha$ line in Be star spectra. From the fibres located on sky positions (
--- abstract: | We have performed [[*HST*]{}]{} imaging of a sample of 23 high-redshift ($1.8<z<2.75$) Active Galactic Nuclei, drawn from the [[combo-17]{}]{}survey. The sample contains moderately luminous quasars ($M_B \sim -23$). The data are part of the [[gems]{}]{} imaging survey that provides high resolution optical images obtained with the Advanced Camera for Surveys in two bands ([F606W]{} and [F850LP]{}), sampling the rest-frame UV flux of the targets. To deblend the AGN images into nuclear and resolved (host galaxy) components we use a PSF subtraction technique that is strictly conservative with respect to the flux of the host galaxy. We resolve the host galaxies in both filter bands in 9 of the 23 AGN, whereas the remaining 14 objects are considered non-detections, with upper limits of less than 5 % of the nuclear flux. However, when we coadd the unresolved AGN images into a single high signal-to-noise composite image we find again an unambiguously resolved host galaxy. The recovered host galaxies have apparent magnitudes of $23.0<\mathrm{{F606W}}<26.0$ and $22.5<\mathrm{{F850LP}}<24.5$ with rest-frame UV colours in the range $-0.2<(\mathrm{{F606W}}-\mathrm{{F850LP}})_\mathrm{obs}<2.3$. The rest-frame absolute magnitudes at 200 nm are $-20.0<M_{200~\mathrm{nm}}<-22.2$. The photometric properties of the composite host are consistent with the individual resolved host galaxies. We find that the UV colors of all host galaxies are substantially bluer than expected from an old population of stars with formation redshift $z\le5$, independent of the assumed metallicities. These UV colours and luminosities range up to the values found for Lyman-break galaxies (LBGs) at $z=3$. Our results suggest either a recent starburst, of e.g. a few per cent of the total stellar mass and 100 Myrs before observation, with mass-fraction and age strongly degenerate, or the possibility that the detected UV emission may be due to young stars forming continuously. 
For the latter case we estimate star formation rates of typically $\sim$$6\,\mathrm{M}_\odot\;\mathrm{yr}^{-1}$ (uncorrected for internal dust attenuation), which again lies in the range of rates implied from the UV flux of LBGs. Our results agree with the recent discovery of enhanced blue stellar light in AGN hosts at lower redshifts. author: - 'K. Jahnke, S. F. Sánchez, L. Wisotzki, M. Barden, S. V. W. Beckwith, E. F. Bell, A. Borch, J. A. R. Caldwell, B. Häu[ß]{}ler, S. Jogee, D. H. McIntosh, K. Meisenheimer, C. Y. Peng, H.-W. Rix, R. S. Somerville and C. Wolf' title: 'UV light from young stars in GEMS quasar host galaxies at $1.8<z<2.75$' --- Introduction {#sec:intro} ============ Around redshifts of $z\sim 2$–3, luminous quasars were orders of magnitude more numerous than today. Although the physics of how active galactic nuclei evolve is still not understood, several links between galaxy and quasar evolution have emerged over recent years. The observational confirmation of supermassive black holes in the nuclei of all galaxies with a substantial bulge component [e.g. @gebh00] makes every such galaxy a potential AGN host. The strong evolution of the AGN space density could therefore be related to the availability of accretion fuel in the host galaxies, or to the frequency of AGN triggering events. Gravitational interaction and major or minor merging of galaxies have long been suggested as important factors in driving nuclear activity in galaxies. Confirming any of these as the dominant process has proved difficult, mainly because the morphological characteristics found for relatively nearby AGN host galaxies are so diverse. Furthermore, the properties of the hosts in the ‘heyday’ of quasars ($z \ga 2$) are still elusive, a consequence of the substantial observational difficulties. 
The contrast between the bright nuclear point sources and the surrounding galaxy increases dramatically beyond $z\sim 1$ as a result of both surface brightness dimming and waveband shifts towards the rest frame UV. Recent years have seen numerous attempts to resolve the host galaxies of high-redshift quasars. Owing to the observational challenges of detecting distant host galaxies the observational effort for each object is large, and the observed samples have consequently been very small, of the order of $\la 5$ per target group. While radio-loud quasars appear to be very extended and have been resolved out to $z \sim 4$ [e.g., @lehn92; @carb98; @hutc99; @kuku01; @hutc03; @sanc03], this is not the case for the large majority of radio-quiet quasars. At high redshifts two constraints dominate observational studies of host galaxies: On one side, very good seeing conditions are required to maximize the spatial contrast of the compact nuclear source compared to the extended host galaxy. On the other, large telescope apertures are preferable for tracing faint quasar hosts to as far away from the nucleus as possible. Thus significant progress had to wait for 8m-class telescopes at very good sites with active optics systems – with a very high light collecting power but atmospheric seeing limitations – and for the [[*HST*]{}]{} and its high space-based sensitivity, combined with unprecedented spatial resolution, but limited size that might miss light from faint outer structures of the hosts. Some host galaxies of radio-quiet quasars at $z\simeq 2$ have now been resolved both in the near infrared [@aret98; @kuku01; @ridg01; @falo04] and in the optical domains [@hutc02], showing these objects to be moderately luminous, corresponding to present-day $L^\star$ or slightly brighter. However, host galaxy colours have been unavailable, precluding estimates of the mass-to-light ratio ($M/L$).
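A "conservative" nucleus-host decomposition of the kind referred to in the abstract can be sketched as follows. The particular criterion shown, scaling the PSF by the largest factor that leaves a nonnegative residual, is a common minimal choice and is our assumption for illustration, not necessarily the exact procedure of the paper:

```python
import numpy as np

def conservative_host(image, psf):
    """Subtract the brightest point source consistent with a nonnegative
    residual: scale the PSF by the largest factor s such that
    image - s * psf >= 0 everywhere.  The residual is then a strict
    lower limit on the host-galaxy flux."""
    mask = psf > 0
    s = np.min(image[mask] / psf[mask])
    return image - s * psf, s
```

Because any larger scaling would drive some pixel negative, the residual image attributes as much flux as possible to the nucleus, so any detected extended light is a robust host detection rather than a subtraction artifact.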
Thus, without colours the observed luminosities, and their evolution with redshift, cannot be mapped to the mass evolution if young stars contribute a major fraction of the AGN hosts’ light. This is important as several high-luminosity quasars at $z \ga 2$ appear to be located in very UV-luminous host galaxies [@lehn92; @aret98; @hutc02]. Also, at low redshifts there is a link between nuclear activity and enhanced global star formation in the host galaxies. @kauf03 reported that SDSS spectra of local Seyfert 2 galaxies show a significant contribution from young stellar populations, and that this trend is strongly correlated with nuclear luminosity. In a multicolour study of low-$z$ QSO hosts [@jahn04a] as well as at intermediate redshifts [@sanc04a see below] we found that hosts of elliptical morphology can be significantly bluer than the bulk of inactive ellipticals. These results indicate that in the recent past the star formation activity in galaxies hosting an AGN may have differed from that in normal galaxies. The details are far from understood. Clearly more information is required to investigate the relation of star formation and AGN activity, their common cause or causal order and the evolution of these properties with redshift. The new generation of wide-field imaging mosaics obtained with the Hubble Space Telescope ([[*HST*]{}]{}), especially in conjunction with deep AGN surveys, has opened a new observational avenue towards AGN host galaxy studies. Here we present first results on AGN within the [[gems]{}]{} project [@rix04], the largest [[*HST*]{}]{} colour mosaic to date. In the present paper we investigate the presence of rest-frame ultraviolet light in a substantial sample of $z>1.8$ AGN, all with nuclear luminosities near $M_B = -23$. In a companion paper [@sanc04a] we study rest-frame colours and morphological properties of a sample of intermediate-redshift ($z \la 1$) AGN. The paper is organised as follows.
We first describe the sample selection and properties together with a summary of the observational data (Sect. \[sec:data\]). We then comment on the decomposition of the nuclear and galaxy contribution, including a brief summary of the extensive simulations that we use to estimate measurement errors (Sect. \[sec:analysis\]). In Sect. \[sec:results\] we present the measured host galaxy magnitudes and describe our treatment of non-detections. We move on to discuss the results in Sect. \[sec:discussion\], followed by our conclusions in Sect. \[sec:conclusions\]. We use $H_0=70$kms$^{-1}$Mpc$^{-1}$, $\Omega_m=0.3$ and $\Omega_\Lambda = 0.7$ throughout this paper. All quoted magnitudes are zeropointed to the AB system with ZP$_\mathrm{F606W}=26.493$ and ZP$_\mathrm{F850LP}=24.843$. AGN in the GEMS survey {#sec:data} ====================== Overall survey properties ------------------------- [[gems]{}]{}, Galaxy Evolution from Morphologies and SEDs [@rix04] is a large imaging
--- author: - 'Michele Leone, Sumedha, and Martin Weigt' title: | Unsupervised and semi-supervised clustering by message passing:\ Soft-constraint affinity propagation --- Introduction ============ Clustering is a very important problem in data analysis [@JAIN; @DUDA]. Starting from a set of data points, one tries to group the data such that points in one cluster are more similar to each other than points in different clusters. The hope is that such a grouping unveils common functional characteristics. As an example, one of the currently most important application fields for clustering is the computational analysis of biological high-throughput data, as given e.g. by gene expression data. Different cell states result in different expression patterns. If data are organized in a well-separated way, one can use one of the many unsupervised clustering methods to divide them into classes [@JAIN; @DUDA]; but if clusters overlap at their borders or if they have involved shapes, these algorithms in general face problems. However, clustering can still be achieved using a small fraction of previously labeled data (training set), making the clustering [*semi-supervised*]{} [@BOOK; @DOMANY2]. While designing algorithms for semi-supervised clustering, one has to be careful: They should efficiently use both types of available information, the geometrical organization of the data points and the already assigned labels. In general there is not only one possible clustering. If one goes to a very fine scale, each single data point can be considered its own cluster. On a very rough scale, the whole data set becomes a single cluster. These two extreme cases may be connected by a full hierarchy of cluster-merging events. This idea is the basis of the oldest clustering method, which is still amongst the most popular ones: [*hierarchical agglomerative clustering*]{} [@SOKAL; @JOHNSON].
It starts with clusters being isolated points, and in each algorithmic step the two closest clusters are merged (with the cluster distance given, e.g., by the minimal distance between pairs of cluster elements), until only one big cluster appears. This process can be visualized by the so-called dendrogram, which clearly displays possible hierarchical structures. The strong point of this algorithm is its conceptual clarity combined with an easy numerical implementation. Its major problem is that it is a greedy and local algorithm: no decision can be reversed. A second traditional and broadly used clustering method is [*K-means clustering*]{} [@MCQUEEN]. In this algorithm, one starts with a random assignment of data points to $K$ clusters, calculates the center of mass of each cluster, reassigns points to the closest cluster center, recalculates cluster centers etc., until the cluster assignment has converged. This method can be implemented very efficiently, but it shows a strong dependence on the initial condition, getting trapped by local optima. So the algorithm has to be rerun many times to produce reliable clusterings, and the algorithmic efficiency is decreased. Furthermore, $K$-means clustering assumes spherical clusters; elongated clusters tend to be divided artificially into sub-clusters. A first statistical-physics based method is [*super-paramagnetic clustering*]{} [@DOMANY1; @DOMANY2]. The idea is the following: First the network of pairwise similarities is preprocessed: only links to the closest neighbors are kept. On this sparsified network a ferromagnetic Potts model is defined. Between the paramagnetic high-temperature and the ferromagnetic low-temperature phase a super-paramagnetic phase can be found, where large clusters are already aligned. Using Monte-Carlo simulations, one measures the pairwise probability for any two points to take the same value of their Potts variables.
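The $K$-means iteration described above (random initial centers, assignment to the nearest center, recomputation of centers as cluster means) can be sketched as:

```python
import numpy as np

def kmeans(X, k, n_iter=100, seed=0):
    """Lloyd's algorithm: assign each point to its nearest center,
    recompute centers as cluster means, iterate to convergence."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):
        # squared distances of every point to every center -> labels
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers
```

On well-separated data this converges in a few iterations; on harder data the strong dependence on the initial condition noted above shows up as convergence to different local optima for different seeds, which is why the algorithm is usually restarted several times.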
If this probability is large enough, these points are identified as belonging to the same cluster. This algorithm is very elegant since it does not assume any cluster number or structure, nor does it use greedy methods. Due to the slow equilibration dynamics in the super-paramagnetic regime it needs, however, the implementation of sophisticated cluster Monte-Carlo algorithms. Note that super-paramagnetic clusterings can also be obtained by message passing techniques, but these require an explicit breaking of the symmetry between the values of the Potts variables to give non-trivial results. In recent years, many new clustering methods have been proposed. One particularly elegant and powerful method is [*affinity propagation*]{} (AP) [@FREY], which also gave the inspiration for our algorithm. The approach is slightly different: Each data point has to select an exemplar from among all other data points. This is done so as to maximize the overall similarity between data points and exemplars. The selection is, however, restricted by a hard constraint: Whenever a point is chosen as an exemplar by somebody else, it is forced to also be its own exemplar. Clusters are consequently given as all points with a common exemplar. The number of clusters is regulated by a chemical potential (given in form of a self-similarity of data points), and good clusterings are identified via their robustness with respect to changes in this chemical potential. The computationally hard task of optimizing the overall similarity under the hard constraints is solved via message passing [@YEDIDIA; @MAXSUM], more precisely via belief propagation, which is equivalent to the Bethe-Peierls approximation / the cavity method in statistical physics [@MezardParisi; @MyBook]. Despite the very good performance on test data, AP also has some drawbacks: It assumes again more or less spherical clusters, which can be characterized by a single cluster exemplar.
It does not allow for higher order pointing processes. A last concern is the robustness: Due to the hard constraint, the change of one single exemplar may result in a large avalanche of other changes. The aim of [*soft-constraint affinity propagation*]{} (SCAP) is to use the strong points and ideas of affinity propagation – the exemplar choice fulfilling a global optimization principle, the computationally efficient implementation via message-passing techniques – but curing the problems arising from the hard constraints. In [@SCAP] we have proposed a first version of this algorithm, and have shown that on gene-expression data it is very powerful. In this article, we propose a simplified version which is more efficient. Finally we show that SCAP also allows for a particularly elegant generalization to the semi-supervised case, [*i.e.*]{} to the inclusion of partially labeled data. As shown on some artificial and biological benchmark data, the partial labeling makes it possible to extract the correct clustering even in cases where the unsupervised algorithm fails. The plan of the paper is the following: After this Introduction, we present in Sec. \[sec:scap\] the clustering problem and the derivation of SCAP, and we discuss time- and memory-efficient implementations which become important in the case of huge data sets. In Sec. \[sec:data\] we test the performance of SCAP on artificial data with clustered and hierarchical structures. Sec. \[sec:semi\] is dedicated to the generalization to semi-supervised clustering, and we conclude in the final Sec. \[sec:conclusion\]. The algorithm {#sec:scap} ============= Formulation of the problem -------------------------- The basic input to SCAP is a set of pairwise similarities $S(\mu,\nu)$ between any two data points $\mu,\nu\in \{1,...,N\}$. In many cases, these similarities are given by the negative (squared) Euclidean distances between data points or by some correlation measure (as Pearson correlations) between data points.
In principle they need not even be symmetric in $\mu$ and $\nu$, as they might represent conditional dependencies between data points. The choice of the correct similarity measure will certainly influence the quality and the details of the clusterings found by SCAP; it depends on the nature of the data to be clustered. Here we therefore assume the similarities to be given. The main idea of SCAP is that each data point $\mu$ selects some other data point $\nu$ as its [*exemplar*]{}, i.e. as some reference point for itself. The exemplar choice is therefore given by a mapping $$\label{eq:c_map} {\mathbf c}:\ \ \{1,...,N\} \ \mapsto\ \{1,...,N\}$$ where, in contrast to the original AP and the previous version of SCAP, no self-exemplars are allowed: $$\label{eq:no_self_exemplar} \forall \mu\in\{1,...,N\}:\ \ c_\mu \neq \mu\ .$$ The mapping ${\mathbf c}$ defines a directed graph with links going from data points to their exemplars, and clusters in this approach correspond to the connected components of (an undirected version of) this graph. The aim in constructing ${\mathbf c}$ is to minimize the Hamiltonian, or cost function, $$\label{eq:H} {\cal H} ({\mathbf c}) = - \sum_{\mu=1}^N S(\mu,c_\mu)\ +\ p\ {\cal N}_c\ ,$$ with ${\cal N}_c$ being the number of distinct selected exemplars. This Hamiltonian consists of two parts: the first is the negative sum of the similarities of all data points to their exemplars, so the algorithm tries to maximize this accumulated similarity. However, this term alone would lead to a local greedy clustering in which each data point chooses its closest neighbor as an exemplar. The resulting clustering would contain ${\cal O}(N)$ clusters, so increasing the amount of data would lead to more instead of better-defined clusters. The second term serves to [*compactify*]{} the clusters: $\chi_\mu$ is one
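The cost function ${\cal H}({\mathbf c})$ is cheap to evaluate for any candidate exemplar mapping; a minimal sketch (notation ours), which also enforces the no-self-exemplar condition:

```python
import numpy as np

def scap_cost(S, c, p):
    """Evaluate H(c) = -sum_mu S(mu, c[mu]) + p * N_c.

    S : (N, N) similarity matrix.
    c : integer array with c[mu] = exemplar chosen by point mu
        (self-exemplars c[mu] == mu are forbidden).
    p : chemical potential penalizing the number N_c of distinct exemplars.
    """
    c = np.asarray(c)
    if np.any(c == np.arange(len(c))):
        raise ValueError("self-exemplars are not allowed: c_mu != mu")
    n_c = len(np.unique(c))                       # number of distinct exemplars N_c
    return -S[np.arange(len(c)), c].sum() + p * n_c
```

The trade-off in the text is visible here: a larger $p$ makes additional distinct exemplars more expensive, pushing the minimizer toward fewer, more compact clusters.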
--- abstract: 'We consider two systems of wave equations whose wave-packet solutions have trajectories that are altered by the “anomalous velocity” effect of a Berry curvature. The first is the matrix Weyl equation describing cyclotron motion of a charged massless fermion. The second is Maxwell equations for the whispering-gallery modes of light in a cylindrical waveguide. In the case of the massless fermion, the anomalous velocity is obscured by the contribution from the magnetic moment. In the whispering gallery modes the anomalous velocity causes the circumferential light ray to creep up the cylinder at the rate of one wavelength per orbit, and can be identified as a continuous version of the Imbert-Fedorov effect.' author: - MICHAEL STONE title: Berry phase and anomalous velocity of Weyl fermions and Maxwell photons --- Introduction ============ In many quantum systems the motion of a wave-packet is governed by semiclassical equations of the form [@blount; @sundaram-niu; @niu; @horvathy] $$\begin{aligned} \dot{\bf k} &= -\frac{\partial V}{\partial {\bf x}}+ e\left(\dot{\bf x}\times {\bf B}\right), \label{EQ:lorentz}\\ \dot{\bf x} &= \frac{\partial \varepsilon}{\partial {\bf k}} + \dot{\bf k}\times {\bm \Omega}. \label{EQ:chang-niu}\end{aligned}$$ In the absence of the last term in the second equation these would just be Hamilton’s equations for a particle with hamiltonian ${\mathcal H}({\bf x},{\bf k})=\varepsilon({\bf k})+V({\bf x})$ moving in a magnetic field. The additional $\dot {\bf k}\times {\bm \Omega}$ term in (\[EQ:chang-niu\]) is the [*anomalous velocity*]{} correction to the naïve group velocity $\partial \varepsilon/\partial{\bf k}$. The vector ${\bm \Omega}$ is a function of the kinetic momentum ${\bf k}$ only, and is a Berry curvature which has different origins in different systems. For a Bloch electron in an energy band in a solid the curvature accounts for the effects of all other bands. In particle-physics applications the curvature arises from the intrinsic angular momentum of the particle.
In all cases it affects the velocity because different momentum components of a localized wave-packet accumulate different geometric phases when both ${\bf k}$ is changing and the Berry curvature is non-zero [@chong]. These ${\bf k}$-dependent geometric phases are just as significant in determining the wave-packet position as the ${\bf k}$-dependent dynamical phases arising from the dispersion equation $\omega=\varepsilon({\bf k})$. A particularly simple example occurs in the dynamics of massless relativistic fermions and in Weyl semimetals where bands touch at a point. In both systems the wavepackets are solutions of a Weyl “half-Dirac” equation and the Berry curvature arises because the spin (or pseudo-spin) vector is locked to the direction of the momentum ${\bf k}$. The forced precession of a spin-$S$ vector is the paradigm in the original Berry-phase paper [@berry1] and the corresponding curvature is simply $${\bm \Omega}({\bf k})= S\,\frac{\hat {\bf k}}{|{\bf k}|^{2}}. \label{EQ:berry-curvature}$$ Even this basic example gives rise to much physics — the axial and gauge anomalies [@stephanov; @stone-dwivedi; @dwivedi-stone] and the chiral magnetic and vortical effects [@vilenkin; @fukushima; @son3]. The semiclassical analyses that reveal the anomalous velocity in (\[EQ:chang-niu\]) are quite intricate, as they have to go beyond the leading WKB ray-tracing equation. It is the aim of this paper to consider two simple systems in which the predicted anomalous velocity effect can be sought directly in the stationary eigenfunctions of the underlying wave equation. In both cases the curvature is given by (\[EQ:berry-curvature\]). The first (in section \[SEC:cyclotron\]) is the circular motion of a massless charged spin-1/2 particle in a magnetic field. The second (in section \[SEC:whisper\]) is the circular motion of a spin-1 photon in an optical fibre waveguide. In the first case the presence of the anomalous velocity is obscured by the coupling of the magnetic field to the particle’s magnetic moment.
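The coupled equations (\[EQ:lorentz\])–(\[EQ:chang-niu\]) are linear in $\dot{\bf x}$ once $\dot{\bf k}=e\,\dot{\bf x}\times{\bf B}$ is substituted, so they can be solved in closed form. The following sketch (our own notation and a simplifying assumption: $V=0$, $\varepsilon=|{\bf k}|$, ${\bf B}=-B\hat{\bf z}$, and the $S=1/2$ curvature ${\bm\Omega}={\bf k}/2|{\bf k}|^3$) exhibits the anomalous drift along $z$ for an in-plane momentum:

```python
import sympy as sp

# Solve the semiclassical equations for a massless particle (eps = |k|, V = 0)
# in B = -B zhat, with Berry curvature Omega = k/(2|k|^3), i.e. spin S = 1/2.
e, B, kx, ky = sp.symbols('e B k_x k_y', positive=True)
k = sp.Matrix([kx, ky, 0])                  # in-plane momentum (k_z = 0)
K = sp.sqrt(k.dot(k))
vgroup = k / K                              # naive group velocity of eps = |k|
Omega = k / (2 * K**3)                      # Berry curvature for S = 1/2
Bvec = sp.Matrix([0, 0, -B])

v1, v2, v3 = sp.symbols('v1 v2 v3')         # unknown components of xdot
xdot = sp.Matrix([v1, v2, v3])
kdot = e * xdot.cross(Bvec)                 # Lorentz force (EQ:lorentz, V = 0)
eqs = xdot - (vgroup + kdot.cross(Omega))   # EQ:chang-niu
sol = sp.solve(list(eqs), [v1, v2, v3], dict=True)[0]

# The anomalous term produces a velocity component out of the orbital plane:
drift = sp.simplify(sol[v3])                # equals -e*B/(2*|k|^2)
```

Note the sign: for this choice of field and curvature the wave-packet creeps down the $z$ axis even though its momentum stays in the plane.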
The second unambiguously displays the expected anomalous velocity drift. Cyclotron orbits {#SEC:cyclotron} ================ We start by considering the cyclotron motion of a massless Weyl fermion with positive charge $e$ in a magnetic field ${\bf B}= -B \hat{\bf z}$. The field is derived from a vector potential ${\bf A}= B(y,-x)/2$ and its downward direction has been chosen so that the particle orbits in an anti-clockwise direction about the $z$ axis. The Weyl Hamiltonian for a right-handed spin-1/2 particle is $$H= -i{\bm \sigma}\cdot\left({\bm \nabla}-ie{\bf A}\right),$$ where ${\bm \sigma}=(\sigma_1,\sigma_2,\sigma_3)$ denotes the Pauli matrices. We are using natural units in which $\hbar =c=1$, although we will occasionally insert these symbols when it helps to illuminate the discussion. Acting on functions proportional to $e^{ik_z z}$ we have $$H^2 = {\mathbb I}\left(-{\bm \nabla}_\perp^2 + \frac{e^2B^2}{4}r^2 +eBL_z +k_z^2\right) + eB\sigma_3, \label{EQ:weyl-squared}$$ where $L_z= -i(x\partial_y-y\partial_x)=-i\partial_\phi$ is the canonical (as opposed to kinetic) angular momentum and ${\mathbb I}$ denotes the 2-by-2 identity matrix. The eigenvalues of the scalar Schrödinger operator in parenthesis in (\[EQ:weyl-squared\]) are $$E^2_{n,l,k_z} = eB\left\{2n+|l|+l+ 1\right\}+k_z^2,$$ and the corresponding eigenfunctions are $$\varphi_{n,l,k_z}(r,\phi) =\left(\frac{eB}{2}\right)^{(|l|+1)/2} r^{|l|}\exp\left(-\frac{eBr^2}{4}\right) L^{|l|}_n\!\left(\frac{eBr^2}{2}\right) e^{il\phi}e^{ik_z z}.$$ Both $n$ and $l$ are integers and $L^{|l|}_n$ is the associated Laguerre polynomial. When $n=0$, $k_z=0$, and $l>0$, the wavefunction $\varphi_{0,l,0}(r,\phi)$ corresponds to a particle describing a circular cyclotron orbit with the origin as its centre and radius $$R_l=\sqrt{\frac{2l}{eB}}.$$ If we decrease $l$ while staying in the same Landau level ([*i.e.*]{}  by increasing $n$ so as to keep $E^2_{n,l}$ fixed) the classical circular orbit keeps the same radius but its centre moves away from the origin and is smeared-out in $\phi$ over the full $2\pi$. When $l=0$ the circle passes through the origin.
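As a quick consistency check of the spectrum and orbit radius (a sketch in our own notation, using the $n=k_z=0$ dispersion $E=\sqrt{2leB}$ and treating $l$ as continuous), the angular velocity $\partial E/\partial l$ combined with $R_l$ must give an orbital speed of $c=1$ for a massless particle:

```python
import sympy as sp

# With E = sqrt(2 l e B) (n = k_z = 0) and R_l = sqrt(2 l / (e B)),
# the angular velocity phidot = dE/dl gives v_phi = R_l * phidot = 1.
l, e, B = sp.symbols('l e B', positive=True)
E = sp.sqrt(2 * l * e * B)
R = sp.sqrt(2 * l / (e * B))
phidot = sp.diff(E, l)              # angular velocity of the wave-packet
v_phi = sp.simplify(R * phidot)     # simplifies to 1, i.e. the speed of light
```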
For $l$ negative, the energy no longer depends on $l$ and the Landau level keeps $n$ fixed while $l$ continues to decrease. The classical orbit still has the original radius, but no longer encloses the origin. In particular the case $n=k_z=0$ and $l<0$ corresponds to particles in the lowest Landau level but with different orbit centres. By applying the projection operator $P= (E+H)/2E$ to the Schrödinger eigenfunction we find that the cyclotron-motion eigenfunctions of the Weyl hamiltonian $H$ with $n=0$, $l>0$, and longitudinal momentum $k_z$ have the spatial profile $$\psi_{0,l,k_z}(r,\phi,z)\propto e^{ik_z z}e^{il\phi}r^l \exp\left(-\frac{eBr^2}{4}\right),$$ multiplied by a two-component spinor that is fixed by the projection. These states have energy $E_{l, k_z} = \sqrt{2leB+k_z^2}$ and the orbit radius is still $$R_l= \sqrt{\frac{2l}{eB}}.$$ At $k_z=0$, the angular velocity of a wave-packet is $$\dot \phi = \frac{\partial E_{l,k_z}}{\partial l}\bigg|_{k_z=0}= \sqrt{\frac{eB}{2l}},$$ ensuring that $v_\phi= R_l\dot \phi=c=1$. There is a special case where $l=n=0$ and $$\psi_{0,0, k_z}=e^{ik_z z} \begin{pmatrix} 0\\1 \end{pmatrix} \exp\left(-\frac{eBr^2}{4}\right)$$ with $E=-k_z$. This mode only exists as a positive energy mode for $k_z<0$. It is this unbalanced mode, with a density of $eB/2\pi$ per unit area in the $x$, $y$ plane, that is the source of the chiral-magnetic-effect current $$j_{\rm CME}= \frac{eB}{2\pi}\int_0^{\mu}\frac{e\,dk_z}{2\pi} = \frac{e^2\mu B}{4\pi^2},$$ of a gas of zero-temperature Weyl fermions with chemical potential $\mu$ [@vilenkin; @fukushima]. Consider the $l>0$, $k_z=0$ orbits. Even though these orbits possess no component of momentum in the $z$ direction, plugging the time dependence of the classical orbital momentum ${\bf k}$ into the anomalous-velocity formula (\[EQ:chang-niu\]) suggests that they should creep down the $z$ axis. To compute the predicted creep-rate we observe that a particle of helicity $S$ whose spin direction is forced to describe a circle of co-latitude $\theta$ on a sphere with polar co-ordinates $\theta$, $\phi$ accumulates Berry phase at the rate [@berry1] $$\dot\gamma_{\rm Berry} =-S(1-\cos\theta)\,\dot\phi.$$ For $S=+1/2$, and using our expression for $\dot \phi$, this becomes $$\dot\gamma_{\rm Berry}= \frac12(\cos\theta-1)\,\dot\phi= \frac12 (\cos\theta-1)\sqrt{\frac{eB}{2l}},$$ where $$\cos\theta = \frac{k_z}{\sqrt{2leB+k_z^2}} \sim \frac{k_z}{\sqrt{2leB}}.$$
In an energy eigenstate this accumulating geometric phase should be indistinguishable from the accumulating $-Et$ dynamical phase. In other words $\
--- abstract: 'We present results from simulations of rotating magnetized turbulent convection in spherical wedge geometry representing parts of the latitudinal and longitudinal extents of a star. Here we consider a set of runs for which the density stratification is varied, keeping the Reynolds and Coriolis numbers at similar values. In the case of weak stratification, we find quasi-steady dynamo solutions for moderate rotation and oscillatory ones with poleward migration of activity belts for more rapid rotation. For stronger stratification, the growth rate tends to become smaller. Furthermore, a transition from quasi-steady to oscillatory dynamos is found as the Coriolis number is increased, but now there is an equatorward migrating branch near the equator. The breakpoint where this happens corresponds to a rotation rate that is about 3–7 times the solar value. The phase relation of the magnetic field is such that the toroidal field lags behind the radial field by about $\pi/2$, which can be explained by an oscillatory $\alpha^2$ dynamo caused by the sign change of the $\alpha$-effect about the equator. We test the domain size dependence of our results for a rapidly rotating run with equatorward migration by varying the longitudinal extent of our wedge. The energy of the axisymmetric mean magnetic field decreases as the domain size increases and we find that an $m=1$ mode is excited for a full $2\pi$ azimuthal extent, reminiscent of the field configurations deduced from observations of rapidly rotating late-type stars.' author: - 'Petri J. Käpylä$^{1,2}$, Maarit J. 
Mantere$^{3,1}$, Elizabeth Cole$^{1}$, Jörn Warnecke$^{2,4}$ and Axel Brandenburg$^{2,4}$' bibliography: - 'paper.bib' title: Effects of enhanced stratification on equatorward dynamo wave propagation --- Introduction ============ The large-scale magnetic field of the Sun, manifested by the 11 year sunspot cycle, is generally believed to be generated within or just below the turbulent convection zone [e.g., @O03 and references therein]. The latter concept is based on the idea that strong shear in the tachocline near the bottom of the convection zone amplifies the toroidal magnetic field which then becomes buoyantly unstable and erupts to the surface [e.g., @Pa55a]. This process has been adopted in many mean-field models of the solar cycle in the form of a non-local $\alpha$-effect [e.g., @KO11], which is based on early ideas of [@Bab61] and [@Lei69] that the source term for poloidal field can be explained through the tilt of active regions. Such models assume a reduced turbulent diffusivity within the convection zone and a single cell anti-clockwise meridional circulation which acts as a conveyor belt for the magnetic field. These so-called flux transport models [e.g., @DC99] are now widely used to study the solar cycle and to predict its future course [@DG06; @CCJ07]. The flux transport paradigm is, however, facing several theoretical challenges: $10^5$gauss magnetic fields are expected to reside in the tachocline [@DSC93], but such fields are difficult to explain with dynamo theory [@GK11] and may have become unstable at much lower field strengths [@ASR05]. Furthermore, flux transport dynamos require a rather low value of the turbulent diffusivity within the convection zone [several $10^{11}\,{\rm cm}^2\,{\rm s}^{-1}$; see @BERB02], which is much less than the standard estimate of several $10^{12}\,{\rm cm}^2\,{\rm s}^{-1}$ based on mixing length theory, which, in turn, is also verified numerically [e.g., @KKB09a]. 
Several other issues have already been addressed within this paradigm, for example, the parity of the dynamo [@BERB02; @CNC04; @DdTGAW04] and the possibility of a multicellular structure of the meridional circulation [@JB07], which may be more complicated than that required in the flux transport models [@Ha11; @MFRT12; @ZBKDH13]. These difficulties have led to a revival of the distributed dynamo [e.g., @Br05; @Pi13] in which magnetic fields are generated throughout the convection zone due to turbulent effects [e.g., @KR80; @KKT06; @PS09]. Early studies of self-consistent three-dimensional magnetohydrodynamic (MHD) simulations of convection in spherical coordinates produced oscillatory large-scale dynamos [@Gi83; @Gl85], but the dynamo wave was found to propagate toward the poles rather than toward the equator, as it does in the Sun. These models are referred to as direct numerical simulations (DNS), i.e., the viscous and diffusive operators are just the original ones, but with vastly increased viscosity and diffusivity coefficients. More recent anelastic large-eddy simulations (LES) with rotation rates somewhat higher than that of the Sun have produced non-oscillatory [@BBBMT10] and oscillatory [@BMBBT11; @NBBMT13] large-scale magnetic fields, depending essentially on the rotation rate and the vigor of the turbulence. However, similar models with the solar rotation rate have either failed to produce an appreciable large-scale component [@BMT04] or, more recently, produced oscillatory solutions with almost no latitudinal propagation of the activity belts [@GCS10; @RCGBS11]. These simulations covered a full spherical shell and used realistic values for solar luminosity and rotation rate, necessitating the use of anelastic solvers and spherical harmonics [e.g., @BMT04] or implicit methods [e.g. @GCS10]. Here we exploit an alternative approach by modeling fully compressible convection in wedge geometry [see also @RC01] with a finite-difference method.
We omit the polar regions and usually cover only a part of the longitudinal extent, e.g., $90\degr$ instead of the full $360\degr$. At the cost of omitting connecting flows across the poles and introducing artificial boundaries there, the gain is that higher spatial resolution can be achieved. Furthermore, retaining the sound waves can be beneficial when considering possible helio- or asteroseismic applications. Our model is a hybrid between DNS and LES in that we supplement the thermal energy flux by an additional subgrid scale (SGS) term to stabilize the scheme and to further reduce the radiative background flux. Recent hydrodynamic [@KMB11; @KMGBC11] and MHD [@KKBMT10] studies have shown that this approach produces results that are in accordance with fully spherical models. Moreover, the first turbulent dynamo solution with solar-like migration properties of the magnetic field was recently obtained using this type of setup [@KMB12a]. Extended setups that include a coronal layer as a more realistic upper radial boundary have been successful in producing dynamo-driven coronal ejections [@WKMB12]. As we show in a companion paper [@WKMB13], a solar-like differential rotation pattern might be another consequence of including an outer coronal layer. Here we concentrate on exploring further the recent discovery of equatorward migration in spherical wedge simulations [@KMB12a]. In particular, we examine a set of runs for which the rotational influence on the fluid, measured by the Coriolis number, which is also called the inverse Rossby number, is kept approximately constant while the density stratification of the simulations is gradually increased. The model {#sec:model} ========= Our model is the same as that in [@KMB12a]. We consider a wedge in spherical polar coordinates, where $(r,\theta,\phi)$ denote radius, colatitude, and longitude. 
The radial, latitudinal, and longitudinal extents of the wedge are $r_0 \leq r \leq R$, $\theta_0 \leq \theta \leq \pi-\theta_0$, and $0 \leq \phi \leq \phi_0$, respectively, where $R$ is the radius of the star and $r_0=0.7\,R$ denotes the position of the bottom of the convection zone. Here we take $\theta_0=\pi/12$ and in most of our models we use $\phi_0=\pi/2$, so we cover a quarter of the azimuthal extent between $\pm75^\circ$ latitude. We solve the compressible hydromagnetic equations[^1], $$\frac{{\partial}\bm A}{{\partial}t} = {\bm u}\times{\bm B} - \mu_0\eta {\bm J},$$ $$\frac{D \ln \rho}{Dt} = -\bm\nabla\cdot\bm{u},$$ $$\frac{D\bm{u}}{Dt} = \bm{g} -2\bm\Omega_0\times\bm{u}+\frac{1}{\rho} \left(\bm{J}\times\bm{B}-\bm\nabla p +\bm\nabla \cdot 2\nu\rho\bm{\mathsf{S}}\right),$$ $$T\frac{D s}{Dt} = \frac{1}{\rho}\left[-\bm\nabla \cdot \left({\bm F^{\rm rad}}+ {\bm F^{\rm SGS}}\right) + \mu_0 \eta {\bm J}^2\right] +2\nu \bm{\mathsf{S}}^2, \label{equ:ss}$$ where ${\bm A}$ is the magnetic vector potential, $\bm
--- author: - 'Steven V. W. Beckwith' title: Circumstellar Disks --- Introduction ============ Circumstellar disks usually contain only a few percent of the total material going into a young star after the main collapse has stopped and the surrounding molecular cloud is cleared away. Yet the disks are of great interest to the study of star formation, perhaps as great as the stars themselves, because the disks may build planetary systems. The planet Earth contains less than one millionth of the mass of the Sun, but it is probably the most interesting body in the Solar System, certainly to us. Beckwith & Sargent (1996) argue that the currently known properties of disks are evidence that other planetary systems are common in the Galaxy and discuss the reasons for the interest in disk properties; that article provides a broad introduction to the subject. The purpose of this chapter is to provide a tutorial on how observations of the radiation from disks may be used to elicit their physical characteristics. In keeping with the spirit of the Crete meeting, the treatment is not a comprehensive review nor will it give a complete analysis of each method used to tease the disk properties from faint light observed with telescopes. Rather, the idea is to show that basic intuition about disk physics is easily related to what is observed and provide a general foundation for the understanding of more elaborate theoretical calculations. Early disk models assumed that matter is confined to a very thin plane extending from the stellar surface to a sharp outer edge more than 100 AU from the star. The disk energy balance was attributed to accretion of matter through the disk. This oversimplified picture has been modified by a careful treatment of the underlying physics, and the more modern view is that the disk flares gently, often with an inner edge at some distance from the star, and is heated mainly by radiation as opposed to accretion. 
Some disks are surrounded by spheroidal “halos” that trap radiation and contain strong outflows. Most of the young disks are accompanied by mass loss in columns along the polar axes that contribute to the total energy budget. Although it is not always possible to derive disk characteristics unambiguously from observations, most of the intuitive interpretations have been supported by increasingly better data and improved angular resolution images allowing us to separate the different components of a star/disk system directly. The article is organized along the following questions: 1. What are the expected disk properties based on the theory of Solar System formation? 2. How do we identify disks? 3. How do we determine physical properties of disks from radiation? 4. Do the observed properties show that disks are interesting? Because this article is a tutorial, some of the material is adopted from articles that I co-authored for Nature (Beckwith & Sargent 1996) and Protostars and Planets IV (Beckwith, Henning, and Nakagawa 1999). The early Solar System ====================== It is generally agreed that a flat layer of gas and dust - a disk - orbited the early Sun and provided the material which later made up the Earth, Mars, Jupiter, and the other planets (Safronov 1969; Wood & Morfill 1988; Cameron 1988). The young Sun and the circumsolar disk were born from an extended cloud of gas and dust that was assembled from the detritus of dying stars and remnants of the early universe that collapsed under its own gravity. The material accumulated quickly onto the central proto-Sun but with enough residual angular momentum to prevent some from spiraling inwards - the exact proportion remaining in orbit is not known but should have been a considerable fraction of the total mass (Shu, Adams, & Lizano 1987; Bodenheimer 1995). 
The average angular momentum of the collapsing region defined a rotation axis around which the orbits quickly stabilized, creating a disk with a thickness much smaller than its radius, at least within the regions now containing the giant planets. The formation of the stable disk probably occurred over about $10^5$ years after the onset of free fall collapse (Shu et al. 1993), almost instantaneously in cosmic time. As the central proto-Sun evolved, the solid particles in the orbits settled to a dense layer in the mid-plane of the disk and began to stick together as they collided (Safronov 1969; Weidenschilling 1987; Mizuno et al. 1988). During the next $10^4$ to $10^5$ years, large rocks and small asteroids grew gradually from the small dust particles (Weidenschilling & Cuzzi 1993). When the gravitational pull of the largest asteroids was sufficient to attract neighboring pebbles and rocks, they grew even more rapidly to the size of small planets (Wetherill & Stuart 1993); gravity was important for bodies more than 10km across. The terrestrial planets are large accumulations of solid particles that grew from the collisions between these smaller bodies. In the outer parts of the disk, a few such solid cores became large enough (10 Earth masses) to accrete gas (Mizuno 1980; Stevenson 1982), the dominant reservoir of mass, and gave rise to the giant gas planets (Wetherill 1990). Temperatures close to the proto-Sun were presumably too high to allow gas accretion. The planet building phase is thought to have taken between $\sim 10^7$ and a few times $10^8$ years, although the cores probably developed quickly, within the first $10^6$ years or so. These timescales are not very well constrained by data, and it may well be observations of developing planetary systems around other stars that tell us how planets are really built. 
By analogy, we expect circumstellar disks to contain a few percent or more of the stellar mass, to extend at least 50AU from the central star, to be relatively flat, and to be free from disruption for at least a few million years if they are to create the rocky cores needed to build the large planets. These characteristics certainly represent only a subset of the disks that accompany star formation. Conceivably, a disk with substantially lower mass ($10^{-6}$M$_\odot$), size (a few AU), and lifetime ($\sim 10^6$yr) could create terrestrial-like planets suitable for life without the presence of gas giants. In principle, even larger, more massive disks could accompany the birth of stars much more massive than the Sun. Although we must keep an open mind about the characteristics that constitute a disk, the early Solar disk provides a framework to identify those disks that may become interesting for planet formation. How do we know that disks exist? ================================ Soon after the discovery that T Tauri stars – very young stars of approximately solar mass – had more radiation at infrared wavelengths than the photospheres should emit, Lynden-Bell and Pringle (1974) suggested that most of their peculiar characteristics might be explained by circumstellar disks. Their suggestion was based on the unusual spectral energy distributions: the stars radiated too much ultraviolet light [*and*]{} too much infrared light at the same time. A disk could account for the ultraviolet light through emission from the [*boundary layer*]{} between the star and the inner edge of the disk, in which matter from the disk suddenly accreted onto the star, slowing down from Keplerian speeds to essentially zero speed so rapidly that the radiation temperatures are tens of thousands of Kelvin. The infrared light was radiation from the outer parts of the disk resulting from energy liberated as the matter slowly spiraled to smaller radii eventually to accrete through the boundary layer.
One of their strong predictions was that the long wavelength infrared radiation would follow a power law, $F_\nu \propto \nu^{1 \over 3}$, where $F_\nu$ is the flux density, and $\nu$ is the frequency of the radiation. This result for the release of accretion energy through a disk is quite general. Even with no accretion, a disk will be heated by radiation from the star itself. The dust grains in the disk absorb stellar radiation and re-radiate in the infrared to maintain thermal balance. Remarkably, the spectral energy distribution also follows a power law with the same exponent as for accretion over a broad range of wavelengths: $F_\nu \propto \nu^{1 \over 3}$ between about 5 and 100$\mu$m depending on the luminosity of the star. At wavelengths shortward of about 3$\mu$m, the flux density stops increasing due to the inner radius of the disk, and at long wavelengths the power law becomes steeper due to the outer edge of the disks. The most complete treatment for a flat disk is given by Adams, Lada, and Shu (1988). When Lynden-Bell and Pringle made their suggestion, the long wavelength SEDs of disks could only be measured to about 10$\mu$m. The SEDs could easily be explained over this limited spectral range by other distributions of dust near the stars. Although prescient, the predictions of the early disk theory went untested for nearly a decade. A few years after the SED calculations, Elsässer and Staude (1978) discovered that several young stars had rather high degrees of linear polarization in their optical light. They explained the polarization as scattered light from dust grains arranged symmetrically above and below a star that was obscured by a planar or toroidal distribution of dust oriented perpendicular to the line of sight and parallel to the direction of the polarization. These observations suggested that the dust distribution around the stars was axisymmetric and flattened relative to a spherical halo.
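The $F_\nu \propto \nu^{1/3}$ law can be verified numerically by summing blackbody annuli for a disk with the standard temperature profile $T(r)\propto r^{-3/4}$ (a sketch in dimensionless units with $h=k_B=1$ and an effectively infinite disk, so the mid-range power law is clean; the analytic exponent is $3-2/s$ for $T\propto r^{-s}$):

```python
import numpy as np

def disk_flux(nu, s=0.75):
    """F_nu for T(r) = r^{-s} in units h = k_B = 1:
    F_nu proportional to nu^3 * integral of r dr / (exp(nu * r^s) - 1)."""
    r = np.logspace(-8, 8, 20001)           # wide radial grid: edges don't matter
    dlnr = np.log(r[1] / r[0])
    x = np.clip(nu * r**s, None, 700.0)     # cap the exponent to avoid overflow
    g = r**2 / np.expm1(x)                  # integrand * r^2: int f r dr = int f r^2 dln r
    return nu**3 * np.sum(g) * dlnr

# spectral slope d ln F / d ln nu over two decades; analytically 3 - 2/s = 1/3
slope = np.log(disk_flux(10.0) / disk_flux(0.1)) / np.log(100.0)
```

The computed slope reproduces $1/3$ to high accuracy; steepening the temperature profile (larger $s$) steepens the spectrum accordingly.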
The most striking demonstrations of axisymmetry in T Tauri stars are the well collimated jets seen in images of ionized lines. Mundt and Fried (1983) discovered the first of many young stellar jets in their image of HL Tau. The jets implied a strong axisymmetry near the stars, one
--- abstract: 'We consider statistical models driven by Gaussian and non-Gaussian self-similar processes with long memory and we construct maximum likelihood estimators (MLE) for the drift parameter. In the non-Gaussian case, our approach is based on the approximation of the driving noise by random walks. We study the asymptotic behavior of the estimators and we give some numerical simulations to illustrate our results.' author: - | Karine Bertin $^{1}\quad$ Soledad Torres $^{1}\quad$ Ciprian A. Tudor $^{2}\vspace*{0.1in}$ [^1]\ $^{1}$ Departamento de Estadística, CIMFAV Universidad de Valparaíso,\ Casilla 123-V, 4059 Valparaiso, Chile.\ soledad.torres@uv.cl karine.bertin@uv.cl\ $^{2}$SAMOS/MATISSE, Centre d’Economie de La Sorbonne,\ Université de Panthéon-Sorbonne Paris 1,\ 90, rue de Tolbiac, 75634 Paris Cedex 13, France.\ tudor@univ-paris1.fr title: Maximum likelihood estimators and random walks in long memory models --- 0.5cm [**2000 AMS Classification Numbers:** ]{} 60G18, 62M99. 0.3cm [**Key words: Fractional Brownian motion, Maximum likelihood estimation, Rosenblatt process, Random walk.** ]{} 0.3cm Introduction ============ The self-similarity property for a stochastic process means that scaling of time is equivalent to an appropriate scaling of space. That is, a process $(Y_{t})_{t\geq 0}$ is self-similar of order $H>0$ if for all $c>0$ the processes $(Y_{ct})_{t \geq 0}$ and $(c^{H} Y_{t})_{t\geq 0}$ have the same finite dimensional distributions. This property is crucial in applications such as network traffic analysis, mathematical finance, astrophysics, hydrology or image processing. We refer to the monographs [@Beran], [@EM] or [@ST] for complete expositions on theoretical and practical aspects of self-similar stochastic processes. The most popular self-similar process is the fractional Brownian motion (fBm). Its practical applications are well known.
This process is defined as a centered Gaussian process $(B^{H}_{t}) _{t\geq 0}$ with covariance function $$R^{H}(t,s):=\mathbb{E} (B^{H}_{t}B^{H} _{s}) =\frac{1}{2} \left( t^{2H } + s^{2H} -\vert t-s \vert ^{2H}\right), \hskip0.5cm t,s \geq 0.$$ It can also be defined as the only Gaussian self-similar process with stationary increments. Recently, this stochastic process has been widely studied from the stochastic calculus point of view as well as from the statistical analysis point of view. Various types of stochastic integrals with respect to it have been introduced and several types of stochastic differential equations driven by fBm have been considered (see e.g. [@N], Section 5). Another example of a self-similar process, still with long memory but non-Gaussian, is the so-called Rosenblatt process, which appears as a limit in limit theorems for stationary sequences with a certain correlation function (see [@DM], [@Ta1]). Although it has received less attention than the fractional Brownian motion, this process is still of interest in practical applications because of its self-similarity, stationarity of increments and long-range dependence. Actually the numerous uses of the fractional Brownian motion in practice (hydrology, telecommunications) are due to these properties; one generally prefers fBm over other processes because it is a Gaussian process and the calculus for it is easier; but in concrete situations where the Gaussian hypothesis is not plausible for the model, the Rosenblatt process may be an interesting alternative. We also mention the work [@Taq3] for examples of the use of non-Gaussian self-similar processes in practice. The stochastic analysis of the fractional Brownian motion naturally led to the statistical inference for diffusion processes with fBm as driving noise. We will study in this paper the problem of the estimation of the drift parameter.
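For simulation purposes, an fBm path on a finite grid can be drawn directly from the covariance $R^H$ above (a minimal sketch; the function name is ours, and the tiny diagonal jitter is only a numerical safeguard for the Cholesky factorization):

```python
import numpy as np

def fbm_path(n, H, T=1.0, seed=0):
    """Sample (B^H_{t_1}, ..., B^H_{t_n}) on t_j = jT/n via Cholesky of R^H."""
    t = np.arange(1, n + 1) * T / n
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(n))   # jitter for stability
    rng = np.random.default_rng(seed)
    return t, L @ rng.standard_normal(n)
```

The Cholesky method is exact but costs $O(n^3)$; for long paths, circulant-embedding methods scale much better. For $H=1/2$ the covariance reduces to $\min(t,s)$ and the sampler produces ordinary Brownian motion.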
Assume that we have the model $$dX_t = \theta b(X_t) dt + dB^{H}_t, \hskip0.5cm t\in [0,T]$$ where $(B^{H}_{t})_{t\in [0,T]}$ is a fractional Brownian motion with Hurst index $H\in (0, 1)$, $b$ is a deterministic function satisfying some regularity conditions and the parameter $\theta \in \mathbb{R}$ has to be estimated. Such questions have recently been treated in several papers (see [@KB], [@TV] or [@SoTu]): in general the techniques used to construct maximum likelihood estimators (MLE) for the drift parameter $\theta$ are based on Girsanov transforms for fractional Brownian motion and depend on the properties of the deterministic fractional operators related to the fBm. Generally speaking, the authors of these papers assume that the whole trajectory of the process is continuously observed. Another possibility is to use Euler-type approximations for the solution of the above equation and to construct an MLE estimator based on the density of the observations given ”the past” (as in e.g. [@Rao], Section 3.4, for the case of stochastic equations driven by the Brownian motion). In this work our purpose is to make a first step in the direction of statistical inference for diffusion processes with self-similar, long-memory and non-Gaussian driving noise. As far as we know, there are not many results on statistical inference for stochastic differential equations driven by non-Gaussian processes which in addition are not semimartingales. The basic example of such a process is the Rosenblatt process. We consider here the simple model $$X_{t}= at + Z^{H}_{t}, \hskip0.5cm$$ where $(Z^{H}_{t})_{t\in [0,T]}$ is a Rosenblatt process with known self-similarity index $H\in (\frac{1}{2}, 1)$ (see Sections \[walkrosen\] and Appendix for the definition) and $a\in \mathbb{R}$ is the parameter to be estimated.
We mention that, since this process is not a semimartingale, is not Gaussian, and does not have an explicitly known density function, the techniques considered in the Gaussian case cannot be applied here. We therefore use a different approach: we consider an approximated model in which we replace the noise $Z^{H}$ by a two-dimensional disturbed random walk $Z^{H,n}$ which, by a result in [@ToTu], converges weakly in the Skorohod topology to $Z^{H}$ as $n\to \infty$. Note that this approximated model still keeps the main properties of the original model since the noise is asymptotically self-similar and exhibits long-range dependence. We then construct an MLE (sometimes called in the literature, see e.g. [@Rao], a “pseudo-MLE”) using an Euler scheme method and we prove that this estimator is consistent. Although there are no martingales in the model, this construction involving random walks allows us to use martingale arguments to obtain the asymptotic behavior of the estimators. Of course, this does not solve the problem of estimating $a$ in the standard model defined above, but we think that our approach represents a step in the direction of developing models driven by non-semimartingale and non-Gaussian noises. Our paper is organized as follows. In Section \[prelim\] we recall some facts on the pseudo-MLE estimators for the drift parameter in models driven by the standard Wiener process and by the fBm. We construct, in each model, estimators for the drift parameter and we prove their strong consistency (in the almost sure sense) or their $L^{2}$ consistency under the condition $\alpha >1$, where $N^{\alpha}$ is the number of observations at our disposal and the step of the Euler scheme is $\frac{1}{N}$. This condition extends the usual hypothesis in the standard Wiener case (see \[c1\], see also [@Rao], paragraph 3.4). 
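In the simplest Wiener-noise case recalled in Section \[prelim\], the Euler-scheme construction of the pseudo-MLE can be sketched in a few lines. The toy model is $Y_t = at + W_t$ observed on a grid of step `dt` (the names `simulate_path` and `drift_mle` are ours, not from the paper): maximizing the product of the Gaussian transition densities of the Euler scheme gives the explicit estimator $\hat a = (Y_T - Y_0)/T$, which is consistent as the observation horizon grows.

```python
import random

def simulate_path(a, n_steps, dt, seed=0):
    """Discrete observations of the toy model Y_t = a*t + W_t on a grid of step dt."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(n_steps):
        # Euler step: drift increment a*dt plus a Gaussian increment of variance dt
        y.append(y[-1] + a * dt + rng.gauss(0.0, dt ** 0.5))
    return y

def drift_mle(y, dt):
    """Pseudo-MLE of a: maximizing prod_j N(y_{j+1}; y_j + a*dt, dt) over a
    yields the closed form a_hat = (y_n - y_0) / (n*dt), i.e. Y_T / T."""
    n = len(y) - 1
    return (y[-1] - y[0]) / (n * dt)
```

With `a = 2.0`, `n_steps = 20000` and `dt = 0.01` (so $T=200$), $\hat a$ has standard deviation $1/\sqrt{T}\approx 0.07$ around the true value, illustrating the consistency proved in the text for the more general settings.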
Section \[walkrosen\] is devoted to the study of the situation where the noise is the approximated Rosenblatt process; we again construct the estimator through an inductive method and we study its asymptotic behavior. The strong consistency is obtained under similar assumptions as in the Gaussian case. Section \[simu\] contains some numerical simulations, and in the Appendix we recall the stochastic integral representations for the fBm and for the Rosenblatt process. Preliminaries {#prelim} ============= Let us start by recalling some known facts on maximum likelihood estimation in simple standard cases. Let $(W_{t}) _{t\in [0,T]}$ be a Wiener process on a classical Wiener space $(\Omega, {\cal{F}}, P)$ and let us consider the following simple model $$\label{modW} Y_{t}= at + W_{t}, \hskip0.5cm t\in [0,T]$$ with $T>0$, and assume that the parameter $a\in \mathbb{R}$ has to be estimated. One can for example use the Euler-type discretization of (\[modW\]) $$Y_{t_{j+1}}^{(n)}:= Y_{t
--- abstract: 'We study a fluid of two-dimensional parallel hard squares in bulk and under confinement in channels, with the aim of evaluating the performance of Fundamental-Measure Theory (FMT). To this purpose, we first analyse the phase behaviour of the bulk system using FMT and Percus-Yevick theory, and compare the results with MD and MC simulations. In a second step, we study the confined system and check the results against those obtained from the Transfer Matrix Method and from our own Monte Carlo simulations. Squares are confined to channels with parallel walls at angles of 0$^{\circ}$ or 45$^{\circ}$ relative to the diagonals of the parallel hard squares, which allows for an assessment of the effect of the external-potential symmetry on the fluid structural properties. In general FMT overestimates bulk correlations, predicting the existence of a columnar phase (absent in simulations) prior to crystallisation. The equation of state predicted by FMT compares well with simulations, although the PY approach with the virial route is better over some range of packing fractions. The FMT is highly accurate for the structure and correlations of the confined fluid due to the dimensional crossover property fulfilled by the theory. Both density profiles and equations of state of the confined system are accurately predicted by the theory. The highly non-uniform pair correlations inside the channel are also very well described by FMT.' 
address: - 'Departamento de Física Teórica de la Materia Condensada, Facultad de Ciencias, Universidad Autónoma de Madrid, E-28049 Madrid, Spain' - 'Grupo Interdisciplinar de Sistemas Complejos (GISC), Departamento de Matemáticas, Escuela Politécnica Superior, Universidad Carlos III de Madrid, Avenida de la Universidad 30, E-28911, Leganés, Madrid, Spain' - 'Institute of Physics and Mechatronics, University of Pannonia, PO BOX 158, Veszprém, H-8201 Hungary' - 'Departamento de Física Teórica de la Materia Condensada, Instituto de Ciencia de Materiales Nicolás Cabrera and IFIMAC, Universidad Autónoma de Madrid, E-28049 Madrid, Spain' author: - 'Miguel González-Pinto' - 'Yuri Martínez-Ratón' - Szabolcs Varga - Peter Gurin - Enrique Velasco title: 'Phase behaviour and correlations of parallel hard squares: From highly confined to bulk systems' --- Introduction {#intro} ============ Density functional theory (DFT) has proved to be a very successful tool to predict the phase behaviour of bulk and confined classical fluids [@Evans; @Hansen]. Since the local-density approximation is not appropriate for classical fluids, early versions of DFT for the hard-sphere (HS) system included correlations through averaged local densities, the effective density approximation [@Lutsko] or the weighted density approximation [@Curtin; @Tarazona1] being two widely used versions. Since then, DFT for HS has evolved to converge to a more sophisticated class of approximations: the so-called fundamental measure density functional theory (FMT). This theory was proposed by Rosenfeld in the 80’s [@Yasha1; @Yasha2], went through a period of refinement (in an effort to adequately describe HS crystallization [@Schmidt1]), and current versions adequately describe crystal anisotropies at high densities [@Tarazona2] or the HS equation of state (EOS) for fluid phases [@Roth1; @Roth1b]. 
Competent reviews of FMT for mixtures of HS and other hard particle systems can be found in [@Roth2] and [@Tarazona3], respectively. The first FMT functional for anisotropic particles was developed for mixtures of parallel hard squares (PHS) in 2D, mixtures of parallel hard cubes (PHC) in 3D, and also for a ternary mixture of hard rectangles (2D) or parallelepipeds (3D) with restricted orientations (Zwanzig approximation) [@Cuesta0; @Cuesta1; @Cuesta2]. These density functionals were used to calculate the phase diagrams of the one-component fluid, binary mixtures of hard cubes [@Yuri1], and also prolate and oblate Zwanzig particles [@Yuri2]. Recently they were also applied to the study of the phase behaviour of hard biaxial board-like particles [@Yuri3], polydisperse mixtures of highly oriented hard platelets [@Velasco1], and Zwanzig particles confined in a square cavity [@Miguel] or in geometrically structured three-dimensional surfaces [@Harnau]. FMT density functionals were also obtained for binary or ternary mixtures of freely rotating needles, platelets of vanishing thickness, and HS [@Schmidt2; @Schmidt3; @Schmidt4]. These functionals were applied to study the demixing behaviour [@Schmidt2] and more recently the stacking phase diagrams of binary mixtures of anisotropic particles [@Heras]. An FMT functional for mixtures of parallel hard cylinders of finite thickness has been obtained by means of the dimensional crossover property [@Yuri_cylinders]. More recently, numerically tractable versions of FMT functionals were obtained for freely rotating anisotropic particles which exploited the approximate decomposition of the Mayer function as convolutions of one-particle weights [@Mecke1; @Mecke2; @Mecke3], an idea originally proposed by Rosenfeld [@Rosenfeld_Mayer]. 
These versions were successful in the analysis of structural properties of Platonic solids in contact with hard walls [@Marechal1], and in the study of the bulk phase behaviour of hard spherocylinders including also the smectic phase [@Mecke2]. For a recent review on DFT applied to the study of hard body models see Ref. [@Mederos]. It is usually accepted that a density functional fulfilling the dimensional crossover property should provide accurate predictions for the structure of highly confined fluids. The dimensional crossover property means that a functional for $D$-dimensional particles reduces to that for $(D-1)$-dimensional particles if density profiles are constrained from the higher to the lower dimension, provided both functionals were obtained separately from the same formalism. With this property alone FMT functionals for HS and hard disks can be obtained [@Tarazona_0D]. The FMT functionals were proved to be very accurate in the description of HS in high confinement [@White1; @White2; @Wu; @Mansoori; @Mariani], but there is not yet comparable evidence for anisotropic particles. Also, the FMT accurately predicts the properties of HS crystals [@Lutsko2] and of the fluid-crystal interface [@Hartel; @Oettel]. In the present article we study the performance of FMT in the description of two-dimensional fluids of confined PHS. Even though hard disks (HD) may be considered to be geometrically simpler than PHS at first sight, in fact the dimensional-crossover-compliant FMT functional of HD contains a complicated two-body weighted density, in contrast with that of PHS, which features only one-body weighted densities. We numerically implement the FMT functional for PHS to study the thermodynamics (EOS), structure (density profiles) and correlations (pair correlation functions) of the confined fluid and check these results against the transfer matrix method (TMM) and our own Monte Carlo (MC) simulations [@MC]. 
Particles are confined in a narrow channel with parallel hard walls, such that only two particles can fit in the transverse direction of the channel, Fig. \[fig1m\]. Two different channels, corresponding to two different symmetries of the external potential representing the walls, will be studied: (i) a channel with walls parallel to one of the sides of the PHS, Fig. \[fig1m\](a), and (ii) a channel with walls at an angle of 45$^{\circ}$ with respect to the particle sides, Fig. \[fig1m\](b). The results presented here confirm the expectation that the FMT functional accurately describes the structure of highly confined fluids. However, one could expect that, as the channel thickness becomes larger and the bulk limit is approached, the results will become progressively worse. For the purpose of evaluating the predictive power of the present functional in the description of the bulk system, we performed a minimisation using a Gaussian parametrization (note that a free minimisation was recently performed for the same functional in Ref. [@Roij1], which concluded that the Gaussian parametrization accurately describes the EOS and the phase transitions), and checked the resulting EOS against MD simulations [@Hoover_MD]. Alternatively, the EOS from the Percus-Yevick (PY) approximation, obtained from both the virial and compressibility routes, was also compared with simulations. Finally, pair correlation functions were calculated (i) from the same PY approximation, (ii) from the Ornstein-Zernike relation together with the direct correlation function obtained from the FMT functional, and (iii) from the test-particle route (which involves functional minimisation with a particle fixed at the origin). 
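To illustrate what a Gaussian parametrization involves, here is a minimal one-dimensional analogue (our own sketch, not the functional actually minimised in the paper): the crystal density is written as a sum of normalized Gaussians of inverse squared width $\alpha$ centred on lattice sites, with an occupancy factor $1-\nu$ accounting for vacancies; the free-energy minimisation then fixes $\alpha$, the lattice parameter $d$ and the vacancy fraction $\nu$. One simple consistency check, verified numerically below, is that the average density over a unit cell equals $(1-\nu)/d$.

```python
import math

def gaussian_profile(x, alpha, d, nu, nmax=20):
    """1D analogue of the Gaussian crystal parametrization: normalized Gaussians
    of inverse squared width alpha on lattice sites i*d, occupancy (1 - nu)."""
    pref = (1.0 - nu) * math.sqrt(alpha / math.pi)
    return pref * sum(math.exp(-alpha * (x - i * d) ** 2)
                      for i in range(-nmax, nmax + 1))

def mean_cell_density(alpha, d, nu, ngrid=2000):
    """Average of the profile over one unit cell [0, d] (trapezoidal rule);
    since each cell carries one Gaussian's worth of mass, this equals (1-nu)/d."""
    h = d / ngrid
    total = 0.5 * (gaussian_profile(0.0, alpha, d, nu)
                   + gaussian_profile(d, alpha, d, nu))
    total += sum(gaussian_profile(j * h, alpha, d, nu) for j in range(1, ngrid))
    return total * h / d
```

For $\alpha=50$, $d=1.1$ and $\nu=0.2$ this returns $0.8/1.1\approx0.727$, i.e. the vacancy fraction directly depletes the mean density, which is the mechanism behind the finite vacancy concentration discussed below for the crystal phase.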
Apart from predicting a spurious columnar (C) phase (already reported in [@Roij1; @Miguel]), and overemphasising pair correlations, the agreement between FMT and simulations is acceptable, especially regarding the EOS at high densities and the prediction of a relatively high percentage of vacancies in the crystal (K) phase, an issue recently confirmed by simulations [@Dijkstra1
--- abstract: 'Accurate depth estimation from images is a fundamental task in many applications including scene understanding and reconstruction. Existing solutions for depth estimation often produce blurry approximations of low resolution. This paper presents a convolutional neural network for computing a high-resolution depth map given a single RGB image with the help of transfer learning. Following a standard encoder-decoder architecture, we leverage features extracted using high-performing pre-trained networks when initializing our encoder, along with augmentation and training strategies that lead to more accurate results. We show how, even for a very simple decoder, our method is able to achieve detailed high-resolution depth maps. Our network, with fewer parameters and training iterations, outperforms the state of the art on two datasets and also produces qualitatively better results that capture object boundaries more faithfully. Code and corresponding pre-trained weights are made publicly available[^1].' author: - | Ibraheem Alhashim\ KAUST\ [ibraheem.alhashim@kaust.edu.sa]{} - | Peter Wonka\ KAUST\ [pwonka@gmail.com]{} bibliography: - 'zbib.bib' title: High Quality Monocular Depth Estimation via Transfer Learning --- Introduction ============ Depth estimation from 2D images is a fundamental task in many applications including scene understanding and reconstruction [@Lee2011; @moreno2007active; @Hazirbas2016FuseNetID]. Having a dense depth map of the real world can be very useful in applications including navigation and scene understanding, augmented reality [@Lee2011], image refocusing [@moreno2007active], and segmentation [@Hazirbas2016FuseNetID]. Recent developments in depth estimation focus on using convolutional neural networks (CNNs) to perform 2D to 3D reconstruction. While the performance of these methods has been steadily increasing, there are still major problems in both the quality and the resolution of these estimated depth maps. 
Recent applications in augmented reality, synthetic depth-of-field, and other image effects [@Hedman2018; @Cao2018; @Wang2018] require fast computation of high resolution 3D reconstructions in order to be applicable. For such applications, it is critical to faithfully reconstruct discontinuities in the depth maps and avoid the large perturbations that are often present in depth estimations computed using current CNNs. ![**Comparison of estimated depth maps:** input RGB images, ground truth depth maps, our estimated depth maps, state-of-the-art results of [@Fu2018DeepOR].[]{data-label="fig:teaser"}](teaser){width="\linewidth"} Based on our experimental analysis of existing architectures and training strategies [@Eigen2014; @Li2015; @Laina2016; @Xu2017; @Fu2018DeepOR] we set out with the design goal of developing a simpler architecture that makes training and future modifications easier. Despite, or maybe even due to, its simplicity, our architecture produces depth map estimates of higher accuracy and significantly higher visual quality than those generated by existing methods (see Fig. \[fig:teaser\]). To achieve this, we rely on transfer learning, where we repurpose high-performing pre-trained networks originally designed for image classification as our deep features encoder. A key advantage of such a transfer learning-based approach is that it allows for a more modular architecture where future advances in one domain are easily transferred to the depth estimation problem. ![image](network_overview){width="\linewidth"} #### Contributions: Our contributions are threefold. First, we propose a simple transfer learning-based network architecture that produces depth estimations of higher accuracy and quality. The resulting depth maps capture object boundaries more faithfully than those generated by existing methods with fewer parameters and fewer training iterations. 
Second, we define a corresponding loss function, learning strategy, and simple data augmentation policy that enable faster learning. Third, we propose a new testing dataset of photo-realistic synthetic indoor scenes, with perfect ground truth, to better evaluate the generalization performance of depth estimating CNNs. We perform different experiments on several datasets to evaluate the performance and quality of our depth estimating network. The results show that our approach not only outperforms the state-of-the-art and produces high quality depth maps on standard depth estimation datasets, but it also results in the best generalization performance when applied to a novel dataset. Related Work ============ The problem of 3D scene reconstruction from RGB images is an ill-posed problem. Issues such as lack of scene coverage, scale ambiguities, translucent or reflective materials all contribute to ambiguous cases where geometry cannot be derived from appearance. In practice, the more successful approaches for capturing a scene’s depth rely on hardware assistance, e.g. using laser or IR-based sensors, or require a large number of views captured using high quality cameras followed by a long and expensive offline reconstruction process. Recently, methods that rely on CNNs are able to produce reasonable depth maps from a single RGB image, or from a pair of images, at real-time speeds. In the following, we look into some of the works that are relevant to the problem of depth estimation and 3D reconstruction from RGB input images. More specifically, we look into recent solutions that depend on deep neural networks. #### Monocular depth estimation has been considered by many CNN methods where they formulate the problem as a regression of the depth map from a single RGB image [@Eigen2014; @Laina2016; @Xu2017; @Hao2018DetailPD; @Xu2018StructuredAG; @Fu2018DeepOR]. 
While the performance of these methods has been increasing steadily, general problems in both the quality and resolution of the estimated depth maps leave a lot of room for improvement. Our main focus in this paper is to push towards generating higher quality depth maps with more accurate boundaries using standard neural network architectures. Our preliminary results do indicate that improvements on the state-of-the-art can be achieved by leveraging existing simple architectures that perform well on other computer vision tasks. #### Multi-view stereo reconstruction using CNN algorithms have been recently proposed [@Huang2018DeepMVSLM]. Prior work considered the subproblem that looks at image pairs [@Ummenhofer2017], or three consecutive frames [@Godard2018DiggingIS]. Joint key-frame based dense camera tracking and depth map estimation was presented by [@Zhou2018DeepTAMDT]. In this work, we seek to push the performance for single-image depth estimation. We suspect that the features extracted by monocular depth estimators could also help derive better multi-view stereo reconstruction methods. #### Transfer learning approaches have been shown to be very helpful in many different contexts. In recent work, Zamir et al. investigated the efficiency of transfer learning between different tasks [@Zamir2018TaskonomyDT], many of which are related to 3D reconstruction. Our method is heavily based on the idea of transfer learning where we make use of image encoders originally designed for the problem of image classification [@huang2017densely]. We found that using such encoders that do not aggressively downsample the spatial resolution of the input tend to produce sharper depth estimations especially with the presence of skip connections. 
#### Encoder-decoder networks have made significant contributions in many vision related problems such as image segmentation [@Ronneberger2015u], optical flow estimation [@Dosovitskiy2015], and image restoration [@LehtinenMHLKAA18]. In recent years, the use of such architectures has shown great success both in the supervised and the unsupervised setting of the depth estimation problem [@Godard2017; @Ummenhofer2017; @Huang2018DeepMVSLM; @Zhou2018DeepTAMDT]. Such methods typically use one or more encoder-decoder networks as sub-parts of a larger network. In this work, we employ a single straightforward encoder-decoder architecture with skip connections (see Fig. \[fig:network\_overview\]). Our results indicate that it is possible to achieve state-of-the-art high quality depth maps using a simple encoder-decoder architecture. Proposed Method {#sec:method} =============== In this section, we describe our method for estimating a depth map from a single RGB image. We first describe the employed encoder-decoder architecture. We then discuss our observations on the complexity of both encoder and decoder and its relation to performance. Next, we propose an appropriate loss function for the given task. Finally, we describe efficient augmentation policies that help the training process significantly. Network Architecture -------------------- #### Architecture. Fig. \[fig:network\_overview\] shows an overview of our encoder-decoder network for depth estimation. For our *encoder*, the input RGB image is encoded into a feature vector using the DenseNet-169 [@huang2017densely] network pretrained on ImageNet [@Deng2009]. This vector is then fed to a successive series of up-sampling layers [@LehtinenMHLKAA18], in order to construct the final depth map at half the input resolution. These upsampling layers and their associated skip-connections form our *decoder*. 
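The resolution bookkeeping implied by this design can be sketched as follows. This is a simplified accounting of our own: we assume the usual overall stride of 32 for a DenseNet-169 encoder and four 2$\times$ upsampling blocks in the decoder (the exact layer shapes are given in the appendix), which together yield an output at half the input resolution, as stated above.

```python
def decoder_output_size(input_hw, encoder_stride=32, num_upsamples=4):
    """Trace spatial resolution through the encoder-decoder: the encoder
    downsamples by `encoder_stride`; each upsampling block doubles the
    resolution.  With stride 32 and 4 upsamplings, output = input / 2."""
    h, w = input_hw
    h, w = h // encoder_stride, w // encoder_stride   # encoder bottleneck
    for _ in range(num_upsamples):                    # decoder upsampling blocks
        h, w = h * 2, w * 2
    return h, w
```

For a $480\times640$ input this gives a $240\times320$ depth map, consistent with "half the input resolution"; adding or removing one upsampling block shifts the output by a factor of two, which is one way to trade decoder cost against output resolution.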
Our decoder does not contain any Batch Normalization [@Ioffe2015BNA] or other advanced layers recommended in recent state-of-the-art methods [@Fu2018DeepOR; @Hao2018DetailPD]. Further details about the architecture and its layers along with their exact shapes are described in the appendix. #### Complexity and performance. The high performance of our surprisingly simple architecture gives rise to questions about which components contribute the most towards achieving these quality depth maps. We have experimented with different state-of-the-art encoders [@Bianco2018], of more
--- abstract: 'The static and dynamic structure factors of an interacting Fermi gas along the BCS-BEC crossover are calculated at momentum transfer $\hbar{\bf k}$ higher than the Fermi momentum. The spin structure factor is found to be very sensitive to the correlations associated with the formation of molecules. On the BEC side of the crossover, even close to unitarity, clear evidence is found for a molecular excitation at $\hbar^2 k^2 /4m$, where $m$ is the atomic mass. Both quantum Monte Carlo and dynamic mean-field results are presented.' author: - 'R. Combescot$^{a}$, S. Giorgini$^{b}$ and S. Stringari$^{b}$' title: ' Molecular signatures in the structure factor of an interacting Fermi gas.' --- The possibility of producing weakly bound molecules in interacting ultracold Fermi gases raises very interesting challenges. These molecules are formed near a Feshbach resonance for positive values of the s-wave scattering length, have bosonic character and exhibit Bose-Einstein condensation at low temperature [@Exp; @Bourdel]. They have a remarkably long life-time as a consequence of the fermionic nature of the constituents, which quenches the decay rate associated with three-body recombinations. Several properties of this new state of matter have already been investigated experimentally in harmonically trapped configurations. These include the molecular binding energy [@Regal], the release energy [@Bourdel], the size of the molecular cloud [@Bartenstein], the frequency of the collective oscillations [@Kinast], the pairing energy [@Chin], the vortical configurations [@Zwierlein] and the thermodynamic behavior [@Thomas]. In this paper we investigate another feature of these new many-body configurations, directly related to the molecular nature of the constituents: the behavior of the static and dynamic structure factor at relatively high momentum transfer. 
Experimentally the structure factor can be measured with two-photon Bragg scattering, in which two slightly detuned laser beams are directed onto the trapped gas. The difference in the wave vectors of the beams defines the momentum transfer $\hbar {\bf k}$, while the frequency difference defines the energy transfer $\hbar \omega$. The atoms exposed to these beams can undergo a stimulated light scattering event by absorbing a photon from one of the beams and emitting a photon into the other. This technique, which has already been successfully applied to Bose-Einstein condensates [@Bragg], provides direct access to the imaginary part of the dynamic response function and hence, via the fluctuation-dissipation theorem, to the dynamic structure factor. At high momentum transfer the response is characterized by a quasi-elastic peak at $\omega=\hbar k^2/2M$, where $M$ is the mass of the elementary constituents of the system. The position of the peak is consequently expected to depend on whether photons scatter from free atoms ($M=m$) or molecules ($M=2m$). For positive values of the scattering length both scenarios are possible and their occurrence depends on the actual value of the momentum transfer. If $k$ is much larger than the inverse of the molecular size, photons mainly scatter from atoms and the quasi-elastic peak takes place at $\hbar k^2/2m$. In the opposite case, photons scatter from molecules and the excitation strength is concentrated at $\hbar k^2/4m$. The two regimes are associated with different velocities of the scattered particles given, respectively, by $\hbar k/m$ and $\hbar k/2m$. Let us suppose that the Fermi gas consists of an equal number $N/2$ of atoms in two hyperfine states (hereafter called spin-up and spin-down). 
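The distinction between the two scattering regimes is simple recoil arithmetic; the sketch below (our own illustration, not from the paper) expresses the two quasi-elastic peak positions in units of the Fermi energy $E_F=\hbar^2k_F^2/2m$ as a function of $k/k_F$:

```python
def quasi_elastic_peaks(k_over_kF):
    """Quasi-elastic peak positions hbar*omega, in units of E_F = hbar^2 k_F^2 / 2m,
    for photons scattering off free atoms (M = m) and off molecules (M = 2m)."""
    atom = k_over_kF ** 2        # hbar*omega = hbar^2 k^2 / (2m) = (k/k_F)^2 E_F
    molecule = atom / 2.0        # hbar*omega = hbar^2 k^2 / (4m): half the atomic recoil
    return atom, molecule
```

For $k = 4k_F$, for example, the atomic peak sits at $16\,E_F/\hbar$ and the molecular one at $8\,E_F/\hbar$; the factor-of-two separation is what makes the molecular signature visible in the Bragg response.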
Using $S_{\uparrow\uparrow}(k,\omega )=S_{\downarrow\downarrow}(k,\omega )$ and $S_{\uparrow\downarrow}(k,\omega )=S_{\downarrow\uparrow}(k,\omega )$, we can write the $T=0$ dynamic structure factor in the form $$\label{S} S(k,\omega)= 2\left(S_{\uparrow\uparrow}(k,\omega) + S_{\uparrow\downarrow}(k,\omega) \right)\, ,$$ with $$\label{Sdown} S_{\sigma\sigma^\prime}(k,\omega)\! =\! \sum_n \!<0|\rho_\sigma(k)|n><n|\rho_{\sigma^\prime}^\dagger(k)|0> \delta(\hbar \omega - E_{n0})$$ where $\rho_{\sigma}(k)=\sum_{i_\sigma} e^{-ikz_i}$ are the spin-up ($\sigma=\uparrow$) and spin-down ($\sigma=\downarrow$) components of the Fourier transform of the [*atomic*]{} density operator, while $|n>$ and $E_{n0}=E_n-E_0$ are the eigenstates and eigenenergies of the many-body Hamiltonian $H$. The frequency integral of the dynamic structure factor defines the so-called static structure factor relative to the different spin components: $$\label{Sstatic} \hbar\int_0^{\infty} d\omega \, S_{\sigma\sigma^\prime}(k,\omega)= \frac{N}{2} S_{\sigma\sigma^{\prime}}(k)\, .$$ Using the completeness relation one can write $$\label{Sstatic2} S_{\sigma\sigma^{\prime}}(k) = \frac{2}{N} <0|\sum_{i_{\sigma},j_{\sigma^{\prime}}}e^{-ik(z_i-z_j)}|0>\, .$$ The total static structure factor is then given by $S(k) = N^{-1}\hbar \int d\omega \,S(k,\omega)= S_{\uparrow\uparrow}(k)+ S_{\uparrow\downarrow}(k)$. The static structure factor is related to the two-body correlation functions through the relationships $$\begin{aligned} S_{\uparrow\uparrow}(k) &=& 1+\frac{n}{2}\int d{\bf r}\,[g_{\uparrow\uparrow}(r)-1]e^{i{\bf k \cdot r}} \nonumber\\ S_{\uparrow\downarrow}(k) &=& \frac{n}{2}\int d{\bf r}\,[g_{\uparrow\downarrow}(r)-1]e^{i{\bf k \cdot r}}\;, \label{corr}\end{aligned}$$ yielding $S(k) = 1+n\int d{\bf r}\,[g(r)-1]e^{i{\bf k \cdot r}}$ with $g(r)=[g_{\uparrow\uparrow}(r)+g_{\uparrow\downarrow}(r)]/2$. 
In the above equations $n$ is the total particle density fixing the Fermi wave vector according to $k_F^3=3\pi^2n$. The behavior of the structure factor $S(k)$ at small momenta ($k\ll k_F$) is dominated by long-range correlations which give rise to a linear dependence on $k$. In a superfluid the slope is fixed by the sound velocity $c$ through the general law $S(k) = \hbar k/2mc$. In this paper we are however mainly interested in the behavior at large momentum transfer, typically such that $k \gtrsim k_F$. In the limit of very large $k$, the sum in Eq.(\[Sstatic2\]) is dominated by the autocorrelation term $i=j$ with identical spins. This leads to $S_{\uparrow\uparrow}(k) \rightarrow 1$. On the other hand, since there is no autocorrelation with different spins, $S_{\uparrow\downarrow}(k) \rightarrow 0$ for very large $k$. In fact, in the ideal Fermi gas the dynamic spin-up spin-down structure factor identically vanishes ($S_{\uparrow\downarrow}(k,\omega)=0$) for all values of $k$ and $\omega$, reflecting the complete absence of correlations between particles of opposite spin [@Pines]. This quantity is therefore particularly well suited for studying the effect of interactions. Let us first consider the case of small and positive scattering length $k_F a\ll 1$. This is the so-called Bose-Einstein condensation (BEC) regime, where we have a dilute gas of weakly bound molecules, made of atoms with opposite spins, with normalized wavefunction $\Phi_0({\bf r})$ for the relative motion. When we consider distances of the order of the molecule size $a$, there is naturally a strong correlation between opposite spin atoms belonging to the same molecule. In this case the sum (\[Sstatic2\]) is dominated by this contribution, which gives: $$\label{SupdownM} S_{\uparrow\downarrow}(k) = \int d{\bf r} \;e^{i{\bf k \cdot r}}\,n_{mol}({\bf r})\;,$$ where $n_{mol}({\bf r})=|\Phi_0({\bf r})|^2$ is the probability of finding the atoms separated by ${\bf r}$. 
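To make Eq. (\[SupdownM\]) concrete, one can insert the standard universal wavefunction of a weakly bound dimer, $\Phi_0({\bf r})=e^{-r/a}/(\sqrt{2\pi a}\,r)$. This is our illustrative choice; the text keeps $\Phi_0$ general. The angular integration then reduces Eq. (\[SupdownM\]) to a radial integral with the closed form $S_{\uparrow\downarrow}(k)=\frac{2}{ka}\arctan\frac{ka}{2}$, which tends to $1$ for $ka\ll 1$ and to $0$ for $ka\gg 1$, consistent with the limits discussed in the text. The sketch below checks the closed form against direct numerical quadrature:

```python
import math

def s_updown_closed_form(k, a):
    """S_updown(k) = (2/(k*a)) * atan(k*a/2) for the zero-range dimer
    n_mol(r) = exp(-2r/a) / (2*pi*a*r^2)  (illustrative choice of Phi_0)."""
    return (2.0 / (k * a)) * math.atan(k * a / 2.0)

def s_updown_numeric(k, a, rmax_factor=30.0, n=150000):
    """Direct midpoint-rule evaluation of
    S_updown(k) = 4*pi * int_0^inf r^2 n_mol(r) sin(kr)/(kr) dr
                = (2/(a*k)) * int_0^inf exp(-2r/a) sin(kr)/r dr."""
    h = rmax_factor * a / n
    total = 0.0
    for j in range(n):
        r = (j + 0.5) * h   # midpoint rule avoids r = 0 (integrand -> k there)
        total += math.exp(-2.0 * r / a) * math.sin(k * r) / r
    return (2.0 / (a * k)) * total * h
```

For $ka=1$ the two evaluations agree ($\approx 2\arctan(1/2)\approx0.93$), and the $ka\to0$ limit of the closed form reproduces $S_{\uparrow\downarrow}\to1$, i.e. the regime in which the probe "sees" whole molecules.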
This holds only for $k \gg k_F$, since otherwise one also has to take into account correlations between atoms belonging to different molecules. In particular one finds that, for $ k_F \ll k \ll 1/a$, $S_{\uparrow\downarrow}(k)=1$, so that $S(k)=2$. This corresponds to the regime where the elementary constituents “seen” by the scattering probe are molecules and not atoms. When one moves away from this BEC regime towards the resonance, where $k_Fa \gg 1$, the many-body wave function can no longer be written simply in terms of molecules. In this very interesting regime the function $S_{\uparrow\downarrow}(k)$ is smaller than unity, but can still significantly differ from zero,
--- abstract: | We study the inverse problem for a second-order self-adjoint hyperbolic equation with the boundary data given on a part of the boundary. This paper is the continuation of the author’s paper \[E\]. In \[E\] we presented the crucial local step of the proof. In this paper we prove the global step. Our method is a modification of the BC-method with some new ideas. In particular, the method of determining the metric is new. author: - | G.Eskin,     Department of Mathematics, UCLA,\ Los Angeles, CA 90095-1555, USA.  E-mail: eskin@math.ucla.edu title: 'A new approach to hyperbolic inverse problems II (Global step)' --- Introduction. {#section 1} ============= Let $\Omega$ be a bounded domain in ${{\bf R}}^n,\ n\geq 2,$ with smooth boundary $\partial\Omega$. Consider the hyperbolic equation of the form: $$\begin{aligned} \label{eq:1.1} Lu\stackrel{def}{=} \frac{\partial^2 u}{\partial t^2} +\sum_{j,k=1}^n\frac{1}{\sqrt{g(x)}} \left(-i\frac{\partial}{\partial x_j}+A_j(x) \right) \sqrt{g(x)}g^{jk}(x) \left(-i\frac{\partial}{\partial x_k}+A_k(x)\right)u \nonumber \\ +V(x)u=0\end{aligned}$$ in $\Omega\times(0,T_0)$ with $C^\infty(\overline{\Omega})$ coefficients. Here $ \|g^{jk}(x)\|^{-1}$ is the metric tensor, $g(x)=\det\|g^{jk}\|^{-1}$. We assume that $$\label{eq:1.2} u(x,0)=u_t(x,0)=0 \ \ \mbox{in}\ \ \ \Omega,\ \ $$ $$u\left|_{\partial\Omega\times(0,T_0)}\right. = f(x,t).$$ Denote by $\Lambda$ the Dirichlet-to-Neumann (D-to-N) operator, i.e. $$\label{eq:1.4} \Lambda f=\sum_{j,k=1}^n g^{jk}(x)\left(\frac{\partial u}{\partial x_j}+ iA_j(x)u\right)\nu_k \left(\sum_{p,r=1}^n g^{pr}(x)\nu_p\nu_r\right)^{-\frac{1}{2}} {\Huge |}_{\partial\Omega\times(0,T_0)},$$ where $\nu=(\nu_1,...,\nu_n)$ is the unit exterior normal to $\partial\Omega$ with respect to the Euclidean metric. Let $\Gamma_0$ be an open subset of $\partial\Omega$. 
We say that the D-to-N operator is given on $\Gamma_0\times(0,T_0)$ if $\Lambda f{\Huge |}_{\Gamma_0\times(0,T_0)}$ is known for all smooth $f(x,t)$ with supports in $\Gamma_0\times (0,T_0]$. Denote by $G_0(\overline{\Omega})$ the group of all complex-valued functions $c(x)$ such that $c(x)\neq 0$ in $\overline{\Omega}$ and $c(x)=1$ on $\overline{\Gamma_0}$. We say that potentials $A(x)=(A_1(x),...,A_n(x))$ and $A'(x)=(A_1'(x),...,A_n'(x))$ are gauge equivalent if there exists $c(x)\in G_0(\overline{\Omega})$ such that $$A_j'(x)=A_j(x)-ic^{-1}(x)\frac{\partial c}{\partial x_j},\ \ \ \ 1\leq j\leq n.$$ Note that if $Lu=0$ then $$\label{eq:1.5} u'=c^{-1}(x)u$$ satisfies the equation $L'u'=0$ where $L'$ has the form (\[eq:1.1\]) with $A_j(x)$ replaced by $A_j'(x), \ 1\leq j\leq n$. We shall call (\[eq:1.5\]) the gauge transformation. We shall prove the following theorem: \[theo:1.1\] Let $L^{(p)},p=1,2$, be two operators of the form (\[eq:1.1\]) in domains $\Omega^{(p)},p=1,2,$ respectively. Let $\Gamma_0\subset\partial\Omega^{(1)}\cap\partial\Omega^{(2)}$ and let $\Lambda^{(p)},p=1,2,$ be the D-to-N operators corresponding to $L^{(p)}, p=1,2.$ Assume that $L^{(p)}$ are self-adjoint, i.e. coefficients $A_1^{(1)}(x),...,A_n^{(1)}(x),V^{(1)}(x)$ and $A_1^{(2)}(x),...,A_n^{(2)}(x),$\ $V^{(2)}(x)$ are real-valued. Suppose $T_0>2\max_{x\in \bar{\Omega}^{(1)}}d_1(x,\Gamma_0)$, where $d_1(x,\Gamma_0)$ is the distance in $\overline{\Omega_1}$ with respect to the metric $\|g_1^{jk}(x)\|^{-1}$ from $x\in \overline{\Omega^{(1)}}$ to $\Gamma_0$. Suppose that the D-to-N operators $\Lambda^{(1)}$ and $\Lambda^{(2)}$ are equal on $\Gamma_0\times(0,T_0)$ for all $f$ with $\mbox{supp\ } f\subset \Gamma_0\times (0,T_0]$. Then there exists a diffeomorphism $\varphi$ of $\overline{\Omega_2}$ onto $\overline{\Omega_1}$, $\varphi=I$ on $\Gamma_0$, and there exists a gauge transformation $c(x)\in G_0(\overline{\Omega^{(1)}})$ such that $c\circ\varphi\circ L^{(2)}=L^{(1)}$ in $\Omega^{(1)}$.
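For the reader's convenience, the gauge-equivalence claim surrounding (\[eq:1.5\]) can be checked directly from the definitions; the following one-line computation is our own verification sketch, not part of the original argument:

```latex
% For u = c(x)u', each first-order factor of L in (1.1) conjugates as
\left(-i\frac{\partial}{\partial x_j}+A_j(x)\right)\bigl(c(x)\,u'\bigr)
  = c(x)\left(-i\frac{\partial}{\partial x_j}
      + A_j(x)-ic^{-1}(x)\frac{\partial c}{\partial x_j}\right)u'
  = c(x)\left(-i\frac{\partial}{\partial x_j}+A_j'(x)\right)u'.
```

Applying this identity to both first-order factors in (\[eq:1.1\]) (the scalar $c(x)$ passes through the coefficients $\sqrt{g}\,g^{jk}$ and through $\partial_t^2$ unchanged) gives $Lu=c\,L'u'$, so $Lu=0$ indeed forces $L'u'=0$.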
An important case of the inverse problems with boundary data on a part of the boundary is the inverse problem in domains with obstacles. In this case $\Omega=\Omega_0\setminus\cup_{r=1}^m \overline{\Omega_r}$, where $\Omega_0$ is diffeomorphic to a ball, $\overline{\Omega_1},...,\overline{\Omega_m}$ are smooth nonintersecting domains in $\Omega_0$ called obstacles, $\Gamma_0=\partial\Omega_0$ and zero Dirichlet boundary conditions hold on $\partial\Omega_r,\ 1\leq r\leq m$ (cf. \[E1\]). The first result on the inverse problems with the data on a part of the boundary was obtained in \[I\]. The general self-adjoint case was studied by the BC-method (see \[B1\], \[B2\], \[K\], \[KK\], \[KKL\], \[KL1\]). The present paper is a continuation of the paper \[E\] (see also \[E2\]). In \[E\] the crucial local step was considered, i.e. the unique determination of the coefficients of (\[eq:1.1\]) modulo a diffeomorphism and a gauge transformation near $\Gamma_0$. In this paper we complete the proof of Theorem \[theo:1.1\]. In §2 we state the main results proven in \[E\] and prove the extension lemma. In §3 we refine the results of §2, and in §4 we complete the proof of Theorem \[theo:1.1\]. The summary of the local step and the extension lemma. {#section 2} ====================================================== Let $L^{(p)}, p=1,2,$ be two operators of the form (\[eq:1.1\]) in $\Omega^{(p)}\times(0,T_0),\ L^{(p)}u_p=0$ in $\Omega^{(p)}\times (0,T_0),\ u_p(x,0)=u_{pt}(x,0)=0,\ x\in \Omega^{(p)},\ u_p|_{\Gamma_0\times(0,T_0)}=f,\ p=1,2.$ Let $\Gamma$ be an open connected subset of $\Gamma_0$ and let $x=(x',x_n)$ be a system of coordinates in a neighborhood $V\subset {{\bf R}}^n$ of $\Gamma$ such that $x_n=0$ is the equation of $\Gamma$ and $ x'=(x_1,...,x_{n-1})$ are local coordinates on $\Gamma$.
Introduce semigeodesic coordinates in $V$ corresponding to $\Gamma$ and to the metric $\|g_p^{jk}\|^{-1},p=1,2$: $$\label{eq:2.1} y=\varphi_p(x).$$ Note that $\varphi_p(x)=(\varphi_{
--- abstract: 'We present the first Doppler images of the active eclipsing binary system SZ Psc, based on the high-resolution spectral data sets obtained in 2004 November and 2006 September–December. The least-squares deconvolution technique was applied to derive high signal-to-noise profiles from the observed spectra of SZ Psc. Absorption features contributed by a third component of the system were detected in the LSD profiles at all observed phases. We estimated the mass and period of the third component to be about $0.9 M_{\odot}$ and $1283 \pm 10$ d, respectively. After removing the contribution of the third body from the LSD profiles, we derived the surface maps of SZ Psc. The resulting Doppler images indicate significant starspot activities on the surface of the K subgiant component. The distributions of starspots are more complex than those revealed by previous photometric studies. The cooler K component exhibited pronounced high-latitude spots as well as numerous low- and intermediate-latitude spot groups throughout the observing seasons, but did not show any large, stable polar cap, unlike many other active RS CVn-type binaries.' author: - 'Yue Xiang,$^{1}$$^{2}$$^{3}$[^1] Shenghong Gu,$^{1}$$^{2}$$^{\star}$ A. Collier Cameron,$^{4}$ J. R.
Barnes$^{5}$' - | and Liyun Zhang$^{6}$\ $^{1}$Yunnan Observatories, Chinese Academy of Sciences, Kunming 650011, China\ $^{2}$Key Laboratory for the Structure and Evolution of Celestial Objects, Chinese Academy of Sciences, Kunming 650011, China\ $^{3}$University of Chinese Academy of Sciences, Beijing 100049, China\ $^{4}$School of Physics and Astronomy, University of St Andrews, Fife KY16 9SS, UK\ $^{5}$Department of Physical Sciences, The Open University, Walton Hall, Milton Keynes MK7 6AA, UK\ $^{6}$Department of Physics, College of Science, Guizhou University and NAOC-GZU-Sponsored Center for Astronomy, Guizhou University,\ Guiyang 550025, China title: The first Doppler images of the eclipsing binary --- \[firstpage\] stars: activity – stars: binaries: eclipsing – stars: imaging – stars: starspots – stars: individual: Introduction ============ SZ Psc is a double-lined partial eclipsing binary composed of an F8V hotter and a K1IV cooler component, with an orbital period of about 3.97 d. The cooler component is larger and more massive than the hotter one and has filled 85% of its Roche lobe [@pop1988]. The rotation of the hotter component is several times slower than its synchronous value, while the cooler component shows synchronous rotation [@eaton2007; @gla2008]. SZ Psc is very active and classified as a member of RS CVn-type stars [@hall1976]. It shows strong chromospheric emission lines attributed to its cooler component [@ram1981; @pop1988; @doyle1994; @fra1994]. Starspot activities on the K star were also revealed by many photometric studies [@eaton1979; @lan2001; @kang2003; @eaton2007]. The orbital period of SZ Psc is not constant [@jak1976; @tunca1984; @kal1995], which is similar to those of many other active binary systems. @kal1995 derived a periodicity of 56 yr and an amplitude of $4.3\times10^{-4}$ d for the period change of the system. They suggested that it can be explained by a combination of the magnetic activity and the stellar wind. 
SZ Psc is suspected to be a triple system. @eaton2007 revealed that the systemic velocity of the binary is changing with time, which indicates a third component in SZ Psc. They suggested an amplitude less than 8  and a period of 1143 or 1530 d for the systemic velocity and thus inferred that the third component is a cool dwarf with a mass of about 0.9–1.0$M_{\odot}$. They also found weak features in the D lines probably contributed by the third component and estimated its contribution to be about 3%–4% of the brightness of SZ Psc. So far, the physical properties of the third component and the outer orbit of SZ Psc are still poorly known. @zhang2008 analysed several chromospheric activity indicators using the spectral subtraction technique and revealed the rotational modulation of the activity on the cooler component of SZ Psc. In addition, they found absorption features in the H$_{\alpha}$ profiles of SZ Psc probably accounted for by prominence-like material around the K star or mass transfer between two components. Using higher time-resolved spectra, @cao2012 also detected absorption features in the H$_{\alpha}$ profiles, which indicate prominence activity on the cooler component. Their calculation shows that the distance of the prominence from the K star’s rotation axis exceeded the Roche lobe of the K star. @lan2001 derived surface images of both components of SZ Psc from long-term photometric observations. They revealed the presence of several active regions on the surface of the cooler component of SZ Psc. One of them is stable and facing the hotter component. @kang2003 derived unique solutions from light curves with good phase sampling using the starspot model and revealed that the variations of the shape of light curves are mainly accounted for by spot evolution and migration on the K star of SZ Psc. @eaton2007 suggested that the cooler component have many small starspots rather than a few large ones, because its line profiles lack large distortions. 
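For orientation, the mass scale quoted above for the suspected third body can be reproduced from the standard spectroscopic mass function $f(m)=PK^3/(2\pi G)=(m_3\sin i)^3/(m_\mathrm{binary}+m_3)^2$. The sketch below is our own illustration, not the authors' analysis; any inner-binary mass or inclination fed to `third_body_mass` is an assumption supplied by the user.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30   # solar mass, kg

def mass_function(K, P):
    """Spectroscopic mass function f(m) = P K^3 / (2 pi G), in kg.

    K: radial-velocity semi-amplitude (m/s); P: orbital period (s).
    f(m) = (m3 sin i)^3 / (m_inner + m3)^2, a lower limit on m3.
    """
    return P * K**3 / (2.0 * math.pi * G)

def third_body_mass(K, P, m_inner, sin_i=1.0):
    """Solve (m3 sin i)^3 / (m_inner + m3)^2 = f(m) for m3 by bisection."""
    f = mass_function(K, P)
    lo, hi = 0.0, 1e3 * M_SUN
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if (mid * sin_i) ** 3 / (m_inner + mid) ** 2 < f:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

A quick sanity check: a circular orbit with $P=1$ yr and $K$ equal to the Earth's orbital speed ($\approx 29.8$ km/s) gives $f(m)\approx 1\,M_\odot$, as it should for a test particle around a solar-mass primary.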
In order to investigate the starspot activities on active close binaries, we have carried out a series of high-resolution spectroscopic observations on targets with various stellar parameters and evolutionary stages [@gu2003; @xiang2014; @xiang2015]. In this work, we have derived the surface images of the K subgiant component of SZ Psc for 2004 November, 2006 September, October, November and December, through the Doppler imaging technique. To our knowledge, no Doppler image of SZ Psc has been derived before; Doppler imaging offers a more detailed distribution of starspots than light-curve modelling. We shall describe the observations and data reduction in Section 2. The Doppler images will be given and discussed in Sections 3 and 4, respectively. In Section 5, we shall summarize the present work. Observations and data reduction ===============================

  UT Date      HJD         Exp.   S/N     S/N
               2450000+    (s)    Input   LSD
  ------------ ----------- ------ ------- ------
  20/11/2004   3330.1055   2400   141     1911
  20/11/2004   3330.1365   2400   134     1818
  21/11/2004   3331.1073   2400    87     1180
  27/11/2004   3337.1254   2400   103     1407
  01/09/2006   3980.1557   1800    55      750
  01/09/2006   3980.1769   1800    62      847
  04/09/2006   3983.1880   2100    83     1131
  04/09/2006   3983.2125   2100    92     1262
  05/09/2006   3984.1295   1800   101     1377
  05/09/2006   3984.1505   1800   105     1430
  06/09/2006   3985.1228   1800   121     1654
  06/09/2006   3985.1437   1800   130     1776
  28/10/2006   4036.9567   1800   131     1794
  28/10/2006   4036.9777   1800   132     1811
  28/10/2006   4037.0227   1800   142     1948
  28/10/2006   4037.0438   1800   149     2039
  28/10/2006   4037.0836   1800   135     1849
  28/10/2006   4037.1051   1800   127     1741
  28/10/2006   4037.1758   1800   106     1448
  28/10/2006   4037.1968   1800   112     1530
  29/10/2006   4037.9357   1800    85     1162
  29/10/2006   4037.9568   1800    88     1199
  29/10/2006   4037.9901   1800    85     1156
  29/10/2006   4038.0125   1800    83     1136
  29/10/2006   4038.0335   1800    87     1188
  29/10/2006   4038.0551   1800    89     1216
  30/10/2006   4039.0799   1800    95     1295
  28/11/2006   4067.9843   1500    84     1152
  28/11/2006   4068.0029   1500    90     1234
  28/11/2006   4068.1349   2400    95     1298
  29
--- abstract: 'We have mapped the warm molecular gas traced by the H$_2$ S(0) $-$ H$_2$ S(5) pure rotational mid-infrared emission lines over a radial strip across the nucleus and disk of M51 (NGC 5194) using the Infrared Spectrograph (IRS) on the $Spitzer$ $Space$ $Telescope$. The six H$_2$ lines have markedly different emission distributions. We obtained the H$_2$ temperature and surface density distributions by assuming a two temperature model: a warm (T = 100 $-$ 300 K) phase traced by the low $J$ (S(0) – S(2)) lines and a hot phase (T = 400 $-$ 1000 K) traced by the high $J$ (S(2) – S(5)) lines. The lowest molecular gas temperatures are found within the spiral arms (T $\sim$ 155 K), while the highest temperatures are found in the inter-arm regions (T $>$ 700 K). The warm gas surface density reaches a maximum of 11 $\mathrm{M_\sun}$ $\mathrm{pc^{-2}}$ in the north-western spiral arm, whereas the hot gas surface density peaks at 0.24 $\mathrm{M_\sun}$ $\mathrm{pc^{-2}}$ at the nucleus. The spatial offset between the peaks in the warm and hot phases and the differences in the distributions of the H$_2$ line emission suggest that the warm phase is mostly produced by UV photons in star forming regions while the hot phase is mostly produced by shocks or X-rays associated with nuclear activity. The warm H$_2$ is found in the dust lanes of M51, spatially offset from the brightest H$\alpha$ regions. The warm H$_2$ is generally spatially coincident with the cold molecular gas traced by CO (J = 1 – 0) emission, consistent with excitation of the warm phase in dense photodissociation regions (PDRs). In contrast, the hot H$_2$ is most prominent in the nuclear region. Here, over a 0.5 kpc radius around the nucleus of M51, the hot H$_2$ coincides with \[O IV\](25.89 $\micron$) and X-ray emission indicating that shocks and/or X-rays are responsible for exciting this phase.' 
author: - Gregory Brunner - 'Kartik Sheth, Lee Armus' - 'Mark Wolfire, Stuart Vogel' - Eva Schinnerer - George Helou - Reginald Dufour - 'John-David Smith' - 'Daniel A. Dale' title: 'Warm Molecular Gas in M51: Mapping the Excitation Temperature and Mass of H$_2$ with the Spitzer Infrared Spectrograph' --- Introduction ============ Star formation and galactic evolution are connected via the molecular gas in a galaxy. In the Milky Way, star formation occurs in molecular clouds, although not all clouds are actively forming stars. On a global, galactic scale, star formation may be triggered whenever the molecular gas surface density is enhanced, for example, by a spiral density wave [@vog88], by increased pressure or gas density in galactic nuclei [@you91; @sak99; @she05], by hydrodynamic shocks along the leading edge of bars [@she00; @she02], and in the transition region at the ends of bars [@kl91; @she02]. How does this star formation affect the surrounding molecular gas? How is it heated and what is the distribution of the gas temperatures? How does the mass of the warm and hot gas vary from region to region? We address these questions using spectral line maps from a radial strip across the grand-design spiral galaxy, M51. M51 (the Whirlpool galaxy, NGC 5194) is a nearby, face-on spiral galaxy that is rich in molecular gas. Its proximity (assumed to be 8.2 Mpc [@tul88]), face-on orientation, and grand-design spiral morphology make it the ideal target for studies of the interstellar medium (ISM) across distinct dynamical, chemical, and physical environments in a galaxy. Studies of the molecular gas within M51 have revealed giant molecular associations (GMAs) along the spiral arms [@vog88; @ran90; @aal99], a reservoir of molecular gas in the nuclear region that is massive enough to fuel the active galactic nucleus (AGN) [@sco98], and spiral density wave triggered star-formation in molecular clouds [@vog88]. 
In addition to being well-studied at millimeter and radio wavelengths, M51 has also been studied at X-ray, UV, optical, near-infrared, infrared, and submillimeter wavelengths [@pal85; @ter98; @sco01; @cal05; @mat04; @mei05]. In this paper we present maps of the H$_2$ S(0) $-$ H$_2$ S(5) pure rotational mid-infrared lines over a strip across M51 created from $Spitzer$ $Space$ $Telescope$ Infrared Spectrograph (IRS) spectral mapping mode observations. The mid-infrared H$_2$ lines trace the warm (T = 100 – 1000 K) phase of H$_2$ and we use these lines to model the H$_2$ excitation-temperature, mass [@rig02; @hig06], and ortho-to-para ratio [@neu98; @neu06] across the M51 strips.[^1] We use the inferred distributions to place constraints on the energy injection mechanisms (i.e. radiative heating, shocks, turbulence) that heat the warm molecular gas phase of the ISM. Observations and Data Reduction =============================== Spectral Data ------------- We mapped a radial strip across M51 using the short-low (SL; 5 – 14.5 $\micron$) and long-low (LL; 14 - 38 $\micron$) modules of the $Spitzer$ IRS in spectral mapping mode [@hou04]. The radial strips were 324$\arcsec$ $\times$ 57$\arcsec$ and 295$\arcsec$ $\times$ 51$\arcsec$ in the SL and LL, respectively. Each slit position was mapped twice with half-slit spacings. In total, 1,412 spectra were taken in the SL and 100 were taken in the LL. Integration times for individual spectra were 14.6 s in both the SL and LL. Dedicated off-source background observations were taken for the SL observations. Backgrounds for the LL observations were taken from outrigger data collected while the spacecraft was mapping in the adjacent module. Figure \[figure-1\] presents the astronomical observation requests (AORs) overlaid on the $Spitzer$ Infrared Array Camera (IRAC) 8 $\micron$ image of M51. The spectra were assembled from the basic calibration data (BCD) into spectral data cubes for each module using CUBISM [@ken03; @smi04; @smi07a].
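As a minimal illustration of the excitation-temperature modelling mentioned above: for a pair of H$_2$ lines, a single Boltzmann temperature can be read directly from the two upper-level column densities. The sketch below is our own, using approximate rigid-rotor level energies ($E_J/k \approx 85.3\,J(J+1)$ K, an assumed constant), not the paper's fitting code.

```python
import math

def h2_level_energy_K(J):
    """Upper-level energy E_J / k in kelvin, rigid-rotor approximation
    E(J) = B J (J + 1) with B ~ 85.3 K (an assumed approximate value)."""
    return 85.3 * J * (J + 1)

def h2_stat_weight(J):
    """Rotational x nuclear-spin degeneracy: ortho (odd J) carries an
    extra factor of 3 relative to para (even J)."""
    return (2 * J + 1) * (3 if J % 2 else 1)

def excitation_temperature(N1, J1, N2, J2):
    """Temperature (K) from two upper-level column densities N1, N2,
    assuming a single-temperature Boltzmann population
    N_J proportional to g_J exp(-E_J / kT)."""
    g1, g2 = h2_stat_weight(J1), h2_stat_weight(J2)
    E1, E2 = h2_level_energy_K(J1), h2_level_energy_K(J2)
    return (E2 - E1) / math.log((N1 * g2) / (N2 * g1))
```

Fitting the low-$J$ and high-$J$ line pairs separately in this way is what yields the warm and hot temperature components described in the abstract.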
Background subtraction and bad pixel removal were done within CUBISM. The individual BCDs were processed using the S14.0 version of the Spitzer Science Center (SSC) pipeline. In CUBISM, the SL and LL data cubes have 1$\farcs$85 and 5$\farcs$08 pixels, respectively. The pixel size is half the full width at half maximum (FWHM) of the point spread function (PSF) at the red end of a given module. In principle, the PSF should vary with wavelength but since the PSF is undersampled at the blue end of the module, it is approximately constant across a given module. So the approximate resolution of the SL and LL modules and maps of spectral features observed in the SL and LL modules is 3$\farcs$7 and 10$\farcs$1, respectively. We created continuum-subtracted line flux maps of the H$_2$ S(0) $-$ H$_2$ S(5) lines using a combination of PAHFIT [@smi07b] and our own code. PAHFIT is a spectral fitting routine that decomposes IRS low resolution spectra into broad PAH features, unresolved line emission, and grain continuum with the main advantage being that it allows one to recover the full line flux of any blended features. Several H$_2$ lines are blended with PAH features and atomic lines in IRS low resolution spectra: H$_2$ S(1) with the 17.0 $\micron$ PAH complex, H$_2$ S(2) with the 12.0 and 12.6 $\micron$ PAH complexes, H$_2$ S(4) with the 7.8 and 8.6 $\micron$ PAH complexes, and H$_2$ S(5) with the \[Ar II\](6.9 $\micron$) line. PAHFIT also solves for the foreground dust emission and dereddens the emitted line intensities. Our code concatenates SL1 and SL2, and LL1 and LL2 data cubes into two cubes, one for SL and one for LL and smoothes each map in the cubes by a 3 $\times$ 3 pixel box, conserving the flux, to increase the signal-to-noise ratio of the spectra. Then, for each pixel, the spectrum is extracted and PAHFIT is run to decompose it. Our code saves the location of the pixel on the sky along with the PAHFIT output (i.e. 
integrated line flux, line FWHM, line equivalent width, the uncertainty in the line flux, the fit to the entire spectrum and the fit to the continuum) for each spectrum and uses this information to construct line flux maps for all
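The 3 $\times$ 3 box smoothing applied to each map before spectral extraction amounts to a local averaging; a minimal pure-Python sketch of that one step (our illustration — edge pixels simply average the available neighbours, and all CUBISM/PAHFIT specifics are omitted) is:

```python
def box_smooth_3x3(img):
    """Smooth a 2-D map (list of rows) with a 3x3 box average.

    Edge pixels average only the neighbours that exist, so the local
    mean is preserved everywhere; a constant map is left unchanged.
    """
    ny, nx = len(img), len(img[0])
    out = [[0.0] * nx for _ in range(ny)]
    for y in range(ny):
        for x in range(nx):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(ny, y + 2))
                    for i in range(max(0, x - 1), min(nx, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out
```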
--- author: - 'Honglin Yuan[^1] ' - 'Tengyu Ma[^2]' bibliography: - 'FedAc.bib' title: '[Federated Accelerated Stochastic Gradient Descent]{}' --- [^1]: Stanford University, E-mail: `yuanhl@stanford.edu` [^2]: Stanford University, E-mail: `tengyuma@stanford.edu`
--- abstract: | **-Background.** Autism spectrum disorder (ASD) affects the brain connectivity at different levels. Nonetheless, non-invasively distinguishing such effects using magnetic resonance imaging (MRI) remains very challenging to machine learning diagnostic frameworks due to ASD heterogeneity. So far, existing network neuroscience works mainly focused on functional (derived from functional MRI) and structural (derived from diffusion MRI) brain connectivity, which might not capture relational morphological changes between brain regions. Indeed, machine learning (ML) studies for ASD diagnosis using morphological brain networks derived from conventional T1-weighted MRI are very scarce. **-New Method.** To fill this gap, we leverage crowdsourcing by organizing a Kaggle competition to build a pool of machine learning pipelines for neurological disorder diagnosis with application to ASD diagnosis using cortical morphological networks derived from T1-weighted MRI. **-Results.** During the competition, participants were provided with a training dataset and only allowed to check their performance on a public test data. The final evaluation was performed on both public and hidden test datasets based on accuracy, sensitivity, and specificity metrics. Teams were ranked using each performance metric separately and the final ranking was determined based on the mean of all rankings. The first-ranked team achieved 70% accuracy, 72.5% sensitivity, and 67.5% specificity, while the second-ranked team achieved 63.8%, 62.5%, 65% respectively. **-Conclusion.** Leveraging participants to design ML diagnostic methods within a competitive machine learning setting has allowed the exploration and benchmarking of wide spectrum of ML methods for ASD diagnosis using cortical morphological networks. 
address: - 'BASIRA lab, Faculty of Computer and Informatics, Istanbul Technical University, Istanbul, Turkey' - 'School of Science and Engineering, Computing, University of Dundee, UK' author: - Ismail Bilgen - Goktug Guvercin - Islem Rekik bibliography: - 'Kbiblio.bib' title: 'Machine Learning Methods for Brain Network Classification: Application to Autism Diagnosis using Cortical Morphological Networks' --- Neurological disorders, Machine Learning, Computer-Aided Diagnosis, A Python Toolbox for Network Classification, Autism Spectrum Disorder, Introduction ============ Autism spectrum disorder (ASD) is a neuropsychiatric condition that impairs behavioral and cognitive functions of children such as communication and social interaction. The main symptoms of ASD are restricted and repetitive behaviors and interests. The number of ASD cases worldwide is increasing over time, as reported in [@baio2018]. The symptoms of ASD generally appear in the first two years of life and tend to persist throughout life. Nevertheless, timely treatments can substantially improve the symptoms and the ability to function. Therefore, the early accurate diagnosis of ASD is crucial for developing specialized interventions [@zwaigenbaum2015]. However, the diagnosis of ASD is very challenging due to its complex nature and highly heterogeneous symptoms [@zhao2018]. Several studies in neuroimaging using different non-invasive brain imaging modalities such as functional MRI (fMRI) and diffusion MRI (dMRI) were proposed to overcome this challenge [@zhao2018; @anderson2011; @eslami2019; @dekhil2019; @heinsfeld2018; @brown2016]. Although such studies advanced our understanding of brain changes in ASD subjects on functional and structural connectivity levels, they overlooked relational morphological changes between brain regions.
To address this gap in network neuroscience [@Fornito:2015; @Bassett:2017], a few recent studies investigated the potential of cortical morphological networks (CMNs), derived solely from T1-weighted MRI, in distinguishing between the autistic and typical cortices [@soussia2017; @soussia2018; @morris2017; @georges2020]. Notably, several works investigated the change in morphology at a brain region level [@postema2019; @itahashi2015; @yang2016]; however, these did not investigate the changes in one brain region of interest (ROI) *in relation* to another ROI. On the other hand, such morphological relationship between pairs of ROIs can be nicely modeled using morphological brain networks, where the morphological connectivity between two regions encodes their dissimilarity in morphology as introduced in [@mahjoub2018brain]. Although these few seminal works were the first to investigate how ASD affects CMNs [@soussia2017; @soussia2018; @morris2017; @georges2020], they were based on particular machine learning (ML) methods, which leaves a wide spectrum of rich and diverse ML methods unexplored for ASD diagnosis. On the other hand, crowdsourcing has emerged as a framework to address computational challenges in many areas such as biomedicine and genomics [@rodriguez2016; @belcastro2018; @marbach2012wisdom], which accelerates exploring and benchmarking both existing and novel approaches, and improves the robustness of solutions. For this purpose, we organized an in-class challenge via [@Kaggle][^1], where participants aim to classify ASD/NC subjects using solely CMNs derived from maximum principal curvature of the cortical surface. Teams were ranked based on three classification performance metrics: accuracy, sensitivity and specificity, evaluated on a hidden test dataset. The final ranks of teams were determined by summing ranks on each individual metric.
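The rank-aggregation scheme just described is easy to make concrete. The snippet below is our own illustration, not the competition's scoring code; in particular the tie handling (ties broken by input order) is a simplifying assumption, since the competition's tie rule is not stated here.

```python
def rank_desc(scores):
    """Rank a list of scores, 1 = best (higher score is better).
    Ties are broken by input order -- a simplification."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    ranks = [0] * len(scores)
    for r, i in enumerate(order, start=1):
        ranks[i] = r
    return ranks

def final_ranking(accuracy, sensitivity, specificity):
    """Overall ranking: sum the per-metric ranks, smallest sum wins.
    Returns team indices ordered from best to worst."""
    rank_sums = [sum(t) for t in zip(rank_desc(accuracy),
                                     rank_desc(sensitivity),
                                     rank_desc(specificity))]
    return sorted(range(len(rank_sums)), key=lambda i: rank_sums[i])
```

With the two leading teams' scores quoted in the abstract (70/72.5/67.5 vs. 63.8/62.5/65), the first team ranks first on every metric and therefore first overall.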
This challenge fills the gap emerging from the lack of studies on morphological brain networks for ASD diagnosis, by enabling an assessment of a wide range of methods through standardized performance metrics. In this manuscript, we present the results of the competition and describe the computational approaches of the top 20 ranked teams. We provide a comparison and comprehensive characterization of different ASD/NC ML classification methods on cortical morphological networks and interpret the performance of each method. Promoting open ML in network neuroscience, we have shared the Python network classification codes by the top 20 participating teams on BASIRA Lab GitHub: <https://github.com/basiralab/BrainNet-ML-ToolBox>, which were polished by the second-first author. Methods ======= Competition Organization ------------------------ The diagnosis of neurological disorders, such as ASD, can be formulated as a classification task in machine learning. In this case, the chosen learning model is fed a training dataset in which each sample is represented by the brain connectome map (e.g., CMN) of a subject together with a binary label marking the subject as with or without the disorder. If such a model generalizes well to unseen data in the testing phase, it can be a pre-eminent method for the diagnosis of a particular brain disorder. To evaluate the generalizability of a wide spectrum of ML methods in diagnosing ASD patients using CMNs, a competition was set up on the Kaggle platform. The teams participating in this competition were provided with two datasets for the training and testing phases, respectively. After training their designed ML frameworks, the teams were allowed to produce a prediction label list for the test samples and to submit those predictions to the competition platform, with a limited number of submission attempts, to check the accuracy of their models.
In this way, they could change their learning algorithm or make modifications to it, such as parameter tuning or applying preprocessing techniques. At the end of the competition, the teams were requested to submit their final ML frameworks so that they could be evaluated on a testing set in terms of accuracy, sensitivity, and specificity. Based on each evaluation metric, the teams were ranked. Ultimately, the overall competition rank was defined by the summation of the three metric ranks of each team. Analysis of Machine Learning Methods in the Brain Network Classification Kaggle Competition ------------------------------------------------------------------------------------------- In this section, we provide an overview of the machine learning pipelines that have been proposed by the top 20 leading teams in the competition. All methods underlying those pipelines are examined under three major ML categories: (1) preprocessing techniques, (2) dimensionality reduction methods, and (3) learning models (Figure \[fig:trend\]). ![Trends of machine learning methods for brain network classification in Kaggle competition. The thickness of the link between two objects indicates the relative quantity of usage from left to right.[]{data-label="fig:trend"}](Fig1.png){width="100.00000%"} ### Preprocessing Techniques Preprocessing techniques are the algorithms on the front line of a machine learning pipeline composed of several steps such as data preparation, dimension reduction, model training, validation, and testing. The main reason why they are commonly adopted is that they can improve poor-quality data through outlier detection, feature scaling, and imputation, thereby preparing more polished data for the subsequent steps of the learning process.
In the competition, a total of four preprocessing techniques were deployed in the machine learning pipelines proposed by the participating teams, which are illustrated in Figure \[fig:preprocessing\]. Almost half of the teams chose not to perform any data preprocessing. On the other hand, feature scaling techniques (standardization and min-max scaling) accounted for approximately one-third of total usage. While the elimination of constant features in the dataset was leveraged by 4 teams, only 1 team utilized the Isolation Forest algorithm for preprocessing. ![Brain network data preprocessing techniques used in the brain network classification Kaggle competition for distinguishing between autistic and healthy subjects.[]{data-label="fig:preprocessing"}](Fig2.png){width="100.00000%"} The features of the datasets used in some machine learning tasks generally vary in range, and this issue decreases the performance of many learning models and dimensionality reduction methods considerably [@géron2017hands-on]. For instance, the gradient descent algorithm cannot modify the weights of the features
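The two feature-scaling techniques tallied above (standardization and min-max scaling) can be sketched in a few lines of pure Python; this is a generic illustration of the transforms applied per feature column, not any team's submission:

```python
from statistics import mean, pstdev

def standardize(col):
    """Z-score a feature column: zero mean, unit (population) variance."""
    m, s = mean(col), pstdev(col)
    return [(x - m) / s for x in col] if s else [0.0] * len(col)

def min_max(col):
    """Rescale a feature column linearly to the [0, 1] interval."""
    lo, hi = min(col), max(col)
    return [(x - lo) / (hi - lo) for x in col] if hi > lo else [0.0] * len(col)
```

Both guards handle constant columns, which is exactly the degenerate case that motivates the constant-feature elimination used by some teams.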
--- abstract: 'We prove a regularity result for Lagrangian flows of Sobolev vector fields over $\operatorname{RCD}(K,N)$ metric measure spaces, regularity is understood with respect to a newly defined quasi-metric built from the Green function of the Laplacian. Its main application is that $\operatorname{RCD}(K,N)$ spaces have constant dimension. In this way we generalize to such abstract framework a result proved by Colding-Naber for Ricci limit spaces, introducing ingredients that are new even in the smooth setting.' author: - 'Elia Brué [^1]' - 'Daniele Semola [^2]' title: 'Constancy of the dimension for $\operatorname{RCD}(K,N)$ spaces via regularity of Lagrangian flows' --- Introduction {#introduction .unnumbered} ============ It is well known that many analytical and geometrical properties of Riemannian manifolds are deeply related to lower bounds on the Ricci curvature. Moreover, the class of $n$-dimensional Riemannian manifolds with Ricci curvature uniformly bounded from below and diameter bounded from above being precompact with respect to Gromov-Hausdorff convergence (see [@Gromov81]) was the starting point for the study of the so-called Ricci-limit spaces, initiated by Cheeger-Colding in the series of works [@CheegerColding96; @CheegerColding97; @CheegerColding2000a; @CheegerColding2000b]. Their deep analysis motivated the interest on finding a way to talk about Ricci curvature lower bounds without having a smooth structure at disposal, in analogy with the theory of Alexandrov spaces with sectional curvature bounded from below (see [@BuragoGromovPerelman92] and [@CheegerColding97 Appendix 2]). Meanwhile, it became soon clear that Ricci curvature lower bounds should be seen as a property coupling the measure and the distance, in contrast with sectional curvature bounds, depending solely on the metric structure. 
The investigation around a synthetic treatment of lower Ricci bounds began with the seminal and independent works by Lott-Villani [@LottVillani] and Sturm [@Sturm06a; @Sturm06b] in which the class of $\operatorname{CD}(K,N)$ metric measure spaces was introduced with the aim to provide a synthetic notion of having Ricci curvature bounded from below by $K\in{\mathbb{R}}$ and dimension bounded from above by $1\le N\le+\infty$. The $\operatorname{CD}(K,N)$ condition was therein formulated in terms of convexity-type properties of suitable entropies over the Wasserstein space. Crucial properties of such a notion are the compatibility with the smooth Riemannian case and the stability with respect to measured Gromov-Hausdorff convergence. However the class of $\operatorname{CD}(K,N)$ metric measure spaces is still too large to some extent. For instance, it includes smooth Finsler manifolds (see the last theorem in [@Villani09]) which are known not to appear as Ricci limit spaces after the above mentioned works by Cheeger-Colding. To single out spaces with a Riemannian-like behaviour from this broader class, Ambrosio-Gigli-Savaré introduced in [@AmbrosioGigliSavare14] the notion of metric measure space with Riemannian Ricci curvature bounded from below ($\operatorname{RCD}(K,\infty)$ m.m.s. for short), adding the request of linearity of the heat flow to the $\operatorname{CD}(K,\infty)$ condition. Building upon this, the definition of $\operatorname{RCD}(K,N)$ metric measure spaces, which will be the main object of our study in this paper, was proposed by Gigli in [@Gigli15] as a finite-dimensional counterpart to the $\operatorname{RCD}(K,\infty)$ condition, coupling the $\operatorname{CD}(K,N)$ condition with the linearity of the heat flow. 
In recent years, this has been a considerably active research area, with several contributions by many authors (see for instance [@AmbGigliMondRaj12; @AmbrosioGigliSavare14calc; @AmbrosioMondinoSavare15; @BacherSturm10; @CavallettiMilman16; @ErbarKuwadaSturm15; @Gigli14; @JangLiZhang; @VonRenesse08]) that have often given new insights over more classical questions, both of analytical and geometric nature. #### {#section .unnumbered} With the aim of better introducing the reader to the statement of the main result of this note, let us briefly describe the state of the art of the so-called structure theory of $\operatorname{RCD}(K,N)$ metric measure spaces. Given an arbitrary metric measure space, there is a well defined notion of measured tangent space at a fixed point, as pointed measured Gromov-Hausdorff limit of a sequence of rescalings of the starting space. In particular, in the case of an $\operatorname{RCD}(K,N)$ metric measure space $(X,{\mathsf{d}},{\mathfrak{m}})$, we can define, for any $1\le k\le N$, the $k$-dimensional regular set $\mathcal{R}_k$ to be the set of those $x\in X$ such that $x$ belongs to the support of ${\mathfrak{m}}$ and the tangent space of $(X,{\mathsf{d}},{\mathfrak{m}})$ at $x$ is the $k$-dimensional Euclidean space. Better said, $x\in\mathcal{R}_k$ if $(X,r^{-1}{\mathsf{d}},{\mathfrak{m}}_r^x,x)\to({\mathbb{R}}^k,{\mathsf{d}}_{{\mathbb{R}}^k},c_k{\mathscr{L}}^k,0_{{\mathbb{R}}^k})$ as $r\downarrow0$, where $${\mathfrak{m}}_r^x:=\left( \int_{B(x,r)} \left(1-\frac{{\mathsf{d}}(x,y)}{r}\right) {\mathop{}\!\mathrm{d}}{\mathfrak{m}}(y)\right)^{-1}{\mathfrak{m}},\quad c_k:=\left(\int_{B(0,1)} \left(1-|y|\right) {\mathop{}\!\mathrm{d}}{\mathscr{L}}^k(y)\right)^{-1}$$ and the convergence is understood with respect to the pointed measured Gromov-Hausdorff topology.
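As a quick sanity check (ours, not the authors'): the normalizing constant $c_k$ above admits the closed form $c_k=(k+1)/\omega_k$, with $\omega_k$ the volume of the unit ball in ${\mathbb{R}}^k$, since $\int_{B(0,1)}(1-|y|)\,{\mathop{}\!\mathrm{d}}{\mathscr{L}}^k(y)=\omega_k/(k+1)$. The following short script verifies this against a direct radial quadrature of the defining integral:

```python
import math

def ball_volume(k):
    """Volume of the unit ball in R^k: pi^(k/2) / Gamma(k/2 + 1)."""
    return math.pi ** (k / 2) / math.gamma(k / 2 + 1)

def c_k_closed_form(k):
    # int_{B(0,1)} (1 - |y|) dL^k(y) = V_k / (k + 1), hence c_k = (k + 1) / V_k
    return (k + 1) / ball_volume(k)

def c_k_radial_quadrature(k, n=100000):
    # Radial formula: int_{B(0,1)} f(|y|) dy = k * V_k * int_0^1 f(r) r^(k-1) dr,
    # evaluated here with a midpoint rule for f(r) = 1 - r.
    h = 1.0 / n
    integral = sum((1 - (i + 0.5) * h) * ((i + 0.5) * h) ** (k - 1)
                   for i in range(n)) * h
    return 1.0 / (k * ball_volume(k) * integral)

for k in (1, 2, 3):
    assert abs(c_k_closed_form(k) - c_k_radial_quadrature(k)) < 1e-3
```

For instance $c_1 = 1$ and $c_2 = c_3 = 3/\pi$; the agreement of the two routes confirms the closed form.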
In [@MondinoNaber14] Mondino-Naber proved that, for any $\operatorname{RCD}(K,N)$ metric measure space $(X,{\mathsf{d}},{\mathfrak{m}})$, the regular sets $\mathcal{R}_k$ are $({\mathfrak{m}},k)$-rectifiable and that $$\label{eq:MN} {\mathfrak{m}}\left(X\setminus\bigcup_{k=1}^{[N]}\mathcal{R}_k\right)=0.$$ Later on, their result was sharpened in the independent works [@MondinoKell16; @GigliPasqualetto16; @DePhilippisMarcheseRindler17], where it was proved that $\operatorname{RCD}(K,N)$ spaces are rectifiable in the stronger sense of metric measure spaces, that is to say, for any $1\le k\le N$, the restriction of the reference measure ${\mathfrak{m}}$ to $\mathcal{R}_k$ is absolutely continuous with respect to the $k$-dimensional Hausdorff measure $\mathcal{H}^k$. Let us mention that a structure theory for Ricci limit spaces had already been developed by Cheeger-Colding in the aforementioned series of papers. Moreover, in [@CheegerColding97], they conjectured that there should be exactly one $k$-dimensional regular set $\mathcal{R}_k$ having positive measure. However, it took more than ten years before the work [@ColdingNaber12], where Colding-Naber affirmatively solved this conjecture. The analogous problem in the abstract framework of $\operatorname{RCD}(K,N)$ metric measure spaces remained open since the work of Mondino-Naber and, building upon almost all the ingredients developed in this note, we are able to solve it, proving the following: \[thm:constintro\] Let $(X,{\mathsf{d}},{\mathfrak{m}})$ be an $\operatorname{RCD}(K,N)$ metric measure space for some $K\in{\mathbb{R}}$ and $1<N<+\infty$. Then, there is exactly one regular set $\mathcal{R}_k$ having positive ${\mathfrak{m}}$-measure in the Mondino-Naber decomposition of $(X,{\mathsf{d}},{\mathfrak{m}})$.
#### {#section-1 .unnumbered} In order to motivate the development of this work, we find it relevant to spend a few words, first trying to explain why it seems hard to adapt the strategy pursued by Colding-Naber in the case of Ricci limits to the setting of $\operatorname{RCD}(K,N)$ spaces and then to present the heuristic standing behind our new approach. The technique developed in [@ColdingNaber12] is based on fine estimates on the behaviour of balls of small radii centered along the interior of a minimizing geodesic over a smooth Riemannian manifold (with Ricci bounded from below) that are stable enough to pass to the possibly singular Gromov-Hausdorff limits. When dealing with an abstract $\operatorname{RCD}(K,N)$ space there is no smooth approximating sequence one can appeal to. Nevertheless, one could try to reproduce their main estimate (see [@ColdingNaber12]).
--- abstract: 'We study radiation-hydrodynamical normal modes of radiation-supported accretion disks in the WKB limit. It has long been known that in the large optical depth limit the standard equilibrium is unstable to convection. We study how the growth rate depends on location within the disk, optical depth, disk rotation, and the way in which the local dissipation rate depends on density and pressure. The greatest growth rates are found near the disk surface. Rotation stabilizes vertical wavevectors, so that growing modes tend to have nearly-horizontal wavevectors. Over the likely range of optical depths, the linear growth rate for convective instability has only a weak dependence on disk opacity. Perturbations to the dissipation have little effect on convective mode growth rates, but can cause growth of radiation sound waves.' author: - Paola Pietrini - 'Julian H. Krolik' title: 'Convective-Dynamical Instability in Radiation-Supported Accretion Disks' --- Introduction ============ Shakura & Sunyaev (1973) predicted that the inner portions of accretion disks that extend into relativistically-deep gravitational potentials should be radiation pressure-dominated when the accretion rate is greater than a modest fraction of the Eddington rate. In that regime, they found that disks could achieve hydrostatic balance in the vertical direction if the local dissipation rate were proportional to the local mass density. Given that assumption, upward radiation flux could support the disk matter against gravity if the density were essentially constant as a function of height (falling sharply to zero at the top surface) and the radiation pressure fell gradually from the disk midplane to the surface. Soon after this equilibrium was discovered, it was found to suffer from several sorts of instabilities. 
Lightman & Eardley (1974) pointed out that if the viscous stress is proportional to the total pressure (in this case, dominated by radiation), perturbations with radial wavelengths long compared to the vertical thickness $h$, but short compared to a radius $r$, grow on the (comparatively long) viscous inflow timescale. Shakura & Sunyaev (1976) then observed that in these conditions perturbations in the same range of wavelengths would also grow on the (shorter) thermal timescale. Bisnovatyi-Kogan & Blinnikov (1977) noticed that if the radiation is locked to the gas even on short lengthscales (i.e., if, for the purpose of dynamics, the optical depth is treated as effectively infinite), such disks should be convectively unstable, for the specific entropy decreases outward; the linear growth rate for convective “bubbles" was worked out by Lominadze & Chagelishvili (1984). More recently, Gammie (1998) has demonstrated that a magnetic field in radiation-supported disks can catalyze a short-wavelength ($kh \gg 1$) overstable wave mode. In view of these instabilities, it has long been a puzzle just what sort of equilibrium would actually be found in Nature when the accretion rate is high enough that radiation pressure-domination might be expected (see, e.g. Shapiro, Lightman & Eardley 1976; Liang 1977; Coroniti 1981; Svensson & Zdziarski 1994; Szuszkiewicz & Miller 1997; Krolik 1998). In this paper, we take a closer look at the nature of the short wavelength modes in radiation-supported disks without magnetic fields. Our goal (motivated by a companion work on radiation-hydrodynamics simulations of such disks: Agol & Krolik 2000b) is to examine more closely which modes can be expected to grow most quickly, what happens when finite optical depth permits some photon diffusion, and what role, if any, is played by associated perturbations in the local dissipation rate. 
Problem Definition ================== We begin by writing down the equations of non-relativistic radiation hydrodynamics so that we may first describe the equilibrium in this language, and then discuss linear perturbations to this equilibrium. Because we are interested in accretion disks, it is convenient to write them in a rotating frame. The first is the usual equation of mass conservation: $${\partial \rho \over \partial t} + \nabla \cdot \left(\rho \vec v\right) = 0.$$ Our notation is the usual one, in which $\rho$ is the mass density and $\vec v$ is the fluid velocity. Next is the fluid force equation: $$\rho {\partial \vec v \over \partial t} + \rho \vec v \cdot \nabla \vec v = -\nabla p_g + \rho \vec g + (\kappa\rho/c)\vec{\cal F} + 2\rho \vec v \times \vec \Omega - \rho v_r (\partial\Omega/\partial\ln r) \hat\phi$$ where $p_g$ is the gas pressure, $\vec g$ is the local gravity, $\kappa$ is the opacity per unit mass, $\vec{\cal F}$ is the radiation flux, and $\Omega$ is the rotation rate of the fluid. Although all the fluid quantities are defined in the rotating frame, the radiation quantities (e.g., $\vec{\cal F}$) are defined in the frame of the local fluid motion, i.e. including any departures from corotation (Mihalas & Mihalas 1984). Note that we have omitted magnetic forces. For the two equations describing radiation energy density and momentum density, we follow Buchler (1979), but write the Lagrangian time derivatives explicitly, [*i.e.*]{}, $D/Dt = (\partial /\partial t +\vec v\cdot\nabla)$. The evolution of radiation energy density $E$ is described by: $${\partial E \over \partial t} + \vec v \cdot \nabla E + \nabla\cdot\vec{\cal F} + {\bf p_r} : \nabla \vec v + {E \over c^2} \nabla \cdot \vec v +{2\over c^2}\left({\partial \vec v \over \partial t} + \vec v \cdot \nabla \vec v \right) \cdot \vec{\cal F} = Q.$$ Here ${\bf p_r}$ is the radiation pressure tensor and $Q$ is the net local emissivity.
Finally, there is the equation describing the time-dependence of the radiation momentum density $(1/c^2){\cal F}$: $${1 \over c}{\partial \vec{\cal F} \over \partial t} + {\vec v \over c} \cdot \nabla \vec{\cal F} + c \nabla \cdot {\bf p_r} + {1 \over c}\left(\vec{\cal F} \cdot \nabla \vec v + \vec{\cal F} \nabla \cdot \vec v\right) + {1 \over c}\left(E {\bf I} + {\bf p_r}\right) \cdot \left( {\partial \vec v \over \partial t} + \vec v \cdot \nabla \vec v \right) = \vec q .$$ In this equation, ${\bf I}$ is the identity matrix and $\vec q$ is the net rate per unit volume at which photon momentum is created by radiation (usually negative in the fluid frame because photon momentum is lost due to opacity, while newly-created photons are usually isotropic in the fluid frame). In the equilibrium, $\partial /\partial t = \vec v = 0$. To isolate the effect of radiation support, we also take the extreme limit of $p_g \ll p_r$. If we regard the rotation of the disk matter as cancelling the radial component of gravity, the only non-trivial remark to make about the equilibrium is that ${\cal F}_z = cg/\kappa$, where $ g=g_z(z,r)$ is the local vertical component of gravitational acceleration \[in a thin disk, $g_z(z,r)\simeq G M z/r^3$ for central mass $M$\]. Now consider perturbed versions of equations (1) through (4). In order to write these perturbations in Fourier-transform form (i.e., for any quantity $X$, the perturbation is $\delta{\tilde X} = \delta X \exp\{{i(k_rr+k_z z)-i\omega t}\}$), we will suppose that the wavevectors obey three conditions: that $k = (k_z^2 + k_r^2)^{1/2} \ll \kappa \rho$; that $k_z \gg 1/h$; and that $k_r \gg 1/r$. The first limit means that the diffusion approximation applies, i.e. ${\bf p_r} = p_r{\bf I}$, so that $E = 3p_r$. The second is the WKB approximation, as applied to variations in both the radial and vertical directions. Note that we further restrict our attention to axisymmetric perturbations. 
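To make the equilibrium condition ${\cal F}_z = cg/\kappa$ quoted above explicit, here is the one-line derivation (spelled out by us; it is implicit in the text): setting $\partial/\partial t = \vec v = 0$ in the force equation and dropping the gas pressure gradient in the limit $p_g \ll p_r$, the vertical component balances gravity against radiation force alone, $$0 = -\rho g + \frac{\kappa\rho}{c}{\cal F}_z \quad\Longrightarrow\quad {\cal F}_z = \frac{cg}{\kappa},$$ with $g$ the magnitude of the local vertical gravitational acceleration. The density cancels, which is why the supporting flux is independent of $\rho$ and the equilibrium can accommodate a nearly constant density profile.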
The condition $k_z h \gg 1$ also means that we can ignore any gradients in the gravity or equilibrium radiation flux. In addition, assuming $k_r\gg 1/r$ allows us to neglect the terms in vector divergences arising from cylindrical geometry. For example, after Fourier-transforming, $\nabla\cdot\delta\vec v$ becomes $ik_r\delta v_r +\delta v_r/r + ik_z\delta v_z \simeq ik_r\delta v_r + ik_z\delta v_z$. We then find: $$\begin{aligned} -
--- abstract: 'This presents an overview of relativistic hydrodynamic modeling in heavy-ion collisions prepared for Hot Quarks 2016, at South Padre Island, TX, USA. The influence of the initial state and viscosity on various experimental observables are discussed. Specific problems that arise in the hydrodynamical modeling at the Beam Energy Scan are briefly discussed.' author: - 'Jacquelyn Noronha-Hostler' title: Hydrodynamic Overview at Hot Quarks 2016 --- Introduction ============ The Quark-Gluon Plasma (QGP) existed microseconds after the Big Bang where its size was significantly larger and its expansion significantly slower than the QGP experimentally measured in heavy ion collisions. The QGP produced in the laboratory is the hottest, smallest, and densest fluid known to mankind so the techniques used to describe its properties are still being developed. With the discovery that the QGP formed at RHIC and the LHC was a nearly perfect liquid, a “standard model" of heavy-ion collisions has emerged that includes fluctuating initial conditions, event-by-event relativistic viscous hydrodynamics, and a hadronic afterburner. The main signatures of nearly perfect fluidity are the flow harmonics (Fourier coefficients of the particle spectra) that can be reproduced within relativistic hydrodynamical calculations with an extremely small shear viscosity to entropy density ratio, $\eta/s\sim 0.08$. Originally, it was understood that there should be a lower bound for $\eta/s\sim 1/4\pi$ by combining a quasiparticle description with the uncertainty principle [@Danielewicz:1984ww]. The derivation of the KSS limit from strong coupling holography [@Kovtun:2003wp] initially gave support to such a bound, although, now it is known that there are holographic examples where $\eta/s$ can be even smaller, e.g., [@Kats:2007mq; @Brigante:2007nu; @Brigante:2008gz; @Buchel:2008vz; @Critelli:2014kra; @Finazzo:2016mhm]. 
It is expected that a minimum exists close to the crossover temperature [@Aoki:2006we] as one goes from a strongly interacting QGP phase into an eventually weakly interacting hadron gas phase [@NoronhaHostler:2008ju; @NoronhaHostler:2012ug]. Further references for the temperature dependence of viscosity can be found in [@Noronha-Hostler:2015qmd]. While a reasonable estimate for the range of values of $\eta/s$ can be made using sophisticated Bayesian techniques [@Bernhard:2015hxa; @Bernhard:2016tnd], its exact value is extremely dependent on the initial state formed immediately after the two heavy ions collide. Over the years, a plethora of initial state models have been developed, each of which has a corresponding range of valid transport coefficients that allow for a reasonable fit to experimental data. One of the most pressing issues remaining in heavy ions is finding observables sensitive either only to the initial state or only to transport coefficients, which will be discussed in detail in this proceedings. Collective Flow =============== When two heavy ions are smashed together, clear geometrical effects occur depending on whether they hit head-on (central collisions), have a grazing collision (peripheral), or hit somewhere in between (mid-central). Experimentally, heavy ions are collided billions of times (each collision is an event) and each event produces a different number of particles, known as the multiplicity. The more central the collision, the larger the multiplicity, so events are sorted by their multiplicities into centrality classes, where $0\%$ centrality denotes the highest multiplicities and $100\%$ centrality the lowest. Central collisions produce on average a circular shape in the transverse plane to the beam axis, whereas an approximate almond shape is produced for mid-central collisions and beyond.
Due to quantum fluctuations in the initial positions of protons and neutrons within each ion, a multitude of shapes can be produced [@Alver:2010gr]. Each highly inhomogeneous initial condition is run separately through hydrodynamics on an event-by-event basis. Experimentally the initial state cannot be measured directly; rather, pressure gradients convert the initial geometrical shapes into a corresponding momentum space anisotropy, measured via flow harmonics. To obtain the flow harmonics, one calculates the Fourier coefficients of the particle spectra (with special care to reproduce the exact way experimentalists measure flow harmonics [@Luzum:2012da], where multiplicity weighting and centrality rebinning should not be ignored [@Gardim:2016nrr; @Betz:2016ayq]). A number of parameters go into hydrodynamical modeling, such as the initial time after which one assumes the system admits a hydrodynamic description, and the switching temperature below which a hadronic transport is used. The initial time, $\tau_0$, depends on the collisional energy as well as the initial condition type; these issues were discussed in more detail in the Beam Energy Scan and anisotropic hydrodynamics talks at this conference. The maximum switching temperature, $T_{SW}$, is constrained by the hadronization temperature indicated from Lattice QCD [@Borsanyi:2014ewa] to ensure that one switches to the correct degrees of freedom. ![\[fig:IC\]Initial conditions used in heavy ion collisions organized by their basic assumptions.](profs.pdf){width="20pc"} The flow harmonics themselves are most sensitive to the choice of initial conditions as well as the transport coefficients ($\eta/s$, the bulk viscosity to entropy density ratio $\zeta/s$, and their corresponding relaxation times, $\tau_{\pi}$ and $\tau_{\Pi}$, respectively). In the next two sections experimental observables that constrain the initial state and transport coefficients are discussed.
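As an illustration of flow harmonics being Fourier coefficients of the particle spectra, the toy sketch below (our construction, not from the proceedings; the event plane is fixed at zero and the true $v_2$ is an arbitrary choice) draws azimuthal angles from $dN/d\phi \propto 1 + 2v_2\cos 2\phi$ by rejection sampling and recovers $v_2 = \langle\cos 2\phi\rangle$:

```python
import math
import random

random.seed(0)

def sample_phi(v2, n):
    """Sample azimuthal angles from dN/dphi ∝ 1 + 2 v2 cos(2 phi) by rejection."""
    out = []
    while len(out) < n:
        phi = random.uniform(0.0, 2.0 * math.pi)
        # density is bounded above by 1 + 2 v2, so use that as the envelope
        if random.uniform(0.0, 1.0 + 2.0 * v2) < 1.0 + 2.0 * v2 * math.cos(2.0 * phi):
            out.append(phi)
    return out

def v_n(phis, n):
    """Fourier coefficient v_n = <cos(n phi)> with the event plane at zero."""
    return sum(math.cos(n * p) for p in phis) / len(phis)

phis = sample_phi(0.10, 200000)
assert abs(v_n(phis, 2) - 0.10) < 0.01  # recovers the input elliptic flow
```

Real analyses work with estimators built from particle correlations rather than a known event plane; this sketch only shows the Fourier-coefficient definition itself.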
Separating the Initial State from Transport Coefficients ======================================================== At LHC energies a number of initial conditions such as IP-Glasma [@Gale:2012rq], EKRT [@Niemi:2015qia], and Trento (tuned to IP-Glasma) [@Moreland:2014oya] manage to fit well the two and four particle flow cumulants as well as the distribution of $v_n$’s. In Fig. \[fig:IC\] a schematic cartoon of the most well-known initial condition models categorized by their basic properties is shown. Additionally, extremely accurate predictions for the flow harmonics of LHC run 2 to the order of $\sim 5\%$ [@Niemi:2015voa; @Noronha-Hostler:2015uye] were made and later experimentally confirmed in [@Adam:2016izf]. However, the different models used in these predictions varied parameters like viscosity, freeze-out, and the inclusion of initial flow. It has been well-established that an approximately linear relationship exists between the initial energy/entropy density eccentricities, $\varepsilon_n$, and the experimentally measured flow harmonics $v_n$’s [@Teaney:2010vd; @Gardim:2011xv; @Teaney:2012ke; @Niemi:2012aj; @Gardim:2014tya] and that sub-nucleon fluctuations do not appear to play a significant role in the calculation of the lowest order harmonics in large collision systems [@Noronha-Hostler:2015coa]. That being said, higher order flow harmonics are significantly more complicated and depend on a variety of eccentricities [@Gardim:2011xv; @Gardim:2014tya]. Additionally, $v_1$ is especially complicated [@Gardim:2014tya] and may depend on the full $T^{\mu\nu}$ initialization [@Gardim:2011qn]. ![\[fig:mom\]First moment (mean), second moment (variance), third moment (skewness), and fourth moment (kurtosis) of a distribution.](momentsdis.pdf){width="20pc"} In order to more easily quantify the distribution of flow harmonics, multiparticle cumulants are used. 
Cumulants of the $v_2$ distribution are directly connected to the moments of the distribution via $(v_2\{4\}/v_2\{2\})^4=2-\langle v_2^4\rangle/\langle v_2^2\rangle^2$, which can indicate the degree to which the system is fluctuating (see Fig. \[fig:mom\]). If there are no fluctuations in the system, the $p_T$-integrated $v_2\{4\}/v_2\{2\}\rightarrow 1$, whereas $v_2\{4\}/v_2\{2\}< 1$ and $v_2\{4\}\sim v_2\{6\} \sim \dots$ is a sign of the collective behavior measured in heavy ion collisions. Note, however, that the higher order cumulants are not exactly identical; small deviations can exist due to the skewness of the initial conditions [@Giacalone:2016eyu]. Finally, complications do exist for more peripheral collisions, where deviations are seen between the linear mapping of the initial eccentricities and the final elliptical flow [@Niemi:2015qia], which can be explained by cubic response [@Noronha-Hostler:2015dbi]. Recently, symmetric cumulants [@ALICE:2016kpq] that measure the correlation of different order flow harmonics on an event-by-event basis have been measured in PbPb collisions and $SC(3,2)$ (which involves elliptic and triangular flow) appears to be almost entirely driven by the initial eccentricities. While it was thought that $SC(4,2)$ was sensitive to the choice in viscosity, much of that disappears after
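To see the cumulant-moment relation at work, here is a toy Monte Carlo (our construction; the mean $\bar v_2=0.08$ and width $\sigma=0.03$ are arbitrary illustrative values) using a Bessel-Gaussian model of event-by-event $v_2$ fluctuations, for which one can show analytically that $v_2\{4\}/v_2\{2\} = \bar v_2/\sqrt{\bar v_2^2 + 2\sigma^2} < 1$:

```python
import math
import random

random.seed(1)

def sample_v2(vbar, sigma, n):
    """Bessel-Gaussian toy model: v2 = |vbar + 2D Gaussian fluctuation| per event."""
    return [math.hypot(vbar + random.gauss(0.0, sigma), random.gauss(0.0, sigma))
            for _ in range(n)]

def cumulant_ratio(v2s):
    """v2{4}/v2{2} from the moments: v2{2}^2 = <v2^2>, v2{4}^4 = 2<v2^2>^2 - <v2^4>."""
    m2 = sum(v * v for v in v2s) / len(v2s)
    m4 = sum(v ** 4 for v in v2s) / len(v2s)
    v2_2 = m2 ** 0.5
    v2_4 = (2.0 * m2 * m2 - m4) ** 0.25
    return v2_4 / v2_2

v2s = sample_v2(0.08, 0.03, 500000)
r = cumulant_ratio(v2s)
assert r < 1.0  # fluctuations push v2{4} below v2{2}
```

With $\sigma \to 0$ the ratio tends to one, matching the no-fluctuation limit stated in the text.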
--- author: - Antoine Bourget - ', Julius F. Grimminger' - ', Amihay Hanany' - ', Marcus Sperling' - ', and Zhenghao Zhong' bibliography: - 'bibli.bib' --- Introduction {#sec:introduction} ============ $5$-dimensional $\Ncal{=}1$ gauge theories are perturbatively non-renormalisable and can only meaningfully be defined as mass deformations of renormalisation group fixed points. Initially, these theories have been studied from various aspects: field theory [@Seiberg:1996bd; @Morrison:1996xf; @Intriligator:1997pq], brane constructions [@Aharony:1997ju; @Aharony:1997bh; @DeWolfe:1999hj], and geometry via M-theory backgrounds with Calabi-Yau singularities [@Douglas:1996xp]. In this work, focus is placed on 5-brane webs in Type IIB superstring theory [@Aharony:1997ju; @Aharony:1997bh; @DeWolfe:1999hj] and generalisations that include orientifold planes [@Brunner:1997gk; @Bergman:2015dpa; @Zafrir:2015ftn; @Hayashi:2015vhy]. One advantage of these brane constructions is that they capture the dynamics of the corresponding 5d gauge theories and, simultaneously, their UV fixed points. Recent developments using brane webs include [@Bao:2011rc; @Bergman:2014kza; @Kim:2015jba; @Hayashi:2015fsa; @Gaiotto:2015una; @Zafrir:2015rga; @Hayashi:2015zka; @Ohmori:2015tka; @Zafrir:2016jpu; @Hayashi:2016abm; @Hayashi:2017btw; @Hayashi:2018lyv; @Cabrera:2018jxt; @Hayashi:2019yxj; @Hayashi:2019jvx]. As known from the $\surm(2)$ example with $N_f <8$ flavours [@Seiberg:1996bd; @Morrison:1996xf], it is important to understand the enhancement of the global symmetry of these theories at the fixed point. 
Hence, this question has been studied via various techniques: for instance, superconformal indices [@Kim:2012gu; @Rodriguez-Gomez:2013dpa; @Bergman:2013koa; @Bergman:2013ala; @Taki:2013vka; @Bergman:2013aca; @Hwang:2014uwa; @Zafrir:2014ywa; @Bergman:2014kza; @Zafrir:2015uaa; @Hayashi:2015fsa; @Yonekura:2015ksa], Nekrasov partition functions and topological string partition functions [@Bao:2011rc; @Bashkirov:2012re; @Iqbal:2012xm; @Bao:2013pwa; @Hayashi:2013qwa; @Hayashi:2014wfa; @Mitev:2014jza; @Kim:2014nqa; @Hayashi:2015xla]. The enhancement has been argued to be due to instanton operators [@Lambert:2014jna; @Rodriguez-Gomez:2015xwa; @Tachikawa:2015mha], which create instanton particles in the UV superconformal field theory. Recall that in 5 dimensions, the instanton is a particle charged under the $\uo_I$ topological symmetry associated to the conserved current $\mathrm{Tr} \ast (F\wedge F)$. Recently, there have been many works devoted to uncover further features [@Apruzzi:2019vpe; @Apruzzi:2019opn; @Apruzzi:2019enx; @Apruzzi:2019kgb]; in particular, classifications of $5$d SCFTs [@Jefferson:2017ahm; @Jefferson:2018irk; @Bhardwaj:2019jtr] and 5d $\Ncal{=}1$ gauge theories [@Bhardwaj:2020gyu] have been proposed. An interesting question concerns the Higgs branch of the full vacuum moduli space: for the low-energy effective theory the Higgs branch is described by the hyper-Kähler quotient construction [@Hitchin:1986ea]; in contrast, for the Higgs branch $\Higgs_\infty$ at the fixed point the same is not true. The first studies [@Cremonesi:2015lsa; @Ferlito:2017xdq] of $\Higgs_\infty$ indicated that in order to capture the geometric features of the moduli space at infinite gauge coupling, $3$d $\Ncal =4$ Coulomb branches of certain quiver gauge theories are useful. 
This idea has been further developed and systematised in [@Cabrera:2018jxt]: for a given 5-brane web, where each external 5-brane ends on a 7-brane, *magnetic quivers* can be derived such that the $3$d $\Ncal=4$ Coulomb branches thereof are equivalent geometric descriptions of the finite and infinite coupling $5$d $\Ncal=1$ Higgs branches. So far, the magnetic quiver description has only been available for $5$d gauge theories with special unitary gauge groups. However, many interesting 5d dualities, see for instance [@Gaiotto:2015una; @Hayashi:2015zka; @Jefferson:2018irk; @Bhardwaj:2020gyu], are between unitary and orthogonal or symplectic gauge groups. Hence, it is natural to further develop the understanding of Higgs branches of $5$d theories with orthogonal or symplectic groups. Suitable 5-brane web realisations either contain O$5$ or O$7$ planes. In this work, the focus is placed on 5-brane webs in the presence of O$5$ orientifolds. For single gauge groups with fundamental matter, the field theory classification of theories with non-trivial interacting fixed points has been presented in [@Intriligator:1997pq]; the subsequent brane construction [@Brunner:1997gk] confirmed these results. However, more non-trivial fixed points have been proposed in [@Bergman:2015dpa]. Consider the Higgs branch moduli space in more detail. To begin with, the finite coupling, or classical, Higgs branches are conventionally treated by an F- and D-term analysis. However, it is a known fact that $\sprm(k)$ theories with $N_f <2k$ fundamental flavours and $\orm(k)$ theories with $N_f \leq k-3$ fundamental flavours do not admit complete Higgsing. Consequently, their analysis is currently incomplete. For instance, analogous behaviour exists for 5d $\Ncal=1$ $\surm(k)$ SQCD with $N_f < 2k$, which has only recently been addressed in [@Bourget:2019rtl].
In terms of brane webs, the quaternionic Higgs branch degrees of freedom can be counted by a decomposition into independent subwebs, as introduced in [@Aharony:1997bh] and demonstrated for 5-brane webs with O$5$ planes in [@Zafrir:2015ftn]. Moving on to the infinite coupling Higgs branches, the enhancement of the global symmetry has been studied via field theory [@Zafrir:2015uaa] and brane webs [@Bergman:2015dpa]. Moreover, the counting of the additional new Higgs branch dimensions at the fixed point has been demonstrated in [@Zafrir:2015ftn]. Hence, dimension and global symmetry of $\Higgs_\infty$ are known, but no geometrical description has been provided yet. This is precisely the first aim of the present paper: to provide an improved description of finite and infinite coupling Higgs branches. The approach taken is known as *magnetic quivers* [@Cabrera:2018jxt; @Cabrera:2019izd; @Cabrera:2019dob] (see also [@Hanany:1996ie]): in brief, a magnetic quiver $\mathsf{Q}$ is a combinatorial object that is derived from the Type II brane configuration with 8 supercharges, describing a given theory $\mathsf{T}$ in a certain phase $\mathcal{P}$. The Higgs branch of $\mathsf{T}$ in that phase equals the 3d $\Ncal=4$ Coulomb branch of the magnetic quiver, meaning that the combinatorial data is taken as an input to derive a space of dressed monopole operators in the sense of [@Cremonesi:2013lqa]. Thus, $$\begin{aligned} \Higgs \left(\text{phase $\mathcal{P}$ of theory $\mathsf{T}$}\right) = \Coulomb\left( \text{magnetic quiver $\mathsf{Q}(\mathcal{P})$} \right)\end{aligned}$$ holds as *equality of moduli spaces*. To be more precise: the magnetic quivers compute a geometric space, called the *Higgs variety*. The Higgs branch chiral ring may contain nilpotent operators, which makes the full Higgs branch a non-reduced scheme, called the *Higgs scheme*. This problem is addressed for classical $4$d $\mathcal{N}=2$ SQCD in [@Bourget:2019rtl].
In the rest of the paper only the geometric parts of the moduli space are studied and the analysis of nilpotent elements is left for future work. The concept of magnetic quivers has proven itself useful in a variety of cases: for $6$d $\Ncal=(
Experimental evidence for oscillations among the three neutrino generations has recently been reported. For two-generation mixing, the probability that a neutrino created as type $\nu_1$ oscillates to type $\nu_2$ is: $$P(\nu_1 \rightarrow \nu_2) = \sin^2 2\alpha \sin^2 \left(\frac{1.27 \Delta m^2 L}{E_\nu}\right), \label{eq:posc}$$ where $\Delta m^2$ is the mass squared difference between the mass eigenstates in ${\rm eV^2}$, $\alpha$ is the mixing angle, $E_\nu$ is the incoming neutrino energy in $\rm GeV$, and $L$ is the distance between the points of creation and detection in km. Data from the Super-Kamiokande atmospheric neutrino experiment [@ATM] have been interpreted as evidence for $\nu_\mu \rightarrow \nu_{\tau}$ oscillations with $\sin^2 2\alpha > 0.88$ and $1.6 \times 10^{-3} < \Delta m^2 < 4 \times 10^{-3}$ ${\rm eV^2}$. The LSND experiment has reported [@lsnd] a signal consistent with $\bar{\nu}_\mu \rightarrow \bar{\nu}_e$ oscillations with $\sin^2 2\alpha \approx 10^{-2}$ and $\Delta m^2 \stackrel{>}{\scriptstyle\sim} 1$ $\rm eV^2$. The solar neutrino experiments, most recently SNO [@SNO], have reported evidence for oscillations of $\nu_e \rightarrow (\nu_\mu, \nu_{\tau})$ with $\Delta m^2 <10^{-3}{\rm eV^2}$. Within a three-generation mixing scenario and under the assumption that the $\Delta m^2$ values for $\nu$ and ${\overline{\nu}}$ are the same, it is not possible to simultaneously accommodate the Super-Kamiokande, LSND, and SNO results. Therefore, experimental searches for oscillations with both $\nu$ and ${\overline{\nu}}$ beams are of interest. In this letter, we report on a search for oscillations in both the $\nu_\mu \rightarrow \nu_e$ and $\overline\nu_\mu \rightarrow \overline\nu_e$ channels using a new sign-selected neutrino beam. High-purity $\nu$ and ${\overline{\nu}}$ beams are provided by the new Sign-Selected Quadrupole Train (SSQT) beamline at the Fermilab Tevatron during the 1996-1997 fixed target run.
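A direct numerical reading of Eq. (\[eq:posc\]) — the inputs below are illustrative order-of-magnitude choices on our part, not measured values: a short baseline of order 1.4 km with a high beam energy of order 75 GeV probes an LSND-scale $\Delta m^2 \sim 1~{\rm eV^2}$ with only a small oscillation phase:

```python
import math

def p_osc(sin2_2alpha, dm2_ev2, L_km, E_GeV):
    """Two-flavour oscillation probability of Eq. (posc):
    P = sin^2(2 alpha) * sin^2(1.27 * dm2 * L / E)."""
    return sin2_2alpha * math.sin(1.27 * dm2_ev2 * L_km / E_GeV) ** 2

# Hypothetical inputs: maximal mixing, dm2 = 1 eV^2, L = 1.4 km, E = 75 GeV.
p = p_osc(1.0, 1.0, 1.4, 75.0)
assert p < 1e-3  # oscillation phase 1.27*1.4/75 ≈ 0.024 rad, so P is tiny
```

This is why such a search relies on large event samples and tight control of the intrinsic $\nu_e$ background rather than on a large oscillation probability per neutrino.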
Hadrons are produced when the $800$ GeV primary proton beam interacts in a BeO target located 1436 m upstream of the neutrino detector. Sign-selected secondary particles of specified charge (mean energy of about 250 GeV) are directed in a 221 m beamline towards a 320 m decay region, while oppositely charged (and neutral) mesons are stopped in beam dumps. Two-body decays of the focused pions yield $\nu_\mu$ ($\overline\nu_\mu$) with a mean energy of $\approx$75 GeV. Two-body decays of the focused kaons yield $\nu_\mu$ ($\overline\nu_\mu$) with a mean energy of $\approx$200 GeV. Muons are stopped in a 915 m steel/earth shield. The energy and spatial distributions of $\nu_\mu$ ($\overline\nu_\mu$) CC events in the detector provide a determination of the flux of pions and kaons in the decay channel (used in the determination of the predicted $\nu_e$ and ${\overline{\nu}}_e$ fluxes). For $\nu_\mu$ running mode, the predicted energy spectra for $\nu_\mu$, $\overline\nu_\mu$, and ($\nu_e$+$\overline\nu_e$) CC events are shown in Figure \[fig:enu\](a). The corresponding spectra for $\overline\nu_\mu$ running mode are shown in Figure \[fig:enu\](b). The $\nu_\mu$ ($\overline\nu_\mu$) beam contains a 1.7% (1.6%) $\nu_e$ (${\overline{\nu}}_e$) component, of which $93\%$ ($70\%$) is produced from $K^\pm \rightarrow \pi^0 e^\pm \stackrel{_{(-)}}{\nu_e}$ decays. The proton beam is incident on the production target at an angle such that forward neutral kaons do not point at the detector. This greatly reduces the electron neutrino flux from neutral kaon decays (which is more difficult to model). The error in the predicted electron neutrino flux is reduced from $4.1 \%$ (in CCFR [@ALEX]) to $1.4 \%$ (NuTeV). The NuTeV detector [@NIM] is an upgrade of the CCFR detector [@CCFR]. It consists of an 18 m long, 690 ton total absorption target-calorimeter with a mean density of ${\rm 4.2 ~g/cm^3}$.
Muon energy is measured by a 10 m long iron toroidal spectrometer. The target consists of 168 steel plates, each ${\rm 3~m \times 3~m \times 5.15~cm}$, instrumented with liquid scintillation counters placed every two steel plates and drift chambers every four plates. The separation between consecutive scintillation counters corresponds to six radiation lengths. The energy resolution of the target calorimeter is $\Delta E_h/E_h \approx 0.85/\sqrt{E_h}$ (GeV) for hadrons and $\Delta E_e/E_e \approx 0.50/\sqrt{E_e}$ (GeV) for electrons. The muon momentum resolution is $\Delta p_\mu/p_\mu = 0.11$. The NuTeV detector is calibrated continuously, once every accelerator cycle (once a minute), with beams of electrons, muons, and hadrons during the slow spill part of the cycle. While the neutrinos arrived in gates a few ms wide, the calibration beam arrived in a different gate 20 s long, followed by an off-spill cosmic ray gate for background measurement. These continuous test beam calibrations yield a reduction in the hadron energy scale error from $1\%$ (in CCFR [@CCFR]) to $0.43\%$ (in NuTeV [@NIM]). The event sample used in this analysis is similar to that used in the recent precise NuTeV measurement of the electroweak mixing angle [@NCPRL], with additional fiducial cuts and ${\mbox{$E_{{\rm\scriptstyle}cal}$}}$ $>$ 30 GeV. The data sample consists of $1.5 \times 10^{6}$ $\nu$ events and $0.35 \times 10^{6}$ ${\overline{\nu}}$ events with a mean visible energy in the calorimeter (${\mbox{$E_{{\rm\scriptstyle}cal}$}}$) of 74 GeV and 56 GeV, respectively. The observed neutrino events are separated into CC and NC candidates. Both CC and NC interactions initiate a cascade of hadrons in the target that is registered in both the scintillation counters and drift chambers.
Muon neutrino CC events are distinguished by the presence of a final state muon, which typically penetrates well beyond the hadronic shower and deposits energy in a large number of consecutive scintillation counters. NC events usually have no final state muon and deposit energy over a range of counters typical of a hadronic shower (about ten counters $\approx$ 1 m of steel). For each event, the length ($L$) is defined as the number of scintillation counters between the interaction vertex and the last counter consistent with at least single muon energy deposition. A pure sample of $\nu_{\mu}$ charged current events ($\nu_{\mu}N \rightarrow \mu^-X$) is obtained from a ‘long’ sample with $L \geq 29$ for $\nu$ running mode ($L \geq 28$ for ${\overline{\nu}}$). The ‘short’ event sample consists of events with $L \leq 28$ for $\nu$ running mode ($L \leq 27$ for ${\overline{\nu}}$). Events with a ‘short’ length are primarily NC induced and originate from: 1. $\nu_{\mu,e}N \rightarrow \nu_{\mu,e}X$; $\nu_{\mu,e}$ NC events ($\approx 65\%$); 2. $\nu_{\mu}N \rightarrow \mu^-X$; $\nu_{\mu}$ short CC events with muons which range out or exit the side of the calorimeter ($25\%$ for $\nu$, $15\%$ for ${\overline{\nu}}$); 3. $\nu_e N \rightarrow eX$; $\nu_e$ CC events ($10\%$ for $\nu$, $15\%$ for ${\overline{\nu}}$); 4. $\mu N \rightarrow \mu X$; steep cosmic ray interactions ($2\%$ for $\nu$ and $9\%$ for ${\overline{\nu}}$). The electron produced in a $\nu_e$ CC event (source 3) deposits energy in a few counters immediately downstream of the interaction vertex; this changes the longitudinal energy deposition profile of the shower. The energy profile is characterized by the ratio of the sum of the energy deposited in the first three scintillation counters to the total visible energy in the calorimeter ${\mbox{$E_{{\rm\scriptstyle}cal}$}}$: $$\eta_3 \equiv \frac{E_1 + E_2 + E_3}{{\mbox{$E_{{\rm\scriptstyle}cal}$}}}.$$
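The profile variable just defined can be sketched in a few lines; the counter energies and the comparison below are fabricated for illustration (the actual analysis cut on $\eta_3$ is not reproduced here):

```python
def eta3(counter_energies):
    """Ratio of the energy deposited in the first three scintillation
    counters downstream of the vertex to the total calorimeter energy E_cal."""
    e_cal = sum(counter_energies)
    return sum(counter_energies[:3]) / e_cal

# An electromagnetic shower is compact, so a nu_e CC event concentrates its
# energy in the first few counters; a hadronic shower is longer and flatter.
em_like = [40.0, 25.0, 8.0, 2.0, 1.0]                  # GeV per counter (illustrative)
had_like = [12.0, 10.0, 9.0, 8.0, 7.0, 6.0, 5.0, 4.0]
print(eta3(em_like), eta3(had_like))
```

Electron-like events thus populate the high-$\eta_3$ tail, which is what makes the variable useful for separating source 3 from the NC-dominated bulk.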
--- abstract: 'The Three Gaps Theorem states that for any $\alpha \in (0,1)$ and any integer $N \geq 1$, the fractional parts of the sequence $0, \alpha, 2\alpha, \cdots, (N-1)\alpha$ partition the unit interval into $N$ subintervals having at most three distinct lengths. We here provide a new proof of this theorem using zippered rectangles, and present a new gaps theorem (along with two proofs) for general interval exchange transformations. We also touch on the statistics of the Three Gaps Theorem.' address: Department of Mathematics, University of Washington author: - 'Diaaeldin (Dia) Taha' title: 'The Three Gaps Theorem, Interval Exchange Transformations, and Zippered Rectangles' --- Introduction ============ Orbits, gaps, and randomness ---------------------------- A key theme in dynamical systems is understanding the extent to which orbits of a dynamical system resemble sequences of independent, identically distributed (i.i.d.) random variables. In this paper we consider sequences $s = (s_n)_{n=1}^\infty \subset [0,1)$ arising as orbits of rotations and interval exchange transformations $T:[0,1) \rightarrow [0, 1)$. That is, we consider sequences $s$ defined by $$s_n = T^{n-1}(0),$$ with $T$ being an interval exchange transformation. In what follows, we use the interval $[0, 1)$ and the circle $\mathbb{S}^1$ interchangeably. Following [@Athreya2012-kd], the equidistribution of a sequence $s = (s_n)_{n=1}^\infty \subset [0,1)$ can be considered as a first order statistical measure of randomness. Specifically, a sequence $s = (s_n)_{n=1}^\infty \subset [0,1)$ is said to *equidistribute* if the measures $\Delta_N = \frac{1}{N}\sum_{n=1}^N \delta_{s_n}$ weak-$\ast$ converge to the Lebesgue measure $\operatorname{Leb}_{[0,1)}$ on the unit interval as $N \to \infty$. For the orbit sequences $s = (T^n x)_{n=1}^\infty$, ergodicity of the map $T$ implies this convergence for almost every starting point $x$, and unique ergodicity strengthens this to every point.
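The statement of the Three Gaps Theorem is easy to check empirically; the following sketch computes the distinct gap lengths cut out by a rotation orbit (the choice of $\alpha$ and the rounding tolerance are illustrative):

```python
import math

def distinct_gap_lengths(alpha, N):
    """Distinct lengths among the N subintervals cut out of [0, 1)
    by the points {0, alpha, 2*alpha, ..., (N-1)*alpha} mod 1."""
    pts = sorted((n * alpha) % 1.0 for n in range(N))
    gaps = [b - a for a, b in zip(pts, pts[1:])]
    gaps.append(1.0 - pts[-1])  # last subinterval, back to 1 (= 0 on the circle)
    # round to absorb floating-point noise before counting distinct values
    return sorted(set(round(g, 12) for g in gaps))

# Three Gaps Theorem: at most three distinct lengths, for any alpha and N.
for N in (5, 17, 100):
    lengths = distinct_gap_lengths(math.sqrt(2) - 1, N)
    assert len(lengths) <= 3
```

Running this for many irrational $\alpha$ and large $N$ never produces more than three distinct values, exactly as the theorem asserts.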
This covers the first order statistics of the sequences. A finer measure of randomness is whether the *gap distribution* for the sequence $(s_n)_{n=1}^\infty$ resembles that of an i.i.d. sequence of uniformly distributed random variables $(X_n)_{n=1}^\infty \subset [0,1)$. More precisely, given a finite segment $(s_n)_{n=1}^N$, the points arrange themselves on the unit interval $$0 \leq s_{\sigma_{s, N}(1)} \leq s_{\sigma_{s, N}(2)} \leq \cdots \leq s_{\sigma_{s, N}(N)} < 1,$$ with $\sigma_{s, N} : \{1, 2, \cdots, N\} \to \{1, 2, \cdots, N\}$ denoting a permutation of $\{1, 2, \cdots, N\}$ induced by the order of the points. The gaps $$s_{\sigma_{s,N}(1)} - 0,\ s_{\sigma_{s,N}(2)} - s_{\sigma_{s,N}(1)},\ \cdots,\ s_{\sigma_{s,N}(N)} - s_{\sigma_{s,N}(N-1)}, 1 - s_{\sigma_{s,N}(N)}$$ form a multiset $\widetilde{\operatorname{Gaps}}_{s, N}$. The limiting behavior of the normalized gap distribution $$\lim_{N \to \infty} \frac{\#\left((N \cdot \widetilde{\operatorname{Gaps}}_{s, N}) \cap (a, b)\right)}{N}$$ for $0 \leq a < b \leq \infty$ gives us the second order statistics of the sequence. For an i.i.d. sequence of uniformly distributed variables $(X_n)_{n=1}^\infty$, the limiting behavior is Poissonian: for any $t>0$, $$\lim_{N \to \infty} \frac{\#\left((N \cdot \widetilde{\operatorname{Gaps}}_{(X_n)_{n=1}^\infty, N}) \cap (t, \infty)\right)}{N} = e^{-t}.$$ Sequences $s$ whose gaps stray from that Poissonian limiting behavior are called *exotic* in [@Athreya2012-kd]. The interested reader should refer to [@Athreya2012-kd] for examples of exotic sequences, and to get an understanding of the dynamical approach to gap distributions in general. Rotations, IETs, and gaps ------------------------- Circle rotations and interval exchange transformations (IETs for short) are low complexity maps (zero topological entropy) essential to the study of polygonal billiards and linear flows on translation surfaces.
In this paper, we study gaps for sequences $s$ that arise as orbits of circle rotations $s = (R_\alpha^n0)_{n=0}^\infty$, and interval exchange transformations $s = (T^n0)_{n=0}^\infty$. Our main results are \[theorem: d + 2 gap theorem\], \[theorem: gap distribution for circle rotation\], and \[theorem: gap distribution for iets\], along with a new proof for the Three Gap Theorem (\[theorem: three gap theorem\]). We start by presenting the definition of interval exchange transformations, and follow it with a short exposition of the Three Gap Theorem to provide historical context. Interval Exchange Transformations {#subsection: interval exchange transformations} --------------------------------- Given $\lambda = (\lambda_1, \lambda_2, \cdots, \lambda_d) \in \mathbb{R}^d$ with $\lambda_i \geq 0$ and $|\lambda| = \sum_{i=1}^d \lambda_i = 1$, and a permutation $\pi \in S_d$ on the $d$ letters $\{1, 2, \cdots, d\}$, we get a map $T = T_{(\lambda, \pi)} : [0, 1) \to [0, 1)$ exchanging the intervals $I_i = [\sum_{k=1}^{i-1}\lambda_k,\sum_{k=1}^i \lambda_k)$ ($i=1, 2, \cdots, d$) according to the permutation $\pi$. That is, if $x \in I_i$, then $$T_{(\lambda, \pi)}(x) = x - \sum_{k < i}\lambda_k + \sum_{\pi(k) < \pi(i)}\lambda_k.$$ We say that $T_{(\lambda, \pi)}$ is a *$d$-interval exchange transformation*, or *$d$-IET* for short, with *length data* $\lambda$ and *combinatorial data* $\pi$. A permutation $\pi \in S_d$ is said to be *irreducible*, denoted $\pi \in S_d^o$, if $\pi(\{1, 2, \cdots, k\}) \neq \{1, 2, \cdots, k\}$ for every $k < d$. The length data $\lambda$ can be parametrized by the unit simplex $\Delta_d = \{(t_1, t_2, \cdots, t_d) \in \mathbb{R}^d \mid t_i \geq 0, \text{ and } \sum_{i=1}^d t_i = 1\}$. The unit simplex comes with the Lebesgue measure $\operatorname{Leb}_{\Delta_d}$, which makes it possible to talk about “almost all $d$-IETs”.
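The displayed formula for $T_{(\lambda, \pi)}$ translates directly into code; a minimal sketch (the permutation is passed as a 1-based list, a convention of this illustration only):

```python
def make_iet(lam, pi):
    """Interval exchange T_{(lambda, pi)} on [0, 1).

    lam : lengths (lambda_1, ..., lambda_d) summing to 1
    pi  : permutation as a 1-based list, pi[i-1] = image of letter i
    """
    d = len(lam)
    def T(x):
        # locate the interval I_i containing x
        left = 0.0
        for i in range(1, d + 1):
            if x < left + lam[i - 1]:
                break
            left += lam[i - 1]
        # translate: x - sum_{k < i} lam_k + sum_{pi(k) < pi(i)} lam_k
        return x - left + sum(lam[k - 1] for k in range(1, d + 1)
                              if pi[k - 1] < pi[i - 1])
    return T

# A 2-IET with pi = (2 1) is the rotation x -> x + lam_2 (mod 1).
T = make_iet((0.7, 0.3), [2, 1])
print(T(0.0), T(0.8))  # prints 0.3 and 0.1 (approximately, in floating point)
```

With irreducible $\pi$ and irrational length data this gives a convenient way to generate the orbit sequences $s_n = T^{n-1}(0)$ studied below.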
Denote the discontinuities of $T^{-1}$ by $\alpha_0 = 0, \alpha_1, \cdots, \alpha_d = 1$, and those of $T$ by $\beta_0 = 0, \beta_1, \cdots, \beta_d = 1$. Note that the subintervals $(\alpha_{i-1}, \alpha_i)$ are permuted by $T^{-1}$, and get sent to the subintervals $(\beta_{\pi^{-1}(i) - 1}, \beta_{\pi^{-1}(i)})$ for $i = 1, 2, \cdots, d$. For more on IETs, the interested reader should check the excellent survey [@Yoccoz2007-zz]. ### Gaps of IETs Let $T = T_{(\pi, \lambda)} : [0, 1) \to [0, 1)$ be a $d$-interval exchange map with irreducible combinatorial data $\pi \in S_d^o$. For an integer $N \geq 1$, consider the orbit segment $\{T^n0 \mid 0 \leq n < N\}$, ordered on $[0, 1)$ as $$0 = T^{\sigma_{T, N}(0)}0 < T^{\sigma_{T, N}(1)}0 < \cdots < T^{\sigma_{T, N}(N-1)}0 < 1$$ with $\sigma_{T, N} : \{0, 1, \cdots, N-1\} \to \{0, 1, \cdots, N-1\}$ the permutation induced
--- abstract: 'The benefits of portfolio diversification are a central tenet implicit to modern financial theory and practice. Linked to diversification is the notion of breadth. Breadth is correctly thought of as the number of independent bets available to an investor. Conventional applications involving breadth frequently assume only the number of separate bets. There may be a large discrepancy between these two interpretations. We utilize a simple singular-value decomposition (SVD) and the Kaiser-Guttman stopping criterion to select the integer-valued effective dimensionality of the correlation matrix of returns. In an emerging market such as South Africa we document an estimated breadth that is considerably lower than anticipated. This lack of diversification may be because of market concentration, exposure to the global commodity cycle and local currency volatility. We discuss some practical extensions to a more statistically correct interpretation of market breadth, and its theoretical implications for both global and domestic investors.' author: - 'Daniel Polakow and Tim Gebbie[^1]' title: 'How many independent bets are there in an emerging market?' --- Introduction ============ One of the most widely accepted tenets of financial theory is the principle that diversification is an essential component of any well-constructed portfolio. Diversification serves to mitigate specific sources of risk within any single asset class, and systemic sources of risk across asset classes. Hence holding long positions in two resource companies such as BHP Billiton and Rio Tinto may go a good way towards lessening the impact of company-specific risk within the international resources sector. Similarly, being exposed to property within a balanced (mutual) portfolio lessens the threat that other asset classes under-perform if property rallies.
The idea is that spreading one’s bets results in value being unlocked slowly over time and that diversification is a way to deal with an uncertain and volatile investment universe. These are fairly convincing arguments to most. On a simple mathematical level, through diversification, one enhances one’s risk-adjusted return by virtue of a principal impact on the ‘risk’ denominator, be it the standard deviation of a Sharpe ratio or the active risk of an information ratio [@Grinold1989; @D90]. A portfolio that is comfortably ‘diversified’ is expected to have a higher Sharpe ratio and information ratio. Diversification is frequently lauded as the only free lunch that econometrics offers to fund managers, and “one ought to indulge heartily at the price” [@Thomas2005]. It is from this optimistic base that we enter the fray with the allegation that ‘diversification’ opportunities may be both limited and overstated. Diversification in its common pretext acts more to disguise value-add than to enhance it. Interestingly, the usefulness of diversification in the way it was originally intended is particularly limited in South Africa and possibly other emerging markets for some less-than-obvious reasons, as we discuss later. As we illustrate, because diversification is a frequently misunderstood phenomenon it can clearly be a very mixed blessing. To the skilled fund manager diversification may actually be an impediment. Spreading one’s bets too thinly across independent gambits condemns such talent to mediocrity, since there is little room to move efficiently in all dimensions. To the less prudent fund manager, however, diversification will often offer a safe haven where poor bets in some areas will be simultaneously countered by good ones in others.
Furthermore, we show that in the context of conventional asset classes within Southern Africa, there is a lot less room to maneuver than most professional investors suspect, due to an overriding commonality of extraneous factors that impact similarly on a wide variety of asset classes. This has implications both for global investors naively treating emerging markets as an independent asset class, for those seeking international diversification from within an emerging market, and for attempts to understand the theoretical applicability of asset pricing models in emerging markets. In section \[ss:breadth\] we discuss ‘breadth’; how it is understood, used, and typically misused. This is followed in section \[ss:svd\] by a discussion of a well-known multivariate statistical technique that facilitates a more correct understanding of the available breadth within any universe of assets: using the singular value decomposition in conjunction with the Kaiser-Guttman stopping criterion, which selects the integer-valued effective dimensionality of a correlation matrix of returns as the number of eigenvalues greater than or equal to 1. We are then able, in section \[ss:examples\], to illustrate the breadth available to investors in South African markets, by using some well known multivariate statistical techniques in relation to three examples: a portfolio of equity (see Figures \[fig:figure1\] and \[fig:figure2\]), an equity and bond portfolio (see Figures \[fig:figure3\] and \[fig:figure4\]) and last, a portfolio including, in addition to equity and bonds, cash, property, international bonds and international equity (see Figures \[fig:figure5\] and \[fig:figure6\]). Lastly, in light of the insights provided in the previous sections, in section \[ss:conclusion\] we reconsider the role that asset allocation has within the context of a resident balanced portfolio and also focus our discussion within the context of the useful fundamental law of active management [@Clarke2002; @Grinold1989].
Breadth - Independence rather than separateness? {#ss:breadth} ------------------------------------------------ Conventional theory suggests that an increase in diversification opportunities (N) is accompanied by an increase in one’s information ratio (IR) [@Grinold1989]. Hence, in the terminology of active management, an increase in N serves to enhance one’s ability to exploit information. Note at the onset that N is defined and treated as the number of separate bets (sensu Clarke et al. 2002). For example, assume we have a 60% chance of getting equity bets correct. A bet on one underlying will yield an IR of 0.2, a bet on five underlying securities an IR of 0.45, and a bet on 20 underlying securities an IR of 0.90. This situation is easily verified. The understanding stemming from this detail is universal. For example, Lee Thomas [@Thomas2005] notes the following implications for considering diversification in the selfsame light: 1. Since diversification has an obvious statistical basis, a larger number of bets will produce a higher information ratio. 2. A lot of the differences between fund managers’ performances are often ascribed to ‘skill’ whereas the differences may simply be an artifact of better diversified portfolios. 3. The search for higher quality investments should be superseded by the search for diversified investments. 4. Diversification is paramount to investment success – across asset classes, styles and countries. Diversify, diversify, diversify! The above-mentioned arguments are very appealing, and very well utilized, but we disagree with each and every contention since all omit two essential truisms, which when understood, shed a very different light on the benefits of diversification and the nature of breadth. \[lemma:truism1\] The square root of N in mathematical statistics implies ‘independence’ amongst statistical units (here bets) [@Rice1995] rather than simply the notion of ‘separate bets’ as is most often implied.
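The quoted IR numbers follow from the fundamental law of active management, $IR = IC\sqrt{N}$; a quick check, assuming the common identification $IC = 2p - 1$ for a hit rate $p$ (here $p = 0.6$, so $IC = 0.2$ — the identification is our assumption, not stated in the text):

```python
import math

def information_ratio(p_correct, n_bets):
    """Fundamental law of active management: IR = IC * sqrt(N),
    with the information coefficient taken as IC = 2p - 1."""
    ic = 2.0 * p_correct - 1.0
    return ic * math.sqrt(n_bets)

for n in (1, 5, 20):
    print(n, round(information_ratio(0.6, n), 2))
# prints 0.2, 0.45, 0.89 (the text quotes 0.90 for N = 20)
```

Note that the square root is what does the work here, and — as the truisms below stress — it presumes the N bets are genuinely independent.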
If I hold a portfolio of 10 single stocks, do I really have 10 ‘independent’ bets and is my breadth really 3.16? If I increase this to 1000 single stocks (assuming I have as many available), is my breadth 31.6? This is the key theoretical question dealt with here. \[lemma:truism2\] Skill is not generally or simply scalable over breadth. One requires considerable skill in preserving one’s information coefficient (or IC) [@Clarke2002] across an increasingly diverse universe of investable underlying securities. There is no [*a priori*]{} reason to expect, for argument’s sake, a South African fund manager or analyst to be as adept in understanding the earnings potential of a diversified industrial company, as in understanding the risks and upside of Chinese private equity; yet there is a continuity of forecasting skill invoked across both. This is a key practical implication considered here. Taken to its logical extreme, there is no reason to expect the same information coefficients (IC) between any two underlying securities. IC is an average measure that is typically applied to the sum total of all bets in a portfolio. We need to disaggregate the measure to understand its scalability. Understanding the reality and the benefits of diversification resides in understanding both truisms given in Lemma (\[lemma:truism1\]) and Lemma (\[lemma:truism2\]) concurrently. We focus on the following three pertinent questions: Question \#1 : Just how many South African single stocks do I need to add to a portfolio before I start to replicate pre-existing elements of diversification (i.e. saturate all elements of independence)? Question \#2 : Do I capture much more ‘breadth’ if I include other local and international asset classes? Question \#3 : What are the implications of these findings to fund managers? 
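The key question above can be probed numerically with the SVD-based count described in the methodology section below; in this sketch, the single common factor and the noise level are fabricated purely for illustration:

```python
import numpy as np

def effective_breadth(returns):
    """Kaiser-Guttman count: number of eigenvalues of the correlation
    matrix of returns (T observations x N assets) that are >= 1."""
    corr = np.corrcoef(returns, rowvar=False)
    # corr is symmetric positive semi-definite, so its singular values
    # coincide with its eigenvalues.
    eigvals = np.linalg.svd(corr, compute_uv=False)
    return int(np.sum(eigvals >= 1.0))

rng = np.random.default_rng(0)
market = rng.standard_normal((500, 1))        # one common driver
noise = 0.3 * rng.standard_normal((500, 10))
returns = market + noise                      # 10 'separate' bets
print(effective_breadth(returns))             # prints 1: a single independent bet
```

Ten separate holdings dominated by one common factor collapse to an effective dimensionality of one, which is exactly the separate-versus-independent distinction at issue.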
Methodology - The informational content in an SVD {#ss:svd} ------------------------------------------------ The foundation of this research arises from the confusion between the notions of ‘separate bets’ and ‘independent bets’; the two are not the same. The question really is: how alike are they, and are there better ways to understand independence than the manner in which [@Grinold1989; @Clarke2002] imply? We believe there are several ways to better represent independence than through Grinold’s
--- abstract: 'The aim of this contribution is to derive a general matrix formula for the net period premium paid in more than one state. For this purpose we propose to combine actuarial techniques with the graph optimization methodology. The obtained result is useful, for example, for more advanced models of dread disease insurance allowing period premiums paid by both healthy and ill (e.g. not yet terminally ill) persons. As an application, we provide an analysis of dread disease insurance against the risk of lung cancer based on the actual data for the Lower Silesian Voivodship in Poland.' author: - | Joanna Dȩbicka [^1]\ Department of Statistics, Wroclaw University of Economics\ and\ Beata Zmyślona\ Department of Statistics, Wroclaw University of Economics title: | **Premium valuation for a multiple state model\ containing manifold premium-paid states** --- [*Keywords:*]{} modified multiple state model; Dijkstra algorithm; net premium; stochastic interest rate; critical illness insurance; Accelerated Death Benefits. Introduction {#sec:intro} ============ The insurance market is constantly expanding. Insurers offer more flexible contracts taking into account various situations that may arise in life. An example would be a serious illness, in the case of which the priorities of the insured person may change considerably. In particular, it may be that the death benefit becomes less important while the life benefit becomes most important. Different kinds of solutions exist on the insurance market to protect the insured against financial problems in this difficult situation.
According to one of them, in such unexpected situations during the insurance period an insurer may offer a life insurance policyholder the purchase of an additional option called Accelerated Death Benefits (ADBs), which provides an acceleration of all or of a part of the basic death benefit to the insured before his death. Alternatively, the insured may buy dread disease insurance (or critical illness insurance), which provides the policyholder with a lump sum in case of a dread disease included in a set of diseases specified by the policy conditions, such as heart attack, cancer or stroke (see [@DG93], [@HP99], [@Pit94], [@Pit14]). This implies that the dread disease policy does not meet any specific needs and does not protect the policyholder against such financial losses as loss of earnings or reimbursement of medical expenses. In both cases the conditions of these insurance products state that the benefit is paid on diagnosis of a specified condition, rather than on disablement. This is understandable, because this type of insurance is sensitive to the development of medicine; not all dread diseases are as deadly as they were a few years ago. Thus insurers introduce strict conditions for the right to receive benefits associated with a severe disease. One popular condition is that benefits are paid not only on the diagnosis but also on the disease stage, which directly depends on the expected future lifetime of a sick person. Then the insurer has to take into account that the probability of death of a dread disease sufferer depends on the duration of the disease. Depending on the conditions, the insurance premium may be paid in various forms: by a healthy or a sick (but not terminally ill) person, by any living person, or by a healthy person only. This article focuses on the accurate valuation of such insurance products. Multiple state modelling is a stochastic tool for designing and implementing insurance products.
The multistate methodology is commonly used in the calculation of actuarial values of different types of life and health insurances. A general approach to the calculation of moments of the cash value of the future payment streams (including benefits, annuities and premiums) arising from a multistate insurance contract can be found in e.g. [@Deb13]. This methodology, developed for the discrete-time model (where insurance payments are exercised at the end of time intervals), is based on a [*modified multiple state model*]{} (or [*extended multiple state model*]{}), for which matrix formulas for actuarial values can be derived. This approach to costing contracts not only makes calculations easier, but also enables us to factorize the stochastic nature of the evolution of the insured risk and the interest rate, which can be observed in the derived formulas. The aim of this contribution is to derive a general matrix formula for the net period premium paid in more than one state, which can be applied to any type of insurance being modeled by the multiple state model. In the special case when the insured pays a single premium in advance or period premiums under the condition that he is healthy (or active), the valuation of the contract may be done using the results derived in [@Deb13]. More advanced models of dread disease insurance (e.g. the ADB form) allowing period premiums paid by both healthy and ill (e.g. not yet terminally ill) persons go beyond the scope of the models analysed in [@Deb13] and need a different approach. This paper focuses on the solution of this problem by the use of the graph optimization methodology to find the shortest path between particular states of the model. Note that the formulas obtained in [@Deb13] are special cases of the formula derived in this paper. The paper is organized as follows. In Section \[sec:m.s.m\] we describe the modified multiple state model and its probabilistic structure.
This modification allows us to use a matrix-form approach to costing insurance contracts. In Section \[sec:mfnp\] we derive general matrix expressions for the net period premium paid in more than one state. Section \[sec:application\] deals with the study of dread disease insurance against the risk of lung cancer. The modified multiple state model for dread disease insurance is presented in Section \[sec:actuarial.model\]. The probability structure of the analyzed model is built under the conditions that the probability of death for a dread disease sufferer depends on the duration of the disease and that the payment of benefits associated with a severe disease depends both on the diagnosis and on the disease stage, as presented in [@DZ2015] (Section \[sec:lc-ex\]). In Section \[sec:ap.n.p\], the results obtained in Section \[sec:mfnp\] are applied to the costing of different types of critical illness policies based on the actual data for the Lower Silesian Voivodship in Poland. Suggestions for further possible applications of the obtained results are presented in Section \[sec:conc\]. Multiple state model {#sec:m.s.m} ==================== Following Haberman & Pitacco [@HP99], with a given insurance contract we assign a [*multiple state model*]{}. That is, at any time the insured risk is in one of a finite number of states labelled by $1,2,...,N$ or simply by letters. Let ${\mathcal{S}}$ be the [*state space*]{}. Each state corresponds to an event which determines the cash flows (premiums and benefits). Additionally, by ${\mathcal{T}}$ we denote a [*set of direct transitions*]{} between states of the state space. Thus ${\mathcal{T}}$ is a subset of the set of pairs $\left( {i,j} \right)$, i.e., ${\mathcal{T}} \subseteq \{ \left( {i,j} \right)\mid i \neq j; i,j \in {\mathcal{S}} \}$. The pair $({\mathcal{S}},{\mathcal{T}})$ is called a [*multiple state model*]{}, and describes all possible insured risk events as far as its evolution is concerned (usually up to the end of insurance).
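The pair $({\mathcal{S}},{\mathcal{T}})$ is simply a directed graph, so shortest-path computations of the kind mentioned in the introduction can be carried out with Dijkstra's algorithm. A sketch on a toy three-state model (the states and transitions below are illustrative, not the paper's lung-cancer model):

```python
import heapq

# Toy multiple state model (S, T): 1 = healthy, 2 = ill, 3 = dead.
S = [1, 2, 3]
T = {(1, 2), (1, 3), (2, 3)}

def shortest_path_length(start, goal, transitions):
    """Dijkstra on the transition graph: the minimum number of direct
    transitions needed to move the insured risk from start to goal."""
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == goal:
            return d
        if d > dist.get(i, float("inf")):
            continue
        for (a, b) in transitions:
            if a == i and d + 1 < dist.get(b, float("inf")):
                dist[b] = d + 1
                heapq.heappush(heap, (dist[b], b))
    return None  # goal is not reachable from start

print(shortest_path_length(1, 3, T))  # prints 1: (1, 3) is a direct transition
print(shortest_path_length(3, 1, T))  # prints None: death is absorbing
```

With unit edge weights Dijkstra reduces to breadth-first search; weighted edges would be used when transitions carry costs.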
This model is structured so that it is possible to assign any cash flow arising from the insurance contract to one of the states (annuities, premiums) or to a transition between them (lump sums). For matrix formulas for actuarial values to be applicable, the multiple state model must be constructed so that each cash flow maintains its assignment to one of the states. Observe that for the lump sum the information that the insured risk is in a particular state at moment $k$ is not enough to determine the benefit at time $k$, because we need additional information about where the insured risk was at the previous moment $k-1$. A matrix is a two-dimensional structure, so it is not possible to determine the exact moment of realization of a lump sum benefit using these pieces of information. It appears that each $({\mathcal{S}},{\mathcal{T}})$ model can be easily (by the recursive procedure proposed in [@Deb13]) extended to a [*modified multiple state model*]{} $({\mathcal{S}}^{\ast},{\mathcal{T}}^{\ast})$ in which the lump sum benefit is affiliated with a particular state rather than with a direct transition between states. In this paper we consider an insurance contract issued at time $0$ (defined as the time of issue of the insurance contract) and terminating according to the plan at a later time $n$ ($n$ is the term of the policy). Moreover, $x$ is the age of the insured person at policy issue. We focus on the discrete-time model. Let $X^{\ast}(k)$ denote the state of an individual (the policy) at time $k$ ($k \in \textrm{T}= \{0,1,2,\dots ,n\}$). Hence the evolution of the insured risk is given by a discrete-time stochastic process $\{ {X^{\ast}(k); k\in \textrm{T}} \}$, with values in the finite set $\mathcal{S}^{\ast}=\{ 1, 2,...,N^{\ast}\}$. In order to describe the probabilistic structure of $\{X^{\ast}(k)\}$, for any moment $k \in \{0,1,2,...,n \}$, we introduce ${{\rm I\hspace{-0.
--- abstract: 'This paper is the complementary work of [@C2]. Ramified quadratic extensions $E/F$, where $F$ is a finite unramified field extension of $\mathbb{Q}_2$, fall into two cases that we call *Case 1* and *Case 2*. In the previous work [@C2], we obtained the local density formula for a ramified hermitian lattice in *Case 1*. In this paper, we obtain the local density formula for the remaining *Case 2*, by constructing a smooth integral group scheme model for an appropriate unitary group. Consequently, this paper, combined with the paper [@GY] of W. T. Gan and J.-K. Yu and [@C2], allows the computation of the mass formula for any hermitian lattice $(L, H)$, when the base field is unramified over $\mathbb{Q}$ at the prime $(2)$.' address: | Sungmun Cho\ Graduate school of mathematics, Kyoto University, Kitashirakawa, Kyoto, 606-8502, JAPAN\ Current: Department of Mathematics, POSTECH, 77, Cheongam-ro, Nam-gu, Pohang-si, Gyeongsangbuk-do, 37673, KOREA author: - Sungmun Cho title: 'Group schemes and local densities of ramified hermitian lattices in residue characteristic 2 Part II, Expanded version' --- [^1] [^2] Introduction {#in} ============ Introduction {#in'} ------------ Local densities are local factors of the mass formula, which is an essential tool in the classification of hermitian lattices over number fields. We refer to the introduction of [@C2] for the history of this subject. This paper is the complementary work of [@C2]. Let $F$ be a finite unramified field extension of $\mathbb{Q}_2$. Ramified quadratic extensions $E/F$ fall into two cases that we call *Case 1* and *Case 2* (cf. Section \[Notations\]), depending on the lower ramification groups $G_i$ of the Galois group $\mathrm{Gal}(E/F)$ as follows: $$\left\{ \begin{array}{l } \textit{Case 1}: G_{-1}=G_{0}=G_{1}, G_{2}=0;\\ \textit{Case 2}: G_{-1}=G_{0}=G_{1}=G_{2}, G_{3}=0. \end{array} \right.$$ The paper [@C2] gives the local density formula of hermitian lattices in *Case 1*.
The main contribution of this paper is an explicit formula for the local density of a hermitian $B$-lattice $(L, h)$ in *Case 2*, obtained by explicitly constructing a certain smooth group scheme (called a smooth integral model) associated to it that serves as an integral model for the unitary group associated to $(L\otimes_AF, h\otimes_AF)$, and by investigating its special fiber; here $B$ is a ramified quadratic extension of $A$, and $A$ is an unramified finite extension of $\mathbb{Z}_2$ with $F$ as the quotient field of $A$. In conclusion, this paper, combined with [@GY], [@C1], and [@C2], finally allows the computation of the mass formula for any hermitian lattice $(L, H)$ when the base field is unramified over $\mathbb{Q}$ at the prime $(2)$. As the simplest case, we can compute the mass formula for an arbitrary hermitian lattice explicitly when the base field is $\mathbb{Q}$. For a brief idea of and comments on the proof, we refer to the introduction of [@C2]. The methodology and the structure of this paper are basically the same as those of [@C2], and thus we repeat a number of sentences and paragraphs from [@C2] for synchronization without comment. But *Case 2* is more difficult and technical than *Case 1*. This paper is organized as follows. We first state a structure theorem for integral hermitian forms in Section \[sthln\]. We then give an explicit construction of a smooth integral model $\underline{G}$ (in Section \[csm\]) and study its special fiber (in Section \[sf\]) in *Case 2*. Finally, we obtain an explicit formula for the local density in Section \[cv\] in *Case 2*. In Appendix \[App:AppendixB\], we provide an example to describe the smooth integral model and its special fiber and to compute the local density for a unimodular lattice of rank 1. The reader might want to skip to Appendix \[App:AppendixB\] and at least go to Appendix \[nc\] to get a first glimpse into why the case of $p=2$ is really different.
Some of the ideas behind our construction can be seen in the simple example illustrated in Appendix \[cfot\]. Acknowledgements ---------------- This paper was initially the second half of [@C2]. Due to its large number of pages and technical difficulty, we decided to divide it into two papers. The author greatly thanks the referee of Algebra & Number Theory, who read [@C2], and Professor Brian Conrad for incredibly precise and helpful comments and discussions on this project. The author also thanks Professor Chia-Fu Yu for pointing out an error in this paper and [@C2]; it is explained in Remark \[correction\]. Structure theorem for hermitian lattices and notations {#sthln} ====================================================== In this section, we explain a structure theorem for hermitian lattices. This theorem is proved in [@C2]. Thus we take the necessary definitions and theorems from [@C2] without providing proofs. Notations {#Notations} --------- Notations and definitions in this section are taken from [@C1], [@GY], [@J], and [@C2]. - Let $F$ be an unramified finite extension of $\mathbb{Q}_2$ with $A$ its ring of integers and $\kappa$ its residue field. - Let $E$ be a ramified quadratic field extension of $F$ with $B$ its ring of integers. - Let $\sigma$ be the non-trivial element of the Galois group $\mathrm{Gal}(E/F)$. - The lower ramification groups $G_i$’s of the Galois group $\mathrm{Gal}(E/F)$ satisfy one of the following: $$\left\{ \begin{array}{l } \textit{Case 1}: G_{-1}=G_{0}=G_{1}, G_{2}=0;\\ \textit{Case 2}: G_{-1}=G_{0}=G_{1}=G_{2}, G_{3}=0. \end{array} \right.$$ In *Case 2*, based on Section 6 and Section 9 of [@J], there is a suitable choice of a uniformizer $\pi$ of $B$ such that $$\pi=\sqrt{2\delta}, \textit{where $\delta\in A $ and $\delta\equiv 1 \mathrm{~mod~}2$}.$$ Thus $E=F(\pi)$ and $\sigma(\pi)=-\pi$. 
From now on, we assume that $E/F$ satisfies *Case 2* and a uniformizing element $\pi$ of $B$ and $\delta$ are fixed as explained above throughout this paper. - Set $$\xi:=\pi\cdot\sigma(\pi).$$ - We consider a $B$-lattice $L$ with a hermitian form $$h : L \times L \rightarrow B,$$ where $h(a\cdot v, b \cdot w)=\sigma(a)b\cdot h(v,w)$ and $h(w,v)=\sigma(h(v,w))$. Here, $a, b \in B$ and $v, w \in L$. We denote by a pair $(L, h)$ a hermitian lattice. We assume that $V=L\otimes_AF$ is nondegenerate with respect to $h$. - We denote by $(\epsilon)$ the $B$-lattice of rank 1 equipped with the hermitian form having Gram matrix $(\epsilon)$. We use the symbol $A(a, b, c)$ to denote the $B$-lattice $B\cdot e_1+B\cdot e_2$ with the hermitian form having Gram matrix $\begin{pmatrix} a & c \\ \sigma (c) & b \end{pmatrix}$. For each integer $i$, the lattice of rank 2 having Gram matrix $\begin{pmatrix} 0 & \pi^i \\ \sigma(\pi^i) & 0 \end{pmatrix}$ is called the hyperbolic plane and denoted by $H(i)$. - A hermitian lattice $L$ is the orthogonal sum of sublattices $L_1$ and $L_2$, written $L=L_1\oplus L_2$, if $L_1\cap L_2=0$, $L_1$ is orthogonal to $L_2$ with respect to the hermitian form $h$, and $L_1$ and $L_2$ together span $L$. - The ideal in $B$ generated by $h(x,x)$ as $x$ runs through $L$ will be called the norm of $L$ and written $n(L)$. - By the
--- abstract: 'Regression analysis is an important machine learning task used for predictive analytics in business, sports analysis, etc. In regression analysis, optimization algorithms play a significant role in searching for the coefficients of the regression model. In this paper, nonlinear regression analysis using a recently developed meta-heuristic Multi-Verse Optimizer (MVO) is proposed. The proposed method is applied to 10 well-known benchmark nonlinear regression problems. A comparative study has been conducted with Particle Swarm Optimizer (PSO). The experimental results demonstrate that the proposed method statistically outperforms the PSO algorithm.' author: - Jayri Bagchi - Tapas Si date: 'Received: date / Accepted: date' title: 'Nonlinear Regression Analysis Using Multi-Verse Optimizer' --- Introduction {#intro} ============ Regression analysis is a statistical method to explain the relationship between independent and dependent variables or parameters and to predict the coefficients of the function. Linear regression analysis involves functions that are a linear combination of the independent parameters, whereas nonlinear regression analysis is a type of regression analysis in which the given data is modeled with a function that is a nonlinear combination of multiple independent variables. Some examples of nonlinear regression functions are exponential, logarithmic, trigonometric and power functions. Regression is among the most important and widely used statistical techniques, with many applications in business and economics.\ OZSOY et al. [@Ref2] performed an estimation of nonlinear regression model parameters using PSO. Their study compares the optimal and estimated parameters, and the results show that the estimation of the coefficients using PSO yields reliable results. 
Mohanty [@Ref9] applied PSO to astronomical data analysis. The results show that PSO requires tuning of fewer parameters compared to GA but is found to be slightly worse than GA. A case study of PSO in regression analysis by Cheng et al. [@Ref6] utilized PSO to solve a regression problem in the dielectric relaxation field. The results show that PSO with a ring structure has a better mean solution than PSO with a star structure. Erdogmus and Ekiz [@Ref9] proposed a nonlinear regression analysis using PSO and GA for some test problems; their results show that GA performs better in estimating the values of the coefficients. Their work further shows that such heuristic optimization algorithms can be an alternative to classic optimization methods. Lu et al. [@Ref3] performed a selection of the most important descriptors to build QSAR models using modified PSO (PSO-MLR) and compared the results with GA (GA-MLR). The results reveal that PSO-MLR performed better than GA-MLR for the prediction set. Barmaplexis et al. [@Ref4] applied multi-linear regression, PSO and artificial neural networks in the pre-formulation phase of mini-tablet preparation to establish an acceptable processing window and identify the product design space. Their results show that DoE-MLR regression equations gave good fitting results for 5 out of 8 responses, whereas GP gave the best results for the other 3 responses. PSO-ANNs was the only method able to fit all selected responses simultaneously. Cerny et al. [@Ref5] proposed a new type of genotype for Prefix Gene Expression Programming (PGEP). PGEP, improved from Gene Expression Programming (GEP), is used for Signomial Regression (SR). The method, called Differential Evolution-Prefix Gene Expression Programming (DE-PGEP), allows expressions and constants to co-exist in the same vector-spaced representation and be evolved simultaneously. Park et al. [@Ref7] proposed PSO-based Signomial Regression (PSR) to solve nonlinear regression problems. 
Their work attempted to fit the signomial function by estimating its parameters using PSO. Mishra [@Ref12] evaluated the performance of Differential Evolution at nonlinear curve fitting. The results show that DE was successful in obtaining optimum results even when the parameter domains were wide, but it could not reach near-optimal results for the CPC-X problems, which are challenge problems for any nonlinear least-squares algorithm. Gilli et al. [@Ref13] used DE, PSO and Threshold Accepting methods to estimate the parameters of linear regression. Yang et al. [@Ref10] constructed linear regression models for symbolic interval-valued data using PSO.\ The objective of this paper is to perform a nonlinear regression analysis using the MVO algorithm [@Ref1]. The proposed method is applied to 10 well-known benchmark nonlinear regression problems. A comparative study is conducted with PSO [@Ref8]. The experimental results with statistical analysis demonstrate that the proposed method outperforms PSO. Organization of this paper -------------------------- The remainder of the paper is organized as follows: the proposed method is discussed in section \[sec:1\]. The experimental setup including the regression models and dataset description is given in section \[sec:2\]. The results and discussion are given in section \[sec:3\]. Finally, the conclusion with future works is given in section \[sec:4\]. Materials & Methods {#sec:1} =================== Regression Analysis ------------------- Regression analysis is a statistical technique to estimate the relationships among the variables of a function. It is a commonly used method for obtaining a prediction function that predicts the values of the response variable using predictor variables [@Ref16]. 
There are three types of variables in regression: - The unknown coefficients or parameters, denoted by $\beta$, which may represent a scalar or a vector - The independent variable or predictor variable, i.e., input vector $X=(x_1,x_2,\ldots,x_n)$ - The dependent variable or response variable, i.e., output $y$ The regression model in basic form can be defined as: $$y \approx f(x,\beta)$$ where $\beta=(\beta_0, \beta_1, \beta_2,\ldots, \beta_m)$.\ A linear regression model is a model whose output variable is a linear combination of the coefficients and input variables; it is defined as [@Ref15]: $$y=\beta_0+\beta_1x_1+\beta_2x_2+\cdots+\beta_nx_n+\xi$$ where $\xi$ is a random variable, a disturbance that perturbs the output $y$. A nonlinear regression model is a model whose output variable is a nonlinear combination of the coefficients and input variables. The nonlinear regression model is defined as follows [@Ref15]: $$y=f(x,\beta)+ \xi$$ where $f$ is a nonlinear function. In regression analysis, an optimizer is used to search for the coefficients, i.e., the parameters, so that the model fits the data well. In the current work, the unknown parameters of different nonlinear regression models are searched using the MVO algorithm. The MVO algorithm is discussed next. MVO Algorithm ------------- Multi-Verse Optimizer is an optimization algorithm whose design is inspired by the multiverse theory in Physics [@Ref1]. Multiverse theory in Physics states that there exist multiple universes, and each universe possesses its own inflation rate, which is responsible for the creation of stars, planets, asteroids, meteoroids, black holes, white holes, wormholes and the physical laws of that universe. For a universe to be stable, it must have a minimum inflation rate. So the goal of the MVO algorithm is to find the best solution by reducing the inflation rate of the universes, which is also the fitness value. 
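In this setting the optimizer's fitness function is simply the misfit of a candidate coefficient vector against the data. A minimal sketch follows; the exponential model and all names here are illustrative assumptions, not the paper's benchmark problems:

```python
import numpy as np

def sse_fitness(beta, model, x, y):
    """Sum of squared errors between observed y and model predictions.
    Any population-based optimizer (MVO, PSO, ...) can minimize this
    over candidate coefficient vectors beta."""
    residuals = y - model(x, beta)
    return float(np.sum(residuals ** 2))

# Toy nonlinear model y = b0 * exp(b1 * x), for illustration only.
def exp_model(x, beta):
    return beta[0] * np.exp(beta[1] * x)

x = np.linspace(0.0, 1.0, 20)
y = exp_model(x, [2.0, 1.5])  # noiseless synthetic data
print(sse_fitness([2.0, 1.5], exp_model, x, y))  # -> 0.0 at the true coefficients
```

An optimizer would repeatedly call `sse_fitness` on its population and keep the universe (candidate vector) with the lowest value.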
Now, observations from multiverse theory show that universes with a higher inflation rate have more white holes and universes with a lower inflation rate have more black holes. So, to reach a stable situation, objects from white holes have to travel to black holes. Also, the objects in each universe may travel randomly to the best universe through wormholes. In MVO, each solution represents a universe and each variable is an object in that universe. Further, an inflation rate, proportional to the fitness value, is assigned to each universe. MVO uses the concept of black holes and white holes for exploring search spaces and wormholes for exploiting search spaces. When a tunnel is established between two universes, it is assumed that the universe with a higher inflation rate has more white holes and the universe with a lower inflation rate has more black holes. So universes exchange objects from white holes to black holes, which improves the average inflation rate of all universes over the iterations. In order to model the above idea mathematically, a roulette wheel mechanism is used that selects one of the universes with a high inflation rate to contain a white hole and allows objects from that universe to move into a universe containing a black hole and a relatively low inflation rate. At every iteration, universes are sorted and one of them is selected by the roulette wheel to have a white hole. Assume that $U$ is the matrix of universes with $d$ parameters and $n$ candidate solutions. The roulette wheel selection mechanism, based on the normalized inflation rate, is illustrated as below: $$x_i(j) = \begin{cases} x_k(j) & r_1<NI(U_i) \\ x_i(j) & r_1\geq NI(U_i) \end{cases}$$ Here $x_i(j)$ indicates $j$th parameter of $i$th universe, $U_i$ shows $i
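The white-hole/black-hole exchange above can be sketched as follows. This is a loose illustration of the roulette-wheel update only: the wormhole/WEP mechanism and the exact normalization used in the published MVO are omitted, and all function names are assumptions of this sketch:

```python
import random

def roulette_select(inflation):
    """Pick a universe index with probability proportional to an inverted
    inflation rate (lower inflation = fitter, for minimization)."""
    weights = [max(inflation) - f + 1e-12 for f in inflation]
    r, acc = random.random() * sum(weights), 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1

def white_hole_update(universes, inflation):
    """For each object of each universe, with probability given by the
    normalized inflation rate NI(U_i), replace it by the same object of a
    roulette-selected universe (the white-hole -> black-hole exchange)."""
    n, d = len(universes), len(universes[0])
    best = max(inflation)
    for i in range(n):
        ni = inflation[i] / (best + 1e-12)  # normalized inflation rate
        for j in range(d):
            if random.random() < ni:        # r_1 < NI(U_i)
                k = roulette_select(inflation)
                universes[i][j] = universes[k][j]
    return universes
```

Universes with a high (bad) inflation rate are rewritten often, drawing objects mostly from fitter universes, which is the behavior the equation above encodes.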
--- abstract: 'The thickness of isothermal gaseous layers and their midplane volume densities $\rho_{gas}(R)$ were calculated for several spiral and $LSB$ galaxies by solving the self-consistent equilibrium equations for gaseous discs embedded into a stellar one. The self-gravity of the gas and the influence of the dark halo on the disk thickness were taken into account. The resulting midplane volume densities of spiral galaxies were compared with the azimuthally averaged star formation rate $SFR$ to verify the feasibility and universality of the Schmidt law $SFR\sim \rho_{gas}^n$.' author: - 'A.V. Zasov and O.V. Abramova [^1]' title: Midplane Gas Density and the Schmidt Law --- Gas density is the major parameter which determines the star formation rate in galaxies. Schmidt [@ZA_Schmidt59] suggested a simple form of the “volume” star formation rate – gas density relationship: $SFR_v\sim\rho_{gas}^n$ (where $n\approx 2$ for the solar vicinity), usually called the Schmidt law. Being essentially empirical, the Schmidt law and its modifications open a possibility to calculate the evolution models of galaxies, parameterizing the star formation history. However, the power index $n$ cannot be found directly from observations of other galaxies, because in order to estimate $\rho_{gas}$ it is necessary to know the gas layer thickness, which may vary significantly both along the galaxy radius and from one galaxy to another. Therefore in practice the Schmidt law is often replaced by another, superficially similar empirical law $SFR_s\sim\sigma_{gas}^N$ (usually called the Kennicutt–Schmidt law), where the compared values are scaled to unit disc surface area. In most cases the values of $N$ obtained for different galaxies lie within the limits $1<N<2$, but for some galaxies they prove to be much steeper ($N>3$ for M 33, see  Heyer et al. [@ZA_HeyerAll04]). 
To estimate $\rho_{gas}$, in this work we used data on the gas surface density distributions, brightness distributions and velocity curves of galaxies taken from the literature. By solving the equilibrium and Poisson equations for stellar, $HI$ and $H_2$ discs, we calculated the midplane gas and star volume densities as a function of the radial distance $R$ for several spiral and (for comparison) $LSB$ galaxies. The former include: M33, M51, M81, M100, M101, M106 and our Galaxy. The central parts of galaxies where the bulge dominates and/or the observed rotation curve is uncertain were ignored. We assumed that the stellar and gaseous discs are axisymmetric, being in hydrostatic equilibrium, and that the pressure of the gas is determined by its turbulent motion: $P_{gas}=\rho_{gas}\,C_{gas}^2$, where the velocity dispersion $C_{gas}$ was taken to be constant (although different for atomic and molecular gas). Both the self-gravitation of the gas and the presence of dark halos, which also influence the thickness of the discs, were taken into account. The equations were numerically solved using an iterative algorithm (see the details in Zasov & Abramova [@ZA_AZ]). To estimate the stellar disc thickness, two models were employed: the stellar velocity dispersion $C_z$ was assumed to be either a constant or proportional to the marginal radial velocity dispersion $C_r$ that provides the stellar disc with gravitational stability (see the discussion in [@ZA_Bottema93; @ZA_ZasAll04]). Both models give rather similar results. The obtained estimates of the midplane gas density $\rho_{gas}(R)$ for spiral and $LSB$ galaxies are illustrated in Fig. \[fig1\]. To compare the surface $(SFR_s)$ and volume $(SFR_v)$ star formation rates with the gas densities in spiral galaxies we used the estimates of $SFR_s$ from Boissier et al. [@ZA_main], based on the smoothed absorption-corrected UV profiles. The resulting diagrams are demonstrated in Figs. \[fig1\]a,b. [**Main results**]{}. ***1***. 
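The full model couples gas, stars and halo and must be solved iteratively, but the limiting single-component case has the classical analytic solution of a self-gravitating isothermal layer, $\rho(z)=\rho_0\,\mathrm{sech}^2(z/z_0)$ with $z_0=C^2/(\pi G\Sigma)$, which is a useful sanity check for such a solver. A sketch (the unit choices and the value of $G$ below are assumptions of this illustration, not the paper's code):

```python
import math

G = 4.301e-3  # gravitational constant in pc (km/s)^2 / Msun (approximate)

def spitzer_midplane(sigma, c_gas):
    """Midplane density and scale height of a single self-gravitating
    isothermal layer: rho(z) = rho0 * sech^2(z / z0).

    sigma : surface density [Msun/pc^2]
    c_gas : vertical velocity dispersion [km/s]
    Returns (rho0 [Msun/pc^3], z0 [pc])."""
    z0 = c_gas ** 2 / (math.pi * G * sigma)
    rho0 = sigma / (2.0 * z0)
    return rho0, z0

rho0, z0 = spitzer_midplane(10.0, 8.0)
print(rho0, z0)  # midplane density and scale height for the toy inputs
```

By construction $2\rho_0 z_0=\Sigma$, so the midplane density follows directly once the thickness is known, which is why the layer thickness is the key unknown in converting surface to volume densities.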
Marginally stable stellar discs in all cases but M33 and our Galaxy increase their thickness significantly beyond $R\approx 2-3\,R_0$. ***2***. Gaseous discs of $LSB$ galaxies are thicker than the stellar ones, and $\rho_{gas}$ is about an order of magnitude lower than in HSB spiral galaxies. ***3***. There is no universal Schmidt law $SFR\sim\rho_{gas}^n$ common to all galaxies. Nevertheless, the $SFR$, taken for the whole complex of galaxies, reveals a better correlation with the volume gas density than with the column one. ***4***. The parameter $n$ in the Schmidt law in spiral galaxies ranges between 0.8 (M101) and 2.4 (M81). However, if only the molecular gas is considered, the mean value of $n$ becomes close to unity. This work was supported by the Russian Foundation for Basic Research, grant 07-02-00792. [10]{} Schmidt, M. 1959, ApJ, 129, 243 Heyer, M. H., Corbelli, E., Schneider, S. E. & Young, J. S. 2004, ApJ, 602, 723 Abramova, O.V. & Zasov, A.V. 2008, Astronomy Letters, in press (astro-ph 0710.0257) Bottema, R. 1993, A&A, 275, 16 Zasov, A. V., Khoperskov, A. V. & Tyurina, N. V. 2004, Astronomy Letters, 30(9), 593 Boissier, S., Boselli, A., Buat, V., Donas, J. & Milliard, B. 2004, A&A, 424, 465 [^1]: Sternberg Astronomical Institute, Universitetskii pr. 13, Moscow, 119991 Russia; [oxana@sai.msu.ru]{}, [zasov@sai.msu.ru]{}
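Power indices $n$ of this kind are usually obtained by a straight-line fit in log-log space, since $SFR\sim\rho_{gas}^n$ becomes $\log SFR = n\log\rho_{gas}+\mathrm{const}$. A minimal sketch on synthetic data (not the paper's measurements):

```python
import numpy as np

def schmidt_index(rho_gas, sfr_v):
    """Least-squares slope of log(SFR_v) versus log(rho_gas),
    i.e. the power index n in SFR_v ~ rho_gas^n."""
    n, intercept = np.polyfit(np.log10(rho_gas), np.log10(sfr_v), 1)
    return n

rho = np.array([0.1, 0.3, 1.0, 3.0])
sfr = 1e-3 * rho ** 1.4  # synthetic data built with n = 1.4
print(round(schmidt_index(rho, sfr), 3))  # -> 1.4
```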
--- abstract: 'We study the motivic cohomology of the special fiber of quaternionic Shimura varieties at a prime of good reduction. We exhibit classes in these motivic cohomology groups and use this to give an explicit geometric realization of level raising between Hilbert modular forms. The main ingredient for our construction is a form of Ihara’s Lemma for compact quaternionic Shimura surfaces which we prove by generalizing a method of Diamond–Taylor. Along the way we also verify the Hecke orbit conjecture for these quaternionic Shimura varieties which is a key input for our proof of Ihara’s Lemma.' address: 'Institute for Advanced Study, 1 Einstein Drive, Princeton, NJ 08540.' author: - Rong Zhou bibliography: - 'bibfile.bib' title: Motivic Cohomology of Quaternionic Shimura varieties and level raising --- Introduction ============ Main Theorem ------------ The aim of this paper is to study the motivic cohomology of the special fiber of certain quaternionic Shimura varieties. For a scheme of finite type over a field, its motivic cohomology groups are a generalization of the usual Chow groups, and the main new observation of this paper is that for certain Shimura varieties, these groups can encode very rich arithmetic information. More precisely, we will show that the cycle class map from motivic cohomology to étale cohomology gives a geometric realization of level raising between Hilbert modular forms. We now state our main result. Let $F$ be a totally real field of even degree $[F:{\ensuremath{\mathbb{Q}}}]=g$ and $p$ a prime which is *inert* in $F$. Let $B$ be a totally indefinite quaternion algebra over $F$ which is unramified at the unique prime ${\ensuremath{\mathfrak{p}}}$ above $p$ and $G$ the associated reductive group over ${\ensuremath{\mathbb{Q}}}$. 
Let $K$ be a sufficiently small compact open subgroup of $G({\ensuremath{\mathbb{A}}}_f)$ such that $K=K_pK^p$ where $K_p\subset G({\ensuremath{\mathbb{Q}}}_p)=GL_2(F_{{\ensuremath{\mathfrak{p}}}})$ is the standard hyperspecial maximal compact $GL_2({\ensuremath{\mathcal{O}}}_{F_{{\ensuremath{\mathfrak{p}}}}})$ and $K^p\subset G({\ensuremath{\mathbb{A}}}_f^p)$. Then there is a Shimura variety ${\ensuremath{\mathrm{Sh}}}_K(G)$ defined over ${\ensuremath{\mathbb{Q}}}$; it extends to a smooth integral model $\underline{{\ensuremath{\mathrm{Sh}}}}_K(G)$ over ${\ensuremath{\mathbb{Z}}}_{(p)}$. We let ${\ensuremath{\mathscr{S}}}_K(G)$ denote its special fiber over ${\ensuremath{\mathbb{F}}}_p$ and ${\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}}$ its base change to ${\ensuremath{\mathbb{F}}}_{p^g}$. Fix an irreducible cuspidal automorphic representation $\Pi$ of $GL_2(F)$ of parallel weight 2 defined over a number field ${\ensuremath{\mathbf{E}}}$. Let ${\ensuremath{\mathrm{R}}}$ be a finite set of places of $F$ not containing ${\ensuremath{\mathfrak{p}}}$ and away from which $\Pi$ is unramified and $K$ is hyperspecial. We also choose a prime $\lambda$ of ${\ensuremath{\mathcal{O}}}_{{\mathbf{E}}}$ whose residue characteristic is coprime to $p$ and write $k_\lambda={\ensuremath{\mathcal{O}}}_{{\mathbf{E}}}/\lambda$. We write ${\ensuremath{\mathrm{H}}}_{\ensuremath{\mathcal{M}}}^i({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},k_\lambda(j))$ for the motivic cohomology group with $k_\lambda$ coefficients defined in [@SuVo]. By [@Voe2], we may identify this with the higher Chow group $\mathrm{Ch}^j({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},2j-i,k_\lambda)$ defined in [@Bloch]. When $2j=i$, this group is just the usual Chow group of codimension $j$ cycles modulo rational equivalence (with coefficients in $k_\lambda$). 
The group $\mathrm{Ch}^j({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},2j-i,k_\lambda)$ is equipped with the following cycle class map to the absolute étale cohomology: $$\label{eq: intro cycle class map mod l}\mathrm{Ch}^j({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},2j-i,k_\lambda)\rightarrow {\ensuremath{\mathrm{H}}}_{\text{\'et}}^i({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},k_\lambda(j)).$$ We let ${\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}}$ denote the abstract Hecke algebra of $GL_2(F)$ away from ${\ensuremath{\mathrm{R}}}$; it is the ${\ensuremath{\mathbb{Z}}}$-algebra generated by elements $T_{{\ensuremath{\mathfrak{q}}}}, S_{{\ensuremath{\mathfrak{q}}}}$ where ${\ensuremath{\mathfrak{q}}}$ runs over primes of $F$ away from ${\ensuremath{\mathrm{R}}}$. Then the Hecke eigenvalues of $\Pi$ induce a map $$\phi^{\Pi}_\lambda:{\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}}\rightarrow {\ensuremath{\mathcal{O}}}_{{\mathbf{E}}}\rightarrow k_\lambda.$$ We write ${\ensuremath{\mathfrak{m}}}_{{\ensuremath{\mathrm{R}}}}:=\ker(\phi^{\Pi}_\lambda)$ a maximal ideal of ${\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}}$ and ${\ensuremath{\mathfrak{m}}}\subset {\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}\cup\{{\ensuremath{\mathfrak{p}}}\}}$ the preimage in ${\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}\cup\{{\ensuremath{\mathfrak{p}}}\}}$. The Hecke algebra ${\ensuremath{\mathbf{T}}}_{{\ensuremath{\mathrm{R}}}\cup\{{\ensuremath{\mathfrak{p}}}\}}$ acts on the étale cohomology ${\ensuremath{\mathrm{H}}}_{\mathrm{\acute{e}t}}^\bullet({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\overline{\mathbb{F}}_p}}},k_\lambda(-))$ and higher Chow groups $\mathrm{Ch}^j({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^g}},2j-i,k_\lambda)$ of ${\ensuremath{\mathscr{S}}}_K(G)$. 
Upon making a large image assumption on the$\mod\lambda$ Galois representation associated to $\Pi$ (see Assumption \[ass: property of l\]) and localizing at the maximal ideal ${\ensuremath{\mathfrak{m}}}$, there is an isomorphism $${\ensuremath{\mathrm{H}}}_{\operatorname{\acute{e}t }}^{g+1}({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^{g}}},k_\lambda(g/2+1)))_{{\ensuremath{\mathfrak{m}}}}\cong{\ensuremath{\mathrm{H}}}^1({\ensuremath{\mathbb{F}}}_{p^{g}},{\ensuremath{\mathrm{H}}}_{\operatorname{\acute{e}t }}^g({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\overline{\mathbb{F}}_p}}},k_\lambda(g/2+1))_{{\ensuremath{\mathfrak{m}}}}).$$ The cycle class map then induces the *Abel–Jacobi map*:$$\label{eq: intro Abel--Jacobi}\mathrm{Ch}^{g/2+1}({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^{g}}},1,k_\lambda)_{{\ensuremath{\mathfrak{m}}}}\rightarrow {\ensuremath{\mathrm{H}}}^1({\ensuremath{\mathbb{F}}}_{p^{g}},{\ensuremath{\mathrm{H}}}_{\operatorname{\acute{e}t }}^g({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\overline{\mathbb{F}}_p}}},k_\lambda(g/2+1))_{{\ensuremath{\mathfrak{m}}}}).$$ In §\[sec: Motivic Cohomology and Level-raising\], we will define a subgroup $\mathrm{Ch}^{g/2+1}_{\mathrm{lr}}({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^{g}}},1,k_\lambda)_{{\ensuremath{\mathfrak{m}}}}$ of $\mathrm{Ch}^{g/2+1}({\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^{g}}},1,k_\lambda)_{{\ensuremath{\mathfrak{m}}}}$ using the geometry of Goren–Oort cycles[^1] on ${\ensuremath{\mathscr{S}}}_K(G)_{{\ensuremath{\mathbb{F}}}_{p^{g}}}$ as studied in [@TX], [@TX1] and [@LT]. As the notation suggests, this subgroup is related to level raising. The main Theorem of the paper is the following; we refer to §\[sec: Motivic Cohomology and Level-raising\] for the precise statement. 
\[thm:intro main\]Suppose that $p$ is a $\lambda$-level raising prime in the sense of Definition \[def: level raising prime\] and that Assumptions \[ass: property of l\] and \[
[**Soft photon emission as a sign of sharp transition in quark-gluon plasma**]{}[^1] I.V.Andreev\ \ [**Abstract**]{} Photon emission arising in the course of transition between the states of quark-gluon and hadron plasma has been considered. Single-photon distributions and two-photon correlations in the central rapidity region have been calculated for heavy ion collisions at high energies. It has been found that opposite-side two-photon correlations can serve as a sign of sharp transitions between the states of strongly interacting matter. Introduction ============ Forty years ago E.S.Fradkin and his students calculated the photon polarization operator in relativistic plasma at finite temperatures [@F]. These results will be used here to estimate a new specific mechanism of photon production which may prove effective for identifying transitions between the states of quark and hadron matter in heavy ion collisions. The phenomenon under consideration is photon production in the course of the evolution of strongly interacting matter. Let us consider photons existing in the medium at the initial moment $t_0$, having momentum $\bf k$ and energy $\omega_{in}$. Let the properties of the medium (its dielectric permittivity) change within a time interval $\delta\tau$ so that the final photon energy is $\omega_f$. As a result of the energy change, the production of extra photons with momenta $\pm\bf k$ takes place, these photons having specific two-photon correlations. Analogous processes were considered for mesons [@AW; @AC; @A; @ACG] and applied to pion production in high-energy heavy ion collisions [@A1]. The conditions for a strong effect are the following: first, the ratio of the energies ${\omega_{in}/{\omega_f}}$ must not be too close to unity and, second, the transition should be fast enough. 
Basic formulation ================= The time evolution of the transverse photon creation and annihilation operators is given by a canonical Bogoliubov transformation [@B] which represents the solution of the Hamilton equations and contains two modes with momenta $\pm{\bf k}$: $$a({\bf k},t)=u({\bf k})a({\bf k},0)+v({\bf k})a^{\dagger}(-{\bf k},0),\ a^{\dagger}(-{\bf k},t)=v({\bf k})a({\bf k},0)+u({\bf k})a^{\dagger}(-{\bf k},0)$$ \[eq:1\] (polarizations are omitted for a moment). Here the Bogoliubov coefficients $u,v$ satisfy the equation $$|u({\bf k})|^{2}-|v({\bf k})|^{2}=1$$ \[eq:2\] preserving the canonical commutation relations, and the limit $t\rightarrow\infty$ must be taken. Physically the process under consideration is analogous to the parametric excitation of quantum oscillators. It was considered in more detail earlier [@A]. The Bogoliubov coefficients $u({\bf k})$, $v({\bf k})$ are taken to be real valued and $k=|{\bf k}|$ dependent. So we use the parametrization $$u({\bf k})=\cosh r(k),\qquad v({\bf k})=\sinh r(k)$$ \[eq:3\] thus introducing the evolution parameter $r(k)$. To get a feeling for the main features of the evolution effect (and for further references and comparison) let us formulate a simple model – fast simultaneous break-up of a large homogeneous system at rest [@A1]. In this case the resulting single-particle momentum distribution can be written in a simple form $$\frac{dN}{d^{3}k}=\langle a^{\dagger}({\bf k})a({\bf k})\rangle\big|_{t\rightarrow\infty}=\frac{V}{(2\pi)^{3}}\left[n(k)\left(u^{2}({\bf k})+v^{2}({\bf k})\right)+v^{2}({\bf k})\right]$$ \[eq:4\] (for a single polarization) where $V$ is the volume of the system and $n(k)$ is the level occupation number at $t=0$. The first term in the [*rhs*]{} of Eq.(4) describes the amplification of existing particles and the second term describes the contribution arising due to the rearrangement of the ground state of the system in the course of the transition. 
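For the cosh/sinh parametrization, the bracketed structure described after Eq. (4) — amplification of pre-existing photons plus a vacuum (ground-state rearrangement) term — can be checked numerically. A minimal sketch; the sample values of $n$ and $r$ are illustrative, not taken from the paper:

```python
import math

def amplified_occupation(n, r):
    """Occupation of a mode after a Bogoliubov transformation with
    u = cosh(r), v = sinh(r):  <a† a> = n (u^2 + v^2) + v^2."""
    u, v = math.cosh(r), math.sinh(r)
    return n * (u * u + v * v) + v * v

print(amplified_occupation(2.0, 0.0))   # -> 2.0: no transition, no extra photons
print(amplified_occupation(0.0, 0.33))  # vacuum contribution sinh^2(0.33) ≈ 0.11
```

At $r=0$ the transformation is the identity and the occupation is unchanged; for $r\neq 0$ even an initially empty mode acquires $v^2=\sinh^2 r$ photons, which is the extra-photon production discussed in the Introduction.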
The transition effect is better seen in particle correlations. The two-particle inclusive cross-section is given here by $$\frac{d^{2}N}{d^{3}k_{1}d^{3}k_{2}}\propto\langle a^{\dagger}_{1}a^{\dagger}_{2}a_{1}a_{2}\rangle=\langle a^{\dagger}_{1}a_{1}\rangle\langle a^{\dagger}_{2}a_{2}\rangle+\langle a^{\dagger}_{1}a_{2}\rangle\langle a^{\dagger}_{2}a_{1}\rangle+\langle a^{\dagger}_{1}a^{\dagger}_{2}\rangle\langle a_{1}a_{2}\rangle$$ \[eq:5\] The first term in the [*rhs*]{} of Eq.(5) is the product of single-particle distributions, the second term gives the usual Hanbury Brown-Twiss effect (HBT) and the third term is essential if time evolution takes place, giving opposite-side photon correlations (see below). The correlators in Eq.(5) in the case under consideration have the form: $$\langle a^{\dagger}({\bf k}_{1})a({\bf k}_{2})\rangle=\left[n(k)\left(u^{2}+v^{2}\right)+v^{2}\right]G({\bf k}_{1}-{\bf k}_{2})$$ \[eq:6\] $$\langle a({\bf k}_{1})a({\bf k}_{2})\rangle=\frac{2n(k)+1}{2}\sinh 2r(k)\, G({\bf k}_{1}+{\bf k}_{2})$$ \[eq:7\] where $G({\bf k}_{1}\pm{\bf k}_{2})$ represents the normalized Fourier transform of the source volume at the break-up stage $(G(0)=1)$. It is a sharply peaked function of ${\bf k}_{1}\pm{\bf k}_{2}$ (at zero momentum) having a characteristic scale of the order of the inverse size of the source, this scale being much less than the characteristic scales of the photon momentum distribution $n(k)$ and of the evolution parameter $r(k)$. So the last two functions may be evaluated at any of the momenta ${\bf k}_{1},{\bf k}_{2}\approx\pm{\bf k}$ (we assume that the process is ${\bf k}\to -{\bf k}$ symmetric). The relative correlation function which is measured in experiment is now given by $$C({\bf k}_{1},{\bf k}_{2})=1+G^{2}({\bf k}_{1}-{\bf k}_{2})+R^{2}(k)G^{2}({\bf k}_{1}+{\bf k}_{2})$$ \[eq:8\] with $$R(k)=\frac{(2n(k)+1)\sinh 2r(k)}{2\left[n(k)\left(u^{2}+v^{2}\right)+v^{2}\right]}$$ \[eq:9\] As can be seen from Eqs.(8-9), the HBT effect is given simply by the form-factor $G({\bf k}_{1}-{\bf k}_{2})$ in this model whereas the transition effect depends strongly on the evolution parameter $r(k)$. In turn $r(k)$ depends on the time duration $\delta\tau$ of the transition. For very small characteristic times $\delta\tau$ the expression for $r(k)$ is universal [@AW], $$r(k)=\frac{1}{2}\ln\left(\frac{\omega_{in}}{\omega_{f}}\right),\qquad \omega\,\delta\tau\ll 1$$ \[eq:10\] where $\omega_{in}$ and $\omega_{f}$ are the particle energies before and after the transition. For larger $\delta\tau$ the evolution parameter lessens. 
In general we expect that it falls exponentially at large $\omega\delta\tau$ if the time dependence of the energy in the course of the transition has no singularities at real times. So for large $\omega\delta\tau$ we shall use an exponentially falling expression motivated by a solvable model expression [@A]. Below, after the necessary modification, we apply the above consideration to photon production in heavy ion collisions. Photons in plasma ================= The spectrum of photons in plasma is given by the dispersion equation $$\omega^{2}_{k}=k^{2}+\Pi(\omega_{k},k,T,\mu,m)$$ \[eq:11\] Here $\Pi$ is the polarization operator for transverse photons dependent on the temperature $T$, the chemical potential $\mu$ and the mass $m$ of the charged particles. Below we use an approximate form extracted from the original expression [@F]: $$\Pi\simeq\omega^{2}_{a}$$ \[eq:12\] with $$\omega^{2}_{a}\propto\alpha\, g\, T^{2}\int_{m/T}^{\infty}dx\,\left(x^{2}-\frac{m^{2}}{T^{2}}\right)^{1/2}n_{F}(x,\mu/T)$$ \[eq:13\] where $\alpha=1/137$, $v^2$ is the averaged velocity squared of the charged particles in the plasma, the factor $g$ takes into account the number of the particle kinds and their electric charges ($g=5/3$ for $u,d$ quarks) and $n_{F}$ is the occupation number of the charged particles (Fermi distribution). The polarization operator for scalar charged particles is approximately a half of that for fermions, with substitution of the Bose distribution for the Fermi distribution. Evidently the polarization operator plays the role of a (momentum dependent) photon thermal mass squared $m^{2}_{\gamma}$. We calculated the polarization operator and the photon spectrum for three possible kinds of plasma: quark-gluon plasma (QGP) with $u,d$ light quarks, constituent quark ($m=350 MeV$)-pion plasma and hadronic (pions and nucleons) plasma. The chemical potential (baryonic one) was taken to be equal to $100 MeV$ per quark, corresponding to a typical value for SPS energies. The temperature was taken to be equal to $140 MeV$ (see below). 
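The dimensionless integral entering $\omega_a^2$ can be evaluated by straightforward quadrature. The sketch below omits the overall prefactor and checks the massless, $\mu=0$ limit, where the integral reduces to $\int_0^\infty x\,dx/(e^x+1)=\pi^2/12$; the step count and upper cutoff are assumptions of this sketch:

```python
import math

def fermi_integral(m_over_t, mu_over_t, steps=20000, x_max=50.0):
    """Trapezoidal evaluation of
    I = \int_{m/T}^{inf} dx sqrt(x^2 - (m/T)^2) / (exp(x - mu/T) + 1),
    the dimensionless integral of Eq. (13) (prefactor omitted)."""
    a = m_over_t
    h = (x_max - a) / steps
    total = 0.0
    for i in range(steps + 1):
        x = a + i * h
        f = math.sqrt(max(x * x - a * a, 0.0)) / (math.exp(x - mu_over_t) + 1.0)
        total += f * (0.5 if i in (0, steps) else 1.0)
    return total * h

# Massless, mu = 0 limit: the integral tends to pi^2 / 12 ≈ 0.8225.
print(round(fermi_integral(0.0, 0.0), 4))
```

A finite particle mass raises the lower limit and suppresses the integrand, so heavier constituents contribute a smaller thermal photon mass, as the comparison of the three plasma types requires.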
The evolution parameter $r(k)$ for photons is determined through the photon energy $\omega(k)$. For small momenta $k$ the parameter $r(k)$ is well approximated by the simple expression (to be used for $k_{T}<40$ MeV) $$r(k)=\frac{\omega_{in}-\omega_{f}}{\omega_{in}+\omega_{f}}=\frac{\delta m^{2}_{\gamma}}{4\,\langle m^{2}_{\gamma}\rangle},\qquad k\,\delta\tau\ll 1$$ \[eq:14\] where the $m^{2}_{\gamma i}$ are the photon thermal masses squared at both sides of the transition, $\delta m^{2}_{\gamma}$ is their difference and $\langle m^{2}_{\gamma}\rangle$ is their average, cf. Eq. (10). At $k=0$ the values of $\delta m^2_\gamma$ are equal to 289, 178 and 106 (in $MeV^2$ units) for the QGP-hadron, QGP-valon and valon-hadron transitions, respectively. The corresponding values of the zero-momentum evolution parameter $r(0)$ are 0.330, 0.154 and 0.178. The higher-momentum behaviour of $r(k)$ (to be used for $k_{T}>40$ MeV) is taken in the form $$r(k)\propto\exp(-k\,\delta\tau)$$ \[eq:15\] where $\delta\tau$ is the time duration of the transition. Eq. (15) is a simple version of the expression given by the solvable
--- abstract: 'With the rapid development of the economy and the accelerated globalization process, the aviation industry plays a more and more critical role in today’s world, in both developed and developing countries. As the infrastructure of the aviation industry, the airport network is one of the most important indicators of economic growth. In this paper, we investigate the evolution of the Chinese airport network (CAN) via complex network theory. It is found that although the topology of CAN has remained steady during the past several years, there are many dynamic switchings inside the network, which change the relative relevance of airports and airlines. Moreover, we investigate the evolution of traffic flow (passengers and cargoes) on CAN. It is found that the traffic keeps growing in an exponential form and has evident seasonal fluctuations. We also find that cargo traffic and passenger traffic are positively related, but that the correlations are quite different for different kinds of cities.' address: | $^a$School of Electronic and Information Engineering, Beihang University, Beijing, 100083, P.R.China\ $^b$School of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui, 230026, P. R. China\ author: - 'Jun Zhang$^{a}$, Xian-Bin Cao$^{a,b,*}$, Wen-Bo Du$^{a,b}$, Kai-Quan Cai $^{a}$' --- Complex network ,Chinese Airport network ,Transportation ,Evolution 89.75.-k ,89.75.Fb ,89.40.Da ,89.40.Dd

Introduction
============

Ranging from biological systems to economic and social systems, many real-world complex systems can be represented by networks, including chemical-reaction networks, neuronal networks, food webs, telephone networks, the World Wide Web, railroad and airline routes, social networks and scientific-collaboration networks [@network1; @network2; @network3]. Obviously, real networks are neither regular lattices nor simple random networks.
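The basic measures that distinguish real networks from regular lattices and random graphs, such as degree and clustering, can be computed directly from an adjacency list. A self-contained sketch on a toy five-airport route map; the airports and routes are invented for illustration:

```python
from itertools import combinations

# Toy undirected route map: one hub plus a single cross link (illustrative only).
routes = [("PEK", "SHA"), ("PEK", "CAN"), ("PEK", "CTU"), ("PEK", "URC"),
          ("SHA", "CAN")]

adj = {}
for u, v in routes:
    adj.setdefault(u, set()).add(v)
    adj.setdefault(v, set()).add(u)

degree = {a: len(nbrs) for a, nbrs in adj.items()}

def clustering(a):
    """Fraction of neighbour pairs of `a` that are themselves connected."""
    nbrs = adj[a]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for x, y in combinations(nbrs, 2) if y in adj[x])
    return 2.0 * links / (len(nbrs) * (len(nbrs) - 1))

print(degree["PEK"])      # 4: the hub touches every other airport
print(clustering("SHA"))  # 1.0: its two neighbours (PEK, CAN) are linked
print(clustering("PEK"))  # 1/6: only SHA-CAN among the hub's 6 neighbour pairs
```

In a hub-and-spoke topology the hub has high degree but low clustering, which is exactly the pattern discussed for airport networks below.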
Since the small-world network model [@WSmodel] and the scale-free network model [@BAmodel] were brought forward at the end of the last century, people have found that many real complex networks are actually associated with the small-world property and a scale-free, power-law degree distribution. In the past ten years, the theory of complex networks has drawn continuous attention from different scientific communities, on topics such as network modelling [@model1; @model2; @model3], synchronization [@synchronization1; @synchronization2], information traffic [@traffic1; @traffic2; @traffic3; @traffic4], epidemic spreading [@epidemic1; @epidemic2], cascading failures [@cascade1; @cascade2; @cascade3; @cascade4], evolutionary games [@game1; @game2; @game3; @game4; @game5] and social dynamics [@social], etc. One interesting and important research direction is understanding the transportation infrastructures in the framework of complex network theory [@real1; @real2; @real3; @real4; @real5; @real6; @real7; @real8]. With the acceleration of the globalization process, the aviation industry plays a more and more critical role in the economy and many scientists pay special attention to the airway transportation infrastructure. Complex network theory is naturally a useful tool since the airports can be denoted by vertices and the flights by edges. In the past few years, some interesting research has been reported studying airport networks from the view of network theory. For example, Amaral et al. comprehensively investigated the worldwide airport network (WAN). They found that WAN is a typical scale-free small-world network and the most connected nodes in WAN are not necessarily the most central nodes, which means critical locations might not coincide with highly-connected hubs in the infrastructures. This interesting phenomenon inspired them to propose a geographical-political-constrained network model [@Amaral1; @Amaral2]. Vespignani et al.
further investigated the intensity of WAN’s connections from the viewpoint of weighted networks and found correlations between weighted quantities and the topology. They proposed a weighted evolving network model to expand our understanding of weighted features of real systems. Besides, they also proposed a global epidemic model to study the role of WAN in the prediction and predictability of global epidemics [@Vespignani1; @Vespignani2]. In addition, several empirical works on the Chinese Airport Network [@CAN1; @CAN2; @CAN3] and the Indian Airport Network [@IAN1] reveal that national airport networks can exhibit properties different from those of the global WAN, i.e., a two-regime power-law degree distribution and the disassortative mixing property. As the aviation industry is an important indicator of economic growth, it is necessary and more meaningful to investigate the evolution of airport networks. Recently, Gautreau et al. studied the US airport network in the time period $1990 \sim 2000$. They found that most statistical indicators are stationary and an intense activity takes place at the microscopic level, with many disappearing/appearing links between airports [@Gautreau1]. Rocha studied the Brazilian airport network (BAN) in the time period $1995 \sim 2006$. He also found the network structure is dynamic with changes in the relevance of airports and airlines, and the traffic on BAN doubled during that period while the topology of BAN shrank [@Rocha1]. Inspired by their interesting works, we investigate the evolution of the Chinese Airport Network (CAN) from the year $1950$ to $2008$ ($1991$ to $2008$ for detailed traffic information and $2002$ to $2009$ for detailed topology information). It is found that the airway traffic volume increases in an exponential form while the topology has no significant change. The paper is organized as follows. In the next section, the description of CAN data is presented.
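Exponents of power-law degree distributions such as the two-regime form mentioned above are commonly estimated by maximum likelihood. A minimal sketch using the continuous (Hill) estimator on synthetic degrees drawn from a known power law; all numbers are invented for illustration:

```python
import math, random

def hill_exponent(samples, k_min):
    """Continuous maximum-likelihood (Hill) estimate of alpha in p(k) ~ k^(-alpha)
    for the tail k >= k_min."""
    tail = [k for k in samples if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / k_min) for k in tail)

# Synthetic degrees from p(k) ~ k^(-2.5), k >= k_min, via inverse-transform sampling.
random.seed(1)
alpha_true, k_min = 2.5, 2.0
degrees = [k_min * (1.0 - random.random()) ** (-1.0 / (alpha_true - 1.0))
           for _ in range(50000)]

print(hill_exponent(degrees, k_min))  # close to the true exponent 2.5
```

For a two-regime distribution the same estimator is applied separately above and below the crossover degree.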
The statistical analysis of CAN topology is given in Section 3. In Section 4, we analyze the evolution of traffic flow on CAN. The paper is concluded in the last section.

Development of CAN with Chinese GDP
===================================

The airport network is the backbone of the aviation industry. It includes airports and direct flights linking airport pairs. Since the aviation industry is closely related to economic development and China has achieved a great economic miracle in the past decades, we first investigate the development of the Chinese economy, airports and flights. Figure 1(a) shows the development of Chinese GDP from $1950$ to $2008$. One can see that it grew greatly over these $58$ years. In particular, the historic Third Plenary Session of the Eleventh Central Committee was held in $1978$, ushering in China’s new historical period of reform and opening up. Since then, Chinese GDP has increased faster, booming at the beginning of the $21$st century (GDP has grown in an exponential form since year $2000$, see the inset of Fig.1(a)). However, the development of airlines (Fig.1(b)) and airports (Fig.1(c)) is not consistent with that of GDP. For the development of airports (Fig.1(c)), one can see that the number of airports grows in $1950 \sim 1975$, $1987 \sim 1995$ and $2005 \sim 2008$, but remains constant in $1975 \sim 1987$ and $1995 \sim 2005$. The first increase ($1950 \sim 1975$) mainly connected large prefecture-level cities, and the second increase ($1987 \sim 1995$) mainly connected medium prefecture-level cities. The third increase ($2005 \sim 2008$) is due to the rapid development of the Chinese economy, and China plans to build more airports by $2020$. From Fig.1(b), one can also see that the number of airlines has remained constant since $1995$, rising again in years $2007$ and $2008$. The steadiness is mainly due to efficiency reasons.
Opening new airlines means more operating expenses and commercial airline companies prefer to have a small number of hubs where all airlines connect. They would not like to add uneconomical airlines once a mature transportation network is constructed. Thus the number of airlines does not increase continuously. In years $2007$ and $2008$, as many new airports were put into service, many new airlines were naturally launched. Although the airline infrastructure (e.g., airports and airlines) does not keep growing due to various constraints, the traffic on CAN keeps growing with the GDP. As shown in Figure 2, the traffic (passengers and cargoes) grows almost linearly with GDP. By calculation, one can see that $1$ million RMB of GDP can support about $7$ passengers and $153$ kg of cargo. Moreover, the Chinese aviation industry was also shocked by the $2008$ global financial crisis. The top $3$ Chinese airline companies have reported their operating information of $2008$ and most of the important indicators declined. This has been demonstrated by the annual report of the Civil Aviation Administration of China (CAAC), and we can find in Fig.2 that the traffic of $2008$ is almost the same as that of $2007$.

Topological properties of CAN
=============================

The topology data of CAN are obtained from $14$ timetables provided by the Civil Aviation Administration of China (CAAC) from $2002$ to $2009$ ($2$ timetables for years $2003\sim2008$, and $1$ timetable for the second half of $2002$ and the first half of $2009$). It should be noted that:

- The timetable contains both domestic and international airlines. As we only focus on the domestic information, the international airlines are excluded.

- Since Ref
--- abstract: 'Transport, magnetic and thermal properties at high magnetic fields ($H$) and low temperatures ($T$) of the heavy fermion compound CeAuSb$_2$ are reported. At $H=0$ this layered system exhibits antiferromagnetic order below $T_N = 6$ K. Applying $H$ along the inter-plane direction leads to a continuous suppression of $T_N$ and a quantum critical point at $H_c \simeq 5.4$ T. Although it exhibits Fermi liquid behavior within the Néel phase, in the paramagnetic state the fluctuations associated with $H_c$ give rise to unconventional behavior in the resistivity (sub-linear in $T$) and to a $T \ln {T}$ dependence in the magnetic contribution to the specific heat. For $H > H_c$ and low $T$ the electrical resistivity exhibits an unusual $T^3$-dependence.' author: - 'L. Balicas,$^1$ S. Nakatsuji,$^2$ H. Lee,$^3$ P. Schlottmann,$^1$ T. P. Murphy,$^1$ and Z. Fisk$^3$' title: 'Magnetic field-tuned quantum critical point in CeAuSb$_{2}$' --- When the long-range order is suppressed to zero temperature by tuning an external variable, such as pressure, chemical composition or magnetic field $H$, the system is said to exhibit a quantum critical point (QCP) [@QCP; @stuart]. The magnetic field is an ideal control parameter, since it can be reversibly and continuously tuned towards the QCP. Two compounds with field-tuned QCPs, YbRh$_2$Si$_2$ and Sr$_3$Ru$_2$O$_7$, reached prominence due to the non-Fermi liquid (NFL) behavior triggered by the quantum fluctuations associated with the QCP. In this letter we present a Ce-compound, CeAuSb$_2$, exhibiting a field-tuned QCP and unusual transport and thermodynamic properties. All three systems have a field-tuned QCP as a common thread, yet their behavior in high fields and low $T$ is considerably different. At zero field YbRh$_2$Si$_2$ exhibits a second-order phase transition into an antiferromagnetic (AF) state at $T_N = 70$ mK [@trovareli].
A magnetic field applied along the inter-plane direction drives $T_N$ to zero at a critical $H_c \simeq 0.66$ T, leading to NFL behavior, i.e., a logarithmic increase of $C_{e}(T)/T$ and a quasilinear $T$ dependence of the electrical resistivity $\rho$ below 10 K [@gegenwartprl]. Above $H_c$ Fermi liquid (FL) behavior is recovered ($\rho\propto AT^2$ and constant Sommerfeld coefficient $\gamma$), with $A(H)$ and $\gamma(H)^2$ displaying a $1/(H-H_c)$ divergence as $H \rightarrow H_c$ [@gegenwartnature]. A similar trend was recently found in YbAgGe [@budkoprb]. Field-tuned anomalous metallic behavior in the vicinity of metamagnetism (MM) was studied in detail in Sr$_3$Ru$_2$O$_7$ [@grigera]. At a MM transition the magnetization $M$ increases rapidly over a narrow range of fields. The transition is of first order, since there is no broken symmetry involved, and terminates in a critical “end" point $(H^{\star}, T^{\star})$ [@millisprl]. In the anisotropic MM transition of Sr$_3$Ru$_2$O$_7$, $T^{\star}$ is found to decrease continuously as $H$ is rotated towards the inter-plane c-axis [@perry], thus opening the possibility of a QCP in a first order transition [@grigera; @millisprl]. This scenario is supported by the $T$-linear dependence in $\rho$ [@grigera], a divergence of the coefficient $A$ of the resistivity [@grigera], the enhancement of the effective mass of the quasiparticles [@borzi], and the $\ln{T}$-dependence of the specific heat $\gamma$ [@Zhou]. Remarkably, at very low $T$ and very close to the critical field $H_c$, $\rho$ displayed a $T^3$-dependence [@grigera]. The QCP of a MM transition is also believed to cause the rich phase diagram of URu$_2$Si$_2$ at high fields [@marcelo]. Among Ce compounds, so far there is evidence for a field-tuned QCP only in CeCoIn$_5$ [@bianchiandpaglioni] and CeIrIn$_5$ [@capan]. In these systems the QCP is believed to give rise to a (possibly unconventional) superconducting (SC) phase, in addition to NFL behavior. 
In CeCoIn$_5$ the nature of the magnetic correlations at low $T$ is still unclear, in part due to the close proximity to SC, while in CeIrIn$_5$ the quantum critical behavior close to the metamagnetic (MM) transition is still under investigation. In this letter we report on the anomalous properties of CeAuSb$_2$. CeAuSb$_2$ is a tetragonal metallic compound, which at $H=0$ orders AF [@onuki] with $T_N = 6.0$ K [@comment]. For $T < T_N$, $\rho(T)$ has the $A T^2$ dependence typical of a FL and the extrapolation of $C_e/T$ to $T=0$ yields a Sommerfeld coefficient of $\gamma \sim 0.1$ J/mol.K$^2$. Hence, CeAuSb$_2$ can be considered a system of relatively light heavy-fermions. Above $T_N$, on the other hand, $\rho(T)$ displays a $T^{\alpha}$ dependence with $\alpha \lesssim 1$ and, $C_e/T$ has a $-\ln T$ dependence, both characteristic of NFL behavior due to a nearby QCP. A magnetic field along the inter-plane direction leads to two subsequent metamagnetic transitions and the concomitant *continuous* suppression of $T_N$ to $T=0$ at $H_c = 5.3 \pm 0.2$ T. As the AF phase boundary is approached from the paramagnetic (PM) phase, $\gamma$ is enhanced and the $A$ coefficient of the resistivity diverges as $(H-H_c)^{-1}$. When $T$ is lowered for $H \sim H_c$, the $T$-dependence of $\rho$ is sub-linear and the one of $C_e/T$ is approximately $-\ln T$. These observations suggest the existence of a field-induced QCP at $H_c$. At higher fields an unconventional $T^3$-dependence emerges in $\rho$ and becomes more prominent as $H$ increases, suggesting that the FL state is *not* recovered for $H \gg H_c$. Single crystals of CeAuSb$_2$ were grown by the self-flux method, as described in Ref. [@canfield], using high purity starting materials with excess Sb as a flux. Microprobe analysis confirms the stoichiometric composition of the samples as well as the absence of sub-phases. 
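Exponents such as the $T^2$, sub-linear, and $T^3$ regimes discussed here are typically extracted from a log-log fit of $\rho-\rho_0$ versus $T$. A minimal self-contained sketch on synthetic data; all numbers are invented:

```python
import math

def fit_exponent(T, rho, rho0):
    """Least-squares slope of log(rho - rho0) versus log(T), i.e. the exponent
    alpha in rho(T) = rho0 + A * T^alpha."""
    xs = [math.log(t) for t in T]
    ys = [math.log(r - rho0) for r in rho]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic Fermi-liquid curve: rho0 = 1.0, A = 0.02, alpha = 2 (arbitrary units).
T = [0.1 * i for i in range(1, 30)]
rho = [1.0 + 0.02 * t ** 2 for t in T]
print(fit_exponent(T, rho, 1.0))  # recovers alpha = 2
```

In practice $\rho_0$ is itself a fit parameter, and the crossovers between regimes show up as curvature in the log-log plot rather than a single slope.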
Electrical transport measurements were performed with the Lock-In technique in both a Bitter and a superconducting magnet coupled to cryogenic facilities. The magnetization as a function of $T$ and $H$ was obtained in a Quantum Design DC SQUID, as well as with a high field vibrating sample magnetometer. The heat capacity was measured in a Quantum Design PPMS system using the relaxation time technique. The heat capacity measurements were extended to higher temperatures in order to resolve the crystalline electric field (CEF) scheme. A Schottky-like peak centered around $T \simeq 50$ K was observed, suggesting an excitation energy of $\Delta = 110$ K between the ground and first excited doublets. Figure 1a) displays the in-plane electrical resistivity $\rho$ as a function of $T$ for $H=0$ T. The vertical arrow indicates the onset of AF order, below which a $T^2$-dependence of $\rho$ is obtained. Above $T_N$ an NFL-like $T^{\alpha}$-dependence with $\alpha \lesssim 1$ is observed. As $H$ increases, the AF order is gradually suppressed and in the vicinity of the AF to PM phase boundary a linear dependence on $T$ emerges (see Fig. 1b)). At higher fields FL behavior, i.e., $\rho = \rho_{0} + AT^{2}$, is found over a limited range of $T$ as indicated in Fig. 1c) by the vertical arrows. The red lines are least-squares fits to the $T^2$-dependence. The slope, given by the $A$ coefficient, decreases as $H$ increases. At very low $T$ and very high $H$, $\rho$ displays an anomalous $T^3$-dependence (see Fig. 1d)), suggesting, on the one hand, an unusual scattering mechanism, and, on the other hand, that the $T^2$-dependence may just be a crossover regime between $T^{\alpha}$ with $\alpha < 1$ and the $T^3$ regions. The $H$ dependence of $\rho$ sheds more light on the role of the
--- author: - 'Costas G. Papadopoulos$^{a,b}$, Damiano Tommasini$^{a}$ and Christopher Wever$^{a,c}$' bibliography: - 'pentabox\_paper.bib' title: The Pentabox Master Integrals with the Simplified Differential Equations approach ---

Introduction
============

With the LHC delivering collisions at the highest energy achieved so far, 13 TeV, experiments are analysing data corresponding to an integrated luminosity of $42$ pb$^{-1}$ [@Khachatryan:2015uqb] and $85$ pb$^{-1}$ [@Atlas], as well as those already collected at an energy of 8 TeV and an integrated luminosity of $20.3$ fb$^{-1}$ [@Aad:2015nda] and $19.7$ fb$^{-1}$ [@Khachatryan:2015tzo]. In order to keep up with the increasing experimental accuracy as more data is collected, more precise theoretical predictions and higher loop calculations are required [@Andersen:2014efa]. In the last ten years, the reduction of one-loop amplitudes to a set of Master Integrals ([**MI**]{}), based on unitarity methods [@Bern:1994cg; @Bern:1994zx] and at the integrand level via the OPP method [@Ossola:2006us; @Ossola:2008xq], has drastically changed the way one-loop calculations are performed, leading to many fully automated numerical tools (some reviews on the topic are [@AlcarazMaestre:2012vp; @Ellis:2011cr]). In recent years, much progress has also been made towards the extension of these reduction methods to two-loop amplitudes at the integral [@Gluza:2010ws; @Kosower:2011ty; @CaronHuot:2012ab] as well as the integrand [@Mastrolia:2011pr; @Badger:2012dp; @Badger:2013gxa; @Papadopoulos:2013hra] level. Contrary to the one-loop case, where MI have been known for a long time already [@'tHooft:1978xw], a complete library of MI at two loops is still missing. At the moment this is the main obstacle to obtaining a fully automated NNLO calculation framework similar to the one-loop one that will satisfy the anticipated precision requirements at the LHC [@Butterworth:2014efa].
Following the work of [@Goncharov:1998kja; @Remiddi:1999ew; @Goncharov:2001iea], there has been a building consensus that the so-called [*Goncharov Polylogarithms*]{} ([**GPs**]{}) form a functional basis for many MI. A very successful method for calculating MI and expressing them in terms of GPs is the differential equations ([**DE**]{}) approach [@Kotikov:1990kg; @Kotikov:1991pm; @Bern:1992em; @Remiddi:1997ny; @Gehrmann:1999as; @Henn:2013pwa], which has been used in the past two decades to calculate various MI at two loops [@Caffo:1998du; @Gehrmann:1999as; @Gehrmann:2000zt; @Gehrmann:2001ck; @Bonciani:2003te; @Laporta:2004rb; @Bonciani:2008az; @Gehrmann:2013cxs; @vonManteuffel:2013uoa; @Henn:2014lfa; @Gehrmann:2014bfa; @Caola:2014lpa; @Papadopoulos:2014hla; @Gehrmann:2015ora]. In [@Papadopoulos:2014lla] a variant of the traditional DE approach to MI was presented, which was coined the Simplified Differential Equations ([**SDE**]{}) approach. In this paper we present a further application of this method, concerning the calculation of planar massless MI relevant to five-point amplitudes with one off-shell leg, as well as the complete set of planar MI for five-point on-shell amplitudes. This is an important step towards the calculation of the full set of MI with up to eight internal propagators needed to realise a fully automated reduction scheme, à la OPP, for NNLO QCD. Pentabox integrals are needed in particular in order to compute NNLO QCD corrections to several processes of interest at the LHC [@Andersen:2014efa]. The process $pp\rightarrow H+2$ jets can be used to measure the $HWW$ coupling to $5\%$ accuracy with 300 fb$^{-1}$ of data; $pp\rightarrow 3$ jets, to study the ratio of $3$-jet to $2$-jet cross sections and measure the running of the strong coupling constant; and $pp\rightarrow V+2$ jets, for PDF determination and background studies for multi-jet final states. The paper is organized as follows.
In Section 2 we set the parameterization and notation of the variables describing the two-loop MI of interest. In Section 3 we discuss the DE obtained, and the results for the pentabox MI. We conclude in Section 4 and provide an overview of the topic and some perspective for future developments. In the Appendix \[x=1\] we present details on the derivation of the planar pentabox MI with on-shell legs and in the Appendix \[expbyreg\] we give a few characteristic examples on how the boundary conditions are properly reproduced in our approach by the DE. Finally in the ancillary files [@results], we provide our analytic results for all two-loop MI in terms of Goncharov polylogarithms together with explicit numerical results. The pentabox integrals ====================== The MI in this paper will be calculated with the SDE approach [@Papadopoulos:2014lla]. Assume that one is interested in calculating an $l-$loop Feynman integral with external momenta $\{p_j\}$, considered incoming, and internal propagators that are massless. Any $l-$loop Feynman integral can be then written as $$G_{a_1\cdots a_n}(\{p_j\},\epsilon)=\int\left(\prod_{r=1}^l \frac{d^dk_r}{i\pi^{d/2}}\right)\frac{1}{D_1^{a_1}\cdots D_n^{a_n}}, \hspace{0.5 cm} D_i=\left(c_{ij}k_j+d_{ij}p_j\right)^2,\,\, d=4-2\epsilon \label{eq:loopgen}$$ with matrices $\{c_{ij}\}$ and $\{d_{ij}\}$ determined by the topology and the momentum flow of the graph, and the denominators are defined in such a way that all scalar product invariants can be written as a linear combination of them. The exponents $a_i$ are integers and may be negative in order to accommodate irreducible numerators. 
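The GPs mentioned above are iterated integrals, $G(a;x)=\int_0^x dt/(t-a)$ and $G(a,b;x)=\int_0^x dt\,G(b;t)/(t-a)$, and the low-weight cases can be checked numerically against their closed forms. A stdlib sketch with a composite Simpson rule:

```python
import math

def G1(a, x):
    """Weight-1 GP: G(a; x) = int_0^x dt/(t - a) = log(1 - x/a) for a != 0."""
    return math.log(1.0 - x / a)

def G2(a, b, x, n=4000):
    """Weight-2 GP, G(a,b; x) = int_0^x dt G(b; t)/(t - a), by composite Simpson."""
    h = x / n
    s = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)
        s += w * G1(b, t) / (t - a)
    return s * h / 3.0

x = 0.5
print(G1(1.0, x))                    # log(1 - x) ~ -0.6931
print(G2(1.0, 1.0, x))               # matches the closed form below
print(math.log(1.0 - x) ** 2 / 2.0)  # G(1,1;x) = log^2(1 - x)/2
```

Higher weights are built the same way, integrating the weight-$(n-1)$ function against $dt/(t-a_1)$, which is why such numerical checks are a convenient cross-check of analytic GP results.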
Any integral $G_{a_1\cdots a_n}$ may be written as a linear combination of a finite subset of such integrals, called Master Integrals, with coefficients depending on the independent scalar products, $s_{ij}=p_i\cdot p_j$, and space-time dimension $d$, by the use of [*integration by parts*]{} ([**IBP**]{}) identities [@Chetyrkin:1981qh; @Tkachov:1981wb; @Laporta:2001dd]. In the traditional DE method, the Master Integrals are differentiated with respect to $p_i \cdot \frac{\partial}{\partial p_j}$ and the resulting integrals are reduced by IBP to give a linear system of first order DE [@Kotikov:1990kg; @Remiddi:1997ny]. The invariants, $s_{ij}$, are then parametrised in terms of dimensionless variables, defined on a case by case basis, so that the resulting DE can be solved in terms of GPs. Usually boundary terms corresponding to the appropriate limits of the chosen parameters have to be calculated using for instance expansion by regions techniques [@Beneke:1997zp; @Smirnov:2002pj]. SDE approach [@Papadopoulos:2014lla] is an attempt not only to simplify, but also to systematize, as much as possible, the derivation of the appropriate system of DE satisfied by the MI. To this end the external incoming momenta are [*parametrized*]{} linearly in terms of $x$ as $p_i(x)=p_i+(1-x)q_i$, where the $q_i$’s are a linear combination of the momenta $\{p_i\}$ such that $\sum_iq_i=0$. If $p_i^2=0$, the parameter $x$ captures the off-shell-ness of the external leg. The class of Feynman integrals in (\[eq:loopgen\]) are now dependent on $x$ through the external momenta: $$G_{a_1\cdots a_n}(\{s_{ij}\},\epsilon;x)=\int\left(\prod_{r=1}^l \frac{d^dk_r}{i\pi^{d/2}}\right)\frac{1}{D_1^{a_1}\cdots D_n^{a_n}}, \;\;\; D_i=\left( c_{ij}k_j+d_{ij}p_
--- abstract: 'We present the results of spectroscopic observations of targets discovered during the first two years of the ESSENCE project. The goal of ESSENCE is to use a sample of $\sim$200 Type Ia supernovae (SNe Ia) at moderate redshifts $(0.2 \lesssim z \lesssim 0.8)$ to place constraints on the equation of state of the Universe. Spectroscopy not only provides the redshifts of the objects, but also confirms that some of the discoveries are indeed SNe Ia. This confirmation is critical to the project, as techniques developed to determine luminosity distances to SNe Ia depend upon the knowledge that the objects at high redshift are the same as the ones at low redshift. We describe the methods of target selection and prioritization, the telescopes and detectors, and the software used to identify objects. The redshifts deduced from spectral matching of high-redshift SNe Ia with low-redshift SNe Ia are consistent with those determined from host-galaxy spectra. We show that the high-redshift SNe Ia match well with low-redshift templates. We include all spectra obtained by the ESSENCE project, including 52 SNe Ia, 5 core-collapse SNe, 12 active galactic nuclei, 19 galaxies, 4 possibly variable stars, and 16 objects with uncertain identifications.' author: - 'Thomas Matheson, Stéphane Blondin, Ryan J. Foley, Ryan Chornock, Alexei V. Filippenko, Bruno Leibundgut, R. Chris Smith, Jesper Sollerman, Jason Spyromilio, Robert P. Kirshner, Alejandro Clocchiatti, Claudio Aguilera, Brian Barris, Andrew C. Becker, Peter Challis, Ricardo Covarrubias, Peter Garnavich, Malcolm Hicken, Saurabh Jha, Kevin Krisciunas, Weidong Li, Anthony Miceli, Gajus Miknaitis, Jose Luis Prieto, Armin Rest, Adam G. Riess, Maria Elena Salvo, Brian P. Schmidt, Christopher W. Stubbs, Nicholas B. Suntzeff, and John L. 
Tonry' title: 'Spectroscopy of High-Redshift Supernovae from the ESSENCE Project: The First Two Years' --- Introduction ============ The revolution wrought in modern cosmology using luminosity distances of Type Ia supernovae (SNe Ia) [@schmidt98; @riess98; @perlmutter99; @riess01; @knop03; @tonry03; @barris2304; @riess04b] relies upon the fact that the objects so employed are, in fact, SNe of Type Ia. Although the light-curve shape alone is useful [e.g., @barris04], the only way to be sure of the true nature of an object as a SN Ia is through spectroscopy. The calculation of luminosity distances depends upon the high-redshift objects being SNe Ia so that low-redshift calibration methods can be employed. The classification scheme for SNe is based upon the optical spectrum near maximum [see @filippenko97 for a review of SN types], so rest-wavelength optical spectroscopy is necessary to properly identify SNe Ia at high redshifts. Despite this significance, relatively little attention has been paid to the spectroscopy of the high-redshift SNe Ia, with some notable exceptions [@coil00]. Other publications that include high-redshift SN Ia spectra include @schmidt98, @riess98, @perlmutter98, @leibundgut01, @tonry03, @barris2304, @blondin04, @riess04b, and @lidman04. In addition to providing evidence for the acceleration of the expansion of the Universe, it was recognized at an early stage that high-redshift SNe Ia could put constraints on the equation of state for the Universe [@garnavich98], parameterized as $w = P/(\rho c^2)$, the ratio of the dark energy’s pressure to its density. To further explore this, the ESSENCE project was begun. 
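The way $w$ enters the luminosity distances that SNe Ia measure can be sketched directly. A minimal flat-universe numerical integration; the $H_0$ and $\Omega_m$ values are illustrative and a constant $w$ is assumed:

```python
import math

C_KM_S = 299792.458  # speed of light, km/s

def lum_dist(z, w, H0=70.0, Om=0.3, n=2000):
    """Luminosity distance (Mpc) in a flat universe with constant-w dark energy:
    d_L = (1 + z) (c/H0) int_0^z dz'/E(z'),
    E(z) = sqrt(Om (1+z)^3 + (1 - Om) (1+z)^(3(1+w)))."""
    def E(zz):
        return math.sqrt(Om * (1 + zz) ** 3 + (1 - Om) * (1 + zz) ** (3 * (1 + w)))
    h = z / n
    integral = sum((1.0 / E(i * h) + 1.0 / E((i + 1) * h)) * h / 2.0 for i in range(n))
    return (1 + z) * (C_KM_S / H0) * integral

# At fixed z, a less negative w (weaker acceleration) gives a smaller distance;
# this is the lever arm that SN Ia distance moduli provide on w.
for w in (-1.2, -1.0, -0.8):
    print(w, round(lum_dist(0.5, w), 1))
```

The differences between the three curves are only a few percent at $z\sim0.5$, which is why a sample of order 200 well-measured SNe Ia over $0.2 \lesssim z \lesssim 0.8$ is needed to constrain $w$.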
The ESSENCE (Equation of State: SupErNovae trace Cosmic Expansion) project is a five-year ground-based SN survey designed to place constraints on the equation-of-state parameter for the Universe using $\sim$200 SNe Ia over a redshift range of $0.2 \lesssim z \lesssim 0.8$ [see @miknaitis05; @smith05 for a more extensive discussion of the goals and implementation of the ESSENCE project]. Spectroscopic identification of optical transients is a major component of the ESSENCE project. In addition to confirming some targets as SNe Ia, the spectroscopy provides redshifts, allowing the derived luminosity distances to be compared with a given cosmological model. So many targets are discovered during the ESSENCE survey that a large amount of telescope time on 6.5 m to 10 m telescopes is required. In the first two years of the program, we were fortunate enough to have been awarded over 60 nights at large-aperture telescopes. Even with this much time, though, our resources were insufficient to spectroscopically identify all of the potentially useful candidates. This remains the most significant limiting factor in achieving the ESSENCE goal of finding, identifying, and following the desired number of SNe Ia with the appropriate redshift distribution. Nonetheless, spectroscopic observations of ESSENCE targets in the time available have been successful, with almost fifty SNe Ia clearly identified, and several more characterized as likely SNe Ia. Other identifications include core-collapse SNe, active galactic nuclei (AGNs), and galaxies. The galaxy spectra may include an unidentified SN component. This paper will describe the results of the spectroscopic component of the first two years of the ESSENCE program. Year One refers to our 2002 Sep-Dec campaign; Year Two was our 2003 Sep-Dec campaign. In Section \[target\], we describe the process of target selection and prioritization. Section \[obs\] describes the technical aspects of the observations. 
We discuss target identification in Section \[id\]. The summary of results in terms of types of objects and success rates is given in Section \[results\]. In addition, we present in Section \[results\] all of the spectra obtained, including those of the SNe Ia (with low-redshift templates), core-collapse SNe, AGNs, galaxies, stars, and objects that remain unidentified. Target Selection\[target\] ========================== The ESSENCE survey uses the Blanco 4 m telescope at CTIO with the MOSAIC wide-field CCD camera to detect many kinds of optical transients [@smith05]. Temporal coverage helps to identify solar-system objects such as Kuiper Belt Objects (KBOs) and asteroids. Known AGNs and variable stars can also be eliminated from the possible SN Ia list. The remaining transients are all potentially SNe. They are also faint, requiring large-aperture telescopes to obtain spectra of the quality necessary to securely identify the object. Exposure times on 8-10 m telescopes are typically about half an hour, but can be as much as two hours. Such telescope time is difficult to obtain in quantity, so not all of the detected transients can be examined spectroscopically. We apply several criteria to prioritize target selection for spectroscopic observation. The first step in sorting targets is based upon the spectroscopic resources available. The equatorial fields used for the ESSENCE program are accessible from most major astronomical sites, so the main concern with matching targets to spectroscopic telescopes is the aperture size of the telescope. The ESSENCE targets are generally in the range $18 \lesssim m_R \lesssim 24$ mag. When 8-10 m telescopes are unavailable, the fainter targets become lower in priority. The limit for low-dispersion spectroscopy to identify SNe with the 6.5 m telescopes is $m_R \approx 22-23$ mag, although this will vary with weather conditions and seeing. 
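The aperture-versus-magnitude matching described above can be written as a simple routing rule. A hypothetical sketch; the function and its thresholds merely paraphrase the text and are not from the ESSENCE pipeline:

```python
def assign_telescope(m_R):
    """Route a candidate to a telescope class by apparent R-band magnitude.
    Thresholds paraphrase the text: the 6.5 m class reaches m_R ~ 22-23 mag,
    fainter targets need 8-10 m apertures, and beyond ~24 mag is out of range."""
    if m_R <= 22.0:
        return "6.5m"
    if m_R <= 24.0:
        return "8-10m"
    return "too faint"

for m in (19.5, 22.8, 24.5):
    print(m, assign_telescope(m))
```

In practice the cut between classes shifts night to night with weather and seeing, so a rule like this would only seed the prioritization rather than decide it.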
If the full range of telescopes is available, then targets are prioritized by magnitude for observation at a given telescope. The longitudinal distribution of spectroscopic resources can be important if confirmation of a high-priority target is made during a night when multiple spectroscopic resources are available. By the time a target is confirmed, the fields may have set for telescopes in Chile, while they are still accessible from Hawaii. This requires active, real-time collaboration between the group finding SN candidates and those running the spectroscopic observations. One advantage of the ESSENCE program is that fields are imaged in multiple filters, allowing for discrimination of targets by color. @tonry03 present a table of expected SN Ia peak magnitudes as a function of redshift; see also @poznanski02, @galyam04, @riess04a, @strolger04, and @smith05 for discussions of color selection for SN candidates. Given apparent $R$-band and $I$-band magnitudes, one can calculate the $R-I$ color and compare that with an expected color for those magnitudes. The cadence of the ESSENCE program (returning to the same field every four days) will likely catch SNe at early phases (i.e., before maximum brightness). Early core-collapse SNe are bluer than SNe Ia, as are AGNs. For example, when selecting for higher-redshift targets, objects with $R-I \lesssim 0.2$ mag were considered unlikely to be S
--- abstract: 'The particle-unbound $^{26}$O nucleus is located outside the neutron drip line, and spontaneously decays by emitting two neutrons with a relatively long lifetime due to the centrifugal barrier. We study the decay of this nucleus with a three-body model assuming an inert $^{24}$O core and two valence neutrons. We first point out the importance of the neutron-neutron final state interaction in the observed decay energy spectrum. We also show that the energy and angular distributions for the two emitted neutrons manifest clear evidence for the strong neutron-neutron correlation in the three-body resonance state. In particular, we find an enhancement of two-neutron emission in back-to-back directions. This is interpreted as a consequence of [*dineutron correlation*]{}, with which the two neutrons are spatially localized before the emission.' author: - 'K. Hagino' - 'H. Sagawa' title: ' Correlated two-neutron emission in the decay of unbound nucleus $^{26}$O' --- Correlations among particles lead to a variety of rich phenomena in many-fermion systems, such as superconductivity and superfluidity. The spatial distribution of particles is also affected by the correlations. For many-electron systems, the Coulomb repulsion between electrons yields the so-called Coulomb hole, in which the distribution of the second electron is largely suppressed in the vicinity of the first electron [@CN61; @RRB78]. In atomic nuclei, in contrast, an attractive nuclear force leads to the dineutron and diproton correlations, with which two nucleons are spatially localized in the surface region of nuclei [@BBR67; @CIMV84]. These nuclear correlations have attracted much attention recently [@BE91; @Zhukov93; @HS05; @MMS05; @PSS07], in connection with the physics of weakly bound nuclei. In order to probe the inter-particle correlation, a standard approach in atomic physics has been to measure double ionization with strong laser fields [@WSD94; @WGW00; @BKJ12; @BLHE12]. 
It has been observed that the ionization rate is significantly enhanced due to the electronic correlation, and moreover, there is a strong momentum correlation between the two emitted electrons. The corresponding experiment in nuclear physics is the Coulomb breakup of the Borromean nuclei $^{11}$Li and $^6$He, in which those nuclei are broken up into the core nuclei, $^9$Li and $^4$He, and two neutrons in the Coulomb field of a target nucleus [@N06; @A99; @NK12]. The observed breakup probabilities, especially those for the $^{11}$Li nucleus, show a sharp peak in the low-energy region, which can be accounted for only by taking into account the neutron-neutron correlations. Furthermore, from the observed strength distribution, the opening angle between the valence neutrons in the ground state of the Borromean nuclei has been inferred employing the cluster sum rule [@N06; @HS07; @BH07]. For both $^{11}$Li and $^6$He, the extracted opening angles were significantly smaller than the value for independent neutrons, that is, 90 degrees, and clearly indicated the existence of the dineutron correlation. A small drawback of the cluster sum rule approach is that it yields only an expectation value of the opening angle, and a detailed angular distribution cannot be studied with this method. For this reason, the energy and the angular distributions of the emitted neutrons from the Coulomb breakup have been investigated [@EB92]. However, it has been concluded that those distributions are largely determined by the properties of the neutron-core system, and thus it is difficult to acquire detailed information on the neutron-neutron correlations from the Coulomb breakup measurement [@HSNS09; @KKM10]. It is therefore desirable to seek other probes of the nucleonic correlation. Among them, the two-proton radioactivity, that is, the spontaneous emission of two protons from proton-unbound nuclei, has been considered to be a good candidate for that purpose [@PKGR12]. 
An attractive feature of this phenomenon is that the two valence protons are emitted without the disturbing influence of an external field on the nucleus. Very recently, the ground state [*two-neutron*]{} emission was discovered for $^{16}$Be [@SKB12]. Earlier measurements on the two-neutron emission include those for $^{10}$He [@J10] and $^{13}$Li [@J10; @A08]. These are a counterpart of the two-proton emission of proton-rich nuclei, corresponding to a penetration of two neutrons through a centrifugal barrier. Subsequently, the two-neutron emission was discovered also for $^{26}$O [@LDK12; @CSA12] and $^{13}$Li [@KLD13]. So far, the experimental data have been analyzed only with a schematic dineutron model [@SKB12; @KLD13] (see also Ref. [@MOA12]). Although such a schematic model appears to reproduce the data, realistic three-body model calculations with configuration mixing and the full neutron-neutron correlation are clearly called for. In this paper, we apply the three-body model with a density-dependent contact interaction between the valence neutrons to the decay problem of $^{26}$O, assuming $^{24}$O to be an inert core. This model has been successfully applied to describe the ground state properties and the [Coulomb break-up]{} of neutron-rich nuclei [@BE91; @HS05; @HSNS09; @EBH97]. In order to describe the decay of a neutron-unbound nucleus, we shall take into account the couplings to the continuum by the Green’s function technique, which was invented in Ref. [@EB92] in order to describe the continuum dipole excitations of $^{11}$Li. We shall discuss the role of the neutron-neutron correlation in the decay probability, as well as in the energy and the angular distributions of the emitted neutrons. In the experiment of Ref. [@LDK12], the $^{26}$O nucleus was produced in the single proton-knockout reaction from a secondary $^{27}$F beam. We therefore first construct the ground state of $^{27}$F with a three-body model, assuming the $^{25}$F+$n$+$n$ structure. 
We then assume a sudden proton removal, that is, the $^{25}$F core changes to $^{24}$O while the configuration of the $n$+$n$ subsystem of $^{26}$O remains the same as in the ground state of $^{27}$F. This initial state, $\Psi_i$, is then evolved with the Hamiltonian for the three-body $^{24}$O+$n$+$n$ system for the two-neutron decay. We therefore consider two three-body Hamiltonians, one for the initial state $^{25}$F+$n$+$n$ and the other for the final state $^{24}$O+$n$+$n$. For both systems, we use Hamiltonians similar to those in Refs. [@HS05; @EBH97], $$H=\hat{h}_{nC}(1)+\hat{h}_{nC}(2)+v(1,2) +\frac{{\mbox{\boldmath $p$}}_1\cdot{\mbox{\boldmath $p$}}_2}{A_cm}, \label{3bh}$$ where $A_c$ is the mass number of the core nucleus, $m$ is the nucleon mass, and $\hat{h}_{nC}$ is the single-particle (s.p.) Hamiltonian for a valence neutron interacting with the core. The last term in Eq. (\[3bh\]) is the two-body part of the recoil kinetic energy of the core nucleus [@EBH97], while the one-body part is included in $\hat{h}_{nC}$. We use a contact interaction between the valence neutrons, $v$, given as [@BE91; @HS05; @EBH97], $$v({\mbox{\boldmath $r$}}_1,{\mbox{\boldmath $r$}}_2)=\delta({\mbox{\boldmath $r$}}_1-{\mbox{\boldmath $r$}}_2) \left(v_0+\frac{v_\rho}{1+\exp[(r_1-R_\rho)/a_\rho]}\right). \label{vnn}$$ Here, the strength $v_0$ is determined to be $-$857.2 MeV$\cdot$fm$^{3}$ from the scattering length for $nn$ scattering together with the cutoff energy, which we take to be $E_{\rm cut}=30$ MeV. See Refs. [@HS05; @EBH97] for details. The second term in Eq. (\[vnn\]) simulates the density dependence of the interaction. Taking $R_\rho=1.34\times A_c^{1/3}$ fm and $a_\rho$=0.72 fm, we adjust the value of $v_\rho$ to be 952.3 MeV$\cdot$fm$^{3}$ so as to reproduce the experimental two-neutron separation energy of $^{27}$F, $S_{\rm 2n}$=2.80(18) MeV [@JSM07]. We employ a Woods-Saxon form for the s.p. potential in $\hat{h}_{nC}$. 
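As an aside not in the original, the density dependence of Eq. (\[vnn\]) is easy to illustrate numerically with the quoted parameter values: the contact strength goes from the attractive free-space value $v_0$ well outside the core to a weakly repulsive value deep inside, i.e., the pairing attraction is quenched at high density. A minimal sketch (ours, for illustration only):

```python
import math

# Parameter values quoted in the text (MeV fm^3 for strengths, fm for lengths)
v0, vrho = -857.2, 952.3
A_c = 24
R_rho = 1.34 * A_c ** (1.0 / 3.0)   # ~3.86 fm
a_rho = 0.72

def contact_strength(r):
    """Density-dependent factor multiplying delta(r1 - r2) in Eq. (vnn),
    evaluated at r1 = r2 = r (in fm)."""
    return v0 + vrho / (1.0 + math.exp((r - R_rho) / a_rho))

print(contact_strength(0.0))    # ~ +91 MeV fm^3: pairing quenched inside
print(contact_strength(10.0))   # ~ -857 MeV fm^3: free nn strength outside
```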
For the $^{24}$O+$n+n$ system, we take $a=0.72$ fm and $R_0=1.25A_c^{1/3}$ fm with $A_c=24$, and determine the values of $V_0=-44.1$ MeV and $V
--- abstract: 'We describe a representation of the Bolthausen-Sznitman coalescent in terms of the cutting of random recursive trees. Using this representation, we prove results concerning the final collision of the coalescent restricted to $[n]$: we show that the distribution of the number of blocks involved in the final collision converges as $n\to\infty$, and obtain a scaling law for the sizes of these blocks. We also consider the discrete-time Markov chain giving the number of blocks after each collision of the coalescent restricted to $[n]$; we show that the transition probabilities of the time-reversal of this Markov chain have limits as $n \rightarrow \infty$. These results can be interpreted as describing a “post-gelation” phase of the Bolthausen-Sznitman coalescent, in which a giant cluster containing almost all of the mass has already formed and the remaining small blocks are being absorbed.' author: - 'Christina Goldschmidt and James B. Martin' bibliography: - 'recursivetrees.bib' title: 'Random recursive trees and the Bolthausen-Sznitman coalescent' --- Introduction ============ The Bolthausen-Sznitman coalescent, $\left(\Pi(t), t\geq 0\right)$, is a Markov process which takes its values in the set of partitions of ${\ensuremath{\mathbb{N}}}$. It is most easily defined via its restriction $\left(\Pi^{[n]}(t), t\geq 0\right)$ to the set $[n]:=\{1,2,\dots, n\}$, for $n\geq 1$. Denote by $\#\Pi^{[n]}(t)$ the number of blocks of $\Pi^{[n]}(t)$. Then $\left(\Pi^{[n]}(t), t\geq 0\right)$ is a continuous-time Markov chain whose transition rates are as follows: if $\#\Pi^{[n]}(t)=b$, then any $k$ of the blocks present coalesce at rate $$\label{eqn:lambdas} \lambda_{b,k}=\frac{(k-2)!(b-k)!}{(b-1)!}, \qquad 2\leq k\leq b\leq n.$$ It is usual to start the coalescent from the partition into singletons, $$\Pi(0)=\big(\{1\}, \{2\}, \{3\},\dots\big),$$ and in this case we say that the coalescent is *standard*. 
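As an illustration (ours, not from the paper), the rates $\lambda_{b,k}$ above can be tabulated directly; a handy consistency check is that summing $\binom{b}{k}\lambda_{b,k}$ over $k$ gives a total collision rate of exactly $b-1$ when $b$ blocks are present, a well-known property of the Bolthausen-Sznitman coalescent.

```python
from math import comb, factorial

def rate(b, k):
    """Collision rate lambda_{b,k} for a given set of k of the b blocks."""
    return factorial(k - 2) * factorial(b - k) / factorial(b - 1)

def total_rate(b):
    """Total rate at which some collision occurs from b blocks."""
    return sum(comb(b, k) * rate(b, k) for k in range(2, b + 1))

for b in range(2, 8):
    print(b, total_rate(b))  # the totals come out to b - 1
```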
The Bolthausen-Sznitman coalescent was first introduced in [@Bolthausen/Sznitman], in the context of the Sherrington-Kirkpatrick model for spin glasses. In [@PitmanLambdaCoal], Pitman demonstrated a great number of its properties. He introduced the class of coalescents with multiple collisions (also known as $\Lambda$-coalescents), gave a construction of them based on Poisson random measures and studied the Bolthausen-Sznitman coalescent as a member of this class. Bertoin and Le Gall [@Bertoin/LeGall] give an alternative derivation in terms of the genealogy of a continuous-state branching process. Marchal [@MarchalVienna] gives a construction via regenerative sets. In this paper, we describe a new representation for the Bolthausen-Sznitman coalescent in terms of the cutting of random recursive trees. We then use this representation to prove results about the last collision of the coalescent restricted to $[n]$, as $n \rightarrow \infty$. We obtain scaling laws for the sizes of the blocks involved in the final collision; essentially, this collision involves one large block and one or more smaller blocks, whose combined size behaves like $n^U$, where $U$ has the uniform distribution on $[0,1]$. We also show that the distribution of the number of blocks involved in the final collision converges as $n\to\infty$ (for example, the probability that exactly two blocks are involved converges to $\log 2$). More generally, we can also consider the discrete-time Markov chain giving the number of blocks after each collision of the coalescent restricted to $[n]$. We show that the transition probabilities of the time-reversal of this Markov chain have limits as $n \rightarrow \infty$, which we make explicit. (We observe in passing that the form of these limiting probabilities yields certain infinite product expansions of powers of $e$, which appear to be new). 
These results can be interpreted as describing a “post-gelation” phase of the Bolthausen-Sznitman coalescent, in which a giant cluster containing almost all of the mass has already formed and the very small left-over blocks are being absorbed. We also note that the tree representation has an intrinsic asymmetry which contrasts strongly with the exchangeability properties of the coalescent itself (for example, the root of the tree always represents the block containing $1$). This makes it possible to read off properties concerning a tagged particle in the coalescent process directly from the tree representation. Random recursive trees ====================== The representation {#sec:BS/RRT} ------------------ A tree on $n$ vertices labelled $1,2,\ldots,n$ is called a *recursive tree* if the vertex labelled $1$ is the root and, for all $2 \leq k \leq n$, the sequence of vertex labels in the path from the root to $k$ is increasing (Stanley [@Stanley1] calls this an unoriented increasing tree). See Figure \[fig:rrt\] for an example of a recursive tree. Call a *random recursive tree* a tree chosen uniformly at random from the $(n-1)!$ possible recursive trees on $n$ vertices. A random recursive tree can also be constructed as follows. The vertex $1$ is distinguished as the root. We imagine the vertices arriving one by one. For $k \geq 2$, vertex $k$ attaches itself to a vertex chosen uniformly at random from $1, 2, \ldots, k-1$. For a detailed survey of results on recursive trees, see Smythe and Mahmoud [@Smythe/Mahmoud]. ![A recursive tree on $[10]$.[]{data-label="fig:rrt"}](rrt.eps) We can represent an infinite tree by the infinite sequence of integers $\{p_i\}_{i \geq 2}$, where $p_i$ is the parent of node $i$. (The condition for this to be a recursive tree is $1 \leq p_i < i$ for $i \geq 2$.) 
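The sequential construction just described translates directly into code; the sketch below (our illustration) builds the parent sequence $\{p_i\}$ and checks the recursive-tree condition $1 \leq p_i < i$.

```python
import random

def random_recursive_tree(n, rng=random):
    """Parent sequence {p_i} of a random recursive tree on [n]:
    vertex 1 is the root, and each vertex i >= 2 attaches itself to a
    parent chosen uniformly at random from 1, ..., i - 1."""
    return {i: rng.randint(1, i - 1) for i in range(2, n + 1)}

parent = random_recursive_tree(10)
# every root-to-vertex path has increasing labels, since p_i < i
assert all(1 <= parent[i] < i for i in parent)
```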
For the purposes of this article, it will be convenient also to define a random recursive tree on a label set $\{l_1, l_2, \ldots, l_b\}$ where $l_1, l_2, \ldots, l_b$ are the blocks of a partition of $[n]$ for some $n$, listed in increasing order of least elements. (That is, we require that $l_1 < l_2 < \ldots < l_b$ where “$<$” is the total order induced by the least elements of the blocks.) The tree is constructed in the obvious way: $l_1$ labels the root and $l_k$ is attached to a vertex chosen uniformly at random from those labelled $l_1, l_2, \ldots, l_{k-1}$. Call the *weight* of a label the number of integers it contains and let $\mathcal{L}_n$ be the set of partitions of $[n]$. ![The cutting procedure for a random recursive tree on $[10]$, with the cuts indicated.[]{data-label="fig:rrtcut"}](rrtcut2.eps) Meir and Moon [@Meir/Moon] define a cutting procedure which they apply to random recursive trees. They take a random recursive tree on $[n]$ and pick an edge $e$ uniformly at random from the $n-1$ present. This edge is deleted along with the entire subtree below it. These operations are then repeated until the root is isolated. The idea of cutting combinatorial trees in this way appears to have been introduced in Meir and Moon [@Meir/MoonRandomTrees] and remains a current research topic. Interest has tended to focus on the number of cuts required to isolate the root in different types of trees. Recent references include Janson [@JansonPreprint; @JansonVienna], Fill, Kapur and Panholzer [@Fill/Kapur/Panholzer] and Panholzer [@PanholzerDMTCS; @PanholzerVienna]. In particular, [@PanholzerVienna] treats the case of random recursive trees; the author’s presentation of this work at the MathInfo 2004 conference in Vienna stimulated the present work. A variant of the cutting procedure will be the basis of our representation of the Bolthausen-Sznitman coalescent. 
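For concreteness (our sketch, not the authors' code), the Meir-Moon procedure can be run on the parent-sequence representation; since each non-root vertex owns exactly one parent edge, a uniform edge pick is a uniform pick of a remaining non-root vertex.

```python
import random

def cuts_to_isolate_root(parent, rng=random):
    """Meir-Moon cutting: repeatedly delete a uniformly random remaining
    edge together with the entire subtree below it, until the root
    (vertex 1) is isolated; return the number of cuts used.
    `parent` maps each non-root vertex to its parent, with parent[i] < i."""
    parent = dict(parent)  # work on a copy
    cuts = 0
    while parent:
        v = rng.choice(sorted(parent))
        # collect the subtree below the cut edge; since parent[u] < u,
        # scanning vertices in increasing order visits parents first
        doomed = {v}
        for u in sorted(parent):
            if parent[u] in doomed:
                doomed.add(u)
        for u in doomed:
            del parent[u]
        cuts += 1
    return cuts

tree = {i: random.randint(1, i - 1) for i in range(2, 11)}
print(cuts_to_isolate_root(tree))  # between 1 and 9 cuts for n = 10
```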
Suppose that instead of throwing the subtree below $e$ away, we add its labels to those of the vertex above $e$. We repeat this procedure until only the root remains, labelled by $[n]$. See Figure \[fig:rrtcut\] for an example. \[prop:rrt\] Suppose $T$ is a random recursive tree on $L = \{l_1,l_2,\ldots,l_b\} \in \mathcal{L}_n$. Pick an edge at random, cut it and add the labels below the cut to the label above. Then the resulting tree is a random recursive tree on the
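The label-merging variant described above can be sketched in the same parent-sequence representation (our illustration, with labels starting as singletons):

```python
import random

def merge_cuts_to_root(parent, rng=random):
    """Variant cutting: pick a uniformly random remaining edge and, instead
    of discarding the subtree below it, pour all of its labels into the
    vertex just above the cut; repeat until only the root remains,
    labelled by the whole vertex set."""
    parent = dict(parent)
    labels = {v: {v} for v in set(parent) | {1}}
    while parent:
        v = rng.choice(sorted(parent))
        above = parent[v]
        doomed = {v}
        for u in sorted(parent):          # parents precede children
            if parent[u] in doomed:
                doomed.add(u)
        for u in doomed:
            labels[above] |= labels.pop(u)
            del parent[u]
        # the label sets at the surviving vertices form the current partition
    return labels[1]

tree = {i: random.randint(1, i - 1) for i in range(2, 11)}
assert merge_cuts_to_root(tree) == set(range(1, 11))
```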
--- abstract: 'Nano–particles are of great interest in fundamental and applied research. However, their accurate visualization is often difficult and the interpretation of the obtained images can be complicated. We present a comparative scanning electron microscopy and helium ion microscopy study of [cetyltrimethylammonium–bromide (CTAB)]{} coated gold nano–rods. Using both methods we show how the gold core as well as the surrounding thin CTAB shell can selectively be visualized. This allows for a quantitative determination of the dimensions of the gold core or the CTAB shell. The obtained CTAB shell thickness of 1.0 nm–1.5 nm is in excellent agreement with earlier results using more demanding and reciprocal space techniques.' address: - 'Physics of Interfaces and Nanomaterials, MESA+ Institute for Nanotechnology, University of Twente, PO Box 217, 7500AE Enschede, The Netherlands' - 'NanoLab, MESA+ Institute for Nanotechnology, University of Twente, PO Box 217, 7500AE Enschede, The Netherlands' author: - Gregor Hlawacek - Imtiaz Ahmad - 'Mark A. Smithers' - 'E. Stefan Kooij' title: 'To see or not to see: Imaging surfactant coated nano–particles using HIM and SEM' --- Helium Ion Microscopy, Scanning Electron Microscopy, Nano–particles Introduction {#sec:intro} ============ Today, nano–particles can be synthesized with a variety of shapes [@Ahmed2010a; @Lohse2013a; @Ye2013; @Grzelczak2008; @Nikoobakht2003] and arrangements [@Ahmed2010; @Bishop2009], allowing for different applications. To unveil the full potential of these nano–particle based applications [@Dreaden2012; @Halas2010] in general, it is imperative to understand and characterize the wide range of intriguing properties of these nanoscale entities. Important structural and compositional information can be obtained from high resolution imaging of these particles in their native form. 
It is crucial to realize that not only the shape but also the nearly always present surfactant layer influences the properties of the nano–particles [@Chanana2012]. Scanning Electron Microscopy (SEM) is routinely used to obtain information on the shape, size and arrangement of nano–particles. This method is very successful in this research field as it is minimally invasive and can achieve the required resolution of a few nano–meters down into the sub–nanometer range [@Vladar2009]. With the advent of new detectors that allow energy filtering and separation of the different contributions to the signal, as well as the possibility to use ultra–low acceleration voltages, the surface sensitivity of the method has also increased substantially. Alternatively, a new charged particle scanning beam microscopy method entered the market a few years ago. Helium Ion Microscopy (HIM) [@Economou2006] has an ultimate resolution as small as 0.29 nm [@Hill2011; @Vladar2009] and a very high surface sensitivity [@Hlawacek2012]. It uses helium ions to generate a multitude of signals including secondary electrons (SE), backscattered helium (BSHe) and photons. Despite their obvious advantages, both methods—SEM [@Lau2010] as well as HIM [@vanGastel2011]—are plagued by carbon deposition in the scanned area. This carbon deposition reduces image quality and in particular hinders the detection of ultra–thin carbon layers intentionally present on the sample. HIM is particularly sensitive to this effect for two reasons. Firstly, helium ions with a typical energy of 30 keV are very efficient in cracking hydrocarbons present on the sample surface. These hydrocarbons are either already present on the sample or replenished from the vacuum during imaging. Secondly, due to the high surface sensitivity of HIM, even very thin layers of carbon will be visible in the image. In particular the last point also applies to very low–voltage SEM. 
However, by applying appropriate cleaning procedures to the chamber as well as the sample prior to imaging, this problem can be eliminated. Provided that deposition of carbon from the chamber vacuum can be excluded, a very high sensitivity for intentionally deposited ultra–thin carbon layers is possible in HIM [@Hlawacek2012]. As a result of the surfactant assisted fabrication routes, nano–particles are usually covered by such a thin carbon based layer. In the case discussed here, gold nano–rods are covered with an interdigitated double layer of cetyltrimethylammonium (CTA) which is formed during synthesis using CTA–bromide (CTAB). Comparison of Small Angle X–ray Scattering (SAXS) and Transmission Electron Microscopy (TEM) measurements revealed that the thickness of this shell is between 1.0 nm and 1.5 nm [@Sui2006]—and thus less than the length of a single stretched CTA molecular ion of 2.2 nm [@Venkataraman2001]. In this paper we will present high–resolution images of CTAB/Au core–shell nano–particles obtained with SEM and HIM. In this context the underlying reasons for the visibility of either the gold core or the CTAB shell in the different imaging modes will be discussed. By comparing core and shell we show that the thickness of the CTA layer can be measured with sufficient accuracy, reducing the necessity for more elaborate measurement strategies such as SAXS and TEM. Materials and methods {#sec:mat-meth} ===================== Nano–rod preparation {#sec:mat-meth:nanorod} -------------------- CTAB–stabilized gold nano–rods of aspect ratios 4 and 5 were synthesized using a seed–mediated synthesis [@Nikoobakht2003]. To remove excess CTAB from the suspensions, they were centrifuged at 15000 rpm for 10 minutes. The supernatant was carefully removed, leaving the sedimented nano–rods in the bottom of the centrifuge tube. Finally, the nano–particles were resuspended in the same amount of Milli–Q water. This procedure was performed twice. 
In addition, the suspensions were centrifuged at 5600 rpm for 5 minutes to eliminate most spheres from the suspension. Ultraviolet–visible (UV–VIS) spectroscopy was used to identify typical resonances in the as–prepared nano–particles consisting of rods and some remaining spheres. The longitudinal peaks were situated at 800 nm and 860 nm for nano–rods of aspect ratio 4 and 5, respectively. The corresponding rod lengths amount to 45 nm $\pm$ 5 nm for aspect ratio 4 and 55 nm $\pm$ 5 nm for aspect ratio 5. The width of all rods is between 10 nm and 12 nm. Samples were prepared for HIM and SEM analysis by drop–casting 30 µl of each suspension onto a clean SiO$_2$ substrate. Within 2 h the liquid had completely evaporated, leaving a coffee–stain ring of gold nano–particles. No further sample conditioning was necessary for the subsequent SEM and HIM imaging. Charged particle beam microscopy {#sec:Mat-meth:microscopy} -------------------------------- HIM measurements were performed using an ultra–high vacuum (UHV) Orion Plus helium ion microscope from Zeiss [@vanGastel2011]. The microscope is equipped with an Everhart–Thornley (ET) detector for Secondary Electron (SE) detection. A micro–channel plate situated below the last lens, just above the sample, allows the qualitative analysis of Backscattered Helium (BSHe). This detector yields images in which dark areas correspond to light elements (low backscatter probability) and bright areas to heavy elements (high backscatter yield) in the specimen. High Resolution Scanning Electron Microscopy (HRSEM) measurements were performed using a Merlin Field Emission SEM (FE–SEM) from Zeiss. The microscope is equipped with an on–axis in–lens secondary electron detector as well as a high efficiency off–axis secondary electron detector. 
The in–lens detector—which has been used in this study—is a high efficiency detector for SE1 and SE2 and owes its superb imaging results to its geometric position in the beam path and the combination with the electrostatic/electromagnetic lens. This detector is particularly powerful at low voltages, provided a small working distance can be reached. Simulation methods {#sec:Mat-meth-simu} ------------------ In order to assess the yield and origin of secondary electrons as well as backscattered electrons in SEM, Monte Carlo simulations using CASINO [@Demers2011] have been utilized. The sample was modeled using a 2 nm thick carbon layer on a 10 nm thick gold slab on top of a silicon substrate. The density of the carbon layer has been manually set to 0.5 g/cm$^3$. Secondary electron and backscattered electron yields were calculated as well as the ${Z_\mathrm{max}}$ distribution. SRIM [@Ziegler2008] calculations have been used to obtain insight into the contrast ratios for backscattered helium images. Backscatter yields for nano–rods and CTA covered silicon were calculated using the Kinchin–Pease approximation. The same sample setup as above has been used, with the exception that the carbon layer has been replaced with a layer of CTA stoichiometry and a density of 0.5 g/cm$^3$. Results {#sec:res} ======= In fig. \[fig:HIM-SEM\](A) a HIM image of gold nano–rods is presented. The image has been obtained from an area covered by several layers of nano
--- abstract: 'With a self-similar magnetohydrodynamic (MHD) model of an exploding progenitor star and an outgoing rebound shock and with the thermal bremsstrahlung as the major radiation mechanism in X-ray bands, we reproduce the early X-ray light curve observed for the recent event of XRO 080109/SN 2008D association. The X-ray light curve consists of a fast rise, as the shock travels into the “visible layer" in the stellar envelope, and a subsequent power-law decay, as the plasma cools in a self-similar evolution. The observed spectral softening is naturally expected in our rebound MHD shock scenario. We propose to attribute the “non-thermal spectrum" observed to be a superposition of different thermal spectra produced at different layers of the stellar envelope.' author: - 'Ren-Yu Hu' - 'Yu-Qing Lou' title: | Rebound Shock Breakouts of Exploding\ Massive Stars: A MHD Void Model --- [ address=[Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing 100084, China]{}, email=[hu-ry07@mails.tsinghua.edu.cn]{} ]{} [ address=[Physics Department and Tsinghua Center for Astrophysics (THCA), Tsinghua University, Beijing 100084, China]{}, email=[louyq@mail.tsinghua.edu.cn]{} ]{} Introduction ============ SN 2008D, the best-observed type Ibc supernova so far, was preceded by an X-Ray Outburst (XRO) captured by the SWIFT satellite on 2008 January 9, and this XRO is interpreted as a shock breakout of a Wolf-Rayet (WR) progenitor with a radius of $\sim 10^{11}$ cm [@Soderberg2008]. The isotropic X-ray energy is estimated to be $\sim 2\times10^{46}$ erg, and no collimation was detected, so the event is not regarded as a GRB. This XRO showed a rapid rise, peaking at $\sim 63$ s, and a decay modelled as exponential with an e-folding time of $\sim 129$ s [@Soderberg2008]. 
The follow-up optical and ultraviolet observations indicate a total supernova kinetic energy of $\sim 2-4\times10^{51}$ erg and a mass of SN ejecta of $\sim 3-5$ M$_{\odot}$ [@Soderberg2008]. Some authors estimate from a detailed spectral analysis that SN 2008D, originally a $\sim 30$ M$_{\odot}$ star, has a spherically symmetric explosion energy of $\sim 6\times10^{51}$ erg and an ejected mass of $\sim 7$ M$_{\odot}$ [@Mazzali]. The evolution of optical spectra of XRO-SN 2008D resembles that of XRO-SN 2006aj, whose progenitor is also believed to be a WR star [@Campana2006]. The production of $\gamma$-rays and X-rays by shock breakouts was proposed earlier [@Colgate; @Chevalier]. This XRO and the associated SN present an unprecedented case to be investigated in detail, especially regarding interpretations of the rise and decay times of the X-ray light curve. The claim of an exponential decay may be premature given a fairly large scatter, and it may have concealed valuable physical clues offered by this XRO. During the XRO, the observed spectroscopic softening still lacks a convincing explanation. Here, we advance a self-similar MHD rebound shock model in an attempt to reproduce the observed X-ray light curve. The next section contains an overall description of the self-similar MHD model and the procedure of analysis; in the third section, we compare our model results with data; and conclusions are summed up in the last section. A Self-Similar MHD Void Shock Model =================================== For a polytropic magnetofluid in quasi-spherical symmetry under self-gravity, the governing magnetohydrodynamic (MHD) equations include mass conservation, momentum conservation (the Euler equation), the magnetic induction equation, and an equation of specific entropy conservation along streamlines to approximate energetic processes. For this more general polytropic equation of state, we regard the polytropic index $\gamma$ as a parameter [@WangLou08]. 
These coupled nonlinear MHD partial differential equations (PDEs) can be reduced to nonlinear ordinary differential equations (ODEs) by introducing a self-similar transformation $r=k^{1/2}xt^n$, where $r$ is the radius, $t$ is the time and $k$ is a scale parameter relevant to the local sound speed, rendering the independent self-similar variable $x$ dimensionless. The corresponding transformation of the dependent MHD variables can be found in Refs. [@WangLou08; @LouHu]. The exponent $n$ is a key parameter that determines the dynamic behaviour of a polytropic fluid. For $n+\gamma=2$, the formulation reduces to that of a conventional polytropic gas in which the specific entropy remains constant everywhere [@SutoSilk1988; @Yahil1983; @LouWang06; @LouWang07; @WangLou07]. The special case of $n=1$ and $\gamma=1$ corresponds to the isothermal case [@BianLou2005; @YuLouBianWu06]. Such self-similar evolutions represent an important subclass of all possible evolutions. We also introduce a dimensionless magnetic parameter to represent the strength of a magnetic field, $h\equiv<B_t^2>/(16\pi^2G\rho^2r^2)$, where $<B_t^2>$ is the ensemble average of a random transverse magnetic field squared, $G$ is the gravity constant and $\rho$ is the mass density. Meanwhile, MHD shocks are necessary to connect different branches of self-similar solutions. The conservation laws impose constraints on physical variables across a MHD shock front. We can then derive downstream physical quantities (density, velocity, pressure and temperature) from the upstream physical quantities or vice versa. Self-similar solutions produce radial profiles of density, radial velocity, pressure and temperature at any time of evolution, and the detailed procedure of analysis can be found in Wang & Lou [@WangLou08]. It is also sensible to invoke the plasma cooling function and obtain radiation diagnostics from a magnetofluid at high temperatures $\sim 10^7-10^8$ K [@Sutherland1993]. 
Recently, we obtained a new class of self-similar “void" solutions within a certain radius $r^*$, referred to as the void boundary. In general, such a void solution describes an expanding fluid envelope with a central cavity, possibly associated with an outgoing shock [@LouCao]. The self-similar evolution implies that the central void expands as a power law in time, $r^*\propto t^n$. We have studied the detailed behaviours of void solutions under different parameters in a general polytropic MHD framework [@HL; @LouHu]. Here, we propose to utilize such void shock solutions to model the explosion of a massive progenitor star in the process of a rebound MHD shock breakout. The Bondi-Parker radius of a remnant compact object, if any is left at the center, is defined as $$r_{\rm BP}=\frac{GM_*}{2a^2}\ ,\label{BP}$$ where $M_*$ is the mass of the central object and $a$ is the sound speed at the inner void edge of the surrounding gas. Far beyond the radius $r_{\rm BP}$, the gravity of the central object becomes negligible compared to the thermal pressure. For supernovae, $M_*$ would be of the order of M$_{\odot}$ [@Mazzali]. At $\sim 1$ s after the core bounce, the temperature of the stellar envelope is of the order of $10^8$ K and the sound speed squared is $a^2\sim10^{17}$ cm$^2$ s$^{-2}$, so that $r_{\rm BP}\sim 10^{8}$ cm. Meanwhile, the void radius $r^*$ has expanded to larger than $10^8$ cm [@Janka]. Furthermore, the Bondi-Parker radius expands more slowly than the void boundary does [@LouHu]. Therefore, the central cavity assumption may be justifiable.

MHD Model and X-ray Light Curve
===============================

The self-similar MHD void shock model of a WR stellar envelope in explosion associated with a shock breakout and the corresponding X-ray light curve are shown in Figure \[Figure1\]. ![Our MHD void shock model for a shock breakout in a progenitor of SN 2008D (left) and the resulting X-ray light curve (right).
On the left, from top to bottom, the panels show the radial profiles of density, radial velocity, pressure, enclosed mass and temperature of the stellar envelope within the radial range from $10^8$ cm (void boundary) to $\sim10^{11}$ cm (outer boundary) at 1 s after the core collapse and rebound. The model is obtained with self-similar parameters $n=0.8$, $\gamma=1.2$ (conventional polytropic) and $h=0$ (non-magnetized fluid). On the right, we compare the X-ray light curve calculated from our MHD void shock model (red curve) with data from the X-Ray Telescope (XRT) on board the SWIFT satellite [@Soderberg2008] (solid circles, with error bars suppressed). X-ray fluxes are normalized to the peak flux. The X-ray light curve is shown as a function of time since the XRT trigger, denoted $t_{\rm obs}$. The
--- abstract: 'Network science has proved useful in analyzing the structure and dynamics of social networks in several areas. This paper aims at analyzing the relationships of characters in *Friends*, a famous sitcom. In particular, two important aspects are investigated. First, what is the structure of the communities (groups), and how do different methods for community detection perform? Second, not only the static structure of the graphs and causality relationships are investigated, but also temporal aspects. After all, this show was aired for ten years, and thus plots, roles, and friendship patterns among the characters seem to have changed. Also, this sitcom is frequently associated with distinguishing facts such as: all six characters are equally prominent; it has no dominant storyline; and friendship serves as a surrogate family. This paper uses tools from network theory to check whether these and other facts can be quantified and proved correct, especially considering the temporal aspect, i.e., what happens in the sitcom over time. The main findings regarding the centrality and temporal aspects are: patterns in graphs representing different time slices of the show change; overall, the degrees of the six friends are indeed nearly the same; however, in different situations (thus graphs), the magnitudes of degree centrality do change; betweenness centrality differs significantly for each character, thus some characters are better connectors than others; there is a large difference between the degrees of the six friends and those of the rest of the characters, which points to a centralized network; and there are strong indications that the six friends are part of a surrogate family. As for the presence of groups within the network, methods of different natures were investigated, aiming at detecting groups (communities) in networks representing different time slices as well as in the network of all episodes. Such methods were compared (pairwise and also using various metrics, including plausibility).
The multilevel method performs reasonably well in general. It also stands out that those methods do not agree very much with each other, resulting in groups that differ considerably from method to method.' author: - 'Ana L. C. Bazzan' title: | I will be there for you:\ six friends in a clique[^1] ---

Introduction {#introduction .unnumbered}
============

With the increasing penetration of streaming technology, TV shows and series are becoming more and more popular. What makes some shows so appealing? Besides the obvious items (plot, cast, cinematography, etc.), the structure of the network of characters underlying the plot (the social network of the show’s plot) can offer some hints too. The use of [network theory]{} – which studies complex interacting systems represented as graphs – is a good way to shed light on questions related to the social network underlying a TV show. For example, one study investigated who the most central characters in *Game of Thrones* are. This popular show was also the target of [@Jasonov2017], who computed the importance of characters and used it as features, or input to a machine learning algorithm, in order to predict how likely some characters are to die. [@Tan+2014] analyzed the character networks of Stargate and Star Trek and found that their structures are quite similar. These studies investigate particular issues related to these shows. However, in none of them were the temporal (not only causal) aspects of the shows deeply explored. Also, in many cases, the data employed to construct the networks was neither based on the entirety of the episodes nor manually collected, which means that some parsing or other automated strategy had to be used.
In the first case, the authors constructed a graph by including an edge between any two characters whose names appeared within 15 words of each other in the text of the third book (A Storm of Swords); [@Jasonov2017] collected data about available scenes in dialogue (subtitle) format on a fan website ([genious.com](genious.com)) and assumed that, within a scene, everyone was connected with everyone. In the present study, a broader range of issues (e.g., related to temporal patterns and community structure) of the situation comedy (sitcom) [*Friends*]{} is analyzed, spanning ten seasons. [*Friends*]{} is an American television sitcom created by David Crane and Marta Kauffman, which was aired on NBC from 1994 to 2004. [*Friends*]{} featured six main characters – Rachel Green (Jennifer Aniston), Monica Geller (Courteney Cox), Phoebe Buffay (Lisa Kudrow), Joey Tribbiani (Matt LeBlanc), Chandler Bing (Matthew Perry), and Ross Geller (David Schwimmer). The story unfolds at three main settings: a Manhattan coffeehouse (Central Perk), the apartment of Monica and Rachel, and the apartment of Joey and Chandler across the hall. According to [@Sternbergh2016], with the arrival of [*Friends*]{} on Netflix, the show is reaching a whole new generation of 20–30 year olds, and its popularity is on the rise. For the present study, data on each episode of [*Friends*]{} was manually collected based on the actual interactions of characters in each scene. An interaction happens when two characters talk (even if one talks and the other just listens), touch, or have eye contact. This means that, since not every character necessarily interacts with all others in a scene, each scene is not a complete graph connecting all characters in it. Thus, there are some differences between the way graphs are constructed in the studies mentioned above, including [@Jasonov2017], and in the present work. Data was collected by watching each of the 236 episodes[^2].
Pairwise interactions were stored in text files that were then processed using *igraph* for *python*[^3]. One can look at graphs for each episode, for all episodes together, or for any particular merge of episodes/situations (e.g., Thanksgiving episodes, or all first episodes of each season). The main aim is to check whether well-known facts about [*Friends*]{} – e.g., that all six characters are equally prominent – can be quantified and proved correct. Moreover, what can be said about such facts regarding different contexts or the passing of time? After all, [*Friends*]{} aired for ten years and things might have changed. Finally, how do known methods for community detection perform on this dataset? Are there similarities to other human social networks?

[*Friends*]{} as inspiration for academic studies {#friendsas-inspiration-for-academic-studies .unnumbered}
=================================================

It is only natural that the sitcom [*Friends*]{} has attracted the attention of researchers in the area of Arts and Communication, especially in the late 1990s and the 2000s, when the show was still being aired or had just ended. However, [*Friends*]{} has also been the subject of a myriad of interesting studies, ranging from Social Sciences and Linguistics to Math and Computer Science. Moreover, the list includes recent work as well, showing that [*Friends*]{} is still popular. Some of these are: L. [@Marshall2007]’s thesis examined representations of friendship, gender, race, and social class in [*Friends*]{}. P. [@Quaglio2009] compared the language of [*Friends*]{} to natural conversation, in particular comparing high-frequency linguistic features that characterize conversation to the language of [*Friends*]{}. T. [@Heyd2010] studied the construction “*you guys*” as an emerging quasi-pronoun for second-person plural address, based on dialogue transcriptions of [*Friends*]{}. C.-J.
[@Nan+2015] used a deep learning model for face recognition in [*Friends*]{}’s videos in order to distinguish the six main characters and establish the social network between them. [@Edwards+2018] compared different extraction methods for social networks in narratives, providing evidence that automated methods of data extraction are reliable for many (though not all) analyses.

What [*Friends*]{} is known for {#what-friendsis-known-for .unnumbered}
===============================

[*Friends*]{} is frequently associated with these facts (especially the first two):

- All six characters are equally prominent;
- [*Friends*]{} is a multistory sitcom with no dominant storyline;
- Monica likes to consider herself the hostess / mother hen;
- Friendship serves as a surrogate family;
- Ross and Rachel have an intermittent relationship;
- Chandler and Phoebe had originally been written as more secondary characters, to provide humor when needed;
- The writers originally planned a big love story between Joey and Monica.

In this article, these facts are investigated using tools of [network theory]{}. If the six characters in this sitcom are indeed equally prominent, one expects quantitative measures of their importance in the story to confirm this. If there is no dominant storyline, there should be no prominent character(s) in the episodes (apart from obvious exceptions). These (and other) characteristics of the social network of [*Friends*]{} are analyzed here, in a trans-disciplinary effort to establish connections between the Humanities and Mathematics. The main characteristics – e.g., multistory with no prominent character – as well as the reasons behind the popularity of [*Friends*]{} have been the subject of various studies, stemming both from academic circles and from daily newspapers, blogs, etc. Here are some quotations that corroborate the facts just mentioned about this sitcom: 1.
“This series has six major characters, three men and three women, who are generally given equal weight across the series.”: K. [@Thompson2003], page 56; 2. “Beyond its glamour, “Friends” is widely lauded as the first true “ensemble” show – a series with
--- abstract: 'Stars between two and three solar masses rotate rapidly on the main sequence, and their rotation rates in the core helium burning (secondary clump) phase can therefore be used to test models of angular momentum loss used for gyrochronology in a new regime. Because both their core and surface rotation rates can be measured, these stars can also be used to set strong constraints on angular momentum transport inside stars. We find that they are rotating more slowly than angular momentum conservation and rigid rotation would predict. Our results are insensitive to the degree of core-envelope coupling because of the small moment of inertia of the radiative core. We discuss two possible mechanisms for slowing down the surfaces of these stars: (1) substantial angular momentum loss, and (2) radial differential rotation in the surface convection zone. Modern angular momentum loss prescriptions used for solar-type stars predict secondary clump surface rotation rates in much better agreement with the data than prior variants used in the literature, and we argue that such enhanced loss is required to understand the combination of core and surface rotation rates. However, we find that the assumed radial differential rotation profile in convective regions has a strong impact on the predicted surface rotation rates, and that a combination of enhanced loss and radial differential rotation in the surface convection zone is also consistent with the data. We discuss future tests that can quantify the impact of both phenomena. Current data tentatively suggests that some combination of the two processes fits the data better than either one alone.' author: - 'Jamie Tayar, Marc H.
Pinsonneault' bibliography: - '/home/spitzer/tayar/Documents/Apogee/latex/RapidRottext2.bib' title: Testing Angular Momentum Transport and Wind Loss in Intermediate Mass Core Helium Burning Stars ---

Introduction
============

Real stars rotate, and rotation can have profound consequences for stellar structure and evolution. Despite this, rotation is frequently ignored in stellar models, or treated in a highly simplified fashion. The main culprit is the complex physics governing angular momentum evolution. Stellar evolution naturally generates strong internal shears, especially in evolved stars with rapidly contracting cores and expanding envelopes. Angular momentum can then be carried by convection-driven waves, Reynolds stresses from internal magnetic fields, and via large-scale circulation currents and weak turbulence driven by shears or instabilities. It is not a priori obvious which of these mechanisms is dominant. As a result, a wide range of internal rotation profiles could in principle exist; in turn, this permits a wide range of mixing rates and structural effects. Adding rotation therefore requires the consideration of a number of phenomena traditionally not included in stellar models. Empirical guidance is thus essential for progress, but historically the constraints on internal rotation have been sparse; in evolved stars, even surface rotation rates have been difficult to infer. With the advent of large time domain and spectroscopic surveys, however, the observational landscape has been radically transformed. There now exist hundreds of measurements of core rotation rates of evolved stars. Core rotation rates can be measured because the rotationally-split gravity modes propagating in the core can couple with surface pressure modes at similar frequency to form mixed modes which are visible on the surface but contain information on the core rotation [@Beck2012].
Measurements by @Mosser2012b suggested that core rotation periods for first-ascent giants are of order tens of days and core rotation periods for helium burning stars are of order hundreds of days. As stars expand into red giants, their surface rotation must slow down to conserve angular momentum. Historically, this has made measuring surface rotation rates difficult. However, as the number of evolved stars monitored photometrically and measured spectroscopically has increased, it has become clear that surface rotation measurements are possible in some of the more rapidly rotating giants. Specifically, surface rotation rates come from measurements of photometric modulation due to star spots [@Ceillier2017], velocity broadening of spectral lines that can be measured in high resolution spectra [@Massarotti2008; @Tayar2015], and Doppler-like splittings of the stellar surface pulsations [@Deheuvels2015]. In this paper, we constrain the degree of differential rotation and angular momentum loss in evolved stars. For this purpose, we chose to focus on intermediate mass, core helium burning stars, a sample which overlaps with the secondary clump identified by @Girardi1998. Isochrone fitting of binaries in this mass range indicates minimal mass loss in such stars [@Torres2015], consistent with the short timescale over which they cross the Hertzsprung gap. Main sequence rotation distributions have been measured for such stars [@ZorecRoyer2012], and because most of these intermediate mass stars do not undergo a helium flash, these rotational distributions can be smoothly forward modeled onto the secondary clump. The wide range of rotation rates on the main sequence is expected to produce relatively rapid rotation even in the core helium burning phase, producing seismically detectable core and envelope rotation as well as measurable spot modulation periods and velocity broadenings.
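The expected spin-down from expansion alone can be made concrete with a back-of-the-envelope sketch. Assuming rigid rotation, conserved total angular momentum, and a moment of inertia scaling as $I\propto MR^2$ at fixed mass (all simplifying assumptions, not the full models used in this paper), the surface angular velocity scales as $\Omega_2=\Omega_1(R_1/R_2)^2$:

```python
import math


def surface_omega_after_expansion(omega_ms, r_ms, r_rg):
    """Surface angular velocity after expansion, assuming rigid rotation,
    conserved angular momentum, and moment of inertia I proportional to M R^2."""
    return omega_ms * (r_ms / r_rg) ** 2


# Illustrative (assumed) numbers: a main-sequence rotation period of 2 d at
# R = 2 R_sun, expanding to R = 10 R_sun in the core helium burning phase.
P_ms = 2.0  # days (assumed)
omega_ms = 2.0 * math.pi / P_ms
omega_sc = surface_omega_after_expansion(omega_ms, r_ms=2.0, r_rg=10.0)
P_sc = 2.0 * math.pi / omega_sc
print(f"predicted secondary-clump surface period: {P_sc:.0f} d")  # 50 d
```

Measured surface periods longer than this simple prediction are what motivate the angular momentum loss and differential rotation scenarios examined below.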
Additionally, we find stars in this mass range particularly interesting because, while they are low enough in mass to form a substantial fraction of the [[*Kepler*]{}]{} sample, their main sequence evolution is more similar to that of higher mass stars. We therefore hope to use intermediate mass stars as a bridge to understanding the important processes affecting the rotation of massive stars, which can have substantial impacts on, for example, nucleosynthetic processes in such stars and the energy budget of supernova progenitors. Except for mass-dependent mass loss, the main sequence rotational evolution of all stars above about 1.3 M$_\sun$ [the Kraft break, @Kraft1967] is thought to be similar. These stars are born with a wide, somewhat mass-dependent range of rotation rates [@Gray1982; @Finkenzeller1985; @Alecian2013]. In stars without strong primordial magnetic fields or tidally interacting companions [Ap and Am stars respectively, @Hubrig2000; @Debernardi2000], the lack of a deep surface convection zone means that such stars do not lose substantial angular momentum to a magnetized wind on the main sequence, and their range of rotation rates persists to the end of the main sequence [@DurneyLatour1978]. The rotational evolution of such stars after they develop a surface convection zone on the post-main sequence is much less constrained. At that point, one must begin to consider not only the direct effects of structural evolution, but also the possibility of non-rigid rotation profiles. Structurally, while the slow evolution and long lifetime on the main sequence make the assumption that the whole star rotates as a solid body seem reasonable, rigid rotation during rapid post-main-sequence evolution is substantially less likely. One must therefore consider the possibility of decoupling between the shrinking core and the growing envelope, as well as the possibility of radial differential rotation in both the radiative zone [e.g.
@Deheuvels2015] and the convective envelope [e.g. @KissinThompson2015]. Because red giants have very deep surface convection zones, differential rotation in such regions can have a very strong impact on their expected surface rotation rates. As an illustration, @Peterson1983 detected rapid rotation in blue horizontal branch stars, and their main sequence precursors are very slow rotators; furthermore, there is strong mass loss on the upper red giant branch. @Pinsonneault1991 and @Sills2000 concluded that this combination required strong differential rotation with depth in the surface convection zones of luminous red giants, and probably differential rotation with depth in their radiative cores as well. The problem is further complicated by the feedback of the rotation profile on the stellar structure [@MaederMeynet2000]. In addition to varied rotation profiles, to understand rotational evolution one must also consider the effects of loss, which can be strongly mass dependent. On the giant branch, mass loss is usually parameterized by a scaling which includes dependencies on the star’s luminosity, gravity, and radius [@Reimers1975]. In low-mass main sequence stars, the effects of a magnetized wind are usually considered [@Kawaler1988] but an explicit parameterization of mass loss is rarely used. The @Kawaler1988 formulation predicts torques that are a weak function of stellar radius, a conclusion challenged by @ReinersMohanty2012 on the basis of how magnetic fields were scaled relative to the solar case. Solutions for magnetized stellar winds [@Matt2012], rather than general scalings, were then found to predict a much stronger dependence of the torque on stellar properties, especially radius. Wind laws using this general approach are now being used for gyrochronology and angular momentum evolution models of low mass stars [@vanSadersPinsonneault2013; @GalletBouvier2013; @LanzafameSpada2015; @Matt2015]. 
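The giant-branch scaling referenced above is commonly written in the Reimers form $\dot M = 4\times10^{-13}\,\eta\,(L/L_\odot)(R/R_\odot)/(M/M_\odot)$ M$_\odot$ yr$^{-1}$, where the dependence on $LR/M$ is equivalent to a dependence on luminosity, gravity, and radius. The sketch below evaluates it for illustrative secondary-clump parameters; the explicit constant, the efficiency factor $\eta$, and the stellar values are standard-textbook or assumed numbers, not fits from this paper.

```python
def reimers_mdot(L, R, M, eta=0.5):
    """Reimers-type mass-loss rate in M_sun/yr, with L, R, M in solar units.
    eta is an order-unity efficiency factor (assumed here to be 0.5)."""
    return 4.0e-13 * eta * L * R / M


# Illustrative (assumed) secondary-clump values:
# L = 60 L_sun, R = 10 R_sun, M = 2.5 M_sun.
mdot = reimers_mdot(L=60.0, R=10.0, M=2.5)
print(f"dM/dt ~ {mdot:.1e} M_sun/yr")  # a few 1e-11: minimal mass loss
```

A rate of a few $10^{-11}$ M$_\odot$ yr$^{-1}$ over the short core helium burning lifetime is consistent with the minimal mass loss inferred from binary isochrone fitting mentioned earlier.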
It is, however, difficult to distinguish between such models directly using only low mass stars [see @Somers2017 for a recent example]. Since these same angular momentum loss laws form the basis for gyrochronology, changing their form could alter the inferred ages for many stars, especially those most different from the sun. In this paper, we collect the available data on the surface rotation rates in the secondary clump from a variety of methods (Section \[sec:methods\]). We also construct a theoretical framework for interpreting that data in the context of structural evolution, angular momentum loss, and radial differential rotation (Section \[sec:methods\]), discuss the predicted rotation trends with mass and radius (Section \[sec:trends\]) and compare the predicted and observed rotation distributions for each of our model cases (Section \[sec:Distributions\]). While our work sets some bounds on core rotation, which we discuss, we will demonstrate that the core coupling has only a minor impact on the predicted surface rates. We therefore postpone detailed discussion on the measured and predicted rates of core rotation as a function of mass and surface gravity to